Also see the Business Case, which covers some architectural issues.
How ALF Works
This document provides a brief, non-technical overview of the functions of an ALF system. Why someone would employ ALF, what problems ALF solves, and the specific technical details of ALF technologies and architectures are covered elsewhere. The intended reader is either a non-technician who wishes to know more or a serious ALF developer or user seeking an introductory overview.
The ALF project consists of four main components:
Federation Mechanism: a means of applying a standard ontology to heterogeneous pools of information so that diverse information can be imported into the system by mixed means. In other words, ALF separates how information gets into the system from what ALF does once the information is there. The Federation Mechanism does a few special things to prepare information for systems like ALF, but the novelty of this front end lies not in the implementation technology but in the notions and metainformation captured. (Metainformation is information about information, often used in indices; for instance, a library's card catalog consists of metainformation, while the books hold the actual information.) The Federation Mechanism is expected to have general utility for those who use enterprise analysis and control systems other than ALF. In particular, its first element converts processes and related information to XML in accordance with several emerging standards. XML is the new underlying standard for data collaboration on the internet, so that subcomponent has still broader applicability.
Lattice Mechanism: a means of creating and maintaining a Regular Periodic Concept Lattice environment for this information. ALF uses a specific, novel way of relating bits of information to each other, which has several advantages.
Agent Mechanism: a means of forming agents from concepts and concept aggregations to allow emergent behavior. Emergent behavior means that many small parts of the system bump up against each other and interact; by acting selfishly and adapting, they evolve unexpected solutions. ALF's novelties in this mechanism involve the agency of soft information (described below) and a new notion of higher-level agents that emerge from lower ones. Both notions make emergent systems behave much more like real life.
Visualization Mechanism: a means of formatting and presenting information in a coherent visualization environment. This is a non-trivial need given ALF's primary target domain: complex systems. Humans need to grasp the big picture in an intuitive way, in a world that is completely reformatted depending on one's point of view. Geometrical navigation and visual cues are used in concert with some metaphoric techniques.
ALF approaches these four mechanisms in a unified, complementary way. Each is described generally below, with more detail given for the initial (federation) component.
The Federation Mechanism
In simple terms, this component gets information into the ALF engines so that great things can be accomplished. But the problem is complex: one wants to work with open standards where they exist; one wants to accommodate interaction over the web; and one must capture some unique attributes to satisfy ALF's requirements. The process should be fast and cheap, but it has to accommodate at least three cases:
The case where information is permanently translated into normal ALF terms and is expected to reside within an ALF repository for the duration.
The case where information exists in a store that has some utility in that form, but one wishes to mirror all or part of it in ALF. Here, continuous translation might work, but a more lightweight wrapper is usually called for. A wrapper is information added to the original information, usually leaving the original unchanged. Clever wrappers are in effect local translators; their value is that the contents inside might change and need not be translated until needed by some component of ALF. Wrappers will be pretty important to ALF, because a specific design goal of ALF is that big-picture, system-wide snapshots should be cheaply computable without having to actually look at all the data of the system.
The case where information exists somewhere in a peculiar format, is used constantly, and changes frequently, and there are no non-ALF requirements to centralize it. Using wrappers here is costly because you have to continually keep two copies, one for the real user and one for ALF to wrap, and the ALF copy will always lag behind. In such cases, you really want to keep just one copy, and keep it where it is needed. If ALF needs that information, it simply swoops down and copies what it needs. Keep in mind that ALF could be federating with thousands of different types of information. This is what is meant by federation, though, as in many cases in the marketplace, vendors fudge and call one of the cases above federation if it deals with diverse information. ALF requires federation in this difficult, dynamic sense.
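To make the wrapper and federation cases concrete, here is a minimal sketch; the class, field, and translator names are all invented for illustration and are not part of any ALF design.

```python
# Illustrative sketch only: names and formats here are hypothetical.

class Wrapper:
    """Wraps a local record untouched; translates to ALF terms only on demand."""

    def __init__(self, source_record, translate):
        self.source_record = source_record   # original data, left unchanged
        self.translate = translate           # translator applied lazily
        self._cached = None

    def as_alf(self):
        # Translation is deferred until some ALF component asks for it,
        # which keeps system-wide snapshots cheap to compute.
        if self._cached is None:
            self._cached = self.translate(self.source_record)
        return self._cached


def federate(remote_store, key, translate):
    """Federation case: no second copy is kept. ALF copies what it needs,
    when it needs it, straight from the store where the data lives."""
    return translate(remote_store[key])


# Usage: a plant's fuel level, kept in its own peculiar local format.
store = {"fuel": {"lvl_gal": 1200}}
to_alf = lambda rec: {"fuel-level": rec["lvl_gal"], "unit": "gallon"}

wrapped = Wrapper(store["fuel"], to_alf)
print(wrapped.as_alf())              # translated only when asked
print(federate(store, "fuel", to_alf))
```

The difference is where the authoritative copy lives: the wrapper keeps a shadow next to the original, while federation reaches into the live store on demand.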
These are listed in ascending order of difficulty, with federation the most difficult. In all cases, the sources could be numerous, highly dynamic, and exist in a number of formats that employ radically different worldviews. But the problem is even worse than that! ALF intends to deal with relevant information wherever it is, even if it is not well modeled. In fact, three distinct cases are expected:
The case where the information is well-modeled process information, and ALF can make certain assumptions about its correctness. The fact that process models are involved is important. ALF intends to be able to understand an enterprise, evaluate certain futures by simulation, and actually control the enterprise. This last capability increases the difficulty substantially, because ALF has to have detailed notions of state, and state is captured in process models. The bottom line is that process models are the fullest model ALF sees.
The case where the information is well-modeled but concerns non-process information, or at least seems to. A crude example is information about how much fuel is in a plant's reserve. The fuel gets consumed by processes within the plant and replenished as a result of an informal monitoring process. No one really cares to know the precise linkage between the states of a specific milling operation and the fuel level except in a general sense, so ALF has to add some state markers. Fortunately, ALF leverages a special mechanism (from situation theory) to do this without asking a lot of bothersome and expensive questions. The point is that this type of information can be translated, wrapped, or indexed for federation only by the addition of some supplementary information concerning state.
The case where information is unstructured, poorly structured, or structured with so many mistakes that the structure is useless. In this case, one needs a lightweight modeling methodology. It is lightweight because the only reason the information is being structured is for ALF's use; otherwise it would already be in one of the forms noted above. This lightweight structure captures just the information ALF needs. An intuitive notion is employed, one using speech acts. (Speech acts are a simple way of looking at processes as a set of types of communications, such as questions, replies, commands, and qualifications.)
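A lightweight speech-act structure might look like the following sketch. The act types are the standard ones from the speech-act literature; everything else (function names, fields, the sample log) is invented for illustration.

```python
# Hedged sketch: how ALF actually names its speech acts is not specified here.

SPEECH_ACTS = {"question", "reply", "command", "qualification", "assertion"}

def tag(act, speaker, content):
    """Lightweight modeling: record only the act type, who spoke, and what
    was said. No deeper structure is imposed on the information."""
    if act not in SPEECH_ACTS:
        raise ValueError(f"unknown speech act: {act}")
    return {"act": act, "speaker": speaker, "content": content}

# A scrap of unstructured shop-floor traffic, minimally structured:
log = [
    tag("question", "scheduler", "Can line 3 take the rush order?"),
    tag("reply", "line-3", "Yes, after 14:00."),
    tag("command", "scheduler", "Start the rush order at 14:00."),
]
print([entry["act"] for entry in log])   # ['question', 'reply', 'command']
```

Even this thin layer is enough to treat an informal exchange as a process: a question opens it, a reply constrains it, a command closes it.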
The Federation Mechanism consists of three discrete steps. Each step can be seen as a black box with specific inputs and results. The Federation Mechanism is divided into these steps because:
Doing so makes it more feasible to leverage existing tools and to ally with parallel projects.
Producing two intermediate representations provides for better auditability. One effect ALF provides is a clever mix of deterministic and nondeterministic processes. (Deterministic processes provide as much visibility into the causes of an effect as one wishes to have. Nondeterministic processes either produce results that cannot be predicted, or produce predicted results by means that cannot be predicted, depending on how one engineers the system.) Many ALF users will want to produce an optimized plan or configuration using multiple nondeterministic simulations, then audit the process steps, perhaps along multiple paths, so that the steps can be managed in a traditional way, by conventionally resourcing and monitoring them. ALF calls this apparently deterministic non-determinism; key support for it must be built in so that there are discrete visibility points, allowing processes deep within ALF to be intuitively and unambiguously traced back to information outside of ALF. The two intermediate representations provide for this.
Each of the three steps employs a different theoretical base, and probably a different set of implementation technologies (including languages).
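The idea of apparently deterministic non-determinism can be sketched as a nondeterministic search whose random seed is recorded, so that any chosen result can be replayed and audited step by step. The toy optimization below is entirely invented; it only illustrates the replay principle.

```python
# Sketch: nondeterministic exploration made auditable by recording the seed.
import random

def simulate(seed, steps=50):
    rng = random.Random(seed)        # all randomness flows from the seed
    x, trace = 0.0, []
    for _ in range(steps):
        x += rng.uniform(-1, 1)      # nondeterministic exploration
        trace.append(x)              # discrete visibility points
    return x, trace

# Run many nondeterministic simulations; keep the best result and its seed.
runs = [(simulate(seed)[0], seed) for seed in range(20)]
best_value, best_seed = max(runs)

# Replaying the recorded seed reproduces the exact path, so the winning run
# can be traced and managed as if it had been deterministic all along.
value_again, audit_trace = simulate(best_seed)
print(best_value == value_again)   # True
```

The nondeterminism does the discovering; the recorded seed and trace make the discovery accountable afterwards.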
Many enterprises and vendor-supported systems already have tools that perform similar functions for their respective frameworks. The initial vision of the ALF project is to produce production-quality code for the Agent and Lattice Mechanisms. For the Federation Mechanism, ALF will produce a detailed specification and example code so that ALF users can adapt their own tools and products. Moreover, it is hoped that detailed peer review of such an open specification will place the desired emphasis on thoroughly characterizing and managing the cost of the Federation Mechanism, which is expected to consume the majority of the expense in a practical implementation. Also, since most interesting enterprises will vary in their federation requirements and model types, the specification should be broad enough to allow reasonable estimates of implementation difficulty. The subdivisions of the Federation Mechanism should facilitate this.
The three subdivisions of the Federation Mechanism are:
The Normalizer, which has the task of linking to the various forms of information noted above, from various sources and in various formats. It models, translates, and/or wraps where needed. Presumably much of the information will be federated, meaning it will continue to reside and be used locally but be tapped as needed by ALF. The result of this step is an XML (eXtensible Markup Language) representation of the information, plus relevant metainformation for indexing. XML is the prevailing standard for the publishing and exchange of such information over the internet. The basic function of this step is to make all the information appear as if it had been modeled using the same methods, and to ensure that the information ALF needs is included in that virtual representation.
The Situation Contextualizer, which has the task of providing big-picture information as additional metainformation so that every process and fact is situated in the enterprise context. The result of this step is a standard form of expression called a symbolic expression, or s-expression, a long-standing form for representing logical statements in a computable format.
The Function Constructor, which has the final task of taking those s-expressions and turning them into elements of programming code that are functions, can have agency in the ALF sense, and can readily fit into the efficient representation scheme that ALF uses.
(Functions are sets of program statements that define a computerized process taking certain values as inputs and returning certain outputs. This is much like any other type of programming but has certain constraints that give functions clean mathematical properties, which ALF -- and many other so-called functional systems -- can exploit.)
(Having agency means that some of these functions can act like cells or bacteria: small self-contained packets of behavior that interact with each other.)
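Under stated assumptions (element names, predicate names, and the tiny translations below are all invented; the real ALF representations are not given in this document), the three steps might chain together like this:

```python
# Sketch of the three-step pipeline: all tags, predicates, and fields are
# hypothetical, chosen only to show how the stages hand off to each other.
import xml.etree.ElementTree as ET

# Step 1 -- Normalizer: local record -> XML plus metainformation.
def normalize(record, source):
    proc = ET.Element("process", attrib={"source": source})
    for key, value in record.items():
        ET.SubElement(proc, key).text = str(value)
    return proc

# Step 2 -- Situation Contextualizer: XML -> situated s-expression.
def contextualize(proc, situation):
    parts = " ".join(f"({child.tag} {child.text})" for child in proc)
    return f"(situated {situation} (process {parts}))"

# Step 3 -- Function Constructor: s-expression -> a function with agency,
# i.e. a self-contained callable that carries its fact and can interact.
def construct(sexpr):
    def agent(query):
        return sexpr if query in sexpr else None
    return agent

proc = normalize({"name": "milling", "state": "active"}, source="plant-db")
fact = contextualize(proc, "plant-7")
agent = construct(fact)
print(fact)
print(agent("milling") is not None)
```

The two intermediate representations (the XML after step 1, the s-expression after step 2) are exactly the discrete visibility points the auditability argument above calls for.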
Challenges of the Normalizer: The Normalizer has the hardest job in terms of the sheer difficulty of what's required. Comparatively, the other elements do their work less by brute force and more by sophistication. Fortunately, the Normalizer can rely on a vast array of accumulated knowledge and existing tools that address just this problem. What's different about ALF is the complexity of the information it needs, because ALF does things no one else can, which is why it is attractive. That's the bad news. The good news is that ALF doesn't need as much information as the systems these tools were built to support: ALF doesn't need to know everything about a system to say something useful about the whole system. That's the point.
The normal task for these tools is to take information represented in one format and translate it into another format that another tool uses. Understanding this problem, and being able to actually do it, is one of the few major accomplishments of what used to be called the artificial intelligence community. It involves the notion of an ontology, a sort of structured knowledge base about the elements each representation uses. A simple example: one system may have a concept it calls "time" and another a concept it calls "date". The ontology will have a formal description of time and date that encompasses them both (if they differ), so that when one system says "time", the other system knows what it means.
That is a simple example, but things quickly become complex, as in the case of "in". When one system says "Ted is in the Boy Scouts", another system needs to know the difference between that and "Ted is in the house", "Ted is in trouble", or "Ted's clothes are in".
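A miniature ontology fragment might disambiguate "in" along these lines; the relation names are illustrative only, not taken from any actual KIF ontology.

```python
# Toy disambiguation of the surface word "in" into distinct formal relations.
# All relation and kind names here are invented for illustration.

IN_SENSES = {
    "organization": "member-of",          # Ted is in the Boy Scouts
    "place":        "located-in",         # Ted is in the house
    "condition":    "in-state",           # Ted is in trouble
    "fashion":      "currently-popular",  # Ted's clothes are in
}

def formalize(subject, obj, obj_kind):
    """Map the surface word 'in' to the formal relation the ontology uses,
    based on what kind of thing the object is."""
    relation = IN_SENSES[obj_kind]
    return (relation, subject, obj)

print(formalize("Ted", "Boy Scouts", "organization"))
# ('member-of', 'Ted', 'Boy Scouts')
```

The work of a real ontology is in deciding which sense applies; the point here is only that the formal representation must keep the senses apart even though English does not.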
There are some good tools for specifying ontologies. ALF uses the most mature ontology language, a slightly extended KIF (Knowledge Interchange Format). Many KIF and KIF-like ontologies already exist, and a few prominent XML-friendly ontology projects are underway; there is some overlap, and most of these are targeted at business-to-business collaboration. ALF will coordinate where appropriate; however, very little of this work is useful here. What ALF needs is a high level of formalism applied to the problems of state.
Representing state is a well-known, complex bugaboo. Some issues of state concern whether something can happen, whether it is happening, and whether it has happened, and in all cases what the transformative effects of its having happened are. How things are created and consumed is tied up in this, which means that what something is, is a moving target. There are many complex problems here, problems which all the highly public ontology efforts avoid by ignoring them.
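These state questions (can it happen, what are the transformative effects of its having happened) can be rendered as a toy precondition-and-effect model; all activity names and quantities below are invented.

```python
# Toy precondition/effect model of state. Invented names and numbers; real
# process ontologies treat these questions with far more formality.

def can_happen(activity, world):
    """Whether the world currently satisfies the activity's preconditions."""
    return all(world.get(p, 0) >= n for p, n in activity["consumes"].items())

def perform(activity, world):
    """Occurrence transforms the world: inputs are consumed, outputs created,
    so 'what something is' shifts as activities happen."""
    assert can_happen(activity, world), "precondition violated"
    for p, n in activity["consumes"].items():
        world[p] -= n
    for p, n in activity["produces"].items():
        world[p] = world.get(p, 0) + n
    return world

milling = {"consumes": {"fuel": 5, "blank": 1}, "produces": {"part": 1}}
world = {"fuel": 20, "blank": 3}
perform(milling, world)
print(world)   # {'fuel': 15, 'blank': 2, 'part': 1}
```

Even this toy shows the moving target: after one occurrence, the answer to "can milling happen?" depends on a world the previous occurrence already changed.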
One ontology project that hasn't ignored them is the Process Specification Language (PSL), a project led by the National Institute of Standards and Technology to build a standard ontology for control of processes on the factory floor. This is a domain which simply cannot avoid the problems of state, and it is being handled with scrupulous formality. PSL is designed in such a way that it is easily extended.
ALF's product for the Normalizer will consist of an enterprise ontology that builds on and extends PSL, plus example code supporting basic functions of the ontology. This code will probably all be in Lisp, the standard programming language for these issues, and the example code will probably leverage a large public Lisp project called Ontolingua, an environment for doing just this type of work.
Note that this ontology will not be directly applicable to all users of the ALF lattice and agent engines; it speaks only to the Virtual Enterprise, Fluid Supply Chain, and E-commerce communities. Associated with this ontology will be a set of speech-act "performatives" to allow cogent, lightweight modeling of unstructured and insufficiently structured information.
Challenges of the Situation Contextualizer:
Challenges of the Function Constructor: