
Architectural Support for the Advanced Virtual Enterprise

(Draft)

Abstract

Requirements for enterprise integration infrastructure have matured in the past decade and now form the core of an ambitious vision of fluid enterprises. We describe the new agenda, the key technical challenges, and the concurrent innovations required in business practice. The leading technical approaches for this agenda are briefly reviewed, and some observations on relevant market forces are made.

Keywords

Enterprise Integration; Virtual Enterprise; Self-Federation; Soft Modeling; Agent Systems

The General Business Problem

Several studies have identified the advanced virtual enterprise (AVE) as a desirable future business model for some high-value sectors. The AVE is characterized by highly dynamic configuration, changing partners and roles, and products and processes that continue to evolve well after start-up, as well as by cheap, opportunistic formation, dissolution and transition to other forms. A typical AVE might be characterized as the best configuration of smaller players quickly aggregating to address an opportunity. Diversity of business management styles within the synthetic enterprise is considered a competitive advantage, so strongly heterogeneous information infrastructures must be readily federated.

Many of those same studies indicate that a simple standards-based approach won’t work, because of the clockspeed of business change compared to that of standards development, mismatches with market realities, and a forced homogeneity where diversity is desired. This includes standard processes, standard modeling representation and control methods, and even, to a large extent, standard protocols in various stages of collaboration. Certainly, there are virtual enterprises which do benefit from conformance to standards, especially when the partners are controllable under an influential prime contractor. But the AVE described here looks for new competitive advantage, and if a partner devises business methods, operational processes or technical infrastructure that provide a competitive advantage over “one-size-fits-all,” then that advantage should be leveraged.

The AVE might be an aggregation that is facilitated by special-purpose agents, or it may be more like a Fluid Supply Chain (FSC), facilitated by a lead partner. In both cases, a key notion is the transition of complex management tasks to the “invisible hand” of market forces. Of course, this trend is already a hundred years old as industry has moved from highly vertical, consolidated enterprises to supply chains. The benefits are clear: instead of relying on good management decisions to manage certain functions, these are handled by competitive mechanisms. Each partner gets rewarded for satisfying the upstream customer. The extreme example of the centralized model is the Soviet economy, and the trend seems to be to move as much to the more effective market forces as economic “gravity” allows.

Concerning those limits: “outsourcing” only works in situations where the product is the token of exchange, with clear associated metrics: quantities, quality, timeliness and cost. Current infrastructures have some real problems managing and coordinating processes in the distributed, multigoaled enterprise, and this is now a firm barrier to handling greater complexity and dynamism through reliance on market forces. There are currently some intrusive methods, such as mandating quality standards for processes and the demand by prime contractors for single-vendor information infrastructure. But these can scarcely “see” and optimize processes deep within partners. The result is that the barrier to further collaborative optimization remains firm, and the enterprise incurs a greater burden in supply chain “management.” Managing the supply chain often means “controlling” suppliers, a philosophy that flies in the face of the greater benefits of decentralization.

Those benefits of handling dynamism and complexity by market forces are well known. The barriers are as well -- these are a matter of federating information infrastructures to allow distributed visibility and optimization of processes in an enterprise-wide system context. Additionally, business infrastructures need to be extended to accommodate metrics that dynamically evaluate the fractional value of a process or a process improvement to the enterprise and support a consonant risk/reward incentive strategy.

What one expects to result is an integrated business/information infrastructure where:

• on identifying a potential market opportunity, some critical mass of partners can perform interactive simulations to evaluate the opportunity within the context of many different partner configurations and process recipes. Many simulations will productively use incomplete information. And most simulations will be built not from models created just for the simulation, but from the actual processes used to “do the work.”

• many opportunities that formerly were not feasible to tentatively address can now be competitively explored because of the low costs of assembling (and dissolving) the virtual enterprise or supply chain.

• most opportunities will benefit from -- or even ultimately be made feasible by -- the enterprise constantly adjusting to incorporate new insights and improvements. Every partner will be incentivized to seek enterprisewide improvements, including in situations that involve many partners and in situations where a partner’s role is diminished or eliminated. Discovery and implementation of increased value will be rewarded. Actively seeking optimizing changes, and collaborating in similar efforts by others, will also be rewarded even if no substantial improvement is found.

• a feature-based value metric permeates the enterprise, subsuming customer values, product features, value-producing activities in the sense of “activity based costing” processes, and financial metrics of the type collected under “economic value added” methods.

The Business and Technical Problems

Such a vision of Advanced Virtual Enterprises including Fluid Supply Chains is within reach. On the business side, needs are:

• devising a set of “value features” derived from elements that produce value at the customer level. These features would form the basis of an enterprisewide set of metrics where every activity is measured against the value it produces for the enterprise as externally seen. They would subsume (but not necessarily replace) existing features as described above and complement cost metrics such as an enlightened activity-based costing regime.

• sharpening the enterprisewide awareness and appreciation of activity-based value metrics derived from the value features. These will be used to evaluate shares in enterprise formation, recognize and reward innovation, and support a bid-arbitration mechanism for trust.

• establishing a feasible trust arbitrator to support the acceptance of market-driven risk-reward activity based on the “fractional value metrics” resulting from the above.

These adjustments in management infrastructure do not appear onerous. They have been readily instituted in experiments and in any case mirror expectations raised years ago by activity based costing zealots. The real challenges are in the information infrastructure.

On the technical side:

There are four primary areas of technical challenge:

• Federation mechanisms

• Auditable agent-based simulation and control capabilities

• Mechanisms to accommodate implementation realities: multilevel control, soft modeling and complexity management

• Standard methods to interface existing systems, new “machine-friendly” interfaces and especially new “human-friendly” visualization techniques.

Each of these is examined in turn.

Federation Mechanisms

The notion of a federated enterprise evolved from early architectural considerations of enterprise integration. But market drivers in the information infrastructure marketplace deviate from delivering best customer value; the reasons are explored later in this paper. The fact is that enterprise integration solutions evolved toward designs that built centralized data stores, devised and mandated enterprisewide registration schemes, and forced homogeneous solutions. Such systems, especially in the Enterprise Resource Planning (ERP) space, were successfully sold and implemented because their slight advantages provided some competitive edge; no alternative existed. Similar solutions appeared in versions of enterprise integration coming from other enterprise viewpoints.

But such solutions are incredibly costly as legacy systems are converted or custom-encapsulated. They are famously inflexible. They erase competitive advantage in unique business and operational practice because the idea of shared development forces a one-size-fits-all notion of process. And because partners and supply chains must be pre-engineered to fit the system, fluidity vanishes. Worse, some central stores compromise important processes. Product Data Management (PDM) solutions, for instance, often force engineers to use less capable tools than they would prefer in order to integrate with the central store.

The “Holy Grail” of enterprise integration has been the federated environment. In such an ideal situation:

• The actual models, data, processes, metrics and authorities of the enterprise would be decentralized, and controlled at the (presumably low) level that is optimum for the creation of value.

• The process owner on up to any level would have the freedom to implement any tool, any modeling method, any storage index, any ontology that is apt for the best execution of the tasks in that domain. In other words, the systems would be encouraged to be highly heterogeneous as a matter of competitive strategy.

• The system must be viewable and analyzable from many different perspectives, reflecting the simple realities of life in the enterprise. Some of these are a matter of different metrics: the financial metrics of the CFO compared to the performance metrics of the Operations Manager (units, quality, timeliness). But a vexing behavior of the enterprise is its aggregation into layers. Superficially, these are hierarchical: enterprise managers “control” companies that “control” plants that “control” production lines that “control” processes that “control” individual assets and people. But in fact the bottom of this tree is where the real value and innovation come from. And as value aggregates into higher levels, those levels take on a selfish individuality of their own, together with the somewhat distinct behavior of that “world.” (For instance: the laws of collaboration between corporations bear scant relationship to how humans collaborate in an engineering department.) The desired environment would recognize these levels and their aggregating behavior as first-class entities.

• Within reason, every process owner needs to see how her process (with others) affects the value chain through to the customer. That process owner should have “reach” in the ability to explore and effect change to increase that value, both as complement to and in collaboration with conventional top-down optimization.

• Above all, the environment must be inexpensive to implement and support, readily federate legacy systems with minimal disruption, and be “self-annealing,” meaning that the removal of any function by choice or misadventure doesn’t bring the system tumbling down.

• The federation mechanism and subsequent interaction must be web-capable, and as vendor independent as possible. That is a clear reality.

No comprehensive solution to federation exists of course. Market forces just don’t work that way, and at least in the United States, government does not consider intrusion in this market appropriate. But there are some efforts which have addressed the general problem.

In the artificial intelligence community, this general problem has been a bugaboo for decades. Knowledge-based systems tend to be highly domain specific in their solutions, and the variance in knowledge representation systems is profound. A “knowledge-sharing” research initiative focused on the notion of shared ontologies and ontology descriptions. An ontology is a formal description of the semantics of a representation system: not only what a specific term means, but how it behaves in the world described.

The idea is that one can allow many different knowledge representation systems to collaborate on a problem, keeping their native representation and avoiding maintaining a translated central universal representation. In this scenario, point-to-point collaboration could occur by one system accessing the knowledge store of another, plus the ontology which describes the nature of the store. That way, the remote system can translate individual elements as needed.

Translation is not avoided, but only occurs on discrete elements which are “live,” so their latest state is used. A special language and set of principles were developed and are now widely employed. The key elements are KIF (Knowledge Interchange Format) and Ontolingua. Early in the game, process knowledge was recognized as fundamentally different because the effect of a process can change a relatively global state. This is a tricky ontological problem. Process Interchange Format (PIF) was developed, but with compromised effectiveness. This was replaced by the Process Specification Language (PSL), a proposed international standard led by the U. S. National Institute of Standards and Technology. PSL is strictly formal, and is structured in a manner to allow modular flexibility. PSL has a project to develop an XML expression.

PSL has a focused mission: it is targeted as an “interlingua,” a means by which one collection of process models can be translated to another system. PSL concerns itself with “shop floor” type processes, not softer business and workflow processes. Nonetheless, PSL offers a promising starting point for the type of federation outlined above:

• While PSL is targeted as an interlingua, the very same ontology can be used as a reference ontology for federation of knowledge. In this case, processes can remain in their native environment and ontologically registered in a central index. Where it makes sense, they can be centralized, or copied.

• PSL focuses only on shop floor operations, and eschews other processes in the enterprise. Indeed, there are independent ontology efforts underway for business and workflow processes. However, shop floor processes are the most complex in the enterprise owing to certain notions (consumable resources, global state, transformations), and other ontologies should be compliant with PSL in a deep sense.

• PSL is open. The development process is open. The result is in process to become an international standard. There are no hidden competitive agendas.

• PSL has an active project to develop an XML expression, fully web-capable.

Clearly, PSL provides a rational starting point for basic federation capabilities of enterprisewide processes. Open issues concern the manner in which basic value-creating processes (on the shop floor) relate to “higher-level” processes, and how “soft” facts and dynamics are accommodated. These are addressed below.
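To make the federation idea concrete, the following is a minimal sketch in Haskell (chosen only because the language reappears later in this paper). The vocabularies, registry names and the translate helper are invented for illustration and are not part of PSL; the point is that each partner keeps its native process terms, registers them against a shared reference ontology, and translation happens element by element, on demand, rather than through a converted central store.

    import qualified Data.Map as M

    -- Each partner registers its native vocabulary against a shared reference
    -- ontology (PSL playing that role); only the index is shared.
    type NativeTerm = String
    type RefConcept = String

    ajaxRegistry, athenaRegistry :: M.Map NativeTerm RefConcept
    ajaxRegistry   = M.fromList [("spray-coat", "psl:painting"), ("cure",    "psl:heat-treat")]
    athenaRegistry = M.fromList [("lackieren",  "psl:painting"), ("haerten", "psl:heat-treat")]

    -- Translate one partner's term into another's vocabulary via the reference
    -- concept, only when the "live" element is actually needed.
    translate :: M.Map NativeTerm RefConcept -> M.Map NativeTerm RefConcept
              -> NativeTerm -> Maybe NativeTerm
    translate from to term = do
      ref <- M.lookup term from
      lookup ref [(c, t) | (t, c) <- M.toList to]

    main :: IO ()
    main = print (translate ajaxRegistry athenaRegistry "spray-coat")  -- Just "lackieren"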

Auditable Agent-based Simulation and Control Capabilities

The AVE and FSC are characterized by highly fluid combinations of products and partners, and all sorts of different mappings of process to product value creation. These systems tend to be nonlinear, and because they often define new businesses, one cannot overly rely on extrapolating past trends. Multiple iterative simulation is the best strategy in these situations, but it is rarely used because the entire system must be well modeled in order for any meaningful system behavior to emerge. This is an incredibly costly undertaking, never done in the real world.

In the real world, one is more worried about getting workable code integrated in some way to actually manage the work. But what if one were able to do this quickly, using the approach to federation outlined above? And what if the same code that supports operational processes could be used in a sort of “virtual business” environment, perhaps with some real managers in the loop, to run through many different scenarios?

This would be a “war gaming” environment, where lots and lots of combinations of partners, processes and business strategies could be evaluated. Many market niches and product ideas could be evaluated which otherwise would have to be abandoned. Then, once a business case is made and a feasible, satisfactorily robust strategy found, one could then just shift the infrastructure from simulation to control. The beauty of this is in addressing a common problem: many of the business cases involved will not only require agility in the identification of opportunities, resources and strategies as just outlined. They require the constant ability to adapt throughout operation.

That’s because often you learn as you go, especially when entering areas your competition is afraid to go because it is dark territory. You need to be constantly reevaluating your strategy as you learn more, always adapting, perhaps changing the partner mix, perhaps even cheaply ending the adventure. This means that you are always running parallel simulations beside your control infrastructure, with the “real world” calibrating the virtual one.

And this is not a simple simulation run by the enterprise’s commander-in-chief. These might be simulations initiated by the owner of a specific process deep within a partner. That owner might have an idea about how to improve the value to the enterprise, perhaps by effecting a change shared between her own process and other processes or product features into which she normally wouldn’t have insight.

There are some second-order requirements of such a system, concerning partial models, levels of control and management of complexity. These are addressed separately below. The absolutely necessary requirements of such a system, and in fact the current barriers to adoption, concern the manner in which processes are abstracted into and evaluated within the system.

Specifically:

• The results could be strongly unintuitive and unexpected and still be useful. But managers require auditability; they need to know more than what the product and strategy are; they need to see the chain of logic, the specific mechanics of the value chain, and, at any level of granularity, some metrics on costs, value generated, risks and frailties. This severely limits the abstraction strategies of infrastructure architects. Each element in the simulation/control infrastructure must maintain its identity. So if, for instance, Ajax Corp. has a painting process and resources that play in the system, value added, costs and risks at any stage of the simulation or operation need to be naturally resolvable to that entity.

• The simulation mechanism needs to be a particularly aggressive type that captures the notion that every process is acting selfishly according to market forces. Recall that the primary management notion we are leveraging is the exploration of options and the optimization of the system using the distributed, selfish invisible hand of market forces. So each process, and some aggregations of processes, will be acting for their own selfish benefit, offering to adapt if need be, and evolving and learning in other dimensions as a normal endeavor.

Putting these two requirements together results in a requirement for an agent-based simulation and control system where each process is federated or translated into the system as an agent. Constraints are set by market realities, regulations and such. The environment is guided by a “big picture” agent of some type, but the basic mechanics are of an autonomous agent system exhibiting non-deterministic emergent behavior. But there are more design requirements:

• The auditability process needs to be native. There already are workable, entrenched philosophies and metrics for costs (using an enlightened activity-based costing method) and risks. What is missing is the metric that captures the value to the enterprise from each process. Because this is a market-based infrastructure, the metric will consist of two parts. The first component measures the cost to the enterprise of replacing or reinventing the process, including an amortized benefit if the replacement is better. That allows for what really is occurring: process-oriented bidding arbitrage.

The second component is a dynamic extension of the first. You want partners in the enterprise who are constantly striving to improve the enterprise. There is a value in how active and clever (and non-disruptive) agents can be. Presumably, this component will be based on a history-based rating.

• The constellation of processes as agents in the environment exhibits complex behavior. You want to be able to examine the outcomes of a single process-as-agent purely, without side effects. This is a result of the process-bound metric described above. But at the same time, you want to be able to perform “clean” analytic operations over processes, clean meaning formally based in the mathematical sense. Moreover, some processes in the system (for instance many business processes) are introspective over others (for instance constantly adapting operations).

These two formal requirements are the rationale for a specific programming paradigm called “functional.” In the functional style, everything is a function, a term meaning essentially a process. Each function is pure, without “side effects,” meaning that the entire behavior of the function can be seen by examining that function only. Techniques exist to examine the global situation, of course.

The functional style’s primary strength is that its functions adhere to the mathematical notion of functions, so all sorts of useful mathematical methods can be applied. Some of those will be noted below when piling on necessary characteristics. The main advantage, the one which concerns us here, is the ability to auditably support value metrics in an agent environment. The unique strength this provides cannot be overstated. Essentially what we want is a system that has the advantages of non-deterministic results but determinism in cost and value assignment.

Bottom line: the XML import and federation of processes must be into an agent-based system, where each process maps directly to an autonomous agent, and each agent is expressed as a function.
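As a hint of what such a mapping might look like, here is a minimal Haskell sketch; the Agent type, the partner names and the cost and value figures are assumptions made for illustration. Each federated process becomes an agent expressed as a pure function, and every invocation carries an auditable record of which entity produced what value at what cost, so the audit trail falls out of composition rather than being bolted on afterwards.

    -- Every process keeps its identity in an audit record (figures illustrative).
    data Audit = Audit
      { owner :: String   -- e.g. "Ajax Corp. painting"
      , cost  :: Double
      , value :: Double
      } deriving Show

    -- An agent is a pure function from input state to output state plus its audit.
    newtype Agent s = Agent { step :: s -> (s, Audit) }

    painting :: Agent Double
    painting = Agent $ \parts -> (parts, Audit "Ajax Corp. painting" 2.0 3.5)

    assembly :: Agent Double
    assembly = Agent $ \parts -> (parts, Audit "Zeus Auto assembly" 5.0 9.0)

    -- Composing agents composes the audit trail; nothing hides in side effects.
    runChain :: [Agent s] -> s -> (s, [Audit])
    runChain agents s0 = foldl go (s0, []) agents
      where go (s, trail) (Agent f) = let (s', a) = f s in (s', trail ++ [a])

    main :: IO ()
    main = mapM_ print (snd (runChain [painting, assembly] 100))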

Functional programming systems are relatively unpopular because the additional power comes at a cost of non-intuitiveness: one doesn’t worry so much about how to solve a problem as about what the physics of the problem space is. Also, in many situations, performance takes a hit: in general, the more high-level and capable the language, the harder it is to optimize for speed. So functional approaches lose out for the easy stuff. But we are by definition addressing the hardest problems here, problems which conventional infrastructure approaches cannot reach.

There are three candidate functional languages, each with targeted strengths such that all three might be incorporated into a solution. Haskell is the most famous. It was designed as a late-generation standard language specifically to provide pure support for the paradigm. Haskell is supported by several open source environments and is widely used in research and teaching.

Erlang was originally developed by Ericsson for distributed on-line, mission critical telecom applications. It abandons some of the more costly features of Haskell and adds distributed concurrency. It is fast, reliable and open source as well. Erlang is the poster child of functional languages because there are so many fielded applications.

Clean is a later-generation development which stands between the two. It supports distributed concurrency like Erlang, together with many of the more exotic and useful features of Haskell. On target problems it is very fast. Clean’s status as a production environment can be characterized as “advanced research,” but it is plainly designed for production, not research.

Agency and Types

There are a few key operational modes that such a system must support. Of course the modes listed below derive from our opportunity-centric notion of virtual enterprises and supply chains. There would be complementary modes for the resource-centric notion as well. In that case, you have a collection of resources and processes and wish to “compile” the best products. That notion presumes a less revolutionary, more stable enterprise model than addressed here -- the infrastructure should readily support that case. For the more advanced case:

• Given a suggested product, what is the best aggregation of processes from the known vocabulary to support that product? (A toy sketch of this mode follows this list.)

• Given an operational Advanced Virtual Enterprise or Fluid Supply Chain, and a suggested improvement of some kind from the “bottom,” evaluate the merits of that improvement, suggest improved modifications to that suggestion, note and coordinate dependent changes, and update the reward tallies.

• Given an operational Advanced Virtual Enterprise or Fluid Supply Chain, and specific (mostly legacy) optimization and management tools from the “top,” federate and coordinate these valued “traditional” analyses with other, newer infrastructure functions, as a workable “enterprise dashboard.”

• Many of the physical equipment and process assets in the enterprise are information infrastructure assets. These differ in nature from shop floor tools and controllers in having to be self-aware in some way. In other words, they constitute the system that runs the enterprise, but they are also a significant part of that enterprise, and many of the process improvements will involve changes to the infrastructure itself.
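As promised, here is a toy Haskell sketch of the first mode only: given a suggested product, expressed as a set of required capabilities, and a known vocabulary of processes, find the aggregation with the best net value. The vocabulary, the figures and the brute-force search are illustrative assumptions; a real system would search far more cleverly and would use the federated, auditable metrics described earlier.

    import Data.List (subsequences, maximumBy)
    import Data.Ord (comparing)

    -- A vocabulary entry: name, capabilities offered, cost and value added.
    data Proc = Proc { pname :: String, caps :: [String], pcost :: Double, pvalue :: Double }

    -- A candidate aggregation must cover everything the product needs.
    covers :: [String] -> [Proc] -> Bool
    covers needed ps = all (`elem` concatMap caps ps) needed

    net :: [Proc] -> Double
    net ps = sum (map pvalue ps) - sum (map pcost ps)

    -- Brute force for illustration: try every subset of the vocabulary.
    bestAggregation :: [String] -> [Proc] -> [Proc]
    bestAggregation needed vocab =
      maximumBy (comparing net) [ps | ps <- subsequences vocab, covers needed ps]

    main :: IO ()
    main = mapM_ (putStrLn . pname) (bestAggregation ["machine", "paint"] vocab)
      where vocab = [ Proc "Athena Milling" ["machine"] 30 55
                    , Proc "Ajax Paint"     ["paint"]   10 22
                    , Proc "Thor Leasing"   ["equip"]    5  4 ]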

In various ways, these needs force the infrastructure to have a particularly strong mechanism for managing types. “Types” within the infrastructure are the basic abstractions that form the basis of representation. Some types need to be “metatypes,” derived in various manners from other types.

The mathematical theory of types is Category Theory, and the requirement boils down to this: the infrastructure needs to intrinsically support particularly robust applications of category theory. Category Theory is often contrasted with Set Theory: categories are groups of types, and sets are collections of elements. Categories are a richer notion because the underlying mechanics of abstraction can define or discriminate categories, and this inherently supports introspection. Set theory is the default underlying mathematics for logic, but a roughly analogous parallel system can be maintained in category theory, producing the sorts of benefits the system requires.

Since other constraints drive the requirement for functional agents, the application of category theory comes essentially “free.”
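A small Haskell illustration of why the machinery comes “free” once processes are functions: wrapping processes in a function type immediately yields lawful identity and composition (the Category instance below), and a derived “metatype” such as a production line is just another value of the same arrow type, composed from simpler ones. The Process wrapper and the figures are assumptions of this sketch.

    import Control.Category (Category(..), (>>>))
    import qualified Prelude
    import Prelude hiding (id, (.))

    -- A process as a morphism: an arrow from an input type to an output type.
    newtype Process a b = Process { runProcess :: a -> b }

    -- Identity and composition obey the category laws because functions do.
    instance Category Process where
      id = Process Prelude.id
      Process g . Process f = Process (g Prelude.. f)

    machine :: Process Double Double   -- raw stock value -> machined part value
    machine = Process (+ 40)

    paint :: Process Double Double     -- machined part value -> painted part value
    paint = Process (+ 15)

    -- A derived "metatype": a production line is itself a Process, composed of others.
    line :: Process Double Double
    line = machine >>> paint

    main :: IO ()
    main = print (runProcess line 100)   -- 155.0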

Multilevel Control

There is a crisis of sorts in the typical enterprise. Business processes exist in a wholly different world than operational processes. They have different ontologies and different perspectives. Business goals have a strong strategic component where enterprise value is often in indirect income: higher stock price, lower costs of capital, customer goodwill, market share and so on. Operational metrics tend to be more tactical. They include quality, timeliness and consistency measures as well as unit costs. Those unit costs are strongly quantitative, and are a major influence on that period’s bottom line.

The two worlds tend to be managed from different centers of influence, and support mostly independent infrastructures. Each world’s models imperfectly capture the realities of the other and many horror stories abound of bad -- even stupid -- decisions made from these imperfect mappings. Many current best practices are at root shortcuts for bridging this gap, harmonizing goals and rationalizing metrics.

Once newer distributed and fluid business models enter the picture, the situation becomes worse. One new level is the partner firm compared to the entire enterprise: one cannot assume that a collection of selfish firms automatically aggregates into an effective enterprise. And the problem explodes if the process owner is enfranchised. Quite possibly, one might face levels with different worlds at the level of process, line (or department), plant, company, enterprise, and enterprise/customer system, especially if a basic mechanism is bottoms-up agility using emergent behavior.

Recall that a simple explanation of bottoms-up value combines collaborative processes in some structured way to form an enterprise. But there are now new types of agents in the system: a firm is an agent after all. A process line or engineering department will want to exhibit some behavior as an agent in the system.

What we end up with is some agents that are composed of others, but which are not a simple sum of the components. Moreover, the natural world of types and ontology of that level will differ. This is a well known problem in systems of autonomous agents, but not one that has produced -- as yet -- a general solution.

Essentially all agent systems depend on the invariance of the environment in which agents interact. Many systems employ different types of agents in this single soup, and a few of those systems generate agents from component or predecessor agents. But the fluid enterprise problem described above has not only derived agents but derived soup (the interactive environment) in which those higher level agents live. It is more than a simple aggregation or evolution -- it is as if one world creates another. The world of Disney creates the world of Snow White, except Snow White in this analogy would have the same verity as Walt.

This problem is well defined in the biocomputing world, which has the problem of chemical/molecular agents evolving cellular/neural agents evolving minds (and then societies). And each of these worlds (including the world of elementary particle physics, to be complete) has a related but distinct “physics.” Standard Santa Fe approaches to emergence avoid or ignore this problem. While the biocomputing problem thus far eludes practical solution, the universe of business is far friendlier. That’s because business is a synthetic activity, and while each layer in our enterprise may have different types and abstractions, each type has a quantitative metatype. That’s just a fancy way of saying that everyone in the enterprise understands the notion of the fractional value they contribute, and that everyone denominates that numerically in dollars (or another currency).

That allows a workable notion concerning evolutionary levels in business enterprises that is also intuitive to all parties: agents creating numeric fractional value (which also directly measures reward).

How this would work in the most radical case of enterprise definition: a strategic goal is defined. A vocabulary of candidate processes, resources and such is available, perhaps through web-based registration. A set of established, workable business rules exists as allowable ontologies -- and naturally all legacy systems have a plug-in opportunity through one of these ontologies. (All ontologies derive from the base: PSL.)

The iterative parallel simulations would be of the type: “how about this combination of players (defined as root processes with some higher-level infrastructure in place)?” The simulations will produce a global enterprise value (how much money can be made, at what cost and risk); a fractional value for each process (so that each player has a chance to take or reject the “bid”); and emergent behavior which would suggest novel combinations, new groupings of existing higher layers, or even entirely new layers. An example of the latter: instead of shipping product to Ajax Paint company facilities, have Ajax Paint employees use proprietary Ajax paint processes and technology on equipment supplied by Thor Leasing, in facilities of Athena Milling (where the parts are fabricated), supervised by Zeus Auto (who assembles and sells the product).
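To make the value bookkeeping of such a simulation concrete, the following minimal Haskell sketch uses invented numbers: enterprise levels form a tree whose leaves are value-creating processes; aggregates (lines, firms, the enterprise itself) are entities in their own right; and every node’s fractional share of global value remains resolvable, which is the common numeric currency that ties the layers together.

    -- Levels of the enterprise as a tree; every node is a first-class entity.
    data Level = Leaf String Double       -- a process and the value it creates
               | Group String [Level]     -- a line, firm or the whole enterprise

    valueOf :: Level -> Double
    valueOf (Leaf _ v)   = v
    valueOf (Group _ ls) = sum (map valueOf ls)

    -- Fractional value of every node relative to the whole enterprise.
    fractions :: Double -> Level -> [(String, Double)]
    fractions total lvl = case lvl of
      Leaf n v   -> [(n, v / total)]
      Group n ls -> (n, valueOf lvl / total) : concatMap (fractions total) ls

    main :: IO ()
    main = mapM_ print (fractions (valueOf enterprise) enterprise)
      where
        enterprise = Group "AVE"
          [ Group "Ajax Paint"     [Leaf "painting" 22]
          , Group "Athena Milling" [Leaf "machining" 55, Leaf "inspection" 8] ]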

The bottom line is that any usable information infrastructure that supports fluid enterprises by autonomous agent simulation must accommodate the emergence of layers with distinct behavior that maps to (or is congruent with) strategic mechanics of business units in the management and accounting sense.

Soft Modeling

Perhaps the most recognized problem in any simulation, control or analysis system is the poor support for uncertain and unknown facts and dynamics. Generally, these elements are just “left out,” producing some pretty unrealistic results. Often, probabilities are inserted in some attempt to leverage historical data or inductive process. Sometimes modal logics are employed, theoretically providing full support, but they add immense complexity to an already burdened system and require careful attention.

This “soft” information is of several types:

• information that is unknowable, such as where a terrorist may strike or when an earthquake might occur. A great many of these unknown futures open market opportunities or niches for which the enterprise is unprepared. Or they may create challenges or disruptions which planning has ignored. Perhaps a good risk avoidance analysis would have advised against that line of business altogether.

• information that is knowable but which for some reason is not accessible in an explicit form. Usually this is information that is not “in the system” for cost reasons. But security, privacy and legal constraints often create such situations as well.

• information which is intrinsically “soft,” such as the “physics” of the soft sciences. For instance collaborative psychology, cultural anthropology and sociology of affinity groups all factor heavily in any enterprise. But the behavior of these domains doesn’t lend itself to comprehensive explicit predictive dynamics. (Retrospective explanations are often better characterized but the extrapolation remains soft.)

• information that is “tacit.” Tacit knowledge is usually of two types. One concerns the knowledge itself, constituting knowledge that everyone knows and which is so obvious no one thinks to explicitly “say” in the system -- like one can often put a cake in a box but only with difficulty put a box in a cake. The other form is knowledge about how knowledge is expressed. The best example is domain-specific lingo; people would know what is being said if it were said without restrictive presumptions. Utterances like “hand me the Henry spanner,” assume a tacit vocabulary.

Mishandling of any of these can be disastrous to an enterprise, but they are usually mishandled just because the tools don’t exist. The problem is well recognized.

There is a potential tool that precisely addresses this problem; in fact it was specifically devised from this problem statement. Actually, the target problem was more focused on tacit knowledge, and the problem was stated more in linguistic terms. Generally, the problem is that logic was designed to reason about facts, which is all well and good. But a great deal of reasoning, essentially all reasoning by enterprises, is about “situations.” Situations are what concern us. We take all the information we can reasonably have about our current situation, plus what we might know about the future, and make reasoned decisions.

Often, we have facts about situations. Sometimes we don’t have many or all the facts, but that doesn’t change the reality that we have to reason about these situations and make sometimes important and expensive decisions.

“Situation Theory” was devised by some very bright mathematicians, logicians and linguists working at Stanford, to address this very problem. Specifically, the problem was to use the powerful mechanics of logic (first order logic in particular) which deals with facts and also have it deal with a new entity, the situation, which may “contain” or infer facts that aren’t immediately available for reasoning.

Situation Theory is one of the miracles of modern logic. A separate research center at Stanford was built around it (the Center for the Study of Language and Information), there is a dedicated publishing imprint, and many conferences, articles etc., have resulted. But Situation Theory is much like Functional Programming: it is different and requires specialized (read costly) expertise. Many of the workarounds in less capable approaches are sold as “good enough” in their special domains. The information infrastructure marketplace is driven by point solutions and not big picture systems. And experts in this area tend to be academics focused on pure research problems, so situation theory hasn’t yet been taken advantage of.

But there have been some workshops in the past few years to remedy this problem, the Business Applications of Situation Theory series, which addressed the business context specifically, with consideration of the agent problems outlined above. In particular, one would want the ability to support a simple placeholder for unknown facts, so that as various analyses progress, one is at least always cognizant that something is missing. Then, as analyses progress, if missing facts need to be brought into the system and can be, the analysis can be completed. If it cannot, one knows with some precision the limits of the analysis.

This notion of indicating the facts needed is a powerful basic capability. It means one could start a system-wide analysis with only a very little of the system modeled in some way. A result of the first round would be a suspected answer and the set of situations that need to be resolved in order to make the results more precise. Then one can model or federate only those elements of the enterprise absolutely needed to answer the question. This is a far cry from orthodox methods where one needs to model all elements of the enterprise -- usually at mammoth cost -- to answer any question. This process of drilling in has been termed layered zooming in other applications of situation theory in the enterprise.
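A minimal Haskell sketch of the placeholder idea, with the fact list and the cost analysis invented for illustration: a fact is either known or stands behind a named situation, and an analysis returns a provisional answer together with exactly the situations that must be resolved (modeled or federated) to sharpen it.

    -- A fact is either known or hidden behind a named, unresolved situation.
    data Fact a = Known a | Situation String

    -- Sum the known costs; report which situations block a precise total.
    estimateCost :: [Fact Double] -> (Double, [String])
    estimateCost = foldr step (0, [])
      where
        step (Known c)     (total, missing) = (total + c, missing)
        step (Situation s) (total, missing) = (total, s : missing)

    main :: IO ()
    main = print (estimateCost
      [ Known 120.0
      , Situation "Ajax Paint's tooling cost"
      , Known 45.5
      , Situation "regulatory approval timeline" ])
    -- => (165.5, ["Ajax Paint's tooling cost","regulatory approval timeline"])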

Complexity Management

All of the notions outlined above are necessary. The architectural directions outlined appear feasible. But the problem begins with overwhelming complexity just because enterprises are complex. At each step along the way, additional complexity has been added, by the federation mechanism, agent system, auditability requirement over parallel simulations, multilevel emergence support, and facility for soft modeling. If the implementor wants to be practical, she will have devised a specific strategy for minimizing complexity, managing what is left and “natively” supporting each of the noted infrastructure capabilities.

In other words, the knowledge/data representation mechanism needs to be extremely simple and efficient. In particular:

• the system involves many kinds of information and a great deal of metainformation. For simplicity, and to allow for arbitrary reflection, this metainformation should be handled in the same way as the information itself. The implication is that a single representation structure will result.

• comparing and auditing parallel simulations in the same information store requires some static tracing of events and reasoning dependencies. That means some sort of annotated graph is necessitated.

• identifying value patterns for enterprise construction, and building libraries of patterns implies that some complex graph-based pattern grammar be supported. New analytical programs (to support specific planning functions) should be able to cluster information by structure to build new categories.

• because the system is action (rather than fact) based, it needs a native mapping of the “action” in the system.

• the operational grammar must be simple, intuitive for human interaction, computationally fast, ideally vector arithmetic-based to reap advantages of modern processors, and apply uniformly over actions, situations and arbitrary metainformation.

Regular, periodic concept lattices with an accompanying symmetry grammar appear to be the optimum solution, perhaps the only solution, to this regimen.

Graphs are simple structures that capture information and relationships in networks. Trees, representing simple hierarchies, are the simplest and most common graphs. More complex graphs have network structures. A specific type of graph is a lattice which adds some semantic content to the location of the node, so that by knowing the relationship of one node to another one knows something about the information. Lattices, often called concept lattices, allow a sort of vector math over concepts so that by combining operations between two pairs of nodes, you denote the same information as between the start of the first and end of the second.

A novel approach to lattices sets a predetermined, regular periodic structure. This global structure allows the operational grammar to be exceedingly simple within a single set of symmetries: translation, rotation, reflection. Abstraction over the lattice is itself readily supported, as an action recordable in the same structure. Symmetry operations are intuitive to humans (at lower dimensions) and are blindingly fast computationally.

Since such a strong relationship exists between semantics and representational “syntax,” pattern matching of concept clusters is straightforward. Vector operations over concepts and concept aggregations become possible. The complexity of this (highly abstract) representation topology remains essentially constant as the complexity and scale of the system grow. This is a highly desirable quality.
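The following Haskell sketch illustrates the arithmetic flavor of such a lattice, with a three-dimensional integer grid standing in for the real representation (the coordinates and the “is part of” reading are assumptions of the sketch): concepts are nodes, relationships are translations, and combining two relationships is just vector addition, which is why the operational grammar stays simple and fast as the system grows.

    type Concept  = (Int, Int, Int)   -- a node in a 3-D periodic lattice
    type Relation = (Int, Int, Int)   -- a translation between nodes

    relation :: Concept -> Concept -> Relation
    relation (x1, y1, z1) (x2, y2, z2) = (x2 - x1, y2 - y1, z2 - z1)

    -- Combining two relationships is vector addition.
    compose :: Relation -> Relation -> Relation
    compose (a, b, c) (d, e, f) = (a + d, b + e, c + f)

    apply :: Relation -> Concept -> Concept
    apply (a, b, c) (x, y, z) = (x + a, y + b, z + c)

    main :: IO ()
    main = do
      let processA = (0, 0, 0); lineA = (0, 1, 0); firmA = (0, 2, 0)
          r1 = relation processA lineA   -- "is part of" expressed as a translation
          r2 = relation lineA firmA
      print (apply (compose r1 r2) processA == firmA)  -- True: the composed relation lands on firmA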

New types of analytical techniques can transform the symmetries or dimensions of the space to define or identify new clustering contexts. The topology of the space can be transformed to produce fibre bundles of concepts (actions, processes), an extremely powerful notion.

Novel Visualization and Interface

A specific challenge that must be addressed is the problem of human access. The hardest part of this is the “big picture,” the simplest case of which is allowing a high level manager to see the entire enterprise, all its activities, and values in the dynamic context. Somewhat more complex is the problem of giving perhaps thousands of low-level process owners a similar view, centered on their own world, showing relationships and dependencies as they fan out and create enterprise-wide value. Both of these situations are made immensely more complex when each type of manager views not a single circumstance, but is viewing and directing possibly hundreds of thousands of simulations of different situations, hoping to “see” which are the most promising options.

Traditionally when confronted with such complexity, humans develop metaphors. Metaphors, usually using concrete images, “stand for” more complex notions, and the better metaphors do all sorts of multiple duty for various similar complexities.

A common metaphor in the computer world is the “desktop” metaphor developed by Xerox and Apple, stolen by Microsoft and made universal. Files are like “papers” that are “in” “folders” that can be in other folders. These can be “placed” on a “desktop” for work. Few people have trouble with a concurrent metaphor of “opening” a document or folder opens a “window.”

Until mathematics provided a semantics-free set of metaphors, all scientific and religious cosmologies were metaphorically based, often with the metaphors placed in a rather complex geometric relationship. Intuitive access to these vocabularies seems not to have abated in a mere 300 years. Most reasoning, even the most abstract, still appears to have a visual component, and this increases with complexity.

A possible solution in the advanced virtual enterprise context leverages three ideas:

• Three-dimensional concept lattices provide a useful, navigable visual structure for “zooming” in and out, from process-by-process levels to the bigger picture.

• There are many rich “prefabricated” metaphoric structures to leverage from mythology, religion and popular arts. A new project is being formed to explore and exploit these in this new integrated system context.

• Navigation is not a necessary challenge to address, but keyboard, mouse and even voice control are blunt instruments. An approach worth investigating leverages metaphor and geometric control by ideograms and string figures. The ideogram component leverages the several-thousand-year familiarity of billions of people with calligraphic motions that “draw pictures” that have meaning. Also combined is the universally found exercise of string figures (“cat’s cradle”), which uses simple geometric operations that similarly “draw pictures” and which is a likely precursor to written language. Knot and group theories figure heavily in string figure manipulation. Added in is the best current knowledge of visualizing and manipulating protein folding. The latter is heavily studied, and results can presumably transfer between the business management and medical research communities. Protein folding is heavily based on symmetries and is in some cases concerned with agency. These elements are included in the new study noted above.

Distributed Peer to Peer Interaction

The description so far has presumed the infrastructure host to be a central service federated to distributed processes and information stores. But the agent space in which the simulations occur is consolidated. Needless to say, an ideal situation would distribute the core infrastructure services as well. This might be desirable when the case is a highly distributed virtual enterprise that is self-directed in concert with specialized agents. Such agents might have roles such as indemnifying risk (like insurance), tracking, protecting and assigning intellectual property, managing persistent liabilities, providing enterprisewide market and customer information.

An ultimate distributed requirement would be the complete dissolution of the infrastructure in exact proportion to the process centers being served. In this scenario, every process has a local service which “agentifies” it and allows enterprisewide interactivity. The more practical way to effect this mechanically is to leverage the peer to peer frameworks already well in development under the Java, Microsoft and Open Source banderoles. Fortunately, the infrastructure will already be dealing with processes in a strictly formal XML specification, and XML is the advertised lingua franca of all major peer to peer collaborative frameworks.

What’s left is to capture the collaborative metaprocesses of the infrastructure in the same way. Since scrupulous attention would have been given to treating metaprocesses in precisely the same fashion as processes, this is likely to be a relatively simple task. Moreover, since the regular, periodic lattice is decomposable into cells and cell blocks, administration of concepts and concept types can be similarly distributed and XML-ized.
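As one hedged illustration of how that decomposition might be administered, the Haskell sketch below assigns lattice cells to peers: cells are grouped into fixed-size blocks, and each block is owned by the peer selected by a simple deterministic hash of its block coordinates. The block size, the hash constants and the hashing scheme itself are assumptions of this sketch, not features of any particular peer to peer framework.

    type Cell = (Int, Int, Int)

    -- Group cells into cubic blocks of side n.
    blockOf :: Int -> Cell -> (Int, Int, Int)
    blockOf n (x, y, z) = (x `div` n, y `div` n, z `div` n)

    -- Deterministically choose which of k peers administers a cell's block.
    peerFor :: Int -> Cell -> Int
    peerFor k cell =
      let (bx, by, bz) = blockOf 4 cell
      in (bx * 73856093 + by * 19349663 + bz * 83492791) `mod` k

    main :: IO ()
    main = mapM_ (\c -> putStrLn (show c ++ " -> peer " ++ show (peerFor 5 c)))
                 [(0,0,0), (3,3,3), (4,0,0), (9,9,9)]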

Infrastructure Market Forces

The enterprise integration market is huge, both in terms of the scope outlined above and in the smaller fashion addressed in the past decade. With such a large market, whose benefits might be measured in the trillions of dollars, one might reasonably ask why more fieldable progress hasn’t been made. The technical barriers outlined above are not the only barriers. If they were, one would see a panoply of products in the marketplace exploring various balances of technical approaches.

One barrier concerns the dynamics of the marketplace in which information infrastructure vendors operate. This is one area where market dynamics break down as a result of perturbing government policies and monopolistic practices. In a perfect market, customer benefits would shape the market, but in this space the suppliers have such power that the primary product characteristics are determined by what best satisfies the business health of the supplier.

For example, the more centralized the approach and the more captive the representation format, the stronger the “lock-in.” All enterprise integration products have tended toward this form because the business model works, as one can readily see from the landscape: IBM in the 80s; Oracle and Microsoft plus the major service consultancies today. Every key sector is led by this bludgeon.

Another example is the way in which the adoption process penetrates the enterprise. The way ERP products gained control was through the clever mechanism of alliances with large consulting firms. By promoting ERP, consultancies that were already present could expand and cement their roles. With tens of thousands of well-placed, newly enfranchised sales agents, massive adoption was assured.

Open source solutions are expected to demolish -- or at least completely reshape -- these barriers.

All of those dynamics are on the supply side, but there are demand-side perturbations as well. A typical enterprise is led by a large firm which the customer sees as a monolithic entity. But in fact, these large firms are composed of several discrete interwoven domains: operational, financial and marketing/engineering are often the main players. As we’ve noted, these players are not well integrated and, if asked, really don’t want to be. Each thinks of itself as the “real” owner of infrastructure, and integration in practical terms translates into a requirement for that sector to “control” the others.

The result is that there is no customer in the enterprise for a truly integrated enterprise. This is not likely to change with firms that currently exist. So if the supply side of the equation were “fixed” by open source solutions and components, the change in existing enterprises is likely to enter through the operational infrastructure, allowing it to swallow and preserve the others perhaps without disruption. The notion of improvement is captured in the notion of “fluidity.” A rough analogy might be how Walmart (and later Amazon) changed retailing and Dell changed supply chain management.

The larger promise is likely to be felt in completely new AVEs that revolutionize their sectors, and this is in part because this business model has less legacy power structure to reinvent. One interesting legacy is the presumed tight binding of the management of capital with the management of production. For historical reasons these two functions were inextricably bound and in fact the combination was the potent driver of the industrial revolution.

But a hundred years later, these two functions want to be independent; that dynamic is just a restatement of the move toward greater management by market forces. Capital is fluid, and an enterprise’s access to less fluid capital in the form of resources (equipment, material, workforce) wants to be more fluid, which is a motivator behind the AVE. But on inspection, some core financial metrics employed in the enterprise freeze this notion, creating more legacy inertia to overcome.

Finally, the usual ombudsman for infrastructure of this scale is national government. The European Community has a long-standing commitment to sponsoring research in this area of enterprise integration and has a tradition of public promotion of the resulting infrastructure. But there are structural problems: the research model traditionally focuses on small efforts that do not integrate well. This is also hampered by a somewhat problematic handling of commercializable intellectual property. Where large projects have been formed, for political reasons they are technically unambitious. And in any case, the giants in the infrastructure marketplace are U. S. corporations.

On the U. S. side are structural problems of a different nature. The relationship between industry and government support for standards is based on a rather pure philosophy of free market mechanics. So U. S. agencies are forbidden from leading or generating standards, and may only host and facilitate industry initiatives. But as we have noted, that market is broken. As a result, while the need for government leadership in enterprise infrastructure is widely recognized, no one is chartered with or resourced to address the problem.

On another front, since World War II, the lead research agency for advances in information infrastructure has been the U. S. Department of Defense, and indeed most of the results reported here are the result of work sponsored by the Defense Advanced Research Projects Agency (DARPA). But DARPA in recent years has been buffeted in ways that have forced it out of the business of dealing with industrially relevant infrastructure research. In part, this is a result of a desire by Congressional sponsors to convert from a Cold War research establishment. In part, this specific area has become a political football for debates over the proper role for government in the marketplace. As a result, at the time of writing no leadership of any kind exists in the U. S. government concerning this infrastructure domain.

Open Source Project: ALF

Consonant with the requirements noted here, a new open source project has been established to deliver infrastructure components and test concepts in production environments. The Open Source mechanism provides for the fastest transition from architecture to code, the widest possible peer review process, the most dynamic development mechanism and the fastest technology transfer process available. That project is named ALF (VE) and is registered on SourceForge.

A new research center has been formed to explore advanced enterprise infrastructures and administer the progress of the open source project: AERO (the Advanced Enterprise Research Office) at Old Dominion University.

Summary

A new generation of enterprise integration infrastructure is emerging. It seems poised to overcome several technical barriers and to bypass structural market barriers. The former is by dint of long-lived research efforts; the latter leverages the mechanism of open source development.

One might reasonably expect the first revolutions to be in the adoption and fast evolution of these infrastructure ideas in the Fluid Supply Chain because there is a well defined customer and competitive threat.

A more fundamental revolution might be enabled by the leveraging of the infrastructure ideas to fuel the Advanced Virtual Enterprise. The benefits of the AVE are so promising that it is manifest destiny that this model, or something like it, will appear. But this will require some new roles in the marketplace concerning agents to facilitate various functions in forming and operating the AVE. It will be quite an adventure to see how quickly and in what way these may appear.
