Grid Phases
January 14, 1999 - Jack Harich
This technology is so large, complex, and full of unknowns that we must take a careful multi-phased approach. The elements in each phase make the next phases possible and yield multiplier effects. Discovery occurs gradually. By the time we get to the really tough phases we will have greater clarity from all our preparative work.
Strategy
Each phase adds a two- to ten-fold productivity increase due to the immense power of the abstractions introduced, and their ability to interact with other abstractions in a multiplier effect. But this will occur only if we can solve a series of tough problems.
We start with the better-known abstractions and move toward the unknown, using young abstractions to reveal the next ones until we reach the end of the tunnel into the future. This is "predictable discovery."
We may not achieve everything we want to in a phase. No problem. As long as it supports the upper phases with at least minimal functionality, we can proceed. Actually we expect to fall short in many phases. The vision as a whole will not be endangered, however, because most of the hard work has already been done in Bean Assembler One.
What's surprising is that the Parts stage contains little risk - it just takes time. I've become familiar with parts, how to manage them, where the troubles are, what users need. The only significant new element in the Parts stage is Base Roles. This is not required to move forward, but if we can solve it, it will give substantial leverage to later phases.
Even the No Code phase is half solved. By isolating Configurable Logic to Policy Parts, we have to solve only a small problem: given initial parameters, how can we configure the response to incoming messages? This can be solved part way with standard rules logic. But we probably need a fuller solution to support subsequent phases.
The Learning phase is where it starts to get difficult. Then again, all we need to solve is easy persistence of what has been learned, plus additive configurable logic. This has already been solved by many expert systems, but not having investigated them, I don't know how well existing solutions will work for us.
The Proactive and Creativity phases are high risk, because nearly all the key technology is unknown or immature industry wide. But prior phases should make these phases much easier. For example, it may turn out that to be Proactive we just need to steer the Learning parts in preferred directions by providing certain optional goals, and give them a way to learn rules and facts about new domains, which has been done.
The following is an overview of the phases:
Stage Phases - Reusable Parts
Nutshell Vision - Allow users and developers to assemble systems from parts.
Phase 1 - Part Centric - Here we dedicate ourselves to being Part Centric as we work. Our goal and main effect is building systems from reusable parts. This takes two main principles: Encapsulation and Mediator. The main mechanisms are flexible containers and parts, and a System Mediator to initialize the system and handle part creation, findability, and destruction.
Manifestations of this are Layered Architecture frameworks using standard member interfaces, typical IDEs, and server application frameworks. In our case the System Mediator later becomes the Bean Assembler system engine.
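To make the mechanism concrete, here is a minimal sketch of a System Mediator along these lines. All names are illustrative assumptions, not taken from Bean Assembler itself; the point is that parts are created, found, and destroyed through one mediator rather than referencing each other directly.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a System Mediator; all names are illustrative.
// Parts never reference each other directly; they go through the
// mediator, which owns creation, findability, and destruction.
public class SystemMediator {
    private final Map<String, Object> parts = new HashMap<>();

    // Creation: instantiate a part by class name and register it
    // under a logical name so collaborators can find it later.
    public Object createPart(String logicalName, String className) throws Exception {
        Object part = Class.forName(className).getDeclaredConstructor().newInstance();
        parts.put(logicalName, part);
        return part;
    }

    // Findability: collaborators locate parts by name, not by reference.
    public Object findPart(String logicalName) {
        return parts.get(logicalName);
    }

    // Destruction: the mediator is the single place parts are retired.
    public void destroyPart(String logicalName) {
        parts.remove(logicalName);
    }
}
```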
Phase 2 - Microkernel - Here the goal is a stable, lightweight, simple, application neutral system engine that provides the minimum system infrastructure needed by all systems seeking Ultra High Reuse. This microkernel becomes the all important foundational element for all future work, much like the microkernels in Unix and NT.
Looking way down the road, we build in several features to support Continuous Evolution (no design/run dichotomy) and Configurable State using No Code. To achieve this we use the principles of Declarative Knowledge, Anonymous Collaboration and Infinite Extensibility.
It's important to understand that this microkernel is not the Bean Assembler Editor, Data Framework, Parts Framework or Logic Framework - it is the Bean Assembler System Engine. It is just the foundation for a Layered Architecture where we keep adding layers and parts to do those things. Otherwise we are building a huge monolithic monster that, while initially useful, cannot evolve easily or provide high reuse. The less the microkernel does, the better. Here's the first iteration System Engine Model.
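As a feel for how little the engine itself might do, here is a sketch of what the kernel's entire contract could look like. All names are illustrative assumptions; everything else - editors, frameworks, parts - is a layer above it.

```java
// A sketch of how small the system engine's contract might stay.
// All names are illustrative assumptions. The kernel only boots,
// routes, and hosts; the less it does, the better.
public interface SystemEngine {
    // Initialize the system from declarative configuration
    // (Declarative Knowledge).
    void start();

    void stop();

    // Anonymous Collaboration: senders route by logical name,
    // never by direct object reference.
    void send(String targetPartName, Object message);

    // Infinite Extensibility: anything can plug in at run time,
    // supporting Continuous Evolution (no design/run dichotomy).
    void register(String partName, Object part);
}
```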
Phase 3 - Ease of Use - We now have enough complexity (from Phases 1 and 2) that it needs wrapping in friendly editors to achieve the full potential of the tool easily and quickly. This phase provides a better, more complete editor that lets developers assemble parts into containers and configure them with parameters.
Actual ease of use will come from the visual System Tree, the Param Editor, the Inspector, and possibly a Configurator. These will be overhauled to do everything a user or developer needs without code or manual file management. They will use a Look and Choose metaphor, rather than Remember and Enter, as in command-line commands or a raw parameter text editor. Message linking will be fully supported, including generation of parex changes and link validation.
The largest improvement will be the use of ParamTypes per container, similar or identical to XML DTDs, that allow new parts to be added with default parameters, values to be chosen from lists, and so on, in typical modern UI fashion. All changes will be validated.
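As a feel for the idea, here is a hypothetical sketch of one ParamType; none of these names are the actual Bean Assembler API. Like a DTD element declaration, it supplies a default and the set of legal values, so an editor can offer pick lists and validate every change.

```java
import java.util.List;

// Hypothetical sketch: a ParamType declares the legal values for one
// parameter, much as a DTD declares the legal content of one element.
public class ParamType {
    private final String paramName;
    private final String defaultValue;         // used when a new part is added
    private final List<String> allowedValues;  // offered as a pick list in the editor

    public ParamType(String paramName, String defaultValue, List<String> allowedValues) {
        this.paramName = paramName;
        this.defaultValue = defaultValue;
        this.allowedValues = allowedValues;
    }

    public String getParamName() { return paramName; }

    public String getDefaultValue() { return defaultValue; }

    // Every change is validated before it reaches the container's parameters.
    public boolean isValid(String value) {
        return allowedValues.isEmpty() || allowedValues.contains(value);
    }
}
```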
An unreliable system is hard to use. Reliability will be addressed in two areas: Data Integrity and System Robustness. For the first we build in inherent data integrity all the way through, so that it's easy. For Robustness we get into Regenerative Problem Recovery. This detects and escalates problems to higher and higher levels until the system has recovered from the problem. If typical recovery is not possible, the system will regenerate the parts involved, reinitialize them with the pre-transaction state, and start over at the point before the problem occurred. This is about one level above what transactions do.
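Here is a minimal sketch of the escalation idea, with every type and method name assumed for illustration. Each failure escalates one level: retry, then regenerate the part, restore its pre-transaction state, and start over.

```java
// Sketch of Regenerative Problem Recovery; all types and methods here
// are illustrative assumptions, not an actual API.
interface Part {
    Object handle(Object request) throws Exception;
    void restore(Object preTransactionState);   // reinitialize from a snapshot
}

interface PartRegenerator {
    Part regenerate(Part failed);   // build a fresh instance of the same part
}

public class RecoveringInvoker {
    private final PartRegenerator regenerator;

    public RecoveringInvoker(PartRegenerator regenerator) {
        this.regenerator = regenerator;
    }

    public Object invoke(Part part, Object request, Object preTxState) throws Exception {
        try {
            return part.handle(request);               // level 0: normal operation
        } catch (Exception first) {
            try {
                return part.handle(request);           // level 1: simple retry
            } catch (Exception second) {
                Part fresh = regenerator.regenerate(part);  // level 2: regenerate the part
                fresh.restore(preTxState);             // back to the pre-transaction state
                return fresh.handle(request);          // start over before the problem occurred
            }
        }
    }
}
```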
To fulfill the principle "All structures must be self validating" we will do something like a structure of dependencies related to desired system behavior.
Phase 4 - Parts Management - For ease of use to blossom into usefulness, we need several thousand well managed parts. Gasp! Is this possible? Of course. Is it easy? No.
To achieve its fullest potential, a reusable part needs to be findable, understandable and useful. For thousands of parts to work flawlessly together, they need Transparent Collaboration and Location, a small number of Base Roles to play, and each must be a High Quality Part.
The most interesting mechanism to achieve this is the Part Shop.
The most subtle mechanism is the Policy Part. Its mission is to decide things. It is not a view, is not persistent, is not a task. It is used for reference by other types of parts to decide what to do in a centralized manner. This allows the designer to Isolate All Business Rules. In this phase the Policy Part does it with code. In the next phase, it does it with Configurable Logic. It is a deliberate bridge from the Parts to Logic stages. Once we figure a few things out, Policy Parts will probably be replaced by other to-be-discovered types of parts, or possibly by all parts.
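A code-based Policy Part might look like the following sketch; the part name, facts, and return values are hypothetical. Other parts consult it, so the business rule lives in exactly one place.

```java
import java.util.Map;

// Illustrative sketch of a code-based Policy Part; the part name, facts,
// and return values are hypothetical. Other parts consult it, so this
// business rule lives in one place (Isolate All Business Rules). In the
// No Code phase, the hard-coded body gives way to configured rules.
public class StoragePolicyPart {
    // Decide where a given kind of data is stored. Callers pass the facts
    // they know and act on the answer; they never embed the rule themselves.
    public String decideStorageLocation(Map<String, String> facts) {
        if ("CustomerOrder".equals(facts.get("DataKind"))) {
            return "OrdersDatabase";
        }
        return "DefaultStore";
    }
}
```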
A new principle emerged here - Segmented Architecture. This fulfills the need for 95% and later 100% parts reuse. Remember a part can be a collection of parts, and even a collection of collections of parts. A part can be a system reused by another system. It is unpredictable what parts or collections will need to be reused, replaced, updated or removed, so we need to be able to slice a system anywhere for part cleavage. Segmented Architecture says that, like a segmented worm that can be broken anywhere and will regrow, you can safely cleave a system anywhere for whatever purpose you want, and that section of the system will be self-contained and reusable. For example you can snip off a system branch and substitute a compatible one. This is accomplished with Hierarchical Composition and Anonymous Collaboration, some rather useful reusable principles.
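A minimal sketch of Hierarchical Composition, with illustrative names, shows how a system could be cleaved at any branch: detaching a child returns a self-contained subtree that can be reused or swapped for a compatible one.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of Hierarchical Composition (names are illustrative). A part can
// be a collection of parts, so any branch can be cleaved off as a
// self-contained, reusable segment, or swapped for a compatible one.
public class CompositePart {
    private final String name;
    private final List<CompositePart> children = new ArrayList<>();

    public CompositePart(String name) { this.name = name; }

    public void add(CompositePart child) { children.add(child); }

    // Cleave: remove a branch by name and return it as a reusable segment.
    // Because children collaborate anonymously, the subtree stays whole.
    public CompositePart detach(String childName) {
        for (Iterator<CompositePart> it = children.iterator(); it.hasNext(); ) {
            CompositePart child = it.next();
            if (child.name.equals(childName)) {
                it.remove();
                return child;   // self-contained subtree, ready for reuse
            }
        }
        return null;   // no such branch
    }
}
```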
Stage Phases - Dynamic Logic
Nutshell Vision - Let users configure, teach and guide the parts on what to do. Let the parts decide how to do it, including asking users and other parts for advice. Eventually let the parts be self directed. All this is accomplished by "dynamic logic" rather than "static code", with the parts providing more and more of their own logic.
Phase 5 - No Code - Of all the phases, this is the crucial one. Here we leap from what is really just highly organized procedural code to a whole new way of expressing behavior. To reduce the risk, this phase involves getting a single key mechanism to work - a Policy Part driven by Configurable Logic, not code.
For example, suppose you wanted a Workflow Part. It might receive a Message named "AcquireNextTaskDescriptor" containing the properties "AppName", "PreviousTaskName" and "CustomerOrderStatus". The Workflow Part would decide the "NextTaskName" based on this input data, and add that property to the Message. Other policy examples are where data is stored, data input validation, security logic and computer assisted training.
A cursory review of the AI literature shows this is easily done with goals, rules, facts and inference engines. There are other ways, but Rules Based is probably the easiest. The goals, rules and facts are what vary per Policy Part, and would be configured. The inference engine is reusable, and there are many on the market.
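To show the shape of the idea, here is a sketch of the Workflow Part example driven by configured rules instead of code. A real system would hand this to an off-the-shelf inference engine; the property names follow the example above, and everything else is an assumption.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the Workflow Part example driven by configured rules, not
// code. A real system would use an off-the-shelf inference engine; this
// only shows that the varying logic lives in data the Policy Part loads.
public class RuleBasedWorkflowPolicy {
    // One configured rule: if every condition matches a property in the
    // incoming Message, the rule supplies the NextTaskName.
    public record Rule(Map<String, String> conditions, String nextTaskName) {}

    private final List<Rule> rules = new ArrayList<>();

    // Rules arrive from configuration, not from compiled code.
    public void configure(Rule rule) {
        rules.add(rule);
    }

    // Decide NextTaskName from the Message's properties, e.g. AppName,
    // PreviousTaskName and CustomerOrderStatus.
    public String decideNextTask(Map<String, String> messageProperties) {
        for (Rule rule : rules) {
            if (messageProperties.entrySet().containsAll(rule.conditions().entrySet())) {
                return rule.nextTaskName();
            }
        }
        return "Undecided";   // no rule fired; could ask a user or another part
    }
}
```

A rule such as {AppName=OrderEntry, PreviousTaskName=EnterOrder} supplying ApproveOrder (values made up here) would then be data in the part's configuration, changeable without recompiling anything.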
There is much work to be done here, but it centers around determining the best way, not finding a way. Plus we are solving a small problem, since a Policy Part exists inside a carefully designed infrastructure. That's why we feel this phase is easily doable.
Phase 6 - Learning - Up to this point, all part behavior has been individually specified by users. That takes time, is error prone, and is from the user's point of view, not the part's. There's got to be a better way to turn users' needs into system behavior. That better way is systems that learn from all their activity, ask for guidance when necessary, and allow users to direct them on what to learn.
This builds on the previous phase. Dynamic Logic is changing the variable portion of logic, which could be goals, rules and facts. Learning consists of parts that, given goals of what to learn, learn it by remembering what's important, storing those rules and facts, and using them for new behavior.
For example suppose you want a system to learn where you put your windows, how you size them, which ones you use a lot, and which one you tend to open first. This would make your work faster and more pleasurable. You tell the system these are your goals. It then monitors activity in these areas, remembers the relevant data, and gradually changes its behavior to suit your needs. It might notice you move a window near the corner, and ask if you'd like to move it exactly into the corner. After a while you might tell it to include only windows in certain apps, because the rest seem to be okay already.
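A toy sketch of the window-placement example, with every name and threshold assumed: given the goal, the part observes events, remembers what it sees as facts, and offers a behavior change once a pattern repeats often enough. Persisting the learned facts is assumed to happen elsewhere.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the window-placement example; every name and threshold
// is an assumption. Given the user's goal, the part observes events,
// remembers what it sees as facts, and offers a behavior change once a
// pattern repeats enough. Persisting the facts is assumed elsewhere.
public class WindowPlacementLearner {
    private static final int CONFIDENCE_THRESHOLD = 5;

    // Learned facts: "window -> placement" mapped to how often it was seen.
    private final Map<String, Integer> placementCounts = new HashMap<>();

    // Called for each observed event that falls within the stated goal.
    // Returns a suggestion to put to the user, or null to keep watching.
    public String observe(String windowName, String placement) {
        String fact = windowName + " -> " + placement;
        int timesSeen = placementCounts.merge(fact, 1, Integer::sum);
        if (timesSeen == CONFIDENCE_THRESHOLD) {
            return "Always place " + windowName + " at " + placement + "?";
        }
        return null;
    }
}
```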
If we do this phase well, users can get systems to conform much, much more closely to their individual needs. This will immediately lead to User Enrapture, where users start really liking their systems because they are so adaptable and suit them to a T. This is vastly different from the current genre of products where one size fits all, with some "preferences."
Phase 7 - Proactive - This phase and beyond is currently full of unknowns, but by the time we get there we expect it to be doable, due to what we have to build upon. "Proactive" occurs when the system anticipates users' and other systems' needs, without any guidance or goals. This can be done by Pattern Recognition of what users and systems are doing. When the same thing is happening over and over, and there is a better way to do it, the system would proactively suggest to the user "Here's a better way to do that." The system would begin to behave like a friendly Virtual Assistant.
For example you may be taking 12 steps to do something that can be done in 8 steps. Or you may be opening 3 windows to set just 1 or 2 properties on each when starting a new document page, and this could be better done with one new window containing those properties. Or the system may notice you are logging on at different sites around the city, using the equivalent of a certain desktop at each site. It would offer to remember and provide those desktop styles. Or the system would automatically remember and finish certain keystroke combinations. If you add up hundreds of these cases of proactiveness, the system takes on a whole new attitude.
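A rough sketch of the Pattern Recognition mechanism, with illustrative names and thresholds: no goal is set; the system watches the raw action stream, and when the same short sequence of steps keeps recurring, it volunteers a suggestion on its own initiative.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Rough sketch of proactive Pattern Recognition; names and thresholds
// are assumptions. No goal is set: the system watches the raw action
// stream and volunteers a suggestion when a sequence keeps recurring.
public class ProactiveWatcher {
    private static final int SEQUENCE_LENGTH = 3;
    private static final int REPEAT_THRESHOLD = 4;

    private final List<String> actionStream = new ArrayList<>();
    private final Map<String, Integer> sequenceCounts = new HashMap<>();

    // Record one user action; return a suggestion or null.
    public String record(String action) {
        actionStream.add(action);
        if (actionStream.size() < SEQUENCE_LENGTH) {
            return null;
        }
        // Treat the most recent run of steps as one candidate pattern.
        List<String> tail = actionStream.subList(
                actionStream.size() - SEQUENCE_LENGTH, actionStream.size());
        String pattern = String.join(" > ", tail);
        if (sequenceCounts.merge(pattern, 1, Integer::sum) == REPEAT_THRESHOLD) {
            return "You often do: " + pattern + ". Here's a better way to do that.";
        }
        return null;
    }
}
```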
Phase 8 - Creative - What is it we look for in the best people or establishments we interact with? One trait is creativity, the ability to spot new usefulness where it was unnoticed before, without being told. For the funny British this is wit. When we are working it's anything that helps our work go better. For the employer it's the employee who continually figures out better, new ways to do things. Creativity occurs when a system offers users and other systems entirely new usefulness, without being told specifically to do that. This means that in addition to specific responsibilities, systems adopt the vague, higher level responsibility of helping their users any way they can. This is expressed in better, novel system behavior.
Again, we build on previous phases. The Proactive phase relied on Pattern Recognition. We up that mechanism a notch and use Opportunity Recognition.
The difference between Proactive and Creative is small - it breaks a larger problem into a smaller one. In Proactive the new behavior direction is well defined. In Creative, it's not. Proactive and Creative will eventually use a large number of mechanisms, such as the many ways IBM's Deep Blue determined its next chess move. For example:
Opportunity can be recognized by running simulations of possible new behavior and seeing what the result is. If it achieves old goals better, the system could then evolve in that direction, and occasionally verify it with users and other systems. (A sketch of this follows these examples.)
Opportunity Recognition can also be done by pooling many rules and facts, and looking for meta patterns that have hitherto escaped detection. Specialist parts would do this, using many other parts and systems for help and as source material.
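As one possible shape for the first mechanism, here is a sketch of simulation-driven Opportunity Recognition, with all names assumed: candidate behaviors are scored in simulation against the system's existing goals, and a clear winner is proposed to the user rather than imposed.

```java
import java.util.List;
import java.util.function.ToDoubleFunction;

// Sketch of simulation-driven Opportunity Recognition; everything here
// is an assumption about one possible mechanism. Candidate behaviors
// are scored in simulation against the system's existing goals; a
// clear winner is proposed to users and other systems, never imposed.
public class OpportunitySimulator {
    // Returns the best candidate behavior, or null if none beats the
    // current behavior's score against the old goals.
    public String findOpportunity(List<String> candidateBehaviors,
                                  ToDoubleFunction<String> simulateAndScore,
                                  double currentScore) {
        String best = null;
        double bestScore = currentScore;
        for (String candidate : candidateBehaviors) {
            double score = simulateAndScore.applyAsDouble(candidate); // run the simulation
            if (score > bestScore) {                                  // achieves old goals better?
                bestScore = score;
                best = candidate;
            }
        }
        return best;   // null: no opportunity; otherwise verify with the user first
    }
}
```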
We don't know exactly what Opportunity Recognition really is yet, but as they say, "Every man's reach should exceed his grasp."