Magnification
January 14, 1999 - Jack Harich
The full power of carefully designed technology lies in the ability of elements to combine in multiple ways and, through magnification, offer substantially better ways to do things.
A Telescope Analogy
If you examine how Magnification works in other areas, certain patterns emerge. Take the case of telescopes. When I was a kid, I made one from a mail order kit from Edmund Scientific consisting of cardboard tubes and three lenses. Light passed through the objective lens (the largest one), travelled down the cardboard tube (black on the inside), narrowed, passed through a center lens, passed through a smaller tube, passed through the eyepiece lens, and then reached my eye. You focused by sliding the tubes. The image was upside down, but it worked and it was simple.
Even in something as simple as this, a goldmine of enabling principles exists:
- The same thing (light) passes through the entire system.
- What is passing through is shielded from contamination. (blackened tubes)
- The system entry point is one of the largest elements. (the objective lens)
- What is passing through is successively refined into the desired result. (the image)
- Better results are often achieved by several smaller elements rather than a single big element. (three lenses; one giant lens won't work)
- The more similar the elements, the easier the design. (three lenses of glass)
- Each successive element depends precisely on the effects of the previous elements, and adds its own effect in a manner similar to theirs. (three successive lenses)
The "same thing" passed through the system is Datatrons. These are a small number (five or less) of standard data structures. A Datatron is like an electron: both flow through systems carrying signals. A Datatron carries primitives, Strings, or other Datatrons. Datatrons provide loose coupling between classes and systems. In fact, if Datatrons are passed by an intermediary, the coupling is only data structure dependent.
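To make this concrete, here is a minimal Java sketch of what a Datatron might look like. Only the name and its contents (primitives, Strings, other Datatrons) come from the text; the slot-based design and the method names are my assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a Datatron: one standard data structure whose named
// slots hold primitive wrappers, Strings, or nested Datatrons.
public class Datatron {
    private final Map<String, Object> slots = new HashMap<String, Object>();

    // Store a primitive wrapper, a String, or a nested Datatron.
    public void put(String name, Object value) {
        slots.put(name, value);
    }

    public Object get(String name) {
        return slots.get(name);
    }

    // Nesting lets one standard structure carry whole trees of data, which
    // is what makes the loose coupling between classes and systems possible.
    public Datatron getDatatron(String name) {
        return (Datatron) slots.get(name);
    }
}
```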
We have several key groups of successive elements.
One is for Declarative Knowledge (a sketch follows the list):
- Params. These are Datatrons containing parameters for part initialization.
- Parameter driven parts.
- Parameter driven containers.
- System Tree, a hierarchy of containers and parts.
- Parameter Editor
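Assuming the Datatron sketch above, a parameter driven part might look like this. The Part interface, the LabelPart example, and all method names are illustrative assumptions, not the actual design.

```java
// A part is initialized entirely from a Params Datatron, so its behavior
// is declarative knowledge rather than hard-coded logic.
public interface Part {
    void init(Datatron params);
}

// Hypothetical part: everything it needs arrives as parameters.
public class LabelPart implements Part {
    private String text;
    private int fontSize;

    public void init(Datatron params) {
        text = (String) params.get("text");
        fontSize = ((Integer) params.get("fontSize")).intValue();
    }
}
```

A parameter driven container would init the same way and hold child parts; nesting containers yields the System Tree, and the Parameter Editor then only has to understand Datatrons to edit the entire tree.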
Another is for Anonymous Collaboration between parts (sketched after the list):
- System Mediator
- Container
- Part
- Message
- Datatrons in Message
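One plausible shape for this group, again building on the sketches above: parts address Messages to the System Mediator by topic, never to each other, so the only coupling is to Message names and Datatron contents. The subscribe/send methods and the MessageListener callback are assumptions.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Assumed callback a part implements to join anonymous collaboration.
public interface MessageListener {
    void receive(String topic, Datatron message);
}

// The System Mediator routes Messages between parts that never hold
// references to one another.
public class SystemMediator {
    private final Map<String, List<MessageListener>> subscribers =
            new HashMap<String, List<MessageListener>>();

    public void subscribe(String topic, MessageListener listener) {
        List<MessageListener> list = subscribers.get(topic);
        if (list == null) {
            list = new ArrayList<MessageListener>();
            subscribers.put(topic, list);
        }
        list.add(listener);
    }

    // A Message here is just a topic name plus a Datatron payload.
    public void send(String topic, Datatron message) {
        List<MessageListener> list = subscribers.get(topic);
        if (list == null) {
            return;
        }
        for (MessageListener listener : list) {
            listener.receive(topic, message);
        }
    }
}
```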
Another is for Dynamic Logic (a sketch follows the list):
- Parts Shop
- Policy Part, created by Parts Shop per needs
- Logic Engine, driven by Policy Part's input
- Part Builders, driven by Logic Engine output
- Learning Part, driven by various
- Knowledge Part, driven by Learning Part's output
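The pipeline might be wired roughly as follows. Only the stage names come from the list above; every interface, method, and the toy rule lookup are assumptions.

```java
// Parts Shop: builds a Policy Part to match current needs, with the
// needs themselves expressed as a Datatron.
public class PartsShop {
    public PolicyPart createPolicyPart(Datatron needs) {
        PolicyPart policy = new PolicyPart();
        policy.init(needs);
        return policy;
    }
}

// Policy Part: its Params configure the rules it applies. apply() is an
// assumed method name, and the rule lookup below is only a placeholder.
public class PolicyPart implements Part {
    private Datatron rules;

    public void init(Datatron params) {
        rules = params;
    }

    public Datatron apply(Datatron input) {
        // Placeholder evaluation: treat the rules as a lookup table keyed
        // by a "case" datum in the input Message.
        return rules.getDatatron((String) input.get("case"));
    }
}

// Logic Engine: driven by the Policy Part's input; its output would in
// turn drive the Part Builders, Learning Parts, and Knowledge Parts.
public class LogicEngine {
    public Datatron evaluate(PolicyPart policy, Datatron inputMessage) {
        return policy.apply(inputMessage);
    }
}
```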
Policy Part
Magnification also occurs when many elements converge. This will happen as we achieve Dynamic Logic in Policy Parts, and later in other parts. Consider that all reusable parameter driven parts collaborating with Messages follow the same model: a standard context (initialization by Params, input Messages in, output Messages out) surrounding a single configurable Logic element, as sketched below:
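The original diagram of this model is not reproduced here, so the following is a heavily hedged reading of it, reusing the sketches above: a reusable part whose only unsolved piece is the configurable Logic element. Every name not in the text is an assumption.

```java
// One reusable, parameter driven part following the model: the standard
// context (Params for init, input and output Messages) is fixed, so only
// the Logic element remains to be solved.
public class LogicPart implements Part, MessageListener {
    private final SystemMediator mediator;
    private PolicyPart logic;       // the configurable Logic element
    private String outputTopic;

    public LogicPart(SystemMediator mediator) {
        this.mediator = mediator;
    }

    public void init(Datatron params) {
        // The Logic element is built to order by the Parts Shop.
        logic = new PartsShop().createPolicyPart(params.getDatatron("rules"));
        outputTopic = (String) params.get("outputTopic");
    }

    public void receive(String topic, Datatron inputMessage) {
        // Input Messages are mere standard data structures. The Logic
        // element responds; effectors, if needed, run off the output Message.
        mediator.send(outputTopic, logic.apply(inputMessage));
    }
}
```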
Dynamic Logic is hard. This model makes it much, much easier, because we only have to solve the Logic element, given the standard context. The main reason Artificial Intelligence (AI) has proceeded so slowly is, IMHO, that it is attempting to solve problems that are too large and too numerous. AI doesn't have the luxury of solving something easy, because it lacks a mature, complete, domain neutral infrastructure that makes each problem to be solved very small and simple. All we have to do here is build Configurable Logic that responds appropriately to Input Messages, which are mere standard data structures. There is no domain dependent work. If effectors are needed, they can be handled with output Messages. Experience shows the input is usually a very small number of datums. We have simplicity, simplicity, simplicity due to reuse, reuse, reuse.