Connectivity, Levels, and Boxes
October 22, 2011
In programming, one is constantly creating boxes, or, to be more precise, compartments. Even as the holy grail of completely closed and reusable software objects persists, programmers fight all the time against blurring the boundaries of the structures they themselves have created. Almost since their invention, programming languages have thus been designed to support structured programming (ALGOL 60, Pascal), which culminated in the object-oriented paradigm around the mid-1990s. In the ideal programming world of a 1990s software engineer, those “objects” are completely independent; they are black boxes with regard to the way they do their job, hence, some technicians say, they behave. More profanely, they form an enclosed whole; they represent something like a transportable software agent. Of course, these agents do not “act”; they have to perform in an exactly defined manner.
Similarly, databases became reasonably structured around 1985, and for more than 20 years almost every database system followed the relational approach. A similar story unfolded for networking, i.e. for the task of connecting the various parts of a computing system beyond the physical box of a computer. For decades, and still in the vast majority of installations, engineers followed a strict belief in the maxim of “divide & conquer,” which invariably results in a strictly hierarchical system. The famous client/server architecture is legendary.
In practice, perfect hierarchies rarely fit. They fail increasingly the further away you get from the processor of the computer. An operating system may work in a hierarchical structure, a storage system might, a software system like SAP R/3 as well, but not the people in social processes. Who would claim that the thoughts appearing in natural brains follow a hierarchical structure? Thus we cannot claim that thoughts are the result of applied logic. Just remember the great failure of Prolog… For a few years now, things have fortunately been changing, or at least so it feels. There are link-based (Neo4J) and document/object-based (CouchDB) databases. Yet, if you take so-called storage clouds (e.g. Apple’s iCloud) for non-hierarchical systems, you are fooling yourself. They only look non-hierarchical; internally they are still strongly hierarchical. What creates the impression is the contrast between the human user (you) as a body (which is much slower than the speed of light) and the processing speed and bandwidth of networks.
Back to our endeavor of intelligent systems. Note that we really mean “intelligent” in its own right, not just intelligently engineered or marketed. The system should not merely repeat programmed steps (like the ugly Deep Blue…) at the speed of light. We make the first reasonable assumption that an intelligent (computer) system is built from a number of more or less separated modules. We prefer to avoid assigning “functions” or “functional roles” to them for the time being. These modules have to be connected. As for the content, the programmer should not determine in advance which modules are connected in which way. Additionally, there is the practical question of the structural level on which the modules of the envisioned computing “ecosystem” should be connected.
Since around 1998, several standards for linking computer systems have emerged: ORB, Web Services, SOAP, AJAX, WebSockets, and lately REST. Of those, the last one stands apart, because it is the only one that enforces a strict reference to the behavior of the linked systems, and only that. There are only a few cases where the raw data coincide with “behavior,” namely physical sensors. Linking instances of software, however, is a different game. In order to achieve robustness against future changes, software should be linked only on the “behavioral” level. Here, probably, lies the problem with object request brokers (ORB), Web Services, and SOAP, none of which spread as intended, although they were pushed into the market. They are not only too complicated; I am convinced they encourage doing the wrong things when it comes to linking different computing systems. With regard to a more thorough and philosophically sound view, it is probably even worse that they pretend to cover semantics, while in fact they drown in extensive syntactical heavy-duty rules. Yet no amount of syntax whatsoever will “create” semantics. ORB (and similarly RMI, or Akka) even requires the same format on the binary level, as these systems exchange objects or signatures of objects, which are always proprietary to particular programming languages and their compilers. Of course, once you have decided to use just Java and nothing else, Akka is a quite elegant solution to the challenge of distributed computing!
What is wrong with those approaches with regard to intelligent systems? They try to standardize the interaction on the level of structural field definitions instead of on the level of behavior, i.e. results. Thus, they are highly specialized from the very beginning. And specialists are always in danger of becoming extinct. You only need to change a single bit, and the whole machinery will stop working, and with it the interaction. In other, more theoretical terms, we can state that the linked systems are required to have an isomorphic interface on the level of semantics. Can you see the architectural misconception? Astonishingly, this misunderstanding happened on top of a long history of abstraction concerning standards for network transport. While for the more hardware-related layers there is the OSI model stack, these lessons were seemingly forgotten when it came to interactions in which semantics plays a role.
What we obviously need is a much stricter decoupling of the transport layer from the transported content, and, at the same time, a similarly strong decoupling between the source of a message and any potential corresponding receptor. The only thing shared between a SOURCE and a RECEPTOR is a kind of loose contract about the type of data exchanged. Secondly, they never get into contact directly. Any exchange is always mediated by some kind of relaying instance. For the time being, we call it a “message board.” Take it for now as a mixture of a telephone relay station, a kind of medium, a blackboard, a bunch of neurons and their fibers, or the stage of a theater. The message board establishes links between participants (sources or receptors) in a probabilistic manner; it is a kind of medium for the participants, or better, a kind of active milieu. In this way, links are not necessarily transparent any more. Instead, the activity of the message board allows for the usage-driven emergence of new codes. From a communicological point of view, the message board may be conceived as a milieu, offering different topologies of relaying (1:1 -> n:m) as well as different modes of establishing linkages. Participants may transfer rules for matching SOURCEs and RECEPTORs.
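The relaying scheme just described can be sketched in a few lines of Python. This is a hypothetical illustration, not an existing framework: the names `MessageBoard`, `register`, and `publish` are inventions, and the “loose contract” is reduced here to a plain document-type string. Sources and receptors never hold a reference to each other; the board relays probabilistically according to an affinity value.

```python
import random

class MessageBoard:
    """Hypothetical relaying instance: sources and receptors never touch directly."""

    def __init__(self):
        # receptors registered under a loose "document type" contract
        self.receptors = {}

    def register(self, doc_type, receptor, affinity=1.0):
        # affinity in [0, 1]: how likely the board relays a message to this receptor
        self.receptors.setdefault(doc_type, []).append((receptor, affinity))

    def publish(self, doc_type, payload):
        """Relay a message probabilistically; return the receptors actually reached."""
        reached = []
        for receptor, affinity in self.receptors.get(doc_type, []):
            if random.random() < affinity:
                receptor(payload)
                reached.append(receptor)
        return reached

board = MessageBoard()
received = []
board.register("sensor-reading", received.append, affinity=1.0)
board.publish("sensor-reading", {"value": 42})
# with affinity=1.0 the single registered receptor is always reached
```

The point of the sketch is only the topology: the source knows nothing about who receives, and lowering the affinity makes the linkage probabilistic rather than wired.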
We have said that participants exchange a contract about the type of document; that is not completely true, because in that case we would again have to negotiate the interaction, which in turn would require standardizing on the level of fields, i.e. the bit-structure of variables. Exactly this causes the mess with SOAP. Our proposal is different, leaning towards the pattern provided by biological bodies, in particular the way the nervous system “encodes” stimuli “for” particular brain regions, or the way the endocrine system is organized. The fact is that neither the humoral system nor the nervous system first defines a code for negotiation and then negotiates the interaction and the exchange of data. The same is true for the immune system and its matching against infectious agents. In all these cases, the match is built into the matter directly. In those natural systems, an effect is elicited either through repeated use or through a large population of almost identical “interactions” between the participants of such a matching game. Whether they match or not is not part of the game. They are simply there. Any response is a matter of secondary processing (intracellular amplifying biochemical chains).
This means that on both sides of the message board there have to be populations of entities which match across the medium a priori with regard to the transport. Any processing then takes place inside the entities. Whether the transmitted package is processed or not is no longer part of the transmission game. This principle is diametrically opposed to approaches like Web Services, SOAP, ORB, or RMI. In those frameworks, once a message has been transmitted and matched, it is mandatory that it also be processed. That is fine for systems where one is happy with database look-ups, i.e. in purely syntactical systems. We propose that it is unsuitable for systems that shall show some capability for semantics. The separation between successful transmission and the decision about processing is a principle that we constantly employ in language-based communication, and it is the core principle of messaging in biological systems. We neither negotiate words nor do we negotiate the design of the ears and the vocal apparatus. They match prior to any communication.
In the case of a cognitive system that has just booted, or one that is still in the booting process (like post-natal animals, if you like), the problem is a different one. It is solved by two conditions: the ability to associate within certain receiving modules, and the emerging regularity of inputs to the same sections of the population of those associative modules. The result is again a “body” in which the match between sender and receiver need not be negotiated. In the case of the immune system, we know of the “priming” of receptors through “exposure” to certain patterns, which are realized as molecular configurations. Yet it is not reasonable to design the connectivity of modules in a cognitive system on the assumption that the whole system is always in a kind of peri-natal state. Instead, we assume a body made of matching “material” parts. Only if this condition is fulfilled, so our guess, will sustainable learning be possible.
Practically, then, we create the message board as an instance that fulfills the following conditions:
— it is able to run on any physical transport protocol, such as UDP, TCP, FTP, or HTTP (in a “restful” manner);
— on top of these transport protocols, a transmission procedure is implemented as a transactional process, actualized as a stack of simple and type-free XML sets, which are known to the participants, too;
— optionally, the participants can deliver contextual matching rules to the message board;
— the semantic content is completely wrapped, i.e. black-boxed from the perspective of the message board, using text-based encoding.
If the receptor receiving the message can handle it, it will do so; if not, it won’t. If the population of receptors is comparatively small (as is always the case in technical systems), the receptor will also return a trace (a kind of “feedback”) to the message board, signalling the acceptance or denial of the package.
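The conditions above can be condensed into a minimal sketch, again in Python and again with invented names (`Receptor`, `relay`, and the `"type"` field are assumptions for illustration, not part of any standard; JSON stands in for the type-free XML sets, for brevity). The payload travels as opaque text, the board-side relay never inspects it, and each receptor decides internally whether to process, returning only a small feedback trace.

```python
import json

class Receptor:
    """The receptor alone decides whether to process; transmission succeeds either way."""

    def __init__(self, handled_types):
        self.handled_types = set(handled_types)

    def receive(self, envelope):
        msg = json.loads(envelope)          # text-based encoding, opaque to the board
        if msg["type"] in self.handled_types:
            # processing would happen here, inside the entity
            return {"accepted": True}
        return {"accepted": False}          # feedback trace back to the message board

def relay(envelope, receptors):
    """Board-side relay: collects acceptance/denial traces without reading the content."""
    return [r.receive(envelope) for r in receptors]

envelope = json.dumps({"type": "temperature", "body": "21.5"})
traces = relay(envelope, [Receptor({"temperature"}), Receptor({"pressure"})])
# one receptor accepts the package, the other denies it
```

Note that a denial is not an error: the transmission itself succeeded, and the trace merely tells the board which members of the population happened to match.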
Such a framework is highly suited to connecting the members of a population of entities that are able to perform associative learning, where those entities exist as separate “behaving objects,” linked together only by means of the messages they exchange through the message board(s). From a bird’s-eye view, the message board may no longer be conceived as a blackboard; it is more like an active glue between neuronal instances. In neurobiology, such an entity is called glia. As has recently been discovered for the biological glia, the message glue possesses its own capabilities for processing, amplifying, dispatching, or repeating signals. We do not claim, of course, that our system for connecting artificial neuronal collections is like the glia, or a simulation of it. Yet we think that we have turned away from the perspective which tries to render the transmission medium strictly invisible, and which tries to negotiate all the time between non-matching “bodies.”