How to Glue

November 9, 2011

Even software engineers have their holy grail.

As descendants of the peak of modernism (by which I mean the historical phase between the “Wiener Kreis,” the founders of logical positivism, and Norbert Wiener’s formalization of cybernetics), software engineers find this holy grail, of course, in the trinity of immediacy, transparency and independence.

It sounds somewhat strange that programmers chase this holy trinity. All day long they are linking and fixing and defining, using finite state automata (paradoxically called “languages”). Yet programming is also about abstraction, which means nothing less than decoupling “things” from “ideas,” or, more profanely, the material from the immaterial. Data are decoupled from programs; data are also decoupled from their representation (format); machines are decoupled, nowadays forming clouds (what a romantic label for distributed control); even inside a program, structures are decoupled from their actualization (in a programming language), objects are decoupled as much as possible, and so on.

These are probably some of the reasons for which Donald Knuth coined the title “The Art of Computer Programming.” Again, it has of course nothing to do with the proclaimed subject, art in this case, not even metaphorically. Software engineers have suffered from the illness of romanticism since the inception of software. Software always has a well-identified purpose, simply put, by definition. By definition, software is not art. Yet there is of course something about software engineering that can’t be defined formally (even though it is not art); maybe that’s why Knuth felt inclined towards this misleading comparison (which hides not only the “essentials” of art, but also those of formalization).

Due to the sheer size of the issue of abstraction, even just abstraction in programming, we have to refrain from discussing this trinitarian grail filled with modernist phantasms about evanescence. We will perhaps do so elsewhere. What we will introduce here is a new approach for linking different parts of a program, or different instances of some programs, where “program” means something like “executable” on a standard physical computer in 2011.

This approach follows the constraint that the whole arrangement should be able to grow and to differentiate. In doing so we have to generalize and transcend the principles listed above. In a first, still coarse step we could say that we relate programs such that

  • – the linkage is an active medium; it ought to be meta-transparent and almost immediate;
  • – independence is indeed complete, thus causing the necessity of an extrinsic higher-order symbolism (non-behavioristic behavior);
  • – the linking instance is able to actualize very different, but above all randomized, communicological models.

We created a small piece of software that is able (not yet completely, of course) to represent these principles. Soon you will find it in the download area.

There are, of course, a lot of highly sophisticated software packages, both commercial and open-source, dealing with the problem of linking programs, or users to programs. We discuss a sample of the more important ones here, along with the reasons why we did not take them as-is for our purpose of a (physically and abstractly) growing system. Of course, it was inspiring to check their specifications.

This brings us to the specification of our piece, which we call NooLabGlue:

  • – its main purpose is to link instances or parts of programs, irrespective of physical boundaries such as networks or transport protocols (see the OSI stack);
  • – above all, the usage of the client-library API has to be as simple as possible for the programmer; actually, it has to be so simple that its instantiation can be formalized;
  • – its functionality of “glueing” should not depend on a particular programming language;
  • – in its implementation (on the level of the computational procedure [1]) it has to follow the paradigm of natural neural systems, namely randomization and associativity; this means that it should also be able to learn;
  • – any kind of “data” should be transferable, even programs, or modules thereof.
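
The demand that the client API be so simple that its instantiation can be formalized could, as a rough illustration, look like the following in-memory sketch. All names here (MessageBoard, register, publish) are hypothetical placeholders invented for this post, not the actual NooLabGlue API; the real system additionally works across process and network boundaries, which this toy omits.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of a minimal "glue" client API: participants register
// with a MessageBoard and exchange messages without knowing anything about
// the underlying transport. Names are illustrative, not the real interface.
class MessageBoard {
    private final List<Consumer<String>> receptors = new ArrayList<>();

    // A participant declares itself a receptor by handing over a callback.
    void register(Consumer<String> receptor) {
        receptors.add(receptor);
    }

    // A source participant releases a message; the board relays it to all.
    void publish(String xmlMessage) {
        for (Consumer<String> r : receptors) r.accept(xmlMessage);
    }
}

public class GlueSketch {
    public static void main(String[] args) {
        MessageBoard board = new MessageBoard();
        List<String> received = new ArrayList<>();
        board.register(received::add);       // participant acting as receptor
        board.publish("<msg>hello</msg>");   // participant acting as source
        System.out.println(received.get(0)); // -> <msg>hello</msg>
    }
}
```

The point of the sketch is merely that the whole handshake reduces to two calls, register and publish, which is the kind of surface that could indeed be generated or formalized.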

In order to fulfill these requirements, some basic practical decisions have to be made about the “architecture” and the tools to be employed:

  • – the communicative roles in a “glued” system are taken by participants that link to a MessageBoard; participants have properties, e.g. regarding their type of activity (being source, receptor, or both), the type of data they are releasing or accepting, or as specified by more elaborate filters (concerning content) issued by the participants and hosted by the MessageBoard; there may even be “personal” relationships between participants, or between groups (1+:1+) of them;
  • – the exchange of messages is realized as a fully transactional system: participants as well as the MessageBoard (the message “server”) may shut down / restart at any time, and still “nothing” will be lost (nothing: none of the completely transferred messages);
  • – xml and only xml is transferred, no direct object linking like in ORB, RMI, or Akka; objects and binary stuff (like pdf, etc.) are taken as data elements inside the xml and they are transferred in encoded form (basically as a string of base64);
  • – contracting is NOT on the level of fields and data types, but instead on the level of document types and the respective behavioral level;
  • – MessageBoards are able to cascade if necessary, using different transport protocols or OSI layers at the same time, e.g. for relaying messages between local and remote resources;
  • – the “style” of running remote MessageBoards follows the RESTful approach (we use the Java Restlet framework); that means there are no difficulties in connecting applications across firewalls; note that the RESTful resource approach is taken as a style on top of HTTP, as a replacement for a home-grown HTTP client; yet the “semantics” will NOT be encoded into the URL, as this would require cookies and similar stuff: the “semantics” remains fully within the xml (which could even be encrypted); in local networks, one may use transport through UDP (small messages) or TCP (any message size); neither the programmer nor the “vegetative” system needs to be aware of the actual format/type of transport protocol.
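
The decision that xml and only xml crosses the wire, with binary stuff riding inside as base64, can be made concrete with a small sketch. The element names below are invented for illustration; the actual NooLabGlue message schema may well differ.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative sketch: a binary payload (e.g. a pdf) travels inside the xml
// as a base64-encoded data element, so only xml is ever transferred.
// Element names are hypothetical, not the actual NooLabGlue schema.
public class XmlPayloadSketch {
    static String wrap(byte[] binary) {
        String b64 = Base64.getEncoder().encodeToString(binary);
        return "<message><data encoding=\"base64\">" + b64 + "</data></message>";
    }

    static byte[] unwrap(String xml) {
        int start = xml.indexOf('>', xml.indexOf("<data")) + 1;
        int end = xml.indexOf("</data>");
        return Base64.getDecoder().decode(xml.substring(start, end));
    }

    public static void main(String[] args) {
        byte[] payload = "pretend this is a pdf".getBytes(StandardCharsets.UTF_8);
        String xml = wrap(payload);
        System.out.println(xml);
        System.out.println(new String(unwrap(xml), StandardCharsets.UTF_8));
    }
}
```

Base64 inflates the payload by roughly a third, which is the price paid for keeping the transferred document uniformly textual (and thus encryptable and relayable as a whole).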

Now, imagine that there are several hundred instances of growing and pullulating Self-Organizing Maps around, linked by this kind of “infrastructure”… the SOM may even be part of the MessageBoard (at least concerning the adaptive routing of “messages”).

Such an infrastructure seems to be perfect for a system that is able to develop the capability for Non-Turing-Computation.

(download of a basic version will be available soon)

  • [1]

Probabilistic Networks

November 1, 2011

Everything is linked together and related.

There have always been smart people who not only knew this, but also considered it as primary, against the point, the dot, the spot. Thinking in relations is deeply incompatible with one of the most central elements of modernity, the metaphysical belief in independence. Today, in the age of the ubiquitous “network,” everything indeed seems to have improved, doesn’t it, given that for the last 5 or 6 years the concept with the steepest career has been the network.

Certainly, one of the main reasons the network became a major concept, from everyday life to science, is that connecting things, establishing links between devices, and establishing the potential for populations of links became a concrete experience, even for private persons. Before the era of WiFi and its almost perfectly automated process of establishing a link, the network was something very palpable. There were modems for dial-up, confirming their working by a twittering sound, a lot of cables in the office, and the frequent experience of failures of this technical infrastructure. In other words, networking became an activity with its own specific corporeality.

So, what does it mean to say that things are connected? What are the consequences, both on the empirical side, concerning the construction or observation of systems or machines, and on the conceptual level? For instance, so far there is no particular “network logic”. Whenever networks meet logic, logic wins, meaning that the network will be reduced to individual steps, nodes, transfers, etc., in other words to unrelated atoms.

Intuitively, the concept of networks is closely related to the notion of information. Today, this linkage has been integrated deeply into our Form of Life. Through the internet, the world wide web, and of course through the so-called social media we experience and practice this linkage in a rather intensive manner. And the social media just invoke a further important topic that is related to networks: mediality.

Here we meet a first hint of potential friction. Networks are usually well-defined; people speak about nodes and relations. Think of the telephone network or a network of streets. Even social networks are explicable. Yet in social media the strict determination starts to get lost. While social media are still based on a network of cables, something is going on there which is drastically different from the cable layer.

The notion of partial indeterminateness brings us to mediality and its inherent element of contingency and probabilism. Yet, what does “probabilistic element” exactly refer to? Particularly with respect to networks? Is it, after all, not just a formalistic exercise to say that there is a random element, largely superfluous when it comes to real systems and problems? Particularly as cultural artifacts are planned. Actually, I don’t think so. Quite the opposite: one could even say that in some sense non-probabilistic networks are not networks at all.

In the remainder of this essay we will have to clarify the issues around these concepts, both regarding physical systems and the conceptual aspects, as well as the aspect of application. We will have to take a closer look at the elements of the network, nodes and links, as well as at the network as an entirety. There is the question of the telos of the network. What is it that networks as a whole introduce? Is it possible to ask about their particular quality, beyond the trivial fact that things are connected?

Thus, we will first deal with networks, their elements, and the properties of both in a basic manner.

1. Basics of Networks

When dealing with networks, there is immediately a strong reference to topology, that is, the way in which the items belonging to the network are linked together. More precisely, what actually matters concerning the topology of networks are the symmetry properties of the connectedness. It does not really come as a surprise that the issue of symmetry relates networks to crystals, (mathematical) groups, and knots. Yet topology and its symmetry is not the only important dimension.

1.1. Topology

So, before getting precise, let us start with a simple example of a network. What we see here are 3 nodes linked by 3 edges. The nodes represent items, while the edges represent certain relations between them.

o —— o
 \   /
   o

Actually, this example is almost too simple. Despite the fact that it contains all the basic elements, of which there are notably only 2, the node and the relation, many would not regard it as a network. What seems to be missing is a certain multiplicity of possible paths. Such a multiplicity would be introduced by at least one “crossing”, that is, we need at least one node that maintains three relations. In turn this means that we need at least 4 nodes to build an arrangement that could be called a network.

o —— o
 \   / \
  o     o

On the other hand, we would also consider arrangements like the following as networks, though there is no multiplicity. Each is a perfectly hierarchical structure, albeit there are several possible roots for it.

o —— o       o
      \     /
       o —— o

o —— o —— o —— o

Obviously, we may distinguish networks by means of their redundancy. In physical systems, if we are going to connect points from a large set within a given “area” among each other, we usually try to avoid redundancy, since redundancy means increased costs for building and maintaining the network. Just think of a street network, the power grid, the water supply grid or the telephone network; in each case the degree of redundancy is quite low.

Yet things are not that simple, of course. Some degree of redundancy can be quite beneficial if edges or nodes can fail. In the case of the internet, for example, redundancy was much higher at its beginnings. Actually, redundancy was a design goal for the ArpaNet (next figure), for it was supposed to survive a nuclear attack on the U.S.
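
One simple way to make redundancy measurable, assuming we read the diagrams above as undirected graphs, is the cyclomatic number from graph theory: edges minus nodes plus connected components, i.e. the number of independent cycles. A tree has redundancy 0, and every extra edge adds an alternative path.

```java
// Redundancy of an undirected network quantified as its cyclomatic number:
// edges - nodes + connected components (the number of independent cycles).
public class Redundancy {
    // Union-find with path compression, used to count connected components.
    static int find(int[] parent, int x) {
        return parent[x] == x ? x : (parent[x] = find(parent, parent[x]));
    }

    static int cyclomatic(int nodes, int[][] edges) {
        int[] parent = new int[nodes];
        for (int i = 0; i < nodes; i++) parent[i] = i;
        int components = nodes;
        for (int[] e : edges) {
            int a = find(parent, e[0]), b = find(parent, e[1]);
            if (a != b) { parent[a] = b; components--; }
        }
        return edges.length - nodes + components;
    }

    public static void main(String[] args) {
        // The triangle above: 3 nodes, 3 edges -> one redundant path.
        System.out.println(cyclomatic(3, new int[][]{{0,1},{1,2},{2,0}})); // 1
        // A chain (a tree): 4 nodes, 3 edges -> no redundancy at all.
        System.out.println(cyclomatic(4, new int[][]{{0,1},{1,2},{2,3}})); // 0
    }
}
```

On this measure, the low-redundancy street or supply grids mentioned above sit close to 0, while the early ArpaNet deliberately sat well above it.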

Figure 1: The logical layout of the ArpaNet in 1977.


(figure: a scale-free network, in the sense of Barabási)

1.2. Symmetry


1.3. Differentiation

Besides redundancy, there is a further important parameter. Taking the case of a street network as an example, with the streets between crossings interpreted as edges or relations, we immediately see that beside the redundancy the transfer capacity of the edges also matters.


Associative networks should be clearly distinguished from logistic networks, whose purpose is to organize physical transfer of some kind. Associative networks re-arrange, sort, classify and learn.

logistics and growth

Yet we are only at the beginning of understanding what networks “are.” Since there are a lot of prejudices around, we will first give some examples. The second major section discusses the main concepts and adds a few fresh ones. The third section discusses the consequences of changing a network into a probabilistic one.
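
To give that probabilistic turn a first concrete shape, here is a minimal sketch under one possible reading (mine, not a definitive model): each edge carries a weight in [0,1], and any concrete “realization” of the network includes an edge only with that probability, so repeated realizations yield varying topologies.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// A probabilistic network as a set of edges with inclusion probabilities.
// Each call to realize() draws one concrete, deterministic network from it.
public class ProbabilisticNet {
    static class Edge {
        final int from, to;
        final double p; // probability that this edge exists in a realization
        Edge(int from, int to, double p) { this.from = from; this.to = to; this.p = p; }
    }

    // Draw one concrete network: keep each edge independently with probability p.
    static List<Edge> realize(List<Edge> edges, Random rnd) {
        List<Edge> drawn = new ArrayList<>();
        for (Edge e : edges)
            if (rnd.nextDouble() < e.p) drawn.add(e);
        return drawn;
    }

    public static void main(String[] args) {
        List<Edge> net = List.of(
            new Edge(0, 1, 1.0),   // always present
            new Edge(1, 2, 0.5),   // present in about half of all realizations
            new Edge(2, 0, 0.0));  // never present
        System.out.println(realize(net, new Random()).size()); // 1 or 2
    }
}
```

A deterministic network is then just the special case where every p is 0 or 1, which is one way to read the claim that non-probabilistic networks are a degenerate case rather than the general one.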

mapping of items (objects) to nodes and relations to edges

(under construction)

You are currently viewing the archives for November, 2011 at The "Putnam Program".