Growth Patterns

November 29, 2012

Growing beings and growing things, whether material or immaterial, accumulate mass or increase their spread. Plants grow, black holes grow, software programs grow, economies grow, cities grow, patterns grow, a pile of sand grows, a text grows, the mind grows, and even things like self-confidence and love are said to grow. On the other hand, we do not expect things like cars or buildings to “grow.”

Although this initial “definition” might sound fairly trivial, the examples demonstrate that growth itself, or more precisely the respective language game, is by no means a trivial thing. Nevertheless, when people start to talk about growth, or when they invoke the concept of growth implicitly, they mostly imagine a smooth and almost geometrical process, a dilation, a more or less even stretching. Urbanists and architects are no exception to this undifferentiated and prosy perspective. Additionally, growth is usually not considered seriously beyond its mere wording, probably due to a hasty prejudgment about the value of biological principles. Yet if one can’t talk appropriately about growth, which includes differentiation, one also can’t talk about change. As a result of a widely (and wildly) applied simplistic image of growth, there is a huge conceptual gap in many, if not almost all, works about urban conditions, in urban planning, and about architecture.1 But why talk about change at all, if architecture and urbanism are anyway all about planning…

The imprinting by geometry often entails another prejudice: that of globality. Principles, rules, and structures are thought to apply necessarily to the whole, whatever this “wholeness” is about. This is particularly problematic if these rules refer more or less directly to mere empirical issues. Thus it frequently goes unnoticed that maintaining a particular form, or keeping position in a desired region of the parameter space of a forming process, requires quite intense, interconnected local processes, both for building and for destroying structures.

It was one of the failures in the idea of Japanese Metabolism not to recognize the necessity of a deep integration of this locality. Although the Metabolists intended to (re-)introduce the concept of the “life cycle” into architecture and urbanism, they remained aligned with cybernetics. Thus, Metabolism failed mainly for two reasons. Firstly, the Metabolists attempted to combine incommensurable mindsets. It is impossible to amalgamate modernism and the idea of bottom-up processes like self-organization or associativity, and the Metabolists always followed the modernist route. Secondly, the movement lacked a proper structural setup: the binding problem remained unresolved. They didn’t develop a structural theory of differentiation that would have been suitable for deriving appropriate mechanisms.

This Essay

In this piece we would like to show some possibilities for enlarging the conceptual space and the vocabulary that we could use to describe (the) “growing” (of) things. We will make special reference to architecture and urbanism, although the basics would apply to other fields as well, e.g. to the growth and differentiation of organizations (as “management”) or social forms, but also of more or even “completely” immaterial entities. In some way, this power is even mandatory if we are going to address the Urban6, for the Urban definitely exceeds the realm of the empirical.
We won’t do much philosophical reflection and embedding, although it should be clear that these descriptions don’t make sense without proper structural, i.e. theoretical, references, as we have argued in the previous piece. “As such” they would just be a kind of pictorial commentary, mistaking metaphor for allegory. There are two different kinds of important structural references. One points to the mechanisms2, the abstract machinery with its instantiation on the micro-level, or with respect to the generative processes. The other points to the theoretico-structural embedment, which we have discussed in the previous essay. Here, it is mainly the concept of generic differentiation that provides the required embedding and the power to overcome the binding problem in theoretical work.

The remainder of this essay comprises the following sections:

1. Space

Growth concerns space, both physical and abstract space. Growth even concerns the quality of space. The fact of growth is incompatible with the conception of space as a container. This becomes obvious in the case of fractals, which got their name from their “broken” dimensionality. A fractal could be 2.846-dimensional. Or 1.2034101-dimensional. The space established by the “inside” of a fractal is very different from 3-dimensional space. Astonishingly, the dimensionality need not even be constant while traveling through a fractal.
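For readers who want the “broken” dimensionality in concrete terms: for exactly self-similar fractals, the similarity dimension follows directly from counting scaled copies. A minimal sketch (the function name is our own, not a standard API):

```python
import math

def similarity_dimension(copies: int, scale: float) -> float:
    """Similarity dimension D = log(N) / log(s) for a set that is
    covered by N copies of itself, each shrunk by the factor s."""
    return math.log(copies) / math.log(scale)

# Koch curve: 4 copies at 1/3 scale -> D ~ 1.26, between line and plane
print(similarity_dimension(4, 3))
# Sierpinski triangle: 3 copies at 1/2 scale -> D ~ 1.585
print(similarity_dimension(3, 2))
```

The Koch curve thus occupies a space “between” the line and the plane, exactly the kind of in-betweenness meant above.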

Abstract spaces, on the other hand, can be established by any set of criteria, simply by interpreting criteria as dimensions. Thus one gets a space for representing and describing items, their relations and their transformations. In mathematics, a space is essentially defined as the possibility to perform a mapping from one set to another, or in other terms, by the abstract (group-theoretic) symmetry properties of the underlying operations on the relations between any entities.

Strangely enough, in mathematics spaces are almost exclusively conceived as consisting of independent dimensions. Remember that “independence” is at the core of the modernist metaphysical belief set! Yet spaces need be neither Euclidean nor Cartesian, the latter being the generalization of the former. The independence of descriptive dimensions can be dropped, as we have argued in an earlier essay. The resulting space is not a dimensional space, but rather an aspectional space, which can be conceived as a generalization of dimensional space.

In order to understand growth we should keep in contact with a concept of space that is as general as possible. It would be really stupid, for instance, to situate growth restrictively in a flat 2-dimensional Euclidean space. At least since Descartes’ seminal work “Regulae” (AT X 421-424) it should be clear that any aspect may be taken as a contribution to the cognitive space [8].

The Regulae in its method had even allowed wide latitude to the cognitive use of fictions for imagining artificial dimensions along which things could be grasped in the process of problem solving. Natures in the Meditations, however, are no longer aspects or axes along which things can be compared, evaluated, and arrayed, but natures in the sense that Rule 5 had dismissed: natures as the essences of existing things.

At the same time Descartes also makes clear that these aspects should not be taken as essences of existing things. In other words, Descartes was ahead of 20th-century realism and existentialism! Aspects do not represent things in their modes of existence; they represent our mode of talking about the relations we establish to those things. Yet these relations are more like the strings of string theory: without fixed endings on either side. All we can say about the outer world is that there is something. Of course, that is far too little to posit it as a primacy for human affairs.

The consequence of such a dimensional limitation would be a blind spot (if not a population of them), a gap in the potential to perceive, to recognize, to conceive of, and to understand. Unfortunately, the gaps themselves, the blind spots, are not visible to those who suffer from them. Nevertheless, any further conceptualization would remain in a state of educated nonsense.

Growth is established as a transformation of (abstract) space. Vice versa, we can conceive of it also as the expression of the transformation of space. The core of this transformation is the modulation of the signal intensity length through the generation of compartments, rendering abstract space into a historical, individual space. Vice versa, each transformation of space under whatsoever perspective can be interpreted as some kind of growth.

The question is no longer to be or not to be, as ontologists have tried to prove since the first claim of substance and the primacy of logic and identity. What is more, Shakespeare already demonstrated the penultimate consequences of that question. Hamlet, in his mixture of being a realist existentialist (by that very question) and his liking for myths and (the use of) hidden wizards, guided by the famous misplaced question, went straight into his personal disaster, not without causing a global one. Shakespeare’s masterfully wrapped lesson is that the question about Being leads straight to disaster. (One might add that this holds also for ontology and existentialism: it is a consequence of ethical corruption.)

Substance has to be thought of as being always and already a posteriori to change, to growth. Setting change as a primacy means to base thought philosophically on difference. While this is an almost completely unexplored area, despite Deleuze’s proposal of the plane of immanence, it is also clear that starting with identity instead causes lots of serious trouble. For instance, we would be forced to acknowledge the claim that a particular interpretation could indeed be universalized. The outcome? A chimaera of Hamlet (the figure in the tragedy!) and Stalin.

Instead, the question is one of growth and the modulation of space: Who could reach whom? It is only through this question that we can integrate the transcendence of difference, its primacy, and secure the manifold of the human in an uncircumventable manner. Life in all of its forms, with all its immanence, always precedes logic.3 Not only for biological assemblages, but also for human beings and all their products, including “cities” and other forms of settlements.

Just to be clear: the question of reaching someone else does not depend on anything given. The given is a myth, as philosophers from Wittgenstein through Quine to Putnam and McDowell have been proving. Instead, the question about the possibility of reaching someone else, of establishing a relation between any two (at least) items, is one of activity, design, and invention, targeting the transformation of space. This holds even in particle physics.

2. Modes of Talking

Traditionally speaking, the result of growth is formed matter. More exactly, however, it is transformed space. We may distinguish a particular form, morphos, or with regard to psychology also a “Gestalt,” from form as an abstractum. The result of growth is form. Thus form does not only concern matter; it always concerns the potential relationality.

For instance, growing entities never interact “directly”. They, and that includes us, always interact through their spaces and the mediality that is possible within them.4 Otherwise it would be completely impossible for a human individual to interact with a city. Before any semiotic interpretive relation, it is the individual space that enables incommensurable entities to relate.

If we consider the growth of a plant, for instance, we find a particular morphology. There are different kinds of tissues and also a rather typical habitus, i.e. a general appearance. The underlying processes are of biological nature, spanning from physics and bio-chemistry to information and the “biological integration” of those.

Talking about the growth of a building or the growth of a city, we have to spot the appropriate level of abstraction. There is no 1:1 transferability. In a cell we find neither craftsmen nor top-down implementations of plans. Conversely, raising a building apparently does not know anything about probabilistic mechanisms. Just by intentionally calling something “metabolism” (Kurokawa) or “fractal” (Jencks), thereby invoking associations of organisms and their power to maintain themselves in physically highly unlikely conditions, we certainly do not approach, let alone acquire, any understanding.

The key to any growth model is the identification of mechanisms (cf. [4]). Biology is the science that draws most on the concept of mechanism (so far), while physics draws on it the least. The level of mechanism is already an abstraction, of course. It needs to be completed, however, by the concept of population, i.e. a dedicated probabilistic perspective, in order to prevent falling back into the realm of trivial machines. In a cross-disciplinary setting we have to generalize the mechanisms into principles, such that these provide a shared differential entity.5

Well, we already said that a building is rarely raised by a probabilistic process. Yet this is only true if we restrict our considerations to the likewise abstract description of the activities of the craftsmen. Besides, the building process starts long before any physical matter is touched.

Secondly, from the perspective of abstraction we should never forget (and many people indeed do forget this) that the space of expressibility and the space of transformation also contain the nil-operator. In the realm of numbers we call it the zero. Note that without the zero many things could not be expressed at all. Similarly, the negative is required for completing the catalog of operations. Both the nil-operator and the inverse element are basic constituents of any mathematical group structure, which is the most general way to think about the conditions for operations in space.

The same is true for our endeavor here. It would be impossible to construct the possibility of graded expressions, i.e. the possibility of a more or less smooth scale, without the nil and the negative. Ultimately, it is the zero and the nil-operation, together with the inverse, that allow us to talk reflexively at all, to create abstraction, in short to think things through.
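To make the group-theoretic remark tangible: the axioms (closure, associativity, an identity playing the role of the nil-operator, and inverses playing the role of the negative) can be checked by brute force for small finite sets. A sketch under our own naming, not a standard API:

```python
def is_group(elements, op) -> bool:
    """Brute-force check of the group axioms: closure, associativity,
    a unique identity (the 'nil-operator') and inverses (the 'negative')."""
    es = list(elements)
    # closure: the operation must stay inside the set
    if any(op(a, b) not in es for a in es for b in es):
        return False
    # associativity
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in es for b in es for c in es):
        return False
    # a unique identity element
    ids = [e for e in es if all(op(e, a) == a == op(a, e) for a in es)]
    if len(ids) != 1:
        return False
    e = ids[0]
    # every element has an inverse
    return all(any(op(a, b) == e for b in es) for a in es)

# integers mod 4 under addition: identity 0, inverse of a is (4 - a) % 4
print(is_group(range(4), lambda a, b: (a + b) % 4))   # True
# {1, 2, 3} under multiplication mod 4 fails (2 * 2 = 0 leaves the set)
print(is_group([1, 2, 3], lambda a, b: (a * b) % 4))  # False
```

Remove the zero or the inverses and the structure degrades, which is precisely why graded, reversible expression depends on them.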

3. Modes of Growth

Let us start with some instances of growth from “nature”. We may distinguish crystals, plants, animals and swarms. In order to compare even those trivial and quite obviously very different “natural” instances with respect to growth, we need a common denominator. Without that we could not accomplish any kind of reasonable comparison.

Well, initially we said that growth could be considered as accumulation of mass or as an increase of spread. After taking one step back we could say that something gets attached. Since crystals, plants and animals are equipped with different capabilities, and hence mechanisms, to attach further matter, we choose the way of organizing the attachment as the required common denominator.

Given that, we can now change the perspective onto our instances. The performance of comparing implies an abstraction, hence we will not talk about crystals etc. as phenomena, as this would inherit the blindness of phenomenology against its conditions. Instead, we conceive of them as models of growth, inspired by observations that can be classified along the mode of attachment.

Morphogenesis, the creation of new instances of formed matter, or even the creation of new forms, is tightly linked to complexity. Turing titled his famous article “The Chemical Basis of Morphogenesis”. This, however, is not exactly what he invented, for we have to distinguish between patterns and forms, or likewise, between order and organization. Turing described the formal conditions for the emergence of order from a noisy flow of entropy. Organization, in contrast, also needs the creation of remnants, partial decay, and it is organization that brings in historicity. Nevertheless, the mechanisms of complexity, of which Turing patterns and mechanisms are a part, are indispensable ingredients for the “higher” forms of growth, at least for anything besides crystals (and probably even for them, in some limited sense). Note that morphogenesis, in neither of its aspects, should be conceived as something “cybernetical”!
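For the technically inclined, Turing’s result can be compressed into four inequalities on the linearized two-morphogen system: a state that is stable without diffusion becomes unstable once the two substances diffuse at sufficiently different rates. A sketch of these standard textbook conditions (the parameter names are our own):

```python
def turing_unstable(fu, fv, gu, gv, Du, Dv) -> bool:
    """Linear Turing conditions for the two-morphogen system
    u_t = f(u, v) + Du * u_xx,  v_t = g(u, v) + Dv * v_xx,
    where fu, fv, gu, gv are the Jacobian entries of (f, g)
    at the homogeneous steady state."""
    det = fu * gv - fv * gu
    # (1) + (2): the well-mixed (diffusion-free) state is stable
    stable_without_diffusion = (fu + gv < 0) and (det > 0)
    # (3) + (4): unequal diffusion makes some wavelength start to grow
    h = Dv * fu + Du * gv
    diffusion_driven = (h > 0) and (h * h > 4 * Du * Dv * det)
    return stable_without_diffusion and diffusion_driven

# slowly diffusing activator + fast inhibitor: a pattern emerges
print(turing_unstable(1.0, -2.0, 2.0, -3.0, Du=1.0, Dv=10.0))  # True
# equal diffusion rates: the uniform state stays uniform
print(turing_unstable(1.0, -2.0, 2.0, -3.0, Du=1.0, Dv=1.0))   # False
```

Note that this only yields order, a standing wave of concentration; organization, in the sense used above, is not captured by these inequalities.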

3.1. Crystals

Figure 1a: Crystals are geometric entities out of time.

Crystals are geometrical entities. In the 19th century, the study of crystals and the attempt to classify them inspired mathematicians in their development of the concepts of symmetry and group theory. Crystals are also entities that are “structurally flat”. There are no levels of integration; their macroscopic appearance is a true image of their constitution on the microscopic level. A crystal looks exactly the same from the level of atoms up to the scale of centimeters. Finally, crystals are outside of time, for their growth depends only on the one or two layers of atoms (“elementary cells”) that were attached before at the respective site.

There are two important conditions for growing a 3-dimensional crystal. The site of precipitation and attachment needs to be (1) immersed in a non-depletable solution in which (2) particles can move through diffusion in three dimensions. If these conditions are not met, mineral depositions look very different. With respect to the global embedding conditions, the rules have changed. More abstractly, the symmetry of the solution is broken, and so the result of the process is a fractal.
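The effect of such constrained, diffusion-fed deposition can be illustrated with the classic diffusion-limited aggregation (DLA) toy model: random walkers diffuse toward a deposit and freeze on first contact, which yields branched, dendritic clusters rather than compact crystals. A minimal sketch with illustrative parameters:

```python
import random

def dla_cluster(n_particles: int, radius: int = 20, seed: int = 0) -> set:
    """Diffusion-limited aggregation on a 2-D grid: each walker is
    launched far from the deposit, performs a random walk, and
    freezes in place as soon as it tries to step onto the deposit."""
    rng = random.Random(seed)
    stuck = {(0, 0)}  # the seed site of the deposit

    def launch():
        # a random unoccupied point on a square ring around the cluster
        while True:
            a, b = rng.choice([-radius, radius]), rng.randint(-radius, radius)
            p = (a, b) if rng.random() < 0.5 else (b, a)
            if p not in stuck:
                return p

    for _ in range(n_particles):
        x, y = launch()
        while True:
            nx = x + rng.choice([-1, 0, 1])
            ny = y + rng.choice([-1, 0, 1])
            if (nx, ny) in stuck:       # touched the deposit: freeze here
                stuck.add((x, y))
                break
            x, y = nx, ny
            if abs(x) > 2 * radius or abs(y) > 2 * radius:
                x, y = launch()         # wandered too far: re-inject
    return stuck

cluster = dla_cluster(50)
print(len(cluster))  # 51: the seed plus one frozen site per walker
```

Because arriving walkers are caught by the tips before they can diffuse into the fjords, growth concentrates at the periphery, which is exactly the broken-symmetry regime described above.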

Figure 1b. Growth in the realm of minerals under spatial constraints, particularly the reduction of dimensionality. The image does NOT show petrified plants! It is precipitated mineral from a solution that seeped into a nearly 2-dimensional gap between two layers of (lime) rock. The similarity of shapes points to a similarity of mechanisms.

Both examples are about mineralic growth. We can understand now that the variety of resulting shapes is highly dependent on the dimensional conditions embedding the growth process.

Figure 1c. Crystalline buildings. Note that it is precisely and only this type of building that actualizes a “perfect harmony” between the metaphysics of the architect and the design of social conditions. The belief in independence and the primacy of identity has been quite effectively delivered into the habit of everyday housing conditions.

Figure 1d. Crystalline urban layout, instantiated as “parametrism”. The “curvy” shape should not be misinterpreted as “organic”. In this case it is just a little dose of artificial “erosion” imposed as a parametric add-on to the crystalline base. We again meet the theme of the geological. Nothing could be more telling than the claim of a “new global style”: Schumacher is an arch-modernist, a living fossil, mistaking design as religion, who benefits from advanced software technology. Who is Schumacher that he could decree a style globally?

The growth of crystals is a very particular transformation of space. It is the annihilation of any active part of it. The relationality of crystals is completely exhausted by resistance and the spread of said annihilation.

Regarding the Urban6, parametrism must be considered deeply malignant. As the label says, it takes place within a predefined space. Yet who the hell do Schumacher (and Hadid, the mathematician) think they are, that they should be allowed, or even considered able, to define the space of the Urban? For the Urban is a growing “thing”; it creates its own space. Consequently, while all the rest of the world admits not to “understand” the Urban, Hadid and her barking Schumacher even claim to be able to define that space, and thus also claim that this space shall be defined. Not surprisingly, Schumacher is addicted to the mayor of all bureaucrats of theory, Niklas Luhmann (see our discussion here), as he proudly announces in his book “The Autopoiesis of Architecture”, which is full of pseudo- and anti-theory.

The example of the crystal clearly shows that we have to consider the solution and the deposit together as a conditioned system. The forces that rule their formation are a compound setup. The (electro-chemical) properties of the elementary cell on the microscopic level, precisely where it is in contact with the solution, together with the global, macroscopic conditions of the immersing solution, determine the instantiation of the basic mechanism. Regardless of the global conditions, the basic mechanism for the growth of crystals is the attachment of matter from the outside.

In crystals, we do not find a separate structural process layer that would be used for the regulation of growth. The deep properties of matter determine their growth. Moreover, only the outer surface is involved.

3.2. Plants

With plants, we find a class of organisms that grow, just as crystals do, almost exclusively at their “surface”. With only a few exceptions, matter is attached almost exclusively at the “outside” of their shape. Yet matter is also attached from their inside, at precisely defined locations, the meristems. Moreover, there is a dedicated mechanism to regulate growth, based on the diffusion of certain chemical compounds, the phytohormones, e.g. auxin. This regulation emancipates the plant in its growth from the properties of the matter it is built from.

Figure 2a. Growth in plants. The growth cone is called the apical meristem. There are just a handful of largely undifferentiated cells that keep dividing almost indefinitely. The shape of the plant is largely determined by a reaction-diffusion system in the meristem, based on phytohormones that determine the fate of the cells. Higher plants can build secondary meristems at particular locations, leading to a characteristic branching pattern.

 

Figure 2b. A pinnately compound leaf of a fern, showing its historical genesis as attachment at the outside (the tip of the meristem) from the inside. If you apply this principle to roots, you get a rhizome.

Figure 2c. The basic principle of plant growth can be mapped into L-grammars, in order to create simulations of plant-like shapes. This makes clear that fractals do not belong to geometry! Note that any form creation that is based on formal grammars is subject to the representational fallacy.
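The parallel-rewriting core of such an L-grammar fits in a few lines. The bracketed rule below is a standard textbook example for branching shapes (F = grow, +/- = turn, [ ] = push/pop a branch point), not a model of any particular species:

```python
def l_system(axiom: str, rules: dict, depth: int) -> str:
    """Expand an L-grammar: rewrite every symbol in parallel,
    depth times; symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# one growth step of a classic branching rule
rules = {"F": "F[+F]F[-F]F"}
print(l_system("F", rules, 1))       # F[+F]F[-F]F
print(len(l_system("F", rules, 3)))  # the string grows geometrically
```

Interpreting the resulting string with a turtle-graphics renderer produces the familiar plant-like figures; the grammar itself only encodes the mechanism of repeated attachment at the tips.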

Instead of using L-grammars as a formal reference we could also mention self-affine mapping. Actually, self-affine mapping is the formal operation that leads to perfect self-similarity and scale invariance. Self-affine mapping projects a smaller version of the original, often primitive, graph onto itself. But let us inspect two examples.

Figure 2d.1. Scheme showing the self-affine mapping that would create a graph that looks like a leaf of a fern (image from wiki).

Figure 2d.2. Self-affine fractal (a hexagasket) and its  neighboring graph, which encodes its creation [9].
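The fern scheme of Figure 2d.1 corresponds to Barnsley’s well-known iterated function system: four affine maps applied at random (the “chaos game”) reproduce the leaf. A sketch using the standard published coefficients:

```python
import random

# Barnsley's fern: four affine maps; each row holds (a, b, c, d, e, f, p),
# applied as (x, y) -> (a*x + b*y + e, c*x + d*y + f) with probability p.
MAPS = [
    ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),  # stem
    ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),  # ever-smaller copies
    ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),  # largest left leaflet
    (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07),  # largest right leaflet
]

def chaos_game(n: int, seed: int = 0):
    """Sample n points of the attractor by repeatedly applying a
    randomly chosen affine map to the current point."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n):
        a, b, c, d, e, f, _p = rng.choices(MAPS,
                                           weights=[m[6] for m in MAPS])[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        pts.append((x, y))
    return pts

points = chaos_game(20_000)  # scatter-plot these to see the fern emerge
```

Each map projects a shrunken copy of the whole onto itself, which is exactly the self-affine operation described above.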

Back to real plants! Nowadays, most plants are able to build branches. Formally, they perform a self-affine mapping. Bio-chemically, the cells in their meristem(s) are able to respond differentially to the concentration of one (or two) plant hormones, in this case auxin. Note that for establishing a two-component system you won’t necessarily need two hormones! The counteracting “force” might just as well be realized by some process inside the cells of the meristem.

From this relation between the observable fractal form, e.g. the leaf of the fern or the outline of a city layout, and the formal representation, we can draw a rather important conclusion. The empirical analysis of a shape should never stop with the statement that the respective shape shows scale-invariance, self-similarity or the like. Literally nothing is gained by that! It is just a promising starting point. What one has to do subsequently is to identify the mechanisms leading to the homomorphy between the formal representation and the particular observation. If you like: the chemical traces of pedestrians, the tendency to imitate, or whatever else. Even more importantly, in each particular case these actual mechanisms could be different, though leading to the same visual shape!

In earlier paleobiotic ages, most plants were not able to build branches. Think of tree ferns, or the following living fossil.

Figure 2d. A primitive plant that can’t build secondary meristems (Welwitschia). Unlike in higher plants, where the meristem is transported by the growth process to the outer regions of the plant (its virtual borders), here it remains fixed; hence the leaf grows only from the center.

Figure 2e. The floor plan of the Guggenheim Bilbao is strongly reminiscent of the morphology of Welwitschia. Note that this “reminiscence” represents a naive transfer on the representational level. Quite in contrast, we have to say that the similarity in shape points to a similarity regarding the generating mechanisms. Jencks, for instance, describes the emanations as petals, but without further explanation, just as metaphor. Gehry himself explained the building by referring to the mythology of the “world-snake”, hence the importance of the singularity of the “origin”. Yet the mythology does not allow us to say anything about the growth pattern.

Figure 2f. Another primitive plant that can’t build secondary apical meristems: the common horsetail (Equisetum arvense). Yet in this case the apical meristem is transported.

Figure 2g. Patrick Schumacher, Hadid Office, for the master plan of the Istanbul project. Primitive concepts lead to primitive forms and primitive habits.

Many, if not all of the characteristics of growth patterns in plants are due to the fact that they are sessile life forms. Most buildings are also “sessile”. In some way, however, we consider them more as geological formations than as plants. It seems to be “natural” that buildings start to look like those in fig.2g above.

Yet, in such a reasoning there are even two fallacies. First, regarding design there is neither some kind of “naturalness”, nor any kind of necessity. Second, buildings are not necessarily sessile. All depends on the level of the argument. If we talk just about matter, then, yes, we can agree that most buildings do not move, like crystals or plants. Buildings could not be appropriately described, however, just on the physical level of their matter. It is therefore very important to understand that we have to argue on the level of structural principles. Later we will provide an impressive example of an “animal” or “animate” building.7 

As we said, plants are sessile through and through, not only regarding their habitus. In plants, there are no moving cells in the inside. Thus plants have difficulties regenerating without dropping large parts. They can’t replace matter “somewhere in between”, as animals can. The cells in the leaves, for instance, mature much as cells do in animals, albeit for different reasons; in plants, it is mainly the accumulation of calcium. Thus, even in tropical climates, trees drop their leaves at least once a year, some species all of them at once.

The conclusion for architecture as well as for urbanism is clear. It is just not sufficient to claim “metabolism” (see below) as a model. Nor is it appropriate to take “metabolism” as a model, not even if we were to avoid the representational fallacy to which the “Metabolists” fell prey. Instead, the design of the structure of growth should orient itself toward the way animals are organized, at the level of macroscopic structures like organs, if we disregard swarms for the moment, as most of them are not able to maintain persistent form.

This, however, immediately brings the problematics of territorialization to the fore. What we would need for our cities is thus a generalization towards the body without organs (Deleuze), which orients towards capabilities, particularly the capability to choose the mode of growth. Yet the condition for this choosing is knowledge about the possibilities. So let us proceed to the next class of growth modes.

3.3. Swarms

In plants, the growth mechanisms are implemented in a rather deterministic manner. The randomness in their shape is restricted to the induction of branches. In swarms, we find a more relaxed regulation, as there is only little persistent organization. There is just transient order. In some way, many swarms are probabilistic crystals, that is, rather primitive entities. Figures 3a through 3d provide some examples of swarms.

From the investigation of swarms of birds and fishes it is known that each “individual” just attends to the movement vectors of its neighbors. There is no deep structure, precisely because there is no persistent organization.

Figure 3a. A flock of birds. Birds take the movement of several neighbors into account, sometimes without much consideration of their distance.

Figure 3b. A swarm of fish, a “school”. It has been demonstrated that some fish consider not only the position or the direction of their neighbors, but also the form of the average vector. A strong straight vector seems to be more “convincing” for the neighbors as a basis for their “decision” than one of unstable direction and magnitude.

Figure 3c. The Kaaba in Mecca. Each year several persons die due to panic waves. Swarm physics has helped to improve the situation.

Figure 3d. Self-ordering in a pedestrian population at Shibuya, Tokyo. In order not to crash into each other, humans employ two strategies: either just to follow the person ahead, or to consider the second derivative of the vector if the first strategy is not applicable. Yet it requires a certain “culture”, an unspoken agreement, to do so (see this for what happens otherwise).
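The neighbor-following rules sketched in the captions above are commonly modeled as “boids”-style updates. A minimal alignment-only sketch (the function names and the blending weight w are illustrative assumptions, not a specific published model):

```python
import math

def mean_heading(headings):
    """Circular mean of headings (radians): sum the unit vectors,
    then take the angle of the resultant."""
    return math.atan2(sum(math.sin(h) for h in headings),
                      sum(math.cos(h) for h in headings))

def align_step(headings, neighbours_of, w=0.5):
    """One synchronous update: each agent turns the fraction w of
    the way toward the mean heading of its neighbours."""
    new = []
    for i, h in enumerate(headings):
        target = mean_heading([headings[j] for j in neighbours_of(i)])
        # shortest signed angular difference, wrapped into [-pi, pi)
        diff = (target - h + math.pi) % (2 * math.pi) - math.pi
        new.append(h + w * diff)
    return new

# three agents that all see each other: headings contract to agreement
hs = [0.0, 0.2, 0.4]
for _ in range(5):
    hs = align_step(hs, lambda i: [j for j in range(3) if j != i])
print(max(hs) - min(hs))  # spread shrinks toward 0
```

Note that the global order of the flock appears without any global rule: each agent only ever reads its neighbours.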

A particularly interesting example of highly developed swarms that are able to establish persistent organization is provided by Dictyostelium (Fig 4a), commonly called a slime mold. In biological taxonomy, slime molds form a group called Mycetozoa, which indicates their strangeness: partly they behave like fungi, partly like primitive animals. Yet they are neither prototypical fungi nor prototypical animals. In both cases the macroscopic appearance is a consequence of the (largely) chemically organized collaborative behavior of a swarm of amoeboids. Under good environmental conditions slime molds split up into single cells, each feeding on its own (mostly on bacteria). Under stressful conditions, they build astonishing macroscopic structures, which are only partially reversible, as parts of the population might be “sacrificed” to meet the purpose of non-local distribution.

Figure 4a. Dictyostelium, “fluid” mode; the microscopic individuals are moving freely, creating a pattern that optimizes logistics. Individuals can smoothly switch roles from moving to feeding. It should be clear that the “arrangement” you see is not a leaf, nor a single organism! It is a population of coordinating individuals. Yet, the millions of organisms in this population can switch “phase”… (continue with 4b…)

Figure 4b. Dictyostelium, in “organized” mode, i.e. the “same” population of individuals now behaving “as if” it would be an organism, even with different organs. Here, individuals organize a macroscopic form, as if they were a single organism. There is irreversible division of labor. Such, the example of Dictyostelium shows that the border between swarms and plants or animals can be blurry.

The concept of swarms has also been applied to crowds of humans, e.g. in urban environments [11]. Here, we can observe an amazing re-orientation. Finally, after 10 years or so of research on swarms and crowds, naïve modernist prejudices are being corrected. Independence and reductionist physicism have been dropped; instead, researchers are becoming increasingly aware of relations and behavior [14].

Trouble is, the simulations treat people as independent particles—ignoring our love of sticking in groups and blabbing with friends. Small groups of pedestrians change everything, says Mehdi Moussaid, the study’s leader and a behavioral scientist at the University of Toulouse in France. “We have to rebuild our knowledge about crowds.”

Swarms solve a particular class of challenges: logistics. Whether in plants or slime-molds, it is the transport of something as an adaptive response that provides their framing “purpose”. This something could be the members of the swarm itself, as in fish, or something that is transported by the swarm, as it is the case in ants. Yet, the difference is not that large.

Figure 5: Simulation of foraging raid patterns in army ants Eciton (from [12]). The hive (they don’t have a nest) is at the bottom, while the food source is towards the top. The only difference between A and B is the number of food sources.
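
Trail patterns of this kind emerge from nothing but local pheromone reinforcement. A minimal sketch of the mechanism (not the model of [12]; all parameters here are illustrative assumptions) is the classic binary-bridge choice experiment, in which positive feedback alone breaks the symmetry between two equivalent paths:

```python
import random

def run_binary_bridge(n_ants=1000, k=5, h=2, seed=42):
    """Ants choose between two branches with pheromone reinforcement.

    Each ant picks branch A with probability
        p(A) = (k + A)^h / ((k + A)^h + (k + B)^h),
    where A and B count the previous ants on each branch
    (a Deneubourg-style choice function). The positive feedback
    amplifies early random fluctuations: one branch ends up
    carrying almost all of the traffic.
    """
    rng = random.Random(seed)
    a, b = 0, 0
    for _ in range(n_ants):
        p_a = (k + a) ** h / ((k + a) ** h + (k + b) ** h)
        if rng.random() < p_a:
            a += 1
        else:
            b += 1
    return a, b
```

Running the sketch shows the symmetry breaking directly: although both branches are identical, the final counts are strongly asymmetric, which is exactly the “decision” that shows up macroscopically as a raid trail.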

When compared to crystals, even simple swarms show important differences. Firstly, in contrast to crystals, swarms are immaterial. What we can observe at the global scale, macroscopically, is an image of rules that are independent of matter. Yet, in simple, “prototypical” swarms the implementation of those rules is still global, just like in crystals. Everywhere in the primitive swarm the same basic rules are active. We have seen that in Dictyostelium, much like in social insects, rules begin to be active in a more localized manner.

The separation of immaterial components from matter is very important. It is the birth of information. We may conceive information itself as a morphological element, as a condition for the probabilistic instantiation. It is not by chance that we assign the label “fluid” to large flocks of birds, say starlings in autumn. On the molecular level, water itself is organized as a swarm.

As a further possibility, the realm of immaterial rules also allows for a differentiation of the rules themselves. Since in crystals the rule is almost synonymous with the properties of the matter, there is no such differentiation for them. They are what they are, eternally. In contrast to that, in swarms we always find a setup that comprises attractive and repellent forces, which is the reason for their capability to build patterns. This capability is often called self-organization, albeit calling it self-ordering would be more exact.
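
This setup of opposing forces can be sketched in a few lines. The following toy model (all parameters are arbitrary assumptions, not an empirical swarm model) lets each agent repel neighbours that come too close and gravitate towards those farther away; these two rules alone pull a scattered population into a cohesive, evenly spaced pattern:

```python
import math
import random

def simulate_swarm(n=30, steps=200, r_rep=2.0, speed=0.3, seed=1):
    """Each agent is repelled by neighbours closer than r_rep and
    attracted to all others; it then moves a fixed small step along
    the normalized net force. No agent knows the global pattern."""
    rng = random.Random(seed)
    pos = [(rng.uniform(0, 50), rng.uniform(0, 50)) for _ in range(n)]
    for _ in range(steps):
        new_pos = []
        for i, (xi, yi) in enumerate(pos):
            fx = fy = 0.0
            for j, (xj, yj) in enumerate(pos):
                if i == j:
                    continue
                dx, dy = xj - xi, yj - yi
                d = math.hypot(dx, dy) or 1e-9
                sign = -1.0 if d < r_rep else 1.0  # repel close, attract far
                fx += sign * dx / d
                fy += sign * dy / d
            norm = math.hypot(fx, fy) or 1e-9
            new_pos.append((xi + speed * fx / norm, yi + speed * fy / norm))
        pos = new_pos
    return pos

def spread(pos):
    """Mean distance of the agents to their centroid."""
    cx = sum(p[0] for p in pos) / len(pos)
    cy = sum(p[1] for p in pos) / len(pos)
    return sum(math.hypot(p[0] - cx, p[1] - cy) for p in pos) / len(pos)
```

Starting from positions scattered over a 50×50 area, the population contracts into a tight cluster whose internal spacing is set by the repulsion radius: the global form is nowhere written down, it is an effect of the two local rules.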

There is a last interesting point about swarms. In order to boot a swarm as a swarm, that is, to effectuate the rules, a certain minimal density is required. From this perspective, we can also recognize a link between swarms and mediality. The appropriate concept to describe swarms is thus the wave of density (or of probability).

The concept of swarms is often used in agent-based models, not only in urban research. Unfortunately, however, only the most naïve approaches are taken, conceiving of agents as entities almost without any internal structure, i.e. also without memory. Paradoxically, researchers often invoke the myth of “intelligent swarms”, overlooking that intelligence is not something associated with swarms. In order to find appropriate solutions to a given challenge, we simply need an informational n-body system, in which we find emergent patterns as well as evolutionary principles. Such a system can be realized even in a completely immaterial manner, as a pattern of electrical discharges. Such a process we came to call a “brain”… Actually, swarms without an evolutionary embedding can be extremely malignant and detrimental, since in swarms the purpose is not predefined. Fiction authors (M. Crichton, F. Schätzing) recognized this long ago. Engineers seem to still have difficulties with that.
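
How much even a minimal internal structure changes the macroscopic outcome can be made tangible in a toy model. The sketch below (purely illustrative assumptions: a lattice random walk, one bit of memory per visited cell) compares how much territory a memoryless agent and an agent that merely remembers where it has been cover in the same number of steps:

```python
import random

def walk(steps=1000, use_memory=True, seed=7):
    """Random walk on a 2D lattice. With use_memory=True the agent
    remembers visited cells and prefers unvisited neighbours; without
    memory it picks a neighbour blindly. Returns the number of
    distinct cells covered."""
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x, y = 0, 0
    visited = {(0, 0)}
    for _ in range(steps):
        neighbours = [(x + dx, y + dy) for dx, dy in moves]
        if use_memory:
            fresh = [c for c in neighbours if c not in visited]
            x, y = rng.choice(fresh) if fresh else rng.choice(neighbours)
        else:
            x, y = rng.choice(neighbours)
        visited.add((x, y))
    return len(visited)
```

The agent with memory covers far more ground than the memoryless one, although the difference in the “model” is a single set lookup. A simulation that strips agents of all internal structure thus already decides, silently, what kind of patterns it can ever produce.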

Thus, we can also see that swarms actualize the most seriously penetrating form of growth.

3.4. Animals

So far, we have met three models of growth. In plants and swarms we find different variations of the basic crystalline mode of growth. In animals, the regulation of growth acquired even more degrees of freedom.

The major determinant of the differences between the forms of plants and animals is movement. This not only applies to the organism as a whole; we find it also on the cellular level. Plants do not have blood or an immune system, in which cells of a particular type move around. Once plant cells have settled, they are fixed.

The result of this mobility is a greatly diversified space of possibilities for instantiating compartmentalization. Across the compartments, which we find also in the temporal domain, we may even see different modes of growth. The liver of vertebrates, for instance, grows more like a plant. It is perhaps not surprising that the liver is the organ with the best ability to regenerate. We also find interacting populations of swarms in animals, even in the most primitive ones like sponges.

The important aspects of form in animals are in their interior. While for crystals there is no interiority, plants differ in their external organization, their habitus, with swarms somewhere in between. Animals, however, are different due to their internal organization on the level of macroscopic compartments, which includes their behavioral potential. Note that the cells of animals look quite similar; they are highly standardized, even between flies and humans.

Along with the importance of the dynamics and form of interior compartments, the development of animals in their embryological phase8 is strictly choreographed. Time is not an outer parameter any more. Much more than plants, swarms or even crystals, of course, animals are beings in and of time. They have history, as individual and as population, which is independent of matter. In animals, history is a matter of form and rules, of interior, self-generated conditions.

During the development of animal embryos we find some characteristic operations of form creation, based on the principle of mobility, in addition to the principles that we can describe for swarms, plants and crystals. These are

  • folding;
  • inflation;
  • melting; and finally
  • involution, gastrulation and blastulation.

The mathematics for describing these operations is not geometry any more. We need topology and category theory in order to grasp them, that is, the formalization of transformation.

Folding brings compartments together that have been produced separately. It breaks the limitations of signal horizons by initiating a further level of integration. Hence, the role of folding can be understood as a means to overcome or to instantiate dimensional constraints and/or modularity. While inflation is the mere accumulation of mass and the amorphous enlargement of a given compartment by attachment from the interior, melting may be conceived as negative attachment. Abstractly taken, it introduces the concept of negativity, which in turn allows for smooth gradation. Finally, involution, gastrulation and blastulation introduce floating compartments, hence swarm-like capabilities in the interior organization. This blurs the boundaries between structure and movement, introducing probabilism and reversibility into the development and the life form of the being.

Figure 6a. Development in embryos. On the left, a very early phase is shown, emphasizing the melting and inflating which leads to “segments”, called metamers. (Red arrows show sites of apoptosis, blue arrows indicate inflation, i.e. ordinary increase of volume.)

Figure 6b. Early development phase of a hand. The space between fingers is melted away in order to shape the fingers.

Figure 6c. Rem Koolhaas [16]. Inverting the treatment of the box, thereby finding (“inventing”?) the embryonic principle of melting tissue in order to generate form. Note that Koolhaas himself never referred to “embryonic principles” (so far). This example demonstrates clearly where we have to look for the principles of morphogenesis in architecture!

In image 6a above we can not only see the processes of melting and attaching, we can also observe another recipe of nature: repetition. In the case of the Bauplan of animal organisms the result is metamery.9 While in lower animals such as worms (Annelidae), metamers are easily observed, in higher animals, such as insects or vertebrates, metamers are often only (clearly) visible in the embryonic phase. Yet, in animals metamers are always created through a combination of movement or melting and compartmentalization in the interior of the body. They are not “added” in the sense of attaching them to the actual border, as is the case in plants or crystals. In mathematical terms, the operation in the animals’ embryonic phase is multiplication, not addition.

Figure 6d. A vertebrate embryo, showing the metameric organization of the spine (left), which then gets replicated by the somites (right). In animals, metamers are a consequence of melting processes, while in plants they are due to attachment.

The principles of melting (apoptosis), folding, inflating and repetition can of course be used to create artificial forms. The approach is called subdivision. Note that the forms shown below have nothing to do with geometry anymore. The frameworks needed to talk about them are, at least, topology and category theory. Additionally, they require an advanced non-Cartesian conception of space, as we have outlined above.

Figure 7. Forms created by subdivision (courtesy Michael Hansmeyer). The work is based on a family of procedures, called subdivision, that are directed towards the differentiation of the interior of a body. It can’t be described by geometry any more. Thus, it is a non-geometrical, procedural form, which expresses time, not matter and its properties. The series of subdivisions “breaks” the straightness of edges and can also be seen as a series of nested, yet uncompleted folds (see Deleuze’s work on the Fold and Leibniz). Here, in Hansmeyer’s work, each column is a compound of three “tagmata”, that is, sections that have been grown “physically” independently from each other, related just by a similar dynamics in the set of parameters.


Creating such figured forms is not fully automatic, though. There is some contingency, represented by the designer’s choices while establishing a particular history of subdivisions.
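
For readers who want to see the procedural character of subdivision, here is a deliberately simple one-dimensional relative of such schemes (not Hansmeyer’s actual volumetric procedure): Chaikin’s corner cutting, which repeatedly replaces each edge of a closed polygon by two points at 1/4 and 3/4 of its length.

```python
def chaikin(points, iterations=3):
    """Chaikin corner cutting on a closed polygon.

    Each step replaces every edge by two new points at 1/4 and 3/4
    of its length, so the point count doubles per iteration. The
    result "breaks" the straightness of the edges and converges to
    a smooth curve (a quadratic B-spline)."""
    for _ in range(iterations):
        refined = []
        n = len(points)
        for i in range(n):
            (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
            refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = refined
    return points
```

Note what this illustrates about subdivision in general: the form is not drawn, it is the trace of a history of applications of a rule, and choosing a different history (how often, where, with which ratios) yields a different individual from the same family.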

Animals employ a wide variety of modes in their growing. They can do so due to the highly developed capability of compartmentalization. They gain almost complete independence from matter10, regarding their development, their form, and particularly regarding their immaterial setup, which we can observe as learning and the use of rules. Learning, on the other hand, is intimately related to perception, in other words, configurable measurement, and data. Perception, as a principle, is in turn mandatory for the evolution of brains and the capability to handle information. Thus, equipping a building with sensors is not a small step. It could take the form of a jump into another universe, particularly if the sensors are conceived as being separate from the being of the house, for instance in order to facilitate or modify mental or social affairs of its inhabitants.

3.5. Urban Morphing

On the level of urban arrangements, we can also observe different forms of morphological differentiation.

Figure 8. Urban sprawl, London (from [1]). The layout looks like a slime-mold. We may conclude that cities grow like slime-molds, by attachment from the inside, directed towards both the inside and the outside. Early phases of urban sprawl, particularly in developing countries, grow by attachment from the outside; hence they look more like a dimensionally constrained crystal (see fig. 1b).

The concept of the fractal and the related one of self-similarity of course also entered the domain of urbanism, particularly an area of interest called Urban Morphology, which was born as a sub-discipline of geography. It is characterized by a salient reductionism of the Urban to the physical appearance of a city and its physical layout, which of course is not quite appropriate.

Given the mechanisms of attachment, whether due to interior processes or attachment from the outside (through people migrating to the city), it is not really surprising to find fractal shapes similar to those in the case of (dimensionally) constrained crystalline growth, or in the case of slime-molds with their branching amoeba highways. In order to understand the city, the question is not whether there is a fractal or not, whether its dimensionality is 1.718 or 1.86.
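
Such dimensionalities are typically estimated by box counting: cover the pattern with grids of shrinking cell size ε and fit the slope of log N(ε) against log(1/ε). A self-contained sketch (using a synthetic Sierpinski gasket of known dimension log 3 / log 2 ≈ 1.585 instead of real urban data, which we do not have here) looks like this:

```python
import math
import random

def sierpinski_points(n=20000, seed=3):
    """Chaos game: repeated half-way jumps towards randomly chosen
    corners of a triangle produce points on the Sierpinski gasket."""
    rng = random.Random(seed)
    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
    x, y = 0.1, 0.1
    pts = []
    for _ in range(n):
        cx, cy = rng.choice(corners)
        x, y = (x + cx) / 2, (y + cy) / 2
        pts.append((x, y))
    return pts

def box_counting_dimension(pts, exponents=range(2, 7)):
    """Estimate the fractal dimension as the least-squares slope of
    log N(eps) against log(1/eps), with eps = 2**-k."""
    xs, ys = [], []
    for k in exponents:
        eps = 2.0 ** -k
        boxes = {(int(x / eps), int(y / eps)) for x, y in pts}
        xs.append(k * math.log(2))       # log(1/eps)
        ys.append(math.log(len(boxes)))  # log N(eps)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))
```

The estimator recovers a value close to 1.585 for the gasket. Applied to a digitized city layout, it yields numbers like the 1.718 or 1.86 above; the point of the argument is precisely that such a number, however it is computed, does not yet tell us anything about the generating mechanism.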

The question is about the mechanisms that show up as a particular material habitus, and about the actual instantiation of these mechanisms. Or even shorter: the material habitus must be translated into a growth model. In turn, this would provide the means to shape the conditions of the city’s own unfolding and evolution. We already know that dedicated planning and dedicated enforcement of plans will not work in most cities. It is of utmost importance here not to fall back into representationalist patterns, to which for instance Michael Batty sometimes falls prey [1]. Avoiding representationalist fallacies is possible only if we embed the model of abstract growth into a properly bound compound that comprises theory (methodology and philosophy) as well as politics, much like we proposed in the previous essay.

Figure 9a. In former times, or as a matter of geographical fact, attachment was excluded. Any growth was directed towards the inside and showed up as differentiation. Here, in this figure, we see a planned city, which thus looks much like a crystal.

Figure 9b. A normally grown medieval city. While the outer “shell” looks pretty standardized, though not “crystalline”, the interior shows rich differentiation. In order to describe the interior of such cities we have to use the concept of type.

Figure 10a. Manhattan is the paradigmatic example of congestion due to a severe (in this case geographical) limitation of the possibility to grow horizontally. In parallel, the overwhelming interior differentiation created a strong connectivity and abundant heterotopias. This could be interpreted as the prototype of the internet, built in steel and glass (see Koolhaas’ “Delirious New York” [15]).

Figure 10b. In the case of former Kowloon (now torn down), the constraints weren’t geographical but political. It was a political enclave/exclave where effectively no legislative regulations could be enforced. In some way it is the chaotic brother of Manhattan. This shows Kowloon in 1973…

Figure 10c. And here the same area in 1994.

Figure 10d. Somewhere in the inside. Kowloon developed more and more into an autonomous city that provided every service to its approx. 40’000 inhabitants. On the roofs of the buildings they installed the playgrounds for the children.

The medieval city, Manhattan and Kowloon share a particular growth pattern. While the outer shape remains largely constant, their interior develops all kinds of compartments, every imaginable kind of flow, and a rich vertical structure, both physical and logical. This growth pattern is the same as we can observe in animals. Furthermore, those cities, much like animals, start to build an informational autonomy; they start to behave, to build an informational persistence, to initiate an intense mediality.

3.6. Summary of Growth Modes

The following table provides a brief overview of the main structural differences between growth models, as they can be derived from their natural instantiations.

Table 1: Structural differences of the four basic classes of modes of growth. Note that the class labels are indeed just that: labels of models. Any actual instantiation, particularly in case of real animals, may comprise a variety of compounds made from differently weighted classes.

Aspect \ Class: crystal | plant | swarm | animal
Mode of Attachment: passive, positive | active, positive | active, positive and negative | active, positive and negative
Direction: from outside | from inside | from inside, towards outside or inside | from & towards the inside
Morphogenetic Force: as a fact, by matter | explicitly produced inhibiting fields | implicit and explicit multi-component fields11 | explicitly produced multi-component fields
Status of Form: implicitly templated by existing form | beginning independence from matter | independence from matter | independence from matter
Formal Tools: geometric scaling, representative reproduction, constrained randomness | Fibonacci patterns, fractal habitus, logistics | fractal habitus, logistics | metamerism, organs, transformation, strictly a-physical
Causa Finalis (main component): actualization of identity | space filling, logistics | mobile logistics | short-term adaptivity

4. Effects of Growth

Growth increases mass, spread, or both. Saying that doesn’t add anything; it is an almost syntactical replacement of words. In Aristotelian terms, we would get stuck with the causa materialis and the causa formalis. The causa finalis of growth, in other words its purpose and general effect, besides the mere increase of mass, is differentiation12, and we have to focus on the conditions for that differentiation in terms of information. For the change of something is accessible only upon interpretation by an observing entity. (Note that this again requires relationality as a primacy.)

The very possibility of difference, and consequently of differentiation, is bound to the separation of signals.13 Hence we can say that growth is all about the creation of a whole bouquet of signal intensity lengths, instantiated on a scale that stretches from morpho-physical compartments through morpho-functional compartments to morpho-symbolic specializations.14

Inversely, we may say that abstract growth is a necessary component of differentiation. Formally, we can cover differentiation as an abstract complexity of positive and negative growth. Without abstract growth—or differentiation—there is no creation or even shaping of space into an individual space with its own dynamical dimensionality, which in turn would preclude the possibility of interaction. Growth regulates the dimensionality of the space of expressibility.

5. Growth, an(d) Urban Matter

5.1. Koolhaas, History, Heritage and Preservation

From his early days as urbanist and architect, Koolhaas has been fascinated by walls and boxes [16], even with boxes inside boxes. While he conceived the concept of separation first in a more representational manner, he developed it also into a mode of operation later. We now can decode it as a play with informational separation, as an interest in compartments, hence with processes of growth and differentiation. This renders his personal fascinosum clearly visible: the theory and the implementation of differentiation, particularly with respect to human forms of life. It is probably his one and only subject.

All of Koolhaas’ projects fit into this interest: New York, Manhattan, boxes, Lagos, CCTV, story-telling, Singapore, ramps, Lille, empiricism, Casa da Musica, bigness, Metabolism. His explorations of bigness can be interpreted as an exploration of the potential of signal intensity length. How much do we have to inflate a structure in order to provoke differentiation through shifting the signal horizon into the inside of the structure? Remember that the effective limit of signal intensity length manifests as a breaking of symmetry, which in turn gives rise to compartmentalization and opposing forces, paving the way for complexity and emergence, that is, nothing else than a dynamic generation of patterns. BIG BAG. BIG BANG. Galaxies, stardust, planets, everything in the mind of those crawling across and inside bigness architecture. Of course, it appears more elegant to modulate the signal intensity length through other means than just bigness, but we should not forget about it. Another way of provoking differentiation is through introducing elements of complexity, such as contradictory elements and volatility. Already in 1994, Koolhaas wrote [17]15

But in fact, only Bigness instigates the regime of complexity that mobilizes the full intelligence of architecture and its related fields. […] The absence of a theory of Bigness–what is the maximum architecture can do?–is architecture’s most debilitating weakness. […] By randomizing circulation, short-circuiting distance, […] stretching dimensions, the elevator, electricity, air-conditioning,[…] and finally, the new infrastructures […] induced another species of architecture. […] Bigness perplexes; Bigness transforms the city from a summation of certainties into an accumulation of mysteries. […] Bigness is no longer part of any urban tissue. It exists; at most, it coexists. Its subtext is fuck context.

The whole first part of this quote is about nothing else than modulating signal intensity length. Consequently, the conclusion in the second part refers directly to complexity that creates novelty. An artifice that is doubly creative, that is, creative and in each of its instances personally creative: how should it be perceived other than as a mystery? No wonder modernists feel overtaxed…

The only way to get out of (built) context is through dynamically creating novelty, that is, by creating an exhaustively new context outside of built matter, yet strongly building on it. Novelty is established just and only by the tandem of complexity and selection (aka interpretation). But be aware: complexity here is fully defined and not to be confused with the crap delivered by cybernetics, systems theory or deconstructivism.

The absence of a theory of Bigness—what is the maximum architecture can do? —is architecture’s most debilitating weakness. Without a theory of Bigness, architects are in the position of Frankenstein’s creators […] Bigness destroys, but it is also a new beginning. It can reassemble what it breaks. […] Because there is no theory of Bigness, we don’t know what to do with it, we don’t know where to put it, we don’t know when to use it, we don’t know how to plan it. Big mistakes are our only connection to Bigness. […] Bigness destroys, but it is also a new beginning. It can reassemble what it breaks. […] programmatic elements react with each other to create new events- Bigness returns to a model of programmatic alchemy.

All this reads like a direct rendering of our conceptualization of complexity. It is, of course, nonsense to think that

[…] ‘old’ architectural principles (composition, scale, proportion, detail) no longer apply when a building acquires Bigness. [18]

Koolhaas sub-contracted Jean Nouvel to take care of large parts of Euro-Lille. Why should he do so, if proportions weren’t important? Bigness and proportions are simply on different levels! Bigness instantiates the conditions for the dynamic generation of patterns, and those patterns, albeit volatile and completely on the side of the interpreter/observer/user/inhabitant/passer-by, deserve careful thinking about proportions.

Bigness is impersonal: the architect is no longer condemned to stardom.

Here, again, the pass-porting key is the built-in creativity, based on elementarized, positively defined complexity. We thus would like to propose considering our theory of complexity—at least—as a theory of Bigness. Yet, the role of complexity can be understood only as part of generic differentiation. Koolhaas’ suggestion of Bigness does not apply only to architecture. We already mentioned Euro-Lille. Bigness, and so complexity—positively elementarized—is the key to dealing with Urban affairs. What could be BIGGER than the Urban? Koolhaas concludes

Bigness no longer needs the city, it is the city.’ […]

Bigness = urbanism vs. architecture.

Of course, by “architecture” Koolhaas refers to the secretions of the swarm of architects addicted to points, lines, forms and a priori functions, all these blinkers of modernism. Yet, I think urbanism and a renewed architecture (one that embraces complexity) may well be possible, yet probably only if we, architects and their “clients”, contemporary urbanists and their “victims,” start to understand both as parts of a vertical, differential (Deleuzean) Urban Game. Any comprehensive apprehension of {architecture, urbanism} will overcome the antipodal character of the relations between them. The hope is that it will also be a cure for junkspace.

There are many examples from modernism where architects made the utmost efforts to prevent the “natural” effect of bigness, though not always successfully. Examples include Corbusier as well as Mies van der Rohe.

Koolhaas/OMA not only use assemblage, bricolage and collage as working techniques, whether as “analytic” tools (Delirious New York) or in design, they also implement them in actual projects. Think of Euro-Lille, for instance. Implementing the conditions of or for complexity creates a never-ending flux of emergent patterns. Such an architecture not only keeps being interesting, it is also socially sustainable.

Thus, it is not really a surprise that Koolhaas started to work on the issue and the role of preservation during the past decade, culminating in the contribution of OMA/AMO to the Biennale 2010 in Venice.

In an interview given there to Hans Ulrich Obrist [20] (and in a lecture at the American University of Beirut), Koolhaas mentioned some interesting figures about the quantitative consequences of preservation. In 2010, 3–4% of the earth’s land surface had been declared heritage sites, amounting to a territory larger than India. The prospect was that soon up to 12% would be protected against change. His objection was that this development can lead to a kind of stasis. According to Koolhaas, we need a new vocabulary, a theory that allows us to talk about how to get rid of old buildings and to negotiate which buildings we could get rid of. He says that we can’t talk about preservation without also talking about how to get rid of old stuff.

There is another interesting issue about preservation. The temporal distance between the age of the building to be preserved and the attempt to preserve it has constantly decreased across history. In 1800, preservation focused on buildings erected 2000 years before; in 1900 the time distance had shrunk to 300 years, and in 2000 it was as little as 30 years. Koolhaas concludes that we are obviously entering a phase of prospective preservation.

There are two interpretations of this tendency. The first, pessimistic one is that it will lead to a perfect lock-up. As an architect, you couldn’t do anything anymore without being entangled in severely intensified legislation and a huge increase in bureaucracy. The alternative to this pessimistic perspective is, well, let’s call it symbolic (abstract) organicism, based on the concept of (abstract) growth and differentiation as we devised it here. The idea of change as a basis of continuity could be built so deeply into any architectural activity that the result would not only comprise preservation, it would transcend it. Obviously, the traditional conception of preservation would vanish as well.

This points to an important topic: Developing a theory about a cultural field, such as it is given by the relation between architecture and preservation, can’t be limited to just the “subject”. It inevitably has to include a reflection about the conceptual layer as well. In the case of preservation and heritage, we simply find that the language game is still of an existential character, additionally poisoned by values. Preservation should probably not target the material aspects. Thus, the question whether to get rid of old buildings is inappropriate. Transformation should not be regarded as a question of performing a tabula rasa.

Any well-developed theory of change in architectural or Urban affairs brings a quite important issue to the foreground: the city has to decide what it wants to be. The alternatives are preformed by the modes of growth. It could conceive of itself as an abstract crystal, as a plant, as a slime-mold made from amoeboids, or as an abstract animal. Each choice offers particular opportunities and risks. Each of these alternatives will determine the characteristics and the quality of the potential forms of life, which of course have to be supported by the city. Selecting an alternative also selects the appropriate manner of planning, of development. It is not possible to perform the life form of an animal and to plan according to the characteristics of a crystal. The choice will also determine whether the city can enter a regenerative trajectory, whether it will decay to dust, whether it will be able to maintain its shape, or whether it will behave in a predatory manner. All these consequences are, of course, tremendously political. Nevertheless, we should not forget that the political has to be secured against the binding problem as much as conceptual work has to be.

In the cited interview, Koolhaas also gives a hint about this when he refers to the Panopticon project, a commission to renovate a 19th-century prison. He mentions that they discovered a rather unexpected property of the building: “a lot of symbolic extra-dimensions”. This symbolic capital allows for “much more and beautiful flexibility” in handling the renovation. Actually, one “can do it in 50 different ways” without exhausting the potential, something which, according to Koolhaas, is “not possible for modern architecture”.

Well, again, not really a surprise. Neither function, nor functionalized form, nor functionalized fiction (Hollein) can bear symbolic value except precisely that of the function. Symbolic value can no more be implanted than meaning can be defined a priori, something that has not been understood, for instance, by Heinrich Klotz14. Due to the deprivation of the symbolic domain it is hard to re-interpret modernist buildings. Yet, what would be the consequence for preservation? Tearing down all the modernist stuff? Probably not the worst idea, unless future architects are able to think in terms of growth and differentiation.

Beyond the political aspects, the practical question remains: how to decide which building, district, or structure to preserve? Koolhaas has recognized that politicians have started to influence or even rule the respective decision-making processes, taking responsibility away from the “professional” city-curators. Since there can’t be a rational answer, his answer is random selection.

Figure 11: Random selection of preservation areas, Beijing. Koolhaas suggested selecting preservation areas randomly, since it can’t be decided “which” Beijing should be preserved (there are quite a few very different ones).

Yet, I tend to rate this as a fallback into his former modernist attitudes. I guess the actual, local design of the decision-making process is a political issue, which in turn depends on the type of differentiation that is in charge, either as a matter of fact or as a subject of political design. For instance, the citizens of the whole city, or just of the respective areas, could be asked about their values, as is possible (or even a duty) in Switzerland. Actually, there is a nice and recent example of this. The subject matter is a bus-stop shelter designed by Santiago Calatrava in 1996, making it one of his first public works.

Figure 12: Santiago Calatrava, 1996, bus stop shelter in St. Gallen (CH), at a central place of the city; there are almost no cars, but a bus every 1–2 minutes, thus a lot of people pass by, even several times per day. Front view…

…and rear view

In 2011, the city parliament decided to restructure the place and to remove the Calatrava shelter. It was considered by the politicians to be too “alien” for the small city, which a few steps away also hosts a medieval district that is a Unesco World Heritage site. Yet, many citizens rated the shelter as something that provides a positive differential, a landmark that could not be found in other cities nearby, not even in the whole of Northern Switzerland. Thus, a referendum was forced by the citizens, and the final result, from May 2012, was a clear rejection of the government’s plans. The effect of this recent history is pretty clear: the shelter accumulates even more symbolic capital than before.

Back to the issue of preservation. If it is not the pure matter, what else should be addressed? Again, Koolhaas himself already points in the right direction. The following fig. 13 shows a scene from somewhere in Beijing. The materials of the dwelling are bricks, plastic, cardboard. Neither the site nor the matter nor the architecture seems to convey anything worth preserving.

Figure 13: When it comes to preservation, the primacy is about the domain of the social, not that of matter.

Yet, what must be preserved mandatorily is the social condition, the rooting of the people in their environment. Koolhaas, however, says that he is not able to provide any answer to this challenge. Nevertheless it is pretty clear that “sustainability” starts right here, not with the question of energy consumption (despite the fact that this is an important aspect too).

5.2. Shrinking. Thinning. Growing.

Cities have been performances of congestion. As we have argued repeatedly, densification, or congestion if you like, is mandatory for the emergence of typical Urban mediality. Many kinds of infrastructure are only affordable, let alone attractive, if there are enough clients for them. Well, the example of China (or Singapore) and its particular practices of implementing plans demonstrates that the question of density can also take place in a plan, in the future, that is, in the domain of time. Alternatively, congestion and densification may actualize more and more in the realm of information, based on the new medially active technologies. Perhaps our contemporary society does not need the same corporeal density as was the case in earlier times. There is a certain tendency for the corporeal city and the web to amalgamate into something new that could be called the “wurban”. Nevertheless, at the end of the day, some kind of density is needed to ignite the conditions for the Urban.

Thus, it seems that the Urban is threatened by the phenomenon of thinning. Thinning is different from shrinking, which appears foremost in some regions of the U.S. (e.g. Detroit) or Europe (Leipzig, Ukraine) as a consequence of a monotonic, or monotopic, economic structure. Yet shrinking can lead to thinning. Thinning describes the fact that there is built matter which, however, is inhabited only for a fraction of the time. Visually dense, but socially “voided”.

Thinning, according to Koolhaas, concerns the form of new cities like Dubai. Yet, as he points out, there is also a tendency in some regions, such as Switzerland or the Netherlands, to approach the “thinned city” from the other direction. The whole country seems to transform itself into something like an urban garden, of neither rural nor urban quality. People like Herzog & de Meuron lament this form, conceiving it as urban sprawl, the loss of distinct structure, i.e. the loss of clearly recognizable rural areas on the one hand, and the surge of “sub-functional” city fragments on the other. Yet we should probably turn perspective, away from reactive, negative dialectics towards a positive attitude of design, as it may appear a bit infantile to think that a handful of sociologists and urbanists could act against a gross cultural tendency.

In his lecture at the American University in Beirut in 2010 [19], Koolhaas asked “What does it [thinning] mean for the ‘Urban Condition’?”

Well, probably nothing interesting, except that it prevents the appearance of the Urban16, or lets it vanish, had it been present. Probably cities like Dubai are just not yet “urban”, not to speak of the Urban. From a distance, Dubai still looks like a photomontage, a Potemkin village, an absurdity. The layout of the arrangement of the high-rises is reminiscent of small street villages, just two rows of cottages on both sides of a street, arbitrarily placed somewhere in the nowhere of a grassland plain, the settlement being ruled just by a very basic tendency for social cohesion and a common interest in exploiting the hinterland as a resource. But there is almost no network effect, no commonly organized storage, no deep structure.

Figure 14a: A collage shown by Koolhaas in his Beirut lecture, emphasizing the “absurdity” (his words) of the “international” style. Elsewhere, he called it an element of Junkspace.

The following fig. 14b demonstrates the artificiality of Dubai, which classifies more as a lined village made from huge buildings than as an actual “city”.

Figure 14b. Photograph “along” Dubai’s main street, taken in late autumn 2012 by Shiva Menon (source). After years of traffic jams, the nomadic Dubai culture finally accepted that something like infrastructure is necessary in a more sessile arrangement. They started to build a metro, whose first line has been operational since September 2009.


Figure 14c below shows the new “Simplicity™”. This work by Koolhaas and OMA oscillates between sarcasm, humor pretending to be naive, irony, and caricature. Although a physical reason is given for the building’s ability to turn its orientation so as to minimize insolation, the effect is quite a different one. It is much more a metaphor for the vanity of village people, or maybe the pseudo-religious power of clerks.

Figure 14c-1. A proposal by Koolhaas/OMA for Dubai (not built, and as such pure fiction). The building, called “Simplicity”, was conceived to be 200 m wide and 300 m tall, while measuring only 21 m in depth. It is placed on a plate that rotates in order to minimize insolation.

Figure 14c-2. The same thing a bit later the same day.

Yet, besides the row of high-rises we find the dwellings of the migrant workers at a considerable density, forming a multi-national population. However, the layout here is more reminiscent of Los Angeles than of any kind of “city”. Maybe it simply forms a kind of “rural” hinterland for the high-rise village.

Figure 15. Dubai, “off-town”. Here the migrant workers are housed. In the background, the skyscrapers lining the infamous main street.

They also, for instance, started to invest in a metro, despite the (still) linear, disseminated layout of the city, which means that connectivity, and hence network effects, are now recognized as a crucial structural element for the success of the city. And this is then not so different anymore from the classical Western conception. Anyway, even the first cities of mankind, which did not rise in the West, provided certain unique possibilities which, taken as a bouquet, could be considered urban.

There is still another dimension of thinning, related to the informatization of presence via medially active technologies. Thinning could be considered an actualization of the very idea of the potentiality of co-presence, much as it is exploited in the so-called “social media”. Of course, the material urban neighborhood, its corporeality, depends on physical presence. Certainly, we can expect either strong synchronization effects or negative tipping points, demarcating a threshold towards sub-urbanization. On the other hand, this could give rise to new forms of apartment sharing, supported by urban designers and town officials…

On the other hand, we already mentioned natural structures that show a certain dispersal, such as blood cells, the immune system of vertebrates, or the slime molds. These structures are highly developed swarms. Yet all these swarms are highly dependent on outer conditions. As such, swarms are hardly persistent. Dubai, the swarm city. Technology, however, particularly in the form of the web and so-called social media, could stabilize the swarm shape.17

From a more formal perspective we may conceive of shrinking and thinning simply as negative growth. By this, growth definitely turns into an abstract concept, leaving the representational and even the metaphorical far behind. Yet the explication of a formal theory exceeds the intended size of this text by far. We certainly will do it later, though.
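To make the distinction a bit more tangible, one could sketch the abstraction in a few lines of code. This is not the deferred formal theory, just an illustrative toy model under an assumed two-axis abstraction: built mass on one axis, social occupancy on the other. All names and numbers are hypothetical; shrinking then appears as negative growth of matter, thinning as negative growth of occupancy.

```python
# Toy abstraction (not the formal theory announced above): growth,
# shrinking and thinning as signed changes along two independent axes,
# built mass and social occupancy. All names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class UrbanPatch:
    built_mass: float   # amount of built matter (arbitrary units)
    occupancy: float    # fraction of time the matter is socially inhabited, 0..1

def step(patch: UrbanPatch, d_mass: float, d_occ: float) -> UrbanPatch:
    """One abstract time step; negative deltas are 'negative growth'."""
    return UrbanPatch(
        built_mass=max(0.0, patch.built_mass + d_mass),
        occupancy=min(1.0, max(0.0, patch.occupancy + d_occ)),
    )

def classify(before: UrbanPatch, after: UrbanPatch) -> str:
    """Shrinking removes matter; thinning keeps the matter but empties it."""
    if after.built_mass < before.built_mass:
        return "shrinking"
    if after.occupancy < before.occupancy:
        return "thinning"
    return "growing"

detroit = UrbanPatch(built_mass=100.0, occupancy=0.8)
dubai = UrbanPatch(built_mass=100.0, occupancy=0.8)

print(classify(detroit, step(detroit, d_mass=-10.0, d_occ=0.0)))  # shrinking
print(classify(dubai, step(dubai, d_mass=0.0, d_occ=-0.4)))       # thinning
```

The point of the sketch is only that the two phenomena become distinguishable (and comparable) once growth is treated as a signed, multi-dimensional quantity rather than a metaphor.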

5.3. In Search for Symbols

What turns a building into an entity that may grow into an active source for symbolization processes? At least we initially know that symbols can’t be implanted in a direct manner. Of course, one can always draw on exoticism, importing the cliché that is already attached to the entity from abroad. Yet, this is not what we are interested in here. The question is not so dissimilar to the issue of symbolization at large, as it is known from the realm of language. How could a word, a sign, a symbol gain reference, and how could a building get it? We could even take a further step by asking: How could a building acquire generic mediality such that it could be inhabited not only physically, but also in the medial realm? [23] We can’t answer the issues around these questions here, as there is a vast landscape of sources and implications, enough to fill at least a book. Yet, conceiving buildings as agents in story-telling could be a straightforward and not too complicated entry into this landscape.

Probably, story-telling with buildings works like a good joke: if it is too direct, nobody will laugh. Probably, story-telling has a lot to do with behavior and the implied complexities, I mean, the behavior of the building. We interpret pets, not plants. With plants, we interpret just their usage. We laugh about cats, dogs, apes, and elephants, but not about roses and orchids, and even less about crystals. Once you have seen one crystal, you have seen all of them. Being inside a crystal can be frightening, just think of Snow White. While in some way this holds even for plants, it is certainly not true for animals. Junkspace is made from (medial) crystals. Junkspace is so detrimental due to the fundamental modernist misunderstanding that claims the possibility of implementing meaning and symbols, if these are regarded as relevant at all.

Closely related to the issue of symbols is the issue of identity.

Philosophically, it is definitely highly problematic to refer to identity as a principle. It leads to deep ethical dilemmata. If we are going to drop it, we have to ask immediately about a replacement, since many people indeed feel that they need to “identify” with their neighborhood.

Well, first we could say that identification and “to identify” are probably quite different from the idea of identity. Every citizen in a city could be thought to identify with her or his city, yet at the same time there need not be such a thing as “identity”. Identity is the abstract idea, imposed by mayors and sociologists, and it should preferably be rejected just for that, while the process of feeling empathy with one’s neighborhood is a private process that respects plurality. It is not too difficult to imagine that there are indeed people who are so familiar with “their” city, the memories of experiences, the sound, the smell, the way people walk, and who feel so empathic with all of this, that they source a significant part of their personality from it. What to call this inextricable relationship other than “to identify with”?

The example of the Calatrava bus-stop shelter in St.Gallen demonstrates one possible source of identification: success in collective design decisions. Or, more generally: successfully concluded negotiations about collective design issues, a common history of such successful processes, even if the collective negotiation happens as a somewhat anonymous process. Yet the relative preference for participation versus decreed activities depends on the particular distribution of political and ethical values in the population of citizens. Certainly, participatory processes are much more stable than top-down decrees, not only in the long run, as even the Singaporean government has recognized recently. But anyway, cities have their particular personality because they behave18 in a particular manner, and any attempt to get clear or to decide about preservation must respect this personality. Of course, it also applies that the decision-making process should be conscious enough to be able to reflect on the metaphysical belief set, the modes of growth, and the long-term characteristics of the city.

5.4. The Question of Implementation

This essay tries to provide an explication of the concept of growth in the larger context of a theory of differentiation in architecture and urbanism. There, we positioned growth as one of four principles or schemata that are constitutive for generic differentiation.

In this final section we would like to address the question of implementation, since only little has been said so far about how to deal with the concept of growth. We already described how and why earlier attempts like that of the Metabolists dashed against the binding problem of theoretical work.

If houses do not move physically, how then to make them behave, say, similarly to the way an animal does? How to implement a house that shares structural traits with animals? How to think of a city as a system of plants and animals without falling prey to utter naivety?

We already mentioned that there is no technocratic, or formal, or functionalist solution to the question of growth. At first, the city has to decide what it wants to be, which kind of mix of growth modes should be implemented in which neighborhoods.

Let us first take some visual impressions…

Figure 16a,b,c. The Barcelona Pavilion by Mies van der Rohe (1929 [1986]).

This pavilion is a very special box. It is a non-box, or better, it establishes a volatile collection of virtual boxes. In this building, Mies reached the mastery of boxing. Unfortunately, there are not many more examples. In some way, the Dutch Embassy by Koolhaas is its closest relative, if we consider more recent architecture.

Just at the time the Barcelona Pavilion was built, another important architect followed similar concepts. In his Villa Savoye, built 1928-31, Le Corbusier employed and demonstrated several new elements of his so-called “new architecture,” among others the box and the ramp. Probably the most important principle, however, was to completely separate construction and tectonics from form and design. Thus he achieved a similar “mobility” to that of Mies in his Pavilion.

Figure 17a: Villa Savoye, mixing interior and exterior on the rooftop “garden”. The other zone of overlapping spaces is beneath the house (see next figure 17b).


Figure 17b: A 3d model of Villa Savoye, showing the ramps that serve as “entrance” (from the outside) and “extrance” (towards the rooftop garden). The principle of the ramp creates a new location for the creation and experience of duration in the sense of Henri Bergson’s durée. Both the ramp and the overlapping of spaces create a “zona extima,” which is central to the “behavioral turn”.


Comparing Villa Savoye with the Barcelona Pavilion regarding the mobility of space, it is quite obvious that Le Corbusier handled the confluence and mutual penetration of interior and exterior in a more schematic and geometric manner.19

The quality of the Barcelona building derives from the fact that its symbolic value is not directly implemented; it just emerges upon interaction with the visitor, or the inhabitant. It actualizes the principle of “emerging symbolicity by induced negotiation” of compartments. The compartments become mobile. Thus, it is one of the roots of the ramp that appears in many works of Koolhaas. Yet its working requires a strong precondition: a shared catalog of values, beliefs, and basic psychological determinants, in short, a shared form of life.

On the other hand, these values and beliefs are not directly symbolized, shifting them into their volatile phase, too. Walking through the building, or simply being inside of it, instantiates differentiation processes in the realm of the immaterial. All the differentiation takes place in the interior of the building, hence it brings forth animal-like growth, transcending the crystal and the swarm.

Thus the power of the pavilion. It is able to transform and to transcend the values of the inhabitant/visitor. The zen of silent story-telling.

This example demonstrates clearly that morphogenesis in architecture not only starts in the immateriality of thought, it also has to target the immaterial.

It is clear that such a volatile dynamics, such an active, if not living, building is hard to comprehend. In 2008, the Japanese office SANAA was invited to contribute the annual installation in the pavilion. They explained their work with the following words [24].

“We decided to make transparent curtains using acrylic material, since we didn’t want the installation to interfere in any way with the existing space of the Barcelona Pavilion,” says Kazuyo Sejima of SANAA.

Figure 18. The installation of Japanese office SANAA in the Barcelona Pavilion. You have to take a careful look in order to see the non-interaction.

Well, it certainly rates as something between bravery and stupidity to try “not to interfere in any way with the existing space”. And doing so with highly transparent curtains is quite the opposite of the building’s characteristics, as it removes precisely the potentiality, the volatility, the virtual mobility. Nothing is left, besides the air, perhaps. SANAA committed the typical representationalist fault, as they tried to use a representational symbol. Of course, walls that are not walls at all have a long tradition in Japan. Yet the provided justification would still be simply wrong.

Instead of trying to implement a symbol, the architect or the urbanist has to care about the conditions for the possibility of symbol processes and sign processes. These processes may be political or not, they always will refer to the (potential) commonality of shared experiences.

Above we mentioned that the growth of a building begins in the immateriality of thought. Even for the primitive form of mineral growth we found that we can understand the variety of resulting shapes only through the conditions embedding the growth process. The same holds, of course, for the growth of buildings. For crystals, the outer conditions belong to them as well; likewise, the way of generating the form of a building belongs to the building.

Where to look for the outer conditions for creating the form? I suppose we have to search for them in the way the form gets concrete, starting from a vague idea, which includes its social and particularly its metaphysical conditions. Do you believe in independence, identity, relationality, difference?

It would be interesting to map the differences between the large famous offices, say OMA and HdM.

According to their own words, HdM seem to treat the question of material very differently from OMA, where the question of material comes in at a later stage [25]. HdM seem to work much more “crystalline”: form is determined by matter, the material, and the respective culture around it. There are many examples of this, from the winery in California, the “Schaulager” in Basel (CH), and the railway control center (Basel), up to the “Bird’s Nest” in Beijing (which, by the way, is an attempt at providing symbols that went wrong). HdM seem to try to rely on the innate symbolicity of the material, of corporeality itself. In the case of the Schaulager, the excavated material was used to raise the building; the stones from the underground were erected into a building whose inside looks like a Kafkaesque crystal. They even treat the symbols of a culture as material, somehow counter to their own “matérialisme brut”. Think of their praise of simplicity, the declared intention to avoid any reference besides the “basic form of the house” (Rudin House). In this perspective, their acclaimed “sensitivity” to local cultures is little more than the exploitation of a coal mine, which also requires sensitivity to local conditions.

Figure 18: Rudin House by Herzog & deMeuron

HdM practice a representationalist anti-symbolism, leaning strongly towards architecture as a crystal science, a rather weird attitude to architecture. Probably it is this weirdness that, quite unintentionally, produces the interest in their architecture, through a secondary dynamics in the symbolic. Is it, after all, Hegel’s tricky reason at work? At least this would explain the strange mismatch between their modernist talk and the interest in their buildings.

6. Conclusions

In this essay we have closed a gap with respect to the theoretical structure of generic differentiation. Generic Differentiation may be displayed by the following diagram (but don’t miss the complete argument).

Figure 19: Generic Differentiation is the key element for solving the binding problem of theory works: the basic module of the fractal relation between concept/conceptual, generic differentiation/difference, and operation/operational (comprising logistics and politics) that describes the active subject. This structure is to be conceived not as a closed formula, but rather as a module of a fractal that is created through mutual self-affine mappings of each of the three parts into the respective others.
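The notion of a structure generated through repeated self-affine mappings can be illustrated, purely formally and outside the architectural argument, by an iterated function system (IFS): a handful of affine contractions, each mapping the whole into one of its parts, converge under iteration to a unique fractal attractor. The concrete maps below are a generic textbook choice (a Sierpinski-like system), not a rendering of the diagram in fig. 19.

```python
# A minimal iterated function system (IFS): three affine contractions,
# each mapping the whole into one of its parts. Repeated application
# converges to a fractal attractor (here, a Sierpinski-like gasket).
# The concrete coefficients are a generic textbook example, purely
# illustrative of "self-affine mapping", not the essay's diagram.

import random

# Each map: (x, y) -> (a*x + b*y + e, c*x + d*y + f)
MAPS = [
    (0.5, 0.0, 0.0, 0.5, 0.0, 0.0),    # lower-left copy
    (0.5, 0.0, 0.0, 0.5, 0.5, 0.0),    # lower-right copy
    (0.5, 0.0, 0.0, 0.5, 0.25, 0.5),   # upper copy
]

def apply_map(m, x, y):
    a, b, c, d, e, f = m
    return a * x + b * y + e, c * x + d * y + f

def chaos_game(n_points=20000, seed=1):
    """Sample the attractor by applying randomly chosen maps."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = []
    for i in range(n_points):
        x, y = apply_map(rng.choice(MAPS), x, y)
        if i > 20:                       # discard the initial transient
            points.append((x, y))
    return points

pts = chaos_game()
print(len(pts))
```

Whatever starting point is chosen, the sampled points settle onto the same attractor: the structure lies in the mutual mappings, not in any single part, which is the formal sense in which fig. 19 speaks of a fractal module.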

In earlier essays, we proposed abstract models for probabilistic networks, for associativity and for complexity. These models represent a perspective from the outside onto the differentiating entity. All of these have been set up in a reflective manner by composing certain elements, which in turn can be conceived as framing a particular space of expressibility. Yet, we also proposed the trinity of development, evolution and learning (chp.10 here) for the perspective from the inside of the differentiation process(es), describing different qualities of differentiation.

Well, the concept of growth20 now joins the group of compound elements for approaching the subject of differentiation from the outside. In some way, using a traditional and actually inappropriate wording, we could say that this perspective is more analytical than synthetic, more scientific than historiographical. This does not mean, of course, that the complementary perspective is less scientific, or that talking about growth or complexity is less aware of the temporal domain. It is just a matter of weights. As we have pointed out in the previous essay, the meta-theoretical conception (as a structural description of the dynamics of theoretical work) is more like a fractal field than a series of activities.

Anyway, the question is: what can we do with the newly re-formulated concept of growth?

First of all, it completes the concept of generic differentiation, as we mentioned just before. Probably the most salient influence is the enlarged and improved vocabulary for talking about change as far as it concerns the “size” of the form of a something, even if this something is immaterial. For many reasons, we should definitely resist the tendency to limit the concept of growth to issues of morphology.

Only through this vocabulary can we start to compare entities in the space of change. Different things from different domains, or even different forms of life, can be compared to each other, yet not as those things, but rather as media of change. Comparing things that change means investigating the actualization of different modes of change as it passes through the something. This move is by no means eclecticist. It is even mandatory in order to stay aligned with the primacy of interpretation, the Linguistic Turn, and the general choreostemic constitution.

By means of the new and generalized vocabulary we may overcome the infamous empiricist particularism. Bristle counting, as it is called in biology, particularly entomology. Yes, there are around 450,000 different species of beetles… but… Well, overcoming particularism means that we can spell out new questions: about regulative factors, e.g. for continuity, melting, and apoptosis. Guided by the meta-theoretical structure in fig. 19 above we may ask: What would a politics of apoptosis look like? What about the recycling of space? How could infrastructure foster associativity, learning, and creativity of the city, rather than creativity in the city? What are the epi/genetics of the growth and differentiation processes in a particular city?

Such questions may appear elitist, abstract, of only little use. Yet the contrary is true, as precisely such questions directly concern the productivity of a city, the speed of circulation of capital, whether symbolic or monetary (which is almost the same anyway). Understanding the conditions of growth may lead to cities that are indeed self-sustaining, because the power of life would be a feature deeply built into them. A little, perhaps even homeopathic, dose of “dedetroitismix”, a kind of drug to cure the disease that infected the city of Detroit, as well as the planners of Detroit, and also all the urbanists who pseudo-reason about Detroit in particular and sustainability in general. Just as Paracelsus remarked that there is not just one kind of stomach, but hundreds of kinds of stomachs, we may recognize how to deal with the thousands of different kinds of cities, spread across thousands of plateaus, if we understand how to speak and think about growth.

Notes

1. This might appear a bit arrogant, perhaps, at first sight. Yet at this point I must insist on it, even when I take into account the most advanced attempts, such as those of Michael Batty [1], Luca D’Acci or Karl Kropf [2]. The proclaimed “science of cities” is in a bad state. Either it is still infected by positivist or modernist myths, or the applied methodological foundations are utterly naive. Batty, for instance, whole-heartedly embraces complexity. But how could one use complexity as anything other than a mere label while writing such a weird mess as the following [3], wildly mixing concepts and subjects?

“Complexity: what does it mean? How do we define it? This is an impossible task because complex systems are systems that defy definition. Our science that attempts to understand such systems is incomplete in the sense that a complex system behaves in ways that are unpredictable. Unpredictability does not mean that these systems are disordered or chaotic but that defy complete definition.”

Of course, it is not an impossible task to conceptualize complexity in a sound manner. This is even a mandatory precondition for using it as a concept. It is a bit ridiculous to claim the impossibility and then to write a book about its usage. And this conceptualization, whatever it would look like, has absolutely nothing to do with the fact that complex systems may behave unpredictably. Actually, in some ways they are better predictable than completely random processes. It remains unclear which kind of unpredictability Batty is referring to. He didn’t disclose anything about this question, which is quite an important one if one is going to apply “complexity science”. And what about the concepts of risk, and of modeling, then, which actually can’t be separated from it at all?

His whole book [1] is nothing but an accumulation of half-baked formalistic particulars. When he talks about networks, he considers only logistic networks. Bringing in fractals, he fails to mention the underlying mechanisms of growth and the formal aspects (self-affine mapping). In his discussion of the possible role of evolutionary theory [4], following Geddes, Batty resorts again to physicalism and defends it. Although he emphasizes the importance of the concept of “mechanism”, although he correctly distinguishes development from evolution, and although he demands an “evolutionary thinking”, he fails to get to the point: a proper attitude to theory under conditions of evolution and complexity, a probabilistic formulation, an awareness of self-referentiality, insight into the incommensurability of emergent traits, the dualism of code and corporeality, the space of evo-devo-cogno. In [4], one can find another nonsensical statement about complexity on p. 567:

“The essential criterion for a complex system is a collection of elements that act independently of one another but nevertheless manage to act in concert, often through constraints on their actions and through competition and co-evolution. The physical trace of such complexity, which is seen in aggregate patterns that appear ordered, is the hallmark of self-organisation.” (my emphasis).

The whole issue with complex systems is that there is no independence; they do not “manage” to act in concert; concepts like evolution and competition are mixed in wildly; physics can definitely say nothing about the patterns; and the hallmark of self-organizing systems is surely not just the physical trace: it is the informational re-configuration.

Not by pure chance, therefore, is he talking about “tricks” ([5], following Hamdi [7]): “The trick for urban planning is to identify key points where small change can lead spontaneously to massive change for the better.” Without a proper vocabulary of differentiation, that is, without a proper concept of differentiation, one inevitably has to invoke wizards…

But the most serious failures are the following: regarding the cultural domain, there is no awareness of the symbolic/semiotic domain and a disrespect of information; regarding methodology, throughout his writings Batty mistakes theory for models and vice versa, following the positivist trail. There is not the slightest evidence in his writing of even a small trace of reflection. This, however, is seriously indicated, because cities are about culture.

This insensitivity is shared by talented people like Luca D’Acci, who is still musing about “ideal cities”. His procedural achievements as a craftsman of empiricism are impressive, but without reflection this is just threatening, claiming the status of the demiurge.

Despite all these failures, Batty’s approach and direction are of course far more advanced than the musings of Conzen, Caniggia or Kropf, which are intellectually simply disastrous. There are numerous examples of a highly uncritical use of structural concepts, of mixing levels of argument, of crude reductionism, of a complete neglect of mechanisms and processes, etc. For instance, Kropf in [6]:

“A morphological critique is necessarily a cultural critique. […] Why, for example, despite volumes of urban design guidance promoting permeability, is it so rare to find new development that fully integrates main routes between settlements or roads directly linking main routes (radials and counter-radials)?” (p.17)

“The generic structure of urban form is a hierarchy of levels related part to whole. […] More effective and, in the long run, more successful urbanism and urban design will only come from a better understanding of urban form as a material with a range of handling characteristics.” (p.18)

It is really weird to regard form as matter, isn’t it? The materialist’s final revenge… Still, through the work of Batty there is indeed some reasonable hope for improvement. Batty & Marshall are certainly heading in the right direction when they demand (p. 572 [4]):

“The crucial step – still to be made convincingly – is to apply the scientifically inspired understanding of urban morphology and evolution to actual workable design tools and planning approaches on the ground.”

But it is equally certain that an adoption of evolutionary theory that seriously considers an “élan vital” will not be able to serve as a proper foundation. What is needed instead is a methodologically sound abstraction of evolutionary theory, as we proposed it some time ago, based on a probabilistic formalization and vocabulary. (…end of the longest footnote I have ever produced…)

2. The concept of mechanism should not be mistaken for a kind of “machine”. In stark contrast to machines, mechanisms are inherently probabilistic. While machines are synonymous with their plan, mechanisms imply an additional level of abstraction: the population and its dynamics.
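The distinction can be sketched in a few lines, under an assumed minimal reading of the footnote (all names here are hypothetical): a machine is exhausted by its plan, so the same input always yields the same output, whereas a mechanism is a probabilistic rule applied across a population, so only the distribution of outcomes is characteristic, never any single run.

```python
# Hedged sketch of the machine/mechanism distinction (hypothetical names):
# a "machine" is synonymous with its plan (same input, same output), while
# a "mechanism" is a probabilistic rule applied across a population, so only
# the statistics of its outcomes are characteristic, not any single run.

import random

def machine(x: float) -> float:
    """Deterministic: fully described by its plan."""
    return 2.0 * x + 1.0

def mechanism(population, p=0.5, rng=None):
    """Probabilistic: each member is transformed only with probability p."""
    rng = rng or random.Random()
    return [machine(x) if rng.random() < p else x for x in population]

pop = [1.0, 2.0, 3.0, 4.0]
print([machine(x) for x in pop])   # identical on every run
print(mechanism(pop))              # one realization; only statistics are stable
```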

3. Whenever one tries to prove or implement the opposite, the primacy of logic, characteristic gaps are created, more often than not of a highly pathological character.

4. see also the essay about “Behavior”, where we described the concept of “Behavioral Coating”.

5. The Deleuzean understanding of the differential [10]; for details see “Miracle of Comparison”.

6. As in the preceding essays, we use the capital “U” if we refer to the urban as a particular quality and as a concept, in order to distinguish it from the ordinary adjective that refers to common sense understanding.

7. Only in embryos or in automated industrial production do we find “development”.

8. The definition (from Wiki) is: “In animals, metamery is defined as a mesodermal event resulting in serial repetition of unit subdivisions of ectoderm and mesoderm products.”

9. see our essay about Reaction-Diffusion-Systems.

10. Emancipation from constant and pervasive external “environmental” pressures is the main theme of evolution. This is the deep reason that generalists are favored at the cost of specialists (at least on evolutionary time scales).

11. Aristotle’s idea of the four causes is itself a scheme to talk about change.

12. This principle is not only important for Urban affairs, but also for a rather different class of arrangements, machines that are able to move in epistemic space.

13. Here we meet the potential of symbols to behave according to a quasi-materiality.

14. Heinrich Klotz’ credo in [21] is “not only function, but also fiction”, without, however, taking the mandatory step away from the attitude of predefining symbolic value. Thus, Klotz himself remains a fully-fledged modernist. See also Wolfgang Welsch in [22], p.22.

15. There is of course also Robert Venturi with his “Complexity and Contradiction in Architecture”, or Bernard Tschumi with his disjunction principle summarized in “Architecture and Disjunction” (1996). Yet, neither went as far as necessary, for “complexity” can be elementarized and generalized even further, as we have been proposing it (here), which is, I think, a necessary move to combine architecture and urbanism regarding space and time.

16. see footnote 5.

17. ??? .

18. Remember that the behavior of cities is also determined by the legal setup, the traditions, etc.

19. The ramp is an important element in contemporary architecture, yet it is often used merely as a logistic solution, mostly just for the disabled or alongside the moving staircase. In Koolhaas’ works, it takes a completely different role as an element of story-telling. This aspect of temporality we will investigate in more detail in another essay. Significantly, Le Corbusier used the ramp as a solution for a purely spatial problem.

20. Of course, NOT as a phenomenon!

References

  • [1] Michael Batty, Cities and Complexity: Understanding Cities with Cellular Automata, Agent-Based Models, and Fractals. MIT Press, Boston 2007.
  • [2] Karl Kropf (2009). Aspects of urban form. Urban Morphology 13 (2), p.105-120.
  • [3] Michael Batty’s website.
  • [4] Michael Batty and Stephen Marshall (2009). The evolution of cities: Geddes, Abercrombie and the new physicalism. TPR, 80 (6) 2009 doi:10.3828/tpr.2009.12
  • [5] Michael Batty (2012). Urban Regeneration as Self-Organization. Architectural Design, 215, p.54-59.
  • [6] Karl Kropf (2005). The Handling Characteristics of Urban Form. Urban Design 93, p.17-18.
  • [7] Nabeel Hamdi, Small Change: About the Art of Practice and the Limits of Planning, Earthscan, London 2004.
  • [8] Dennis L. Sepper, Descartes’s Imagination Proportion, Images, and the Activity of Thinking. University of California Press, Berkeley 1996. available online.
  • [9] C. Bandt and M. Mesing (2009). Self-affine fractals of finite type. Banach Center Publications 84, 131-148. available online.
  • [10] Gilles Deleuze, Difference & Repetition. [1967].
  • [11] Moussaïd M, Perozo N, Garnier S, Helbing D, Theraulaz G (2010). The Walking Behaviour of Pedestrian Social Groups and Its Impact on Crowd Dynamics. PLoS ONE 5(4): e10047. doi:10.1371/journal.pone.0010047.
  • [12] Claire Detrain, Jean-Louis Deneubourg (2006). Self-organized structures in a superorganism: do ants “behave” like molecules? Physics of Life Reviews, 3(3), p.162–187.
  • [13] Dave Mosher, Secret of Annoying Crowds Revealed, Science now, 7 April 2010. available online.
  • [14] Charles Jencks, The Architecture of the Jumping Universe. Wiley 2001.
  • [15] Rem Koolhaas, Delirious New York.
  • [16] Markus Heidingsfelder, Rem Koolhaas – A Kind of Architect. DVD 2007.
  • [17] Rem Koolhaas, Bigness – or the problem of Large. in: Rem Koolhaas, Bruce Mau & OMA, S,M,L,XL. p.495-516. available here (mirrored).
  • [18] Wiki entry (English edition) about Rem Koolhaas, http://en.wikipedia.org/wiki/Rem_Koolhaas, last accessed Dec 4th, 2012.
  • [19] Rem Koolhaas (2010?). “On OMA’s Work”. Lecture as part of “The Areen Architecture Series” at the Department of Architecture and Design, American University of Beirut. available online. (The date of the lecture is not clearly identifiable on the Areen AUB website.)
  • [20] Hans Ulrich Obrist, Interview with Rem Koolhaas at the Biennale 2010, Venice. Produced by the Institute of the 21st Century with support from ForYourArt, The Kayne Foundation. available online on youtube, last accessed Nov 27th, 2012.
  • [21] Heinrich Klotz, The History of Postmodern Architecture, 1986.
  • [22] Wolfgang Welsch, Unsere postmoderne Moderne. 6. Auflage, Oldenbourg Akademie Verlag, Berlin 2002 [1986].
  • [23] Vera Bühlmann, inhabiting media. Thesis, University of Basel 2009. (in German, available online)
  • [24] Report in dezeen (2008). available online.
  • [25] Jacques Herzog, Rem Koolhaas, Urs Steiner (2000). Unsere Herzen sind von Nadeln durchbohrt. Ein Gespräch zwischen den Architekten Rem Koolhaas und Jacques Herzog über ihre Zusammenarbeit. Aufgezeichnet von Urs Steiner. In: Marco Meier (Ed.), Tate Modern von Herzog & de Meuron. Du. Die Zeitschrift der Kultur, No. 706, Zurich, TA-Media AG, 05.2000, pp. 62-63. available online.

۞

Behavior

September 7, 2012 § Leave a comment

Animals behave. Of course, one could say.

Yet, why do we feel a certain naturalness here, in this relation between the cat as an observed and classified animal on the one side and the language game “behavior” on the other? Why don’t we say, for instance, that the animal happens? Or, likewise, that it is moved by its atoms? To which conditions does the language game “behavior” respond?

As strange as this might look, it is actually astonishing that physicists easily attribute the quality of “behavior” to their dog or their cat, albeit they will rarely attribute ideas (for journeys or the like) to them. For physicists usually claim that the whole world can be explained in terms of the physical laws that govern the movement of atoms (e.g. [1]). Even physicists, it seems, exhibit some dualism in their concepts when it comes to animals. Yet, physicists claimed for a long period of time, actually into the mid-1980s, that the behavioral sciences could not count as a “science” at all, despite the fact that Lorenz and Tinbergen won the Nobel prize for medicine in 1973.

The difficulties physicists obviously suffer from are induced by a single entity: complexity. Here we refer to the notion of complexity that we developed earlier, which essentially is built from the following five elements.

  • – Flux of entropy, responsible for dissipation;
  • – Antagonistic forces, leading to emergent patterns;
  • – Standardization, mandatory for temporal persistence on the level of basic mechanisms as well as for selection processes;
  • – Compartmentalization, together with left-overs leading to spatio-temporal persistence as selection;
  • – Self-referential hypercycles, leading to sustained 2nd order complexity with regard to the relation of the whole to its parts.

Any setup for which we can identify this set of elements leads to probabilistic patterns that are organized on several levels. In other words, these conditioning elements are necessary and sufficient to “explain” complexity. In behavior, the sequence of patterns and the sequence of simpler elements within patterns are by far not randomly arranged; yet it becomes ever more difficult to predict a particular pattern the higher its position in the stack of nested patterns, that is, its level of integration. Almost the same could be said about the observable changes in complex systems.

Dealing with behavior is thus a non-trivial task. There are no “laws” that would somehow be mapped into the animal such that an apriori defined mathematical form would suffice for a description of the pattern, or of the animal as a whole. In the behavioral sciences, one first has to fix a catalog of behavioral elements, and only by reference to this catalog can we start to observe in a way that allows for comparisons with other observations. I deliberately avoid the concept of “reproducibility” here. How to know about that catalog, often called a behavioral taxonomy? The answer is: we can’t know in the beginning. To reduce observation completely to the physical level is not a viable alternative either. Observing a particular species, and often even a particular social group or individual, improves over time, yet we can’t speak about that improvement. There is a certain notion of “individual” culture here that develops between the human observer and the behaving system, the animal. The written part of this culture precipitates in the said catalog, but there remains a large part of the habit of observing that can’t be described without performing it. Observations on animals are never reproducible in the same sense as is possible with physical entities. The ultimate reason is that the latter are devoid of individuality.

A behavioral scientist may work on quite different levels. She could investigate some characteristics of behavior in relation to the level of energy consumption, or to differential reproductive success. On this level, one would hardly go into the details of the form of behavior. Quite different are those investigations that address the level of the form of the behavior itself. The form becomes an important target of the investigation if the scientist is interested in the differential social dynamics of animals belonging to different groups, populations or species. In physics, there is no form other than the mathematical. Electrons are (treated in) the same (way) by physicists all over the world, even across the whole universe. Try this with cats… You will lose the cat-ness.

It is quite clear that the social dynamics can’t be addressed by means of mere frequencies of certain simple behavioral elements, such as scratching, running or even sniffing at other animals. There might be differences, but we won’t understand much of the animal this way, particularly not with regard to the flow of information in which the animal engages.

The big question that arose during the 1970s and 1980s was how to address behavior, its structure and its patterning, while avoiding a physicalist reduction.

Some intriguing answers have been given in the respective discourse since the beginning of the 1950s, though only a few people recognized the importance of the form. For instance, to understand wolves, Moran and Fentress [2] used the concept of choreography to get a descriptive grip on the quite complicated patterns. Colmenares, in his work about baboons, most interestingly introduced the notion of the play to describe the behavior in a group of baboons. He distinguished more than 80 types of social games as arrangements of “moves” that span across space and time in a complicated way; this behavioral wealth rendered it somewhat impossible to analyze the data at that time. The notion of the social game is so interesting because it is quite close to the concept of the language game.

Doing science means to translate observations into numbers. Unfortunately, in the behavioral sciences this translation is rather difficult and itself only little standardized (so far), despite many attempts, precisely because behavior is the observable output of a deeply integrated complex system, for instance the animal. Whenever we are going to investigate behavior, we carefully have to select the appropriate level of investigation. Yet, in order to understand the animal, we cannot even reduce the animal to a certain level of integration. We should map the fact of integration itself.

There is a dominant methodological aspect in the description of behavior that differs from those in sciences closer to physics. In the behavioral sciences one can invent new methods by inventing new purposes, something that is not possible in classical physics or engineering, at least if matter is not taken as something that behaves. Anyway, any method for creating formal descriptions invokes mathematics.

Here it becomes difficult, because mathematics does not provide us with any means to deal with emergence. We can’t, of course, blame mathematics for that. It is in principle not possible to map emergence onto an apriori defined set of symbols and operations.

The only way to approximate an appropriate approach is a probabilistic methodology that also provides the means to distinguish various levels of integration. The first half of this program is easy to accomplish, the second less so. Since emergence is a creative process, it induces the necessity of interpretation as a constructive principle. Precisely this has been digested by behavioral science into the practice of the behavioral catalog.

1. This Essay

Well, here in this essay I am not mainly interested in the behavior of animals or the sciences dealing with the behavior of animals. My intention was just to give an illustration of the problematic field that is provoked by the “fact” of the animals and their “behavior”. The most salient issue in this problematic field is irreducibility, in turn caused by complexity and the patterning resulting from it. The second important part of this field is given by the methodological answers to these concerns, namely the structured probabilistic approach, which responds appropriately to the serial characteristics of the patterns, that is, to the transitional consistency of the observed entity as well as of the observational recordings.

The first of these issues—irreducibility—we need not discuss in detail here. We did this before, in a previous essay and in several locations. We just have to remember that empiricist reduction means attempting a sufficient description by dissecting the entity into its parts, thereby neglecting the circumstances, the dependency on the context, and the embedding into the fabric of relations that is established by other instances. In physics, there is no such fabric, there are just anonymous fields; in physics, there is no dependency on the context, hence form is not a topic in physics. As soon as form becomes an issue, we leave physics, entering either chemistry or biology. As said, we won’t go into further details about that. Here, we will deal mainly with the second part, yet with regard to two quite different use cases.

We will approach these cases, the empirical treatment of “observations” in computational linguistics and in urbanism, first from the methodological perspective, as both share certain conditions with the “analysis” of animal behavior. In chapter 8 we will give more pronounced reasons for this alignment, which at first sight may seem to be, well, a bit adventurous. The comparative approach, through its methodological arguments, will lead us to the emphasis of what we call the “behavioral turn”. The text and the city are regarded as the behaving entities, rather than the humans dealing with them.


2. The Inversion

Given the two main conceptual landmarks mentioned above—irreducibility and the structured probabilistic approach—that establish the problematic field of behavior, we now can do something exciting. We take the concept and its conditions, detach it from its biological origins and apply it to other entities where we meet the same or rather similar conditions. In other words, we practice a differential as Deleuze understood it [3]. So, we have to spend a few moments dealing with these conditions.

Slightly rearranged and a bit more abstract than in the behavioral sciences, these conditions are:

  • – There are patterns that appear in various forms, despite being made from the same elements.
  • – The elements that contribute to the patterns are structurally different.
  • – The elements are not all plainly visible; some, most or even the most important are only implied.
  • – Patterns are arranged in patterns, implying that patterns are also elements, despite the fact that there is no fixed form for them.
  • – The arrangement of elements and patterns into other patterns is dependent on the context, which in turn can be described only in probabilistic terms.
  • – Patterns can be classified into types or families; the classification, however, is itself non-trivial, that is, it is not given in advance.
  • – The context is given by variable internal and external influences, which imply a certain persistence of the embedding of the observed entity into its spatial, temporal and relational neighborhood.
  • – There is a significant symbolic “dimension” in the observation, meaning that the patterns we observe occur in sequence space upon an alphabet of primitives, not just in numerical space. This symbolistic account is invoked by the complexity of the entity itself. Actually, the difference between symbolic and numerical sequences and patterns is much less than categorical, as we will see. Yet, it makes a large difference either to include or to exclude the methodological possibility for symbolic elements in the observation.

Whenever we meet these conditions, we can infer the presence of the above mentioned problematic field, which is mainly given by irreducibility and—as its match in the methodological domain—the practice of a structured probabilistic approach. This list provides us an extensional circumscription of abstract behavior.

A slightly different route into this problematic field draws on the concept of complexity. Complexity, as we understand it by means of the 5 elements provided above (for details see the full essay on this subject), can itself be inferred by checking for the presence of the constitutive elements. Once we see antagonisms, compartments, and standardization, we can expect emergence and sustained complexity, which in turn means that the entity is not reducible, and hence that a particular methodological approach must be chosen.

We also can clearly state what should not be regarded as a member of this field. The most salient item is the neglect of individuality. The second, now in the methodological domain, is the destruction of relationality, as is most easily accomplished by referring to raw frequency statistics. It should be obvious that destroying the serial context in an early step of the methodological mapping from observation to number also destroys any possibility to understand the particularity of the observed entity. The resulting picture will not only be coarse; most probably it will also be utterly wrong, and even worse, there is no chance to recognize this departure into an area that is free from any sense.

3. The Targets

At the time of writing this essay, there are three domains that suffer most from the reductionist approach. Well, two and a half, maybe, as the third, genetics, is on the way to overcome the naïve physicalism of former days.

This does not hold for the other two areas, urbanism and computational linguistics, at least as far as it is relevant for text mining and information retrieval.1 The dynamics in the respective communities are of course quite complicated, actually too complicated to achieve a well-balanced point of view here in this short essay. Hence, I am asking to excuse the inevitable coarseness of treating those domains as if they were homogeneous. Yet, I think that in both areas the mainstream is seriously suffering from a misunderstood scientism. In some way, people there strangely enough behave more positivist than researchers in the natural sciences.

In other words, we follow the question of how to improve the methodology in the two fields of urbanism and the computerized treatment of textual data. It is clear that the question about methodology implies a particular theoretical shift. This shift we would like to call the “behavioral turn”. Among other changes, the “behavioral turn” as we construct it allows for overcoming the positivist separation between observer and observed without sacrificing the possibility of reasonable empirical modeling.2

Before we argue in a more elaborate manner about this proposed turn in relation to textual data and urbanism, we first would like to accomplish two things. First, we briefly introduce two methodological concepts that deliberately try to cover the context of events, where those events are conceived as part of a series that always also develops into a kind of network of relations. Thus, we avoid conceiving of events as a series of separated points.

Second, we will discuss the current mainstream methodology in the two fields that we are going to focus on here. I think that the investigation of the assumptions of these approaches, which often remain hidden, sheds some light onto the arguments that support the reasonability of the “behavioral turn”.

4. Methodology

The big question that remains is thus: how to deal with the observations that we can make in and about our targets, the text or the city?

There is a clear starting point for selecting a method that could be considered appropriate: the method should inherently respond to the seriality of the basic signal. A well-known method of choice for symbolic sequences are Markov chains; other important ones are random contexts and random graphs. In the domain of numerical sequences, wavelets are the most powerful way to represent various aspects of a signal at once.

Markov Processes

A Markov chain is the outcome of applying the theory of Markov processes to a symbolic sequence. A Markov process is a neat description of the transitional order in a sequence. We also may say that it describes the conditional probabilities for the transitions between any subset of elements. Well, in this generality it is difficult to apply. Let us thus start with the simplest form, the Markov process of 1st order.

A 1st order Markov process describes just and only all pairwise transitions that are possible for a given “alphabet” of discrete entries (symbols). These transitions can be arranged in a so-called transition matrix if we follow the convention of using the preceding part of the transitional pair as the row header and the succeeding part as the column header. If a certain transition occurs, we enter a tick into the respective cell, given by the address row × column, which derives from the pair prec -> succ. That’s all. At least for the moment.

Such a table captures in some sense the transitional structure of the observed sequence. Of course, it captures only a simple aspect, since the next pair does not know anything about the previous pair. A 1st order Markov process is thus said to have no memory. Yet, it would be a drastic misunderstanding to generalize the absence of memory to any kind of Markov process. Actually, Markov processes can precisely be used to investigate the “memories” in a sequence, as we will see in a moment.
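As a minimal sketch (in Python; the function name and the toy “behavioral alphabet” are my own illustrations, not taken from any behavioral-science package), such a transition table can be built like this:

```python
def transition_matrix(sequence):
    """Count all pairwise transitions (prec -> succ) in a symbolic sequence.

    Returns the sorted alphabet and a nested dict counts[prec][succ]:
    rows are the preceding symbol, columns the succeeding one.
    """
    alphabet = sorted(set(sequence))
    counts = {a: {b: 0 for b in alphabet} for a in alphabet}
    for prec, succ in zip(sequence, sequence[1:]):
        counts[prec][succ] += 1
    return alphabet, counts

# a toy behavioral record: 'r'est, 'g'room, 'w'alk
alphabet, counts = transition_matrix(list("rgrgwrgrgw"))
```

A 2nd order process would simply pair `sequence` with `sequence[2:]` instead of `sequence[1:]`.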

Anyway, on any such transition table we can do smart statistics, for instance to identify transitions that are salient for their “exceptionally” high or low frequency. Such reasoning takes into account the marginal frequencies of the table and is akin to correspondence analysis. Van Hooff developed this “adjusted residual method” and applied it with great success in the analysis of observational data on chimpanzees [4][5].

These residuals are residuals against a null-model, which in this case is the plain distribution. In other words, the reasoning is simply the same as always in statistics: establish a suitable ratio of observed/expected, and then determine the reliability of a certain selection that is based on that ratio. In the case of transition matrices, the null-model states that all transitions occur with the same frequency. This is, of course, simplifying, but it is also simple to calculate. There are some assumptions in the whole procedure that are worth mentioning.

The most important assumption of the null-model is that all elements used to set up the transition matrix are independent of each other, except for their 1st order dependency, of course. This also means that the null-model assumes equal weights for the elements of the sequence. It is quite obvious that we should assume so only at the beginning of the analysis. The third important assumption is that the process is stationary, meaning that the kind and the strength of the 1st order dependencies do not change over the entire observed sequence.
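The residual reasoning can be sketched as follows. I assume here Haberman’s standard formula for adjusted residuals in contingency tables, with expectations derived from the marginal frequencies (whether van Hooff’s variant differs in detail I leave open):

```python
import math

def adjusted_residuals(table):
    """Adjusted residuals for a frequency table given as a list of rows.

    Expected counts derive from the row and column margins; values well
    above ~1.96 flag cells occurring more often than the null-model of
    independent margins would predict.
    """
    n = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    res = []
    for i, row in enumerate(table):
        out = []
        for j, obs in enumerate(row):
            exp = row_sums[i] * col_sums[j] / n
            var = exp * (1 - row_sums[i] / n) * (1 - col_sums[j] / n)
            out.append((obs - exp) / math.sqrt(var) if var > 0 else 0.0)
        res.append(out)
    return res

# a toy 2x2 transition table with a strong diagonal
res = adjusted_residuals([[30, 5], [5, 30]])
```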

Yet, nothing forces us to stick to 1st order Markov processes, or to apply them globally. A 2nd order Markov process could be formulated which would map all transitions x(i)->x(i+2). We may also formulate a dense process across orders, just by overlaying all orders from 1 to n into a single transition matrix.

Proceeding this way, we end up with an ensemble of transitional models. Such an ensemble is suitable for the comparative probabilistic investigation of the memory structure of a symbolic sequence produced by a complex system. Matrices can be compared (“differenced”) regarding their density structure, revealing even subtle ties between elements across several steps in the sequence. Provided the observed sequence is long enough, single transition matrices as well as ensembles thereof can be resampled on parts of sequences in order to partition the global sequence, that is, to identify locally stable parts of the overall process.

Here you may well think that this sounds like a complicated work-around for a Hidden Markov Model (HMM). Yet, although an HMM is more general than the transition matrix perspective in some respects, it is also less rich. In an HMM, the multiplicity is—well—hidden. It reduces the potential complexity of sequential data into a single model, again with the claim of global validity. Thus, HMMs are somehow more suitable the closer we are to physics, e.g. in speech recognition. But even there their limitation is quite obvious.

From the domain of ecology we can import another trick for dealing with the transitional structure. In ecosystems we can observe so-called succession: certain arrangements of species and their abundances follow one another rather regularly, yet probabilistically, often heading towards some stable final “state”. Given a limited observation of such transitions, how can we know about the final state? Using the transition matrix, the answer can be found simply by a two-fold operation of multiplying the matrix with itself and intermittent filtering by renormalization. This procedure acts as a frequency-independent filter. It helps to avoid type-II errors when applying the adjusted residuals method, that is, transitions with a weak probability will be less likely dismissed as irrelevant.
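The two-fold operation might be sketched like this (pure Python, names mine; repeated squaring of the row-normalized transition matrix drives all rows towards the same long-run distribution, the “final state”):

```python
def normalize_rows(m):
    """Renormalize each row to sum to 1, turning counts into probabilities."""
    out = []
    for row in m:
        s = sum(row)
        out.append([v / s for v in row] if s else list(row))
    return out

def matmul(a, b):
    """Plain matrix multiplication for lists of lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def final_state(counts, steps=30):
    """Square the transition matrix repeatedly, renormalizing in between;
    the rows converge towards the stable long-run distribution."""
    p = normalize_rows(counts)
    for _ in range(steps):
        p = normalize_rows(matmul(p, p))
    return p

# toy succession: state 0 tends towards state 1, state 1 tends to persist
p = final_state([[1, 3], [1, 9]])
```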

Contexts

The method of Markov processes is powerful, but it suffers from a serious problem, introduced by the necessity to symbolize certain qualities of the signal in advance of its use in modeling.

We can’t use Markov processes directly on raw textual data. Doing so would trap us in the symbolistic fallacy. We would either ascribe a meaning to the symbol itself—which would result in a violation of the primacy of interpretation—or we would conflate the appearance of a symbol with its relevance, which would constitute a methodological mistake.

The way out of this situation is provided by a consequent probabilization. Generally, we may well say that probabilization plays the same role for the quantitative sciences as the linguistic turn did for philosophy. Yet, it is still an attitude that is largely neglected as a dedicated technique almost everywhere in science. (For an example application of probabilization with regard to evolutionary theory see this.)

Instead of taking symbols as they are pretended to be found “out there”, we treat them as the outcome of an abstract experiment, that is, as a random variable. Random variables establish themselves not as dual concepts, as 1 or 0, to be or not to be; they establish themselves as a probability distribution. Such a distribution potentially contains an infinite number of discretizations. Hence, probabilistic methods are always more general than those which rely on “given” symbols.

Kohonen et al. proposed a simple way to establish a random context [6]. The step from symbolic crispness to a numerical representation is not trivial, though. We need a double-articulated entity that is “at home” in both domains. This entity is a high-dimensional random fingerprint. Such a fingerprint consists simply of a large number, well above 100, of random values from the interval [0..1]. According to the lemma of Hecht-Nielsen [7], any two such vectors are approximately orthogonal to each other. In other words, it is a name expressed by numbers.
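A quick sketch of this quasi-orthogonality (plain Python; note that the property holds for zero-mean vectors, so the [0..1] values are centered here, and the dimension and seed are arbitrary choices of mine):

```python
import math
import random

def fingerprint(dim, rng):
    """A random fingerprint: values drawn from [0..1], centered to zero mean."""
    return [rng.random() - 0.5 for _ in range(dim)]

def cosine(u, v):
    """Cosine of the angle between two vectors; ~0 means quasi-orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

rng = random.Random(42)
u = fingerprint(1000, rng)
v = fingerprint(1000, rng)
c = cosine(u, v)  # close to zero in high dimensions
```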

After recoding all symbols in a text into their random fingerprints, it is easy to establish probabilistic distributions of the neighborhood of any word. The result is a random context, also called a random graph. The basic trick to accomplish such a distribution is to select a certain fixed size for the neighborhood—say five or seven positions in total—and then always arrange the word of interest at a certain position, for instance the middle position.

This procedure is repeated for all words in a text, or for any symbolic series. Doing so, we get a collection of overlapping random contexts. The final step is then a clustering of the vectors according to their similarity.
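The construction might look like this (a sketch in Python; toy one-dimensional codes stand in for the high-dimensional random fingerprints so the result stays readable, and zero-padding at the text borders is my assumption):

```python
def random_contexts(words, codes, window=5):
    """Concatenate the fingerprints of a fixed-size neighborhood around each
    word (the word of interest at the middle position) into one vector."""
    half = window // 2
    dim = len(next(iter(codes.values())))
    pad = [0.0] * dim  # assumed: zero vectors beyond the borders of the text
    contexts = []
    for i in range(len(words)):
        ctx = []
        for j in range(i - half, i + half + 1):
            ctx += codes[words[j]] if 0 <= j < len(words) else pad
        contexts.append(ctx)
    return contexts

# toy "fingerprints" of dimension 1
codes = {"a": [1.0], "b": [2.0], "c": [3.0]}
ctxs = random_contexts(["a", "b", "c", "b"], codes, window=3)
```

The overlapping context vectors would then be fed into the clustering step, e.g. a SOM.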

It is quite obvious that this procedure, as proposed by Kohonen, sticks to strong assumptions despite its turn to probabilization. The problem is the fixed order, that is, in his implementation the order is independent of the context. Thus his approach is still limited in the same way as the n-gram approach (see chp.5.3 below). Yet, sometimes we meet strong inversions and extensions of relevant dependencies between words. Linguists speak of islands with regard to wh*-phrases. Anaphors are another example. Chomsky criticized the approach of fixed-size contexts very early.

Yet, there is no necessity to limit the methodology to fixed-size contexts, or to symmetrical instances of probabilistic contexts. Yes, this will result in a situation where we corrupt the tabularity of the data representation: rows differ in their length, and there is (absolutely) no justification to enforce a proper table by filling “missing values” into the “missing” cells of the table.

Fortunately, there is another (probabilistic) technique that can be used to arrive at a proper table without distorting the content by adding missing values. This technique is random projection, first identified by Johnson & Lindenstrauss (1984), which in the case of free-sized contexts has to be applied in an adaptive manner (see [8] or [9] for a more recent overview). Usually, a source (n*p) matrix (n=rows, p=columns=dimensions) is multiplied with a (p*k) random matrix (where the random numbers follow a Gaussian distribution), resulting in a target matrix of only k dimensions and n rows. This way a matrix of 10000+ columns can be projected into one made of only 100 columns without losing much information. Yet, using the lemma of Hecht-Nielsen we can compress any of the rows of a matrix individually. Since the random vectors are approximately orthogonal to each other, we won’t introduce spurious information across the data vectors that are going to be fed into the SOM. This stepwise operation becomes quite important for large amounts of documents, since in this case we have to adopt incremental learning.
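The basic (non-adaptive) version can be sketched in plain Python; the 1/sqrt(k) scaling that keeps distances roughly unchanged is a standard choice of mine, not taken from [8] or [9]:

```python
import math
import random

def random_projection(rows, k, rng):
    """Multiply an (n x p) matrix with a (p x k) Gaussian random matrix,
    scaled by 1/sqrt(k); the resulting (n x k) matrix approximately
    preserves pairwise distances (Johnson & Lindenstrauss)."""
    p = len(rows[0])
    r = [[rng.gauss(0.0, 1.0) / math.sqrt(k) for _ in range(k)]
         for _ in range(p)]
    return [[sum(row[i] * r[i][j] for i in range(p)) for j in range(k)]
            for row in rows]

def dist(u, v):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

rng = random.Random(1)
rows = [[rng.gauss(0.0, 1.0) for _ in range(1000)] for _ in range(3)]
proj = random_projection(rows, 100, rng)
ratio = dist(proj[0], proj[1]) / dist(rows[0], rows[1])  # close to 1
```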

Thus we approach, slowly but steadily, the generalized probabilistic context that we described earlier. The proposal is simply that in dealing with texts by means of computers we have to apply precisely the most general notion of context, one that is devoid of the structural preoccupations we meet e.g. in the case of n-grams or Markov processes.

5. Computers Dealing with Text

Currently, so-called “text mining” is a hot topic. More and more of human communication is supported by digitally based media and technologies, hence more and more texts are accessible to computers without much effort. People try to use textual data from digital environments, for instance, to do sentiment analysis about companies, stocks, or persons, mainly in the context of marketing. The craziness there is that they pretend to classify a text’s sentiment without understanding it, based more or less on the frequency of scattered symbols.

The label “text mining” is reminiscent of “data mining”; yet, the structures of the two endeavors are drastically different. In data mining one is always interested in the relevant variables, in order to build a sparse model that could even be understood by human clients. The model in turn is used to optimize some kind of process from which the data for modeling has been extracted.

In the following we will describe some techniques, methods and attitudes that are highly unsuitable for the treatment of textual “data”, despite the fact that they are widely used.

Fault 1 : Objectivation

The most important difference between the two flavors of “digital mining” concerns, however, the status of the “data”. In data mining, one deals with measurements that are arranged in a table. This tabular form is only possible on the basis of a preceding symbolization, which additionally is strictly standardized in advance of the measurement.

In text mining this is not possible. There are no “explanatory” variables that could be weighted. Text mining thus just means finding a reasonable selection of texts in response to a “query”. For textual data it is not possible to give any criterion for how to look at a text, how to select a suitable reference corpus for determining any property of the text, or simply how to compare it to other texts before its interpretation. There are no symbols, no criteria that could be filled into a table. And most significantly, there is no target that could be found “in the data”.

It is devoid of any sense to try to optimize a selection procedure by means of a precision/recall ratio. This would mean that the meaning of a text could be determined objectively before any interpretation, or, likewise, that the interpretation of a text is standardizable up to a formula. Neither is possible; claiming otherwise is ridiculous.

People responded to these facts with a fierce endeavor, which ironically is called “ontology”, or even “semantic web”. Yet, neither will the web ever become “semantic”, nor is database-based “ontology” a reasonable strategy (except for extremely standardized tasks). The idea in both cases is to determine the meaning of an entity before its actual interpretation. This of course is utter nonsense, and the fact that it is nonsense is also the reason why the so-called “semantic web” never started to work. These guys should really do more philosophy.

Fault 2 : Thinking in Frequencies

A popular family of measures for describing the difference between texts are the variants of the so-called tf-idf measure. “tf” means “term frequency” and describes the normalized frequency of a term within a document. “idf” means “inverse document frequency”, which refers to the (inverted) frequency of the documents in a corpus that contain the word.
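For concreteness, here is one common variant of the measure (definitions of tf and idf differ across the literature; this sketch and the toy corpus are our own illustration):

```python
import math
from collections import Counter

def tf_idf(docs):
    """One common variant: tf = relative term frequency within a document,
    idf = log(N / number of documents containing the term)."""
    N = len(docs)
    df = Counter()                 # document frequency per term
    for doc in docs:
        df.update(set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({t: (tf[t] / len(doc)) * math.log(N / df[t]) for t in tf})
    return scores

docs = [["the", "brown", "cow"],
        ["the", "black", "cow"],
        ["the", "brown", "dog"]]
scores = tf_idf(docs)
# "the" occurs in every document, so its idf -- and thus its score -- is zero
```

Note that the measure operates exclusively on counts of graphemic units; nothing in it touches what the terms mean, which is exactly the point of the critique that follows.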

The frequency of a term, however differentiated, can hardly be taken as the relevance of that term given a particular query. To cite the example from the respective entry in Wikipedia: what is “relevant” in selecting a document by means of the query “the brown cow”? Sticking to terms makes sense if and only if we accept an a priori contract about the strict limitation to the level of the terms. Yet, this has nothing to do with meaning. Absolutely nothing. It is comparing pure graphemes, not even symbols.

Even if it were related to meaning, it would be the wrong method. Simply think of a text that contains three chapters: chapter one about brown dogs, chapter two about the relation between (lilac) cows and chocolate, chapter three about black & white cows. There is no phrase about a brown cow in the whole document; yet, it would certainly be selected as highly significant by the search engine.

This example nicely highlights another issue. The above-mentioned hypothetical text could nevertheless be highly relevant, yet only in the moment the user sees it, triggering some idea that before was not even on the radar. Quite obviously, even though the search would then probably have been phrased differently, the fact remains that the meaning is neither in the ontology nor in the frequency, and also not in the text as such, prior to the actual interpretation by the user. The issue becomes more serious if we consider slightly different colors that still could count as “brown”, yet with a completely different spelling. And even more so if we take anaphoric arrangements into account.

The above-mentioned method of Markov processes helps a bit, but of course not completely.

Astonishingly, even the inventors of the WebSom [6], probably the best model for dealing with textual data so far, commit the frequency fallacy. As input for the second-level SOM they propose a frequency histogram. Completely unnecessarily, I have to add, since the text “within” the primary SOM can easily be mapped to a Markov process, or of course to probabilistic contexts. Interestingly, any such processing that brings us from the first to the second layer is somewhat more reminiscent of image analysis than of text analysis. We mentioned that already earlier, in the essay “Waves, Words and Images”.

Fault 3 : The Symbolistic Fallacy (n-grams & co.)

Another really popular methodology for dealing with texts is n-grams. N-grams are related to Markov processes, as they also take the sequential order into account. Take for instance (again the example from Wikipedia) the sequence “to be or not to be”. The transformation into a 2-gram (or bigram) representation looks like this: “to be, be or, or not, not to, to be” (items separated by commas), while the 3-gram transformation produces “to be or, be or not, or not to, not to be”. In this way, the n-gram can be conceived as a small extract from a transition table of order (n-1). N-grams share a particular weakness with simple Markov models, namely the failure to capture long-range dependencies in language. These can be addressed only by means of deep grammatical structures. We will return to this point in the discussion of Fault 4 (Structure as Meaning) below.
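The transformation is mechanical; a minimal sketch (the windowing is generic, not tied to any specific n-gram library):

```python
def ngrams(tokens, n):
    """All windows of n consecutive tokens, i.e. a small extract
    from a transition table of order n-1."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "to be or not to be".split()
bigrams = ngrams(tokens, 2)
# -> [('to','be'), ('be','or'), ('or','not'), ('not','to'), ('to','be')]
trigrams = ngrams(tokens, 3)
# -> [('to','be','or'), ('be','or','not'), ('or','not','to'), ('not','to','be')]
```

Note that the procedure never looks beyond its fixed window of n tokens, which is precisely why long-range dependencies escape it.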

The strange thing is that people drop the tabular representation, thus destroying the possibility of calculating things like adjusted residuals. Actually, n-grams are mostly just counted, which commits Fault 2, thinking in frequencies, as described above.

N-grams help to build queries against databases that are robust against extensions of words, that is, prefixes, suffixes, or inflected verb forms. All this has, however, nothing to do with meaning. It is a basic and primitive means of making symbolic queries upon symbolic storages more robust. Nothing more.

The real problem is the starting point: taking the term as such. N-grams start with individual words that are blindly taken as symbols. Within the software doing n-grams, they are even replaced by some arbitrary hash code; i.e., the software does not see a “word”, it deals just with a chunk of bits.

In this way, using n-grams for text search commits the symbolistic fallacy, similar to ontologies, but on an even more basic level. In turn this means that the symbols are taken as “meaningful” in themselves. This results in a hefty collision with the private language argument put forward by Wittgenstein a long time ago.

N-grams are certainly more advanced than the nonsense based on tf-idf. Their underlying intention is to reflect contexts. Nevertheless, they fail as well. The ultimate reason for the failure is the symbolistic starting point. N-grams are only a first, though far too trivial and simplistic, step into probabilization.

There is already a generalization of n-grams available, as described in papers published by Kohonen & Kaski: random graphs, based on random contexts, as we described it above. Random graphs overcome the symbolistic fallacy, especially if used together with a SOM. Honestly, I have to say that random graphs imply the necessity of a classification device like the SOM. This should not be considered a drawback, since n-grams are often used together with Bayesian inference anyway. Bayesian methods are, however, not able to distil types from observations as SOMs are able to do. That indeed is a drawback, since in language learning the probabilistic approach must necessarily be accompanied by the concept of (linguistic) types.

Fault 4 : Structure as Meaning

Deep grammatical structure is an indispensable part of human languages. It is present from the sub-word level up to the level of rhetoric. And it gets really complicated. There is a wealth of rules, most of them to be followed rather strictly, but some of them applied only in a loose manner. Yet, all of them are rules, not laws.

Two issues come up here that are related to each other. The first one concerns the learning of a language. How do we learn a language? Wittgenstein proposed: simply by being shown how to use it.

The second issue concerns the status of models about language. Wittgenstein repeatedly mentioned that there is no possibility of a meta-language, and after all we know that Carnap’s program of a scientific language failed (completely). Thus we should be careful when applying a formalism to language, whether it is some kind of grammar or any of the advanced linguistic “rules” that we know of today (see the lexicon of linguistics for that). We have to be aware that these symbolistic models are only projective lists of observations, arranged according to some standard of a community of experts.

Linguistic models are drastically different from models in physics or any other natural science, because in linguistics there is no outer reference. (Computational) linguistics is mostly at the stage of a Babylonian list science [10], doing more tokenizing than providing useful models, comparable to biology in the 18th century.

Language is a practice. It is a practice of human beings, equipped with a brain and embedded in a culture. In turn, language itself contributes to cultural structures and is embedded into them. There are many spatial, temporal and relational layers and compartments to distinguish. Within such arrangements, meaning happens in the course of an ongoing interpretation, which in turn is always a social situation. See Robert Brandom’s Making it Explicit as an example of an investigation of this aspect.

What we definitely have to be aware of is that projecting language onto a formalism, or subordinating language to an a priori defined or standardized symbolism (as in formal semantics), loses essentially everything language is made from and refers to. Any kind of model of a language implicitly also claims that language can be detached from its practice and from its embedding without losing its main “characteristics”, its potential and its power. In short, it is the claim that structure conveys meaning.

This brings us to the question of the role of structure in language. It is a fact that humans not only understand sentences full of grammatical “mistakes”, and quite well so; in spoken language we almost always produce sentences that are full of grammatical mistakes. In fact, “mistakes” are so abundant that it becomes questionable to take them as mistakes at all. Methodologically, linguistics is thus falling back into a control science, forgetting about the role and the nature of symbolic rules such as those established by grammar. The nature is an externalization; the role is to provide a standardization, a common basis, for performing the interpretation of sentences and utterances in a reasonable time (almost immediately) and in a more or less stable manner. The empirical “given” of a sentence alone, even of a whole text alone, can provide enough evidence neither for starting an interpretation, nor even for finishing it. (Note that a sentence is never a “given”.)

Texts as well as spoken language are nothing that could be controlled. There is no outside of language that would justify that perspective. And finally, a model should allow for suitable prediction, that is, it should enable us to perform a decision. Here we meet Chomsky’s call for competence. In the case of language, a linguistic model should be able to produce language as a proof of concept. Yet, any attempt so far has failed drastically, which actually is not really a surprise. By this point, at the latest, it should become clear that the formal models of linguistics, and of course all the statistical approaches to “language processing” (another crap term from computational linguistics), are flawed in a fundamental way.

From the perspective of our interests here on the “Putnam Program”, we conceive of formal properties as Putnam did in his “The Meaning of ‘Meaning’”. Formal properties are just that: properties among other properties. In our essay about modeling we proposed to replace the concept of properties by the concept of the assignate, in order to emphasize the active role of the modeling instance in constructing and selecting the factors. Sometimes we use formal properties of terms and phrases, sometimes not, dependent on context, purpose or capability. There is neither a strict tie of formal assignates to the entities “word” or “sentence”, nor could we detach them as part of a formal approach.

Fault 5 : Grouping, Modeling and Selection

Analytic formal models are a strange thing, because such a model essentially claims that there is no necessity for a decision any more. Once the formula is there, it claims global validity. The formula denies the necessity of taking the context into account as a structural element. It claims a perfect separation between observer and observed. The global validity also means that the weights of the input factors are constant, or even that there are no such weights. Note that the weights translate directly into the implied costs of a choice; hence formulas also claim that the costs are globally constant, or at least arranged in a smooth, differentiable space. This is of course far from any reality for almost any interesting context, and certainly for the contexts of language and urbanism, both deeply related to the category of the “social”.

This basic characteristic hence limits the formal symbolic approach to physical, if not just to celestial and atomic contexts. Trivial contexts, so to speak. Everywhere else something rather different is necessary. This different thing is classification as we introduced it first in our essay about modeling.

Searching for a text and considering a particular one as a “match” to the interests expressed by the search is a selection, much like any other “decision”. It introduces a notion of irreversibility. Searching itself is a difficult operation, even so difficult that it is questionable whether we should follow this pattern at all. As soon as we start to search we enter the grammatological domain of “searching”. This means that we claim the expressibility of our interests in the search statement.

This difficulty is nicely illustrated by an episode with Gary Kasparov in the context of his first battle against “Deep Blue”. Given the billions of operations the supercomputer performed, a journalist came up with the question “How do you find the correct move so fast?” Obviously, the journalist was not aware of the mechanics of that comparison. Kasparov answered: “I do not search, I just find it.” His answer is not perfectly correct, though, as he should have said “I just do it”. In a conversation we mostly “just do language”. We practice it, but we very rarely search for a word, an expression, or the like. Usually, our concerns are on the strategic level, or in terms of speech act theory, on the illocutionary level.

Thus we arrive at the intermediate result that we have some kind of non-analytical models on the one hand, and the performance of their application on the other. Our suggestion is that most of these models are situated on an abstract, orthoregulative level, and almost never on the representational level of the “arrangement” of words.

A model has a purpose, even if it is an abstract one. There are no models without purpose. The purpose is synonymous with the selection. Often, we do not explicitly formulate a purpose; we just perform selections in a consistent manner. It is this consistency in the selections that implies a purpose. The really important thing to understand is that the abstract notion of purpose is also synonymous with what we call “perspective”, or point of view.

One could mention here the analytical “models”, but those “models” are not models, because they are devoid of a purpose. Given any interesting empirical situation, everybody knows that things may look quite different depending on the “perspective” we take, or in our words, on which abstract purpose we impose on the situation. The analytic approach denies such “perspectivism”.

The strange thing now is that many people mistake the mere clustering of observations on the basis of all contributing or distinguished factors for a kind of model. Of course, that grouping will radically change if we withdraw some of the factors, keeping only a subset of all available ones. Not only does the grouping change; the achievable typology and any further generalization will also be very different. In fact, any purpose, and even the tuning of the attitude towards the risk (costs) of unsuitable decisions, changes the set of suitable factors. Nothing could better highlight the nonsense of calling naïve take-it-all clustering “unsupervised modeling”. First, it is not a model. Second, any clustering algorithm or grouping procedure follows some optimality criterion, that is, it is supervised by that criterion despite the claim to the contrary. “Unsupervised modeling” implicitly claims that it is possible to build a suitable model by purely analytic means, without any reference to the outside at all. This is, of course, not possible. It is this claim that introduces a contradiction into the practice itself, because clustering usually means classification, which is not an analytic move at all. Due to this self-contradiction the term “unsupervised modeling” is utter nonsense. It is not only nonsense, it is even deceiving, as people get vexed by the term itself: they indeed believe that they are modeling in a suitable manner.
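The dependence of the grouping on the selected factors is easy to demonstrate with a toy example (invented data, and a plain nearest-neighbour comparison standing in for a full clustering algorithm):

```python
import numpy as np

# Three observations, two factors; which pair counts as "similar"
# depends entirely on the factors we decide to keep.
X = np.array([[0.0, 10.0],
              [0.1,  0.0],
              [5.0,  9.9]])

def nearest(a, data):
    """Index of the row closest to row `a` (Euclidean distance, self excluded)."""
    d = np.linalg.norm(data - data[a], axis=1)
    d[a] = np.inf
    return int(np.argmin(d))

both = nearest(0, X)               # with both factors, row 0 pairs with row 2
first_only = nearest(0, X[:, :1])  # keeping only the first factor, it pairs with row 1
```

The selection of the factors, not the algorithm, decides which observations end up together, which is exactly why the selection amounts to an implicit purpose.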

Now back to the treatment of texts. One of the most advanced procedures (a non-analytical one) is the WebSom. We described it in more detail in previous essays (here and here). Yet, as the second step, Kohonen proposes clustering as a suitable means of deciding about the similarity of texts. He commits exactly the same mistake as described before. The trick, of course, is to introduce (targeted) modeling into the comparison of texts, despite the fact that there are no possible criteria a priori. What seems irresolvable disappears as a problem, however, if we take into account the self-referential relations of discourses, which necessarily engrave themselves into the interpreter as self-modifying structural learning and historical individuality.

6. The Statistics of Urban Environments

The Importance of Conceptual Backgrounds

There is no investigation without an implied purpose, simply because any investigation usually has to perform many selections rather than just a few. One of the more influential selections to be performed concerns the scope of the investigation. We already met this issue above when we discussed the state of affairs in the behavioral sciences.

Considering investigations about social entities like urban environments, architecture or language, “scope” largely refers to the status of the individual, and in turn to the status of time, that we instantiate in our investigation. Both together establish the dimension of form as an element of the space of expressibility that we choose for the investigation.

Is the individual visible at all? I mean, in the question, in the method, and after applying a methodology? For instance, as soon as we ask about matters of energy, individuals disappear. They also disappear if we apply statistics to raw observations, even if at first hand we would indeed observe individuals as individuals. To retain the visibility of individuals as individuals in a set of relations, we first have to apply proper means. It is clear that any cumulative measure, like those from socio-economics, also causes the disappearance of the context and the individual.

If we keep the individuals alive in our method, the next question we have to ask concerns the relations between the individuals. Do we keep them or do we drop them? Finally, regarding the unfolding of the processes that result from the temporal dynamics of those relations, we have to select whether we want to keep aspects of form or not. If you think that the way a text unfolds, or the way things happen in the urban environment, is at least as important as their presence, well, in this case you would have to care about patterns.

It is rather crucial to understand that these basic selections determine the outcome of an investigation, as well as of any modeling or even theory building, as grammatological constraints. Once we have taken a decision on the scope, the problematics of that choice becomes invisible, completely transparent. This is the actual reason why choosing a reductionist approach as the first step is so questionable.

In our earlier essay about the belief system in modernism we emphasized the inevitability of the selection of a particular metaphysical stance, way before we even think about the scope of an investigation in a particular domain. In the case of modernist thinking, from positivism to existentialism, including any shape of materialism, the core of the belief system is metaphysical independence, shaping everything all the way down to politics, methods, tools, attitudes and strategies. If you wonder whether there is an alternative to modernist thinking, take a look at our article where we introduce the concept of the choreostemic space.

Space Syntax

In the case of “Space Syntax”, the name is the program. The approach is situated in urbanism; it has been developed, and is still being advocated, by Bill Hillier. Originally, Hillier was a geo-scientist, which is somewhat important for following his methodology.

Put in a nutshell, the concept of Space Syntax claims that the description of the arrangement of free space in a built environment is necessary and sufficient for describing the quality of a city. The method of choice for describing that arrangement is statistics, either through the concept of the probabilistic density of people, or through the concept of regression, relating physical characteristics of free space to the density of people. Density in turn is used to capture the effect of collective velocity vectors. If people start to slow down, walking around in different directions, density increases. Density of course also increases as a consequence of narrow passages; yet, in this case the vectors are strongly aligned.
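The two notions invoked here, density and the alignment of velocity vectors, can be sketched numerically. The functions and the sample vectors below are our own illustration, not part of Hillier's tooling:

```python
import numpy as np

def density(positions, area):
    """People per unit area in a given region."""
    return len(positions) / area

def alignment(velocities):
    """Mean resultant length of the unit velocity vectors:
    close to 1.0 when everyone moves in the same direction,
    close to 0.0 when directions scatter."""
    v = np.asarray(velocities, dtype=float)
    units = v / np.linalg.norm(v, axis=1, keepdims=True)
    return float(np.linalg.norm(units.mean(axis=0)))

# Narrow passage: movement is dense but strongly aligned
passage = alignment([[1.0, 0.0], [1.0, 0.1], [1.0, -0.1]])
# People slowing down and milling about: directions scatter
plaza = alignment([[1.0, 0.0], [-1.0, 0.2], [0.1, -1.0], [0.0, 1.0]])
```

The two densities may be identical while the alignment values differ sharply, which is why density alone cannot distinguish the two situations described in the text.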

The spatial behavior of individuals is a result and a means of social behavior in many animal species. Yet it makes a difference whether we consider the spatial behavior of individuals, or the arrangement of free space in a city as a constraint on individual spatial behavior. Hillier’s claim that “The Space is the Machine” mistakes the one for the other.

In his writings, Hillier commits the figure of the petitio principii over and over again. He starts with a strong belief in analytics, and upon that he tries to justify the use of analytical techniques. His claim of “The need for an analytic theory of architecture” ([11], p.40) is just one example. He writes:

The answer proposed in this chapter is that once we accept that the object of architectural theory is the non-discursive — that is, the configurational — content of space and form in buildings and built environments, then theories can only be developed by learning to study buildings and built environments as non-discursive objects.

Excluding the discourse as a constitutional element, only the analytic remains. He drops any relational account, focusing just on physical matter and postulating the meaning of physical things, i.e. meaning as an a priori in the physical things. His problem is simply his inability to distinguish different horizons of time, of temporal development. Dismissing time means dismissing memory, and of course culture as well. For a physicalist or ultra-modernist like him this blindness is constitutive. He will never understand the structure of his failure.

His dismissal of social issues as part of a theory serves eo ipso as his justification of the whole methodology. This is only possible due to another, albeit consistent, mistake: the conflation of theory and models. Hillier shows us over and over again only models, yet not any small contribution to an architectural theory. Applying statistics reflects a particular theoretical stance, but is not to be taken as such a theory! Statistics instantiates those models; that is, his architectural theory largely follows statistical theory. We have repeatedly pointed to the problems that appear if we apply statistics to raw observations.

The high self-esteem Hillier expresses in his nevertheless quite limited writings is topped by treating space as syntax, in other words as a trivial machine. Undeniably, human beings have a material body, and buildings take space as material arrangements. Undeniably, matter arranges space and constitutes space. There is considerable discussion in philosophy about how we could approach the problematic field of space. We won’t go into details here, but Hillier simply drops the whole issue.

Matter arranges in space. This quickly becomes a non-trivial insight if we change perspective from abstract matter, and the correlated claim of the possibility of reductionism, to spatio-temporal processes, where the relations are taken as a starting point. We directly enter the domain of self-organization.

By means of “Space Syntax”, Hillier claimed to provide a tool for planning districts of a city, or certain urban environments. If he restricted his proposals to certain aspects of the anonymized flow of people and vehicles, it would be acceptable as a method. But it is certainly not a proper tool for describing the quality of urban environments, or even for planning them.

Recently, he delivered a keynote speech [12] in which he apparently departed from his former Space Syntax approach, which reaches back to 1984. There he starts with the following remark:

On the face of it, cities as complex systems are made of (at least) two sub-systems: a physical sub-system, made up of buildings linked by streets, roads and infrastructure; and a human sub-system made up of movement, interaction and activity. As such, cities can be thought of as socio-technical systems. Any reasonable theory of urban complexity would need to link the social and technical sub-systems to each other.

This clearly is much less reductionist, at first sight at least, than “Space Syntax”. Yet, Hillier remains aligned with hard-core positivism. Firstly, in the whole speech he fails to provide a useful operationalization of complexity. Secondly, his Space Syntax simply appears wrapped in new paper. Agency for him is still just spatial agency; the relevant urban network for him is just the network of streets. Thirdly, it is bare nonsense to separate a physical and a human subsystem, and then to claim that lumping them together yields a socio-technical system. He is obviously unaware of more advanced and much more appropriate ways of thinking about culture, such as ANT, the Actor-Network-Theory (Bruno Latour), which precisely drops the categorical separation of the physical and the human. This separation was first criticized by Merleau-Ponty in the 1940s!

Hillier served us just as an example, but you may have got the point. Occasionally, one meets attempts that at least try to integrate a more appropriate concept of culture and human beings in urban environments. Think of Koolhaas and his AMO/OMA, for instance, despite the fact that Koolhaas himself also struggles with the modernist mindset (see our introductions to “JunkSpace” or “The Generic City”). Yet, he at least recognized that something is fundamentally problematic with it.

7. The Toolbox Perspective

Most of the interesting and relevant systems are complex. It is simply a methodological fault to use frequencies of observational elements to describe these systems, whether we are dealing with animals, texts, urban environments or people (dogs, cats) moving around in urban environments.

Tools provide filters; they respond to certain issues, both of the signal and of the embedding. Tools are artifacts for transformation. As such they establish the relationality between actors, things and processes. Tools produce and establish Heidegger’s “Gestell”, just as they constitute the world as a fabric of relations, as facts and acts, as Wittgenstein emphasized so often, already at the beginning of the Tractatus.

What we would like to propose here is a more playful attitude towards the usage of tools, including formal methods. By “playful” we refer to Wittgenstein’s rule following, but also to a certain kind of experimentation, not induced by theory, but rather triggered by the know-how of some techniques that are going to be arranged. Tools as techniques, or techniques as tools, are used to distil symbols from the available signals. Their relevancy is determined only by the subsequent step of classification, which in turn is (ortho-)regulated by strategic goals or cultural habits. Never, however, should we take a particular method as representative of the means to access meaning from a process, be it a text or an urban environment.

8. Behavior

In this concluding chapter we are going to try to provide more details about our move to apply the concept of behavior to urbanism and computational linguistics.

Text

Since Friedrich Schleiermacher in the 1830s, hermeneutics has emphasized a certain kind of autonomy of the text. Of course, the text itself is not a living thing as we consider it for animals. Before it “awakes” it has to be taken into mind matter, or more generally, it has to be interpreted. Nevertheless, an autonomy of the text remains, largely due to the fact that there is no Private Language. The language is not owned by the interpreting mind. Vilem Flusser proposed to radically turn the perspective around and to conceive of the interpreter as a medium for texts and other “information”, rather than the other way round.

Additionally, the working of the brain is complex, to say the least. Our relation to our own brain and our own mind is more that of an observer than that of a user, or even a controller. We experience them. Both together, the externality of language and the (partial) autonomy of the brain-mind, lead to an arrangement where the text becomes autonomous. It inherits complementary parts of independence from both parts of the world, the internal and the external.

Furthermore, human languages are unlimited in their productivity. Language is not only unlimited, it is also extensible. This pairs with its already mentioned deep structure, and not only concerning the grammatical structure. Using language, or better, mastering language, means to play with the inevitable inner contradictions that appear across the various layers, levels, aspects and processes of applied language. Within practiced language there are many time horizons, instantiated by structural and semantic pointers. These aspects render the original series of symbols into an associative network of active components, which contributes further to the autonomy of texts. Roland Barthes notes (in [17]) that

The Plural of the Text depends … not on the ambiguity of its contents but on what might be called the stereographic plurality of its weave of signifiers (etymologically, the text is a tissue, a woven fabric). The reader of the Text may be compared to someone at a loose end.

Barthes implicitly emphasizes that the text does not convey a meaning: the meaning is not in the text, and it can’t be conceived as something externalizable. In this essay he also holds that a text can’t be taken as just a single object. It is a text only in the context of other texts, and so the meaning that it develops upon interpretation is also dependent on the corpus into which it is embedded.

Methodologically, this (again) highlights the problematics that Alan Hájek called the reference class problem [13]. It is impossible for an interpreter to develop the meaning of a text outside of a previously chosen corpus. This dependency is inherited by any phrase, any sentence and any word within the text. Even a label like “IBM”, which seems to be bijectively unique regarding the mapping of the grapheme to its implied meaning, depends on it. Of course, it will always refer somehow to the company. Yet, without the larger context it is not clear in any sense to which aspect of that company and its history the label refers in a particular case. In literary theory this is called intertextuality. Furthermore, it is almost palpable in this example that signs refer only to signs (the cornerstone of Peircean semiotics), and that concepts are nothing that could be defined (as we argued earlier in more detail).
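This corpus dependence can be made palpable with a small sketch (our own illustration, not a method from any of the cited works; the two mini-corpora and the simple tokenization are invented for this purpose): the contextual profile that a crude co-occurrence count assigns to the grapheme “ibm” changes entirely with the embedding corpus.

```python
from collections import Counter

def context_profile(corpus, target, window=2):
    """Count the words co-occurring with `target` within +/- `window` positions."""
    profile = Counter()
    for sentence in corpus:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            if w == target:
                lo, hi = max(0, i - window), i + window + 1
                profile.update(words[lo:i] + words[i + 1:hi])
    return profile

# Two hypothetical mini-corpora embedding the same label "ibm"
corpus_hardware = [
    "ibm builds mainframe hardware",
    "the mainframe hardware of ibm dominates",
]
corpus_history = [
    "ibm punchcard machines shaped office history",
    "office history remembers ibm punchcard machines",
]

p1 = context_profile(corpus_hardware, "ibm")
p2 = context_profile(corpus_history, "ibm")
print(p1.most_common(3))  # "ibm" surrounded by hardware vocabulary
print(p2.most_common(3))  # the same grapheme, surrounded by historical vocabulary
```

Neither profile is the “meaning” of “IBM”, of course; the point is merely that even the crudest formal surrogate of meaning is undefined before a corpus has been selected.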

We may settle here that a text, as well as any part of it, is established only through the selection of the embedding corpus, or likewise, a social practice, a life-form. Without such an embedding the text simply does not exist as a text; we would just find a series of graphemes. It is a hopeless exaggeration, if not self-deception, to call the statistical treatment of texts “text mining”. Read in another way, it may even be considered a cynical term.

It is this dependence on local and global contexts, synchronically and diachronically, that renders the interpretation of a text similar to the interpretation of animal behavior.

Taken together, conceiving of texts as behaving systems is probably less of a metaphor than it appears at first sight. Considering the way we make sense of a text, approaching a text is in many ways comparable to approaching an animal of a familiar species. We won’t know exactly what is going to happen; the course of events and actions depends significantly on ourselves. The categories and ascribed properties necessary to establish an interaction are quite undefined in the beginning, and available only as types of rules, not as readily parameterized rules themselves. And as with animals, the next approach will never be a simple repetition of the former one, even if one knows the text quite well.

From the methodological perspective, the significance of such a “behavioral turn”3 can hardly be overestimated. For instance, nobody would interpret an animal on the basis of a rather short series of photographs, and keep the conclusions drawn from them once and for all. Interacting with a text as if it would behave demands a completely different set of procedures. After all, one would deal with an open interaction. Such openness must be met with an appropriate attitude of willingness for open structural learning. This holds not only for human interpreters, but for any interpreter, even if it were software. In other words, software dealing with text must itself be active in a non-analytical manner in order to constitute what we call a “text”. Any kind of algorithm (in the definition of Knuth) does not deal with text, but just and blindly with a series of dead graphemes.

The Urban

For completely different, material reasons, cities too can be considered autonomous entities. Their patterns of growth and differentiation look much more like those of ensembles of biological entities than those of minerals. Of course, this doesn’t justify the more or less naïve assignment of the “city as organism”. Urban arrangements are complex in the sense we’ve defined it; they are semiogenic and associative. There is a continuous contest between structure as regulation and automation on the one side and liquefaction as participation and symbolization on the other, albeit symbols may play for both parties.

Despite this autonomy, it remains a fact that without human activity cities are as little alive as texts are. This raises the particular question of the relationship between a city and its inhabitants, between the people as citizens and the city that they constitute. This topic has been the subject of innumerable essays, novels, and investigations. Recently, a fresh perspective on it has been opened by Vera Bühlmann’s notion of the “Quantum City” [14].

We can neither detach the citizens from their city, nor vice versa. Nevertheless, the standardized and externalized collective contribution across space and time creates an arrangement that produces dissipative flows and shows a strong meta-stability that transcends the activities of the individuals. This stability should not be mistaken for a “state”, though. As with any other complex system, including texts, we should avoid trying to assign a “state” to a particular city, or even to a part of it. Everything is a process within a complex system, even if it appears to be rather stable. Yet, this stability depends on the perspective of the observer. In turn, the seeming stability does not mean that a city-process could not be destroyed by human activity, be it by individuals (Nero), by a collective, or by socio-economic processes. Yet, again as in the case of complex systems, the question of causality would be the wrong starting point for addressing the issue of change, just as a statistical description would be.

Cities and urban environments are fabrics of relations between a wide range of heterogeneous and heterotopic (see Foucault, or David Shane [15]) entities and processes across a likewise large range of temporal scales, meeting every shade between the material and the immaterial. There is the activity of single individuals, of collectives of individuals, of legislative and other norms, the materiality of the buildings and their changing usage and roles, different kinds of flows and streams as well as stores and memories.

Elsewhere we argued that this fabric may be conceived as a dynamic ensemble of associative networks [16]. These should be clearly distinguished from logistic networks, whose purpose is to organize physical transfer of any kind. Associative networks re-arrange, sort, classify and learn. Thus, they are also the abstract location of the transposition of the material into the immaterial. Quite naturally, issues of form and their temporal structure arise, in other words, behavior.

Our suggestion thus is to conceive of a city as an entity that behaves. This proposal has (almost) nothing to do with the metaphor of the “city as organism”, a transfer that is by far too naïve. Changes in urban environments are best conceived as “outcomes” of probabilistic processes that are organized as overlapping series, both contingent and consistent. The method of choice for describing those changes is based on the notion of the generalized context.

Urban Text, Text and Urbanity, Textuality and Performance

Urban environments establish or even produce a particular kind of mediality. We need not invoke the recent surge of large screens in many cities for that. Any arrangement of facades encodes a rich semantics that is best described employing a semiotic perspective, just as Venturi proposed. Recently, we investigated the relationship between facades, whether made from stone or from screens, and the space that they constitute [17].

There is yet another important dimension between the text and the city. For many hundreds of years now, if not millennia, cities have not been imaginable without text in one form or another. At least since the early 19th century, text and city have been deeply linked to one another through the surge of newspapers and publishing houses, but also through the intricate linkage between the city and the theater. Urban culture is text culture, far more than it could be conceived as an image culture. This tendency is only intensified through the web, albeit urbanity now gets significantly transformed by and into the web-based aspects of culture. At least we may propose that there is a strong co-evolution between the urban (as entity and as concept) and mediality, whether it expresses itself as text, as movie or as webbing.

The relationship between the urban and the text has been explored many times. It probably started with Walter Benjamin’s “flâneur” (for an overview see [18]). Nowadays, urbanists often refer to the concept of the “readability” of a city layout, a methodological habit originated by Kevin Lynch. Yet, if we consider the relation between the urban and the textual, we certainly have to take an abstract concept of text; we definitely have to avoid the idea that there are items like characters or words out there in the city. I think we should at least follow something like the abstract notion of textuality as it has been devised by Roland Barthes in his “From Work to Text” [19], as a “methodological field”. Yet, this is probably still not abstract enough, as urban geographers like Henri Lefebvre mistook the concept of textuality for one of intelligibility [20]. Lefebvre obviously didn’t understand the working of a text. How could he, one might say, as a modernist (and Marxist) geographer. All the criticism that was directed against the junction between the urban and textuality conceived—as far as we know—text as something object-like, something that is out there as such, awaiting passively to be read and still being passive as it is being read, finally maybe even as an objective representation beyond the need for (and the freedom of) interpretation. This, of course, represents a rather limited view on textuality.

Above we introduced the concept of “behaving texts”, that is, texts as active entities. These entities become active as soon as they are mediatized with interpreters. Again: not the text is conceived as the medium or in a media format, but rather the interpreter, whether it is a human brain-mind or a suitable software that indeed is capable of interpreting, not just of pre-programmed and blind re-coding. This “behavioral turn” renders “reading” a text, but also “writing” it, into a performance. Performances, on the other hand, always and inevitably comprise a considerable openness, precisely because they let the immaterial and the material collide from the side of the immaterial. Thus, performances are the counterpart of abstract associativity, yet also settling at the surface that separates matter from ideas.

In the introduction to their nicely edited book “Performance and the City”, Kim Solga, D.J. Hopkins and Shelley Orr [18] write, citing the urban geographer Nigel Thrift:

Although de Certeau conceives of ‘walking in the city’ not just as a textual experience but as a ‘series of embodied, creative practices’ (Lavery: 152), a ‘spatial acting-out of place’ (de Certeau: 98, our emphasis), Thrift argues that de Certeau “never really leaves behind the operations of reading and speech and the sometimes explicit, sometimes implicit claim that these operations can be extended to other practices. In turn, this claim [ … ] sets up another obvious tension, between a practice-based model of often illicit ‘behaviour’ founded on enunciative speech-acts and a text-based model of ‘representation’ which fuels functional social systems.” (Thrift 2004: 43)

Quite obviously, Thrift didn’t manage to get the right grip on de Certeau’s proposal that textual experience may be conceived—I just repeat it—as a series of embodied, creative practices. It is his own particular blindness that lets Thrift denounce texts as being mostly representational.

Solga and colleagues indeed emphasize the importance of performance, not just in their introduction, but also through their editing of the book. Yet, they explicitly link textuality and performance as codependent cultural practices. They write:

While we challenge the notion that the city is a ‘text’ to be read and (re)written, we also argue that textuality and performativity must be understood as linked cultural practices that work together to shape the body of phenomenal, intellectual, psychic, and social encounters that frame a subject’s experience of the city. We suggest that the conflict, collision, and contestation between texts and acts provoke embodied struggles that lead to change and renewal over time. (p.6)

Thus, we find a justification for our “behavioral turn” and its application to texts as well as to the urban from a rather different corner. Even more significantly, Solga et al. seem to agree that performativity and textuality cannot be detached from the urban at all. Apparently, the urban as a particular quality of human culture more and more develops into the main representative of human culture.

Yet, neither text nor performance, nor their combination, amounts to a full account of the mediality of the urban. As we already indicated above, the movie, as a kind of cross-media between text, image, and performance, is equally important.

The relations between film and the urban, between architecture and film, are also quite widespread. The cinema, somehow the successor of the theater, could be situated only within the city. From the opposite direction, many would consider a city without cinemas as somehow incomplete. The co-evolutionary story between the two is still under vivid development, I think.

There is one architect/urbanist in particular who is able to blend film and building into each other. You may know him quite well: I refer to Rem Koolhaas. It is well known that he was an experimental moviemaker in his youth. It is much less known that he deliberately organized at least one of his buildings as a kind of movie: the Embassy of the Netherlands in Berlin (cf. [21]).

Here, Koolhaas arranged the rooms along a dedicated script. He even trademarked some of the views out of the windows in order to protect them!

Figure 1: Rem Koolhaas, Dutch Embassy, Berlin. The figure shows the script of pathways as a collage (taken from [21]).

9. The Behavioral Turn

So far we have shown how the behavioral turn could be supported, and what some of its first methodological consequences are, should we embrace it. Yet, the picture developed so far is of course not complete.

If we accept the almost trivial notion that autonomous entities are best conceived as behaving entities—remember that autonomy implies complexity—then we can further ask about the structure of the relationship between the behaving subject and its counterpart, whether this is also a behaving subject or whether it is conceived more like a passive object. For Bruno Latour, for instance, both together form a network, thereby blurring the categorical distinction between the two.

Most descriptions of the process of getting into touch with something are nowadays dominated by the algorithmic perspective of computer software. Even designers started to speak about interfaces. The German term for the same thing—“Schnittstelle”—is even more pronounced and clearly depicts the modernist prejudice in dealing with interaction. “Schnittstelle” implies that something, here the relation, is cut into two parts. A complete separation between interacting entities is assumed a priori. Such a separation is deeply inappropriate, since it would work only in strictly standardized environments, up to the point of being programmed algorithmically. Precisely this is what designers of software “user interfaces” have told us over and over again. Perhaps here we can find the reason for so many bad designs, not only concerning software. Fortunately, though only through a slow evolutionary process, things improve more and more. So-called “user-centric” design, or “experience-oriented” design, has become more abundant in recent years, but its conceptual foundation is still rather weak, or a wild mixture of fashionable habits and strange adaptations of cognitive science.

Yet, if we take the primacy of interpretation seriously, and combine it with the “behavioral turn”, we can see a much more detailed structure than just two parts cut apart.

The consequence of such a combination is that we would drop the idea of a clear-cut surface even for passive objects. Rather, we could conceive of objects as being wrapped in a surrounding field that becomes stronger the closer we approach the object. By means of that field we distinguish the “pure” physicality from the semiotically and behaviorally active aspects.

This field is a simple one for stone-like matter, but even there it is still present. The field becomes much richer, deeper and more vibrant if the entity is not a more or less passive object, but rather an active and autonomous subject, such as an animal, a text, or a city. The reason is that there are no a priori and globally definable representative criteria that we could use to approach such autonomous entities. We can only know about more or less suitable procedures for deriving such criteria in the particular case, approaching a particular individual {text, city}. The absence of such criteria is a direct correlate of their semantic productivity, or, likewise, of their unboundedness.

Approaching a semantically productive entity—such entities are also always able to induce new signs, they are semiosic entities—is reminiscent of approaching a gravitational field. Yet it is also very different from a gravitational field, since our semio-behavioral field shows increasing structural richness the closer the entities approach each other. It is quite obvious that only by means of such a semio-behavioral field can we close the gap between the subject and the world that has been opened, or at least deepened, by the modernist contributions from the times of Descartes until late computer science. Only upon a concept like the semio-behavioral field, which in turn is a consequence of the behavioral turn, can we overcome the existential fallacy as it has been purported and renewed over and over again by the dual pair of material and immaterial. The language game that separates the material and the immaterial inevitably leads into the nonsensical abyss of existentialism. Dual concepts always come with tremendous costs, as they prevent any differentiated way of speaking about the matter. For instance, they prevent us from recognizing the materiality of symbols, or more precisely, the double articulation of symbols between the more material and the more immaterial aspects of the world.

The following series of images may be taken as a metaphorical illustration of that semio-behavioral field. We call it the zona extima of the behavioral coating of entities.

Figure 2a: The semio-behavioral field around an entity.

Figure 2b: The situation as another entity approaches perceptively.

Figure 2c: Mutual penetration of semio-behavioral fields.

Taken together, we may say that whenever {sb, sth} gets into contact with {sb, sth}, this happens through the behavioral coating. This zone of contact is not intimate (as Peter Sloterdijk describes it); it is rather extimate, though there is a smooth and graded change of quality from extimacy to intimacy as the distance decreases. The zona extima is a borderless (topological) field, driven by purposes (due to modelling); it is medial, behaviorally choreographed as negotiation, exposure, call & request.

The concept of extimation, or likewise the process of extimating, is much more suitable than “interaction” to describe what’s going on when we act, behave, engage, actively perceive, or encounter the other. The interesting thing with web-based media is that some aspects of the zona extima can be transferred.

10. Conclusion

In this essay we have tried to argue in favor of a behavioral turn as a general attitude when it comes to conceiving the interaction of any two entities. The behavioral turn is a consequence of three major and interrelated assumptions:

  • – primacy of interpretation in the relation to the world;
  • – primacy of process and relation against matter and point;
  • – complexity and associativity in strongly mediatized environments.

All three assumptions are strictly outside of anything that phenomenological, positivist or modernist approaches can talk about or even practice.

In particular, it allows us to overcome the traditional and strict separation between the material and the immaterial, as well as the separation between the active and the passive. These shifts can hardly be overestimated; they have far-reaching consequences for the way we practice and conceive our world.

The behavioral turn is the consequence of a particular attitude that respects the bi-valency of the world as a dynamic system of populations of relations. It is less the divide between the material and the immaterial, which anyway is somewhat of an illusion deriving from the metaphysical claim of the possibility of essences. For instance, the jump that occurs between the realms of the informational and the causal establishes a pair of two complementary but strictly and mutually exclusive modes of speaking about the orderliness in the world. In some way, it is also the orderliness in the behavior of the observer—as repetition—that creates the informational that the observer then may perceive. The separation is thus a highly artificial one, in either direction. It is simply silly to discuss the issue of causality without referring to the informational aspects (for a full discussion of the issue see this essay). In any real-world case we always find both aspects together, and we find them as behavior.

Actually, the bi-valent aspect that I mentioned before refers to something quite different, in fact so different that we can’t even speak properly about it. It refers to those aspects that are a priori to modeling or any other comprehension, that are even outside the performance of the individual itself. What I mean is the resistance of existential arrangements, including the body that the comprehending entity is partially built from. This existential resistance introduces something like outer space for the cultural sphere. Needless to say, we can exist only within this cultural sphere. Yet, any action upon the world forces us to take a short trip into the vacuum, and if we are lucky the re-entrance is even productive. We may well expect an intensification of the aspect of the virtual, as we argued here. Far from being suitable to serve as a primacy (as existentialism misunderstood the issue), the existential resistance, the absolute outside, forces us to embark on the concept of behavior. Only “behavior” as a perceptional and performative attitude allows us to extract coherence from the world without neglecting the fact of that resistance or contumacy.

The behavioral turn triggers a change in the methodology of empirical investigation as well. The standard set of methods for empirical description changes, taking the relation and the coherent series always as the starting point, preferably in its probabilized form, that is, as generalized probabilistic context. This also prevents the application of statistical methods directly to raw data. There should always be some kind of grouping or selection preceding the statistical reasoning. Otherwise we would try to follow the route that Wittgenstein blocked as a “wrong usage of symbols” (in his rejection of the reasonability of Russell and Whitehead’s Principia Mathematica). The concept of abstract behavior, including the advanced methodology that avoids starting with representational symbolification, is clearly a sound way out of this deep problem, from which any positivist empirical investigation suffers.
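A minimal numerical sketch of why grouping should precede statistical reasoning is Simpson’s paradox. The figures below are the well-known kidney-stone treatment numbers (Charig et al.), used here purely as an illustration, not as data from this essay: computed on the pooled raw data, procedure B appears superior, while within each group procedure A is superior.

```python
# (successes, trials) per stone size for two treatments; the grouping
# variable (stone size) is exactly what statistics on "raw data" ignore.
data = {
    "A": {"small": (81, 87), "large": (192, 263)},
    "B": {"small": (234, 270), "large": (55, 80)},
}

pooled, grouped = {}, {}
for proc, groups in data.items():
    s = sum(x[0] for x in groups.values())
    t = sum(x[1] for x in groups.values())
    pooled[proc] = s / t                                        # statistics on raw data
    grouped[proc] = {g: a / b for g, (a, b) in groups.items()}  # after grouping

print(pooled)   # B looks better overall (0.78 vs ~0.826)
print(grouped)  # yet A is better within every single group
```

The “right” answer is not decided by the arithmetic; it is decided by the preceding choice of grouping, which is exactly the point made above.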

Interaction, including any action upon some other entity, when understood within the paradigm of behavior, becomes a recurrent, though not repetitive, self-adjusting process. During this process, means and symbols may change and be replaced, all the way down to a successful handshake. There is no objectivity in this process other than the mutual possibility for anticipation. Despite the existential resistance and contumacy that is attached to any re-shaping of the world, and even more so if we accomplish it by means of tools, this anticipation is, of course, greatly improved by alignment to cultural standards, contributing to the life-world as a shared space of immanence.

This finally provides us with a sufficiently abstract, but also sufficiently rich and manifold perspective on the roles of symbols regarding the text, the urban and the anime, the animal-like. None of those could be comprehended without first creating a catalog or a system of symbols. These symbols, both material and immaterial and thus a kind of hinge, a double articulation, are rooted both in the embedding culture (as a de-empirifying selective force) and in the individual, which constitutes another double articulation. The concept of abstract behavior, given as a set of particular conditions and attitudes, allows us to respond appropriately to the symbolic.

The really big question concerning our choreostemic capabilities—and those of the alleged machinic—therefore is: how to achieve fluency in dealing with the symbolic without presuming it as a primary entity? Probably by exercising observing. I hope that the suggestions expressed so far in this essay provide some robust starting points. …we will see.

Notes

1. Here we simply cite the term “information retrieval”; we certainly do not agree that the term is a reasonable one, since it is deeply infected by positivist prejudices. “Information” can’t be retrieved, because it is not “out there”. Downloading a digitally encoded text is neither a hunting nor a gathering for information, because information can’t be considered to be an object. Information is only present during the act of interpretation (more details about the status of information can be found here). Actually, what we are doing is simply “informationing”.

2. The notion of a “behavioral turn” has been known in geography since the late 1960s [22][23], and also in economics. In both fields, however, the behavioral aspect is related to the individual human being; any level of abstraction with regard to the concept of behavior is missing. Quite in contrast to those movements, we do not focus on the neglect of the behavioral domain when it comes to human society, but rather on the transfer of the abstract notion of behavior to non-living entities.

Another reference to “behavioral sciences” can be found in the social sciences. Yet, there “behavioral” is often reduced to “behaviorist”, which of course is nonsense. A similar misunderstanding is abundant in political science.

3. Note that the proposed “behavioral turn” should not be mistaken for a “behavioristic” move, as a sort of behaviorism. We strictly reject the stimulus-response scheme of behaviorism. Actually, behaviorism as it has been developed by Watson and Pavlov has only little to do with behavior at all. It is nothing other than an overtly reductionist program, rendering any living being into a trivial machine. Unfortunately, the primitive scheme of behaviorism is experiencing a kind of comeback in so-called “Behavioral Design”, where people talk about “triggers” much in the same way as Pavlov did (cf. BJ Fogg’s Behavior Model).

References

  • [1] Michael Epperson (2009). Quantum Mechanics and Relational Realism: Logical Causality and Wave Function Collapse. Process Studies, 38(2): 339-366.
  • [2] G. Moran, J.C. Fentress (1979). A Search for Order in Wolf Social Behavior. pp.245-283. in: E. Klinghammer (ed.), The Behavior and Ecology of Wolves. Symposium held 23-24.5.1975 in Wilmington, N.C. Garland STPM Press, New York.
  • [3] Gilles Deleuze, Difference and Repetition.
  • [4] J.A.R.A.M. Van Hooff (1982). Categories and sequences of behaviour: methods of description and analysis. in: Handbook of methods in nonverbal behavior research (K.R. Scherer& P. Ekman, eds). Cambridge University Press, Cambridge.
  • [5] P.G.M. van der Heijden, H. de Vries, J.A.R.A.M. van Hooff (1990). Correspondence analysis of transition matrices, with special attention to missing entries and asymmetry. Anim.Behav. 40: 49-64.
  • [6] Teuvo Kohonen, Samuel Kaski, K. Lagus und J. Honkela (1996). Very Large Two-Level SOM for the Browsing of Newsgroups. In: C. von der Malsburg, W. von Seelen, J. C. Vorbrüggen and B. Sendhoff, Proceedings of ICANN96, International Conference on Artificial Neural Networks, Bochum, Germany, July 16-19, 1996, Lecture Notes in Computer Science, Vol. 1112, pp.269-274. Springer, Berlin.
  • [7] Hecht-Nielsen (1994).
  • [8] Javier Rojo, Tuan S. Nguyen (2010). Improving the Johnson-Lindenstrauss Lemma. available online.
  • [9] Sanjoy Dasgupta, Presentation given about: Samuel Kaski (1998), Dimensionality Reduction by Random Mapping: Fast Similarity Computation for Clustering, Helsinki University of Technology 1998. available online.
  • [10] Michel Serres, Nayla Farouki. Le trésor. Dictionnaire des sciences. Flammarion, Paris 1998. p.394.
  • [11] Bill Hillier, Space Syntax. E-edition, 2005.
  • [12] Bill Hillier (2009). The City as a Socio-technical System: a spatial reformulation in the light of the levels problem and the parallel problem. Keynote paper to the Conference on Spatial Information Theory, September 2009.
  • [13] Alan Hájek (2007). The Reference Class Problem is Your Problem Too. Synthese 156 (3):563-585.
  • [14] Vera Bühlmann (2012). In the Quantum City – design, and the polynomial grammaticality of artifacts. forthcoming.
  • [15] David G. Shane. Recombinant Urbanism. 2005.
  • [16] Klaus Wassermann (2010). SOMcity: Networks, Probability, the City, and its Context. eCAADe 2010, Zürich. September 15-18, 2010. available online.
  • [17] Klaus Wassermann, Vera Bühlmann, Streaming Spaces – A short expedition into the space of media-active façades. in: Christoph Kronhagel (ed.), Mediatecture, Springer, Wien 2010. pp.334-345. available here.
  • [18] D.J. Hopkins, Shelley Orr and Kim Solga (eds.), Performance and the City. Palgrave Macmillan, Basingstoke 2009.
  • [19] Roland Barthes, From Work to Text. in: Image, Music, Text: Essays selected and translated. Transl. Stephen Heath, Hill & Wang, New York 1977. also available online @ google books, p.56.
  • [20] Henri Lefebvre, The Production of Space. 1979.
  • [21] Vera Bühlmann. Inhabiting media. Thesis, University of Basel (CH) 2009.
  • [22] Kevin R Cox, Jennifer Wolch and Julian Wolpert (2008). Classics in human geography revisited. “Wolpert, J. 1970: Departures from the usual environment in locational analysis. Annals of the Association of American Geographers 50, 220–29.” Progress in Human Geography (2008) pp.1–5.
  • [23] Dennis Grammenos. Urban Geography. Encyclopedia of Geography. 2010. SAGE Publications. 1 Oct. 2010. available online.

۞

The Text Machine

July 10, 2012 § Leave a comment

What is the role of texts? How do we use them (as humans)?

How do we access them (as reading humans)? The answers to such questions seem to be pretty obvious. Almost everybody can read. Well, today. Noteworthy, reading itself, as a performance and regarding its use, changed dramatically at least twice in history: first, after the invention of the vocal alphabet in ancient Greece, and a second time after book printing became abundant during the 16th century. Maybe the issue around reading isn’t as simple as it seems in everyday life.

Beyond such historical accounts and basic experiences, we have many more theoretical results concerning texts. They begin with Friedrich Schleiermacher, who around 1830 was the first to identify hermeneutics as a subject, and who formulated it in a way that has been considered more complete and powerful than the version proposed by Gadamer in the 1950s. They proceed, of course, with Wittgenstein (language games, rule following), Austin (speech act theory) and Quine (criticizing empiricism). Philosophers like John Searle, Hilary Putnam and Robert Brandom then explicated and extended the work of these heroes, accompanied by many others. If you wonder why linguistics is missing here: linguistics does not provide theories about language. Today the domain is largely captured by positivism and the corresponding analytic approach.

In this little piece we pose these questions in the context of certain relations between machines and texts. There are many such relations, some quite sophisticated or surprising. For instance, texts can be considered a kind of machine. Yet they bear a certain note of (virtual) agency as well, which makes this machine aspect of texts considerably non-trivial. Here we will not deal with that perspective. Instead, we will simply take a look at the possibilities and the respective practices of handling, or "treating," texts with machines. Or, if you prefer, the treating of texts by machines, insofar as a certain autonomy of machines could be considered necessary to deal with texts at all.

Today we can find a fast-growing community of computer programmers dealing with texts as a kind of unstructured information. One of the buzzwords is the so-called "semantic web", another is "sentiment analysis". We won't comment in any detail on these movements, because they are deeply flawed. The first tries to formalize semantics and meaning a priori, attempting to render the world into a trivial machine. We have repeatedly criticized this, and we agree herein with Douglas Hofstadter (see this discussion of his "Fluid Analogy"). The second tries to identify the sentiment of a text or a "tweet", e.g. about a stock or an organization, on the basis of statistical measures over keywords and their utterly naive "n-grammed" versions, without actually paying any notice to the problem of "understanding". Such nonsense would not be so widespread if programmers read even a few fundamental philosophical texts about language. In fact they don't, and thus they are condemned to revisit the underdeveloped positions that arose centuries ago.

If we neglect the social role of texts for a moment, we might identify a single major role, albeit one we have to describe in rather general terms. We may say that the role of a text, as a specimen among a large population of texts, is to function as a medium for the externalization of mental content, serving the ultimate purpose of enabling a (re)construction of resembling mental content on the side of the interpreting person.

Interpretation is thus primary. It is not possible to attach meaning to a text like a sticky note and then put the text, yellow note included, directly into the recipient's brain. That may sound silly, but unfortunately it is the "theory" followed by many people working in the computer sciences. Interpretation can't be completely controlled, though, not even by the mind performing it, not even by the same mind that seconds before externalized the text through writing or speaking.

Now, the notion of mental content may seem both quite vague and hopelessly general. Yet in the previous chapter we introduced a structure, the choreostemic space, which allows us to speak fairly precisely about mental content. Note that we don't need to talk about semantics, meaning or references to "objects" here. Mental content is not a "state" either. Thinking "state" and the mental together is much like seriously entertaining the existence of sea monsters at the end of the 18th century, when the list science of Linnaeus had not yet been reshaped by the coming historical turn in the philosophy of nature. Nowadays we must consider it silly-minded to think about something as complex as the brain and its mind in terms of "state". Doing so confounds the stability of the graphical representation of a word in a language with the complexity of a multi-layered dynamic process, spanned between deliberate randomness, self-organized rhythmicity, and temporary, thus preliminary, meta-stability.

The notion of mental content does not refer to the representation of referenced "objects". We do not have maps, lists or libraries in our heads. Everything we experience as inner life builds up from an enormous randomness through deep stacks of complex emergent processes, where each emergent level is also shaped top-down, implicitly and, except for the last one usually called "consciousness," also explicitly. The stability of memory and words, of feelings and faculties, is deceptive; they are not so stable at all. Only their externalized symbolic representations are more or less stable, and even their stability as words etc. can be shattered easily. The point we would like to emphasize here is that everything that happens in the mind is constructed on the fly, and the construction is completed only with the ultimate step of externalization, that is, speaking or writing. The notion of "mental content" is thus a bit misleading.

The mental may be conceived most appropriately as a manifold of stacked and intertwined processes. This holds for the naturalist as well as the abstract perspective, as we have argued in the previous chapter. It is simply impossible to find a single stable point within the (abstract) dynamics between model, concept, mediality and virtuality, which together could be thought of as spanning a space. We called it the choreostemic space.

For the following remarks about the relation between texts and machines, and about the practitioners engaged in building machines to handle texts, we have to keep just two things in mind: (i) there is a primacy of interpretation; (ii) the mental is a non-representational dynamic process that can't be formalized (in the sense of being "represented" by a formula).

In turn this means that we should avoid referring to formulas when setting out to build a "text machine". Text machines will be helpful only if their understanding of texts, even a rudimentary one, follows the same abstract principles as our human understanding of texts. Machines pretending to deal with texts while actually only moving dead formal symbols back and forth, as is the case in statistical text mining, n-gram-based methods and the like, are not helpful at all. The only thing that happens is that these machines introduce a formalistic structure into our human life. We may say that such techniques render humans helpful to machines.

Nowadays we find a whole techno-scientific community engaged in machine learning applied to "textual data". Computers are programmed in such a way that they can be used to classify texts. The idea is to provide some keywords, or anti-words, or even a small set of sample texts, which the software then takes as a kind of template from which to build a selection model. This model is then used to select resembling texts from a large set of texts. We have to be very clear about the purpose of these software programs: they classify texts.

The input data for doing so is taken from the texts themselves. More precisely, the texts are preprocessed according to specialized methods, and each text gets described by a possibly large set of "features" extracted by these methods. The obvious point is that the procedure is purely empirical in the strong sense: only the available observations (the texts) are taken to infer the "similarity" between texts. Usually not even linguistic properties are used to form the empirical observations, albeit there are exceptions. People use the so-called n-gram approach, which is little more than counting letters. It is a zero-knowledge model of the series of symbols that humans interpret as text. Additionally, the frequency or relative positions of keywords and anti-words are usually measured and expressed by mostly quite simple statistical methods.
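
To make concrete what such a zero-knowledge description looks like, here is a minimal sketch (names and parameters are our own illustrative choices, not any particular product's method): character trigram counts compared by cosine similarity. Note how two sentences with opposite meanings come out as highly "similar," since the method never interprets anything.

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    """Count overlapping character n-grams: a zero-knowledge description."""
    t = text.lower()
    return Counter(t[i:i + n] for i in range(len(t) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = lambda c: math.sqrt(sum(v * v for v in c.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

# Opposite meanings, nearly identical surface form:
s1 = char_ngrams("the committee approved the proposal")
s2 = char_ngrams("the committee rejected the proposal")
print(round(cosine(s1, s2), 2))  # a high value, despite the reversed meaning
```

This is roughly the entire epistemic content of the approach criticized above: counting letter sequences and measuring vector angles.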

Well, classifying texts is something quite different from understanding texts. Of course. Yet said community tries to reproduce the "classification" achieved or produced by humans. Thus the engineers in the field of machine learning directed at texts implicitly claim a kind of understanding. They even organize competitions.

The problems with the statistical approach are quite obvious. Quine identified them among the dogmas of empiricism and coined the Gavagai anecdote about the matter, a scenario that even provides much more information than a text alone does. In order to understand a text we need references to many things outside the particular text(s) at hand. Two of these are especially salient: concepts and the social dimension. Directly opposite to the belief of positivists, concepts can't be defined in advance of a particular interpretation. Using catalogs of references does not help much if these catalogs are used merely as lists of references. The software does not understand "chair" by the "definition" stored in a database, or even by the set of such references. It simply does not care whether the encoded ASCII codes yield the symbol "chair" or the symbol "h&e%43". Douglas Hofstadter has stressed this point over and over again, and we fully agree with him.

From this necessity of a particular and rather wide "background" (Searle's notion), the second problem derives, which is much more serious, even devastating to the soundness of the whole empirico-statistical approach. The problem is simple: even we humans have to read a text before being able to understand it. Only upon understanding could we classify it. Of course, the brains of many people are sufficiently trained to work out the relations of a text and any of its components while reading it. The basic setup of the problem, however, remains the same.

Actually, what happens is a constantly repeated re-reading of the text, taking into account all available insights regarding the text and its relations to the author and the reader, while this re-reading often takes place in memory. To perform this demanding task in parallel, based on the "cache" available from memory, requires a lot of experience and training, though. Less experienced people indeed re-read the text physically.

The consequence of all of this is that we cannot determine the best empirical discriminators for a particular text while reading it, in order to select it as if we were using a model. Actually, we can't determine the set of discriminators before we have read it all, at least not before the first pass. Let us call this the completeness issue.

The very first insight is thus that a one-shot approach to text classification is based on a misconception. The software and the human would have to align with each other in some kind of conversation. Otherwise it can't be specified in principle what the task is, that is, which texts should actually be selected. Any approach to text classification not following this "conversation scheme" is necessarily bare nonsense. Yet that's not really a surprise (except for some of the engineers).

There is a further consequence of the completeness issue: we can't set up a table to learn from at all. This too is no surprise, since setting up a table means setting up a particular symbolization, and any symbolization prior to understanding must count as a hypothesis. As simple as that. Whether it matches our purpose or not, we can't know before we have understood the text.

However, in order to make the software learn something we need assignates (traditionally called "properties") and some criteria to distinguish better models from less performant ones. In other words, we need a recurrent scheme on the technical level as well.

That's why it is not quite correct to call texts "unstructured data". (Besides the fact that data are not "out there": we always need a measurement device, which in turn implies some kind of model AND some kind of theory.) In the case of texts, imposing a structure onto a text simply means understanding it. We could even say that a text as text is not structurable at all, since the interpretation of a text can never be regarded as finished.

All together, we may summarize the complexity of texts as deriving from the following properties:

  • there are different levels of context, which additionally stretch across surrounds of very different sizes;
  • there are rich organizational constraints, e.g. grammars;
  • there is a large corpus of words, while any of them bears meaning only upon interpretation;
  • there is a large number of relations that not only form a network, but which also change dynamically in the course of reading and interpretation;
  • texts are symbolic: spatial neighborhood does not translate into reference, in either direction;
  • understanding texts requires a wealth of external and quite abstract concepts that appear as significant only upon interpretation, as well as a social embedding of mutual interpretation.

This list should at least preclude any attempt to defend the empirico-statistical approach as a reasonable one, except for the fact that it conveys a better-than-nothing attitude. This brings us to the question of utility.

Engineers build machines that are supposedly useful; more exactly, machines intended to fulfill a particular purpose. Mostly, however, machines, and any technology in general, are useful only upon processes of subjective appropriation. The most striking example of this is the car. Likewise, computers evolved not for reasons of utility, but rather for gaming. Video became popular not for artistic or commercial reasons, but due to the possibilities the medium offered to the sex industry. The lesson here is that an intended purpose is difficult to achieve through the actual usage of a technology. On the other hand, every technology may exert a gravitational force toward developing an unintended symbolic purpose, and considerable value regarding it. So, could we agree that the classification of texts as performed by contemporary technology is useful?

Not quite. We can't regard the classification of texts as it is possible with the empirico-statistical approach as a reasonable technology, for the classification of texts can't be separated from their understanding. All we can accomplish by this approach is to filter out those texts that do not match our interests with a sufficiently high probability. Yet for this task we do not need text classification.

Architectures like the 3L-SOM (to be introduced below) could also be expected to play an important role in translation, as translation requires an even deeper understanding of texts than is needed for sorting texts according to a template.

Besides the necessity for this doubly recurrent scheme, we haven't said much so far about how actually to treat the text. Texts should not be mistaken for empirical data. This means that we have to take a modified stance regarding measurement itself. In several essays we have already mentioned the conceptual advantages of the two-layered (TL) approach based on self-organizing maps (TL-SOM). We have already described in detail how the TL-SOM works, including the basic preparation of the random graph as described by Kohonen.

The important thing about the TL-SOM is that it is not a device for modeling the similarity of texts. It is just a representation, even if a very powerful one, because it is based on probabilistic contexts (random graphs). More precisely, it is just one of many possible representations, even if it is much more appropriate than n-grams and other jokes. We should NOT even consider the TL-SOM as so-called "unsupervised modeling", as the distinction between unsupervised and supervised is just another myth (= nonsense when it comes to quantitative models). The TL-SOM is nothing else than an instance of associative storage.

The trick of using a random graph (see the link above) is that the surrounds of words are differentially represented as well. The Kohonen model is quite spartan in this respect, since it applies a completely neutral model: words in a text are represented as if they were all the same, of the same kind, of the same weight, etc. That's clearly not reasonable. Instead, we should represent a word in several different manners within the same SOM.
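
The Kohonen-style context encoding referred to here can be sketched roughly as follows (the dimensionality and the one-word window are toy choices for illustration, not Kohonen's actual settings): each word type receives a fixed random vector carrying no meaning by itself, and an occurrence of a word is represented by concatenating the codes of its neighborhood.

```python
import random

DIM = 6  # toy dimensionality; realistic encodings use far more dimensions
random.seed(1)
_codes = {}

def word_code(word):
    """Each word type gets a fixed random vector, meaningless on its own."""
    if word not in _codes:
        _codes[word] = [random.uniform(-1, 1) for _ in range(DIM)]
    return _codes[word]

def context_vector(tokens, i):
    """Represent occurrence i by concatenating the codes of its
    left neighbour, the word itself, and its right neighbour."""
    def at(j):
        return word_code(tokens[j]) if 0 <= j < len(tokens) else [0.0] * DIM
    return at(i - 1) + at(i) + at(i + 1)

tokens = "the cat sat on the mat".split()
v = context_vector(tokens, 2)  # the context of "sat"
print(len(v))  # 3 * DIM = 18
```

The point of the random codes is precisely their neutrality: similarity between contexts arises only from shared neighborhoods, not from any presumed meaning of the words, which matches the "differential representation of surrounds" described above.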

Yet the random graph approach should not be considered just a "trick". We have repeatedly argued (for instance here) that we have to "dissolve" empirical observations into a probabilistic (re)presentation in order to evade the pseudo-problem of "symbol grounding". Note that even by the practice of setting up a table in order to organize "data" we are already crossing the Rubicon into the realm of the symbolic!

The real trick of the TL-SOM, however, is something completely different. The first layer represents the random graph of all words; the actual pre-specific sorting of texts, however, is performed by the second layer on the output of the first. In other words, the text is "renormalized": the SOM itself is used as a measurement device. This renormalization allows data to be organized in a standardized manner while avoiding the symbolic fallacy. To our knowledge, this possible usage of the renormalization principle has not been recognized so far. It is indeed a very important principle that puts many things in order. We will deal with this issue again later in a separate contribution.
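
One very reduced way to picture the SOM-as-measurement-device idea (a simplification of the author's scheme, with hand-fixed node weights standing in for a trained first layer): map each context vector of a text to its best-matching unit and describe the text by the relative hit frequencies over the nodes. Texts of any length are thereby projected onto the same fixed-size, comparable structure.

```python
import math

# A toy "trained" first-layer SOM: four nodes with fixed weight vectors.
# In practice these weights would result from SOM training on context vectors.
nodes = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]

def bmu(vec):
    """Index of the best-matching unit: the node closest to vec."""
    return min(range(len(nodes)), key=lambda i: math.dist(nodes[i], vec))

def fingerprint(vectors):
    """Renormalized descriptor: relative hit frequencies over SOM nodes."""
    hits = [0] * len(nodes)
    for v in vectors:
        hits[bmu(v)] += 1
    total = sum(hits)
    return [h / total for h in hits]

# Two "texts", each a small bag of context vectors:
text_a = [[0.1, 0.1], [0.2, 0.0], [0.9, 1.0]]
text_b = [[0.0, 0.2], [0.1, 0.0], [1.0, 0.9]]
print(fingerprint(text_a))  # [0.666..., 0.0, 0.0, 0.333...]
```

The fingerprint is the "observation" of the text through the map; any subsequent table-based modeling then operates on these standardized descriptors rather than on raw symbols.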

Only on the basis of the associative storage, taken as a whole, does appropriate modeling become possible for textual data. The tremendous advantage is that the structure for any subsequent consideration now remains constant. We may indeed set up a table. The content of this table, the data, however, is not derived directly from the text. Instead we first apply renormalization (a technique known from quantum physics, cf. [1]).

The input is some description of the text completely in terms of the TL-SOM. More explicitly, we have to "observe" the text as it behaves in the TL-SOM. Here we are indeed legitimized to treat the text as an empirical observation, albeit we can, of course, observe the text in many different ways. Yet observing means conceiving the text as a moving target, as a series of multitudes.

One of the available tools is Markov modeling, whether as Markov chains or by means of Hidden Markov Models. But there are many others. Most significantly, probabilistic grammars, even probabilistic phrase structure grammars, can be mapped onto Markov models. Yet again we meet the problem of a priori classification: both models, Markovian as well as grammarian, need an assignment of grammatical type to a phrase, which often first requires understanding.
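
As an illustration of the Markov-chain option (our own minimal sketch; the integer states stand in for SOM node indices visited while "reading" a text): first-order transition probabilities estimated from a state sequence.

```python
from collections import defaultdict

def transition_probs(states):
    """Estimate first-order Markov transition probabilities
    from a sequence of discrete states."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

# A "path" of best-matching units visited while reading a text:
path = [0, 1, 1, 2, 0, 1, 2, 2, 0]
P = transition_probs(path)
print(P[1])  # from node 1: stays at 1 a third of the time, moves to 2 two thirds
```

Such transition structures describe the movement of a text through the map; the paragraph below develops exactly this "habitat" metaphor.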

Given the autonomy of texts, their temporal structure, and the impossibility of applying an a priori schematism, our proposal is that we should conceive of the text as we do of (higher) animals. Like an animal in its habitat, the text may be thought of as inhabiting the TL-SOM, our associative storage. We can observe paths, their length and form, preferred neighborhoods, velocities, and the size and form of the habitat.

Similar texts will behave in a similar manner. Such similarity is far beyond (better: as if from another planet than) the statistical approach. We can also see now that the statistical approach is trapped by the representationalist fallacy. This similarity is of course a relative one. The important point is that we can describe texts in a standardized manner strictly WITHOUT reducing their content to statistical measures. It is also quite simple to determine the similarity of texts, whether as a whole or regarding any part of them. We need not determine the range of our source at all prior to the results of modeling. That modeling introduces a third logical layer. We may apply standard modeling, using a flexible tool for transformation and a further instance of a SOM, as we provide it with SomFluid in the downloads. The important thing is that this last step of modeling has to run automatically.

The proposed structure keeps any kind of reference completely intact. It also draws on its collected experience, that is, on all the texts it has digested before. It is not necessary to determine stopwords and similar gimmicks. Of course we could, but that's part of the conversation. Just provide an example of any size, just as it is available: everything from two words to a sentence, a paragraph, or the content of a whole directory will work.

Such a 3L-SOM is very close to what we reasonably could call “understanding texts”. But does it really “understand”?

As such, not really. First, images should be stored in the same manner (!!), that is, preprocessed as random graphs over local contexts of various sizes, into the same (networked population of) SOM(s). Second, a language production module would be needed. But once we have those parts working together, there will be full understanding of texts.

(I take any reasonable offer to implement this within the next 12 months, seriously!)

Conclusion

Understanding is the faculty to move around in a world of symbols. That's not meant as a trivial issue. First, the world consists of facts, where facts comprise a universe of dynamic relations. Symbols are just not like traffic signs or pictograms, which belong to the simpler kinds of symbols. Symbolizing is a complex, social, mediatized, diachronic process.

Classifying, understood as "performing modeling and applying models," consists basically of two parts. One of them could be automated completely, while the other, setting the purpose, could not be treated by a finite or a priori definable set of rules at all. In the case of texts, classifying can't be separated from understanding, because the purpose of a text emerges only upon interpretation, which in turn requires a manifold of modeling raids. Modeling a (quasi-)physical system is completely different from that; it is almost trivial. Yet the structure of a 3L-SOM could well evolve into an arrangement that is capable of understanding in a way similar to how we humans do. More precisely, and a bit more abstractly, we could also say that a "system" based on a population of 3L-SOMs will one day be able to navigate the choreostemic space.

References
  • [1] B. Delamotte (2003). A hint of renormalization. Am. J. Phys. 72 (2004) 170–184. available online: arXiv:hep-th/0212049v3.

۞

A Deleuzean Move

June 24, 2012 § Leave a comment

It is probably one of the main surprises in the course of growing up as a human that in the experience of consciousness we may meet things like unresolvable contradictions, thoughts that are incommensurable, thoughts that lead into contradictions or paradoxes, or thoughts that point to something outside the possibility of empirical, so to speak "direct," experience. All these experiences form a particular class of experience. For one reason or another, these issues are issues of the mental itself. We definitely have to investigate them if we are going to talk about things like machine-based episteme, or the urban condition, which will be the topic of the next few essays.

There have been only very few philosophers1 who embraced paradoxicality without getting caught by antinomies and paradoxes in one way or another.2 Just to be clear: getting caught by paradoxes is quite easy, for instance by violating the validity of the language game one has chosen, or by neglecting virtuality. The first of these avenues into persistent states of worry can be observed in the sciences and mathematics3, while the second is more abundant in philosophy. Fortunately, playing with paradoxicality without getting trapped by paradoxes is not too difficult either. There is even an incentive to do so.

Without paradoxicality it is not possible to think about beginnings, as opposed to origins. Origins, understood as points of {conceptual, historical, factual} departure, are set for theological, religious or mystical reasons, which by definition are always considered bearers of sufficient reason. To phrase it more accurately, the particular difficulty consists in talking about beginnings as part of an open evolution without universal absoluteness, hence also without the need for justification at any time.

Yet paradoxicality, the differential of actual paradoxes, can form stable paradoxes only if possibility is mixed up with potentiality, as is the case for perspectives that could be characterised as reductionist or positivist. Paradoxes exist strictly only within that conflation of possibility and potentiality. Hence, if a paradox or antinomy seems to be stable, one can always find an implied primacy of negativity in lieu of the problematic field spawned and spanned by the differential. We can thus observe the appearance of paradoxes also where the differential is rejected or neglected, as in Derrida's approach, or in the related functionalist-formalist ethics of the Frankfurt School, namely that proposed by Habermas [4]. Paradoxes are like knots that can always be untangled in higher dimensions. Yet this does NOT mean that everything could be smoothly tiled without frictions, gaps or contradictions.

Embracing the paradoxical thus means denying the linear, rejecting the origin and the absolute, the centre points [6] and the universal. We may perceive remote greetings from Nietzsche here4. Perhaps you have already classified the contextual roots of these hints: it is Gilles Deleuze to whom we refer here, and who may well be regarded as the first philosopher of open evolution, the first who rejected idealism without sacrificing the Idea.5

In the hands of Deleuze—or should we say minds?—paradoxicality actualizes neither into paradoxes nor into idealistic dichotomic dialectics. A structural(ist) and genetic dynamism first synthesizes the Idea, and by virtue of the Idea, as well as the space and time immanent to the Idea, paradoxicality turns productive.7

Philosophy is revealed not by good sense but by paradox. Paradox is the pathos or the passion of philosophy. There are several kinds of paradox, all of which are opposed to the complementary forms of orthodoxy – namely, good sense and common sense. […] paradox displays the element which cannot be totalised within a common element, along with the difference which cannot be equalised or cancelled at the direction of a good sense. (DR227)

As our title already indicates, we not only presuppose and start from some main positions and concepts of Deleuzean philosophy, particularly those he developed in Difference and Repetition (D&R)8. There will be more details later9. We10 also attempt to contribute some "genuine" aspects to it. In some way, our attempt could be conceived as a development that is an alternative to part V of D&R, entitled "Asymmetrical Synthesis of the Sensible".

This Essay

Throughout the collection of essays about the "Putnam Program" on this site we have expressed our conviction that future information technology demands an assimilation of philosophy by the domain of computer science (e.g. see the superb book by David Blair, "Wittgenstein, Language and Information" [47]). There are a number of areas, of technical as well as societal or philosophical relevance, which give rise to questions that have already started to become graspable, and not just in computer science. How to organize the revision of beliefs?11 What is the structure of the "symbol grounding problem"? How to address it? Or how to avoid the fallacy of symbolism?12 Obviously we can't tackle such questions without literacy about concepts like belief or symbol, which of course can't be reduced to merely technical notions. Beliefs, for instance, can't be reduced to uncertainty or its treatment, despite an existing tradition in analytical philosophy, computer science and statistics to do so. Moreover, with the advent of emergent mental capabilities in machines, ethical challenges appear. These challenges lie on both sides of the coin: they relate to the engineers who create such instances as well as to the lawyers who, on the other side of the spectrum, have to deal with the effects and properties of such entities, and even to "users" who have to build some "theory of mind" about them, a kind of folk psychology.

And last but not least, the mere externalization of informational habits into machinal contexts often triggers pseudo-problems and "deep" confusion.13 Examples of such confusion are the question about the borders of humanity, i.e. a kind of defensive war fought by anthropology, or the issue of artificiality. Where does the machine end and where does the domain of the human start? How can we speak reasonably about "artificiality" if our brain/mind remains dramatically non-understood and is thus implicitly conceived by many as a kind of bewildering nature? And finally, how to deal with technological progress: when will computer scientists need self-imposed guidelines similar to those geneticists ratified for their community at the Asilomar Conference in 1975? Or are such guidelines illusory or misplaced, because we are weaving ourselves so intensively into our new informational carpets, made from multi- or even meta-purpose devices, that they are veritable flying carpets?

There is also a clearly recognizable methodological reason for bringing the inventioneering of advanced informational "machines" and philosophy closer together. The domain of machines with advanced mental capabilities—I deliberately avoid the traditional term "artificial intelligence"—let us abbreviate it MMC, acquires ethical weight in itself. MMC establishes a subjective Lebenswelt (life-world) that is strikingly different from ours and which we can't understand analytically any more (if at all)14. The challenge then is how to talk about this domain. We should not repeat the fallacy that anthropology and anthropological philosophy have been committing since Kant, where human measures have been applied (and still are today) to "nature". If we are going to compare two different entities we need a differential position from which both can be instantiated. Note that no resemblance can be expected between the instances, nor between the instances and the differential. That differential is a concept, or an idea, and as such it can't be addressed by any kind of technical perspective. Hence questions about the mode of speaking can't be conceived as a technical problem, especially not for the domain of MMC, also due to the implied self-referentiality of the mental itself.

Taken together, we may say that our motivation follows two lines. Firstly, the concern is the problematic field, the problem space itself, the possibility that problems could become visible at all. Secondly, there is a methodological position, characterisable as a differential, that is necessary in order to talk about incommensurable entities that are equipped with mental capacities.15

Both directions and all related problems can be addressed in one and the same move, or so at least is our proposal. The goal of this essay is the introduction and brief discussion of a still emerging conceptual structure that may be used as an image of thought, or likewise as a tool in the sense of an almost formal mental procedure, helping to avoid worries about the diagnosis of the challenges opened by the new technologies, or supporting that diagnosis. Of course, it will turn out that the result is not just applicable to the domain of the philosophy of technology.

In the following we will introduce a unique structure that has been inspired not only by heterogeneous philosophical sources. Those stretch from Aristotle to Peirce, from Spinoza to Wittgenstein, and from Nietzsche to Deleuze, to name but a few, just to give you an impression of what mindset to expect. Another important source is mathematics, yet not used as a ready-made system for formal reasoning, but rather as a source for a certain way of thinking. Last, but not least, biology contributes as the home of the organon, of complexity, of evolution, and, more formally, of self-referentiality. The structure we will propose as a starting point appears merely technical, thus arbitrary, yet at the same time it draws upon the primary amalgamate of the virtual and the immanent. Its paradoxicality consists in its potential to describe the “pure” any, the Idea that comprises any beginning. Its particular quality, as opposed to any other paradoxicality, is caused by a profound self-referentiality that simultaneously leads to its vanishing, its genesis and its own actualization. In this way, the proposed structure solves a challenge that throughout the history of philosophy has been considered by many to be one of the most serious ones: the challenge of sufficient reason, justification and conditionability. To be more precise, that challenge is not solved; it is more correct to say that it is dissolved, made to disappear. In the end, the problem of sufficient reason will be marked as a pseudo-problem.

Here, a small remark to the reader is necessary. After some weeks of putting this down, it turned out that any (more or less) intelligible way of describing the issues exceeds the classical size of a blog entry. By now it comprises approx. 150’000 characters (incl. white space), which would amount to 42+ pages on paper. So, it is more like a monograph. Still, I feel that there are many important aspects left out. Nevertheless I hope that you enjoy reading it.


2. Brief Methodological Remark

As we already noted, the proposed structure is self-referential. Self-referentiality also means that all concepts and structures needed for an initial description will be justified by the working of the structure itself, in other words, by its immanence. Actually, similarly to the concept of the Idea in D&R, virtuality and immanence come very close to each other; they are set to be co-generative. As an Idea, the proposed structure is complete. Like any other idea, it needs to be instantiated into performative contexts; thus it is to be conceived as an entirety, yet neither as a completeness nor as a totality. Yet, its self-referentiality allows for, and actually also generates, a “self-containment” that results in a fractal mirroring of itself, a self-affine mapping. Metaphorically, it is a concept that develops like the leaf of a fern. Superficially, it could look like a complete and determinate entirety, but it is not, similar to area-covering curves in mathematics. Those fill a 2-dimensional area down to any infinitesimal resolution, yet with regard to their production system they remain truly 1-dimensional. They are a fractal, an entity to which we can’t apply ordinal dimensionality. Thus, our concept also develops into instances of fractal entirety.
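The comparison with area-covering curves can be made concrete with a small computational aside (purely illustrative, not part of the philosophical argument). The sketch below uses the standard index-to-coordinate conversion for the Hilbert curve: a rule that consumes nothing but a 1-dimensional index, yet whose points visit every cell of a 2-dimensional grid, neighbouring indices always landing on neighbouring cells.

```python
def hilbert_d2xy(order, d):
    """Map a 1-D index d onto (x, y) of the 2^order x 2^order Hilbert curve.

    The production rule is purely 1-dimensional: it only reads the
    index d, two bits at a time, rotating and reflecting quadrants.
    """
    x = y = 0
    t = d
    s = 1
    side = 2 ** order
    while s < side:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                     # rotate/reflect the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# the 256 indices of a 4th-order curve cover the full 16 x 16 grid,
# and consecutive indices always land on adjacent cells
points = [hilbert_d2xy(4, d) for d in range(4 ** 4)]
```

The curve is “1-dimensional” in its production system while exhausting the 2-dimensional area, which is exactly the disanalogy between superficial appearance and generative rule invoked above.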

For these reasons, it would also be wrong to think that the structure we will describe in a moment is “analytical”, even though it is possible to describe its “frozen” form by means of references to mathematical concepts. Our structure must be understood as an entity that is not only not neutral or invariant against time: it forms its own sheafs of time (as I. Prigogine described it). Analytics is always blind to its generative milieu. Analytics can’t tell anything about the world, contrary to a widely exercised opinion. It is not really a surprise that Putnam recommended reducing the concept of “analytic” to “an inexplicable noise”. Very basically, it is a linear endeavor that necessarily excludes self-referentiality. Its starting point is always based on an explicit reference to some kind of apparentness, or even revelation. Analytics not only presupposes a particular logic, but also conflates transcendental logic and practiced quasi-logic. Moreover, the pragmatics of analysis claims to be free from constructive elements. None of these characteristics applies to our proposal, which is as little “analytical” as the philosophy of Deleuze, even where the latter starts to grow from the notion of the mathematical differential.

3. The Formal Structure

For the initial description of the structure we first need a space of expressibility. This space will then be equipped with some properties. Right at the beginning I would like to emphasize that the proposed structure does not by itself “explain” anything, just like a (philosophical) grammar. Rather, through its usage, that is, its unfolding in time, it shows itself and provides a stable as well as a generative ground.

The space of the structure is not a Cartesian space, where some concepts are mapped onto the orthogonal dimensions, or where concepts are thought to be represented by such dimensions. In a Cartesian space, the dimensions are independent from each other.16 Objects are represented by the linear and additive combination of values along those dimensions, and thus their entirety gets broken up. We lose the object as a coherent object, and there would be no way to regain it later, regardless of the means and tools we would apply. Hence the Cartesian space is not useful for our purposes. Unfortunately, all of current mathematics is based on the Cartesian, analytic conception. Currently, mathematics is a science of control, or more precisely, a science about the arrangement of signs as far as it concerns linear, trivial machines that can be described analytically. There is not yet a mathematics of the organon. Probably category theory is a first step in that direction.

Instead, we conceive our space as an aspectional space, as we introduced it in a previous chapter. In an aspectional space, concepts are represented by “aspections” instead of “dimensions”. In contrast to the values in a dimensional space, values in an aspectional space cannot be changed independently from each other. More precisely, we can keep at most one aspection constant, while the values along all the others change simultaneously. (So-called ternary diagrams provide a distantly related example of this in a 2-dimensional space.) In other words, within the N-manifolds of the aspectional space, all values are always dependent on each other.
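The coupling among aspections can be illustrated numerically by borrowing the simplest compositional operationalization, the one underlying ternary diagrams: a profile whose values always sum to 1. The helper `set_aspection` below is a hypothetical sketch of this coupling, not a claim about how aspections “really” behave; it merely shows that raising one value necessarily moves all the others.

```python
def set_aspection(values, i, new_value):
    """Set aspection i to new_value and rescale the remaining values
    so that the whole profile keeps summing to 1. This coupling is what
    distinguishes aspections from independent Cartesian dimensions:
    changing one value necessarily changes all the others."""
    rest = 1.0 - new_value
    old_rest = sum(v for j, v in enumerate(values) if j != i)
    return [new_value if j == i else v * rest / old_rest
            for j, v in enumerate(values)]

# hypothetical profile over the four aspections introduced later:
# virtuality, mediality, model, concept
profile = [0.25, 0.25, 0.25, 0.25]
shifted = set_aspection(profile, 2, 0.4)   # intensify the third aspection
```

Note that in this toy version exactly one value can be fixed deliberately, while the rest are forced to co-vary, matching the “at most one aspection constant” condition above.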

This aspectional space is equipped with a hyperbolic topological structure. The space of our structure is not flat. You may take M.C. Escher’s plates as a visualization of such a space. Yet, our space is different from such a fixed space; it is a relativistic space built from overlapping hyperbolic spaces. At each point in the space you will find a point of reference (“origin”) for a single hyperbolic reference system. Our hyperbolic space is locally centred. A mathematical field dealing with comparable structures would be differential topology.

So far, the space is still quite easy and intuitive to understand. At least a visualization is still possible for it. This probably changes with the next property. Points in this aspectional space are not “points”, or, expressed in a better, less obscure way, our space does not contain points at all. In a Cartesian space, points are defined by one or more scales and their properties. For instance, in an x-y-coordinate system we could have real numbers on both dimensions, i.e. scales, or we could have integers on the first and reals on the second one. The interaction of the number systems used to create a scale along a dimension determines the expressibility of the space. This way, a point is given as a fixed instance of a set of points as soon as the scale is given. Points themselves are thus said to be 0-dimensional.

Our “points”, i.e. the content of our space, are quite different from that. The space is not “made up” from inert and passive points, but from the second differential, i.e. ultimately a procedure that invokes an instantiation. Our aspectional space thus is made from infinitesimal procedural sites, or “situs”, as Leibniz probably would have said. If we were to represent physical space by a Cartesian dimensional system, the second derivative would represent an acceleration. Take this as a metaphor for the behavior of our space. Yet, our space is not a passive space. The second-order differential makes it an active space, and a space that demands activity. Without activity it is “not there”.
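As a purely numerical aside on the acceleration metaphor: the “second-order differential” has a pedestrian counterpart, the central second difference, which for a position trajectory yields exactly the acceleration. The sketch below is only meant to pin down that metaphor, nothing more.

```python
def second_difference(f, t, h=1e-3):
    """Central second difference of a trajectory f at time t.
    For a position f(t) in physical space this approximates f''(t),
    i.e. the acceleration: the 'second-order differential' invoked
    above as a metaphor for the active substance of the space."""
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)

# a uniformly accelerated trajectory f(t) = 3 t^2 has constant
# second derivative 6 everywhere
accel = second_difference(lambda t: 3.0 * t * t, 1.0)
```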

We could also describe it as the mapping of the intensity of the dynamics of transformation. If you tried to point to a particular location, or situs, in that space, which is of course excluded by its formal definition, you would instantaneously be “transported”, or transformed, such that you would find yourself elsewhere instantaneously. Yet, this “elsewhere” cannot be determined in Cartesian ways. First, because that other point does not exist; second, because it depends on the interaction between the subject’s contribution to the instantiation of the situs and the local properties of the space. Finally, we can say that our aspectional space is not representational, unlike the Cartesian space.

So, let us sum up the elemental17 properties of our space of expressibility:

  1. The space is aspectional.
  2. The topology of the space is locally hyperbolic.
  3. The substance of the space is a second-order differential.

4. Mapping the Semantics

We are now going to map four concepts onto this space. These concepts are themselves Ideas in the Deleuzean sense, but they are also transcendental. They are indeterminate and real, just as virtual entities are. As such, we take the chosen concepts as inexplicable, yet also as instantiable.

These four concepts have been chosen initially in a hypothetical gesture, such that they satisfy two basic requirements. First, it should not be possible to reduce them to one another. Second, together they should allow us to build a space of expressibility that contains as many philosophical issues of mentality as possible. For instance, it should contain any aspect of epistemology or of languagability, but it does not aim to contribute to the theory of morality, i.e. ethics, despite the fact that there is, of course, significant overlap. For instance, one of the possible goals could be to provide a space that allows us to express the relation between semiotics and any logic, or between concepts and models.

So, here are the four transcendental concepts that form the aspections of our space as described above:

  • virtuality
  • mediality
  • model
  • concept

Inscribing four concepts into a flat, i.e. Euclidean, aspectional space would result in a tetrahedral space. In such a space there would be “corners”, or points of inflection, which would represent the determinateness of the concepts mapped to the aspections. As we have emphasized above, however, our space is not flat. No static visualization is possible for it, since our space can’t be mapped to the flat Euclidean space of a drawing, or to the space of our physical experience.

So, let us proceed to the next level by resorting to the hyperbolic disc. If we take any two points inside the disc, their distance is determinate. Yet, if we take any two points at the border of the disc, the distance between those points is infinite from the inside perspective, i.e. for any perspective associated with a point within the disc. The distance from any point inside the disc to the border is infinite as well. This provides a good impression of how transcendental concepts, which by definition can’t be accessed “as such”, or as a thing, can be operationalized by the hyperbolic structure of a space. Our space is more complicated, though, as it is not structured by a fixed hyperbolic topology that is, so to speak, global for the entire disc. The consequence is that our space does not have a border, while at the same time it remains an aspectional space. Turning the perspective around, we could say that the aspections are implied into this space.
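The behaviour of the hyperbolic disc can be checked numerically. The sketch below uses the standard distance formula of the Poincaré disc model; it only illustrates how, seen from the inside, distances blow up without bound as one of the points approaches the border.

```python
import math

def poincare_distance(p, q):
    """Hyperbolic distance between two points of the open unit disc
    (Poincare disc model): acosh(1 + 2|p-q|^2 / ((1-|p|^2)(1-|q|^2)))."""
    d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    denom = (1 - p[0] ** 2 - p[1] ** 2) * (1 - q[0] ** 2 - q[1] ** 2)
    return math.acosh(1 + 2 * d2 / denom)

# two points well inside the disc are a determinate distance apart,
# but the same Euclidean step costs more and more hyperbolically
# the closer we get to the border
d_inner = poincare_distance((0.0, 0.0), (0.5, 0.0))
d_outer = poincare_distance((0.0, 0.0), (0.999, 0.0))
```

Pushing the second point ever closer to the border makes `d_outer` grow without limit, which is the numerical shadow of the claim that the border is infinitely far away from every interior perspective.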

Let us now briefly visit these four concepts.

4.1. Virtuality

Virtuality describes the property of “being virtual”. Saying that something is virtual does not mean that this something does not exist, even though the property “existing” can’t be applied to it either. It is fully real, but not actual. Virtuality is the condition of potentiality, and as such it is a transcendental concept. Deleuze repeatedly emphasises that virtuality does not refer to a possibility. In the context of information technologies it is often said that this or that is “virtual”, e.g. virtualized servers, or virtual worlds. This usage is not the same as in philosophy, since, quite obviously, we use the virtual server as a server, and the world dubbed “virtual“ indeed does exist in an actualized form. Yet, in both examples there is also some resonance with the philosophical concept of virtuality. But this virtuality is not exclusive to the simulated worlds, the informationally defined server instances or the WWW. Virtualization is, as we will see in a moment, implied by any kind of instance of mediality.

As just said, virtuality, and thus also potentiality, must be strictly distinguished from possibility. Possible things, even if not yet present or existent, can be thought of in a quasi-material way, as if they existed in their material form. We can even say that possible things and the possibilities of things are completely determined at any given moment. The same cannot be said about potentiality. Yet, without the concept of potentiality we could not speak about open evolutionary processes. Neglecting virtuality thus is necessarily equivalent to the a priori claim of determinateness, which is methodologically and ethically highly problematic.

The philosophical concept of virtuality has been known since Aristotle. Recently, Bühlmann18 brought it into the vicinity of semiotics and the question of reference19 in her work about mediality. There would be much, much more to say about virtuality here, but the space is missing…

4.2. Mediality

Mediality, that is, the medial aspect of things, facts and processes, belongs to the most undervalued concepts nowadays, even as we get some exercise by means of so-called “social media”. That term perfectly puts this blind spot on stage through its emphasis: neither is there any mediality without sociality, nor any sociality without mediality. Mediality is the concept that has been “discovered” last among our small group. There is a growing body of publications, but many are—astonishingly—deeply infected by romanticism or positivism20, with only a few exceptions.21 Mediality comprises issues like context, density, or transformation qua transfer. Mediality is a concept that helps to focus on the appropriate level of integration in populations or flows when talking about semantics or meaning and their dynamics. Any thing, whether material or immaterial, that occurs in sufficient density in its manifoldness may develop a mediality within a sociality. Mediality as a “layer of transport” is co-generative to sociality. Media are never neutral with respect to the transported, albeit one can often find counteracting forces here.

Signs and symbols could not exist as such without mediality. (This proposal, though, is based on the primacy of interpretation, which is rejected by the modernist set of beliefs. The costs of that rejection are, however, tremendous, as we are going to argue here.) The same is true for words and language as a whole. In real contexts we usually find several, if not many, medial layers. Of course, signs and symbols are not exhaustively described by mediality. They need reference, which is a compound that comprises modeling.

4.3. Model

Models and modeling need not be explicated much further here, as they form one of the main issues throughout our essays. We would just like to recall the obvious fact that a “pure” model is not possible. We need symbols and rules, e.g. about their creation or usage, and necessarily neither is part of the model itself. Most significantly, models need a purpose, a concept to which they refer. In fact, any model presupposes an environment, an embedding that is given by concepts and a particular social setting. Additionally, models would not be models without virtuality. On the one hand, virtuality is implied by the fact that models are incarnations of specific modes of interpretation; on the other hand, they imply virtuality themselves, since they are, well, just models.

We frequently mentioned that it is only through models that we can build up references to the external world. Of course, models are not sufficient to describe that referencing. There is also the contingency of the manifold of populations and the implied relations as quasi-material arrangements, which contribute to the reference of the individual to the common. Yet, only modeling allows for anticipation and purposeful activity. It is only through models that behavior is possible, insofar as any behavior is already differentiated behavior. Models are thus the major site where information is created. It is not just by chance that the 20th century experienced the abundance of models and of information as concepts.

In mathematical terms, models can be conceived as second-order categories. More profanely, but equivalently, we can say that models are arrangements of rules for transformation. This implies the whole issue of rule-following as it has been investigated and formulated by Wittgenstein. Note that rule-following itself is a site of paradoxicality. As there is no private language, there is also no private model. Philosophically, and a bit more abstractly, we could describe models as the compound of providing the possibility for reference (they are one of the conditions for it) and the institutionalized site for creating (f)actual differences.

4.4. Concepts

“Concept” is probably one of the most abused, or at least misunderstood, concepts, at least in modern times. So-called Analytical Philosophy claims over and over again that concepts could be explicated unambiguously, that concepts could be clarified or defined. This way, the concept and its definition are equated. Yet, a definition is just a definition, not a concept. The language game of the definition makes sense only in a tree of analytical proofs that starts with axioms. Definitions need not be interpreted; they are fully given by themselves. Thus, the idea of clarifying a concept is nothing but an illusion. Deleuze writes (DR228):

It is not surprising that, strictly speaking, difference should be ‘inexplicable’. Difference is explicated, but in systems in which it tends to be cancelled; this means only that difference is essentially implicated, that its being is implication. For difference, to be explicated is to be cancelled or to dispel the inequality which constitutes it. The formula according to which ‘to explicate is to identify’ is a tautology.

Deleuze points to the particular “mechanism” of eradication by explication, which is equal to its transformation into the sayable. There is a difference between 5 and 7, but the arithmetic difference does not cover all aspects of difference. By explicating the difference using some rules, all the other differences except the arithmetic one vanish. Thus, this inexplicability is not limited to the concept of difference. In some important way, these other aspects are much more interesting and important than the arithmetic operation itself or its result. Actually, we can understand differencing only insofar as we are aware of these other aspects.

Elsewhere, we already cited Augustine and his remark about time:22 “What, then, is time? If no one ask of me, I know; if I wish to explain to him who asks, I know not.” Here, we can observe at least two things. Firstly, this observation may well be interpreted as the earliest rejection of “knowledge as justified belief”, a perspective which became popular in modernism. Meanwhile it has been proved inadequate by the so-called Gettier problem. The consequences for the theory of databases, or machine-based processing of data, can hardly be overestimated. It clearly shows that knowledge can’t be reduced to confirmed hypotheses qua validated models, and belief can’t be reduced to a kind of pre-knowledge. Belief must be something quite different.

The second thing to observe in these two examples concerns the status of interpretation. While Augustine seems to be somewhat desperate, at least for a moment23, analytical philosophy tries to abolish the annoyance of indeterminateness by killing the freedom inherent in interpretation, which always and inevitably happens if the primacy of interpretation is denied.

Of course, the observed indeterminateness is not limited to time either. Whenever you try to explicate a concept, whether you describe it or define it, you face the insurmountable difficulty of picking one of many interpretations. Again: there is no private language; meaning, references and signs exist only within social situations of interpretation. In other words, we again find the necessity of invoking the other conceptual aspects from which we build our space. Without models and mediality there is no concept. And even more profoundly than models, concepts imply virtuality.

In the opposite direction we can now understand that these four concepts are not only irreducible to each other. They are dependent on each other and—somewhat paradoxically—they even counteract each other competitively. From this we can expect an abstract dynamics somewhat reminiscent of the patterns evolving in reaction-diffusion systems. These four concepts imply the possibility of a basic creativity in the realm of the Idea, in the indeterminate zone of actualisation that will result in a “concrete” thought, or at least in the experience of thinking.
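The analogy to reaction-diffusion systems can be made tangible with a minimal 1-dimensional Gray-Scott sketch: two coupled fields that feed and deplete each other, much as the four aspects are said to counteract competitively. Parameters, seeding and the 1-D reduction are illustrative choices of ours, not derived from the text.

```python
def gray_scott_1d(n=100, steps=5000, f=0.04, k=0.06, du=0.16, dv=0.08):
    """Minimal explicit 1-D Gray-Scott reaction-diffusion sketch.

    Field u feeds field v (reaction term u*v*v), v depletes u, and both
    diffuse; the competition of these counteracting local processes is
    what lets patterns emerge from a small seed perturbation.
    """
    u = [1.0] * n
    v = [0.0] * n
    for i in range(n // 2 - 5, n // 2 + 5):   # local seed perturbation
        u[i], v[i] = 0.5, 0.25
    for _ in range(steps):
        # discrete Laplacians on a periodic domain (synchronous update)
        lap_u = [u[(i - 1) % n] + u[(i + 1) % n] - 2 * u[i] for i in range(n)]
        lap_v = [v[(i - 1) % n] + v[(i + 1) % n] - 2 * v[i] for i in range(n)]
        for i in range(n):
            uvv = u[i] * v[i] * v[i]
            u[i] += du * lap_u[i] - uvv + f * (1 - u[i])
            v[i] += dv * lap_v[i] + uvv - (f + k) * v[i]
    return u, v

u, v = gray_scott_1d()
```

Whether the seed grows into stripes, splits, or decays depends on the chosen parameters; the point of the analogy is only that stable figures can arise from mutually counteracting, purely local dynamics.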

Before we proceed we would like to introduce a notation that should be helpful in avoiding misunderstandings. Whenever we refer to the transcendental aspects between which the aspections of our space stretch out, we use capital letters and mark them additionally by a bar, such as “_Concept” or “_Model”. The whole set of aspects we denote by “_A”, while its unspecified items are indicated by “_a”.

5. Anti-Ontology: The T-Bar-Theory

The four conceptual aspects _A play different roles. They differ in the way they get activated. This becomes visible as soon as we use our space as a tool for comparing various kinds of mental concepts or activities, such as believing, referring, explicating or understanding. These we will inspect in detail later.

Above we described the impossibility of explicating a concept without departing from its “conceptness”. Such a description is actually not appropriate to our aspectional space. The four basic aspections are built by transcendental concepts. There is a subjective, imaginary, yet pre-specific scale along those aspections. Hence, in our space “conceptness” is not a quality, but an intensity, or almost a degree, a quantity. The key point then is that a mental concept or activity always relates to all four transcendental aspections in such a way that the relative location of the mental activity can’t be changed along just a single aspect alone.

We can also recognize another significant step that is provided by our space of expressibility. Traditionally, concepts are used as existential signifiers, in philosophy often called qualia. Such existential signifiers are only capable of indicating presence or absence, and are thus confined to a naive ontology of Hamletian style (to be or not to be). It is almost impossible to build a theory or a model from existential signifiers. From the point of view of modeling or measurement theory, concepts are on a binary scale. Although concepts collect a multitude of such binary usages, appropriate modeling remains impossible due to the binary scale, unless we were to probabilize all potential dual pairs.

Similarly to the case of logic, we also have to distinguish the transcendental aspects _a, that is, the _Model, _Mediality, _Concept, and _Virtuality, from the respective entities that we find in applications. Those practiced instances of _a are just that: instances, produced by orthoregulated habits. Yet, the instances of _a that could be gained through the former’s actualization do not form singularities, or even qualia. Any _a can be instantiated into an infinite diversity of concrete, i.e. definable and sayable, abstract entities. That is the reason for the kinship between probabilistic entities and transcendental perspectives. We could operationalize the latter by the former, even if we have to distinguish sharply between possibility and potentiality. Additionally, we have to keep in mind that the concrete instances do not live independently from their transcendental ancestry24.

Deleuze provides us a nice example of this dynamics in the beginning of part V in D&R. For him, “divergence” is an instance of the transcendental entity “Difference”.

Difference is not diversity. Diversity is given, but difference is that by which the given is given, that by which the given is given as diverse. Difference is not phenomenon but the noumenon closest to the phenomenon.

What he calls “phenomenon” we dubbed “instance”, which is probably more appropriate in order to avoid the reference to phenomenology and the related difficulties. Calling it “phenomenon” pretends—typically for any kind of phenomenology or ontology—a sort of deeply unjustified independence of mentality from its underlying physicality.

This step from existential signifiers to the situs in a space of expressibility, made possible by our aspectional space, can hardly be overestimated. Take for instance the infamous question that has attracted so many misplaced answers: “How do words or concepts acquire reference?” This question appears especially troubling because signs refer only to signs.25 In existential terms, and all the terms in that question are existential ones, this question can’t be answered, indeed not even addressed. As a consequence, deep mystical chasms unnecessarily keep separating the world from the concepts. Any resulting puzzle is based on a misconception. Think of Plato’s chorismos (Greek for “separation”) of explanation and description, which recently has been taken up, refreshed and declared a “chasm” by Epperson [31] (a theist realist, according to his own positioning; we will meet him again later). The various misunderstandings are well known, ranging from nominalism to externalist realism to scientific constructivism.

They all vanish in a space that overcomes the existentiality embedded in the terms. Mathematically speaking, we have to represent words, concepts and references as probabilized entities, as quasi-species in the sense Manfred Eigen coined in a different context, in order to avoid naive mysticism regarding our relations to the world.

It seems that our space provides the possibility of measuring and comparing different ways of instantiating _A, a kind of stable scale. We may use it to access concepts differentially; that is, we are now able to transform concepts in a space of quantitability (a term coined by Vera Bühlmann). The aspectional space as we have constructed it is thus necessary even in order to talk just about modeling. It would provide the possibility for theories about any transition between any mental entities one could think of. For instance, if we conceive “reference” as the virtue of purposeful activity and anticipation, we could explore and describe the conditions for the explication of the path between the _Model on the one side and the _Concept on the other. On this path—which is open on both sides—we could, for instance, first meet different kinds of symbols near the _Model, starting with the idealization and naming of models, followed by the mathematical attitude concerning the invention and treatment of signs, _Logic and all of its instances, semiosis and signs, words, and finally concepts, not forgetting that this path necessarily implies a particular dynamics regarding _Mediality and _Virtuality.

Such an embedding of transformations into co-referential transcendental entities is all we can expect to “know” reliably. That was the whole point of Kant. Well, here we can be more radical than Kant dared to be. The choreostemic space is a rejection of the idea of “pure thought”, or pure reason, since such knowledge would need to undergo a double instantiation, and this brings subjectivity back. It is just a phantasm to believe that propositions could be secured up to “truth”. This is true even for the least possible common denominator, existence.

I think that we cannot know whether something exists or not (here, I pretend to understand the term exist), that it is meaningless to ask this. In this case, our analysis of the legitimacy of uses has to rest on something else. (David Blair [49])

Note that Blair is very careful in his wording here. He does not claim any universality regarding the justification, or legitimization. His proposal is simply that any reference to “Being” or “Existence” is useless a priori. Claiming seriousness of ontology as an aspect of, or even as, an external reality immediately instantiates the claim of an external reality as such, which would be such-and-such irrespective of its interpretation. This, in turn, would consequently amount to a stance that sets the proof of the irrelevance of interpretation, and of interpretive relativism, as a goal. Any familiar associations here? Not least, physicists, and only physicists, speak of “laws” in nature. All of this is, of course, unholy nonsense, propaganda and ideology at least.

As a matter of fact, even in a quite strict naturalist perspective we need concepts and models. Those are obviously not part of “external” nature. Ontology is an illusion, completely and in any of its references, leading to pseudo-problems that are indeed “very difficult” to “solve”. Even if we manage to believe in “existence”, it remains a formless existence; or, more precisely, it has to remain formless. Any ascription of form would immediately strike back as a denial of the primacy of interpretation, hence as a naturalist determinism.

Before addressing the issue of the topological structure of our space, let us trace some other figures in our space.

6. Figures and Forms

Whenever we explicate a concept, we imply or refer to a model. In a more general perspective, this applies to virtuality and mediality as well. To give an example: describing a belief does not mean to believe, but to apply a model. The question now is how to revert the accretion of mental activities towards the _Model. _Virtuality can’t be created deliberately, since in this case we would again refer to the concept of model. Speaking about something, that is, saying in the Wittgensteinian sense, intensifies the _Model.

It is not too difficult, though, to find a candidate mechanism that turns the vector of mental activity away from the _Concept. It is through performance, mere action without explicable purpose, that we introduce new possibilities for interpretation and thus also an enriched potential as the (still abstract) instance of _Virtuality.

In contrast to that, the _Concept is implied. The _Concept can only be demonstrated, even by modeling. Traveling on some path heading towards the _Model, the need for interpretation continuously grows; hence, the more we try to approach the “pure” _Model, the stronger the force that will flip us back towards the _Concept.

_Mediality, finally, the fourth of our aspects, binds its immaterial colleagues to matter, or quasi-matter, in processes that are based on the multiplicity of populations. It is through _Mediality and its instances that chunks of information start to behave as devices, as quasi-material arrangements. The whole dynamics between _Concepts and _Models requires a symbol system, which can evolve only through the reference to _Mediality, which in turn is implied by populations of processes.

Above we said that the motivation for this structure is to provide a space of expressibility for mental phenomena in their entirety. Mental activity does not consist of isolated, rare events. It is a multitude of flows integrated into various organizational levels, even if we would consider only the language part. Mapping these flows into our space raises the question whether we could distinguish different attractors, different forms of recurrence.

Addressing this question establishes an interesting configuration, since we are talking about the form of mental activities. Perhaps it is also appropriate to call these forms “mental style”. In any case, we may take our space as a tool to formalize the question about potential classes of mental styles. In order to render our space more accessible, we take the tetrahedral body as a (crude) approximating metaphor for it.

Above we stressed the point that any explication intensifies the _Model aspect. Transposed into a Cartesian geometry we would have said, metaphorically, that explication moves us towards the corner of the model. Let us stick to this primitive representation for a moment and in favour of a more intuitive understanding. Now imagine constructing a vector that points away from the model corner, right to the middle of the area spanned by virtuality, mediality and concept. It is pretty clear that mental activity that leaves the model behind, and quite literally so, in this way will be some form of basic belief, or revelation. Religiosity (as a mental activity) may well be described as the attempt to balance virtuality, mediality and concept without resorting to any kind of explication, i.e. models. Of course, this is not possible in an absolute manner, since it is not possible to move in the aspectional space without any explication. This in turn then yields a residual that again points towards the model corner.

Inversely, it is not possible to move only in the direction of the _Model. Nevertheless, there are still many people proposing just that; think, for instance, of (abundant as well as overdone) scientism. What we can see here are particular forms of mental activity. What about other forms? For instance, the fixed-point attractor?

As we have seen, our aspectional space does not allow for points as singularities. Both the semantics of the aspections as well as the structure of the space as a second-order differential prevent them. Yet, somebody could attempt to realize an orbit around a singularity that is as narrow as possible. Although such points of absolute stability are completely illusory, the idea of the absoluteness of ideas—idealism—represents just such an attempt. Yet, the claim of absoluteness brings mental activity to rest. It is not by accident, therefore, that it was the logician Frege who championed a rather strange kind of hyperplatonism.

At this point we can recognize the possibility to describe different forms of mental activity using our space. Mental activity draws specific trails into our space. Moreover, our suggestion is that people prefer particular figures for whatever reasons, e.g. due to their cultural embedding, their mental capabilities, their knowledge, or even due to their basic physical constraints. Our space allows us to compare, and perhaps even to construct or evolve, particular figures. Such figures could be conceived as the orthoregulative instance for the conditions to know. Epistemology thus loses its claim of universality.

It seems obvious to call our space a “choreostemic” space, a term which refers to choreography. Choreography means “to draw a dance”, or “drawing by dancing”, derived from Greek choreia (χορεία), “dancing, (round) dance”. Vera Bühlmann [19] described that particular quality as “referring to an unfixed point loosely moving within an occurring choreography, but without being orchestrated prior to and independently of such occurrence.”

The notion of the choreosteme also refers to the chorus of the ancient theatre, with all its connotations, particularly the drama. Serving as an announcement for part V of D&R, Deleuze writes:

However, what carries out the third aspect of sufficient reason—namely, the element of potentiality in the Idea? No doubt the pre-quantitative and pre-qualitative dramatisation. It is this, in effect, which determines or unleashes, which differenciates the differenciation of the actual in its correspondence with the differentiation of the Idea. Where, however, does this power of dramatisation come from? (DR221)

It is right here, where the choreostemic space links in. The choreostemic space does not abolish the dramatic in the transition from the conditionability of Ideas into concrete thoughts, but it allows to trace and to draw, to explicate and negotiate the dramatic. In other words, it opens the possibility for a completely new game: dealing with mental attitudes. Without the choreostemic space this game is not even visible, which itself has rather unfortunate consequences.

The choreostemic space is not an epistemic space either. Epistemology is concerned with the conditions that influence the possibility to know. Literally, episteme means “to stand near”, or “to stand over”. It draws upon a fixed perspective that is necessary to evaluate something. Yet, in the last 150 years or so, philosophy has definitely experienced the difficulties implied by epistemology as an endeavour that had been expected to contribute finally to the stabilization of knowledge. I think the choreostemic space could be conceived as a tool that allows us to reframe the whole endeavour. In other words, the problematic field of the episteme, and the related research programme “epistemology”, follow an architecture (or intention) that has been set up far too narrowly. That reframing, though, has become accessible only through the “results” of, or the tools provided by, the work of Wittgenstein and Deleuze. Without the recognition of the role of language and without a renewal of the notion of the virtual, including the invention of the concept of the differential, that reframing would not have been possible at all.

Before we are going to discuss further the scope of the choreostemic space and the purposes it can serve, we have to correct the Cartesian view that slipped in through our metaphorical references. The Cartesian flavour not only keeps a certain arbitrariness alive, as the four conceptual aspects _A are given just by some subjective empirical observations. It also keeps us stuck completely within the analytical space, hence with a closed approach that again would need a mystical external instance for its beginning. This we have to correct now.

7. Reason and Sufficiency

Our choreostemic space is built as an aspectional space that is spanned by transcendental entities. As such, they reflect the implied conditionability of concrete entities like definitions, models or media. The _Concept comprises any potential concrete concept, the _Model comprises any actual model of whatsoever kind and expressed in whatsoever symbolic system, the _Mediality contains the potential for any kind of media, whether more material or more immaterial in character. The transcendental status of these aspects also means that we never can “access” them in their “pure” form. Yet, due to these properties our space allows us to map any mental activity, not just that of the human brain. In a more general perspective, our space is the space where the _Comparison takes place.

The choreostemic space is of course itself a model. Given the transcendentality of the four conceptual aspects _A, we can grasp the self-referentiality. Yet, this results neither in an infinite regress, nor in circularity. This would be the case only if the space were Cartesian and the topological structure were flat (Euclidean) and global.

First, we have to consider that the choreostemic space is not only a model, precisely due to its self-referentiality. Second, it is a tool, and as such it is not time-inert like a physical law. Its relevance unfolds only if it is used. This, however, invokes time and activity. Thus the choreostemic space could be conceived also as a means to intensify the virtual aspects of thought. Furthermore, and third, it is of course a concept, that is, an instance of the _Concept. As such, it should be constructed in a way that abolishes any possibility for a Cartesio-Euclidean regression. All these aspects are covered by the topological structure of the choreostemic space: it is meant to be a second-order differential.

A space made by the second-order differential does not contain items. It spawns procedures. In such a space it is impossible to stay at a fixed point. Whenever one would try to determine a point, one would be accelerated away. The whole space causes divergence of mental activities. Here we find the philosophical reason for the impossibility to catch a thought as a single entity.

We just mentioned that the choreostemic space does not contain items. Due to the second-order differential it is not made up as a set of coordinates, or, if we’d consider real scaled dimensions, as potential sets of coordinates. Quite to the opposite, there is nothing determinable in it. Yet, in hindsight, we can reconstruct figures in a probabilistic manner. The subject of this probabilism is again not determinable coordinates, but rather clouds of probabilities, quite similar to the way things are described in quantum physics by the Schrödinger equation. Unlike the completely structureless and formless clouds of probability which are used in the description of electrons, the figures in our space can take various, more or less stable forms. This means that we can try to evolve certain choreostemic figures and even anticipate them, but only to a certain degree. The attractor of a chaotic system provides a good metaphor for that: we clearly can see the traces in parameter space as drawn by the system, yet the system’s path as described by a sequence of coordinates remains unpredictable. Nevertheless, the attractor is probabilistically confined to a particular, yet cloudy “figure,” that is, an unsharp region in parameter space. Transitions are far from arbitrary.
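The attractor metaphor can be made tangible in a few lines of code. The sketch below is a minimal illustration, not anything specific to the choreostemic space: it iterates the Hénon map (a standard chaotic system, parameters a=1.4, b=0.3) from two almost identical starting points. The tiny initial difference is amplified until the two paths are fully decorrelated, yet every visited point remains confined to the same bounded, cloudy region.

```python
# The Henon map as an instance of the attractor metaphor: the path is
# unpredictable, yet the orbit stays confined to a bounded "figure".

def henon(x, y, a=1.4, b=0.3):
    """One step of the Henon map with the standard chaotic parameters."""
    return 1.0 - a * x * x + y, b * x

def orbit(x0, y0, n):
    """Iterate the map n times and collect the visited points."""
    pts, (x, y) = [], (x0, y0)
    for _ in range(n):
        x, y = henon(x, y)
        pts.append((x, y))
    return pts

a_orbit = orbit(0.0, 0.0, 1000)
b_orbit = orbit(1e-6, 0.0, 1000)   # nearly identical starting point

# Confinement: every visited point lies in a bounded region.
assert all(abs(x) < 1.5 and abs(y) < 0.5 for x, y in a_orbit)

# Unpredictability: the 1e-6 initial gap is amplified until the two
# orbits are decorrelated, though both remain on the same attractor.
late_gap = max(abs(x1 - x2) + abs(y1 - y2)
               for (x1, y1), (x2, y2) in zip(a_orbit[-100:], b_orbit[-100:]))
assert late_gap > 0.01
```

The two assertions capture exactly the pair of properties the metaphor relies on: the stability of the figure and the unpredictability of the path within it.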

Hence, we would propose to conceive the choreostemic space as being made up from probabilistic situs (pl.). Transitions between situs are at the same time also transformations. The choreostemic space is embedded in its own mediality without excluding roots in external media.

Above we equipped the space with a hyperbolic topology in order to align it to the transcendentality of the conceptual aspects. It is quite important to understand that the choreostemic space does not implement a single, i.e. global, hyperbolic relation. In contrast, each situs serves as a point of reference. Without this relativity, the choreostemic space would be centred again, and in consequence it would turn again to the analytic and totalising side. This relativity can be regarded as the completed and subjectivising Cartesian delocalization of the “origin”. It is clear that the distance measures of any two such relative hyperbolic spaces do not coincide any more. There is neither apriori objectivity nor could we expect a general mapping function. Approximate agreement about distance measures may be achievable only for reference systems that are rather close to each other.
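One aspect of this hyperbolic construction can be stated quite concretely. Taking the Poincaré disk model as an illustrative stand-in (an assumption of this sketch, since the text fixes no particular model), the distance from a reference point to the boundary is infinite: each further Euclidean step towards the rim costs ever more hyperbolic distance, which matches the claim that the transcendental aspects can be approached but never accessed in their “pure” form.

```python
# Hyperbolic distance from the centre of the Poincare disk to a point at
# Euclidean radius r: d(r) = ln((1 + r) / (1 - r)) = 2 * artanh(r).
# The boundary (r = 1), standing in for a transcendental pole, is
# infinitely far away, however close we creep towards it.

import math

def dist_from_origin(r):
    """Poincare-disk distance from the origin to radius r, 0 <= r < 1."""
    return math.log((1 + r) / (1 - r))

radii = [0.5, 0.9, 0.99, 0.999, 0.99999]
dists = [dist_from_origin(r) for r in radii]

# Shrinking Euclidean steps, unboundedly growing hyperbolic cost:
assert all(d1 < d2 for d1, d2 in zip(dists, dists[1:]))
assert dists[-1] > 12   # ln(199999), already more than ten times d(0.5)
```

The design choice here is only to exhibit the unreachable boundary; the relativity of distance between different reference systems, as claimed above, would additionally require comparing differently centred frames.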

The choreostemic space comprises any condition of any mental attitude or thought. We already mentioned it above: the corollary of that is that the choreostemic space is the space of _Comparison as a transcendental category.

It comprises the conditions for the whole universe of Ideas, it is an entirety. Here, it is again the topological structure of the space that saves us from mental dictatorship. We have to perform a double instantiation in order to arrive at a concrete thought. It is somewhat important to understand that these instantiations are orthoregulated.

It is clear that the choreostemic space destroys the idea of a uniform rationality. Rationality can’t be tied to truth, justice or utility in an objective manner, even if we would soften objectivity into a kind of relaxed intersubjectivity. Rationality depends completely on the preferred or practiced figures in the choreostemic space. Two persons, or more generally, two entities with some mental capacity, could completely agree on the facts, that is on the percepts, the way of their construction, and the relations between them, but nevertheless assign them completely different virtues and values, simply for the fact that the two entities inhabit different choreostemic attractors. Rationality is global within a specific choreostemic figure, but local and relative when seen from outside that figure. The language game of rationality therefore does not refer to a particular attitude towards argumentation; quite in contrast, it includes and displays the will to establish, if not to enforce, uniformity. Rationality is the label for the will to power under the auspices of logic and reductionism. It serves as the display for certain, quite critical moral values.

Thus, the notion of sufficient reason loses its frightening character as well. Like any other principle of practice it gets transformed into a strictly local principle, retaining some significance only with regard to situational instrumentality. Since the choreostemic space is a generative space, locality comprises temporal locality as well. According to the choreostemic space, sufficient reasons can’t even be transported between subsequent situations. In terms of the choreostemic space, notions like rationality or sufficient reason are relative to a particular attractor. In different attractors their significance could be very different; they may bear very different meanings. Viewed from the opposite direction, we also can see that a more or less stable attractor in the choreostemic space first has to form, or: to be formed, before there is even the possibility for sufficient reasons. This runs directly parallel to Wittgenstein’s conception of logic as a transcendental apriori that possibly becomes instantiated only within the process of an unfolding Lebensform. As a contribution to political reason, the choreostemic space enables persons to inhabit different attractors and to follow different mental styles. Later, we will return to this aspect.

In D&R, Deleuze explicated the concept of the “Image of Thought”, as part III of D&R is titled. There he first discusses what he calls the dogmatic image of thought, comprised according to him of eight elements that together lead to the concept of the idea as a representation (DR167). Following that, he insists that the idea is bound to repetition and difference (as differenciation and differentiation), where repetition introduces the possibility of the new, as it is not the repetition of the same. Nevertheless, Deleuze didn’t develop this Image into a multiplicity, as could have been expected from a more practical perspective, i.e. the perspective of language games. These games are different from his notion, despite his emphasizing at several instances that language is a rich play.

For me it seems that Deleuze didn’t (want to) get rid of ontology; hence he did not conceive of his great concept of the “differential” as a language game, and in turn he failed to detect the opportunity for self-referentiality, or even to apply it in a self-referential manner. We therefore certainly do not agree with his attempt to ground the idea of sufficient reason as a global principle. Since “sufficient reason” is a practice, I think it is not possible, or not sufficient, to conceive of it as a transcendental guideline.

8. Elective Kinships

It is pretty clear that the choreostemic space is applicable to many problematic fields concerning mental attitudes, and hence concerning cultural issues at large, reaching far beyond the specificity of individual domains.

As we will see, the choreostemic space may serve as a treatment for several kinds of troublesome aberrations, in philosophy itself as well as in its various applications. Predominantly, the choreostemic space provides the evolutionary perspective towards the self-containing theoretical foundation of plurality and manifoldness.26 Comparing that with Hegel’s slogans of “the synthesis of the nation’s reason” (“Synthese des Volksgeistes”) or “The Whole is the Truth” (“Das Ganze ist das Wahre”) shows the difference regarding its level and scope.

Before we go into the details of the dynamics that unfolds in the choreostemic space, we would like to pick up on two areas, the philosophy of the episteme and the relationship between anthropology and philosophy.

8.1. Philosophy of the Episteme

The choreostemic space is not about a further variety of some epistemological argument. It is thought as a reframing of the concerns that have been addressed traditionally by epistemology. (Here, we already would like to warn of the misunderstanding that the choreostemic space is exhausted by epistemology.) Hence, it should be able to serve as the theoretical frame for the sociology of science or the philosophy of science as well. Think about the work of Bruno Latour [9], Karin Knorr Cetina [10] or Günther Ropohl [11] for the sociology of science, or the work of van Fraassen [12] or Giere [13] for the field of philosophy of science. Sociology and philosophy, and quite likely any of the disciplines in the human sciences, should indeed establish references to the mental in some way, but rather not to the neurological level, nor, since we have to avoid anthropological references, to cognition as it is currently understood in psychology.

Giere, for instance, brings the “cognitive approach” and hence the issue of practical context close to the understanding of science, criticizing the idealising projection of unspecified rationality:

Philosophers’ theories of science are generally theories of scientific rationality. The scientist of philosophical theory is an ideal type, the ideally rational scientist. The actions of real scientists, when they are considered at all, are measured and evaluated by how well they fulfill the ideal. The context of science, whether personal, social or more broadly cultural, is typically regarded as irrelevant to a proper philosophical understanding of science. (p.3)

The “cognitive approach” that Giere proposes as a means to understand science is, however, threatened seriously by the fact that there is no consensus about the mental. This clearly conflicts with the claim of trans-cultural objectivity of contemporary science. Concerning cognition, there are still many simplistic paradigms around, recently seriously renewed by the machine learning community. Aaron Ben Ze’ev [14] writes critically:

In the schema paradigm [of the mind, m.], which I advocate, the mind is not an internal container but a dynamic system of capacities and states. Mental properties are states of a whole system, not internal entities within a particular system. […] Novel information is not stored in a separate warehouse, but is ingrained in the constitution of the cognitive system in the form of certain cognitive structures (or schemas). […] The attraction of the mechanistic paradigm is its simplicity; this, however, is an inadequate paradigm, because it fails to explain various relevant phenomena. Although the complex schema paradigm does not offer clear-cut solutions, it offers more adequate explanations.

How problematic even such critiques are can be traced as soon as we remember Wittgenstein’s remark on “mental states” (Brown Book, p.143):

There is a kind of general disease of thinking which always looks for (and finds) what would be called a mental state from which all our acts spring as from a reservoir.

In the more general field of epistemology there is still no sign of any agreement about the concept of knowledge. From our position, this is hardly surprising. First, concepts can’t be defined at all. All we can find are local instances of the transcendental entity. Second, knowledge and even its choreostemic structure is dependent on the embedding culture, while at the same time it is forming the culture. The figures in the choreostemic space are attractors: they do not prescribe the next transformation, but they constrain the possibility for it. How, then, to “define” knowledge in an explicit, positively representationalist manner? For instance, knowledge can’t be reduced to confirmed hypotheses qua validated models. It is just impossible in principle to say “knowledge is…”, since this implies inevitably the demand for an objective justification. At most, we can take it as a language game. (Thus the choreosteme, that is, the potential of building figures in the choreostemic space, should not be confused with the episteme! We will return to this issue later again.)

Yet, just to point to the category of the mental as a language game does not feel satisfying at all. Of course, Wittgenstein’s work sheds bright light on many aspects of mentality. Nevertheless, we can’t use Wittgenstein’s work as a structure; it is itself to be conceived as a result of a certain structuredness. On the other hand, it is equally disappointing to rely on the scientific approach to the mental. In some way, we need a balanced view, which additionally should provide the possibility for a differential experimentation with mechanisms of the mental.

Just that is offered by the choreostemic space. We may relate disciplinary reductionist models to concepts as they live in language games, without any loss and without getting into trouble.

Let us now see what is possible by means of the choreostemic space and the anti-ontological T-Bar-Theory for the terms believing, referring, explicating, understanding and knowing. It might be relevant to keep in mind that by “mental activities” we do not refer to any physical or biochemical process. We distinguish the mental from the low-level affairs in the brain. Beliefs, or believing, are thus considered to be language games. From that perspective our choreostemic space just serves as a tool to externalize language in order to step outside of it, or likewise, to become able to render important aspects of playing the language game visible.

Believing

The category of beliefs, or likewise the activity of believing27, we already met above. We characterised it as a mental activity that leaves the model behind. We sharply refute the quite abundant conceptualisation of beliefs as a kind of uncertainty in models. Since there is no certainty at all, not even with regard to transcendental issues, such a conceptualisation would make little sense. Actually, the language game of believing shows its richness even in a short investigation like this one.

Before we go into details here let us see how others conceive of it. PMS Hacker [27] gave the following summary:

Over the last two and a half centuries three main strands of opinion can be discerned in philosophers’ investigations of believing. One is the view that believing that p is a special kind of feeling associated with the idea that p or the proposition that p. The second view is that to believe that p is to be in a certain kind of mental state. The third is that to believe that p is to have a certain sort of disposition.

Right at the beginning of his investigation, Hacker marks the technical, reductionist perspective onto belief as a misconception. This technical reductionism, which took form as the so-called AGM-theory in the paper by Alchourron, Gärdenfors and Makinson [28], we will discuss below. Hacker writes about it:

Before commencing analysis, one misconception should be mentioned and put aside. It is commonly suggested that to believe that p is a propositional attitude. That is patently misconceived, if it means that believing is an attitude towards a proposition. […] I shall argue that to believe that p is neither a feeling, nor a mental state, nor yet a disposition to do or feel anything.

Obviously, believing has several aspects. First, it is certainly a kind of mental activity. It seems that I need not tell anybody that I believe in order to be able to believe. Second, it is a language game, and a rich one indeed. It seems almost to be omnipresent. As a language game, it links “I believe that” with “I believe A” and “I believe in A”. We should not overlook, however, that these utterances are spoken towards someone else (even in inner speech); hence the whole wealth of processes and relations of interpersonal affairs has to be regarded: all those mutual ascriptions of roles, assertions, maintained and demonstrated expectations, displays of self-perception, attempts to induce a certain co-perception, and so on. We frequently cited Robert Brandom, who analysed that in great detail in his “Making it Explicit”.

Yet, can we really say that believing is just a mental activity? For one thing, above we did not claim that believing is something like a “pure” mental activity. We clearly would reject such a claim. First, we clearly cannot set the mental as such into a transcendental status, as this would lead straight to a system like Hegel’s philosophy, with all its difficulties, untenable claims and disastrous consequences. Second, it is impossible to explicate “purity”, as this would deny the fact that models are impossible without concepts. So, is it possible that a non-conscious being or entity can believe? Not quite, I would like to propose. Such an entity will of course be able to build models, even quite advanced ones, though probably not about reflective subjects such as concepts or ideas. It could experience that it cannot get rid of uncertainty and its closely related companion, risk. Thus we can say that these models are not propositions “about” the world; they comprise uncertainty and allow to deal with uncertainty through actions in the world. Yet, the ability to deal with uncertainty is certainly not the same as believing. We would not need the language game at all. Saying “I believe that A” does not mean to have a certain model with a particular predictive power available. As models are explications, expressing a belief or experiencing the compound mental category “believing” is just the demonstration that any explication is impossible for the person.

Note that we conceive of “belief” as completely free of values and also without any reference to mysticism. Indeed, the choreostemic space allows to distinguish different aspects of the “compound experience” that we call “belief”, which otherwise are not even visible as separate aspects of it. As a language game we thus may specify it as the indication that the speaker assigns—or the listener is expected to assign—a considerable portion of the subject matter to that part of the choreostemic figure that points away from the _Model. It is immediately clear from the choreostemic space that mental activity without belief is not possible. There is always a significant “rest” that could not be covered by any kind of explication. This is true for engineering and of course for any kind of social interaction, as soon as mutual expectations appear on the stage. By means of the choreostemic space we also can understand the significance of trust in any interaction with the external world. In communicative situations, this quickly may lead to a game of mutual deontic ascriptions, as Robert Brandom [15] has been arguing in his “Making it Explicit”.

Interestingly enough, belief (in its choreostemically founded version) is implied by any transition away from the _Model, for instance also in case of the transition path that ultimately is heading towards the _Concept. Even more surprising—at first sight—and particularly relevant is the “inflection dynamics” in the choreostemic space. The more one tries to explicate something, the larger the necessary imports (e.g. through orthoregulations) from the other _a, and hence the larger is the propensity for an inflecting flip.28

As an example, take the historical development of theories in particle physics. There, people started with rather simple experimental observations, which then have been assimilated by formal mathematical models. Those in turn led to new experiments, and so forth, until physics reached a level of sophistication where “observations” are based on several, if not many, layers of derived concepts. On the way, structural constants and heuristic side conditions are implied. Finally, then, the system of the physical model turns into an architectonics, a branched compound of theory-models that sounds as trivial as it is conceptual. In case of physics, it is the so-called grand unified theory. There are several important things here. First, due to large amounts of heuristic settings and orthoregulations, such concepts can’t be proved or disproved anymore, least of all by empirical observations. Second, on the achieved level of abstraction, the whole subject could be formulated in a completely different manner. Note that such a dynamic between experiment, model, theory29 and concept never has been described in a convincing manner before.30

Now that we have a differentiated picture of belief at our disposal, we can briefly visit the field of so-called belief revision. Belief revision has been widely adopted in artificial intelligence and machine learning as the theory for updating a data base. Quite unfortunately, the whole theory is, well, simply crap, if we would go to apply it according to its intention. I think that we can draw some significance of the choreostemic space from this mismatch for a more appropriate treatment of beliefs in information technology.

The theory of belief revision was put forward by a branch of analytical philosophy in a paper by Alchourron, Gärdenfors and Makinson (1985) [29], often abbr. as “AGM-theory.” Hansson [30] writes:

A striking feature of the framework employed there [monnoo: AGM] is its simplicity. In the AGM framework, belief states are represented by deductively closed sets of sentences, called belief sets. Operations of change take the form of either adding or removing a specified sentence.

Sets of beliefs are held by an agent, who establishes or maintains purely logical relations between the items of those beliefs. Hansson correctly observes that:

The selection mechanism used for contraction and revision encodes information about the belief state not represented by the belief set.

Obviously, such “belief sets” have nothing to do with beliefs as we know them from the language game, beside the fact that they are a misdone caricature. As with Pearl [23], the interesting stuff is left out: how to achieve those logical sentences at all, notably by a non-symbolic path of derivation? (There are no symbols out there in the world.) By means of the choreostemic space we easily derive the answer: by an orthoregulated instantiation of a particular choreostemic performance in an unbounded (open) aspectional space that spans between transcendental entities. Since the AGM framework starts with or presupposes logic, it simply got stuck in the symbolistic fallacy, or illusion. Accordingly, Pollock & Gillies [30] demonstrate that “postulational approaches” such as the AGM-theory can’t work within a fully developed “standard” epistemology. Both are simply incompatible with each other.
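To make the criticised framework concrete, here is a deliberately naive sketch of AGM-style operations. Real AGM theory works with deductively closed (generally infinite) sets of sentences and is characterised by postulates rather than algorithms; everything below, the Horn-style rules as well as the pick-the-first-premise selection, is an illustrative assumption and not the theory itself.

```python
# Toy rendering of AGM-style belief change over a fixed set of Horn
# rules. Belief states are rule-closed sets of atomic sentences;
# expansion adds a sentence and recloses, contraction removes a sentence
# and whatever would re-derive it.

RULES = {                                # premises -> conclusion
    frozenset({"rain", "outdoors"}): "wet",
}

def closure(beliefs):
    """Least fixed point: believe each conclusion whose premises are believed."""
    beliefs, changed = set(beliefs), True
    while changed:
        changed = False
        for premises, concl in RULES.items():
            if premises <= beliefs and concl not in beliefs:
                beliefs.add(concl)
                changed = True
    return beliefs

def expand(beliefs, sentence):
    """Expansion: add the sentence and close under the rules."""
    return closure(set(beliefs) | {sentence})

def contract(beliefs, sentence):
    """Contraction with a crude selection function: while the sentence is
    still derivable, give up one premise (the alphabetically first) of a
    rule that re-derives it, recursively."""
    result = set(beliefs) - {sentence}
    while sentence in closure(result):
        for premises, concl in RULES.items():
            if concl == sentence and premises <= closure(result):
                result = contract(result, min(premises))
                break
    return result

K = expand(expand(set(), "rain"), "outdoors")   # rain, outdoors, wet
K_minus = contract(K, "wet")                    # forced to give up "outdoors" too
```

Note how the selection function of the contraction (which premise to give up) encodes information that is nowhere represented in the belief set itself, which is exactly Hansson’s observation quoted above; and note that everything interesting, namely where the sentences come from in the first place, happens before this machinery even starts.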

Explicating

Closely related to believing is explicating, the latter being just the inverse of the former, pointing in the “opposite direction”. Explicating is almost identical to describing a model. The language game of “explication” means to transform, to translate and to project choreostemic figures into lists of rules that could be followed, or in other words, into the sayable. Of course, this transformation and projection is neither analytic nor neutral. We must be aware of the fact that even a model can’t be explicated completely. Besides, this rule-following itself implies the necessity of beliefs and trust, and it requires a common understanding about the usage or the influence of orthoregulations. In other words, without an embedding into a choreostemic figure, we can’t accomplish an explication.

Understanding, Explaining, Describing

Outside the perspective of the language game, “understanding” can’t be understood. Understanding emerges as a result of relating the items of a population of interpretive acts. This population and the relations imposed on it are closely akin to Heidegger’s scaffold (“Gestell”). Mostly, understanding something is just extending an existing scaffold. About these relations we can’t speak clearly or explicitly any more, since they are constitutive parts of the understanding. Like all language games, this one too unfolds in social situations, which need not be syntemporal. Understanding is a confirming report about beliefs in, and expectations of, certain capabilities of one’s own.

Saying “I understand” may convey different meanings. More precisely, understanding may come in different shades placed between two configurations. Either it signals that one believes oneself able to extend just one’s own scaffold, one’s own future “Gestelltheit”. Alternatively, it is used to indicate the belief that the extension of the scaffold is shared between individuals in such a way that one could reproduce the same effect as anyone else who understands the same thing. This effect could be merely instrumental or, more significantly, it could refer to the teaching of further pupils. In this case, two people understand something if they can teach another person to the same ends.

Beside the performative and social aspects of understanding there are of course the mental aspects of the concept of “understanding” something. These can be translated into choreostemic terms. Understanding is less a particular “figure” in the choreostemic space than a deliberate visiting of the outer regions of the figure and the intentional exploration of those outposts. We understand something only if we are aware of the conditions of that something and of our personal involvements. These include cognitive aspects, but also the consequences of the performative parts of acts that contribute to an intensifying of the aspect of virtuality. A scientist who builds a strong model without considering his own and its conditionability does not understand anything. He would just be practicing a serious sort of dogma (see Quine on the dogmas of empiricism). Such a scientist’s modeling could be replaced by that of a machine.

A similar account could be given of the application of a grammar, irrespective of the abstractness of that grammar. Referring to a grammar without considering its conditionability could be performed by a mindless machine as well. It would indeed remain a machine: mindless, and forever determined. Such is most, if not all, of the computer software dealing with language today.

We would again like to emphasize that understanding is not exhausted by the ability to write down a model. Understanding means relating the model to concepts, that is, tracing a possible path that would point towards the concept. A deep understanding refers to the ability to extend a figure towards the other transcendental aspects in a conscious manner. Hence, within idealism and (any sort of) representationalism, understanding is actually excluded. They mistake the transcendental for the empirical and vice versa, ending in a strict determinism and dogmatism.

Explaining, in turn, indicates the intention to make somebody else understand a certain subject. The infamous existential “Why?” does not make any sense; it is questionable why this language game should be performed at all, since the why of absolute existence can’t be answered at all. Actual practice, however, looks quite different: as a matter of fact, we do play this game in a well comprehensible way and in many social situations. Conceiving the “explanation” of nature as accounting for its existence (as Epperson does, see [31] p.357) presupposes that everything could be turned into the sayable. It would result in the conflation of logic and the factual world, something Epperson indeed proposes. Some pages later in his proposal about quantum physics he seems to loosen that strict tie when, referring to Whitehead, he links “understanding” to coherence and empirical adequacy ([31] p.361):

I offer this argument in the same speculative philosophical spirit in which Whitehead argued for the fitness of his metaphysical scheme to the task of understanding (though not “explaining”) nature—not by the “provability” of his first principles via deduction or demonstration, but by their evaluation against the metrics of coherence and empirical adequacy.

Yet, this presents us with an almost perfect phenomenological stance, separating objects from other objects and from subjects. Neither coherence nor empirical adequacy can be separated from concepts, models and the embedding Lebenswelt. It thus expresses the belief in “absolute” understanding and final reason, ideas that are at least highly problematic, even and especially if we take into account the role Whitehead gives “value” as a cosmological apriori. It is quite clear that this attitude to understanding is sharply different from anything related to semiotics, the primacy of interpretation, the role of language, or a relational philosophy, in short, from anything that resembles even remotely what we proposed about the understanding of understanding a few lines above.

The intention to make somebody else understand a certain subject necessarily implies a theory, where theory is understood here (as always in this text) as a milieu for deriving or inventing models. The “explaining game” comprises the practice of providing a general perspective to the recipient such that she or he could become able to invent such a model, precisely because a “direct” implant of an idea into someone else is quite impossible. This milieu involves orthoregulation and a grammar (in the philosophical sense). The theory, and the grammar associated or embedded with it, does nothing other than provide support for finding a possibility for the invention or extension of a model. It is a matter of persistent exchange of models from a properly grown population of models that allows a common understanding about something to develop. In the end we then may say “yes, I can follow you!”

Describing is often not (properly) distinguished from explaining. Yet, in our context of choreostemically embedded language games it is neither mysterious nor difficult to do so. We may conceive of describing simply as explicating something into the sayable; the element of cross-individual alignment is not part of it, or present only in a much less explicit way. Hence, usually the respective declaration will not be made. The element of social embedding is much less present.

Describing pretends more or less that all three aspects accompanying the model aspect could be neglected, particularly, however, the aspects of mediality and virtuality. The mathematical proof can be taken as an extreme example of that. Yet even there it is not possible, since at least a working system of symbols is needed, which in turn is rooted in a dynamics unfolding as a choreostemic figure, the mental aspect of Forms of Life. Basically, this impossibility of fixing a “position” in the choreostemic space is responsible for the so-called foundational crisis in mathematics. This crisis prevails even today in philosophy, where many people, naively enough, still search for absolute justification, or truth, or at least regard such a thing as a reasonable concept.

All this should not be understood as an attempt to deny description or describing as a useful category. Yet, we should be aware that the difference to explaining is just one of (choreostemic) form. More explicitly, said difference is an affair of culturally negotiated portions of the transcendental aspects that make up mental life.

I hope this sheds some light on Wittgenstein’s claim that philosophy should just describe, but not explain anything. The possibly perceived mysteriousness may vanish as well if we remember his characterisation of grammar.

Both understanding and explaining are quite complicated socially mediated processes; hence they unfold upon layers of milieus of mediality. Not only do both relate to models and concepts that need to exist in advance, and thus to a particular dynamics between them, they also require a working system of symbols. Models and concepts relate to each other only as instances of _Models and _Concepts, that is, in a space as it is provided by the choreostemic space. Talking about understanding as a practice is not possible without it.

Referring

Referring to something means to point to the expectation that the referred entity could point to the issue at hand. Referring is not “pointing to” and hence does not consist of a single move. It is “getting pointed to”. Said expectation is based on at least one model. Hence, if we refer to something, we put our issue as well as ourselves into the context of a chain of signifiers. If we refer to somebody, or to a named entity, then this chain of interpretive relations transforms in one of two ways.

Either the named entity is used, that is, put into a functional context, or more precisely, assigned a sayable function. The functionalized entity does not (need to) interpret any more; all activity gets centralized, which could be used as the starting point for totalizing control. This applies to any entity, whether it is merely material, living, or social.

The second way in which referencing is affected by names concerns the reference to another person, or a group of persons. If it is not a functional relationship, e.g. taking the other as a “social tool”, it is less the expected chaining as signifier by the other person. Persons cannot be interpreted as we interpret things or build signs from signals. Referring to a person means accepting the social game that comprises (i) mutual deontic assignments that develop into “roles”, including deontic credits and their balancing (as first explicated by Brandom [15]), (ii) the acceptance of the limit of the sayable, which results in a use of language that is more or less non-functional, always metaphorical and sometimes even poetic, as well as (iii) the declared persistence for repeated exchanges. The fact that we interpret the utterances of our partner within the orthoregulative milieu of a theory of mind (which builds up through these interpretations) means that we mediatize our partner at least partially.

The limit of the sayable is a direct consequence of the choreostemic constitution of performing thinking. The social is based on communication, which means “to put something into common”; hence, we can regard “communication” as the driving, extending and public part of using sign systems. As a proposed language game, “functional communication” is nonsense, much like the utterance “soft stone”.

By means of the choreostemic space we can also see that any referencing is equal to a more or less extensive figure, as models, concepts, performance and mediality are involved.

Knowing

At first sight, we could suspect that before any instantiation qua choreostemic performance we cannot know something positively for sure in a global manner, i.e. objectively, as is often meant to be expressed by the substantive “knowledge”. Due to that performance we have to interpret before we can know positively and objectively. The result is that we never can know anything for sure in a global manner. This holds even for transcendental items, that is, what Kant dubbed “pure reason”. Nevertheless, the language game “knowledge” has a well-defined significance.

“Knowledge” is a reasonable category only with respect to performing, interpreting (performance in thought) and acting (organized performance). It is bound to a structured population of interpretive situations, to Peircean signs. We thus find a gradation of privacy vs. publicness with respect to knowledge. We just have to keep in mind that neither of these qualities could be thought of as being “pure”. Pure privacy is not possible, because there is nothing like a private language (meaning qua usage and shared reference). Pure publicness is not possible because there is the necessity of a bodily rooted interpreting mechanism (associative structure). Things like “public space” as a purely exterior or externalized thing do not exist. The relevant issue for our topic of a machine-based episteme is that functionalism always ends in a denial of the private language argument.

We now can see easily why knowledge could not be conceived as a positively definable entity that could be stored or transferred as such. First, it is of course a language game. Second, and more important, “knowing {of, about, that}” always relates to instances of transcendental entities, and necessarily so. Third, even if we could agree on some specific way of instantiating the transcendental entities, it always invokes a particular figure unfolding in an aspectional space. This figure can’t be transferred, since this would mean that we could speak about it outside of itself. Yet, that’s not possible, since it is in turn impossible to just pretend to follow a rule.

Given this impossibility we should pause for a moment at the apparent gap it opens towards teaching. How can one teach somebody something if knowledge can’t be transferred? The answer is furnished by the equipment that is shared among the members of a community of speakers or co-inhabitants of the choreostemic space. We need this equipment for matching the orthoregulation of our rule-following. The parts, tools and devices of this equipment are made from palpable traditions, cultural rhythms, institutions, individual and legal preferences regarding the weighting of individuals versus the various societal clusters, the large story of the respective culture and the “templates” provided by it, the consciously accessible time horizon, both to the past and the future31, and so on. Common sense wrongly labels the resulting “setup” a “body of values”. More appropriately, we could call it grammatical dynamics. Teaching, then, is in some ways more about the reconstruction of the equipment than about agreement on facts, albeit the arrangement of the facts may tell us a lot about the grammar.

Saying ‘I know’ means that one wants to indicate that she or he is able to perform choreostemically with regard to the subject at hand. In other words, it is a label for a pointer (say, a reference) to a particular image of thought and its use. This includes the capability of teaching and explaining, which are probably the only ways to check whether somebody really knows. We cannot, however, claim that we are aligned to a particular choreostemic dynamics. We can only believe that our choreostemic moves are part of a supposed attractor in the choreostemic space. From that it also follows that knowledge is not just about facts, even if we conceive of facts as compounds of fixed relations and fixed things.

The traditional concern of epistemology, as the discipline that asks about the conditions of knowing and knowledge, must be regarded as a misplaced problem. Usually, epistemology does not refer to virtuality or mediality. Moreover, in epistemology knowledge is often sharply separated from belief, yet for the wrong reasons. The formula of “knowledge as justified belief” puts them both on the same stage. It would then have to be clarified what “justified” should mean, which in turn is not possible. Explicating “justifying” would need reference to concepts and models, or rather the confinement to a particular one: logic. Yet, knowledge and belief are completely different with regard to their role in choreostemic dynamics. While belief is an indispensable element of any choreostemic figure, knowledge is the capability to behave choreostemically.

8.2. Anthropological Mirrors

Philosophy suffers even more from a surprising strangeness. As Marc Rölli recently mentioned [34] in his large work about the relations between anthropology and philosophy (KAV),

For more than 200 years philosophy has been anthropologically determined. Yet philosophy has not investigated the relevance of this fact to any significant extent. (KAV15)32

Rölli agrees with Nietzsche regarding his critique of idealism.

“Nietzsche’s critique of idealism, which is available in many nuances, always targeting the philosophical self-misunderstanding of the pure reason or pure concepts, is also directed against a certain conception of nature.” (KAV439)33.

…where the rejected conception of nature is purposefulness. In nature there is no forward-directed purpose, no plan. Such ideas are due either to religious romanticism or to a serious misunderstanding of the Darwinian theory of natural evolution. In biological nature, there is only a blind tendency towards the preference of an intensified capability for generalization34. Since Kant, himself included, and in some ways already since Descartes, philosophy has been influenced by scientific, technological or anthropological conceptions about nature in general, or about the nature of the human mind.

This is problematic for (at least) three reasons. First, it constitutes a misunderstanding of the role of philosophy to rely on scientific insights. Of course, this perspective is becoming (again) visible only today, notably after the Linguistic Turn, as far as non-analytical philosophy is concerned. Secondly, however, it is clear that the said influence implies, if it remains unreflected, a normative tie to empirical observations. This clearly represents a methodological shortfall. Thirdly, even if one were to accept a certain link between anthropology and philosophy, the foundations taken from a “philosophy of nature”35 are so simplistic that they can hardly be regarded as viable.

This almost primitive image of purposeful nature finally flowed into the functionalism of our days, whether in philosophy (Habermas) or in so-called neuro-philosophy, by which many feel inclined to establish a variety of determinism that is even proto-Hegelian.

In the same passage that invokes Nietzsche’s critique, Rölli cites Friedrich Albert Lange [39]:

“The topic that we actually refer to can be denoted explicitly. It is, so to speak, the apple in the logical Fall of German philosophy subsequent to Kant: the relation between subject and object within knowledge.” (KAV443)36

Lange deliberately credits Kant, in contrast to the philosophers of German Idealism, with being clear about that relationship. For Kant, subject and object constitute themselves only as an amalgam; purity of any kind has been claimed only by Hegel, Schelling and their epigones and inheritors. The intention behind introducing pureness, according to Lange, is to support absolute reason or absolute understanding, in other words, eternally justified reason and the undeniability of certain concepts. Note that German Idealism was born before the foundational crisis in mathematics, which started with Russell’s remark on Frege’s “Begriffsschrift” and his “all” quantifier, found its continuation in the Hilbert programme, and finally was inscribed into the roots of mathematics by Gödel. Philosophies of “pureness” are not items of the past, though. Think of materialism, or of Agamben’s “aesthetics of pure means”, which Benjamin Morgan [39] correctly identified as the metaphysical scaffold of Agamben’s recent work.

Marc Rölli dedicates all of the 512 pages to the endeavor of destroying the extra-philosophical foundations of idealism. As the proposed alternative we find pragmatism, that is, a conceptual foundation of philosophy based on language and the form of life (Lebenswelt in the Wittgensteinian sense). He concludes his work accordingly:

After all, it may have become clearer that this is not about a simple, naive pragmatism, but rather about a pragmatism of difference37 that has been constructed with great subtlety. (KAV512)38

Rölli’s main target is German Idealism. Yet Hegelian philosophy is undeniably abundant not only on the European continent, where it appears in the Frankfurt School from Adorno to Habermas and even K.-O. Apel, followed by the ill-fated ideas of Luhmann, which are infected by Hegel as well. Significant traces of it can also be found in German society, in contemporary legal positivism and the oligarchy of political parties.

During the last 20 years or so, Hegelian positions have also spread considerably in Anglo-American philosophy and political theory. Think of Hardt and Negri, or even the recent works of Brian Massumi. Hegelian philosophy, however, can’t be taken in portions. It is totalitarian through and through, because its main postulates, such as “absolute reason”, are totalizing by themselves. Hegelian philosophy is a relic, and a quite dangerous one, regardless of whether you interpret it in a leftist (Lenin) or in a rightist (Carl Schmitt) manner. With its built-in claim to absoluteness come the explicit denial of context-specificity, of the necessary relativity of interpretation, of the openness of future evolution, and of the freedom inscribed deeply even into the basic operation of comparison; all of these denials turn into transcendental aprioris. The same holds for the claim that things, facts, or even norms can be justified absolutely. No further comment should be necessary about that.

The choreostemic space itself cannot result in a totalising or even totalitarian attitude. We met this point already earlier when we discussed the topological structure of the space and its a-locational “substance” (Reason and Sufficiency). As Deleuze emphasized, there is a significant difference between entirety and completeness, which just mirrors the difference between the virtual and the actual. We would like to add that the choreostemic space also disproves the possibility of universality for any kind of conception. In some ways, yet implicitly, the choreostemic space defends humanity against materiality and any related attitude. Even if we were determined completely on the material level, which we surely are not39, the choreostemic space proves the indeterminateness and openness of our mental life.

You may already have got the feeling that we are going to slip into political theory. Indeed, the choreostemic space not only forms a space of indeterminateness and applicable pre-specificity, it also provides a kind of space of “Swiss neutrality”. Its capability to allow for a comparison of collective mental setups, without resorting to physicalist concepts like swarms or mysticist concepts like “collective intelligence”, provides a fruitful ground for any construction of transitions between choreostemic attractors.

Despite the fact that the choreostemic space concerns any kind of mentality, whether seen as hosted more by identifiable individuals or by collectives, the concept should not be taken as an actual philosophy of mind (“Philosophie des Geistes”). It transcends it, as it does any particular philosophical stance. It would be wrong as well to confine it to an anthropology or an anthropological architecture of philosophy, as is the case not only in Hegel (Rölli, KAV137). In some ways, it presents a generative zone for a-human philosophies, without falling prey to the necessity of defining what human or a-human should mean. For sure, we do not refer here to transhumanism as it is known today, which just follows the traditional anthropological imperative of growth (“Steigerungslogik”), as Rölli correctly remarks (KAV459).

A-human simply means that, as a conception, it is neither dependent on nor confined to the human Lebenswelt. (We would again like to stress that it represents neither a positively sayable universalism nor any kind of universal procedural principle, and also that this “a-” should not be understood as “anti” or “opposed”, but simply as “being free of”.) It is this position that is mandatory for drawing comparisons40 and, subsequently, conclusions (in the form of introduced irreversibilities) about entities that belong to strikingly different Lebenswelten (forms of life). Any particular philosophical position would immediately be guilty of applying human scales to non-human entities. That was already a central cornerstone of Nietzsche’s critique, not only of German philosophy of the 19th century, but also of the natural sciences.

8.3. Simplicissimi

Rölli criticizes the uncritical adoption by philosophy of items taken from the scientific world view in the 19th century. Today, philosophy is still not secured against simplistic conceptions uncritically assimilated from certain scientific styles, despite the fact that nowadays we could know about the (non-analytic) Linguistic Turn, or about the dogmatics in empiricism. What I mean here comprises two conceptual ideas: the reduction of living or social systems to states, and the notion of exception, or that of normality, respectively.

There are myriads of references in the philosophy of mind invoking so-called mental states. Yet it is not only in the philosophy of mind that one finds the state as a concept, but also in political theory, namely in Giorgio Agamben’s recent work, which builds heavily on the notion of the “state of exception”. The concept of a mental state is utter nonsense, though, and mainly so for three very different reasons. The first can be derived from the theory of complex systems, the second from language philosophy, and the third from the choreostemic space.

In complex systems, the notion of a state is empty. What we can observe, subsequent to the application of some empirical modeling, is that complex systems exhibit meta-stability. It looks as if they were stable and trivial. Yet, what we could have learned mainly from the biological sciences, but also from their formal consideration as complex systems, is that they aren’t trivial. There is no simple rule that could describe the flow of things in a particular period of time. The reason is precisely that they are creative. They build patterns, hence they build a further “phenomenal” level, where the various levels of integration can’t be reduced to one another. They exhibit points of bifurcation, which can be determined only in hindsight. Hence, from the empirical perspective we can only estimate the probability of stability. This, however, is clearly too weak to support the claim of “states”.
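The point about meta-stability and bifurcation can be illustrated numerically. The following sketch uses the logistic map, a standard toy model of complex dynamics (the model and parameter values are our illustrative choice, not part of the argument): for one parameter value the system appears to occupy a fixed “state”, while past the well-known bifurcation near r = 3 the very same rule yields a 2-cycle. The apparent “state” is a feature of the observed parameter regime, determinable only in hindsight, not a property of the system as such.

```python
# Logistic map x -> r*x*(1-x): apparently a stable "state" for r = 2.8,
# a 2-cycle for r = 3.2. Same rule, different regime; the "state" is an
# artifact of the modeling, not of the system.

def orbit(r, x=0.2, burn=500, keep=4):
    """Iterate the map, discard a long transient, return the rounded tail."""
    for _ in range(burn):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 6))
    return tail

print(len(set(orbit(2.8))))   # 1 -> looks like a single stable "state"
print(len(set(orbit(3.2))))   # 2 -> same rule, now a 2-cycle
```

Pushing r further produces period doublings and eventually chaos, which is precisely why a “state” can be assigned only retrospectively, relative to a model and an observation window.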

In philosophy, Deleuze and Guattari in their “Thousand Plateaus” (p.48) were among the first to recognize the important abstract contribution Darwin made by means of his theory. He opened the possibility of replacing types and species by populations, and degrees by differential relations. Darwin himself, however, was not able to complete this move. It took another 100 years until Manfred Eigen coined the term quasi-species, as an increased density in a probability distribution. Talking about mental states is nothing but a fallback into Linnaean times, when science was the endeavor of organizing lists according to an uncritical use of concepts.

Actually, from the perspective of language-oriented philosophy, the notion of a state is empty even for any dynamical system that is subject to open evolution (and probably even for trivial dynamic systems). A real system does not build “states”. There are only flows and memories. “State” is a concept, in particular an idealistic, or at least an idealizing, concept that is present only in the interpreting entity. The fact that one first has to apply a model before it is possible to assign states is deliberately suppressed whenever it is invoked by an argument that relates to philosophy or to any (other) kind of normativity. Therefore, the concept of “state” can’t be applied analytically, or as a condition in a linearly arranged argument. In saying this, we do not claim that the concept of state is meaningless at large. In natural science, especially throughout the process of hypothesis building, the notion of state can be helpful (sometimes, at least).

Yet, if one were to use it in philosophy in a recurrent manner, one would quickly arrive at the choreostemic space (or something very similar), where states are neither necessary nor even possible. Despite the fact that a “state” is only assigned, i.e. it is a concept, philosophers of mind41 and philosophers of political theory alike (such as Agamben [37], among other materialists) use it as a phenomenal reference. It is indeed somewhat astonishing to observe this relapse into naive realism within the community of otherwise trained philosophers. One of the reasons for this may well be found in the lack of training in mathematics.42

The third argument against the reasonability of the notion of “state” in philosophy can be derived from the choreostemic space. A cultural body comprises individual mentality as well as a collective mentality based on externalized symbolic systems like language, to make a long story short. Both together provide the possibility for meaning. It is absolutely impossible to assign a “state” to a cultural body without losing the subject of culture itself. It would be much like a grammatical mistake. That “subject” is nothing other than a figurable trace in the choreostemic space. If one were to make such an assignment instead, any finding would be relevant only within the reduced view. Hence, it would be completely irrelevant, as it could not support the self-imposed pragmatics. Continuing to argue about such a finding then establishes a petitio principii: one would find only what one originally assumed. The whole argument would be empty and irrelevant.

Similar arguments can be put forward regarding the notion of the exceptional, if it is applied in contexts that are governed by concepts and their interpretation, as opposed to trivial causal relationships. Yet Giorgio Agamben has indeed started to build a political theory around the notion of exception [37], which, strangely enough at first sight, has already triggered an aesthetics of emergency. Elena Bellina [38] cites Agamben:

The state of exception “is neither external nor internal to the juridical order, and the problem of defining it concerns a threshold, or a zone of indifference, where inside and outside do not exclude each other but rather blur with each other.” In this sense, the state of exception is both a structured or rule-governed and an anomic phenomenon: “The state of exception separates the norm from its application in order to make its application possible. It introduces a zone of anomie into the law in order to make the effective regulation of the real possible.”

Nothing but disastrous consequences result if the notion of the exception is applied to areas where normativity is relevant, e.g. in political theory. Throughout history there are many, many terrible examples of that. It is even problematic in engineering. We may even call it fully legitimized “negativity engineering”, as it quite unnecessarily establishes the opposition of the normal and the deviant as an apriori. The notion of the exception presumes total control as an apriori. As such, it is opposed to the notion of openness; hence it also denies the primacy of interpretation. Machines that degenerate, and that would produce disasters upon any malfunctioning, can’t be considered smartly built. In a setup that embraces indeterminateness, there is not even the possibility of a disastrous fault. Instead, deviances are defined only with respect to the expectable, not against an apriori set, hence obscure, normality. If deviance is taken as the usual (not the normal, though!), fault-tolerance and even self-healing can be built in as core properties, not as “exception handling”.

The exception is the negative category to the normal. It requires models to define normality, models to quantify the deviation, and finally also arbitrary thresholds to label it. All three steps can be applied only in linear domains, where the whole depends on just very few parameters. For social mega-systems such as societies, applying the concept of the exception is nothing but a methodological categorical illusion.
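The three steps just named can be made concrete in a small sketch. Everything here, the numbers, the mean/deviation model, and the threshold, is an invented illustration: (1) a model of normality, (2) a quantified deviation, (3) an arbitrary threshold that alone produces the label “exception”.

```python
# Toy sketch of the three steps behind labelling an "exception":
# (1) a model of normality, (2) a quantified deviation, (3) an arbitrary
# threshold. All numbers are invented for illustration.

from statistics import mean, stdev

observations = [9.8, 10.1, 10.0, 9.9, 10.2, 13.0]

mu, sigma = mean(observations), stdev(observations)        # (1) the model
scores = {x: abs(x - mu) / sigma for x in observations}    # (2) deviation

THRESHOLD = 1.5   # (3) why 1.5 and not 2.0? The data alone cannot say.
exceptions = [x for x, d in scores.items() if d > THRESHOLD]

print(exceptions)   # [13.0]
```

Note that the label depends entirely on the chosen model and threshold, not on the observations alone; exactly this dependence disappears from view when the “exception” is treated as a given.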

9. Critique of Paradoxically Conditioned Reason

Nothing could be more different from that than pragmatism, for which the choreostemic space can serve as the ultimate theory. Pragmatism has always suffered from—or at least has been vulnerable to—the reproach of relativism, because within pragmatism it is impossible to argue against it. With the choreostemic space we have constructed a self-sufficient, self-containing and necessary model that not only supports pragmatism, but also destroys any possibility of a universal normative position, or of normativity as such. Probably even more significant, it also abolishes relativism through the implied concept of the concrete choreostemic figure, which can be taken as the differential of institution or of tradition43. Choreostemic figures are quite stable, since they relate to mentality qua population, which means that they are formed as a population of mental acts, or as the mental acts of the members of a population. Even for individuals it is quite hard to change the attractor inhabited in choreostemic space, to move into another attractor, or even to build up a new one.

In this section we will examine the structure of the ways in which the choreostemic space can be used. Naively put, we could ask, for instance: How can we derive a guideline to improve actions? How can we use it to analyse a philosophical attitude or a political writing? Where are the limits of the choreostemic space?

The structure behind such questions concerns a choice on a quite fundamental level. The issue is whether to argue strictly in positive terms, to allow negative terms, or even to define everything starting from negative terms only. In fact, there are quite a few different possibilities for arranging any melange of positivity and negativity. For instance, one could ontologically insist first on contingency as a positivity, upon which constraints would then act as a negativity. We will not follow such traces here. We regard them either as not focused enough or, for most of them, as being infected by realist ontology.

In more practical terms, this issue of positivity and negativity concerns how to deal with justifications and conditions. Deleuze argues for strict positivity; in that he follows Spinoza and Nietzsche. Common sense, in contrast, is given only insofar as it is defined against the non-common. In this respect, any of the existential philosophical attitudes, whether Christian religion, phenomenology or existentialism, are quite similar to each other. Even Levinas’ Other is infected by it.

Admittedly, at first glance it seems quite difficult, if not impossible, to arrive at an appropriate valuation of other persons, the stranger, the strange, in short, the Other, but also the alienated. Likewise, how could we derive or develop a stance toward the world that does not start from existence? Isn’t existence the only thing we can be sure about? And isn’t the external, the experience, the only stable positivity we can think about? Here, we shout a loud No! Nevertheless, we definitely do not deny the external either.

We just mentioned that the issue of justification is invoked by our interests here. This gives rise to the question of the relation of the choreostemic space to epistemology. We will return to it in the second half of this section.

Positivity. Negativity.

Obviously, the problem of the positive is not the positive itself, but how we are going to approach it. If we set it as primary, we first run into problems of justification, then into ethical problems. Setting the external, the existent, or the factual positive as primary, we neglect the primacy of interpretation. Hence, we can’t think about the positive as an instance. We have to think of it as a Differential.

The Differential is defined as an entirety, yet it is not instantiated. Its factuality is potential; hence its formal being neither exhausts nor limits its factuality, or positivity. Its givenness demands action, that is, a decision (which is sayable regarding its immediacy) bundled with a performance (which is open and merely demonstrable as a matter of fact).

The concept of the choreosteme closely follows Deleuze’s idea of the Differential: it is built into the possibility of expressibility that spans as the space between the _Directions as they are indicated by the transcendental aspects _A. The choreostemic space does not constitute a positively definable stance, since the space for it is not made from elements that could be defined apriori to any moment in time. Nevertheless it is well-defined. As an example requiring a similar approach we may refer to the space of patterns that are potentially generated by Turing systems. The mechanics of Turing patterns, their mechanism, is likewise well-defined, it is given in its entirety, but the space of the patterns can’t be defined positively. Without deep interpretation there is nothing like a Turing pattern. Maybe that is one of the reasons the hard sciences still have difficulties in dealing adequately with complexity.
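The Turing-pattern example can be made concrete. Below is a minimal sketch of a 1-D Gray-Scott reaction-diffusion system; the choice of this particular mechanism and of all parameter values is our own illustrative assumption, not anything the argument depends on. The update rule (the mechanism) is fully and positively specified, yet the resulting concentration profile only becomes a “pattern” through an additional act of interpretation, e.g. deciding what counts as a spot.

```python
import random

def gray_scott_1d(n=100, steps=2000, f=0.04, k=0.06, du=0.16, dv=0.08):
    """1-D Gray-Scott reaction-diffusion on a ring.
    The mechanism (this update rule) is given in its entirety;
    the emerging pattern is not. Parameters are illustrative."""
    random.seed(0)
    u = [1.0] * n                      # substrate concentration
    v = [0.0] * n                      # activator concentration
    for i in range(n // 2 - 5, n // 2 + 5):   # small local perturbation
        u[i] = 0.5
        v[i] = 0.25 + 0.05 * random.random()
    for _ in range(steps):
        lap_u = [u[i - 1] + u[(i + 1) % n] - 2 * u[i] for i in range(n)]
        lap_v = [v[i - 1] + v[(i + 1) % n] - 2 * v[i] for i in range(n)]
        uvv = [u[i] * v[i] * v[i] for i in range(n)]   # reaction term
        u = [u[i] + du * lap_u[i] - uvv[i] + f * (1 - u[i]) for i in range(n)]
        v = [v[i] + dv * lap_v[i] + uvv[i] - (f + k) * v[i] for i in range(n)]
    return v

profile = gray_scott_1d()
# "Pattern" is already an interpretation: we must decide what a spot is.
spots = [i for i in range(100)
         if profile[i] > 0.2
         and profile[i] >= profile[i - 1]
         and profile[i] >= profile[(i + 1) % 100]]
print(len(spots), "spots under this particular reading")
```

The point is visible in the last lines: the mechanism delivers only a list of numbers; the detection of “spots” is a further, external model imposed upon it.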

Besides the formal description of the structure and mechanism of our space, there is nothing left about which one could speak or think any further. We can only proceed by practicing it. This mechanism establishes a paradoxicality insofar as it does not contain determinable locations. This indeterminateness is even much stronger than the principle of uncertainty known from quantum physics, which so far is not constructed in a self-referential manner (at least if we follow the received views). Without any determinate location, there seems to be no determinable figure either, at least none of which we could say that we could grasp it “directly”, or intuitively. Yet figures may indeed appear in the choreostemic space, though only by applying orthoregulative scaffolds, such as traditions, institutions, or communities that form cultural fields of propositions (“Aussagefeld”), as Foucault named it [40].

The choreostemic space is not a negativity, though. It does not impose apriori determinable factual limits on a real situation, whether internal or external. It does not even provide the possibility for an opposite. Due to its self-referentiality it can be instantiated into positivity OR negativity, dependent on the “vector”—actually, it is more a moving cloud of probabilities—that one currently belongs to or that one is currently establishing through one’s own performances.

It is the necessity of choice itself, appearing in the course of the instantiation of the twofold Differential, that introduces the positive and the negative. In turn, whenever we meet an opposite we can conclude that there has been a preceding choice within an instantiation. Think of de Saussure’s structuralist theory of language, which is full of opposites. Deleuze argues (DR205) that the starting point of opposites betrays language:

In other words, are we not on the lesser side of language rather than the side of the one who speaks and assigns meaning? Have we not already betrayed the nature of the play of language – in other words, the sense of that combinatory, of those imperatives or linguistic throws of the dice which, like Artaud’s cries, can be understood only by the one who speaks in the transcendent exercise of language? In short, the translation of difference into opposition seems to us to concern not a simple question of terminology or convention, but rather the essence of language and the linguistic Idea.

In more traditional terms one could say it depends on the “perspective”. Yet the concept of “perspective” is fallacious here, at least insofar as it assumes a determinable standpoint. By means of the choreostemic space, we may replace the notion of the perspective by the choreostemic figure, which reflects both the underlying dynamics and the problematic field much more adequately. In contrast to a “perspective”, a choreostemic figure spans across time. Another difference is that a perspective needs to be taken, which does not allow for continuity, while a choreostemic figure evolves continually. The possibility for negativity is determined along the instantiation from choreosteme to thought, while positivity is built into the choreostemic space as a potential. (Negative potentials are not possible.)

Thus, the choreostemic space is immune to any attempt—should we say poison pill?—to apply a dialectic of the negative, whether we consider single, double, or, absurdly enough, multiply repeated ones. Think of Hegel’s negativity, Marx’s rejection of it and proposal of a double negativity, or the dropback by Marcuse, all of which must be counted simply as stupidity. Negativity as the main structural element of thinking did not vanish, though, as we can see in the global movements of anti-capitalism and anti-globalization. They all got—or still get—victimized by the failure to leave behind the duality of concepts and to turn them into a frame of quantitability. A recent example of that ominous fault is given by the work of Giorgio Agamben; Morgan writes:

Given that suspending law only increases its violent activity, Agamben proposes that ‘deactivating’ law, rather than erasing it, is the only way to undermine its unleashed force. (p.60)

The first question, of course, is why on earth Agamben thinks that law, that is, any lawfulness, must be abolished. Such a claim includes the denial of any organization and any institution, above all as practical structures, as immaterial infrastructures and as grounding for any kind of negotiation. As Rölli noted in accordance with Nietzsche, there is quite an unholy alliance between romanticism and modernism. Agamben, completely incapable of becoming aware of the virtual and of the differential alike, and thus completely stuck in a luxuriating system of “anti” attitudes, finds himself faced with quite a difficulty. In his mono-(zero-)dimensional modernist conception of the world he claims:

“What is found after the law is not a more proper and original use value that precedes law, but a new use that is born only after it. And use, which has been contaminated by law, must also be freed from its value. This liberation is the task of study, or of play.”

Is it really reasonable to demand a world where uses, i.e. actions, are not “contaminated” by law? Morgan continues:

In proposing this playful relation Agamben makes the move that Benjamin avoids: explicitly describing what would remain after the violent destruction of normativity itself. ‘Play’ names the unknowable end of ‘divine violence’.

Obviously, Agamben never noticed any paradox concerning rule-following. Instead, he runs amok against his own prejudices. “Divine violence” is the violence of ignorance. Yet abolishing knowledge does not help either, nor is it an admirable goal in itself. Like Derrida (another master of negativity) before him, in the end he demands that interpretation stop, wholly and completely. Agamben provides us with nothing but just another modernist flavour of a philosophy of negativity that results in nihilistic in-humanism (quite contrary to Nietzsche, by the way). It is somewhat terrifying that Agamben currently receives no small amount of attention.

In the last statement we are going to cite from Morgan, we can see in what eminent way Agamben is a thinker of the early 19th century, incapable of contributing any reasonable suggestion to current political theory:

But it is not only the negative structure of the argument but also the kind of negativity that is continuous between Agamben’s analyses of aesthetic and legal judgement. In other words, ‘normality without a norm’, which paradoxically articulates the subtraction of normativity from the normal, is simply another way of saying ‘law without force or application’.

This Kantian formulation is not only fully packed with uncritical aprioris, such as normality or the normal, which marks Agamben as an epigonic utterer of common sense. As this ancient form of idealism demonstrates, Agamben obviously never heard of the linguistic turn either. The unfortunate issue with Agamben’s writing is that it is considered both influential and pace-making.

So, should we reject negativity and turn to positivity? Rejecting negativity becomes problematic only if it is taken as an attitude that stretches from the principle all the way down to the activity. Notably, the same is true for positivity. We need not get rid of them, which would only send us into the abyss of totalised mysticism. Instead, we have to transcend them into the Differential that “precedes” both. While the former could be reframed into the conditionability of processes (but not into constraints!), the latter finds its non-representational roots in the potential and the virtual. If the positive is taken as a totalizing metaphysics, we soon end up in overdone specialization, uncritical neo-liberalism or even dictatorship, or in idealism as an ideology. The turn to a metaphysics of (representational) positivity is incurably caught in the necessity of justification, which—unfortunately enough for positivists—can’t be grounded within a positive metaphysics. To justify, that is, to give “good reasons”, is a contradictio in adiecto if it is understood in its logical or idealistic form.

Both negativity and positivity (in their representational instances) can work only if there is a preceding and more or less concrete subject, which of course cannot be presupposed when we are talking about “first reasons” or “justification”. This does not only apply to political theory or practice; it even holds for logic as a positively given structure. Abstractly, we can rewrite the concreteness into countability. Turning the whole thing around, we see that as long as something is countable we will be confined by negativity and positivity on the representational level. Herein lies the limitation of the Universal Turing Machine. Herein also lies the inherent limitation of any materialism, whether in its profane or its theistic form. By means of the choreostemic space we can see various ways out of this confined space. We may, for instance, remove the countability from numbers by mediatizing it into probabilities. Alternatively, we may introduce a concept like infinity to indicate the conceptualness of numbers and countability. It is somewhat interesting that it is the concept of the infinite that challenges the empirical character of numbers. Or else, we could deny representationalism in numbers while trying to keep countability. This creates the strange category of infinitesimals. Or we create multi-dimensional number spaces like the imaginary numbers. There are, of course, many, many ways to transcend the countability of numbers, which we can’t even list here. Yet it is of utmost importance to understand that the infinite, like any other instance of departure from countability, is not a number any more. Nor is it countable in the way Cantor proposed, that is, by thinking of a smooth space of countability that stretches between empirical numbers and the infinite. We may count just the symbols, but the reference has inevitably changed. The empirics targets the number of the symbols, not their content, which has been defined as “incountability”. Only through this misunderstanding could one be struck by the illusion that there is something like the countability of the infinite. In some ways, even real numbers do not refer to the language game of countability, and all the more, irrational numbers don’t either. It is much more appropriate to conceive of them as potential numbers; it may well be that precisely this is the major reason for the success of mathematics.

The choreostemic space is the condition for separating the positive and the negative. It is structure and tool, principle and measure. Its topology implies the necessity for instantiation and renders the representationalist fallacy impossible; nevertheless, it allows us to map mental attitudes and cultural habits for comparative purposes. Yet this mapping can’t be used for modeling or anticipation. In some way it is the basis for subjectivity as a pre-specific property, that is, for a _Subjectivity, of course without objectivity. Therefore, the choreostemic space also allows us to overcome the naïve and unholy separation of subjects and objects, without denying the practical dimension of this separation. Of course, it does so by rejecting even the tiniest trace of idealism, or of apriorisms respectively.

The choreostemic space does not separate apriori the individual from the collective forms of mentality. In describing mentality it is not limited to the sayable, hence it can’t be attacked or even swallowed by positivism. Since it provides the means to map those habitual _Mental figures, people could talk about transitions between different attractors, which we could call “choreostemic galaxies”. The critical issue of values, those typical representatives of uncritical aprioris, is completely turned into a practical concern. Obviously, we can talk about “form” regarding politics without the need to invoke aesthetics. As Benjamin Morgan recently demonstrated (in the already cited [41]), aesthetics in politics necessarily refers to idealism.

Rejecting representational positivity, that is, any positivity that we could speak of in a formal manner, is equivalent to the rejection of first reason as an aprioric instance. As with representational positivity, the claim of a first reason as a point of departure that is never revisited again likewise results in a motionless endpoint, somewhere in the triangle built from materialism, idealism and realism. Attempts to soften this outcome by proposing a playful, or hypothetical, if not pragmatic, “fixation of first principles” are not convincing, mainly because this does not allow for any coherence between games, which results in a strong relativity of principles. We simply could not talk about the relationships between those “firstness games”. In other words, we would not gain anything. An example of such a move is provided by Epperson [42]. Though he refers to the Aristotelian potential, he sticks with representational first principles, in his case logic in the form of the principle of the excluded middle and the principle of non-contradiction. In doing this, Epperson does not become aware of the problems regarding the use of symbols. Wittgenstein once criticized the very same point in the Principia of Russell and Whitehead. Additionally, representational first principles are always transporters of ontological claims. Once we recognize that the world is NOT made from objects, but of relations organized, selected and projected by each individual through interpretation, we face severe difficulties with first principles. Only naive realism allows for their frictionless use. Yet for a price that is definitely too high.

We think that the way we dissolved the problem of first reason has several advantages compared to Deleuze’s proposal of the absolute plane of immanence. First, we do not need the notion of absoluteness, which appears at several instances in Deleuze’s main works “What is Philosophy?” [35] (WIP), “Empiricism and Subjectivity” [43], and his “Pure Immanence” [44]. The second problem with the plane of immanence concerns the relation between immanence and transcendence. Deleuze refers to two different kinds of transcendence. While in WIP he denounces transcendence as inappropriate due to its heading towards identity, the whole concept of transcendental empiricism is built on the Kantian invention. This double standard can’t be resolved. Transcendence should not be described by its target. Third, Deleuze’s distinction between the absolute plane of immanence and the “personal” one, instantiated by each new philosophical work, leaves a major problem: Deleuze leaves completely opaque how to relate the two kinds of immanence to each other. Additionally, there is a potentially infinite number of “immanences,” implying a classification, a differential and an abstract kind of immanence, all of which is highly corrosive for the idea of immanence itself—at least as long as one conceives of immanence not as an entity that could be naturalized. This way, Deleuze splits the problem of grounding into two parts: (1) a pure, hence “transcendent” immanence, and (2) the gap between absolute and personal immanence. While the first part could be accepted, the second one is left completely untouched by Deleuze. The problem of grounding has just been moved into a layer cake. Presumably, these problems are caused by the fact that Deleuze considers only concepts, or _Concepts, if we’d like to consider the transcendental version as well. These imply the plane of immanence, which can’t be described, which has no structure, and which is just implied by the factuality of concepts.
Our choreostemic space moves this indeterminacy and openness into a “form” aspect within a non-representational, non-expressive space with the topology of a double differential. More important, however, is that we not only have a topology at our disposal that allows us to speak about the space without imposing any limitation; we also use three other foundational and irreducible elements to think that space, the choreostemic space. The CS thus also brings immanence and transcendence into one and the same structure.

In this section we have discussed a change of perspective towards negativity and positivity. This change became accessible through the differential structure of the choreostemic space. The problematic field represented by them, and all the respective pseudo-solutions, has been dissolved. This abandonment we achieved through the “Lagrangean principle”, that is, we replaced the constants—positivity and negativity respectively—by a procedure—the instantiation of the Differential—plus a different constant. Yet this constant is itself not a finite replacement, i.e. not a “constant” in the sense of an invariance. The “constant” is only a relative one: the orthoregulation, comprising habits, traditions and institutions.

Reason—or, as we would like to propose for its less anthropological character and better scalability, mentality—has been reconstructed as a kind of omnipresent reflection on the conditionability of proceedings in the choreostemic space. The conditionability can’t be determined in advance of the performed mental proceedings (acts), which to many could appear somewhat paradoxical. Yet it is not. The situation is quite similar to Wittgenstein’s transcendental logic, which also gets instantiated just by doing something, while the possibility for performance precedes that of logic.

Finally, there is of course the question whether there is any condition that we impose onto the choreosteme itself, a condition that would not be resolved by its self-referentiality. Well, there is indeed one: the only unjustified apriori of the choreostemic space seems to be the primacy of interpretation (POI). This apriori, however, is only a weak one, and above all a practicable one, or one that derives from the openness of the world. Ultimately, the POI is in turn a direct consequence of the time-being. Any other aspect of interpretation is indeed absorbed by the choreostemic space and its self-referentiality, hence requiring no further external axioms or the like. In other words, the starting point of the choreostemic space, or of the philosophical attitude of the choreosteme, is openness, the insight that the world is far too generative to comprehend all of it.

The fact that it is almost without any apriori renders the choreostemic space suitable for those practical purposes where openness and its sibling, ignorance, call for dedicated activity, e.g. in all questions of cross-disciplinarity or trans-culturality. Insofar as different persons establish different forms of life, the choreostemic space is even highly relevant for any aspect of cross-personality. This in turn gives rise to a completely new approach to ethics, which we can’t pursue here, though.

Mentality without Knowledge

Two of the transcendental aspects of the choreostemic space are _Model and _Concept. The concepts of model and concept, that is, the instantiations of these aspects, are key terms in the philosophy of science and epistemology. Further, we proposed that our approach brings with it a new Image of Thought. We also said that mental activities inscribe figures, or attractors, into that space. Since we are additionally interested in the issue of justification—we are trying to get rid of justifications—the question of the relation between the choreostemic space and epistemology is triggered.

The traditional primary topic of epistemology is knowledge: how we acquire it, and particularly the questions of, first, how to separate it from beliefs (in the common sense) and, second, how to secure it in a way that we possibly could speak about truth. On a general account, epistemology is also about the conditions of knowledge.

Our position is pretty clear: the choreostemic space is something categorically different from episteme or epistemology. What are the reasons?

We reject the view that truth in its usual version is a reasonable category for talking about reasoning. Truth as a property of a proposition can’t be a part of the world. We can’t know anything for sure, neither regarding the local context nor globally. Truth is an element of logic, and the only truth we can know of is empty: a=a. Yet knowledge is supposed to be about empirical facts (arrangements of relations). Wittgenstein thus set logic as transcendental. Only transcendental logic can be free of semantics, and thus only within transcendental logic can we speak of truth conditions. The consequence is that we can observe either of two effects. First, any actual logic contains some semantic references, because of which it could be regarded as “logic” only approximately. Second, insisting instead on the application of logical truth values to actual contexts results in a categorical fault. The conclusion is that knowledge can be secured neither locally, from a small given set of sentences about empirical facts, nor globally. We can’t even measure the reliability of knowledge, since this would mean having more knowledge about the fact than the local observations provide. As a result, paradoxes and antinomies occur. The only thing we can do is try to build networks of stable models for a negotiable anticipation with negotiable purposes. In other words, facts are not given by relations between objects, but rather as a system of relations between models, which as a whole is both accepted by a community of co-modelers and provides satisfying anticipatory power. Compared to that, the notion of partial truth (Newton da Costa & Steven French) is still misconceived. It keeps sticking to the wrong basic idea and as such is inferior to our concept of the abstract model. After all, any account of truth violates the fact that it is itself a language game.

Dropping the idea of truth we could already conclude that the choreostemic space is not about epistemology.

Well, one might say: OK, then it is an improved epistemology. Yet this we would reject as well. The reason is a grammatical one. Knowledge in the sense of epistemology is either about sayable or about demonstrable facts. If someone says “I know”, or if someone ascribes to another person “he knows”, or if a person performs well and in hindsight her performance is qualified as “based on intricate knowledge” or the like, we postulate an object or entity called knowledge, almost in an ontological fashion. This perspective has been rejected by Isabelle Peschard [45]. According to her, knowledge can’t be separated from activity, or “enaction”, and knowledge must be conceived as a socially embedded practice, not as a stateful outcome. For her, knowledge is not about representation at all. This includes the rejection of truth conditions as a reasonable part of a concept of knowledge. Moreover, it will be impossible to give a complete or analytical description of this enaction, because it is impossible to describe (=to explicate) the Form of Life in a self-contained manner.

In any case, however, knowledge is always, at least partially, about how to do something, even if it concerns highly abstract issues. That means that a partial description of knowledge is possible. Yet, as a second grammatical reason, the choreostemic space does not allow for any representations at all, due to its structure, which is strictly local and made up from the second-order differential.

There are further differences. The CS is a tool for the expression of mental attractors, to which we can assign distinct yet open forms. To do so we need the concepts of mediality and virtuality, which are not mentioned anywhere in epistemology. Mental attractors, or figures, will always “comprise” beliefs, models, ideas and concepts as instances of transcendental entities, and these instances are local instances, which are even individually constrained. It is not possible to explicate these attractors other than by “living” them.

In some way, the choreostemic space is intimately related to the philosophy of C.S. Peirce, which is called “semiotics”. Like him, we propose a primacy of interpretation. We fully embrace his emphasis that signs only refer to signs. We agree with his attempt at discerning different kinds of signs. And we think that his firstness, secondness and thirdness could be related to the mechanisms of the choreostemic space. In some way, the CS could be conceived as a generalization of semiotics. Saying this, we may also point to the fact that Peirce’s philosophy is not regarded as epistemology either.

Rejecting the characterization of the choreostemic space as an epistemological subject, we can now even better understand the contours of the notion of mentality. The “mental” can’t be considered a set of things like beliefs, wishes, experiences, expectations, thought experiments, etc. These are just practices, or likewise practices of speaking, concerning the relation between private and public aspects of thinking. All of these items belong to the same mentality, to the same choreostemic figures.

In contrast to Wittgenstein, however, we propose to discard completely the distinction between internal and external aspects of the mental.

“And nothing is more wrong-headed than calling meaning a mental activity! Unless, that is, one is setting out to produce confusion.” [PI §693]

One of the transcendental aspects in the CS is concept, another is model. Together they provide the aspects of use, idea and reference; that is, there is nothing internal and external any more. It simply depends on the purpose of the description, or on the kind of report we want to create about the mental, whether we talk about the mental in an internalist or an externalist way, whether we talk about acts, concepts, signs, or models. Whatever we do as humans, it will always be predominantly a mental act, irrespective of the accompanying material reconfigurations.

10. Conclusion

It is probably not an exaggeration to say that in the last two decades the diversity of mentality has been discovered. A whole range of developments and shifts in public life may have contributed to that, concerning several domains, namely politics, technology, social life, behavioural science and, last but not least, brain research. We saw the end of the Cold War, which signalled an unrooting of functionalism far beyond the domain of politics, and simultaneously the growth and discovery of the WWW and its accompanying “scopic44 media” [46, 47]. The “scopics” spurred the so-called globalization, which, at least so far, has worked much more in favour of the recognition of diversity than it has levelled that diversity. While we are still in the midst of the popularization and increasingly abundant usage of so-called machine learning, we already witness an intensified mutual penetration and amalgamation of technological and social issues. In the behavioural sciences, probably also supported by the deepening of mediatization, an unforeseen interest in the mental and social capabilities of animals has manifested, pushing back the merely positivist and dissecting description of behavior. One of the most salient examples is the confirmation of cultural traditions in dolphins and orcas, concerning communication as well as highly complex collaborative hunting. The unfolding of collaboration requires the mutual and temporal assignment of functional roles for a given task. This presupposes not only a true understanding of causality, but even its reflected use as a game in probabilistic spaces.

Let us distil three modes or forms here: (i) animal culture, (ii) machine-becoming and, of course, (iii) human life forms in the age of intensified mediatization. All three modes must be considered “novel” ones, for one reason or another. We won’t go into any further detail here, yet it is pretty clear that the triad of these three modes renders any monolithic or anthropologically imprinted form of philosophy of mind impossible. In turn, any philosophy of mind that is limited to just the human brain’s relation to the world, or even worse, which imposes analytical, logical or functional perspectives onto it, must be considered seriously defective. This still applies to large parts of the mainstream in philosophy of mind (and even ethics).

In this essay we argued for a new Image of Thought that is independent from the experience of or by a particular form of life, form of informational45 organization or cultural setting, respectively. This new Image of Thought is represented through the choreostemic space. This space is dynamic and active and can be described formally only if it is “frozen” into an analytical reduction. Yet, its self-referentiality and self-directed generativity is a major ingredient. This self-referentiality takes a salient role in the space’s capability to leave its conditions behind.

One of the main points of the choreostemic space (CS) probably is that we cannot talk about “thought”—regardless of its quasi-material and informational foundations—without referring to the choreostemic space. It is a (very) strong argument against Rylean concepts of the mind that claim the irrelevance of the concept of the mental by proposing that looking at behavior is sufficient to talk about the “mind”. Of course, the CS does not support “the dogma of the ghost in the machine“ either. The choreostemic space defies (and helps to defy) any empirical and thus also anthropological myopias through its triple feature of transcendental framing, differential operation and immanent rooting. Thus it is immune against naturalist fallacies such as Cartesian dualism, as well as against arbitrariness or relativism. Nor could it be infected by any kind of preoccupation such as idealism or universalism. Although one could regard it in some way as “pure Thought”, or consider it as the expressive situs of it, its purity is not an idealistic one. It dissolves either into the metaphysical transcendentality of the four conceptual aspects, that is, the _Model, _Mediality, _Concept, and _Virtuality, or it takes the form of the Differential, which could be considered a kind of practical transcendentality46 [48]. There, as one of her starting points, Bühlmann writes:

Deleuze’s fundamental critique in Difference and Repetition is that throughout the history of philosophy, these conditions have always been considered as »already confined« in one way or another: Either within »a formless, entirely undifferentiated underground« or »abyss« even, or within the »highly personalized form« of an »autocratically individuated Being«

Our choreostemic space also provides the answer to the problematics of conditions.47 Like Deleuze, we suggest regarding conditions only as secondary, that is, as relevant entities only after any actualization. This avoids negativity as a metaphysical principle. Yet, in order to get completely rid of any condition while at the same time retaining conditionability as a transcendental entity, we have to resort to self-referentiality as a generic principle. Hence, our proposal goes beyond Deleuze’s framework as he developed it from “Difference and Repetition” to “What is Philosophy?”, since he never made this move.

Basically, the CS supports Wittgenstein’s rejection of materialism, which has experienced a completely unjustified revival in the various shades of neuro-isms. Malcolm cites him [49]:

It makes as little sense to ascribe experiences, wishes, thoughts, beliefs, to a brain as to a mushroom. (p.186)

This support should not surprise, since the CS was deliberately constructed to be compatible with the concept of the language game. Although the CS also supports his famous remark about meaning:

And nothing is more wrong-headed than calling meaning a mental activity! Unless, that is, one is setting out to produce confusion. [PI §693]

it is also clear that the CS may be taken as a means to overcome the debate about external or internal primacies or foundations of meaning. The duality of internal vs. external is neutralized in the CS. While modeling, and thus the abstract model, always requires some kind of material body, hence representing the route into some interiority, the CS is also spanned by the Concept and by Mediality. Both concepts are explicit ties between any kind of interiority and any kind of exteriority, without preferring a direction at all. The proposal that any mental activity inscribes attractors into that space just means that interiority and exteriority can’t be separated at all, regardless of the actual conceptualisation of mind or mentality. Yet, in accordance with PI 693 we also admit that the choreostemic space is not equal to the mental. Any particular mentality unfolds as an actual performance in the CS. Of course, the CS does not describe material reconfigurations, environmental contingency etc., nor the performance taking place “there”. In other words, it does not cover any aspect of use. On the other hand, material reconfigurations are simply not “there” as long as they are not interpreted by applying some kind of model.

The CS clearly shows that we should regard questions like “Where is the mind?” as a kind of grammatical mistake, as Blair lucidly demonstrates [50]. Such a usage of the word “mind” not only implies irrevocably that it is a localizable entity; it also claims its conceptual separateness. Such a conceptualization of the mind is illusory. The consequences for any attempt to render “machines” “more intelligent” are obviously quite dramatic. As for the brain, it is likewise impossible to “localize” mental capacities in the case of epistemic machines. This fundamental de-territorialization is not a consequence of scale, as in quantum physics. It is a consequence of the verticality of the differential, the related necessity of forms of construction, and the fact that a non-formal, open language, implying randolations to the community, is mandatory to deal with concepts.

One important question about a story like the “choreostemic space”, with its divergent but nevertheless intimately tied four-fold transcendentality, concerns the status of that space. What “is” it? How could it affect actual thought? Since we started with mathematical concepts like space, mappings, topology, or the differential, and since our arguments frequently invoke the concept of mechanism, one could suspect that it is a piece of analytical philosophy. This ascription we can clearly reject.

Peter Hacker convincingly argues that “analytical philosophy” can’t be specified by a set of properties of such an assumed philosophy. He proposes to consider it as a historical phase of philosophy, with several episodes, beginning around 1890 [53]. Nevertheless, during the 1970s a set of beliefs formed a kind of basic setup. Hacker writes:

But there was broad consensus on three points. First, no advance in philosophical understanding can be expected without the propaedeutic of investigating the use of the words relevant to the problem at hand. Second, metaphysics, understood as the philosophical investigation into the objective, language-independent, nature of the world, is an illusion. Third, philosophy, contrary to what Russell had thought, is not continuous with, but altogether distinct from science. Its task, contrary to what the Vienna Circle averred, is not the clarification or ‘improvement’ of the language of science.

Where we definitely disagree is on the point about metaphysics. Not only do we reject the view that metaphysics is about the objective, language-independent nature of the world—metaphysics so understood we would indeed reject. An example of this kind of thinking is provided by the writing of Whitehead. It should have become clear throughout our writing that we stick to the primacy of interpretation, and accordingly we regard the belief in an objective reality as deeply misconceived. Thereby we neither claim that our mental life is independent from the environment—as radical constructivism (Varela & Co) does—nor do we claim that there is no external world around us that is independent from our perception and constructions. The latter would just be the belief in metaphysical independence, which plays an important role in modernism. The idea of objective reality is also infected by this belief, resulting in a self-contradiction. For “objective” makes sense only as an index to some kind of sociality, and hence to a group sharing a language, and further to the use of language. The claim of “objective reality” is thus childish.

More importantly, however, we have seen that the self-referentiality of terms like concept (we called those “strongly singular terms“) forces us to acknowledge that Concept, much like logic, is a transcendental category. Obviously, we refer strongly to transcendental, that is, metaphysical categories. At the same time we also propose, however, that there are manifolds of instances of those transcendental categories.

The choreostemic space describes a mechanism. In that it resembles the science of biology, where the concept of mechanism is an important epistemological tool. As such, we try to defend against mysticism, against the threat that is posed by any all too quick reference to the “Lebenswelt”, the form of life and the ways of living. But is it really an “analysis”?

Putnam called “analysis” an “inexplicable noise” [54]. His critique was precisely that semantics can’t be found by any kind of formalization, that is, outside of the use of language. In this sense we certainly are not doing analytic philosophy. As a final point we again want to emphasize that it is not possible to describe the choreostemic space completely, that is, all its conditions and effects, etc., due to its self-referentiality. It is a generative space that confirms its structure by itself. Nevertheless it is neither useless nor does it support solipsism. It can be used to describe the entirety of mental activity, yet only as a fully conscious act, and this description is a thoroughly non-representational one. In this way it overcomes not only the Cartesian dualism about consciousness; in fact, it is another way to criticise the distinction between interiority and exteriority.

For one part we agree with Wittgenstein’s critique (see also the work of P.M.S. Hacker on that), which identifies the “mystery” of consciousness as an illusion. The concept of the language game, which is for one part certainly an empirical concept, is substantial for the choreostemic space. Yet, the CS provides several routes between the private and the communal, without actually representing one or the other. The CS does not distinguish between the interior and the exterior at all; just recall that mediality is one of the transcendental aspects. Along with Wittgenstein’s “solipsistic realism” we consequently also reject the idea that ontology can be about the external world, as this again would introduce such a separation. Quite the contrary, the CS dissolves the need for the naive conception of ontology. Ontology makes sense only within the choreostemic space.

Yet, we certainly embrace the idea that mental processes are ultimately “based” on physical matter, but unfolded into and by their immaterial external surrounds, yielding an inextricable compound. Referring to any “neuro” stuff regarding the mental neither “explains” anything nor is it helpful in any regard, whether one considers it as neuro-science or as neuro-phenomenology.

Summarizing the issue, we may say that the choreostemic space opens a completely new level for any philosophy of the mental, not just of what is being called the human “mind”. It also allows us to address scientific questions about the mental in a different way, and it clarifies the route to machines that could draw their own traces and figures into that space. It makes irrevocably clear that any kind of functionalism or materialism is once and for all falsified.

Let us now finally inspect the initial question that we put forward in the editorial essay. Is there a limit to the mental capacity of machines? If yes, which kind of limit, and where could we draw it? The question about the limit of machines directly triggers the question about the image of humanity („Bild des Menschen“), which is fuelled from the opposite direction. So, does this imply a kind of demarcation line between the domain of the machines and the realm of the human? Definitely not, of course. To opt for such a separation would not only follow the idealist-romanticist line of criticizing technology, but also instantiate a primary negativity.

Based on the choreostemic space, our proposal is a fundamentally different one. It can be argued that this space contains any condition of any thought as a population of unfolding thoughts. These unfoldings inscribe different successions into the space, appearing as attractors and figures. The key point is that different figures, representing different Lebensformen (forms of life) that are probably even incommensurable with each other, can be related to each other without reducing any of them. The choreostemic space is a space of mental co-habitation.

Let us for instance start with the functionalist perspective that has been so abundant in modernism since the times of Descartes. A purely functionalist stance is just a particular figure in that space, as is any other style of thinking. Using the dictum of the choreosteme as a guideline, it is relatively easy to widen the perspective into a more appropriate one. Several developmental paths into a different choreostemic attractor are possible: for instance, mediatization through social embedding [52], opening through autonomous associative mechanisms as we have described it, or the ad hoc recombination of conceptual principles as it has been demonstrated by Douglas Hofstadter. Letting a robot range freely around also provokes the first tiny steps away from functionalism, albeit the behavioral Bauplan of the insects (arthropoda) demonstrates that this alone does not establish a necessity for an evolutionary path towards advanced mental capabilities.

The choreostemic space can serve as such a guideline because it is not infected by anthropology in any regard. Nevertheless it allows us to speak clearly about concepts like belief and knowledge, of course without reducing these concepts to positive-definite or functionalist definitions. It also remains completely compatible with Wittgenstein’s concept of the language game. For instance, we reconstructed the language game “knowing” as a label for a pointer (say, a reference) to a particular image of thought and its use. Of course, this figure should not be conceived as a fixed-point attractor, as the various shades of materialism, idealism and functionalism actually would do (if they were to argue along the choreosteme). It is somewhat interesting that here, by means of the choreostemic space, Wittgenstein and Deleuze approach each other quite closely, something they themselves probably would not have supported.

Where is the limit of machines, then?

I guess, any answer must refer to the capability to leave a well-formed trace in the choreostemic space. As such, the limits of machines are to be found in the same way as they are found for us humans: To feel and to act as an entity that is able to contribute to culture and to assimilate it in its mental activity.

We started the choreostemic space as a framework to talk about thinking, or more generally about mentality, in a non-anthropological and non-reductionist manner. In the course of our investigation, we found a tool that actualizes itself into real social and cognitive situations. We also found the infinite space of choreostemic galaxies as attractors for eternal returns without repetition of the identical. The choreosteme keeps the “any” alive without subjugating individuality; it provides a new and extended level of sayability without falling into representationalism. Taken together, as a new Image of Thought it allows us to develop thinking deliberately and as part of a multitudinous variety.

Notes

1. This piece is thought of as a close relative to Deleuze’s Difference & Repetition (D&R)[1]. Think of it as a satellite of it, whose point of nearest approach is at the end of part IV of D&R, and thus also as a kind of extension of D&R.

2. Deleuze, of course, belongs to them, but of course also Ludwig Wittgenstein (see §201 of PI [2], the “paradox” of rule following), and Wilhelm Vossenkuhl [3], who presented three mutually paradoxical maxims as a new kind of theory of morality (ethics) that resists the reference to monolithically set first principles, as found for instance in John Rawls’ “Theory of Justice”. The work of those philosophers also provides examples of how to turn paradoxicality productive, without creating paradoxes at all, the main trick being to overcome their fixation by a process. Many others, including Derrida, just recognize paradoxes, but are neither able to conceive of paradoxicality nor to distinguish it from paradoxes; hence they take paradoxes just as unfortunate ontological knots. In such works, one can usually find one or the other way to prohibit interpretation (think about the trail, Germ. “Spur”, in Derrida).

3. Paradoxes and antinomies like those described by Taylor, Banach-Tarski, Russell or of course Zeno are all defective, i.e. pseudo-paradoxes, because they violate their own “gaming pragmatics”. They are not paradoxical at all, but rather either simply false or arbitrarily fixed within the state of such violation. The same fault is committed by the Sorites paradox and its relatives. They all mix up—or collide—the language game of countability or counting with the language game of denoting non-countability, as represented by the infinite or the infinitesimal. Instead of saying that they violate the a priori self-declared “gaming pragmatics”, we could also say that they change the most basic reference system on the fly, without any indication of doing so. This may happen through an inadequate use of the concept of infiniteness.

4. DR 242 eternal return: it is not the same and the identical that returns, but the virtual structuredness (not even a “principle”), without which metamorphosis can’t be conceived.

5. In „Difference and Repetition“, Deleuze chose to spell “Idea” with a capital letter, in order to distinguish his concept from the ordinary word.

7. Here we find interesting possibilities for a transition to Alan Turing‘s formal foundation of creativity [5].

8. This includes the usage of concepts like virtuality, differential, and problematic field, the rejection of the primacy of identity and, closely related to that, the rejection of negativity, the rejection of the notion of representation, etc. Rejecting the negative opens an interesting parallel to Wittgenstein’s insisting on the transcendentality of logics and the subordination of any practical logic to performance. Since the negative is a purely symbolic entity, it is also purely a posteriori to any genesis, that is, self-referential performance.

9. I would like to recommend to take a look to the second part of part IV in D&R, and maybe, also to the concluding chapter therein (download it here).

10. Saying „we“ here is not just due to some hyperbolic politeness. The targeted concept of this essay, the choreosteme, has been developed by Vera Bühlmann and the author of this essay (Klaus Wassermann) in close collaboration over a number of years. Finally, the idea proved to be so strong that now there is some dissent about the role and the usage of the concept.

11. For belief revision as described by others, see the overview @ Stanford, and a critique by Pollock, who clarified that belief revision as comprised and founded by the AGM theory (see below) is incompatible with standard epistemology.

12. By symbolism we mean the belief that symbols are the primary and a priori existent entities for any description of any problematic field. In machine-based epistemology, for instance, we cannot start with data organized in tables because this presupposes a completed process of “ensymbolization”. Yet, in the external world there are no symbols, because symbols only exist subsequent to interpretation. We can see that symbolism creates a chicken-and-egg problem.

13. Miriam Meckel, a communication researcher at the University of Zürich, is quite active in drawing dark-grey pictures. Recently, she coined “Googlem” as a blend of Google and Golem. Meckel commits several faults in that: she does not understand the technology (accusing Google of using averages), and she forgets about the people (programmers) behind “the computer”, as well as the people using the software. She follows exactly the pseudo-romantic separation between nature and the artificial.

Miriam Meckel, Next. Erinnerungen an eine Zukunft ohne uns,  Rowohlt 2011.

14. Here we find a resemblance to Wittgenstein’s refusal to attribute to philosophy the role of an enabler of understanding. According to Wittgenstein, philosophy does not and cannot even describe; it can only show.

15. This also concerns the issue of cross-culturality.

16. Due to some kind of cultural imprinting, a frequently and solitarily exercised habit, people almost exclusively think of Cartesian spaces as soon as a “space” is needed. Yet, there is no necessary implication from the need for a space to the Cartesian type of space. Even Deleuze did not recognize the difficulties implied by the reference to the Cartesian space, not only in D&R, but throughout his work. Nevertheless, there are indeed passages (in “What is Philosophy?”, with the “planes of immanence”, or in “The Fold”) where it seems that he could have sensed a different conception of space.

17. For the role of „elements“ please see the article about „Elementarization“.

18. Vera Bühlmann [8]: „Insbesondere wird eine Neu-Bestimmung des aristotelischen Verhältnisses von Virtualität und Aktualität entwickelt, unter dem Gesichtspunkt, dass im Konzept des Virtuellen – in aller Kürze formuliert – das Problem struktureller Unendlichkeit auf das Problem der zeichentheoretischen Referenz trifft.“

19. which is also a leading topic of our collection of essays here.

20. e.g. Gerhard Gamm, Sybille Krämer, Friedrich Kittler

21. cf. G.C. Tholen [7], V.Bühlmann [8].

22. see the chapter about machinic platonism.

23. Actually, Augustine instrumentalises the discovered difficulty to propose the impossibility to understand God’s creation.

24. It is an „ancestry“ only with respect to the course in time, as the result of a process, not however in terms of structure, morphology etc.

25. cf. C.S. Peirce [16], Umberto Eco [17], Helmut Pape [18];

26. Note that in terms of abstract evolutionary theory, rugged fitness landscapes enforce specialisation, but also bring along an increased risk of extinction for the whole species. Flat fitness landscapes, on the other hand, allow for great diversity. Of course, the fitness landscape is not a stable parameter space, neither locally nor globally. In some sense, it is not even a determinable space. Much like the choreostemic space, it would be adequate to conceive of the fitness landscape as a space built from the 2-set of transformatory power and the power to remain stable. Both can be determined only in hindsight. This paradoxicality is not by chance, yet it has not been discovered as an issue in evolutionary theory.
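The contrast between rugged and flat landscapes can be made concrete. As an illustration (our choice, not referenced in the note above) one may use Kauffman’s NK model, in which the parameter K—the number of other loci each locus is coupled to—tunes ruggedness; counting local optima then shows how coupling fragments the landscape into many isolated peaks, each enforcing its own “specialisation”:

```python
# Sketch of Kauffman's NK model: fitness of a binary genome is the mean of
# per-locus contributions, each depending on the locus itself plus K
# neighbours. K = 0 yields a smooth landscape with a single peak;
# large K yields a rugged landscape with many local optima.
import itertools, random

def nk_fitness(N, K, seed=0):
    rng = random.Random(seed)
    tables = [{} for _ in range(N)]  # lazily filled random contribution tables
    def fitness(genome):
        total = 0.0
        for i in range(N):
            # contribution of locus i depends on loci i, i+1, ..., i+K (cyclic)
            key = tuple(genome[(i + j) % N] for j in range(K + 1))
            if key not in tables[i]:
                tables[i][key] = rng.random()
            total += tables[i][key]
        return total / N
    return fitness

def count_local_optima(N, K):
    f = nk_fitness(N, K)
    count = 0
    for genome in itertools.product((0, 1), repeat=N):
        fg = f(genome)
        # a genome is a local optimum if no single-bit flip improves it
        if all(f(genome[:i] + (1 - genome[i],) + genome[i + 1:]) <= fg
               for i in range(N)):
            count += 1
    return count

print(count_local_optima(8, 0))  # smooth: exactly one optimum
print(count_local_optima(8, 7))  # rugged: many local optima
```

For K = 0 every locus can be optimized independently, so there is a single peak; for K = N−1 the contributions are effectively random per genome, and the walk uphill gets trapped in one of many peaks.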

27. Of course I know that there are important differences between verbs and substantives, which we may level out in our context without losing too much.

28. In many societies, believing has been thought to be tied to religion, the rituals around the belief in God(s). Since the Renaissance, with upcoming scientism and the profanisation of societies, religion and science established a sort of replacement competition. Michel Serres described how scientists took over the positions and the funds previously held by the clerics. The impression of a competition is well understandable, of course, if we consider the “opposite direction” of the respective vectors in the choreostemic space. Yet, it is also quite mistaken, maybe itself provoked by excessive idealisation, since neither can the cleric make his day without models, nor the scientist his without beliefs.

29. The concept of “theory” referred to here is oriented towards a conceptualisation based on language game and orthoregulation. Theories need to be conceived as orthoregulative milieus of models in order to be able to distinguish between models and theories, something which can’t be accomplished by analytic concepts. See the essay about theory of theory.

30. Of course, we do not claim thereby to cover completely the relation between experiments, experience, and observation on the one side and their theoretical account on the other. We just would like to emphasize the inextricable dynamic relation between modeling and concepts in scientific activities, whether in professional or “everyday” science. For instance, much could be said in this regard about the path of decoherence from information and causality. Both aspects, the decoherence and the flip from intensified modeling over to a conceptual form, have not been conceptualized before. The reason is simple enough: there was no appropriate theory about concepts.

When, for instance, Radder [28] contends that the essential step from experiment to theory is to disconnect theoretical concepts from the particular experimental processes in which they have been realized [p.157], then he not only misconceives the status and role of theories, he also does not realize that experiments are essentially material actualisations of models. Abstracting regularities from observations into models and shaping the milieu for such a model in order to find similar ones, thereby achieving generalization, is anything but disconnecting them. It seems that he overshot a bit in his critique of scientific constructivism. Additionally, his perspective does not provide any possibility to speak about the relation between concepts and models. Though Radder obviously had a feeling of the strong change on the way from putting observations into scene towards concepts, he fails to provide a fruitful picture of it. He can’t surpass that feeling towards insight, as he muses about “… ‘unintended consequences’ that might arise from the potential use of theoretical concepts in novel situations.” Such descriptions are close to scientific mysticism.

Radder’s account is a quite recent one, but others are not really helpful about the relation between experiment, model and concept either. Kuhn’s praised concept of paradigmatic changes [24] can be rated at most as a phenomenological or historizing description. Sure, his approach brought a fresh perspective in times of overdone reductionism, but he never provided any kind of abstract mechanism. Other philosophers of science stuck to concepts like prediction (cf. Reichenbach [20], Salmon [21]) and causality (cf. Bunge [22], Pearl [23]), which of course can’t say anything about the relation to the category of concepts. Finally, Nancy Cartwright [25], Isabelle Stengers [26], Bruno Latour [9] or Karin Knorr Cetina [10] are representatives for the various shades of constructivism, whether individually shaped or as a phenomenon embedded into a community, which also can’t say anything about concepts as categories. A screen through the Journal of Applied Measurement did not reveal any significantly different items.

Thus, so far philosophy of science, sociology and history of science have been unable to understand the particular dynamics between models and concepts as abstract categories, i.e. as _Models or _Concepts.

31. If the members of a community, or even the participants in random interactions within it, agree on the persistence of their relations, then they will tend to exhibit a stronger propensity towards collaboration. Robert Axelrod demonstrated that on the formal level by means of a computer experiment [33]. He was the first to propose game theory as a means to explain the choice of strategies between interactees.
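Axelrod’s experiment can be sketched in a few lines. The following is a minimal reconstruction of the idea, not his original tournament code: in an iterated Prisoner’s Dilemma, the expectation of persistent relations (many rounds with the same partner) is what lets a reciprocating strategy such as tit-for-tat sustain collaboration, while mutual defection yields far less:

```python
# Iterated Prisoner's Dilemma with the standard payoff matrix:
# mutual cooperation 3/3, mutual defection 1/1, defect-against-cooperate 5/0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_hist, their_hist):
    return "D"

def tit_for_tat(my_hist, their_hist):
    # cooperate first, then mirror the partner's previous move
    return their_hist[-1] if their_hist else "C"

def play(s1, s2, rounds):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

print(play(tit_for_tat, tit_for_tat, 200))      # (600, 600)
print(play(always_defect, always_defect, 200))  # (200, 200)
```

With 200 persistent rounds, two reciprocators each collect 600 points against 200 for two defectors; the propensity towards collaboration thus pays off precisely because the relation is expected to persist.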

32. Orig.: „Seit über 200 Jahren ist die Philosophie anthropologisch bestimmt. Was das genauer bedeutet, hat sie dagegen kaum erforscht.“

33. Orig.: „Nietzsches Idealismuskritik, die in vielen Schattierungen vorliegt und immer auf das philosophische Selbstmissverständnis eines reinen Geistes und reiner Begriffe zielt, richtet sich auch gegen ein bestimmtes Naturverständnis.“ (KAV439)

34. More precisely, in evolutionary processes the capability for generalization is selected under conditions of scarcity. Scarcity, however, is inevitably induced under the condition of growth or consumption. It is important to understand that newly emerging levels of generalization do not replace former levels of integration. Those undergo a transformation with regard to their relations and their functional embedding, i.e. with regard to their factuality. In morphology of biological specimens this is well-known as “Überformung”. For more details about evolution and generalization please see this.

35. The notions of “philosophy of nature” or even “natural philosophy” are strictly inappropriate. Both “kinds” of philosophy are not possible at all. They have to be regarded as a strange mixture of contemporarily available concepts from science (physics, chemistry, biology), mysticism or theism and the mistaken attempt to transfer topics as such from there to philosophy. Usually, the result is simply a naturalist fallacy with serious gaps regarding the technique of reflection. Think about Kant’s physicalistic tendencies throughout his philosophy, the unholy adaptation of Darwinian theory, analytic philosophy, which is deeply influenced by cybernetics, or the comeback of determinism and functionalism due to almost ridiculous misunderstandings of the brain.

Nowadays it must be clear that philosophy before the reflection of the role of language, or more generally, before the reflection of languagability—which includes processes of symbolization and naming—can’t be regarded as serious philosophy. Results from the sciences can be imported into philosophy only as formalized structural constraints. Evolutionary theory, for instance, first has to be formalized appropriately (as we did here), before it can be of any relevance to philosophy. Yet, what is philosophy? Besides Deleuze’s answer [35], we may conceive of philosophy as a technique of asking about the conditionability of the possibility to reflect. Hence, Wittgenstein said that philosophy should be regarded as a cure. Thus philosophy includes fields like ethics as a theory of morality, or epistemology, which we developed here into a “choreostemology”.

36. Orig.: „Der Punkt, um den es sich namentlich handelt, lässt sich ganz bestimmt angeben. Es ist gleichsam der Apfel in dem logischen Sündenfall der deutschen Philosophie nach Kant: das Verhältnis zwischen Subjekt und Objekt in der Erkenntnis.“

37. Although Rölli usually esteems Deleuze’s philosophy of the differential, here he refers to the difference. I think it should be read as “divergence and differential”.

38. Orig.: „Nach allem wird klarer geworden sein, dass es sich bei diesem Pragmatismus nicht um einen einfachen Pragmatismus handelt, sondern um einen mit aller philosophischen Raffinesse konstruierten Pragmatismus der Differenz.“

39. As scientific facts, Quantum physics, the probabilistic structure of the brain and the non-representationalist working of the brain falsify determinism as well as finiteness of natural processes, even if there should be something like “natural laws”.

40. See the article about the structure of comparison.

41. Even Putnam does so, not only in his early functionalist phase, but still in Representation and Reality [36].

42. Usually, philosophers are trained only in logics, which does not help much, since logic is not a process. Of course, being trained in mathematical structures does not imply that the resulting philosophy is reasonable at all. Take Alain Badiou as an example, who just blows up materialism.

43. A complete new theory of governmentality and sovereignty would be possible here.

44. The notion of “scopic” media as coined by Knorr Cetina means that modern media substantially change the point of view (“scopein”, looking, viewing). Today, we are not just immersed in them, but we deliberately choose them and search for them. The change of perspective is thought to be a multitude, contracting space and time. This, however, is not quite typical for the new media.

45. Here we refer to our extended view of “information” that goes far beyond the technically reduced perspective that forms the mainstream today. Information is a category that can’t be limited to the immaterial. See the chapter about “Information and Causality”.

46. Vera Bühlmann described certain aspects of Deleuze’s philosophy as an attempt to naturalize transcendentality in the context of emergence, as it occurs in complex systems. Deleuze described the respective setting in “Logic of Sense” [49] as the 14th series of paradoxes.

47. …which is not quite surprising, since we developed the choreostemic space together.

References
  • [1] Gilles Deleuze, Difference and Repetition. Translated by Paul Patton, Athlone Press, 1994 [1968].
  • [2] Ludwig Wittgenstein, Philosophical Investigations.
  • [3] Wilhelm Vossenkuhl. Die Möglichkeit des Guten. Beck, München 2006.
  • [4] Jürgen Habermas, Über Moralität und Sittlichkeit – was macht eine Lebensform »rational«? in: H. Schnädelbach (Hrsg.), Rationalität. Suhrkamp, Frankfurt 1984.
  • [5] Alan Turing. Chemical Basis of Morphogenesis.
  • [6] K. Wassermann, That Centre-Point Thing. The Theory Model in Model Theory. In: Vera Bühlmann, Printed Physics, Springer New York 2012, forthcoming.
  • [7] Georg Christoph Tholen. Die Zäsur der Medien. Kulturphilosophische Konturen. Suhrkamp, Frankfurt 2002.
  • [8] Vera Bühlmann. Inhabiting media : Annäherungen an Herkünfte und Topoi medialer Architektonik. Thesis, University of Basel 2011. available online, summary (in German language) here.
  • [9] Bruno Latour,
  • [10] Karin Knorr Cetina (1991). Epistemic Cultures: Forms of Reason in Science. History of Political Economy, 23(1): 105-122.
  • [11] Günther Ropohl, Die Unvermeidlichkeit der technologischen Aufklärung. In: Paul Hoyningen-Huene, & Gertrude Hirsch (eds.), Wozu Wissenschaftsphilosophie? De Gruyter, Berlin 1988.
  • [12] Bas C. van Fraassen, Scientific Representation: Paradoxes of Perspective. Oxford University Press, New York 2008.
  • [13] Ronald N. Giere, Explaining Science: A Cognitive Approach. Cambridge University Press, Cambridge 1988.
  • [14] Aaron Ben-Ze’ev, Is There a Problem in Explaining Cognitive Progress? pp.41-56 in: Robert F. Goodman & Walter R. Fisher (eds.), Rethinking Knowledge: Reflections Across the Disciplines (Suny Series in the Philosophy of the Social Sciences) SUNY Press, New York 1995.
  • [15] Robert Brandom, Making it Explicit.
  • [16] C.S. Peirce, var.
  • [17] Umberto Eco,
  • [18] Helmut Pape, var.
  • [19] Vera Bühlmann, “Primary Abundance, Urban Philosophy — Information and the Form of Actuality.” pp.114-154, in: Vera Bühlmann (ed.), Printed Physics. Springer, New York 2012, forthcoming.
  • [20] Hans Reichenbach, Experience and Prediction. An Analysis of the Foundations and the Structure of Knowledge, University of Chicago Press, Chicago, 1938.
  • [21] Wesley C. Salmon, Causality and Explanation. Oxford University Press, New York 1998.
  • [22] Mario Bunge, Causality and Modern Science. Dover Publ. 2009 [1979].
  • [23] Judea Pearl , T.S. Verma (1991) A Theory of Inferred Causation.
  • [24] Thomas S. Kuhn, Scientific Revolutions
  • [25] Nancy Cartwright. var.
  • [26] Isabelle Stengers, Spekulativer Konstruktivismus. Merve, Berlin 2008.
  • [27] Peter M. Stephan Hacker, “Of the ontology of belief”, in: Mark Siebel, Mark Textor (eds.),  Semantik und Ontologie. Ontos Verlag, Frankfurt 2004, pp. 185–222.
  • [28] Hans Radder, “Technology and Theory in Experimental Science.” in: Hans Radder (ed.), The Philosophy Of Scientific Experimentation. Univ of Pittsburgh 2003, pp.152-173
  • [29] C. Alchourron, P. Gärdenfors, D. Makinson (1985). On the logic of theory change: Partial meet contraction functions and their associated revision functions. Journal of Symbolic Logic, 50: 510–530.
  • [30] Sven Ove Hansson (1998). Editorial to Thematic Issue on: “Belief Revision Theory Today”, Journal of Logic, Language, and Information 7(2), 123-126.
  • [31] John L. Pollock, Anthony S. Gillies (2000). Belief Revision and Epistemology. Synthese 122: 69–92.
  • [32] Michael Epperson (2009). Quantum Mechanics and Relational Realism: Logical Causality and Wave Function Collapse. Process Studies, 38:2, 339-366.
  • [33] Robert Axelrod, Die Evolution der Kooperation. Oldenbourg, München 1987.
  • [34] Marc Rölli, Kritik der anthropologischen Vernunft. Matthes & Seitz, Berlin 2011.
  • [35] Deleuze, Guattari, What is Philosophy?
  • [36] Hilary Putnam, Representation and Reality.
  • [37] Giorgio Agamben, The State of Exception. University of Chicago Press, Chicago 2005.
  • [38] Elena Bellina, “Introduction.” in: Elena Bellina and Paola Bonifazio (eds.), State of Exception. Cultural Responses to the Rhetoric of Fear. Cambridge Scholars Press, Newcastle 2006.
  • [39] Friedrich Albert Lange, Geschichte des Materialismus und Kritik seiner Bedeutung in der Gegenwart. Frankfurt 1974. available online @ zeno.org.
  • [40] Michel Foucault, Archaeology of Knowledge.
  • [41] Benjamin Morgan, Undoing Legal Violence: Walter Benjamin’s and Giorgio Agamben’s Aesthetics of Pure Means. Journal of Law and Society, Vol. 34, Issue 1, pp. 46-64, March 2007. Available at SSRN: http://ssrn.com/abstract=975374
  • [42] Michael Epperson, “Bridging Necessity and Contingency in Quantum Mechanics: The Scientific Rehabilitation of Process Metaphysics.” in: David R. Griffin, Timothy E. Eastman, Michael Epperson (eds.), Whiteheadian Physics: A Scientific and Philosophical Alternative to Conventional Theories. in process, available online; mirror
  • [43] Gilles Deleuze, Empiricism and Subjectivity. An Essay on Hume’s Theory of Human Nature. Columbia University Press, New York 1989.
  • [44] Gilles Deleuze, Pure Immanence – Essays on A Life. Zone Books, New York 2001.
  • [45] Isabelle Peschard
  • [46] Knorr Cetina, Karin (2009): The Synthetic Situation: Interactionism for a Global World. In: Symbolic Interaction, 32 (1), S. 61-87.
  • [47] Knorr Cetina, Karin (2012): Skopische Medien: Am Beispiel der Architektur von Finanzmärkten. In: Andreas Hepp & Friedrich Krotz (eds.): Mediatisierte Welten: Beschreibungsansätze und Forschungsfelder. Wiesbaden: VS Verlag, S. 167-195.
  • [48] Vera Bühlmann, “Serialization, Linearization, Modelling” (First Deleuze Conference, Cardiff 2008); “Gilles Deleuze as a Materialist of Ideality” (lecture held at the Philosophy Visiting Speakers Series, University of Duquesne, Pittsburgh 2010).
  • [49] Gilles Deleuze, Logic of Sense. Columbia University Press, New York 1991 [1990].
  • [50] N. Malcolm, Nothing is Hidden: Wittgenstein’s Criticism of His Early Thought,  Basil Blackwell, Oxford 1986.
  • [51] David Blair, Wittgenstein, Language and Information: “Back to the Rough Ground!” Springer, New York 2006. mirror
  • [52] Caroline Lyon, Chrystopher L Nehaniv, J Saunders (2012). Interactive Language Learning by Robots: The Transition from Babbling to Word Forms. PLoS ONE 7(6): e38236. Available online (doi:10.1371/journal.pone.0038236)
  • [53] Peter M. Stephan Hacker, “Analytic Philosophy: Beyond the linguistic turn and back again”, in: M. Beaney (ed.), The Analytic Turn: Analysis in Early Analytic Philosophy and Phenomenology. Routledge, London 2006.
  • [54] Hilary Putnam, The Meaning of “Meaning”, 1976.

۞

Transformation

May 17, 2012 § Leave a comment

In the late 1980s there was a funny, or strange, if you like,

discussion in the German public about a particular influence of the English language on the German language. That discussion not only got teachers in higher education going; even „Der Spiegel“, Germany’s (still) leading weekly news magazine, damned the respective „anglicism“. What I am talking about here concerns the attitude towards „sense“. At that time, well over 20 years ago, it was considered impossible to say „dies macht Sinn“, engl. „this makes sense“. Speakers of German at that time understood the “make” as “to produce”. Instead, one was told, the correct phrase had to be „dies ergibt Sinn“, in a literal but impossible translation something like „this yields sense“, or even „dies hat Sinn“, in a literal but again wrong and impossible translation, „this has sense“. These former ways of referring to the notion of „sense“ feel awkward to many (most?) speakers of German today. Nowadays, the English version of the meaning of the phrase has replaced the old German one, and even in the „Spiegel“ one can now find the analogue to “making” sense.

Well, the issue here is not just one of historical linguistics or one of style. The differences that we can observe here are deeply buried in the structure of the respective languages. It is hard to say whether such idioms in the German language are due to the history of German Idealism, or whether this particular philosophical stance developed on the basis of the structures in the language. Perhaps a bit of both, one could say from a Wittgensteinian point of view. Anyway, we may and can relate such differences in “contemporary” language to philosophical positions.

It is certainly by no means an exaggeration to conclude that cultures differ significantly in what their languages allow to be expressed. Such a thing as an “exact” translation is not possible beyond trivial texts or a use of language that is very close to physical action. Philosophically, we may assign a scale, or a measure, to describe the differences mentioned above by probabilistic means, and this measure spans between pragmatism and idealism. This contrast also deeply influences philosophy itself. Any kind of philosophy comes in (at least) those two shades, often expressed or denoted by the attributes „continental“ and „anglo-american“. I think these labels just hide the relevant properties. This contrast of course applies to the reading of idealistic or pragmatic philosophers itself. It really makes a difference (1980s German . . . „it is a difference“) whether a native English-speaking philosopher reads Hegel, or a German native; whether a German native reads Peirce, or an American; whether Quine conducts research in logic, or Carnap. The story quickly complicates if we take into consideration French philosophy and its relation to Heidegger, or the reading of modern French philosophers in contemporary German-speaking philosophy (which is almost completely absent).1

And it becomes even more complicated, if not complex and chaotic, if we consider the various scientific sub-cultures as particular forms of life, formed by and forming their own languages. In this way it may well seem rather impossible—at least, one feels tempted to think so—to understand Descartes, Leibniz, Aristotle, or even the pre-Socratics, not to speak of the Cro-Magnon culture2, although it is probably more appropriate to reframe the concept of understanding. After all, it may itself be infected by idealism.

In the sections to come you may expect the following. As we did before, we’ll try to go beyond the mere technical description, providing the historical trace and the wider conceptual frame:

A Shift of Perspective

Here, I need this reference to the relativity as it is introduced in—or by—language for highlighting a particular issue. The issue concerns a shift in preference, from the atom, the point, from matter, substance, essence and metaphysical independence towards the relation and its dynamic form, the transformation. This shift concerns some basic relationships of the weave that we call “Lebensform” (form of life), including the attitude towards those empiric issues that we will deal with in a technical manner later in this essay, namely the transformation of “data”. There are, of course, almost countless aspects of the topos of transformation, such as evolutionary theory, the issue of development, or, in the more abstract domains, mathematical category theory. In some way or another we have already dealt with these earlier (for category theory, for evolutionary theory). These aspects of the concept of transformation will not play a role here.

In philosophical terms the described difference between the German and the English language, and the change of the respective German idiom, marks the transition from idealism to pragmatism. This corresponds to the transition from a philosophy of primal identity to one where difference is transcendental. In the same vein, we could also set up the contrast between logical atomism and the event as philosophical topoi, or between favoring existential approaches and ontology against epistemology. Even more remarkably, we also find an opposing orientation regarding time. While idealism, materialism, positivism or existentialism (and all similar attitudes) are heading backwards in time, and only backwards, pragmatism and, more generally, a philosophy of events and transformation is heading forward, and only forward. It marks the difference between settlement (in Heideggerian terms „Fest-Stellen“, in English something like „fixing at a location“, putting something into the „Gestell“3) and anticipation. Settlements are reflected by laws of nature in which time does not—and shall not—play a significant role. All physical laws, and almost all theories in contemporary physics, are symmetric with respect to time. The “law perspective” blinds against the concept of context, quite obviously so. Yet, being blinded against context also disables any adequate reference to information.

In contrast, within a framework that is truly based on the primacy of interpretation and thus following the anticipatory paradigm, it does not make sense to talk about “laws”. Notably, issues like the “problem” of induction exist only in the framework of the static perspective of idealism and positivism.

It is important to understand that these attitudes are far from being just “academic” distinctions. There are profound effects to be found on the level of empiric activity, concerning how data are handled and by which kinds of methods. Furthermore, the two attitudes can’t be “mixed” once one of them has been chosen. Although we may switch between them in a sequential manner, across time or across domains, we can’t practice them synchronously, as the whole setup of the form of life is affected. Of course, we do not want to rate one of them as the “best”; we just want to make clear that there are particular consequences of that basic choice.

Towards the Relational Perspective

As late as 1991, Robert Rosen’s work about „Relational Biology“ was anything but close at hand [1]. As a mathematician, Rosen was interested in the problematics of finding a proper way to represent living systems by formal means. As a result of this research, he strongly proposed the “relational” perspective. He identifies Nicolas Rashevsky, who first mentioned it around 1935, as its originator. It really sounds strange that relational biology had to be (re-)invented. What else than relations could be important in biology? Yet, still today atomistic thinking is quite abundant; just think of the reductionist approaches in genetics (which fortunately got seriously attacked meanwhile4), or of the still prevailing helplessness in various domains to conceive appropriately of complexity (see our discussion of this here). Being aware of relations means that the world is not conceived as made from items that are described by inputs and outputs with some analytics, or say deterministics, in between. Only of such items could it be said that they “function”. The relational perspective abolishes the possibility of reducing real “systems” to “functions”.

As it is already indicated by the appearance of Rashevsky, there is, of course, a historical trace for this shift, kind of a soil emerging from intellectual sediments.5 While the 19th century could be considered as being characterized by the topos of population (of atoms)—cf. the line from Laplace and Carnot to Darwin and Boltzmann—we can observe a spawning awareness of the relation in the 20th century. Wittgenstein’s Tractatus started to oppose Frege and has always been in stark contrast to logical positivism, then accompanied by Zermelo (“axiom” of choice6), Rashevsky (relational biology), Turing (morphogenesis in complex systems), McLuhan (media theory), String Theory in physics, Foucault (field of propositions), and Deleuze (transcendental difference). Comparing Habermas and Luhmann on the one side—we may label their position as idealistic functionalism—with Sellars and Brandom on the other—who have been digging into the pragmatics of the relation as it is present in humans and their culture—we find the same kind of difference. We could also include Gestalt psychology as kind of a precursor to the party of “relationalists,” mathematical category theory (as opposed to set theory), and some strains from the behavioral sciences. Researchers like Ekman & Scherer (FACS), Kummer (sociality expressed as dynamics of relative positions), or Colmenares (play) focused on the relation itself, going far beyond the implicit reference to the relation as a secondary quality. We may add David Shane7 for architecture and Clarke or Latour8 for sociology. Of course, there are many, many other proponents who helped to grow the topos of the relation; yet, even without a detailed study we may guess that, compared to the main streams, they still remain comparatively few.

These differences can hardly be overestimated in the field of the information sciences, computer sciences, data analysis, or machine-based learning and episteme. It makes a great difference whether one bases the design of an architecture, or the design of use, on the concept of interfaces, most often defined as a location of full control, notably in both directions, or on the concept of behavioral surfaces.9 In the field of empiric activities, that is modeling in its wide sense, it yields very different setups and consequences whether we start with the assumption of independence between our observables or between our observations, or whether we start with no assumptions about the dependency between observables, or observations, respectively. The latter is clearly the preferable choice in terms of intellectual soundness. Even if we stick to the first of the two alternatives, we should NOT use methods that work only if that assumption is satisfied. (It is some kind of mystery that people believe that doing so could be called science.) The reason is pretty simple. We do not know anything about the dependency structures in the data before we have finished modeling. It would inevitably result in a petitio principii if we put “independence” into the analysis, wrapped into the properties of methods. We would just find… guess what. After crushing facts—in the Wittgensteinian sense understood as relationalities—into empiristic dust, we will not be able to find any meaningful relation at all.

Positioning Transformation (again)

Similarly, if we treat data as a “true” mapping of an outside “reality”, as “givens” that are possibly distorted a bit by more or less noise, we will never find multiplicity in the representations that we could derive from modeling, simply because it would contradict the prejudice. We also would not recognize all the possible roles of transformation in modeling. A measurement device acts as a filter10, and as such it does not differ from any analytic transformation of the data. From the perspective of the associative part of modeling, where the data are mapped to desired outcomes or decisions, “raw” data are simply not distinguishable from “transformed” data, unless the treatment itself were encoded as data as well. Correspondingly, we may consider any data transformation by algorithmic means as an additional measurement device, responding to particular qualities in the observations in its own right. It is this equivalence that allows for the change from the linear to a circular and even a self-referential arrangement of empiric activities. Long-term adaptation, I would say any adaptation at all, is based on such a circular arrangement. The only thing we had to change to gain these new possibilities was to drop the “passivist” representationalist realism11.
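
The equivalence claimed above can be sketched in a few lines of code. The sketch is illustrative only: a “measurement device” is modelled as a plain callable, and the record field `temperature` is an invented example. The point is that composing a device with an algorithmic transformation yields another object of exactly the same type, so the downstream associative part of modeling cannot distinguish “raw” from “transformed” readings.

```python
import math

def compose(device, transform):
    """Return a new 'device': the transform applied to the device's output.

    The result has the same call signature as the original device, which is
    precisely the equivalence discussed in the text.
    """
    return lambda observation: transform(device(observation))

# hypothetical raw device: reads a numeric field from an observation record
raw_device = lambda obs: obs["temperature"]

# an algorithmic transformation, acting as one more measurement device
log_device = compose(raw_device, math.log)

obs = {"temperature": 100.0}
print(raw_device(obs))   # raw reading
print(log_device(obs))   # transformed reading, same interface
```

Because both callables share one interface, they can be swapped or chained freely, which is what makes the circular arrangement of empiric activities possible at all.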

Usually, the transformation of data is considered as an issue that is a function of discernibility as an abstract property of data (yet, people don’t talk like that; it’s our way of speaking here). Today, the respective aphorism as coined by Bateson has already become proverbial, despite its simplistic shape: information is a difference that makes a difference. According to the context in which data are handled, this potential discernibility is addressed in different ways. Let us distinguish three such contexts: (i) data warehousing, (ii) statistics, and (iii) learning as an epistemic activity.

In Data Warehousing one is usually faced with a large range of different data sources and data sinks, or consumers, where the difference between these sources and sinks simply relates to the different technologies and formats of databases. The warehousing tool should “transform” the data such that they can be used in the intended manner on the side of the sinks. The storage of the raw data as measured from the business processes, and the efforts to provide any view onto these data, have to satisfy two conditions (in the current paradigm). It has to be neutral—data should not be altered beyond the correction of obvious errors—and its performance, simply in terms of speed, has to be scalable, if not independent from the data load. The activities in Data Warehousing are often circumscribed as “Extract, Transform, Load”, abbreviated ETL. There are many and large software solutions for this task, commercial ones and open source (e.g. Talend). The effect of DWH is to disclose the potential for an arbitrary and quickly served perspective onto the data, where “perspective” means just re-arranged columns and records from the database. Except for cleaning and simple arithmetic operations, the individual bits of data themselves remain largely unchanged.
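
The ETL pattern mentioned above can be sketched minimally as follows; the source records and field names (`customer`, `amount`) are invented for illustration. Note that, in line with the text, the transform step only cleans and unifies formats, it does not alter the substance of the data.

```python
def extract(source):
    """Extract: read raw records from a (here: in-memory) source."""
    return list(source)

def transform(records):
    """Transform: correct obvious errors and unify formats only."""
    out = []
    for r in records:
        out.append({
            "customer": r["customer"].strip().title(),  # cleaning only
            "amount": float(r["amount"]),               # format unification
        })
    return out

def load(records, sink):
    """Load: append the cleaned records to the sink."""
    sink.extend(records)
    return sink

source = [{"customer": "  smith ", "amount": "12.50"},
          {"customer": "JONES", "amount": "3"}]
warehouse = load(transform(extract(source)), [])
print(warehouse)
```

A real warehousing tool adds scalability, scheduling and many source/sink adapters, but the linear extract → transform → load shape is the same.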

In statistics, transformations are applied in order to satisfy the conditions for particular methods. In other words, the data are changed in order to enhance discernibility. Most popular is the log-transformation, which spreads the small values of a distribution apart. Two different small values that are consequently located nearby are separated better after a log-transformation; hence it is feasible to apply the log-transformation to data that form a skewed distribution with its mode at small values. Other transformations aim at a particular distribution, such as the z-score, or Fisher’s z-transformation. Interestingly, there is a further class of powerful transformations that is not conceived as such. Residuals are defined as the deviation of the data from a particular model. In linear regression it is the vertical distance to the regression line, whose squares are minimized.
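
The two standard transformations just named can be demonstrated with a few invented values. The log-transformation improves the relative separation of two nearby small values within the overall range, and the z-score standardizes data to mean 0 and standard deviation 1.

```python
import math
import statistics

# invented, skewed data with two nearby small values
data = [1.1, 1.2, 50.0, 200.0]
logged = [math.log(x) for x in data]

# relative separation of the two small values, before and after the log
rel_raw = (data[1] - data[0]) / (max(data) - min(data))
rel_log = (logged[1] - logged[0]) / (max(logged) - min(logged))
print(rel_raw < rel_log)     # better discernibility after the log

# z-score: standardization towards mean 0, standard deviation 1
mu, sd = statistics.mean(data), statistics.stdev(data)
z_scores = [(x - mu) / sd for x in data]
print(z_scores)
```

Fisher’s z-transformation works analogously on correlation coefficients, stretching values near ±1 apart.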

The concept, however, can be extended to those data which do not “follow” the investigated model. The analysis of residuals has two aspects, a formal one and an informal one. Formally, it is used as a complex test of whether the investigated model does fit or not. The residuals should not show any evident “structure”. That’s it. There is no institutional way back to the level of the investigated model; there are no rules about that which could be negotiated in a yet-to-be-established community. The statistical framework is a linear one, which could be seen as a heritage from positivism. It is explicitly forbidden to “optimize” a correlation by multiple actualization. Yet, informally the residuals may give hints on how to change the basic idea as represented by the model. Here we find a circular setup, where the strategy is to remove any rule-based regularity, i.e. discernibility, from the data.
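
What “evident structure in the residuals” means can be shown with a small invented example: a linear model fitted to data that actually follow a quadratic relation leaves residuals with a clear U-shape, i.e. they are anything but white noise.

```python
# data that do not "follow" the investigated linear model
xs = [float(i) for i in range(10)]
ys = [x * x for x in xs]                  # hidden quadratic relation

# ordinary least squares for the investigated model y = a + b*x
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# residuals: vertical deviations of the data from the fitted line
residuals = [y - (a + b * x) for x, y in zip(xs, ys)]

# positive at both ends, negative in the middle: structure remains
print(residuals[0] > 0, residuals[4] < 0, residuals[-1] > 0)
```

The formal test would stop at “structure detected, model rejected”; the informal reading of the U-shape suggests adding a quadratic term, which is exactly the circular step the text describes.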

The effect of this circular arrangement takes place completely in the practicing human, as kind of a refinement. It can’t be found anywhere in the methodological procedure itself in a rule-based form. This brings us to the third area, epistemic learning.

In epistemic learning, any of the potentially significant signals should be rendered in such a way as to allow for an optimized mapping towards a registered outcome. Such outcomes often come as binary values, or as a small group of ordinal values in the case of multi-constraint, multi-target optimization. In epistemic learning we thus find the separation of transformation and association in its most prominent form, despite the fact that data warehousing and statistics are also intended to be used for enhancing decisions. Yet, their linearity simply does not allow for any kind of institutionalized learning.

This arbitrary restriction to the linear methodological approach in formal epistemic activities results in two related, quite unfavorable effects: first, the shamanism of “data exploration”, and second, the infamous hell of methods. One can indeed find thousands, if not tens of thousands, of research or engineering articles trying to justify a particular new method as the most appropriate one for a particular purpose. These methods themselves, however, are never identified as a „transformation“. Authors are all struggling for the “best” method, the whole community neglecting the possibility—and the potential—of combining different methods after shaping them as transformations.

The laborious and never-ending training necessary to choose from the huge number of possible methods is then called methodology… The situation is almost paradoxical. First, the methods are claimed to tell something about the world, although this is not possible at all, not just because those methods are analytic. It is an idealistic hope, which was already demolished by Hume. Above all, only analytic methods are considered to be scientific. Then, through the large population of methods, the choice for a particular one becomes aleatory, which renders the whole activity into a deeply non-scientific one. Additionally, it is governed by the features of some software, or the skills of the user of such software, not by a conceptual stance.

Now remember that any method is also a specific filter. Obviously, nothing can be known about the benefit of a particular method before the prediction that is based on the respective model has been validated. This simple insight renders “data exploration” meaningless. It can only play its role within linear empirical frameworks, which are inappropriate anyway. Data exploration is suggested to be done “intuitively”, often using methods of visualization. Yet, those methods are severely restricted with regard to the graspable dimensionality. More than 6 to 8 dimensions can’t be “visualized” at once. Compare this to the 2^n (n: number of variables) possible models and you immediately see the problem. Otherwise, the only effect of visualization is just a primitive form of clustering. Additionally, visual inputs are images, above all, and as images they can’t play a well-defined epistemological role.12
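
The combinatorial argument is easy to make concrete. With n variables there are 2^n variable subsets, i.e. candidate models, while visual inspection caps out at roughly 6 to 8 dimensions at once; n = 20 is an arbitrary, modest example.

```python
from itertools import combinations

n = 20
n_models = 2 ** n                # 1,048,576 subsets for just 20 variables
print(n_models)

# even restricting to subsets of at most 2 variables gives 211 candidates
small_subsets = sum(1 for k in (0, 1, 2)
                    for _ in combinations(range(n), k))
print(small_subsets)             # 1 + 20 + 190
```

Real data sets easily exceed a hundred variables, at which point the number of candidate models dwarfs anything that could be inspected “intuitively”.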

Complementary to the non-concept of “exploring” data13, and equally misconceived, is the notion of “preparing” data. At least, it must be rated as misconceived as far as it comprises transformations beyond error correction and arranging data into tables. The reason is the same: we can’t know whether a particular “cleansing” will enhance the predictive power of the model, in other words, whether it comprises potential information that supports the intended discernibility, before the model has been built. There is no possibility to decide which variables to include before having finished the modeling. In some contexts the information accessible through a particular variable could be relevant or even important. Yet, if we conceive transformations as preliminary hypotheses, we can’t call them “preparation” any more. “Preparation” for what? For proving the petitio principii? Certainly the peak of all preparatory nonsense is the “imputation” of missing values.

Dorian Pyle [11] calls such introduced variables “pseudo variables”, others call them “latent” or even “hidden variables”.14 Any of these labels is inappropriate, since the transformation is nothing else than a measurement device. Introduced variables are just variables, nothing else.
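
The point that an introduced variable is just a variable can be made concrete in two lines; the column names (`length`, `width`, `area`) are invented. The derived column is appended to the data table exactly like a measured one, and nothing in the table marks it as “pseudo”, “latent” or “hidden”.

```python
# a small invented data table
table = [
    {"length": 2.0, "width": 3.0},
    {"length": 4.0, "width": 5.0},
]

for row in table:
    # the transformation acts as one more measurement device;
    # its output is a column indistinguishable from the measured ones
    row["area"] = row["length"] * row["width"]

print([row["area"] for row in table])
```

Whether `area` carries information that supports the intended discernibility can, as argued above, only be decided after the model has been validated.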

Indeed, these labels are reliable markers: whenever you meet a book or article dealing with data exploration, data preparation, the “problem” of selecting a method, or likewise, selecting an architecture within a meta-method like the Artificial Neural Networks, you can know for sure that the author is not really interested in learning and reliable predictions. (Or, that he or she is not able to distinguish analysis from construction.)

In epistemic learning the handling of residuals is somewhat inverse to their treatment in statistics, again as a result of the conceptual difference between the linear and the circular approach. In statistics one tries to prove that the model, say: transformation, removes all the structure from the data such that the remaining variation is pure white noise. Unfortunately, there are two drawbacks with this. First, one has to define the model before removing the noise and before checking the predictive power. Secondly, the test for any possibly remaining structure again takes place within the atomistic framework.

In learning we are interested in the opposite. We are looking for transformations which remove the noise in a multi-variate manner, such that the signal-to-noise ratio is strongly enhanced, perhaps even to the proto-symbolic level. Only after the de-noising due to the learning process, that is, after a successful validation of the predictive model, is the structure then described for the (almost) noise-free data segment15, as an expression that is complementary to the predictive model.

In our opinion an appropriate approach would actualize as an instance of epistemic learning that is characterized by

  • conceiving any method as a transformation;
  • conceiving measurement as an instance of transformation;
  • conceiving any kind of transformation as a hypothesis about the “space of expressibility” (see next section), or, similarly, about the finally selected model;
  • the separation of transformation and association;
  • the circular arrangement of transformation and association.
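
The points above can be sketched as a loop. This is a schematic stand-in, not a full learning system: the transformations, the target values and the in-sample scoring are all invented for illustration. Each transformation is treated as a hypothesis, an associative model is fitted on the transformed data, and only validation decides which hypothesis survives.

```python
import math

def identity(x):  return x
def log_tf(x):    return math.log(x)

xs = [1.0, 2.0, 4.0, 8.0, 16.0]
ys = [0.0, 1.0, 2.0, 3.0, 4.0]    # invented target, log-like by design

def fit_and_score(transform):
    """Associate transformed data with the target, return the error."""
    tx = [transform(x) for x in xs]
    mx, my = sum(tx) / len(tx), sum(ys) / len(ys)
    b = sum((t - mx) * (y - my) for t, y in zip(tx, ys)) / \
        sum((t - mx) ** 2 for t in tx)
    a = my - b * mx
    # validation error (here in-sample, for brevity)
    return sum((y - (a + b * t)) ** 2 for t, y in zip(tx, ys))

# the loop over hypotheses closes the circle:
# transform -> associate -> validate -> select
best = min([identity, log_tf], key=fit_and_score)
print(best.__name__)
```

In a real setting the candidate transformations would be richer, the association non-linear, and the scoring based on held-out data; the circular shape, however, is exactly this.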

The Abstract Perspective

We now have to take a brief look at the mechanics of transformations in the domain of epistemic activities.16 For doing this, we need a proper perspective. As such we choose the notion of space. Yet, we would like to emphasize that this space is not necessarily Euclidean, i.e. flat, or open like the Cartesian space, i.e. with quantities running to infinity. Also, dimensions need not be thought of as being “independent”, i.e. orthogonal to each other. Distance measures need to be defined only locally, yet without implying ideal continuity. There might be a certain kind of “graininess” defined by a distance D, below which the space is not defined. The space may even contain “bubbles” of lower dimensionality. So, it is indeed a very general notion of “space”.

Observations shall be represented as “points” in this space. Since these “points” are not independent from the efforts of the observer, they are not dimensionless. To put it more precisely, they are like small “clouds”, best described as probability densities for “finding” a particular observation. Of course, this “finding” is kind of an inextricable mixture of “finding” and “constructing”. It does not make much sense to distinguish the two on the level of such cloudy points. Note that the cloudiness is not a problem of accuracy in measurement! A posteriori, that is, subsequent to introducing an irreversible move17, such a cloud could also be interpreted as an open set made from the provoked observation and virtual observations. It should be clear by now that such a concept of space is very different from the Euclidean space that nowadays serves as a base concept for any statistics or data mining. If you think that conceiving such a space is unnecessary or even nonsense, then think about quantum physics. There we are likewise faced with the breakdown of the separation between observer and observable, and physicists ended up quite precisely with spaces as we described them above. These spaces are then handled by various means of renormalization methods.18 In contrast to the abstract yet still physical space of quantum theory, our space need not even contain an “origin”. Elsewhere we called such a space aspectional space.

Now let us take the important step of becoming interested in only a subset of these observations. Assume we want to select a very particular set of observations (they are still clouds of probabilities, made from virtual observations) by means of prediction. This selection can be conceived in two different ways. The first way is the one that is commonly applied and consists of the reconstruction of a “path”. Since in the contemporary epistemic life form of “data analysts” Cartesian spaces are used almost exclusively, all these selection paths start from the origin of the coordinate system. The endpoint of the path is the point of interest, the “outcome” that should be predicted. As a result, one first gets a mapping function from predictor variables to the outcome variable. All possible mappings form the space of mappings, which is a category in the mathematical sense.

The alternative view does not construct such a path within a fixed coordinate system, i.e. within a space with fixed properties. Quite the contrary: the space itself gets warped and transformed until very simple figures appear, which represent the various subsets of observations according to the focused quality.

Imagine an ordinary, small, blown-up balloon. Next, imagine a grid in the space enclosed by the balloon’s hull, made by very thin threads. These threads shall represent the space itself. Of course, in our example the space is 3d, but it is not limited to this case. Now think of two kinds of small pearls attached to the threads all over the grid inside the balloon, blue ones and red ones. It shall be the red ones in which we are interested. The question now is: what can we do to separate the blue ones from the red ones?

The way to proceed is pretty obvious, though the solution itself may be difficult to achieve. What we can try is to warp and to twist, to stretch, to wring and to fold the balloon in such a way that the blue pearls and the red pearls separate as nicely as possible. In order to purify the groups we may even consider compressing some regions of the space inside the balloon such that they turn into singularities. After all this work (and beware, it is hard work!) we introduce a new grid of threads into the distorted space and dissolve the old one. All pearls automatically attach to the threads closest nearby, stabilizing the new space. Again, conceiving of such a space may seem weird, but again we can find a close relative in physics, the Einsteinian space-time. Gravitation effectively warps that space, though in a continuous manner. There are famous empirical proofs of that warping of physical space-time.19

Analytically, these two perspectives, the path reconstruction on the one hand and the space warping on the other, are (almost) equivalent. The perspective of space warping, however, offers a benefit that is not to be underestimated. We arrive at a new space for which we can define its own properties and in which we again can define measures that are different from those possible in the original space. The path reconstruction does not offer such a “derived space”. Hence, once the path is reconstructed, the story stops. It is a linear story. Our proposal thus is to change perspective.

Warping the space of measurability and expressibility is an operation that inverts the generation of cusp catastrophes20 (see Figure 1 below). Thus it transcends the cusp catastrophes. In the perspective of path reconstruction one has to avoid the phenomenon of hysteresis and cusps altogether, hence losing a lot of information about the observed source of data.

In the Cartesian space and the path reconstruction methodology related to it, all operations are analytic, that is, organized as symbolic rewriting. The reason for this is the necessity for the paths to remain continuous and closed. In contrast, space warping can be applied locally. Warping spaces while dealing with data is not an exotic or rare activity at all. It happens all the time. We know it even from (simple) mathematics, when we define different functions, including the empty function, for different sets of input parameter values.

The main consequence of changing the perspective from path reconstruction to space warping is an enlargement of the set of possible expressions. We can do more without the need to call it “heuristics”. Our guess is that any serious theory of data and measurement must follow the route opened by space warping, if it tries to avoid positivistic reductionism. Most likely, such a theory will be a kind of renormalization theory in a connected, relativistic data space.

Revitalizing Punch Cards and Stacks

In this section we will introduce the outline of a tool that allows one to follow the circular approach in epistemic activities. Basically, this tool is about organizing arbitrary transformations. While for analytic (mathematical) expressions there are expression interpreters, it is also clear that analytic expressions form only a subset of the set of all possible transformations, even if we consider the fact that many expression interpreters have grown into some kind of programming language, or script language. Indeed, Java contains an interpreting engine for JavaScript by default, and there are several quite popular ones for mathematical purposes. One could also conceive of mathematical packages like Octave (open source), MatLab or Mathematica (both commercial) as such expression interpreters, even though their most recent versions can do much, much more. Yet, MatLab & Co. are not quite suitable as a platform for general-purpose data transformation.

The structural metaphor that proved to be as powerful as it was sustainable, for more than 10 years now, is the combination of the workbench with the punch card stack.

Image 1: A Punched Card for feeding data into a computer

Any particular method, mathematical expression or arbitrary computational procedure resulting in a transformation of the original data is conceived as a “punch card”. This provides a proper modularization, and hence standardization. Actually, the role of these “functional compartments” is extremely standardized, at least enough to define an interface for plugins. Like the ancient punch cards made from paper, each card represents a more or less fixed functionality. Of course, this functionality may be defined by a plugin that itself connects to Matlab…

And again, like the ancient punch cards, the virtualized versions can be stacked. For instance, we first put the treatment for missing values onto the stack, simply to ensure that all NULLs are written as -1. The next card then determines minimum and maximum in order to provide the data for linear normalization, i.e. the mapping of all values into the interval [0..1]. Then we add a card for compressing the “fat tail” of the distribution of values in a particular variable. Alternatively we may use a card to split the “fat tail” off into a new variable! Finally we apply the card (= plugin) for normalizing the data to the original and the new data column.
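The stack just described can be sketched in code. This is a minimal illustration of the metaphor, not the actual plugin interface of the workbench; all function and card names are hypothetical.

```python
# A minimal "punch card stack": each card is a callable that transforms a
# column (a list of floats) and hands it to the next card on the stack.

def replace_nulls(col, null_marker=-1.0):
    """Card 1: encode missing values (None) as -1."""
    return [null_marker if v is None else v for v in col]

def compress_fat_tail(col, cap_quantile=0.95):
    """Card 2: clip the upper tail at a quantile to compress a 'fat tail'."""
    cap = sorted(col)[int(cap_quantile * (len(col) - 1))]
    return [min(v, cap) for v in col]

def linear_normalize(col):
    """Card 3: determine min/max and map all values into [0..1]."""
    lo, hi = min(col), max(col)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in col]

def run_stack(stack, col):
    """Feed the column through the cards, top of the stack first."""
    for card in stack:
        col = card(col)
    return col

stack = [replace_nulls, compress_fat_tail, linear_normalize]
print(run_stack(stack, [0.5, None, 2.0, 1.0, 40.0]))
```

The point of the metaphor is that cards are interchangeable and the stack is assembled per variable, so the same dispatcher can run very different recipes.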

I think you got the idea. Such a stack is not only maintained for each of the variables, it is created on the fly according to the needs as these are detected by simple rules. You may think of the cards also as the sets of rules that describe the capabilities of agents, which constantly check the data as to whether they could apply their rules. You may also think of these stacks as a device that works like a tailored distillation column, as it is used for fractional distillation in petro-chemistry.

Image 2: Some industrial fractional distillation columns for processing mineral oil. Dependent on the number of distillation steps different products result.

These stacks of parameterized procedures and expressions represent a generally programmable computer, or more precisely an operating system, quite similar to a spreadsheet, albeit the purpose of the latter, and hence its functionality, actualizes in a different form. The whole thing may even be realized as a language! In this case, one would not need a graphical user interface anymore.

The effect of organizing the transformation of data in this way, by means of plugins that follow the metaphor of the “punch card stack”, is dramatic. Introducing transformations and testing them can be automated. At this point we should mention the natural ally of the transformation workbench, the maximum likelihood estimation of the most promising transformations that combine just two or three variables into a new one. All three parts, the transformation stack engine, the dependency explorer, and the evolutionarily optimized associative engine (which is able to create a preference weighting for the variables), can be put together in such a way that finding the “optimal” model can be run in a fully automated manner. (Meanwhile the SomFluid package has grown into a stage where it can accomplish this… download it here, but you still need some technical expertise to make it run.)

The approach of the “transformation stack engine” is not just applicable to tabular data, of course. Given a set of proper plugins, it can be used as a digester for large sets of images or time series as well (see below).

Transforming Data

In this section we now will take a more practical and pragmatic perspective. Actually, we will describe some of the most useful transformations, including their parameters. We do so, because even prominent books about “data mining” have been handling the issue of transforming data in a mistaken or at least seriously misleading manner.21,22

If we consider the goal of the transformation of numerical data, increasing the discernibility of assignated observations, we will recognize that we may identify a rather limited number of types of such transformations, even if we consider the space of possible analytic functions that combine two (or three) variables.

We will organize the discussion of the transformations into three sub-sections, whose subjects are of increasing complexity. Hence, we will start with the (ordinary) table of data.

Tabular Data

Tables may comprise numerical data or strings of characters. In their general form they may even contain whole texts, a complete book in any of the cells of a column (but see the section about unstructured data below!). If we want to access the information carried by the string data, we sooner or later have to translate them into numbers. Unlike numbers, string data, and the relations between data points made from string data, must be interpreted. As a consequence, there are always several, if not many, different possibilities of representation. Besides referring to the actual semantics of the strings, which could be expressed by means of the indices of some preference orders, there are also two important techniques of automatic scaling available, which we will describe below.

Besides string data, dates are a further multi-dimensional category of data. A date encodes not only a serial number relative to some (almost) arbitrarily chosen base date, which we can use to express the age of the item represented by the observation. We also have, of course, day of week, day of month, number of week, number of month, and not to forget season as an approximate class. It depends a bit on the domain whether these aspects play any role at all. Yet, if you think about the rhythms in the city or on the stock markets across the week, or the “black Monday/Tuesday/Friday effect” in production plants or hospitals, then it is clear that we usually have to represent the single date value by several “informational extracts”.

A last class of data types that we have to distinguish are time values. We already mentioned the periodicity in other aspects of the calendar. In which pair of time values do we find the closer similarity, T1(23:41, 0:05) or T2(8:58am, 3:17pm)? Any naive distance measure evaluates the values of T2 as much more similar than those in T1, although the times in T1 are separated by just 24 minutes across midnight. What we have to do is to set a flag for “circularity” in order to calculate the time distances correctly.
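Setting the “circularity” flag amounts to measuring distances on the clock face rather than on the number line. A small sketch (the function name is ours, not a library call):

```python
def circular_minutes_distance(t1, t2, period=24 * 60):
    """Distance between two times of day, in minutes, wrapping at midnight.

    t1, t2: minutes since 0:00; period: length of the cycle (one day)."""
    a = abs(t1 - t2) % period
    return min(a, period - a)

# T1 = (23:41, 0:05): linearly far apart, circularly only 24 minutes.
d1 = circular_minutes_distance(23 * 60 + 41, 0 * 60 + 5)
# T2 = (8:58, 15:17): 379 minutes apart either way.
d2 = circular_minutes_distance(8 * 60 + 58, 15 * 60 + 17)
print(d1, d2)
```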

Numerical Data: Numbers, just Numbers?

Numerical data are data for which, in principle, any value from within a particular interval could be observed. If such data are symmetrically and normally distributed then we have little reason to suspect that there is something interesting within this sample of values. As soon as the distribution becomes asymmetrical, it starts to become interesting. We may observe “fat tails” (large values are “over-represented”), or multi-modal distributions. In both cases we could suspect that there are at least two different processes, one dominating the other differentially across the peaks. So we should split the variable into two (called “deciling”) and ceteris paribus check the effect on the predictive power of the model. Typically one splits the values at the minimum between the peaks, but it is also possible to implement an overlap, where some records are present in both of the new variables.
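Splitting at the minimum between the peaks can be sketched as follows; the coarse-histogram heuristic and all names are illustrative assumptions, not a prescription.

```python
# Sketch: split a (possibly bimodal) variable into two new variables at the
# minimum of a coarse histogram between the peaks. Records outside a split's
# range are set to None; an overlap variant would keep some in both.

def split_at_histogram_minimum(values, bins=10):
    lo, hi = min(values), max(values)
    width = ((hi - lo) / bins) or 1.0
    counts = [0] * bins
    for v in values:
        i = min(int((v - lo) / width), bins - 1)
        counts[i] += 1
    # take the emptiest interior bin as the split point
    cut_bin = min(range(1, bins - 1), key=lambda i: counts[i])
    cut = lo + (cut_bin + 0.5) * width
    left = [v if v <= cut else None for v in values]
    right = [v if v > cut else None for v in values]
    return cut, left, right

cut, left, right = split_at_histogram_minimum(
    [1.0, 1.1, 0.9, 1.2, 5.0, 5.1, 4.9, 5.2])
print(cut)
```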

Long tails indicate some aberrant behavior of the items represented by the respective records, or, as in medicine, even pathological contexts. Strongly left-skewed distributions often indicate organizational or institutional influences. Here we could compress the long tail, log-shift, and then split the variable, that is, decile it into two.21

In some domains, like finance, we find special values at which symmetry breaks. For ordinary money values the 0 is such a value. We know in advance that we have to split the variable into two, because the semantic and structural difference between +50$ and -75$ is much bigger than between 150$ and 2500$… probably. As always, we transform the variable such that we create additional variables as kinds of hypotheses, for which we have to evaluate their (positive) contribution to the predictive power of the model.

In finance, but also in medicine, and more generally in any system that is able to develop meta-stable regions, we have to expect such points (or regions) with increased probability of breaking symmetry, and hence strong semantic or structural difference. René Thom first described similar phenomena in the theory that he labeled “catastrophe theory”. In 3d you can easily think of a cusp catastrophe as a hysteresis in the x-z direction that is, however, gradually smoothed out in the y-direction.

Figure 1: Visualization of folds in parameters space, leading to catastrophes and hystereses.

In finance we are faced with a whole culture of rule-following. The majority of market analysts use the same tools, for instance “stochasticity”, or a particularly parameterized MACD, for deriving “signals”, that is, indicators of points of action. The financial industries have been hiring a lot of physicists, and this population sticks to largely the same mathematics, such as GARCH combined with Monte-Carlo simulations. Approaches like fractal geometry are still regarded as exotic.23

Or think about option prices, where we find several symmetry breaks determined by the contract itself. These points have to be represented adequately in dedicated, that is, derived variables. Again, we can’t emphasize it enough, we HAVE to do so as a kind of performative hypothesizing. The transformation of data by creating new variables is, so to speak, the low-level operationalization of what later may grow into a scientific hypothesis. Creating new variables poses serious problems for most methods, which may count as a reason why many people don’t follow this approach. Yet, for our approach it is not a problem, definitely not.

In medicine we often find “norm values”. Potassium in blood serum may take any value within a particular range without reflecting any physiologic problem… if the person is healthy. If there are other risk factors the story may be a different one. The ratio of potassium and glucose in serum provides an example of a significant marker… if the person already has heart problems. By means of such risk markers we can introduce domain-specific knowledge. And that’s actually a good message, since we can identify our own “markers” and represent them as transformations. The consequence is pretty clear: a system that is supposed to “learn” needs a suitable repository for storing and handling such markers, represented as a relational system (graph).

Let us briefly return to the norm ranges. A small difference outside the norm range should be rated much more strongly than one within the norm range. This may lead to weight functions like those shown in the next figure, or more or less similar ones. For a certain range of input values, the norm range, we leave the values unchanged; the output weight equals 1. Outside of this range we transform them in a way that emphasizes the difference to the respective boundary value of the norm range. This could be done in different ways.

Figure 2: Examples for output weight configurations in norm-range transformation

Actually, this rationale of the norm range can be applied to any numerical data. As an estimate of the norm range one could use the central 80% quantile, centered around the median and realized as the ±40% quantiles. On the level of model selection, this will result in a particular sensitivity for multi-dimensional outliers, notably without defining any apriori criterion of what an outlier should be.

From Strings to Orders to Numbers

Many data come as some kind of description or label. Such data are described as nominal data. Think for instance of prescribed drugs in a group of patients included in an investigation of risk factors for a disease, or of the name or the type of restaurants in an urbanological/urbanistic investigation. Nominal data are quite frequent in behavioral, organizational or social data, that is, in contexts that are established mainly on a symbolic level.

Performing measurements only on the nominal scale should be avoided; yet, sometimes it is not possible to circumvent it. It can be avoided at least partially by including further properties that can be represented by numerical values. For instance, instead of using only the names of cities in a data set, one can use the geographical location or the number of inhabitants, and when referring to places within a city one can use descriptors that cover some properties of the respective area, such as density of traffic, distance to similar locations, price level of consumer goods, economic structure etc. If a direct measurement is not possible, estimates can do the job as well, if the certainty of the estimate is expressed. The certainty then can be used to generate surrogate data. If the fine-grained measurement creates further nominal variables, they could be combined to form a scale. Such enrichment is almost always possible, irrespective of the domain. One should keep in mind, however, that any such enrichment is nothing else than a hypothesis.

Sometimes, data on the nominal level, technically a string of alphanumerical characters, already contain valuable information. For instance, they may contain numerical values, as in the names of cars. If we deal with things like the names of molecules, where these names often come as compounds, reflecting the fact that molecules themselves are compounds, we can calculate the distance of each name to a virtual “average name” by applying a technique called “random graph”. Of course, in the case of molecules we would have a lot of properties available that can be expressed as numerical values.

Ordinal data are closely related to nominal data. Essentially, there are two flavors of them. In the least valuable of them the numbers do not express a numerical value; the cipher is just used as kind of a letter, indicating that there is a set of sortable items. Sometimes, values of an ordinal scale represent some kind of similarity. Although this variant is more valuable, it still can be misleading, because the similarity may not scale isodistantly with the numerical values of the ciphers. Undeniably, there is still a rest of a “name” in it.

We are now going to describe some transformations to deal with data from low-level scales.

The least action we have to apply to nominal data is a basic form of encoding: we use integer values instead of the names. The next, though only slightly better, level would be to reflect the frequency of the encoded item in the ordinal value. One would, for instance, not encode the name into an arbitrary integer value, but into the log of its frequency. A much better alternative, however, is provided by the descendants of correspondence analysis. These are called optimal scaling and the relative risk weight. The drawback of these methods is that some information about the predicted variable is necessary. In the context of modeling, by which we always understand target-oriented modeling (as opposed to associative storage24), we usually find such information, so the drawback is not too severe.
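The frequency-based encoding just mentioned takes only a few lines; this is a hedged illustration, and the normalization of the logs into [0..1] is our addition.

```python
import math
from collections import Counter

def log_frequency_encode(column):
    """Replace each nominal value by the normalized log of its frequency,
    the 'slightly better' level above arbitrary integer codes."""
    freq = Counter(column)
    logs = {name: math.log(n) for name, n in freq.items()}
    lo, hi = min(logs.values()), max(logs.values())
    span = (hi - lo) or 1.0
    return [(logs[name] - lo) / span for name in column]

print(log_frequency_encode(["a", "a", "a", "b"]))
```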

First to optimal scaling (OSC). Imagine a variable, or “assignate” as we prefer to call it25, which is scaled on the nominal or the low ordinal scale. Let us assume that there are just three different names or values. As already mentioned, we assume that a purpose has been selected and hence a target variable as its operationalization is available. Then we could set up the following table (the figures denote frequencies).

Table 1: Summary table derived from a hypothetical example data set. av(i) denote three nominally scaled assignates.

outcome tv   | av1 | av2 | av3 | marginal sum
ta           | 140 | 120 | 160 | 420
tf (focused) |  30 |  10 |  40 |  80
marginal sum | 170 | 130 | 200 | 500

From these figures we can calculate the new scale values as the frequency of the focused outcome divided by the marginal sum of the respective column, i.e. osc(av(i)) = tf(i) / (ta(i) + tf(i)).

For the assignate av1 this yields 30/170 ≈ 0.176.

Table 2: Here, various encodings are contrasted.

assignate | literal encoding | frequency | normalized log(freq) | optimal scaling | normalized OSC
av1       | 1                | 170       | 0.62                 | 0.176           | 0.809
av2       | 2                | 130       | 0.0                  | 0.077           | 0.0
av3       | 3                | 200       | 1.0                  | 0.200           | 1.0

Using these values we could replace any occurrence of the original nominal (ordinal) values by the scaled values. Alternatively, or better additionally, we could sum up all values for each observation (record), thereby collapsing the nominally scaled assignates into a single numerically scaled one.
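The computation behind tables 1 and 2 can be reproduced as follows. The formula osc(i) = tf(i)/(ta(i)+tf(i)) is inferred from the published figures, so treat it as a reconstruction rather than a literal definition.

```python
# Frequencies from Table 1: ta = non-focused outcome, tf = focused outcome.
ta = {"av1": 140, "av2": 120, "av3": 160}
tf = {"av1": 30, "av2": 10, "av3": 40}

def optimal_scaling(ta, tf):
    """Scale each assignate value by the share of the focused outcome in
    its column marginal, then min-max normalize into [0..1]."""
    osc = {k: tf[k] / (ta[k] + tf[k]) for k in tf}
    lo, hi = min(osc.values()), max(osc.values())
    span = (hi - lo) or 1.0
    normalized = {k: (v - lo) / span for k, v in osc.items()}
    return osc, normalized

osc, normalized = optimal_scaling(ta, tf)
print(round(osc["av1"], 3), round(normalized["av1"], 3))
```

Running this reproduces the 0.176 / 0.809 pair shown for av1 in Table 2.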

Now we will describe the RRW. Imagine a set of observations {o(i)} where each observation is described by a set of assignates a(i). Also let us assume that some of these assignates are on the binary level, that is, the presence of the quality in the observation is encoded by “1”, its absence by “0”. This usually results in sparsely filled (regions of) the data table. Depending on the size of the “alphabet”, even more than 99.9% of all values could simply be equal to 0. Such data cannot be grouped in a reasonable manner. Additionally, if there are further assignates in the table that are not binary encoded, the information in the binary variables would be neglected almost completely without applying a rescaling like the RRW.

The raw RRW relates the share an assignate has in the focused outcome to its share in the non-focused outcome, raw RRW(i) = (tf(i)/Σtf) / (ta(i)/Σta). For the assignate av1 this yields (30/80)/(140/420) ≈ 1.13.

As you can see, the RRW uses the marginal from the rows, while the optimal scaling uses the marginal from the columns. Thus, the RRW uses slightly more information. Assuming a table made from binary assignates av(i), which could be summarized into table 1 above, the formula yields the following RRW factors for the three binary scaled assignates:

Table 3: Relative Risk Weights (RRW) for the frequency data shown in table 1.

Assignate | raw RRWi | RRWi | normalized RRW
av1       | 1.13     | 0.33 | 0.82
av2       | 0.44     | 0.16 | 0.00
av3       | 1.31     | 0.36 | 1.00

The ranking of the av(i) based on the RRW is equal to that returned by OSC, and even the normalized score values are quite similar. Yet, while in the case of nominal variables assignates are usually not collapsed, this is always done in the case of binary variables.
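The raw column of Table 3 can be reproduced from the frequencies of Table 1. The ratio of shares used here is inferred from the published figures; the intermediate RRWi column follows an additional normalization that is not spelled out in the text, so only the raw values are computed.

```python
# Frequencies from Table 1: ta sums to 420 (non-focused), tf to 80 (focused).
ta = {"av1": 140, "av2": 120, "av3": 160}
tf = {"av1": 30, "av2": 10, "av3": 40}

def raw_rrw(ta, tf):
    """Share of the focused outcome carried by an assignate, divided by its
    share of the non-focused outcome (uses the row marginals)."""
    sum_ta, sum_tf = sum(ta.values()), sum(tf.values())
    return {k: (tf[k] / sum_tf) / (ta[k] / sum_ta) for k in tf}

rrw = raw_rrw(ta, tf)
print(rrw)  # the values 1.125, 0.4375, 1.3125 round to Table 3's 1.13, 0.44, 1.31
```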

So, let us summarize these simple methods in the following table.

Table 4: Overview about some of the most important transformations for tabular data.

Transformation | Mechanism | Effect, New Value | Properties, Conditions
log-transform | analytic function | |
analytic combination | explicit analytic function (a,b)→f(a,b) | enhancing signal-to-noise ratio for the relationship between predictors and predicted; 1 new variable | targeted modeling
empiric combinational recoding | using simple clustering methods like KNN or K-means for a small number of assignates | distance from cluster centers and/or cluster centers as new variables | targeted modeling
deciling | upon evaluation of properties of the distribution | 2 new variables |
collapsing | based on extreme-value quantiles | 1 new variable, better distinction for data in frequent bins |
optimal scaling | numerical encoding and/or rescaling using marginal sums | enhancing the scaling of the assignate from nominal to numerical | targeted modeling
relative risk weight | dto. | collapsing sets of sparsely filled variables | targeted modeling

Obviously, the transformation of data is not an analytical act, on either side. On the one hand it refers to structural and hence semantic assumptions, while on the other it introduces hypotheses about those assumptions. Numbers are never ever just values, much like sentences and words do not consist just of letters. After all, the difference between the two is probably less than one could initially presume. Later we will address this aspect from the opposite direction, when it comes to the translation of textual entities into numbers.

Time Series and Contexts

Time series data are the most valuable data. They allow the reconstruction of the flow of information in the observed system, either between variables intrinsic to the measurement setup (reflecting the “system”) or between treatment and effects. In recent years, so-called “causal FFT” has gained some popularity.

Yet, modeling time series data poses the same problematics as tabular data. We do not know apriori which variables to include, or how to transform variables in order to reflect particular parts of the information in the most suitable way. Simply pressing an FFT onto the data is nothing but naive. The FFT assumes a harmonic oscillation, or a combination thereof, which certainly is not appropriate. Even if we interpret a long series of FFT terms as an approximation to an unknown function, it is by no means clear whether the then assumed stationarity26 is indeed present in the data.

Instead, it is more appropriate to represent the aspects of a time series in multiple ways. Often, there are many time series available, one for each assignate. This brings the additional problem of careful evaluation of cross-correlations and auto-correlations, and all of this under the condition that it is not known apriori whether the evolution of the system is stationary.

Fortunately, the analysis of multiple time series, even from non-stationary processes, is quite simple if we follow the approach as outlined so far. Let us assume a set of assignates {a(i)} for which their time series measurements are available, given by equidistant measurement points. A transformation then is constructed by a method m that is applied to a moving window of size md(k). All moving windows of any size are adjusted such that their endpoints meet at the measurement point at time t(m(k)). Let us call this point the prediction base point, T(p). The transformed values consist either of the residuals between the method’s values and the measured data, or of the parameters of the method fitted to the moving window. An example for the latter case is given by wavelet coefficients, which provide a quite suitable, multi-frequency perspective onto the development up to T(p). Of course, the time series data of different assignates could be related to each other by any arbitrary functional mapping.
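As a sketch, context variables can be derived from moving windows that all end at T(p); the window sizes and the fitted “method” (a simple least-squares line per window) are illustrative choices, not the actual plugin set.

```python
# Sketch: for each window size, fit a line to the window ending at the
# prediction base point T(p) and emit the fitted parameters plus the
# residual of the last point as derived context variables.

def window_features(series, t_p, sizes=(5, 10, 20)):
    feats = {}
    for k in sizes:
        if t_p + 1 < k:
            continue  # not enough history for this window size
        w = series[t_p + 1 - k:t_p + 1]
        mean_x = (k - 1) / 2.0
        mean_y = sum(w) / float(k)
        cov = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(w))
        var = sum((i - mean_x) ** 2 for i in range(k))
        slope = cov / var
        feats["mean_%d" % k] = mean_y
        feats["slope_%d" % k] = slope
        # residual of the last point against the fitted line
        feats["resid_%d" % k] = w[-1] - (mean_y + slope * (k - 1 - mean_x))
    return feats

print(window_features(list(range(30)), 29))
```

On a perfectly linear series, all slopes are 1 and all residuals vanish; on real data, the per-window means, slopes and residuals become the columns of the enlarged table described below.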

The target value for the model could be any set of future points relative to t(m(k)). The model may predict a singular point, an average some time in the future, the volatility of the future development of the time series, or even the parameters of a particular mapping function relating several assignates. In the latter case the model would predict several criteria at once.

Such transformations yield a table that contains a lot more variables than were originally available. The ratio may grow up to 1:100 in complex cases like the global financial markets. Just to be clear: if you measure, say, the index values of 5 stock markets, some commodities like gold, copper, precious metals and “electronics metals”, the money market, bonds and some fundamentals alike, that is approx. 30 basic input variables, even a superficial analysis would have to inspect 3000 variables… Yes, learning and gaining experience can take quite a bit! Learning and experience do not become cheaper merely because we use machines. Only the exploring itself is easier nowadays, no longer requiring lifetimes. The reward consists of stable models about complex issues.

Each point in time is reflected by the original observational values and a lot of variables that express the most recent history relative to the point in time represented by the respective record. Any of the synthetic records thus may be interpreted as a set of hypotheses about the future development, where each hypothesis comes as a multidimensional description of the context up to T(p). It is then the task of the evolutionarily optimized variable selection based on the SOM to select the most appropriate hypothesis. Any subgroup contained in the SOM then represents comparable sets of relations between the past relative to T(p) and the respective future as it is operationalized into the target variable.

Typical transformations in such associative time series modeling are

  • – moving average and exponentially decaying moving average for de-seasoning or de-trending;
  • – various correlational methods: cross- and auto-correlation, including the result parameters of the Bartlett test;
  • – Wavelet-, FFT-, or Walsh-transforms of different order, residuals to the denoised reconstruction;
  • – fractal coefficients like the Lyapunov coefficient or the Hausdorff dimension;
  • – ratios of simple regressions calculated over moving windows of different size;
  • – domain-specific markers (think of technical stock market analysis, or of the ECG).

Once we have expressed a collection of time series as series of contexts preceding the prediction point T(p), the further modeling procedure does not differ from the modeling of ordinary tabular data, where the observations are independent from each other. From the perspective of our transformation tool, these time series transformations are nothing else than “methods”; they do not differ from other plugin methods with respect to the procedure calls in their programming interface.

„Unstructurable“ „Data“: Images and Texts

The last type of data for which we briefly would like to discuss the issue of transformation is “unstructurable” data. Images and texts are the main representatives for this class of entities. Why are these data “unstructurable”?

Let us answer this question from the perspective of textual analysis. Here the reason is obvious; actually, there are several obvious reasons. Patrizia Violi [17], for instance, emphasizes that words create their own context, upon which they are then interpreted. Douglas Hofstadter extended the problematics to thinking at large, arguing that for any instance of analogical thinking—and any thinking he claimed as being analogical—it is impossible to define criteria that would allow one to set up a table. Here on this site we have argued repeatedly that it is not possible to define any criteria apriori that would capture the “meaning” of a text.

Furthermore, understanding language, as well as understanding texts, can't be mapped to the problematics of predicting a time series. In language there is no such thing as a prediction point T(p), and there is no positively definable “target” which could be predicted. The main reason for this is the special dynamics between context (background) and proposition (figure). It is a multi-level, multi-scale thing. It is ridiculous to apply n-grams to a text and then hope to catch anything “meaningful”. The same is true for any statistical measure.

Nevertheless, using language, that is, producing and understanding it, is based on processes that select and compose. In some way there must be some kind of modeling. We already proposed a structure, or rather an architecture, for this in a previous essay.

The basic trick consists of two moves. Firstly, texts are represented probabilistically as random contexts in an associative storage like the SOM. No variable selection takes place here; no modeling and no operationalization of a purpose is present. Secondly, this representation is then used as a basis for targeted modeling. Yet the “content” of this representation does not consist of “language” data anymore. Strikingly different, it contains data about the relative location of language concepts and their sequence, as they occur as random contexts in a text.
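To make the first move a bit more concrete, here is a deliberately naive sketch of sampling random contexts from a token stream. It is an assumption about one possible reading of "random contexts" (the function name and record shape are invented): each sample records only which tokens co-occur in a randomly placed window, that is, relative location and sequence, not "language" as such.

```python
import random

def random_contexts(tokens, width, n_samples, seed=0):
    """Sample random word-contexts from a token stream. Each sample keeps
    an anchor token and its neighbors within a randomly placed window,
    preserving relative location and sequence only."""
    rng = random.Random(seed)
    contexts = []
    for _ in range(n_samples):
        start = rng.randrange(0, max(1, len(tokens) - width + 1))
        window = tokens[start:start + width]
        contexts.append({"anchor": window[0], "neighbors": window[1:]})
    return contexts
```

Feeding many such samples into an associative storage yields the probabilistic, purpose-free representation on which targeted modeling can then operate.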

The basic task in understanding language is to accomplish the progression from a probabilistic representation to a symbolic, tabular representation. Note that any tabular representation of an observation is already on the symbolic level. In the case of language understanding precisely this is not possible: we can't define meaning, and above all not apriori. Meaning appears as a consequence of performance, of executing certain rules to a certain degree. Hence we can't provide apriori the symbols that would be necessary to set up a table for modeling, assessing “similarity” etc.

Now, instead of “probabilistic, non-structured representation” we could also say “arbitrary, unstable structure”. From this we have to derive a structured, (proto-)symbolic and hence tabular and almost stable structure. The trick to accomplish this consists of using the modeling system itself as a measurement device, and thus also as a “root” for further reference in the models that then become possible. Kohonen and colleagues demonstrated this crucial step in their WEBSOM project. Unfortunately (for them), they then actualized several misunderstandings regarding modeling. For instance, they misinterpreted the associative storage as a kind of model.

The nice thing with this architecture is that once the symbolic level has been achieved, any of the steps of our modeling approach can be applied without any change, including the automated transformation of “data” as described above.

Understanding the meaning of images follows the same scheme. The fact that there are no words renders the task more complicated and simpler at the same time. Note that so far there is no system that has learned to “see”, to recognize and to understand images, despite many titles claiming that the proposed “system” can do so. All computer vision approaches are analytic by nature, hence they are all deeply inadequate. The community is running straight into the same method hell as the statisticians and the data miners did before, mistaking transformations for methods, conflating transformation and modeling, etc. We discussed these issues at length above. Any of the approaches might be intelligently designed, but all are victimized by the representationalist fallacy, and probably even by naive realism. Due to the fact that the analytic approach is first, second and third mainstream, a probabilistic and contextual bottom-up approach is missing so far. In the same way as a word is not equal to its graphemes, a line is not defined on the symbolic level in the brain. Here again we meet the problem of analogical thinking, even on the most primitive graphical level. When is a line still a line, when is a triangle still a triangle?

In order to start in the right way, we first have to represent the physical properties of the image along different dimensions, such as textures, edges, or salient points, and all of those across different scales. Probably one can even detect salient objects by some analytic procedure. From any of the derived representations the random contexts are derived and arranged as vectors. A single image is thus represented as a table that contains random contexts derived from the image as a physical entity. From here on, the further processing scheme is the same as for texts. Note that there is no such property as a “line” in this basic mapping.

In the case of texts and images the basic transformation steps thus consist in creating the representation as random contexts. Fortunately, this is “only” a question of suitable plugins for our transformation tool. In both cases, for texts as well as images, the resulting vectors can grow considerably; several thousands of implied variables must be expected. Again, there is already a solution, known as random projection, which allows to compress even very large vectors (say 20’000+) into one of at most, say, 150 variables, without losing much of the information that is needed to retain the distinctive potential. Random projection works by multiplying a vector of size N with a matrix of uniformly distributed random values of size NxM, which results in a vector of size M. Of course, M is chosen suitably (100+). The reason why this works is that with that many dimensions, all vectors are approximately orthogonal to each other! Of course, the resulting fields in such a vector do not “represent” anything that could be conceived as a reference to an “object”. Internally, however, that is, from the perspective of a (population of) SOMs, it may well be used as an (almost) fixed “attribute”. Yet neither the missing direct reference nor the subjectivity poses a problem, as meaning is not a mental entity anyway. Q.E.D.
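The multiplication just described is simple enough to write down directly. The following is a plain sketch (the scaling choice is an assumption; practical implementations often use sparse or Gaussian matrices instead of the uniform one named in the text): a length-N vector times a random NxM matrix yields a length-M vector, and pairwise distances survive approximately because random high-dimensional directions are nearly orthogonal.

```python
import random, math

def random_projection(vec, m, seed=42):
    """Compress a length-N vector into length M via a random N x M matrix
    of uniformly distributed values. The 1/sqrt(N) factor keeps the norms
    of projected vectors on a comparable scale."""
    rng = random.Random(seed)
    n = len(vec)
    matrix = [[rng.uniform(-1.0, 1.0) for _ in range(m)] for _ in range(n)]
    return [sum(vec[i] * matrix[i][j] for i in range(n)) / math.sqrt(n)
            for j in range(m)]
```

Keeping the seed fixed is essential here: all vectors of a collection must be projected through the same random matrix, otherwise the compressed "attributes" would not be comparable across records.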

Conclusion

In this essay we discussed several aspects related to the transformation of data as an epistemic activity. We emphasized that an appropriate attitude towards the transformation of data requires a shift in perspective and the choice of another vantage point. One of the more significant changes in attitude concerns, perhaps, the dropping of the positivist approach as one of the main pillars of traditional modeling. Remember that statistics is such a positivist approach. In our perspective, statistical methods are just transformations, nothing less, but above all also nothing more, characterized by a specific set of rather strong assumptions and conditions for their applicability.

We also provided some important practical examples for the transformation of data, whether tabular data derived from independent observations, time series data, or “unstructurable” “data” like texts and images. Following the proposed approach, we also described a prototypical architecture for a transformation tool that could be used universally. In particular, it allows a complete automation of the modeling task, as it could be used, for instance, in the field of so-called data mining. The possibility of automated modeling is, of course, a fundamental requirement for any machine-based episteme.

Notes

1. The only reason why we do not refer to cultures and philosophies outside Europe is that we do not know sufficient details about them. Yet, I am pretty sure that taking into account Chinese or Indian philosophy would render the situation even more severe.

2. It was Friedrich Schleiermacher who first observed that even the text becomes alien and at least partially autonomous to its author due to the necessity and inevitability of interpretation. Thereby he founded hermeneutics.

3. In German language these words all exhibit a multiple meaning.

4. In the last 10 years (roughly) it became clear that the gene-centered paradigms are not only insufficient [2], they are even seriously defective. Evelyn Fox Keller draws a detailed trace of this weird paradigm [3].

5. Michel Foucault [4]

6. The „axiom of choice“ is one of the founding axioms in mathematics. Its importance can’t be overestimated. Basically, it assumes that “something is choosable”. The notion of “something choosable” is then used to construct countability as a derived domain. This implies three consequences. First, it avoids assuming countability, that is, the effect of a preceding symbolification, as a basis for set theory. Secondly, it puts performance in the first place. These two implications render the “Axiom of Choice” into a doubly-articulated rule, offering two docking sites, one for mathematics and one for philosophy. In some way, it thus cannot count as an “axiom” at all. Those implications are, for instance, fully compatible with Wittgenstein’s philosophy. For these reasons, Zermelo’s “axiom” may even serve as a shared point (of departure) for a theory of machine-based episteme. Finally, the third implication is that through the performance of the selection, the relation, notably a somewhat empty relation, is conceived as a predecessor of countability and the symbolic level. Interestingly, this also relates to Quantum Darwinism and String Theory.

7. David Grahame Shane’s theory on cities and urban entities [5] is probably the only theory in urbanism that is truly a relational theory.  Additionally, his work is full of relational techniques and concepts, such as the “heterotopy” (a term coined by Foucault).

8. Bruno Latour developed the Actor-Network-Theory [6,7], while Clarke evolved “Grounded Theory” into the concept of “Situational Analysis” [8]. Latour, as well as Clarke, emphasize and focus the relation as a significant entity.

9. behavioral coating, and behavioral surfaces ;

10. See Information & Causality about the relation between measurement, information and causality.

11. „Passivist“ refers to the inadequate form of realism according to which things exist as-such, independently from interpretation. Of course, interpretation does not affect the material dimension of a thing. Yet it changes its relations, insofar as the relations of a thing, the Wittgensteinian “facts”, are visible and effective only if we actively assign significance to them. The “passivist” stance conceives itself as a re-construction instead of a construction (cf. Searle [9]).

12. In [10] we developed an image theory in the context of the discussion about the mediality of facades of buildings.

13. nonsense of „non-supervised clustering“

14. In his otherwise quite readable book [11], though it may serve only as an introduction.

15. This can be accomplished by using a data segment for which the implied risk equals 0 (positive predictive value = 1). We described this issue in the preceding chapter.

16. hint to particle physics…

17. See our previous essay about the complementarity of the concepts of causality and information.

18. For an introduction to renormalization (in physics) see [12], and, a bit more technically, [13].

19. see the Wiki entry about so-called gravitational lenses.

20. Catastrophe theory is a concept invented and developed by the French mathematician René Thom as a field of differential topology. cf. [14]

21. In their book, Witten & Frank [15] recognized the importance of transformation and included a dedicated chapter about it. They also explicitly mention the creation of synthetic variables. Yet they also explicitly retreat from it as a practical means, for reasons of computational complexity (here: the time needed to perform a calculation in relation to the amount of data). After all, their attitude towards transformation is somewhat that towards an unavoidable evil. They do not recognize its full potential. As a cure for the selection problem they propose SVMs and their hyperplanes, which is definitely a poor recommendation.

22. Dorian Pyle [11]

23. see Benoit Mandelbrot [16].

24. By using almost meaningless labels, target-oriented modeling is often called supervised modeling, as opposed to “non-supervised modeling”, where no target variable is being used. Yet such “modeling” does not yield a model, since the pragmatics of the concept of “model” invariably requires a purpose.

25. About assignates: often called property, or feature… see about modeling

26. Stationarity is a concept in empirical system analysis or description which denotes the expectation that the internal setup of the observed process will not change across time within the observed period. If a process is rated as “stationary” upon a dedicated test, one could select one particular method or model to reflect the data. Of course, we again meet the chicken-and-egg problem: we can decide about stationarity only by means of a completed model, that is, after the analysis. As a consequence, we should not use linear methods, or methods that depend on independence, for checking stationarity before applying the “actual” method; such a procedure cannot count as a methodology at all. The modeling approach should instead be stable against non-stationarity. Yet the problem of the reliability of the available data sample remains, of course. As a means to “robustify” the resulting model against the unknown future one can apply surrogating. Ultimately, however, the only cure is a circular, or recurrent, methodology that incorporates learning and adaptation as a structure, not as a result.

References
  • [1] Robert Rosen, Life Itself: A Comprehensive Inquiry into the Nature, Origin, and Fabrication of Life. Columbia University Press, New York 1991.
  • [2] Nature Insight: Epigenetics, Supplement Vol. 447 (2007), No. 7143 pp 396-440.
  • [3] Evelyn Fox Keller, The Century of the Gene. Harvard University Press, Boston 2002. see also: E. Fox Keller, “Is There an Organism in This Text?”, in P. R. Sloam (ed.), Controlling Our Destinies. Historical, Philosophical, Ethical, and Theological Perspectives on the Human Genome Project, Notre Dame (Indiana), University of Notre Dame Press, 2000, pp. 288-289
  • [4] Michel Foucault, The Archaeology of Knowledge. 1969.
  • [5] David Grahame Shane. Recombinant Urbanism: Conceptual Modeling in Architecture, Urban Design and City Theory
  • [6] Bruno Latour. Reassembling The Social. Oxford University Press, Oxford 2005.
  • [7] Bruno Latour (1996). On Actor-network Theory. A few Clarifications. in: Soziale Welt 47, Heft 4, p.369-382.
  • [8] Adele E. Clarke, Situational Analysis: Grounded Theory after the Postmodern Turn. Sage, Thousand Oaks, CA 2005.
  • [9] John R. Searle, The Construction of Social Reality. Free Press, New York 1995.
  • [10] Klaus Wassermann & Vera Bühlmann, Streaming Spaces – A short expedition into the space of media-active façades. in: Christoph Kronhagel (ed.), Mediatecture, Springer, Wien 2010. pp.334-345. available here
  • [11] Dorian Pyle, Data Preparation for Data Mining. Morgan Kaufmann, San Francisco 1999.
  • [12] John Baez (2009). Renormalization Made Easy. Webpage
  • [13] Bertrand Delamotte (2004). A hint of renormalization. Am.J.Phys. 72: 170-184. available online.
  • [14] Tim Poston & Ian Stewart, Catastrophe Theory and Its Applications. Dover Publ. 1997.
  • [15] Ian H. Witten & Eibe Frank, Data Mining. Practical Machine Learning Tools and Techniques (2nd ed.). Elsevier, Oxford 2005.
  • [16] Benoit Mandelbrot & Richard L. Hudson, The (Mis)behavior of Markets. Basic Books, New York 2004.
  • [17] Patrizia Violi (2000). Prototypicality, typicality, and context. in: Liliana Albertazzi (ed.), Meaning and Cognition – A multidisciplinary approach. Benjamins Publ., Amsterdam 2000. p.103-122.

۞

Analogical Thinking, revisited.

March 19, 2012 § Leave a comment

What is the New York of California?

(I/II)

Or even, what is the New York of New York? Almost everybody will come up with the same answer, despite the fact that the question is ill-defined. Both the question and its answer can be described only after the final appearance of the answer. In other words, it is not possible to provide any proposal, apriori to its completion, about the relevance of those properties that aposteriori are easily tagged as relevant for the description of both the question and the answer. Both the question and the solution do not “exist” in the way that is pretended by their form before we have finished making sense of them. There is a wealth of philosophical issues around this phenomenon, which we all have to bypass here. Here we will focus just on the possibility of mechanisms that could be invoked in order to build a model that is capable of behaving phenomeno-logically “as if”.

The credit for rendering such questions and the associated problematics salient in the area of computer models of thinking belongs to Douglas Hofstadter and his “Fluid Analogies Research Group” (FARG). In his book “Fluid Concepts and Creative Analogies”, which we already mentioned here, he proposes a particular model of which he claims that it is a proper model of analogical thinking. In constructing this model, which took more than 10 years of research, he did not try to stick (or to get stuck?) to the neuronal level. Accordingly, one can’t describe the performance of a tennis player at the molecular level, he says. Remarkably, he also keeps the so-called cognitive sciences and their laboratory wisdom at a distance. Instead, his starting point is everyday language, and presumably a good deal of introspection as well. He sees his model located at an intermediate level between the neurons and consciousness (quite a large field, though).

His overarching claim is as simple as it is distant from the mainstream of AI and cognitive science. (Note that Hofstadter does not formulate “analogical reasoning.”)

Thinking is largely equivalent with making analogies.

Hofstadter is not interested in producing just another model of analogy making. There are indeed quite a lot of such models, which he discusses in great detail. And he refutes them all; he shows that they are all ill-posed, since none of them starts with perception. Without exception they all assume that the “knowledge” is already in the computer, and based on this assumption some computer program is established. Of course, such approaches are nonsense, euphemistically labeled as the “knowledge acquisition bottleneck” by people working in the field of AI / machine learning. Yet knowledge is nothing that could be externalized and then acquired subsequently by some other party; it can’t be found “in” the world, and of course it can’t be separated as something that “exists” beside the processing mechanisms of the brain, making the whole thing “smart”. As already mentioned, such ideas are utter nonsense.

Hofstadter’s basic strategy is different. He proposes to create a software system that is capable of “concept slipping” as an emergent phenomenon, deeply based on perceptional mechanisms. He even coined the term “high-level perception.”

That is, the […] project is not about simulating analogy-making per se, but about simulating the very crux of human cognition: fluid concepts. (p.208)

This essay will investigate his model. We will find that despite its appeal it is nevertheless seriously unrealistic, even by Hofstadter’s own standards. Yet, despite its particular weaknesses, it also demonstrates very interesting mechanisms. After extracting the cornerstones of his model we will try to map his insights to the world of self-organizing maps, and we will discuss how to transfer the interesting parts of Hofstadter’s model. Hofstadter himself clearly stated the deficiencies of “connectionist models” of “learning”; yet my impression is that he was not aware of self-organizing maps at that time. By “connectionism” he obviously referred to artificial neural networks (ANN), and for those we completely agree with his critique.

Before we start I would like to provide some original sources, that is, copies of those parts that are most relevant for this essay. These parts are from chapter 5, chapter 7 and chapter 8 of the aforementioned book. There you will find many more details and lucid examples in Hofstadter’s own words.

Is there an Alternative to Analogies?

In order to find an alternative we have to take a small bird’s-eye view. Very coarsely spoken, thinking transforms some input into some output while being affected and transforming itself. In some sense, any transformation of input to output transforms the transforming instance, though to vastly different degrees. A trivial machine just wears out; a trivial computer—that is, any digital machine that fits into the scheme of Turing-computing1—can be reset to meet exactly a previous state. As soon as historical contingency is involved, reproducibility vanishes and strictly non-technical entities appear: memory, value, and semantics (among others).

This transformation game applies to analogy making, and it also applies to traditional modeling. Is it possible to apply any kind of modeling to the problematics that is represented by the “transfer game”, of which those little questions posed in the beginning are just an example?

In this context, Hofstadter calls the modeling approach the brute-force approach (p.327, chp.8). The outline of the modeling approach could look like this (p.337).

  • Step 1: Run down the apriori list of city-characterization criteria and characterize the “source town” A according to each of them.
  • Step 2: Retrieve an apriori list of “target towns” inside target region Y from the data base.
  • Step 3: For each retrieved target town X, run down the a priori list of city-characterization criteria again, calculating X’s numerical degree of match with A for every criterion in the list.
  • Step 4: For each target town X, sum up the points generated in Step 3, possibly using apriori weights, thus allowing some criteria to be counted more heavily than others.
  • Step 5: Locate the target town with the highest overall rating as calculated in Step 4, and propose it as “the A of Y”.
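The five steps above can be written down as a deliberately naive sketch. The criteria, towns, and scoring rule below are invented placeholders; the point is only the mechanics, whose inadequacy Hofstadter's four difficulties then expose.

```python
def brute_force_analogy(source, targets, criteria, weights=None):
    """Steps 1-5 of the brute-force approach: score each target town
    against the source along fixed apriori criteria and return the
    best-matching one as 'the A of Y'."""
    weights = weights or {c: 1.0 for c in criteria}

    def match(a, b, c):
        # Step 3: numerical degree of match on one criterion, in [0, 1]
        return 1.0 - abs(a[c] - b[c]) / max(a[c], b[c], 1e-9)

    # Step 4: weighted sum over all criteria, for each retrieved target
    scores = {name: sum(weights[c] * match(source, t, c) for c in criteria)
              for name, t in targets.items()}
    # Step 5: the target with the highest overall rating
    return max(scores, key=scores.get)
```

Note how the code makes the hidden assumptions explicit: the criteria list, the match function, and the set of candidate towns all have to exist apriori, which is exactly what the transfer game denies us.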

Any plausible apriori list of city-characterization criteria would be long, very long indeed. Effectively, it can’t be limited in advance, since any imposed limit would represent a model that would claim to be better suited to decide about the criteria than the model being built. We run into an infinite regress, and not just in theory. What we experience here is Wittgenstein’s famous verdict that justifications have to come to an end. Rules are embedded in the form of life (“Lebensform”), and without knowing everything about a particular Lebensform, and taking into consideration everything comprised by such (impossible) knowledge, we can’t start to model at all.

He identifies four characteristic difficulties for the modeling approach with regard to his little “transfer game” that plays around with cities.

  • – Difficulty 1: It is psychologically unrealistic to explicitly consider all the towns one knows in a given region in order to come up with a reasonable answer.
  • – Difficulty 2: Comparison of a target town and a source town according to a specific city-characterization criterion is not a hard-edged mechanical task, but rather, can itself constitute an analogy problem as complex as the original top-level puzzle.
  • – Difficulty 3: There will always be source towns A whose “essence”—that is, set of most salient characteristics—is not captured by a given fixed list of city-characterization criteria.
  • – Difficulty 4: What constitutes a “town in region Y” is not apriori evident.

Hofstadter underpins his point with the following question (p.347).

What possible set of apriori criteria would allow a computer to reply, perfectly self-confidently, that the country of Monaco is “the Atlantic City of France”?

Of course, the “computer” should come up with the answer in a way that is not pre-programmed explicitly.

Obviously, the problematics of making analogies can’t be solved algorithmically. Not only is there no such thing as a single “solution”; even the criteria to describe the problem are missing. Thus we can conclude that modeling, even in its non-algorithmic form, is not a viable alternative to analogy making.

The FARG Model

In the following, we investigate the model as proposed by Hofstadter and his group, mainly Melanie Mitchell. The discussion is separated into the following parts:

  • – precis of the model,
  • – its elements,
  • – its extension as proposed by Hofstadter,
  • – the main problems of the model, and finally,
  • – the main superior aspects of the model as compared to connectionist models (from Hofstadter’s perspective, of course).
Precis of the Model

Hofstadter’s conclusion from the problems with the model-based approach, and thus also the starting point for his endeavor, is that the making of an analogy must appear as an emergent phenomenon. Analogy itself can’t be “defined” in terms of criteria, beyond sort of rather opaque statements about “similarity.” The point is that this similarity could be measured only aposteriori, so this concept does not help. The capability for making analogies can’t be programmed explicitly. It would not be the “making” of analogies anymore; it would just be a look-up of dead graphemes (not even symbols!) in a database.

He demonstrates his ideas by means of a small software program called “Copycat”. This name derives from the internal processes of the software, as making “almost identical copies” is an important ingredient of it. Yet it also refers to the problem that appears if you say: “I am doing this, now do the same thing…”

Copycat has three major parts, which he labels as (i) the Slipnet, (ii) the Workspace, (iii) the Coderack.

The Coderack is a rack that serves as a launching site for a population of agents of various kinds. Agents decease and are created in various ways. They may be spawned by other agents, by the Coderack, or by any of the items in the Slipnet—as a top-down specialist bred just to engage in situations represented by the Slipnet item. Any freshly created agent is first put into the Coderack, regardless of its originator or kind.

Any particular agent behaves as a specialist for recognizing a particular situation or for establishing a particular relation between parts of the input “data”, the initial observation. This recognition requires a model apriori, of course. Since these models are rather abstract as compared to the observational data, Hofstadter calls them “concepts.” After their setup, agents are put into the Coderack, from where they start in random order, but also dependent on their “inner state,” which Hofstadter calls “pressure.”

The Slipnet is a loose “network” of deep and/or abstract concepts. In case of Copycat these concepts comprise

a, b, c, … , z, letter, successor, predecessor, alphabetic-first, alphabetic-last, alphabetic position, left, right, direction, leftmost, rightmost, middle, string position, group, sameness group, successor group, predecessor group, group length, 1, 2, 3, sameness, and opposite.

In total there are more than 60 of such concepts. These items are linked together, while the length of the link reflects the “distance” between concepts. This distance changes while Copycat is working on a particular task. The change is induced by the agents in response to their “success.” The Slipnet is not really a “network,” since it is neither a logistic network (it doesn’t transport anything) nor is it an associative network like a SOM. It is also not suitable to conceive it as a kind of filter in the sense of a spider’s web, or a fisherman’s net. It is thus more appropriate to consider it simply as a non-directed, dynamic graph, where discrete items are linked.

Finally, the third aspect is the Workspace. Hofstadter describes it as a “busy construction site” and likens it to the cytoplasm (p.216). In the Workspace, the agents establish bonds between the atomic items of the observation. As said, each agent knows nothing about the posed problem; it is just capable of performing on a mini-aspect of the task. The whole population of agents, however, builds something larger. It looks much like the activity of ants or termites, building some morphological structure in the hive, or producing a macroscopic dynamic effect as a hive population. The Workspace is the location of such intermediate structures of various degrees of stability, meaning that some agents also work to remove a particular structure.

So far we have described the morphology. The particular dynamics unfolding on this morphology is settled between competition and cooperation, with the result of a collective calming down of the activities. The decrease in activity is itself an emergent consequence of the many parallel processes inside Copycat.
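The core of this dynamics, urgency-weighted stochastic selection of agents from the Coderack, can be sketched in a few lines. This is my own minimal illustration, not FARG code; the agent names and the fixed urgencies are invented, and in the real Copycat the urgencies themselves change as structures are built and torn down.

```python
import random

def run_rack(codelets, steps, seed=7):
    """Minimal Coderack-like loop: agents (name, urgency) are picked
    stochastically with probability proportional to their urgency, so
    high-pressure specialists run more often, but never exclusively."""
    rng = random.Random(seed)
    fired = []
    for _ in range(steps):
        total = sum(u for _, u in codelets)
        r = rng.uniform(0, total)
        acc = 0.0
        for name, u in codelets:
            acc += u
            if r <= acc:
                fired.append(name)
                break
    return fired
```

Biasing rather than determining the order of execution is precisely what keeps the search parallel and terraced instead of exhaustive.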

A single run of Copycat yields one instance of the result. Yet a single answer is not the result itself. Rather, as different runs of Copycat yield different singular answers, the result consists of a probability density over the different singular answers. For the letter domain in which Copycat is working, the result looks like this:

Figure 1: Probability densities as result of a Copycat run.

The Elements of the FARG Model

Before we proceed, I should emphasize that here “element” is used as we have introduced the term here.

Returning to the FARG model, it is important to understand that a particularly constrained randomness plays a crucial role in its setup. The population of agents does not search through all possibilities all the time. Rather, any existing intermediate result, say a structural hypothesis, serves as a constraint for the future search.

We also find different kinds of memories with different durations; we find dynamic historic constraints, which we could also call contingencies. We have a population of different kinds of agents that cooperate and compete. In some almost obvious way, Copycat’s mechanisms may be conceived as an instance of the generalized evolution that we proposed earlier. Hofstadter himself was not aware that he had just proposed a mechanism for generalized evolutionary change. He calls the process “parallel terraced scan”, thereby unnecessarily sticking to a functional perspective. Yet we consider generalized evolution as one of the elements of Copycat. It could really be promising to develop Copycat as an alternative to so-called genetic algorithms.2

Despite a certain resemblance to natural evolution, the mechanisms built into Copycat do not comprise an equivalent to what is known in biology as “gene doubling”. Gene doubling and the akin mechanism of gene deletion are probably the most important mechanisms in natural evolution. Copycat produces different kinds of agents, but the informational setup of these agents does not change, as it is given by the Slipnet. The equivalent of gene doubling would have to be implemented in the Slipnet. On the other hand, however, it is clear that the items in the Slipnet are too concrete, almost representational. In contrast, genes usually do not represent a particular function on the macro-level (which is one of the main structural faults of so-called genetic algorithms). So we conclude that Copycat contains a restricted version of generalized evolution. Besides, we see a structural resemblance to Edelman’s theory of neuronal Darwinism, which actually is a nice insight.

Conceiving large parts of the mechanism of Copycat as (restricted) generalized evolution covers both the Coderack as well as the Workspace, but not the Slipnet.

The Slipnet acts as a sort of “Platonic Heaven” (Hofstadter’s term). It contains various kinds of abstract terms, where “abstract” simply means “not directly observable.” It is hence not comparable to those abstractions that can be used to build tree-like hierarchies. Think of the series “fluffy”-dog-mammal-animal-living entity. Significantly, the abstract terms in Copycat’s Slipnet also comprise concepts about relations, such as “right,” “direction,” “group,” or “leftmost.” Relations, however, are nothing else than even more abstract symmetries, that is, transformational models that may even form a mathematical group. Quite naturally, we could consider the items in the Slipnet as a mathematical category (of categories). Again, Hofstadter and Mitchell do not refer in any way to such structures, quite unfortunately so.

The Slipnet’s items may well be conceived as instances of symmetry relations. Hofstadter treats them as idealizations of positional relations. Each of these items acts as a structural property. This is a huge advance as compared to other models of analogy.

To summarize, we find two main elements in Copycat.

  • (1) restricted generalized evolution, and
  • (2) concrete instances of positional idealization.

Actually, these elements are top-level elements that must be conceived as compounds. In part 2 we will examine the elements of the Slipnet in detail, while the evolutionary aspects have already been discussed in a previous chapter. Yet, this level of abstraction is necessary to render Copycat’s principles conceptually more mobile. In some way, we have to apply the principles of Copycat to the attempt to understand it.

The Copycat, released to the wild

Any generalization of Copycat has to relax the implicit constraints of its elements. In more detail, this would include the following changes:

  • (1) The representation of the items in the Slipnet could be changed into compounds, and these compounds should be expressed as “gene-like” entities.
  • (2) Introducing a mechanism to extend the Slipnet. This could be achieved through gene doubling in response to external pressures; yet, these pressures are not to be conceived as “external” to the whole system, just external to the Copycat. The pressures could be issued by a SOM. Alternatively, a SOM environment might also deliver the idealizations themselves. In either case, the resulting behavior of the Copycat has to be shaped by selection, either through internal mechanisms, or through environmentally induced forces (changes in the fitness landscape).
  • (3) The focus on positional idealization would have to be removed by introducing the more abstract notion of “symmetries”, i.e. mathematical groups or categories. This would turn positional idealization into just one possible instance of potential idealization.
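To make changes (1) and (2) a bit more concrete, one could imagine something like the following sketch. Everything here is hypothetical (the item layout, the “genes”, the mutation range are our own illustrative assumptions); it merely shows what a “gene doubling” on compound, gene-like Slipnet items might look like, before any selection acts on the duplicates.

```python
import copy
import random

# Hypothetical sketch: a Slipnet item as a compound of "gene-like" parts
# rather than a single hard-coded concept. "Doubling" copies an item and
# slightly perturbs the copy, so that selection can later differentiate
# the two lineages.

def double_item(slipnet, name, rng=random):
    """Duplicate a gene-like Slipnet item and mutate the copy's weights."""
    parent = slipnet[name]
    child = copy.deepcopy(parent)
    for gene in child["genes"]:
        gene["weight"] *= rng.uniform(0.8, 1.2)   # small perturbation
    slipnet[name + "'"] = child                   # register the duplicate
    return child

slipnet = {
    "successor": {"depth": 50,
                  "genes": [{"op": "shift", "weight": 1.0},
                            {"op": "order", "weight": 0.5}]},
}
```

Selection pressure, e.g. issued by a surrounding SOM environment, would then decide which of the two items (or both, now specialized) survives.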

The improvement resulting from these changes would be dramatic. Not only would it be much easier to establish a Slipnet for any kind of domain, it would also allow the system (a CopyTiger?) to evolve new traits and capabilities, and to parametrize them autonomously. But these changes also require a change in the architectural (and mental) setup.

From Copycat to Metacat

Hofstadter himself tried to describe possible improvements of Copycat. A significant part of these suggestions for improvement is represented by the capability for self-monitoring and proliferating abstraction, hence he calls it “Metacat”.

The list of improvements comprises mainly the following five points (p. 315, chp. 7).

  • (1) Self-monitoring of pressures, actions, and crucial changes as an explicit registering into parts of the Workspace.
  • (2) Disassembling of a given solution into the path of required actions.
  • (3) Hofstadter writes that “Metacat should store a trace of its solution of a problem in an episodic memory.”
  • (4) A clear “meta-analogical” sense as an ability to see analogies between analogies, that is a multi-leveled type of self-reflectiveness.
  • (5) The ability to create and to enjoy the creation of new puzzles. In this context he writes: “Indeed, I feel that responsiveness to beauty and its close cousin, simplicity, plays a central role in high-level cognition.”

I am not really convinced by these suggestions, at least not if they were implemented in the way that Hofstadter suggests “between the lines”. They look much more like a dream than a reasonable list of improvements, except perhaps the first one. The topic of self-monitoring has been explored by James Marshall in his dissertation [1], but even his version of “Metacat” was not able to learn. This self-monitoring should not be conceived as a kind of Cartesian theater [2], perhaps even populated with homunculi on both sides of the stage.

The second point is completely incompatible with the architecture of Copycat, and notably Hofstadter does not provide even the tiniest comment on this. The third point violates the concept of “memory” as a re-constructive device. Hofstadter himself says elsewhere, while discussing alternative models of analogy, that the brain is not a database, which is quite correct. “Memory” is not a storage device. Yet the consequence is that analogy making can’t be separated from memory itself (and vice versa).

The fourth suggestion, then, would require further platonic heavens, in case of Copycat/Metacat created by a programmer. This is highly implausible, and since it is a consequence of the architecture, the architecture of Copycat as such is not suitable to address real-world entities.

Finally, the fifth suggestion displays a certain naivety regarding evolutionary contexts, philosophical aspects of reasoning that have been known since Immanuel Kant, and the particular setup of human cognition, where emotions and propositional reasoning appear as deeply entangled issues.

The main Problem(s) of the FARG model

We already mentioned Copycat’s main problems, which are (i) the “Platonic heaven”, and (ii) the lack of the capability to learn as a kind of structural self-transformation.

Both problems are closely related. Actually, somehow there is only one single problem, and that is that Hofstadter got trapped by idealism. A Platonic heaven that is filled by the designer with an x-cat (or a Copy-x) is hard to comprehend. Even for the really small letter domain there are more than 60 such idealistic, top-down, externally imposed concepts. These concepts have to be linked and balanced in just the right way, otherwise the Copycat will not behave in any interesting way. Furthermore, the Slipnet is a structurally static entity. There are some parameters that change during its activity, but Copycat does not add new items to its Slipnet.

For these reasons it remains completely opaque how Mitchell and Hofstadter arrived at that particular instance of the Slipnet for the letter domain, and thus it also remains completely unclear how the “computer” itself could build or achieve something like a Slipnet. Although Linhares [3] was able to implement an analogous FARG model for the domain of chess3, his model suffers from the static Slipnet in the same way: it is extremely tedious to set up a Slipnet. Furthermore, the validation is even more laborious, if not impossible, due to the very nature of making analogies and the idealistic Slipnet.

The result is, well, a model that cannot serve as a template for any kind of application that is designed to be able to adapt and to learn, at least if we take it without abstracting from it.

From an architectural point of view the Slipnet is simply not compatible with the rest of Copycat, which is strongly based on randomness and probabilistic processes in populations. The architecture of the Slipnet and the way it is used do not offer anything like a probabilistic pathway into it. But why should the “Slipnet” not be a probabilistic process, too?

Superior Aspects of the FARG model

Hofstadter clearly and correctly separates his project from connectionism (p.308):

Connectionist (neural-net) models are doing very interesting things these days, but they are not addressing questions at nearly as high a level of cognition as Copycat is, and it is my belief that ultimately, people will recognize that the neural level of description is a bit too low to capture the mechanisms of creative, fluid thinking. Trying to use connectionist language to describe creative thought strikes me as a bit like trying to describe the skill of a great tennis player in terms of molecular biology, which would be absurd.

A cornerstone in Hofstadter’s arguments and concepts around Copycat is conceptual slippage. This occurs in the Slipnet and is represented as a sudden change in the weights of the items such that the most active (or influential) “neighborhood” also changes. To describe these neighborhoods, he invokes the concept of a halo. The “halo” is a more or less circular region around one of the abstract items in the Slipnet, yet without a clear boundary. Items in the Slipnet change their relative position all the time, thus their co-excitation also changes dynamically.
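The halo and the resulting slippability can be illustrated by a toy model. The Gaussian fall-off and all numbers below are our own assumptions, not Hofstadter’s implementation; the point is only that co-excitation decays smoothly with conceptual distance (no sharp boundary), so that a change in the distances can let a neighboring concept suddenly take over, which is a slippage.

```python
import math

# Toy sketch of a "halo": the excitation an item passes to a neighbor falls
# off smoothly with conceptual distance. When link distances shrink or grow,
# the dominant item of a neighborhood may change abruptly: a slippage.

def halo_activation(source_activation, distance, radius=1.0):
    """Activation received from an item at a given conceptual distance."""
    return source_activation * math.exp(-(distance / radius) ** 2)

def dominant(items, distances, radius=1.0):
    """Return the item whose halo-weighted activation is highest.
    items: name -> raw activation; distances: name -> distance to focus."""
    return max(items, key=lambda k: halo_activation(items[k], distances[k], radius))
```

With equal distances the raw activation decides; once an item drifts away, even a weaker neighbor can dominate the focus.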

Hofstadter lists (p.215) the following missing issues in connectionist network (CN) models with regard to cognition, particularly with regard to concept slippage and fluid analogies.

  • – CN don’t develop a halo around the representatives of concepts in case of localist networks, i.e. node oriented networks and thus no slippability emerges;
  • – CN don’t develop a core region for a halo in case of networks where a “concept” is distributed throughout the network, and thus no slippability emerges;
  • – CN have no notion of normality due to learning that is instantiated in any encounter with data.

This critique appears to be both a bit overdone and misdirected. As we have seen above, Copycat can be interpreted as comprising a slightly restricted case of generalized evolution. Standard neuronal techniques know nothing of evolutionary techniques; there are no “coopetitioning” agents, and there is no separation into different memories of different durations. The abstraction achieved by artificial neuronal networks (ANN) or even by standard SOMs is always exhausted by the transition from extensional (observed items) to intensional description (classes, types). The abstract items in the Slipnet are not just intensional descriptions and could not be found or constructed by an ANN or a SOM that works just on the observation, especially if there is just a single observation at all!

Copycat is definitely working in a different space than network-based models.4 While the latter can provide the mechanisms to proceed from extensions to intensions in a “bottom-up” movement, the former applies those intensions in a “top-down” manner. Saying this, we may invoke the reference to the higher forms of comparison and the Deleuzean differential. Like many other things mentioned here, this would deserve a closer look from a philosophical perspective, which however we can’t provide here and now.

Nevertheless, Hofstadter’s critique of connectionist models seems to be closely related to the abandonment of modeling as a model for analogy making. Any of the three points above can be mitigated if we take a particular collection of SOMs as a counterpart to Copycat. In the next section (which will be found in part II of this essay) we will see how the two approaches can inform each other.

Notes

1. We would like to point you to our discussion of non-Turing computation, and also make you aware of this conference: 11th International Conference on Unconventional Computation & Natural Computation 2012, University of Orléans, conference website.

2. Interestingly, Hofstadter’s PhD student, co-worker and co-author Melanie Mitchell started to publish in the field of genetic algorithms (GA); yet she never noted the kinship between GA and Copycat, at least she never said anything like this publicly.

3. He calls his model implementation “Capyblanca”; it is available through Google Code.

4. The example provided by Blank [4], where he tried to implement analogy making in a simple ANN, is seriously deficient in many respects.

  • [1] James B. Marshall, Metacat: A Self-Watching Cognitive Architecture for Analogy-Making and High-Level Perception. PhD Thesis, Indiana University 1999. available online (last access 18/3/2012)
  • [2] Daniel Dennett, Consciousness Explained. 1992. p.107.
  • [3] Alexandre Linhares (2008). The emergence of choice: Decision-making and strategic thinking through analogies. available online.
  • [4] Douglas S. Blank, Implicit Analogy-Making: A Connectionist Exploration.
    Indiana University Computer Science Department. available online.

۞

Data

February 28, 2012 § Leave a comment

There are good reasons to think that data appear

as the result of friendly encounters with the world.

Originally, “data” has been conceived as the “given”, or as things that are given, if we follow the etymological traces. That is not quite surprising, since it is closely related to the concept of the date as a point in time. And what, if not time, could be something that is given? The concept of the date is, on the other hand, related to computation, at least if we consider etymology again. Towards the end of the Middle Ages, the problems around the calculation of the next Easter date triggered the first institutionalized recordings of rule-based approaches that were called “computation.” Even then, it was already a subject for specialists…

Yet, the cloud of issues around data also involves things. But “things” are nothing that is invariably given, so to speak as a part of an independent nature. In Nordic languages there is a highly interesting link to constructivism. Things originally denoted an early kind of parliament. The Icelandic “alþingi”, or transposed “Althingi”, is the oldest parliamentary institution in the world still extant, founded in 930. If we take this thread further, it is clear that things refer to entities that have been recognized by the community as a subject for standardization. That’s the job of parliaments or councils. Said standardization comprises the name, rules for recognizing it, and rules for using or applying it, or simply, how to refer to it, e.g. as part of a semiosic process. That is, some kind of legislation, or norming, if not to say normalization. (That’s not a bad thing in itself, unless a society is too eager in doing so; standardization is a highly relevant condition for developing higher complexity, see here.) And, back to the date, we fortunately also know about a quite related usage of the “date”, as in “dating” or making a date, in other words, fixing the (mostly friendly) issues with another person…

The wisdom of language, as Michel Serres once coined it (somewhere in his Hermes series, I suppose), knew everything, it seems. Things are not, because they remain completely beyond any possibility to perceive them if there is no standard to treat the differential signals they provide. This “treatment” we usually call interpretation.

What we can observe here in the etymological career of “data” is nothing else than a certain relativization, a de-centering of the concept away from the absolute centers of nature, or likewise the divine. We observe nothing else than the evolution of a language game into its reflected use.

This now is just another way to abolish ontology and its existential attitude, at least as far as it claims an “independent” existence. In order to become clear about the concept of data, what we can do about it, or even how to use data, we have to arrive at a proper level of abstraction, which in itself is not a difficult thing to understand.

This, however, also means that “data processing” can’t be conceived in the way we conceive, for instance, the milling of grain. Data processing should be taken much more as a “data thinging” than as a data milling, or data mining. There is deep relativity in the concept of data, because it is always an interpretation that creates them. It is nonsense to naturalize them in the infamous equation “information = data + meaning”; we already discussed that in the chapter about information. Yet, this process probably did not reach its full completion, especially not in the discipline of so-called computer “sciences”. Well, every science started as some kind of hermetism or craftsmanship…

Yet, one still might say that at a given point in time we come upon encoded information, we encounter some written, stored, or otherwise materially represented structured differences. Well, ok, that’s true. However, and that’s a big however: we still can NOT claim that the data is something given.

This raises a question: what are we actually doing when we say that we “process” data? At first sight, and many people think so, processing data produces information. But again, it is not a processing in the sense of milling. This information thing is not the result of some kind of milling. It needs constructive activities and calls for affected involvement.

Obviously, the result or the produce of processing data is more data. Data processing is thus a transformation. Probably it is appropriate to say that “data” is the language game for “transforming the possibility for interpretation into its manifold.” Nobody should wonder about the fact that there are more and more “computers” all the time and everywhere. Besides the fact that the “informationalization” of any context allows for an improved generality as well as for improved accuracy (they excluded each other in the mechanical age), the conceptual role of data itself produces a built-in acceleration.

Let us leave behind the trivial aspects of digital technology, that is, everything that concerns mere re-arrangement and recombination without losing or adding anything. Of course, creating a pivot table may lead to new insights, since we suddenly (and simply) can relate things that we couldn’t without pivoting. Nevertheless, it is mere re-arrangement, helpful as it is. Pivoting itself does not produce any insight.

Our interest is in machine-based episteme and its possibility. So, the natural question is: How to organize data and its treatment such that machine-based episteme is possible? Obviously this treatment has to be organized and developed in a completely autonomous manner.

Treating Data

In so-called data mining, which can only be considered a somewhat childish misnomer, people often report that they spend most of the time preparing data. Up to 80% of the total project time budget is spent on “preparing data”. Nothing could render the inappropriate concepts behind data mining more visible than this fact. But one step at a time…

The input data to machine learning are often considered to be extremely diverse. In the first place, we have to distinguish between structured and unstructured data; secondly, we have to distinguish unstructured qualities like text or images, as well as the different scales of expression.

Table 1: Data in the Quality Domain

structured data: things like tables, or schemes, or data that could be brought into that form in one way or another; often related to physical measurement devices or organizational issues (or habits)
unstructured data: entities that, in principle, can’t be brought into a structured form before processing them. It is impossible to extract the formal “properties” of a text before interpreting it; those properties we would have to know before being able to set up any kind of table into which we could store our “measurement”. Hence, unstructured data can’t be “measured”. Everything is created and constructed “on-the-fly”, sailing while building the raft, as Deleuze (Foucault?) put it once. Any input needs to be conceived as, and presented to the learning entity in, a probabilized form.

Table 2: Data in the Scale Domain

real-valued scale: numeric, like 1.232; mathematically: real numbers, (ir)rational numbers, etc.; infinitely many different values
ordinal scale: enumerations, orderings, limited to a rather small set of values, typically n<20, such as 1,2,3,4; mathematically: natural numbers, integers
nominal scale: singular textual tokens, such as “a”, “abc”, “word”
binary scale: only two values are used for encoding, such as 1/0, or yes/no, etc.
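The distinctions of this table can even be guessed from raw values by a simple heuristic. The following is a rough sketch of our own (the function name and the fallback order are assumptions); the thresholds, such as n<20 for ordinal scales, follow the table above.

```python
def detect_scale(values):
    """Guess the scale of a variable from its observed values.
    Heuristic sketch: binary before nominal before ordinal before real."""
    distinct = set(values)
    if len(distinct) == 2:
        return "binary"
    if all(isinstance(v, str) for v in distinct):
        return "nominal"
    if all(float(v).is_integer() for v in distinct) and len(distinct) < 20:
        return "ordinal"
    return "real-valued"
```

In practice such a guess would only seed the preparation step; domain-specific singular points (like the zero in finance, see below) still need to be declared separately.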

Often the real-valued scale is regarded as the most dense one, hence the scale that can be expected to carry the largest amount of information. Despite the fact that this is not always true, it surely allows for a superior way to describe the risk in modeling.

That’s not all, of course. Consider for instance domains like the financial industry. Here, all the data are marked by a highly relevant point of anisotropy regarding the scale: the zero. As soon as something becomes negative, it belongs to a different category, albeit it could be quite close to another value if we consider just the numeric value. It is such domain-specific issues that contribute to the large efforts people spend on the preparation of data. It is clear that any domain is structured by, and knows about, a lot of such “singular” points. People then claim that one has to be a specialist in the respective domain in order to be able to prepare the data.

Yet, that’s definitely not true, as we will see.

In order to understand the important point we have to understand a further feature of data in the context of empirical analysis. Remember that in empirical analysis we are primarily looking for a mapping function, which transforms values from measurement into values of a prediction or diagnosis, in short, into the values that describe the outcome. In medicine we may measure physiological data in order to achieve a diagnosis, and doing so is almost identical to how other people perform measurements in an organization.
Measured data can be described by means of a distribution. A distribution simply describes the relative frequency of certain values. Let us resort to the following two examples. Here you see simple frequency histograms, where each bin reflects the relative frequency of the values falling into the respective bin.

What is immediately striking is that both are far from analytical distributions like the normal distribution. They are both strongly rugged, far from being smooth. What we can also see: they have more than one peak, even if it is not clear how many peaks there are.

Actually, in data analysis one meets such conditions quite often.

Figure 1a. A frequency distribution showing (at least) two modes.

Figure 1b. A sparsely filled frequency distribution

So, what to do with that?

First, the obvious anisotropy renders any trivial transformation meaningless. Instead, we have to focus precisely on those inhomogeneities. In a process perspective we may reason that the data measured by a single variable actually stem from at least two different processes, or that the process is non-stationary and switches between (at least two) different regimes. In either case, we split the variable into two, applying a criterion that is intrinsic to the data. This transformation is called deciling, and it is probably the third-most important transformation that can be applied to data.
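A minimal sketch of such a split, using an intrinsic criterion, namely the emptiest histogram bin between the modes; the bin count and the split rule are illustrative assumptions of ours, not a fixed recipe.

```python
# Split a (suspectedly bimodal) variable at an intrinsic criterion:
# the least populated interior bin of its frequency histogram.

def split_bimodal(values, bins=20):
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    # choose the emptiest interior bin as the cut point
    cut_bin = min(range(1, bins - 1), key=lambda i: counts[i])
    cut = lo + cut_bin * width
    v1 = [v for v in values if v < cut]
    v2 = [v for v in values if v >= cut]
    return v1, v2, cut
```

The two resulting variables can then be treated (and transformed) independently, exactly as described for V1 and V2 below.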

Well, let us apply deciling to the data shown in Figure 1a.

Figure 2a,b: Distributions after deciling a variable V0 (as of Figure 1a) into V1 and V2. The improved resolution for the left part is not shown.

The result is three variables, and each of them “expresses” some features. Since we can treat them (and the values they comprise) independently, we obviously constructed something. Yet, we did not construct a concept; we just introduced additional potential information. At this stage, we do not know whether this deciling will help to build a better model.

Variable V1 (Figure 2a, left part) can be transformed further, by shifting the values to the right through a log-transformation. A log-transformation increases the differences between small values and decreases the differences between large values, and it does so in a continuous fashion. As a result, the peak of the distribution will move more to the right (and it will also be less prominent). Imagine a large collection of bank accounts, most of them holding amounts between 1’000 and 20’000, while some host 1’000’000. If we map all those values onto the same width, the small amounts can’t be well distinguished any more; and we have to do that mapping, called linear normalization, with all our variables in order to make variances comparable. It is mandatory to transform such right-skewed distributions into a new variable in order to access the potential information they represent. Yet, as always in data analysis, before we have completed the whole modeling cycle down to validation we cannot know whether a particular transformation will have any effect, let alone a positive one, on the power of our model.

The log transformation has a further quite neat feature: it is defined only for positive values. Thus, if we apply a transformation that creates negative values for some of the observed values and subsequently apply a log-transform, we create missing values. In other words, we disregard some parts of the information that was originally available in the data. So, a log-transform can be used to

  • – render items discernible in right-skewed distributions, and to
  • – blend out parts of information dedicatedly by a numeric transformation.

These two possible achievements make the log-transform one of the most frequently applied.
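Both uses can be shown in a two-line sketch. The base-10 logarithm is an arbitrary choice of ours; values at or below zero, e.g. those pushed there by an upstream transformation, become missing values.

```python
import math

# Sketch of the two uses of the log-transform noted above: spreading small
# values apart, and dedicatedly blending out non-positive values, which
# become missing values (None).

def log_transform(values):
    return [math.log10(v) if v > 0 else None for v in values]
```

On the bank-account example, the amounts between 1’000 and 20’000 now spread over more than a full decade instead of crowding into a sliver of the normalized interval.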

The most important transformation in predictive modeling is the construction of new variables by combining a small number (typically 2) of hitherto available ones, either analytically by some arithmetic, or more generally, by any suitable mapping, including the SOM, from n variables to 1 variable. Yet, this will be discussed at a later point (in another chapter, for an overview see here). The trick is to find the most promising of such combinations of variables, because obviously the number of possible combinations is almost infinitely large.
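A naive sketch of generating such candidate variables from pairs of existing ones (the function name and the choice of operators are illustrative assumptions); in practice the promising candidates must of course be selected by validating the resulting models, not by exhaustive enumeration.

```python
from itertools import combinations

# Generate simple arithmetic combinations (product, quotient) for every
# pair of variables; division by zero yields a missing value.

def pairwise_candidates(table):
    """table: dict of variable name -> list of values (equal lengths)."""
    out = {}
    for a, b in combinations(sorted(table), 2):
        out[f"{a}*{b}"] = [x * y for x, y in zip(table[a], table[b])]
        out[f"{a}/{b}"] = [x / y if y != 0 else None
                           for x, y in zip(table[a], table[b])]
    return out
```

Even for a handful of variables the candidate set grows quickly, which is exactly why the search over combinations dominates this part of modeling.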

Anyway, the transformed data will be subject to an associative mechanism, such as the SOM. Such mechanisms are based on the calculation of similarities and the comparison of similarity values. That is, the associative mechanism does not consider any of the tricky transformations; it just reflects the differences in the profiles (see here for a discussion of that).

Up to this point the conclusion is quite clear. Any kind of data preparation simply has to improve the distinguishability of individual bits. Since we do not know anything in advance about the structure of the relationship between the measurement, the prediction, and the outcome we try to predict, there is nothing else we could do. On a second line this means that there is no need to import any kind of semantics. Now remember that transforming data is an analytic activity, while the association of things is a constructive activity.

There is a funny effect of this principle of discernibility. Imagine an initial model that comprises two variables v-a and v-b, among some others, for which we have found that the combination a*b provides a better model. In other words, the associative mechanism found a better representation for the mapping of the measurement to the outcome variable. Now first remember that all values for any kind of associative mechanism have to be scaled to the interval [0..1]. Multiplying two sets of such values introduces a salient change if both values are small or if both values are large. So far, so good. The funny thing is that the same degree of discernibility can be achieved by the transformative coupling v-a/v-b, the division. The change is orthogonal to that introduced by the multiplication, but that is not relevant for the comparison of profiles. This simple effect nicely explains a “psychological” phenomenon… actually, it is not psychological but rather an empirical one: one can invert the proposal about a relationship between any two variables without affecting the quality of the prediction. Obviously, it is not so much the transformative function as such that we have to consider as important. Quite likely, it is the form aspect of the data space warping qua transformation that we should focus on.

All of those transformation efforts exhibit two interesting phenomena. First, we apply them all as a hypothesis, which describes the relation between data, the (more or less) analytic transformation, the associative mechanism, and the power of the model. If we can improve the power of the model by selecting just the suitable transformations, we also know which transformations are responsible for that improvement. In other words, we carried out a data experiment that, and that’s the second point to make here, revealed a structural hypothesis about the system we have measured. Structural hypotheses, however, could qualify as precursors of concepts and ideas. This switching forth and back between the space of hypotheses H and the space of models (or the learning map L, as Poggio et al. [1] call it) is precisely what constitutes such data experiments.

Thus we end up with the insight that any kind of data preparation can be fully automated, which is quite contrary to the mainstream. For the mere possibility of machine-based episteme it is nevertheless mandatory. Fortunately, it is also achievable.

One (or two) last words on transformations. A transformation is nothing else than a method, and importantly, vice versa. This means that any method is just a potential transformation. Secondly, transformations are by far, and I mean really by far, more important than the choice of the associative method. There is almost no (!) literature about transformations, and almost all publications are about the proclaimed features of a “new” method. Such a method hell is dispensable. The chosen method just needs to be sufficiently robust, i.e. it should not—preferably: never—introduce a method-specific bias, or alternatively, it should allow controlling as many of its internal parameters as possible. Thus we chose the SOM. It is the most transparent and general method to associate data into groups for establishing the transition from extensions to intensions.

Besides the choice of the final model, the construction of a suitable set of transformations is certainly one of the main jobs in modeling.

Automating the Preparation of Data

How to automate the preparation of data? Fortunately, this question is relatively easy to answer: by machine-learning.

What we need is just a suitable representation of the problematics. In other words, we have to construct some features that together potentially describe the properties of the data, especially the frequency distribution.

We have made good experiences with applying curve fitting to the distribution in order to create a fingerprint that describes the properties of the values represented by a variable. For instance, a 5th-order polynomial, together with a negative exponential and a harmonic fit (trigonometric functions), is essential for such a fingerprint (don’t forget the first derivatives, and the deviations from these fits). Further properties are the count and location of empty bins. The resulting vector typically comprises some 30 variables and thus contains enough information for learning the appropriate transformation.
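A much-simplified sketch of such a fingerprint, using only the polynomial part; the exponential and harmonic fits, the derivatives etc. are omitted here for brevity, so this vector is far shorter than the roughly 30 components mentioned above. It assumes numpy and our own choice of components.

```python
import numpy as np

# Fit a 5th-order polynomial to the frequency histogram of a variable and
# collect the coefficients, the fit residual, and empty-bin statistics into
# one descriptive vector ("fingerprint").

def fingerprint(values, bins=20):
    counts, _ = np.histogram(values, bins=bins)
    freq = counts / counts.sum()
    x = np.arange(bins)
    coeffs = np.polyfit(x, freq, deg=5)
    residual = float(np.abs(freq - np.polyval(coeffs, x)).sum())
    empty = np.flatnonzero(counts == 0)
    return list(coeffs) + [residual, len(empty),
                           float(empty.mean()) if empty.size else -1.0]
```

Such fingerprints, computed for many variables, form the training material from which a learner can pick the appropriate transformation, which is the automation claimed here.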

Conclusion

We have seen that the preparation of data can be automated. Only very few domain-specific rules need to be defined a priori, such as the anisotropy around zero for the financial domain. Yet, the important issue is that they indeed can be defined a priori, outside the modeling process, and fortunately, they are usually quite well-known.

The automation of the preparation of data is not an exotic issue. Our brain does it all the time. There is no necessity for an expert data-mining homunculus. Referring to the global scheme of targeted modeling (in the chapter about technical aspects) we now have completed the technical issues for this part. Since we already handled the part of associative storage, “only” two further issues on our track towards machine-based episteme remain: the issue of the emergence of ideas and concepts, and secondly, the glue between all of this.

From a wider perspective we definitely experienced the relativity of data. It is not appropriate to conceive data as “givens”. Quite in contrast, they should be considered as a subject for experimental re-combination, as a kind of invitation to transform them.

Data should not be conceived as a result of experiments or measurements, some kind of immutable entities. Such beliefs are directly related to naive realism, to positivism or the tradition of logical empiricism. In contrast, data are the subject or the substrate of experiments of their own kind.

Once the purpose of modeling is given, the automation of modeling thus is possible. Yet, this “purpose” can at first be quite abstract, and usually it is something that results from social processes. It is a salient and open issue, not only for machine-based episteme, how to create, select or achieve a “purpose.”

Even as it still remains within the primacy of interpretation, it is not clear so far whether targeted modeling can contribute here. We guess, not so much, at least not on its own. What we obviously need is a concept for “ideas“.

  • [1] Tomaso Poggio, Ryan Rifkin, Sayan Mukherjee & Partha Niyogi (2004). General conditions for predictivity in learning theory. Nature 428: 419-422 (25 March 2004).

۞
