Descartes, updated.

December 27, 2012

Yes, I am a Cartesian. Well, at least abstractly and partially.

Why Descartes? And why updating him? And why here in this series about Urban Reason?

Well, there are roughly three reasons for that. Firstly, because he was the first to express the notion of method concisely. And that is certainly of some relevance concerning our collateral target, planning in the context of urban affairs. Secondly, because the still prevailing modernist thinking is soaked in Descartes’ rationalist ideas. Doing one thing after another, the strategy of divide and conquer, is essentially Cartesian. Thus, Descartes is still the secret hero among the functionalists and software programmers of our days. And the third reason, finally, for revisiting Descartes is that regarding the issues raised by planning and method we have to get clear about the problematics of rationalism1, quite beyond the more naturalist approach that we put forward earlier, aligning planning to the embryonic mode of differentiation. We again meet the “binding problem,” for on the one side Descartes’ “Methode” considers epistemic issues, but on the other neither planning nor method can be considered just a matter of internal epistemic stances. To put it more rhetorically: could we (i) plan thinking2, and could we (ii) expect to think a plan through completely?

Descartes, living in a transitional time between two great ages, between the Renaissance and the Enlightenment, expressed for the first time a strong rational “system”, thereby renewing and updating Plato’s philosophy. Dozing in the Portuguese sun, my ears filled with some deep house, I can imagine that today we are going to experience a kind of reverse passage, a trajectory through Descartes: back from the rationalist, logicist, mechanist way of thinking, full of abstract ideas detached from life (such as independence), towards the classic praise of vortices, broiling, emergence, creativity and the dignity of human practices, that is, of relating to each other in the first place. Among the first we will meet is Leonardo, the timeless genius.

Figure 1. A vortex, in Leonardo’s imagination.


In short, it seems, in such daydreaming, that we are going to leave the (Roman) module, returning to Athenian figures.3 Of course, on this course we carry a backpack, and not a small one, filled with more recent philosophical achievements.4

Here in this essay, I will try to outline a possible update of Cartesian thinking. I tend to propose that modernism, and thus still large parts of contemporary culture, is strongly shaped by his legacy. Obviously, this also applies to the thinking of most people, at least in Western cultures.

Descartes brought us the awareness of method. Yet, his initial version came with tremendous costs. Cartesian thinking implanted the metaphysical belief in independence into the further history of Western societies to come.5 For our investigation, it is the general question about method, mainly with regard to planning, that serves us as a motivational base. We will see whether it is possible to develop the Cartesian concept of method without sticking to his metaphysical beliefs and the resulting overt rationalism.

Still serving the same purpose as intended by Descartes, namely to provide some update on the notion of method, this update will in the end turn out to be more like a major release, to borrow a notion from software production. While the general intention may still resemble Descartes’ layout, the actual mechanisms will be quite different, and probably the whole thing won’t be regarded as Cartesian anymore by the respective experts.

But why should one, regarding plans and their implementation, bother with philosophy and other abstract stuff of similar kinds at all, particularly in architecture and urbanism? Isn’t architecture just about pretty forms and optimal functions, optimal fulfillment of a program—whether regarding land-use or the list of rooms in a building—, mingling those with a more or less artful attitude? Isn’t urbanism just about properly building networks of streets and other infrastructure, including immaterial ones such as safety (police, fire, health) and legislative prescriptions for guiding development?

Let us listen to the voice of Vanessa Watson [3], University of Cape Town, South Africa, as she wrote about it in an article published in 2006 (my emphasis):

The purpose of this article has been to question the appropriateness of much of the thinking in planning that relates to values and judgement. I argue that two main aspects of this thinking are problematic: a focus on process and a neglect of outcomes, together with the assumption that such processes can be guided by a universal set of deontological values, shaped by the liberal tradition. These aspects become particularly problematic in a world which is characterized by deepening social and economic differences and inequalities and by the aggressive promotion of neoliberal values by particular dominant nation-states. (p. 46)

Obviously, she is asking about the conditions of such implementation. Particularly, she argues that one should be aware of values.

The notion of introducing values into deliberative processes is explored.  (p.31)

In fact, the area of planning6 is a hot spot for all issues concerning the question of what humans would like to “be”, to achieve. Not primarily as individuals (though this cannot be neglected), but rather as a “group” in these ages of globalization.7 And many believe not only that human affairs are based on values, but also that this is necessarily so. Watson’s article is just one example of that.

Quite obviously, planning is about the future, and more precisely, about decision-making regarding this future. Equally obviously, it would be ridiculous to confine planning just to that. Yet, stating that ex-post is something very different from ex-ante, as Moroni [4] does in his review of [5], is not only insufficient, it is struck by several blind spots, e.g. regarding the possibility of predictive modeling. Actually, bringing the ex-post and ex-ante perspectives to a match is the only way to enable oneself for proper anticipation, as is well known in the financial industries and in empirical risk analysis. This is not only admissible in economic contexts; it has been demonstrated as a valuable tool in the digital humanities as well. Besides, it should be clear that a reduction to either the process or the outcome must be regarded as seriously myopic. What then is planning? (If there is a possible viable definition of it at all.)
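A brief technical aside on the matching of ex-ante and ex-post just mentioned: this is exactly what backtesting does. A minimal sketch in Python, assuming a toy series and a deliberately naive forecaster (all names and numbers are mine, purely illustrative):

    # Backtesting: matching ex-ante predictions against ex-post outcomes.
    # At each step t the model sees only data up to t (ex-ante); its
    # prediction is then compared with the realized value (ex-post).
    series = [3.0, 3.2, 3.1, 3.6, 3.8, 3.7, 4.1, 4.0]

    def forecast(history):
        # naive ex-ante model: mean of the last three observations
        return sum(history[-3:]) / len(history[-3:])

    errors = []
    for t in range(3, len(series)):
        prediction = forecast(series[:t])           # only the past is visible
        errors.append(abs(prediction - series[t]))  # compare with the outcome

    print("mean absolute error:", round(sum(errors) / len(errors), 3))

The point is not the model but the loop: anticipation is calibrated only by systematically confronting past predictions with what actually happened. End of the aside; back to the question of what planning is.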

Actually, looking at the literature, there seem to be as many different definitions of planning as there are people calling themselves planners. In the community of those people there is a fierce discussion about it, even after more than a century of town planning offices. Different schools can be observed, such as rationalists (cf. [5]) or “radical hands-on practitioners,” the former believing in the possibility of pervasive comprehension, the latter denying the feasibility of theory and just insisting on manuals as collections of mystical hands-on recipes [6]. Others, searching for a kind of salvation, are trying to adopt theories from other domains, which poses at least a double-sided problem if neither the source, such as complexity or evolutionary theory, is properly understood (cf. [7], [8], [9]), nor the process of adopting it, as Angelique Chettiparamb has pointed out [10]. As a matter of fact, urban and regional planning still fails much too often, particularly in proportion to the size and scope of the project, and a peculiar structure shows up in this failure: the lack of a common structure across planning projects. One surface-level reason complicating the subject matter is certainly the extended time horizon affected by the larger plans. Of course, there is also the matter of scale. Small projects often succeed: they are completed within budget and within time, they look as designed, and clients are permanently satisfied. Yet, this establishes swarms of independent planning and building, which, according to Koolhaas, has led to Junkspace. And we should not overlook urban sprawl, which many call the largest failure of planning. Swarms of small projects, even if all of them were successful, can’t replace large-scale design, it seems.

In other words, the suspicion is that there is a problem with the foundations, with the concepts buried in the idea of planning, with the way of speaking, i.e. the language games performed, and probably even with the positioning of the whole area, with the methods, or with all of those issues together. In agreement with Franco Archibugi [5] we may conclude that there are two main challenges: (i) the area of planning is largely devoid of a proper discourse about its foundations, and (ii) it is seriously suffering from the binding problem as well.

The question about the foundations is “foundational” for the possibility of a planning science at large. Heidegger, in “Sein und Zeit”, remarked ([11] p.9):

Even as the significance of scientific research is always given in this positivity, its actual progress is accomplished not so much through the collection of results and their salvage in “manuals” as through the asking after the basic constitution of the respective domain, an asking that will mostly be seen as reactively driven by the increasing technical expertise fixed in such manuals.

…and a few sentences later:

The level of a science is determined by its capability for a crisis of its foundational concepts.8

Nowadays we can even understand that this crisis has to be an ongoing crisis. It has to be built into the structure of the respective science itself, such that the “crisis as event” is not possible anymore. As an example we will not only glance towards biology, we will even assimilate its methodological structure.

I believe that all those methodological (meta-)issues can’t be addressed separately, nor separately from so-called practical issues. Additionally, I think that in the case of an investigation that reaches out into the “social”, the question of method can’t be separated from the question about the relation between ethics and planning, or from its target, the Urban (cf. [12]). Such a separation would implicitly follow the structure of reductionist rationalism, which we have, of course, to avoid as a structural predetermination. Therefore I decided to articulate and braid these issues, in a first round, all together into one single essay, even at the cost of its considerable length.9

The remainder of this essay revolves around method, plan and their vicinity, arranged into the following sections:

1. Method à la Carte(sian)

Descartes meant to extend the foundations devised long before him by Aristotle. The conviction that some kind of foundation is necessary and possible is called foundationalism. In his essay about Descartes’ epistemology [13], Newman holds that

The central insight of foundationalism is to organize knowledge in the manner of a well-structured, architectural edifice. Such an edifice owes its structural integrity to two kinds of features: a firm foundation and a superstructure of support beams firmly anchored to the foundation. A system of justified beliefs might be organized by two analogous features: a foundation of unshakable first principles, and a superstructure of further propositions anchored to the foundation via unshakable inference.

In Descartes’ own words:

Throughout my writings I have made it clear that my method imitates that of the architect. When an architect wants to build a house which is stable on ground where there is a sandy topsoil over underlying rock, or clay, or some other firm base, he begins by digging out a set of trenches from which he removes the sand, and anything resting on or mixed in with the sand, so that he can lay his foundations on firm soil. In the same way, I began by taking everything that was doubtful and throwing it out, like sand … (Replies 7, AT 7:537)

Here the reference to architecture is a homage to Aristotle, who also used architecture as a kind of structural template. The big question is whether such stable ground is possible in the realm of arguments. If not, a re-import of the expected stability won’t be possible, of course. The founder of mechanics, Archimedes, famously claimed that, given a stable anchor point, he could move the whole world. For him it was clear that such a stable point of reference is to be found only in local contexts.

In his “Discours de la Méthode” Descartes distinguished four precepts, or rules, about how to achieve a proper way of thinking.

(1) The first was never to accept anything for true which I did not clearly know to be such; that is to say, carefully to avoid precipitancy and prejudice, and to comprise nothing more in my judgment than what was presented to my mind so clearly and distinctly as to exclude all ground of doubt.

(2) The second, to divide each of the difficulties under examination into as many parts as possible, and as might be necessary for its adequate solution.

(3) The third, to conduct my thoughts in such order that, by commencing with objects the simplest and easiest to know, I might ascend by little and little, and, as it were, step by step, to the knowledge of the more complex; assigning in thought a certain order even to those objects which in their own nature do not stand in a relation of antecedence and sequence.

(4) And the last, in every case to make enumerations so complete, and reviews so general, that I might be assured that nothing was omitted.

Put briefly, and in a modernized shape, he demands that we follow these principles:

  • (1) Stability: proceed only from stable grounds, i.e. after excluding all doubts;
  • (2) Additivity: practice the strategy of “divide & conquer”;
  • (3) Duality: not to mistake empirical causality for logical sequence;
  • (4) Transferability: try to generalize your insight, and apply the generalization to as many cases as possible.

Descartes proposes a certain “Image of Thought”, as Deleuze would call it much later, in the 1960s.10 There are some important objections to these precepts, of which Descartes, of course, could not have been aware. It took at least two radical turns (the Copernican, by Kant, and the Linguistic, by Wittgenstein) to render those problems visible. In the following we will explicate these problems around Descartes’ four methodological precepts in a quite brief manner.

ad (1), Stability

There are two important assumptions here: first, that it is possible to exclude all doubts; second, that it is possible to use language in a way that would not be vulnerable to any kind of doubt. Meanwhile, both assumptions have been destroyed, the first by Gödel and his incompleteness theorem, the second by Wittgenstein with his insistence on the primacy of language. This primacy makes language as languagability a transcendent (not: transcendental!) entity, such that it is even a priori to any possible metaphysics. There are several implications of that, first regarding the meaning of “meaning” [14]. Surprisingly enough, at least for all rationalists and positivists, it is untenable to think that meaning is a mental entity, as this would lead to the claim that there is something like a private language. This has been excluded by Wittgenstein (see also [14][16]), and all the work of the later Putnam is about this issue [17]. Language is fundamentally a “communal thing,” both synchronically and diachronically. Frankly, it is a mistake to think that meaning could be assigned, or that meaning would be attached to words. The combined rejections of Descartes’ first precept lead us to the primacy of interpretation. Before interpretation there is nothing. This holds even for what is usually called “pure” matter. A consequence of that is the inseparability of form and matter, or if you like, information and matter. It is impossible to talk about matter without also talking about information and form. For Aristotle, this was a cornerstone. Since Newton, many have lost their grip on that insight.

ad (2), Additivity

This inconspicuous rule is probably the most influential one. In some way it dominates even the first one. This rule set out the framing for positivism. The claim is basically that it is generally possible, for any kind of subject in thinking, to understand that subject by breaking it up into as many parts as possible. Nothing would be lost by breaking it up. In the end, we could recombine the “parts of understanding” into a combined version. If this property is assigned to an empirical whole11, it is usually called “additivity” or “linearity”.
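This precept survives almost literally in today’s software engineering, where “divide & conquer” names a whole family of algorithms. A minimal sketch in Python (purely illustrative, not tied to any particular source):

    # Divide & conquer in its textbook form: merge sort.
    # The Cartesian assumption is visible in the structure itself:
    # split the problem into parts, solve each part in isolation,
    # and assume nothing is lost when the partial solutions recombine.
    def merge_sort(items):
        if len(items) <= 1:                  # trivially solved base case
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])       # divide ...
        right = merge_sort(items[mid:])
        merged = []                          # ... and recombine
        while left and right:
            merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
        return merged + left + right

    print(merge_sort([5, 2, 8, 1, 9]))       # [1, 2, 5, 8, 9]

Sorting is additive in exactly this sense; the argument below is that wholes in the world, unlike lists, generally are not.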

By this rule, Descartes clearly sets himself apart from Aristotle, who would clearly have refused it. For Aristotle, most things could not be split into parts without losing their quality. The whole is different from the sum of its parts. (Metaphysics VII 17, 1041b) From the other direction this means that putting things together always creates something that hasn’t been there before. Today we call this emergence. Yet, we have to distinguish different kinds of emergence, as we have to distinguish different kinds of splitting. When talking about emergence and complexity, we are not interested in emergence by rearrangement (association) or by combination (water from hydrogen and oxygen), but rather in strong emergence, which opens a new organizational level.

The additivity of things in thought, as well as of things in the world, is a direct consequence of Descartes’ theological metaphysics. For him, man had to be independent from God in order to be capable of and fit for reason.

He [God]… agitated variously and confusedly the different parts of this matter, so that there resulted a chaos as disordered as the poets ever feigned, and after that did nothing more than lend his ordinary concurrence to nature, and allow her to act in accordance with the laws which he had established.

There are general laws effective in the background, as a general condition, but there is no direct action of the divine principle anymore. In other words: in his actions, man is independent from God. By means of this belief in metaphysical independence12, Descartes and Leibniz, who thought similarly (see his Theodicy), became the founders and grandfathers of modernism as it still prevails today.

ad (3), Duality

Simply great. The issue has been rediscovered, and of course extended and deepened, by Wittgenstein, who was the first ever to understand that logic is transcendent. There is neither a direct way from the world into logic, nor from logic into the world. It is impossible to claim truth values for worldly entities. Doing so instead results in the implicit claim that the world could be described analytically. This has been the position of idealist rationalists and positivists. Note that it is not a problem to behave rationally, but it is definitely a problem to claim this idealistically as a norm. For this would exclude any kind of creativity or inventiveness.

Descartes did not recognize that his third precept contradicts his second one at least partially. Neither did Aristotle with his conceptualization of the whole and the claim that the truth could be recognized within the world.

ad (4), Transferability

Also a great principle, which is still valid. It rejects what today is known as the case study (the most stupid thing positivism has brought along).

Yet, this also has to be extended. What exactly happens when we are generalizing from observations? What happens if we apply a generalization to a case? We already discussed this in detail in our contemplation about comparison.

One of the results we found there is that even the most simple comparison needs something that is not empirical, something that cannot be found by just looking (staring?) at it. It not only implies a concept, it also requires at least one concept that is a priori to the comparison, or likewise the observation. The next step is to regard the concept itself as a quasi-material empirical thing. Yet, we will find the same situation again, though this does not establish circularity or a regress!

In order to apply an already established generalization, or a concept, we need some rules. This could be a model of some kind. The important thing then is to understand completely that concepts and generalizations cannot be analytical. Hence there are always many ways to apply a generalization. The habit of selecting a particular style for the instantiation of the concept is what I called orthoregulation. In Kantian terms we could call it forms of construction, mirroring his forms of intuition (or schemata).

It is this inevitability of the manifold instantiation of abstractions, ideas or generalizations which idealist rationalism does not recognize, and on which it thus fails in the most serious way. For its mistake is the claim that there is a single “correct” way to apply a concept.

2. Foundation, now

Descartes clearly expressed that the four parts of the method are suitable for following first principles, but not sufficient for finding the first principle. For that he devised his method of doubt. Yet, after all, this, as well as his whole foundationalist systematics, was in need of being anchored in God.

But what if we tried to follow the foundational path without referring to God?13 Setting something else as a first principle is feasible only within mathematics or logic. In the case of the former we call it an axiom, in the case of the latter a tautology. In a kind of vertigo, both areas still struggle for a foundation, searching for a holy grail that can’t exist. Outside of mathematics, it is quite obvious that we can’t posit an axiom as a first principle. How would we justify it?

Now we have met the really important question. If we can’t answer it, so it was thought, any knowledge would immediately become subject to the respective circumstances, implying a kind of tertiary chaos, deep relativity and arbitrariness. The question is indeed important, but somewhat surprisingly the answer is irrelevant. For the question is ill-posed, and its misguidedness is precisely what constitutes its importance. There is no absolute justification, thus there is no justification at all, and in turn the question rests on a misbelief.

This does not mean, however, that there is no foundation, in the sense that there would be nothing beyond (or: behind) this foundation. In our essay “A Deleuzean Move” we presented a possibility for a self-referential conceptualization of the foundation, one that provides a foundation without being based on a first principle. Of course, there are still requirements. Yet, all required positive-definite items or proposals—such as symbols or logic—become part of the concept itself and are explained and dissolved by it. The remaining conditions are identified as transcendent: modelity, conceptuality, mediality and virtuality. Each of them can be translated or transposed into actual items, and in each “move” all of them are invoked to some varying degree. These four transcendent and foundational conditions for thought, ideas and language establish a space whose topology is hyperbolic, embedding a second-order Deleuzean differential. All together we called it the choreostemic space, because different styles of human activity create more or less distinct attractors in this space.

Thus, the axiomatic nature of Descartes’ foundation, which we may conceive as a proposal based on constants, is changed into a procedural approach without any fixed point. Instead, the safety in the ocean of possible choreostemic forms derives solely from the habit of thought as it is practiced in a community. The second-order differential prevents this space from becoming representational, as it requires a double instantiation. It can’t be used to map or project anything into it, including intentions. Nevertheless it records the style of unfolding intentions, wishes, stories, informational activities etc., and renders different styles comparable. These styles can be described as distinct dynamics in the choreostemic space, between the transcendent entities of concept, model, mediality and virtuality.

This choreostemic space traces the immanence of thought and the relation between immanence (of creation), transcendence (of condition) and the transcendental (of the outside). This outside is beyond the border of language, but for the first time it appears as an imaginary. Note that the divine and the existential are both in this outside, yet in different virtual directions. Neither God nor existence is conceived as something we could point to, or about which we could speak by means of actual terms. And at least for the existential it does not make much sense to doubt it. Here we agree with Descartes as well as with Wittgenstein. Although we can’t say anything about it, we can traverse it. We always do so when we experience existential resistance, like an astronaut in a Space Shuttle visiting the incompatible interplanetary zone. Only limited trips are possible; we always have to return into an atmosphere.

Saying that the choreostemic space establishes a self-referential foundation implies that it is also critical (Kantian), and even meta-critical (Post-Kantian), yet without being doomed to idealism (Fichte, Frege) or totality (Hegel) and the logicistic functionalism implied by those.

Above we mentioned that the transcendent elements of the choreostemic space, namely model, concept, mediality and virtuality, can be transposed into actual items. This yields a tremendous advantage of the choreostemic space. It does not just dissolve the problem of ultimate justification without sacrificing epistemic stability, it also bridges the rather wide gap between transcendence and application. To put it in simple terms, the choreostemic space just reflects the necessity of the social embedding of modeling, the role of belief and potential in the actual moves we take in the world, and finally the importance of concepts, which can be conceived as ideas detached from the empiric constitution (or parts) of language. In discourses about planning, as well as in actual planning projects, this 4-fold vector describes nothing less than a proper communicational setup as part of goal-directed organizational processes.

There are some interesting further topics that can be derived from this choreostemic space, which you can find in the main essay about it. The important message here is that a constant, a metaphysical axiom, gets completely dissolved in a procedure that links the informational of the individual with the informational of the communal.

3. Method, now

3.1. …Taken Abstractly

Method is not primarily an epistemological issue, as models or concepts, or modelity and conceptuality, respectively, are. It combines rules into a whole of procedures and actions such that this whole can be seen as the operational equivalent of a goal or purpose. As such, it refers to action, strategy, and style, and thus to aesthetic issues. Hence, also to creativity and its hidden companion, formalization. Although the aspect of reproducibility is usually strongly emphasized, there is also always an element of open experimentation in the “methodological,” allowing one to “harvest” the immanent potential, far beyond the encoding and its mechanistic implications. This holds even for thinking itself.

Descartes, of course, and similarly Kant later, clearly addressed the role of projected concepts as a means of “making sense,” even where these projections do not correspond to any necessity residing in the object(s) themselves. As part of the third precept of performing method he writes (see above):

“…   assigning in thought a certain order even to those objects which in their own nature do not stand in a relation of antecedence and sequence.”

Objectively, logically confirmed stable grounds are no longer part of methodological arrangements. There is some kind of stability, of course, yet it comes just as a procedural regularity, which is dependent on the context. In turn, this allows one to evade analyticity in favor of adaptivity.

Any method thus comprises at least two different levels of rules, though usually there are quite a few more. The first addresses the factual re-arrangement, while the second, let us call it the upper level, is concerned with the regularization of the application of the rules on the first level, as well as with the integration of the rather heterogeneous set on the lowest level. Just think about a laboratory, or the design and implementation of a plan in a project, to get a feeling for the very different kinds of subjects that have to be handled by and integrated into a method. The levels are tightly linked to each other; there is still a link to empirical issues on the second level. Thus there are not too many degrees of freedom for the rules on the upper level.

Saying this, we have already introduced, and actively built upon, a concept that was not available to Descartes: information. Although it could be traced in his 3rd and 4th precepts, information as a well-distinguished category was not available before the middle of the 20th century. Itself dependent on the notions of the (Peircean) sign and of probability, information does not only allow for additional levels of abstraction, it also renders some important concepts accessible which otherwise would remain completely hidden. Among those are a clear image of measurement, the reflection about rules, the reflection about abstraction itself—think about the Deleuzean Differential—, the proceduralization, accumulation, transformation and re-distribution of executive knowledge, associative networks, distributed causes, complexity, and the distinction between reversibility and irreversibility. All these conceptual categories are highly relevant for a theory of planning. None of them could be found explicitly and appropriately assimilated in the literature about planning so far (as of the end of 2012).

These categories provide us with a vantage point that opens the possibility for a proper formulation of “method”, where “proper” means that it could be appropriately operationalized and instantiated into practical contexts. We can say that…

Methods are structured collections of more or less strict rules that organize the transformational flow of items.

These items could be documents, data, objects in software, material objects, but also ideas and concepts. In short, and from a different angle: anything that could be symbolized. In the context of planning, any of those particular kinds may be involved, since planning is the task of effectively rearranging matter, stocks and flows embedded into a problematic field spanning from design [19] and project management to logistics and politics. There is little sense in wrangling about the question of whether design should be included in planning and planning theory or not [1], or whether one should follow a dedicated rationalist route or not [4].
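For the software-minded reader, the two levels of rules described above can be made tangible in a few lines. The following is a minimal sketch in Python, not a definitive implementation; all names are mine and purely illustrative:

    # A method as a structured collection of rules, along the definition
    # above: first-level rules transform items, while an upper-level rule
    # regularizes which rules apply, and in which order.
    from typing import Any, Callable, List

    Rule = Callable[[Any], Any]                    # first level: transforms an item

    class Method:
        def __init__(self, rules: List[Rule],
                     regulate: Callable[[List[Rule], Any], List[Rule]]):
            self.rules = rules                     # heterogeneous first-level rules
            self.regulate = regulate               # upper level: organizes their use

        def run(self, item: Any) -> Any:
            # The upper level selects and orders the applicable rules
            # for this item; the first level does the actual work.
            for rule in self.regulate(self.rules, item):
                item = rule(item)
            return item

    # Illustrative usage: items are strings flowing through a tiny pipeline.
    strip = lambda s: s.strip()
    lower = lambda s: s.lower()
    method = Method([strip, lower],
                    regulate=lambda rules, item: rules)  # trivial regularization
    print(method.run("  DRAFT Plan  "))                  # "draft plan"

The first-level rules do the factual transformation of items; the upper level only regularizes their application, which is exactly why it has so few degrees of freedom of its own.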

Such questions (whether design belongs to planning, whether to follow a rationalist route) derive mainly from two blind spots. Firstly, people are obviously caught in a configuration ruled by the duality of “context” and “definition”. It is not that the importance of context goes unrecognized. Fortunately, the completely inadequate and almost stupid response of leaning towards case-based reasoning, case studies or casuistics (cf. [20]) is quite rare.14 Secondly, planning seems to be conceived implicitly as something like an external object. Only objects can be defined. Yet, objects are created by performing a definition, and this “act of defining” is in itself strongly analytical. Conceptual work lies outside the work of the definition. Who, besides orthodox rationalists or logical positivists, would claim that planning is something analytical? As a further suspicion, we could already add that there are quite strong hints favoring a grand cultural hypothesis for planning.

3.2. …From the Domain Perspective

In order to get clear about this, we could look for an example from another domain where the future—as in planning—is also a major determinant. Hence, let us take the science of biology. Organisms settle in a richly structured temporal space, always engaging with the future, on any scale. The reason is quite simple: those who didn’t do so sufficiently, be it as a species or as an individual, do not exist anymore.

Biology is the science of all aspects of living entities. This definition is pretty simple, isn’t it? Yet, it is not a definition, it is a vague description, because it is by no means clear what “life” should mean. Recent textbooks on biology do not contain a definition of life anymore. So, how is biology structured as a science? Perhaps you know that physicists have claimed, ever since Darwin, that biology isn’t a “science” at all, because of its proclaimed lack of “laws” and the respective abstract and formal generalizations. They are perpetually puzzled by the huge amount of particularities, the historicity, the context-specificity, the individuality of the subjects of interest. So, we can clearly recognize that a planning science, whatever it will turn out to be, won’t be a science like physics.

It is not possible to describe here all the relevant structural aspects of biology as a science, along with the respective approaches and attitudes. Yet, there is a kind of initiation of biology as a modern science that is easy to grasp. The breakthrough in biology came with Niko Tinbergen’s distinction of the four central vectors of, or perspectives in, biological thought:

  • (1) ontogenesis (embryology, growing up, learning),
  • (2) physiology,
  • (3) behavior, and
  • (4) phylogenesis (evolution).

The basic motivation for such a distinction arose from the differences regarding the tools and approaches for observation. There are simply different structures and scales in space-time and concept-space, roughly along the lines Tinbergen carved out. From the perspective of the organism, these four perspectives could be conceived as “functional compartments”. Later, this concept of the functional compartment was applied with considerable success in cell biology. There, people called them genome, transcriptome, proteome, etc., in order to organize the discourse. Meanwhile it became obvious, however, that this distinction is not an analytic, i.e. “idealistic” one, since in cells and organisms we find all kinds of interaction across any number of integrative organizational “levels”.

Each of these areas started with some kind of collecting, followed by taxonomies in order to master the particularity. Since the 1970s, however, there has been an increasing trend towards mathematical modeling. Techniques (sometimes fuzzily also called methods) comprise probabilistic modeling, Markov models, analytic modeling such as the Marginal Value Theorem in behavioral ecology [21], all kinds of statistics, graph-based methods, and data-based, or empirical, classification by means of clustering, and often a combination of these. These techniques are used for deriving concepts.

Interestingly, organisms and their populations are often described (i) in terms of a “currency”, which in biology is time and energy, and (ii) in terms of “strategies,” on the individual as well as on the collective level. Famous in this respect is the concept of the evolutionarily stable strategy (ESS), introduced by Maynard Smith in the early 1970s [22].
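To give a flavor of how such strategic descriptions get operationalized, here is a minimal sketch of the classic Hawk-Dove game in Python, assuming the textbook payoffs (the concrete numbers are mine, chosen for illustration). When the cost C of an escalated fight exceeds the value V of the resource, the ESS is a mixed strategy playing Hawk with probability V/C, and simple replicator dynamics converge to exactly that mixture:

    # Hawk-Dove game: the textbook illustration of an ESS.
    V, C = 4.0, 10.0   # resource value, fight cost (C > V)

    # Payoff matrix: (focal strategy, opponent strategy) -> payoff.
    payoff = {('H', 'H'): (V - C) / 2, ('H', 'D'): V,
              ('D', 'H'): 0.0,         ('D', 'D'): V / 2}

    # Replicator dynamics: the share p of Hawks grows when Hawks
    # earn more than the population average.
    p = 0.9                                   # initial fraction of Hawks
    for _ in range(2000):
        w_h = p * payoff[('H', 'H')] + (1 - p) * payoff[('H', 'D')]
        w_d = p * payoff[('D', 'H')] + (1 - p) * payoff[('D', 'D')]
        w_avg = p * w_h + (1 - p) * w_d
        p += 0.01 * p * (w_h - w_avg)         # discrete replicator step

    print(round(p, 3), "vs. predicted ESS", V / C)   # converges to 0.4

The point is not this toy loop, but the style of description: populations characterized by a currency and a strategy, with stability defined procedurally rather than axiomatically.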

As a fifth part of biology we could nowadays add the particular concern about the integration of the four aspects introduced by Tinbergen. The formal study of this integration is certainly given by the concept of complexity.15

Whatever the final agreement about planning and method in Urban16 Affairs will comprise, it is pretty sure that there won’t be a closed definition of planning. Instead, almost certainly, we will see agreement on some kind of “Big four/five” perspectives. In the next section we are going to check out the possibility of extending them. Note that taxonomy is not one of those! And although there are myriads of highly particular descriptive reports, biology never engaged in case studies.

3.3. The Specialty…

No question, the pragmatic approach of separating basic perspectives without sacrificing the idea of integration has been valuable for the development of biology. There are good chances that the adoption of these perspectives—carried out appropriately, that is, not in a representationalist manner—will be fruitful for the further development of the domain of planning and planning theory. There is at least a kind of homeomorphism: in both areas we find a strong alignment to the future, which in turn means that adaptivity and persistence (sustainability) also play an important role.

The advantage of such a methodological alignment would be that planning theory would not have to repeat all the discussions regarding the proper concepts of observation. Planning could even learn from the myriads of different strategies of natural systems. For instance, the need for compartmentalization. Or the fact that the immediate results of initial plans (read: genes and transcripts) are in need of heavy post-processing. Or the reliability of probabilistic processes. Or the fact that evolutionary processes are directed towards increased generality, despite their basic blindness.

Yet, there are at least two large differences from the domain of planning. Firstly, planning takes place as a symbolic act within a culture, and secondly, planning involves normative structures and acts, at which we will take a closer look below. Both aspects are fundamentally different from the perspectivism in biology insofar as they don’t allow for a complete conceptual externalization, as is the case with biological subjects. Quite to the contrary, symbols and norms introduce a significant self-referentiality into all methods regarding method and planning in the context of the Urban.

Thus, in addition to the 4+1 structure that we could adopt from biology for dealing with the externalizable aspects, we need two further perspectives suitable for dealing with the dynamics of symbols and the normative. For the first one, we have already proposed a suitable structure, the choreostemic space. Two notes about that. First, the choreostemic space could be turned into a methodological attitude. Second, the choreostemic explicitly comprises potential and mediality as major determinants of any “worldly” move, besides models and concepts. The further issue of normativity we will discuss in the next section.

Meanwhile, we can finally formulate what method could mean in the context of the Urban. First, our perspectives for dealing with the subject of “planning,” the subjects of planning, and the respective methods would be the following (read 1 through 4 in parallel to Tinbergen’s):

  • (1) genesis of the plan and genesis of the planned;
  • (2) mechanisms for implementation, mostly considering particular quasi-material aspects, and mechanisms in the implemented;
  • (3) behavior (of individuals, groups, and the whole) and social dynamics, during planning and in the implemented arrangement;
  • (4) adaptivity, persistence, sustainability and evolution of plans and the planned;
  • (5) choreostemics of concepts and interaction, in planning and in the planned;
  • (6) ethical and moral considerations;
  • (7) integration of planning and the planned as a complex system (see also below).

Within these perspectives, particular methods and techniques will evolve. Yet, we could also bundle all of it into a single methodological attitude. In any case we could say that…

Methods are collections of more or less strict rules that organize the transformational flow of items, where these collections are structured along basic perspectives.

3.4. …and the (Notorious, Critical) Game

Last, but not least, “method” is a language game—of course, I would like to add. As usual, several implications immediately follow. First, it is embedded into a Form of Life. Methods are by no means restricted to rationalism or the famous “Western perspective”. Any society knows language, rules and norms, and thus also regularity. Of course, the shape of the method may differ considerably. Yet, for the concept as we propose it here, these differences are just parameters. In terms of the choreostemic space, methods result in different attractors in a non-representative metaphysical space of immanence.

This brings us to the second implication: the language game “method” is a “strongly singular term”. We can’t do anything without it, not even thinking in the most reduced manner, let alone combined action-thinking. “Method” is one of those pervasive constructs in the basement of culture. Moreover, as a strongly singular term it introduces self-referentiality, and hence an immanent creativity. Thus the third implication: whenever we use a method, we have to apply it critically. This basically means that there is no method without a clear indication of its conditions.

Regarding our concept of Generic Differentiation and its trinitary way of actualizing change, we thus have to expect that we will find the “method aspect” everywhere, no matter whether we take the perspective of the planning process or that of the planned. In order to illustrate this aspect using a metaphor, let me refer to the structure of atoms and molecules, particularly to the concept of the electron orbital. Orbital electrons are responsible for the electromagnetic binding forces between atoms in molecules. It is through these electrons that molecules (and also metals and crystals) can exist at all.

Figure 2: The so-called orbitals of the outer electrons of the atoms in a molecule of CO2, showing their importance in building molecules from atoms. The lobes (yellow, blue, green) should not be taken as well-defined 3-dimensional material volumes. They rather indicate fuzzy areas of increased probability of meeting an electron if a measurement were taken.


Similarly, methods, as elements of choreostemic moves, may be conceived as the mediators of binding forces between the aspects involved in thinking about differentiation.

Our concept of Generic Differentiation allows us to overcome the wrong distinction between theory and practice. While the true dualism consists of theory or practice on the one side and performance on the other, it is still necessary to clarify the relation between theory, model and operation. We have already derived that theories may beneficially be conceived as orthoregulating milieus for assembling models. But still, this is only a condition. I think that the relation between theory and structural models on the one side, and predictive/operational models on the other, concerns a question that points right to the heart of actualization: how to organize interpretation? Again we meet a question that is invisible to rationalists and modernists17 alike, since both are blind to the necessity of forms of construction and the implied freedom, or manifoldness of choice, respectively. This issue of how to organize interpretation concerns, of course, all phases and aspects of planning, from creating the plan to living in the implemented plan.

4. Grand Cultural Perspective

Franco Archibugi is completely right in emphasizing that planning is pervasively relevant [5]. Planning of xyz is not just relevant for the subject xyz, where xyz could be something like land-use, city-layout, street planning, organizational planning, etc.

In other words, it [m: planning] is a system that concerns the entire social life and includes all the possible decision-makers that act within it. It is a holistic system.18

So far, so good. He is also right in criticizing the positivistic approach to planning, which, according to him, has been prevalent in planning until recently. Yet, although in his book he describes a lot of reasonable means and potential practices for an improved choreography of planning, from institutions down to consulting, it is not really an advance to replace the positivist attitude with a functionalist one, claiming that planning has to follow the paradigm of “programming”.

Among other weaknesses, such as a weird concept of theory and theoricity—leading to rather empty distinctions like theory on, of and in planning, and to the mistake of mixing case studies with story-telling—, Archibugi is almost completely unaware of the ethical dimension and/or its challenges, apparently hoping to cover the aspect of difference and divergence by means of institutions. Since he believes in penetrating comprehensibility, complexity and self-referentiality didn’t make it into his treatise either, even if we consider them in the limited way the mainstream uses them. Although he wants to depart from the positivist approach in his outline of “the first routes of the new discipline,” he proposes an “operational logical framework” which integrates and unifies all types, forms, and procedures of planning.19

Therein, Archibugi surely counts as an arch-rationalist, a close relative of the otherworldly stories published by Luhmann and Habermas. Yet, we certainly can’t apply pervasive rationalism to the design of this “system”. Social life can’t be planned and, more importantly, it should not be planned, as the inherently externalizing perspective introduced by plans implies treating human beings as means.20

Our support of the grand cultural attitude is rooted quite differently. In this series of essays about the Urban (with a capital “U”, see footnote 16) we have been trying to find support for the concept of Urban Reason. Basically, this concept claims that human reason is strongly shaped, or even determined, by the embedding culture, which today, as a matter of fact, is urban culture. In short, human reason is itself a cultural phenomenon. One could indeed argue that this follows quite directly from Wittgenstein’s philosophy and the extensions provided by the late Putnam: any rule-following is deeply anchored in the respective Form of Life; any human thinking, being largely based on language, hence has the communal as one of its main components. As a consequence of the increasing weight of urban culture, which has meanwhile turned into a dominance even over the nation state, human reason is strongly shaped by the Form of Life of urban citizens. This holds for every tiny bit of the surface of planet earth, of course, even if some tribal community had never been in contact with modern forms of human social organization.

The quality of the Urban can’t be separated any more from human reason, thus from human culture at large. Everything we do around the Urban and within the Urban contributes to culture. This we call the Grand Cultural Hypothesis. In Deleuzean terms we could say that the Urban could be conceived as a distributed, process- and population-based, probabilistic plane of immanence. Regarding our extension of this Deleuzean concept, the Choreostemic Space, we could also say that the Urban establishes a particular attractor in it.

We could even extend this Grand Cultural Hypothesis by stating that all the institutions we nowadays rate as cultural emanations have always been urban. Things like writing, numbers, newspapers, books, astronomy, guilds, printing, operas, stadiums, open source, bureaucracy, police, power or governmentality could have emerged only in those arrangements we call the city. We have discussed this already elsewhere and won’t repeat it here.

The argument here is that the Urban is a particular form of dealing with differentiation. In turn, designing, or at least establishing, a particular way of dealing with differentiation and of inducing differentiating processes circumscribes what could be labeled a particular culture. Urban differentiation processes rarely engage with physical constraints, for the Urban introduces an emancipation from them, and people immersed in the Urban invent things like money and insurance. In other words, the Urban provides a stable platform for safeguarded experimentation with cultural goods, inventing also the methods and conditions for experimenting. Thus, even the very notion of method, as opposed to tradition, has been shaped by the Urban.

All this is not really surprising. It is well known that cities are breeding grounds for symbolization and sign processes. The Urban creates its own mediality. The Urban puts differentiation onto its stage; it invokes an almost cinematographic mise-en-scène of differentiation21. This result strongly contradicts the Cartesian and rationalist expectation that it would be possible to plan (aspects of) the city. Planning must be considered as just one of the three modes of differentiation, besides evolution and learning. Believing in the possibility and sufficiency of an a priori determinability just means mistaking the embryo for the fully fledged animal.

Obviously, the weighting of the three forms of actualization of differentiation is an act of will, albeit one that so far could be observed only in very rare cases22. This irreducible trinity in differentiation should, however, not be assigned just to individuals. It is a matter of politics and the collective as well, though this introduces a completely new level of negotiation into politics for most countries (except Switzerland, perhaps). Yet, it is probably the only form of politics that will remain in a truly and stably enlightened society. Each particular configuration of the above-mentioned trinity will exert rather specific constraints and even consequences. A first benefit of our extended concept of Generic Differentiation concerns the possibility and the mode of communicating the qualitative consequences of implementing certain designs.

The great advantage of talking at this level of abstraction is that the problematic field can be relieved of the collision of “values” and facts. It is accessible through the Differential23, that is, through a vertical speciation (in contrast to Descartes’ method and also to deconstructivism, both of which apply horizontal differencing only). Values and facts are not disregarded completely by rigorous linguistic hygiene, as Latour suggests. They are just not taken as a starting point. One should acknowledge that values and facts are nothing other than shortcuts in thinking, taken when thinking becomes a bit lazy.

Another advantage is that there is no longer any possibility of playing outcome (by any means) off against process (towards an open end). They are now deeply integrated into Generic Differentiation. This does not exclude indicative measures for the quality of a city or its neighborhoods, whether regarding more general issues like adaptivity, or more concrete ones like the development or relative level of attractiveness as measured by the monetary value of the cells in a district. It should be clear, however, that it is impossible to define short-term outcomes, e.g. as the “result” of the implementation of a plan. We could even say that measuring the city can be done in almost arbitrary ways, as long as there are many measures, the measures address various organizational levels, and the measures are stable across a long period of time.

All this allows us to rethink planning. It will have a profound effect on the self-perception of planners and on the profession of planning at large. Calls like that forwarded by Vanessa Watson, demanding “respect for cultural differences” [1], become dispensable, at least. We can see that they even lead to a false emphasis on identity, revitalizing the separation into process and outcome against their own intentions.

Starting with the primacy of difference, in contrast, allows us to bring in evolutionary aspects in a completely self-conscious manner. Difference is not something that must be respected or created. It must be braided deeply into the method, not into the corporeality of people as a representationalist concept. More exactly, as deeply as possible, that is, as a transcendent principle. It is more or less cant to proclaim “be different” or “rescue difference”, as this implies a belief in transcendental identity and logicism.

But now it is urgent to discuss the issue of ethics regarding planning and methods.

5. Values, Ethics, and Plans

No doubt, our attitudes towards our own future(s) are not shaped by contextual utility alone, and some overarching (idealistic) rationality may play only a partial role as well. From the background, or if you prefer, subliminally, a rich and blurry structure determines our preferences, hopes and intentions. Usually, this sphere of personal opacity is also thought to comprise what is often called values. Not surprisingly, values also appear in the literature about planning (cf. [24]24).

Undeniably, planning is in need of ethics25 and moral standards [25]. Yet, the area is a rather difficult one, to say the least. Rather well-known approaches like that proposed by Rawls (based on the abstract idea of justice), rationalism, or utilitarianism are known to be either defective, unsuitable for contemporary challenges, or both. Furthermore, it is difficult to derive moral standards from the known philosophical theories. Fortunately, there is an alternative. Yet, before we start we have to shed some light on the rhetoric implied by the notion of “plan”.

5.1. Language Games

In the context of the concept of Generic Differentiation we have already identified the “plan” and the respective notion of “development” as just one of the three modes of differentiation—development, evolution and learning—which can neither be separated from each other nor be reduced to each other. It is just a matter of relative weight.

Thus we can ask about the language game of “plan”. Language games are more or less organized and more or less stable arrangements of rules about the actualization of concepts into speech. I won’t go into details here; you can find the discussion of the relevant aspects in earlier essays.26 Yet, some points should be made explicit here as well.

The first is that the notion of the language game, as devised by Wittgenstein in his Philosophical Investigations, implies the “paradox of rule-following”27, which can be resolved only through reference to the Form of Life, which in simplified terms concerns the entirety of culture. Second, as a practice in language, the language game, e.g. that of talking about a “plan”, implies a particular pragmatics, or different kinds of aspects of such a speech act. Austin originally distinguished the locutionary, illocutionary and perlocutionary aspects. Austin maintains that these aspects are always present; they are not a matter of psychology or consciousness, but rather of language. With Deleuze (in Cinema 2) we can add the aspect of story-telling, which we called the delocutionary aspect of speech acts. Third, any actualization of a “bag of concepts” which then lets us invoke the term “plan” is just one out of a manifold, for the actualization of concepts requires forms of construction, or orthoregulation, as we called it. Usually we apply rather stable habits on this “way down” from concepts to words and acts, but we should always keep in mind that there are many different ways for this.

Underneath all of that is an acknowledgment of the primacy of interpretation, which includes a strong rejection of the claim of analyticity. Note that we reject analyticity here not as a consequence of some property of our subject, i.e. the property of “complexity,” in our case the complexity of the city. I think it is much stronger to reject it as a consequence of (human) culture and the fact of language itself.

Thus, we can ask about three things regarding the notions of “plan” or “planning”, although the aspects certainly overlap. First, which concepts are going to be invoked? Second, which story is to be told? Third, how is the story to be told?

The dimension of concepts could be covered by the notion of the “image of the city”. The “image of the city” is quite a bit more than just a model or a theory, albeit these make up a large deal of it. A preferential way to deal with images of the city, albeit just as a starting point, is David Shane’s way of theorizing the city. He manages to combine morphological, historical, political, technological and socio-dynamical aspects in a neat manner. Another, quite different mode of story-telling is provided by Rem Koolhaas, as we have discussed before.

The two latter questions are, of course, the more important ones. Just think about the idea of “ideal city,” the “garden city,” the “city of mobility,” or the “complex city”. Or the different stances such as rationalism, neo-liberalism, or utilitarianism. Or the issue of participation versus automation. Or who is going to tell the story? Let us start by returning to said “values”.

5.2. Values

Values are constants, singularities, quite literally so. As such, they destroy any possibility of comparison or mediatedness, just as numbers as mere values don’t have any meaning. To build a mathematics you need a systematicity of operations as well. The complete story is always made from procedures and variables, where the former always dominate the latter. A value itself is like a statue showing a passer-by. Yet, values are fixed, devoid of any possibility to move around, “pure” territorialization.

Thus, a secondary symbolization, mediatization and distribution of values (cf. [26]) does not really help in mitigating these difficulties. Claiming and insisting on values means just to claim “I am not interested in exchange at all”. Values are existential terms: either they are, or they are not. They are strictly dichotomous. Thus they are also logical terms. Not surprisingly, we find utilitarian folks making abundant use of positively formulated values.

Yet, values fail even with regard to their pretension of existentiality. Heidegger [11] writes (p.100) that

[…] the recourse to “valueish” qualities [cannot] even bring into view Being as readiness-to-hand, let alone let it become an ontological theme.
( […] die Zuflucht zu »wertlichen« Beschaffenheiten [kann] das Sein als Zuhandenheit auch nur in den Blick bringen, geschweige denn ontologisch zum Thema werden lassen.)

Consequently, it is nothing but a formal mistake to think that values could even come close to being a foundation for decision-making. Their existential incommensurability is the reason for a truly disastrous effect: values are the cause of wars, small ones and large ones. (And there is hardly another cause for them.) Values implement a particular mechanics of costs, which can only be measured in existential terms, too. What would be needed instead is a scale, not necessarily smooth, but at least useful for establishing some more advanced space of expressibility. Only such a double-articulating space, which is abstract and practical at the same time, allows for the possibility of translation at first, followed by mutual transformation.

This triple move of enabling expression, translation and transformation has nothing to do with tolerance. Tolerance, similar to values, is a language game indicating that there is no willingness for translation, not even for a transformation of one’s own “position”. In order to establish a true multiplicity, the contributing instances have to interpenetrate each other; otherwise one just ends up with modernist piles of dust, “social dust particles” in this case, without any structure.

In this context it is interesting to take a look at Bergson’s conceptualization of temporality. For Bergson, free will, the basic human tendency for empathy, and temporality are closely linked through the notion of multiplicity. In his contribution to the Stanford Encyclopedia of Philosophy, Lawlor writes [27]:

The genius of Bergson’s description is that there is a heterogeneity of feelings here, and yet no one would be able to juxtapose them or say that one negates the other. There is no negation in the duration. […] In any case, the feelings are continuous with one another; they interpenetrate one another, and there is even an opposition between inferior needs and superior needs. A qualitative multiplicity is therefore heterogeneous (or singularized), continuous (or interpenetrating), oppositional (or dualistic) at the extremes, and progressive (or temporal, an irreversible flow, which is not given all at once).

Bergson’s qualitative multiplicity, which he devises as a foundation for the possibility of empathy, is, now in our terms, nothing else than the temporal unfolding of a particular and abstract space of expressibility. The concept of values makes this space vanish into a caricature of isolated points. There is a remarkable consistency in that we can conclude with Bergson that values also abolish temporality itself. Yet, without temporality, how should there be any exchange, progress, or planning?

Some time ago, Bruno Latour argued in his “Politics of Nature” [28] (although he has meanwhile refreshed and extended these first investigations) that the distinction between facts and values is rarely useful and usually counterproductive:

We must avoid two types of fraud: one in which values are used in secret, to interrupt discussions of facts; and one in which matters of fact are surreptitiously used to impose values. But the point is not to maintain the dichotomy between moral judgments and scientific judgments. (p.100)

To overcome this dual and mutually assuring fraudulent arrangement, Latour proposes three major moves. The first is to stop talking about nature (facts), which results in abolishing the concept of nature completely. This amounts to a Wittgensteinian move, and aligns with Deleuze as well, in his critique of common sense. Already the talk about nature insinuates the fact and produces values as its complementary and incommensurable counterpart. “Nature” is an empty determination, since for a considerable time now everything on this globe has related to mankind and the human, as Merleau-Ponty pointed out from a different perspective.

The second step in Latour’s strategy amounts to the application of Actor-Network Theory, ANT. As a consequence, everything becomes political, even if the “thing” is not human but, for instance, a device, an animal, or any other non-human element.28 Within the network of actors he locates two different kinds of powers: the two powers to take into account (perplexity and consultation), traditionally called science, and the two powers to put in order (hierarchy and institution), usually called politics. The third step, finally, consists in gluing everything together by a process model29, according to which actors mutually “translate” each other in a purely political process, a “due process”. In other words, Latour applies a constitutional model, yet not a two-chamber model, but rather one of continuous assimilation and transformation. This process finally turns into a kind of “collective experimentation”.

Latour’s model is one that settles in the domain of socio-politics. As such, it is a normative model. Latour explicates the four principles, assigned to two kinds of power, by respective moral demands: this or that one “shall” do or not do. Not being rooted in a proper theory of morality, the Latourean moral appears arbitrary. It is simply puzzling to read about the “requirement of closure”, meaning that once the discussion is closed, it should not be re-opened, or about the “requirement of the institution” (p.111).

What Latour tries to explain is just how groups can find a common base, a common sense that stabilizes into a persistent organizational form; in other words, to align this thought with our concept of complexity, the transition from order—patterns in the widest sense—to organization.

Yet, Latour fails in his endeavor as it is presented in the “Politics of Nature”.

As Fraser remarked from a Deleuzean perspective [29],

Latour’s concept of exteriority obliges him to pursue a politics of reality which is the special providence of ‘moralists’, rather than a politics of virtual reality in which all entities, human and non-human, are engaged.

In order to construct his argument, Latour just replaces the old values by some new ones, while his main (and mistaken) “enemy” is Platon’s idealism. His attempts are inconsistent and incomplete.

Latour’s concept is too flat, without vertical contours, despite its rugged rhetoric. We must go “deeper,” and much closer to the famous wall where one could get a “bloody nose” (Wittgenstein). Yet, Latour also builds on the move of proceduralization, rejecting a single totalizing principle [28].

[…] to redifferentiate the collective using procedures taken either from scientific assemblies or from political assemblies. (p.31)

This move away from positive fixation yet towards procedures that are supposed to spur the emergence of a certain goal or even purpose may well be considered as one of the most important ones in the history of thought. The underlying insight is that any such positive fixation inevitably results in some kind of naïve metaphysics or politically practiced totalitarianism.

5.3. Ethics: Theories of Morality

Contrary to a widely held belief, ethics itself can’t say anything about the suitability of a social rule. As a theory30 of morality, ethics helps to derive an appropriate set of moral rules, but there can’t be “content” in ethics. It is extremely important to distinguish properly between ethics and morality. Sue Hendler, for instance, a rather influential scholar in planning ethics, never stopped conflating ethics and morality [30].

As a branch of philosophy, ethics is the study of moral behaviour and judgements. A key concept from the field of ethics is that it is possible to evaluate a given behaviour and give coherent reasons why it is ‘good or bad’. […] What criteria can be used to decide whether a given action is ethical?

Philosophy never “studies behavior”. Actions “are” not ethical; they can’t be, for grammatical reasons. Hendler equates types with tokens, a common fault committed by positivists. Contrary to the fashion of initiating all kinds of ethics, such as environmental ethics or said planning ethics (a terminology that appears frequently in the respective journals about planning), this is sheer nonsense, based on the same conflation of ethics and morality, that is, of theory and model. There can be only one level of theoretical argumentation that could be called ethics. There could be different such theories, of course, but none of them would directly consider practical cases. Behavior is the subject of morality, while morality is the subject of ethics.

5.4. Proceduralizing Theory

Some years ago, Wilhelm Vossenkuhl [31]31 published a viable alternative, or more precisely, a viable embedding for the concept of value, one which would ultimately lead to its dissolution. By means of a myriad of examples, Vossenkuhl first demonstrates that in the field of morals and ethics there are no “solutions”. Moral affairs remain problematic even after perfect agreements. Yet, he also rejects, on good grounds, the usual trail of abstract principles, such as “justice”, as proposed by Rawls in 1971. As Kant remarked in 1796 [32], any such singular principle can’t be realized except by a miracle. The reason is that any actualization of a singular principle corrupts the principle and its moral status itself.32 What we can see here is the detrimental effect of the philosophy of identity. If identity is preferred over difference33, you end up with a self-contradiction. Additionally, a singularity can’t be generative, which implies that an external institution is needed to actualize the principle formulated by the singularity. This leads to a self-contradiction as well.

Vossenkuhl’s proposal is radically different. In great detail he formulates a procedural approach to ethics and moral action. He refuses any positive formulation of moral content. Ethics, as a theory of morality, is necessarily empty. Instead, he formulates three precepts that together can be followed as individual and communal mechanisms in order to establish a moral procedurality. This allows one to achieve commonly acceptable factual configurations (as goals) without the necessity of defining apriori the content of a principle, or even a preference order regarding the implied values or profiles of values. These three precepts Vossenkuhl calls the maxims about scarcity (affecting the distribution of goods), norms (ensuring their viability) and integration (of goods and norms). All precepts regard the individual as well as the collective. The threefold mechanisms unfold in a field of tensions between the individual and the communal.

Such, ethics becomes the theory of the proceduralization of morality. Values—as constants of morality—are dissolved into procedures. This is the new Image of Ethics. Instead of talking about values, whether in planning, politics or elsewhere, one should simply care about the conditions for the possibility that such a proceduralization can take place. It should be noted that this proceduralization is closely related to Wittgenstein’s notion of rule-following.

There is nothing wrong with conceiving this as an implementation, because this ethics, as well as the moral, is free of content. Only if this is the case can people engaging in a discourse that affects moral positions (values) talk to each other, find a new position by negotiation, thereby transforming themselves, and finally settle on a proper agreement. Note that this is completely different from a tradeoff or from “tolerance”.

The precepts should not be imagined as a kind of object or entity with a clear border, or even with a border at all. After all, they are practiced by people, and usually by many of them. It is thus an idealistic delusion to think that the scarcity of goods or the safety of norms could be determined objectively, i.e. by a generally accepted scale. Instead, we deal with a population, and the precepts are best conceived as quasi-species, more or less separated subsets in the distribution of intensities. For these reasons, we can find a two-fold source of opposition: (i) the random variation of all implied parameters in the population, and (ii) the factual or anticipated contradiction of expected outcomes for small variations of the relative intensities of the precepts. In other words, the precepts introduce genuine complexity, and hence creativity through emergence and a self-generated ability for grouping.

The precepts are not only formulated as maxims to be followed, which means that they demand dynamic behavior of individuals. Together, they also have the potential to set a genuine dynamic creativity into motion, yet now on the level of the collective. The precepts are dynamic and create dynamics.

So, what about the relation between planning and ethics, between a plan and moral action? Let us briefly recapitulate. First, the modern version of ethics combines generative bottom-up mechanisms with the potential for mutual opposition and top-down constraints into a dynamic process. Particularly, this dynamics dissolves the mere possibility of identifiable borders between good and bad. The categories of good and bad are unmasked as a misguided application of logic to the realm of the social. Second, we found that plans inherently demand their literal implementation. As far as plans represent factual goals instead of probabilistic structural ones, e.g. as possibility or constraint, plans must be conceived as representational, hence simplistic, models about the world. In extremis we could even say that plans represent their own world. Plans are devices for actualizing the principle of the embryonic.

The consequence is quite clear. As long as plans address factual affairs they are not compatible with an appropriate ethics. Hence, in order to allow for a role of ethics in planning, plans have to retreat from concrete factual goals. This in turn has, of course, massive consequences for the way of controlling the implementation of plans. One possibility is again to follow an appropriate operationalization through some currency, where for instance the adaptive potential of the implemented plan is reflected.

This result may sound rather shocking at first sight. Yet, it is perfectly compatible with the perspective made possible through an applicable conceptualization of complexity, which we will meet again in a later section about the challenge of dealing with future(s).

6. Dealing with Future(s)

Differentiation is a process, pretty trivially so. Yet, this means that we could observe a series of braided events, in short, an unfolding in time and a generation of time. We have to acknowledge that the events unfold neither with the same speed, nor on the same thread, nor linearly, although at large the entirety of braided braids proceeds. The generation of time refers to the fact that the very possibility for, as well as the possible form of, further differentiation is created by the process itself.

We already mentioned that planning as one of the possible forms of differentiation represents only its deterministic, embryonic part. It is inherently analytic and representationalist, since the embryonic game demands a strict decoding and implementation of a plan, once the plan exists as some kind of encoded document. In other words, planning praises causality.

6.1. Informational Tools

Here we meet just a further blind spot of planning as it is understood today. Elsewhere we have argued that we can’t speak about causality in any meaningful manner without also talking about information. Otherwise it is simply a rather dirty reductionism, which does not even apply in physics any more, except perhaps in the case of Newton’s balls (apples?).

This blind spot concerning information comes with dramatic costs. I mean, it is really a serious blindness, affecting the unlocking of a whole methodological universe. Its consequence has been called the “dark side of planning” by Bent Flyvbjerg [34]. He coined that notion in order to distinguish ideal planning from actual planning. It is pretty clear that a misconceived structure opens plenty of opportunities to exploit the resulting frictions. It is certainly a common reaction among politicians to switch to strong directives in cases where the promised causality does not appear. Hence, failing planning is always mirrored in an open—and anti-democratic—demonstration of political power, which in turn affects future planning negatively. Like any deep structure, the philosophy of identity is more or less a self-fulfilling prophecy… unfortunately with all the costs, usually loaded onto the “small” people.

The argument is pretty simple. First, everybody will agree that planning is about the future. Second, as we have shown, the restriction of differentiation to planning imposes the constraint that everything around a plan is pressed into the scheme of identifiable causality, which excludes all forms that can be described only in terms of information. It is not really surprising that planners have certain difficulties with the primacy of interpretation, that is, the primacy of difference. Hence they are so much in favor of cybernetic philosophers like Habermas and Hegel. Thinking in direct causes strictly requires that a planner be pervasively present. Since this is not possible in reality, most plans fail, often in a double fashion: they fail despite huge violations of their budgets. There is a funny parallel to the field of IT projects and their management, of which it is well known that 80% of all projects fail, doubly. Planning induces the open demonstration of power, i.e. strictness, due to its structural strictness.

Without a “living” concept of information as a structural element a number of things, concepts and tools are neither visible nor accessible:

  • – risk, simulation, serious gaming, and approaches like Frederic Vester’s methodology,
  • – market,
  • – insurance,
  • – participatory evolutionary forms of organization, such as open source.

Let us focus on just the aspects of risk and market. In recent self-critical articles from the field of planning (cf. [4],[35]), but also in a quick Google™ search (first 300 entries), not a single notion of risk can be found where it is taken as a tool rather than as mere parlance. Hence, tools and concepts for risk management are completely unknown in planning theory, for instance value-at-risk methods for evaluating alternatives or the current “state” of an implementation, or scenario games34. Even conservative approaches such as “key performance indicators” from controlling are obviously unknown.
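To make the hint at value-at-risk concrete, here is a minimal sketch of how two plan alternatives could be compared by the cost overrun that is exceeded only with 5% probability. This is my illustration, not a method taken from the cited planning literature; all distributions and figures are invented assumptions.

```python
# Minimal value-at-risk sketch (illustrative only): compare two hypothetical
# plan alternatives by the 95% quantile of their simulated cost overruns.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical cost-overrun models, in % of budget (pure assumptions):
plan_a = rng.normal(loc=5.0, scale=10.0, size=n)           # modest, symmetric risk
plan_b = rng.lognormal(mean=1.0, sigma=0.9, size=n) - 2.0  # skewed, heavy-tailed risk

def value_at_risk(overruns: np.ndarray, level: float = 0.95) -> float:
    """Overrun that is exceeded only with probability 1 - level."""
    return float(np.quantile(overruns, level))

for name, sample in (("A", plan_a), ("B", plan_b)):
    print(f"plan {name}: mean overrun {sample.mean():5.1f}%, "
          f"95% VaR {value_at_risk(sample):5.1f}%")
```

The point of such a tool is precisely that two alternatives with similar mean overruns can carry very different tail risks, something a purely factual plan never exposes.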

We already indicated that planning theory suffers from a lack of abstract concepts. One of these concerns the way of mediating incommensurable and indivisible goals. In an information-based perspective it is easy to find ways to organize a goal-finding process. Essentially, there are two possibilities: the concept of willingness-to-pay and the Delphi method (from so-called “soft operations research”).

Willingness-to-pay employs a market perspective. It should not be mistaken for a “capitalist” or even “neo-liberal” strategy, of course. Quite the contrary: it introduces a currency as a basis for abstraction, and thereby the possibility of constructing comparability. This currency is not necessarily represented by money. Moreover, it works in both possible directions, regarding costs as well as benefits. Without that abstraction it is simply impossible to find any common aspects in those affairs that appear incommensurable at first sight. Unfortunately, almost every aspect of human society is incommensurable at first sight.
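A minimal sketch, assuming a toy stakeholder survey (the goals, groups and all numbers are hypothetical): expressing willingness-to-pay in an abstract currency of “effort points” makes two otherwise incommensurable goals comparable on the benefit side as well as on the cost side.

```python
# Willingness-to-pay in an abstract, non-monetary currency (toy example).
wtp = {
    "green corridor": {"residents": 40, "retailers": 5,  "commuters": 10},
    "parking garage": {"residents": 5,  "retailers": 35, "commuters": 25},
}
maintenance_cost = {"green corridor": 30, "parking garage": 45}  # same currency

for goal, bids in wtp.items():
    benefit = sum(bids.values())                 # aggregated willingness-to-pay
    net = benefit - maintenance_cost[goal]       # costs in the same currency
    print(f"{goal}: aggregated WTP {benefit}, net {net:+d} effort points")
```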

The second example is the Delphi method. It can be used, for instance, even for the very first step when incommensurabilities in goals and expectations need to be mediated: finding a common vocabulary, operationalized as a list of qualitative but quantifiable properties, finding “weights” for those, and making the holistic profiles transparent to every involved person.
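A minimal Delphi-style sketch (the property list and all numbers are hypothetical): experts weight qualitative but quantifiable properties; after each round the group median is disclosed and every expert revises toward it, which makes the holistic profiles transparent and usually narrows the spread round by round.

```python
# Delphi-style rounds over weighted, quantifiable properties (toy example).
import statistics

# initial weights per property, one entry per expert, on a 0..10 scale
weights = {
    "accessibility": [8, 3, 6, 9],
    "social fabric": [9, 7, 2, 8],
    "adaptability":  [4, 8, 7, 3],
}

for round_no in range(1, 4):
    print(f"round {round_no}:")
    for prop, ws in weights.items():
        med = statistics.median(ws)              # disclosed to all experts
        ws = [w + 0.5 * (med - w) for w in ws]   # each revises halfway toward it
        weights[prop] = ws
        spread = max(ws) - min(ws)
        print(f"  {prop:14s} median {med:4.1f}, spread {spread:4.1f}")
```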

It is quite clear that a metaphysical belief in identity, independence and determinability renders such approaches completely inaccessible. Poor guys…

6.2. Complexity

Not only in planning theory is it widely held that, as Manson puts it [36],

[…] there is no single identifiable complexity theory, but instead an array of concepts applicable to complex systems.

Furthermore, he also states that

[…] we have identified an urgent need to address the question of appropriate levels of generalization and specificity in complexity-based research.

Research about complexity is strongly flavored by the respective domain of its invocation, such as physics, biology or sociology. As an imported general concept, complexity is often more or less directly equated with concepts like self-organization, fractals, chaos or even the edge of it, emergence, strange attractors, dissipativity and the like (Haken’s synergetics belongs here as well).

A lot of myths have appeared around these labels. For instance, it has been claimed that chaos is necessary for emergence, which is utterly wrong. Even more catastrophic is the habit of mixing cybernetics and cybernetic systems theory with complexity. Luhmannian and Habermasian talking represents the conceptual opposite of an understanding of complexity; nothing could be more different! Yet, there are even researchers [37] who (quite nonsensically) explain emergence by the Law of Large Numbers… indeed a rather disappointing approach. Furthermore, it must be clear that self-organization and fractals are only weakly linked to chaos, if at all. On the other hand, concepts like self-organization or emergence are just aspects of complexity, and even more importantly, they are macro-theoretical descriptive terms which cannot be transferred across domains.

The major problem in the contemporary discourse about complexity is that this discourse is not critical enough. Instead, people always first asked “what is complexity?” before despairing of their subject. Finally, the research about “complexity” made its way into the realm of the symbolic, now expressing more a habit than a concept that could be utilized in a reasonable manner. The 354th demonstration of a semi-logarithmic scaling is simply boring and has nothing to do with “complexity”. Note that a multiplicative junction of two purely random processes creates the same numerical effect…
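This caveat is easy to check numerically. The following minimal sketch (my construction, with arbitrary distribution choices) multiplies two purely random, positive processes; the tail of the product’s histogram already looks roughly straight on semi-logarithmic axes over a broad range, the same “numerical effect” as in many published scaling plots.

```python
# The product of two purely random, positive variables mimics a semi-log
# "scaling law": its histogram tail is nearly straight on log(count) axes.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

x = rng.exponential(scale=1.0, size=n)
y = rng.exponential(scale=1.0, size=n)
z = x * y  # multiplicative junction of two random processes

counts, edges = np.histogram(z, bins=60, range=(0.0, 15.0))
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0

# Fit a straight line to log(counts) vs. value, i.e. semi-log axes:
log_counts = np.log(counts[mask])
slope, intercept = np.polyfit(centers[mask], log_counts, 1)
residuals = log_counts - (slope * centers[mask] + intercept)
print(f"semi-log slope: {slope:.2f}, max residual: {np.abs(residuals).max():.2f}")
```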

Despite those difficulties, complexity has entered various domains, yet always just as an attitude. Usually this leads either to a tremendous fuzziness of the respective research or writing, or to perfected emptiness. Franco Archibugi, who proposes a rationalist approach to planning, recently wrote ([5], p.64):

The planning system is a complex system (footnote 24).

… and in the respective footnote 24:

Truly this seems a tautology; any system is complex by definition.

Here, the property “complex” gets both inflated and logified, and neither is appropriate.

What has been missing so far is an appropriate elementarization on the level of mechanisms. In order to adapt the concept of complexity to any particular domain, these mechanisms then have to be formulated in a probabilistic manner, or strictly with regard to information. The five elements of complexity, as we devised them previously in a dedicated essay, are

  • (1) dissipation, i.e. deliberate creation of additional entropy by the system at hand;
  • (2) an antagonistic setting of distributed opposing “forces”, similar to the morphogenetic reaction-diffusion system first described by Alan Turing (see the sketch after this list);
  • (3) standardization;
  • (4) active compartmentalization as a means of modulating the signal horizon, understood as signal intensity length;
  • (5) systemic knots.
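Element (2) can be made tangible in a few lines of code. The following is a minimal sketch of such a distributed antagonism, using the Gray-Scott reaction-diffusion model (a standard variant, not Turing’s original equations); all parameter values are conventional demo choices of mine and carry no weight for the argument.

```python
# Minimal 1-D Gray-Scott sketch: two distributed, antagonistic "forces"
# (a self-enhancing activator v and a depleting substrate u) typically let a
# stable spatial pattern emerge from a small local perturbation.
import numpy as np

n, steps = 200, 10_000
Du, Dv, f, k = 0.16, 0.08, 0.035, 0.060  # diffusion rates, feed, kill

u = np.ones(n)
v = np.zeros(n)
mid = slice(n // 2 - 5, n // 2 + 5)
u[mid], v[mid] = 0.50, 0.25              # small local perturbation

def laplacian(a: np.ndarray) -> np.ndarray:
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a  # periodic boundary

for _ in range(steps):
    uvv = u * v * v                      # the antagonistic reaction term
    u += Du * laplacian(u) - uvv + f * (1 - u)
    v += Dv * laplacian(v) + uvv - (f + k) * v

print(np.round(v[::10], 2))              # a spatially structured profile
```

Despite containing neither a plan nor a goal, the two opposing, distributed processes are sufficient for a persistent spatial pattern to emerge.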

Arranging the talk about complexity in this way has several advantages. First, these five elements are abstract principles that together form a dynamic setup resulting in the concept of “complexity”. In this way it is a proceduralization of the concept, which allows one to avoid the burden of a definition without slipping into fuzzy areas. Second, these elements can be matched rather directly to empirical observations across a tremendous range of domains. No metaphorical work is necessary, as there is no transfer of a model from one domain to another.

Note that, for instance, “emergence” is not part of our setup. Emergence is itself a highly integrated concept with a considerable degree of internal heterogeneity. We would have to discern weak from strong emergence, at least, and we would have to clarify what we understand by “novelty”, and so on; questions that could neither be clarified nor used on the descriptive, empirical level.

There is yet a third significant methodological aspect of this elementarization. It is possible to think about a system that is missing one of those elements, that is, where one of these elements is set to zero in its intensity. The five elements thus span a space that transcends the quality of any particular system. These five elements create two spaces, one conceptual and one empirical, which however are homeomorphic. The elements are not only necessary and sufficient to talk about complexity; they are also necessary and sufficient for any corporeal arrangement to develop “complexity”. Thus, it is easy and straightforward to apply our concept of complexity.

The first step is always to ask for the respective instantiation of the elements: Which antagonism can we detect? What is its material carrier? How many parts can we distinguish in space and time? Which kind of process embeds this antagonism? How is compartmentalization established, materially or immaterially? How stable is it? Is it a morphological or a functional compartmentalization? What is the mechanism for establishing the transition from order to organization? Which levels of integration do we observe? Is there any instance of self-contradictory top-down regulation? Are there measures to avoid such (as for instance in the military)?

These questions can be “turned around,” of course, then being used as design principles. In other words, using this elementarization it is perfectly possible to scale the degree of volatility shown by the “complex system”.

The only approach transparently providing such an elementarization and the respective possibility of utilizing the concept of complexity in a meaningful way is ours (still, as far as we are aware of recent publications35… feedback about that is welcome here!)36.

Of those, elements 2 and 4 are certainly the most important ones when it comes to the utilization of the concept of complexity. First, one has to understand that adaptivity requires a preceding act of creativity. Next, only complex systems can create emergent patterns, which in turn can be established as a persistent form in only one of two ways: either by partially dying, creating a left-over, or by evolution. The first of these is internal to the process at hand, the second external. Consequently, only complex systems can create adaptivity, which in turn is mandatory for a sustainable regenerativity.

So element (2), the distributed antagonism, denies the reasonability of identity and of consensus-finding as a homogenizing procedure, if the implemented arrangement (“system”) is supposed to be adaptive (and enabled for sustainability). Element (4) emphasizes the importance of the transition from order (mere volatile pattern) to persistent or even morphological structures, called organization. Yet, living systems provide plenty of demonstrations that persistence does not mean “eternal”. In most cases structures are temporary, despite their stability. In other words, turnover and destruction are active processes in complex systems.

Complexity needs to be embraced by planning, regarding its self-design as well as the plan and its implementation. Our elementarization opens the route to planning complexity. Even a smooth scaling of the space between complexity and determination could now be addressed.

It is quite obvious that an appropriate theory of complexity is highly relevant for planning in any domain. There are, of course, some gifted designers and architects, as well as a few authors, who have been following this route, some even long ago, as for instance Koolhaas in his Euro-Lille. Others like Michael Batty [42][43] or Angelique Chettiparamb (cf. [44][45][46]) investigate and utilize the concept of complexity in the fields of urbanism or planning almost as I propose it. Yet, only almost, for so far they did not conceptualize the notion of complexity in an operationalizable manner.

There is a final remark on complexity to put here, concerning its influence on the dynamics of theory work. Clearly, the concept of complexity transcends ideas such as rationalism or pragmatism. It may be conceived as a generic proceduralization that reaches from thought (“theory”) to action. It is its logic of genesis, as Deleuze called it, that precedes any particular “ism”, as well as the separation of theory and practice in the space of the Urban. It is once again precisely here, in this space of ever surprising novelty, that ethics becomes important, notably an ethics that is structurally homeomorphic through its own proceduralization, where the procedures are at least partially antagonistic to each other.

6.3. Vision

Finally, let me formulate a kind of vision, by referring to just one of the more salient examples. In developing countries there is a large number of informal settlements, more often than not tending towards slum conditions. More than 30% of urban citizens across the world live in slum conditions. At some point in time, the city administration usually decides to eradicate the whole area. Yet, this comes at the cost of destroying a more or less working social fabric. The question obviously is one of differentiation. How to improve means how to differentiate, which in turn means how to accumulate potential. The answer is quite easy: by supporting enlightened liberalism through an anti-directionist politics (cf. [48]). Instead of bulldozing and forcing people to leave, and even instead of implanting a “solution” of whatsoever kind in a top-down manner, simply provide them with two things: (i) basic education about materials and organization in an accessibly compiled form, and (ii) the basic materials. The rest will be arranged by the people themselves, as this introduces the opportunity for arbitrage profits. It will not only create a sufficiently diversified market, which of course can be supported in its evolution; it will also create a common good of increased value for the whole area. Such an approach will work for the water problem, whether fresh water or waste water. My vision is that this kind of thinking will be understood, at least (much) more frequently…

7. Perplexion

The history of the human, the history of conceptual thinking and—above all—its transmission by the manifold ways and manners this conceptual thinking has been devising, all of this, up to contemporary urban society, is a wonderful (quite literally) and almost infinite braid. Our attempts here are nothing more than an effort to secure this braiding by pointing to some old, almost forgotten embroidery patterns and by showing some new ones.

I have always been clear about another issue, but I would like to emphasize it again: starting with the idea of being, which equals that of existence or identity, demolishes any possibility of thinking the different, the growing, the novel, in short, life. This holds even for Whitehead’s process philosophy. Throughout this blog, as it stands so far, I have been trying to build something, not a system, not a box, but something like an Urban Thought. The ideas, concepts and ways in which that something has been actualizing are stuffed (at least in my hopes) with an inherent openness. Nevertheless, I have to admit that it feels like approaching a certain limit, as thoughts and words tend increasingly to enter the “eternal return”. Yet, don’t take this as a resignation, or even the beginning of a nihilistic phase. It is meant as an out and out positive thought. But still…

Maybe these thoughts have been triggered by a friend’s hint towards a small, quite (highly?) exceptional book, or booklet, of unknown origin: the “Liber viginti quattuor philosophorum”, the Book of the 24 Philosophers.37 Written presumably somewhere between 800 and 1200 AD38, it consists of just 24 philosophical theses about our relation to God. The main message is that we can’t know, despite the fact that knowing seems to be implicated.

7.1. Method, Generic Differentiation and Urban Reason.

Anyway. In this essay we explored the notion of method. Beginning with Descartes’ achievements, we then tried to develop a critique of them. Next we embedded the issue of planning and method into the context of Urban Reason, including the concept of Generic Differentiation [henceforth GD], which we explicated in the previous essay, where we devised it for organizing theory works. Let us reproduce it here again, just as a little reminder.

Figure 3: The structural pragmatic module of Generic Differentiation for binding theory works, modeling and operations (for details see here). This module is part of a fluid moebioid fractal that grows and forms throughout thinking and acting, which thereby are folded into each other. The trinity of modes of actualization (planning, adapting, learning) passes through this fractal figure.


All four concepts of growth, networks, associativity and complexity can be conceptualized in a proceduralized form as well. Additionally, they all could be taken as perspectives onto abstract, randolated and thus virtual yet probabilistic networks.

Interestingly, this notion opens a route into mathematics through the notions of computability and non-Turing computing (see also [52]). Here, we may take this just as a further indication of the fundamental perspective of information as a distinct element of construction whenever we talk about the city, the Urban, and the design regarding it.

7.2. “Failing” Plans

Thinking of planning without the aspects of evolution and learning would equal, as we have repeatedly emphasized, the claim of the analyticity of the world. Such planning would follow positivist or rationalist schemes and could be called “closed planning”. Only under the presupposition of the world’s analyticity could such planning be considered reasonable.

Since the presupposition is obviously wrong, closed planning schemes such as positivist or rationalist ones are doomed to fail. Yet, this failing is a failure only from the perspective of the plan or the planner. From the outside, we can’t criticize plans as failing, since in that case we would confine ourselves to the rationalist scheme. For the diagnosis of failure in a cultural artifice such as a city, or a settlement in the widest sense, always requires presuppositions itself. Of course, in some contexts, like that of financial planning within an organization, these presuppositions can be operationalized straightforwardly into amounts of money, since the whole context is dominated by it. Financial planning is almost exclusively closed planning.

In the context of town planning, however, even the result of bad planning will always be inhabitable in some way, for in reality the plan is actualized into an open, non-analytical world. The argument is the same one Koolhaas applied to the question of the quality of buildings. In China, architects on average build hundreds if not thousands of times more space than in Europe, and there is no particular awareness of what Western people call the quality of architecture. The material arrangements into which plans actualize will always be used in some way. But it is equally true that a considerable part of this usage will consist of ways of using the result that have not been planned.

This way, plans never fail, but at the same time they always fail, as they always have to be corrected. The only thing that becomes clear from this is that the reduction of the planner’s perspective to the plan sensu stricto is the actual failure. A planning theory that does not consider evolution and learning isn’t worth the paper onto which it is written.

Both aspects, evolution and learning, need to be expressed, of course, in a proper form before one could assimilate them to the domain of arranging future elements (and elements of the future). It is particularly important to understand that “learning” does not refer to human cognition here. It refers to the whole, that is, the respectively active segment of the city itself, much in the sense of an Actor-Network (following Bruno Latour [53]), but also to the concept of the city as an associative corporeality in itself, as I pointed out some time ago [54].

7.3. Eternal Folds

Generic Differentiation is deeply doubly-articulated, as Deleuze would perhaps have said39. GD may serve as a kind of scaffold to organize thoughts (and hence actions) around the challenge of how to effectuate ideas and concepts. Remember that concepts are transcendent and not to be mistaken for definitions! Here in this piece we tried to outline how an update of the notion of “method” could look. Perhaps you have been missing references to the more recent discourses, in which, among others, you could find Michel Serres or Isabelle Stengers, but also Foucault, to name just a few. The reason for leaving them aside is simply our focus on planning and the Urban, about which those authors did not say very much (I mean with respect to the problematics of method).

Another route I didn’t follow was to develop and provide a recipe for planning of whatsoever sort, particularly not one that could be part of a cookbook for mindless robots. That would simply contradict the achieved insights about Differentiation. Yet, I think that something rather close to a manual could be possible, perhaps a meta-manual targeting the task of creating a manual, one that would help to write down a methodology. A “methodology” which deserves the label is a kind of open didactic talking about methods, and as such it necessarily comprises some reflection (which is missing in recipes). Such, it is clear that the presented concepts about method around Generic Differentiation should not be perceived as such a methodology. Take it rather as a pre-specific scaffold for externalizing and effectuating thought, to confront it with the existential resistance. Thus, the second joint of said double-articulation of Generic Differentiation, besides such scaffolding of thought, connects towards the scaffolding of action.

The double-articulated rooting of method (as we developed it as a concept here) in the dynamics of physical arrangements and in the realm of thoughts and ideas enables us to pose three now rather urgent questions in a clear manner:

  • (1) How to find new ways into regenerative urban arrangements? (cf. [51]);
  • (2) How to operate the “Image of Urban”?40
  • (3) The question for a philosophy of the urban […] is how the energetic flow of undifferentiated potentiality in/of urban arrangement might be encoded and symbolically integrated, such that through its transposition into differentiable capacity ability, proficiency and artifice may emerge. (after [52], p.149)

Bühlmann (in [55] p.144/145) points out that

The difficulty, in philosophically cogitating the city or the urban, lies […] with the capacity of dealing in an open and open-ended, yet systematic manner with the determinability of initial and final states. It is precisely the determination of such “initial” and “final” states that needs to be proceduralized.

I guess that those three questions can be answered only together. It is in the corpus (and corporeality) of the virtual and actualized answers that we will meet the Urban Reason. Here, in concluding this essay, we can only indicate the directions, and this only in rather broad strokes.

Regenerative cities, in the sense of “sustainable sustainability”, can be achieved only through a persistent and self-sustained, yet modulated, complexity of the city. A respective process model is easy to achieve once it is understood how complexity and ethics are mutually supportive. This also implies a significant political aspect, which has often been neglected in the literature about planning. We also referred to Latour’s suggestion of a “Politics of Nature,” which however does not contribute to solving the problem that he pretends to address.

We have shown here that, and how, our notions of method and complexity can be matched with a respective contemporary ethics, which is a mandatory part of the planning game. Planning as such, i.e. in the traditional meaning of mechanistic implementation, ceases to exist. Instead, planning has to address the condition of the possible.

Such, any kind of planning of any kind of arrangement first undergoes a Kantian turn through which it inevitably changes into “planning of the potential”. Planning the potential, in turn, may be regarded as a direct neighbor of design, its foundation [56] and methodology.41 This reflects the awareness of the primacy of the conditions for the possibility of complexity. These conditions can be actualized only if planning is understood as one of the aspects of the trinity of Generic Differentiation, which besides planning comprises evolution and learning, invoking in turn the concepts of population/probabilism and associativity. All parts of the “differentiation game” have to be practiced, of course, in their proceduralized form. No fixed goals on the level of facts any more, no directive policies, no territorialism, no romanticism hugging the idea of identity any more, please… It is the practice of proceduralization, based on a proper elementarization and bridging from ethics to complexity, that we can identify as the method of choice.

The philosophical basis for such a layout must necessarily deny the idea of identity as a secure starting point. Instead, all the achievements presented here can appear only on the foundation provided by transcendent difference [57]. I am deeply convinced that any “Science of the City” or “Methodology of Planning” (the latter probably as a section of the former) must adhere to appropriate structural and philosophical foundations, for instance those that we presented here and which are part of Urban Reason. Otherwise it will quite likely give rise to the surge of a kind of political absolutism quite similar to the one that succeeded Descartes’ consideration of the “Methode”.

8. Summary

We explored the notion of “method” and its foundations with regard to planning. Starting from its original form, as created by Descartes in his “Discours de la méthode”, we found four basic vectors that span the conceptual space of planning.

Ethics and complexity are not only regarded as particular focal points, but rather as common and indispensable elements of any planning activity. The proposed four-fold determination of planning should be suitable to overcome rationalist, neo-liberal, typically modernist or positivist approaches. In other words, without those four elements it is impossible to express planning as an activity or to talk reasonably about it. In its revised form, both the concept and the field of planning allow for the integration of deep domain-specific knowledge from the contributing specialized domains, without stalling the operational aspects of planning. Particularly, however, the new, or renewed, image of planning offers the important possibility of joining human reason into the Urban activities of designing and planning our urban neighborhood, and above all, living it.

9. Outlook

In most cases I didn’t give an outlook onto the next essay, due to the spontaneous character of this bloggy journey, as well as the inevitable autonomy of the segregated text, which increases more and more as time passes.

This time, however, the topic of the follow-up is pretty clear. Once started with the precis of Koolhaas’ “Generic City”, the said journey led us first to the concept of “Urban Reason” and the Urban as its unique, if not solitary, cultural condition. The second step then consisted in bundling several abstract perspectives into the concept of Generic Differentiation. Both steps have been linked through the precept “Nothing regarding the Urban Makes Sense Except in the Light of the Orchestration of Change.” The third step, as elaborated here, was then a brief (very brief indeed) investigation of the subject and the field of planning. Today, this field is still characterized by rather misty methodological conditions.

The runway towards the point of take-off for the topic of the next essay, then, may easily be marked by a quote from Sigfried Giedion’s “Space, Time and Architecture” (p.7):

For planning of any sort our knowledge must go beyond the state of affairs that actually prevails. To plan we must know what has gone on in the past and feel what is coming in the future.

Giedion was an interesting person, if not to say composition, to borrow a notion from Bruno Latour. Being a historian, engineer and entrepreneur, among several other roles, he was in many ways modernist as well as a-modern. Not completely emancipated from the underlying modernist credo of metaphysical independence, he nevertheless demanded an integration of the aspect of time as well as that of relationability, which assigns him the attitude of a-modernism, if we utilize Aldo Rossi’s verdict on modernism’s attempt to expunge time from architecture.

Heidegger put it very clearly (only marginally translated into my own words): without understanding the role of time and temporality for the sphere of the human, we can’t expect to understand the Being of man-made artifacts and human culture. Our challenge regarding Heidegger will be that we have to learn from his analysis without partaking in his enterprise of a critique of fundamental ontology.

More recently, Yeonkyung Lee and Sungwoo Kim [58] pointed, based on Giedion’s work, to the remarkable fact that there is only little theoretical work about time in the field of architecture and urbanism. We regard this as a consequence of the prevailing physicalist reductionism. They also hold that

further critical and analytical approaches to time in architecture should be followed for more concrete development of this critical concept in architecture. (p.15)

Hence, our next topic will be just a subsection of Giedion’s work: Time and Architecture. The aspect of space can’t be split off, of course, yet we won’t discuss it in any depth, because it deserves a dedicated treatment of its own, mainly due to the tons of materialist nonsense that has been floating around since Lefebvre’s (ideological) speculations (“The Production of Space”). Concerning the foundations, that is, the concept of time, we will meet mainly Deleuze and Heidegger, Bergson and his enemy Einstein, and, of course, also Wittgenstein. As a result, I hope to enrich and differentiate the concept of Generic Differentiation even more, and thus also the possible space of the Urban.

Notes 

1. Descartes’ popularity is based, of course, on his condensed and almost proverbial “Cogito, ergo sum”, by which he sought to gain secure grounds for knowledge. Descartes’ Cogito raises difficult issues, and I can only guess that there are lots of misunderstandings about it. Critique of the Cogito started already with Leibniz, and included, among almost everybody else, Kant, Hume, Nietzsche and Russell. The critique targets either the logic (“ergo”), the implications regarding existence (“sum”), or the “I” in the premise. I will neither add to this criticism nor comment on it; yet, I would just like to point to another possibility of approaching it, opened by refraining from logic and existentialism: self-referentiality. The “I am thinking” may be taken as a simple, still unconscious observation that there is something going on that uses language. In other words, a language-pragmatic approach paired with self-referentiality opens a quite fresh perspective onto the Cogito. Yet, this already would have to count as an update of the original notion. To my knowledge this has never been explored by any of the philosophical scholars. In my opinion, most of the critiques of the Cogito are wrong, because they stick to rationalism themselves. The foundation of rationalism, however, can’t be rational itself in its beginning, only through its end (not: “ends”!) and its finalization. Anyway, neither the Cogito nor the sum nor the “I” is the subject of our considerations here. Actually, there is not much to say, as such “traditional” metaphysics misunderstands “grammatical sentences” as metaphysical sentences (Ludwig Wittgenstein, in “On Certainty”).

Concerning the wider topic of rationalism as a problematic field in philosophy, I suggest to resolve its position and (at least partial) incommensurability to other “-ism” – modes by means of the choreostemic space, where it just forms a particular attractor.

2. Wittgenstein and mainstream cognitive science hold that this should not be possible. Yet, things are not as simple as they may appear at first sight. We could not expect that there is a “nature” of thinking, somehow buried beneath the corporeality of the brain. We certainly can take a particular attitude to our own thinking, just as we can (learn to) apply certain tools and even methodologies in our thought that is directed at our thought. The (Deleuzean) Differential is just one early example.

3. Just to mention, as a more recent example, the “failure” of Microsoft’s strategy of recombinable software modules, as opposed to the success of the unique app as inaugurated by Apple.

4. Most of the items and boxes in this backpack did not influence the wider public in the same way as Descartes did. One of the most influential among the available items, Hegel, we have already removed; it is just dead freight. The group of less known but highly important items comprises the Kantian invention of critique, the transparent description of the sign by Peirce, the insight into the importance of the Form of Life and the particular role and relation of language (Wittgenstein, Foucault), and the insight into the detrimental effects of founding thought on logicism, also known as the belief in necessity, truth values, and the primacy of identity. These achievements are not recognized among the wider public, whether we consider the sciences, the design area or politics. All of them are clearly beyond Descartes’, but we should not forget two things. Firstly, he was just a pioneer. Secondly, we should not forget that the whole era favored a mechanic cosmology. The law of large numbers in the context of probabilism as a perspective had not been invented yet in his times.

5. The belief in this independence may well count as the most dominating of the influences that brought us the schizophrenias culminating in the 19th and 20th century. Please don’t misunderstand this as a claim of “causality” as understood in the common sense! Of course, there have been great achievements, but their costs have always been externalized, first to the biological environment, and second to future generations of mankind.

6. By “planning” I don’t refer just to the “planning of land-use” or other “physical planning” of course. In our general context of Urban Reason and the particular context of the question about method here in this essay I would like to include any aspect around the planning within the Urban, particularly organizational planning.

7. Meant here without any kind of political, ethical or sociological reading, just as the fact of the mere physical and informational possibility.

8. Original in German (the translation in the text is mine): ”Ob das Gewicht der Forschung gleich immer in dieser Positivität liegt, ihr eigentlicher Fortschritt vollzieht sich nicht so sehr in der Aufsammlung der Resultate und Bergung derselben in »Handbüchern«, als in dem aus solcher anwachsenden Kenntnis der Sachen meist reaktiv hervorgetriebenen Fragen nach den Grundverfassungen des jeweiligen Gebietes. […] Das Niveau einer Wissenschaft bestimmt sich daraus, wie weit sie einer Krisis ihrer Grundbegriffe fähig ist.”

9. As we mentioned elsewhere, the habitus of this site about practical aspects of Hilary Putnam’s philosophical stance is more that of a blook than that of a blog.

10. Descartes and Deleuze are of course not the only ones interested in the principles or methods of and in thought. For instance, Dedekind proposed “Laws of Thought” which were to include things like creative abstraction. It would be a misunderstanding, however, to look to psychology here. Even so-called cognitive psychology can’t contribute to the search for such principles, precisely because it is in need of schemata to investigate. Science can only ever investigate what “there is”.

11. Nowadays often called a system, thereby referring to “systems science”, often also to Niklas Luhmann’s extension of cybernetics into the realm of the social. Yet, it is extremely important to distinguish the whole from a system. The whole is neither an empiric nor an analytic entity; it couldn’t be described completely as an observation, a set of formula(s), a diagram, or any combination thereof, which for instance is possible for a cybernetic system. Complex “systems” must not be conceived as systems in the mood of systems theory, since openness and creativity belong to their basic characteristics. For complex systems, the crude distinction of “inside” and “outside” does not make much sense.

12. Thinking “items” as independent becomes highly problematic if this belief is applied to culture itself in a self-referential manner. Consequently, man has been thought to be independent from nature. “Precisely, what is at stake is to show how the misguided constitution of modernity finds its roots in the myth of emancipation common to the Moderns. […] Social emancipation should not be condemned to be associated with an avulsion from nature, […]. The error of the modern constitution lies in the way it describes the world as two distinct entities separated from each other.” [18]. It is quite clear that the metaphysical belief in independence lies beneath the dualisms of nature/culture, nature/nurture, and body/mind. This does not mean that we could not use in our talking the differences expressed in those dichotomies; yet, the differences need not be placed into a strictly dichotomic scheme. See the section about “values” and Bruno Latour’s proposal.

13. This does not imply a denial of God. Yet, I think that any explicit reference to the principle of divinity implicitly corroborates that idea.

14. It is inadequate because by definition you can’t learn from a case study. It is a misbelief, if not a mystical particularism, to think that case studies could somehow “speak for themselves.” The role of a case study must be that of an opportunity to challenge classifications, models and theories. As such, case studies have to be used as a means and a target for transformative processes. Yet, this is rarely done with case studies.

15. Subsequent to Niko Tinbergen’s distinction, Dobzhansky introduced a particular weight onto those four perspectives, emphasizing the evolutionary aspect: Nothing in biology makes sense except in the light of evolution. For him, evolution served as a kind of integrative perspective.

16. As in the preceding essays, we use the capital “U” if we refer to the urban as a particular quality and as a concept in the vicinity of Urban Reason, in order to distinguish it from the ordinary adjective that refers to common sense understanding.

17. Difference between architecture and arts, particularly painting.

18. Yet, he continues: “As such, it must be designed according to a model which takes into account all the possible fields of decision-making and all decision-makers who play a role in social life. It has a territorial dimension which is “global” in the literal sense: it extends to the planetary scale.” (p.64) So, since he proposes a design of planning, he obviously invokes a planning of planning. Yet, Archibugi does not recognize this twist. Instead, he claims that this design can be performed in a rationalist manner on a global scale, which—as an instance of an extended control phantasm—is definitely overdone.

19. In more detail, Archibugi claims that his approach is able to integrate traditional fields of planning in a transdisciplinary methodological move, based on a “programming” approach (as opposed to the still dominant positivistic approach). The individual parts of this approach are
+ a procedural scheme for the selection of plans;
+ a clarification of the interrelationships between different “levels” of planning;
+ a description of institutional procedures of plan bargaining;
+ a consulting system on preference, information, monitoring, and plan evaluation.

Yet, such a scheme, particularly if conducted as a rationalist program, is doomed to fail for several reasons. In monitoring, for instance, he applies an almost neo-liberal scheme (cf. p.81), unaware of the necessity of the apriori of theoretical attitudes as well as of the limitations of reasoning that is grounded solely in empirical observations.

20. Of course, we are not going to claim that “society” does not need the activity of and the will to design itself. Yet, while any externalization needs a continuous legitimization—and by this I don’t refer to one election every four years—, the design of the social should target exclusively the conditions for its open unfolding. There is a dark line from totalitarian Nazi Germany, via the exiled Jewish sociologist, the Macy Conferences and their attempt to apply cybernetics directly to the realm of the social, finally to the rationalist Frankfurt School with its late proponent Habermas and his functionalism. All of these show the same totalitarian grammar.

21. Deleuze’s books about cinema and the image of time [33].

22. Rem Koolhaas, Euro-Lille, see this.

23. Just to recall: the Differential is the major concept in Deleuze’s philosophy of transcendental empiricism, which sets difference, not identity, as primal: primacy of interpretation, rejection of identity and analyticity, a separation-integration.

24. Sue Hendler dismisses philosophical foundations of ethics for the area of planning as “formalistic”. Instead she continues to draw on values, interestingly backed by a strong contractual element. While this may sound pragmatic at first, it is nothing but utilitarian. Contracts in this case are just acts of ad-hoc institutionalization, which in turn build on the legislative milieu. Thus I reject this approach, because here ethics would just turn into a matter of the size of the monetary investment in lawyers.

25. Note that ethics is the theory of morality, while morality is the way we deal with rules about social organization.

26. here and here or here;

27. It is a paradox only from a rationalist perspective, of course.

28. “Thing” is an originally Nordic concept that refers to the fixation of a mode of interpretation through negotiation. The “Althing” is the name of the Icelandic parliament, existing in an uninterrupted period since roughly 930 AD. A thing thus exists as an objectified/objectifiable entity only subsequent to the communal negotiation, which may or may not include institutions.

29. Inspired by Alfred N. Whitehead and Isabelle Stengers.

30. See this about the concept of theory.

31. Unfortunately available in German language only.

32. This just demonstrates that it is not unproblematic to jump on the bandwagon of a received view, e.g. the widely discussed and academically well-established Theory of Justice by John Rawls, as for instance exemplified by [23].

33. What is needed instead for a proper foundation is a practicable philosophy of Difference, for instance in the form proposed by Deleuze. Note that Derrida’s proclaimed “method” of deconstruction can serve neither as a philosophical foundation in general nor as an applicable one. Deconstruction establishes the ideal of negativity, from which nothing could be generated.

34. With one (1) [41], or probably two (2) [40] notable and somewhat similar exceptions which however did not find much (if any) resonance so far…

35. Jensen also contributed to a monstrous encyclopedia about “Complexity and Systems Science” [39], comprising more than 10’000 pages (!), which however does not contain one single usable operationalization of the notion of “complexity”.

36. One of the more advanced formulations of complexity has been provided by the mathematician Henrik Jeldtoft Jensen (cf. [38]). Yet, it is still quite incomplete, because he neither recognizes nor refers to the importance of the distributed antagonism, nor does he respond to the necessity that complex systems have to be persistently complex. He is also wrong in the conjecture that there must be a “large number of interacting components”.

37. See the review by the German newspaper FAZ, a book in German, an unofficial translation into English, and one into French. Purportedly, there are translations into Spanish, yet I can’t provide a link.

38. Hudry [49] attributes it to Aristotle.

39. Deleuze & Guattari developed and applied this concept first in their Mille Plateaux [50].

40. The notion of an “Image of Urban” is not a linguistic mistake, of course. It parallels Deleuze’s “Image of Thought”, where thought refers to a habit, or a habitus, a gestalt if you prefer, that comprises the conditions for the possibility of its actualization.

41. At first sight it seems as if such an extended view on design, particularly if understood as the design of pre-specifics, could reduce or realign planning to the engineering part of it. Yet, planning in the context of the Urban always has to consider immaterial, i.e. informational aspects, which in turn introduce the fact of interpretation. We see that no “analytic” politics of this domain is possible.

References

  • [1] Ian Hacking, The Emergence of Probability: A Philosophical Study of Early Ideas About Probability, Induction and Statistical Inference. Cambridge University Press, Cambridge 1984.
  • [2] Geoffrey Binder (2012). Theory(izing)/practice: The model of recursive cultural adaptation. Planning Theory 11(3): 221–241.
  • [3] Vanessa Watson (2006). Deep Difference: Diversity, Planning and Ethics. Planning Theory, 5(1): 31–50.
  • [4] Stefano Moroni (2006). Book Review: Teoria della pianificazione. Planning Theory 5: 92–96.
  • [5] Franco Archibugi, Planning Theory: From the Political Debate to the Methodological Reconstruction, 2007.
  • [6] Bernd Streich, Stadtplanung in der Wissensgesellschaft: Ein Handbuch. VS Verlag für Sozialwissenschaften, Wiesbaden 2011.
  • [7] Innes, J.E. and Booher,D.E. (1999). Consensus Building and Complex Adaptive Systems – A Framework for Evaluating Collaborative Planning. Journal of the American Planning Association 65(4): 412–23.
  • [8] Juval Portugali, Complexity, Cognition and the City. (Understanding Complex Systems). Springer, Berlin 2011.
  • [9] Hermann Haken, Complexity and Complexity Theories: Do these Concepts Make Sense? in: Juval Portugali, Hans Meyer, Egbert Stolk, Ekim Tan (eds.), Complexity Theories of Cities Have Come of Age: An Overview with Implications to Urban Planning and Design. 2012. p.7-20.
  • [10] Angelique Chettiparamb (2006). Metaphors in Complexity Theory and Planning. Planning Theory 5: 71.
  • [11] Martin Heidegger, Sein und Zeit. Niemeyer, Tübingen 1967.
  • [12] Susan S. Fainstein (2005). Planning Theory and the City. Journal of Planning Education and Research 25:121-130.
  • [13] Newman, Lex, “Descartes’ Epistemology”, The Stanford Encyclopedia of Philosophy (Fall 2010 Edition), Edward N. Zalta (ed.), available online.
  • [14] Hilary Putnam, The meaning of “meaning”. University of Minnesota 1975.
  • [15] Ludwig Wittgenstein, Philosophical Investigations.
  • [16] Wilhelm Vossenkuhl, Ludwig Wittgenstein. Beck’sche Reihe, München 2003.
  • [17] Hilary Putnam, Representation and Reality. MIT Press, Cambridge (MA.) 1988.
  • [18] Florence Rudolf and Claire Grino (2012). The Nature-Society Controversy in France: Epistemological and Political Implications. in: Dennis Erasga (ed.), “Sociological Landscape – Theories, Realities and Trends”, InTech. available online.
  • [19] John V. Punter, Matthew Carmona, The Design Dimension of Planning: Theory, Content, and Best Practice for design policies. Chapman & Hall, London 1997.
  • [20] David Thacher (2004). The Casuistical Turn in Planning Ethics. Lessons from Law and Medicine. Journal of Planning Education and Research 23(3): 269–285.
  • [21] E. L. Charnov (1976). Optimal foraging: the marginal value theorem. Theoretical Population Biology 9:129–136.
  • [22] John Maynard Smith, G.R. Price (1973). The logic of animal conflict. Nature 246: 15–18.
  • [23] Stanley M. Stein and Thomas L. Harper (2005). Rawls’s ‘Justice as Fairness’: A Moral Basis for Contemporary Planning Theory. Planning Theory 4(2): 147–172.
  • [24] Sue Hendler, “On the Use of Models in Planning Ethics”. in: S. Mandelbaum, L. Mazza and R. Burchell (eds.), Explorations in Planning Theory. Center for Urban Policy Research, New Brunswick (NJ) 1996. pp. 400–413.
  • [25] Heather Campbell (2012). ‘Planning ethics’ and rediscovering the idea of planning. Planning Theory 11(4): 379–399.
  • [26] Vera Bühlmann, “Primary Abundance, Urban Philosophy. Information and the Form of Actuality”. in: Vera Bühlmann, Ludger Hovestadt (eds.), Printed Physics. Applied Virtuality Series Vol. I, Birkhäuser, Basel 2013, pp. 114–154 (forthcoming). available online.
  • [27] Leonard Lawlor and Valentine Moulard, “Henri Bergson”, The Stanford Encyclopedia of Philosophy (Fall 2012 Edition), Edward N. Zalta (ed.), available online: http://plato.stanford.edu/archives/fall2012/entries/bergson/.
  • [28] Bruno Latour, Politics of Nature. 2004.
  • [29] Mariam Fraser (2006). The ethics of reality and virtual reality: Latour, facts and values. History of the Human Sciences 19(2): 45–72.
  • [30] Sue Hendler and Reg Lang (1986). Planning and Ethics: Making the Link. Ontario Planning Journal September-October, 1986, p.14–15.
  • [31] Wilhelm Vossenkuhl, Die Möglichkeit des Guten. Beck, München 2006.
  • [32] Immanuel Kant, Zum ewigen Frieden.
  • [33] Gilles Deleuze, Cinema II: The Time-Image.
  • [34] Bent Flyvbjerg, “The Dark Side of Planning: Rationality and Realrationalität,” in: Seymour Mandelbaum, Luigi Mazza, and Robert Burchell (eds.), Explorations in Planning Theory. Center for Urban Policy Research Press, New Brunswick (NJ) 1996. pp. 383–394.
  • [35] Henry Mintzberg, Rise and Fall of Strategic Planning. Free Press, New York 1994.
  • [36] S. M. Manson, D. O’Sullivan (2006). Complexity theory in the study of space and place. Environment and Planning A 38(4): 677–692.
  • [37] John H. Miller, Scott E. Page, Complex Adaptive Systems: An Introduction to Computational Models of Social Life. 2007.
  • [38] Henrik Jeldtoft Jensen, Elsa Arcaute (2010). Complexity, collective effects, and modeling of ecosystems: formation, function, and stability. Ann. N.Y. Acad. Sci. 1195 (2010) E19–E26.
  • [39] Robert A. Meyers (ed.), Encyclopedia of Complexity and Systems Science. Springer 2009.
  • [40] Frederic Vester.
  • [41] Juval Portugali (2002). The Seven Basic Propositions of SIRN (Synergetic InterRepresentation Networks). Nonlinear Phenomena in Complex Systems 5(4):428-444.
  • [42] Michael Batty (2010). Visualizing Space–Time Dynamics in Scaling Systems. Complexity 16(2): 51–63.
  • [43] Michael Batty, Cities and Complexity: Understanding Cities with Cellular Automata, Agent-Based Models, and Fractals. MIT Press, Boston 2007.
  • [44] Angelique Chettiparamb (2005). Fractal spaces in planning and governance, Town Planning Review, 76(3): 317–340.
  • [45] Angelique Chettiparamb (2014, forthcoming). Complexity Theory and Planning: Examining ‘fractals’ for organising policy domains in planning practice. Planning Theory, 13(1).
  • [46] Angelique Chettiparamb (2013, forthcoming). Fractal Spatialities. Environment and Planning D: Society and Space.
  • [47] Gilles Deleuze, The Logic of Sense. 1969.
  • [48] Fred Moten and Stefano Harney (2011). Politics Surrounded. The South Atlantic Quarterly 110:4.
  • [49] ‘Liber viginti quattuor philosophorum’ (CCM, 143A [Hermes Latinus, t.3, p.1]), edited by: F. Hudry, Turnhout, Brepols, 1997.
  • [50] Gilles Deleuze, Félix Guattari. A Thousand Plateaus. [1980].
  • [51] Anna Leidreiter (2012). Circular metabolism: turning regenerative cities into reality. The Global Urbanist – Environment, 24. April 2012. available online.
  • [52] Marius Buliga (2011). Computing with space: a tangle formalism for chora and difference. arXiv:1103.6007v2 21 Apr 2011. available online.
  • [53] Bruno Latour (2010). “Networks, Societies, Spheres: Reflections of an Actor-network Theorist,” International Journal of Communication, Manuel Castells (ed.), special issue Vol.5, pp.796–810. available online.
  • [54] Klaus Wassermann (2010). SOMcity: Networks, Probability, the City, and its Context. eCAADe 2010, Zürich. September 15-18, 2010. available online.
  • [55] Vera Bühlmann, “Primary Abundance, Urban Philosophy. Information and the Form of Actuality”. in: Vera Bühlmann, Ludger Hovestadt (Eds.), Printed Physics (Applied Virtuality Series Vol. 1), Springer, Wien 2012. pp. 114–154.
  • [56] Vera Bühlmann, “The Integrity of Objects: Design, Information, and the Form of Actuality” to appear in ADD METAPHYSICS, ed. by Jenna Sutela et.al. Aalto University Digital Design Laboratory, ADDLAB (forthcoming 2013).
  • [57] Gilles Deleuze, Difference and Repetition. 1968.
  • [58] Yeonkyung Lee and Sungwoo Kim (2008). Reinterpretation of S. Giedion’s Conception of Time in Modern Architecture – Based on his book, Space, Time and Architecture. Journal of Asian Architecture and Building Engineering 7(1):15–22.

۞


Urban Strings

November 17, 2012 § Leave a comment

The urban life on this globe forms a vastly diverse and heterogeneous universe. How could one ever expect to understand it in its entirety? And isn’t some sort of understanding required to deal with all the challenges offered by the complexity of urban environments that we are faced with? Such, or similar, is the despair of the urbanist. Some say urbanism is dead, has disappeared, at least insofar as urbanism is taken to be concerned with some kind of theory about the city or urban life. Whatever happened to urbanism [1]? Herzog & de Meuron are convinced [2] that “There are no theories of cities; there are only cities.” No manifestos any more, please!

Should we dismiss the despair of our putative urbanist? Or should we take the expressed concerns seriously? Is it reasonable at all to strive for an understanding? And what could “understanding“ mean in light of the complexity of large urban arrangements? The Newton of urban affairs is quite unlikely to appear; the globally unified formula about urban affairs is certainly a delusion. For what purpose should we aim for insights, as most planning initiatives don’t hit their targets anyway? Why not just drop the distanced attitude that seems to be implied by theory and planning and simply act, on the local or even micro-level, to deal with the challenges? At least urbanists of any shade already have many toolboxes for any kind of problem, haven’t they? Well, the outcome of such “just acting,” the collection of works contributed by swarm architects, results, according to Koolhaas, in nothing other than Junkspace.

The matter is not of least relevance: by 2011, more than 50% of all humans were living in urban environments, with a projected 75% by 2050, and even today the conditions for inhabitants of cities, as well as for cities themselves, are often threatening, to say the least. In many urban agglomerations in the South, slums are the rule rather than the exception.

Behind the scenes, and on a quite general level, any discourse about the city and its theory is about the dynamics of urban culture, or simply the concept of change and its political actualization. Upfront it does not matter whether we talk about succeeding wholesale plans, as in the case of Singapore, or similarly perhaps Masdar, about failing planning, as in the case of Mumbai, whether we talk about the effects of the mobilization of people, with a positive net total as in the case of Shanghai, or a negative net total as in the case of Leipzig (at least up to 2010), whether we talk about self-organized changes or any mixture of those. Given the enormous diversity of the “cultural actual” we have to find a structure for any argument about urban change that is both general enough to include all of those aspects and, most important, that could be bound to the operational level. Otherwise we would be able neither to compare them at all nor to “learn” from them. Note that it is not appropriate to “define” change, as this would obscure any theoretical notion. And the generality of this structure should not be burdened by a neglect of the realm of individual personality. The “operational” comprises the political, of course, and thus also issues of ethics and morality.

This Essay

This essay is proposed as a further step in the direction of Urban Reason. Urban Reason could be circumscribed as human reason that is unfolding, emerging etc. under the condition of the Urban1. In this piece we will try to elucidate the link between some foundational, that is, more conceptual issues and the possibility for active practice.

As one of the pillars of that endeavor we follow the grand, or omni-cultural, hypothesis of urbanism: nowadays, human culture is largely identical with urban culture, even in seemingly non-urbanized areas, through the influence of the cities.

The grand cultural hypothesis is by no means a new one. As early as 1966, Aldo Rossi formulated one of its first more complete versions in his “The Architecture of the City” (p.51):

In other words, on the most general level, it must be understood that the city represents the progress of human reason, is a human creation par excellence; and this statement has meaning only when the fundamental point is emphasized that the city and every urban artifact are by nature collective.

Yet, Rossi remains largely on the rationalist track (as we will discuss in a later essay about time and architecture). Even as he departs from “classical modernism” in stressing the importance of history, time and (collective) memory with regard to the understanding of the city, the city still remains an artifact, something produced. As a “skeleton,” any existing architecture informs any subsequent architecture, which is beyond mere cause and effect, but for Rossi this influence also remains neutral regarding the possibility of conceptual schemes of thinking. Additionally, the urban remains constructed, there is no autonomy in it.

Although Rossi’s concepts certainly provide a valuable starting point, they do not push the issue far enough. Even as he realizes that human reason is involved in the subject of the city, as a rationalist he fails to recognize the self-referentiality in such an arrangement.

The grand cultural hypothesis thus not only provokes the serious issue of how to speak about2 the Urban (see footnote 1). With respect to the realms of the thought and the taught, the Urban takes a role that is quite similar to that of language: everything we (as humans) can think already takes place within language. We can’t step out of it. Likewise we may say that really everything we think and do relates to the Urban, at least nowadays. Thus, the omni-cultural hypothesis also relieves the thinking about the Urban from the monopolistic claims of science(s), relocating the issue of theory away from control and pushing it towards design and play. The secondary claim thus is simply that a theory of the Urban is impossible without a strong and serious appropriation of philosophy.3

Such, our grand cultural hypothesis is markedly different from the early and almost classic opening of Henri Lefebvre in his “The Urban Revolution”:

I’ll begin with the following hypothesis: Society has been completely urbanized. This hypothesis implies a definition: An urban society is a society that results from a process of complete urbanization. This urbanization is virtual today, but will become real in the future. (p.1)

Lefebvre still treats the Urban (capital “U”) as something external, from the perspective of a science study, with “urbanism” being the target in this case. After all, Lefebvre holds a strong materialist (Marxist) position throughout his work, rejecting even the idea that epistemology could play a role in dealing with the Urban. So, indeed, markedly different from ours.

Another “eternal” issue to be addressed in the context of the Urban is the question about the role of theory. Just throwing around some neologisms, importing exotic concepts from largely unrelated domains, expressing a demand for ethics or morality, or doing historical studies does not constitute a theory. Not quite astonishingly, neither modernism in general nor positivism/scientism in particular has been able to develop an appropriate concept of theory. We will also see, for instance, that it is highly unreasonable to conceive “theory” somehow as the antipode of practice or practical concerns.

The refined and appropriately positioned concept of theory directly raises another, almost always overlooked topic. In the “negotiations” about the reasonability of some common ground there is neither a final justification for anything, nor is it reasonable to refer to “values”. Both abolish any possibility for open evolution and lead directly into narrow ideology and dictatorship. Instead, when talking about and engaging in, e.g., urban design affairs, we firstly have to make our metaphysical stances visible. Without such exposition any single move or opinion is rendered either into blind—ultimately technocratic—activism or into arbitrariness. Secondly, the metaphysics has to rely on a strictly processual approach, one that is cleansed of any thinking that refers to origins, centers or axioms.

Both theory and metaphysics effectively limit what can be expressed, hence what can be recognized, measured and done; both directly limit the achievable ethics, and both constrain the space of possible methods and means that could be applied in any practical case. There are some striking examples of that, as we will see later.

Another important pillar thus is the exploration and adjustment of the conceptual vocabulary. We propose to drop realism and existentialism as the structural basis of urbanism and to switch to a foundation that speaks “informational,” that embraces probabilism in a reflected manner, of course without sliding into the technocratic abyss and also without dropping aspects of empathy. This requires a proper methodological setup that consists of rather clearly identified methodological domains. We will propose a layered structure for that.

The effects of this re-orientation of Urban Theory and its two-sided binding to both abstract philosophy and practical policy are not limited to the considerations of the Urban. It will also exert a significant force onto philosophy. What (for us) is particularly at stake philosophically is a reconciliation of transcendence with material aspects of the world. In less lofty wording, this transposes into the transitions between concepts and operations, which in turn regards the issue of methods and planning.

The remainder of this essay comprises the following sections (active links):

1. Rendering “Theory”

There are indeed a lot of challenges, as even a short visit to the site The Global Urbanist may prove. The variety and scale of the problems is enormous—staggering would probably be a more appropriate description from the perspective of the putatively rational urbanist. The editors of the Global Urbanist site distinguish 7 major regions of this globe; they identify 6 top-level domains and for each of them 10 sub-domains. Any of these 60 areas could be assigned a couple of scientific domains. Taking into account the definition of science as a domain with a particular vocabulary, urbanism is probably well comparable to the attempt to build the Tower of Babel.

All of this is indeed, as I already mentioned, impressive. Yet, what is completely missing on that site is a section for theory. Some kind of bottom line, a frame, is missing. The whole site provides reports on conferences about case studies and other so-called hands-on approaches, close to the factual conditions. At least for the Global Urbanist, which certainly provides a representative sample, HdM’s forecast from 2008 has turned out true as a matter of fact, it seems.

If we take the modernist conceptualization of theory into consideration, HdM have been completely right in expressing their doubts about the reasonability of theory in urbanism. From within modernism, the concept of theory has achieved a very clear definition, displayed extensively in Stegmüller’s series [3], which continues the legacy of Popper, Carnap, and Sneed, accompanied and extended by the work of Wesley Salmon and van Fraassen. Well, at least the late van Fraassen stumbled into some doubts about the analyticity of theories. For our concerns here it is important to see that the concept of theory is a matter of the philosophy of science, not of the sciences themselves.

Well, domain-specific theories not only introduce dedicated terms and rules that allow the derivation of models. The first important claim of the modernist notion of theory is that this derivation of models from a theory can be formalized. The second important claim about theories is that they have to be falsifiable, which implies and presupposes that any two theories can be separated in a clear-cut manner. The result of these claims is devastating. Theories can no longer be distinguished from models, since any model also introduces theoretical terms. Since falsifiability and uniqueness are also required, both the difference to models and the value of the concept “theory” vanish. Thus, analytic theories indeed don’t exist. They are not even possible. In some sense, modernism is an attitude free from any theory, just as HdM claimed. And HdM would also be right in rejecting another idea about theory that is often met in architecture, namely, that theory ought to deal with that which is permanent and always valid, notably the rules of art and the laws of statics. In the exclamation that we cited in the beginning, HdM did not deplore, of course, the absence of theories with regard to urbanism… they praised it.

Yet, the failure of modernism and positivism to provide an appropriate concept of theory does not mean at all that we have to drop theoreticity completely and once and for all. We just have to revoke the modernist conceptualization of “theory”. This gap we are now going to fill.

As we have argued in a previous essay about theory in general, theories are orthoregulative milieus for the invention of models. It is the models that we use for anticipation. This notion of theory relates modeling to the Form of Life in which said modeling takes place. As a consequence, it is clear that the subject of theories is models and the process of creating models. The subject of theory is not empirical issues, quite contrary to the modernist (positivist) attempt. Inversely, we can see that any anticipation, even any model that has some utility, whether it is a formalizable one or a de-facto model, implies a theory, since nothing could be done outside of any condition. There is no rule-based activity without at least one theory. The true conceptual antipode of theory is therefore not practice, but rather performance. This conception solves a number of riddles about theories. For instance, different theories may well overlap, even producing a common sub-set of models that are hardly separable when directly compared as such. It also opens a much more appropriate perspective onto the fuzzy evolutionary network of theories than Kuhn [4] was able to conceive. Revolutions, whether scientific or not, are a matter of underdevelopment, symptoms of the possibility of disconnected singularities, hence no longer appropriate for our current techno-scientific, globalized societies. (Though there is no guarantee for the ability to prevent underdevelopment.)

What does this concept of theory mean for the practice of urbanism, for the practice of building within a city, whether it expands the city or differentiates it? Why is it justified to lament the absence of theory on the Global Urbanist website?

As a first hint we may take Frank Lloyd Wright’s frequently cited credo about the relation of principles and form:

“Do not try to teach design. Teach principles.”

Certainly, Wright did not provide an architectural theory that could have been understood easily. Although he himself provided 9 principles, these principles can’t count as a reflected theory, albeit Wright’s approach is clearly heading towards the concept of theory as we understand it. Think for instance of his insisting on the aspect of instantiation as actualization, even if he didn’t use such wording. The required philosophy (Deleuze) was to be written down only years later. Doubtless Wright’s approach was an early one, and one that has to be developed much further. But his message is quite clear: theory precedes form, or in philosophical terms, potentiality precedes actuality, and concepts precede representation. Well, what applies to architecture also fits the affairs around urbanism.

Yet, principles are a weak foundation. They remain axiomatic, muddling representations and values, hence remaining completely within naïve realism or phenomenology. This holds for other “principled” theoretical approaches as well, e.g. those of Christopher Alexander, Le Corbusier, or Bernard Tschumi, notwithstanding their respective appeal. On the other hand, praising some philosophical stance, let us say the deconstructivism as unfolded by Derrida, and trying to cast it more or less directly into architecture is just as deficient. Jumping on some ism-bandwagon doesn’t qualify as theory, neither in architecture, nor in urbanism or any other domain.

Let me highlight the issue with a small anecdote. Recently, Sam Mendes, the celebrated director of the latest James Bond 007 movie, reflected on the use of action elements in an interview about the making of the movie. After a few weeks of taking more and more action shots, perfecting them eventually, he said, you will arrive at a point where you have to ask yourself: what is it that you actually want to do and show?

Obviously, Mendes relates a particular action to the dynamics of the whole story, and that “wholeness” is quite extensive in the case of the 007 series, after 22 other James Bond movies. Previously, and as an extension to the Austin/Searle speech-act theory [5], we called this aspect the delocutionary aspect of an utterance. It concerns the story-telling—through which it also actualizes—and the play whose subject is the playing itself. Taking this delocutionary aspect into consideration, formally and content-wise, implies precisely the conceptualization of theory as an orthoregulative milieu. In contrast to that, the Austin/Searle theory remains completely compatible with a modernist, i.e. positivistic and reductionist approach, since its top-most level relates just to a strategy, that is, to a predefined or at least predefinable purpose, but fails to relate to the openness of social intercourse. Delocutionary aspects, in contrast, resist any kind of apriori assignment, since they precisely declare the play with the potential of assignment, thereby abolishing any actual apriori assignment.

Well, the same scheme applies—and I think quite well so—to the presentation of topics on the Global Urbanist site. A lot of activities, indisputably interesting, but no framing. More clearly: mostly like a herd of chickens running wildly across the limited ground within a well-defined cage. That does not mean that the reports could not be inspiring. Yet, they could be inspiring only against the background of a suitable theory. Otherwise, case reports count just as a kind of soulful portraits, which can hardly provide any kind of “lesson learnt” whatsoever.

Let us take a brief view onto an example of activism devoid of theory (in our sense). Kerwin Datu, editor-in-chief of The Global Urbanist, reported on the World Urban Forum in Naples at the beginning of September 2012. He distils four key elements of spatial planning of expanding cities (emphasis by Datu).

The first is the inevitable expansion proposition: that urbanization is a process that cannot be stopped, only shaped, by effective spatial planning.

The second is the sustainable densities proposition: that in place of the commonplace mantra that cities need to densify, Angel argues that it needs only to be optimised. Cities should be dense enough to sustain a public transport system, but not so dense that they generate health risks for their inhabitants.

Third is the decent housing proposition. ‘Adequate housing is possible only when land is in ample supply,’ a situation that many local authorities must do a lot more to create. In many cities there is an effective coalition that restricts land supply to generate superprofits for landowners, with severe impacts on the affordability of housing for all.

And fourth is the public works proposition: ‘as a city expands, space for public works must be secured in advance of development,’ […].

For once, it appears that the basic principles of planning for urbanization have been identified, and packaged in a form simple enough for laypeople (which most politicians are when it comes to spatial planning) can understand. Of course, in a conference as large and fragmented as the World Urban Forum, it remains to be seen whether any urban leaders are willing to listen.

As Datu emphasizes, a lot of ministers and mayors have been attending, that is, politically important people who could indeed make a difference. Yet, the results are just depressing, aren’t they? If these four points would indeed be taken as the “basic principles of planning for urbanization”, well, then it is no wonder the conditions in many cities are simply bad. These results of the World Urban Forum are obviously almost nil, precisely because there are no design commitments regarding the social quality. It represents the effect of misplaced, physicalist reductionism. Doing spatial planning just from the perspective of almost physical elements is nothing but deficient. A further reason for the irrelevance of these “results” is that there is not the slightest reference to even a simple theory of differentiation, indeed to any theory at all. Obviously, politically important people are confused and disoriented. What a dark age…

Given that, we again would like to drop a remark about the parentage of theory in a field concerned with the topic of the Urban. Approaching the problems from a meta-perspective, from some distance so-to-speak, by applying some particular domain science, for instance sociology, statistics, geography, fluid physics, engineering of control, etc., is not sufficient for calling the approach a “theory”. Imposing the implied theoretical stances of any particular science onto the field of the Urban, and so importing those stances, reverses the roles. This way, one does not achieve anything that is related to the Urban. One just creates a kind of sub-species of the respective science, that is, sociology about urban populations, geography about spatial pattern dynamics, etc. Clearly, that does not solve the problem of how to address the Urban itself. Sticking to this hope may well be called scientism. And that is clearly misplaced with regard to the Urban.

Quite interestingly, a few recent articles published on the Global Urbanist site argue in favor of bottom-up approaches4, emphasizing that large-scale projects inevitably fail in most cases, and stressing the point of a planning-with instead of a planning-for attitude. This bottom-up attitude runs contrary to—the fallacious—modernist scientism. We will return to this issue later. Yet, the respective articles are case studies that can hardly be generalized, hence their value is quite limited. This is even true for AMO’s and Koolhaas’ investigation of Lagos, Nigeria [6]. What we would need is—again—a proper theory of differentiation. Koolhaas and his AMO/OMA obviously recognized that. As we argued recently, they approached that problematic field practically through their buildings, and more theoretically through their delocutionary essays (Generic City, Junkspace; the first an alleged movie script, the second a kind of text for a staged play). This engagement continued with their recently published work about the Japanese Metabolists and their concepts [7], provided as a collection of interviews and reviews [8].

2. Clearance for Take-Off

From all of that it should be clear that we suggest rejecting the attitude that denies the relevance of theory for dealing with the Urban, whether it is suggested explicitly—as in the case of Herzog & de Meuron—or implicitly—as the Global Urbanists prefer.

The whole endeavor of theorizing about the Urban must respect the role of theory: theory is NOT concerned with those empirical facts or material arrangements that we can observe in any particular city. As soon as we are engaged in observing, we have already moved into the realm of modeling.5

Our conceptualization of “theory” renders the task of creating—or at least that of approaching—a theory easier. We can set the empirical manifold of the Urban apart, at least for the time being. Later we will see that the treatment of the vast and almost infinite body of empirical facts concerning the Urban can be structured neatly against the background of the theoretical move. Anyway, leaving the particularity of the Urban behind allows us to focus on methodological as well as delocutionary issues.

One of these issues concerns the pervasiveness of the Urban. As we derived in a previous article, the Urban is nowadays synonymous with human culture at large. There is no single aspect on this globe anymore that would not be significantly affected by human culture, that is, by human urban culture. “More than ever, the city is all we have.” [1] Anything that we could say about the Urban is already enclosed by the Urban; it always takes place with respect to and even within the Urban.

The situation is thus much like the case of language. Any investigation not only presupposes language, it takes place within it, especially, however, any investigation of language itself. This insight, first recognized by Wittgenstein, paved the way for a (small?) revolution in philosophy, eventually called the Linguistic Turn in the 1970s.

Language, Reason, Concept, the Urban, or culture are examples of performable conceptual entities for which an objectifying externalization is impossible.6 Whenever we refer to them we already need them to express them. It is meaningless and methodologically silly to try to objectify them, say, as we usually pretend to do for concepts like chair, table, ball etc. Yet, even in those cases the explication could never be finitized, i.e. finally closed. This setting corrodes any attempt at a “closed”, i.e. formal analysis of the Urban, much like it does in the case of language. In other words, we find a strong self-referentiality. Wittgenstein phrased it as the “paradox of rule-following” in §201 of his Philosophical Investigations [9].

For Wittgenstein the consequence has been clear: language, as form, as a performance, as well as with regard to the conveyed meaning, has to be anchored in the form of life. It is not possible to establish an investigation, whether about language or anything else, that would be complete by itself. In philosophical terms: no investigation about some observable can provide sufficient reason, which quickly amounts to the fact that there is no such thing as self-sufficient sufficient reason at all.

Hence, the attempt of a “scientific language” (Carnap) is nonsense. Language is performed much like a game or a play, where the rules are quite volatile and in themselves subject of the play. There are some rules that we follow, yet the rules are neither complete, nor fully determinable, neither stable nor “justifiable” at all.

In written German, for instance, we find clearly separated sentences, and each word has a clear positional value and a distinct grammatical type. Yet, the border of a sentence, or of a few of them, almost never represents a proposition. And what is going to be said is almost never representable as a proposition. While this aspect is present in written language, writing can be conceived as a means to limit this effect—or to play explicitly with it. In spoken language, however, the situation aggravates dramatically, as even sentences almost never appear as a complete(d) unit. Instead, what is created by talking together, on any side of the discourse, is much more a probabilistic field of densities and potentials that is only usable = understandable as a multi-channel, diachronically organized braid of possible stories, from which we as participants agree to focus on a particular one. Yet, this focus certainly does not remove any of the other threads. I am absolutely sure that this “structure” applies to any other language as well, at least to any Indo-European language. I mean, that’s the whole issue of rhetoric.

Hence Wittgenstein came up with the idea that language always comes as a language game [9]. Meaning is nothing else than usage, which in case of language refers to the couple of “interpretation” and the “prompt to interpret”. Thus, meaning is neither a private affair, nor a mental one, nor could it be determined by somebody or apriori.

Why do I anatomize the language process with such an emphasis, although our main topic is the Urban, and the particular form of reason(s) that spring from it?

Well, there are two reasons for that. Firstly, I want to demonstrate that the grammatical rules, and all the rules that we actually could talk about with respect to language games, by no means tell us anything about the nature of the play. Even in chess, which is a strictly determinable game, we find different styles in the way the players contribute to the individuation of the game. Secondly, it should have become clear that language can’t be conceived in any way as a process that contains precisely determinable entities, or that would even be itself determinable. The impression of clarity is an illusion triggered by the habits around its usage. Language and its usage is essentially a probabilistic process, despite the school grammars, and despite the positivist propaganda of contemporary linguistics.

Language games can be instantiated in extremely different ways, of course. Ultimately, we could not even claim that there is a determinable content in practiced speech. Content appears only upon a bag of retrograde interpretations, each spanning a different time span, each with a different resolution, each with a different intensity. Language games and the putative content change with context, such that there won’t ever be such a thing as a repeated utterance. Everything we say, we say it for the first time, despite and because we practice a certain style, caring thereby for our grown and growing habits.

We now can ask for the consequences of all of that for a theory of the Urban. I think we could just perform the analogous move, that is, we may introduce the concept of the “Urban Game”. Everything we said above about language games applies to Urban Games as well.

We will discuss this concept of the Urban Game in more detail in a later piece. For the time being, we just would like to touch two issues. Firstly, we may say that the “Urban Game” takes the role of the Wittgensteinian “showing”. Urban Games are not only shaped by the urban environment; without it, many of them would not even take place at all. While they could be described, of course, with respect to their visible parts, such descriptions would not catch up with their consequences, their sense and meaning. There is no single, crisp effect associated with them; they just release a kind of “excitation” into the probabilistic network of the urban fabric. Essentially, we can’t describe the effect without pointing to the entirety of the city, its whole becoming. In this way, Urban Games work as a kind of media, conveying the amorphous, unspecific showing (up) of the culture (reflexively: “es zeigt sich”), and also as a means to show the expectation of this mediated excitation (transitively). This refers to quite different activities and moves, as the category of Urban Games comprises the whole spectrum between legislation and installation. Secondly, the concept of the “Urban Game” certainly allows us to respect the aforementioned self-referentiality. And as we have seen, it demands probabilistic concepts for describing them, as is the case for language games. Probably even more importantly, it also provides a stable conceptual bridge between the individual and the communal level of urban affairs.

Regarding architecture, a typical Urban Game is the semiosical (!) play with styles. Semiosis is the spreading, branched and “culturally embedded” probabilistic process of creating new (Peircean) signs, i.e. of establishing a new sign-practice. Venturi and his collaborators were the first (and since then seriously neglected ones) to emphasize the importance of the dimension of the sign. While Koolhaas in his Junkspace [10] pejoratively lamented the fact that

Through our ancient evolutionary equipment, our irrepressible attention span, we helplessly register, provide insight, squeeze meaning, read intention; we cannot stop making sense out of the utterly senseless… (p.188)

it is also certainly true that the city is a quite special breeding site for new signs, demanding ever more interpretation, despite all the habituation [11]. And equally certainly, a term like “architectural incongruence” isn’t helpful to any extent, particularly when used in combination with the idea of a “mature streetscape”. For Michael Conzen, the proponent of the British school of urban morphologists who coined these terms, the semiotic dimension is simply irrelevant; he calls such matters “linguistic problems” [12]. One has to know that Conzen believes in the reasonability of investigating the layout of the town map as a separate subject, albeit influenced by culture at large, while (as a geographer) at the same time he rejects the outbound attempt to benefit from other disciplines like biology. In his attempt to stay aware of the need for theory, he readily adopts phenomenological patterns, pimped by leaning towards Cassirer. Yet, Conzen not only completely fails to understand the role of theory; by means of that orientation he also remains entirely within the modernist tradition, even in its raw version, that is, not understanding the importance of the linguistic turn. In the next essay we will discuss this issue further.

It is important to see that in the context of the Urban neither language games nor of course Urban Games are necessarily bound to a particular speaker in a particular situation. Urban arrangements transform everything into probabilistic affairs.

The “Urban Game” always comprises language games, of course. Beyond that, it provides a bridge between issues of matter, power and language. The language-driven perspective, which is also a semiotically7 driven perspective, includes the speech-act, which in our case includes the extension of the delocutionary act, that is, the open play that goes beyond mere rule-following.8

There are important consequences for any theory about Urban, for a critique of Urban Reason, but also for any kind of practice. We can refer only to the most important ones here:

  • 1. The Urban can’t be addressed analytically, hence it is also impossible to implement any kind of representational top-down control or planning without annihilating the Urban.
  • 2. The Urban Game is a potentially rule-changing social performance.
  • 3. There is no “complete” empirical description of the Urban, that is, any anticipatory model will fail at least partially. This failure has to be covered by an appropriate treatment of and attitude towards risk.
  • 4. The Urban can’t be constructed.
  • 5. The Urban may appear in an unlimited diversity.

Note that these items are not based on “values” or “attitudes”. They are the result of a rigorous philosophical argument.

There is still another issue that we can derive from language philosophy. With regard to language it is misguided to ask about some kind of absolute, global or stable meaning. Instead we have to ask: which (kind of) language game is she or he playing? Since we are interested in theory here, this transforms immediately into a methodological issue. Regarding the Urban, we have to be clear about the relation between actions and concepts.

3. Schemata of a Critique of Urban Reason

For our purposes it is sufficient to distinguish two aspects of actions. Firstly, there is the aspect of rule-following. The rules implied by an action are chosen either due to some anticipatory “calculation” or due to the influence of the form of life. It is reasonable to expect that in most cases both sources are active. Whether the actions are based on free will or not is not relevant for us here.

The second aspect of actions that we’d like to distinguish concerns what is often considered as “unintended effects”. Of course, the issues around acting upon the external world are much larger than just that. Actions unfold into material re-arrangements; they are a major component of irreversibility, hence they provoke what we previously called the “existential resistance”. The changes “then” are subject to further interpretation.

These two aspects, rule-following and the couple of acting and interpreting that are tied together through irreversibility, make clear that there is no direct link between concepts and actions. From a quite different perspective we achieved the same result earlier when introducing the choreostemic space. There we argued that in any move, besides modeling and concepts, mediality and virtuality also have to be taken into consideration, notably all of them conceived as transcendent entities (not: transcendental!). Also related to this issue is what the philosopher John McDowell called the unboundedness of concepts, according to him an inevitable consequence of the Myth of the Given. [13]

From this we can now proceed to the basic structure of theory building. Yet, insofar as we don’t want just to provide some rules, seemingly out of the blue, we‘d like to stress the point that we propose a “conscious,” that is, a critical approach. A critical approach concerns the conditions that are implied by setting it up. One of these conditions, and probably the major one, is language, regarded as a transcendent condition. Another one is transcendentality itself, which causes the concept of Concept to be not only transcendent, but also virtual. A critical approach to theory building can’t stop here, however, just stating that there are transcendent aspects. We also need to explicate the (abstract) mechanisms that are in charge in the field made from theory, structural models, predictive models and the organization of operations.

In a first and rather coarse step we can distinguish three layers that are important for theory building regarding the Urban:

  • – The operational level, including politics, legislation, immaterial and material logistics, the construction of infrastructure and all individual activities as well;
  • – The categorical work, providing the concepts that determine what could be expressed at all concerning the Urban;
  • – The model layer between the first two areas, providing concepts that enable us to describe the dynamics of the Urban on the structural level.

Here, a small remark about the operational level is probably indicated. Operations have to be distinguished from actions. We conceive of operations here indeed as the application of operators to the material world, whether physical or social. Actions comprise, in contrast to that, much more, e.g. models and concepts. Yet, precisely those we tried to make visible, including their relations among each other. The concept of action hides that inner structure. Operations can’t be regarded just as rule-following. To operate means to flexibly adapt to unforeseen contextual influences in order to actualize the respective model(s). It is clear that matter will exert some “resistance” to that, an existential resistance. The world can’t be mapped to analytical descriptions in principle, hence operations always have to deal with some gap and ignorance.

This may be depicted as shown in the following figure 1. The brackets here should not be understood as objective borders, of course; they just reflect a particular focus. Both sides, the conceptual area, i.e. philosophy, and the operational area, i.e. largely politics, are manifolds by themselves. Actually, there is no clear border between the fields, just “gravitational” spots. Additionally, one should resist analytical habits that would imply a certain directionality in this field. The field may be entered from either side, and any kind of sequence is possible, given the actual context and the individuality of the persons engaging in the process. Yet, the scheme allows us to organize that sequence, or simply to talk about it. That is, the process of theory building as well as its application are critical also insofar as the externalization may trigger a secondary symbolization.

Figure 1: Generalized methodological layering for the binding of abstract thought to operations.

The scheme is a projection of the choreostemic space, both simplifying and extending it. The “concept” area is the subject of philosophy. Note that the three layers are mutually dependent; the dependency of these layers works in either direction. More exactly, we may say that these fields depend on each other in a particular way: they form a high-dimensional, fluid moebius fractal.

Let us briefly visit the two conceptual components, the moebioid and the fractal. A fractal can be created in several ways, which however are all traceable to a procedure called self-affine mapping. Examples of simple self-affine mappings in 2-dimensional space are the leaf of the fern (see figure 2a), the Peano curve, the Sierpinski triangle, and the Koch snowflake curve. Inversely, fractals can also be created by recursive sub-division procedures.
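
To make the notion of a self-affine mapping a bit more tangible, here is a minimal sketch of the classic iterated function system behind the fern leaf, the so-called Barnsley fern. The coefficients are the standard textbook values and nothing in the sketch is specific to our argument; it merely shows how a few affine maps, applied repeatedly at random, generate the fractal.

import random

# The Barnsley fern as an iterated function system (IFS):
# four affine maps (x, y) -> (a*x + b*y + e, c*x + d*y + f),
# each chosen with a fixed probability (standard textbook values).
MAPS = [
    #  a      b      c      d     e     f     probability
    ( 0.00,  0.00,  0.00,  0.16, 0.00, 0.00, 0.01),  # stem
    ( 0.85,  0.04, -0.04,  0.85, 0.00, 1.60, 0.85),  # ever smaller leaflets
    ( 0.20, -0.26,  0.23,  0.22, 0.00, 1.60, 0.07),  # left leaflet
    (-0.15,  0.28,  0.26,  0.24, 0.00, 0.44, 0.07),  # right leaflet
]

def barnsley_fern(n=100_000):
    """Generate n points of the fern by iterating the random affine maps."""
    x, y = 0.0, 0.0
    points = []
    for _ in range(n):
        r, acc = random.random(), 0.0
        for a, b, c, d, e, f, p in MAPS:
            acc += p
            if r <= acc:
                x, y = a * x + b * y + e, c * x + d * y + f
                break
        points.append((x, y))
    return points

if __name__ == "__main__":
    print(barnsley_fern(10_000)[:3])  # a few sample points of the attractor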

A moebioid is an n-dimensional body with a topological “defect”. Although a 3-dimensional moebioid exists in 3 dimensions, it has only 1 topological surface, instead of the usual 2 surfaces. There is no “inside” or “outside” to it, as you can observe if you draw a closed circle on it. (Astonishingly, you can even fill water “into” a Klein bottle despite there being no “inside”.) A moebioid is also conceivable as a knot, though not built from threads but from surfaces. As is the case for trivial, that is smooth, knots, moebioids become flat = unknotted in higher dimensions. A fractal moebioid, however, can’t be unknotted in higher dimensions. (I have no proof for this, it is just a conjecture.)
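
To give the simple (non-fractal) case a concrete shape: the standard parametrization of the Möbius band embedded in 3-dimensional space makes the “defect” explicit; the u/2 terms encode the half-twist that destroys the distinction of the two sides. In LaTeX notation:

\[
x(u,v) = \left(1+\tfrac{v}{2}\cos\tfrac{u}{2}\right)\cos u,\qquad
y(u,v) = \left(1+\tfrac{v}{2}\cos\tfrac{u}{2}\right)\sin u,\qquad
z(u,v) = \tfrac{v}{2}\sin\tfrac{u}{2},
\]
with $0 \le u < 2\pi$ running around the band and $-1 \le v \le 1$ across its width.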

Just as a small remark: this concept of theory work (and of the potential working of theory) has been deeply inspired by Deleuze & Guattari’s “What is Philosophy?” [14], particularly the sections about concepts and the “Plane of Immanence”. You will find a strong resemblance, for instance concerning the fractal structure, the distinction between the concepts and the field they generate, etc. Nevertheless, what we propose here is an extension of Deleuze’s work, so to speak, downstream towards politics and logistics. Deleuze himself always refused to approach these areas, focusing on philosophical aspects. [15] Actually, I regard the binding between theory and politics, mediated through models, as one of the most interesting ones, not just with regard to architecture and urbanism, and for sure I will prepare a dedicated essay about it (working title so far: “Braidings between Immanence and Politics: The Case of Urban Tales”).

Back to our scheme from figure 1. Our requirement is that each of the three fields may contain any sequence drawn from the three fields. Fortunately, the sequences do not grow very much, for pragmatic reasons. In other words, the scheme needs to be treated by a self-affine mapping in order to approximate the actual arrangements in socio-mental settings, while at the same time the actual form of the “embedding” or framing is only a matter of relative phase, i.e. pseudo-location on the surface of the moebioid. Additionally, the resulting figure should not be expected to be a fixed geometrical entity. Rather, it is fluid, pruning some sequences, bringing any of the field-like components to the surface through foldings, etc. A distantly approximating impression is provided by figure 2b; just click to see the projections moving.

Operations cannot do without deeply integrated models, just as is the case for concepts. There are no “pure” models, or concepts, either, of course. Which compartment is surrounded by the others depends on the respective purpose, i.e. context and style, I suppose. In the following we will try to develop this scheme into an abstract space that could be used to trace the dynamics of the Urban.

Figure 2a: The fern leaf as a simple example of a self-affine mapping.

The next two images provide visualizations of projections of objects (not of fractals!) in high-dimensional spaces, the first, in figure 2b, more “conventional” (different aspects of a Calabi-Yau manifold, which plays an important role in String Theory, found here), the second, in figure 2c, more artistic and moebioid (found here).

Figure 2b: A grid of projections of the 6-dimensional Calabi-Yau manifold into 3-dimensional space. Note that a projection from higher to lower dimensionality not only creates knots and moebius figures; there is also no single definite projection, hence the grid.

Figure 2c: This image has actually been produced by waving a light stick and capturing it with long exposure times, not by any kind of digital rendering of numerics.

Although the scheme from figure 1 is still quite coarse, we can nevertheless say that its most important part is the one referring to theory, the categorical work. This includes all the modes that are being used to apply abstract concepts for the derivation of the concepts assignable to the intermediate layer. Hence, the categorical work fully constrains what could be expressed about the Urban, but also what could be recognized, modeled, anticipated and integrated into the symbolic constitution of a particular urban instance, whether by means of population dynamics or of more or less centrally organized activities. It constrains entirely what can be thought and said, whether on the level of the generic model, on the level of actual models, or with respect to logistic or political actions.

From that we can conclude three things. (1) The conceptual part has to be abstract enough. Reasoning about geometric forms, generative grammars and other “automated” (or state-bound) methods to generate forms, or about the “origin of the pictorial” following Paul Klee or Wassily Kandinsky—all such approaches are certainly not abstract enough, neither for doing theory work in architecture nor in the context of the Urban. (2) We need appropriate concepts, and techniques to derive from them the concepts for creating structural models. (3) Both together have to allow for the derivation of political actions that are compatible with basic philosophical insights, with appropriate ethical and political positions. This would include, for instance, the discourse about sustainability, which is definitely neither a trivial nor an eco-technical issue.

Anyway, we may propose that the methodological layering shown above is indeed a generalizable scheme for the binding of abstract thought to operations. We just have to add that it should be conceived more as a high-dimensional methodological field with blurred borders between the components. As we already mentioned, there are many proposals that suffer from a considerable methodological “binding problem”, from either side. This causes critical developments particularly in those domains where we find self-referentiality, for instance in linguistics or urbanism through their subjects “language” and/or “culture”. Examples of such critical developments are the whole movement of idealism, or, somewhat as its pretended counterpart, the denial of theory. As a further abundant methodological fault we may count representationalism and the closely related belief in the dominance of common sense, as Deleuze has pointed out (for details see this previous essay).

Of course, we have to explicate the model layer. Yet, before that we first have to take up again the thread laid down by the importance and the guiding role of concepts.

It is quite important to understand that concepts are transcendent, but neither universal nor eternal. They are not transcendental either, which would mean that they represent the demand for some kind of ultimate origin. There is also nothing about them that could be called “truth”. Concepts act more like hubs for semiotic processes that allow for and organize certain kinds of “vectorial traffic”, yet without maintaining any kind of materiality—not even a symbolic one—of their own. This position of the concepts is bequeathed to language.

Precisely here we can exclude as a proper candidate any philosophical framework that does not respect the primacy of concepts and language in the genealogy of a theory.9 Among the rejected attitudes we count phenomenology, external realism, existentialism, positivism, structuralism, and deconstructivism.

So, we can ask now: What else?

4. The Core

Actually, it is quite simple. The core of any Urban Theory, as well as its critique, must necessarily comprise the following two questions:

  • 1. How to speak about the Urban?
  • 2. How to actualize the Urban Games?

These questions are far from being of “only theoretical” significance, “theoretical” used here in the inappropriate, common-sense way. It is, for instance, simply meaningless to address questions of sustainability without first answering them, just as it is futile to engage in research about planning without a proper answer to them. What we also meet here is the eternal (and internal) tension of conservatism: what to conserve—the status quo, the dynamics, or the potential? In order not to demolish itself, it must stick to the conservation of the status quo, which on the other hand abolishes any reasonability. We certainly have to take care not to trap the concept of sustainability in the same dilemma.

Another area where the dominance of language and the conceptual may appear surprising is public services, particularly concerning the essential flows, i.e. energy and water. We will discuss this in more detail in the application section below.

What we find here is nothing else than a very practical consequence of Wittgenstein’s famous, almost proverbial, proposal: the borders of one’s language constitute the borders of one’s world. Inversely, we can always conclude that in case these questions are not addressed explicitly, they are necessarily answered implicitly. Yet, this also means that the answers will most likely be inconsistent, arbitrary, and contingent, without any possibility of setting up a reasonable discourse about the urgent local issues.

It is of utmost importance to understand that these important questions can’t be answered without reference to two rather divergent areas, albeit they are also deeply and strongly linked to each other: (1) the predominant Form of Life that is practiced in a community, and (2) the metaphysical setup on the level of the individuals.

It is precisely here that we find the entrance point for “modernism”, whether the “original”, i.e. European version, or in its segregated form in the case of Singapore. Across the decades and centuries there is of course a co-evolution of the Form of Life and its accompanying metaphysics.

5. Metaphysics

As we have described earlier, modernism can be characterized by a particular set of beliefs. The dominant component of this set, however, is the strong belief in the necessity of metaphysical independence. Note that the idea of identity is just the other side of the coin; essentially, independence and identity are almost synonymous from the philosophical perspective. In our essays about the role of logic and our add-on to the Deleuzean dual concept of Difference & Repetition, the choreostemic space, we discussed the alternative to identity and independence: transcendental difference.

Though historically comprehensible, independence is as little justifiable as any other metaphysical belief. The fact is simply that one can tell different stories, and different kinds of stories, some being more extensible and more fruitful than others. Anyway, this belief in independence informed everything in Western societies for at least several hundred years up to present times, with origins deep in classic Greek thought and with a particular blossoming at the end of the 19th century and in the 1950s/1960s. Even Descartes and a whole series of scientists from Newton to Helmholtz would not have been thinking the way they did without it.

This independence has a range of strong correlates. One of the most influential is the belief in the indispensability of centralized control. A more abstract companion is the belief in centers and middle points itself [16], together with the cosmology of the sphere [17]. Traces of that can be found in architecture—from Boullée to Buckminster Fuller—as well as in urbanism, particularly as the phantasm of the “ideal city” that has been prevailing throughout the centuries.

Figure 3a: Etienne L. Boullée, Cenotaph for Newton (1784)

Figure 3b: Claude-Nicolas Ledoux, Dwelling for the Gardener in a utopian ideal city, ~1800.

The sphere and the implied importance of the concept of the center-point showed up not only in utopian buildings. It was also used, and is still being used, for the layout of cities. The phantasm of the “ideal city” has been poisoning the discourse about the Urban up to our days.

Figure 4a: Nowa Huta, a Polish city built to praise the heroism of the steel workers in former communist Poland.

Figure 4b: Palma Nova, near Venice, Italy. Note that in former times the costs of fortification drove circular layouts, for geometrical reasons. Palma Nova still exists. Yet, in former times people didn’t want to live there.


Even today, density is often misunderstood as the center of a radially symmetrical arrangement, with Manhattan being the great and pleasant exception.

With regard to methodology, statistics as it has been practiced since the middle of the 19th century up to today is deeply structured by the independence assumption, which, as a matter of fact, renders it incapable of dealing with patterns. In urban environments, the deep modernist belief in independence led to forms reflecting crystalline growth, that is, the most primitive form of growth, which is also the least adaptive one.
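A toy computation may illustrate why independence-based statistics is blind to patterns. Assuming a perfectly deterministic relation y = x² (a hypothetical example, chosen only for illustration), the Pearson correlation, the workhorse of classical statistics, reports “nothing”:

```python
import math
import random

# Toy illustration: a perfectly deterministic pattern (y = x^2) that
# correlation analysis, resting on linearity/independence assumptions,
# reports as "no relation".
random.seed(42)
xs = [random.uniform(-1, 1) for _ in range(10_000)]
ys = [x * x for x in xs]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    sa = math.sqrt(sum((ai - ma) ** 2 for ai in a) / n)
    sb = math.sqrt(sum((bi - mb) ** 2 for bi in b) / n)
    return cov / (sa * sb)

print(pearson(xs, ys))  # ~0.0, although y is completely determined by x
```

Pattern-capable measures such as mutual information detect the dependency immediately; the blindness lies in the independence assumption, not in the data.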

Fortunately, things are changing. Well, they change slowly, but steadily. The first incentive stems from biology, of course. In biology, nothing makes sense under the assumption of independence. Everything is meaningful only if conceived as a historically constrained processual manifold, called evolution, which also includes complexity. The second incentive comes—astonishingly—from physics, yet from the “non-classical” area of physics, in particular the physics of sub-atomic scales.

Changing the metaphysical setup in order to pave the way for a more appropriate understanding of the Urban means dropping the addiction to the sphere, to independence, to the object, to the territory, and leaving behind the striving for identity as a constant as well as the representational attitude in (“explicit”) controlling and planning. Maybe you already detected the remote reference to the philosophy of Gilles Deleuze here.10 It is rather important to understand that all these items are not “universal” in any respect. They just follow from certain methodological considerations, influenced for instance by the insight into the primacy of language. Yet, even if language and concepts can be considered to play a transcendent role, universality does not follow from that.

6. Dropping the Spheres

The revolution that started to erode the deterministic scientific cosmogony towards a de-centered metaphysical cosmology is still running at high rates. In many areas its main messages are still not assimilated. Modernism and its detrimental offspring prevail.

The first “step” into that revolution was the discovery of in-computability. In-computability is a barrier in principle, one that cannot be overcome by more accurate measurement. Actually, on the level of the sub-atomic world accuracy does not make much sense anyway. Basically, there are three contributions:

  • 1. Poincaré’s investigation of the three-body problem (~1890), leading to the first description of chaotic systems (see the toy sketch after this list).
  • 2. The invention of quantum physics, from Planck (~1900) to Schrödinger (~1926), including the Heisenberg Uncertainty Principle.
  • 3. The investigation of dissipative processes by Prigogine (~1975).
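The announced toy sketch: not Poincaré’s three-body system, but the logistic map serves here as a minimal stand-in for the same phenomenon—the exponential growth of any initial measurement error, which is what makes long-term computability impossible in principle:

```python
# Minimal stand-in for Poincaré's discovery (using the logistic map,
# not the three-body system): two initial conditions differing by 1e-12
# diverge completely after a few dozen iterations, so no finite
# measurement accuracy allows long-term prediction.
def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.3, 0.3 + 1e-12
for step in range(60):
    a, b = logistic(a), logistic(b)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: |difference| = {abs(a - b):.3e}")
```

The distance grows roughly by a constant factor per step (a positive Lyapunov exponent), so each additional digit of measurement accuracy buys only a few more steps of predictability.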

The second “step”, which also stretches across several decades, derives from the paradoxical situation of quantum physics. On the one hand, the so-called “Standard Model” is quite successful. For instance, a simple principle has been deduced that allowed the prediction of the existence of formerly unknown sub-nuclear particles. There is some kind of order for the set of particles.

Figure 5. The “periodic system” of elementary particles according to the Standard Model. Although the usual graphical depiction seemingly conveys a certain degree of simplicity, it is not that simple, nor does it display the open issues. In other words, it is some kind of propaganda.

On the other hand, it fails completely insofar as it does not allow the creation of a unified theory, that is, a theory that combines all four fundamental forces in nature.

As a result, some—if not many—basic observations remain unexplained, on the mesocosmic, rather small scales as well as on the cosmic scales (cf. [23]). Let us just pick three of the most salient gaps. First, there is no explanation of electro-magnetism that goes beyond its phenomenal description. In other words, physicists still don’t understand exactly what a “charge” is, say that of an electron. A related gap of the Standard Model, according to the physicist Quigg, concerns what makes a top quark a top quark and an electron an electron: both seemingly have no further internal structure, both carry electrical charge, though the quark carries only a fraction (in multiples of 1/3) of the elementary charge carried by the electron. Secondly, the “condensation” of elementary particles from “clouds” of extremely high “temperature”, e.g. sub-nuclear gluon plasma, is not understood. All physicists can say is simply: it happens. Thirdly, now on the cosmic scale, there is complete ignorance in physics about the so-called “dark matter”. Were the “Standard Model” indeed complete and accurate, none of the three phenomena should remain inexplicable.11

This situation gave rise to a still heavily disputed theoretical framework that is completely different from the “Standard Model” (SM). It is the so-called String Theory, more recently extended into M-Theory (MST).

The difference between those two frameworks is tremendous. In fact, they follow different and incommensurable metaphysical belief sets, which is exactly what makes their case particularly interesting for us.

There are many fundamental differences between the two frameworks; the basic ones that are interesting for us here are the following:

Table 1: Basic differences between the Standard Model and String Theory.

Aspect | Standard Model | String Theory
conventional space-time | presupposes it | induces it
basic form | spherical particles or sections of space with 3-d rotational symmetry | 1-dimensional strings of energy of approximately defined, positive length, the Planck length (10⁻³³ m)
sub-atomic particles | extremely concentrated energy, but the mechanism creating inertial as well as rest mass is unknown | amplitude of vibration
type of particles | existential, products of condensation from gluon plasma, but the mechanisms/rules are unknown | modes of vibration
particle-wave dualism | phenomenally existent | irrelevant
4 basic forces | gravitation remains incommensurable (even if the Higgs boson were confirmed) | gravitation is a consequence; a unified theory is possible
structure of space | 3 spatial dimensions + 1 temporal dimension, presupposed | ~10 abstract dimensions, from which the mesocosmic space derives through “overlapping” of low-dimensional (2d) projections
basic characteristics of the framework | existential, desperately claims a “God particle”, the Higgs boson | generative, existence is not a central concept
philosophical status of the implied image of thought | based on identity and representation, with energy as an onto-realistic fact | based on difference and form (information), with energy as a mediator
conceptual status | it is a model (indeed) | it is a theory, i.e. an orthoregulative set of rules about how to generate a model
Note that it does not make sense to think of the strings as a kind of object. It is not possible to draw them, although there are many artistic interpretations around. The basic architectonic difference between the frameworks is their relation to the concept of mechanism. The Standard Model is based on 19th-century attitudes, expressing the initial claim that logic is imprinted onto nature. There is no place for incorporating information as a separate entity. Causality and information are not distinguished, which ultimately leads to pseudo-paradoxes12. There is even the claim of perfect analyticity, that is, calculability, although quantum physics itself proposes the uncertainty principle. It is precisely this architectonic flaw of attempting contradictory things that leads to the “paradoxes” of current mainstream interpretations of the quantum world.

String Theory, in contrast, proposes a mechanism that creates kinds of matter based on different information. It describes the form of energy, where different forms—in this case different modes of “vibration”—lead to different kinds of matter. This concerns all particles, even photons, i.e. electromagnetic waves.

Both frameworks, however, share an extremely important property: in some way or another, they describe a probabilistic, yet quantized world.

The sub-atomic world is not a continuous one. That means it is impossible to have a smooth transition from a “natural law”, expressed in an analytic formula, to the observation of the behavior of those tiny “objects”. At some point we thus need an abstract transition that creates a quantum. Although physics can only state that there is the quantum, incapable of “explaining” the why, we may well say that this transition is induced by a transfer of information, e.g. by a measurement. In other words, the objects and their phenomenal appearance are dependent on the measurement, whether this is imposed by another particle without an experimenter or by the apparatus and the actions of the experimenter. Before measurement, however, particles are not particles at all; there are only waves of probability. That transition is called decoherence. The whole arrangement is thus one of information. The quantum introduces one of the conditions of identifiability: discontinuity. The other condition is memory, which we find only in String Theory. As we already said above, the greatest defect of the Standard Model is the architectonic flaw of conflating causality and information, which in turn is a consequence of its representational character.
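A toy numerical picture of that transition, with purely hypothetical amplitudes: before “measurement” there is only a table of complex amplitudes; sampling according to the Born rule yields the one definite, identifiable outcome:

```python
import random

# Toy picture of decoherence-as-information-transfer: before measurement
# there is only a list of complex amplitudes; measurement samples one
# definite outcome with probability |amplitude|^2 (Born rule).
amplitudes = {"up": 0.6 + 0.0j, "down": 0.8j}  # hypothetical state, |0.6|^2 + |0.8|^2 = 1

def measure(state):
    outcomes = list(state.keys())
    probs = [abs(amp) ** 2 for amp in state.values()]
    return random.choices(outcomes, weights=probs)[0]

counts = {"up": 0, "down": 0}
for _ in range(10_000):
    counts[measure(amplitudes)] += 1
print(counts)  # ~36% "up", ~64% "down"
```

The relative frequencies approach |amplitude|² only over many repetitions; a single run yields exactly one identifiable outcome, which is the discontinuity mentioned above.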

Nevertheless, from all of that it should be clear that quantum physics developed a strikingly different tool-set compared to that of statistical mechanics. There, particles—atoms or molecules in this case—are conceived as tiny billiard balls, almost without spatial extension. Initially, statistical mechanics did not know anything about information. Yet, statistical mechanics introduced another important perspective into the realm of potential expressions: the population. In some way, we may conceive of the whole 19th century as the century of the discovery, or invention, of the population, from the French Revolution to Darwin to Helmholtz.

In quantum physics, particularly in String Theory, the modernist assumptions collapse.

  • 1. There are no objects independent of measurement; quite to the contrary, measurement is a form of information transfer that induces the way the microscopic world transits=transforms=decoheres into a macro world.
  • 2. There is no independence at all.
  • 3. The basic mode of description is based on probability, that is information and risk.
  • 4. Induced generation and probabilistic relation supersede existential claims.
  • 5. Computability is a matter of context and performing interaction.
  • 6. There is no complete analytic, i.e. symbolic description for the transition from micro to macro.

So, if the modernist belief set has already been seriously corroded even in physics, why should we continue to stick to it in a field like urbanism? We’d suggest dropping existentialist attitudes completely, concerning theoretical as well as performative and material aspects, and with them all the anti-cultural procedures like representational top-down planning.

Some important questions can be derived here. What else can we learn from the example of quantum physics, particularly for urbanism? Is there a “standard model” in urbanism, drawing mainly on existential claims like objecthood? What would a stringy theory of the Urban look like? How could we assimilate a probabilistic perspective into our methodological setup?

At least one aspect of those open issues can be addressed right now. We have seen that in quantum physics the separation between the observer and the observed breaks down. The reason is that measurement takes place on the same scale, within the same actualization or form of matter. Measurement itself introduces indistinguishability. The result is known as wave-particle dualism, linked by decoherence. And it is probably not the last strangeness physicists are forced to handle; just think about the yet unknown quality of what they call dark matter and dark energy.

Well, the similarity of scale and kind is not limited to physics. We find it everywhere in cultural studies. Unfortunately, it is rarely recognized at all. It still remains to be worked out what decoherence could mean for cultural and urban studies, but for sure there are similar kinds of processes, strictly limiting what can be measured. Probably, we could even say that the self-referentiality introduced by the sameness of measurement scales shows up as a quantum effect as well. One of the possible candidates for a cultural “quantum” is nothing else than the sign as formulated by Peircean semiotics. For “quantum” just means that there is no countability, nor identifiability, beyond it. Probably, we have to be aware of “quantum effects”, mediated by different “particles”, in any cultural study.

Indeed, the Peircean sign is fully compatible with probabilistic foundations, for it marks a continuous field of actional densities, from which eventually an actual vector or reference is taken. This way we could say that Peircean signs and the signs in the Urban are isomorphic (at least). The urban quantum-sign raises the issue of the symbol, which is often treated in a rather unsuitable manner, mainly in the context of the question of identity or identification and the related issue of historical continuity. Yet, the topics of the symbol, its symbolicalness and symbolability, we have to postpone to a later piece (without forgetting about the probabilistic foundations).

7. Revisiting the Core

After this small excursion into the world of physics, which allowed us to harvest some promising conceptual tools, we return to our starting point, the topic of approaching a theory about the Urban. This we sketched by the following two questions:

  • 1. How to speak about the Urban?
  • 2. How to actualize the Urban Games?

The first of those questions could be said to relate to the field between the conceptual and the performative13, while the second would link the performative with story-telling and the political. Again, the two questions or perspectives certainly do not delineate ideally (geometrically) separated fields. We already mentioned that Urban Games comprise language games. Additionally, they work from different directions, creating a complex dynamics. As a suitable metaphor for this we may cite fluid dynamics, especially of free streams such as the Gulf Stream.

Figure 6a: The Gulf Stream in the North Atlantic, departing from the east coast of America eastward towards Europe (source). Red color means high differential velocity. A lot of vortices can be seen in a highly complex dynamics, creating patterns of mutual embedding.

Figure 6b. Vortices in a turbulent stream. As in the case of the Gulf Stream, there is no clear border, i.e. no separability, between two mixing streams.

Let us focus on the first issue for now, the mode and the possibility of explications as constrained by conceptual tools on various levels.

From previous work and the results achieved here so far we can fix some basic requirements for the explication of the model layer from figure 1.

Table 2: Basic requirements for a theory about the Urban.

Aspect | Characteristics
type of processes | differentiation, behavior
methodological frame | probabilistic, generative
architectonic constraint | satisfying self-referentiality
internal structural dynamics | construction by elementarization

The four basic types of structural model perspectives that match these requirements are:

  • Growth: establishing persistent form (“Gestalt”, morphos) by attachment (either positive or negative), or more generally, by a change in magnitude of some property (or properties); we may call it morphodiny (grk. dino, abstractly: to give, provide);
  • Networks: describing the form of matter capable of re-arranging information;
  • Associativity: for the transition from probabilistic processes to propositional statements, i.e. the basis for symbolification and encoding/decoding;
  • Complexity: for pattern creation and morphogenesis, i.e. the transition from order to organization as a self-stabilizing process.14

All of them we introduced in previous essays, though in a slightly different context, which means that in the future we will provide updates to them so as to better match the wording of urbanism.

These structural models share four eminently important properties: (1) They are all relational. (2) They are all built from “elements”. (3) These elements in turn provide docking sites for the even more abstract conceptual layer and the metaphysical attitudes behind it. (4) They allow us to derive anticipatory models that directly engage with operational issues.

It is crucial to understand that these four categories are simply different perspectives, or language games, useful for talking about differentiation. Whenever we find a process that produces something different, whether as novelty or as some kind of alteration, we may take one of these perspectives. Yet, we won’t be able to talk about form and the “becoming different” without those categories as a group. In general terms, these four categories are again to be conceived as elements that we can use to construct a space (an aspectional one!), or likewise a scale that allows us to compare things.

A second group of categories is needed to take the perspective of the process itself. We may distinguish the basic qualities in the arrangement of matter and information, which is nothing else than the orchestration of dynamical change.

The scale is actually built along the differential weight of matter and information. If the weight of matter or plans (symbolic quasi-matter) is more pronounced than that of information, we usually call it development; if matter becomes less relevant, we find either evolution, or, still further in the same direction, learning.

Thus we can see that form (morphos), adaptation and behavior build an almost continuous space, and thus, quite importantly, also a subjectivating scale to describe the dynamics of things. In turn, talking about changing things by referring to just one of these perspectives, whether on the objectivating or on the subjectivating scale, must always be rated as an inadmissible reduction.

Note that the “Relational Turn” is completely incompatible with modernism and its belief set. From a modernist perspective, the particular role of the above-mentioned four structural perspectives remains simply invisible, for it is even impossible to talk about the dynamic effects and emergences of relationality within the limits of modernist concepts. Interestingly, throughout the 20th century more and more scientific disciplines discovered the necessity of a relational turn, from biology (Rashevsky 1935, Rosen 1991 [28]) through economics to architecture (Lorenzo-Hemmer [29]).

In order to support the transition into the area of anticipatory models, the structural models have to support some quite essential processes. Any of them has to…

  • — be formalizable,
  • — be capable of providing scales for different kinds of measurement,
  • — be operationalizable for the actual construction of measurements,
  • — allow for active comparatistics.

Without support for these constructive properties a structural model would be hardly of any value.

Figure 7: Three methodological layers. The model layer shows only the main types of structural models. The other component of the model layer, the anticipatory models, is not shown.

All four types of structural models can also be used for describing the transition between the material and the informational. Interestingly, they apply both to the empirically observable processes and to the methodological concerns, where they serve the transfer from concepts to action.

Finally, we can fill the model layer with more concrete aspects, creating something like an associative field. Of course, and in striking contrast to the short list of structural models, this field is far from complete. Actually, on the level of anticipatory modeling we already find the influence of the unlimited number of forms of life. This does not mean that a particular form of life would provide an infinite number of possible moves. Quite the contrary is true. However, it definitely does mean that the forms of life can’t be constrained, or limited in their number, a priori, or top-down. Anything else results directly in chauvinist or imperialist patterns.

Figure 8: A possible explication of the model layer, now showing a mixture of structural and anticipatory models as an associative field.

Concepts like the aspection, the choreosteme, or the theory of theory can be used as conceptual tools, but they are also conceptual categories.15 Some of the field’s components are still quite abstract and strictly non-representative. Thus, the intermediate “model” layer in its entirety may also be conceived as a multinomial or multi-perspectival generic model.

Similar to the model layer, the explication could be done for the conceptual layer as well as for the operational domain. Together they probably establish what Foucault once called the field of proposals and propositions. Since we are interested here in, and arguing towards, the Urban, this field also represents a possible instantiation of “Urban Reason”. We just should not forget that story-telling, the playful delocutionary speech-act, provides the nodes and strings and knots that will bind everything together.

Once we manage to keep all three areas alive simultaneously, whether we are engaged in political operations or in philosophical concepts, we can expect to understand the schemata that can be used to perform a Critique of Urban Reason. From this vantage point, finally, again being conscious about delocution, the playful story-telling, we can start to think the construction of the city. Probably only from this perspective.

8. Tokens, Types

If we consider the four basic constituents of the model layer also as major mechanisms of actual differentiation processes, then an interesting issue appears. Given the enormous variety of urban forms, concerning morphology, material and immaterial organization, and cultural processes, we could address the question whether we could derive a classificatory scheme, or distinguish certain types.

One could think of at least two purposes of such a classification, albeit both are concerned with the topos of the “Urban in Time”. We may for instance ask about the evolution of Urban life forms, in a similar way as it is done in biology with respect to natural evolution. This purpose would be directed to the past, putatively allowing for a better understanding of the history of the city and of urban arrangements.

David Shane proposed an approach to the description of forms that could well be called a hermeneutical one, thus being closely related to this evolutionary attitude [29]. When describing the forms he derives abstract elements of construction, attaches empirical instances and distils an evolutionary sequence of the form of the city. He distinguishes Archi Città, Cine Città and Tele Città. Each of them is characterized by a particular cultural setup that precipitates in typical morphological structures. Thus, Shane is able to build a kind of metric for “measuring” citiness by means of its distinguished elements. These elements comprise two morphological forms on the level of built matter: armatures and enclaves. Highly interestingly, however, he also includes Foucaultian heterotopias as a third element of citiness. He even proceeds to differentiate heterotopias induced by material crisis from heterotopias of immaterial illusion. The heterotopia comprises incommensurable components, hence it is nothing else than an instance of the opposing forces that are a major element of complexity. Shane’s approach clearly exceeds, for instance, Tom Mayne’s, who distinguishes different kinds of armatures and maneuvers in order to build a morphological taxonomy. Mayne also invokes the concept of complexity, yet he doesn’t arrive at a comparable level of generality. Not quite surprisingly, Mayne’s work tends towards the figural and representational. One of his main clients is the federal government of the U.S.A.

Both Shane and Mayne are heading for a taxonomy. Shane’s achievement in his “Recombinant Urbanism” [30] is more abstract and thus more general than Mayne’s “Combinatory Urbanism” [31]. Mayne got caught by the primacy of aspects of form, to which he assigns behavior, rather than the opposite, as is the case in Shane’s approach. For Shane, behavior comes first. Thus, Shane is able to reflect about city theory while Mayne provides case studies. These are beautiful to look at, but there is no theory, even as Mayne tries to distil a “method” from them as a common denominator.

Yet, even Shane does not arrive at a theory of differentiation. He just describes it, almost in a phenomenological manner. Underpinning the description with plausible arguments does not yield a theory of differentiation. Hence, Shane’s approach is still not suitable to derive a taxonomy of city-contexts. But his results are perfectly compatible with the abstract structure we propose here.

Another “problem” with the approach proposed by Shane is its tendency towards global interpretations. An extension of his work would be needed, focusing more on the dynamic mechanisms. Taken together, it would then be possible to create a classification scheme for urban neighborhoods that would tell the urbanist which “species” he is dealing with.

The second purpose of a classification or a taxonomy is not directed to the past, but rather more to the future. The model of differentiation could provide a means to anticipate struggles and to organize precisely the differentiation in the desired manner, without getting caught by inherent limitations due to metaphysical blindness. The paradigmatic example for such a potential deadlock is provided by the case of Singapore, as we have discussed in the previous essay. Another example is Mumbai, where the city administration imposes embryological principles onto a self-organizing urban body. This creates a deep mismatch since the city itself is at least on the verge of developing the capability for learning, that is, a very dynamic form of differentiation (at least in some parts of it).

This brings us to the application perspective.

9. The Application Perspective

In this last section we will show some examples of the “binding problem” regarding the relation between theory and operation.

So far we have introduced the abstract structure that is necessary for binding theory, models and operations together. We are convinced that any neglect of this structure leads to pathological consequences, particularly with respect to all those domains that deal with observations from the social or cultural realm. These consequences could be labelled the “binding problem”. Note that there is no particular addressee, since it concerns any concept and any operation, whether on the level of urban politics or on the level of implementing urban infrastructure.

Philosophical stances develop their specific binding deficit—think for instance of analytical philosophy with its dismissal of metaphysics—while political operations may likewise induce instances of another kind of typical binding deficit. Common to all these deficits is some structural inconsistency, or even an internal contradiction concerning central issues of the respective stance, often appearing as a kind of (pseudo-)paradox.

Metaphysics is involved in this binding whether one is aware of it or not. We have argued that metaphysical belief sets constrain what can be perceived, recognized, expressed and conceived. Now let us see how such belief unfolds in actual reality.

The examples we choose for this essay are the supply of water and energy, and the movement that called itself “Metabolism”.

Water

One of the most striking examples is provided by the challenge of providing clean water in urban areas of developing countries. The problem is usually rendered into terms of necessary investment and uncontrolled growth of slums, accompanied by corruption or other forms of weakness in government. Together, these factors seem to prevent the installation of a sufficiently stable system of water pipes. The actual problem, however, is precisely this rendering. Why?

If we resort to the results discussed above, we can immediately ask about the theoretical conditions that lead to that rendering. These conditions have nothing to do with the living conditions or political conditions. It is the metaphysical belief in central control and the belief in the possibility of rationalist, if not even deterministic planning that creates the visible part of the problem.16 Central control as well as the belief in rigorous planning are both top-down approaches, hence they are applicable only to development, not to open evolution. Development, on the other hand, requires a fixation of side-conditions, which results in a particular model of differentiation: the abstract embryo. (Again: note that the biological type serves as a structural sibling, not even as a model!) Actually, we all should stop talking exclusively about “urban development”. Concerning the differentiation processes, it quite urgently needs to be complemented with “urban evolution” and “urban learning”.

Usually, in urban differentiation processes the fixation of side-conditions is not possible, whether for ethical or practical reasons. The result is that the problem persists, and with it the suffering of the people; the examples are countless, particularly all around Africa. It is both scandalous and ridiculous that the provision of water has been declared to be the major problem of the urban areas in the South.

Dropping the belief in planning, control and development immediately directs attention to local solutions. Any local solution for material resources needs an identifiable source, available storage and the organization of flows. Everybody can see the material arrangements of that basic setup. It is not an anonymous flow anymore. Regarding water, all of that can be established—astonishingly enough—in a strictly local manner, even in less developed areas.

Recently, Najiyah Alwazir described a project called RAINS that was conducted in Sanaa, the capital of Yemen. The project designed a solution for the problem of water shortage, which is a quite pressing issue in the mostly arid climate of Yemen. As a developing or even “underdeveloped” country, Yemen does not provide a stable, pervasive and abundant infrastructure. According to RAINS, the core element of the solution is thus the installation of appropriate private=local storage capacities, since in Sanaa there is a short rainy season twice a year. Storage devices can be made from almost anything, especially, however, from various sorts of plastic. Yet, storing water for months is not without problems. For instance, it needs to be heated, which requires additional energy.

But where to get water locally when there is none, when the rainy season doesn’t provide enough, or when huge storage devices can’t be realized? Well, it is not true that there is no water. There is almost always water around, even in arid areas of the tropical or subtropical latitudes: it is in the air. The respective technology is strikingly simple. Basically, it is a windmill that creates pressure in the closed circuit of a heat pump, in other contexts also known as a refrigerator (read the respective story here). Nicely enough, the technology can be scaled, from hi-tech to low-tech, from small to big. A mid-sized turbine produces up to 1000 liters per day. Yet, low-tech turbines would work as well, requiring only very little investment, besides the fact that they create lots of jobs.
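A back-of-envelope estimate, using assumed climate values (30 °C, 40% relative humidity, condenser coil at 10 °C) and the standard Magnus formula for saturation vapor pressure, shows what the quoted figure of 1000 liters per day implies in terms of processed air:

```python
import math

# Back-of-envelope check of the "water from air" figure quoted above.
# All climate numbers are assumptions for illustration, not measurements.

def vapor_density(temp_c, rel_humidity):
    """Water vapor density in g/m^3 (Magnus formula for saturation pressure)."""
    e_s = 611.2 * math.exp(17.62 * temp_c / (243.12 + temp_c))  # saturation pressure, Pa
    e = rel_humidity * e_s                                      # actual vapor pressure, Pa
    return e / (461.5 * (temp_c + 273.15)) * 1000.0             # ideal gas law, g/m^3

warm = vapor_density(30.0, 0.40)   # ambient air: 30 deg C, 40% humidity -> ~12 g/m^3
cold = vapor_density(10.0, 1.00)   # air leaving the cold coil, saturated at 10 deg C
yield_per_m3 = warm - cold         # condensate per cubic meter of processed air

target_l_per_day = 1000.0
air_m3_per_day = target_l_per_day * 1000.0 / yield_per_m3
print(f"{yield_per_m3:.1f} g/m^3 -> {air_m3_per_day / 86400:.1f} m^3/s of air for 1000 l/day")
```

The required throughput of a few cubic meters of air per second is a fan-scale rather than an industrial-scale task, which renders the quoted figure plausible.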

Without any exaggeration we can say that if there will be (is?) any scarcity of water (or energy, as we will see in the next section), then it is exclusively due to modernist stupidity or cynical politics. Scenarios like the one imagined in the projective documentary about the consequences of global warming, “Les temps changent” [32], are complete nonsense, since they mechanically recite the catastrophe against which there is allegedly no measure that society, i.e. the centrally administered state, could take.

Water is not only an essential resource for living beings. The principle “water from air” can be integrated into any kind of architecture in order to use it as the basis of passive cooling. It should be clear that such infrastructural solutions become thinkable only if the modernist belief set is left behind.

Energy

It is not only in developing countries, or the urban areas of the South, that problems prevail due to the addiction to modernist belief sets. In industrialized countries there is a quite similar issue.

Currently, countries like Germany or Switzerland are propagating the so-called “Energy Turn” (officially, in German: “Energiewende”), meaning that the required energy supply should be organized through so-called “regenerative sources” (which actually is a misnomer), that is, from wind energy and solar energy. The problem imposed by this change is that the individual source is both rather small and rather volatile in its output, compared with large power plants.

The modernist “solution” has been propagated as the so-called “smart grid”. A lot of computers are thought to be needed to distribute the electricity from many small sources and to minimize the peak-capacities, using the existing grid. Yet, smart grids do not change the principle for distributing the electrical energy at all: it remains centralized.

Thinking locally leads to a completely different solution, quite analogous to the water story. We need local producers, which in this case is simply the solar panel on the roof. And we need some storage, in other words batteries. In fact, what can be forecast is a whole new culture of energy storage, across many scales. Fortunately, the market has already started to offer such storage devices. IBC Solar offers devices for individual buildings, and ABB is working on large-scale storage devices. There is also a solution involving methane and fuel cells in a closed-loop system. The funniest thing, however, is the possibility to create methane, the main component of natural gas, directly from the CO2 of the air and hydrolyzed water (descriptions in German, in English). The tendency is the same as in the case of water: decentralization and democratization, the emergence of local infrastructures for storage and distribution. Astonishingly, the involved chemical reaction—the Sabatier process—has been known for more than 100 years, and wind power is an equally traditional source of energy. It was modernist thinking that prevented its appearance on the engineers’ (and investors’) radar. And nowadays, they again think of it only in large, expensive, technically difficult-to-handle installations, which therefore would have to be administered and run following the paradigm of centralization.
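A toy hourly balance, with all capacities and profiles being hypothetical illustration values, shows the principle: a roof-scale producer plus local storage reduces the central grid to a fallback:

```python
# Toy hourly energy balance for one building with a PV roof and a battery.
# All capacities and profiles are hypothetical illustration values.
PV_PEAK_KW = 5.0                                          # assumed roof installation
BATTERY_KWH = 10.0                                        # assumed local storage
DEMAND_KW = [0.4] * 7 + [0.8] * 10 + [1.2] * 7            # 24 h demand profile
SUN = [0, 0, 0, 0, 0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0, 1.0,
       0.9, 0.8, 0.6, 0.4, 0.2, 0.1, 0, 0, 0, 0, 0, 0]   # relative irradiation per hour

charge, grid_import = BATTERY_KWH / 2, 0.0                # start half-charged
for hour in range(24):
    surplus = PV_PEAK_KW * SUN[hour] - DEMAND_KW[hour]    # kWh produced minus consumed
    charge += surplus
    if charge < 0.0:                                      # battery empty:
        grid_import += -charge                            # the rest comes from the grid
        charge = 0.0
    charge = min(charge, BATTERY_KWH)                     # surplus beyond capacity is curtailed

print(f"grid import: {grid_import:.1f} kWh/day")
```

On this assumed sunny profile the import drops to zero; the central grid is reduced to a fallback for bad days, precisely the inversion of roles described in the following.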

It is clear that the result could be a completely different kind of organization of the grid and a completely different kind of differentiation process. Bottom-up processes lead automatically to the emergence of cluster- or cell-like organization.17 Such an organization not only automatically provides redundancy. It will also create suitably designed and unforeseeable business opportunities on the fly, which in ecology is called niche creation. To large parts it will be privately owned (on the level of cells); just the overarching informational organization may be provided by institutions. Thus, institutions become clients rather than remaining providers. It is clear that only in such a bottom-up organized energy culture will we see a true market for usable energy differences, quite in contrast to the oligopolistic (at best) fake we have to deal with today.

Most importantly, however, replacing top-down with bottom-up ultimately results in a change of metaphysical attitudes: away from the orientation towards the lithosphere, turning around towards the solar stream of usable energy. In one of the next essays we will discuss this in more detail by means of reviewing an upcoming book about the issue.

Metabolism

As a third example for illustrating the binding problem regarding the relation between theory and operation we will briefly visit the idea of metabolism, or organicism in a wider perspective, with regard to architecture and urbanism.

Metabolism is a biological concept. It describes the capability of living cells or even whole organisms to grow, to differentiate and to maintain their structure. Etymologically, metabolism means “a change”, that is, the observation of a particular change. Metabolic processes are observable as a large variety of well-orchestrated changes that form a dynamic “equilibrium”, i.e. a phenomenologically more or less stable macroscopic appearance, which however rests on myriads of changes on the microscopic level. Yet, it must be understood that metabolic processes are dissipative processes, meaning that they create a surplus of entropy in order to build up structures, that is, negentropy. Creating a surplus of entropy requires quite excessive consumption of energy differences, turning them into heat radiation.
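The thermodynamic bookkeeping behind this can be stated compactly. The second law only requires the total entropy not to decrease; a local build-up of structure (negentropy) is therefore possible exactly insofar as the organism exports a surplus of entropy to its environment:

$$
\Delta S_{\text{organism}} \;+\; \Delta S_{\text{environment}} \;\geq\; 0,
\qquad\text{hence}\qquad
\Delta S_{\text{organism}} < 0 \;\text{ requires }\; \Delta S_{\text{environment}} \;\geq\; \bigl|\Delta S_{\text{organism}}\bigr|.
$$

Local order is thus paid for by exporting more entropy than is gained, mostly as heat radiation, which is what the text above calls the excessive consumption of energy differences.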

Above all, metabolism is not simply a particular change. Its orchestration requires a preceding structure, including the respective functional compartments. And this change is devoted to a particular function: the synthesis of new morphological structures as well as their break-down and recycling. Thus, biological metabolism denotes “change within structures that leads to change of morphology”. This does not mean, however, that the shortcut “metabolism is morpho-change” is allowed. Rather, we have to consider that we have different levels of integration with respect to the changes, linked together by emergence and deposits—just as in any complex system.

The idea of metabolism was by no means revolutionary at that time, in the early 1960s. It just extended a line of thinking that had prevailed in architecture and urbanism for at least 30 years before. In architecture and urbanism, the idea of organicism appeared for the first time in the work of Frank Lloyd Wright, already in the first or second decade of the 20th century. Yet, his notion of organicism had only little to do with organisms, or the Kantian organon. Wright called himself a modernist, and thus his assimilation aimed for things like “super-nature”, designs better than nature. He tried to extract principles that almost naturally would lead to good design. All of this is utterly naïve, of course.

A next important step was the adoption of the concept of the organism into the Athens Charter in 1933. Planners obviously felt so overwhelmed by the complexity and vitality of cities, and perhaps by their own ignorance about them, that the notion of the “city as organism” became quite popular. Additionally, corporeality had been a subject of heroism all around the developed countries throughout the 1930s. A bit later, Sigfried Giedion (1941) referenced organisms explicitly as a template for built architecture in his famous “Space, Time and Architecture: The Growth of a New Tradition”. Yet, growth is not developed as a concept there, and time is conceived just as “history”, not as an intrinsic result of the Urban—something which had to wait until Aldo Rossi’s (1984) critique of modernist conceptions of cities and architecture.

Yet, a city is not an organism, of course. Although both entities, cities as well as organisms, can be said to be complex entities, the actual mechanisms are quite different. Simply put, in a city we do not find a Golgi apparatus, and in the cell we don’t find mayors or administrations.

This topic also appeared in the discourse about urban morphology. In the recent two decades or so, the quarrel between the various schools of urban morphology has become really serious. The Italian school around Caniggia traditionally embraced the idea of the organism as a kind of template for thinking about urban form. Yet, they didn’t use it as a template for deriving a theoretical position; they approached it more in a sympathetic mood. This provoked a fierce critique by Michael Conzen [12], one of the popes of the area:

In a recent issue of Urban Morphology, Nicola Marzot offered an interpretation of my approach to urban morphology as compared to that of Caniggia who ‘equated human history and natural history. Each entailed the processes of birth, development, maturity and death. And there was a clear implication of the products of human endeavours.’ If Caniggia really said that he would have committed an obvious absurdity, for the existence of an urban settlement is a fundamentally different thing from the life of a human individual. (p.78)

Yet, Conzen too has obviously been completely unable to derive a theoretical position himself from his almost infinite catalog of particulars. Of course, he is a pope, and as such he could not do without installing the need for exegesis.

What is needed is a suitable binding between predictive models that are used in operations and structural models that allow a transition or integration towards the conceptual level. In fact, and quite unfortunately, up to today, and with the exception of the approach we proposed earlier, even the concept of complexity hasn’t been presented in a useful form. One of the dramatic effects of the misunderstood organicism envisioned by the Athens Charter was the program of de-densifying the cores of the cities. Of course, the opposite, densification, can’t be limited to just the material aspects, as for instance in the case of the banlieues of Paris (F), which additionally follow the crystalline growth model. In the context of the Urban, densification has to be understood always as an issue of mediality. Media in turn require densified semiosis, which will emerge only on the basis of a sufficient diversity of life forms within the same physical space.

In both cases, with Wright and with the Athens Charter, we can observe a binding problem in the theory work, leading to a literal, representational adoption of concepts from another domain. As Girard puts it,

one should avoid allegory, which consists in replacing the object with its metaphor. ([33] p.136, his emphasis)

What is missing in both cases, in Wright’s writing as well as in the Athens Charter, is a proper concept of differentiation18 that could have been used as a binding element.

Against the background of the discourse about sustainability19 and regenerative cities20, the ideas of the Japanese Metabolists from the early 1960s gain increasing attention. Koolhaas & Obrist are just the most recent ones publishing an anthology about them, though probably the most serious one, as it consists of lots of interviews with still-living former proponents of the group, together with sketches and drawings.

What is this Japanese “Metabolism” about? In a recent interview with a German newspaper about his book Koolhaas praises their intention [34]:

Kiyonori Kikutake explains why, at that time, they were no longer satisfied with the time-honored laws about form and function, and why they tried to transfer the life cycle of birth and growth to town planning, construction and architecture.21

If nothing else, this citation definitely demonstrates Koolhaas’ interest in a theory of differentiation for urbanism and architecture. Yet, it also uncovers Koolhaas’ own deficits, which he shares with many other “experts” of the field. On his conscious radar only expansion appears, albeit in his practice he applied embryological principles several times, e.g. in the case of the Casa da Musica.

Kiyonori Kikutake [35] writes

“Metabolism” is the name of the group […]. We regard human society as a vital process […]. The reason why we use such a biological word, the metabolism, is that, we believe, design and technology should be a denotation of human vitality.

And Kisho Kurokawa specifies (cited after [36] p.81):

…if spaces were composed on the basis of the theory of the metabolic cycle, it would be possible to replace only those parts that had lost their usefulness and in this way to contribute to the conservation of resources by using buildings longer.

Later, Kurokawa extended the Metabolists’ approach into a theory of “symbiosis” to be applied to urbanism, architecture and their relation to nature. Yet, although their approach—as far as it is conveyed in their writings—is certainly sympathetic, it is not so much more than that. It provides an early support of the idea of sustainability, but there are neither structural nor predictive models, there is no theory of differentiation and no reflection about metaphysical conditions. There is just a fluffy use of a biological metaphor and the operations, that is, building as operation and the politics of building. Not quite surprisingly, they conceived of themselves also as modernists, publishing the “last manifesto” in urbanism. Looking at their built matter, it becomes clear that the Metabolists’ approach is deeply infected by cybernetics. The implied model of differentiation and morphogenesis that they applied is close to crystalline growth, as demonstrated by the Nakagin Capsule Tower from 1972. It looks like a disorderly grown crystal. Thus it fits the overall impression that in the case of the Capsule Tower (and its many replicates throughout Japan) the core idea of the Metabolists never got realized. Not a single capsule has been replaced. Crystals do not replace parts of themselves; depending on the physical circumstances, they either grow forever, fall into everlasting stasis or get destroyed. At least Kikutake’s private “Sky House” has been slightly modified throughout its life cycle ([37], p.17). But there is nothing particularly “metabolizing” about it.

In both types of buildings, the communal as well as the solitary one, “metabolism” has been implemented on the physical level. We have to rate this just as an indication of missing abstraction. Above we said that the shortcut “metabolism is morpho-change” isn’t allowed at all, since this would neglect the emergence relation between morpho-structures and producer changes in the complex system “cell”, for which biologists developed the particular perspective of metabolism. The Metabolists neglect precisely this layering of the complex system. Thus, however, the Metabolists’ theory is nothing else than a metaphor, flattened by modernist reduction.

In some way this renders the Metabolists, who always claimed to propose a “utopia”, late descendants of the idea of the “Ideal City”. As the label already conveys, it is just idealism, which always suffers from the double illusion implied by all top-down approaches.

Japanese Metabolism headed for adaptivity. In this they were years ahead of the mainstream. Yet the honourable intention hasn’t been backed by structural models: there are no predictive models in their approach, no abstraction towards a theory of differentiation, no reflection on conditionability. Well, okay, even philosophy wasn’t developed far enough at the time, with Deleuze still brooding on the foundations of his own. And cell biology itself had been completely absorbed by cybernetics, as one can see in the works of Monod. It is not our intention to blame anybody here. But it must be clear that Japanese Metabolism could not be transferred into our times, due to its structural deficiencies.

10. Urban Strings

In an interview about his S,M,L,XL, conducted in 2001, Koolhaas mentioned that

“Compared with the metropolises of the industrial nations, Lagos is 50 to 100 years ahead.” [38]22

Given the seemingly chaotic condition of Lagos and the failure of its official urban services and organizations, in other words its immaterial infrastructures, this seems like a bold and weird statement. Yet Koolhaas addresses nothing less than a change in the metaphysical setup.

“We have been interested in the fact that on the one hand all organizational systems fail, while on the other hand the city nevertheless keeps functioning. […] The reason for this is that the inhabitants organize themselves in micro-systems.”23

Bottom-up organizational processes are not compatible with the major claims of the modernist belief set, particularly the idea of independence. Self-organization starting on the micro-level requires the metaphysical primacy of relation.

As we have mentioned several times, here and in previous essays, our impression is that Koolhaas is clearly interested in the processual aspects of differentiation, where others have not even grasped the fact that we are in need of a metaphysics of differentiation. As guest editor of an issue of Wired he wrote [39]:

“Where space was considered permanent, it now feels transitory—on its way to becoming.”

In an earlier interview, from 1994, he explicitly referred to a characteristic of complex systems, namely opposing forces, denying the economically and politically motivated “Taylorization” into defined fields of function. Regarding the central station in Lille, a mega-structure Koolhaas was engaged to generate, he relied on the “alchemy of mixed use”, something he had already been cherishing in his famous “Delirious New York”.

The understanding of complex, self-organizing entities differs dramatically from that of linear entities. An analytic, and thus comprehensive, symbolic representation, e.g. as some kind of “law”, is possible only for the latter. Trying to do the same for the former usually ends in some kind of disaster, for in that case anticipations based on the assumption of linearity inevitably fail at some point in time, for whatever reason, that is, for no particular reason, despite the fact that for some time the model could have been working quite well. Complex entities can’t be controlled: there is no law, there are just mechanisms, actualized in a manifold of mutually penetrating populations. The best one can try is to tune the side-conditions of the respective processes. Yet there is no guarantee for a particular outcome.

In other words, if urbanism claims to respect the moral and ethical conditions of the inhabitants (see for instance this), then traditional attitudes to planning and development have to be dropped. Respect for people is incompatible with the very concept of development. Implementing plans is always and necessarily accompanied by violence, even if that violence is not visible from within the plan.

Yet, if we talk about mechanisms, the question arises: what are the subjects of those mechanisms? Where do we find them, and how can we talk about them?

If we consider models of complex systems, such as the Gray-Scott model, we would probably distinguish certain elementary species. In the case of the Urban these species can’t be representational or even material, I guess, as they are in those models, which assume them to be particular kinds of molecules.
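For readers unfamiliar with it, here is a minimal sketch of the Gray-Scott model in Python; the grid size, the seeding and the particular parameter values are illustrative assumptions, not part of the argument:

```python
# Minimal Gray-Scott reaction-diffusion sketch; two "species" U and V
# interact only locally, yet global patterns emerge.
import numpy as np

def laplacian(Z):
    # 5-point stencil with periodic boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

n = 128
U = np.ones((n, n))
V = np.zeros((n, n))
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50   # seed a small square
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060  # diffusion, feed and kill rates

for _ in range(10000):
    uvv = U * V * V                      # the autocatalytic reaction term
    U += Du * laplacian(U) - uvv + F * (1.0 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V
# U and V now hold an emergent pattern that no global "law" describes;
# only the local mechanism and its side-conditions (F, k) are specified.
```

This also illustrates the earlier point about complex entities: one can tune the side-conditions (here F and k), but there is no guarantee for a particular outcome.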

So we may adjust our question slightly and ask: what are the elementary, abstract species that we need in order to build appropriate models of the Urban?

Approaching this question requires a framework, and a reasonable choice is that of differentiation, from the metaphysical level down to the operational and back. Previously we identified three levels of actualization for differentiation, which can be rendered into different forms. The basic form is certainly the trinity of development, evolution and learning. Yet there are transpositions of this basic theme; any of them would be a worthwhile subject for further investigation, but here we just list them:

  • embryos, populations (or brains) and evolution (minds as hosts of ideas),
  • plans, probabilization and mediatization,
  • automation, participation and (abstract) creativity,
  • form, process and virtualization,
  • the particular, the species and the general (concepts).

These basic aspects all have to be thought of as principles that actualize exclusively in local contexts. The geographic space of a city could consequently be thought of as a highly dynamic and volatile patchwork of such actualizations, each of which could be assigned to one of the three levels or types of differentiation. This patchwork is by no means randomly arranged, of course. We have to think of it more in terms of said complex system, built from several components. Yet, again, in contrast to the simulated models, we should resist the temptation of assuming any kind of global rules for the interaction of the respective “species”.

Any possible pairing within the trinity of differentiation is inherently contradictory, although this contradiction is not a mutual one; it is directed. Embryos neither evolve nor do they learn. Learning, however, definitely comprises “embryonic” as well as “evolutionary” phases, without exhausting them. Conversely, while there is quite some play in learning processes, there is only little of it in evolutionary processes and almost none in embryonic ones.

Building upon notions from biology, even if we use them in a quite abstract way as structural schemata, immediately exposes us to a number of objections. The most thorough ones have been posed by Anthony Giddens in his “Constitution of Society” (1986), regarding evolution. Yet, although Giddens is certainly right in criticizing the direct application or transfer of the biological theory of evolution into the realm of the social, his critique commits the same mistake (p.228). His image of evolution remains by far too naive, and is partially even severely misunderstood, to justify his objections against evolutionary theory and his final rejection of it. Nevertheless, he correctly emphasizes that talking about the realm of the social involves processes of largely “immaterial” signification. While such processes imply learning, it remains true that this does not imply an incompatibility with a generalized theory of evolution. The same holds for adapting the notion of the embryo, or of growth. We just have to remain aware that these are modes of talking.

It is clear that we can speak about differentiation only by also invoking probabilistic concepts. On the other hand, differentiation concerns individuals not only in their life history, but also as the subjects of those differentiation processes.

This highlights an interesting issue, as play is eminently social, while development is no less distinctly a matter of automation. We can read the whole period of unfolding modernism, starting with the end of the Middle Ages, as a continued battle between participation and automation. In some way, cities and the Urban form of life provide just a further, unfolded field for the eternal contest between control and play, between constraints and overturn, between automation and participation. Yet it is also true that it is the Urban as a life form that transformed battlegrounds into playing fields, thereby rendering the aterritorial into a local as well as a global social practice. Hopefully, it is the Urban and the respective life form that renders the nation and its underlying detrimental ideas insignificant.

The patches in the urban patchwork of various kinds of differentiation processes certainly influence each other, but it is an issue for future research to determine whether, and to what degree, the interactions of those differentiation processes can be arranged in separate classes.

So, let’s return to the question of the species. Probably it is quite reasonable to assume that the species subject to the mechanisms of the Urban are just the instances of those three types of differentiation processes. In figure 7 above we introduced four types of structural models as candidates for solving the binding problem in theory works, namely growth, networks, associativity and complexity.

Result 1

We can now simplify this assemblage by subsuming it under the concept of differentiation as we have discussed it so far, without, of course, dropping those four components, that is, growth, networks, associativity and complexity. Yet this differentiation still resides in the realm of models; hence we have to call it “generic differentiation”. The abstract (meta-)structure suitable for overcoming the binding problem regarding theories about cultural processes, as well as their political instantiation, would look like this:

Figure 9: Generic Differentiation as the key element for solving the binding problem of theory works. Three things are important here: (i) the chart depicts the elementary module of a fluid moebioid fractal, since there is no separability between the three parts; they are mutually embedded into each other. (ii) “Concept” is a transcendent entity (see this for the argument). (iii) The brackets need to be conceived as the “realm of method”, which is something that we still have to accomplish (in one of the next essays). A similar structure may be suitable for the foundation of a planning theory (also to be discussed in a future essay).

Note that the basic metaphysical stance of this methodological structure builds upon the “probabilistic relational”, which derives directly from the (Deleuzean) transcendental difference as soon as we care about any kind of application, or rule following. Deleuze bound repetition, as a sort of still transcendental application, closely to his concept of transcendental difference.

The field of models can be summarized by the differential (in the Deleuzean sense) of the four basic types of designs, namely growth, networks, associativity and complexity. Any of them leads to some kind of “change”, whether as a horizontal difference or a vertical differential. Moreover, any of them is capable of “associating” or “growing”; they are all kinds of networks (just of varying degrees of fluidity); they all refer to complexity; and, last but not least, they all are (basic) forms for describing the transition from mainly material to mainly immaterial contexts (material/immaterial being used here in the common sense as a first conceptual approach; actually there is no categorical difference between them: just think of the quasi-materiality of symbols and the form of energy in string theory). We can’t delve further into this matter here, but I think it will be highly rewarding to develop a vocabulary and expressions in order to establish the respective space, which then could be called the “Space of Generic Differentiation”.

Result 2

Above, in the context of figure 1, we already mentioned that this scheme, as we have developed it from figure 1 up to here, is only the atomic module of a fluid moebioid fractal. (It is not the city or any other empirical entity that is meant to be a fractal here, but rather the dynamics of theory itself!) This very same module is part of any theory work; yet both the weights of the three parts and the parameters for the mapping into the more mature forms must be expected to be very different.

Thus we have finally arrived at a conceptualization of theory work that is applicable to any science, even to philosophy. One of its nice features is that it makes the categorical difference between the hard and the social sciences vanish, without neglecting the actual differences. And we have definitely removed the existentialist contamination, or even intoxication, from the socio-mental landscape.

Result 3

A small remark about the philosophical consequences shall be allowed here. We already mentioned, through results 1 and 2, that the structure shown in figure 9 above represents the basic module for the category of change. Of course, we do not conceive of change as something that could be objectively determined, as if there were something in the outer world that could be called “pure change”. We propose neither to follow Kant in his favoring of physicalist aprioris, nor the external (=naive) realists.

Instead, our category of change “socializes” the Kantian approach. As such it complements the structure that we called the choreostemic space. That space describes the fundamental conditionability of becoming, without telling anything about the actual mechanisms for moving around in this space. The category of change (as the moebioid fractal) focuses on the individual and its actual moves, that is, its use of concepts and its corporeal activities. After the linguistic turn there is no room for physics any more regarding the realm of human affairs. The apriori is not space and time; it is generic differentiation, concepts and the political corporeality.

Note that time is a language game about the scale of measurement for changes. If there is no change, or if change is not determinable, then there is no time. Examples are the “life form” of the photon, or black holes, where no signal can be transferred any more because photons get trapped.

Result 4

Above, in the chapter about String Theory, we said that it describes the form of energy, where different forms lead to different kinds of matter. Could we assimilate or even transfer the structure of that theory into a critical theory about the Urban?

Well, the first thing for which we have to identify a parallel is the notion of energy. Probably the hottest candidate for a similar role with regard to the Urban, that is, for culture, is mediality. As in the case of energy, density plays a crucial role for it (cf. [40]). All four components of our generic differentiation are strongly dependent on mediality, induced by densification processes. Changing levels, this holds true even for generic differentiation itself, as part of the theoretical structure shown in figure 9.

We certainly can say that the form of mediality, that is, the way it gets instantiated, is able to create very different urban styles. Think of the difference between a Maya city with some 70,000 inhabitants, where most of the mediality is related to religious affairs, and a typical radio city (Berlin 1939?), a TV city (Los Angeles), or an internet city (Seoul?). Or of Manhattan, where mediality found a quite unique instantiation, comprising interpersonal contacts and a high density of heterotopias. Or of Shanghai with its extreme neon density.

As mediality gets actualized in different ways, so does the proportion of our four components of Generic Differentiation. Without any doubt one can find the traces of the establishment of a particular proportion, that is, the location of the Urban Game in a particular “region” of the (yet to be formulated) space of Generic Differentiation, in the built assemblage of urban neighborhoods, as well as in their individual and characteristic urban “look & feel”. Or, in other words, in the “quality” of a particular “city”. Generic differentiation is somehow the inverse, or an abstract consequence, of mediality.

Result 5

Here in figure 9, much as for the figures above, we don’t provide any detail about the conceptual and the operational side. Of course, both areas comprise their own rich structure. Yet, in order to avoid the binding problem, both the concepts and the operations need to be compatible with the model layer, at least insofar as the three components develop suitable docking sites.

Result 6

The structure in figure 9 above can be read in two very different ways. This is not just due to the possibility of different vantage points; it is more a kind of principal duality.

The first one derives from a choreostemic perspective. In this case the structure describes the forces that lead to particular trajectories in the choreostemic space, representing a particular style of thinking about the city and of acting within it, whether as an individual or as a population.

The second way to conceive of the structure is as the Urban itself, as the life form of the Urban, that is as the actualization of a Foucaultian field of proposals. In both cases the three areas of concept, differentiation and operation are not at all separated or separable. They form a field of simultaneous activity throughout, with varying degrees of overlapping and mutual infection.

In such a setting, story-telling plays an important role: it creates a dynamic fabric from all the relational elements, the tiny Urban Strings, of which myriads upon myriads are produced all the time, released to float around in unpredictable yet beautifully arranged patterns, spanning from logistics to anticipation and metaphysics, providing the mere possibility for Urban meaning and Urban Reason.24

Notes

1. As in the preceding essays, we use the capital “U” if we refer to the urban as a particular quality and as a concept, in order to distinguish it from the ordinary adjective that refers to the common-sense understanding.

2. The term “speaking about” is by no means a trivial one. First, it implies that language is used, and in turn we have to respect the transcendental role of language (for more details see here, and here). This was not only the central point of Wittgenstein’s philosophy; it also resulted in a “revolution” throughout philosophy (unfortunately largely only in philosophy so far), the so-called “Linguistic Turn.” Scientists in particular are often quite forgetful about that. Secondly, “speaking about” also means that concepts have to be used. As we discussed in the context of the choreostemic space, concepts are also transcendent.

3. Here, philosophy is not understood as a domain that creates rules for a good life. Instead, we conceive of it as a technique of thinking; as such it is helpful for exploring the rules and principles of human affairs as a social process. Philosophy has no representational content!

4. Case of Bombay, informal workers.

5. For more details please see the essays about modeling.

6. Previously we called such concepts “Strongly Singular Terms”. For details please refer to “Formalization and Creativity as Strongly Singular Terms”.

7. Concerning semiotics as always: CS Peirce.

8. Umberto Eco (2002): Semiotik der Theateraufführung. In: Uwe Wirth (ed.): Performanz. Zwischen Sprachphilosophie und Kulturwissenschaft. Frankfurt/M. pp.262-276.

9. This is even true for the “hardest science” of all, physics, even as physics benefits from the luxury of a stable external referent, though that referent has to be recognized as an unknown. This stability allows for a closed and quite fast loop between building and testing anticipatory models on the one hand, and inventing concepts on the other. This stability is possible only if the subject of the respective investigations is strictly a-historic, a-contextual and an-individual. Nevertheless it remains true that even the concepts of physics are at least partially dependent on the respective form of life. In sciences that deal with historical contingency, like biology and all of the human sciences including architecture and urbanism, this stability is in principle not available.

10. Gilles Deleuze developed a dedicated counter-proposal to these concepts, mainly in Difference & Repetition [18], A Thousand Plateaus [19], and Logic of Sense [20].

11. Note that even the discovery of the putative Higgs-Boson wouldn’t change much with regard to these open issues.

12. Usually, paradoxes are just a consequence of contradictions, either in the metaphysical setup or in the course of its instantiation. Pseudo-paradoxes can also be provoked by choosing too few dimensions for the description of a problem. (For details see Deleuzean Move, footnote 3, and Vagueness: The Structure of Non-Existence.)

13. In German there is the book “Performanz”, edited by the semiotician Uwe Wirth [21]; unfortunately, I don’t know of any comparable work in English.

14. Talking about complexity and story-telling may inevitably remind one of Charles Jencks’ “jumping universe”, where he, among other things, invokes the science of complexity and post-modernism as a kind of twin siblings. We clearly dissociate ourselves from Jencks’ writings, for multiple reasons. It is nothing else than esotericism. He not only fails to use the concepts of fractals and chaos accurately, he also fails to describe the mechanisms through which that “chaos” gets actualized. He does not provide any model for growth and differentiation, just using fractals as the universal weaponry. It is not really surprising that he finally ends up with cosmogonic fantasies.

We not only reject this kind of poor “theorizing,” but also post-modernism as a valuable way of talking about architecture or urbanism. Both suffer seriously from the binding problem, ending in wild speculations. It is telling that Jencks tries to prove the existence of a battle between modernist and post-modernist thinking. Nothing could be more unmasking. Above all, his crusade seems to be politically motivated. What we try instead in this series of essays is to provide a sound abstract structure for a value-free theory, from which a rich landscape of models can be derived.

The post-modernist attitude of “not only function, but also fiction” (H. Klotz, The History of Postmodern Architecture, 1986) remains flat and representationalist, as in Hollein’s jewelry shop (Vienna, 1972-1974). As Venturi once demonstrated, any arbitrary facade is semiotically active. Yet the interpretation is not on the side of the designer! Thus the “fictions” of the post-modernists are misplaced, and miles away from the story-telling that Koolhaas organizes for us and into which we may embed and integrate ourselves. In a later piece we will discuss the metaphysics, the hidden resentment and the limitations of post-modernism in greater detail.

15. Most of the items of the layer that mediates between theory and operations we have already discussed in earlier essays. Note that the set of possible terms of that map is far from complete, albeit it certainly provides a useful cross-section. Links: choreosteme, complexity, model, orthoregulation, learning, memory, evolution, theory, aspection, network, probabilism, adaptivity, associativity, behavioral coating, operationalization.

16. Note that these beliefs are not to be mixed up with values. Values themselves are highly problematic anyway. Values are quite effective in abolishing any discourse, since, by definition, they are not justifiable. Hence it is dangerous to invoke them “too early”. Actually, values that purport some representational attitude about a moral “good(ness)” should be dropped altogether, except for some last solitary and transcendental principle. According to Wilhelm Vossenkuhl [27], a German philosopher (mainly Kant, Wittgenstein and ethics) and political scientist, all the other claimed values should be replaced by the techné of organizing discourses about the difficult challenges.

17. For details about morphogenesis through self-organization and complexity see this essay.

18. Differentiation not only includes morphogenesis sensu stricto, that is, with regard to “purely” material aspects. It is in any case not possible to separate the material from the immaterial, as the modernists and positivists always claimed. Differentiation and growth apply to the immaterial as well. In our essay about Koolhaas and Singapore we explicated three perspectives on differentiation, for which we find varying grades of materiality: development, evolution and learning. Also note that Deleuze’s work may be conceived as a philosophy of differentiation, whether concerning development, evolution or learning.

19. Sustainability that is backed by the idea of protection [24,25,26].

20. Recently, Anna Leidreiter proposed to change perspective from mere sustainability (see the previous footnote) to regeneration and “circular metabolism”. Although we certainly agree with the intention, her approach still suffers from the binding problem. There is no theory of differentiation, just a more or less metaphorical use of the concept of metabolism. Metabolism, in any case, is always organized by many overlapping “cycles”. It is naïve, or even wrong, to claim that natural ecosystems run without producing waste, as she does. In natural ecosystems there is a lot of decay, debris and sedimentation. What would debris look like with respect to the Urban?

Another point fits these suggestions. Earlier we pointed out that sustainability requires persistent adaptivity, and that this in turn can be achieved only through complexity, that is, self-organization, the transition from order to organization, and emergence. As such it can’t be implemented directly, of course. In other words, planning and sustainability exclude each other.

21. German original: „Kiyonori Kikutake erklärt, warum ihnen die altehrwürdigen Gesetze der Form und Funktion damals nicht mehr ausreichten und sie versuchten, den Lebenszyklus von Geburt und Wachstum auf Städtebau und Architektur zu übertragen.“

22. German original: „Lagos ist den Metropolen der Industrienationen um 50 bis 100 Jahre voraus.“

23. German original : „Wir haben uns dafür interessiert, wie einerseits alle Organisationssysteme versagen, die Stadt aber andererseits trotzdem funktioniert. Das liegt daran, dass die Einwohner sich in Mikrosystemen organisieren.“

24. We are well aware of the fact that a concept like “generic differentiation”, particularly if it comprises growth and networks as sub-concepts, relates to the discourse about urban form, or urban morphology. For 15 years now this discourse has become more and more organized through the journal “Urban Morphology”, issued by the International Seminar on Urban Form (ISUF). The discourse suffers considerably from the binding problem; hence any kind of naivety can be found there. Typical of the underdeveloped stage of the field is the fact that there are (still) several “schools”, inherited from times long ago (the French, the Italian and the Anglo-Saxon schools). Of course, there are also the great pioneers (pope-eneers?), celebrated individuals like Caniggia or Conzen. Yet identifying the more valuable contributions requires (and deserves) a dedicated treatment. This will be the topic of our next piece: How to speak about (urban) forms?

References

  • [1] Rem Koolhaas (1995), Whatever Happened to Urbanism? In: O.M.A., Rem Koolhaas and Bruce Mau, S,M,L,XL. Crown Publishing Group, 1997. pp.1009-1089.
  • [2] Herzog & de Meuron, How do Cities differ? Introductory text to the course of study on the cities of Naples – Paris – The Canary Islands – Nairobi at the ETH Studio Basel – Contemporary City Institute. In: Gerhard Mack (ed.), Herzog & de Meuron 1997-2001. The Complete Works. Volume 4. Basel/Boston/Berlin, Birkhäuser, 2008. pp. 241-244. First published in: Jacques Herzog, Terror sin Teoría. Ante la ‘Ciudad indiferente’. In: Luis Fernández-Galiano (ed.), Arquitectura Viva. Herzog & de Meuron, del Natural. Vol. 91, Madrid, Arquitectura Viva, 07.2003. p. 128. available online.
  • [3] Wolfgang Stegmüller, Probleme und Resultate der Wissenschaftstheorie und Analytischen Philosophie, Band II Theorie und Erfahrung, Teil G: Strukturspecies. T-Theoretizität. Holismus. Approximation. Verallgemeinerte intertheoretische Relationen. Inkommensurabilität. Springer, Berlin Heidelberg 1986.
  • [4] Thomas S. Kuhn. The Structure of Scientific Revolutions. 1962.
  • [5] John R. Searle, Speech Acts: An Essay in the Philosophy of Language. Cambridge University Press, Cambridge 1969.
  • [6] O.M.A., Rem Koolhaas and Bruce Mau, S,M,L,XL. Crown Publishing Group, 1997.
  • [7] Kisho Kurokawa, From Metabolism to Symbiosis. John Wiley 1992.
  • [8] Rem Koolhaas & Hans Ulrich Obrist. Project Japan: Metabolism Talks. Taschen, Berlin 2011.
  • [9] Ludwig Wittgenstein, Philosophical Investigations.
  • [10] Rem Koolhaas (2002). Junkspace. October, Vol. 100, “Obsolescence”, pp. 175-190. MIT Press. available here
  • [11] Klaus Wassermann, Vera Bühlmann, Streaming Spaces – A short expedition into the space of media-active façades. in: Christoph Kronhagel (ed.), Mediatecture, Springer, Wien 2010. pp.334-345. available here.
  • [12] Michael R. G. Conzen.  “Apropos a Sounder Philosophical Basis for Urban Morphology,” in: Thinking About Urban Form: Papers on Urban Morphology, 1932-1998. Google books. p.78.
  • [13] John McDowell, Mind and World. 1996. pp.25.
  • [14] Gilles Deleuze, Félix Guattari, What is Philosophy?
  • [15] Isabelle Garo, Molecular Revolutions: The Paradox of Politics in the Work of Gilles Deleuze, in: Ian Buchanan, Nicholas Thoburn (eds.), Deleuze and Politics. Edinburgh 2008.
  • [16] K. Wassermann, That Centre-Point Thing. The Theory Model in Model Theory. In: Vera Bühlmann, Printed Physics, Springer New York 2012, forthcoming.
  • [17] Peter Sloterdijk, Sphären I-III. Suhrkamp, Frankfurt 1998-2004.
  • [18] Gilles Deleuze, Difference & Repetition.
  • [19] Gilles Deleuze and Felix Guattari, A Thousand Plateaus.
  • [20] Gilles Deleuze, Logic of Sense.
  • [21] Uwe Wirth (Hrsg.), Performanz. Zwischen Sprachphilosophie und Kulturwissenschaft. Suhrkamp, Frankfurt/M. 2002.
  • [22] Charles Jencks, The Architecture of the Jumping Universe. Wiley-Academy 2001.
  • [23] Website of the Fermi-Lab: http://home.fnal.gov/~carrigan/pillars/Quarks.htm ; http://www.fnal.gov/pub/inquiring/matter/madeof/index.html.
  • [24] World Commission on Environment and Development (1987), Our Common Future, page 24, para 27.
  • [25] World Summit on Social Development (1995), Copenhagen Declaration on Social Development, page 5.
  • [26] World Summit on Sustainable Development (2002), Plan of Implementation, page 8.
  • [27] Wilhelm Vossenkuhl, Die Möglichkeit des Guten. Hanser, München 2006.
  • [28] Robert Rosen, Life Itself: A Comprehensive Inquiry into the Nature, Origin, and Fabrication of Life, Columbia University Press 1991.
  • [29] Timothy Druckrey (2003). Relational Architecture: the work of Rafael Lozano-Hemmer, in: Debates & Credits. Media/Art/Public Domain. De Balie-Centre for Culture and Politics. Amsterdam 2003. p.69.
  • [30] David G. Shane, Recombinant Urbanism. 2005.
  • [31] Thom Mayne, Combinatory urbanism: The Complex Behavior of Collective Form. 2011.
  • [32] Jean Christoph de Reviere; Marion Milne (directors), Les temps changent, F/CDN 2008.
  • [33] Jean-Yves Girard, LOCUS SOLUM: From the rules of logic to the logic of rules (2001). Journal Mathematical Structures in Computer Science archive, Vol 11(3), p.301-506. available online.
  • [34] Barbara Nolte, „Unser westlicher Blick liefert Zerrbilder“, Interview mit Rem Koolhaas, 12.02.2012 in: Der Tagesspiegel (Berlin). available online.
  • [35] Kiyonori Kikutake et al. Preface, Metabolism: Proposals for New Urbanism. Tokyo 1960.
  • [36] Jennifer Johung, Replacing Home: From Primordial Hut to Digital Network in Contemporary Art. University of Minnesota Press, Minneapolis 2012.
  • [37] Zhongjie Lin, Kenzo Tange and the Metabolist Movement: Urban Utopias of Modern Japan. Routledge, New York 2010.
  • [38] Ulrike Knöfel und Marianne Wellershoff (2001). „Eine der besten Erfindungen“, Interview mit Rem Koolhaas, 15.10.2001, in: DER SPIEGEL 42/2001, available online.
  • [39] Rem Koolhaas (2003). Editorial, The New World. 30 Spaces for the 21st Century. wired, Issue 11.06 | June 2003. available online.
  • [40] Vera Bühlmann, inhabiting media. Thesis, Basel 2009.

۞

The Text Machine

July 10, 2012 § Leave a comment

What is the role of texts? How do we use them (as humans)?

How do we access them (as reading humans)? The answers to such questions seem to be pretty obvious. Almost everybody can read. Well, today. Notably, reading itself, as a performance and regarding its use, has changed dramatically at least twice in history: first after the invention of the vocal alphabet in ancient Greece, and a second time after book printing became abundant during the 16th century. Maybe the issue of reading isn’t as simple as it seems in everyday life.

Beyond such accounts of historical issues and basic experiences, we have a lot of more theoretical results concerning texts. These begin with Friedrich Schleiermacher, who around 1830 was the first to identify hermeneutics as a subject, and who formulated it in a way that has been considered more complete and powerful than the version proposed by Gadamer in the 1950s. They proceed, of course, with Wittgenstein (language games, rule following), Austin (speech act theory) and Quine (criticizing empiricism), with philosophers like John Searle, Hilary Putnam and Robert Brandom then explicating and extending the work of these heroes, accompanied by many others. If you wonder why linguistics is missing here: because linguistics does not provide theories about language. Today the domain is largely captured by positivism and the corresponding analytic approach.

Here in this little piece we pose these questions in the context of certain relations between machines and texts. There are a lot of such relations, including quite sophisticated or surprising ones. For instance, texts can be considered a kind of machine. Yet they bear a certain note of (virtual) agency as well, resulting in a considerable non-triviality of this machine aspect of texts. Here we will not deal with this perspective. Instead, we will just take a look at the possibilities and the respective practices of handling, or “treating”, texts with machines. Or, if you prefer, the treatment of texts by machines, insofar as a certain autonomy of machines could be considered necessary for dealing with texts at all.

Today we can find a fast-growing community of computer programmers dealing with texts as a kind of unstructured information. One of the buzzwords is the so-called “semantic web”, another one is “sentiment analysis”. We won’t comment in any detail on those movements, because they are deeply flawed. The first tries to formalize semantics and meaning apriori, trying to render the world into a trivial machine. We have repeatedly criticized this, and herein we agree with Douglas Hofstadter (see this discussion of his “Fluid Analogy”). The second tries to identify the sentiment of a text or a “tweet”, e.g. about a stock or an organization, on the basis of statistical measures of keywords and their utterly naive “n-grammed” versions, without paying any notice to the problem of “understanding”. Such nonsense would not be so widespread if programmers read just a few fundamental philosophical texts about language. In fact, they don’t, and thus they are condemned to revisit all the underdeveloped positions that arose centuries ago.

If we neglect the social role of texts for a moment, we might identify a single major role of texts, albeit we have to describe it in rather general terms. We may say that the role of a text, as a specimen among many other texts from a large population, is to function as a medium for the externalization of mental content, serving the ultimate purpose of enabling the (re)construction of resembling mental content on the side of the interpreting person.

Interpretation thus has primacy. It is not possible to assign meaning to a text like a sticky note, and then to put the text, including the yellow sticky note, directly into the recipient’s brain. That may sound silly, but unfortunately it’s the “theory” followed by many people working in the computer sciences. Interpretation can’t be controlled completely, though, not even by the mind performing it, not even by the same mind that seconds before externalized the text through writing or speaking.

Now, the notion of mental content may seem both quite vague and hopelessly general. Yet, in the previous chapter we introduced a structure, the choreostemic space, which allows us to speak fairly precisely about mental content. Note that we don’t need to talk about semantics, meaning or references to “objects” here. Mental content is not a “state” either. Thinking “state” and the mental together is much on the same level as seriously considering the existence of sea monsters at the end of the 18th century, when the list science of Linnaeus had not yet been reshaped by the upcoming historical turn in the philosophy of nature. Nowadays we must consider it silly-minded to think about a complex story like the brain and its mind by means of “states”. Doing so, one confounds the stability of the graphical representation of a word in a language with the complexity of a multi-layered dynamic process, spanned between deliberate randomness, self-organized rhythmicity and temporary, thus preliminary, meta-stability.

The notion of mental content does not refer to the representation of referenced “objects”. We do not have maps, lists or libraries in our heads. Everything we experience as inner life builds up from an enormous randomness through deep stacks of complex emergent processes, where each emergent level is also shaped from the top down, implicitly and, except for the last one, usually called “consciousness,” also explicitly. The stability of memory and words, of feelings and faculties, is deceptive; they are not so stable at all. Only their externalized symbolic representations are more or less stable, and even their stability as words etc. can be shattered easily. The point we would like to emphasize here is that everything that happens in the mind is constructed on the fly, while the construction is completed only with the ultimate step of externalization, that is, speaking or writing. The notion of “mental content” is thus a bit misleading.

The mental may be conceived most appropriately as a manifold of stacked and intertwined processes. This holds for the naturalist perspective as well as for the abstract perspective, as we have argued in the previous chapter. It is simply impossible to find a single stable point within the (abstract) dynamics between model, concept, mediality and virtuality, which could be thought of as spanning a space. We called it the choreostemic space.

For the following remarks about the relation between texts and machines, and about the practitioners engaged in building machines to handle texts, we have to keep in mind just two things: (i) there is a primacy of interpretation; (ii) the mental is a non-representative dynamic process that can’t be formalized (in the sense of “being represented” by a formula).

In turn this means that we should avoid referring to formulas when setting out to build a “text machine”. Text machines will be helpful only if their understanding of texts, even if it is a rudimentary understanding, follows the same abstract principles as our human understanding of texts does. Machines pretending to deal with texts, but actually only moving dead formal symbols back and forth, as is the case in statistical text mining, n-gram based methods and the like, are not helpful at all. The only thing that happens is that these machines introduce a formalistic structure into our human life. We may say that these techniques render humans helpful to machines.

Nowadays we can find a whole techno-scientific community engaged in the field of machine learning devoted to “textual data”. The computers are programmed in such a way that they can be used to classify texts. The idea is to provide some keywords, or anti-words, or even a small set of sample texts, which are then taken by the software as a kind of template that is used to build a selection model. This model is then used to select resembling texts from a large set of texts. We have to be very clear about the purpose of these software programs: they classify texts.

The input data for doing so are taken from the texts themselves. More precisely, the texts are preprocessed according to specialized methods, and each text gets described by a possibly large set of “features” extracted by these methods. The obvious point is that the procedure is purely empirical in the strong sense. Only the available observations (the texts) are taken to infer the “similarity” between texts. Usually, not even linguistic properties are used to form the empirical observations, albeit there are exceptions. People use the so-called n-gram approach, which is little more than counting letters. It is a zero-knowledge model about the series of symbols which humans interpret as text. Additionally, the frequencies or relative positions of keywords and anti-words are usually measured and expressed by mostly quite simple statistical methods.
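For illustration, here is a minimal sketch of such character n-gram counting, together with a toy similarity measure; the sample sentences, names and parameters are our assumptions, chosen only to make the zero-knowledge character of the approach visible:

```python
# Character n-gram featurization: a "zero-knowledge" model that merely
# counts runs of symbols, without any notion of meaning.
from collections import Counter

def char_ngrams(text: str, n: int = 3) -> Counter:
    """Count overlapping character n-grams of a text."""
    s = text.lower()
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

doc_a = char_ngrams("The city nevertheless is functioning.")
doc_b = char_ngrams("The town still works, nonetheless.")

# A typical "similarity": cosine over shared n-gram counts. Nothing here
# refers to meaning; "chair" and "h&e%43" would be treated alike.
shared = set(doc_a) & set(doc_b)
dot = sum(doc_a[g] * doc_b[g] for g in shared)
norm = lambda c: sum(v * v for v in c.values()) ** 0.5
print(dot / (norm(doc_a) * norm(doc_b)))
```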

Well, classifying texts is something quite different from understanding texts, of course. Yet said community tries to reproduce the “classification” achieved or produced by humans. Thus, every engineer in the field of machine learning directed at texts implicitly claims a kind of understanding. They even organize competitions.

The problems with the statistical approach are quite obvious. Quine called it the dogma of empiricism and coined the Gavagai anecdote about it, a scene that even provides much more information than the text alone. In order to understand a text we need references to many things outside the particular text(s) at hand. Two of those are especially salient: concepts and the social dimension. In direct opposition to the belief of the positivists, concepts can’t be defined in advance of a particular interpretation. Using catalogs of references does not help much if these catalogs are used just as lists of references. The software does not understand “chair” by the “definition” stored in a database, nor even by the set of such references. It simply does not care whether the encoded ASCII codes yield the symbol “chair” or the symbol “h&e%43”. Douglas Hofstadter has stressed this point over and over again, and we fully agree with him.

From this necessity of a particular and rather wide “background” (Searle’s notion) derives the second problem, which is much more serious, even devastating, for the soundness of the whole empirico-statistical approach. The problem is simple: even we humans have to read a text before being able to understand it. Only upon understanding can we classify it. Of course, the brains of many people are trained sufficiently to work on the relations of a text and any of its components while reading it. The basic setup of the problem, however, remains the same.

Actually, what happens is a constantly repeated re-reading of the text, taking into account all available insights regarding the text and its relations to the author and the reader, while this re-reading often takes place in memory. To perform this demanding task in parallel, based on the “cache” available from memory, requires a lot of experience and training, though. Less experienced people indeed re-read the text physically.

The consequence of all this is that we cannot determine the best empirical discriminators for a particular text while reading it, in order to select it as if we were using a model. Actually, we can’t determine the set of discriminators before we have read all of it, at least not before the first pass. Let us call this the completeness issue.

The very first insight is thus that a one-shot approach to text classification is based on a misconception. The software and the human would have to align with each other in some kind of conversation. Otherwise it can’t be specified, in principle, what the task is, that is, which texts should actually be selected. Any approach to text classification not following the “conversation scheme” is necessarily bare nonsense. Yet that’s not really a surprise (except for some of the engineers).

There is a further consequence of the completeness issue. We can’t set up a table to learn from at all. This too is not a surprise, since setting up a table means setting up a particular symbolization. Any symbolization apriori to understanding must count as a hypothesis. As simple as that. Whether it matches our purpose or not, we can’t know before we have understood the text.

However, in order to make the software learn something we need assignates (traditionally called “properties”) and some criteria to distinguish better models from less performant ones. In other words, we need a recurrent scheme on the technical level as well.

That’s why it is not perfectly correct to call texts “unstructured data”. (Besides the fact that data are not “out there”: we always need a measurement device, which in turn implies some kind of model AND some kind of theory.) In the case of texts, imposing a structure onto a text simply means understanding it. We could even say that a text as text is not structurable at all, since the interpretation of a text can never be regarded as finished.

Altogether, we may summarize the complexity of texts as deriving from the following properties:

  • there are different levels of context, which additionally stretch across surrounds of very different sizes;
  • there are rich organizational constraints, e.g. grammars;
  • there is a large corpus of words, any of which bears meaning only upon interpretation;
  • there is a large number of relations that not only form a network, but also change dynamically in the course of reading and interpretation;
  • texts are symbolic: spatial neighborhood does not translate into reference, in either direction;
  • understanding texts requires a wealth of external and quite abstract concepts that appear as significant only upon interpretation, as well as a social embedding in mutual interpretation.

This list should at least rule out any attempt to defend the empirico-statistical approach as a reasonable one, except for the fact that it conveys a better-than-nothing attitude. This brings us to the question of utility.

Engineers build machines that are supposedly useful; more exactly, they are intended to fulfill a particular purpose. Mostly, however, machines, and indeed any technology in general, become useful only through processes of subjective appropriation. The most striking example of this is the car. Likewise, computers evolved not for reasons of utility, but rather for gaming. Video did not become popular for artistic or commercial reasons, but due to the possibilities the medium offered to the sex industry. The lesson here is that an intended purpose is difficult to enforce upon the actual usage of a technology. On the other hand, every technology may exert some gravitational force towards developing an unintended symbolic purpose, and even considerable value in that regard. So, could we agree that the classification of texts as performed by contemporary technology is useful?

Not quite. We can’t regard the classification of texts, as made possible by the empirico-statistical approach, as a reasonable technology, for the classification of texts can’t be separated from their understanding. All we can accomplish by this approach is to filter out those texts that do not match our interests with a sufficiently high probability. Yet for this task we do not need text classification.

Architectures like the 3L-SOM could also be expected to play an important role in translation, as translation requires an even deeper understanding of texts than is needed for sorting texts according to a template.

Besides the necessity for this doubly recurrent scheme, we haven’t said much so far about how to actually treat the text. Texts should not be mistaken for empirical data. This means that we have to take a modified stance regarding measurement itself. In several essays we have already mentioned the conceptual advantages of the two-layered (TL) approach based on self-organizing maps (TL-SOM). We have already described in detail how the TL-SOM works, including the basic preparation of the random graph as it has been described by Kohonen.

The important thing about the TL-SOM is that it is not a device for modeling the similarity of texts. It is just a representation, albeit a very powerful one, because it is based on probabilistic contexts (random graphs). More precisely, it is just one of many possible representations, even if it is much more appropriate than n-grams and other jokes. We should NOT even consider the TL-SOM as so-called “unsupervised modeling”, as the distinction between unsupervised and supervised is just another myth (= nonsense when it comes to quantitative models). The TL-SOM is nothing else than an instance of associative storage.

The trick of using a random graph (see the link above) is that the surrounds of words are differentially represented as well. The Kohonen model is quite scarce in this respect, since it applies a completely neutral model. In fact, words in a text are represented as if they were all the same: of the same kind, of the same weight, etc. That’s clearly not reasonable. Instead, we should represent a word in several different manners in the same SOM.

Yet the random graph approach should not be considered just a “trick”. We have repeatedly argued (for instance here) that we have to “dissolve” empirical observations into a probabilistic (re)presentation in order to evade and avoid the pseudo-problem of “symbol grounding”. Note that even by the practice of setting up a table in order to organize “data” we are already crossing the Rubicon into the realm of the symbolic!

The real trick of the TL-SOM, however, is something completely different. The first layer represents the random graph of all words; the actual pre-specific sorting of texts, however, is performed by the second layer on the output of the first layer. In other words, the text is “renormalized”, and the SOM itself is used as a measurement device. This renormalization allows data to be organized in a standardized manner while avoiding the symbolic fallacy. To our knowledge, this possible usage of the renormalization principle has not been recognized so far. It is indeed a very important principle that puts many things in order. We will deal with this issue again in a separate contribution.
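As a rough illustration of this two-layer arrangement, here is a sketch that assumes MiniSom as a stand-in SOM implementation; the toy corpus, the map sizes and the crude co-occurrence encoding are our assumptions and are far simpler than the random-graph preparation described by Kohonen:

```python
# Sketch of the TL-SOM idea: layer 1 maps word contexts, layer 2 would
# sort texts by their "fingerprints" over layer 1.
import numpy as np
from minisom import MiniSom   # assumed stand-in SOM implementation

def context_vectors(tokens, vocab, window=2):
    # crude co-occurrence encoding of each word's surround
    idx = {w: i for i, w in enumerate(vocab)}
    vecs = []
    for i in range(len(tokens)):
        v = np.zeros(len(vocab))
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                v[idx[tokens[j]]] += 1.0
        vecs.append(v)
    return np.array(vecs)

tokens = "the city works because the inhabitants organize themselves".split()
vocab = sorted(set(tokens))
X = context_vectors(tokens, vocab)

som1 = MiniSom(6, 6, X.shape[1], sigma=1.0, learning_rate=0.5)  # layer 1
som1.train_random(X, 500)

def fingerprint(vectors, som, shape=(6, 6)):
    # the SOM itself serves as measurement device: a text becomes a
    # histogram over the nodes its word contexts activate
    h = np.zeros(shape)
    for v in vectors:
        h[som.winner(v)] += 1.0
    return (h / h.sum()).ravel()   # standardized, length-invariant

fp = fingerprint(X, som1)
# fingerprints of many texts would then feed a second SOM (layer 2),
# which performs the pre-specific sorting, without apriori classes.
```

The essential move sits in fingerprint(): the raw text never enters the second layer, only its renormalized description in terms of the first map does.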

Only on the basis of the associative storage, taken as a whole, does appropriate modeling become possible for textual data. The tremendous advantage is that the structure for any subsequent consideration now remains constant. We may indeed set up a table. The content of this table, the data, however, is not derived directly from the text. Instead we first apply renormalization (a technique known from quantum physics, cf. [1]).

The input is some description of the text completely in terms of the TL-SOM. More explicitly, we have to “observe” the text as it behaves in the TL-SOM. Here we are indeed legitimized to treat the text as an empirical observation, albeit we can, of course, observe the text in many different ways. Observing means to conceive of the text as a moving target, as a series of multitudes.

One of the available tools is Markov modeling, either as Markov chains or by means of Hidden Markov Models. But there are many others. Most significantly, probabilistic grammars, even probabilistic phrase structure grammars, can be mapped onto Markov models. Yet again we meet the problem of apriori classification. Both kinds of models, Markovian as well as grammarian, need an assignment of a grammatical type to a phrase, which often first requires understanding.

Given the autonomy of texts, their temporal structure and the impossibility of applying apriori schematism, our proposal is that we just have to conceive of a text as we do of (higher) animals. Like an animal in its habitat, we may think of the text as inhabiting the TL-SOM, our associative storage. We can observe paths, their length and form, preferred neighborhoods, velocities, and the size and form of the habitat.
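A minimal sketch of such an “observation”, building on the hypothetical som1 and X from the previous sketch; the particular descriptors are our assumptions, chosen only to convey the flavor of a behavioral description:

```python
# "Observing" a text as it behaves in the TL-SOM: follow the trajectory
# of best-matching nodes and derive habitat-like descriptors from it.
import numpy as np
from collections import Counter

def observe(vectors, som):
    path = [som.winner(v) for v in vectors]      # trajectory over the map
    steps = list(zip(path, path[1:]))
    length = sum(np.hypot(a[0] - b[0], a[1] - b[1]) for a, b in steps)
    return {
        "path_length": length,
        "habitat_size": len(set(path)),          # number of nodes visited
        "mean_velocity": length / max(1, len(steps)),
        "transitions": Counter(steps),           # raw Markov-chain counts
    }

# obs = observe(X, som1)   # assumes the previous sketch has been run
# Similar texts yield similar observations: standardized descriptors,
# yet nothing of the content is reduced to keyword statistics.
```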

Similar texts will behave in a similar manner. Such similarity is far beyond (better: as if from another planet than) the statistical approach. We can also see now that the statistical approach is trapped by the representationalist fallacy. This similarity is of course a relative one. The important point here is that we can describe texts in a standardized manner strictly WITHOUT reducing their content to statistical measures. It is also quite simple to determine the similarity of texts, whether as wholes or regarding any of their parts. We need not determine the range of our source at all apriori to the results of modeling. This modeling introduces a third logical layer. We may apply standard modeling, using a flexible tool for transformation and a further instance of a SOM, as we provide it with SomFluid in the downloads. The important thing is that this last step of modeling has to run automatically.

The proposed structure keeps any kind of reference completely intact. It also draws on its collected experience, that is, on all the texts it has digested before. It is not necessary to determine stopwords and similar gimmicks. Of course we could, but that’s part of the conversation. Just provide an example of any size, just as it is available. Anything from two words to a sentence, a paragraph, or the content of a whole directory will work.

Such a 3L-SOM is very close to what we reasonably could call “understanding texts”. But does it really “understand”?

As such, not really. First, images should be stored in the same manner (!!), that is, preprocessed as random graphs over local contexts of various sizes, into the same (networked population of) SOM(s). Second, a language production module would be needed. But once we have those parts working together, there will be full understanding of texts.

(I will take up any reasonable offer to implement this within the next 12 months. Seriously!)

Conclusion

Understanding is a faculty for moving around in a world of symbols. That’s not meant as a trivial issue. First, the world consists of facts, where facts comprise a universe of dynamic relations. Symbols are just not like traffic signs or pictograms, which belong to the more simple kinds of symbols. Symbolizing is a complex, social, mediatized, diachronic process.

Classifying, understood as “performing modeling and applying models”, consists basically of two parts. One of them could be automated completely, while the other could not be treated by a finite or apriori definable set of rules at all: setting the purpose. In the case of texts, classifying can’t be separated from understanding, because the purpose of a text emerges only upon interpretation, which in turn requires a manifold of modeling raids. Modeling a (quasi-)physical system is completely different from that; it is almost trivial. Yet the structure of a 3L-SOM could well evolve into an arrangement that is capable of understanding in a way similar to ours. More precisely, and a bit more abstractly, we could also say that a “system” based on a population of 3L-SOMs will one day be able to navigate the choreostemic space.

References
  • [1] B. Delamotte (2003). A hint of renormalization. Am.J.Phys. 72 (2004) 170-184, available online: arXiv:hep-th/0212049v3.

۞

Analogical Thinking, revisited. (II)

March 20, 2012 § Leave a comment

In this second part of the essay about a fresh perspective on

(II/II)

analogical thinking (more precisely: on models about it) we will try to bring together two concepts that at first sight represent quite different approaches: Copycat and SOM.

Why engage in such an endeavor? Firstly, we are quite convinced that FARG’s Copycat demonstrates an important and outstanding architecture. It provides a well-founded proposal about the way we humans apply ideas and abstract concepts to real situations. Secondly, however, it is also clear that Copycat suffers from a few serious flaws in its architecture, particularly the built-in idealism. This renders any adaptation to more realistic domains, or even to completely domain-independent conditions, very difficult, if not impossible, since this drawback also prohibits structural learning. So far, Copycat is just able to adapt some predefined internal parameters. In other words, the Copycat mechanism just adapts a predefined structure, though a quite abstract one, to a given empirical situation.

Well, basically there seem to be two different, “opposite” strategies to merge these approaches. Either we integrate the SOM into Copycat, or we try to transfer the relevant, yet to be identified parts from Copycat to a SOM-based environment. Yet, at the end of the day we will see that, and how, the two alternatives converge.

In order to accomplish our goal of establishing a fruitful combination of SOM and Copycat we have to take mainly three steps. First, we briefly recapitulate the basic elements of Copycat and the proper instance of a SOM-based system. Second, we describe the extended SOM system in some detail, although there will be a dedicated chapter on it. Finally, we have to transfer, and presumably adapt, those elements of the Copycat approach that are missing in the SOM paradigm.

Crossing over

The particular power of (natural) evolutionary processes derives from the fact that they are based on symbols. “Adaptation” or “optimization” are not processes that change just the numerical values of parameters in formulas. Quite the opposite: in adaptational processes that span across generations, parts of the DNA-based story are being rewritten, with potential consequences for the whole of the story. This effect of recombination in the symbolic space is particularly present in the so-called “crossing over” during the production of gamete cells in the context of sexual reproduction in eukaryotes. Crossing over is a “technique” to dramatically speed up the exploration of the space of potential changes. (In some way, this space is also greatly enlarged by symbolic recombination.)

What we will try here in our attempt to merge the two concepts of Copycat and SOM is exactly this: a symbolic recombination. The difference to its natural template is that in our case we do not transfer DNA-snippets between homologous locations in chromosomes, we transfer whole “genes,” which are represented by elements.

Elementarizations I: C.o.p.y.c.a.t.

In part 1 we identified two top-level (non-atomic) elements of Copycat.

Since the first element, covering evolutionary aspects such as randomness, population and a particular memory dynamics, is pretty clear, and a whole range of possible ways to implement it is available, any attempt at improving the Copycat approach has to target the static, strongly idealistic characteristics of the structure that is called “Slipnet” by the FARG. The Slipnet has to be enabled for structural changes and autonomous adaptation of its parameters. This could be accomplished in many ways, e.g. by representing the items in the Slipnet as primitive artificial genes. Yet, we will take a different road here, since the SOM paradigm already provides the means to achieve idealizations.

At that point we have to elementarize Copycat’s Slipnet in a way that renders it compatible with the SOM principles. Hofstadter emphasizes the following properties of the Slipnet and the items contained therein (pp.212).

  • (1) Conceptual depth allows for a dynamic and continuous scaling of “abstractness” and resistance against “slipping” to another concept;
  • (2) Nodes and links between nodes both represent active abstract properties;
  • (3) Nodes acquire, spread and lose activation, which knows a switch-on threshold < 1;
  • (4) The length of links represents conceptual proximity or degree of association between the nodes.

As a whole, and viewed from the network perspective, the Slipnet behaves much like a spring system, or a network built from rubber bands, where the springs or the rubber bands are regulated in their strength. Note that our concept of SomFluid also exhibits the feature of local regulation of the bonds between nodes, a property that is not present in the idealized standard SOM paradigm.
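To make the spring-system picture concrete, here is a minimal sketch of such activation dynamics in Python; all names (Node, Link, step) and the particular decay and threshold formulas are our own assumptions, not FARG’s actual implementation.

```python
import random

# A minimal sketch of Slipnet-like dynamics, assuming link lengths in [0..1];
# the names (Node, Link, step) and the formulas are ours, not FARG's code.
class Node:
    def __init__(self, depth):
        self.depth = depth        # conceptual depth in [0..1]; deep = abstract
        self.activation = 0.0

class Link:
    def __init__(self, a, b, length):
        self.a, self.b, self.length = a, b, length   # short link = close concepts

def step(nodes, links, decay=0.2, threshold=0.6):
    """One relaxation step: spread activation, decay it, apply the threshold."""
    spread = {id(n): 0.0 for n in nodes}
    for l in links:
        w = 1.0 - l.length                 # a stiffer "spring" transports more
        spread[id(l.b)] += w * l.a.activation
        spread[id(l.a)] += w * l.b.activation
    for n in nodes:
        # deep (abstract) concepts decay more slowly, resisting "slippage"
        n.activation = n.activation * (1.0 - decay * (1.0 - n.depth))
        n.activation = min(1.0, n.activation + 0.1 * spread[id(n)])
        # switch-on threshold < 1: weak nodes fire only probabilistically
        if n.activation < threshold and random.random() > n.activation:
            n.activation *= 0.5
```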

Yet, the most interesting properties in the list above are (1) and (2), while (3) and (4) are known in the classic SOM paradigm as well. The first item is great because it represents an elegant instance of creating the possibility for measurability that goes far beyond the nominal scale. As a consequence, “abstractness” ceases to be a nominal all-or-none property, as it is in hierarchies of abstraction. Such hierarchies now can be recognized as mere projections or selections, both introducing a severe limitation of expressibility. The conceptual depth opens a new space.

The second item is also very interesting since it blurs the distinction between items and their relations to some extent. That distinction is also a consequence of relying too readily on the nominal scale of description. It introduces a certain moment of self-reference, though this is not fully developed in the Slipnet. Nevertheless, a result of this move is that concepts can’t be thought without their embedding into a neighborhood of other concepts. Hofstadter clearly introduces a non-positivistic and non-idealistic notion here, as it establishes a non-totalizing meta-concept of wholeness.

Yet, the blurring between “concepts” and “relations” could be and must be driven far beyond the level Hofstadter achieved, if the Slipnet is to become extensible. Namely, all the parts and processes of the Slipnet need to follow the paradigm of probabilization, since this offers the only way to evade the demons of cybernetic idealism and apriori control. Hofstadter himself relies much on probabilization concerning the other two architectural parts of Copycat. It’s beyond me why he didn’t apply it to the Slipnet too.

Taken together, we may derive (or: impose) the following important elements for an abstract description of the Slipnet.

  • (1) Smooth scaling of abstractness (“conceptual depth”);
  • (2) Items and links of a network of sub-conceptual abstract properties are instances of the same category of “abstract property”;
  • (3) Activation of abstract properties represents a non-linear flow of energy;
  • (4) The distance between abstract properties represents their conceptual proximity.

A note should be added regarding the last (fourth) point. In Copycat, this proximity is a static number. In Hofstadter’s framework, it does not express something like similarity, since the abstract properties are not conceived as compounds. That is, the abstract properties are themselves on the nominal level. And indeed, it might appear rather difficult to conceive of concepts like “right of”, “left of”, or “group” as compounds. Yet, I think that it is well possible by referring to mathematical group theory, the theory of algebra and the framework of mathematical categories. All of those may be subsumed into the same operationalization: symmetry operations. Of course, there are different ways to conceive of symmetries and to implement the respective operationalizations. We will discuss this issue in a forthcoming essay that is part of the series “The Formal and the Creative”.

The next step is now to distill the elements of the SOM paradigm in a way that enables a common differential for the SOM and for Copycat.

Elementarizations II: S.O.M.

The self-organizing map is a structure that associates comparable items—usually records of values that represent observations—according to their similarity. Hence, it makes two strong and important assumptions.

  • (1) The basic assumption of the SOM paradigm is that items can be rendered comparable;
  • (2) The items are conceived as tokens that are created by repeated measurement;

The first assumption means that the structure of the items can be described (i) apriori to their comparison and (ii) independently from the final result of the SOM process. Of course, this assumption is not unique to SOMs; any algorithmic approach to the treatment of data is committed to it. The particular status of the SOM is given by the fact—and in stark contrast to almost any other method for the treatment of data—that this is the only strong assumption. All other parameters can be handled in a dynamic manner. In other words, there is no particular zone of the internal parametrization of a SOM that would be inaccessible apriori. Compare this with ANN or statistical methods, and you feel the difference… Usually, methods are rather opaque with respect to their internal parameters. For instance, the similarity functional is usually not accessible, which renders all these nice-looking, so-called analytic methods into some kind of subjective gambling. In PCA and its relatives, for instance, the similarity is buried in the covariance matrix, which in turn is only defined within the assumption of normality of correlations. If not a rank correlation is used, this assumption extends even to the data itself. In both cases it is impossible to introduce a different notion of similarity. Also, as a consequence of that, it is impossible to investigate the particular dependency of the results proposed by the method on the structural properties and (opaque) assumptions. In contrast to such unfavorable epistemo-mythical practices, the particular transparency of the SOM paradigm allows for critical structural learning of the SOM instances. “Critical” here means that the influence of the method’s internal parameters onto the results or conclusions can be investigated, changed, and accordingly adapted.
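The contrast can be made concrete in a few lines. In a SOM, the similarity functional is just an exchangeable function handed to the best-matching-unit search; the following sketch (the function names are ours) shows a weighted Minkowski distance as one possible choice among many.

```python
import numpy as np

def weighted_minkowski(x, w, feature_weights, p=2.0):
    """Similarity functional between observation x and node weight vector w:
    an explicit, criticizable parameter of the method, not a buried constant."""
    d = (np.abs(x - w) * feature_weights) ** p
    return d.sum() ** (1.0 / p)

def best_matching_unit(x, nodes, feature_weights, p=2.0):
    """BMU search under the chosen functional; swapping the functional swaps
    the whole notion of similarity, with everything else untouched."""
    dists = [weighted_minkowski(x, w, feature_weights, p) for w in nodes]
    return int(np.argmin(dists))
```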

The second assumption is implied by the SOM’s purpose to be a learning mechanism. It simply needs some observations as results of the same type of measurement. The number of observations (the number of repeats) has to exceed a certain lower threshold which, dependent on the data and the purpose, is at least 8; typically, however, (much) more than 100 observations of the same kind are needed. Any result will be within the space delimited by the assignates (properties), and thus any result is a possibility (if we take just the SOM itself).

The particular accomplishment of a SOM process is the transition from the extensional to the intensional description, i.e. the SOM may be used as a tool to perform the step from tokens to types.

From this we may derive the following elements of the SOM:1

  • (1) a multitude of items that can be described within a common structure, though not necessarily an identical one;
  • (2) a dense network where the links between nodes are probabilistic relations;
  • (3) a bottom-up mechanism which results in the transition from an extensional to an intensional level of description;

As a consequence of this structure, the SOM process avoids the necessity to compare all items (N) to all other items (N-1): each item is compared only to the K nodes of the lattice, with K usually much smaller than N, so the effort scales with N·K per epoch rather than with N². This property, together with the probabilistic neighborhoods, establishes the main difference to other clustering procedures.

It is quite important to understand that the SOM mechanism as such is not a modeling procedure. Several extensions have to be added and properly integrated, such as

  • – operationalization of the target into a target variable;
  • – validation by separate samples;
  • – feature selection, preferably by an instance of  a generalized evolutionary process (though not by a genetic algorithm);
  • – detecting strong functional and/or non-linear coupling between variables;
  • – description of the dependency of the results from internal parameters by means of data experiments.

We already described the generalized architecture of modeling as well as the elements of the generalized model in previous chapters.

Yet, as we explained in part 1 of this essay, analogy making is conceptually incompatible with any kind of modeling, as long as the target of the model points to some external entity. Thus, we have to choose a non-modeling instance of a SOM as the starting point. However, clustering is also an instance of those processes that provide the transition from extensions to intensions, whether this clustering is embedded into full modeling or not. In other words, neither the classic SOM nor the modeling SOM is suitable as a candidate for a merger with Copycat.

SOM-based Abstraction

Fortunately, there is already a proposal, and even a well-known one, that indeed may be taken as such a candidate: the two-layer SOM (TL-SOM), as it has been demonstrated as an essential part of the so-called WebSom [1,2].

Actually, the description as being “two-layered” is a very minimalistic, if not inappropriate description of what is going on in the WebSom. We already discussed many aspects of its architecture here and here.

Concerning our interests here, the multi-layered arrangement itself is not a significant feature. Any system doing complicated things needs a functional compartmentalization; we have met a multi-part, multi-compartment and multi-layered structure in the case of Copycat too. Else, the SOM mechanism itself remains perfectly identical across the layers.

The real interesting features of the approach realized in the TL-SOM are

  • – the preparation of the observations into probabilistic contexts;
  • – the utilization of the primary SOM as a measurement device (the actual trick).

The domain of application of the TL-SOM is the comparison and classification of texts. Texts belong to unstructured data, and the comparison of texts is exposed to the same problematics as the making of analogies: there is no apriori structure that could serve as a basis for modeling. Also, like the analogies investigated by the FARG, a text is a locational phenomenon, i.e. it takes place in a space.

Let us briefly recapitulate the dynamics in a TL-SOM. In order to create a TL-SOM, the text is first dissolved into overlapping, probabilistic contexts. Note that the locational arrangement is captured by these random contexts. No explicit apriori rules are necessary to separate patterns. The resulting collection of contexts then gets “somified”. Each node then contains similar random contexts that have been derived from various positions in different texts. Now the decisive step will be taken, which consists in turning the perspective by “90 degrees”: we can use the SOM as the basis for creating a histogram for each of the texts. The nodes are interpreted as properties of the texts, i.e. each node represents a bin of the histogram. The value of an individual bin measures how frequently the respective text is represented in that node by its random contexts. The secondary SOM then creates a clustering across these histograms, which represent the texts in an abstract manner.

This way the primary lattice of the TL-SOM is used to impose a structure on the unstructured entity “text.”

Figure 1: A schematic representation of a two-layered SOM with built-in self-referential abstraction. The input for the secondary SOM (foreground) is derived as a collection of histograms that are defined as a density across the nodes of the primary SOM (background). The input for the primary SOM are random contexts.

To put it clearly: the secondary SOM builds an intensional description of entities, which results from the interaction of a SOM with a probabilistic description of the empirical observations. Quite obviously, intensions built this way upon intensions are not only quite abstract; the mechanism could even be stacked. It could be described as “high-level perception” with the same justification as Hofstadter uses the term for Copycat. The TL-SOM turns representational intensions into abstract, structural ones.

The two aspects from above thus interact; they are elements of the TL-SOM. Despite the fact that there are still transitions from extensions to intensions, we also can see that the targeted units of the analysis, the texts, get probabilistically distributed across an area, the lattice of the primary SOM. Since the SOM maps the high-dimensional input data onto its map in a way that preserves their topological properties, it is easy to recognize that the TL-SOM creates conceptual halos as an intermediate.

So let us summarize the possibilities provided by the SOM.

  • (1) SOMs are able to create non-empiric, or better: de-empirified idealizations of intensions that are based on “quasi-empiric” input data;
  • (2) TL-SOMs can be used to create conceptual halos.

In the next section we will focus on this spatial, better: primarily spatial effect.

The Extended SOM

Kohonen and co-workers [1,2] proposed to build histograms that reflect the probability density of a text across the SOM. Those histograms represent the original units (e.g. texts) in a quite static manner, using a kind of summary statistics.

Yet, texts are definitely not a static phenomenon. At first sight there is at least a series, while more appropriately texts are even described as dynamic networks of their own associative power [3]. Returning to the SOM we see that, additionally to the densities scattered across the nodes of the SOM, we also can observe a sequence of invoked nodes, according to the sequence of random contexts in the text (or the serial observations).

The not-so-difficult question then is: how to deal with that sequence? Obviously, it is again best conceived as a random process (though one with a strong structure), and random processes are best described using Markov models, either as hidden (HMM) or as transitional models. Note that the Markov model is not a model about the raw observational data; it describes the sequence of activation events of SOM nodes.

The Markov model can be used as a further means to produce conceptual halos in the sequence domain. The differential properties of a particular sequence as compared to the Markov model then could be used as further properties to describe the observational sequence.
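As a sketch of this idea (the function names are ours, and a full HMM would go further), a first-order transition model over the node sequence can be estimated and then used to score how “surprising” a particular sequence is:

```python
import numpy as np

def transition_matrix(node_seq, n_nodes, prior=1.0):
    """First-order Markov model over the sequence of activated node indices,
    with a Laplace prior so unseen transitions keep nonzero probability."""
    T = np.full((n_nodes, n_nodes), prior)
    for a, b in zip(node_seq[:-1], node_seq[1:]):
        T[a, b] += 1.0
    return T / T.sum(axis=1, keepdims=True)

def log_likelihood(node_seq, T):
    """Score of a sequence under the model. The difference between a text's
    score under its own model and under a corpus-wide model is one possible
    'differential property' in the sense described above."""
    return float(sum(np.log(T[a, b]) for a, b in zip(node_seq[:-1], node_seq[1:])))
```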

(The full version of the extended SOM comprises targeted modeling as a further level. Yet, this targeted modeling does not refer to raw data. Instead, its input is provided completely by the primary SOM, which is based on probabilistic contexts, while the target of such modeling is just internal consistency of a context-dependent degree.)

The Transfer

Just to avoid misunderstanding: it does not make sense to try representing Copycat completely by a SOM-based system. The particular dynamics and phenomenological behavior depend a lot on Copycat’s tripartite morphology as represented by the Coderack (agents), the Workspace and the Slipnet. We are “just” in search of a possibility to remove the deep idealism from the Slipnet in order to enable it for structural learning.

Basically, there are two possible routes. Either we re-interpret the extended SOM in a way that allows us to represent the elements of the Slipnet as properties of the SOM, or we try to replace all the items in the Slipnet by SOM lattices.

So, let us take a look at which structures we have (Copycat) or could have (SOM) on both sides.

Table 1: Comparing elements from Copycat’s Slipnet to the (possible) mechanisms in a SOM-based system.

  • (1) Smoothly scaled abstraction. Copycat: conceptual depth (dynamic parameter). Extended SOM: distance of abstract intensions in an integrated lattice of an n-layered SOM.
  • (2) Links as concepts. Copycat: structure, by implementation. Extended SOM: conceptual proximity reflected as an assignate property for a higher level.
  • (3) Activation featuring non-linear switching behavior. Copycat: structure, by implementation. Extended SOM: x.
  • (4) Conceptual proximity. Copycat: link length (dynamic parameter). Extended SOM: distance in map (dynamic parameter).
  • (5) Kind of concepts. Copycat: locational, positional. Extended SOM: symmetries, any.

From this comparison it is clear that the single most challenging part of this route is the possibility for the emergence of abstract intensions in the SOM based on empirical data. From the perspective of the SOM, relations between observational items such as “left-most,” “group” or “right of”, and even such as “sameness group” or “predecessor group”, are just probabilities of a pattern. Such patterns are identified by functions or dynamic combinations thereof. Combinations of topological primitives remain mappable by analytic functions. Such concepts we could call “primitive concepts”, and we can map these to the process of data transformation and the set of assignates as potential properties.2 It is then the job of the SOM to assign a relevancy to the assignates.

Yet, Copycat’s Slipnet comprises also rather abstract concepts such as “opposite”. Furthermore, the most abstract concepts often act as links between more primitive concepts, or, in Hofstadter’s terms, conceptual items of lower “conceptual depth”.

My feeling here is that it is a fundamental mistake to implement concepts like “opposite” directly. What is opposite of something else is a deeply semantic concept in itself, thus strongly dependent on the domain. I think that most of the interesting concepts, i.e. the most abstract ones are domain-specific. Concepts like “opposite” could be considered as something “simple” only in case of geometric or spatial domains.

Yet, that’s not a weakness. We should use this as a design feature. Take the rather simple case shown in the next figure as an example. Here we simply mapped triplets of uniformly distributed random values onto a SOM. The three values can be readily interpreted as parts of an RGB value, which renders the interpretation more intuitive. The special thing here is that the map has been a really large one: we defined approximately 700’000 nodes and fed approx. 6 million observations into it.

Figure 2: A SOM-based color map showing emergence of abstract features. Note that the topology of the map is a borderless toroid: Left and right borders touch each other (distance=0), and the same applies to the upper and lower borders.

We can observe several interesting things. The SOM didn’t come up with just any arbitrary sorting of the colors. Instead, a very particular one emerged.

First, the map is not perfectly homogeneous anymore. Very large maps tend to develop “anisotropies”, symmetry breaks if you like, simply due to the fact that the signal horizon becomes an important issue. This should not be regarded as a deficiency, though. Symmetry breaks are essential for the possibility of the emergence of symbols. Second, we can see that two “color models” emerged: the RGB model around the dark spot in the lower left, and the YMC model around the bright spot in the upper right. Third, the distance between the bright, almost white spot and the dark, almost black one is maximized.

In other words, and not quite surprisingly, the conceptual distance is reflected as a geometrical distance in the SOM. As in the case of the TL-SOM, we now could use the SOM as a measurement device that transforms an unknown structure into an internal property, simply by using the locational property in the SOM as an assignate for a secondary SOM. In this way we not only can represent “opposite”, but we even have a model procedure for “generalized oppositeness” at our disposal.
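The experiment is easy to repeat at a much smaller scale. The following sketch trains a SOM on random RGB triplets over a borderless (toroidal) lattice; all parameter values are our own choices, and 50×50 nodes stand in for the roughly 700’000 of the original run.

```python
import numpy as np

# A scaled-down sketch of the color-map experiment described above;
# all names and parameter values are ours.
rng = np.random.default_rng(0)
H, W, dim = 50, 50, 3
weights = rng.random((H, W, dim))        # random initial RGB weights
ys, xs = np.mgrid[0:H, 0:W]              # lattice coordinates

def toroidal_dist2(y, x):
    """Squared lattice distance on a borderless torus (edges touch)."""
    dy = np.minimum(np.abs(ys - y), H - np.abs(ys - y))
    dx = np.minimum(np.abs(xs - x), W - np.abs(xs - x))
    return dy ** 2 + dx ** 2

n_steps = 30000
for t in range(n_steps):
    v = rng.random(dim)                                  # one RGB observation
    d = ((weights - v) ** 2).sum(axis=2)                 # best-matching unit
    y, x = np.unravel_index(np.argmin(d), (H, W))
    frac = t / n_steps
    sigma = max(1.0, 0.5 * H * (1.0 - frac))             # shrinking neighborhood
    alpha = 0.25 * (1.0 - frac) + 0.01                   # shrinking learning rate
    h = np.exp(-toroidal_dist2(y, x) / (2.0 * sigma ** 2))
    weights += alpha * h[:, :, None] * (v - weights)

# 'weights' can now be rendered as an image; bright and dark poles and the
# color-model regions emerge as geometry on the torus.
```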

It is crucial to understand this step of “observing the SOM”, thereby conceiving the SOM as a filter, or more precisely as a measurement device. Of course, at this point it becomes clear that a large variety of such transposing and internal-virtual measurement devices may be thought of. Methodologically, this opens an orthogonal dimension to the representation of data, strongly resembling the concept of orthoregulation.

The map shown above even allows to create completely different color models, for instance one around yellow and another one around magenta. Our color psychology is strongly determined by the sun’s radiated spectrum, and hence it reflects a particular Lebenswelt; yet, there is no necessity about it. Some insects like bees are able to perceive ultraviolet radiation, i.e. their colors may have 4 components, yielding a completely different color psychology, while the capability to distinguish colors remains perfectly intact.3

“Oppositeness” is just a “simple” example of an abstract concept and its operationalization using a SOM. We already mentioned the “serial” coherence of texts (and thus of general arguments), which can be operationalized as a sort of virtual movement across a SOM of a particular level of integration.

It is crucial to understand that there is no other model besides the SOM that combines the ability to learn from empirical data and the possibility for emergent abstraction.

There is yet another lesson that we can take home from the simple example above. Well, the example doesn’t remain that simple. High-level abstraction, items of considerable conceptual depth so to speak, requires rather short assignate vectors. In the process of learning qua abstraction it appears to be essential that the masses of possible assignates derived from, or imposed by, the measurement of raw data get reduced. On the one hand, empiric contexts from very different domains should be abstracted, i.e. quite literally “reduced”, into the same perspective. On the other hand, any given empiric context should be abstracted into (much) more than just one abstract perspective. The consequence is that we need a lot of SOMs, all “sufficiently” separated from each other. In other words, we need a dynamic population of self-organizing maps in order to represent the capability of abstraction in real life. “Dynamic population” here means that there are developmental mechanisms that result in a proliferation, almost a breeding, of new SOM instances in a seamless manner. Of course, the SOM instances themselves have to be able to grow and to differentiate, as we have described it here and here.

In a population of SOMs, the conceptual depth of a concept may be represented by the effort needed to arrive at a particular abstract “intension.” This not only comprises the ordinary SOM lattices, but also processes like Markov models, simulations, idealizations qua SOMs, targeted modeling, transition into symbolic space, synchronous or potential activations of other SOM compartments, etc. This effort may finally be represented as a “number.”

Conclusions

The structure of a multi-layered system of self-organizing maps, as it has been proposed by Kohonen and co-workers, is a powerful model to represent emerging abstraction in response to empiric impressions. The Copycat model demonstrates how abstraction could be brought back to the level of application in order to become able to make analogies and to deal with “first-time exposures”.

Here we tried to outline a potential path to bring these models together. We regard this combination in the way we proposed it (or a quite similar one) as crucial for any advance in the field of machine-based episteme at large, but also for the rather confined area of machine learning. Attempts like that of Blank [4] appear to suffer seriously from categorical mis-attributions. Analogical thinking does not take place on the level of single neurons.

We didn’t discuss alternative models here (so far; a small extension is planned). The main reasons are, first, that it would be an almost endless job, and second, that Hofstadter already did it, and as a result of his investigation he dismissed all the alternative approaches (from authors like Gentner, Holyoak, Thagard). For an overview of recent models on creativity, analogical thinking, or problem solving, Runco [5] provides a good starting point. Of course, many authors point in roughly the same direction as we did here, but mostly the proposals are circular, not helpful because the problematic is just replaced by another one (e.g. the infamous and completely unusable “divergent thinking”), or can’t be implemented for other reasons. Holyoak and Thagard [6], for instance, claim that a “parallel satisfaction of the constraints of similarity, structure and purpose” is key in analogical thinking. Given our analysis, such statements are nothing but a great mess, mixing modeling, theory, vagueness and fluidity.

For instance, in cognitive psychology, and in the field of artificial intelligence as well, the hypothesis of Structural Mapping (STM) finds a lot of supporters [7]. Hofstadter discusses similar approaches in his book. The STM hypothesis is highly implausible and obviously a left-over of the symbolic approach to Artificial Intelligence, just transposed into more structural regions. The STM hypothesis not only has to be implemented as a whole, it also has to be implemented for each domain specifically. There is no emergence of that capability.

The combination of the extended SOM—interpreted as a dynamic population of growing SOM instances—with the Copycat mechanism indeed appears as a self-sustaining approach into proliferating abstraction and, quite significantly, back from it into application. It will be able to make analogies in any field already at its first encounter with it, even regarding itself, since both the extended SOM and the Copycat comprise several mechanisms that may count as precursors of high-level reflexivity.

After this proposal little remains to be said on the technical level. One of those issues which remain to be discussed is the conditions for the possibility of binding internal processes to external references. Here our favorite candidate principle is multi-modality, that is, the joint and inextricable “processing” (in the sense of “getting affected”) of words, images and physical signals alike. In other words, I feel that we have come close to the fulfillment of the ariadnic question of this blog: “Where is the Limit?” …even in its multi-faceted aspects.

A lot of implementation work has now to be performed, eventually accompanied by some philosophical musings about “cognition”, or, more appropriately, the “epistemic condition.” I just would like to invite you to stay tuned for the software publications to come (hopefully in the near future).

Notes

1. see also the other chapters about the SOM, SOM-based modeling, and generalized modeling.

2. It is somehow interesting that in the brain of many animals we can find very small groups of neurons, if not even single neurons, that respond to primitive features such as verticality of lines, or the direction of the movement of objects in the visual field.

3. Ludwig Wittgenstein insisted all the time that we can’t know anything about the “inner” representation of “concepts.” It is thus free of any sense and meaning to claim knowledge about the inner state of oneself as well as of that of others. Wilhelm Vossenkuhl introduces and explains the Wittgensteinian “grammatical” solipsism carefully and in a very nice way. [8] The only thing we can know about inner states is that we use certain labels for them, and the only meaning of emotions is that we do report them in certain ways. In other terms, the only important thing is the ability to distinguish one’s feelings. This, however, is easy to accomplish for SOM-based systems, as we have been demonstrating here and elsewhere in this collection of essays.

4. Don’t miss Timo Honkela’s webpage where one can find a lot of gems related to SOMs! The only puzzling issue about all the work done in Helsinki is that the people there constantly and pervasively misunderstand the SOM per se as a modeling tool. Despite their ingenuity they completely neglect the issues of data transformation, feature selection, validation and data experimentation, which all have to be integrated to achieve a model (see our discussion here), for a recent example see here, or the cited papers about the Websom project.

  • [1] Timo Honkela, Samuel Kaski, Krista Lagus, Teuvo Kohonen (1997). WEBSOM – Self-Organizing Maps of Document Collections. Neurocomputing, 21: 101-117. (see note 4)
  • [2] Krista Lagus, Samuel Kaski, Teuvo Kohonen (2004). Mining massive document collections by the WEBSOM method. Information Sciences, 163(1-3): 135-156. DOI: 10.1016/j.ins.2003.03.017.
  • [3] Klaus Wassermann (2010). Nodes, Streams and Symbionts: Working with the Associativity of Virtual Textures. The 6th European Meeting of the Society for Literature, Science, and the Arts, Riga, 15-19 June, 2010. available online.
  • [4] Douglas S. Blank. Implicit Analogy-Making: A Connectionist Exploration. Indiana University Computer Science Department. available online.
  • [5] Mark A. Runco (2007). Creativity: Research, Development, and Practice. Elsevier.
  • [6] Keith J. Holyoak, Paul Thagard (1995). Mental Leaps: Analogy in Creative Thought. MIT Press, Cambridge.
  • [7] John F. Sowa, Arun K. Majumdar (2003). Analogical Reasoning. In: A. Aldo, W. Lex, & B. Ganter (eds.), “Conceptual Structures for Knowledge Creation and Communication,” Proc. Intl. Conf. Conceptual Structures, Dresden, Germany, July 2003. LNAI 2746, Springer, New York. pp. 16-36. available online.
  • [8] Wilhelm Vossenkuhl (2009). Solipsismus und Sprachkritik. Beiträge zu Wittgenstein. Parerga, Berlin.

۞

Beyond Containing: Associative Storage and Memory

February 14, 2012 § Leave a comment

Memory, our memory, is a wonderful thing. Most of the time.

Yet, it also can trap you, sometimes terribly, if you use it in inappropriate ways.

Think about the problematics of being a witness. As long as you don’t try to remember exactly, you know precisely. As soon as you try to achieve perfect recall, everything starts to become fluid first, then fuzzy and increasingly blurry. As if there were some kind of uncertainty principle, similar to Heisenberg’s [1]. There are other tricks, such as asking a person the same question over and over again: any degree of security, hence knowledge, will vanish. In the other direction, everybody knows the experience that a tiny little smell or sound triggers a whole story in memory, and often one that has not been cared about for a long time.

The main strengths of memory—extensibility, adaptivity, contextuality and flexibility—could be considered also as its main weaknesses, if we expect perfect reproducibility for the results of “queries”. Yet, memory is not a database. There are neither symbols nor indexes, and at the deeper levels of its mechanisms also no signs. There is no particular neuron that would “contain” information in the way a file on a computer can be regarded to do.

Databases are, of course, extremely useful, precisely because they cannot do otherwise than reproduce answers perfectly. That’s how they are designed and constructed. And precisely for the same reason we may state that databases are dead entities, like crystals.

The reproducibility provided by databases expels time. We can write something into a database, stop everything, and continue precisely at the same point. Databases do not own their own time. Hence, they are purely physical entities. As a consequence, databases do not, and cannot, think. They can’t bring or put things together, they do not associate, superpose, or mix. Everything is under the control of an external entity. A database does not learn when the amount of bits stored inside it increases. We also have to be very clear about the fact that a database does not interpret anything. All this should not be understood as criticism, of course; these properties are intended by design.

The first important consequence about this is that any system relying just on the principles of a database also will inherit these properties. This raises the question about the necessary and sufficient conditions for the foundations of  “storage” devices that allow for learning and informational adaptivity.

As a first step one could argue that artificial systems capable of learning, for instance self-organizing maps, or any other “learning algorithm”, may consist of a database and a processor. This would represent the bare bones of the classic von Neumann architecture.

The essence of this architecture is, again, reproducibility as a design intention. The processor is basically empty. As long as the database is not part of a self-referential arrangement, there won’t be anything like a morphological change.

Learning without change of structure is not learning, but only changing the values of structural parameters that have been defined apriori (at implementation time). The crucial step, however, would be to introduce those parameters at all. We will return to this point at a later stage of our discussion, when we describe the processing capabilities of self-organizing maps.1

Of course, the boundaries are not well defined here. We may implement a system in a very abstract manner such that a change in the value of such highly abstract parameters indeed involves deep structural changes. In the end, almost everything can be expressed by some parameters and their values. That’s nothing else than the principle of the Deleuzean differential.

What we want to emphasize here is just the issue that (1) morphological changes are necessary in order to establish learning, and (2) these changes should be established in response to the environment (and the information flowing from there into the system). These two conditions together establish a third one, namely that (3) a historical contingency is established that acts as a constraint on the further potential changes and responses of the system. The system acquires individuality. Individuality and learning are co-extensive. Quite obviously, such a system is not a von Neumann device any longer, even if it still runs on such a linear machine.

Our claim here is that “learning” requires a particular perspective on the concept of “data” and its “storage.” And, correspondingly, without this changed concept of the relation between data and storage, the emergence of machine-based episteme will not be achievable.

Let us just contrast the two ends of our space.

  • (1) At the logical end we have the von Neumann architecture, characterized by empty processors, perfect reproducibility on an atomic level, the “bit”; there is no morphological change; only estimation of predefined parameters can be achieved.
  • (2) The opposite end is made from historically contingent structures for perception, transformation and association, where the morphology changes due to the interaction with the perceived information2; we will observe emergence of individuality; morphological structures are always just relative to the experienced influences; learning occurs and is structural learning.

With regard to a system that is able to learn, one possible conclusion from that would be to drop the distinction between the storage of encoded information and the treatment of those encodings. Perhaps it is the only viable conclusion to this end.

In the rest of this chapter we will demonstrate how the separation between data and their transformation can be overcome on the basis of self-organizing maps. Such a device we call “associative storage”. We also will find a particular relation between such an associative storage and modeling3. Notably, both tasks can be accomplished by self-organizing maps.

Prerequisites

When taking the perspective of usage, there is still another large contrasting difference between databases and associative storage (“memories”). In the case of a database, the purpose of a storage event is known at the time of performing the storing operation. In the case of memories and associative storage this purpose is not known, and often can’t reasonably be expected to be knowable in principle.

From that we can derive a quite important consequence. In order to build a memory, we have to avoid storing the items “as such,” as is the case for databases. We may call this the (naive) representational approach. Philosophically, the stored items do not have any structure inside the storage device, neither an inner structure nor an outer one. Any item appears as a primitive quale.

The contrast to the process in an associative storage is indeed a strong one. Here, it is simply forbidden to store items in an isolated manner, without relation to other items, as an engram, an encoded and reversibly decodable series of bits. Since a database works perfectly reversibly and reproducibly, we can encode the grapheme of a word into a series of bits and later decode that series back into a grapheme again, which in turn we as humans (with memory inside the skull) can interpret as words. Strictly taken, we do NOT use the database to store words.

More concretely, what we have to do with the items comprises two independent steps:

  • (1) Items have to be stored as context.
  • (2) Items have to be stored as probabilized items.

The second part of our re-organized approach to storage is a consequence of the impossibility to know about future uses of a stored item. Taken inversely, using a database for storage always and strictly implies that the storage agent claims to know perfectly about future uses. It is precisely this implication that renders long-lasting storage projects so problematic, if not impossible.

In other words, and even more concise, we may say that in order to build a dynamic and extensible memory we have to store items in a particular form.

Memory is built on the basis of a population of probabilistic contexts in and by an associative structure.

The Two-Layer SOM

In a highly interesting prototypical model project (codename “WEBSOM”), Kaski (a collaborator of Kohonen) introduced a particular SOM architecture that serves the requirements described above [2]. Yet, Kohonen (and all of his colleagues alike) has so far not recognized the actual status of that architecture. We already mentioned this point in the chapter about some improvements of the SOM design: Kohonen fails to discern modeling from sorting when he uses the associative storage as a modeling device. Yet, modeling requires a purpose, operationalized into one or more target criteria. Hence, an associative storage device like the two-layer SOM can be conceived as a pre-specific model only.

Nevertheless, this SOM architecture is not only highly remarkable, it can also be extended easily and appropriately; it is indeed so important, at least as a starting point, that we describe it briefly here.

Context and Basic Idea

The context for which the two-layer SOM (TL-SOM) has been created is document retrieval by classification of texts. From the perspective of classification, texts are highly complex entities. This complexity of texts derives from the following properties:

  • – there are different levels of context;
  • – there are rich organizational constraints, e.g. grammars;
  • – there is a large corpus of words;
  • – there is a large number of relations that not only form a network, but which also change dynamically in the course of interpretation.

Taken together, these properties turn texts into ill-defined or even undefinable entities, for which it is not possible to provide a structural description, e.g. as a set of features, and particularly not in advance of the analysis. Briefly: texts are unstructured data. It is clear that especially non-contextual methods like the infamous n-grams are deeply inappropriate for the description, and hence also for the modeling, of texts. The peculiarity of texts has been recognized long before the age of computers. Around 1830 Friedrich Schleiermacher founded the discipline of hermeneutics as a response to the complexity of texts. In the last decades of the 20th century, it was Jacques Derrida who brought in a new perspective on it. In Deleuzean terms, texts are always and inevitably deterritorialized to a significant portion. Kaski and coworkers addressed only a modest part of these vast problematics: the classification of texts.

The starting point they took was to preserve context. The large variety of contexts makes it impossible to take any kind of raw data directly as input for the SOM. That means that the contexts had to be encoded in a proper manner. The trick is to use a SOM for this encoding (details in the next section below). This SOM represents the first layer. The subject of this SOM are the contexts of words (definition below). The “state” of this first SOM is then used to create the input for the SOM on the second layer, which then addresses the texts. In this way, the size of the input vectors is standardized and reduced.

Elements of a Two-Layer SOM

The elements, or building blocks, of a TL-SOM devised for the classification of texts are

  • (1) random contexts,
  • (2) the map of categories (word classes)
  • (3) the map of texts

The Random Context

A random context encodes the context of any of the words in a text. Let us assume, for the sake of simplicity, that the context is bilaterally symmetric according to 2n+1; i.e., with n=3 the length of the context is 7, where the focused word (“structure”) is at position 3 (when counting starts with 0).

Let us resort to the following example, which takes just two snippets from this text. The numbers represent some arbitrary enumeration of the relative positions of the words.

sequence A of words: “… without change of structure is not learning …”
rel. positions in text: 53, 54, 55, 56, 57, 58, 59

sequence B of words: “… not have any structure inside the storage …”
rel. positions in text: 19, 20, 21, 22, 23, 24, 25

The position numbers we just need for calculating the positional distance between words. The interesting word here is “structure”.

For the next step you have to think about the words listed in a catalog of indexes, that is as a set whose order is arbitrary but fixed. In this way, any of the words gets its unique numerical fingerprint.

Index   Word        Random Vector
…
1264    structure   0.270  0.938  0.417  0.299  0.991 …
1265    learning    0.330  0.990  0.827  0.828  0.445 …
1266    Alabama     0.375  0.725  0.435  0.025  0.915 …
1267    without     0.422  0.072  0.282  0.157  0.155 …
1268    storage     0.237  0.345  0.023  0.777  0.569 …
1269    not         0.706  0.881  0.603  0.673  0.473 …
1270    change      0.170  0.247  0.734  0.383  0.905 …
1271    have        0.735  0.472  0.661  0.539  0.275 …
1272    inside      0.230  0.772  0.973  0.242  0.224 …
1273    any         0.509  0.445  0.531  0.216  0.105 …
1274    of          0.834  0.502  0.481  0.971  0.711 …
1275    is          0.935  0.967  0.549  0.572  0.001 …
…

Any of the words of a text can now be replaced by an apriori determined vector of random values from [0..1]; the dimensionality of those random vectors should be around 80 in order to approximate orthogonality among all those vectors. Just to be clear: these random vectors are taken from a fixed codebook, a catalog as sketched above, where each word is assigned to exactly one such vector.
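A sketch of the codebook and the 2n+1 window in code (the function names are ours; the values will differ from the table above, which is only illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def make_codebook(vocabulary, dim=80):
    """Fixed catalog: each word gets exactly one random vector from [0..1]."""
    return {word: rng.random(dim) for word in sorted(vocabulary)}

def random_contexts(words, n=3):
    """All symmetric contexts of length 2n+1; the focus word sits at index n."""
    return [words[i - n:i + n + 1] for i in range(n, len(words) - n)]

words = "without change of structure is not learning".split()
codebook = make_codebook(set(words))
contexts = random_contexts(words, n=3)   # here: one context, around "structure"
```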

Once we have performed this replacement, we can calculate the averaged vectors per relative position of the context. In the case of the example above, we would calculate the reference vector for position n=0 (i.e. relative position -3) as the average of the vectors encoding the words “without” (from sequence A) and “not” (from sequence B).

Let us be more explicit. Example sequence A we first translate into the positional numbers, interpret each positional number as a column header, and fill the column with the values of the respective word’s fingerprint. For the 7 positions (-3 … +3) we get 7 columns:

sequence A of words: “… without change of structure is not learning …”
rel. positions in text: 53, 54, 55, 56, 57, 58, 59
grouped around “structure”: -3, -2, -1, 0, 1, 2, 3

random fingerprints per position:
0.422  0.170  0.834  0.270  0.935  0.706  0.330
0.072  0.247  0.502  0.938  0.967  0.881  0.990
0.282  0.734  0.481  0.417  0.549  0.603  0.827

…further entries of the fingerprints…

The same we have to do for the second sequence B. Now we have two tables of fingerprints, both comprising 7 columns and N rows, where N is the length of the fingerprint. From these two tables we calculate the averaged values and put them into a new table (which is of course also of dimensions 7×N). Such, the example above yields 7 averaged reference vectors. If we have a dimensionality of 80 for the random vectors, we end up with a matrix of [r,c] = [80,7].

In a final step we concatenate the columns into a single vector, yielding a vector of 7×80=560 variables. This might appear as a large vector. Yet, it is much smaller than the whole corpus of words in a text. Additionally, such vectors can be compressed by the technique of random projection (mathematical foundations by [3], first proposed for data analysis by [4], utilized for SOMs later by [5] and [6]), which today is quite popular in data analysis. Random projection works by matrix multiplication. Our vector (1R × 560C) gets multiplied with a matrix M(r) of 560R × 100C, yielding a vector of 1R × 100C. The matrix M(r) also consists of flat random values. This technique is very interesting, because no relevant information is lost, but the vector gets shortened considerably. Of course, in an absolute sense there is a loss of information. Yet, the SOM only needs the information which is important to distinguish the observations.
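Continuing the sketch from above, the per-position averaging, the concatenation, and the random projection could look as follows (again, the function names are ours, and the flat random projection matrix follows the description in the text rather than any particular library):

```python
import numpy as np

def context_fingerprint(contexts, codebook, dim=80):
    """Average the word fingerprints per relative position, then concatenate
    the position columns into one long vector of (2n+1)*dim variables."""
    n_pos = len(contexts[0])
    M = np.zeros((dim, n_pos))
    for ctx in contexts:
        for pos, word in enumerate(ctx):
            M[:, pos] += codebook[word]
    M /= len(contexts)                       # positionwise average
    return M.reshape(-1, order="F")          # column-wise concatenation

def random_projection(v, out_dim=100, seed=7):
    """Compress e.g. (1 x 560) to (1 x 100) by multiplication with a fixed
    matrix of flat random values, as described above."""
    rng = np.random.default_rng(seed)
    R = rng.random((v.size, out_dim))
    return v @ R
```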

This technique of transferring a sequence made from items encoded on a symbolic level into a vector that is based on random contexts can, of course, be applied to any symbolic sequence.

For instance, it would be a drastic case of reductionism to conceive of the path taken by humans in an urban environment just as a sequence of locations. Humans are symbolic beings and the urban environment is full of symbols to which we respond. Yet, for the population-oriented perspective any individual path is just a possible path. Naturally, we interpret it as a random path. The path taken through a city needs to be described both by location and by symbol.

The advantage of the SOM is that the random vectors that encode the symbolic aspect can be combined seamlessly with any other kind of information, e.g. the locational coordinates. That’s the property of multi-modality. Which particular combination of “properties” is then suitable to classify the paths for a given question is subject to “standard” extended modeling as described in the chapter Technical Aspects of Modeling.

The Map of Categories (Word Classes)

From these random context vectors we can now build a SOM. Similar contexts will arrange in adjacent regions.

A particular text now can be described by its differential abundance across that SOM. Remember that we have sent the random contexts of many texts (or text snippets) to the SOM. To achieve such a description, a (relative) frequency histogram is calculated, which has as many classes as the SOM has nodes. The values of the histogram are the relative frequencies (“probabilities”) with which the contexts of a particular text appear in the respective nodes, in comparison to all other texts.

Any particular text is now described by a fingerprint, that contains highly relevant information about

  • – the context of all words as a probability measure;
  • – the relative topological density of similar contextual embeddings;
  • – the particularity of texts across all contextual descriptions, again as a probability measure;

Those fingerprints represent the texts, and they are ready-mades for the final step: “learning” the classes by the SOM on the second layer in order to identify groups of “similar” texts.
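A sketch of this “turning by 90 degrees” in code (names are ours): the trained primary SOM serves as a measurement device, and each text becomes a relative frequency histogram over its nodes.

```python
import numpy as np

def text_fingerprint(context_vectors, som_weights):
    """Histogram of one text across the nodes of the primary SOM.
    som_weights: (n_nodes, dim); context_vectors: the text's random contexts,
    encoded and projected as sketched above."""
    hist = np.zeros(som_weights.shape[0])
    for v in context_vectors:
        bmu = int(np.argmin(((som_weights - v) ** 2).sum(axis=1)))
        hist[bmu] += 1.0
    return hist / hist.sum()     # relative frequencies ("probabilities")

# The collection of such fingerprints, one per text, is the input
# for the secondary SOM.
```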

It is clear that this basic variant of the two-layer SOM procedure can be improved in multiple ways. Yet, the idea should be clear. Some of those improvements are

  • – to use a fully developed concept of context, e.g. this one, instead of a constant length context and a context without inner structure;
  • – evaluating not just the histogram as a foundation of the fingerprint of a text, but also the sequence of nodes according to the sequence of contexts; that sequence can be processed using a Markov-process method, such as HMM, Conditional Random Fields, or, in a self-similar approach, by applying the method of random contexts to the sequence of nodes;
  • – reflecting at least parts of the “syntactical” structure of the text, such as sentences, paragraphs, and sections, as well as the grammatical role of words;
  • – enriching the information about “words” by representing them not only in their observed form, but also as their close synonyms, or stuffed with the information about pointers to semantically related words as this can be taken from labeled corpuses.

We want to briefly return to the first layer. Just imagine not measuring the histogram, but instead following the indices of the contexts across the developed map with your fingertips. A particular path, or virtual movement, appears. I think that it is crucial to reflect this virtual movement in the input data for the second layer.

The reward could be significant, indeed. It offers nothing less than a model for conceptual slippage, a term which has been emphasized by Douglas Hofstadter throughout his research on analogical and creative thinking. Note that in our modified TL-SOM this capacity is not an “extra function” that had to be programmed; it is deeply built “into” the system, or in other words, it makes up its character. Besides Hofstadter’s proposal, which is based on a completely different approach and aims at a different task, we do not know of any other system that would be capable of that. We even may expect that the efficient production of metaphors can be achieved by it, which is not an insignificant goal, since all practiced language is always metaphoric.

Associative Storage

We already mentioned that the method of the TL-SOM extracts important pieces of information about a text and represents them as a probabilistic measure. The SOM does not contain the whole piece of text as a single entity, nor a series of otherwise unconnected entities, the words. The SOM breaks the text up into overlapping pieces, or better, into overlapping probabilistic descriptions of such pieces.

It would be a serious misunderstanding to perceive this splitting into pieces as a drawback or failure. It is the mandatory prerequisite for building an associative storage.

Any further target-oriented modeling would refer to the two layers of a TL-SOM, but never to the raw input text. Such, it can work reasonably fast for a whole range of different tasks. One of those tasks that can be solved by a combination of associative storage and true (targeted) modeling is to find an optimized model for a given text, or any text snippet, including the identification of the discriminating features. We also can turn the perspective around, addressing the query to the SOM about an alternative formulation in a given context…

From Associative Storage towards Memory

Despite its power and its potential as associative storage, the two-layer SOM still can’t be conceived as a memory device. The associative storage just takes the probabilistically described contexts and sorts them topologically into the map. In order to establish “memory”, further components are required that provide the goal orientation.

Within the world of self-organizing maps, simple (!) memories are easy to establish. We just have to combine a SOM that acts as associative storage with a SOM for targeted modeling. The peculiar distinctive feature of that second SOM for modeling is that it does not work on external data, but on “data” as it is available in and as the SOM that acts as associative storage.

We may establish a vivid memory in its full meaning if we establish three further components: (1) targeted modeling via the SOM principle, (2) a repository of the targeted models that have been built from (or using) the associative storage, and (3) at least a partial operationalization of a self-reflective mechanism, i.e. a modeling process that is going to model the working of the TL-SOM itself. Since in our framework the basic SOM module is able to grow and to differentiate, there is no principal limitation for such a system any more, concerning its capability to build concepts, models, and (logical) habits for navigating between them. Later, we will call the “space” where this navigation takes place “choreosteme”: drawing figures into the open space of epistemic conditionability.
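As a purely structural sketch, and with all class names being ours (nothing here is an existing API), the composition of these components could look like this:

```python
class AssociativeStorage:
    """The TL-SOM part: absorbs probabilistic contexts, has no target."""
    def absorb(self, item): ...
    def state(self): ...           # its lattice, readable as "data"

class TargetedModel:
    """A SOM-based model with an operationalized purpose."""
    def __init__(self, purpose): self.purpose = purpose
    def learn(self, data): ...

class Memory:
    def __init__(self):
        self.storage = AssociativeStorage()
        self.repository = []                                   # component (2)
        self.self_model = TargetedModel("working of the TL-SOM")  # component (3)

    def remember(self, item):
        self.storage.absorb(item)

    def recall(self, purpose):
        model = TargetedModel(purpose)        # component (1): targeted modeling
        model.learn(self.storage.state())     # on the storage, not on raw data
        self.repository.append(model)
        return model
```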

From such a memory we may expect dramatic progress concerning the “intelligence” of machines. The only questionable thing is whether we should call such an entity still a machine. I guess, there is neither a word nor a concept for it.


Notes

1. Self-organizing maps have some amazing properties on the level of their interpretation, which they share especially with Markov models. As such, the SOM and Markov models are outstanding. Both the SOM and the Markov model can be conceived as devices that turn programming statements, i.e. all the IF-THEN-ELSE statements occurring in a program, into DATA. Even logic itself, or more precisely, any quasi-logic, is getting transformed into data. SOM and Markov models are double-articulated (a Deleuzean notion) into logic on the one side and the empiric on the other.

In order to achieve such, full write access is necessary to the extensional as well as the intensional layer of a model. Hence, neither artificial neural networks nor, of course, statistical methods like PCA can be used to achieve the same effect.

2. It is quite important not to forget that (in our framework) information is nothing that “is out there.” If we follow the primacy of interpretation, for which there are good reasons, we also have to acknowledge that information is not a substantial entity that could be stored or processed. Information is nothing else than the actual characteristics of the process of interpretation. These characteristics can’t be detached from the underlying process, because this process is represented by the whole system.

3. Keep in mind that we only can talk about modeling in a reasonable manner if there is an operationalization of the purpose, i.e. if we perform target oriented modeling.

  • [1] Werner Heisenberg. Uncertainty Principle.
  • [2] Samuel Kaski, Timo Honkela, Krista Lagus, Teuvo Kohonen (1998). WEBSOM – Self-organizing maps of document collections. Neurocomputing 21 (1998) 101-117.
  • [3] W.B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. In Conference in Modern Analysis and Probability, volume 26 of Contemporary Mathematics, pages 189–206. Amer. Math. Soc., 1984.
  • [4] R. Hecht-Nielsen. Context vectors: general purpose approximate meaning representations self-organized from raw data. In J.M. Zurada, R.J. Marks II, and C.J. Robinson, editors, Computational Intelligence: Imitating Life, pages 43–56. IEEE Press, 1994.
  • [5] Papadimitriou, C. H., Raghavan, P., Tamaki, H., & Vempala, S. (1998). Latent semantic indexing: A probabilistic analysis. Proceedings of the Seventeenth ACM Symposium on the Principles of Database Systems (pp. 159-168). ACM press.
  • [6] Bingham, E., & Mannila, H. (2001). Random projection in dimensionality reduction: Applications to image and text data. Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 245-250). ACM Press.

۞

SOM = Speedily Organizing Map

February 12, 2012 § Leave a comment

The Self-organizing Map is a powerful and high-potential computational procedure.

Yet, there is no free lunch, especially not for procedures that are able to deliver meaningful results.

The self-organizing map is such a valuable procedure; we have discussed its theoretical potential with regard to a range of different aspects in other chapters. Here, we do not want to deal further with such theoretical or even philosophical issues, e.g. those related to the philosophy of mind. Instead we focus on the issue of performance, understood simply in terms of speed.

For all those demo SOMs the algorithmic time complexity is not really an issue. The algorithm approximates a stable state rather quickly. Yet, small maps (where “small” means something like less than 500 nodes or so) are not really interesting. It is much like with brains. Brains are made largely from neurons and some chemicals, and a lot of them can do amazing things. If you take 500 of them you may stuff a worm in an appropriate manner, but not even a termite. The important questions, beyond the nice story about theoretical benefits, thus are:

What happens with the SOM principle if we connect 1’000’000 nodes?

How to organize 100, 1000 or 10’000 of such million-nodes SOMs?

By these figures we would end up with somewhere around 1 to 10 billion nodes1, all organized along the same principles. Just to avoid a common misunderstanding here: these masses of neurons are organized in a very similar manner, yet the totality of them builds a complex system as we have described it in our chapter about complexity. There are several, if not many, emergent levels, and a lot of self-referentiality. These billions of nodes are not all engaged with segmenting external data! We will see elsewhere, in the chapter about associative storage and memory, how such a deeply integrated modular system could be conceived of. There are some steps to take, though not terribly complicated or difficult ones.

When approaching such scales, the advantage of the self-organization turns palpably into a problematic disadvantage. “Self-organizing” means “bottom-up,” and this bottom-up direction in SOMs is represented by the fact that all records representing the observations have to be compared repeatedly to all nodes in the SOM in order to find the so-called “best matching unit” (BMU). The BMU is just that node in the network whose intensional profile is the most similar among all the other profiles2. Though the SOM avoids comparing all records to all records, its algorithmic complexity scales as a power function of its own size! Normally, the cost of an algorithm depends on the size of the data, but not on the size of the algorithm’s own structure.

In its naive form the SOM shows a complexity of something like O(n·w·m²), where n is the amount of data (number of records, size of the feature set), w the number of nodes to be visited while searching for the BMU, and m² the number of nodes affected by the update procedure. w and m are scaled by factors f1, f2 < 1, but the basic complexity remains. The update procedure affects an area that depends on the size of the SOM itself, hence the exponent. The exact degree of algorithmic complexity is not absolutely determined, since it depends, among other things, on the dynamics of the learning function.
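To make this cost structure tangible, here is a minimal sketch of the naive mechanism in Python; all names, the Gaussian neighborhood, and the decay schedules are illustrative choices, not a reference implementation. The inner loop scans all nodes for the BMU, and the update touches every node within the shrinking radius.

```python
import numpy as np

def train_som_naive(data, side=20, epochs=10, lr0=0.5, radius0=None):
    """Naive SOM training loop. data: array of shape (n_records, n_features).
    Per record: a full scan over all nodes (the 'w' term) plus an update of
    every node within the current radius (the 'm^2' term)."""
    n, dim = data.shape
    radius0 = radius0 or side / 2.0
    weights = np.random.rand(side, side, dim)          # intensional profiles
    grid = np.stack(np.meshgrid(np.arange(side), np.arange(side),
                                indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)              # decaying learning rate
        radius = max(1.0, radius0 * (1.0 - epoch / epochs))  # shrinking vicinity
        for x in data:
            # BMU search: compare the record to *all* nodes
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # lateral control: update all nodes within the radius
            gdist = np.linalg.norm(grid - np.array(bmu), axis=-1)
            mask = gdist <= radius
            h = np.exp(-(gdist[mask] ** 2) / (2 * radius ** 2))
            weights[mask] += lr * h[:, None] * (x - weights[mask])
    return weights
```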

The situation worsens significantly if we apply improvements to the original flavor of the SOM, e.g.

  • – the principle of homogenized variance (per variable across extensional containers);
  • – in targeted modeling, tracking the explicit classification performance per node on the level of records, which means that the data has to be referenced;
  • – size balancing of nodes;
  • – morphological differentiation like growing and melting, as in the FluidSOM, which additionally allows for free-ranging nodes;
  • – evolutionary feature selection and the creation of proto-hypotheses;
  • – dynamic transformation of data;
  • – and, finally, the problem of the indeterminacy of empiric data, which enforces differential modeling, i.e. a style of modeling that performs an experimental investigation of the dependency of the results on the settings (the free parameters) of the computational procedure: sampling the data, choosing the target, selecting a resolution for the scale of the resulting classification, choosing a risk attitude, among several more.

All of these affect the results of modeling, that is, the prognostic/diagnostic/semantic conclusions one could draw from it. Albeit all these steps could be organized based on a set of rules, including the application of another instance of a SOM, and thus could be run automatically, all of these necessary explorations require separate modeling. It is quite easy to set up an exploration plan for differential modeling that requires several dozens of models, and if evolutionary optimization is going to be applied, hundreds if not thousands of different maps have to be calculated.

Fortunately, the SOM offers a range of opportunities for using dynamic look-up tables and parallel processing. A SOM consisting of 1’000’000 neurons could easily utilize several thousand threads, without many worries about concurrency (or the collisions of parallel threads). Unfortunately, such computers are not available yet, but you get the message…

Meanwhile we have to apply optimization through dynamically built look-up tables.  These I will describe briefly in the following sections.

Searching the Most Similar Node

An integral part of speeding up the SOM in real-life applications is an appropriate initialization of the intensional profiles across the map. Of course, precisely this cannot be known in advance, at least not exactly. The self-organization of the map is the shortest path to its final state; there is no analytic short-cut. Kohonen proposes to apply Principal Component Analysis (PCA) for calculating the initial values. I am convinced that this is not a good idea. The PCA is deeply incompatible with the SOM, hence it will respond to very different traits in the data. PCA and SOM behave similarly only in demo cases…

Preselecting the Landing Zone

A better alternative is the SOM itself. Since the mapping organized by the SOM preserves the topology of the data, we can apply a much smaller SOM, or even a nested series of down-scaled SOMs, to create a coarse model for selecting the appropriate sub-population in the large SOM. The steps are the following:

  • 1. create the main SOM, say 40’000 nodes, organized as a square with sides of 200 nodes each;
  • 2. create a SOM for preselecting the landing zone, scaled to approximately 14 by 14 nodes, and use the same structure (i.e. the same feature vectors) as for the large SOM;
  • 3. prime this small SOM of around 200 nodes with a small but significant sample of the data, comprising say 2000..4000 records; draw this sample randomly from the data; this step completes comparatively quickly (faster by a factor of about 200 in our example);
  • 4. initialize the main SOM by a blurred (geometric) projection of the intensional profiles from the minor to the larger SOM;
  • 5. now use the minor SOM as a model for the selection of the landing zone, simply by means of geometric projection.

As a result, the number of nodes to be visited in the large SOM in order to find the best match remains almost constant.
There is an interesting correlate to this technique. If one needs a series of SOM-based representations of the data, distinguished just by the maximum number of nodes in the map, one should always start with the lowest, i.e. the most coarse resolution, with the least number of nodes. The results then can be used as a projective priming of the SOM on the next level of resolution.
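A minimal sketch of this preselection scheme, assuming square maps whose side lengths are roughly integer multiples of each other; the helper bmu() and the halo margin are illustrative:

```python
import numpy as np

def bmu(weights, x):
    """Coordinates (i, j) of the best matching unit in a grid of profiles."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

def bmu_via_landing_zone(big, small, x, halo=2):
    """Locate the landing zone through the coarse map, then scan only the
    projected vicinity of the large map. big: (B, B, dim) profiles of the
    main SOM; small: (S, S, dim) profiles of the minor SOM; halo: extra
    margin (in cells of the large map) around the projected zone."""
    B, S = big.shape[0], small.shape[0]
    scale = B // S                        # e.g. 200 // 14, cells per coarse node
    ci, cj = bmu(small, x)                # coarse landing zone
    i0, i1 = max(0, ci * scale - halo), min(B, (ci + 1) * scale + halo)
    j0, j1 = max(0, cj * scale - halo), min(B, (cj + 1) * scale + halo)
    si, sj = bmu(big[i0:i1, j0:j1], x)    # scan only the zone
    return i0 + si, j0 + sj               # coordinates in the large map
```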

Explicit Lookup Tables linking Observations to SOM areas

In the previous section we described the usage of a secondary, much smaller SOM as a device for limiting the number of nodes to be scanned. The same problem can be addressed by explicit lookup tables that establish a link between a given record and a vicinity around its last (few) best matching units.

If the SOM is approximately stable, that is, after the SOM has seen a significant portion of the data, it is no longer necessary to check the whole map. Just scan the vicinity around the last best matching node. Again, the number of nodes that need to be checked is almost constant.

The stability cannot be predicted in advance, of course. The SOM is, as the name says, self-organizing (albeit in a weak manner). As a rule of thumb, one could check the average number of observations attached to a particular node, the average taken across all nodes that contain at least one record. This average filling should be larger than 8..10 (due to considerations based on variance, and some arguments derived from non-parametric statistics… but it is a rule of thumb).
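As a sketch, assuming a plain dictionary serves as the lookup table and the map is a square grid:

```python
import numpy as np

def bmu_with_lookup(weights, x, rec_id, last_bmu, radius=3):
    """Restrict the BMU search to a vicinity around the record's previous
    best match. last_bmu: dict mapping record id -> (i, j); new records
    fall back to a full scan of the (side x side) map."""
    side = weights.shape[0]
    if rec_id in last_bmu:
        i, j = last_bmu[rec_id]
        i0, i1 = max(0, i - radius), min(side, i + radius + 1)
        j0, j1 = max(0, j - radius), min(side, j + radius + 1)
        sub = weights[i0:i1, j0:j1]
        d = np.linalg.norm(sub - x, axis=-1)
        si, sj = np.unravel_index(np.argmin(d), d.shape)
        best = (i0 + si, j0 + sj)
    else:
        d = np.linalg.norm(weights - x, axis=-1)
        best = np.unravel_index(np.argmin(d), d.shape)
    last_bmu[rec_id] = best               # remember for the next pass
    return best
```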

Large Feature Vectors

Feature vectors can be very large. In life sciences and medicine I have experienced cases with 3000 raw variables. During data transformation this number can increase to 4000..6000 variables. The comparison of such feature vectors is quite expensive.

Fortunately, there are some nice tricks, which are all based on the same strategy. This strategy comprises the following steps.

  • 1. create a temporary SOM with a very different feature vector: this vector has just around 80..100 (real-valued) positions plus 1 position for the index variable (in other words, the table key); thus the size of the vector is about a 60th of the original vector, if we are faced with 6000 variables;
  • 2. create the derived vectors by encoding the records representing the observations with a technique called “random projection”; such a projection is generated by multiplying the data vector with a token from a catalog of (labeled) matrices that are filled with uniform random numbers ([0..1]);
  • 3. create the “random projection” SOM based on these transformed records;
  • 4. after training, replace the random projection data with the real data, re-calculate the intensional profiles accordingly, and run a small sample of the data through the SOM for final tuning.

The technique of random projection was introduced in 1984. The principle works because of two facts:

  • (1) Quite amazingly, random vectors beyond a certain dimensionality (80..200, as said before) are nearly orthogonal to each other. The random projection thus compresses the original data without losing the bits of information that are distinctive, even if they are not accessible in an explicit manner any more.
  • (2) The only trait of the data that is considered by the SOM is their potential difference.
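A minimal sketch of such a projection with numpy; the division by √k is a common normalization convention, not a requirement of the method:

```python
import numpy as np

def random_projection(data, k=100, seed=42):
    """Compress records of shape (n, d) down to k positions by multiplying
    with a fixed random matrix. For k of roughly 100 and more, the random
    directions are nearly orthogonal, so the distinctive differences
    between records are approximately preserved."""
    n, d = data.shape
    rng = np.random.default_rng(seed)
    R = rng.uniform(0.0, 1.0, size=(d, k))    # the cataloged, reusable matrix
    return data @ R / np.sqrt(k)              # scaling keeps magnitudes comparable

# e.g.: compress 6000 raw variables to 100, train the SOM on the compressed
# records, then swap the real data back in and re-derive the profiles.
```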

Bags of SOMs

Finally, one can take advantage of splitting the data into several smaller samples. These samples require only smaller SOMs, which run much faster (we are faced with a power law). After training, the individual SOMs can be combined into a compound model.

This technique is known as bagging in Data Mining. Today it is also quite popular in the form of so-called random forests, where instead of one large decision tree many smaller ones are built and then combined. This technique is very promising, since it is a technique of nature. It is simply modularization on an abstract level, leading to the next level of abstraction in a seamless manner. It is also one of our favorite principles for the overall “epistemic machine”.
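A sketch of this bagging scheme, reusing the naive trainer from the sketch above; the number of maps and the sample fraction are illustrative:

```python
import numpy as np

def train_som_bag(data, n_maps=10, sample_frac=0.3, side=20, **kw):
    """Bagging for SOMs: train several small maps on bootstrap samples.
    Since runtime grows superlinearly with map size, many small maps are
    cheaper than one large one; the maps can afterwards be combined into
    a compound model, e.g. by voting over their respective BMUs."""
    rng = np.random.default_rng(0)
    n, maps = len(data), []
    for _ in range(n_maps):
        idx = rng.choice(n, size=int(n * sample_frac), replace=True)
        maps.append(train_som_naive(data[idx], side=side, **kw))
    return maps
```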

Notes

1. This would represent just around 10% of the neurons of our brain, if we interpret each node as a neuron. Yet, this comparison is misleading. The functionality of a node in a SOM rather represents a whole population of neurons, although there is no 1:1 principle transferable between them. Hence, such a system would be roughly of the size of a human brain, and, much more important, it would likely be organized in a comparable, albeit rather alien, manner.

2. Quite often, the vectors that are attached to the nodes are called weight vectors. This is a serious misnomer, as neither the nodes are weighted by this vector (alone), nor the variables that make up that vector (for more details see here). Conceptually it is much more sound to call those vectors “intensional profiles.” Actually, one could indeed imagine a weight vector that would control the contribution (“weight”) of variables to the similarity/distance between two such vectors. Such weight vectors could even be specific for each of the nodes.


۞

The Self-Organizing Map: SOMe Design Issues

February 4, 2012 § 1 Comment

It is the duality of persistent, quasi-material yet simulated structures and the highly dynamic, volatile and, most salient, informational aspects that is so characteristic for learning entities like Self-Organizing Maps (SOM) or Artificial Neural Networks (ANN). It should not be regarded as a surprise that the design of the manifold aspects of the persistent, quasi-material part of a SOM or ANN is quite influential and hence also important.

Here we explore some of the aspects of that design. Sure, there is something like a “classic” version of the SOM, named after its inventor, the so-called “Kohonen-SOM.” Kohonen developed several slightly different SOM mechanisms over many years, starting from statistical covariance matrices. All of them comprise great ideas, for sure. Yet, in a wider perspective it is clear that many properties of the SOM are presumably quite sub-optimal for realizing a generally applicable learning mechanism.

The Elements of SOMs

We shall recapitulate the principle of the SOM very briefly below; more detailed descriptions can be found in many places on the Web (one of the best for the newbie, with some formulas and a demo software: ai-junkie), see also our document here that relates some issues to references, as well as our intro in plain language.

Yet, the question beyond all the mathematical formula stuff is: “What are the elements of a SOM?”

We propose to distinguish the following four basic elements:

  • (1) a Collection of Items
    that have a memory for observations, or reflections of them, where all the items start with the same structure for these observations (items are often called “nodes”, or in a more romantic attitude, “neurons”);
  • (2) the Spatial Layout Principles
    and the relational arrangement of these items;
  • (3) an Influence Mechanism
    that links the items together, and which, together with the spatial layout, defines the topology of the piece;
  • (4) a Perceptional Mechanism
    that introduces observations into the SOM in a particular manner.

In the case of the SOM these elements are configured in a way that creates a particular class of “learning” that we can describe as competitive-collaborative abstraction.

Those basic elements of a SOM can be parameterized (and thus also implemented) in very different ways. If we took only the headlines of that list, we could subsume artificial neural networks (ANN) under these elements as well. Yet, even the items of a SOM and those of an ANN are drastically different. Likewise, the meanings of concepts like “layout” or “influence mechanism” are very different. This results in a completely different architecture regarding the relation of the “data”, or, if you like, potential observations, and the structure (SOM or ANN). Basically, ANNs are analytic, which means that the abstraction has to be done before the interaction of the structure with the data. In strong contrast to this approach, SOMs build up an abstraction while interacting with the data. This abstraction mostly consists in the transition from extensional data to an intensional representation. Thus SOMs are able to find a structure, while ANNs can only move within an a priori defined structure. In contrast to ANNs, SOMs are associative mechanisms (which is the reason why we are so fond of them).

Yet, it is also true for SOMs that the parametrization of the instances of the four elements as listed above has a significant influence on the capabilities and the potential of the resulting associative structure. Note that the design of the internals of the SOM does not refer to the issues of the usage or the embedding of the SOM into a wider context of modeling, or the structure of modeling itself.

In the following we will discuss the usual actualizations of those four elements, the respective drawbacks and better alternatives.

The SOM itself

Often one can find schematic representations like the one shown in the following figure 1:

Then this is usually described in this way: “The network is created from a 2D lattice of ‘nodes’, each of which is fully connected to the input layer.”

Albeit this is a possible description, it is a highly misleading one, with some quite unfavorable consequences: as we will see, it hides some important opportunities offered by the SOM mechanism.

Instead of speaking in an opaque manner about the “input layer” we can simply use the concept of “structured observations”. The structure is just given by the features used to establish or describe the observations. The important step that simplifies everything is to give all the nodes the same structure as the observations, at least in the beginning and as the trivial case; we will see that both assumptions may “develop away” as an effect of self-organization.

Anyway, the complicated connectivity in figure 1 changes into the following structure for the simple case:

Figure 2: An interpretation of the SOM grid, where the nodes are stuffed with the same structure (ordered set of variables) as the observations. This interpretation allows for a localizing of structures that is not achievable by the standard interpretation as shown in Fig.1.

To see what we gain by this change we have to visit briefly and partially the SOM mechanism.

The SOM mechanism compares a particular “incoming” observation to “all” nodes and determines the best matching node. The intensional part of this node then gets changed as a function of the given weight vector and the new observation; some kind of intermediate between the observational vector and the intensional vector of the node is established. As a consequence, the nodes develop different intensional descriptions. The change upon matching with an observation is then spread into the vicinity of the selected node, decaying with distance, while this distance additionally shrinks with increasing duration of the learning process. This is called the lateral control mechanism (LCM) by Kohonen (see Kohonen’s book, 2001, p.179). This LCM is one of the most striking differences to so-called artificial neural networks (ANN).

It is now rather straightforward to think that the node keeps the index of the matching observation in its local memory. Over the course of learning, a node collects many records, which are all similar. This gathering of observations into an explicit collection is one of the MOST salient differences of our interpretation of the SOM to most of the standard interpretations! 

Figure 3: As Fig.2, showing the extensional container of one of the nodes.

The consequences are highly significant: the SOM is no longer merely a tool for visualization, it is a mechanism with inherent and nevertheless transparent abstraction! To be explicit: while we retain the full power of the SOM mechanism, we not only get an explicit clustering, but even the opportunity for a fully validated modeling, including a full description of the structure of the risk of mis-classification. Hence there is no “black box” any more (in contrast, say, to ANN, or even statistical methods).

Now we can see what we gained from changing the description and dropping the unholy concept of the “input layer.” It now becomes clearly visible that nodes can be conceived of as containers, comprised of an extensional and an intensional part (as Carnap used the terms). The intensional part is what usually is called the weight vector of a node. The extensional part is the list of observations matching this intension.

The intensional part of a node thus represents a type. The extensional part of our revised SOM node represents the matching tokens.

But wait! As is usually done, we called the intensional part of the node the “weight vector”. Yet, this is a drastic misnomer. It does not contain “weights” of the variables. It is simply a value that can be calculated in different ways, and which is influenced from different sides. It is a function of

  • – the underlying extensional part, i.e. the list of records;
  • – the similarity functional that is used for this node;
  • – the general network dynamics;
  • – any kind of dynamic rule relating to the new observation.

It is thus much more adequate to talk about an “intensionality profile” than about weights. Of course, we can additionally introduce real “weights” for each of the positions in such a profile vector.
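To make the distinction concrete, here is a hypothetical sketch of such a node-as-container; the local update rule and the derived variance position are just examples of exchangeable functionals:

```python
import numpy as np

class Node:
    """A node as a container: an extensional part (the indices of the
    records matched here) and an intensional profile that is a function
    of that list, not a set of 'weights'."""
    def __init__(self, dim):
        self.extension = []              # indices of matching observations
        self.profile = np.zeros(dim)     # intensional profile, derived

    def absorb(self, idx, record, lr=0.1):
        """One possible local rule; the functional is exchangeable per node."""
        self.extension.append(idx)
        self.profile += lr * (record - self.profile)

    def variance(self, data):
        """A derived property of the extensional container, which could be
        appended to the profile as a new, self-generated position."""
        rows = data[self.extension]
        return rows.var(axis=0) if len(rows) else np.zeros(data.shape[1])
```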

A second important advance of dropping this bad concept of the “input layer” is that we can localize the function that results in the actualization of the intensional part of the node. For instance, we can localize the similarity function. As part of the similarity function we could even consider implementing a dynamic rule (dependent on the extensional content of the node) that excludes certain positions (= variables) as arguments from the determination of the similarity!

The third important consequence is that we created a completely new compartment, the “extensional container” of a node. Using the concept of “input layer” this compartment is simply not visible. Thus, the concept of the input layer violates central insights from the theory of epistemic action.

This “extensional container” is not just a list of records. We can conceive of it as a “functional” compartment that allows for a great deal of new flexibility and dynamics. This inner dynamics could be used to create new elements of the intensional part of the node, e.g. about the variance of the tokens contained in the “extensionality container”, or about their relations as measured by correlation. In fact, we could use any mechanism to create new positions in the intensional profile of the node, even the properties of an embedded SOM, a small population of artificial neurons, the result parameters of statistical functions taking the list of observations as input, and so on.

It is quite important to understand that the particular dynamics in the extensionality container is purely local. Notably the possibility for this dynamics also makes it possible to implement local differentiation of the SOM network, just as it is induced by the observations itself.

There is even a fourth implication of dropping the concept of the input layer, which led us to the separation between intensional and extensional aspects. This implication concerns the numerical production of the intensionality profile. Obviously, we can regard the transition from the extensional description to the intensional representation as an abstraction. This abstraction, as any, is accompanied by a loss of information. Referring to the collection of intensional representations means to use them as a model. It is now very important to recognize that there is no explicit down-stream connection to the observations any more. All we have at our disposal are intensional representations that emerged as a consequence of the interaction of three components: (1) the observations, (2) the quasi-material aspects of the modeling procedure (particularly the associative part of it, of course), and (3) the imposed target/risk settings.

As a consequence we have to care explicitly about the variance structure within the extensional containers. More precisely, the internal variance of the extensional containers has to be “comparable.” If we did not care about that, we could not consider the intensional representations as comparable either. We simply would compare apples with oranges, since some of the intensional representations would represent “a large mess”. On the level of the intensionality profile one can’t see the variance anymore; hence we have to avoid the establishment of extensional groups (“micro-clusters”) that collect observations which are not “similar” with regard to their descriptional value vectors (inside the a priori given space of assignates). Astonishingly, this requirement of a homogenized extensional variance measure is overlooked even by Kohonen and his group, not to mention the implementations by countless epigonal fellows. It is clear that only the explicit distinction between the intensional and the extensional part of a model makes this important structural element visible.

Finally, and as a fifth consequence, we would like to emphasize that the explicit distinction between intensional and extensional parts opens the road towards a highly interesting region. We already mentioned that the transition from extensional description to intensional representation is a kind of abstraction. Yet, it is a simple kind of abstraction, closely tied to quasi-material aspects of the associative mechanism.

We may, however, easily derive the production of idealistic representations from that, if not to say “ideas” in the philosophical sense. To achieve that we just have to extend the SOM with a production facility, the capability to simulate. This is of course not a difficult task. We will describe the details elsewhere (an essay is scheduled), thus just a brief outline here. The “trick” is to use the intensional representations as seeds for generating surrogate observations by means of a Monte-Carlo simulation, such that the variance of the surrogates is a bit smaller than that of the empiric observations. Both the empiric and the surrogated “data” (nothing is “given” in the latter case) share the same space of assignates. The variance threshold can be derived dynamically from the SOM itself; it need not be predetermined at implementation time. As the next step one drops the extensional containers of the SOM and feeds the simulated data into it. After several loops of such self-referential modeling the intensional descriptions have “lost” their close ties to the empirical data, yet they are not completely unrelated. We still may use them as a kind of “template” in modeling, or for instance as a kind of null-model. In other words, the SOM contains the first traces of Platonic ideas.
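A sketch of a single loop of this production facility, under the assumption that the spread factor and the per-node sample count are free parameters derived elsewhere (e.g. from the SOM itself):

```python
import numpy as np

def surrogate_round(profiles, spread=0.8, n_per_node=50, seed=1):
    """Use the intensional profiles (k, dim) as seeds of a Monte-Carlo
    simulation: each surrogate 'observation' is a seed plus noise whose
    spread is a bit smaller than the empiric variability (spread < 1).
    Feeding the surrogates back into a fresh SOM, round after round,
    gradually detaches the profiles from the empiric data."""
    rng = np.random.default_rng(seed)
    k, dim = profiles.shape
    sigma = profiles.std(axis=0) * spread          # per-variable scale
    seeds = profiles[rng.integers(0, k, size=k * n_per_node)]
    return seeds + rng.normal(0.0, sigma, size=(k * n_per_node, dim))
```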

Modeling. What else?

Above we emphasized that the SOM provides the opportunity for a fully validated modeling if we distinguish explicitly between the intensional and extensional parts in the make-up of the nodes. The SOM is, however, a strange thing that can act in completely different ways.

In the chapter about modeling we concluded that a model without a purpose is not a model, or at most a strongly deficient one. Nevertheless, many people claim to create models without imposing a purpose on the learning SOM. They call it “unsupervised clustering”. This is, of course, nonsense. It should be called, more appropriately, “clustering with a deliberately hidden purpose,” since all the parameters of the SOM mechanisms, and even the implementation, act as constraints for the clustering, too. Any clustering mechanism applies a lot of criteria that influence the results. These constraints are supervised by the software, and the software has been produced by a human being (often called a programmer), so this human being is supervising the clustering with a long arm. For the same reason one cannot say that the SOM is learning something, and also not that we would train the SOM, without giving it a purpose.

Though the digesting of information by a SOM without a purpose being present is neither modeling nor learning, as what, then, can we conceive of such a process?

The answer is pretty simple, and, remember, it becomes visible only after having dropped the illegitimate ascription of mistaken concepts. This clustering has a particular epistemological role:

Self-organizing Maps that are running without purpose (i.e. target variables) are best described as associative storage devices. Nothing more, but above all, also nothing less.

Actually, this has to be rated as one of the greatest currently unrecognized opportunities in the field of machine learning. The reason is again inadequate wording. Of course, the input for such a map should be probabilized (randomized), and it has already been demonstrated how to accomplish this… guess by whom… by Teuvo Kohonen himself, while he was inventing the so-called WebSom. Kohonen proposed random neighborhoods for presenting snippets of texts to the SOM, which are a simple version of random contexts.

Importantly, once one recognizes the categorical difference between target oriented modeling and associative storage, it becomes immediately clear that there are strictly different methodological, hence quasi-morphological requirements. Astonishingly, even Kohonen himself, and any of his fellows as well, did not recognize the conceptual difference between the two flavors. He used SOMs created without a target variable, i.e. without imposing a purpose, as models for performing selections. Note that the principal mechanism of the SOM is the same for both approaches. There are just differences in the cost function(s) regarding the selection of variables.

There should be no doubt that any system intended to advance towards an autonomous machine-based episteme has to combine the two mechanisms. There are still other mechanisms, such as virtual movements, or virtual sequences in the abstract SOM space (we will describe that elsewhere), or the self-referential SOM for developing “crisp ideas”, but the combination of associative storage and target oriented modeling is definitely inevitable (in our perspective… but we have strong arguments!).

SOM and Self-Organization

A small remark should be made here: self-organizing maps are not self-organizing in the same strong sense as, for instance, Turing systems, or other Reaction-Diffusion Systems (RDS). A SOM gets organized by the interaction of its mechanisms and structures with the data. A SOM does not create patterns by it-SELF. Without feeding data into it, nothing happens, in stark contrast to self-organizing systems in the strong sense (see the example we already cited here), or take a look here, from where we reproduced the parameter map for Gray-Scott models shown in figure 4.

Figure 4: The parameter map for Gray-Scott models, a particular Reaction-Diffusion System. Only for certain combinations of the two parameters of the system do interesting patterns appear, and only for a part of them does the system remain dynamic, i.e. continuously changing the layout of the patterns.

As we discuss it in the chapter on complexity, it is pretty clear which kind of conditions must be at work to create the phenomenon of self-organization. None of them is present in Self-Organizing Maps; above all, SOMs are neither dissipative, nor are there antagonist influences.

Yet, it is not too difficult to create a self-organizing map that is really self-organizing. What is needed is either a second underlying process or inhibitory elements organized as a population. In natural brains we find both kinds of processes. The key for choosing the right starting point for implementing a system that shows the transition from SOM to RDS is the complete probabilization of the idea of the network.

Our feeling is that at least one of them is mandatory in order to allow the system to develop logic as a category in an autonomous manner, i.e. not pre-programmed. As with any other understanding, the ability to think in logical terms, or to use logic as a category, should not be programmed into a computer; that ability should emerge from the implemented conditions. Our claim that some concept is quite the opposite of another one is quite likely based on such processes. It is highly indicative in this context that the brain indeed shows Turing patterns on the level of activity patterns, i.e. patterns that are not made of material entities but are completely immaterial. Moreover, like chemical clocks such as the Belousov-Zhabotinsky system, another RDS, the natural brain shows a strong rhythmicity, both in its “local” activity patterns and in the overall activity, affecting billions of cells at a time.

So far, the strong self-organization is not implemented in our FluidSOM.

Spatial Layout Principles

The spatial layout principle is a very important design aspect. It concerns not only the actual geometrical arrangement of the nodes, but also their mobility as representations of physical entities. In the case of the SOM this has to be taken quite abstractly. The “physical entities” represented by the nodes are not neurons. The nodes represent functional roles of populations of neurons.

Usually, the SOM is defined as a collection of nodes that are arranged in a particular topology. This topology may be

  • – grid-like, 2-(3) dimensional;
  • – as kind of a swarm in 2 dimensions;
  • – as a gas, freely moving nodes.

The obvious difference between them is the degree of physical freedom for the nodes to move around. In grids, nodes are fixed and cannot move, while in the SOM gas the “nodes” are much more mobile.

There are also quite important, yet not so obvious commonalities between them. Firstly, in all of these layout principles the logical SOM nodes are identical with the “physical” items, i.e. representations of crossings in a grid, swarming entities, or gaseous containers. Thus, the data aspect of the nodes is not cleanly separated from their spatial behavior. If we separate them, the behavior of the nodes and the spatial aspects can be handled more transparently, i.e. the relevant parameters become better accessible.

Secondly, the space in which those nodes are embedded is conceived as being completely neutral, as if those nodes were arranged in deep space. Yet, everything we know about learning entities points to their mediality. In other words, the space that embeds the nodes should not be “empty”.

Using a Grid

In most of the cases the SOM is defined as a collection of nodes that are arranged as a regular grid (4(8)n, 6n). Think of it as a fixed network like a regular wire fence, or the atomic bonds in a model of a crystal.

This layout is by far the most abundant one, yet it is also the most restricted one. It is almost impossible, or at least very difficult, to make such a SOM dynamic, e.g. to provide it with the potential to grow or to differentiate.

The advantage of grids is that it is quite easy to calculate the geometrical distance between the nodes, which is a necessary step in determining the influence between any two nodes. If the nodes are mobile, this measurement requires much more effort in terms of implementation.

Using Metaphors for Mobility: Swarms, or Gases

Here, the nodes may range freely. Their movement is strongly influenced (or even restricted) by the moves of their neighbors. Experience tells us that flocks of birds, or fishes, or bacteria do not learn efficiently on the level of the swarm: structures are destroyed too easily. The same is true for the gas metaphor.

Flexible Phase in a Mediating Space

Our proposal is to render the “phase” flexible according to the requirements that are important in a particular stage of learning. The nodes may be strictly arranged as in a crystal, or quite mobile; they may move around according to physical forces or according to their informational properties, like the gathered data.

Ideally, the crystalline and the fluid phases depend on just two or three parameters. One example for this is the “repulsive field”, a collection of items in a 2D space which repel each other. If the kinetic energy of those items is not too large, and the range of the repellent force is not too small, this automatically leads to a hexagonal pattern. Yet, the pattern is not programmed as an a priori pattern. It is a result of the properties of the items (and the embedding space). Such, the emergent arrangement is never affected by something like a “layout defect.”

Inserting a new item or removing one is very easy in such a structure. More importantly, the overall characteristics of the system do not change, despite the fact that the actual pattern changes.
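The following sketch illustrates the principle; the force law, step size, and box geometry are illustrative choices, not part of the argument:

```python
import numpy as np

def relax_repulsive_field(n_items=200, steps=2000, box=10.0, seed=3):
    """Items in a 2D box repel each other with a short-range force; after
    damped relaxation the configuration approaches a hexagonal packing,
    although no pattern is programmed in anywhere."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, box, size=(n_items, 2))
    for _ in range(steps):
        diff = pos[:, None, :] - pos[None, :, :]                 # pairwise offsets
        dist = np.linalg.norm(diff, axis=-1) + np.eye(n_items)   # no /0 on diagonal
        force = diff / dist[..., None] ** 3                      # repulsion ~ 1/r^2
        pos += 0.01 * force.sum(axis=1)                          # damped step
        pos = np.clip(pos, 0, box)                               # stay inside the box
    return pos  # inserting/removing an item just means resizing and relaxing again
```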

The Collection of Items : “Nodes”

In the classic SOM, nodes serve a double purpose:

  • P1 – they serve as containers for references that point to records of data (=observations);
  • P2 – they present this extensional list in an integrated, “intensional” form.

The intensional form of the list is simply the weight vector of that node. In the course of learning, the list of the records contained in a particular node will be selected such that they are increasingly similar.

Note that keeping the references to the data records is extremely important. It is NOT part of most SOM implementations. If we did not keep them, we could not use the SOM as a modeling tool at all. This might be the reason why most people use the SOM just as a visualization tool for data (which is a dramatic misunderstanding).

The nodes are not “directly” linked. Whether they influence each other or not is dependent on the distance between them and the neighborhood function. The neighborhood function determines the neighborhood, and it is a salient property of the SOM mechanism that this function changes over time. Important for our understanding of machine-based epistemology is that the relations between nodes in a SOM are potentially of a probabilistic character.

However, if we use a fixed grid, a fixed distance function, and a deterministically behaving neighborhood function, the resulting relations are not probabilistic any more.

Moreover, in the case of the default SOM, the nodes are passive. They do not even perform the calculation of the weight vector themselves, which is done by a central “update” loop in most implementations. In other words, in a standard SOM a node is a mere data structure. Here we arrive at a main point in our critique of the SOM:

The common concept of a SOM is equivalent to a structural constant.

What we need, however, is something completely different. Even on the level of the nodes we need entities that can change their structure and their relationality.

The concept of FluidSOM must be based on active nodes.

These active nodes are semi-autonomous. They calculate the weight vector themselves, based either on new input data or on some other “chemical” influences. They may develop a few long-range outgoing fibers, or masses of more or less stable (but not “fixed”!) input relations to other nodes. The active meta-nodes in a fluid self-organizing map may develop a nested mini-SOM, or may incorporate any other mechanism for evaluating the data to which they point, e.g. a small neural network of a fixed structure (see mn-SOM). Meta-nodes also may branch out further SOM instances locally into a relative “3D”, e.g. dependent on their work load, or again, on some “chemical” influences.

We see that meta-nodes are dynamic structures, something like a category of categories. This flexibility is indispensable for growing and differentiation.

This introduces the seed of autonomy on the lowest possible level. Here, within the almost material processes, it is barely autonomy; it is really a mechanical activity. Yet, this activity is NOT triggered by some reason any more. It is just there, as a property of the matter itself.

We are convinced that the top-level behavioral autonomy is (at least for large parts) an emergent property that grows out of the a-reasonable activity on the micro-material level.

Data, Reflection

The profile vector of a SOM node usually contains, for all mutable variables (non-ID/TV), the average of the values in the extensional list. That is, the profile vector itself does not know anything about the target variable (TV) or the index variable… which is solely the business of the node.
In our case, however, and based on the principle of “strict locality,” the profile vector may also contain a further section that refers to dynamic properties of the node, or of the data. We introduced this in a different way above when discussing the extensionality container of SOM nodes. Consider, for instance, the deviation of the data in the node from a model function (such as a correlation): such internal measurements cannot be predefined, and they are also not stable input data, since they are constantly changing (due to the list of data in the node, the state of other nodes, etc.).

This introduces the possibility of self-referentiality on the lowest possible level. Similar to the case of autonomy, we find the seed for self-referentiality on the topmost-level (call it consciousness…) in midst the material layer.

Programming Style

If there is one lesson we can draw from the studies of naturally occurring brains, then it is the fact that there is no master code between neurons, no “Mentalese.” The brain does not work on the basis of its own language. Equivalently, there are no logical circuits implementing a logical calculus. As a correlate we can say that the brain is not a thing that consists of a definite wiring. A brain is not a finite state automaton; it does not make any sense to ascribe states to brains. Instead, everything going on in a brain is probabilistic, even on the sub-cellular level. It is not determined in a definite manner how many vesicles have to burst in a synaptic gap to cause a transmission of the signal, it is not determined how many neurons exactly make up a working group for a particular “function”, etc. The only thing we can say is that certain fibers connect certain “regions”, typically comprising millions of neurons, to other such regions.

Note that any software program IS representable by just such a definite wiring. Hence, what we need is a mechanism that can transcend its own being as mechanism. We already discussed this issue in another chapter, where we identified abstract growth as a possible route to that achievement.

The processing of information in the brain is probabilistic, despite the fact that on the top level it “feels” different for us. Now, when starting to program artificial associative structures that are able to do similar things as a brain can accomplish, we have to respect this principle of probabilization.

We not only have to avoid hard-coded wiring between procedures; we have to avoid any explicit wiring at all. In terms of software architecture this translates into the proposal that we should not rely on object-oriented programming (OOP) alone. For instance, we would represent nodes in a SOM as objects, and the properties of these objects again as other objects. OOP is an important, but certainly not a sufficient design element for a machine that shall develop its own episteme.

What we have to actualize in our implementation is not just OOP, but a messaging-based architecture, where all elements are only loosely coupled. The Lateral Control Mechanism (LCM) of the Kohonen SOM is a nice example for this; the explicit wiring in ANN is the perfect counter-example, a DON’T DO IT. Yet, as we will see in the next section, the LCM should not be considered a symmetric and structurally constant functional entity!

Concerning programming style, on an even lower level this translates into the heavy use of so-called interfaces, as they are so prevalent in Java. Not objects are wired or passed around, but only interfaces. Interfaces are forward contracts about the standards for the interaction of different parts, which actually can change while the “program” is running.
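Transposed into Python, the language of our other sketches, the same idea can be expressed through structural typing; all names here are hypothetical:

```python
from typing import Protocol

class Excitable(Protocol):
    """A forward contract: anything that can receive an excitation. Nodes
    never hold references to concrete classes, only to this contract."""
    def excite(self, intensity: float) -> None: ...

class Node:
    def __init__(self) -> None:
        self.links: list = []            # loosely coupled; may change at runtime

    def connect(self, target: Excitable) -> None:
        self.links.append(target)

    def fire(self, intensity: float) -> None:
        for target in self.links:        # message passing, no hard wiring
            target.excite(intensity * 0.5)
```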

Of course, these considerations apply only to the lowest, indeed material levels of an associative system; yet, they are necessary. If we start with wires of any kind, we won’t achieve our goals. From the philosophical perspective it does not come as a surprise that the immanence of autonomous abstraction is to be found only in open processes, which include the dimension of mediality. Even in the interaction of its tiniest parts the system should not rely on definite encodings.

Functional Differentiation

During their development, natural systems differentiate in their parts. Bodies are comprised of organs, organs are made of different cell types, and within all members of a cell type a further differentiation of their actual and context-specific role may occur. The same can be observed in social insects, or any other group of social beings. They are morphologically almost identical, yet their experience lets them do their tasks differently, or even lets them do different tasks. Why then should we assume that all neurons in a large compound should act absolutely equally?

To illustrate the point we should visit a particular African termite species (Schedorhinotermes lamanianus) on which I worked as a young biologist. They feed on rotten and rotting wood. Since these pieces of wood are much larger than the termites, a problem occurs: the animals have to organize their collective foraging, i.e. where to stay and gnaw at the wood, and where to travel in order to return the harvested pieces back to the home nest, where they put them into a processing chamber stuffed with a special kind of fungus. The termites then actually feed on that fungus, and mostly not on the wood (though they also have bacteria in their gut to do the job of digesting the cellulose and the lignin).

Important for us is the foraging process. To organize gnawing sites and traveling routes they use pheromones, and, no wonder, they use just two for that, which build up a Turing system, as I proved with a small bio-assay together with a colleague.

In the nervous systems of animals we find a similar problematics. The brain is not just a large network, over and over symmetric like a crystal. Of course not. There are compartments (see our chapter about complexity), there are fibers. The various parts of the brain even differ strongly with respect to their topological structure, their “wiring”. Why the heck should an artificial system look like a perfect crystal? In a crystal there will be no stable emergence, hence no structural learning. By the way, we should not expect structural learning in swarms either, for a very similar reason, albeit that reason instantiates in the opposite manner: complete perturbation prevents the emergence of compartments, too, hence no structural learning will be observed. (That’s the reason why we do not have swarms in the skull…)

Back to our neurons. We reject the approach of a direct representational simulation of neurons, or of parts of the brain. Instead we propose to focus on the principles as elements of construction. Any system that is intended to show structural learning is in urgent need of the basic differentiation into “local” and “tele” (among others). Here we even meet a structural parallelism to large urban compounds.

We can implement the emergence of such fibers in a straightforward manner if we make it dependent on the occurrence of repeated co-excitation of regions. This implies that we have to soften the SOM principle of the “winner-takes-it-all” approach. At least in large networks, any given observation should possibly leave its trace in different regions. Yet, our experience with very large maps indicates that this may happen almost inevitably. We used very simple observations consisting of only 3 features (r, g, and b, forming the RGB color triplet) and a large SOM, consisting of around 1’000’000 nodes. The topology was 4n, and the map was placed on a torus (no borders). After approx. 200’000 observations, the uniqueness of the color concepts started to erode. For some colors, two conceptual regions appeared.

In the further development of such SOMs it is then quite natural to let fibers grow between such regions, changing the topology of the SOM from that of a crystal to that of a brain. While the former is almost perfectly isotropic in exactly 3 dimensions, the topology of the brain is (due to the functional differentiation into tele-fibers) highly anisotropic, in a high and variable dimensionality.

Conclusion

Here we discussed some basic design issues about self-organizing maps and introduced some improvements. We have seen that wording matters when it comes to representing even a mechanism. The issues we touched have been:

  • – explicit distinction of intensionality and extensionality in the conceptualization of the SOM mechanism, leading to a whole “new” domain of SOM architectures;
  • – producing idealistic representations from a collection of extensional descriptions;
  • – dynamics in the extensionality domain, including embedding of other structures, thus proceeding to the principle of compartmentalization, functional differentiation and morphological growth;
  • – the distinction between modeling and associative storage, which require different morphological structures once they are distinguished;
  • – stuffing the SOM with self-organization in the strong sense;
  • – spatial layout, fixed grid versus the emergent patterns in a repulsion field of freely moving particles; distinguishing material particles from functional abstract nodes;
  • – nodes as active components of the grid;
  • – self-referentiality on the microscopic level that gives rise to emergent self-referentiality on the macroscopic level;
  • – programming style, which should not only be as abstract (and thus as general) as possible, but also has to proceed from strictly defined, strongly coupled object-oriented style to loosely coupled system based on messaging, even on the lowest levels of implementation, e.g. the interaction of nodes;
  • – functional differentiation of nodes, leading to dynamic, fractional dimensionality and topological anisotropy;

Yet, there are still many more aspects that have to be considered if one tries to approach processes on a machinic substrate that could give rise to what we call “thinking.” In discussing the design issues listed above, we remained quite on the material level. But of course, morphology is important. Nevertheless we should not conceive of morphology as a perfect instance of a blueprint; it is more about the potential, if not to say the “virtuality”, that is implied as immanence by the morphology. Beyond that morphology, we have to design the processes of dynamic change of that morphology, which we usually call growth, or tissue differentiation. Even on top of that, we have to think about the informational, i.e. immaterial processes that only eventually lead to morphological correlates.

Anyway, when thinking about machine-based episteme, we obviously have to forget about crystals and swarms, about perfectness and symmetry in morphological structures. Instead, all of the issues, whether material or immaterial, should be designed with the perspective towards an immanence of virtuality in mind, based on probabilized mechanisms.

In a further chapter (scheduled) we will try to approach in more detail two other design issues regarding the implementation of an advanced Self-organizing Map that we already mentioned briefly here, again oriented at basic abstract elements and the principles found in natural brains: inhibitory processes and probabilistic negation on the one hand, and the chemical milieu on the other. Above we already indicated that we expect a continuum between Self-organizing Maps and Reaction-Diffusion Systems, which in our perspective is highly significant for the working of brains, whether natural or artificial ones.

۞
