Songs of Birth

September 26, 2012

Embryos do not sing.

For embryos do not live in a probabilistic world; there is no need for the negotiation of codes or for playing with them, neither the codes nor the negotiations. We may not even ask what the world could look like for an embryo, because there is no world. The vast majority of all relations of an embryo are purely internal. Obviously, the embryo exhausts all its possibilities of becoming even when submersed in a tank. Embryos are professional solipsists. They are their own environment.

Embryos are able to absorb violent transformations that are dictated by their plan. The condition of the embryonic transforms the plan into morphogenetic processes: foldings, transfers, inversions, and above all, the melting of tissue. The fingers of the hand of vertebrates, including humans, do not simply grow out like a branch on a plant. Fingers exist because tissue is removed by melting and “recycling” it. Between the DNA and the body there is the embryo. It constitutes even a different kind of corporeality.

As always with the particular factuality of biological systems, we have to take it along the road into abstractness if we would like to learn from it. This road is, of course, not prebuilt. It never is, to be more precise. If we build it properly, we will find neat junctions into architecture, urbanism, and the theory of machine-based episteme as well. Of course, we are not the first to delve into this subject matter. Think of Simondon and his individuation, or again of Deleuze, to whom we owe so much also for this essay, which is actually about the principle of the embryonic with regard to Singapore, Rem Koolhaas, and his writing “Singapore Songlines”.

Anyway, these transformations that embryos undergo, this violence, is a direct consequence of the simultaneity of the presence of a plan and the absence of play. We may even turn this relation around: wherever we find a plan and processes that implement its actualization, we may describe the respective context as an embryological context. Wherever we find violence (of one kind or the other) and its tolerance (of one kind or the other), we may describe the respective context as an embryological context. Yet, we must be careful regarding our valuation. From the perspective of the embryo, even the most brutal processes of folding, melting, secondary morphogenesis and renewal are perhaps not brutal at all.

Here we also find cybernetics as a symptom of a society’s infancy, if not of its still being an embryo. It is not by mere chance that Michel Serres came up with the idea of Hominescence only in the late 1990s (published 2001), well after the retreat of cybernetics. Any cybernetic structure is the materialized plan; it is a priori closed and anti-probabilistic, and any structural extension would result in its collapse. Cybernetic structures—which are actually quite rare in natural systems—may even be regarded as proto-embryonic, as they can’t result in morphogenesis. Cybernetics works only for perfect solipsists like embryos, or, in a slightly different perspective, for perfectly constrained sub-systems.

Embryos may be conceived as instances of a principle or a concept that actualizes the possibility of material differentiation and growth. Embryos develop. Etymologically, “to develop” relates to unwrapping (like the German “entwickeln”, Swedish “utveckla”, or Portuguese “desenvolver”), the particle “en” having melted away from “des-en-velop”. The something that needs to be there, as the entity is going to be unwrapped, is the plan.

From here we can develop this concept of the embryo in a straightforward manner. It is a construction by inversion. Inversion here means to select one of the key “properties” or elements of the concept and to invert it, which of course generates something very different compared to its ancestor. Remarkably enough, “construction by inversion” also goes far beyond negation and dialectics; it is a deeply positive move.

Well, in our context, there are two main routes for doing that. Either we drop its inherent solipsism, confronting the becoming with the probabilistic, open world; if we still focus on the more material aspects, we arrive at the concept of evolution. The second route of inversion drops the focus on the material. Usually, we call differentiation and growth in the domain of the immaterial1 “learning”. Hence, it would not be reasonable at all to say that embryos learn, or that they evolve. Concerning the general concept of differentiation, we have now found a trinity of only slightly overlapping language games, comprising development, evolution and learning, or embryos, populations and brains, or plans, probabilization and mediatization. Admittedly, minds create secondary, immaterial or virtual embryonic morphogenesis as well as probabilized and highly volatile populations. The immanence of thought is located between populations of informational germ layers of interpretation, where the respective morphology settles in the realm of the symbolic. In more philosophical terms, we could express this trinity as form, process and virtualisation, or, even more abstractly, as the particular, the species and the general. By means of all these parallel perspectives it should be clear by now that this trinity establishes a fundamental space (which is an aspectional space) for the language game of change.

In any real system, these three aspects of differentiation overlap, of course. For there is, for instance, no clear separation between the material and the immaterial (see footnote 1); there is also no perfect solipsism that could claim there are no relations to some kind of “outside”. And everybody knows that plans are subject to failure precisely due to probabilistic influences from this outside as well as from the processes implementing them.

In biology, these three aspects are handled by, or even just applied as, three perspectives for asking about the underlying mechanisms. During the last two decades or so, biologists started to drop the idealistic distinction between the individual and the species by referring to the respective problematic field as evo-devo. Both evolutionary and embryonic differentiation are characterized by constraints and potentials that are inherently implied by the process “it-self”.

So what about other domains, such as the Urban2, or machine-based episteme? Urban environments are full of change, are representatives of change par excellence, and so is the condition of the Urban. In many cities, even rather small ones, we find urban planning agencies or urban development offices. In Singapore, however, which constitutes our target in this piece, we find an Urban Redevelopment Authority—note the “re-“ here! Yet, so far we won’t find any Urban Evolution Department… How to speak about change processes without invoking ideology, and, most significantly, beyond the particularity of a given “case”? Could the concept of the abstract embryo be helpful for that? And if so, how?

Restricting questions about change to the level of the embryonic seems to be tempting. Yet, design efforts directed at the Urban can hardly be limited to this first level. Doing so instead causes strange conditions such as extreme forms of neoteny, or even embryoteny. From the perspective of an embryotenic entity, birth is conceived as a threat. Above we have seen that the embryonic level is closely related to the particular; restricting design activity regarding the Urban to the first level thus means getting trapped by a representational fallacy. Any prolonged development activity does not only deny birth and the possibility of learning as a mechanism of smooth adaptation, it necessarily results in “re”-development and the violence of the embryonic.

Of course, in such domains outside of biological structures we find neither “eggs” nor “placentas”, not even metaphorically. We should avoid calling a city an “organism”, or “super-organism”. Yet, asking about the instantiation and orchestration of change processes in cities (regarding the Urban) or machines (regarding understanding and consciousness), we certainly can apply a sufficiently generalized concept of differentiation, such as we put it above as the trinity of plans (embryos), probabilization (populations) and mediatization (brains). This trinity establishes an aspectional space of differentiability and its expressibility. This space also comprises the Deleuzean concept of the differential (as a structure) as well as Simondonian individuation (as a process)3, both in their full complexity.

The obvious question regarding any designed process of change thus concerns the mechanisms and the implied changes of quality when moving around in this space of differentiability. Practically, in actual cases we have to choose a particular differential weighting regarding the trinity of development, evolution and learning.

Not all moves are possible in that space, and not all possible moves are smooth and painless. We also should not expect that those somehow disruptive transitions in this space, such as birth, take place only once, or as a unique event. Perhaps we should not conceive of the moves and movements in this space as transitions, since the relation

Embryos are still not born.

This Essay

This essay is the third in a row about Rem Koolhaas’ trilogy4 comprising the three texts titled “The Generic City”, “Junkspace” and “Singapore Songlines”. The former two are much more abstract than the third, which actually seems to strive for some kind of understanding of the Singaporean condition. It is thus the most extensive in the trilogy, bringing in a wealth of details.

The resulting trilogy of our own, established by “The Generic City – a Précis”, “Junkspace, extracted”, and this essay, accompanies our investigation into the possibility and the form of a “Theoretical Architecture” as well as a “Theory of Architecture” and the role of “Theory in Architecture”. These theoretical moves are explored under the umbrella of a philosophically guided approach that we call Urban Reason.

Here, our main subject of interest is Singapore and its particular quality. As such, this essay also turns into a critique of Koolhaas’ investigation. Our main proposal about Singapore is that it is best conceived as an Urban Embryo, where the notion of the embryo, as well as that of the Urban, is a rather abstract one, of course. Yet, everything in Singapore starts making sense only against this background.


Dreaming Koolhaas

Songlines refer to cultural heritage. They convey something about the Life Form of the culture’s past as well as its present, yet not about those things that could be clearly spoken about. It’s a kind of myth, though it is less and at the same time more than a myth. Actually, even just referring to them establishes important constraints on any further individual and collective action. Songlines are like collective daydreams, often expressed in non-textual music or images or actions. Writing about someone else’s Songlines is thus a delicate issue, since one is going to confront the speakable with those issues about which we can’t speak, nor which we could show or demonstrate.

Previously, we called Rem Koolhaas a story-teller. More recently, his works have shown a tendency towards cross-mediality and genericity. Thus it seems as if he had remembered his roots as a journalist and experimental moviemaker. In an interview from 1999 with a German newspaper, he described himself—at that time—primarily as a writer and author, yet not as an architect or even an urbanist.

Story-telling, consolidated into a more or less secular and profane form first by Boccaccio through his Decamerone5, is a form and a mirror of human reason. Reason exceeds rationality by far, as for instance ethics can’t be fully determined by rationality, and the necessary contradictions inherent to complex, living entities and their social organization defy rationality as well.

At this point it is quite interesting to see that Koolhaas, in his earlier, still more modernist “configuration,” relates evolution and stories almost by definition. In Singapore Songlines [1]—which is from 1995—he writes:

Singapore is a city without qualities (maybe that is an ultimate form of deconstruction, and even of freedom). But its evolution—its songline—continues: from enlightened postwar UN triumvirate, first manifestation of belated CIAM apotheosis, overheated metabolist metropolis, now dominated by a kind of Confucian postmodernism in which the brutal early housing slabs are rehabilitated with symmetrical ornament. (p.1077/78)

Singapore Songlines appeared in S,M,L,XL [2], a remarkable cross-over view of his oeuvre up to the mid-nineties. There he also demonstrates how AMO/OMA approached regional cultures, i.e. a particular city, by empirical studies for the purpose of learning about the city (e.g. the study about Lagos, Nigeria). In the case of Singapore, Koolhaas added a detailed investigation addressing the particular history of Singapore, resulting in an almost hermeneutical attitude.

The passage quoted above is remarkable for at least two reasons. First, he ascribes to the historical course of Singapore a hidden tendency and consistency, which nevertheless consists for large parts of unintended effects (and affects), despite the particular culture of planning that prevails in Singapore.

Secondly, he equates evolution with a songline, i.e. a mythical sequential order of undefined dimensionality, but surely not of a single dimension. Thus, he conceives of the evolution of a local cultural arrangement as a kind of generic story, but he also conceives of the songlines as evolution. The former brings in inceptions, bursting fountains of dreamt cross-media from buried experiences; the latter invokes the elements of probability, constraints, symbiosis, extinction and inheritance. The former is purely immaterial; the latter constantly crosses the border between the material dimension of the differentiating body and the probabilistic, informational dynamics of populations.

Equating both, the songlines and evolution, is provocative in its own right, especially if it is performed in such a parenthetic manner. Probably, it is used by Koolhaas to indicate a particular constitution regarding the “resistance of the existential”6. By qualifying Singapore as a “city without qualities” he is actually pointing in the same direction. In particular, Robert Musil described the advent of the age of probabilized conditions—the times around 1910 in the agonizing Austrian monarchy—in his novel “The Man without Qualities” [5]. Far from being without qualities—his main figure Ulrich is called a man without qualities, as an offending act, by another figure inside the novel—Ulrich is described as a person who explicitly tries to develop a manifoldness based on his individuality, albeit as an individual detached from traditions and immersed into the upcoming “mass society”, that is, a population where everything gets probabilized. In his novel, Musil unfolded a broad view of the problematics of societal transformation, particularly the relation between the individual and the fundamentally changing society at large.

Without doubt, these references, introduced by Koolhaas en passant, all apply to contemporary Singapore as well as to its history, the subject of Koolhaas’ 80-page essay. Yet, his piece also constitutes a particular point of departure for Koolhaas’ own Songlines, which would eventually be completed through “The Generic City” (also contained in S,M,L,XL) and particularly through “Junkspace,” besides his architectural works such as the Dutch embassy or the Casa da Música.

In 1995, some seven years before his piece “Junkspace,” Koolhaas was still defending modernism, although he also felt uneasy about it. In his short critical piece “What Ever Happened to Urbanism?”, which also appeared in S,M,L,XL, he noted:

Modernism’s alchemistic promise—to transform quantity into quality through abstraction and repetition—has been a failure, a hoax: magic that didn’t work.

His defense in Singapore Songlines, though an implicit one, proceeds by emphasizing that modernism’s goals should not be separated from its way of operation, namely a mechanistic and rationalist program. This, of course, means that he proposes to leave precisely these mechanisms intact:

In Singapore—modernization in its pure form—the forces of modernity are enlisted against the demands of modernism. Singapore’s modernism is lobotomized: from modernism’s full agenda, it has adopted only the mechanistic, rationalistic program and developed it to an unprecedented perfection in a climate of streamlined “smoothness” generated by shedding modernism’s artistic, irrational, uncontrollable, subversive ambitions—revolution without agony.  (p.1041)

Koolhaas’ argument here is almost a romanticist one. First, modernism is no exception to the general condition that the goals of a movement are often shaped by a mixture of historical facts and metaphysical beliefs. Obviously, Koolhaas suggests that it is indeed possible to separate the goals from the operational setup. Thus, he fails to recognize the core of modernism itself, namely the way that the metaphysical beliefs characterizing modernism—above all “independence”—lead to its particular arrangement of operations.

The point now is that a similarity in operations is by far not sufficient to conclude a similarity regarding metaphysical beliefs. Yet, what are the metaphysical beliefs of Singaporeans? And how could a member of a Western society relate to them? For Koolhaas, the latter issue is clear: don’t forget to confirm your return flight (p.1087). This clarity does not hold for the former part; as Koolhaas was not aware of his own metaphysical setup, he could barely become aware of that of the Singaporeans. No wonder he experiences the whole subject as a troubling one:

[…] the most disconcerting question is: Where are these urgencies buried? (p.1017)

The answer would have been, of course: in his own metaphysical beliefs. At that time, in the mid-1990s, Koolhaas had apparently been puzzled about what he experienced in Singapore. He was neither able to think appropriately about differentiation, nor, as a consequence, could he find the distance that would have been necessary for an appropriate comparison. I think that at least some important conclusions about Singapore are misstated. In turn, Koolhaas fails to construct a launching site for a general theory of urban development. The first thing such a theory would need is appropriate conceptual work: elements that could serve as building blocks as well as a basis for speaking about changing urban structures or processes.

Koolhaas describes his strategy for approaching the particular constitution of Singapore by reference to biological systems:

I have tried to decipher its reverse alchemy, understand its genealogy, do an architectural genome project, re-create its architectural songlines. (p.1017)

As we already noted, invoking the image of the “songlines” serves Koolhaas as a metaphorical placeholder for evolution and its historical fabric, its abstract tendencies, contingencies and non-linearities. As in dreams, it is impossible to forecast the results of the actualization of evolution; yet, beyond the contingency there is also a certain consistency in both cases. Thus, Koolhaas sets up another tuple reminiscent of the major domains of living systems: the combinatorics of molecules (chemistry), the basic encoding (plan, genome), its becoming (genealogy, differentiating individuation), and finally the level of evolution.

Unfortunately, this is the only case where Koolhaas’ essay exhibits a tendency towards an abstract structuralism inspired by the perspectives developed in biology. Even worse, Koolhaas got stuck in an almost phenomenological habit, unwittingly blending delving and drowning into each other. Of course, Koolhaas’ essay is a great source for any thorough view onto the historical constraints influencing Singapore’s actualization. In this regard, Singapore Songlines is highly recommended reading. Yet, Koolhaas tried to do more than just bringing together important sources and describing Singapore’s history. As a story-teller about the Urban, he is interested in a generally applicable approach. It is regarding this “more” that he didn’t succeed.

We already mentioned that his affiliation with modernism could be seen as one of the major reasons for this failure. Later, Koolhaas would depart more and more from modernism, resulting in a rather critical attitude towards it. This is reflected in his work as well, of course, which—at least regarding some instances—became more and more relational, and thus Deleuzean.

Behind the Curtains

From this context, given by Koolhaas and Singapore, there are mainly two questions that may appear significant. First, how could we approach the case “Singapore” in a more appropriate way? That is, how could we ask about Singapore and learn from it, rather than being drowned by the amount of particular bits of facts about its peculiarity? Second, how could we read Koolhaas’ 1995 writing in the light of his more recent achievements?

We already discussed these interrelated achievements previously; they could be summarized as three beyonds:

  • Beyond Erecting: story-telling (in its serious, thus playfully comparatist version) as a method and an effect in architecture, regarding the usage of the building—ultimately its Life Form—as well as the building’s relation to architecture itself;
  • Beyond Form or Function: emphasizing relationality rather than individual form or functionality, with regard to the people using the building as well as the building’s embedding into a given arranged asset of other buildings;
  • Beyond the Differential Equation: employing time as an activated structural element or asset of building, overcoming the reductionist concept of time as a parameter or even as a (passive) dimension, as it appears in commonly used models of usage or change.

In a more concise manner, we could also express these points by saying that Koolhaas is on an evolutionary trajectory towards an animate architecture, where behavior is the main organizing paradigm. It is somewhat significant not to separate the three parts listed above. Story-telling does NOT mean that the architect is telling his or her own story as an egomaniac, a category populated by “star architects” and “deconstructivists”. It would be a serious misunderstanding to conceive of story-telling in the same vein as programmatic music once did (for instance Mussorgsky, and Bach earlier). Of course, these pieces can be beautiful, but you can’t listen to them very often. Programmatic, or theme-oriented, often also means “programmed”, i.e. closed.

It is much as Nigel Coates expresses it [6]:

Heathrow has versions of Yates Wine Lodge. A debased form of narrative adorns every hotel lobby restaurant and ready-furnished apartment reaching out to the experience-hungry consumers. We live in a morass of meaningless quotation […] (p.160)

It is more appropriate to conceive of story-telling as a particular game (or play) of braiding teller, listeners, the text, and the situation. The art of story-telling is to create a self-sustaining, nested story-process within each of the listeners by means of feeding and growing their interpretive activity. For good stories, and good story-telling events, the story told is never the story of the teller, it is always the story of the listeners. Having a rich history of telling is certainly helpful to create this, yet, it would be a fatal reduction to conceive of architects as “sources” of stories. Koolhaas referred to a similar issue in his essay about “Bigness”.

We repeatedly mentioned that modernism is characterized by the metaphysical belief in independence. As a corollary, time is usually conceived as a single thing, a primitive series of infinitesimal points. Change is usually described using this time as an external parameter, while the description itself, e.g. as some kind of formula, is symmetric with respect to time. This is paradigmatically realized in physics and, (not quite) astonishingly, also in modernist urbanism.

Taking historicity into consideration, as S. Giedion or Aldo Rossi did, is just the first step towards a communal story-telling. Koolhaas, in contrast, applies a completely different concept of time. We could call this image of time semiotic (Peirce), cinematic (Deleuze), or complex (Prigogine). In any case, the naive physicalist image of time as a series of independent points vanishes. Not only is “presence” no longer point-like; presence lasts as long as a particular “sign-process” is ongoing. There are also bundles of different times, created by different compartments that all host (more or less) separate forms of life.

But again, how would we start interesting communal story-telling? At first, it should be clear that there is always some story told by an urban context. For it is always possible to project some coherence onto an urban arrangement, even if it were filled with crap urbanism, ugly storehouses, etc. Thus, the mere notion of narrative architecture is just empty. What is at stake is the “proto-content” of the story and the dynamics of its unfolding. I put it into quotation marks because it is of course clear, as we just mentioned, that the content can’t be predefined. The visible story is always and only the mediation of the actual story. And that story is going to be braided by people, citizens, active listeners. Architecture and town design just have to provide suitable settings.

Nigel Coates tries to identify those elements of city design that could support a different kind of story-telling. Yet, Coates fails, not only because he does not develop a proper theory of urban story-telling, which would include some reference to or even assimilation of cultural theory. He is also not aware of city theory, e.g. that of David Shane. Yet, intuitively he strongly refers to heterotopias, albeit just by example, not by concept. Coates’ work generally suffers from the case-study approach, even as he tries to get some grip on the more abstract level. In his advanced theory, Shane identifies several types of heterotopias. The common denominator for those is, however, complexity, either as we introduced it, or in the way Koolhaas celebrates it as Bigness. Coates is far from understanding any of those. He just points to Koolhaas.

It is crucial to understand that those three beyonds from a few paragraphs above are deeply incompatible with the metaphysical belief system of modernism, first of all the sacrosanct independence as a primary element. To put it another way, these three beyonds are actualizations of a deeply a-modern attitude. This includes any kind of post-modernism as well! Yet, so far Koolhaas hasn’t explicitly developed his own songlines that would follow these particular issues.

Teaching Singapore

Many people, at least the more sensitive ones, get irritated when visiting Singapore for the first time. Despite its resemblance to Western urban arrangements at first sight, it turns out to be quite different. Despite the fact that Western visitors may recognize some or even many elements that contribute to urban arrangements, at second sight these elements turn out to be choreographed in a strikingly different manner, or to establish a choreography of their own. In terms of animate architecture we could say that Singapore behaves differently. (Just remember that we conceive of behavior quite abstractly, not in terms of organisms!)

Of course, we should understand that these “despites” are just evoked by underlying disappointments of illusions, created by inappropriate projections. In the case of Singapore, the illusion may easily be triggered by the visual similarity to sceneries in Western cities, perhaps spurred by a certain expectation regarding the effects of globalization. In a sense, traveling on an A380 is not traveling any more. There is just a little movement to and from the airport, but the flight as such is like staying overnight at a weird hotel.

Anyway, what remains is that difference at second sight. And this difference is a very strong one. By now it should be clear that the peculiarity of Singapore can’t be found on the surface, where empiricists could hope that counting frequencies of whatever could show us “directly” the representative differences. Even a latent state variable analysis wouldn’t reveal anything meaningful. This applies, of course, not only to the case of Singapore.

Yet, again, what are the metaphysical beliefs of Singaporeans? Of which achievements are Singaporeans proud?

In order to understand Singapore, on the level of the individual as well as on the level of the whole state, we have to be clear about where they come from. In a sense, Singapore repeats the typical European transformation from non-urban to urban structures, yet in an extremely condensed form, in the spatial as well as in the temporal dimension. This renders mechanisms visible that otherwise are hidden by vast amounts of historical and contingent particulars. To put it in Foucauldian terms: how could we describe the field of proposals, the space of everything that Singapore could think, and how could we describe the fields of forces that are at the roots of its specific governmentality? Such questions are part of what we could call an “Archaeology of the Urban”.

As a state, Singapore was born by an act of segregation. Yet, it didn’t set itself apart; it was cast out by Malaysia. The Malaysian government enforced the founding of the state because it considered the conditions on the island of Singapore highly pathological, indeed so bad as to be incurable. Well, the conditions indeed had been quite bad. In the case of Singapore, the state was born into chaos. The formal political state was not even accompanied by any organizational structure, nor by such a thing as political awareness among its actual inhabitants. From the perspective of the perinatal Singapore there wasn’t anything to build upon.

At that time, at the beginning of the 1960s, a lot of Chinese people had been living on the island. This brought the structure of the family as a clan into the political reality of Singapore, where it still prevails today. Undeniably, it is a kind of feudalism; yet, it can’t be directly compared to the European form of feudalism. After all, members of the clan are related to each other.

Operationally, the initial mess had to be cleaned up, and this wouldn’t have been possible without a strong plan of almost military precision. Without any doubt, the political system of Singapore was, and probably still is, an oligarchy, establishing a de facto political elite. Yet, it is also clear that it is not a tyranny or a dictatorship. The “big families” feel a serious responsibility for the welfare of the whole state. The political system is probably best described as a technocratic paternalistic oligarchy, using a parliament for the purpose of limited mediation. (In some ways, not so dissimilar to the course of development of the E.U.) Also, in Singapore you won’t see as much video surveillance as you would in London, and the reason is not a lack of potential funding.

In a sense, the Singaporeans did an incredible job. It is this successful improvement of the conditions by actualizing an incredible culture of planning that contributes most to the self-esteem of Singapore. This culture is orchestrated by the Urban Redevelopment Authority (URA), which spends a lot of effort informing the public about the results of the planning process, not however about the process of planning itself. The emblematic item of the Singaporean culture of planning is a continuous exhibition run by the URA. Below I show just a few images from this exhibition, which covers historical aspects as well as planning aspects.

Generally, the exhibition tries to smooth the history and to align intentions, means and effects. At the center of the exhibition is a large representational model, approximately 15 m × 6 m in size, where one can find all built houses and all planned houses.

Figure 1a: Partial view of the city model at the URA’s continuous exhibition. In the foreground you can see Marina Bay, which extends to the Singapore River towards the background and to the left. The bluish color of the indicated buildings (each model of a high-rise approx. 15 cm tall) indicates “planned” and contracted. The blue models are made of plastic foam.

Figure 1b: Marina Bay, now in wood, indicating that it is being built or has been built.

Figure 2a: Poster about the Master Plan 2008. You can see an enormous degree of detail. It is indeed a plan, not an open program.

Figure 2b: Exhibiting pride, the Singaporean way.

Really smart, one could think, regarding such enduring success in the implementation of large-scale plans. Yet one can also feel that something under the hood leaves a trace of acid. So, what’s wrong with it? Deleuze frequently insisted on the distinction of reality vs. actuality and possibility vs. potential. Plans denote the possible, and everything that is possible (such as what a plan denotes) is already real. Hence, the poster above (Fig. 2b) tries to feed on a contrast where actually there is none. Deleuze also analyzed and described in detail the origin and setup of such a misunderstanding, which according to him suffers from the representationalist fallacy (see our earlier discussion here). It is a nice match and confirmation that he also describes such thinking as an instance of the “dogmatic image of thought”.

What the author of the poster most likely was referring to is what we described earlier as the existential. Yet the existential defies any control, even any speaking about it, which is quite the opposite of what “planning” refers to. The transduction and implementation of plans into something we could then experience as “external” may succeed only, and here we repeat ourselves, if the conditions of that implementation are completely fixed. Plans can be implemented successfully only if there is no potential. Thus, exhibiting pride about the successful implementation of plans may well be considered nothing else than the embryo saying “I am”. The doubts appear much later.

Within a comparably very short time, and without externalized violence, i.e. bloody revolutions and riots, the Singaporeans transformed their society from level zero into a wealthy third-sector society. Yet, as we already pointed out, Singapore feels strange to a Western visitor nowadays. The reason is that Singapore still behaves as if there were chaos to fight against, as if there were a serious lack of material supplies, as if Singapore still fell behind developed countries in its economic figures. Employing the umbrella of sustainability (see the next figure below), the URA readily declares the alternative to planning.

Figure 3: Beautiful new world. How many halves of the full story are appropriate?

why do we plan

Yet the declared alternative is at least incomplete, if not wrong, in two ways. Neither is it a necessity that the absence of planning results in the bad conditions of industrialization (evolution and learning being the alternatives), nor does a polished city mean that this city runs well-balanced on a larger scale (costs are likely to be externalized). In fact, solar energy is almost unknown in Singapore, and the “adoption process” has not even started. All electricity is generated by three power plants running on mineral oil.

Figure 4: Screenshot from the official website of the URA, which provides a lot of videos and images for a virtual visit. The image shows Clarke Quay at the mouth of the Singapore River, near Marina Bay, which would follow to the right. In this area you can find a lot of restaurants (Chinese, Japanese, French cuisine), every aspect of which is choreographed. Hence, the whole arrangement feels neither “native” nor “smooth”.

 

William Gibson once remarked that Singapore is like Disneyland with the death penalty. This, of course, is a deeply misleading exaggeration. The grain of truth in it is the particular silliness due to the still rigorous adherence to the paradigm of planning. Singapore is not threatened by chaos, predatory capitalism, democratic trash or misunderstood materialism of the Marxian flavor. Singapore is threatened by blocking itself from giving birth to itself. Its silliness derives from its neoteny, which in this case is even a kind of embryoteny. Embryos claiming to be fully alive look silly, or at least troubling.

Nevertheless, we should be cautious in our valuation. As with embryos, we simply can’t apply any of the categories that we, as Western enculturates, are used to when thinking about Western societies. Note that this is not a question of Western vs. Asian, though, as Koolhaas repeatedly suggests in his text.

The mystery of how […] the strategy of modern housing that failed in much more plausible conditions could suddenly “work” is left suspended between the assumption of greater authoritarianism and the inscrutable nature of the Asian mentality. (p.1037)

Koolhaas fails to recognize the particular setup of Singapore as an embryo. For grammatical reasons, his conclusions are thus inappropriate, despite his hermeneutical and thorough approach. Singapore is an urban embryo in Asia; its parents gifted it with a potentiality that is Asian, yet Singapore itself can’t be conceived as an Urban body so far. In a sense, it is not even Asian.

The example of Singapore demonstrates that for embryos the dimension of history does not exist at all. Melting and folding erase the possibility of history. Instead, the embryo “knows” only about the future. Even the present is irrelevant to it. Embryonic morphogenesis means living inside the plan. If you know that a particular structure is necessary for the next step, but also that this awaited structure will need to be melted down again afterwards, well, then you would start to speak of continuous renewal. Plans reduce the potential to the possible, their purpose being precisely to expel the unforeseeable. Koolhaas is therefore wrong when he repeatedly reproaches the Singaporean authorities for a certain violence or cruelty. If you live inside a plan, then there is no cruelty except the plan’s rationality, which however is not visible from within. Actually, the perspective from within a plan renders concepts like violence, rationality or moral freedom meaningless. They could not even be debated within the life form of plans. Perhaps here we meet the major argument against any close ties between politics and plans, whether in the form of “normal” bureaucracy or in the form of centralized governments.

Thus, although Koolhaas is certainly right to expose a certain “violence”, he definitely fails to find an appropriate category for it. Calling it some kind of “war” is probably not quite correct: war is an extreme form of politically organized, externalized violence!

A regime like the one in power in Singapore is a radical movement: it has transformed the term urban renewal into the moral equivalent of war, […] (p. 1035)

[…] a perpetuum mobile where what is given is taken away in a convulsion of uprooting, a state of permanent disorientation. (p.1036)

All the new housing, accommodated in high-rises, close together, entirely devoid of the centrifugal vectors of modernism, obscuring both sky and horizon, precludes any notion of escape. In Singapore, each perspective is blocked by good intentions.  (p.1037)

How would an embryo “escape”? If it “escapes”, we call it birth. More significantly, the “escape” of embryos is equivalent to a very fundamental change of the life form. Not only do relations change; even what could be called a relation changes during birth. For Singapore, it seems to me, the appropriate question could be how to initiate its birth.

The delicate situation of the planning authorities, probably of the whole city altogether, is marked by a serious kind of impossibility. It would be the first embryo thinking about its own birth.

Well, today the URA has established a rule that spectators should be enabled to get a glimpse of the sea from each high-rise building. (Of course, Koolhaas meant a different thing here… :)

Living inside a plan, or likewise, we could say, living as an embryo, creates a strange attitude towards the present. Everything is known to be potentially replaced quite soon. So why spend any effort making things beautiful? As beauty always means some kind of sustainably encoded luxury, it should be clear how it feels to stroll around in Singapore. To a Western soul it feels sterile, sharp, uncreative. I say this without any notion of reproach, of course. Nevertheless, it remains at least exciting to observe how Singapore will proceed to turn its paradigm of change from development to evolution. Sustainable adaptiveness is achievable only through the latter, and in a smart way only by overcoming evolution through a further turn towards virtuality and learning.

It is very important to understand that this current Singaporean paradigm of renewal has nothing to do with an open evolution. Precisely here we find the suture for intensive conflict. Unfortunately, Singapore apparently hasn’t recognized the necessity of proceeding towards a more open style of development. What the Singaporean authorities try nowadays is to plan leisure and play, i.e. the playfulness of their citizens, which indeed sounds somewhat perverse. You can’t issue commands like “Play!”, “Be creative!”, “Develop tolerable sub-culture!” Nevertheless, this is exactly what the Singaporean government is apparently heading for.

It is a period of transition, revision, marginal adjustments, “New Orientations”; after “urbanization” comes “leisurization.”  “Singaporeans now aspire to the finer things in life – to the arts, culture, and sports …”

The recent creation of a Ministry for Information and the Arts is indicative. As Yeo, its minister, warns, “It may seem odd, but we have to pursue the subject of fun very seriously if we want to stay competitive in the 21st century …” (p.1077)

Not recognizing the embryonic form of life, Koolhaas was tempted into a further mistake. It is plainly wrong to call Singapore a semiotic state.

Singapore is perhaps the first semiotic state, a Barthian slate, a clean synthetic surface, a field at once active and neutralized where political themes or minimal semantic particles can be launched and withdrawn, tested like weather balloons. (p.1039)

Embryos need anything but open interpretation. Koolhaas would even be wrong if he applied the (open) Peircean concept of the sign (he obviously sticks to the mechanistic and closed Saussurean model), and Roland Barthes himself preferred the Peircean conception. Additionally, Koolhaas seriously misunderstands semiotics as it has been made available for architecture, e.g. by Venturi. In 1995, Koolhaas was still following the common modernist misconception regarding semantics, namely that semantics and thus meaning could be determined a priori. Semantic particles can’t be launched, simply because they don’t exist (they are impossible). In fact, the city of Singapore lacks semiotic anchor points almost completely (so far at least), except perhaps the exaggerated touristic choreographies around Marina Bay, which is not quite surprising, due to the same misunderstanding. The semiotics of a city depends on its history, as it is impossible to introduce a symbolic value by declaration. Yet, living inside the plan, such a history is impossible. Trying to enforce the presence of history, which is nothing else than to pretend it, results just in more silly artificiality, at least to the Western eye. Yet, if we compare it to things like the “historical district” in San Diego, Singapore may not be that far off.

SingaPure Conclusions

Given the unique conditions that we find in Singapore, or as Singapore, it is not really a surprise to find two highly renowned technical universities engaged in large projects there. MIT and ETH Zurich run “laboratories” in Singapore. The total budget for the 5-year period since 2010 well exceeds $400 million, shared between Singapore and the universities. Of course, both parties address technical questions almost exclusively, attracting reductionist practitioners of all sorts (for most of them, “complexity” is an offense). It has proven difficult to bring in a more cultural perspective. Encouragingly, or should we say ironically, the Swiss faction is housed in a building called the “Create Tower”.

Singapore is indeed a laboratory, though a very special one. Yet, as in an experiment conducted in materials science, the basic setup is known. No new natural laws are to be expected,7 the main target being optimization of the embedding system. In such an experiment, you know how to set it up in advance. Hopefully, the Future Cities Laboratory, as the Swiss branch is called, will recognize the subtle complexity of that name. Hopefully, Singapore will not serve as a template for other cities. Yet just this seems to be happening in China.

The main lesson we can learn from the Singapore Songlines is what it means to become embryonic. Without the implemented example we simply would not know. It would not be possible to set up a theory about change, particularly not about change with respect to the Urban. In turn, we may say that the actual Gestalt of the Urban, as a concept and as a Life Form, is highly dependent on the way one actualizes the concept of change. Thinking change means thinking time. And time is conceived quite differently across the Asian cultures as compared to the Western concept.

In our summary we claimed that everything in Singapore starts making sense only against the background of its embryonic condition. This may easily be generalized into a generally applicable principle: Nothing regarding the Urban Makes Sense Except in the Light of the Orchestration of Change.8

Of course, everything always changes. Yet we deliberately emphasize its orchestration as the important aspect. Cities that are not aware of this, instead merely reacting on a daily basis to the never-ending challenges, without any reflection on the conditions for these actions and reactions, can hardly maintain the quality of the Urban. Thus, the Orchestration of Change provides the transcendental conditions for the particular quality of the Form of Life that establishes itself in a certain city, maybe even as the Urban. It is thus clear that mere size is only a secondary condition for the appearance of the Urban. (For instance, Munich was dubbed a “large village” by Karl Kraus in the 1920s, and it is quite likely that he would label it the same today.)

The perspective expressed above includes, of course, the conceptualization as well as the socio-political instantiation of change, the former implying the choreosteme, the latter all the (usually) highly complex mechanisms associated with it. We have argued that change always implies embryonic, evolutionary and learning aspects (all in their abstract form). In the opposite direction we could say that any process of change or differentiation can be situated in the aspectional space spanned by these three aspects. Thus we can sharpen the perspective on differentiation that we developed earlier, in our essay about growth, where we distinguished different modes of growth. Now we are able to transpose the “observation of growth” into the abstract, which allows us to derive a general approach to change. A very brief remark should be allowed here: this aspectional space conveys precisely the attitude of the late Putnam regarding essences or prototypes. For him, they simply do not exist outside the collective process of settling down at a particular configuration (which is then considered an “essence”). (cf. [7])

With regard to the Urban this is particularly interesting for shrinking cities or neighborhoods. Gaps and local meltdowns in urban assets are anything but defects or pathologies. Shrinking processes provide no reason to become desperate. Yet they definitely deserve a vision, a long-term perspective, even if it won’t be implemented as rigorously as in Singapore.

Singapore demonstrates what it means to “become positive”. The embryo is wholeheartedly positive. It is a punch against the representational negativity that has infected the Western flavor of urbanism. Koolhaas was aware of this (“Whatever Happened to Urbanism?”), yet at that time without being able to point towards a possible release from this deadlock.

Our amalgamated wisdom can be easily caricatured: according to Derrida we cannot be Whole, according to Baudrillard we cannot be Real, according to Virilio we cannot be There.

Of course, the actual issue with all three of those guys is that they are caricatures of themselves. Trying to reason about the whole and its actuality as romanticist hyper-modernists is a contradictio in adiecto. Methodological stupidity. It is stupid (or childish) to believe, as modernists actually do, in the metaphysical independence of everything, thus splitting everything into metaphysical and empirical dust, and then at the same time trying to pretend to say anything about the imagined whole, which, even worse, is often assumed to be out there as such. Yet the positivism of Singapore merely follows the negative of this negativity, because it again takes the positive itself as representational. There is no free choice in a plan. If there were, it would not be a plan anymore. The metaphysical setup of Singapore is characterized by the belief in transcendental identity as the primary element, shaped by the historic need for rigorous planning. We have already discussed several times the problem with concepts that are based on the principle of identity. Yet in an engineered city it matches the general habit.

Both together, planning and the paradigm of identity, resulted in the city-state’s embryonic character. The abstract embryo is the only being that could claim identity, since it is the only being that could also claim to be a solipsist. As is typical for embryos, the Singaporean model is possible only on this a priori spatially restricted island of 600 square kilometers (a bit more than Lake Constance in the middle of Europe). Indeed, it could prove quite hard to adopt a more relational attitude.

No wonder Singapore attracts engineers and reductionist urbanists. By no means could Singapore be considered a “model” city in the sense that one could transfer “experiences” to other cases (except similarly brutal cases of city planning in China). Time will reveal whether Singapore will one day develop into a model case. For that, however, it must find some way to get born.

Regarding Koolhaas and his Singapore Songlines, we have seen that he was not able to depart far enough from his own modernist setup. Although he is able to observe that…

Singapore is incredibly “Western” for an Asian city, […]. This perception is a Eurocentric misreading. The “Western” is no longer our exclusive domain. (p.1013)

…he is not able to develop an appropriate perspective on the change model that is implemented in Singapore. Neither the assignment of ugliness nor that of absurdity actually makes sense. Who would say that embryos are ugly? Or chaotic? Or a Potemkin entity?

It is managed by a regime that has excluded accident and randomness: even its nature is entirely remade. It is pure intention: if there is chaos, it is authored chaos; if it is ugly, it is designed ugliness; if it is absurd, it is willed absurdity. Singapore represents a unique ecology of the contemporary. (p.1011)

The problem of Singapore, its problematic field, is provoked by its addiction to embryonism. In order to avoid an increase in the intensity of violence, there is no other possibility than to transform the centralized, representationalist embryonism into its probabilized version: a steady, multiplied and manifold nativity at the level of the individual or of small social groups. I am (not so) sure that they will find a plan for how to accomplish this….

Notes 

1. This distinction between the material and the immaterial is a secondary distinction. Previously, in the essay about behavior, we argued that this distinction is due to the existential fallacy. The distinction implicitly assumes that we could speak about the material in, or as, an existence prior to any perception, any language and any conceptual work. This, of course, is not possible. Distinguishing between the material and the immaterial pretends a problematics, yet it only gets trapped by a misunderstanding. Thus, this distinction should be regarded as a coarse approximation only.

2. As always, we use the capital “U” if we refer to the urban as a particular quality and as a concept (particularly the one we are developing in this series), in order to distinguish it from the ordinary adjective, and additionally to avoid any reference to any kind of “-ism”.

3. For a discussion of Simondon’s individuation see Bühlmann [3], who discussed him with respect to mediality; also see Kenneth Dean [4], who refers to him in a concise way: “Gilbert Simondon (1989; 1992) has analysed the process of individuation of living organisms, individuals, and social collectives. He argues that an individual is generated out of a complex metastable field of preindividual forces, potential forms, and possible coalescences of matter. The moment of individuation is determinative in physical processes, such as in the formation of a crystal. Even after attaining the consistency of energy, form, and matter that constitutes a crystal, the crystal continues to interact with its milieu, in order to maintain its consistency. In the case of living organisms, the realm of virtuality Simondon refers to as the preindividual is carried along throughout the living being’s lifetime of continuous individuation. Thus, attaining a particular identity is only one, and but a temporary, aspect of a continuous interaction with the milieu, and a continuous process of individuation drawing upon the virtual, or preindividual, realm. Many of the forces that move through a living being undergoing these processes may be described as transindividual. This is particularly the case with regard to the establishment of an individual identity vis-à-vis a social collective.” (p. 31)

4. Note that Koolhaas didn’t himself conceive those texts as a trilogy, at least as far as I know. Rather, putting these texts into close neighborhood is due to our interpretation.

5. The Decamerone is commonly regarded as the first instance of the novel (it. novella), indeed a novel thing, usually about novel stories, though the same stories had been told innumerable times before.

6. We introduced the “resistance of the existential” as an accidens of corporeality. There is always something about the material arrangement that we can’t speak of, as any speaking or thinking already refers to modeling, or more precisely, to the choreosteme. Yet, although we can infer any outside and its materiality only indirectly, we are faced with it. Saying this, we also have to emphasize that materiality is not limited to the outside (of the mind, or of the choreosteme), since symbols always acquire a quasi-materiality.

7. Unless the experimenter just plays around, as happened in the case of the discovery of ceramic high-temperature superconductivity by Bednorz and Müller in 1986. Accordingly, there is still no theory that explains the physical phenomenon of high-temperature superconductivity.

8. This mirrors Dobzhansky’s famous “principle” for biology as a science: “Nothing in Biology Makes Sense Except in the Light of Evolution”, American Biology Teacher, vol. 35 (March 1973).

References

  • [1] Rem Koolhaas (1995), Singapore Songlines – Portrait of a Potemkin Metropolis …or Thirty Years of Tabula Rasa. In: O.M.A., Rem Koolhaas and Bruce Mau, S,M,L,XL. Crown Publishing Group, 1997, pp. 1009–1089.
  • [2] O.M.A., Rem Koolhaas and Bruce Mau, S,M,L,XL. Crown Publishing Group, 1997.
  • [3] Vera Bühlmann, Inhabiting Media. Thesis, University of Basel (CH), 2009.
  • [4] Kenneth Dean, Lord of the Three in One: The Spread of a Cult in Southeast China. Princeton University Press, Princeton 1998.
  • [5] Robert Musil, The Man without Qualities.
  • [6] Nigel Coates, Narrative Architecture. Architectural Design Primers series, Wiley, London 2012.
  • [7] Ian Hacking (2007), “Putnam’s Theory of Natural Kinds and Their Names is not the Same as Kripke’s”. Principia 11(1), pp. 1–24.

۞

Modernism, revisited (and chunked)

July 19, 2012 § Leave a comment

There can be no doubt that nowadays “modernism”, due to a series of intensive waves of adoption and criticism, returning as echoes from unexpected grounds, is used as a label, as a symbol. It allows one to induce, to claim or to disapprove conformity in previously unprecedented ways; it helps to create subjects, targets and borders. Nevertheless, it is still an unusual symbol, as it points to a complex history, in other words to a putative “bag” of culture(s). As a symbol, or label, “modernity” does not point to a distinct object, process or action. It invokes a concept that emerged through history and is still doing so. Even as a concept, it is a chimaera. Still unfolding from practice, it has not yet moved completely into the realm of the transcendental, to join other concepts in the fields most distant from any objecthood.

This Essay

Here we continue the investigation of the issues raised by Koolhaas’ “Junkspace”. Our suggestion upon the first encounter was that Koolhaas himself struggles with his attitude to modernism, although he openly blames it for creating Junkspace. (Software as it is currently practiced is definitely part of it.) His writing bearing that title thus gives just a proper list of effects and historical coincidences, nothing less, but also nothing more. In particular, he provides no suggestions about how to find or construct a different entry point into the problematic field of “building urban environments”.

In this essay we will try to outline what a possible, and constructive, archaeology of modernism could look like, with a particular application to urbanism and/or architecture. The decisions about where to dig and what to build have been, of course, subjective. And our equipment is, as almost always in archaeology, rather small, suitable for details, not for surface mining or the like. That is, our attempts are not directed towards any kind of completeness.

We will start by applying a structural perspective, which will yield the basic set of presuppositions that characterizes modernism. This will be followed by a discussion of four significant aspects, for which we will hopefully be able to demonstrate the way of modernist thinking. These four areas concern patterns and coherence, meaning, empiricism, and machines. The third major section will deal with some aspects of contemporary “urbanism” and how Koolhaas relates to it, particularly with respect to his “Junkspace”. Note, however, that we will not perform a literary study of Koolhaas’ piece, as most of his subjects there can easily be deciphered on the basis of the arguments as we will present them in the first two sections.

The final section then comprises a (very) brief note about a possible future of urbanism, which perhaps has actually already been lifting off. We will provide just some very brief suggestions, in order not to appear (too) presumptuous.

Table of Contents (active links)

1. A structural Perspective

In line with its heterogeneity, the usage of the symbol “modernity” is fuzzy as well. While the journal Modernism/modernity, published by Johns Hopkins University Press, concentrates “on the period extending roughly from 1860 to the mid-twentieth century,” galleries for “Modern Art” around the world consider the historical period from the post-Renaissance (conceived as the period between 1400 and roughly 1900) up to today, usually not distinguishing modernism from post-modernism.

In order to understand modernism we have to take the risk of proposing a structure behind the merely symbolical. Additionally, and accordingly, we should resist the abundant attempts to define a particular origin for it. Foucault called those historians who were addicted to the calendar and the idea of the origin, the originator, or, more abstractly, the “cause”, “historians in short trousers” (meaning a particular intellectual infantilism, probably a certain inability to think abstractly enough) [1]. History does not realize a final goal either, and similarly it is plain nonsense to claim that history has come to an end. As in any other evolutionary process, historical novelty builds on the leftovers of preceding times.

After all, the usage of symbols and labels is a language game. It is precisely a modernist misunderstanding to dissect history into phases. Historical phases are not out there, and never have been. It is far more appropriate to conceive of history as waves, yet not waves of objects or ideas, but of probabilities. So the question is: what happened in the 19th century such that it became possible to objectify a particular wave? Is it possible to give any reasonable answer here?

Following Foucault, we may try to reconstruct the sediments that fell out of these waves, like the ripples of sand in the shallow water on the beach. Foucault’s main invention, put forward in his “Archaeology” [1], is the concept of the “field of proposals”. This field is not 2-dimensional; it is high-dimensional, yet not of a stable dimensionality. In many respects we could conceive of it as a historian’s extension of the Form of Life, as Wittgenstein used to call it. Later, Foucault would include in this concept the structure of power, its exertion and objectifications, governmentality.

Starting with the question of power, we can see an assemblage that is typical of the 19th century and the late 18th. The invention of popular rights, even the invention of the population as a conscious and practiced idea, itself an outcome of the French Revolution, is certainly key for any development since then. We may even say that its shockwaves, and the only slightly less shocking echoes of these waves, haunted us till the end of the 20th century. Underneath the French Revolution we find the claim of independence that traces back to the Renaissance, formed into philosophical arguments by Leibniz and Descartes. First, however, it brought the Bourgeois, a strange configuration of tradition and the claim of independence, bringing forth the idea of societal control as a transfer from the then-emerging intensification of the idea of the machine. Still exhibiting class-consciousness, it was at the roots of the modernist rejection of tradition. Yet even the Bourgeois builds on the French Revolution (of course) and on the assignment of a strictly positive value to the concept of densification.

Without the political idea of the population, the positive value of densification, and the counter-intuitive and prevailing co-existence of the ideas of independence and control, neither the direction nor the success of the sciences and their utilization in the field of engineering could have emerged as it actually did. Consequently, right at the end of the hot phase of the French Revolution, Fourcroy argued in 1794 that it would be necessary to found an “École Polytechnique”.1 Densification, liberalism and engineering brought another novelty of this amazing century: the first spread of mass media, newspapers in that case, which were theorized only approx. 100 years later.

The rejection of tradition as part of the answer to the question “What’s next?” is perhaps one of the strongest feelings for the modernist in the 19th century. It even led to considerable divergence of attitudes across domains within modernism. For instance, while the arts rejected realism as a style building on “true representation,” technoscience embraced it. Yet, despite the rejection of immediate visual representations in the arts, the strong emphasis on objecthood and apriori objectivity remained fully in charge. Think of Kandinsky’s “Punkt und Linie zu Fläche“ (1926), or the strong emphasis on pure color (Malevich), even on the idea of purity itself, then somewhat paradoxically called abstractness, or the ideas of the Bauhaus movement about the possibility and necessity of objectifying rules of design based on dot, line, area, form, color, contrast, etc. The proponents of Bauhaus, even their contemporary successors in Weimar (and elsewhere), never understood that the claim of objectivity, particularly in design, cannot be satisfied; it is a categorical fault. Just to avoid a misunderstanding that itself would be a fault of the same category: I personally find Kandinsky’s work mostly quite appealing, as well as some of the work by the Bauhaus guys, yet for completely different reasons than he (they) might have been dreaming of.

Large parts of the arts rejected linearity, while technoscience took it as its core. Yet, such divergences are clearly in the minority. In all domains, the rejection of tradition was based on an esteem of the idea of independence and resulted predominantly in an emphasis on finding new technical methods to produce unseen results. While the emphasis on method definitely enhances the practice of engineering, it is not innocent either. Deleuze sharply rejects the saliency of methods [10]:

Method is the means of that knowledge which regulates the collaboration of all the faculties. It is therefore the manifestation of a common sense or the realisation of a Cogitatio natura, […] (p.165)

Here, Deleuze does not condemn methods as such. Undeniably, it is helpful to explicate them, to erect a methodology, to symbolize them. Yet, culture should not be subordinated to methods, not even sub-cultures.

The leading technoscience of those days had been physics, closely followed by chemistry, if it is at all reasonable to separate the two. It brought the combustion engine (from Carnot to Daimler), electricity (from Faraday to Edison, Westinghouse and Tesla), the control of temperature (Kelvin, Boltzmann), the elevator, and consequently the first high-rise buildings, along with a food industry. In the second half of the 19th century it was fashionable for newspapers to maintain a section showcasing the greatest advances and successes of technoscience of the last week.

In my opinion it is eminently important to understand the linkage between the abstract ideas, growing from a social practice as their soil-like precursory condition, and the success of a particular kind of science. Independence, control, population on the one side; the molecule and its systematics, the steam and the combustion engine, electricity and the fridge on the other side. It was not mere energy (in the form of wood and coal) that could now be distributed; electricity meant an open, distributable potential for almost anything [2]. Together they established a new Form of Life which nowadays could be called “modern,” despite the fact that its borders blur, if we could assume their existence at all. Together, combined into a cultural “brown bag,” these ingredients led to an acceleration, not least due to the mere physical densification, an increase of the mere size of the population, produced (literally so) by advances in the physical and biomedical sciences.

At this point we should remind ourselves that factual success legitimizes neither the expectation of sustainable success nor any reasoning about a universal legitimacy of the whole setup. The first figure would represent simple naivety, the second the naturalistic fallacy, which seduces us to conclude from the actual (“what is”) to the deontic and the normative (“what should be”).

As a practice, the modern condition is itself dependent on a set of beliefs. These can neither be questioned nor discussed at all from within the “modern attitude,” of course. Precisely this circumstance makes it so difficult to talk with modernists about their beliefs. They are not only structurally invisible; something like a belief is almost categorically excluded qua their set of conditioning beliefs. Once accepted, these conditions can’t be accessed anymore; they are transcendental to any further argument put forward within the area claimed by these conditions. For philosophers, this figure of thought, the transcendental condition, takes the role of a basic technique. Other people like urbanists and architects might well be much less familiar with it, which could explain their struggles with theory.

What are these beliefs to which a proper modernist adheres? My list would look like the one given below. The list itself is, of course, neither a valuation nor an evaluation.

  • independence, ultimately taken as a metaphysical principle;
  • belief in the primacy of identity against the difference, leading to the primacy of objects against the relation;
  • linearity, additivity and reduction as the method of choice;
  • analyticity and “lawfulness” for descriptions of the external world;
  • belief in positively definable universals, hence the rejection of belief as a sustaining mental figure;
  • belief in the possibility of a finally undeniable justification;
  • belief that the structure of the world follows a bi-valent logic2, represented by the principle of objective causality, hence also a “logification” and “physicalization” of the concept of information as well as meaning; consequently, meaning is conceived as being attached to objects;
  • the claim of a primacy of ontology and existential claims—as highlighted by the question “What is …?”—over instances of pragmatics that respect Forms of Life—characterized by the question “How to use …?”;
  • logical “flatness” and the denial of creativity of material arrangements; representationalism;
  • belief in the universal arbitrariness of evolution;
  • belief in a divine creator or some replacement, like the independent existence of ideas (here the circle closes).

It now becomes even clearer that it is not quite reasonable to assign a birth date to modernism. Some of those ideas and beliefs have been around for centuries before their assembly into the 19th-century habit. Thus, modernism is nothing more, yet also nothing less, than a name for the evolutionary history of a particular arrangement of attitudes, beliefs and arguments.

From this perspective it also becomes clear why it is somewhat difficult to separate so-called post-modernism from modernism. Post-modernism takes a yet undecided position on the issue of abstract metaphysical independence. Independence and the awareness of relations have not yet amalgamated; both are still, well, independent in post-modernism. It makes a huge, if not to say cosmogonic, difference to set the relation as the primary metaphysical element. Of course, Foucault was completely right in rejecting the label of being a post-modernist. Foucault dropped the central element of modernism—independence—completely, and very early in his career as an author, thinking about the human world as horizontal (actual) and vertical (differential) embeddings. The same is obviously true for Deleuze, or Serres. Less so for Lyotard and Latour, and definitely not for Derrida, who practices a schizo-modernism, undulating between independence and relation. Deleuze and Foucault have never been modern, to paraphrase Latour, and it would be a serious misunderstanding to attach the label of post-modernism to their oeuvre.

As a historical fact we may summarize modernism by two main achievements: first, the professionalization of engineering and its rhizomatically pervasive implementation, and second, the mediatization of society, first through the utilization of mass media, then by means of the world wide web. Another issue is that many people confess to follow it as if they were following a program, turning it into a movement. And it is here where the difficulties start.

2. Problems with Modernism

We are now going to deal with some of the problems that are necessarily associated with the belief set that is so typical for modernism. In some way or another, any basic belief is burdened by its own specific difficulties. There is no universal or absolute way out of that. Yet, modernism is not just an attitude; up to now it has also turned into a large-scale societal experiment. Hence, there are not only some empirical facts; we also meet impacts on the life of human beings (before any considerations of moral aspects). Actually, Koolhaas provided precisely a description of them in his “Junkspace” [3]. Perhaps modernism is also more prone to a strong polarity of positive and negative outcomes, as its underlying set of beliefs is particularly strong. But this is, of course, only a weak suggestion.

In this section we will investigate four significant aspects. Together they hopefully provide a kind of fingerprint of “typical” modernist thinking—and its failure. These four areas concern patterns and coherence, empiricism, meaning and machines.

Before we start with that I would like to visit briefly the issue raised by the role of objects in modernism. The metaphysics of objects in modernism is closely related to the metaphysical belief in independence as a general principle. If you start to think “independence” you necessarily end up with separated objects. “Things” as negotiated entities barely exist in modernism, and if so, then only as a kind of error-prone, social and preliminary approximation to the physical setup. It is otherwise not possible to balance objects and relations as concepts. One of them must take the primary role.

Setting objects as primary against the relation has a range of problematic consequences. In my opinion, these consequences are inevitable. It is important to see that neither the underlying beliefs nor their consequences can be separated from each other. For a modernist, it is impossible to drop one of these and to keep the other ones without stepping into the tomb of internal inconsistency!

The idea of independence, whether in its implicit or its explicit version, can be traced back at least to scholastics, probably even to classical antiquity, where it appeared as Platonic idealism (albeit this would be an oversimplification). To its full extent it unfolded through the first golden age of the dogma of the machine in the early 17th century, e.g. in the work of Harvey or the philosophy of Descartes. Leibniz recognized its difficulties. For him perception is an activity. If objects were conceived as purely passive, they could neither perceive nor build any relation at all. Thus, the world can’t be made of objects, since there is a world external to the human mind. He remained, however, caught in theism, which brought him to the concept of monads as well as to the concept of infinitesimal numbers. The concept of the monads should not be underestimated, though. Ultimately, they serve as immaterial elements that bear the ability to perceive and transfer it to actual bodies, whether stuffed with a mind or not.

The following centuries brought just a tremendous technical refinement of Cartesian philosophy, even though there have been phases where people resisted its ideas, as, for instance, many people did in the Baroque.

Setting objects as primary against the relation is at the core of phenomenology as well, and also, though in a more abstract version, of idealism. Husserl came up with the idea of the “phenomenon” that impresses us directly, intuitively, without any interpretation. Similarly, the Kantian “Erhabenheit”, then tapered by Romanticism, is out there as an independent instance, before any reason or perception may start to work.

So, what is the significance of setting objects as primary constituents of the world? Where do we have to expect which effects?

2.1. Dust, Coherence, Patterns

When interpreted as a natural principle, or as a principle of nature, the idea of independence provokes and supports the physical sciences. Independence matches perfectly with physics, yet it is also an almost perfect mismatch for the biological sciences, as far as they are not reducible to physics. The same is true for the social sciences. Far from being able to recognize their own conditionability, most sociologists just practice methods taken more or less directly from physics. Just recall their strange addiction to statistics, which is nothing else than a methodology of independence. Instead of asking for the abstract and factual genealogy of the difference between independence and coherence, between the molecule and harmony, they dropped any primacy of the relation, even its mere possibility.

The effects in architecture are well-known. On the one hand, modernism led to an industrialization, which is reaching its final heights in the parametricism of Schumacher and Hadid, among others. Yet, by no means is there any necessity that industrialization leads to parametricism! On the other hand, if in the realm of concepts there is no such thing as a primacy of relation, only dust, then there is also no form, only function, or at least a maximized reduction of any form, as it was first presented by Mies van der Rohe. The modularity in this ideology of the absence of form is not that of living organisms; it is that of crystals. Not only the Seagram Building looks exactly like the structural model of sodium chloride. Of course, it represents a certain radicality. Note that it doesn’t matter whether the elementary cells of the crystal follow straight lines, or whether there is some curvature in their arrangements. Strangely enough, for a modernist there is never a particular intention in producing such stuff. Intentions are not needed at all if the objects bear the meaning. The modernist’s expectation is that everything the human mind can accomplish under such conditions is just uncovering the truth. Crystals just happen to be there, whether in modernist architecture or in the physico-chemistry of minerals.

Strictly speaking, it is deeply non-modern, perhaps ex-modern, to investigate the question why even modernists find structures or processes like the following mysteriously (not: mystically!) beautiful, or at least interesting. Well, I do not know, of course, whether they indeed felt like that, or whether they just pretended to do so. At least they said so… Here are the artefacts3:

Figure 1: a (left): Michael Hansmeyer column [4]; b (right): Turing-McCabe pattern (for details see this).

These structures are neither natural nor geometrical. Their common structural trait is the local instantiation of a mechanism, that is, a strong dependence on the temporal and spatial local context: subdivision in case (a), and a probabilistically instantiated set of “chemical” reactions in case (b). For the modernist mindset they are simply annoying. They are there, but there is no analytical tool available to describe them as “objects” or to describe their genesis. Neither example shows “objects” with perceivable properties that would be well-defined for the whole entity. Rather, they represent a particular temporal cut in the history of a process. Without considering their history—which includes the contingent unfolding of their deep structure—they remain completely incomprehensible, despite the fact that on the microscopic level they are well-defined, even deterministic.

From the perspective of primary objects they are separated from comprehensibility by the chasm of idealism, or should we say hyper-idealistic conditioning? Yet, for both there exists a set of precise mathematical rules. The difference to machines is just that these rules describe mechanisms, not anything like the shape or the level of the entirety. The effect of these mechanisms on the level of the collective, however, can’t be described by the rules for the mechanism. It can’t be described at all by any kind of analytical approach, as is possible, for instance, in many areas of physics and, consequently, in engineering, which so far is by definition always engaged in building and maintaining fully determinate machines. This notion of the mechanism, including the fact that only the concept of mechanism allows for a thinking that is capable of comprehending emergence and complexity—and, philosophically, potential—is maybe one of the strongest differences between modernist thinking and “organicist” thinking (which has absolutely nothing to do with bubble architecture), as we may call it preliminarily.
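The kind of locally instantiated mechanism described above can be made concrete in a few lines of code. The following is only a minimal sketch, and it uses the Gray-Scott reaction-diffusion model as a stand-in (the McCabe variant shown in Figure 1 is a more involved, multi-scale cousin of the same family): every update rule is strictly local and precisely defined, yet the pattern that emerges on the level of the whole has no closed-form, “object-like” description.

```python
import numpy as np

def laplacian(Z):
    # Discrete 4-neighbor Laplacian with periodic boundaries:
    # each cell "sees" only its immediate local context.
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott(n=64, steps=2000, Du=0.16, Dv=0.08, f=0.035, k=0.065, seed=0):
    # U, V are two "chemical" concentrations on an n x n grid.
    rng = np.random.default_rng(seed)
    U = np.ones((n, n))
    V = np.zeros((n, n))
    # A small perturbation plus noise seeds the process; the eventual
    # pattern depends on this contingent history, not on any global plan.
    U[n//2-3:n//2+3, n//2-3:n//2+3] = 0.50
    V[n//2-3:n//2+3, n//2-3:n//2+3] = 0.25
    V += 0.01 * rng.random((n, n))
    for _ in range(steps):
        uvv = U * V * V  # the local reaction term
        U += Du * laplacian(U) - uvv + f * (1 - U)
        V += Dv * laplacian(V) + uvv - (f + k) * V
    return U, V

U, V = gray_scott()
```

The microscopic rules are fully deterministic given the seed, but to know what the grid looks like after 2000 steps there is, in general, no shortcut: one has to run the history. That is the point the text makes against analytical, object-first description.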

Here it is probably appropriate to cite the largely undervalued work of Charles Jencks, who was one of the first in the domain of architecture/urbanism to propose the turn to complexity. Yet, since he did not have a well-explicated formulation (based on an appropriate elementarization) at his disposal, we have neither been able to bring his theory “down to earth” nor to connect it to more abstract concepts. People like Jencks, Venturi, “parts of” Koolhaas (and me:)—or Deleuze or Foucault in philosophy—never have been modernist. Except for the historical fact that they live(d) in a period that followed the blossoming of modernism, there is no other justification to call them or their thinking “post-modern”. It is not the use of clear arguments that they reject; it is the underlying set of beliefs.

In modernism, that is, in the practice of the belief set shown above, collective effects are excluded apriori, metaphysically as well as methodologically, as we will see. Statistics is by definition not able to detect “patterns”. It is an analytic technique, of which people believe that its application excludes any construction. This is of course a misbelief; the constructive steps are just shifted into the side-conditions of the formulas, resulting in a deep methodological subjectivity concerning the choice of a particular technique, or formula respectively.

This affects the perspective on society as well as individual perception and thought. Slightly metaphorically speaking, everything is believed to be (conceptual) dust, and to remain dust. The belief in independence, fired perhaps by a latent skepticism since Descartes, has invaded the methods and the practices. At most, so the belief goes, one could find different kinds of dust, or different sizes of the hives of dust, governed by a time-inert, universal law. In turn, wherever laws are imposed on “nature”, the subject matter turns into conceptual dust.

Something like a Language Game, even in combination with transcendental conditionability, must be almost incomprehensible for a modernist. I think they do not even see its possibility. While analytic philosophy is largely the philosophy that developed within modernism (one might say that it is thus not philosophy at all), the philosophical stances of Wittgenstein, Heidegger or Deleuze are outside of it. The instances of misunderstanding Wittgenstein as a positivist are countless! Closely related to the neglect of collective effects is the dismissal of the inherent value of the comparative approach. Again, that’s not an accusation. It’s just the description of an effect that emerges as soon as the above belief set turns into a practice.

The problem with modernism is indeed tricky. On the one hand, it made engineering blossom. Engineering, as it has been conceived since then, is a strictly modernist endeavor. With regard to the physical aspects of the world it works quite well, of course. In any other area, it is doomed to fail, for the very same reasons, unfortunately. Engineering of informational aspects is thus as impossible as the engineering of architecture or the engineering of machine-based episteme, not to mention the attempt to enable machines to deal with language. Or to deal with the challenges emerging in urban culture. Just to avoid misunderstandings: Engineering is helpful to find technical realizations for putative solutions, but it never can deliver any kind of solution itself, except for the effect that people assimilate and re-shape the products of urban engineering through their usage, turning them into something different than intended.

2.2. Meaning

The most problematic effects of the idea of “primary objects” are probably the following:

  • the rejection of the creational power of unconscious or even purely material entities;
  • the idea that meaning can be attached to objects;
  • the idea that objects can be represented and must be represented by ideas.

These strong consequences do not concern just epistemological issues. In modernism, “objectivity” has nothing to do with the realm of the social. It can be justified universally and on purely formal grounds. We already mentioned that this may work in large parts of physics—it is challenged in quantum physics—but certainly not in most biological or social domains.

In his investigation of thought, Deleuze identifies representationalism ([9], p.167) as one of the eight major presuppositions of large parts of philosophy, especially idealism in the line from Plato, Hegel, and Frege up to Carnap.

(1) the postulate of the principle, or the Cogitatio natura universalis (good will of the thinker and good nature of thought); (2) the postulate of the ideal, or common sense (common sense as the concordia facultatum and good sense as the distribution which guarantees this concord); (3) the postulate of the model, or of recognition (recognition inviting all the faculties to exercise themselves upon an object supposedly the same, and the consequent possibility of error in the distribution when one faculty confuses one of its objects with a different object of another faculty); (4) the postulate of the element, or of representation (when difference is subordinated to the complementary dimensions of the Same and the Similar, the Analogous and the Opposed); (5) the postulate of the negative, or of error (in which error expresses everything which can go wrong in thought, but only as the product of external mechanisms); (6) the postulate of logical function, or the proposition (designation is taken to be the locus of truth, sense being no more than the neutralised double or the infinite doubling of the proposition); (7) the postulate of modality, or solutions (problems being materially traced from propositions or, indeed, formally defined by the possibility of their being solved); (8) the postulate of the end, or result, the postulate of knowledge (the subordination of learning to knowledge, and of culture to method). Together they form the dogmatic image of thought.

Deleuze by no means attacks the utility of these elements in principle. His point is just that these elements work together and should not be taken as primary principles. The effect of these presuppositions is disastrous.

They crush thought under an image which is that of the Same and the Similar in representation, but profoundly betrays what it means to think and alienates the two powers of difference and repetition, of philosophical commencement and recommence­ment. The thought which is born in thought, the act of thinking which is neither given by innateness nor presupposed by reminiscence but engendered in its genitality, is a thought without image.

As an engineer, you have probably noticed postulate (5). Elsewhere in our essay we already dealt with the fundamental misconception of starting from an expected norm, instead of from an open scale without imposed values. Only the latter attitude will allow for inherent adaptivity. Adaptive systems never will fail, because failure is conceptually impossible for them. Instead, they will cease to exist.

The rejection of the negative, which includes the rejection of the opposite as well as of dialectics, the norm, or the exception, is particularly important if we think about foundations of whatever kind (think about Hegel, Marx, attac, etc.) or about political implications. We already discussed the case of Agamben.

Deleuze finally will arrive at this “new imageless image of thought” by understanding difference as a transcendental category. The great advantage of this move is that it does not imply a necessity of symbols and operators as primary, as would be the case if we took identity as primary. The primary identical is either empty (a=a), that is, without any significance for the relation between entities, or it needs symbolification and at least one operator. In practice, however, a whole battery of models, classifications and the assumptions underlying them is required to support the claim of identity. As these assumptions are not justifiable within the claim of identity itself, they must be set, which results in the attempt to define the world. Obviously, attempting so would be quite problematic. It is even self-contradicting if contrasted with the modernist’s claim of objectivity. Setting the difference as primary, Deleuze not only avoids the trap of identity and pre-established harmony in the hive of objects, but also subordinates the object to the relation. Here he meets with Wittgenstein and Heidegger.

Together, the presupposition of identity and objecthood is necessarily, and in a bidirectional manner, accompanied by another quite abundant misunderstanding, according to which logic should be directly applicable to the world. World here is of course “everything” except logic, that is, (claimed) objects, their relations, measurement, ideas, concepts and so on. Analytic philosophy, positivism, external realism and the larger movement of modernism all apply the concept of bi-valent logic to empirical entities. It is not really a surprise that this leads to serious problems and paradoxa, which however are pseudo-paradoxa. For instance, universal justification requires knowledge. Without logical truth in knowledge, universal justification can’t be achieved. The attempt to define knowledge as consisting of positive content failed, though. Next, the formula of “knowledge as justified true belief” was proposed. In order not to fall prey to the Gettier problem, belief itself would have to be objectified. Precisely this happened in analytic philosophy when Alchourron et al. (1985) published their dramatically (and overly) reduced operationalization of “belief”. Logic is a condition; it is transcendental to its usage. Hence, it is inevitable to instantiate it. By means of instantiation, however, semantics invades equally inevitably.

Ultimately, due to the presupposed primacy of identity, modernists are faced with a particular difficulty in dealing with relations. Objects and their role should not be dependent on their interpretation. As a necessary consequence, meaning—and information—must be attached to objects as quasi-physical properties. There is but one single consequence: tyranny. Again, it is not surprising that at the heights of modernism bureaucratic tyranny was established several times.

Some modernists would probably allow for interpretation. Yet, only as a means, not as a condition, not as a primacy. Concerning their implications, the difference between the stances is a huge one. If you take it simply as a means, keeping the belief in the primacy of objects, you still would adhere to the idea of “absolute truth” within the physical world. Ultimately, interpretation would be degraded into an error-prone “method”, which ideally should have no influence on the recognition of truth, of course. The world, at least the world that goes beyond the mere physical aspects, appears as a completely different one if relations, and thus interpretation, are set as primary. Obviously, this implies also a categorical difference regarding the way one approaches that world, e.g. in science, or the way one conceives of the possible role of design. It is nothing else than a myth that a designer, architect, or urbanist designs objects. The practitioners in these professions design potentials, namely those for the construction of meaning by the future users and inhabitants (cf. [5]). There is nothing a designer can do to prevent a particular interpretation or usage. Koolhaas concludes that regarding Junkspace this may lead to a trap, or a kind of betrayal [3]:

Narrative reflexes that have enabled us from the beginning of time to connect dots, fill in blanks, are now turned against us: we cannot stop noticing—no sequence is too absurd, trivial, meaningless, insulting… Through our ancient evolutionary equipment, our irrepressible attention span, we helplessly register, provide insight, squeeze meaning, read intention; we cannot stop making sense out of the utterly senseless… (p.188)

I think that on the one hand Koolhaas here accepts the role of interpretation; yet, somewhat contradictorily, he is not able to recognize that it is precisely the primacy of interpretation that enables a transformation through assimilation, hence the way out of Junkspace. Here he remains modernist to the full extent.

The deep reason is that for the object-based attitude there is no possibility at all to recognize non-representational coherence. (Thus, a certain type of illiteracy regarding complex texts is prevailing among “true” modernists…)

2.3. Shades of Empiricism

Science, as we understand it today—yet at least partially also as we practice it—is based on the so-called hypothetico-deductive approach of empiricism (cf. [6]). Science is still taken as a synonym for physics by many, even in philosophy of science, with only very few exceptions. There, the practice and the theory of the Life sciences are not only severely underrepresented; quite frequently biology is still reduced to physics. Physicists, and their philosophical co-workers, often claim that the whole world can be reduced to a description in terms of quantum mechanics (among many others cf. [7]). A closely related reduction, only slightly less problematic, is given by the materialist’s claim that mental phenomena should be explained completely in biological terms, that is, using only biological concepts.

The belief in empiricism is implemented in the methodological framework that is called “statistics”. The vast majority of statistical tests rest on the assumption that observations and variables are independent of each other. Some tests are devised to test for independence, or dependence, but this alone does not help much. Usually, if dependency is detected, then the subsequent tests are rearranged so as to fit the independence assumption again. In other words, any possibly actual coherence is first assumed to be nonexistent. By means of the method itself, the coherence is indeed destroyed. Yet, once it is destroyed, you will never get it back. It is quite simple: The criteria for any such construction are just missing.
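A tiny, self-contained illustration of this point (my own toy example, not taken from the essay): let one variable depend deterministically on another, i.e. maximal coherence, and the standard linear measure of association still reports (near) independence. The structure is invisible to the instrument, because the instrument was built on the independence assumption.

```python
import math
import random

# y is a deterministic function of x: perfect "coherence".
random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(10_000)]
ys = [x * x for x in xs]

def pearson(a, b):
    # Pearson correlation coefficient: covariance normalized by
    # the standard deviations, a purely linear statistic.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n
    sa = math.sqrt(sum((u - ma) ** 2 for u in a) / n)
    sb = math.sqrt(sum((v - mb) ** 2 for v in b) / n)
    return cov / (sa * sb)

# For symmetric x the correlation of x and x^2 is near zero,
# although y is fully determined by x.
r = pearson(xs, ys)
```

The method does not merely fail to find the pattern; by design it can only answer the question it encodes (linear co-variation), which is precisely the “methodology of independence” criticized above.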

From this perspective, statistics is not scientific according to science’s own measures; due to its declared non-critical and non-experimental stance it actually looks more like ideology. For a scientific method would perform an experiment to test whether something could be assumed or not. As Nobel laureate Konrad Lorenz said: I never needed statistics to do my work. What would be needed instead is a method that is structurally independent of any independence assumption regarding the observed data. Such a method would propose patterns if there are sufficiently dense hints, and none otherwise, without proposing one or the other apriori. From that perspective, it is more the representationalism in modernism that brings the problem.

This framework of statistics is far from homogeneous, though. Several “interpretations” are fiercely discussed: frequentism, Bayesianism, uncertainty, or propensity. Yet, each of them faces serious internal inconsistencies, as Alan Hájek convincingly demonstrated [8]. To make a long story short (the long version you can find over here), it is not possible to build a model without symbols, without concepts that require interpretation and further models, and outside a social practice, or without an embedding into such. Modernists usually reject such basics and eagerly claim even universal objectivity for their data (hives of dust). More than 50 years ago, Quine proved that believing otherwise should be taken as nothing else than a dogma [9]. This dogma can be conceived as a consequence of the belief that objects are the primary constituents of the world.

Of course, the social embedding is especially important in the case of social affairs such as urbanism. The claim that any measurement of data, then treated by statistical modeling (wrongly called “analysis”), could convey any insight per se is nothing but pretentious.

Dealing with data always results in some kind of construction, based on some methods. Methods, however, respond differentially to data; they filter. In other words, even applying “analytical” methods involves interpretation, often even a strong one. Unfortunately for the modernist, he has excluded the possibility of the primacy of interpretation altogether, because there are only objects out there. This hurdle is quickly overcome, of course, by the belief that meaning lies outside of interpretation. As a result, modernists believe that there is a necessary progress towards the truth. For modernists: here you may jump back to subsection 3.2. …

2.4. Machines

For Le Corbusier a house is much like a “machine for living in”. According to him, a building has clear functions that could be ascribed apriori, governed by universal relations, or even laws. Recently, people engaged in the building economy recognized that it may turn problematic to assign a function apriori, as it simply limits the sales arguments. As a result, any function tends to be stripped away from the building as well as from the architecture itself. The “solution” is a more general one. Yet, in contrast to an algebraic equation, which is instantiated before it is used, the building actually exists after being built. It is there. And up to today, not in a reconfigurable form.

Actually, the problem is not created by the tendency toward more general, or even pre-specific, solutions. It turns critical only when this generality amalgamates with the modernist attitude. The category of machines, which is synonymous with ascribing or assigning a function (understood as usage) apriori, doesn’t accept any reference to luxury. A machine containing properties or elements that don’t bear any function, at least temporarily, other than pleasure (which does not exist in a world that consists only of objects) would be considered badly built. Minimalism is not just a duty, it even belongs to the grammar of modernism. Minimalism is the actualization and representation of mathematical rigidity, which is also a necessity, as it is the only way to use signs without interpretation. At least, that is the belief of modernists.

The problem with minimalism is that it effectively excludes evolution. Either the product fits perfectly or not at all. Perfectness of the match can be expected only if the user behaves exactly as expected, which represents nothing else than dogmatism, if not worse. Minimalism in form excludes alternative interpretations and usages, deliberately so; it even has to exclude the very possibility of the alternative. How else to get rid of alternatives? Koolhaas rightly got it: by nothingness (minimalism), or by chaos.

3. Urbanism, and Koolhaas.

First, we have of course to make clear that we will be able to provide only a glimpse of the field invoked by this header. Also, our attempts here should not be understood as a proposal to separate architecture from urbanism. Regarding both theory and implementation, they increasingly overlap. When Koolhaas explains the special situation of the Casa da Música in Porto, he refers to processes like the continuation of certain properties and impressions from the surroundings into the inside of the building. Inversely, any building, even any persistent object in a city, shifts the qualities of its urban surroundings.

Rem Koolhaas, once journalist, then architect, now for more than a decade additionally someone doing comparative studies on cities, has performatively demonstrated, by means of writings such as “S,M,L,XL”, “Generic City” or “Junkspace”, that a serious engagement with the city can’t be practiced as a disciplinary endeavor. Human culture has moved irrevocably into a phase where culture largely means urban culture. Urbanists may be seen as a vanishing species that became impossible due to the generality of the field. “Culturalist” is neither a proper domain nor a suitable label. Or perhaps they moult into organizers of research in urban contexts, much as architects are largely organizers of the creation of buildings. Yet, there is an important difference: architects may still believe that they externalize something. Such a belief is impossible for urbanists, because they are part of the culture. It is thus questionable whether a project like the “Future Cities Laboratory” should indeed be called such. It is perhaps only possible to do so in Singapore, but that’s the subject of one of the next essays.

Rem Koolhaas wrote “Delirious New York” before turning to architecture and urbanism as a practitioner. There, he praised its diversity and manifoldness that, in or by means of his dreams, added up to the deliriousness of Manhattan, and probably also of his own.

Without any doubt, the particular quality of Manhattan is its empowering density, which actualizes not as the identical, but rather as heterotopia, as divergence. In some way, Manhattan may be conceived as the urban precursor of the internet [11], first built in steel, glass and concrete. Vera Bühlmann writes:

Manhattan space is, if not yet everywhere, then at least potentially everywhere in the internet, and moreover no longer limited to three, perhaps even spatial, dimensions.4

Urbanism is in urgent need of an advanced theory that refers to the power of networks. It was perhaps this “network process” that brought Koolhaas to explore the anti-thesis of the wall and the plane, the absolute horizontal and vertical separation. I say anti-thesis because Delirious New York itself behaves quite ambiguously: half-way between Hegelian, (post-)structuralist dialectics and utopia on the one side, and on the other an affirmation of heterotopias as a more advanced level of conceptualizing alienating processes, which are always also processes of selection and individuation in both directions, toward the medium and toward the “individual”. Earlier scholars like Aldo Rossi came too early to go in that direction, as networks weren’t yet recognizable as part of the Form of Life. Even Shane refers only implicitly to their associative power (nor does he refer to complexity). And Koolhaas was not, and probably still is not, aware of this problematics.

Recently, I proposed one possible approach to build such a theory, along with the according concepts, terms and practices (for more details see [12]). It is rather important to distinguish two very basic forms of networks: logistic and associative networks. Logistic networks are used everywhere in modernist reasoning about cities and culture. Yet, they refer to the network exclusively as a machine, suitable for optimizing the transport of anything. Associative networks are completely different. They do not transfer anything; they swallow, assimilate, rearrange, associate and, above all, they learn. Any associative network can learn anything. The challenge, particularly for modernist attitudes, is that one cannot control what exactly an associative network is going to learn. The interesting thing is that the concept of associative networks provides a bridge both to the area of advanced “machine”-learning and to the Actor-Network Theory (ANT) of Bruno Latour. The main contribution of ANT is its emphasis on agency, even the agency of those mostly mineral material arrangements that are usually believed to have no mental capacity.
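To make the distinction less abstract, here is a minimal sketch of an associative network: a tiny Hebbian (Hopfield-style) memory in plain Python. This is my illustration, not the model of [12]. Unlike a logistic network, it does not transport items from A to B; it assimilates patterns into its weights and re-constructs a whole pattern from a degraded cue.

```python
# A tiny Hebbian associative memory (Hopfield-style), for illustration only.
# Patterns are bipolar vectors (+1/-1). Learning = accumulating pairwise
# correlations; recall = letting a corrupted cue settle into a stored pattern.

def train(patterns):
    """Hebbian outer-product learning over bipolar patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, steps=5):
    """Synchronously update the state until it (hopefully) settles."""
    s = list(cue)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

patterns = [[1, -1, 1, -1, 1, -1],
            [1, 1, 1, -1, -1, -1]]
w = train(patterns)
noisy = [1, -1, 1, -1, 1, 1]        # first pattern with the last entry flipped
print(recall(w, noisy))             # recovers [1, -1, 1, -1, 1, -1]
```

Note that what is recalled depends on the whole weight matrix at once; there is no route from a sender to a receiver that could be “optimized”, which is exactly why such a network resists the logistic reading.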

It is clear that an associative network cannot be perceived at all under the strictly practiced presupposition of independence that is typical for modernism. Upon its implementation, the belief set of modernism tends to destroy the associativity, and hence also the almost inevitable associations between the more or less mentally equipped actors in urban environments.

When applied to cities, this presupposition breaks up relations, deliberately. Any interaction of high-rise buildings, so typical for Manhattan, is precluded intentionally. Any transfer is optimized along one single parameter: time, and secondarily space as a resource. Note that optimization always requires the apriori definition of a single target function. As soon as one allowed for multiple goals, one would be faced with the necessity of weighting and of assigning subjective expectations, which are subjective precisely due to the necessity of interpretation. In order to exclude even the possibility of this, modernists hastily agree to optimize time (as a resource under the assignment of scarcity and physicality), once understood as a transcendental condition.

As Aldo Rossi remarked already in the 1960s [13], the modernist tries to evacuate any presence of time from the city. It is not just that history is cut off and buried, largely under false premises and wrong conclusions, reducing history to mere institutional traditions (remember, there is no interpretation for a modernist!). In some ways, Koolhaas’ Junkspace could have been predicted already at the end of the 19th century. Well, the Futurists did it, semi-paradoxically. Quite consistently, Futurism was only a short phase within modernism. This neglect of time in modernism is by no means a “value” or an intention. It is a direct logical consequence of the presupposed belief set, particularly independence, logification and the implied neglect of context.

Dis-assembling the associative networks of a city inevitably results in the modernist urban conceptual dust, ruled by the paradigm of scarce time and by blindness against interpretation, patterns and non-representational coherence. This, in a nutshell, is what I would like to propose as the deep grammar of Junkspace, as it has been described by Koolhaas. Modernism did nothing else than build and actualize its conceptual dust. We may call it tertiary chaos, which in its primary form has been equal to the initial state of indiscernibility of the cosmos as a whole. Yet, this time it has been dictated by modernists. Tertiary chaos thus can be set equal to the attempt to make any condition for the possibility of discernibility vanish.

Modernists may not be aware that there is not only already a theory of discernibility, which equals the Peircean theory of the sign; there is also an adaptation and application of it to urbanism and architecture. Urbanists may know the name “Venturi”, but I seriously doubt that semiotics is on their radar. If modernists talk about semiotics at all, they usually refer to the structuralist caricature of it, as it has been put forward by de Saussure, establishing a closed version of the sign as a “triangle”. Peircean signs, and these have been used by Venturi, establish an interpretive situation. They do not refer to objects, but just to other signs. Their reference to the world is provided through instances of abstract models and a process of symbolification, which includes learning as an ability that precedes knowledge (more detail in this earlier essay). Unfortunately, Venturi’s concepts have scarcely been updated, except perhaps in the context of media facades [14]. Yet, media facades are mostly, and often vastly, misunderstood as a mere possibility to display adverts. There are good arguments supporting the view that there is more to them [15].

Modernists, including Koolhaas, employ a strange image of evolution. For him (them), evolution is pure arbitrariness, both regarding the observable entities and processes and regarding the future development. He supposes to detect “zero loyalty-and zero tolerance-toward configuration” ([3] p.182). In the same passage he simultaneously and contradictorily misses the “original” condition and blames history for its corruptive influence: “History corrupts, absolute history corrupts absolutely.” All of that is put into the context of a supposedly “permanent evolution” (his quotation marks). Most remarkably, even biologists such as S.J. Gould, claiming to be evolutionary biologists, assert that evolution is absolutely arbitrary. Well, the only way out of the contrasting fact that there is life in the form we know it is then to assume some active divine involvement. Precisely this was the stance of Gould. People like Gould (and perhaps Koolhaas) commit the representationalist fault, which prevents them from recognizing (i) the structural tendency of any evolution towards more general solutions, and (ii) that there is an evolution of evolutionarity. The modernist attitude towards evolution can again be traced back to the belief in the metaphysical independence of objects, but our interest here is different.

Understanding evolution as a concept has little to do with biology and the biological model that is called “natural evolution”. Natural evolution is just an instantiation of evolution in physico-chemical and then biological matter. Bergson was the first to address evolution as a concept [16], notably in the context of abstract memory. In a previous essay we formalized that approach and related it to biology and machine-learning. At its basis, it requires a strictly non-representational approach: species and organisms are expressed in terms of probability. Our conclusion was that in a physical world evolution inevitably takes place if there are at least two different kinds or scales of memory. Only on that abstract level can we adopt the concept of evolution into urbanism, that is, into any cultural context.

Memory can’t be equated with tradition, institutions or even the concrete left-overs of history, of course. These are just instances of memory. It is of utmost importance here not to contaminate the concept of memory again with representationalism. This memory is constructive. Memory that is not constructive is not memory but a stock, a warehouse (although these are also kinds of storage and contribute as such to memory). Memory is inherently active and associative. Such memory is the basic, non-representational element of a generally applicable evolutionary theory.

Memory cannot be “deposited” into almost geological layers of sediments, quite in contrast to the suggestions of Eisenman, whom Rajchman follows closely in his “Constructions”.

The claim of “storable memory” is even more disastrous than the claim that information could be stored. Neither are objects or items that exist independently of an interpretation; they are processes of constructive, guided interpretation. Both “storages” would only become equal to the respective immaterial processes under the condition of a strictly deterministic set of commands. Even the concept of the “rule” is already too open to serve the modernist claim of storable memory.

It is immediately clear that the dynamic concept of memory is highly relevant for any theory about urban conditions. It provides a general language from which to derive particular models and instances of association, stocks and flows that are not reducible to storage or transfers. We may even expect that whenever we meet any kind of material storage in an urban context, we should also expect association. The only condition is that there are no modernists around… Yet, storage without memory, that is, without activity, remains dead, much like a crystal, or even less than one. Cripples in the sand. The real relevance of stocks and flows becomes visible only in the realm of the non-representational, the non-material, if we conceive of them as waves in abstract density, that is, as media, conveying the potential for activity as a differential. Physicalists and modernists like Christianse or Hillier will never understand that. Just think of the naïve empirics they are performing around the world, calling it cartography.

This includes deconstructivism as well. Derrida’s deconstructivism can be read as a defensive war against the symbolification of the new, the emerging, the complex, the paradox of sense. His main weapon is the “trace”, of which he explicitly states that it could not be interpreted at all. Thus, Derrida, as master of logical flatness and modernist dust, is the real enemy of progress. Peter Sloterdijk, the prominent contemporary German “philosopher”5, once called Derrida the “Old Egyptian”. Nothing would fit Derrida better, who lives in the realm of shadows and for whom life is just a short transitory phase, hopefully “survived” without too many injuries. The only metaphor possible on that basis is titanic geology. Think of some of Eisenman’s or Libeskind’s works.

Figure 2: Geologic-titanic shifts induced by the logical flatness of deconstructivism

a: Peter Eisenman, Aronoff Center for Design and Art in Cincinnati (Ohio) (taken from [11]); the parts of the building are treated as blocks, whose dislocation recalls that of geological sediments (or the work of titans).

b: Daniel Libeskind, Victoria and Albert Museum Boilerhouse Extension. Secondary chaos, inducing Junkspace through its isolationist “originality”, conveying “defunct myths” (Koolhaas in [3], p.189).

Here we finish our exploration of generic aspects of the structure of modernist thinking. Hopefully, the sections so far provide some insight into modernism in general, and into the struggles Koolhaas fights in “Junkspace”.

4. Redesigning Urbanism

Redesigning urbanism, that is, unlocking it from modernist phantasms, is probably much simpler than it may look at first sight. Well, not exactly simple, at least for modernists. Everything is about the presuppositions. Dropping the metaphysical belief in independence, without getting trapped by esotericism or mysticism, might well be the cure. Of course, metaphysical independence needs to be removed from every level and every aspect of urbanism, starting with the necessary empirical work, which of course is already an important part of the construction work. We already mentioned that the notion of “empirical analysis” pretends neutrality, objectivity (as independence from the author) and validity. Yet, this is pure illusion. Independence should also be abandoned in its form of searching for originality or uniqueness, of trying to set an unconditional mark in the cityscape. By that we don’t refer to morphing software, of course.

The antidote against isolationism, analyticity and logic is already well-known. To provide coherence you have to defy splintering and abjure the belief in (conceptual) dust. The candidate tool for it is story-telling, albeit in a non-representational manner, respecting difference and heterotopias from the beginning. In turn this also means to abandon utopias and a-topias, and to embrace complexity and a deep concept of prevailing differentiation (in a subsequent essay we will deal with that). As citizens, we are not interested (anymore) in non-places and deserts of spasmodic uniqueness, or in the mere “solution of problems” either (see Deleuze about the dogmatic image of thought as cited above). Changing the perspective from the primacy of analysis to the primacy of story-telling immediately reveals the full complexity of the respective Form of Life, to which we refer here as a respectful philosophical concept.

It is probably pretentious to speak of urbanism in this way as a totality. There are of course, and always have been, people who engaged in the urban condition based on a completely different set of beliefs, righteously non-modern. Those people start with the pattern and never tear it apart. Those people are able to distinguish structure, genesis and appearance. In biology, this distinction has been instantiated in the perspectives of the genotype, the phenotype, and, in bio-slang, evo-devo, the compound made from development, growth and evolution. These are tied together (necessarily) by complexity. In philosophy, the respective concepts are immanence, the differential, and the virtual.

For urbanism, take for instance the work of David Shane (“Recombinant Urbanism“). Shane’s work, which draws much on Kelly’s, is a (very) good starting point not only for any further theoretical work, but also for practical work.

As a practitioner, one has to defy the seduction of the totality of a master plan, as the renowned parametricists actualize it in Istanbul, or as Christianse and his office did recently at the main station in Zürich. Both produce pure awfulness, castles of functional uniformity, because they express the totality of the approach even visually. Even in Singapore’s URA (Urban Redevelopment Authority), the master plan has been relativized in favor of a (slightly) more open conceptualization. Designers have to learn that not less is more, but rather that partial nothingness is more. Deliberate non-planning, as Koolhaas has repeatedly emphasized. This should not be taken representationally, of course. It does not make any sense to grow “raw nature”, jungles within the city, neither for the city nor for the “jungle”. Before a crystal can provide soil for real life, it must decay, precisely because it is a closed system (see figure 3 below). Adaptive systems replace parts and melt holes to build structures, without decaying at all. We will return to this aspect of differentiation in a later article.

Figure 3: Pruitt-Igoe (St. Louis), being blasted in 1972. Charles Jencks called this event “one of the deaths of modernism”. This was not the only tear-down there. Laclede, a neighborhood near Pruitt-Igoe made from small, single-flat houses, failed as well, the main reasons being an unfortunate structure of the financial model and political issues, namely the separation of “classes” and apartheid (see this article).

The main question for finding a practicable process therefore is: which questions should we address in order to build an analytics under the umbrella of story-telling that avoids the shortfalls of modernism?

We might again take a look at biology (as a science). Like urbanism, biology is also confronted with a totality. We call it life. How to address reasonable, that is, fruitful questions to that totality? Biology has already found a set of answers, which nevertheless are not respected by the modernist version of this science, mainly expressed as genetics. The first insight was that “nothing in biology makes sense except in the light of evolution.” [17] Which would be the respective question for urbanism? I can’t give an answer here, but it is certainly not independence. This we can know through the lesson told by “Junkspace”. Another, almost ridiculous anti-candidate is sustainability, as far as it is conceived in terms of scarcity of mainly physical resources instead of social complexity. Perhaps we should remember the history of the city beyond its “functionality”. Yet, that would mean first developing an understanding of (abstract) evolution, instantiating it, and then deriving a practicable model for urban societies. What does it mean to be social, what does it mean to think, both taken as practices in a context of freedom? Biology then developed a small set of basic contexts along which any research should be aligned, without losing the awareness (hopefully) that there are indeed four such contexts. These have been clearly stated by Nobel laureate Tinbergen [18]. According to him, research in biology is suitably structured by four major perspectives: phylogeny, ontogeny, physiology and behavior. Are there similarly salient dimensions for structuring thought in urbanism, particularly in a putative non-modernist (neither modernist nor post-modernist) version? Particularly interesting are, imho, the intersections of such sub-domains.

Perhaps differentiation (as a concept) is indeed a (the) proper candidate for the grand perspective. We will discuss some aspects of this in the next essay: it includes growth and its modes, removal, replacement, deterioration, the problem of the generic, the difference between development and evolution, and a usable concept of complexity, to name but a few. In the philosophy of Gilles Deleuze, particularly the Thousand Plateaus, Difference and Repetition and the Fold, we can already find a good deal of theoretical work about the conceptual issues around differentiation. Differentiation includes learning, individually and collectively (I do NOT refer to swarm ideology here, nor to collectivist mysticism either!), which in turn would bring the (abstract) mental into any consideration of urbanism. Yet, wasn’t mankind differentiating and learning all the time? The challenge will be to find a non-materialist interpretation of these in materialist times.

Notes

1. Cited after [11]

2. Its core principles are the principle of the excluded middle (PEM) and the principle of non-contradiction (PNC). Both principles are equivalent to the concept of macroscopic objects, albeit only in a realist perspective, i.e. under the presupposition that objects are primary against relations. This is, of course, quite problematic, as it excludes an appropriate conceptualisation of information.

Both the PEM and the PNC allow for the construction of paradoxes like the Taylor Paradox. Such paradoxes may be conceived as “Language Game Colliders”, that is, as conceptual devices that commit a mistake concerning the application of the grammar of language games. Usually, they bring countability and the sign for non-countability into conflict. First, it is a fault to compare a claim with a sign; second, it is stupid to make contradicting claims. Note that here we are allowed to speak of “contradiction”, because here we are ourselves following the PNC. The Taylor Paradox is of course, like any other paradox, a pseudo-problem. It appears only due to an inappropriate choice or handling of the conceptual embedding, or due to the dismissal of the concept of the “Language Game”, which mostly results in the implicit claim of the existence of a “Private Language”.

3. Vera Bühlmann, “Articulating quantities, if things depend on whatever can be the case“, lecture held at The Art of Concept, 3rd Conference: CONJUNCTURE — A Series of Symposia on 21st Century Philosophy, Politics, and Aesthetics, organized by Nathan Brown and Petar Milat, Multimedia Institute MAMA in Zagreb, Croatia, June 15-17, 2012.

4. German orig.: “Manhattan Space ist, wenn schon nicht überall, so doch im Internet potentiell überall, und zudem nicht mehr auf drei vielleicht gar noch räumliche Dimensionen beschränkt.”

5. Peter Sloterdijk does not like to be called a philosopher.

References

  • [1] Michel Foucault, Archaeology of Knowledge. Routledge 2002 [1969].
  • [2] Vera Bühlmann, Printed Physics, de Gruyter, forthcoming.
  • [3] Rem Koolhaas (2002). Junkspace. October, Vol. 100, “Obsolescence”, pp. 175-190. MIT Press
  • [4] Michael Hansmeyer, his website about these columns.
  • [5] “Pseudopodia. Prolegomena to a Discourse of Design”. In: Vera Bühlmann and Martin Wiedmer, pre-specifics. Some Comparatistic Investigations on Research in Art and Design. JRP|Ringier Press, Zurich 2008. pp. 21-80 (English edition). Available online.
  • [6] Wesley C. Salmon, Causality and Explanation. Oxford University Press, Oxford 1998.
  • [7] Michael Epperson (2009). Quantum Mechanics and Relational Realism: Logical Causality and Wave Function Collapse. Process Studies, 38(2): 339-366.
  • [8] Alan Hájek (2007). The Reference Class Problem is Your Problem Too. Synthese 156 (3):563-585.
  • [9] W.v.O. Quine (1951), Two Dogmas of Empiricism. The Philosophical Review 60: 20-43.
  • [10] Gilles Deleuze, Difference and Repetition. Columbia University Press, New York 1994 [1968].
  • [11] Vera Bühlmann, inhabiting media. Thesis, University of Basel (CH), 2009.
  • [12] Klaus Wassermann (2010). SOMcity: Networks, Probability, the City, and its Context. eCAADe 2010, Zürich. September 15-18, 2010. (pdf)
  • [13] Aldo Rossi, The Architecture of the City. MIT Press, Cambridge (Mass.) 1982 [1966].
  • [14] Christoph Kronhagel (ed.), Mediatecture, Springer, Wien 2010. pp.334-345.
  • [15] Klaus Wassermann, Vera Bühlmann, Streaming Spaces – A short expedition into the space of media-active façades. In: Christoph Kronhagel (ed.), Mediatecture, Springer, Wien 2010. pp. 334-345. Available here.
  • [16] Henri Bergson, Matter and Memory. (Matière et Mémoire 1896) transl. N.M. Paul & W.S. Palmer. Zone Books 1990.
  • [17] Theodosius Dobzhansky, Genetics and the Origin of Species, Columbia University Press, New York 1951 (3rd ed.) [1937].
  • [18] Niko Tinbergen (1963). On Aims and Methods in Ethology, Z. Tierpsych., (20): 410–433.

۞

Elementarization and Expressibility

March 12, 2012 § Leave a comment

Since the beginnings of the intellectual adventure that we know as philosophy, elements have taken a particular and prominent role. For us, as we live as “post-particularists,” the concept of element seems not only familiar, but also simple, almost primitive. One may take this as the aftermath of the ontological dogma of the four (or five) elements and its early dismissal by Aristotle.

In fact, I think that the concept of element is seriously undervalued and hence left disregarded much too often, especially as a structural tool in the task of organizing thinking. The purpose of this chapter is thus to reconstruct the concept of “element” in an adequate manner (at least, to provide some first steps of such a reconstruction). To achieve that we have to take three steps.

First, we will try to shed some light on its relevance as a more complete concept. In order to achieve this we will briefly visit the “origins” of the concept in (pre-)classic Greek philosophy. After browsing quickly through some prominent examples, the second part then will deal with the concept of element as a thinking technique. For that purpose we strip the ontological part of it (what else?), and turn it into an activity, a technique, and ultimately into a “game of languagability,” called straightforwardly “elementarization.”

This will then lead us to the third part, which will deal with the problematics of expression and expressibility, or more precisely, with the problematics of how to talk about expression and expressibility. Undeniably, creativity is breaking (into) new grounds, and this aspect of breaking pre-existing borders also implies new ways of expressing things. To get clear about creativity thus requires getting clear about expressibility in advance.

The remainder of this essay is arranged into the following sections:

The Roots1

Like many other concepts, the concept of “element” first appeared in classical Greek culture. As a concept, the element, Greek “stoicheion” (in Greek letters ΣΤΟΙΧΕΙΟΝ), is quite unique because it is a synthetic concept, without predecessors in common language. The context of its appearance is the popularization of the sundial by Anaximander around 590 B.C. Sundials had been known before, but it was quite laborious to create them since they required a so-called skaphe, a hollow sphere as the projection site of the gnomon’s shadow.

Figure 1a,b. Left (a): a sundial in its ancient (primary) form based on a skaphe, which allowed for equidistant segmentation. Right (b): the planar projection involves hyperbolas and complicated segmentation.

The planar projection promised a much easier implementation, yet it involves the handling of hyperbolas, which even change with the earth’s seasonal inclination. Moreover, the hours can’t be indicated by equidistant segments any more. Thus, the mathematical complexity was beyond the capabilities of that time. The idea (presumably Anaximander’s) then was to determine the points for the hours empirically, using “local” time (measured by water clocks) as a reference.
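The contrast between the two projection geometries can be sketched numerically. The following is a deliberately simplified illustration (the geometry is assumed, not a full sundial model): on the spherical skaphe, equal hour angles of the sun sweep out equal arcs, so the hour marks are equidistant; a gnomonic projection onto a plane stretches the same angles by tan θ, so the marks drift apart toward the edge of the dial.

```python
import math

# Simplified geometry (assumption for illustration): five equal hour
# angles of 15 degrees each, as swept by the sun.
angles = [math.radians(15 * h) for h in range(1, 6)]

# On the hollow sphere of a skaphe the shadow travels along an arc, so
# the marks are proportional to the angle itself: equidistant.
skaphe_marks = angles

# A gnomonic projection onto a plane maps the angle to tan(angle): the
# very same hours now produce unevenly spaced marks.
planar_marks = [math.tan(a) for a in angles]

skaphe_steps = [b - a for a, b in zip(skaphe_marks, skaphe_marks[1:])]
planar_steps = [b - a for a, b in zip(planar_marks, planar_marks[1:])]
```

The skaphe steps are all equal, while the planar steps grow steadily; this is the kind of complexity Anaximander sidestepped by determining the hour points empirically.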

Anaximander also became aware of the particular status of a single point in such a non-trivial “series”. It can’t be thought without reference to the whole series, and additionally, there was no simple rule that would have allowed for its easy reconstruction. This particular status he called an “element”, a stoicheion. Anaximander’s element is best understood as a constitutive component, a building block for the purpose of building a series; note the instrumental twist in his conceptualization.

From this starting point, the concept has been generalized in its further career, soon denoting something like “basics,” or “basic principles”. While Empedokles conceived the four elements, earth, air, water and fire, almost as divine entities, it was Platon (Theaitetos 201, Timaios 48B) who developed the more abstract perspective of “elements as basic principles.”

Yet, the road of abstraction does not know a well-defined destination. Platon himself introduced the notion of “element of recognition and proof” for stoicheia. Isokrates, then, a famous rhetorician and contemporary of Platon, extended the reach of stoicheia from “basic component / principle” to “basic condition.” This turn is quite significant, since as a consequence it inverts the structure of argumentation from idealistic, positively definite claims to the constraints of such claims; it even opens the perspective onto the “condition of possibility”, a concept that became one of the cornerstones of Kantian philosophy more than 2000 years later. No wonder Isokrates is said to have opposed Platon’s arguments.

Nevertheless, in all these philosophical uses the stoicheia, the elements, served as ontological principles in the context of the enigma of the absolute origin of all things and the search for it. This is all the more remarkable as the concept itself had been constructed some 150 years earlier in a purely instrumental manner.

Aristotle dramatically changed the ontological perspective. He dismissed the “analysis based on elements” completely and established what is now known as “analysis of moments”, to which the concepts of “form” and “substance” are central. Since Aristotle, elemental analysis has been regarded as a perspective heading towards “particularization”, while the analysis of moments is believed to be directed at generalization. Elemental analysis and ontology are considered to be somewhat “primitive,” probably due to their (historic) neighborhood to the dogma of the four elements.

True, the dualism made from form and substance is more abstract and more general. Yet, as a concept it not only loses contact with the empirical world, it is also completely devoid of processual aspects. Moreover, it is quite difficult, if not impossible, to think “substance” in a non-ontological manner. It seems as if that dualism abolishes even the possibility of thinking in a manner other than ontology, hence implying a whole range of severe blind spots: the primacy of interpretation, the deeply processual, event-like character of the “world” (the primacy of “process” against “being”), the communal aspects of human lifeforms and their creational power, and the issue of localized transcendence are just the most salient issues that are rendered invisible in the perspective of ontology.

Much more could be said, of course, about the history of those concepts. Aristotle’s introduction of the concept of substance is definitely not without its own problems, paving the way for the (overly) pronounced materialism of our days. And there is, of course, the “Elements of Geometry” by Euclid, the most widely used mathematical textbook ever. Yet, I am neither a historian nor a philologist, thus let us now proceed with some examples. I just would like to emphasize that the “element” can be conceived as a structural topos of thinking, starting from the earliest witnesses of historical time.

2. Examples

Think about the chemical elements as they were conceived in the 19th century. Chemical compounds, so the parlance of chemists goes, are made from chemical elements, which were typified by Mendeleev according to their valence electrons and then arranged into the famous “periodic table.” Mendeleev not only constructed a quality according to which the various elements could be distinguished. His “basic principle” allowed him to make qualitative and quantitative predictions of an astonishing accuracy. He predicted the existence of chemical elements, “nature’s substances”, unknown at the time, along with their physico-chemical qualities. Since this happened in the context of natural science, he also could validate those predictions. Without the concept of those (chemical) elements, the (chemical) compounds can’t be properly understood. Today a similar development can be observed within the standard theory of particle physics, where basic types of particles are conceived as elements analogous to the chemical elements, just that in particle physics the descriptive level is a different one.

Here we have to draw a quite important distinction. The element in Mendeleev’s thinking is not identical to the chemical elements themselves. Mendeleev’s elements are (i) the discrete number (an integer between 1..7, and 0/8 for noble gases like Argon etc.) that describes the valence electrons as representatives of electrostatic forces, and (ii) the concept of “completeness” of the set of electrons in the so-called outer shell (or “orbitals”): the numbers of valence electrons of two different chemical elements tend to sum up to eight. Actually, chemical elements can be sorted into groups (gases, different kinds of metals, carbon and silicon) according to the mechanism by which they achieve this magic number (or fail to). As a result, there is a certain kind of combinatorics at work; the chemical universe is almost a Lullian-Leibnizian one. Anyway, the important point here is that the chemical elements are only a consequence of a completely different figure of thought.
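The “magic number” can be written down as a toy sketch. The valence counts below are standard textbook values, but the pairing rule is of course a gross simplification of real bonding; it is meant only to show the Lullian-Leibnizian combinatorics at work.

```python
# Valence electrons of a few main-group elements (textbook values).
valence = {"Na": 1, "Mg": 2, "Al": 3, "C": 4, "N": 5, "O": 6, "Cl": 7}

def octet_pair(e1: str, e2: str) -> bool:
    """True if the valence electrons of the two elements sum to the
    'magic' eight -- the simplest case of the combinatorics sketched
    above (a gross simplification of real chemistry, of course)."""
    return valence[e1] + valence[e2] == 8
```

Under this rule Na pairs with Cl and Mg pairs with O, while C and N do not; the chemical elements appear as mere consequences of the numerical element behind them.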

Still within chemistry, there is another famous, albeit less well-known example of abstract “basic principles”: Kekulé’s delocalized valence electrons in carbon compounds (in today’s notation: delocalized 6-π-electrons). Actually, Kekulé added the “element” of indeterminateness to the element of the valence electron. He dropped the idea of a stable state that could be expressed by a numerical value, or even by an integer. His 6-π-orbital is a cloud that cannot be measured directly as such. Today, it is easy to see that the whole area of organic chemistry is based on, or even defined by, these conceptual elements.

Another example is provided by the “Elements of Geometry” by Euclid. He called it “elements” probably for two main reasons. First, it was supposed to be complete; second, because you could not remove any of the axioms, procedures, proofs or lines of argument, i.e. any of its elements, without corrupting the compound concept “geometry.”

A further example from classical antiquity is the conceptual (re-)construction of causality by Aristotle. He obviously understood that it is not appropriate to take causality as an indivisible entity. Aristotle designed his idea of causality as an irreducible combination of four distinct elements: causa materialis, causa formalis, causa efficiens and causa finalis. To render this a bit more palpable, think about inflaming a wooden stick and then being asked: what is the cause of the stick’s burning?

Even if I put (causa efficiens) a wooden (causa materialis) stick (causa formalis) above an open flame (part of causa efficiens), it will not necessarily be inflamed until I decide that it should be (causa finalis). This is a quite interesting structure, since it could be conceived as a precursor of the Wittgensteinian perspective of the language game.

For Aristotle it made no sense to assume that any of the elements of his causality, as he conceived it, would be independent from any of the others. For him it would have been nonsense to conceive of causality as any subset of his four elements. Nevertheless, exactly this is what physics has done since Newton. In our culture, causality is almost always debated as if it were identical to causa efficiens. In Newton’s words: Actioni contrariam semper et aequalem esse reactionem. [2] Later, this postulate of actio = reactio was backed by further foundational work through larger physical theories postulating the homogeneity of physical space. Despite the success of physics, the reduction of causality to physical forces remains just that: a reduction. Applying this principle then again to any event in the world generates specific deficits, which are well visible in large parts of contemporary philosophy of science when it comes to the debate about the relation of natural science and causality (cf. [3]).

Aristotle himself did not call the components of causality “elements.” Yet, the technique he applied is just that: an elementarization. This technique was quite popular and well known from another discourse, involving earth, water, air, and fire. Eventually, this model had to be abolished, but it is quite likely that the idea of the “element” was inherited down to Mendeleev.

3. Characterizing the Concept of “Element”

As announced before, we would like to strip any ontological flavor from the concept of the element. This marks the difference between conceiving elements as part of the world or, alternatively, as part of a tool-set used in the process of constructing a world. This means taking the concept purely instrumentally, or in other words, as a language game. It is thus also one of many examples of the necessity to remove any content from philosophy (ontology always claims some kind of such content, which is highly problematic).

A major structural component of the language game “element” is that the entities denoted by it are used as anchors for a particular non-primitive compound quality, i.e. a quality that can’t be perceived by just the natural five (or six, or so) senses.

On the other hand, they are also strictly different from axioms. An axiom is a primitive proposition that serves as a starting point in a formal framework, such as mathematics. The intention behind the construction of axioms is to utilize common sense as a basis for more complicated reasoning. Axioms are considered facts that cannot seriously be disputed as such. Thus, they are indeed the main element in the attempt to secure mathematics as an unbroken chain of logic-based reasoning. Of course, the selection of a particular axiom for a particular purpose can always be discussed. But in itself, an axiom is a “primitive”, either a simple, more or less empirical fact, or a simple mathematical definition.

The difference to elements is profound. One can always remove a single axiom from an axiomatic system without corrupting the sense of the latter. Take for instance the axiom of associativity: dropping it for the multiplication of an algebra leads to more general, non-associative structures such as Lie algebras. Or, removing the “axiom” of parallel lines from the Euclidean axioms brings us to more general notions of geometry.

In contrast to that pattern, removing an element from an elemental system destroys the sense of the system. Elemental systems are primarily thought of as a whole, as a non-decomposable thing, and any of the used elements is synthetically effective. Their actual meaning is only given by being part of a composition with other elements. Axioms, in contrast, are parts of decomposable systems, where they act as constraints. Removing them usually leads to improved generality. The axioms that build an “axiomatic system” are not tied to each other; they are independent as such. Of course, their interaction will always create a particular conditionability, but that is a secondary effect.

The synthetic activity of elements simply mirrors the assumption that there is (i) a particular irreducible whole, and (ii) that the parts of that whole have a particular relationship to the embedding whole. In contrast to the prejudice that elemental analysis results in an unsuitable particularization of the subject matter, I think that elements are highly integrated, yet themselves non-decomposable idealizations of compound structures. This is true for the quaternity of earth, air, water and fire, but also for the valence electrons in chemistry or the elements of complexity, as we have introduced them here. Elements are made from concepts, while axioms are made from definitions.

In some way, elements can be conceived as the operationalization of beliefs. Take a belief, symbolize it, and you get an element. From this perspective it again becomes obvious (on a second route) that elements cannot be conceived as something natural or even ontological; they cannot be discovered as such in a pure or stable form. They can’t be used to prove propositions in a formal system, but they are indispensable for explaining or establishing the possibility of thinking a whole.

Mechanism and organism are just different terms that can be used to talk about the same issue, albeit in a less abstract manner. Yet, it is clear that integrated phenomena like “complexity,” or “culture,” or even “text” can’t be appropriately handled without the structural topos of the element, regardless of which specific elements are actually chosen. In any of these cases it is a particular relation between the parts and the whole that is essential for the respective phenomenon as such.

If we accept the perspective that conceives of elements as stabilized beliefs, we may recognize that they may be used as building blocks for the construction of a consistent world. Indeed, we may well say that it is due to their properties as described above, their positioning between belief and axiom, that we can use them as an initial scaffold (Gestell), which in turn provides the possibility for targeted observation, and thus for consistency, understood both as substance and as logical quality.

Finally, we should say a few words on the relation between elements and ideas. Elsewhere, we distinguished ideas from concepts. Ideas can’t be equated with elements either. Just the other way round: elements may contain ideas, but also concepts, relations and systems thereof, empirical hypotheses or formal definitions. Elements are, however, always immaterial, even in the case of chemistry. For us, elements are immaterial synthetic compounds used as interdependent building blocks of other immaterial things like concepts, rules, or hypotheses.

Many, if not all concepts are built from elements in a similar way. The important issue is that elements are synthetic compounds which are used to establish further compounds in a particular manner. In the beginning there need not be any kind of apriori justification for a particular choice or design. The only requirement is that the compound built from them allows for some kind of beneficial usage in creating higher integrated compounds which would not be achievable without them.

4. Expressibility

Elements may well be conceived as epistemological stepping stones, capsules of belief that we use to build up beliefs. Thus, the status of elements is somewhere between models and concepts: not as formal and restricted as models, not as transcendental as concepts, yet still with much stronger ties to empirical conditions than ideas.

It is quite obvious that such a status reflects a prominent role for perception as well as for understanding. The element may well be conceived as an active zone of differentiation, a zone from which different kinds of branches emerge: ideas, models, concepts, words, beliefs. We could also say that elements are close to the effects and the emergence of immanence. The ΣΤΟΙΧΕΙΟΝ itself, its origins and transformations, may count as an epitome of this zone where thinking creates its objects. It is “here” that expressibility finds its conditions.

At that point we should recall – and keep in mind – that elements should not be conceived as an ontological category. Elements unfold as (rather than “are”) a figure of thought, an idiom of thinking, a figure for thought. Of course, we can deliberately visit this area; we may develop certain styles to navigate in these (sometimes) misty areas. In other words, we may develop a culture of elementarization. Sadly enough, positivism, which emerged from the materialism of the 19th century on the line from Auguste Comte down to Frege, Husserl, Schlick, Carnap and van Fraassen (among others), destroyed much of that style. In my opinion, much of the inventiveness of the 19th century could be attributed to a certain, yet largely unconscious, attitude towards the topos of the “element.”

No question, elevating the topos of the element into consciousness, as a deliberate means of thinking, is quite promising. Hence, it is also of some importance to our question of machine-based episteme. We may just add a further twist to this overarching topic by asking about the mechanisms and conditions that are needed for the possibility of “elementarization”. In still other words, we could say that elements are the main element of creativity. And we may add that the issue of expression and expressibility is not about words and texts, albeit texts and words have potentiated the dynamics and the density of expressibility.

Before we can step forward to harvest the power of elementarization, we have to spend some effort on the issue of the structure of expression. The first question is: what exactly happens if we invent and impose an element in and onto our thoughts? The second salient question is about the process forming the element itself. Is the “element” just a phenomenological, descriptional parlance, or is it possible to give some mechanisms for it?

Spaces and Dimensions

As already demonstrated by Anaximander’s ΣΤΟΙΧΕΙΟΝ, elements put marks into the void. The “element game” introduces discernibility, and it is central to the topos of the element that it implies a whole, an irreducible set, of which it is a constitutive part. This way, elements don’t act just as signposts that would indicate a direction in an already existing landscape. It is more appropriate to conceive of them as generators of landscape. Even before words, whether spoken or written, elements are the basic instance of externalization, abstract writing, so to speak.

It is the abstract topos of elements that introduces the complexities around territorialization and deterritorialization into thought, a dynamics that can never come to an end. Yet, let us focus here on the generative capacities of elements.

Elements transform existing spaces or create completely new ones; they represent the condition for the possibility of expressing anything. The implications are rather strong. Looking back from that conditioning to the topos itself, we may recognize that wherever there is some kind of expression, there is also a germination zone of ideas, concepts and models, and above all, belief.

The space implied by elements is a particular one, though, due to the fact that it inherits the aprioris of wholeness and non-decomposability. Non-decomposability means that the elemental space loses essential qualities if one of the constituting elements is removed.

This may be contrasted with the Cartesian space, the generalized Euclidean space, which is the prevailing concept of space today. A Cartesian space is spanned by dimensions that are set orthogonal to each other. This orthogonality of the dimensional setup allows one to change the position in just one dimension while keeping the position in all the other dimensions unchanged, constant. The dimensions are independent of each other. Additionally, the quality of the space itself does not change if we remove one of the dimensions of an n-dimensional Cartesian space (n>1). Thus, the Cartesian space is decomposable.
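The two properties at stake here, independence of the dimensions and decomposability of the space, can be made concrete in a few lines (a trivial sketch, but it states exactly those two properties):

```python
import math

p = (2.0, -1.0, 5.0)           # a point in a 3-dimensional Cartesian space
q = (p[0] + 3.0, p[1], p[2])   # change the position in one dimension only

# Independence: moving along the first dimension leaves all other
# coordinates untouched.
assert q[1:] == p[1:]

# Decomposability: projecting the third dimension away leaves an
# ordinary 2-dimensional Cartesian space; the displacement along the
# remaining axes is preserved.
dist_2d = math.dist(p[:2], q[:2])
assert math.isclose(dist_2d, 3.0)
```

Exactly this combination of properties will fail for the aspectional space introduced below, where no position can be changed without affecting the others.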

Spaces are inevitably implied as soon as entities are conceived as carriers of properties; in fact, as soon as at least one (“1”!) property is assigned to them. These assigned properties, or short: assignates, can then be mapped to different dimensions. A particular entity thus becomes visible as a particular arrangement in the implied space. In the case of Cartesian spaces, this arrangement consists of a sheaf of vectors, which is as specific to the mapped entity as could be desired.

Dimensions may refer to sensory modalities, to philosophical qualia, or to constructed properties of development in time, that is, concepts like frequency, density, or any kind of pattern. Dimensions may even be purely abstract, as in the case of random vectors or random graphs, which we discussed here, where the assignate refers to some arbitrary probability or structural, method-specific parameter.

Many phenomena remain completely mysterious if we do not succeed in setting up the (approximately) right number of dimensions or aspects. This has been famously demonstrated by Abbott and his Flatland [4], and by Ian Stewart and his Flatterland [5]. Other examples are the so-called embedding dimension in complex systems analysis, or the analysis of (mathematical) cusp catastrophes by Ian Stewart [6]. Dimensionality also plays an important role in the philosophy of science, where Ronald Giere uses it to develop a “scientific perspectivism.” [7]

Consider the example of a cloud of points in 3-dimensional space which forms a spiral-like shape, with the main axis of the shape parallel to the z-axis. For points in the upper half of the cloudy spiral there shall be a high probability that they are blue; those in the lower half shall be mostly red. In other words, there is a clear pattern. If we now project the points onto the x-y-plane, i.e. if we reduce dimensionality, we lose the possibility of recognizing the pattern. Yet, the conclusion that there “is” no pattern is utterly wrong. The selection of a particular number of dimensions is a rather critical operation. Hence, taking action without reflecting on the dimensionality of the space of expressibility quite likely leads to severe misinterpretations. The cover of Douglas Hofstadter’s first book “Gödel, Escher, Bach” featured a demonstration of the effect of projection from higher to lower dimensionality; another presentation can be found on YouTube, featuring Carl Sagan on the topic of dimensionality.
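This loss can be simulated directly. A minimal sketch with made-up data: points on a noisy spiral, colored by height. In 3-D the color is perfectly predictable from z; after projection onto the x-y-plane both colors occupy the same annulus and the pattern vanishes.

```python
import math
import random

random.seed(0)
points = []
for _ in range(1000):
    t = random.uniform(0.0, 4.0 * math.pi)   # position along the spiral
    z = t / (4.0 * math.pi)                  # height, parallel to the z-axis
    x = math.cos(t) + random.gauss(0.0, 0.1)
    y = math.sin(t) + random.gauss(0.0, 0.1)
    color = "blue" if z > 0.5 else "red"     # upper half blue, lower half red
    points.append((x, y, z, color))

# In 3-D the pattern is trivial: the plane z = 0.5 separates the colors.
assert all((pt[2] > 0.5) == (pt[3] == "blue") for pt in points)

# After projecting onto the x-y-plane, both colors lie on the same ring:
# their mean radii are nearly identical, so the pattern is invisible.
def mean_radius(col):
    rs = [math.hypot(pt[0], pt[1]) for pt in points if pt[3] == col]
    return sum(rs) / len(rs)
```

Any classifier working only on the projected (x, y) coordinates is reduced to guessing, although the full 3-D data is perfectly separable.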

In mathematics, the relation between two spaces of different dimensionality, the so-called manifold, may itself form an abstract space. This exercise of checking out the consequences of removing or adding a dimension/aspect from the space of expressibility is a rewarding game even in everyday life. In the case of fractals in time series, Mandelbrot even conceptualizes a changing dimensionality of the space which is used to embed the observations over time.

Undeniably, this decomposability contributed much to the rise and the success of what we call modern science. Any of the spaces of mathematics or statistics is a Cartesian space. Riemann spaces, Hilbert spaces, Banach spaces, topological spaces etc. are all Cartesian insofar as the dimensions are arranged orthogonally to each other, thus introducing independence of elements before any other definition. Though, the real revolutionary contribution of Descartes was not the setup of independent dimensions; it was the “Copernican” move of making the “origin” movable, and with that, mobilizing the reference system of a particular measurement.

But again: by performing this mapping, the wholeness of the entity is lost. Any interpretation of the entities requires a point outside of the Cartesian dimensional system. And precisely this externalized position is not possible for an entity that itself “performs cognitive processes.”2 It would be quite interesting to investigate the epistemic role of the externalization of mental affairs through cultural techniques like words, symbols, or computers, yet that task would be huge.

Despite the success of the Cartesian space as a methodological approach, it obviously also remains true that there is no free lunch in the realm of methods and mappings. In the case of the Cartesian space this cost is as huge as its benefit, as both are linked to its decomposability. In Cartesian space it is not possible to speak about a whole; whole entities are simply nonexistent. This is indeed as dramatic as it sounds. Yet, it is a direct consequence of the independence of the dimensions. There is nothing in the structure of the Cartesian space that could be utilized as a kind of medium to establish coherence. We already emphasized that the structure of the Cartesian space implies the necessity of an external observer. This, however, is not quite surprising for a construction devised by Descartes in the age of absolutistic monarchies symbiotically tied to Catholicism, where the idea of the machine had been applied pervasively to anything and everything.

There are still further assumptions underlying the Cartesian conception of space. Probably the two most salient ones concern density and homogeneity. At first it might sound somewhat crazy to conceive of a space of inhomogeneous dimensionality. Such a space would have “holes” about which one could neither talk from within that space, nor would they be recognizable. Yet, from theoretical physics we know about the concept of wormholes, which precisely represent such inhomogeneity. Nevertheless, the “accessible” parts of such a space would remain Cartesian, so we could call the whole entity “weakly Cartesian”. A famous example is provided by Benoît Mandelbrot’s warping of dimensionality in the time domain of observations [8,9].

From an epistemological perspective, the Cartesian space is just a particular instance of the standardization, or even institutionalization, of the inevitable implication of spaces. Yet, epistemic spaces are not just 3-dimensional as Kant assumed in his investigation; epistemic spaces may comprise a large and even variable number of dimensions. Nevertheless, Kant was right about the transcendental character of space, though the space we refer to here is not just the 3d- or (n)d-physical space.

Despite the success of the Cartesian space, which builds on the elements of separability, decomposability and the externalizable position of the interpreter, it is perfectly clear that it is nothing but one particular way of dealing with spaces. There are many empirical, cognitive or mental contexts for which the assumptions underlying the Cartesian space are severely violated. Such contexts usually involve the wholeness of the investigated entity as a necessary apriori. Think of complexity, language, or the concept of life forms with its representatives like urban cultures: for any of these domains, the status of any part can’t be qualified in any reasonable manner without always referring to the embedding wholeness.

The Aspectional Space

What we need is a more general concept of space, which does not start with any assumption about decomposability (or its refutation). Since it is always possible to test and then drop the assumption of dependence (non-decomposability), but never to do so for the assumption of independence (decomposability), we should start with a concept of space which keeps the wholeness intact.

Actually, it is not too difficult to start with the construction of such a space. The starting point is provided by a method to visualize data, the so-called ternary diagram. Particularly in metallurgy and geology, ternary diagrams are abundantly in use for the purpose of expressing mixing proportions. The following Figure 2a shows a general diagram for three components A, B, C, and Figure 2b shows a concrete diagram for a three-component steel alloy at 900°C.

Figure 2a,b: Ternary diagrams in metallurgy and geology are precursors of aspectional spaces.

Such ternary diagrams are used to express the relation between different phases where the influential components all influence each other. Note that the area of the triangle in such a ternary diagram comprises the whole universe as it is implied by the components. However, in principle it is still possible (though not overly elegant) to map the ternary diagram as it is used in geology into Cartesian space, because there is a strongly standardized way of mapping values. Any triple of values (a,b,c) is mapped to the axes A, B, C such that these axes are served counter-clockwise, beginning with A. Without that rule, a unique mapping of single points from the ternary space to the Cartesian space would not be possible any more. Thus we can see that the ternary diagram does not introduce a fundamental difference compared to the Cartesian space defined by orthogonal axes.
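Such a standardized mapping can be written down explicitly. A sketch using one common convention (corner A at the origin, B at (1, 0), C at the top); the exact placement of the corners is an assumption here, the point being only that some fixed convention makes the mapping into the Cartesian plane unique:

```python
import math

def ternary_to_cartesian(a, b, c):
    """Map a composition (a, b, c) with a + b + c == 1 onto the plane.
    Assumed convention: corner A at (0, 0), B at (1, 0), and
    C at (0.5, sqrt(3)/2), serving the axes in a fixed order."""
    assert abs(a + b + c - 1.0) < 1e-9, "components must sum to 1"
    x = b + c / 2.0
    y = c * math.sqrt(3) / 2.0
    return (x, y)

# The pure components land on the three corners of the triangle:
assert ternary_to_cartesian(1, 0, 0) == (0.0, 0.0)
assert ternary_to_cartesian(0, 1, 0) == (1.0, 0.0)
```

An equal mixture (1/3, 1/3, 1/3) lands on the centroid of the triangle; every admissible composition has exactly one image point, which is why the ternary diagram, as used in geology, is still Cartesian at heart.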

Now let us drop this standard arrangement of the axes. None of the axes should be primary relative to any other. Obviously, the resulting space is completely different from the spaces shown in Fig. 2. We can keep only one of n dimensions constant while changing position in this space (by moving along an arc around one of the corners). Compare this to the Cartesian space, where it is possible to change just one and keep all the others constant. For this reason we should not call the boundaries of such a space “axes” or “dimensions” any more. By convention, we may call the scaling entities “aspections“, derived from “aspect,” a concept that, similarly to the concept of element, indicates the non-decomposability of the embedding context.

As said, the space that we are going to construct for a mapping of elements can’t be transformed into a Cartesian space any more. It is an “aspectional space”, not a dimensional space. Of course, the aspectional space, together with the introduction of “aspections” as a companion concept to “dimension”, is not just a Glass Bead Game. We urgently need it if we want to talk transparently, and perhaps even quantitatively, about the relation between parts and wholes in a way that keeps the dependency relations alive.

The requirement of keeping the dependency relations alive exerts an interesting consequence. It renders the corner points into singular points, or more precisely, into poles, as the underlying apriori assumption is just the irreducibility of the space. In contrast to the ternary diagram (which is thus still Cartesian), the aspectional space is defined neither at the corner points nor along the borders (“edges”). In other words, the aspectional space has no border, despite the fact that its volume appears to be limited. Since it would be somehow artificial to exclude the edges and corners by dedicated rules, we prefer to achieve the same effect (of exclusion) by choosing a particular structure for the space itself. For that purpose, it is quite straightforward to provide the aspectional space with a hyperbolic structure.
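The effect of such a hyperbolic structure can be made tangible with the Poincaré disk model, a standard model of hyperbolic geometry used here only as an analogy for the suggested structure: the Euclidean border of the disk lies at infinite hyperbolic distance, so from within, the space has no border at all.

```python
import math

def poincare_distance(p, q):
    """Hyperbolic distance between two points of the open unit disk,
    in the Poincare disk model."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    num = dx * dx + dy * dy
    den = (1 - p[0] ** 2 - p[1] ** 2) * (1 - q[0] ** 2 - q[1] ** 2)
    return math.acosh(1 + 2 * num / den)

# Equal Euclidean steps towards the border cost ever more hyperbolic
# distance; the border itself is unreachable (infinitely far away).
center = (0.0, 0.0)
d1 = poincare_distance(center, (0.9, 0.0))
d2 = poincare_distance(center, (0.99, 0.0))
d3 = poincare_distance(center, (0.999, 0.0))
assert d1 < d2 < d3
```

This is exactly the exclusion of edges and corners by structure rather than by rule: the “forbidden” boundary simply cannot be reached from inside.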

The artist M.C. Escher produced a small variety of confined hyperbolic disks that perfectly represent the structure of our aspectional space. Note that in such a disk there are no “aspects”; it is a zero-aspectional space. Remember that the 0-dimensional mathematical point represents a number in Cartesian space. This way we even invented a new class of numbers!3 A value in this class of numbers would (probably) represent the structure of the space, in other words the curvature of the hyperbola underlying the scaling of the space. Yet, the whole mathematics around this space and these numbers is still undiscovered!

Figure 3: M.C. Escher’s hyperbolic disk, capturing infinity on the table.

Above we said that this space appears to be limited. This impression of a limitation would hold only for external observers. Yet, our interest in aspectional spaces is precisely given by the apriori assumption of non-decomposability and the impossibility of such an external position for cognitive activities. Aspectional spaces are suitable just for those cases where such an external position is not available. From within such a hyperbolic space, the limitation would not be experienceable, at least not by simple means: the propagation of waves would be different as compared to the Cartesian space.

Aspections, Dimensions

So, what is the status of the aspectional space, especially as compared to the dimensional Cartesian space? A first step of such a characterization would investigate the possibility of transforming those spaces into each other. A second step would not address the space itself, but its capability to do some things uniquely.

So, let us start with the first issue, the possibility of a transition between the two types of spaces. Think of a three-aspectional space. The space is given by the triangularized relation, where the corners represent the intensity or relevance of a certain aspect. Moving around on this plane changes the distance to at least two (n-1) of the corners, but most moves change the distance to all three of the corners. Now, if we reduce the conceptual difference and/or the possible difference of intensity between all of the three corners, we experience a sudden change of the quality of the aspectional space when we perform the limit transition into a state where all differential relevance has been expelled; the aspects would behave perfectly collinear.

Of course, we then would drop the possibility of dependence, claiming independence as a universal property, resulting in a jump into Cartesian space. Notably, there is no way back from the dimensional Cartesian space into aspectional spaces: there is a transformation of the aspectional space that produces a Cartesian space, while the opposite is not possible.
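The built-in dependency of a three-aspectional arrangement can be made concrete in a small sketch. Assuming, for illustration only, that a point is described by three nonnegative intensities constrained to sum to 1 (as in the triangularized relation above), changing one intensity necessarily drags the others along; there is no move that touches just one scaling entity, in contrast to Cartesian coordinates. The helper name `renormalize` is ours, not the text's.

```python
# Sketch: dependency in a three-"aspectional" (ternary) arrangement.
# A point is a triple of nonnegative intensities constrained to sum to 1;
# this constraint encodes the mutual dependency of the aspects.

def renormalize(point):
    """Project a triple of nonnegative intensities back onto the
    constraint a + b + c = 1."""
    total = sum(point)
    return tuple(x / total for x in point)

# A point described by three mutually dependent intensities.
p = renormalize((0.2, 0.3, 0.5))

# Try to raise the first intensity "alone": the constraint forces
# the other two to change as well.
q = renormalize((p[0] + 0.4, p[1], p[2]))

changed = [abs(a - b) > 1e-12 for a, b in zip(p, q)]
print(changed)  # all three coordinates moved, not just the first
```

In a Cartesian space the analogous move would alter exactly one coordinate; here the renormalization, i.e. the irreducibility of the whole, propagates every local change through all aspects.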

This formal exercise sheds an interesting light on the life form of the 17th century, the time of Descartes. Indeed, merely by assuming the possibility of dependence one would grant parts of the world autonomy, something that was categorically ruled out at those times. The idea of God as it was prevalent then implied the mechanical character of the world.

Anyway, we can conclude that aspectional spaces are more general than Cartesian spaces as there is a transition only in one direction. Aspectional spaces are indeed formal spaces as Cartesian spaces are. It is possible to define negative numbers, and it is possible to provide them with different metrics or topologies.

Figure 4: From aspectional space to dimensional space in 5 steps. Descartes’ “origin” turns out to be nothing else than the abolishment or conflation of elements, which again could be interpreted as a strongly metaphysically influenced choice.

Now to the second aspect of the kinship between aspections and dimensions. One may wonder whether the kind of dependency that could be mapped to aspectional spaces could not be modeled in dimensional spaces as well, for instance by some functional rule acting on the relation between two dimensions. A simple example would be regression, but also any analytic function y=f(x).

At first sight it seems that this could result in similar effects. We could, for instance, replace two independent dimensions by a new dimension that has been synthesized in a rule-based manner, e.g. by applying a classic analytical closed-form function. The dependency would disappear and all dimensions would again be orthogonal, i.e. independent of each other. Such an operation, however, would require that the dimensions are already abstract enough to be combined by closed analytical functions. This reveals that we put the claim of independence into the considerations before anything else. Claiming the perfect equivalence of such a functional mapping of dependency into independence thus is a petitio principii. No wonder we find it possible to do so in a later phase of the analysis. It is thus obvious that the epistemological state of a dependence secondary to the independence of dimensions is completely different from that of primary dependence.
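The regression case mentioned above can be sketched in a few lines. Assuming a linear dependency y = 2x + 0.5 plus noise, the dependency is absorbed into a synthesized dimension, the residual, which is uncorrelated with x. Note how every step presupposes numeric, already-abstract dimensions, which is exactly the petitio principii diagnosed in the text; the code is purely illustrative.

```python
# Sketch of "secondary independence": a linear dependency y = f(x) is
# absorbed into a new, synthesized dimension (the residual), leaving two
# uncorrelated axes. The very possibility of this step presupposes
# numeric, already-abstract dimensions.
import random

random.seed(1)
xs = [random.gauss(0.0, 1.0) for _ in range(1000)]
ys = [2.0 * x + 0.5 + random.gauss(0.0, 0.1) for x in xs]  # y depends on x

# Least-squares fit y ~ a*x + b
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# Synthesize the new dimension: the residual of y after removing f(x).
res = [y - (a * x + b) for x, y in zip(xs, ys)]

def corr(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((p - mu) * (q - mv) for p, q in zip(u, v))
    su = sum((p - mu) ** 2 for p in u) ** 0.5
    sv = sum((q - mv) ** 2 for q in v) ** 0.5
    return cov / (su * sv)

print(round(abs(corr(xs, res)), 6))  # ~0.0: the dependency has been expelled
```

The mechanics work; the philosophical point is that the independence was smuggled in with the choice of numeric axes before the first line was written.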

A brief Example

A telling example4 for such an aspectional space is provided by the city theory of David Grahame Shane [10]. The space created by Shane in order to accommodate his interest in a non-reductionist coverage of the complexity of cities represents a powerful city theory, from which various models can be derived. The space is established through the three elements of armature, enclave and (Foucaultian) heterotopia. Armature is, of course, a rather general concept–designed to cover more or less straight zones of transmission or the guidance for such–which however expresses nicely the double role of “things” in a city. It points to things as part of the equipment of a city as well as to their role as anchor (points). Armatures, in Shane’s terminology, are things like gates, arcades, malls, boulevards, railways, highways, skyscrapers or particular forms of public media, that is, particular forms of passages. Heterotopias, on the other hand, are rather complicated “things”; at least the concept invokes the whole philosophical stance of the late Foucault, to whom Shane explicitly refers. For any of these elements, Shane then provides extensions and phenomenological instances, as values if you like, from which he builds a metric for each of the three basic aspects. Throughout his book he demonstrates the usefulness of his approach, which is based on these three elements. This usefulness becomes tangible because Shane’s city theory is an aspectional space of expressibility which allows one to compare and to relate an extreme variety of phenomena regarding the city and the urban organization. Of course, we must expect other such spaces in principle; exploring them would not only be interesting, but also a large amount of work to complete. Quite likely, however, such a space will be just an extension of Shane’s concept.

5. Conclusion

Freeing the concept of “element” from its ontological burden turns it into a structural topos of thinking. The “element game” is a mandatory condition for the creation of spaces that we need in order to express anything. Hence, the “element game,” or briefly, the operation of “elementarization,” may be regarded as the prime instance of externalization and as such also as the hot spot of the germination of ideas, concepts and words, both abstract and factual. For our concerns here about machine-based episteme it is important that the notion of the element provides an additional (new?) possibility to ask about the mechanism in the formation of thinking.

Elementarization also represents the conditions for “developing” ideas and to “settle” them. Yet, our strictly non-ontological approach helps to avoid premature and final territorialization in thought. Quite to the contrary, if understood as a technique, elementarization helps to open new perspectives.

Elementarization appears as a technique to create spaces of expressibility, even before words and texts. It is thus worthwhile to consider words as representatives of a certain dynamics around processes of elementarization, both as an active as well as a passive structure.

We have been arguing that the notion of space does not automatically determine the space to be a Cartesian space. Elements do not create Cartesian spaces. Their particular reference to the apriori acceptance of an embedding wholeness renders both the elements as well as the space implied by them incompatible with Cartesian space. We introduced the notion of “aspects” in order to reflect the particular quality of elements. Aspects are the result of a more or less volitional selection and construction.

Aspectional spaces are spaces of mutual dependency between aspects, while Cartesian spaces claim that dimensions are independent from each other. Concerning the handling and usage of spaces, parameters have to be sharply distinguished both from aspects as well as from dimensions. In mathematics or in the natural sciences, parameters are distinguished from variables. Variables are to be understood as containers for all allowed instances of values of a certain dimension. Parameters modify just the operation of placing such a value into the coordinate system. In other words, they do not change the general structure of the space used for or established by performing a mapping, and they do not even change the dimensionality of the space itself. For designers as well as scientists, and more generally for any person acting with or upon things in the world, it is thus more than naive to play around with parameters without explicating or challenging the underlying space of expressibility, whether this is a Cartesian or an aspectional space. From that it also follows that the estimation of parameters can’t be regarded as an instance of learning.

Here we didn’t mention the mechanisms that could lead to the formation of elements. Yet, it is quite important to understand that we didn’t just shift the problematics of creativity to another descriptional layer without getting a better grip on it. The topos of the element allows us to develop and to apply a completely different perspective on the “creative act.”

The mechanisms that could be charged with generating elements will be the issue of the next chapter. There we will deal with relations and their precursors. We will also briefly return to the topos of comparison.

Part 3: A Pragmatic Start for a Beautiful Pair

Part 5: Relations and Symmetries (forthcoming)

Notes

1. Most of the classic items presented here I have taken from Wilhelm Schwabe’s superb work about the ΣΤΟΙΧΕΙΟΝ [1], in Latin letters “stoicheion.”

2. The external viewpoint has been recognized as an unattainable desire long ago, already by Archimedes.

3. Just consider the imaginary numbers that are basically 2-dimensional entities, where the unit i expresses a turn of 90 degrees in the plane.

4. Elsewhere [11] I dealt in more detail with Shane’s approach, a must read for anyone dealing with or interested in cities or urban culture.

  • [1] Wilhelm Schwabe. ‘Mischung’ und ‘Element’ im Griechischen bis Platon. Wort- u. begriffsgeschichtliche Untersuchungen, insbes. zur Bedeutungsentwicklung von ΣΤΟΙΧΕΙΟΝ. Bouvier, Bonn 1980.
  • [2] Isaac Newton, Philosophiae naturalis principia mathematica. Bd. 1 Tomus Primus. London 1726, p. 14 (http://gdz.sub.uni-goettingen.de/no_cache/dms/load/img/?IDDOC=294021)
  • [3] Wesley C. Salmon, Explanation and Causality. 2003.
  • [4] Edwin A. Abbott, Flatland.
  • [5] Ian Stewart, Flatterland.
  • [6] Ian Stewart & nn, Catastrophe Theory.
  • [7] Ronald N. Giere, Scientific Perspectivism.
  • [8] Benoit B. Mandelbrot, Fractals: Form, Chance and Dimension. Freeman, New York 1977.
  • [9] Benoit B. Mandelbrot, Fractals and Scaling in Finance. Springer, New York 1997.
  • [10] David Grahame Shane, Recombinant Urbanism. Wiley, New York 2005.
  • [11] Klaus Wassermann (2011). Sema Città–Deriving Elements for an applicable City Theory. in: T. Zupančič-Strojan, M. Juvančič, S. Verovšek, A. Jutraž (eds.), Respecting fragile places, 29th Conference on Education in Computer Aided Architectural Design in Europe (eCAADe). available online.

۞

SOM = Speedily Organizing Map

February 12, 2012 § Leave a comment

The Self-organizing Map is a powerful and high-potential computational procedure.

Yet, there is no free lunch, especially not for procedures that are able to deliver meaningful results.

The self-organizing map is such a valuable procedure; we have discussed its theoretical potential with regard to a range of different aspects in other chapters. Here, we do not want to deal further with such theoretical or even philosophical issues, e.g. related to the philosophy of mind; instead we focus on the issue of performance, understood simply in terms of speed.

For all those demo SOMs the algorithmic time complexity is not really an issue. The algorithm approximates rather quickly to a stable state. Yet, small maps—where “small” means something like less than 500 nodes or so—are not really interesting. It is much like in brains. Brains are made largely from neurons and some chemicals, and a lot of them can do amazing things. If you take 500 of them you may stuff a worm in an appropriate manner, but not even a termite. The important questions, beyond the nice story about theoretical benefits, thus are:

What happens with the SOM principle if we connect 1’000’000 nodes?

How to organize 100, 1000 or 10’000 of such million-nodes SOMs?

By these figures we would end up with somewhere around 1..10 billion nodes1, all organized along the same principles. Just to avoid a common misunderstanding here: these masses of neurons are organized in a very similar manner, yet the totality of them builds a complex system as we have described it in our chapter about complexity. There are several, if not many, emergent levels, and a lot of self-referentiality. These 1 billion nodes are not all engaged with segmenting external data! We will see elsewhere, in the chapter about associative storage and memory, how such a deeply integrated modular system could be conceived of. There are some steps to take, though not terribly complicated or difficult ones.

When approaching such scales, the advantage of the self-organization turns palpably into a problematic disadvantage. “Self-organizing” means “bottom-up,” and this bottom-up direction in SOMs is represented by the fact that all records representing the observations have repeatedly to be compared to all nodes in the SOM in order to find the so-called “best matching unit” (BMU). The BMU is just that node in the network that exhibits the intensional profile that is most similar among all the profiles2. Though the SOM avoids comparing all records to all records, its algorithmic complexity scales as a power function of its own scale! Normally, the cost of an algorithm depends on the size of the data, but not on its own “power.”

In its naive form the SOM shows a complexity of something like O(n·w·m²), where n is the amount of data (number of records, size of feature set), w the number of nodes to be visited while searching the BMU, and m² the number of nodes affected in the update procedure. w and m are scaled by factors f1, f2 < 1, but the basic complexity remains. The update procedure affects an area that depends on the size of the SOM, hence the exponent. The exact degree of algorithmic complexity is not absolutely determined, since it depends on the dynamics in the learning function, among other things.
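The two cost drivers named above, the exhaustive BMU scan and the map-dependent update area, can be sketched in a minimal, purely illustrative SOM step (plain Python, toy sizes; the function names are ours):

```python
# Minimal sketch of the naive SOM step: for every record, every node is
# visited to find the best matching unit (BMU), and a neighborhood whose
# size scales with the map is updated -- the source of the complexity
# noted above.
import random

random.seed(0)
SIDE, DIM = 10, 3                       # a 10x10 map of 3-valued profiles
nodes = {(i, j): [random.random() for _ in range(DIM)]
         for i in range(SIDE) for j in range(SIDE)}

def dist2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def find_bmu(record):
    """Exhaustive scan: cost grows with the number of nodes (the w factor)."""
    return min(nodes, key=lambda pos: dist2(nodes[pos], record))

def update(record, bmu, radius=2.0, rate=0.5):
    """Pull all nodes within the neighborhood towards the record; the
    affected area scales with the map, hence the quadratic factor."""
    for (i, j), profile in nodes.items():
        d2 = (i - bmu[0]) ** 2 + (j - bmu[1]) ** 2
        if d2 <= radius ** 2:
            nodes[(i, j)] = [p + rate * (r - p) for p, r in zip(profile, record)]

record = [0.9, 0.1, 0.5]
bmu = find_bmu(record)
before = dist2(nodes[bmu], record)
update(record, bmu)
print(dist2(nodes[bmu], record) < before)  # True: the BMU moved closer
```

Both loops, the scan in `find_bmu` and the sweep in `update`, run over the whole map; the tricks described below are all ways of shrinking exactly these two loops.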

The situation worsens significantly if we apply improvements to the original flavor of the SOM, e.g.

  • – the principle of homogenized variance (per variable across extensional containers),
  • – in targeted modeling, tracking the explicit classification performance per node on the level of records, which means that the data has to be referenced
  • – size balancing of nodes,
  • – morphological differentiation like growing and melting, as in the FluidSOM, which additionally allows for free ranging nodes,
  • – evolutionary feature selection and the creation of proto-hypotheses,
  • – dynamic transformation of data,
  • – then think about the problem of indeterminacy of empiric data, which enforces differential modeling, i.e. a style of modeling that is performing experimental investigation of the dependency of the results on the settings (the free parameters) of the computational procedure: sampling the data, choosing the target, selecting a resolution for the scale of the resulting classification, choosing a risk attitude, among several more.

All of this affects the results of modeling, that is, the prognostic/diagnostic/semantic conclusions one could draw from the modeling. Albeit all these steps could be organized based on a set of rules, including applying another instance of a SOM, and thus could be run automatically, all of these necessary explorations require separate modeling. It is quite easy to set up an exploration plan for differential modeling that would require several dozens of models, and if evolutionary optimization is going to be applied, hundreds if not thousands of different maps have to be calculated.

Fortunately, the SOM offers a range of opportunities for using dynamic look-up tables and parallel processing. A SOM consisting of 1’000’000 neurons could easily utilize several thousand threads, without many worries about concurrency (or the collisions of parallel threads). Unfortunately, such computers are not available yet, but you got the message…

Meanwhile we have to apply optimization through dynamically built look-up tables. These I will describe briefly in the following sections.

Searching the Most Similar Node

An integral part of speeding up the SOM in real-life applications is an appropriate initialization of the intensional profiles across the map. Of course, precisely this can not be known in advance, at least not exactly. The self-organization of the map is the shortest path to its final state; there is no possibility for an analytic short-cut. Kohonen proposes to apply Principal Component Analysis (PCA) for calculating the initial values. I am convinced that this is not a good idea. The PCA is deeply incompatible with the SOM, hence it will respond to very different traits in the data. PCA and SOM behave similarly only in demo cases…

Preselecting the Landing Zone

A better alternative is the SOM itself. Since the mapping organized by the SOM is preserving the topology of the data, we could apply a much smaller SOM, even a nested series of down-scaled SOMs to create a coarse model for selecting the appropriate sub-population in the large SOM. The steps are the following:

  • 1. create the main SOM, say 40’000 nodes, organized on a square grid, where the sides are of the relative length of 200 nodes;
  • 2. create a SOM for preselecting the landing zone, scaled approximately 14 by 14 nodes, and use the same structure (i.e. the same feature vectors) as for the large SOM;
  • 3. Prime the small SOM of around 200 nodes with a small but significant sample of the data, comprising say 2000..4000 records in this case; draw this sample randomly from the data; this step should complete comparatively quickly (faster by a factor of 200 in our example);
  • 4. initialize the main SOM by a blurred (geometric) projection of the intensional profiles from the minor to the larger SOM;
  • 5. now use the minor SOM as a model for the selection of the landing zone, simply by means of geometric projection.

As a result, the number of nodes to be visited in the large SOM in order to find the best match remains almost constant.
There is an interesting correlate to this technique. If one needs a series of SOM based representations of the data distinguished just by the maximum number of nodes in the map, one should always start with the lowest, i.e. most coarse resolution, with the least number of nodes. The results then can be used as a projective priming of the SOM on the next level of resolution.
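The five steps above can be sketched as follows. The maps are faked with random profiles here, since in practice both would have been trained on the data; sizes, the `halo` parameter and the helper names are illustrative assumptions, not prescriptions.

```python
# Sketch of the "landing zone" preselection: a small coarse map picks
# the region of the large map to scan, so the number of visited nodes
# stays nearly constant regardless of the size of the main map.
import random

random.seed(42)
DIM = 4
SMALL, LARGE = 14, 200                      # 14x14 coarse map, 200x200 main map
small = {(i, j): [random.random() for _ in range(DIM)]
         for i in range(SMALL) for j in range(SMALL)}
large = {(i, j): [random.random() for _ in range(DIM)]
         for i in range(LARGE) for j in range(LARGE)}

def dist2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def bmu(som, record, candidates=None):
    keys = candidates if candidates is not None else som.keys()
    return min(keys, key=lambda pos: dist2(som[pos], record))

def landing_zone_bmu(record, halo=5):
    # 1) coarse search on the small map (196 nodes only)
    ci, cj = bmu(small, record)
    # 2) geometric projection of the coarse position onto the large map
    scale = LARGE // SMALL
    pi, pj = ci * scale, cj * scale
    # 3) scan only a fixed-size vicinity around the projected position
    zone = [(i, j) for i in range(max(0, pi - halo), min(LARGE, pi + halo + 1))
                   for j in range(max(0, pj - halo), min(LARGE, pj + halo + 1))]
    return bmu(large, record, candidates=zone), len(zone)

record = [random.random() for _ in range(DIM)]
pos, visited = landing_zone_bmu(record)
print(visited)  # at most 121 nodes scanned instead of 40'000
```

The same scheme nests: the coarse map itself could be preselected by an even coarser one, which is exactly the cascade of down-scaled SOMs mentioned above.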

Explicit Lookup Tables linking Observations to SOM areas

In the previous section we described the usage of a secondary much smaller SOM as a device for limiting the number of nodes to be scanned. The same problematics can be addressed by explicit lookup tables that establish a link between a given record and a vicinity around the last (few) best matching units.

If the SOM is approximately stable, that is, after the SOM has seen a significant portion of the data, it is not necessary any more to check the whole map. Just scan the vicinity around the last best matching node in the map. Again, the number of nodes necessary to be checked is almost constant.

The stability can not be predicted in advance, of course. The SOM is, as the name says, self-organizing (albeit in a weak manner). As a rule of thumb one could check the average number of observations attached to a particular node, the average taken across all nodes that contain at least one record. This average filling should be larger than 8..10 (due to considerations based on variance, and some arguments derived from non-parametric statistics… but it is a rule of thumb).
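A minimal sketch of such a lookup table, together with the average-filling rule of thumb, might look as follows (illustrative data structures only, not a full SOM; the threshold and `halo` are assumptions taken from the text):

```python
# Sketch of the explicit lookup table: once the map is roughly stable,
# each record's search starts from its last BMU and only a small
# vicinity is scanned.
last_bmu = {}          # record id -> last best matching node position

def average_filling(node_records):
    """Rule-of-thumb stability check: the mean number of records per
    non-empty node should exceed ~8..10 before the shortcut is used."""
    filled = [len(v) for v in node_records.values() if v]
    return sum(filled) / len(filled) if filled else 0.0

def candidates_for(record_id, som_nodes, halo=3):
    """Full scan for unseen records, small vicinity otherwise."""
    if record_id not in last_bmu:
        return list(som_nodes)
    bi, bj = last_bmu[record_id]
    return [(i, j) for (i, j) in som_nodes
            if abs(i - bi) <= halo and abs(j - bj) <= halo]

# Toy map and usage: a 50x50 grid; record 17 was last found at (10, 12).
som_nodes = {(i, j) for i in range(50) for j in range(50)}
last_bmu[17] = (10, 12)
print(len(candidates_for(17, som_nodes)))   # 49 nodes instead of 2500
print(len(candidates_for(99, som_nodes)))   # 2500: unseen record, full scan
```

After each search the table would be refreshed with the newly found BMU, so the vicinity tracks the slow residual drift of the stabilizing map.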

Large Feature Vectors

Feature vectors can be very large. In life sciences and medicine I experienced cases with 3000 raw variables. During data transformation this number can increase to 4000..6000 variables. The comparison of such feature vectors is quite expensive.

Fortunately, there are some nice tricks, which are all based on the same strategy. This strategy comprises the following steps.

  • 1. create a temporary SOM with the following, very different feature vector; this vector has just around 80..100 (real-valued) positions and 1 position for the index variable (in other words, the table key); thus the size of the vector is a 60th of the original vector, if we are faced with 6000 variables.
  • 2. create the derived vectors by encoding the records representing the observations by a technique that is called “random projection”; such a projection is generated by multiplying the data vector with a token from a catalog of (labeled) matrices that are filled with uniform random numbers ([0..1]).
  • 3. create the “random projection” SOM based on these transformed records
  • 4. after training, replace the random projection data with real data, re-calculate the intensional profiles accordingly, and run a small sample of the data through the SOM for final tuning.

The technique of random projection has been invented in 1988. The principle works because of two facts:

  • (1) Quite amazingly, all random vectors beyond a certain dimensionality (80..200, as said before) are nearly orthogonal to each other. The random projection compresses the original data without losing the bits of information that are distinctive, even if they are not accessible in an explicit manner any more.
  • (2) The only trait of the data that is considered by the SOM is their potential difference.
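The projection step itself is a single matrix multiplication. The sketch below uses zero-mean entries, a common variant of the uniform [0..1] filling mentioned above, because centered entries preserve relative distances better; sizes and names are illustrative.

```python
# Sketch of the random projection step: a long feature vector is
# compressed by multiplying it with one fixed random matrix, keeping
# distinctions while shrinking the comparison cost by a factor of ~60.
import random

random.seed(7)
ORIG, PROJ = 6000, 100

# One fixed projection matrix, shared by all records (the "token"
# drawn from the catalog of labeled matrices).
R = [[random.uniform(-1.0, 1.0) for _ in range(ORIG)] for _ in range(PROJ)]

def project(record):
    return [sum(r * x for r, x in zip(row, record)) for row in R]

a = [random.random() for _ in range(ORIG)]
b = [random.random() for _ in range(ORIG)]

pa, pb = project(a), project(b)
print(len(pa))      # 100: a 60th of the original size
print(pa != pb)     # True: distinctions survive the compression
```

The SOM then compares these short vectors instead of the 6000-valued originals; since the SOM only exploits potential differences (point 2 above), nothing essential is lost.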

Bags of SOMs

Finally, one can take advantage of splitting the data into several smaller samples. These samples require only smaller SOMs, which run much faster (we are faced with a power law). After training each of the SOMs, they can be combined into a compound model.

This technique is known as bagging in data mining. Today it is also quite popular in the form of so-called random forests, where instead of one large decision tree many smaller ones are built and then combined. This technique is very promising, since it is a technique of nature. It is simply modularization on an abstract level, leading to the next level of abstraction in a seamless manner. It is also one of our favorite principles for the overall “epistemic machine”.
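The bagging of maps can be sketched as follows. A real small-SOM training run is stood in for by a trivial prototype extraction (the `train_small_som` helper is ours and purely illustrative); the point is the combination step: the compound model is just the union of all node profiles, queried by nearest prototype.

```python
# Sketch of "bagging" smaller maps: split the data into samples, train
# one small map per sample, then combine the node profiles into one
# compound model.
import random

random.seed(3)

def train_small_som(sample):
    """Stand-in for a real small-SOM training run: each map simply
    contributes a few prototype profiles (chunk means)."""
    k = 4
    random.shuffle(sample)
    chunks = [sample[i::k] for i in range(k)]
    return [[sum(col) / len(col) for col in zip(*chunk)] for chunk in chunks]

data = [[random.gauss(0, 1) for _ in range(3)] for _ in range(600)]
samples = [data[i::3] for i in range(3)]          # split into 3 smaller samples

# Combine the three small models into one compound model.
compound = [proto for s in samples for proto in train_small_som(s)]

def classify(record):
    return min(range(len(compound)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(compound[i], record)))

print(len(compound))    # 12 prototypes from 3 bagged models
```

Since SOM training cost grows faster than linearly with map size, three maps of a third of the size are cheaper than one large map, which is exactly the power-law advantage claimed above.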

Notes

1. This would represent just around 10% of the neurons of our brain, if we interpret each node as a neuron. Yet, this comparison is misleading. The functionality of a node in a SOM rather represents a whole population of neurons, although there is no 1:1 principle transferable between them. Hence, such a system would be roughly of the size of a human brain, and, much more importantly, it would likely be organized in a comparable, albeit rather alien, manner.

2. Quite often, the vectors that are attached to the nodes are called weight vectors. This is a serious misnomer, as neither the nodes are weighted by this vector (alone), nor the variables that make up that vector (for more details see here). Conceptually it is much more sound to call those vectors “intensional profiles.” Actually, one could indeed imagine a weight vector that would control the contribution (“weight”) of variables to the similarity / distance between two of such vectors. Such weight vectors could even be specific for each of the nodes.


۞

Associativity

December 19, 2011 § Leave a comment

Initially, the meaning of ‘associativity’ seems to be pretty clear.

According to common sense, it denotes the capacity or the power to associate entities, to establish a relation or a link between them. Yet, there is a different meaning from mathematics that almost appears as a kind of mocking of the common sense. Due to these very divergent meanings we first have to clarify our usage before discussing the concept.

A Strange Case…

In mathematics, associativity is defined as a neutrality of the results of a compound operation with respect to the “bundling,” or association, of the individual parts of the operation. The formal statement is:

A binary operation ∘ (relating two arguments) on a set S is called associative if it satisfies the associative law:

x∘(y∘z) = (x∘y)∘z for all x, y, z ∈ S

This, however, is just the opposite of “associative,” as it demands independence from any particular association: whatever capacity there might be to establish an association between any two elements of S, it must not make any difference.
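The inverted meaning can be checked mechanically: an operation is I-associative precisely when the bundling of the arguments makes no difference, as with string concatenation, while subtraction, say, is sensitive to the bundling. A small sketch:

```python
# The "inverted" meaning made concrete: an operation is associative in
# the mathematical sense exactly when the bundling of arguments makes no
# difference. Concatenation satisfies the law; subtraction does not.

def is_associative(op, samples):
    return all(op(x, op(y, z)) == op(op(x, y), z)
               for x in samples for y in samples for z in samples)

print(is_associative(lambda a, b: a + b, ["x", "y", "z"]))   # True
print(is_associative(lambda a, b: a - b, [1, 2, 3]))         # False
```

The law thus honors operations that are indifferent to how their parts were associated, which is exactly the opposite of a power to associate.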

Maybe, some mathematician in the 19th century hated the associative power of so many natural structures. Subsequently, modernism contributed its own part to establish the corruption of the obvious etymological roots.

In mathematics the notion of associativity—let us call it I-associativity in order to indicate the inverted meaning—is an important part of fundamental structures like “classic” (Abelian) groups or categories.

Groups are important since they describe the basic symmetries within the “group” of operations that together form an algebra. Groups cover anything that could be done with sets. Note that the central property of sets is their enumerability. (Hence, a notion of “infinite” sets is nonsense; it simply contradicts itself.) Yet, there are examples of quite successful, say: abundantly used, structures that are not based on I-associativity, the most famous of them being the Lie algebra with its non-associative bracket. The associated Lie groups allow one to conceive of continuous symmetry, hence they are much more general than the Abelian groups that essentially emerged from the generalization of geometry. Even in the case of Lie algebras or other “non-associative” structures, however, the term still refers to the inverted meaning.

With respect to categories we can say that so far, and quite unfortunately, there is not yet something like a category theory that would not rely on I-associativity, a fact that is quite telling in itself. Of course, category theory is also quite successful, yet…

Well, anyway, we would like to indicate that we are not dealing with I-associativity here in this chapter. In contrast, we are interested in the phenomenon of associativity as it is indicated by the etymological roots of the word: The power to establish relations.

A Blind Spot…

In some way the particular horror creationes so abundant in mathematics is comprehensible. If a system would start to establish relations it also would establish novelty by means of that relation (something that simply did not exist before). So far, it has not been possible for mathematics to deal symbolically with the phenomenon of novelty.

Nevertheless it is astonishing that a Google raid on the term “associativity” reveals only slightly more than 500 links (Dec. 2011), of which the vast majority consists simply of copies of the entry in Wikipedia that covers the mathematical notion of I-associativity. Some other links are related to computer science, which basically refers to the same issue, just sailing under a different flag. Remarkably, only one (1) single link, from an open source robotics project [1], mentions associativity as we will do here.

Not very surprisingly, one can find an intense linkage between “associative” and “memory,” though not in the absolute number of sources (also around ~600), but in the number of citations. According to Google Scholar, Kohonen and his Self-Organizing Map [2] are being cited 9000+ times, followed by Anderson’s account of human memory [3], accumulating 2700 citations.

Of course, there are many entries in the web referring to the word “associative,” which, however, is an adjective. Our impression is that the capability to associate has not made its way into a more formal consideration, or even to regard it as a capability that deserves a dedicated investigation. This deficit may well be considered as a continuation of a much older story of a closely related neglect, namely that of the relation, as Mertz pointed out [4, ch.6], since associativity is just the dynamic counterpart of the relation.

Formal and (Quasi-)Material Aspects

In a first attempt, we could conceive of associativity as the capability to impose new relations between some entities. For Hume (in his “Treatise”; see Deleuze’s book about him), association was close to what Kant later dubbed “faculty”: the power to do something, and in this case to relate ideas. However, such wording is inappropriate as we have seen (or: will see) in the chapters about modeling and categories and models. Speaking about relations and entities implies set theory, yet models and modeling can’t be covered by set theory, or only very exceptionally so. Since category theory seems to match the requirements and the structure of models much better, we also adopt its structure and its wording.

Associativity then may be taken as the capability to impose arrows between objects A, B, C such that at least A ⊆ B ⊆ C, but usually A ⋐ B ⋐ C, and furthermore A ≃ C, where “≃” means “taken to be identical despite non-identity”. In set theoretic terms we would have used the notion of the equivalence class. Such arrows may be identified with the generalized model, as we are arguing in the chapter about the category of models. The symbolized notion of the generalized abstract model looks like this (for details jump over to the page about modeling):

eq.1

where U=usage; O=potential observations; F=featuring assignates on O; M=similarity mapping; Q=quasi-logic; P=procedural aspects of implementation.

Those arrows representing the (instances of a generalized) model are functors that are mediating between categories. We also may say that the model imposes potentially a manifold of partially ordered sets (posets) onto the initial collection of objects.

Now we can start to address our target, the structural aspects of associativity, more directly. We are interested in the necessary and sufficient conditions for establishing an instance of an object that is able (or develops the capability) to associate objects in the aforementioned sense. In other words, we need an abstract model for it. Yet, here we are not interested in the basic, that is transcendental conditions for the capability to build up associative power.

Let us start more practically, but still immaterially. The best candidates we can think of are Self-Organizing Maps (SOM) and particularly parameterized Reaction-Diffusion Systems (RDS); both of them can be subsumed into the class of associative probabilistic networks, which we describe in another chapter in more technical detail. Of course, not all networks exhibit the emergent property of associativity. We may roughly distinguish between associative networks and logistic networks [5]. Both, SOM as well as RDS, are also able to create manifolds of partial orderings. Another example from this family is the Boltzmann machine, which, however, has some important theoretical and practical drawbacks, even in its generalized form.

Next, we depict the elementary processes of SOM and RDS, respectively. SOM and RDS can be seen as instances located at the distant endpoints of a particular scale, which expresses the topology of the network. The topology expresses the arrangement of quasi-material entities that serve as persistent structure, i.e. as a kind of memory. In the SOM, these entities are called nodes and they are positioned in a more or less fixed grid (albeit there is a related variant, the neural gas, where the grid is more fluid). The nodes themselves do not move around. In contrast to the SOM, the entities of an RDS are freely floating around. Yet, RDS are simulated much like the SOM, assuming cells in a grid and stuffing them with a certain memory.

Inspecting those elementary processes, we of course again find transformations. More important, however, is another structural property common to both of them. Both networks are characterized by a dynamically changing field of (attractive) forces. Just the locality of those forces is different between SOM and RDS, leading to a greater degree of parallelism in RDS and to multiple areas of the same quality. In SOMs, each node is unique.

The forces in both types of networks, however, exhibit the property of locality, i.e. there are one or more centers, where the force is strong, and a neighborhood that is established through a stochastic decay of the strength of this force. Usually, in SOM as well as in RDS, the decay is assumed to be radially symmetric, but this is not a necessary condition.
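The radially symmetric decay just described can be sketched as a tiny function. The Gaussian shape and the parameter names are illustrative choices of this sketch, not a prescription; as the text notes, any monotonically decaying kernel would serve the same structural role.

```python
import math

def neighborhood(dist, radius):
    """Radially symmetric (Gaussian) decay of the force around a center.

    `dist` is the distance from the center node; `radius` controls how
    far the influence reaches before it fades out.
    """
    return math.exp(-(dist ** 2) / (2.0 * radius ** 2))
```

At the center the strength is maximal (`neighborhood(0, r)` equals 1), and it decreases monotonically with distance, which is all the structural argument above requires.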

After all, are we now allowed to ask ‘Where does this associativity come from?’ The answer is clearly ‘no.’ Associativity is a holistic property of the arrangement as a whole. It is the result of the co-presence of some properties like

  • – stochastic neighborhoods that are hosting an anisotropic and monotone field of forces;
  • – a certain, small memory capacity of the nodes; note that the nodes are not “points”: in order to have a memory they need some corporeality. In turn this opens the way to think of a separation of the function of that memory and a variable host that provides a container for that memory;
  • – strong flows, i.e. a large number of elementary operations acting on that memory, producing excitatory waves (long-range correlations) of finite velocity;

The result of the interaction of those properties can not be described on the level of the elements of the network itself, or any of its parts. What we will observe is a complex dynamics of patterns due to the superposition of antagonistic forces, which are modeled either explicitly in the case of RDS, or more implicitly in the case of SOM. Thus both networks also present the property of self-organization, though this aspect is much more dominantly expressed in RDS as compared to the SOM. The important issue is that the whole network, and even more important, the network together with its local persistence (“memory”), “causes” the higher-level phenomenon.

We also could say that it is the quasi-material body that is responsible for the associativity of the arrangement.

The Power of a Capability

So, what is this associativity thing about? As we have said above, associativity imposes a potential manifold of partial orderings upon an arbitrary open set.

Take a mixed herd of Gnus and Zebras as the open set without any particular ordering, put some predators like hyenas or lions into this herd, and you will get multiple partially ordered sub-populations. In this case, the associativity emerges through particular rules of defense, attack and differential movement. The result of the process is a particular probabilistic order, clearly an immaterial aspect of the herd, despite the fact that we are dealing with fleshy animals.

The interesting thing in both the SOM and the RDS is that a quasi-body provides a capability that brings forth an immaterial arrangement. The resulting immaterial arrangement is nothing else than information. In other words, something specific, namely a persistent contrast, has been established from something larger and unspecific, i.e. noise. Taking the perspective of the results, i.e. with respect to the resulting information, we always can see that the association creates new information. The body, i.e. the materially encoded filters and rules, has a greater weight in RDS, while in case of the SOM the stabilization aspect is more dominant. In any case, the associative quasi-body introduces breaks of symmetry, establishes them and stabilizes them. If this symmetry breaking is aligned to some influences, feedback or reinforcement acting from the surroundings onto the quasi-body, we may well call the whole process (a simple form of) “learning.”

Yet, this change in the informational setup of the whole “system” is mirrored by a material change in the underlying quasi-body. Associative quasi-bodies are therefore representatives of the transition from the material to the immaterial, or in more popular terms, of the body-mind dualism. As we have seen, there is no conflict between those categories, as the quasi-body showing associativity provides a double-articulating substrate for differences. Moreover, we can see that these differences are transformed from a horizontal difference (such as 7−5=2) into vertical, categorical differences (such as the differential). If we would like to compare those vertical differences we need … category theory! …or a philosophy of the differential!

Applications

Early in the 20th century, the concept of association was adopted by behaviorism. Simply recall Pavlov’s dog and the experiments of Skinner and Watson. The key term in behaviorism, a belated echo of 17th-century hyper-mechanistics (the support of a strictly mechanistic world view), is conditioning, which appears in various forms. Yet, conditioning always remains a 2-valued relation, practically achieved as an imprinting, a collision between two inanimate entities, despite the wording of behaviorists who equate their conditioning with “learning by association.” What should learning be otherwise? Nevertheless, behaviorist theory commits the mistake of thinking that this “learning” should be a passive act. As you can see here, psychologists still strongly believe in this weird concept. They write: “Note that it does not depend on us doing anything.” Utter nonsense, nothing else.

In contrast to imprinting, imposing a functor onto an open set of indeterminate objects is not only an exhausting activity, it is also a multi-valued “relation,” or simply, a category. If we were to analyze the process of imprinting, we would find that even “imprinting” can’t be covered by a 2-valued relation.

Nevertheless, other people took the media as the message. For instance, Steven Pinker criticized the view that association is sufficient to explain the capability of language. Doing so, he commits the same mistake as the behaviorists, just from the opposite direction. How else should we acquire language, if not by some kind of learning, even if it is a particular type of learning? The blind spot of Pinker seems to be randomization, i.e. he is not able to leave the actual representation of a “signal” behind.

Another field of application for the concept of associativity is urban planning or urbanism, albeit associativity is rarely recognized as a conceptual or even as a design tool. [cf. 6]  It is obvious that urban environments can be conceived as a multitude of high-dimensional probabilistic networks [7].

Machines, Machines, Machines, ….Machines?

Associativity is a property of a persistent (quasi-material) arrangement to act onto a volatile stream (e.g. information, entropy) in such a way as to establish a particular immaterial arrangement (the pattern, or association), which in turn is reflected by material properties of the persistent layer. Equivalently, we may say that the process leading to an association is encoded into the material arrangement itself. The establishment of the first pattern is the work of the (quasi-)body. Only for this reason is it possible to build associative formal structures like the SOM or the RDS.

Yet, the notion of “machine” would be misplaced. We observe strict determinism only on the level of the elementary micro-processes. Each of the vast number of individual micro-events is indeed uniquely parameterized, sharing only the same principle or structure. In such cases we can not speak of a single machine any more, since a mechanical machine has a singular and identifiable state at any point in time. The concept of “state” holds neither for RDS nor for SOM. What we see here is much more like a vast population of similar machines, where none of those is even stable across time. Instead, we need to adopt the concept of mechanism, as it is in use in chemistry, physiology, or biology at large. Since both principles, SOM and RDS, show the phenomenon of self-organization, we can not even say that they represent a probabilistic machine. The notion of the “machine” can’t be applied to SOM or RDS, despite the fact that we can write down the principles for the micro-level in simple and analytic formulas. Yet, we can’t assume any kind of mechanics for the interaction of those micro-machines.

It is now exciting to see that a probabilistic, self-organizing process used to create a model by means of associating principles loses the property of being a machine, even as it is running on a completely deterministic machine, the simulation of a Universal Turing Machine.

Associativity is a principle that transcends the machine, and even the machinic (Guattari). Assortative arrangements establish persistent differences, hence we can say that they create proto-symbols. Without associativity there is no information. Of course, the inverse is also true: Wherever we find information or an assortment, we also must expect associativity.

۞

  • [1]  iCub
  • [2] Kohonen, Teuvo, Self-Organization and Associative Memory. Springer Series in Information Sciences, vol.8, Springer, New York 1988.
  • [3] Anderson J.R., Bower G.H., Human Associative Memory. Erlbaum, Hillsdale (NJ) 1980.
  • [4] Mertz, D. W., Moderate Realism and its Logic, New Haven: Yale 1996.
  • [5] Wassermann, K. (2010), Associativity and Other Wurban Things – The Web and the Urban as merging Cultural Qualities. 1st international workshop on the urban internet of things, in conjunction with: internet of things conference 2010 in Tokyo, Japan, Nov 29 – Dec 1, 2010. (pdf)
  • [6] Dean, P., Rethinking representation. the Berlage Institute report No.11, episode Publ. 2007.
  • [7] Wassermann, K. (2010). SOMcity: Networks, Probability, the City, and its Context. eCAADe 2010, Zürich. September 15-18, 2010. (pdf)

The Self-Organizing Map – an Intro

October 20, 2011 § Leave a comment

A Map that organizes itself:

Is it the paradise for navigators, or is it the hell?

Well, it depends, I would say. As a control freak, or a warfarer like Shannon in the early 1940s, you probably vote for the hell. And indeed, there are presumably only very few entities that have been neglected by information scientists as strongly as the self-organizing map. Of course, there are some reasons for that. The other type of navigator, the one probably enjoying the SOM, is more likely of the type of Odysseus, or Baudolino, the hero in Umberto Eco’s novel of the same name.

More seriously, the self-organizing map (SOM) is a powerful and even today (2011) still underestimated structure, though it is meanwhile rapidly gaining in popularity. This chapter serves as a basic introduction to the SOM, along with a first discussion of the strengths and weaknesses of its original version. Today, there are many versions around, mainly in research; the most important ones I will briefly mention at the end. It should be clear that there are tons of articles on the web. Yet, most of them focus on the mathematics of the more original versions, but do not describe or discuss the architecture itself, or even provide a suitable high-level interpretation of what is going on in a SOM. So, I will not repeat the mathematics; instead I will try to explain it also for non-engineers, without using mathematical formulas. Actually, the mathematics is not the most serious thing in it anyway.

Brief

The SOM is a bundle comprising a mathematical structure and a particularly designed procedure for feeding multi-dimensional (multi-attribute) data into it, prepared as a table. The number of attributes can reach tens of thousands. Its purpose is to infer the best possible sorting of the data in a 2- (or 3-) dimensional grid. Yet, both preconditions, dimensionality and data as table, are not absolute and may be overcome by certain extensions to the original version of the SOM. The sorting process groups more similar records closer together. Thus we can say that a body of high-dimensional data (organized as records from a table) is mapped onto 2 dimensions, thereby enforcing a weighting of the properties used to describe (essentially: create) the records.

The SOM can be parametrized such that it is a very robust method for clustering data. The SOM exhibits an interesting duality, as it can be used for basic clustering as well as for target-oriented predictive modeling. This duality opens interesting possibilities for realizing a pre-specific associative storage. The SOM is particularly interesting due to its structure and hence due to its extensibility, properties that most other methods do not share with the SOM. Though substantially different from other popular structures like Artificial Neural Networks, the SOM may be included in the family of connectionist models.

History

The development leading finally to the SOM started around 1973 in a computer science lab at Helsinki University. It was Teuvo Kohonen who became aware of certain memory effects of correlation matrices. Until 1979, when he first published the principle of the Self-Organizing Map, he dedicatedly adopted principles known from the human brain. A few further papers followed, and a book about the subject in 1983. Then, the SOM wasn’t readily adopted for at least 15 years. Its direct competitor for acceptance, the Backpropagation Artificial Neural Network (B/ANN), was published in 1985, after neural networks had been rediscovered in physics, following investigations of spin glasses and certain memory effects there. Actually, the interest in simulating neural networks dates back to 1941, when von Neumann, Hebb, McCulloch, and also Pitts, among others, met at a conference on the topic.

For a long time the SOM wasn’t regarded as a “neural network,” and this has been considered a serious disadvantage. The first part of the diagnosis indeed was true: Kohonen never tried to simulate individual neurons, as was the goal of all simulations of ANN. The ANN research has been deeply informed by physics, cybernetics and mathematical information theory. Simulating neurons is simply not adequate; it is a kind of silly science. Above all, most ANN are just a very particular type of “network,” as there are no connections within a particular layer. In contrast, Kohonen tried to grasp a more abstract level: the population of neurons. In our opinion this choice is much more feasible and much more powerful as well. In particular, the SOM can represent not only “neurons,” but any population of entities which can exchange and store information. More about that in a moment.

Nowadays, the methodology of SOM can be rated as well adopted. More than 8’000 research papers have been published so far, with increasing momentum, covering a lot of practical domains and research areas. Many have demonstrated the superiority or greater generality of SOM as compared to other methods.

Mechanism

The mechanism of a basic SOM is quite easy to describe, since there are only a few ingredients.
First, we need data. Imagine a table, where observations are listed in rows, and the column headers describe the variables that have been measured for each observed case. The variables are also called attributes, or features. Note that in the case of the basic (say, the Kohonen) SOM the structure given by the attributes is the same for all records. Technically, the data have to be normalized per column such that the minimum value is 0 and the maximum value is 1. Note that this normalization ensures comparability of different sub-sets of observations. It represents just the most basic transformation of data, while there are many other transformations possible: logarithmic re-scaling of the values of a column in order to shift the mode of the empirical distribution, splitting a variable by value, binarization, scaling of parameters that are available only on a nominal scale, or combining two or several columns by a formula are further examples (for details please visit the chapter about modeling). In fact, the transformation of data (I am not talking here about the preparation of data!) is one of the most important ingredients for successful predictive modeling.
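The per-column normalization just mentioned can be sketched in a few lines of pure Python; the function name and the handling of constant columns (mapped to 0.0 to avoid division by zero) are choices of this sketch, not canonical rules.

```python
def normalize_columns(table):
    """Rescale each column of a numeric table (list of rows) to [0, 1].

    This is the plain min-max normalization described in the text.
    Constant columns are mapped to 0.0 to avoid division by zero.
    """
    # one (min, max) pair per column
    ranges = [(min(col), max(col)) for col in zip(*table)]
    return [
        [(v - lo) / (hi - lo) if hi > lo else 0.0
         for v, (lo, hi) in zip(row, ranges)]
        for row in table
    ]
```

For instance, `normalize_columns([[1, 10], [3, 20], [2, 30]])` yields `[[0.0, 0.0], [1.0, 0.5], [0.5, 1.0]]`: each column is rescaled independently, which is exactly what makes differently scaled attributes comparable.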

Second, we create the SOM. Basically, and in its simplest form, the SOM is a grid, where each cell has 4 edges (squares, rectangles) or 6 edges (hexagonal layout). The grid consists of nodes and edges. Nodes serve as a kind of container, while edges work as a kind of fiber for spreading signals. In some versions of the SOM the nodes can move freely, or they can randomly move around a little bit.

An important element of the architecture of a SOM now is that each node gets assigned the same structure as we know from the table. As a consequence, the vectors collected in the nodes can easily be compared by some function (just wait a second for that). In the beginning, each node gets randomly initialized. Then the data are fed into the SOM.

This data feeding is organized as follows. A randomly chosen record is taken from the table and then compared to all of the nodes. There is always a best matching node. The record then gets inserted into this node (which in a way hides the record inside it). Upon this insertion, the values in the node’s structure vector are recalculated, e.g. as the (new) mean for all values across all records collected in a node (container). The trick now is not to change just the winning node where the data record has been inserted, but also all nodes in the close surroundings, though with a strength that decreases with the distance.

This small activity of searching the best matching node, inserting the record and spreading the information is done for all records, and possibly also repeated. The spreading of information to the neighboring nodes is a crucial element of the SOM mechanism. This spreading is responsible for the self-organization. It also represents a probabilistic coupling in a network. Of course, there are some important variants of that, see below, but basically that’s all. Ok, there is some numerical bookkeeping, optimizations to search the winning nodes etc., but these measures are not essential for the mechanism.
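The loop just described (draw a record, find the best matching node, pull it and its grid neighbors toward the record with decaying strength) can be condensed into a short pure-Python sketch. The function name, the linear shrinking schedules for learning rate and radius, and all defaults are illustrative choices of this sketch, not part of any canonical implementation.

```python
import math
import random

def train_som(data, width, height, epochs=20, lr=0.5, radius=None, seed=0):
    """Minimal sketch of the basic (Kohonen-style) SOM training loop."""
    rng = random.Random(seed)
    dim = len(data[0])
    if radius is None:
        radius = max(width, height) / 2.0
    # each node holds a value vector with the structure of a table row
    nodes = {(x, y): [rng.random() for _ in range(dim)]
             for x in range(width) for y in range(height)}

    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    for epoch in range(epochs):
        # learning rate and neighborhood radius shrink over time
        t = epoch / max(1, epochs - 1)
        cur_lr = lr * (1.0 - t) + 0.01
        cur_rad = radius * (1.0 - t) + 0.5
        for record in rng.sample(data, len(data)):
            # the winner: the best matching node for this record
            win = min(nodes, key=lambda p: dist2(nodes[p], record))
            for (x, y), vec in nodes.items():
                gd2 = (x - win[0]) ** 2 + (y - win[1]) ** 2
                # spreading: update strength decays with grid distance
                h = math.exp(-gd2 / (2.0 * cur_rad ** 2))
                for i in range(dim):
                    vec[i] += cur_lr * h * (record[i] - vec[i])
    return nodes
```

Note that the cooperative part, the neighborhood factor `h`, is what produces the self-organization; without it, the loop would degenerate into plain competitive vector quantization.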

As a result one will find similar records in the same node, or in its direct neighbors. It has been shown that the SOM is topology-preserving, that is, the SOM is smooth with regard to the similarity of neighboring nodes. The data records inside a node form a list, which is described by the node’s value vector. That value vector could be said to represent a class, or intension, which is defined by its empirical observations, the cases, or extension.

After feeding all data to the SOM the training is finished. For the SOM, it is easy to run in a continuous mode, where the feed of incoming data never “stops.” Now the SOM can be used to classify new records. A new record simply needs to be compared to the nodes of the SOM, i.e. to the value vectors of the nodes, but NOT to all the cases (SOM is not case-based reasoning, but type-based reasoning!). If the records contain a marker attribute, e.g. indicating the quality of the record, you will also get the expected quality for a new record of unknown quality.
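Classification is then just one more best-match search over the node value vectors. In this sketch, `nodes` is assumed to be a mapping from grid position to value vector, as a trained SOM would provide; the function name is my own.

```python
def classify(nodes, record):
    """Assign a new record to its best matching node.

    The comparison runs against the node value vectors only, never
    against the individual cases collected in a node: type-based,
    not case-based reasoning.
    """
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(nodes, key=lambda p: dist2(nodes[p], record))
```

For example, with `nodes = {(0, 0): [0.1, 0.1], (1, 0): [0.9, 0.9]}`, the call `classify(nodes, [0.0, 0.2])` returns `(0, 0)`.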

Properties of the SOM

The SOM belongs to the class of clustering algorithms. It is very robust against missing values in the table, and unlike many other methods it does NOT require any settings regarding the clusters, such as size or number. Of course, this is a great advantage and a property of logical consistency. Nodes may remain empty, while the node value vector of the empty node is still well-defined. This is a very important feature of the SOM, as it represents the capability to infer potential yet unseen observations. No other method is able to behave like this. Other properties can be invoked by means of possible extensions of the basic mechanism (see below).

As already said, nodes collect similar records of data, where a record represents a single observation. It is important to understand that a node does not equal a cluster. In our opinion, it does not make much sense to draw boundaries around one or several nodes and so propose a particular “cluster.” Such a boundary should be set only upon an external purpose. Inversely, without such a purpose, it is meaningless to conceive of a trained SOM as a model. At best, it would represent a pre-specific model, and the ability to create such models is in fact a great property of the SOM.

The learning is competitive, since different nodes compete for a single record. Yet, it is also cooperative, since upon an insert operation information is exchanged between neighboring nodes.

The reasoning of the SOM is type-based, which is much more favorable than case-based reasoning. It is also more flexible than ANN, which just provide a classification, without any distinction between extension and intension. The SOM, but not the ANN, can be used in two very different modes: either just for clustering or grouping individual observations without any further assumptions, or for targeted modeling, that is, for establishing a predictive/diagnostic link between several (or many) basic input variables and one (or several) target variable(s) that represent the outcome of a process according to experience. Such a double usage of the same structure is barely accessible for any other associative structure.

Another difference is that ANN are much more suitable to approximate single analytic functions, while SOM are suitable for general classification tasks, where the parameter space and/or the value (outcome) space could even be discontinuous or folded.

A large advantage over many other methods is that the similarity function and the cost function are explicitly accessible. For ANN, SVM or statistical learning this is not possible. Similarly, the SOM automatically adapts its structure according to the data, i.e. it is even possible to change the method within the learning process, adaptively and in a self-organized manner.

As a result we can conclude that the design of the SOM method is much more transparent than that of any other of the advanced methods.

Competing Comparable Methods

SOM are either more robust, more general or simpler than any other method, while the quality of classification is closely comparable. Among those competing methods are artificial neural networks (ANN), principal component analysis (PCA), multi-dimensional scaling (MDS), or adaptive resonance theory networks (ART). Important ideas of ART networks can be merged with the SOM principle, keeping the benefits of both. PCA and MDS are based on statistical correlation analysis (covariance matrices), i.e. they import all the assumptions and necessary preconditions of statistics, namely the independence of observational variables. Yet, it is precisely the goal to identify such dependencies, thus it is not quite feasible to presuppose their absence! SOM do not know such limitations arising from strange assumptions; moreover, it has recently been shown that the SOM is a generalization of PCA.

Of course, there are many other methods, like Support Vector Machines (SVM) with statistical kernels, or tree forests; yet, these methods are purely statistical in nature, with no structural part in them. Moreover, they do not provide access to the similarity function, as is possible for the SOM.

A last word about the particular difference between ANN and SOM. SOM are truly symmetrical networks, where each unit has its own explicit memory about observations, while the linkage to other units on the same level of integration is probabilistic. That means that the actual linkage between any two units can be changed dynamically within the learning process. In fact, a SOM is thus not a single network like a fishing net; it is much more appropriate to conceive of it as a representation of a manifold of networks.

Contrary to those advanced structural properties, the so-called Artificial Neural Networks are explicit directional networks. Units represent individual neurons and do not have storage capacities. A unit does not know anything about things like observations. Conceptually, these units are thus on a much lower level than the units in a SOM. In ANN they can not have “inner” structure. The same is true for the links between the units. Since they have to be programmed in an explicit manner (which is called “architecture”), the topology of the connections can not be changed during learning, at runtime of the program.

In ANN information flows through the units in a directed manner (as in the case of natural neurons). It is almost impossible there to create an equally dense connectivity within a single layer of units as in SOM. As a consequence, ANN do not show the capability for self-organization.

Taken as a whole, ANN seem to be under the representationalist delusion. In order to achieve the same general effects and abstract phenomena as the SOM is able to, very large ANN would be necessary. Hence, pure ANN are not really a valid alternative for our endeavor. This does not rule out the possibility to use them as components within a SOM or between SOMs.

Variants and Architectures

Here are some extensions and improvements of the basic SOM.

Homogenized Extensional Diversity
The original version of the SOM tends to collect “bad” records, those not matching well anywhere else, into a single cluster, even if the records are not similar at all. In this case it is no longer allowed to compare nodes, since the internal variance is not comparable anymore, and the mean/variance on the level of the node would not describe the internal variance on the level of the collected records any more. The cure for that misbehavior is rather simple. The cost function controlling the matching of a record to the most similar node needs to take into account the variability within the set of records (the extension of the type represented by the node) collected by the node. Alternatively, merging and splitting of nodes as described for structural learning helps effectively. In the scientific literature, there is as yet no reference for this extension of the SOM.
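One way such a variance-aware cost function could look is sketched below. Since, as said, there is no published reference for this extension, the additive form and the `penalty` weight are assumptions of this sketch.

```python
def matching_cost(node_vec, node_variance, record, penalty=1.0):
    """Sketch of a variance-aware matching cost.

    The plain squared distance between record and node value vector is
    augmented by a penalty proportional to the internal variance of the
    records already collected by the node. A node stuffed with mutually
    dissimilar "bad" records thus becomes a less attractive target.
    """
    d2 = sum((u - v) ** 2 for u, v in zip(node_vec, record))
    return d2 + penalty * node_variance
```

Between two nodes at equal distance from a record, the one with the smaller internal variance now wins, which is exactly the corrective behavior described above.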

Structural Learning
One of the most basic extensions of the original mechanism is to allow for splitting and merging of nodes according to some internal divergence criterion. A SOM made from such nodes is able to adapt structurally to the input data, not just statistically. This feature is inspired by the so-called ART networks [1]. Similarly, merging and splitting of “nodes” of a SOM was proposed by [2], though not in the vein of ART networks.

Nested SOM
Since the SOM represents populations of neurons, it is easy and straightforward to think about nesting SOMs. Each node would contain a smaller SOM. A node may even contain any other parametrized method, such as an Artificial Neural Network. The node value vector then would not exhibit the structure of the table, but instead would hold the parameters of the enclosed algorithm. One example for this is the so-called mn-SOM [3].

Growing SOM
Usually, data are not evenly distributed. Hence, some nodes grow much more than others. One way to cope with this situation is to let the SOM grow automatically. Many different variants of growth could be thought of, and some have already been implemented. Our experiments point into the direction of a “stem cell” analog.

Growing SOMs have first been proposed by [4], while [5] provides some exploratory implementation. Concerning growing SOMs, it is very important to understand the concept (or phenomenon) of growth. We will discuss possible growth patterns and the consequences for possible new growing SOMs elsewhere. Just for now we can say that any kind of SOM structure can grow and/or differentiate.

SOM gas, mobile nodes
The name already says it: the topology of the grid making up the SOM is not fixed. Nodes even may range around quite freely, as in the case of SOM Gas.

Chemical SOM
Starting from mobile nodes, we can think about a small set of properties of the nodes which are not directly given by the data structure. These properties can be interpreted as chemicals creating something like a Gray-Scott Reaction-Diffusion Model, i.e. a self-organizing fluid dynamics. The possible effects are (i) a differential opacity of the SOM for transmitted information, (ii) the differentiation into fibers and networks, or (iii) the optimization of the topological structure as a standard part of the life cycle of the SOM. The mobility can be controlled internally by means of a “temperature,” or, expressed as a metaphor, the fixed SOM would partially melt. This may help to reorganize a SOM. In the scientific literature, there is as yet no reference for this extension of the SOM.
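The Gray-Scott dynamics invoked here can be sketched as a single explicit update step; for brevity the sketch uses a 1-D periodic grid instead of the usual 2-D one, and the parameter values are common textbook choices, not prescriptions.

```python
def gray_scott_step(U, V, Du=0.16, Dv=0.08, f=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of a Gray-Scott reaction-diffusion system
    on a periodic 1-D grid (lists of floats U, V).

        du/dt = Du * lap(u) - u*v^2 + f*(1 - u)
        dv/dt = Dv * lap(v) + u*v^2 - (f + k)*v
    """
    n = len(U)

    def lap(A, i):  # discrete Laplacian with periodic boundary
        return A[(i - 1) % n] + A[(i + 1) % n] - 2.0 * A[i]

    U2, V2 = U[:], V[:]
    for i in range(n):
        uvv = U[i] * V[i] * V[i]  # the autocatalytic reaction term
        U2[i] = U[i] + dt * (Du * lap(U, i) - uvv + f * (1.0 - U[i]))
        V2[i] = V[i] + dt * (Dv * lap(V, i) + uvv - (f + k) * V[i])
    return U2, V2
```

Iterating this step from a locally perturbed homogeneous state lets spots and stripes emerge, the pattern-forming behavior referred to throughout this essay.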

Evolutionary Embedded SOM with Meta-Modeling
SOM can be embedded into an evolutionary learning process for the most appropriate selection of attributes. This can be extended even towards the construction of structural hypotheses about the data. While other methods could also be embedded in a similar manner, the results are drastically different, since most methods do not learn structurally. Coupling evolutionary processes with associative structures was proposed a long time ago by [6], albeit only in the context of the optimization of ANN. While this is quite reasonable, we additionally propose to use evolution in a different manner and for different purposes (see the chapter about evolution).

[1] ART networks
[2] merging splitting of nodes
[3] The mn-SOM
[4] Growing SOM a
[5] Growing SOM b
[6] evolutionary optimization of artificial neural networks

۞
