Time, Magic and the Self (I/III)

January 24, 2013

There is.

Isn’t it? Would you agree? Well, I would not. In other words, to say ‘There is.’ is infinitesimally close to a misunderstanding. Or a neglect, if you prefer. It is not the missing of a referent, though, at least not in the first instance. The problem would be almost the same if we had said ‘There is x’. It is the temporal aspect that is missing. Without considering the various aspects of the temporality of the things that build up our world, we could understand neither the things nor the world.

Nowadays, the probability of finding some agreement with such a claim is somewhat higher than it once was, at the high tide of modernism. For most urbanists and architects, time was nothing but a somewhat cumbersome parameter, nothing of any deeper structural significance. The modern city was a city without time, having broken with tradition without even creating new ones. Such was the claim, which is aptly demonstrated by Simon Sadler [1] citing Ron Herron, a member of the group Archigram.

“Living City”1 curator Ron Herron described his appreciation of “Parallel of Life and Art”: “It was most extraordinary because it was primarily photographic and with apparently no sequence; it jumped around like anything.”

Unfortunately, and beyond the mere “functioning,” the well-organized disorganization itself became a tradition. Koolhaas called it Junkspace [2]. Astonishingly, and not quite compatible with the admiration of dust-like scatterings that negate relationality, Archigram claims to be interested in, if not focused on, life and behavior. Sadler summarizes (p.55):

“Living City” and its catalogue were not about traditional architectural form, but its opposite: the formlessness of space, behavior, life.

Obviously, Sadler himself is not quite aware of the fact that behavior is predominantly a choreography, that is, it is about form and time as well as form in time. The concepts of form and behavior implied by Archigram’s utopias are indeed very strange.

Basically, the neglect of time beyond historicity is typical of modern/modernist architects, urbanists and theorists up to the present day, including Venturi [3], Tschumi [4] or Oswald [5]. Even Koolhaas does not refer to it expressis verbis, albeit he is constantly in close orbit around it. This is astonishing, since key concepts in the immediate neighborhood of time, such as semiotics, narration or complexity, are indeed mentioned by these authors. Yet without a proper image of time one remains on the level of mere phenomena. We will discuss this topic of time on the one side and architects and architecture on the other in more detail later.

Authors like Sigfried Giedion [6] or Aldo Rossi [7] didn’t change much concerning the awareness of time in the practice of architecture and urbanism. Maybe partly because their positions were more self-contradictory than consistent: on the one hand they demanded a serious consideration of time, on the other hand they still stuck to a rather strong rationalism. Rationalist time, however, is much less than half of the story. Another salient reason is certainly the fact that time is a subject that is notoriously difficult to deal with. As Mike Sandbothe cites Paul Ricoeur [8]:

Ultimately, for Ricoeur time marks the “mystery” of our thinking, which resists representation by encompassing our Dasein in a way that is ineluctable for our thinking.2

This Essay

One of the large hypotheses that I have been following across the last essays is that we will not be able to understand the Urban3 and architecture without a proper image of differentiation. Both parts of this notion, the “image” and the “differentiation,” need some explication.

Although “differentiation” seems to be similar to change, the two are quite different from each other. The main reason is that differentiation comprises an activity, which, according to Aristotle, has serious consequences. Mary Louise Gill [9] summarizes his distinction as follows:

Whereas a change is brought about by something other than the object or by the object itself considered as other (as when a doctor cures himself), an activity is brought about by the object itself considered as itself. This single modification yields an important difference: whereas a change leads to a state other than the one an object was previously in, an activity maintains or develops what an object already is.4

In other terms, change is proposed to be relatively unconstrained, hence with less memory and historicity implied, while activity, or active differentiation, implies a greater weight of historicity, less contingency, increased persistence, and thus an increased intensity of being in time.

Besides this fundamental distinction we may discern several modes of differentiation. The question then is how to construct a proper “whole” out of them. Obviously, we can think of different such compound “wholes,” which is the reason for our claim that we need a proper image of differentiation.

Now to the other part of the notion of the “image of differentiation,” the image. An “image” is much more than a “concept.” It is more like a diagram about the possibility of applying the concept, the structure of its use. The aspect of usage is, of course, a crucial one. Actually, with respect to the relation between concepts and actions we identified the so-called “binding problem”. The binding problem claims that there is no direct, unmediated way from concepts to actions, or the reverse. Models are needed, both formalizable structural models, which are closer to concepts, and anticipatory models, which are closer to the implementation of concepts. The operationalization of concepts may be difficult. Yet action that does not aim at making contact with concepts is simply meaningless. (This is the reason for the emptiness of ‘single case’ studies.) Our overall conclusion regarding the binding problem was that it is the main source of friction and even failure in the control and management of society, if it is not properly handled, that is, if concepts and actions are not mediated by a layer of “Generic Differentiation.” Only the layer of “Generic Differentiation,” with its possibility for different kinds of models, can provide the basic conditions for speaking about and conceiving of any of the mechanisms potentially relevant for the context at hand. Thus, the binding problem is probably one of the most frequent causes of the many, many difficulties in understanding, designing and dealing with the Urban, or its instances: the concrete city, the concrete settlement or building, the concrete neighborhood.

This transition between concept and action (or vice versa) can’t be fully captured by language alone. For certain reasons we need a diagram. “Generic Differentiation”, comprising various species of probabilistic, generalized networks, is conceived as part of a larger compound—we may call it “critical pragmatics”—as it mediates between concepts and actions. Finally we ended up with the following diagram.

Figure 1: “Critical Pragmatics for active Subjects.” The position of Generic Differentiation is conceived as a necessary layer between the domains of concepts and actions, respectively. See text below for details and the situs where we developed it.

(Diagram labels: concept/conceptual, generic differentiation, operation/operational, the latter comprising logistics and politics; together describing the active subject of urban reason.)

Note that this diagram shows only the basic module of a more complete diagram, which in the end would form a moebioid fractal due to self-affine mapping: this module appears in any of the three layers in a nested fashion. Hence, a more complete image would show this module as part of a fractal image, which, however, could not be conceived as a flat fractal like a fern leaf.5 The image of pragmatics as shown above is a fractal, first, due to the self-affine mapping. Second, however, the instances of the module within the compound are not independent, as they are in the case of the fern. Important traces of the same concepts appear at various levels of the fractal mapping, leading to dimensional braids, in other words to a moebioid.
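
Purely as an illustration of the self-affine mapping (a toy sketch of my own, not a formalization; the names and the depth cutoff are arbitrary assumptions), the recursive nesting of the basic module can be written down as follows. Note that the sketch captures only the fern-like nesting; the cross-level identifications that braid the instances into a moebioid are precisely what it leaves out.

```python
# Toy sketch: the basic module of critical pragmatics, nested self-affinely.
# Each of the three layers contains, at the next depth, a smaller copy of
# the whole module; the depth cutoff stands in for the unbounded fractal.

LAYERS = ["conceptual", "generic differentiation", "operational"]

def build_module(depth: int = 0, max_depth: int = 2) -> dict:
    """Return the module as a nested dict: layer -> sub-module."""
    if depth >= max_depth:
        return {}
    return {layer: build_module(depth + 1, max_depth) for layer in LAYERS}

def show(module: dict, indent: int = 0) -> None:
    for layer, sub in module.items():
        print("  " * indent + layer)
        show(sub, indent + 1)

show(build_module())
```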

So, now that we are equipped to approach it, let us return to the necessity of considering the various aspects of temporality. What are they in general, and what are they in the case of architecture, the city, the Urban, or Urban Reason? Giedion, for instance, related to time with regard to historicity and with regard to an adaptation of the concept of space-time from physics, which at that time was abundantly discussed in science and society. This adaptation, according to Giedion, can be found in simultaneity and movement. A pretty clear statement, one might think. Yet, as we will see, he conceived of these two temporal forms of simultaneity and movement in a quite unusual way that is not really aligned with the meaning they bear in physics.

Rossi, focusing more on urban aspects, invokes quite divergent concepts of time, though he does not clearly distinguish or label them. He, too, refers to history, but he also says that a city has “many times” (p.61 in [7]), a formulation reminiscent of Bergson’s durée. Given the cultural “sediments” a city holds within itself, its multiply folded traces of historical times, such a proposal is easy to understand; everybody could agree with it.

Besides the multiplicity of referential historical time—we will make the meaning of this clearer below—Rossi also implicitly proposes a locality of time, through the acceleration of urbanization by primary elements such as “monuments”, or buildings that carry a “monumental” flavor. Unfortunately, he neither refers to an existing operationalization of his concept of time nor provides his own. In other words, he still refers to time only implicitly, by describing the respective changes and differentiations on an observational level.

These authors’ proposals provide important hints, no doubt. Yet we certainly have to clarify them from the perspective of time itself. This means, firstly, an inversion of the architectural or urbanistic vantage point taken by Giedion and Rossi, both of whom started from built matter. Before turning to architecture, we have to be clear about time. As a second consequence, we have to be cautious when talking about time. We have to uncover and disclose the well-hidden snares before pushing the investigation of the relation between temporality and architecture any further.

For instance, both Giedion and Rossi delivered an analysis. This analyticity results in one of two consequences. Either it is, firstly, useful only for sorting out the past, but not for deriving schemes for synthesis and production, or, secondly, it requires an instantiation that would allow one to utilize the abstract content of the analysis for taking action. Such an instantiation could produce hints for a design process directed towards the future. Yet neither Giedion [6] nor Rossi [7] provided such schemes, most likely precisely because they did not refer to a proper image of time!

This essay is the first of two about the “Time of Architecture”. As Yeonkyung Lee and Sungwoo Kim [10] put it, there is much need for its investigation. In order to investigate it, however, one has to be clear about time and its conception(s). Insofar as we attempt to trace time as a property of architecture rather than as an accessory, we also have to try to liberate time from its exclusive link to human consciousness, without sacrificing the applicability of the resulting conception to the realm of the human.

Hence, the layout of this essay is straightforward.

(a) First we will introduce a synopsis of various conceptions of time, as brief as possible, taking into account a few of the most salient sources. This will equip us with possible distinctions between modes or aspects of time, as well as the differences between, and interdependencies of, time and space.

In architecture and urbanism, almost no reference can be found to the philosophical discourses about time. Things are handled intuitively, leading to interesting but not particularly valuable or usable approaches. We will see that the topic of “time” raises some quite fundamental issues, reaching at least into the fields of hermeneutics, semiotics and narratology, and of course philosophy as well. The result will be a more or less ranked list of images of time, as far as such a list is possible from a philosophical vantage point.

(b) Against the background of this explication, and aware of all the possible misunderstandings around the issue of time, we will introduce a radically different perspective. We will ask how nature “creates time”. More precisely, we will ask about the abstract elements and mechanisms that are suitable for “creating time.” As weird as this may seem at first, I think it is even a necessary question. And for sure, nobody has posed this question before (outside of esoterics, perhaps, but we do not engage in esoterics here!).

The particularity of that approach is that the proposed structure would work as a basis for deriving an operationalization for the interpretation of material systems, as well as an abstract structure for a foundation of philosophical arguments about time. Of course, we have to be very careful here in order to avoid falling back into naturalist or phenomenological naiveties. Yet carefulness will allow us to blend the several perspectives on time into a single one, without—and that’s pretty significant—reducing time to either space or formal exercises like geometry. Thus the reward will be a completely new image of time, one that is much more general than any other and which overcomes the traditional separations, for instance the one that pulls apart physical time and the time of experience. Another effect will be that the question about the origin of time will vanish, a question that is continuously being discussed in cosmology (and perhaps in theology as well).

(c) From the new perspective we will then revisit architecture and the Urban (in the next essay). We will not only return to Giedion, Rossi and Koolhaas, but will also revisit the “Behavioral Turn” that we introduced some essays ago.

Displayed in condensed form, our program comprises the following three sections:

  • (a) Time itself as a subject of philosophy.
  • (b) The creation of time.
  • (c) Time of Architecture.

Before we start, a few small remarks are in order. First, it may well appear somewhat presumptuous to try to handle time in sufficient depth within just one or two sections of a single essay. I am fully aware of this. Yet the pressure to condense the subject matter also helps to focus, to achieve a structural picture on the large scale. Second, it should nevertheless be clear that we can’t provide a comprehensive overview or summary of the various conceptions of time in philosophy and science, as interesting as this would have been. It would exceed even the possibilities of a sumptuous book. Instead, I will lay out my arguments by means of a purposeful selection, enriched with some annotations.

On the other hand, this will provide one of the very rare comprehensive inquiries into time, and the first one that synthesizes a perspective that is backward compatible with those authors with whom it should be.

Somewhat surprisingly, this could even include (theoretical) physics. Yet the issue is quite complex and very different from the mainstream, versions of which you may find in [27, 28]. Even though there are highly interesting and quite direct links to philosophy, I decided to put this into a separate essay, which hopefully will appear soon. Just to give you a tiny glimpse of it: once John Wheeler called his student Richard Feynman in the middle of the night; the question at stake was how many electrons there are in the universe. According to the transmission, Wheeler’s answer was: “There is exactly one.” Sounds odd, doesn’t it? Nevertheless it may be that there are indeed only a few of them, according to Robbert Dijkgraaf, who also proposes that space-time is an emergent “property,” while information could be conceived as more fundamental than either. This, however, has a rather direct counterpart in the metaphysics of Spinoza, who claimed that there is only one single substance. Or (no immodesty intended), take the conception of information that we described earlier. Anyway, you may have got the point.

The sections in the remainder of this essay are the following. Note that in this piece we will provide only chapters 1 and 2. The other chapters, from “Synthesis” onwards, will follow as a separate piece.

1. Time in Philosophy—A Selection

Since antiquity, people have distinguished two aspects of time. It was only in the course of the success of modern physics and engineering that this distinction was forgotten in the Western world’s common sense. The belief set of modernism, with its main pillar of metaphysical independence, may have contributed as well. Anyway, the ancient Greeks assigned these aspects to the two gods Chronos and Kairos. While the former referred to measurable clock-time, the latter denoted the opportune time. The opportune time is a certain period that is preferential for accomplishing an action, argument, or proof, and it includes all parts and parties of the setting. The kairos clearly exceeds experience and points to the entirety of consummation. The advantage of taking into account means and ends is accompanied by the disadvantage of a significant inseparability.

Aristotle

Aristotle, of course, developed an image of time that is much richer, more detailed and much less mystical. For him, change and motion are a priori to time [11]. Aristotle is careful to conceive of change and motion without reference to time, which then gets determined as “a number of change with respect to the before and after” (Physics 219b1-2). Hence, it is possible for him to conceive of time as essentially countable, whereas change is not. Here it is also important to understand Aristotle’s general approach of hylemorphism, which states that—in a quite abstract sense—substance always consists of a matter-aspect and a form-aspect [11]. So too for time. For him, the matter-aspect is given by its kinetics, which includes change, while the form-aspect shows up in a kind of order6. Time is a kind of order and not, as is commonly supposed, a kind of measure, as Ursula Coope argues [13]. Aristotle’s use of “number” (arithmos) is more a potential for extending operations, as opposed to “measure” (metron), which is imposed on the measured. Hence “order” does not mean that this order is necessarily monotone. It is a universal order within which all changes are related to each other. Of course, we could reconstruct a monotone order from that, but, as said, this is not a necessity. Another of the remarkable consequences of Aristotle’s conception is that without a counting instance—call it observer or interpretant—there is no time.
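
The difference between arithmos and metron can be made concrete with a toy sketch (my illustration, not Aristotle’s text; the example changes and relations are arbitrary assumptions): an interpretant can number changes purely by their before/after relations, without presupposing any metric distance between them.

```python
# Toy sketch: time as a countable order of changes (arithmos), not a
# measure (metron). The "interpretant" numbers changes by their
# before/after relations alone; no duration between them is assumed.

from functools import cmp_to_key
from itertools import product

# hypothetical observed changes and the directly known before/after pairs
changes = ["decay", "bud", "fruit", "blossom"]
direct = {("bud", "blossom"), ("blossom", "fruit"), ("fruit", "decay")}

# transitive closure of the before/after relation (still no metric involved)
before = set(direct)
grew = True
while grew:
    grew = False
    for (a, b), (c, d) in product(list(before), list(before)):
        if b == c and (a, d) not in before:
            before.add((a, d))
            grew = True

def compare(a: str, b: str) -> int:
    if (a, b) in before:
        return -1
    if (b, a) in before:
        return 1
    return 0  # unrelated changes: the order need not even be total

# counting the ordered changes yields "a number of change with respect
# to the before and after": an order, not a measure of duration
for number, change in enumerate(sorted(changes, key=cmp_to_key(compare)), 1):
    print(number, change)
```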

This role of the interpreter is further explicated by Aristotle with respect to the form of the “now”. Roark summarizes that we have to understand that

[…] phantasia (“imagination”) plays a crucial role in perception, as Aristotle understands it, and therefore also in his account of time. Briefly, phantasia serves as the basis for both memory and anticipation, thereby making possible the possession of mental states about the past and the future. (p.7)

Actually, the most remarkable property of Aristotle’s conception is that he is able to overcome the duality between experience and physical time by means of the interpretant.

Pseudo-Paradoxes

It is not by chance alone that Augustine denied the Aristotelian conception by raising his infamous paradox about time. He does so from within Christian cosmogony. First he argues that the present vanishes if we try to take a close look at it. Then he claims that both past and future are available only in the present. The result is that time is illusory. Many centuries later, Einstein would make the same claim. Augustine transposed the problem of time into one of the relation between the soul and God. For him, no other “solution” would have been reasonable. Augustine instrumentalises a misunderstanding of references, established by mixing incompatible concepts (or language games). Unfortunately, Augustine inaugurated a whole tradition of nonsense, finally made persistent by McTaggart’s purported proof of the illusion of time [14], where he extended Augustine’s already malformed argument into deep nonsense, creating on the way the distinction between the A-series (past, present and future) and the B-series (earlier, later) of time. It is perpetuated to this day by authors like Oaklander [15][16] or Power [17]. Actually, the position is so nonsensical and misplaced—Bergson called it a wrong problem, Wittgenstein a grammatical mistake—that we will not deal with it further7.

Heidegger

Heidegger explicitly refers to phenomenology as it was shaped by Edmund Husserl. Yet Heidegger recognized that phenomenology—as well as the implied ontology of Being—suffers from serious defects. Thus, we have to take a brief look at it.

With the rise of phenomenology towards the end of the 19th century, the dualistic mapping of the notion of time was reintroduced and reworked. Usually, a distinction was made between clock-time on the one hand and experiential time on the other. This may indeed be regarded as quite similar to the ancient position. Yet philosophically it is not interesting merely to state this. Instead, we have to ask about the relation between the two. The same applies to the distinction between time and space.

There are two main positions dealing with this dualism. On the one side we find Bergson, on the other Brentano and Husserl as the founders of phenomenology. Both sides refer to consciousness as an essential element of time. Of course, we should not forget that this is one of the limitations we have to overcome if we want to achieve a generalized image of time.

Phenomenology suffers from a serious defect, namely the assumption of subjects and objects as a priori entities. The object is implied as a consequence of the consciousness of the subject, yet this did not result in a constructivism à la Maturana. Phenomenology, as an offspring of 19th-century modernism and a close relative of logicism, continued and radicalized the tendency of German Idealism to think that the world could be accessed “directly”. In the words of Thomas Sheehan [19]:

And finally phenomenology argued that the being of entities is known not by some after-the-fact reflection or transcendental construction but directly and immediately by way of a categorical intuition.

There are two important consequences of that. Firstly, it violates the primacy of interpretation8 and has to assume a world-as-such, which in other words translates into a fundamentally static world. Secondly, there is no relation between two appearances of an object across time.

Heidegger, in “Being and Time” [21] (originally “Sein und Zeit” [22]), tried to correct this defect of phenomenology and ontology by a hermeneutic transformation of phenomenology. This would remove the central role of consciousness, which is replaced by the concept of “Being-there” (Dasein) and thus by the “Analysis of Subduity.” He clearly states (at the end of §3 in “Being and Time”) that any ontology has to be fundamental ontology. The Being-there (Dasein), however, needs—in order to be able to see its Being—temporality.

The fundamental ontological task of the interpretation of being as such, therefore, includes working out the Temporality of being. The concrete answer to the question of the sense of being is given for the first time in the exposition of the problematic of Temporality. ([22], p.19)

How is temporality described? In §65 Heidegger writes:

Coming back to itself futurally, resoluteness brings itself into the Situation by making present. The character of “having been” arises from the future, and in such a way that the future which “has been” (or better, which “is in the process of having been”) releases from itself the Present. This phenomenon has the unity of a future which makes present in the process of having been; we designate it as “temporality”.9

Time clearly “delimits” Being as a conditioning horizon:

[…] we require an originary explication of time as the horizon of the understanding of being in terms of temporality as the being of Dasein who understands being. ([22], p.17)

Heidegger thoroughly examines the embedding of Being-there into time and the conditioning role of “time.” For instance, we can understand a tool only with respect to its future use. Temporality itself is seen as the structure of “care”, a major constituent of the being of Dasein, which, similarly to anticipation, carries a strong reference to the future:

“The originary unity of the structure of care lies in temporality.” ([22], p.327)

Temporality is the meaning and the foundation of Being.10 Temporality is an Existential. Existential analysis claims that Being-there does not fill space; it is not within spatiality (towards the end of §70):

Only on the basis of its ecstatico-horizontal temporality is it possible for Dasein to break into space. The world is not present-at-hand in space; yet, only within a world does space let itself be discovered. The ecstatical temporality of the spatiality that is characteristic of Dasein, makes it intelligible that space is independent of time; but on the other hand, this same temporality also makes intelligible Dasein’s ‘dependence’ on space—a ‘dependence’ which manifests itself in the well-known phenomenon that both Dasein’s interpretation of itself and the whole stock of significations which belong to language in general are dominated through and through by ‘spatial representations’. This priority of the spatial in the Articulation of concepts and significations has its basis not in some specific power which space possesses, but in Dasein’s kind of Being. Temporality is essentially deteriorating11, and it loses itself in making present; […]

This concept of temporality could have been used to overcome the difference between “vulgar time” (chronos) and experiential time, to which he clearly subordinated the former. Well, “could have been”, if Heidegger’s program had been completable. But Heidegger finally failed; “Being and Time” remained fragmentary. There are several closely related reasons for this failure. Ultimately, perhaps, as Cristina Lafont [24] argues, it is impossible to engage in a radical program of detranscendentalization and at the same time try to achieve a fundamental foundation. This pairs with the inherited phenomenological habit of disregarding the primacy of interpretation. The problem for Heidegger now is that the sign in language is already in the world which has to be subdued. As Lafont brilliantly revealed, Heidegger still adheres to the concept of language as an “ontic” instrument, as something that is found in the outer world. Yet this must count as a highly inappropriate reduction. Language constantly, and in a refracted manner, points towards the inwardly settled translation between body and thought and the outwardly directed translation between thought and community (of speakers), while translation is also a kind of rooting. Thus we can conclude that Heidegger ultimately still follows the phenomenological subject-object scheme. His attempt at a fundamental foundation while avoiding any reference to transcendent horizons must fail, even if this orientation towards the fundamental pretends just to serve as an indirect “foundation” (see below).

There is a striking similarity between Augustine and Heidegger. We could call it metaphysical linearity as a cosmological element. In the case of Augustine it is induced by the belief in Salvation, in the case of Heidegger by the belief in an absolute beginning, paired with an (implicit) belief in stepping out of language. In the lecture “Time and Being”, held in 1962 [25], that is, 35 years after Being and Time, Heidegger revisits the issue of time. Yet he simply capitulated before the problem of foundations, referring to “intuitional insight” as a foundation. In that lecture he said:

To think Being in its own right requires us to dismiss Being as the originating reason of being-Being (des Seienden), in favor of the Giving that is coveredly playing in its Decovering (Entbergen), i.e. of the “There is as giving fateness.”12 (p.10)13

Here, Heidegger refutes foundational ontology in favour of the communal and external world by the concept of the Giving14. Yet the step towards the communal remains a very small step, for one thing because the Other now gets depersonalized as far as possible. The really serious issue, however, is that Heidegger merely replaces the ontological conception of “ontic” language by the “ontic” communal. He still does not understand the double articulation of the communal through language. We may say that Heidegger is struck by blindness (in his right eye).

Inga Römer [47] detects a certain kind of archaism throughout the philosophy of Heidegger, which shows up as a still undefeated thinking about origins.

Finally, in “Time and Being” Heidegger locates the origin of time in the event, which he explicitly determines as the provider [m: the Giving] of Being and time. This Giving is seen as being divested from itself. The event, determined by Heidegger elsewhere as a singulare tantum, is removed from itself—and nevertheless the event is conceived as the origin of time.15 (p.289)

Many years after the publication of “Being and Time”, in the context of the seminar on “Time and Being”, Heidegger claimed that he had not conceived fundamental ontology as a kind of foundation. He described the role of the Daseins-analytics as proposed in “Being and Time” in the following way [23]:

Being and Time is in fact on the way, taking the route through the timeness of Dasein in the interpretation of Being as temporality, to find a conception of time, that Owned of “time”, whence “Being” reveals itself as Presencing. With this, however, it is said that the fundamental meant in fundamental ontology does not tolerate anything being built upon it. Instead, after the sense of Being would have been illuminated, the whole analytics of Dasein ought to be repeated in a more original and completely different manner.16

Indeed, “Being and Time” remained fragmentary. Heidegger recognized the inherent incompatibility of the still transcendental alignment with the conception of Dasein and was hence forced to shift the target of the Daseins-analytics [26](p.99). Being is no longer addressed from the vantage point of being-Being (Seiendes). This resulted in a replacement of the question about the sense of Being by the question about the historical truth of Being as fateness. In the course of that shift, however, temporality lost its role too, and was replaced by a thinking of a historized event. This event is conceived as a kind of non-spatial endurance [25]:

Time-Space (m: endurance) now denotes the open that clears itself in the mutual-serving-one-another of arrival, having-been (Gewesenheit) and present. Only this open spacingly allows (räumt ein) the ordinarily known space its propagation. (p.19)17

As far as this move could be taken as a cure for the methodological problems of “Being and Time,” it turned out, however, to be deeply detrimental to Heidegger’s whole philosophy. He was forced to determine man by his ecstatic exposition and his being thrown (tossed?) into nothingness. Care as a kind of cautious anticipation was replaced first by angst, then, through Sartre, by incurable disgust. While the early Heidegger had tried precisely to cure the missing primal relationality of phenomenology, the later Heidegger got trapped in an even more aggressive form of singularization and a denial of relationality altogether. The whole enterprise of existential philosophy suffers from this same deep disrespect, if not abhorrence, of the communal, of the practice of joyfully sharing a common language, a practice that turns into the Archimedean point of being human. Well, how could he have thought differently, given his particular political aberrancy?

Anyway, Heidegger’s shift to endurance brings us directly to the next candidate.

Bergson

Politically, in real life, Heidegger and Bergson could not have been more different. The former more than sympathized (up to open admiration) with totalitarianism in the form of Hitlerism and fascism, matching his performative rejection of relationality; the latter engaged internationally in forming the precursor of the UN.

But what does Bergson’s approach to time look like? Logicism and the subject-object dichotomy are thoughts alien to him. Both, in fact, have to assume a sequential order whose genesis has yet to be demonstrated.18 Bergson’s starting point is the diagnosis that measurable time, or likewise the measuring of time, as done in physics and by any clock, introduces homogeneity, which in turn translates into quantifiability [31]. As such, time is converted into a spatial concept, for these properties are also the properties of space as physics conceives it. The consequence is that we create pseudo-paradoxes like the one explicated by Augustine. To this factum of quantifiability Bergson then opposes qualitability. For him, quality and quantity remain incommensurable throughout his works.

At any rate, we cannot finally admit two forms of the homogeneous, Time and Space, without first seeking whether one of them cannot be reduced to the other […] Time, conceived under the form of an unbounded and homogeneous medium, is nothing but the ghost of space, haunting the reflective consciousness. ([32] p. 232)

So we can state that time is essentially a qualitative entity, or in other words an intensity, which according to Bergson is opposed to the extensity of spatial entities. Spatial entities are always external to each other, while for intensive entities—such as time—such an externalization is not possible. They can be thought only as a mutually interpenetrating beside-one-another, where the “beside” should be thought as aterritorial. As Friedrich Kuemmel puts it, intensity, for Bergson, can be detached from extensity.19 Intensity is then equipped by Bergson with a manifoldness or multiplicity that consequently establishes a reality apart from physical spatiality with its measurable time. This reality is the reality of consciousness and the soul. Bergson calls it “durée”, which of course must not be translated as “duration” (or as the German “Dauer”). Durée is more like the potential for communicable time, or in Deleuze’s words a “potential number” ([33] p.45), to which we can refer in language literally as “referential time.”

Deleuze determines Bergson’s notion of durée quite easily ([33] p.37):

It [durée] is a case of “transition,” of a “change,” a becoming, but it is a becoming that endures, a change that is substance itself. […] Bergson has no difficulty in reconciling the two fundamental characteristics of duration: continuity and heterogeneity. However, defined in this way, duration is not merely lived experience; […] it is already a condition of experience.

As a qualitative multiplicity, durée is opposed to quantitative multiplicity. For Bergson, this duality is a strict and unresolvable one, yet it does not set up an opposition; it is not subject to dialectics. It does, however, follow Bergson’s leitmotif, according to Deleuze ([33] p.23): people see quantitative differences where there are actually differences in kind.

Deleuze emphasizes that the two multiplicities have to be strictly distinguished ([33] p.38):

[…] the decomposition of the composite reveals to us two types of multiplicity. One is represented by space […]: it is a multiplicity of exteriority, of simultaneity, of juxtaposition, of order, of quantitative differentiation, of difference in degree; it is a numerical multiplicity, discontinuous and actual. The other type of multiplicity appears in pure duration: It is an internal multiplicity of succession, of fusion, of organization, of heterogeneity, of qualitative discrimination, or of difference in kind; it is a virtual and continuous multiplicity that cannot be reduced to numbers.

Here we may recall Aristotle’s notion of time as a kind of order. This poses the question of whether duration itself is a multiplicity. As Deleuze carves it out ([33] p.85):

At the heart of the question “Is duration one or multiple?” we find a completely different problem: Duration is a multiplicity, but of what type? Only the hypothesis of a single Time can, according to Bergson, account for the nature of virtual multiplicities. By confusing the two types – actual spatial multiplicity and virtual temporal multiplicity – Einstein has merely invented a new way of spatializing time.

Pushing Bergson’s architecture of time further, Deleuze develops his first account of virtuality. It becomes clear that durée is a virtual entity. As such, it is outside the realm of numbers, even outside of quantifiability or quantitability. Speaking in Aristotelian terms, we could say that time is a smooth manifold of kinds of orders. Again Deleuze (p.85):

Being, or Time, is a multiplicity. But it is precisely not “multiple”; it is One, in conformity with its type of multiplicity.

For Bergson, the tenses are already actualizations of durée. The past is conceived as being different from the present in kind, and it could not be compared to it. There is, however, also a possibility for a transition from a “past” to a “present”: it is the work of memory (as an abstract entity) that creates the link. Memory extends completely into the present, though. Its main effect is to recollect the past. In this sense, memory is stepping forward. Durée and memory are co-extensive.

As we have seen, Bergson’s conception of time is strongly linked to consciousness and its particular memory. We have also seen that he considers physical time a kind of secondary phenomenon. He thinks that things surely have no endurance in the sense of a capability to actualize durée into an extended present.

This poses a problem: what is time outside of us? In Time and Free Will he writes [32]:

Although things do not endure as we do ourselves, nevertheless, there must be some incomprehensible reason why phenomena are seen to succeed one another instead of being set out all at once. (p.227)

Well, what does this claim that “things do not endure as we do ourselves” refer to? Is there any endurance of things at all? And what about animals, thinking animals, or epistemic machines? As Deleuze explains, Bergson is able to solve this puzzle only by extending his durée into a cosmic principle ([33], p.51ff). Yet I think that in this case he mixes immaterial and material aspects in a quite inappropriate manner.

Bergson’s conception of time certainly has some appealing properties. But just like its much less potent rival, phenomenology, it is strongly anthropocentric. It can’t be generalized enough for our purposes, which follow the question of time in architecture. Of course, we could conceive of architecture as a thing that is completely passive as long as nobody looks at it or thinks about it. But what, then, about cities? The perspective of passive things has been largely refuted, first by Heidegger through his hermeneutic perspective, and in a much more developed manner by Bruno Latour and his Actor-Network Theory.

In still other terms, we could say that Bergson’s philosophy suffers from a certain binding problem. I think it was precisely the binding problem that caused the hefty dispute between Einstein and Bergson. Just to be clear, in my opinion both of them failed.

Thus we need a perspective that allows us to overcome the binding problem without sacrificing experiential time, durée, or the measurability of referential time. This perspective is provided by the semiotics of Charles Sanders Peirce.

Peirce

Peirce was an engineer; his formal accounts were thus always pragmatic. This sets him apart from Bergson, with his early devotion to mathematics. Where the former sees processes in which various parts engage, the latter sees abstract structures.

Being an engineer, Peirce looked at thought and time in a completely different manner. He starts with referential time, with clock-time. He does not criticize it out of hand, as Bergson would later do.

The first step in our reconstruction of Peircean time is his move to show that neither thought nor, of course, consciousness can take place in an instant. Consciousness must be a process. Moreover, thought is a sign. One has to know that for Peirce a sign is not to be mistaken for a symbol. For him it is an enduring situation. We will return to this point later.

In MS23720 (chapter IV in Writings 3) his primary concern is to explain how thinking could take place:

A succession in time among ideas is thus presupposed in time-conception of a logical mind; but need this time progress by a continuous flow rather than by discrete steps?

Of course, he concludes that a “continuous time” is needed. Yet at this point Peirce starts to depart from a single, univocal time. He continues:

Not only does it take time for an idea to grow but after that process is completed the idea cannot exist in an instant. During the time of its existence it will not be always the same but will undergo changes. […] It thus appears that as all ideas occupy time so all ideas are more or less general and indeterminate, the wider conceptions occupying longer intervals.

This way he arrives at a conception of time that could be characterized as a multiplicity of continua. Even if it were possible to determine a starting time and a time of completion for any of those intervals, it would still remain the case that all those overlapping thoughts form a single consciousness.

Chapter 5 in “Writings 3” (MS239), titled “That the significance of thought lies in reference to the future” [35], starts in the following way:

In every logical mind there must be 1st, ideas; 2nd, general rules according to which one idea determines another, or habits of mind which connect ideas; and, 3rd, processes whereby such habitual connections are established.

The second aspect strongly recalls our orthoregulation and the underlying “paradox of rule-following”, first clearly stated by Ludwig Wittgenstein in the 1930s [36]. The section ends with the following reasoning:

It appears then that the intellectual significance of all thought ultimately lies in its effect upon our actions. Now in what does the intellectual character of conduct consist? Clearly in its harmony to the eye of reason; that is in the fact that the mind in contemplating it shall find a harmony of purposes in it. In other words it must be capable of rational interpretation to a future thought. Thus thought is rational only so far as it recommends itself to a possible future thought. Or in other words the rationality of thought lies in its reference to a possible future.

In this brief paragraph we may find several resemblances to what we have said earlier, and elsewhere. First, Peirce’s conception of time within his semiotics provides us with a means for addressing the binding problem. More precisely, thought as sign process is itself the mechanism that relates ideas and actions, where actions are always preceded, but never succeeded, by their respective ideas.

Second, Peirce rejects the idea that a single purpose could be considered reasonable. Instead, in order to justify reasonability, a whole population of remindable purposes, present and past, is required; all of them overlapping, at least potentially, all of them once pointing to the future. This multiplicity of overlapping and unmeasurable intervals creates a multiplicity of continuations. Even more importantly, this continuation is known before it happens. Hence, the present extends into the past as well as into the future. Given that, firstly, the immediate effect of an action is rarely the same as the ultimate effect, and that, secondly, the ultimate effect is often quite different from the expectation related to the purpose, we often do not even know “what” happened in the past. So, by applying ordinary referential time, our ignorance stretches to both sides of the present, though not in the same way. It even exceeds the period of time of what could be called an event.

Yet, by applying Peirce’s continuity, we find a possibility to simplify the description. For we are then faced with a single kind of ignorance, one that results in the attitude Heidegger called “care” (Sorge).

The mentioned extension of the experienced ignorance, an ignorance within the present, into the past and the future does not mean, of course, proposing a symmetry between the past and the future with respect to the present, as we will see in a moment. Wittgenstein [40] is completely right in his diagnosis that

[…] in the grammar of the future tense the concept of “memory” does not occur, not even with an inverted sign.21 (p. 159)

The third issue, finally, concerns the way Peirce relates rationality to the notion of a “possible future.” This rationality does not claim absolute objectivity, since it creates its own conditions as well as itself. Peirce’s rationality is a local one, at least at first sight. It is just this creating of the possible future that provides the conditions for the possibility of the experienceability of future affairs.

The most important (methodological) feature of Peircean semiotics is, however, the possibility of jumping out of consciousness, so to speak. Sign situations occur not only within the mind; they are also ubiquitous in interpersonal exchange, and even in the absorption of energy by different kinds of matter. Semiotics provides a cross-medial continuity. This argument was later extended by John Dewey [37][38], Peirce’s pragmatist disciple.

Thus we could say that, if (1) thought comprises signs, and (2) signs are sign situations, then it does not make sense to speak about “instantaneous” time regarding thought and consciousness in particular, but also regarding any interpretation in general, as interpretation is always part of a sign (situation). Then we can also say that the present lasts as long as a particular interpretation is “running”. Yet signs refer only to signs. Interpretations are fundamentally open at their beginning as well as at their end. They are nested and occur in parallel, and they are more often broken off than finished, and merely contingently so. Once the time string, or the interpretive chain, respectively, has been broken, past and future appear literally in their own right, i.e. de iure, and only by a formal act.22

The consequence of all this is that the probabilistic network of interpretations gives rise to a cloud of time strings, each of them with indeterminable ends. It is clear that signs, and thus thinking, would be absolutely impossible if there were just one referential clock-time. But even more importantly, without the inner multiplicity of “sign time” there would be only the cold world of a single, strictly causal process. There would be no life and no information. Only a single, frozen black hole.

Given the primacy of the cloud of time strings, it is easy to construct referential time as a clock-time. One just needs to enumerate the overlapping time strings in such a way that enumeration and counting coincide. Once this is done, it is possible to refer to a clock. Yet the clock would be without any meaning without such an enumerative counting. The clock is then actualized more simply by a perfectly repetitive process, that is, a process that is actually outside of time, much as Aristotle thought was the case for the celestial bodies. And once we have established clock-time, we can engage in the interpersonal synchronization of our individual populations of time strings.
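
A toy sketch may make this construction tangible (my illustration only; real time strings have indeterminable ends, so the crisp interval endpoints below are a deliberate idealization):

```python
# Toy sketch: deriving referential clock-time from a cloud of overlapping
# "time strings", here crudely idealized as intervals with known endpoints.

# hypothetical time strings: (begin, end) of running interpretations
strings = [(0.0, 3.1), (1.2, 4.0), (2.5, 7.3), (5.0, 8.8), (6.1, 9.5)]

# collect all boundary events and enumerate them in one single order,
# so that enumeration and counting coincide; this order is the "clock"
events = sorted(
    (t, i, kind)
    for i, (begin, end) in enumerate(strings)
    for t, kind in ((begin, "begins"), (end, "ends"))
)

for tick, (t, i, kind) in enumerate(events):
    print(f"tick {tick} (at t={t}): string {i} {kind}")

# any perfectly repetitive process (a pendulum, the Cs-133 transition)
# can then stand in for this enumeration and serve as the public clock
```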

Peircean sign time thus not only allows us to reconcile the two modes of time, experiential time and referential time. It also makes it possible to extend the same process into historical time, rooting historicity in an alternative and much more appealing manner than that proposed by Heidegger.

Wittgenstein

All the positions we have met so far can be split into two sets. In the first we find fundamental ontology and existential philosophy (Heidegger), analytic ontology (Oaklander), “folk approaches” (Augustine), idealistic conceptions (McTaggart), and physics with its reductionist perspective. In the second we find Aristotle, Bergson and Peirce.

The difference between the two parties lies in the way they root the concept of time. The former party roots it in reality; hence they ask about the inner structure of time, much like one would ask about the inner structure of wood. For the proponents of the second class, time is primarily experiential time and as such always rooted in the interpretant, i.e. some kind of active observer, whether this refers to observers with or without consciousness. For all of them, though in different ways, the present is primary. For Aristotle it is a kind of substance, for Bergson durée, for Peirce the sign as process.

Wittgenstein does not say much about time, since he seems to be convinced that there is not much to say. He simply accepts the distinction between the referential time of physics and experiential time, and considers the two incommensurable [39].

Both ways of expressing it are okay and of equal standing, yet they cannot be blended.23 ([40], pp.81-82)

Already in the Tractatus, Wittgenstein wrote

We cannot compare any process with the “passage of time”—there is no such thing—but only with another process (say, with the movement of the chronometer).24 (TLP 6.3611)

Here it becomes clear that clock-time is nothing “built into matter”, but rather a communally negotiated reference, or in short, referential time. We all refer to the same particular process, whether this is the length of a day or the number of state transitions in Cs-133.25 Experiential time, on the other hand, can’t be considered a geometrical entity; hence there is no such thing as a “point” in the present. In experience there is nothing to measure, the main reason being that experience is tightly linked to (abstract) modeling, and thus to the choreosteme. In short, experience is a self-generating process without an Archimedean point.

“Now” does not denote a point in time. It is not a “name of a moment in time.”26 ([43], 157)

[…] yet it is nonsense to say ‘This is this’, or ‘This is now’.27 ([43], 159)

“Now” is an indexical term, just like “I”, “this” or “here”. Indexical terms do not refer to an index. Quite the contrary: sometimes, in simpler cases, they set an index; in more complicated cases, indexical terms just denote the possibility of imposing an index onto a largely indeterminate context. Hence, it is for grammatical reasons that we can’t say “this is now.” Time is not an object. Time is nothing of which we could say that it exists. Thus we also cannot ask “What is time?”, as this implies the existentialist perspective. The question about the reality of time is ungrammatical; it is like trying to play Chinese checkers28 on a chess board, or chess on a soccer field.

More precisely, there is no possibility of speaking about “time as an object” in meaningful terms. For language is (i) a process itself, (ii) a process that intrinsically relates to the communal (there is no private language), and (iii) a strongly singular term. Thus we can conclude that there is no such thing as the objectification of time, or objective time.

Examples of such an objectification are easy to find. It is included, for instance, in the question posed by Augustine, “What is time?” (Wittgenstein’s starting point in the Philosophical Investigations). It is also included in the misunderstanding of an objective referential time. Or in the claim that time itself flows (like a river). Or in the attempt to prove that time itself is continuous.29

Instead, “now” is used as an indication of—or a pointer to—the present and the presence of the speaker. Its duration in terms of clock-time is irrelevant. It would be nonsense to attempt to measure this duration, because it would mean measuring the speaker and his act itself.

Accordingly, the temporal modes in language, the tenses (past, present, future), reflect the temporal modes of actions—including speech acts—which take place in the “now” and are anchored in the future through their purpose ([42] p.142).

Confusing and mixing the two conceptions of time—referential time and experiential time—is, according to Wittgenstein, the main reason for enigmas and paradoxes regarding time (such as McTaggart’s distinction between the A-series and the B-series, and its role in ontology).

For there is no such thing as the objectification of time; time is intrinsically a relational “entity”. As Deleuze brilliantly demonstrates in his reflections on Bergson [33], time can be thought only as durée, or in my words, as a manifold of anobjected time strings, which directly points to the virtual, which in turn is not isolated but rather an intensity within the choreosteme.

The idealistic, phenomenological and existential approaches to temporality are deeply flawed, because it is not possible to take time apart, or to take time out of the game. Wittgenstein considers such attempts a misuse of language. Expressions like “time itself” or questions like “What is time?” are outside of any possible language.

In the ‘Philosophical Remarks’ he says:

What belongs to the essence of the world cannot be expressed by language. Only what we could imagine as being otherwise can language tell.30 ([40] p.84)

Everything which we are able to describe at all could also be different.31 ([45], p.173)

In order to play the game of “questioning the reality of X” in a meaningful manner, it has to be possible that X is not real, or only partially real. An alternative is needed, which, however, is missing in existential questions or in attempts to find the essence. Thus it is meaningless (free of sense) to doubt (even implicitly) the reality of time, whether as present, past or future. It is similar to Moore’s paradox of doubting that one has an arm. In the end, at least after Wittgenstein, one always has to begin with language. It is nonsense to begin with existence, or likewise with essence.

Wittgenstein rejects the traditional philosophical reflection that always tried to find the eternal, necessary and general truth, essence or “true nature”, as opposed to empirical—and pragmatical—impressions. The attempt to determine the reality of X as a being-X-as-such is a misuse of language; it is outside the logic of language.

For Wittgenstein, the more interesting part of time points to memory, since clock-time is a mere convention. For him, memory is the wellspring (“Quelle”) of time, since the past is experienceable only as a recall of the past ([40] p.81f). Bergson called this recollection.

I think there is one major consequence of Wittgenstein’s considerations. Time can be comprehended only as a transcendent structural condition of establishing a relation, hence also of acting, speaking and thinking. Without such conditioning it is simply not possible to establish a relation. This extends, of course, to the realm of the social as well [46]. Here we could even point to physics, particularly to the speed of light as the maximum speed of exchanging information, which translates into the “establishment of time” as soon as a relation has been built. This implies that the building of a relation is irreversible. Within reversibility it does not make sense to speak about time. Even more briefly, we could be tempted to say that within information there is no time, if it were meaningful at all to think something like “within information”. Information itself is strictly bound to interpretation, which brings us back to Peircean semiotics.

Thus we could say that we as humans “create” time mainly by means of language, although that is not the only possibility for “creating” time. Yet for us humans (as collective individual beings32) there is hardly another possibility, for we can’t step out of language. Different languages and different uses of language “create” different times. This is what Helga Nowotny calls “Eigenzeit” [46] (“self-owned time”).

It is rather important to understand that by means of these arguments we no longer refer to something like “historical time” or “natural time”. Our argument is much more general.

Secondarily, then, we may conclude that we have to ask about the different ways we use the language game “time”.

Ricoeur

Like other authors, Paul Ricoeur proposes a strict discontinuity between historical time (“historicality”) and physical time. The former he also calls “time with present”, the latter “time without present.” Yet, unlike other authors, he also proposes that this discontinuity can’t be reconciled or bridged. He proceeds to formulate this hypothesis by means of three aporias [47].

  • Aporia 1, duality: Subjective time and objective time can’t be thought together in a single conception; even more, they mutually obscure each other.
  • Aporia 2, false unity: Although we take it for granted that there is one single time, we can’t justify this. We even contradict the insight—which appears as trivial—that there is subjective and objective time.
  • Aporia 3, inscrutability: Thought cannot comprehend time, since its origin can’t be grasped. Conceptually, time is ineluctable. Whenever philosophical thought starts to think about time, this thinking is already too late.

Ricoeur is the second author in our selection who takes a phenomenological stance. Heidegger’s “Being and Time” serves as his point of reference. Yet, Ricoeur is interested neither in the analysis of Being nor of the having-Been. The topic to which he refers in Heidegger, and at the same time his vantage point, is historicality, which he approaches in a very different manner. For Ricoeur, history and historicality can not just be understood through narrativity; there is even a mutual structural determination. Experience of time, as both the source and the soil of historicality, gets refigured through narration. In the essay “On Narrative” [49], which he published while his major work “Time and Narrative” [48] was in the making, we can find his main hypothesis:

My […] working hypothesis is that narrativity and temporality are closely related—as closely as, in Wittgenstein’s terms, a language game and a form of life. Indeed, I take temporality to be that structure of existence that reaches language in narrativity and narrativity to be the language structure that has temporality as its ultimate referent. Their relationship is therefore reciprocal. (p.169)

Concerning narrativity, Ricoeur draws a lot, of course, on the structure of language and the structure of stories. On both levels various degrees of temporality and nonchronological proportions appear. On the level of language, we find short-range and long-range indicators of temporality, beyond mere grammar. Long-range indicators such as “while” or adverbs of time (“earlier”) do not have a clear boundary, neither structurally nor semantically. The same can be found on the level of the story, the plot as Ricoeur calls it. Here he distinguishes an episodic from a configurational dimension, the former presupposing ordinary, i.e. referential time. Taking into account that

To tell and to follow a story is already to reflect upon events in order to encompass them in successive wholes. (p.178)

it follows that any story comprises a

[…] twofold characteristic of confronting and combining both sequence and pattern in various ways.

In other words, a story creates a multiplicity of possible sequences and times, thereby opening a multiplicity of “planes of manifestation,” that is, a web of metaphors33.

[…] the narrative function provides a transition from within-time-ness to historicality.

Yet, according to Ricoeur the configurational dimension of the story has a particular effect on the ordinary temporality of the story as it is transported by the episodic dimension. Through the triggered reflective act, the whole story may condense into a single “thought”.

Finally, the recollection of the story governed as a whole by its way of ending constitutes an alternative to the representation of time as moving from the past forward into the future, according to the well-known metaphor of the arrow of time. It is as though recollection inverted the so-called natural order of time. […] A story is made out of events to the extent that plot makes events into a story. The plot, therefore, places us at the crossing point of temporality and narrativity.

This single thought, the plot of a story as a whole, is now confronted particularly with the third aporia, that of inscrutability. Basically, for Ricoeur the fact of “not really thinking time” when thinking about time is aporetic (fTR III 467/dZE III, 417). The aporia

[…] emerges right in that moment, where time, which eludes any attempt to be constituted, turns out to be associated to a constitutive order, which in turn always and already is assumed by the work of that constitution.

Any conception that we could propose about time is confronted with the impossibility of integrating this reflexively ineluctable ground. We can never completely subject time, as an object, to our reflections. Inga Römer emphasizes (p.284)

Yet, and this is a crucial point for Ricoeur, “what is brought to its failure here is not thinking, in all the meanings of the word, but rather the drive, better the hubris, that seduces our thinking into making itself the master of sense”. Because this failure is only a relative one, inscrutability is confronted not with a lapse into silence, but rather with a polymorphy of arrangements and valuations of time.34

The items of this polymorphy are incommensurable for Ricoeur. Now, this polymorphy of time experience is situated, for Ricoeur, in a constitutive and reciprocal relationship with narrativity (see his main hypothesis in “On Narrative” cited above). Thereby, our experience of time refigures and reconfigures itself continuously. In other words, narration represents a practical and poetic mediation of heterogeneous experiences of time. This interplay, according to Ricoeur, can overcome the limitations of philosophical inquiries into time.

Interestingly, Ricoeur rejects any systematicity for his arguments, as Römer points out:

This belonging-together of the withdrawal of grounds on the one hand and the challenge towards thinking-more and thinking-differently on the other is the strongest reason for Ricoeur’s explicit refusal of a system, regarding the three aporias of time as well as their narrative answers.35 (p.454)

The result of this is pretty clear. The Ricoeurean aporetics starts to molt into a narration of its own, constantly staggering and oscillating between its claim, its negation, its negative positivity and its positive negativity, beginning to dazzle and becoming incomprehensible.

Temporality tends to get completely merged into narrativity, which in turn becomes synonymous with the experience of time. Thus, there are only two possibilities for Ricoeur, neither of which he actually followed. The first is the denial of any temporality that could be thought independently of narration. The second would be to equate life with narration.

I think, Ricoeur would favour the second alternative. As Römer summarizes:

Historical practice allows us to mediate experienced time with linear time in its own creation, the historical time.36 (p.326)

In this way, however, Ricoeur would introduce a secondary re-mystification, which actually is even an autologous one, since he started from inscrutability in the first place. At this point, all his arguments vanish and turn into a simple pointing to experience.

In the end, the notion of historical practice remains rather questionable. Ricoeur uses the concepts of witness or testimony as well as “trace,” which of course reminds of Derrida’s infamous trace: an uninterpretable remnant of history. Although Ricoeur emphasizes the importance of the reader as the situs of the completion of the text, he never seems to accept interpretation as a primacy. Here, he closely follows the inherited phenomenological misconception of the object that exists independent from and outside of the subject. Further difficulties are the denial of transcendence and abstraction, which, together with its logicism, creates the spurious problem of freedom. Phenomenology never understood, whether in Husserl, Heidegger, Derrida, Ricoeur or analytic philosophy, that comparing things can’t take place on the same level as the compared things. Even the most simple comparison implies the Differential, requiring a considerable amount of constructive activity.

Outside phenomenology, Ricoeur’s attempt is not very convincing, albeit he offers many interesting observations about narration and texts. His aporetics of time appears half-baked, through and through, so to speak. Poisoned by phenomenology, and strangely enough forgetting about language in the formulation of his aporias, he commits almost all of the possible mistakes already in his premises. He objectifies time and he treats it as an existential that could be explained. After all, his main objection, that we “can’t really think time”, does not hit a unique case. Any thinking of any concept is unable to “really think it.”

Our conclusion is not a rejection of Ricoeur’s basic idea of a mutual relationship between “thinking time” and narration. Yet, thinking about narration from within phenomenology is obviously an impossibility in itself.

One of the interesting observations around narration is the distinction between the episodic and the configurational dimension of a plot. This introduces multiplicity, reversibility and an extended present, as well as an additional organizational layer. Yet, Ricoeur failed to step out of his affection for narration in order to become aware of the opportunities attached to it.

Kant

Introducing transcendence into our game, we have to refer to Kant, of course, and his conception of time in his “Transzendentale Ästhetik der Kritik der reinen Vernunft”. Kant’s merit is the emancipation of transcendental thinking from the imagined divinity, albeit he did not push this move far enough.

By no means did Kant demonstrate the irreality of time, as Einstein as well as McTaggart boldly claimed. Kant just demonstrated that time can’t “have” a reality independent from a subject. Accordingly, the idea of an illusionary or irreal time is itself based on a fiction: the fiction of naïve realism. It claims the possibility of an access to “nature” in a way that is independent of the subject. Conversely, this does not mean that time as a reality is constructed by human thinking, of course.

The reason for misunderstanding Kant lies in the fact that Kant still argues completely within the realm of the human, while physicists like Einstein talk about the fiction of primarily unrelated entities. It is a major methodological element of the theoretical constitution of physics to assume such entities, in order to become able, so the fiction goes, to describe their relations objectively. Well, actually this does not make much sense, yet physicists usually believe in it.

Far from showing that time is illusionary, Kant tried to secure the objectivity of time under the conditions of empirical constitution, that is, after the explicit and final departure from the still scholastic pre-established harmonies that are guaranteed by God. In order to accomplish that he had to invent a kind of intrinsic transcendentality of empirical arrangements. This common basis he found in the transcendent sensual intuition.

For Kant time is a form of intuition (Anschauung), or more precisely, a transcendental and insofar pure form of sensual intuition. It is however of utmost importance, as Mike Sandbothe writes, that Kant himself relativized the universality that is introduced by the transcendentality of time, or in still other words, the intuition of the transcendental subject.

[…] the form of intuition gives merely a manifold, the formal intuition however gives unity of representation. (“[…] die Form der Anschauung bloss Mannigfaltiges, die formale Anschauung aber Einheit der Vorstellung gibt.”) ([47] p.154, B 160f)

The formal account in the intuition now refers to the use of symbols. Thus, it can’t be completely covered, as a subject, by pure reason. Here we find a possible transition to Wittgenstein, since symbols are symbols by convention. Note that this does not refer to a particular symbol, of course, but to the symbolicity that accompanies any instance of talking about time. On the one hand this points towards the element of historicity, which has been developed by Heidegger in a rather limited manner (because he restricted history to the realm of the Dasein, i.e. consciousness).

On the other hand, however, we could extend Kant’s insight of a two-fold constitution of time into more abstract, and this means a-human, regions. In a condensed way Kant shows that we need sensual intuitions and symbolicity in order to access temporal aspects of the world. Sensual intuitions, then, require, in the widest sense, a kind of match between the sensed and the sensing. In human thinking these are the schemata, in particle physics it is the filter built deeply into matter. We could call this transverse excitability. In physics, it is called the quantum.

Yet, the important thing is the symbolicity. We can immediately translate this into quantificability and quantitability. And again we are back at Aristotle’s conception.

2. Synopsis

So, after having visited some of the most important contributions to the issue of time we may try to approach a synopsis of them. Again, we have to emphasize that we disregarded many highly interesting ideas, among others those of Platon in his Timaios with his three “transcendental” categories of Being, Space and Becoming, or those of Schelling (cf. in [31]); or those of Deleuze in his cinema books, where he distinguished the “movement image” (presupposing clock time) from the “time image” that is able to provide a grip onto “time itself,” which, for Deleuze, is the virtual to which Bergson’s durée points; likewise, any of the works by the authors we referred to should have been discussed in much more detail in order to do justice to them. Anyway.

Our intermediate goal was to liberate time from its human influences without sacrificing the applicability of the respective conception to the realm of the human. We need to do so in order to investigate the relation between time and architecture. This liberation, however, still has to obey the insight of Wittgenstein that we must not expect to find an “essence” of time. Taking all the aspects together, we indeed may ask, as carefully as possible:

How should we conceive of time?

The answer is pretty clear, yet it comes as a compound consisting of three parts. And above all it is also pretty simple.

(1) Time is best conceived as a transcendent condition for the possibility of establishing a relation.

This “transcendent condition” is not possible without a respective plane of immanence, which in turn comprises the unfolding of virtuality. Much could be said about that, of course, with respect to the philosophical implications, its choreostemic references, or its architectonic vicinity. For instance, this determination of time suggests a close relationship to the issue of information and its correlate, causality. Or we could approach other conceptions of time by means of something like a “reverse synthesis.”

It is perhaps at least indicated to emphasize—particularly for all those who are addicted to some kind of science—that this transcendent condition by no means excludes the consideration of “natural” systems, not even in its material(ist) contraction. On the other hand, this in turn does not mean, of course, that we are doing “Naturphilosophie” here, neither of the ancient nor of the scholastic type.

It is clear that we need to instantiate the subjects of this conception in order to achieve practical relevance. It is in this instantiation that different forms of temporality appear, i.e. durée on the one hand and clock-time on the other. Nothing could be less surprising, now, than an incompatibility of the two forms of temporality. Actually, the expectation of a compatibility is already based on the misunderstanding that claims the possibility of a “direct” comparison (which is an illusion). Quite to the contrary, we have to understand that the phenomenal incommensurability just points to a differential of time, which we formulated as a transcendent condition above.

Now, one of the instantiations, clock-time, or referential time, is pretty trivial. We don’t need to deal with it any further. The other branch, where we find Peirce and Bergson, is more interesting.

As we have seen in our discussion of their works, multiplicity is an essential ingredient of relational time. Peirce and Bergson arrived at it in different ways, though. For Peirce it is a consequence of the multiplicity of thoughts about something, naturally derived from his semiotics. For Bergson, it is a multiplicity within experience, or better, the experiencing consciousness. So to speak, they take inverse positions regarding mediality. We already said that we prefer the Peircean perspective due to its more prominent potential for generalization. Yet, I think the two perspectives could be reconciled quite easily. Both conceptions conceive of primal time as “experiential” time (in the widest sense).

Our instantiation of time as a transcendent condition is thus:

(2) Transcendent time gets instantiated as a probabilistic, distributed and manifold multiplicity of—topologically speaking—open time strings.

Each time string represents a single and local present, where “local” does not refer to a “spatial place”, but rather to a particular sign process.

This multiplicity is not an external multiplicity, although it is triggered or filled from the external. It is also not possible to “count” the items in it without losing the present. If we count, we destroy the coherence between the overlapping strings of present, thus creating countable referential time. This highlights a further step of instantiation, the construction of expressibility.

(3) The pre-specific multiplicity of time strings decoheres by symbolization into a specific space of expressibility.

Symbolization may be actualized by means of numbers, as already mentioned before. This would allow us to comprehend and speak of movement. We also have seen that we could construct a web of proceeding metaphors and their virtual movement. This would put us in the midst of narration and into metaphoricology, as I call it, which refers to the perspective that conceives of being human and of human beings as parts of lively metaphors. In other words, culture itself becomes the story and the narrative.

As still another possibility we could address the construction of a space of expressibility of temporality quite directly. Such a space needs to be an aspectional space, of course. Just keep in mind that the aspectional space is not a space of quantities, as it is the case for a Cartesian space. The aspectional space is a space that is characterized by a “smooth” blending of intensity and quantity. We may call it intensive quantities, or quantitable intensities. It is a far-reaching generalization of the “ordinary” space conceptions that we know from mathematics. As the aspects—the replacement of dimensions—of that space we could choose the modes of temporality—such as past, present, future—the durée, the referential time, or implicit time as it occurs and shows up in behavior or in choreostemic space. We could also think of an aspection that is built by a Riemannian manifold, allowing us to comprise linearity and circularity as just a single aspect.

The tremendous advantage of such a space is manifold, of course, because an infinite number of particular time practices can be constructed, even as a continuum. This contiguous and continuous plurality is of a completely different kind than the unmediatable items in the plurality of time conceptions that has been proposed by Mike Sandbothe [8].

The aspectional space of transcendent time offers, as I mentioned, the possibility of expressing time, or more precisely, a particular image of time. There are several such spaces, and each of them is capable of holding an infinite number of different images of time.

It is now easy to understand that collapsing the conditions for building relations with the instantiation into a concrete time form, or even with the action (or the “phenomenon”), results in nothing else than a terrible mess. Actually, it is precisely the mess that physicists and phenomenology create in different ways. “Phenomenal” observables of this mess are pseudo-paradoxes or dualities. We also could say that such a mess is created by a wrong application of the grammar of time.

There is one important aspect of time and temporality, or perspective onto them, that we have mentioned only marginally so far: the event. We met it in Heidegger’s “Being and Time” as the provider [m: the Giving] and insofar also the origin of Being and time. We also saw that Ricoeur uses events as building bricks for stories that combine them into successive wholes. For Dewey (“Time and Individuality”, “Context of Thought”) the concept of an event involves both the individual pattern of growth and the environmental conditions. Dewey, like Ricoeur, emphasizes that there is no geometrical sequence, no strict seriality in which events could be arranged. Dewey calls this concurrence, which could not be separated from the occurrence of an event.

Yet, for both of them time remains something external to the conception of the event, while Heidegger conceives it as the source of time. Considering our conception of time as a proceeding actualization of Differential Time, we could say that the concept of the event relates to the actualization of the relation within the transcendence of its conditions. Thus it could be said to accompany the creation of time, integrating transcendent and practical conditions as well as all the more or less contingent choices associated with it. In some way we can see that we have proceduralized (differentiated) Heidegger’s “point of origin”.37 Marc Rölli [52] sharpens this point by referring to Deleuze’s conception as “radically empiricist”, dismissing Heidegger through the concepts of actuality and virtuality. Thus we can see that the immediate condition embedding the possibility of experience is the “event,” which in turn can be traced back to a proper image of time. Time, as a condition, is mediated towards experience by the event, as a condition. Certainly, however, the “event” could not be thought without an explicitly formulated conception of time. Without it, a multitude of misunderstandings must be expected. If we accept the perspective that time insofar precedes substance, which resolves of course into a multiplicity in a Deleuzean perspective, we also could say that the trinity of time, event and experience contributes to the foil of immanence, or even builds it up, where experience in turn refers to the choreostemic constitution of being in the world.

In order to summarize our conception as an overview, here is how we propose to conceive of time:

  • (1) Time is a transcendent condition for the possibility of establishing a relation, or likewise a quality.
  • (2) It gets instantiated as a probabilistic multiplicity of open time strings that, by the completion of all instantiations, present presence.
  • (3) The pre-specific multiplicity of time strings decoheres by symbolization into a specific, aspectional space of expressibility.
  • (4) Any particular “choice” of a situs in this space of intensive quantities represents the respective image of time, which then may emerge in worldly actualizations.

Particularly regarding this last element we have to avoid the misunderstanding of a seriality of the kind “I choose, then I get”. This choice is an implicit one, just as the other instantiations, and can be “observed” only in hindsight; more precisely, they show themselves only within performance. Only in this way can we say that time is brought into a particular Lebenswelt and its contexts as a matter (or subject) of design.

Nevertheless, we could now formulate a kind of recipe for creating a particular “time”, form of temporality, or “time quality.” This would work also in the reverse direction, of course. It is possible to construct a comparative of time qualities across authors, architects or urban neighborhoods. Hopefully, this will help to improve urban practices. In order to make this creational aspect more clear, we now have to investigate the possibilities to create time “itself”.

to be continued …

(The next part will deal with the question whether it could be possible to identify the mechanisms needed to create time…)

Notes

1. “Living City” was Archigram’s first presentation to the public; it was curated by Ron Herron in 1963.

2. German orig.: „Zuletzt markiert die Zeit für Ricoeur das “Mysterium” unseres Denkens, das sich der Repräsentation verweigert, indem es unser Dasein auf eine für das Denken uneinholbare Weise umgreift.“

3. As in the preceding essays, we use the capital “U” if we refer to the urban as a particular quality and as a concept in the vicinity of Urban Reason, in order to distinguish it from the ordinary adjective that refers to common sense understanding.

4. remark about state and development.

5. We discussed them in the essay about growth patterns. The fractal is a consequence of self-affine mapping, roughly speaking, a local replacement by a minor version of the original diagram.

6. It is tempting to relate this position to Heisenberg’s uncertainty principle. Yet, we won’t deal with contemporary physics here, even as it would be interesting to investigate the deficiencies of physical conceptions of time.

7. McTaggart’s paper about time has been cited over and over again and unfortunately became very influential. Yet, it is nothing but a myth. For a refutation see Tegtmeier [18]. For reasons of its own stupidity and the boldly presented misinterpretation of the work of Kant, McTaggart’s writing deserves the title of a “most developed philanosy” (Grk: anoysia ανοησία, nonsense, or anosia, immunity). It is not even worthwhile, as we will see later through our discussion of Wittgenstein’s work regarding time, to consider it seriously, as for instance Sean Power does.

8. There is a distant resemblance to George Berkeley’s “esse est percipi.” [20] Yet, in contrast to Berkeley, we conceive of interpretation as an activity that additionally is deeply rooted in the communal.

9. German original: SZ: 326: „Zukünftig auf sich zurückkommend, bringt sich die Entschlossenheit gegenwärtigend in die Situation. Die Gewesenheit entspringt der Zukunft, so zwar, dass die gewesene (besser gewesende) Zukunft die Gegenwart aus sich entlässt. Dies dergestalt als gewesend-gegenwärtigende Zukunft einheitliche Phänomen nennen wir die Zeitlichkeit.

10. One has to consider that Heidegger conceives of Being only in relation to the Being-there (“Dasein”), while the “Being-there” is confined to conscious beings.

11. The translators used “falling”, which however does not match the German “verfallend”. (Actually, I consider it as a mistake.) Hence, I replaced it by a more appropriate verb.

12. Note that Heidegger always used to write in a highly ambiguous fashion, which makes it nearly impossible to translate him literally from German to English. In everyday language “Es gibt” is surely well translated by “There is.” Yet, in this text he repeatedly refers to “giving”. Turning the perspective to “giving” opens the preceding “Es” away from its being as an impersonate corpuscle towards an impersonal “fateness”. This interpretation matches the presentation of the affair in [24].

13. German original: “Das Sein eigens denken, verlangt, das Sein als den Grund des Seienden fahren zu lassen zugunsten des im Entbergen verborgen spielenden Gebens, d.h. des „Es gibt“.“

14. see also: Marcel Mauss, Die Gabe. Form und Funktion des Austauschs in archaischen Gesellschaften. Suhrkamp, Frankfurt 2009 [1925].

15. German orig.: „In “Zeit und Sein” schliesslich sieht Heidegger den Ursprung der Zeit im Ereignis, welches er ausdrücklich als den [sich ] selbst entzogenen Geber von Sein und Zeit bestimmt. Das Ereignis, von Heidegger andernorts bestimmt als singulare tantum, ist selbst grundsätzlich entzogen – und dennoch ist das Ereignis der Ursprung der Zeit.“

16. German original (my own translation): “Sein und Zeit ist vielmehr dahin unterwegs, auf dem Wege über die Zeitlichkeit des Daseins in der Interpretation des Seins als Temporalität einen Zeitbegriff, jenes Eigene der “Zeit” zu finden, von woher sich “Sein” als Anwesen er-gibt. Damit ist aber gesagt, daß das in der Fundamentalontologie gemeinte Fundamentale kein Aufbauen darauf verträgt. Stattdessen sollte, nachdem der Sinn von Sein erhellt worden wäre, die ganze Analytik des Daseins ursprünglicher und in ganz anderer Weise wiederholt werden.“ [21]

17. German original (my translation): “Zeit-Raum nennt jetzt das Offene, das im Einander-sich-reichen von Ankunft, Gewesenheit und Gegenwart sich lichtet. Erst dieses Offene und nur es räumt dem uns gewöhnlich bekannten Raum seine mögliche Ausbreitung ein.“

18. This also holds for any of the attempts that can be found in physics. The following sources may be considered the most prominent ones, though they are not undisputed: Carroll [27], Price [28][29], Penrose [30]. Physics always and inevitably conceives of time as a measurable “thing”, i.e. as something which already has been negotiated in its relevance for the communal aspects of thinking. See Aristotle’s conception of time.

19. hint to Schelling, for whom intensity is not accessible at all, but could be conceived only as a force that expands into extensity.

20. You will find Peirce’s writings online at http://www.cspeirce.com/; the parts referenced here can be found for instance at http://www.cspeirce.com/menu/library/bycsp/logic/ms237.htm.

21. German original (my transl.): „Denn in der Grammatik der Zukunft tritt der Begriff des ,Gedächtnis’ nicht auf, auch nicht mit umgekehrten Vorzeichen.“

22. In meditational practices one can extend the interpretive chain in various ways. The result is simply the stopping of referential time.

23. German orig.: „Beide Ausdrucksweisen sind in Ordnung und gleichberechtigt, aber nicht miteinander vermischbar“.

24. German orig.: „Wir können keinen Vorgang mit dem ,Ablauf der Zeit’ vergleichen – diesen gibt es nicht – sondern nur mit einem anderen Vorgang (etwa mit dem Gang des Chronometers).“ translation taken from here.

25. 1 second is currently defined as the duration of 9192631770 periods of the radiation corresponding to the transition between two energy levels of the caesium-133 atom. [41] Interestingly, this fits nicely to Aristotle’s conception of time. The reason to take the properties of Cs-133 as a reference is generality. The better the resolution of the referential scale, the more generally it can be applied.

26. German orig.: „„Jetzt“ bezeichnet keinen Zeitpunkt. Es ist kein „Name eines Zeitmomentes“.“

27. German orig.: „[…] es ist aber Unsinn zu sagen ‘Dies ist dies’, oder ‘Dies ist jetzt’.“

28. In German “Halma”.

29. Much could be said about physics here, regarding the struggle of physicists to “explain” the so-called arrow of time, or regarding the theory of relativity or quantum physics with its Planck time, but it is not close enough to our interests here. Physics always tries to objectify time, which happens through claiming a universally applicable scale; hence it runs into paradoxes. In other terms, the necessity of conceptions like Planck time or time dilation points precisely to the fact that without an observer there is nothing. The mere possibility of observation (and the observer) vanishes at the speed of light, or at the singularity “within” black holes. In some way, physics all the time (tries to) prove(s) its own nonsensical foundations.

30. German orig.: „Was zum Wesen der Welt gehört, kann die Sprache nicht ausdrücken. (…) Nur was wir uns auch anders vorstellen können, kann die Sprache sagen.”

31. German orig.: ,,Alles was wir überhaupt beschreiben können, könnte auch anders sein”.

32. Note that in case of a city we meet somewhat the inverse of it. We could conceive of a city as “an individual being made from a collective.”

33. see also Paul Ricoeur (1978), “The Metaphorical Process as Cognition, Imagination, and Feeling,” Critical Inquiry 5(1): 143-159.

34. German orig.: „Aber, und das ist für Ricoeur entscheidend, “was hier zum Scheitern gebracht wird, ist nicht das Denken, in allen Bedeutungen des Wortes, sondern der Trieb, besser die hybris, die unser Denken dazu verleitet, sich zu Herrn des Sinns zu machen“. Aufgrund dieses nur relativen Scheiterns stehe der Unerforschlichkeit kein Verstummen, sondern vielmehr eine Polymorphie der Gestaltungen und Bewertungen der Zeit gegenüber.“

35. German orig.: „Diese Zusammengehörigkeit von Entzug des Grundes und Herausforderung um Mehr- und Andersdenken ist der stärkste Grund für Ricoeurs explizite Ablehnung eines Systems sowohl der drei Aporien der Zeit selbst wie auch ihrer narrativen Antworten.“

36. German orig.: „Historische Praxis erlaubt uns, die erlebt Zeit mit der linearen Zeit in einer ihr eigenen Schöpfung, der historischen Zeit, zu vermitteln.“

37. Much more could be said about the event, of course (cf. [51]). Yet, I think that our characterization not only encompasses most conceptions and fits most of the contributions to the “philosophy of the event,” it also clarifies and sheds light (kind of x-rays?) on them.

References

  • [1] Simon Sadler, Archigram – Architecture without Architecture. MIT Press, Boston 2005.
  • [2] Rem Koolhaas (2002). Junkspace. October 100: 175-190.
  • [3] Robert Venturi, Complexity and Contradiction in Architecture. 1977 [1966].
  • [4] Bernard Tschumi, Architecture and Disjunction. MIT Press, Boston 1996.
  • [5] Franz Oswald and Peter Baccini, Netzstadt: Einführung zum Stadtentwerfen. Birkhäuser, Basel 2003.
  • [6] Sigfried Giedion, Space, Time and Architecture: The Growth of a New Tradition. 1941.
  • [7] Aldo Rossi, The Architecture of the City. Oppositions 1984 [1966].
  • [8] Mike Sandbothe, „Die Verzeitlichung der Zeit in der modernen Philosophie.“ in: Antje Gimmler, Mike Sandbothe und Walther Ch. Zimmerli (eds.), Die Wiederentdeckung der Zeit. Primus & Wissenschaftliche Buchgesellschaft, Darmstadt 1997. available online.
  • [9] Mary Louise Gill, Aristotle’s Distinction between Change and Activity. in: Johanna Seibt (ed.), Process Theories: Crossdisciplinary Studies in Dynamic Categories. p.3-22.
  • [10] Yeonkyung Lee and Sungwoo Kim (2008). Reinterpretation of S. Giedion’s Conception of Time in Modern Architecture – Based on his book, Space, Time and Architecture. Journal of Asian Architecture and Building Engineering 7(1):15–22.
  • [11] Tony Roark, Aristotle on Time: A Study of the Physics.
  • [12] Werner Heisenberg, Physics and Philosophy. The Revolution in Modern Science. Harper, New York 1962.
  • [13] Ursula Coope, Time for Aristotle, Oxford University Press, 2005.
  • [14] John Ellis McTaggart (1908). The Unreality of Time. Mind: A Quarterly Review of Psychology and Philosophy 17: 456-473.
  • [15] L. Nathan Oaklander, Quentin Smith (eds.), The New Theory of Time. Yale University Press, New Haven (CT) 1994.
  • [16] L. Nathan Oaklander (2004). The Ontology of Time (Studies in Analytic Philosophy)
  • [17] Sean Power, The Metaphysics of Temporal Experience. forthcoming.
  • [18] Erwin Tegtmeier (2005). Three Flawed Distinctions in the Philosophy of Time. IWS 2005.
  • [19] Thomas Sheehan, “Heidegger, Martin (1889-1976)” in: Edward Craig (ed.), Routledge Encyclopedia of Philosophy, Routledge, New York 1998, IV, p.307-323.
  • [20] George Berkeley, Eine Abhandlung über die Prinzipien der menschlichen Erkenntnis (1710). Vgl. vor allem die ‚Sectionen‘ III-VII und XXV, Übers. F. Überweg, Berlin 1869.
  • [21] Martin Heidegger, Being and Time. transl. John Macquarrie & Edward Robinson (based on 7th edition of “Sein und Zeit”), Basil Blackwell, Oxford 1962. available online.
  • [22] Martin Heidegger, Sein und Zeit. Tübingen 1979 [1927].
  • [23] Martin Heidegger, Protokoll zu einem Seminar über den Vortrag “Zeit und Sein”. in: Zur Sache des Denkens. Gesamtausgabe Band 14, p.34. Klostermann, Frankfurt 2007 [1967].
  • [24] Cristina Lafont (1993). Die Rolle der Sprache in Sein und Zeit. Zeitschrift für philosophische Forschung, Band 47, 1.
  • [25] Martin Heidegger, Zur Sache des Denkens. Gesamtausgabe Band 14. Klostermann, Frankfurt 2007.
  • [26] Christian Bermes, Ulrich Dierse (eds.), Schlüsselbegriffe der Philosophie des 20. Jahrhunderts. Meiner, Hamburg 2010.
  • [27] Sean Carroll, From Eternity to Here: The Quest for the Ultimate Theory of Time. Oneworld, Oxford 2011.
  • [28] Huw Price, Time’s Arrow and Archimedes’ Point: New Directions. Oxford University Press, Oxford 1996.
  • [29] Huw Price (1994). Reinterpreting the Wheeler-Feynman Absorber Theory: Reply to Leeds. The British Journal for the Philosophy of Science 45 (4), pp. 1023-1028.
  • [30] Roger Penrose, The Road to Reality: A Complete Guide to the Laws of the Universe. Vintage, London 2004.
  • [31] Friedrich Kuemmel, Über den Begriff der Zeit. Niemeyer, Tübingen 1962.
  • [32] Henri Bergson, Time and Free Will: An Essay on the Immediate Data of Consciousness. transl. F.L. Pogson, Kessinger Publishing Company, Montana 1910 [1889].
  • [33] Gilles Deleuze, Bergsonism.
  • [34] Lawlor, Leonard and Moulard, Valentine, “Henri Bergson”, in: Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2012 Edition), available online.
  • [35] Charles Sanders Peirce, Writings 3, 107-108, MS239 (Robin 392, 371), 1873. available online.
  • [36] Ludwig Wittgenstein, Philosophical Investigations. §201
  • [37] John Dewey, “Time and Individuality,” in: Jo Ann Boydston (ed.), Later Works of John Dewey, Vol.14. Southern Illinois University Press, Carbondale 1988.
  • [38] John Dewey, “Experience and Nature,” in: Jo Ann Boydston (ed.), Later Works of John Dewey, Vol.1. Southern Illinois University Press, Carbondale 1981 , p. 92.
  • [39] Rudolf F. Kaspar und Alfred Schmidt (1992). Wittgenstein über Zeit. Zeitschrift für philosophische Forschung, Band 46(4): 569-583.
  • [40] Ludwig Wittgenstein, Philosophische Bemerkungen. in: Werkausgabe Bd. 2. Frankfurt 1984.
  • [41] “International System of Units (SI)”. Bureau International des Poids et Mesures. 2006.
  • [42] Peter Janich (1996). Die Konstitution der Zeit durch Handeln und Reden. Kodikas/Code Ars Semeiotica 19, 133-147.
  • [43] Ludwig Wittgenstein, Eine Philosophische Betrachtung (Das Braune Buch). in: Suhrkamp Werkausgabe Bd. 5. Frankfurt 1984.
  • [44] Andrea A. Reichenberger, „Was ist Zeit?“ Wittgensteins Kritik an Augustinus kritisch betrachtet. in: Friedrich Stadler, Michael Stöltzner (eds.), Papers of the 28th International Wittgenstein Symposium 7-13 August 2005. Zeit und Geschichte – Time and History. ALWS, Kirchberg am Wechsel 2005.
  • [45] Tagebücher 1924-1916. in: Ludwig Wittgenstein, Werkausgabe Bd.1, Frankfurt 1984.
  • [46] Helga Nowotny, Eigenzeit: Entstehung und Strukturierung eines Zeitgefühls. Suhrkamp 1993.
  • [47] Inga Römer, Das Zeitdenken bei Husserl, Heidegger und Ricoeur. Springer, Dordrecht & Heidelberg 2010.
  • [48] Paul Ricoeur, Zeit und Erzählung, Bd. 3: Die erzählte Zeit. Fink, München 1991 (zuerst frz.: Paris 1985).
  • [49] Paul Ricoeur (1980). On Narrative. Critical Inquiry, Vol. 7, No. 1, pp. 169-190.
  • [50] Immanuel Kant, Kritik der reinen Vernunft, in: Wolfgang Weischedel (ed.), Immanuel Kant., Werke in sechs Bänden, Bd. 2, Wissenschaftliche Buchgesellschaft, Darmstadt 1983.
  • [51] Marc Rölli, Ereignis auf Französisch. Von Bergson bis Deleuze. Fink, München 2004.
  • [52] Marc Rölli, “Begriffe für das Ereignis: Aktualität und Virtualität. Oder wie der radikale Empirist Gilles Deleuze Heidegger verabschiedet”, in: Marc Rölli (ed.), Ereignis auf Französisch. Von Bergson bis Deleuze. Fink, München 2004

۞


Growth Patterns

November 29, 2012 § Leave a comment

Growing beings and growing things, whether material or immaterial, accumulate mass or increase their spreading. Plants grow, black holes grow, a software program grows, economies grow, cities grow, patterns grow, a pile of sand grows, a text grows, the mind grows and even things like self-confidence and love are said to grow. On the other hand, we do not expect that things like cars or buildings “grow.”

Although the above initial “definition” might sound fairly trivial, the examples demonstrate that growth itself, or more precisely, the respective language game, is by far not a trivial thing. Nevertheless, when people start to talk about growth, or if they invoke the concept of growth implicitly, they mostly imagine a smooth and almost geometrical process, a dilation, a more or less smooth stretching. Urbanists and architects are no exception to this undifferentiated and prosy perspective. Additionally, growth is usually not considered seriously beyond its mere wording, probably due to the hasty prejudgment about the value of biological principles. Yet, if one can’t talk appropriately about growth—which includes differentiation—one also can’t talk about change. As a result of a widely (and wildly) applied simplistic image of growth, there is a huge conceptual gap in many, if not almost all works about urban conditions, in urban planning, and about architecture.1 But why talk about change at all, since in architecture and urbanism everything is about planning anyway…

The imprinting by geometry often entails another prejudice: that of globality. Principles, rules and structures are thought to be necessarily applied to the whole, whatever this “wholeness” is about. This is particularly problematic if these rules refer more or less directly to mere empirical issues. Thus it frequently goes unnoticed that maintaining a particular form, or keeping position in a desired region of the parameter space of a forming process, requires quite intense interconnected local processes, both for building and for destroying structures.

It was one of the failures in the idea of Japanese Metabolism not to recognize the necessity of a deep integration of this locality. Albeit they followed the intention to (re-)introduce the concept of the “life cycle” into architecture and urbanism, they kept aligned to cybernetics. Thus, Metabolism failed mainly for two reasons. Firstly, they attempted to combine incommensurable mind sets. It is impossible to amalgamate modernism and the idea of bottom-up processes like self-organization or associativity, and the Metabolists always followed the modernist route. Secondly, the movement lacked a proper structural setup: the binding problem remained unresolved. They didn’t develop a structural theory of differentiation that would have been suitable to derive appropriate mechanisms.

This Essay

Here in this piece we just would like to show some possibilities to enlarge the conceptual space and the vocabulary that we could use to describe (the) “growing” (of) things. We will take special reference to architecture and urbanism, albeit the basics would apply to other fields as well, e.g. to the growth and the differentiation of organizations (as “management”) or social forms, but also of more or even “completely” immaterial entities. In some way, this power is even mandatory, if we are going to address the Urban6, for the Urban definitely exceeds the realm of the empirical.
We won’t do much philosophical reflection and embedding here, albeit it should be clear that these descriptions don’t make sense without proper structural, i.e. theoretical references, as we have argued in the previous piece. “As such” they would be just a kind of pictorial commentary, mistaking metaphor for allegory. There are two different kinds of important structural references. One is pointing to the mechanisms2, the abstract machinery with its instantiation on the micro-level or with respect to the generative processes. The other points to the theoretico-structural embedment, which we have been discussing in the previous essay. Here, it is mainly the concept of generic differentiation that provides us the required embedding and the power to overcome the binding problem in theoretical work.


1. Space

Growth concerns space, both physical and abstract space. Growth concerns even the quality of space. The fact of growth is incompatible with the conception of space as a container. This becomes obvious in the case of fractals, which got their name due to their “broken” dimensionality. A fractal could be 2.846-dimensional. Or 1.2034101-dimensional. The space established by the “inside” of a fractal is very different from 3-dimensional space. Astonishingly, the dimensionality need not even be constant while traveling through a fractal.
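To render the notion of a “broken” dimensionality concrete, here is a minimal sketch (my illustration, not part of the original argument) that estimates the box-counting dimension of the Sierpinski triangle, a shape whose similarity dimension log(3)/log(2) ≈ 1.585 is not a whole number:

import math
import random

def sierpinski_points(n=200000):
    """Generate points on the Sierpinski triangle via the chaos game."""
    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
    x, y = 0.1, 0.1
    pts = []
    for _ in range(n):
        cx, cy = random.choice(corners)
        x, y = (x + cx) / 2, (y + cy) / 2
        pts.append((x, y))
    return pts

def box_count(points, eps):
    """Number of grid boxes of side eps that contain at least one point."""
    return len({(int(px / eps), int(py / eps)) for px, py in points})

pts = sierpinski_points()
for eps in (0.1, 0.05, 0.025, 0.0125):
    n = box_count(pts, eps)
    # dimension estimate: log N(eps) / log(1/eps)
    print(eps, n, round(math.log(n) / math.log(1 / eps), 3))

For shrinking box sizes the printed estimates approach roughly 1.585: the set is “more” than a line, yet “less” than a plane.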

Abstract spaces, on the other hand, can be established by any set of criteria, just by interpreting the criteria as dimensions. Thus one gets a space for representing and describing items, their relations and their transformations. In mathematics, a space is essentially defined by the possibility to perform a mapping from one set to another, or in other terms, by the abstract (group-theoretic) symmetry properties of the underlying operations on the relations between any entities.
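As a toy illustration of “criteria as dimensions” (the criteria and their values here are hypothetical, chosen only for this sketch), describing items by a handful of criteria immediately yields a space in which relations between the items appear as distances:

import math

# hypothetical criteria, each scaled to [0, 1]: (density, connectivity, porosity)
items = {
    "quarter A": (0.8, 0.3, 0.2),
    "quarter B": (0.6, 0.7, 0.5),
    "quarter C": (0.2, 0.9, 0.8),
}

def distance(p, q):
    """Euclidean distance in the criteria space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

names = sorted(items)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(a, "<->", b, round(distance(items[a], items[b]), 3))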

Strangely enough, in mathematics spaces are almost exclusively conceived as consisting of independent dimensions. Remember that “independence” is at the core of the modernist metaphysical belief set! Yet, dimensions need be neither Euclidean nor Cartesian, the latter being the generalization of the former. The independence of descriptive dimensions can be dropped, as we have argued in an earlier essay. The resulting space is not a dimensional space, but rather an aspectional space, which can be conceived as a generalization of dimensional space.

In order to understand growth we should keep in contact with a concept of space that is as general as possible. It would be really stupid, for instance, to situate growth restrictively in a flat 2-dimensional Euclidean space. At least since Descartes’ seminal work “Regulae” (AT X 421-424) it should be clear that any aspect may be taken as a contribution to the cognitive space [8].

The Regulae in its method had even allowed wide latitude to the cognitive use of fictions for imagining artificial dimensions along which things could be grasped in the process of problem solving. Natures in the Meditations, however, are no longer aspects or axes along which things can be compared, evaluated, and arrayed, but natures in the sense that Rule 5 had dismissed: natures as the essences of existing things.

At the same time Descartes also makes clear that these aspects should not be taken as essences of existing things. In other words, Descartes was ahead of 20th century realism and existentialism! Aspects do not represent things in their modes of existence; they represent our mode of talking about the relations we establish to those things. Yet, these relations are more like the threads of String Theory: without fixed endings on either side. All we can say about the outer world is that there is something. Of course, that is far too little to put it as a primacy for human affairs.

The consequence of such a dimensional limitation would be a blind spot (if not a population of them), a gap in the potential to perceive, to recognize, to conceive of and to understand. Unfortunately, the gaps themselves, the blind spots, are not visible to those who suffer from them. Nevertheless, any further conceptualization would remain in the state of educated nonsense.

Growth is established as a transformation of (abstract) space. Vice versa, we can conceive of it also as the expression of the transformation of space. The core of this transformation is the modulation of the signal intensity length through the generation of compartments, rendering abstract space into a historical, individual space. Conversely, each transformation of space, under whatsoever perspective, can be interpreted as some kind of growth.

The question is no longer to be or not to be, as ontologists have tried to prove since the first claim of substance and the primacy of logics and identity. What is more, already Shakespeare demonstrated the penultimate consequences of that question. Hamlet, in his mixture of being a realist existentialist (by that very question) and his liking for myths and (the use) of hidden wizards, guided by the famous misplaced question, went straight into his personal disaster, not without causing a global one. Shakespeare’s masterfully wrapped lesson is that the question about Being leads straight to disaster. (One might add that this holds also for ontology and existentialism: it is a consequence of ethical corruption.)

Substance has to be thought of as being always and already a posteriori to change, to growth. Setting change as a primacy means to base thought philosophically on difference. While this is an almost completely unexplored area, despite Deleuze’s proposal of the plane of immanence, it is also clear that starting with identity instead causes lots of serious troubles. For instance, we would be forced to acknowledge the claim that a particular interpretation could indeed be universalized. The outcome? A chimaera of Hamlet (the figure in the tragedy!) and Stalin.

Instead, the question is one of growth and the modulation of space: Who could reach whom? It is only through this question that we can integrate the transcendence of difference, its primacy, and secure the manifold of the human in an uncircumventable manner. Life in all of its forms, with all its immanence, always precedes logic.3 This holds not only for biological assemblages, but also for human beings and all their produces, including “cities” and other forms of settlements.

Just to be clear: the question of reaching someone else is not dependent on anything given. The given is a myth, as philosophers from Wittgenstein through Quine to Putnam and McDowell have been proving. Instead, the question about the possibility to reach someone else, to establish a relation between (at least) any two items, is one of activity, design, and invention, targeting the transformation of space. This holds even in particle physics.

2. Modes of Talking

Traditionally spoken, the result of growth is formed matter. More exactly, however, it is transformed space. We may distinguish a particular form, morphos, or with regard to psychology also a “Gestalt,” from form as an abstractum. The result of growth is form. Thus, form actually does not only concern matter; it always concerns the potential relationality.

For instance, growing entities never interact “directly”. They—that is, also: we—always interact through their spaces and the mediality that is possible within them.4 Otherwise it would be completely impossible for a human individual to interact with a city. Before any semiotic interpretive relation, it is the individual space that enables incommensurable entities to relate.

If we consider the growth of a plant, for instance, we find a particular morphology. There are different kinds of tissues and also a rather typical habitus, i.e. a general appearance. The underlying processes are of biological nature, spanning from physics and bio-chemistry to information and the “biological integration” of those.

Talking about the growth of a building or the growth of a city we have to spot the appropriate level of abstraction. There is no 1:1 transferability. In a cell we find neither craftsmen nor top-down implementations of plans. In contrast, raising a building apparently does not know anything about probabilistic mechanisms. Just by calling something intentionally “metabolism” (Kurokawa) or “fractal” (Jencks), thereby invoking associations of organisms and their power to maintain themselves in physically highly unlikely conditions, we certainly do not approach or even acquire any understanding.

The key for any growth model is the identification of mechanisms (cf. [4]). Biology is the science that draws most on the concept of mechanism (so far), while physics does so the least. The level of mechanism is already an abstraction, of course. It needs to be completed, however, by the concept of population, i.e. a dedicated probabilistic perspective, in order to prevent falling back into the realm of trivial machines. In a cross-disciplinary setting we have to generalize the mechanisms into principles, such that these provide a shared differential entity.5

Well, we already said that a building is rarely raised by a probabilistic process. Yet, this is only true if we restrict our considerations to the likewise abstract description of the activities of the craftsmen. Besides, the building process starts long before any physical matter is touched.

Secondly, from the perspective of abstraction we should never forget—and many people indeed forget about this—that the space of expressibility and the space of transformation also contain the nil-operator. In the realm of numbers we call it the zero. Note that without the zero many things could not be expressed at all. Similarly, the negative is required for completing the catalog of operations. Both the nil-operator and the inverse element are basic constituents of any mathematical group structure, which is the most general way to think about the conditions for operations in space.

The same is true for our endeavor here. It would be impossible to construct the possibility for graded expressions, i.e. the possibility for a more or less smooth scale, without the nil and the negative. Ultimately, it is the zero and the nil-operation, together with the inverse, that allow us to talk reflexively at all, to create abstraction, in short, to think through.
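For reference, the group structure invoked here can be stated compactly. A standard formulation reads:

\begin{align*}
&\text{closure:} && a \circ b \in G \\
&\text{associativity:} && (a \circ b) \circ c = a \circ (b \circ c) \\
&\text{neutral element:} && \exists\, e \in G:\; e \circ a = a \circ e = a \\
&\text{inverse element:} && \forall\, a \in G\;\exists\, a^{-1} \in G:\; a \circ a^{-1} = a^{-1} \circ a = e
\end{align*}

In the instance $(\mathbb{Z}, +)$ the neutral element $e$ is the zero and the inverse of $a$ is the negative $-a$, i.e. precisely the “nil” and the “negative” of the passage above.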

3. Modes of Growth

Let us start with some instances of growth from “nature”. We may distinguish crystals, plants, animals and swarms. In order to compare even those trivial and quite obviously very different “natural” instances with respect to growth, we need a common denominator. Without that we could not accomplish any kind of reasonable comparison.

Well, initially we said that growth could be considered as accumulation of mass or as an increase of spread. After taking one step back we could say that something gets attached. Since crystals, plants and animals are equipped with different capabilities, and hence mechanisms, to attach further matter, we choose the way of organizing the attachment as the required common denominator.

Given that, we can now change the perspective onto our instances. The performance of comparing implies an abstraction; hence we will not talk about crystals etc. as phenomena, as this would inherit the blindness of phenomenology against its conditions. Instead, we conceive of them as models of growth, inspired by observations that can be classified along the mode of attachment.

Morphogenesis, the creation of new instances of formed matter, or even the creation of new forms, is tightly linked to complexity. Turing titled his famous article “The Chemical Basis of Morphogenesis”. This, however, is not exactly what he invented, for we have to distinguish between patterns and forms, or likewise, between order and organization. Turing described the formal conditions for the emergence of order from a noisy flow of entropy. Organization, in contrast, also needs the creation of remnants, partial decay; and it is organization that brings in historicity. Nevertheless, the mechanisms of complexity, of which the Turing patterns and mechanisms are a part, are indispensable ingredients for the “higher” forms of growth, at least, that is, for anything besides crystals (but probably even for them in some limited sense). Note that morphogenesis, in any of its aspects, should not be conceived as something “cybernetical”!
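To indicate how such pattern formation works operationally, here is a minimal one-dimensional reaction-diffusion sketch (Gray-Scott, a common stand-in for Turing’s mechanism; the parameter values are conventional demo choices, not taken from Turing’s paper):

import numpy as np

n, steps = 200, 10000
Du, Dv, f, k = 0.16, 0.08, 0.035, 0.065   # diffusion rates, feed rate, kill rate
u, v = np.ones(n), np.zeros(n)
u[90:110], v[90:110] = 0.5, 0.25          # a local perturbation seeds the pattern

def laplacian(a):
    """Discrete 1-D Laplacian with periodic boundary."""
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(steps):
    uvv = u * v * v                        # local reaction term
    u += Du * laplacian(u) - uvv + f * (1 - u)
    v += Dv * laplacian(v) + uvv - (f + k) * v

print(np.round(v[::10], 2))                # a non-uniform concentration profile

Order, here a structured concentration profile, emerges from nothing but local reaction plus diffusion; organization in the sense above would additionally require remnants, partial decay, and thus historicity.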

3.1. Crystals

Figure 1a: Crystals are geometric entities out of time.

Crystals are geometrical entities. In the 19th century, the study of crystals and the attempt to classify them inspired mathematicians in their development of the concept of symmetry and of group theory. Crystals are also entities that are “structurally flat”. There are no levels of integration; their macroscopic appearance is a true image of their constitution on the microscopic level. A crystal looks exactly the same from the level of atoms up to the scale of centimeters. Finally, crystals are outside of time. For their growth depends only on the one or two layers of atoms (“elementary cells”) that were attached before at the respective site.

There are two important conditions for growing a 3-dimensional crystal. The site of precipitation and attachment needs to be (1) immersed in a non-depletable solution, where (2) particles can move through diffusion in three dimensions. If these conditions are not met, mineral depositions look very different. As far as the global embedding conditions are concerned, the rules have changed. More abstractly, the symmetry of the solution is broken, and so the result of the process is a fractal.

Figure 1b. Growth in the realm of minerals under spatial constraints, particularly the reduction of dimensionality. The image does NOT show petrified plants! It is precipitated mineral from a solution seeped into a nearly 2-dimensional gap between two layers of (lime) rock. The similarity of shapes points to a similarity of mechanisms.

Both examples concern mineral growth. We can now understand that the variety of resulting shapes is highly dependent on the dimensional conditions embedding the growth process.

Figure 1c. Crystalline buildings. Note that it is precisely and only this type of building that actualizes a “perfect harmony” between the metaphysics of the architect and the design of social conditions. The belief in independence and the primacy of identity has been quite effectively delivered into the habits of everyday housing conditions.

Figure 1d. Crystalline urban layout, instantiated as “parametrism”. The “curvy” shape should not be misinterpreted as “organic”. In this case it is just a small dose of artificial “erosion” imposed as a parametric add-on to the crystalline base. We again meet the theme of the geological. Nothing could be more telling than the claim of a “new global style”: Schumacher is an arch-modernist, a living fossil, mistaking design for religion, who benefits from advanced software technology. Who is Schumacher that he could decree a style globally?

The growth of crystals is a very particular transformation of space. It is the annihilation of any active part of it. The relationality of crystals is completely exhausted by resistance and the spread of said annihilation.

Regarding the Urban6, parametrism must be considered as deeply malignant. As the label says, it takes place within a predefined space. Yet, who the hell do Schumacher (and Hadid, the mathematician) think they are that they should be allowed, or even considered able, to define the space of the Urban? For the Urban is a growing “thing,” it creates its own space. Consequently, while all the rest of the world admits not to “understand” the Urban, Hadid and her barking Schumacher even claim to be able to define that space, and thus also claim that this space shall be defined. Not surprisingly, Schumacher is addicted to the mayor of all bureaucrats of theory, Niklas Luhmann (see our discussion here), as he proudly announces in his book “The Autopoiesis of Architecture”, which is full of pseudo- and anti-theory.

The example of the crystal clearly shows that we have to consider the solution and the deposit together as a conditioned system. The forces that rule their formation are a compound setup. The (electro-chemical) properties of the elementary cell on the microscopic level, precisely where it is in contact with the solution, together with the global, macroscopic conditions of the immersing solution, determine the instantiation of the basic mechanism. Regardless of the global conditions, the basic mechanism for the growth of crystals is the attachment of matter from the outside.

In crystals, we do not find a separate structural process layer that would be used for the regulation of growth. The deep properties of matter determine their growth. Moreover, only the outer surface is involved.

3.2. Plants

With plants, we find a class of organisms that grow—just as crystals do—almost exclusively at their “surface”. With only a few exceptions, matter is attached almost exclusively at the “outside” of their shape; yet it is attached from their inside, at precisely defined locations, the meristems. Moreover, there is a dedicated mechanism to regulate growth, based on the diffusion of certain chemical compounds, the phyto-hormones, e.g. auxin. This regulation emancipates the plant in its growth from the properties of the matter it is built from.

Figure 2a. Growth in Plants. The growth cone is called the apical meristem. There are just a handful of largely undifferentiated cells that keep dividing almost indefinitely. The shape of the plant is largely determined by a reaction-diffusion system in the meristem, based on phyto-hormones that determine the fate of the cells. Higher plants can build secondary meristems at particular locations, leading to a characteristic branching pattern.

 

Figure 2b. A pinnately compound leaf of a fern, showing its historical genesis as attachment at the outside (the tip of the meristem) from the inside. If you apply this principle to roots, you get a rhizome.

Figure 2c. The basic principle of plant growth can be mapped into L-grammars in order to create simulations of plant-like shapes. This makes clear that fractals do not belong to geometry! Note that any form creation that is based on formal grammars is subject to the representational fallacy.
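To see the principle at work, here is a minimal sketch of such a string rewriting, using the classic textbook “fractal plant” grammar (purely illustrative, not necessarily the grammar behind fig. 2c):

```python
def l_system(axiom, rules, depth):
    """Rewrite the axiom repeatedly according to the production rules."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Classic "fractal plant" L-grammar; the brackets [ and ] save/restore the
# drawing state, which is exactly what encodes branching.
rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
plant = l_system("X", rules, 4)
# Rendering the string with turtle graphics (F = draw forward, +/- = turn)
# yields the familiar fern-like shape.
```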

Instead of using L-grammars as a formal reference we could also mention self-affine mapping. Actually, self-affine mapping is the formal operation that leads to perfect self-similarity and scale invariance. It projects a scaled-down version of the original, often primitive graph onto itself. But let us inspect two examples.

Figure 2d.1. Scheme showing the self-affine mapping that would create a graph that looks like a leaf of a fern (image from wiki).

Figure 2d.2. Self-affine fractal (a hexagasket) and its neighboring graph, which encodes its creation [9].
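The fern scheme of fig. 2d.1 can be sketched with a handful of affine maps. The following sketch uses Barnsley's well-known coefficients (an illustration of the principle, not necessarily the exact mapping shown in the figure); iterating randomly chosen contractions projects ever smaller copies of the whole onto itself.

```python
import random

# Barnsley's four affine maps (x, y) -> (a*x + b*y + e, c*x + d*y + f),
# chosen with the given probabilities p; the attractor looks like a fern leaf.
MAPS = [
    ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),  # stem
    ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),  # successively smaller copies
    ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),  # left pinna
    (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07),  # right pinna
]

def barnsley(n=50000):
    x, y, pts = 0.0, 0.0, []
    for _ in range(n):
        a, b, c, d, e, f, p = random.choices(MAPS, weights=[m[6] for m in MAPS])[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        pts.append((x, y))
    return pts  # scatter-plot the points to see the fern emerge
```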

Back to real plants! Nowadays, most plants are able to build branches. Formally, they perform a self-affine mapping. Bio-chemically, the cells in their meristem(s) are able to respond differentially to the concentration of one (or two) plant hormones, in this case auxin. Note that for establishing a two-component system you won't necessarily need two hormones! The counteracting “force” might just as well be realized by some process inside the cells of the meristem.

From this relation between the observable fractal form, e.g. the leaf of the fern, or the outline of a city layout, and the formal representation we can draw a rather important conclusion. The empirical analysis of a shape should never stop with the statement that the respective shape shows scale-invariance, self-similarity or the like. Literally nothing is gained by that! It is just a promising starting point. What one has to do subsequently is to identify the mechanisms leading to the homomorphy between the formal representation and the particular observation. Think of the chemical traces of pedestrians, the tendency to imitate, or whatever else. Even more important, in each particular case these actual mechanisms could be different, though leading to the same visual shape!

In earlier paleobiotic ages, most plants were not able to build branches. Think of tree ferns, or the following living fossil.

Figure 2d. A primitive plant that can't build secondary meristems (Welwitschia). Unlike in higher plants, where the meristem is transported by the growth process to the outer regions of the plant (its virtual borders), here it remains fixed; hence, the leaf is growing only in the center.

Figure 2e. The floor plan of the Guggenheim Bilbao is strongly reminiscent of the morphology of Welwitschia. Note that such “reminding” represents a naive transfer on the representational level. Quite in contrast, we have to say that the similarity in shape points to a similarity regarding the generating mechanisms. Jencks, for instance, describes the emanations as petals, but without further explanation, just as metaphor. Gehry himself explained the building by referring to the mythology of the “world-snake”, hence the importance of the singularity of the “origin”. Yet, the mythology does not allow us to say anything about the growth pattern.

Figure 2f. Another primitive plant that can't build secondary apical meristems: common horsetail (Equisetum arvense). Yet, in this case the apical meristem is transported.

Figure 2g. Patrik Schumacher, Hadid Office, master plan for the Istanbul project. Primitive concepts lead to primitive forms and primitive habits.

Many, if not all of the characteristics of growth patterns in plants are due to the fact that they are sessile life forms. Most buildings are also “sessile”. In some way, however, we consider them more as geological formations than as plants. It seems to be “natural” that buildings start to look like those in fig.2g above.

Yet, in such reasoning there are two fallacies. First, regarding design there is neither any kind of “naturalness” nor any kind of necessity. Second, buildings are not necessarily sessile. Everything depends on the level of the argument. If we talk just about matter, then, yes, we can agree that most buildings, like crystals or plants, do not move. Buildings could not be appropriately described, however, just on the physical level of their matter. It is therefore very important to understand that we have to argue on the level of structural principles. Later we will provide an impressive example of an “animal” or “animate” building.7

As we said, plants are sessile through and through, not only regarding their habitus. In plants, there are no moving cells in the inside. Thus, plants have difficulty regenerating without dropping large parts. They can't replace matter “somewhere in between”, as animals can do. The cells in the leaves, for instance, mature as cells do in animals, albeit for different reasons; in plants, it is mainly the accumulation of calcium. Thus, even in tropical climates trees drop their leaves at least once a year, some species all of them at once.

The conclusion for architecture as well as for urbanism is clear. It is just not sufficient to claim “metabolism” (see below) as a model. It is not even appropriate to take “metabolism” as a model, not even if we avoided the representational fallacy to which the “Metabolists” fell prey. Instead, the design of the structure of growth should orient itself towards the way animals are organized, at the level of macroscopic structures like organs, if we disregard swarms for the moment, as most of them are not able to maintain persistent form.

This, however, immediately brings the problematics of territorialization to the fore. What we would need for our cities is thus a generalization towards the body without organs (Deleuze), which orients towards capabilities, particularly the capability to choose the mode of growth. Yet, the condition for this choice is knowledge about the possibilities. So, let us proceed to the next class of growth modes.

3.3. Swarms

In plants, the growth mechanisms are implemented in a rather deterministic manner. The randomness in their shape is restricted to the induction of branches. In swarms, we find a more relaxed regulation, as there is only little persistent organization. There is just transient order. In some way, many swarms are probabilistic crystals, that is, rather primitive entities. Figures 3a through 3d provide some examples of swarms.

From the investigation of swarms in birds and fishes it is known that each “individual” just looks at the movement vectors of its neighbors. There is no deep structure, precisely because there is no persistent organization.
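A minimal sketch of such an alignment rule (in the spirit of the Vicsek model; neighborhood radius, alignment rate and agent count are illustrative assumptions) shows how global order can arise from this single local rule, without deep structure:

```python
import numpy as np

def step(pos, vel, radius=1.0, dt=0.1, align=0.05):
    """One update: each agent steers towards the mean velocity of its neighbors.
    No global plan, no memory: order is an effect of the local rule alone."""
    n = len(pos)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    neigh = (d < radius) & ~np.eye(n, dtype=bool)
    for i in range(n):
        if neigh[i].any():
            mean_v = vel[neigh[i]].mean(axis=0)
            vel[i] += align * (mean_v - vel[i])
    # keep speed constant, as in the classic Vicsek-style models
    vel /= np.linalg.norm(vel, axis=1, keepdims=True)
    return pos + vel * dt, vel

rng = np.random.default_rng(0)
pos = rng.uniform(0, 5, (50, 2))
vel = rng.normal(size=(50, 2))
vel /= np.linalg.norm(vel, axis=1, keepdims=True)
for _ in range(200):
    pos, vel = step(pos, vel)  # the velocity vectors gradually align
```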

Figure 3a. A flock of birds. Birds take the movement of several neighbors into account, sometimes without much consideration of their distance.

Figure 3b. A swarm of fish, a “school”. It has been demonstrated that some fish not only consider the position or the direction of their neighbors, but also the form of the average vector. A strong straight vector seems to be more “convincing” for the neighbors, as a basis for their “decision”, than one of unstable direction and magnitude.

Figure 3c. The Kaaba in Mecca. Each year several persons die due to panic waves. Swarm physics helped to improve the situation.

Figure 3d. Self-ordering in a population of pedestrians at Shibuya, Tokyo. In order not to crash into each other, humans employ two strategies: either just following the person ahead, or considering the second derivative of the vector if the first is not applicable. Yet, it requires a certain “culture”, an unspoken agreement, to do so (see this for what happens otherwise).

A particularly interesting example of highly developed swarms that are able to establish persistent organization is provided by Dictyostelium (Fig 4a), in common language called a slime-mold. In biological taxonomy, they form a group called Mycetozoa, which indicates their strangeness: partly, they behave like fungi, partly like primitive animals. Yet, they are neither prototypical fungi nor prototypical animals. In both cases the macroscopic appearance is a consequence of (largely) chemically organized collaborative behavior of a swarm of amoeboids. Under good environmental conditions slime-molds split up into single cells, each feeding on its own (mostly on bacteria). Under stressful conditions, they build astonishing macroscopic structures, which are only partially reversible, as parts of the population might be “sacrificed” to meet the purpose of non-local distribution.

Figure 4a. Dictyostelium, “fluid” mode; the microscopic individuals are moving freely, creating a pattern that optimizes logistics. Individuals can smoothly switch roles from moving to feeding. It should be clear that the “arrangement” you see is not a leaf, nor a single organism! It is a population of coordinating individuals. Yet, the millions of organisms in this population can switch “phase”… (continue with 4b…)

Figure 4b. Dictyostelium in “organized” mode, i.e. the “same” population of individuals now behaving “as if” it were an organism, even with different organs. Here, individuals organize a macroscopic form, as if they were a single organism. There is irreversible division of labor. Thus, the example of Dictyostelium shows that the border between swarms and plants or animals can be blurry.

The concept of swarms has also been applied to crowds of humans, e.g. in urban environments [11]. Here, we can observe an amazing re-orientation. Finally, after 10 years or so of research on swarms and crowds, naïve modernist prejudices are being corrected. Independence and reductionist physicism have been dropped; instead, researchers are increasingly aware of relations and behavior [14].

Trouble is, the simulations treat people as independent particles—ignoring our love of sticking in groups and blabbing with friends. Small groups of pedestrians change everything, says Mehdi Moussaid, the study’s leader and a behavioral scientist at the University of Toulouse in France. “We have to rebuild our knowledge about crowds.”

Swarms solve a particular class of challenges: logistics. Whether in plants or slime-molds, it is the transport of something as an adaptive response that provides their framing “purpose”. This something could be the members of the swarm itself, as in fish, or something that is transported by the swarm, as is the case in ants. Yet, the difference is not that large.

Figure 5: Simulation of foraging raid patterns in army ants Eciton (from [12]). The hive (they have no nest) is at the bottom, while the food source is towards the top. The only difference between A and B is the number of food sources.

When compared to crystals, even simple swarms show important differences. Firstly, in contrast to crystals, swarms are immaterial. What we can observe at the global scale, macroscopically, is an image of rules that are independent of matter. Yet, in simple, “prototypical” swarms the implementation of those rules is still global, just as in crystals: everywhere in the primitive swarm the same basic rules are active. We have seen that in Dictyostelium, much as in social insects, rules begin to be active in a more localized manner.

The separation of immaterial components from matter is very important. It is the birth of information. We may conceive of information itself as a morphological element, as a condition for probabilistic instantiation. It is not by chance that we assign the label “fluid” to large flocks of birds, say starlings in autumn. On the molecular level, water itself is organized as a swarm.

As a further possibility, the realm of immaterial rules also allows for a differentiation of rules. Since in crystals the rule is almost synonymous with the properties of the matter, there is no such differentiation for them. They are what they are, eternally. In contrast, in swarms we always find a setup that comprises attractive and repellent forces, which is the reason for their capability to build patterns. This capability is often called self-organization, albeit calling it self-ordering would be more exact.

There is a last interesting point about swarms. In order to boot a swarm as a swarm, that is, to effectuate the rules, a certain minimal density is required. From this perspective, we can also recognize a link between swarms and mediality. The appropriate concept for describing swarms is thus the wave of density (or of probability).

Not only in urban research is the concept of swarms often used in agent-based models. Unfortunately, however, only the most naive approaches are taken, conceiving of agents as entities almost without any internal structure, i.e. also without memory. Paradoxically, researchers often invoke the myth of “intelligent swarms”, overlooking that intelligence is not something associated with swarms. In order to find appropriate solutions to a given challenge, we simply need an informational n-body system, in which we find emergent patterns and evolutionary principles as well. This system can be realized even in a completely immaterial manner, as a pattern of electrical discharges. Such a process we came to call a “brain”… Actually, swarms without an evolutionary embedding can be extremely malignant and detrimental, since in swarms the purpose is not predefined. Fiction authors (M. Crichton, F. Schätzing) recognized this long ago. Engineers seem to still have difficulties with that.

Thus, we can also see that swarms actualize the most seriously penetrating form of growth.

3.4. Animals

So far, we have met three models of growth. In plants and swarms we find different variations of the basic crystalline mode of growth. In animals, the regulation of growth acquired even more degrees of freedom.

The major determinant of the differences between the forms of plants and animals is movement. This applies not only to the organism as a whole; we find it also on the cellular level. Plants have nothing like blood or an immune system, in which cells of a particular type move around. Once plant cells have settled, they are fixed.

The result of this mobility is a greatly diversified space of possibilities for instantiating compartmentalization. Across the compartments, which we find also in the temporal domain, we may even see different modes of growth. The liver of the vertebrates, for instance, grows more like a plant. It is somehow not surprising that the liver is the organ with the best ability for regeneration. We also find interacting populations of swarms in animals, even in the most primitive ones like sponges.

The important aspects of form in animals are in their interior. While for crystals there is no interiority at all, plants differ in their external organization, their habitus, with swarms somewhere in between. Animals, however, are different due to their internal organization on the level of macroscopic compartments, which includes their behavioral potential. (Later: a remark about metabolism as taking the wrong metaphorical anchor.) Note that the cells of animals look quite similar; they are highly standardized, even between flies and humans.

Along with the importance of the dynamics and form of interior compartments, the development of animals in their embryological phase8 is strictly choreographed. Time is not an outer parameter any more. Much more than plants, swarms or even crystals, of course, animals are beings in and of time. They have history, as individual and as population, which is independent of matter. In animals, history is a matter of form and rules, of interior, self-generated conditions.

During the development of animal embryos we find some characteristic operations of form creation, based on the principle of mobility, in addition to the principles that we described for swarms, plants and crystals. These are

  • folding, involution and blastulation;
  • melting; and finally
  • inflation and gastrulation.

The mathematics for describing these operations is not geometry any more. We need topology and category theory in order to grasp them, that is, the formalization of transformation.

Folding brings compartments together that have been produced separately. It breaks the limitations of signal horizons by initiating a further level of integration. Hence, the role of folding can be understood as a means to overcome or to instantiate dimensional constraints and/or modularity. While inflation is the mere accumulation of mass and amorphous enlargement of a given compartment by attachment from the interior, melting may be conceived as a negative attachment. Abstractly taken, it introduces the concept of negativity, which in turn allows for smooth gradation. Finally, involution, gastrulation and blastulation introduce floating compartments, hence swarm-like capabilities in the interior organization. This blurs the boundaries between structure and movement, introducing probabilism and reversibility into the development and the life form of the being.

Figure 6a. Development in Embryos. On the left-hand side, a very early phase is shown, emphasizing melting and inflating, which leads to “segments”, called metamers. (Red arrows show sites of apoptosis, blue arrows indicate inflation, i.e. ordinary increase of volume.)

Figure 6b. Early development phase of a hand. The space between fingers is melted away in order to shape the fingers.

Figure 6c. Rem Koolhaas [16]. Inverting the treatment of the box, thereby finding (“inventing”?) the embryonic principle of melting tissue in order to generate form. Note that Koolhaas himself never referred to “embryonic principles” (so far). This example demonstrates clearly where we have to look for the principles of morphogenesis in architecture!

In figure 6a above we can not only see the processes of melting and attaching, we can also observe another recipe of nature: repetition. In the case of the Bauplan of animal organisms the result is metamery.9 While in lower animals such as worms (Annelida), metamers are easily observed, in higher animals, such as insects or vertebrates, metamers are often (clearly) visible only in the embryonic phase. Yet, in animals metamers are always created through a combination of movement or melting and compartmentalization in the interior of the body. They are not “added” in the sense of attaching—adding—them to the actual border, as is the case in plants or crystals. In mathematical terms, the operation in animals’ embryonic phase is multiplication, not addition.

Figure 6d. A vertebrate embryo, showing the metameric organization of the spine (left), which then gets replicated by the somites (right). In animals, metamers are a consequence of melting processes, while in plants they are due to attachment. (image found here)

The principles of melting (apoptosis), folding, inflating and repetition can, of course, be used to create artificial forms. The approach is called subdivision. Note that the forms shown below have nothing to do with geometry anymore. The frameworks needed to talk about them are, at least, topology and category theory. Additionally, they require an advanced non-Cartesian conception of space, as we have outlined above.

Figure 7. Forms created by subdivision (courtesy Michael Hansmeyer). The work is based on a family of procedures, called subdivision, that are directed towards the differentiation of the interior of a body. It can’t be described by geometry any more. Thus, it is a non-geometrical, procedural form, which expresses time, not matter and its properties. The series of subdivisions “break” the straightness of edges and can also be seen as a series of nested, yet uncompleted folds (see Deleuze’s work on the Fold and Leibniz). Here, in Hansmeyer’s work, each column is a compound of three “tagmata”, that is, sections that have been grown “physically” independently from each other, related just by a similar dynamics in the set of parameters.


Creating such figurated forms is not fully automatic, though. There is some contingency, represented by the designer’s choices while establishing a particular history of subdivisions.
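Hansmeyer's columns rest on much richer 3D schemes, but the simplest member of the subdivision family, Chaikin's corner cutting, sketched here purely as an illustration, already shows the essential move: each pass replaces the form by a refined history of itself, “breaking” the straight edges.

```python
def chaikin(points, depth=4):
    """Chaikin's corner-cutting subdivision for a closed polygon.
    Each level replaces every edge by two points at 1/4 and 3/4 of its length,
    breaking the straightness of the edges step by step."""
    for _ in range(depth):
        new = []
        n = len(points)
        for i in range(n):
            (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
            new.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            new.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = new
    return points

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
smooth = chaikin(square)  # converges towards a smooth closed curve
```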

Animals employ a wide variety of modes in their growing. They can do so due to the highly developed capability of compartmentalization. They gain almost complete independence from matter10, regarding their development, their form, and particularly regarding their immaterial setup, which we can observe as learning and the use of rules. Learning, on the other hand, is intimately related to perception, in other words, configurable measurement, and data. Perception, as a principle, is in turn mandatory for the evolution of brains and the capability to handle information. Thus, equipping a building with sensors is not a small step. It could take the form of a jump into another universe, particularly if the sensors are conceived as being separate from the being of the house, for instance in order to facilitate or modify mental or social affairs of its inhabitants.

3.5. Urban Morphing

On the level of urban arrangements, we can also observe different forms of morphological differentiation.

Figure 8. Urban Sprawl, London (from [1]). The layout looks like a slime-mold. We may conclude that cities grow like slime-molds, by attachment from the inside and directed towards the inside and the outside. Early phases of urban sprawl, particularly in developing countries, grow by attachment from the outside; hence they look more like a dimensionally constrained crystal (see fig. 1b).

The concept of the fractal, and the related one of self-similarity, of course also entered the domain of urbanism, particularly an area of interest called Urban Morphology. It was born as a sub-discipline of geography and is characterized by a salient reductionism of the Urban to the physical appearance of a city and its physical layout, which of course is not quite appropriate.

Given the mechanisms of attachment, whether due to interior processes or attachment from the outside (through people migrating to the city), it is not really surprising to find fractal shapes similar to those of (dimensionally) constrained crystalline growth, or of slime-molds with their branching amoeba highways. In order to understand the city, the question is not whether there is a fractal or not, whether there is a dimensionality of 1.718 or one of 1.86.
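For completeness, a minimal sketch of how such a dimensionality figure is typically obtained: box counting over a rasterized layout (grid sizes and the toy input are illustrative assumptions). As argued above, the resulting number is only a starting point, never an explanation.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a binary image,
    e.g. a rasterized city layout with built-up cells set to True."""
    counts = []
    h, w = mask.shape
    for s in sizes:
        # count boxes of side s that contain at least one occupied cell
        boxes = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if mask[i:i+s, j:j+s].any():
                    boxes += 1
        counts.append(boxes)
    # slope of log(count) against log(1/size) approximates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

mask = np.random.rand(256, 256) < 0.1  # stand-in for a rasterized layout
print(round(box_counting_dimension(mask), 2))
```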

The question is about the mechanisms that show up as a particular material habitus, and about the actual instantiation of these mechanisms. Or even shorter: the material habitus must be translated into a growth model. In turn, this would provide the means to shape the conditions of the city’s own unfolding and evolution. We already know that dedicated planning and dedicated enforcement of plans will not work in most cities. It is of utmost importance here not to fall back into representationalist patterns, to which even Michael Batty sometimes falls prey [1]. Avoiding representationalist fallacies is possible only if we embed the model of abstract growth into a properly bound compound that comprises theory (methodology and philosophy) as well as politics, much as we proposed in the previous essay.

Figure 9a. In former times, or as a matter of geographical fact, attachment was excluded. Any growth is then directed towards the inside and shows up as differentiation. Here, in this figure, we see a planned city, which thus looks much like a crystal.

Figure 9b. A normally grown medieval city. While the outer “shell” looks pretty standardized, though not “crystalline”, the interior shows rich differentiation. In order to describe the interior of such cities we have to use the concept of type.

Figure 10a. Manhattan is the paradigmatic example for congestion due to a severe (in this case: geographical) limitation of the possibility to grow horizontally. In parallel, the overwhelming interior differentiation created a strong connectivity and abundant heterotopias. This could be interpreted as the prototype of the internet, built in steel and glass (see Koolhaas’ “Delirious New York” [15]).

Figure 10b. In the case of former Kowloon (now torn down), the constraints weren’t geographical, but political. It was a political enclave/exclave, where actually no legislative regulations could be enforced. In some way it is the chaotic brother of Manhattan. This shows Kowloon in 1973…

Figure 10c. And here the same area in 1994.

Figure 10d. Somewhere in the inside. Kowloon developed more and more into an autonomous city that provided every service to its approx. 40’000 inhabitants. On the roofs of the buildings they installed the playgrounds for the children.

The medieval city, Manhattan and Kowloon share a particular growth pattern. While the outer shape remains largely constant, their interior develops all kinds of compartments, every imaginable kind of flow, and a rich vertical structure, both physical and logical. This growth pattern is the same as we can observe in animals. Furthermore, those cities, much like animals, start to build an informational autonomy; they start to behave, to build an informational persistence, to initiate an intense mediality.

3.6. Summary of Growth Modes

The following table provides a brief overview of the main structural differences between growth models, as they can be derived from their natural instantiations.

Table 1: Structural differences of the four basic classes of modes of growth. Note that the class labels are indeed just that: labels of models. Any actual instantiation, particularly in case of real animals, may comprise a variety of compounds made from differently weighted classes.

Aspect \ Class | crystal | plant | swarm | animal
Mode of Attachment | passive positive | active positive | active positive and negative | active positive and negative
Direction | from outside | from inside | from inside, towards outside or inside | from & towards the inside
Morphogenetic Force | as a fact by matter | explicitly produced inhibiting fields | implicit and explicit multi-component fields11 | explicitly produced multi-component fields
Status of Form | implicitly templated by existing form | beginning independence from matter | independence from matter | independence from matter
Formal Tools | geometric scaling, representative reproduction, constrained randomness | Fibonacci patterns, fractal habitus, logistics | fractal habitus, logistics | metamerism, organs, transformation, strictly a-physical
Causa Finalis (main component) | actualization of identity | space filling logistics | mobile logistics | short-term adaptivity

4. Effects of Growth

Growth increases mass, spread or both. Saying that doesn’t add anything; it is an almost syntactical replacement of words. In Aristotelian words, we would get stuck with the causa materialis and the causa formalis. The causa finalis of growth, in other words its purpose and general effect beyond the mere increase of mass, is differentiation12, and we have to focus on the conditions for that differentiation in terms of information. For the change of something is accessible only upon interpretation by an observing entity. (Note that this again requires relationality as a primacy.)

The very possibility of difference, and consequently of differentiation, is bound to the separation of signals.13 Hence we can say that growth is all about the creation of a whole bouquet of signal intensity lengths, instantiated on a scale that stretches from morpho-physical compartments through morpho-functional compartments to morpho-symbolic specializations.14

Inversely, we may say that abstract growth is a necessary component of differentiation. Formally, we can cover differentiation as an abstract complexity of positive and negative growth. Without abstract growth—or differentiation—there is no creation or even shaping of space into an individual space with its own dynamical dimensionality, which in turn would preclude the possibility for interaction. Growth regulates the dimensionality of the space of expressibility.

5. Growth, an(d) Urban Matter

5.1. Koolhaas, History, Heritage and Preservation

From his early days as urbanist and architect, Koolhaas has been fascinated by walls and boxes [16], even by boxes inside boxes. While he first conceived the concept of separation in a more representational manner, he later also developed it into a mode of operation. We can now decode it as a play with informational separation, as an interest in compartments, hence in processes of growth and differentiation. This renders his personal fascinosum clearly visible: the theory and the implementation of differentiation, particularly with respect to human forms of life. It is probably his one and only subject.

All of Koolhaas’ projects fit into this interest. New York, Manhattan, Boxes, Lagos, CCTV, story-telling, Singapore, ramps, Lille, empiricism, Casa da Musica, bigness, Metabolism. His exploration(s) of bigness can be interpreted as an exploration of the potential of signal intensity length. How much do we have to inflate a structure in order to provoke differentiation through shifting the signal horizon into the inside of the structure? Remember that the effective limit of signal intensity length manifests as a breaking of symmetry, which in turn gives rise to compartmentalization and opposing forces, paving the way for complexity and emergence, which is nothing else than a dynamic generation of patterns. BIG BAG. BIG BANG. Galaxies, stardust, planets, everything in the mind of those crawling across and inside bigness architecture. Of course, it appears more elegant to modulate the signal intensity length through other means than just bigness, but we should not forget about it. Another way of provoking differentiation is through introducing elements of complexity, such as contradictory elements and volatility. Already in 1994, Koolhaas wrote [17]15

But in fact, only Bigness instigates the regime of complexity that mobilizes the full intelligence of architecture and its related fields. […] The absence of a theory of Bigness–what is the maximum architecture can do?–is architecture’s most debilitating weakness. […] By randomizing circulation, short-circuiting distance, […] stretching dimensions, the elevator, electricity, air-conditioning,[…] and finally, the new infrastructures […] induced another species of architecture. […] Bigness perplexes; Bigness transforms the city from a summation of certainties into an accumulation of mysteries. […] Bigness is no longer part of any urban tissue. It exists; at most, it coexists. Its subtext is fuck context.

The whole first part of this quote is about nothing else than modulating signal intensity length. Consequently, the conclusion in the second part refers directly to complexity that creates novelty. An artifice that is doubly creative, that is, creative and in each of its instances personalized creative: how should it be perceived other than as a mystery? No wonder modernists feel overtaxed…

The only way to get out of (built) context is through dynamically creating novelty, by creating an exhaustively new context outside of built matter, yet strongly building on it. Novelty is established just and only by the tandem of complexity and selection (aka interpretation). But be aware: complexity here is fully defined and not to be mistaken for the crap delivered by cybernetics, systems theory or deconstructivism.

The absence of a theory of Bigness—what is the maximum architecture can do?—is architecture’s most debilitating weakness. Without a theory of Bigness, architects are in the position of Frankenstein’s creators […] Bigness destroys, but it is also a new beginning. It can reassemble what it breaks. […] Because there is no theory of Bigness, we don’t know what to do with it, we don’t know where to put it, we don’t know when to use it, we don’t know how to plan it. Big mistakes are our only connection to Bigness. […] programmatic elements react with each other to create new events – Bigness returns to a model of programmatic alchemy.

All this reads like a direct rendering of our conceptualization of complexity. It is, of course, nonsense to think that

[…] ‘old’ architectural principles (composition, scale, proportion, detail) no longer apply when a building acquires Bigness. [18]

Koolhaas sub-contracted Jean Nouvel to care for large parts of Euro-Lille. Why should he do so if proportions weren’t important? Bigness and proportions are simply on different levels! Bigness instantiates the conditions for the dynamic generation of patterns, and those patterns, albeit volatile and completely on the side of the interpreter/observer/user/inhabitant/passer-by, deserve careful thinking about proportions.

Bigness is impersonal: the architect is no longer condemned to stardom.

Here, again, the pass-porting key is the built-in creativity, based on elementarized, positively defined complexity. We would thus like to propose considering our theory of complexity—at least—as a theory of Bigness. Yet, the role of complexity can be understood only as part of generic differentiation. Koolhaas’ suggestion of Bigness does not only apply to architecture. We already mentioned Euro-Lille. Bigness, and so complexity—positively elementarized—is the key to dealing with Urban affairs. What could be BIGGER than the Urban? Koolhaas concludes

Bigness no longer needs the city, it is the city. […]

Bigness = urbanism vs. architecture.

Of course, by “architecture” Koolhaas refers to the secretions of the swarm of architects addicted to points, lines, forms and a priori functions, all these blinkers of modernism. Yet, I think urbanism and a renewed architecture (one that embraces complexity) may well be possible. Yet probably only if we, architects and their “clients”, contemporary urbanists and their “victims,” start to understand both as parts of a vertical, differential (Deleuzean) Urban Game. Any comprehensive apprehension of {architecture, urbanism} will overcome the antipodic character of the relations between them. The hope is that it will also be a cure for junkspace.

There are many examples from modernism where architects spent the utmost effort to prevent the “natural” effect of bigness, though not always successfully. Examples include Corbusier as well as Mies van der Rohe.

Koolhaas/OMA not only use assemblage, bricolage and collage as working techniques, whether as “analytic” tools (Delirious New York) or in design, they also implement them in actual projects. Think of Euro-Lille, for instance. Implementing the conditions of or for complexity creates a never-ending flux of emergent patterns. Such an architecture not only keeps being interesting, it is also socially sustainable.

Thus, it is not really a surprise that Koolhaas started to work on the issue and the role of preservation during the past decade, culminating in the contribution of OMA/AMO to the Biennale 2010 in Venice.

In an interview given there to Hans Ulrich Obrist [20] (and in a lecture at the American University of Beirut), Koolhaas mentioned some interesting figures about the quantitative consequences of preservation. In 2010, 3-4% of the earth’s land surface had been declared a heritage site. This amounts to a territory larger than India. The prospect is that soon up to 12% will be protected against change. His objection was that this development can lead to a kind of stasis. According to Koolhaas, we need a new vocabulary, a theory that allows us to talk about how to get rid of old buildings and to negotiate which buildings we could get rid of. He says that we can’t talk about preservation without also talking about how to get rid of old stuff.

There is another interesting issue about preservation. The temporal distance between the age of the buildings to be preserved and the attempt to preserve them has constantly decreased across history. In 1800, preservation focused on buildings erected 2000 years before; in 1900 the distance had shrunk to 300 years, and in 2000 it was as little as 30 years. Koolhaas concludes that we are obviously entering a phase of prospective preservation.

There are two interpretations of this tendency. The first, pessimistic one is that it will lead to a perfect lock-up. As an architect, you couldn’t do anything anymore without being engaged in severely intensified legislation issues and a huge increase in bureaucrazy. The alternative to this pessimistic perspective is, well, let’s call it symbolic (abstract) organicism, based on the concept of (abstract) growth and differentiation as we devised it here. The idea of change as a basis of continuity could be built so deeply into any architectural activity that the result would not only comprise preservation, it would transcend it. Obviously, the traditional conception of preservation would vanish as well.

This points to an important topic: Developing a theory about a cultural field, such as it is given by the relation between architecture and preservation, can’t be limited to just the “subject”. It inevitably has to include a reflection about the conceptual layer as well. In the case of preservation and heritage, we simply find that the language game is still of an existential character, additionally poisoned by values. Preservation should probably not target the material aspects. Thus, the question whether to get rid of old buildings is inappropriate. Transformation should not be regarded as a question of performing a tabula rasa.

Any well-developed theory of change in architectural or Urban affairs brings a quite important issue to the foreground. The city has to decide what it wants to be. The alternatives are preformed by the modes of growth. It could conceive of itself as an abstract crystal, as a plant, as a slime-mold made from amoeboids, or as an abstract animal. Each choice offers particular opportunities and risks. Each of these alternatives will determine the characteristics and the quality of the potential forms of life, which of course have to be supported by the city. Selecting an alternative also selects the appropriate manner of planning, of development. It is not possible to perform the life form of an animal and to plan according to the characteristics of a crystal. The choice will also determine whether the city can enter a regenerative trajectory, whether it will decay to dust, whether it will be able to maintain its shape, or whether it will behave in a predatory manner. All these consequences are, of course, tremendously political. Nevertheless, we should not forget that the political has to be secured against the binding problem as much as conceptual work.

In the cited interview, Koolhaas also gives a hint about this when he refers to the Panopticum project, a commission to renovate a 19th-century prison. He mentions that they discovered a rather unexpected property of the building: “a lot of symbolic extra-dimensions”. This symbolic capital allows for “much more and beautiful flexibility” in handling the renovation. Actually, one “can do it in 50 different ways” without exhausting the potential, something which, according to Koolhaas, is “not possible for modern architecture”.

Well, again, not really a surprise. Neither function, nor functionalized form, nor functionalized fiction (Hollein) can bear symbolic value except precisely that of the function. Symbolic value can no more be implanted than meaning can be defined a priori, something that has not been understood, for instance, by Heinrich Klotz14. Due to the deprivation of the symbolic domain it is hard to re-interpret modernist buildings. Yet, what would be the consequence for preservation? Tearing down all the modernist stuff? Probably not the worst idea, unless the future architects are able to think in terms of growth and differentiation.

Beyond the political aspects the practical question remains: how to decide which building, or district, or structure to preserve? Koolhaas already recognized that politicians have started to influence or even rule the respective decision-making processes, taking responsibility away from the “professional” city-curators. Since there can’t be a rational answer, his answer is random selection.

Figure 11: Random Selection of Preservation Areas, Beijing. Koolhaas suggested selecting preservation areas randomly, since it can’t be decided “which” Beijing should be preserved (there are quite a few very different ones).

Yet, I tend to rate this as a fallback into his former modernist attitudes. I guess the actual and local way of designing the decision-making process is a political issue, which in turn is dependent on the type of differentiation that is in charge, either as a matter of fact or as a subject of political design. For instance, the citizens of the whole city, or just of the respective areas, could be asked about their values, as is a possibility (or a duty) in Switzerland. Actually, there is even a nice and recent example of it. The subject matter is a bus-stop shelter designed by Santiago Calatrava in 1996, making it one of his first public works.

Figure 12: Santiago Calatrava 1996, bus stop shelter in St.Gallen (CH), at a central place of the city; there are almost no cars, but a bus every 1-2 minutes, thus a lot of people pass by, even several times per day. Front view…

…and rear view

In 2011, the city parliament decided to restructure the place and to remove the Calatrava shelter. It was considered by the ‘politicians’ to be too “alien” for the small city, which a few steps away also hosts a medieval district that is a Unesco World Heritage site. Yet, many citizens rated the shelter as something that provides a positive differential, a landmark that could not be found in other cities nearby, not even in the whole of Northern Switzerland. Thus, a referendum was forced by the citizens, and the final result from May 2012 was a clear rejection of the government’s plans. The effect of this recent history is pretty clear: the shelter accumulates even more symbolic capital than before.

Back to the issue of preservation. If it is not the pure matter, what else should be addressed? Again, Koolhaas himself already points in the right direction. The following fig. 13 shows a scene from somewhere in Beijing. The materials of the dwelling are bricks, plastic, cardboard. Neither the site nor the matter nor the architecture seems to convey anything worth preserving.

Figure 13: When it comes to preservation, the primacy is about the domain of the social, not that of matter.

Yet, what must be preserved mandatorily is the social condition, the rooting of the people in their environment. Koolhaas, however, says that he is not able to provide any answer to this challenge. Nevertheless, it is pretty clear that “sustainability” starts right here, not with the question of energy consumption (despite the fact that this is an important aspect too).

5.2. Shrinking. Thinning. Growing.

Cities have been performances of congestion. As we have argued repeatedly, densification, or congestion if you like, is mandatory for the emergence of typical Urban mediality. Many kinds of infrastructure are only affordable, let alone attractive, if there are enough clients for them. Well, the example of China—or Singapore—and their particular practices of implementing plans demonstrates that the question of density can also take place in a plan, in the future, that is, in the domain of time. Moreover, congestion and densification may actualize more and more in the realm of information, based on the new medially active technologies. Perhaps our contemporary society does not need the same corporeal density as was the case in earlier times. There is a certain tendency for the corporeal city and the web to amalgamate into something new that could be called the “wurban“. Nevertheless, at the end of the day, some kind of density is needed to ignite the conditions for the Urban.

Thus, it seems that the Urban is threatened by the phenomenon of thinning. Thinning is different from shrinking, which appears foremost in some regions of the U.S. (e.g. Detroit) or Europe (Leipzig, Ukraine) as a consequence of a monotonic, or monotopic, economic structure. Yet, shrinking can lead to thinning. Thinning describes the fact that there is built matter which, however, is inhabited only for a fraction of time. Visually dense, but socially “voided”.

Thinning, according to Koolhaas, concerns the form of new cities like Dubai. Yet, as he points out, there is also a tendency in some regions, such as Switzerland or the Netherlands, to approach the “thinned city” from the other direction. The whole country seems to transform itself into something like an urban garden, of neither rural nor urban quality. People like Herzog & de Meuron lament this form, conceiving of it as urban sprawl, the loss of distinct structure, i.e. the loss of clearly recognizable rural areas on the one hand, and the surge of “sub-functional” city-fragments on the other. Yet, we should probably turn the perspective away from reactive, negative dialectics into a positive attitude of design, as it may appear a bit infantile to think that a handful of sociologists and urbanists could act against a gross cultural tendency.

In his lecture at the American University in Beirut in 2010 [19], Koolhaas asked “What does it [thinning] mean for the ‘Urban Condition’?”

Well, probably nothing interesting, except that it prevents the appearance of the Urban16, or lets it vanish had it been present. Probably cities like Dubai are just not yet “urban”, not to speak of the Urban. From a distance, Dubai still looks like a photomontage, a Potemkin village, an absurdity. The layout of the arrangement of the high-rises recalls the small street villages: just 2 rows of cottages on both sides of a street, arbitrarily placed somewhere in the nowhere of a grassland plain, the settlement ruled just by a very basic tendency for social cohesion and a common interest in exploiting the hinterland as a resource. But there is almost no network effect, no commonly organized storage, no deep structure.

Figure 14a: A collage shown by Koolhaas in his Beirut lecture, emphasizing the “absurdity” (his words) of the “international” style. Elsewhere, he called it an element of Junkspace.

The following fig. 14b demonstrates the artificiality of Dubai, which classifies more as a linear village made from huge buildings than as an actual “city”.

Figure 14b. Photograph “along” Dubai’s main street, taken in late autumn 2012 by Shiva Menon (source). After years of traffic jams, the nomadic Dubai culture finally accepted that something like infrastructure is necessary in a more sessile arrangement. They started to build a metro, whose first line has been functional since Sep 2010.


Figure 14c below shows the new “Simplicity ™”. This work of Koolhaas and OMA oscillates between sarcasm, humor pretending to be naive, irony and caricature. Although a physical reason is given for the ability of the building to turn its orientation so as to minimize insolation, the effect is quite a different one. It is much more a metaphor for the vanity of village people, or maybe the pseudo-religious power of clerks.

Figure 14c-1. A proposal by Koolhaas/OMA for Dubai (not built, and as such, pure fiction). The building, called “Simplicity”, was conceived to be 200m wide, 300m tall and only 21m in depth. It is placed onto a plate that rotates in order to minimize insolation.

Figure 14c-2. The same thing a bit later the same day.

Yet, besides the row of high-rises we find the dwellings of the migrant workers in considerable density, forming a multi-national population. The layout here, however, recalls Los Angeles more than any kind of “city”. Maybe it simply forms a kind of “rural” hinterland of the high-rise village.

Figure 15. Dubai, “off-town”. Here, the migrant workers are housed. In the background, the skyscrapers lining the infamous main street.

They, for instance, also started to invest in a metro, despite the (still) linear, disseminated layout of the city, which means that connectivity, and hence network effects, are now recognized as a crucial structural element for the success of the city. And this is then not so different anymore from the classical Western conception. Anyway, even the first cities of mankind, which did not rise in the West, provided certain unique possibilities, which as a bouquet could be considered urban.

There is still another dimension of thinning, related to the informatization of presence via medially active technologies. Thinning could be considered as an actualization of the very idea of the potentiality of co-presence, much as it is exploited in the so-called “social media”. Of course, the material urban neighborhood, its corporeality, is dependent on physical presence. Certainly, we can expect either strong synchronization effects or negative tipping points, demarcating a threshold towards sub-urbanization. On the other hand, this could give rise to new forms of apartment sharing, supported by urban designers and town officials…

Then again, we already mentioned natural structures that show a certain dispersal, such as blood cells, the immune system of vertebrates, or the slime-molds. These structures are highly developed swarms. Yet, all these swarms are highly dependent on outer conditions. As such, swarms are hardly persistent. Dubai, the swarm city. Technology, however, particularly in the form of the www and so-called social media, could stabilize the swarm-shape.17

From a more formal perspective we may conceive of shrinking and thinning simply as negative growth. By this, growth definitely turns into an abstract concept, leaving the representational and even the metaphorical far behind. Yet, the explication of a formal theory would exceed the intended size of this text by far. We certainly will do it later, though.

5.3. In Search for Symbols

What turns a building into an entity that may grow into an active source of symbolization processes? At least we know initially that symbols can’t be implanted in a direct manner. Of course, one can always draw on exoticism, importing the cliché that is already attached to the entity from abroad. Yet, this is not what we are interested in here. The question is not so dissimilar to the issue of symbolization at large, as it is known from the realm of language. How could a word, a sign, a symbol gain reference, and how could a building get it? We could even take a further step by asking: How could a building acquire generic mediality such that it could be inhabited not only physically, but also in the medial realm? [23] We can’t answer the issues around these questions here, as there is a vast landscape of sources and implications, enough to fill at least a book. Yet, conceiving buildings as agents in story-telling could be a straightforward and not too complicated entry into this landscape.

Probably, story-telling with buildings works like a good joke: if it is too direct, nobody laughs. Probably, story-telling has a lot to do with behavior and the implied complexities, I mean, the behavior of the building. We interpret pets, not plants. With plants, we interpret just their usage. We laugh about cats, dogs, apes and elephants, but not about roses and orchids, and even less about crystals. Once you have seen one crystal, you have seen all of them. Being inside a crystal can be frightening; just think of Snow White. While in some way this holds even for plants, it is certainly not true for animals. Junkspace is made from (medial) crystals. Junkspace is so detrimental due to the fundamental modernist misunderstanding that claims the possibility of implementing meaning and symbols, if these are regarded as relevant at all.

Closely related to the issue of symbols is the issue of identity.

Philosophically, it is definitely highly problematic to refer to identity as a principle. It leads to deep ethical dilemmata. Yet, if we are going to drop it, we have to ask immediately for a replacement, since many people indeed feel that they need to “identify” with their neighborhood.

Well, first we could say that identification and “to identify” are probably quite different from the idea of identity. Every citizen of a city could be thought to identify with her or his city, yet at the same time there need not be such a thing as “identity”. Identity is the abstract idea, imposed by mayors and sociologists, and it should preferably be rejected just for that, while the process of feeling empathy with one’s neighborhood is a private process that respects plurality. It is not too difficult to imagine that there are indeed people who feel so familiar with “their” city, the memories of experiences, the sound, the smell, the way people walk, that they become empathic with all of this to such a degree that they source a significant part of their personality from it. What to call this inextricable relationship other than “to identify with”?

The example of the Calatrava bus-stop shelter in St. Gallen demonstrates one possible source of identification: success in collective design decisions. Or more generally: successfully finished negotiations about collective design issues, a common history of such successful processes, even if the collective negotiation happens as a somewhat anonymous process. Yet, the relative preference for participation versus decreed activities depends on the particular distribution of political and ethical values in the population of citizens. Certainly, participatory processes are much more stable than top-down decrees, not only in the long run, as even the Singaporean government has recognized recently. But anyway, cities have their particular personality, because they behave18 in a particular manner, and any attempt to get clear about or to decide on preservation must respect this personality. Of course, it also applies that the decision-making process should be conscious enough to reflect on the metaphysical belief set, the modes of growth and the long-term characteristics of the city.

5.4. The Question of Implementation

This essay tries to provide an explication of the concept of growth in the larger context of a theory of differentiation in architecture and urbanism. There, we positioned growth as one of four principles or schemata that are constitutive for generic differentiation.

In this final section we would like to address the question of implementation, since only little has been said so far about how to deal with the concept of growth. We already described how and why earlier attempts like that of the Metabolists dashed against the binding problem of theoretical work.

If houses do not move physically, how then to make them behave, say, similarly to the way an animal does? How to implement a house that shares structural traits with animals? How to think of a city as a system of plants and animals without falling prey to utter naivety?

We already mentioned that there is no technocratic, or formal, or functionalist solution to the question of growth. To begin with, the city has to decide what it wants to be, which kind of mix of growth modes should be implemented in which neighborhoods.

Let us first take some visual impressions…

Figure 16a,b,c. The Barcelona Pavilion by Mies van der Rohe (1929 [1986]).

This pavilion is a very special box. It is a non-box, or better, it establishes a volatile collection of virtual boxes. In this building, Mies reached the mastery of boxing. Unfortunately, there are not many more examples. In some way, the Dutch Embassy by Koolhaas is the closest relative to it, if we consider more recent architecture.

Just at the time the Barcelona Pavilion was built, another important architect followed similar concepts. In his Villa Savoye, built 1928-31, Le Corbusier employed and demonstrated several new elements of his so-called “new architecture,” among others the box and the ramp. Probably the most important principle, however, was to completely separate construction and tectonics from form and design. In this way, he achieved a similar “mobility” as Mies in his Pavilion.

Figure 17a: La Villa Savoye, mixing interior and exterior on the rooftop “garden”. The other zone of overlapping spaces is beneath the house (see next figure 17b).


Figure 17b: A 3d model of Villa Savoye, showing the ramps that serve as “entrance” (from the outside) and “extrance” (towards the rooftop garden). The principle of the ramp creates a new location for the creation and experience of duration in the sense of Henri Bergson’s durée. Both the ramp and the overlapping of spaces create a “zona extima,” which is central to the “behavioral turn”.


Comparing La Villa Savoye with the Barcelona Pavilion regarding the mobility of space, it is quite obvious that Le Corbusier handled the confluence and mutual penetration of interior and exterior in a more schematic and geometric manner.19

The quality of the Barcelona building derives from the fact that its symbolic value is not directly implemented; it just emerges upon interaction with the visitor, or the inhabitant. It actualizes the principle of “emerging symbolicity by induced negotiation” of compartments. The compartments become mobile. In this way, it is one of the roots of the ramp that appears in many works of Koolhaas. Yet, its working requires a strong precondition: a shared catalog of values, beliefs and basic psychological determinants, in short, a shared form of life.

On the other hand, these values and beliefs are not directly symbolized, which shifts them into their volatile phase, too. Walking through the building, or simply being inside of it, instantiates differentiation processes in the realm of the immaterial. All the differentiation takes place in the interior of the building; hence it brings forth animal-like growth, transcending the crystal and the swarm.

Thus the power of the pavilion. It is able to transform and to transcend the values of the inhabitant/visitor. The zen of silent story-telling.

This example demonstrates clearly that morphogenesis in architecture not only starts in the immateriality of thought, it also has to target the immaterial.

It is clear that such a volatile dynamics, such an active, if not living building is hard to comprehend. In 2008, the Japanese office SANAA was invited to contribute the annual installation in the pavilion. They explained their work with the following words [24].

“We decided to make transparent curtains using acrylic material, since we didn’t want the installation to interfere in any way with the existing space of the Barcelona Pavilion,” says Kazuyo Sejima of SANAA.

Figure 18. The installation of Japanese office SANAA in the Barcelona Pavilion. You have to take a careful look in order to see the non-interaction.

Well, it certainly rates as something between bravery and stupidity to try “not to interfere in any way with the existing space“. And doing so by means of highly transparent curtains runs quite contrary to the building’s characteristics, as it removes precisely the potentiality, the volatility, the virtual mobility. Nothing is left, besides the air, perhaps. SANAA committed the typical representational fault, as they tried to use a representational symbol. Of course, walls that are not walls at all have a long tradition in Japan. Yet, the provided justification would still be simply wrong.

Instead of trying to implement a symbol, the architect or the urbanist has to care about the conditions for the possibility of symbol processes and sign processes. These processes may be political or not, they always will refer to the (potential) commonality of shared experiences.

Above we mentioned that the growth of a building has its beginning in the immateriality of thought. Even for the primitive form of mineralic growth we found that we can understand the variety of resulting shapes only through the conditions embedding the growth process. The same holds, of course, for the growth of buildings. Just as the outer conditions belong to the crystal, the way of generating a building’s form belongs to the building.

Where to look for the outer conditions for creating the form? I suppose we have to search for them in the way the form gets concrete, starting from a vague idea, which includes its social and particularly its metaphysical conditions. Do you believe in independence, identity, relationality, difference?

It would be interesting to map the difference between large famous offices, say OMA and HdM.

According to their own words, HdM seem to treat the question of material very differently from OMA, where the question of material comes in at a later stage [25]. HdM seem to work in a much more “crystallinic” manner; form is determined by the matter, the material and the respective culture around it. There are many examples of this, from the winery in California and the “Schaulager” in Basel (CH) to the railway control center (Basel), up to the “Bird’s Nest” in Beijing (which, by the way, is an attempt at providing symbols that went wrong). HdM seem to rely on the innate symbolicity of the material, of corporeality itself. In the case of the Schaulager, the excavated material has been used to raise the building; the stones from the underground have been erected into a building whose inside looks like a Kafkaesque crystal. They even treat the symbols of a culture as material, somehow counter to their own “matérialisme brut”. Think of their praise of simplicity, the declared intention to avoid any reference besides the “basic form of the house” (Rudin House). In this perspective, their acclaimed “sensitivity” to local cultures is little more than the exploitation of a coal mine, which also requires sensitivity to local conditions.

Figure 19: Rudin House by Herzog & de Meuron

HdM practice a representationalist anti-symbolism, leaning strongly towards architecture as a crystal science, a rather weird attitude. Probably it is this weirdness that quite unintentionally produces the interest in their architecture, through a secondary dynamics in the symbolic. Is it, after all, Hegel’s tricky reason at work? At least this would explain the strange mismatch between their modernist talking and the interest in their buildings.

6. Conclusions

In this essay we have closed a gap with respect to the theoretical structure of generic differentiation. Generic Differentiation may be displayed by the following diagram (but don’t miss the complete argument).

Figure 20: Generic Differentiation is the key element for solving the binding problem of theoretical work. This structure is to be conceived not as a closed formula, but rather as the module of a fractal that is created through mutual self-affine mappings of all of the three parts into the respective others.

(Diagram: the basic module of the fractal relation between concept/conceptual, generic differentiation/difference and operation/operational, the latter comprising logistics and politics, describing the active subject.)
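The diagram itself cannot be reproduced here, but the phrase “mutual self-affine mappings” can be made concrete through a standard iterated function system, the usual formal device behind such fractal modules. The following sketch is an analogy under stated assumptions, not the essay’s construction: three affine contractions (their coefficients are my arbitrary choice) stand in for the three parts, each mapping the whole figure into “its” part, so that every part contains images of all the others.

```python
# A minimal sketch of "mutual self-affine mappings" as an iterated
# function system, rendered via the chaos game. The three maps are
# illustrative stand-ins for the three parts of the diagram.
import random

# Hypothetical placement: three affine contractions p -> A.p + b,
# here simple scalings toward three anchor regions.
MAPS = [
    lambda x, y: (0.5 * x,        0.5 * y),        # part 1
    lambda x, y: (0.5 * x + 0.5,  0.5 * y),        # part 2
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),  # part 3
]

def chaos_game(n=20000, seed=1):
    """Iterate randomly chosen affine maps; the visited points
    approximate the attractor (here a Sierpinski-like gasket)."""
    random.seed(seed)
    x, y = 0.5, 0.5
    points = []
    for i in range(n):
        f = random.choice(MAPS)
        x, y = f(x, y)
        if i > 20:                  # discard the initial transient
            points.append((x, y))
    return points

pts = chaos_game()
print(len(pts), "points on the attractor")
```

The point of the analogy is structural: the attractor is not any one of the three maps; it is what their mutual application stabilizes, much as generic differentiation is meant to be the stable figure of the three parts mapping into each other.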

In earlier essays, we proposed abstract models for probabilistic networks, for associativity and for complexity. These models represent a perspective from the outside onto the differentiating entity. All of these have been set up in a reflective manner by composing certain elements, which in turn can be conceived as framing a particular space of expressibility. Yet, we also proposed the trinity of development, evolution and learning (chp.10 here) for the perspective from the inside of the differentiation process(es), describing different qualities of differentiation.

Well, the concept of growth20 now joins the group of compound elements for approaching the subject of differentiation from the outside. In some way, using a traditional and actually inappropriate wording, we could say that this perspective is more analytical than synthetical, more scientific than historiographical. This does not mean, of course, that the complementary perspective is less scientific, or that talking about growth or complexity is less aware of the temporal domain. It is just a matter of weights. As we have pointed out in the previous essay, the meta-theoretical conception (as a structural description of the dynamics of theoretical work) is more like a fractal field than a series of activities.

Anyway, the question is what we can do with the newly re-formulated concept of growth.

First of all, it completes the concept of generic differentiation, as we mentioned just before. Probably the most salient influence is the enlarged and improved vocabulary for talking about change as far as it concerns the “size” of the form of a something, even if this something is immaterial. For many reasons, we definitely should resist the tendency to limit the concept of growth to issues of morphology.

Only through this vocabulary can we start to compare the entities in the space of change. Different things from different domains or even different forms of life can be compared to each other, yet not as those things, but rather as media of change. Comparing things that change means investigating the actualization of different modes of change as they pass through the something. This move is by no means eclecticist. It is even mandatory in order to keep aligned with the primacy of interpretation, the Linguistic Turn, and the general choreostemic constitution.

By means of the new and generalized vocabulary we may overcome the infamous empiricist particularism. Bristle counting, as it is called in biology, particularly entomology. Yes, there are around 450’000 different species of beetles… but… Well, overcoming particularism means that we can spell out new questions: about regulative factors, e.g. for continuity, melting and apoptosis. Guided by the meta-theoretical structure in fig.20 above we may ask: What would a politics of apoptosis look like? What about the recycling of space? How could infrastructure foster associativity, learning and creativity of the city, rather than creativity in the city? What is the epi/genetics of the growth and differentiation processes in a particular city?

Such questions may appear elitist, abstract, of only little use. Yet, the contrary is true, as precisely such questions directly concern the productivity of a city, the speed of circulation of capital, whether symbolic or monetary (which anyway is almost the same). Understanding the conditions of growth may lead to cities that are indeed self-sustaining, because the power of life would be a feature deeply built into them. A little, perhaps even homeopathic dose of dedetroitismix, a kind of drug to cure the disease that infected the city of Detroit, as well as the planners of Detroit and all the urbanists who are pseudo-reasoning about Detroit in particular and sustainability in general. Just as Paracelsus remarked that there is not just one kind of stomach, but hundreds of kinds of stomach, we may recognize how to deal with the thousands of different kinds of cities that spread across thousands of plateaus, if we understand how to speak and think about growth.

Notes

1. This might appear a bit arrogant, perhaps, at first sight. Yet, at this point I must insist on it, even as I take into account the most advanced attempts, such as those of Michael Batty [1], Luca D’Acci or Karl Kropf [2]. The proclaimed “science of cities” is in a bad state. Either it is still infected by positivist or modernist myths, or the applied methodological foundations are utterly naive. Batty, for instance, embraces complexity full-heartedly. But how could one use complexity other than as a mere label, if one is going to write such a weird mess [3], wildly mixing concepts and subjects?

“Complexity: what does it mean? How do we define it? This is an impossible task because complex systems are systems that defy definition. Our science that attempts to understand such systems is incomplete in the sense that a complex system behaves in ways that are unpredictable. Unpredictability does not mean that these systems are disordered or chaotic but that defy complete definition.”

Of course, it is not an impossible task to conceptualize complexity in a sound manner. This is even a mandatory precondition for using it as a concept. It is a bit ridiculous to claim the impossibility and then to write a book about its usage. And this conceptualization, whatever it would look like, has absolutely nothing to do with the fact that complex systems may behave unpredictably. Actually, in some ways they are better predictable than completely random processes. It remains unclear which kind of unpredictability Batty is referring to. He didn’t disclose anything about this question, which is quite an important one if one is going to apply “complexity science”. And what about the concepts of risk and modeling, then, which actually can’t be separated from it at all?

His whole book [1] is nothing else than an accumulation of half-baked formalistic particulars. When he talks about networks, he considers only logistic networks. Bringing in fractals, he fails to mention the underlying mechanisms of growth and the formal aspects (self-affine mapping). In his discussion of the possible role of evolutionary theory [4], following Geddes, Batty resorts again to physicalism and defends it. Although he emphasizes the importance of the concept of “mechanism”, although he correctly distinguishes development from evolution, although he demands an “evolutionary thinking”, he fails to get to the point: a proper attitude to theory under conditions of evolution and complexity, a probabilistic formulation, an awareness of self-referentiality, insight into the incommensurability of emergent traits, the dualism of code and corporeality, the space of evo-devo-cogno. In [4], one can find another nonsensical statement about complexity on p.567:

“The essential criterion for a complex system is a collection of elements that act independently of one another but nevertheless manage to act in concert, often through constraints on their actions and through competition and co-evolution. The physical trace of such complexity, which is seen in aggregate patterns that appear ordered, is the hallmark of self-organisation.” (my emphasis).

The whole issue with complex systems is that there is no independence… they do not “manage” to act in concert… concepts like evolution or competition are wildly mixed in… physics can say definitely nothing about the patterns, and the hallmark of self-organizing systems is surely not just the physical trace: it is the informational re-configuration.

It is not by pure chance, therefore, that he talks about “tricks” ([5], following Hamdi [7]): “The trick for urban planning is to identify key points where small change can lead spontaneously to massive change for the better.” Without a proper vocabulary of differentiation, that is, without a proper concept of differentiation, one inevitably has to invoke wizards…

But the most serious failures are the following: regarding the cultural domain, there is no awareness of the symbolic/semiotic domain and a disrespect of information; regarding methodology, Batty throughout his writings mistakes theory for models and vice versa, following the positivist trail. There is not the slightest evidence in his writing of even a small trace of reflection. Such reflection, however, is seriously indicated, because cities are about culture.

This insensitivity is shared by talented people like Luca D’Acci, who is still musing about “ideal cities”. His procedural achievements as a craftsman of empiricism are impressive, but without reflection it is just threatening, claiming the status of the demiurge.

Despite all these failures, Batty’s approach and direction is of course by far more advanced than the musings of Conzen, Caniggia or Kropf, which are intellectually simply disastrous. There are numerous examples of a highly uncritical use of structural concepts, of mixing levels of argument, crude reductionism, a complete neglect of mechanisms and processes, etc. For instance, Kropf in [6]:

“A morphological critique is necessarily a cultural critique. […] Why, for example, despite volumes of urban design guidance promoting permeability, is it so rare to find new development that fully integrates main routes between settlements or roads directly linking main routes (radials and counter-radials)?” (p.17)

“The generic structure of urban form is a hierarchy of levels related part to whole. […] More effective and, in the long run, more successful urbanism and urban design will only come from a better understanding of urban form as a material with a range of handling characteristics.” (p.18)

It is really weird to regard form as matter, isn’t it? The materialist’s final revenge… Still, through the work of Batty there is indeed some reasonable hope for improvement. Batty & Marshall are certainly heading in the right direction when they demand (p.572 [4]):

“The crucial step – still to be made convincingly – is to apply the scientifically inspired understanding of urban morphology and evolution to actual workable design tools and planning approaches on the ground.”

But it is equally certain that an adoption of evolutionary theory that seriously considers an “elan vital” will not be able to serve as a proper foundation. What is needed instead is a methodologically sound abstraction of evolutionary theory as we have proposed it some time ago, based on a probabilistic formalization and vocabulary. (…end of the longest footnote I have ever produced…)

2. The concept of mechanism should not be mistaken for a kind of “machine”. In stark contrast to machines, mechanisms are inherently probabilistic. While machines are synonymous with their plan, mechanisms imply an additional level of abstraction: the population and its dynamics.
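To make this footnote’s distinction concrete, a minimal sketch under my own illustrative assumptions (the rule and the probability p are arbitrary, not taken from the essay): a “machine” coincides with its plan, while a “mechanism” is a rule instantiated probabilistically across a population, so that the describable regularity lives on the level of the population’s dynamics.

```python
# A sketch, not a definitive model: machine vs. mechanism.
import random

def machine(x):
    # A machine is identical with its plan: same input, same output.
    return 2 * x + 1

def mechanism_step(population, p=0.6, rng=random):
    """One probabilistic update: each individual follows the rule only
    with probability p (an assumed value). The mechanism is described
    on the level of the population's distribution, not of any single run."""
    return [2 * x + 1 if rng.random() < p else x for x in population]

random.seed(0)
pop = [1.0] * 1000
for _ in range(5):
    pop = mechanism_step(pop)

print(machine(1.0))            # always 3.0: the plan exhausts the machine
print(sum(pop) / len(pop))     # a population-level quantity: only its
                               # dynamics characterizes the mechanism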

3. Whenever one tries to prove or to implement the opposite, the primacy of logic, characteristic gaps are created, more often than not of a highly pathological character.

4. see also the essay about “Behavior”, where we described the concept of “Behavioral Coating”.

5. Deleuzean understanding of differential [10], for details see “Miracle of Comparison”.

6. As in the preceding essays, we use the capital “U” if we refer to the urban as a particular quality and as a concept, in order to distinguish it from the ordinary adjective that refers to common sense understanding.

7. Only in embryos or in automated industrial production do we find “development”.

8. The definition (from Wiki) is: “In animals, metamery is defined as a mesodermal event resulting in serial repetition of unit subdivisions of ectoderm and mesoderm products.”

9. see our essay about Reaction-Diffusion-Systems.

10. To emancipate from constant and pervasive external “environmental” pressures is the main theme of evolution. This is the deep reason why generalists are favored at the cost of specialists (at least on evolutionary time scales).

11. Aristotle’s idea of the four causes is itself a scheme to talk about change.

12. This principle is not only important for Urban affairs, but also for a rather different class of arrangements, machines that are able to move in epistemic space.

13. Here we meet the potential of symbols to behave according to a quasi-materiality.

14. Heinrich Klotz‘ credo in [21] is „not only function, but also fiction“, without however taking the mandatory step away from the attitude of predefining symbolic value. Thus, Klotz himself remains a fully-fledged modernist. see also Wolfgang Welsch in [22], p.22.

15. There is of course also Robert Venturi with his “Complexity and Contradiction in Architecture”, or Bernard Tschumi with his disjunction principle summarized in “Architecture and Disjunction” (1996). Yet, neither went as far as necessary, for “complexity” can be elementarized and generalized even further, as we have been proposing (here), which is, I think, a necessary move to combine architecture and urbanism regarding space and time.

16. see footnote 5.

17. ??? .

18. Remember that the behavior of cities is also determined by the legal setup, the traditions, etc.

19. The ramp is an important element in contemporary architecture, yet it is often used merely as a logistic solution, mostly just for the disabled, or as a moving staircase. In Koolhaas’ works, it takes a completely different role as an element of story-telling. This aspect of temporality we will investigate in more detail in another essay. Significantly, Le Corbusier used the ramp as a solution for a purely spatial problem.

20. Of course, NOT as a phenomenon!

References

  • [1] Michael Batty, Cities and Complexity: Understanding Cities with Cellular Automata, Agent-Based Models, and Fractals. MIT Press, Boston 2007.
  • [2] Karl Kropf (2009). Aspects of urban form. Urban Morphology 13 (2), p.105-120.
  • [3] Michael Batty’s website.
  • [4] Michael Batty and Stephen Marshall (2009). The evolution of cities: Geddes, Abercrombie and the new physicalism. TPR, 80 (6) 2009 doi:10.3828/tpr.2009.12
  • [5] Michael Batty (2012). Urban Regeneration as Self-Organization. Architectural Design, 215, p.54-59.
  • [6] Karl Kropf (2005). The Handling Characteristics of Urban Form. Urban Design 93, p.17-18.
  • [7] Nabeel Hamdi, Small Change: About the Art of Practice and the Limits of Planning, Earthscan, London 2004.
  • [8] Dennis L. Sepper, Descartes’s Imagination Proportion, Images, and the Activity of Thinking. University of California Press, Berkeley 1996. available online.
  • [9] C. Bandt and M. Mesing (2009). Self-affine fractals of finite type. Banach Center Publications 84, 131-148. available online.
  • [10] Gilles Deleuze, Difference & Repetition. [1968].
  • [11] Moussaïd M, Perozo N, Garnier S, Helbing D, Theraulaz G (2010). The Walking Behaviour of Pedestrian Social Groups and Its Impact on Crowd Dynamics. PLoS ONE 5(4): e10047. doi:10.1371/journal.pone.0010047.
  • [12] Claire Detrain, Jean-Louis Deneubourg (2006). Self-organized structures in a superorganism: do ants “behave” like molecules? Physics of Life Reviews, 3(3), p.162–187.
  • [13] Dave Mosher, Secret of Annoying Crowds Revealed, Science now, 7 April 2010. available online.
  • [14] Charles Jencks, The Architecture of the Jumping Universe. Wiley 2001.
  • [15] Rem Koolhaas, Delirious New York.
  • [16] Markus Heidingsfelder, Rem Koolhaas – A Kind of Architect. DVD 2007.
  • [17] Rem Koolhaas, Bigness – or the problem of Large. in: Rem Koolhaas, Bruce Mau & OMA, S,M,L,XL. p.495-516. available here (mirrored)
  • [18] Wiki entry (english edition) about Rem Koolhaas, http://en.wikipedia.org/wiki/Rem_Koolhaas, last accessed Dec 4th, 2012.
  • [19] Rem Koolhaas (2010?). “On OMA’s Work”. Lecture as part of “The Areen Architecture Series” at the Department of Architecture and Design, American University of Beirut. available online. (the date of the lecture is not clearly identifiable on the Areen AUB website).
  • [20] Hans Ulrich Obrist, Interview with Rem Koolhaas at the Biennale 2010, Venice. Produced by the Institute of the 21st Century with support from ForYourArt, The Kayne Foundation. available online on youtube, last accessed Nov 27th, 2012.
  • [21] Heinrich Klotz, The History of Postmodern Architecture, 1986.
  • [22] Wolfgang Welsch, Unsere postmoderne Moderne. 6. Auflage, Oldenbourg Akademie Verlag, Berlin 2002 [1986].
  • [23] Vera Bühlmann, inhabiting media. Thesis, University of Basel 2009. (in German, available online)
  • [24] Report in dezeen (2008). available online.
  • [25] Jacques Herzog, Rem Koolhaas, Urs Steiner (2000). Unsere Herzen sind von Nadeln durchbohrt. Ein Gespräch zwischen den Architekten Rem Koolhaas und Jacques Herzog über ihre Zusammenarbeit. Aufgezeichnet von Urs Steiner. in: Marco Meier (Ed.), Tate Modern von Herzog & de Meuron. Du. Die Zeitschrift der Kultur, No. 706, Zurich, TA-Media AG, 05.2000, pp. 62-63. available online.

۞

Modernism, revisited (and chunked)

July 19, 2012 § Leave a comment

There can be no doubt that nowadays “modernism”, due to a series of intensive waves of adoption and criticism, returning as echoes from unexpected grounds, is used as a label, as a symbol. It allows one to induce, to claim or to disapprove conformity in previously unprecedented ways; it helps to create subjects, targets and borders. Nevertheless, it is still an unusual symbol, as it points to a complex history, in other words to a putative “bag” of culture(s). As a symbol, or label, “modernity” does not point to a distinct object, process or action. It invokes a concept that emerged through history and is still doing so. Even as a concept, it is a chimaera. Still unfolding from practice, it has not yet moved completely into the realm of the transcendental, to join other concepts in the fields most distant from any objecthood.

This Essay

Here, we continue the investigation of the issues raised by Koolhaas’ “Junkspace”. Our suggestion upon the first encounter was that Koolhaas himself struggles with his attitude to modernism, although he openly blames it for creating Junkspace. (Software as it is currently practiced is definitely part of it.) His writing bearing the same title thus gives just a proper list of effects and historical coincidences, nothing less, but also nothing more. Particularly, he provides no suggestions about how to find or construct a different entry point into the problematic field of “building urban environments”.

In this essay we will try to outline what a possible, and constructive, archaeology of modernism could look like, with a particular application to urbanism and/or architecture. The decisions about where to dig and what to build have been, of course, subjective. Our equipment is, as almost always in archaeology, rather small, suitable for details, not for surface mining or the like. That is, our attempts are not directed towards any kind of completeness.

We will start by applying a structural perspective, which will yield the basic set of presuppositions that characterizes modernism. This will be followed by a discussion of four significant aspects, for which we will hopefully be able to demonstrate the way of modernist thinking. These four areas concern patterns and coherence, meaning, empiricism and machines. The third major section will deal with some aspects of contemporary “urbanism” and how Koolhaas relates to that, particularly with respect to his “Junkspace”. Note, however, that we will not perform a literary study of Koolhaas’ piece, as most of his subjects there can be easily deciphered on the basis of the arguments as we will show them in the first two sections.

The final section then comprises a (very) brief note about a possible future of urbanism, which actually, perhaps, has already been lifting off. We will provide just some very brief suggestions in order not to appear (too) presumptuous.


1. A Structural Perspective

According to its heterogeneity, the usage of the symbol “modernity” is fuzzy as well. While the journal Modernism/modernity, published by Johns Hopkins University Press, concentrates „on the period extending roughly from 1860 to the mid-twentieth century,“ galleries for “Modern Art” around the world consider the historical period since the post-Renaissance (conceived as the period between 1400 and roughly 1900) up to today, usually not distinguishing modernism from post-modernism.

In order to understand modernism we have to take the risk of proposing a structure behind the merely symbolical. Additionally, and accordingly, we should resist the abundant attempt to define a particular origin for it. Foucault called those historians who were addicted to the calendar and the idea of the origin, the originator, or, more abstractly, the “cause”, “historians in short trousers” (meaning a particular intellectual infantilism, probably a certain inability to think abstractly enough) [1]. History does not realize a final goal either, and it is similarly bare nonsense to claim that history came to an end. As in any other evolutionary process, historical novelty builds on the leftovers of preceding times.

After all, the usage of symbols and labels is a language game. It is precisely a modernist misunderstanding to dissect history into phases. Historical phases are not out there, and never have been. It is by far more appropriate to conceive of history as waves, yet not of objects or ideas, but of probabilities. So the question is: what happened in the 19th century such that it became possible to objectify a particular wave? Is it possible to give any reasonable answer here?

Following Foucault, we may try to reconstruct the sediments that fell out from these waves like the ripples of sand in the shallow water on the beach. Foucault’s main invention, put forward in his “Archaeology” [1], is the concept of the “field of proposals”. This field is not 2-dimensional; it is high-dimensional, yet not of a stable dimensionality. In many respects, we could conceive it as a historian’s extension of the Form of Life, as Wittgenstein used to call it. Later, Foucault would include the structure of power, its exertion and objectifications, the governmentality, into this concept.

Starting with the question of power, we can see an assemblage that is typical for the 19th century and the latest phase of the 18th. The invention of popular rights, even the invention of the population as a conscious and practiced idea, itself an outcome of the French Revolution, is certainly key for any development since then. We may even say that its shockwaves, and the only slightly less shocking echoes of these waves, haunted us until the end of the 20th century. Underneath the French Revolution we find the claim of independence that traces back to the Renaissance, formed into philosophical arguments by Leibniz and Descartes. First, however, it brought the Bourgeois, a strange configuration of tradition and the claim of independence, bringing forth the idea of societal control as a transfer from the then emerging intensification of the idea of the machine. Still exhibiting class-consciousness, it was at the roots of the modernists’ rejection of tradition. Yet, even the Bourgeois builds on the French Revolution (of course) and the assignment of a strictly positive value to the concept of densification.

Without the political idea of the population, the positive value of densification, and the counter-intuitive and prevailing co-existence of the ideas of independence and control, neither the direction nor the success of the sciences and their utilization in the field of engineering could have emerged as they actually did. Consequently, right at the end of the hot phase of the French Revolution, it was argued by Fourcroy in 1794 that it would be necessary to found an „École Polytechnique“1. Densification, liberalism and engineering brought another novelty of this amazing century: the first spread of mass media, newspapers in that case, which were theorized only approx. 100 years later.

The rejection of tradition as part of the answer to the question “What’s next?” is perhaps one of the strongest feelings for the modernist of the 19th century. It even led to considerable divergence of attitudes across domains within modernism. For instance, while the arts rejected realism as a style building on “true representation,” technoscience embraced it. Yet, despite the rejection of immediate visual representations in the arts, the strong emphasis on objecthood and apriori objectivity remained fully in charge. Think of Kandinsky’s “Punkt und Linie zu Fläche“ (1926), the strong emphasis on pure color (Malevich), even the idea of purity itself, then somewhat paradoxically called abstractness, or the ideas of the Bauhaus movement about the possibility and necessity of objectifying rules of design based on dot, line, area, form, color, contrast etc. The proponents of Bauhaus, even their contemporary successors in Weimar (and elsewhere), never understood that the claim for objectivity, particularly in design, is impossible to satisfy; it is a categorical fault. Just to avoid a misunderstanding that itself would be a fault of the same category: I personally find Kandinsky’s work mostly quite appealing, as well as some of the work by the Bauhaus guys, yet for completely different reasons than he (they) might have been dreaming of.

Large parts of the arts rejected linearity, while technoscience took it as its core. Yet, such divergences are clearly in the minority. In all domains, the rejection of tradition was based on an esteem for the idea of independence and resulted predominantly in the emphasis on finding new technical methods to produce unseen results. While the emphasis on method definitely enhances the practice of engineering, it is not innocent either. Deleuze sharply rejects the saliency of methods [10]:

Method is the means of that knowledge which regulates the collaboration of all the faculties. It is therefore the manifestation of a common sense or the realisation of a Cogitatio natura, […] (p.165)

Here, Deleuze does not condemn methods as such. Undeniably, it is helpful to explicate them, to erect a methodology, to symbolize them. Yet, culture should not be subordinated to methods, not even sub-cultures.

The leading technoscience of those days was physics, closely followed by chemistry, if it is at all reasonable to separate the two. It brought the combustion engine (from Carnot to Daimler), electricity (from Faraday to Edison, Westinghouse and Tesla), the control of temperature (Kelvin, Boltzmann), the elevator, and consequently the first high-rise buildings, along with a food industry. In the second half of the 19th century it was fashionable for newspapers to maintain a section showing the greatest advances and successes of technoscience of the previous week.

In my opinion it is eminently important to understand the linkage between the abstract ideas, growing from a social practice as their soil-like precursory condition, and the success of a particular kind of science. Independence, control, population on the one side; the molecule and its systematics, the steam and the combustion engine, electricity and the fridge on the other side. It was not just energy (in the form of wood and coal) that could be distributed; electricity meant an open potential for any kind of potential [2]. Together they established a new Form of Life which nowadays could be called “modern,” despite the fact that its borders blur, if we can assume their existence at all. Together, combined into a cultural “brown bag,” these ingredients led to an acceleration, not least due to the mere physical densification, an increase of the mere size of the population, produced (literally so) by advances in the physical and biomedical sciences.

At this point we should remind ourselves that factual success legitimizes neither the expectation of sustainable success nor reasoning about any kind of universal legitimacy of the whole setup. The first figure would represent simple naivety, the second the naturalistic fallacy, which seduces us to conclude from the actual (“what is”) to the deontic and the normative (“what should be”).

As a practice, the modern condition is itself dependent on a set of beliefs. These can neither be questioned nor discussed at all from within the “modern attitude,” of course. Precisely this circumstance makes it so difficult to talk with modernists about their beliefs. They are not only structurally invisible; something like a belief is almost categorically excluded qua their set of conditioning beliefs. Once accepted, these conditions can’t be accessed anymore; they are transcendental to any further argument put forward within the area claimed by these conditions. For philosophers, this figure of thought, the transcendental condition, takes the role of a basic technique. Other people like urbanists and architects might well be much less familiar with it, which could explain their struggling with theory.

What are these beliefs to which a proper modernist adheres? My list would look like the one given below. The list itself is, of course, neither a valuation nor an evaluation.

  • independence, ultimately taken as a metaphysical principle;
  • belief in the primacy of identity against the difference, leading to the primacy of objects against the relation;
  • linearity, additivity and reduction as the method of choice;
  • analyticity and “lawfulness” for descriptions of the external world;
  • belief in positively definable universals, hence the rejection of belief as a sustaining mental figure;
  • the belief in the possibility of a finally undeniable justification;
  • belief that the structure of the world follows a bi-valent logic2, represented by the principle of objective causality, hence also a “logification” and “physicalization” of the concept of information as well as meaning; consequently, meaning is conceived as being attached to objects;
  • the claim of a primacy of ontology and existential claims (as highlighted by the question “What is …?”) over instances of pragmatics that respect Forms of Life (characterized by the question “How to use …?”);
  • logical “flatness” and the denial of creativity of material arrangements; representationalism;
  • belief in the universal arbitrariness of evolution;
  • belief in the divine creator or some replacement, like the independent existence of ideas (here the circle closes).

It now becomes even clearer that it is not quite reasonable to assign a birth date to modernism. Some of those ideas and beliefs have been around for centuries before their assembly into the 19th-century habit. Thus, modernism is nothing more, yet also nothing less, than a name for the evolutionary history of a particular arrangement of attitudes, beliefs and arguments.

From this perspective it also becomes clear why it is somewhat difficult to separate so-called post-modernism from modernism. Post-modernism takes a yet undecided position towards the issue of abstract metaphysical independence. Independence and the awareness of relations have not yet amalgamated; both are still, well, independent in post-modernism. It makes a huge, if not to say cosmogonic, difference to set the relation as the primary metaphysical element. Of course, Foucault was completely right in rejecting the label of post-modernist. Foucault dropped the central element of modernism, independence, completely, and very early in his career as an author, thinking about the human world as horizontal (actual) and vertical (differential) embeddings. The same is obviously true for Deleuze, or Serres. Less so for Lyotard and Latour, and definitely not for Derrida, who practices a schizo-modernism, undulating between independence and relation. To paraphrase Latour, Deleuze and Foucault have never been modern, and it would be a serious misunderstanding to attach the label of post-modernism to their oeuvre.

As a historical fact we may summarize modernism by two main achievements: first, the professionalization of engineering and its rhizomatically pervasive implementation, and second, the mediatization of society, first through the utilization of mass media, then by means of the world wide web. Another issue is that many people confess to follow it as if they were following a program, turning it into a movement. And it is here where the difficulties start.

2. Problems with Modernism

We are now going to deal with some of the problems that are necessarily associated with the belief set that is so typical for modernism. In some way or another, any basic belief is burdened by its own specific difficulties. There is no universal or absolute way out of that. Yet, modernism is not just an attitude; by now it has also turned into a large-scale societal experiment. Hence, there are not only some empirical facts; we also meet impacts on the lives of human beings (before any consideration of moral aspects). Actually, Koolhaas provided precisely a description of them in his “Junkspace” [3]. Perhaps modernism is also more prone to the strong polarity of positive and negative outcomes, as its underlying set of beliefs is also particularly strong. But this is, of course, only a quite weak suggestion.

In this section we will investigate four significant aspects. Together they hopefully provide a kind of fingerprint of “typical” modernist thinking, and of its failure. These four areas concern patterns and coherence, empiricism, meaning and machines.

Before we start with that I would like to visit briefly the issue raised by the role of objects in modernism. The metaphysics of objects in modernism is closely related to the metaphysical belief in independence as a general principle. If you start to think “independence” you necessarily end up with separated objects. “Things” as negotiated entities barely exist in modernism, and if so, then only as a kind of error-prone, social and preliminary approximation to the physical setup. It is otherwise not possible to balance objects and relations as concepts; one of them must take the primary role.

Setting objects as primary against the relation has a range of problematic consequences. In my opinion, these consequences are inevitable. It is important to see that neither the underlying beliefs nor their consequences can be separated from each other. For a modernist, it is impossible to drop one of these and to keep the others without stepping into the trap of internal inconsistency!

The idea of independence, whether in its implicit or its explicit version, can be traced back at least to scholastics, probably even to classical antiquity, where it appeared as Platonic idealism (albeit this would be an oversimplification). To its full extent it unfolded through the first golden age of the dogma of the machine in the early 17th century, e.g. in the work of Harvey or the philosophy of Descartes. Leibniz recognized its difficulties. For him, perception is an activity. If objects were conceived as purely passive, they would not be able to perceive, and thus not to build any relation at all. Thus, the world can’t be made of objects, since there is a world external to the human mind. He remained caught in theism, however, which brought him to the concept of monads as well as to the concept of infinitesimal numbers. The concept of the monads should not be underestimated, though. Ultimately, the monads serve as immaterial elements that bear the ability to perceive and transfer it to actual bodies, whether equipped with a mind or not.

The following centuries brought just a tremendous technical refinement of Cartesian philosophy, although there have been phases in which people resisted its ideas, as for instance many people did in the Baroque.

Setting objects as primary against the relation is at the core of phenomenology as well, and also, though in a more abstract version, of idealism. Husserl came up with the idea of the “phenomenon” that impresses us, notably directly, or intuitively, without any interpretation. Similarly, the Kantian “Erhabenheit” (sublimity), then tapered by Romanticism, is out there as an independent instance, before any reason or perception may start to work.

So, what is the significance of setting objects as primary constituents of the world? Where do we have to expect which effects?

2.1. Dust, Coherence, Patterns

When interpreted as a natural principle, or as a principle of nature, the idea of independence provokes and supports the physical sciences. Independence matches perfectly with physics, yet it is also an almost perfect mismatch for the biological sciences, as far as they are not reducible to physics. The same is true for the social sciences. Far from being able to recognize their own conditionability, most sociologists just practice methods taken more or less directly from physics. Just recall their strange addiction to statistics, which is nothing else than a methodology of independence. Instead of asking for the abstract and factual genealogy of the difference between independence and coherence, between the molecule and harmony, they dropped any primacy of the relation, even its mere possibility.

The effects in architecture are well-known. On the one hand, modernism led to an industrialization, which is reaching its final heights in the parametricism of Schumacher and Hadid, among others. Yet, by no means is there any necessity that industrialization leads to parametricism! On the other hand, if in the realm of concepts there is no such thing as a primacy of relation, only dust, then there is also no form, only function, or at least a maximized reduction of any form, as it was first presented by Mies van der Rohe. The modularity in this ideology of the absence of form is not that of living organisms; it is that of crystals. It is not only the Seagram Building that looks exactly like the structural model of sodium chloride. Of course, it represents a certain radicality. Note that it doesn’t matter whether the elementary cells of the crystal follow straight lines, or whether there is some curvature in their arrangement. Strangely enough, for a modernist there is never a particular intention in producing such stuff. Intentions are not needed at all, if the objects bear the meaning. The modernist’s expectation is that everything the human mind can accomplish under such conditions is just uncovering the truth. Crystals just happen to be there, whether in modernist architecture or in the physico-chemistry of minerals.

Strictly speaking, it is deeply non-modern, perhaps ex-modern, to investigate the question why even modernists feel structures or processes like the following to be mysteriously (not: mystically!) beautiful, or at least interesting. Well, I do not know, of course, whether they indeed felt like that, or whether they just pretended to do so. At least they said so… Here are the artefacts3:

Figure 1: a (left): Michael Hansmeyer column [4]; b (right): Turing-McCabe pattern (for details see this);


These structures are neither natural nor geometrical. Their common structural trait is the local instantiation of a mechanism, that is, a strong dependence on the temporal and spatial local context: subdivision in case (a), and a probabilistically instantiated set of “chemical” reactions in case (b). For the modernist mindset they are simply annoying. They are there, but there is no analytical tool available to describe them as “objects” or to describe their genesis. Both examples do not show “objects” with perceivable properties that would be well-defined for the whole entity. Rather, they represent a particular temporal cut in the history of a process. Without considering their history, which includes the contingent unfolding of their deep structure, they remain completely incomprehensible, despite the fact that on the microscopic level they are well-defined, even deterministic.
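For case (b), the flavor of such a mechanism can be sketched with a standard Gray-Scott reaction-diffusion system; this is a stock textbook variant, not McCabe’s actual algorithm, and the parameter values are conventional ones from published demos, known to produce spotted patterns.

```python
# A minimal Gray-Scott reaction-diffusion sketch: a strictly local rule
# whose repeated application yields a global pattern that no single
# line of the rule describes.
import numpy as np

n = 128
U = np.ones((n, n))
V = np.zeros((n, n))
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50   # local seed perturbation
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

Du, Dv, f, k = 0.16, 0.08, 0.035, 0.065  # conventional demo parameters

def lap(Z):
    """Discrete Laplacian with periodic boundary: couples each cell
    only to its four neighbors, i.e. the coupling is purely local."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
          + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(5000):
    uvv = U * V * V                       # the local "chemical" reaction
    U += Du * lap(U) - uvv + f * (1 - U)
    V += Dv * lap(V) + uvv - (f + k) * V

print(V.mean(), V.max())                  # summary of the emerged pattern
```

Every operation here is local, yet after a few thousand steps a global pattern appears; stopping the loop earlier or later yields a different “temporal cut” of the same process, which is exactly why the result cannot be grasped as a finished object.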

From the perspective of primary objects they are separated from comprehensibility by the chasm of idealism, or should we say hyper-idealistic conditioning? Yet, for both there exists a set of precise mathematical rules. The difference to machines is just that these rules describe mechanisms, but nothing like the shape, nothing on the level of the entirety. The effect of these mechanisms on the level of the collective, however, can’t be described by the rules for the mechanism. It can’t be described at all by any kind of analytical approach, as is possible for instance in many areas of physics and, consequently, in engineering, which so far is by definition always engaged in building and maintaining fully determinate machines. This notion of the mechanism, including the fact that only the concept of mechanism allows for a thinking that is capable of comprehending emergence and complexity, and, philosophically, potential, is maybe one of the strongest differences between modernist thinking and “organicist” thinking (which has absolutely nothing to do with bubble architecture), as we may call it preliminarily.

Here it is probably appropriate to cite the largely undervalued work of Charles Jencks, who was one of the first in the domain of architecture/urbanism to propose the turn to complexity. Yet, since he did not have a well-explicated formulation (based on an appropriate elementarization) at his disposal, we have neither been able to bring his theory “down to earth” nor to connect it to more abstract concepts. People like Jencks, Venturi, “parts of” Koolhaas (and me:), or Deleuze or Foucault in philosophy, never have been modernist. Except for the historical fact that they live(d) in a period that followed the blossoming of modernism, there is no other justification to call them or their thinking “post-modern”. It is not the use of clear arguments that they reject; it is the underlying set of beliefs.

In modernism, that is, in the practice of the belief set shown above, collective effects are excluded a priori, metaphysically as well as methodologically, as we will see. Statistics is by definition not able to detect “patterns”. It is an analytic technique, of which people believe that its application excludes any construction. This is of course a misbelief; the constructive steps are just shifted into the side-conditions of the formulas, resulting in a deep methodological subjectivity concerning the choice of a particular technique, or formula respectively.
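A small constructed example of this point (my own, not from the essay): two point sets with practically identical means and variances, one scattered, one locally coherent. The summary statistics are blind to the difference; only a relational measure, here the mean nearest-neighbour distance, registers the “pattern”.

```python
# Summary statistics vs. relational statistics: an illustrative toy.
import random, math

def nn_mean(pts):
    """Mean distance from each point to its nearest neighbour,
    a deliberately relational (pairwise) measure."""
    total = 0.0
    for i, (x, y) in enumerate(pts):
        d = min(math.hypot(x - a, y - b)
                for j, (a, b) in enumerate(pts) if j != i)
        total += d
    return total / len(pts)

random.seed(2)
scattered = [(random.random(), random.random()) for _ in range(200)]

# Clustered set: same marginal spread, but locally coherent structure.
centers = [(random.random(), random.random()) for _ in range(10)]
clustered = [(cx + random.gauss(0, 0.02), cy + random.gauss(0, 0.02))
             for cx, cy in centers for _ in range(20)]

for name, pts in [("scattered", scattered), ("clustered", clustered)]:
    mean_x = sum(p[0] for p in pts) / len(pts)
    print(name, round(mean_x, 2), round(nn_mean(pts), 4))
```

The choice of the nearest-neighbour statistic is, of course, itself a construction, which is precisely the point about the side-conditions of the formulas.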

This affects the perspective onto society as well as individual perception and thought. To put it slightly metaphorically, everything is believed to be (conceptual) dust, and to remain dust. The belief in independence, fired perhaps by a latent skepticism since Descartes, has invaded the methods and the practices. At most, so the belief goes, one could find different kinds of dust, or different sizes of the hives of dust, governed by a time-inert, universal law. In turn, wherever laws are imposed on “nature”, the subject matter turns into conceptual dust.

Something like a Language Game, even in combination with transcendental conditionability, must be almost incomprehensible for a modernist. I think they do not even see its possibility. While analytic philosophy is largely the philosophy that developed within modernism (one might say that it is thus not philosophy at all), the philosophical stances of Wittgenstein, Heidegger or Deleuze are outside of it. The instances of misunderstanding Wittgenstein as a positivist are countless! Closely related to the neglect of collective effects is the dismissal of the inherent value of the comparative approach. Again, that’s not an accusation. It’s just the description of an effect that emerges as soon as the above belief set turns into a practice.

The problem with modernism is indeed tricky. On the one hand it made engineering blossom. Engineering, as it has been conceived since then, is a strictly modernist endeavor. With regard to the physical aspects of the world it works quite well, of course. In any other area it is doomed to fail, for the very same reasons, unfortunately. Engineering of informational aspects is thus impossible, as is the engineering of architecture or the engineering of machine-based episteme, not to mention the attempt to enable machines to deal with language. Or to deal with the challenges emerging in urban culture. Just to avoid misunderstandings: engineering is helpful for finding technical realizations of putative solutions, but it never can deliver any kind of solution itself, except for the effect that people assimilate and re-shape the products of urban engineering through their usage, turning them into something different than intended.

2.2. Meaning

The most problematic effects of the idea of “primary objects” are probably the following:

  • the rejection of the creational power of unconscious or even purely material entities;
  • the idea that meaning can be attached to objects;
  • the idea that objects can be represented and must be represented by ideas.

These strong consequences do not concern just epistemological issues. In modernism, “objectivity” has nothing to do with the realm of the social. It can be justified universally and on purely formal grounds. We already mentioned that this may work in large parts of physics—it is challenged in quantum physics—but certainly not in most biological or social domains.

In his investigation of thought, Deleuze identifies representationalism ([9], p.167) as one of the eight major presuppositions of large parts of philosophy, especially of idealism in the line from Plato, Hegel, and Frege up to Carnap.

(1) the postulate of the principle, or the Cogitatio natura universalis (good will of the thinker and good nature of thought); (2) the postulate of the ideal, or common sense (common sense as the concordia facultatum and good sense as the distribution which guarantees this concord); (3) the postulate of the model, or of recognition (recognition inviting all the faculties to exercise themselves upon an object supposedly the same, and the consequent possibility of error in the distribution when one faculty confuses one of its objects with a different object of another faculty); (4) the postulate of the element, or of representation (when difference is subordinated to the complementary dimensions of the Same and the Similar, the Analogous and the Opposed); (5) the postulate of the negative, or of error (in which error expresses everything which can go wrong in thought, but only as the product of external mechanisms); (6) the postulate of logical function, or the proposition (designation is taken to be the locus of truth, sense being no more than the neutralised double or the infinite doubling of the proposition); (7) the postulate of modality, or solutions (problems being materially traced from propositions or, indeed, formally defined by the possibility of their being solved); (8) the postulate of the end, or result, the postulate of knowledge (the subordination of learning to knowledge, and of culture to method). Together they form the dogmatic image of thought.

Deleuze by no means attacks the utility of these elements in principle. His point is just that these elements work together and should not be taken as primary principles. The effects of these presuppositions are disastrous.

They crush thought under an image which is that of the Same and the Similar in representation, but profoundly betrays what it means to think and alienates the two powers of difference and repetition, of philosophical commencement and recommencement. The thought which is born in thought, the act of thinking which is neither given by innateness nor presupposed by reminiscence but engendered in its genitality, is a thought without image.

As an engineer, you have probably noticed issue (5). Elsewhere in our essay we already dealt with the fundamental misconception of starting from an expected norm instead of from an open scale without imposed values. Only the latter attitude will allow for inherent adaptivity. Adaptive systems will never fail, because failure is conceptually impossible for them. Instead, they will cease to exist.

The rejection of the negative, which includes the rejection of the opposite as well as of dialectics, the norm, or the exception, is particularly important if we think about foundations of whatever kind (think of Hegel, Marx, attac, etc.) or about political implications. We already discussed the case of Agamben.

Deleuze finally arrives at this “new imageless image of thought” by understanding difference as a transcendental category. The great advantage of this move is that it does not imply a necessity of symbols and operators as primary, as would be the case if we took identity as primary. The primary identical is either empty (a=a), that is, without any significance for the relation between entities, or it needs symbolification and at least one operator. In practice, however, a whole battery of models, classifications and the assumptions underlying them is required to support the claim of identity. As these assumptions are not justifiable within the claim of identity itself, they must be set, which results in the attempt to define the world. Obviously, attempting to do so would be quite problematic. It is even self-contradictory if contrasted with the modernists’ claim of objectivity. Setting difference as primary, Deleuze not only avoids the trap of identity and pre-established harmony in the hive of objects, but also subordinates the object to the relation. Here he meets with Wittgenstein and Heidegger.

Together, the presupposition of identity and objecthood is necessarily and bidirectionally accompanied by another quite abundant misunderstanding, according to which logic should be directly applicable to the world. World here is of course “everything” except logic, that is, (claimed) objects, their relations, measurement, ideas, concepts and so on. Analytic philosophy, positivism, external realism and the larger movement of modernism all apply the concept of bi-valent logic to empirical entities. It is not really a surprise that this leads to serious problems and paradoxa, which however are pseudo-paradoxa. For instance, universal justification requires knowledge. Without logical truth in knowledge, universal justification can’t be achieved. The attempt to define knowledge as consisting of positive content failed, though. Next, the formula of “knowledge as justified true belief” was proposed. In order not to fall prey to the Gettier problem, belief itself would have to be objectified. Precisely this happened in analytic philosophy, when Alchourrón et al. (1985) published their dramatically (and overly) reduced operationalization of “belief”. Logic is a condition, it is transcendental to its usage. Hence, it is inevitable to instantiate it. By means of instantiation, however, semantics invades equally inevitably.
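
As an aside, the flavor of that reduction can be sketched in a few lines. The following toy is my own illustration, not the actual AGM calculus (which operates on logically closed sets of sentences); beliefs are shrunk to a set of propositional literals, and revision follows the Levi identity. Precisely because everything semantic has been stripped away, the sketch also shows where semantics would have to re-enter upon any real instantiation.

```python
# Toy sketch of AGM-style belief revision (Alchourrón, Gärdenfors &
# Makinson 1985), drastically reduced: beliefs are propositional
# literals, and revision follows the Levi identity,
#   revise(K, p) = expand(contract(K, not p), p).
def negate(p: str) -> str:
    return p[1:] if p.startswith("~") else "~" + p

def expand(beliefs: set, p: str) -> set:
    return beliefs | {p}

def contract(beliefs: set, p: str) -> set:
    return beliefs - {p}

def revise(beliefs: set, p: str) -> set:
    # Levi identity: first retract the negation, then add the new belief.
    return expand(contract(beliefs, negate(p)), p)

K = {"rain", "~wind"}
print(sorted(revise(K, "wind")))   # ['rain', 'wind'] -- '~wind' retracted
```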

Ultimately, due to the presupposed primacy of identity, modernists are faced with a particular difficulty in dealing with relations. Objects and their role should not be dependent on their interpretation. As a necessary consequence, meaning—and information—must be attached to objects as quasi-physical properties. There is but one single consequence: tyranny. Again, it is not surprising that at the heights of modernism bureaucratic tyrannies were established several times.

Some modernists would probably allow for interpretation, yet only as a means, not as a condition, not as a primacy. Concerning their implications, the difference between these stances is a huge one. If you take interpretation simply as a means, keeping the belief in the primacy of objects, you still would adhere to the idea of “absolute truth” within the physical world. Ultimately, interpretation would be degraded into an error-prone “method”, which ideally should have no influence on the recognition of truth, of course. The world, at least the world that goes beyond mere physical aspects, appears as a completely different one if relations, and thus interpretation, are set as primary. Obviously, this also implies a categorical difference regarding the way one approaches that world, e.g. in science, or the way one conceives of the possible role of design. It is nothing but a myth that a designer, architect, or urbanist designs objects. The practitioners in these professions design potentials, namely those for the construction of meaning by the future users and inhabitants (cf. [5]). There is nothing a designer can do to prevent a particular interpretation or usage. Koolhaas concludes that regarding Junkspace this may lead to a trap, or a kind of betrayal [3]:

Narrative reflexes that have enabled us from the beginning of time to connect dots, fill in blanks, are now turned against us: we cannot stop noticing—no sequence is too absurd, trivial, meaningless, insulting… Through our ancient evolutionary equipment, our irrepressible attention span, we helplessly register, provide insight, squeeze meaning, read intention; we cannot stop making sense out of the utterly senseless… (p.188)

I think that on the one hand Koolhaas here accepts the role of interpretation, yet, somewhat contradictorily, he is not able to recognize that it is precisely the primacy of interpretation that enables a transformation through assimilation, hence the way out of Junkspace. Here he remains a modernist to the full extent.

The deep reason is that for the object-based attitude there is no possibility at all to recognize non-representational coherence. (Thus, a certain type of illiteracy regarding complex texts prevails among “true” modernists…)

2.3. Shades of Empiricism

Science, as we understand it today—yet at least partially also as we practice it—is based on the so-called hypothetico-deductive approach of empiricism (cf. [6]). Science is still taken as a synonym for physics by many, even in the philosophy of science, with only very few exceptions. There, the practice and the theory of the life sciences are not only severely underrepresented; quite frequently biology is still reduced to physics. Physicists, and their philosophical co-workers, often claim that the whole world can be reduced to a description in terms of quantum mechanics (among many others cf. [7]). A closely related reduction, only slightly less problematic, is given by the materialist’s claim that mental phenomena should be explained completely in biological terms, that is, using only biological concepts.

The belief in empiricism is implemented in the methodological framework that is called “statistics”. The vast majority of statistical tests rest on the assumption that observations and variables are independent of each other. Some tests are devised to test for independence, or dependence, but this alone does not help much. Usually, if dependency is detected, the subsequent tests are rearranged so as to fit the independence assumption again. In other words, any possibly actual coherence is first assumed to be nonexistent. By means of the method itself, the coherence is indeed destroyed. Yet, once it is destroyed, you will never get it back. It is quite simple: The criteria for any such reconstruction are just missing.
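
A minimal numerical illustration of this one-way destruction, using assumed toy data: shuffling one variable enforces the independence assumption, and no criterion within the shuffled data allows the original coherence to be reconstructed.

```python
# Toy data: y is strongly coherent with x.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 0.8 * x + 0.2 * rng.normal(size=1000)

print(np.corrcoef(x, y)[0, 1])        # ~0.97: coherence present

y_indep = rng.permutation(y)          # enforce "independence" by shuffling
print(np.corrcoef(x, y_indep)[0, 1])  # ~0.0: coherence destroyed

# y and y_indep have identical marginal distributions, so no test on the
# shuffled data can recover the original pairing -- the criteria are missing.
```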

From this perspective, statistics is not scientific according to science’s own measures; due to its declared non-critical and non-experimental stance it actually looks more like ideology. For a scientific method would perform an experiment to test whether something could be assumed or not. As Nobel laureate Konrad Lorenz said: I never needed statistics to do my work. What would be needed instead is a method that is structurally independent of any independence assumption regarding the observed data. Such a method would propose patterns if there are sufficiently dense hints, and none otherwise, without proposing one or the other apriori. From that perspective, it is more the representationalism in modernism that brings the problem.

This framework of statistics is far from homogeneous, though. Several “interpretations” are fiercely discussed: frequentism, bayesianism, uncertainty, or propensity. Yet, each of them faces serious internal inconsistencies, as Alan Hájek convincingly demonstrated [8]. To make a long story short (the long version you can find over here), it is not possible to build a model without symbols, without concepts that require interpretation and further models, and outside a social practice, or without an embedding into such. Modernists usually reject such basics and eagerly claim even universal objectivity for their data (hives of dust). More than 50 years ago, Quine showed that believing otherwise should be taken as nothing else than a dogma [9]. This dogma can be conceived as a consequence of the belief that objects are the primary constituents of the world.

Of course, the social embedding is especially important in the case of social affairs such as urbanism. The claim that any measurement of data, then treated by statistical modeling (wrongly called “analysis”), could convey any insight per se is nothing but pretentious.

Dealing with data always results in some kind of construction, based on some methods. Methods, however, respond differentially to data; they filter. In other words, even applying “analytical” methods involves interpretation, often even a strong one. Unfortunately for the modernist, he has excluded the possibility of the primacy of interpretation altogether, because there are only objects out there. This hurdle is quickly removed, of course, by the belief that meaning resides outside of interpretation. As a result, they believe that there is a necessary progress towards the truth. For modernists: Here you may jump back to subsection 3.2. …

2.4. Machines

For Le Corbusier a house is much like a “machine for living in”. According to him, a building has clear functions that could be ascribed apriori, governed by universal relations, or even laws. Recently, people engaged in the building economy recognized that it may turn problematic to assign a function apriori, as it simply limits the sales arguments. As a result, any function tends to be stripped away from the building as well as from the architecture itself. The “solution” is a more general one. Yet, in contrast to an algebraic equation that will be instantiated before it is used, the building actually exists after building it. It is there. And up to today, not in a reconfigurable form.

Actually, the problem is not created by the tendency towards more general, or even pre-specific solutions. It turns critical when this generality amalgamates with the modernist attitude. The category of machines, which is synonymous with ascribing or assigning a function (understood as usage) apriori, doesn’t accept any reference to luxury. A machine that would contain properties or elements that don’t bear any function, at least temporarily, other than pleasure (which does not exist in a world that consists only of objects) would be badly built. Minimalism is not just a duty, it even belongs to the grammar of modernism. Minimalism is the actualization and representation of mathematical rigidity, which is also a necessity, as it is the only way to use signs without interpretation. At least, that is the belief of modernists.

The problem with minimalism is that it effectively excludes evolution. Either the product fits perfectly or not at all. Perfectness of the match can be expected only if the user behaves exactly as expected, which represents nothing else than dogmatism, if not worse. Minimalism in form excludes alternative interpretations and usages, deliberately so; it even has to exclude the possibility of the alternative. How else to get rid of alternatives? Koolhaas rightly got it: by nothingness (minimalism), or by chaos.

3. Urbanism, and Koolhaas.

First, we have of course to make clear that we will be able to provide only a glimpse of the field invoked by this header. Also, our attempts here should not be understood as a proposal to separate architecture from urbanism. Regarding both theory and implementation, they overlap more and more. When Koolhaas explains the special situation of the Casa da Música in Porto, he refers to processes like the continuation of certain properties and impressions from the surroundings into the inside of the building. Inversely, any building, even any persistent object in a city, shifts the qualities of its urban surround.

Rem Koolhaas—once a journalist, then an architect, and now, for more than a decade, additionally someone doing comparative studies on cities—has performatively demonstrated, by means of his writings such as “S,M,L,XL”, “Generic City” or “Junkspace”, that a serious engagement with the city can’t be practiced as a disciplinary endeavor. Human culture has moved irrevocably into a phase where culture largely means urban culture. Urbanists may be seen as a vanishing species that became impossible due to the generality of the field. “Culturalist” is neither a proper domain nor a suitable label. Or perhaps they moult into organizers of research in urban contexts, similarly to how architects are largely organizers for the creation of buildings. Yet, there is an important difference: Architects may still believe that they externalize something. Such a belief is impossible for urbanists, because they are part of the culture. It is thus questionable whether a project like the “Future Cities Laboratory” should indeed be called such. It is perhaps only possible to do so in Singapore, but that’s the subject of one of the next essays.

Rem Koolhaas wrote “Delirious New York” before turning to architecture and urbanism as a practitioner. There, he praised the diversity and manifoldness that, in or by means of his dreams, added up to the deliriousness of Manhattan, and probably also of his own.

Without any doubt, the particular quality of Manhattan is its empowering density, which does not actualize as the identical, but rather as heterotopia, as divergence. In some way, Manhattan may be conceived as the urban precursor of the internet [11], built first in steel, glass and concrete. Vera Bühlmann writes:

Manhattan space is, if not yet everywhere, so at least in the internet potentially everywhere, and additionally not limited to three, probably even spatial dimensions.4

Urbanism is in urgent need of an advanced theory that refers to the power of networks. It was perhaps this “network process” that brought Koolhaas to explore the anti-thesis of the wall and the plane, the absolute horizontal and vertical separation. I say anti-thesis because Delirious New York itself behaves quite ambiguously: half-way between Hegelian, (post-)structuralist dialectics and utopia on the one side, and, on the other, an affirmation of heterotopias as a more advanced level of conceptualization, alienating processes that are always also processes of selection and individuation in both directions, towards the medium and towards the “individual”. Earlier scholars like Aldo Rossi came too early to go in that direction, as networks were not yet recognizable as part of the Form of Life. Even Shane refers only implicitly to their associative power (nor does he refer to complexity). And Koolhaas was not aware of this problematics either, and probably still is not.

Recently, I proposed one possible approach for building such a theory, along with the corresponding concepts, terms and practices (for more details see [12]). It is rather important to distinguish two very basic forms of networks: logistic and associative networks. Logistic networks are used everywhere in modernist reasoning about cities and culture. Yet, they refer exclusively to the network as a machine, suitable for optimizing the transport of anything. Associative networks are completely different. They do not transfer anything; they swallow, assimilate, rearrange, associate and, above all, they learn. Any associative network can learn anything. The challenge, particularly for modernist attitudes, is that it can’t be controlled what exactly an associative network is going to learn. The interesting thing is that the concept of associative networks provides a bridge to the area of advanced “machine”-learning and to the Actor-Network-Theory (ANTH) of Bruno Latour. The main contribution of ANTH is its emphasis on agency, even the agency of those mostly mineral material arrangements that are usually believed to have no mental capacity.
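
To make the contrast concrete, here is a minimal sketch of an associative network in the spirit of a self-organizing map (cf. [12]); the data and parameters are assumptions for the sake of illustration. The network does not transport anything; it assimilates observations into a learned arrangement, and which arrangement emerges is not specified anywhere in advance.

```python
# A one-dimensional self-organizing map (SOM): 10 nodes assimilate
# 500 observations; learning rate and neighborhood shrink over time.
import numpy as np

rng = np.random.default_rng(1)
nodes = rng.uniform(0, 1, size=(10, 2))   # the network's "memory"
data = rng.uniform(0, 1, size=(500, 2))   # observations to assimilate

for t, x in enumerate(data):
    frac = 1 - t / len(data)
    lr = 0.5 * frac                       # decaying learning rate
    radius = 3.0 * frac + 0.5             # decaying neighborhood radius
    best = np.argmin(((nodes - x) ** 2).sum(axis=1))      # best-matching node
    idx = np.arange(len(nodes))
    h = np.exp(-((idx - best) ** 2) / (2 * radius ** 2))  # neighborhood weights
    nodes += lr * h[:, None] * (x - nodes)                # associative update

print(np.round(nodes, 2))   # the nodes now form an ordered chain in the data
```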

It is clear that an associative network may not be perceived at all under the strictly practiced presupposition of independence, as it is typical for modernism. Upon its implementation, the belief set of modernism tends to destroy associativity, hence also the almost inevitable associations between the more or less mentally equipped actors in urban environments.

When applied to cities, it breaks up relations, deliberately. Any interaction of high-rise buildings, so typical for Manhattan, is precluded intentionally. Any transfer is optimized along just one single parameter: time, and secondarily, space as a resource. Note that optimization always requires the apriori definition of a single function. As soon as one would allow for multiple goals, one would be faced with the necessity of weighting and assigning subjective expectations, which are subjective precisely due to the necessity of interpretation. In order to exclude even the possibility for it, modernists agree hastily to optimize time (as a resource under the assignment of scarcity and physicality), which once was understood as a transcendental condition.
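
The point about multiple goals can be made explicit in a few lines. In this sketch (the candidate routes, their scores and the weights are assumed for illustration), any multi-goal “optimum” exists only relative to an apriori weighting, i.e. relative to an interpretation:

```python
# Two candidate routes, each scored on (travel time, noise exposure).
candidates = {"route_a": (10, 2.0), "route_b": (14, 0.5)}

def best(w_time: float, w_noise: float) -> str:
    # Scalarize the two goals with apriori weights, then minimize.
    return min(candidates, key=lambda k: w_time * candidates[k][0]
                                         + w_noise * candidates[k][1])

print(best(1.0, 0.0))    # route_a: pure time optimization
print(best(0.2, 10.0))   # route_b: another weighting, another "optimum"
```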

As Aldo Rossi remarked already in the 1960s [13], the modernist tries to evacuate any presence of time from the city. It is not just that history is cut off and buried, largely under false premises and wrong conclusions, reducing history to institutional traditions (remember, there is no interpretation for a modernist!). In some way, it would have been even easy to predict Koolhaas’ Junkspace already at the end of the 19th century. Well, the Futurists did it, semi-paradoxically, though. Quite consistently, Futurism was only a short phase within modernism. This neglect of time in modernism is by no means a “value” or an intention. It is a direct logical consequence of the presupposed belief set, particularly independence, logification and the implied neglect of context.

Dis-assembling the associative networks of a city inevitably results in the modernist urban conceptual dust, ruled by the paradigm of scarce time and the blindness against interpretation, patterns and non-representational coherence. This is, in a nutshell, what I would like to propose as the deep grammar of Junkspace, as it has been described by Koolhaas. Modernism did nothing else than build and actualize its conceptual dust. We may call it tertiary chaos, which has been—in its primary form—equal to the initial state of indiscernibility concerning the cosmos as a whole. Yet, this time it has been dictated by modernists. Tertiary chaos thus can be set equal to the attempt to make any condition for the possibility of discernibility vanish.

Modernists may not be aware that there is not only already a theory of discernibility, which equals the Peircean theory of the sign; there is also an adaptation and application of it to urbanism and architecture. Urbanists probably know the name “Venturi”, but I seriously doubt that semiotics is on their radar. If modernists talk about semiotics at all, they usually refer to the structuralist caricature of it, as it has been put forward by de Saussure, establishing a closed version of the sign as a “triangle”. Peircean signs—and these have been used by Venturi—establish an interpretive situation. They do not refer to objects, but just to other signs. Their reference to the world is provided through instances of abstract models and a process of symbolification, which includes learning as an ability that precedes knowledge. (more detail here in this earlier essay) Unfortunately, Venturi’s concepts have scarcely been updated, except perhaps in the context of media facades [14]. Yet, media facades are mostly and often vastly misunderstood as the possibility to display adverts. There are good arguments supporting the view that there is more to them [15].

Modernists, including Koolhaas, employ a strange image of evolution. For him (them), evolution is pure arbitrariness, both regarding the observable entities and processes and regarding future development. He supposes to detect “zero loyalty—and zero tolerance—toward configuration” ([3] p.182). In the same passage he simultaneously and contradictorily misses the “‘original’ condition” and blames history for its corruptive influence: “History corrupts, absolute history corrupts absolutely.” All of that is put into the context of a supposedly “‘permanent evolution.’” (his quot. marks). Most remarkably, even a biologist like S.J. Gould, pretending to be an evolutionary biologist, claims that evolution is absolutely arbitrary. Well, the only way out of the contrasting fact that there is life in the form we know it is to assume some active divine involvement. Precisely this was the stance of Gould. People like Gould (and perhaps Koolhaas) commit the representationalist fault, which excludes them from recognizing (i) the structural tendency of any evolution towards more general solutions, and (ii) that there is an evolution of evolutionarity. The modernist attitude towards evolution can again be traced back to the belief in the metaphysical independence of objects, but our interest here is different.

Understanding evolution as a concept has only little to do with biology and the biological model that is called “natural evolution”. Natural evolution is just an instance of evolution in physico-chemical and then biological matter. Bergson was the first to address evolution as a concept [16], notably in the context of abstract memory. In a previous essay we formalized that approach and related it to biology and machine-learning. At its base, it requires a strictly non-representational approach. Species and organisms are expressed in terms of probability. Our conclusion was that in a physical world evolution inevitably takes place if there are at least two different kinds or scales of memory. Only on that abstract level can we adopt the concept of evolution into urbanism, that is, into any cultural context.
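
As a toy illustration of the two-scales condition (mine, not the formalization referred to above): two exponential traces of the same drifting signal, differing only in their time constant, produce a persistent differential between a fast and a slow “memory”, which is the kind of substrate on which selection can operate.

```python
# Two memories of the same drifting environment at different time scales.
import numpy as np

rng = np.random.default_rng(2)
signal = np.cumsum(rng.normal(0, 0.1, size=200))   # a drifting environment

fast, slow = 0.0, 0.0
for s in signal:
    fast += 0.5 * (s - fast)    # short time scale: tracks fluctuations
    slow += 0.02 * (s - slow)   # long time scale: retains history

print(round(fast - slow, 3))    # a persistent differential between scales
```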

Memory can’t be equated with tradition, institutions or even the concrete left-overs of history, of course. They are just instances of memory. It is of utmost importance here not to contaminate the concept of memory again with representationalism. This memory is constructive. Memory that is not constructive is not memory, but a stock, a warehouse (although these are also kinds of storage and contribute as such to memory). Memory is inherently active and associative. Such memory is the basic, non-representative element of a generally applicable evolutionary theory.

Memory cannot be “deposited” into almost geological layers of sediments, quite in contrast to the suggestions of Eisenman, whom Rajchman follows closely in his “Constructions”.

The claim of “storable memory” is even more disastrous than the claim that information could be stored. These are not objects and items that are independent of an interpretation; they are processes of constructive, or guided, interpretation. Both “storages” would become equal to the respective immaterial processes only under the condition of a strictly deterministic set of commands. Even the concept of the “rule” is already too open to serve the modernist claim of storable memory.

It is immediately clear that the dynamic concept of memory is highly relevant for any theory about urban conditions. It provides a general language from which to derive particular models and instances of association, stocks and flows that are not reducible to storage or transfers. We may even expect that whenever we meet some kind of material storage in an urban context, we should also expect association. The only condition for that being that there are no modernists around… Yet, storage without memory, that is, without activity, remains dead, much like, but even less than, a crystal. Cripples in the sand. The real relevance of stocks and flows is visible only in the realm of the non-representational, the non-material, if we conceive of it as waves in abstract density, that is, as media, conveying the potential for activity as a differential. Physicalists and modernists like Christianse or Hillier will never understand that. Just think of the naïve empirics, called cartography, that they are performing around the world.

This includes deconstructivism as well. Derrida’s deconstructivism can be read as a defensive war against the symbolification of the new, the emerging, the complex, the paradox of sense. His main weapon is the “trace”, of which he explicitly states that it could not be interpreted at all. Thus Derrida, as a master of logical flatness and modernist dust, is the real enemy of progress. Peter Sloterdijk, the prominent contemporary German “philosopher”5, once called Derrida the “Old Egyptian”. Nothing would fit better to Derrida, who lives in the realm of shadows and for whom life is just a short transitory phase, hopefully “survived” without too many injuries. The only metaphor possible on that basis is titanic geology. Think of some of Eisenman’s or Libeskind’s works.

Figure 2: Geologic-titanic shifts induced by the logical flatness of deconstructivism

a: Peter Eisenman, Aronoff Center for Design and Art in Cincinnati (Ohio) (taken from [11]); the parts of the building are treated as blocks, whose dislocation is reminiscent of geological sediments (or the work of titans).

b: Daniel Libeskind, Victoria and Albert Museum Boilerhouse Extension. Secondary chaos, inducing Junkspace through its isolationist “originality”, conveying “defunct myths” (Koolhaas in [3], p.189).

Here we finish our exploration of generic aspects of the structure of modernist thinking. Hopefully, the sections so far are sufficiently suited to provide some insights about modernism in general, and the struggles Koolhaas is fighting with in “Junkspace”.

4. Redesigning Urbanism

Redesigning urbanism, that is, unlocking it from modernist phantasms, is probably much simpler than it may look at first sight. Well, not exactly simple, at least for modernists. Everything is about the presuppositions. Dropping the metaphysical belief in independence without getting trapped by esotericism or mysticism might well be the cure. Of course, metaphysical independence needs to be removed from any level and any aspect of urbanism, starting from the necessary empirical work, which of course is already an important part of the construction work. We already mentioned that the notion of “empirical analysis” pretends neutrality, objectivity (as independence from the author) and validity. Yet, this is pure illusion. Independence should be abandoned also in its form of the search for originality or uniqueness, trying to set an unconditional mark in the cityscape. By that we don’t refer to morphing software, of course.

The antidote against isolationism, analyticity and logic is already well-known. To provide coherence you have to defy splintering and abjure the belief in (conceptual) dust. The candidate tool for this is story-telling, albeit in a non-representational manner, respecting difference and heterotopias from the beginning. In turn, this also means abandoning utopias and a-topias, and embracing complexity and a deep concept of prevailing differentiation (in a subsequent essay we will deal with that). As citizens, we are not interested in non-places and deserts of spasmodic uniqueness (anymore), or in the mere “solution of problems” either (see Deleuze about the dogmatic image of thought as cited above). Changing the perspective from the primacy of analysis to the primacy of story-telling immediately reveals the full complexity of the respective Form of Life, to which we refer here as a respectful philosophical concept.

It is probably pretentious to speak in this way about urbanism as a totality. There are of course, and always have been, people who engaged with the urban condition based on a completely different set of beliefs, righteously non-modern. Those people start with patterns and never tear them apart. Those people are able to distinguish structure, genesis and appearance. In biology, this distinction has been instantiated into the perspectives of the genotype, the phenotype, and, in bio-slang, evo-devo, the compound made from development, growth and evolution. These are tied together (necessarily) by complexity. In philosophy, the respective concepts are immanence, the differential, and the virtual.

For urbanism, take for instance the work of David Shane (“Recombinant Urbanism“). Shane’s work, which draws much on Kelly’s, is a (very) good starting point not only for any further theoretical work, but also for practical work.

As a practitioner, one has to defy the seduction of the totality of a master plan, as the renowned parametricists actualize in Istanbul, or as Christianse and his office did recently at the main station in Zürich. Both are producing pure awfulness, castles of functional uniformity, because they express the totality of the approach even visually. Even in Singapore’s URA (Urban Redevelopment Authority), the master plan has been relativised in favor of a (slightly) more open conceptualization. Designers have to learn not that less is more, but rather that partial nothingness is more. Deliberate non-planning, as Koolhaas has repeatedly emphasized. This should not be taken representationally, of course. It does not make any sense to grow “raw nature”, jungles within the city, neither for the city, nor for the “jungle”. Before a crystal can provide soil for real life, it must decay, precisely because it is a closed system (see figure 3 below). Adaptive systems replace parts and melt holes to build structures, without decaying at all. We will return to this aspect of differentiation in a later article.

Figure 3: Pruitt-Igoe (St. Louis), being blasted in 1972. Charles Jencks called this event “one of the deaths of modernism”. This had not been the only tear-down there. Laclede, a neighborhood near Pruitt-Igoe, made of small, single-flat houses, failed as well, the main reasons being an unfortunate structure of the financial model and political issues, namely the separation of “classes” and apartheid. (see this article)

The main question for finding a practicable process therefore is: How should we ask, and which questions should we address, in order to build an analytics under the umbrella of story-telling that avoids the shortfalls of modernism?

We might again take a look at biology (as a science). Like urbanism, biology is confronted with a totality. We call it life. How to address reasonable, that is, fruitful questions to that totality? Biology has already found a set of answers, which nevertheless are not respected by the modernist version of this science, mainly expressed as genetics. The first insight was that “nothing in biology makes sense except in the light of evolution.” [17] What would be the respective question for urbanism? I can’t give an answer here, but it is certainly not independence. This we can know through the lesson told by “Junkspace”. Another, almost ridiculous anti-candidate is sustainability, as far as it is conceived in terms of scarcity of mainly physical resources instead of social complexity. Perhaps we should remember the history of the city beyond its “functionality”. Yet, that would mean to first develop an understanding of (abstract) evolution, to instantiate that, and then to derive a practicable model for urban societies. What does it mean to be social, what does it mean to think, both taken as practices in a context of freedom? Biology then developed a small set of basic contexts along which any research should be aligned, without losing the awareness (hopefully) that there are indeed four such contexts. These have been clearly stated by Nobel laureate Tinbergen [18]. According to him, research in biology is suitably structured by four major perspectives: phylogeny, ontogeny, physiology and behavior. Are there similarly salient dimensions for structuring thought in urbanism, particularly in a putative non-modernist (neither modernist, nor post-modernist) version? Particularly interesting, imho, are the intersections of such sub-domains.

Perhaps differentiation (as a concept) is indeed a (the) proper candidate for the grand perspective. We will discuss some aspects of this in the next essay: it includes growth and its modes, removal, replacement, deterioration, the problem of the generic, the difference between development and evolution, and a usable concept of complexity, to name but a few. In the philosophy of Gilles Deleuze, particularly in the Thousand Plateaus, Difference and Repetition and the Fold, we can already find a good deal of theoretical work about the conceptual issues around differentiation. Differentiation includes learning, individually and collectively (I do NOT refer to swarm ideology here, nor to collectivist mysticism either!!!), which in turn would bring the (abstract) mental into any consideration of urbanism. Yet, wasn’t mankind differentiating and learning all the time? The challenge will be to find a non-materialist interpretation of those in these materialist times.

Notes

1. Cited after [11]

2. Its core principles are the principle of the excluded middle (PEM) and the principle of non-contradiction (PNC). Both principles are equivalent to the concept of macroscopic objects, albeit only in a realist perspective, i.e. under the presupposition that objects are primary against relations. This is, of course, quite problematic, as it excludes an appropriate conceptualisation of information.
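
For reference, in propositional form, for any proposition $p$:

$$\text{PEM:}\;\; p \lor \lnot p \qquad\qquad \text{PNC:}\;\; \lnot\,(p \land \lnot p)$$

Both hold only in the bi-valent case; in many-valued logics the PEM need not hold.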

Both the PEM and the PNC allow for the construction of paradoxes like the Taylor Paradox. Such paradoxes may be conceived as “Language Game Colliders”, that is, as conceptual devices which commit a mistake concerning the application of the grammar of language games. Usually, they bring countability and the sign for non-countability into conflict. First, it is a fault to compare a claim with a sign; second, it is stupid to claim contradicting proposals. Note that here we are allowed to speak of “contradiction”, because we are following the PNC as it is suggested by its own claim. The Taylor Paradox is of course, like any other paradox, a pseudo-problem. It appears only due to an inappropriate choice or handling of the conceptual embedding, or due to the dismissal of the concept of the “Language Game”, which mostly results in the implicit claim of the existence of a “Private Language”.

3. Vera Bühlmann, “Articulating quantities, if things depend on whatever can be the case“, lecture held at The Art of Concept, 3rd Conference: CONJUNCTURE — A Series of Symposia on 21st Century Philosophy, Politics, and Aesthetics, organized by Nathan Brown and Petar Milat, Multimedia Institute MAMA in Zagreb, Croatia, June 15-17, 2012.

4. German orig.: “Manhattan Space ist, wenn schon nicht überall, so doch im Internet potentiell überall, und zudem nicht mehr auf drei vielleicht gar noch räumliche Dimensionen beschränkt.”

5. Peter Sloterdijk does not like to be called a philosopher.

References

  • [1] Michel Foucault, Archaeology of Knowledge. Routledge 2002 [1969].
  • [2] Vera Bühlmann, Printed Physics, de Gruyter, forthcoming.
  • [3] Rem Koolhaas (2002). Junkspace. October, Vol. 100, “Obsolescence”, pp. 175-190. MIT Press
  • [4] Michael Hansmeyer, his website about these columns.
  • [5] “Pseudopodia. Prolegomena to a Discourse of Design”. In: Vera Bühlmann and Martin Wiedmer (eds.), pre-specifics. Some Comparatistic Investigations on Research in Art and Design. JRP|Ringier Press, Zurich 2008. pp. 21-80 (English edition). available online;
  • [6] Wesley C. Salmon, Causality and Explanation. Oxford University Press, Oxford 1998.
  • [7] Michael Epperson (2009). Quantum Mechanics and Relational Realism: Logical Causality and Wave Function Collapse. Process Studies, 38(2): 339-366.
  • [8] Alan Hájek (2007). The Reference Class Problem is Your Problem Too. Synthese 156 (3):563-585.
  • [9] W.v.O. Quine (1951), Two Dogmas of Empiricism. The Philosophical Review 60: 20-43.
  • [10] Gilles Deleuze, Difference and Repetition. Columbia University Press, New York 1994 [1968].
  • [11] Vera Bühlmann, inhabiting media. Thesis, University of Basel (CH), 2009.
  • [12] Klaus Wassermann (2010). SOMcity: Networks, Probability, the City, and its Context. eCAADe 2010, Zürich. September 15-18, 2010. (pdf)
  • [13] Aldo Rossi, The Architecture of the City. MIT Press, Cambridge (Mass.) 1982 [1966].
  • [14] Christoph Kronhagel (ed.), Mediatecture, Springer, Wien 2010. pp.334-345.
  • [15] Klaus Wassermann, Vera Bühlmann, Streaming Spaces – A short expedition into the space of media-active façades. In: Christoph Kronhagel (ed.), Mediatecture, Springer, Wien 2010. pp. 334-345. available here.
  • [16] Henri Bergson, Matter and Memory. (Matière et Mémoire 1896) transl. N.M. Paul & W.S. Palmer. Zone Books 1990.
  • [17] Theodosius Dobzhansky, Genetics and the Origin of Species, Columbia University Press, New York 1951 (3rd ed.) [1937].
  • [18] Niko Tinbergen (1963). On Aims and Methods in Ethology, Z. Tierpsych., (20): 410–433.

۞

A Deleuzean Move

June 24, 2012 § Leave a comment

It is probably one of the main surprises in the course of growing up as a human that in the experience of consciousness we may meet things like unresolvable contradictions, thoughts that are incommensurable, thoughts that lead into contradictions or paradoxes, or thoughts that point to something which is outside of the possibility of empirical, so to speak “direct”, experience. All these experiences form a particular class of experience. For one or the other reason, these issues are issues of the mental itself. We definitely have to investigate them if we are going to talk about things like machine-based episteme, or the urban condition, which will be the topic of the next few essays.

There have been only very few philosophers1 who have embraced paradoxicality without getting caught by antinomies and paradoxes in one way or another.2 Just to be clear: Getting caught by paradoxes is quite easy. For instance, by violating the validity of the language game you have chosen. Or by neglecting virtuality. The first of these avenues into persistent states of worry can be observed in the sciences and mathematics3, while the second one is more abundant in philosophy. Fortunately, playing with paradoxicality without getting trapped by paradoxes is not too difficult either. There is even an incentive to do so.

Without paradoxicality it is not possible to think about beginnings, as opposed to origins. Origins—understood as points of {conceptual, historical, factual} departure—are set for theological, religious or mystical reasons, which by definition are always considered as bearers of sufficient reason. To phrase it more accurately, the particular difficulty consists in talking about beginnings as part of an open evolution without universal absoluteness, hence also without the need for justification at any time.

Yet, paradoxicality, the differential of actual paradoxes, could form stable paradoxes only if possibility is mixed up with potentiality, as is for instance the case in perspectives that could be characterised as reductionist or positivist. Paradoxes exist strictly only within that conflation of possibility and potentiality. Hence, if a paradox or antinomy seems to be stable, one can always find an implied primacy of negativity in lieu of the problematic field spawned and spanned by the differential. We thus can observe the pouring out of paradoxes also if the differential is rejected or neglected, as in Derrida’s approach, or the related functionalist-formalist ethics of the Frankfurt School, namely that proposed by Habermas [4]. Paradoxes are like knots that always can be untangled in higher dimensions. Yet, this does NOT mean that everything could be smoothly tiled without frictions, gaps or contradictions.

Embracing the paradoxical thus means to deny the linear, to reject the origin and the absolute, the centre points [6] and the universal. We may perceive remote greetings from Nietzsche here4. Perhaps you may already have classified the contextual roots of these hints: It is Gilles Deleuze to whom we refer here and who may well be regarded as the first philosopher of open evolution, the first one who rejected idealism without sacrificing the Idea.5

In the hands of Deleuze—or should we say minds?—paradoxicality actualizes neither into paradoxes nor into idealistic dichotomic dialectics. A structural(ist) and genetic dynamism first synthesizes the Idea, and by virtue of the Idea as well as the space and time immanent to the Idea, paradoxicality turns productive.7

Philosophy is revealed not by good sense but by paradox. Paradox is the pathos or the passion of philosophy. There are several kinds of paradox, all of which are opposed to the complementary forms of orthodoxy – namely, good sense and common sense. […] paradox displays the element which cannot be totalised within a common element, along with the difference which cannot be equalised or cancelled at the direction of a good sense. (DR227)

As our title already indicates, we not only presuppose and start with some main positions and concepts of Deleuzean philosophy, particularly those he once developed in Difference and Repetition (D&R)8. There will be more details later9. We10 also attempt to contribute some “genuine” aspects to it. In some way, our attempt could be conceived as a development that offers an alternative to part V of D&R, entitled “Asymmetrical Synthesis of the Sensible”.

This Essay

Throughout the collection of essays about the “Putnam Program” on this site we have expressed our conviction that future information technology demands an assimilation of philosophy by the domain of the computer sciences (e.g. see the superb book by David Blair, “Wittgenstein, Language and Information” [47]). There are a number of areas—of technical as well as societal or philosophical relevance—which give rise to questions that have already started to become graspable, and not just in the computer sciences. How to organize the revision of beliefs?11 What is the structure of the “symbol grounding problem”? How to address it? Or how to avoid the fallacy of symbolism?12 Obviously we can’t tackle such questions without literacy about concepts like belief or symbol, which of course can’t be reduced to merely technical notions. Beliefs, for instance, can’t be reduced to uncertainty or its treatment, despite the fact that there is already some tradition in analytical philosophy, the computer sciences and statistics of doing so. Also, with the advent of emergent mental capabilities in machines, ethical challenges appear. These challenges are on both sides of the coin. They relate to the engineers who are creating such instances as well as to lawyers who—on the other side of the spectrum—have to deal with the effects and the properties of such entities, and even to “users” who have to build some “theory of mind” about them, some kind of folk psychology.

And last but not least, just the externalization of informational habits into machinal contexts often triggers pseudo-problems and “deep” confusion.13 Examples of such confusion are the question about the borders of humanity, i.e. a kind of defense war fought by anthropology, or the issue of artificiality. Where does the machine end and where does the domain of the human start? How can we speak reasonably about “artificiality” if our brain/mind remains dramatically non-understood and thus is implicitly conceived by many as a kind of bewildering nature? And finally, how to deal with technological progress: When will computer scientists need self-imposed guidelines similar to those geneticists ratified for their community in 1974 in the context of the Asilomar Conferences? Or are such guidelines illusionary or misplaced, because we are weaving ourselves so intensively into our new informational carpets—made from multi- or even meta-purpose devices—that they are righteous flying carpets?

There is also a clearly recognizable methodological reason for bringing the inventioneering of advanced informational “machines” and philosophy closer together. The domain of machines with advanced mental capabilities—I deliberately avoid the traditional term “artificial intelligence”—, let us abbreviate it MMC, acquires ethical weight in itself. MMC establishes a subjective Lebenswelt (lifeworld) that is strikingly different from ours and which we can’t understand analytically any more (if at all)14. The challenge then is how to talk about this domain. We should not repeat the same fallacy that anthropology and anthropological philosophy have been committing since Kant, where human measures have been applied (and still are today) to “nature”. If we are going to compare two different entities, we need a differential position from which both can be instantiated. Note that no resemblance can be expected between the instances, nor between the instances and the differential. That differential is a concept, or an idea, and as such it can’t be addressed by any kind of technical perspective. Hence, questions of the mode of speaking can’t be conceived as a technical problem, especially not for the domain of MMC, also due to the implied self-referentiality of the mental itself.

Taken together, we may say that our motivation follows two lines. Firstly, the concern is about the problematic field, the problem space itself, about the possibility that problems could become visible at all. Secondly, there is a methodological position, characterisable as a differential, that is necessary to talk about the subject of incommensurable entities that are equipped with mental capacities.15

Both directions and all related problems can be addressed in one and the same move, or so at least is our proposal. The goal of this essay is the introduction and a brief discussion of a still emerging conceptual structure that may be used as an image of thought, or likewise as a tool in the sense of an almost formal mental procedure, helping to avoid worries about—or to support—the diagnosis of the challenges opened by the new technologies. Of course, it will turn out that the result is not just applicable to the domain of the philosophy of technology.

In the following we will introduce a unique structure that has been inspired by heterogeneous philosophical sources, and not only those. The philosophical ones stretch from Aristotle to Peirce, from Spinoza to Wittgenstein, and from Nietzsche to Deleuze, to name but a few, just to give you an impression of what mindset you could expect. Another important source is mathematics, yet not used as a ready-made system for formal reasoning, but rather as a source for a certain way of thinking. Last, but not least, biology is contributing as the home of the organon, of complexity, of evolution, and, more formally, of self-referentiality. The structure we will propose as a starting point appears merely technical, thus arbitrary, and at the same time it draws upon the primary amalgamate of the virtual and the immanent. Its paradoxicality consists in its potential to describe the “pure” any, the Idea that comprises any beginning. Its particular quality, as opposed to any other paradoxicality, is caused by a profound self-referentiality that simultaneously leads to its vanishing, its genesis and its own actualization. In this way, the proposed structure solves a challenge that is considered by many throughout the history of philosophy to be one of the most serious ones. The challenge in question is that of sufficient reason, justification and conditionability. To be more precise, that challenge is not solved; it is more correct to say that it is dissolved, made to disappear. In the end, the problem of sufficient reason will be marked as a pseudo-problem.

Here, a small remark to the reader is necessary. Finally, after some weeks of putting this down, it turned out as a matter of fact that any (more or less) intelligible way of describing the issues exceeds the classical size of a blog entry. After all, it now comprises approx. 150’000 characters (incl. white space), which would amount to 42+ pages on paper. So, it is more like a monograph. Still, I feel that there are many important aspects left out. Nevertheless I hope that you enjoy reading it.


2. Brief Methodological Remark

As we already noted, the proposed structure is self-referential. Self-referentiality also means that all concepts and structures needed for an initial description will be justified by the working of the structure itself, in other words, by its immanence. Actually, similarly to the concept of the Idea in D&R, virtuality and immanence come very close to each other; they are set to be co-generative. As an Idea, the proposed structure is complete. Like any other idea, it needs to be instantiated into performative contexts; thus it is to be conceived as an entirety, yet neither as a completeness nor as a totality. Yet, its self-referentiality allows for and actually also generates a “self-containment” that results in a fractal mirroring of itself, in a self-affine mapping. Metaphorically, it is a concept that develops like the leaf of a fern. Superficially, it could look like a complete and determinate entirety, but it is not, similar to area-covering curves in mathematics. Those fill a 2-dimensional area infinitesimally, yet, with regard to their production system, they remain truly 1-dimensional. They are a fractal, an entity to which we can’t apply ordinal dimensionality. In this way, our concept also develops into instances of fractal entirety.
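
The area-covering curve can itself be made explicit in a few lines. The following sketch generates the classic Hilbert curve from its standard L-system: a purely 1-dimensional production system whose iterates approximate the coverage of a 2-dimensional area.

```python
# Hilbert curve via its standard L-system (axiom "A"):
#   A -> +BF-AFA-FB+ ,  B -> -AF+BFB+FA-
# F draws a unit step, +/- turn by 90 degrees, A/B are ignored when drawing.
def hilbert(order: int) -> str:
    rules = {"A": "+BF-AFA-FB+", "B": "-AF+BFB+FA-"}
    s = "A"
    for _ in range(order):
        s = "".join(rules.get(c, c) for c in s)
    return s

def to_points(program: str):
    x, y, dx, dy = 0, 0, 1, 0
    pts = [(x, y)]
    for c in program:
        if c == "F":
            x, y = x + dx, y + dy
            pts.append((x, y))
        elif c == "+":
            dx, dy = -dy, dx
        elif c == "-":
            dx, dy = dy, -dx
    return pts

pts = to_points(hilbert(4))
print(len(pts))   # 256: the order-4 curve visits every cell of a 16x16 grid
```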

For these reasons, it would also be wrong to think that the structure we will describe in a moment is “analytical”, despite the fact that it is possible to describe its “frozen” form by means of references to mathematical concepts. Our structure must be understood as an entity that is not only not neutral or invariant against time; it forms its own sheafs of time (as I. Prigogine described it). Analytics is always blind to its generative milieu. Analytics can’t tell anything about the world, contrary to a widely exercised opinion. It is not really a surprise that Putnam recommended reducing the concept of the “analytic” to “an inexplicable noise”. Very basically, it is a linear endeavor that necessarily excludes self-referentiality. Its starting point is always based on an explicit reference to a kind of apparentness, or even revelation. Analytics not only presupposes a particular logic, but also conflates transcendental logic and practiced quasi-logic. Moreover, the pragmatics of analysis claims that it is free from constructive elements. All these characteristics do not apply to our proposal, which is as little “analytical” as the philosophy of Deleuze, where it starts to grow itself on the notion of the mathematical differential.

3. The Formal Structure

For the initial description of the structure we first need a space of expressibility. This space will then be equipped with some properties. And right at the beginning I would like to emphasize that the proposed structure does not “explain” anything by itself, just like a (philosophical) grammar. Rather, through its usage, that is, its unfolding in time, it shows itself and provides a stable as well as a generative ground.

The space of the structure is not a Cartesian space, where some concepts are mapped onto the orthogonal dimensions, or where concepts are thought to be represented by such dimensions. In a Cartesian space, the dimensions are independent of each other.16 Objects are represented by the linear and additive combination of values along those dimensions, and thus their entirety gets broken up. We lose the object as a coherent object, and there would be no way to regain it later, regardless of the means and tools we would apply. Hence the Cartesian space is not useful for our purposes. Unfortunately, all current mathematics is based on the Cartesian, analytic conception. Currently, mathematics is a science of control, or more precisely, a science about the arrangement of signs as far as it concerns linear, trivial machines that can be described analytically. There is not yet a mathematics of the organon. Probably category theory is a first step in its direction.

Instead, we conceive our space as an aspectional space, as we introduced it in a previous chapter. In an aspectional space, concepts are represented by “aspections” instead of “dimensions”. In contrast to the values in a dimensional space, values in an aspectional space cannot be changed independently of each other. More precisely, we can always keep at most one aspection constant, while the values along all the others change simultaneously. (So-called ternary diagrams provide a distantly related example for this in a 2-dimensional space.) In other words, within the N-manifolds of the aspectional space, all values are always dependent on each other.
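
The mutual dependency can be illustrated with the compositional constraint behind ternary diagrams (a minimal sketch; the three components and their rescaling rule are assumptions for illustration): values are constrained to sum to 1, so changing one value necessarily changes the others.

```python
# Three "aspections" constrained to sum to 1: setting one value rescales
# all others (they keep their mutual proportions).
import numpy as np

def set_aspect(v: np.ndarray, i: int, new_value: float) -> np.ndarray:
    rest = np.delete(v.astype(float), i)
    scaled = rest * (1.0 - new_value) / rest.sum()
    return np.insert(scaled, i, new_value)

v = np.array([0.2, 0.3, 0.5])
w = set_aspect(v, 0, 0.4)
print(w, w.sum())   # [0.4 0.225 0.375] ~1.0 -- every component has moved
```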

This aspectional space is equipped with a hyperbolic topological structure. The space of our structure is not flat. You may take M.C. Escher’s plates as a visualization of such a space. Yet, our space is different from such a fixed space; it is a relativistic space that is built from overlapping hyperbolic spaces. At each point in the space you will find a point of reference (“origin”) for a single hyperbolic reference system. Our hyperbolic space is locally centred. A mathematical field concerned with comparable structures would be differential topology.

So far, the space is still quite easy and intuitive to understand. At least a visualization is still possible for it. This probably changes with the next property. Points in this aspectional space are not “points”, or, expressed in a better, less obscure way, our space does not contain points at all. In a Cartesian space, points are defined by one or more scales and their properties. For instance, in an x-y-coordinate system we could have real numbers on both dimensions, i.e. scales, or we could have integers on the first and reals on the second. The interaction of the number systems used to create a scale along a dimension determines the expressibility of the space. This way, a point is given as a fixed instance of a set of points as soon as the scale is given. Points themselves are thus said to be 0-dimensional.

Our “points”, i.e. the content of our space, are quite different from that. The space is not “made up” from inert and passive points, but from the second differential, i.e. ultimately a procedure that invokes an instantiation. Our aspectional space thus is made from infinitesimal procedural sites, or “situs”, as Leibniz probably would have said. If we represented physical space by a Cartesian dimensional system, then the second derivative would represent an acceleration. Take this as a metaphor for the behavior of our space. Yet, our space is not a passive space. The second-order differential makes it an active space and a space that demands activity. Without activity it is “not there”.

We could also describe it as the mapping of the intensity of the dynamics of transformation. If you tried to point to a particular location, or situs, in that space—which is of course excluded by its formal definition—you would instantaneously be “transported”, or transformed, such that you would find yourself elsewhere. Yet, this “elsewhere” cannot be determined in Cartesian ways. First, because that other point does not exist; second, because it depends on the interaction of the subject’s contribution to the instantiation of the situs and the local properties of the space. Finally, we can say that our aspectional space is not representational, as the Cartesian space is.

So, let us sum up the elemental17 properties of our space of expressibility:

1. The space is aspectional.
2. The topology of the space is locally hyperbolic.
3. The substance of the space is a second-order differential.

4. Mapping the Semantics

We now are going to map four concepts onto this space. These concepts are themselves Ideas in the Deleuzean sense, but they are also transcendental. They are indeterminate and real, just like virtual entities. As such, we take the chosen concepts to be inexplicable, yet also open to instantiation.

These four concepts have been chosen initially in a hypothetical gesture, such that they satisfy two basic requirements. First, it should not be possible to reduce them to one another. Second, together they should allow us to build a space of expressibility that contains as many philosophical issues of mentality as possible. For instance, it should contain any aspect of epistemology or of languagability, but it does not aim to contribute to the theory of morality, i.e. ethics, despite the fact that there is, of course, significant overlap. For instance, one of the possible goals could be to provide a space that allows us to express the relation between semiotics and any logic, or between concepts and models.

So, here are the four transcendental concepts that form the aspections of our space as described above:

– virtuality
– mediality
– model
– concept

Inscribing four concepts into a flat, i.e. Euclidean, aspectional space would result in a tetrahedral space. In such a space there would be “corners,” or points of inflection, which would represent the determinateness of the concepts mapped to the aspections. As we have emphasized above, though, our space is not flat. No static visualization is possible for it, since our space can’t be mapped to the flat Euclidean space of a drawing, or to the space of our physical experience.

So, let us proceed to the next level by resorting to the hyperbolic disc. If we take any two points inside the disc, their distance is determinate. Yet, if we take any two points at the border of the disc, the distance between those points is infinite from the inside perspective, i.e. for any perspective associated with a point within the disc. The distance from any point inside the disc to the border is likewise infinite. This provides a good impression of how transcendental concepts, which by definition can’t be accessed “as such”, or as a thing, can be operationalized by the hyperbolic structure of a space. Our space is more complicated, though, as it is not structured by a fixed hyperbolic topology that is, so to speak, global for the entire disc. The consequence is that our space does not have a border, while at the same time it remains an aspectional space. Turning the perspective around, we could say that the aspections are implied in this space.
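As a small numerical illustration of that unreachable border, here is a sketch using the standard distance formula of the textbook Poincaré disc model. It is offered only under that assumption; it does not model our relativistic, locally centred space, merely the global disc used above for intuition.

```python
# Distances on the Poincaré disc under the standard hyperbolic metric:
# points approaching the border recede to infinite distance, which is how
# transcendental aspects can be operationalized as unreachable "borders".
import math

def poincare_distance(p, q):
    """Hyperbolic distance between two points inside the unit disc."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    num = dx * dx + dy * dy
    den = (1 - p[0] ** 2 - p[1] ** 2) * (1 - q[0] ** 2 - q[1] ** 2)
    return math.acosh(1 + 2 * num / den)

center = (0.0, 0.0)
for r in (0.5, 0.9, 0.99, 0.99999):
    print(r, round(poincare_distance(center, (r, 0.0)), 3))
# the Euclidean steps shrink, yet the hyperbolic distance grows without bound
```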

Let us now briefly visit these four concepts.

4.1. Virtuality

Virtuality describes the property of “being virtual”. Saying that something is virtual does not mean that this something does not exist, although the property “existing” can’t be applied to it either. It is fully real, but not actual. Virtuality is the condition of potentiality, and as such it is a transcendental concept. Deleuze repeatedly emphasises that virtuality does not refer to a possibility. In the context of information technologies it is often said that this or that is “virtual”, e.g. virtualized servers, or virtual worlds. This usage is not the same as in philosophy, since, quite obviously, we use the virtual server as a server, and the world dubbed “virtual“ does indeed exist in an actualized form. Yet, in both examples there is also some resonance with the philosophical concept of virtuality. But this virtuality is not exclusive to the simulated worlds, the informationally defined server instances or the WWW. Virtualization is, as we will see in a moment, implied by any kind of instance of mediality.

As just said, virtuality, and thus also potentiality, must be strictly distinguished from possibility. Possible things, even if not yet present or existent, can be thought of in a quasi-material way, as if they existed in their material form. We can even say that possible things and the possibilities of things are completely determined at any given moment. The same cannot be said about potentiality. Yet, without the concept of potentiality we could not speak about open evolutionary processes. Neglecting virtuality thus is necessarily equivalent to the apriori claim of determinateness, which is methodologically and ethically highly problematic.

The philosophical concept of virtuality has been known since Aristotle. Recently, Bühlmann18 brought it into the vicinity of semiotics and the question of reference19 in her work about mediality. There would be much, much more to say about virtuality here, but the space is missing…

4.2. Mediality

Mediality, that is, the medial aspects of things, facts and processes, is among the most undervalued concepts nowadays, even as we get some exercise by means of so-called “social media”. That term perfectly puts this blind spot on stage through its emphasis: Neither is there any mediality without sociality, nor is there any sociality without mediality. Mediality is the concept that has been “discovered” last of our small group. There is a growing body of publications, but many are—astonishingly—deeply infected by romanticism or positivism20, with only a few exceptions.21 Mediality comprises issues like context, density, or transformation qua transfer. Mediality is a concept that helps to bring into focus the appropriate level of integration in populations or flows when talking about semantics or meaning and their dynamics. Any thing, whether material or immaterial, that occurs in sufficient density in its manifoldness may develop a mediality within a sociality. Mediality as a “layer of transport” is co-generative to sociality. Media are never neutral with respect to what they transport, albeit one can often find counteracting forces here.

Signs and symbols could not exist as such without mediality. (Yet, this proposal is based on the primacy of interpretation, which is rejected by the modernist set of beliefs. The costs of this rejection are, however, tremendous, as we are going to argue here.) The same is true for words and language as a whole. In real contexts we usually find several, if not many, medial layers. Of course, signs and symbols are not exhaustively described by mediality. They need reference, which is a compound that comprises modeling.

4.3. Model

Models and modeling need not be explicated much any more, as they are one of the main issues throughout our essays. We would just like to recall the obvious fact that a “pure” model is not possible. We need symbols and rules, e.g. about their creation or usage, and necessarily neither is part of the model itself. Most significantly, models need a purpose, a concept to which they refer. In fact, any model presupposes an environment, an embedding that is given by concepts and a particular social embedding. Additionally, models would not be models without virtuality. On the one hand, virtuality is implied by the fact that models are incarnations of specific modes of interpretation; on the other hand, they imply virtuality themselves, since they are, well, just models.

We frequently mentioned that it is only through models that we can build up references to the external world. Of course, models are not sufficient to describe that referencing. There is also the contingency of the manifold of populations and the implied relations as quasi-material arrangements that contribute to the reference of the individual to the common. Yet, only modeling allows for anticipation and purposeful activity. It is only through models that behavior is possible, insofar as any behavior is already differentiated behavior. Models are thus the major site where information is created. It is not just by chance that the 20th century experienced the abundance of models and of information as concepts.

In mathematical terms, models can be conceived as second-order categories. More profanely, but equivalently, we can say that models are arrangements of rules for transformation. This implies the whole issue of rule-following as it has been investigated and formulated by Wittgenstein. Note that rule-following itself is a site of paradoxicality. As there is no private language, there is also no private model. Philosophically, and a bit more abstractly, we could describe models as the compound of providing the possibility for reference (they are one of the conditions for such) and the institutionalized site for creating (f)actual differences.
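The reading of a model as an arrangement of transformation rules can be hinted at in code. The sketch below caricatures rules as composable functions; all names are invented for illustration, and nothing in it captures orthoregulation or the social embedding of rule-following.

```python
# A loose sketch of "a model as an arrangement of rules for transformation",
# caricaturing rules as composable functions. All names are hypothetical.
from functools import reduce

def compose(*rules):
    """Arrange rules into a model: apply them in the given order."""
    return lambda x: reduce(lambda acc, rule: rule(acc), rules, x)

# three hypothetical transformation rules
normalize = lambda s: s.strip().lower()
tokenize  = lambda s: s.split()
count     = lambda tokens: len(tokens)

model = compose(normalize, tokenize, count)
print(model("  The map is not the territory  "))   # -> 6
# note: the rules for creating and using such rules are not themselves
# part of the "model" -- exactly the point made in the text above
```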

4.4. Concepts

Concept is probably one of the most abused, or at least misunderstood, concepts, at least in modern times. So-called Analytical Philosophy claims over and over again that concepts could be explicated unambiguously, that concepts could be clarified or defined. This way, the concept and its definition are equated. Yet, a definition is just a definition, not a concept. The language game of the definition makes sense only in a tree of analytical proofs that started with axioms. Definitions need not be interpreted. They are fully given by themselves. Thus, the idea of clarifying a concept is nothing but an illusion. Deleuze writes (DR228):

It is not surprising that, strictly speaking, difference should be ‘inexplicable’. Difference is explicated, but in systems in which it tends to be cancelled; this means only that difference is essentially implicated, that its being is implication. For difference, to be explicated is to be cancelled or to dispel the inequality which constitutes it. The formula according to which ‘to explicate is to identify’ is a tautology.

Deleuze points to the particular “mechanism” of eradication by explication, which is equal to its transformation into the sayable. There is a difference between 5 and 7, but the arithmetic difference does not cover all aspects of difference. By explicating the difference using some rules, all the other differences except the arithmetic one vanish. Thus, this inexplicability is not limited to the concept of difference. In some important way, these other aspects are much more interesting and important than the arithmetic operation itself or its result. Actually, we can understand differencing only as far as we are aware of these other aspects.

Elsewhere, we already cited Augustine and his remark about time:22 “What, then, is time? If no one ask of me, I know; if I wish to explain to him who asks, I know not.” Here, we can observe at least two things. Firstly, this observation may well be interpreted as the earliest rejection of “knowledge as justified belief”, a perspective which became popular in modernism. Meanwhile it has been proved inadequate by the so-called Gettier problem. The consequences for the theory of databases, or machine-based processing of data, can hardly be overestimated. It clearly shows that knowledge can’t be reduced to confirmed hypotheses qua validated models, and belief can’t be reduced to a kind of pre-knowledge. Belief must be something quite different.

The second thing to observe in these two examples concerns the status of interpretation. While Augustine seems to be somewhat desperate, at least for a moment23, analytical philosophy tries to abolish the annoyance of indeterminateness by killing the freedom inherent in interpretation, which always and inevitably happens if the primacy of interpretation is denied.

Of course, the observed indeterminateness is equally not limited to time. Whenever you try to explicate a concept, whether you describe it or define it, you encounter the insurmountable difficulty of picking one of many interpretations. Again: There is no private language; meaning, references and signs exist only within social situations of interpretation. In other words, we again find the necessity of invoking the other conceptual aspects from which we build our space. Without models and mediality there is no concept. And even more profoundly than models, concepts imply virtuality.

In the opposite direction we can now understand that these four concepts are not only not reducible to each other. They are dependent on each other and—somewhat paradoxically—they are even competitively counteracting. From this we can expect an abstract dynamics somewhat reminiscent of the patterns evolving in reaction-diffusion systems. These four concepts imply the possibility for a basic creativity in the realm of the Idea, in the indeterminate zone of actualisation that will result in a “concrete” thought, or at least the experience of thinking.
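Since we appeal to reaction-diffusion patterns, a compact numerical sketch may help. It is offered only as an analogy: the Gray-Scott equations and the parameter values below are conventional textbook choices, not part of our argument about the four concepts.

```python
# A compact Gray-Scott reaction-diffusion sketch: two mutually dependent,
# counteracting quantities generate patterned dynamics out of a perturbation.
import numpy as np

n, Du, Dv, F, k = 128, 0.16, 0.08, 0.035, 0.065   # conventional parameters
U = np.ones((n, n))
V = np.zeros((n, n))
U[54:74, 54:74], V[54:74, 54:74] = 0.50, 0.25     # a perturbed central patch

def laplacian(Z):
    # discrete Laplacian on a torus (periodic boundaries)
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(5000):
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + F * (1 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V

print(float(V.min()), float(V.max()))   # V has organized into spots/stripes,
                                        # neither component "wins" outright
```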

Before we proceed we would like to introduce a notation that should be helpful in avoiding misunderstandings. Whenever we refer to the transcendental aspects between which the aspections of our space stretch out, we use capital letters and mark them additionally by a bar, such as “_Concept” or “_Model”. The whole set of aspects we denote by “_A”, while its unspecified items are indicated by “_a”.

5. Anti-Ontology: The T-Bar-Theory

The four conceptual aspects _A play different roles. They differ in the way they get activated. This becomes visible as soon as we use our space as a tool for comparing various kinds of mental concepts or activities, such as believing, referring, explicating or understanding. These we will inspect later in detail.

Above we described the impossibility of explicating a concept without departing from its “conceptness”. Well, such a description is actually not appropriate according to our aspectional space. The four basic aspections are built by transcendental concepts. There is a subjective, imaginary, yet pre-specific scale along those aspections. Hence, in our space “conceptness” is not a quality, but an intensity, or almost a degree, a quantity. The key point then is that a mental concept or activity always relates to all four transcendental aspections, in such a way that the relative location of the mental activity can’t be changed along just a single aspect alone.

We also can recognize another significant step that is provided by our space of expressibility. Traditionally, concepts are used as existential signifiers, in philosophy often called qualia. Such existential signifiers are only capable of indicating presence or absence, and are thus confined to a naive ontology of the Hamletian style (to be or not to be). It is almost impossible to build a theory or a model from existential signifiers. From the modeling or measurement-theory point of view, concepts are on the binary scale. Although concepts collect a multitude of such binary usages, appropriate modeling remains impossible due to the binary scale, unless we probabilize all potential dual pairs.

Similarly to the case of logic, we also have to distinguish the transcendental aspect _a, that is, the _Model, _Mediality, _Concept, and _Virtuality, from the respective entity that we find in applications. Those practiced instances of _a are just that: instances, produced by orthoregulated habits. Yet, the instances of _a that could be gained through the former’s actualization do not form singularities, or even qualia. Any _a can be instantiated into an infinite diversity of concrete, i.e. definable and sayable, abstract entities. That’s the reason for the kinship between probabilistic entities and transcendental perspectives. We could operationalize the latter by the former, even if we have to distinguish sharply between possibility and potentiality. Additionally, we have to keep in mind that the concrete instances do not live independently of their transcendental ancestry24.

Deleuze provides us with a nice example of this dynamics in the beginning of part V of D&R. For him, “divergence” is an instance of the transcendental entity “Difference”.

Difference is not diversity. Diversity is given, but difference is that by which the given is given, that by which the given is given as diverse. Difference is not phenomenon but the noumenon closest to the phenomenon.

What he calls “phenomenon” we dubbed “instance”, which is probably more appropriate in order to avoid the reference to phenomenology and the related difficulties. Calling it “phenomenon” pretends—typically for any kind of phenomenology or ontology—to a sort of deeply unjustified independence of mentality from its underlying physicality.

This step from existential signifiers to the situs in a space of expressibility, made possible by our aspectional space, can hardly be overestimated. Take for instance the infamous question that attracted so many misplaced answers: “How do words or concepts acquire reference?” This question appears especially troubling because signs refer only to signs.25 In existential terms, and all the terms in that question are existential ones, this question can’t be answered, not even addressed at all. As a consequence, deep mystical chasms unnecessarily keep separating the world from the concepts. Any resulting puzzle is based on a misconception. Think of Plato’s chorismos (Greek for “separation”) of explanation and description, which recently has been taken up, refreshed and declared a “chasm” by Epperson [31] (a theist realist, according to his own positioning; we will meet him again later). The various misunderstandings are well-known, ranging from nominalism to externalist realism to scientific constructivism.

They all vanish in a space that overcomes the existentiality embedded in the terms. Mathematically speaking, we have to represent words, concepts and references as probabilized entities, as quasi-species, as Manfred Eigen called them in a different context, in order to avoid naive mysticism regarding our relations to the world.

It seems that our space provides the possibility for measuring and comparing different ways of instantiation for _A, a kind of stable scale. We may use it to access concepts differentially; that is, we are now able to transform concepts in a space of quantitability (a term coined by Vera Bühlmann). The aspectional space as we have constructed it is thus necessary even in order to talk just about modeling. It would provide the possibility for theories about any transition between any mental entities one could think of. For instance, if we conceive “reference” as the virtue of purposeful activity and anticipation, we could explore and describe the conditions for the explication of the path between the _Model on the one side and the _Concept on the other. On this path—which is open on both sides—we could, for instance, first meet different kinds of symbols near the _Model, starting with the idealization and naming of models, followed by the mathematical attitude concerning the invention and treatment of signs, _Logic and all of its instances, semiosis and signs, words, and finally concepts, not forgetting above all that this path necessarily implies a particular dynamics regarding _Mediality and _Virtuality.

Such an embedding of transformations into co-referential transcendental entities is all we can expect to “know” reliably. That was the whole point of Kant. Well, here we can be more radical than Kant dared to be. The choreostemic space is a rejection of the idea of “pure thought”, or pure reason, since such knowledge needs to undergo a double instantiation, and this brings subjectivity back. It is just a phantasm to believe that propositions could be secured up to “truth”. This is even true for the least possible common denominator, existence.

I think that we cannot know whether something exists or not (here, I pretend to understand the term exist), that it is meaningless to ask this. In this case, our analysis of the legitimacy of uses has to rest on something else. (David Blair [49])

Note that Blair is very careful in his wording here. He does not claim any universality regarding justification, or legitimization. His proposal is simply that any reference to “Being” or “Existence” is useless apriori. Claiming seriousness for ontology, as an aspect of or even as an external reality, immediately instantiates the claim of an external reality as such, which would be such-and-such irrespective of its interpretation. This, in turn, would amount to a stance that sets as its goal the proof of the irrelevance of interpretation, and of interpretive relativism. Any familiar associations about that? Not least do physicists, but only physicists, speak of “laws” in nature. All of this is, of course, unholy nonsense, propaganda and ideology at the least.

As a matter of fact, even in a quite strict naturalist perspective we need concepts and models. Those are obviously not part of the “external” nature. Ontology is an illusion, completely and in any of its references, leading to pseudo-problems that are indeed “very difficult” to “solve”. Even if we manage to believe in “existence”, it remains a formless existence; or more precisely, it has to remain formless. Any ascription of form would immediately strike back as a denial of the primacy of interpretation, and hence as a naturalist determinism.

Before addressing the issue of the topological structure of our space, let us trace some other figures in our space.

6. Figures and Forms

Whenever we explicate a concept we imply or refer to a model. In a more general perspective, this applies to virtuality and mediality as well. To give an example: describing a belief does not mean to believe, but to apply a model. The question now is how to revert the accretion of mental activities towards the _Model. _Virtuality can’t be created deliberately, since in this case we would refer again to the concept of model. Speaking about something, that is, saying in the Wittgensteinian sense, intensifies the _Model.

It is not too difficult, though, to find some candidate mechanics that turns the vector of mental activity away from the _Concept. It is through performance, mere action without explicable purpose, that we introduce new possibilities for interpretation and thus also enriched potential as the (still abstract) instance of _Virtuality.

In contrast to that, the _Concept is implied. The _Concept can only be demonstrated, even by modeling. Traveling on some path that is heading towards the _Model, the need for interpretation continuously grows; hence, the more we try to approach the “pure” _Model, the stronger is the force that will flip us back towards the _Concept.

_Mediality, finally, the fourth of our aspects, binds its immaterial colleagues to matter, or quasi-matter, in processes that are based on the multiplicity of populations. It is through _Mediality and its instances that chunks of information start to behave as a device, as a quasi-material arrangement. The whole dynamics between _Concepts and _Models requires a symbol system, which can evolve only through the reference to _Mediality, which in turn is implied by populations of processes.

Above we said that the motivation for this structure is to provide a space of expressibility for mental phenomena in their entirety. Mental activity does not consist of isolated, rare events. It is a multitude of flows integrated into various organizational levels, even if we considered only the language part. Mapping these flows into our space raises the question whether we could distinguish different attractors, different forms of recurrence.

Addressing this question establishes an interesting configuration, since we are talking about the form of mental activities. Perhaps it is also appropriate to call these forms “mental styles”. In any case, we may take our space as a tool to formalize the question about potential classes of mental styles. In order to render our space more accessible, we take the tetrahedral body as a (crude) approximating metaphor for it.

Above we stressed the point that any explication intensifies the _Model aspect. Transposed into a Cartesian geometry we would have said—metaphorically—that explication moves us towards the corner of the model. Let us stick to this primitive representation for a moment, in favour of a more intuitive understanding. Now imagine constructing a vector that points away from the model corner, right to the middle of the area spanned by virtuality, mediality and concept. It is pretty clear that mental activity that leaves the model behind in this way, and quite literally so, will be some form of basic belief, or revelation. Religiosity (as a mental activity) may well be described as the attempt to balance virtuality, mediality and concept without resorting to any kind of explication, i.e. models. Of course, this is not possible in an absolute manner, since it is not possible to move in the aspectional space without any explication. This in turn yields a residual that again points towards the model corner.

Inversely, it is not possible to move only in the direction of the _Model. Nevertheless, there are still many people proposing just that; think for instance of (abundant as well as overdone) scientism. What we can see here are particular forms of mental activity. What about other forms? For instance, the fixed-point attractor?

As we have seen, our aspectional space does not allow for points as singularities. Both the semantics of the aspections and the structure of the space as a second-order differential prevent them. Yet, somebody could attempt to realize an orbit around a singularity that is as narrow as possible. Although such points of absolute stability are completely illusory, the idea of the absoluteness of ideas—idealism—represents just such an attempt. Yet, the claim of absoluteness brings mental activity to rest. It is not by accident, therefore, that it was the logician Frege who championed a rather strange kind of hyperplatonism.

At this point we can recognize the possibility of describing different forms of mental activity using our space. Mental activity draws specific trails into our space. Moreover, our suggestion is that people prefer particular figures for whatever reasons, e.g. due to their cultural embedding, their mental capabilities, their knowledge, or even due to their basic physical constraints. Our space allows us to compare, and perhaps even to construct or evolve, particular figures. Such figures could be conceived as the orthoregulative instance for the conditions to know. Epistemology thus loses its claim to universality.

It seems obvious to call our space a “choreostemic” space, a term which refers to choreography. Choreography means to “draw a dance”, or “drawing by dancing”, derived from Greek choreia (χορεία), „dancing, (round) dance”. Vera Bühlmann [19] described that particular quality as “referring to an unfixed point loosely moving within an occurring choreography, but without being orchestrated prior to and independently of such occurrence.”

The notion of the choreosteme also refers to the chorus of the ancient theatre, with all its connotations, particularly the drama. Serving as an announcement for part V of D&R, Deleuze writes:

However, what carries out the third aspect of sufficient reason—namely, the element of potentiality in the Idea? No doubt the pre-quantitative and pre-qualitative dramatisation. It is this, in effect, which determines or unleashes, which differenciates the differenciation of the actual in its correspondence with the differentiation of the Idea. Where, however, does this power of dramatisation come from? (DR221)

It is right here that the choreostemic space links in. The choreostemic space does not abolish the dramatic in the transition from the conditionability of Ideas into concrete thoughts, but it allows us to trace and to draw, to explicate and negotiate the dramatic. In other words, it opens the possibility for a completely new game: dealing with mental attitudes. Without the choreostemic space this game is not even visible, which itself has rather unfortunate consequences.

The choreostemic space is not an epistemic space either. Epistemology is concerned with the conditions that influence the possibility to know. Literally, episteme means “to stand near”, or “to stand over”. It draws upon a fixed perspective that is necessary to evaluate something. Yet, in the last 150 years or so, philosophy has definitely experienced the difficulties implied by epistemology as an endeavour that had been expected to contribute, finally, to the stabilization of knowledge. I think the choreostemic space could be conceived as a tool that allows us to reframe the whole endeavour. In other words, the problematic field of the episteme and the related research programme “epistemology” follow an architecture (or intention) that has been set up far too narrowly. That reframing, though, has become accessible only through the “results” of—or the tools provided by—the work of Wittgenstein and Deleuze. Without the recognition of the role of language and without a renewal of the notion of the virtual, including the invention of the concept of the differential, that reframing would not have been possible at all.

Before we go on to discuss the scope of the choreostemic space and the purposes it can serve, we have to correct the Cartesian view that slipped in through our metaphorical references. The Cartesian flavour not only keeps a certain arbitrariness alive, as the four conceptual aspects _A are given just by some subjective empirical observations. It also keeps us stuck completely within the analytical space, hence within a closed approach that again would need a mystical external instance for its beginning. This we have to correct now.

7. Reason and Sufficiency

Our choreostemic space is built as an aspectional space that is spanned by transcendental entities. As such, they reflect the implied conditionability of concrete entities like definitions, models or media. The _Concept comprises any potential concrete concept, the _Model comprises any actual model of whatsoever kind, expressed in whatsoever symbolic system, and the _Mediality contains the potential for any kind of media, whether more material or more immaterial in character. The transcendental status of these aspects also means that we never can “access” them in their “pure” form. Yet, due to these properties our space allows us to map any mental activity, not just that of the human brain. In a more general perspective, our space is the space where the _Comparison takes place.

The choreostemic space is of course itself a model. Given the transcendentality of the four conceptual aspects _A, we can grasp the self-referentiality. Yet, this results neither in an infinite regress nor in circularity. That would be the case only if the space were Cartesian and the topological structure flat (Euclidean) and global.

First, we have to consider that the choreostemic space is not only a model, precisely due to its self-referentiality. Second, it is a tool, and as such it is not time-inert like a physical law. Its relevance unfolds only if it is used. This, however, invokes time and activity. Thus the choreostemic space could also be conceived as a means to intensify the virtual aspects of thought. Furthermore, and third, it is of course a concept, that is, an instance of the _Concept. As such, it should be constructed in a way that abolishes any possibility of a Cartesio-Euclidean regression. All these aspects are covered by the topological structure of the choreostemic space: it is meant to be a second-order differential.

A space made by the second-order differential does not contain items. It spawns procedures. In such a space it is impossible to stay at a fixed point. Whenever one tried to determine a point, one would be accelerated away. The whole space causes divergence of mental activities. Here we find the philosophical reason for the impossibility of catching a thought as a single entity.

We just mentioned that the choreostemic space does not contain items. Due to the second-order differential it is not made up as a set of coordinates, or, if we considered real scaled dimensions, as potential sets of coordinates. Quite the opposite: there is nothing determinable in it. Yet, in hindsight we can reconstruct figures in a probabilistic manner. The subject of this probabilism is again not determinable coordinates, but rather clouds of probabilities, quite similar to the way things are described in quantum physics by the Schrödinger equation. Unlike the completely structureless and formless clouds of probability used in the description of electrons, the figures in our space can take various, more or less stable forms. This means that we can try to evolve certain choreostemic figures and even anticipate them, but only to a certain degree. The attractor of a chaotic system provides a good metaphor for that: we clearly can see the traces in parameter space as drawn by the system, yet the system’s path as described by a sequence of coordinates remains unpredictable. Nevertheless, the attractor is probabilistically confined to a particular, yet cloudy “figure,” that is, an unsharp region in parameter space. Transitions are far from arbitrary.
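The attractor metaphor can be made tangible with a few lines of Python. This is a plain Euler integration of the classical Lorenz system with its conventional parameters; the point is only that the bounded, cloudy “figure” coexists with pointwise unpredictability.

```python
# A minimal Lorenz-system sketch: the trajectory is pointwise unpredictable,
# yet confined to a bounded, cloudy "figure" in state space.
import numpy as np

sigma, rho, beta, dt = 10.0, 28.0, 8.0 / 3.0, 0.005
x = np.array([1.0, 1.0, 1.0])
trace = []
for _ in range(20000):
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    x = x + dx * dt                 # simple Euler step
    trace.append(x)
trace = np.array(trace)
print(trace.min(axis=0), trace.max(axis=0))
# the whole erratic path stays inside a bounded region -- the "attractor"
```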

Hence, we would propose to conceive of the choreostemic space as being made up of probabilistic situs (pl.). Transitions between situs are at the same time also transformations. The choreostemic space is embedded in its own mediality, without excluding roots in external media.

Above we stuffed the space with a hyperbolic topology in order to align it with the transcendentality of the conceptual aspects. It is quite important to understand that the choreostemic space does not implement a single, i.e. global, hyperbolic relation. In contrast, each situs serves as a point of reference. Without this relativity, the choreostemic space would be centred again, and in consequence it would turn again to the analytic and totalising side. This relativity can be regarded as the completed and subjectivising Cartesian delocalization of the “origin”. It is clear that the distance measures of any two such relative hyperbolic spaces do not coincide any more. There is neither apriori objectivity, nor could we expect a general mapping function. Approximate agreement about distance measures may be achievable only for reference systems that are rather close to each other.

The choreostemic space comprises any condition of any mental attitude or thought. We already mentioned it above: the corollary of that is that the choreostemic space is the space of _Comparison as a transcendental category.

It comprises the conditions for the whole universe of Ideas; it is an entirety. Here, it is again the topological structure of the space that saves us from mental dictatorship. We have to perform a double instantiation in order to arrive at a concrete thought. It is somewhat important to understand that these instantiations are orthoregulated.

It is clear that the choreostemic space destroys the idea of a uniform rationality. Rationality can’t be tied to truth, justice or utility in an objective manner, even if we softened objectivity into a kind of relaxed intersubjectivity. Rationality depends completely on the preferred or practiced figures in the choreostemic space. Two persons, or more generally two entities with some mental capacity, could completely agree on the facts, that is, on the percepts, the way of their construction, and the relations between them, but nevertheless assign them completely different virtues and values, simply because the two entities inhabit different choreostemic attractors. Rationality is global within a specific choreostemic figure, but local and relative with regard to that figure. The language game of rationality therefore does not refer to a particular attitude towards argumentation; quite in contrast, it includes and displays the will to establish, if not enforce, uniformity. Rationality is the label for the will to power under the auspices of logic and reductionism. It serves as the display for certain, quite critical moral values.

Thus, the notion of sufficient reason loses its frightening character as well. Like any other principle of practice, it gets transformed into a strictly local principle, retaining some significance only with regard to situational instrumentality. Since the choreostemic space is a generative space, locality comprises temporal locality as well. According to the choreostemic space, sufficient reasons can’t even be transported between subsequent situations. In terms of the choreostemic space, notions like rationality or sufficient reason are relative to a particular attractor. In different attractors their significance could be very different; they may bear very different meanings. Viewed from the opposite direction, we also can see that a more or less stable attractor in the choreostemic space has first to form, or to be formed, before there is even the possibility of sufficient reasons. This runs strictly parallel to Wittgenstein’s conception of logic as a transcendental apriori that possibly becomes instantiated only within the process of an unfolding Lebensform. As a contribution to political reason, the choreostemic space enables us to relate to persons inhabiting different attractors, following different mental styles. Later, we will return to this aspect.

In D&R, Deleuze explicated the concept of the “Image of Thought”, as part III of D&R is titled. There he first discusses what he calls the dogmatic image of thought, comprising, according to him, eight elements that together lead to the concept of the idea as a representation (DR167). Following that, he insists that the idea is bound to repetition and difference (as differenciation and differentiation), where repetition introduces the possibility of the new, as it is not the repetition of the same. Nevertheless, Deleuze didn’t develop this Image into a multiplicity, as could have been expected from a more practical perspective, i.e. the perspective of language games, even though he emphasizes at several instances that language is a rich play.

It seems to me that Deleuze didn’t (want to) get rid of ontology; hence he did not conceive of his great concept of the “differential” as a language game, and in turn he missed the opportunity for self-referentiality, or even to apply it in a self-referential manner. We therefore certainly do not agree with his attempt to ground the idea of sufficient reason as a global principle. Since “sufficient reason” is a practice, I think it is not possible, or not sufficient, to conceive of it as a transcendental guideline.

8. Elective Kinships

It is pretty clear that the choreostemic space is applicable to many problematic fields concerning mental attitudes, and hence concerning cultural issues at large, reaching far beyond the specificity of individual domains.

As we will see, the choreostemic space may serve as a treatment for several kinds of troublesome aberrations, in philosophy itself as well as in its various applications. Predominantly, the choreostemic space provides the evolutionary perspective towards the self-containing theoretical foundation of plurality and manifoldness.26 Comparing that with Hegel’s slogans of “the synthesis of the nation’s reason“ (“Synthese des Volksgeistes“) or „The Whole is the Truth“ („Das Ganze ist das Wahre“) shows the difference regarding its level and scope.

Before we go into the details of the dynamics that unfolds in the choreostemic space, we would like to pick up on two areas, the philosophy of the episteme and the relationship between anthropology and philosophy.

8.1. Philosophy of the Episteme

The choreostemic space is not about a further variety of some epistemological argument. It is intended as a reframing of the concerns that have been addressed traditionally by epistemology. (Here, we already would like to warn of the misunderstanding that the choreostemic space is exhausted by epistemology.) Hence, it should be able to serve as the theoretical frame for the sociology of science or the philosophy of science as well. Think about the work of Bruno Latour [9], Karin Knorr Cetina [10] or Günther Ropohl [11] for the sociology of science, or the work of van Fraassen [12] or Giere [13] for the field of philosophy of science. Sociology and philosophy, and quite likely any of the disciplines in the human sciences, should indeed establish references to the mental in some way, but rather not to the neurological level, nor—since we have to avoid anthropological references—to cognition as it is currently understood in psychology.

Giere, for instance, brings the “cognitive approach” and hence the issue of practical context close to the understanding of science, criticizing the idealising projection of unspecified rationality:

Philosophers’ theories of science are generally theories of scientific rationality. The scientist of philosophical theory is an ideal type, the ideally rational scientist. The actions of real scientists, when they are considered at all, are measured and evaluated by how well they fulfill the ideal. The context of science, whether personal, social or more broadly cultural, is typically regarded as irrelevant to a proper philosophical understanding of science. (p.3)

The “cognitive approach” that Giere proposes as a means to understand science is, however, seriously threatened by the fact that there is no consensus about the mental. This clearly conflicts with the claim of trans-cultural objectivity of contemporary science. Concerning cognition, there are still many simplistic paradigms around, recently seriously renewed by the machine learning community. Aaron Ben Ze’ev [14] writes critically:

In the schema paradigm [of the mind, m.], which I advocate, the mind is not an internal container but a dynamic system of capacities and states. Mental properties are states of a whole system, not internal entities within a particular system. […] Novel information is not stored in a separate warehouse, but is ingrained in the constitution of the cognitive system in the form of certain cognitive structures (or schemas). […] The attraction of the mechanistic paradigm is its simplicity; this, however, is an inadequate paradigm, because it fails to explain various relevant phenomena. Although the complex schema paradigm does not offer clear-cut solutions, it offers more adequate explanations.

How problematic even such critiques are can be traced as soon as we remember Wittgenstein’s remark on “mental states” (Brown Book, p.143):

There is a kind of general disease of thinking which always looks for (and finds) what would be called a mental state from which all our acts spring as from a reservoir.

In the more general field of epistemology there is still no sign of any agreement about the concept of knowledge. From our position, this is hardly surprising. First, concepts can’t be defined at all. All we can find are local instances of the transcendental entity. Second, knowledge, and even its choreostemic structure, is dependent on the embedding culture, while at the same time it is forming that culture. The figures in the choreostemic space are attractors: they do not prescribe the next transformation, but they constrain the possibility for it. How, then, could one “define” knowledge in an explicit, positively representationalist manner? For instance, knowledge can’t be reduced to confirmed hypotheses qua validated models. It is impossible in principle to say “knowledge is…”, since this inevitably implies the demand for an objective justification. At most, we can take it as a language game. (Thus the choreosteme, that is, the potential of building figures in the choreostemic space, should not be conflated with the episteme! We will return to this issue later.)

Yet, just pointing to the category of the mental as a language game does not feel satisfying at all. Of course, Wittgenstein’s work sheds bright light on many aspects of mentality. Nevertheless, we can’t use Wittgenstein’s work as a structure; it is itself to be conceived as a result of a certain structuredness. On the other hand, it is equally disappointing to rely on the scientific approach to the mental. In some way, we need a balanced view, which additionally should provide the possibility for a differential experimentation with mechanisms of the mental.

Just that is offered by the choreostemic space. We may relate disciplinary reductionist models to concepts as they live in language games, without any loss and without getting into trouble.

Let us now see what is possible by means of the choreostemic space and the anti-ontological T-Bar-Theory for the terms believing, referring, explicating, understanding and knowing. It might be relevant to keep in mind that by “mental activities” we do not refer to any physical or biochemical process. We distinguish the mental from the low-level affairs in the brain. Beliefs, or believing, are thus considered to be language games. From that perspective our choreostemic space just serves as a tool to externalize language in order to step outside of it, or likewise, to become able to render visible important aspects of playing the language game.

Believing

The category of beliefs, or likewise the activity of believing27, we already met above. We characterised it as a mental activity that leaves the model behind. We sharply refute the quite abundant conceptualisation of beliefs as a kind of uncertainty in models. Since there is no certainty at all, not even with regard to transcendental issues, such a conceptualisation would make little sense. Actually, the language game of believing shows its richness even in a short investigation like this one.

Before we go into details here, let us see how others conceive of it. P.M.S. Hacker [27] gave the following summary:

Over the last two and a half centuries three main strands of opinion can be discerned in philosophers’ investigations of believing. One is the view that believing that p is a special kind of feeling associated with the idea that p or the proposition that p. The second view is that to believe that p is to be in a certain kind of mental state. The third is that to believe that p is to have a certain sort of disposition.

Right at the beginning of his investigation, Hacker marks the technical, reductionist perspective on belief as a misconception. This technical reductionism, which took form as the so-called AGM-theory in the paper by Alchourron, Gärdenfors and Makinson [28], we will discuss below. Hacker writes about it:

Before commencing analysis, one misconception should be mentioned and put aside. It is commonly suggested that to believe that p is a propositional attitude. That is patently misconceived, if it means that believing is an attitude towards a proposition. […] I shall argue that to believe that p is neither a feeling, nor a mental state, nor yet a disposition to do or feel anything.

Obviously, believing has several aspects. First, it is certainly a kind of mental activity. It seems that I need not tell anybody that I believe in order to be able to believe. Second, it is a language game, and a rich one indeed. It seems almost to be omnipresent. As a language game, it links “I believe that” with “I believe A” and “I believe in A”. We should not overlook, however, that these utterances are spoken towards someone else (even in inner speech); hence the whole wealth of processes and relations of interpersonal affairs has to be regarded: all those mutual ascriptions of roles, assertions, maintained and demonstrated expectations, displays of self-perception, attempts to induce a certain co-perception, and so on. We frequently cited Robert Brandom, who analysed all that in great detail in his “Making it Explicit”.

Yet, can we really say that believing is just a mental activity? For one thing, we did not claim above that believing is something like a “pure” mental activity. We clearly would reject such a claim. First, we clearly cannot assign the mental as such a transcendental status, as this would lead straight to a system like Hegel’s philosophy, with all its difficulties, untenable claims and disastrous consequences. Second, it is impossible to explicate “purity”, as this would deny the fact that models are impossible without concepts. So, is it possible that a non-conscious being or entity can believe? Not quite, I would like to propose. Such an entity will of course be able to build models, even quite advanced ones, though probably not about reflective subjects such as concepts or ideas. It could experience that it cannot get rid of uncertainty and its closely related companion, risk. Thus we can say that these models are not propositions “about” the world; they comprise uncertainty and allow the entity to deal with uncertainty through actions in the world. Yet, the ability to deal with uncertainty is certainly not the same as believing. We would not need the language game at all. Saying “I believe that A” does not mean having a certain model with a particular predictive power available. As models are explications, expressing a belief, or experiencing the compound mental category “believing”, is just the demonstration that any explication is impossible for the person.

Note that we conceive of “belief” as completely free of values and also without any reference to mysticism. Indeed, the choreostemic space allows us to distinguish different aspects of the “compound experience” that we call “belief”, which otherwise are not even visible as separate aspects of it. As a language game we thus may specify it as the indication that the speaker assigns—or the listener is expected to assign—a considerable portion of the subject matter to that part of the choreostemic figure that points away from the _Model. It is immediately clear from the choreostemic space that mental activity without belief is not possible. There is always a significant “rest” that could not be covered by any kind of explication. This is true for engineering, and of course for any kind of social interaction, as soon as mutual expectations appear on the stage. By means of the choreostemic space we also can understand the significance of trust in any interaction with the external world. In communicative situations, this quickly may lead to a game of mutual deontic ascriptions, as Robert Brandom [15] has argued in his “Making it Explicit”.

Interestingly enough, belief (in its choreostemically founded version) is implied by any transition away from the _Model, for instance also in the case of the transition path that ultimately is heading towards the _Concept. Even more surprising—at first sight—and particularly relevant is the “inflection dynamics” in the choreostemic space. The more one tries to explicate something, the larger the necessary imports (e.g. through orthoregulations) from the other _a, and hence the larger the propensity for an inflecting flip.28

As an example, take the historical development of theories in particle physics. There, people started with rather simple experimental observations, which then were assimilated by formal mathematical models. Those in turn led to new experiments, and so forth, until physics reached a level of sophistication where “observations” are based on several, if not many, layers of derived concepts. On the way, structural constants and heuristic side conditions are implied. Finally, then, the system of the physical model turns into an architectonics, a branched compound of theory-models, that sounds as trivial as it is conceptual. In the case of physics, it is the so-called grand unified theory. There are several important things here. First, due to the large amounts of heuristic settings and orthoregulations, such concepts can’t be proved or disproved anymore, least of all by empirical observations. Second, on the achieved level of abstraction, the whole subject could be formulated in a completely different manner. Note that such a dynamic between experiment, model, theory29 and concept has never been described in a convincing manner before.30

Now that we have a differentiated picture of belief at our disposal, we can briefly visit the field of so-called belief revision. Belief revision has been widely adopted in artificial intelligence and machine learning as the theory for updating a database. Quite unfortunately, the whole theory is, well, simply crap, if we were to apply it according to its intention. I think that from this mismatch we can draw some significance of the choreostemic space for a more appropriate treatment of beliefs in information technology.

The theory of belief revision was put forward by a branch of analytical philosophy in a paper by Alchourron, Gärdenfors and Makinson (1985) [29], often abbreviated as “AGM-theory.” Hansson [30] writes:

A striking feature of the framework employed there [monnoo: AGM] is its simplicity. In the AGM framework, belief states are represented by deductively closed sets of sentences, called belief sets. Operations of change take the form of either adding or removing a specified sentence.

Sets of beliefs are held by an agent, who establishes or maintains purely logical relations between the items of those beliefs. Hansson correctly observes:

The selection mechanism used for contraction and revision encodes information about the belief state not represented by the belief set.

Obviously, such “belief sets” have nothing to do with beliefs as we know them from the language game; they are a misdone caricature of them. As with Pearl [23], the interesting stuff is left out: how to achieve those logical sentences at all, notably by a non-symbolic path of derivation? (There are no symbols out there in the world.) By means of the choreostemic space we easily derive the answer: by an orthoregulated instantiation of a particular choreostemic performance in an unbounded (open) aspectional space that spans between transcendental entities. Since the AGM framework starts with, or presupposes, logic, it simply got stuck in the symbolistic fallacy, or illusion. Accordingly, Pollock & Gillies [30] demonstrate that “postulational approaches” such as the AGM-theory can’t work within a fully developed “standard” epistemology. The two are simply incompatible with each other.
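For readers unfamiliar with the framework, the following deliberately bare Python sketch renders the picture Hansson describes; deductive closure is idealized away, and the revision operator follows the Levi identity. Its very poverty illustrates the criticism above: the selection mechanism, i.e. what else ought to be given up, is nowhere in the set.

```python
# A deliberately bare sketch of AGM-style operations on a "belief set",
# read naively as a plain set of sentences (deductive closure omitted).

def expand(belief_set, sentence):
    """AGM expansion: simply add a sentence."""
    return belief_set | {sentence}

def contract(belief_set, sentence):
    """AGM contraction: give up a sentence. The selection mechanism that
    decides what *else* must go is exactly what the set cannot encode."""
    return belief_set - {sentence}

def revise(belief_set, sentence, negation):
    """AGM revision via the Levi identity: contract the negation, then expand."""
    return expand(contract(belief_set, negation), sentence)

K = {"p", "p -> q", "q"}
print(revise(K, "not q", "q"))
# -> {'p', 'p -> q', 'not q'} (set order arbitrary). Under closure this is
# inconsistent: "p" and "p -> q" still entail "q", and nothing in the set
# tells us which of them should have been given up instead.
```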

Explicating

Closely related to believing is explicating, the latter being just the inverse of the former, pointing in the “opposite direction”. Explicating is almost identical to describing a model. The language game of “explication” means to transform, to translate and to project choreostemic figures into lists of rules that could be followed, in other words, into the sayable. Of course, this transformation and projection is neither analytic nor neutral. We must be aware of the fact that even a model can’t be explicated completely. Furthermore, this rule-following itself implies the necessity of beliefs and trust, and it requires a common understanding about the usage or the influence of orthoregulations. In other words, without an embedding into a choreostemic figure we can’t accomplish an explication.

Understanding, Explaining, Describing

Outside of the perspective of the language game, “understanding” can’t be understood. Understanding emerges as a result of relating the items of a population of interpretive acts. This population and the relations imposed on it are closely akin to Heidegger’s scaffold (“Gestell”). Mostly, understanding something is just extending an existent scaffold. About these relations we can’t speak clearly or in an explicit manner any more, since these relations are constitutive parts of the understanding. As with all language games, this one too unfolds in social situations, which need not be syntemporal. Understanding is a confirming report about beliefs in, and expectations of, certain capabilities of one’s own.

Saying “I understand” may convey different meanings. More precisely, understanding may come in different shades that are placed between two configurations. Either it signals that one believes oneself able to extend just one’s own scaffold, one’s own future “Gestelltheit”. Alternatively, it is used to indicate the belief that the extension of the scaffold is shared between individuals in such a way that one is able to reproduce the same effect as anyone else who understands the same thing. This effect could be merely instrumental or, more significantly, it could refer to the teaching of further pupils. In this case, two people understand something if they can teach another person to the same ends.

Besides the performative and social aspects of understanding, there are of course the mental aspects of the concept of “understanding” something. These can be translated into choreostemic terms. Understanding is less a particular “figure” in the choreostemic space than a deliberate visiting of the outer regions of the figure and the intentional exploration of those outposts. We understand something only in case we are aware of the conditions of that something and of our personal involvements. These include cognitive aspects, but also the consequences of the performative parts of acts that contribute to an intensifying of the aspect of virtuality. A scientist who builds a strong model without considering his own and its conditionability does not understand anything. He would just practice a serious sort of dogma (see Quine on the dogmas of empiricism here!). Such a scientist’s modeling could be replaced by that of a machine.

A similar account could be given of the application of a grammar, irrespective of the abstractness of that grammar. Referring to a grammar without considering its conditionability could be performed by a mindless machine as well. It would indeed remain a machine: mindless, and forever determined. Such is most, if not all, of the computer software dealing with language today.

We again would like to emphasize that understanding is not exhausted by the ability to write down a model. Understanding means to relate the model to concepts, that is, to trace a possible path that would point towards the concept. A deep understanding refers to the ability to extend a figure towards the other transcendental aspects in a conscious manner. Hence, within idealism and (any sort of) representationalism understanding is actually excluded. They mistake the transcendental for the empirical and vice versa, ending in a strict determinism and dogmatism.

Explaining, in turn, indicates the intention to make somebody else understand a certain subject. The infamous existential “Why?” does not make any sense. It is not just that it is questionable why this language game should be performed at all, since the why of absolute existence can’t be answered at all; actually, the game seems to be quite different from that. As a matter of fact, we indeed play it in a well comprehensible way and in many social situations. Conceiving the “explanation” of nature as accounting for its existence (as Epperson does, see [31] p.357) presupposes that everything could be turned into the sayable. It would result in the conflation of logic and the factual world, something Epperson indeed proposes. Some pages later in his proposal about quantum physics he seems to loosen that strict tie when, referring to Whitehead, he links “understanding” to coherence and empirical adequacy ([31] p.361):

I offer this argument in the same speculative philosophical spirit in which Whitehead argued for the fitness of his metaphysical scheme to the task of understanding (though not “explaining”) nature—not by the “provability” of his first principles via deduction or demonstration, but by their evaluation against the metrics of coherence and empirical adequacy.

Yet, this presents us with an almost perfect phenomenological stance, separating objects from objects and subjects. Neither coherence nor empirical adequacy can be separated from concepts, models and the embedding Lebenswelt. It thus expresses the belief in “absolute” understanding and final reason. Such ideas are at least highly problematic, even and especially if we take into account the role Whitehead gives to “value” as a cosmological apriori. It is quite clear that this attitude towards understanding is sharply different from anything related to semiotics, the primacy of interpretation, the role of language or a relational philosophy, in short, from anything that even remotely resembles what we proposed about the understanding of understanding a few lines above.

The intention to make somebody else understand a certain subject necessarily implies a theory, where theory is understood (as we always do) as a milieu for deriving or inventing models. The “explaining game” comprises the practice of providing a general perspective to the recipient such that she or he could become able to invent such a model, precisely because a “direct” implant of an idea into someone else is quite impossible. This milieu involves orthoregulation and a grammar (in the philosophical sense). The theory and the grammar associated or embedded with it do nothing else than provide support for finding a possibility for the invention or extension of a model. It is a matter of persistent exchange of models from a properly grown population of models that allows a common understanding about something to develop. In the end we then may say “yes, I can follow you!”

Describing is often not (properly) distinguished from explaining. Yet, in our context of choreostemically embedded language games it is neither mysterious nor difficult to do so. We may conceive of describing just as explicating something into the sayable; the element of cross-individual alignment is not part of it, or is at least present in a much less explicit way. Hence, usually the respective declaration will not be made. The element of social embedding is much less present.

Describing pretends, more or less, that all three aspects accompanying the model aspect could be neglected, particularly however the aspects of mediality and virtuality. The mathematical proof can be taken as an extreme example of that. Yet even there such a neglect is not possible, since at least a working system of symbols is needed, which in turn is rooted in a dynamics unfolding as a choreostemic figure, the mental aspect of Forms of Life. Basically, this impossibility of fixing a “position” in the choreostemic space is responsible for the so-called foundational crisis in mathematics. This crisis prevails even today in philosophy, where many people naively enough still search for absolute justification, or truth, or at least regard such a thing as a reasonable concept.

All this should not be understood as an attempt to deny description or describing as a useful category. Yet, we should be aware that the difference to explaining is just one of (choreostemic) form. More explicitly, said difference is an affair of culturally negotiated portions of the transcendental aspects that make up mental life.

I hope this sheds some light on Wittgenstein’s claim that philosophy should just describe, but not explain anything. The possibly perceived mysteriousness may vanish as well, if we remember his characterisation of grammar.

Both understanding and explaining are quite complicated socially mediated processes; hence they unfold upon layers of milieus of mediality. Both not only relate to models and concepts that need to exist in advance, and thus to a particular dynamics between them, they also require a working system of symbols. Models and concepts relate to each other only as instances of _Models and _Concepts, that is, in a space as it is provided by the choreostemic space. Talking about understanding as a practice is not possible without it.

Referring

Referring to something means to point to the expectation that the referred entity could point to the issue at hand. Referring is not “pointing to” and hence does not consist of a single move. It is “getting pointed to”. Said expectation is based on at least one model. Hence, if we refer to something, we put our issue as well as ourselves into the context of a chain of signifiers. If we refer to somebody, or to a named entity, then this chain of interpretive relations transforms in one of two ways.

Either the named entity is used, that is, put into a functional context, or more precisely, assigned a sayable function. The functionalized entity does not (need to) interpret any more; all activity gets centralized, which could be used as the starting point for totalizing control. This applies to any entity, whether it is merely material, living, or social.

The second way in which referencing is affected by names concerns the reference to another person, or a group of persons. If it is not a functional relationship, e.g. taking the other as a “social tool”, it is less the expected chaining as signifier performed by the other person. Persons cannot be interpreted the way we interpret things or build signs from signals. Referring to a person means to accept the social game that comprises (i) mutual deontic assignments that develop into “roles”, including deontic credits and their balancing (as first explicated by Brandom [15]), (ii) the acceptance of the limit of the sayable, which results in a use of language that is more or less non-functional, always metaphorical and sometimes even poetic, as well as (iii) the declared persistence for repeated exchanges. The fact that we interpret the utterances of our partner within the orthoregulative milieu of a theory of mind (which builds up through these interpretations) means that we mediatize our partner at least partially.

The limit of the sayable is a direct consequence of the choreostemic constitution of performing thinking. The social is based on communication, which means “to put something in common”; hence, we can regard “communication” as the driving, extending and public part of using sign systems. As a proposed language game, “functional communication” is nonsense, much like the utterance “soft stone”.

By means of the choreostemic space we can also see that any referencing is equal to a more or less extensive figure, as models, concepts, performance and mediality are involved.

Knowing

At first sight, we could suspect that before any instantiation qua choreostemic performance we cannot know something positively for sure in a global manner, i.e. objectively, as is often meant to be expressed by the substantive “knowledge”. Due to that performance we have to interpret before we could know positively and objectively. The result is that we can never know anything for sure in a global manner. This holds even for transcendental items, that is, for what Kant dubbed “pure reason”. Nevertheless, the language game “knowledge” has a well-defined significance.

“Knowledge” is a reasonable category only with respect to performing, interpreting (performance in thought) and acting (organized performance). It is bound to a structured population of interpretive situations, to Peircean signs. We thus find a gradation of privacy vs. publicness with respect to knowledge. We just have to keep in mind that neither of these qualities could be thought of as being “pure”. Pure privacy is not possible, because there is nothing like a private language (meaning qua usage and shared reference). Pure publicness is not possible because there is the necessity of a bodily rooted interpreting mechanism (associative structure). Things like “public space” as a purely exterior or externalized thing do not exist. The relevant issue for our topic of a machine-based episteme is that functionalism always ends in a denial of the private language argument.

We now can see easily why knowledge could not be conceived as a positively definable entity that could be stored or transferred as such. First, it is of course a language game. Second, and more importantly, “knowing {of, about, that}” always relates to instances of transcendental entities, and necessarily so. Third, even if we could agree on some specific way of instantiating the transcendental entities, it always invokes a particular figure unfolding in an aspectional space. This figure can’t be transferred, since that would mean that we could speak about it from outside of itself. Yet, that’s not possible, since it is in turn impossible to just pretend to follow a rule.

Given this impossibility we should dwell for a moment on the apparent gap it opens towards teaching. How to teach somebody something if knowledge can’t be transferred? The answer is furnished by the equipment that is shared among the members of a community of speakers or co-inhabitants of the choreostemic space. We need this equipment for matching the orthoregulation of our rule-following. The parts, tools and devices of this equipment are made from palpable traditions, cultural rhythms, institutions, individual and legal preferences regarding the weighting of individuals versus the various societal clusters, the large story of the respective culture and the “templates” provided by it, the consciously accessible time horizon, both to the past and the future31, and so on. Common sense wrongly labels the resulting “setup” a “body of values”. More appropriately, we could call it grammatical dynamics. Teaching, then, is in some ways more about the reconstruction of the equipment than about the agreement on facts, albeit the arrangement of the facts may tell us a lot about the grammar.

Saying ‘I know’ means that one wants to indicate that she or he is able to perform choreostemically with regard to the subject at hand. In other words, it is a label for a pointer (say, a reference) to a particular image of thought and its use. This includes the capability of teaching and explaining, which are probably the only way to check whether somebody really knows. We cannot, however, claim that we are aligned to a particular choreostemic dynamics. We can only believe that our choreostemic moves are part of a supposed attractor in the choreostemic space. From that it also follows that knowledge is not just about facts, even if we would conceive of facts as a compound of fixed relations and fixed things.

The traditional concern of epistemology as the discipline that asks about the conditions of knowing and knowledge must be regarded as a misplaced problem. Usually, epistemology does not refer to virtuality or mediality. Moreover, in epistemology knowledge is often sharply separated from belief, yet for the wrong reasons. The formula of “knowledge as justified belief” puts them both onto the same stage. It then would have to be clarified what “justified” should mean, which in turn is not possible. Explicating “justifying” would need reference to concepts and models, or rather the confinement to a particular one: logic. Yet, knowledge and belief are completely different with regard to their role in choreostemic dynamics. While belief is an indispensable element of any choreostemic figure, knowledge is the capability to behave choreostemically.

8.2. Anthropological Mirrors

Philosophy suffers even more from a surprising strangeness. As Marc Rölli recently remarked [34] in his large work on the relations between anthropology and philosophy (KAV),

For more than 200 years philosophy has been anthropologically determined. Yet philosophy has not investigated the relevance of this fact to any significant extent. (KAV15)32

Rölli agrees with Nietzsche regarding his critique of idealism.

“Nietzsche’s critique of idealism, which is available in many nuances, always targeting the philosophical self-misunderstanding of the pure reason or pure concepts, is also directed against a certain conception of nature.” (KAV439)33.

…where the rejected conception of nature is purposefulness. In nature there is no forward-directed purpose, no plan. Such ideas are either due to religious romanticism or due to a serious misunderstanding of the Darwinian theory of natural evolution. In biological nature, there is only a blind tendency towards the preference of an intensified capability for generalization34. Since, and including, Kant, and in some way already since Descartes, philosophy has been influenced by scientific, technological or anthropological conceptions about nature in general, or about the nature of the human mind.

This is problematic for (at least) three reasons. First, it constitutes a misunderstanding of the role of philosophy to rely on scientific insights. Of course, this perspective is becoming visible (again) only today, notably after the Linguistic Turn, as far as non-analytical philosophy is concerned. Secondly, however, it is clear that the said influence implies, if it remains unreflected, a normative tie to empirical observations. This clearly represents a methodological shortfall. Thirdly, even if one would accept a certain link between anthropology and philosophy, the foundations taken from a “philosophy of nature”35 are so simplistic that they could hardly be regarded as viable.

This almost primitive image of a purposeful nature finally flowed into the functionalism of our days, whether in philosophy (Habermas) or in so-called neuro-philosophy, by which many feel inclined to establish a variety of determinism that is even proto-Hegelian.

In the same passage that invokes Nietzsche’s critique, Rölli cites Friedrich Albert Lange [39]:

“The topic that we actually refer to can be denoted explicitly. It is, so to speak, the apple in the logical fall of German philosophy subsequent to Kant: the relation between subject and object within knowledge.” (KAV443)36

Lange deliberately attests Kant—in contrast to the philosophers of German Idealism—to be clear about that relationship. For Kant, subject and object constitute themselves only as an amalgam; the “pure” whatsoever has been claimed only by Hegel, Schelling and their epigones and heirs. The intention behind introducing pureness, according to Lange, is to support absolute reason or absolute understanding, in other words, eternally justified reason and the undeniability of certain concepts. Note that German Idealism was born before the foundational crisis in mathematics, which started with Russell’s remark on Frege’s “Begriffsschrift” and his “all” quantifier, found its continuation in the Hilbert programme, and finally has been inscribed into the roots of mathematics by Gödel. Philosophies of “pureness” are not items of the past, though. Think of materialism, or of Agamben’s “aesthetics of pure means”, as Benjamin Morgan [41] correctly identified the metaphysical scaffold of Agamben’s recent work.

Marc Rölli dedicates all of the 512 pages to the endeavor of destroying the extra-philosophical foundations of idealism. As the proposed alternative we find pragmatism, that is, a conceptual foundation of philosophy that is based on language and the form of life (Lebenswelt in the Wittgensteinian sense). He concludes his work accordingly:

After all it may have become more clear that this pragmatism is not about a simple, naive pragmatism, but rather about a pragmatism of difference37 that has been constructed with great subtlety. (KAV512)38

Rölli’s main target is German Idealism. Yet, Hegelian philosophy is undeniably abundant not only on the European continent, where we find the Frankfurt School from Adorno to Habermas and even K.-O. Apel, followed by the ill-fated ideas of Luhmann, which are infected by Hegel as well. Significant traces of it can be found in Germany’s society also in contemporary legal positivism and the oligarchy of political parties.

During the last 20 years or so, Hegelian positions have also spread considerably in Anglo-American philosophy and political theory. Think of Hardt and Negri, or even the recent works of Brian Massumi. Hegelian philosophy, however, can’t be taken in portions. It is totalitarian all through, because its main postulates, such as “absolute reason”, are totalizing by themselves. Hegelian philosophy is a relic, and a quite dangerous one, regardless of whether you interpret it in a leftist (Lenin) or in a rightist (Carl Schmitt) manner. With its built-in claim to absoluteness and its explicit denial of context-specificity, of the necessary relativity of interpretation, of the openness of future evolution, and of the freedom inscribed deeply even into the basic operation of comparison, all of its positions turn into transcendental aprioris. The same holds for the claim that things, facts, or even norms can be justified absolutely. No further comment should be necessary about that.

The choreostemic space itself cannot result in a totalising or even totalitarian attitude. We met this point already earlier when we discussed the topological structure of the space and its a-locational “substance” (Reason and Sufficiency). As Deleuze emphasized, there is a significant difference between entirety and completeness, which just mirrors the difference between the virtual and the actual. We’d like to add that the choreostemic space also disproves the possibility of universality for any kind of conception. In some way, yet implicitly, the choreostemic space defends humanity against materiality and any related attitude. Even if we were determined completely on the material level, which we are surely not39, the choreostemic space proves the indeterminateness and openness of our mental life.

You may already have got the feeling that we are going to slip into political theory. Indeed, the choreostemic space not only forms a space of indeterminateness and applicable pre-specificity, it also provides a kind of “Swiss neutrality”. Its capability to allow for a comparison of collective mental setups, without resorting to physicalist concepts like swarms or mysticist concepts like “collective intelligence”, provides a fruitful ground for any construction of transitions between choreostemic attractors.

Despite the fact that the choreostemic space concerns any kind of mentality, whether seen as hosted more by identifiable individuals or by collectives, the concept should not be taken as an actual philosophy of mind (“Philosophie des Geistes”). It transcends it, as it does any particular philosophical stance. It would be equally wrong to confine it to an anthropology or an anthropological architecture of philosophy, as is the case not only in Hegel (Rölli, KAV137). In some way, it presents a generative zone for a-human philosophies, without falling prey to the necessity of defining what human or a-human should mean. For sure, we do not refer here to transhumanism as it is known today, which just follows the traditional anthropological imperative of growth (“Steigerungslogik”), as Rölli correctly remarks (KAV459).

A-human simply means that, as a conception, it is neither dependent on nor confined to the human Lebenswelt. (We would again like to stress that it neither represents a positively sayable universalism nor even a kind of universal procedural principle, and also that this “a-” should not be understood as “anti” or “opposed”, but simply as “being free of”.) It is this position that is mandatory for drawing comparisons40 and, subsequently, conclusions (in the form of introduced irreversibilities) about entities that belong to strikingly different Lebenswelten (forms of life). Any particular philosophical position would immediately be guilty of applying human scales to non-human entities. That was already a central cornerstone of Nietzsche’s critique, not only of the German philosophy of the 19th century, but also of the natural sciences.

8.3. Simplicissimi

Rölli criticizes the uncritical adoption by philosophy of items taken from the scientific world view of the 19th century. Today, philosophy is still not secured against simplistic conceptions uncritically assimilated from certain scientific styles, despite the fact that nowadays we could know about the (non-analytic) Linguistic Turn, or about the dogmas in empiricism. What I mean here comprises two conceptual ideas: the reduction of living or social systems to states, and the notion of the exception, or that of normality respectively.

There are myriads of references in the philosophy of mind invoking so-called mental states. Yet, the state as a concept can be found not only in the philosophy of mind, but also in political theory, namely in Giorgio Agamben’s recent work, which builds heavily on the notion of the “state of exception”. The concept of a mental state is utter nonsense, though, and mainly so for three very different reasons. The first one can be derived from the theory of complex systems, the second one from the philosophy of language, and the third one from the choreostemic space.

In complex systems, the notion of a state is empty. What we can observe, subsequent to the application of some empiric modeling, is that complex systems exhibit meta-stability. It looks as if they were stable and trivial. Yet, what we could have learned mainly from the biological sciences, but also from their formal consideration as complex systems, is that they aren’t trivial. There is no simple rule that could describe the flow of things in a particular period of time. The reason is precisely that they are creative. They build patterns, hence they build a further “phenomenal” level, where the various levels of integration can’t be reduced to one another. They exhibit points of bifurcation, which can be determined only in hindsight. Hence, from the empirical perspective we can only estimate the probability for stability. This, however, is clearly too weak to support the claim of “states”.
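
The weakness of the notion of “state” can be demonstrated even with the simplest nonlinear system. The following sketch (our own illustration, not taken from any of the cited authors) iterates the logistic map: for one parameter value the trajectory settles near a seemingly fixed value, for others it bifurcates into oscillation and then chaos, so any assignment of a “state” is valid only in hindsight and only within a particular parameter regime.

```python
# Logistic map x -> r*x*(1-x): meta-stability and bifurcation in a few lines.
def logistic_trajectory(r: float, x0: float = 0.2, n: int = 200) -> list:
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

for r in (2.8, 3.2, 3.9):    # fixed point, 2-cycle, chaotic regime
    tail = logistic_trajectory(r)[-4:]
    print(f"r={r}: last values {[round(x, 3) for x in tail]}")
```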

In philosophy, Deleuze and Guattari in their “Thousand Plateaus” (p.48) have been among the first to recognize the important abstract contribution of Darwin by means of his theory. He opened the possibility of replacing types and species by populations, and degrees by differential relations. Darwin himself, however, was not able to complete this move. It took another 100 years until Manfred Eigen coined the term quasi-species as an increased density in a probability distribution. Talking about mental states is nothing more than a fallback into Linnean times, when science was the endeavor of organizing lists according to an uncritical use of concepts.
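
Eigen’s move can be made tangible in a few lines. The toy model below is our sketch; the genotypes, fitnesses and mutation rates are invented purely for illustration. Iterating a mutation–selection balance stabilizes not a single “type” but a distribution, an increased density in sequence space—precisely the replacement of species by population.

```python
# A miniature quasi-species model: selection plus mutation yields a cloud.
import numpy as np

f = np.array([1.0, 0.6, 0.5])        # fitness of three genotypes
Q = np.array([[0.9, 0.1, 0.0],       # Q[i, j]: probability that replicating
              [0.1, 0.8, 0.1],       # genotype j yields genotype i
              [0.0, 0.1, 0.9]])

x = np.array([1/3, 1/3, 1/3])        # initial population frequencies
for _ in range(200):                 # iterate the mutation-selection balance
    x = Q @ (f * x)
    x /= x.sum()                     # renormalize to a probability distribution

print(np.round(x, 3))                # a stationary cloud, not a pure "species"
```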

Actually, from the perspective of language-oriented philosophy, the notion of a state is empty even for any dynamical system that is subject to open evolution (and probably even for trivial dynamic systems). A real system does not build “states”. There are only flows and memories. “State” is a concept, in particular an idealistic—or at least an idealizing—concept, that is present only in the interpreting entity. The fact that one first has to apply a model before it is possible to assign states is deliberately suppressed whenever it is invoked by an argument that relates to philosophy or to any (other) kind of normativity. Therefore, the concept of “state” can’t be applied analytically, or as a condition in a linearly arranged argument. In saying this we do not claim that the concept of state is meaningless at large. In natural science, especially throughout the process of hypothesis building, the notion of state can be helpful (sometimes, at least).
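
That a “state” appears only after a model has been applied can be shown directly. In the sketch below (ours, purely illustrative) the very same continuous trajectory yields two, or five, “states”, depending solely on the discretization we impose on it; the states live in the model, not in the system.

```python
# "States" as artifacts of the applied model: one trajectory, two state-models.
import numpy as np

rng = np.random.default_rng(0)
trajectory = np.cumsum(rng.normal(size=1000))    # a continuous random walk

for n_states in (2, 5):                          # two arbitrary discretizations
    edges = np.quantile(trajectory, np.linspace(0, 1, n_states + 1))
    states = np.digitize(trajectory, edges[1:-1])
    print(f"{n_states}-state model: occupancy {np.bincount(states)}")
```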

Yet, if one were to use it in philosophy in a recurrent manner, one would quickly arrive at the choreostemic space (or something very similar), where states are neither necessary nor even possible. Despite the fact that a “state” is only assigned, i.e. as a concept, philosophers of mind41 and philosophers of political theory alike (such as Agamben [37], among other materialists) use it as a phenomenal reference. It is indeed somewhat astonishing to observe this relapse into naive realism within the community of otherwise trained philosophers. One of the reasons for this may well lie in the missing training in mathematics.42

The third argument against the reasonability of the notion of “state” in philosophy can be derived from the choreostemic space. A cultural body comprises individual mentality as well as a collective mentality based on externalized symbolic systems like language, to make a long story short. Both together provide the possibility for meaning. It is absolutely impossible to assign a “state” to a cultural body without losing the subject of culture itself. It would be much like a grammatical mistake. That “subject” is nothing else than a figurable trace in the choreostemic space. If one were to make such an assignment nevertheless, any finding would be relevant only within the reduced view. Hence, it would be completely irrelevant, as it could not support the self-imposed pragmatics. Continuing to argue about such a finding then establishes a petitio principii: one would find only what one originally assumed. The whole argument would be empty and irrelevant.

Similar arguments can be put forward regarding the notion of the exceptional, if it is applied in contexts that are governed by concepts and their interpretation, as opposed to trivial causal relationships. Yet, Giorgio Agamben indeed started to build a political theory around the notion of exception [37], which—at first sight strangely enough—has already triggered an aesthetics of emergency. Elena Bellina [38] cites Agamben:

The state of exception “is neither external nor internal to the juridical order, and the problem of defining it concerns a threshold, or a zone of indifference, where inside and outside do not exclude each other but rather blur with each other.” In this sense, the state of exception is both a structured or rule-governed and an anomic phenomenon: “The state of exception separates the norm from its application in order to make its application possible. It introduces a zone of anomie into the law in order to make the effective regulation of the real possible.”

It results in nothing else than disastrous consequences if the notion of the exception is applied to areas where normativity is relevant, e.g. in political theory. Throughout history there are many, many terrible examples of that. It is even problematic in engineering. We may call it fully legitimized “negativity engineering”, as it establishes, completely unnecessarily, the opposition of the normal and the deviant as an apriori. The notion of the exception presumes total control as an apriori. As such, it is opposed to the notion of openness; hence it also denies the primacy of interpretation. Machines that degenerate, and that would produce disasters upon any malfunctioning, can’t be considered as being built smartly. In a setup that embraces indeterminateness, there is not even the possibility of a disastrous fault. Instead, deviances are defined only with respect to the expectable, not against an apriori set, hence obscure, normality. If the deviance is taken as the usual (not the normal, though!), fault-tolerance and even self-healing could be built in as a core property, not as an “exception handling”.

Exception is the negative category to the normal. It requires models to define normality, models to quantify the deviation, and finally also arbitrary thresholds to label it. All three steps can be applied in linear domains only, where the whole depends on just very few parameters. For social mega-systems such as societies, applying the concept of the exception is nothing else than a methodological and categorical illusion.
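
The three steps can be spelled out literally. The toy detector below (our sketch; the data and the threshold are invented) makes each of them an explicit, contingent modeling decision: a model of normality, a quantified deviation, an arbitrary cut-off. None of the three is given by the world itself.

```python
# The three steps of the "exception": model of normality, deviation, threshold.
import numpy as np

rng = np.random.default_rng(1)
signal = np.concatenate([rng.normal(0, 1, 500), rng.normal(4, 1, 5)])

mu, sigma = signal[:500].mean(), signal[:500].std()   # (1) model of "normality"
deviation = np.abs(signal - mu) / sigma               # (2) quantified deviation
THRESHOLD = 3.0                                       # (3) an arbitrary label

print("flagged as 'exception':", np.flatnonzero(deviation > THRESHOLD))
```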

9. Critique of Paradoxically Conditioned Reason

Nothing could be more different from that than pragmatism, for which the choreostemic space can serve as the ultimate theory. Pragmatism has always suffered from—or at least has been vulnerable to—the reproach of relativism, because within pragmatism it is impossible to argue against it. With the choreostemic space we have constructed a self-sufficient, self-containing and necessary model that not only supports pragmatism, but also destroys any possibility of a universal normative position or normativity. Probably even more significantly, it also abolishes relativism through the implied concept of the concrete choreostemic figure, which can be taken as the differential of the institution or of the tradition43. Choreostemic figures are quite stable, since they relate to mentality qua population, which means that they are formed as a population of mental acts, or as mental acts of the members of a population. Even for individuals it is quite hard to change the attractor inhabited in the choreostemic space, to move into another attractor, or even to build up a new one.

In this section we will examine the structure of the way we can use the choreostemic space. Naively put, we could ask for instance: How can we derive a guideline to improve actions? How can we use it to analyse a philosophical attitude or a political writing? Where are the limits of the choreostemic space?

The structure behind such questions concerns a choice on a quite fundamental level. The issue is whether to argue strictly in positive terms, to allow negative terms, or even to define everything starting from negative terms only. In fact, there are quite a few different possibilities for arranging any melange of positivity and negativity. For instance, one could ontologically insist first on contingency as a positivity, upon which constraints would then act as a negativity. Such traces we will not follow here. We regard them either as not focused enough or, most of them, as infected by realist ontology.

In more practical terms this issue of positivity and negativity concerns the way we deal with justifications and conditions. Deleuze argues for strict positivity; in that he follows Spinoza and Nietzsche. Common sense, in contrast, is given only insofar as it is defined against the non-common. In this respect, all of the existential philosophical attitudes, whether Christian religion, phenomenology or existentialism, are quite similar to each other. Even Levinas’ Other is infected by it.

Admittedly, at first sight it seems quite difficult, if not impossible, to arrive at an appropriate valuation of other persons, the stranger, the strange, in short, the Other, but also the alienated. Or likewise, to derive or develop a stance towards the world that does not start from existence. Isn’t existence the only thing we can be sure about? And isn’t the external, the experienced, the only stable positivity we can think of? Here, we shout a loud No! Nevertheless, we definitely do not deny the external either.

We just mentioned that the issue of justification is invoked by our interests here. This gives rise to the question of the relation of the choreostemic space to epistemology. We will return to this in the second half of this section.

Positivity. Negativity.

Obviously, the problem of the positive is not the positive itself, but how we are going to approach it. If we set it as primary, we first run into problems of justification, then into ethical problems. Setting the external, existence, or the factually positive as primary, we neglect the primacy of interpretation. Hence, we can’t think of the positive as an instance. We have to think of it as a Differential.

The Differential is defined as an entirety, yet it is not instantiated. Its factuality is potential; hence its formal being neither exhausts nor limits its factuality, or positivity. Its givenness demands action, that is, a decision (which is sayable as regards its immediacy) bundled with a performance (which is open and demonstrable only as a matter of fact).

The concept of the choreosteme closely follows Deleuze’s idea of the Differential: it is built into the possibility of expressibility that spans as the space between the _Directions as they are indicated by the transcendental aspects _A. The choreostemic space does not constitute a positively definable stance, since it is not made from elements that could be defined apriori to any moment in time. Nevertheless it is well-defined. As an example that requires a similar approach, we may refer to the space of patterns as they are potentially generated by Turing-systems. The mechanics of Turing-patterns, their mechanism, is likewise well-defined, given in its entirety, but the space of the patterns can’t be defined positively. Without deep interpretation there is nothing like a Turing-pattern. Maybe that’s one of the reasons why the hard sciences still have difficulties in dealing adequately with complexity.
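
The contrast between a fully defined mechanism and an undefinable pattern space can be exhibited directly. The following Gray–Scott reaction–diffusion sketch is our illustration; the parameter values mark just one point in the parameter space. The entire mechanics is stated in a few lines, yet whether spots, stripes or labyrinths emerge can only be found out by running and interpreting it.

```python
# Gray-Scott reaction-diffusion: the whole mechanism, but not the pattern space.
import numpy as np

N = 128
U, V = np.ones((N, N)), np.zeros((N, N))
U[N//2-5:N//2+5, N//2-5:N//2+5] = 0.5        # a local perturbation as seed
V[N//2-5:N//2+5, N//2-5:N//2+5] = 0.25
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065      # one point in parameter space

def laplacian(Z):
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(5000):                        # iterate the stated mechanics
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + F * (1 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V

print("V range after 5000 steps:", round(V.min(), 3), "to", round(V.max(), 3))
```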

Besides the formal description of the structure and mechanism of our space, there is nothing left about which one could speak or think any further. We could proceed only by practicing it. This mechanism establishes a paradoxicality insofar as it does not contain determinable locations. This indeterminateness is even much stronger than the principle of uncertainty as it is known from quantum physics, which so far is not constructed in a self-referential manner (at least if we follow the received views). Without any determinate location, there seems to be no determinable figure either, at least none of which we could say that we could grasp it “directly”, or intuitively. Yet, figures may indeed appear in the choreostemic space, though only by applying orthoregulative scaffolds, such as traditions, institutions, or communities that form cultural fields of proposals/propositions (“Aussagefeld”), as Foucault named it [40].

The choreostemic space is not a negativity, though. It does not impose apriori determinable factual limits on a real situation, whether internal or external. It doesn’t even provide the possibility of an opposite. Due to its self-referentiality it can be instantiated into positivity OR negativity, dependent on the “vector”—actually, it is more a moving cloud of probabilities—to which one currently belongs, or which one is currently establishing by one’s own performances.

It is the necessity of choice itself, appearing in the course of the instantiation of the twofold Differential, that introduces the positive and the negative. In turn, whenever we meet an opposite we can conclude that there has been a preceding choice within an instantiation. Think of de Saussure’s structuralist theory of language, which is full of opposites. Deleuze argues (DR205) that the starting point of opposites betrays language:

In other words, are we not on the lesser side of language rather than the side of the one who speaks and assigns meaning? Have we not already betrayed the nature of the play of language – in other words, the sense of that combinatory, of those imperatives or linguistic throws of the dice which, like Artaud’s cries, can be understood only by the one who speaks in the transcendent exercise of language? In short, the translation of difference into opposition seems to us to concern not a simple question of terminology or convention, but rather the essence of language and the linguistic Idea.

In more traditional terms one could say it is dependent on the “perspective”. Yet, the concept of “perspective” is fallacious here, at least insofar as it assumes a determinable standpoint. By means of the choreostemic space, we may replace the notion of perspectives by the choreostemic figure, which reflects both the underlying dynamics and the problematic field much more adequately. In contrast to a “perspective”, or even a set of such, a choreostemic figure spans across time. Another difference is that a perspective needs to be taken, which does not allow for continuity, while a choreostemic figure evolves continually. The possibility for negativity is determined along the instantiation from choreosteme to thought, while positivity is built into the choreostemic space as a potential. (Negative potentials are not possible.)

Thus, the choreostemic space is immune to any attempt—should we say poison pill?—to apply a dialectic of the negative, whether we consider single, double, or, absurdly enough, multiply repeated ones. Think of Hegel’s negativity, Marx’s rejection and proposal of a double negativity, or the dropback by Marcuse, all of which must be counted simply as stupidity. Negativity as the main structural element of thinking did not vanish, though, as we can see in the global movement of anti-capitalism or the global movement of anti-globalization. They all got—or still get—victimized by the failure to leave behind the duality of concepts and to turn them into a frame of quantitability. A recent example of that ominous fault is given by the work of Giorgio Agamben; Morgan writes:

Given that suspending law only increases its violent activity, Agamben proposes that ‘deactivating’ law, rather than erasing it, is the only way to undermine its unleashed force. (p.60)

The first question, of course, is why the heck Agamben thinks that law, that is, any lawfulness, must be abolished. Such a claim includes the denial of any organization and any institution, above all as practical structures, as immaterial infrastructures and grounding for any kind of negotiation. As Rölli noted in accordance with Nietzsche, there is quite an unholy alliance between romanticism and modernism. Agamben, completely incapable of becoming aware of the virtual and of the differential alike, and thus completely stuck in a luxuriating system of “anti” attitudes, finds himself faced with quite a difficulty. In his mono-(zero-)dimensional modernist conception of the world he claims:

“What is found after the law is not a more proper and original use value that precedes law, but a new use that is born only after it. And use, which has been contaminated by law, must also be freed from its value. This liberation is the task of study, or of play.”

Is it really reasonable to demand a world where uses, i.e. actions, are not “contaminated” by law? Morgan continues:

In proposing this playful relation Agamben makes the move that Benjamin avoids: explicitly describing what would remain after the violent destruction of normativity itself. ‘Play’ names the unknowable end of ‘divine violence’.

Obviously, Agamben never realized any paradox concerning rule-following. Instead, he runs amok against his own prejudices. “Divine violence” is the violence of ignorance. Yet, abolishing knowledge does not help either, nor is it an admirable goal in itself. Like Derrida (another master of negativity) before him, in the end he demands a stop to interpretation, any of it and completely. Agamben provides us nothing else than just another modernist flavour of a philosophy of negativity that results in a nihilistic in-humanism (quite contrary to Nietzsche, by the way). It is somewhat terrifying that Agamben currently receives not just a little attention.

In the last statement we are going to cite from Morgan, we can see in which eminent way Agamben is a thinker of the early 19th century, incapable of contributing any reasonable suggestion to current political theory:

But it is not only the negative structure of the argument but also the kind of negativity that is continuous between Agamben’s analyses of aesthetic and legal judgement. In other words, ‘normality without a norm’, which paradoxically articulates the subtraction of normativity from the normal, is simply another way of saying ‘law without force or application’.

This Kantian formulation is fully packed with uncritical aprioris, such as normality or the normal, which marks Agamben as an epigonic utterer of common sense. As this ancient form of idealism demonstrates, Agamben obviously never heard of the linguistic turn either. The unfortunate issue with Agamben’s writing is that it is considered both influential and pace-making.

So, should we reject negativity and turn to positivity? Rejecting negativity becomes problematic only if it is taken as an attitude that stretches from the principle all the way down to the activity. Notably, the same is true for positivity. We need not get rid of them, which would only send us into the abyss of totalised mysticism. Instead, we have to transcend them into the Differential that “precedes” both. While the former could be reframed into the conditionability of processes (but not into constraints!), the latter finds its non-representational roots in the potential and the virtual. If the positive is taken as a totalizing metaphysics, we soon end up in overdone specialization, uncritical neo-liberalism or even dictatorship, or in idealism as an ideology. The turn to a metaphysics of (representational) positivity is incurably caught in the necessity of justification, which—unfortunately enough for positivists—can’t be grounded within a positive metaphysics. To justify, that is, to give “good reasons”, is a contradictio in adiecto if it is understood in its logical or idealistic form.

Both negativity and positivity (in their representational instances) could work only if there is a preceding and more or less concrete subject, which of course cannot be presupposed when we are talking about “first reasons” or “justification”. This does not only apply to political theory or practice; it even holds for logic as a positively given structure. Abstractly, we can rewrite the concreteness into countability. Turning the whole thing around, we see that as long as something is countable we will be confined by negativity and positivity on the representational level. Herein lies the limitation of the Universal Turing Machine. Herein lies also the inherent limitation of any materialism, whether in its profane or its theistic form.

By means of the choreostemic space we can see various ways out of this confined space. We may, for instance, remove the countability from numbers by mediatizing it into probabilities. Alternatively, we may introduce a concept like infinity to indicate the conceptualness of numbers and countability. It is somewhat interesting that it is the concept of the infinite that challenges the empiric character of numbers. Or we could deny representationalism in numbers while trying to keep countability, which creates the strange category of infinitesimals. Or we create multi-dimensional number spaces like the complex numbers. There are, of course, many, many ways to transcend the countability of numbers, which we can’t even list here. Yet, it is of utmost importance to understand that the infinite, like any other instance of departure from countability, is not a number any more. It is not countable either, in the way Cantor proposed, that is, by thinking of a smooth space of countability that stretches between empiric numbers and the infinite. We may count just the symbols, but the reference has inevitably changed. The empirics targets the number of the symbols, not their content, which has been defined as “uncountability”. Only by this misunderstanding could one be struck by the illusion that there is something like the countability of the infinite. In some ways, even the real numbers do not refer to the language game of countability, and all the more the irrational numbers don’t either. It is much more appropriate to conceive of them as potential numbers; it may well be that precisely this is the major reason for the success of mathematics.
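
The first of these ways out—mediatizing counts into probabilities—can be stated in miniature (our sketch, with invented data): once the counts are normalized, the enumerable items disappear into a distribution, and what remains comparable are ratios rather than countable units.

```python
# Removing countability by normalization: counts become a distribution.
from collections import Counter

observations = ["a", "b", "a", "c", "a", "b"]
counts = Counter(observations)                 # countable: {'a': 3, 'b': 2, ...}
total = sum(counts.values())
probabilities = {k: v / total for k, v in counts.items()}
print(probabilities)                           # frequencies: a=1/2, b=1/3, c=1/6
```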

The choreostemic space is the condition for separating the positive and the negative. It is structure and tool, principle and measure. Its topology implies the necessity for instantiation and renders the representationalist fallacy impossible; nevertheless, it allows mapping mental attitudes and cultural habits for comparative purposes. Yet, this mapping can’t be used for modeling or anticipation. In some way it is the basis for subjectivity as a pre-specific property, that is, for a _Subjectivity, of course without objectivity. Therefore, the choreostemic space also allows us to overcome the naïve and unholy separation of subjects and objects, without denying the practical dimension of this separation. Of course, it does so by rejecting even the tiniest trace of idealism, or apriorisms respectively.

The choreostemic space does not separate apriori the individual from the collective forms of mentality. In describing mentality it is not limited to the sayable; hence it can’t be attacked or even swallowed by positivism. Since it provides the means to map those habitual _Mental figures, people could talk about transitions between different attractors, which we could call “choreostemic galaxies”. The critical issue of values, those typical representatives of uncritical aprioris, is completely turned into a practical concern. Obviously, we can talk about “form” regarding politics without the need to invoke aesthetics. As Benjamin Morgan recently demonstrated (in the already cited [41]), aesthetics in politics necessarily refers to idealism.

Rejecting representational positivity, that is, any positivity that we could speak of in a formal manner, is equivalent to rejecting first reason as an aprioric instance. As we already proposed for representational positivity, the claim of a first reason as a point of departure that is never revisited again likewise results in a motionless endpoint, somewhere in the triangle built from materialism, idealism and realism. Attempts to soften this outcome by proposing a playful, or hypothetical, if not pragmatic, “fixation of first principles” are not convincing, mainly because this does not allow for any coherence between games, which results in a strong relativity of principles. We just could not talk about the relationships between those “firstness games”. In other words, we would not gain anything. An example of such a move is provided by Epperson [42]. Though he refers to the Aristotelian potential, he sticks with representational first principles, in his case logic in the form of the principle of the excluded middle and the principle of non-contradiction. Epperson does not become aware of the problems regarding the use of symbols in doing this. Wittgenstein once criticized the very same point in the Principia of Russell and Whitehead. Additionally, representational first principles are always transporters of ontological claims. As soon as we recognize that the world is NOT made from objects, but of relations organized, selected and projected by each individual through interpretation, such principles face severe difficulties. Only naive realism allows for a frictionless use of first principles. Yet, for a price that is definitely too high.

We think that the way we dissolved the problem of first reason has several advantages as compared to Deleuze’s proposal of the absolute plane of immanence. First, we do not need the notion of absoluteness, which appears at several instances in Deleuze’s main works “What is Philosophy?” [35] (WIP), “Empiricism and Subjectivity” [43], and his “Pure Immanence” [44]. The second problem with the plane of immanence concerns the relation between immanence and transcendence. Deleuze refers to two different kinds of transcendence. While in WIP he denounces transcendence as inappropriate due to its heading towards identity, the whole concept of transcendental empiricism is built on the Kantian invention. This double standard can’t be resolved; transcendence should not be described by its target. Third, Deleuze’s distinction between the absolute plane of immanence and the “personal” one, instantiated by each new philosophical work, leaves a major problem: Deleuze leaves it completely opaque how the two kinds of immanence relate to each other. Additionally, there is a potentially infinite number of “immanences”, implying a classification, a differential and an abstract kind of immanence, all of which is highly corrosive for the idea of immanence itself—at least as long as one does not conceive of immanence as an entity that could be naturalized. In this way, Deleuze splits the problem of grounding into two parts: (1) a pure, hence “transcendent” immanence, and (2) the gap between absolute and personal immanence. While the first part could be accepted, the second one is left completely untouched by Deleuze. The problem of grounding has just been moved into a layer cake. Presumably, these problems are caused by the fact that Deleuze considers just concepts, or _Concepts, if we’d like to consider the transcendental version as well. Several of those imply the plane of immanence, which can’t be described, which has no structure, and which is just implied by the factuality of concepts.

Our choreostemic space moves this indeterminacy and openness into a “form” aspect in a non-representational, non-expressive space with the topology of a double differential. But more important is that we not only have a topology at our disposal which allows us to speak about that space without imposing any limitation, we also use three other foundational and irreducible elements to think it. The CS thus brings immanence and transcendence into one and the same structure.

In this section we have discussed a change of perspective towards negativity and positivity. This change became accessible through the differential structure of the choreostemic space. The problematic field represented by them and all the respective pseudo-solutions has been dissolved. This abandonment we achieved through the “Lagrangean principle”, that is, we replaced the constants—positivity and negativity respectively—by a procedure—the instantiation of the Differential—plus a different constant. Yet, this constant is itself not a finite replacement, i.e. not a “constant” as an invariance. The “constant” is only a relative one: the orthoregulation, comprising habits, traditions and institutions.

Reason—or, as we would like to propose for its less anthropological character and better scalability, mentality—has been reconstructed as a kind of omnipresent reflection on the conditionability of proceedings in the choreostemic space. The conditionability can’t be determined in advance of the performed mental proceedings (acts), which to many could appear somewhat paradoxical. Yet, it is not. The situation is quite similar to Wittgenstein’s transcendental logic, which also gets instantiated just by doing something, while the possibility for performance precedes that of logic.

Finally, there is of course the question whether there is any condition that we impose onto the choreostemic space itself, a condition that would not be resolved by its self-referentiality. Well, there is indeed one: the only unjustified apriori of the choreostemic space seems to be the primacy of interpretation (POI). This apriori, however, is only a weak one, and above all a practicable one, or one that derives from the openness of the world. Ultimately, the POI is in turn a direct consequence of the time-being. Any other aspect of interpretation is indeed absorbed by the choreostemic space and its self-referentiality, hence requiring no further external axioms or the like. In other words, the starting point of the choreostemic space, or of the philosophical attitude of the choreosteme, is openness, the insight that the world is far too generative to be comprehended in its entirety.

The fact that it is almost without any apriori renders the choreostemic space suitable for those practical purposes where openness and its sibling, ignorance, call for dedicated activity, e.g. in all questions of cross-disciplinarity or trans-culturality. Insofar as different persons establish different forms of life, the choreostemic space is even highly relevant for any aspect of cross-personality. This in turn gives rise to a completely new approach to ethics, which we can’t follow here, though.

Mentality without Knowledge

Two of the transcendental aspects of the choreostemic space are _Model and _Concept. The concepts of model and concept, that is, instantiations of our aspects, are key terms in the philosophy of science and epistemology. Further, we proposed that our approach brings with it a new image of thought. We also said that mental activities inscribe figures or attractors into that space. Since we are additionally interested in the issue of justifications—we are trying to get rid of them—the question of the relation between the choreostemic space and epistemology is triggered.

The traditional primary topic of epistemology is knowledge and how we acquire it, particularly, however, the questions of how to separate it from beliefs (in the common sense) on the one hand, and how to secure it in a way that would possibly allow us to speak about truth on the other. In a general account, epistemology is also about the conditions of knowledge.

Our position is pretty clear: the choreostemic space is something that is categorically different from episteme or epistemology. What are the reasons?

We reject the view that truth in its usual version is a reasonable category for talking about reasoning. Truth as a property of a proposition can’t be a part of the world. We can’t know anything for sure, neither regarding the local context, nor globally. Truth is an element of logic, and the only truth we can know of is empty: a=a. Yet, knowledge is supposed to be about empirical facts (arrangements of relations). Wittgenstein thus set logic as transcendental. Only transcendental logic can be free of semantics, and thus only within transcendental logic can we speak of truth conditions. The consequence is that we can observe either of two effects. First, any actual logic contains some semantic references, because of which it could be regarded as “logic” only approximately. Second, insisting on the application of logical truth values to actual contexts instead results in a categorical fault. The conclusion is that knowledge can be secured neither locally, from a small given set of sentences about empirical facts, nor globally. We can’t even measure the reliability of knowledge, since this would mean having more knowledge about the fact than the local observations provide. As a result, paradoxes and antinomies occur. The only thing we can do is try to build networks of stable models for a negotiable anticipation with negotiable purposes. In other words, facts are not given by relations between objects, but rather as a system of relations between models, which as a whole is both accepted by a community of co-modelers and provides satisfying anticipatory power. Compared to that, the notion of partial truth (Newton da Costa & Steven French) is still misconceived. It keeps sticking to the wrong basic idea and as such is inferior to our concept of the abstract model. After all, any account of truth violates the fact that it is itself a language game.

Dropping the idea of truth we could already conclude that the choreostemic space is not about epistemology.

Well, one might say, OK, then it is an improved epistemology. Yet, this we would reject as well. The reason is a grammatical one. Knowledge in the meaning of epistemology is either about sayable or about demonstrable facts. If someone says “I know”, or if someone ascribes to another person “he knows”, or if a person performs well and in hindsight her performance is qualified as “based on intricate knowledge” or the like, we postulate an object or entity called knowledge, almost in an ontological fashion. This perspective has been rejected by Isabelle Peschard [45]. According to her, knowledge can’t be separated from activity, or “enaction”, and knowledge must be conceived as a socially embedded practice, not as a stateful outcome. For her, knowledge is not about representation at all. This includes the rejection of truth conditions as a reasonable part of a concept of knowledge. Moreover, it is impossible to give a complete or analytical description of this enaction, because it is impossible to describe (=to explicate) the Form of Life in a self-contained manner.

In any case, however, knowledge is always, at least partially, about how to do something, even if it is about highly abstract issues. That means that a partial description of knowledge is possible. Yet, as a second grammatical reason, the choreostemic space does not allow for any representations at all, due to its structure, which is strictly local and built from the second-order differential.

There are further differences. The CS is a tool for the expression of mental attractors, to which we can assign distinct yet open forms. To do so we need the concepts of mediality and virtuality, which are not mentioned anywhere in epistemology. Mental attractors, or figures, will always “comprise” beliefs, models, ideas and concepts as instances of transcendental entities, and these instances are local instances, which are even individually constrained. It is not possible to explicate these attractors other than by “living” them.

In some way, the choreostemic space is intimately related to the philosophy of C.S. Peirce, which is called “semiotics”. Like him, we propose a primacy of interpretation. We fully embrace his emphasis that signs only refer to signs. We agree with his attempt at discerning different kinds of signs. And we think that his firstness, secondness and thirdness could be related to the mechanisms of the choreostemic space. In some way, the CS could be conceived as a generalization of semiotics. Saying this, we may also point to the fact that Peirce’s philosophy is not regarded as epistemology either.

Rejecting the characterization of the choreostemic space as an epistemological subject, we can now even better understand the contours of the notion of mentality. The “mental” can’t be considered as a set of things like beliefs, wishes, experiences, expectations, thought experiments, etc. These are just practices, or likewise, practices of speaking about the relation between private and public aspects of thinking. All of these items belong to the same mentality, to the same choreostemic figures.

In contrast to Wittgenstein, however, we propose to discard completely the distinction between internal and external aspects of the mental.

“And nothing is more wrong-headed than calling meaning a mental activity! Unless, that is, one is setting out to produce confusion.” [PI §693]

One of the transcendental aspects of the CS is concept, another is model. Together they provide the aspects of use, idea and reference; that is, there is nothing internal and external any more. It simply depends on the purpose of the description, or the kind of report we want to create about the mental, whether we talk about the mental in an internalist or an externalist way, whether we talk about acts, concepts, signs, or models. Regardless of what we do as humans, it will always be predominantly a mental act, irrespective of the accompanying material reconfigurations.

10. Conclusion

It is probably not an exaggeration to say that in the last two decades the diversity of mentality has been discovered. A whole range of developments and shifts in public life may have contributed to that, concerning several domains: politics, technology, social life, the behavioural sciences and, last but not least, brain research. We saw the end of the Cold War, which signalled an uprooting of functionalism far beyond the domain of politics, and simultaneously the growth and discovery of the WWW and its accompanying “scopic44 media” [46, 47]. The “scopics” spurred the so-called globalization, which so far has worked much more in favour of the recognition of diversity than it has levelled that diversity. While we are still in the midst of the popularization and increasingly abundant usage of so-called machine learning, we already witness an intensified mutual penetration and amalgamation of technological and social issues. In the behavioural sciences, probably also supported by the deepening of mediatization, an unforeseen interest in the mental and social capabilities of animals has manifested, pushing back the merely positivist and dissecting description of behavior. One of the most salient examples is the confirmation of cultural traditions in dolphins and orcas, concerning communication as well as highly complex collaborative hunting. The unfolding of collaboration requires the mutual and temporal assignment of functional roles for a given task. This not only presupposes a true understanding of causality, but even its reflected use as a game in probabilistic spaces.

Let us distil three modes or forms here: (i) animal culture, (ii) machine-becoming, and of course (iii) human life forms in the age of intensified mediatization. All three modes must be considered “novel” ones, for one reason or another. We won’t go into any further detail here, yet it is pretty clear that the triad of these three modes renders any monolithic or anthropologically imprinted form of philosophy of mind impossible. In turn, any philosophy of mind that is limited to just the human brain’s relation to the world, or even worse, which imposes analytical, logical or functional perspectives onto it, must be considered seriously defective. This still applies to large parts of the mainstream in philosophy of mind (and even ethics).

In this essay we argued for a new Image of Thought that is independent from the experience of or by a particular form of life, form of informational45 organization or cultural setting, respectively. This new Image of Thought is represented by the choreostemic space. This space is dynamic and active and can be described formally only if it is “frozen” into an analytical reduction. Yet, its self-referentiality and self-directed generativity is a major ingredient. This self-referentiality takes a salient role in the space’s capability to leave its conditions behind.

One of the main points of the choreostemic space (CS) probably is that we cannot talk about “thought”—regardless of its quasi-material and informational foundations—without referring to the choreostemic space. It is a (very) strong argument against Rylean concepts of the mind, which claim the irrelevance of the concept of the mental by proposing that looking at behavior is sufficient to talk about the “mind”. Of course, the CS does not support “the dogma of the ghost in the machine“ either. The choreostemic space defies (and helps to defy) any empirical, and thus also anthropological, myopias through its triple feature of transcendental framing, differential operation and immanent rooting. Thus it is immune against naturalist fallacies such as Cartesian dualism, as well as against arbitrariness or relativism. Nor could it be infected by any kind of preoccupation such as idealism or universalism. Although one could regard it in some way as “pure Thought”, or consider it the expressive situs of it, its purity is not an idealistic one. It dissolves either into the metaphysical transcendentality of the four conceptual aspects, that is, Model, Mediality, Concept and Virtuality, or it takes the form of the Differential, which could be considered a kind of practical transcendentality46 [48]. There, as one of her starting points, Bühlmann writes:

Deleuze’s fundamental critique in Difference and Repetition is that throughout the history of philosophy, these conditions have always been considered as »already confined« in one way or another: Either within »a formless, entirely undifferentiated underground« or »abyss« even, or within the »highly personalized form« of an »autocratically individuated Being«

Our choreostemic space also provides an answer to the problematics of conditions.47 Like Deleuze, we suggest regarding conditions only as secondary, that is, as relevant entities only after any actualization. This avoids negativity as a metaphysical principle. Yet, in order to get rid of any condition completely while at the same time retaining conditionability as a transcendental entity, we have to resort to self-referentiality as a generic principle. Hence, our proposal goes beyond Deleuze’s framework as he developed it from “Difference and Repetition” to “What is Philosophy?”, since he never made this move.

Basically, the CS supports Wittgenstein’s rejection of materialism, which has experienced a completely unjustified revival in the various shades of neuro-isms. Malcolm cites him [50]:

It makes as little sense to ascribe experiences, wishes, thoughts, beliefs, to a brain as to a mushroom. (p.186)

This support should not come as a surprise, since the CS was deliberately constructed to be compatible with the concept of the language game. While the CS also supports his famous remark about meaning:

“And nothing is more wrong-headed than calling meaning a mental activity! Unless, that is, one is setting out to produce confusion.” [PI §693]

it is also clear that the CS may be taken as a means to overcome the debate about external or internal primacies or foundations of meaning. The duality of internal vs. external is neutralized in the CS. While modeling, and thus the abstract model, always requires some kind of material body, hence representing the route into some interiority, the CS is also spanned by Concept and Mediality. Both concepts are explicit ties between any kind of interiority and any kind of exteriority, without preferring a direction at all. The proposal that any mental activity inscribes attractors into that space just means that interiority and exteriority can’t be separated at all, regardless of the actual conceptualisation of mind or mentality. Yet, in accordance with PI 693 we also admit that the choreostemic space is not equal to the mental. Any particular mentality unfolds as an actual performance in the CS. Of course, the CS does not describe material reconfigurations, environmental contingency etc., nor the performance taking place “there”. In other words, it does not cover any aspect of use. On the other hand, material reconfigurations are simply not “there” as long as they are not interpreted by applying some kind of model.

The CS clearly shows that we should regard questions like “Where is the mind?” as a kind of grammatical mistake, as Blair lucidly demonstrates [51]. Such a usage of the word “mind” not only irrevocably implies that it is a localizable entity; it also claims its conceptual separateness. Such a conceptualization of the mind is illusory. The consequences for any attempt to render “machines” “more intelligent” are obviously quite dramatic. As for the brain, it is likewise impossible to “localize” mental capacities in the case of epistemic machines. This fundamental de-territorialization is not a consequence of scale, as in quantum physics. It is a consequence of the verticality of the differential, the related necessity of forms of construction, and the fact that a non-formal, open language, implying randolations to the community, is mandatory to deal with concepts.

One important question about a story like the “choreostemic space”, with its divergent but nevertheless intimately tied four-fold transcendentality, concerns the status of that space. What “is” it? How could it affect actual thought? Since we have been starting with mathematical concepts like space, mappings, topology, or differential, and since our argument frequently invokes the concept of mechanism, one could suspect that it is a piece of analytical philosophy. This ascription we can clearly reject.

Peter Hacker convincingly argues that “analytical philosophy” can’t be specified by a set of properties of such an assumed philosophy. He proposes to consider it a historical phase of philosophy, with several episodes, beginning around 1890 [53]. Nevertheless, during the 1970s a set of beliefs formed a kind of basic setup. Hacker writes:

But there was broad consensus on three points. First, no advance in philosophical understanding can be expected without the propaedeutic of investigating the use of the words relevant to the problem at hand. Second, metaphysics, understood as the philosophical investigation into the objective, language-independent, nature of the world, is an illusion. Third, philosophy, contrary to what Russell had thought, is not continuous with, but altogether distinct from science. Its task, contrary to what the Vienna Circle averred, is not the clarification or ‘improvement’ of the language of science.

Where we definitely disagree is on the point about metaphysics. We refute the view that metaphysics is about the objective, language-independent nature of the world; metaphysics so understood we would indeed reject. An example for this kind of thinking is provided by the writing of Whitehead. It should have become clear throughout our writing that we stick to the primacy of interpretation, and accordingly we regard the belief in an objective reality as deeply misconceived. Thereby we neither claim that our mental life is independent from the environment—as radical constructivism (Varela & Co) does—nor do we claim that there is no external world around us that is independent from our perception and constructions. Such would just be a belief in metaphysical independence, which plays an important role in modernism. The idea of objective reality is also infected by this belief, resulting in a self-contradiction. For “objective” makes sense only as an index to some kind of sociality, and hence to a group sharing a language, and further to the use of language. The claim of “objective reality” is thus childish.

More importantly, however, we have seen that the self-referentiality of terms like concept (we called those “strongly singular terms“) forces us to acknowledge that Concept, much like logic, is a transcendental category. Obviously we refer strongly to transcendental, that is, metaphysical categories. At the same time, however, we also propose that there are manifolds of instances of those transcendental categories.

The choreostemic space describes a mechanism. In that it resembles the science of biology, where the concept of mechanism is an important epistemological tool. As such, we try to defend against mysticism, against the threat posed by any all-too-quick reference to the “Lebenswelt”, the form of life and the ways of living. But is it really an “analysis”?

Putnam called “analysis” an “inexplicable noise” [54]. His critique was precisely that semantics can’t be found by any kind of formalization, that is, outside of the use of language. In this sense we certainly are not doing analytic philosophy. As a final point we again want to emphasize that it is not possible to describe the choreostemic space completely, that is, all its conditions, effects, etc., due to its self-referentiality. It is a generative space that confirms its structure by itself. Nevertheless it is neither useless nor does it support solipsism. It can be used to describe the entirety of mental activity, yet only as a fully conscious act, and this description is a fully non-representational one. In this way it overcomes not only the Cartesian dualism about consciousness; in fact, it is another way to criticise the distinction between interiority and exteriority.

For one part we agree with Wittgenstein’s critique (see also the work of P.M.S. Hacker on that), which identifies the “mystery” of consciousness as an illusion. The concept of the language game, which is for one part certainly an empiric concept, is substantial for the choreostemic space. Yet, the CS provides several routes between the private and the communal, without actually representing one or the other. The CS does not distinguish between the interior and the exterior at all; just recall that mediality is one of the transcendental aspects. Along with Wittgenstein’s “solipsistic realism” we consequently also reject the idea that ontology can be about the external world, as this would again introduce such a separation. Quite to the contrary, the CS dissolves the need for the naive conception of ontology. Ontology makes sense only within the choreostemic space.

Yet, we certainly embrace the idea that mental processes are ultimately “based” on physical matter, but unfolded into and by their immaterial external surrounds, yielding an inextricable compound. Referring to any “neuro” stuff regarding the mental neither “explains” anything nor is it helpful in any regard, whether one considers it as neuro-science or as neuro-phenomenology.

Summarizing the issue, we may say that the choreostemic space opens a completely new level for any philosophy of the mental, not just of what is being called the human “mind”. It also allows us to address scientific questions about the mental in a different way, and it clarifies the route to machines that could draw their own traces and figures into that space. It makes irrevocably clear that any kind of functionalism or materialism is once and for all falsified.

Let us now finally inspect the initial question that we put forward in the editorial essay. Is there a limit to the mental capacity of machines? If so, what kind of limit, and where could we draw it? The question about the limit of machines directly triggers the question about the image of humanity („Bild des Menschen“), which is fuelled from the opposite direction. So, does this imply a kind of demarcation line between the domain of the machines and the realm of the human? Definitely not, of course. To opt for such a separation would not only follow the idealist-romanticist line of criticizing technology, but also instantiate a primary negativity.

Based on the choreostemic space, our proposal is a fundamentally different one. It can be argued that this space contains any condition of any thought, as a population of unfolding thoughts. These unfoldings inscribe different successions into the space, appearing as attractors and figures. The key point is that different figures, representing different Lebensformen (Forms of Life) that are probably even incommensurable with each other, can be related to each other without reducing any of them. The choreostemic space is a space of mental co-habitation.

Let us for instance start with the functionalist perspective that has been so abundant in modernism since the times of Descartes. A purely functionalist stance is just a particular figure in that space, as applies to any other style of thinking. Using the dictum of the choreosteme as a guideline, it is relatively easy to widen the perspective into a more appropriate one. Several developmental paths into a different choreostemic attractor are possible: for instance, mediatization through social embedding [52], opening through autonomous associative mechanisms as we have described it, or the ad-hoc recombination of conceptual principles as it has been demonstrated by Douglas Hofstadter. Letting a robot range around freely also provokes the first tiny steps away from functionalism, albeit the behavioral Bauplan of the insects (Arthropoda) demonstrates that this does not by itself establish an evolutionary path towards advanced mental capabilities.

The choreostemic space can serve as such a guideline because it is not infected by anthropology in any regard. Nevertheless it allows us to speak clearly about concepts like belief and knowledge, of course without reducing these concepts to positively defined or functionalist definitions. It also remains completely compatible with Wittgenstein’s concept of the language game. For instance, we reconstructed the language game “knowing” as a label for a pointer (say, a reference) to a particular image of thought and its use. Of course, this figure should not be conceived as a fixed-point attractor, as the various shades of materialism, idealism and functionalism actually would do (if they argued along the choreosteme). It is somewhat interesting that here, by means of the choreostemic space, Wittgenstein and Deleuze approach each other quite closely, something they themselves probably would not have supported.

Where is the limit of machines, then?

I guess any answer must refer to the capability to leave a well-formed trace in the choreostemic space. As such, the limits of machines are to be found in the same way as they are found for us humans: to feel and to act as an entity that is able to contribute to culture and to assimilate it in its mental activity.

We introduced the choreostemic space as a framework to talk about thinking, or more generally about mentality, in a non-anthropological and non-reductionist manner. In the course of our investigation, we found a tool that actualizes itself into real social and cognitive situations. We also found the infinite space of choreostemic galaxies as attractors for eternal returns without repetition of the identical. The choreosteme keeps the “any” alive without subjugating individuality; it provides a new and extended level of sayability without falling into representationalism. Taken together, as a new Image of Thought it allows us to develop thinking deliberately and as part of a multitudinous variety.

Notes

1. This piece is thought of as a close relative of Deleuze’s Difference & Repetition (D&R) [1]. Think of it as a satellite whose point of nearest approach is at the end of part IV of D&R, and thus also as a kind of extension of D&R.

2. Deleuze, of course, belongs to them, but so does Ludwig Wittgenstein (see §201 of PI [2], the “paradox” of rule following), and Wilhelm Vossenkuhl [3], who presented three mutually paradoxical maxims as a new kind of theory of morality (ethics), one that resists the reference to monolithically set first principles, such as, for instance, in John Rawls’ “Theory of Justice”. The work of those philosophers also provides examples of how to turn paradoxicality productive without creating paradoxes at all, the main trick being to overcome their fixation by a process. Many others, including Derrida, just recognize paradoxes, but are neither able to conceive of paradoxicality nor to distinguish it from paradoxes; hence they take paradoxes just as unfortunate ontological knots. In such works, one can usually find one or the other way to prohibit interpretation (think of the trace, Germ. “Spur”, in Derrida).

3. Paradoxes and antinomies like those described by Taylor, Banach-Tarski, Russell or of course Zeno are all defective, i.e. pseudo-paradoxes, because they violate their own “gaming pragmatics”. They are not paradoxical at all, but rather either simply false or arbitrarily fixed within the state of such violation. The same fault is committed by the Sorites paradox and its relatives. They all mix up—or collide—the language game of countability or counting with the language game of denoting non-countability, as represented by the infinite or the infinitesimal. Instead of saying that they violate the apriori self-declared “gaming pragmatics”, we could also say that they change the most basic reference system on the fly, without any indication of doing so. This may happen through an inadequate use of the concept of infiniteness.

4. D&R, p.242, on the eternal return: it is not the same and the identical that returns, but the virtual structuredness (not even a “principle”), without which metamorphosis can’t be conceived.

5. In „Difference and Repetition“, Deleuze chose to spell “Idea” with a capital letter, in order to distinguish his concept from the ordinary word.

7. Here we find interesting possibilities for a transition to Alan Turing‘s formal foundation of creativity [5].

8. This includes the usage of concepts like virtuality, differential and the problematic field, the rejection of the primacy of identity and, closely related to that, the rejection of negativity, the rejection of the notion of representation, etc. Rejecting the negative opens an interesting parallel to Wittgenstein’s insisting on the transcendentality of logic and the subordination of any practical logic to performance. Since the negative is a purely symbolic entity, it is also purely a posteriori to any genesis, that is, self-referential performance.

9. I would like to recommend taking a look at the second part of part IV in D&R, and maybe also at the concluding chapter therein (download it here).

10. Saying „we“ here is not just due to some hyperbolic politeness. The targeted concept of this essay, the choreosteme, has been developed by Vera Bühlmann and the author of this essay (Klaus Wassermann) in close collaboration over a number of years. Finally the idea proved to be so strong that now there is some dissent about the role and the usage of the concept.

11. For belief revision as described by others, see the overview @ Stanford, and a critique by Pollock, who clarified that belief revision as comprised and founded by the AGM theory (see below) is incompatible with standard epistemology.

12. By symbolism we mean the belief that symbols are the primary and a priori existent entities for any description of any problematic field. In machine-based epistemology, for instance, we cannot start with data organized in tables, because this presupposes a completed process of “ensymbolization”. Yet, in the external world there are no symbols, because symbols only exist subsequent to interpretation. We can see that symbolism creates a chicken-and-egg problem.

13. Miriam Meckel, communication researcher at the University of Zürich, is quite active in drawing dark-grey pictures. Recently, she coined “Googlem” as a blend of Google and Golem. Meckel commits several faults in that: she does not understand the technology (accusing Google of using averages), and she forgets about the people (programmers) behind “the computer”, as well as the people using the software. She follows exactly the pseudo-romantic separation between nature and the artificial.

Miriam Meckel, Next. Erinnerungen an eine Zukunft ohne uns,  Rowohlt 2011.

14. Here we find a resemblance to Wittgenstein’s refusal to attribute to philosophy the role of an enabler of understanding. According to Wittgenstein, philosophy does not and cannot even describe. It can only show.

15. This also concerns the issue of cross-culturality.

16. Due to some kind of cultural imprinting, a frequently and solitarily exercised habit, people almost exclusively think of Cartesian spaces as soon as a “space” is needed. Yet, there is no necessary implication between the need for a space and the Cartesian type of space. Even Deleuze did not recognize the difficulties implied by the reference to Cartesian space, not only in D&R, but throughout his work. Nevertheless, there are indeed passages (in What is Philosophy?, with the “planes of immanence”, or in “The Fold”) where it seems that he had sensed a different conception of space.

17. For the role of „elements“ please see the article about „Elementarization“.

18. Vera Bühlmann [8]: „Insbesondere wird eine Neu-Bestimmung des aristotelischen Verhältnisses von Virtualität und Aktualität entwickelt, unter dem Gesichtspunkt, dass im Konzept des Virtuellen – in aller Kürze formuliert – das Problem struktureller Unendlichkeit auf das Problem der zeichentheoretischen Referenz trifft.“ (In particular, a re-determination of the Aristotelian relation between virtuality and actuality is developed, from the point of view that in the concept of the virtual—put very briefly—the problem of structural infinity meets the problem of sign-theoretic reference.)

19. which is also a leading topic of our collection of essays here.

20. e.g. Gerhard Gamm, Sybille Krämer, Friedrich Kittler

21. cf. G.C. Tholen [7], V.Bühlmann [8].

22. see the chapter about machinic platonism.

23. Actually, Augustine instrumentalises the difficulty he discovered in order to propose the impossibility of understanding God’s creation.

24. It is an „ancestry“ only with respect to the course in time, as the result of a process, not however in terms of structure, morphology etc.

25. cf. C.S. Peirce [16], Umberto Eco [17], Helmut Pape [18];

26. Note that in terms of abstract evolutionary theory, rugged fitness landscapes enforce specialisation, but also bring along an increased risk of extinction for the whole species. Flat fitness landscapes, on the other hand, allow for great diversity. Of course the fitness landscape is not a stable parameter space, neither locally nor globally. In some sense, it is not even a determinable space. Much like the choreostemic space, it would be adequate to conceive of the fitness landscape as a space built from the 2-set of transformatory power and the power to remain stable. Both can be determined only in hindsight. This paradoxality is not by chance, yet it has not been discovered as an issue in evolutionary theory.

27. Of course I know that there are important differences between verbs and substantives, which we may level out in our context without losing too much.

28. In many societies, believing has been thought to be tied to religion, the rituals around the belief in God(s). Since the Renaissance, with rising scientism and the profanisation of societies, religion and science established a sort of replacement competition. Michel Serres described how scientists took over the positions and the funds previously held by the clerics. The impression of a competition is well understandable, of course, if we consider the “opposite direction” of the respective vectors in the choreostemic space. Yet, it is also quite mistaken, maybe itself provoked by an overly strong idealisation, since neither can the cleric make his day without models, nor the scientist his without beliefs.

29. The concept of “theory” referred to here is oriented towards a conceptualisation based on language game and orthoregulation. Theories need to be conceived as orthoregulative milieus of models in order to be able to distinguish between models and theories, something which can’t be accomplished by analytic concepts. See the essay about theory of theory.

30. Of course, we do not claim to thereby cover completely the relation between experiments, experience and observation on the one side and their theoretical account on the other. We just would like to emphasize the inextricable dynamic relation between modeling and concepts in scientific activities, whether in professional or “everyday-type” science. For instance, much could be said in this regard about the path of decoherence from information and causality. Both aspects, the decoherence and the flip from intensifying modeling over to a conceptual form, have not been conceptualized before. The reason is simple enough: there was no appropriate theory about concepts.

When, for instance, Radder [28] contends that the essential step from experiment to theory is to disconnect theoretical concepts from the particular experimental processes in which they have been realized [p.157], he not only misconceives the status and role of theories, he also does not realize that experiments are essentially material actualisations of models. Abstracting regularities from observations into models, and shaping the milieu for such a model in order to find similar ones, thereby achieving generalization, is anything but disconnecting them. It seems that he overshot a bit in his critique of scientific constructivism. Additionally, his perspective does not provide any possibility to speak about the relation between concepts and models. Though Radder obviously sensed a strong change on the way from putting observations into scene towards concepts, he fails to provide a fruitful picture of it. He can’t surpass that feeling towards insight, as he muses about “… ‘unintended consequences’ that might arise from the potential use of theoretical concepts in novel situations.” Such descriptions are close to scientific mysticism.

Radder’s account is a quite recent one, but others are not really helpful about the relation between experiment, model and concept either. Kuhn’s praised concept of paradigmatic changes [24] can be rated at most as a phenomenological or historicizing description. Sure, his approach brought a fresh perspective in times of overdone reductionism, but he never provided any kind of abstract mechanism. Other philosophers of science stuck to concepts like prediction (cf. Reichenbach [20], Salmon [21]) and causality (cf. Bunge [22], Pearl [23]), which of course can’t say anything about the relation to the category of concepts. Finally, Nancy Cartwright [25], Isabelle Stengers [26], Bruno Latour [9] and Karin Knorr Cetina [10] are representatives of the various shades of constructivism, whether individually shaped or embedded into a community, which also can’t say anything about concepts as categories. A scan through the Journal of Applied Measurement did not reveal any significantly different items.

Thus, so far philosophy of science, sociology and history of science have been unable to understand the particular dynamics between models and concepts as abstract categories, i.e. as Models or Concepts.

31. If the members of a community, or even the participants in random interactions within it, agree on the persistence of their relations, then they will tend to exhibit a stronger propensity towards collaboration. Robert Axelrod demonstrated that on the formal level by means of a computer experiment [33]. He was the first to propose game theory as a means to explain the choice of strategies between interactants.
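To give a flavour of the kind of computer experiment meant here, the following is a minimal sketch of a single pairing in an Axelrod-style tournament. It is not taken from [33]; the payoffs are the standard prisoner’s dilemma values (T=5, R=3, P=1, S=0), and the strategies and the round count are merely illustrative.

```python
# Payoffs for (player, opponent) given their moves: C = cooperate, D = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(history_self, history_other):
    # Cooperate first, then repeat the opponent's last move.
    return history_other[-1] if history_other else 'C'

def always_defect(history_self, history_other):
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a); hist_b.append(move_b)
        score_a += pay_a; score_b += pay_b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # stable cooperation: (600, 600)
print(play(tit_for_tat, always_defect))  # defection pays only once: (199, 204)
```

The persistence of the relation—here simply the number of expected future rounds—is what makes the cooperative strategy viable at all; in a one-shot encounter, defection would dominate.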

32. Orig.: „Seit über 200 Jahren ist die Philosophie anthropologisch bestimmt. Was das genauer bedeutet, hat sie dagegen kaum erforscht.“ (For over 200 years, philosophy has been anthropologically determined. What this means more precisely, however, it has hardly investigated.)

33. Orig.: „Nietzsches Idealismuskritik, die in vielen Schattierungen vorliegt und immer auf das philosophische Selbstmissverständnis eines reinen Geistes und reiner Begriffe zielt, richtet sich auch gegen ein bestimmtes Naturverständnis.“ (KAV439) (Nietzsche’s critique of idealism, which comes in many shades and always aims at philosophy’s self-misunderstanding of a pure mind and pure concepts, is also directed against a particular understanding of nature.)

34. More precisely, in evolutionary processes the capability for generalization is selected under conditions of scarcity. Scarcity, however, is inevitably induced under the condition of growth or consumption. It is important to understand that newly emerging levels of generalization do not replace former levels of integration. Those undergo a transformation with regard to their relations and their functional embedding, i.e. with regard to their factuality. In morphology of biological specimens this is well-known as “Überformung”. For more details about evolution and generalization please see this.

35. The notions of “philosophy of nature” or even “natural philosophy” are strictly inappropriate. Both “kinds” of philosophy are not possible at all. They have to be regarded as a strange mixture of contemporarily available concepts from science (physics, chemistry, biology), mysticism or theism and the mistaken attempt to transfer topics as such from there to philosophy. Usually, the result is simply a naturalist fallacy with serious gaps regarding the technique of reflection. Think about Kant’s physicalistic tendencies throughout his philosophy, the unholy adaptation of Darwinian theory, analytic philosophy, which is deeply influenced by cybernetics, or the comeback of determinism and functionalism due to almost ridiculous misunderstandings of the brain.

Nowadays it must be clear that philosophy before the reflection of the role of language, or more generally, before the role of languagability—which includes processes of symbolization and naming—can’t be regarded as serious philosophy. Results from the sciences can be imported into philosophy only as formalized structural constraints. Evolutionary theory, for instance, first has to be formalized appropriately (as we did here) before it could be of any relevance to philosophy. Yet, what is philosophy? Besides Deleuze’s answer [35], we may conceive philosophy as a technique of asking about the conditionability of the possibility to reflect. Hence, Wittgenstein said that philosophy should be regarded as a cure. Thus philosophy includes fields like ethics as a theory of morality, or epistemology, which we developed here into a “choreostemology”.

36. Orig.: „Der Punkt, um den es sich namentlich handelt, lässt sich ganz bestimmt angeben. Es ist gleichsam der Apfel in dem logischen Sündenfall der deutschen Philosophie nach Kant: das Verhältnis zwischen Subjekt und Objekt in der Erkenntnis.“ (The point at issue can be stated quite precisely. It is, so to speak, the apple in the logical Fall of German philosophy after Kant: the relation between subject and object in cognition.)

37. Although Rölli usually esteems Deleuze’s philosophy of the differential, here he refers to the difference. I think it should be read as “divergence and differential”.

38. Orig.: „Nach allem wird klarer geworden sein, dass es sich bei diesem Pragmatismus nicht um einen einfachen Pragmatismus handelt, sondern um einen mit aller philosophischen Raffinesse konstruierten Pragmatismus der Differenz.“ (After all this, it will have become clearer that this pragmatism is not a simple pragmatism, but a pragmatism of difference, constructed with all philosophical refinement.)

39. As scientific facts, quantum physics, the probabilistic structure of the brain and the non-representationalist working of the brain falsify determinism as well as the finiteness of natural processes, even if there should be something like “natural laws”.

40. See the article about the structure of comparison.

41. Even Putnam does so, not only in his early functionalist phase, but still in Representation and Reality [36].

42. Usually, philosophers are trained only in logic, which does not help much, since logic is not a process. Of course, being trained in mathematical structures does not imply that the resulting philosophy is reasonable at all. Take Alain Badiou as an example, who just blows up materialism.

43. A complete new theory of governmentality and sovereignty would be possible here.

44. The notion of “scopic” media as coined by Knorr Cetina means that modern media substantially change the point of view (“scopein”, looking, viewing). Today, we are not just immersed in them; we deliberately choose them and search for them. The change of perspective is thought to be a multitude, one that contracts space and time. This, however, is not quite typical of the new media.

45. Here we refer to our extended view of “information” that goes far beyond the technically reduced perspective that forms the mainstream today. Information is a category that can’t be limited to the immaterial. See the chapter about “Information and Causality”.

46. Vera Bühlmann described certain aspects of Deleuze’s philosophy as an attempt to naturalize transcendentality in the context of emergence, as it occurs in complex systems. Deleuze described the respective setting in “Logic of Sense” [49], in the 14th series of paradoxes.

47. …which is not quite surprising, since we developed the choreostemic space together.

References
  • [1] Gilles Deleuze, Difference and Repetition. Translated by Paul Patton, Athlon Press, 1994 [1968].
  • [2] Ludwig Wittgenstein, Philosophical Investigations.
  • [3] Wilhelm Vossenkuhl. Die Möglichkeit des Guten. Beck, München 2006.
  • [4] Jürgen Habermas, Über Moralität und Sittlichkeit – was macht eine Lebensform »rational«? in: H. Schnädelbach (Hrsg.), Rationalität. Suhrkamp, Frankfurt 1984.
  • [5] Alan Turing. Chemical Basis of Morphogenesis.
  • [6] K. Wassermann, That Centre-Point Thing. The Theory Model in Model Theory. In: Vera Bühlmann, Printed Physics, Springer New York 2012, forthcoming.
  • [7] Georg Christoph Tholen. Die Zäsur der Medien. Kulturphilosophische Konturen. Suhrkamp, Frankfurt 2002.
  • [8] Vera Bühlmann. Inhabiting media : Annäherungen an Herkünfte und Topoi medialer Architektonik. Thesis, University of Basel 2011. available online, summary (in German language) here.
  • [9] Bruno Latour,
  • [10] Karin Knorr Cetina (1991). Epistemic Cultures: Forms of Reason in Science. History of Political Economy, 23(1): 105-122.
  • [11] Günther Ropohl, Die Unvermeidlichkeit der technologischen Aufklärung. In: Paul Hoyningen-Huene, & Gertrude Hirsch (eds.), Wozu Wissenschaftsphilosophie? De Gruyter, Berlin 1988.
  • [12] Bas C. van Fraassen, Scientific Representation: Paradoxes of Perspective. Oxford University Press, New York 2008.
  • [13] Ronald N. Giere, Explaining Science: A Cognitive Approach. Cambridge University Press, Cambridge 1988.
  • [14] Aaron Ben-Ze’ev, Is There a Problem in Explaining Cognitive Progress? pp.41-56 in: Robert F. Goodman & Walter R. Fisher (eds.), Rethinking Knowledge: Reflections Across the Disciplines (Suny Series in the Philosophy of the Social Sciences) SUNY Press, New York 1995.
  • [15] Robert Brandom, Making it Explicit.
  • [16] C.S. Peirce, var.
  • [17] Umberto Eco,
  • [18] Helmut Pape, var.
  • [19] Vera Bühlmann, “Primary Abundance, Urban Philosophy — Information and the Form of Actuality.” pp.114-154, in: Vera Bühlmann (ed.), Printed Physics. Springer, New York 2012, forthcoming.
  • [20] Hans Reichenbach, Experience and Prediction. An Analysis of the Foundations and the Structure of Knowledge, University of Chicago Press, Chicago, 1938.
  • [21] Wesley C. Salmon, Causality and Explanation. Oxford University Press, New York 1998.
  • [22] Mario Bunge, Causality and Modern Science. Dover Publ. 2009 [1979].
  • [23] Judea Pearl, T.S. Verma (1991). A Theory of Inferred Causation.
  • [24] Thomas S. Kuhn, The Structure of Scientific Revolutions.
  • [25] Nancy Cartwright. var.
  • [26] Isabelle Stengers, Spekulativer Konstruktivismus. Merve, Berlin 2008.
  • [27] Peter M. Stephan Hacker, “Of the ontology of belief”, in: Mark Siebel, Mark Textor (eds.),  Semantik und Ontologie. Ontos Verlag, Frankfurt 2004, pp. 185–222.
  • [28] Hans Radder, “Technology and Theory in Experimental Science.” in: Hans Radder (ed.), The Philosophy Of Scientific Experimentation. Univ of Pittsburgh 2003, pp.152-173
  • [29] C. Alchourron, P. Gärdenfors, D. Makinson (1985). On the logic of theory change: Partial meet contraction functions and their associated revision functions. Journal of Symbolic Logic, 50: 510–530.
  • [30] Sven Ove Hansson (1998). Editorial to Thematic Issue on: “Belief Revision Theory Today”, Journal of Logic, Language, and Information 7(2), 123-126.
  • [31] John L. Pollock, Anthony S. Gillies (2000). Belief Revision and Epistemology. Synthese 122: 69–92.
  • [32] Michael Epperson (2009). Quantum Mechanics and Relational Realism: Logical Causality and Wave Function Collapse. Process Studies, 38:2, 339-366.
  • [33] Robert Axelrod, Die Evolution der Kooperation. Oldenbourg, München 1987.
  • [34] Marc Rölli, Kritik der anthropologischen Vernunft. Matthes & Seitz, Berlin 2011.
  • [35] Deleuze, Guattari, What is Philosophy?
  • [36] Hilary Putnam, Representation and Reality.
  • [37] Giorgio Agamben, The State of Exception. University of Chicago Press, Chicago 2005.
  • [38] Elena Bellina, “Introduction.” in: Elena Bellina and Paola Bonifazio (eds.), State of Exception. Cultural Responses to the Rhetoric of Fear. Cambridge Scholars Press, Newcastle 2006.
  • [39] Friedrich Albert Lange, Geschichte des Materialismus und Kritik seiner Bedeutung in der Gegenwart. Frankfurt 1974. available online @ zeno.org.
  • [40] Michel Foucault, Archaeology of Knowledge.
  • [41] Benjamin Morgan, Undoing Legal Violence: Walter Benjamin’s and Giorgio Agamben’s Aesthetics of Pure Means. Journal of Law and Society, Vol. 34, Issue 1, pp. 46-64, March 2007. Available at SSRN: http://ssrn.com/abstract=975374
  • [42] Michael Epperson, “Bridging Necessity and Contingency in Quantum Mechanics: The Scientific Rehabilitation of Process Metaphysics.” in: David R. Griffin, Timothy E. Eastman, Michael Epperson (eds.), Whiteheadian Physics: A Scientific and Philosophical Alternative to Conventional Theories. in process, available online; mirror
  • [43] Gilles Deleuze, Empiricism and Subjectivity: An Essay on Hume’s Theory of Human Nature. Columbia University Press, New York 1989.
  • [44] Gilles Deleuze, Pure Immanence – Essays on A Life. Zone Books, New York 2001.
  • [45] Isabelle Peschard
  • [46] Knorr Cetina, Karin (2009): The Synthetic Situation: Interactionism for a Global World. In: Symbolic Interaction, 32 (1), S. 61-87.
  • [47] Knorr Cetina, Karin (2012): Skopische Medien: Am Beispiel der Architektur von Finanzmärkten. In: Andreas Hepp & Friedrich Krotz (eds.): Mediatisierte Welten: Beschreibungsansätze und Forschungsfelder. Wiesbaden: VS Verlag, S. 167-195.
  • [48] Vera Bühlmann, “Serialization, Linearization, Modelling.” First Deleuze Conference, Cardiff 2008; “Gilles Deleuze as a Materialist of Ideality”, lecture held at the Philosophy Visiting Speakers Series, University of Duquesne, Pittsburgh 2010.
  • [49] Gilles Deleuze, Logic of Sense. Columbia University Press, New York 1991 [1990].
  • [50] N. Malcolm, Nothing is Hidden: Wittgenstein’s Criticism of His Early Thought. Basil Blackwell, Oxford 1986.
  • [51] David Blair, Wittgenstein, Language and Information: “Back to the Rough Ground!” Springer, New York 2006. mirror
  • [52] Caroline Lyon, Chrystopher L Nehaniv, J Saunders (2012). Interactive Language Learning by Robots: The Transition from Babbling to Word Forms. PLoS ONE 7(6): e38236. Available online (doi:10.1371/journal.pone.0038236)
  • [53] Peter M. Stephan Hacker, “Analytic Philosophy: Beyond the linguistic turn and back again”, in: M. Beaney (ed.), The Analytic Turn: Analysis in Early Analytic Philosophy and Phenomenology. Routledge, London 2006.
  • [54] Hilary Putnam, The Meaning of “Meaning”, 1976.

۞

Transformation

May 17, 2012 § Leave a comment

In the late 1980s there was a funny, or strange, if you like, discussion in the German public about a particular influence of the English language on the German language. That discussion got not only teachers in higher education going; even „Der Spiegel“, Germany’s (still) leading weekly news magazine, damned the respective „anglicism“. What I am talking about here concerns the attitude towards „sense“. At that time, a good 20 years ago, it was held to be impossible to say „dies macht Sinn“, engl. „this makes sense“. Speakers of German at that time understood the “make” as “to produce”. Instead, one was told, the correct phrase had to be „dies ergibt Sinn“, in a literal but impossible translation something like „this yields sense“, or even „dies hat Sinn“, in a literal, but again wrong and impossible translation, „this has sense“. These former ways of building a reference to the notion of „sense“ feel awkward to many (most?) speakers of German today. Nowadays, the English version of the meaning of the phrase has replaced the old German one, and one can even find in „Der Spiegel“ the analogue of “making” sense.

Well, the issue here is not just one of historical linguistics or of style. The differences that we can observe here are deeply buried in the structure of the respective languages. It is hard to say whether such idioms in the German language are due to the history of German Idealism, or whether this particular philosophical stance developed on the basis of the structures in the language. Perhaps a bit of both, one could say from a Wittgensteinian point of view. Anyway, we may and can relate such differences in “contemporary” language to philosophical positions.

It is certainly by no means an exaggeration to conclude that cultures differ significantly in what their languages allow to be expressed. Such a thing as an “exact” translation is not possible beyond trivial texts or a use of language that is very close to physical action. Philosophically, we may assign a scale, or a measure, to describe the differences mentioned above in probabilistic terms, and this measure spans between pragmatism and idealism. This contrast also deeply influences philosophy itself. Any kind of philosophy comes in those two shades (at least), often expressed or denoted by the attributes „continental“ and „anglo-american“. I think these labels just hide the relevant properties. This contrast of course applies to the reading of idealistic or pragmatic philosophers itself. It really makes a difference (1980s German: „it is a difference“) whether a native English-speaking philosopher reads Hegel or a German native does, whether a German native reads Peirce or an American does, whether Quine conducts research in logic or Carnap does. The story quickly gets complicated if we take into consideration French philosophy and its relation to Heidegger, or the reading of modern French philosophers in contemporary German-speaking philosophy (which is almost completely absent).1

And it becomes even more complicated, if not complex and chaotic, if we consider the various scientific sub-cultures as particular forms of life, formed by and forming their own languages. In this way it may well seem rather impossible—at least, one feels tempted to think so—to understand Descartes, Leibniz, Aristotle, or even the pre-Socratics, not to speak of the Cro-Magnon culture2, albeit it is probably more appropriate to reframe the concept of understanding. After all, it may itself be infected by idealism.

In the chapters to come you may expect the following sections. As we did before, we’ll try to go beyond the mere technical description, providing the historical trace and the wider conceptual frame.

A Shift of Perspective

Here, I need this reference to the relativity as it is introduced in—or by—language for highlighting a particular issue. The issue concerns a shift in preference, from the atom, the point, from matter, substance, essence and metaphysical independence, towards the relation and its dynamic form, the transformation. This shift concerns some basic relationships of the weave that we call “Lebensform” (form of life), including the attitude towards those empiric issues that we will deal with in a technical manner later in this essay, namely the transformation of “data”. There are, of course, almost countless aspects of the topos of transformation, such as evolutionary theory, the issue of development, or, in the more abstract domains, mathematical category theory. In some way or another we have already dealt with these earlier (for category theory, for evolutionary theory). These aspects of the concept of transformation will not play a role here.

In philosophical terms, the described difference between the German and the English language, and the change of the respective German idiom, marks the transition from idealism to pragmatism. This corresponds to the transition from a philosophy of primal identity to one where difference is transcendental. In the same vein, we could also set up the contrast between logical atomism and the event as philosophical topoi, or between favoring existential approaches and ontology over epistemology. Even more remarkably, we also find an opposing orientation regarding time. While idealism, materialism, positivism or existentialism (and all similar attitudes) are heading backwards in time, and only backwards, pragmatism and, more generally, a philosophy of events and transformation is heading forward, and only forward. It marks the difference between settlement (in Heideggerian terms „Fest-Stellen“, English something like „fixing at a location“, putting something into the „Gestell“3) and anticipation. Settlements are reflected by laws of nature in which time does not—and shall not—play a significant role. All physical laws, and almost all theories in contemporary physics, are symmetric with respect to time. The “law perspective” blinds against the concept of context, quite obviously so. Yet, being blinded against context also disables any adequate reference to information.

In contrast, within a framework that is truly based on the primacy of interpretation and thus following the anticipatory paradigm, it does not make sense to talk about “laws”. Notably, issues like the “problem” of induction exist only in the framework of the static perspective of idealism and positivism.

It is important to understand that these attitudes are far from being just “academic” distinctions. There are profound effects to be found on the level of empiric activity, concerning how data are handled and with which kind of methods. Furthermore, they can’t be “mixed” once one of them has been chosen. Although we may switch between them in a sequential manner, across time or across domains, we can’t practice them synchronously, as the whole setup of the form of life is affected. Of course, we do not want to rate one of them as the “best”; we just want to make clear that there are particular consequences of that basic choice.

Towards the Relational Perspective

As late as 1991, Robert Rosen’s work on „Relational Biology“ was anything but common ground [1]. As a mathematician, Rosen was interested in the problem of finding a proper way to represent living systems by formal means. As a result of this research, he strongly proposed the “relational” perspective. He identifies Nicolas Rashevsky as its originator, who first mentioned it around 1935. It really sounds strange that relational biology had to be (re-)invented. What else but relations could be important in biology? Yet, still today atomistic thinking is quite abundant; think alone of the reductionist approaches in genetics (which fortunately have come under serious attack meanwhile4). Or think of the still prevailing helplessness in various domains to conceive appropriately of complexity (see our discussion of this here). Being aware of relations means that the world is not conceived as made from items that are described by inputs and outputs with some analytics, or say deterministics, in between. Only of such items could it be said that they “function”. The relational perspective abolishes the possibility of reducing real “systems” to “functions”.

As already indicated by the appearance of Rashevsky, there is, of course, a historical trace for this shift, a kind of soil emerging from intellectual sediments.5 While the 19th century could be considered as characterized by the topos of the population (of atoms)—cf. the line from Laplace and Carnot to Darwin and Boltzmann—we can observe a growing awareness of the relation in the 20th century. Wittgenstein’s Tractatus started to oppose Frege and has always been in stark contrast to logical positivism, then accompanied by Zermelo (the “axiom” of choice6), Rashevsky (relational biology), Turing (morphogenesis in complex systems), McLuhan (media theory), String Theory in physics, Foucault (the field of propositions), and Deleuze (transcendental difference). Comparing Habermas and Luhmann on the one side—we may label their position idealistic functionalism—with Sellars and Brandom on the other—who have been digging into the pragmatics of the relation as it is present in humans and their culture—we find the same kind of difference. We could also include Gestalt psychology as a kind of precursor to the party of “relationalists,” mathematical category theory (as opposed to set theory) and some strains of the behavioral sciences. Researchers like Ekman & Scherer (FACS), Kummer (sociality expressed as dynamics of relative positions), or Colmenares (play) focused on the relation itself, going far beyond the implicit reference to the relation as a secondary quality. We may add David Shane7 for architecture and Clarke or Latour8 for sociology. Of course, there are many, many other proponents who helped to grow the topos of the relation; yet, even without a detailed study we may guess that compared to the mainstreams they still remain comparatively few.

These differences can hardly be overestimated in the field of the information sciences, computer science, data analysis, or machine-based learning and episteme. It makes a great difference whether one bases the design of an architecture, or the design of use, on the concept of interfaces, most often defined as a location of full control, notably in both directions, or on the concept of behavioral surfaces9. In the field of empiric activities, that is, modeling in its wide sense, it yields very different setups and consequences whether we start with the assumption of independence between our observables or between our observations, or whether we start with no assumptions about the dependency between observables or observations, respectively. The latter is clearly the preferable choice in terms of intellectual soundness. Even if we stick to the first of the two alternatives, we should NOT use methods that work only if that assumption is satisfied. (It is some kind of mystery that people believe that doing so could be called science.) The reason is pretty simple. We do not know anything about the dependency structures in the data before we have finished modeling. It would inevitably result in a petitio principii if we put “independence” into the analysis, wrapped into the properties of the methods. We would just find… guess what. After destroying facts—in the Wittgensteinian sense, understood as relationalities—into empiristic dust, we will not be able to find any meaningful relation at all.
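The point can be made tangible with a small, self-contained sketch. Nothing in it is taken from the text above: the AR(1) process and all parameter values are invented for the purpose. It shows how the naive standard error of the mean—a formula that silently presumes independent observations—reports far more certainty than autocorrelated data actually warrant.

```python
import numpy as np

rng = np.random.default_rng(7)

def ar1(n, phi=0.8):
    # AR(1) process: each observation depends on its predecessor,
    # so the data are correlated, not independent.
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

n, trials = 200, 2000
# The actual variability of the sample mean, measured empirically:
means = np.array([ar1(n).mean() for _ in range(trials)])
# The naive standard error computed from one sample, presuming i.i.d. data:
naive_se = ar1(n).std(ddof=1) / np.sqrt(n)

print("empirical spread of the mean:", means.std())  # the real uncertainty
print("naive standard error:        ", naive_se)     # markedly smaller
```

The method has quietly answered the question about dependency before modeling even began; the petitio principii mentioned above is exactly this kind of silent pre-decision.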

Positioning Transformation (again)

Similarly, if we treat data as a “true” mapping of an outside “reality”, as “givens” that are possibly distorted a bit by more or less noise, we will never find multiplicity in the representations that we could derive through modeling, simply because it would contradict the prejudice. We also would not recognize all the possible roles of transformation in modeling. A measurement device acts as a filter10, and as such it does not differ from any analytic transformation of the data. From the perspective of the associative part of modeling, where the data are mapped to desired outcomes or decisions, “raw” data are simply not distinguishable from “transformed” data, unless the treatment itself is encoded as data as well. Correspondingly, we may consider any data transformation by algorithmic means as an additional measurement device, responding to particular qualities in the observations in its own right. It is this equivalence that allows for the change from a linear to a circular, and even self-referential, arrangement of empiric activities. Long-term adaptation—I would say any adaptation at all—is based on such a circular arrangement. The only thing we would have to change in order to gain these new possibilities is to drop the “passivist”, representationalist realism11.

Usually, the transformation of data is considered a matter of discernibility, taken as an abstract property of the data (people don’t put it that way, of course; it is our way of speaking here). Today, the respective aphorism coined by Bateson has already become proverbial, despite its simplistic shape: information is a difference that makes a difference. According to the context in which data are handled, this potential discernibility is addressed in different ways. Let us distinguish three such contexts: (i) data warehousing, (ii) statistics, and (iii) learning as an epistemic activity.

In data warehousing one is usually faced with a large range of different data sources and data sinks (consumers), where the difference between these sources and sinks simply relates to the different technologies and formats of databases. The warehousing tool should “transform” the data such that they can be used in the intended manner on the side of the sinks. The storage of the raw data as measured from the business processes, and the efforts to provide any view onto these data, has to satisfy two conditions (in the current paradigm). It has to be neutral—data should not be altered beyond the correction of obvious errors—and its performance, simply in terms of speed, has to be scalable, if not independent from the data load. The activities in data warehousing are often circumscribed as “Extract, Transform, Load”, abbreviated ETL. There are many and large software solutions for this task, commercial ones and open source (e.g. Talend). The effect of DWH is to disclose the potential for an arbitrary and quickly served perspective onto the data, where “perspective” means just re-arranged columns and records from the database. Except for cleaning and simple arithmetic operations, the individual bits of data themselves remain largely unchanged.

In statistics, transformations are applied in order to satisfy the conditions for particular methods. In other words, the data are changed in order to enhance discernibility. Most popular is the log-transformation, which separates small values and compresses the long tail of large ones; two different small values that originally lie close together are better separated after a log-transformation, hence it is feasible to apply it to data whose distribution has its mode at small values and a long tail toward large ones. Other transformations aim at a particular distribution, such as the z-score, or Fisher’s z-transformation. Interestingly, there is a further class of powerful transformations that is usually not conceived as such: residuals. Residuals are defined as the deviation of the data from a particular model; in linear regression it is the distance to the regression line (whose square enters the fitting criterion).
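Before we turn to the extension of residuals, here is a minimal sketch of the transformations just mentioned, using a hypothetical log-normal sample; the figures and parameters are arbitrary illustrations, not prescriptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# A skewed sample: mode at small values, long tail toward large ones
# (hypothetical data, e.g. incomes or reaction times).
x = rng.lognormal(mean=0.0, sigma=1.0, size=1000)

x_log = np.log(x)                    # separates small values, compresses the tail
x_z = (x - x.mean()) / x.std()       # z-score: zero mean, unit variance (linear!)

# Residuals as a transformation: deviation of y from a fitted linear model.
y = 2.0 * x + rng.normal(0, 0.5, size=1000)
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

def skewness(v):
    return ((v - v.mean()) ** 3).mean() / v.std() ** 3

print(f"skewness raw: {skewness(x):+.2f}")      # strongly positive
print(f"skewness log: {skewness(x_log):+.2f}")  # close to 0
print(f"skewness z:   {skewness(x_z):+.2f}")    # unchanged: the z-score is linear
```

Note that the z-score, being linear, changes location and scale but not the shape of the distribution; only the log-transformation alters discernibility among the small values.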

The concept, however, can be extended to those data which do not “follow” the investigated model. The analysis of residuals has two aspects, a formal one and an informal one. Formally, it is used as a complex test of whether the investigated model fits or not: the residuals should not show any evident “structure”. That’s it. There is no institutionalized way back to the level of the investigated model; there are no rules about that which could be negotiated in a yet-to-be-established community. The statistical framework is a linear one, which can be seen as a heritage of positivism. It is explicitly forbidden to “optimize” a correlation by repeated actualization. Yet, informally the residuals may give hints on how to change the basic idea as represented by the model. Here we find a circular setup, where the strategy is to remove any rule-based regularity, i.e. discernibility, from the data.

The effect of this circular arrangement takes place entirely within the practicing human, as a kind of refinement. It can’t be found anywhere in the methodological procedure itself in a rule-based form. This brings us to the third area, epistemic learning.

In epistemic learning, any of the potentially significant signals should be rendered in such a way as to allow for an optimized mapping towards a registered outcome. Such outcomes often come as binary values, or as a small group of ordinal values in the case of multi-constraint, multi-target optimization. In epistemic learning we thus find the separation of transformation and association in its most prominent form, despite the fact that data warehousing and statistics are also intended to be used for enhancing decisions. Yet, their linearity simply does not allow for any kind of institutionalized learning.

This arbitrary restriction to the linear methodological approach in formal epistemic activities results in two related and quite unfavorable effects: first, the shamanism of “data exploration”, and second, the infamous hell of methods. One can indeed find thousands, if not tens of thousands, of research or engineering articles trying to justify a particular new method as the most appropriate one for a particular purpose. These methods themselves, however, are never identified as a “transformation”. Authors are all struggling for the “best” method, the whole community neglecting the possibility—and the potential—of combining different methods after shaping them as transformations.

The laborious and never-ending training necessary to choose from the huge number of possible methods is then called methodology… The situation is almost paradoxical. First, the methods are claimed to tell something about the world, although this is not possible at all, not just because those methods are analytic. It is an idealistic hope, one already demolished by Hume. Above all, only analytic methods are considered to be scientific. Then, given the large population of methods, the choice of a particular one becomes aleatory, which renders the whole activity into a deeply non-scientific one. Additionally, it is governed by the features of some software, or the skills of the user of such software, not by a conceptual stance.

Now remember that any method is also a specific filter. Obviously, nothing can be known about the benefit of a particular method before the prediction based on the respective model has been validated. This simple insight renders “data exploration” meaningless. It can only play its role within linear empirical frameworks, which are inappropriate anyway. Data exploration is suggested to be done “intuitively”, often using methods of visualization. Yet, those methods are severely restricted with regard to the graspable dimensionality: more than 6 to 8 dimensions can’t be “visualized” at once. Compare this to the 2^n (n: number of variables) possible models and you immediately see the problem. Beyond that, the only effect of visualization is just a primitive form of clustering. Additionally, visual inputs are images, above all, and as images they can’t play a well-defined epistemological role.12

Complementary to the non-concept of “exploring” data13, and equally misconceived, is the notion of “preparing” data. At least, it must be rated as misconceived as far as it comprises transformations beyond error correction and arranging data into tables. The reason is the same: we can’t know whether a particular “cleansing” will enhance the predictive power of the model—in other words, whether it carries potential information that supports the intended discernibility—before the model has been built. There is no possibility to decide which variables to include before having finished the modeling. In some contexts the information accessible through a particular variable could be relevant or even important. Yet, if we conceive of transformations as preliminary hypotheses, we can’t call them “preparation” any more. “Preparation” for what? For proving the petitio principii? Certainly the peak of all preparatory nonsense is the “imputation” of missing values.

Dorian Pyle [11] calls such introduced variables “pseudo variables”, others call them “latent” or even “hidden variables”.14 Any of these labels is inappropriate, since the transformation is nothing else than a measurement device. Introduced variables are just variables, nothing else.

Indeed, these labels are reliable markers: whenever you meet a book or article dealing with data exploration, data preparation, the “problem” of selecting a method, or likewise, selecting an architecture within a meta-method like the Artificial Neural Networks, you can know for sure that the author is not really interested in learning and reliable predictions. (Or, that he or she is not able to distinguish analysis from construction.)

In epistemic learning the handling of residuals is somewhat inverse to their treatment in statistics, again as a result of the conceptual difference between the linear and the circular approach. In statistics one tries to prove that the model, say, the transformation, removes all the structure from the data such that the remaining variation is pure white noise. Unfortunately, there are two drawbacks here. First, one has to define the model before removing the noise and before checking the predictive power. Second, the test for any possibly remaining structure again takes place within the atomistic framework.

In learning we are interested in the opposite. We are looking for transformations that remove the noise in a multi-variate manner such that the signal-to-noise ratio is strongly enhanced, perhaps even up to the proto-symbolic level. Only after the de-noising achieved by the learning process, that is, after a successful validation of the predictive model, is the structure described for the (almost) noise-free data segment15, as an expression complementary to the predictive model.

In our opinion an appropriate approach would actualize as an instance of epistemic learning that is characterized by the following points (a schematic sketch follows the list):

  • – conceiving any method as transformation;
  • – conceiving measurement as an instance of transformation;
  • – conceiving any kind of transformation as a hypothesis about the “space of expressibility” (see next section), or, similarly, about the finally selected model;
  • – the separation of transformation and association;
  • – the circular arrangement of transformation and association.
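The following is a deliberately schematic sketch of such a circular arrangement of transformation and association. It is not the SomFluid implementation; the names (candidate_transforms, build_model, validate) are hypothetical placeholders for whatever transformation engine, associative mechanism and validation scheme one employs:

```python
import random

def circular_modeling(data, target, candidate_transforms, build_model,
                      validate, max_rounds=10):
    """Schematic loop: transformations are treated as hypotheses that
    survive only if they improve the validated predictive power."""
    accepted = []                                 # transformations kept so far
    best = validate(build_model(data, target))    # baseline on untransformed data
    for _ in range(max_rounds):
        t = random.choice(candidate_transforms)   # propose a hypothesis
        extended = t(data)                        # derived variables act as new "measurements"
        score = validate(build_model(extended, target))
        if score > best:                          # keep only if prediction improves
            best, data, accepted = score, extended, accepted + [t]
        # otherwise the hypothesis is dropped; the loop closes back
        # onto the transformed data, not onto the raw "givens"
    return accepted, best
```

The decisive point is that the loop never judges a transformation by inspecting the data alone; only the validated prediction decides whether a hypothesis survives.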

The Abstract Perspective

We now have to take a brief look at the mechanics of transformations in the domain of epistemic activities.16 For doing so, we need a proper perspective. As such we choose the notion of space. Yet, we would like to emphasize that this space is not necessarily Euclidean, i.e. flat, or open like the Cartesian space, i.e. with quantities running to infinity. Furthermore, dimensions need not be thought of as “independent”, i.e. orthogonal to each other. Distance measures need to be defined only locally, yet without implying ideal continuity. There might be a certain kind of “graininess”, defined by a distance D, below which the space is not defined. The space may even contain “bubbles” of lower dimensionality. So, it is indeed a very general notion of “space”.

Observations shall be represented as “points” in this space. Since these “points” are not independent from the efforts of the observer, they are not dimensionless. To put it more precisely, they are like small “clouds”, best described as probability densities for “finding” a particular observation. Of course, this “finding” is an inextricable mixture of “finding” and “constructing”; it does not make much sense to distinguish the two on the level of such cloudy points. Note that the cloudiness is not a problem of accuracy in measurement! A posteriori, that is, subsequent to introducing an irreversible move17, such a cloud could also be interpreted as an open set of the provoked observation and virtual observations. It should be clear by now that such a concept of space is very different from the Euclidean space that nowadays serves as the base concept for any statistics or data mining. If you think that conceiving such a space is unnecessary or even nonsense, then think about quantum physics. In quantum physics we are also faced with the breakdown of the separation between observer and observable, and it ended up quite precisely in spaces like the one we described above. These spaces are then handled by various renormalization methods.18 In contrast to the abstract yet still physical space of quantum theory, our space need not even contain an “origin”. Elsewhere we called such a space aspectional space.

Now let us take the important step of becoming interested in only a subset of these observations. Assume we want to select a very particular set of observations—they are still clouds of probabilities, made from virtual observations—by means of prediction. This selection can be conceived in two different ways. The first way is the one commonly applied; it consists of the reconstruction of a “path”. Since in the contemporary epistemic life form of “data analysts” Cartesian spaces are used almost exclusively, all these selection paths start from the origin of the coordinate system. The endpoint of the path is the point of interest, the “outcome” that should be predicted. As a result, one first gets a mapping function from predictor variables to the outcome variable. All possible mappings form the space of mappings, which is a category in the mathematical sense.

The alternative view does not construct such a path within a fixed coordinate system, i.e. within a space with fixed properties. Quite to the contrary, the space itself gets warped and transformed until very simple figures appear, which represent the various subsets of observations according to the focused quality.

Imagine an ordinary, small, blown-up balloon. Next, imagine a grid in the space enclosed by the balloon’s hull, made of very thin threads. These threads shall represent the space itself. Of course, in our example the space is 3d, but it is not limited to this case. Now think of two kinds of small pearls attached to the threads all over the grid inside the balloon, blue ones and red ones. It shall be the red ones in which we are interested. The question now is: what can we do to separate the blue ones from the red ones?

The way to proceed is pretty obvious, though the solution itself may be difficult to achieve. What we can try is to warp and to twist, to stretch, to wring and to fold the balloon in such a way that the blue pearls and the red pearls separate as nicely as possible. In order to purify the groups we may even consider compressing some regions of the space inside the balloon such that they turn into singularities. After all this work—and beware, it is hard work!—we introduce a new grid of threads into the distorted space and dissolve the old one. All pearls automatically attach to the threads closest nearby, stabilizing the new space. Again, conceiving of such a space may seem weird, but again we can find a close relative in physics: Einsteinian space-time. Gravitation effectively warps that space, though in a continuous manner. There are famous empirical proofs of that warping of physical space-time.19

Analytically, these two perspectives—path reconstruction on the one hand and space warping on the other—are (almost) equivalent. The perspective of space warping, however, offers a benefit not to be underestimated. We arrive at a new space for which we can define its own properties, and in which we can again define measures different from those possible in the original space. Path reconstruction does not offer such a derived space. Hence, once the path is reconstructed, the story stops. It is a linear story. Our proposal thus is to change perspective.

Warping the space of measurability and expressibility is an operation that inverts the generation of cusp catastrophes20 (see Figure 1 below); thus it transcends them. In the perspective of path reconstruction one has to avoid the phenomena of hysteresis and cusps altogether, thereby losing a lot of information about the observed source of data.

In the Cartesian space, and the path reconstruction methodology related to it, all operations are analytic, that is, organized as symbolic rewriting. The reason is the necessity that the paths remain continuous and closed. In contrast, space warping can be applied locally. Warping spaces in dealing with data is not an exotic or rare activity at all; it happens all the time. We know it even from (simple) mathematics, when we define different functions, including the empty function, for different sets of input parameter values.

The main consequence of changing the perspective from path reconstruction to space warping is an enlargement of the set of possible expressions. We can do more without having to call it “heuristics”. Our guess is that any serious theory of data and measurement must follow the route opened by space warping, if it is to avoid positivistic reductionism. Most likely, such a theory will be a kind of renormalization theory in a connected, relativistic data space.

Revitalizing Punch Cards and Stacks

In this section we will introduce the outline of a tool that allows one to follow the circular approach in epistemic activities. Basically, this tool is about organizing arbitrary transformations. While for analytic (mathematical) expressions there are expression interpreters, it is also clear that analytic expressions form only a subset of the set of all possible transformations, even considering the fact that many expression interpreters have grown into some kind of programming or scripting language. Indeed, Java contains an interpreting engine for JavaScript by default, and there are several quite popular ones for mathematical purposes. One could also conceive of mathematical packages like Octave (open source), MatLab or Mathematica (both commercial) as such expression interpreters, even if their most recent versions can do much, much more. Yet, MatLab & Co. are not quite suitable as platforms for general-purpose data transformation.

The structural metaphor that has proved as powerful as it has been sustainable—for more than 10 years now—is the combination of a workbench with the punch card stack.

Image 1: A Punched Card for feeding data into a computer

Any particular method, mathematical expression or arbitrary computational procedure resulting in a transformation of the original data is conceived as a “punch card”. This provides a proper modularization, and hence standardization. Actually, the role of these “functional compartments” is extremely standardized, at least enough to define an interface for plugins. Like the ancient punch cards made from paper, each card represents a more or less fixed functionality. Of course, this functionality may be defined by a plugin that itself connects to MatLab…

Also, again like the ancient punch cards, the virtualized versions can be stacked. For instance, we first put the treatment for missing values onto the stack, simply to ensure that all NULLs are written as -1. The next card then determines minimum and maximum in order to provide the data for linear normalization, i.e. the mapping of all values into the interval [0..1]. Then we add a card for compressing the “fat tail” of the distribution of values in a particular variable. Alternatively, we may use a card to split the “fat tail” off into a new variable! Finally we apply the normalization card (a plugin) to both the original and the new data column.
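A minimal sketch of this card/stack mechanism might look as follows; the class and the particular cards are purely illustrative choices, not the actual plugin interface:

```python
import numpy as np

class Card:
    """One 'punch card': a fixed, parameterized transformation of a column."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def apply(self, col):
        return self.fn(col)

# The stack from the example above (hypothetical parameter choices):
stack = [
    Card("missing->-1", lambda c: np.where(np.isnan(c), -1.0, c)),
    Card("minmax[0,1]", lambda c: (c - c.min()) / (c.max() - c.min())),
    Card("compress-tail", lambda c: np.log1p(c)),   # one way to shrink a fat tail
]

col = np.array([0.5, np.nan, 3.0, 120.0, 7.5])
for card in stack:        # cards are processed top-down, like a real card stack
    col = card.apply(col)
print(col)
```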

I think you got the idea. Such a stack is not only maintained for each of the variables, it is created on the fly according to needs, as these are detected by simple rules. You may also think of the cards as the set of rules describing the capabilities of agents, which constantly check the data as to whether they could apply their rules. Or you may think of these stacks as a device that works like a tailored distillation column, as used for fractional distillation in petro-chemistry.

Image 2: Some industrial fractional distillation columns for processing mineral oil. Depending on the number of distillation steps, different products result.

These stacks of parameterized procedures and expressions represent a generally programmable computer—or more precisely, an operating system—quite similar to a spreadsheet, albeit the purpose of the latter, and hence its functionality, actualizes in a different form. The whole thing may even be realized as a language! In that case, one would not need a graphical user interface anymore.

The effect of organizing the transformation of data in this way, by means of plugins that follow the metaphor of the “punch card stack”, is dramatic: introducing transformations and testing them can be automated. At this point we should mention the natural ally of the transformation workbench, the maximum-likelihood estimation of the most promising transformations that combine just two or three variables into a new one. All three parts—the transformation stack engine, the dependency explorer, and the evolutionarily optimized associative engine (which is able to create a preference weighting for the variables)—can be put together in such a way that finding the “optimal” model can be run in a fully automated manner. (Meanwhile the SomFluid package has grown into a stage where it can accomplish this… download it here, but you still need some technical expertise to make it run.)

The approach of the “transformation stack engine” is not just applicable to tabular data, of course. Given a set of proper plugins, it can be used as a digester for large sets of images or time series as well (see below).

Transforming Data

In this section we will now take a more practical and pragmatic perspective. Actually, we will describe some of the most useful transformations, including their parameters. We do so because even prominent books about “data mining” have handled the issue of transforming data in a mistaken or at least seriously misleading manner.21,22

If we consider the goal of the transformation of numerical data—increasing the discernibility of assignated observations—we will recognize that there is a rather limited number of types of such transformations, even if we take into account the space of possible analytic functions combining two (or three) variables.

We will organize the discussion of the transformations into three sub-sections, whose subjects are of increasing complexity. Hence, we will start with the (ordinary) table of data.

Tabular Data

Tables may comprise numerical data or strings of characters. In its general form a table may even contain whole texts, a complete book in any of the cells of a column (but see the section about unstructurable data below!). If we want to access the information carried by the string data, we sooner or later have to translate them into numbers. Unlike numbers, string data, and the relations between data points made from string data, must be interpreted. As a consequence, there are always several, if not many, different possibilities for that representation. Besides referring to the actual semantics of the strings, which could be expressed by means of the indices of some preference orders, there are also two important techniques of automatic scaling available, which we will describe below.

Besides string data, dates are a further multi-dimensional category of data. A date encodes not only a serial number relative to some (almost) arbitrarily chosen base date, which we can use to express the age of the item represented by the observation. We also have, of course, day of week, day of month, number of week, number of month, and, not to forget, the season as an approximate class. It depends a bit on the domain whether these aspects play any role at all. Yet, if you think about the rhythms in the city or on the stock markets across the week, or the “black Monday/Tuesday/Friday effect” in production plants or hospitals, it becomes clear that we usually have to represent a single date value by several “informational extracts”.

A last class of data types that we have to distinguish are time values. We already mentioned the periodicity in other aspects of the calendar. In which pair of time values do we find the closer similarity: T1(23:41, 00:05) or T2(08:58, 15:17)? Under any naive distance measure the values of T2 are evaluated as much more similar than those of T1. What we have to do is to set a flag for “circularity” in order to calculate time distances correctly.
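A minimal sketch of such a circular distance for times of day (the helper function is hypothetical, written here for illustration):

```python
def time_distance_minutes(t1, t2, circular=True):
    """Distance between two times of day, given as (hour, minute) tuples."""
    a = t1[0] * 60 + t1[1]
    b = t2[0] * 60 + t2[1]
    d = abs(a - b)
    # On a 24h clock the day wraps around after 1440 minutes.
    return min(d, 1440 - d) if circular else d

# T1 = (23:41, 00:05): linear distance 1416 min, circular only 24 min.
print(time_distance_minutes((23, 41), (0, 5)))    # -> 24
# T2 = (08:58, 15:17): both measures agree, 379 min.
print(time_distance_minutes((8, 58), (15, 17)))   # -> 379
```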

Numerical Data: Numbers, just Numbers?

Numerical data are data for which, in principle, any value from within a particular interval could be observed. If such data are symmetrically (normally) distributed, we have little reason to suspect something interesting within this sample of values. As soon as the distribution becomes asymmetrical, it starts to become interesting. We may observe “fat tails” (large values are “over-represented”), or multi-modal distributions. In both cases we may suspect that there are at least two different processes at work, one dominating the other differentially across the peaks. So we should split the variable into two (we call this “deciling”) and, ceteris paribus, check the effect on the predictive power of the model. Typically one splits the values at the minimum between the peaks, but it is also possible to implement an overlap, where some records are present in both of the new variables.
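Such a split could be sketched as follows; this is a naive illustration (peak picking on a raw histogram), and real data may require smoothing or manual inspection:

```python
import numpy as np

def split_bimodal(x, bins=50, min_gap=5):
    """Split a (suspectedly) bimodal variable at the minimum between its two
    histogram peaks, yielding two new variables."""
    counts, edges = np.histogram(x, bins=bins)
    order = np.argsort(counts)[::-1]
    p1 = order[0]                                             # highest bin
    p2 = next(i for i in order[1:] if abs(i - p1) > min_gap)  # second, separated peak
    lo, hi = sorted((p1, p2))
    valley = lo + np.argmin(counts[lo:hi + 1])                # minimum between the peaks
    cut = edges[valley]
    left  = np.where(x <  cut, x, np.nan)                     # two new variables
    right = np.where(x >= cut, x, np.nan)
    return left, right, cut

rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(2, 0.5, 500), rng.normal(8, 1.0, 500)])
_, _, cut = split_bimodal(x)
print(f"split at {cut:.2f}")     # somewhere between the modes at 2 and 8
```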

Long tails indicate some aberrant behavior of the items represented by the respective records or, as in medicine, even pathological contexts. Strongly skewed distributions—mode at small values, long tail toward large ones—often indicate organizational or institutional influences. Here we could compress the long tail, log-shift, and then split the variable, that is, “decile” it into two.21

In some domains, like finance, we find special values at which symmetry breaks. For ordinary money values, 0 is such a value. We know in advance that we have to split the variable into two, because the semantic and structural difference between +50$ and -75$ is much bigger than between 150$ and 2500$… probably. As always, we transform such that we create additional variables as a kind of hypothesis, for which we have to evaluate the (positive) contribution to the predictive power of the model.

In finance, but also in medicine, and more generally in any system that is able to develop meta-stable regions, we have to expect such points (or regions) with an increased probability of breaking symmetry, and hence a strong semantic or structural difference. René Thom first described similar phenomena in the theory he labeled “catastrophe theory”. In 3d you can easily picture a cusp catastrophe as a hysteresis in the x-z direction that is gradually smoothed out in the y-direction.

Figure 1: Visualization of folds in parameters space, leading to catastrophes and hystereses.

In finance we are faced with a whole culture of rule-following. The majority of market analysts use the same tools, for instance “stochastics”, or a particularly parameterized MACD, for deriving “signals”, that is, indicators of points of action. The financial industries have been hiring a lot of physicists, and this population sticks to largely the same mathematics, such as GARCH combined with Monte-Carlo simulations. Approaches like fractal geometry are still regarded as exotic.23

Or think about option prices, where we find several symmetry breaks by virtue of the contract. These points have to be represented adequately in dedicated, i.e. derived, variables. Again, we can’t emphasize it enough: we HAVE to do so as a kind of performative hypothesizing. The transformation of data by creating new variables is, so to speak, the low-level operationalization of what later may grow into a scientific hypothesis. Creating new variables poses serious problems for most methods, which may count as a reason why many people don’t follow this approach. Yet, for our approach it is not a problem, definitely not.

In medicine we often find “norm values”. Potassium in blood serum may take any value within a particular range without reflecting any physiological problem… if the person is healthy. If there are other risk factors, the story may be a different one. The ratio of potassium and glucose in serum provides an example of a significant marker… if the person already has heart problems. By means of such risk markers we can introduce domain-specific knowledge. And that’s actually good news, since we can identify our own “markers” and represent them as transformations. The consequence is pretty clear: a system that is supposed to “learn” needs a suitable repository for storing and handling such markers, represented as a relational system (a graph).

Let us briefly return to the norm ranges. A small difference outside the norm range may have to be rated much more strongly than one within the norm range. This leads to weight functions like those shown in the next figure, or more or less similar ones. For a certain range of input values, the norm range, we leave the values unchanged: the output weight equals 1. Outside of this range we transform them in a way that emphasizes the difference to the respective boundary value of the norm range. This can be done in different ways.

Figure 2: Examples for output weight configurations in norm-range transformation

Actually, this rationale of the norm range can be applied to any numerical data. As an estimate of the norm range one could use the 80% quantile, centered around the median and realized as +/-40% quantiles. On the level of model selection, this will result in a particular sensitivity for multi-dimensional outliers, notably before defining apriori any criterion of what an outlier should be.
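One possible shape of such a weighting, sketched under the assumption of a linear emphasis outside the norm range (the gain parameter is an arbitrary illustration, as is the potassium example):

```python
import numpy as np

def norm_range_weight(x, lo, hi, gain=2.0):
    """Output weight per value: 1 inside the norm range [lo, hi], linearly
    emphasized deviation outside of it (one possible shape of Figure 2)."""
    w = np.ones_like(x, dtype=float)
    below, above = x < lo, x > hi
    w[below] = 1.0 + gain * (lo - x[below]) / (hi - lo)
    w[above] = 1.0 + gain * (x[above] - hi) / (hi - lo)
    return w

x = np.array([3.2, 3.9, 4.4, 5.0, 5.9])          # e.g. serum potassium, mmol/l
# Norm range estimated as the central 80% quantile, as suggested above:
lo, hi = np.quantile(x, [0.1, 0.9])
print(norm_range_weight(x, lo, hi))
```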

From Strings to Orders to Numbers

Many data come as some kind of description or label. Such data are called nominal data. Think for instance of the drugs prescribed to a group of patients included in an investigation of risk factors for a disease, or think of the names or types of restaurants in an urbanological/urbanistic investigation. Nominal data are quite frequent in behavioral, organizational or social data, that is, in contexts that are established mainly on a symbolic level.

Measuring only on the nominal scale should be avoided; yet, sometimes it cannot be circumvented. It can be avoided at least partially by including further properties that can be represented by numerical values. For instance, instead of using only the names of cities in a data set, one can use the geographical location or the number of inhabitants; when referring to places within a city one can use descriptors that cover some properties of the respective area, items such as density of traffic, distance to similar locations, price level of consumer goods, economic structure etc. If a direct measurement is not possible, estimates can do the job as well, provided the certainty of the estimate is expressed. The certainty can then be used to generate surrogate data. If the fine-grained measurement creates further nominal variables, these could be combined to form a scale. Such enrichment is almost always possible, irrespective of the domain. One should keep in mind, however, that any such enrichment is nothing else than a hypothesis.

Sometimes, data on the nominal level—technically, a string of alphanumerical characters—already contain valuable information. For instance, they may contain numerical values, as in the names of cars. If we deal with things like the names of molecules, where these names often come as compounds, reflecting the fact that molecules themselves are compounds, we can calculate the distance of each name to a virtual “average name” by applying a technique called “random graph”. Of course, in the case of molecules we would have a lot of properties available that can be expressed as numerical values.

Ordinal data are closely related to nominal data. Essentially, there are two flavors of them. In the least valuable case the numbers do not express a numerical value; the cipher is just used as a kind of letter, indicating that there is a set of sortable items. Sometimes, the values of an ordinal scale represent some kind of similarity. Although this variant is more valuable, it can still be misleading, because the similarity may not scale isodistantly with the numerical values of the ciphers. Undeniably, there is still a rest of a “name” in it.

We are now going to describe some transformations to deal with data from low-level scales.

The minimal treatment we have to apply to nominal data is a basic form of encoding: we use integer values instead of the names. The next, though only slightly better, level would be to reflect the frequency of the encoded item in the ordinal value. One would, for instance, not encode the name into an arbitrary integer value, but into the log of its frequency. A much better alternative, however, is provided by the descendants of correspondence analysis, called Optimal Scaling and the Relative Risk Weight. The drawback of these methods is that some information about the predicted variable is necessary. In the context of modeling, by which we always understand target-oriented modeling—as opposed to associative storage24—we usually find such information, so the drawback is not too severe.

First, to optimal scaling (OSC). Imagine a variable, or “assignate” as we prefer to call it25, which is scaled on the nominal or low ordinal scale. Let us assume that there are just three different names or values. As already mentioned, we assume that a purpose has been selected and hence a target variable, as its operationalization, is available. Then we could set up the following table (the figures denote frequencies).

Table 1: Summary table derived from a hypothetical example data set. av(i) denote three nominally scaled assignates.

outcome tv | av1 | av2 | av3 | marginal sum
ta | 140 | 120 | 160 | 420
tf (focused) | 30 | 10 | 40 | 80
marginal sum | 170 | 130 | 200 | 500

From these figures we can calculate the new scale values as the share of the focused outcome tf within the respective assignate’s column:

OSC(av(i)) = tf(i) / ( ta(i) + tf(i) )

For the assignate av1 this yields OSC(av1) = 30/170 ≈ 0.176.

Table 2: Here, various encodings are contrasted.

assignate | literal encoding | frequency | normalized log(freq) | optimal scaling | normalized OSC
av1 | 1 | 170 | 0.62 | 0.176 | 0.809
av2 | 2 | 130 | 0.0 | 0.077 | 0.0
av3 | 3 | 200 | 1.0 | 0.200 | 1.0

Using these values we could replace any occurrence of the original nominal (ordinal) values by the scaled values. Alternatively—or better, additionally—we could sum up all values for each observation (record), thereby collapsing the nominally scaled assignates into a single numerically scaled one.

Now we will describe the RRW. Imagine a set of observations {o(i)}, where each observation is described by a set of assignates a(i). Also let us assume that some of these assignates are on the binary level, that is, the presence of the quality in the observation is encoded by “1”, its absence by “0”. This usually results in sparsely filled (regions of) the data table. Depending on the size of the “alphabet”, even more than 99.9% of all values could simply be equal to 0. Such data cannot be grouped in a reasonable manner. Additionally, if there are further assignates in the table that are not binary encoded, the information in the binary variables would be neglected almost completely without a rescaling like the RRW.

The raw Relative Risk Weight relates the share of an assignate among the focused outcomes to its share among the non-focused ones:

rawRRW(av(i)) = ( tf(i) / Σtf ) / ( ta(i) / Σta )

For the assignate av1 this yields rawRRW(av1) = (30/80) / (140/420) ≈ 1.13.

As you can see, the RRW uses the marginals from the rows, while optimal scaling uses the marginals from the columns; thus the RRW uses slightly more information. Assuming a table made from binary assignates av(i), which could be summarized into table 1 above, the formula yields the following RRW factors for the three binary scaled assignates:

Table 3: Relative Risk Weights (RRW) for the frequency data shown in table 1.

assignate | raw RRWi | RRWi | normalized RRW
av1 | 1.13 | 0.33 | 0.82
av2 | 0.44 | 0.16 | 0.00
av3 | 1.31 | 0.36 | 1.00

The ranking of the av(i) based on the RRW is equal to that returned by the OSC; even the normalized score values are quite similar. Yet, while in the case of nominal variables assignates are usually not collapsed, this is always done in the case of binary variables.
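The following sketch reproduces both scores from the frequencies of Table 1, assuming the formulas as given above (the dictionaries are just a convenient encoding of the table):

```python
# Frequencies from Table 1: rows = outcome (ta, tf), columns = av1..av3.
ta = {"av1": 140, "av2": 120, "av3": 160}   # non-focused outcome
tf = {"av1": 30,  "av2": 10,  "av3": 40}    # focused outcome

sum_ta, sum_tf = sum(ta.values()), sum(tf.values())   # 420, 80

for av in ta:
    col_sum = ta[av] + tf[av]                    # column marginal
    osc = tf[av] / col_sum                       # optimal scaling
    rrw = (tf[av] / sum_tf) / (ta[av] / sum_ta)  # raw relative risk weight
    print(f"{av}: OSC={osc:.3f}  rawRRW={rrw:.2f}")
# av1: OSC=0.176 rawRRW=1.13 ; av2: OSC=0.077 rawRRW=0.44 ; av3: OSC=0.200 rawRRW=1.31
```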

So, let us summarize these simple methods in the following table.

Table 4: Overview about some of the most important transformations for tabular data.

Transformation | Mechanism | Effect, New Value | Properties, Conditions
log-transform | analytic function | – | –
analytic combination | explicit analytic function (a,b) → f(a,b) | enhancing signal-to-noise ratio for the relationship between predictors and predicted; 1 new variable | targeted modeling
empiric combinational recoding | using simple clustering methods like KNN or k-means for a small number of assignates | distance from cluster centers and/or cluster center as new variables | targeted modeling
deciling | upon evaluation of properties of the distribution | 2 new variables | –
collapsing | based on extreme-value quantiles | 1 new variable; better distinction for data in frequent bins | –
optimal scaling | numerical encoding and/or rescaling using marginal sums | enhancing the scaling of the assignate from nominal to numerical | targeted modeling
relative risk weight | dto. | collapsing sets of sparsely filled variables | targeted modeling

Obviously, the transformation of data is not an analytical act, on either side: on its left hand it refers to structural, and hence semantic, assumptions, while on its right hand it introduces hypotheses about those assumptions. Numbers are never just values, much like sentences and words do not consist just of letters. After all, the difference between the two is probably smaller than one might initially presume. Later we will address this aspect from the opposite direction, when it comes to the translation of textual entities into numbers.

Time Series and Contexts

Time series data are the most valuable data. They allow the reconstruction of the flow of information in the observed system, either between variables intrinsic to the measurement setup (reflecting the “system”) or between treatment and effects. In recent years, so-called “causal FFT” has gained some popularity.

Yet, modeling time series data poses the same problematics as tabular data: we do not know apriori which variables to include, or how to transform variables in order to reflect particular parts of the information in the most suitable way. Simply pressing an FFT onto the data is nothing but naive. The FFT assumes a harmonic oscillation, or a combination thereof, which certainly is not appropriate. Even if we interpret a long series of FFT terms as an approximation to an unknown function, it is by no means clear whether the then assumed stationarity26 is indeed present in the data.

Instead, it is more appropriate to represent the aspects of a time series in multiple ways. Often there are many time series available, one for each assignate. This brings the additional problem of carefully evaluating cross-correlations and auto-correlations, all of this under the condition that it is not known apriori whether the evolution of the system is stationary.

Fortunately, the analysis of multiple time series, even from non-stationary processes, is quite simple if we follow the approach outlined so far. Let us assume a set of assignates {a(i)} for which we have time series measurements available, given by equidistant measurement points. A transformation is then constructed by a method m that is applied to a moving window of size md(k). All moving windows, of any size, are adjusted such that their endpoints meet at the measurement point at time t(m(k)). Let us call this point the prediction base point, T(p). The transformed values consist either of the residuals between the method’s values and the measurement data, or of the parameters of the method fitted to the moving window. An example of the latter case is given by wavelet coefficients, which provide a quite suitable, multi-frequency perspective onto the development up to T(p). Of course, the time series data of different assignates could be related to each other by any arbitrary functional mapping.

The target value for the model could be any set of future points relative to t(m(k)). The model may predict a singular point, an average at some time in the future, the volatility of the future development of the time series, or even the parameters of a particular mapping function relating several assignates. In the latter case the model would predict several criteria at once.

Such transformations yield a table that contains many more variables than were originally available. The ratio may grow up to 1:100 in complex cases like the global financial markets. Just to be clear: if you measure, say, the index values of 5 stock markets, some commodities like gold, copper, precious metals and “electronics metals”, the money market, bonds and some fundamentals alike—that is, approx. 30 basic input variables—even a superficial analysis would have to inspect 3000 variables… Yes, learning and gaining experience can take quite a bit! Learning and experience do not become cheaper merely because we use machines to achieve them. Mere exploring has become easier nowadays, no longer requiring lifetimes. The reward consists of stable models about complex issues.

Each point in time is reflected by the original observational values and a lot of variables that express the most recent history relative to the point in time represented by the respective record. Any of the synthetic records may thus be interpreted as a set of hypotheses about the future development, where each hypothesis comes as a multidimensional description of the context up to T(p). It is then the task of the evolutionarily optimized variable selection based on the SOM to select the most appropriate hypothesis. Any subgroup contained in the SOM then represents comparable sets of relations between the past relative to T(p) and the respective future as operationalized into the target variable.
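Schematically, the construction of such context records from a single series might look like this; it is a toy sketch in which the window sizes, the descriptors (mean, standard deviation, regression slope) and the prediction horizon are arbitrary illustrative choices:

```python
import numpy as np

def context_table(series, window_sizes=(5, 20, 60), horizon=10):
    """One record per prediction base point T(p): moving-window descriptors
    of the past, plus the future value as the target."""
    w_max = max(window_sizes)
    records = []
    for tp in range(w_max, len(series) - horizon):
        row = {}
        for w in window_sizes:                  # all windows end at T(p)
            win = series[tp - w:tp]
            row[f"mean_{w}"] = win.mean()
            row[f"std_{w}"] = win.std()
            row[f"slope_{w}"] = np.polyfit(np.arange(w), win, 1)[0]
        row["target"] = series[tp + horizon]    # the future, relative to T(p)
        records.append(row)
    return records

rng = np.random.default_rng(3)
s = np.cumsum(rng.normal(0, 1, 500))            # a toy non-stationary series
table = context_table(s)
print(len(table), list(table[0].keys()))
```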

Typical transformations in such associative time series modeling are

  • – moving average and exponentially decaying moving average for de-seasoning or de-trending;
  • – various correlational methods: cross- and auto-correlation, including the result parameters of the Bartlett test;
  • – wavelet, FFT, or Walsh transforms of different order, and residuals to the de-noised reconstruction;
  • – fractal coefficients like the Lyapunov coefficient or the Hausdorff dimension;
  • – ratios of simple regressions calculated over moving windows of different sizes;
  • – domain-specific markers (think of technical stock market analysis, or the ECG).

Once we have expressed a collection of time series as series of contexts preceding the prediction point T(p), the further modeling procedure does not differ from the modeling of ordinary tabular data, where the observations are independent from each other. From the perspective of our transformation tool, these time series transformations are nothing else than “methods”; they do not differ from other plugin methods with respect to the procedure calls in their programming interface.

“Unstructurable” “Data”: Images and Texts

The last type of data for which we briefly would like to discuss the issue of transformation is “unstructurable” data. Images and texts are the main representatives for this class of entities. Why are these data “unstructurable”?

Let us answer this question from the perspective of textual analysis. Here the reason is obvious; actually, there are several obvious reasons. Patrizia Violi [17], for instance, emphasizes that words create their own context, upon which they are then interpreted. Douglas Hofstadter extended the problematics to thinking at large, arguing that for any instance of analogical thinking—and he claimed all thinking to be analogical—it is impossible to define criteria that would allow one to set up a table. Here on this site we have argued repeatedly that it is not possible to define apriori any criteria that would capture the “meaning” of a text.

Moreover, understanding language, as well as understanding texts, can’t be mapped to the problematics of predicting a time series. In language there is no such thing as a prediction point T(p), and there is no positively definable “target” which could be predicted. The main reason is the special dynamics between context (background) and proposition (figure). It is a multi-level, multi-scale thing. It is ridiculous to apply n-grams to text and then hope to catch anything “meaningful”. The same is true for any statistical measure.

Nevertheless, using language—that is, producing and understanding it—is based on processes that select and compose. In some way there must be some kind of modeling. We already proposed a structure, or rather an architecture, for this in a previous essay.

The basic trick consists of two moves: firstly, texts are represented probabilistically as random contexts in an associative storage like the SOM. No variable selection takes place here, no modeling, and no operationalization of a purpose is present. Secondly, this representation is then used as a basis for targeted modeling. Yet, the “content” of this representation does not consist of “language” data anymore. Strikingly different, it contains data about the relative location of language concepts and their sequence, as they occur as random contexts in a text.
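One crude way to operationalize the first move—certainly not the full procedure intended here, merely an illustration of sampling “random contexts” without any purpose-driven selection—could look like this:

```python
import random
from collections import Counter

def random_contexts(tokens, n_contexts=100, max_width=7):
    """Sample word windows of random position and width and represent each
    as a bag-of-words vector; no target, no variable selection involved."""
    vocab = {w: i for i, w in enumerate(sorted(set(tokens)))}
    contexts = []
    for _ in range(n_contexts):
        width = random.randint(2, max_width)
        start = random.randint(0, len(tokens) - width)
        window = tokens[start:start + width]
        vec = [0] * len(vocab)
        for w, c in Counter(window).items():
            vec[vocab[w]] = c
        contexts.append(vec)
    return contexts, vocab

text = "the city creates its own context upon which the city is interpreted".split()
ctx, vocab = random_contexts(text, n_contexts=5)
print(len(vocab), ctx[0])
```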

The basic task in understanding language is to accomplish the progress from a probabilistic representation to a symbolic tabular representation. Note that any tabular representation of an observation is already on the symbolic level. In the case of language understanding precisely this is not possible: We can’t define meaning, and above all, not apriori. Meaning appears as a consequence of performance and execution of certain rules to a certain degree. Hence we can’t provide the symbols apriori that would be necessary to set up a table for modeling, assessing “similarity” etc.

Instead of “probabilistic, non-structured representation” we could also say “arbitrary, unstable structure”. From this we should derive a structured, (proto-)symbolic, and hence tabular and almost stable structure. The trick to accomplish this consists of using the modeling system itself as a measurement device, and thus also as a “root” for further reference in the models that then become possible. Kohonen and colleagues demonstrated this crucial step in their WebSom project. Unfortunately (for them), they then actualized several misunderstandings regarding modeling; for instance, they misinterpreted associative storage as a kind of model.

The nice thing with this architecture is that once the symbolic level has been achieved, any of the steps of our modeling approach can be applied without any change, including the automated transformation of “data” as described above.

Understanding the meaning of images follows the same scheme. The fact that there are no words renders the task more complicated and more simple at the same time. Note that so far there is no system that has learned to “see”, to recognize and to understand images, although many titles claim that the proposed “system” can do so. All computer vision approaches are analytic by nature; hence they are all deeply inadequate. The community is running straight into the method hell, as the statisticians and the data miners did before, mistaking transformations for methods, conflating transformation and modeling, etc. We discussed these issues at length above. Any of the approaches might be intelligently designed, but all are victimized by the representationalist fallacy, and probably even by naive realism. Due to the fact that the analytic approach is first, second and third mainstream, the probabilistic and contextual bottom-up approach is missing so far. In the same way as a word is not equal to the grapheme, a line is not defined on the symbolic level in the brain. Here again we meet the problem of analogical thinking, even on the most primitive graphical level: when is a line still a line, when is a triangle still a triangle?

In order to start in the right way we first have to represent the physical properties of the image along different dimensions, such as textures, edges, or salient points, and all of those across different scales. Probably one can even detect salient objects by some analytic procedure. From any of the derived representations the random contexts are derived and arranged as vectors. A single image is represented as a table that contains random contexts derived from the image as a physical entity. From here on, the further processing scheme is the same as for texts. Note that there is no such property as a “line” in this basic mapping.

In the case of texts and images the basic transformation steps thus consist in creating the representation as random contexts. Fortunately, this is “only” a question of suitable plugins for our transformation tool. In both cases, for texts as well as images, the resulting vectors can grow considerably; several thousands of implied variables must be expected. Again, there is already a solution, known as random projection, which allows one to compress even very large vectors (say, 20,000+) into one of, say, at most 150 variables, without losing much of the information that is needed to retain the distinctive potential. Random projection works by multiplying a vector of size N with a matrix of uniformly distributed random values of size N×M, which results in a vector of size M. Of course, M is chosen suitably (100+). The reason why this works is that with that many dimensions, random vectors are approximately orthogonal to each other! Of course, the resulting fields in such a vector do not “represent” anything that could be conceived as a reference to an “object”. Internally, however, that is, from the perspective of a (population of) SOMs, it may well be used as an (almost) fixed “attribute”. Yet, neither the missing direct reference nor the subjectivity poses a problem, as meaning is not a mental entity anyway. Q.E.D.
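A minimal sketch of random projection, with the scaling chosen so that lengths are roughly preserved (the dimensionalities are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 20_000, 150                     # original vs. compressed dimensionality
# Uniformly distributed entries, scaled so that squared lengths are
# preserved in expectation (variance 1/M per entry).
R = rng.uniform(-1.0, 1.0, (N, M)) * np.sqrt(3.0 / M)

a = rng.normal(size=N)
b = rng.normal(size=N)

print(np.linalg.norm(a - b))           # distance in the original space
print(np.linalg.norm(a @ R - b @ R))   # approximately preserved after projection
```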

Conclusion

Here in this essay we discussed several aspects related to the transformation of data as an epistemic activity. We emphasized that an appropriate attitude towards the transformation of data requires a shift in perspective and the choice of another vantage point. One of the more significant changes in attitude concerns, perhaps, the dropping of any positivist approach as one of the main pillars of traditional modeling. Remember that statistics is such a positivist approach. In our perspective, statistical methods are just transformations, nothing less, but above all also nothing more, characterized by a specific set of rather strong assumptions and conditions for their applicability.

We also provided some important practical examples of the transformation of data, whether tabular data derived from independent observations, time series data, or “unstructurable” “data” like texts and images. In line with the proposed approach, we also described a prototypical architecture for a transformation tool that could be used universally. In particular, it allows for a complete automation of the modeling task, as it could be used, for instance, in the field of so-called data mining. The possibility of automated modeling is, of course, a fundamental requirement for any machine-based episteme.

Notes

1. The only reason why we do not refer to cultures and philosophies outside Europe is that we do not know sufficient details about them. Yet, I am pretty sure that taking Chinese or Indian philosophy into account would render the situation described here even more severe.

2. It was Friedrich Schleiermacher who first observed that even the text becomes alien and at least partially autonomous to its author due to the necessity and inevitability of interpretation. Thereby he founded hermeneutics.

3. In German language these words all exhibit a multiple meaning.

4. In the last 10 years (roughly) it became clear that the gene-centered paradigms are not only insufficient [2], they are even seriously defective. Evelyn Fox Keller draws a detailed trace of this weird paradigm [3].

5. Michel Foucault [4]

6. The “axiom of choice” is one of the founding axioms in mathematics. Its importance can hardly be overestimated. Basically, it assumes that “something is choosable”. The notion of “something choosable” is then used to construct countability as a derived domain. This has three consequences. First, it avoids assuming countability—that is, the effect of a preceding symbolification—as a basis for set theory. Secondly, it puts performance in first place. These two implications render the “Axiom of Choice” into a doubly-articulated rule, offering two docking sites, one for mathematics and one for philosophy. In some way, it thus cannot count as an “axiom”. Those implications are, for instance, fully compatible with Wittgenstein’s philosophy. For these reasons, Zermelo’s “axiom” may even serve as a shared point (of departure) for a theory of machine-based episteme. Finally, the third implication is that, through the performance of the selection, the relation—notably a somewhat empty relation—is conceived as a predecessor of countability and the symbolic level. Interestingly, this also relates to Quantum Darwinism and String Theory.

7. David Grahame Shane’s theory on cities and urban entities [5] is probably the only theory in urbanism that is truly a relational theory. Additionally, his work is full of relational techniques and concepts, such as the “heterotopy” (a term coined by Foucault).

8. Bruno Latour developed the Actor-Network-Theory [6,7], while Clarke evolved “Grounded Theory” into the concept of “Situational Analysis” [8]. Latour, as well as Clarke, emphasize and focus the relation as a significant entity.

9. On behavioral coatings and behavioral surfaces.

10. See Information & Causality about the relation between measurement, information and causality.

11. “Passivist” refers to the inadequate form of realism according to which things exist as such, independently from interpretation. Of course, interpretation does not affect the material dimension of a thing. Yet, it changes its relations insofar as the relations of a thing—the Wittgensteinian “facts”—are visible and effective only if we actively assign significance to them. The “passivist” stance conceives itself as a re-construction instead of a construction (cf. Searle [9]).

12. In [10] we developed an image theory in the context of the discussion about the mediality of facades of buildings.

13. Cf. the nonsense of “non-supervised clustering”.

14. In his otherwise quite readable book [11], though it may serve only as an introduction.

15. This can be accomplished by using a data segment for which the implied risk equals 0 (positive predictive value = 1). We described this issue in the preceding chapter.

16. hint to particle physics…

17. See our previous essay about the complementarity of the concepts of causality and information.

18. For an introduction of renormalization (in physics) see [12], and a bit more technical [13]

19. see the Wiki entry about so-called gravitational lenses.

20. Catastrophe theory is a concept invented and developed by French mathematician Rene Thom as a field of Differential Topology. cf. [14]

21. In their book, Witten & Frank [15] recognized the importance of transformation and included a dedicated chapter about it. They also explicitly mention the creation of synthetic variables. Yet they also explicitly retreat from it as a practical means, for reasons of computational complexity (here: the time needed to perform a calculation in relation to the amount of data). After all, their attitude towards transformation is somehow that towards an unavoidable evil; they do not recognize its full potential. And as a cure for the selection problem they propose SVMs and their hyperplanes, which is definitely a poor recommendation.

22. Dorian Pyle [11]

23. see Benoit Mandelbrot [16].

24. Using almost meaningless labels, target-oriented modeling is often called supervised modeling, as opposed to “non-supervised modeling”, where no target variable is being used. Yet, such “modeling” does not yield a model, since the pragmatics of the concept of “model” invariably requires a purpose.

25. About assignates: often called properties, or features… see the chapter about modeling.

26. Stationarity is a concept in empirical system analysis or description which denotes the expectation that the internal setup of the observed process will not change across time within the observed period. If a process is rated as “stationary” upon a dedicated test, one could select a particular, and only one particular, method or model to reflect the data. Of course, we again meet the chicken-egg problem: we can decide about stationarity only by means of a completed model, that is, after the analysis. As a consequence, we should not use linear methods, or methods that depend on independence, for checking stationarity before applying the “actual” method. Such a procedure cannot count as a methodology at all. The modeling approach should be stable against non-stationarity. Yet, the problem of the reliability of the available data sample remains, of course. As a means to “robustify” the resulting model against the unknown future one can apply surrogating. Ultimately, however, the only cure is a circular, or recurrent, methodology that incorporates learning and adaptation as a structure, not as a result.

References
  • [1] Robert Rosen, Life Itself: A Comprehensive Inquiry into the Nature, Origin, and Fabrication of Life. Columbia University Press, New York 1991.
  • [2] Nature Insight: Epigenetics, Supplement Vol. 447 (2007), No. 7143 pp 396-440.
  • [3] Evelyn Fox Keller, The Century of the Gene. Harvard University Press, Cambridge, MA 2002. see also: E. Fox Keller, “Is There an Organism in This Text?”, in P. R. Sloan (ed.), Controlling Our Destinies. Historical, Philosophical, Ethical, and Theological Perspectives on the Human Genome Project, Notre Dame (Indiana), University of Notre Dame Press, 2000, pp. 288-289.
  • [4] Michel Foucault, The Archaeology of Knowledge. 1969.
  • [5] David Grahame Shane, Recombinant Urbanism: Conceptual Modeling in Architecture, Urban Design and City Theory. Wiley, Chichester 2005.
  • [6] Bruno Latour. Reassembling The Social. Oxford University Press, Oxford 2005.
  • [7] Bruno Latour (1996). On Actor-network Theory. A few Clarifications. in: Soziale Welt 47, Heft 4, p.369-382.
  • [8] Adele E. Clarke, Situational Analysis: Grounded Theory after the Postmodern Turn. Sage, Thousand Oaks, CA 2005.
  • [9] John R. Searle, The Construction of Social Reality. Free Press, New York 1995.
  • [10] Klaus Wassermann & Vera Bühlmann, Streaming Spaces – A short expedition into the space of media-active façades. in: Christoph Kronhagel (ed.), Mediatecture, Springer, Wien 2010. pp.334-345. available here
  • [11] Dorian Pyle, Data Preparation for Data Mining. Morgan Kaufmann, San Francisco 1999.
  • [12] John Baez (2009). Renormalization Made Easy. Webpage
  • [13] Bertrand Delamotte (2004). A hint of renormalization. Am.J.Phys. 72: 170-184. available online.
  • [14] Tim Poston & Ian Stewart, Catastrophe Theory and Its Applications. Dover Publ. 1997.
  • [15] Ian H. Witten & Eibe Frank, Data Mining: Practical Machine Learning Tools and Techniques (2nd ed.). Elsevier, Oxford 2005.
  • [16] Benoit Mandelbrot & Richard L. Hudson, The (Mis)behavior of Markets. Basic Books, New York 2004.
  • [17] Patrizia Violi (2000). Prototypicality, typicality, and context. in: Liliana Albertazzi (ed.), Meaning and Cognition – A multidisciplinary approach. Benjamins Publ., Amsterdam 2000. p.103-122.

۞

Prolegomena to a Morphology of Experience

May 2, 2012 § Leave a comment

Experience is a fundamental experience.

The very fact of this sentence demonstrates that experience differs from perception, much like phenomena are different from objects. It also demonstrates that there can’t be an analytic treatment or even solution of the question of experience. Experience is not only related to sensual impressions, but also to affects, activity, attention1 and associations. Above all, experience is deeply linked to the impossibility to know anything for sure or, likewise, apriori. This insight is etymologically woven into the word itself: in Greek, “peira” means “trial, attempt, experience”, influencing also the roots of “experiment” or “peril”.

In this essay we will focus on some technical aspects that underlie the capability to experience. Before we go in medias res, I have to make clear the rationale for doing so, since, quite obviously, experience cannot be reduced to those said technical aspects, to which for instance modeling belongs. Experience is more than the techné of sorting things out [1] and even more than the techné of the genesis of discernability, but at the same time it plays a particular, if not foundational role in and for the epistemic process, its choreostemic embedding and their social practices.

Epistemic Modeling

As usual, we take the primacy of interpretation as one of the transcendental conditions, that is, a condition we can‘t go beyond, even on the „purely“ material level. As a suitable operationalization of this principle, still a quite abstract one and hence calling for situative instantiation, we chose the abstract model. In the epistemic practice, modeling does not, indeed never could, refer to data that is supposed to „reflect“ an external reality. If we perform modeling as a pure technique, we are just modeling; but creating a model for whatsoever purpose, so to speak „modeling as such“, or purposed modeling, is not sufficient to establish an epistemic act, which would include the choice of the purpose and the choice of the risk attitude. Such a reduction is typical for functionalism, or for positions that claim a principled computability of epistemic autonomy, as for instance the computational theory of mind does.

Quite in contrast, purposed modeling in epistemic individuals already presupposes the transition from probabilistic impressions to propositional, or say, at least symbolic representation. Without performing this transition from potential signals, that is, mediated „raw“ physical fluctuations in the density of probabilities, to the symbolic, it is impossible to create a structure, be it for instance a feature vector as a set of variably assigned properties, „assignates“, as we called them previously. Such a minimal structure, however, is mandatory for purposed modeling. Any (re)presentation of observations to a modeling method is thus already subsequent to prior interpretational steps.

Our abstract model that serves as an operationalization of the transcendental principle of the primacy of interpretation thus must also provide, or comprise, the transition from differences into proto-symbols. Proto-symbols are not just intensions or classes; they are, so to speak, non-empiric classes that have been derived from empiric ones by means of idealization. Proto-symbols are developed into symbols by means of the combination of naming and an associated practice, i.e. a repeating or reproducible performance, or, still in other words, by rule-following. Only on the level of symbols may we then establish a logic, or claim absolute identity. Here we also meet the reason for the fact that in any real-world context a “pure” logic is not possible, as there are always semantic parts serving as a foundation of its application. Speaking about “truth-values” or “truth-functions” is meaningless, at least here. Clearly, identity as a logical form is a secondary quality and thus quite irrelevant for the booting of the capability of experience. Such extended modeling is, of course, not just a single instance; it is itself a multi-leveled thing. It even starts with those properties of the material arrangement known as body that allow also an informational perspective. The most prominent candidate principle for such a structure is the probabilistic, associative network.

Epistemic modeling thus consists of at least two abstract layers: First, the associative storage of random contexts (see also the chapter “Context” for their generalization), where no purpose is imposed onto the materially pre-processed signals, and second, the purposed modeling. I am deeply convinced that such a structure is the only way to evade the fallacy of representationalism2. A working actualization of this abstract bi-layer structure may comprise many layers and modules.
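Purely as an illustration of this bi-layer structure one might sketch it as follows; the stand-ins (a clustering procedure for the associative layer, a simple classifier for the purposed layer) are assumptions of my own, not a claim about an actual implementation:

```python
# A deliberately minimal sketch of the abstract bi-layer structure described
# above. MiniBatchKMeans merely stands in for an associative map (e.g. a SOM),
# LogisticRegression for the purposed model; both choices are illustrative.
from sklearn.cluster import MiniBatchKMeans
from sklearn.linear_model import LogisticRegression

class BiLayer:
    def __init__(self, n_contexts=50):
        self.assoc = MiniBatchKMeans(n_clusters=n_contexts, n_init=3, random_state=0)
        self.purposed = LogisticRegression(max_iter=1000)

    def absorb(self, signals):
        # layer 1: purposeless, associative storage of (random) contexts
        self.assoc.fit(signals)
        return self

    def model(self, signals, target):
        # layer 2: purposed modeling, operating only on the stored contexts,
        # never on the "raw" signals themselves
        contexts = self.assoc.transform(signals)   # distances to stored contexts
        self.purposed.fit(contexts, target)
        return self
```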

Yet, once one accepts the primacy of interpretation, and there is little to say against it, if anything at all, then we are led directly to epistemic modeling as a mandatory constituent of any interpretive relationship to the world, for primitive operations as well as for the rather complex mental life we experience as humans, with regard to our relationships to the environment as well as with regard to our inner reality. Wittgenstein emphasized in his critical solipsism that the conception of reality as inner reality is the only reasonable one [3]. Epistemic modeling is the only way to keep meaningful contact with the external surrounds.

The Bridge

In its technical parts, experience is based on an actualization of epistemic modeling. Later we will investigate the role and the usage of these technical parts in detail. Yet, the gap between modeling, even if conceived as abstract, epistemic modeling, and experience is so large that we first have to shed some light on the bridge between these concepts. There are issues with experience other than the merely technical issues of modeling, and they are no less relevant for the technical issues, too.

Experience comprises both more active and more passive aspects, both with regard to performance and to structure. Both dichotomies must not be taken as ideally separated categories, of course. Besides, the basic distinction between active and passive parts is not a new one either. Kant distinguished receptivity and spontaneity as two complementary faculties that combine in order to bring about what we call cognition. Leibniz, in contrast, emphasized the necessity of activity even in basic perception; nowadays, his view has been greatly confirmed by the research on sensing in organic (animals) as well as in inorganic systems (robots). Obviously, the relation between activity and passivity is not a simple one, as soon as we are going to leave the bright spheres of language.3

In the structural perspective, experience unfolds in a given space that we could call the space of experiencibility4. That space is spanned, shaped and structured by open and dynamic collections of any kind of theory, model, concept or symbol, as well as by the mediality that is “embedding” those. Yet, experience also shapes this space itself. The situation reminds a bit of relativistic space in physics, or the social space of humans, where the embedding of one space into another will affect both participants, the embedded as well as the embedding space. These aspects we should keep in mind for our investigation of questions about the mechanisms that contribute to experience and the experience of experience. As you can see, we again refute any kind of ontological stance, even to its smallest degree.5

Now, when going to ask about experience and its genesis, there are two characteristics of experience that force us to avoid the direct path. First, there is the deep linkage of experience to language. We must get rid of language for our investigation, in order to avoid the experience of finding just language behind the language, or behind what we call upfront “experience”; yet, we should not forget about language either. Second, there is the self-referentiality of the concept of experience, which actually renders it into a strongly singular term. Once there are even only tiny traces of the capability for experience, the whole game changes, burying the initial roots and mechanisms that are necessary for the booting of that capability.

Thus, our first move consists in a reduction and linearization, which we have to catch up with later again, of course. We will achieve that by setting everything into motion, so to speak. The linearized question thus is heading towards the underlying mechanisms6:

How do we come to believe that there are facts in the world? 7

What are—now viewed from the outside of language8—the abstract conditions and the practiced moves necessary and sufficient for the actualization of such statements?

Usually, the answer will refer to some kind of modeling. Modeling provides the possibility for the transition from the extensional epistemic level of particulars to the intensional epistemic level of classes, functions or categories. Yet, modeling does not provide sufficient reason for experience. Sure, modeling is necessary for it, but it is more closely related to perception, though not equivalent to it either. Experience as a kind of cognition thus can’t be conceived as a kind of “high-level perception”, quite contrary to the suggestion of Douglas Hofstadter [4]. Instead, we may conceive experience, in a first step, as the result of and the activity around the handling of the conditions of modeling.

Even in his earliest writings, Wittgenstein prominently emphasized that it is meaningless to conceive of the world as consisting from “objects”. The Tractatus starts with the proposition:

The world is everything that is the case.

Cases, in the Tractatus, are states of affairs that could be made explicit into a particular (logical) form by means of language. From this perspective one could derive the radical conclusion that without language there is no experience at all. Although we won’t agree with such a thesis, language is a major factor contributing to some often unrecognized puzzles regarding experience. Let us very briefly return to the issue of language.

Language establishes its own space of experiencibility, basically through its unlimited expressibility, which induces hermeneutic relationships. It is probably mainly due to this particular experiential sphere that language blurs or even blocks clear sight of the basic aspects of experience. Language can make us believe that there are phenomena as some kind of original stuff, existing “independently” out there, that is, outside human cognition.9 Yet, there is no such thing as a phenomenon or even an object that would “be” before experience, and for us humans not even before or outside of language. It is not even reasonable to speak about phenomena or objects as if they would exist before experience. De facto, it is almost nonsensical to do so.

Both objects as specified entities and phenomena at large are consequences of interpretation, in turn deeply shaped by cultural imprinting, and thus heavily dependent on language. Refuting that consequence would mean to refute the primacy of interpretation, which would fall into one of the categories of either naive realism or mysticism. Phenomenology as an ontological philosophical discipline is nothing but a misunderstanding (as ontology is henceforth); since phenomenology without ontological parts must turn into some kind of Wittgensteinian philosophy of language, it simply vanishes. Indeed, when already teaching in Cambridge, Wittgenstein once told a friend to report his position to the visiting Schlick, whom he refused to meet on this occasion, as “You could say of my work that it is phenomenology.” [5] Yet, what Wittgenstein called “phenomenology” is completely situated inside language and its practicing, and though there might be a weak Kantian echo in his work, he never supported Husserl’s position of synthetic universals apriori. There is even some likelihood that Wittgenstein, strongly feeling constantly misunderstood by the members of the Vienna Circle, put this forward in order to annoy Schlick (a bit), at least to pay him back in kind.

Quite in contrast, in a Wittgensteinian perspective facts are a sort of collectively compressed beliefs about relations. If everybody believes in a certain model of whatever reference and of almost arbitrary expectability, then there is a fact. This does not mean, however, that we get drowned in relativism. There are still the constraints implied by the (unmeasured and unmeasurable) utility of anticipation, both in its individual and its collective flavor. On the other hand, yes, this indeed means that the (social) future is not determined.

More accurately, there is at least one fact, since the primacy of interpretation generates at least the collectivity as a further fact. Since facts take place in language, they do not just “consist” of content (please excuse such awful wording); there is also a pragmatics, and hence there are also at least two different grammars, etc.

How do we, then, individually construct concepts that we share as facts? Even if we need the mediation by a collective, a great deal of the associative work takes place in our minds. Facts are identifiable, thus distinguishable and enumerable. Facts are almost digitized entities; they are constructed from percepts through a process of intensionalization or even idealization, and they sit on the verge of the realm of symbols.

Facts are facts because they are considered as being valid, be it among a collective of people, across some period of time, or within a range of material conditions. This way they turn into a kind of apriori from the perspective of the individual, and there is only that perspective. Here we find the locus of several related misunderstandings, such as direct realism, Husserlean phenomenology, positivism, the thing-as-such, and so on. The fact is even synthetic, either by means of “individual”10 mental processes or by the working of a “collective reasoning”. But, of course, it is by no means universal, as Kant concluded on the basis of Newtonian science, or even as Schlick did in 1930 [6]. There is neither a universal real fact, nor a particular one. It does not make sense to conceive of the world as consisting of independent objects.

As a consequence, when speaking about facts we usually studiously avoid the fact of risk. Participants in the “fact game” implicitly agree on the abandonment of negotiating affairs of risk. Despite the fact that empirical knowledge can never be considered “safe” or “secured”, during the fact game we always behave as if it could. Doing so is the more or less hidden work of language, which removes the risk (associated with predictive modeling) and replaces it by metaphorical expressibility. Interestingly, here we also meet the source field of logic. It is obvious (see Waves & Words) that language is neither an extension of logic, nor is it reasonable to consider it as a vehicle for logic, i.e. for predicates. Quite to the contrary, the underlying hypothesis is that (practicing) language and (weaving) metaphors are the same thing.11 Such a language becomes a living language that (as Gier writes [5])

“[…] grows up as a natural extension of primitive behavior, and we can count on it most of the time, not for the univocal meanings that philosophers demand, but for ordinary certainty and communication.”

One might just modify Gier’s statement a bit by specifying „philosophers“ as idealistic, materialistic or analytic philosophers.

In “On Certainty” (OC, §359), Wittgenstein speaks of language as expressing primitive behavior and contends that ordinary certainty is “something animal”. This now we may take as a bridge that provides the possibility to extend our asking about concepts and facts towards the investigation of the role of models.

Related to this, there is a pragmatist aspect that is worth mentioning. Experience is a historicizing concept, much like knowledge. Both concepts are meaningful only in hindsight. As soon as we consider their application, we see that both of them refer only to one half of the story that is about the epistemic aspects of „life“. The other half of the epistemic story, directly implied by the inevitable need to anticipate, is predictive or, equivalently, diagnostic modeling. Abstract modeling in turn implies theory, interpretation and orthoregulated rule-following.

Epistemology thus should not be limited to „knowledge“, the knowable and its conditions. Epistemology has explicitly to include the investigation of the conditions of what can be anticipated.

In a still different way we thus may repose the question about experience as the transition from epistemic abstract modeling to the conditions of that modeling. This would include the instantiation of practicable models as well as the conditions for that instantiation, and also the conditions of the application of models. In technical terms this transition is represented by a problematic field: the model selection problem, or in more pragmatic terms, the model (selection) risk.

These two issues, the prediction task and the condition of modeling, now form the second toehold of our bridge between the general concept of experience and some technical aspects of the use of models. There is another bridge necessary to establish the possibility of experience, one that connects the concept of experience with languagability.

The following list provides an overview of the following chapters:

  • The Modeling Statement
  • Predictability and Predictivity
  • The Independence Assumption
  • The Model Selection Problem
  • Describing Classifiers
  • Utilization of Information

These topics are closely related to each other, indeed so closely that other sequences would be justifiable too. Their interdependencies also demand a bit of patience from you, the reader, as the picture will be complete only when we arrive at the results of modeling.

A last remark may be allowed before we start to delve into these topics. It should be clear by now that any kind of phenomenology is deeply incompatible with the view developed here. There are several related stances, e.g. the various shades of ontology, including the objectivist conception of substance. They are all rendered as irrelevant and inappropriate for any theory about episteme, whether in its machine-based form or regarding human culture, whether as practice or as reflecting exercise.

The Modeling Statement

As the very first step we have to clearly state the goal of modeling. From the outside that goal is pretty clear. Given a set of observations and the respective outcomes, or targets, create a mapping function such that the observed data allow for a reconstruction of the outcome in an optimized manner. Finding such a function can be considered a simple form of learning if the function is „invented“. In most cases it is not learning, but just the estimation of pre-defined parameters.12 In a more general manner we could also say that any learning algorithm is a map L from data sets to a ranked list of hypothesis functions. Note that accuracy is only one of the possible aspects of that optimization. Let us call this, for convenience, the „outer goal“ of modeling. Were such a mapping perfect within reasonable boundaries, we would have automatically found a possible transition from probabilistic presentation to propositional representation. We could consider the induction of a structural description from observations as completed. So far the secret dream of Hans Reichenbach, Carl Hempel, Wesley Salmon and many of their colleagues.

The said mapping function will never be perfect. The reasons for this comprise the complexity of the subject, noise in the measured data, unsuitable observables, or any combination of these. This induces a wealth of necessary steps and, of course, a lot of work. In other words, a considerable number of apriori and heuristic choices have to be made. Since a reliable, say analytic, mapping can’t be found, every single step in the value chain towards the model at once becomes questionable and has to be checked for its suitability and reliability. It is also clear that the model does not comprise just a formula. In real-world situations a differential modeling should be performed, much like in medicine a diagnosis is considered complete only if a differential diagnosis is included. This comprises the investigation of the influence of the method’s parameterization onto the results. Let us call the whole bunch of respective goals the „inner goals“ of modeling.

So, being faced with the challenge of such an empirical mess, what does the statement about the goals of the „inner modeling“ look like? We could for instance demand to remove the effects of the shortfalls mentioned above, which cause the imperfect mapping: complexity of the subject, noise in the measured data, or unsuitable observables.

To make this more concrete we could say that the inner goals of modeling consist in a two-fold (and thus synchronous!) segmentation of the data, resulting in the selection of the proper variables and in the selection of the proper records, where this segmentation is performed under the condition of a preceding non-linear transformation of the embedding reference system. Ideally, the model identifies the data for which it is applicable; only for those data is a classification then provided. It is pretty clear that this statement is an ambitious one. Yet, we regard it as crucial for any attempt to step across our epistemic bridge that brings us from particular data to the quality of experience. This transition includes something that is probably better known by the label „induction“. Thus, we finally arrive at a short statement about the inner goals of modeling:

How to conclude and what to conclude from measured data?

Obviously, if our data are noisy and include irrelevant values, any further conclusion will be unreliable. Yet, for any suitable segmentation of the data we need a model first. From this it directly follows that a suitable procedure for modeling can’t consist of just a single algorithm, or a „one-shot procedure“. Any single-step approach suffers from lots of hidden assumptions that influence the results and their properties in unforeseeable ways. Modeling that could be regarded as more than just an estimation of parameters by running an algorithm is necessarily a circular and—dependent on the amount of variables­—possibly open-ended process.

Predictability and Predictivity

Let us assume a set of observations S obtained from an empirical process P. Then this process P should be called “predictable” if the results of the mapping function f(m), which serves as an instance of a hypothesis h from the space of hypotheses H, coincide with the outcome of the process P in such a way that f(m) forms an expectation with a deviation d<ε for all f(m). In this case we may say that f(m) predicts P. This deviation is also called “empirical risk”, and the purpose of modeling is often regarded as the minimization of the empirical risk (ERM).
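Rendered in the more conventional notation of empirical risk minimization (a paraphrase under notational assumptions of my own, not the author’s formalism), the statement reads:

```latex
% Empirical risk of a hypothesis f \in H on a sample S = \{(x_i, y_i)\}_{i=1}^{n},
% for some loss function L:
R_{emp}(f) \;=\; \frac{1}{n} \sum_{i=1}^{n} L\bigl(f(x_i),\, y_i\bigr)
% P is then called predictable if some f = f(m) keeps the deviation below \varepsilon:
\exists\, f \in H :\; R_{emp}(f) < \varepsilon
```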

There are then two important questions. Firstly, can we trust f(m), since f(m) has been built on a limited number of observations? Secondly, how can we make f(m) more trustworthy, given the limitation regarding the data? Usually, these questions are handled under the label of validation. Yet, validation procedures are not the only possible means to get an answer here. It would be a misunderstanding to think that it is the building or construction of a model that is problematic.

The first question can be answered only by considering different models. For obtaining a set of different models we could apply different methods. That would be o.k. if prediction were our sole interest. Yet, we also strive for structural insights, and from that perspective we should, of course, not use different methods to get different models. The second possibility for addressing the first question is to use different sub-samples, which turns simple validation into a cross-validation. Cross-validation provides an expectation for the error (or the risk). Yet, in order to compare across methods one actually should describe the expected decrease in “predictive power”13 for different sample sizes (independent cross-validation per sample size). The third possibility for answering question (1) is related to the former and consists in adding noised, surrogated (or simulated) data. This prevents the learning mechanism from responding to empirically consistent, but nevertheless irrelevant noisy fluctuations in the raw data set. The fourth possibility is to look for models of equivalent predictive power which are, however, based on a different set of predicting variables. This possibility is not accessible for most statistical approaches such as Principal Component Analysis (PCA). Whatever method is used to create different models, models may be combined into a “bag” of models (called “bagging”), or, following an even more radical approach, into an ensemble of small and simple models. This is employed for instance in the so-called Random Forest method.
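A minimal sketch of the second possibility, independent cross-validation per sample size, might look as follows (assuming scikit-learn; the data set and the method are placeholders, not a prescription):

```python
# Sketch: expected decrease of "predictive power" for different sample sizes,
# via an independent cross-validation per sample size.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

for n in (200, 500, 1000, 2000):
    idx = rng.choice(len(y), size=n, replace=False)    # independent sub-sample
    scores = cross_val_score(RandomForestClassifier(random_state=0),
                             X[idx], y[idx], cv=5)
    print(f"n={n:5d}  expected predictive power={scores.mean():.3f} +/- {scores.std():.3f}")
```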

Commonly, if a model passes cross-validation successfully, it is considered to be able to “generalize”. In contrast to the common practice, Poggio et al. [7] demonstrated that standard cross-validation has to be extended in order to provide a characterization of the capability of a model to generalize. They propose to augment

CVloo stability with stability of the expected error and stability of the empirical error to define a new notion of stability, CVEEEloo stability.

This makes clear that Poggio et al.’s approach addresses the learning machinery, no longer just the space of hypotheses. Yet, they do not take the free parameters of the method into account. We conclude that their proposed approach still remains uncritical; thus I would consider such a model as not completely trustworthy. Of course, Poggio et al. are definitely pointing in the right direction. We recognize a move away from naive realism and positivism, towards a critical methodology of the conditional. Maybe philosophy and the natural sciences will find common ground again by riding the information tiger.

Checking the stability of the learning procedure leads to a methodology that we called “data experiments” elsewhere. The data experiments do NOT explore the space of hypotheses, at least not directly. Instead they create a map for all possible models. In other words, instead of just asking about predictability we now ask about the differential predictivity in the space of models.

From the perspective of a learning theory, the importance of Poggio’s move can’t be overestimated. Statistical learning theory (SLT) [8] explicitly assumes that a direct access to the world is possible (via the identity function, the perfectness of the model). Consequently, SLT focuses (only) on the reduction of the empirical risk. Any learning mechanism following the SLT is hence uncritical about its own limitation. SLT is interested in the predictability of the system-as-such, thereby, not really surprisingly, committing the mistake of pre-19th-century idealism.

The Independence Assumption

The independence assumption [I.A.], or linearity assumption, acts mainly on three different targets. The first of them is the relationship between observer and observed, its second target is the relationship between observables, and the third regards the relation between individual observations. This last aspect of the I.A. is the least problematic one; we will not discuss it any further.

Yet, the first and the second are the problematic ones. The I.A. is deeply buried in the framework of statistics, and from there it made its way into the field of explorative data analysis. There it can frequently be met, for instance in the geometrical operationalization of similarity, in the conceptualization of observables as Cartesian dimensions or independent coefficients in systems of linear equations, or as statistical kernels in algorithms like the Support Vector Machine.

Of course, the I.A. is just one possible stance towards the treatment of observables. Yet, taking it as an assumption, we will not include any parameter into the model that reflects the dependency between observables. Hence, we will never detect the most suitable hypothesis about the dependency between observables. Instead of assuming the independence of variables throughout an analysis, it would be methodologically much more sound to address the degree of dependency as a target. Linearity should not be an assumption, it should be a result of an analysis.

The linearity or independence assumption carries another assumption under its hood: the assumption of the homogeneity of variables. Variables, or assignates, are conceived as black boxes with unknown influence onto the predictive power of the model. Yet, usually they exert very different effects on the predictive power of a model.

Basically, it is very simple. The predictive power of a model depends on the positive predictive value AND the negative predictive value, of course; we may also use the closely related terms sensitivity and specificity. Accordingly, some variables contribute more to the positive predictive value, others help to increase the negative predictive value. This easily becomes visible if we perform a detailed type-I/II error analysis. Thus, there is NO way to avoid testing those combinations explicitly, even if we assume the initial independence of variables.
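To keep the terms concrete, here is a minimal sketch in Python; the figures reproduce Table 1a from the section on the utilization of information below:

```python
# The four basic measures derived from a confusion matrix.
def confusion_measures(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

print(confusion_measures(tp=100, fp=3, fn=28, tn=1120))
# -> sensitivity 0.781, specificity 0.997, ppv 0.971, npv 0.976
```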

As we already mentioned above, the I.A. is just one possible stance towards the treatment of observables. Yet, its status as a methodological sine qua non that additionally is never reflected upon renders it into a metaphysical assumption. It is in fact an irrational assumption, which induces serious costs in terms of the structural richness of the results. Taken together, the independence assumption represents one of the most harmful habits in data analysis.

The Model Selection Problem

In the section “Predictability and Predictivity” above we already emphasized the importance of the switch from the space of hypotheses to the space of models. The model space unfolds as a condition of the available assignates, the size of the data set and the free parameters of the associative (“modeling”) method. The model space supports a fundamental change of attitude towards the model. Based on the denial of the apriori assumption of independence of observables, we identified the idea of a singular best model as an ill-posed phantasm. We thus move onwards from the concept of the model as a mapping function towards ensembles of structurally heterogeneous models that together, as a distinguished population, form a habitat, a manifold in the sphere of the model space. With such a structure we no longer need to arrive at a single model.

Methods, Models, Variables

The model selection problem addresses two sets of parameters that are actually quite different from each other. Model selection should not be reduced to the treatment of the first set, of course, as happens at least implicitly for instance in [9]. The first set refers to the variables as known from the data, sometimes also called the „predictors“. The selection of the suitable variables is the first half of the model selection problem. The second set comprises all free parameters of the method. From the methodological point of view, this second set is much more interesting than the first one. The method’s parameters are apriori conditions for the performance of the method, which additionally usually remain invisible in the results, in contrast to the selection of variables.

For associative methods like the SOM or other clustering methods, the effect of de-/selecting variables can be easily described. Just take all the objects in front of you, for instance on the table, or in your room. Now select an arbitrary purpose and assign this purpose as a degree of support to those objects. For now, we have constructed the target. Then we go “into” the objects, that is, we describe them by a range of attributes that are present in most of the objects. Dependent on the selection of a subset from these attributes we will arrive at very different groups. The groups now represent the target more or less well; that’s the quality of the model. Obviously, this quality differs across the various selections of attributes. It is also clear that it does not help to just use all attributes, because some of the attributes just destroy the intended order; they add noise to the model and decrease its quality.
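The effect is easy to reproduce. The following sketch is purely illustrative (synthetic data, KMeans as a stand-in for any associative method) and merely scores how well the groups emerging from each attribute subset reproduce the chosen target:

```python
# Each 3-attribute selection yields different groups; the score measures
# the fit between the emerging groups and the target.
from itertools import combinations
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.metrics import adjusted_rand_score

X, target = make_classification(n_samples=300, n_features=6, n_informative=3,
                                n_redundant=0, random_state=1)

for subset in combinations(range(6), 3):
    labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X[:, list(subset)])
    print(subset, round(adjusted_rand_score(target, labels), 3))
# some selections reproduce the target rather well, others merely add noise
```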

As George observes [10], since its first formulation in the 1960s a considerable, if not large, number of proposals for dealing with the variable selection problem have been made. Although George himself seems to distinguish the two sets of parameters, throughout the discussion of the different approaches he always refers just to the first set, the variables as included in the data. This is not a failure of the said author, but a problem of the statistical approach. Usually, the parameters of statistical procedures are not accessible; like any analytic procedure, they work as they work. In contrast to Self-organizing Maps, and even to Artificial Neural Networks (ANN) or genetic procedures, analytic procedures can’t be modified in order to achieve a critical usage. In some way, with their mono-bloc design they perfectly fit into the representationalist fallacy.

Thus, using statistical (or other analytic) procedures, the model selection problem consists of the variable selection problem and the method selection problem. The consequences are catastrophic: if statistical methods are used in the context of modeling, the whole statistical framework turns into a black box, because the selection of a particular method can’t be justified in any respect. In contrast to that quite unfavorable situation, methods like the Self-Organizing Map provide access to any of their parameters. Data experiments are only possible with methods like the SOM or ANN. Not the SOM or the ANN are „black boxes“; it is the statistical framework that must be regarded as such. Precisely this is also the reason for the still ongoing quarrels about the foundations of the statistical framework. There are two parties, the frequentists and the Bayesians. Yet, both are struck by the reference class problem [11]. From our perspective, the current dogma of empirical work in science needs to be changed.

The conclusion is that statistical methods should not be used at all to describe real-world data, i.e. for the modeling of real-world processes. They are suitable only within a fully controlled setting, that is, within a data experiment. The first step in any kind of empirical analysis thus must consist of a predictive modeling that includes the model selection task.14

The Perils of Universalism

Many people dealing with the model selection task are misled by a further irrational phantasm, caused by a mixture of idealism and positivism: the phantasm of the single best model for a given purpose.

Philosophers of science long ago recognized, starting with Hume and ultimately expressed by Quine, that empirical observations are underdetermined. The actual challenge posed by modeling is given by this fact of empirical underdetermination. Goodman felt obliged to construct a paradox from it. Yet, there is no paradox, there is only the phantasm of the single best model. This phantasm is a relic from the Newtonian period of science, when everybody thought the world was made by God as a miraculous machine, everything had to be well-defined, and persisting contradictions had to be rated as evil.

Secondarily, this moults into the affair of (semantic) indetermination. Plainly spoken, there are never enough data. Empirical underdetermination results in the actuality of strongly diverging models, which in turn give rise to conflicting experiences. For a given set of data, in most cases it is possible to build very different models (ceteris paribus, choosing different sets of variables) that yield the same utility, or say predictive power, as far as this predictive power can be determined from the available data sample at all. Such a ceteris paribus difference will not only give rise to quite different tracks of unfolding interpretation; it is also certainly in the close vicinity of Derrida’s deconstruction.

Empirical underdetermination thus results in a second-order risk, the model selection risk. Actually, the model selection risk is the only relevant risk: we can’t change the available data, and data are always limited, sometimes just by their puniness, sometimes by the restrictions on dealing with them. Risk is not attached to objects or phenomena, because objects “are not there” before interpretation and modeling. Risk is attached only to models. Risk is a particular state of affairs, and indeed a rather fundamental one. Once a particular model tells us that there is an uncertainty regarding the outcome, we can take measures to deal with that uncertainty; for instance, we hedge it, or organize some other kind of insurance for it. But hedging has to rely on the estimation of the uncertainty, which depends on the expected predictive power of the model, not just on the accuracy of the model given the available data from a limited sample.

Different, but equivalent selections of variables can be used to create a group of models as „experts“ on a given task to decide on. Yet, the selection of such „experts“ is not determinable on the basis of the given data alone. Instead, further knowledge about the relation of the variables to further contexts or targets needs to be consulted.

Universalism is usually unjustifiable, and claiming it nevertheless usually comes at huge costs, caused by undetectable blindnesses once we accept it. In contemporary empiricism, universalism—and the respective blindness—is abundant also with regard to the role of the variables. What I am talking about here is context, mediality and individuality, which, from a more traditional formal perspective, is often approximated by conditionality. Yet, it becomes more and more clear that the Bayesian mechanisms are not sufficient to cover the complexity of the concept of variables. Just to mention the current developments in the field of probability theory, I would like to refer to Brian Weatherson, who favors and develops the so-called dynamic Keynesian models of uncertainty. [10] Yet, we regard this only as a transitional theory, despite the fact that it will have a strong impact on the way scientists will handle empirical data.

The mediating individuality of observables (as deliberately chosen assignates, of course) is easy to observe once we drop the universalism qua independence of variables. Concerning variables, universalism manifests in an indistinguishability of the choices made to establish the assignates with regard to their effect onto the system of preferences. Some criterion C will induce the putative objects as distinguished ones only if another assignate A has pre-sorted them. Yet, it would be a simplification to consider the situation in the Bayesian way as P(C|A). The problem with it is that we can’t say anything about the condition itself. Yet, we need to “play” with (actually not “control”) the conditionability, the inner structure of these conditions. As with the “relation,” which we already generalized into randolations, making it thereby measurable, we also have to go into the condition itself in order to defeat idealism even on the structural level. An appropriate perspective onto variables would hence treat them as a kind of media. This mediality is not externalizable, though, since observables themselves precipitate from the mediality, then as assignates.

What we can experience here is nothing else than the first advents of a real post-modernist world, an era where we emancipate from the compulsive apriori of independence (this does not deny, of course, its important role in the modernist era since Descartes).

Optimization

Optimizing a model means selecting a combination of suitably valued parameters such that the preferences of the users, in terms of risk and implied costs, are served best. The model selection problem is thus the link between optimization problems, learning tasks and predictive modeling. There are indeed countless procedures for optimization. Yet, the optimization task in the context of model selection is faced with a particular challenge: its mere size. George begins his article in the following way:

A distinguishing feature of variable selection problems is their enormous size. Even with moderate values of p, computing characteristics for all 2^p models is prohibitively expensive and some reduction of the model space is needed.

Assume for instance a data set that comprises 50 variables. From that, 2^50 ≈ 1.13e15 models are possible. Assume further that we could test 10‘000 models per second; then we still would need more than 3‘500 years to check all models. Usually, however, building a classifier on a real-world problem takes more than 10 seconds, which would result in about 3.6e8 years in the case of 50 variables. And there are many instances where one is faced with many more variables, typically 100+, sometimes going even into the thousands. That’s what George means by „prohibitively“.
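For checking, the arithmetic behind these figures:

```python
# Why exhaustive variable selection is "prohibitively expensive".
p = 50
n_models = 2 ** p                           # all subsets of 50 variables
per_second = 10_000
years = n_models / per_second / (3600 * 24 * 365)
print(f"{n_models:.3e} models, {years:,.0f} years at 10'000 models/s")
# -> 1.126e+15 models, ~3,570 years; at 10 s per model, ~3.6e8 years
```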

There are many proposals to deal with that challenge. All of them fall into three classes: they use either (1) some information-theoretic measure (AIC, BIC, CIC etc. [11]), or (2) likelihood estimators, i.e. they conceive of parameters themselves as random variables, or (3) they are based on probabilistic measures established upon validation procedures. Particularly the instances from the first two of those classes are hit by the linearity and/or the independence assumption, and also by unjustified universalism. Of course, linearity should not be an assumption; it should be a result, as we argued above. Hence, there is no way around the explicit calculation of models.

Given the vast number of combinations of symbols, it appears straightforward to conceive of the model selection problem from an evolutionary perspective. Evolution always creates appropriate and suitable solutions from the available „evolutionary model space“. That space is of size 2^30‘000 in the case of humans, a „much“ larger number than the number of species ever existent on this planet. Not a single viable configuration could have been found by pure chance. Genetics-based alignment and navigation through the model space is much more effective than chance. Hence, the so-called genetic algorithms might appear on the radar as the method of choice.

Genetics, revisited

Unfortunately, for the variable selection problem genetic algorithms15 are not suitable. The main reason for this is still the expensive calculation of single models. In order to set up the genetic procedure, one needs at least 500 instances to form the initial population, while any solution for the variable selection problem should arrive at a useful solution with fewer than 200 explicitly calculated models. The great advantage of genetic algorithms is their capability to deal with solution spaces that contain local extrema. They can handle even solution spaces that are inhomogeneously rugged, simply for the reason that recombination in the realm of the symbolic does not care about numerical gradients and criteria. Genetic procedures are based on combinations of symbolic encodings. The continuous switch between the symbolic (encoding) and the numerical (effect) is nothing else than the precursor of the separation between genotype and phenotype, without which there would not be even simple forms of biological life.

For that reason we developed a specialized instantiation of the evolutionary approach (implemented in SomFluid). Described very briefly we can say that we use evolutionary weights as efficient estimators of the maximum likelihood of parameters. The estimates are derived from explicitly calculated models that vary (mostly, but not necessarily ceteris paribus) with respect to the used variables. As such estimates, they influence the further course of the exploration of the model space in a probabilistic manner. From the perspective of the evolutionary process, these estimates represent the contribution of the respective parameter to the overall fitness of the model. They also form a kind of long-term memory within the process, something like a probabilistic genome. The short-term memory in this evolutionary process is represented by the intensional profiles of the nodes in the SOM.

For the first initializing step, the evolutionary estimates can themselves be estimated by linear procedures like PCA, or by non-parametric procedures (Kruskal-Wallis, Mann-Whitney, etc.), and are available after only a few explicitly calculated models (model here meaning a „ceteris paribus selection of variables“).

These evolutionary weights reflect the changes of the predictive power of the model when adding or removing variables. If the quality of the model improves, the evolutionary weight increases a bit, and vice versa. In other words, not the apriori parameters of the model are considered, but just the effect of the parameters. The procedure is an approximating repetition: fix the parameters of the model (method-specific, sampling, variables), calculate the model, record the change of the predictive power as compared to the previous model.
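A minimal sketch of this repetition, emphatically not the SomFluid implementation, just an illustration of the weight-update idea; evaluate() is a placeholder for the explicit calculation of a model on a given variable subset:

```python
# Evolutionary weights steering the exploration of the model space.
import numpy as np

def explore(evaluate, p, n_models=200, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    w = np.full(p, 0.5)          # evolutionary weights: the "probabilistic genome"
    best, prev = None, 0.0
    for _ in range(n_models):
        subset = np.where(rng.random(p) < w)[0]     # the weights steer the sampling
        if subset.size == 0:
            continue
        power = evaluate(subset)                    # explicit calculation of one model
        delta = step if power > prev else -step     # reward or punish the used variables
        w[subset] = np.clip(w[subset] + delta, 0.01, 0.99)
        if best is None or power > best[0]:
            best = (power, subset)
        prev = power
    return best, w   # w approximates each variable's expected contribution
```

A weight that drifts towards 1 regardless of context would, in the terms used below, flag the respective variable as a candidate confounder.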

Upon the probabilistic genome of evolutionary weights there are many different ways one could take to implement the “evo-devo” mechanisms, be it the issue of how to handle the population (e.g. mixing genomes, aspects of virtual ecology, etc.), or the translational mechanisms, so to speak the “physiologies” that are used to proceed from the genome to an actual phenotype.

Since many different combinations are being calculated, the evolutionary weight represents the expectable contribution of a variable to the predictive power of the model, under whatsoever selection of variables that represents a model. Usually, a variable will not improve the quality of the model irrespective of the context. Yet, if a variable indeed did so, we not only would say that its evolutionary weight equals 1, we also may conclude that this variable is a so-called confounder. Including a confounder into a model means that we use information about the target which will not be available when applying the model for the classification of new data; hence the model will fail disastrously. Usually it is not possible for a procedure to identify confounders by itself; that it becomes possible here is just a further benefit of dropping the independence-universalism assumption. It is also clear that the capability to do so is one of the cornerstones of autonomous learning, which includes the capability to set up the learning task.

Noise, and Noise

Optimization raises its own follow-up problems, of course. The most salient of these is so-called overfitting. This means that the model gets suitably fitted to the available observations by including a large number of parameters and variables, but it will return wrong predictions if it is used on data that are even only slightly different from the observations used for learning and estimating the parameters. Such a model represents noise: random variations without predictive value.

As we described above, Poggio believes that his criterion of stability overcomes the defects with regard to the model as a generalization from observations. Poggio might be too optimistic, though, since his method still remains confined to the available observations.

In this situation, we apply a methodological trick. The trick consists in turning the problem into a target of investigation, which ultimately translates the problem into an appropriate rule. In this sense, we consider noise not as a problem, but as a tool.

Technically, we destroy the relevance of the differences between the observations by adding noise of a particular characteristic. If we add a small amount of normally distributed noise, probably nothing will change; but if we add a lot of noise, perhaps even of secondarily changing distribution, this will result in the mere impossibility of creating a stable model at all. The scientific approach is to describe the dependency between those two unknowns, so to say, to set up a differential between the noise (a model for the unknown) and the model (of the unknown). The rest is straightforward: creating various data sets that have been changed by imposing different amounts of noise of a known structure, and plotting the predictive power against the amount of noise. This technique can be combined with surrogating the actual observations via a Cholesky decomposition.
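As a sketch (placeholder data and classifier; the point is the differential between noise level and predictive power, not the particular method):

```python
# "Noise as a tool": impose normally distributed noise of known, increasing
# strength on the raw data and record the decay of predictive power.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=15, random_state=0)
rng = np.random.default_rng(0)

for sigma in (0.0, 0.1, 0.3, 1.0, 3.0):
    Xn = X + rng.normal(0.0, sigma, X.shape)   # noise of known structure
    power = cross_val_score(RandomForestClassifier(random_state=0), Xn, y, cv=5).mean()
    print(f"noise sigma={sigma:4.1f}  predictive power={power:.3f}")
# a model whose power collapses already at small sigma has represented mere noise
```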

From all available models, those are then preferred that combine a suitable predictive power with a suitable degree of stability against noise.

Résumé

In this section we have dealt with the problem of selecting a suitable subset from all available observables (neglecting for the time being that model selection involves the method’s parameters, too). Since we mostly have more observables at our disposal than we actually presume to need, the task could simply be described as simplification, aka Occam’s Razor. Yet, it would be terribly naive to first assume linearity and then select the “most parsimonious” model. It is even cruel to state [9, p.1]:

It is said that Einstein once said

Make things as simple as possible, but not simpler.

I hope that I succeeded in providing some valuable hints for accomplishing that task, which above all is not a quite simple one. (etc.etc. :)

Describing Classifiers

The gold standard for describing classifiers is believed to be the Receiver Operating Characteristic, or short, ROC. Particularly, the area under the curve is compared across models (classifiers). The following Figure 1 demonstrates the mechanics of the ROC plot.

Figure 1: Basic characteristics of the ROC curve (reproduced from Wikipedia)

Figure 2. Realistic ROC curves, though these are typical for approaches that are NOT based on sub-group structures or ensembles (for instance ANN or logistic regression). Note that models should not be selected on the basis of the area under the curve. Instead, the true positive rate (sensitivity) at a false positive rate FPR=0 should be used for that. As a further criterion, one that would indicate the stability of the model, one could use the slope of the curve at FPR=0.
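Reading those two criteria off a ROC curve could look like this; a sketch assuming scikit-learn’s roc_curve, with y_true and scores as placeholders:

```python
# Model selection criteria from Figure 2: sensitivity at FPR=0 and the
# initial slope of the ROC curve, instead of the area under the curve.
import numpy as np
from sklearn.metrics import roc_curve

def roc_criteria(y_true, scores):
    fpr, tpr, _ = roc_curve(y_true, scores)
    tpr_at_zero = tpr[fpr == 0].max()            # sensitivity at FPR = 0
    i = np.searchsorted(fpr, 0, side="right")    # first point with FPR > 0
    slope = (tpr[i] - tpr_at_zero) / fpr[i] if i < len(fpr) else float("inf")
    return tpr_at_zero, slope
```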

Utilization of Information

There is still another harmful aspect of the universalistic stance in data analysis as compared to a pragmatic stance. This aspect concerns the „reach“ of the models we are going to build.

Let us assume that we would accept a sensitivity of approx. 80%, but expect a specificity of >99%. In other words, the costs for false positives (FP) are defined as very high, while the costs for false negatives (FN, not recognized preferred outcomes) are relatively low. The ratio of error costs, or in short the error cost ratio err(FP)/err(FN), is high.

Table 1a: A Confusion matrix for a quite performant classifier.

Symbols: test=model; TP=true positives; FP=false positives; FN=false negatives; TN=true negatives; ppv=positive predictive value, npv=negative predictive value. FP is also called type-I error (analogous to “rejecting the null hypothesis when it is true”), while FN is called type-II error (analogous to “accepting the null hypothesis when it is false”); FN/(TP+FN) is called the type-II error rate, sometimes labeled as β-error, where (1-β) is called the “power” of the test or model. (download XLS example)

                 condition Pos    condition Neg
test Pos         100 (TP)         3 (FP)           ppv = 0.971
test Neg         28 (FN)          1120 (TN)        npv = 0.976
                 sensitivity:     specificity:
                 0.781            0.997

Let us further assume that there are observations of our preferred outcome that we can‘t distinguish well from cases of the opposite outcome that we try to avoid. They are too similar, and due to that similarity they form a separate group in our self-organizing map. Let us assume that the specificity of such a cluster is at 86% only, while its sensitivity is at 94%.

Table 1b: Confusion matrix describing a sub-group formed inside the SOM, for instance as it could be derived from the extension of a “node”. Values outside the parentheses refer to the case of denial of the sub-group, values in parentheses to the case of its acceptance as a contributor to positive predictions (see below).

                 condition Pos    condition Neg
test Pos         0 (50)           0 (39)           ppv = 0.0 (0.56)
test Neg         50 (0)           39 (0)           npv = 0.44 (1.0)
                 sensitivity:     specificity:
                 0.0 (1.0)        1.0 (0.0)

Yet, this cluster would not satisfy our risk attitude. If we used the SOM as a model for the classification of new observations, and a new observation fell into that group (by means of similarity considerations), the implied risk would violate our attitude. Hence, we have to exclude such clusters. In the ROC, this cluster represents a value further to the right on the (1-specificity) (X-) axis.

Note that in case the subgroup is accepted as a contributor to positive predictions, the false negatives are always 0 a posteriori, while in case of denial the true positives are set to 0 (and accordingly for the condition-negative figures).

Several important, mutually related points follow from this. Actually, we should be interested only in such sub-groups with a specificity close to 1, such that our risk attitude is well served. [13] Likewise, we should not try to optimize the quality of the model across the whole range of the ROC, but only for the sub-groups with an acceptable error cost ratio. In other words, we use the available information in a very specific manner.

As a consequence, we have to set the ECR before calculating the model. Setting the ECR after the selection of a model results in a waste of information, time and money. For this reason it is strongly indicated to use methods that are based on building a representation by sub-groups. This again rules out statistical methods, as they always take into account all available data. Zytkow calls such methods empirically empty [14].
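
What "setting the ECR before calculating the model" amounts to on the level of a single sub-group may be sketched as follows; the decision rule is our own minimal formalization, not the full procedure.

```python
def node_predicts_positive(tp, fp, ecr):
    """Accept or deny a sub-group (e.g. a SOM node) as a positive predictor.
    ecr = err(FP) / err(FN), fixed BEFORE the model is calculated.
    Declaring the node positive incurs fp false positives (weighted by ecr);
    declaring it negative turns its tp cases into false negatives (weight 1).
    Accept the cheaper option."""
    return fp * ecr < tp

# the sub-group of Table 1b (tp=50, fp=39) is denied under a high cost ratio,
# exactly the exclusion described above (ecr=10 is an arbitrary example value):
node_predicts_positive(tp=50, fp=39, ecr=10.0)   # -> False
```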

The possibility to build models of high specificity is a huge benefit of sub-group based methods like the SOM.16 To understand this better, let us assume we have a SOM-based model with the following overall confusion matrix.

              condition Pos    condition Neg
test Pos      78               1                0.9873   ppv
test Neg      145              498              0.7745   npv
              0.350            0.998
              sensitivity      specificity

That is, the model recognizes around 35% of all preferred outcomes. It does so on the basis of sub-groups that all satisfy the respective ECR criterion. Thus we know that the implied risk of any positive classification is very low, too. In other words, such models recognize whether it is allowed to apply them. If we apply them and get a positive answer, we also know that it is justified to apply them. Once the model identifies a preferred outcome, it does so without risk. This lets us miss opportunities, but we won't be trapped by false expectations. Such models we could call auto-consistent.

In a practical project aiming at an improvement of the post-surgery risk classification of patients (n>12'000) in a hospital, we have been able to demonstrate that the achievable validated rate of implied risk can be lower than 10^-4. [15] Such a low rate is not achievable by statistical methods, simply because there are far too few incidents of wrong classification. The subjective cut-off points in logistic regression are not quite suitable for such tasks.

At the same time, and that's probably even more important, we get a suitable segmentation of the observations. All observations that can be identified as positive do not suffer from any risk. Thus we could investigate the structure of the data for these observations, e.g. as particular relationships between variables, such as correlations etc. But, hey, that job is already done by the selection of the appropriate set of variables! In other words, we not only have a good model, we also have found the best possibility for a multi-variate reduction of noise, with full consideration of the dependencies between variables. Such models can be conceived as a reversed factorial experimental design.

The property of auto-consistency offers a further benefit, as it is scalable; that is, "auto-consistent" is not a categorical, or symbolic, assignment. It can easily be measured as the sensitivity under the condition of specificity > 1-ε, ε→0. Thus we may use it as a random measure (it can be described by its density), or as a scale of reference in any selection task among sub-populations of models. Additionally, if the exploration of the model space does not succeed in finding a model of a suitable degree of auto-consistency, we may conclude that the quality of the data is not sufficient. Data quality is a function of properly selected variables (predictors) and reproducible measurement. We know of no other approach that would be able to inform about the quality of the data without referring to extensive contextual "knowledge". Needless to say, such knowledge is never available and encodable.
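
A minimal sketch of that measurement follows; the greedy accumulation of the purest nodes first is our own, purely illustrative assumption about how the sub-groups would be collected.

```python
def auto_consistency(nodes, eps=1e-3):
    """nodes: per sub-group counts (n_pos, n_neg) of observations falling
    into each node. Returns the sensitivity achieved while the accumulated
    false positive rate stays below eps, i.e. specificity > 1 - eps."""
    total_pos = sum(p for p, n in nodes)
    total_neg = sum(n for p, n in nodes)
    tp = fp = 0
    # accumulate the purest nodes first (a greedy, hypothetical ordering)
    for p, n in sorted(nodes, key=lambda pn: pn[1] / max(pn[0] + pn[1], 1)):
        if total_neg and (fp + n) / total_neg > eps:
            break
        tp, fp = tp + p, fp + n
    return tp / total_pos if total_pos else 0.0
```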

There are only weak conditions that need to be satisfied. For instance, the same selection of variables needs to be used within a single model for all similarity considerations. This rules out all ensemble methods in so far as different selections of variables are used for each item of the ensemble, as for instance in decision tree methods (a SOM with its sub-groups is already "ensemble-like", yet all sub-groups are affected by the same selection of variables). It is further required to use a method that performs the transition from extensions to intensions on a sub-group level, which rules out analytic methods and even Artificial Neural Networks (ANN): the way to establish auto-consistent models is not open to an ANN. Finally, the error cost ratio must be set before calculating the model, and the models have to be calculated explicitly, which removes methods such as Support Vector Machines with linear kernels, regression, ANN, or Bayesian methods from the list. If we want to access the rich harvest of auto-consistent models, we have to drop the independence hypothesis and to refute any kind of universalism. But these costs are rather low, indeed.

Observations and Probabilities

Here we developed a particular perspective onto the transition from observations to intensional representations. There are of course some interesting relationships of our point of view to the various possibilities of “interpreting” probability (see [16] for a comprehensive list of “interpretations” and interesting references). We also provide a new answer to Hume’s problem of induction.

Hume posed the question of how often we should observe a fact until we could consider it as lawful. This question, called the "problem of induction", points in the wrong direction and will trigger only irrelevant answers. Hume, still living in times of absolute monarchy, in a society deeply structured by religious beliefs, established a short-cut between the frequency of an observation and its propositional representation. The actual question, however, is how to achieve what we call an "observation" in the first place.

In very simple, almost artificial cases like the die there is nothing to interpret. The die and its values are already symbols. It is in some way inadequate to conceive of a die or of dicing as an empirical issue. In fact, we know before what could happen. The universe of the die consists of precisely 6 singular points.

Another extreme are so-called single-case observations of structurally rich events, or processes. An event or a setting should be called structurally rich if there are (1) many different outcomes, and (2) many possible assignates to describe the event or the process. Such events or processes will not produce any outcome that could be expected on symbolic or formal grounds. Obviously, it is not possible to assign a relative frequency to a unique, singular, or non-repeatable event. Unfortunately, however, as Hájek points out [16], any actual sequence can be conceived of as a singular event.

The important point now is that single-case observations are also not sufficiently describable as an empirical issue. Ascribing propensities to objects-in-the-world demands a wealth of modeling activities and classifications, which have to be completed apriori to the observation under scrutiny. So-called single-case propensities are not a problem of probabilistic theory, but one of the application of intensional classes and their usage as means for organizing one's own expectations. As we said earlier, probability as it is used in probability theory is not a concept that could be applied meaningfully to observations, where observations are conceived of as primitive "givens". Probabilities are meaningful only in the closed world of available, subjectively held concepts.

We thus have to distinguish between two areas of application for the concept of probability: the observational part, where we build up classes, and the anticipatory part, where we are interested in a match of expectations and actual outcomes. The problem obviously arises from mixing them through the notion of causality.17 Yet, there is absolutely no necessity between the two areas. The concept of risk probably allows for a resolution of the problems, since risk always implies a preceding choice of a cost function, which necessarily is subjective. Yet, the cost function and the risk implied by a classification model are also the pivot for any kind of negotiation, whether this takes place on a material, hence evolutionary scale, or within a societal context.

The interesting, if not salient point is that the subjectively available intensional descriptions and classes depend on one's risk attitude. We may observe the same thing only if we have acquired the same system of related classes and the same habits of using them. Only if we apply extreme risk aversion will we achieve a common understanding about facts (in the Wittgensteinian sense, see above). This, then, is called science, for instance. Yet, it still remains a misunderstanding to equate this common understanding with objects as objects-out-there.

The problem of induction thus must be considered as a seriously ill-posed problem. It is a problem only for idealists (who then solve it in a weird way), or for realists that are naive about the epistemological conditions of acting in the world. Our proposal for the transition from observations to descriptions is based on probabilism on both sides, yet on either side there is a distinct flavor of probabilism.

Finally, a methodological remark shall be allowed, closely related to what we already described in the section about “noise” above. The perspective onto “making experience” that we have been proposing here demonstrates a significant twist.

Above we already mentioned Alan Hájek's diagnosis that the frequentist and the Bayesian interpretations of probability suffer from the reference class problem. In this section we extended Hájek's concerns to the concept of propensity. Yet, if the problem shows such a high prevalence, we should not conceive of it as a hurdle but should try to treat it dynamically, as a rule. The reference class is only a problem as long as (1) either the actual class is required as an external constant, or (2) the abstract concept of the class is treated as a fixed point. According to the rule of Lagrange-Deleuze, any constant can be rewritten into a procedure (read: rules) and less problematic constants. Constants, or fixed points, on a higher abstract level are less problematic because the empirically grounded semantics vanishes.

Indeed, the problem of the reference class simply disappears if we posit the concept of the class, together with all the related issues of modeling, as the embedding frame, the condition under which any notion of probability can make sense at all. The classes themselves are results of "rule-following", which admittedly is blind, but whose parameters are also transparently accessible. In this way, probabilistic interpretation is always performed in a universe that is closed and in principle fully mapped. We need the probabilistic methods just because that universe is of huge size. In other words, the space of models is a Laplacean Universe.

Since statistical methods and similar interpretations of probability are analytical techniques, our proposal for a re-positioning of statistics into such a Laplacean Universe is also well aligned with the general habit of Wittgenstein’s philosophy, which puts practiced logic (quasi-logic) second to performance.

The disappearance of the reference class problem should be expected if our relations to the world are always mediated through the activity of abstract, epistemic modeling. The usage of probability theory as a "conceptual game" aiming at sharing diverging attitudes towards risk appears as nothing else than a particular style of modeling, though admittedly one that offers a reasonable rate of success.

The Result of Modeling

It should be clear by now that the result of modeling is much more than just a single predictive model. Regardless of whether we take the scientific perspective or a philosophical vantage point, we need to include operationalizations of the conditions of the model that reach beyond the standard empirical risk expressed as "false classifications". Appropriate modeling provides not only a set of models of different structures and with well-estimated stability; a further goal is to establish models that are auto-consistent.

If the modeling employs a method that exposes its parameters, we can even avoid the "method hell"; that is, the results are not only reliable, they are also valid.

It is clear that only auto-consistent models are useful for drawing conclusions and for building up experience. If variables are just weighted without actually being removed, as for instance in approaches like the Support Vector Machine, the resulting models are not auto-consistent. Hence, there is no way towards a propositional description of the observed process.

Given the population of explicitly tested models, it is also possible to describe the differential contribution of any variable to the predictive power of a model. The assumption of neutrality or symmetry of that contribution, as it is applied for instance in statistical learning, is a simplistic perspective onto the variables and the system represented by them.

Conclusion

In this essay we described some technical aspects of the capability to experience. These technical aspects link the possibility for experience to the primacy of interpretation, which gets actualized as the techné of anticipatory, i.e. predictive or diagnostic, modeling. This techné does not address the creation or derivation of a particular model by means of employing one or several methods; the process of building a model could be fully automated anyway. Quite differently, it focuses on the parametrization, validation, evaluation and application of models, particularly with respect to the task of extracting a rule from observational data. This extraction of rules must not be conceived as a "drawing of conclusions" guided by logic. It is a constructive activity.

The salient topics in this practice are the selection of models and the description of the classifiers. We emphasized that the goal of modeling should not be conceived as the task of finding a single best model.

Methods like the Self-organizing Map, which are based on a sub-group segmentation of the data, can be used to create auto-consistent models, which also represent an optimally de-noised subset of the measured data. This data sample could be conceived as if it had been found by a factorial experimental design. Thus, auto-consistent models also provide quite valuable hints for the setup of the Taguchi method of quality assurance, which could be seen as a precipitation of organizational experience.

In the context of an exploratory investigation of observational data, one first has to determine the suitable observables (variables, predictors) and, by means of the same model(s), the suitable segment of observations, before drawing domain-specific conclusions. Such conclusions are often expressed as contrasts in location or variation. In the context of designed experiments, as e.g. in pharmaceutical research, one first has to check the quality of the data, then to de-noise the data by removing outliers by means of the same data segmentation technique, before null hypotheses about expected contrasts can be tested.

As such, auto-consistent models provide a perfect basis for learning and for extending the "experience" of an epistemic individual. According to our proposals, this experience does not suffer from the various problems of traditional Humean empiricism (the induction problem), or of contemporary (defective) theories of probabilism (mainly the problem of reference classes). Nevertheless, our approach remains fully empirico-epistemological.

Notes

1. Like many other philosophers, Lyotard emphasized the indisputability of an attention for the incidental, not as a perception-as, but as an aisthesis, a forming impression. See: Dieter Mersch, ›Geschieht es?‹ Ereignisdenken bei Derrida und Lyotard. available online, last accessed May 1st, 2012. Another recent source arguing in the same direction is John McDowell's "Mind and World" (1996).

2. The label “representationalism” has been used by Dreyfus in his critique of symbolic AI, the thesis of the “computational mind” and any similar approach that assumes (1) that the meaning of symbols is given by their reference to objects, and (2) that this meaning is independent of actual thoughts, see also [2].

3. It would be inadequate to represent such a two-fold "almost" dichotomy as a 2-axis coordinate system, even if such a representation would be a metaphorical one only; rather, it should be conceived as a tetrahedric space, given by two vectors passing nearby without intersecting each other. Additionally, the structure of that space must not be expected to be flat; it looks much more like an inhomogeneous hyperbolic space.

4. “Experiencibility” here not understood as an individual capability to witness or receptivity, but as the abstract possibility to experience.

5. In the same way we reject Husserl's phenomenology. Phenomena, much like the objects of positivism or the thing-as-such of idealism, are not "out there"; they are results of our experiencibility. Of course, we do not deny that there is a materiality that is independent from our epistemic acts, but that does not explain or describe anything. In other words, we propose to go subjective (see also [3]).

6. Again, mechanism here should not be misunderstood as a single deterministic process as it could be represented by a (trivial) machine.

7. This question refers to the famous passage in the Tractatus that "The world is everything that is the case." Cases, in the terminology of the Tractatus, are facts as the existence of states of affairs. We may say, there are certain relations. In the Tractatus, Wittgenstein excluded relations that could not be explicated by the use of symbols, as expressed by the 7th proposition: "Whereof one cannot speak, thereof one must be silent."

8. We must step outside of language in order to see the working of language.

9. We just have to repeat it again, since many people develop misunderstandings here. We do not deny the material aspects of the world.

10. "Individual" is quite misleading here, since our brain and even our mind is not in-divisible in the atomistic sense.

11. Thus, it is also not reasonable to claim the existence of a somehow dualistic language, one part being without ambiguities and vagueness, the other one establishing ambiguity deliberately by means of metaphors. Lakoff & Johnson started from a similar idea, yet they developed it into a direction that is fundamentally incompatible with our views in many ways.

12. Of course, the borders are not well defined here.

13. “predictive power” could be operationalized in quite different ways, of course….

14. Correlational analysis is not a candidate to resolve this problem, since it can’t be used to segment the data or to identify groups in the data. Correlational analysis should be performed only subsequent to a segmentation of the data.

15. The so-called genetic algorithms are not algorithms in the narrow sense, since there is no well-defined stopping rule.

16. It is important to recognize that Artificial Neural Networks do NOT belong to the family of sub-group based methods.

17. Here another circle closes: the concept of causality can't be used in a meaningful way without considering its close amalgamation with the concept of information, as we argued here. For this reason, Judea Pearl's approach towards causality [17] is seriously defective, because he completely neglects the epistemic issue of information.

References
  • [1] Geoffrey C. Bowker, Susan Leigh Star. Sorting Things Out: Classification and Its Consequences. MIT Press, Boston 1999.
  • [2] William Croft, Esther J. Wood, Construal operations in linguistics and artificial intelligence. in: Liliana Albertazzi (ed.), Meaning and Cognition. Benjamins Publ., Amsterdam 2000.
  • [3] Wilhelm Vossenkuhl. Solipsismus und Sprachkritik. Beiträge zu Wittgenstein. Parerga, Berlin 2009.
  • [4] Douglas Hofstadter, Fluid Concepts And Creative Analogies: Computer Models Of The Fundamental Mechanisms Of Thought. Basic Books, New York 1996.
  • [5] Nicholas F. Gier, Wittgenstein and Deconstruction, Review of Contemporary Philosophy 6 (2007); first publ. in Nov 1989. Online available.
  • [6] Henk L. Mulder, B.F.B. van de Velde-Schlick (eds.), Moritz Schlick, Philosophical Papers, Volume II: (1925-1936), Series: Vienna Circle Collection, Vol. 11b, Springer, Berlin New York 1979. with Google Books
  • [7] Tomaso Poggio, Ryan Rifkin, Sayan Mukherjee & Partha Niyogi (2004). General conditions for predictivity in learning theory. Nature 428, 419-422.
  • [8]  Vladimir Vapnik, The Nature of Statistical Learning Theory (Information Science and Statistics). Springer 2000.
  • [9] Herman J. Bierens (2006). Information Criteria and Model Selection. Lecture notes, mimeo, Pennsylvania State University. available online.
  • [10] Brian Weatherson (2007). The Bayesian and the Dogmatist. Aristotelian Society Vol. 107, Issue 1pt2, 169-185. draft available online.
  • [11] Edward I. George (2000). The Variable Selection Problem. J Am Stat Assoc, Vol. 95 (452), pp. 1304-1308. available online, as research paper.
  • [12] Alan Hájek (2007). The Reference Class Problem is Your Problem Too. Synthese 156(3): 563-585. draft available online.
  • [13] Lori E. Dodd, Margaret S. Pepe (2003). Partial AUC Estimation and Regression. Biometrics 59( 3), 614–623.
  • [14] Zytkow J. (1997). Knowledge=concepts: a harmful equation. 3rd Conference on Knowledge Discovery in Databases, Proceedings of KDD-97, pp. 104-109. AAAI Press.
  • [15] Thomas Kaufmann, Klaus Wassermann, Guido Schüpfer (2007). Beta error free risk identification based on SPELA, a neuro-evolution method. presented at ESA 2007.
  • [16] Alan Hájek, “Interpretations of Probability”, The Stanford Encyclopedia of Philosophy (Summer 2012 Edition), Edward N. Zalta (ed.), available online, or forthcoming.
  • [17] Judea Pearl, Causality – Models, Reasoning, and Inference. 2nd ed. Cambridge University Press, Cambridge 2008 [2000].

۞

Analogical Thinking, revisited. (II)

March 20, 2012 § Leave a comment

In this second part of the essay (II/II) about a fresh perspective on analogical thinking—more precisely: on models about it—we will try to bring two concepts together that at first sight represent quite different approaches: Copycat and SOM.

Why engage in such an endeavor? Firstly, we are quite convinced that FARG's Copycat demonstrates an important and outstanding architecture. It provides a well-founded proposal about the way we humans apply ideas and abstract concepts to real situations. Secondly, however, it is also clear that Copycat suffers from a few serious flaws in its architecture, particularly the built-in idealism. This renders any adaptation to more realistic domains, or even to completely domain-independent conditions, very, very difficult, if not impossible, since this drawback also prohibits structural learning. So far, Copycat is just able to adapt some predefined internal parameters. In other words, the Copycat mechanism just adapts a predefined structure, though a quite abstract one, to a given empiric situation.

Well, basically there seem to be two different, "opposite" strategies to merge these approaches: either we integrate the SOM into Copycat, or we try to transfer the relevant, yet to be identified parts from Copycat to a SOM-based environment. Yet, at the end of the day we will see that, and how, the two alternatives converge.

In order to accomplish our goal of establishing a fruitful combination of SOM and Copycat, we have to take mainly three steps. First, we briefly recapitulate the basic elements of Copycat and the proper instance of a SOM-based system. We will also describe the extended SOM system in some detail, although there will be a dedicated chapter on it. Finally, we have to transfer and presumably adapt those elements of the Copycat approach that are missing in the SOM paradigm.

Crossing over

The particular power of (natural) evolutionary processes derives from the fact that they are based on symbols. "Adaptation" or "optimization" are not processes that change just the numerical values of parameters of formulas. Quite the opposite: in adaptational processes that span across generations, parts of the DNA-based story are being rewritten, with potential consequences for the whole of the story. This effect of recombination in the symbolic space is particularly present in the so-called "crossing over" during the production of gamete cells in the context of sexual reproduction in eukaryotes. Crossing over is a "technique" to dramatically speed up the exploration of the space of potential changes. (In some way, this space is also greatly enlarged by symbolic recombination.)

What we will try here in our attempt to merge the two concepts of Copycat and SOM is exactly this: a symbolic recombination. The difference from its natural template is that in our case we do not transfer DNA snippets between homologous locations in chromosomes; rather, we transfer whole "genes", which are represented by elements.

Elementarizations I: C.o.p.y.c.a.t.

In part 1 we identified two top-level (non-atomic) elements of Copycat: its evolutionary dynamics and the Slipnet.

Since the first element, covering evolutionary aspects such as randomness, population and a particular memory dynamics, is pretty clear, and a whole range of possible ways to implement it are available, any attempt at improving the Copycat approach has to target the static, strongly idealistic characteristics of the structure that is called "Slipnet" by the FARG. The Slipnet has to be enabled for structural changes and autonomous adaptation of its parameters. This could be accomplished in many ways, e.g. by representing the items in the Slipnet as primitive artificial genes. Yet, we will take a different road here, since the SOM paradigm already provides the means to achieve idealizations.

At that point we have to elementarize Copycat’s Slipnet in a way that renders it compatible with the SOM principles. Hofstadter emphasizes the following properties of the Slipnet and the items contained therein (pp.212).

  • (1) Conceptual depth allows for a dynamic and continuous scaling of “abstractness” and resistance against “slipping” to another concept;
  • (2) Nodes and links between nodes both represent active abstract properties;
  • (3) Nodes acquire, spread and lose activation, which knows a switch-on threshold < 1;
  • (4) The length of links represents conceptual proximity or degree of association between the nodes.

As a whole, and viewed from the network perspective, the Slipnet behaves much like a spring system, or a network built from rubber bands, where the springs or the rubber bands are regulated in their strength. Note that our concept of SomFluid also exhibits the feature of local regulation of the bonds between nodes, a property that is not present in the idealized standard SOM paradigm.

Yet, the most interesting properties in the list above are (1) and (2), while (3) and (4) are known in the classic SOM paradigm as well. The first item is great because it represents an elegant instance of creating the possibility for measurability that goes far beyond the nominal scale. As a consequence, "abstractness" ceases to be a nominal all-or-none property, as it is in hierarchies of abstraction. Such hierarchies now can be recognized as mere projections or selections, both introducing a severe limitation of expressibility. The conceptual depth opens a new space.

The second item is also very interesting since it blurs the distinction between items and their relations to some extent. That distinction is also a consequence of relying too readily on the nominal scale of description. It introduces a certain moment of self-reference, though this is not fully developed in the Slipnet. Nevertheless, a result of this move is that concepts can't be thought without their embedding into a neighborhood of other concepts. Hofstadter clearly introduces a non-positivistic and non-idealistic notion here, as it establishes a non-totalizing meta-concept of wholeness.

Yet, the blurring between "concepts" and "relations" could be, and must be, driven far beyond the level Hofstadter achieved if the Slipnet is to become extensible. Namely, all the parts and processes of the Slipnet need to follow the paradigm of probabilization, since this offers the only way to evade the demons of cybernetic idealism and apriori control. Hofstadter himself relies much on probabilization concerning the other two architectural parts of Copycat. It's beyond me why he didn't apply it to the Slipnet too.

Taken together, we may derive (or: impose) the following important elements for an abstract description of the Slipnet.

  • (1) Smooth scaling of abstractness (“conceptual depth”);
  • (2) Items and links of a network of sub-conceptual abstract properties are instances of the same category of “abstract property”;
  • (3) Activation of abstract properties represents a non-linear flow of energy;
  • (4) The distance between abstract properties represents their conceptual proximity.

A note should be added regarding the last (fourth) point. In Copycat, this proximity is a static number. In Hofstadter's framework it does not express something like similarity, since the abstract properties are not conceived as compounds. That is, the abstract properties are themselves on the nominal level. And indeed, it might appear rather difficult to conceive of concepts such as "right of", "left of", or "group" as compounds. Yet, I think that it is well possible by referring to mathematical group theory, the theory of algebra and the framework of mathematical categories. All of those may be subsumed into the same operationalization: symmetry operations. Of course, there are different ways to conceive of symmetries and to implement the respective operationalizations. We will discuss this issue in a forthcoming essay that is part of the series "The Formal and the Creative".

The next step is now to distill the elements of the SOM paradigm in a way that enables a common differential for the SOM and for Copycat.

Elementarizations II: S.O.M.

The self-organizing map is a structure that associates comparable items—usually records of values that represent observations—according to their similarity. Hence, it makes two strong and important assumptions.

  • (1) The basic assumption of the SOM paradigm is that items can be rendered comparable;
  • (2) The items are conceived as tokens that are created by repeated measurement;

The first assumption means that the structure of the items can be described (i) apriori to their comparison and (ii) independently from the final result of the SOM process. Of course, this assumption is not unique to SOMs; any algorithmic approach to the treatment of data is committed to it. The particular status of the SOM is given by the fact—and in stark contrast to almost any other method for the treatment of data—that this is the only strong assumption. All other parameters can be handled in a dynamic manner. In other words, there is no particular zone of the internal parametrization of a SOM that would be inaccessible apriori. Compare this with ANN or statistical methods, and you feel the difference… Usually, methods are rather opaque with respect to their internal parameters. For instance, the similarity functional is usually not accessible, which renders all these nice-looking, so-called analytic methods into some kind of subjective gambling. In PCA and its relatives, for instance, the similarity is buried in the covariance matrix, which in turn is only defined within the assumption of normality of correlations. Unless a rank correlation is used, this assumption is extended even to the data itself. In both cases it is impossible to introduce a different notion of similarity. Furthermore, and also as a consequence of that, it is impossible to investigate the particular dependency of the results proposed by the method on the structural properties and (opaque) assumptions. In contrast to such unfavorable epistemo-mythical practices, the particular transparency of the SOM paradigm allows for a critical structural learning of the SOM instances. "Critical" here means that the influence of internal parameters of the method onto the results or conclusions can be investigated, changed, and accordingly adapted.

The second assumption is implied by its purpose to be a learning mechanism. It simply needs some observations as results of the same type of measurement. The number of observations (the number of repeats) has to exceed a certain lower threshold, which, dependent on the data and the purpose, is at least 8; typically, however, (much) more than 100 observations of the same kind are needed. Any result will be within the space delimited by the assignates (properties), and thus any result is a possibility (if we take just the SOM itself).

The particular accomplishment of a SOM process is the transition from the extensional to the intensional description, i.e. the SOM may be used as a tool to perform the step from tokens to types.

From this we may derive the following elements of the SOM:1

  • (1) a multitude of items that can be described within a common structure, though not necessarily an identical one;
  • (2) a dense network where the links between nodes are probabilistic relations;
  • (3) a bottom-up mechanism which results in the transition from an extensional to an intensional level of description;

As a consequence of this structure, the SOM process avoids the necessity to compare all items (N) to all other items (N-1). This property, together with the probabilistic neighborhoods, establishes the main difference to other clustering procedures.

It is quite important to understand that the SOM mechanism as such is not a modeling procedure. Several extensions have to be added and properly integrated, such as

  • – operationalization of the target into a target variable;
  • – validation by separate samples;
  • – feature selection, preferably by an instance of a generalized evolutionary process (though not by a genetic algorithm);
  • – detecting strong functional and/or non-linear coupling between variables;
  • – description of the dependency of the results from internal parameters by means of data experiments.

We already described the generalized architecture of modeling as well as the elements of the generalized model in previous chapters.

Yet, as we explained in part 1 of this essay, analogy making is conceptually incompatible with any kind of modeling as long as the target of the model points to some external entity. Thus we have to choose a non-modeling instance of a SOM as the starting point. However, clustering is also an instance of those processes that provide the transition from extensions to intensions, whether this clustering is embedded into full modeling or not. In other words, both the classic SOM as well as the modeling SOM are not suitable as candidates for a merger with Copycat.

SOM-based Abstraction

Fortunately, there is already a proposal, and even a well-known one, that indeed may be taken as such a candidate: the two-layer SOM (TL-SOM), as it has been demonstrated as an essential part of the so-called WebSom [1,2].

Actually, the description as being "two layered" is a very minimalistic, if not inappropriate description of what is going on in the WebSom. We already discussed many aspects of its architecture here and here.

Concerning our interests here, the multi-layered arrangement itself is not a significant feature. Any system doing complicated things needs a functional compartmentalization; we have met a multi-part, multi-compartment and multi-layered structure in the case of Copycat too. Apart from that, the SOM mechanism itself remains perfectly identical across the layers.

The really interesting features of the approach realized in the TL-SOM are

  • – the preparation of the observations into probabilistic contexts;
  • – the utilization of the primary SOM as a measurement device (the actual trick).

The domain of application of the TL-SOM is the comparison and classification of texts. Texts belong to unstructured data, and the comparison of texts is exposed to the same problematics as the making of analogies: there is no apriori structure that could serve as a basis for modeling. Also, like the analogies investigated by the FARG, the text is a locational phenomenon, i.e. it takes place in a space.

Let us briefly recapitulate the dynamics in a TL-SOM. In order to create a TL-SOM, the text is first dissolved into overlapping, probabilistic contexts. Note that the locational arrangement is captured by these random contexts. No explicit apriori rules are necessary to separate patterns. The resulting collection of contexts then gets "somified". Each node then contains similar random contexts that have been derived from various positions in different texts. Now the decisive step will be taken, which consists in turning the perspective by "90 degrees": we can use the SOM as the basis for creating a histogram for each of the texts. The nodes are interpreted as properties of the texts, i.e. each node represents a bin of the histogram. The value of an individual bin measures how frequently the respective text is represented by the random contexts collected in that node. The secondary SOM then creates a clustering across these histograms, which represent the texts in an abstract manner.

This way the primary lattice of the TL-SOM is used to impose a structure on the unstructured entity “text.”

Figure 1: A schematic representation of a two-layered SOM with built-in self-referential abstraction. The input for the secondary SOM (foreground) is derived as a collection of histograms that are defined as a density across the nodes of the primary SOM (background). The input for the primary SOM are random contexts.
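
The mechanics just recapitulated may be condensed into a minimal sketch, assuming a random-projection encoding of word windows and a deliberately tiny one-dimensional SOM; the WEBSOM papers [1,2] of course employ far more elaborate encodings and map sizes.

```python
import numpy as np

def encode_contexts(text, dim=20, win=3):
    """Dissolve a text into overlapping word contexts; each context is the
    mean of fixed random vectors per word (a random-projection stand-in)."""
    words = text.lower().split()
    vec = lambda w: np.random.default_rng(abs(hash(w)) % 2**32).normal(size=dim)
    return np.array([np.mean([vec(w) for w in words[i:i + win]], axis=0)
                     for i in range(len(words) - win + 1)])

def train_primary_som(data, nodes=16, epochs=20, lr=0.5, seed=0):
    """A minimal 1-d SOM over the pooled random contexts of all texts."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(nodes, data.shape[1]))
    for e in range(epochs):
        sigma = nodes / 2 * (1 - e / epochs) + 1
        for x in rng.permutation(data):
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))
            h = np.exp(-((np.arange(nodes) - bmu) ** 2) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)
    return w

def text_histogram(text, w):
    """The 90-degree turn: use the trained primary SOM as a measurement
    device and describe a whole text as the density of its contexts
    across the nodes. These histograms feed the secondary SOM."""
    cs = encode_contexts(text)
    bmus = np.array([np.argmin(((w - c) ** 2).sum(axis=1)) for c in cs], int)
    return np.bincount(bmus, minlength=len(w)) / max(len(bmus), 1)
```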

To put it clearly: the secondary SOM builds an intensional description of entities, which results from the interaction of a SOM with a probabilistic description of the empirical observations. Quite obviously, intensions built this way about intensions are not only quite abstract; the mechanism could even be stacked. It could be described as "high-level perception" with the same justification as Hofstadter's use of the term for Copycat. The TL-SOM turns representational intensions into abstract, structural ones.

The two aspects from above thus interact; they are elements of the TL-SOM. Despite the fact that there are still transitions from extensions to intensions, we also can see that the targeted units of the analysis, the texts, get probabilistically distributed across an area, the lattice of the primary SOM. Since the SOM maps the high-dimensional input data onto its map in a way that preserves their topological properties, it is easy to recognize that the TL-SOM creates conceptual halos as an intermediate.

So let us summarize the possibilities provided by the SOM.

  • (1) SOMs are able to create non-empiric, or better: de-empirified idealizations of intensions that are based on “quasi-empiric” input data;
  • (2) TL-SOMs can be used to create conceptual halos.

In the next section we will focus on this spatial, better: primarily spatial effect.

The Extended SOM

Kohonen and co-workers [1,2] proposed to build histograms that reflect the probability density of a text across the SOM. Those histograms represent the original units (e.g. texts) in a quite static manner, using a kind of summary statistics.

Yet, texts are definitely not a static phenomenon. At first sight there is at least a series, while more appropriately texts are even described as dynamic networks of their own associative power [3]. Returning to the SOM, we see that in addition to the densities scattered across the nodes of the SOM we also can observe a sequence of invoked nodes, according to the sequence of random contexts in the text (or the serial observations).

The not so difficult question then is: how to deal with that sequence? Obviously, it is again best conceived as a random process (though one with a strong structure), and random processes are best described using Markov models, either as hidden (HMM) or as transitional models. Note that the Markov model is not a model about the raw observational data; it describes the sequence of activation events of SOM nodes.

The Markov model can be used as a further means to produce conceptual halos in the sequence domain. The differential properties of a particular sequence as compared to the Markov model then could be used as further properties to describe the observational sequence.
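
A minimal sketch of such a transitional model follows; the first-order assumption, the Laplace smoothing and the surprisal measure are our own illustrative choices.

```python
import numpy as np

def transition_model(node_sequences, n_nodes):
    """First-order Markov model over sequences of activated SOM nodes.
    Note: this models the activation events, not the raw observations."""
    counts = np.ones((n_nodes, n_nodes))            # Laplace smoothing
    for seq in node_sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def mean_surprisal(seq, P):
    """Differential property of one concrete sequence against the model:
    its average negative log-likelihood, usable as a further assignate."""
    return -float(np.mean([np.log(P[a, b]) for a, b in zip(seq[:-1], seq[1:])]))
```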

(The full version of the extended SOM comprises targeted modeling as a further level. Yet, this targeted modeling does not refer to raw data. Instead, its input is provided completely by the primary SOM, which is based on probabilistic contexts, while the target of such modeling is just internal consistency of a context-dependent degree.)

The Transfer

Just to avoid misunderstanding: it does not make sense to try to represent Copycat completely by a SOM-based system. The particular dynamics and phenomenological behavior depend a lot on Copycat's tripartite morphology, as represented by the Coderack (agents), the Workspace and the Slipnet. We are "just" in search of a possibility to remove the deep idealism from the Slipnet in order to enable it for structural learning.

Basically, there are two possible routes. Either we re-interpret the extended SOM in a way that allows us to represent the elements of the Slipnet as properties of the SOM, or we try to replace all the items in the Slipnet by SOM lattices.

So, let us take a look at which structures we have (Copycat) or could have (SOM) on both sides.

Table 1: Comparing elements from Copycat’s Slipnet to the (possible) mechanisms in a SOM-based system.

                                     Copycat                                extended SOM
1. smoothly scaled abstraction       conceptual depth (dynamic parameter)   distance of abstract intensions in an
                                                                            integrated lattice of an n-layered SOM
2. links as concepts                 structure by implementation            reflecting conceptual proximity as an
                                                                            assignate property for a higher level
3. activation featuring non-linear   structure by implementation            x
   switching behavior
4. conceptual proximity              link length (dynamic parameter)        distance in map (dynamic parameter)
5. kind of concepts                  locational, positional                 symmetries, any

From this comparison it is clear that the single most challenging part of this route is the possibility for the emergence of abstract intensions in the SOM based on empirical data. From the perspective of the SOM, relations between observational items such as "left-most", "group" or "right of", and even such as "sameness group" or "predecessor group", are just probabilities of a pattern. Such patterns are identified by functions or dynamic combinations thereof. Combinations of topological primitives remain mappable by analytic functions. Such concepts we could call "primitive concepts", and we can map these to the process of data transformation and the set of assignates as potential properties.2 It is then the job of the SOM to assign a relevancy to the assignates.

Yet, Copycat's Slipnet comprises also rather abstract concepts such as "opposite". Furthermore, the most abstract concepts often act as links between more primitive concepts, or, in Hofstadter's terms, conceptual items of lower "conceptual depth".

My feeling here is that it is a fundamental mistake to implement concepts like "opposite" directly. What is opposite of something else is a deeply semantic concept in itself, thus strongly dependent on the domain. I think that most of the interesting concepts, i.e. the most abstract ones, are domain-specific. Concepts like "opposite" could be considered as something "simple" only in the case of geometric or spatial domains.

Yet, that's not a weakness. We should use this as a design feature. Take the rather simple case shown in the next figure as an example. Here we simply mapped triplets of uniformly distributed random values onto a SOM. The three values can readily be interpreted as the parts of an RGB value, which renders the interpretation more intuitive. The special thing here is that the map has been a really large one: we defined approximately 700'000 nodes and fed approx. 6 million observations into it.

Figure 2: A SOM-based color map showing emergence of abstract features. Note that the topology of the map is a borderless toroid: Left and right borders touch each other (distance=0), and the same applies to the upper and lower borders.

We can observe several interesting things. The SOM didn’t come up with just any arbitrary sorting of the colors. Instead, a very particular one emerged.

First, the map is not perfectly homogeneous anymore. Very large maps tend to develop "anisotropies", symmetry breaks if you like, simply due to the fact that the signal horizon becomes an important issue. This should not be regarded as a deficiency, though. Symmetry breaks are essential for the possibility of the emergence of symbols. Second, we can see that two "color models" emerged: the RGB model around the dark spot in the lower left, and the YMC model around the bright spot in the upper right. Third, the distance between the bright, almost white spot and the dark, almost black one is maximized.

In other words, and not quite surprisingly, the conceptual distance is reflected as a geometrical distance in the SOM. As in the case of the TL-SOM, we now could use the SOM as a measurement device that transforms an unknown structure into an internal property, simply by using the locational property in the SOM as an assignate for a secondary SOM. In this way we not only can represent "opposite", we even have a model procedure for "generalized oppositeness" at our disposal.
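
For the sake of concreteness, here is a small-scale sketch of that experiment, with a toy-sized toroidal map standing in for the 700'000-node lattice described above; the closing lines show the "measurement" step in which the location of the best-matching node becomes an assignate of its own.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_toroidal_som(data, side=30, epochs=8, lr=0.3):
    """Minimal 2-d SOM on a borderless toroid lattice, as in Figure 2."""
    grid = np.stack(np.meshgrid(np.arange(side), np.arange(side)),
                    axis=-1).reshape(-1, 2)
    w = rng.random((side * side, data.shape[1]))
    for e in range(epochs):
        sigma = side / 3 * (1 - e / epochs) + 1
        for x in rng.permutation(data):
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))
            d = np.abs(grid - grid[bmu])
            d = np.minimum(d, side - d)      # toroid: opposite borders touch
            h = np.exp(-(d ** 2).sum(axis=1) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)
    return w, grid

colors = rng.random((3000, 3))               # uniform RGB triplets
w, grid = train_toroidal_som(colors)

# "observing the SOM": the lattice position of a color's best-matching node
# is itself a measured, abstract property, usable as an assignate elsewhere
where_red = grid[np.argmin(((w - np.array([1.0, 0.0, 0.0])) ** 2).sum(axis=1))]
```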

It is crucial to understand this step of "observing the SOM", thereby conceiving of the SOM as a filter, or more precisely as a measurement device. Of course, at this point it becomes clear that a large variety of such transposing and internal-virtual measurement devices may be thought of. Methodologically, this opens an orthogonal dimension to the representation of data, strongly resembling the concept of orthoregulation.

The map shown above even allows to create completely different color models, for instance one around yellow and another one around magenta. Our color psychology is strongly determined by the sun's radiated spectrum and hence reflects a particular Lebenswelt; yet, there is no necessity about it. Some insects like bees are able to perceive ultraviolet radiation, i.e. their colors may have 4 components, yielding a completely different color psychology, while the capability to distinguish colors remains perfectly intact.3

"Oppositeness" is just a "simple" example of an abstract concept and its operationalization using a SOM. We already mentioned the "serial" coherence of texts (and thus of general arguments), which can be operationalized as a sort of virtual movement across a SOM of a particular level of integration.

It is crucial to understand that there is no other model besides the SOM that combines the ability to learn from empirical data and the possibility for emergent abstraction.

There is yet another lesson that we can take home from the example above. Well, the example doesn't remain that simple. High-level abstraction, items of considerable conceptual depth so to speak, requires rather short assignate vectors. In the process of learning qua abstraction it appears to be essential that the mass of possible assignates derived from, or imposed by, the measurement of raw data gets reduced. On the one hand, empiric contexts from very different domains should be abstracted, i.e. quite literally "reduced", into the same perspective. On the other hand, any given empiric context should be abstracted into (much) more than just one abstract perspective. The consequence is that we need a lot of SOMs, all "sufficiently" separated from each other. In other words, we need a dynamic population of Self-organizing Maps in order to represent the capability of abstraction in real life. "Dynamic population" here means that there are developmental mechanisms that result in a proliferation, almost a breeding, of new SOM instances in a seamless manner. Of course, the SOM instances themselves have to be able to grow and to differentiate, as we have described it here and here.

In a population of SOMs, the conceptual depth of a concept may be represented by the effort needed to arrive at a particular abstract "intension". This not only comprises the ordinary SOM lattices, but also processes like Markov models, simulations, idealizations qua SOMs, targeted modeling, transition into symbolic space, synchronous or potential activations of other SOM compartments etc. This effort may finally be represented as a "number".

Conclusions

The structure of a multi-layered system of Self-organizing Maps, as it has been proposed by Kohonen and co-workers, is a powerful model to represent emerging abstraction in response to empiric impressions. The Copycat model demonstrates how abstraction could be brought back to the level of application in order to become able to make analogies and to deal with "first-time exposures".

Here we tried to outline a potential path to bring these models together. We regard this combination in the way we proposed it (or a quite similar one) as crucial for any advance in the field of machine-based episteme at large, but also for the rather confined area of machine learning. Attempts like that of Blank [4] appear to suffer seriously from categorical mis-attributions. Analogical thinking does not take place on the level of single neurons.

We didn't discuss alternative models here (so far; a small extension is planned). The main reasons are, first, that it would be an almost endless job, and second, that Hofstadter already did it, and as a result of his investigation he dismissed all the alternative approaches (from authors like Gentner, Holyoak, Thagard). For an overview of recent models on creativity, analogical thinking, or problem solving, Runco [5] provides a good starting point. Of course, many authors point to roughly the same direction as we did here, but mostly the proposals are circular, not helpful because the problematic is just replaced by another one (e.g. the infamous and completely unusable "divergent thinking"), or can't be implemented for other reasons. Holyoak and Thagard [6], for instance, claim that a "parallel satisfaction of the constraints of similarity, structure and purpose" is key in analogical thinking. Given our analysis, such statements are nothing but a great mess, mixing modeling, theory, vagueness and fluidity.

For instance, in cognitive psychology and in the field of artificial intelligence as well, the hypothesis of Structural Mapping (STM) finds a lot of supporters [7]. Hofstadter discusses similar approaches in his book. The STM hypothesis is highly implausible and obviously a left-over of the symbolic approach to Artificial Intelligence, just transposed into more structural regions. The STM hypothesis has not only to be implemented as a whole, it also has to be implemented for each domain specifically. There is no emergence of that capability.

The combination of the extended SOM—interpreted as a dynamic population of growing SOM instances—with the Copycat mechanism indeed appears as a self-sustaining approach into proliferating abstraction and—quite significantly—back from it into application. It will be able to make analogies in any field already at its first encounter with it, even regarding itself, since both the extended SOM and the Copycat comprise several mechanisms that may count as precursors of high-level reflexivity.

After this proposal, little remains to be said on the technical level. One of those issues which remain to be discussed is the conditions for the possibility of binding internal processes to external references. Here our favorite candidate principle is multi-modality, that is, the joint and inextricable "processing" (in the sense of "getting affected") of words, images and physical signals alike. In other words, I feel that we have come close to the fulfillment of the ariadnic question of this blog: "Where is the Limit?" …even in its multi-faceted aspects.

A lot of implementation work now has to be performed, perhaps accompanied by some philosophical musings about "cognition", or, more appropriately, the "epistemic condition". I just would like to invite you to stay tuned for the software publications to come (hopefully in the near future).

Notes

1. See also the other chapters about the SOM, SOM-based modeling, and generalized modeling.

2. It is somehow interesting that in the brain of many animals we can find very small groups of neurons, if not even single neurons, that respond to primitive features such as verticality of lines, or the direction of the movement of objects in the visual field.

3. Ludwig Wittgenstein insisted all the time that we can't know anything about the "inner" representation of "concepts". It is thus free of any sense and meaning to claim knowledge about the inner state of oneself as well as of that of others. Wilhelm Vossenkuhl introduces and explains the Wittgensteinian "grammatical" solipsism carefully and in a very nice way.[8] The only thing we can know about inner states is that we use certain labels for them, and the only meaning of emotions is that we do report them in certain ways. In other terms, the only thing that is important is the ability to distinguish one's feelings. This, however, is easy to accomplish for SOM-based systems, as we have been demonstrating here and elsewhere in this collection of essays.

4. Don’t miss Timo Honkela’s webpage where one can find a lot of gems related to SOMs! The only puzzling issue about all the work done in Helsinki is that the people there constantly and pervasively misunderstand the SOM per se as a modeling tool. Despite their ingenuity they completely neglect the issues of data transformation, feature selection, validation and data experimentation, which all have to be integrated to achieve a model (see our discussion here), for a recent example see here, or the cited papers about the Websom project.

  • [1] Timo Honkela, Samuel Kaski, Krista Lagus, Teuvo Kohonen (1997). WEBSOM – Self-Organizing Maps of Document Collections. Neurocomputing, 21: 101-117.
  • [2] Krista Lagus, Samuel Kaski, Teuvo Kohonen (2004). Mining massive document collections by the WEBSOM method. Information Sciences, 163(1-3): 135-156. DOI: 10.1016/j.ins.2003.03.017
  • [3] Klaus Wassermann (2010). Nodes, Streams and Symbionts: Working with the Associativity of Virtual Textures. The 6th European Meeting of the Society for Literature, Science, and the Arts, Riga, 15-19 June, 2010. available online.
  • [4] Douglas S. Blank, Implicit Analogy-Making: A Connectionist Exploration. Indiana University Computer Science Department. available online.
  • [5] Mark A. Runco, Creativity: Research, Development, and Practice. Elsevier 2007.
  • [6] Keith J. Holyoak, Paul Thagard, Mental Leaps: Analogy in Creative Thought. MIT Press, Cambridge 1995.
  • [7] John F. Sowa, Arun K. Majumdar (2003). Analogical Reasoning. in: A. Aldo, W. Lex, & B. Ganter (eds.), "Conceptual Structures for Knowledge Creation and Communication", Proc. Intl. Conf. Conceptual Structures, Dresden, Germany, July 2003. LNAI 2746, Springer, New York 2003. pp. 16-36. available online.
  • [8] Wilhelm Vossenkuhl. Solipsismus und Sprachkritik. Beiträge zu Wittgenstein. Parerga, Berlin 2009.

