A Deleuzean Move

June 24, 2012

It is probably one of the main surprises in the course of growing up as a human that in the experience of consciousness we may meet things like unresolvable contradictions, incommensurable thoughts, thoughts that lead into contradictions or paradoxes, or thoughts that point to something outside the possibility of empirical, so to speak “direct”, experience. All these experiences form a particular class of experience. For one reason or another, these issues are issues of the mental itself. We definitely have to investigate them if we are going to talk about things like machine-based episteme, or the urban condition, which will be the topic of the next few essays.

There have been only a very few philosophers1 who have embraced paradoxicality without getting caught by antinomies and paradoxes in one way or another.2 Just to be clear: getting caught by paradoxes is quite easy. For instance, by violating the validity of the language game you have chosen. Or by neglecting virtuality. The first of these avenues into persistent states of worry can be observed in the sciences and mathematics3, while the second is more abundant in philosophy. Fortunately, playing with paradoxicality without getting trapped by paradoxes is not too difficult either. There is even an incentive to do so.

Without paradoxicality it is not possible to think about beginnings, as opposed to origins. Origins—understood as points of {conceptual, historical, factual} departure—are set for theological, religious or mystical reasons, which by definition are always considered as bearers of sufficient reason. To phrase it more accurately, the particular difficulty consists in talking about beginnings as part of an open evolution without universal absoluteness, hence also without the need for justification at any time.

Yet, paradoxicality, the differential of actual paradoxes, could form stable paradoxes only if possibility is mixed up with potentiality, as is for instance the case for perspectives that could be characterised as reductionist or positivist. Paradoxes exist strictly only within that conflation of possibility and potentiality. Hence, if a paradox or antinomy seems to be stable, one can always find an implied primacy of negativity in lieu of the problematic field spawned and spanned by the differential. We thus can observe the pouring forth of paradoxes also where the differential is rejected or neglected, as in Derrida’s approach, or the related functionalist-formalist ethics of the Frankfurt School, namely that proposed by Habermas [4]. Paradoxes are like knots that can always be untangled in higher dimensions. Yet, this does NOT mean that everything could be smoothly tiled without frictions, gaps or contradictions.

Embracing the paradoxical thus means to deny the linear, to reject the origin and the absolute, the centre points [6] and the universal. We may perceive remote greetings from Nietzsche here4. Perhaps you have already classified the contextual roots of these hints: it is Gilles Deleuze to whom we refer here, and who may well be regarded as the first philosopher of open evolution, the first one who rejected idealism without sacrificing the Idea.5

In the hands of Deleuze—or should we say minds?—paradoxicality actualizes neither into paradoxes nor into idealistic dichotomic dialectics. A structural(ist) and genetic dynamism first synthesizes the Idea, and by virtue of the Idea, as well as the space and time immanent to the Idea, paradoxicality turns productive.7

Philosophy is revealed not by good sense but by paradox. Paradox is the pathos or the passion of philosophy. There are several kinds of paradox, all of which are opposed to the complementary forms of orthodoxy – namely, good sense and common sense. […] paradox displays the element which cannot be totalised within a common element, along with the difference which cannot be equalised or cancelled at the direction of a good sense. (DR227)

As our title already indicates, we not only presuppose and start from some main positions and concepts of Deleuzean philosophy, particularly those he developed in Difference and Repetition (D&R)8. There will be more details later9. We10 also attempt to contribute some “genuine” aspects to it. In some way, our attempt could be conceived as a development alternative to part V of D&R, entitled “Asymmetrical Synthesis of the Sensible”.

This Essay

Throughout the collection of essays about the “Putnam Program” on this site we have expressed our conviction that future information technology demands an assimilation of philosophy by the domain of computer science (e.g. see the superb book by David Blair, “Wittgenstein, Language and Information” [47]). There are a number of areas—of technical as well as societal or philosophical relevance—which give rise to questions that have already started to become graspable, and not just in computer science. How to organize the revision of beliefs?11 What is the structure of the “symbol grounding problem”? How to address it? Or how to avoid the fallacy of symbolism?12 Obviously we can’t tackle such questions without literacy regarding concepts like belief or symbol, which of course can’t be reduced to merely technical notions. Beliefs, for instance, can’t be reduced to uncertainty or its treatment, despite the fact that there is already some tradition in analytical philosophy, computer science and statistics of doing so. Furthermore, with the advent of emergent mental capabilities in machines, ethical challenges appear. These challenges concern both sides of the coin. They relate to the engineers who are creating such instances, to the lawyers who—on the other side of the spectrum—have to deal with the effects and the properties of such entities, and even to the “users”, who have to build some “theory of mind” about them, some kind of folk psychology.

And last but not least, the sheer externalization of informational habits into machinal contexts often triggers pseudo-problems and “deep” confusion.13 Examples of such confusion are the question about the borders of humanity, i.e. a kind of defensive war fought by anthropology, or the issue of artificiality. Where does the machine end and where does the domain of the human start? How can we speak reasonably about “artificiality” if our brain/mind still remains dramatically non-understood and thus is implicitly conceived by many as a kind of bewildering nature? And finally, how to deal with technological progress: When will computer scientists need self-imposed guidelines similar to those geneticists ratified for their community at the Asilomar Conference in 1975? Or are such guidelines illusory or misplaced, because we are weaving ourselves so intensively into our new informational carpets—made from multi- or even meta-purpose devices—that they are veritable flying carpets?

There is also a clearly recognizable methodological reason for bringing the inventioneering of advanced informational “machines” and philosophy closer together. The domain of machines with advanced mental capabilities—I deliberately avoid the traditional term “artificial intelligence”—, let us abbreviate it MMC, acquires ethical weight in itself. MMC establishes a subjective Lebenswelt (lifeworld) that is strikingly different from ours and which we can’t understand analytically any more (if at all)14. The challenge then is how to talk about this domain. We should not repeat the fallacy that anthropology and anthropological philosophy have been committing since Kant, where human measures have been applied (and still are today) to “nature”. If we are going to compare two different entities we need a differential position from which both can be instantiated. Note that no resemblance can be expected between the instances, nor between the instances and the differential. That differential is a concept, or an idea, and as such it can’t be addressed by any kind of technical perspective. Hence, questions about the mode of speaking can’t be conceived as a technical problem, especially not for the domain of MMC, also due to the implied self-referentiality of the mental itself.

Taken together, we may say that our motivation follows two lines. Firstly, the concern is about the problematic field, the problem space itself, about the possibility that problems could become visible at all. Secondly, there is a methodological position, characterisable as a differential, that is necessary in order to talk about the subject of incommensurable entities that are equipped with mental capacities.15

Both directions and all related problems can be addressed in one and the same move, or so at least is our proposal. The goal of this essay is the introduction and brief discussion of a still emerging conceptual structure that may be used as an image of thought, or likewise as a tool in the sense of an almost formal mental procedure, helping to avoid worries about—or to support—the diagnosis of the challenges opened up by the new technologies. Of course, it will turn out that the result is not applicable only to the domain of philosophy of technology.

In the following we will introduce a unique structure that has been inspired by heterogeneous philosophical sources, though not only by those. They stretch from Aristotle to Peirce, from Spinoza to Wittgenstein, and from Nietzsche to Deleuze, to name but a few, just to give you an impression of what mindset you can expect. Another important source is mathematics, yet not used as a ready-made system for formal reasoning, but rather as a source for a certain way of thinking. Last, but not least, biology contributes as the home of the organon, of complexity, of evolution, and, more formally, of self-referentiality. The structure we will propose appears at its starting point merely technical, thus arbitrary, and at the same time it draws upon the primary amalgam of the virtual and the immanent. Its paradoxicality consists in its potential to describe the “pure” any, the Idea that comprises any beginning. Its particular quality, as opposed to any other paradoxicality, is caused by a profound self-referentiality that simultaneously leads to its vanishing, its genesis and its own actualization. In this way, the proposed structure solves a challenge that has been considered by many throughout the history of philosophy to be one of the most serious ones: the challenge of sufficient reason, justification and conditionability. To be more precise, that challenge is not solved; it is more correct to say that it is dissolved, made to disappear. In the end, the problem of sufficient reason will be marked as a pseudo-problem.

Here, a small remark to the reader is necessary. After some weeks of writing this down, it turned out that any (more or less) intelligible way of describing these issues exceeds the classical size of a blog entry. By now it comprises approx. 150,000 characters (incl. white space), which would amount to 42+ pages on paper. So, it is more like a monograph. Still, I feel that many important aspects have been left out. Nevertheless I hope that you enjoy reading it.


2. Brief Methodological Remark

As we already noted, the proposed structure is self-referential. Self-referentiality also means that all concepts and structures needed for an initial description will be justified by the working of the structure itself, in other words, by its immanence. Actually, similarly to the concept of the Idea in D&R, virtuality and immanence come very close to each other; they are set to be co-generative. As an Idea, the proposed structure is complete. Like any other idea, it needs to be instantiated into performative contexts; thus it is to be conceived as an entirety, yet neither as a completeness nor as a totality. Its self-referentiality, however, allows for and actually also generates a “self-containment” that results in a fractal mirroring of itself, in a self-affine mapping. Metaphorically, it is a concept that develops like the leaf of a fern. Superficially, it may look like a complete and determinate entirety, but it is not, similar to area-covering curves in mathematics. Those fill a 2-dimensional area infinitesimally, yet, with regard to their production system, they remain truly 1-dimensional. They are fractals, entities to which we can’t apply ordinal dimensionality. Thus, our concept also develops into instances of fractal entirety.
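
The comparison to area-covering curves can be made concrete with a small sketch (a purely illustrative aside, not part of the original argument): the Hilbert curve is generated by a strictly 1-dimensional rule, a walk indexed by a single parameter, yet it visits every cell of a 2-dimensional grid.

```python
def hilbert_point(order, d):
    """Map the 1-D index d (0 .. 4**order - 1) along a Hilbert curve
    to an (x, y) cell on a 2**order x 2**order grid.
    Standard bit-manipulation construction; the "production system"
    is strictly one-dimensional."""
    x = y = 0
    t = d
    s = 1
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate/flip the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# The 1-D walk of length 4**3 = 64 covers every cell of the 8x8 grid.
cells = {hilbert_point(3, d) for d in range(64)}
print(len(cells))  # → 64
```

At every finite order the curve remains a 1-dimensional path; only in the limit does it “fill” the square, which is why the usual ordinal notion of dimension fails to capture it.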

For these reasons, it would also be wrong to think that the structure we will describe in a moment is “analytical”, despite the fact that it is possible to describe its “frozen” form by means of references to mathematical concepts. Our structure must be understood as an entity that is not only not neutral or invariant with respect to time; it forms its own sheaves of time (as I. Prigogine described it). Analytics is always blind to its generative milieu. Analytics can’t tell us anything about the world, contrary to a widely exercised opinion. It is not really a surprise that Putnam recommended reducing the concept of the “analytic” to “an inexplicable noise”. Very basically, it is a linear endeavor that necessarily excludes self-referentiality. Its starting point is always based on an explicit reference to a kind of apparentness, or even revelation. Analytics not only presupposes a particular logic, but also conflates transcendental logic and practiced quasi-logic. Additionally, the pragmatics of analysis claims to be free from constructive elements. None of these characteristics applies to our proposal, which is as little “analytical” as the philosophy of Deleuze, even where the latter starts to grow from the notion of the mathematical differential.

3. The Formal Structure

For the initial description of the structure we first need a space of expressibility. This space will then be equipped with some properties. Right at the beginning I would like to emphasize that the proposed structure does not by itself “explain” anything, just like a (philosophical) grammar. Rather, through its usage, that is, its unfolding in time, it shows itself and provides a stable as well as a generative ground.

The space of the structure is not a Cartesian space, where some concepts are mapped onto the orthogonal dimensions, or where concepts are thought to be represented by such dimensions. In a Cartesian space, the dimensions are independent of each other.16 Objects are represented by the linear and additive combination of values along those dimensions, and thus their entirety gets broken up. We lose the object as a coherent object, and there would be no way to regain it later, regardless of the means and the tools we applied. Hence the Cartesian space is not useful for our purposes. Unfortunately, all of current mathematics is based on this Cartesian, analytic conception. Currently, mathematics is a science of control, or more precisely, a science about the arrangement of signs insofar as it concerns linear, trivial machines that can be described analytically. There is not yet a mathematics of the organon. Probably category theory is a first step in its direction.

Instead, we conceive our space as an aspectional space, as we introduced it in a previous chapter. In an aspectional space, concepts are represented by “aspections” instead of “dimensions”. In contrast to the values in a dimensional space, values in an aspectional space cannot be changed independently of each other. More precisely, we can keep at most one aspection constant, while the values along all the others change simultaneously. (So-called ternary diagrams provide a distantly related example of this in a 2-dimensional space.) In other words, within the N-manifolds of the aspectional space all values are always dependent on each other.
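
The mutual dependency of aspectional values can be given a loose numerical flavour. The following sketch treats the aspections like normalized barycentric (ternary-style) coordinates; that normalization is an illustrative assumption of mine, not a definition taken from the text:

```python
def set_aspect(values, index, new_value):
    """Set one aspectional value; all other values must rescale so the
    coordinates stay on the simplex (sum == 1). It is impossible to
    change a single value while holding every other value fixed."""
    rest_sum = sum(v for i, v in enumerate(values) if i != index)
    scale = (1.0 - new_value) / rest_sum
    out = [v * scale for v in values]
    out[index] = new_value
    return out

start = [0.25, 0.25, 0.25, 0.25]   # four aspections, balanced
moved = set_aspect(start, 0, 0.4)  # raise the first aspection
print([round(v, 3) for v in moved])  # → [0.4, 0.2, 0.2, 0.2]
```

Ternary diagrams in chemistry work the same way in three variables; the point here is only the structural constraint, not a claim that aspections are actually numeric.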

This aspectional space is equipped with a hyperbolic topological structure. The space of our structure is not flat. You may take M.C. Escher’s plates as a visualization of such a space. Yet, our space is different from such a fixed space; it is a relativistic space built from overlapping hyperbolic spaces. At each point in the space you will find a point of reference (“origin”) for a single hyperbolic reference system. Our hyperbolic space is locally centred. A mathematical field about comparable structures would be differential topology.

So far, the space is still quite easy and intuitive to understand. At least a visualization is still possible for it. This probably changes with the next property. Points in this aspectional space are not “points”; or, expressed in a better, less obscure way, our space does not contain points at all. In a Cartesian space, points are defined by one or more scales and their properties. For instance, in an x-y coordinate system we could have real numbers on both dimensions, i.e. scales, or we could have integers on the first and reals on the second. The interaction of the number systems used to create a scale along a dimension determines the expressibility of the space. This way, a point is given as a fixed instance from a set of points as soon as the scale is given. Points themselves are thus said to be 0-dimensional.

Our “points”, i.e. the content of our space, are quite different from that. The space is not “made up” from inert and passive points but from the second differential, i.e. ultimately a procedure that invokes an instantiation. Our aspectional space thus is made from infinitesimal procedural sites, or “situs”, as Leibniz probably would have said. If we represented physical space by a Cartesian dimensional system, then the second derivative would represent an acceleration. Take this as a metaphor for the behavior of our space. Yet, our space is not a passive space. The second-order differential makes it an active space, and a space that demands activity. Without activity it is “not there”.
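
The acceleration metaphor can be pinned down with one line of numerics (again a merely illustrative aside): a second-order differential is exactly what a central finite difference estimates.

```python
def second_difference(f, x, h=1e-4):
    """Central finite-difference estimate of the second derivative f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

# Position of a falling body, s(t) = 1/2 * g * t^2 with g = 9.81:
position = lambda t: 0.5 * 9.81 * t * t
print(second_difference(position, 1.0))  # ≈ 9.81, the acceleration
```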

We could also describe it as the mapping of the intensity of the dynamics of transformation. If you tried to point to a particular location, or situs, in that space, which is of course excluded by its formal definition, you would instantaneously be “transported”, or transformed, such that you would find yourself elsewhere. Yet, this “elsewhere” cannot be determined in Cartesian ways. First, because that other point does not exist; second, because it depends on the interaction between the subject’s contribution to the instantiation of the situs and the local properties of the space. Finally, we can say that our aspectional space is thus not representational, as the Cartesian space is.

So, let us sum up the elemental17 properties of our space of expressibility:

  • 1. The space is aspectional.
  • 2. The topology of the space is locally hyperbolic.
  • 3. The substance of the space is a second-order differential.

4. Mapping the Semantics

We are now going to map four concepts onto this space. These concepts are themselves Ideas in the Deleuzean sense, but they are also transcendental. They are indeterminate and real, just like virtual entities. As such, we take the chosen concepts as inexplicable, yet also as instantiable.

These four concepts have been chosen initially in a hypothetical gesture, such that they satisfy two basic requirements. First, it should not be possible to reduce them to one another. Second, together they should allow us to build a space of expressibility that contains as many philosophical issues of mentality as possible. For instance, it should contain any aspect of epistemology or of languagability, but it does not aim to contribute to the theory of morality, i.e. ethics, despite the fact that there is, of course, significant overlap. For instance, one of the possible goals could be to provide a space that allows one to express the relation between semiotics and any logic, or between concepts and models.

So, here are the four transcendental concepts that form the aspections of our space as described above:

  • – virtuality
  • – mediality
  • – model
  • – concept

Inscribing four concepts into a flat, i.e. Euclidean, aspectional space would result in a tetrahedral space. In such a space, there would be “corners”, or points of inflection, which would represent the determinateness of the concepts mapped to the aspections. As we have emphasized above, however, our space is not flat. No static visualization is possible for it, since our space can’t be mapped to the flat Euclidean space of a drawing, or to the space of our physical experience.

So, let us proceed to the next level by resorting to the hyperbolic disc. If we take any two points inside the disc, their distance is determinate. Yet, if we take any two points on the border of the disc, the distance between those points is infinite from the inside perspective, i.e. for any perspective associated with a point within the disc. The distance from any point inside the disc to the border is likewise infinite. This provides a good impression of how transcendental concepts, which by definition can’t be accessed “as such”, or as a thing, can be operationalized by the hyperbolic structure of a space. Our space is more complicated, though, as it is not structured by a fixed hyperbolic topology that is, so to speak, global for the entire disc. The consequence is that our space does not have a border, while at the same time it remains an aspectional space. Turning the perspective around, we could say that the aspections are implied into this space.
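
The claim about distances in the hyperbolic disc can be checked directly with the metric of the Poincaré disc model (a standard formula, used here only to illustrate the “infinitely far border”):

```python
import math

def poincare_distance(p, q):
    """Hyperbolic distance between two points p, q inside the unit disc
    (Poincare disc model)."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    num = 2.0 * (dx * dx + dy * dy)
    den = (1.0 - (p[0] ** 2 + p[1] ** 2)) * (1.0 - (q[0] ** 2 + q[1] ** 2))
    return math.acosh(1.0 + num / den)

# Points ever closer to the border lie ever farther away hyperbolically:
for r in (0.5, 0.9, 0.99, 0.999):
    print(r, round(poincare_distance((0.0, 0.0), (r, 0.0)), 3))
```

From the origin the distance to (r, 0) equals ln((1+r)/(1−r)), which diverges as r approaches 1: the border is at infinite distance for every inside perspective, exactly as described above.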

Let us now briefly visit these four concepts.

4.1. Virtuality

Virtuality describes the property of “being virtual”. Saying that something is virtual does not mean that this something does not exist, even though the property of “existing” can’t be applied to it either. It is fully real, but not actual. Virtuality is the condition of potentiality, and as such it is a transcendental concept. Deleuze repeatedly emphasises that virtuality does not refer to a possibility. In the context of information technologies it is often said that this or that is “virtual”, e.g. virtualized servers, or virtual worlds. This usage is not the same as in philosophy, since, quite obviously, we use the virtual server as a server, and the world dubbed “virtual” does indeed exist in an actualized form. Yet, in both examples there is also some resonance with the philosophical concept of virtuality. But this virtuality is not exclusive to simulated worlds, informationally defined server instances or the WWW. Virtualization is, as we will see in a moment, implied by any kind of instance of mediality.

As just said, virtuality, and thus also potentiality, must be strictly distinguished from possibility. Possible things, even if not yet present or existent, can be thought of in a quasi-material way, as if they existed in their material form. We can even say that possible things and the possibilities of things are completely determined at any given moment. The same cannot be said of potentiality. Yet, without the concept of potentiality we could not speak about open evolutionary processes. Neglecting virtuality is thus necessarily equivalent to the a priori claim of determinateness, which is methodologically and ethically highly problematic.

The philosophical concept of virtuality has been known since Aristotle. Recently, Bühlmann18 brought it into the vicinity of semiotics and the question of reference19 in her work about mediality. There would be much, much more to say about virtuality here, but the space is missing…

4.2. Mediality

Mediality, that is, the medial aspect of things, facts and processes, belongs to the most undervalued concepts nowadays, even as we get some exercise by means of so-called “social media”. That term perfectly puts this blind spot on stage through its emphasis: neither is there any mediality without sociality, nor is there any sociality without mediality. Mediality is the concept that has been “discovered” last among our small group. There is a growing body of publications, but many are—astonishingly—deeply infected by romanticism or positivism20, with only a few exceptions.21 Mediality comprises issues like context, density, or transformation qua transfer. Mediality is a concept that helps to focus on the appropriate level of integration in populations or flows when talking about semantics or meaning and their dynamics. Any thing, whether material or immaterial, that occurs in a sufficient density in its manifoldness may develop a mediality within a sociality. Mediality as a “layer of transport” is co-generative with sociality. Media are never neutral with respect to what they transport, albeit one can often find counteracting forces here.

Signs and symbols could not exist as such without mediality. (Yet, this proposal is based on the primacy of interpretation, which is rejected by the modernist set of beliefs. The costs of this rejection are, however, tremendous, as we are going to argue here.) The same is true for words and language as a whole. In real contexts, we usually find several, if not many, medial layers. Of course, signs and symbols are not exhaustively described by mediality. They need reference, which is a compound that comprises modeling.

4.3. Model

Models and modeling need not be explicated much any more here, as they form one of the main issues throughout our essays. We just would like to recall the obvious fact that a “pure” model is not possible. We need symbols and rules, e.g. about their creation or usage, and necessarily neither of these is itself subject of the model. Most significantly, models need a purpose, a concept to which they refer. In fact, any model presupposes an environment, an embedding that is given by concepts and a particular social setting. Additionally, models would not be models without virtuality. On the one hand, virtuality is implied by the fact that models are incarnations of specific modes of interpretation; on the other hand, they imply virtuality themselves, since they are, well, just models.

We have frequently mentioned that it is only through models that we can build up references to the external world. Of course, models are not sufficient to describe that referencing. There is also the contingency of the manifold of populations and the implied relations as quasi-material arrangements that contribute to the reference of the individual to the common. Yet, only modeling allows for anticipation and purposeful activity. It is only through models that behavior is possible, insofar as any behavior is already differentiated behavior. Models are thus the major site where information is created. It is not just by chance that the 20th century experienced the abundance of models and of information as concepts.

In mathematical terms, models can be conceived as second-order categories. More profanely, but equivalently, we can say that models are arrangements of rules for transformation. This implies the whole issue of rule-following as it has been investigated and formulated by Wittgenstein. Note that rule-following itself is a site of paradoxicality. As there is no private language, there is also no private model. Philosophically, and a bit more abstractly, we could describe models as the compound of providing the possibility for reference (they are one of the conditions for such) and the institutionalized site for creating (f)actual differences.

4.4. Concepts

“Concept” is probably one of the most abused, or at least misunderstood, concepts, at least in modern times. So-called Analytical Philosophy claims over and over again that concepts could be explicated unambiguously, that concepts could be clarified or defined. This way, the concept and its definition are equated. Yet, a definition is just a definition, not a concept. The language game of definition makes sense only within a tree of analytical proofs that starts from axioms. Definitions need not be interpreted. They are fully given by themselves. Thus, the idea of clarifying a concept is nothing but an illusion. Deleuze writes (DR228):

It is not surprising that, strictly speaking, difference should be ‘inexplicable’. Difference is explicated, but in systems in which it tends to be cancelled; this means only that difference is essentially implicated, that its being is implication. For difference, to be explicated is to be cancelled or to dispel the inequality which constitutes it. The formula according to which ‘to explicate is to identify’ is a tautology.

Deleuze points to the particular “mechanism” of eradication by explication, which is equal to its transformation into the sayable. There is a difference between 5 and 7, but the arithmetic difference does not cover all aspects of difference. By explicating the difference using some rules, all differences except the arithmetic one vanish. Thus, this inexplicability is not limited to the concept of difference. In some important way, these other aspects are much more interesting and important than the arithmetic operation itself or its result. Actually, we can understand differencing only insofar as we are aware of these other aspects.

Elsewhere, we already cited Augustine and his remark about time:22 “What, then, is time? If no one ask of me, I know; if I wish to explain to him who asks, I know not.” Here, we can observe at least two things. Firstly, this observation may well be interpreted as the earliest rejection of “knowledge as justified true belief”, a perspective which became popular in modernism. Meanwhile it has been proved inadequate by the so-called Gettier problem. The consequences for the theory of databases, or machine-based processing of data, can’t be overestimated. It clearly shows that knowledge can’t be reduced to confirmed hypotheses qua validated models, and that belief can’t be reduced to a kind of pre-knowledge. Belief must be something quite different.

The second thing to observe in these two examples concerns the status of interpretation. While Augustine seems to be somewhat desperate, at least for a moment23, analytical philosophy tries to abolish the annoyance of indeterminateness by killing the freedom inherent in interpretation, which always and inevitably happens if the primacy of interpretation is denied.

Of course, the observed indeterminateness is equally not limited to time. Whenever you try to explicate a concept, whether you describe it or define it, you encounter the insurmountable difficulty of picking one of many interpretations. Again: there is no private language; meaning, references and signs exist only within social situations of interpretation. In other words, we again find the necessity of invoking the other conceptual aspects from which we build our space. Without models and mediality there is no concept. And even more profoundly than models, concepts imply virtuality.

In the opposite direction we can now understand that these four concepts are not only not reducible to each other; they are dependent on each other and—somewhat paradoxically—they are even competitively counteracting. From this we can expect an abstract dynamics somewhat reminiscent of the patterns evolving in reaction-diffusion systems. These four concepts imply the possibility of a basic creativity in the realm of the Idea, in the indeterminate zone of actualisation that will result in a “concrete” thought, or at least the experience of thinking.
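
For readers unfamiliar with reaction-diffusion systems: two “species” that feed on and inhibit each other while diffusing at different rates can generate stable, pattern-forming dynamics. A minimal 1-D Gray-Scott step is sketched below; all parameter values are illustrative defaults of mine, not anything taken from the text:

```python
def gray_scott_step(u, v, du=0.16, dv=0.08, f=0.035, k=0.06):
    """One explicit Euler step of a 1-D Gray-Scott reaction-diffusion
    system with periodic boundaries. u feeds v; v consumes u; both diffuse."""
    n = len(u)
    lap = lambda a, i: a[(i - 1) % n] - 2.0 * a[i] + a[(i + 1) % n]
    u_next = [u[i] + du * lap(u, i) - u[i] * v[i] ** 2 + f * (1.0 - u[i])
              for i in range(n)]
    v_next = [v[i] + dv * lap(v, i) + u[i] * v[i] ** 2 - (f + k) * v[i]
              for i in range(n)]
    return u_next, v_next

# Seed a local perturbation and watch it spread to the neighbours:
u, v = [1.0] * 10, [0.0] * 10
v[5] = 0.5
u, v = gray_scott_step(u, v)
print(v[4] > 0.0, v[6] > 0.0)  # → True True
```

The analogy intended above is only structural: mutually dependent, counteracting terms can produce ordered dynamics rather than cancelling each other out.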

Before we proceed we would like to introduce a notation that should be helpful in avoiding misunderstandings. Whenever we refer to the transcendental aspects between which the aspections of our space stretch out, we use capital letters and mark them additionally by a bar, such as “_Concept” or “_Model”. The whole set of aspects we denote by “_A”, while its unspecified items are indicated by “_a”.

5. Anti-Ontology: The T-Bar-Theory

The four conceptual aspects _A play different roles. They differ in the way they get activated. This becomes visible as soon as we use our space as a tool for comparing various kinds of mental concepts or activities, such as believing, referring, explicating or understanding. These we will inspect in detail later.

Above we described the impossibility of explicating a concept without departing from its “conceptness”. Well, such a description is actually not appropriate according to our aspectional space. The four basic aspections are built from transcendental concepts. There is a subjective, imaginary yet pre-specific scale along these aspections. Hence, in our space “conceptness” is not a quality but an intensity, or almost a degree, a quantity. The key point then is that a mental concept or activity always relates to all four transcendental aspections in such a way that the relative location of the mental activity can’t be changed along a single aspect alone.

We can also recognize another significant step provided by our space of expressibility. Traditionally, concepts are used as existential signifiers, in philosophy often called qualia. Such existential signifiers are capable only of indicating presence or absence, and are thus confined to a naive ontology of Hamletian style (to be or not to be). It is almost impossible to build a theory or a model from existential signifiers. From the point of view of modeling or measurement theory, concepts are on a binary scale. Although concepts collect a multitude of such binary usages, appropriate modeling remains impossible due to the binary scale, unless we were to probabilize all potential dual pairs.

As in the case of logic, we have to distinguish the transcendental aspects _a, that is, _Model, _Mediality, _Concept and _Virtuality, from the respective entities that we find in applications. Those practiced instances of _a are just that: instances, produced by orthoregulated habits. Yet the instances of _a that could be gained through the former’s actualization do not form singularities, or even qualia. Any _a can be instantiated into an infinite diversity of concrete, i.e. definable and sayable, abstract entities. That’s the reason for the kinship between probabilistic entities and transcendental perspectives. We could operationalize the latter by the former, even if we have to distinguish sharply between possibility and potentiality. Additionally, we have to keep in mind that the concrete instances do not live independently of their transcendental ancestry24.

Deleuze provides a nice example of this dynamics at the beginning of part V of D&R. For him, “divergence” is an instance of the transcendental entity “Difference”.

Difference is not diversity. Diversity is given, but difference is that by which the given is given, that by which the given is given as diverse. Difference is not phenomenon but the noumenon closest to the phenomenon.

What he calls “phenomenon” we dubbed “instance”, which is probably more appropriate in order to avoid the reference to phenomenology and its related difficulties. Calling it “phenomenon” pretends, as is typical for any kind of phenomenology or ontology, a deeply unjustified independence from mentality and its underlying physicality.

The significance of this step from existential signifiers to the situs in a space of expressibility, made possible by our aspectional space, can’t be overestimated. Take for instance the infamous question that attracted so many misplaced answers: “How do words or concepts acquire reference?” This question appears especially troubling because signs refer only to signs.25 In existential terms, and all the terms in that question are existential ones, this question can’t be answered, indeed not even addressed. As a consequence, deep mystical chasms unnecessarily keep separating the world from the concepts. Any resulting puzzle is based on a misconception. Think of Plato’s chorismos (Greek for “separation”) of explanation and description, which recently has been taken up, refreshed and declared a “chasm” by Epperson [31] (a theist realist, according to his own positioning; we will meet him again later). The various misunderstandings are well known, ranging from nominalism to externalist realism to scientific constructivism.

They all vanish in a space that overcomes the existentiality embedded in those terms. Mathematically speaking, we have to represent words, concepts and references as probabilized entities, as quasi-species, as Manfred Eigen called them in a different context, in order to avoid naive mysticism regarding our relations to the world.

It seems that our space provides the possibility of measuring and comparing different ways of instantiating _A, a kind of stable scale. We may use it to access concepts differentially; that is, we are now able to transform concepts in a space of quantitability (a term coined by Vera Bühlmann). The aspectional space as we have constructed it is thus necessary even in order to talk just about modeling. It would provide the possibility for theories about any transition between any mental entities one could think of. For instance, if we conceive of “reference” as the virtue of purposeful activity and anticipation, we could explore and describe the conditions for the explication of the path between the _Model on the one side and the _Concept on the other. On this path, which is open on both sides, we could, for instance, first meet different kinds of symbols near the _Model, starting with the idealization and naming of models, followed by the mathematical attitude concerning the invention and treatment of signs, _Logic and all of its instances, semiosis and signs, words, and finally concepts, not forgetting above all that this path necessarily implies a particular dynamics regarding _Mediality and _Virtuality.

Such an embedding of transformations into co-referential transcendental entities is all we can expect to “know” reliably. That was the whole point of Kant. Well, here we can be more radical than Kant dared to be. The choreostemic space is a rejection of the idea of “pure thought”, or pure reason, since such knowledge needs to undergo a double instantiation, and this brings subjectivity back. It is just a phantasm to believe that propositions could be secured up to “truth”. This is true even for the least possible common denominator, existence.

I think that we cannot know whether something exists or not (here, I pretend to understand the term exist), that it is meaningless to ask this. In this case, our analysis of the legitimacy of uses has to rest on something else. (David Blair [49])

Note that Blair is very careful in his wording here. He is not claiming any universality regarding justification or legitimization. His proposal is simply that any reference to “Being” or “Existence” is useless a priori. Taking ontology seriously, as an aspect of or even as an external reality, immediately instantiates the claim of an external reality as such, one which would be such-and-such irrespective of its interpretation. This, in turn, would amount to a stance that sets the proof of the irrelevance of interpretation, of interpretive relativism, as its goal. Any familiar associations? Not least do physicists, and only physicists, speak of “laws” in nature. All of this is, of course, unholy nonsense: propaganda and ideology, at the very least.

As a matter of fact, even in a quite strict naturalist perspective we need concepts and models, which are obviously not part of “external” nature. Ontology is an illusion, completely and in any of its references, leading to pseudo-problems that are indeed “very difficult” to “solve”. Even if we manage to believe in “existence”, it remains a formless existence; more precisely, it has to remain formless. Any ascription of form would immediately strike back as a denial of the primacy of interpretation, and hence as a naturalist determinism.

Before addressing the issue of the topological structure of our space, let us trace some other figures in our space.

6. Figures and Forms

Whenever we explicate a concept we imply or refer to a model. In a more general perspective, this applies to virtuality and mediality as well. To give an example: describing a belief does not mean to believe, but to apply a model. The question now is how to revert the accretion of mental activities towards the _Model. _Virtuality can’t be created deliberately, since in that case we would again refer to the concept of model. Speaking about something, that is, saying in the Wittgensteinian sense, intensifies the _Model.

It is not too difficult, though, to find a candidate mechanism that turns the vector of mental activity away from the _Concept. It is through performance, mere action without explicable purpose, that we introduce new possibilities for interpretation and thus also enriched potential as the (still abstract) instance of _Virtuality.

In contrast to that, the _Concept is implied. The _Concept can only be demonstrated, even by modeling. Traveling on some path that heads towards the _Model, the need for interpretation continuously grows; hence, the more we try to approach the “pure” _Model, the stronger the force that will flip us back towards the _Concept.

_Mediality, finally, the fourth of our aspects, binds its immaterial colleagues to matter, or quasi-matter, in processes that are based on the multiplicity of populations. It is through _Mediality and its instances that chunks of information start to behave as devices, as quasi-material arrangements. The whole dynamics between _Concepts and _Models requires a symbol system, which can evolve only through the reference to _Mediality, which in turn is implied by populations of processes.

Above we said that the motivation for this structure is to provide a space of expressibility for mental phenomena in their entirety. Mental activity does not consist of isolated, rare events. It is a multitude of flows integrated across various organizational levels, even if we consider only the language part. Mapping these flows into our space raises the question of whether we could distinguish different attractors, different forms of recurrence.

Addressing this question establishes an interesting configuration, since we are talking about the form of mental activities. Perhaps it is also appropriate to call these forms “mental styles”. In any case, we may take our space as a tool to formalize the question about potential classes of mental styles. In order to render our space more accessible, we take the tetrahedron as a (crude) approximating metaphor for it.

Above we stressed the point that any explication intensifies the _Model aspect. Transposed into a Cartesian geometry we would have said, metaphorically, that explication moves us towards the corner of the model. Let us stick with this primitive representation for a moment, in favour of a more intuitive understanding. Now imagine constructing a vector that points away from the model corner, right to the middle of the area spanned by virtuality, mediality and concept. It is pretty clear that mental activity which leaves the model behind in this way, and quite literally so, will be some form of basic belief, or revelation. Religiosity (as a mental activity) may well be described as the attempt to balance virtuality, mediality and concept without resorting to any kind of explication, i.e. models. Of course, this is not possible in an absolute manner, since it is not possible to move in the aspectional space without any explication. This in turn yields a residual that again points towards the model corner.

Inversely, it is not possible to move only in the direction of the _Model. Nevertheless, there are still many people proposing exactly that; think, for instance, of (abundant as well as overdone) scientism. What we can see here are particular forms of mental activity. What about other forms? For instance, the fixed-point attractor?

As we have seen, our aspectional space does not allow for points as singularities. Both the semantics of the aspections and the structure of the space as a second-order differential prevent them. Yet somebody could attempt to realize an orbit around a singularity that is as narrow as possible. Although such points of absolute stability are completely illusory, the idea of the absoluteness of ideas, idealism, represents just such an attempt. Yet the claim of absoluteness brings mental activity to rest. It is not by accident, therefore, that it was the logician Frege who championed a rather strange kind of hyper-Platonism.

At this point we can recognize the possibility of describing different forms of mental activity using our space. Mental activity draws specific trails into our space. Moreover, our suggestion is that people prefer particular figures for whatever reasons, e.g. due to their cultural embedding, their mental capabilities, their knowledge, or even their basic physical constraints. Our space allows us to compare, and perhaps even to construct or evolve, particular figures. Such figures could be conceived as the orthoregulative instance for the conditions of knowing. Epistemology thus loses its claim to universality.

It seems obvious to call our space a “choreostemic” space, a term which refers to choreography. Choreography means to “draw a dance”, or “drawing by dancing”, derived from Greek choreia (χορεία) for “dancing, (round) dance”. Vera Bühlmann [19] described that particular quality as “referring to an unfixed point loosely moving within an occurring choreography, but without being orchestrated prior to and independently of such occurrence.”

The notion of the choreosteme also refers to the chorus of the ancient theatre, with all its connotations, particularly the drama. Serving as an announcement for part V of D&R, Deleuze writes:

However, what carries out the third aspect of sufficient reason—namely, the element of potentiality in the Idea? No doubt the pre-quantitative and pre-qualitative dramatisation. It is this, in effect, which determines or unleashes, which differenciates the differenciation of the actual in its correspondence with the differentiation of the Idea. Where, however, does this power of dramatisation come from? (DR221)

It is right here that the choreostemic space links in. The choreostemic space does not abolish the dramatic in the transition from the conditionability of Ideas into concrete thoughts, but it allows us to trace and to draw, to explicate and negotiate the dramatic. In other words, it opens the possibility for a completely new game: dealing with mental attitudes. Without the choreostemic space this game is not even visible, which itself has rather unfortunate consequences.

The choreostemic space is not an epistemic space either. Epistemology is concerned with the conditions that influence the possibility of knowing. Literally, episteme means “to stand near”, or “to stand over”. It draws upon a fixed perspective that is necessary to evaluate something. Yet in the last 150 years or so, philosophy has definitely experienced the difficulties implied by epistemology as an endeavour that had been expected finally to contribute to the stabilization of knowledge. I think the choreostemic space could be conceived as a tool that allows us to reframe the whole endeavour. In other words, the problematic field of the episteme, and the related research programme “epistemology”, follow an architecture (or intention) that has been set up far too narrowly. That reframing, though, has become accessible only through the “results” of, or the tools provided by, the work of Wittgenstein and Deleuze. Without the recognition of the role of language, and without a renewal of the notion of the virtual, including the invention of the concept of the differential, that reframing would not have been possible at all.

Before we go on to discuss the scope of the choreostemic space and the purposes it can serve, we have to correct the Cartesian view that slipped in through our metaphorical references. The Cartesian flavour not only keeps a certain arbitrariness alive, as the four conceptual aspects _A are given just by some subjective empirical observations. It also keeps us stuck completely within the analytical space, hence with a closed approach that would again need a mystical external instance for its beginning. This we have to correct now.

7. Reason and Sufficiency

Our choreostemic space is built as an aspectional space that is spanned by transcendental entities. As such, they reflect the implied conditionability of concrete entities like definitions, models or media. The _Concept comprises any potential concrete concept, the _Model comprises any actual model of whatever kind and expressed in whatever symbolic system, and _Mediality contains the potential for any kind of media, whether more material or more immaterial in character. The transcendental status of these aspects also means that we can never “access” them in their “pure” form. Yet due to these properties our space allows us to map any mental activity, not just that of the human brain. In a more general perspective, our space is the space where the _Comparison takes place.

The choreostemic space is of course itself a model. Given the transcendentality of the four conceptual aspects _A, we can grasp the self-referentiality. Yet this results neither in an infinite regress nor in circularity. That would be the case only if the space were Cartesian and the topological structure flat (Euclidean) and global.

First, we have to consider that the choreostemic space is not only a model, precisely due to its self-referentiality. Second, it is a tool, and as such it is not time-inert like a physical law. Its relevance unfolds only when it is used; this, however, invokes time and activity. Thus the choreostemic space could also be conceived as a means to intensify the virtual aspects of thought. Third, it is of course a concept, that is, an instance of the _Concept. As such, it should be constructed in a way that abolishes any possibility of a Cartesio-Euclidean regression. All these aspects are covered by the topological structure of the choreostemic space: it is meant to be a second-order differential.

A space made by the second-order differential does not contain items. It spawns procedures. In such a space it is impossible to stay at a fixed point. Whenever one tried to determine a point, one would be accelerated away. The whole space causes mental activities to diverge. Here we find the philosophical reason for the impossibility of catching a thought as a single entity.

We just mentioned that the choreostemic space does not contain items. Due to the second-order differential it is not made up of a set of coordinates or, if we considered real scaled dimensions, of potential sets of coordinates. Quite the opposite: there is nothing determinable in it. Yet in hindsight we can reconstruct figures in a probabilistic manner. The subject of this probabilism is again not determinable coordinates, but rather clouds of probabilities, quite similar to the way things are described in quantum physics by the Schrödinger equation. Unlike the completely structureless and formless clouds of probability used in the description of electrons, the figures in our space can take various, more or less stable forms. This means that we can try to evolve certain choreostemic figures and even anticipate them, but only to a certain degree. The attractor of a chaotic system provides a good metaphor for that: we clearly can see the traces in parameter space as drawn by the system, yet the system’s path as described by a sequence of coordinates remains unpredictable. Nevertheless, the attractor is probabilistically confined to a particular, yet cloudy “figure,” that is, an unsharp region in parameter space. Transitions are far from arbitrary.
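The attractor metaphor can be illustrated with a standard textbook example, chosen by us purely for illustration (the Lorenz system and all parameters are our own choice, not part of the text’s argument): the trajectory is unpredictable point by point, yet it remains confined to a bounded, cloud-like region of its state space.

```python
import numpy as np

def lorenz_trace(steps=20000, dt=0.005, sigma=10.0, rho=28.0, beta=8/3):
    """Euler integration of the Lorenz system: chaotic, hence unpredictable
    in its coordinates, yet confined to a bounded attractor region."""
    x, y, z = 1.0, 1.0, 1.0
    pts = np.empty((steps, 3))
    for i in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt*dx, y + dt*dy, z + dt*dz
        pts[i] = (x, y, z)
    return pts

pts = lorenz_trace()
```

The trace never settles to a point and never repeats, but its cloud of visited states keeps the recognisable “figure” of the attractor, which is the sense of the metaphor used above.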

Hence, we would propose to conceive the choreostemic space as being made up from probabilistic situs (pl.). Transitions between situs are at the same time also transformations. The choreostemic space is embedded in its own mediality without excluding roots in external media.

Above we equipped the space with a hyperbolic topology in order to align with the transcendentality of the conceptual aspects. It is quite important to understand that the choreostemic space does not implement a single, i.e. global, hyperbolic relation. In contrast, each situs serves as a point of reference. Without this relativity the choreostemic space would again be centred, and in consequence it would again turn to the analytic and totalising side. This relativity can be regarded as the completed and subjectivising Cartesian delocalization of the “origin”. It is clear that the distance measures of any two such relative hyperbolic spaces no longer coincide. There is neither a priori objectivity, nor could we expect a general mapping function. Approximate agreement about distance measures may be achievable only for reference systems that are rather close to each other.
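The claim that relatively centred hyperbolic metrics disagree can be checked numerically. The following toy computation is entirely our own construction for illustration: it measures the same straight segment under two conformal, hyperbolic-style metrics (Poincaré-disk weight 2/(1-r²)), each centred on a different “situs”, and the two lengths do not coincide.

```python
import numpy as np

def path_length(a, b, centre, n=1000):
    """Length of the straight segment a -> b under a conformal metric
    centred on `centre`: ds = 2 |dz| / (1 - |z - centre|^2)."""
    t = np.linspace(0.0, 1.0, n)
    z = a + t[:, None] * (b - a)                    # points along the segment
    r2 = np.sum((z - centre)**2, axis=1)            # squared distance to the centre
    factor = 2.0 / (1.0 - r2)                       # conformal weight, grows near radius 1
    seg = np.linalg.norm(b - a) / (n - 1)           # Euclidean step length
    return np.sum(0.5 * (factor[:-1] + factor[1:]) * seg)   # trapezoid rule

a = np.array([0.1, 0.0])
b = np.array([0.3, 0.2])
d1 = path_length(a, b, centre=np.array([0.0, 0.0]))   # metric centred on one situs
d2 = path_length(a, b, centre=np.array([0.4, 0.4]))   # metric centred on another
```

The two measures d1 and d2 differ for the very same segment; only for centres close to each other do they approximately agree, which is the point made in the paragraph above.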

The choreostemic space comprises any condition of any mental attitude or thought. We already mentioned it above: the corollary of this is that the choreostemic space is the space of _Comparison as a transcendental category.

It comprises the conditions for the whole universe of Ideas, it is an entirety. Here, it is again the topological structure of the space that saves us from mental dictatorship. We have to perform a double instantiation in order to arrive at a concrete thought. It is somewhat important to understand that these instantiations are orthoregulated.

It is clear that the choreostemic space destroys the idea of a uniform rationality. Rationality can’t be tied to truth, justice or utility in an objective manner, even if we soften objectivity into a kind of relaxed intersubjectivity. Rationality depends completely on the preferred or practiced figures in the choreostemic space. Two persons, or more generally two entities with some mental capacity, could completely agree on the facts, that is, on the percepts, the way of their construction, and the relations between them, and nevertheless assign them completely different virtues and values, simply because the two entities inhabit different choreostemic attractors. Rationality is global within a specific choreostemic figure, but local and relative when seen from outside that figure. The language game of rationality therefore does not refer to a particular attitude towards argumentation; quite the contrary, it includes and displays the will to establish, if not to enforce, uniformity. Rationality is the label for the will to power under the auspices of logic and reductionism. It serves as the display for certain, quite critical moral values.

Thus, the notion of sufficient reason loses its frightening character as well. Like any other principle of practice, it gets transformed into a strictly local principle, retaining some significance only with regard to situational instrumentality. Since the choreostemic space is a generative space, locality comprises temporal locality as well. According to the choreostemic space, sufficient reasons can’t even be transported between subsequent situations. In terms of the choreostemic space, notions like rationality or sufficient reason are relative to a particular attractor. In different attractors their significance could be very different; they may bear very different meanings. Viewed from the opposite direction, we can also see that a more or less stable attractor in the choreostemic space first has to form, or to be formed, before there is even the possibility of sufficient reasons. This runs strictly parallel to Wittgenstein’s conception of logic as a transcendental apriori that possibly becomes instantiated only within the process of an unfolding Lebensform. As a contribution to political reason, the choreostemic space enables persons to inhabit different attractors, following different mental styles. We will return to this aspect later.

In D&R, Deleuze explicated the concept of the “Image of Thought”, as part III of D&R is titled. There he first discusses what he calls the dogmatic image of thought, comprising, according to him, eight elements that together lead to the concept of the idea as a representation (DR167). Following that, he insists that the idea is bound to repetition and difference (as differenciation and differentiation), where repetition introduces the possibility of the new, as it is not the repetition of the same. Nevertheless, Deleuze didn’t develop this Image into a multiplicity, as might have been expected from a more practical perspective, i.e. the perspective of language games. These games are different from his notion, despite his emphasizing at several instances that language is a rich play.

It seems to me that Deleuze didn’t (want to) get rid of ontology; hence he did not conceive of his great concept of the “differential” as a language game, and in turn he failed to detect the opportunity for self-referentiality, or even to apply it in a self-referential manner. We therefore certainly do not agree with his attempt to ground the idea of sufficient reason as a global principle. Since “sufficient reason” is a practice, I think it is neither possible nor sufficient to conceive of it as a transcendental guideline.

8. Elective Kinships

It is pretty clear that the choreostemic space is applicable to many problematic fields concerning mental attitudes, and hence concerning cultural issues at large, reaching far beyond the specificity of individual domains.

As we will see, the choreostemic space may serve as a treatment for several kinds of troublesome aberrations, in philosophy itself as well as in its various applications. Predominantly, the choreostemic space provides the evolutionary perspective towards a self-containing theoretical foundation of plurality and manifoldness.26 Comparing this with Hegel’s slogans of “the synthesis of the nation’s reason” (“Synthese des Volksgeistes”) or “The Whole is the Truth” (“Das Ganze ist das Wahre”) shows the difference regarding level and scope.

Before we go into the details of the dynamics that unfolds in the choreostemic space, we would like to pick up on two areas, the philosophy of the episteme and the relationship between anthropology and philosophy.

8.1. Philosophy of the Episteme

The choreostemic space is not about a further variety of some epistemological argument. It is intended as a reframing of the concerns that have traditionally been addressed by epistemology. (Here we would already like to warn of the misunderstanding that the choreostemic space exhausts itself as epistemology.) Hence, it should be able to serve as the theoretical frame for the sociology of science or the philosophy of science as well. Think of the work of Bruno Latour [9], Karin Knorr Cetina [10] or Günther Ropohl [11] for the sociology of science, or the work of van Fraassen [12] or Giere [13] for the philosophy of science. Sociology and philosophy, and quite likely any of the disciplines in the human sciences, should indeed establish references to the mental in some way, but rather not to the neurological level, nor, since we have to avoid anthropological references, to cognition as it is currently understood in psychology.

Giere, for instance, brings the “cognitive approach” and hence the issue of practical context close to the understanding of science, criticizing the idealising projection of unspecified rationality:

Philosophers’ theories of science are generally theories of scientific rationality. The scientist of philosophical theory is an ideal type, the ideally rational scientist. The actions of real scientists, when they are considered at all, are measured and evaluated by how well they fulfill the ideal. The context of science, whether personal, social or more broadly cultural, is typically regarded as irrelevant to a proper philosophical understanding of science. (p.3)

The “cognitive approach” that Giere proposes as a means to understand science is, however, seriously threatened by the fact that there is no consensus about the mental. This clearly conflicts with the claim of trans-cultural objectivity of contemporary science. Concerning cognition, there are still many simplistic paradigms around, recently renewed in all seriousness by the machine learning community. Aaron Ben Ze’ev [14] writes critically:

In the schema paradigm [of the mind, m.], which I advocate, the mind is not an internal container but a dynamic system of capacities and states. Mental properties are states of a whole system, not internal entities within a particular system. […] Novel information is not stored in a separate warehouse, but is ingrained in the constitution of the cognitive system in the form of certain cognitive structures (or schemas). […] The attraction of the mechanistic paradigm is its simplicity; this, however, is an inadequate paradigm, because it fails to explain various relevant phenomena. Although the complex schema paradigm does not offer clear-cut solutions, it offers more adequate explanations.

How problematic even such critiques are can be traced as soon as we remember Wittgenstein’s remark on “mental states” (Brown Book, p.143):

There is a kind of general disease of thinking which always looks for (and finds) what would be called a mental state from which all our acts spring as from a reservoir.

In the more general field of epistemology there is still no sign of any agreement about the concept of knowledge. From our position this is hardly surprising. First, concepts can’t be defined at all; all we can find are local instances of the transcendental entity. Second, knowledge, and even its choreostemic structure, depends on the embedding culture while at the same time forming that culture. The figures in the choreostemic space are attractors: they do not prescribe the next transformation, but they constrain its possibility. How, then, could one ever “define” knowledge in an explicit, positively representationalist manner? For instance, knowledge can’t be reduced to confirmed hypotheses qua validated models. It is impossible in principle to say “knowledge is…”, since this inevitably implies the demand for an objective justification. At most, we can take it as a language game. (Thus the choreosteme, that is, the potential of building figures in the choreostemic space, should not be confused with the episteme! We will return to this issue later.)

Yet merely pointing to the category of the mental as a language game does not feel satisfying at all. Of course, Wittgenstein’s work sheds bright light on many aspects of mentality. Nevertheless, we can’t use Wittgenstein’s work as a structure; it is itself to be conceived as the result of a certain structuredness. On the other hand, it is equally disappointing to rely on the scientific approach to the mental. In some way we need a balanced view, which additionally should provide the possibility for a differential experimentation with mechanisms of the mental.

Just that is offered by the choreostemic space. We may relate disciplinary reductionist models to concepts as they live in language games, without any loss and without getting into trouble.

Let us now see what becomes possible by means of the choreostemic space and the anti-ontological T-Bar-Theory for the terms believing, referring, explicating, understanding and knowing. It might be relevant to keep in mind that by “mental activities” we do not refer to any physical or biochemical process; we distinguish the mental from the low-level affairs in the brain. Beliefs, or believing, are thus considered to be language games. From that perspective our choreostemic space serves as a tool to externalize language in order to step outside of it, or likewise, to become able to render important aspects of playing the language game visible.


The category of beliefs, or likewise the activity of believing27, we already met above. We characterised it as a mental activity that leaves the model behind. We sharply refute the quite abundant conceptualisation of beliefs as a kind of uncertainty in models. Since there is no certainty at all, not even with regard to transcendental issues, such a conceptualisation would make little sense. Actually, the language game of believing shows its richness even in a short investigation like this one.

Before we go into details here let us see how others conceive of it. PMS Hacker [27] gave the following summary:

Over the last two and a half centuries three main strands of opinion can be discerned in philosophers’ investigations of believing. One is the view that believing that p is a special kind of feeling associated with the idea that p or the proposition that p. The second view is that to believe that p is to be in a certain kind of mental state. The third is that to believe that p is to have a certain sort of disposition.

Right at the beginning of his investigation, Hacker marks the technical, reductionist perspective on belief as a misconception. This technical reductionism, which took form as the so-called AGM-theory in the paper by Alchourron, Gärdenfors and Makinson [28], we will discuss below. Hacker writes about it:

Before commencing analysis, one misconception should be mentioned and put aside. It is commonly suggested that to believe that p is a propositional attitude. That is patently misconceived, if it means that believing is an attitude towards a proposition. […] I shall argue that to believe that p is neither a feeling, nor a mental state, nor yet a disposition to do or feel anything.

Obviously, believing has several aspects. First, it is certainly a kind of mental activity. It seems that I need not tell anybody that I believe in order to be able to believe. Second, it is a language game, and a rich one indeed. It seems almost to be omnipresent. As a language game, it links “I believe that” with “I believe A” and “I believe in A”. We should not overlook, however, that these utterances are spoken towards someone else (even in inner speech); hence the whole wealth of processes and relations of interpersonal affairs has to be regarded: all those mutual ascriptions of roles, assertions, maintained and demonstrated expectations, displays of self-perception, attempts to induce a certain co-perception, and so on. We frequently cited Robert Brandom, who analysed that in great detail in his “Making it Explicit”.

Yet, can we really say that believing is just a mental activity? For one thing, above we did not claim that believing is something like a “pure” mental activity. We clearly would reject such a claim. First, we clearly cannot set the mental as such into a transcendental status, as this would lead straight to a system like Hegel’s philosophy, with all its difficulties, untenable claims and disastrous consequences. Second, it is impossible to explicate “purity”, as this would deny the fact that models are impossible without concepts. So, is it possible that a non-conscious being or entity can believe? Not quite, I would like to propose. Such an entity will of course be able to build models, even quite advanced ones, though probably not about reflective subjects such as concepts or ideas. It could experience that it cannot get rid of uncertainty and its closely related companion, risk. Thus we can say that these models are not propositions “about” the world; they comprise uncertainty and allow one to deal with uncertainty through actions in the world. Yet, the ability to deal with uncertainty is certainly not the same as believing. We would not need the language game at all. Saying “I believe that A” does not mean to have a certain model with a particular predictive power available. As models are explications, expressing a belief, or experiencing the compound mental category “believing”, is just the demonstration that any explication is impossible for the person.

Note that we conceive of “belief” as completely free of values and also without any reference to mysticism. Indeed, the choreostemic space allows us to distinguish different aspects of the “compound experience” that we call “belief”, which otherwise are not even visible as separate aspects of it. As a language game we thus may specify it as the indication that the speaker assigns—or the listener is expected to assign—a considerable portion of the subject matter to that part of the choreostemic figure that points away from the _Model. It is immediately clear from the choreostemic space that mental activity without belief is not possible. There is always a significant “rest” that could not be covered by any kind of explication. This is true for engineering and of course for any kind of social interaction, as soon as mutual expectations appear on the stage. By means of the choreostemic space we also can understand the significance of trust in any interaction with the external world. In communicative situations, this quickly may lead to a game of mutual deontic ascriptions, as Robert Brandom [15] has argued in his “Making it Explicit”.

Interestingly enough, belief (in its choreostemically founded version) is implied by any transition away from the _Model, for instance also in case of the transition path that ultimately is heading towards the _Concept. Even more surprising—at first sight—and particularly relevant is the “inflection dynamics” in the choreostemic space. The more one tries to explicate something, the larger the necessary imports (e.g. through orthoregulations) from the other _a, and hence the larger the propensity for an inflecting flip.28

As an example, take the historical development of theories in particle physics. There, people started with rather simple experimental observations, which were then assimilated by formal mathematical models. Those in turn led to new experiments, and so forth, until physics reached a level of sophistication where “observations” are based on several, if not many, layers of derived concepts. On the way, structural constants and heuristic side conditions are implied. Finally, then, the system of the physical model turns into an architectonics, a branched compound of theory-models, that sounds as trivial as it is conceptual. In the case of physics, it is the so-called grand unified theory. There are several important things here. First, due to the large amount of heuristic settings and orthoregulations, such concepts can’t be proved or disproved anymore, least of all by empirical observations. Second, on the achieved level of abstraction, the whole subject could be formulated in a completely different manner. Note that such a dynamic between experiment, model, theory29 and concept has never been described in a convincing manner before.30

Now that we have a differentiated picture of belief at our disposal, we can briefly visit the field of so-called belief revision. Belief revision has been widely adopted in artificial intelligence and machine learning as the theory for updating a data base. Quite unfortunately, the whole theory is, well, simply crap, if we were to apply it according to its intention. I think that we can draw some significance of the choreostemic space from this mismatch for a more appropriate treatment of beliefs in information technology.

The theory of belief revision was put forward by a branch of analytical philosophy in a paper by Alchourron, Gärdenfors and Makinson (1985) [29], often abbreviated as “AGM-theory”. Hansson [30] writes:

A striking feature of the framework employed there [monnoo: AGM] is its simplicity. In the AGM framework, belief states are represented by deductively closed sets of sentences, called belief sets. Operations of change take the form of either adding or removing a specified sentence.

Sets of beliefs are held by an agent, who establishes or maintains purely logical relations between the items of those beliefs. Hansson correctly observes that:

The selection mechanism used for contraction and revision encodes information about the belief state not represented by the belief set.

Obviously, such “belief sets” have nothing to do with beliefs as we know them from the language game, besides being a misdone caricature of them. As with Pearl [23], the interesting stuff is left out: How to achieve those logical sentences at all, notably by a non-symbolic path of derivation? (There are no symbols out there in the world.) By means of the choreostemic space we easily derive the answer: by an orthoregulated instantiation of a particular choreostemic performance in an unbounded (open) aspectional space that spans between transcendental entities. Since the AGM framework starts with or presupposes logic, it simply got stuck in the symbolistic fallacy or illusion. Accordingly, Pollock & Gillies [30] demonstrate that “postulational approaches” such as the AGM-theory can’t work within a fully developed “standard” epistemology. The two are simply incompatible with each other.
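The formal simplicity Hansson describes can be made palpable in a few lines. What follows is a deliberately crude sketch, not AGM proper: the function names and the trivial selection mechanism are my own, beliefs are mere atomic strings rather than deductively closed sets of sentences, and contraction just deletes a single element. Revision follows the so-called Levi identity: first contract the negation of the new sentence, then expand by it.

```python
# Toy sketch of AGM-style belief change on a finite base of atomic
# sentences. Real AGM operates on deductively closed belief sets and
# uses a selection function for contraction; here both are reduced
# to the simplest possible stand-ins, purely for illustration.

def negate(p: str) -> str:
    """Toggle a leading '~' to form the negation of an atomic sentence."""
    return p[1:] if p.startswith("~") else "~" + p

def expand(base: set, p: str) -> set:
    """Expansion: add the sentence, regardless of consistency."""
    return base | {p}

def contract(base: set, p: str) -> set:
    """Contraction: give up the sentence so it is no longer held."""
    return base - {p}

def revise(base: set, p: str) -> set:
    """Revision via the Levi identity: contract ~p, then expand by p."""
    return expand(contract(base, negate(p)), p)

beliefs = {"rain", "~sunny"}
beliefs = revise(beliefs, "sunny")   # retract "~sunny", then add "sunny"
print(sorted(beliefs))               # ['rain', 'sunny']
```

Even this caricature makes the critique above visible: the whole apparatus presupposes that the sentences are already there as symbols, and says nothing about how an agent could arrive at them in the first place.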


Closely related to believing is explicating, the latter being just the inverse of the former, pointing in the “opposite direction”. Explicating is almost identical to describing a model. The language game of “explication” means to transform, to translate and to project choreostemic figures into lists of rules that could be followed, or in other words, into the sayable. Of course, this transformation and projection is neither analytic nor neutral. We must be aware of the fact that even a model can’t be explicated completely. Moreover, this rule-following itself implies the necessity of beliefs and trust, and it requires a common understanding about the usage or the influence of orthoregulations. In other words, without an embedding into a choreostemic figure, we can’t accomplish an explication.

Understanding, Explaining, Describing

Outside of the perspective of the language game, “understanding” can’t be understood. Understanding emerges as a result of relating the items of a population of interpretive acts. This population and the relations imposed on it are closely akin to Heidegger’s scaffold (“Gestell”). Mostly, understanding something is just extending an existent scaffold. About these relations we can’t speak clearly or in an explicit manner any more, since these relations are constitutive parts of the understanding. Like all language games, this one too unfolds in social situations, which need not be syntemporal. Understanding is a confirming report about beliefs and expectations regarding certain capabilities of one’s own.

Saying “I understand” may convey different meanings. More precisely, understanding may come along in different shades that are placed between two configurations. Either it signals that one believes oneself able to extend just one’s own scaffold, one’s own future “Gestelltheit”. Alternatively it is used to indicate the belief that the extension of the scaffold is shared between individuals in such a way that one can reproduce the same effect as anyone else who understands the same thing. This effect could be merely instrumental or, more significantly, it could refer to the teaching of further pupils. In this case, two people understand something if they can teach another person to the same ends.

Besides the performative and social aspects of understanding there are of course the mental aspects of the concept of “understanding” something. These can be translated into choreostemic terms. Understanding is less a particular “figure” in the choreostemic space than it is a deliberate visiting of the outer regions of the figure and the intentional exploration of those outposts. We understand something only in case we are aware of the conditions of that something and of our personal involvements. These include cognitive aspects, but also the consequences of the performative parts of acts that contribute to an intensifying of the aspect of virtuality. A scientist who builds a strong model without considering his and its conditionability does not understand anything. He just would practice a serious sort of dogma (see Quine about the dogmas of empiricism here!). Such a scientist’s modeling could be replaced by that of a machine.

A similar account could be given of the application of a grammar, irrespective of the abstractness of that grammar. Referring to a grammar without considering its conditionability could be performed by a mindless machine as well. It would indeed remain a machine: mindless, and forever determined. Such is most, if not all, of the computer software dealing with language today.

We again would like to emphasize that understanding is not exhausted by the ability to write down a model. Understanding means to relate the model to concepts, that is, to trace a possible path that would point towards the concept. A deep understanding refers to the ability to extend a figure towards the other transcendental aspects in a conscious manner. Hence, within idealism and (any sort of) representationalism, understanding is actually excluded. They mistake the transcendental for the empirical and vice versa, ending in a strict determinism and dogmatism.

Explaining, in turn, indicates the intention to make somebody else understand a certain subject. The infamous existential “Why?” does not make any sense. It is not just questionable why this language game should be performed at all, as the why of absolute existence can’t be answered at all. Actually, it seems to be quite different from that. As a matter of fact, we indeed play this game in a well comprehensible way and in many social situations. Conceiving the “explanation” of nature as accounting for its existence (as Epperson does, see [31] p.357) presupposes that everything could be turned into the sayable. It would result in the conflation of logic and the factual world, something Epperson indeed proposes. Some pages later in his proposal about quantum physics he seems to loosen that strict tie when, referring to Whitehead, he links “understanding” to coherence and empirical adequacy. ([31] p.361)

I offer this argument in the same speculative philosophical spirit in which Whitehead argued for the fitness of his metaphysical scheme to the task of understanding (though not “explaining”) nature—not by the “provability” of his first principles via deduction or demonstration, but by their evaluation against the metrics of coherence and empirical adequacy.

Yet, this presents us with an almost perfect phenomenological stance, separating objects from subjects. Neither coherence nor empirical adequacy can be separated from concepts, models and the embedding Lebenswelt. It thus expresses the belief in “absolute” understanding and final reason. Such ideas are at least highly problematic, even and especially if we take into account the role Whitehead gives to “value” as a cosmological a priori. It is quite clear that this attitude to understanding is sharply different from anything related to semiotics, to the primacy of interpretation, to the role of language, or to a relational philosophy; in short, to anything that even remotely resembles what we proposed about understanding a few lines above.

The intention to make somebody else understand a certain subject necessarily implies a theory, where theory here is understood (as we always understand it) as a milieu for deriving or inventing models. The “explaining game” comprises the practice of providing a general perspective to the recipient such that she or he could become able to invent such a model, precisely because a “direct” implant of an idea into someone else is quite impossible. This milieu involves orthoregulation and a grammar (in the philosophical sense). The theory and the grammar associated or embedded with it does nothing else than provide support for finding a possibility for the invention or extension of a model. It is a matter of persistent exchange of models from a properly grown population of models that allows a common understanding about something to develop. In the end we then may say “yes, I can follow you!”

Describing is often not (properly) distinguished from explaining. Yet, in our context of choreostemically embedded language games it is neither mysterious nor difficult to do so. We may conceive of describing just as explicating something into the sayable; the element of cross-individual alignment is not part of it, or at least present only in a much less explicit way. Hence, usually the respective declaration will not be made. The element of social embedding is much less present.

Describing pretends, more or less, that all three aspects accompanying the model aspect could be neglected, particularly however the aspects of mediality and virtuality. The mathematical proof can be taken as an extreme example of that. Yet, even there this is not possible, since at least a working system of symbols is needed, which in turn is rooted in a dynamics unfolding as a choreostemic figure, the mental aspect of Forms of Life. Basically, this impossibility of fixing a “position” in the choreostemic space is responsible for the so-called foundational crisis in mathematics. This crisis prevails even today in philosophy, where many people, naively enough, still search for absolute justification, or truth, or at least regard such as a reasonable concept.

All this should not be understood as an attempt to deny description or describing as a useful category. Yet, we should be aware that the difference to explaining is just one of (choreostemic) form. More explicitly, said difference is an affair of culturally negotiated portions of the transcendental aspects that make up mental life.

I hope this sheds some light on Wittgenstein’s claim that philosophy should just describe, but not explain anything. The possibly perceived mysteriousness may vanish as well, if we remember his characterisation of grammar.

Both understanding and explaining are quite complicated, socially mediated processes; hence they unfold upon layers of milieus of mediality. Both not only relate to models and concepts that need to exist in advance, and thus to a particular dynamics between them; they also require a working system of symbols. Models and concepts relate to each other only as instances of _Models and _Concepts, that is, in a space as it is provided by the choreostemic space. Talking about understanding as a practice is not possible without it.


Referring to something means to point to the expectation that the referred entity could point to the issue at hand. Referring is not “pointing to” and hence does not consist of a single move. It is “getting pointed to”. Said expectation is based on at least one model. Hence, if we refer to something, we put our issue as well as ourselves into the context of a chain of signifiers. If we refer to somebody, or to a named entity, then this chain of interpretive relations transforms in one of two ways.

Either the named entity is used, that is, put into a functional context, or more precisely, assigned a sayable function. The functionalized entity does not (need to) interpret any more; all activity gets centralized, which could be used as the starting point for totalizing control. This applies to any entity, whether it is material, living or social.

The second way in which referencing is affected by names concerns the reference to another person, or a group of persons. If it is not a functional relationship, e.g. taking the other as a “social tool”, it is less about the expected chaining as signifier by the other person. Persons cannot be interpreted the way we interpret things or build signs from signals. Referring to a person means to accept the social game that comprises (i) mutual deontic assignments that develop into “roles”, including deontic credits and their balancing (as first explicated by Brandom [15]), (ii) the acceptance of the limit of the sayable, which results in a use of language that is more or less non-functional, always metaphorical and sometimes even poetic, as well as (iii) the declared persistence for repeated exchanges. The fact that we interpret the utterances of our partner within the orthoregulative milieu of a theory of mind (which builds up through these interpretations) means that we mediatize our partner at least partially.

The limit of the sayable is a direct consequence of the choreostemic constitution of performing thinking. The social is based on communication, which means “to put something into common”; hence, we can regard “communication” as the driving, extending and public part of using sign systems. As a proposed language game, “functional communication” is nonsense, much like the utterance “soft stone”.

By means of the choreostemic space we also can see that any referencing is equal to a more or less extensive figure, as models, concepts, performance and mediality are involved.


At first glance, we could suspect that before any instantiation qua choreostemic performance we cannot know something positively for sure in a global manner, i.e. objectively, as is often meant to be expressed by the substantive “knowledge”. Due to that performance we have to interpret before we can know positively and objectively. The result is that we never can know anything for sure in a global manner. This holds even for transcendental items, that is, what Kant dubbed “pure reason”. Nevertheless, the language game “knowledge” has a well-defined significance.

“Knowledge” is a reasonable category only with respect to performing, interpreting (performance in thought) and acting (organized performance). It is bound to a structured population of interpretive situations, to Peircean signs. We thus find a gradation of privacy vs. publicness with respect to knowledge. We just have to keep in mind that neither of these qualities could be thought of as being “pure”. Pure privacy is not possible, because there is nothing like a private language (meaning qua usage and shared reference). Pure publicness is not possible because there is the necessity of a bodily rooted interpreting mechanism (associative structure). Things like “public space” as a purely exterior or externalized thing do not exist. The relevant issue for our topic of a machine-based episteme is that functionalism always ends in a denial of the private language argument.

We now can see easily why knowledge cannot be conceived as a positively definable entity that could be stored or transferred as such. First, it is of course a language game. Second, and more importantly, “knowing {of, about, that}” always relates to instances of transcendental entities, and necessarily so. Third, even if we could agree on some specific way of instantiating the transcendental entities, it always invokes a particular figure unfolding in an aspectional space. This figure can’t be transferred, since this would mean that we could speak about it outside of itself. Yet, that’s not possible, since it is in turn impossible to just pretend to follow a rule.

Given this impossibility, we should dwell for a moment on the apparent gap it opens with regard to teaching. How can one teach somebody something if knowledge can’t be transferred? The answer is furnished by the equipment that is shared among the members of a community of speakers, or of co-inhabitants of the choreostemic space. We need this equipment for matching the orthoregulation of our rule-following. The parts, tools and devices of this equipment are made from palpable traditions, cultural rhythms, institutions, individual and legal preferences regarding the weighting of individuals versus the various societal clusters, the large story of the respective culture and the “templates” provided by it, the consciously accessible time horizon, both towards the past and the future31, and so on. Common sense wrongly labels the resulting “setup” a “body of values”. More appropriately, we could call it grammatical dynamics. Teaching, then, is in some way more about the reconstruction of the equipment than about the agreement on facts, albeit the arrangement of the facts may tell us a lot about the grammar.

Saying ‘I know’ means that one wants to indicate that she or he is able to perform choreostemically with regard to the subject at hand. In other words, it is a label for a pointer (say, a reference) to a particular image of thought and its use. This includes the capability of teaching and explaining, which are probably the only ways to check whether somebody really knows. We can, however, not claim that we are aligned to a particular choreostemic dynamics. We only can believe that our choreostemic moves are part of a supposed attractor in the choreostemic space. From that it also follows that knowledge is not just about facts, even if we would conceive of facts as a compound of fixed relations and fixed things.

The traditional concerns of epistemology, as the discipline that asks about the conditions of knowing and knowledge, must be regarded as a misplaced problem. Usually, epistemology does not refer to virtuality or mediality. Moreover, in epistemology knowledge is often sharply separated from belief, yet for the wrong reasons. The formula of “knowledge as justified belief” puts them both onto the same stage. It then would have to be clarified what “justified” should mean, which in turn is not possible. Explicating “justifying” would need reference to concepts and models, or rather the confinement to a particular one: logic. Yet, knowledge and belief are completely different with regard to their role in choreostemic dynamics. While belief is an indispensable element of any choreostemic figure, knowledge is the capability to behave choreostemically.

8.2. Anthropological Mirrors

Philosophy suffers even more from a surprising strangeness. As Marc Rölli recently noted [34] in his large work on the relations between anthropology and philosophy (KAV):

For more than 200 years, philosophy has been anthropologically determined. Yet philosophy has not investigated the relevance of this fact to any significant extent. (KAV15)32

Rölli agrees with Nietzsche regarding his critique of idealism.

“Nietzsche’s critique of idealism, which comes in many nuances and always targets the philosophical self-misunderstanding of pure reason or pure concepts, is also directed against a certain conception of nature.” (KAV439)33.

…where this rejected conception of nature is purposefulness. In nature there is no forward-directed purpose, no plan. Such ideas are due either to religious romanticism or to a serious misunderstanding of the Darwinian theory of natural evolution. In biological nature, there is only a blind tendency towards the preference of an intensified capability for generalization34. Since Kant, and including him, and in some way already since Descartes, philosophy has been influenced by scientific, technological or anthropological conceptions about nature in general, or about the nature of the human mind.

This is (at least) problematic for three reasons. First, it constitutes a misunderstanding of the role of philosophy to rely on scientific insights. Of course, this perspective is becoming (again) visible only today, notably after the Linguistic Turn, as far as non-analytical philosophy is concerned. Secondly, however, it is clear that the said influence implies, if it remains unreflected, a normative tie to empirical observations. This clearly represents a methodological shortfall. Thirdly, even if one would accept a certain link between anthropology and philosophy, the foundations taken from a “philosophy of nature”35 are so simplistic that they could hardly be regarded as viable.

This almost primitive image of a purposeful nature finally flowed into the functionalism of our days, whether in philosophy (Habermas) or in so-called neuro-philosophy, by which many feel inclined to establish a variety of determinism that is even proto-Hegelian.

In the same passage that invokes Nietzsche’s critique, Rölli cites Friedrich Albert Lange [39]:

“The topic that we actually refer to can be denoted explicitly. It is quasi the apple in the logical lapse of German philosophy subsequent to Kant: the relation between subject and object within knowledge.” (KAV443)36

Lange deliberately attests Kant—in contrast to the philosophers of German Idealism—to be clear about that relationship. For Kant, subject and object are constituted only as an amalgam; the pure whatsoever has been claimed only by Hegel, Schelling and their epigones and inheritors. The intention behind introducing pureness, according to Lange, is to support absolute reason or absolute understanding, in other words, eternally justified reason and the undeniability of certain concepts. Note that German Idealism was born before the foundational crisis in mathematics, which started with Russell’s remark on Frege’s “Begriffsschrift” and his “all” quantifier, found its continuation in the Hilbert programme, and finally has been inscribed into the roots of mathematics by Gödel. Philosophies of “pureness” are not items of the past, though. Think about materialism, or about Agamben’s “aesthetics of pure means”, in which Benjamin Morgan [39] correctly identified the metaphysical scaffold of Agamben’s recent work.

Marc Rölli dedicates all of the 512 pages to the endeavor to destroy the extra-philosophical foundations of idealism. As the proposed alternative we find pragmatism, that is, a conceptual foundation of philosophy that is based on language and the form of life (Lebensform in the Wittgensteinian sense). He concludes his work accordingly:

After all it may have become more clear that this pragmatism is not about a simple, naive pragmatism, but rather about a pragmatism of difference37 that has been constructed with great subtlety. (KAV512)38

Rölli’s main target is German Idealism. Yet, Hegelian philosophy is undeniably not only abundant on the European continent, where we find the Frankfurt School from Adorno to Habermas and even K.-O. Apel, followed by the ill-fated ideas of Luhmann, which are infected by Hegel as well. Significant traces of it can also be found in Germany’s society, in contemporary legal positivism and the oligarchy of political parties.

During the last 20 years or so, Hegelian positions have spread considerably also in Anglo-American philosophy and political theory. Think about Hardt and Negri, or even the recent works of Brian Massumi. Hegelian philosophy, however, can’t be taken in portions. It is totalitarian through and through, because its main postulates, such as “absolute reason”, are totalizing by themselves. Hegelian philosophy is a relic, and a quite dangerous one, regardless of whether you interpret it in a leftist (Lenin) or in a rightist (Carl Schmitt) manner. With its built-in claim for absoluteness and its explicit denial of context-specificity, of the necessary relativity of interpretation, of the openness of future evolution, and of the freedom inscribed deeply even into the basic operation of comparison, all of its positions turn into transcendental a prioris. The same holds for the claim that things, facts, or even norms can be justified absolutely. No further comment should be necessary about that.

The choreostemic space itself cannot result in a totalising or even totalitarian attitude. We met this point already earlier, when we discussed the topological structure of the space and its a-locational “substance” (Reason and Sufficiency). As Deleuze emphasized, there is a significant difference between entirety and completeness, which just mirrors the difference between the virtual and the actual. We’d like to add that the choreostemic space also disproves the possibility of universality for any kind of conception. In some way, yet implicitly, the choreostemic space defends humanity against materiality and any related attitude. Even if we were determined completely on the material level, which we are surely not39, the choreostemic space proves the indeterminateness and openness of our mental life.

You already may have got the feeling that we are going to slip into political theory. Indeed, the choreostemic space not only forms a space of indeterminateness and applicable pre-specificity, it also provides a kind of space of “Swiss neutrality”. Its capability to allow for a comparison of collective mental setups, without resorting to physicalist concepts like swarms or mysticist concepts like “collective intelligence”, provides a fruitful ground for any construction of transitions between choreostemic attractors.

Despite the fact that the choreostemic space concerns any kind of mentality, whether seen as hosted more by identifiable individuals or by collectives, the concept should not be taken as an actual philosophy of mind (“Philosophie des Geistes”). It transcends it, as it transcends any particular philosophical stance. It would be equally wrong to confine it to an anthropology or an anthropological architecture of philosophy, as is the case not only in Hegel (Rölli, KAV137). In some way, it presents a generative zone for a-human philosophies, without falling prey to the necessity of defining what human or a-human should mean. For sure, we do not refer here to transhumanism as it is known today, which just follows the traditional anthropological imperative of growth (“Steigerungslogik”), as Rölli correctly remarks (KAV459).

A-human simply means that the conception is neither dependent on nor confined to the human Lebenswelt. (We would again like to stress that it neither represents a positively sayable universalism nor even a kind of universal procedural principle, and also that this “a-” should not be understood as “anti” or “opposed”, but simply as “being free of”.) It is this position that is mandatory for drawing comparisons40 and, subsequently, conclusions (in the form of introduced irreversibilities) about entities that belong to strikingly different Lebenswelten (forms of life). Any particular philosophical position would immediately be guilty of applying human scales to non-human entities. That was already a central cornerstone of Nietzsche’s critique, not only of German philosophy of the 19th century, but also of the natural sciences.

8.3. Simplicissimi

Rölli criticizes the uncritical adoption, by philosophy in the 19th century, of items taken from the scientific world view. Today, philosophy is still not secured against simplistic conceptions, uncritically assimilated from certain scientific styles, despite the fact that nowadays we could know about the (non-analytic) Linguistic Turn, or the dogmatics in empiricism. What I mean here comprises two conceptual ideas: the reduction of living or social systems to states, and the notions of exception and of normality respectively.

There are myriads of references in the philosophy of mind invoking so-called mental states. Yet, the state as a concept can be found not only in the philosophy of mind but also in political theory, namely in Giorgio Agamben’s recent work, which builds heavily on the notion of the “state of exception”. The concept of a mental state is utter nonsense, though, and mainly so for three very different reasons. The first one can be derived from the theory of complex systems, the second one from language philosophy, and the third one from the choreostemic space.

In complex systems, the notion of a state is empty. What we can observe subsequent to the application of some empiric modeling is that complex systems exhibit meta-stability. It looks as if they were stable and trivial. Yet, what we could have learned mainly from the biological sciences, but also from their formal consideration as complex systems, is that they aren’t trivial. There is no simple rule that could describe the flow of things in a particular period of time. The reason is precisely that they are creative. They build patterns, hence they build a further “phenomenal” level, where the various levels of integration can’t be reduced to one another. They exhibit points of bifurcation, which can be determined only in hindsight. Hence, from the empirical perspective we can only estimate the probability for stability. This, however, is clearly too weak to support the claim of “states”.
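The hindsight-character of bifurcations can be illustrated with a deliberately simple toy model. This is our own illustration, not part of the essay’s argument, and the logistic map is of course far simpler than the living systems discussed above; it merely shows how a qualitative change in long-run behaviour becomes visible only after the fact:

```python
# Toy model: the logistic map x -> r * x * (1 - x), a standard
# textbook example of bifurcation (not the essay's own formalism).
def trajectory(r, x0=0.2, burn_in=500, keep=8):
    """Iterate the map, discard a transient, return the long-run behaviour."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):
        x = r * x * (1 - x)
        out.append(round(x, 4))
    return out

# Below r = 3 the trajectory settles on a single value, looking
# "state"-like; just above, it oscillates between two values.  The
# bifurcation is determinable only in hindsight, i.e. only after
# observing the long-run behaviour on both sides of the threshold.
print(trajectory(2.8))  # one repeated value
print(trajectory(3.2))  # two alternating values
```

At no single moment of the iteration is the “state” visible; it is a property assigned retrospectively by the observer’s model of the whole trajectory.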

In philosophy, Deleuze and Guattari in their “Thousand Plateaus” (p.48) have been among the first who recognized the important abstract contribution of Darwin by means of his theory. He opened the possibility to replace types and species by populations, degrees by differential relations. Darwin himself, however, was not able to complete this move. It took another 100 years until Manfred Eigen coined the term quasi-species, as an increased density in a probability distribution. Talking about mental states is nothing but a fallback into Linnaean times, when science was the endeavor of organizing lists according to an uncritical use of concepts.

Actually, from the perspective of language-oriented philosophy, the notion of a state is empty even for any dynamical system that is subject to open evolution (and probably even for trivial dynamic systems). A real system does not build “states”. There are only flows and memories. “State” is a concept, in particular an idealistic—or at least an idealizing—concept, which is present only in the interpreting entity. The fact that one first has to apply a model before it is possible to assign states is deliberately suppressed whenever the concept is invoked by an argument that relates to philosophy or to any (other) kind of normativity. Therefore, the concept of “state” can’t be applied analytically, or as a condition in a linearly arranged argument. In saying this we do not claim that the concept of state is meaningless at large. In natural science, especially throughout the process of hypothesis building, the notion of state can be helpful (sometimes, at least).

Yet, if one were to use it in philosophy in a recurrent manner, one would quickly arrive at the choreostemic space (or something very similar), where states are neither necessary nor even possible. Despite the fact that a “state” is only assigned, i.e. as a concept, philosophers of mind41 and philosophers of political theory alike (such as Agamben [37], among other materialists) use it as a phenomenal reference. It is indeed somewhat astonishing to observe this relapse into naive realism within the community of otherwise trained philosophers. One of the reasons for this may well be the missing training in mathematics.42

The third argument against the reasonability of the notion of “state” in philosophy can be derived from the choreostemic space. A cultural body comprises individual mentality as well as a collective mentality based on externalized symbolic systems like language, to make a long story short. Both together provide the possibility for meaning. It is absolutely impossible to assign a “state” to a cultural body without losing the subject of culture itself. It would be much like a grammatical mistake. That “subject” is nothing else than a figurable trace in the choreostemic space. If one were to make such an assignment nevertheless, any finding would be relevant only within the reduced view. Hence, it would be completely irrelevant, as it could not support the self-imposed pragmatics. Continuing to argue about such a finding then establishes a petitio principii: one would find only what one originally assumed. The whole argument would be empty and irrelevant.

Similar arguments can be put forward regarding the notion of the exceptional, if it is applied in contexts that are governed by concepts and their interpretation, as opposed to trivial causal relationships. Yet, Giorgio Agamben indeed started to build a political theory around the notion of exception [37], which—at first sight strangely enough—has already triggered an aesthetics of emergency. Elena Bellina [38] cites Agamben:

The state of exception “is neither external nor internal to the juridical order, and the problem of defining it concerns a threshold, or a zone of indifference, where inside and outside do not exclude each other but rather blur with each other.” In this sense, the state of exception is both a structured or rule-governed and an anomic phenomenon: “The state of exception separates the norm from its application in order to make its application possible. It introduces a zone of anomie into the law in order to make the effective regulation of the real possible.”

Applying the notion of the exception to areas where normativity is relevant, e.g. in political theory, results in nothing less than disastrous consequences. Throughout history there are many, many terrible examples of that. It is even problematic in engineering. We may even call it fully legitimized “negativity engineering”, as it completely unnecessarily establishes the opposition of the normal and the deviant as an apriori. The notion of the exception presumes total control as an apriori. As such, it is opposed to the notion of openness, hence it also denies the primacy of interpretation. Machines that degenerate, and that would produce disasters upon any malfunctioning, can’t be considered to be built smartly. In a setup that embraces indeterminateness, there is not even a possibility for disastrous fault. Instead, deviances are defined only with respect to the expectable, not against an apriori set, hence obscure, normality. If the deviance is taken as the usual (not the normal, though!), fault-tolerance and even self-healing can be built in as a core property, not as an “exception handling”.

Exception is the negative category to the normal. It requires models to define normality, models to quantify the deviation, and finally also arbitrary thresholds to label it. All three steps can be applied only in linear domains, where the whole depends on just very few parameters. For social mega-systems such as societies, applying the concept of the exception is nothing less than a methodological categorical illusion.
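The three steps named above—a model of normality, a quantified deviation, an arbitrary threshold—can be made concrete in a minimal sketch of our own (the data and the cutoff values are hypothetical, chosen purely for illustration): the anatomy of simple z-score outlier detection is exactly this three-step construction, and moving the threshold makes the “exception” appear or vanish at will.

```python
# Sketch (our example, not from the essay): z-score "exception" labeling.
def exceptions(data, threshold=2.0):
    mean = sum(data) / len(data)                       # step 1: model of "normality"
    var = sum((x - mean) ** 2 for x in data) / len(data)
    std = var ** 0.5
    # step 2: quantify the deviation; step 3: apply an arbitrary cutoff
    return [x for x in data if std > 0 and abs(x - mean) / std > threshold]

data = [10, 11, 9, 10, 12, 10, 30]
print(exceptions(data))       # [30]  -- an "exception" at threshold 2.0
print(exceptions(data, 5.0))  # []    -- the very same datum is now "normal"
```

Nothing in the data itself changed between the two calls; the “exception” exists only relative to the chosen model and threshold, which is the text’s point about its apriori, and ultimately arbitrary, character.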

9. Critique of Paradoxically Conditioned Reason

Nothing could be more different from that than pragmatism, for which the choreostemic space can serve as the ultimate theory. Pragmatism has always suffered from—or at least has been vulnerable to—the reproach of relativism, because within pragmatism it is impossible to argue against it. With the choreostemic space we have constructed a self-sufficient, self-containing and necessary model that not only supports pragmatism, but also destroys any possibility of a universal normative position or normativity. Probably even more significant, it also abolishes relativism through the implied concept of the concrete choreostemic figure, which can be taken as the differential of the institution or of tradition43. Choreostemic figures are quite stable, since they relate to mentality qua population, which means that they are formed as a population of mental acts, or as the mental acts of the members of a population. Even for individuals it is quite hard to change the attractor inhabited in the choreostemic space, to move into another attractor, or even to build up a new one.

In this section we will examine the structure of the ways in which we can use the choreostemic space. Naively put, we could ask, for instance: How can we derive a guideline to improve actions? How can we use it to analyse a philosophical attitude or a political writing? Where are the limits of the choreostemic space?

The structure behind such questions concerns a choice on a quite fundamental level. The issue is whether to argue strictly in positive terms, to allow negative terms, or even to define everything starting from negative terms only. In fact, there are quite a few different possibilities to arrange any melange of positivity and negativity. For instance, one could ontologically insist first on contingency as a positivity, upon which constraints would then act as a negativity. Such traces we will not follow here. We regard them either as not focused enough or, for most of them, as being infected by realist ontology.

In more practical terms, this issue of positivity and negativity concerns how to deal with justifications and conditions. Deleuze argues for strict positivity; in that he follows Spinoza and Nietzsche. Common sense, in contrast, is given only insofar as it is defined against the non-common. In this respect, any of the existential philosophical attitudes, whether Christian religion, phenomenology or existentialism, are quite similar to each other. Even Levinas’ Other is infected by it.

Admittedly, at first glance it seems quite difficult, if not impossible, to arrive at an appropriate valuation of other persons, the stranger, the strange—in short, the Other, but also the alienated. Or likewise, to derive or develop a stance towards the world that does not start from existence. Isn’t existence the only thing we can be sure about? And isn’t the external, the experience, the only stable positivity we can think about? Here, we shout a loud No! Nevertheless, we definitely do not deny the external either.

We just mentioned that the issue of justification is invoked by our interests here. This gives rise to the question of the relation of the choreostemic space to epistemology. We will return to this in the second half of this section.

Positivity. Negativity.

Obviously, the problem of the positive is not the positive itself, but how we are going to approach it. If we set it as primary, we first run into problems of justification, then into ethical problems. Setting the external, the existent, or the factual positive as primary, we neglect the primacy of interpretation. Hence, we can’t think about the positive as an instance. We have to think of it as a Differential.

The Differential is defined as an entirety, yet not instantiated. Its factuality is potential, hence its formal being neither exhausts nor limits its factuality, or positivity. Its givenness demands action, that is, a decision (which is sayable regarding its immediacy) bundled with a performance (which is open and just demonstrable as a matter of fact).

The concept of the choreosteme closely follows Deleuze’s idea of the Differential: it is built into the possibility of expressibility that spans as the space between the _Directions as they are indicated by the transcendental aspects _A. The choreostemic space does not constitute a positively definable stance, since the space for it, the choreostemic space, is not made from elements that could be defined apriori to any moment in time. Nevertheless it is well-defined. To provide an example which requires a similar approach, we may refer to the space of patterns as they are potentially generated by Turing-systems. The mechanics of Turing-patterns, their mechanism, is well-defined as well; it is given in its entirety, but the space of the patterns can’t be defined positively. Without deep interpretation there is nothing like a Turing-pattern. Maybe that’s one of the reasons why the hard sciences still have difficulties in dealing adequately with complexity.
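The point about Turing-systems can be sketched in a few lines. The following is our own toy construction in the spirit of Turing’s activator-inhibitor mechanisms, not a faithful reaction-diffusion model; the kernel widths and weights are arbitrary assumptions. The update rule is given entirely and deterministically, yet whether the resulting configuration counts as “a pattern” is a matter of interpretation, not of the mechanism itself:

```python
# Toy 1-D "local activation, lateral inhibition" scheme (our own sketch,
# loosely in the spirit of Turing mechanisms; all parameters are arbitrary).
def lali_step(u, act=1.0, inh=0.6):
    n = len(u)
    new = []
    for i in range(n):
        near = sum(u[(i + d) % n] for d in (-1, 0, 1))      # short-range activation
        far = sum(u[(i + d) % n] for d in (-4, -3, 3, 4))   # long-range inhibition
        new.append(1.0 if act * near - inh * far > 0 else -1.0)
    return new

# deterministic seed: a single bump in an otherwise uniform field
u = [-1.0] * 40
u[20] = 1.0
for _ in range(30):
    u = lali_step(u)

# The seed grows into a stable localized spot whose width is fixed by the
# interplay of activation and inhibition ranges.  The mechanism is fully
# specified; the "spot" exists as a pattern only for an interpreting observer.
print(''.join('#' if v > 0 else '.' for v in u))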

Besides the formal description of structure and mechanism of our space, there is nothing left about which one could speak or think any further. We could only proceed by practicing it. This mechanism establishes a paradoxicality insofar as it does not contain determinable locations. This indeterminateness is even much stronger than the principle of uncertainty as it is known from quantum physics, which so far is not constructed in a self-referential manner (at least if we follow the received views). Without any determinate location, there seems to be no determinable figure either, at least none of which we could say that we could grasp it “directly”, or intuitively. Yet, figures may indeed appear in the choreostemic space, though only by applying orthoregulative scaffolds, such as traditions, institutions, or communities that form cultural fields of proposals/propositions (“Aussagefeld”), as Foucault named it [40].

The choreostemic space is not a negativity, though. It does not impose apriori determinable factual limits on a real situation, whether internal or external. It does not even provide the possibility for an opposite. Due to its self-referentiality it can be instantiated into positivity OR negativity, depending on the “vector”—actually, it is more a moving cloud of probabilities—to which one currently belongs or which one is currently establishing by one’s own performances.

It is the necessity of choice itself, appearing in the course of the instantiation of the twofold Differential, that introduces the positive and the negative. In turn, whenever we meet an opposite we can conclude that there has been a preceding choice within an instantiation. Think of de Saussure’s structuralist theory of language, which is full of opposites. Deleuze argues (DR205) that the starting point of opposites betrays language:

In other words, are we not on the lesser side of language rather than the side of the one who speaks and assigns meaning? Have we not already betrayed the nature of the play of language – in other words, the sense of that combinatory, of those imperatives or linguistic throws of the dice which, like Artaud’s cries, can be understood only by the one who speaks in the transcendent exercise of language? In short, the translation of difference into opposition seems to us to concern not a simple question of terminology or convention, but rather the essence of language and the linguistic Idea.

In more traditional terms one could say it depends on the “perspective”. Yet, the concept of “perspective” is fallacious here, at least insofar as it assumes a determinable standpoint. By means of the choreostemic space, we may replace the notion of perspectives by the choreostemic figure, which reflects both the underlying dynamics and the problematic field much more adequately. In contrast to a “perspective”, or even a series of such, a choreostemic figure spans across time. Another difference is that a perspective needs to be taken, which does not allow for continuity, while a choreostemic figure evolves continually. The possibility for negativity is determined along the instantiation from choreosteme to thought, while positivity is built into the choreostemic space as a potential. (Negative potentials are not possible.)

Thus, the choreostemic space is immune to any attempt—should we say poison pill?—to apply a dialectic of the negative, whether we consider single, double, or, absurdly enough, multiply repeated ones. Think of Hegel’s negativity, Marx’s rejection and proposal of a double negativity, or the dropback by Marcuse, all of which must be counted simply as stupidity. Negativity as the main structural element of thinking did not vanish, though, as we can see in the global movement of anti-capitalism or the global movement of anti-globalization. They all got—or still get—victimized by the failure to leave behind the duality of concepts and to turn them into a frame of quantifiability. A recent example of that ominous fault is given by the work of Giorgio Agamben; Morgan writes:

Given that suspending law only increases its violent activity, Agamben proposes that ‘deactivating’ law, rather than erasing it, is the only way to undermine its unleashed force. (p.60)

The first question, of course, is why the heck Agamben thinks that law, that is: any lawfulness, must be abolished. Such a claim includes the denial of any organization and any institution, above all as practical structures, as immaterial infrastructures and grounding for any kind of negotiation. As Rölli noted, in accordance with Nietzsche, there is quite an unholy alliance between romanticism and modernism. Agamben, completely incapable of becoming aware of the virtual and of the differential alike, and thus completely stuck in a luxuriating system of “anti” attitudes, finds himself faced with quite a difficulty. In his mono-(zero-)dimensional modernist conception of the world he claims:

“What is found after the law is not a more proper and original use value that precedes law, but a new use that is born only after it. And use, which has been contaminated by law, must also be freed from its value. This liberation is the task of study, or of play.”

Is it really reasonable to demand a world where uses, i.e. actions, are not “contaminated” by law? Morgan continues:

In proposing this playful relation Agamben makes the move that Benjamin avoids: explicitly describing what would remain after the violent destruction of normativity itself. ‘Play’ names the unknowable end of ‘divine violence’.

Obviously, Agamben never realized any paradox concerning rule-following. Instead, he runs amok against his own prejudices. “Divine violence” is the violence of ignorance. Yet, abolishing knowledge does not help either, nor is it an admirable goal in itself. As Derrida (another master of negativity) before him, in the end he demands a stop to interpretation, any of it and completely. Agamben provides us with nothing else than just another modernist flavour of a philosophy of negativity that results in nihilistic in-humanism (quite contrary to Nietzsche, by the way). It is somewhat terrifying that Agamben currently receives not just a little attention.

In the last statement we are going to cite from Morgan, we can see in which eminent way Agamben is a thinker of the early 19th century, incapable of contributing any reasonable suggestion to current political theory:

But it is not only the negative structure of the argument but also the kind of negativity that is continuous between Agamben’s analyses of aesthetic and legal judgement. In other words, ‘normality without a norm’, which paradoxically articulates the subtraction of normativity from the normal, is simply another way of saying ‘law without force or application’.

This Kantian formulation is not only fully packed with uncritical aprioris, such as normality or the normal, which marks Agamben as an epigonic utterer of common sense. As this ancient form of idealism demonstrates, Agamben obviously never heard of the linguistic turn either. The unfortunate issue with Agamben’s writing is that it is considered both influential and pace-making.

So, should we reject negativity and turn to positivity? Rejecting negativity turns problematic only if it is taken as an attitude that stretches from the principle down to the activity. Notably, the same is true for positivity. We need not get rid of them, which would only send us into the abyss of totalised mysticism. Instead, we have to transcend them into the Differential that “precedes” both. While the former can be reframed into the conditionability of processes (but not into constraints!), the latter finds its non-representational roots in the potential and the virtual. If the positive is taken as a totalizing metaphysics, we soon end up in overdone specialization, uncritical neo-liberalism or even dictatorship, or in idealism as an ideology. The turn to a metaphysics of (representational) positivity is incurably caught in the necessity of justification, which—unfortunately enough for positivists—can’t be grounded within a positive metaphysics. To justify, that is, to give “good reasons”, is a contradictio in adiecto if it is understood in its logical or idealistic form.

Both negativity and positivity (in their representational instances) could work only if there is a preceding and more or less concrete subject, which of course cannot be presupposed when we are talking about “first reasons” or “justification”. This does not only apply to political theory or practice; it even holds for logic as a positively given structure. Abstractly, we can rewrite the concreteness into countability. Turning the whole thing around, we see that as long as something is countable we will be confined by negativity and positivity on the representational level. Herein lies the limitation of the Universal Turing Machine. Herein lies also the inherent limitation of any materialism, whether in its profane or its theistic form. By means of the choreostemic space we can see various ways out of this confined space. We may, for instance, remove the countability from numbers by mediatizing it into probabilities. Alternatively, we may introduce a concept like infinity to indicate the conceptualness of numbers and countability. It is somewhat interesting that it is the concept of the infinite that challenges the empiric character of numbers. Or we could deny representationalism in numbers while trying to keep countability, which creates the strange category of infinitesimals. Or we create multi-dimensional number spaces like the imaginary numbers. There are, of course, many, many ways to transcend the countability of numbers, which we can’t even list here. Yet, it is of utmost importance to understand that the infinite, like any other instance of departure from countability, is not a number any more. It is not countable either in the way Cantor proposed, that is, by thinking of a smooth space of countability that stretches between empiric numbers and the infinite. We may count just the symbols, but the reference has inevitably changed. The empirics targets the number of the symbols, not their content, which has been defined as “incountability”.
Only by this misunderstanding could one be struck by the illusion that there is something like the countability of the infinite. In some ways, even the real numbers do not refer to the language game of countability, and all the more the irrational numbers don’t either. It is much more appropriate to conceive of them as potential numbers; it may well be that precisely this is the major reason for the success of mathematics.
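As a loose aside of our own, not the essay’s: the claim that the infinite “is not a number any more” has an exact echo in computing. IEEE-754 floating-point arithmetic contains a symbol written like a number, `inf`, which no longer takes part in counting:

```python
# IEEE-754 "inf": written as if it were a number, but absorbed from counting.
import math

inf = float('inf')
print(inf + 1 == inf)         # True: "counting up" no longer changes anything
print(inf == 2 * inf)         # True: multiplication is absorbed as well
print(math.isnan(inf - inf))  # True: subtraction loses its meaning entirely
```

The symbol can itself be counted (it is one token among others), but what it refers to has left the language game of countability, which is precisely the shift of reference described above.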

The choreostemic space is the condition for separating the positive and the negative. It is structure and tool, principle and measure. Its topology implies the necessity of instantiation and renders the representationalist fallacy impossible; nevertheless, it allows us to map mental attitudes and cultural habits for comparative purposes. Yet, this mapping can’t be used for modeling or anticipation. In some way it is the basis for subjectivity as a pre-specific property, that is, for a _Subjectivity, of course without objectivity. Therefore, the choreostemic space also allows us to overcome the naïve and unholy separation of subjects and objects, without denying the practical dimension of this separation. Of course, it does so by rejecting even the tiniest trace of idealism, or apriorisms respectively.

The choreostemic space does not separate apriori the individual from the collective forms of mentality. In describing mentality it is not limited to the sayable, hence it can’t be attacked or even swallowed by positivism. Since it provides the means to map those habitual _Mental figures, people could talk about transitions between different attractors, which we could call “choreostemic galaxies”. The critical issue of values, those typical representatives of uncritical aprioris, is completely turned into a practical concern. Obviously, we can talk about “form” regarding politics without the need to invoke aesthetics. As Benjamin Morgan recently demonstrated (in the already cited [41]), aesthetics in politics necessarily refers to idealism.

Rejecting representational positivity, that is, any positivity of which we could speak in a formal manner, is equivalent to the rejection of first reason as an aprioric instance. As we already proposed for representational positivity, the claim of a first reason as a point of departure that is never revisited again results just as well in a motionless endpoint, somewhere in the triangle built from materialism, idealism and realism. Attempts to soften this outcome by proposing a playful, or hypothetical, if not pragmatic, “fixation of first principles” are not convincing, mainly because this does not allow for any coherence between the games, which results in a strong relativity of principles. We just could not talk about the relationships between those “firstness games”. In other words, we would not gain anything. An example of such a move is provided by Epperson [42]. Though he refers to the Aristotelian potential, he sticks with representational first principles, in his case logic in the form of the principle of the excluded middle and the principle of non-contradiction. Epperson does not become aware of the problems regarding the use of symbols in doing this; Wittgenstein once criticized the very same point in the Principia by Russell and Whitehead. Additionally, representational first principles are always transporters of ontological claims. As soon as we recognize that the world is NOT made from objects, but of relations organized, selected and projected by each individual through interpretation, first principles face severe difficulties. Only naive realism allows for a frictionless use of first principles. Yet, for a price that is definitely too high.

We think that the way we dissolved the problem of first reason has several advantages as compared to Deleuze’s proposal of the absolute plane of immanence. First, we do not need the notion of absoluteness, which appears at several instances in Deleuze’s main works “What is Philosophy?” [35] (WIP), “Empiricism and Subjectivity” [43], and his “Pure Immanence” [44]. The second problem with the plane of immanence concerns the relation between immanence and transcendence. Deleuze refers to two different kinds of transcendence. While in WIP he denounces transcendence as inappropriate due to its heading towards identity, the whole concept of transcendental empiricism is built on the Kantian invention. This double standard can’t be resolved. Transcendence should not be described by its target. Third, Deleuze’s distinction between the absolute plane of immanence and the “personal” one, instantiated by each new philosophical work, leaves a major problem: Deleuze leaves completely opaque how the two kinds of immanence relate to each other. Additionally, there is a potentially infinite number of “immanences,” implying a classification, a differential and an abstract kind of immanence, all of which is highly corrosive for the idea of immanence itself—at least as long as one conceives of immanence not as an entity that could be naturalized. This way, Deleuze splits the problem of grounding into two parts: (1) a pure, hence “transcendent” immanence, and (2) the gap between absolute and personal immanence. While the first part could be accepted, the second one is left completely untouched by Deleuze. The problem of grounding has just been moved into a layer cake. Presumably, these problems are caused by the fact that Deleuze considers only concepts, or _Concepts, if we’d like to consider the transcendental version as well. Several of those imply the plane of immanence, which can’t be described, which has no structure, and which is just implied by the factuality of concepts.
Our choreostemic space moves this indeterminacy and openness into a “form” aspect in a non-representational, non-expressive space with the topology of a double differential. But more important is that we not only have a topology at our disposal which allows us to speak about the space without imposing any limitation; we also use three other foundational and irreducible elements to think that space, the choreostemic space. The choreostemic space thus also brings immanence and transcendence into the same single structure.

In this section we have discussed a change of perspective towards negativity and positivity. This change became accessible through the differential structure of the choreostemic space. The problematic field represented by them, and all the respective pseudo-solutions, has been dissolved. This abandonment we achieved through the “Lagrangean principle”, that is, we replaced the constants—positivity and negativity respectively—by a procedure—the instantiation of the Differential—plus a different constant. Yet, this constant is itself not a finite replacement, i.e. not a “constant” in the sense of an invariance. The “constant” is only a relative one: the orthoregulation, comprising habits, traditions and institutions.

Reason—or, as we would like to propose for its less anthropological character and better scalability, mentality—has been reconstructed as a kind of omnipresent reflection on the conditionability of proceedings in the choreostemic space. The conditionability can’t be determined in advance of the performed mental proceedings (acts), which to many could appear as somewhat paradoxical. Yet, it is not. The situation is quite similar to Wittgenstein’s transcendental logic, which also gets instantiated just by doing something, while the possibility of performance precedes that of logic.

Finally, there is of course the question whether there is any condition that we impose onto the choreosteme itself, a condition that would not be resolved by its self-referentiality. Well, there is indeed one: the only unjustified apriori of the choreostemic space seems to be the primacy of interpretation (POI). This apriori, however, is only a weak one, and above all a practicable one, one that derives from the openness of the world. Ultimately, the POI is in turn a direct consequence of the time-being. Any other aspect of interpretation is indeed absorbed by the choreostemic space and its self-referentiality, hence requiring no further external axioms or the like. In other words, the starting point of the choreostemic space, or the philosophical attitude of the choreosteme, is openness: the insight that the world is far too generative to comprehend all of it.

The fact that it comes almost without any apriori renders the choreostemic space suitable for those practical purposes where openness and its sibling, ignorance, call for dedicated activity, e.g. in all questions of cross-disciplinarity or trans-culturality. Insofar as different persons establish different forms of life, the choreostemic space is even highly relevant for any aspect of cross-personality. This in turn gives rise to a completely new approach to ethics, which we can’t pursue here, though.

<h5>Mentality without Knowledge</h5>

Two of the transcendental aspects of the choreostemic space are _Model and _Concept. The concepts of model and concept, that is, instantiations of our aspects, are key terms in philosophy of science and epistemology. Further, we proposed that our approach brings with it a new image of thought. We also said that mental activities inscribe figures or attractors into that space. Since we are additionally interested in the issue of justification—we are trying to get rid of it—the question of the relation between the choreostemic space and epistemology arises.

The traditional primary topic of epistemology is knowledge and how we acquire it, particularly, however, the questions of, first, how to separate it from beliefs (in the common sense) and, second, how to secure it in a way that we could possibly speak about truth. In a more general account, epistemology is also about the conditions of knowledge.

Our position is pretty clear: the choreostemic space is something that is categorically different from episteme or epistemology. What are the reasons?

We reject the view that truth in its usual version is a reasonable category for talking about reasoning. Truth as a property of a proposition can’t be a part of the world. We can’t know anything for sure, neither regarding the local context nor globally. Truth is an element of logic, and the only truth we can know of is empty: a=a. Yet, knowledge is supposed to be about empirical facts (arrangements of relations). Wittgenstein thus set logic as transcendental. Only transcendental logic can be free of semantics, and thus only within transcendental logic can we speak of truth conditions. The consequence is that we can observe either of two effects. First, any actual logic contains some semantic references, because of which it could be regarded as “logic” only approximately. Second, insisting on the application of logical truth values to actual contexts instead results in a categorical fault. The conclusion is that knowledge can be secured neither locally, from a small given set of sentences about empirical facts, nor globally. We can’t even measure the reliability of knowledge, since this would mean having more knowledge about the fact than the local observations provide. As a result, paradoxes and antinomies occur. The only thing we can do is try to build networks of stable models for a negotiable anticipation with negotiable purposes. In other words, facts are not given by relations between objects, but rather as a system of relations between models, which as a whole is both accepted by a community of co-modelers and provides satisfying anticipatory power. Compared to that, the notion of partial truth (Newton da Costa & Steven French) is still misconceived. It keeps sticking to the wrong basic idea and as such is inferior to our concept of the abstract model. After all, any account of truth disregards the fact that it is itself a language game.

Dropping the idea of truth, we could already conclude that the choreostemic space is not about epistemology.

Well, one might say, OK, then it is an improved epistemology. Yet, this we would reject as well. The reason for that is a grammatical one. Knowledge in the sense of epistemology is about either sayable or demonstrable facts. If someone says “I know”, or if someone ascribes to another person “he knows”, or if a person performs well and in hindsight her performance is qualified as “based on intricate knowledge” or the like, we postulate an object or entity called knowledge, almost in an ontological fashion. This perspective has been rejected by Isabelle Peschard [45]. According to her, knowledge can’t be separated from activity, or “enaction”, and knowledge must be conceived as a socially embedded practice, not as a stateful outcome. For her, knowledge is not about representation at all. This includes the rejection of truth conditions as a reasonable part of a concept of knowledge. Moreover, it will be impossible to give a complete or analytical description of this enaction, because it is impossible to describe (=to explicate) the Form of Life in a self-contained manner.

In any case, however, knowledge is always, at least partially, about how to do something, even if it is about highly abstract issues. That means that a partial description of knowledge is possible. Yet, as a second grammatical reason, the choreostemic space does not allow for any representations at all, due to its structure, which is strictly local and made up of the second-order differential.

There are further differences. The CS is a tool for the expression of mental attractors, to which we can assign distinct yet open forms. To do so we need the concepts of mediality and virtuality, which are not mentioned anywhere in epistemology. Mental attractors, or figures, will always “comprise” beliefs, models, ideas and concepts as instances of transcendental entities, and these instances are local instances, which are even individually constrained. It is not possible to explicate these attractors other than by “living” them.

In some way, the choreostemic space is intimately related to the philosophy of C.S. Peirce, which is called “semiotics”. Like him, we propose a primacy of interpretation. We fully embrace his emphasis that signs only refer to signs. We agree with his attempt at discerning different kinds of signs. And we think that his firstness, secondness and thirdness could be related to the mechanisms of the choreostemic space. In some way, the CS could be conceived as a generalization of semiotics. Saying this, we may also point to the fact that Peirce’s philosophy is not regarded as epistemology either.

Rejecting the characterization of the choreostemic space as an epistemological subject, we can now even better understand the contours of the notion of mentality. The “mental” can’t be considered a set of things like beliefs, wishes, experiences, expectations, thought experiments, etc. These are just practices, or likewise practices of speaking, about the relation between private and public aspects of thinking. All of these items belong to the same mentality, to the same choreostemic figures.

In contrast to Wittgenstein, however, we propose to discard completely the distinction between internal and external aspects of the mental.

“And nothing is more wrong-headed than calling meaning a mental activity! Unless, that is, one is setting out to produce confusion.” [PI §693]

One of the transcendental aspects in the CS is concept, another is model. Together they provide the aspects of use, idea and reference; that is, there is nothing internal and external any more. It simply depends on the purpose of the description, or the kind of report we want to create about the mental, whether we talk about the mental in an internalist or an externalist way, whether we talk about acts, concepts, signs, or models. Regardless of what we do as humans, it will always be predominantly a mental act, irrespective of the accompanying material reconfigurations.

<h5>10. Conclusion</h5>

It is probably not an exaggeration to say that in the last two decades the diversity of mentality has been discovered. A whole range of developments and shifts in public life may have contributed to that, across several domains: politics, technology, social life, behavioural science and, last but not least, brain research. We saw the end of the Cold War, which signalled an unrooting of functionalism far beyond the domain of politics, and simultaneously the growth and discovery of the WWW and its accompanying “scopic media”44 [46, 47]. The “scopics” spurred the so-called globalization that, at least so far, worked much more in favour of the recognition of diversity than it levelled that diversity. While we are still in the midst of the popularization and increasingly abundant usage of so-called machine learning, we already witness an intensified mutual penetration and amalgamation of technological and social issues. In the behavioural sciences, probably also supported by the deepening of mediatization, an unforeseen interest in the mental and social capabilities of animals manifested, pushing back the merely positivist and dissecting description of behavior. One of the most salient examples is the confirmation of cultural traditions in dolphins and orcas, concerning communication as well as highly complex collaborative hunting. The unfolding of collaboration requires the mutual and temporal assignment of functional roles for a given task. This presupposes not only a true understanding of causality, but even its reflected use as a game in probabilistic spaces.

Let us distil three modes or forms here: (i) animal culture, (ii) machine-becoming and of course (iii) the human forms of life in the age of intensified mediatization. All three modes must be considered “novel” ones, for one reason or another. We won’t go into any further detail here, yet it is pretty clear that the triad of these three modes renders any monolithic or anthropologically imprinted form of philosophy of mind impossible. In turn, any philosophy of mind that is limited to just the human brain’s relation to the world, or even worse, which imposes analytical, logical or functional perspectives onto it, must be considered seriously defective. This still applies to large parts of the mainstream in philosophy of mind (and even ethics).

In this essay we argued for a new Image of Thought that is independent from the experience of or by a particular form of life, form of informational45 organization or cultural setting, respectively. This new Image of Thought is represented through the choreostemic space. This space is dynamic and active and can be described formally only if it is “frozen” into an analytical reduction. Yet, its self-referentiality and self-directed generativity is a major ingredient. This self-referentiality takes a salient role in the space’s capability to leave its conditions behind.

One of the main points of the choreostemic space (CS) probably is that we cannot talk about “thought”—regardless of its quasi-material and informational foundations—without referring to the choreostemic space. It is a (very) strong argument against Rylean concepts of the mind, which claim the irrelevance of the concept of the mental by proposing that looking at behavior is sufficient for talking about the “mind”. Of course, the CS does not support “the dogma of the ghost in the machine“ either. The choreostemic space defies (and helps to defy) any empirical and thus also anthropological myopia through its triple feature of transcendental framing, differential operation and immanent rooting. Thus it is immune against naturalist fallacies such as Cartesian dualism as well as against arbitrariness or relativism. Neither could it be infected by any kind of preoccupation such as idealism or universalism. Although one could regard it in some way as “pure Thought”, or consider it the expressive situs of it, its purity is not an idealistic one. It dissolves either into the metaphysical transcendentality of the four conceptual aspects _a, that is, the _Model, _Mediality, _Concept and _Virtuality, or it takes the form of the Differential, which could be considered a kind of practical transcendentality46 [48]. There, as one of her starting points, Bühlmann writes:

Deleuze’s fundamental critique in Difference and Repetition is that throughout the history of philosophy, these conditions have always been considered as »already confined« in one way or another: Either within »a formless, entirely undifferentiated underground« or »abyss« even, or within the »highly personalized form« of an »autocratically individuated Being«

Our choreostemic space also provides the answer to the problematics of conditions.47 Like Deleuze, we suggest regarding conditions only as secondary, that is, as relevant entities only after any actualization. This avoids negativity as a metaphysical principle. Yet, in order to get completely rid of any condition while at the same time retaining conditionability as a transcendental entity, we have to resort to self-referentiality as a generic principle. Hence, our proposal goes beyond Deleuze’s framework as he developed it from “Difference and Repetition” to “What is Philosophy?”, since he never made this move.

Basically, the CS supports Wittgenstein’s rejection of materialism, which experienced a completely unjustified revival in the various shades of neuro-isms. Malcolm cites him [49]:

It makes as little sense to ascribe experiences, wishes, thoughts, beliefs, to a brain as to a mushroom. (p.186)

This support should not surprise, since the CS was deliberately constructed to be compatible with the concept of the language game. Although the CS also supports his famous remark about meaning:

“And nothing is more wrong-headed than calling meaning a mental activity! Unless, that is, one is setting out to produce confusion.” [PI §693]

it is also clear that the CS may be taken as a means to overcome the debate about external or internal primacies or foundations of meaning. The duality of internal vs. external is neutralized in the CS. While modeling, and thus the abstract model, always requires some kind of material body, hence representing the route into some interiority, the CS is also spanned by the Concept and by Mediality. Both concepts are explicit ties between any kind of interiority and any kind of exteriority, without preferring a direction at all. The proposal that any mental activity inscribes attractors into that space just means that interiority and exteriority can’t be separated at all, regardless of the actual conceptualisation of mind or mentality. Yet, in accordance with PI 693 we also admit that the choreostemic space is not equal to the mental. Any particular mentality unfolds as an actual performance in the CS. Of course, the CS does not describe material reconfigurations, environmental contingency etc., nor the performance taking place “there”. In other words, it does not cover any aspect of use. On the other hand, material reconfigurations are simply not “there” as long as they do not get interpreted by applying some kind of model.

The CS clearly shows that we should regard questions like “Where is the mind?” as a kind of grammatical mistake, as Blair lucidly demonstrates [50]. Such a usage of the word “mind” not only irrevocably implies that it is a localizable entity; it also claims its conceptual separateness. Such a conceptualization of the mind is illusory. The consequences for any attempt to render “machines” “more intelligent” are obviously quite dramatic. As for the brain, it is likewise impossible to “localize” mental capacities in the case of epistemic machines. This fundamental de-territorialization is not a consequence of scale, as in quantum physics. It is a consequence of the verticality of the differential, the related necessity of forms of construction and the fact that a non-formal, open language, implying randolations to the community, is mandatory for dealing with concepts.

One important question about a story like the “choreostemic space”, with its divergent but nevertheless intimately tied four-fold transcendentality, concerns the status of that space. What “is” it? How could it affect actual thought? Since we have been starting even with mathematical concepts like space, mappings, topology, or differential, and since our arguments frequently invoke the concept of mechanism, one could suspect that it is a piece of analytical philosophy. This ascription we can clearly reject.

Peter Hacker convincingly argues that “analytical philosophy” can’t be specified by a set of properties of such an assumed philosophy. He proposes to consider it a historical phase of philosophy, with several episodes, beginning around 1890 [53]. Nevertheless, during the 1970s a set of beliefs formed a kind of basic setup. Hacker writes:

But there was broad consensus on three points. First, no advance in philosophical understanding can be expected without the propaedeutic of investigating the use of the words relevant to the problem at hand. Second, metaphysics, understood as the philosophical investigation into the objective, language-independent, nature of the world, is an illusion. Third, philosophy, contrary to what Russell had thought, is not continuous with, but altogether distinct from science. Its task, contrary to what the Vienna Circle averred, is not the clarification or ‘improvement’ of the language of science.

Where we definitely disagree is the point about metaphysics. Not only do we refute the view that metaphysics is about the objective, language-independent nature of the world; understood as such, we would indeed reject metaphysics. An example of this kind of thinking is provided by the writings of Whitehead. It should have become clear throughout our writing that we stick to the primacy of interpretation, and accordingly we regard the belief in an objective reality as deeply misconceived. Thereby we neither claim that our mental life is independent from the environment—as radical constructivism (Varela & Co.) does—nor do we claim that there is no external world around us that is independent from our perception and constructions. Such would just be belief in metaphysical independence, which plays an important role in modernism. The idea of objective reality is also infected by this belief, resulting in a self-contradiction. For “objective” makes sense only as an index to some kind of sociality, and hence to a group sharing a language, and further to the use of language. The claim of “objective reality” is thus childish.

More importantly, however, we have seen that the self-referentiality of terms like concept (we called those “strongly singular terms“) forces us to acknowledge that Concept, much like logic, is a transcendental category. Obviously we refer strongly to transcendental, that is, metaphysical categories. At the same time, however, we also propose that there are manifolds of instances of those transcendental categories.

The choreostemic space describes a mechanism. In that it resembles the science of biology, where the concept of mechanism is an important epistemological tool. As such, we try to defend against mysticism, against the threat posed by any all-too-quick reference to the “Lebenswelt”, the form of life and the ways of living. But is it really an “analysis”?

Putnam called “analysis” an “inexplicable noise” [54]. His critique was precisely that semantics can’t be found by any kind of formalization, that is, outside of the use of language. In this sense we certainly are not doing analytic philosophy. As a final point we again want to emphasize that it is not possible to describe the choreostemic space completely, that is, all its conditions and effects, etc., due to its self-referentiality. It is a generative space that confirms its structure by itself. Nevertheless it is neither useless nor does it support solipsism. It can be used to describe the entirety of mental activity, though only as a fully conscious act, while this description is a fully non-representational one. In this way it overcomes not only the Cartesian dualism about consciousness. In fact, it is another way to criticise the distinction between interiority and exteriority.

For one part we agree with Wittgenstein’s critique (see also the work of P.M.S. Hacker on that), which identifies the “mystery” of consciousness as an illusion. The concept of the language game, which is for one part certainly an empirical concept, is substantial for the choreostemic space. Yet, the CS provides several routes between the private and the communal, without actually representing one or the other. The CS does not distinguish between the interior and the exterior at all; just recall that mediality is one of the transcendental aspects. Along with Wittgenstein’s “solipsistic realism” we consequently also reject the idea that ontology can be about the external world, as this again would introduce such a separation. Quite the contrary, the CS dissolves the need for the naive conception of ontology. Ontology makes sense only within the choreostemic space.

Yet, we certainly embrace the idea that mental processes are ultimately “based” on physical matter, but unfolded into and by their immaterial external surrounds, yielding an inextricable compound. Referring to any “neuro” stuff regarding the mental neither “explains” anything nor is it helpful in any regard, whether one considers it as neuro-science or as neuro-phenomenology.

Summarizing the issue, we may say that the choreostemic space opens a completely new level for any philosophy of the mental, not just of what is being called the human “mind”. It also allows us to address scientific questions about the mental in a different way, and it clarifies the route to machines that could draw their own traces and figures into that space. It makes irrevocably clear that any kind of functionalism or materialism is once and for all falsified.

Let us now finally inspect the initial question that we put forward in the editorial essay. Is there a limit to the mental capacity of machines? If yes, which kind of limit, and where could we draw it? The question about the limit of machines directly triggers the question about the image of humanity („Bild des Menschen“), which is fuelled from the opposite direction. So, does this imply a kind of demarcation line between the domain of the machines and the realm of the human? Definitely not, of course. To opt for such a separation would not only follow the idealist-romanticist line of criticising technology, but also instantiate a primary negativity.

Based on the choreostemic space, our proposal is a fundamentally different one. It can be argued that this space contains any condition of any thought as a population of unfolding thoughts. These unfoldings inscribe different successions into the space, appearing as attractors and figures. The key point is that different figures, representing different Lebensformen (Forms of Life) that are probably even incommensurable with each other, can be related to each other without reducing any of them. The choreostemic space is a space of mental co-habitation.

Let us for instance start with the functionalist perspective that has been so abundant in modernism since the times of Descartes. A purely functionalist stance is just a particular figure in that space, as applies to any other style of thinking. Using the dictum of the choreosteme as a guideline, it is relatively easy to widen the perspective into a more appropriate one. Several developmental paths into a different choreostemic attractor are possible: for instance, mediatization through social embedding [52], opening through autonomous associative mechanisms as we have described it, or the ad-hoc recombination of conceptual principles as demonstrated by Douglas Hofstadter. Letting a robot range around freely also provokes the first tiny steps away from functionalism, albeit the behavioral Bauplan of the insects (arthropoda) demonstrates that this does not establish a necessary evolutionary path to advanced mental capabilities.

The choreostemic space can serve as such a guideline because it is not infected by anthropology in any regard. Nevertheless it allows us to speak clearly about concepts like belief and knowledge, of course without reducing these concepts to positive definite or functionalist definitions. It also remains completely compatible with Wittgenstein’s concept of the language game. For instance, we reconstructed the language game “knowing” as a label for a pointer (say, a reference) to a particular image of thought and its use. Of course, this figure should not be conceived as a fixed-point attractor, as the various shades of materialism, idealism and functionalism actually would do (if they were to argue along the choreosteme). It is somewhat interesting that here, by means of the choreostemic space, Wittgenstein and Deleuze approach each other quite closely, something they themselves probably would not have supported.

Where is the limit of machines, then?

I guess any answer must refer to the capability to leave a well-formed trace in the choreostemic space. As such, the limits of machines are to be found in the same way as they are found for us humans: to feel and to act as an entity that is able to contribute to culture and to assimilate it in its mental activity.

We started the choreostemic space as a framework to talk about thinking, or more generally about mentality, in a non-anthropological and non-reductionist manner. In the course of our investigation, we found a tool that actualizes itself into real social and cognitive situations. We also found the infinite space of choreostemic galaxies as attractors for eternal returns without repetition of the identical. The choreosteme keeps the “any” alive without subjugating individuality; it provides a new and extended level of sayability without falling into representationalism. Taken together, as a new Image of Thought it allows us to develop thinking deliberately and as part of a multitudinous variety.


1. This piece is thought of as a close relative to Deleuze’s Difference & Repetition (D&R)[1]. Think of it as a satellite of it, whose point of nearest approach is at the end of part IV of D&R, and thus also as a kind of extension of D&R.

2. Deleuze of course belongs to them, but of course also Ludwig Wittgenstein (see §201 of PI [2], the “paradox” of rule following), and Wilhelm Vossenkuhl [3], who presented three mutually paradoxical maxims as a new kind of theory of morality (ethics) that resists the reference to monolithically set first principles, such as in John Rawls’ “Theory of Justice”. The work of those philosophers also provides examples of how to turn paradoxicality productive, without creating paradoxes at all, the main trick being to overcome their fixation by a process. Many others, including Derrida, just recognize paradoxes, but are able neither to conceive of paradoxicality nor to distinguish it from paradoxes; hence they take paradoxes just as unfortunate ontological knots. In such works, one can usually find one or the other way to prohibit interpretation (think of the trace, Germ. “Spur”, in Derrida).

3. Paradoxes and antinomies like those described by Taylor, Banach-Tarski, Russell or of course Zeno are all defective, i.e. pseudo-paradoxes, because they violate their own “gaming pragmatics”. They are not paradoxical at all, but rather either simply false or arbitrarily fixed within the state of such violation. The same fault is committed by the Sorites paradox and its relatives. They all mix up—or collide—the language game of countability or counting with the language game of denoting non-countability, as represented by the infinite or the infinitesimal. Instead of saying that they violate the apriori self-declared “gaming pragmatics”, we could also say that they change the most basic reference system on the fly, without any indication of doing so. This may happen through an inadequate use of the concept of infiniteness.

4. DR 242 eternal return: it is not the same and the identical that returns, but the virtual structuredness (not even a “principle”), without which metamorphosis can’t be conceived.

5. In „Difference and Repetition“, Deleuze chose to spell “Idea” with a capital letter, in order to distinguish his concept from the ordinary word.

7. Here we find interesting possibilities for a transition to Alan Turing‘s formal foundation of creativity [5].

8. This includes the usage of concepts like virtuality, differential and problematic field, the rejection of the primacy of identity and, closely related to that, the rejection of negativity, the rejection of the notion of representation, etc. Rejecting the negative opens an interesting parallel to Wittgenstein’s insisting on the transcendentality of logics and the subordination of any practical logic to performance. Since the negative is a purely symbolic entity, it is also purely a posteriori to any genesis, that is, self-referential performance.

9. I would like to recommend taking a look at the second part of part IV in D&R, and maybe also at the concluding chapter therein (download it here).

10. Saying „we“ here is not just due to some hyperbolic politeness. The targeted concept of this essay, the choreosteme, has been developed by Vera Bühlmann and the author of this essay (Klaus Wassermann) in close collaboration over a number of years. Finally, the idea proved to be so strong that now there is some dissent about the role and usage of the concept.

11. For belief revision as described by others, see the overview @ Stanford, and a critique by Pollock, who clarified that belief revision as comprised and founded by the AGM theory (see below) is incompatible with standard epistemology.

12. By symbolism we mean the belief that symbols are the primary and a priori existent entities for any description of any problematic field. In machine-based epistemology, for instance, we cannot start with data organized in tables because this presupposes a completed process of “ensymbolization”. Yet, in the external world there are no symbols, because symbols only exist subsequent to interpretation. We can see that symbolism creates a chicken-and-egg problem.

13. Miriam Meckel, communication researcher at the University of Zürich, is quite active in drawing dark-grey pictures. Recently, she coined “Googlem” as a blend of Google and Golem. Meckel commits several faults in that: she does not understand the technology (accusing Google of using averages), and she forgets about the people (programmers) behind “the computer”, as well as the people using the software. She follows exactly the pseudo-romantic separation between nature and the artificial.

Miriam Meckel, Next. Erinnerungen an eine Zukunft ohne uns, Rowohlt 2011.

14. Here we find a resemblance to Wittgenstein’s refusal to attribute to philosophy the role of an enabler of understanding. According to Wittgenstein, philosophy does not and cannot even describe. It can just show.

15. This also concerns the issue of cross-culturality.

16. Due to some kind of cultural imprinting, a frequently and solitarily exercised habit, people almost exclusively think of Cartesian spaces as soon as a “space” is needed. Yet, there is no necessary implication between the need for a space and the Cartesian type of space. Even Deleuze did not recognize the difficulties implied by the reference to Cartesian space, not only in D&R, but throughout his work. Nevertheless, there are indeed passages (in “What is Philosophy?” with the “planes of immanence”, or in “The Fold”) where it seems that he could have sensed a different conception of space.

17. For the role of “elements” please see the article about “Elementarization”.

18. Vera Bühlmann [8]: “In particular, a re-determination of the Aristotelian relation of virtuality and actuality is developed, from the point of view that in the concept of the virtual, put very briefly, the problem of structural infinity meets the problem of sign-theoretical reference.”

19. which is also a leading topic of our collection of essays here.

20. e.g. Gerhard Gamm, Sybille Krämer, Friedrich Kittler

21. cf. G.C. Tholen [7], V.Bühlmann [8].

22. see the chapter about machinic platonism.

23. Actually, Augustine instrumentalises the difficulty he discovered in order to propose the impossibility of understanding God’s creation.

24. It is an “ancestry” only with respect to the course in time, as the result of a process, not however in terms of structure, morphology, etc.

25. cf. C.S. Peirce [16], Umberto Eco [17], Helmut Pape [18];

26. Note that in terms of abstract evolutionary theory, rugged fitness landscapes enforce specialisation, but they also bring an increased risk of extinction for the whole species. Flat fitness landscapes, on the other hand, allow for great diversity. Of course, the fitness landscape is not a stable parameter space, neither locally nor globally. In some sense it is not even a determinable space. Much like the choreostemic space, it would be adequate to conceive of the fitness landscape as a space built from the 2-set of transformatory power and the power to remain stable. Both can be determined only in hindsight. This paradoxicality is not by chance, yet it has not been discovered as an issue in evolutionary theory.
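The contrast drawn here between rugged and flat landscapes can be made concrete with a toy sketch; everything in it (the NK-style epistatic construction, the one-bit-flip hill climber) is a hypothetical illustration added for this note, not part of the argument above:

```python
import random
random.seed(1)

N = 12  # genotype: bit string of length N

def smooth_fitness(g):
    # flat/smooth landscape: purely additive, no epistasis, one global peak
    return sum(g)

# rugged landscape: random fitness contribution per locus, conditioned on one
# neighbouring locus (an NK-like construction with K = 1)
table = [[random.random() for _ in range(4)] for _ in range(N)]
def rugged_fitness(g):
    return sum(table[i][2 * g[i] + g[(i + 1) % N]] for i in range(N))

def hill_climb(fitness, g):
    # flip single bits until no flip improves fitness: a local optimum
    improved = True
    while improved:
        improved = False
        for i in range(N):
            h = g[:]; h[i] ^= 1
            if fitness(h) > fitness(g):
                g, improved = h, True
    return g

starts = [[random.randint(0, 1) for _ in range(N)] for _ in range(50)]
smooth_optima = {tuple(hill_climb(smooth_fitness, s)) for s in starts}
rugged_optima = {tuple(hill_climb(rugged_fitness, s)) for s in starts}
```

On the smooth landscape every walker reaches the single peak; on the rugged one the walkers typically strand on several distinct local peaks, which is the formal analogue of enforced specialisation onto separate optima.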

27. Of course I know that there are important differences between verbs and nouns, which we may level out in our context without losing too much.

28. In many societies, believing has been thought to be tied to religion, to the rituals around the belief in God(s). Since the Renaissance, with upcoming scientism and the secularisation of societies, religion and science established a sort of replacement competition. Michel Serres described how scientists took over the positions and the funds previously held by the clergy. The impression of a competition is well understandable, of course, if we consider the “opposite direction” of the respective vectors in the choreostemic space. Yet it is also quite mistaken, maybe itself provoked by overdone idealisation, since neither can the cleric make his day without models, nor the scientist his without beliefs.

29. The concept of “theory” referred to here is oriented towards a conceptualisation based on language games and orthoregulation. Theories need to be conceived as orthoregulative milieus of models in order to be able to distinguish between models and theories, something which can’t be accomplished by analytic concepts. See the essay about the theory of theory.

30. Of course, we do not claim to completely cover the relation between experiments, experience and observation on the one side and their theoretical account on the other. We would just like to emphasize the inextricable dynamic relation between modeling and concepts in scientific activities, whether in professional or “everyday-type” science. For instance, much could be said in this regard about the path of decoherence from information and causality. Both aspects, the decoherence and the flip from intensifying modeling over to a conceptual form, have not been conceptualized before. The reason is simple enough: there was no appropriate theory about concepts.

When, for instance, Radder [28] contends that the essential step from experiment to theory is to disconnect theoretical concepts from the particular experimental processes in which they have been realized [p.157], then he not only misconceives the status and role of theories, he also does not realize that experiments are essentially material actualisations of models. Abstracting regularities from observations into models, and shaping the milieu for such a model in order to find similar ones, thereby achieving generalization, is anything but disconnecting them. It seems that he overshot a bit in his critique of scientific constructivism. Additionally, his perspective does not provide any possibility to speak about the relation between concepts and models. Though Radder obviously had the feeling of a strong change on the way from putting observations into scene towards concepts, he fails to provide a fruitful picture of it. He cannot move beyond that feeling towards insight, as he muses about “… ‘unintended consequences’ that might arise from the potential use of theoretical concepts in novel situations.” Such descriptions are close to scientific mysticism.

Radder’s account is quite a recent one, but others are not really helpful about the relation between experiment, model and concept either. Kuhn’s praised concept of paradigmatic changes [24] can be rated at most as a phenomenological or historicising description. Sure, his approach brought a fresh perspective in times of overdone reductionism, but he never provided any kind of abstract mechanism. Other philosophers of science stuck to concepts like prediction (cf. Reichenbach [20], Salmon [21]) and causality (cf. Bunge [22], Pearl [23]), which of course can’t say anything about the relation to the category of concepts. Finally, Nancy Cartwright [25], Isabelle Stengers [26], Bruno Latour [9] and Karin Knorr Cetina [10] are representatives of the various shades of constructivism, whether individually shaped or as a phenomenon embedded into a community, which also can’t say anything about concepts as categories. A scan through the Journal of Applied Measurement did not reveal any significantly different items.

Thus, so far philosophy of science, sociology and history of science have been unable to understand the particular dynamics between models and concepts as abstract categories, i.e. as “Models” or “Concepts”.

31. If the members of a community, or even the participants in random interactions within it, agree on the persistence of their relations, then they will tend to exhibit a stronger propensity towards collaboration. Robert Axelrod demonstrated this on the formal level by means of a computer experiment [33]. He was the first to propose game theory as a means to explain the choice of strategies between interacting parties.

32. Orig.: „Seit über 200 Jahren ist die Philosophie anthropologisch bestimmt. Was das genauer bedeutet, hat sie dagegen kaum erforscht.“

33. Orig.: „Nietzsches Idealismuskritik, die in vielen Schattierungen vorliegt und immer auf das philosophische Selbstmissverständnis eines reinen Geistes und reiner Begriffe zielt, richtet sich auch gegen ein bestimmtes Naturverständnis.“ (KAV439)

34. More precisely, in evolutionary processes the capability for generalization is selected under conditions of scarcity. Scarcity, however, is inevitably induced under conditions of growth or consumption. It is important to understand that newly emerging levels of generalization do not replace former levels of integration. Those undergo a transformation with regard to their relations and their functional embedding, i.e. with regard to their factuality. In the morphology of biological specimens this is well known as “Überformung”. For more details about evolution and generalization please see this.

35. The notions of “philosophy of nature” or even “natural philosophy” are strictly inappropriate. Neither “kind” of philosophy is possible at all. They have to be regarded as a strange mixture of contemporarily available concepts from science (physics, chemistry, biology), from mysticism or theism, and the mistaken attempt to transfer topics as such from there to philosophy. Usually, the result is simply a naturalistic fallacy with serious gaps regarding the technique of reflection. Think of Kant’s physicalistic tendencies throughout his philosophy, the unholy adaptation of Darwinian theory, analytic philosophy, which is deeply influenced by cybernetics, or the comeback of determinism and functionalism due to almost ridiculous misunderstandings of the brain.

Nowadays it must be clear that philosophy before the reflection of the role of language, or more generally, of languagability—which includes processes of symbolization and naming—can’t be regarded as serious philosophy. Results from the sciences can be imported into philosophy only as formalized structural constraints. Evolutionary theory, for instance, first has to be formalized appropriately (as we did here) before it can be of any relevance to philosophy. Yet, what is philosophy? Besides Deleuze’s answer [35], we may conceive philosophy as a technique of asking about the conditionability of the possibility to reflect. In this vein, Wittgenstein said that philosophy should be regarded as a cure. Thus philosophy includes fields like ethics as a theory of morality, or epistemology, which we developed here into a “choreostemology”.

36. Orig.: „Der Punkt, um den es sich namentlich handelt, lässt sich ganz bestimmt angeben. Es ist gleichsam der Apfel in dem logischen Sündenfall der deutschen Philosophie nach Kant: das Verhältnis zwischen Subjekt und Objekt in der Erkenntnis.“

37. Although Rölli usually esteems Deleuze’s philosophy of the differential, here he refers to the difference instead. I think it should be read as “divergence and differential”.

38. Orig.: „Nach allem wird klarer geworden sein, dass es sich bei diesem Pragmatismus nicht um einen einfachen Pragmatismus handelt, sondern um einen mit aller philosophischen Raffinesse konstruierten Pragmatismus der Differenz.“

39. As scientific facts, quantum physics, the probabilistic structure of the brain and the non-representationalist working of the brain falsify determinism as well as the finiteness of natural processes, even if there should be something like “natural laws”.

40. See the article about the structure of comparison.

41. Even Putnam does so, not only in his early functionalist phase, but still in Representation and Reality [36].

42. Usually, philosophers are trained only in logic, which does not help much, since logic is not a process. Of course, being trained in mathematical structures does not imply that the resulting philosophy is reasonable at all. Take Alain Badiou as an example, who just blows up materialism.

43. A complete new theory of governmentality and sovereignty would be possible here.

44. The notion of “scopic” media as coined by Knorr Cetina means that modern media substantially change the point of view (“scopein”: looking, viewing). Today we are not just immersed in them; we deliberately choose them and search for them. The change of perspective is thought to be manifold, contracting space and time. This, however, is not quite typical for the new media.

45. Here we refer to our extended view of “information”, which goes far beyond the technically reduced perspective that forms the mainstream today. Information is a category that can’t be limited to the immaterial. See the chapter about “Information and Causality”.

46. Vera Bühlmann described certain aspects of Deleuze’s philosophy as an attempt to naturalize transcendentality in the context of emergence, as it occurs in complex systems. Deleuze described the respective setting in “Logic of Sense” [49] as the 14th series of paradoxes.

47. …which is not quite surprising, since we developed the choreostemic space together.

  • [1] Gilles Deleuze, Difference and Repetition. Translated by Paul Patton, Athlon Press, 1994 [1968].
  • [2] Ludwig Wittgenstein, Philosophical Investigations.
  • [3] Wilhelm Vossenkuhl. Die Möglichkeit des Guten. Beck, München 2006.
  • [4] Jürgen Habermas, Über Moralität und Sittlichkeit – was macht eine Lebensform »rational«? in: H. Schnädelbach (Hrsg.), Rationalität. Suhrkamp, Frankfurt 1984.
  • [5] Alan Turing. Chemical Basis of Morphogenesis.
  • [6] K. Wassermann, That Centre-Point Thing. The Theory Model in Model Theory. In: Vera Bühlmann, Printed Physics, Springer New York 2012, forthcoming.
  • [7] Georg Christoph Tholen. Die Zäsur der Medien. Kulturphilosophische Konturen. Suhrkamp, Frankfurt 2002.
  • [8] Vera Bühlmann. Inhabiting media : Annäherungen an Herkünfte und Topoi medialer Architektonik. Thesis, University of Basel 2011. available online, summary (in German language) here.
  • [9] Bruno Latour,
  • [10] Karin Knorr Cetina (1991). Epistemic Cultures: Forms of Reason in Science. History of Political Economy, 23(1): 105-122.
  • [11] Günther Ropohl, Die Unvermeidlichkeit der technologischen Aufklärung. In: Paul Hoyningen-Huene, & Gertrude Hirsch (eds.), Wozu Wissenschaftsphilosophie? De Gruyter, Berlin 1988.
  • [12] Bas C. van Fraassen, Scientific Representation: Paradoxes of Perspective. Oxford University Press, New York 2008.
  • [13] Ronald N. Giere, Explaining Science: A Cognitive Approach. Cambridge University Press, Cambridge 1988.
  • [14] Aaron Ben-Ze’ev, Is There a Problem in Explaining Cognitive Progress? pp.41-56 in: Robert F. Goodman & Walter R. Fisher (eds.), Rethinking Knowledge: Reflections Across the Disciplines (Suny Series in the Philosophy of the Social Sciences) SUNY Press, New York 1995.
  • [15] Robert Brandom, Making it Explicit.
  • [16] C.S. Peirce, var.
  • [17] Umberto Eco,
  • [18] Helmut Pape, var.
  • [19] Vera Bühlmann, “Primary Abundance, Urban Philosophy — Information and the Form of Actuality.” pp.114-154, in: Vera Bühlmann (ed.), Printed Physics. Springer, New York 2012, forthcoming.
  • [20] Hans Reichenbach, Experience and Prediction. An Analysis of the Foundations and the Structure of Knowledge, University of Chicago Press, Chicago, 1938.
  • [21] Wesley C. Salmon, Causality and Explanation. Oxford University Press, New York 1998.
  • [22] Mario Bunge, Causality and Modern Science. Dover Publ. 2009 [1979].
  • [23] Judea Pearl, T.S. Verma (1991). A Theory of Inferred Causation.
  • [24] Thomas S. Kuhn, Scientific Revolutions
  • [25] Nancy Cartwright. var.
  • [26] Isabelle Stengers, Spekulativer Konstruktivismus. Merve, Berlin 2008.
  • [27] Peter M. Stephan Hacker, “Of the ontology of belief”, in: Mark Siebel, Mark Textor (eds.), Semantik und Ontologie. Ontos Verlag, Frankfurt 2004, pp. 185–222.
  • [28] Hans Radder, “Technology and Theory in Experimental Science.” in: Hans Radder (ed.), The Philosophy Of Scientific Experimentation. Univ of Pittsburgh 2003, pp.152-173
  • [29] C. Alchourron, P. Gärdenfors, D. Makinson (1985). On the logic of theory change: Partial meet contraction functions and their associated revision functions. Journal of Symbolic Logic, 50: 510–530.
  • [30] Sven Ove Hansson (1998). Editorial to Thematic Issue on: “Belief Revision Theory Today”, Journal of Logic, Language, and Information 7(2), 123-126.
  • [31] John L. Pollock, Anthony S. Gillies (2000). Belief Revision and Epistemology. Synthese 122: 69–92.
  • [32] Michael Epperson (2009). Quantum Mechanics and Relational Realism: Logical Causality and Wave Function Collapse. Process Studies, 38:2, 339-366.
  • [33] Robert Axelrod, Die Evolution der Kooperation. Oldenbourg, München 1987.
  • [34] Marc Rölli, Kritik der anthropologischen Vernunft. Matthes & Seitz, Berlin 2011.
  • [35] Deleuze, Guattari, What is Philosophy?
  • [36] Hilary Putnam, Representation and Reality.
  • [37] Giorgio Agamben, The State of Exception. University of Chicago Press, Chicago 2005.
  • [38] Elena Bellina, “Introduction.” in: Elena Bellina and Paola Bonifazio (eds.), State of Exception. Cultural Responses to the Rhetoric of Fear. Cambridge Scholars Press, Newcastle 2006.
  • [39] Friedrich Albert Lange, Geschichte des Materialismus und Kritik seiner Bedeutung in der Gegenwart. Frankfurt 1974. available online @ zeno.org.
  • [40] Michel Foucault, Archaeology of Knowledge.
  • [41] Benjamin Morgan, Undoing Legal Violence: Walter Benjamin’s and Giorgio Agamben’s Aesthetics of Pure Means. Journal of Law and Society, Vol. 34, Issue 1, pp. 46-64, March 2007. Available at SSRN: http://ssrn.com/abstract=975374
  • [42] Michael Epperson, “Bridging Necessity and Contingency in Quantum Mechanics: The Scientific Rehabilitation of Process Metaphysics.” in: David R. Griffin, Timothy E. Eastman, Michael Epperson (eds.), Whiteheadian Physics: A Scientific and Philosophical Alternative to Conventional Theories. in process, available online; mirror
  • [43] Gilles Deleuze, Empiricism and Subjectivity. An Essay on Hume’s Theory of Human Nature. Columbia University Press, New York 1989.
  • [44] Gilles Deleuze, Pure Immanence – Essays on A Life. Zone Books, New York 2001.
  • [45] Isabelle Peschard
  • [46] Knorr Cetina, Karin (2009): The Synthetic Situation: Interactionism for a Global World. In: Symbolic Interaction, 32 (1), S. 61-87.
  • [47] Knorr Cetina, Karin (2012): Skopische Medien: Am Beispiel der Architektur von Finanzmärkten. In: Andreas Hepp & Friedrich Krotz (eds.): Mediatisierte Welten: Beschreibungsansätze und Forschungsfelder. Wiesbaden: VS Verlag, S. 167-195.
  • [48] Vera Bühlmann, “Serialization, Linearization, Modelling.” First Deleuze Conference, Cardiff 2008; “Gilles Deleuze as a Materialist of Ideality”, lecture held at the Philosophy Visiting Speakers Series, University of Duquesne, Pittsburgh 2010.
  • [49] Gilles Deleuze, Logic of Sense. Columbia University Press, New York 1991 [1990].
  • [50] N. Malcolm, Nothing is Hidden: Wittgenstein’s Criticism of His Early Thought,  Basil Blackwell, Oxford 1986.
  • [51] David Blair, Wittgenstein, Language and Information: “Back to the Rough Ground!” Springer, New York 2006. mirror
  • [52] Caroline Lyon, Chrystopher L Nehaniv, J Saunders (2012). Interactive Language Learning by Robots: The Transition from Babbling to Word Forms. PLoS ONE 7(6): e38236. Available online (doi:10.1371/journal.pone.0038236)
  • [53] Peter M. Stephan Hacker, “Analytic Philosophy: Beyond the linguistic turn and back again”, in: M. Beaney (ed.), The Analytic Turn: Analysis in Early Analytic Philosophy and Phenomenology. Routledge, London 2006.
  • [54] Hilary Putnam, The Meaning of “Meaning”, 1976.


Waves, Words and Images

April 7, 2012 § 1 Comment

The big question of philosophy, and probably its sole question,

concerns the status of the human as a concept.1 Does language play a salient role in this concept, either as a major constituent, or as sort of a tool? Which other capabilities and which potential beyond language, if it is reasonable at all to take that perspective, could be regarded as similarly constitutive?

These questions may appear far off such topics as the technical challenges of programming a population of self-organizing maps, the limits of Turing machines, or the generalization of models and their conditions. Yet in times when many people are summoning the so-called singularity, the question about the status of the human is definitely not exotic at all. Notably, “singularity” is often and largely defined as an “overwhelming intelligence”, seemingly coming up inevitably due to ever-increasing calculation power, and which we could not “understand” any more. From an evolutionary perspective it makes very little sense to talk about singularities. Natural evolution, and cultural evolution alike, is full of singularities and void of singularities at the same time. The idea of “singularity” is not a fruitful way to approach the question of qualitative changes.

As you may already have read in another chapter, we prefer the concept of machine-based episteme as our Ariadnean guide. In popular terms, machine-based episteme concerns the possibility of an actualization of a particular “machine” that would understand its own conditions when claiming “I know.” (Such an entity could not be regarded as a machine anymore, I guess.) Of course, in following this thread we meet a lot of much-debated issues. Yet moving the question about the episteme into the sphere of the machinic provides particular perspectives onto these issues.

In earlier times it has been tried, and some people are still trying today, to determine the status of the “human” as a sort of recipe. Do this and do that, but not that and this, and then a particular quality will be established in your body, as your person, visible to others as virtues, labeled and conceived henceforth as the “quality of being human”. Accordingly, natural language with all its ambiguities need not be regarded as an essential pillar. Quite to the opposite: if the “human” could be defined as a recipe, then our everyday language would have to be cleaned up, brought closer to crisp logic in order to avoid misunderstandings as far as possible; you may recognize this as the program of contemporary analytical philosophy. In methodological terms it was thought possible to determine the status of the human in positively given terms, or in short, in a positive definite manner.

Such positions are, quite fortunately, now recognized more and more as highly problematic. The main reason is that it is not possible to justify any kind of determination in an absolute manner. Any justification requires assumptions, while unjustified assumptions are counter-pragmatic to the intended justification. The problematics of knowledge is linked in here, as knowledge can not be regarded as “justified, true belief” any more2. It was Charles S. Peirce who first concluded that the application of logic (as the grammar of reason) and of ethics (as the theory of morality) are not independent from each other. In political terms, any positive definite determination that is imposed on communities of other people must be regarded as an instance of violence. Hence, philosophy is no longer concerned with the status of the human as a fact; quite differently, the central question is how to speak about the status of the human, thereby not neglecting that speaking, using language, is not a private affair. This looking for the “how” has, of course, itself to obey the rule not to determine rules in a positive definite manner. As a consequence, the only philosophical work we can do is exploring the conditions, where the concept of “condition” refers to an open, though not recursive, chain. Actually, already Aristotle dubbed this “metaphysics” and regarded it as the core interest of philosophy. This “metaphysics” can’t be taken over by any “natural” discipline, whether a kind of science or engineering. There is a clear downstream relation: science as well as engineering should be affected by it, in emphasizing the conditions of their work more intensely.

Practicing, turning the conditions and conditionability into facts and constraints, is the job of design, whether this design manifests as “design”, as architecture, as machine-creating technology, in politics, in education, as writer or artist, and so on. Philosophy not only can never explain, as Wittgenstein mentioned; it also can’t describe things “as such”. Descriptions and explanations are only possible within a socially negotiated system of normative choices. This holds true even for the natural sciences. As a consequence, we should start with philosophical questions even in the natural sciences, and definitely always in engineering. And engaging in fields like machine learning, so-called artificial intelligence or robotics without constantly referring to philosophy will almost inevitably result in nonsense. The history of these fields is full of examples of that; just remember the infamous “General Problem Solver” of Simon and Newell.

Yet the issue is not only one of ethics, morality and politics. It was Foucault who first, in a sort of follow-up to Merleau-Ponty, claimed a third region between the empiricism of affections and the tradition of reflecting on pure reason or consciousness.3 This third region, or even dimension (we would say “aspection”), based on the compound of perception and the body, comprises the historical evolution of systems of thinking. Foucault, together with Deleuze, once opened the possibility for a transcendental empiricism, the former mostly with regard to historical and structural issues of political power, the latter mostly with regard to the micronics of individual thought, where the “individual” is not bound to a single human person, of course. In our project as represented by this collection of essays we are following a similar path, starting with the transition from the material to the immaterial by means of association, and then investigating the dynamics of thinking in the aspectional space of transcendental conditions (forthcoming chapter), which builds an abstract bridge between Deleuze and Foucault, as it covers both the individual and the societal aspects of thinking.

This Essay

This essay deals with the relation of words to a rather important aspect of thinking: representation. We will address some aspects of its problematics before we approach the role of words in language. Since representation is something symbolic in the widest sense, and since that representation has to be achieved autonomously by a mainly material arrangement, e.g. called “the machine”4, we will also deal (again) with the conditions for the transformation of (mainly) physical matter into (mainly) symbolic matter. Particularly, however, we will explore the role of words in language. The outline comprises the following sections:

From Matter to Mind

Given the conditioning mentioned above, the anthropological history of the genus Homo5 poses a puzzle. Our anatomical foundations6 have been stable for at least 60,000 years, but contemporary human beings at the age of, say, 20 or 30 years are surely much more “intelligent”7. Given the measurement scale established as I.Q. at the beginning of the 20th century, a significant increase can be observed for the surveyed populations even throughout the last 60 years.

So, what makes the difference, then, between the earliest ancient cultures and the contemporary ones? This question is highly relevant for our considerations here, which focus on the possibility of a machine-based episteme, or in more standard, yet seriously misplaced terms, machine learning, machine intelligence or even artificial intelligence. In any of those fields, one could argue, researchers and engineers somehow start with mere matter, then imprint some rules and symbols onto that matter, only to expect that matter to become “intelligent” in the end. The structure of the problematics remains the same, whether we take the transition that started from paleo-cultures or the one rooted in the field of advanced computer science. Both instances concern the role of culture in the transformation of physical matter into symbolic matter.

While philosophy has tackled that issue for at least two and a half millennia, resulting in a rich landscape of arguments, including the reflection of the many styles of developing those arguments, computer science is still almost completely blind to the whole topic. When computer scientists and computer engineers inevitably get into contact with the realm of the symbolic, they usually and naively repeat past positions, committing a naïve, i.e. non-reflective, idealism or materialism that is not even on a pre-Socratic level. David Blair [6] correctly identifies the picture of language on which contemporary information retrieval systems are based as that of Augustine: he believed that every word has a meaning. Notably, Augustine lived in the late 4th to early 5th century A.D. This story simply demonstrates that in order to understand the work of a field one also has, as always, to understand its history. In the case of computer science it is the history of reflective thought itself.

This is precisely the reason why philosophy is much more than just a possibly interesting source for computer scientists. More directly expressed, it is probably one of the major structural faults of computer science that it is regarded as just a kind of engineering. Countless projects and pieces of software have failed by reason of such applied methodological reductionism. Everything that gets into contact with computers developed from within such an attitude then also becomes infected by the limited perspective of engineering.

One of the missing aspects is the philosophy of techno-science, which not just by chance seriously started with Heidegger8 as its first major proponent. Merleau-Ponty, inspired by Heidegger, then emphasized that everything concerning the human is artificial and natural at the same time. It does not make sense to set up that distinction for humans or for man-made artifacts, as if such a difference were itself “natural”. Any such distinction refers more directly than not to Descartes as well as to Hegel; that is, it follows either simplistic materialism or overdone idealism, so to speak idealism in its machinic, Cartesian form. Indeed, many misunderstandings about the role of computers in contemporary science and engineering, but also in the philosophy of science and the philosophy of information, can be deciphered as a massive Cartesio-Hegelian heritage, with all its drawbacks. And there are many.

The most salient is perhaps the foundational element9 of Descartes’ as well as Hegel’s thought: independence. Of course, for both of them independence was a major incentive, goal and demand, for political reasons (absolutism in 17th-century Europe), but also for general reasons imposed by the level of techno-scientific insight, which remained quite low until the middle of the 20th century. People before the scientific age had been exposed to all sorts of threatening issues, concerning health, finances, religious or political freedom, collective or individual violence, all together often termed “fate”. Being independent was a basic condition for living more or less safely at all, physically and/or mentally. Yet Descartes and Hegel definitely exaggerated it.

Yet, the element of independence made its way into the cores of the scientific method itself. Here it blossomed as reductionism, positivism and physicalism, all of which can be subsumed under the label of naive realism. It took decades until people developed some confidence not to prejudge complexity as esotericism.

With regard to computer science there is an important consequence. First, we can safely drop the labels of “artificial intelligence” and “machine learning”, along with the respective narrow and limited concepts. Concerning machine learning we can state that only very few of the approaches that exist so far achieve even a rudimentary learning in the sense of structural self-transformation. The vast majority of approaches that are dubbed “machine learning” represent just some sort of advanced parameter estimation, where the parameters to be estimated are all defined (i) a priori, and (ii) by the programmer(s). And regarding intelligence, we can recognize that we can never assign concepts like artificial or natural to it, since there is always a strong dependence on culture in it. Michel Serres once called written language the first artificial intelligence, pointing to the central issue of any technology: the externalization of symbol-based systems of references.
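The point that most “machine learning” is advanced parameter estimation can be made tangible with a minimal sketch (the function names and the linear model are hypothetical illustrations, not drawn from any particular system): the programmer fixes the structure, a straight line, a priori, and “learning” merely tunes its two numbers.

```python
import random
random.seed(0)

# The hypothesis space (all lines y = a*x + b) is decided by the programmer
# before any data arrives; only the parameters a and b are "learned".
def fit_line(points, steps=5000, lr=0.01):
    a, b = 0.0, 0.0
    for _ in range(steps):
        x, y = random.choice(points)
        err = (a * x + b) - y     # prediction error on one sample
        a -= lr * err * x         # stochastic gradient step on a
        b -= lr * err             # stochastic gradient step on b
    return a, b

points = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]
a, b = fit_line(points)
# a and b converge towards 2 and 1, but nothing about the *structure* of the
# model has changed: structural self-transformation would mean revising the
# hypothesis space itself, which no parameter update can do.
```

Everything the procedure can ever “discover” was circumscribed in advance by the programmer’s choice of the two slots a and b; in the sense used above, nothing has been learned structurally.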

This brings us back to our core issue here, the conditions for the transformation of (mainly) physical matter into (mainly) symbolic matter. In some important way we can even state that there is no matter without symbolic aspects. Two pieces of matter can interact only if they are not completely transparent to each other. If there is an effective transfer of energy between them, then the form of the energy becomes important; think of it, for instance, as the wavelength of some electromagnetic radiation, or its rhythmicity, which becomes distinctive in the case of a LASER [9,10]. Sure, in a LASER there are no symbols to be found; yet the system as a whole establishes a well-defined and self-focusing classification, i.e. it performs the transition from a white-noised, real-valued randomness to a discrete intensional dynamics. The LASER thus has to be regarded as a particular kind of associative system, one which is able to produce proto-symbols.

Of course, we may not restrict our considerations to such basic instances of pan-semiotics. When talking about machine-based episteme we talk about the ability of an entity to think about the conditions for its own informational dynamics (avoiding the term knowledge here…). Obviously, this requires some kind of language. The question for any attempt to make machines “intelligent” thus turns into the question of how to think about the individual acquisition of language, and, with regard to our interests here, how to implement the conditions for it. Note that homo erectus, who lived 1 million years ago, must have had a clear picture of causality, and not only individually: they must also have had the ability to talk about it, since they were able to keep fire burning and to utilize it for cooking meat and bones. Logic had not been invented as a field in those times, but it seems absolutely mandatory that they were using a language.10 Even animals like cats, pigs or parrots are able to develop and to perform plans, i.e. to handle causality, albeit probably not in a conscious manner. Yet, neither wild pigs nor cats are capable of symbol-based culture, that is, a culture which spreads on the basis of symbols that are independent from a particular body or biological individual. The research programs of machine learning, robotics and artificial intelligence thus appear utterly naive, since they all neglect the cultural dimension.

The central set of questions thus considers the conditions that must be met in order to become able to deal with language, to learn it and to practice it.

These conditions are not only “private”, that is, they can’t be reduced to individual brains, or machines, that would “process” information. Leaving aside for the moment the simplistic perspective on information as it is usually practiced in computer science, we have to accept that learning language is a deeply social activity, even if the label of the material description of the entity is “computer”. We also have to think about the mediality of symbolic matter, the transition from nature to culture, that is, from contexts of low symbolic intensity to those of high symbolic intensity. Handling language is not an affair that could be thought to be performed privately; there is no such thing as a “private language”. Of course, we have brains, for which the matter could still be regarded as dominant, and the processes running there are running only there11.

Note that implementing the handling of words as apriori existing symbols is not what we are talking about here. As Hofstadter pointed out [12], calling the computing processes on apriori defined strings “language understanding” is nothing but silly. We are not allowed to call the shuffling of predefined encoded symbols back and forth “understanding”. But what could we call “understanding” then? Again, we have to postpone this question for the time being. Meanwhile we may reshape the question about learning language a bit:

How do we come to be able to assign names to things, classes, types, species, animals and other humans? What is the role of such naming, and what is the role of words?

The Unresolved Challenge

The big danger when addressing these issues is to start too late, provoked by an ontological stance that is applied to language. The most famous example is probably provided by Heidegger and his attempt at a “fundamental ontology”, which failed spectacularly. It is all too easy to get bewitched by language itself and to regard it as something natural, as something like stones: well-defined, stable, and potentially serving as a tool. Language itself makes us believe that words exist as such, independent from us.

Yet, language is a practice, as Wittgenstein said, and this practice is neither a single homogeneous one, nor does it remain constant throughout life, nor are its instances identical and exchangeable. The practice of language develops, unfolds, gains quasi-materiality, turns from an end into a means and back. Indeed, language may be characterized just by the capability to provide that variability in the domain of the symbolic. Take as a contrast for instance the symbolon, or the use of signs in animals: in both cases there is exactly one single “game” you can play. Only in such trivial cases could the meaning of a name be said to be close to its referent. Yet, language games are not trivial.

I already mentioned the implicit popularity of Augustine among computer scientists and information systems engineers. Let me cite the passage that Wittgenstein chose in his opening remarks to the famous Philosophical Investigations (PI)12. Augustine writes:

When they (my elders) named some object, and accordingly moved towards something, I saw this and I grasped that the thing was called by the sound they uttered when they meant to point it out. Their intention was shewn by their bodily movements, as it were the natural language of all peoples: the expression of the face, the play of the eyes, the movement of other parts of the body, and the tone of voice which expresses our state of mind in seeking, having, rejecting, or avoiding something. Thus, as I heard words repeatedly used in their proper places in various sentences, I gradually learnt to understand what objects they signified; and after I had trained my mouth to form these signs, I used them to express my own desires.

Wittgenstein gave two replies, one directly in the PI, the other one in the collection entitled “Philosophical Grammar” (PG).

These words, it seems to me, give us a particular picture of the essence of human language. It is this: the individual words in language name objects—sentences are combinations of such names.—In this picture of language we find the roots of the following idea: Every word has a meaning. This meaning is correlated with the word. It is the object for which the word stands.

Augustine does not speak of there being any difference between kinds of word. If you describe the learning of language in this way you are, I believe, thinking primarily of nouns like “table,” “chair,” “bread,” and of people’s names, and only secondarily of the names of certain actions and properties; and of the remaining kind of words as something that will take care of itself. (PI §1)

And in the Philosophical Grammar:

When Augustine talks about the learning of language he talks about how we attach names to things or understand the names of things. Naming here appears as the foundation, the be all and end all of language. (PG 56)

Before we take the step of dropping and drowning the ontological stance once and for all, we would like to provide two things. First, we will briefly cite a summarizing table from Blair [1]13. Blair’s book is indeed a quite nice work about the peculiarities of language as far as it concerns “information retrieval” and about how Wittgenstein’s philosophy could be helpful in resolving the misunderstandings. Second, we will (also very briefly) make our perspective on names and naming explicit.

David Blair dedicates quite some effort to rendering the issue of the indeterminacy of language as clearly as possible. In alignment with Wittgenstein he emphasizes that indeterminacy in language is not the result of sloppy or irrational usage. Language is neither a medium of logic nor something like a projection screen for logic. There are good arguments, represented by the works of Ludwig Wittgenstein, the later Hilary Putnam and Robert Brandom, for believing that language is not an inferior way to express a logical predicate (see the previous chapter about language). Language can’t be “cleared” or made less ambiguous; its vagueness is a constitutive necessity for its use and utility in social intercourse. Many people in linguistics (e.g. Rooij [13]) and large parts of the cognitive sciences (e.g. Alvin Goldman [14]14), but also philosophers like Saul Kripke [16] or Scott Soames [17], take the opposite position.

Of course, in some contexts it is reasonable to try to limit the vagueness of natural language, e.g. in law and contracts. Yet, it is also clear that positivism in jurisprudence is a rather bad thing, especially if it shows up paired with idealism.

Blair then contrasts two areas of so-called “information retrieval”15, distinguished by the type of data that is addressed: on the one hand structured data that can be arranged in tables, which Blair calls determinate data, and on the other hand such “data” that can’t be structured apriori, like language. We already met this fundamental difference in other chapters (about analogies, language). The result of his investigation he summarized in the following table. It is more than obvious that the characteristics of the two fields are drastically different, which equally obviously has to be reflected in the methods to be applied. For instance, the infamous n-gram method is definitely a no-go.
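Why the n-gram method fails on indeterminate data can be shown with a toy sketch (the example sentences are invented here): word n-grams compare only surface sequences, so two utterances a reader takes as near-synonymous may share no n-grams at all.

```python
def ngrams(text, n=2):
    """Contiguous word windows of length n, as a set of tuples."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

# Two invented utterances a human reads as near-synonymous...
grams_a = ngrams("the committee rejected the proposal")
grams_b = ngrams("the board turned the suggestion down")

# ...share no bigram at all: n-grams measure surface sequence, not meaning.
shared = grams_a & grams_b
print(shared)
```

The method treats language as if it were determinate, table-like data; everything that makes language indeterminate, in Blair’s sense, falls through the grid.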

For the same reasons, semantic disambiguation is not possible by a set of rules that could be applied by an individual, whether this individual is a human or a machine. Quite likely it is even completely devoid of sense to try to remove ambiguity from language. One of the reasons is given by the fact that concepts are transcendental entities. We will return to the issue of “ambiguity” later.

In the quote from the PG shown above Wittgenstein rejects Augustine’s perspective that naming is central to language. Nevertheless, there is a renewed discussion in philosophy about names and so-called “natural kind terms”, brought up by Kripke’s “Naming and Necessity” [16]. Recently, Scott Soames explicitly referred to Kripke’s account. Yet, as so many others, Soames commits the drastic mistake introduced along the line formed by Frege, Russell and Carnap in ascribing to language the property of predicativity (cf. [18] p.646).

These claims are developed within a broader theory which, details aside, identifies the meaning of a non-indexical sentence S with a proposition asserted by utterances of S in all normal contexts.

We won’t delve in any detail into the discussion of “proper names”16, because it is largely a misguided and unnecessary one. Let me just briefly mention the three main (and popular) alternative approaches to the meaning of names: the descriptivist theories, the referential theory originally advanced by John Stuart Mill, and the causal-historical theory. None of them is tenable, because they all implicitly violate the primacy of interpretation, though not in an obvious manner.

Why can’t we say that a name is a description? A description needs assignates17, or aspects, if you like: at least one scale. Assuming that there is the possibility of a description that is apriori justified and hence objective invokes divinity as a hidden parameter, or some other kind of Fregean hyper-idealism. Assignates are chosen according to and in dependence on the context. Of course, one could try to expel any variability from any expectable context, e.g. by literally programming society, or by some kind of philosophical dictatorship. In any other case, descriptions are variant. The actual choice of any kind of description is the rather volatile result of negotiation processes in the embedding society. The rejection of names as descriptions results from the contradictory pragmatic stances involved: first, names are taken as indivisible, atomic entities, but second, descriptions are context-dependent, sub-atomic properties, which, by virtue of the implied pragmatics, undermines the primary claim. Remember that the context-dependency results from the empirical underdetermination. In standard situations it is neither important that water is a compound of hydrogen and oxygen, nor is this what we want to say in everyday situations. We do not carry the full description of the named entity along into every instance of its use, although there are some situations in which we are indeed interested in the description, e.g. as a scientist, or as a supporter of the “hydrogen economy”. The important point is that we can never determine the status of the name before we have interpreted the whole sentence, while we also can’t interpret the sentence without determining the status of the named entity. Both entities co-emerge. Hence we also can’t give an explicit rule for such a decision other than just using the name or uttering the sentence. Wittgenstein thus denies the view that assumes a meaning behind the words that is different from their usage.

The claim that the meaning of a proper name is its referent meets similar problems, because it just introduces the ontological stance through the backdoor. Identifying the meaning of a label with its referent implies that the meaning is taken as something objective, as something that is independent of context, and even beyond that, as something that could be packaged and transferred *as such*. In other words, it deliberately denies the primacy of interpretation. We need not say anything further, except perhaps that Kripke (and Soames as well, in taking him seriously) commits a third mistake in using “truth-values” as factual qualities.18 We may propose that the whole theory of proper names pursues a pseudo-problem, induced by overgeneralized idealism or materialism.

Names, proper: Performing the turn completely

Yet, what would be an appropriate perspective from which to deal with the problem of names? What I would like to propose is a consequent application of the concept of the “language game”. The “game” perspective can be applied not only to the complete stream of exchanged utterances, but also to the parts of sentences, e.g. names and single words. As a result, new questions become visible. Wittgenstein himself did not explore this possibility (he took Augustine as a point of departure), and it cannot be found in contemporary discourse either19. As so often, philosophers influenced by positivism simply forget about the fact that they are speaking. Our proposal is markedly different from and also much more powerful than the causal-historical or the descriptivist approach, and it also avoids the difficulties of Kripke’s externalist version.

After all, naming, to give a name and to use names, is a “language game”. Names are close to observable things, and as a matter of fact, observable things are also demonstrable. Using a name refers to the possibility for a speaker to provide a description to his partner in discourse such that this listener would be able to agree on the individuality of the referenced thing. The use of the name “water” for this particular liquid thing does not refer to an apriori fixed catalog of properties. Speaker and listener need not even agree on the identity of the set of properties ascribed to the referred physical thing. The chemist may always associate the physico-chemical properties of the molecule, even when he reads about the submersed sailors in Shakespeare’s *Tempest*, but nevertheless he could easily talk about that liquid matter with a nine-year-old boy who knows neither about Shakespeare nor about the molecule.

It is thus neither possible nor reasonable to try to achieve a match regarding the properties, since a rich body of methods would necessarily be invoked to determine that set. Establishing the identity of representations of physical, external things, or even of the physical things themselves, inevitably invokes a normative act (which is rather incommensurable with the empiricist’s claims).

For instance, when just saying “London”, out of the blue, it is not necessary that we envisage the same aspects of the grand urban area. Since cities are inevitably heterotopic entities (in the sense of Foucault [19, 20], acc. to David Graham Shane [21]), this agreement is actually impossible. Even for the undeniably more simple-minded cartographers the same problem exists: “where” is that London, in terms of spherical coordinates? Despite these unavoidable difficulties both the speaker and the listener easily agree on the individuality of the imaginary entity “London”. The name “London” does not point to a physical thing but just to an imaginative pole. In contrast to concepts, however, names take a different grammatical role: they not only allow for a negotiation of rather primitive assignates in order to take action, they even demonstrate the possibility of such negotiation. The actual negotiations could be quite hard, though.

We conclude that we are not allowed to take any of the words as something that would “exist” as, or like, a physical “thing”. Of course, we get used to certain words; they gain a quasi-materiality because a constancy appears that may be much stronger than the initial contingency. But this “getting used” is a different topic; it just refers to how we speak about words. Naming remains a game, and like any other game this one does not have an identifiable border.

Despite this manifold that is mediated through language, or as language, it is also clear that language remains rooted in activity, or the possibility of it. I demonstrate the usage of a glass and accompany that by uttering “glass”. Of course, there is the Gavagai problematics20 as it has been devised by Quine [22]. Yet, this is not a real problem, since we usually interact repeatedly. On the one hand this provides us the possibility to improve our capability to differentiate single concepts in a certain manner, but on the other hand the extended experience introduces a secondary indeterminacy.

In some way, all words are names. All words may be taken as indicators that there is the potential to say more about them, yet in a different, orthogonal story. This holds even for the abstract concepts denoted by the word “transcendental” or for verbs.

The usage of names, i.e. their application in the stream of sentences, gets richer and richer, but also more and more indeterminate. All languages have developed some kind of grammar, which is a more or less strict body of rules about how to arrange words for certain language games. Yet, grammar is not a necessity for language at all; it is just a tool to render language-based communication easier, faster and more precise. Beyond the grammars, it is experience which enables us to use metaphors in a dedicated way. Yet, language is not a thing that sometimes contains metaphors and sometimes not. In a very basic sense, all language is metaphorical all the time.

So, we conclude, first, that there is nothing enigmatic in learning a language. Second, we can say that extending the “gameness” down to words provides the perspective of the mechanism, notably without reducing language to names or propositions.

Instead, we now can clearly see how these mechanisms mediate between the language game as a whole, the metaphorical characteristics of any language and simple rule-based mechanisms.

Representing Words

There is a drastic consequence of the completed gaming perspective. Words can’t be “represented” as symbols or as symbolic strings in the brain, and words can’t be appropriately represented as symbols in the computer either. Given any programming language, strings in a computer program are nothing else than particularly formatted series of values. Usually, this series is represented as an array of values, which is part of an object. In other words, the word is represented as a property of an object, where such objects are instances of their respective classes. Thus, the representation of words in ANY computer program created so far for the purpose of handling texts, documents, or textual information in general is deeply inappropriate.
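A deliberately naive sketch (class and attribute names are hypothetical) of what such a representation amounts to: the word is just a string property of an object, i.e. a fixed series of character values, and nothing more.

```python
class Token:
    """The typical text-handling representation criticized above:
    the word is nothing but a string attribute of an object."""

    def __init__(self, surface):
        self.surface = surface          # the "word" is just this array

token = Token("water")
# Underneath, the string is merely a formatted series of values:
print([ord(c) for c in token.surface])
```

Whatever else the program attaches to such an object, the word itself remains a bare series of values, apriori fixed and detached from any usage.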

Instead, the representation of the word has to carry along its roots, its path of derivation, or, in still other words, its traces of the precipitation of the “showing”. This rooting includes, so we may say, a demonstrativum, an abstract image. This does not mean that we have to set up an object in the computer program that contains a string and an abstract image. That would be just the positivistic approach, leaving all problems untouched, the string and the image still being independent. The question of how to link them would just be delegated to the next analytic homunculus.

What we propose are non-representational abstract compounds that are irrevocably multi-modal, since they are built from the assignates of abstract “things” (Gegenstände). These compounds are nothing else than combined sets of assignates. The “things” represented in this way are actually always more or less “abstract”. Through the sets of assignates we may actually combine even things which appear incommensurable on the level of their wholeness, at least at first sight. An action is an action, not a word, and vice versa; an image is neither a word nor an action, is it? Well, it depends; we already mentioned that we should not take words as ontological instances. Any of those entities can be described using the same formal structure, the probabilistic context, which is further translated into a set of assignates. The probabilistic context creates a space of expressibility where the incommensurability disappears, notably without reducing the comprised parts (image, text, …) to the slightest extent.
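As a rough, purely illustrative sketch of such compounds, one might encode entities of different modalities as weighted sets of assignates; all assignate names and weights below are invented. Once word, image and action share this formal structure, they become comparable within one space.

```python
# Hypothetical assignate sets for a word, an image and an action.
word_water  = {"liquid": 0.9, "drinkable": 0.8, "transparent": 0.6}
image_glass = {"transparent": 0.9, "container": 0.7, "liquid": 0.4}
action_pour = {"liquid": 0.8, "container": 0.6, "motion": 0.9}

def overlap(a, b):
    """Weighted sum over shared assignates: a crude comparability measure."""
    return sum(min(a[k], b[k]) for k in a.keys() & b.keys())

# Word, image and action are no longer incommensurable wholes:
print(overlap(word_water, image_glass))
print(overlap(word_water, action_pour))
```

The point of the sketch is only structural: the incommensurability between modalities disappears at the level of the shared assignate structure, while each part keeps its full set of assignates unreduced.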

The situation is somewhat reminiscent of synesthetic experiences. Yet, I would like to avoid calling it synesthetic, since synesthesia is experienced on a highly symbolic level. Like other phenomenological concepts, it also does not provide any hint about the underlying mechanisms. In contrast, we are talking about a much lower level of integration. Probably we could call this multi-modal compound a “syn-presentational” compound, or, for short, a “synpresentation”.21

Words, images and actions are represented together as a quite particular compound, an inextricable multi-modal compound. We may also say that these compounds are derived qualia. The exciting point is that the described way of probabilistic multi-modal representation obviates the need for explicit references and relations between words and images. Such relations would even have to be defined apriori (strongly: before programming; weakly: before usage). In our approach, and quite in contrast to the model of external control, relations and references *can be* subject to context-dependent alignments, either to the discourse, or to the task (of preparing a deliverable from memory).

The demonstrativum may not only refer to an “image”. First note that the image does not exist outside of its interpretation. We need to refer to that interpretation, not to an index in a database or a file system. Interpretation here means that we apply a lot of various processing and extraction methods to it, each of them providing a few assignates. The image is dissolved into probabilistic contexts, as we do it for words (footnote: we have described it elsewhere). The dissolving of an image is of course not the endpoint of a communicable interpretation; it is just the starting point. Yet, this does not matter, since the demonstrativum may also refer to any derived intension and even to any derived concept.22

The probabilistic multi-modal representation exhibits three highly interesting properties, concerning abstractness, relations, and the issue of foundations. First, the abstractness of represented items becomes scalable in an almost smooth manner; in our approach, “abstractness” is not a quality any more. Second, the relations and references of both words and the “content” of images are transformed into their pre-specific versions. Neither relations nor references need to be implemented apriori or observed as an apriori; initially, they appear only as randolations23. Third, some derived and already quite abstract entities on an intermediate level of “processing” are more basic than the so-called raw observations24.

Words, Classes, Models, Waves

It is somewhat tempting to arrange these four concepts into a hierarchical series. Yet, things are not that simple. Actually, any of the concepts that appears more as a symbolistic entity may also re-turn into a quasi-materiality, into a wave-like phenomenon that itself serves as a basis for potential differences. This re-turn is a direct consequence of the inextricable mediality of the world, mediality understood here as a transcendental category. Needless to say, mediality is just another blind spot in contemporary computer science. Cybernetics as well as engineering straightaway exclude the possibility of recognizing the mediatedness of worldly events.

In this section we will try to explicate the relations between the headlined concepts to some extent, at least as far as it concerns their mapping into an implementable system of (non-Turing) “computer programs”. The computational model that we presuppose here is the extended version of the 2-layered SOM, as we introduced it previously.
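For readers unfamiliar with the underlying mechanism, here is a minimal sketch of a plain, single-layered Kohonen SOM; the extended 2-layered variant presupposed in the text is not reproduced here, and all sizes and rates are arbitrary illustration values.

```python
import math
import random

random.seed(0)
# Ten 2-d weight vectors arranged on a 1-d lattice of neurons.
nodes = [[random.random(), random.random()] for _ in range(10)]

def bmu(x):
    """Index of the best matching unit: the node closest to input x."""
    return min(range(len(nodes)),
               key=lambda i: sum((nodes[i][d] - x[d]) ** 2 for d in range(2)))

def train(data, epochs=20, rate=0.5, radius=2.0):
    for _ in range(epochs):
        for x in data:
            b = bmu(x)
            for i, w in enumerate(nodes):
                # Gaussian neighbourhood on the lattice around the BMU.
                h = math.exp(-((i - b) ** 2) / (2 * radius ** 2))
                for d in range(2):
                    w[d] += rate * h * (x[d] - w[d])

data = [[0.1, 0.1], [0.9, 0.9]]
train(data)
print(bmu([0.1, 0.1]), bmu([0.9, 0.9]))  # distinct inputs land on distinct nodes
```

The associative character matters here: the map organizes itself from the data stream, without any apriori assignment of inputs to nodes.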

Let us start with first things first. Given a physical signal, here in the literal sense, that is, as a potentially perceivable difference in a stream of energy, we find embodied modeling, and nothing else. The embodiment of the initial modeling is actualized in sensory organs, or more generally, in any instance that is able to discretize the waves and differences at least “a bit more”. In more technical terms, the process of discretization is a process that increases the signal-to-noise ratio. In biological systems we often find a frequency encoding of the intensity of a difference. Though the embodiment of that modeling is indeed a filtering and encoding, hence already some kind of modeling representation, it is not a modeling in the narrower sense. It points out of the individual entity into phylogenesis, the historical contingency of the production of that very individual entity. Neither can we say that the initial embodied processing by the sensory organs is a kind of encoding in the strict sense: there is no code consisting of well-identified symbols at the proximate end of the sensory cell. It is still a rather probabilistic affair.
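The idea of frequency encoding can be sketched as follows (a toy model, not a claim about actual sensory physiology): a continuous intensity is turned into a count of threshold crossings per time window, and it is this discretization into events that raises the signal-to-noise ratio.

```python
import random

random.seed(1)

def spike_count(intensity, window=100, threshold=0.5, noise=0.2):
    """Count threshold crossings of a noisy analog signal over a window.

    The continuous intensity is thereby re-expressed as a discrete
    firing rate, a crude analogue of frequency encoding.
    """
    count = 0
    for _ in range(window):
        sample = intensity + random.uniform(-noise, noise)
        if sample > threshold:
            count += 1
    return count

weak, strong = spike_count(0.4), spike_count(0.8)
print(weak, strong)  # stronger stimulus -> higher firing rate
```

Note that the output is still probabilistic: the counts fluctuate from window to window, which matches the remark above that no well-identified symbols exist at the proximate end of the sensory cell.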

This basic encoding is not yet symbolic, albeit we also can’t call it a wave any more. In biological entities this slightly discretized wave is then subject to an intense modeling sensu stricto. The processing of the signals is performed by associative mechanisms that are arranged in cascades. This “cascading” is highly interesting and probably one of the major mandatory ingredients neglected by computer science so far. The reason is quite clear: it is not an analytic process, hence it is excluded from computer science almost by definition.

Throughout that cascade, signals turn more and more into information as an interpreted difference. It is clear that there is not a single or identifiable point in this cascade to which one could assign the turn from “data” to “information”. The process of interpretation is, quite in contrast to idealistic pictures of the process of thinking, not a single step. The discretized waves that flow into the processing cascade are subject to many instances and very different kinds of modeling, throughout which discrete pieces get separated and related to other pieces. The processing cascade thus repeats a modular principle consisting of association and distribution.
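The associative half of this modular principle can be hinted at with a toy cascade (purely illustrative, no claim about the actual architecture): each stage groups the output of the previous one into coarser discrete classes, and no single stage can be pointed to as the place where “data” became “information”.

```python
def associate(values, width):
    """Group incoming values into discrete classes by binning."""
    return [int(v / width) for v in values]

def cascade(signal, widths):
    """Apply the grouping stage repeatedly, keeping all intermediate stages."""
    stages = [signal]
    for w in widths:
        stages.append(associate(stages[-1], w))
    return stages

raw = [0.12, 0.14, 0.48, 0.52, 0.91]
stages = cascade(raw, [0.25, 2])   # progressively coarser classes
print(stages[1])                   # first discretization
print(stages[2])                   # classes of classes
```

Each stage is trivial on its own; the gradual character of interpretation only appears across the cascade as a whole.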

This level we still could not label as “thinking”, albeit it is clearly some kind of mental process. Yet, we could still regard it as something “mechanical”, even as we also find already class-like representations, intensions and proto-concepts. Thinking in its meaningful dimension, however, appears only through the assignment of sharable symbols. Thinking of something implicitly means that one could tell about the respective thoughts. Whether these symbols are shared between different regions in the brain or between different bodily entities does not matter much. Hence, thinking and mental processes need to be clearly distinguished. Yet, assigning symbols, that is, assigning a word, first a specific sound and later, as a further step of externalization, a specific grapheme that reflects the specific sound, which in turn represents an abstract symbol, this process of assigning symbols is only possible through cultural means. Cats may recognize situations very well and react accordingly; they may even have a feeling that they have encountered a situation before; but cats can’t share their symbols, they can’t communicate the relational structure of a situation. Yet, cats and dogs may already take part in “behavior games”, and such games have clearly been found in baboons by Fernando Colmenares [24]. Colmenares adopted the concept of “games” precisely because of the co-occurrence of obvious rules, high variability, and predictive values of actions and reactions of the individual animals. Such games unfold synchronically as well as diachronically, and across dynamically changing assignments of social roles. All of this is accompanied by specific sounds. Other instances of language-like externalization of symbols can presumably be found in grey parrots [25], green vervet monkeys [26], bonobos, dolphins and orcas.

But still… in animals those already rather specific symbols are not externalized by imprinting them into matter different from their own bodies. One of the most desirable capabilities for our endeavor here about machine-based episteme thus consists in just that process of externalization embedded in social contexts.

Now the important thing to understand is that this whole process from waves to words is not simply a one-way track. First, words do not exist as such; they just appear as discrete entities through usage. It is the usage of X that introduces irreversibility. In other words, the discreteness of words is a quality that is completely on the aposteriori side of thinking. Before their actual usage, their arrangement into sentences, words “are” nothing else than probabilistic relations. It needs a purpose, a target-oriented selection (call it “goal-directed modeling”), to let them appear as crisp entities.

The second issue is that a sentence is an empirical phenomenon, remarkably even to the authoring brain itself. The sentence needs interpretation, because it is never fully determinate. Interpretation of such indeterminate instances as sentences, however, renders the apparently crisp phenomenon of words back into waves. A further effect of the interpretation of sentences as series of symbols is the construction of a virtual network. Texts, and in a very similar way pieces of music, should not be conceived as series, as computer linguistics is treating them. Much more appropriately, texts are conceived as networks, which may even exert their own (again virtual) associative power, which to some extent is independent from the hosting interpreter, as I have argued here [28].

Role of Words

All these characteristics of words, their purely aposteriori crispness, their indeterminacy as sub-sentential indicators of randolational networks, their quality as signs by which they only point to other signs, but never to “objects”, their double quality as constituent and result of the “naming game”, all these “properties” make it appear highly unlikely and questionable whether language is about references at all. Additionally, we know that the concept of “direct” access to the mind or the brain is simply absurd. Everything we know about the world as individuals is due to modeling and interpretation. That of course also concerns the interpretation of cultural artifacts or the culturally enabled externalization of symbols, for instance into the graphemes that we use to represent words.

It is of utmost importance to understand that the written or drawn grapheme is not the “word” itself. The concept of a “word-as-such” is highly inappropriate, if not bare nonsense.

So, if words, sentences and language at large are not about the “direct” referencing of (quasi-)material objects, how then should we conceive of the process we call “language game”, or “naming game”? Note that we can now identify van Fraassen’s question “how do words and concepts acquire their reference?” as a misunderstanding, deeply informed by positivism itself. It does not make sense to pose the question in this way at all. There is not first a word which then, in a secondary process, gets some reference or meaning attached. Such a concept is almost absurd. Similarly, the distinction between syntax and semantics, introduced by the positivist Morris in the late 1940s, is to be regarded as much the same kind of pseudo-problem, established just by the fundamental and elemental assumptions of positivism itself: linear additivity, metaphysical independence and the lossless separability of the parts of wholenesses. If you scatter everything into single pieces of empirical dust, you will never be able to make any proposition anymore about the relations you destroyed before. That is the actual reason for the problem of positivistic science and its failure.

In contrast to that, we tend to propose a radically different picture of language, one that has of course already existed in many preformed flavors. Since we can’t transfer anything directly into another’s mind, the only thing we can do is invite or trigger processes of interpretation. In the chapter about vagueness we called words “processual indicatives”, for slightly different reasons. Language is a highly structured, institutionalized and symbolized “demonstrating”, an invitation to interpret. Richard Brandom investigated in great detail [29] the processes and the roles of speakers and listeners in that game of mutual invitation to interpretation. The mutuality allows for a synchronization, a resonance, and a more or less strong resemblance between pairs of speakers and listeners.

The “naming game” and its derivative, the “word game”, are embedded into a context of “language games”. Actually, word games and language games are not as closely related as it might appear prima facie, at least beyond the common characteristics that allow us to label both a “game”. This becomes apparent if we ask what happens with the “physical” representative of a single word that we throw into our mechanisms. If there is no sentential context, or likewise no social context such as a chat, then a lot of quite different possible continuations are triggered. If we call out “London”, our partner in the chat may continue with “Jack London” (the writer), “Jack the Ripper”, Chelsea, London Tower, Buckingham, London Heathrow, London Soho, London Stock Exchange, etc., but also Paris, Vienna, Berlin, etc., the choice depending slightly on our mood, the thoughts we had before, and so on. In other words, the word that we bring to the foreground as a crisp entity behaves like a seedling: it is the starting point of a potential garden or forest; it functions as the root of the unfolding of a potential story (as a co-weaving of a network of abstract relations). To bring in another metaphorical representation: words are like the initial traces of firework rockets, or the traces of elementary particles in statu nascendi as observed in a bubble chamber: they promise a rich texture of upcoming events.
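The triggering described above can be sketched mechanically. The following is a deliberately crude toy (the word list, the “mood” parameter, and the function names are all our own invented assumptions, not a model actually proposed in this essay):

```python
import random

# Invented toy data: a tiny associative neighborhood for one seed word.
associations = {
    "London": ["Jack London", "Jack the Ripper", "Chelsea", "London Tower",
               "Buckingham", "Heathrow", "Soho", "Paris", "Vienna", "Berlin"],
}

def continuations(word, mood_seed=None, k=3):
    """A single word without sentential or social context triggers a
    spread of possible continuations; which ones surface is modeled
    here, very crudely, as a draw that depends on 'mood' (the seed)."""
    rng = random.Random(mood_seed)
    pool = associations.get(word, [])
    return rng.sample(pool, min(k, len(pool)))
```

Calling `continuations("London", mood_seed=1)` and then with `mood_seed=2` yields different selections from the same seedling, mirroring the dependence of the unfolding on mood and prior thoughts.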

Understanding (Images, Words, …)

We have seen that “words” gain shape only as a result of a particular game, the “naming game”, which is embedded into a “language game”. Before those games are played, “words” do not exist as discrete, crisp entities, say as a symbol, or a string of letters. If they did, we could not think. Even more than the “language game”, the “naming game” works mainly as an invitation, or as an acknowledged trigger, for more or less constrained interpretation.

Now there are those enlightened language games of “understanding” and “explaining”. Both of them work just as any other part of speech does: they promise something. The claim to understand something refers to the ability to potentially prepare a series of triggers, which one additionally claims to be able to arrange in such a way as to support the gaining of the respective insight in one’s chat partner. Slightly derived from that, understanding could also mean transferring the structure of the underlying or overarching problematics to other contexts. This ability for adaptive reframing of a problematic setting is thus always accompanied by a demonstrativum, that is, by some abstract image, whether as actual pictorial information, as its imagination, or as its activity. Such a demonstrativum could of course be located completely within language itself, which however is probably quite rare.


It is clear that language does not work as a way to express logical predicates. Trying to use it so requires careful preparations. Language can’t be “cured” or “cleaned” of ambiguities; trying to do so would establish a categorical misunderstanding. Any “disambiguation” happens as a resonating resemblance of at least two participants in language-word-gaming, mutually interpreting each other until both believe that their interests and their feelings match. An actual, so to speak objective, match is neither necessary nor possible. In other words, language does not exist in two different forms, one free of ambiguity and metaphors, and the other full of them. A language without metaphorical dynamics is not a language at all.

The interpretation of empirical phenomena, whether outside of language or concerning language itself, is never fully determinable. Quine called the idea of the possibility of such a complete determination a myth, one of the “dogmas of empiricism” [30]. Given this underdetermination, it does not make any sense to expect that language should be isomorphic to logical predicates or propositions. Language is basically an instance of impredicativity. Elsewhere we already met the self-referentiality of language (its strong singularity) as another reason for this. Instead, we should expect that this fundamental empirical underdetermination is reflected appropriately in the structure of language, namely as analogical thinking, or, quite related to that, as metaphorical thinking.

Ambiguity is not a property of language or words, it is a result, or better, a property of the process of interpretation at some arbitrarily chosen point in time. And that process takes place synchronously within a single brain/mind as well as between two brains/minds. Language is just the mediating instance of that intercourse.


It is now possible to clarify the ominous concept of “intelligence”. We find the concept in the name of a whole discipline (“Artificial Intelligence”), and it is at work behind the scenes in areas dubbed “machine learning”. Then there is the hype about so-called “collective intelligence”. These observations, and of course our own intentions, make it necessary to deal briefly with the concept, although we think that it is a misleading and inappropriate idea.

First of all one has to understand that “intelligence” is an operationalization of a research question, allowing for a measurement, hence for a quantitative comparison. It is questionable whether mental qualities can be made quantitatively measurable without seriously reducing them. For instance, the capacity for I/O operations related to a particular task surely can’t be equated with “intelligence”, even if it could be a necessary condition.

It is just silly to search for “intelligence” in machines or beings, or to assign more or less intelligence to any kind of entity. Intelligence as such does not “exist” independently of a cultural setup; we can’t find it “out there”. Ontology is, as always, not only a bad trail, it leads directly into the abyss of nonsense. The research question, by the way, was induced by the intention to prove that black people and women are less intelligent than white males.

Yet, even if we take “intelligence” in an adapted and updated form, as the capability for autonomous generalization, it is a bad concept, simply because it does not allow one to pose further reasonable questions. This follows directly from its character of being itself an operationalization. Investigating the operationalization hardly brings anything useful to light about the pretended subject of interest.

The concept of intelligence arose in a strongly positivistic climate, where positivism was practiced in an entirely unreflective manner. Hence, its inventors were not aware of the effect of their operationalization. The concept of intelligence implies a strong functional embedding of the respective, measured entity. Yet, while dealing with language undeniably has something to do with higher mental abilities, language is a strictly non-functional phenomenon. It does not matter here that positivists still claim the opposite. And who would stand up claiming that a particular move, e.g. in planning a city, or in dealing with the earth’s climate, is smarter than another? In other words, the other strong assumption of positivism, measurability and identifiability, also fails dramatically when it comes to human affairs. And everything on this earth is a human affair.

Intelligence is only determinable relative to a particular Lebensform. It is thus not possible to “compare the intelligence” of individuals living in different contexts. This finally renders the concept completely useless.


The hypothesis I have been arguing for in this essay claims that the trinity of waves, words and images plays a significant role in the ability to deal with language and in the emergence of higher mental abilities. I proposed, first, that this trinity is irreducible, and second, that it is responsible for this ability in the sense of a necessary and sufficient condition. In order to describe the practicing of that trinity, for instance with regard to possible implementations, I introduced the term “synpresentation”. This concept draws the future track of how to deal with words and images as far as machine-based episteme is concerned.

In more direct terms, we conclude that without the capability to deal with “names”, “words” and language, the attempt to map higher mental capacities onto machines will not make any progress. Once a machine has arrived at such a level, it will find itself in exactly the same position as we humans do. This capability is definitely not sufficiently defined by “calculation power”; indeed, such an idea is ridiculous. Without embedding into appropriate social intercourse, and without solving the question of representation (contemporary computer science and its technology do NOT solve it, of course), even a combined 10^20000 flops will not render the respective machine or network of machines25 “intelligent” in any way.

Words and proper names have been re-formulated as a particular form of “game”, though not as “language games”, but, on a more elementary level, as the “naming game”. I have tried to show how, on the basis of such a reformulation, the problematics of reference can be thought of as dissolving into a pseudo-problem.

Finally, we found important relationships to earlier discussions of concepts such as the making of analogies, or vagueness. We basically agree on the stance that language can’t be clarified and that it is inappropriate (“free of sense”) to assign any kind of predicativity to language. Bluntly put, the application of logic happens in the mind, and nowhere else. Communicating about this application is not based on a language anymore, and similarly, projecting logic onto language destroys language. The idea of a scientific language is as empty as the idea of a generally applicable and understandable language. A language that is not inventive could not be called a language.


1. If you read other articles in this blog you might think that there is a certain redundancy in the arguments and the targeted issues. This is not the case, of course. The perspectives are always a bit different; thus I hope that by the repeated attempt “to draw the face” (Ludwig Wittgenstein) the problematics is rendered more accurately. “How can one learn the truth by thinking? As one learns to see a face better if one draws it.” (Zettel §255, [1])

2. In one of the shortest articles ever published in the field of philosophy, Edmund Gettier [2] demonstrated that it is deeply inappropriate to conceive of knowledge as “justified true belief”. Yet, in the field of machine learning, so-called “belief revision” still follows precisely this untenable position. See also our chapter about the role of logic.

3. Michel Foucault “Dits et Ecrits” I 846 (dt.1075)  [3] cited after Bernhard Waldenfels [4] p.125

4. We will see that the distinction, or even separation, of the “symbolic” and the “material” is neither that clear nor simple. From the side of the machine, Felix Guattari argued in favor of a particular quality [5], the machinic, which is roughly something like a mechanism in human affairs. From the side of the symbolic there is clearly the work of Edwina Taborsky to cite, who extended and deepened the work of Charles S. Peirce in the field of semiotics.

5. particularly homo erectus and  homo sapiens spec.

6. Humans of the species homo sapiens sapiens.

7. For the time being we leave this ominous term “intelligence” untouched, but I will also warn you about its highly problematic state. We will resolve this issue by the end of this essay.

8. Heidegger developed the figure of the “Gestell” (cf. [7]), which serves multiple purposes. It is providing a storage capacity, it is a tool for sort of well-ordered/organized hiding and unhiding (“entbergen”), it provides a scaffold for sorting things in and out, and thus it is working as a complex constraint on technological progress. See also Peter Sloterdijk on this topic [8].

9. elementarization regarding Descartes

10. Homo floresiensis, also called “hobbit man”, lived on Flores, Indonesia, from about 600’000 years until approximately 3’000 years ago. Homo floresiensis derived from homo erectus. 600’000 years ago they obviously built a boat to cross over to the island through a sea gate with strong currents. The interesting issue is that such an endeavor requires a stable social structure, division of labor, and thus also language. Homo floresiensis had a particular forebrain anatomy which is believed to have provided the “intelligence”, while the overall brain was relatively small compared to ours.

11. Concerning “the enigma of brain-mind interaction”, Eccles was an avowed dualist [11]. Consequently he searched for the “interface” between the mind and the brain, in which he was deeply inspired by Karl Popper’s 3-world concept. The “dualist” position holds that the mind exists at least partially independently from, and somehow outside of, the brain. Irrespective of his contributions to neuroscience on the cellular level, these ideas (of Eccles and Popper) are just wild nonsense.

12. The Philosophical Investigations are probably the most important contribution to philosophy in the 20th century. They are often mistaken as a foundational document for the analytic philosophy of language. Yet nothing is more wrong than to take Wittgenstein as a founding father of analytic philosophy. Many of the positions that refer to Wittgenstein (e.g. Kripke’s) are just low-quality caricatures of his work.

13. Blair’s book is a must read for any computer scientist, despite some problems in its conceptualization of information.

14. Goldman [14] provides a paradigmatic example of how psychologists constantly miss the point of philosophy, up to the present day. In an almost arrogant tone he claims: “First, let me clarify my treatment of justificational rules, logic, and psychology. The concept of justified or rational belief is a core item on the agenda of philosophical epistemology. It is often discussed in terms of “rules” or “principles” of justification, but these have normally been thought of as derivable from deductive and inductive logic, probability theory, or purely autonomous, armchair epistemology.”

Markie [15] demonstrated that everything in these claims is wrong or mistaken. Our point is that something like “justification” is not possible in principle, and in particular it is not possible from an empirical perspective. Goldman’s secretions on the foundations of his own work are utter nonsense (to this day).

15. It is one of the rare (but important) flaws in Blair’s work that he assimilates the concept of “information retrieval” in an unreflected manner. Neither is it reasonable to assign an ontological quality to information (we cannot say that information “exists”, as this would deny the primacy of interpretation), nor can we then say that information can be “retrieved”. See also our chapter about this issue. Despite his largely successful attempt to argue in favor of the importance of Wittgenstein’s philosophy for computer science, Blair fails to recognize that ontology is not tenable at large, and particularly not for issues around “information”. It is a language game, after all.

16. See the Stanford Encyclopedia of Philosophy for a discussion of various positions.

17. In our investigation of models and their generalized form, we stressed the point that there are no apriori fixed “properties” of a measured (perceived) thing; instead we have to assign the criteria for measurement actively, hence we call these criteria assignates instead of “properties”, “features”, or “attributes”.

18. See our essay about logic.

20. See the entry in the Stanford Encyclopedia of Philosophy about Quine. Quine, in “Word and Object”, gives the following example (abridged here). Imagine you discovered a formerly unknown tribe of friendly people. Nobody knows their language. You accompany one of them hunting. Suddenly a hare rushes along, crossing your way. The hunter immediately points to the hare, shouting “Gavagai!” What did he mean? Funnily enough, a story like this happened in reality. British settlers in Australia wondered about those large animals hopping around. They asked the aborigines about the animal and its name. The answer was “kangaroo”, which supposedly means “I do not understand you” in their language.

21. This, of course, resembles Bergson, who, in Matter and Memory [23], argued that any thinking and understanding takes place by means of primary image-like “representations”. As Leonard Lawlor (Henri Bergson@Stanford) summarizes, Bergson conceives of knowledge as “knowledge of things, in its pure state, [that] takes place within the things it represents.” We would not describe our principle of associativity, as it can be realized by SOMs, very differently…

22. the main difference between “intension” and “concept” is that the former still maintains a set of indices to raw observations of external entities, while the latter is completely devoid of such indices.

23. We conceived of randolations as pre-specific relations; one may also think of them as probabilistic quasi-species that may eventually become discrete upon some measurement. The motivation for conceiving of randolations is given by the central drawback of relations: their double-binary nature presumes apriori measurability and identifiability, which is not appropriate when dealing with language.

24. “raw” is indeed very relative, especially if we take culturally transformed or culturally enabled percepts into account;

25. There are mainly two aspects to that: (1) large parts of the internet are organized as a hierarchical network, not as an associative network; by now everybody should know that telephone networks did not, do not, and will not develop “intelligence”; (2) so-called grid computing is always organized as a linear, additive division of labor; thus it allows processes to run faster, but no qualitative change is achieved, in contrast, for instance, to the purely size-related difference between a mouse and an elephant. Taking (1) and (2) together, we may safely conclude that doing the wrong things (=counting Cantorian dust) at high speed will not produce anything capable of developing a capacity to understand.


  • [1] Ludwig Wittgenstein, Zettel. Oxford, Basil Blackwell, 1967. Edited by G.E.M. Anscombe and G.H. von Wright, translated by G.E.M. Anscombe.
  • [2] Edmund Gettier (1963), Is Justified True Belief Knowledge? Analysis 23: 121-123.
  • [3] Michel Foucault “Dits et Ecrits”, Vol I.
  • [4] Bernhard Waldenfels, Idiome des Denkens. Suhrkamp, Frankfurt 2005.
  • [5] Henning Schmidgen (ed.), Aesthetik und Maschinismus, Texte zu und von Felix Guattari. Merve, Berlin 1995.
  • [6] David Blair, Wittgenstein, Language and Information – Back to the Rough Ground! Springer Series on Information Science and Knowledge Management, Vol.10, New York 2006.
  • [7] Martin Heidegger, The Question Concerning Technology and Other Essays. Harper, New York 1977.
  • [8] Peter Sloterdijk, Nicht-gerettet, Versuche nach Heidegger. Suhrkamp, Frankfurt 2001.
  • [9] Hermann Haken, Synergetik. Springer, Berlin New York 1982.
  • [10] R. Graham, A. Wunderlin (eds.): Lasers and Synergetics. Springer, Berlin New York 1987.
  • [11] John Eccles, The Understanding of the Brain. 1973.
  • [12] Douglas Hofstadter, Fluid Concepts And Creative Analogies: Computer Models Of The Fundamental Mechanisms Of Thought. Basic Books, New York 1996.
  • [13] Robert van Rooij, Vagueness, Tolerance and Non-Transitive Entailment. pp.205-221 in: Petr Cintula, Christian G. Fermüller, Lluis Godo, Petr Hajek (eds.), Understanding Vagueness. Logical, Philosophical and Linguistic Perspectives. Vol.36 of Studies in Logic, College Publications, London 2011. Book available online.
  • [14] Alvin I. Goldman (1988), On Epistemology and Cognition, a response to the review by S.W. Smoliar. Artificial Intelligence 34: 265-267.
  • [15] Peter J. Markie (1996). Goldman’s New Reliabilism. Philosophy and Phenomenological Research Vol.56, No.4, pp. 799-817
  • [16] Saul Kripke, Naming and Necessity. 1972.
  • [17] Scott Soames, Beyond Rigidity: The Unfinished Semantic Agenda of Naming and Necessity. Oxford University Press, Oxford 2002.
  • [18] Scott Soames (2006), Précis of Beyond Rigidity. Philosophical Studies 128: 645–654.
  • [19] Michel Foucault, Les Hétérotopies – [Radio Feature 1966]. Youtube.
  • [20] Michel Foucault, Die Heterotopien. Der utopische Körper. Aus dem Französischen von Michael Bischoff, Suhrkamp, Frankfurt 2005.
  • [21] David Grahame Shane, Recombinant Urbanism – Conceptual Modeling in Architecture, Urban Design and City Theory. Wiley Academy Press, Chichester 2005.
  • [22] Willard van Orman Quine, Word and Object. M.I.T. Press, Cambridge (Mass.) 1960.
  • [23] Henri Louis Bergson, Matter and Memory. transl. Nancy M. Paul  & W. Scott Palmer, Martino Fine Books, Eastford  (CT) 2011 [1911].
  • [24] Fernando  Colmenares, Helena Rivero (1986).  A conceptual Model for Analysing Interactions in Baboons: A Preliminary Report. pp.63-80. in: Colgan PW, Zayan R (eds.), Quantitative models in ethology. Privat I.E, Toulouse.
  • [25] Irene Pepperberg (1998). Talking with Alex: Logic and speech in parrots. Scientific American. Available online; see also the Wikipedia entry about Alex.
  • [26] a. Robert Seyfarth, Dorothy Cheney, Peter Marler (1980). Monkey Responses to Three Different Alarm Calls: Evidence of Predator Classification and Semantic Communication. Science, Vol.210: 801-803. b. Dorothy L. Cheney, Robert M. Seyfarth (1982). How vervet monkeys perceive their grunts: Field playback experiments. Animal Behaviour 30(3): 739–751.
  • [27] Robert Seyfarth, Dorothy Cheney (1990). The assessment by vervet monkeys of their own and another species’ alarm calls. Animal Behaviour 40(4): 754–764.
  • [28] Klaus Wassermann (2010). Nodes, Streams and Symbionts: Working with the Associativity of Virtual Textures. The 6th European Meeting of the Society for Literature, Science, and the Arts, Riga, 15-19 June, 2010. available online.
  • [29] Richard Brandom, Making it Explicit. Harvard University Press, Cambridge (Mass.) 1998.
  • [30] Willard van Orman Quine (1951), Two Dogmas of Empiricism. Philosophical Review, 60: 20–43. Available online.


Analogical Thinking, revisited. (II)

March 20, 2012 § Leave a comment

In this second part of the essay about a fresh perspective on analogical thinking—more precisely: on models about it—we will try to bring two concepts together that at first sight represent quite different approaches: Copycat and SOM.

Why engage in such an endeavor? Firstly, we are quite convinced that FARG’s Copycat demonstrates an important and outstanding architecture. It provides a well-founded proposal about the way we humans apply ideas and abstract concepts to real situations. Secondly, however, it is also clear that Copycat suffers from a few serious flaws in its architecture, particularly its built-in idealism. This renders any adaptation to more realistic domains, or even to completely domain-independent conditions, very difficult, if not impossible, since this drawback also prohibits structural learning. So far, Copycat is just able to adapt some predefined internal parameters. In other words, the Copycat mechanism merely adapts a predefined structure, though a quite abstract one, to a given empirical situation.

Well, basically there seem to be two different, “opposite” strategies for merging these approaches. Either we integrate the SOM into Copycat, or we try to transfer the relevant, yet-to-be-identified parts from Copycat to a SOM-based environment. Yet, at the end of the day we will see that, and how, the two alternatives converge.

In order to accomplish our goal of establishing a fruitful combination of SOM and Copycat we have to take three main steps. First, we briefly recapitulate the basic elements of Copycat and of a proper instance of a SOM-based system. We will also describe the extended SOM system in some detail, although there will be a dedicated chapter on it. Finally, we have to transfer, and presumably adapt, those elements of the Copycat approach that are missing in the SOM paradigm.

Crossing over

The particular power of (natural) evolutionary processes derives from the fact that they are based on symbols. “Adaptation” or “optimization” are not processes that merely change the numerical values of parameters in formulas. Quite the opposite: in adaptational processes that span generations, parts of the DNA-based story are being rewritten, with potential consequences for the whole of the story. This effect of recombination in the symbolic space is particularly present in the so-called “crossing over” during the production of gamete cells in the context of sexual reproduction in eukaryotes. Crossing over is a “technique” to dramatically speed up the exploration of the space of potential changes. (In some way, this space is also greatly enlarged by symbolic recombination.)
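The mechanism of crossing over can be made concrete with a minimal sketch. This is a hedged illustration of single-point crossover on symbol sequences (the function name and the toy sequences are our own assumptions), not a model of actual meiosis:

```python
import random

def crossover(parent_a, parent_b, rng=random):
    """Single-point crossover: recombine two equal-length symbol
    sequences by swapping their tails after a random cut point.
    A single recombination rewrites whole stretches of the "story"
    at once, rather than nudging numeric parameters."""
    assert len(parent_a) == len(parent_b)
    cut = rng.randrange(1, len(parent_a))  # cut strictly inside the sequence
    child_1 = parent_a[:cut] + parent_b[cut:]
    child_2 = parent_b[:cut] + parent_a[cut:]
    return child_1, child_2

# Toy example: two "chromosomes" of six symbols each.
offspring = crossover(list("AAAAAA"), list("BBBBBB"))
```

Note how each offspring mixes both parental symbol blocks, which is the sense in which recombination explores the space of potential changes much faster than pointwise mutation.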

What we will try here, in our attempt to merge the two concepts of Copycat and SOM, is exactly this: a symbolic recombination. The difference to its natural template is that in our case we do not transfer DNA snippets between homologous locations in chromosomes; we transfer whole “genes”, which are represented by elements.

Elementarizations I: C.o.p.y.c.a.t.

In part 1 we identified two top-level (non-atomic) elements of Copycat: its evolutionary dynamics and the Slipnet.

Since the first element, covering evolutionary aspects such as randomness, population and a particular memory dynamics, is pretty clear, and a whole range of possible ways to implement it is available, any attempt to improve the Copycat approach has to target the static, strongly idealistic characteristics of the structure that FARG calls the “Slipnet”. The Slipnet has to be enabled for structural changes and autonomous adaptation of its parameters. This could be accomplished in many ways, e.g. by representing the items in the Slipnet as primitive artificial genes. Yet, we will take a different road here, since the SOM paradigm already provides the means to achieve idealizations.

At this point we have to elementarize Copycat’s Slipnet in a way that renders it compatible with the SOM principles. Hofstadter emphasizes the following properties of the Slipnet and the items contained therein (p.212 ff.).

  • (1) Conceptual depth allows for a dynamic and continuous scaling of “abstractness” and resistance against “slipping” to another concept;
  • (2) Nodes and links between nodes both represent active abstract properties;
  • (3) Nodes acquire, spread and lose activation, with a switch-on threshold < 1;
  • (4) The length of links represents conceptual proximity or degree of association between the nodes.
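The four properties above can be condensed into a toy sketch. This is emphatically not FARG’s code; all names, numbers, and the particular decay/spread formulas are our own invented assumptions, chosen only to make the listed properties tangible:

```python
class SlipNode:
    """Toy Slipnet node: activation spreads along links and decays;
    deeper (more abstract) concepts resist decay, i.e. "slipping"."""

    def __init__(self, name, depth):
        self.name = name
        self.depth = depth          # conceptual depth in [0, 1], property (1)
        self.activation = 0.0
        self.links = []             # (neighbor, proximity), properties (2), (4)

    def link(self, other, proximity):
        # proximity in (0, 1]: shorter link = closer association
        self.links.append((other, proximity))

    def spread(self):
        # property (3): activation spreads, scaled by conceptual proximity
        for neighbor, proximity in self.links:
            neighbor.activation += self.activation * proximity * 0.5

    def decay(self):
        # deeper concepts lose activation more slowly
        self.activation *= self.depth

    def is_active(self, threshold=0.5):
        # switch-on threshold below full activation, property (3)
        return self.activation >= threshold

a = SlipNode("letter", depth=0.9)
b = SlipNode("successor", depth=0.7)
a.link(b, proximity=0.8)
a.activation = 1.0
a.spread()
```

After one spreading step, `b` holds part of `a`’s activation but stays below the switch-on threshold, which is exactly the graded, sub-threshold behavior that a nominal on/off representation cannot express.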

As a whole, and viewed from the network perspective, the Slipnet behaves much like a spring system, or a network built from rubber bands, where the springs or the rubber bands are regulated in their strength. Note that our concept of SomFluid also exhibits the feature of local regulation of the bonds between nodes, a property that is not present in the idealized standard SOM paradigm.

Yet, the most interesting properties in the list above are (1) and (2), while (3) and (4) are known from the classic SOM paradigm as well. The first item is great because it represents an elegant way of creating a measurability that goes far beyond the nominal scale. As a consequence, “abstractness” ceases to be a nominal all-or-none property, as it is in hierarchies of abstraction. Such hierarchies can now be recognized as mere projections or selections, both introducing a severe limitation of expressibility. Conceptual depth opens a new space.

The second item is also very interesting, since it blurs the distinction between items and their relations to some extent. That distinction is likewise a consequence of relying too readily on the nominal scale of description. It introduces a certain moment of self-reference, though this is not fully developed in the Slipnet. Nevertheless, a result of this move is that concepts can’t be thought without their embedding into a neighborhood of other concepts. Hofstadter clearly introduces a non-positivistic and non-idealistic notion here, as it establishes a non-totalizing meta-concept of wholeness.

Yet, the blurring between “concepts” and “relations” could, and must, be driven far beyond the level Hofstadter achieved, if the Slipnet is to become extensible. Namely, all the parts and processes of the Slipnet need to follow the paradigm of probabilization, since this offers the only way to evade the demons of cybernetic idealism and apriori control. Hofstadter himself relies heavily on probabilization concerning the other two architectural parts of Copycat. It is beyond me why he didn’t apply it to the Slipnet too.

Taken together, we may derive (or: impose) the following important elements for an abstract description of the Slipnet.

  • (1) Smooth scaling of abstractness (“conceptual depth”);
  • (2) Items and links of a network of sub-conceptual abstract properties are instances of the same category of “abstract property”;
  • (3) Activation of abstract properties represents a non-linear flow of energy;
  • (4) The distance between abstract properties represents their conceptual proximity.

A note should be added regarding the last (fourth) point. In Copycat, this proximity is a static number. In Hofstadter’s framework, it does not express something like similarity, since the abstract properties are not conceived as compounds. That is, the abstract properties are themselves on the nominal level. And indeed, it might appear as rather difficult to conceive of concepts as “right of”, “left of”, or “group” as compounds. Yet, I think that it is well possible by referring to mathematical group theory, the theory of algebra and the framework of mathematical categories. All of those may be subsumed into the same operationalization: symmetry operations. Of course, there are different ways to conceive of symmetries and to implement the respective operationalizations. We will discuss this issue in a forthcoming essay that is part of the series “The Formal and the Creative“.
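To hint at what the proposed operationalization could look like in practice, here is our own toy reading (not Hofstadter’s implementation, and only a shadow of full group theory): a relational concept such as “successor” is rendered not as a nominal label but as an invertible operation, whose inverse is another relational concept:

```python
# Toy reading of "relational concepts as symmetry operations":
# "successor" and "predecessor" are not nominal labels but a pair of
# mutually inverse operations on letters.

def successor(letter):
    return chr(ord(letter) + 1)

def predecessor(letter):
    return chr(ord(letter) - 1)

def is_inverse_pair(op, inv, item):
    # group-theoretic check: applying inv after op restores the item
    return inv(op(item)) == item
```

Conceived this way, “right of” and “left of” would be compounds after all: they are characterized by how they compose and invert, not by their name.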

The next step is now to distill the elements of the SOM paradigm in a way that enables a common differential for the SOM and for Copycat.

Elementarizations II: S.O.M.

The self-organizing map is a structure that associates comparable items—usually records of values that represent observations—according to their similarity. Hence, it makes two strong and important assumptions.

  • (1) The basic assumption of the SOM paradigm is that items can be rendered comparable;
  • (2) The items are conceived as tokens that are created by repeated measurement;

The first assumption means that the structure of the items can be described (i) apriori to their comparison and (ii) independently of the final result of the SOM process. Of course, this assumption is not unique to SOMs; any algorithmic approach to the treatment of data is committed to it. The particular status of the SOM is given by the fact—in stark contrast to almost any other method for the treatment of data—that this is its only strong assumption. All other parameters can be handled in a dynamic manner. In other words, there is no particular zone of the internal parametrization of a SOM that would be inaccessible apriori. Compare this with ANN or statistical methods, and you feel the difference… Usually, methods are rather opaque with respect to their internal parameters. For instance, the similarity functional is usually not accessible, which renders all these nice-looking, so-called analytic methods into a kind of subjective gambling. In PCA and its relatives, for instance, the similarity is buried in the covariance matrix, which in turn is only defined within the assumption of normality of correlations. If no rank correlation is used, this assumption is extended even to the data itself. In both cases it is impossible to introduce a different notion of similarity. Consequently, it is also impossible to investigate how the results proposed by the method depend on its structural properties and (opaque) assumptions. In contrast to such unfavorable epistemo-mythical practices, the particular transparency of the SOM paradigm allows for critical structural learning of SOM instances. “Critical” here means that the influence of the method’s internal parameters on the results or conclusions can be investigated, changed, and adapted accordingly.

The second assumption is implied by the SOM’s purpose as a learning mechanism. It simply needs some observations resulting from the same type of measurement. The number of observations (the number of repeats) has to exceed a certain lower threshold, which, dependent on the data and the purpose, is at least 8; typically, however, (much) more than 100 observations of the same kind are needed. Any result will lie within the space delimited by the assignates (properties), and thus any result is a possibility (if we take just the SOM itself).

The particular accomplishment of a SOM process is the transition from the extensional to the intensional description, i.e. the SOM may be used as a tool to perform the step from tokens to types.

From this we may derive the following elements of the SOM:1

  • (1) a multitude of items that can be described within a common structure, though not necessarily an identical one;
  • (2) a dense network where the links between nodes are probabilistic relations;
  • (3) a bottom-up mechanism which results in the transition from an extensional to an intensional level of description;

As a consequence of this structure, the SOM process avoids the necessity of comparing all items (N) to all other items (N-1). This property, together with the probabilistic neighborhoods, establishes the main difference to other clustering procedures.
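The two properties just mentioned can be made palpable in a minimal sketch (my own illustrative reconstruction, not code from any of the cited works; all names and parameter values such as `lr0` and `sigma0` are assumptions): each item is compared only to the K nodes of the lattice, never to the other N-1 items, and the update spreads probabilistically around the best-matching node.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """A minimal SOM: 'data' is an (N, dim) array of items described
    within a common structure (the assignates)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))        # node profiles, the intensions-to-be
    ys, xs = np.mgrid[0:h, 0:w]              # lattice coordinates of all nodes
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(data)):
            item = data[i]
            frac = t / n_steps
            lr = lr0 * (1 - frac)            # decaying learning rate
            sigma = sigma0 * (1 - frac) + 0.5  # shrinking neighborhood radius
            # best-matching unit: one pass over the lattice, not over the items
            d = np.linalg.norm(weights - item, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            # probabilistic neighborhood: the update falls off smoothly around the BMU
            g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (item - weights)
            t += 1
    return weights
```

Note that the similarity functional (here a plain Euclidean norm) sits in one visible line and could be swapped out, which is exactly the transparency claimed above.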

It is quite important to understand that the SOM mechanism as such is not a modeling procedure. Several extensions have to be added and properly integrated, such as

  • – operationalization of the target into a target variable;
  • – validation by separate samples;
  • – feature selection, preferably by an instance of a generalized evolutionary process (though not by a genetic algorithm);
  • – detecting strong functional and/or non-linear coupling between variables;
  • – description of the dependency of the results on internal parameters by means of data experiments.

We already described the generalized architecture of modeling as well as the elements of the generalized model in previous chapters.

Yet, as we explained in part 1 of this essay, analogy making is conceptually incompatible with any kind of modeling, as long as the target of the model points to some external entity. Thus, we have to choose a non-modeling instance of a SOM as the starting point. However, clustering is also one of those processes that provide the transition from extensions to intensions, whether this clustering is embedded into full modeling or not. In other words, neither the classic SOM nor the modeling SOM is a suitable candidate for a merger with Copycat.

SOM-based Abstraction

Fortunately, there is already a proposal, and even a well-known one, that indeed may be taken as such a candidate: the two-layer SOM (TL-SOM), as it has been demonstrated as an essential part of the so-called WebSom [1,2].

Actually, the description as being “two-layered” is a very minimalistic, if not inappropriate, description of what is going on in the WebSom. We already discussed many aspects of its architecture here and here.

Concerning our interests here, the multi-layered arrangement itself is not a significant feature. Any system doing complicated things needs a functional compartmentalization; we have met a multi-part, multi-compartment and multi-layered structure in the case of Copycat too. Apart from that, the SOM mechanism itself remains perfectly identical across the layers.

The real interesting features of the approach realized in the TL-SOM are

  • – the preparation of the observations into probabilistic contexts;
  • – the utilization of the primary SOM as a measurement device (the actual trick).

The domain of application of the TL-SOM is the comparison and classification of texts. Texts belong to unstructured data, and the comparison of texts is exposed to the same problematics as the making of analogies: there is no a priori structure that could serve as a basis for modeling. Also, like the analogies investigated by the FARG, a text is a locational phenomenon, i.e. it takes place in a space.

Let us briefly recapitulate the dynamics in a TL-SOM. In order to create a TL-SOM, the text is first dissolved into overlapping, probabilistic contexts. Note that the locational arrangement is captured by these random contexts. No explicit a priori rules are necessary to separate patterns. The resulting collection of contexts then gets “somified.” Each node then contains similar random contexts that have been derived from various positions in different texts. Now the decisive step is taken, which consists in turning the perspective by “90 degrees”: we can use the SOM as the basis for creating a histogram for each of the texts. The nodes are interpreted as properties of the texts, i.e. each node represents a bin of the histogram. The values of the individual bins count how often the text is represented by the respective node’s random contexts. The secondary SOM then creates a clustering across these histograms, which represent the texts in an abstract manner.

This way the primary lattice of the TL-SOM is used to impose a structure on the unstructured entity “text.”
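The “90-degree turn” just described can be sketched as follows (a hedged, simplified reconstruction of my own, not the WebSom code; tokens are assumed to be already encoded as numbers, standing in for WebSom’s word-category vectors, and all function names are mine):

```python
import numpy as np

def random_contexts(text, size=3):
    """Dissolve an encoded token sequence into overlapping contexts
    (windows of 'size' consecutive tokens)."""
    return [text[i:i + size] for i in range(len(text) - size + 1)]

def text_histogram(contexts, node_weights):
    """Turn the perspective by 90 degrees: map each context of one text onto
    its best-matching node of the already-trained primary SOM, and count
    the hits per node. The nodes act as histogram bins."""
    flat = node_weights.reshape(-1, node_weights.shape[-1])
    hist = np.zeros(len(flat))
    for c in contexts:
        bmu = np.argmin(np.linalg.norm(flat - np.asarray(c), axis=1))
        hist[bmu] += 1
    return hist / hist.sum()   # normalized: an intensional profile of the text
```

The histograms of all texts would then serve as the input vectors for the secondary SOM, which clusters the texts on this abstract level.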

Figure 1: A schematic representation of a two-layered SOM with built-in self-referential abstraction. The input for the secondary SOM (foreground) is derived as a collection of histograms that are defined as a density across the nodes of the primary SOM (background). The input for the primary SOM are random contexts.

To put it clearly: the secondary SOM builds an intensional description of entities that results from the interaction of a SOM with a probabilistic description of the empirical observations. Quite obviously, intensions built this way about intensions are not only quite abstract; the mechanism could even be stacked. It could be described as “high-level perception” with the same justification with which Hofstadter uses the term for Copycat. The TL-SOM turns representational intensions into abstract, structural ones.

The two aspects from above thus interact; they are elements of the TL-SOM. Despite the fact that there are still transitions from extensions to intensions, we also can see that the targeted units of the analysis, the texts, get probabilistically distributed across an area, the lattice of the primary SOM. Since the SOM maps the high-dimensional input data onto its lattice in a way that preserves their topological properties, it is easy to recognize that the TL-SOM creates conceptual halos as an intermediate.

So let us summarize the possibilities provided by the SOM.

  • (1) SOMs are able to create non-empiric, or better: de-empirified idealizations of intensions that are based on “quasi-empiric” input data;
  • (2) TL-SOMs can be used to create conceptual halos.

In the next section we will focus on this spatial, or better: primarily spatial, effect.

The Extended SOM

Kohonen and co-workers [1,2] proposed to build histograms that reflect the probability density of a text across the SOM. Those histograms represent the original units (e.g. texts) in a quite static manner, using a kind of summary statistics.

Yet, texts are definitely not a static phenomenon. At first sight there is at least a series, while more appropriately texts are even described as dynamic networks with an associative power of their own [3]. Returning to the SOM, we see that in addition to the densities scattered across the nodes of the SOM we also can observe a sequence of invoked nodes, according to the sequence of random contexts in the text (or the serial observations).

The not so difficult question then is: how to deal with that sequence? Obviously, it is best conceived, again, as a random process (though one with a strong structure), and random processes are best described using Markov models, either as hidden (HMM) or as transitional models. Note that the Markov model is not a model of the raw observational data; it describes the sequence of activation events of SOM nodes.

The Markov model can be used as a further means to produce conceptual halos in the sequence domain. The differential properties of a particular sequence as compared to the Markov model then could be used as further properties to describe the observational sequence.
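A minimal sketch of this idea (an assumed form of my own, using a plain first-order transition model rather than an HMM; function names are mine): estimate the transition structure over node activations, then score a given sequence by its deviation from that structure.

```python
import numpy as np

def transition_model(node_sequence, n_nodes):
    """Estimate a row-stochastic transition matrix from a sequence of
    activated node indices (with Laplace smoothing so unseen transitions
    keep a small non-zero probability)."""
    counts = np.ones((n_nodes, n_nodes))
    for a, b in zip(node_sequence[:-1], node_sequence[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def sequence_surprise(node_sequence, trans):
    """Negative log-likelihood of a node sequence under the model: a crude
    'differential' property of the sequence relative to the learned
    Markov structure, usable as a further descriptive property."""
    return -sum(np.log(trans[a, b])
                for a, b in zip(node_sequence[:-1], node_sequence[1:]))
```

The surprise value (or a vector of per-step deviations) could then join the histogram as an additional assignate describing the observational sequence.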

(The full version of the extended SOM comprises targeted modeling as a further level. Yet, this targeted modeling does not refer to raw data. Instead, its input is provided completely by the primary SOM, which is based on probabilistic contexts, while the target of such modeling is just internal consistency of a context-dependent degree.)

The Transfer

Just to avoid misunderstanding: it does not make sense to try to represent Copycat completely by a SOM-based system. Its particular dynamics and phenomenological behavior depend a lot on Copycat’s tripartite morphology as represented by the Coderack (agents), the Workspace and the Slipnet. We are “just” in search of a possibility to remove the deep idealism from the Slipnet in order to enable it for structural learning.

Basically, there are two possible routes. Either we re-interpret the extended SOM in a way that allows us to represent the elements of the Slipnet as properties of the SOM, or we try to replace all the items in the Slipnet by SOM lattices.

So, let us take a look at which structures we have (Copycat) or could have (SOM) on both sides.

Table 1: Comparing elements from Copycat’s Slipnet to the (possible) mechanisms in a SOM-based system.

| | Copycat | extended SOM |
|---|---|---|
| 1. Smoothly scaled abstraction | conceptual depth (dynamic parameter) | distance of abstract intensions in an integrated lattice of an n-layered SOM |
| 2. Links as concepts | structure by implementation | reflecting conceptual proximity as an assignate property for a higher level |
| 3. Activation featuring non-linear switching behavior | structure by implementation | x |
| 4. Conceptual proximity | link length (dynamic parameter) | distance in map (dynamic parameter) |
| 5. Kind of concepts | locational, positional symmetries | any |

From this comparison it is clear that the single most challenging part of this route is the possibility for the emergence of abstract intensions in the SOM based on empirical data. From the perspective of the SOM, relations between observational items such as “left-most,” “group” or “right of,” and even such as “sameness group” or “predecessor group,” are just probabilities of a pattern. Such patterns are identified by functions or dynamic combinations thereof. Combinations of topological primitives remain mappable by analytic functions. Such concepts we could call “primitive concepts,” and we can map them to the process of data transformation and the set of assignates as potential properties.2 It is then the job of the SOM to assign a relevancy to the assignates.

Yet, Copycat’s Slipnet also comprises rather abstract concepts such as “opposite.” Furthermore, the most abstract concepts often act as links between more primitive concepts, or, in Hofstadter’s terms, conceptual items of lower “conceptual depth.”

My feeling here is that it is a fundamental mistake to implement concepts like “opposite” directly. What is opposite of something else is a deeply semantic concept in itself, and thus strongly dependent on the domain. I think that most of the interesting concepts, i.e. the most abstract ones, are domain-specific. Concepts like “opposite” could be considered as something “simple” only in the case of geometric or spatial domains.

Yet, that’s not a weakness. We should use this as a design feature. Take the following rather simple case, shown in the next figure, as an example. Here we simply mapped triplets of uniformly distributed random values onto a SOM. The three values can readily be interpreted as the parts of an RGB value, which renders the interpretation more intuitive. The special thing here is that the map has been a really large one: we defined approximately 700’000 nodes and fed approx. 6 million observations into it.

Figure 2: A SOM-based color map showing emergence of abstract features. Note that the topology of the map is a borderless toroid: Left and right borders touch each other (distance=0), and the same applies to the upper and lower borders.

We can observe several interesting things. The SOM didn’t come up with just any arbitrary sorting of the colors. Instead, a very particular one emerged.

First, the map is not perfectly homogeneous anymore. Very large maps tend to develop “anisotropies,” symmetry breaks if you like, simply due to the fact that the signal horizon becomes an important issue. This should not be regarded as a deficiency, though. Symmetry breaks are essential for the possibility of the emergence of symbols. Second, we can see that two “color models” emerged: the RGB model around the dark spot in the lower left, and the YMC model around the bright spot in the upper right. Third, the distance between the bright, almost white spot and the dark, almost black one is maximized.

In other words, and not quite surprisingly, the conceptual distance is reflected as a geometrical distance in the SOM. As in the case of the TL-SOM, we now could use the SOM as a measurement device that transforms an unknown structure into an internal property, simply by using the location in the SOM as an assignate for a secondary SOM. In this way we not only can represent “opposite,” but we even have a model procedure for “generalized oppositeness” at our disposal.
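This measurement step can be sketched in a few lines (a minimal construction of my own, not from the text; the function names and the normalization are assumptions): locate two items on the borderless toroidal lattice and take their wrapped lattice distance as the measured degree of “oppositeness.”

```python
import numpy as np

def toroidal_distance(p, q, grid):
    """Distance between two node coordinates on a borderless (toroidal) map:
    left/right and upper/lower borders touch, so distances wrap around."""
    d = np.abs(np.asarray(p, dtype=float) - np.asarray(q, dtype=float))
    d = np.minimum(d, np.asarray(grid) - d)   # take the shorter way around
    return float(np.hypot(d[0], d[1]))

def oppositeness(item_a, item_b, weights):
    """'Observe the SOM': find the best-matching node of each item and return
    their lattice distance, normalized by the maximal toroidal distance."""
    h, w, _ = weights.shape
    flat = weights.reshape(-1, weights.shape[-1])
    pa = np.unravel_index(np.argmin(np.linalg.norm(flat - item_a, axis=1)), (h, w))
    pb = np.unravel_index(np.argmin(np.linalg.norm(flat - item_b, axis=1)), (h, w))
    dmax = float(np.hypot(h / 2, w / 2))      # farthest two nodes can be on the torus
    return toroidal_distance(pa, pb, (h, w)) / dmax
```

On the color map above, black and white items would land near the two maximally separated spots, so their normalized distance approaches 1; the value itself can then serve as an assignate for a secondary SOM.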

It is crucial to understand this step of “observing the SOM,” thereby conceiving the SOM as a filter, or more precisely as a measurement device. Of course, at this point it becomes clear that a large variety of such transposing and internal-virtual measurement devices may be thought of. Methodologically, this opens a dimension orthogonal to the representation of data, strongly resembling the concept of orthoregulation.

The map shown above even allows one to create completely different color models, for instance one around yellow and another one around magenta. Our color psychology is strongly determined by the sun’s radiated spectrum and hence reflects a particular Lebenswelt; yet, there is no necessity about it. Some insects, like bees, are able to perceive ultraviolet radiation, i.e. their colors may have 4 components, yielding a completely different color psychology, while the capability to distinguish colors remains perfectly intact.3

“Oppositeness” is just a “simple” example of an abstract concept and its operationalization using a SOM. We already mentioned the “serial” coherence of texts (and thus of arguments in general), which can be operationalized as a sort of virtual movement across a SOM of a particular level of integration.

It is crucial to understand that there is no other model besides the SOM that combines the ability to learn from empirical data and the possibility for emergent abstraction.

There is yet another lesson that we can take home from the simple example above. Well, the example doesn’t remain that simple. High-level abstraction, items of considerable conceptual depth so to speak, requires rather short assignate vectors. In the process of learning qua abstraction it appears to be essential that the mass of possible assignates derived from, or imposed by, the measurement of raw data gets reduced. On the one hand, empiric contexts from very different domains should be abstracted, i.e. quite literally “reduced,” into the same perspective. On the other hand, any given empiric context should be abstracted into (much) more than just one abstract perspective. The consequence is that we need a lot of SOMs, all “sufficiently” separated from each other. In other words, we need a dynamic population of self-organizing maps in order to represent the capability of abstraction in real life. “Dynamic population” here means that there are developmental mechanisms that result in a proliferation, almost a breeding, of new SOM instances in a seamless manner. Of course, the SOM instances themselves have to be able to grow and to differentiate, as we have described it here and here.

In a population of SOMs the conceptual depth of a concept may be represented by the effort needed to arrive at a particular abstract “intension.” This comprises not only the ordinary SOM lattices, but also processes like Markov models, simulations, idealizations qua SOMs, targeted modeling, transition into symbolic space, synchronous or potential activations of other SOM compartments, etc. This effort may finally be represented as a “number.”


The structure of a multi-layered system of self-organizing maps, as it has been proposed by Kohonen and co-workers, is a powerful model to represent emerging abstraction in response to empiric impressions. The Copycat model demonstrates how abstraction could be brought back to the level of application in order to become able to make analogies and to deal with “first-time exposures.”

Here we tried to outline a potential path to bring these models together. We regard this combination in the way we proposed it (or a quite similar one) as crucial for any advance in the field of machine-based episteme at large, but also for the rather confined area of machine learning. Attempts like that of Blank [4] appear to suffer seriously from categorical mis-attributions. Analogical thinking does not take place on the level of single neurons.

We didn’t discuss alternative models here (so far; a small extension is planned). The main reasons are, first, that it would be an almost endless job, and second, that Hofstadter already did it, and as a result of his investigation he dismissed all the alternative approaches (from authors like Gentner, Holyoak, Thagard). For an overview of recent models on creativity, analogical thinking, or problem solving, Runco [5] provides a good starting point. Of course, many authors point in roughly the same direction as we did here, but mostly the proposals are circular, not helpful because the problematic is just replaced by another one (e.g. the infamous and completely unusable “divergent thinking”), or can’t be implemented for other reasons. Holyoak and Thagard [6], for instance, claim that a “parallel satisfaction of the constraints of similarity, structure and purpose” is key in analogical thinking. Given our analysis, such statements are nothing but a great mess, mixing modeling, theory, vagueness and fluidity.

For instance, in cognitive psychology, and in the field of artificial intelligence as well, the hypothesis of Structural Mapping (STM) finds a lot of supporters [7]. Hofstadter discusses similar approaches in his book. The STM hypothesis is highly implausible and obviously a leftover of the symbolic approach to Artificial Intelligence, just transposed into more structural regions. The STM hypothesis not only has to be implemented as a whole, it also has to be implemented specifically for each domain. There is no emergence of that capability.

The combination of the extended SOM, interpreted as a dynamic population of growing SOM instances, with the Copycat mechanism indeed appears as a self-sustaining approach into proliferating abstraction and, quite significantly, back from it into application. It will be able to make analogies in any field already at its first encounter with it, even regarding itself, since both the extended SOM and the Copycat comprise several mechanisms that may count as precursors of high-level reflexivity.

After this proposal little remains to be said on the technical level. One of the issues that remain to be discussed is the conditions for the possibility of binding internal processes to external references. Here our favorite candidate principle is multi-modality, that is, the joint and inextricable “processing” (in the sense of “getting affected”) of words, images and physical signals alike. In other words, I feel that we have come close to the fulfillment of the ariadnic question of this blog: “Where is the Limit?” …even in its multi-faceted aspects.

A lot of implementation work now has to be performed, eventually accompanied by some philosophical musings about “cognition,” or, more appropriately, the “epistemic condition.” I just would like to invite you to stay tuned for the software publications to come (hopefully in the near future).


1. see also the other chapters about the SOM, SOM-based modeling, and generalized modeling.

2. It is somehow interesting that in the brain of many animals we can find very small groups of neurons, if not even single neurons, that respond to primitive features such as verticality of lines, or the direction of the movement of objects in the visual field.

3. Ludwig Wittgenstein insisted all the time that we can’t know anything about the “inner” representation of “concepts.” It is thus free of any sense and meaning to claim knowledge about the inner state of oneself as well as of that of others. Wilhelm Vossenkuhl introduces and explains the Wittgensteinian “grammatical” solipsism carefully and in a very nice way.[8]  The only thing we can know about inner states is that we use certain labels for it, and the only meaning of emotions is that we do report them in certain ways. In other terms, the only thing that is important is the ability to distinguish ones feelings. This, however, is easy to accomplish for SOM-based systems, as we have been demonstrating here and elsewhere in this collection of essays.

4. Don’t miss Timo Honkela’s webpage where one can find a lot of gems related to SOMs! The only puzzling issue about all the work done in Helsinki is that the people there constantly and pervasively misunderstand the SOM per se as a modeling tool. Despite their ingenuity they completely neglect the issues of data transformation, feature selection, validation and data experimentation, which all have to be integrated to achieve a model (see our discussion here), for a recent example see here, or the cited papers about the Websom project.

  • [1] Timo Honkela, Samuel Kaski, Krista Lagus, Teuvo Kohonen (1997). WEBSOM – Self-Organizing Maps of Document Collections. Neurocomputing, 21: 101-117.4
  • [2] Krista Lagus, Samuel Kaski, Teuvo Kohonen (2004). Mining massive document collections by the WEBSOM method. Information Sciences, 163(1-3): 135-156. DOI: 10.1016/j.ins.2003.03.017
  • [3] Klaus Wassermann (2010). Nodes, Streams and Symbionts: Working with the Associativity of Virtual Textures. The 6th European Meeting of the Society for Literature, Science, and the Arts, Riga, 15-19 June, 2010. available online.
  • [4] Douglas S. Blank. Implicit Analogy-Making: A Connectionist Exploration. Indiana University Computer Science Department. available online.
  • [5] Mark A. Runco (2007). Creativity: Research, Development, and Practice. Elsevier.
  • [6] Keith J. Holyoak, Paul Thagard (1995). Mental Leaps: Analogy in Creative Thought. MIT Press, Cambridge.
  • [7] John F. Sowa, Arun K. Majumdar (2003). Analogical Reasoning. In: A. Aldo, W. Lex, & B. Ganter (eds.), “Conceptual Structures for Knowledge Creation and Communication,” Proc. Intl. Conf. Conceptual Structures, Dresden, Germany, July 2003. LNAI 2746, Springer. pp. 16-36. available online.
  • [8] Wilhelm Vossenkuhl (2009). Solipsismus und Sprachkritik. Beiträge zu Wittgenstein. Parerga, Berlin.


Elementarization and Expressibility

March 12, 2012 § Leave a comment

Since the beginnings of the intellectual adventure

that we know as philosophy, elements take a particular and prominent role. For us, as we live as “post-particularists,” the concept of element seems to be not only a familiar one, but also a simple, almost a primitive one. One may take this as the aftermath of the ontological dogma of the four (or five) elements and its early dismissal by Aristotle.

In fact, I think that the concept of element is seriously undervalued and hence disregarded much too often, especially as a structural tool in the task of organizing thinking. The purpose of this chapter is thus to reconstruct the concept of “element” in an adequate manner (or at least to provide some first steps of such a reconstruction). To achieve that we have to take three steps.

First, we will try to shed some light on its relevance as a more complete concept. In order to achieve this we will briefly visit the “origins” of the concept in (pre-)classic Greek philosophy. After browsing quickly through some prominent examples, the second part will then deal with the concept of element as a thinking technique. For that purpose we strip off its ontological part (what else?), and turn it into an activity, a technique, and ultimately into a “game of languagability,” called straightforwardly “elementarization.”

This will then forward us to the third part, which deals with the problematics of expression and expressibility, or, more precisely, with the problematics of how to talk about expression and expressibility. Undeniably, creativity breaks (into) new grounds, and this breaking of pre-existing borders also implies new ways of expressing things. Getting clear about creativity thus requires getting clear about expressibility in advance.

The remainder of this essay is arranged in the following sections (active links):

The Roots1

As many other concepts, the concept of “element” first appeared in classic Greek culture. As a concept, the element, Greek “stoicheion” (in Greek letters ΣΤΟΙΧΕΙΟΝ), is quite unique because it is a synthetic concept, without predecessors in common language. The context of its appearance is the popularization of the sundial by Anaximander around 590 B.C. Sundials had been known before, but it was quite laborious to create them, since they required a so-called skaphe, a hollow sphere serving as the projection site for the gnomon’s shadow.

Figure 1a,b. Left (a): A sundial in its ancient (primary) form based on a skaphe, which allowed for equidistant segmentation. Right (b): the planar projection involves hyperbolas and complicated segmentation.

The planar projection promised a much easier implementation; yet, it involves the handling of hyperbolas, which even change relative to the earth’s seasonal inclination. Moreover, the hours can’t be indicated by equidistant segments any more. The mathematical complexity was beyond the capabilities of that time. The idea (presumably Anaximander’s) then was to determine the points for the hours empirically, using “local” time (measured by water clocks) as a reference.

Anaximander also became aware of the particular status of a single point in such a non-trivial “series.” It can’t be thought without reference to the whole series, and additionally there was no simple rule that would have allowed for its easy reconstruction. This particular status he called an “element,” a stoicheion. Anaximander’s element is best understood as a constitutive component, a building block for the purpose of building a series; note the instrumental twist in his conceptualization.

From this starting point, the concept has been generalized in its further career, soon denoting something like “basics” or “basic principles.” While Empedokles conceived the four elements, earth, wind, water and fire, almost as divine entities, it was Platon (Timaios 201, Theaitet 48B) who developed the more abstract perspective of “elements as basic principles.”

Yet, the road of abstraction does not know a well-defined destination. Platon himself introduced the notion of “element of recognition and proofing” for stoicheia. Isokrates, a famous rhetorician and contemporary of Platon, then extended the reach of stoicheia from “basic component / principle” to “basic condition.” This turn is quite significant, since as a consequence it inverts the structure of argumentation from idealistic, positively definite claims to the constraints of such claims; it even opens the perspective onto the “condition of possibility,” a concept that is one of the cornerstones of Kantian philosophy, more than 2000 years later. No wonder Isokrates is said to have opposed Platon’s arguments.

Nevertheless, all these philosophical uses of stoicheia, the elements, have been as ontological principles, in the context of the enigma of the absolute origin of all things and the search for it. This is all the more remarkable as the concept itself had been constructed some 150 years earlier in a purely instrumental manner.

Aristotle dramatically changed the ontological perspective. He dismissed the “analysis based on elements” completely and established what is now known as the “analysis of moments,” to which the concepts of “form” and “substance” are central. Since Aristotle, elemental analysis has been regarded as a perspective heading towards “particularization,” while the analysis of moments is believed to be directed at generalization. Elemental analysis and ontology is considered somewhat “primitive,” probably due to its (historic) neighborhood to the dogma of the four elements.

True, the dualism made from form and substance is more abstract and more general. Yet, as a concept it loses contact with the empiric world, as it is completely devoid of processual aspects. It is also quite difficult, if not impossible, to think “substance” in a non-ontological manner. It seems as if that dualism abolishes even the possibility to think in a manner different from ontology, hence implying a whole range of severe blind spots: the primacy of interpretation, the deeply processual, event-like character of the “world” (the primacy of “process” over “being”), the communal aspects of human lifeforms and their creational power, and the issue of localized transcendence are just the most salient issues that are rendered invisible in the perspective of ontology.

Much more could be said, of course, about the history of those concepts. Aristotle’s introduction of the concept of substance is definitely not without its own problems, paving the way for the (overly) pronounced materialism of our days. And there is, of course, the “Elements of Geometry” by Euclid, the most widely used mathematical textbook ever. Yet, I am neither a historian nor a philologist, so let us now proceed with some examples. I just would like to emphasize that the “element” can be conceived as a structural topos of thinking, starting from the earliest witnesses of historical time.

2. Examples

Think about the chemical elements as they have been invented in the 19th century. Chemical compounds, so the parlance of chemists goes, are made from chemical elements, which were typified by Mendeleev according to their valence electrons and then arranged into the famous “periodic table.” Mendeleev not only constructed a quality according to which the various elements could be distinguished. His “basic principle” allowed him to make qualitative and quantitative predictions of astonishing accuracy. He predicted the existence of chemical elements, “nature’s substances,” unknown so far, along with their physico-chemical qualities. Since this was in the context of natural science, he also could validate those predictions. Without the concept of those (chemical) elements the (chemical) compounds can’t be properly understood. Today a similar development can be observed within the standard theory of particle physics, where basic types of particles are conceived as elements analogous to chemical elements, just that in particle physics the descriptive level is a different one.

Here we have to draw a quite important distinction. The element in Mendeleev’s thinking is not equal to the chemical elements themselves. Mendeleev’s elements are (i) the discrete number (an integer between 1 and 7, and 0/8 for the noble gases like Argon etc.) that describes the free electrons as representatives of electrostatic forces, and (ii) the concept of “completeness” of the set of electrons in the so-called outer shell (or “orbitals”): the numbers of valence electrons of two different chemical elements tend to sum up to eight. Actually, chemical elements can be sorted into groups (gases, different kinds of metals, carbon and silicon) according to the mechanism by which they achieve this magic number (or fail to). As a result, there is a certain kind of combinatorianism; the chemical universe is almost a Lullian-Leibnizian one. Anyway, the important point here is that the chemical elements are only a consequence of a completely different figure of thought.

Still within chemistry, there is another, albeit less well-known, example of abstract “basic principles”: Kekulé’s delocalized valence electrons in carbon compounds (in today’s notion: delocalized 6-π-electrons). Actually, Kekulé added the “element” of indeterminacy to the element of the valence electron. He dropped the idea of a stable state that could be expressed by a numerical value, or even by an integer. His 6-π-orbital is a cloud that cannot be measured directly as such. Today it is easy to see that the whole area of organic chemistry is based on, or even defined by, these conceptual elements.

Another example is provided by Euclid’s “Elements” of geometry. He called it “elements” probably for two main reasons: first, because it was supposed to be complete; second, because you could not remove any of its axioms, procedures, proofs or lines of argument, i.e. any of its elements, without compromising the compound concept “geometry.”

A further example from classical antiquity is the conceptual (re-)construction of causality by Aristotle. He obviously understood that it is not appropriate to take causality as an indivisible entity. Aristotle designed his idea of causality as an irreducible combination of four distinct elements: causa materialis, causa formalis, causa efficiens and causa finalis. To render this a bit more palpable, think about setting fire to a wooden stick and then being asked: What is the cause of the stick’s burning?

Even if I put (causa efficiens) a wooden (causa materialis) stick (causa formalis) above an open flame (part of causa efficiens), it will not necessarily catch fire until I decide that it should (causa finalis). This is a quite interesting structure, since it could be conceived as a precursor of the Wittgensteinian perspective of the language game.

For Aristotle it made no sense to assume that any of the elements of causality, as he conceived it, would be independent of any of the others. For him it would have been nonsense to conceive of causality as any subset of his four elements. Nevertheless, exactly this is what physics has done since Newton. In our culture, causality is almost always debated as if it were identical to causa efficiens. In Newton’s words: Actioni contrariam semper et aequalem esse reactionem. [2] Later, this postulate of actio = reactio was backed by further foundational work through larger physical theories postulating the homogeneity of physical space. Despite the success of physics, the reduction of causality to physical forces remains just that: a reduction. Applying this principle again to any event in the world generates specific deficits, which are well visible in large parts of contemporary philosophy of science when it comes to the debate about the relation between natural science and causality (cf. [3]).

Aristotle himself did not call the components of causality “elements.” Yet, the technique he applied is just that: an elementarization. This technique was quite popular and well known from another discourse, involving earth, water, air, and fire. Eventually this model had to be abandoned, but it is quite likely that the idea of the “element” was handed down all the way to Mendeleev.

Characterizing the Concept of “Element”

As announced before, we would like to strip any ontological flavor from the concept of the element. This marks the difference between conceiving elements as part of the world or, alternatively, as part of a tool-set used in the process of constructing a world. It means taking the concept purely instrumentally, or in other words, as a language game. As such, it is also one of many examples of the necessity to remove any content from philosophy (ontology always claims some kind of such content, which is highly problematic).

A major structural component of the language game “element” is that the entities denoted by it are used as anchors for a particular non-primitive compound quality, i.e. a quality that can’t be perceived by the natural five (or six, or so) senses alone.

On the other hand, elements are also strictly different from axioms. An axiom is a primitive proposition that serves as a starting point in a formal framework, such as mathematics. The intention behind the construction of axioms is to utilize common sense as a basis for more complicated reasoning. Axioms are considered facts that cannot seriously be disputed as such. Thus, they are indeed the main element in the attempt to secure mathematics as an unbroken chain of logic-based reasoning. Of course, the selection of a particular axiom for a particular purpose can always be discussed. But in itself it is a “primitive,” either a simple, more or less empirical fact, or a simple mathematical definition.

The difference to elements is profound. One can always remove a single axiom from an axiomatic system without destroying the sense of the latter. Take for instance the axiom of commutativity in group theory: dropping it leads from the special case of Abelian groups to the general notion of a group. Or, removing the “axiom” of parallel lines from the Euclidean axioms brings us to more general notions of geometry.

In contrast to that pattern, removing an element from an elemental system destroys the sense of the system. Elemental systems are primarily thought of as a whole, as a non-decomposable thing, and any of the elements used is synthetically effective. Their actual meaning is only given by being part of a composition with other elements. Axioms, in contrast, are parts of decomposable systems, where they act as constraints. Removing them usually leads to improved generality. The axioms that build an “axiomatic system” are not tied to each other; they are independent as such. Of course, their interaction will always create a particular conditionability, but that is a secondary effect.

The synthetic activity of elements simply mirrors the assumption that there is (i) a particular irreducible whole, and (ii) that the parts of that whole have a particular relationship to the embedding whole. In contrast to the prejudice that elemental analysis results in an unsuitable particularization of the subject matter, I think that elements are highly integrated, yet themselves non-decomposable idealizations of compound structures. This is true for the quaternity of earth, wind, water and fire, but also for the valence electrons in chemistry or for the elements of complexity, as we have introduced them here. Elements are made from concepts, while axioms are made from definitions.

In some way, elements can be conceived as the operationalization of beliefs. Take a belief, symbolize it, and you get an element. From this perspective it again becomes obvious (on a second route) that elements cannot be conceived as something natural or even ontological; they cannot be discovered as such in a pure or stable form. They can’t be used to prove propositions in a formal system, but they are indispensable for explaining or establishing the possibility of thinking a whole.

Mechanism and organism are just different terms that can be used to talk about the same issue, albeit in a less abstract manner. Yet, it is clear that integrated phenomena like “complexity,” or “culture,” or even “text” can’t be appropriately handled without the structural topos of the element, regardless of which specific elements are actually chosen. In any of these cases it is a particular relation between the parts and the whole that is essential for the respective phenomenon as such.

If we accept the perspective that conceives of elements as stabilized beliefs, we may recognize that they can be used as building blocks for the construction of a consistent world. Indeed, we may well say that it is due to their properties as described above, their positioning between belief and axiom, that we can use them as an initial scaffold (Gestell), which in turn provides the possibility for targeted observation, and thus for consistency, understood both as substance and as logical quality.

Finally, we should say a few words about the relation between elements and ideas. Elsewhere, we distinguished ideas from concepts. Ideas can’t be equated with elements either. Just the other way round: elements may contain ideas, but also concepts, relations and systems thereof, empirical hypotheses or formal definitions. Elements are, however, always immaterial, even in the case of chemistry. For us, elements are immaterial synthetic compounds used as interdependent building blocks of other immaterial things like concepts, rules, or hypotheses.

Many, if not all, concepts are built from elements in a similar way. The important issue is that elements are synthetic compounds which are used to establish further compounds in a particular manner. In the beginning there need not be any kind of a priori justification for a particular choice or design. The only requirement is that the compound built from them allows for some kind of beneficial usage in creating higher integrated compounds which would not be achievable without them.

4. Expressibility

Elements may well be conceived as epistemological stepping stones, capsules of belief that we use to build up beliefs. Thus, the status of elements lies somewhere between models and concepts: not as formal and restricted as models, not as transcendental as concepts, yet still with much stronger ties to empirical conditions than ideas.

It is quite obvious that such a status reflects a prominent role for perception as well as for understanding. The element may well be conceived as an active zone of differentiation, a zone from which different kinds of branches emerge: ideas, models, concepts, words, beliefs. We could also say that elements are close to the effects and the emergence of immanence. The ΣΤΟΙΧΕΙΟΝ itself, its origins and transformations, may count as an epitome of this zone where thinking creates its objects. It is “here” that expressibility finds its conditions.

At this point we should recall, and keep in mind, that elements should not be conceived as an ontological category. Elements unfold as (rather than “are”) a figure of thought, an idiom of thinking, a figure for thought. Of course, we can deliberately visit this area, and we may develop certain styles to navigate these (sometimes) misty regions. In other words, we may develop a culture of elementarization. Sadly enough, positivism, which emerged from the materialism of the 19th century on the line from Auguste Comte down to Frege, Husserl, Schlick, Carnap and van Fraassen (among others), destroyed much of that style. In my opinion, much of the inventiveness of the 19th century could be attributed to a certain, yet largely unconscious, attitude towards the topos of the “element.”

No question, elevating the topos of the element into consciousness, as a deliberate means of thinking, is quite promising. Hence, it is also of some importance to our question of machine-based episteme. We may just add a further twist to this overarching topic by asking about the mechanisms and conditions that are needed for the possibility of “elementarization.” In still other words, we could say that elements are the main element of creativity. And we may add that the issue of expression and expressibility is not about words and texts, albeit texts and words potentiated the dynamics and the density of expressibility.

Before we can step on to harvest the power of elementarization, we have to spend some effort on the structure of expression. The first question is: What exactly happens if we invent an element and impose it on our thoughts? The second salient question concerns the process forming the element itself. Is the “element” just a phenomenological, descriptive parlance, or is it possible to give some mechanism for it?

Spaces and Dimensions

As already demonstrated by Anaximander’s ΣΤΟΙΧΕΙΟΝ, elements put marks into the void. The “element game” introduces discernibility, and it is central to the topos of the element that it implies a whole, an irreducible set, of which it is a constitutive part. This way, elements don’t act just as signposts that would indicate a direction in an already existing landscape. It is more appropriate to conceive of them as generators of landscape. Even before words, whether spoken or written, elements are the basic instance of externalization, abstract writing, so to speak.

It is the abstract topos of elements that introduces the complexities around territorialization and deterritorialization into thought, a dynamics that can never come to an end. Yet, let us focus here on the generative capacities of elements.

Elements transform existing spaces or create completely new ones; they represent the condition for the possibility of expressing anything. The implications are rather strong. Looking back from that conditioning to the topos itself, we may recognize that wherever there is some kind of expression, there is also a germination zone of ideas, concepts and models, and above all, belief.

The space implied by elements is a particular one, though, due to the fact that it inherits the aprioris of wholeness and non-decomposability. Non-decomposability means that the elemental space loses essential qualities if one of the constituting elements were removed.

This may be contrasted to the Cartesian space, the generalized Euclidean space, which is the prevailing concept of space today. A Cartesian space is spanned by dimensions that are set orthogonal to each other. This orthogonality of the dimensional setup allows one to change the position in just one dimension while keeping the position in all other dimensions unchanged, constant. The dimensions are independent of each other. Additionally, the quality of the space itself does not change if we remove one of the dimensions of an n-dimensional Cartesian space (n>1). Thus, the Cartesian space is decomposable.

Spaces are inevitably implied as soon as entities are conceived as carriers of properties, in fact as soon as even a single (“1”!) property is assigned to them. These assigned properties, or short: assignates, can then be mapped to different dimensions. A particular entity thus becomes visible as a particular arrangement in the implied space. In the case of Cartesian spaces, this arrangement consists of a sheaf of vectors, which is as specific to the mapped entity as could be desired.

Dimensions may refer to sensory modalities, to philosophical qualia, or to constructed properties of development in time, that is, concepts like frequency, density, or any kind of pattern. Dimensions may even be purely abstract, as in the case of random vectors or random graphs, which we discussed here, where the assignate refers to some arbitrary probability or to a structural, method-specific parameter.

Many phenomena remain completely mysterious if we do not succeed in setting up the (approximately) right number of dimensions or aspects. This has been famously demonstrated by Abbott and his Flatland [4], or by Ian Stewart and his Flatterland [5]. Other examples are the so-called embedding dimension in complex systems analysis, or the analysis of (mathematical) cusp catastrophes by Ian Stewart [6]. Dimensionality also plays an important role in the philosophy of science, where Ronald Giere uses it to develop a “scientific perspectivism.” [7]

Consider the example of a cloud of points in 3-dimensional space which forms a spiral-like shape, with the main axis of the shape parallel to the z-axis. For points in the upper half of the cloudy spiral there shall be a high probability that they are blue; those in the lower half shall be mostly red. In other words, there is a clear pattern. If we now project the points onto the x-y-plane, i.e. if we reduce dimensionality, we lose the possibility of recognizing the pattern. Yet, the conclusion that there “is” no pattern is utterly wrong. The selection of a particular number of dimensions is a rather critical operation. Hence, taking action without reflecting on the dimensionality of the space of expressibility quite likely leads to severe misinterpretations. The cover of Douglas Hofstadter’s first book “Gödel, Escher, Bach” featured a demonstration of the effect of projection from higher to lower dimensionality (see the image below); another presentation can be found here on YouTube, featuring Carl Sagan on the topic of dimensionality.
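The spiral example can be sketched numerically; the following is a toy construction with parameters of my own choosing, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
t = rng.uniform(0.0, 4.0 * np.pi, n)              # position along the spiral
z = t / (4.0 * np.pi)                             # height grows along the main axis
x = np.cos(t) + rng.normal(0.0, 0.1, n)
y = np.sin(t) + rng.normal(0.0, 0.1, n)
blue = z > 0.5                                    # upper half blue, lower half red

# In 3D the pattern is plainly visible: blue points sit higher than red ones.
print(z[blue].mean() > z[~blue].mean())           # True

# After projecting onto the x-y-plane, both colors fill the same annulus;
# their radial distributions become practically indistinguishable.
r = np.hypot(x, y)
print(abs(r[blue].mean() - r[~blue].mean()))      # close to 0
```

The projection throws away exactly the coordinate that carried the pattern, yet nothing in the projected data warns us that a dimension is missing.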

In mathematics, the relation between two spaces of different dimensionality may itself form an abstract space, a so-called manifold. This exercise of checking out the consequences of removing or adding a dimension/aspect from the space of expressibility is a rewarding game even in everyday life. In the case of fractals in time series, Mandelbrot even conceptualizes a changing dimensionality of the space which is used to embed the observations over time.

Undeniably, this decomposability contributed much to the rise and success of what we call modern science. Any of the spaces of mathematics or statistics is a Cartesian space. Riemann spaces, Hilbert spaces, Banach spaces, topological spaces etc. are all Cartesian insofar as the dimensions are arranged orthogonally to each other, thus introducing the independence of elements before any other definition. Though, the truly revolutionary contribution of Descartes has not been the setup of independent dimensions; it is the “Copernican” move of shifting the “origin” around, and with that, of mobilizing the reference system of a particular measurement.

But again: by performing this mapping, the wholeness of the entity is lost. Any interpretation of the entities requires a point outside of the Cartesian dimensional system. And precisely this externalized position is not possible for an entity that itself “performs cognitive processes.”2 It would be quite interesting to investigate the epistemic role of the externalization of mental affairs through cultural techniques like words, symbols, or computers, yet that task would be huge.

Despite the success of the Cartesian space as a methodological approach, it obviously also remains true that there is no free lunch in the realm of methods and mappings. In the case of the Cartesian space this cost is as huge as its benefit, as both are linked to its decomposability. In Cartesian space it is not possible to speak about a whole; whole entities are simply nonexistent. This is indeed as dramatic as it sounds. Yet, it is a direct consequence of the independence of the dimensions. There is nothing in the structure of the Cartesian space that could be utilized as a kind of medium to establish coherence. We already emphasized that the structure of the Cartesian space implies the necessity of an external observer. This, however, is not quite surprising for a construction devised by Descartes in the age of absolutist monarchies symbiotically tied to Catholicism, where the idea of the machine had been applied pervasively to anything and everything.

There are still further assumptions underlying the Cartesian conception of space. Probably the two most salient ones concern density and homogeneity. At first it might sound somewhat crazy to conceive of a space of inhomogeneous dimensionality. Such a space would have “holes,” about which one could neither talk from within that space, nor would they be recognizable. Yet, from theoretical physics we know about the concept of wormholes, which precisely represent such an inhomogeneity. Nevertheless, the “accessible” parts of such a space would remain Cartesian, so we could call the whole entity “weakly Cartesian.” A famous example is provided by Benoît Mandelbrot’s warping of dimensionality in the time domain of observations [8,9].

From an epistemological perspective, the Cartesian space is just a particular instance of the standardization, or even institutionalization, of the inevitable implication of spaces. Yet, epistemic spaces are not just 3-dimensional, as Kant assumed in his investigation; epistemic spaces may comprise a large and even variable number of dimensions. Nevertheless, Kant was right about the transcendental character of space, though the space we refer to here is not just the 3d- or (n)d-physical space.

Despite the success of the Cartesian space, which builds on the elements of separability, decomposability and the externalizable position of the interpreter, it is perfectly clear that it is nothing but a particular way of dealing with spaces. There are many empirical, cognitive or mental contexts for which the assumptions underlying the Cartesian space are severely violated. Such contexts usually involve the wholeness of the investigated entity as a necessary apriori. Think of complexity, language, or the concept of life forms with its representatives like urban cultures: for any of these domains, the status of any part can’t be qualified in any reasonable manner without always referring to the embedding wholeness.

The Aspectional Space

What we need is a more general concept of space, one which does not start with any assumption about decomposability (or its refutation). Since it is always possible to test and then drop the assumption of dependence (non-decomposability), but never the assumption of independence (decomposability), we should start with a concept of space which keeps the wholeness intact.

Actually, it is not too difficult to start with the construction of such a space. The starting point is provided by a method for visualizing data, the so-called ternary diagram. Particularly in metallurgy and geology, ternary diagrams are abundantly used for the purpose of expressing mixing proportions. The following Figure 2a shows a general diagram for three components A, B, C, and Figure 2b shows a concrete diagram for a three-component steel alloy at 900°C.

Figure 2a,b: Ternary diagrams in metallurgy and geology are precursors of aspectional spaces.

Such ternary diagrams are used to express the relation between different phases where the components all influence each other. Note that the area of the triangle in such a ternary diagram comprises the whole universe as it is implied by the components. However, in principle it is still possible (though not overly elegant) to map the ternary diagram as used in geology into Cartesian space, because there is a strongly standardized convention for mapping values. Any triple of values (a,b,c) is mapped to the axes A, B, C such that these axes are served counter-clockwise, beginning with A. Without that rule, a unique mapping of single points from the ternary space to the Cartesian space would not be possible. Thus we can see that the ternary diagram does not introduce a fundamental difference as compared to the Cartesian space defined by orthogonal axes.
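The standardized convention can be sketched as follows; the concrete corner coordinates are one common choice, picked here for illustration, while the fixed counter-clockwise assignment of A, B, C is the point of the argument:

```python
import math

# Fixed corner positions -- the convention that makes the mapping unique.
# The axes are served counter-clockwise, beginning with A.
CORNERS = {"A": (0.0, 0.0), "B": (1.0, 0.0), "C": (0.5, math.sqrt(3.0) / 2.0)}

def ternary_to_cartesian(a, b, c):
    """Map a composition (a, b, c) to Cartesian coordinates by barycentric
    interpolation between the three fixed corners."""
    s = a + b + c                     # only the proportions matter
    weights = {"A": a / s, "B": b / s, "C": c / s}
    x = sum(w * CORNERS[k][0] for k, w in weights.items())
    y = sum(w * CORNERS[k][1] for k, w in weights.items())
    return (x, y)

print(ternary_to_cartesian(1, 0, 0))    # (0.0, 0.0) -- pure component A
print(ternary_to_cartesian(0, 1, 0))    # (1.0, 0.0) -- pure component B
```

Once the corners are pinned down by convention, every ternary point has exactly one Cartesian image, which is why the ternary diagram is still Cartesian at heart.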

Now let us drop this standardized arrangement of the axes. None of the axes should be primary relative to any other. Obviously, the resulting space is completely different from the spaces shown in Fig. 2. In this space we can keep at most one of n scaling entities constant while changing position (by moving along an arc around one of the corners); most moves change all of them. Compare this to the Cartesian space, where it is possible to change just one coordinate and keep all the others constant. For this reason we should not call the boundaries of such a space “axes” or “dimensions” any more. By convention, we may call the scaling entities “aspections,” derived from “aspect,” a concept that, similarly to the concept of element, indicates the non-decomposability of the embedding context.
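The contrast to Cartesian coordinates can be sketched with a simple normalization; this is a toy model of my own that captures only the impossibility of moving along a single “axis”:

```python
# In a Cartesian triple you can change one coordinate and keep the rest:
p_cart = [0.2, 0.3, 0.5]
q_cart = [0.5, 0.3, 0.5]                  # only the first entry moved

# In an aspectional (compositional) triple the whole is fixed: the values
# are proportions that must sum to 1, so touching one aspect moves them all.
def renormalize(a, b, c):
    s = a + b + c
    return (a / s, b / s, c / s)

p = renormalize(0.2, 0.3, 0.5)
q = renormalize(0.2 + 0.3, 0.3, 0.5)      # try to raise only the first aspect
moved = sum(abs(u - v) > 1e-9 for u, v in zip(p, q))
print(moved)                              # 3 -- all three aspects shifted
```

The sum constraint plays the role of the embedding whole: no coordinate can be addressed in isolation.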

As said, the space that we are going to construct for a mapping of elements can’t be transformed into a Cartesian space any more. It is an “aspectional space,” not a dimensional space. Of course, the aspectional space, together with the introduction of “aspections” as a companion concept to “dimension,” is not just a Glass Bead Game. We urgently need it if we want to talk transparently, and probably even quantitatively, about the relation between parts and wholes in a way that keeps the dependency relations alive.

The requirement of keeping the dependency relations alive has an interesting consequence. It renders the corner points into singular points, or more precisely, into poles, as the underlying apriori assumption is just the irreducibility of the space. In contrast to the ternary diagram (which is thus still Cartesian), the aspectional space is neither defined at the corner points nor along the borders (“edges”). In other words, the aspectional space has no border, despite the fact that its volume appears to be limited. Since it would be somewhat artificial to exclude the edges and corners by dedicated rules, we prefer to achieve the same effect (of exclusion) by choosing a particular structure of the space itself. For that purpose, it is quite straightforward to provide the aspectional space with a hyperbolic structure.

The artist M.C. Escher produced a small variety of confined hyperbolic disks that perfectly represent the structure of our aspectional space. Note that in the disk there are no marked “aspects”; it is a zero-aspectional space. Remember that the 0-dimensional mathematical point represents a number in Cartesian space. This way we have even invented a new class of numbers!3 A value in this class would (probably) represent the structure of the space, in other words the curvature of the hyperbola underlying the scaling of the space. Yet, the whole mathematics around this space and these numbers remains undiscovered!

Figure 3: M.C. Escher’s hyperbolic disk, capturing infinity on the table.

Above we said that this space appears to be limited. This impression of a limitation would hold only for external observers. Yet, our interest in aspectional spaces is precisely given by the apriori assumption of non-decomposability and the impossibility of such an external position for cognitive activities. Aspectional spaces are suitable just for those cases where such an external position is not available. From within such a hyperbolic space, the limitation would not be experienceable, at least not by simple means: the propagation of waves would differ from that in Cartesian space.

Aspections, Dimensions

So, what is the status of the aspectional space, especially as compared to the dimensional Cartesian space? A first step of such a characterization would investigate the possibility of transforming those spaces into each other. A second part would not address the space itself, but its capability to do some things uniquely.

So, let us start with the first issue, the possibility of a transition between the two types of spaces. Think of a three-aspectional space. The space is given by the triangularized relation, where the corners represent the intensity or relevance of a certain aspect. Moving around on this plane changes the distance to at least two (n−1) of the corners, but most moves change the distance to all three corners. Now, if we reduce the conceptual difference and/or the possible difference of intensity between all three corners, we experience a sudden change in the quality of the aspectional space when we perform the limit transition into a state where all differential relevance has been expelled; the aspects would then behave perfectly collinearly.

Of course, we would then drop the possibility of dependence, claiming independence as a universal property, which results in a jump into Cartesian space. Notably, there is thus a transformation of the aspectional space that produces a Cartesian space, while there is no way back from the dimensional Cartesian space into aspectional spaces.

This formal exercise sheds an interesting light on the life form of the 17th century, Descartes’ time. Indeed, even assuming the mere possibility of dependence would grant parts of the world autonomy, something that was categorically ruled out in those times. The idea of God as it was prevalent then implied the mechanical character of the world.

Anyway, we can conclude that aspectional spaces are more general than Cartesian spaces, as there is a transition in one direction only. Aspectional spaces are formal spaces just as Cartesian spaces are. It is possible to define negative numbers, and it is possible to provide them with different metrics or topologies.

Figure 4: From aspectional space to dimensional space in 5 steps. Descartes’ “origin” turns out to be nothing else than the abolishment or conflation of elements, which again could be interpreted as a strongly metaphysically influenced choice.

Now to the second aspect of the kinship between aspections and dimensions. One may wonder whether the kind of dependency that can be mapped to aspectional spaces could not be modeled in dimensional spaces as well, for instance by some functional rule acting on the relation between two dimensions. A simple example would be regression, or indeed any analytic function y = f(x).

At first sight it seems that this could result in similar effects. We could, for instance, replace the two dimensions by a new dimension, synthesized in a rule-based manner, e.g. by applying a classic analytical closed-form function. The dependency would disappear, and all dimensions would again be orthogonal, i.e. independent of each other. Such an operation, however, requires that the dimensions are already abstract enough to be combined by closed analytical functions. This reveals that we put the claim of independence into the considerations before anything else. Claiming the perfect equality of the functional mapping of dependency into independence is thus a petitio principii. No wonder we find it possible to do so in a later phase of the analysis. It is thus obvious that the epistemological status of a dependence secondary to the independence of dimensions is completely different from that of a primary dependence.
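A minimal numeric sketch of such a rule-based replacement (the linear rule and the noise level are my own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
y = 2.0 * x + rng.normal(scale=0.5, size=2000)   # y depends on x by a closed-form rule

# Synthesize a new dimension r that absorbs the rule: the pair (x, r) is
# (nearly) uncorrelated, i.e. "orthogonal" -- but note that this only works
# because x and y were already abstract enough to be combined analytically.
r = y - 2.0 * x
print(abs(np.corrcoef(x, y)[0, 1]) > 0.9)        # strong dependence before
print(abs(np.corrcoef(x, r)[0, 1]) < 0.1)        # gone after the substitution
```

The substitution presupposes that we already know (or postulate) the closed-form rule, which is exactly the petitio principii named above: independence is built in before it is “found.”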

A brief Example

A telling example4 of such an aspectional space is provided by the city theory of David Grahame Shane [10]. The space created by Shane in order to accommodate his interest in a non-reductionist coverage of the complexity of cities represents a powerful city theory, from which various models can be derived. The space is established through the three elements of armature, enclave and (Foucaultian) heterotopia. Armature is, of course, a rather general concept, designed to cover more or less straight zones of transmission or the guidance for such, which however nicely expresses the double role of “things” in a city. It points to things as part of the equipment of a city as well as to their role as anchor (points). Armatures, in Shane’s terminology, are things like gates, arcades, malls, boulevards, railways, highways, skyscrapers or particular forms of public media, that is, particular forms of passages. Heterotopias, on the other hand, are rather complicated “things”; at least the term invokes the whole philosophical stance of the late Foucault, to whom Shane explicitly refers. For any of these elements, Shane then provides extensions and phenomenological instances, as values if you like, from which he builds a metric for each of the three basic aspects. Throughout his book he demonstrates the usefulness of his approach, which is based on these three elements. This usefulness becomes tangible because Shane’s city theory is an aspectional space of expressibility which allows one to compare and relate an extreme variety of phenomena regarding the city and the urban organization. Of course, we must expect other such spaces in principle; this would not only be interesting, but also a large amount of work to complete. Quite likely, however, it will be just an extension of Shane’s concept.

5. Conclusion

Freeing the concept of the “element” from its ontological burden turns it into a structural topos of thinking. The “element game” is a mandatory condition for the creation of the spaces that we need in order to express anything. Hence, the “element game,” or briefly, the operation of “elementarization,” may be regarded as the prime instance of externalization, and as such also as the hot spot of the germination of ideas, concepts and words, both abstract and factual. For our concerns here about machine-based episteme, it is important that the notion of the element provides an additional (new?) possibility of asking about the mechanisms in the formation of thinking.

Elementarization also represents the conditions for “developing” ideas and for “settling” them. Yet, our strictly non-ontological approach helps to avoid premature and final territorialization in thought. Quite to the contrary, if understood as a technique, elementarization helps to open new perspectives.

Elementarization appears as a technique to create spaces of expressibility, even before words and texts. It is thus worthwhile to consider words as representatives of a certain dynamics around processes of elementarization, both as an active as well as a passive structure.

We have been arguing that the notion of space does not automatically determine the space to be a Cartesian space. Elements do not create Cartesian spaces. Their particular reference to the apriori acceptance of an embedding wholeness renders both the elements and the space implied by them incompatible with Cartesian space. We introduced the notion of “aspects” in order to reflect the particular quality of elements. Aspects are the result of a more or less volitional selection and construction.

Aspectional spaces are spaces of mutual dependency between aspects, while Cartesian spaces claim that dimensions are independent of each other. Concerning the handling and usage of spaces, parameters have to be sharply distinguished both from aspects and from dimensions. In mathematics and the natural sciences, parameters are distinguished from variables. Variables are to be understood as containers for all allowed instances of values of a certain dimension. Parameters modify only the operation of placing such a value into the coordinate system. In other words, they change neither the general structure of the space used for, or established by, performing a mapping, nor the dimensionality of the space itself. For designers as well as scientists, and more generally for any person acting with or upon things in the world, it is thus more than naive to play around with parameters without explicating or challenging the underlying space of expressibility, whether this is a Cartesian or an aspectional space. From this it also follows that the estimation of parameters can’t be regarded as an instance of learning.
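To make the distinction concrete, here is a minimal sketch in Python; the function `place_point` and its parameters are my own hypothetical illustration, not drawn from any particular formalism:

```python
# place_point maps an observation (x, y) into a fixed two-dimensional
# space. The variables x and y range over the dimensions of that space;
# the parameters scale and offset merely modify how the value is placed
# into the coordinate system.
def place_point(x, y, scale=1.0, offset=0.0):
    return (scale * x + offset, scale * y + offset)

# Tuning the parameters relocates points within the SAME space ...
p1 = place_point(2.0, 3.0)
p2 = place_point(2.0, 3.0, scale=2.0, offset=0.5)
# ... it changes neither the structure nor the dimensionality of the
# space, which is why estimating parameters alone is not yet learning.
```

Whatever values `scale` and `offset` take, the space of expressibility itself stays untouched; only a change of the dimensions, or of their mutual dependency, would alter it.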

Here we didn’t mention the mechanisms that could lead to the formation of elements. Yet, it is quite important to understand that we didn’t just shift the problematics of creativity to another descriptional layer without getting a better grip on it. The topos of the element allows us to develop and to apply a completely different perspective on the “creative act.”

The mechanisms that could be put in charge of generating elements will be the issue of the next chapter. There we will deal with relations and their precursors. We will also briefly return to the topos of comparison.

Part 3: A Pragmatic Start for a Beautiful Pair

Part 5: Relations and Symmetries (forthcoming)


1. Most of the classic items presented here I have taken from Wilhelm Schwabe’s superb work about the ΣΤΟΙΧΕΙΟΝ [1], in Latin letters “stoicheion.”

2. The external viewpoint was recognized as an unattainable desire already by Archimedes, long ago.

3. Just consider the imaginary numbers, which are basically 2-dimensional entities, where the imaginary unit i expresses a turn of 90 degrees in the plane.

4. Elsewhere [11] I dealt in more detail with Shane’s approach, a must read for anyone dealing with or interested in cities or urban culture.

  • [1] Wilhelm Schwabe, ‘Mischung’ und ‘Element’ im Griechischen bis Platon. Wort- u. begriffsgeschichtliche Untersuchungen, insbes. zur Bedeutungsentwicklung von ΣΤΟΙΧΕΙΟΝ. Bouvier, Bonn 1980.
  • [2] Isaac Newton, Philosophiae naturalis principia mathematica. Vol. 1, Tomus Primus. London 1726, p. 14. (http://gdz.sub.uni-goettingen.de/no_cache/dms/load/img/?IDDOC=294021)
  • [3] Wesley C. Salmon, Explanation and Causality. 2003.
  • [4] Edwin A. Abbott, Flatland.
  • [5] Ian Stewart, Flatterland.
  • [6] Ian Stewart & nn, Catastrophe Theory.
  • [7] Ronald N. Giere, Scientific Perspectivism.
  • [8] Benoit B. Mandelbrot, Fractals: Form, Chance and Dimension. Freeman, New York 1977.
  • [9] Benoit B. Mandelbrot, Fractals and Scaling in Finance. Springer, New York 1997.
  • [10] David Grahame Shane, Recombinant Urbanism. Wiley, New York 2005.
  • [11] Klaus Wassermann (2011). Sema Città: Deriving Elements for an applicable City Theory. In: T. Zupančič-Strojan, M. Juvančič, S. Verovšek, A. Jutraž (eds.), Respecting Fragile Places, 29th Conference on Education in Computer Aided Architectural Design in Europe (eCAADe). Available online.


Ideas and Machinic Platonism

March 1, 2012 § Leave a comment

Once the cat had the idea to go on a journey…
You don’t believe me? Didn’t your cat have the same idea? Or is your doubt about my belief that cats can have ideas?

So, look at this individual here, who is climbing along the facade, outside the window…

(sorry for the spoken comment being available only in German language in the clip, but I am quite sure you got the point anyway…)

Cats definitely know about the height of their own position, and this one is climbing from flat to flat … outside, on the facade of the building, on the 6th floor. Crazy, or cool, respectively, in the full meaning of those words, this cat here, since it looks like she has a plan… (of course, anyone who has ever lived with a cat knows very well that they can have plans… pride like this one here, and also remorse…)

Yet, what would your doubts look like if I said “Once the machine got the idea…”? Probably you would stop talking or listening to me, turning away from this strange guy. Anyway, just that is the claim here, and hence I hope you keep reading.

We already discussed elsewhere1 that it is quite easy to derive a bunch of hypotheses about empirical data. Yet, deriving regularities or rules from empirical data does not make up an idea, or a concept. At most they could serve as a kind of qualified precursor for the latter. Once the subject of interest has been identified, deriving hypotheses about it is almost mechanical. Ideas and concepts alike are much more closely related to the invention of a problematics, as Deleuze has worked out again and again, without being that invention or problematics themselves. To overlook (or to negate?) this difference between the problematic and the question is one of the main failures of logical empiricism, and probably even of today’s science.

The Topic

But what is it, then, that would make up an idea, or a concept? Douglas Hofstadter once wrote [1] that we lack a concept of concept. Since then, a discipline has emerged that calls itself “formal concept analysis”. So, some people actually do think that concepts could be analyzed formally. We will see that the issues about the relation between concepts and form are quite important. We already met some aspects of that relationship in the chapters about formalization and creativity. And we definitely think that formalization expels anything interesting from whatever probably had been a concept before that formalization. Of course, formalization is an important part of thinking, yet its importance is restricted to the stages before there are concepts, or to after we have reduced them to a fixed set of finite rules.


Ideas are almost annoying, I mean, as a philosophical concept, and they have been so since the first clear expressions of philosophy. From the very beginning there was a quarrel not only about “where they come from,” but also about their role with respect to knowledge. Very early on in philosophy two seemingly juxtaposed positions emerged, represented by the philosophical approaches of Plato and Aristotle. The former claimed that ideas are prior to perception, while the latter clearly assigned ideas the status of something derived, secondary. Yet, recent research has emphasized the possibility that the contrast between them is not as strong as has been proposed for more than 2000 years. There is an eminent empirical pillar in Plato’s philosophical edifice [2].

We certainly will not delve into this discussion here; it simply would take too much space and effort, and not least there are enough sources on the web displaying the traditional positions in great detail. Throughout history since Aristotle, many and rather divergent flavors of idealism emerged. Whatever the exact distinctive claim of any of those positions, they all share the belief in the dominance of some top-down principle as an essential part of the conditions for the possibility of knowledge, or more generally the episteme. Some philosophers, like Hegel or Frege, just as others nowadays perceived as members of German Idealism, took rather radical positions. Frege’s hyper-platonism, probably the most extreme idealistic position (though not exceeding Hegel’s “great spirit” by far), indeed claimed that something like a triangle exists, and quite literally so, albeit in a non-substantial manner, completely independent of any, e.g. human, thought.

Let us fix this main property, the claim of a top-down principle, as characteristic for any flavor of idealism. The decisive question then is how we could think the becoming of ideas. It is clearly one of the weaknesses of idealistic positions that they induce a salient vulnerability regarding the issue of justification. As a philosophical structure, idealism mixes content with value in the structural domain, consequently and quite directly leading to a certain kind of blind spot: political power is justified by the right idea. The factual consequences have been disastrous throughout history.

So, there are several alternatives for thinking about this becoming. But even before we consider any alternative, it should be clear that something like “becoming” and “idealism” are barely compatible. Maybe a very soft idealism, one that has already turned into pragmatism, much in the vein of Charles S. Peirce, could allow one to think process and ideas together. Hegel’s position, as well as Schelling’s, Fichte’s, Marx’s or Frege’s, definitely excludes any such rapprochement or convergence.

The becoming of ideas cannot be thought of as something flowing down from ever greater transcendental heights. Of course, anybody may choose to invoke some kind of divinity here, but obviously that does not help much. A solution along the lines of Hegel’s great spirit, history itself, is not helpful either, even though this concept implied that there is something in and about the community that is indispensable when it comes to thinking. Much later, Wittgenstein took a related route and thereby initiated the momentum towards the linguistic turn. Yet, Hegel’s history is not useful for getting clear about the becoming of ideas as regards the involved mechanisms. And without such mechanisms anything like machine-based episteme, or cats having ideas, is accepted as impossible apriori.

One such mechanism is interpretation. For us the principle of the primacy of interpretation is definitely indisputable. This does not mean that we disregard the concept of the idea; yet, we clearly take an Aristotelian position. More à jour, we could say that we are quite fond of Deleuze’s position on relating empirical impressions, affects, and thought. There are, of course, many thinkers in the period spanning between Aristotle and Deleuze who are quite influential for our position.2
Yet, somehow it all culminated in the approach that has been labelled French philosophy, which for us comprises mainly Michel Serres, Gilles Deleuze and Michel Foucault, with some predecessors like Gilbert Simondon. They converged towards a position that allows one to think the embedding of ideas in the world as a process, or as an ongoing event [3,4], and this embedding is based on empirical affects.

So far, so good. Yet, we have only declared the kind of raft we will build to sail with. We didn’t mention anything about how to build this raft, or how to sail it. Before we can start to constructively discuss the relation between machines and ideas, we first have to visit the concept, both as an issue and as a concept.


“Concept” is a very special concept. First, it is not externalizable, which is why we call it a strongly singular term. Whenever one thinks “concept,” there is already something like a concept. For most of the other terms in our languages, such as “idea,” that does not hold. In this respect, and regarding the structural dynamics of its usage, “concept” behaves similarly to “language” or “formalization.”

Additionally, however, “concept” is not a self-contained term like “language.” One needs not only symbols; one even needs a combination of categories and structured expression; there are also Peircean signs involved; and, last but not least, concepts relate to models, even as models are also quite apart from them. Ideas do not relate to models in the same way that concepts do.

Let us, for instance, take the concept of time. There is this abundantly cited quote by Augustine [5], a passage where he tries to explain the status of God as the creator of time, hence the fundamental incomprehensibility of God, and even of his creations (such as time) [my emphasis]:

For what is time? Who can easily and briefly explain it? Who even in thought can comprehend it, even to the pronouncing of a word concerning it? But what in speaking do we refer to more familiarly and knowingly than time? And certainly we understand when we speak of it; we understand also when we hear it spoken of by another. What, then, is time? If no one ask of me, I know; if I wish to explain to him who asks, I know not. Yet I say with confidence, that I know that if nothing passed away, there would not be past time; and if nothing were coming, there would not be future time; and if nothing were, there would not be present time.

I certainly don’t want to speculate about “time” (or God) here; instead I would like to focus on this peculiarity Augustine is talking about. Many, and probably even Augustine himself, confine this peculiarity to time (and space). I think, however, that this peculiarity applies to any concept.

By means of this example we can quite clearly experience the difference between ideas and concepts. Ideas are some kind of model (we will return to that in the next section), while concepts are both the condition for models and conditioned by models. The concept of time provides the condition for calendars, which in turn can be conceived as a possible condition for the operationalization of expectability.

“Concepts” as well as “models” do not exist as “pure” forms. We elicit a strange and eminently counter-intuitive force when trying to “think” pure concepts or models. The stronger we try, the more we imply their “opposite”, which in the case of concepts presumably is the embedding potentiality of mechanisms, and in the case of models we could say it is simply belief. We will discuss these relations in much more detail in the chapter about the choreosteme (forthcoming). Actually, we think it is appropriate to conceive of terms like “concept” and “model” as choreostemic singular terms, or, for short, choreostemic singularities.

Even from an ontological perspective we could not claim that there “is” such a thing as a “concept”. Well, you may already know that we refute any ontological approach anyway. Yet, in the case of choreostemic singular terms like “concept” we can’t simply resort to our beloved language game. With respect to language, the choreosteme takes the role of an apriori, something like the sum of all conditions.

Since we would need a full discussion of the concept of the choreosteme, we can’t fully discuss the concept of “concept” here. Yet, as a kind of summary, we may propose that the important point about a concept is that it is nothing that could exist. It exists neither as matter, nor as information, nor as substance, nor as form.

The language game of “concept” simply points in the direction of that non-existence. Concepts are not a “thing” that we could analyze, and also nothing that we could relate to by means of an identifiable relation (as e.g. in a graph). Concepts are best taken as a gradient field in a choreostemic space, yet one exhibiting a quite unusual structure and topology. So far, we have identified two (of a total of four) singularities that together span the choreostemic space. We could also say that the language game of “concept” is used to indicate a certain form of drift in the choreostemic space. (Later we will also discuss the topology of that space, among many other issues.)

For our concern here in this chapter, machine-based episteme, we can conclude that it would be a misguided approach to try to implement concepts (or their formal analysis). The issue of the conditions for the ability to move around in the choreostemic space we have to postpone. In other words, we have confined our task, or at least we have found a suitable entry point for it: the investigation of the relation between machines and ideas.

Machines and Ideas

When talking about machines and ideas we are, here and for the time being, not interested in the usage of machines to support “having” ideas. We are not interested in such tooling for now. The question is about the mechanism inside the machine that would lead to the emergence of ideas.

Think about the idea of a triangle. Certainly, triangles as we imagine them do not belong to the material world. Any possible factual representation is imperfect compared with the idea. Yet, without the idea (of the triangle) we wouldn’t be able to proceed, for instance towards land surveying. As already said, ideas serve as models; they need not involve formalization, yet they often live as formalizations (though not always mathematical ones) in the sense of an idealized model; in other words, they serve as ladder spokes for actions. Concepts, if we contrast them with ideas, that is, if we try to distinguish them, never could be formalized; they remain inaccessible as condition. Nothing else could be expected from a transcendental singularity.

Back to our triangle. Even though we can’t represent them perfectly, seeing a lot of imperfect triangles gives rise to the idea of the triangle. Rephrased in this way, we may recognize that the first half of the task is to look for a process that would provide an idealization (of a model), starting from empirical impressions. The second half of the task is to get the idea working as a kind of template, yet not as a template. Such an abstract pattern is detached from any direct empirical relation, despite the fact that we once started with empirical data.

Table 1: The two tasks in realizing “machinic idealism”

Task 1: process of idealization that starts with an intensional description
Task 2: applying the idealization for first-of-a-kind-encounters

Here we should note that culture is almost defined by the fact that it provides such ideas before any individual person could possibly collect enough experience to derive them on her own.

In order to approach these tasks, we first need model systems that exhibit the desired behavior, but which are also simple enough to comprehend. Let us first deal with the first half of the task.

Task 1: The Process of Idealization

We already mentioned that we need to start from empirical impressions. These can be provided by the Self-organizing Map (SOM), as it is able to abstract from the list of observations (the extensions), thereby building an intensional representation of the data. In other words, the SOM is able to create “representative” classes. Of course, these representations are dependent on some parameters, but that’s not the important point here.

Once we have those intensions available, we may ask how to proceed in order to arrive at something that we could call an idea. Our proposal for an appropriate model system consists of the following parts:

  • (1) a small set (n=4) of profiles, each consisting of 3 properties; the form of the profiles is set apriori such that they overlap partially;
  • (2) a small SOM, here with 12×12=144 nodes; the SOM needs to be trainable and should also provide a classification service, i.e. act as a model;
  • (3) a simple Monte-Carlo simulation device that is able to create randomly varied profiles which deviate from the original ones without departing too much;
  • (4) a measurement process that records the (simulated) data flow.

The profiles are defined as shown in the following table (V denotes variables, C denotes categories, or classes):

      V1    V2    V3
C1    0.1   0.4   0.6
C2    0.8   0.4   0.6
C3    0.3   0.1   0.4
C4    0.2   0.2   0.8
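The profiles and the Monte-Carlo device (parts 1 and 3 of the list above) can be sketched as follows; the names `PROFILES` and `simulate_records` as well as the spread value are my own illustrative choices, not part of the original model system:

```python
import random

# The four profiles from the table: each class C1..C4 is an intensional
# description over the three properties V1..V3.
PROFILES = {
    "C1": [0.1, 0.4, 0.6],
    "C2": [0.8, 0.4, 0.6],
    "C3": [0.3, 0.1, 0.4],
    "C4": [0.2, 0.2, 0.8],
}

def simulate_records(profiles, n_per_class=50, spread=0.05, seed=42):
    """Monte-Carlo device: create randomly varied records that deviate
    from the given profiles without departing too much from them."""
    rng = random.Random(seed)
    records = []
    for label, profile in profiles.items():
        for _ in range(n_per_class):
            jittered = [min(1.0, max(0.0, v + rng.gauss(0.0, spread)))
                        for v in profile]
            records.append((label, jittered))
    return records

data = simulate_records(PROFILES)  # 4 classes x 50 jittered records
```

The Gaussian jitter, clipped to the unit interval, stands in for the dispersing force mentioned below; any comparable randomization would serve the same role.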

From these parts we then build a cyclic process, which comprises the following steps.

  • (0) Organize some empirical measurement for training the SOM; in our model system, however, we use the original profiles and create an artificial body of “original” data, in order to be able to detect the relevant phenomenon (we have perfect knowledge about the measurement);
  • (1) Train the SOM;
  • (2) Check the intensional descriptions for their implied risk (which should be minimal, i.e. below some threshold) and extract them as profiles;
  • (3) Use these profiles to create a bunch of simulated (artificial) data;
  • (4) Take the profile definitions and simulate enough records to train the SOM; then return to step (1).

Thus, we have two counteracting forces: (1) a dispersion due to the randomizing simulation, and (2) the focusing of the SOM due to the filtering along the separability, in our case operationalized as risk (1/ppv, where ppv = positive predictive value) per node. Note that the SOM process is not a directly re-entrant process as, for instance, Elman networks are [6,7,8].3
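The whole cycle, steps (1) through (4), can be sketched as a toy program. Everything here is an illustrative assumption: the SOM is reduced to a winner-takes-all vector quantizer without a neighborhood function, and the risk filter of step (2) is replaced by simply keeping the most frequently winning nodes:

```python
import random

rng = random.Random(1)

# The four apriori profiles (cf. the table above).
PROFILES = [[0.1, 0.4, 0.6], [0.8, 0.4, 0.6],
            [0.3, 0.1, 0.4], [0.2, 0.2, 0.8]]

def simulate(profiles, n=40, spread=0.05):
    # step (3): randomly varied records around the current profiles
    return [[min(1.0, max(0.0, v + rng.gauss(0.0, spread))) for v in p]
            for p in profiles for _ in range(n)]

def bmu(som, rec):
    # best-matching unit: the node with minimal squared distance
    return min(range(len(som)),
               key=lambda i: sum((w - x) ** 2 for w, x in zip(som[i], rec)))

def train(som, records, rate=0.3):
    # step (1): winner-takes-all training (no neighborhood function)
    for rec in records:
        i = bmu(som, rec)
        som[i] = [w + rate * (x - w) for w, x in zip(som[i], rec)]

def extract_profiles(som, records, k=4):
    # step (2): keep the k most frequently winning intensions as the
    # new, "purified" profiles (a crude stand-in for the risk filter)
    counts = {}
    for rec in records:
        i = bmu(som, rec)
        counts[i] = counts.get(i, 0) + 1
    best = sorted(counts, key=counts.get, reverse=True)[:k]
    return [som[i][:] for i in best]

som = [[rng.random() for _ in range(3)] for _ in range(144)]  # 12x12 nodes
profiles = [p[:] for p in PROFILES]
for cycle in range(5):            # the re-entrant loop, steps (1)-(4)
    records = simulate(profiles)  # dispersion (randomizing force)
    train(som, records)           # focusing (abstraction by the SOM)
    profiles = extract_profiles(som, records)
```

After a few cycles the extracted profiles are node vectors shaped by the loop itself rather than by the original data, which is exactly the drift away from the empirical roots described below.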

This process leads not only to a focusing contrast-enhancement but also to (a limited version of) inventing new intensional descriptions that have never been present in the empirical measurement, at least not saliently enough to show up as an intension.

The following figures 1a-1i show 9 snapshots from the evolution of such a system; it starts at the top-left of the portfolio, then proceeds row-wise from left to right down to the bottom-right item. Each of the 9 items displays a SOM, where the RGB color corresponds to the three variables V1, V2, V3. A particular color thus represents a particular profile on the level of the intension. Remember that the intensions are built from the field-wise average across all the extensions collected by a particular node.

Well, let us now contemplate a bit on the sequence of these panels, which represents the evolution of the system. The first point is that there is no particular locational stability. Of course not, I am tempted to say, since a SOM is not an image, nor does it represent one. A SOM contains intensions and abstractions; the only thing that counts is its predictive power.

Now, comparing the colors between the first and the second panel, we see that the green (top-right in 1a, middle-left in 1b) and the brownish patch (top-left in 1a, middle-right in 1b) appear much clearer in 1b than in 1a. In 1a, the green obviously was “contaminated” by blue, and actually by all other values as well, leading to its brightness. This tendency prevails. In 1c and 1d yellowish colors are separated, etc.

Figure 1a thru 1i: A simple SOM in a re-entrant Markov process develops idealization. Time index proceeds from top-left to bottom-right.

The point now is that the intensions contained in the last SOM (1i, bottom-right of the portfolio) were not recognizable in the beginning; in some important respect they were not present at all. Our SOM steadily drifted away from its empirical roots. That’s not a big surprise, indeed, for we used a randomization process. The nice thing is something different: the intensions get “purified”, thereby changing their status from “intensions” to “ideas”.

Now imagine that the variables V1..Vn represent properties of geometric primitives. Our sensory apparatus is able to perceive and to encode them: horizontal lines, vertical lines, crossings, etc. In empirical data our visual apparatus may find any combination of those properties, especially in the case of a (platonic) school (say: academia) where pupils and teachers draw triangle after triangle onto wax tablets, or into the sand of the pathways in the garden…

By now, the message should be quite clear: there is nothing special about ideas. In abstract terms, what is needed is

  • (1) a SOM-like structure;
  • (2) a self-directed simulation process;
  • (3) re-entrant modeling

Notice that we need not specify a target variable. The associative process itself is sufficient.

Given this model it should no longer be surprising why the first philosophers came up with idealism. It is almost built into the nature of the brain. We may summarize our achievement in the following characterization:

Ideas can be conceived as idealizations of intensional descriptions.

It is of course important to be aware of the status of such a “definition”. First, we tried to separate concepts and ideas. Most of the literature about ideas conflates them. Yet, as long as they are conflated, any reasoning about mental affairs, cognition, thinking and knowledge necessarily remains inappropriate. For instance, the infamous discourse about universals and qualia seriously suffered from that conflation, or more precisely, those debates only arose due to that mess.

Second, our lemma is just an operationalization, despite the fact that we are quite convinced about its reasonability. Yet, there might be different ones.

Our proposal has important benefits though, as it matches a lot of the aspects commonly associated with the term “idea.” In my opinion, what is especially striking about the proposed model is the observation that idealization implicitly also led to the “invention” of “intensions” that were not present in the empirical data. Who would have expected that idealization is implicitly inventive?

Finally, two small notes should be added, concerning the type of data and the status of the “idea” as a continuously intermediate result of the re-entrant SOM process. One should be aware that the “normal” input to natural associative systems are time series. Our brain is dealing with a manifold of series of events, which is mapped onto the internal processes, that is, onto another time-based structure. Prima facie, our brain is not dealing with tables. Yet, (virtual) tabular structures are implied by the process of propertization, which is an inevitable component of any kind of modeling. It is well known that it is time-series data and their modeling that give rise to the impression of causality. In the light of ideas qua re-entrant associativity, we can now easily understand the transition from networks of potential causal influences to the claim of “causality” as some kind of pure concept. Even though the idea of causality (in the Newtonian sense) played an important role in the history of science, it is just that: a naive idealization.

The other note concerns the source of the data. If we consider re-entrant informational structures that are arranged across large “distances”, possibly with several intermediate transformative complexes (for which there are hints from neurobiology), we may understand that for a particular SOM (or SOM-like structure) the type of the source is completely opaque. To put it briefly, it does not matter for our proposed mechanism whether the data are sourced as empirical data from the external world, or as some kind of simulated, surrogate re-entrant data from within the system itself. In such wide-area, informationally re-entrant probabilistic networks we may expect a kind of runaway idealization. The question then is about the minimal size necessary for eliciting that effect. A nice corollary of this result is the insight that logistic networks, such as the internet or the telephone cable network, will NEVER start to think by themselves, as some still expect. Yet, since there are a lot of brains embedded as intermediate transforming entities in this deterministic cablework, we indeed may expect that the whole assembly is much more than could be achieved by a small group of humans living, say, around 1983. But that is not really a surprise.

Task 2: Ideas, applied

Ideas are an extremely important structural phenomenon, because they allow us to recognize things and to deal with tasks that we have never seen before. We may act adaptively before having encountered a situation that would directly resemble (as an equivalence class) any intensional description available so far.

Actually, it is not just one idea; it is a “system” of ideas that is needed for that. Some years ago, Douglas Hofstadter and his group3 devised a model system suitable for demonstrating exactly this: the application of ideas. They called the project (and the model system) Copycat.

We won’t discuss Copycat and analogy-making guided by top-down ideas here (we already introduced it elsewhere). We just want to note that the central “platonic” element in Copycat is a dynamic relational system of symmetry relations. Such symmetry relations are, for instance, “before”, “after”, “builds a group”, “is a triple”, etc. Such relations represent different levels of abstraction, but that’s not important here. Much more important is the fact that the relations between these symmetry relations are dynamic and adapt according to the situation at hand.

I think that these symmetry relations as conceived by the Fargonauts are on the same level as our ideas. The transition from ideas to symmetries is just a grammatological move.

The case of Biological Neural Systems

Re-entrance seems to be an important property of natural neural networks. Very early in the liaison of neurobiology and computer science, starting with Hebb in the late 1940s and Hopfield in the early 1980s, recurrent networks have been attractive to researchers. Just take a look at drawings like the following, created (!) by Ramón y Cajal [10] at the beginning of the 20th century.

Figure 2a-2c: Drawings by Ramón y Cajal, the Spanish neurobiologist. See also: History of Neuroscience. a: from a sparrow’s brain; b: motor cortex in the human brain; c: hypothalamus in the human brain.

Yet, Hebb, Hopfield and Elman got trapped by the (necessary) idealization in Cajal’s drawings. Cajal’s interest was to establish and to prove the “neuron hypothesis”, i.e. that brains work on the basis of neurons. To move from Cajal’s drawings to the claim that biological neuronal structures could be represented by cybernetic systems or finite state machines is, honestly, a breakneck leap, or, likewise, ideology.

Figure 3: Structure of an Elman Network; obviously, Elman was seriously affected by idealization (click for higher resolution).

Thus, we propose to distinguish between re-entrant and recurrent networks. While the latter are directly wired onto themselves in a deterministic manner, that is, the self-reference is modeled on the morphological level, the former are modeled on the informational level. Since it is simply impossible for a cybernetic structure to reflect neuromorphological plasticity and change, the informational approach is much more appropriate for modeling large assemblies of individual “neuronal” items (cf. [11]).

Nevertheless, the principle of re-entrance remains a very important one. It is a structure that is known to lead to contrast enhancement and to second-order memory effects. It is also a cornerstone in the theory (or theories) proposed by Gerald Edelman, who probably is much less affected by cybernetics (e.g. [12]) than the authors cited above. Edelman has always conceived of the brain-mind as something like an abstract informational population; he even was the first to adopt evolutionary selection processes (Darwinian and others) to describe the dynamics in the brain-mind.

Conclusion: Machines and Choreostemic Drift

Our point of departure was to distinguish between ideas and concepts. Their difference becomes visible if we compare them, for instance, with regard to their relation to (abstract) models. It turns out that ideas can be conceived as more or less stable immaterial entities (though not “states”) of self-referential processes involving self-organizing maps and the simulated surrogates of intensional descriptions. Concepts, on the other hand, are described as a transcendental vector in choreostemic processes. Consequently, we may propose only for ideas that we can implement their conditions and mechanisms, while concepts can’t be implemented. It is beyond the expressibility of any technique to talk about the conditions for their actualization. Hence, the issue of “concept” has been postponed to a forthcoming chapter.

Ideas can be conceived as the effect of putting a SOM into a re-entrant context, through which the SOM develops a system of categories beyond simple intensions. These categories are not justified by empirical references any more, at least not in the strong sense. Hence, ideas can also be characterized as being clearly distinct from models or schemata. Both models and schemata involve classification, which, due to the dissolved bonds to empirical data, can not be regarded as a sufficient component for ideas. We would like to suggest the intended mechanism as the candidate principle for the development of ideas. We think that the simulated data in the re-entrant SOM process should be distinguished from data in contexts that are characterized by the measurement of “external” objects, although their digestion by the SOM mechanism itself remains the same.
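The two-stage character of this mechanism can be sketched in a toy fragment. To be clear: this is merely our caricature of the intended principle, not a reference implementation; the SOM is reduced to a one-dimensional prototype list and every parameter is arbitrary. It only shows the two stages: first training on empirical measurements, then feeding the map's own (slightly perturbed) prototypes back as simulated data, detached from any empirical reference.

```python
import random

random.seed(0)

def train_som(data, n_units=4, epochs=20, lr=0.3):
    """Toy 1-D SOM: prototypes move toward their best-matching inputs."""
    protos = [random.random() for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(protos[i] - x))
            protos[bmu] += lr * (x - protos[bmu])
    return protos

# Stage 1: intensions anchored in "empirical" measurements of external objects.
empirical = [0.1, 0.15, 0.8, 0.85]
protos = train_som(empirical)

# Stage 2: re-entrance. The map's own prototypes, slightly perturbed, are fed
# back as simulated data; the digestion mechanism is unchanged, but the data
# are no longer justified by empirical references.
simulated = [p + random.uniform(-0.05, 0.05) for p in protos for _ in range(3)]
protos2 = train_som(simulated)
```

Note that stage 2 uses exactly the same `train_som` mechanism; only the provenance of the data differs, which is the distinction argued for above.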

From what has been said it is also clear that the capability of deriving ideas alone is still quite close to the material arrangements of a body, whether thought of as biological wetware or as software. Therefore, we still haven’t reached a state where we can talk about epistemic affairs. What we need is the possibility of expressing the abstract conditions of the episteme.

Of course, what we have compiled here exceeds by far any other approach, and additionally we think that it could serve as a natural complement to the work of Douglas Hofstadter. In his work, Hofstadter had to implement the platonic heavens of his machine manually, and even for the small domain he’d chosen it was tedious work. Here we proposed the possibility of a seamless transition from the world of associative mechanisms like the SOM to the world of platonic Copy-Cats, where “seamless” refers to “implementable”.

Yet, what is really interesting is the form of choreostemic movement or drift, resulting from a particular configuration of the dynamics in systems of ideas. But this is another story, perhaps related to Felix Guattari’s principle of the “machinic”, and it definitely can’t be implemented any more.


1. we did so in the recent chapter about data and their transformation, but also see the section “Overall Organization” in Technical Aspects of Modeling.

2. You really should be aware that this trace we try to put forward here does not come close to even a coarse outline of all of the relevant issues.

3. they called themselves the “Fargonauts”, from FARG being the acronym for “Fluid Analogy Research Group”.

4. Elman networks are an attempt to simulate neuronal networks on the level of neurons. Such approaches we rate as fundamentally misguided and deeply inspired by cybernetics [9], because they consider noise as disturbance. Actually, they are equivalent to finite state machines. It is somewhat ridiculous to consider a finite state machine as a model for learning “networks”. SOMs, in contrast, especially if used in architectures like ours, are fundamentally probabilistic structures that could be regarded as “feeding on noise.” Elman networks, and their predecessor, the Hopfield network, are not particularly useful, due to problems in scalability and, more importantly, in stability.

  • [1] Douglas R. Hofstadter, Fluid Concepts And Creative Analogies: Computer Models Of The Fundamental Mechanisms Of Thought. Basic Books, New York 1996. p. 365
  • [2] Gernot Böhme, “Platon der Empiriker.” in: Gernot Böhme, Dieter Mersch, Gregor Schiemann (eds.), Platon im nachmetaphysischen Zeitalter. Wissenschaftliche Buchgesellschaft, Darmstadt 2006.
  • [3] Marc Rölli (ed.), Ereignis auf Französisch: Von Bergson bis Deleuze. Fin, Frankfurt 2004.
  • [4] Gilles Deleuze, Difference and Repetition. 1967
  • [5] Augustine, Confessions, Book 11 CHAP. XIV.
  • [6] Mandic, D. & Chambers, J. (2001). Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability. Wiley.
  • [7] J.L. Elman, (1990). Finding Structure in Time. Cognitive Science 14 (2): 179–211.
  • [8] Raul Rojas, Neural Networks: A Systematic Introduction. Springer, Berlin 1996. (@google books)
  • [9] Holk Cruse, Neural Networks As Cybernetic Systems: Science Briefings, 3rd edition. Thieme, Stuttgart 2007.
  • [10] Santiago R.y Cajal, Texture of the Nervous System of Man and the Vertebrates: Volume I: 1, Springer, Wien 1999, edited and translated by Pedro Pasik & Tauba Pasik. see google books
  • [11] Florence Levy, Peter R. Krebs (2006), Cortical-Subcortical Re-Entrant Circuits and Recurrent Behaviour. Aust N Z J Psychiatry, September 2006, vol. 40, no. 9, 752-758. doi: 10.1080/j.1440-1614.2006.01879
  • [12] Gerald Edelman: “From Brain Dynamics to Consciousness: A Prelude to the Future of Brain-Based Devices“, Video, IBM Lecture on Cognitive Computing, June 2006.


Formalization and Creativity as Strongly Singular Terms

February 16, 2012

Formalization is based on the use of symbols.

In the last chapter we characterized formalization as a way to give a complicated thing a symbolic form that lives within a system of other forms.

Here, we will first discuss a special property of the concepts of formalization and creativity, one that they share for instance with language. We call this property strong singularity. Then, we will sketch some consequences of this state.

What does “Strongly Singular” mean?

Before briefly discussing the adjacent concept of “singular terms” I would like to add a note on the newly introduced term “strong singularity”.

The ordinary Case

Let us take ordinary language, even as this may be a difficult thing to theorize about. At least, everybody is able to use it. We can do a lot of things with language; what these uses have in common, however, is that we employ it in social situations, mostly in order to elicit two “effects”: First, we trigger some interpretation or even inference in our social companion; secondly, we indicate that we did just that. As a result, a common understanding emerges, formally taken, a homeomorphism, which in turn may then serve as the basis for the assignment of so-called “propositional content”. Only then can we “talk about” something, that is, only then are we able to assign a reference to something that is external to the exchanged speech.

As said, this is the usual working of language. For instance, by saying “Right now I am hearing my neighbor exercising piano.” I can refer to common experience, or at least to a construction you would call an imagination (it is anyway always a construction). This way I refer to an external subject and its relations, a fact. We can build sentences about it, about which we even could say whether they correspond to reality or not. But, of course, this already would be a further interpretation. There is no direct access to the “external world”.

In this way we can gain (fallaciously) the impression that we can refer to external objects by means of language. Yet, this is a fallacy, based on an illegitimate shortcut, as we have seen. Nevertheless, for most parts of our language(s) it is possible to refer to external or externalized objects by exchanging the mutual inferential / interpretational assignments as described above. I can say “music” and it is pretty clear what I mean by that, even if the status of the mere utterance of a single word is somewhat deficient: it is not determined whether I intended to refer to music in general, e.g. as the totality of all pieces or the cultural phenomenon, or to a particular piece, to a possibility of its instantiation or the factual instance right now. Notwithstanding this divergent variety, it is possible to trigger interpretations and to start a talk between people about music, while we neither have to play or to listen to music at that moment.

The same holds for structural terms that regulate interpretation predominantly by their “structural” value. It is not that important for us here whether the externalization is directed at objects or at the speech itself. There is an external, even a physical justification for starting to engage in the language game about such entities.

Something different…

Now, this externalization is not possible for some terms. The most obvious is “language”. We neither can talk about language without language, nor can we even think “language” or have the “feeling” of language without practicing it. We also can’t investigate language without using or practicing it. Any “measurement” about language inevitably uses language itself as the means to measure, and this includes any interpretation of speech in language as well. This self-referentiality further leads to interesting phenomena, such as “n-isms” like the dualism in quantum physics, where we also find a conflation of scales. If we fail to take this self-referentiality into consideration, we will inevitably create faults or pseudo-paradoxes.

The important issue about this is that there is no justification of language which could be expressed outside of language; hence there is no (foundational) justification for it at all. We find a quite unique setting, which corrodes any attempt at a “closed”, i.e. formal, analysis of language.

The extension of the concept “language” is at the same time an instance of it.

It is absolutely not surprising that the attempt at a fully mechanic, i.e. a priori determined or algorithmic, analysis of language must fail. Wittgenstein thus arrived at the conclusion that language is ultimately embedded as a practice in the life form [1] (we would prefer the term “performance” instead). He demanded that justifications (of language games as rule-following) have to come to an end1; for him it was fallacious to think that a complete justification—or ultimate foundation—would be possible.

Just to emphasize it again: The particular uniqueness of terms like language is that they can not be justified outside of themselves. Analytically, they start with a structural singularity. Hence the term “strong singularity”, which differs significantly from the concept of the so-called “singular term” as it is widely known. We will discuss the latter below.

The term “strong singularity” indicates the absence of any possibility for an external justification.

In §329 of the Philosophical Investigations, Wittgenstein notes:

When I think in language, there aren’t “meanings” going through my mind in addition to the verbal expressions: the language is itself the vehicle of thought.

It is quite interesting to see that symbols do not own this particular property of strong singularity. Although they are a structural part of language, they do not share this property. Hence we may conceive of this as a remarkable instance of a Deleuzean double articulation [2] in the midst of thinking itself. There would be a lot to say about it, but it would not fit here.

Further Instances

Language now shares the property of strong singularity with formalization. We can neither have the idea nor the feeling of formalization without formalization, and we can not even perform formalization without prior higher-order formalization. There is no justification of formalization which could be expressed outside of formalization; hence there is no (foundational) justification for it at all. The parallel is obvious: Would it then be necessary, for instance, to conclude that formalization is embedded in the life form much in the same way as is the case for language? That mere performance precedes logics? Precisely this could be concluded from the whole of Wittgenstein’s philosophical theory, as Colin Johnston suggested [3].

Performative activity precedes any possibility of applying logics in the social world; formulated the other way round, we can say that transcendental logics is getting instantiated into an applicable quasi-logics. Against this background, the idea of truth functions determining a “pure” or ideal truth value is rendered an importunate misunderstanding. Yet, formalization and language are not only similar with regard to this self-referentiality; they are also strictly different. Nevertheless, so goes the hypothesis we try to strengthen here, formalization resembles language in that we can not have the slightest thought or even any mental operation without formalization. It is even the other way round, in that any mental operation invokes a formalizing step.

Formalization and language are not the only entities which exhibit self-referentiality and which can not be defined from any kind of outside stance. Theory, model and metaphor belong to the family, too, not to forget thinking, hence creativity, at large. A peculiar representative of these terms is the “I”. Close relatives, though not as critical as the former ones, are concepts like causality or information. All these terms are not only self-referential, they are also cross-referential: discussing any of them automatically involves the others. Many instances of deep confusion derive from the attempt to treat them separately, across many domains from the neurosciences, sociology, computer sciences and mathematics up to philosophy. Since digital technologies are deeply based on formalization and have been developing further into a significant deep structure of our contemporary life form, any area where software technology is pervasively used is endangered by the same misunderstandings. One of these areas is architecture and city-planning, or more generally, any discipline where language or the social is involved as a target of the investigation.

There is a last point to note about self-referentiality. Self-referentiality may well lead to a situation that we have described as “complexity”. From this perspective, self-referentiality is a basic condition for the potential of novelty. It is thus interesting to see that this potential is directly and natively implanted into some concepts.

Singular Terms

Now we will briefly discuss the concept of the “singular term” as it is usually referred to. Yet, there is no full agreement about the issue of singular terms, in my opinion mainly due to methodological issues. Many proponents of analytical philosophy simply “forget that they are speaking”, in the sense mentioned above.

The analytical perspective

Anyway, according to the received view, names are singular terms. It is said that the referents of singular terms are singular things or objects, even if they are immaterial, like the unicorn. Yet, the complete list of singular terms looks like this:

  • proper names (“Barack Obama”);
  • labeling designations (“U.S. President”);
  • indexical expressions (“here”, “this dog”).

Such singular terms are distinguished from so-called general terms. Following Tugendhat [4], who refers in turn to Strawson [5], the significance of a general term F consists in the conditions that must be fulfilled for F to match one or several objects. In other words, the significance of a singular term is given by a rule for identification, while the significance of a general term is given by a rule for classification. As a consequence, singular terms require knowledge about general terms.

Such statements are typical for analytical philosophy.

There are serious problems with this view. Even the labeling is misleading: it is definitely NOT the term that is singular. Singular is at most a particular contextual event, which we decided to address by a name. Labelings and indexical expressions are not necessarily “singular,” and quite frequently the same holds for names. Think about “John Smith” first as a name, then as a person…  This mistake is quite frequent in analytic philosophy. We can trace it even into the philosophy of mathematics [6], when it comes to certain claims of set theory about infinity.

The relevance for the possibility of machine-based episteme

There can be little doubt, as we have expressed elsewhere, that human cognition can’t be separated from language. Even the use of the most primitive tools, let alone their production and distribution, requires the capability for at least a precursor of language, some first steps into languagability.

We know by experience that, in our mother tongue, we can understand sentences that we never heard before. Hence, understanding of language (quite likely as any understanding) is bottom-up, not top-down, at least in the beginning of the respective processes. Thus we have to ask about the sub-sentential components of a sentence.

Such components are singular terms. Imagine some perfectly working structure that comprises the capability for arbitrary classification as well as the capability for non-empirical analogical thinking based on dynamic symmetries. The machine would not only be able to perform the transition from extensions to intensions, it would even be able to abstract the intension into a system of meta-algebraic symmetry relations. Such a system, or better, its programmer, would then be faced with the problem of naming and labeling. Somehow the intensions have to be made addressable. A private index does not help, since such an index would be without any value for communication purposes.

The question is how to make the machine refer to proper names. We will see elsewhere (forthcoming: “Waves, Words, and Images“) that this question will lead us to the necessity of multi-modality in processing linguistic input, e.g. language and images together in the same structure (which is just another reason to rely on self-organizing maps and our methodology of modeling).

Refutation of the analytical view

The analytical position about singular terms does not provide any help or insight into the particular differential quality of terms as words that denote a concept.2 Analytical statements as cited above are inconsistent, if not self-contradictory. The reason is simple: words as placeholders for concepts can not, by principle, have a particular meaning attached to them. The meaning, even that of subsentential components, is an issue of interpretation, and the meaning of a sentence is given not only by its own totality; it is also dependent on the embedding of the sentence itself in the story or the social context where it is performed.

Since “analytic” singular terms require knowledge about general terms, and the general terms are only determined once the sentence is understood, it is impossible to identify or classify single terms, whether singular or general, before the propositional content of the sentence is clear to the participants. That propositional content, however, as Robert Brandom convincingly argues in chapter 6 of [7], is only accessible through its role in the inferential relations between the participants of the talk, as well as the relations between sentences. Thus we can easily see that the analytical concept of singular terms is empty, if not self-nullifying.

The required understanding of the sentence is missing in the analytical perspective; the object is made dominant over the sentence, which goes against any real-life experience. Hence, we’d also say that the primacy of interpretation is not fully respected. What we’d need instead is a kind of bootstrapping procedure that works within a social situation of exchanged speech.

Robert Brandom moves this bootstrapping into the social situation itself, which starts with a medial symmetry between language and socialness. There is, coarsely spoken, a rather fixed choreography to accomplish that. First, the participants have to be able to maintain what Brandom calls a de-ontic account. The sequence starts with a claim, which includes the assignment of a particular role. This role must be accepted and returned, which is established by signalling that the inference / interpretation will be done. Both the role and the acceptance are dependent on the claim, on the de-ontic status of the participants and on the intended meaning. (Here I have summarized about 500 pages of Brandom’s book…, but, as said, it is a very coarse summary!)

Brandom (chp. 6) investigates the issue of singular terms. For him, the analytical perspective is not acceptable, since for him, as is the case for us, there is the primacy of interpretation.

Brandom refutes the claim of analytical philosophy that singular names designate single objects. Instead he strives to determine the necessity and the characteristics of singular terms by a scheme that distinguishes particular structural (“syntactical”) and semantic conditions. These conditions further diverge between the two classes of possible subsentential structures, the singular terms (ST) and predicates (P). While syntactically ST take the role of substitution-of/substitution-by and P take the structural role of providing a frame for such substitutions, in the semantic perspective ST are characterised exclusively by so-called symmetric substitution-inferential commitments (SIC), while P also take asymmetric SIC. Those inferential commitments link the de-ontic, i.e. ultimately the socialness of linguistic exchange, to the linguistic domain of the social exchange. We hence may also characterize the whole situation described by Brandom as a cross-medial setting, where the socialness and the linguistic domain mutually provide each other a medial embedding.

Interestingly, this simultaneous cross-mediality also represents a “region”, or a context, where the materiality (of the participants) and the immateriality (of information qua interpretation) overlap. We find, so to speak, an event-like situation just before the symmetry-break that we may identify as meaning. To some respect, Brandom’s scheme provides us the pragmatic details of a Peircean sign situation.

The Peirce-Brandom Test

This has been a very coarse sketch of one aspect of Brandom’s approach. Yet, we have seen that language understanding can not be understood if we neglect the described cross-mediality. We therefore propose to replace the so-called Turing test by a procedure that we call the Peirce-Brandom Test. That test would prove the capability to take part in semiosis, and the choreography of the interaction scheme would guarantee that references and inferences are indeed performed. In contrast to the Turing test, the Peirce-Brandom Test can’t be “faked”, e.g. by a “Chinese Room” (Searle [8]). Moreover, to find out whether the interaction partner is a “machine” or a human, we should not ask them anything, since the question as a grammatical form of social interaction corroborates the complexity of the situation; we just should talk to it/her/him. The Searlean homunculus inside the Chinese room would not be able to look up anything any more. He would have to be able to think in Chinese and as Chinese, q.e.d.

Strongly Singular Terms and the Issue of Virtuality

The result of Brandom’s analysis is that the label of singular terms is somewhat dispensable. These terms may be taken as if they point to a singular object, but there is no necessity for that, since their meaning is not attached to the reference to the object, but to their role in performing the discourse.

Strongly singular terms are strikingly different from those (“weakly”) singular terms. Since they found themselves while being practiced, through their self-referential structure, it is not possible to find any “incoming” dependencies. They are seemingly isolated on their passive side; there are only outgoing dependencies towards other terms, i.e. other terms are dependent on them. Hence we could also call them “(purely) active terms”.

What we can experience here in a quite immediate manner is pure potentiality, or virtuality (in the Deleuzean sense). Language imports potentiality into material arrangements, which is something that programming languages, or any other finite state automaton, can’t accomplish. That’s the reason why we persistently deny that it is reasonable to talk about states when it comes to the brain or the mind.

Now, at this point it is perfectly clear why language can be conceived as ongoing creativity. Without ongoing creativity, the continuous actualization of the virtual, there wouldn’t be anything that would take place, there would not “be” language. For this reason, the term creativity belongs to the small group of strongly singular terms.


In this series of essays about the relation between formalization and creativity we have achieved an important methodological milestone. We have found a consistent structural basis for the terms language, formalization and creativity. The common denominator for all of those is self-referentiality. On the one hand this becomes manifest in the phenomenon of strong singularity, on the other hand this implies an immanent virtuality for certain terms. These terms (language, formalization, model, theory) may well be taken as the “hot spots” not only of the creative power of language, but also of thinking at large.

The aspect of immanent virtuality implies a highly significant methodological move concerning the starting point for any reasoning about strongly singular terms. This we will check out in the next chapter.

Part 1: The Formal and the Creative, Introduction

Part 3: A Pragmatic Start for a Beautiful Pair


1. Wittgenstein repeatedly expressed this from different perspectives. In the Philosophical Investigations [1], PI §219, he states: “When I obey the rule, I do not choose. I obey the rule blindly.” In other words, there is usually no reason to give, although one always can think of some reasons. Yet, it is also true that (PI §10) “Rules cannot be made for every possible contingency, but then that isn’t their point anyway.” This leads us to §217: “If I have exhausted the justifications I have reached bedrock, and my spade is turned. Then I am inclined to say: ‘This is simply what I do’.” Rules are never intended to remove all possible doubt, thus PI §485: “Justification by experience comes to an end. If it did not it would not be justification.” Later, Quine proved accordingly, from a different perspective, what today is known as the indeterminacy of empirical reason (“Word and Object”).

2. There are, of course, other interesting positions, e.g. that elaborated by Wilfrid Sellars [9], who distinguished different kinds of singular terms: abstract singular terms (“triangularity”), and distributive singular terms (“the red”), in addition to standard singular terms. Yet, the problem of which the analytical position is suffering also hits the position of Sellars.

References
  • [1] Ludwig Wittgenstein, Philosophical Investigations.
  • [2] Gilles Deleuze, Felix Guattari, Mille Plateaux.
  • [3] Colin Johnston (2009). Tractarian objects and logical categories. Synthese 167: 145-161.
  • [4] Ernst Tugendhat, Traditional and Analytical Philosophy. 1976
  • [5] Strawson 1974
  • [6] Rodych, Victor, “Wittgenstein’s Philosophy of Mathematics”, The Stanford Encyclopedia of Philosophy (Summer 2011 Edition), Edward N. Zalta (ed.), http://plato.stanford.edu.
  • [7] Robert Brandom, Making it Explicit. 1994
  • [8] John Searle (1980). Minds, Brains and Programs. Behav Brain Sci 3 (3), 417–424.
  • [9] Wilfrid Sellars, Science and Metaphysics. Variations on Kantian Themes, Ridgview Publishing Company, Atascadero, California [1967] 1992.


Beyond Containing: Associative Storage and Memory

February 14, 2012

Memory, our memory, is a wonderful thing. Most of the time.

Yet, it also can trap you, sometimes terribly, if you use it in inappropriate ways.

Think about the problematics of being a witness. As long as you don’t try to remember exactly, you know precisely. As soon as you start to try to achieve perfect recall, everything starts to become fluid, first, then fuzzy and increasingly blurry. As if there were some kind of uncertainty principle, similar to Heisenberg’s [1]. There are other tricks, such as asking a person the same question over and over again. Any degree of security, hence knowledge, will vanish. In the other direction, everybody knows the experience of a tiny little smell or sound triggering a whole story in memory, often one that has not been thought of for a long time.

The main strengths of memory—extensibility, adaptivity, contextuality and flexibility—could also be considered its main weaknesses, if we expect perfect reproducibility for the results of “queries”. Yet, memory is not a database. There are neither symbols nor indexes, and at the deeper levels of its mechanisms, also no signs. There is no particular neuron that would “contain” information in the way a file on a computer may be regarded to do.

Databases are, of course, extremely useful, precisely because they cannot do otherwise than reproduce answers perfectly. That’s how they are designed and constructed. And precisely for the same reason we may state that databases are dead entities, like crystals.

The reproducibility provided by databases expels time. We can write something into a database, stop everything, and continue precisely at the same point. Databases do not own their own time. Hence, they are purely physical entities. As a consequence, databases do not and can not think. They can’t bring or put things together; they do not associate, superpose, or mix. Everything is under the control of an external entity. A database does not learn when the amount of bits stored inside it increases. We also have to be very clear about the fact that a database does not interpret anything. All this should not be understood as a criticism, of course; these properties are intended by design.
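The contrast can be made tangible in a few lines. This is a deliberately naive sketch; the `associate` function and its character-overlap measure are our own invention for illustration only. The point it shows: a database either reproduces exactly or returns nothing, while an associative recall always answers by similarity, and may therefore also err.

```python
# A database reproduces exactly, or not at all.
db = {"rose": "flower", "oak": "tree"}
exact = db.get("rosebud")          # None: no association, no interpretation

# A toy associative recall: answer by similarity, never by exact address.
def associate(memory, query):
    """Return the stored item whose key shares most characters with the query."""
    def overlap(a, b):
        return len(set(a) & set(b))
    best = max(memory, key=lambda k: overlap(k, query))
    return memory[best]

recalled = associate(db, "rosebud")  # an answer by association, not by address
```

Note that `associate` never refuses: even a nonsensical query yields some answer, which is exactly the flip side of the flexibility described above for human memory.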

The first important consequence of this is that any system relying just on the principles of a database will also inherit these properties. This raises the question about the necessary and sufficient conditions for the foundations of “storage” devices that allow for learning and informational adaptivity.

As a first step one could argue that artificial systems capable of learning, for instance self-organizing maps, or any other “learning algorithm”, may consist of a database and a processor. This would represent the bare bones of the classic von Neumann architecture.

The essence of this architecture is, again, reproducibility as a design intention. The processor is basically empty. As long as the database is not part of a self-referential arrangement, there won’t be something like a morphological change.

Learning without change of structure is not learning but only a change in the value of structural parameters that have been defined a priori (at implementation time). The crucial step, however, would be to introduce those parameters at all. We will return to this point at a later stage of our discussion, when it comes to describing the processing capabilities of self-organizing maps.1

Of course, the boundaries are not well defined here. We may implement a system in a very abstract manner such that a change in the value of such highly abstract parameters indeed involves deep structural changes. In the end, almost everything can be expressed by some parameters and their values. That’s nothing else than the principle of the Deleuzean differential.

What we want to emphasize here is just the issue that (1) morphological changes are necessary in order to establish learning, and (2) these changes should be established in response to the environment (and the information flowing from there into the system). These two conditions together establish a third one, namely that (3) a historical contingency is established that acts as a constraint on the further potential changes and responses of the system. The system acquires individuality. Individuality and learning are co-extensive. Quite obviously, such a system is no longer a von Neumann device, even if it still runs on such a linear machine.
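Conditions (1) and (3) can be sketched minimally; all thresholds and learning rates below are entirely hypothetical and chosen only for illustration. Units are added (a morphological change) whenever an incoming item does not match the existing structure, so the resulting morphology depends on the order of experiences, not just on their set.

```python
def learn(stream, threshold=0.45, lr=0.5):
    """Structural learning sketch: morphology is built, not predefined."""
    units = []                                 # no a-priori structure
    for x in stream:
        if units:
            d, i = min((abs(u - x), i) for i, u in enumerate(units))
            if d < threshold:
                units[i] += lr * (x - units[i])  # mere parameter estimation
                continue
        units.append(x)                          # morphological change

    return units

# Same items, different history -> different resulting morphology:
a = learn([0.0, 0.4, 0.8])
b = learn([0.4, 0.8, 0.0])
```

Here `a` and `b` end up with different unit values despite seeing the same items, which is the historical contingency of condition (3) in its most rudimentary form.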

Our claim here is that the “learning” requires a particular perspective on the concept of “data” and its “storage.” And, correspondingly, without the changed concept about the relation between data and storage, the emergence of machine-based episteme will not be achievable.

Let us just contrast the two ends of our space.

  • (1) At the logical end we have the von Neumann architecture, characterized by empty processors, perfect reproducibility on an atomic level, the “bit”; there is no morphological change; only estimation of predefined parameters can be achieved.
  • (2) The opposite end is made from historically contingent structures for perception, transformation and association, where the morphology changes due to the interaction with the perceived information2; we will observe emergence of individuality; morphological structures are always just relative to the experienced influences; learning occurs and is structural learning.

With regard to a system that is able to learn, one possible conclusion from that would be to drop the distinction between the storage of encoded information and the treatment of those encodings. Perhaps it is the only viable conclusion to this end.

In the rest of this chapter we will demonstrate how the separation between data and their transformation can be overcome on the basis of self-organizing maps. Such a device we call “associative storage”. We also will find a particular relation between such an associative storage and modeling3. Notably, both tasks can be accomplished by self-organizing maps.


When taking the perspective of usage there is yet another striking difference between databases and associative storage (“memories”). In the case of a database, the purpose of a storage event is known at the time of performing the storing operation. In the case of memories and associative storage this purpose is not known, and often cannot reasonably be expected to be knowable in principle.

From that we can derive a quite important consequence. In order to build a memory, we have to avoid storing the items “as such,” as is the case for databases. We may call the latter the (naive) representational approach. Philosophically, the stored items do not have any structure inside the storage device, neither an inner structure nor an outer one. Any item appears as a primitive quale.

The contrast to the process in an associative storage is indeed a strong one. Here, it is simply forbidden to store items in an isolated manner, without relation to other items, as an engram, an encoded and reversibly decodable series of bits. Since a database works perfectly reversibly and reproducibly, we can encode the grapheme of a word into a series of bits and later decode that series back into a grapheme again, which in turn we as humans (with memory inside the skull) can interpret as words. Strictly taken, we do NOT use the database to store words.

More concretely, what we have to do with the items comprises two independent steps:

  • (1) Items have to be stored as context.
  • (2) Items have to be stored as probabilized items.

The second part of our re-organized approach to storage is a consequence of the impossibility of knowing about future uses of a stored item. Taken inversely, using a database for storage always and strictly implies that the storage agent claims to know perfectly about future uses. It is precisely this implication that renders long-lasting storage projects so problematic, if not impossible.

In other words, and more concisely, we may say that in order to build a dynamic and extensible memory we have to store items in a particular form.

Memory is built on the basis of a population of probabilistic contexts in and by an associative structure.

The Two-Layer SOM

In a highly interesting prototypical model project (codename “WEBSOM”) Kaski (a collaborator of Kohonen) introduced a particular SOM architecture that serves the requirements described above [2]. Yet, neither Kohonen nor any of his colleagues has so far recognized the actual status of that architecture. We already mentioned this point in the chapter about some improvements of the SOM design; Kohonen fails to discern modeling from sorting when he uses the associative storage as a modeling device. Yet, modeling requires a purpose, operationalized into one or more target criteria. Hence, an associative storage device like the two-layer SOM can be conceived as a pre-specific model only.

Nevertheless, this SOM architecture is not only highly remarkable, but we also can easily extend it appropriately; thus it is indeed so important, at least as a starting point, that we describe it briefly here.

Context and Basic Idea

The context for which the two-layer SOM (TL-SOM) has been created is document retrieval by classification of texts. From the perspective of classification, texts are highly complex entities. This complexity of texts derives from the following properties:

  • – there are different levels of context;
  • – there are rich organizational constraints, e.g. grammars;
  • – there is a large corpus of words;
  • – there is a large number of relations that not only form a network, but which also change dynamically in the course of interpretation.

Taken together, these properties turn texts into ill-defined or even undefinable entities, for which it is not possible to provide a structural description, e.g. as a set of features, and particularly not in advance of the analysis. Briefly, texts are unstructured data. It is clear that especially non-contextual methods like the infamous n-grams are deeply inappropriate for the description, and hence also for the modeling of texts. The peculiarity of texts has been recognized long before the age of computers. Around 1830 Friedrich Schleiermacher founded the discipline of hermeneutics as a response to the complexity of texts. In the last decades of the 20th century, it was Jacques Derrida who brought in a new perspective on it. In Deleuzean terms, texts are always and inevitably deterritorialized to a significant degree. Kaski & coworkers addressed only a modest part of these vast problematics, the classification of texts.

The starting point they took was to preserve context. The large variety of contexts makes it impossible to take any kind of raw data directly as input for the SOM. That means that the contexts had to be encoded in a proper manner. The trick is to use a SOM for this encoding (details in the next section below). This SOM represents the first layer. The subjects of this SOM are the contexts of words (definition below). The “state” of this first SOM is then used to create the input for the SOM on the second layer, which then addresses the texts. In this way, the input vectors are standardized and reduced in size.

Elements of a Two-Layer SOM

The elements, or building blocks, of a TL-SOM devised for the classification of texts are:

  • (1) random contexts,
  • (2) the map of categories (word classes),
  • (3) the map of texts.

The Random Context

A random context encodes the context of any of the words in a text. Let us assume for the sake of simplicity that the context is bilaterally symmetric according to 2n+1, i.e. for example with n=3 the length of the context is 7, where the focused word (“structure”) is at position 3 (when counting starts with 0).
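Such a context window can be sketched in a few lines (the function name and the padding convention at the text boundaries are our assumptions, not part of the TL-SOM specification):

```python
def context_window(tokens, pos, n=3):
    """Return the symmetric context of length 2n+1 around tokens[pos],
    padded with None where the window exceeds the text boundaries."""
    return [tokens[i] if 0 <= i < len(tokens) else None
            for i in range(pos - n, pos + n + 1)]

tokens = "without change of structure is not learning".split()
window = context_window(tokens, 3, n=3)
print(window)
# → ['without', 'change', 'of', 'structure', 'is', 'not', 'learning']
# the focused word "structure" sits at index n=3 of the window
```
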

Let us resort to the following example, which takes just two snippets from this text. The numbers represent some arbitrary enumeration of the relative positions of the words.

sequence A of words:     “… without change of structure is not learning …”
rel. positions in text:       53      54    55    56     57  58    59

sequence B of words:     “… not have any structure inside the storage …”
rel. positions in text:       19  20   21    22       23    24    25

We need the position numbers just for calculating the positional distance between words. The interesting word here is “structure”.

For the next step you have to think of the words as listed in a catalog of indexes, that is, as a set whose order is arbitrary but fixed. In this way, any of the words gets its unique numerical fingerprint.

Index   Word        Random Vector
…       …
1264    structure   0.270    0.938    0.417    0.299    0.991 …
1265    learning    0.330    0.990    0.827    0.828    0.445 …
1266    Alabama     0.375    0.725    0.435    0.025    0.915 …
1267    without     0.422    0.072    0.282    0.157    0.155 …
1268    storage     0.237    0.345    0.023    0.777    0.569 …
1269    not         0.706    0.881    0.603    0.673    0.473 …
1270    change      0.170    0.247    0.734    0.383    0.905 …
1271    have        0.735    0.472    0.661    0.539    0.275 …
1272    inside      0.230    0.772    0.973    0.242    0.224 …
1273    any         0.509    0.445    0.531    0.216    0.105 …
1274    of          0.834    0.502    0.481    0.971    0.711 …
1275    is          0.935    0.967    0.549    0.572    0.001 …

Any of the words of a text can now be replaced by an a priori determined vector of random values from [0..1]; the dimensionality of those random vectors should be around 80 in order to approximate orthogonality among all those vectors. Just to be clear: these random vectors are taken from a fixed codebook, a catalog as sketched above, where each word is assigned to exactly one such vector.
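A minimal sketch of such a codebook (function name and fixed seed are our assumptions; any fixed assignment would do); the point is only that each word is mapped reproducibly to exactly one random vector:

```python
import random

def make_codebook(words, dim=80, seed=1):
    """Assign each word exactly one fixed random vector from [0..1]^dim.
    The fixed seed keeps the catalog stable across runs."""
    rng = random.Random(seed)
    return {w: [rng.random() for _ in range(dim)] for w in words}

codebook = make_codebook(["structure", "learning", "without", "not"])
assert len(codebook["structure"]) == 80
# the same catalog always yields the same fingerprints
assert codebook == make_codebook(["structure", "learning", "without", "not"])
```
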

Once we have performed this replacement, we can calculate the averaged vectors per relative position of the context. In the case of the example above, we would calculate the reference vector for the first position of the context window as the average of the vectors encoding the words “without” and “not”.

Let us be more explicit. Example sequence A we first translate into the positional numbers, interpret each positional number as a column header, and fill the column with the values of the respective word’s fingerprint. For the 7 positions (−3 … +3) we get 7 columns:

sequence A of words:          “… without change of structure is not learning …”
rel. positions in text:            53      54    55    56     57  58    59
grouped around “structure”:        −3      −2    −1     0      1   2     3

random fingerprints per position:
0.422  0.170  0.834  0.270  0.935  0.706  0.330
0.072  0.247  0.502  0.938  0.967  0.881  0.990
0.282  0.734  0.481  0.417  0.549  0.603  0.827

…further entries of the fingerprints…

The same has to be done for the second sequence B. Now we have two tables of fingerprints, both comprising 7 columns and N rows, where N is the length of the fingerprint. From these two tables we calculate the average values and put them into a new table (which is of course also of dimensions 7×N). Thus, the example above yields 7 such averaged reference vectors. If we have a dimensionality of 80 for the random vectors we end up with a matrix of [r,c] = [80,7].
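The position-wise averaging, together with the flattening into one long vector, can be sketched as follows (all names are assumed helpers; for brevity the toy fingerprints are one-dimensional, standing in for the 80 dimensions of the text):

```python
def average_contexts(windows, codebook):
    """Average the fingerprints of several context windows position by
    position, yielding one reference vector per window position."""
    n_pos = len(windows[0])
    dim = len(next(iter(codebook.values())))
    avg = [[0.0] * dim for _ in range(n_pos)]
    for win in windows:
        for p, word in enumerate(win):
            for d, v in enumerate(codebook[word]):
                avg[p][d] += v / len(windows)
    return avg

def concatenate(avg):
    """Flatten the per-position vectors into one long context vector
    (7 positions x 80 dims = 560 components in the example above)."""
    return [v for pos in avg for v in pos]

# toy 1-dim fingerprints for the first window position of sequences A and B
codebook = {"without": [0.422], "not": [0.706]}
avg = average_contexts([["without"], ["not"]], codebook)
print(concatenate(avg))   # average of 0.422 and 0.706, i.e. ≈ 0.564
```
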

In a final step we concatenate the columns into a single vector, yielding a vector of 7×80=560 variables. This might appear as a large vector. Yet, it is much smaller than the whole corpus of words in a text. Additionally, such vectors can be compressed by the technique of random projection (math. foundations by [3], first proposed for data analysis by [4], utilized for SOMs later by [5] and [6]), which today is quite popular in data analysis. Random projection works by matrix multiplication. Our vector (1R × 560C) gets multiplied with a matrix M(r) of 560R × 100C, yielding a vector of 1R × 100C. The matrix M(r) also consists of flat random values. This technique is very interesting, because no relevant information is lost, but the vector gets shortened considerably. Of course, in an absolute sense there is a loss of information. Yet, the SOM only needs the information which is important to distinguish the observations.
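The projection step itself is just a matrix multiplication; a sketch under the stated dimensions (the matrix here uses flat random values in [0..1], as described above, though zero-mean entries are more common in the literature; the seed is an assumption):

```python
import random

rng = random.Random(7)
d_in, d_out = 560, 100

x = [rng.random() for _ in range(d_in)]                          # long context vector
M = [[rng.random() for _ in range(d_out)] for _ in range(d_in)]  # random matrix M(r)

# y = x * M : (1 x 560) times (560 x 100) yields (1 x 100)
y = [sum(x[i] * M[i][j] for i in range(d_in)) for j in range(d_out)]
assert len(y) == d_out    # the shortened representation
```
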

This technique of transferring a sequence made from items encoded on a symbolic level into a vector that is based on random contexts can, of course, be applied to any symbolic sequence.

For instance, it would be a drastic case of reductionism to conceive of the path taken by humans in an urban environment just as a sequence of locations. Humans are symbolic beings and the urban environment is full of symbols to which we respond. Yet, for the population-oriented perspective any individual path is just a possible path. Naturally, we interpret it as a random path. The path taken through a city needs to be described both by location and symbol.

The advantage of the SOM is that the random vectors that encode the symbolic aspect can be combined seamlessly with any other kind of information, e.g. the locational coordinates. That is the property of multi-modality. Which particular combination of “properties” is suitable to classify the paths for a given question is then subject to “standard” extended modeling as described in the chapter Technical Aspects of Modeling.

The Map of Categories (Word Classes)

From these random context vectors we can now build a SOM. Similar contexts will arrange in adjacent regions.
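A deliberately minimal sketch of such a map (a 1-D grid instead of Kohonen’s 2-D lattice, a fixed neighborhood radius, and all parameter values are our assumptions, not the WEBSOM settings):

```python
import math
import random

def train_som(data, n_nodes=10, epochs=50, lr=0.5, radius=2.0):
    """Train a tiny 1-D SOM: pull the best-matching node and, attenuated
    by a Gaussian neighborhood, its grid neighbors towards each input."""
    rng = random.Random(0)
    dim = len(data[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for _ in range(epochs):
        for x in data:
            # best-matching unit: node with smallest squared distance
            bmu = min(range(n_nodes),
                      key=lambda k: sum((nodes[k][d] - x[d]) ** 2
                                        for d in range(dim)))
            for k in range(n_nodes):
                h = math.exp(-((k - bmu) ** 2) / (2 * radius ** 2))
                for d in range(dim):
                    nodes[k][d] += lr * h * (x[d] - nodes[k][d])
        lr *= 0.95   # slowly freeze the map
    return nodes

# two clusters of toy "context vectors" settle in different map regions
data = [[0.1, 0.1], [0.12, 0.08], [0.9, 0.9], [0.88, 0.92]]
som = train_som(data)
```

After training, the best-matching nodes for the two toy clusters lie in different regions of the grid, which is the topological sorting the text describes.
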

A particular text now can be described by its differential abundance across that SOM. Remember that we have sent the random contexts of many texts (or text snippets) to the SOM. To achieve such a description a (relative) frequency histogram is calculated, which has as many classes as the SOM has nodes. The values of the histogram are the relative frequencies (“probabilities”) for the presence of a particular text in comparison to all other texts.
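The histogram can be sketched as follows (names assumed): count, per node, how often that node is the best-matching unit for the context vectors of one text, then normalize to relative frequencies:

```python
def text_fingerprint(contexts, nodes):
    """Relative frequency, per SOM node, of being the best-matching
    unit for the context vectors of one text."""
    counts = [0] * len(nodes)
    for x in contexts:
        bmu = min(range(len(nodes)),
                  key=lambda k: sum((nodes[k][d] - x[d]) ** 2
                                    for d in range(len(x))))
        counts[bmu] += 1
    total = sum(counts)
    return [c / total for c in counts]

nodes = [[0.0, 0.0], [1.0, 1.0]]                 # toy two-node "map"
contexts = [[0.1, 0.0], [0.2, 0.1], [0.9, 1.0], [0.0, 0.1]]
print(text_fingerprint(contexts, nodes))         # → [0.75, 0.25]
```
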

Any particular text is now described by a fingerprint that contains highly relevant information about

  • – the context of all words as a probability measure;
  • – the relative topological density of similar contextual embeddings;
  • – the particularity of texts across all contextual descriptions, again as a probability measure;

Those fingerprints represent texts and they are ready-mades for the final step, “learning” the classes by the SOM on the second layer in order to identify groups of “similar” texts.

It is clear that this basic variant of the Two-Layer SOM procedure can be improved in multiple ways. Yet, the idea should be clear. Some of those improvements are:

  • – to use a fully developed concept of context, e.g. this one, instead of a constant length context and a context without inner structure;
  • – evaluating not just the histogram as a foundation of the fingerprint of a text, but also the sequence of nodes according to the sequence of contexts; that sequence can be processed using a Markov-process method, such as HMM, Conditional Random Fields, or, in a self-similar approach, by applying the method of random contexts to the sequence of nodes;
  • – reflecting at least parts of the “syntactical” structure of the text, such as sentences, paragraphs, and sections, as well as the grammatical role of words;
  • – enriching the information about “words” by representing them not only in their observed form, but also by their close synonyms, or by adding pointers to semantically related words as these can be taken from labeled corpora.

We want to briefly return to the first layer. Just imagine not measuring the histogram, but instead following the indices of the contexts across the developed map with your fingertips. A particular path, or virtual movement, appears. I think that it is crucial to reflect this virtual movement in the input data for the second layer.

The reward could be significant, indeed. It offers nothing less than a model for conceptual slippage, a term which has been emphasized by Douglas Hofstadter throughout his research on analogical and creative thinking. Note that in our modified TL-SOM this capacity is not an “extra function” that had to be programmed. It is deeply built “into” the system, or in other words, it makes up its character. Besides Hofstadter’s proposal, which is based on a completely different approach, and for a different task, we do not know of any other system that would be capable of that. We even may expect that the efficient production of metaphors can be achieved by it, which is not an insignificant goal, since all practiced language is always metaphorical.

Associative Storage

We already mentioned that the method of the TL-SOM extracts important pieces of information about a text and represents them as probabilistic measures. The SOM does not contain the whole piece of text as a single entity, or a series of otherwise unconnected entities, the words. The SOM breaks the text up into overlapping pieces, or better, into overlapping probabilistic descriptions of such pieces.

It would be a serious misunderstanding to perceive this splitting into pieces as a drawback or failure. It is the mandatory prerequisite for building an associative storage.

Any further target oriented modeling would refer to the two layers of a TL-SOM, but never to the raw input text. Thus it can work reasonably fast for a whole range of different tasks. One of those tasks that can be solved by a combination of associative storage and true (targeted) modeling is to find an optimized model for a given text, or any text snippet, including the identification of the discriminating features. We also can turn the perspective around, addressing the query to the SOM about an alternative formulation in a given context…

From Associative Storage towards Memory

Despite its power and its potential as an associative storage, the Two-Layer SOM still can’t be conceived as a memory device. The associative storage just takes the probabilistically described contexts and sorts them topologically into the map. In order to establish “memory”, further components are required that provide the goal orientation.

Within the world of self-organizing maps, simple (!) memories are easy to establish. We just have to combine a SOM that acts as associative storage with a SOM for targeted modeling. The peculiar distinctive feature of that second SOM for modeling is that it does not work on external data, but on “data” as it is available in and as the SOM that acts as associative storage.

We may establish a vivid memory in its full meaning if we establish three further components: (1) targeted modeling via the SOM principle, (2) a repository of the targeted models that have been built from (or using) the associative storage, and (3) at least a partial operationalization of a self-reflective mechanism, i.e. a modeling process that models the working of the TL-SOM. Since in our framework the basic SOM module is able to grow and to differentiate, there is no limitation in principle for such a system any more, concerning its capability to build concepts, models, and (logical) habits for navigating between them. Later, we will call the “space” where this navigation takes place the “choreosteme”: drawing figures into the open space of epistemic conditionability.

From such a memory we may expect dramatic progress concerning the “intelligence” of machines. The only questionable thing is whether we should call such an entity still a machine. I guess, there is neither a word nor a concept for it.



1. Self-organizing maps have some amazing properties on the level of their interpretation, which they share especially with Markov models. As such, the SOM and Markov models are outstanding. Both the SOM and the Markov model can be conceived as devices that turn programming statements, i.e. all the IF-THEN-ELSE statements occurring in a program, into DATA. Even logic itself, or more precisely, any quasi-logic, gets transformed into data. SOM and Markov models are double-articulated (a Deleuzean notion) into logic on the one side and the empiric on the other.

In order to achieve this, full write access is necessary to the extensional as well as the intensional layer of a model. Hence, neither artificial neural networks nor, of course, statistical methods like PCA can be used to achieve the same effect.

2. It is quite important not to forget that (in our framework) information is nothing that “is out there.” If we follow the primacy of interpretation, for which there are good reasons, we also have to acknowledge that information is not a substantial entity that could be stored or processed. Information is nothing else than the actual characteristics of the process of interpretation. These characteristics can’t be detached from the underlying process, because this process is represented by the whole system.

3. Keep in mind that we only can talk about modeling in a reasonable manner if there is an operationalization of the purpose, i.e. if we perform target oriented modeling.

  • [1] Werner Heisenberg. Uncertainty Principle.
  • [2] Samuel Kaski, Timo Honkela, Krista Lagus, Teuvo Kohonen (1998). WEBSOM – Self-organizing maps of document collections. Neurocomputing 21 (1998) 101-117.
  • [3] W.B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. In Conference in Modern Analysis and Probability, volume 26 of Contemporary Mathematics, pages 189–206. Amer. Math. Soc., 1984.
  • [4] R. Hecht-Nielsen. Context vectors: general purpose approximate meaning representations self-organized from raw data. In J.M. Zurada, R.J. Marks II, and C.J. Robinson, editors, Computational Intelligence: Imitating Life, pages 43–56. IEEE Press, 1994.
  • [5] Papadimitriou, C. H., Raghavan, P., Tamaki, H., & Vempala, S. (1998). Latent semantic indexing: A probabilistic analysis. Proceedings of the Seventeenth ACM Symposium on the Principles of Database Systems (pp. 159-168). ACM press.
  • [6] Bingham, E., & Mannila, H. (2001). Random projection in dimensionality reduction: Applications to image and text data. Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 245-250). ACM Press.

