Concepts add up and make up much of what John Searle called the “background”. He recognized, like others before him, e.g. Quine or Wittgenstein, that making sense of the world can’t be based just on actual empirical “impressions” (data, if you like); rather, we need concepts that in turn are embedded in the Form of Life whenever we are going to interpret something. Bringing in the concept of “concept” here, I have to emphasize right at the beginning that concepts can’t be positively, which means exhaustively, defined. Concepts articulate directly towards metaphysics; they are even part of it. In the last section of this text we will introduce a structural approach that places the concept of “concept” in a wider perspective.
Yet, describing the role of concepts as “background” certainly does not satisfy completely. For Deleuze, for instance, concepts are part of an activity that forms the plane of immanence, first for each series of thought. Secondarily, however, this gives rise to what he calls the absolute plane of immanence, or pure immanence. In his perspective, this plane is not itself a concept. Yet, here is not the place to criticize Deleuze’s braiding of thought, a life, immanence and transcendence. We try to accomplish both a critique and an extension of Deleuze’s conceptualization elsewhere. We’d just like to mention that concepts should not be confused with definitions, as is usually done in analytical philosophy, or by functionalists of any kind, creating nonsense such as formal (syntactic) semantics or analytic ontology.
Here in this text we’d like to provide brief summaries of important concepts that form the background and the “Image of Thought” of the writing on this blog; it may also be helpful for readers of any of the articles here, since I honestly can’t expect that it is easy to navigate those texts, as most of them cross the “borders” of disciplines or domains.
Of course, this list does not form an exhaustive glossary. It just marks the most salient topics from a cross-disciplinary perspective, as far as they are constitutive for a particular attitude I try to actualize, or say, a certain Image of Thought. Its cornerstones? Openness, being anti-axiomatic, and heading for consistency, which implies an awareness of meta-critique and conditionability: the question about the conditions of the possible, particularly when it comes to novelty, creativity and imagination. I am strongly convinced that this should not be just a concern for philosophy. It is easy to see why: it concerns both strictly human beings and a-human entities such as an urban context, epistemic machines or the culture at large.
Thus, I have to emphasize right at the “beginning” that, quite in contrast to analytical positivist stances, the concept of “concept” can’t be conceived as something that could be defined a priori to a particular use case (see this for more details about concepts).
Concepts are not only the immaterial joints between empirical experience and intentionality, both as plan and as the category of the mental. There are good arguments to regard them as indispensable for any thought, not just in the case of human beings. Together they form a virtual network of potential joints that enables the actualization of potential and at the same time provides the site and the tool to harvest those actualizations. Thus we find two very different types of articulations “in” the concept of concept, which “decohere” into one of the forms upon our choreostemic moves.
This extends our notion of “understanding” as we described it previously. In a nutshell, we conceived of “understanding” as the language game which we humans use to indicate the belief that the extension of the background (scaffold) by the topic at hand is shared between individuals in such a way as to indicate the ability to reproduce the same effect as anyone else who understood the same thing could have triggered. This sharing of references and inducing of capabilities makes up much of what we usually call teaching. Saying “I understand” indicates the belief in a resemblance, resonance, or even alignment regarding the way concepts are harvested, linked and used.
Throughout this blog we are going to draw upon a variety of abstract concepts. Some are more important than others, some we imported from the complexes of concepts of other authors, others we assimilated and reshaped in order to fit them to the rest, and a few we supposedly invented (among them: orthoregulation, aspections, choreostemic space, delocutionary acts). Together they form the plane of immanence of our writing. They recur frequently, contextualized in different ways.
In order to facilitate easier access to the essays, we provide a collection of very brief introductions to those concepts which we think are “more basic” than others. Of course, and to give only a few examples, something like abstract associativity, the abstract model, categories (in the sense of mathematical category theory), renormalization as a contribution to a theory of generalized measurement, generalized contexts, or the notion of generalized evolution written in purely probabilistic terms without any reference to biology are all quite abstract and also quite fundamental concepts. Yet, they can be found and fully comprehended only on the basis of the basic stuff that we introduce (as briefly as possible) here.
For instance, the transition from matter and the material, aka the “body,” to the realm of the immaterial where we find information is still considered by many as one of the core problems in philosophy. We consider it firstly as a pseudo-problem, though, and secondly we hold that any of its formulations is based on a particular (and particularly weird) instantiation of the basic concepts described below. A second example is provided by the concept of numbers, or more generally, the concept of the “quantum” or “quantitability”, as Bühlmann calls it, which denotes the potential and the conditions for rendering something quantifiable. Quantitability is closely related to the conditions for making something separable, identifiable. It has much to do with the abstract (transcendental) conditions that need to be met in order to become able to select something. Thus, we are also close to the fields of individuation and associativity, and to the question about the relation between the material and the immaterial. All of those, however, and thus also the question about the role of numbers and mathematics and their form, are shaped through the space provided by the concepts below.
Another example (the last we will give here) is the problem of causality that has haunted philosophy, the natural sciences as well as the humanities for centuries now. Any attitude towards “causality” is configured by the basic concepts below, which allow us to recognize that “causality” is neither a “phenomenon” nor a transcendental category. It would even constitute a shortfall to consider it merely as a Language Game. It is a name for a particular way to construct the “world”, and this name can’t be separated from the “image”, the way we perform this construction. We may ask, in turn: well, what then is the world, how should we conceive of it? We could answer that we conceive of it as a fluid, broiling collection of strands of potential relations that we arrange in order to deal with other “things” than “ourselves”. Of course, this “ourselves” starts to get slippery quite soon. Anyway. The important thing is that others see it differently. And it is this difference that lets the concepts below appear.
In the following, I first provide the list of those concepts, before briefly describing each item, garnished with links to the articles and external sources, and last but not least also cross-references within this text that will be indicated by an arrow (➔).
The list comprises the following items (active links):
- Differential and Virtuality
- Lagrangean Abstraction
- Aspectional Space
- Local Space
- Vanishing Regress
- Language Game
- Processual Indicative
- Inverted Experimental Plan
- Choreostemic Space
1. Differential and Virtuality
Originally, the differential is a mathematical concept. It was invented almost at the same time by Newton and by Leibniz, albeit in very different contexts. The differential can be conceived as standing in an inverse relation to the mathematical concept of integration. To integrate a function, i.e. a relation between two quantifiable concepts, one has (1) to find a relation that would yield the starting relation when differentiated (mathematically), and then (2) to define the side conditions for the operation of integrating. Quantifiable concepts are often called variables, or, when mapped to a Cartesian coordinate system, dimensions. Of course, many relations can’t be differentiated or integrated analytically. Another trick in complicated contexts is to build the differential only with regard to certain aspects, or dimensions; this is called a partial differential. In physics, the differential is used to describe the regularity of change, e.g. of position over time as velocity or, as a second-degree differential, as acceleration; or it is used to calculate the cumulative effect of a dynamically changing, yet describable relationship between two quantifiable concepts.
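The inverse relationship between differentiation and integration, together with the role of the side condition, can be sketched compactly; the constant C below is exactly such a side condition:

```latex
\frac{d}{dx}\int_{a}^{x} f(t)\,dt \;=\; f(x),
\qquad
\int f'(x)\,dx \;=\; f(x) + C
```

Fixing C, e.g. by an initial value, is what step (2) above amounts to.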
The mathematical differential establishes a very peculiar, if not unique, relationship between quantifiable concepts. There are two remarkable issues. Firstly, expressed as the differential quotient dy over dx (y differentiated with respect to x), it symbolizes a special case of “0 divided by 0”. In other words, although the actual quantities are vanishing, the whole thing remains well-defined (if certain conditions apply). This “whole thing” is, however, not determined and definable in the same way as its constituents are. It resides completely in the realm of the symbolic. Secondly, this quotient provides an abstraction for the relationship between the quantifiable concepts that we denoted as x and y. It represents a clear concept, including the symbols to talk about that abstraction. By means of the differential, abstraction (still mathematical here) is no longer an ill-defined, fluffy concept.
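As a sketch, the well-defined “0 divided by 0” is the limit of the difference quotient:

```latex
\frac{dy}{dx} \;=\; \lim_{\Delta x \to 0} \frac{f(x+\Delta x) - f(x)}{\Delta x}
```

Numerator and denominator both vanish, yet wherever f is differentiable the quotient converges to a determinate value—an entity of a different kind than x and y themselves.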
Yet, if we start with the differential (or a system of differential equations) to say anything about the quantifiable concepts and their relation again, we have to instantiate it. In other words, we have to put a selection into effect.
Deleuze recognized the philosophical potential of this mathematical concept. Quite obviously, we can’t transfer it directly, representatively, from one domain to the other, i.e. from mathematics to philosophy. It is therefore nothing but stupid to read Deleuze as if he were a mathematician. Deleuze doesn’t talk about the mathematical differential. The differential is transferable only by means of abstraction, i.e. by means of the philosophical notion of the differential. Thus, equally obviously, we use the structure of the mathematical differential as a model.
Its philosophical relevance derives from two major consequences. The first, already mentioned, is the necessity of instantiation: instantiation can’t be avoided, yet, in the end, it also can’t be fully justified. There are always many ways of instantiating a particular differential. The second implication is that there is irreducible freedom, which nevertheless remains related to the embedding milieu, the language, the community of people sharing the same language. This establishes borders in the genesis of the instance, which must be distinguished from irrelevant representational borders. The elementary figure of interpretation consists of a differential and its instantiation.
The differential (in the philosophical sense) is closely linked to the concept of virtuality. Understanding the concept of virtuality becomes increasingly important, since it expresses the necessity for conditioning the possibility for mediality (cf. ) as well as the potential. Both of these are in turn salient implications of contemporary information technology. Ultimately, neither the role nor the potential can be fully recognized without a proper understanding of the concept of virtuality and its embedding into the dynamics of generic thought.
While the idea of the virtual was introduced already in classical philosophy, Gilles Deleuze renewed its role in contemporary philosophy, mainly by describing the abstract mechanism of actualisation. The concept of virtuality is about the conditions for potential, which itself is not specified. Between the potential and the transcendental (➔) concept of virtuality spans a space which we call the virtual. Neither the potential nor the virtual can be enumerated. Strictly speaking, it would be against the grammar of the potential or the virtual to say “potential objects”. As soon as we speak about objects, we imply identifiability and enumerability (implying set theory as a tool), hence we already talk about possibility.
We opposed the virtual and the real: although it could not have been more precise before now, this terminology must be corrected. The virtual is opposed not to the real but to the actual. The virtual is fully real in so far as it is virtual. Exactly what Proust said of states of resonance must be said of the virtual: ‘Real without being actual, ideal without being abstract’; ( p.208)
The mathematical differential quotient expresses the potential of selecting a tangent. The idea of the tangent is fully present and determinate, though it is “not anywhere” (note that the language game of selecting a place violates the grammar of the potential!). In contrast, if expressed as a difference quotient (before the limit transition), we see an infinite set (or population) of possible items.
Ideas are, as Deleuze says, strictly positive. We can neither subtract 1 idea from 3 ideas to get 2 ideas, nor can we remove anything from “an” idea to get a “smaller” idea. Ideas are not enumerable at all. Removal and opposition become possible only subsequent to actualization, or in the realm of the possible, that is, once an Idea has actualised into a “species”.
We call the determination of the virtual content of an Idea differentiation; we call the actualisation of that virtuality into species and distinguished parts differenciation. ( p.207)
If we talk about the possible, for instance in the context of a probability (➔), we have already actualized the virtual into a space of possible selections. The density or intensity of this selection is already symbolized within an analytical framework. This exclusion of the virtual from certain ways to describe empirical observations is the deep reason why statistics alone is not sufficient to produce forecasts. In turn, whenever we speak about the probable we have already limited ourselves to the possible. We have already applied certain rules to actualize the aspects that point to the virtual (as the ever-present condition), avoiding here the symbolization as “virtual aspects”.
Upon the actualization of a potential we select, we limit what we can perceive as actualized species. Hence I could have said: what we can actually perceive. Any enumeration or symbolization into a formal framework includes actualization, precisely because these actions imply a limiting step. This then includes, of course, also modeling, here taken as the instantiation of the abstract model into a particular model. It is not possible to point to a potential, or the virtual. Hence we can’t take the virtual as a property of space, nor can we build a representational space from it.
It is thus nothing but a serious misunderstanding to point to the internet, or the WWW, to Facebook or to Google, and calling this the “virtual space” (cf. , among many others). It is also a misunderstanding to call the n-dimensional image of a space or a building “virtual space”. Obviously we can point to such imaging, we can immerse ourselves into it, we can color it etc. There is no color of or for a potential, yet. Something is utterly wrong in such usage of the concept of the virtual.
The internet and the WWW represent the technical aspects of mediality—another transcendental (➔) condition. In the usage of tools, symbols (abstract tools) and concepts we refer to the virtual, for we always have to perform myriads of selections. The virtual aspects of “space” around and in a building are present as not-yet-actualised potential always in precisely the same way, whether we talk about the Parthenon, Las Vegas, or a rendering of computational procedures into a visible form.
Again, not discerning possibility from the virtual results either in pseudo-paradoxes, or in a fully deterministic world without the possibility for any constructive selection.
In so-called information technology, the virtual shows up in a very different way. It is the reversibility, and thus the almost (or actually) simultaneous presence of different species—in other words, the full potential for differenciation—which creates an endless and dense presence of the necessity to select. Nevertheless, the virtual is not “inside the computer”, nor is it “inside the mind”. While in more material contexts we can select only once before information decoheres into irreversibility, in electronically mediated contexts the possibility and the potential for ongoing selections remain intact. Yet, we should not overlook that selecting something—and interpreting, in exactly the same way—is first an activity that basically comprises performance and further selections. Performance in turn refers directly to the body. Since the decoherence into irreversibility in turn induces the need for interpretation, it is ultimately performance that intensifies our reference to the virtual, while this intensification comes before any quantification, i.e. enumeration. The condition for quantification, quantitability, appears as a transcendental condition (➔), which is “almost virtual”, albeit the relations between quantitability and virtuality are difficult to determine (cf. ).
Let us take a brief look at the difference. Using the concept of integers (“natural” numbers), we can write down a difference as 7−5. What is the role of the operator, the “minus”? Obviously, as its label already tells us, it symbolizes an operation, an action, hence some rules. Yet, these rules have nothing to do with numbers at all. In the case of arithmetic, they take the numbers as a structural argument. The difference implies orthoregulation (➔) and instantiation, hence the differential and transcendence (➔). This is the major difference between the philosophies of Derrida and Deleuze. Derrida got stuck at the difference. Its condition he called différance, which, however, does not allow any further argument. Derrida even deliberately excluded asking about the deconstruction of différance. He never accepted to proceed into abstraction. As compensation he needed further strange concepts and anti-concepts, like the “original trace”, that can neither be seen nor interpreted. This negation of orthoregulation and of the differential (as a philosophical concept) makes Derrida’s philosophy, deconstructivism, so suitable for realism.
The difference implies the negative. Deleuze argued (we cited him here) that the “Idea knows nothing about negation”. Nevertheless we have different ideas. These can be related to each other only by means of the Differential.
A final remark: it would be impossible to discuss the virtual comprehensively even in a whole book, all the more so in a brief section. We can just point to the various online encyclopedias for further aspects. The only things we tried to say are that (1) the Idea of the virtual expresses a necessity, (2) it needs to be distinguished from the possible, and (3) the only way to actualize is by means of selection, even if it happens implicitly. Together these aspects of the virtual may serve as a foundation for the primacy of interpretation. The resulting self-referentiality can be turned productive—thus defending it against the infinite regress—by transcendental constructions, as instantiated in our concept of the choreostemic space.
2. Lagrangean Abstraction
If you have some procedure or rule, you will always have some variables, some operators and some constants. Lagrange’s insight was that the constants can be replaced by a procedure plus further constants, which are very different from the constants we met first. Deleuze adopted this figure in his investigation of thought ( p.209) that would eventually result in a transcendental empiricism. The constants on the implied and explicated level we usually call “more abstract”: they are more general, more powerful, but at the same time also less salient than constants on less abstract levels.
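Lagrange’s move can be sketched in a few lines of code (a hypothetical toy example of my own, not anything from Lagrange or Deleuze): a constant in a rule is replaced by a procedure that produces it, introducing new, more abstract constants of its own.

```python
# A rule with two constants, `a` and `b`.
def rule(x, a, b):
    return a * x + b

# The Lagrangean move: the constant `a` is itself replaced by a procedure,
# which introduces new constants c and d on a more abstract level.
def a_procedure(t, c, d):
    return c * t + d

def lifted_rule(x, t, c, d, b):
    # the former constant is now generated by a rule of its own
    return rule(x, a_procedure(t, c, d), b)

# Where a_procedure reproduces the old constant (here a = 3),
# both formulations agree.
assert lifted_rule(x=2, t=1, c=3, d=0, b=5) == rule(x=2, a=3, b=5)
```

The new constants c and d no longer refer to the modeled entity itself but to the rule that generates the former constant—which is exactly the change of reference discussed next.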
The important issue is, however, the change of reference for the implied rule. The new, more abstract rule replaces a constant on the less abstract level. Hence it is directed towards a rule and the side conditions of its instantiation, not towards the modeled entity itself. Thus, the Lagrangean abstraction is closely related to orthoregulation (➔).
The Lagrangean abstraction is not only closely related to the treatment of partial differential equations and thus to the deep structure (the grammar) of mathematics itself. Actually, Lagrange became aware of one of the most powerful tools of thought. As such, Lagrangean abstraction even lies at the roots of the philosophical notion of the Differential.
Our empirical relation to the outer world is a problematic field. In philosophy it is still heavily discussed under the label of the so-called “natural kinds”, a concept that we reject. In a moment you will understand why. There are, of course, also a lot of serious practical issues and implications around it.
Quite obviously, our relation to the world does not include a “direct” access. Even logical positivists admit that. We can never recognize the world as such, we can never be sure about something that we have to conceive as empirical. Astonishingly, in urbanism it still (in 2007) elicited an emphatic response when someone called it an “illusion to believe that we can see reality as it really is” (Linda Pollack, quoting Lefebvre). We take it as a sign of hope.
Unfortunately for the realist attitude, even words are empirical items. If we can’t be sure about our empirical observation, what is the consequence of that? In everyday parlance we use the grammatical form of the “conditional 1”, “it could be”. In technoscience we apply the framework of probability. The main consequence thus is a further question: How do we manage to proceed from a probabilistic description, an expectation that comprises different cases, sometimes only implied, but never seen before, to the use of propositions, to a propositional structure?
Most appropriately, “object” should be conceived first as a language game (➔). We then may ask about that game, its rules, the necessary symbols and processes, its usage and its context etc. The next step is pretty simple. We conclude that the language game “object” refers to a small set of only two structural properties. These are (1) a binning operation that refers to a model that is organized by (i) a certain minimum degree of reliability in the context of anticipatory modeling, (ii) a commonly acknowledged threshold for performing the transition from the degree to the propositional usage of the concept, and (2) the indication for the propositional usage of the concept itself, excluding thereby the need for complicated and challenging probabilistic reasoning.
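This two-step structure can be put into a minimal sketch (the function name and the threshold value 0.95 are my illustrative assumptions, not anything prescribed by the argument):

```python
# Step 1: an anticipatory model yields a degree of reliability (a probability).
# Step 2: a commonly acknowledged threshold licenses the propositional usage.
def propositional(degree_of_reliability: float, threshold: float = 0.95) -> bool:
    """Collapse a probabilistic description into a proposition ('this is an X')."""
    return degree_of_reliability >= threshold

# Above the threshold we drop probabilistic reasoning and simply assert the object.
assert propositional(0.97) is True
assert propositional(0.60) is False
```

The point of the sketch is only the structure: the propositional usage excludes the probabilistic reasoning that produced it.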
Even if a particular extended system of models and theories would help us to conclude that a particular conclusion about the relations and conditions in the outer world is pretty sure, we cannot be absolutely sure. Models, theories and, of course, Forms of Life can’t be arranged in a seamless manner; they may even partially contradict each other. Thus, the embedding, i.e. the conditioning of a particular conclusion, is not stable in its prescription of how to organize measurement and modeling, namely regarding the selection of “properties” that are used to organize the raw signals. Adding or removing properties from a descriptive setup results in a different “object”, which establishes a different set of relations, and hence, in this sense, also different facts.
Here, we can see and resolve a number of issues. “Objects” are not as such in the world. To claim otherwise would result in the further claim that symbols are in the world, notably (1) as immaterial entities, and (2) independently from any mind-based interpretation. This was the position of Frege, and partially of Russell and Whitehead in the Principia. Wittgenstein was the first to show that this is wrong.
Regarding the field of machines with mental capabilities, we can now understand that the so-called “symbol grounding problem” is not a problem at all. Second, we cannot talk reasonably about “learning” with regard to “machines” as long as we represent the outer world in a structured manner like tables. Tables already presuppose an act of symbolification; even worse, this symbolification implements a direct representation in the space of the possible, that is, the already actualised. Ultimately, symbolification is a process or act that expels potentiality.
The step from a probabilistic representation of signals from the outer world to the propositional usage of concepts—let’s call it the PP-transition—can be interpreted as a speciation, hence also as an individuation. Yet, the appearance of those is just a consequence of the necessity for an empirical rooting of thought.
The said PP-transition seems to induce a regress, as we would need symbols even for setting up a probabilistic representation of the signals. Yet, the probabilistic representation of these signals is not only not representative, it is generic. The remaining self-referentiality can be dissolved into orthoregulation (➔) and a differential space (philosophical notion of d. here). Renormalization is the practical tool for representing empirical observations that pays most respect to this structure.
4. Aspectional Space
The mathematical notion of space comprises two elements: a mapping between enumerable—though not necessarily quantifiable—entities and a structure that describes the allowed operators. Such a structure is, for instance, a group. As a map, a space represents the condition of the possibility to relate entities to each other. In turn, any entity may be conceived as a compound of elements, where a particular compound then appears at a particular location in the space. A space thus defines expressibility. This space is an a priori for any further expression. Yet, a particular concept of space should not be taken as a “constant” (see above about Lagrangean Abstraction).
It is a tribute to the grammar (and the structure) of mathematics as a societal institution for inventing signs and symbols as well as the relations between them that the dimensions of mathematical spaces are independent from each other. Of course, the notion of independence is a quite problematic one, as we have argued in our investigation of modernism and its presuppositions, and it is by no means a necessary condition for the possibility of a space.
If we drop the a priori determination of independence, we get an aspectional space (see this for more details). If the structure of an aspectional space is global and representational, as for instance the Euclidean or the hyperbolic space, the 3-dimensional aspectional space can be drawn in 2 dimensions as a ternary diagram.
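For illustration, here is a small sketch (the coordinate convention is an assumption of mine, one of several common ones) that maps three aspect weights onto a point of the 2-dimensional ternary diagram:

```python
import math

def ternary_to_xy(a, b, c):
    """Map three non-negative aspect weights onto a point of the ternary diagram.

    The corners of the triangle sit at (0,0), (1,0) and (0.5, sqrt(3)/2);
    each corner represents a 'pure' aspect, interior points mix all three."""
    s = a + b + c
    a, b, c = a / s, b / s, c / s          # normalize onto the simplex
    x = b + 0.5 * c                        # barycentric combination of the corners
    y = (math.sqrt(3) / 2) * c
    return x, y

# An entity weighted equally on all three aspects sits at the centroid.
x, y = ternary_to_xy(1, 1, 1)
```

Note that the entity enters the diagram as a whole: the three weights are not independent axes but aspects of one and the same point.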
In such a space, the mapped entity is not crushed into elements that appear as independent elements. In the aspectional space, the wholeness, the integrity of the object remains fully intact, even if we would express it by means of the respective formalism.
It is clear that the mere formulation of the aspectional space as a contrast to the Cartesian dimensional space implies a differential, and thus also a particular actualization, which results either in the aspectional space or in the Cartesian space. As we have seen, the difference between them is due to a different stance towards independence. As Wittgenstein elaborated in the context of his image theory—the double-aspect images—we may ask: what is it that shows up in the possibility for the duality itself? Obviously, the two spaces point towards the virtual of independence/dependence.
5. Local Space
The idea is quite simple. Instead of setting up a single space that serves as a global container into which we map different entities, we assign a local instance of space for each entity in the “would-be” global space. This, of course, is very close to the idea of the analysis situs by Leibniz, hence also to his concept of the monad, and also to the concept of topology.
Actually, in a flat Euclidean space (with full symmetry of the group defining the operations) it doesn’t make any difference whether we follow the route of the global space or that of the local space. By means of elementary geometric operations such as rotating or mirroring we can translate the global and the local spaces into each other without any loss or consequence, except perhaps a simplification regarding some analytical aspects.
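The claimed equivalence is easy to verify in a sketch (the frame parameters below are arbitrary, hypothetical values): expressing a point in a local frame by translation and rotation and then mapping it back is lossless in the flat Euclidean plane.

```python
import math

def to_local(point, origin, angle):
    """Express a globally given point in a local frame (translate, then rotate)."""
    dx, dy = point[0] - origin[0], point[1] - origin[1]
    c, s = math.cos(-angle), math.sin(-angle)
    return (c * dx - s * dy, s * dx + c * dy)

def to_global(point, origin, angle):
    """Inverse operation: rotate back, then translate back."""
    c, s = math.cos(angle), math.sin(angle)
    x, y = c * point[0] - s * point[1], s * point[0] + c * point[1]
    return (x + origin[0], y + origin[1])

p = (3.0, 4.0)
origin, angle = (1.0, 2.0), math.pi / 6   # an arbitrary local frame
q = to_global(to_local(p, origin, angle), origin, angle)
# q equals p up to floating-point error: the global and the local route coincide
```

In a curved (e.g. hyperbolic) space no such globally valid pair of transformations exists, which is exactly where the difference between global and local space becomes consequential.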
Yet, this remains true only if the space is indeed flat and fully symmetric regarding its group. For the relations in a hyperbolic space it makes a huge difference whether we use a global space or a local space. If we replace the “standard” symmetric algebra, e.g. by a Lie algebra, there can also be a drastic difference.
Since in self-referential contexts space is actively and locally deformed—Mandelbrot calls it “warping”—we should not expect the assumption of a globally homogeneous space to be reasonable. Instead, we should renormalize the space in terms of the intensities of local qualities. In other words, the reference system is no longer treated as a constant condition; instead, it is conceived as a procedural dynamisation (➔).
An example for the cost of the assumption of global homogeneity may be provided by the Navier-Stokes equation. This equation is used to describe the behavior of fluids. Yet, astonishingly, it is still not known (in 2012) whether and under which conditions the NSE has a solution. As far as I know, engineering (and physics) does not apply the trick of the local space to the problem of turbulent flows. The method of the “energy cascade” across different scales, however, points in the same direction. Yet, whatever approach engineers select, they apply it in global space. Note that local space is fundamentally different from approaches like fractals or wavelets; even as they point in the same direction, they are usually applied globally in the same way. Hence, one should try to localize the problem more radically. In other words, the Navier-Stokes equation and the current approaches may be too representational (not properly abstract) to allow for a general solution.
Decoherence is a language game that was first introduced in quantum physics. Although we can measure quanta, for instance in the quantum Hall effect, the microscopic description of a quantum does not imply an enumerable entity such as a particle. Even the descriptive level of the field is not abstract enough. The Schrödinger equation expresses waves of probability.
The problem is that we can make experiments where we do not find fields, i.e. continua, but rather particles. This is even true for electromagnetic waves. Obviously, there is a conceptual gap between the purely informational description, which actually refers to a potential, and the observable particle. In simple words (hopefully not too simplistic), this gap is closed by the concept of decoherence.
Decoherence is thus closely related to interpretation and measurement, hence also to the concepts (and language games) of reversibility vs. irreversibility, and as well to that of information and causality.
This allows us to recognize that the effect of decoherence is not limited to the quantum world. It appears as soon as we deal with information and interpretation, particularly, however, in contexts where we are forced to start with an almost purely informational approach. Such contexts are given by self-referential systems insofar as they show emergence, by associative networks, and certainly by any context where language or symbols are involved.
7. Mechanism
The concept of mechanism refers to three elements: (1) a more or less microscopic perspective, (2) a more or less deterministic principle that is implemented on the microscopic level, (3) a population of instances of that principle, making up the macroscopic entity. Mechanisms are not limited to the description of material “systems”. They can also be applied to purely conceptual arrangements, albeit this implies treating symbols as quasi-materials.
Machines can be conceived as a particular, “denaturalized” instance of the concept of mechanism. There is usually no population, and the principles work on the macroscopic, i.e. global, level. Machines reside in Cartesian space. They are fully deterministic and thus fully actualized, at least such are the hopes of the engineer. Literally, they ought to represent a set of formulas. Mechanisms, on the other hand, result in an entity that we can no longer describe as a fully actualized ensemble of embodied mechanisms.
Unfortunately, the engineering of informational machines still follows the route of fully deterministic machines. There are only two concepts I am aware of that transcend determinism: Douglas Hofstadter’s proposal of the Copycat, and our proposal of the extended Self-Organizing Map.
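For orientation, here is a minimal sketch of a plain Kohonen Self-Organizing Map in Python/NumPy. It is deliberately the standard variant with deterministic decay schedules, i.e. it illustrates only the baseline, not the “extended” SOM referred to above; grid size, epochs and parameters are illustrative choices.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=30, seed=0):
    """Train a plain Kohonen Self-Organizing Map (baseline sketch)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    # grid coordinates, used by the neighbourhood function
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    for epoch in range(epochs):
        # learning rate and neighbourhood radius decay deterministically
        lr = 0.5 * (1 - epoch / epochs)
        sigma = max(1.0, (max(h, w) / 2) * (1 - epoch / epochs))
        for x in rng.permutation(data):
            # best-matching unit = node with the closest weight vector
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (h, w))
            # pull every node towards x, weighted by grid distance to BMU
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights

# toy usage: two well-separated clusters in 2-D
data = np.vstack([np.random.default_rng(1).normal(0.2, 0.05, (50, 2)),
                  np.random.default_rng(2).normal(0.8, 0.05, (50, 2))])
weights = train_som(data)
```

The point of the sketch is the contrast: everything here is fixed in advance (the grid, the decay schedules), which is exactly the determinism that an extended, self-modifying variant would have to transcend.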
Similarly, the concept of mechanism is not yet a sufficiently explored topic in the philosophy of science. It is clear that the concept of mechanism is a necessary element of any practice that is interested in defying determinism.
8. Vanishing Regress
The infinite regress is an infamous figure in philosophy and philosophical logic. The label describes a situation where, by means of logic, you find as a conclusion that you have to apply the same logical argument again (logic taken here as n-valent, with n=2 or larger, but finite). Hence, it seems that nothing has been gained.
Despite its abundance, the infinite regress (IR) is nothing but a mistake known as petitio principii, though on a structural level. It is a classical pseudo-problem.
The basic reason is that there is no such thing as “pure logic” in the empirical world. Logic is a transcendental condition. Any actual instance of logic is imbued with unknown amounts of semantic references. One of the problems is that this usually goes unrecognized; hence, important questions remain unspoken. Yet, these semantic references require presuppositions of any kind; abstractly spoken, logic becomes applicable only through the reference to rules outside of logic, to orthoregulation.
Hence, there is no infinite regress. Any regress vanishes as soon as we take it seriously.
The petitio principii consists in the assumption of the universality and the actuality of logic. The regress shows up only under these presuppositions. It is the presupposition that creates the problem, which is conceived as a logical problem.
Obviously, we necessarily maintain empirical relations to the outer world. We play the anticipation game wherever we can afford to, either by modeling or by institutional structures. We expect at least a weak stability of these efforts. Orthoregulations are not directed at the most basic observations, but at the construction of observations, or at the generation and usage of models. Since there is—quite obviously so, because it is intended—much less diversity in models than in basic observations, the generation of rules as orthoregulation is faced with an empirical basis that is much smaller than the one we have at our disposal for building even a single model. The same holds for the second level of orthoregulation. Although we still talk about rule-following, the empirical reference changes.
Any condition is transcendent to the conditioned. From the perspective of the conditioned, from within the system, if you like, the conditions are not even visible. Thus, transcendence has nothing to do with mysticism. Yet, it may well be conceived as a kind of metaphysics: it is a concept that goes beyond physics (as a science), its claims and its possibilities. Note also that it is very different from a side condition.
In order to make conditions visible one has to change them, yet in an almost unintended manner. Only upon the interpretation of the result will it turn out whether one really has found new conditions, or just a variation of known parameters. Thus, experimentation is necessarily an “embodied” activity, which even includes the laboratory itself.
There are important implications. First, we can see conditions only upon the implied differential (➔). More precisely, conditions are always given as differential, or in a different language game, as orthoregulation (➔). This points to the second issue: Conditions imply a symbolic description. Even if they don’t get symbolized, e.g. in animals, or on the level of physical descriptions, implied conditions are already possible and thus actualized. Of course, any visibly symbolized condition refers itself to concepts, which in turn are transcendental.
There is a particularly interesting issue about conditions and transcendence with regard to emergence. It is not possible to use the language suited for the description of the microscopic level for the description of the emergent macroscopic level as well. We experience the same impossibility in the opposite direction. Although we see one system, perhaps even in a petri dish, or as the rendering of a computational procedure, we need two different descriptions that are even incommensurable with each other. In complex systems, such as organisms, the emergent levels act back onto the lower levels. Such, the organism is full of irreducible transcendence.
Yet, we do not need organisms to observe the implication of transcendence. We may experience it upon the interpretation of any arbitrary series. There are two reasons for this. First, a series changes its appearance if it is chunked in different ways. Second, as an informational entity the series itself provokes the necessity of decoherence, of duality, and thus of simultaneously present, yet incommensurable individuation.
Whenever we need to render something accessible to measurement, interpretation, or quantification, we need a starting point. Once set, this starting point can’t be revoked anymore. It conditions the possibility of measurement and interpretation.
On the level of concepts, such starting points appear as elements. Elements mediate between quantitability, which is close to the virtual, and quantification, that is, the establishment of a suitable space for rendering relations into maps.
It is, for instance, impossible to talk about complexity without a preceding elementarization. Without elementarization, we get stuck in binary concepts. This in turn makes any measurement impossible, and hence also interpretation and modeling. Binary (dual) concepts are thus not suitable for any attempt to clarify things by a (partial) transposition into the sayable. We can just assign names and values, both of which are completely arbitrary in any given case. Usually, the consequence is some kind of quarrel.
12. Language Game
When describing the concept of probabilization (➔) we asked about the consequences of uncertainty regarding our empirical observations. The problem starts, of course, even before that, because we have to set what we are going to observe. Since we have to do this in a way that could be accepted as a replication of the observing and that would allow us to “reproduce” the observations (which in turn requires interpretation), we obviously need language already before the empirical observation. This does not mean that primitive animals or machines would not be able to “observe”. Yet, their experimenting takes at least hundreds of generations, if not millions of years. Evolution is an embodied experiment. Interestingly enough, Gerald Edelman proposed an important role for positive and negative selection processes in both the long-term and short-term dynamics of the brain.
So, what is a “Language Game”, and why is it called that? Wittgenstein was the first to recognize the transcendental (➔) role of language for the domain of the “human”. Of course, it would be nonsense to claim that we can think only by means of language. But any kind of culture is necessarily based on the externalization of thoughts, regardless of the material shape of the thinking entities, and this externalization requires some kind of open language.
Above we provided a possible argument for that in very brief form. Second, Wittgenstein calls it a game for two reasons. (1) The use of language is basic, that is, neither a formalization nor a meta-language is possible. (2) Language is open, locally as well as globally, both in space and time, which means that the rules cannot be written down completely, neither a priori nor a posteriori to the act. Moreover, the rules are in constant evolution, or if you like, negotiation. We invent new rules and new words, drop others, import and assimilate rules and concepts from other languages. Just like in a very complicated game. Language reflects the Form of Life.
Without an open language we would not be able to build new concepts as informational entities, nor to use them, for this requires interpretation. Again, animals certainly use concepts as well, but in most cases these concepts remain closely tied to the respective type of body as a material arrangement, capable only of certain associations. As soon as concepts and bodies disengage, we observe culture, even in animals.
Like any other game, or set of rules, language games can be challenged and perverted. This fact is the trivial consequence of the need for interpretation. Interpretation is a primary condition for any reference (which ultimately results in a transcendental empiricism). In the vast majority of cases language games work quite well. For words are not just pointers; they carry a set of implied rules that indicate how to process them.
One particular way to challenge a language game leads to (pseudo-) paradoxes. Paradoxes are downright “language game colliders” (LGC). A LGC is a concept or an argument that relates two (or more) incommensurable language games into a close mutual dependence. Language games are incommensurable if they follow a different grammar, such as for instance countability of instances of a particular type (entity) and the sign for non-countability. This leads for instance to the Sorites-paradox, the paradox of the heap. It goes like this: Given a heap of sand, we remove one grain of sand after the other. First, the heap remains a heap. At some point, however, we can’t consider it a heap anymore. This situation then is commonly called “paradoxical”.
Yet, by no means does this establish a paradox. It is just an indication of a LGC. The concept of “heap” excludes the operation of counting, as the “mechanism” of the paradoxical argument itself emphasizes. The argument, however, also employs counting; thus a grammatical incommensurability is established. This, of course, induces difficulties. Nevertheless, these difficulties are neither bound to a “heap object”, nor to logic. It is more like the 2-year-old boy who gets terrified by the incommensurability of his own wish to do opposite things at once. In the same way, many variants can be constructed. Most of them collide countability with the sign of non-countability, that is, the structural declaration that something should not be counted. On the one side of the “paradox” we then find objects, numbers, entities, and on the other we find symbols or concepts like “infinity” or “all”. It is very important to understand that difference, because it introduces operations. In logic, however, there are no “operations”, because any term has to be reducible to a=a. For the same reason logic is not applicable to emergence, to the use of the differential, or to orthoregulation (➔).
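The collision can be made explicit in a few lines of Python: forcing the non-countable concept “heap” into the countable grammar requires an arbitrary cut-off, and the whole “paradox” reduces to the arbitrariness of that cut (the threshold values below are, of course, arbitrary by construction).

```python
def make_heap_predicate(threshold):
    """Formalizing 'heap' in countable grammar forces an arbitrary
    cut-off -- this imposed threshold is the collision itself."""
    return lambda grains: grains >= threshold

heap_a = make_heap_predicate(100)  # one arbitrary choice
heap_b = make_heap_predicate(101)  # an equally defensible one

# Removing one grain at a time: the predicate flips at exactly one
# (arbitrary) point, although no single grain 'matters'.
flips = [n for n in range(200, 0, -1) if heap_a(n) != heap_a(n - 1)]
```

Two equally defensible formalizations disagree about the very same pile (`heap_a(100)` is true, `heap_b(100)` is false), which shows that the difficulty lies in the forced countability, not in any “heap object”.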
13. Field of Proposals
An archaeologist digs. She tries to reconstruct forces in the past by modeling upon the artefacts found. She takes the sediments as they can be found… well, as they can be constructed. The important thing is that there is no necessity for the sequence of layers of sediments, nor for their naming. That means that in archaeology we do not deal with causality.
Yet, the sediments didn’t get there to build a mountain by chance. There is piecewise coherence, spanning certain periods of time, until an outer event (“comet”) or an inner bifurcation leads to radically different conditions. A small remark is indicated here: of course, coherence is itself a concept whose usage requires rules for its instantiation.
It was Michel Foucault who transferred this structure into the investigation of cultural issues. In culture, the coherence is not given by the (mainly disentangling) geological forces and the (mainly associating) biological forces. Instead we find language, institutions, traditions, images of thought, concepts and morality. These form a meta-fluid assemblage that sometimes is more coherent, and sometimes not; sometimes stable like glass (which physically is a fluid), sometimes stable like ice (which is a crystal), sometimes like a non-Newtonian fluid whose viscosity depends on local contexts. This multidimensional “carpet” he called the “field of proposals”. At any point in time, we are immersed in that field.
14. Processual Indicative
The processual indicative (PI) is a concept that is related to language games and words. Which role(s) do words play in language? While it is clearly based on an illusion to claim that words refer to objects, it also seems inappropriate to think that they refer directly to concepts or to symbols. Unger once noted that “the cloud doesn’t exist”. Yet, the symbolic value associated with the word “cloud” is absolutely clear—it is part of the propositional world. Following Wittgenstein we tend to propose that vagueness is not a problem of language that could or should be cured; it is a design principle. Quite the contrary, it is linguistics that must be “cured”. In this situation inferentialism as proposed by Robert Brandom may help. The label “inferentialism” refers to a set of arguments proposing that in the use of language we necessarily ascribe roles to each other in order to provide some of the necessary conditions that allow us to interpret any “released utterance”, which of course need not follow the official school-grammar of the respective language.
The language game “cloud” contains a marker that is linked both to the structure and to the semantics of the word. This marker indicates two very different things: (1) there is an object without sharp borders, and (2) no precise measurement should be performed.
Wittgenstein remarked that to know a language, or a word, means to know how to use it. The interesting thing about language and words is not that they are used to reference a putative “object”. Upon applying Lagrangean abstraction we can conceive the “reference” as a constant compound, built from a rule and another constant. The interesting thing about words is the rules they carry along together with the reference to another sign, or, in simpler cases, to another word.
It is thus not just the “object” that is indicated by the word “cloud,” but rather a particular procedure, or class of procedures, which I as the primary speaker suggest to use by saying “Oh, there is a cloud.” By means of such procedures a particular style of modeling will be “induced” in my discourse partner, a particular way to actualize an operationalization, leading to such a representation of the external phenomenon that both partners are able to increase their mutual “understanding.” This scheme transparently describes the inner structure of what Charles S. Peirce called a “sign situation.”
Words are thus not just primitive symbols, or relations to other words, as they are treated by the community of computational linguists. They are to be conceived as a compound made from a pointer to other (Peircean) signs and a bag of rules for how to establish a model about the respective reference to the external world, or to the world of concepts. This second part comes as an invitation, or even as a demand. Hence we call it the processual indicative of words.
15. Inverted Experimental Plan
Kasparov was once asked how he could manage to find the most appropriate move among the billions upon billions of possible moves through which the computer sifts. His answer was: “I take the right one.”
In empirical measurement we are faced with the problem of setting up the measurement scheme for the most important variables. Yet, we could measure an almost infinite number of different things, of course, even in the case of trivial machines. If the experimenter is very experienced, she might know where to look. In other words, she can use already available concepts to organize the measurement. Or, in turn, we may say that the measurement reflects the concepts. But how to know?
Anyway, if one knew, then one could set up a so-called statistical trial plan in order to minimize effort and to maximize the probability of observing clear differences. Unfortunately, these plans work only for at most a few variables, usually fewer than 5. Any real-world system is influenced by many more factors.
In this situation we can take a fresh look at classification. If we organize it according to the scheme shown here, we will find a model that identifies both the most relevant variables and the best segmentation of the data from which to derive the actual proposal about that relevancy. Usually, these two aspects are deeply related to each other, except in the case of random systems, where each “particle” moves completely independently of the others.
Such a model, which accomplishes segmentation and selection “simultaneously”, can be interpreted as if it were the result of a statistical trial plan. Thus, we can also call the model an “inverted plan”. From this we can derive a further proposal, namely that as long as a putative model does not behave as an inverted plan, it is unsuitable for serving as a basis to proceed from the probabilistic properties of a model to the propositional usage of a model.
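As a toy illustration of accomplishing selection and segmentation “simultaneously”, consider a one-level decision stump: a single exhaustive search yields both the most relevant variable and the cut that segments the data. This is only a minimal sketch of the general idea, not the model class referred to in the text; the data and names are made up for illustration.

```python
def stump(data, labels):
    """Find, over all variables and all cut points, the single split
    that best separates two classes (lowest misclassification rate).

    Returns (variable index, threshold, error rate): the selected
    variable and the segmentation fall out of the same search.
    """
    n_vars = len(data[0])
    best = (None, None, float("inf"))
    for j in range(n_vars):
        for t in sorted(set(row[j] for row in data)):
            # predict class 1 if value >= t; allow the inverted rule too
            pred = [1 if row[j] >= t else 0 for row in data]
            err = sum(p != y for p, y in zip(pred, labels)) / len(labels)
            err = min(err, 1 - err)
            if err < best[2]:
                best = (j, t, err)
    return best

# toy data: variable 0 is irrelevant noise, variable 1 separates classes
data = [(0.9, 0.1), (0.2, 0.2), (0.5, 0.8), (0.1, 0.9)]
labels = [0, 0, 1, 1]
var, thr, err = stump(data, labels)
```

Here the search selects variable 1 with zero error: relevance of the variable and the segmentation of the data are two faces of one and the same result.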
16. Diagram
A diagram relates a small set of abstract elements in such a way as to demonstrate the irreducible quality of the whole. Such, not every (mathematical) graph is also a diagram, but every diagram is a graph. A diagram is the differential for a given structure, while it is itself, of course, again a structure. Yet, obviously the inverse does not hold: not every structure is also a diagram.
Diagrams work like abstract machines, as Deleuze noted. Usually, they resemble neither the input nor the output in any obvious way. Of course, they also refer to a purpose or a produce (an implied purpose), even if the produce is a probabilistic outcome. In an important sense, diagrams are instances of the abstract model. Yet, there is a slight difference. Diagrams show us the relations between the elements of the abstract machine, which are established through the operators in modeling.
17. Self-Referentiality
Self-referentiality (SR) is a property of an operational arrangement, whether this arrangement is of a more material or more immaterial nature. In a SR arrangement, the results of the operations establish the structural conditions of further operations. This implies two distinctions: first, between a microscopic and a macroscopic level of description; second, between the principle (working on the microscopic level) and the entirety.
These properties make them different from other conceptual figures like recursion or the infinite regress (➔) (which anyway is impossible). The infinite regress is a logical figure, hence there are no “operations”. Recursion does not define its conditions in a circular manner, because any recursion can be linearized. It is just a particularly condensed form of expressing a strictly serial arrangement of repetitive operations. Self-referential arrangements (SRA) cannot be linearized.
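The linearization claim can be made concrete with a small sketch: the same computation written recursively and then as an explicit serial arrangement of operations over a stack (names and data are illustrative).

```python
def depth_recursive(node):
    """Nesting depth of a nested-list 'tree', written recursively."""
    if not isinstance(node, list):
        return 0
    return 1 + max((depth_recursive(c) for c in node), default=0)

def depth_iterative(node):
    """The same computation linearized with an explicit stack: the
    recursion defines no circular conditions, it is just condensed
    notation for a strictly serial arrangement of operations."""
    best, stack = 0, [(node, 0)]
    while stack:
        current, d = stack.pop()
        if isinstance(current, list):
            best = max(best, d + 1)      # each enclosing list adds one
            for c in current:
                stack.append((c, d + 1))
    return best

tree = [1, [2, [3, 4]], [5]]
```

Both functions agree on every input, which is exactly what distinguishes recursion from a genuinely self-referential arrangement: nothing in the recursion changes the conditions of its own further operation.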
SRA are also (strongly) different from the cybernetic perspective. A cybernetic arrangement organizes the throughput of a machine-like system without any changes occurring on the structural or conditional level of the system itself. This is called feedback. In SRA, there is nothing like a feedback; instead, the conditions for the mechanism(s) (➔) change and lead to different operations or behavior. In biological systems we rarely find cybernetic arrangements, probably only in simple wiring patterns in the vegetative nervous system. Even the innumerably cited (and taught) prototype of cybernetics, the “regulation” of blood sugar in an animal organism, cannot be conceived as a cybernetic cycle, as research has demonstrated in the past decade.
An example of a material SRA is the class of Reaction-Diffusion Systems (RDS), regardless of their instantiation. We know, for instance, of the Turing RDS, the Turing-McCabe system, the Belousov-Zhabotinsky (BZ) system, the Gierer-Meinhardt system, and the Gray-Scott system. These systems differ with regard to the kinetics of the mechanisms on the microscopic level and the kind of mechanisms involved. While the BZ system or the McCabe system does not introduce or remove anything, the Gray-Scott system models a throughput system, where some produce is removed from the reactor. Even in the case of blood sugar it is much more appropriate to conceive of it as an RDS (some parts spatially fixed, others not) than to think of it as a cybernetic control system.
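The Gray-Scott system can be sketched in a few lines of Python/NumPy. The update rule is the standard one (feed rate F, removal rate F+k, reaction u + 2v → 3v); grid size, step count and parameter values are illustrative choices, not canonical ones.

```python
import numpy as np

def laplacian(a):
    """Discrete Laplacian, 5-point stencil with periodic boundaries."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0)
          + np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

def gray_scott(n=64, steps=2000, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    """Run the Gray-Scott reaction-diffusion system on an n x n grid.

    u is fed into the reactor at rate F, v is removed at rate F + k;
    the reaction u + 2v -> 3v converts u into v. The results of the
    local operations set the conditions of the further operations.
    """
    u = np.ones((n, n))
    v = np.zeros((n, n))
    # perturb the centre so that patterns can nucleate
    r = n // 8
    u[n//2-r:n//2+r, n//2-r:n//2+r] = 0.5
    v[n//2-r:n//2+r, n//2-r:n//2+r] = 0.25
    for _ in range(steps):
        uvv = u * v * v
        u += Du * laplacian(u) - uvv + F * (1 - u)
        v += Dv * laplacian(v) + uvv - (F + k) * v
    return u, v

u, v = gray_scott()
```

Note the self-referential structure: the concentration fields produced at each step are themselves the conditions under which the next step’s reactions and diffusion operate, which is why the arrangement cannot be linearized into a fixed serial plan.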
18. Choreostemic Space
The choreostemic space (CS) is a concept whose target is most easily introduced by the following observation. In order to organize our empirical observations we need models. We even need a “system of models” for understanding language. In order to set up models we need symbols and rules that have nothing to do with the model. These rules in turn refer to concepts and to a language. Such, they imply mediality in a twofold manner: first, through language; second, through the fact that models never appear as a singularized instance. They always imply further models, directed to the same or to other purposes, but in any case forming a population, which can itself serve as a medium. Rule-following, on the other hand, implies an action; it is a label for the decoherence (➔) into irreversibility, i.e. for interpretation and selection. Rule-following implies virtuality.
Such, we arrive at four concepts: model, mediality, concept and virtuality (➔). The CS describes a particular possibility of how to relate these four concepts to each other. This arrangement is, of course, a kind of space (➔).
The particularity of the CS derives from its properties.
- It is defined in a self-referential manner; it is an operational space that does not allow for representations.
- The four concepts are conceived as transcendental “directions”, or “headings” (like in the navigation of airplanes) for anything happening in the CS. The transcendental headings are called “aspects”.
- The structure of the space is hyperbolic and local (➔).
- The choreostemic space is an aspectional space (➔).
- The containment of the space is the 2nd-order differential.
This structure results in some quite remarkable consequences.
Since its formal setup is self-referential it is not a definition. More precisely, it does not refer to any other concept. It does not need any axiom as its preceding condition other than the possibility for interpretation.
The CS does not contain “locations”, because its containment is the generalized differential, that is a 2nd-order differential. Pointing to a putative location, even the claim to do so, “transports” the entity trying to do so to “elsewhere”. More precisely, any explication is heading thoughts towards the model while “still” keeping a relation to the other aspects, namely concepts.
Everything that could be thought starts in the CS and leaves a trace in it. Of course, also talking about the CS, or setting up its structural properties, or the use of the concept “properties”. The same holds for “pointing” or “showing”. Such, it extends, clarifies and provides a general foundation for the concept of thought. Yet, the CS does not “contain” anything that could be thought or said. Through the process of instantiation of the transcendental aspects we arrive at interpretable entities, such as language, images, or intended material arrangements that usually are called design. It is a procedural differential (➔), the inner structure of interpretation itself.
Since any usage of concepts leaves a trace in the CS, the CS can be used as a tool to map different systems of thought, even different Forms of Life. Take for instance a cleric and a scientist, or within philosophy, an idealist (e.g. Fichte) and a pragmatist (e.g. Peirce), or within science a biologist and a physicist, or within religion a bishop and a Sufi dervish. Any of them builds a more or less typical attractor that describes their particular dynamics in thinking.
Comparing these attractors allows us to investigate resemblances beyond any representation. Hence, it is also suitable for getting rid of values and binary concepts without falling into relativism or skepticism. We can conceive of it as the possibility of constructing a kind of measurement device for thought that comprises its own renormalization procedure.
The fact that the CS is about the general notion of thought, or, in other words, generic thought, sometimes gives rise to two misunderstandings. First, the CS can’t be conceived as a concept that would map or represent the episteme. The CS is not about epistemology. Second, saying that it is about generic thought does not mean that it is about “pure” thought. Quite the opposite: the CS excludes any “purity”. Idealism shows itself just as a particular figure in the CS.
Altogether, the CS is the last outpost of the sayable and the demonstrable. It is possible only through two properties: first, its self-referentiality; second, its differential quality, which makes the instantiation a necessary operation and condition for itself.
The final remark thus states that the CS dissolves the idea of philosophical justification. Justification appears as a very small attractor near the attractor of logic, both of which lie somewhere on a pre-specific path that leads from the (abstract) model to the (abstract) concept.
- Gilles Deleuze, Difference and Repetition.
- Marcos Novak, “The Transphysical City”, 1996. Available online.
- Linda Pollak, “Constructed Ground: Questions of Scale”, in: Charles Waldheim (ed.), The Landscape Urbanism Reader, 2007.
- B. Delamotte, “A hint of renormalization”, Am. J. Phys. 72 (2004), 170–184. Available online: arXiv:hep-th/0212049.
- Vera Bühlmann, “Serialization, Linearization, Modelling”. The First International Deleuze Studies Conference, Cardiff University, School of English, Communication and Philosophy, Cardiff, Wales, UK, August 11–14, 2008.
- Vera Bühlmann, inhabiting media. Annäherungen an Herkünfte und Topoi medialer Architektonik. PhD thesis, University of Basel (CH), 2009.
- Vera Bühlmann, “Articulating quantities, if things depend on whatever can be the case”, forthcoming in the proceedings of The Art of Concept, 3rd Conference: CONJUNCTURE — A Series of Symposia on 21st Century Philosophy, Politics, and Aesthetics, ed. Nathan Brown and Petar Milat, Multimedia Institute MAMA, Zagreb, Croatia.