Ideas and Machinic Platonism

March 1, 2012

Once the cat had the idea to go on a journey…
You don’t believe me? Did not your cat have the same idea? Or is your doubt about my belief that cats can have ideas?

So, look at this individual here, who is climbing along the facade, outside the window…

(sorry that the spoken comment in the clip is available only in German, but I am quite sure you get the point anyway…)

Cats definitely know about the height of their own position, and this one is climbing from flat to flat … outside, on the facade of the building, on the 6th floor. Crazy, or cool, respectively, in the full meaning of those words, this cat here, since it looks like she has been following a plan… (of course, anyone who has ever lived with a cat knows very well that they can have plans… pride like this one, and also remorse…)

Yet, how would your doubts look if I said “Once the machine got the idea…”? Probably you would stop talking or listening to me, turning away from this strange guy. Anyway, just that is the claim here, and hence I hope you keep reading.

We already discussed elsewhere1 that it is quite easy to derive a bunch of hypotheses about empirical data. Yet, deriving regularities or rules from empirical data does not make up an idea, or a concept. At most they could serve as a kind of qualified precursor for the latter. Once the subject of interest has been identified, deriving hypotheses about it is almost mechanical. Ideas, and concepts as well, are much more closely related to the invention of a problematics, as Deleuze has worked out again and again, without themselves being that invention or problematics. To overlook (or to negate?) the difference between the problematic and the question is one of the main failures of logical empiricism, and probably even of today’s science.

The Topic

But what is it then, that would make up an idea, or a concept? Douglas Hofstadter once wrote [1] that we lack a concept of concept. Since then, a discipline has emerged that calls itself “formal concept analysis”. So some people actually do think that concepts can be analyzed formally. We will see that the issues concerning the relation between concepts and form are quite important. We already met some aspects of that relationship in the chapters about formalization and creativity. And we definitely think that formalization expels anything interesting from whatever probably had been a concept before that formalization. Of course, formalization is an important part of thinking, yet its importance is restricted to the stages before there are concepts, or after we have reduced them to a fixed set of finite rules.


Ideas are almost annoying, I mean, as a philosophical concept, and they have been so since the first clear expressions of philosophy. From the very beginning there was a quarrel not only about “where they come from,” but also about their role with respect to knowledge. Very early on in philosophy, two seemingly juxtaposed positions emerged, represented by the philosophical approaches of Platon and Aristotle. The former claimed that ideas come before perception, while the latter clearly assigned ideas the status of something derived, secondary. Yet, recent research has emphasized the possibility that the contrast between them is not as strong as it has been proposed to be for more than 2000 years. There is an eminent empiric pillar in Platon’s philosophical building [2].

We certainly will not delve into this discussion here; it simply would take too much space and effort, and, not least, there are enough sources on the web displaying the traditional positions in great detail. Throughout history since Aristotle, many and rather divergent flavors of idealism have emerged. Whatever the exact distinctive claim of any of those positions is, they all share the belief in the dominance of some top-down principle as an essential part of the conditions for the possibility of knowledge, or more generally, the episteme. Some philosophers, like Hegel or Frege, just as others nowadays perceived as members of German Idealism, took rather radical positions. Frege’s hyper-platonism, probably the most extreme idealistic position (though not exceeding Hegel’s “great spirit” by that far), indeed claimed that something like a triangle exists, and quite literally so, albeit in a non-substantial manner, completely independent of any, e.g. human, thought.

Let us fix this main property of the claim of a top-down principle as characteristic of any flavor of idealism. The decisive question then is how we could think the becoming of ideas. It is clearly one of the weaknesses of idealistic positions that they induce a salient vulnerability regarding the issue of justification. As a philosophical structure, idealism mixes content with value in the structural domain, consequently and quite directly leading to a certain kind of blind spot: political power is justified by the right idea. The factual consequences have been disastrous throughout history.

So, there are several alternatives for thinking about this becoming. But even before we consider any alternative, it should be clear that something like “becoming” and “idealism” are barely compatible. Maybe a very soft idealism, one that has already turned into pragmatism, much in the vein of Charles S. Peirce, could allow us to think process and ideas together. Hegel’s position, as well as Schelling’s, Fichte’s, Marx’s or Frege’s, definitely excludes any such rapprochement or convergence.

The becoming of ideas cannot be thought of as something flowing down from even greater transcendental heights. Of course, anybody may choose to invoke some kind of divinity here, but obviously that does not help much. A solution according to Hegel’s great spirit, history itself, is not helpful either, even if this concept implied that there is something in and about the community that is indispensable when it comes to thinking. Much later, Wittgenstein took a related route and thereby initiated the momentum towards the linguistic turn. Yet, Hegel’s history is not useful for getting clear about the becoming of ideas regarding the mechanisms involved. And without such mechanisms, anything like machine-based episteme, or cats having ideas, is accepted as being impossible a priori.

One such mechanism is interpretation. For us the principle of the primacy of interpretation is definitely indisputable. This does not mean that we disregard the concept of the idea; yet, we clearly take an Aristotelian position. More à jour, we could say that we are quite fond of Deleuze’s position on relating empiric impressions, affects, and thought. There are, of course, many supporters in the period of time that spans between Aristotle and Deleuze who have been quite influential for our position.2 Yet, somehow it all culminated in the approach that has been labelled French philosophy, which for us comprises mainly Michel Serres, Gilles Deleuze and Michel Foucault, with some predecessors like Gilbert Simondon. They converged towards a position that allows one to think the embedding of ideas in the world as a process, or as an ongoing event [3,4], an embedding based on empiric affects.

So far, so good. Yet, we only declared the kind of raft we will build to sail with. We didn’t mention anything about how to build this raft or how to sail it. Before we can start to constructively discuss the relation between machines and ideas we first have to visit the concept, both as an issue and as a concept.


“Concept” is a very special concept. First, it is not externalizable, which is why we call it a strongly singular term. Whenever one thinks “concept,” there is already something like a concept. For most of the other terms in our languages, such as “idea,” that does not hold. In this respect, and regarding the structural dynamics of its usage, “concept” behaves similarly to “language” or “formalization.”

Additionally, however, “concept” is not a self-contained term like “language”. One needs not only symbols; one even needs a combination of categories and structured expressions; there are also Peircean signs involved; and, last but not least, concepts relate to models, even though models are also quite apart from them. Ideas do not relate to models in the same way that concepts do.

Let us, for instance, take the concept of time. There is this abundantly cited quote by Augustine [5], a passage where he tries to explain the status of God as the creator of time, and hence the fundamental incomprehensibility of God, and even of his creations (such as time) [my emphasis]:

For what is time? Who can easily and briefly explain it? Who even in thought can comprehend it, even to the pronouncing of a word concerning it? But what in speaking do we refer to more familiarly and knowingly than time? And certainly we understand when we speak of it; we understand also when we hear it spoken of by another. What, then, is time? If no one ask of me, I know; if I wish to explain to him who asks, I know not. Yet I say with confidence, that I know that if nothing passed away, there would not be past time; and if nothing were coming, there would not be future time; and if nothing were, there would not be present time.

I certainly don’t want to speculate about “time” (or God) here; instead I would like to focus on the peculiarity Augustine is talking about. Many, and probably even Augustine himself, confine this peculiarity to time (and space). I think, however, that this peculiarity applies to any concept.

By means of this example we can quite clearly experience the difference between ideas and concepts. Ideas are a kind of model (we will return to that in the next section), while concepts are both the condition for models and conditioned by models. The concept of time provides the condition for calendars, which in turn can be conceived as a possible condition for the operationalization of expectability.

“Concepts” as well as “models” do not exist as “pure” forms. We elicit a strange and eminently counter-intuitive force when trying to “think” pure concepts or models. The harder we try, the more we imply their “opposite”, which in the case of concepts presumably is the embedding potentiality of mechanisms, and in the case of models we could say it is simply belief. We will discuss these relations in much more detail in the chapter about the choreosteme (forthcoming). Actually, we think it is appropriate to conceive of terms like “concept” and “model” as choreostemic singular terms, or, for short, choreostemic singularities.

Even from an ontological perspective we could not claim that there “is” such a thing as a “concept”. Well, you may already know that we refute any ontological approach anyway. Yet, in the case of choreostemic singular terms like “concept” we can’t simply resort to our beloved language game. With respect to language, the choreosteme takes the role of an apriori, something like the sum of all conditions.

Since we would need a full discussion of the concept of the choreosteme, we can’t fully discuss the concept of “concept” here. Yet, as a kind of summary, we may propose that the important point about a concept is that it is nothing that could exist. It exists neither as matter, nor as information, nor as substance, nor as form.

The language game of “concept” simply points in the direction of that non-existence. Concepts are not a “thing” that we could analyze, and also nothing that we could relate to by means of an identifiable relation (as, e.g., in a graph). Concepts are best taken as a gradient field in a choreostemic space, yet one exhibiting a quite unusual structure and topology. So far, we have identified two (of a total of four) singularities that together spawn the choreostemic space. We could also say that the language game of “concept” is used to indicate a certain form of drift in the choreostemic space. (Later we will also discuss the topology of that space, among many other issues.)

For our concerns here in this chapter, the machine-based episteme, we can conclude that it would be misguided to try to implement concepts (or their formal analysis). The issue of the conditions for the ability to move around in the choreostemic space we have to postpone. In other words, we have confined our task, or at least found a suitable entry point for it: the investigation of the relation between machines and ideas.

Machines and Ideas

When talking about machines and ideas we are, here and for the time being, not interested in the usage of machines to support “having” ideas. We are not interested in such tooling for now. The question is about the mechanism inside the machine that would lead to the emergence of ideas.

Think about the idea of a triangle. Certainly, triangles as we imagine them do not belong to the material world. Any possible factual representation is imperfect compared with the idea. Yet, without the idea (of the triangle) we wouldn’t be able to proceed, for instance, towards land survey. As already said, ideas serve as models; they do not require formalization, yet they often live as a formalization (though not always a mathematical one) in the sense of an idealized model; in other words, they serve as rungs of a ladder for actions. Concepts, if we contrast them with ideas, that is, if we try to distinguish the two, never could be formalized; they remain inaccessible as condition. Nothing else could be expected from a transcendental singularity.

Back to our triangle. Although we can’t represent triangles perfectly, seeing a lot of imperfect triangles gives rise to the idea of the triangle. Rephrased this way, we may recognize that the first half of the task is to look for a process that would provide an idealization (of a model), starting from empirical impressions. The second half of the task is to get the idea working as a kind of template, yet not as a literal template. Such an abstract pattern is detached from any direct empirical relation, despite the fact that we once started with empiric data.

Table 1: The two tasks in realizing “machinic idealism”

Task 1: process of idealization that starts with an intensional description
Task 2: applying the idealization for first-of-a-kind-encounters

Here we should note that culture is almost defined by the fact that it provides such ideas before any individual person could possibly collect enough experience to derive them on her own.

In order to approach these tasks, we first need model systems that exhibit the desired behavior but are also simple enough to comprehend. Let us first deal with the first half of the task.

Task 1: The Process of Idealization

We already mentioned that we need to start from empirical impressions. These can be provided by the Self-organizing Map (SOM), as it is able to abstract from the list of observations (the extensions), thereby building an intensional representation of the data. In other words, the SOM is able to create “representative” classes. Of course, these representations are dependent on some parameters, but that’s not the important point here.

Once we have those intensions available, we may ask how to proceed in order to arrive at something that we could call an idea. Our proposal for an appropriate model system consists of the following parts:

  • (1) a small set (n=4) of profiles, each consisting of 3 properties; the form of the profiles is set a priori such that they overlap partially;
  • (2) a small SOM, here with 12×12=144 nodes; the SOM needs to be trainable and should also provide a classification service, i.e. act as a model;
  • (3) a simple Monte-Carlo simulation device that is able to create randomly varied profiles which deviate from the originals without departing too much;
  • (4) a measurement process that records the (simulated) data flow.

The profiles are defined as shown in the following table (V denotes variables, C denotes categories, or classes):

      V1   V2   V3
C1    0.1  0.4  0.6
C2    0.8  0.4  0.6
C3    0.3  0.1  0.4
C4    0.2  0.2  0.8
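For concreteness, parts (1) and (3) of this model system, the four profiles and the Monte-Carlo simulation device, might be sketched as follows. The dispersion sigma, the number of variants per profile, and the clipping to [0, 1] are my assumptions; the text only requires that the variants deviate from the originals without departing too much.

```python
import numpy as np

# The four a-priori profiles from the table above
# (rows = classes C1..C4, columns = variables V1..V3).
PROFILES = np.array([
    [0.1, 0.4, 0.6],  # C1
    [0.8, 0.4, 0.6],  # C2
    [0.3, 0.1, 0.4],  # C3
    [0.2, 0.2, 0.8],  # C4
])

def simulate(profiles, n_per_profile=50, sigma=0.05, rng=None):
    """Monte-Carlo device (part 3): randomly varied profiles that deviate
    from the originals without departing too much.
    sigma and n_per_profile are assumed values, not from the text."""
    rng = np.random.default_rng() if rng is None else rng
    records, labels = [], []
    for c, p in enumerate(profiles):
        noisy = rng.normal(loc=p, scale=sigma, size=(n_per_profile, len(p)))
        records.append(np.clip(noisy, 0.0, 1.0))  # keep values in [0, 1]
        labels.extend([c] * n_per_profile)
    return np.vstack(records), np.array(labels)

data, labels = simulate(PROFILES, rng=np.random.default_rng(0))
print(data.shape)  # (200, 3)
```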

From these parts we then build a cyclic process, which comprises the following steps.

  • (0) Organize some empirical measurement for training the SOM; in our model system, however, we use the original profiles and create an artificial body of “original” data, in order to be able to detect the relevant phenomenon (we have perfect knowledge about the measurement);
  • (1) train the SOM;
  • (2) check the intensional descriptions for their implied risk (which should be minimal, i.e. below some threshold) and extract them as profiles;
  • (3) use these profiles to create a bunch of simulated (artificial) data;
  • (4) take the profile definitions and simulate enough records to train the SOM again, continuing with step (1).

Thus, we have two counteracting forces: (1) a dispersion due to the randomizing simulation, and (2) the focusing of the SOM due to the filtering along the separability, in our case operationalized as risk (1/ppv, ppv = positive predictive value) per node. Note that the SOM process is not a directly re-entrant process, as, for instance, Elman networks are [6,7,8].4
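Taken together, the whole cycle might be condensed into a runnable sketch. Only the 12×12 grid, the four profiles, and the structure of the cycle (train, filter by risk = 1/ppv, simulate, retrain) are taken from the text; the minimal online SOM, its learning-rate and neighbourhood schedules, the noise level, and the purity threshold are all assumptions for illustration.

```python
import numpy as np

PROFILES = np.array([[0.1, 0.4, 0.6], [0.8, 0.4, 0.6],
                     [0.3, 0.1, 0.4], [0.2, 0.2, 0.8]])
GRID = (12, 12)  # 144 nodes, as in the model system
rng = np.random.default_rng(0)

def simulate(profiles, n=30, sigma=0.05):
    """Monte-Carlo device: noisy variants of each profile (sigma assumed)."""
    labels = np.repeat(np.arange(len(profiles)), n)
    data = np.clip(profiles[labels] + rng.normal(0, sigma, (len(labels), 3)), 0, 1)
    return data, labels

def train_som(data, grid=GRID, epochs=10):
    """Minimal online SOM; the decay schedules are assumptions."""
    h, w = grid
    codebook = rng.random((h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    for t in range(epochs):
        lr = 0.5 * (1.0 - t / epochs)
        radius = max(1.0, (max(h, w) / 2.0) * (1.0 - t / epochs))
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            g = np.exp(-d2 / (2.0 * radius ** 2))      # neighbourhood kernel
            codebook += (lr * g)[:, None] * (x - codebook)
    return codebook

def extract_profiles(codebook, data, labels, min_ppv=0.9):
    """Step (2): keep only intensions whose node is 'pure' enough.
    ppv = share of the majority class among a node's records; risk = 1/ppv.
    The threshold is an assumed operationalization of 'minimal risk'."""
    bmus = np.array([np.argmin(((codebook - x) ** 2).sum(axis=1)) for x in data])
    kept = [codebook[node]
            for node in np.unique(bmus)
            if np.bincount(labels[bmus == node]).max() / (bmus == node).sum() >= min_ppv]
    return np.array(kept)

# The cycle: (0/1) train on the artificial "original" data, (2) filter by
# risk, (3/4) simulate surrogate data from the kept intensions, retrain.
data, labels = simulate(PROFILES)
codebook = train_som(data)
for cycle in range(3):
    profiles = extract_profiles(codebook, data, labels)
    if len(profiles) == 0:
        break
    data, labels = simulate(profiles)  # re-entrant, simulated surrogate data
    codebook = train_som(data)
```

Rendering the codebook as a 12×12 RGB image after each cycle yields panels of the kind shown in figure 1a thru 1i, with the two counteracting forces, dispersion from the simulation and focusing from the ppv filter, acting on the intensions.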

This process leads not only to a focusing contrast-enhancement but also to a (limited) form of inventing new intensional descriptions that have never been present in the empiric measurement, at least not saliently enough to show up as an intension.

The following figure 1a thru 1i shows 9 snapshots from the evolution of such a system; it starts at the top-left of the portfolio, then proceeds row-wise from left to right down to the bottom-right item. Each of the 9 items displays a SOM, where the RGB color corresponds to the three variables V1, V2, V3. A particular color thus represents a particular profile on the level of the intension. Remember that the intensions are built from the field-wise average across all the extensions collected by a particular node.

Well, let us now contemplate a bit the sequence of these panels, which represents the evolution of the system. The first point is that there is no particular locational stability. Of course not, I am tempted to say, since a SOM is not an image that represents something as an image. A SOM contains intensions and abstractions; the only thing that counts is its predictive power.

Now, comparing the colors between the first and the second panel, we see that the green (top-right in 1a, middle-left in 1b) and the brownish (top-left in 1a, middle-right in 1b) appear much clearer in 1b than in 1a. In 1a, the green obviously was “contaminated” by blue, and actually by all other values as well, leading to its brightness. This tendency prevails. In 1c and 1d yellowish colors are separated, etc.

Figure 1a thru 1i: A simple SOM in a re-entrant Markov process develops idealization. Time index proceeds from top-left to bottom-right.

The point now is that the intensions contained in the last SOM (1i, bottom-right of the portfolio) were not recognizable in the beginning; in some important respect they have not been present at all. Our SOM steadily drifted away from its empirical roots. That’s not a big surprise, indeed, for we used a randomization process. The nice thing is something different: the intensions get “purified”, thereby changing their status from “intensions” to “ideas”.

Now imagine that the variables V1..Vn represent properties of geometric primitives. Our sensory apparatus is able to perceive and to encode them: horizontal lines, vertical lines, crossings, etc. In empiric data our visual apparatus may find any combination of those properties, especially in the case of a (platonic) school (say: an academy) where the pupils and the teachers draw triangle after triangle onto wax tablets, or into the sand of the pathways in the garden…

By now, the message should be quite clear: there is nothing special about ideas. In abstract terms, what is needed is

  • (1) a SOM-like structure;
  • (2) a self-directed simulation process;
  • (3) re-entrant modeling

Notice that we need not specify a target variable. The associative process itself is sufficient.

Given this model, it should no longer be surprising that the first philosophers came up with idealism. It is almost built into the nature of the brain. We may summarize our achievements in the following characterization:

Ideas can be conceived as idealizations of intensional descriptions.

It is of course important to be aware of the status of such a “definition”. First, we tried to separate concepts and ideas. Most of the literature about ideas conflates them. Yet, as long as they are conflated, any reasoning about mental affairs, cognition, thinking and knowledge necessarily remains inappropriate. For instance, the infamous discourse about universals and qualia seriously suffered from that conflation, or, more precisely, those debates only arose due to that mess.

Second, our lemma is just an operationalization, even though we are quite convinced of its reasonableness. Yet, there might be different ones.

Our proposal has important benefits, though, as it matches many of the aspects commonly associated with the term “idea.” In my opinion, what is especially striking about the proposed model is the observation that idealization implicitly also led to the “invention” of “intensions” that were not present in the empiric data. Who would have expected that idealization is implicitly inventive?

Finally, two small notes should be added, concerning the type of data and the status of the “idea” as a continually intermediate result of the re-entrant SOM process. One should be aware that the “normal” input to natural associative systems consists of time series. Our brain is dealing with a manifold of series of events, which is mapped onto the internal processes, that is, onto another time-based structure. Prima facie, our brain is not dealing with tables. Yet, (virtual) tabular structures are implied by the process of propertization, which is an inevitable component of any kind of modeling. It is well known that it is time-series data and their modeling that give rise to the impression of causality. In the light of ideas qua re-entrant associativity, we can now easily understand the transition from networks of potential causal influences to the claim of “causality” as some kind of pure concept. Although the idea of causality (in the Newtonian sense) played an important role in the history of science, it is just that: a naive idealization.

The other note concerns the source of the data. If we consider re-entrant informational structures that are arranged across large “distances”, possibly with several intermediate transformative complexes (for which there are hints from neurobiology), we may understand that for a particular SOM (or SOM-like structure) the type of the source is completely opaque. To put it briefly, it does not matter for our proposed mechanism whether the data are sourced as empiric data from the external world, or as some kind of simulated, surrogate re-entrant data from within the system itself. In such wide-area, informationally re-entrant probabilistic networks we may expect a kind of runaway idealization. The question then is about the minimal size necessary for eliciting that effect. A nice corollary of this result is the insight that logistic networks, such as the internet or the telephone network, will NEVER start to think by themselves, as some still expect. Yet, since there are a lot of brains embedded as intermediate transforming entities in this deterministic cablework, we may indeed expect that the whole assembly is much more than could be achieved by a small group of humans living, say, around 1983. But that is not really a surprise.

Task 2: Ideas, applied

Ideas are an extremely important structural phenomenon, because they allow us to recognize things and to deal with tasks that we have never seen before. We may act adaptively before having encountered a situation that would directly resemble, as an equivalence class, any intensional description available so far.

Actually, it is not just one idea, it is a “system” of ideas that is needed for that. Some years ago, Douglas Hofstadter and his group3 devised a model system suitable for demonstrating exactly this: the application of ideas. They called the project (and the model system) Copycat.

We won’t discuss here how Copycat rules analogy-making by top-down ideas (we already introduced it elsewhere). We just want to note that the central “platonic” element in Copycat is a dynamic relational system of symmetry relations. Such symmetry relations are, for instance, “before”, “after”, “builds a group”, “is a triple”, etc. These relations represent different levels of abstraction, but that’s not important here. Much more important is the fact that the relations between these symmetry relations are dynamic and will adapt according to the situation at hand.

I think that these symmetry relations as conceived by the Fargonauts are on the same level as our ideas. The transition from ideas to symmetries is just a grammatological move.

The case of Biological Neural Systems

Re-entrance seems to be an important property of natural neural networks. Very early in the liaison of neurobiology and computer science, starting with Hebb in the late 1940s and continuing with Hopfield in the 1980s, recurrent networks have been attractive to researchers. Take a look at drawings like the following, created (!) by Ramón y Cajal [10] at the beginning of the 20th century.

Figure 2a-2c: Drawings by Ramón y Cajal, the Spanish neurobiologist. See also: History of Neuroscience. a: from a sparrow’s brain; b: motor cortex of the human brain; c: hypothalamus of the human brain.

Yet, Hebb, Hopfield and Elman got trapped by the (necessary) idealization of Cajal’s drawings. Cajal’s interest was to establish and prove the “neuron hypothesis”, i.e. that brains work on the basis of neurons. From Cajal’s drawings to the claim that biological neuronal structures could be represented by cybernetic systems or finite state machines is, honestly, a breakneck leap, or, likewise, ideology.

Figure 3: Structure of an Elman Network; obviously, Elman was seriously affected by idealization (click for higher resolution).

Thus, we propose to distinguish between re-entrant and recurrent networks. While the latter are directly wired onto themselves in a deterministic manner, that is, the self-reference is modeled on the morphological level, the former are modeled on the informational level. Since it is simply impossible for a cybernetic structure to reflect neuromorphological plasticity and change, the informational approach is much more appropriate for modeling large assemblies of individual “neuronal” items (cf. [11]).

Nevertheless, the principle of re-entrance remains a very important one. It is a structure that is known to lead to contrast enhancement and to second-order memory effects. It is also a cornerstone in the theory (or theories) proposed by Gerald Edelman, who is probably much less affected by cybernetics (e.g. [12]) than the authors cited above. Edelman always conceived of the brain-mind as something like an abstract informational population; he was even the first to adopt evolutionary selection processes (Darwinian and others) to describe the dynamics in the brain-mind.

Conclusion: Machines and Choreostemic Drift

Our point of departure was to distinguish between ideas and concepts. Their difference becomes visible if we compare them, for instance, with regard to their relation to (abstract) models. It turns out that ideas can be conceived as a more or less stable immaterial entity (though not a “state”) of self-referential processes involving self-organizing maps and the simulated surrogates of intensional descriptions. Concepts, on the other hand, are described as a transcendental vector in choreostemic processes. Consequently, we may propose only for ideas that we can implement their conditions and mechanisms, while concepts can’t be implemented. It is beyond the expressibility of any technique to talk about the conditions for their actualization. Hence, the issue of “concept” has been postponed to a forthcoming chapter.

Ideas can be conceived as the effect of putting a SOM into a re-entrant context, through which the SOM develops a system of categories beyond simple intensions. These categories are no longer justified by empirical references, at least not in the strong sense. Hence, ideas can also be characterized as clearly distinct from models or schemata. Both models and schemata involve classification, which, due to the dissolved bonds to empiric data, cannot be regarded as a sufficient component of ideas. We would like to suggest the described mechanism as a candidate principle for the development of ideas. We think that the simulated data in the re-entrant SOM process should be distinguished from data in contexts that are characterized by the measurement of “external” objects, even though their digestion by the SOM mechanism itself remains the same.

From what has been said it is also clear that the capability of deriving ideas alone is still quite close to the material arrangement of a body, whether thought of as biological wetware or as software. Therefore, we still haven’t reached a state where we can talk about epistemic affairs. What we need is the possibility of expressing the abstract conditions of the episteme.

Of course, what we have compiled here exceeds by far any other approach, and additionally we think that it could serve as a natural complement to the work of Douglas Hofstadter. In his work, Hofstadter had to implement the platonic heavens of his machine manually, and even for the small domain he’d chosen it was tedious work. Here we proposed the possibility of a seamless transition from the world of associative mechanisms like the SOM to the world of platonic Copycats, where “seamless” refers to “implementable”.

Yet, what is really interesting is the form of choreostemic movement or drift, resulting from a particular configuration of the dynamics in systems of ideas. But this is another story, perhaps related to Felix Guattari’s principle of the “machinic”, and it definitely can’t be implemented any more.


1. We did so in the recent chapter about data and their transformation; but also see the section “Overall Organization” in Technical Aspects of Modeling.

2. You really should be aware that this trace we try to put forward here does not come close to even a coarse outline of all of the relevant issues.

3. They called themselves the “Fargonauts”, FARG being the acronym for “Fluid Analogy Research Group”.

4. Elman networks are an attempt to simulate neuronal networks on the level of neurons. We rate such approaches as fundamentally misguided, deeply inspired by cybernetics [9], because they consider noise a disturbance. Actually, they are equivalent to finite state machines. It is somewhat ridiculous to consider a finite state machine a model for learning “networks”. SOMs, in contrast, especially if used in architectures like ours, are fundamentally probabilistic structures that could be regarded as “feeding on noise.” Elman networks, and their predecessor, the Hopfield network, are not very useful, due to problems in scalability and, more importantly, in stability.

  • [1] Douglas R. Hofstadter, Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. Basic Books, New York 1996. p. 365.
  • [2] Gernot Böhme, “Platon der Empiriker.” in: Gernot Böhme, Dieter Mersch, Gregor Schiemann (eds.), Platon im nachmetaphysischen Zeitalter. Wissenschaftliche Buchgesellschaft, Darmstadt 2006.
  • [3] Marc Rölli (ed.), Ereignis auf Französisch: Von Bergson bis Deleuze. Fin, Frankfurt 2004.
  • [4] Gilles Deleuze, Difference and Repetition. 1968.
  • [5] Augustine, Confessions, Book 11 CHAP. XIV.
  • [6] Mandic, D. & Chambers, J. (2001). Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability. Wiley.
  • [7] J.L. Elman, (1990). Finding Structure in Time. Cognitive Science 14 (2): 179–211.
  • [8] Raul Rojas, Neural Networks: A Systematic Introduction. Springer, Berlin 1996. (@google books)
  • [9] Holk Cruse, Neural Networks As Cybernetic Systems: Science Briefings, 3rd edition. Thieme, Stuttgart 2007.
  • [10] Santiago Ramón y Cajal, Texture of the Nervous System of Man and the Vertebrates, Volume I. Springer, Wien 1999. Edited and translated by Pedro Pasik & Tauba Pasik. see google books
  • [11] Florence Levy, Peter R. Krebs (2006), Cortical-Subcortical Re-Entrant Circuits and Recurrent Behaviour. Aust N Z J Psychiatry September 2006 vol. 40 no. 9 752-758.
    doi: 10.1080/j.1440-1614.2006.01879
  • [12] Gerald Edelman: “From Brain Dynamics to Consciousness: A Prelude to the Future of Brain-Based Devices“, Video, IBM Lecture on Cognitive Computing, June 2006.


The Miracle of Comparing

November 11, 2011 § Leave a comment

Miracles denote the incomparable.

Since comparing is so abundant in our thinking that we cannot even think of any activity devoid of it, miracles also signify zones that are truly non-cognitive. Curiously, not believing in the existence of such non-cognitive zones is called agnosticism.

Actually, divine revelations seem to be the only cognitive acts that are not based on comparisons. In such events, we directly find some entity in the world or in our mind, notably without cause. We simply and suddenly can look at it, humbly. In some way, the only outside of comparison is the miracle, which is not really an outside, since it is outside of everything. Thinking and comparing do not have a proper neighbor, to use a Wittgensteinian concept. Thus we could conclude that it is not really possible to talk about them. Without any reasonable comparable, thinking itself is outside of anything else. We can only look at it, silently. Obviously, there is a certain resemblance to the event of a miracle. Maybe that is the reason why there are so many misunderstandings about thinking.

Of course, we may build models about thinking. Yet, this does not change very much, even if we apply modern means in our research. On the other hand, this does not reduce the interest of the topic either. Astonishingly, even renowned researchers in the cognitive sciences such as Robert Goldstone (Indiana) feel inclined to justify their engagement with the issue of comparison [1].

It might not be immediately clear why the topic of comparison warrants a whole chapter in a book on human thinking. […] In fact, comparison is one of the most integral components of human thought. Along with the related construct of similarity, comparison plays a crucial role in almost everything that we do.

We fully agree with these introductory words, but Goldstone, like so many researchers in the cognitive sciences, proceeds with remarkable trivia.

Furthermore, comparison itself is a powerful cognitive tool – in addition to its supporting role in other mental processes, research has demonstrated that the simple act of comparing two things can produce important changes in our knowledge.

In contrast to Goldstone, we will also separate the concept of comparison completely from that of similarity. In his article, Goldstone discusses comparison only through the concept of similarity.

Thinking is closely related to consciousness, hence to self-consciousness, as the German philosopher Manfred Frank has argued [2]. Probably for that reason humans avoid calling mental processes in animals "thinking." Anyway, for us humans thinking is so natural that we usually do not bother with the operation of comparing, that is, with the structure of that operation. Of course, there is rhetoric, which teaches us the different ways of comparing things with one another. Yet, this engagement is outside of the operation; it concerns only its application. The same is true for mathematics, where a particular way of comparing is always presupposed. In contrast, we are interested in the structure of the operation of comparing itself.

Well, our thesis here does not, of course, follow the route of the miraculous, at least not without a better specification of the situation. Miracles are a rather unsuitable thing for a (software) programmer to rely on.

What we will try here is to clarify the anatomy of comparisons. Indeed, we distinguish at least three basic types.

The Anatomy of Comparison: Basic Ingredients

Comparisons are operations. They imply some kind of matter that provides a certain storage or memory capacity. Without such storage/memory-matter, no comparison is possible. It is thus not perfectly correct to speak about the anatomy of comparisons, since any type of comparison is a process in time.

Let us first consider the basic case of a pairwise comparison. Without loss of generality, we take an apple (A) and an orange (B) that we are going to compare. What are, in a first account, the basic elements of such a comparison?

As the proverb about apples and oranges intends to convey, what cannot be brought onto common ground cannot be compared. In order to compare two entities A and B, we have to assign properties that can be applied to both of them. Take "COLOR" as an example here. So, first we assign properties…


In a second step we select just those properties that shall (!) be applied to both. There is no necessity that a particular feature is common to both items. Actually, the determination of those sets is a free parameter in modeling. Yet, here we are just interested in the actuality of two such aligned sets ("vectors").


Those selected properties represent subsets of their parent sets. Given those two subsets a(j) and b(m), we can now align the vectors of properties. In data analysis, such vectors are often called feature vectors.


Any comparison then refers to those aligned feature vectors. The more features in such a vector, the more comparisons are possible.
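The alignment step described above can be sketched in a few lines. This is a minimal illustration, not part of any particular system; the entities, property names, and values are all hypothetical.

```python
# Sketch: aligning two property sets into comparable feature vectors.
# Entities, property names, and values are hypothetical illustrations.

def align(props_a, props_b):
    """Select the shared properties and return two aligned feature vectors."""
    # The selection of shared properties is a free parameter of modeling;
    # here we simply take the intersection, in a fixed (sorted) order.
    shared = sorted(set(props_a) & set(props_b))
    vec_a = [props_a[p] for p in shared]
    vec_b = [props_b[p] for p in shared]
    return shared, vec_a, vec_b

apple  = {"color": 0.8, "weight": 150.0, "sweetness": 0.6, "peelable": 0.2}
orange = {"color": 0.3, "weight": 180.0, "sweetness": 0.5, "segments": 10.0}

features, a, b = align(apple, orange)
print(features)  # ['color', 'sweetness', 'weight']
print(a, b)      # [0.8, 0.6, 150.0] [0.3, 0.5, 180.0]
```

Note that "peelable" and "segments" drop out: only the aligned subsets enter the comparison, exactly as in the figure above.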

In diagnostic/predictive modeling a particular class of operations is applied to such feature vectors, mainly in order to determine the similarity, or its inverse, the so-called distance. We will see in the chapter about similarity what “distance” actually means and why it is deeply problematic to apply this concept as an operationalization of similarity in comparisons.

Before we start with the typology of comparison we should note that the "features" can be very abstract, depending on the actual items to be compared and their abstractness. Features could be taken from physical measurement, from information-theoretic considerations, or from any kind of transformation of "initial" features. Regardless of the actual setting, there will always be some criteria, even if we use an abstract similarity measure based on information theory, as proposed e.g. by Lin [3].

A second note concerns the concept of the feature vector and its generality. The feature vector by itself does not imply a particular similarity measure, e.g. a distance measure or a cosine similarity. In other words, it does not imply a particular space of comparison. Only the interaction of the similarity functional with the feature vector creates such a space. Related to that, we also have to emphasize again that the sets a(j) and b(m) need not be completely identical. There might be features that are considered indispensable for the description of the item at hand. Such reasons are, however, external to a particular comparison, opening just a further level. In a larger context we have to expect feature vectors that do not match completely, since similarity includes the notion of commonality. Measuring commonality requires differences, hence differences in the feature sets. In turn this requires a measure for the (dimensional) difference of potential solution spaces.
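The claim that the similarity functional, not the feature vector alone, creates the space of comparison can be demonstrated concretely. In the following sketch (the vectors are made up for illustration), the same three feature vectors are ordered differently by Euclidean distance and by cosine similarity:

```python
# Sketch: the same feature vectors under two similarity functionals.
# The vectors are hypothetical; the point is that the functional,
# not the vector alone, determines the "space of comparison".
import math

def euclidean(a, b):
    """Euclidean distance (a dissimilarity: smaller means more similar)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine(a, b):
    """Cosine similarity (larger means more similar)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

u, v, w = [1.0, 0.0], [10.0, 0.0], [0.0, 1.0]

# Under Euclidean distance, u is closer to w than to v...
assert euclidean(u, w) < euclidean(u, v)
# ...but under cosine similarity, u is maximally similar to v, not to w.
assert cosine(u, v) > cosine(u, w)
```

The two functionals disagree about which neighbor of u is the "near" one; neither judgment is contained in the feature vectors themselves.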

We will discuss all these issues in more detail in the chapter about similarity. There we will argue that distinctions between different kinds of similarity, like the one proposed by the cognitive psychologist Goldstone in [1] into geometric, feature-based, alignment-based, and transformational similarity, are not appropriate.

Yet here, for now, we will focus on the internal structure of comparison as an event, or an operation.

Three Types of Comparison

Type I: Comparison within a closed Framework

Type I is by far the simplest of the three types. It can be represented by finite algorithms. Hence, type I is the type of comparison that is used (almost exclusively) in computer science, e.g. in advanced optimization techniques or in data mining.

In this type the observables are data ("givens"), the entities as well as their features. The salient example is provided by the database. Hence, the space of potential solutions is well-defined, although its indeterminacy and its vast size may render it a pseudo-open space.

We start with basic propertization, exactly as in the general case shown above in Fig.1:


From these, feature vectors are set up into a table-like structure for column-wise comparison. This comparison is organized as a "similarity function," which in turn is an operationalization of the concept of similarity. Note that this structure separates the concepts of "comparison" and "similarity," the latter being set as a part of the former. It is quite important not to conflate comparison and (the operationalization of) similarity.


Based on the result of applying the similarity function to the table of feature vectors, a particular proposal can be made about the relations between A and B.


The diagram clearly shows that the proposal can't be conceived as something "objective" that could be extracted from an "outside" reality. Such a perspective is widely believed to be appropriate in the field of so-called "data mining." Obviously, proposals are heavily dependent on the feature vectors and the similarity functional. Even in predictive modeling, where the predictive accuracy can be taken as a corrective or defense against proliferating relativism, the proposal is still dependent on the selected features. As many different selections allow for an almost equal predictive accuracy, there is no objectivity here either.

From this simple case we can take the important lesson that there is no such thing as a "direct" comparison, hence also no "built-in" objectivity, not even on these low levels of cognition.


We also can see that the structure of comparison comprises three levels of abstraction. This structure further applies to simple translations and to topics like transdisciplinary discourse, i.e. the task of relating domain-specific vocabularies, where each of them is supposed to consist of well-defined singular terms.

The features that are ultimately used as input to the similarity function are often called "model parameters," or also "dimensions." In philosophical terms, we suggest that they relate closely to Spinoza's Common Terms.


Type II: “Inverted” goal-directed Comparison

The second type is quite different from type I. Here we do not start with data that are completely defined down to the level of properties. Instead, the starting point for processes of type II is determined by the proposal and by rather coarse entities, or vaguely described contexts. Hence, it is a kind of inverted comparison.

The diagram for the first step doesn't look very spectacular; yet, in some sense it is quite dramatic in its emptiness.


The second step is based on habits or experience; abstract properties and some similarity function are selected before the comparison.


These are then applied to the coarse or vaguely given entities. By means of this top-down projection a relation between A and B appears. Only subsequent to establishing that relation can we start with a forward comparison!


The final step then associates the initially vague observations, now constructively related, to the initial proposal.


As we already said, in this process the properties are taken from experience before they are projected onto amorphous observations. In Spinoza's terms, those properties are "abstract terms." Essentially, the projection can also be conceived as a construction. In the structure shown above, fiction and optimization are co-dependent and co-extensive. We also could simply call it "invention," either of a solution or of a problematics, just as you prefer.

The same process of "optimized fiction," or "fictional optimization," is often mistaken for "induction from empirical data." Using the schemes of figure 3 we can easily understand why this claim is a misunderstanding. Actually, such an induction is of course not possible, since there is no necessity in any of the steps.

A note about the role of experience: the selection of "abstract terms," viz. "suggested properties," is itself based on models, of course. Yet, these models are far outside of the context induced by the observables A and B.

We should note that type-I and type-II comparisons are usually used in constant interplay, resulting in an open, complex dynamics. This interplay creates a new quality, as Goldstone remarks in [1] in his final conclusion:

When we compare entities, our understanding of the entities changes, and this may turn out to be a far more important consequence of comparison than simply deriving an assessment of similarity.

Goldstone fails, however, to separate similarity and comparison in an appropriate manner. Consequently, he also fails to put categorization into the right place:

Despite the growing body of evidence that similarity comparisons do not always track categorization decisions, there are still some reasons to be sanguine about the continued explanatory relevance of similarity. Categorization itself may not be completely flexible.

Such statements are almost awful in the conceptual mess they produce. Our impression is that Goldstone (like many others) does not have at his disposal the concept that we call the transition from probabilistic description to propositional representation.

Type III: Comparison of Populations

The third type, finally, describes the process of comparison on the level of populations. The most striking difference from types I and II concerns the fact that there is no explicitly given proposal that could be assigned to some kind of input data. Instead, the only visible goal is (long-term) stability. We could say that the comparison is an open comparison.

Let us start with the basic structure that we already know as the result of types I and II.


Now we introduce two significant elements, population and, as a consequence, time. Indeed, these two elements mark a very important step, not least with regard to philosophical concepts like idealism. For instance, Hegel's whole system suffers from its utter neglect of population and (individual) time.

By introducing populations we also introduce repetition and signal horizons. Yet, even more importantly, we dissolve the objecthood that allowed us to denote the two entities as A and B, respectively, into a probabilistic representation. In other words, we replace (crisp) symbols by (open, borderless) distributions. In natural evolution, the logical sequence is just the other way round: there, we start with populations as a matter of fact.

Comparing A and B then means comparing two populations A(n) and B(m). Instead of objects, singulars or concepts we talk about types, or species. Comparing populations also means repeating (denoted by "::n") the comparison between approximate instances aj of A(n) and approximate instances bk of B(m).

It is quite obvious that in real situations we never compare uniquely identifiable, a priori existing objects that are completely described. We rather always have to deal with populations of them, not least due to the inevitability of modeling, even with regard to simple perception.


Comparing two populations does not result in just one proposal; instead we are faced with a lot of different ones. Even more, the set of achieved proposals cannot be expected to be constant in its actuality, since not all proposals arrive at the same time. We then could try to reduce this manifoldness by applying a model, that is, by comparing proposals. Yet, doing so we are faced with both a certain kind of empirical underdetermination and a conceptual indeterminacy.
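The repetition "::n" and the resulting manifold of proposals can be sketched as follows. Everything here is a toy assumption for illustration: the population parameters, the proposal rule, and the threshold are invented, and the "proposal" is reduced to a crude same/different verdict.

```python
# Sketch: comparing two populations yields a frequency distribution of
# proposals, not a single proposal. All parameters are hypothetical.
import random

random.seed(42)

def proposal(a, b, threshold=1.0):
    """A toy proposal rule: 'same' if two instances are close, else 'different'."""
    return "same" if abs(a - b) < threshold else "different"

# Two populations A(n), B(m): instances scatter around a type-specific value.
A = [random.gauss(0.0, 0.7) for _ in range(200)]
B = [random.gauss(1.0, 0.7) for _ in range(200)]

# Repeat the comparison across approximate instances (the "::n" above).
counts = {"same": 0, "different": 0}
for a, b in zip(A, B):
    counts[proposal(a, b)] += 1

print(counts)  # a distribution over proposals, not one proposal
```

Because the instances are only approximate realizations of their types, the repeated comparison produces contradictory proposals whose distribution, not any single member, becomes the new subject of comparison.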

It is this indeterminacy that causes a qualitative shift in the whole game. It is no longer the proposals themselves that are the subject of the comparison. The manifold of proposals lifts us to the level of the frequency distribution over all proposals. Since comparing proposals cannot refer to other information than that which is present in the population, deciding about proposals turns into an investigation of the influence of the variation within A(n) or B(m), or the influence of the similarity functional.

The comparison of populations obviously introduces an element of self-referentiality. There are two consequences of this: first, it introduces an undecidability; secondly, comparing populations induces an anisotropy, a symmetry break within them. Compare two populations and you'll get three. Since this changes the input for the comparison itself, the process either develops the perlocutionary aspect of pragmatic stability as a content-free intention, or the whole game disappears altogether.

This pragmatics of induced stability can be described by using the concept of fitness landscape.


The figure above could be conceived as a particular segment of evolutionary processes. For us, it is so natural that things are connected and related to each other that we can hardly imagine a different picture. Yet, we could invert the saying that evolution is based on competition, or competitive selection: any population that does not engage in the evolutionary comparison game will not develop the pragmatics of stability and hence will disappear sooner rather than later.

Practical Use

The three types of comparison that we distinguished above are abstract structures. In a practical application, e.g. in modeling, some issues have to be considered.

The most important misunderstanding would be to apply those abstract forms directly as practical means. Doing so, one would commit the mistake not only of equating the local and the global, but also of claiming a necessity for this equality.

Above we introduced the principle of aligned feature lists that need to be common to both instances that we are going to compare. Note that we are comparing only two (2) of them! From such a proposal one cannot conclude that all items out of a set of available observations are necessarily compared using exactly the same list of features in order to arrive at a particular classification. As Wittgenstein put it, "there is no property all games have in common which distinguishes them from all the activities which are not games" (cited after Putnam [4, p.11]). This, of course, does not exclude a field of overlapping feature lists suitable for deciding whether something is a game or not. Of course, no unique result is to be expected. The crispness of the result of such a comparison depends on the purpose and its operationalization in modeling, mainly through the choice of the similarity measure.

Yet, such uniqueness is never to be expected unless we enforce it, e.g. by some sort of formal axiomatics as in mathematics, or mathematical statistics, or, not so different from that, by organizational constraints. If our attempt to create a model for a particular classification task requires a lot of features, such uniqueness is excluded even if we used the same list of features for creating all observations. On the other hand, this does not mean that we could not find a suitable result.
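The point about a field of overlapping feature lists can be made concrete with a toy example in the spirit of Wittgenstein's remark. The items and feature names below are invented for illustration; the point is only the structure: no single feature is shared by all "games," yet most pairs still have a non-empty aligned feature list to compare on.

```python
# Sketch: overlapping feature lists without one feature common to all items.
# Items and features are hypothetical illustrations of family resemblance.

games = {
    "chess":     {"competitive", "rules", "board"},
    "solitaire": {"rules", "board", "luck"},
    "ring-a-ring-o'-roses": {"luck", "amusement"},
    "tennis":    {"competitive", "rules", "amusement"},
}

# No property is shared by all games...
common_to_all = set.intersection(*games.values())
print(common_to_all)  # set()

# ...yet most pairs of games still have an aligned (non-empty) feature list.
names = list(games)
pairs_with_overlap = sum(
    1
    for i in range(len(names))
    for j in range(i + 1, len(names))
    if games[names[i]] & games[names[j]]
)
print(pairs_with_overlap)  # 5 of the 6 possible pairs overlap
```

Classification then proceeds along this chain of partial overlaps rather than along one universal feature list, which is exactly why no unique result should be expected.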

The Differential and Ortho-Regulation

Finally, we arrive at the differential as the most sophisticated form of comparison. We consider the differential to be the major element of abstract thinking. Here, we can discuss neither the roots of the concept nor the elaborate philosophy that Gilles Deleuze developed in his book Difference and Repetition [5]. We just would like to emphasize that it has influenced much of this work.

The differential is not a tool to compare given observables in order to derive a proposal about those observables. Instead, when playing the "Differential Game" we are interested in the potentially achievable proposals. Of course, we also start with a population of observations. Yet those observations are not in any pre-configured, or observable, relationship. The diagrams that we will develop in the series below look very different from those we found for the comparisons.

The starting point: given a set of observations and the respective propertization that we derived according to intellectual habits, our forms of intuition, we may ask which proposals, statements or solutions are possible.


The first step is to replace immutable properties by a more dynamic entity, a procedure. This procedure could be taken as a tool to create a particular partition of the observations {O}, or as a dynamic representation of possible equivalence classes on {O}. We also could call it a model. Note that models always also imply a usage, or purpose.

The interesting thing now is that procedures can be conceived as consisting of rules and their parameters, or, in the language of mathematical category theory, of objects and their transformations. The parameters are much like variables, but from the perspective of any particular partitioning, or, say, instance, the parameters are constants. This scheme was originally invented, or at least written down, by Lagrange in the late 18th century. Most remarkably, he also observed that this scheme can be cascaded: the parameters on the abstract level can be taken as new, quasi-empirical "observations," and so on.


The important part of this scheme is indeed the free parameters, which are, we have to remember, also constants. If we now play around with these free parameters, we can construct different partitions of {O}; but this also means that by varying the parameters we can create proposals beyond such a partitioning, or solutions regarding some request (again upon {O}).
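A minimal sketch of a procedure as rules plus free parameters (the rule, threshold values, and observations are all hypothetical): for any particular partition the parameter is a constant, yet varying it creates different partitions of the same {O}.

```python
# Sketch: a "procedure" conceived as a rule plus a free parameter.
# Rule, parameter values, and observations {O} are hypothetical.

def partition(observations, threshold):
    """Rule: split {O} into the equivalence classes 'low'/'high' at a threshold."""
    classes = {"low": [], "high": []}
    for o in observations:
        classes["low" if o < threshold else "high"].append(o)
    return classes

O = [0.2, 0.9, 1.4, 2.8, 3.1]

# Two settings of the same free parameter, hence two different partitions.
# Within each call, the threshold behaves as a constant.
p1 = partition(O, threshold=1.0)
p2 = partition(O, threshold=3.0)
print(p1)  # {'low': [0.2, 0.9], 'high': [1.4, 2.8, 3.1]}
print(p2)  # {'low': [0.2, 0.9, 1.4, 2.8], 'high': [3.1]}
```

The Lagrangian cascade then treats the parameter values themselves as quasi-empirical "observations" on the next level, to which further rules may be applied.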


Of course, what now becomes possible is the simulation game. Which statement we are actually going to construct is again a matter of habits. Let us call these the forms of construction. Astonishingly, this structure has been overlooked completely so far (with one exception); it also went unnoticed by Immanuel Kant and all of his fellow philosophers.


Given this scheme we would like to emphasize that there is no direct path from observations to statements. Hence, habits and conventions become active at two different positions in the process that allows us to speak about an observation. This is already true for the simplest judgments about {O}, indicated by S(0). Again, this has not been recognized by epistemology to date.


Finally, we apply the Lagrangian insight to our scheme. Forms of intuition as well as forms of construction are, of course, not just constants. They are regulated, too. This results in the induction of a cascade. Since this regulation of mental forms (of intuition or construction, respectively) does not refer to {O}, but instead to the abstraction itself (mainly the selection of the parameters), it appears as if these secondary mental forms are in a different dimension, not visible within the underlying activity of comparing. Thus we call it ortho-regulation.


It is actually quite surprising that there are whole philosophical schools that deny the (cascaded) vertical dimension of these processes. One example is provided by Jacques Derrida's work. At many places throughout his writings he comes up with rather weird ideas to preserve this mental flatness. One of them is the infamous "uninterpretable trace" (German: Spur).

A common misunderstanding, committed by many scientists influenced by positivism (and who of them is not?), concerns the alternatives S(k). Determinists claim that the step from abstraction to the solution is unique, or at least determined as a well-defined alternative from a finite set. In doing so, they implicitly deny the necessity of orthoregulation; hence they also deny any form of internal freedom as well as the importance of conventions (see below). This paves the way for the nonsensical conviction that the choice between the S(k) can be computed algorithmically (as a Turing machine computes). The schemes above clearly show that such a conception must be considered seriously deficient.

The following scheme may be taken as an abbreviated form for the phenomenon of (abstract) thinking.


Quite importantly, the differential is isomorphic to the metaphor. Actually, we are convinced that metaphors are not a linguistic phenomenon. Metaphors are the direct consequence of necessary structures in thinking and modeling.

Architectonics of Comparison

Comparisons belong, together with the classifications that are based on them, to the basic elements of cognition and higher-level mental processes. Thus they may be taken as a well-justified starting point for any consideration of the conditions for epistemic processes.

Astonishingly, such considerations are completely absent in the sciences as well as in the humanities (and their mixed forms). The only vaguely related references that can be found, and there are really only very few of them, come from the fields of (comparative) linguistics or literary studies. In linguistics it is not the abstract structure of the operation of comparison that is in focus [6]; there, comparison is taken as a primitive and then applied to linguistic structures. One of the research paradigms is given by the case of adjectives like smaller, higher etc. What is studied is the structure of the application of the operation of comparing, not the operation of comparing itself. In comparative literature, however, an interesting note can be found. In his inquiry into the writings of Jean Paul, a German Romanticist, Coker [7, p.397] distinguishes different types of comparisons, at least implicitly, and relates a particular one directly to imagination, a result that we can confirm through our formal analysis:

“The imagination is a structure of comparison through which desire can realize its infinite nature, always transcending finite givens.”

Besides this really rare occurrence, however, comparisons are always taken in their most simple and reduced form, the comparison along a numerical scale. This type is even more primitive than our simplest type shown in Fig.2. It is true that all of our three types are ultimately based on the primitive type; yet, considering normal thinking, the reduction to the primitive case is inappropriate. Language is full of comparisons and comparative moves, for which we use the name metaphor. For more than 100 years now linguists have been reasoning about metaphors in largely inappropriate ways, precisely because they impose a reduced concept of comparison.

A reference to a more elaborate and rich concept of comparison is missing completely, in the cognitive sciences as well as in computer science. Even the field of metaphorology did not contribute a clear structural view. So we conclude that the problematics of comparison seems to play a role only in improper proverbs about apples and oranges, or apples and pears.

Hence, the two cornerstones of any type of comparison remained undetected: propertization and ortho-regulation. We propose that these are the elements of an architectonics of comparison. Propertization will be discussed in the chapter about modeling, so here we can turn to the phenomenon of ortho-regulation.


The concept of ortho-regulation becomes visible only if we take two approaches seriously: rule-following (Wittgenstein) and the differential (Deleuze). The first step is the discovery of the Forms of Construction. In a second step, symmetry considerations lead us to the cascaded view.

The notion of "forms of construction" may appear trivial and well-known. Yet, it is usually applied as a concept used while thinking, in the sense of a particular way to construct something, not as a basic concept constitutive for thinking (e.g. Quine in [8], or Sandbothe in [9]); for example, one speaks about "forms of construction of reality." In contrast to that, we consider "forms of construction" here as a transcendental principle of thinking.

Orthoregulations are rules that organize rules. Wittgenstein dismissed such an attempt, since he feared an infinite regress, and this has remained the received view ever since. Nevertheless, we think that this dismissal was devised too hastily. There is no threat of an infinite regress, because the rules on the level of orthoregulations are neither based on nor directed towards observations {O} about facts. The subject of ortho-regulative rules is rules. In other words, their empirical basis is not only completely different, but also much smaller and much more difficult to learn. Ortho-regulative rules cannot be demonstrated as readily as, say, how to follow an instruction.

The cascade is thus not an infinite one; rather, it presumably stops quite soon. To derive rules Rx about rules from a more basic body of rules Rb, you need a lot of instances or observations of Rb. There are fewer hawks than mice. Proceeding along the chain of rules about rules, there are soon not enough observations available anymore to derive further regularities and rules. We agree with Wittgenstein's claim that rule-following must come to an end, yet for a different reason. Accordingly, the stopping point where rule-following becomes impossible is not the fear of the philosopher; it is a point deeply buried in our capability to think, precisely because thinking is a bodily, hence empirical, activity. Note that "our" here means "human stuffed with a brain." This point is a very interesting one, which quite unfortunately we cannot investigate here. As a last remark on the subject we would like to point to Leibniz's idea of the monad and the associated concept of absolute interiority.

Another discussion we cannot follow here is, of course, Kant's notion of the form of intuition. We are simply not fluent enough to develop serious arguments from a Kantian perspective. We find it remarkable, however, that he missed the counterpart of the rising branch of abstraction. In some way, we guess, this could be the reason for his prevailing difficulties with the (ethical) notion of freedom, which Kant considered to be in an antinomic relation to being determined [e.g. 10]. His categorical imperative is a weak argument, since Kant had to introduce it much like an axiom. He was quite desperate about that, as he expressed in his last writing [11]. His argument that the capability to choose one's own determination reflects or implies freedom is at least incomplete, and it does not work anymore in our times, in which we know crazy things about the brain. Our analysis shows that this antinomic contrast is misplaced.

Instead, freedom arises inevitably with thinking itself, through the necessity of applying forms of construction. There is no necessity of any kind to choose a particular statement from all the potential alternatives S(k) (see Fig.5e). Note that this choice is therefore indeed the actualization of a potential. Furthermore, it is not only a creative act, though not without being bound to rules; it is also an act that cannot be completely determined by any kind of subsequent model. Hence, it is actions that introduce virtuality into the world, by virtue of creating statements in a non-predictable way. Saying non-predictable, one should not think that there could be some kind of measurement that would render this choice or creation predictable. It is non-predictable because it is not even in the space of predictable things.

Freedom is thus not an issue of quantum mechanics, as Kauffman tries so hard to argue [12]. It is also not an issue of human jurisdiction or any other concept of human society. Above all, freedom is nothing that could be created or prepared, as Peter Bieri [13] and other analytic philosophers believe. A reduction to the probabilistic nature of the world would be circular and the wrong level of description. In contrast to those proposals, we think that freedom is a necessary consequence of abstraction in thought. Since any kind of modeling that is not realized as body (think of the adaptive behavior of amoebas or bacteria) implies abstraction, it makes perfect sense even in philosophical terms to say that even animals have a free will. Everyone who lives with a cat knows about that. We can also see that freedom is directly related to intensity in the cognitive domain. Animals have free will as long as they are performing abstract modeling. No thoughts, so no freedom; no expression of will, so no cognitive capacity. Being a machine (whether made from silicon or from flesh), so no will and no cognitive capacity.

While forms of intuition can be realized quite easily on a computer as a machine-learning algorithm, this is not possible for forms of construction. It is the inherently limited cascade of ortho-regulations on the one side, and the import of conventions that creates a double articulation for rule-following on the other, that point towards a transcendental singularity. It is of course not possible to speak formally or clearly about this singularity. Maybe we could say that it is a zone where the being's exteriority (conventions) directly interferes with its interiority (the associative power of the body). It feels a bit like a wormhole in space, since it connects entities that normally lie far apart from each other. We could also call it a miracle; no problem with that.

Fortunately, there is also a perspective closer to application. More profanely, we could simply say (staying near the surface of the story) that freedom exists because brains form "minds" in a community, where those "minds" need to be able to step onto the Lagrangian path of abstraction. We do not need the concept of will to create freedom; it is just the other way round.

Consequences for Epistemology

Orthoregulation and the underlying forms of construction are probably among the most important concepts for a proper formulation of epistemology. Without the capability for ortho-regulation we will not find autonomy. A free-ranging machine is of course not an autonomous being, even if it "develops" "own" "decisions" when put into a competitive swarm of similar entities.

The concept of ortho-regulation throws some light onto our path towards machine-based epistemology. Last but not least, it is a strong argument for a growing self-referential system of associative entities that are part of a human community.

This article was first published on 11/11/2011; the last revision is from 28/12/2011.

  • [1] Robert L. Goldstone, Sam Day, Ji Y. Son, "Comparison", in: Britt Glatzeder, Vinod Goel, Albrecht von Müller (eds.), Towards a Theory of Thinking – Building Blocks for a Conceptual Framework. Springer, New York, pp. 103-122.
  • [2] Manfred Frank, Wege aus dem Deutschen Idealismus.
  • [3] Dekang Lin, "An Information-Theoretic Definition of Similarity", in: Proceedings of the 15th International Conference on Machine Learning (ICML), 1998, pp. 296-304.
  • [4] Hilary Putnam, Renewing Philosophy. 1992.
  • [5] Gilles Deleuze, Difference and Repetition. Continuum Books, London/New York 1994 [1968].
  • [6] Scott Fults, The Structure of Comparison: An Investigation of Gradable Adjectives. Diss., University of Maryland, 2006.
  • [7] William N. Coker (2009), "Narratives of Emergence: Jean Paul on the Inner Life", Eighteenth-Century Fiction, Vol. 21, Iss. 3, Article 5.
  • [8] W.V.O. Quine, "Two Dogmas of Empiricism".
  • [9] Mike Sandbothe (1998), "The Transversal Logic of the World Wide Web", 11th Annual Computers and Philosophy Conference, Pittsburgh (PA), August 1998; available online.
  • [10] Rudolf Eisler, "Freiheit des Willens", Wörterbuch der philosophischen Begriffe, 1904; available online.
  • [11] Immanuel Kant, Zum Ewigen Frieden.
  • [12] Stuart A. Kauffman (2009), "Five Problems in the Philosophy of Mind"; available online.
  • [13] Peter Bieri, Das Handwerk der Freiheit. Hanser, 2001.
