Complexity

January 15, 2012

The great antipode, if not opponent, to rationality is, by a centuries-old European declaration, complexity.

For a long time this adverse relationship was only implicitly given. The rationalist framework was set up as early as the 14th century. 200 years later, everything had to be a machine in order to celebrate God. Things did not change much in this relationship when the age of bars (mechanics, kinetics) grew into the age of machines (dynamics). Quite the opposite: control was the declared goal, an attitude that merged into the discovery of the state of information. While the 19th century invented the modern versions of information, statistics, and control technologies, at least in precursory form, the 20th century overgeneralized them and applied those concepts everywhere, up to their drastic abuse in the large bureaucracies and the secret services.

Complexity was finally marked as the primary enemy of rationalism by the so-called system theoretician Luhmann, even as late as 1987 [1]. We cite this enigmatically dualistic statement from Eckardt [2] (p. 132):

[Translation, starting with the citation in the first paragraph: “We shall call a contiguous set of elements complex if, due to immanent limitations of the elements’ capacity for establishing links, it is no longer possible for every element to be connected to every other one.” […] Luhmann shifted complexity fundamentally into the dimension of the quantifiable, where he looks for a logical division into sets and elements; hybridizations are not allowed in this dualistic conception of complexity.]

Even more than that: Luhmann merely describes the breaking of idealistic symmetry, but neither the mechanisms nor the consequences of it. Even this aspect of broken symmetry remains opaque to him, since Luhmann, as Eckardt correctly diagnoses, takes refuge in the reduction to quantization and implied control. Thus he empties the concept of complexity even before he drops it. There is not the slightest hint of the qualitative change that complex systems can provoke through their potential for inducing emergent traits. Neglecting qualitative change means enforcing a flattening, expunging any vertical layering or integration. Later we will see that this is typical behavior for modernists.

Eckardt continues to cite Luhmann (one has to know that Luhmann was a bureaucrat by training):

[Translation, second paragraph: “Complexity in this second sense is then a measure of indeterminacy, or of the lack of information. Complexity is, from this perspective, the information that is unavailable to the system, but needed by it for completely registering and describing its environment (environmental complexity) or itself (systemic complexity), respectively.”]

In the second statement he proposes that complexity denotes things a system cannot deal with because it cannot comprehend them. This conclusion was a necessary by-product of his so-called “systems theory,” which is nothing else than plain cybernetics. Actually, applying cybernetics in whatever form to social systems is categorical nonsense, irrespective of the “order” of the theory: second-order cybernetics (observing the observer) is just as unsuitable as first-order cybernetics for talking about the phenomena that result from self-referentiality. Both deny population effects, and both claim that meaning can be externalized.

Of course, such a “theoretical” stance is deeply inappropriate to deal with the realm of the social, or more generally with that of complexity.

Since Luhmann’s instantiation of system-theoretic nonsense, the concept of complexity has developed into three main strains. The first flavor, following Luhmann, became a widely accepted and elegant symbolic notion either for incomprehensibility and ignorance or for complicatedness. The second drastically and violently mistook it under the sign of cybernetics, which is more or less the complete “opposite” of complexity. Vrachliotis [3] cited Popper’s example of a swarm of midges as a paradigm of complexity in his article about the significance of complexity. Unfortunately for him, a swarm of midges is anything but complex; it is almost pure randomness in a weak field of attraction. We will see that attraction alone is not a sufficient condition. In the same vein, most of the research in physics is concerned with self-organization merely from the perspective of cybernetics, if at all. Mostly, it resorts to information-theoretic measures (shortest description length) in the attempt to quantify complexity. We will discuss this later. Here we just note that the product of complexity contains large parts about which one cannot speak in principle, whereas a description length even claims that the complex phenomenon is formalizable.

This usage is related to the notion of complexity in computer science, denoting how the demand for time and memory depends on the size of the problem, and in the information sciences. There, people talk about “statistical complexity,” which sounds much like “soft stones.” It would be funny if it had been meant as a joke. As we will see, the epistemological status of statistics is as incommensurable with complexity as that of cybernetics. The third strain finally reduces complexity to non-linearity in physical systems, or physically interpreted systems, mostly referring to transient self-organization phenomena or to the “edge of chaos” [4]. The infamous example here are swarms… no, swarms do not have a complex organization! Swarms can be modeled in a simple way as a kind of stable, probabilistic spring system.
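To make that claim concrete: such a swarm can be reproduced by a handful of lines, as a toy model of noisy particles on weak springs pulling toward the swarm’s centroid. All names and parameter values below are illustrative assumptions, not taken from the cited literature; the point is merely that a stable swarm needs no complex organization.

```python
import random

def simulate_swarm(n=50, steps=2000, k=0.05, noise=0.5, seed=1):
    """Toy 'probabilistic spring' swarm in 1-D: every particle is pulled
    toward the swarm centroid by a weak spring (strength k), damped, and
    kicked by Gaussian noise. Returns the mean distance from the centroid."""
    rng = random.Random(seed)
    xs = [rng.uniform(-10.0, 10.0) for _ in range(n)]
    vs = [0.0] * n
    for _ in range(steps):
        c = sum(xs) / n                       # current centroid of the swarm
        for i in range(n):
            vs[i] += -k * (xs[i] - c)         # weak attraction to the centroid
            vs[i] *= 0.9                      # damping keeps the motion bounded
            vs[i] += rng.gauss(0.0, noise)    # random jitter of the "midge"
            xs[i] += vs[i]
    c = sum(xs) / n
    return sum(abs(x - c) for x in xs) / n

# The spread settles to a bounded, noise-determined value: a cohesive "swarm"
# emerges from attraction plus randomness alone.
print(round(simulate_swarm(), 2))
```

Nothing here produces novel patterns; the attraction only confines the randomness, which is exactly why attraction alone is not sufficient for complexity.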

Naturally, there is also every kind of mixture of these three strains of incompetent treatment of the concept of complexity. So far there is no acceptable general concept, nor even an accepted working definition. As a consequence, many people write about complexity, or refer to it, in a way that confuses everything. The result is bad science, at the least.

Fortunately, there is a fourth, though tiny, strain. Things are changing; more and more, complexity loses its pejorative connotation. The beginnings lie in the investigation of natural systems in biology. People like Ludwig von Bertalanffy, Howard Pattee, Stanley Salthe, or Conrad Waddington, among others, brought the concept of irreducibility into the debate. And then there is, of course, Alan Turing and his mathematical paper about morphogenesis [5] from the mid-1950s, accompanied by the Russian chemist Belousov, who did almost the same work experimentally [6].

The Status of Emergence

Complex systems may be characterized by a phenomenon that can be described from a variety of perspectives. We could say that symmetry breaks [7], that there is strong emergence [8], that the system develops patterns that cannot be described on the level of the constituents of the process, and so on. If such an emergent pattern is selected by another system, we can say that something novel has been established. The fact that we can take such perspectives neatly describes the peculiarity of complexity and its dynamical grounds.

Many researchers feel that complexity potentially provides some important yet opaque benefits, despite its abundant usage as a synonym for incomprehensibility. As is always the case in such situations, we need a proper operationalization, which in turn needs a clear-cut identification of its basic elements in order to be sufficient to re-construct the concept as a phenomenon. As we already pointed out above, these elements are to be understood as abstract entities, which need a deliberate instantiation before any usage. From a large variety of sources, starting from Turing’s seminal paper (1952) up to Foucault’s figure of the heterotopia [9], we can derive five elements that are necessary and sufficient to render any “system” from any domain into a complex system.

Here we would like to add a small but important remark. Even if we take a somewhat uncritical (non-philosophical) perspective, and thus the wording is not well-defined so far (but this will change), we have to acknowledge that complexity is (1) a highly dynamical phenomenon for which we cannot expect to find a “foundation” or a “substance” that would provide sufficient reason, and (2), more simply, that whenever we are faced with complexity we experience at least a distinct surprise. Both issues directly lead to the consequence that there is no possibility of a purely quantitative description of complexity. This also means that none of the known empirical approaches (external realism, empiricism, whether in a hypothetico-deductive attitude or not, whether following falsificationism or not) works. All these approaches, particularly statistics, are structurally flat. Anything else (structurally “non-flat”) would imply the necessity of interpretation immanent to the system itself, and such has been excluded from contemporary science for several hundred years now. The phenomenon of complexity is only partially an empirical problem of “facts,” despite the “fact” that we can observe it (just wait a few moments!).

We need a quite different scheme.

Observations

Before we proceed, we would like to give the following as an example. It is a simulation of a particular sort of chemical system, organized in several layers. “Chemical system” denotes a solution of several chemicals whose reactions mutually feed on each other and on the reactants of other reactions. Click on the image if you would like to see it in action in a new tab (image & applet by courtesy of “bitcraft”):

For non-programmers it is important to understand that there is no routine like “draw-a-curve-from-here-to-there.” The simulation works on the level of the tiniest molecules. What is striking, then, is the emerging order, or, in a technical term, the long-range correlation of the states (or contexts) of the particles. This range is hundreds to thousands of times larger than the size of a particle. The system is not “precisely” predictable, but its “behavior” is far from random. In other words, the individual particles do not know anything about those “patterns.”

Such systems as the one above are called reaction-diffusion systems, simply because there are chemical reactions and physical diffusion. There are several different types of them, distinguishable by the architecture of the process, i.e. by the question of what happens with the product of the reaction. In Gray-Scott models, for instance, the product of the reaction is removed from the system; hence it is a flow-type reactor. The first such system, discovered by Turing (mathematically) and, independently of him, by Belousov and Zhabotinsky (in a quite un-Russian attitude, by chemical experiments), is different. Here, everything remains in the system, in a perfectly circularly re-birthing (very small) “world.”
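As an illustration, a Gray-Scott flow reactor can be sketched in one dimension with the standard update rule: a feed rate f replenishes the substrate u, a rate f+k removes the catalyst v, and the reaction u + 2v → 3v couples them, with diffusion as a discrete Laplacian on a ring. The parameter values are conventional illustrative choices, not tuned to the applet shown above.

```python
def gray_scott_1d(n=128, steps=2000, Du=0.16, Dv=0.08, f=0.035, k=0.060):
    """1-D Gray-Scott reactor: u is fed in (rate f), v is washed out
    (rate f + k), and the reaction u + 2v -> 3v couples them.
    Diffusion is a discrete Laplacian on a ring. Returns the v profile."""
    u = [1.0] * n
    v = [0.0] * n
    for i in range(n // 2 - 4, n // 2 + 4):   # small seed of v in the middle
        u[i], v[i] = 0.5, 0.25
    for _ in range(steps):
        lu = [u[(i - 1) % n] + u[(i + 1) % n] - 2.0 * u[i] for i in range(n)]
        lv = [v[(i - 1) % n] + v[(i + 1) % n] - 2.0 * v[i] for i in range(n)]
        for i in range(n):
            uvv = u[i] * v[i] * v[i]          # reaction term u * v^2
            u[i] += Du * lu[i] - uvv + f * (1.0 - u[i])
            v[i] += Dv * lv[i] + uvv - (f + k) * v[i]
    return v

profile = gray_scott_1d()
print(round(max(profile), 3))
```

The two antagonistic processes of the text are visible in the two lines of the update: the short-range, strong reaction term and the long-range, weak replenishment and diffusion.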

The diffusion brings the reactants together, which then react “chemically.” Chemical reactions combine some reactants into a new one with completely different characteristics. What happens if you bring chlorine gas into contact with the metal natrium (aka sodium)? Cooking salt.

In some way, we see here two different kinds of emergence, a chemical one and an informational one. Well, the second, which is the more interesting to us today, is not strictly informational; it is a dualistic emergence, spanning between the material world and the informational world. (Read here about the same subject from a different perspective.)

Here we already meet a relation that can be found in any complex system, i.e. any system that exhibits emergent patterns in a sustainable manner. We meet it here in the instance of chemical reactions feeding on each other. Yet the reaction does not simply stop, creating some kind of homogeneous mud… For mechanists, positivists, idealists, etc., something quite counter-intuitive happens instead.

This first necessary condition (at least I suppose so) for creating complexity is probabilistic mutual counteraction, where two processes stand in a particular constellation to each other. We can identify this constellation wherever we meet “complexity”: in chemical systems, inside biological organisms at any scale, or in their behavior. So far we use “complexity” in a phenomenal manner, proto-empirically so to speak, but this will change. Our goal is to develop a precise notion of complexity. Then we will see what we can do about it.

Back to this probabilistic mutual counteraction. One of the implied “forces” accelerates and enforces, but its range of influence is small. The other force, counteracting the strong one, is much weaker, but has a larger range of influence [10]. Note that “force” here does not imply an actor; it should be taken more like a field.

The lesson we can take from that, as a “result” of our approach, is already great, IMHO: whenever we see something that exhibits dynamic “complexity,” we are allowed, no, we are obliged, to ask about those two (at least) antagonistic processes, their quality and their mechanism. In this perspective, complexity is a paradigmatic example of a full theory, which is categorically different from a model: it provides a guideline for how to ask, which direction to take for creating a bunch of models (more details about theory here). Also quite important to understand is that asking for mechanisms here does not imply any kind of reductionism, quite the contrary.

Yet, those particularly configured antagonistic forces are not the only necessary condition. We will see that we need several more of those elements.

I only have to mention the difference between a living organism and the simulated chemical system above in order to demonstrate that there are entities that are vastly “more” complex than the Turing-McCabe system above. Actually, I would propose not to call such chemical systems “complex” at all, and for good reasons, as we will see shortly.

If we did not simulate such a system but ran it as an actual chemical system, it would soon stop, even if we did not remove anything from it. What is therefore needed is a source of energy, or better, of enthalpy, or better still, of neg-entropy. The system has to dissipate a lot of entropy (disorder, pure randomness, hence radiation) in order to establish a relatively “higher” degree of order.

These two principles are still not sufficient even to create self-organization phenomena such as the one shown above. Yet for the remaining factors we change the (didactic) direction.

The Proposal

We already mentioned above that complexity is not a “purely” empirical phenomenon. Actually, radical empiricism has been proved to be problematic. So what we are forced to do concerning complexity, we always have to do in any empirical endeavor; in the case of complexity it is just overly clearly visible. What we are talking about is the deliberate a priori setting of “elements” (not axioms!). Nevertheless, we are convinced that it is possible to take a scientific perspective towards complexity, in the sense of working with theories, models, and predictions/diagnoses. What we propose is just not radical positivism, or the like.

Well, what is an “element”? Elements are “syndromes,” almost some kind of identifiable and partially naturalized symbols. Elements do not make sense if taken one after another. They make sense only if taken together. Yet I would not like to open the classic-medieval discourse about “ratios”….

Our elements are drawn from basic physics, from dynamical systems showing emergent phenomena, and from abstract sign-theoretic considerations, completed by a formal argument.

We would now like to present our proposal of the five necessary elements that create complexity; by virtue of their effect, the whole set is then also sufficient. The five presumably essential and jointly sufficient components of complexity are:

  • (1) dissipation, deliberate creation of additional entropy by the system at hand;
  • (2) an antagonistic setting similar to the reaction-diffusion-system (RDS), as described first by Alan Turing [5], and later by Gray-Scott [11], among others;
  • (3) standardization;
  • (4) active compartmentalization;
  • (5) systemic knots.

We shall now contextualize these elements as briefly as possible.

Dissipation

Element 1: The basic element is dissipation. Dissipation means that the system produces a large amount of disorder, so-called entropy, in order to be able to establish structures (order) and to keep the overall entropy balance increasing [4]. This requires a radical openness of the system, which was recognized already by Schrödinger in his Dublin lectures [12]. Without radiation of heat, without dissipation, no new order (= local decrease of entropy) could be created. Only the overall lavish increase of entropy allows decreasing it locally (= establishing patterns and structure). If the system performs physical work while radiating heat, the system can produce new patterns, i.e. order, which transcends the particles and their work. Think of the human brain, which radiates up to 100 Watts in the infrared spectrum in order to create that volatile immaterial order which we call thinking. Much the same is true for other complex organizational phenomena such as “cities.”

Antagonistic Forces in Populations

Element 2: This increase of entropy is organized on two rather different time scales. On the short time scale we can see microscopic movements of particles, which, now invoking the second element, have to be organized in an antagonistic setting. One of the “forces,” reactions, or, in general, “mechanisms” should be strong, with a comparatively short spatio-temporal range of influence. The other, antagonistic force or mechanism should be comparatively weak but, as a kind of compensation, far-reaching in time and space. Turing showed mathematically that such a setting can produce novel patterns for a wide range of parameters. “Novelty” means that these patterns cannot be found anywhere in the local rules organizing the movements of the microscopic particles.
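This antagonistic constellation is commonly written as an activator-inhibitor reaction-diffusion pair; the notation below is the textbook form, a sketch rather than Turing’s original equations:

```latex
\begin{aligned}
\partial_t u &= f(u,v) + D_u \,\nabla^2 u \quad &&\text{(strong, short-range activator)}\\
\partial_t v &= g(u,v) + D_v \,\nabla^2 v \quad &&\text{(weak, long-range inhibitor)}
\end{aligned}
\qquad \text{with } D_v \gg D_u
```

Pattern formation (the Turing instability) requires exactly the asymmetry described in the text: a homogeneous state that is stable without diffusion becomes unstable once the inhibitor spreads much faster than the activator.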

Turing’s system is one of constant mass; reaction-diffusion systems (RDS) may also be formulated as flow-through reactor systems (the so-called Gray-Scott model [11]). In general, you can think of reaction-diffusion systems as a sort of population-based, probabilistic Hegelian dialectics (which, of course, is strictly anti-Hegelian).

Standardization

Element 3: The third element, standardization, reflects the fact that the two processes can only interact intensely if their mutual interference is sufficiently standardized. Without standardization there would not be any antagonistic process, hence also no emergence of novel patterns; the two processes would be transparent to each other. This link has been overlooked so far in the literature about complexity. The role of standardization is also overlooked in theories of social or cultural evolution. Yet think only of DIN or ASCII, or de-facto standards like “the PC”: without them, none of the subsequent emergences of more complex patterns would have happened. We can easily extend the role of standardization into the area of mechanization and automation. There is also a deep and philosophically highly relevant relation between standardization and rule-following, which we cannot deal with further here. Actually, the process of habituation, the development of a particular “common sense,” and even the problem of naming as one of the first steps of standardization do not only belong to the most difficult problems in philosophy; they are actually still somewhat mysterious.

On a sign-theoretic level we may formulate standardization as some kind of acknowledged semiotics, a body of rules which tells the basic individual items of the system how to behave and how to interpret. Frequently changing codes effectively prohibit any complexity; changing codes also effectively prohibit any further progress in the sense of innovation. Codes produce immaterial enclaves. On the other hand, codes are also means of standardization. Codes play multiple and even contradictory roles.

Towards the Decisive Step

As already mentioned above, the emergent creation of novel and even irreducible patterns cannot be regarded as a sufficient condition for complexity. Such processes are only proto-complex; since there is no external instance ruling the whole generative process, those patterns are often called “self-organized.” This is, bluntly said, a misnomer, since it fails to distinguish between the immaterial order (on the finer time scale) and the (quasi-)material organization (on the coarser time scale). Emergent patterns are not yet organized; at best we could say that such systems are self-patterning. Obviously, strong emergence and complexity are very different things.

Compartmentalization

Element 4: What is missing in such self-organizing configurations is the fourth element of complexity, the transition from order to organization. (In a moment we will see why this transition is absolutely crucial for the concept of complexity.) Organization always means that the system has built up compartments. To achieve that, the self-organizing processes, such as those in the RDS, must produce something which then resides as something external to the dynamic process running on the short time scale. In other words, the system introduces (at least) a second, much longer time scale, since the products are much more stable than the processes themselves. This transition we may indeed call self-organization.

In biological systems those compartments are established by cell walls and membranes, which build vesicles, organs, fluid compartments, and so on. In large assemblies built of matter, “cities,” we know walls and streets, but also immaterial walls made from rules, semi-permeable walls created by steep gradients of fractal coefficients, and so on.

There are, however, also informational compartments, which are probabilistically defined, such as the endocrine system or the immune system in animals, both of which form informational networks. In the case of social organizations, compartments become tangible as strict rules, or even domain-specific languages. In some sense, the products of the processes facilitating the transition from order to organization are always a kind of left-over: secretions, partial deaths if you like. It is in these persistent secretions that the phenomenon of growth becomes visible. From an outside perspective, this transition can also be regarded as a process of selection: starting from a large variety of slightly different and only temporarily stable patterns or forms, the transition from order to organization establishes some of them, selectively, as a matter of fact.

It is clear that the lasting products of a system act as a constraint on any subsequent processes. Durable systems may acquire the capability to act precisely on that crucial transition in order to gain better stability. One could, for instance, imagine that a system controls this transition by controlling its “temperature,” high “temperatures” leading to a re-melting of structures in order to allow for different patterns. This, however, raises a deep and self-referential problem: the system would have to develop a sufficiently detailed model of itself, which is not possible, since there is strong emergence in its lower layers. This problem forwards us to the fifth element of complexity.

Systemic Knots

Element 5: We know from bio-organic systems that they are able to maintain themselves. This involves a myriad of mutual influences, which also span several levels of organization. For instance, the brain, and even thoughts themselves, are able to influence individual (groups of) cells [13]. The behavior of drinking green tea directly acts on the DNA in the cells of the body. Such controlling influence, however, is not unproblematic. Any controlling instance can have only a partial model of the regulated contexts. By means of non-orthogonality, this leads directly to the strange situation that the interests of the lower levels and those of the top levels necessarily contradict each other. This effect is just the inverse of the famous “enslaving parameter” introduced by Hermann Haken as the key element of his concept of synergetics [14].

We call this effect the systemic knot, since the relationships between elements of different layers cannot be “drawn” onto flat paper any more. Most interestingly, this latent antagonism between the levels of a system is precisely the precondition for a second-order complexity. Notably, we can conclude that complexity “maintains” itself. If a complex system did not cause the persistence of the complexity it builds upon, it would soon cease to be complex, as a consequence of the second law of thermodynamics. In other words, it would soon be dead.

Alternatives

Today, at the beginning of 2012, complexity has advanced into (almost) everybody’s mind, perhaps except for materialists like Slavoj Žižek, who in a recent TV interview proposed a radical formalistic materialism, i.e. that we should acknowledge that everything (!) is just formula. Undeniably, the term “complexity” has made its career during the last 25 years. Until the end of the 1980s, complexity was “known” only among very few scientists. I guess that this career is somehow linked to information as we practice it.

From a system-theoretic approach, the newly available acceleration of many processes as compared to pre-IT times first introduced a collapse of stable boundaries (like the ocean), buzzed as “globalization,” only to introduce symmetry breaks into the complex system thus provoked. This instance of complexity is really a new one; we have never seen it before.

Anyway, given the popularity of the approach, one might assume that there is not only a common understanding of complexity, but also an understanding that would somehow work in an appropriate manner, that is, one that does not reduce complexity to a narrow, domain-specific perspective. Apart from our proposal, however, such a concept does not exist.

The List

In the context of management research, where the notion of complexity has been discussed since the days of Peter Drucker, Robert Bauer and Mihnea Moldoveanu wrote in 2002 [15]:

‘Complexity’ has many alternative definitions. The word is used in the vernacular to denote difficult to understand phenomena, in information theory to denote incompressible digital bit strings [Li and Vitanyi, 1993], in theoretical computer science to denote relative difficulty of solving a problem [Leiserson, Cormen and Rivest, 1993] in organization theory to denote the relative degree of coupling of a many-component system [Simon, 1962], among many other uses.

They continue:

Our goal is to arrive at a representation of complexity that is useful for researchers in organizational phenomena by being comprehensive – incorporating the relevant aspects of complexity, precise – allowing them to identify and conceptualize the complexity of the phenomena that they address, and epistemologically informed – allowing us to identify the import of the observer to the reported complexity of the phenomenon being observed.

So far so good; we could agree with that. Yet afterwards they claim the possibility of decomposing “[…] complexity into informational and computational components.” They seek support from another author: “‘Complexity’ is often thought to refer to the difficulty of producing a competent simulation of a phenomenon [Norretranders, 1995],” which strikingly recalls Luhmann’s wordy capitulation. We will return to this cybernetic attitude in the next section; here we just note this reference to (cybernetic) information and computer sciences.

Indeed, computational complexity and informational entropy belong to the most popular approaches to complexity. Computational complexity is usually operationalized further in different ways, e.g. as minimal description length, accessibility of problems and solutions, etc. Informational entropy, on the other hand, is not only of little value; it also applies a strongly reductionist version of the concept of information. Entropy is little more than the claim that there is a particular phenomenon. The concept of entropy cannot be used to identify mechanisms, whether in physics or regarding complexity. More generally, any attempt to describe complexity in statistical terms fails to capture the relevant issue of complexity: the appearance of “novel” traits, novel from the perspective of the system.
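The description-length attitude criticized here is easy to operationalize, which also makes its limits visible. A minimal sketch, using a stock compressor as a crude upper bound on description length (zlib is, of course, not the “shortest description”):

```python
import random
import zlib

def compressed_ratio(data: bytes) -> float:
    """Crude description-length estimate: compressed size / original size.
    Only an upper bound on algorithmic (Kolmogorov) complexity per byte."""
    return len(zlib.compress(data, 9)) / len(data)

random.seed(0)
ordered = b"ab" * 5000                                    # highly regular string
noise = bytes(random.randrange(256) for _ in range(10000))  # incompressible bytes

print(compressed_ratio(ordered) < 0.05)  # True: regularity compresses massively
print(compressed_ratio(noise) > 0.9)     # True: random bytes barely compress
```

Such a measure separates regularity from randomness, but it registers nothing of emergent, novel traits; by this yardstick, pure noise would count as maximally “complex.”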

Another strain relates complexity to chaos, or even uses it as a synonym. This, however, applies only in vernacular terms, in the sense of being incomprehensible. Chaos and complexity often appear in the same context, but by far not necessarily. Although complex systems may develop chaotic behavior, the link is anything but tight. There is chaotic behavior in non-complex systems (the Mandelbrot set), as well as complexity in non-chaotic systems. From a more rational perspective it would be a mistake to equate them, since chaos is a descriptive term about the further development of a system, where this description is based on operationalizations like the ε-tube or the value of the Lyapunov exponent.
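For the record, the Lyapunov exponent invoked here is straightforwardly computable for simple maps; for the logistic map x → r·x·(1−x) it is the orbit average of ln|r(1−2x)|. A minimal sketch with standard textbook parameter values:

```python
import math

def lyapunov_logistic(r: float, n: int = 100_000, x0: float = 0.3) -> float:
    """Average of ln|f'(x)| = ln|r(1-2x)| along an orbit of the logistic map.
    Positive result indicates chaos; negative, a stable (periodic) regime."""
    x, acc = x0, 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
    return acc / n

print(lyapunov_logistic(3.2) < 0)  # True: period-2 regime, negative exponent
print(lyapunov_logistic(3.9) > 0)  # True: chaotic regime, positive exponent
```

The exponent thus quantifies the divergence of nearby trajectories, nothing more; it says nothing about layering, organization, or emergence, which is exactly why chaos and complexity must not be conflated.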

Only very recently has there been an attempt to address the issue of complexity in a more reflected manner, yet it did not achieve an operationalizable concept. (…)

Problems with the Received View(s)

It is either not feasible or irrelevant to talk about “feedback” in complex systems. The issue at hand is precisely emergence; thus talking about deterministic, hard-wired links, in other words feedback, severely misses the point.

Similarly, it is almost nonsense to assign inner states to a complex system. There is neither an identifiable origin nor even the possibility of describing a formal foundation; not even our proposal is a “formal” foundation in that sense. The talk about states in complex systems, similar in this to the positivist claim of states in the brain, is wrong again precisely because emergence is emergence, and it occurs on the (projected) “basis” of an extremely deterritorialized dynamics. There is no center, and there are no states in such systems.

On the other hand, it is also wrong to claim, at least as a general notion, that complex systems are unpredictable [16]. The unpredictability of complex systems is fundamentally different from the unpredictability of random systems. In random systems we do not know anything except that there are fluctuations of a particular density. It is not possible to say anything concrete about the next move of the system, only probabilities. Whether the nuclear plant crashes tomorrow or in 100,000 years is not a subject of probability theory, precisely because local patterns are not a subject of statistics. In a way, randomness cannot surprise, because “randomness” is a language game meaning “there is no structure.”

Complex systems are fundamentally different. Here, all we have are patterns. We may even expect that a given pattern persists more or less stably in the near future, meaning that we would classify the particular pattern (which “as such” is unique for all times) into the same family of patterns. For the more distant future, however, the predictability of complex systems is far lower than that of random systems. Yet despite this stability we must not talk about “states” here. Closely related to the invocation of “states” is the cybernetic attitude.

Yet there is an obvious divergence between what life in an ecosystem is and the description length of algorithms, or the measure of disorderliness. This categorical difference refers to the fact that living systems, as the most complex entities we know of, consist of a large number of integrative layers, logical compartments if you like. All blood cells are on such a layer, all liver cells, all neurons, all organs, etc., but so are certain functional roles. In cell biology one speaks of the genome, the proteome, the transcriptome, etc., in order to indicate such functional layers. The important lesson we can take from this is that we cannot reduce any of the more integrated layers to a lower one. All of these layers are “emergent” in the strong sense, despite the fact that they are also interconnected top-down. Any proclaimed theory of complexity that does not include the phenomenon of emergent layering should not be regarded as such a theory. Here we meet the difference between structurally flat theories or attitudes (physics, statistics, cybernetics, positivism, materialism, deconstructivism) and theories that know of irreducible aspects such as structural layers and integration.

Finally, a philosophical argument. The product of complexity contains large parts about which one cannot speak in principle. The challenge is indeed a serious one. Regardless of which position we take, underneath or above the emergent phenomenon, we cannot speak about it. In both cases there is no possible language for it. This situation is very similar to the body-mind problem, where we met a similar transition. Speaking about a complex system from the outside does not help much. We can only point to the emergence, in the Wittgensteinian sense, or focus on either the lower or the emergent layer. Both layers must remain separated in the description. We can only describe the explanatory dualism [17].

This does not mean that we can’t speak about it at all. We just can’t speak about it in analytic terms, as structurally flat theories do, insofar as they do not refuse any attempt to understand the mechanisms of complexity, and the speaking thereof, in the first place. One alternative for integrating that dualism in a productive manner is the technique of elementarization, as we have tried it here.

Conclusion

As always, we separate the conclusions first into philosophical aspects and the topic of machine-based epistemology and its “implementation.”

Neil Harrison, in his book about unrecognized complexity in politics [16], correctly wrote,

Like realism, complexity is a thought pattern.

From a completely different domain, Mainzer [7] similarly wrote:

From a logical point of view, symmetry and complexity are syntactical and semantical properties of theories and their models.

We also have argued that the property of emergence causes an explanatory dualism, which resists any attempt at an analytic formalization in positive terms. Yet, even though we can’t apply logical analyticity without losing the central characteristics of complex systems, we can indeed symbolize them. This step of symbolization is distantly similar to the symbolization of the zero, or that of infinity.

Our proposal here is to apply the classic “technique” of elementarization. Thus, we propose a “symbolization” that is not precisely a symbol; it is more like an (abstract) image. None of the parts or aspects of an image can be taken separately to describe the image, nor may we omit any of those aspects. This link between images and complexity, or images and explanatory dualism, is an important one which we will follow in another chapter (about “Waves, Words and Images“).

“Elements” (capital E) are not only powerful instruments, they are also indispensable. Elements are immaterial, abstract, formless entities; their only property is the promise of a basic constructive power. Elements are usually assumed not to be further decomposable without losing their quality. That is true for our big 5 as well as for the chemical elements, and also for the Euclidean Elements of geometry. In our case, however, we are also close to Aristotle’s notion of elements, according to which the issue at hand (for him and his fellows the “world”) can’t be described by any isolated subset of them. The five elements are all necessary, but only sufficient if they appear together.

The meaning of Elements and their usage is that they allow for new perspectives and for a new language. In this way, the notion of “complexity” is taken by many as an element. Yet it is only a pseudo-element, an idol, because (i) it is not abstract enough, and (ii) because it can be described by means of more basic aspects. Yet it depends on the perspective, of course.

Anyway, our theoretical framework allows us to distinguish between various phenomena around complex systems in a way that is not accessible through other approaches. Similarly to the theory of natural evolution, our theory of complexity is a framework, not a model. It cannot be tested empirically in a direct manner. Yet, it has predictive power on the qualitative level, and it allows us to develop means for synthesizing complexity, and even for qualitative predictions.

Deleuze (p.142 in [18]) provided a highly convincing and, IMHO, still unrivaled description of complexity on less than three short pages: the trinity between the broiling of “particled matter” and “bodies” below, the pattern above, and the dynamically appearing emergence, which he called “sense”. He even attributed to that process the label “logic”. It is amazing that Deleuze, in his book about the logic of sense, focused strongly on paradoxes, i.e. antagonistic forces, and succeeded in remaining self-consistent with his series 16 and 17 (paradox of the logics of genesis); today we can determine the antagonism (of a particular configuration) as one of the main necessary Elements of complexity.

What is the role of complexity in cognition? To approach this question we have to recognize what is happening in a complex process: a pattern, a potential novelty, appears out of randomness. If the pattern does not stabilize, it will sink back into randomness. However, patterns are temporarily quite persistent in all complex systems. That means that the mere appearance of those potential novelties could make them a subject of selection, which stabilizes the volatile pattern over time, changing it into an actual novelty.

Thus, complexity sits in between the randomness of scattered matter and established differences, a virtual zone between matter and information, between the material and the immaterial. Deleuze called it, as we just said, the paradox of the logics of genesis [18].

The consequence is pretty clear: Self-Organizing Maps are not sufficient. They just establish a volatile, transient order. Yet, what we need to create is complexity. There is even a matching result from neuro-cognitive research: the EEG of more intelligent people shows a higher “degree” of complexity. We already know that we can achieve this complexity only by animal-like growth and differentiation. There is a second reason why the standard implementation of the SOM is not sufficient: it is not probabilistic enough, since almost all properties of the “nodes” are fixed at implementation time without any chance for a break-out. For instance, SOMs are mostly realized as static grids, the transfer mechanism is symmetric (circle, ellipsoid), the nodes do not have a “state” except the collected extensions, and there are only informational antagonisms, but not chemical ones, or those related to the matter of a body.
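The fixed properties criticized here become visible in a minimal sketch of a standard SOM (1-D for brevity; node count, kernel shape and decay schedules are illustrative choices of mine, not canonical values): the grid topology, the symmetric neighborhood kernel, and the learning schedule are all frozen at implementation time.

```python
import math
import random

random.seed(0)

def train_som(data, n_nodes=10, epochs=50, lr0=0.5, sigma0=3.0):
    # a standard 1-D SOM: the chain of nodes is a *static* grid, and the
    # Gaussian neighborhood is symmetric around the winner -- exactly the
    # properties that are fixed at implementation time
    w = [random.random() for _ in range(n_nodes)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                # fixed decay schedule
        sigma = sigma0 * (1 - t / epochs) + 0.5    # fixed neighborhood shrinkage
        for x in data:
            bmu = min(range(n_nodes), key=lambda i: abs(w[i] - x))
            for i in range(n_nodes):
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                w[i] += lr * h * (x - w[i])
    return w

data = [random.uniform(0.0, 1.0) for _ in range(200)]
w = train_som(data)
```

Nothing in this loop can break out of its own design: the nodes have no state beyond their weights, and no antagonism other than the informational competition for the best-matching unit.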

This eventually will lead to a completely novel architecture for SOM (that we soon will offer on this site).

This article was first published 20/10/2011, last substantial revision and re-publishing is from 15/01/2012. The core idea of the article, the elementarization of complexity, has been published in [19].

  • [1] Niklas Luhmann, Soziale Systeme. Grundriss einer allgemeinen Theorie. Frankfurt 1987. p.46/47. cited after [2].
  • [2] Frank Eckardt, Die komplexe Stadt: Orientierungen im urbanen Labyrinth. Verlag für Sozialwissenschaften, Wiesbaden 2009. p.132.
  • [3] Andrea Gleiniger, Georg Vrachliotis (eds.), Komplexität: Entwurfsstrategie und Weltbild. Birkhäuser, Basel 2008.
  • [4] Lewin 2000
  • [5] Alan M. Turing (1952), The Chemical Basis of Morphogenesis. Phil. Trans. Royal Soc. Series B, Biological Sciences, Vol. 237, No. 641, pp. 37-72. available online.
  • [6] Belousov
  • [7] Klaus Mainzer (2005), Symmetry and complexity in dynamical systems. European Review, Vol. 13, Supp. No. 2, 29–48.
  • [8] Chalmers 2000
  • [9] Michel Foucault
  • [10] Michael Cross, Notes on the Turing Instability and Chemical Instabilities. mimeo 2006. available online (mirrored)
  • [11] Gray, Scott
  • [12] Schrödinger, What is Life? 1948.
  • [13] Ader, Psychoneuroimmunology. 1990.
  • [14] Hermann Haken, Synergetics.
  • [15] Robert Bauer, Mihnea Moldoveanu (2002), In what Sense are Organizational Phenomena complex and what Differences does their Complexity make? ASAC 2002 Winnipeg (Ca)
  • [16] Neil E.Harrison, Thinking about the World we Make. in: same author (ed.), Complexity in World Politics Concepts and Methods of a New Paradigm. SUNY Press, Albany 2006.
  • [17] Nicholas Maxwell (2000) The Mind-Body Problem and Explanatory Dualism. Philosophy 75, 2000, pp. 49-71.
  • [18] Gilles Deleuze, Logic of Sense. 1968. German edition, Suhrkamp Frankfurt.
  • [19] Klaus Wassermann (2011). Sema Città-Deriving Elements for an applicable City Theory. in: T. Zupančič-Strojan, M. Juvančič, S. Verovšek, A. Jutraž (eds.), Respecting fragile places, 29th Conference on Education in Computer Aided Architectural Design in Europe eCAADe. available online.

۞

Mental States

October 23, 2011 § Leave a comment

The issue we are dealing with here is the question whether we are justified to assign “mental states” to other people on the basis of our experience, that is, based on weakly valid predictions and the use of some language upon them.

Hilary Putnam, in an early writing (at least before 1975), used the notion of mental states, and today almost everybody does so. In the following passage he tries to justify the reasonableness of the inference of mental states (italics by H. Putnam, colored emphasis by me); I think this passage is no longer compatible with his later results in “Representation and Reality”, although most people, particularly from computer science, cite him as a representative of a (rather crude) machine-state functionalism:

“These facts show that our reasons for accepting it that others have mental states are not an ordinary induction, any more than our reasons for accepting it that material objects exist are an ordinary induction. Yet, what can be said in the case of material objects can also be said here: our acceptance of the proposition that others have mental states is both analogous and disanalogous to the acceptance of ordinary empirical theories on the basis of explanatory induction. It is disanalogous insofar as ‘other people have mental states’ is, in the first instance, not an empirical theory at all, but rather a consequence of a host of specific hypotheses, theories, laws, and garden variety empirical statements that we accept. […] It is analogous, however, in that part of the justification for the assertion that other people have mental states is that to give up the proposition would require giving up all of the theories, statements, etc., that we accept implying that proposition; […] But if I say that other people do not have minds, that is if I say that other people do not have mental states, that is if I say that other people are never angry, suspicious, lustful, sad, etc., I am giving up propositions that are implied by the explanations that I give on specific occasions of the behavior of other people. So I would have to give up all of these explanations.”

Suppose we observe someone for a few minutes while he or she is getting increasingly stressed or relaxed, and suddenly the person starts to shout and to cry, or to smile. More professionally, if we use a coding system like the one proposed by Scherer and Ekman, the famous “Facial Action Coding System,” recently popularized by the TV series “Lie to me”: are we allowed to assign them a “mental state”?

Of course, we intuitively and instinctively start trying to guess what’s going on with the person, in order to make some prediction or diagnosis (which essentially is the same thing), for instance because we feel inclined to help, to care, to console the person, to flee, or to get chummy with her. Yet, is such a diagnosis, probably taking place in the course of a mutual interpretation of almost non-verbal behavior, the same as assigning “mental states”?

We are deeply convinced, that the correct answer is ‘NO’.

The answer to this question is somewhat important for an appropriate handling of machines that start to be able to open their own epistemology, which is the correct phrase for the flawed notion of “intelligent” machines. Our answer rests on two different pillars: we invoke complexity theory, and a philosophical argument as well. Complexity theory forbids states for empirical reasons; the philosophical argument forbids their usage regarding the mind, due to the fact that empirical observations can never be linked to statefulness, neither by language nor by mathematics. Statefulness is then identified as a concept from the area of (machine) design.

Yet, things are a bit tricky, hence we have to extend the analysis a bit. We also have to refer to what we said (or will say) about theory and modeling.

Reductionism, Complexity, and the Mental

Since the concept of “mental state” involves the concept of state, our investigation has to follow two branches. Besides the concept of “state” we have the concept of the “mental,” which still is a very blurry one. The compound concept of “mental state” just does not seem to be blurry, because of the state-part. But what if the assignment of states to the personal inner life of the conscious vis-à-vis is not justified? We think indeed that we are not allowed to assign states to other persons, at least when it comes to philosophy or science about the mind (if you would like to call psychology a ‘science’). In this case, the concept of the mental remains blurry, of course. One could suspect that the phrase “mental state” just arose to create the illusion of a well-defined topic when talking about the mind or mindfulness.

“State” denotes a context of empirical activity. It assumes that there have been preceding measurements yielding a range of different values, which we classify and interpret a posteriori. As a result of these empirical activities we distinguish several levels of rather similar values, give them a label, and call them a “state.” This labeling always remains partially arbitrary, by principle. Looking backward, we can see that the concept of “state” invokes measurability, interpretation and, above all, identifiability. The language game of “state” excludes basic non-identifiability. Though we may speak about a “mixed state,” which still assumes identifiability in principle, there are well-known cases of empirical subjects to which we cannot assign any distinct value in principle. Prigogine [2] gave many examples, and even an analytic one, based on number theory. In short, we can take it for granted that complex systems may traverse regions in their parameter space where it is not possible to assign anything identifiable. In some sense, the object does not exist as a particular thing; it exists only as a trajectory, or more precisely, a compound made from history and pure potential. A slightly more graspable example of such regions are the bifurcation “points” (which are not really points for real systems).

A well-observable experimental example is provided by arrangements like the so-called reaction-diffusion systems [3]. How should such a system be described? An atomic description is not possible if we try to refer to any kind of rules. The reason is that the description of a point in the parameter system around the indeterminate area of a bifurcation is the description of the whole system itself, including its trajectory through phase space. Now, who would deny that the brain, and the mind springing off from it, exceeds by far in its complexity those “simple” complex systems that are used as “model systems” in the laboratory, in Petri dishes, or even in computer simulations?
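How little machinery such "model systems" need can be sketched with a minimal 1-D Gray-Scott-type reaction-diffusion model (a hypothetical toy of mine, not taken from [3] or [11]; the parameter values are common illustrative choices):

```python
def laplacian(a):
    # discrete 1-D Laplacian on a periodic row (dx = 1)
    n = len(a)
    return [a[(i - 1) % n] - 2 * a[i] + a[(i + 1) % n] for i in range(n)]

def gray_scott(steps=500, n=100, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    # u: substrate, v: activator; explicit Euler steps with dt = 1
    u, v = [1.0] * n, [0.0] * n
    for i in range(n // 2 - 5, n // 2 + 5):   # a local perturbation seeds the pattern
        u[i], v[i] = 0.5, 0.25
    for _ in range(steps):
        lu, lv = laplacian(u), laplacian(v)
        uvv = [u[i] * v[i] * v[i] for i in range(n)]
        u = [u[i] + Du * lu[i] - uvv[i] + F * (1.0 - u[i]) for i in range(n)]
        v = [v[i] + Dv * lv[i] + uvv[i] - (F + k) * v[i] for i in range(n)]
    return u, v

u, v = gray_scott()
```

The local rule set is tiny, yet the evolved row cannot be summarized by a rule-like "state" description apart from its whole history, which is exactly the point made above.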

So we conclude that brains cannot “have” states in the analytic sense. But what about meta-stability? After all, it seems that the trajectories of psychological or behavioral parameters are somehow predictable. The point is that the concept of meta-stability does not help very much. That concept directly refers to complexity, and thus it references the whole “system,” including a large part of its history. As realists, or scientists believing in empiricism, we would not gain anything. We may summarize that there is no possible reduction of the brain to a perspective that would justify the usage of the notion of “state.”

But what about the mind? Let the brain be chaotic; the mind need not be, probably. Nobody knows. Yet, an optimistic reductionist could argue for the possibility. Is it then allowed to assign states to the mind, that is, to separate the brain from the mind with respect to stability and “statefulness”? Firstly, the reductionist would again lose all his points, since in this case the mind and its states would turn into something metaphysical, if not from “another reality.” Secondly, measurability would fall apart, since the mind is nothing you could measure as an explanans. It is not possible to split off the mind of a person from that very person, at least not for anybody trying to justify the assignment of states to minds, brains or “mental matter.” The reason is a logical one: such an attempt would commit a petitio principii.

Obviously, we have to resort to the perspective of language games. Of course, everything is a language game, we knew that even before refuting the state as an appropriate concept to describe the brain. Yet, we have demonstrated that even an enlightened reductionist, in the best case a contemporary psychologist, or probably also William James, must acknowledge that it is not possible to speak scientifically (or philosophically) about states concerning mental issues. Before starting with the state as a Language Game I would first like to visit the concepts of automata in their relation to language.

Automata, Mechanism, and Language

Automata are positive definite, meaning that they consist of a finite set of well-defined states. At any point in time they are exactly defined, even if the particular automaton is a probabilistic one. Well, complexity theory tells us that this is not possible for real objects. Yet, “we” (i.e. computer hardware engineers) learned to suppress deviations far enough to build machines which come close to what is called the “Universal Turing Machine,” i.e. nowadays’ physical computers. A logical machine, or a “logics machine” if you like, then is an automaton. Therefore, standard computer programs are perfectly predictable. They can be stopped, hibernated, restarted etc., and weeks later you can proceed at the last point of your work, because the computer did not change a single one of more than 8’000’000’000 dual-valued bits. All of the software running on computers is completely defined at any point in time. Hence, logical machines exist outside of time, at least from their own perspective. It is perfectly reasonable to assign them “states,” and the sequence of these states is fully reversible, in the sense that either the totality of a state can be stored and mapped onto the machine, or that it can be identically reproduced.
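This store-and-resume property can be made concrete with a toy deterministic automaton (entirely hypothetical, for illustration only): its whole "being" is one snapshot-able value, so hibernating and restoring it reproduces its behavior identically.

```python
class ToyAutomaton:
    # a positive definite machine: a finite state set {0,...,4} and a total,
    # well-defined transition function; nothing about the machine exists
    # outside this one integer
    def __init__(self, state=0):
        self.state = state

    def step(self, symbol):
        delta = 1 if symbol == "a" else 2
        self.state = (self.state + delta) % 5
        return self.state

m = ToyAutomaton()
for s in "aabba":
    m.step(s)

snapshot = m.state                  # "hibernate": the totality of the machine
m2 = ToyAutomaton(state=snapshot)   # restore it, possibly weeks later

resumed_equal = (m.step("a") == m2.step("a"))   # identical continuation
```

It is precisely this total reproducibility that, as argued below, complex and living systems lack.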

For a long period of time, people thought that such a thing would be an ideal machine. Since it was supposed to be ideal, it was also a matter of God, and in turn, since God could not do nonsense (as it was believed), the world had to be a machine. In essence, this was the reasoning in the startup phase of the Renaissance; remember Descartes’s or Leibniz’s ideas about machines. Later, Laplace claimed perfect predictability for the universe, if only he could measure everything, as he said. Not coincidentally, Leibniz also thought about the possibility of creating any thought by combination from a rather limited set of primitives, and in that vein he also proposed binary encoding. Elsewhere we will discuss whether real computers, as simulators of logical machines, can behave only deterministically. (They do not…)

Note that we are not just talking about the rather trivial case of Finite State Automata. We explicitly include the so-called Universal Turing Machine (UTM) in our considerations, as well as Cellular Automata, for which some interesting rules are known that produce unpredictable though not random behavior. The common property of all these entities is their positive definiteness. It is important to understand that physical computers must not be conceived as UTMs. The UTM is a logical machine, while the computer is a physical instance of it. At the same time it is more, but also less, than a UTM. The UTM consists of operations virtually without a body and without matter, and thus also without the challenge of a time viz. signal horizon: things which usually cause trouble when it comes to exactness. The particular quality of the unfolding self-organization in reaction-diffusion systems is, besides other design principles, dependent on effective signal horizons.
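A standard illustration of such a rule (my choice of example, not from the text) is elementary cellular automaton Rule 30: a completely positive definite update whose rows nevertheless look irregular.

```python
def rule30_step(cells):
    # Rule 30: new cell = left XOR (center OR right), on a periodic row
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

row = [0] * 31
row[15] = 1                        # a single seed cell
history = [row]
for _ in range(15):
    history.append(rule30_step(history[-1]))
# every row is exactly determined by the previous one, yet the evolving
# triangle of cells resists any simple local prediction
```

Each configuration is a perfectly identifiable "state" in the machine sense, which is exactly what distinguishes these designed entities from complex physical systems.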

Complex systems are different, and so are living systems (see the posts about complexity). Their travel through parameter space is not reversible. Even “simple” chemical processes are not reversible. So neither the brain nor the mind can be described as reversible entities. Even if we could measure a complex system at a given point in time “perfectly,” i.e. far beyond quantum mechanical thresholds (if such a statement makes any sense at all), even then the complex system would return to increasing unpredictability, because such systems are able to generate information [4]. Besides the issue of stability, they are also deeply nested, where each level of integration can’t be reduced to the available descriptions of the next lower level. Standard computer programs are thus an inappropriate metaphor for the brain as well as for the mind. Again, there is the strategic problem for the reductionist trying to defend the usage of the concept of states to describe mental issues: reversibility would a priori assume complete measurability, which first has to be demonstrated before we could talk about “states” in the brain or “in” the mind.

So, we drop the possibility that the brain or the mind is an automaton. A philosophically inspired biological reductionist will then probably resort to the concept of mechanism. Mechanisms are habits of matter. They are micrological and more local with respect to the more global explanandum. Mechanisms do not claim a deterministic causality for all the parts of a system, as the naive mechanists of earlier days did. Yet, referring to mechanisms imports the claim that there is a linkage between probabilistic micrological (often material) components and a reproducible overall behavior of the “system.” The micro-component can be modeled deterministically, or probabilistically following very strong rules; the overall system then shows some behavior which cannot be described by the terms appropriate for the micro-level. Applied to our case of mental states, that would lead us to the assumption that there are mechanisms. We could not say that these mechanisms lead to states, because the reductionist first has to prove that mechanisms lead to stability. However, mechanisms do not provide any means to argue on the more integrated level. Thus we conclude that, funny enough, resorting to the concept of probabilistic mechanism includes the assumption that it is not appropriate to talk about states. Again a bad card for the reductionist heading for the states in the mind.

Instead, systems theory uses concepts like open systems, dynamic equilibrium (which actually is not an equilibrium), etc. The result of the story is that we cannot separate a “something” in the mental processes that we could call a state. We have to speak about processes. But that is a completely different game, as Whitehead was the first to demonstrate.

The assignment of a “mental state” itself is empty. The reason is that there is nothing we could compare it with. We can only compare behavior and language across subjects, since any other comparison of two minds always includes behavior and language. This difficulty is nicely demonstrated by the so-called Turing test, as well as by Searle’s example of the Chinese Room. Both examples describe situations where it is impossible to separate something in the “inner being” (of computers, people or rooms with Chinese dictionaries); it is impossible because that “inner being” has no neighbor, as Wittgenstein would have said. As already said, there is nothing we could compare it with. Indeed, Wittgenstein said so about the “I” and refuted its reasonability, ultimately arriving at a position of “realistic solipsism.” Here we have to oppose the misunderstanding that an attitude like ours denies the existence of the mental affairs of other people. It is totally o.k. to believe, and to act according to this belief, that other people have mental affairs in their own experience; but it is not o.k. to call that a state, because we cannot know anything about the inner experience of the private realities of other people which would justify the assignment of the quality of a “state.” We could also refer to Wittgenstein’s example of pain: it is nonsense to deny that other people have pain, but it is also nonsense to try to speak about the pain of others in a way that claims private knowledge. It is even nonsense to speak about one’s own pain in a way that would claim private knowledge: not because it is private, but because it is not a kind of knowledge. Although we are used to thinking that we “know” the pain, we do not. If we did, we could speak exactly about it, and for others it would not be unclear in any sense, much like: I know that 5>3, or things like that. But it is not possible to speak in this way about pain.
There is a subtle translation or transformation process between the physiological process of releasing prostaglandin at the cellular level and the final utterance of the sentence “I have a certain pain.” The sentence is public, and necessarily so. Before that sentence, the pain has no face and no location, even for the person feeling the pain.

You might say: o.k., there are physics and biology, molecules and all the things we have no direct access to either. Yet, again, these systems behave deterministically, or at least some of them can be forced to behave regularly. Electrons, atoms and molecules do not have individuality beyond their materiality; they cannot be distinguished, they have no memory, and they do not act in a symbolic space of their own. If they did, we would have the same problem as with the mental affairs of our conspecifics (and chimpanzees, whales, etc.).

Some philosophers, particularly those calling themselves analytic, claim that not only feelings like happiness, anger etc. require states, but that intentions do so as well. This, however, would aggravate the attempt to justify the assignment of states to mental affairs, since intentions are the result of activities and processes in the brain and the mind. Yet, from that perspective one could try to claim that mental states are the result of calculations or deterministic processes. As for mathematical calculations, there could be many ways leading to the same result. (The identity theory between physical and mental affairs was first refuted by Putnam in 1967 [5].) From the level of the result we unfortunately cannot tell anything about the way it was achieved. This asymmetry is true even for simple mathematics.

Mental states are often conceived as “dispositions”; we just talked about anger and happiness, notwithstanding more “theoretical” concepts. Regarding this usage of “state,” I suppose it is circular, or empty. We cannot talk about the other’s psychic affairs except through the linkage we derive from experience. This experience links certain types of histories or developments with certain outcomes. Yet, there is no fixation of any kind, and especially not in the sense of a finite state automaton. That means that we are mapping probability densities onto each other. It may be natural to label those, but we cannot claim that these labels denote “states.” Those labels are just that: labels. Perhaps negotiated into some convention, but still, just labels. Not to be aware of this means to forget about language, which really is a pity in the case of “philosophers.” The concept of “state” is basically a concept that applies to the design of (logical) machines. For these reasons it is thus not possible to use “state” as a concept where we attempt to compare (hence to explain) different entities, one of which is not the result of design. Thus, it is also not possible to use “states” as a kind of “explaining principle” for any kind of further description.

One way to express the reason for the failure of the supervenience claim is that it mixes matter with information. A physical state (if that would be meaningful at all) cannot be equated with a mind state, in any of its possible ways. If the physical parameters of a brain change, the mind affairs may or may not be affected in a measurable manner. If the physical state remains the same, the mental affairs may remain the same; yet, this does not matter: since any sensory perception alters the physical makeup of the brain, a constant brain would simply be dead.

Were we to accept the computationalist hypothesis about the brain/mind, we would have to call the “result” a state, or the “state” a result. Both alternatives feel weird, at least with respect to a dynamic entity like the brain, though they feel weird even with respect to arithmetics. There is no such thing in the brain as a finite algorithm that stops when finished. There are no “results” in the brain, something even hard-core reductionistic neurobiologists would admit. Yet, again, exactly this determinability would have to be demonstrated in order to justify the usage of “state”; the reductionist cannot refer to it as an assumption.

The misunderstanding is quite likely caused by the private experience of stability in thinking. We can calculate 73+54 with stable results. Yet, this does not tell us anything about the relation between matter and mind. The same is true for language. Again, the hypothesis underlying the claim of supervenience denies the difference between matter and information.

Besides the fact that the reductionist is running again into the same serious tactical difficulties as before, this now is a very interesting point, since it is related to the relation of brain and mind on the one side and actions and language on the other. Where do the words we utter come from? How is it possible to express thoughts such that it is meaningful?

Of course, we do not run a database with a dictionary in it inside our head. Not only do we not do so; it would not make the production and understanding of language possible at all, even to the slightest extent. Secondly, we learn language; it is not innate. Even the capability to learn language is not innate, contrary to a popular guess. Just think of Kaspar Hauser, who never mastered it better than a 6-year-old child. We need an appropriately trained brain to become able to learn a language. Were the capability for language innate, we would have no difficulties learning any language. We all know that the opposite is true; many people have severe difficulties learning even a single one.

Now, the questions of (1) how one becomes able to learn a language and (2) how to program a computer such that it becomes able to understand language are closely related. The programmer can NOT put the words into the machine a priori, as that would be self-delusory. Moreover, the meaning of something cannot be determined a priori without referring to the whole Lebenswelt. That is the result of Wittgenstein’s philosophy, as well as Putnam’s final conclusion. Meaning is not a mental category, given that it always requires several brains to create something we call “meaning” (emphasis on several). The words are somewhere in between, between matter and culture. In other words, there must be some kind of process that includes modeling, binding, symbolization and habituation, directed both at its substrate, the brain matter, and at its supply, the cultural life.

We will discuss this aspect elsewhere in more detail. Yet, for the reductionist trying to defend the usage of the concept of states for the description of mental affairs, this special dynamics between the outer world and the cognitively established reality, which embeds our private use of language, is the final defeat for state-oriented reductionisms.

Nevertheless, we humans often feel inclined to use that strange concept. The question is why we do so, and what the potential role of that linguistic behavior might be. If we take the habit of assigning a state to the mental affairs of other people as a language game, a bunch of interesting questions comes to the fore. These are by far too complex and too rich to be discussed here. Language games are embedded into social situations; after all, we always have to infer the intentions of our partners in discourse, we have to establish meaning throughout the discourse, etc. Assigning a mental state to another being probably just means "Hey, look, I am trying to understand you! Would you like to play the mutual interpretation game?" That is fine, of course, for the pragmatics of a social situation, like any invitation to mutual inferentialism [6], and like any inferentialism it is even necessary, from the perspective of the pragmatics of a given social situation. Yet, this designation of understanding should not mistake the flag for the message. Demonstrating such an interest need not even be a valid hypothesis within the real-world situation. Ascribing states in this way, as an invitation to infer my own utterances, is even unavoidable, since any modeling requires categorization. We just have to resist assigning these activities any kind of objectivity that would refer to the inner mental affairs of our partner in discourse. In real life, doing so is inevitably a sign of deep disrespect for the other.

In philosophy, Deleuze and Guattari in their "A Thousand Plateaus" (p.48) were among the first to recognize the important abstract contribution of Darwin by means of his theory: he opened the possibility of replacing types and species by populations, and degrees by differential relations. Darwin himself, however, was not able to complete this move. It took another 100 years until Manfred Eigen coined the term quasi-species, defined as an increased density in a probability distribution. Talking about mental states is nothing but a fallback into Linnaean times, when science was the endeavor of organizing lists according to an uncritical use of concepts.

Some Consequences

The conclusion is that we cannot use the concept of state for dealing with mental or cognitive affairs in any imaginable way without stumbling into serious difficulties. We should definitely drop it from our vocabulary about the mind (and the brain as well). Assuming mental states in other people renders those people into deterministic machines; doing so would thus even have serious ethical consequences. Unfortunately, many works by many philosophers are rendered into mere garbage by their mistaken reference to this bad concept of "mental states."

Well, what are the consequences for our endeavor of machine-based epistemology?

The most salient one is that we cannot use digital computers to produce language understanding as long as we use these computers as deterministic machines. If we still want to try (and we do), then we need mechanisms that introduce aspects that

  • are (at least) non-deterministic;
  • produce manifolds with respect to representations, both on the structural level and “content-wise”;
  • start with probabilized concepts instead of compound symbolic “whole-sale” items (see also the chapter about representation);
  • acknowledge the impossibility of analyzing a kind of causality or, equivalently, states inside the machine in order to “understand” the process of language at a microscopic level: claiming ‘mental states’ is a garbage state, whether it is assigned to people or to machines.

Fortunately enough, we have found further important constraints for our implementation of a machine that is able to understand language. Of course, we need further ingredients, but for now these results are seminal. You may wonder about such mechanisms and the possibility of implementing them on a computer. Be sure, they are there!
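Two of the constraints above, non-determinism and probabilized concepts, can be illustrated with a minimal sketch. The class and names below are hypothetical, not an actual implementation: a “concept” is kept as a frequency distribution over observed usages rather than as a fixed dictionary entry, and interpretation samples from that distribution, so repeated calls need not agree.

```python
import random
from collections import Counter

class ProbabilizedConcept:
    """A concept as a distribution over usages, not a symbolic entry."""

    def __init__(self):
        self.usages = Counter()  # observed contexts -> counts

    def observe(self, context):
        """Habituation: every encountered usage shifts the distribution."""
        self.usages[context] += 1

    def interpret(self, rng=random):
        """Draw one interpretation; there is no single 'state' behind it."""
        contexts, counts = zip(*self.usages.items())
        return rng.choices(contexts, weights=counts)[0]

# An ambiguous word acquires its distribution only through usage.
bank = ProbabilizedConcept()
for ctx in ["river", "river", "money", "money", "money"]:
    bank.observe(ctx)

print(bank.interpret())  # "money" or "river", weighted by past usage
```

The point of the sketch is only structural: no a priori dictionary entry determines the answer, and no inspectable internal state does either; only the history of usages constrains the sampled interpretation.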

  • [1] Hilary Putnam, Mind, Language, and Reality. Cambridge University Press, 1979. p.346.
  • [2] Ilya Prigogine.
  • [3] Reaction-Diffusion-Systems: Gray-Scott-systems, Turing-systems
  • [4] Grassberger, 1988. Physica A.
  • [5] Hilary Putnam, “The Nature of Mental States” (1967), in Mind, Language and Reality, Cambridge University Press, 1975.
  • [6] Robert Brandom, Making It Explicit. 1994.

۞
