Complexity

January 15, 2012

The great antipode, if not opponent, of rationality is, by a centuries-old European declaration, complexity.

For a long time this adverse relationship was given only implicitly. The rationalist framework had already been set up in the 14th century. Two hundred years later, everything had to be a machine in order to celebrate God. Things did not change much in this relationship when the age of bars (mechanics, kinetics) grew into the age of machines (dynamics). Quite the opposite: control was the declared goal, an attitude that merged into the discovery of the state of information. While the 19th century invented the modern versions of information, statistics, and control technologies, at least in precursory form, the 20th century overgeneralized them and applied those concepts everywhere, up to their drastic abuse in the large bureaucracies and the secret services.

Complexity was finally marked as the primary enemy of rationalism by the so-called systems theoretician Luhmann, even as late as 1987 [1]. We cite this enigmatically dualistic statement from Eckardt [2] (p.132):

[Translation, starting with the citation in the first paragraph: “We shall call a contiguous set of elements complex if, due to immanent limitations regarding the elements’ capacity for establishing links, it is no longer possible that every element could be connected to every other.” […] Luhmann shifted complexity fundamentally into a dimension of the quantifiable, where he is looking for a logical division of sets and elements; hybridizations are not allowed in this dualistic conception of complexity.]

Even more than that: Luhmann merely describes the break of idealistic symmetry, yet by no means the mechanisms, nor the consequences of that break. And even this aspect of broken symmetry remains opaque to him, as Luhmann, correctly diagnosed by Eckardt, takes refuge in the reduction to quantization and implied control. In doing so, he emptied the concept of complexity even before dropping it. There is not the slightest hint of the qualitative change that complex systems can provoke through their potential for inducing emergent traits. Neglecting qualitative changes means enforcing a flattening, expunging any vertical layering or integration. Later we will see that this is a typical behavior of modernists.

Eckardt continues to cite Luhmann (one has to know that Luhmann was a bureaucrat by training):

[translation, second paragraph: “Complexity in this second sense then is a measure for indeterminacy or for the lack of information. Complexity is, from this perspective, the information that is unavailable for the system, but needed by it for completely registering and describing its environment (environmental complexity), or itself (systemic complexity), respectively.“]

In the second statement he proposed that complexity denotes things a system cannot deal with because it cannot comprehend them. This conclusion was an almost necessary by-product of his so-called “systems theory,” which is nothing other than plain cybernetics. Actually, applying cybernetics in whatsoever form to social systems is categorical nonsense, irrespective of the “order” of the theory: second-order cybernetics (observing the observer) is just as unsuitable as first-order cybernetics for talking about the phenomena that result from self-referentiality. Both always deny population effects, and both always claim the externalizability of meaning.

Of course, such a “theoretical” stance is deeply inappropriate for dealing with the realm of the social, or more generally with that of complexity.

Since Luhmann’s instantiation of system-theoretic nonsense, the concept of complexity has developed into three main strains. The first flavor, following Luhmann, became a widely accepted and elegant symbolic notion either for incomprehensibility and ignorance or for complicatedness. The second drastically and violently mistook it under the sign of cybernetics, which is more or less the complete “opposite” of complexity. Vrachliotis [3] cited Popper’s example of a swarm of midges as a paradigm for complexity in his article about the significance of complexity. Unfortunately for him, a swarm of midges is anything but complex; it is almost pure randomness in a weak field of attraction. We will see that attraction alone is not a sufficient condition. In the same vein, most research in physics is concerned with self-organization only from the perspective of cybernetics, if at all. Mostly, it resorts to information-theoretic measures (shortest description length) in the attempt to quantify complexity. We will discuss this later. Here we just note that the product of complexity contains large parts about which one cannot speak in principle, whereas a description length even claims that the complex phenomenon is formalizable.

This usage is related to the notion of complexity in computer science, denoting how the demands on time and memory depend on the size of the problem, and in information science. There, people talk about “statistical complexity,” which sounds much like “soft stones.” It would be funny if it were meant as a joke. As we will see, the epistemological status of statistics is as incommensurable with complexity as that of cybernetics. The third strain finally reduces complexity to non-linearity in physical systems, or physically interpreted systems, mostly referring to transient self-organization phenomena or to the “edge of chaos” [4]. The infamous example here are swarms… no, swarms do not have a complex organization! Swarms can be modeled in a simple way as a kind of stable, probabilistic spring system.
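
To see how little the description-length perspective captures of what is at stake here, consider the following minimal sketch (our own illustration, not a measure proposed in this text), which uses compressed size as a crude stand-in for “minimal description length”; the zlib-based proxy and the test strings are merely assumptions for the demonstration:

```python
# Compressed length as a crude "description length" proxy (illustrative only).
# The point: such measures rate pure noise highest and rigid order lowest,
# thus conflating randomness with complexity instead of detecting emergent pattern.
import zlib
import random

def description_length(data: bytes) -> int:
    """Length of the zlib-compressed representation, a rough MDL stand-in."""
    return len(zlib.compress(data, level=9))

n = 10_000
ordered = bytes([42] * n)                                  # maximal order
patterned = bytes((i // 50) % 2 * 255 for i in range(n))   # simple stripes
noise = bytes(random.getrandbits(8) for _ in range(n))     # maximal randomness

for name, data in [("ordered", ordered), ("patterned", patterned), ("noise", noise)]:
    print(f"{name:10s} -> {description_length(data)} bytes")
# Noise gets by far the longest "description"; by this measure the midge
# swarm would count as the most "complex" thing of all.
```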

Naturally, there are also all kinds of mixtures of these three strains of incompetent treatment of the concept of complexity. So far there is no acceptable general concept, nor even an accepted working definition. As a consequence, many people write about complexity, or refer to it, in a way that confuses everything. The result is, at the least, bad science.

Fortunately enough, there is a fourth, though tiny, strain. Things are changing; more and more, complexity is losing its pejorative connotation. The beginnings lie in the investigation of natural systems in biology. People like Ludwig von Bertalanffy, Howard Pattee, Stanley Salthe, and Conrad Waddington, among others, brought the concept of irreducibility into the debate. And then there is of course Alan Turing and his mathematical paper about morphogenesis [5] from 1952, accompanied by the Russian chemist Belousov, who did almost the same work experimentally [6].

The Status of Emergence

Complex systems may be characterized by a phenomenon that can be described from a variety of perspectives. We could say that symmetry breaks [7], that there is strong emergence [8], that the system develops patterns which cannot be described on the level of the constituents of the process, and so on. If such an emergent pattern is selected by another system, we can say that something novel has been established. The fact that we can take such perspectives neatly describes the peculiarity of complexity and its dynamical grounds.

Many researchers feel that complexity potentially provides some important yet opaque benefits, despite its abundant usage as a synonym for incomprehensibility. As is always the case in such situations, we need a proper operationalization, which in turn needs a clear-cut identification of its basic elements in order to be sufficient to re-construct the concept as a phenomenon. As we already pointed out above, these elements are to be understood as abstract entities, which need a deliberate instantiation before any usage. From a large variety of sources, from Turing’s seminal paper (1952) up to Foucault’s figure of the heterotopia [9], we can derive five elements which are necessary and sufficient to render any “system” from any domain into a complex system.

Here we would like to add a small but important remark. Even if we take a somewhat uncritical (non-philosophical) perspective, and thus the wording is not well-defined so far (but this will change), we have to acknowledge that complexity is (1) a highly dynamical phenomenon where we cannot expect to find a “foundation” or a “substance” that would give sufficient reason, and (2), more simply, that whenever we are faced with complexity we experience at least a distinct surprise. Both issues lead directly to the consequence that there is no possibility of a purely quantitative description of complexity. This also means that none of the known empirical approaches (external realism, empiricism, whether in a hypothetico-deductive attitude or not, whether following falsificationism or not) works. All these approaches, particularly statistics, are structurally flat. Anything else (structurally “non-flat”) would imply the necessity of interpretation immanent to the system itself, and such interpretation has been excluded from contemporary science for several hundred years now. The phenomenon of complexity is only partially an empirical problem of “facts,” despite the “fact” that we can observe it (just wait a few moments!).

We need a quite different scheme.

Observations

Before we proceed, we would like to give the following example. It is a simulation of a particular sort of chemical system, organized in several layers. “Chemical system” denotes a solution of several chemicals that feed on each other and on the reactants of other reactions. Click on the image if you would like to see it in action in a new tab (image & applet by courtesy of “bitcraft”):

For non-programmers it is important to understand that there is no routine like “draw-a-curve-from-here-to-there.” The simulation is on the level of the tiniest molecules. What is striking then is the emerging order, or, in technical terms, the long-range correlation of the states (or contexts) of the particles; this range is hundreds to thousands of times larger than the size of a particle. The system is not “precisely” predictable, but its “behavior” is far from random. In other words, the individual particles do not know anything about those “patterns.”

Systems such as the one above are called reaction-diffusion systems, simply because they combine chemical reactions with physical diffusion. There are several different types, distinguishable by the architecture of the process, i.e. by the question of what happens to the product of the reaction. In Gray-Scott models, for instance, the product of the reaction is removed from the system; hence it is a flow-type reactor. The first such system, discovered by Turing (mathematically) and, independently of him, by Belousov and Zhabotinsky (in a quite non-Russian attitude, by chemical experiment), is different: here everything remains in the system, in a perfectly circular, perpetually re-birthing (very small) “world.”
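
For readers who would like to see the mechanics of such a flow-type reactor spelled out, here is a minimal sketch of the Gray-Scott model in its standard textbook form (the parameter values are merely illustrative assumptions, not those of the applet shown above):

```python
# Minimal Gray-Scott reaction-diffusion sketch (standard textbook form).
# u is fed into the system (rate F), v is removed (rate F + k): a flow reactor.
import numpy as np

def laplacian(a):
    """Discrete Laplacian on a periodic grid (nearest neighbours)."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

def gray_scott(n=128, steps=5000, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    u = np.ones((n, n))
    v = np.zeros((n, n))
    c = n // 2
    u[c-5:c+5, c-5:c+5] = 0.50   # a small perturbed square seeds the pattern
    v[c-5:c+5, c-5:c+5] = 0.25
    for _ in range(steps):
        uvv = u * v * v                                   # autocatalytic ("strong") reaction
        u += Du * laplacian(u) - uvv + F * (1 - u)        # feed: u flows in
        v += Dv * laplacian(v) + uvv - (F + k) * v        # kill: v is washed out
    return u, v

u, v = gray_scott()
print("pattern contrast in v:", float(v.min()), float(v.max()))
```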

The diffusion brings the reactants together, which then react “chemically.” Chemical reactions combine reactants into a new substance with completely different characteristics. What happens if you bring chlorine gas into contact with the metal sodium (natrium)? Cooking salt.

In some way, we see here two different kinds of emergence, a chemical one and an informational one. Well, the second one, which is more interesting to us today, is not strictly informational; it is a dualistic emergence, spanning the material world and the informational world. (Read here about the same subject from a different perspective.)

Here we already meet a relation that can be found in any complex system, i.e. any system that exhibits emergent patterns in a sustainable manner. In this instance we meet it as chemical reactions feeding on each other. Yet the reaction does not simply stop, creating some kind of homogeneous mud… For mechanists, positivists, idealists etc., something quite counter-intuitive happens instead.

This first necessary condition (at least I suppose so) for creating complexity is probabilistic mutual counteraction, where two processes stand in a particular constellation to each other. We can identify this constellation wherever we meet “complexity”: in chemical systems, inside biological organisms at any scale, or in their behavior. So far we use “complexity” in a phenomenal, so to speak proto-empiric, manner, but this will change. Our goal is to develop a precise notion of complexity; then we will see what we can do with it.

Back to this probabilistic mutual counteraction. One of the implied “forces” accelerates and amplifies, but its range of influence is small. The other force, counteracting the strong one, is much weaker, but has a much larger range of influence [10]. Note that “force” does not imply an actor here; it should be taken more like a field.

The lesson we can take from that, as a “result” of our approach, is already great, IMHO: whenever we see something that exhibits dynamic “complexity,” we are allowed to (no, we have to!) ask about those (at least) two antagonistic processes, their quality and their mechanism. In this perspective, complexity is a paradigmatic example of a full theory, which is categorically different from a model: it provides a guideline for how to ask and which direction to take for creating a bunch of models (more details about theory here). It is also quite important to understand that asking for mechanisms here does not imply any kind of reductionism, quite the contrary.

Yet, those particularly configured antagonistic forces are not the only necessary condition. We will see that we need several more of those elements.

I just have to mention the difference between a living organism and the simulation of the chemical system above in order to demonstrate that there are entities that are vastly “more” complex than the Turing-McCabe system above. Actually, I would propose not to call such chemical systems “complex” at all, and for good reasons, as we will see shortly.

If we did not simulate such a system, that is, if we ran it as an actual chemical system, it would soon stop, even if we did not remove anything from it. What is therefore needed is a source of energy, or better of enthalpy, or better still, of neg-entropy. The system has to dissipate a lot of entropy (un-order, pure randomness, hence radiation) in order to establish a relatively “higher” degree of order.

These two principles are still not sufficient even to create self-organization phenomena like the one seen above. Yet for the remaining factors we change the (didactic) direction.

The Proposal

We already mentioned above that complexity is not a “purely” empiric phenomenon. Actually, radical empiricism has been proven to be problematic. So, what we are forced to do concerning complexity we in fact have to do in any empiric endeavor; in the case of complexity it is just overly clearly visible. What we are talking about is the deliberate a priori setting of “elements” (not axioms!). Nevertheless, we are convinced that it is possible to take a scientific perspective on complexity, in the sense of working with theories, models and predictions/diagnoses. What we propose is just not radical positivism, or the like.

Well, what is an “element”? Elements are “syndromes,” almost a kind of identifiable and partially naturalized symbol. Elements do not make sense if taken one after another; they make sense only if taken together. Yet I would not like to open the classic-medieval discourse about “ratios”…

Our elements come from basic physics, from dynamical systems showing emergent phenomena, and from abstract sign-theoretic considerations, completed by a formal argument.

We would now like to present our proposal of the five necessary elements that create complexity; by virtue of their joint effect, the whole set is then also sufficient. The five presumably essential and jointly sufficient components of complexity are:

  • (1) dissipation, deliberate creation of additional entropy by the system at hand;
  • (2) an antagonistic setting similar to the reaction-diffusion-system (RDS), as described first by Alan Turing [5], and later by Gray-Scott [11], among others;
  • (3) standardization;
  • (4) active compartmentalization;
  • (5) systemic knots.

We shall now contextualize these elements as briefly as possible.

Dissipation

Element 1: The basic element is dissipation. Dissipation means that the system produces a large amount of disorder, so-called entropy, in order to be able to establish structures (order) and to keep the overall entropy balance increasing [4]. This requires a radical openness of the system, which was recognized already by Schrödinger in his Dublin lectures [12]. Without radiation of heat, without dissipation, no new order (i.e. no local decrease of entropy) could be created. Only the overall lavish increase of entropy allows it to be decreased locally (i.e. patterns and structure to be established). If the system performs physical work while radiating heat, it can produce new patterns, i.e. order, which transcends the particles and their work. Think of the human brain, which radiates up to 100 watts in the infrared spectrum in order to create that volatile immaterial order which we call thinking. Much the same is true for other complex organizational phenomena such as “cities.”
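
The balance argument can be stated compactly; the notation below is the standard thermodynamic one, used here merely as a convenient shorthand:

```latex
% The system may lower its own entropy only if the entropy exported to the
% environment (heat, radiation) overcompensates for that local decrease.
\[
  \frac{dS_{\text{total}}}{dt}
  \;=\;
  \underbrace{\frac{dS_{\text{system}}}{dt}}_{<\,0\ \text{possible: local order}}
  \;+\;
  \underbrace{\frac{dS_{\text{environment}}}{dt}}_{>\,0:\ \text{dissipation}}
  \;\geq\; 0 .
\]
```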

Antagonistic Forces in Populations

Element 2: This increase of entropy is organized on two rather different time scales. On the short time scale we can see microscopic movements of particles, which (now invoking the second element) have to be organized in an antagonistic setting. One of the “forces,” reactions, or in general “mechanisms” should be strong, with a comparatively short spatio-temporal range of influence. The other, antagonistic force or mechanism should be comparatively weak, but, as a kind of compensation, its influence should be far-reaching in time and space. Turing showed mathematically that such a setting can produce novel patterns for a wide range of parameters. “Novelty” means that these patterns cannot be found anywhere in the local rules organizing the moves of the microscopic particles.

Turing’s system is one of constant mass; reaction-diffusion systems (RDS) may also be formulated as a flow-through reactor system (so-called Gray-Scott model [11]). In general, you can think of reaction-diffusion-systems as a sort of population-based, probabilistic Hegelian dialectics (which of course is strictly anti-Hegelian).
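
In the standard activator-inhibitor notation (a textbook convention, not the author's own formalism), this antagonism reads as follows: u is the strong, short-range mechanism, v the weak, long-range one, and the Turing instability requires the latter to spread much faster than the former:

```latex
\begin{align*}
  \frac{\partial u}{\partial t} &= D_u \nabla^2 u + f(u, v),\\
  \frac{\partial v}{\partial t} &= D_v \nabla^2 v + g(u, v),
  \qquad D_v \gg D_u .
\end{align*}
```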

Standardization

Element 3: The third element, standardization, reflects the fact that the two processes can interact intensely only if their mutual interference is sufficiently standardized. Without standardization there would not be any antagonistic process, hence also no emergence of novel patterns; the two processes would be transparent to each other. This link has been overlooked so far in the literature about complexity, and the role of standardization is likewise overlooked in theories of social or cultural evolution. Yet think only of DIN or ASCII, or of de-facto standards like “the PC”: without them, none of the subsequent emergences of more complex patterns would have happened. We can easily extend the role of standardization into the area of mechanization and automation. There is also a deep and philosophically highly relevant relation between standardization and rule-following, which we cannot pursue further here. Actually, the process of habituation, the development of a particular “common sense,” and even the problem of naming as one of the first steps of standardization not only belong to the most difficult problems in philosophy; they are actually still somewhat mysterious.

On a sign-theoretic level we may formulate standardization as a kind of acknowledged semiotics, a body of rules which tells the basic individual items of a system how to behave and how to interpret. Frequently changing codes effectively prohibits any complexity; changing codes also effectively prohibits any further progress in the sense of innovation. Codes produce immaterial enclaves. On the other hand, codes are also means of standardization. Codes thus play multiple and even contradictory roles.

Towards the Decisive Step

As already mentioned above, the emergent creation of novel and even irreducible patterns cannot be regarded as a sufficient condition for complexity. Such processes are only proto-complex; since there is no external instance ruling the whole generative process, those patterns are often called “self-organized.” This is, bluntly said, a misnomer, since then we would not distinguish between the immaterial order (on the finer time scale) and the (quasi-)material organization (on the coarser time scale). Emergent patterns are not yet organized; at best we could say that such systems are self-patterning. Obviously, strong emergence and complexity are very different things.

Compartmentalization

Element 4: What is missing in such self-organizing configurations is the fourth element of complexity, the transition from order to organization. (In a moment we will see why this transition is absolutely crucial for the concept of complexity.) Organization always means that the system has built up compartments. To achieve that, the self-organizing processes, such as those in the RDS, must produce something that then resides as something external to the dynamic process running on the short time scale. In other words, the system introduces (at least) a second, much longer time scale, since the products are much more stable than the processes themselves. This transition we may indeed call self-organization.

In biological systems those compartments are established by cell walls and membranes, which build vesicles, organs, fluid compartments and so on. In large assemblies built of matter, “cities,” we know walls and streets, but also immaterial walls made from rules, semi-permeable walls formed by steep gradients of fractal coefficients, and so on.

There are, however, also informational compartments, which are probabilistically defined, such as the endocrine system in animals or the immune system, both of which form an informational network. In the case of social organizations, compartments become tangible as strict rules, or even as domain-specific languages. In some sense, the products of the processes facilitating the transition from order to organization are always some kind of left-over: secretions, partial deaths if you like. It is in these persistent secretions that the phenomenon of growth becomes visible. From an outside perspective, this transition could also be regarded as a process of selection: starting from a large variety of slightly different and only temporarily stable patterns or forms, the transition from order to organization establishes some of them, selectively, as a matter of fact.

It is clear that the lasting products of a system act as a constraint for any subsequent processes. Durable systems may acquire the capability to act precisely on that crucial transition in order to gain better stability. One could, for instance, imagine that a system controls this transition by controlling the “temperature” of the system, high “temperatures” leading to a remelting of structures in order to allow for different patterns. This, however, raises a deep and self-referential problem: the system would have to develop a sufficiently detailed model of itself, which is not possible, since there is strong emergence in its lower layers. This problem brings us to the fifth element of complexity.

Systemic Knots

Element 5: We know from bio-organic systems that they are able to maintain themselves. This involves a myriad of mutual influences, which also span several levels of organization. For instance, the brain, and even thoughts themselves, are able to influence individual (groups of) cells [13]. The habit of drinking green tea directly acts on the DNA in the cells of the body. Such controlling influence, however, is not unproblematic. Any controlling instance can have only a partial model of the regulated contexts. By means of non-orthogonality this leads directly to the strange situation that the interests of the lower levels and those of the top levels necessarily contradict each other. This effect is just the inverse of the famous “enslaving parameter” introduced by Hermann Haken as the key element of his concept of synergetics [14].

We call this effect the systemic knot, since the relationships between elements of different layers cannot be “drawn” on a flat sheet of paper any more. Most interestingly, this latent antagonism between the levels of a system is just the precondition for a second-order complexity. Notably, we can conclude that complexity “maintains” itself: if a complex system did not cause the persistence of the complexity it builds upon, it would soon cease to be complex, as a consequence of the second law of thermodynamics. In other words, it would soon be dead.

Alternatives

Today, at the beginning of 2012, complexity has advanced into (almost) everybody’s mind, except perhaps for materialists like Slavoj Žižek, who in a recent TV interview proposed a radical formalistic materialism, i.e. that we should acknowledge that everything (!) is just formula. Undeniably, the term “complexity” has made its career during the last 25 years. Until the end of the 1980s complexity was “known” only to very few scientists. I guess that this career is somehow linked to information as we practice it.

From a system-theoretic perspective, the acceleration of many processes as compared to pre-IT times first introduced a collapse of stable boundaries (like the ocean), buzzed about as “globalization,” only to then introduce symmetry breaks in the complex system thus provoked. This instance of complexity is really a new one; we have never seen it before.

Anyway, given the popularity of the approach, one might assume that there is not only a common understanding of complexity but also an understanding that would somehow work in an appropriate manner, that is, one that does not reduce complexity to a narrow, domain-specific perspective. Apart from our proposal, however, such a concept does not exist.

The List

In the context of management research, where the notion of complexity has been discussed since the days of Peter Drucker, Robert Bauer and Mihnea Moldoveanu wrote in 2002 [15]:

‘Complexity’ has many alternative definitions. The word is used in the vernacular to denote difficult to understand phenomena, in information theory to denote incompressible digital bit strings [Li and Vitanyi, 1993], in theoretical computer science to denote relative difficulty of solving a problem [Leiserson, Cormen and Rivest, 1993] in organization theory to denote the relative degree of coupling of a many-component system [Simon, 1962], among many other uses.

They continue:

Our goal is to arrive at a representation of complexity that is useful for researchers in organizational phenomena by being comprehensive – incorporating the relevant aspects of complexity, precise – allowing them to identify and conceptualize the complexity of the phenomena that they address, and epistemologically informed – allowing us to identify the import of the observer to the reported complexity of the phenomenon being observed.

So far, so good; we could agree with that. Yet afterwards they claim the possibility of decomposing “[…] complexity into informational and computational components.” They seek support from another author: “‘Complexity’ is often thought to refer to the difficulty of producing a competent simulation of a phenomenon [Norretranders, 1995],” which is strikingly reminiscent of Luhmann’s wordy capitulation. We will return to this cybernetic attitude in the next section; here we just note this reference to (cybernetic) information and computer sciences.

Indeed, computational complexity and informational entropy are among the most popular approaches to complexity. Computational complexity is usually further operationalized in different ways, e.g. as minimal description length, accessibility of problems and solutions, etc. Informational entropy, on the other hand, is not only of little value; it also applies a strongly reductionist version of the concept of information. Entropy is little more than the claim that there is a particular phenomenon. The concept of entropy cannot be used to identify mechanisms, whether in physics or regarding complexity. More generally, any attempt to describe complexity in statistical terms fails to include the relevant issue of complexity: the appearance of “novel” traits, novel from the perspective of the system.

Another strain relates complexity to chaos, or even uses the two as synonyms. This, however, applies only in vernacular terms, in the sense of being incomprehensible. Chaos and complexity often appear in the same context, but by far not necessarily. Although complex systems may develop chaotic behavior, the link is anything but tight. There is chaotic behavior in non-complex systems (the Mandelbrot set), as well as complexity in non-chaotic systems. From a more rational perspective it would be a mistake to equate them, since chaos is a descriptive term about the further development of a system, where this description is based on operationalizations like the ε-tube or the value of the Lyapunov exponent.
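
For reference, the operationalization mentioned last is the standard textbook definition (not specific to this text): the largest Lyapunov exponent measures the average exponential rate at which two initially close trajectories diverge, and a positive value is the usual criterion for chaos:

```latex
\[
  \lambda \;=\; \lim_{t \to \infty} \, \lim_{\delta Z_0 \to 0}\;
  \frac{1}{t}\,\ln \frac{\lvert \delta Z(t) \rvert}{\lvert \delta Z_0 \rvert},
  \qquad \lambda > 0 \ \Rightarrow\ \text{exponential divergence of trajectories.}
\]
```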

Only very recently has there been an attempt to address the issue of complexity in a more reflective manner, yet it did not achieve an operationalizable concept. (…)

Problems with the Received View(s)

It is either not feasible or irrelevant to talk about “feedback” in complex systems. The issue at hand is precisely emergence; thus talking about deterministic, hard-wired links, in other words feedback, severely misses the point.

Similarly, it is almost nonsense to assign inner states to a complex system. There is neither an identifiable origin nor even the possibility of describing a formal foundation, at least if our proposal does not count as a “formal” foundation. Talk about states in complex systems, similar in this respect to the positivist claim of states in the brain, is wrong again precisely because emergence is emergence, and it is so on the (projected) “basis” of an extremely deterritorialized dynamics. There is no center, and there are no states in such systems.

On the other hand, it is also wrong to claim, at least as a general notion, that complex systems are unpredictable [16]. The unpredictability of complex systems is fundamentally different from the unpredictability of random systems. In random systems, we do not know anything except that there are fluctuations of a particular density. It is not possible here to say anything concrete about the next move of the system, only something about its probability. Whether a nuclear plant crashes tomorrow or in 100,000 years is not a subject of probability theory, precisely because local patterns are not a subject of statistics. In a way, randomness cannot surprise, because randomness is a language game meaning “there is no structure.”

Complex systems are fundamentally different. Here, all we have are patterns. We may even expect that a given pattern persists more or less stably in the near future, meaning that we would classify the particular pattern (which “as such” is unique for all times) into the same family of patterns. Further into the future, however, the predictability of complex systems is far lower than that of random systems. Yet despite this stability we must not talk about “states” here. Closely related to the invocation of “states” is the cybernetic attitude.

Yet there is an obvious divergence between what life in an ecosystem is and the description length of algorithms, or any measure of disorderliness. This categorical difference refers to the fact that living systems, as the most complex entities we know of, consist of a large number of integrative layers, logical compartments if you like. All blood cells form such a layer, as do all liver cells, all neurons, all organs, etc., but also certain functional roles. In cell biology one speaks of the genome, the proteome, the transcriptome, etc. in order to indicate such functional layers. The important lesson we can take from this is that we cannot reduce any of the more integrated layers to a lower one. All of these layers are “emergent” in the strong sense, despite the fact that they are also interconnected top-down. Any proclaimed theory of complexity that does not include the phenomenon of emergent layering should not be regarded as such a theory. Here we meet the difference between structurally flat theories or attitudes (physics, statistics, cybernetics, positivism, materialism, deconstructivism) and theories that acknowledge irreducible aspects such as structural layers and integration.

Finally, a philosophical argument. The product of complexity contains large parts about which one cannot speak in principle. The challenge is indeed a serious one. Regardless of which position we take, underneath or above the emergent phenomenon, we cannot speak about it; in both cases there is no possible language for it. This situation is very similar to the mind-body problem, where we meet a similar transition. Speaking about a complex system from the outside does not help much. We can only point to the emergence, in the Wittgensteinian sense, or focus on either the lower or the emergent layer. Both layers must remain separated in the description. We can only describe the explanatory dualism [17].

This does not mean that we cannot speak about it at all. We just cannot speak about it in analytic terms, as structurally flat theories do, insofar as they do not refuse any attempt to understand the mechanisms of complexity, and the speaking thereof, anyway. One alternative for integrating that dualism in a productive manner is the technique of elementarization, as we have attempted here.

Conclusion

As always, we separate the conclusions first into philosophical aspects and the topic of machine-based epistemology and its “implementation.”

Neil Harrison, in his book about unrecognized complexity in politics [16], correctly wrote:

Like realism, complexity is a thought pattern.

From a completely different domain, Mainzer [7] similarly wrote:

From a logical point of view, symmetry and complexity are syntactical and semantical properties of theories and their models.

We have also argued that the property of emergence causes an explanatory dualism, which withstands any attempt at an analytic formalization in positive terms. Yet even though we cannot apply logical analyticity without losing the central characteristics of complex systems, we can indeed symbolize it. This step of symbolization is distantly similar to the symbolization of zero, or that of infinity.

Our proposal here is to apply the classic “technique” of elementarization. Thus we propose a “symbolization” that is not precisely a symbol; it is more like an (abstract) image. None of the parts or aspects of an image can be taken separately to describe the image, nor may we omit any of those aspects. This link between images and complexity, or images and explanatory dualism, is an important one which we will follow up in another chapter (about “Waves, Words and Images”).

“Elements” (capital E) are not only powerful instruments, they are also indispensable. Elements are immaterial, abstract, formless entities; their only property is the promise of a basic constructive power. Elements are usually assumed not to be further decomposable without losing their quality. That is true for our big five as well as for the chemical elements, and also for Euclid’s Elements of geometry. In our case, however, we are also close to Aristotle’s notion of elements, according to which the issue at hand (for him and his fellows, the “world”) cannot be described by any isolated subset of them. The five elements are all necessary, but they are sufficient only if they appear together.

The point of Elements and their usage is that they allow for new perspectives and for a new language. In this way, the notion of “complexity” is itself taken by many as an element. Yet it is only a pseudo-element, an idol, because (i) it is not abstract enough and (ii) it can be described by means of more basic aspects. This, of course, depends on the perspective.

Anyway, our theoretical framework allows us to distinguish between various phenomena around complex systems in a way that is not accessible through other approaches. Similarly to the theory of natural evolution, our theory of complexity is a framework, not a model. It cannot be tested empirically in a direct manner. Yet it has predictive power on the qualitative level, and it allows us to develop means for synthesizing complexity, and even for qualitative predictions.

Deleuze (p.142 in [18]) provided a highly convincing and, IMHO, still unrivaled description of complexity in less than three short pages. The trinity between the broiling of “particled matter” and “bodies” below, the pattern above, and the dynamic appearance of emergence he called “sense”; he even attributed to that process the label “logic.” It is amazing that Deleuze, in his book about the logic of sense, focused strongly on paradoxes, i.e. antagonistic forces, and succeeded in remaining self-consistent in his series 16 and 17 (the paradox of the logics of genesis), while we today can determine the antagonism (of a particular configuration) as one of the main necessary Elements of complexity.

What is the role of complexity in cognition? To approach this question we have to recognize what is happening in a complex process: a pattern, a potential novelty, appears out of randomness. Yet if the pattern does not stabilize, it will sink back into randomness. However, patterns are temporarily quite persistent in all complex systems. That means that those merely potential novelties can make their way to becoming subjects of selection, which stabilizes the volatile pattern over time, changing it into an actual novelty.

Thus complexity lies between the randomness of scattered matter and established differences, a virtual zone between matter and information, between the material and the immaterial. Deleuze called it, as we just said, the paradox of the logics of genesis [18].

The consequence is pretty clear: Self-Organizing Maps are not sufficient. They just establish a volatile, transient order, yet what we need to create is complexity. There is even a matching result from neuro-cognitive research: the EEG of more intelligent people shows a higher “degree” of complexity. We already know that we can achieve this complexity only by animal-like growth and differentiation. There is a second reason why the standard implementation of the SOM is not sufficient: it is not probabilistic enough, since almost all properties of the “nodes” are fixed at implementation time without any chance of a break-out. For instance, SOMs are mostly realized as static grids, the transfer mechanism is symmetric (circle, ellipsoid), they do not have a “state” apart from the collected extensions, and there are only informational antagonisms, not chemical ones or those related to the matter of a body.
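
To make those fixed properties visible, here is a minimal sketch of the standard SOM update step (our own illustration; grid size, learning rate and radius are hypothetical values, not a recommended configuration): the grid is frozen at construction time and the neighborhood function is a symmetric Gaussian, so there is no room for the kind of “break-out” demanded above.

```python
# Standard SOM update step, shown only to exhibit the criticized rigidity.
import numpy as np

class StandardSOM:
    def __init__(self, rows=20, cols=20, dim=3, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.random((rows, cols, dim))   # static grid, fixed forever
        self.coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                           indexing="ij"), axis=-1)

    def train_step(self, x, lr=0.1, radius=3.0):
        # best-matching unit: plain Euclidean distance, no internal "state"
        d = np.linalg.norm(self.weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # symmetric (circular) Gaussian neighbourhood around the BMU
        grid_d2 = ((self.coords - np.array(bmu)) ** 2).sum(axis=-1)
        h = np.exp(-grid_d2 / (2 * radius ** 2))
        self.weights += lr * h[..., None] * (x - self.weights)

som = StandardSOM()
for _ in range(1000):
    som.train_step(np.random.rand(3))
```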

This will eventually lead to a completely novel architecture for the SOM (which we will soon offer on this site).

This article was first published on 20/10/2011; the last substantial revision and re-publication dates from 15/01/2012. The core idea of the article, the elementarization of complexity, has been published in [19].

  • [1] Niklas Luhmann, Soziale Systeme. Grundriss einer allgemeinen Theorie. Frankfurt 1987. p.46/47. cited after [2].
  • [2] Frank Eckardt, Die komplexe Stadt: Orientierungen im urbanen Labyrinth. Verlag für Sozialwissenschaften, Wiesbaden 2009. p.132.
  • [3] Andrea Gleiniger, Georg Vrachliotis (eds.), Komplexität: Entwurfsstrategie und Weltbild. Birkhäuser, Basel 2008.
  • [4] Lewin 2000
  • [5] Alan M. Turing (1952), The Chemical Basis of Morphogenesis. Phil. Trans. Royal Soc. Series B, Biological Sciences, Vol. 237, No. 641, pp. 37-72. available online.
  • [6] Belousov
  • [7] Klaus Mainzer (2005), Symmetry and complexity in dynamical systems. European Review, Vol. 13, Supp. No. 2, 29–48.
  • [8] Chalmers 2000
  • [9] Michel Foucault
  • [10] Michael Cross, Notes on the Turing Instability and Chemical Instabilities. mimeo 2006. available online (mirrored)
  • [11] Gray, Scott
  • [12] Schrödinger, What is Life? 1948.
  • [13] Ader, Psychoneuroimmunology. 1990.
  • [14] Hermann Haken, Synergetics.
  • [15] Robert Bauer, Mihnea Moldoveanu (2002), In what Sense are Organizational Phenomena complex and what Differences does their Complexity make? ASAC 2002 Winnipeg (Ca)
  • [16] Neil E.Harrison, Thinking about the World we Make. in: same author (ed.), Complexity in World Politics Concepts and Methods of a New Paradigm. SUNY Press, Albany 2006.
  • [17] Nicholas Maxwell (2000) The Mind-Body Problem and Explanatory Dualism. Philosophy 75, 2000, pp. 49-71.
  • [18] Gilles Deleuze, Logic of Sense. 1968. German edition, Suhrkamp Frankfurt.
  • [19] Klaus Wassermann (2011). Sema Città-Deriving Elements for an applicable City Theory. in: T. Zupančič-Strojan, M. Juvančič, S. Verovšek, A. Jutraž (eds.), Respecting fragile places, 29th Conference on Education in Computer Aided Architectural Design in Europe eCAADe. available online.

۞

Non-Turing-Computing

October 28, 2011

At first sight it may sound like a bad joke, indeed.

Turing not only provided many important theoretical insights into computing [1], including the Universal Turing Machine (UTM); he and his group at Bletchley Park also created a working prototype that employed those theoretical results [2].

Turing Computation

In order to clarify what non-Turing computing could be, we first have to inspect a bit more closely how Turing computing is defined. On Wikipedia one can find the following explanation in standard language:

With this encoding of action tables as strings it becomes possible in principle for Turing machines to answer questions about the behavior of other Turing machines. Most of these questions, however, are undecidable, meaning that the function in question cannot be calculated mechanically. For instance, the problem of determining whether an arbitrary Turing machine will halt on a particular input, or on all inputs, known as the Halting problem, was shown to be, in general, undecidable in Turing’s original paper. Rice’s theorem shows that any non-trivial question about the output of a Turing machine is undecidable.

A universal Turing machine can calculate any recursive function, decide any recursive language, and accept any recursively enumerable language. According to the Church-Turing thesis, the problems solvable by a universal Turing machine are exactly those problems solvable by an algorithm or an effective method of computation, for any reasonable definition of those terms.

One could add, firstly, that any recursive algorithm can be linearized (and vice versa). Secondly, algorithms are defined as procedures that produce a defined result after a finite amount of time.

Here we already meet the first problem in computational theory: what is a result? Is it a fixed value, or would we also accept a probability density, or even a class of those (like Dirac’s delta), as a result? Even non-deterministic Turing machines yield unique results. The alternative of an indeterminable result sounds quite counter-intuitive, and I suppose that it indeed cannot be subsumed under the classical theory of computability. It would simply mean that the results of a UTM are only weakly predictable. We will return to that point a bit later.

Another issue is induced by problem size. While analytic undecidability makes it impossible for the putative computational procedure to stop, sheer problem size may render a problem as if it were undecidable. Solution spaces can be really large, beyond 10^2000 possible solutions. Compare this to the estimated 10^80 atoms of visible matter in the whole universe. Such solution spaces are also an indirect consequence of Quine’s principle of underdetermination of an empirical situation, which results in the epistemological fact of indeterminacy of any kind of translation. We will discuss this elsewhere (not yet determined chapter…) in more detail.

From the perspective of an entity searching through such a large solution space it does not matter very much whether the solution space is ill-defined or merely vast; from the perspective of the machine controller (“user”) both cases belong to the same class of problems: there is no analytic solution available. Let us now return to the above-cited question about the behavior of other entities. Even in the trivial case where the interactee is a Turing machine, the question about its behavior is undecidable. That means that no kind of interaction can be computed using a UTM, particularly not interaction between epistemic beings. Besides the difficulties this raises for the status of simulation, it means that we need an approach which is not derived from, or included in, the paradigm established by the Church-Turing thesis.
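
The classical diagonal argument behind this undecidability can be sketched in a few lines (a standard illustration, not a construction specific to this text); the names `halts` and `diagonal` are hypothetical:

```python
# If a total, always-correct predicate `halts(program, argument)` existed,
# the program `diagonal` below would halt if and only if it does not halt.
# Hence no UTM can decide such questions about the behavior of other machines.
def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) terminates."""
    raise NotImplementedError("cannot exist in general (Turing 1936)")

def diagonal(program):
    # do the opposite of whatever the oracle predicts for self-application
    if halts(program, program):
        while True:            # loop forever if predicted to halt
            pass
    return "halted"            # halt if predicted to loop forever

# Considering diagonal(diagonal) yields the contradiction.
```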

The UTM, as the abstract predecessor of today’s digital computers, is based on the operations of writing and deleting symbols. Before a UTM can start to work, the task to be computed needs to be encoded. Once the task has actually been encoded, including the rules necessary to accomplish the computation, everything that happens is just an almost material moving of semantically empty graphemes. (We avoid calling the 0 and 1 “symbols” here, since “symbol” is a compound concept and hence could introduce complications into our investigation.) During the operations of the UTM, the total amount of information is constantly decreasing. Moreover, a UTM is not only initially completely devoid of any meaning; it remains semantically empty during the whole period it works on the task. Any meaning concerning the UTM remains forever outside the UTM. This remains true even if the UTM were to operate at the speed of light.

Note that we are not discussing the architecture of an actual physical computing device. Everybody uses devices that are built according to the von Neumann architecture; there are very few (artificial) computers on this earth not following this paradigm. Yet it is unclear why DNA computers or even quantum computers should not fall into this category. Their processing is different from that of an instance which computes based on traditional logics, physically realized as transistors. Yet the von Neumann architecture does not make any proposal about the processor except that there needs to be one. Such fancy computers still need persistent storage, a bus system, and encoding and decoding devices.

As said, our concern is not the architecture, or, even more trivially, different speeds of calculation. Hence the question of non-Turing computing is also not a matter of accuracy. For instance, it is sometimes claimed that a UTM can simulate an analog neural net with arbitrary accuracy. (More on that later!) The issue at stake has much more to do with the role of encoding, the status of information, and being an embodied entity than with the question of how to arrange physical components.

Our suggestion here is that probably any kind of computer could be used in such a way that it changes into a non-Turing computer. In order to deal with this question, we first have to discuss the contemporary concept of “computation.”

Computation

Getting clear about the concept of “computation” does not include the attempt to find an answer to the question “What is computation?”, as for instance Jack Copeland tried [3]. Such a question cannot be part of any serious attempt at getting clear about the concept, precisely because it is not an ontological question. There are numerous attempts to define computation and then to invoke some intuitively “clear” or otherwise “indisputable” “facts,” only in order to claim an ontological status for the respective proposal. This of course is ridiculous, at least nowadays, after the Linguistic Turn. The conflation of definitory means and ontic status is just (very) naive metaphysics, if not to say esotericism in scientific-looking clothes. The only thing we can do is get clear about possible “reasonable” ways of using the concepts in question.

In philosophy of mind and cognitive science, and thus also for our investigation of machine-based epistemology, the interest in getting clear about computation arises from two issues. First, there is the question whether, and if so to what extent, the brain can be assigned “a computational interpretation.” To address this question we have to clarify what “computing” could mean and whether the concept of “brain” could match any of the reasonable definitions of computing. Second, as a matter of fact we know, before any such investigation, that in order to create a machine able to follow epistemological topics we have at least to start with some kind of programming. The question here is simply how to start practically. This concerns methods, algorithms, or machine architectures. A hidden but important derivative of this question concerns the possible schemes of differentiation of an initial artifact, which indeed is likely to be just software running on a contemporary standard digital computer.

These questions related to the mind are not the focus of this chapter; we will return to them elsewhere. First, and that is our interest here, we have to clarify the usage of the concept of computation. Francesco Nir writes [4]:

According to proponents of computationalism, minds are computers, i.e., mechanisms that perform computations. In my view, the main reason for the controversy about whether computationalism is accurate in its current form, or how to assess its adequacy is the lack of a satisfactory theory of computation.

It is obvious that not only the concepts of computation, brain and mind are at stake and have to be clarified, but also the concept of theory. If we followed a completely weird concept of “theory,” i.e. if our attempts tried to follow an impossible logical structure, we would have no chance of finding appropriate solutions to those questions. We would not even be able to find appropriate answers about the role of our actions. This, of course, is true for any work; hence we will discuss the issue of “theory” in detail in another chapter. Similarly, it would definitely be too limited to conceive of a computer just as a digital computer running some algorithm (all of which are finite by definition).

The history of computation as an institutionalized activity starts in the Middle Ages. Of course, people performed calculations long before that. The ancient Egyptians even used algorithms for problems that cannot be written in closed form. In classical antiquity there were algorithms to calculate pi or square roots. Yet only in the Middle Ages did the concept of “computare” acquire a definite institutional, i.e. functional, meaning: it referred to the calculation of future Easter dates. The first scientific attempts to define computation start mainly with works published by Alan Turing and Alonzo Church, which were later combined into the so-called Church-Turing thesis (CTT).

The CTT is a claim about effectively computable functions, nothing more, nothing less. Turing found that everything which is computable in finite time (and hence also on a finite tape) by his a-machine (later called the Turing machine) is equivalent to the λ-calculus. In effect, computability is equated with the series of actions a Turing machine can perform. As stated above, even Universal Turing Machines (UTM) cannot solve the Halting Problem; there are even functions that cannot be decided by a UTM.

It has been claimed that computation is just the sequential arrangement of input, transformation, and output. Yet, as Copeland and Nir correctly state, citing Searle therein, this would render even a wall into a computer. So we need something more exact. Copeland ends with the following characterization:

“It is always an empirical question whether or not there exists a labelling of some given naturally occurring system such that the system forms an honest model of some architecture-algorithm specification. And notwithstanding the truism that ‘syntax is not intrinsic to physics’ the discovery of this architecture-algorithm specification and labelling may be the key to understanding the system’s organisation and function.”

The strength of this attempt is the incorporation of the relation between algorithm and (machine) architecture into the theory. The weakness lies in the term “honest,” which is completely misplaced in the formal arguments Copeland builds up. If we remember that “algorithm” means “definite results in finite time and space,” we quickly see that Copeland’s concept of computation is far too narrow.

Recently, Wilfried Sieg tried to clarify the issues around computation and computability in a series of papers [5,6]. Similarly to Nir (see above), he starts his analysis by writing:

“To investigate calculations is to analyze symbolic processes carried out by calculators; that is a lesson we owe to Turing. Taking the lesson seriously, I will formulate restrictive conditions and well motivated axioms for two types of calculators, namely, for human (computing) agents and mechanical (computing) devices. My objective is to resolve central foundational problems in logic and cognitive science that require a deeper understanding of the nature of calculations. Without such an understanding, neither the scope of undecidability and incompleteness results in logic nor the significance of computational models in cognitive science can be explored in their proper generality.” [5]

Sieg largely follows (and improves) an argument originally developed by Robin Gandy. He characterizes it as follows (p.12):

“Gandy’s Central Thesis is naturally formulated as the claim that any mechanical device can be represented as a dynamical system satisfying the above principles.”

By this he meant four limiting principles that prevent everything from being regarded as a computer. He then proceeds:

I no longer take a Gandy machine to be a dynamical system 〈S, F〉 (satisfying Gandy’s principles), but rather a structure M consisting of a structural class S of states together with two kinds of patterns and operations on (instantiations of) the latter;”

[decorations by W.Sieg]

What is a dynamical system for Sieg and Gandy? Just before (p.11), Sieg describes it as follows:

“Gandy’s characterization […] is given in terms of discrete dynamical systems 〈S, F〉, where S is the set of states and F governs the system’s evolution. More precisely, S is a structural class, i.e., a subclass of the hereditarily finite sets HF over an infinite set U of atoms that is closed under ∈-isomorphisms, and F is a structural operation from S to S, i.e., a transformation that is, roughly speaking, invariant under permutations of atoms. These dynamical systems have to satisfy four restrictive principles.”

[decorations by W.Sieg]

We may drop further discussion of these principles, since they just add further restrictions. From the last two quotes one can see two important constraints. First, the dynamical systems under consideration are of a discrete character. Second, any transformation leads from a well-defined (and unique) state to another such state.

The basic limitation is already present in the very first sentence of Sieg’s paper: “To investigate calculations is to analyze symbolic processes carried out by calculators.” There are two basic objections, which lead us to deny Sieg’s claim that his approach provides the basis for a general account of computation.

Firstly, from epistemology it is clear that there are no symbols out in the world. We cannot even transfer symbols directly between brains or minds in principle; we just say so in a very abbreviated manner. Even if our machine worked completely mechanically, Sieg’s approach would be insufficient to explain a “human computor.” His analysis is valid only for machines belonging (as a subclass) to the group of Turing machines that run finite algorithms. Hence his analysis also suffers from the same restrictions: Turing machines cannot make any proposal about other Turing machines. We may summarize this first point by saying that Sieg thus commits the same misunderstanding as the classical (strong) notion of artificial intelligence did. Meanwhile there has been a large, extensive and somewhat bewildering debate about symbolism and sub-symbolism (in connectionism) that stopped only due to the exhaustion of the participants and the practical failure of strong AI.

The second objection against Sieg’s approach comes from Wittgenstein’s philosophy. According to Wittgenstein, we cannot have a private language [8]. In other words, our brains cannot have a language of thinking, as such a homunculus arrangement would always be private by definition. Searle and Putnam agree on this in rare concordance. Hence it is also impossible that our brain is “doing calculations” as something different from the activities we perform when calculating with pencil and paper, or sand, or a computer and electricity. This brings us to a widespread misunderstanding about what computers really do. Computers do not calculate. They do not calculate in the same respect as our human brain does not calculate. Computers just perform moves, deletions and, according to their theory, sometimes also insertions in a string of atomic graphemes. Computers do not calculate, just as the pencil does not calculate while we use it to write formulas or numbers. The same is true for the brain. What we call calculation is the assignment of meaning to a particular activity that is embedded in the Lebenswelt, the general fuzzy “network,” or “milieu,” of rules and acts of rule-following. Meaning, on the other hand, is not a mental entity, as Wilhelm Vossenkuhl emphasizes throughout his interpretation of Wittgenstein’s work.

The obvious fact that we as humans are capable of using language and symbols again brings to the foreground a question we have already addressed elsewhere (in our editorial essay): How do words acquire meaning? (van Fraassen), or, in terms of the machine-learning community: How are symbols grounded? Whatever the answer will be (we will propose one in the chapter about conditions), we should not fallaciously take the symptom—using language and symbols—for the underlying process, “cause”, or structure. Using language clearly does not indicate that our brain is employing language to “have thoughts.”

There are still other suggestions for a theory of computation. Yet they either can be subsumed under the three approaches discussed here, provided by Copeland, Fresco, and Sieg, or they fall short of the distinction between Turing computability, calculation and computation, or they are merely confused by the shortfalls of reductionist materialism. An example is the article by Goldin and Wegner, where computation is basically equated with interaction [9].

As an intermediate result we can state that there is so far no theory of computation that would be appropriate to serve as a basis for the debate about the epistemological and philosophical issues surrounding our machines and our minds. So, how should we conceive of computation?

Computation: An Extended Perspective

All of the theories of computation refer to the concept of the algorithm. Yet even deterministic algorithms may run forever if the solution space is defined in a self-referential manner. There are also many procedures that can be made to run on a computer which follow “analytic rules” and never stop running. (By an “analytic rule” we understand a definite, completely determined and encoded rule that may be run on a UTM.) A minimal example is sketched below.
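As a toy illustration (our own example, not taken from the literature discussed here), consider a fully determined, completely encoded rewriting rule that can be run on any computer and yet never stops:

```python
# A minimal sketch (not from the source): a fully determined, encoded
# rewriting rule that runs forever. Each step replaces every 'a' by 'ab'
# and every 'b' by 'a' (the Fibonacci substitution). The rule is entirely
# "analytic" in the above sense, yet the procedure never terminates,
# since the string grows with every step.
def rewrite(s: str) -> str:
    return "".join("ab" if c == "a" else "a" for c in s)

def run_forever(seed: str = "a") -> None:
    s = seed
    while True:          # deterministic, encoded, and non-terminating
        s = rewrite(s)
        print(len(s))    # the lengths follow the Fibonacci numbers
```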

Here we meet again the basic intention of Turing: his work in [1] was about the calculability of functions. In other words, time is essentially excluded from his notion (and also from Sieg’s and Gandy’s extensions of Turing’s work). It does not matter whether the whole of the symbol manipulations is accomplished in a femtosecond or in a gigasecond. Ontologically, there is just a single block: the function.

At this point we can easily recognize the different ways of branching off from the classical, i.e. Turing-theory-based, understanding of computation. Since Turing’s concept is well defined, there are obviously many ways to conceive of something different. These, however, boil down to three principles.

  • (1) referring to (predefined) symbols;
  • (2) referring to functions;
  • (3) based on uniquely defined states.

Any kind of Non-Turing computation can be characterized by reference to these principles, namely by which of them it departs from; the principles may also be combined. For instance, algorithms in the standard definition, as first given by Donald Knuth, refer to all three of them, while some computational procedures, like the Game of Life or some so-called “genetic algorithms” (which are not algorithms by definition), do not necessarily refer to (2) and (3). We may loosely distinguish weakly Non-Turing (WNT) structures from strongly Non-Turing (SNT) structures. A sketch of the Game of Life case follows below.
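For illustration, here is a minimal sketch of the Game of Life update rule (assuming a finite grid with wrap-around edges): the procedure is pure local rule-following; it is not set up as the evaluation of a predefined function, and it has no halting condition.

```python
# A minimal sketch (assumption: a finite grid with wrap-around edges).
# The Game of Life proceeds by pure local rule-following: each cell looks
# only at its eight neighbours. Nothing in the rule refers to a function
# that the run is supposed to evaluate, nor to a halting condition.
def life_step(grid):
    """Apply one synchronous update of Conway's rules to a 2D grid of 0/1."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            alive = sum(grid[(r + dr) % rows][(c + dc) % cols]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
            # survival with 2 or 3 live neighbours, birth with exactly 3
            new[r][c] = 1 if (alive == 3 or (alive == 2 and grid[r][c])) else 0
    return new
```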

All three principles vanish, and thus the story about computation changes completely, if we allow for a signal horizon inside the machine process. Immediately, we would have myriads of read/write devices all working on the same tape. Note that this situation does not amount to parallel processing in the usual sense, where one has lots of Turing machines, each working on its own tape. Such parallelism is equivalent to a single Turing machine, just working faster. Of course, exactly this is what is intended in standard parallel processing as it is implemented today.

Our shared-tape parallelism is strikingly different. Here, even though we would still have “analytic rules,” the effect of the signal horizon could be dramatic. I guess exactly this was the basis for Turing’s interest in the principles of morphogenesis [10]. Although we only have determinate rules, we find the emergence of properties that cannot be predicted on the basis of those rules, neither quantitatively nor, even more importantly, qualitatively. There is not even the possibility of a language on the lower level to express what has emerged from it. Such an embedding turns our analytic rules into “mechanisms.” A toy sketch of such a shared tape is given below.
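The following toy sketch (our own construction, not a proposal from the literature) shows what such shared-tape parallelism with a signal horizon could look like: several read/write heads act on one and the same tape, and each head sees only the cells within a small radius.

```python
# A toy sketch (not from the source): many read/write heads acting on ONE
# shared tape, each restricted to a local "signal horizon" of radius 1.
# The local rule is fully deterministic, yet each head only "sees" its
# immediate neighbourhood, so global behaviour is not organized around a
# single well-defined machine state.
import random

def step(tape, heads, radius=1):
    length = len(tape)
    for i, pos in enumerate(heads):
        # read only the cells within the signal horizon
        window = [tape[(pos + d) % length] for d in range(-radius, radius + 1)]
        # deterministic local rule: write the parity of the neighbourhood
        tape[pos] = sum(window) % 2
        # move right or left depending on what was read locally
        heads[i] = (pos + (1 if window[0] == window[-1] else -1)) % length
    return tape, heads

tape = [random.randint(0, 1) for _ in range(64)]
heads = random.sample(range(64), k=8)      # eight heads on the same tape
for _ in range(100):
    tape, heads = step(tape, heads)
```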

Due to the determinateness of the rules we may still talk about computational processes. Yet there are no calculations of functions any more. The solution space gets extended by performing the computation. It is an empirical question to what extent we can use such mechanisms, and systems built from such mechanisms, to find “solutions.” Note that such solutions are not intrinsically given by the task; nevertheless, from the perspective of usage, they may help us to proceed.

A lot of debate about deterministic chaos, self-organization, and complexity is invoked by such a turn. The topic of complexity, at least, we will discuss in detail elsewhere. Notwithstanding, we may call any process that is based on mechanisms and that extends the solution space by its own activity Proper Non-Turing Computation.

Non-Turing Computation

We now have to discuss the concept of Non-Turing Computation (NTC) more explicitly. We will not talk here about Non-deterministic Turing Machines (NTM), and also not about exotic relativistic computers, i.e. Turing machines running in a black hole or its vicinity. Note also that as long as we engage in an activity that is finally going to be interpreted as the solution of a function, we are still within the area defined by Turing’s theory, whether such an activity is based on so-called analog computers, DNA or quantum dots. A good example of such a misunderstanding is given in [11]. MacLennan [12] emphasizes that Turing’s theory is based on a particular model (or class of models) and its accompanying axiomatics. Based on a different model we arrive at a different way of computation. Although MacLennan provides a set of definitions of “computation” against the background of what he labels “natural computation,” his contribution remains too superficial for our purposes (he also does not distinguish between the mechanistic and the mechanismic).

First of all, we distinguish between “calculation” and “computation.” Calculating lies completely within the domain of the axiomatic use of graphemes (again, we avoid using “symbol” here). An example is 71+52. How do we know that the result is 123? Simply by following determinate rules that are all based on mathematical axioms. Such calculations do not add anything new, even if a particular one is being performed for the first time ever. Their solution space is axiomatically confined. Thus the UTM and the λ-calculus are equivalent, just as mathematical calculation and calculations performed by a UTM or by humans are equivalent. In this way, calculation is equivalent to following the defined deterministic rules. We achieve the results by combining a mathematical model and some “input” parameters. Note that this “superposition” destroys information. Remarkably, neither the UTM nor its physical realization as a package consisting of digital electronics and a particular kind of software can be conceived as a body, not even metaphorically. A minimal sketch of such pure rule-following is given below.
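As an illustration of such pure rule-following (our own sketch), here is 71+52 performed entirely by table lookup over digit graphemes; the digit-addition table plays the role of the axiomatic basis, and the result 123 is therefore fixed in advance.

```python
# A minimal sketch (not from the source): adding 71 + 52 purely by
# rule-following over graphemes. The digit table plays the role of the
# axiomatic basis; no step adds anything beyond what the rules already
# fix, which is why the result 123 is determined in advance.
DIGITS = "0123456789"

def add_graphemes(a: str, b: str) -> str:
    a, b = a.zfill(max(len(a), len(b))), b.zfill(max(len(a), len(b)))
    carry, out = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        s = DIGITS.index(x) + DIGITS.index(y) + carry   # table lookup
        out.append(DIGITS[s % 10])
        carry = s // 10
    if carry:
        out.append(DIGITS[carry])
    return "".join(reversed(out))

assert add_graphemes("71", "52") == "123"
```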

In contrast to that, by introducing a signal horizon we get processes that provoke a basic duality. On the one hand they are based on rules which can be written down explicitly; they may even be “analytic.” Nevertheless, if we run these rules under the condition of a signal horizon, we get (strongly) emergent patterns and structures. The description of those patterns or structures cannot, in principle, be reduced to the descriptions of the rules (or to the rules themselves). This holds even for those cases where the rules on the micro-level are indeed algorithms, i.e. rules delivering definite results in finite time and space.

Still, we have a lot of elementary calculations, but the result is not given by the axioms according to which we perform these calculations. Notably, introducing a signal horizon is equivalent to introducing the abstract body. So what should we call calculations that extend their own axiomatic basis?

We suggest that this kind of process could be called Non-Turing Computation, despite the fact that Turing was definitely aware of the constraints of the UTM, and despite the fact that it was Turing who invented the reaction-diffusion system as a non-UTM mechanism.

The label Non-Turing Computation just indicates that

  • – there is a strong difference between calculations under the conditions of functional logic (λ-calculus) and calculations in an abstract, and of course also in a concrete, body, implied by the signal horizon and the related symmetry breaking; the former may be called (determinate) calculation, the latter (indeterminate) computation;
  • – the calculations on the micro-level extend the axiomatic basis on the macro-level, so that “local algorithmicity” no longer coincides with “global algorithmicity”;
  • – nevertheless, all calculations on the micro-level may be given explicitly as (albeit “local”) algorithms.

Three notes are indicated here. Firstly, it does not matter for our argument whether in a real body there are actually strict rules “implemented” as in a digital computer. The assumption that there are such rules plays the role of a worst-case assumption: if it is possible to get a non-deterministic result despite the determinacy of calculations on the micro-level, then we can proceed with our claim that a machine-based epistemology is possible. At the same time, this argument does not necessarily support either the perspective of functionalism (claiming the statefulness of entities) or that of computationalism (grounded in an “algorithmic framework”).

Secondly, despite the simplicity and even analyticity of the local algorithms, a UTM is not able to calculate a physical actualization of a system that performs Non-Turing computations; it is simply not defined in a way that would allow it to. One of the consequences of embedding trivial calculations into a factual signal horizon is that the whole “system” no longer has a defined state. Of course we can interpret the appearance of such a system and classify it. Yet we can no longer claim that the “system” has a state which could be analytically defined or recognized as such. Such a “system” (like a reaction-diffusion system) cannot be described within a framework that allows only unique states, such as the UTM, nor can a UTM represent such a system. Here many aspects come to the fore that are closely related to complexity; we will discuss them there.

The third note, finally, concerns the label itself. Non-Turing computation could be any computation based on a customizable engine where there is no symbolic encoding, or no identifiable states while the machine is running. Besides complex systems, there are other architectures, such as so-called analog computers. In some quite justifiable way, we could indeed conceive of the simulation of a complex self-organizing system as an analog computer. Another possibility is given by evolvable hardware, such as FPGAs, even if the actual programming is still based on symbolic encoding. Finally, it has been suggested that any mapping of real-world data (e.g. sensory input) that are representable only by real numbers onto a finite set of intensions is also Non-Turing computation [13]; a toy version of such a mapping is sketched below.
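A toy version of such a mapping might look as follows (a deliberate simplification, not the method proposed in [13]): real-valued input is assigned to the nearest of a finite set of hypothetical prototypes, each standing for an intension.

```python
# A toy sketch (a simplification, not the method of [13]): real-valued
# sensory input is mapped onto a finite set of "intensions", here simply
# nearest prototypes. The prototypes are hypothetical and chosen for
# illustration only; the point is just the reduction of a continuum of
# possible inputs to finitely many classes.
from math import dist

PROTOTYPES = {            # hypothetical intensions
    "cold": (5.0, 0.2),
    "warm": (20.0, 0.5),
    "hot":  (35.0, 0.8),
}

def intension(sample):
    """Assign a real-valued sample (e.g. temperature, humidity) to an intension."""
    return min(PROTOTYPES, key=lambda k: dist(sample, PROTOTYPES[k]))

print(intension((22.3, 0.61)))   # -> 'warm'
```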

What is the result of an indeterminate computation, or, to use the redefined term, a Non-Turing computation? We can no longer expect “unique” results. Sometimes there might be several results at the same time. A solution might even lie outside the initial solution space, causing a particular blindness of the instance performing Non-Turing computations toward the results of its own activities. Dealing with such issues cannot be regarded as a matter for a theory of calculability, or for any formal theory of computation. Formal theories cannot deal with the self-induced extension of solution spaces.

The Role of Symbols

Before we draw a conclusion, we have to discuss the role of symbols. Here we have, of course, to refer to semiotics. (…)

keywords: C.S. Peirce, symbolism, (pseudo-)sub-symbolism, data type in NTC as actualization of associativity (which could be quite different), network theory (there: randolation)

Conclusion

Our investigation of computation and Non-Turing computation yields a distinction between different ways of actualizing Non-Turing computation. Yet there is one particular structure that is so different from Turing’s theory that it cannot even be compared to it. Naturally, this concerns the penultimate precondition of Turing machines: axiomatics. If we perform a computation in the sense of strong rule-following, which could even be based on predefined symbols, we may nevertheless end up with a machine that extends its own axiomatic basis. For us, this seems to be the core property of Non-Turing Computation.

Yet such a machine has not been built so far. We have provided just the necessary conditions for it. It is clear that what is mainly missing for an actualization of such a machine is the software. If such a machine were to exist in the near future, however, this would also have consequences for the status of the human mind, though rather undramatic ones.

Our contribution to the debate about the relation of “computers” and “minds” spans three aspects. Firstly, it should be clear that the traditional frame of “computationalism,” mainly based on the equivalence to the UTM, can be recognized as an inappropriate hypothesis. For instance, questions like “Is the human brain a computer?” can be identified as inadequate, since it is not a priori clear what a computer should be (besides thereby falling into the anti-linguistic trap). David King even asked, still more misleadingly, “Is the human mind a Turing machine?” [14] King concludes that:

“So if we believe that we are more than Turing machines, a belief in a kind of Cartesian dualist gulf between the mental and the physical seems to be concomitant.”

He arrives at that (wrong) conclusion by some (deeply non-Wittgensteinian) reflections about the actual infinite and Cantor’s (nonsensical) ideas about it. It is simply an ill-posed question whether the human mind can solve problems a UTM can’t. Most of the problems we as humans deal with all day long cannot be “solved” (within the same day), and many cannot even be represented to a UTM, since this would require a definite encoding into a string of graphemes. Indeed, we can deal with those problems without solving them “analytically.” King is not aware of the poison of analyticity imported through the direct comparison with the UTM.

This brings us to the second aspect, the status of mechanisms. The denial of the superiority, or even of the equality, of brains and UTMs does not amount to the acceptance of some top-down principle, as King suggests in the passage cited above. UTMs, like any other algorithmic machine, rest on a finite-state control (a finite state automaton, FSA). FSAs, and even probabilistic or non-deterministic FSAs, totalize the mechanics such that the machine becomes equivalent to a function, as Turing himself clearly stated. Yet the brain and mind could be recognized as something that indeed rests on very simple (material) mechanisms, while these mechanisms (say, algorithms) are definitely not sufficient to explain anything about the brain or the mind. From that perspective we could even conclude that we can only build such a machine if we fully embrace the transcendental role of so-called “natural” languages, as recognized by Wittgenstein and others. A minimal sketch of how an FSA collapses into a function is given below.
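A minimal sketch (our illustration) of how such a machine “totalizes” into a function: a deterministic finite state automaton, given as a transition table, collapses into nothing more than a fixed mapping from input strings to accept/reject.

```python
# A minimal sketch (not from the source): a deterministic finite state
# automaton given as a transition table. Running it on an input string is
# nothing but evaluating a fixed function from strings to {accept, reject};
# the whole mechanics "totalizes" into that single input-output mapping.
TRANSITIONS = {            # hypothetical FSA: accepts strings with an even number of '1's
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}
START, ACCEPTING = "even", {"even"}

def fsa_as_function(word: str) -> bool:
    state = START
    for ch in word:
        state = TRANSITIONS[(state, ch)]
    return state in ACCEPTING

print(fsa_as_function("1011"))   # -> False (three '1's)
```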

The third and final aspect of our results concerns the effect of these mechanisms on the theory. Since the elementary operations are still mechanical, and maybe even finite and fully determined, it is fully justified to call such a process a calculation. Molecular operations are indeed highly determinate, yet only within the boundaries of quantum phenomena, not to forget the thermal noise at the level of the conditions of the possible. Organisms invest a lot to improve the signal-to-noise ratios up to a digital level. Yet this calculation is not a standard computation, for two reasons. First, these processes are not programmable; they are as they are, as a matter of fact and by means of the factual matter. Secondly, the whole process is no longer a well-defined calculation; there is not even a state. At the borderline between matter, its formation (within processes of interpretation, themselves part of that borderline zone), and information, something new appears (emerges?) that cannot be covered by the presuppositions of the lower levels.

As a model, then—and we always have to model in each single “event” anyway (we will return to that elsewhere)—we could refer to axiomatics. It is an undeniable fact that we as persons can think more, and with more generality, than amoebas or neurons. Yet even in the case of reptiles, dogs, cats or dolphins we could no longer say “more”; it is a “different” rather than a “more” that we have to apply to describe the relationship between our thinking and theirs. Still, dogs or chimpanzees did not develop insight into the limitations of the λ-calculus.

In conclusion, we could characterize “Non-Turing computation” with regard to the stability of its own axiomatic basis: Non-Turing computation extends its own axiomatic basis. From the perspective of the integrated entity, however, we can call it differentiation, or abstract growth. We already appreciated Turing’s contribution to that topic above. Just imagine imagining images like those before actually having seen them…

There are some topics that directly emerge from these results, forming a kind of (friendly) conceptual neighborhood.

  • – What is the relation between abstract growth / differentiation and (probabilistic) networks?
  • – Part of the answer to this first issue is likely given by the phenomenon of a particular transition from the probabilistic to the propositional, which also plays a role concerning the symbolic.
  • – We have to clarify the notion “extending an axiomatic basis”. This relates us further to evolution, and particularly to the evolution of symbolic spaces, which in turn is related to category theory and some basic notions about the concepts of comparison, relation, and abstraction.
  • – The relationship of Non-Turing Computation to the concepts of “model” and “theory.”
  • – Is there an ultimate boundary for that extension, some kind of conditional system that can’t be surpassed, and how could we speak about that?
  • [1] Alan M. Turing (1936), On Computable Numbers, With an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, Vol. 42, p. 230-265.
  • [2] Andrew Hodges, Alan Turing.
  • [3] B. Jack Copeland (1996), What is Computation? Synthese 108: 335-359.
  • [4] Nir Fresco (2008), An Analysis of the Criteria for Evaluating Adequate Theories of Computation. Minds & Machines 18: 379-401.
  • [5] Wilfried Sieg (2000), Calculations by Man and Machine: Conceptual Analysis. Department of Philosophy, Paper 178. http://repository.cmu.edu/philosophy/178
  • [6] Wilfried Sieg (2005), Church Without Dogma: Axioms for Computability. Department of Philosophy, Paper 119. http://repository.cmu.edu/philosophy/119
  • [7] Wilhelm Vossenkuhl (2003), Ludwig Wittgenstein.
  • [8] Ludwig Wittgenstein, Philosophical Investigations, §201; see also the Internet Encyclopedia of Philosophy.
  • [9] Goldin and Wegner.
  • [10] Alan M. Turing (1952), The Chemical Basis of Morphogenesis. Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences, Vol. 237, No. 641 (Aug. 14, 1952), p. 37-72.
  • [11] Ed Blakey (2011), Computational Complexity in Non-Turing Models of Computation: The What, the Why and the How. Electronic Notes in Theoretical Computer Science 270: 17-28.
  • [12] Bruce J. MacLennan (2009), Super-Turing or Non-Turing? Extending the Concept of Computation. International Journal of Unconventional Computing, Vol. 5 (3-4), p. 369-387.
  • [13] Thomas M. Ott (2007), Self-organised Clustering as a Basis for Cognition and Machine Intelligence. Thesis, ETH Zurich.
  • [14] David King (1996), Is the human mind a Turing machine? Synthese 108: 379-389.

۞
