FluidSOM (Software)

January 25, 2012 § 7 Comments

The FluidSOM is a modular component of a SOM population

that is suitable for following the “Growth & Differentiate” paradigm.

Self-Organizing Maps (SOM) are usually established on fixed grids, using a 4n or 6n topology. Implementations as swarms or gases are quite rare, and they are burdened with their own problems. After all, we don’t have “swarms” or “gases” in our heads (at least most of us, for most of the time…). This remains true even if we considered only the informational part of the brain.

The fixed grid prohibits a “natural” growth or differentiation of the SOM layer. Actually, this impossibility to differentiate also renders structural learning impossible. If we consider “learning” as something different from the mere adjustment of already available internal parameters, then we could say that the inability to differentiate morphologically also means that there is no true learning at all.

These limitations, among others, are overcome by our FluidSOM. Instead of a fixed grid, we use a quasi-crystalline fluid of particles. This makes it very easy to add or remove, to merge or to split “nodes”. The quasi-grid will always take a state of minimized tensions (at least after shaking it a bit…).

As said, the particles of the collection may move around “freely”; there is no grid to which they are bound a priori. Yet, the population will settle into an almost hexagonal arrangement, if certain conditions hold (a minimal sketch of the relaxation loop follows the list):

  • – The number of particles fits the dimensions of the available surface area.
  • – The particles are fully symmetric across the population regarding their properties.
  • – The parameters for mobility and repellent forces are suitably chosen.
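
To make this concrete, here is a minimal sketch in Java of what such a repulsion-driven relaxation step could look like. All class names, fields and constants are illustrative assumptions, not the actual noolabfluidsom API; note the per-particle repellent strength, which we will come back to further below.

```java
// Illustrative sketch of a repulsion-based relaxation step; names and
// parameter values are assumptions, not the real noolabfluidsom code.
import java.util.List;

class Particle {
    double x, y;             // position on the surface
    double fx, fy;           // displacement accumulated in the current step
    double repellent = 1.0;  // per-particle repellent strength (see below)
    Particle(double x, double y) { this.x = x; this.y = y; }
}

class RepulsionSketch {
    static final double RANGE = 10.0;    // radius of repellent influence
    static final double MOBILITY = 0.1;  // damping of the per-step movement

    // One relaxation step: every pair within RANGE pushes apart, so that
    // the population settles towards a near-hexagonal, tension-minimized
    // packing; adding or removing a particle is simply absorbed by the
    // following steps.
    static void relax(List<Particle> ps) {
        for (Particle p : ps) { p.fx = 0; p.fy = 0; }
        for (int i = 0; i < ps.size(); i++) {
            for (int j = i + 1; j < ps.size(); j++) {
                Particle a = ps.get(i), b = ps.get(j);
                double dx = a.x - b.x, dy = a.y - b.y;
                double d = Math.hypot(dx, dy);
                if (d > 0 && d < RANGE) {
                    // repulsion grows as the pair approaches each other
                    double f = 0.5 * (a.repellent + b.repellent) * (RANGE - d) / d;
                    a.fx += f * dx; a.fy += f * dy;
                    b.fx -= f * dx; b.fy -= f * dy;
                }
            }
        }
        for (Particle p : ps) {
            p.x += MOBILITY * p.fx;
            p.y += MOBILITY * p.fy;
        }
    }
}
```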

Deviations from a perfect hexagonal arrangement are thus quite frequent. Sometimes hexagons enclose an empty position, or pentagons form instead of hexagons, frequently near the border or immediately after a change of the collection (adding/removing a particle). This, however, is not a drawback at all, especially not in the case of SOM layers that are relatively large (starting with N>~500). In really large layers comprising >100’000 nodes, the effect is negligible. The advantage of such symmetry breaks on the geometrical level, i.e. on the quasi-material level, is that they provide a starting point for a natural pathway of differentiation.

There is yet another advantage: the fluid layer contains particles that are not necessarily identical to the nodes of the SOM, and the relations between nodes are not bound to the hosting grid either.

The RepulsionField class allows for a confined space or for a borderless topology (a torus), the second of which is often more suitable for running a SOM.

Given all these advantages, there is the question why fixed grids are so dramatically preferred over fluid layouts. The answer is simple: it is not simple at all to implement a fluid layout in a way that allows for fast and constant query time for neighborhoods. If it took 100 ms to determine the neighborhood of a particular location in a large SOM layer, it would not be possible to run such a construct as a SOM at all: the waiting time would be prohibitive. Our Repulsion Field addresses this problem with buffering, such that it is almost as fast as the neighborhood query in fixed grids.
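
To illustrate the buffering idea (again only a sketch, not the actual RepulsionField code), one can hash the particles into coarse square buckets; a neighborhood query then touches only the few buckets around the queried location, independently of the total number of particles, and the buckets are rebuilt, i.e. the buffer refreshed, after each relaxation step.

```java
// Sketch of neighborhood buffering via a spatial hash; the bucket size
// and the class itself are assumptions for illustration only.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class NeighborhoodBuffer {
    private final double cell;  // bucket edge length, ~ the query radius
    private final Map<Long, List<Integer>> buckets = new HashMap<>();
    private final double[] xs, ys;

    NeighborhoodBuffer(double[] xs, double[] ys, double cell) {
        this.xs = xs; this.ys = ys; this.cell = cell;
        for (int i = 0; i < xs.length; i++)
            buckets.computeIfAbsent(key(bucketOf(xs[i]), bucketOf(ys[i])),
                                    k -> new ArrayList<>()).add(i);
    }

    private int bucketOf(double v) { return (int) Math.floor(v / cell); }

    private long key(int cx, int cy) {
        return (((long) cx) << 32) | (cy & 0xffffffffL);
    }

    // The query scans only the small block of buckets around the given
    // location, which keeps the cost almost constant for large layers.
    List<Integer> neighbors(double x, double y, double radius) {
        List<Integer> out = new ArrayList<>();
        int r = (int) Math.ceil(radius / cell);
        int cx = bucketOf(x), cy = bucketOf(y);
        for (int i = -r; i <= r; i++)
            for (int j = -r; j <= r; j++) {
                List<Integer> b = buckets.get(key(cx + i, cy + j));
                if (b == null) continue;
                for (int idx : b)
                    if (Math.hypot(xs[idx] - x, ys[idx] - y) <= radius)
                        out.add(idx);
            }
        return out;
    }
}
```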

So far, only the RepulsionField class is available, but the completed FluidSOM should follow soon.

The Repulsion Field of the FluidSOM is available through the Google project hosting in noolabfluidsom.

The following four screenshot images show four different selection regimes for the dynamic hexagonal grid:

  • – single-node selection, here of an arbitrary group of nodes
  • – minimal spanning tree on this disjoint set of nodes
  • – convex hull on the same set
  • – conventional patch selection as it occurs in the learning phase of a SOM

As I already said, those particles may move around such that the total energy of the field gets minimized. Splitting a node as a metaphor for natural growth leads to a different layout, yet in a very smooth manner.

Fig 1a-d: The Repulsion Field used in FluidSOM.
Four different modes of selection are demonstrated.

To summarize, the change to the fluidic architecture comprises

  • – possibility for a separation of physical particles and logical node components
  • – possibility for dynamic, seamless growth or differentiation of the SOM lattice, including the mobility of the “particles” that act as node containers.

Besides that, FluidSOM offers a second major advance compared to the common SOM concept. It concerns the concept of the nodes. In FluidSOM, nodes are active entities, endowed with partial autonomy. Nodes are not just passive data structures; they don’t simply “get updated” by a central mechanism. In salient contrast, they maintain certain states comprising activity and connectivity, as well as their particular selection of a similarity function. Only in the beginning are all nodes equal with respect to those structural parameters. As a consequence of these properties, nodes in FluidSOM are able to outgrow (pullulate) new, additional instances of FluidSOM as a kind of offspring. The sketch below illustrates the idea.
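
The following sketch shows what such an active node might look like in Java. Everything here, the class name, the leaky activity trace, the pullulation threshold, is a hypothetical reading of the description above, not the (unpublished) FluidSOM node class.

```java
// Hypothetical sketch of an "active" node: it maintains its own state
// and its own similarity function instead of being updated centrally.
import java.util.HashSet;
import java.util.Set;
import java.util.function.BiFunction;

class ActiveNode {
    double[] weights;          // the node's profile vector
    double activity;           // maintained activity state
    Set<ActiveNode> links = new HashSet<>();  // maintained connectivity
    BiFunction<double[], double[], Double> similarity;  // node's own choice

    ActiveNode(int dim, BiFunction<double[], double[], Double> sim) {
        weights = new double[dim];
        similarity = sim;
    }

    // The node itself decides how strongly to respond to a record.
    double respond(double[] record) {
        double s = similarity.apply(weights, record);
        activity = 0.9 * activity + 0.1 * s;  // leaky activity trace
        return s;
    }

    // A sufficiently active node may "pullulate", i.e. spawn an
    // offspring node that inherits its current profile.
    ActiveNode maybePullulate(double threshold) {
        if (activity <= threshold) return null;
        ActiveNode child = new ActiveNode(weights.length, similarity);
        child.weights = weights.clone();
        links.add(child);
        return child;
    }
}
```

One node might, for instance, be instantiated with a cosine similarity and another with a Euclidean one, which is exactly the per-node selection of a similarity function mentioned above.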

These two advances remove many limitations of the common concept of the SOM (for more details see here).

There is a last small improvement to introduce. In the snapshots shown above you may detect some “defects,” often as holes within a perfect hexagon, or sometimes as a pentagon. But generally the layout looks quite regular. Yet, this regularity is again more similar to crystals than to living tissue. We should not take the irregularity of living tissue as a deficiency. In nature there are indeed highly regular morphological structures, e.g. in the retina of the vertebrate eye, or in the faceted eyes of insects. In some parts (motor regions) of some brains (especially in birds) we can find quite regular structures. There is no reason to assume that evolutionary processes could not lead to regular cellular structures. Yet, we will never find “crystals” in any kind of brain, not even in insects.

Taking this as advice, we should introduce a random factor into the basic settings of the particles, such that the emerging pattern is not regular anymore. The repulsion principle will still lead to a locally stable configuration, though; yet strong re-arrangement flows are not excluded either. The following figure shows the resulting layout for a random variation (within certain limits) of the repellent force.

Figure 2: The Repulsion Field of FluidSOM, in which the particles are individually parameterized with regard to the repellent force. This leads to significant deviations from the hexagonal symmetry.

This broken symmetry is based on a local individuality with regard to the repellent force attached to each particle. Albeit this individuality is only local and of a rather weak character, together with the symmetry break it helps to induce, it is nevertheless important as a seed for differentiation. It is easy to imagine that the repellent forces are some (random) function of the content-related role of the nodes that are transported by the particles. For instance, large particles could decrease or increase this repellent force, leading to particular morphological correlates of the semantic activity of the nodes in a FluidSOM. A sketch of such individual parameterization is given below.
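
A possible reading in code, continuing the hypothetical Particle sketch from above:

```java
// Hypothetical sketch: giving each particle its own repellent strength
// breaks the hexagonal symmetry, as in Figure 2.
import java.util.List;
import java.util.Random;

class RepellentIndividualizer {
    // Vary the repellent strength within, say, +/-30% of the default 1.0;
    // the range is an assumption chosen for illustration.
    static void randomize(List<Particle> particles, long seed) {
        Random rng = new Random(seed);
        for (Particle p : particles) {
            p.repellent = 1.0 + 0.6 * (rng.nextDouble() - 0.5);
        }
    }
}
```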

A further important property for determining the neighborhood of a particle is directionality. The RepulsionField supports this selection mode, too. It is, however, completely controlled on the level of the nodes; hence we will discuss it there.

Here you may directly download a zip archive containing a runnable file demonstrating the repulsion field (sorry for the size (6 MB), it is not optimized for the web). Please note that you have to install Java first (on Windows). I also recommend reading the file “readme.txt”, which explains the available commands.

GlueStarter (Software)

January 19, 2012 § Leave a comment

The GlueStarter is a small infrastructure component

that allows for the remote start and shutdown of Java jar files. A remote instance can send a command via a TCP socket to the GlueStarter, which then starts a dedicated Java VM running the desired program. The GlueStarter itself may be started automatically during startup of the OS. Hence, one could conceive of it as a so-called daemon, yet one that is written in pure Java, which is a very convenient property. A minimal sketch of this mechanism is given below.
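
The sketch uses an invented one-line command syntax and port number; the actual GlueStarter protocol is documented in the noolabglue project.

```java
// Sketch only: a TCP listener that spawns a dedicated JVM per received
// jar path. Port and command syntax are assumptions for illustration.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class StarterSketch {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9001)) {
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()))) {
                    String line = in.readLine();  // e.g. "start mymodule.jar"
                    if (line != null && line.startsWith("start ")) {
                        String jar = line.substring(6).trim();
                        // spawn a dedicated Java VM for the requested module
                        new ProcessBuilder("java", "-jar", jar)
                                .inheritIO()
                                .start();
                    }
                }
            }
        }
    }
}
```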

The GlueStarter is a helper component for the NooLabGlue system. Both operate completely independently, of course. In the context of the probabilistic approach to (a population of) growing networks, the GlueStarter can be conceived as something like a “growth mechanism.”

The GlueStarter just executes the commands to create or shut down instances of the modules that are linked together via the NooLabGlue system. Thus, it does not contain any administration functionality… it really knows (almost) nothing about the modules or their state. The only information it could make available to other modules is the number of modules it has started.

GlueStarter is available through the Google project hosting for noolabglue; a more detailed description is here (coming soon).

Complexity

January 15, 2012 § Leave a comment

The great antipode, if not opponent to rationality is,

by a centuries-old European declaration, complexity.

For a long time this adverse relationship was only implicitly given. The rationalist framework had been set up already in the 14th century. 200 years later, everything had to be a machine in order to celebrate God. Things did not change much in this relationship when the age of bars (mechanics, kinetics) grew into the age of machines (dynamics). Just the opposite: control was the declared goal, an attitude that merged into the discovery of the state of information. While the 19th century invented the modern versions of information, statistics, and control technologies—at least in precursory form—the 20th century overgeneralized them and applied those concepts everywhere, up to their drastic abuse in the large bureaucracies and the secret services.

Complexity was finally marked as the primary enemy of rationalism by the so-called system theoretician Luhmann, as late as 1987 [1]. We cite this enigmatically dualistic statement from Eckardt [2] (p.132):

[Translation, starting with the citation in the first paragraph: “We shall call a contiguous set of elements complex if, due to immanent limitations regarding the elements’ capacity for establishing links, it is no longer possible for every element to be connected to every other.” […] Luhmann shifted complexity fundamentally into a dimension of the quantifiable, where he looks for a logical division of sets and elements; hybridizations are not allowed in this dualistic conception of complexity.]

Even more than that: Luhmann just describes the break of idealistic symmetry, yet by no means the mechanisms, nor the consequences of it. And even this aspect of broken symmetry remains opaque to him, since Luhmann, as correctly diagnosed by Eckardt, takes refuge in the reduction to quantization and implied control. Thus, he emptied the concept of complexity even before he dropped it. There is not the slightest hint towards the qualitative change that complex systems can provoke through their potential for inducing emergent traits. Neglecting qualitative changes means enforcing a flattening, expunging any vertical layering or integration. Later we will see that this is a typical behavior of modernists.

Eckardt continues to cite Luhmann (one has to know that Luhmann was a bureaucrat by training):

[Translation, second paragraph: “Complexity in this second sense then is a measure for indeterminacy or for the lack of information. Complexity is, from this perspective, the information that is unavailable for the system, but needed by it for completely registering and describing its environment (environmental complexity), or itself (systemic complexity), respectively.”]

In the second statement he proposes that complexity denotes things a system cannot deal with because it cannot comprehend them. This conclusion was an even necessary by-product of his so-called “systems theory,” which is nothing else than plain cybernetics. Actually, applying cybernetics in whatever form to social systems is categorical nonsense, irrespective of the “order” of the theory: 2nd-order cybernetics (observing the observer) is just as unsuitable as 1st-order cybernetics for talking about the phenomena that result from self-referentiality. Both deny population effects, and both claim the externalizability of meaning.

Of course, such a “theoretical” stance is deeply inappropriate for dealing with the realm of the social, or more generally with that of complexity.

Since Luhmann’s instantiation of system-theoretic nonsense, the concept of complexity has developed into three main strains. The first flavor, following Luhmann, became a widely accepted and elegant symbolic notion either for incomprehensibility and ignorance or for complicatedness. The second drastically and violently mistook it under the signs of cybernetics, which is more or less the complete “opposite” of complexity. Vrachliotis [3] cited Popper’s example of a swarm of midges as a paradigm for complexity in his article about the significance of complexity. Unfortunately for him, the swarm of midges is anything but complex; it is almost pure randomness in a weak field of attraction. We will see that attraction alone is not a sufficient condition. In the same vein, most of the research in physics is concerned with self-organization merely from the perspective of cybernetics, if at all. Mostly, they resort to information-theoretic measures (shortest description length) in their attempt to quantify complexity. We will discuss this later. Here we just note that the product of complexity contains large parts about which one cannot speak in principle, whereas a description length even claims that the complex phenomenon is formalizable.

This usage is related to the notion of complexity in computer science, denoting how the need for time and memory depends on the size of the problem, and in information science. There, people talk about “statistical complexity,” which sounds much like “soft stones.” It would be funny if it had been meant as a joke. As we will see, the epistemological status of statistics is as incommensurable with complexity as that of cybernetics. The third strain finally reduces complexity to non-linearity in physical systems, or physically interpreted systems, mostly referring to transient self-organization phenomena or to the “edge of chaos” [4]. The infamous example here is swarms… no, swarms do not have a complex organization! Swarms can be modeled in a simple way as a kind of stable, probabilistic spring system.

Naturally, there is also every kind of mixture of these three strains of incompetent treatment of the concept of complexity. So far there is no acceptable general concept, nor even an accepted working definition. As a consequence, many people write about complexity, or refer to it, in a way that confuses everything. The result is bad science, at least.

Fortunately enough, there is a fourth, though tiny, strain. Things are changing; more and more, complexity loses its pejorative connotation. The beginnings lie in the investigation of natural systems in biology. People like Ludwig von Bertalanffy, Howard Pattee, Stanley Salthe, or Conrad Waddington, among others, brought the concept of irreducibility into the debate. And then there is of course Alan Turing and his mathematical paper about morphogenesis [5] from 1952, accompanied by the Russian chemist Belousov, who did almost the same work experimentally [6].

The Status of Emergence

Complex systems may be characterized by a phenomenon that can be described from a variety of perspectives. We could say that symmetry breaks [7], that there is strong emergence [8], that the system develops patterns that cannot be described on the level of the constituents of the process, and so on. If such an emergent pattern is selected by another system, we can say that something novel has been established. The fact that we can take such perspectives neatly describes the peculiarity of complexity and its dynamical grounds.

Many researchers feel that complexity potentially provides some important yet opaque benefits, despite its abundant usage as a synonym for incomprehensibility. As always in such a situation, we need a proper operationalization, which in turn needs a clear-cut identification of its basic elements, sufficient to re-construct the concept as a phenomenon. As we already pointed out above, these elements are to be understood as abstract entities, which need a deliberate instantiation before any usage. From a large variety of sources, starting from Turing’s seminal paper (1952) up to Foucault’s figure of the heterotopia [9], we can derive five elements which are necessary and sufficient to render any “system” from any domain into a complex system.

Here, we would like to add a small but important remark. Even if we take a somewhat uncritical (non-philosophical) perspective, such that the wording is not well-defined so far (but this will change), we have to acknowledge that complexity is (1) a highly dynamical phenomenon, where we cannot expect to find a “foundation” or a “substance” that would give sufficient reason, and (2) more simply, that whenever we are faced with complexity we experience at least a distinct surprise. Both issues directly lead to the consequence that there is no possibility for a purely quantitative description of complexity. This also means that none of the known empirical approaches (external realism, empiricism, whether in a hypothetico-deductive attitude or not, whether following falsificationism or not) works. All these approaches, particularly statistics, are structurally flat. Anything else (structurally “non-flat”) would imply the necessity of interpretation immanent to the system itself, and such has been excluded from contemporary science for several hundred years now. The phenomenon of complexity is only partially an empirical problem of “facts,” despite the “fact” that we can observe it (just wait a few moments!).

We need a quite different scheme.

Observations

Before we proceed, we would like to give the following example. It is a simulation of a particular sort of chemical system, organized in several layers. “Chemical system” denotes a solution of several chemicals that mutually feed on each other and on the reactants of other reactions. Click on the image if you would like to see it in action in a new tab (image & applet by courtesy of “bitcraft“):

For non-programmers it is important to understand that there is no routine like “draw-a-curve-from-here-to-there”. The simulation operates on the level of the tiniest molecules. What is striking, then, is the emerging order, or, in a technical term, the long-range correlation of the states (or contexts) of the particles. The system is not “precisely” predictable, but its “behavior” is far from random. This range is hundreds to thousands of times larger than the size of a particle. In other words, the individual particles do not know anything about those “patterns.”

Such systems as the one above are called reaction-diffusion systems, simply because there are chemical reactions and physical diffusion. There are several different types, distinguishable by the architecture of the process, i.e. by the question: what happens with the result of the reaction? In Gray-Scott models, for instance, the product of the reaction is removed from the system; hence it is a flow-type reactor (a compact sketch of such a step is given below). The first system, as discovered by Turing (mathematically) and, independently of him, by Belousov and Zhabotinsky (in a quite non-Russian attitude, by chemical experiments), is different. Here, everything remains in the system, in a perfectly circularly re-birthing (very small) “world”.
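
For the technically inclined, here is a compact sketch of one time step of such a Gray-Scott system in Java. The parameter values are common textbook choices from the literature, not those of the simulation shown above.

```java
// Minimal Gray-Scott reaction-diffusion sketch (explicit Euler, dt = 1).
// Parameters are standard textbook values, chosen for illustration.
public class GrayScott {
    static final int N = 128;
    static final double DU = 0.16, DV = 0.08;  // diffusion rates: u spreads farther
    static final double F = 0.035, K = 0.065;  // feed and kill rates

    double[][] u = new double[N][N], v = new double[N][N];

    GrayScott() {
        for (double[] row : u) java.util.Arrays.fill(row, 1.0);  // substrate everywhere
        for (int i = N / 2 - 3; i < N / 2 + 3; i++)              // seed a small patch of v
            for (int j = N / 2 - 3; j < N / 2 + 3; j++) v[i][j] = 1.0;
    }

    // discrete Laplacian on a torus (borderless topology)
    double lap(double[][] a, int i, int j) {
        int im = (i + N - 1) % N, ip = (i + 1) % N;
        int jm = (j + N - 1) % N, jp = (j + 1) % N;
        return a[im][j] + a[ip][j] + a[i][jm] + a[i][jp] - 4 * a[i][j];
    }

    void step() {
        double[][] nu = new double[N][N], nv = new double[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                double uvv = u[i][j] * v[i][j] * v[i][j];  // the strong, local reaction
                nu[i][j] = u[i][j] + DU * lap(u, i, j) - uvv + F * (1 - u[i][j]);
                nv[i][j] = v[i][j] + DV * lap(v, i, j) + uvv - (F + K) * v[i][j];
            }
        u = nu; v = nv;  // F feeds fresh substrate, (F+K) washes v out: a flow reactor
    }
}
```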

The diffusion brings the reactants together, which then react “chemically.” Chemical reactions combine reactants into a new substance of completely different characteristics. What happens if you bring chlorine gas to the metal natrium (aka sodium)? Cooking salt.

In some way, we see here two different kinds of emergence: the chemical one and the informational one. Well, the second one, which is more interesting to us today, is not strictly informational; it is a dualistic emergence, spanning between the material world and the informational world. (Read here about the same subject from a different perspective.)

Here we already meet a relation that can be found in any complex system, i.e. a system that exhibits emergent patterns in a sustainable manner. We meet it in the instance of chemical reactions feeding on each other. Yet, the reaction does not simply stop, creating some kind of homogeneous mud… For mechanists, positivists, idealists etc., something quite counter-intuitive happens instead.

This first necessary condition (at least I suppose so) for creating complexity is the probabilistic mutual counteraction, where two processes stand in a particular constellation to each other. We can identify this constellation wherever we meet “complexity”: in chemical systems, inside biological organisms at any scale, or in their behavior. So far we use “complexity” in a phenomenal manner, so to speak proto-empirically, but this will change. Our goal is to develop a precise notion of complexity. Then we will see what we can do about it.

Back to this probabilistic mutual counteraction. One of the implied “forces” accelerates and enforces, but its range of influence is small. The other force, counteracting the strong one, is much weaker, but with a larger range of influence [10]. Note that “force” does not imply an actor here; it should be taken more like a field.

The lesson we can take from that, as a “result” of our approach, is already great, IMHO: whenever we see something that exhibits dynamic “complexity,” we are allowed—no, we have to!—ask about those two (at least) antagonistic processes, their quality and their mechanism. In this perspective, complexity is a paradigmatic example of a full theory, which is categorically different from a model: it provides a guideline for how to ask, for which direction to take in creating a bunch of models (more details about theory here). Also quite important to understand is that asking for mechanisms here does not imply any kind of reductionism, quite the contrary.

Yet, those particularly configured antagonistic forces are not the only necessary condition. We will see that we need several more of those elements.

I just have to mention the difference between a living organism and the simulation of the chemical system above in order to demonstrate that there are entities that are vastly “more” complex than the Turing-McCabe system shown here. Actually, I would propose not to call such chemical systems “complex” at all, and for good reasons, as we will see shortly.

If we did not simulate such a system but ran it as an actual chemical system, it would soon stop, even if we did not remove anything from it. What is therefore needed is a source of energy, or better of enthalpy, or still better, of neg-entropy. The system has to dissipate a lot of entropy (disorder, pure randomness, hence radiation) in order to establish a relatively “higher” degree of order.

These two principles are still not sufficient even to create self-organization phenomena as seen above. Yet, for the remaining factors we change the (didactic) direction.

The Proposal

We already mentioned above that complexity is not a “purely” empiric phenomenon. Actually, radical empiricism has been proven to be problematic. So, what we are forced to do concerning complexity we always have to do in any empiric endeavor; in the case of complexity it is just particularly clearly visible. What we are talking about is the deliberate a priori setting of “elements” (not axioms!). Nevertheless, we are convinced that it is possible to take a scientific perspective towards complexity, in the sense of working with theories, models and predictions/diagnoses. What we propose is just not radical positivism, or the like.

Well, what is an “element”? Elements are “syndromes,” almost a kind of identifiable and partially naturalized symbol. Elements do not make sense if taken one after another; they make sense only if taken together. Yet, I would not like to open the classic-medieval discourse about “ratios”….

Our elements are drawn from basic physics, from dynamical systems showing emergent phenomena, and from abstract sign-theoretic considerations, completed by a formal argument.

We now would like to present our proposal about the five necessary elements that create complexity; by virtue of their effect, the whole set is then also sufficient. The five elements, presumably essential and jointly sufficient for complexity, are:

  • (1) dissipation, deliberate creation of additional entropy by the system at hand;
  • (2) an antagonistic setting similar to the reaction-diffusion-system (RDS), as described first by Alan Turing [5], and later by Gray-Scott [11], among others;
  • (3) standardization;
  • (4) active compartmentalization;
  • (5) systemic knots.

We shall now contextualize these elements as briefly as possible.

Dissipation

Element 1: The basic element is dissipation. Dissipation means that the system produces large amounts of disorder, so-called entropy, in order to be able to establish structures (order) while keeping the overall entropy balance increasing [4]. This requires a radical openness of the system, which was recognized already by Schrödinger in his Dublin lectures [12]. Without radiation of heat, without dissipation, no new order (= a local decrease of entropy) could be created. Only the overall lavish increase of entropy allows decreasing it locally (= establishing patterns and structure). If the system performs physical work while radiating heat, it can produce new patterns, i.e. order, which transcends the particles and their work. Think of the human brain, which radiates roughly 20 watts in the infrared spectrum in order to create that volatile immaterial order which we call thinking. Much the same is true for other complex organizational phenomena such as “cities.”

Antagonistic Forces in Populations

Element 2: This increase of entropy is organized on two rather different time scales. On the short time scale we can see microscopic movements of particles, which–now invoking the second element–have to be organized in an antagonistic setting. One of the “forces,” reactions or, in general, “mechanisms,” should be strong, with a comparatively short spatio-temporal range of influence. The other, antagonistic force or mechanism should be comparatively weak, but, as a kind of compensation, its influence should be far-reaching in space-time. Turing showed mathematically that such a setting can produce novel patterns for a wide range of parameters. “Novelty” means that these patterns cannot be found anywhere in the local rules organizing the movements of the microscopic particles.

Turing’s system is one of constant mass; reaction-diffusion systems (RDS) may also be formulated as a flow-through reactor system (the so-called Gray-Scott model [11]; see the equations below). In general, you can think of reaction-diffusion systems as a sort of population-based, probabilistic Hegelian dialectics (which of course is strictly anti-Hegelian).
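
For reference, and taken from the standard literature [11] rather than from anything specific to our argument, the Gray-Scott model is commonly written as a pair of coupled equations, where the diffusion terms realize the weak but far-reaching mechanism and the local reaction term $uv^2$ the strong, short-ranged one:

$$\frac{\partial u}{\partial t} = D_u \nabla^2 u - uv^2 + F(1-u), \qquad \frac{\partial v}{\partial t} = D_v \nabla^2 v + uv^2 - (F+k)\,v$$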

Standardization

Element 3: The third element, standardization, reflects the fact that the two processes can only interact intensely if their mutual interference is sufficiently standardized. Without standardization there would not be any antagonistic process, hence also no emergence of novel patterns: the two processes would be transparent to each other. This link has been overlooked so far in the literature about complexity. The role of standardization is also overlooked in theories of social or cultural evolution. Yet, think only of DIN or ASCII, or de-facto standards like “the PC”. Without them, none of the subsequent emergences of more complex patterns would have happened. We can easily extend the role of standardization into the area of mechanization and automation. There is also a deep and philosophically highly relevant relation between standardization and rule-following, which we cannot deal with further here. Actually, the process of habituation, the development of a particular “common sense”, and even the problem of naming as one of the first steps of standardization not only belong to the most difficult problems in philosophy, they are actually still somewhat mysterious.

On a sign-theoretic level we may formulate standardization as a kind of acknowledged semiotics, as a body of rules which tell the basic individual items of the system how to behave and how to interpret. Frequently changing codes effectively prohibit any complexity; changing codes also effectively prohibit any further progress in the sense of innovation. Codes produce immaterial enclaves. On the other hand, codes are also means of standardization. Codes play multiple and even contradictory roles.

Towards the Decisive Step

As already mentioned above, the emergent creation of novel and even irreducible patterns cannot be regarded as a sufficient condition for complexity. Such processes are only proto-complex; since there is no external instance ruling the whole generative process, those patterns are often called “self-organized.” This is, bluntly said, a misnomer, since then we would not distinguish between the immaterial order (on the finer time scale) and the (quasi-)material organization (on the coarser time scale). Emergent patterns are not yet organized; at best we could say that such systems are self-patterning. Obviously, strong emergence and complexity are very different things.

Compartmentalization

Element 4: What is missing in such self-organizing configurations is the fourth element of complexity, the transition from order to organization. (In a moment we will see why this transition is absolutely crucial for the concept of complexity.) Organization always means that the system has built up compartments. To achieve that, the self-organizing processes, such as those in the RDS, must produce something which then resides as something external to the dynamic process running on the short time scale. In other words, the system introduces (at least) a second, much longer time scale, since the products are much more stable than the processes themselves. This transition we may indeed call self-organization.

In biological systems those compartments are established by cell walls and membranes, which build vesicles, organs, fluid compartments and so on. In large assemblies built of matter–“cities”–we know walls and streets, but also immaterial walls made from rules, and semi-permeable walls formed by steep gradients of fractal coefficients, and so on.

There are, however, also informational compartments, which are probabilistically defined, such as the endocrine system in animals or the immune system, both of which form an informational network. In the case of social organizations, compartments are tangible as strict rules, or even as domain-specific languages. In some sense, the products of processes facilitating the transition from order to organization are always some kind of left-over: secretions, partial deaths if you like. It is in these persistent secretions that the phenomenon of growth becomes visible. From an outside perspective, this transition could also be regarded as a process of selection: starting from a large variety of slightly different and only temporarily stable patterns or forms, the transition from order to organization establishes some of them–selectively, as a matter of fact.

It is clear that the lasting products of a system act as a constraint on any subsequent processes. Durable systems may acquire the capability to act precisely on that crucial transition in order to gain better stability. One could, for instance, imagine that a system controls this transition by controlling its “temperature”, with high “temperatures” leading to a re-melting of structures in order to allow for different patterns. This, however, raises a deep and self-referential problematics: the system would have to develop a sufficiently detailed model of itself, which is not possible, since there is strong emergence in its lower layers. This problematics forwards us to the fifth element of complexity.

Systemic Knots

Element 5: We know from bio-organic systems that they are able to maintain themselves. This involves a myriad of mutual influences, which also span across several levels of organization. For instance, the brain, and even thoughts themselves, are able to influence individual (groups of) cells [13]. The habit of drinking green tea acts directly on the DNA in the cells of the body. Such controlling influence, however, is not unproblematic. Any controlling instance can have only a partial model of the regulated contexts. By means of non-orthogonality this leads directly to the strange situation that the interests of the lower levels and those of the top levels necessarily contradict each other. This effect is just the inverse of the famous “enslaving parameter” introduced by Hermann Haken as the key element of his concept of synergetics [14].

We call this effect the systemic knot, since the relationships between elements of different layers cannot be “drawn” onto a flat paper any more. Most interestingly, this latent antagonism between the levels of a system is just the precondition for a second-order complexity. Notably, we can conclude that complexity “maintains” itself. If a complex system did not cause the persistence of the complexity it builds upon, it would soon cease to be complex, as a consequence of the second law of thermodynamics. In other words, it would soon be dead.

Alternatives

Today, at the beginning of 2012, complexity has advanced (almost) into everybody’s mind, except perhaps for materialists like Slavoj Žižek, who in a recent TV interview proposed a radical formalistic materialism, i.e. that we should acknowledge that everything (!) is just formula. Undeniably, the term “complexity” has made its career during the last 25 years. Until the end of the 1980s complexity was “known” only among very few scientists. I guess that this career is somehow linked to information as we practice it.

From a system-theoretic perspective, the acceleration of many processes as compared to pre-IT times first introduced a collapse of stable boundaries (like the ocean), buzzed as “globalization,” only to introduce symmetry breaks in the complex system thus provoked. This instance of complexity is really a new one; we never saw it before.

Anyway, given the popularity of the approach, one might assume that there is not only a common understanding of complexity, but also an understanding that would somehow work in an appropriate manner, that is, one that does not reduce complexity to a narrow domain-specific perspective. Except for our proposal, however, such a concept does not exist.

The List

In the context of management research, where the notion of complexity has been discussed since the days of Peter Drucker, Robert Bauer and Mihnea Moldoveanu wrote in 2002 [15]:

‘Complexity’ has many alternative definitions. The word is used in the vernacular to denote difficult to understand phenomena, in information theory to denote incompressible digital bit strings [Li and Vitanyi, 1993], in theoretical computer science to denote relative difficulty of solving a problem [Leiserson, Cormen and Rivest, 1993] in organization theory to denote the relative degree of coupling of a many-component system [Simon, 1962], among many other uses.

They continue:

Our goal is to arrive at a representation of complexity that is useful for researchers in organizational phenomena by being comprehensive – incorporating the relevant aspects of complexity, precise – allowing them to identify and conceptualize the complexity of the phenomena that they address, and epistemologically informed – allowing us to identify the import of the observer to the reported complexity of the phenomenon being observed.

So far, so good; we could agree upon that. Yet, afterwards they claim the possibility to decompose “[…] complexity into informational and computational components.” They seek support from another author: “‘Complexity’ is often thought to refer to the difficulty of producing a competent simulation of a phenomenon [Norretranders, 1995],” which is strikingly reminiscent of Luhmann’s wordy capitulation. We will return to this cybernetic attitude in the next section; here we just note this reference to (cybernetic) information and computer sciences.

Indeed, computational complexity and informational entropy belong to the most popular approaches to complexity. Computational complexity is usually further operationalized in different ways, e.g. as minimal description length, accessibility of problems and solutions, etc. Informational entropy, on the other hand, is not only of little value; it also applies a strongly reductionist version of the concept of information. Entropy is little more than the claim that there is a particular phenomenon. The concept of entropy can’t be used to identify mechanisms, whether in physics or regarding complexity. More generally, any attempt to describe complexity in statistical terms fails to include the relevant issue of complexity: the appearance of “novel” traits, novel from the perspective of the system.

Another strain relates complexity to chaos, or even uses them as synonyms. This, however, applies only in vernacular terms, in the sense of being incomprehensible. Chaos and complexity often appear in the same context, but by far not necessarily. Albeit complex systems may develop chaotic behavior, the link is anything but tight. There is chaotic behavior in non-complex systems (the Mandelbrot set), as well as complexity in non-chaotic systems. From a more rational perspective it would be a mistake to equate them, since chaos is a descriptive term about the further development of a system, where this description is based on operationalizations like the ε-tube, or the value of the Lyapunov exponent.

Only very recently has there been an attempt to address the issue of complexity in a more reflected manner, yet it did not achieve an operationalizable concept. (…)

Problems with the Received View(s)

It is either not feasible or irrelevant to talk about “feedback” in complex systems. The issue at hand is precisely emergence; thus talking about deterministic, hard-wired links, in other words feedback, severely misses the point.

Similarly, it is almost nonsense to assign inner states to a complex system. There is neither an identifiable origin nor even the possibility to describe a formal foundation; even our proposal is not a “formal” foundation. The talk about states in complex systems, similar in this to the positivist claim of states in the brain, is wrong again precisely because emergence is emergence, and it is so on the (projected) “basis” of an extremely deterritorialized dynamics. There is no center, and there are no states in such systems.

On the other hand, it is also wrong to claim—at least as a general notion—that complex systems are unpredictable [16]. The unpredictability of complex systems is fundamentally different from the unpredictability of random systems. In random systems, we do not know anything except that there are fluctuations of a particular density. It is not possible here to say anything concrete about the next move of the system, only to state a probability. Whether the nuclear plant crashes tomorrow or in 100’000 years is not a subject of probability theory, precisely because local patterns are not a subject of statistics. In a way, randomness cannot surprise, because randomness is a language game meaning “there is no structure.”

Complex systems are fundamentally different. Here, all we have are patterns. We may even expect that a given pattern persists more or less stably in the near future, meaning that we would classify the particular pattern (which “as such” is unique for all times) into the same family of patterns. However, further into the future, the predictability of complex systems is far lower than that of random systems. Yet, despite this stability we must not talk about “states” here. Closely related to the invocation of “states” is the cybernetic attitude.

Yet, there is an obvious divergence between what life in an ecosystem is and the description length of algorithms, or the measure of disorderliness. This categorical difference refers to the fact that living systems, as the most complex entities we know of, consist of a large number of integrative layers, logical compartments if you like. All blood cells are on such a layer, all liver cells, all neurons, all organs etc., but also certain functional roles. In cell biology one speaks of the genome, the proteome, the transcriptome etc., in order to indicate such functional layers. The important lesson we can take from that is that we can’t reduce any of the more integrated layers to a lower one. All of these layers are “emergent” in the strong sense, despite the fact that they are also interconnected top-down. Any proclaimed theory of complexity that does not include the phenomenon of emergent layering should not be regarded as such a theory. Here we meet the difference between structurally flat theories or attitudes (physics, statistics, cybernetics, positivism, materialism, deconstructivism) and theories that know of irreducible aspects such as structural layers and integration.

Finally, a philosophical argument. The product of complexity contains large parts about which one cannot speak in principle. The challenge is indeed a serious one. Regardless of which position we take, underneath or above the emergent phenomenon, we cannot speak about it. In both cases there is no possible language for it. This situation is very similar to the body-mind problem, where we meet a similar transition. Speaking about a complex system from the outside does not help much. We can just point to the emergence, in the Wittgensteinian sense, or focus on either the lower or the emergent layer. Both layers must remain separated in the description. We can only describe the explanatory dualism [17].

This does not mean that we can’t speak about it at all. We just can’t speak about it in analytic terms, as structurally flat theories do, if they don’t refuse any attempt to understand the mechanisms of complexity and the speaking thereof anyway. One alternative for integrating that dualism in a productive manner is the technique of elementarization, as we have tried it here.

Conclusion

As always, we separate the conclusions first into philosophical aspects and the topic of machine-based epistemology and its “implementation.”

Neil Harrison, in his book about unrecognized complexity in politics [16], correctly wrote:

Like realism, complexity is a thought pattern.

From a completely different domain, Mainzer [7] similarly wrote:

From a logical point of view, symmetry and complexity are syntactical and semantical properties of theories and their models.

We also have argued that the property of emergence causes an explanatory dualism, which withstands any attempt at an analytic formalization in positive terms. Yet, even though we can’t apply logical analyticity without losing the central characteristics of complex systems, we can indeed symbolize them. This step of symbolization is distantly similar to the symbolization of the zero, or that of infinity.

Our proposal here is to apply the classic “technique” of elementarization. Thus, we propose a “symbolization” that is not precisely a symbol; it is more like an (abstract) image. None of the parts or aspects of an image can be taken separately to describe the image, and we may not omit any of those aspects either. This link between images and complexity, or images and explanatory dualism, is an important one which we will follow in another chapter (about “Waves, Words and Images“).

“Elements” (capital E) are not only powerful instruments, they are also indispensable. Elements are immaterial, abstract, formless entities; their only property is the promise of a basic constructive power. Elements are usually assumed not to be further decomposable without losing their quality. That’s true for our big five as well as for the chemical elements, and also for the Euclidean Elements of geometry. In our case, however, we are also close to Aristotle’s notion of elements, according to which the issue at hand (for him and his fellows, the “world”) can’t be described by any isolated subset of them. The five elements are all necessary, but they are sufficient only if they appear together.

The meaning of Elements and their usage is that they allow for new perspectives and for a new language. In this way, the notion of “complexity” is taken by many as an element. Yet it is only a pseudo-element, an idol, because (i) it is not abstract enough, and (ii) it can be described by means of more basic aspects. Yet it depends on the perspective, of course.

Anyway, our theoretical framework allows us to distinguish between various phenomena around complex systems in a way that is not accessible through other approaches. Similarly to the theory of natural evolution, our theory of complexity is a framework, not a model. It cannot be tested empirically in a direct manner. Yet, it has predictive power on the qualitative level, and it allows us to develop means for synthesizing complexity, and even for qualitative predictions.

Deleuze (p.142 in [18]) provided a highly convincing and, IMHO, still unrivaled description of complexity on less than three short pages: the trinity between the broiling of “particled matter” and “bodies” below, the pattern above, and the dynamically appearing emergence he called “sense”. He even attributed to that process the label “logic”. It is amazing that Deleuze, in his book about the logic of sense, focused strongly on paradoxes, i.e. antagonistic forces, and that he succeeded in remaining self-consistent with his series 16 and 17 (the paradox of the logics of genesis); today we can determine the antagonism (of a particular configuration) as one of the main necessary Elements of complexity.

What is the role of complexity in cognition? To approach this question we have to recognize what is happening in a complex process: a pattern, a potential novelty, appears out of randomness. Yet if the pattern does not stabilize, it will sink back into randomness. However, patterns are temporarily quite persistent in all complex systems. That means that the mere appearance of those potential novelties may make them a subject of selection, which stabilizes the volatile pattern over time, changing it into an actual novelty.

Thus, complexity sits in between the randomness of scattered matter and established differences, a virtual zone between matter and information, between the material and the immaterial. Deleuze called it, as we just said, the paradox of the logics of genesis [18].

The consequence is pretty clear: Self-Organizing Maps are not sufficient. They just establish a volatile, transient order. Yet, what we need to create is complexity. There is even a matching result from neuro-cognitive research: the EEG of more intelligent people shows a higher “degree” of complexity. We already know that we can achieve this complexity only through animal-like growth and differentiation. There is a second reason why the standard implementation of the SOM is not sufficient: it is not probabilistic enough, since almost all properties of the “nodes” are fixed at implementation time without any chance for a break-out. For instance, SOMs are mostly realized as static grids, the transfer mechanism is symmetric (circle, ellipsoid), they do not have a “state” except the collected extensions, and there are only informational antagonisms, but not chemical ones, or those related to the matter of a body.

This eventually will lead to a completely novel architecture for SOM (that we soon will offer on this site).

This article was first published 20/10/2011, last substantial revision and re-publishing is from 15/01/2012. The core idea of the article, the elementarization of complexity, has been published in [19].

  • [1] Niklas Luhmann, Soziale Systeme. Grundriss einer allgemeinen Theorie. Frankfurt 1987. p.46/47. cited after [2].
  • [2] Frank Eckardt, Die komplexe Stadt: Orientierungen im urbanen Labyrinth. Verlag für Sozialwissenschaften, Wiesbaden 2009. p.132.
  • [3] Andrea Gleiniger, Georg Vrachliotis (eds.), Komplexität: Entwurfsstrategie und Weltbild. Birkhäuser, Basel 2008.
  • [4] Lewin 2000
  • [5] Alan M. Turing (1952), The Chemical Basis of Morphogenesis. Phil. Trans. Royal Soc. Series B, Biological Sciences, Vol. 237, No. 641, pp. 37-72. available online.
  • [6] Belousov
  • [7] Klaus Mainzer (2005), Symmetry and complexity in dynamical systems. European Review, Vol. 13, Supp. No. 2, 29–48.
  • [8] Chalmers 2000
  • [9] Michel Foucault
  • [10] Michael Cross, Notes on the Turing Instability and Chemical Instabilities. mimeo 2006. available online (mirrored)
  • [11] Gray, Scott
  • [12] Schrödinger, What is Life? 1948.
  • [13] Ader, Psychoneuroimmunology. 1990.
  • [14] Hermann Haken, Synergetics.
  • [15] Robert Bauer, Mihnea Moldoveanu (2002), In what Sense are Organizational Phenomena complex and what Differences does their Complexity make? ASAC 2002 Winnipeg (Ca)
  • [16] Neil E.Harrison, Thinking about the World we Make. in: same author (ed.), Complexity in World Politics Concepts and Methods of a New Paradigm. SUNY Press, Albany 2006.
  • [17] Nicholas Maxwell (2000) The Mind-Body Problem and Explanatory Dualism. Philosophy 75, 2000, pp. 49-71.
  • [18] Gilles Deleuze, Logic of Sense. 1968. German edition, Suhrkamp Frankfurt.
  • [19] Klaus Wassermann (2011). Sema Città-Deriving Elements for an applicable City Theory. in: T. Zupančič-Strojan, M. Juvančič, S. Verovšek, A. Jutraž (eds.), Respecting fragile places, 29th Conference on Education in Computer Aided Architectural Design in Europe eCAADe. available online.

۞

Hooking up Logic

January 8, 2012 § Leave a comment

The million dollar question of any philosophy is

about the relation between logic and world, though it is probably not the only one. “World” shall subsume the sayable and the demonstrable here (in contrast to the Tractatus [1]). Thus, this would comprise, for instance, the relationship between practiced language and logic, but also, and in no way less problematic, the derivation of a particular reasoning about the world on the basis of diagnostic models. Another flavor of the same issue concerns the question why mathematics is applicable to the world. There was, for instance, the dream of some philosophers and logicians (some still dreaming it today) of analytical (= logical) conclusions that extend the empirical basis. This is, of course, even beyond utter nonsense.

Somehow it seems that the logical approach is completely unsuitable for getting in touch with the world. O.k., that’s not really surprising for anyone who has understood Wittgenstein’s philosophical work, even if only partially. Nevertheless, to date there is no applicable proposal about this relationship. Simply forcing practiced language (or even the whole world) into the rigid brace of logic inevitably renders practiced language, as well as the world, into a deterministic machine. This is hardly acceptable, of course, despite the fact that large parts of philosophy, neurobiology and many sciences propose (do?) exactly this.

Our investigation of this difficult problematics has to concern three main parts:

  • (1) The transition from the realm of (probabilistic) description to logic.
  • (2) The transition from logic back into the world.
  • (3) The conditions for either of the two directions.

In other parts of this collection of writings about machine-based epistemology we have already met these transitions, yet without digging too far into the problematics of the relation between logics and world. Matters of appropriate levels of description, of the status of similarity, and also of the duality between information and causality all relate to these transitions.

Yet, we not only want to get clear about the problematics posed by the transition across the gap between the indeterminate/world and logics. We also want to outline a possible path towards an implementation.

Consequences of an Irrevocable Choice

Elsewhere, we already argued that the empirical input into instances that are endowed with the power for modeling through association needs to be probabilized. Neither words, nor objects, nor formalized descriptions can serve as a basis for the first steps of getting into contact with the world, because any of those requires (and hence: assumes) a two-fold a priori existence, both outside and inside the “understanding” subject.1 Yet, this is exactly what we try to explain (in the meaning of “trying to get clear about”); thus we should not, of course, assume it. Doing so, we would commit the infamous figure of petitio principii, which victimizes large parts of philosophy (e.g. Hegel and any sort of “analytic” philosophy), of the humanities (positivism in the social sciences, formalization of semantics or even of language as a whole) and even of science (computer science, and even biology concerning “genes” [2]). Here, we have to be very clear about our basic assumptions in order not to be victimized by a petitio principii ourselves.

Philosophically speaking, we do not start with existence. We do not follow the related assumption of the primary role of identity2 and logics, symbolized as “a=a”3. This also excludes any sort of externalized realism, even regarding the “structure” of the world. As a consequence, we also tend towards a denial of the feasibility of ontology as a subject of (philosophical) thinking. In the introduction to his “Ethics without Ontology” [3], Putnam readily displayed the intention of the Hermes lecture of 2001 on which the book was based:

I […] present in public something I realized I had long wanted to say, namely that the renewed (and continuing) respectability of Ontology (the capital letter here is intentional!) following the publication of W. V. Quine’s “On What There Is” at the midpoint of the last century has had disastrous consequences for just about every part of analytic philosophy.

…and a bit later, a bit more precisely:

[…] the purpose of the Hermes Lectures was to criticize certain fallacious conceptions – conceptions linking ontology, metaphysics, and the theory of truth – that, in my view, have had deleterious effects on our thinking as much in philosophy of logic and philosophy of mathematics as in ethics.

For Putnam, among many others, and also, of course, for us, it does not make sense to split off philosophy into disciplines like epistemology as the philosophy of knowing, or, more generally, into any philosophy of ⟨..⟩. Ontology, often regarded as the science of being, thus putting the idea of being before any other idea, is deleterious to any human thought, because it claims a necessity of truths that are to be found in the external viz. non-human (sphere of the) world, be it physical, religious or idealistic. The direction of this argument can be reversed without losing validity: claiming the necessity of x-kind of truth directly results in an acceptance of the primacy of existence. A truth that is rooted outside of ethics and morality is nothing but a monster. Both directions open the doors wide for any “justification” of a-human activities. From this perspective, ontology is an atavistic remnant of pre-historic mysticism. The idea of “ontology” is even deleterious in computer science, as it hides the important questions and supports self-delusionary concepts. Our opposition to “Ontology” does not mean that we deny that things, facts or humans exist. In some sense we even could agree to say that ideas “exist.” We just deny that Ontology and existence are an acceptable or even a possible starting point. Yet, an ontology that is not a starting point is not an Ontology any more. A biology that does not start with life and living beings is not a biology any more, at most biochemistry, biophysics etc.

The contrasting perspective is that of transcendental difference. In physical terms it is probably more popular to speak about fluctuations. In the beginning there is not the word, nor the idea; in the beginning there are only fluctuations, as yet indeterminate, while the notion of “beginning” refers to any individual being as well as to the Big Bang and its evolution towards crisp separations, which we sometimes call “particles” or “objects.” If in any beginning there are only fluctuations, every thing and so every being may establish itself only through interpretation5, or more precisely, in a volatile (probabilistic) network of transient, mutually superposing interpretations. We may well call this a kind of “String Theory” (of Generalized Reference). Language and meaning form only the tip of that iceberg. Last, but not least, we see that accepting the primacy of interpretation influences any activity, even that of interpreting and modeling itself.

Preferring difference over identity, and so interpretation over existence, as starting points should not be misinterpreted as a denial of existence, as we already mentioned above, or as a denial of the possibility of identity. I am not denying that we exist, that is, that it is feasible to say that something like an idea or properties of things exists6, nor do I think that we are living in a matrix, in a brain vat or anything comparable.7 Of course, the concept of “reality” remains meaningful for us. Yet, the referent of this “existing,” or this “reality,” is, in our perspective, not outside the human sphere, definitely not, not even a tiny bit. Hence it is not possible to do a science of existence (ontology), because as soon as one would start with it, it would vanish. (The rest of the argument can be met in Putnam’s book.)

Both of these alternatives are, however, still based on assumptions, necessarily so. One may call these assumptions “metaphysical”; the label does not matter. If there were no metaphysical assumptions inside them, everything about the origin could be formulated (formally explicated), which of course is not possible. There is no such thing as an explication that does not need external conditions. Thus, albeit there seem to be good reasons for choosing the second alternative, one may nevertheless also call these assumptions “non-justifiable beliefs.” Later, we will see that it is well possible to identify the “nature” of this dependence on external conditions, and also how we can speak about it without internal contradiction, without “silly” self-contradictory performance. Yet, preferring the primacy of interpretation over the primacy of identity is justified only by a larger degree of consistency; in other words, it remains based on a belief.

The common issue about both alternatives is that there is no “intermediate” for them. They are mutually exclusive and together they are exhaustive; there is no other possibility, except, perhaps, revelation, which by definition is not only outside of the sayable, but even outside of the world. It is simply there, without any possibility for (a) preceding reason. Besides that, there are only two possible primacies, logics or interpretation. We may call them strong abstract attractors. Switching back and forth between them would undermine any possibility even for the simplest argument, which is not acceptable for any stance emerging from the two alternatives.

The difference between the two alternatives is, indeed, a really large one. Starting with identity not only means starting with logics, but even equating the world with logics, or, in more favorable words, claiming the direct applicability of truth functions in the world. Yet, today we know that the programs of Carnap [4] and Stegmüller [5], heading for a “language of/for science,” or “scientific language,” failed, and that there is no possible definition of knowledge that would obey the logical frame (cf. Gettier [6], Peschard [7]). The failure of the logical approach in artificial intelligence we have already mentioned above.

So, the question about the relationship between logics and the world turns away from the possibility of a formal foundation, even from formal arguments. This relationship is one of purely practical concerns.

The Theory of Transit

Unexpected Allies, and a Break

Even if we accept the primacy of interpretation for any feasibly distinguished activity, it is undeniable that there is something like logics. Before we start trying to link into the fields of the propositional, we have to get clear about the status of logic itself.

Following Wittgenstein, Johnston [8] supports the position that logical form is available only a posteriori. We can’t have apriori knowledge of atomic forms, e.g. of “relation,” where “apriori” means “in advance of the application of logic.” The “application of logic” precedes the possibility to distinguish logical forms; it is understood by Wittgenstein as a truth-functional analysis, and this brings in aspects of interpretation. In other words, it is a performance that actualizes a particular logical form. No doubt, there is now a certain tension that we have to resolve.

Johnston [8] captures it in the following way (p.155):

And in a 1929 discussion with Waismann entitled ‘Objects’ Wittgenstein says: “Only when we analyse phenomena logically shall we know what form elementary propositions have. (Wittgenstein 1979b, p. 42)8” It is, Wittgenstein held, only through the performance of analysis that we may develop a clear symbolism, substituting it for the unprecise one of everyday. A concept script will encode the elementary propositional forms and uncovering what these are is a principal ambition of the project of analysis. Wittgenstein is fundamentally opposed to the idea that one first constructs a concept script and only subsequently turns one’s attention to particular propositions of everyday, attempting to see how they might be written in the constructed symbolism. One does not first work out what propositional forms there are and then subsequently decide which of these forms is had by some English sentence. Those are not the two steps of giving logic and applying it. The two steps of giving logic and applying it are rather first to characterise truth functionality, and subsequently to uncover truth functional structuring within propositions. And it is only through the second of these that the elementary forms will become apparent, that a concept-script will become available.

The distinction between (universal) “particulars” and (truly universal) universals (by Frege or Russell) is a misunderstanding. Johnston (p.159) cites Ramsey, for whom “the historical theories of universals and particulars are ‘muddles’ predicated upon the false presumption that we have knowledge of atomic forms.”

Tractarian objects of logic do not have a particular form. As Johnston compiles it, Wittgenstein wrote in the Tractatus (TLP 2.0141):

The possibility of its occurring in states of affairs is the form of an object.

But he also asserts that there is nothing accidental in logic. Wittgenstein does not refer to probability or possibility here as it would be understood regarding empirical relations. Thus the possibility in TLP 2.0141 cannot be related to immanence either. It is much more appealing to interpret the Tractarian possibility as potential, or virtuality, maybe almost in the Aristotelian sense. Logic thus acquires the notion of transcendentality.

Even more significantly for our endeavor of machine-based epistemology, he proceeds with

If two objects have the same logical form, the only distinction between them, apart from their external properties, is that they are different. (TLP 2.0233)

External properties are properties that emerge due to the application of the logic that comprises them. The potential of building relations is the form of an object of logic, and the only thing that we can say about different such potentials is that they are different. That difference can’t be described, of course. It is transcendental difference.

Well, this now is an astonishing parallel to the central pillar of a very different (!) sort of doing philosophy: that of Gilles Deleuze. In the interview series “Abécédaire” Deleuze [9] called the “school” of Wittgensteinian philosophy a “catastrophe” and declined to comment any further. Deleuze said:

Non, je ne veux pas parler de ça. Pour moi, c’est une catastrophe philosophique, c’est le type même d’une école, c’est une réduction de toute la philosophie, une régression massive de la philosophie. C’est très triste […] (“No, I don’t want to talk about that. For me, it is a philosophical catastrophe, it is the very model of a school, it is a reduction of all of philosophy, a massive regression of philosophy. It is very sad […]”)

Notably, Deleuze did not refer directly to Wittgenstein himself. We all know that Wittgenstein is embarrassingly often taken as a representative of “analytic” or “positivist” philosophy, which surely is a deep and violent misunderstanding [10]. (As we just saw, even according to the Tractatus, Wittgenstein denies the possibility of a primacy of logical analysis before performance, which is a typical pragmatist attitude; and to label any “thinking” or dissecting as analytic, well…) Maybe, however, there is also a trace of some sort of higher rivalry in Deleuze’s comment. Everything in Deleuze’s approach is arranged around transcendental difference, and the differential. Despite the fact that the two performed very different styles, I think that both share a certain disgust regarding the “analytic,” and certainly both favor the approach of looking closely at their subject of investigation. “Broad strokes” (a term coined by Quine) do not belong to the tool-set of either of the two. We take this concord about the transcendental status of difference between those two great philosophers, maybe the two greatest philosophers of the 20th century, as a confirmation of the feasibility of our approach.

As a consequence of the transcendental difference of objects in logic, any logic that contains countable relations is already affected by semantic choices and empirical determinations. We follow Wittgenstein in his (implicit) proposal to discern “pure,” hence transcendental logics, in short “T-logic,” from practical quasi-logics, in short “Q-logic.” Quasi-logic is best conceived as a heuristic instantiation of “pure” T-logic, comprising almost any “amount” of semantics. From this it follows not only that there is an infinite number of different Q-logics, but also that many of them are incommensurable. Precisely this is what we can observe.

It is more than obvious that this has tremendous consequences for any possible (attempt to establish a) machine-based epistemology. The core is the idea that to “characterise truth functionality” is the first step. Putting down such a characterization is not the business of logic. The structure of the world precipitates in that characterization. The structure of the world that is before language and before logic (which are co-genetic) is nothing else than the body of faculties, mainly, however, the faculty of association.

Before Space and Within It

So, based on the results available so far, we could put the issue at hand—about the relation between logic and world—in different words:

How can we proceed from the realm of the indeterminate to the fields of the propositional?

Before approaching the path towards a possible solution more closely, we should be clear about the characteristics of the two sides, and about possible approximations, or, if you like, operationalizations. Upfront, we may expect that we will meet neither pure construction (of abstract scaffolds to stock perceptions) nor a sort of distillation (of such scaffolds from “empiric data”).

Generally speaking, the task is to describe a rather particular (class of) transformation(s). This transformation is not only settled completely in the structural domain; we even have to state that on the left-hand side we have no identifiable entity at all. Using some symbols and denoting that transformation for instance as

A → B

is not applicable, since the “A” and the “B” are very different; so to speak, they are differentially different. A and B have a different status; it is not possible to think of a possible transformation between them, as it is for instance conceptualized in category theory. The transformation we are looking at is nothing else than an actualization of the indeterminate into the propositional. Yet, this indeterminate should not be equated with the potential here, albeit the potential and the virtual are important aspects of it. Symbolizing it we could write

. → B

In order to make some substantial progress we want to draw an analogy to number theory here. Real numbers are not numbers in the “classical” sense, since there is nothing that could be enumerated. What real numbers share with simpler numbers like integers is just the order relation that is valid for all their members. Nevertheless, there is no such thing as a “particular” real number. They are more of a space for the simpler numbers like the natural or rational numbers. Dedekind showed that real (and in particular irrational) numbers can be defined through countable sets of (rational) numbers that approach each other, without themselves being members of those sets (see the Dedekind cut, or Cauchy sequences). Roughly speaking, real and irrational numbers fill the gaps between any thinkable or selected numbers. Hence there are relations to the so-called “axiom of choice” of Zermelo-Fraenkel set theory. Of course, the technique that Dedekind applied to the relation between countable and non-countable numbers, where the countable ones build the basis, can be applied to the real numbers themselves. This way Conway “discovered,” or invented, the surreal numbers [11]. Those form a space which “contains” the real “numbers” as a possibility, just as the real numbers contain the natural numbers as a possibility.
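For readers who want the formal kernel of the analogy, the standard construction can be stated compactly (textbook notation, nothing specific to our argument):

```latex
% Dedekind cut: a real number r is identified with the downward-closed
% set of all rationals strictly below it (the standard construction).
\[
  r \;\longleftrightarrow\; A_r = \{\, q \in \mathbb{Q} \mid q < r \,\},
  \qquad \emptyset \neq A_r \neq \mathbb{Q}.
\]
% A_r has no greatest element; the "gap" above A_r is the real number
% itself. The "inverted" Dedekind cut mentioned below is the reverse
% move: selecting such a countable, discrete structure out of the
% continuum, rather than completing the rationals into it.
```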

Yet, here we are not interested in number theory as such. What strikes us is the structural similarity to our question, showing a well-defined possibility for a transition from a space of the indeterminate that contains any determinable space into a space of identifiable entities. From this (more philosophical than mathematical) perspective the problem is not how to proceed from natural (or other countable) numbers to real numbers, i.e. from the countable, representable and identifiable to the indeterminate, but just the other way round. We could write it as

ℝ → ℕ

To be honest, it is not a problem, it is simply a choice; it is even consistent with the position of assuming a transcendental status for the “difference.” Starting with ℝ, we can select any kind of countable, i.e. discrete space 𝒩 by means of an “inverted” Dedekind cut.

ℝ → 𝒩

Now let us return to our problematics of the transit from the indeterminate to the logical. Any applicable logic (Q-logic) we know of so far is based on identifiable relations. Gödel’s proof of the incompleteness of formal systems [12] is strongly based on the discreteness of those relations: he created the so-called Gödel numbering for it (cf. Hofstadter [13]). Gödel numbering is a function that assigns a unique natural number to each symbol and each well-formed formula of some formal language. Yet, we arrive at such an enumerable structural system only by “selecting” it from a space that contains any enumerable structure. The nature of this “selection” process is important and we will have to clarify it.
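The mechanics of such an enumeration are easy to exhibit. The following minimal sketch is our own illustration, not Gödel’s original scheme; the symbol table is an arbitrary assumption. It encodes a formula as a product of prime powers, so that unique prime factorization makes the mapping injective:

```python
# A minimal sketch of a Gödel numbering (illustration only; the symbol
# codes are arbitrary assumptions). A formula s1 s2 ... sn is encoded as
# 2^c1 * 3^c2 * 5^c3 * ..., where ci is the code of symbol si. Unique
# prime factorization makes the map injective: syntax becomes arithmetic.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
SYMBOLS = {"0": 1, "s": 2, "+": 3, "=": 4, "(": 5, ")": 6, "x": 7}

def goedel_number(formula: str) -> int:
    assert len(formula) <= len(PRIMES), "extend PRIMES for longer formulas"
    n = 1
    for p, sym in zip(PRIMES, formula):
        n *= p ** SYMBOLS[sym]
    return n

print(goedel_number("0=0"))  # 2**1 * 3**4 * 5**1 = 810
```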

That “any”-space is our space of indeterminacy. Yet, this space is not without structure, much like the real numbers are not without structure (e.g. as a topological entity), despite the fact that they are not enumerable. One of these structures is what we call “randolation.” Randolations are open categories from which certain families of relations can be derived, similar to the derivation of natural numbers from real numbers if we invert the Dedekind cut. The randolation is the manifold of a particular logical form, the relation.

We think that one may conceive of randolations not only in the space of the indeterminate. We may operationalize them into probabilistic distributions of relations. And exactly this happens in modeling, especially if we volatilize naive concepts of similarity into a similarity functional: collecting and grouping items and their relations into extensional sets.
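What such an operationalization could look like may be sketched as follows. This is a hedged illustration of our own reading only: the name randolated, the decay function and the grouping threshold are free assumptions, nothing here is prescribed by the argument. Each pair of items carries a probability of being related instead of a fixed relation, and a similarity functional turns sampled relation frequencies into extensional sets:

```python
import random

# Hedged sketch: a "randolation" as a probabilistic distribution over
# relations, operationalized by sampling, and a similarity functional
# that collects items into extensional sets (groups).
def randolated(a, b, p_related):
    """Sample a concrete relation from its probabilistic 'randolation'."""
    return random.random() < p_related(a, b)

def similarity_functional(items, p_related, trials=200, threshold=0.5):
    """Group items whose sampled relation frequency exceeds a threshold."""
    groups = []
    for a in items:
        placed = False
        for g in groups:
            rep = g[0]  # compare against the group's first member
            freq = sum(randolated(a, rep, p_related) for _ in range(trials)) / trials
            if freq > threshold:
                g.append(a); placed = True; break
        if not placed:
            groups.append([a])
    return groups

# Toy usage: items are numbers, relatedness decays with distance.
p = lambda a, b: max(0.0, 1.0 - abs(a - b) / 3.0)
print(similarity_functional([1, 1.2, 2, 7, 7.5], p))  # two extensional sets
```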

The Practice of Transit

We have seen that proceeding from the space of the indeterminate to the logical implies a selection. This selection happens in modeling. The configuration of this selection is implicitly determined by the structure of practiced explication, which has been called “world” by Wittgenstein.

This structure of practiced explication is mainly given by the structure of the (quasi-)materiality that serves as a carrier for modeling. For instance, in a world of apples and trees (Newton), or of arrows, swords and lances, it is quite clear that the principle of the excluded middle has been regarded as something rather absolute. Where the sword is, no body can be. This world is often characterized by the concept of classical causality. In a world of virtual networks we find a completely different kind of materiality. Here, the excluded middle has barely any relevance any more. To put it in simple terms: the materiality of information is quite different from that of apples, a fact that went unnoticed by authors like Pearl [14] or Salmon [15].

Yet, we shall not treat classical causality unjustly. It is precisely this link from logic to the irreversibility of causality that allows us to forget about doubts. If a glass broke irreversibly, there can’t be any doubt that it broke. Being broken implies for the glass that there had to be an identifiable act upon the glass, a transfer of energy, no doubt. The relation between the cause and the effect may be crisp, no doubt. Hence, it had to be an enumerable structure. But there is no necessity, still. There is still a lot of interpretation about it, on the structural level, and a lot of arbitrary choices. Obviously, if there already was a crack in the glass we could not claim that the energy transfer has been the cause, that is, the unique cause. For this particular glass we can’t even know whether the energy transferred to it would have broken it, even if it had a crack before. We can never know perfectly, if the referent of this “knowing” is about the world. As Quine noted, any other claim is a dogma, and he found two such dogmas in empiricism [16]. Nor do we think, indeed, that conceptual truth is possible (see also the discussion by Öberg [17]). Concepts are entities (later we will say: choreostemic poles) that are not only outside of the world of possible effects; they are also outside of any available Q-logic, and of course outside of T-logic. There is a long way to take from a concept to an effect, and from potentially perceived fluctuations to concepts as well.

Nevertheless, this may serve as a guide in our transit. In order to arrive at an arrangement of enumerable structures we have to imply a certain kind of materiality. We may also call this step a “decoherence.” We have to introduce incommensurabilities between subsets of structures that derive from their randolated “counterparts” as their operationalization, notably simply by choosing one.

This choice is not completely arbitrary. It remains bound to the requirement of providing the possibility for sufficient predictive power. It is very important to understand here that there is probably an infinite number of possible structures that could serve this purpose and from which we could choose. Quite naturally we will develop habits, often based on our body and its enumerable structures, and we will draw on experience in how to organize that transit. But within the constraints of an infimum of utility the choice remains arbitrary; there is no “causal” relationship between that choice and predictive success.

Note that there is no possible “logical” justification for this choice, or selection. This choice, which can be conceived as the inverse of the Dedekind cut, is just the result of a performance that in turn is constrained (“conditionalized”) by the embedding material and immaterial structures (“givens”) of the Lebenswelt.

We can now conclude by providing an answer to the first part of the problematics about the relation between world and logics.

First, it is not adequate to talk about a relation between those two “phenomena.” Logic (and its enumerable structures) is created from the indeterminate (and its matrix of non-enumerable randolations) by a selection as a performance. Logic simply “appears” through associative modeling and its implied materiality, sometimes known as “body,” more generally labeled as “quasi-body,” as, for instance, in the case of symbols. Wittgenstein called it the structure of the world. The fact that we meet selection here opens a passage to evolutionary processes and the structure of comparison. It is interesting that it is precisely this evolution that reversed the relation between the body as materiality and associativity. The evolutionary story went from “implied associativity” (in amoebas) to “implied materiality” (in complex brains). As a domain, logic could only appear as a secondary effect, which is the reason that any applicable logic is always a Q-logic. As a structure, it is transcendental and not applicable as a “pure” form (whatever this could mean…).

For the same reason, truth and truth functions cannot be considered as being part of the world, “world” here referring to a world of effects, or in other words, a non-conceptual world. In this respect, however, we disagree with Putnam about the possibility of conceptual truth, albeit we would defend it if he were right about concepts. The concept of concept is a non-trivial concept! Categories like concepts and models we will call “choreostemic poles.” Concepts acquire meaning and sense only in a social context, in a discourse, and in a very particular way, as Brandom [18] demonstrates. For sure, “concept” can’t be defined exhaustively and positively. Nevertheless, truth functions disappear from the world together with ontology.

Second, the composite made from material aspects, habits concerning structural selections and styles of modeling is undeniably quite important for the empirical parts of any particular Lebensform. This compound is both a (provisional) result and a (dynamic) scaffold for deriving (further) results. Of course, perception is neither a “flat” input-output relation nor should it be conceived as a passive process.

Third, the transit into the area of the propositional is also a transit from the indeterminate into the realm of the symbolic, which in turn opens the path to the realm of reversibility.

This brings us to an almost paradoxical arrangement. On the one hand, we have seen that logic is the paradigmatic representative of causality, at least as far as finite-valued logics are concerned.9 On the other hand, this representation takes place in the realm of the symbolic, which provides just the opposite of classical causality: reversibility. Here, we take this as a clear hint that logics should not be seen as serving this representative role, at least not in an idealistic or absolute manner. We will return to the conditional embedding of logics into the world elsewhere. It will become clear that there is no paradox.

Inadvertent Transits: Creating Actions

The second part of our problematics—the transition from logic back into the world—is much simpler. Regardless of how many operations we performed in the space of reversibility, once we act, we change the frame. Quite likely, we also forget about most of the “reversible” operations. “Acting” is precisely the language game for this change from the space of reversibility to the space of irreversibility. Acting does not create unambiguousness and uniqueness. It does create, however, the need for new interpretation. Acting introduces the indeterminate. Only for this reason can it be irreversible. This transit happens inadvertently.

Interestingly enough, we provoke this transit by writing, or by other means of externalization. Meanwhile, human culture has even developed into an art of externalization, from symbols to language to printing to the media to the web. Yet, we should not forget that dealing with this externalization requires modeling again, including a transit of the first kind. It is precisely this dynamics that creates the particular status of a text, or, in a different way, that of any ordinary discourse. It was Robert Brandom [18], whom we cite frequently on this site, who was the first to shed some light on the mechanisms of discourse from the perspective of the primacy of interpretation.

The Role of Logics

Undeniably, logic takes a particular role. Russell Marcus naturally called it the technical tool of philosophy [19]. Besides the question whether there could possibly be tools that are not technical, we actually have to be concerned with the question which role (Q-)logic is playing. Henceforth we always refer to Q-logic when saying “logic.”

One major goal of philosophy is clarity. The major property of logics, regardless of the actual flavor, is the notion of uniqueness. Of course, already the premises need to be distinguishable (Gödel-enumerable). This obligation (propensity?) towards uniqueness is paired with a particular focus on a structural linearity, or at least pre-linearity in the case of many-valued logics like Gödel logic. Even in further abstractions like Malinowski’s t-entailment [20] there is a direction. Also, self-referentiality is excluded from any logics, unfortunately so, I think. From that it follows that any (pretended) application of logics directly establishes a temporal order. Again, note that this temporal order is strictly linear and of a stepwise structure that is imposed on a synchronous set of steps. Within a logical expression, or a predicate, there is absolute synchronicity and contemporaneity, pure present time, if you like. This describes the trivial fact that we should not forget about the premises before we have ended up with the assignment of a truth value. Q-logics thus induces a split in temporal reference. All parts of a logical expression are, however, impermeable. Without resorting to logical atomism we are nevertheless allowed to say that the elements of a logical expression (in a finite-valued, classical logic) are like particles. It is this property that renders Q-logic into a kind of (simple) materiality. Regarding the temporal structure we can observe that the instantiation of a Q-logic also implies a separation of temporal reference into at least two lattices.

The principal idea is that from true premises everything else follows. Marcus [21] cites de Morgan’s famous proverbial characterization as the motto of his book: “The question of logic is: Does the conclusion certainly follow if the premises be true?” In practice, of course, one can utilize this “everything” in a reverted manner: we project it (take it for granted) and search for the effect that necessarily follows. Obviously, the art then lies in setting the premises.

Leaving that aside, we can now address the question about the role of logic in thinking. There are several aspects to it: how to link it to the world beyond the concept of relation, its status in the world, and which effects could be achieved by using it. The latter point refers to the issues of symbols and knowledge.

First of all, we take the distinction of “logic” into a T-logic and the realm of Q-logics as an alleviation from the burden of ontological truth. Truth and truth functions are not applicable to the world, at least not directly, as we will see shortly. Truth is as little in the world, or an object of the world, as any other concept. Just like any other concept, “truth” is dependent on some basic conditions. Yet, this does not mean that we propose to accept radical relativism, of course. Just like any other concept, “truth” makes sense and has meaning only in a discursive context, i.e. as a particular language game. The important convention is to use it as a tool for the construction of “as-ifs,” that is, to create scenarios and simulations.

Carnapian programs propose a particular linkage between the world and logics. The basic claim is that there is the possibility for an exhaustive language in which all statements about empirical “facts” fall completely within an exhaustive logic. This is equivalent to the claim that there are truth values to be found in the world, which we already refuted. Yet, Carnapian programs are also equivalent to the claim that sentences of natural languages can be rewritten as expressions that belong to a logic, notably to a T-logic. Despite the fact that such attempts have been proven untenable (indeed, many times so), one can still find them even nowadays (e.g. [20]).

Sentences in natural languages are clearly not logical predicates or propositions, and for many reasons so. Probably the story goes just the other way round: any particular Q-logic is the form of the unfolding organization of a discourse. This would tie Q-logic to a particular Lebensform without prescribing one by the other. We have seen that this linkage between logic and Lebensform also creates the distinction between T-logic and Q-logic. If this is correct, logic is just a consequence of discourse pragmatics.

Q-logic is essential for establishing knowledge. Knowledge, however, is not about empirical facts, at least not directly, as we argue in more detail in another chapter; nor is it reducible to things like “justified beliefs” (see Gettier [6]). Besides the trivial cases where we indeed may reduce knowledge to the figure of “knowledge that p,” knowledge is mainly the capability to establish a social resonance about empirical facts. Without Q-logics we would not have the possibility to reduce representations down to uniqueness, i.e. we could not exclude misunderstanding, or in still other words, we could not secure it mutually to each other.10

Due to its role in excluding vagueness, at least in local contexts, logic plays an important role for modeling as well as for using models, yet quite a different one in each case. With respect to the practice of modeling, logic is a co-genetic phenomenon, as far as we are concerned with associative modeling. Of course, we have to distinguish sharply the performance of modeling from formalizing it using symbols, as we did with respect to the generalized model. In our investigation of associativity we have seen that associativity implies the transition from the realm of the quasi-material into the realm of the immaterial. It is very important to understand here that associativity itself implies this separation. We may also consider it a compartmentalization in the abstract. The property “quasi-material” is not restricted to matter, of course. If we run an associative structure like the Self-Organizing Map we meet a purely immaterial network. Nevertheless, its associativity implies the mentioned separation. In other words, associativity generates relative matter, i.e. the quasi-material. In the very same context, logic appears, notably by virtue of associativity as a performance, which is, in turn, creating and bound to the quasi-material, or, if we regard it as a compartment, to the quasi-body. In short, any particular Q-logic is a consequence of a certain bodilyness.

The role of logics is a different one if we proceed to the application of models. Here, logic is simply an apriori condition. Applying a model implies the preceding selection of a logic via inheritance from the modeling performance. This also means, however, that a particular model prescribes which (class of) Q-logic one has to obey.

A last point remains that we want to deal with before going practical. The uniqueness built into logic provides a bridge to other entities that share this property: names, indexes, and symbols.

A name is a primitive. If we compare it with indexes we see that it is even a pre-specific primitive. Vilém Flusser called it “throwing out a name.” Names are elements of a language, in contrast to indexes, which are elements of a formal system. A name does not need a preceding quasi-material referent, quite in contrast to an index. Both, of course, share the property of uniqueness. Names usually develop into compounds consisting of at least one index and the potential to serve as a symbol, while an index is never a symbol.

Elsewhere we have seen that the concept of “symbol” describes a particular process of referencing that is rooted two-fold in quasi-materiality, regarding both the starting point and the end point of its usage, so to speak. This symbol-process transits through immateriality like a looping ligament. It provides the hook for logic and for signs. While symbols always refer to a quasi-materiality, signs do not. Symbols refer to the material, signs refer to signs.

Let us recapitulate the two aspects that are most salient for us here:

  • – Any particular Q-logic is both a consequence of a certain bodilyness as well as a kind of quasi-materiality.
  • – The instantiation of a Q-logic implies a separation of temporal reference into at least two lattices.

If we contrast this with the symbol-process, then logic appears as a way to describe a certain policy for chaining those processes. Logic always remains close to the quasi-materiality of symbols. Moreover, logics inevitably remains a performance. As a performance, logic is the means to describe the mechanism for chaining symbol-processes, for finding or creating stable grounds for the purely immaterial semiotic processes. In short, we may also describe logic as the story-telling of quasi-bodies.11

Practical Considerations

Now we can turn to the consequences for machine-based epistemology. These consequences all derive from the performance aspect of logics and the related quasi-materiality.

On the one hand, logics is a consequence of the performative capacities of the quasi-body and its implied associativity. The relationship, however, is opaque. It is not (yet) known how to control the emergence of a particular Q-logic from a particular bodilyness. I guess that is even a matter of evolutionary dynamics, transcending the potential of individual capabilities. Philosophically speaking, it is a matter of the Lebensform as a whole.

We may further guess that the possibility for an evolvability of this relation between quasi-materiality and logics implies a more abstract, or alternatively, a pre-specific notion of logics. There are, however, indications that such a step is not possible, as the step to the so-called basic t-norm logic is considered to be an extra-logical move (cf. [20]).

On the other hand, we may implement different models of quasi-(Q-)logics, even in a parameterized way, starting with an algebraic representation of a Q-logic. Such an implementation can be regarded as the simulation of a particular materiality.
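A minimal sketch of what such a parameterized, algebraic Q-logic could look like (our illustration, assuming the standard t-norms from many-valued logic as described in [20]; this is not a piece of our software): the three basic continuous t-norms give three different graded conjunctions over truth values in [0, 1], and which one is in force is exactly the kind of extra-logical, “material” choice discussed above.

```python
# A hedged sketch of a parameterized Q-logic via the three basic
# continuous t-norms of many-valued logic (cf. Gottwald [20]).
def t_norm(a: float, b: float, flavor: str = "product") -> float:
    """Graded conjunction of truth values a, b in [0, 1] under a chosen logic."""
    if flavor == "goedel":        # Gödel logic: minimum
        return min(a, b)
    if flavor == "product":       # product logic: multiplication
        return a * b
    if flavor == "lukasiewicz":   # Łukasiewicz logic: bounded sum
        return max(0.0, a + b - 1.0)
    raise ValueError(f"unknown Q-logic flavor: {flavor}")

# The same two graded premises yield different conclusions depending on
# which Q-logic has been "selected" -- the selection itself is extra-logical.
for flavor in ("goedel", "product", "lukasiewicz"):
    print(flavor, t_norm(0.7, 0.6, flavor))  # 0.6, 0.42, 0.3
```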

Of course, both aspects have to be coupled. Yet, the path from associative quasi-bodies to both the performed and the applied logics is pretty clear. It is demonstrated by the path we already described as the series: patterns → classes → models → named models → (names → indexes → symbols) → applied logic → signs → concepts. Obviously, the middle part of this transitional series is rather volatile. We would just like to recall two issues here: (1) this path can’t be hosted by an isolated (or otherwise closed) entity; (2) this path does not imply any kind of causality, of course.

Names

The starting point of this chain is relatively easily accessible. Modeling is explicitly available in many different forms, and indexes or logics can be implemented using standard techniques, e.g. databases or logic programming; a minimal sketch of such an index follows below. On the other side of the chain the concepts reside. As choreostemic poles, concepts are not accessible at all, while semiotic signs, i.e. a semiosic process, are difficult to implement, but it should be possible using a population of lattices of growing SOMs. The mystery that remains is the naming,12 which also is the interesting part of symbols. Both the material and the referential aspects of symbols are nearly trivial.
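For the trivial end of the chain, here is a minimal sketch of an index over named models, using an ordinary database (the table and column names are illustrative assumptions, not part of any existing code base):

```python
import sqlite3

# A minimal sketch: a unique, persistent index for named models backed
# by an ordinary relational database. The index is a formal element; the
# name remains an element of language that the index merely points to.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE model_index (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")

def index_of(name: str) -> int:
    """Return the unique index for a model name, creating it on first use."""
    db.execute("INSERT OR IGNORE INTO model_index (name) VALUES (?)", (name,))
    return db.execute("SELECT id FROM model_index WHERE name = ?", (name,)).fetchone()[0]

print(index_of("som-layer-7"), index_of("som-layer-7"))  # same index both times
```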

Perhaps you know the film The Pillow Book by Peter Greenaway. The film is (like Prospero’s Books) about the relation between body and text. In a more general perspective, the relation between the symbolic and the body can be met in any of his films. Anyway, in the “Pillow Book” there is a scene where the father paints the name of his daughter as a calligraphy onto her forehead as part of an initiation ceremony. He also explains that (Japanese) people believe that god created man by means of the same ceremony. Naming is probably indeed the focal point of everything we are interested in here.

Of course, the act of naming can’t be pre-programmed. It has to be a particular practice of the “machine” “itself,” and most likely it has to be a social practice. One could argue that perhaps not every model has to be named. Yet, based on our analysis, we would oppose that view, guessing that the naming of every model is inevitable in order to achieve the ability to deal with signs (in the Peircean sense of a “sign-situation”).

So the big question is: how to enable the act of naming? Our provisional answer (admittedly, a guess): by means of sensual cross-modality. Of course, we will have to justify this further in the future. Yet, it would mean that the capability for “active” understanding will not be achievable for an arbitrary kind of entity unless that entity deals with different modalities, i.e. basically, waves, words and images. These are needed to describe the world and to act upon it, which in turn provides the anchor points for practices and their names.

Notes

1. See Frege’s hyper-platonism, idealism.

2. Of course, we also could have started with identity, putting existence in second place.

3. Note that an expression like a=b would already refer to an algebra, since in such an assignment there is already an empirical element in claiming a particular symmetry that justifies the equation. Yet, we do not agree with Frege’s chasmatic distinction of analytic (“a=a”) and synthetic (“a=b”) [23, p.56]. Outside of transcendental relationships (like a=a), everything is more or less synthetic. In fact, it is the transcendentality of “pure logics” that allows us to drop Frege’s distinction.

4. A must-read here is certainly Hilary Putnam’s “Ethics without Ontology” [3]. We just want to note that quite a few of his arguments would become much simpler if one were to put Putnam’s ideas onto the foundation of generalized modeling and choreosteme.

5. Selective interaction; pan-semioticism, yet not only in relationships to external entities.

6. You see, we take neither reductionist nor nominalist (nor even eliminativist) positions.

7. The concept of the “brain in a vat” was invented by Putnam himself, in order to defeat skepticism.

8. B. McGuinness (ed.), Ludwig Wittgenstein and the Vienna circle. Blackwell, Oxford 1979.

9. The status of logics that incorporate infinitely many truth values is not clear yet. See the entry on many-valued logic in the Stanford Encyclopedia of Philosophy [20].

10. This perspective on knowledge and its relation to logic is deeply influenced by the approaches first explored by Wilfrid Sellars [22] and Robert Brandom [18], and, so I think, also perfectly compatible with their views.

11. The relation between bodies and concepts also appears in another domain that seems, at least at first sight, completely different from what we are doing here: The art of performance and the pedagogy of art. This relation we explored in [24].

12. So far, there is no convincing concept of names in philosophy; see the entry about names in the Stanford Encyclopedia of Philosophy. Albeit we think that our approach as presented here is more appropriate than any other, we don’t feel that we are done yet.

  • [1] Ludwig Wittgenstein, Tractatus Logico-Philosophicus.
  • [2] About the illusion of “genes.”
  • [3] Hilary Putnam, Ethics without Ontology.
  • [4] Rudolf Carnap.
  • [5] Wolfgang Stegmüller.
  • [6] Edmund Gettier (1963), Is Justified True Belief Knowledge? Analysis 23: 121–123.
  • [7] I. Peschard.
  • [8] Colin Johnston, Tractarian objects and logical categories. Synthese (2009) 167: 145–161. Available here in an annotated version.
  • [9] Pierre-André Boutang, Claire Parnet, L’Abécédaire de Gilles Deleuze. Arte TV 1988 [released 1996]. Wikipedia.
  • [10] Wilhelm Vossenkuhl, Solipsismus und Sprachkritik: Beiträge zu Wittgenstein. Parerga, Berlin 2009.
  • [11] John H. Conway, On Numbers and Games. Academic Press, New York 1976.
  • [12] Gödel on his numbering.
  • [13] Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid. 1979.
  • [14] Judea Pearl, Causality: Models, Reasoning and Inference. Cambridge University Press, Cambridge 2000.
  • [15] Wesley C. Salmon, Causality and Explanation. Oxford 1998.
  • [16] W.V. Quine, Two Dogmas of Empiricism. Philosophical Review 60 (1951): 20–43.
  • [17] Anders Öberg, Hilary Putnam on Meaning and Necessity. Thesis, University of Uppsala 2011. Available online (mirrored).
  • [18] Robert Brandom, Making It Explicit. 1994.
  • [19] Russell Marcus, Logics. Syllabus announcement, Fall 2011. Available online (last accessed 5/1/2012).
  • [20] Siegfried Gottwald (2009), Many-Valued Logic. Stanford Encyclopedia of Philosophy.
  • [21] Russell Marcus, What Follows. Book in preparation, available online.
  • [22] Wilfrid Sellars, Does Empirical Knowledge Have a Foundation? In: H. Feigl and M. Scriven (eds.), The Foundations of Science and the Concepts of Psychology and Psychoanalysis. Minnesota Studies in the Philosophy of Science, vol. I. University of Minnesota Press, Minneapolis 1956, pp. 293–300. Available online.
  • [23] M. Black, P. Geach (eds.), Translations from the Philosophical Writings of Gottlob Frege. Blackwell, Oxford 1952.
  • [24] Klaus Wassermann, The Body as Form – or: Desiring the Modeling Body. In: Vera Bühlmann, Martin Wiedmer (eds.), pre-specifics: Some comparatistic investigations on research in design and art. JRP Ringier, Zürich 2008, pp. 351–360. Available online.

۞
