Waves, Words and Images

April 7, 2012

The big question of philosophy, and probably its sole question, concerns the status of the human as a concept.1 Does language play a salient role in this concept, either as a major constituent or as a sort of tool? Which other capabilities, and which potential beyond language, if it is reasonable at all to take that perspective, could be regarded as similarly constitutive?

These questions may appear far removed from topics such as the technical challenges of programming a population of self-organizing maps, the limits of Turing machines, or the generalization of models and its conditions. Yet at a time when many people are summoning the so-called singularity, the question about the status of the human is definitely not exotic at all. Notably, “singularity” is often defined as an “overwhelming intelligence” that seemingly arises inevitably from ever-increasing computational power and that we could not “understand” any more. From an evolutionary perspective it makes very little sense to talk about singularities. Natural evolution, and cultural evolution alike, is full of singularities and void of singularities at the same time. The idea of “singularity” is not a fruitful way to approach the question of qualitative changes.

As you may already have read in another chapter, we prefer the concept of machine-based episteme as our ariadnic guide. In popular terms, machine-based episteme concerns the possibility of actualizing a particular “machine” that would understand its own conditions when claiming “I know.” (Such an entity could not be regarded as a machine anymore, I guess.) Of course, in following this thread we meet a lot of much-debated issues. Yet moving the question about the episteme into the sphere of the machinic provides particular perspectives on these issues.

In earlier times attempts were made, and some people are still making them today, to determine the status of the “human” as a sort of recipe: do this and do that, but not that and this, and a particular quality will be established in your body, as your person, visible to others as virtues, labeled and conceived henceforth as the “quality of being human”. Accordingly, natural language with all its ambiguities need not be regarded as an essential pillar. Quite the opposite: if the “human” could be defined as a recipe, then our everyday language would have to be cleaned up, brought closer to crisp logic in order to avoid misunderstandings as far as possible; you may recognize this as the program of contemporary analytical philosophy. In methodological terms it was thought possible to determine the status of the human in positively given terms, or, in short, in a positive definite manner.

Such positions are, quite fortunately, now recognized more and more as highly problematic. The main reason is that it is not possible to justify any kind of determination in an absolute manner. Any justification requires assumptions, while unjustified assumptions are counter-pragmatic to the intended justification. The problematics of knowledge is linked in here, since knowledge can no longer be regarded as “justified, true belief”.2 It was Charles S. Peirce who first concluded that the application of logic (as the grammar of reason) and of ethics (as the theory of morality) are not independent of each other. In political terms, any positive definite determination imposed on communities of other people must be regarded as an instance of violence. Hence, philosophy is no longer concerned with the status of the human as a fact; quite differently, the central question is how to speak about the status of the human, not neglecting that speaking, that is using language, is not a private affair. This looking for the “how” must itself, of course, obey the rule not to determine rules in a positive definite manner. As a consequence, the only philosophical work we can do is to explore the conditions, where the concept of “condition” refers to an open, though not recursive, chain. Aristotle already dubbed this “metaphysics” and regarded it as the core interest of philosophy. This “metaphysics” can’t be taken over by any “natural” discipline, whether a science or a branch of engineering. There is a clear downstream relation: science as well as engineering should be affected by it, emphasizing the conditions for their work more intensely.

Practicing, that is turning conditions and conditionability into facts and constraints, is the job of design, whether this design manifests as “design”, as architecture, as machine-creating technology, in politics, in education, as writing or art, etc. Philosophy, as Wittgenstein remarked, can never explain; nor can it describe things “as such”. Descriptions and explanations are only possible within a socially negotiated system of normative choices. This holds true even for the natural sciences. As a consequence, we should start with philosophical questions even in the natural sciences, and definitely always in engineering. Engaging in fields like machine learning, so-called artificial intelligence or robotics without constantly referring to philosophy will almost inevitably result in nonsense. The history of these fields is full of examples of that; just remember the infamous “General Problem Solver” of Simon and Newell.

Yet the issue is not only one of ethics, morality and politics. It was Foucault who first, in a sort of follow-up to Merleau-Ponty, claimed a third region between the empiricism of affections and the tradition of reflecting on pure reason or consciousness.3 This third region, or even dimension (we would say “aspection”), based on the compound of perception and the body, comprises the historical evolution of systems of thinking. Foucault, together with Deleuze, once opened the possibility of a transcendental empiricism, the former mostly with regard to historical and structural issues of political power, the latter mostly with regard to the micronics of individual thought, where the “individual” is of course not bound to a single human person. In our project, as represented by this collection of essays, we follow a similar path, starting with the transition from the material to the immaterial by means of association, and then investigating the dynamics of thinking in the aspectional space of transcendental conditions (forthcoming chapter), which builds an abstract bridge between Deleuze and Foucault as it covers both the individual and the societal aspects of thinking.

This Essay

This essay deals with the relation of words to a rather important aspect of thinking: representation. We will address some aspects of its problematics before we approach the role of words in language. Since representation is something symbolic in the widest sense, and since it has to be achieved autonomously by a mainly material arrangement, e.g. called “the machine”,4 we will also deal (again) with the conditions for the transformation of (mainly) physical matter into (mainly) symbolic matter. Particularly, however, we will explore the role of words in language. The outline comprises the following sections:

From Matter to Mind

Given the conditioning mentioned above, the anthropological history of the genus Homo5 poses a puzzle. Our anatomical foundations6 have been stable for at least 60,000 years, but contemporary human beings at the age of, say, 20 or 30 years are surely much more “intelligent”.7 Given the measurement scale established as the I.Q. at the beginning of the 20th century, a significant increase can be observed for the surveyed populations even over the last 60 years.

So what makes the difference, then, between the earliest ancient cultures and the contemporary ones? This question is highly relevant for our considerations here, which focus on the possibility of a machine-based episteme, or in more standard, yet seriously misplaced terms, machine learning, machine intelligence or even artificial intelligence. In any of those fields, one could argue, researchers and engineers somehow start with mere matter, then imprint some rules and symbols onto that matter, only to expect the matter to become “intelligent” in the end. The structure of the problematics remains the same whether we take the transition that started from paleo-cultures or the one rooted in the field of advanced computer science. Both instances concern the role of culture in the transformation of physical matter into symbolic matter.

While philosophy has tackled that issue for at least two and a half millennia, resulting in a rich landscape of arguments, including reflection on the many styles of developing those arguments, computer science is still almost completely blind to the whole topic. Since computer scientists and computer engineers inevitably come into contact with the realm of the symbolic, they usually and naively repeat past positions, committing a naïve, i.e. non-reflective, idealism or materialism that is not even at a pre-Socratic level. David Blair [6] correctly identifies the picture of language on which contemporary information retrieval systems are based as that of Augustine, who believed that every word has a meaning. Notably, Augustine lived in the late 4th to early 5th century A.D. This simply demonstrates that in order to understand the work of a field one also has, as always, to understand its history. In the case of computer science it is the history of reflective thought itself.

Precisely this is also the reason why philosophy is much more than just a possibly interesting source for computer scientists. More directly expressed, it is probably one of the major structural faults of computer science that it is regarded as just a kind of engineering. Countless projects and pieces of software have failed because of such applied methodological reductionism. Everything that comes into contact with computers developed from within such an attitude also becomes infected by the limited perspective of engineering.

One of the missing aspects is the philosophy of techno-science, which not just by chance began in earnest with Heidegger8 as its first major proponent. Merleau-Ponty, inspired by Heidegger, then emphasized that everything concerning the human is artificial and natural at the same time. It does not make sense to set up that distinction for humans or for man-made artifacts, as if such a difference were itself “natural”. Any such distinction refers more directly than not to Descartes as well as to Hegel; that is, it follows either simplistic materialism or overdone idealism, so to speak idealism in its machinic, Cartesian form. Indeed, many misunderstandings about the role of computers in contemporary science and engineering, but also in the philosophy of science and the philosophy of information, can be deciphered as a massive Cartesio-Hegelian heritage, with all its drawbacks. And there are many.

Perhaps the most salient is the foundational element9 of Descartes’ as well as Hegel’s thought: independence. Of course, for both of them independence was a major incentive, goal and demand, for political reasons (absolutism in 17th-century Europe), but also for general reasons imposed by the level of techno-scientific insight, which remained quite low until the middle of the 20th century. People before the scientific age were exposed to all sorts of threats concerning health, finances, religious or political freedom, and collective or individual violence, altogether often termed “fate”. Being independent was a basic condition for living more or less safely at all, physically and/or mentally. Yet Descartes and Hegel definitely exaggerated it.

Yet the element of independence made its way into the core of the scientific method itself. There it blossomed as reductionism, positivism and physicalism, all of which can be subsumed under the label of naive realism. It took decades until people developed the confidence not to prejudge complexity as esotericism.

With regard to computer science there is an important consequence. First, we can safely drop the labels “artificial intelligence” and “machine learning”, along with the respective narrow and limited concepts. Concerning machine learning we can state that only very few of the approaches that exist so far achieve even a rudimentary form of learning in the sense of structural self-transformation. The vast majority of approaches dubbed “machine learning” represent just some sort of advanced parameter estimation, where the parameters to be estimated are all defined (i) apriori, and (ii) by the programmer(s). And regarding intelligence we can recognize that we can never assign concepts like artificial or natural to it, since there is always a strong dependence on culture in it. Michel Serres once called written language the first artificial intelligence, pointing to the central issue of any technology: the externalization of symbol-based systems of references.
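The claim about parameter estimation can be made concrete with a minimal sketch (Python is chosen here for illustration; the model, data and learning rate are all hypothetical). The “learning” below only ever adjusts two numbers whose existence and meaning the programmer fixed in advance; the structure of the model never transforms itself:

```python
import random

random.seed(0)

def fit_line(points, steps=2000, lr=0.01):
    """Stochastic gradient fit of y = a*x + b.

    The model structure (a line) and its two parameters are defined
    apriori by the programmer; "learning" merely estimates a and b.
    """
    a, b = 0.0, 0.0
    for _ in range(steps):
        x, y = random.choice(points)
        err = (a * x + b) - y      # prediction error on one sample
        a -= lr * err * x          # adjust the given parameters ...
        b -= lr * err              # ... the structure itself never changes
    return a, b

# noise-free data generated from y = 2x + 1
pts = [(x, 2 * x + 1) for x in range(-5, 6)]
a, b = fit_line(pts)               # a converges towards 2, b towards 1
```

However sophisticated the estimator, the space of what can be “learned” here was closed before the first data point arrived; structural self-transformation, in the sense used above, would require the model to revise that space itself.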

This brings us back to our core issue here: the conditions for the transformation of (mainly) physical matter into (mainly) symbolic matter. In some important way we can even state that there is no matter without symbolic aspects. Two pieces of matter can interact only if they are not completely transparent to each other. If there is an effective transfer of energy between them, then the form of the energy becomes important; think of it for instance as the wavelength of some electromagnetic radiation, or its rhythmicity, which becomes distinctive in the case of a LASER [9,10]. Sure, in a LASER there are no symbols to be found; yet the system as a whole establishes a well-defined and self-focusing classification, i.e. it performs the transition from a white-noised, real-valued randomness to a discrete intensional dynamics. The LASER thus has to be regarded as a particular kind of associative system, one that is able to produce proto-symbols.
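The “self-focusing classification” ascribed to the laser can be caricatured in a toy model (a sketch only, not laser physics: the quadratic-gain map below is a hypothetical stand-in for mode competition). Several modes start from continuous random intensities; iterated nonlinear gain with a shared normalization lets exactly one mode survive, i.e. a real-valued random initial condition collapses into a discrete outcome:

```python
import random

def sharpen(modes, steps=30):
    """Iterated quadratic gain with shared normalization: the initially
    largest intensity outgrows all others, yielding a one-hot state."""
    for _ in range(steps):
        sq = [m * m for m in modes]       # nonlinear (quadratic) gain
        total = sum(sq)                   # shared saturation pool
        modes = [s / total for s in sq]   # renormalize the intensities
    return modes

random.seed(3)
noise = [random.random() for _ in range(5)]  # white-noise initial condition
out = sharpen(noise)                         # one mode near 1, the rest near 0
```

The map is deliberately crude, but it exhibits the relevant feature: a continuous, random input is turned into a discrete, classified output by the dynamics of the system itself, without any symbol being written into it.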

Of course, we must not restrict our considerations to such basic instances of pan-semiotics. When talking about machine-based episteme we talk about the ability of an entity to think about the conditions of its own informational dynamics (avoiding the term knowledge here…). Obviously, this requires some kind of language. For any attempt to make machines “intelligent”, the question thus in turn concerns how to think about the individual acquisition of language and, with regard to our interests here, how to implement the conditions for it. Note that Homo erectus, who lived 1 million years ago, must have had a clear picture of causality, and not only individually: they must also have had the ability to talk about it, since they were able to keep fire burning and to utilize it for cooking meat and bones. Logic had not been invented as a field in those times, but it seems absolutely mandatory that they were using a language.10 Even animals like cats, pigs or parrots are able to develop and perform plans, i.e. to handle causality, albeit probably not in a conscious manner. Yet neither wild pigs nor cats are capable of symbol-based culture, that is, a culture which spreads on the basis of symbols that are independent of a particular body or biological individual. The research programs of machine learning, robotics and artificial intelligence thus appear utterly naive, since they all neglect the cultural dimension.

The central set of questions thus considers the conditions that must be met in order to become able to deal with language, to learn it and to practice it.

These conditions are not only “private”; that is, they can’t be reduced to individual brains, or machines, that would “process” information. Leaving aside for the moment the simplistic perspective on information as it is usually practiced in the computer sciences, we have to accept that learning language is a deeply social activity, even if the label of the material description of the entity is “computer”. We also have to think about the mediality of symbolic matter, the transition from nature to culture, that is, from contexts of low symbolic intensity to those of high symbolic intensity. Handling language is not an affair that could be performed privately; there is no such thing as a “private language”. Of course, we have brains, for which the matter could still be regarded as dominant, and the processes running there are running only there.11

Note that implementing the handling of words as apriori existing symbols is not what we are talking about here. As Hofstadter pointed out [12], calling computing processes on apriori defined strings “language understanding” is nothing but silly. We are not allowed to call the shuffling of predefined encoded symbols back and forth “understanding”. But what could we call “understanding” then? Again, we have to postpone this question for the time being. Meanwhile we may reshape the question about learning language a bit:

How do we come to be able to assign names to things, classes, types, species, animals and other humans? What is the role of such naming, and what is the role of words?

The Unresolved Challenge

The big danger when addressing these issues is to start too late, provoked by an ontological stance applied to language. The most famous example is probably provided by Heidegger and his attempt at a “fundamental ontology”, which failed spectacularly. It is all too easy to get bewitched by language itself and to regard it as something natural, as something like stones: well-defined, stable, and potentially serving as a tool. Language itself makes us believe that words exist as such, independent of us.

Yet language is a practice, as Wittgenstein said, and this practice is neither a single homogeneous one, nor does it remain constant throughout life, nor are its instances identical and exchangeable. The practice of language develops, unfolds, gains quasi-materiality, turns from an end into a means and back. Indeed, language may be characterized precisely by its capability to provide that variability in the domain of the symbolic. Take as a contrast the symbolon, or the use of signs by animals: in both cases there is exactly one single “game” you can play. Only in such trivial cases could the meaning of a name be said to be close to its referent. Yet language games are not trivial.

I already mentioned the implicit popularity of Augustine among computer scientists and information systems engineers. Let me cite the passage that Wittgenstein chose in his opening remarks to the famous Philosophical Investigations (PI)12. Augustine writes:

When they (my elders) named some object, and accordingly moved towards something, I saw this and I grasped that the thing was called by the sound they uttered when they meant to point it out. Their intention was shewn by their bodily movements, as it were the natural language of all peoples: the expression of the face, the play of the eyes, the movement of other parts of the body, and the tone of voice which expresses our state of mind in seeking, having, rejecting, or avoiding something. Thus, as I heard words repeatedly used in their proper places in various sentences, I gradually learnt to understand what objects they signified; and after I had trained my mouth to form these signs, I used them to express my own desires.

Wittgenstein gave two replies, one directly in the PI, the other one in the collection entitled “Philosophical Grammar” (PG).

These words, it seems to me, give us a particular picture of the essence of human language. It is this: the individual words in language name objects—sentences are combinations of such names.—In this picture of language we find the roots of the following idea: Every word has a meaning. This meaning is correlated with the word. It is the object for which the word stands.

Augustine does not speak of there being any difference between kinds of word. If you describe the learning of language in this way you are, I believe, thinking primarily of nouns like “table,” “chair,” “bread,” and of people’s names, and only secondarily of the names of certain actions and properties; and of the remaining kind of words as something that will take care of itself. (PI §1)

And in the Philosophical Grammar:

When Augustine talks about the learning of language he talks about how we attach names to things or understand the names of things. Naming here appears as the foundation, the be all and end all of language. (PG 56)

Before we take the step of dropping, and drowning, the ontological stance once and for all, we would like to provide two things. First, we will briefly cite a summarizing table from Blair [1].13 Blair’s book is indeed quite a nice work about the peculiarities of language as far as it concerns “information retrieval” and about how Wittgenstein’s philosophy could be helpful in resolving the misunderstandings. Second, we will (also very briefly) make our perspective on names and naming explicit.

David Blair dedicates quite some effort to rendering the issue of the indeterminacy of language as clearly as possible. In alignment with Wittgenstein he emphasizes that indeterminacy in language is not the result of sloppy or irrational usage. Language is neither a medium of logic nor something like a projection screen for logic. There are good arguments, represented by the works of Ludwig Wittgenstein, the later Hilary Putnam and Robert Brandom, for believing that language is not an inferior way to express a logical predicate (see the previous chapter about language). Language can’t be “cleaned” or made less ambiguous; its vagueness is a constitutive necessity for its use and utility in social intercourse. Many people in linguistics (e.g. Rooij [13]) and large parts of the cognitive sciences (e.g. Alvin Goldman [14]14), but also philosophers like Saul Kripke [16] or Scott Soames [17], take the opposite position.

Of course, in some contexts it is reasonable to try to limit the vagueness of natural language, e.g. in law and contracts. Yet it is also clear that positivism in jurisprudence is a rather bad thing, especially when it pairs with idealism.

Blair then contrasts two areas of so-called “information retrieval”,15 distinguished by the type of data that is addressed: on the one hand structured data that can be arranged in tables, which Blair calls determinate data, and on the other such “data” that can’t be structured apriori, like language. We already met this fundamental difference in other chapters (about analogies, language). The result of his investigation is summarized in the following table. It is more than obvious that the characteristics of the two fields are drastically different, which equally obviously has to be reflected in the methods to be applied. For instance, the infamous n-gram method is definitely a no-go.
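To make explicit what is being ruled out: the n-gram method treats a text as a determinate surface string, counting fixed-size fragments, and thereby presupposes exactly the apriori structure that indeterminate “data” like language lacks. A minimal sketch (Python, illustrative only):

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Count all character n-grams: the text is handled as nothing but
    a determinate sequence of symbols, with no access to its use."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

grams = char_ngrams("language is a practice")
# "lan", "ang", "ngu", ... -- fragments of the surface string only
```

Whatever statistics are computed over such fragments, the indeterminacy discussed above remains invisible to them: two occurrences of the same trigram are identical by construction, while two uses of the same word need not be.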

For the same reasons, semantic disambiguation is not possible by a set of rules that could be applied by an individual, whether this individual is a human or a machine. Quite likely it is even completely devoid of sense to try to remove ambiguity from language. One of the reasons is given by the fact that concepts are transcendental entities. We will return to the issue of “ambiguity” later.

In the quote from the PG shown above, Wittgenstein rejects Augustine’s perspective that naming is central to language. Nevertheless, there is a renewed discussion in philosophy about names and so-called “natural kind terms”, brought up by Kripke’s “Naming and Necessity” [16]. Recently, Scott Soames has explicitly referred to Kripke’s work. Yet, like so many others, Soames commits the drastic mistake, introduced along the line formed by Frege, Russell and Carnap, of ascribing to language the property of predicativity (cf. [18], p. 646):

These claims are developed within a broader theory which, details aside, identifies the meaning of a non-indexical sentence S with a proposition asserted by utterances of S in all normal contexts.

We won’t delve in any detail into the discussion of “proper names”,16 because it is a largely misguided and unnecessary one. Let me just briefly mention the three main (and popular) alternative approaches to the meaning of names: the descriptivist theories, the referential theory originally advanced by John Stuart Mill, and the causal-historical theory. None of them is tenable, because they all implicitly violate the primacy of interpretation, though not in an obvious manner.

Why can’t we say that a name is a description? A description needs assignates,17 or aspects, if you like, at least one scale. Assuming the possibility of a description that is apriori justified, and hence objective, invokes divinity as a hidden parameter, or some other kind of Fregean hyper-idealism. Assignates are chosen according to, and in dependence on, the context. Of course, one could try to expel any variability from any expectable context, e.g. by literally programming society, or by some kind of philosophical dictatorship. In any other case, descriptions are variant. The actual choice of any description is the rather volatile result of negotiation processes in the embedding society. The rejection of names as descriptions results from the contradictory pragmatic stances involved: first, names are taken as indivisible, atomic entities, while, second, descriptions are context-dependent, subatomic properties; by virtue of the implied pragmatics, the two stances cannot be combined. Remember that the context-dependency results from empirical underdetermination. In standard situations it is neither important that water is a compound of hydrogen and oxygen, nor is this what we want to say in everyday situations. We do not carry the full description of the named entity along into every instance of its use, even though there are some situations where we are indeed interested in the description, e.g. as a scientist, or as a supporter of the “hydrogen economy”. The important point is that we can never determine the status of the name before we have interpreted the whole sentence, while we also can’t interpret the sentence without determining the status of the named entity. Both co-emerge. Hence we also can’t give an explicit rule for such a decision other than just using the name or uttering the sentence. Wittgenstein thus denies the view that assumes a meaning behind the words that is different from their usage.

The claim that the meaning of a proper name is its referent meets similar problems, because it just reintroduces the ontological stance through the backdoor. Identifying the meaning of a label with its referent implies that the meaning is taken as something objective, as something independent of context, and even beyond that, as something that could be packaged and transferred *as such*. In other words, it deliberately denies the primacy of interpretation. We need not say anything further, except perhaps that Kripke (and Soames as well, in taking it seriously) commits a third mistake in using “truth-values” as factual qualities.18 We may propose that the whole theory of proper names follows a pseudo-problem, induced by overgeneralized idealism or materialism.

Names, proper: Performing the turn completely

Yet what would be an appropriate perspective for dealing with the problem of names? What I would like to propose is a consistent application of the concept of the “language game”. The “game” perspective can be applied not only to the complete stream of exchanged utterances, but also to the parts of sentences, e.g. names and single words. As a result, new questions become visible. Wittgenstein himself did not explore this possibility (he took Augustine as a point of departure), and it cannot be found in contemporary discourse either.19 As so often, philosophers influenced by positivism simply forget about the fact that they are speaking. Our proposal is markedly different from, and also much more powerful than, the causal-historical and the descriptivist approaches, and it also avoids the difficulties of Kripke’s externalist version.

After all, naming, giving a name and using names, is a “language game”. Names are close to observable things, and as a matter of fact, observable things are also demonstrable. Using a name refers to the possibility for a speaker to provide a description to his partner in discourse such that this listener would be able to agree on the individuality of the referenced thing. The use of the name “water” for this particular liquid thing does not refer to an apriori fixed catalog of properties. Speaker and listener need not even agree on the identity of the set of properties ascribed to the referred physical thing. The chemist may always associate the physico-chemical properties of the molecule even when he reads about the submerged sailors in Shakespeare’s *Tempest*, but he could nevertheless easily talk about that liquid matter with a 9-year-old boy who knows neither about Shakespeare nor about the molecule.

It is thus neither possible nor reasonable to try to achieve a match regarding the properties, since a rich body of methods would necessarily be invoked to determine that set. Establishing the identity of representations of physical, external things, or even of the physical things themselves, inevitably invokes a normative act (which is rather incommensurable with the empiricists’ claims).

For instance, when just saying “London”, out of the blue, it is not necessary that we envisage the same aspects of the grand urban area. Since cities are inevitably heterotopic entities (in the sense of Foucault [19, 20], according to David Graham Shane [21]), such an agreement is actually impossible. Even for the undeniably more simple-minded cartographers the same problem exists: “where” is that London, in terms of spherical coordinates? Despite these unavoidable difficulties, both the speaker and the listener easily agree on the individuality of the imaginary entity “London”. The name “London” does not point to a physical thing but just to an imaginative pole. In contrast to concepts, however, names take a different grammatical role, as they not only allow for a negotiation of rather primitive assignates in order to take action, they even demonstrate the possibility of such negotiation. The actual negotiations could be quite hard, though.

We conclude that we are not allowed to take any word as something that would “exist” as, or like, a physical “thing”. Of course, we get used to certain words; they gain a quasi-materiality because a constancy appears that may be much stronger than the initial contingency. But this “getting used” is a different topic; it just refers to how we speak about words. Naming remains a game, and like any other game this one too does not have an identifiable border.

Despite this manifold that is mediated through language, or as language, it is also clear that language remains rooted in activity, or the possibility of it. I demonstrate the usage of a glass and accompany that by uttering “glass”. Of course, there is the Gavagai problematics20 as it has been devised by Quine [22]. Yet this problematics is not a real problem, since we usually interact repeatedly. On the one hand this provides us with the possibility to improve our capability to differentiate single concepts in a certain manner, but on the other hand the extended experience introduces a secondary indeterminacy.

In some way, all words are names. All words may be taken as indicators that there is the potential to say more about them, yet in a different, orthogonal story. This holds even for the abstract concepts denoted by the word “transcendental” or for verbs.

The usage of names, i.e. their application in the stream of sentences, gets richer and richer, but also more and more indeterminate. All languages have developed some kind of grammar, a more or less strict body of rules about how to arrange words for certain language games. Yet grammar is not a necessity for language at all; it is just a tool to render language-based communication easier, faster and more precise. Beyond the grammars, it is experience which enables us to use metaphors in a dedicated way. Yet language is not a thing that sometimes contains metaphors and sometimes not. In a very basic sense, all of language is metaphorical all the time.

So, we first conclude that there is nothing enigmatic in learning a language. Secondly, we can say that extending the “gameness” down to words provides the perspective of the mechanism, notably without reducing language to names or propositions.

Instead, we now can clearly see how these mechanisms mediate between the language game as a whole, the metaphorical characteristics of any language and simple rule-based mechanisms.

Representing Words

There is a drastic consequence of the completed gaming perspective. Words can’t be “represented” as symbols or as symbolic strings in the brain, and words can’t be appropriately represented as symbols in the computer either. Given any programming language, strings in a computer program are nothing else than particularly formatted series of values. Usually, this series is represented as an array of values, which is part of an object. In other words, the word is represented as a property of an object, where such objects are instances of their respective classes. Thus, the representation of words in ANY computer program created so far for the purpose of handling texts, documents, or textual information in general is deeply inappropriate.
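To make this concrete, here is a minimal sketch (with purely hypothetical class and field names) of exactly the kind of representation criticized here: the word held as nothing but a formatted series of values attached to an object.

```python
# A hypothetical, deliberately naive representation of a "word" in a program:
# the word is nothing but an array of character codes attached to an object.
class WordToken:
    def __init__(self, surface: str):
        # internally just a formatted series of values (character codes)
        self.codes = [ord(c) for c in surface]

    def surface(self) -> str:
        # the "word" can only ever be reconstructed as the same bare string
        return "".join(chr(c) for c in self.codes)

w = WordToken("glass")
print(w.codes)      # the bare series of values
print(w.surface())  # the reconstructed string
```

Whatever names such a class carries, nothing in the structure holds the derivational roots of the word; it remains a bare series of values.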

Instead, the representation of the word has to carry along its roots, its path of derivation, or in still other words, its traces of precipitation of the “showing”. This rooting includes, so we may say, a demonstrativum, an abstract image. This does not mean that we have to set up an object in the computer program that contains a string and an abstract image. This would be just the positivistic approach, leaving all problems untouched, the string and the image still being independent; the question of how to link them would just be delegated to the next analytic homunculus.

What we propose are non-representational abstract compounds that are irrevocably multi-modal since they are built from the assignates of abstract “things” (Gegenstände). These compounds are nothing else than combined sets of assignates. The “things” represented in this way are actually always more or less “abstract”. Through the sets of assignates we actually may combine even things which appear incommensurable on the level of their wholeness, at least at first sight. An action is an action, not a word, and vice versa, an image is neither a word nor an action, isn’t it? Well, it depends; we already mentioned that we should not take words as ontological instances. Any of those entities can be described using the same formal structure, the probabilistic context, which is further translated into a set of assignates. The probabilistic context creates a space of expressibility, where the incommensurability disappears, notably without reducing the comprised parts (image, text,…) in the slightest.
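As a purely illustrative sketch, such a compound could be coded as follows; the assignate labels, the weights and the similarity measure are my own assumptions, not anything prescribed by the approach described above.

```python
# A hypothetical "synpresentation": entities of different modalities are
# dissolved into weighted sets of assignates, which share one space of
# expressibility and thus become comparable without any apriori relation.
def synpresentation(*assignate_sets):
    """Combine assignate sets (dicts: assignate -> weight) into one compound."""
    compound = {}
    for s in assignate_sets:
        for a, w in s.items():
            compound[a] = compound.get(a, 0.0) + w
    return compound

def overlap(c1, c2):
    """A crude similarity: shared weight over total weight (illustrative only)."""
    shared = sum(min(c1[a], c2[a]) for a in c1.keys() & c2.keys())
    total = sum(c1.values()) + sum(c2.values())
    return 2 * shared / total if total else 0.0

# purely invented assignates for a word, an image and an action
word_glass   = {"transparent": 0.9, "container": 0.8, "fragile": 0.6}
image_glass  = {"transparent": 0.7, "cylindrical": 0.5, "container": 0.4}
action_drink = {"container": 0.9, "hand-held": 0.6}

compound = synpresentation(word_glass, image_glass, action_drink)
print(sorted(compound))  # the assignates of the multi-modal compound
print(round(overlap(word_glass, image_glass), 2))
```

The point of the sketch is only structural: word, image and action enter one and the same formal structure, so no explicit cross-modal reference has to be declared beforehand.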

The situation is a bit reminiscent of synesthetic experiences. Yet, I would like to avoid calling it synesthetic, since synesthesia is experienced on a highly symbolic level. Like other phenomenological concepts, it also does not provide any hint about the underlying mechanisms. In contrast, we are talking about a much lower level of integration. Probably we could call this multi-modal compound a “syn-presentational” compound, or for short, a “synpresentation”.21

Words, images and actions are represented together as a quite particular compound, which is an inextricable multi-modal compound. We also may say that these compounds are derived qualia. The exciting point is that the described way of probabilistic multi-modal representation obviates the need for explicit references and relations between words and images. These relations would even have to be defined apriori (strongly: before programming, weakly: before usage). In our approach, and quite in contrast to the model of external control, relations and references *can be* subject to context-dependent alignments, either to the discourse, or to the task (of preparing a deliverable from memory).

The demonstrativum may not only refer to an “image”. First note that the image does not exist outside of its interpretation. We need to refer to that interpretation, not to an index in a database or a file system. Interpretation thus means that we apply a lot of various processing and extraction methods to it, each of them providing a few assignates. The image is dissolved into probabilistic contexts, just as we do it for words (we have described this elsewhere). The dissolving of an image is of course not the endpoint of a communicable interpretation; it is just the starting point. Yet, this does not matter, since the demonstrativum may also refer to any derived intension and even to any derived concept.22

The probabilistic multi-modal representation exhibits three highly interesting properties, concerning abstractness, relations and the issue of foundations. First, the abstractness of represented items becomes scalable in an almost smooth manner. In our approach, “abstractness” is not a quality any more. Secondly, relations and references of both words and the “content” of images are transformed into their pre-specific versions. Both relations and references need not be implemented apriori or observed as an apriori. Initially, they appear only as randolations23. Thirdly, some derived and already quite abstract entities on an intermediate level of “processing” are more basic than the so-called raw observations24.

Words, Classes, Models, Waves

It is somewhat tempting to arrange these four concepts to form a hierarchical series. Yet, things are not that simple. Actually, any of the concepts that appear more as a symbolistic entity also may re-turn into a quasi-materiality, into a wave-like phenomenon that itself serves as a basis for potential differences. This re-turn is a direct consequence of the inextricable mediality of the world, mediality understood here thus as a transcendental category. Needless to say, mediality is just another blind spot in contemporary computer sciences. Cybernetics as well as engineering straightaway exclude the possibility of recognizing the mediatedness of worldly events.

In this section we will try to explicate the relations between the headlined concepts to some extent, at least as far as it concerns the mapping of those into an implementable system of (non-Turing) “computer programs”. The computational model that we presuppose here is the extended version of the 2-layered SOM, as we have introduced it previously.

Let us start with first things first. Given a physical signal, here in the literal sense, that is, as a potentially perceivable difference in a stream of energy, we find embodied modeling, and nothing else. The embodiment of the initial modeling is actualized in sensory organs, or more generally, in any instance that is able to discretize the waves and differences at least “a bit more”. In more technical terms, the process of discretization is a process that increases the signal-to-noise ratio. In biological systems we often find a frequency encoding of the intensity of a difference. Though the embodiment of that modeling is indeed a filtering and encoding, hence already some kind of modeling representation, it is not a modeling in the narrower sense. It points out of the individual entity into the phylogenesis, the historical contingency of the production of that very individual entity. Nor can we say that the initial embodied processing by the sensory organs is a kind of symbolic encoding. There is no code consisting of well-identified symbols at the proximate end of the sensory cell. It is still a rather probabilistic affair.
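The frequency encoding mentioned here can be illustrated by a small toy sketch; the threshold, noise level and step count are invented and do not model any real sensory cell.

```python
import random

# A toy sketch of "frequency encoding": the intensity of a difference in a
# noisy stream of energy is turned into a spike rate. Averaging many noisy
# threshold crossings discretizes the wave "a bit more".
def spike_train(intensity, steps=1000, noise=0.3):
    rng = random.Random(42)  # fixed seed, for reproducibility of the sketch
    spikes = 0
    for _ in range(steps):
        signal = intensity + rng.gauss(0.0, noise)  # noisy analog input
        if signal > 0.5:                            # fixed firing threshold
            spikes += 1
    return spikes / steps  # the spike frequency encodes the intensity

weak, strong = spike_train(0.4), spike_train(0.8)
# The graded, noisy wave is not yet a symbol, but the ordering of intensities
# survives in the spike frequencies while much of the noise is averaged out.
print(weak < strong)
```

No symbol appears anywhere in this process; what comes out is still a rate, a probabilistic affair, only with an improved signal-to-noise ratio.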

This basic encoding is not yet symbolic, albeit we also can’t call it a wave any more. In biological entities this slightly discretized wave is then subject to an intense modeling sensu stricto. The processing of the signals is performed by associative mechanisms that are arranged in cascades. This “cascading” is highly interesting and probably one of the major mandatory ingredients that are neglected by computer science so far. The reason is quite clear: it is not an analytic process; hence it is excluded from computer science almost by definition.

Throughout that cascade signals turn more and more into information as an interpreted difference. It is clear that there is not a single or identifiable point in this cascade to which one could assign the turn from “data” to “information”. The process of interpretation is, quite in contrast to idealistic pictures of the process of thinking, not a single step. The discretized waves that flow into the processing cascade are subject to many instances and very different kinds of modeling, throughout which discrete pieces get separated and related to other pieces. The processing cascade thus repeats a modular principle consisting of association and distribution.
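A toy sketch of such a cascade might look as follows; the prototype values are invented, and the nearest-prototype step is only a crude stand-in for associative modeling of the SOM kind.

```python
# A minimal sketch of a processing cascade: each stage "associates" incoming
# values to the nearest of its prototypes (a crude 1-d SOM-like step) and
# "distributes" the result to the next stage. All prototypes are invented.
def associate(values, prototypes):
    """Map each value to its nearest prototype: association as quantization."""
    return [min(prototypes, key=lambda p: abs(p - v)) for v in values]

stage1 = [0.0, 0.25, 0.5, 0.75, 1.0]   # fine-grained early stage
stage2 = [0.0, 0.5, 1.0]               # coarser, more "class-like" stage

raw = [0.12, 0.34, 0.57, 0.81, 0.93]   # the slightly discretized wave
out1 = associate(raw, stage1)          # first association/distribution
out2 = associate(out1, stage2)         # the cascade: output feeds the next stage
# Nowhere in this cascade is there a single point where "data" turns into
# "information"; discreteness increases gradually across the stages.
print(out1)
print(out2)
```

The sketch only illustrates the modular association-distribution principle; it is analytic where the text insists the real process is not, so it should be read as a caricature, not a model.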

This level we still could not label as “thinking”, albeit it is clearly some kind of a mental process. Yet, we could still regard it as something “mechanical”, even as we also find already class-like representations, intensions and proto-concepts. Thinking in its meaningful dimension, however, appears only through assigning sharable symbols. Thinking of something implicitly means that one could tell about the respective thoughts. Whether these symbols are shared between different regions in the brain or between different bodily entities does not matter much. Hence, thinking and mental processes need to be clearly distinguished. Yet, assigning symbols, that is assigning a word, a specific sound first, and later, as a further step of externalization, a specific grapheme that reflects the specific sound, which in turn represents an abstract symbol: this process of assigning symbols is only possible through cultural means. Cats may recognize situations very well and react accordingly, they may even have a feeling that they have encountered that situation before, but cats can’t share their symbols; they can’t communicate the relational structure of a situation. Yet, cats and dogs already may take part in “behavior games”, and such games have clearly been found in baboons by Fernando Colmenares [24]. Colmenares adopted the concept of “games” precisely because of the co-occurrence of obvious rules, high variability, and predictive values of actions and reactions of the individual animals. Such games unfold synchronically as well as diachronically, and across dynamically changing assignments of social roles. All of this is accompanied by specific sounds. Other instances of language-like externalization of symbols can presumably be found in grey parrots [25], green vervet monkeys [26], bonobos, dolphins and orcas.

But still… in animals those already rather specific symbols are not externalized by imprinting them into matter different from their own bodies. One of the most desirable capabilities for our endeavor here concerning machine-based episteme thus consists in just that externalization process, embedded in social contexts.

Now the important thing to understand is that this whole process from waves to words is not simply a one-way track. First, words do not exist as such; they just appear as discrete entities through usage. It is the usage of X that introduces irreversibility. In other words, the discreteness of words is a quality that is completely on the aposteriori side of thinking. Before their actual usage, i.e. their arrangement into sentences, words “are” nothing else than probabilistic relations. It needs a purpose, a target-oriented selection (call it “goal-directed modeling”) to let them appear as crisp entities.

The second issue is that a sentence is an empirical phenomenon, remarkably even to the authoring brain itself. The sentence needs interpretation, because it is never ever fully determinate. Interpretation, however, of such indeterminate instances like sentences renders the apparent crisp phenomenon of words back into waves. A further effect of the interpretation of sentences as series of symbols is the construction of a virtual network. Texts, and in a very similar way, pieces of music, should not be conceived as series, as computational linguistics treats them. Much more appropriately, texts are conceived as networks, which even may exert their own (again virtual) associative power, which to some extent is independent from the hosting interpreter, as I have argued here [28].
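The network view of a text can be illustrated with a small sketch; the windowed co-occurrence used here is just one simple, assumed way to obtain such a network, not the only one.

```python
from collections import defaultdict

# A sketch of treating a text not as a series but as a network: words become
# nodes, co-occurrence within a small window becomes a weighted edge.
def text_as_network(text, window=2):
    words = text.lower().split()
    edges = defaultdict(int)
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + 1 + window, len(words))):
            if w != words[j]:
                edges[tuple(sorted((w, words[j])))] += 1
    return edges

net = text_as_network("words are like waves and waves are like words")
# The same pair of words may be linked through several passages; the network
# retains relational structure that the bare series does not exhibit.
print(net[("like", "words")])
```

The graph, unlike the series, can then itself be traversed associatively, which is one way to read the “virtual associative power” of a text mentioned above.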

Role of Words

All these characteristics of words, their purely aposteriori crispness, their indeterminacy as sub-sentential indicators of randolational networks, their quality as signs by which they only point to other signs, but never to “objects”, their double quality as constituent and result of the “naming game”, all these “properties” make it appear highly unlikely and questionable whether language is about references at all. Additionally, we know that the concept of “direct” access to the mind or the brain is simply absurd. Everything we know about the world as individuals is due to modeling and interpretation. That of course also concerns the interpretation of cultural artifacts or culturally enabled externalizations of symbols, for instance into the graphemes that we use to represent words.

It is of utmost importance to understand that the written or drawn grapheme is not the “word” itself. The concept of a “word-as-such” is highly inappropriate, if not bare nonsense.

So, if words, sentences and language at large are not about “direct” referencing of (quasi-) material objects, how then should we conceive of the process we call “language game”, or “naming game”? Note that we now can identify van Fraassen’s question about “how do words and concepts acquire their reference?” as a misunderstanding, deeply informed by positivism itself. It does not make sense to pose that question in this way at all. There is not first a word which then, in a secondary process, gets some reference or meaning attached. Such a concept is almost absurd. Similarly, the distinction between syntax and semantics, once introduced by the positivist Morris in the 1940s, is to be regarded as much the same pseudo-problem, established just by the fundamental and elemental assumptions of positivism itself: linear additivity, metaphysical independence and lossless separability of the parts of wholes. If you scatter everything into single pieces of empirical dust, you will never be able to make any proposition any more about the relations you destroyed before. That’s the actual reason for the problem of positivistic science, and its failure.

In contrast to that we tend to propose a radically different picture of language, one that of course has existed in many preformed flavors. Since we can’t transfer anything directly into another’s mind, the only thing we can do is to invite or trigger processes of interpretation. In the chapter about vagueness we called words “processual indicatives” for slightly different reasons. Language is a highly structured, institutionalized and symbolized “demonstrating”, an invitation to interpret. Robert Brandom investigated in great detail [29] the processes and the roles of speakers and listeners in that process of mutual invitation for interpretation. The mutuality allows a synchronization, a resonance and a more or less strong resemblance between pairs of speaker-listeners and listener-speakers.

The “naming game” and its derivative, the “word game”, is embedded into a context of “language games”. Actually, word games and language games are not as related as it might appear prima facie, at least beyond their common characteristics that we may label “game”. This becomes apparent if we ask what happens with the “physical” representative of a single word that we throw into our mechanisms. If there is no sentential context, or likewise no social context like a chat, then a lot of quite different variants of possible continuations are triggered. Calling out “London”, our colleague in chatting may continue with “Jack London” (the writer), “Jack the Ripper”, Chelsea, London Tower, Buckingham, London Heathrow, London Soho, London Stock Exchange, etc., but also Paris, Vienna, Berlin, etc., the choices being slightly dependent on our mood, the thoughts we had before, and so on. In other words, the word that we bring to the foreground as a crisp entity behaves like a seedling: it is the starting point of a potential garden or forest; it functions as the root of the unfolding of a potential story (as a co-weaving of a network of abstract relations). Just to bring in another metaphorical representation: words are like the initial traces of firework rockets, or the traces of elementary particles in statu nascendi as they can be observed in a bubble chamber: they promise a rich texture of upcoming events.
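The seedling behaviour can be sketched as follows; the association table, its weights and the “mood” mechanism are entirely invented for illustration.

```python
import random

# A toy of the "seedling" behaviour: a single crisp word is the root of many
# potential continuations. The association table and weights are invented.
continuations = {
    "london": [("jack london", 3), ("london tower", 2), ("heathrow", 2),
               ("paris", 1), ("berlin", 1)],
}

def unfold(word, mood_bias=None, k=3, seed=7):
    """Sample k continuations; the 'mood' reweights the potential garden."""
    rng = random.Random(seed)
    options = continuations.get(word.lower(), [])
    weights = [w * (mood_bias.get(c, 1.0) if mood_bias else 1.0)
               for c, w in options]
    return [rng.choices([c for c, _ in options], weights=weights)[0]
            for _ in range(k)]

# The same word unfolds differently depending on the current "mood":
print(unfold("london"))
print(unfold("london", mood_bias={"paris": 10.0}))
```

Nothing about the word itself determines the unfolding; only the weighted potential and the momentary bias do, which is the point of the seedling metaphor.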

Understanding (Images, Words, …)

We have seen that “words” gain shape only as a result of a particular game, the “naming game”, which is embedded into a “language game”. Before those games are played, “words” do not exist as discrete, crisp entities, say as symbols, or strings of letters. If they did, we could not think. Even more than the “language game”, the “naming game” works mainly as an invitation, or as an acknowledged trigger for more or less constrained interpretation.

Now there are those enlightened language games of “understanding” and “explaining”. Both of them work just as any other part of speech does: they promise something. The claim to understand something refers to the ability for a potential preparation of a series of triggers, which one additionally claims to be able to arrange in such a way as to support the gaining of the respective insight in one’s chat partner. Slightly derived from that, understanding could also mean to transfer the structure of the underlying or overarching problematics to other contexts. This ability for adaptive reframing of a problematic setting is thus always accompanied by a demonstrativum, that is, by some abstract image, either by actual pictorial information or its imagination, or by its activity. Such a demonstrativum could of course be located completely within language itself, which however is probably quite rare.


It is clear that language does not work as a way to express logical predicates. Trying to do so needs careful preparations. Language can’t be “cured” and “cleaned” of ambiguities; trying to do so would establish a categorical misunderstanding. Any “disambiguation” happens as a resonating resemblance of at least two participants in language-word-gaming, mutually interpreting each other until both believe that their interests and their feelings match. An actual, so to speak objective match is neither necessary nor possible. In other words, language does not exist in two different forms, one without ambiguity and without metaphors, and the other full of them. Language without metaphorical dynamics is not a language at all.

The interpretation of empirical phenomena, whether outside of language or concerning language itself, is never fully determinable. Quine exposed the idea of the possibility of such a complete determination as a myth, as a “dogma of empiricism” [30]. Thus, given this underdetermination, it does not make any sense to expect that language should be isomorphic to logical predicates or propositions. Language is basically an instance of impredicativity. Elsewhere we already met the self-referentiality of language (its strong singularity) as another reason for this. Instead, we should expect that this fundamental empirical underdetermination is reflected appropriately in the structure of language, namely as analogical thinking, or quite related to that, as metaphorical thinking.

Ambiguity is not a property of language or words, it is a result, or better, a property of the process of interpretation at some arbitrarily chosen point in time. And that process takes place synchronously within a single brain/mind as well as between two brains/minds. Language is just the mediating instance of that intercourse.


It is now possible to clarify the ominous concept of “intelligence”. We find the concept in the name of a whole discipline (“Artificial Intelligence”), and it is at work behind the scenes in areas dubbed “machine learning”. Then there is the hype about so-called “collective intelligence”. These observations, and of course our own intentions, make it necessary to deal briefly with it, albeit we think that it is a misleading and inappropriate idea.

First of all one has to understand that “intelligence” is an operationalization of a research question, allowing for a measurement, hence for a quantitative comparison. It is questionable whether mental qualities can be made quantitatively measurable without reducing them seriously. For instance, the capacity for I/O operations related to a particular task surely can’t be equated with “intelligence”, even if it could be a necessary condition.

It is just silly to search for “intelligence” in machines or beings, or to assign more or less intelligence to any kind of entity. Intelligence as such does not “exist” independently of a cultural setup; we can’t find it “out there”. Ontology is, as always, not only a bad trail, it directly leads into the abyss of nonsense. The research question, by the way, was induced by the intention to prove that black people and women are less intelligent than white males.

Yet, even if we take “intelligence” in an adapted and updated form as the capability for autonomous generalization, it is a bad concept, simply because it does not allow one to pose further reasonable questions. This directly follows from its characteristic of being itself an operationalization. Investigating the operationalization hardly brings anything useful to light about the pretended subject of interest.

The concept of intelligence arose in a strongly positivistic climate, where positivism was practiced in a completely unreflected manner. Hence, its inventors were not aware of the effect of their operationalization. The concept of intelligence implies a strong functional embedding of the respective, measured entity. Yet, dealing with language undeniably has something to do with higher mental abilities, but language is a strictly non-functional phenomenon. It does not matter here that positivists still claim the opposite. And who would stand up claiming that a particular move, e.g. in planning a city, or in dealing with the earth’s climate, is smarter than another? In other words, the other strong assumption of positivism, measurability and identifiability, also fails dramatically when it comes to human affairs. And everything on this earth is a human affair.

Intelligence is only determinable relative to a particular Lebensform. It is thus not possible to “compare the intelligence” across individuals living in different contexts. This renders the concept completely useless, finally.


The hypothesis I have been arguing for in this essay claims that the trinity of waves, words and images plays a significant role in the ability to deal with language and for the emergence of higher mental abilities. I proposed first that this trinity is irreducible and second that it is responsible for this ability in the sense of a necessary and sufficient condition. In order to describe the practicing of that trinity, for instance with regard to possible implementations, I introduced the term “synpresentation”. This concept draws the future track of how to deal with words and images as far as it concerns machine-based episteme.

In more direct terms, we conclude that without the capability to deal with “names”, “words” and language, the attempt to map higher mental capacities onto machines will not experience any progress. Once the machine has arrived at such a level, it will find itself in exactly the same position as we humans do. This capability is definitely not sufficiently defined by “calculation power”; indeed, such an idea is ridiculous. Without an embedding into appropriate social intercourse, without solving the question of representation (contemporary computer science and its technology do NOT solve it, of course), even a combined 10^20000 flops will not render the respective machine or network of machines25 “intelligent” in any way.

Words and proper names are re-formulated as a particular form of “games”, though not as “language games”, but on a more elementary level as the “naming game”. I have tried to argue how the problematics of reference could be thought to disappear as a pseudo-problem on the basis of such a reformulation.

Finally, we found important relationships to earlier discussions of concepts like the making of analogies, or vagueness. We basically agree with the stance that language can’t be clarified and that it is inappropriate (“free of sense”) to assign any kind of predicativity to language. Bluntly spoken, the application of logic happens in the mind, and nowhere else. Communicating about this application is not based on a language any more, and similarly, projecting logic onto language destroys language. The idea of a scientific language is as empty as the idea of a generally applicable and understandable language. A language that is not inventive could not be called such.


1. If you read other articles in this blog you might think that there is a certain redundancy in the arguments and the targeted issues. This is not the case, of course. The perspectives are always a bit different; such I hope that by the repeated attempt “to draw the face” (Ludwig Wittgenstein) the problematics is rendered more accurately. “How can one learn the truth by thinking? As one learns to see a face better if one draws it.” (Zettel §255, [1])

2. In one of the shortest articles ever published in the field of philosophy, Edmund Gettier [2] demonstrated that it is deeply inappropriate to conceive of knowledge as “justified true belief”. Yet, in the field of machine learning, so-called “belief revision” still precisely follows this untenable position. See also our chapter about the role of logic.

3. Michel Foucault, “Dits et Ecrits” I, 846 (dt. 1075) [3], cited after Bernhard Waldenfels [4], p.125.

4. We will see that the distinction, or even separation, of the “symbolic” and the “material” is neither that clear nor is it simple. From the side of the machine, Felix Guattari argued in favor of a particular quality [5], the machinic, which is roughly something like a mechanism in human affairs. From the side of the symbolic there is clearly the work of Edwina Taborsky to cite, who extended and deepened the work of Charles S. Peirce in the field of semiotics.

5. particularly homo erectus and  homo sapiens spec.

6. Humans of the species homo sapiens sapiens.

7. For the time being we leave this ominous term “intelligence” untouched, but I also will warn you about its highly problematic state. We will resolve this issue by the end of this essay.

8. Heidegger developed the figure of the “Gestell” (cf. [7]), which serves multiple purposes. It is providing a storage capacity, it is a tool for sort of well-ordered/organized hiding and unhiding (“entbergen”), it provides a scaffold for sorting things in and out, and thus it is working as a complex constraint on technological progress. See also Peter Sloterdijk on this topic [8].

9. elementarization regarding Descartes

10. Homo floresiensis, also called “hobbit man”, who lived on Flores, Indonesia, from 600’000y until approx. 3’000y ago. Homo floresiensis derived from Homo erectus. 600’000 years ago they obviously built a boat to transfer to the islands across a sea gate with strong currents. The interesting issue is that this endeavor requires a stable social structure, division of labor, and thus also language. Homo floresiensis had a particular forebrain anatomy which is believed to have provided the “intelligence”, while the overall brain was relatively small as compared to ours.

11. Concerning the “enigma of brain-mind interaction”, Eccles was an avowed dualist [11]. Consequently he searched for the “interface” between the mind and the brain, in which he was deeply inspired by the 3-world concept of Karl Popper. The “dualist” position holds that the mind exists at least partially independently from, and somehow outside, the brain. Irrespective of his contributions to neuroscience on the cellular level, these ideas (of Eccles and Popper) are just wild nonsense.

12. The Philosophical Investigations are probably the most important contribution to philosophy in the 20th century. They are often mistaken as a foundational document for the analytic philosophy of language. Nothing could be more wrong, however, than to take Wittgenstein as a founding father of analytic philosophy. Many of the positions that refer to Wittgenstein (e.g. Kripke) are just low-quality caricatures of his work.

13. Blair’s book is a must read for any computer scientist, despite some problems in its conceptualization of information.

14. Goldman [14] provides a paradigmatic example of how psychologists constantly miss the point of philosophy, up to today. In an almost arrogant tone he claims: “First, let me clarify my treatment of justificational rules, logic, and psychology. The concept of justified or rational belief is a core item on the agenda of philosophical epistemology. It is often discussed in terms of “rules” or “principles” of justification, but these have normally been thought of as derivable from deductive and inductive logic, probability theory, or purely autonomous, armchair epistemology.”

Markie [15] demonstrated that everything in these claims is wrong or mistaken. Our point about it is that something like “justification” is not possible in principle, but particularly it is not possible from an empirical perspective. Goldman’s secretions to the foundations of his own work are utter nonsense (till today).

15. It is one of the rare (but important) flaws in Blair’s work that he assimilates the concept of “information retrieval” in an unreflected manner. Neither is it reasonable to assign an ontological quality to information (we cannot say that information “exists”, as this would deny the primacy of interpretation), nor can we then say that information can be “retrieved”. See also our chapter about this issue. Despite his largely successful attempt to argue in favor of the importance of Wittgenstein’s philosophy for computer science, Blair fails to recognize that ontology is not tenable at large, but particularly for issues around “information”. It is a language game, after all.

16. See the Stanford Encyclopedia of Philosophy for a discussion of various positions.

17. In our investigation of models and their generalized form, we stressed the point that there are no apriori fixed “properties” of a measured (perceived) thing; instead we have to assign the criteria for measurement actively, hence we call these criteria assignates instead of “properties”, “features”, or “attributes”.

18. See our essay about logic.

20. See the entry in the Stanford Encyclopedia of Philosophy about Quine. Quine in “Word and Object” gives the following example (abridged version here). Imagine you discovered a formerly unknown tribe of friendly people. Nobody knows their language. You accompany one of them hunting. Suddenly a hare rushes along, crossing your way. The hunter immediately points to the hare, shouting “Gavagai!” What did he mean? Funny enough, this story happened in reality. British settlers in Australia wondered about those large animals hopping around. They asked the aborigines about the animal and its name. The answer was “kangaroo”, which means “I do not understand you” in their language.

21. This, of course, resembles Bergson, who, in Matter and Memory [23], argued that any thinking and understanding takes place by means of primary image-like “representations”. As Leonard Lawlor (Henri Bergson@Stanford) summarizes, Bergson conceives of knowledge as “knowledge of things, in its pure state, takes place within the things it represents.” We would not describe our principle of associativity, as it can be realized by SOMs, very differently…

22. The main difference between “intension” and “concept” is that the former still maintains a set of indices to raw observations of external entities, while the latter is completely devoid of such indices.

23. We conceived randolations as pre-specific relations; one may also think of them as probabilistic quasi-species that eventually may become discrete upon some measurement. The motivation for conceiving of randolations lies in the central drawback of relations: their double-binary nature presumes apriori measurability and identifiability, something that is not appropriate when dealing with language.

24. “Raw” is indeed very relative, especially if we take culturally transformed or culturally enabled percepts into account.

25. There are mainly two aspects to that: (1) large parts of the internet are organized as a hierarchical network, not as an associative network; nowadays everybody should know that telephone networks did not, do not, and will not develop “intelligence”; (2) so-called grid computing is always organized as a linear, additive division of labor; thus, it allows processes to run faster, but no qualitative change is achieved, as can be observed for instance in the purely size-related contrast between a mouse and an elephant. Taking (1) and (2) together, we may safely conclude that doing the wrong things (=counting Cantoric dust) at high speed will not produce anything capable of developing a capacity to understand.


  • [1] Ludwig Wittgenstein, Zettel. Oxford, Basil Blackwell, 1967. Edited by G.E.M. Anscombe and G.H. von Wright, translated by G.E.M. Anscombe.
  • [2] Edmund Gettier (1963), Is Justified True Belief Knowledge? Analysis 23: 121-123.
  • [3] Michel Foucault “Dits et Ecrits”, Vol I.
  • [4] Bernhard Waldenfels, Idiome des Denkens. Suhrkamp, Frankfurt 2005.
  • [5] Henning Schmidgen (ed.), Aesthetik und Maschinismus, Texte zu und von Felix Guattari. Merve, Berlin 1995.
  • [6] David Blair, Wittgenstein, Language and Information – Back to the Rough Ground! Springer Series on Information Science and Knowledge Management, Vol.10, New York 2006.
  • [7] Martin Heidegger, The Question Concerning Technology and Other Essays. Harper, New York 1977.
  • [8] Peter Sloterdijk, Nicht-gerettet, Versuche nach Heidegger. Suhrkamp, Frankfurt 2001.
  • [9] Hermann Haken, Synergetik. Springer, Berlin New York 1982.
  • [10] R. Graham, A. Wunderlin (eds.): Lasers and Synergetics. Springer, Berlin New York 1987.
  • [11] John Eccles, The Understanding of the Brain. 1973.
  • [12] Douglas Hofstadter, Fluid Concepts And Creative Analogies: Computer Models Of The Fundamental Mechanisms Of Thought. Basic Books, New York 1996.
  • [13] Robert van Rooij, Vagueness, Tolerance and Non-Transitive Entailment. pp.205-221 in: Petr Cintula, Christian G. Fermüller, Lluis Godo, Petr Hajek (eds.), Understanding Vagueness. Logical, Philosophical and Linguistic Perspectives. Vol.36 of Studies in Logic, College Publications, London 2011. Book available online.
  • [14] Alvin I. Goldman (1988), On Epistemology and Cognition, a response to the review by S.W. Smoliar. Artificial Intelligence 34: 265-267.
  • [15] Peter J. Markie (1996). Goldman’s New Reliabilism. Philosophy and Phenomenological Research Vol.56, No.4, pp. 799-817
  • [16] Saul Kripke, Naming and Necessity. 1972.
  • [17] Scott Soames, Beyond Rigidity: The Unfinished Semantic Agenda of Naming and Necessity. Oxford University Press, Oxford 2002.
  • [18] Scott Soames (2006), Précis of Beyond Rigidity. Philosophical Studies 128: 645–654.
  • [19] Michel Foucault, Les Hétérotopies – [Radio Feature 1966]. Youtube.
  • [20] Michel Foucault, Die Heterotopien. Der utopische Körper. Aus dem Französischen von Michael Bischoff, Suhrkamp, Frankfurt 2005.
  • [21] David Grahame Shane, Recombinant Urbanism – Conceptual Modeling in Architecture, Urban Design and City Theory. Wiley Academy Press, Chichester 2005.
  • [22] Willard van Orman Quine, Word and Object. M.I.T. Press, Cambridge (Mass.) 1960.
  • [23] Henri Louis Bergson, Matter and Memory. Transl. Nancy M. Paul & W. Scott Palmer, Martino Fine Books, Eastford (CT) 2011 [1911].
  • [24] Fernando Colmenares, Helena Rivero (1986). A Conceptual Model for Analysing Interactions in Baboons: A Preliminary Report. pp.63-80 in: Colgan PW, Zayan R (eds.), Quantitative models in ethology. Privat I.E, Toulouse.
  • [25] Irene Pepperberg (1998). Talking with Alex: Logic and speech in parrots. Scientific American. Available online; see also the Wikipedia entry about Alex.
  • [26] a. Robert Seyfarth, Dorothy Cheney, Peter Marler (1980). Monkey Responses to Three Different Alarm Calls: Evidence of Predator Classification and Semantic Communication. Science, Vol.210: 801-803. b. Dorothy L. Cheney, Robert M. Seyfarth (1982). How vervet monkeys perceive their grunts: Field playback experiments. Animal Behaviour 30(3): 739–751.
  • [27] Robert Seyfarth, Dorothy Cheney (1990). The assessment by vervet monkeys of their own and another species’ alarm calls. Animal Behaviour 40(4): 754–764.
  • [28] Klaus Wassermann (2010). Nodes, Streams and Symbionts: Working with the Associativity of Virtual Textures. The 6th European Meeting of the Society for Literature, Science, and the Arts, Riga, 15-19 June, 2010. available online.
  • [29] Robert Brandom, Making It Explicit. Harvard University Press, Cambridge (Mass.) 1998.
  • [30] Willard van Orman Quine (1951), Two Dogmas of Empiricism. Philosophical Review, 60: 20–43. Available online.



Vagueness: The Structure of Non-Existence.

December 29, 2011 § Leave a comment

For many centuries now, clarity has been the major goal of philosophy.

updated version featuring new references

It drove the first instantiation of logics by Aristotle, who devised it as a cure for mysticism, which was considered a kind of primary chaos in human thinking. Clarity was the intended goal in the second enlightenment as a cure for scholastic worries, and, among many other places, we find it in Wittgenstein’s first work, now directed at philosophy itself. In each of those instances, logics served as the main pillar in pursuing the goal of clarity.

Vagueness seems to act as an opponent to this intention, lurking behind the scenes in any comparison, which is why one may regard it as ubiquitous in cognition. There are whole philosophical and linguistic schools dealing with vagueness as their favorite subject. Heather Burnett (UCLA) recently provided a rather comprehensive overview [1] of the various approaches, including her own proposals to solve some puzzles of vagueness in language, particularly related to relative and absolute adjectives and their context-dependency. In the domain of scientific linguistics, vagueness is characterized by three related properties: being fuzzy, being borderline, or being susceptible to the sorites (heap) paradox. A lot of rather different proposals for a solution have been suggested so far [1,2], most of them technically quite demanding; yet, none has been generally accepted as convincing.

The mere fact that there are many incommensurable theories, models and attitudes about vagueness we take as a clear indication of a still unrecognized framing problem. Actually, in the end we will see that the problem of vagueness in language does not “exist” at all. We will provide a sound solution that does not refer just to the methodological level. If we replace vagueness by the more appropriate term of indeterminacy, we readily recognize that we can’t speak about vague and indeterminate things without implicitly talking about logics. In other words, the issue of (non-linguistic) vagueness triggers the question about the relation between logics and world. This topic we will investigate elsewhere.

Regarding vagueness, let us consider just two examples. The first is Peter Unger’s famous example regarding clouds [3]. Where does a cloud end? This question can’t be answered. Close inspection and accurate measurement do not help. It seems as if the vagueness is a property of the “phenomenon” that we call “cloud.” If we conceive of it as a particular type of object, we may ascribe to it a resemblance to what is called an “open set” in mathematical topology, or to the integral over asymptotic functions. Bertrand Russell, however, would have called this the fallacy of verbalism [4, p.85].

Vagueness and precision alike are characteristics which can only belong to a representation, of which language is an example. […] Apart from representation, whether cognitive or mechanical, there can be no such thing as vagueness or precision;

For Russell, objects can’t be ascribed properties such as being vague. Vagueness is a property of the representation, not of the object. Thus, when Unger concludes that there are no ordinary things, he is trapped by several misunderstandings, as we will see. We could add that open sets, i.e. sets without a definable border, are not vague at all.

As the second example we take a widespread habit in linguistics when addressing the problem of vagueness, e.g. supervaluationism. This system has the consequence that borderline cases of vague terms yield statements that are neither true nor false. Although that model induces a truth-value gap, it nevertheless keeps the idea of truth values fully intact. All linguistic models about vagueness assume that it is appropriate to apply the ideas of truth values, predicates, and predicate logics to language.
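To make the supervaluationist move tangible, here is a minimal sketch in Python. The hair-count cutoffs and the function name are purely hypothetical illustrations of ours, not part of any linguistic formalism: a vague statement counts as “true” only if every admissible sharpening (precisification) of the vague predicate makes it true, and the borderline cases fall into the truth-value gap.

```python
# Supervaluationism, sketched: a vague statement is super-true only if it
# comes out true under every admissible "precisification" (sharp cutoff).
# The cutoffs below are hypothetical illustrations.
ADMISSIBLE_CUTOFFS = [8_000, 10_000, 12_000]  # hairs below which one is "bald"

def supervaluate_bald(hairs: int) -> str:
    verdicts = {hairs < c for c in ADMISSIBLE_CUTOFFS}
    if verdicts == {True}:
        return "true"       # super-true: bald under every sharpening
    if verdicts == {False}:
        return "false"      # super-false: not bald under any sharpening
    return "neither"        # borderline case: the truth-value gap

print(supervaluate_bald(5_000))    # clearly bald
print(supervaluate_bald(9_000))    # borderline: neither true nor false
print(supervaluate_bald(20_000))   # clearly not bald
```

Note how the sketch makes the point of the paragraph above concrete: the gap appears, yet the whole construction still runs on classical truth values.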

As far as I can tell from all the sources I have inspected, any approach to vagueness in linguistics takes place within two very strong assumptions. The first basic assumption is that (1) the concept of “predicates” can be applied to an analysis of language. From this basic assumption, three secondary ones derive: (1.1) language is a means to transfer clear statements; (1.2) it is possible to use language in a way that no vagueness appears; (1.3) words are items that can be used to build predicates.

Besides this first assumption of the “predicativity” of language, linguistics further assumes that words could be definite and non-ambiguous. Yet, that is not a basic assumption. The second basic assumption behind it is that (2) the purpose of language is to transfer meaning unambiguously. All three aspects of that assumption are questionable: having a purpose, serving as a tool or even a medium to transfer meaning, and doing so unambiguously.

So we summarize: Linguistics employs two strong assumptions:

  • (1) The concept of apriori determinable “predicates” can be applied to an analysis of language.
  • (2) The purpose of language is to transfer meaning unambiguously.

Our position is that both assumptions are deeply inappropriate. The second one we have already dealt with elsewhere, so we focus on the first one here. We will see that the “problematics of vagueness” is non-existent. We do not claim that there is no vagueness, but we refute that it is a problem. There are also no serious threats from linguistic paradoxes, because these paradoxes are simply a consequence of “silly” behavior.

We will provide several examples of that, but the structure is the following: the problematics consists in a performative contradiction of the rules one has set up beforehand. One should not pretend to play a particular game by fixing the rules according to one’s own interests, only to violate those rules a moment later. Of course, one could create a play or game from this, too; Lewis Carroll wrote two books about the bizarre consequences of such a setting. Let us again listen to Russell’s arguments, now to his objection against the “paradoxicity” of “baldness,” which is usually subsumed under the sorites (heap) paradox.

It is supposed that at first he was not bald, that he lost his hairs one by one, and that in the end he was bald; therefore, it is argued, there must have been one hair the loss of which converted him into a bald man. This, of course, is absurd. Baldness is a vague conception; some men are certainly bald, some are certainly not bald, while between them there are men of whom it is not true to say they must either be bald or not bald. The law of excluded middle is true when precise symbols are employed, but it is not true when symbols are vague, as, in fact, all symbols are.

Now, describing the heap (Greek: sorites) or the hair of “balding” men by referring to countable parts of the whole, i.e. either sand particles or singularized hairs, contradicts the conception of baldness. Confronting both in a direct manner (removing hair by hair) mixes two different games. Mixing soccer and tennis is “silly,” especially after the participants have declared that they intend to play soccer; mixing vagueness and counting is silly for the same reason.
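The game-mixing can be made explicit in a small Python sketch; all hair counts below are hypothetical. Forcing a crisp predicate onto “bald” manufactures exactly the single decisive hair that Russell mocks, whereas treating baldness as graded never produces one.

```python
# Forcing a crisp predicate onto "bald" manufactures an arbitrary cutoff,
# and with it the absurd single decisive hair. The cutoff is arbitrary.
def bald_crisp(hairs: int, cutoff: int = 10_000) -> bool:
    return hairs < cutoff

# Exactly one hair flips the verdict -- the sorites "paradox" in one line:
# bald_crisp(9_999) is True, bald_crisp(10_000) is False.

# Treating baldness as the vague concept it is, no single hair matters:
def bald_degree(hairs: int, lo: int = 2_000, hi: int = 60_000) -> float:
    if hairs <= lo:
        return 1.0   # certainly bald
    if hairs >= hi:
        return 0.0   # certainly not bald
    return (hi - hairs) / (hi - lo)   # the "penumbra": no sharp jump
```

The two functions belong to two different games; producing a "paradox" by feeding single-hair counts into the first while talking about the concept modeled by the second is precisely the mixing described above.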

This should make clear why the application of the concept of “predicates” to vague concepts, i.e. concepts that are apriori defined to be vague, is simply absurd. Remember that even a highly innovative philosopher like Russell, co-author of so extremely abstract a work as the Principia Mathematica, needed several years to accept Wittgenstein’s analysis that the usage of symbols in the Principia is self-contradictory, because actualized symbols are never free of semantics.

Words are Non-Analytic Entities

First, I would recall an observation originally, or at least most famously, expressed by Augustine. His concern was the notion of time. I’ll give a sketch of it in my words: as long as he simply uses the word, he perfectly knows what time is. Yet, as soon as he starts to think about time, trying to get an analytic grip on it, he increasingly loses touch and understanding, until he does not know anything about it at all.

This phenomenon is not limited to the analysis of a concept like time, which some conceive of even as a transcendental “something.” The phenomenon of disappearance upon close inspection is not unknown: we meet it in Carroll’s character of the Cheshire cat, and we meet it in quantum physics. Let us call this phenomenon the CQ-phenomenon.

Ultimately, the CQ-phenomenon is a consequence of the self-referentiality of language and the self-referentiality of the investigation of language. It is not possible to apply a scale to itself without getting into serious trouble such as fundamental paradoxicity. The language game of “scale” implies a separation of observer and observed that can’t be maintained in the cases of the cat, the quantum, or language. Of course, there are ways to avoid such difficulties, but only at high cost. For instance, strong regulations or very strict conventions can be imposed on the investigation of such areas and the application of self-referential scales, to which one may count linguistics, sociology, the cognitive sciences, and of course quantum physics. Actually, positivism is nothing else than such a “strong convention”. Yet, even with such strong conventions being applied, the results of such investigations are surprising and arbitrary, far from being a consequence of rationalist research, because self-referential systems are always immanently creative.

It is more than salient that linguists create models about vagueness that are subsumed under language. This position is deeply nonsensical: it not only purports ontological relevance for language, it also implicitly claims a certain “immediacy” for the linkage between language and empirical aspects of the world.

Our position differs strongly from that: models are entities that are “completely” outside of language. Of course, they are not separable from each other. We will deal elsewhere with this mutual dependency in more detail and with a more appropriate framing. Regardless of how modeling and language are related, they definitely cannot be related in the way linguistics implicitly assumes. It is impossible to use language to transfer meaning, because it is in principle not possible to transfer meaning at all. Of course, this opens the question of what then is going to be “transferred.”

This brings us to the next objection against the presumed predicativity of language, namely its role in social intercourse, from which the CQ-phenomenon can’t be completely separated.

Language: What can be Said

Many things and thoughts are not explicable. Many things can also only be demonstrated, not expressed in any kind of language. Yet, despite these two severe constraints, we may use language not only to speak about such things, but also to create what can only be demonstrated.

Robert Brandom’s work [5] may well be regarded as a further leap forward in the understanding of language and its practice. He proposes the inferentialist position, with which our positioning of the model is completely compatible. According to Brandom, we always have to infer a lot of things from received words during a discourse. We even have to signal that we expect those things to be inferred. The only thing we can attempt in a language-based interaction is to increasingly confine the degrees of freedom of the possible models that are created in the interactees’ minds. Yet, achieving a certain state of resonance, or the feeling that one understands the other, does NOT imply that the models are identical. All that can be said is that the resonating models in the two interacting minds allow a certain successful prediction of the further course of the interaction. Here, we should be very clear about our understanding of the concept of model. You will find it in the chapters about the generalized model and the formal status of models (as a category).

Since Austin [6] it has been well known that language is not equal to the series of graphical or phonic signals, the reason simply being that language is a social activity, both structural and performative. An illocutionary act is part of any utterance and any piece of text in a natural language, sometimes even in the case of a formal language. Yet, it is impossible to speak about that dimension within language.

A text is even more than a “series” of Austinian or Searlean speech acts. The reason for this is a certain aspect of embodiment: only entities equipped with memory can use language. Now, receiving a series of words immediately establishes a more or less volatile immaterial network in the “mind” of the receiving entity as well as in the “sending” entity. This network owns properties about which it is absolutely impossible to speak, despite the fact that these networks somehow represent the ultimate purpose, or “essence”, of natural language. We can’t speak about it, we can’t explicate it, and we simply commit a categorical mistake if we apply logics, or tools from logics like predicates, in the attempt to understand it.

Logics and Language

These phenomena clearly prove that logics and language are different things. They are deeply incommensurable, despite the fact that they can’t be separated completely from each other, much like modeling and language. The structure of the world shows up in the structure of logics, as Wittgenstein mentioned. There are good reasons to take Wittgenstein seriously on that. According to the Tractatus, the coupling between world and logics can’t be a direct one [7].

In contrast to the world, logics is not productive. “Novelty” is not a logical entity. Pure logics is a transcendental system prior to any usage of symbols, precisely because any usage would already require interpretation. Logical predicates are nothing that needs to be interpreted. These games are simply different games.

In his talk to the Jowett Society, Oxford, in 1923, Bertrand Russell, exhibiting an attitude quite different from that in the Principia and following much the line drawn by Wittgenstein, writes [4, p.88]:

Words such as “or” and “not” might seem, at first sight, to have a perfectly precise meaning: “p or q” is true when p is true, true when q is true, and false when both are false. But the trouble is that this involves the notions of “true” and “false”; and it will be found, I think, that all the concepts of logic involve these notions, directly or indirectly. Now “true” and “false” can only have a precise meaning when the symbols employed—words, perceptions, images, or what not—are themselves precise. We have seen that, in practice, this is not the case. It follows that every proposition that can be framed in practice has a certain degree of vagueness; that is to say, there is not one definite fact necessary and sufficient for its truth, but a certain region of possible facts, any one of which would make it true. And this region is itself ill-defined: we cannot assign to it a definite boundary.

This is exactly what we meant before: “Precision” concerning logical propositions is not achievable as soon as we refer to symbols that we use. Only symbols that can’t be used are precise. There is only one sort of such symbols: transcendental symbols.

Mapping logics to language, as it happens so frequently and probably even as an acknowledged practice in the linguistic treatment of vagueness, means to reduce language to logics. One changes the frame of reference, much like Zeno does in his self-generated pseudo-problems, much like Cantor [8] and his fellow Banach [9] did (in contrast to Dedekind [10]), or like Taylor did [11]. 3-dimensionality produces paradoxes in a 2-dimensional world, not only faulty projections. It is not really surprising that through the positivistic reduction of language to logics awkward paradoxes appear. Positivism implies violence, not only in the case of linguistics.

We now can understand why it is almost silly to apply a truth-value methodology to the analysis of language. The problem of vagueness is not a problem; it is deep in the blueprint of “language” itself. It is almost trivial to make remarks as Russell did [4, p.87]:

The fact is that all words are attributable without doubt over a certain area, but become questionable within a penumbra, outside which they are again certainly not attributable.

And it really should be superfluous to cite this 90-year-old piece. Quite remarkably, it is not.

Language as a Practice

Wittgenstein emphasized repeatedly that language is a practice. Language is not a structure, so it is equivalent neither to logics nor to grammar, or even grammatology. In practices we need models for prediction or diagnosis, and we need rules; we frequently apply habits, which may even become symbolized.

Thus, we again may ask what is happening when we talk to each other. First, we exclude those models of which we now understand that they are not appropriate.

  • Logics is incommensurable with language.
  • Language, as well as any of its constituents, can’t be made “precise.”

As a consequence, language (and all of its constituents) is something that can’t be completely explicated. Large parts of language can only be demonstrated. Of course, we do not deny the proposal that a discourse reflects “propositional content,” as Brandom calls it ([5] chp. 8.6.2.). This propositional or conceptual content is given by the various kinds of models appearing in a discourse, models that are being built, inferred, refined, symbolized and finally externalized. As soon as we externalize a model, however, it is not language any more. We will investigate the dynamical route between concepts, logics and models in another chapter. Here and for the time being we may state that applying logics as a tool to language mistakes propositional content for propositional structure.

Again: What happens if I point to the white area up in the air before the blue background that we call sky, then calling out “Oh, look, a cloud!”? Do I mean that there is an object called “cloud”? Even an object at all? No, definitely not. Claiming that there are “cloud-constituters,” that we do not measure exactly enough, that there is no proper thing we could call “cloud” (Unger), that our language has a defect, and so on: none of these purported “solutions” of the problem [for an overview see 11] helps to the slightest extent.

Anybody who has made a mountain hike knows the fog at high altitudes. From lower regions, however, the same actual phenomenon is seen as a cloud. This provides us a hint that the language game “cloud” also comprises information about the physical relational properties (position, speed, altitude) of the speaker.

What happens in this utterance is that I invite my partner in discourse to interpret a particular, presumably shared sensory input, and to interpret me and my interpretations as well. We may infer that the language game “cloud” contains a marker, linked both to the structure and the semantics of the word, indicating that (1) there is an “object” without sharp borders, and (2) no precise measurement should be performed. The symbolic value of “cloud” is such that there is no space for a different interpretation. The word “cloud” does not indicate the “object,” but a particular procedure, or class of procedures, that I as the primary speaker suggest when saying “Oh, there is a cloud.” By means of such procedures a particular style of modeling will be “induced” in my discourse partner, a particular way to actualize an operationalization, leading to such a representation of the signals from the external world that both partners are able to increase their mutual “understanding.” Yet, even this “understanding” is not directed at the proposed object either. This scheme transparently describes the inner structure of what Charles S. Peirce called a “sign situation.” Neither accuracy, nor precision, nor vagueness are relevant dimensions in such kinds of mutually induced “activities,” which we may call a Peircean “sign.” They are completely secondary, a symptom of the use and of the openness.

Russell correctly proposes that all words in a language are vague. Yet, we would like to extend his proposal by drawing on the image of thought that we explicate throughout all of our writings here. Elsewhere we already cited the Lagrangian trick in abstraction. Lagrange became aware of the power of a particular replacement operation: in a proposal or proposition, constants can always be replaced by appropriate procedures plus further constants. This increases the generality and abstractness of the representation. Our proposal extending Russell’s insight is aligned to this scheme:

Different words are characterised (among other factors) by different procedures to select a particular class (mode) of interpretation.

Such procedures are precisely given as the kind of models that are necessary besides those models implied in performing the interpretation of the actual phenomenon. The mode of interpretation comprises the selection of the scale employed in the operationalization, viz. measurement. Coarser scales imply a more profound underdetermination, a larger variety of possible and acceptable models, and a stronger feeling of vagueness.
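The Lagrangian replacement of constants by procedures can be sketched in a few lines of Python. The names and the geometric example are ours, chosen only as an analogy for how a constant turns into a procedure plus further constants, raising generality at each step:

```python
# Lagrangian-style abstraction, sketched: a constant in an expression is
# replaced by a procedure plus further constants.

# Step 0: a concrete proposition containing only constants.
area_of_this_square = 3 * 3

# Step 1: the constant 3 becomes a parameter; the proposition becomes a
# procedure applicable to a whole class of cases.
def area_of_square(side: float) -> float:
    return side * side

# Step 2: the squaring itself becomes a replaceable procedure, so "area"
# now selects among classes of interpretations (shapes), analogous to a
# word selecting a class (mode) of interpretation models.
def area(procedure, *constants):
    return procedure(*constants)

print(area(area_of_square, 3))         # the original case, recovered
print(area(lambda w, h: w * h, 3, 4))  # a different "mode" of interpretation
```

The analogy to the proposal above: a word does not carry a constant (a fixed referent) but a procedure that selects which class of interpretation models is to be applied.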

Note that all these models are outside of language. In our opinion it does not make much sense to instantiate the model inside of language and then to claim a necessarily quite opaque “interpretation function,” as Burnett extensively demonstrates (if I understood her correctly). Our proposal is also more general (and more abstract) than Burnett’s, since we emphasize the procedural selection of interpretation models (note that models are not functions!). The necessary models for words like “taller,” “balder” or “cloudy” are not part of language and can’t be defined in terms of linguistic concepts. I would not call that a “cognitivist” stance, though. We conceive of it just as a consequence of the transcendental status of models. This proposal is linked to two further issues. First, it implies the acceptance of the necessity of models as a condition. In turn, we have to clarify our attitude towards the philosophical concept of the condition. Second, it implies the necessity of an instantiation, the actualization of it as the move from the transcendental to the applicable, which in turn invokes further transcendental concepts, as we will argue and describe here.

Saying this, we could add that models are not confined to “epistemological” affairs. As the relation between language (as a practice) and the “generalized” model shows, there is more to it than a kind of “generalized epistemology.” The generalization of epistemology can’t be conceived as a kind of epistemology at all, as we will argue in the chapter about the choreosteme. The particular relation between language and model as we have outlined it should also make clear that “models” are not limited to the categorization of observables in the outer world. It also applies—now in more classic terms—to the roots of what we can know without observation (e.g. Strawson, p.112 in [12]). It is not possible to act, to think, or to know without implying models, because it is not possible to act, to think or to know without transformation. This gives rise to the model as a category and to the question of the ultimate conditionability of language, actions, or knowing. In our opinion, and in contrast to Strawson’s distinction, it is not appropriate to separate “knowledge from observation” and “knowledge without observation.” Insisting on such a separation would immediately also drop the insight about the mutual dependency of models, concepts, symbols and signs, among many other things. In short, we would fall back directly into the mystic variant of idealism (cf. Frege’s hyper-platonism), implying also some “direct” link between language and idea. We rate such a disrespect of the body, matter and mediating associativity as inappropriate and of little value.

It would be quite interesting to conduct a comparative investigation of the conceptual life cycle of pictorial information in contrast to textual information along the line opened by such a “processual indicative.” Our guess is that the textual “word” may have a quite interesting visual counterpart. But we have to work on this later and elsewhere.

Our extension also leads to the conclusion that “vague” is not a logical “opposite” of “accurate,” nor of “precise” either. Here we differ (not only) from Bertrand Russell’s position. So to speak, the vagueness of language applies here too. In our perspective, “accurate” simply symbolizes the indicative to choose a particular class of models that a speaker suggests the partner in discourse should use. Nothing more, but also nothing less. Models cannot be the “opposite” of other models. Words (or concepts) like “vague” or “accurate” just explicate the necessity of such a choice. Most of the words in a language refer only implicitly to that choice. Adjectives, whether absolute or relative, are bivalent with respect to the explicitness or implicitness of the choice of the procedure, just depending on the context.

For us it feels quite nice to discover a completely new property of words as they occur in natural languages. We call it the “processual indicative.” A “word” without such a processual indicative on the structural level would not be a “word” any more: either it reduces to a symbol, or even an index, or the context degenerates from a “natural” language (spoken and practiced in a community) into a formal language. The “processual indicative” of the language game “word” is a grammatical property (grammar here as philosophical grammar).

Nuisance, Flaws, and other Improprieties

Charles S. Peirce once mentioned, in a letter around 1908, that is, well after his major works, and answering a question about the position or status of his own work, that he tends to label it idealistic materialism. Notably, Peirce founded what is known today as American pragmatism. The idealistic note, as well as the reference to materialism, have to be taken in an extremely abstract manner in order to justify such a label. Of course, Peirce himself was able to handle such abstract levels.

Usually, however, idealism and pragmatism stand in strong contradiction to each other. This is especially true when it comes to engineering, or more generally, to the problematics of the deviation, or the problematics posed by the deviation, if you prefer.

Obviously, linguistics is blind, or even self-deceptive, with regard to its domain-specific “flaw,” vagueness. Linguists treat vagueness as a kind of flaw, or nuisance, at least as a kind of annoyance that needs to be overcome. As we already mentioned, there are many incommensurable proposals for how to overcome it, except one: checking whether it is a flaw at all, and which conditions or assumptions lead to the proposal that vagueness is indeed a flaw.

Taking only one step back, it is quite obvious that logical positivism and its inheritance is the cause of the supposed flaw. The problem “appeared” in the early 1960s, when positivism was prevailing. Dropping the assumptions of positivism also removes the annoyance of vagueness.

Engineering a new device is a demanding task. Yet, there are two fundamentally different approaches. The first one, more idealistic in character, starts with an analytic representation, that is, a formula, or more likely, a system of formulas. Any influence that is not covered by that formula is either shifted into the premises, or into the so-called noise: influences about which nothing “could” be known, and which drive the system under modeling into an unpredictable direction. Since this approach starts with a formula, that is, an analytic representation, we can also say that it starts under the assumption of representability, or identity. In fact, whenever you hear designers, engineers or politicians speak about “disturbances,” it is more than obvious that they follow the idealistic approach, which in turn follows a philosophy of identity.

The second approach is very different from the first one, since it does not start with identity. Instead, it starts with the acknowledgement of difference. Pragmatic engineering does not work in spite of nuisances, it works precisely within and along them. Thus, there is no such thing as a nuisance, a flaw, an annoyance, etc. There is just fluctuation. Instead of assuming the structural constant labeled “ignorance,” as represented by the concept of noise, there is a procedure that is able to digest any fluctuation. A “disturbance” is nothing that can be observed as such. Quite in contrast, it is just and only a consequence of a particular selection of a purpose. Thus, pragmatic engineering leads to a completely different structure than would be generated under idealistic assumptions. The difference between the two remains largely invisible in all cases where the informational part is negligible (which actually is never the case), but it is vital to consider it in any context where formalization deals with information, whether in linguistics or machine learning.
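The contrast between the two attitudes can be made tangible in a toy computation. The following sketch (the drifting signal, the calibration window and the smoothing constant are all illustrative assumptions, not a claim about any particular engineering practice) compares an “idealistic” estimator, which fixes a formula up front and books every deviation as “noise,” against a “pragmatic” procedure that simply digests each fluctuation as it arrives:

```python
import random

random.seed(0)

# A drifting signal: the "system under modeling" slowly changes its level.
signal = [0.0]
for _ in range(199):
    signal.append(signal[-1] + random.gauss(0.02, 0.1))

# Idealistic approach: fix a formula up front (here: a constant level,
# the mean of an initial calibration window) and treat every deviation
# from it as "noise" or "disturbance".
calibration = signal[:20]
fixed_level = sum(calibration) / len(calibration)
idealistic_error = sum(abs(x - fixed_level) for x in signal) / len(signal)

# Pragmatic approach: no fixed identity, just a procedure that digests
# every fluctuation as it arrives (an exponential moving average).
alpha, estimate = 0.2, signal[0]
pragmatic_errors = []
for x in signal:
    pragmatic_errors.append(abs(x - estimate))
    estimate += alpha * (x - estimate)  # adapt instead of labeling "noise"
pragmatic_error = sum(pragmatic_errors) / len(pragmatic_errors)

print(idealistic_error, pragmatic_error)
```

On a drifting system the adaptive procedure stays close while the fixed formula accumulates ever larger “disturbances”: the structure of the error is a consequence of the chosen purpose, not an observable property of the system.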

The issue relates to “cognition” too, understood here as the naively and phenomenologically observable precipitation of epistemic conditions. From everyday experience, but also as researchers in the “cognitive sciences,” we know, i.e. we could agree on the proposal, that cognition is something that is astonishingly stable. The traditional structuralist view, as Smith & Jones call it [13], takes this stability as a starting point and as the target of the theory. The natural consequence is that this theory rests on the a priori assumption of a strict identifiability of observable items and of the results of cognitive acts, which are usually called concepts and knowledge. In other words, the idea that knowledge is about identifiable items is nothing else than a petitio principii: since it serves as the underlying assumption, it is no surprise that the result in the end exhibits the same quality. Yet, there is a (not so) little problem, as Smith & Jones correctly identified (p.184/185):

The structural approach pays less attention to variability (indeed, under a traditional approach, we design experiments to minimize variability) and not surprisingly, it does a poor job explaining the variability and context sensitivity of individual cognitive acts. This is a crucial flaw.  […]

Herein lies our discontent: If structures control what is constant about cognition, but if individual cognitive acts are smartly unique and adaptive to the context, structures cannot be the cause of the adaptiveness of individual cognitions. Why, then, are structures so theoretically important? If the intelligence - and the cause of real-time individual cognitive acts - is outside the constant structures, what is the value of postulating such structures?

The consequence the authors draw is to conceive of cognition as a process. They cite the work of Freeman [14] on the cognition of smelling:

They found that different inhalants did not map to any single neuron or even group of neurons but rather to the spatial pattern of the amplitude of waves across the entire olfactory bulb.

The inheritance of naive phenomenology (phenomenology is always naive) and of its main pillar, the “identifiability of X as X,” obviously leads to conclusions that are disastrous for the traditional theory. It vanishes.

Given these difficulties, positivists are trying to adapt. Yet, people still dream of semantic disambiguation as a mechanical technique, or likewise dream (as Fregean worshipers) of eradicating vagueness from language by trying to explain it away.

One of the paradoxes dealt with over and over again is the already mentioned Sorites (Greek for “heap”) paradox. When is a heap a heap? Closely related to it are constructions like Wang’s Paradox [15]: if n is small, then n+1 is also small; hence there is no number that is not small. How to deal with that?
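The mechanics of Wang’s Paradox can be run quite literally. In the sketch below (the erosion constant and the 0.5 cutoff are arbitrary illustrative assumptions), a crisp predicate combined with the tolerant induction step declares every number “small,” whereas a graded reading lets each step erode the degree of smallness a little, so the absurdity never arises:

```python
# A crisp "small" plus the tolerant step "small(n) -> small(n+1)"
# mechanically declares every number small; a graded reading avoids
# this by letting each step cost a tiny bit of smallness.

def crisp_small(n):
    small = True          # base case: 0 is small
    for _ in range(n):    # tolerant step: smallness is fully inherited
        small = small and True
    return small

def graded_small(n, erosion=0.999):
    degree = 1.0          # base case: 0 is small to degree 1
    for _ in range(n):    # each step erodes the degree slightly
        degree *= erosion
    return degree

print(crisp_small(10**4))         # True: the paradox in full
print(graded_small(10**4) > 0.5)  # False: 10000 is no longer "small"
```

Note that the graded version does not “solve” the paradox; it merely shows that the paradox is manufactured by the crisp, identity-based reading of the predicate.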

Certainly, it does not help to invoke the famous “context dependency” as a potential cure. De Jaegher and van Rooij recently wrote [16]:

“If, as suggested by the Sorites paradox, fine-grainedness is important, then a vague language should not be used. Once vague language is used in an appropriate context, standard axioms of rational behaviour are no longer violated.”

Yet, what could “appropriate” mean? Actually, for an endeavor like the one De Jaegher and van Rooij have started, appropriateness needs to be determined by some means that could not itself be affected by vagueness. But how to do that for language items? They continue:

“The rationale for vagueness here is that vague predicates allow players to express their valuations, without necessarily uttering the context, so that the advantage of vague predicates is that they can be expressed across contexts.”

At first sight, this seems plausible. Now, any part of language can be used in any context, so all of language is vague. The unfavorable consequence for De Jaegher & van Rooij is that their attempt is not even a self-disorganizing argument; it has the unique power of being self-vanishing: their endeavor of expelling vagueness is doomed to fail before it has even started. Their main failure is, however, that they take for granted the a priori assumption that vagueness and crispness are “real” entities that somehow exist before any perception, such that language could be “infected” or affected by them. Note that this is not a statement about linguistics, it is one about philosophical grammar.

It also does not help to insist on “tolerance.” Van Rooij [17] recently mentioned that “vagueness is crucially related with tolerant interpretation.” Van Rooij desperately tries to hide his problem; the expression “tolerant interpretation” is almost completely empty. What should it mean to interpret something tolerantly as X? Not as X? Also a bit as Y? How then would we exchange ideas, and how could it be that we agree exactly on something? The problem is just moved around a corner, not addressed in any reasonable manner. Yet, there is a second objection to “tolerant interpretation.”

Interpretation of vague terms by a single entity must always fail. What is needed are TWO interpretations that are played out as a negotiation in language games. Two entities, whether humans or machines, have to agree, i.e. they also have to be able to perform the act of agreeing, in order to resolve the vagueness of items in language. It is better to drop vagueness altogether and simply to say that at least two entities must necessarily be “present” to play a language game. This “presence” is, of course, an abstract semiotic one. It is given in any Peircean sign situation. Since signs refer only and always just to other signs, vagueness is, in other words, not a difficulty that needs to be “tolerated.”
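The two-entity requirement can be illustrated with a deliberately minimal negotiation game. In the sketch below, each agent holds a private threshold for calling a collection of grains a “heap”; the update rule (each agent concedes a fixed fraction toward the partner’s proposal) is a purely illustrative assumption, not a model of real discourse. The point is structural: neither threshold is “correct,” yet the interaction of two entities produces a shared, usable boundary.

```python
# Two agents with private, vague thresholds for "heap" negotiate a
# shared boundary through mutual concessions. Concession rate and
# starting values are arbitrary illustrative choices.

def negotiate(t_a, t_b, concession=0.3, rounds=30):
    for _ in range(rounds):
        # simultaneous update: each agent moves toward the other's proposal
        t_a, t_b = (t_a + concession * (t_b - t_a),
                    t_b + concession * (t_a - t_b))
    return t_a, t_b

a, b = negotiate(30.0, 120.0)
print(abs(a - b) < 1e-6)  # True: the two agents now share one boundary
```

A single agent running this procedure against itself learns nothing; the boundary only comes into being through the interaction, which is exactly the claim about the abstract “presence” of two entities.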

Dummett [15] spent more than 20 pages on the examination of the problem of vagueness. To date it is one of the most thorough examinations, but unfortunately it has not been received or recognized by contemporary linguistics. There is still a debate about it, but no further development of it. Dummett essentially proves that vagueness is not a defect of language; it is a “design feature.” First, he proposes a new logical operator, “definitely,” in order to deal with the particular quality of the indeterminateness of language. Yet, it does not remove vagueness or its problematic, “that is, the boundaries between which acceptable sharpenings of a statement or a predicate range are themselves indefinite.” (p.311)

He concludes that “vague predicates are indispensable”; they are not eliminable in principle without losing language itself. Tolerance does not help, any more than selecting “appropriate contexts” does; both are proposed merely to get rid of a problem. What linguists propose (at least those adhering to positivism, i.e. nowadays nearly all of them) is to “carry around a colour-chart, as Wittgenstein suggested in one of his examples” (Dummett). This would turn observational terms into legitimated ones by definition. Of course, the “problem” of vagueness would vanish, but along with it also any possibility to speak and to live. (Any apparent similarity to real persons, politicians or organizations such as the E.U. is indeed intended.)

Linguistics, and the cognitive sciences as well, will fail to provide any valuable contribution as long as they apply the basic condition of the positivist attitude: that subjects could be separated from each other in order to understand the whole. The whole here is the Lebensform working underneath, or beyond (Foucault’s field of proposals, Deleuze’s sediments), connected cognitions. It is almost ridiculous to try to explain anything regarding language under the assumption of identifiability and the applicability of logics.

Smith and Jones close their valuable contribution with the following statement, abandoning the naive realism-idealism that would be exhibited so eloquently by van Rooij and his co-workers nearly 20 years later:

On a second level, we questioned the theoretical framework - the founding assumptions - that underlie the attempt to define what “concepts really are.” We believe that the data on developing novel word interpretations - data showing the creative intelligence of dynamic cognition - seriously challenge the view of cognition as represented knowledge structures. These results suggest that perception always matters in a deep way: Perception always matters because cognition is always adaptive to the here-and-now, and perception is our only means of contact with the here-and-now reality.

There are a number of interesting corollaries here, which we will not pursue. For instance, it would be a categorical mistake to talk about noise in complex systems. Another consequence is that any engineering, linguistics or philosophy based on the a priori concept of identity is unable to make reasonable proposals about evolving and developing systems, quite in contrast to a philosophy that starts with difference (as a transcendental category; see Deleuze’s work, particularly [18]).

We now can understand that idealistic engineering imposes its adjudgements way too early. Consequently, idealistic engineering commits the naturalistic fallacy in the same way as much of linguistics commits it, at least insofar as the latter starts with the positivistic assumption of the possibility of positive assumptions such as identifiability. The conclusion for the engineering of machine-based episteme is quite obvious: we cannot start with identified or even identifiable items, and where it seems that we meet them, as in the case of words, we have to take their identifiability as a delusion or illusion. We could also say that the only feasible input for a machine that is supposed to “learn” is made from vague items for which there is only a probabilistic description. Even more radically, we can see that without fundamentally embracing vagueness no learning is possible at all. That is the real reason for the failure of “strong” or “symbolic” AI.

Conclusions for Machine-based Epistemology

We started with a close inspection and a critique of the concept of vagueness and ended up in a contribution to the theory of language. Once again we see that language is not just about words, symbols and grammar. There is much more in it and about it that we must understand to bring language into contact with (brain) matter.

Our results clearly indicate, against the mainstream in linguistics and large parts of (mainly analytic) philosophy, that words can’t be conceived as parts of predicates, i.e. clear propositions, and language can’t be used as a vehicle for the latter. This again justifies an initial probabilistic representation of those grouped graphemes (phonemes) as they can be taken from a text, and which we call “words.” Of course, the transition from a probabilistic representation to the illusion of propositions is not a trivial one. Yet, it is not words that we see in the text, it is just graphemes. We will investigate the role and nature of words at some later point in time (“Waves, Words, and Images,” forthcoming).
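What such an initial probabilistic representation could minimally look like can be sketched in a few lines. In the toy example below (the corpus and the context window are arbitrary illustrative assumptions, not a claim about the forthcoming treatment), a grouped grapheme sequence is described not as an identity but only as a distribution over the contexts in which it occurs:

```python
from collections import Counter

# A minimal probabilistic (rather than identity-based) description of
# "words": each grouped grapheme sequence is characterized only by a
# normalized distribution over its observed neighboring tokens.

corpus = "the cat sat on the mat the cat ate the rat".split()

def context_distribution(target, tokens, window=1):
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[tokens[j]] += 1
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

dist = context_distribution("cat", corpus)
print(dist["the"])  # 0.5 - "cat" is described by probabilities, not an identity
```

Nothing in this representation presupposes that “cat” is an identifiable item; it is merely a vague, probabilistic aggregate over occurrences, which is exactly the kind of input claimed above to be the only feasible one for a learning machine.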

Secondly, we discovered a novel property or constituent of words: a selection function (or a class thereof) which indicates the style of interpretation regarding the implied style of presumed measurement. We called it the processual indicative. Such a selection results in the invoking of either clear-cut relations and boundaries, or indeterminable ones. An implementation of the understanding of language necessarily has to implement such a property for all words. In all approaches known so far, this function is non-existent, leading to serious paradoxes and disabilities.

A quite nice corollary of these results is that words can never be taken as references. It is perhaps more appropriate to conceive of words as symbols for procedural packages, recipes and prescriptions on how to arrange certain groups of models. Taken thus, van Fraassen’s question of how words acquire reference is itself based on a drastic misunderstanding, deeply informed by positivism (remember that it was van Fraassen who invented that weird thing called supervaluationism). There is no such “reference.” Instead, we propose to conceive of words as units consisting of (visible) symbols and a “Lagrangean” differential part. This new conception of words remains completely compatible with Wittgenstein’s view of language as a communal practice; yet, it avoids some difficulties Wittgenstein struggled with throughout his life. The core of these may be found in PI §201, describing the paradox of rule following. For us, this paradox simply vanishes. Our model of words as symbolic carriers of “processual indicatives” also sheds light on what Charles S. Peirce called a “sign situation,” though Peirce was not able to elucidate the structure of “signs” any further. Our inferentialist scheme lucidly describes the role of the symbolic as a quasi-material anchor, from which we can proceed via models as targets of the “processual indicative” to meaning as a mutually ascribed resonance.

The introduction of the “processual indicative” also allows us to understand the phenomenon that, despite the vagueness of words and concepts, it is possible to achieve very precise descriptions. The precision, however, is just a “feeling,” as is the case for “vagueness,” dependent on a particular discursive situation. Larger sets of “social” rules that can be invoked to satisfy the “processual indicative” allow for more precise statements. If, however, these rules are indeterminate in themselves, quite often more or less funny situations may occur (or disastrous misunderstandings as well).

The main conclusion, finally, refers to the social aspect of a discourse. It is largely unknown how two “epistemic machines” would perceive, conceive of and act upon each other. Early experiments by Luc Steels involved mini robots that were far too primitive to draw any valuable conclusions for our endeavor. And Stanislav Lem’s short story “Personetics” [19] does not contain any hints about implementational issues… Thus, we first have to implement it…


1. One of Cantor’s paradoxes claims that a 2-dimensional space can be mapped entirely onto a 1-dimensional space without projection errors or overlaps. All of Cantor’s work is “absurd,” since it mixes two games that a priori have been separated: countability and non-countability. The dimensions paradox appears because Cantor conceives of real numbers as determinable, hence countable entities. However, by his own definition via the Cantor triangle, the real numbers are supra-countably infinite. Real numbers are not determinable, hence they can’t be “re-ordered,” or put along a 1-dimensional line. It’s a “silly” contradiction. We conclude that such paradoxes are pseudo-paradoxes.

2. The Banach-Tarski (BT) pseudo-paradox is of the same structure as Cantor’s dimensional pseudo-paradox. The surface of a sphere is broken apart into a finite number of “individual” pieces; yet, those pieces are not of determinate shape. Then BT prove that from the pieces of one sphere two spheres can be created. No surprise at all: the pieces are not of determinate shape; they are complicated: they are not usual solids but infinite scatterings of points. It is “silly” first to speak about pieces of a sphere, but then to dissolve those pieces into Cantor dust. Countability and uncountability collide. Thus there is no coherence, so the pieces can be anything. The BT paradox is even wrong: from such premises an infinite number of balls could be created from a single ball, not just a second one.

3. Dedekind derives natural numbers as actualizations from their abstract uncountable differentials, the real numbers.

4. Taylor’s paradox brings scales into conflict. A switch is toggled repeatedly after a decreasing period of time, such that the next period is just half the size of the current one. After n toggling events (n ≫ 1), what is the state of the switch? Mathematically, it is not defined (1 AND 0); statistically it is 1/2. Again, countability, which implies a physical act, ultimately limited by the speed of light, is contrasted with infinitely small quantities, i.e. uncountability. According to Gödel’s incompleteness, for any formal system it is possible to construct paradoxes by setting up “silly” games which do not obey the self-imposed a priori assumptions.
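The arithmetic of the toggling paradox is easy to make explicit (the cutoff of 50 toggles is an arbitrary illustrative choice; the structure is the same for any large n): the waiting periods form a geometric series whose total time converges, yet the sequence of switch states never settles.

```python
# Taylor's toggling paradox in numbers: the intervals between toggles
# sum to a finite total time, yet the state sequence has no limit.

total_time, interval, state = 0.0, 1.0, 0
states = []
for _ in range(50):        # 50 toggles, each after half the previous wait
    total_time += interval
    interval /= 2
    state = 1 - state
    states.append(state)

print(round(total_time, 6))              # 2.0: the process "finishes" in finite time
print(sum(states) / len(states))         # 0.5: statistically, the switch is "half on"
```

The collision described above is visible directly: the time scale converges (countable acts compressed into a finite interval), while the state scale keeps oscillating between 1 and 0 without ever being defined at the limit.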

This article was created on Dec 29th, 2011, and has been republished in a considerably revised form on March 23rd, 2012.


  • [1] Heather Burnett, The Puzzle(s) of Absolute Adjectives – On Vagueness, Comparison, and the Origin of Scale Structure. Denis Paperno (ed.), “UCLA Working Papers in Semantics,” 2011; version referred to is from 20.12.2011. available online.
  • [2] Brian Weatherson (2009). The Problem of the Many. Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. available online, last access 28.12.2011.
  • [3] Peter Unger (1980). The Problem of the Many.
  • [4] Bertrand Russell (1923). Vagueness. Australasian Journal of Psychology and Philosophy, 1(2), 84-92.
  • [5] Robert Brandom (1994). Making It Explicit.
  • [6] John Austin. Speech Act Theory.
  • [7] Colin Johnston (2009). Tractarian objects and logical categories. Synthese 167, 145-161.
  • [8] Cantor
  • [9] Banach
  • [10] Dedekind
  • [11] Taylor
  • [12] Peter Strawson (1959). Individuals: An Essay in Descriptive Metaphysics. Methuen, London.
  • [13] Linda B. Smith, Susan S. Jones (1993). Cognition Without Concepts. Cognitive Development, 8, 181-188. available here.
  • [14] Walter J. Freeman (1991). The physiology of perception. Scientific American, 264, 78-85.
  • [15] Michael Dummett (1975). Wang’s Paradox. Synthese 30, 301-324. available here.
  • [16] Kris De Jaegher, Robert van Rooij (2011). Strategic Vagueness, and Appropriate Contexts. Language, Games, and Evolution, Lecture Notes in Computer Science, Volume 6207/2011, 40-59. DOI: 10.1007/978-3-642-18006-4_3
  • [17] Robert van Rooij (2011). Vagueness, Tolerance and Non-transitive Entailment. In: Understanding Vagueness – Logical, Philosophical and Linguistic Perspectives, Petr Cintula, Christian Fermüller, Lluis Godo, Petr Hajek (eds.), College Publications.
  • [18] Gilles Deleuze, Difference and Repetition.
  • [19] Stanislaw Lem, Personetics. Reprinted in: Douglas Hofstadter, Daniel Dennett (eds.), The Mind’s I.

