Where is the Limit?

Everybody knows machines.

Today we even use general-purpose machines: we use them to control the production of almost anything, we outsource certain private cognitive capabilities to them, and we create new collective ones. For the first time in history, machines are acquiring attributes of privacy. Your laptop might be completely different from mine; its capabilities differ because it has had a different history. This is quite an unusual trait for a machine, at least if we refer to the classical concept of the machine from the age of mechanics. Our contemporary general-purpose machines certainly rank as a new cultural technique, much as writing down language as it was spoken did in ancient Greece around 900 B.C. These new data-processing machines have created a new stage for the play of previously unknown forces between automation and participation. An enormous number of research projects are running, all favoring automation at first hand, before the abundance and densification of the techniques may eventually lead to new forms of participation. Even Facebook is still more a tool to automate social relationships than a tool favoring participation, masking the far-reaching power of reductionist algorithms. The zone between these two opposing trends or forces differentiates into formerly unknown complexities. Looking back into the pre-classical world, and recalling the various revolutions that have happened since then concerning the informational life of human beings and human cultures, one important question arises:

What is the potential of “machines”?

As a first way to approach this topic (it is the central topic of this blog), we consider the question about the limits of language, minds, brains, and machines.

Nothing brand new, of course, to think and ask about limits. Not knowing about the limits, whether inherent, factual, transcendental, or arbitrary ones, would leave us to pure chance in dealing with the subject. Hence, famous answers have been given throughout the last 2,500 years, both by philosophy and by the sciences. Yet those four words in our subtitle (La.Bra.Ma.Mi.), probably the BIG FOUR of the keywords pointing to an emerging culture, do not denote simple things, so neither the words nor the related concepts are simple. Even worse, language becomes deceptive exactly at the moment we start trying to be exact. Is there something like “the” mind? Or “the” language? Supposedly not; the reasoning for that we will meet later. How, then, to ask about the limits of language, minds, and brains, if there is no other possibility than to use just that: language, mind(s), and brain(s)? And what if we consider limits toward “any” side, even those not yet determined? What are the lower limits for a language {brain, mind} such that we can still call it a language {mind, brain}? After a few seconds we are deep in philosophical territory. And yes, we suspect that reasonable answers to the introductory question can be found only together with philosophy. Any interest in the borders of the “machinic,” a generalization coined by Félix Guattari, is only partially a technical concern.

Another common understanding of limits takes them as the demarcation line between things we know and things we believe. In philosophical terms, it is the problem of foundations and truth: What can we know for sure? How can we know that something is true? We know quite well that this question (it is somehow the positive version of the perspective toward the limits) cannot be answered. A large area of disputes between various, even cross-bred sorts and flavors of idealism, realism, scientism, gnosticism, and pragmatism blossomed upon this question, and all ultimately failed. In some way, science is as ridiculous (or not) as believing in ghosts (or god(s)); but what do they share? Perhaps that they neglect each other, or that they both neglect a critical attitude toward language? Just consider that scientists not only speak of “natural laws”; they do not merely believe in them, they take them (and the very idea of them) as a fundamental and universally unquestionable truth. Somehow inversely symmetric to that, clergymen strive to proof (sic!) the existence (!) of God. Both styles are self-contradictory, simply and completely; both are trapped by their ignorant neglect of language. Literally, they forget that they are speaking.

The question about limits can and shall be put forward in a more fruitful manner; concerning the limits, it is of interest not only where the limits are, but what the limits are. This includes the representational issue about the facts constituting the limits, and it comprises the question of how to speak about limits as a concept. It may be fallacious to take limits only representationally, since the concept of a limit, like any other concept, imports certain conditions into the discourse. In the case of limits it is the assumption of clear identifiability. Yet just this identifiability is in question for any of the BIG FOUR. So, instead of asking about the limits, we probably had better ask for the conditions of our subject. This includes the condition of possibility as well as the potential for conditions. What are the conditions for speaking about logic, and for using it? Is it justified to speak about knowledge as “justified belief”? What are the conditions for using concepts (German: Begriff), notions, apprehensions? And, to bring in ‘the’ central question of empiricism (posed originally by de Saussure): How do words acquire meaning? (van Fraassen) [1] In the field of “Artificial Intelligence” and machine learning, the same issue is known as the “Symbol Grounding Problem.”
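The Symbol Grounding Problem can be made concrete with a toy sketch (Python; the mini-dictionary and its entries are, of course, invented for illustration): a symbol system that defines every symbol only by other symbols never bottoms out in anything but further symbols, no matter how deeply we unfold the definitions.

```python
# A toy "dictionary" in which every word is defined only by other words.
# Note the circularity: the definitions eventually close on themselves.
definitions = {
    "horse": ["animal", "large"],
    "animal": ["being", "living"],
    "being": ["entity"],
    "entity": ["being"],
    "large": ["big"],
    "big": ["large"],
    "living": ["being"],
}

def unfold(word, depth=3):
    """Expand a word into its defining symbols, `depth` levels deep."""
    if depth == 0 or word not in definitions:
        return word
    return {word: [unfold(w, depth - 1) for w in definitions[word]]}

# However deep we unfold, we only ever reach more symbols, never a referent.
print(unfold("horse"))
```

The point of the sketch is purely negative: inside the system there is no step at which a symbol touches anything outside the system, which is exactly what the grounding problem names.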

The fourth of our items, the machines, seems to stand a bit apart from the others, at first sight. Yet the relationship between man and machines has been a philosophical and a cultural issue since the beginnings of written language. The age of enlightenment was full of machine phantasms; Descartes and Leibniz built philosophies around it. Today, we call computers “information-processing machines,” which is a quite insightful and not quite innocent misnomer, as we will see: information cannot be processed, in principle. Computers re-arrange symbols of a fixed code, which is used to encode information, and this mapping is never going to be complete in any sense. Not surprisingly, this fact is deliberately overlooked by many (if not most) computer scientists.
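The claim that a computer merely re-arranges symbols of a fixed code, while “information” arises only through interpretation, can be illustrated in a few lines (Python, with an arbitrarily chosen byte string): the very same symbols yield entirely different readings depending on the decoding scheme an observer imposes on them.

```python
import struct

# The machine holds nothing but a fixed arrangement of symbols (bytes).
data = b"Hi!!"

# "Information" only appears once an interpreter imposes an encoding:
as_text = data.decode("ascii")         # the four-character string "Hi!!"
as_int = struct.unpack(">I", data)[0]  # one 32-bit unsigned integer
as_pair = struct.unpack(">HH", data)   # two 16-bit unsigned integers

print(as_text)   # Hi!!
print(as_int)    # 1214849313
print(as_pair)   # (18537, 8481)
```

None of these readings is contained in the bytes themselves; each is an act of interpretation performed outside the symbol arrangement.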

Yet what are the limits of those machines? Are those limits given by what we call a programming language (another misnomer)? How should we program a computer such that it becomes able to think, to understand language, to understand? Many approaches have been explored by computer engineers, and sometimes these approaches have been extremely silly in a twofold way: as such, and given the fact that they repeat rather primitive thoughts from the medieval age. We will meet them throughout our notes.

The goal of making the machine intelligent cannot, in principle, be accomplished by programming. One of the more obvious reasons is that there is nothing like a meta-intelligence, no meta-knowledge, just as there is no private language (it was Wittgenstein who proved that). Were it possible to create intelligence just by programming a computer in the right way, the programmer would have to implement a controlling instance, in other words, a meta-intelligence. Were this computer program then intelligent on its own, it would need a private language, which is impossible. Hilary Putnam devoted a whole book [2] to this problem, and we agree with his final conclusion. A prominent victim of the wrong analogy between mind and software is Stevan Harnad [3], who claimed that “mental states are just implementations of (the right) computer programs.” Searle [4,5] argued against this view, which is known by the label of “computationalism,” and we agree with the components of his arguments, notably his homunculus argument; yet he took far too drastic measures in denying any possibility to start with software at all.

In contrast to Searle, I think that rejecting the structural isomorphy of software and mind-inducing brains does not mean that we cannot start with a software program. After all, our biological body also starts (presumably) with mindless matter, and our frontal cortex is largely dysfunctional at the time of birth. However, this rejection does indeed mean that standard software programs (actualizing algorithms) are by far not sufficient to create something like a mind-inducing entity, even if we chose the theoretically most suitable one. In addition to software, or the matter of the brain, something else is needed. It is Putnam’s merit to have clarified this once and for all; its more important consequences concern not only the theory of meaning, as we will see in the chapter about meaning. This result, though preliminary, also allows us to avoid doing the wrong things as software engineers, i.e. we will not try to implement impossible features; instead we can focus on the appropriate capabilities.

Just to proceed a few more steps here: if an entity reacts in a predetermined way, and this determinacy is a priori to its existence, say, through programming, then that entity is neither intelligent, nor does it understand anything; it performs look-ups in a database, nothing else (see Hofstadter’s analogous critique in [6]). The entity would not be able to react to previously unknown situations: “everything,” i.e. the whole spectrum of reactions, had been implemented before. It may appear rich, but nevertheless it is completely determined a priori. It cannot decide to act against its programming.
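A minimal sketch of such a look-up entity (Python, with invented table entries): every reaction is fixed before the entity ever “encounters” a situation, and anything outside the table simply falls through to a canned default.

```python
# All "reactions" are fixed a priori by the programmer.
reactions = {
    "hello": "Hello! How can I help?",
    "what is a triangle?": "A triangle is a polygon with three edges.",
    "bye": "Goodbye!",
}

def respond(utterance):
    # No understanding takes place here: just a table look-up,
    # with a fixed fallback for everything not anticipated.
    return reactions.get(utterance.lower().strip(), "I do not understand.")

print(respond("Hello"))          # a pre-stored phrase, nothing more
print(respond("Why is that?"))   # any unforeseen input hits the default
```

However large such a table grows, the spectrum of reactions remains exactly the set implemented beforehand, which is the point of the paragraph above.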

If, for example, the programmer programs the computer to perform a pattern-matching process aimed at triangles, the computer neither recognizes nor understands anything about triangles. It remains an operation that is not even dull or stupid; it is outside of any scale on which stupidity (and intelligence) could be placed. What appears here is the issue of the self. Probably, and there are good reasons to think so, it is necessary to “have a self” in order to understand anything.
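What such a pattern-matching process amounts to can be shown in a deliberately crude sketch (Python; the matching criterion is invented for illustration): the program classifies a figure as a “triangle” by a purely formal criterion, without any concept of a triangle at all.

```python
def looks_like_triangle(vertices):
    # A figure is "matched" as a triangle if it has exactly three
    # pairwise distinct corner points -- a formal criterion, nothing else.
    return len(set(vertices)) == 3

print(looks_like_triangle([(0, 0), (4, 0), (2, 3)]))   # True
print(looks_like_triangle([(0, 0), (1, 1), (2, 2)]))   # True, although
# these three points are collinear and no triangle exists: the "match"
# succeeds because the program has no notion of what a triangle is.
```

The second call is the instructive one: the pattern matcher happily fires on a degenerate figure, illustrating that the operation sits outside any scale of understanding.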

The conclusion seems to be that we have to program the indeterminate, which is obviously a contradictio in adjecto. Where then does the intelligence, the potential for understanding, come from? This question applies not only to machines made from plastics, silicon, and electricity, but also to those tricky (better: complex) arrangements of mainly protein, fat, sugar, salt, and water (chemicals) and electricity (a physical phenomenon) that we call a body, and that, in the case of humans, obviously hosts a phenomenon we call intelligence, or even consciousness. As engineers we nevertheless (have to) start with some kind of programming, while in the case of biological bodies we presumably start from “pure” matter. So what is it that makes the machine intelligent, be it the man-made or the biological body? How to program the unprogrammable? Or, more fruitfully, which kinds of conditions should we meet in the program such that we do not exclude the emergence of intelligence? The only thing that seems to be sure here is that neither is the brain a computer, nor is the computer a kind of brain. So what we strive to do will not be simulation either. It has more to do with a careful blend of rather abstract principles that are hosted in a flowing stream of information, actualized by (software) structures that inherently tend toward indeterminacy. It is (not quite) another question whether such software could still be called software, in the sense of what we usually call software.

Up to now we have frequently used the notion of “intelligence.” Intelligence is often defined as the capability to deal flexibly with a previously unknown problem or situation. This definition does not suit the debate about mind; it just introduces a further, even more inappropriate term. What should “flexibility” mean here? We cannot derive any meaningful question from it. Thus, we admit, we used the term “intelligence” only as a kind of abbreviation. It is not a fruitful term in the debate about mind, since it is a concept that not only requires an empirical definition but is itself a completely empirical term. Maybe it is a concept that allows the empirical comparison of observations, but it is not a concept that allows us to think about that empirical comparison. As a consequence, it presupposes a lot of assumptions that cannot be addressed once the term intelligence has been introduced. The term intelligence, much like consciousness, yet for other reasons, prevents awareness of the problematics of being an empirical being in a material world. Fortunately, philosophy has a well-established label for the respective group of topics: epistemology. Hence, when it comes to ‘intelligent’ machines, we prefer to speak about “machine-based epistemology.”

Machine-based epistemology seems to be related to the philosophy of artificial intelligence. This area circumscribes itself by questions such as

  • Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
  • Can a machine have a mind, mental states, and consciousness in the same sense humans do? Can it feel?
  • Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?

As we can easily see, the whole domain is deeply flawed. It starts with the label “Artificial Intelligence,” which is simply nonsense by itself, deeply informed by positivism and so a relic delivered by an epoch full of control phantasms. It presupposes that “intelligence” can be created artificially. It is by no means clear what “artificial” should mean here; it cannot be opposed a priori to something more “natural,” not only because this would refer to an overly romantic concept of nature. It is (very) clear that the mind of any contemporary human individual is not merely the product of something somehow natural. Yet it also does not feel right to rate culture as nature, does it? And if we chose that alternative, the “artificial” would be empty. So this label of “artificial intelligence” is simply a mess, at least if one is seriously interested in the subject of mind and its relation to machines (in the field of machine learning one can find more nonsense like this).

The term “intelligence” we have already abolished. Next, talking about “mental states” falls into the same vein, a term you can also find frequently in the philosophical debate about the (human) mind. If we assume that humans have mental states, then in asking whether machines also “can have mental states,” we already agree with the assumption that the phenomenon of the “mental” can be empirically determined and identified as such-and-such. This is a fallacy at least. I prefer to rate it as pure nonsense, again poisoned by positivism, or alternatively vexed by language itself. If we could identify such states, we would prove that the mind is a finite state automaton. Assigning a “state” to something requires modeling, interpretation, and formalization. Yet who should model, interpret, or formalize in the mind to create a state? All I would like to say here is that the often-cited “state” is nothing real that we could find in “nature” by some naive and somehow direct observation, nothing we could rebuild or reconstruct by copying some mechanism. The notion of “mental state” is not an empirical one; it is just positivist metaphysics, if you like. In short, talking about “mental states” is nonsense, and not just superficially related to the category of “artificial intelligence.” We discuss the issues about “mental states” in more detail in another chapter.

As a consequence (and we will deepen this point in a later chapter), we can also conclude that the mind-body problem is not a real problem; it is just a fallacy. In turn, we will not claim that we are interested in “artificial intelligence,” or in some kind of philosophy thereof either. We guess that the whole field will not make any substantial progress as long as the (final) results of Putnam, the refutation of any sort of functionalism, and those of his predecessor Wittgenstein, mainly his realist solipsism and the externalization of meaning, have not been assimilated in order to deal with the topic of machine-based epistemology. Just that is what we will try to accomplish here. Yet we do not consider it to be a particularly philosophical sub-domain.

Hilary Putnam wrote a lot about the possibility of explaining minds, language, or meaning in functionalist terms. Functionalist terms are positive definite. The basic functionalist assumptions derive from positivism and result in the claim that the subject of interest can be analyzed. After a long intellectual journey and a wealth of thorough investigations, inventing really very strong arguments, Putnam eventually came to the conclusion that there cannot be any variant of functionalism capable of “explaining” mindfulness, language, or meaning. Given that, it could very well be that a computer cannot be programmed, even in a non-functionalist way, to achieve a “thinking machine,” i.e. in a way such that it is no longer analyzable after it has been started. A memory dump would no longer tell us anything, quite similar to the role of the physiology of a particular cluster of neurons in the brain with respect to what we call a thought.

Putnam investigated the modes of speaking and thinking (and conceptualizing) about the human mind and human language. His denial of functionalism, and thus of naive machinism as a proper approach, links his work to our interests here. We will try to address the border described by Putnam from the other, the technical, side. And probably we will also attempt to overcome it, at least in one direction, since Putnam’s insights will remain valid even if we succeed.

  • [1] Bas van Fraassen, “Putnam’s Paradox: Metaphysical Realism Revamped and Evaded.” Philosophical Perspectives, vol. 11. Blackwell, Boston 1997, pp. 17-42.
  • [2] Hilary Putnam, Representation and Reality. MIT Press, Cambridge MA 1988.
  • [3] Stevan Harnad (2001), “What’s Wrong and Right About Searle’s Chinese Room Argument?”, in M. Bishop and J. Preston (eds.), Essays on Searle’s Chinese Room Argument. Oxford University Press.
  • [4] John R. Searle (1990), “Is the Brain’s Mind a Computer Program?” Scientific American, pp. 26-31.
  • [5] John R. Searle (1989), “Is the Brain a Digital Computer?” Available online.
  • [6] Douglas Hofstadter, Fluid Concepts and Creative Analogies. 1995.

