Where is the Limit?

October 20, 2011

What is the potential of “machines”?

As a first way to approach this topic—it is the central topic of this blog—we consider the question of the limits of language, of minds, of brains, and of machines.

Nothing brand new, of course, to think about limits. Famous answers have been given throughout the last 2,500 years. Yet those four words denote not just things, and neither the words nor the related concepts are simple. Even worse, language becomes deceptive exactly at the moment when we start trying to be exact. Is there something like “the” mind? Or “the” language? Supposedly not. The reasoning for that we will meet later. How, then, to ask about the limits of language, minds, and brains, if there is no other possibility than to use just that: language, mind(s), and brain(s)? And what if we consider limits to “any” side, even those not yet determined? What are the lower limits for a language {brain, mind} such that we still can call it a language {brain, mind}? After a few seconds we are deep on philosophical territory. And yes, we guess that reasonable answers to the introductory question can be found only together with philosophy. Any interest in the borders of the “machinic,” a generalization coined by Félix Guattari, is only partially a technical concern.

Another common understanding of limits takes them as the demarcation line between things we know and things we believe. In philosophical terms, it is the problem of foundations and truth: What can we know for sure? How can we know that something is true? We know quite well that this question—it is somehow the positive version of the perspective towards the limits—cannot be answered. A large area of disputes between various and even cross-bred sorts and flavors of idealism, realism, scientism, gnosticism, and pragmatism blossomed upon this question, and all ultimately failed. In some way, science is as ridiculous (or not) as believing in ghosts (or god(s)), but what do they share? Perhaps that they neglect each other, or that they both neglect a critical attitude towards language? Just consider that scientists not only speak of “natural laws”; they do not merely believe in them, they take them (and the very idea of them) as a fundamental and universally unquestionable truth. Or clergymen strive to proof (sic!) the existence (!) of God. Both styles are self-contradictory, simply and completely.

The question about limits can be put forward in a more fruitful manner. Instead of asking about the limits of something, we should rather ask for its conditions. This includes the condition of its possibility as well as the potential for conditions. What are the conditions to speak about logic, and to use it? Is it justified to speak about “justified beliefs”? What are the conditions to use concepts (Germ.: Begriff), notions, apprehensions? And, to bring in “the” central question of empiricism: How do words acquire meaning? (van Fraassen)

The fourth of our items, the machines, seems to stand a bit apart from the others, at first sight. Yet the relationship between man and machines has been a philosophical and a cultural issue since the beginnings of written language. The age of enlightenment was full of machine phantasms; Descartes and Leibniz built philosophies around them. Today, we call computers “information processing machines”, which is a quite insightful and not quite innocent misnomer, as we will see: information cannot be processed, by principle. Computers re-arrange symbols of a fixed code, which is used to encode information. This mapping is not complete, of course, a fact deliberately overlooked by many computer scientists.

Yet, what are the limits of those machines? Are those limits given by what we call a programming language (another misnomer)? How should we program a computer such that it becomes able to think, to understand language, to understand? Many approaches have been explored by computer engineers, and sometimes these approaches have been extremely silly in a twofold way: as such, and given the fact that they repeat rather primitive thoughts from the medieval age.

The goal of making the machine intelligent cannot, in principle, be accomplished by programming. One of the more obvious reasons is that there is nothing like a meta-intelligence and no meta-knowledge, just as there is no private language (it was Wittgenstein who proved that). If an entity reacts in a predetermined way, it is not intelligent, nor does it understand anything—it performs look-ups in a database, nothing else. The conclusion seems to be that we have to program the indeterminate, which is obviously a contradictio in adjecto. Where, then, does the intelligence, the potential for understanding, come from? Nevertheless, we start with some kind of programming. But what is it that makes the machine intelligent? How to program the unprogrammable? Or, more fruitfully: which kind of conditions should we meet in the program such that we do not exclude the emergence of intelligence? The only thing that seems to be sure here is that neither is the brain a computer, nor is the computer a kind of brain. So, what we strive to do won’t be simulation either. It has more to do with a careful blend of rather abstract principles hosted in a flowing stream of information, actualized by (software) structures that inherently tend towards indeterminableness. It is (not quite) another question whether such software could still be called software.
