Theory (of Theory)

February 13, 2012

Thought is always abstract thought, so thought is always opposed to work involving hands. Isn’t it? It is generally agreed that there are things like theory and practice, which are believed to belong to different realms. Well, we think that this perspective is inappropriate and misleading. Deeply linked to this first problem is a second one, the distinction between model and theory. Indeed, there are ongoing discussions in current philosophy of science about those concepts.

Frequently one meets the claim that theories are about predictions. This is indeed the received view. In this essay we try to reject precisely this received view. As an alternative, we offer a Wittgensteinian perspective on the concept of theory, with some Deleuzean, decidedly post-Kantian influences. This perspective we could call a theory about theory. It will turn out that this perspective is not only radically different from the received view, it also provides some important, otherwise unachievable benefits, concerning (in still rather imprecise wording) both “practical” and philosophical aspects. But let us first start with some examples.

Even before that, let me state clearly that there is much more to theory than can be mentioned in a single essay. Actually, this essay is based on a draft for a book on the theory of theory that comprises some 500 pages…

The motivation to think about theory derives from several hot spots. Firstly, it is directly and intrinsically implied by the main focus of the first “part” of this blog on the issue of (the possibility for a) machine-based episteme. We as humans can only know because we can willingly take part in a game that could be appropriately described as mutual and conscious theorizing-modeling induction. If machines should ever develop the capability for their own episteme, for their autonomous capability to know, they necessarily have to be able to build theories.

A second strain of motivation comes from the field of complexity. There are countless publications stating that it is not possible to derive a consistent notion of complexity, ranging from Niklas Luhmann [1986] to Hermann Haken [2012] (see []), leading either to a rejection of the idea that it is a generally applicable concept, or to an empty generalization, or to a reduction. Obviously, people are stating that there is no possibility for a theory about complexity. On the other hand, complexity is more and more accepted as a serious explanatory scheme across disciplines, from material science to biology, sociology and urbanism. Complexity is also increasingly a topic in the field of machine-based episteme, e.g. through the concept of self-organizing maps (SOM). This divergence needs to be clarified, and of course to be dissolved.

The third thread of motivation is given by another field where theory has usually been regarded as something exotic: urbanism and architecture. Is talking about architecture, e.g. its history, without actually using this talking in the immediate context of organizing and raising a building already “theory”? Are we allowed to talk in this way at all, thereby splitting talking and doing? Another issue in these fields is the strange subject of planning. Plans are neither models nor theory, nor operation, and planning often fails, not only in architecture, but also in the IT industry. In order to understand the status of plans, we first have to get clear about the abundant parlance that distinguishes “theory” and “practice”.

Quite obviously, a proper theory of theory in general, that is, not just a theory about a particular theory, is also highly relevant for what is known as the theory of theory change, or, in terms often used in the field of Artificial Intelligence, belief revision. If we do not have a proper theory about theory at our disposal, we also will not be able to talk reasonably about what it could mean to change a belief. Actually, the topic of beliefs is so relevant that we will discuss it in a dedicated essay. For the time being, we just want to point out the relevance of our considerations here. Later, we will include a further short remark about it.

For these reasons it is vital in our opinion (and for us) to understand the concept of theory better than it is possible on the basis of current mainstream thinking on the subject.

Examples

In line with that mainstream attitude it has been said, for instance, that Einstein’s theory predicted—or: Einstein predicted from his theory—the phenomenon of gravitational lenses for light. In Einstein’s universe, there is no absoluteness regarding the straightness of a line, because space itself has a curvature that is parametrized. Another example is the so-called Standard Model, or Standard Interpretation, in particle physics. Physicists claim that this model is a theory and that it is the best available theory for making correct predictions about the behavior of matter. The core of this theory is given by the relation between two elements, the field and its respective mediating particle, a view that is a descendant of Einstein’s famous equation relating energy, mass and the speed of light. Yet, the field theory leads to the problem of infinite regress, which they hope to solve in the LHC “experiments” currently performed at CERN in Geneva. The ultimate particle that should also “explain” gravity is called the Higgs boson. The general structure of the Standard Model, however, is a limit process: the resting mass of the particles is thought to become larger and larger; thus, the Higgs boson is the last possible particle, leaving gravitation and the graviton still unexplained. There is also a pretty arrangement of the basic types of elementary particles that is reminiscent of the periodic table in chemistry. Anyway, by means of that Standard Model it is possible to build computers, or at least logical circuits, where a bit is represented by just some 20 electrons. Likewise, Einstein’s theory has a direct application in GPS, where a highly accurate common time base shared between the satellites is essential.

Despite these successes there are still large deficits in the theory. Physicists say that they have not detected gravitational waves so far, though these are said to be predicted by their theory. Indeed, physics does not even offer any insight about the genesis of electric charges and magnetism. These are treated as phenomena, leaving a strange gap between the theory and the macroscopic observations (note that the Standard Model does NOT allow decoherence into a field, but rather only into particles). Moreover, physicists do not have even the slightest clue about some mysterious entities in the universe that they call “dark matter” and “dark energy”, except that they exert positive or negative gravitational force. I personally tend to rate this as one of the largest (bad) jokes of science ever: building and running the LHC (around 12 billion $ so far) on the one hand, and at the same time seriously taking the road back into mythic medieval language. We meet dark ages again in physics, not only dark matter and dark energy.

Traveling Dark Matter in a particular context, reflecting and inducing new theories: The case of Malevich and his holy blackness.1

Anyway, that’s not our main topic here. I cited these examples just to highlight the common usage of the concept of theory, according to which a theory is a more or less mindful collection of proposals that can be used to make predictions about worldly facts.

To be Different, or not to be Different…

But what, then, is the difference between theories and models? The concept of model is itself an astonishing phenomenon. Today it is almost ubiquitous. We can hardly imagine anymore that, not so long ago, back in the 19th century, the concept of model was used mainly by architects. Presumably, it was the progress made in physics at the beginning of the 20th century, together with the foundational crisis in mathematics, that initiated the career of the concept of model (for an overview in German see this collection of pages and references).

One of the usages of the concept of model refers to the “direct” derivation of predictions from empirical observations. We can take some observations about a process D, e.g. an illness of the human body, where we know the outcome (cured or not), and then we could try to build an “empiric” model that links the observations to the outcome. Observations can include the treatment(s), of course. It is clear that predictions and diagnoses are almost synonyms.
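To make this parlance concrete, here is a minimal sketch of such an “empiric” model, under invented assumptions: the feature names, the synthetic data, and the choice of logistic regression are purely illustrative and not part of the argument above.

```python
# A minimal sketch of an "empiric" model: observations (including the
# treatment) are linked directly to a known outcome (cured / not cured).
# All data and feature names here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical observations: [body_temperature, blood_marker, treatment_dose]
X = rng.normal(loc=[38.5, 1.2, 5.0], scale=[0.8, 0.4, 2.0], size=(200, 3))

# Hypothetical outcome: higher dose and lower marker tend towards "cured" (1).
y = ((X[:, 2] - 2.0 * X[:, 1] + rng.normal(0, 1, 200)) > 2.0).astype(int)

model = LogisticRegression().fit(X, y)

# A "diagnosis" is then nothing but a prediction from new observations.
new_patient = np.array([[39.1, 1.5, 6.0]])
print(model.predict(new_patient), model.predict_proba(new_patient))
```

Note that nothing in this sketch states *why* these features should be measured at all, or what would count as an admissible observation; that is exactly the part the following paragraphs assign to theory.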

Where is the theory here? Many claim that there is no theory in modeling in general, and particularly that no theory is possible in the case of medicine and pharmacology. Statistical techniques are usually regarded as some kind of method. Since there is no useful generalization, it is believed that a “theory” would not be different from stating that the subject is alive. It is claimed that we are always directly faced with the full complexity of living organisms, thus we have to reduce our perspective. But stop: shouldn’t we take the notion of complexity here already as a theory?

For Darwin’s theory of natural selection it is also not easy to draw a separating line between the concepts of model and theory. Darwin indeed argued on a quite abstract level, which led to the situation that people think his theory cannot be readily tested. Some people thus feel inclined to refer to the great designer, or likewise to the Spaghetti monster. Others, notably often physicists, chemists or mathematicians, tried to turn Darwin’s theory into a system that actually could be tested. For the time being we leave this as an open issue, but we will return to it later.

Today it is generally acknowledged that measurement always implies a theory. From that we can directly conclude that the same should hold for modeling. Modeling implies a theory, as measurement implies a particular model. In the latter case the model is often actualized by the materiality or the material arrangement of the measurement device. Both the material aspects and the immaterial design aspects, which mainly concern informational filtering, establish at least implicitly a particular normativity, a set of normative rules that we can call “model.” This normativity of models (and of theories alike) is quite important; we should keep it in mind.

In the former relation, the implication of theories by modeling, we may expect a similar dependency. Yet, as long as we do not clearly distinguish models and theories, theories would simply be some kind of more general models. If we do not discern them, we would not need both. Actually, precisely this is the current state of affairs, at least in the mainstreams across various disciplines.

Reframing. Into the Practice of Languagability.

It is one of the stances inherited from materialism to pose questions about a particular subject in an existential, or if you like, ontological, manner. Existential questions take the form “What is X?”, where the “is” already claims the possibility of an analytical treatment, implied by the sign for equality. In turn this equality, provoked by the existential parlance, claims to be a lossless representation. We are convinced that this approach destroys any chance for sustainable insights already in the first move. This holds even for the concepts of “model” or “theory” themselves. Nevertheless, the questions “What is a model?” or “What is a theory?” can frequently be met (e.g. [1], p.278).

The deeper reason for the referred difficulties is that the existential question implies the primacy of the identity relation. Yet, the only possible identity relation is a=a, the tautology, which of course is empirically empty. Though we can write a=b, this is not an identity relation any more. Either it is a claim, or it is based on empiric arguments, which means it is always a claim. In any case, one has to give further criteria upon which the identity a=b appears justified. The selection of those criteria lies far outside the relation itself. It invokes the totality of the respective life form. The only conclusion we can draw from this is that the identity relation is transcendent. Despite its necessity it cannot be part of the empirical world. The same hence holds for logic.

Claiming the identity relation for empirical facts, i.e. for any kind of experience and hence also for any thought, is self-contradictory. It implies a normativity that remains deliberately hidden. We all know about the late and always disastrous consequences of materialism on the societal level, irrespective of choosing the Marxist or the capitalist flavor.

There are probably only two ways of rejecting materialism and thus also of avoiding its implications. Both of them reject the primacy of the identity relation, yet in slightly different ways. The first one is Deleuze’s transcendental difference, which he developed in his philosophy of the differential (e.g. in Difference & Repetition, or his book about the Fold and Leibniz). The second one is Wittgenstein’s proposal to take logic as a consequence of performance, or more precisely, as an applicable quasi-logic, and to conceive of logic as a transcendental entity. Both ways are closely related, though developed independently of each other. Of course, there are common traits shared by Deleuze and Wittgenstein, such as rejecting what was known as “academic philosophy” in their time, when all of philosophy had been positioned just as “footnotes to Plato”, Kant or Hegel.

In our reframing of the concept of theory we have been inspired by both Deleuze and Wittgenstein, yet in the following we follow the Wittgensteinian track more explicitly.

Actually, the move is quite simple. We just have to drop the assumption that entities “exist” independently. Even if we erode that idealistic independence only slightly, we are ultimately forced to acknowledge that everything we can say, know or do is mediated by language, or more generally by the conditions that imply the capability for language, in short by languagability.

In contrast to so-called “natural languages”—which actually is a revealing term—languagability is not a dualistic, bivalent off-or-on concept. It is applicable to any performing entity, including animals and machines. Hence, languagability is not only the core concept for the foundation of the investigation of the possibility of machine-based episteme. It is essential for any theory.

Following this track, we stop asking ontological questions. We even drop ontology as a whole. Questions like “What is a Theory?”, “What is Language?” etc. are almost free of any possible sense. Instead, it appears much more reasonable to accept the primacy of languagability and to ask about the language game in which a particular concept plays a certain role. The question that promises progress therefore is:

What can we say about the concept of theory as a language game?

To our knowledge, the “linguistic turn” has not been performed in the philosophy of science so far, let alone in disciplines like computer science or architecture. The consequence is a considerable mess in the respective disciplines.

Theory as a Language Game

One of the first implications of the turn towards the primacy of languagability is the vanishing of the dualism between theory and practice. Any practice requires rules, which in turn can only be referred to in the space of languagability. Of course, there is more than the rule in rule-following. Speech acts were first stratified by Austin [2] into locutionary, illocutionary and perlocutionary parts. There might be even further ones, implying evolutionary issues or the play as story-telling. (Later we will call these aspects “delocutionary”.) On the other hand, it is also true that one cannot pretend to follow a rule, as Wittgenstein recognized [3].

It is interesting in this respect that the dualistic, opposing contrast between theory and practice has not been the classical view; not just by chance it appeared as late as in the early 17th century [4]. Originally, theory just meant “to look at, to speculate”, a pairing that is interesting in itself.

Ultimately, rules are embedded in the totality of a life form (“Lebensform” in the Wittgensteinian, non-phenomenological sense), including the complete “system” of norms in charge at a given moment. Yet, most rules are themselves regulated by more abstract ones that set the conditions for the less abstract ones. The result is of course not a perfect hierarchy; the collection of rules active in a Lebensform is not an analytic endeavor. We already mentioned this layered system in another chapter (about “comparing”) and labeled it “orthoregulation” there. Rules are orthoregulated; without orthoregulation rules would not be rules.

This rooting of rules in the Forms of Life (Wittgenstein), the communal aspect (Putnam), the Field of Proposals (“Aussagefeld”, Foucault) or the Plane of Immanence provoked by attempting to think consistently (Deleuze), which are just different labels for closely related aspects, prevents the ultimate justification, the justifiable idea, and the presence of logical truth values or truth functions in actual life.

It is now important to recognize and to keep in mind that rules about rules do not refer to any empiric entity that could be found as a material or informational fact! Rules about rules refer to the regulated rules only. Of course, usually even the meta-rules are embedded into the larger context of valuation: the whole system should work somehow, that is, the whole system should allow the creation of predictive models. Here we find the link to risk (avoidance) and security.

Taking an empiricist or pragmatic stance also for the “meta”-rules that are part of the orthoregulative layer, we could well say that the empiric basis of the ortho-rules is other, less abstract and less general rules.

Now we can apply the principle of orthoregulation to the subject of theory. Several implications are immediately and clearly visible, namely and most importantly that

  • – theories are not about the prediction of empirical, “non-normative” phenomena; the subject of Popper’s falsificationism is the model, not the theory;
  • – theories cannot be formalized, because they are at least partially normative;
  • – facts can’t be “explained” as far as “explanations” are conceived to be non-normative entities.

It is clear that the standard account of the status of scientific theories is not compatible with this (which actually is a compliment). Mathias Frisch [5] briefly discusses some of the issues. Particularly, he dismisses the stance that

“the content of a theory is exhausted by its mathematical formalism and a mapping function defining the class of its models.” (p.7)

This approach is also shared by the influential Bas van Fraassen, especially in his 1980 book [6]. In contrast to this claim, we definitely reject that there is any necessary consistency between models and the theory from which they have been derived, or among the family of models that could be associated with a theory. Life forms (Lebensformen) cannot and should not be evaluated by means of “consistency”, unless you are a social designer who, for instance, has been inventing a variant of idealism, practicing in and on Syracuse… The rejection of a formal relationship between theories and models includes the rejection of the set-theoretic perspective on models. Since theories are normative, they can’t be formalized, and it is close to a scandal to claim ([6], p.43) that

Any structure which satisfies the axioms of a theory…is called a model of that theory.

The problem here being mainly the claim that theories consist of or contain axioms. Norms never have been and never will be “axiomatic.”

There is a theory about belief revision that has been quite influential for the discipline or field that is called “Artificial Intelligence” (we dismiss this term/name, since it is either empty or misleading). This theory is known under the label AGM theory, where the acronym derives from the initials of its three proponents Alchourrón, Gärdenfors, and Makinson [7]. The history of its adoption by computer scientists is a story in itself [8]; what we can take from it here is that computer scientists believe the AGM theory to be relevant for the update of so-called knowledge bases.

Despite its popularity, the AGM theory is seriously flawed, as Neil Tennant has pointed out [9] (we will criticize his results in another essay about beliefs (scheduled)). A nasty discussion, mainly characterized by mutual accusations, ensued (see [10] as an example), which is typical for deficient theories.

Within AGM, and similar to van Fraassen’s account of the topic, a theory is equal to a set of beliefs, which in turn is conceived as a logically closed set of sentences. There are several mistakes here. First, they are applying truth-functional logic as a foundation. This is not possible, as we have seen elsewhere. Second, a belief is not a belief any more as soon as we conceive it as a proposition, i.e. a statement within logic, i.e. under logical closure. It would be a claim, not a belief. Yet, claims belong to a different kind of game. If one wanted to express the fact that we can’t know anything precisely, e.g. due to the primacy of interpretation, we could simply take the notion of risk, which is part of a general concept of model. A further defect in AGM theory, and in any similar approach that tries to formalize the notion of theory completely, is that they conflate propositional content with the form of the proposition. Robert Brandom demonstrates in an extremely thorough way why this is a mistake, and why we are forced to the view that propositional content “exists” only as a mutual assignment between entities that talk to each other (chapter 9.3.4 in [11]). The main underlying reason for this is the primacy of interpretation.
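For contrast, here is a deliberately degenerate toy sketch of the kind of formalization the AGM approach envisions, reduced to atomic sentences and their negations, so that “logical closure” shrinks to keeping literals consistent. The Levi identity used for revision is standard AGM; everything else—the literal representation, the trivial contraction, the example sentences—is our simplification. Note how its very neatness illustrates the criticism above: what the code manipulates are claims under closure, not beliefs.

```python
# Toy AGM-style belief revision over literals ("p" and "~p").
# Purely illustrative: full AGM defines contraction as a partial meet
# over maximal non-implying subsets of a logically closed belief set.

def neg(lit: str) -> str:
    """Negate a literal: 'p' <-> '~p'."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def expand(K: frozenset, phi: str) -> frozenset:
    """Expansion K + phi: just add the sentence."""
    return K | {phi}

def contract(K: frozenset, phi: str) -> frozenset:
    """Contraction K - phi: give up phi (trivial in this literal toy)."""
    return K - {phi}

def revise(K: frozenset, phi: str) -> frozenset:
    """Revision via the Levi identity: K * phi = (K - ~phi) + phi."""
    return expand(contract(K, neg(phi)), phi)

K = frozenset({"rain", "~sprinkler"})
print(sorted(revise(K, "sprinkler")))   # ['rain', 'sprinkler']
```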

In turn we can conclude that the AGM theory, as well as any attempt to formalize theory, can be conceived as a viable theory only if the primacy of interpretation is inadequate. Yet, this creates the problem of how we are tied to the world. The only alternative would be to claim that this is going on somehow “directly”. Of course, such claims are either 100% nonsense or 100% dictatorship.

Regarding the application of the faulty AGM theory to computer science we find another problem: knowledge can’t be saved to a hard disk, just as little as information can. Only a strongly reductionist perspective, which is almost a caricature of what could be called knowledge, allows one to take that route.

We already argued elsewhere that a model can contain neither the conditions of its applicability nor those of its actual application. The same of course applies to theories. As a direct consequence we have to investigate the role of conditions (we do this in another chapter).

Theories are precisely the “instrument” for organizing the conditions for building models. It is the property of being an instrument about conditions that renders them into an entity that is inevitably embedded into community. We could even bring in Heidegger’s concept of the “Gestell” (scaffold) here, which he coined in the context of his reflections on technology.

The subject of theories is models, not proposals about the empirical world, as far as we exclude models from the empirical world. The subject of Popper’s falsificationism is the realm of models. In the chapter about modeling we determined models as tools for anticipation, given the expectation of weak repeatability. These anticipations can fail, hence they can be tested and confirmed. Inversely, we also can say that every theoretical construct that can be tested is an anticipation, i.e. a model. Theoretical constructs that cannot be tested are theories. Mathias Frisch ([5], p.42) writes:

I want to suggest that in accepting a theory, our commitment is only that the theory allows us to construct successful models of the phenomena in its domain, where part of what it is for a model to be successful is that it represents the phenomenon at issue to whatever degree of accuracy is appropriate in the case at issue. That is, in accepting a theory we are committed to the claim that the theory is reliable, but we are not committed to its literal truth or even just of its empirical consequences.

We agree with him concerning the dismissal of truth or empiric content regarding theories. Yet, the term “reliable” could still be misleading. One never would say that a norm is reliable. Norms themselves can’t be called reliable, only their following. One does not just obey a norm; the norm is also something that has been fixed as the result of a social process, as a habit of a social group. From a wider perspective, we probably could assign that property, since we tend to expect that a norm supports us in following it. If a norm did not support us, it would not “work”, and in the long run it would be replaced, often in a catastrophically sweeping event. That “working” of a norm is, however, almost unobservable by the individual, since it belongs to the Lebensform. We also should keep in mind that as far as we refer to such a reliability, it is not directed towards the prediction, at least not directly; it refers just to the possibility of creating predictive models.

From safe grounds we now can reject all the attempts that try to formalize theories along the line Carnap-Sneed-Stegmüller-Moulines [12, 13, 14, 15]. The “intended usage” of a theory (Sneed/Stegmüller) cannot be formalized, since it is related to the world, not just to an isolated subject. Scientific languages (Carnap’s enterprise) are hence not possible.

Of course, it is possible to create models about modeling, i.e. to take models as an empiric subject. Yet, such models are still not a theory, even if they look quite abstract. They are simply models, which imply or require a theory. Here lies the main misunderstanding of the folks cited above.

The turn towards languagability includes the removal of the dualistic contrast between theory and practice. This dualism is replaced by a structural perspective according to which theory and practice are co-extensive. Still, there are activities that we would not call a practice or an action, activities so to speak before any rule. Such activities are performances. Not least, this is also the reason why performance art is… art.

Heinrich Lüber, the Swiss performance artist, standing on top of a puppet shaped as himself. What is not visible here: he stood there for 8 hours, in the water on the shore of the French Atlantic coastline.

Besides performance (art) there are no activities that would be free of rules, or equivalently, free of theory. Modeling in particular is of course a practice, quite in contrast to theory. Another important point we can derive from our distinction is that any model implies a theory, even if the model just consists of a particular molecule, as is the case in the perception mechanisms of individual biological cells.

Another question, which we have to distinguish sharply from that about the reach of theories, is whether the models predict well. And of course, just like norms, theories too can be inappropriate.

Theories are simply there. Theories denote what can be said about the influence of the general conditions—as present in the embedding “Lebenswelt”—on the activity of modeling.

Theories thus can be described by the following three properties:

  • (1) A theory is the (social) practice of determining the conditions for the actualization of virtuals, the result of which are models.
  • (2) A theory acts as a synthesizing milieu, which facilitates the orthoregulated instantiation of models that are anticipatively related to the real world (where the “real world” satisfies the constraints of Wittgensteinian solipsism).
  • (3) A theory is a language generating language game.

Theories, Models, and in between

Most of the constructs called “theory” are nothing else than a hopeless mixture of models and theories, committing serious naturalistic fallacies in comparing empiric “facts” with normative conditions. We will give just a few examples for this.

It is generally acknowledged that some of Newton’s formulas constitute his theory of gravitation. Yet, it is not a theory, it is a model. It allows for direct and, at the mesocosmic scale, even almost lawful predictions about falling objects or astronomical satellites. Newton’s theory, however, is given by his belief in a certain theological cosmology. Due to this theory, which entails absoluteness, Newton was unable to discover relativity.

Similar is the case of Kepler. For a long time (more than 20 years) Kepler’s theory entailed the belief in a pre-established cosmic harmony that could be described by Euclidean geometry, which itself was considered at that time to be a direct link to divine regions. The first model that Kepler constructed to fulfill this theory comprised the inscription of platonic solids into the planetary orbits. But those models failed. Based on better observational data he derived different models, yet still within the same theory. Only when he dropped the geometrical approach in his theory was he able to find his laws about the celestial ellipses. In other words, he dropped most of his theological orthoregulations.

Einstein’s work on relativity, finally, is clearly a model, as there is not only one formula. Einstein’s theory is not related to the space-time structure of the macroscopic universe. Instead, the conditions for deriving the quantitative / qualitative predictions are related to a certain belief in the non-randomness of the universe. His conflict with quantum theory is well known: “God does not play dice.”

The contemporary Standard Model in particle physics is exactly that: a model. It is not a theory. The theory behind the Standard Model is logical flatness and materialism. It is a considerable misunderstanding of most physicists to accuse proponents of string theory of not providing predictions. They cannot, because they are thinking about a theory. Yet, string theorists themselves do not properly understand the epistemic role of their theory either.

A particular case is given by Darwin’s theory. Darwin of course did not distinguish perfectly or explicitly between models and theories; this was not possible for him in those days. Yet, throughout his writings and the organization of his work we can detect that he implicitly followed that distinction. From Darwin’s writings we know that he was deeply impressed by the non-random manifoldness in the domain of life. Precisely this represented the core of his theory. His formulations about competition, sexual selection or inheritance are just particular models. In our chapter about the abstract structure of evolution we formulated a model about evolutionary processes in a quite abstract way. Yet, it is still a model, within almost the same theory that Darwin once followed.2

There is a quite popular work about the historical dynamics of theory, Thomas Kuhn’s “The Structure of Scientific Revolutions“, which is not a theory, but just a model. For large parts it is not even a model, but just a bad description, for which he coined the paradigm of the “paradigm shift”. There is almost no reflection in it. Above all, it is certainly not a theory about theory, nor a theory about the evolution of theories. He had to fail, since he did not distinguish between theories and models in the least.

So, leaving these examples behind: how do models and theories relate practically? Is there a transition between them?

Model of Theory, Theory of Model, and Theory of Theory

I think we can derive from these examples a certain relativity regarding the life-cycle of models and theories. Theories can be transformed into models through the removal of those parts that refer to the Lebenswelt, while models can be transformed into theories if the orthoregulative part of the models gets focused (or extracted from theory-models).

Obviously, what we just did was to describe a mechanism. We proposed a model. In the same way, using the concept of the language game to derive a structure for the concept of theory represents a model. Plainly spoken, so far we created a model about theory.

As we have seen, this model also comprises proposals about the transition from model to theory. This transition may take two different routes, according to our model about theory. The first route is taken if a model gets extended by habits and further, mainly socially rooted, orthoregulations, until the original model appears just as a special case. The abstract view might still be only implicit, but it may be made explicit if the whole family of models that is possible within those orthoregulations is concretely constructed. The second route draws upon a proceeding abstraction, thereby introducing the necessity of instantiation. It is this necessity that decouples the former model from its capability to predict something.

Both routes, either by adding orthoregulations explicitly or implicitly through abstraction, turn the former model de actio into a milieu-like environment: a theory.

As productive milieus, theories comprise all components that allow the construction and the application of models:

  • – families of models as ensembles of virtualized models;
  • – rules about observation and perception, including the processes of encoding and decoding;
  • – infrastructural elements like alphabets or indices;
  • – axiomatically introduced formalizations;
  • – procedures of negotiation, procedures of standardization, and other orthoregulations up to arbitrary order.

The model of model, on the other hand, we already provided here, where we described it as a 6-tuple representing different, incommensurable domains. No possible path can be thought of from one domain to any of the others. These six domains are, by their labels:

  • (1) usage U
  • (2) observations O
  • (3) featuring assignates F on O
  • (4) similarity mapping M
  • (5) quasi-logic Q
  • (6) procedural aspects of the implementation

or, taken together (writing, say, P for the procedural aspects): m = ⟨U, O, F, M, Q, P⟩.

This model of model is probably the most abstract and general model that is not yet a theory. It provides all the docking stations that are required to attach the realm of norms. Thus, it would be only a small step to turn this model into a theory. That step towards a theory of model would include statements about two further dimensions: (1) the formal status and (2) the epistemic role of models. The first issue is largely covered by identifying them as a category (in the sense of category theory). The second part is related to the primacy of interpretation, that is, to a world view that is structured by (Peircean) sign processes and transcendental differences (in the Deleuzean sense).
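As a purely structural sketch, the 6-tuple can be written down as a plain data structure; the field names and types below are our illustrative assumptions (and P for the procedural aspects is our label). Deliberately, no method maps one field to another, mirroring the claim that the six domains are incommensurable.

```python
# A structural sketch of the 6-tuple "model of model".
# Field names/types are illustrative assumptions; the six domains are
# incommensurable, so intentionally no method leads from one to another.
from dataclasses import dataclass
from typing import Any, Callable, Sequence

@dataclass(frozen=True)
class Model:
    usage: str                               # (1) usage U
    observations: Sequence[Any]              # (2) observations O
    assignates: Sequence[str]                # (3) featuring assignates F on O
    similarity: Callable[[Any, Any], float]  # (4) similarity mapping M
    quasi_logic: Callable[..., bool]         # (5) quasi-logic Q
    procedure: Callable[..., Any]            # (6) procedural aspects (P)
```

Note what the sketch cannot carry: the norms that select a usage, license the assignates, or justify the similarity mapping. These are exactly the “docking stations” where a theory would have to attach.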

The last twist concerns the theory of theory. There are good reasons to assume that for a theory of theory we need to invoke transcendental categories. Particularly, a theory of theory can’t contain any positive definite proposal, since in this case it would automatically turn into a model. A theory of theory can be formulated only as a self-referential, self-generating structure within transcendental conditions, where this structure can act as a borderless container for any theory about any kind of Lebensform. (This is the work of the chapter about the Choreosteme.)

Remarkably, we thus cannot claim that we could apply a theory to itself, as a theory is a positive definite thing, even if it contained only proposals about conditions (yet, this is not possible either). Of course, this play between (i) ultimately transcendent conditions, (ii) mere performance that is embedded in a life form, and finally (iii) the generation of positivity within this field constitutes a quite peculiar “three-body problem” of mental life and (proto-)philosophy. We will return to it in the chapter about the choreosteme, where we also will discuss the issue of “images of thought” (Gilles Deleuze) or, in slightly different terms, the “idioms of thinking” (Bernhard Waldenfels).

Conclusion

Finally, there should be our ceterum censeo, some closing remarks about the issue of machine-based episteme, or even machine-based epistemology. Already at the beginning of this chapter we declared our motivation. But what can we derive and “take home” in terms of constructive principles?

Our general goal is to establish—or to get clear about—some minimal set of necessary conditions that would allow a “machinic substrate” to be arranged in such a way that we could assign to it the property of “being able to understand” in a fully justified manner.

One of the main results in this respect was that modeling is nothing that could be thought of as running independently, as an algorithm, in such a way that we could regard this modeling as sufficient for ascribing to the machine the capability to understand. More precisely, it is not even the machine that is modeling; it is the programmer, or the statistician, the data analyst etc., who switched the machine into the ON state. For modeling, knowing and theorizing, the machine would have to act autonomously.

On the other hand, performing modeling inevitably implies a theory. We just have to keep this theory somehow “within” the machine, or more precisely, within the sign processes that take place inside the machine. The ability to build theories necessarily implies self-referentiality of the informational processes. Our perspective here is that the macroscopic effects of self-referentiality, such as the ability to build theories, or consciousness, cannot be “programmed”; they have to be a consequence of the im-/material design aspects of the processes that make up these aspects…

Another insight, though not a heavily surprising one, is that the ability to build theories refers to social norms. Without social norms there is no theorizing. It is not mathematics or science that is necessary; it is just the presence and accessibility of social norms. We could briefly call it education. Here we are aligned with theories (i.e. mostly models) that point to the social origins of higher cognitive functions. It is quite obvious that some kind of language is necessary for that.

The road to machine-based episteme thus does not imply a visit to the realms of robotics. There we will meet only insects and… robots. The road to episteme leads through languagability, and anything that is implied by it, such as metaphors or analogical thinking. These subjects will be the topic of the next chapters. Yet, this also defines the programming project accompanying this blog: implementing the ability to understand textual information.


Notes

1. The image in the middle of this triptych shows the situation in the first installation at the exhibition in Petrograd in 1915, arranged by Malevich himself. He put the “Black Square” exactly at the place where traditionally the Christian cross was to be found in Russian living rooms at that time: up in the corner under the ceiling. In this way, he invoked a whole range of reflections about the dynamics of symbols and habits.

2. Other components of our theory of evolutionary processes entail the principle of complexity, the primacy of difference, and the primacy of interpretation.

This article has been created on Oct 21st, 2011, and has been republished in a considerably revised form on Feb 13th, 2012.

References

  • [1] Stathis Psillos, Martin Curd (eds.), The Routledge Companion to Philosophy of Science. Taylor & Francis, London and New York 2008.

  • [2] J.L. Austin, How to Do Things with Words. 1962.
  • [3] Ludwig Wittgenstein, Philosophical Investigations.
  • [4] On the etymology of “theory”: Greek “theorein”, “to look at, to speculate”.
  • [5] Mathias Frisch, Inconsistency, Asymmetry, and Non-Locality: A Philosophical Investigation of Classical Electrodynamics. Oxford 2005.
  • [6] Bas van Fraassen, The Scientific Image. Oxford University Press, Oxford 1980.

  • [7] Alchourrón, C., Gärdenfors, P., Makinson, D. (1985). On the Logic of Theory Change: Partial Meet Contraction and Revision Functions. Journal of Symbolic Logic 50, 510-530.
  • [8] Raúl Carnota, Ricardo Rodríguez (2011). AGM Theory and Artificial Intelligence. In: Belief Revision meets Philosophy of Science. Logic, Epistemology, and the Unity of Science, Vol. 21, 1-42.

  • [9] Neil Tennant (1997). Changing the Theory of Theory Change: Reply to My Critics. British Journal for the Philosophy of Science 48, 569-586.

  • [10] Hansson, S.O., Rott, H. (1995). How Not to Change the Theory of Theory Change: A Reply to Tennant. British Journal for the Philosophy of Science 46, 361-380.
  • [11] Robert Brandom, Making it Explicit. 1994.
  • [12] Carnap
  • [13] Sneed
  • [14] Wolfgang Stegmüller
  • [15] Moulines

۞

