December 2, 2011
If anything implies at least one condition that is outside of itself, how then can we speak about conditions?
Conditions are a strange something. Everybody uses them as a sort of scaffold, for instance in the form of logical premises. In science, however, everyone tries to kick them out of the game. Degraded into “side-conditions,” they are first vigorously controlled, then declared irrelevant. In physical contexts as well as in the context of logic, taking conditions seriously leads to self-referentiality, which in nature provides the road into chaos and complexity. For logic, however, self-referentiality is often considered its devilish counterpart and hence treated as a taboo. Conditions can’t be investigated by means of logic.
Conditions, and the conditional alike, can be written down in many pretty ways. Yet it is quite difficult to explain them, to explain how we arrive at them. They are certainly a kind of nightmare for philosophers, at least for those who, in one way or another, believe in existence as a primary foundation. Not believing in existence as a primary foundation does not mean not believing in existence, or denying it. Yet it makes a dramatic difference whether existence is set as a primary foundation or as an implied quality. We discuss this in another chapter.
This question about foundations is so important because rationality is tightly bound to it, and with it the possibility to know. Since the beginnings of philosophy, and probably long before, it has been considered the ultimate justification within human societies to say “I did this because…”. Giving a reason for one’s own acts establishes the act as an act; it thus also establishes the conditions for the possibility of freedom. At least, so it was believed. If we could not justify our acts, then everything about human society would be without ground. At least, so it was believed.
People quickly discovered that some justifications are more tenable than others. If there is such an experienceable difference, is it then possible to think that there is a superior condition, one that rules all others? Something like a last condition? Something that causes any other thing or change? Or, complementary to that, is there the possibility of gaining a priori truth? Generally, it was always considered vital not to use unnecessary reasons in arguments. Aristotle always tried to build his systems on a single principle. About 1500 years after the invention of this sparsity argument, it was rediscovered by William of Occam (Ockham), a great medieval philosopher. Nowadays it is popular under the brand of “Occam’s razor.”
Consistently, Aristotle also formulated the idea of the “Unmoved Mover”: an independent, divine, eternal, unchanging, immaterial substance. Yet his main interest in this construct concerned material things. If the “Unmoved Mover” were transferred to the social or mental aspects of human life, a plethora of problems would arise, primarily about freedom, determinism, and ethics. It does not help much to introduce the idea of God into the discourse about conditions. God would then be the condition, more directly than not, for evil as well. Don’t think of war; think just of the victims of the Inquisition. You get the point. Post-medieval philosophy recognized the problems of an idea of God that would be related to any kind of condition when Leibniz tried to justify that idea in his Théodicée.
Leibniz himself, but also Descartes among others, took God out of the world of facts and matters and put him into the construction of the world. Consequently, everything was considered to be perfectly construed; everything was, indeed had to be, considered a machine, a quite complicated one, but a machine nonetheless. The discussion about determinism persists to this day. All of our writings here about the possibility of machine-based epistemology are related to this issue, too. Of course, we reject determinism, but anyone thinking about the relation of matter and mind inevitably takes a position on the issue of determinism.
The idea of a single, monopolistic foundation for any thinkable justification introduces some serious problems. These problems are first structural in nature, then ethical.
A foundation can do its work only if, additionally, a necessity is at work. In other words, the language game of “foundation” remains intact only if there is no choice, not even the possibility of a choice, regarding its instantiation. The foundation must not be abstract, and it has to be formulated in a positive way. A good example is provided by the Ten Commandments. The very idea of a foundation presupposes the exclusion of choice and freedom for itself.
We may summarize these structural problems:
- (1) The principle has to be formulated as a principle of construction; formally we call that “positive definite”.
- (2) Everything becomes dependent on it.
- (3) The need for further justification has to be impossible; otherwise it would not be a foundation.
It is difficult to overestimate the severity of the first part of the problem. Even Aristotle’s “Unmoved Mover” has to act; there is a positive determination of some aspect of that entity: it has the faculty to establish movement and change. This, in turn, implies conditions that could only be cast aside by an (arbitrary) definition.
The third part of the problem can be dissolved by claiming some kind of absolute externalization, e.g. a divine origin, or the hyper-Platonic sphere of “existent” ideas. Yet, since a “God” always remains somehow abstract (even by definition), one additionally needs rules for instantiating the divinity in real life. These rules, of course, also have to be of divine origin. Again by definition, they can’t be discussed. Quite obviously, such divine rules are completely arbitrary, even though they produce societal structures in such a way that the same arbitrary rules acquire predictive power. You may understand that this “solution” is not acceptable for us.
In mathematics, first principles are called axioms. Usually, and traditionally, they are founded outside of mathematics, referring to something that could not be denied. Such undeniable “subjects” are usually taken from everyday life, i.e. the sphere of experiences accessible to a human body, which is considered not to be part of mathematics. It is interesting to recall what happened to mathematics since the late 19th century. The French Revolution and the settlement of industrialization brought a broad liberation and a long period crammed with successful science. Mathematics played an important role in these developments. Based on the achievements in the unification of mathematics, namely the research about and the creation of new algebras and their abstract embedding by people like Clifford, Lie, Peano, Abel, or Klein, the question about the foundation of mathematics arose. Could there be a single axiom that could be used to justify the correctness of mathematics? Hilbert then proposed his famous program, the Hilbert program, a set of tasks to be solved in order to guarantee the correctness of mathematics.
We have to put at least 2300 years of philosophical argumentation onto the table in order to grasp the shock that was released by the results of Gödel’s early work, the incompleteness theorems. Their claim is simple, despite the fact that at the time the proof was very difficult. Gödel proved that the Hilbert program poses impossible goals: it cannot be proved that a sufficiently expressive formal system is both complete and free of contradictions. If you assume consistency, you lose completeness; if you prefer completeness, you lose safety, i.e. you will likely be caught by arbitrariness. This result is devastating for any theory of explicable universals, be it a religious theory like the idea of a God, or be it mathematics. Whenever you strive for a constructive argument that should influence everything, you will meet the same problem.
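For readers who want the standard form behind this summary, the two theorems can be stated informally as follows (a condensed gloss in LaTeX, not part of the original argument; $F$ denotes the formal system):

```latex
% Gödel's incompleteness theorems, stated informally.
% Assumption: F is a consistent, effectively axiomatized formal system
% strong enough to express elementary arithmetic.
\begin{enumerate}
  \item \textbf{First theorem.} There is a sentence $G_F$ such that
        neither $F \vdash G_F$ nor $F \vdash \neg G_F$;
        hence $F$ is incomplete.
  \item \textbf{Second theorem.} $F \nvdash \mathrm{Con}(F)$;
        that is, $F$ cannot prove its own consistency.
\end{enumerate}
```

This is why the text speaks of a trade-off: demanding completeness forces you outside the system, where nothing guards against arbitrariness.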
The problem cannot be abolished by retreating to a structural and abstract level for the first principles. As long as these rules are positive definite, they imply arbitrariness. In other words, even the universality of ethics and the clarity of logic are threatened by it. How, then, can we rely on them? How can we justify a political act, or any act, since any act erodes the freedom of another person? How can we conduct an argument if logic is unsafe and the idea of necessity vanishes? Elsewhere we will delve a bit deeper into the problematics of necessity and logic, which in turn is related to the issue of causality and information. Here, and for now, we have to postpone these big issues for a while.
Instead, we will return to our core subject, the possibility of machine-based epistemology. The question is: what are the conditions for that possibility? As we have seen in the chapters about Non-Turing Computation and Growth, these conditions are not trivial. In particular, we can’t program it. Thus, along a second, more strategic line, we have to ask how we can speak about those conditions.
The consequence of the above is quite clear. Whatever we take as these conditions, or however we speak about them, they cannot consist of positive rules. The whole system needs to be a system of constraints, not a system of positive settings. Yet these constraints have to have a particular quality, as they themselves can’t be formulated as an identifiable (countable) limit, or as a physical quality.
We already mentioned that positive axioms have to be avoided, since they introduce arbitrariness. Nevertheless, we have to choose some starting point. We need an instance of a Wittgensteinian ladder. Yet it should not be necessary to throw it away after we have achieved some insights; it should simply vanish, disappearing by its own working.
Closely related to this, we can also derive that there can’t be an outside position from which we could impose rules as such conditions, or about how to talk about them. There is only an inside. Any justification of the system refers only, and necessarily, to itself. The structure we are going to build is fully self-referential, and it has to remain consistent under this condition of self-referentiality without sending us into the abyss of the infinite regress. Usually, philosophy excludes any kind of self-referentiality, because it is believed to be equal to, or equally evil as, the infinite regress. Arguments have to come to an end. Interestingly enough, here philosophers have always followed the logical structure of finite algorithms. Yet not coming to a definite end does not preclude meaningful progress.
There is yet a further structural condition we have to fulfill, and which we can know before actually starting to define it. First, it is clear that there will be one or several aspects of our theory of the condition. These aspects, though even a single aspect would do the same, establish a kind of abstract space. Yet this space cannot be a presentative space, i.e. a space into which we could put things. We would immediately contradict our first principle, the pure negativity of the principles. This space can’t have a structure that would allow for coordinates. Instead, it needs to be impossible to determine a coordinate. As exotic as this requirement may seem, it can be instantiated; at least we can derive that the structure of the space is made from a differential. To give a metaphorical example: the space we are going to construct does not look like a map of a landscape. Such a map would contain locations, coordinates. Instead, the map should contain only something like the possible accelerations. Yet it is NOT a map of the values of acceleration at particular locations; the concept of a location does not make any sense in our space. Another, almost metaphorical way to put it would be to conceive of this space as a space of Laplace operators. “Points” in this space are not geometrical points (enumerable idealized particles under an order relation), yet the space is dense. Each point acts as an operator, or, more graspably, like a tunnel, or a wormhole.
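As a mathematical gloss on this metaphor (our addition, not part of the original argument): the Laplace operator is indeed purely differential. Applied to a function, it yields not a position but a measure of how the function deviates locally from its surrounding average:

```latex
% The Laplace operator on R^n: a purely differential quantity.
% \Delta f at a point measures the local deviation of f from its
% mean over a small neighbourhood -- it carries no information
% about where that point is located.
\Delta f \;=\; \sum_{i=1}^{n} \frac{\partial^{2} f}{\partial x_i^{2}}
```

A “space of Laplace operators” would thus be a space whose elements encode only relations of change, never coordinates, which is exactly the property demanded above.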
Let us summarize these conditions crisp and clear:
- (1) We can use only negatively formulated principles; they must not be normative in any respect.
- (2) Even in their negativity, these principles must not be “crisp,” in order to avoid structural positivity.
- (3) There is no outside position; thus the resulting arrangement needs to be stabilized by self-referentiality.
- (4) The abstract space is defined as a space of operators.
Further side conditions are imposed on our project by the intention to remain consistent with everything we have said throughout this collection. Before we start, we’d like to say that the way we introduce the structure here is only one out of many possible ways. Yet, after several years of using this structure as a kind of orientation, we are convinced that it is stable and useful.
Some Basic Observations
Let us start by referring briefly to some simple observations. In the end, we will put them all together.
The first one was mentioned by Augustine when he tried to reflect upon time. He said that, in our words, time appears to him as a perfectly clear concept as long as he uses it; yet as soon as he starts to investigate it more closely, this clarity vanishes almost completely. The interesting thing now is that this happens with any immaterial concept we can address with our languages. If we do not have a material reference for a concept, as we do for, say, “stone,” we quickly get caught in serious difficulties when we start to ask about its meaning or its essence, its dynamics or its heredity. More often than not, the difficulties arise even for material things. It is almost impossible to give a positive definition of a chair; there is always some exception. We could conclude that concepts cannot be defined in a positive manner.
The second observation concerns what we call a model. People say that a model is a representation of another thing, often simpler or more abstract than what is being represented. Such descriptions are (simply) wrong, as we have seen in the chapter about modeling. Things are experienceable only through models. There is no such thing as the thing-as-such, and there is no process we could call representation besides modeling. The issue that is relevant in this chapter concerns the generation and usage of models.
Let us imagine we have a model of a building. It is an architectural model, which can be used only in a particular way and for particular purposes. It is a model for a wider context. The model itself can’t define the rules for its own application. That is, the architectural model is different from what we call architecture. Of course, we could build a model about architecture, but that model would not be a model of a building anymore.
Quite similarly, no particular model can define how to create itself. If we generalize the argument to any kind of formal model (which is really a large set), we can see that a model can’t define the symbols, and the way we use symbols, to create a model. Thus we can conclude that any model has two open ports where it is not self-contained.
Both sides, or both ports, the required symbols as well as the modes of usage, refer to a community. From the perspective of the community, and also from that of its members, the model is a particular kind of device that mediates the communicative relationships between the members of the community. Models establish a distinct type of mediality. Yet models are, of course, not the only media. Our example just demonstrates that the mediality of things cannot be reduced to the things themselves, even if those things are concepts or models.
These three aspects, concept, model, and mediality, are not reducible to each other; they all refer mutually to the other two as their condition. Together they imply a further categorical aspect: future. Actions are always directed to the future, never to the past; the same is true for any epistemic act, for epistemology itself, and also for knowledge. Using something implies future. Yet what is interesting about the future is not the temporal aspect. Instead, for our discussion of conditionability, its correlate, virtuality, is much more relevant.
Virtuality could be conceived as pure potentiality, irrespective of the “concrete” facts and processes. From Bergson and Deleuze we can learn that we have to distinguish the possible from the potential, or, in other terms and related to that, the real from the actual. The possible is always real, even if it is not factual; we clearly can imagine it. The potential, however, we cannot depict. Yet it does not denote a completely unknown domain either. Aristotle, and later Bergson and Deleuze, clearly recognized that any discourse about change requires a notion of virtuality. Virtuality is the non-physical consequence of time and change. Taken from the other side, we can see that we can’t talk about change without the concept of virtuality. In still other terms, and again implying the Deleuzean position1, we could say that “the” virtual refers to the energy created by the tension between immanence and transcendence.
Starting to ask about conditions, we found four aspects that seem to be involved in conditionability: model, concept, mediality, and virtuality. Obviously, conditionability refers to the issue of sufficient reason and, of course, to the possibility to know. For now we have to stop here, postponing the further development of these issues; we first have to discuss some other topics. The thread will be taken up again in the chapter “A Deleuzean Move“.