Modeling

November 17, 2011

Given the ubiquity of models in our contemporary world, it may seem astonishing that only about 100 years ago the concept of model was nearly absent from everyday language [1]. Etymology tells us that "model" derives from the Latin modulus, meaning "measure, standard," from where it found its way into modern language through architecture [2]. Apparently, the scientism of the 19th century as well as mass production (and mass culture) also paved the way for its spread. The usage of models unfolded between positrons (Dirac) and cars (Model T), between strategy games and fashion. Surely there is an important relationship between models and the medial intensification throughout the 20th century.

"Model" is a term almost as iridescent as models themselves are. The concept of model appears as soon as we enter any context of production, hence of communication about future entities. Models denote more or less accurate templates, often used in relation to accessible instances, whether this accessibility is physical or conceptual. A model can itself be established, or taken, as a dedicated entity in order to support exchange and communication about consequences. Thus models act as templates, almost as original images, as well as a basis for simulations and what-if analyses. Models must not be conceived just as the result of dealing with data; there are also models that are created without data, for instance on the basis of a system of symmetry relations (call it an "idea"), or as the result of making an analogy, as we will see in the chapter(s) about analogical thinking. Modeling is a key aspect of epistemology, albeit so far it is recognized as such only in some peripheral areas of the philosophy of science. If we conceive of it in a sufficiently abstract way, we might find almost nothing else than modeling, as far as the activities of the individual are concerned. (We never shall forget about the communal part of cognition, of course.)

We are convinced that there is no "direct" access, and not only with respect to the empirical world; there is not even a thing which we could separate out as "accessing the world." We think that any sort of "direct" realism is just a naive perspective, hence we deny it, and consequently we dismiss any reasonability of notions like "real" things as-such, things per-se, the "real" world as independent from us observers, etc. To invoke an icon for this, we could also say that we take a strictly non-idealistic, non-Hegelian position with respect to epistemology and the issue of our relations to what we call "the world."

In contrast, our thesis is that we create any aspect of our world, whether empirical or not, only through interpretation and thus only through modeling. Of course, there is something out there, yet it does not make much sense to refer to this outside as a "world". The world is real, but this "reality" (and the world) is completely inside everyone's private mental life. Undeniably we can try to speak about it, and we can try to share our realities through speaking, i.e. using a public language. Yet, we have to translate our internals into models first. In other words, we even have to model our own internal mental "reality" before being able to talk about it, notwithstanding the fact that the second step, the translation of the model into language, adds further complications. This perspective is strictly compatible with Wittgenstein's solipsism, as it has recently been described by Wilhelm Vossenkuhl [3].

It should be clear by now that our position is not an anti-Aristotelian attitude, but rather a non-Aristotelian one. It is true that everything we find in the mind first passed our "senses," extending the notion of sense to any afferent fiber reaching into the brain. However, this alone is not a sufficient contribution for being able to "think." Similarly, we reject Descartes' position of the primacy of the "Cogito." To us it seems that Descartes is refuting the dependency on external data as well as neglecting the role of language and the accompanying necessity of a double translation. Yet, his "innate ideas" resemble Wittgenstein's dictum about the alignment of logics to the structure of the world.

Compatible with such frameworks, we can define the concept of "model" from an epistemological perspective: models are tools for anticipation, given the expectation of weak repeatability.

We are convinced that modeling is the only way to synchronize in a useful way with the outer world, or, in the case of social affairs, to connect to other realities and to share them. Modeling is the only gateway to connect. In his extension of the Wittgensteinian language game, Robert Brandom deepened and popularized that idea in his work, supposing that the mutual inference of intentions is at the core of the ability to understand language. Modeling is unnecessary exactly in those cases where external relations are fully determined. This may happen by means of bureaucracies, or in any other case of "programming the world," e.g. in a misunderstood human-machine-interface design. The social world is full of attempts to fix the structure of interactions in order to diminish the need for modeling. In this perspective, traditions, grammars and any sort of convention are just tools to facilitate this reduction of necessary efforts, or even to enable modeling at all.

The only things we receive are fluctuations of physical signals. We have to recognize patterns, assign symbols, infer structures and intentions, but we never can know completely. Induction from empirical impressions is a phantasm of gnostic scientists or philosophers. This fact of not knowing forces us to apply modeling, to derive models, in other words to predict, in every tenth of a second. Members of all human societies enjoy playing with this fact; we call it sports, or humor. The input for such modeling is never symbols, but always, even on the level of symbols, probabilistic densities. The way "back" from probabilistic densities to symbols is one of the key issues for any epistemology, not just for the part engaged with the machine-based flavor. It should be clear that we agree neither with radical empiricism nor with radical constructivism.

Here, in the context of our general interests, where we focus on the possibility (as well as the structure and the potential) of machine-based epistemology, we thus have to find an appropriate formal representation of the concept of model. This formal representation should not be limited to models in any particular domain, e.g. mathematics, architecture, hermeneutics or science; the concept of "model" differs strongly across these domains. Our representation should instead (1) allow for a maximum of generality, while at the same time it (2) should obey the ultimate conditions of any epistemology. Only by means of such a representation will we be able to investigate further the conditions for the ability of autonomous modeling, things like the concept of data or the role of logics. This general "model of model" will also allow us to find an appropriate concept of theory. Currently, there is no appropriate theory about theories and models; most frameworks called "theory" are just models, as we will see in the chapter about the theory of theory. Despite the fact that models may be quite abstract and theoretic, models are not theory. In order to understand the specific difference between the two, we have to introduce an appropriate, and most general, notion of model. It is quite obvious that such issues concern the cornerstones of any attempt to get clear about epistemology, especially concerning the presumed machine-based form of it.

The Formal Representation of the Model

The formal representation of an object of the "real world" is helpful precisely because it is the only way to investigate what could be said about that object in principle. Of course, formalization itself needs concepts, and quite naturally, formalization introduces the constraints of those concepts. Often, yet not necessarily, formalization introduces a strong reduction of the observed object, often related to the assumption of enumerability or identifiability.

In order to create a model, one needs tools and methods. In order to create a formal model, one has to introduce basic elements (often called axioms), operators, and the possible relations between them. Given the complexity of the matter, we suggest starting in medias res and explaining the components subsequently.

Our model of model appears as a 6-tuple. You may also conceive of its elements as six different, incommensurable domains, such that no path can be thought of from one domain to any of the others. These six domains are, by their labels:

  • (1) usage U
  • (2) observations O
  • (3) featuring assignates F on O
  • (4) similarity mapping M
  • (5) quasi-logic Q
  • (6) procedural aspects of the implementation P

Taken together this renders into the following single expression:

m = { U, O, F, M, Q, P }  eq.1

These six domains, or "elements," forming the (abstract) model are themselves high-level compound concepts. In the following we give brief yet complete descriptions of these compound elements.
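For readers who prefer a concrete rendering, the six elements can also be written down as a plain data structure. The following Python sketch only illustrates the shape of the tuple in eq.1; all identifiers are our own placeholders, not part of the definition itself.

```python
# A minimal structural sketch of the six-element "model of model" (eq.1).
# All identifiers are illustrative placeholders, not part of the text above.
from dataclasses import dataclass
from typing import Any, Callable, Sequence

@dataclass
class AbstractModel:
    usage: Any                                # U: purpose and risk, detailed in eq.2
    observations: Sequence[Any]               # O: potential observations
    assignates: Sequence[str]                 # F: featuring assignates on O
    similarity: Callable[[Any, Any], float]   # M: one actualization of the similarity functional
    quasi_logic: Any                          # Q: quasi-logic, detailed in eq.3
    procedure: Callable[..., Any]             # P: procedural aspects of the implementation
```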

Usage

For many reasons we follow the philosophical attitude as it has been developed first by Wittgenstein, then deepened by (the late) Putnam, by Vossenkuhl or Brandom, among others (but, for instance, not by Quine, Lewis, Davidson, Kripke, Stegmüller or Moulines).

Wittgenstein conceived "language as ongoing regulated activities involving coordination and cooperation of the participants," as Meredith Williams [4, p.228] put it so clearly. In this perspective, language is not just a set of symbols arranged in a well-ordered manner as a consequence of applying a formal language. The use of words in a language follows a purpose. Without that purpose it would be detached from the world. Usage, world, language and modeling are co-extensive. If we do not assign a purpose, i.e. an intended usage, to a symbol, or even to a percept, we cannot achieve a model. Likewise, if we act in a structured way, even if only partially structured, we implicitly apply a model. In this way, modeling and rule-following are closely related to each other.

Here, in our attempt to get clear about models, we have to operationalize the notion of usage. In doing so we have to keep in mind that we are equipped neither with perfect knowledge nor with the possibility to detect "truth" in our perceptions. In many cases something like a "complete" measurement is impossible in principle, even independent of our factual methods of measurement; measurement always provides only a segment of the world, it is itself based on a theory, it represents a model in itself. This imperfectness in our empirical relation to the world is almost trivial. Yet, what is not trivial is our attitude towards the various kinds of errors we will commit due to this imperfectness. Usually, this attitude is expressed by referring to the notion of risk.

We conceive of risk as a part of the usage. Irrespective of the formal tools used to express "risk," in the end risk expresses the cost we assign to the misclassifications committed by a partially erroneous model. Most generally, risk can be expressed as the ratio of the costs assigned to the different kinds of errors, or in short, the error-cost ratio (ECR). The ECR itself need not be handled as a constant; it could be defined as dependent on, or related to, a particular class created by the model as a whole.

The purpose itself can be reflected by a particular intensity of a target variable. Such a target variable is, of course, completely fictitious in the beginning; its usefulness has to be confirmed. Quite often, however, the target variable is determined by factors completely outside of the model (and the modeling process). Obviously, the danger of such a separation is a particular kind of blindness concerning the modeling process and the meaningfulness of its results.

The target variable itself is not yet a complete (and proper) operationalization of the purpose. We also need a scale and the selection of a range of values that represent the "targeted" ("desired") outcome of the process which we are going to represent by the model. Note that this is true also for multi-objective modeling. It is always possible to map a set of several outcome variables onto a single target variable and a well-defined range of values for this target variable, though this mapping is usually not continuous. The reason why this reduction from multi-variate "outcomes" to a single target variable is possible lies in the obvious fact that we will use, i.e. apply, the model as the basis for a decision to act in a particular way.
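As a minimal illustration of this reduction (all variable names and thresholds here are invented for the purpose), two outcome variables can be collapsed into a single binary target variable by fixing a desired range for each:

```python
# Sketch: reducing two outcome variables to a single binary target variable (TV).
# Names and thresholds are hypothetical, chosen only for illustration.
def to_target(yield_pct: float, defect_rate: float) -> int:
    """Return 1 for the 'targeted' outcome (act), 0 otherwise (do not act)."""
    in_target_group = yield_pct >= 95.0 and defect_rate <= 0.01
    return 1 if in_target_group else 0

print(to_target(97.2, 0.005))  # -> 1
print(to_target(97.2, 0.020))  # -> 0; the mapping jumps at the thresholds,
                               #    i.e. it is not a continuous mapping
```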

So we can cover the notion of usage in the following 3-tuple:

U = { TV, TGTV, ECR }  eq.2

The symbols are:

TV = target variable, the basis for a measure-theoretic operationalization of purpose or usage; the scaling of the variable may be numeric or nominal;
TGTV = target group, defined as a subset within the set created by the target variable TV, e.g. as a selection or an interval of values;
ECR = error-cost ratio, the ratio of the costs of the different types of errors a model can commit; the ECR expresses the ratio of the costs for misclassifications of Type I and Type II; the ECR can also be represented as any kind of (cost) function.

The ECR is an important docking point for any kind of risk as well as for value, both in the economic and even in the philosophical sense.
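A hedged sketch of how the ECR could enter a modeling procedure, namely as a weighting of the two misclassification counts (all numbers invented):

```python
# Sketch: the error-cost ratio (ECR) as a weighting of the two error types,
# with the cost of a Type-II error normalized to 1.
def weighted_cost(type_i: int, type_ii: int, ecr: float) -> float:
    return ecr * type_i + 1.0 * type_ii

# Two models with the same total error count rank differently once a usage,
# i.e. an ECR, has been fixed:
print(weighted_cost(type_i=10, type_ii=40, ecr=5.0))  # -> 90.0
print(weighted_cost(type_i=40, type_ii=10, ecr=5.0))  # -> 210.0
```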

You may have noticed that we operationalize usage without referring to an intended area of usage, as is the case in structuralist concepts of theory. We suppose that it is one of the major faults of scientific structuralism as proposed by Sneed [5] and Stegmüller [6] (based on the work of Carnap [7]) to include a description of the intended area of application in the formalism. We will discuss this in much more detail in the chapter about theory. Here we just remark that structuralist concepts of theory hopelessly mix models and theories.

We could even say that whenever a proposal includes statements about its area of application (besides the rule "Apply it there!"), it is not even a model anymore. A formalization, and hence a model, can never make any proposal about the conditions of its own applicability. Such a proposal would constitute another model, or it would invoke a rule external to the model at hand, besides the difficulty that a model cannot predict its own applicability. The model is itself such an expectation (but not a formal prediction).

Observations

It is pretty clear, and presumably widely accepted, that observations do not simply reflect things of a world. Rather, observations are based on theories, because already measurements are based on theories and models. Both measurement and observation are based on interpretation, i.e., generally speaking, on some kind of transformation. While we could accept that measurements need not include classification, observations clearly do. Observations can therefore be reflected only by a 3-valued relation.

The theories, models, or habits necessary to perform a measurement or to achieve an observation force us to conceive of observations only as potential observations.

Those theories and models inevitably precede any actual observation; in this regard, they are a priori necessary. Yet this "necessity" is not a physical necessity; it is established as a historical convention. Since they precede any actual observation or measurement, those theories impose serious constraints on any possible result of the modeling.

Features

Given an observation, we have to describe the "whole thing." Usually, we select some abstract structure like "color" and call it a feature. Yet observations do not "possess" features or properties independent of our theories, of course. It is much more appropriate to take features as a kind of assignment that creates the conditions of possible describability. Instead of "features" or "properties" we should therefore speak more clearly of "assignates."

Features are then put into a list that is used as a scheme to describe the observations. This list is also the basis for any comparison of observations.

In a particular attempt to create a model m, one of the most important activities is to create and select features from the observations.

Similarity

Similarity is one of the biggest blind spots in epistemology, in the philosophy of science, and even in science itself and the associated area of data analysis. If it is not completely disregarded, it is mistaken or reduced in weird ways. Thus it pays to be explicit here.

Similarity expresses the expected success of a mapping that relates an unordered set of observations to a partially ordered set. Similarity is nothing that is attached to an object. Once two observations have been put into the same subset on the partially ordered side, we have lost the information to distinguish them. In such cases we usually call the two observations "identical." Empirical identity is not equal to logical identity, albeit the two are often conflated. We can also say that through this mapping we are going to establish "equivalence classes."

Of course, there are infinitely many ways of relating observations. Even the lists of assignates need to match only partially. Astonishingly, almost the whole community of data analysts applies the Euclidean distance as the similarity measure for sorting observations. This is nothing else than utter self-contradictory nonsense, as we will discuss in more detail in another chapter about similarity. There we will also show the details of possible mappings and their drawbacks.

The important issue is that the choice of a particular mapping determines the quality of the result, just as much as the selection of assignates does. While most people are aware of the role of "feature selection," hardly anybody pays attention to the mapping that we call similarity. Due to this importance, our striving for generality in the abstract definition of the "model" leads us to conceive of similarity as a family of functions, or in short as a functional (mathematically we could also say functor). The functional expresses a potentiality. Which particular actualization, as a determined set of mappings, we will use in a given attempt to create a model depends solely on the usage U as defined above.
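The claim that the same observations fall into different equivalence classes under different actualizations of the similarity functional can be made concrete. A toy sketch (data, functions and thresholds invented for illustration):

```python
# Sketch: two actualizations of the similarity functional partition the same
# observations differently; neither partition is "attached to" the objects.
obs = [(1.0, 10.0), (1.1, 90.0), (5.0, 10.5)]

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def first_assignate_only(a, b):          # a different selection of assignates
    return abs(a[0] - b[0])

def equivalence_classes(observations, sim, eps):
    classes = []
    for o in observations:
        for c in classes:
            if sim(o, c[0]) < eps:        # same class = distinction lost
                c.append(o)
                break
        else:
            classes.append([o])
    return classes

print(equivalence_classes(obs, euclid, eps=5.0))
# -> groups (1.0, 10.0) with (5.0, 10.5)
print(equivalence_classes(obs, first_assignate_only, eps=0.5))
# -> groups (1.0, 10.0) with (1.1, 90.0)
```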

Concerning our abstract model it is quite important to conceive of "similarity" as an irreducible element of a model. Only if we do so can we succeed in keeping a critical distance to what we call a "method."

Quasi-Logic

The quasi-logic is implicitly entailed in any formal language used to describe the relations between any of the elements, or between any of their instances. In order to achieve independence from any particular such language we have to abstract from it and include it as an element of its own.

In this sense, formal languages are imposed onto the process of modeling. Within a particular attempt at modeling, the quasi-logic is out of reach, though not invisible. It also seriously constrains the expressiveness of the model. If we, for instance, rely on classic bivalent logic with mutually exclusive truth values, we cannot make any proposal about partial memberships, multiple memberships, uncertainty etc., and, most importantly, we cannot deal with (observations taken on) complex and creative systems.

QS = { Loc, Lay, Lin, SR, DR }  eq.3

The symbols are:

Loc = locality, dependence on the process of the formation of a distribution, context sensitivity;
Lay = capability for self-directed stratification into different epistemic layers;
Lin = linearity (incl. commutativity, associativity);
SR = self-referentiality;
DR = distinctiveness of relations, or, equivalently, the choice between the identity or the difference relation as the fundamental one;
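The limitation of bivalent logic noted above can be made concrete: under a many-valued quasi-logic, partial membership is expressible. A toy sketch (membership degrees invented; min is one common t-norm for conjunction):

```python
# Sketch: partial membership, which a bivalent quasi-logic cannot express.
def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)  # one common t-norm for conjunction

urban, dense = 0.7, 0.4                 # invented membership degrees
print(fuzzy_and(urban, dense))          # -> 0.4, a partial membership
# A bivalent quasi-logic would force both values to True/False first,
# discarding the partiality before modeling even begins.
```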

We follow Wittgenstein in his conclusion that the structure of the world precipitates in the structure of logics. We must not even say that we "apply" logics to our observations in order to make sense of them, for any "application" introduces a trace of semantics into the logics. Yet, there is no way not to use some kind of logics. Logics, or more precisely the chosen quasi-logic, and the world are co-genetic; they mutually imply one another. Thus, we have to be very careful with respect to the quasi-logic we choose.

In our attempt to describe the most general form of model we have to abstract from this choice. We do so by including elements that are determinants of that choice.

Procedural Aspects

In some respects, this element is the most trivial of the set. Yet, despite its relation to practical and actual implementations, it needs to be an element of the abstract model, since the selection of a particular (family of) method(s) and their implementation imposes specific constraints on an actual instance of the abstract model.

Conclusion

.

.

Group Properties

It is now quite interesting to check out the various sorts of deficiency that can be constructed following the formula given in eq.1 above, as well as to interpret the given formal definition as a group. Both types of investigation would not be possible without a formal notation, of course. First we will deal with the group properties; the deficiencies will be handled in the next section.

Why consider mathematical groups here? Well, from today's perspective, we can easily give an example. Could you say how it would feel to live without knowledge of the 0 (zero) or of the negative numbers, or how you would do ordinary calculations? No, of course not. The difference between those two worlds, one stuffed with zeroes and negatives, the other not, is precisely covered by the invention of group theory. Without group theory, we cannot give a satisfying account of the zero or of the negatives.

One of the motivations for group theory in mathematics is rooted in the crystallography of the 19th century. The "Erlanger Programm" of Felix Klein then quickly revealed that there is more to it than just crystals. Today, group theory is the basis for mathematical structures like algebras and fields. Yet, group theory is still related to the notion of (abstract) symmetry regarding sets of elements whose order can be permuted. Symmetry is invariance under a specified group of transformations.

Importing group theory into the theory of modeling directly leads us, and it does so for purely syntactic "reasons," to questions like "Is a model combined with a model again a model?"

The following table lists the group axioms as applied to the model.

Closure: For all elements a, b in a group G, the result of the operation a ∘ b is also in G.
Associativity: For all a, b and c in G, (a ∘ b) ∘ c = a ∘ (b ∘ c).
Identity element: There exists an element e in G such that for every element a in G, the equation e ∘ a = a ∘ e = a holds. Depending on the operator, e may be conceived as 1 or 0.
Inverse element: For each a in G, there exists an element b in G such that a ∘ b = b ∘ a = e.

Group Axiom 1: The first property, closure, is easy to understand. It surely applies to models too, even if we combine two very different models.

Group Axiom 2: The story is different for associativity. It is not generally valid, since modeling is a mapping that destroys information, as all non-bijective mappings do. Associativity is fulfilled only for special cases of models, or in special circumstances. Usually, it makes a difference whether we destroy an informational segment I(1) or a segment I(2) before proceeding with the next model of the set.

The only analytically visible case where it does not make a difference is the situation where all three models a, b, c are completely disjoint with regard to their spaces of mappings. Such models could be called geometric, or logical, models. Mostly, however, combining models is asymmetric and introduces a notion of irreversibility. Yet, in a sufficiently large population of models (i.e. mappings) there might well be a selection of models a, b, c for which associativity holds. This, obviously, is not an analytical issue but an empirical one. It would give rise to something like a probabilistic group, which does not seem to make much sense for now. Anyway, let us presume that normally models are not associative. Using the results of our investigation about information and causality, we can also conclude that models are causally effective. Actions like rule-following are irreversible not only due to their materiality, but also due to their structure. Or, even shorter, we could say that modeling is an activity, and as an activity it introduces irreversibility.

Note that modeling includes both the creation and the application of models. If we were to consider only the application of models to a fixed set of data, even if the usage were not fixed, all resulting models would be associative, because they simply filter the records independently of each other. In general, however, if we consider the whole process, models as the results of modeling are not associative.

In quasigroups and their respective algebras, associativity is not required. In the case of a Lie algebra, for example, associativity is replaced by the Jacobi identity, which introduces a basic asymmetry. There are many other non-associative operations in mathematics, e.g. the vector cross product. We do not delve further into this topic.

Group Axiom 3: Is there an instance of our abstract model which could be conceived as the neutral, or identity, element, such that combining it with a normal model would not change anything, regardless of the order? Both cases a ∘ e = a and e ∘ a = a are possible through a particular error-cost ratio (leading to 100% false-positive classifications, i.e. all data are selected, and there is only one (1) single equivalence class). Thus we conclude that our abstract model can be instantiated such as to form the neutral identity element.

Group Axiom 4: Finally, we have to check whether there is a model which could invert the changes of another model, such that the result is the same for any pairing of a model and its inverse. This would be possible only if the mapping preserved all of the initial information. By definition, however, models create equivalence classes, i.e. they destroy information. The inverse model would have to reconstruct this lost information, which is not possible given the fact that the body of data is fixed for both instances, the model as well as its presumed inverse. Thus, no model is possible that could be conceived as something like an inverse element.
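The failure of axiom 4 can be seen directly: a model that creates equivalence classes is a non-injective mapping, and a non-injective mapping has no inverse. A minimal sketch (data invented):

```python
# Sketch: a "model" as a mapping into equivalence classes is non-injective,
# so no second model can recover the destroyed distinctions (no inverse element).
observations = [0.2, 0.4, 0.6, 0.8]

def model(x):                  # two equivalence classes: "low" / "high"
    return "low" if x < 0.5 else "high"

classes = [model(x) for x in observations]
print(classes)                 # ['low', 'low', 'high', 'high']
# 0.2 and 0.4 are now "identical": any presumed inverse would have to map
# 'low' back to both of them at once, which no function can do.
```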

We conclude that models do not form a group. Hence, no calculus is possible which would take models as arguments. Groups are not general enough to cover the characteristics of models. A generalization into mathematical structures like Lie groups and their algebras, despite their theoretical appeal, particularly the formalized account of asymmetry, is not possible.

Yet, it is too early to guess that there is nothing else one can think of that would be more abstract than the general form of the model. We have to check the relationships between our abstract model and mathematical category theory first. Category theory is basically about abstract transformations, and the inverse element is not an essential piece in it. So, it is well worthwhile to check it out. We prefer to do so in a separate chapter.

A corollary of this (so far still assumed) generality would be that it is not possible to conceive of a theory as a generalization of a model or a set of those. This raises the question, of course, about the status of theories and how we could talk about the relation between theories and models. We will do this in our chapter about theory.

Experimentally Deficient Models

Given the definition of the model as shown in eq.1 above, we will now investigate various sorts of deficiencies, simply created by removing or restricting one or more of the six elements.

We repeat eq.1 for better readability:

m = { U, O, F, M, Q, P }  eq.1

Usage: Removing the usage U from a model renders the model into a formal method performing an arbitrary transformation. Without usage we cannot decide on usefulness, quite obviously so. This implies that we also cannot make a "suitable" selection of features or similarity mappings. We just arrive at some kind of sorting that is completely unjustified. Even worse, the selection of features and the particular transformation destroy information, and this irreversible act is not aligned to any purpose. If we were to use such a sorting for a decision, we would commit a serious mistake: setting U(0)=U. This renders the algorithmic structure of M and P into a kind of internal criterion that affects any subsequent decision.

As weird as this sounds, it is an abundant misbehavior in data mining, but also in the social sciences, or in disciplines like urban planning. People frequently believe that they are modeling if they do what they call "unsupervised clustering," or, equivalently, if they represent some measured data by a formula.

Clustering is not modeling if there is no U-term; hence "clusters" and "classes" are different things. If both are equated, the algorithm or the method dictates the utility, and that is really weird (but, as already mentioned, quite abundant). It is also a simple mistake: as soon as we introduce a target variable (as an operationalization of purposes) we change the cost function for the optimization; hence the sorting of the observations will be different, and so will our conclusions. We conclude that "unsupervised clustering" is either useless, a mistake, or nonsense. A toy example is sketched below.
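To show that introducing a target variable changes the resulting sorting, consider the following sketch (data and threshold invented): the same observations group one way under a purpose-free proximity criterion and another way once a target variable enters.

```python
# Sketch (invented data): the same four observations, sorted once without and
# once with a target variable. The "clusters" and the "classes" do not coincide.
obs = {"a": (1.0, 0), "b": (1.2, 1), "c": (9.0, 1), "d": (9.3, 0)}  # (x, outcome)

# Purpose-free sorting: group by proximity in x alone.
clusters = {k: ("left" if v[0] < 5 else "right") for k, v in obs.items()}

# Modeling: the target variable (outcome == 1) dictates the grouping.
classes = {k: ("target" if v[1] == 1 else "rest") for k, v in obs.items()}

print(clusters)  # {'a': 'left', 'b': 'left', 'c': 'right', 'd': 'right'}
print(classes)   # {'a': 'rest', 'b': 'target', 'c': 'target', 'd': 'rest'}
```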

There are two other modifications of U we can think of. First, we can replace a dedicated usage by a set of usages U as described above in eq.2. Accordingly, the level of our proposals will change (see, as a parallel, the different modes of comparison). Another modification is to replace the externally defined target variable by a criterion that is constructed solely from the error-cost ratio and the internal consistency measures that are part of P. Doing so, however, we initiate a circular structure and we lose contact with the world. This kind of modeling can really be taken only as a very first starting point in a modeling project. One example would be to create models which cover a certain amount of data. The resulting models will be very different with respect to the weights of the features, and thus provide the possibility of a first inspection of what the data is about.

Observations: Without observations, the model approaches the area of creatio-ex-nihilo phenomena, miracles or revelations. Yet, since we still have assignates available, which we may choose randomly, we can construct matching observations. This kind of activity is quite abundant in the human mind. We call it dreaming.

Similarity etc.: In contrast to that, the removal of similarity is not possible at all. The same holds for the quasi-logic and the procedural aspects.

Theoretical Accounts

It is a cornerstone of our proposals here that a formalization, and hence a model, can never make any proposal about the conditions of its applicability.

Difference is the starting point, not identity.

… … …

.

Trope Theory

According to [8], a “trope is an instance or bit (not an exemplification) of a property or a relation. […] The appeal of tropes for philosophers is as an ontological basis free of the postulation of supposedly obscure abstract entities such as propositions and universals.”

Despite some resemblance to our theory of the abstract model and the assignates, trope theory is radically different. Tropes are about ontology; their appeal for philosophers is precisely as an ontological basis [8]. Thus, the theory of tropes builds upon the separation of ontology and epistemology, which we reject, and vigorously so. Separating them is equivalent to denying the primacy of interpretation and modeling.

Yet, there is an interesting extension, or variety, of trope theory, introduced by Meinard Kuhlmann (which we found here). He relates algebraic/axiomatic quantum field theory (AQFT) to the theory of tropes. Based on (higher) category theory, AQFT is a formalization of quantum field theory that axiomatizes the assignment of algebras of observables to patches of parameter space, an assignment one expects a quantum field theory to provide [9]. For Kuhlmann, the basic things are "tropes," which he defines as "individual property instances," as opposed to abstract properties that happen to have instances. "Things," then, are just collections of tropes. Now the interesting (intermediate) conclusion provided by Kuhlmann is this: to talk about the "identity" of a thing means to pick out certain of the tropes as the core ones that define that thing, and others as peripheral.

This again closely resembles our notion of assignates. In the perspective proposed here, "things" are established through an instance of an abstract model, which comprises a selection of assignates that have been "picked out" from the set of available ones. Yet, Kuhlmann obviously follows an ontology (which we reject) that is based on identity (which we also reject as a feasible starting point). Consequently, his distinction between core tropes and non-core tropes, which is quite abundant in trope theory, tries to separate what has been mistakenly conflated before: assignates, their instances as features (properties), and particular values of these. For Kuhlmann, the core tropes are those properties that define an irreducible representation of a C*-algebra (things like mass, spin, charge, etc.), whereas the non-core tropes are those that identify a state vector within such a representation. Why call both of them tropes? Why discern them a priori to a particular modeling? In some way, trope theory appears to me like "ontologized epistemology," in this respect not so distant from Frege's hyper-platonism.

Other Concepts

Basically, one can nowadays find two salient concepts in the discourse: Popper's rationalist empiricism and Sneed/Stegmüller's scientific structuralism. Some conceptions lie somewhere in between, yet without overcoming their weaknesses [10].

Both frameworks are quite interesting attempts, of course, yet they suffer from serious drawbacks. The main problem with both is that they do not provide sufficient means to distinguish theories from models. Moreover, they both claim that theories make empirical statements about potential observations. Here we strictly disagree. (For a detailed account of the arguments see the chapter about the theory of theory.)

Both frameworks also disregard the problem that neither a model nor a theory can make any proposal about the conditions of its own applicability. Such conditions are, as we will see, the availability of symbols, the mediality of any kind of relation, and the (implied) virtuality of any activity, which, taken together, create a very particular "space."

.

Parts of this article have been published in [x].

  • [1] Müller Research. Available online.
  • [2] Online Etymology Dictionary, entry "model."
  • [3] Wilhelm Vossenkuhl, Solipsismus und Sprachkritik. Beiträge zu Wittgenstein. Parerga, 2009.
  • [4] Meredith Williams, Wittgenstein, Mind and Meaning: Towards a Social Conception of Mind. Routledge, 1999.
  • [5] Sneed
  • [6] Wolfgang Stegmüller
  • [7] Rudolf Carnap
  • [8] "Tropes." Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/tropes
  • [9] nLab, AQFT, http://ncatlab.org/nlab/show/AQFT
  • [10] Weiss
  • [x] Klaus Wassermann, "The Model of Model." In: Vera Bühlmann, Ludger Hovestadt (eds.), Printed Physics, 2011/2012 (in press; a draft version is available online).
