Junkspace, extracted.

July 16, 2012

Some years after “The Generic City,” Koolhaas published a further essay on the problematic field of identity: “Junkspace” (JS).[1] I think it is a good idea to introduce both of them and to relate them before discussing the issues of this field ourselves.

Unlike “The Generic City” (TGC), which was constructed as a kind of report about a film script, JS is more like a “documentary manifesto”: certainly provocative (of thought?), but not a theory either. “Junkspace” throws a concept in/out, according to its message, one could say. As in TGC, Koolhaas tries to condense and to heighten contrasts in order to render the invisible visible. Its language thus should not be misunderstood as “apocalyptic” or the like, or as a reference to actual “facts.” We must also keep in mind that even documentations are inevitably equipped with theories and models, intentions and expectations. The biggest difference between the two essays is probably that in JS Koolhaas does not try to keep his distance through the formal construction of the writing. Hence, it may be legitimate to read the essay as a kind of seriously meant diagnosis.

In many ways, JS reads as a critique of modernism and of post-modernism, not just as attitudes in architecture but as traits of the whole culture, ending in a state where the “cosmetic is the new cosmic.” Although the critique is not made (too) explicit, as Koolhaas tries to avoid bringing in explicit value statements, the tone of JS appears negative. Yet it does so only upon the reader’s interpretation. “Junkspace is a low-grade purgatory.” In Christian mythology, everybody had to pass through it, the good ones and the evil ones, except perhaps the bravest saints. Failure is expressed, but by referring to a certain otherworldliness: “We do not leave pyramids.”

The style of JS is itself ambiguous, presumably intentionally so. On the one hand, it is reminiscent of mathematical, formal series of sentences. Sections often start with existential propositions: “Junkspace is …”. Together, as a series, or a hive, these imply unspoken axioms. On the other hand, it seems as if Koolhaas hesitates to use the figure of logic, or accordingly of cause and effect, with regard to Junkspace itself. Thus Koolhaas performatively exhibits a clear-cut non-modern, or should we say “meta-modern,” attitude. By no means should this be taken as a kind of irrationality, though. We just find lines of historical development, often even only historicizing contrasts. This formal structure is anything but a self-righteous rhetorical game; it is more like a necessary means of maintaining some distance from modernism. The style of JS could be considered (empty) rhetoric only from within a modernist attitude.

Before we deal further with modernism (below, and more extensively here), I first want to list my selection of core passages. The sections in Koolhaas’ text are neither numbered nor divided by headlines (no hierarchies! many “…”! a Junkspace…), so I provide the page numbers in order to facilitate reference. Additionally, I have numbered the pieces so that they can be referenced from within our own writing.

Here is the extract from Junkspace; it is of course hard to make such a selection—even if we allow for a total of 59 passages—as JS is rather densely written. Koolhaas begins with some definitions before turning to its properties, readings, and implications:

Précis of “Junkspace”

(p.175)

1. “Identity” is the new junk food for the dispossessed, globalization’s fodder for the disenfranchised … […] Junk-Space is the residue mankind leaves on the planet. The built […] product of modernization is not modern architecture but Junkspace. Junkspace is what remains after modernization has run its course, or, more precisely, what coagulates while modernization is in progress, its fallout. Modernization had a rational program: to share the blessings of science, universally. Junkspace is its apotheosis, or meltdown.

2. Junkspace is the sum total of our current achievement;

3. It was a mistake to invent modern architecture for the twentieth century. Architecture disappeared in the twentieth century; we have been reading a footnote under a microscope hoping it would turn into a novel;

4. […] our concern for the masses has blinded us to People’s Architecture. Junkspace seems an aberration, but it is the essence, the main thing—the product of an encounter between escalator and air-conditioning.

5. Continuity is the essence of Junkspace.

(p.176)

6. Junkspace is sealed, held together not by structure but by skin, like a bubble.

7. Junkspace is a Bermuda Triangle of concepts, an abandoned petri dish: it cancels distinctions, undermines resolve, confuses intention with realization. It replaces hierarchy with accumulation, composition with addition. […] A fuzzy empire of blur, it […] offer[s] a seamless patchwork of the permanently disjointed. […] Junkspace is additive, layered, and lightweight, not articulated in different parts but subdivided, […].

8. Junkspace’s iconography is 13 percent Roman, 8 percent Bauhaus and 7 percent Disney (neck and neck), 3 percent Art Nouveau, followed closely by Mayan.

(p.177)

9. Junkspace is beyond measure, beyond code … Because it cannot be grasped, Junkspace cannot be remembered. It is flamboyant yet unmemorable, like a screen saver;

10. Junkspace’s modules are dimensioned to carry brands;

11. Junkspace performs the same role as black holes in the universe: they are essences through which meaning disappears.

12. Junkspace is best enjoyed in a state of post-revolutionary gawking. Polarities have merged.

13. Modern architecture […] exposes what previous generations kept under wraps: structures emerge like springs from a mattress.

14. Junkspace thrives on design, but design dies in Junkspace […] Regurgitation is the new creativity.

15. Superstrings of graphics, […] LEDs, and video describe an authorless world beyond anyone’s claim, always unique, utterly unpredictable, yet intensely familiar.

(p.178)

16. Junkspace sheds architectures like a reptile sheds skins, is reborn every Monday morning.

17. Architects thought of Junkspace first and named it Megastructure, the final solution to transcend their huge impasse.

18. In Junkspace, the tables are turned: it is subsystem only, without superstructure, orphaned particles in search of a framework or pattern.

19. Each element performs its task in negotiated isolation.

20. Instead of development, it offers entropy.

21. Change has been divorced from the idea of improvement. There is no progress; like a crab on LSD, culture staggers endlessly sideways …

22. Everywhere in Junkspace there are seating arrangements, ranges of modular chairs, even couches, as if the experience Junkspace offers its consumers is significantly more exhausting than any previous spatial sensation;

(p.179)

23. Junkspace is fanatically maintained, the night shift undoing the damage of the day shift in an endless Sisyphean replay. As you recover from Junkspace, Junkspace recovers from you.

24. Traditionally, typology implies demarcation, the definition of a singular model that excludes other arrangements. Junkspace represents a reverse typology of cumulative, approximative identity, less about kind than about quantity. But formlessness is still form, the formless also a typology.

25. Junkspace can either be absolutely chaotic or frighteningly aseptic—like a best-seller—overdetermined and indeterminate at the same time.

26. Junkspace is often described as a space of flows, but that is a misnomer; flows depend on disciplined movement, bodies that cohere. Junkspace is a web without a spider; […] It is a space of collision, a container of atoms, busy, not dense …

(p.180)

27. Junkspace features the tyranny of the oblivious: sometimes an entire Junkspace comes unstuck through the nonconformity of one of its members; a single citizen of another culture—a refugee, a mother—can destabilize an entire Junkspace, […]

28. Flows in Junkspace lead to disaster: department stores at the beginning of sales; the stampedes triggered by warring compartments of soccer fans;

29. Traffic is Junkspace, from airspace to the subway; the entire highway system is Junkspace […]

30. Aging in Junkspace is nonexistent or catastrophic; sometimes an entire Junkspace—a department store, a nightclub, a bachelor pad—turns into a slum overnight without warning.

(p.181)

31. Corridors no longer simply link A to B, but have become “destinations.” Their tenant life tends to be short: the most stagnant windows, the most perfunctory dresses, the most implausible flowers. All perspective is gone, as in a rainforest (itself disappearing, they keep saying … ).

32. Trajectories are launched as ramp, turn horizontal without any warning, intersect, fold down, suddenly emerge on a vertiginous balcony above a large void. Fascism minus dictator.

(p.182)

33. There is zero loyalty—and zero tolerance—toward configuration, no “original” condition; architecture has turned into a time-lapse sequence to reveal a “permanent evolution.” … The only certainty is conversion—continuous—followed, in rare cases, by “restoration,” the process that claims ever new sections of history as extensions of Junkspace.

34. History corrupts, absolute history corrupts absolutely. Color and matter are eliminated from these bloodless grafts.

35. Sometimes not overload but its opposite, an absolute absence of detail, generates Junkspace. A voided condition of frightening sparseness, shocking proof that so much can be organized by so little.

36. The curse of public space: latent fascism safely smothered in signage, stools, sympathy … Junkspace is postexistential; it makes you uncertain where you are, obscures where you go, undoes where you were. Who do you think you are? Who do you want to be? (Note to architects: You thought that you could ignore Junkspace, visit it surreptitiously, treat it with condescending contempt or enjoy it vicariously … because you could not understand it, you’ve thrown away the keys … But now your own architecture is infected, has become equally smooth, all-inclusive, continuous, warped, busy, atrium-ridden …)

(p.183)

37. Restore, rearrange, reassemble, revamp, renovate, revise, recover, redesign, return—the Parthenon marbles—redo, respect, rent: verbs that start with re- produce Junkspace …

38. Junkspace will be our tomb.

39. Junkspace is political: It depends on the central removal of the critical faculty in the name of comfort and pleasure.

40. Not exactly “anything goes”; in fact, the secret of Junkspace is that it is both promiscuous and repressive: as the formless proliferates, the formal withers, and with it all rules, regulations, recourse …

41. Junkspace […] is the interior of Big Brother’s belly. It preempts people’s sensations. […] it blatantly proclaims how it wants to be read. Junkspace pretends to unite, but it actually splinters. It creates communities not out of shared interest or free association, but out of identical statistics and unavoidable demographics, an opportunistic weave of vested interests.

(p.184)

42. God is dead, the author is dead, history is dead, only the architect is left standing … an insulting evolutionary joke … A shortage of masters has not stopped a proliferation of masterpieces. “Masterpiece” has become a definitive sanction, a semantic space that saves the object from criticism, leaves its qualities unproven, its performance untested, its motives unquestioned.

43. Junkspace reduces what is urban to urbanity. Instead of public life, Public SpaceTM: what remains of the city once the unpredictable has been removed …

44. Inevitably, the death of God (and the author) has spawned orphaned space; Junkspace is authorless, yet surprisingly authoritarian … At the moment of its greatest emancipation, humankind is subjected to the most dictatorial scripts: […] The chosen theater of megalomania—the dictatorial—is no longer politics, but entertainment.

45. Why can’t we tolerate stronger sensations? Dissonance? Awkwardness? Genius? Anarchy? … Junkspace heals, or at least that is the assumption of many hospitals.

(p.185)

46. Often heroic in size, planned with the last adrenaline of modernism’s grand inspiration, we have made them (too) human;

47. Junkspace is space as vacation;

(p.186)

48. Junkspace features the office as the urban home, a meeting-boudoir. […] Espace becomes E-space.

49. Globalization turns language into Junkspace. […] Through the retrofitting of language, there are too few plausible words left; our most creative hypotheses will never be formulated, discoveries will remain unmade, concepts unlaunched, philosophies muffled, nuances miscarried … We inhabit sumptuous Potemkin suburbs of weasel terminologies. Aberrant linguistic ecologies sustain virtual subjects in their claim to legitimacy, help them survive … Language is no longer used to explore, define, express, or to confront but to fudge, blur, obfuscate, apologize, and comfort … it stakes claims, assigns victimhood, preempts debate, admits guilt, fosters consensus. […] a Satanic orchestration of the meaningless …

50. Intended for the interior, Junkspace can easily engulf a whole city.

(p.187)

51. Seemingly at the opposite end of Junkspace, the golf course is, in fact, its conceptual double: empty, serene, free of commercial debris. The relative evacuation of the golf course is achieved by the further charging of Junkspace. The methods of their design and realization are similar: erasure, tabula rasa, reconfiguration. Junkspace turns into biojunk; ecology turns into ecospace. Ecology and economy have bonded in Junkspace as ecolomy.

52. Junkspace can be airborne, bring malaria to Sussex;

(p.188)

53. Deprivation can be caused by overdose or shortage; both conditions happen in Junkspace (often at the same time). Minimum is the ultimate ornament, a self-righteous crime, the contemporary Baroque.

54. It does not signify beauty, but guilt.

55. Outside, in the real world, the “art planner” spreads Junkspace’s fundamental incoherence by assigning defunct mythologies to residual surfaces and plotting three-dimensional works in leftover emptiness. Scouting for authenticity, his or her touch seals the fate of what was real, taps it for incorporation in Junkspace.

56. The only legitimate discourse is loss; art replenishes Junkspace in direct proportion to its own morbidity.

(p.189)

57. […] maybe the origins of Junkspace go back to the kindergarten …

58. Will Junkspace invade the body? Through the vibes of the cell phone? Has it already? Through Botox injections? […] Is each of us a mini-construction site? […]

(p.190)

59. Is it [m: mankind] a repertoire of reconfiguration that facilitates the intromission of a new species into its self-made Junksphere? The cosmetic is the new cosmic… ◊

Modernism

JS is about the consequences of modernism for architecture and for urbanism. Koolhaas does not hesitate to spell it out: modernization, modernism, ends in a “meltdown.” As an alternative he offers the “apotheosis,” a particular quality as a Golden Calf of modernization. Within the context of urban life and architectural activities, this outcome shows up as “Junkspace.” Its essence is emptiness, isolation, splintering, arbitrariness. Its “victory” is named by its offer, entropy, and its essence is continuity. Probably it is meant as a kind of tertiary chaos, annihilating any condition for the possibility of discernibility, unfortunately as the final point attractor. We will see.

Koolhaas describes Junkspace as an unintended outcome of a global collective activity. Obviously, Koolhaas is struggling with that, or with the unintendedness of the effect, in other words with emergence and self-organization. Emergence and self-organization can be understood only in the wider context of complexity as we have outlined it previously (see this piece). The concept of complexity as we have constructed it is by no means anti-scientific in a fundamental sense. Yet it poses a severe challenge to scientism as it is practiced today, since our concept explicitly refers to a reflected conceptual embedding, something that is still excluded from natural science. In any case, complexity as an explicated concept must be considered a necessary part of architectural theory if we take Koolhaas and writings such as “Junkspace” seriously. Without it, we could not make sense of the difference between standardization and homogenization, between uniqueness and singularity, between history and identity, between development and evolution, or between randomness and heterotopia.

Modernism and its effects are the not-so-hidden agenda of JS. We have to be clear about this concept—at least concerning its foundations, although we will not find enough space here to discuss or even just list its branches, which reach not only to Marcuse’s office in Frankfurt—if we want to understand neo-leftist interpretations of JS such as Jameson’s (“Future City” [2]), and the not-so-hidden irony expressed by the resonating label “Future Cities Lab,” which denotes the urbanism project of the Department of Architecture (one of the biggest in Europe) of the Swiss Federal Institute of Technology (ETHZ). It is also the name of a joint venture between the National University of Singapore (NUS) and ETHZ. Yes, they indeed call it a Lab(oratory), a place usually producing hives of “petri dishes,” either abandoned (see 7. above) or “containing” the city itself (see section 8.1 of “The Generic City”), and at the same time still, partially in contradiction to its practices, an oratory of modernism. Perhaps. (More about that later.)

Here, at the latest, we have to address the question:
What is the problem with modernism?

This will be the topic of the next post.

References
  • [1] Rem Koolhaas (2002). “Junkspace.” October, Vol. 100, “Obsolescence,” pp. 175–190. MIT Press.
  • [2] Fredric Jameson (2003). “Future City.” New Left Review 21, May–June 2003, pp. 65–79.

۞


Technical Aspects of Modeling

December 21, 2011

Modeling is not only inevitable in an empirical world, it is also a primary practice. Being in the world as an empirical being thus means continuous modeling. Modeling is not an event-like activity; it is much more like the continuous collection of energy in a photovoltaic device. This does not apply only to living systems; it should also apply to economic organizations.

Modeling thus comprises much more than selecting some data from a source and applying a method or algorithm to them. You may conceive of that difference, metaphorically, as the difference between a machine and a plant for producing goods. In this chapter we will first identify and briefly characterize the elements of continuous modeling; then we will show the overall arrangement of those elements as well as the structure of the modeling core process. Finally, we will step further down, to the level of the properties a software for (continuous) modeling should provide.

You will find many more details and a thorough discussion of the various design decisions for the respective software system in the attached document “The SPELA-Approach to Predictive Modeling.” The acronym stands for “Self-Configuring Profile-Based Evolutionary Learning Approach.” The document also describes how the results of modeling based on SPELA may properly be used for reasoning about the data.

Elements of Modeling

As we have shown in the chapter about the generalized model, a model needs a purpose. This targeted modeling and its internal organization are the subject of this chapter. We will not deal here with the problem of modeling unstructured data such as texts or images. Understanding language tokens requires a considerable extension of the modeling framework, despite the fact that modeling as outlined here remains an important part of understanding language tokens. Those extensions mainly concern an appropriate probabilization of what we experience as words or sentences. We will discuss this elsewhere, more technically here, fully contextualized here.

Goal-oriented modeling can be automated to a great extent, if an appropriate perspective to the concept of model is taken (see chapters about the generalized model, and model as categories).

Such automated modeling also can be run as a continuous process. Its main elements are the following:

  • (1) post-measurement phase: selecting and importing data;
  • (2) extended classification by a core process group: building an intensional representation;
  • (3) reflective post-processing (validation) and meta-modeling, based on a (self-)monitoring repository;
  • (4) harvesting results and/or aligning measurement.
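The four elements above can be condensed into a minimal control-loop sketch. This is purely illustrative: the function names and the toy “intensional representation” (per-variable means over the selected rows) are my own assumptions, not part of the SPELA document.

```python
def select_data(source):
    # (1) post-measurement phase: select and import usable observations
    return [row for row in source if row is not None]

def build_intension(data):
    # (2) core process group: derive an intensional representation,
    # here crudely stood in for by per-variable means over the rows
    n = len(data)
    return [sum(col) / n for col in zip(*data)]

def validate(profile, data):
    # (3) reflective post-processing: a toy check that the derived
    # profile lies inside the observed value ranges
    return all(min(col) <= p <= max(col)
               for p, col in zip(profile, zip(*data)))

def modeling_cycle(source):
    # (4) harvest the result; in continuous modeling this cycle repeats
    data = select_data(source)
    profile = build_intension(data)
    return profile if validate(profile, data) else None

observations = [[1.0, 2.0], [3.0, 4.0], None, [5.0, 6.0]]
print(modeling_cycle(observations))  # -> [3.0, 4.0]
```

In a continuous setting, step (4) would feed back into step (1), either by harvesting the profile or by re-aligning the measurement itself.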

Overall Organization

The elements of (continuous, automated) modeling need to be arranged according to the following multi-layered, multi-loop organizational scheme:

Figure 1: Organizational elements for automated, continuous modeling. L<n> = loop levels; T = transformation of data; S = segmentation of data; R = detecting pairwise relationships between variables and identifying them as mathematical functions F(d) = f(x,y), F(d) being a non-linear function improving the discriminant power represented by the typology derived in S; PP = post-processing, e.g. creating dependency path diagrams. These elements are connected through four levels of loops, L1 through L4, where L1 = finding a semi-ordered list of optimized models, L2 = introducing the relationships found by R into the data transformation, L3 = additional sampling of raw data based on post-processing of the core process group (active sampling), e.g. for cross-validation purposes, and finally L4 = adapting the objective of the modeling process based on the results presented to the user. Feedback level L4 may be automated by selecting from pre-configured modeling policies, where a policy is a set of rules and parameters controlling the feedback levels L1 through L3 as well as the core modules. DB = some kind of data source, e.g. a database.

This scheme may differ from anything you have seen so far about modeling. Common software packages, whether commercial (SPSS, SAS, S-Plus, etc.) or open source (R, Weka, Orange), do not natively support it. Some of them would allow for a similar scheme, but it is hard to accomplish. For instance, the transformation part is not properly separated and embedded in the overall process, and there is no way to screen for pairwise relationships that are then automatically actualized as further transformations of the “data.” There are no meta-data and no abstraction inherent to the process. As a consequence, literally everything is left to the user, rendering those packages into gigantic formalisms. This comes, on the other hand, as little surprise, given the current paradigm of deterministic computing.

The main reason for the inadequacy of these packages, however, is the inappropriate theory behind them. Neither the paradigm of statistics nor that of “data mining” is applicable at all to the task of automated and continuous modeling.

Anyway, next we will describe the loops appearing in the scheme. The elements of the core process will be described in detail later.

Here we should mention another process-oriented approach to predictive modeling, the CRISP-DM scheme (Cross-Industry Standard Process for Data Mining), which was published in the late 1990s as the result of an initiative launched by an industry consortium that included NCR, Daimler-Benz, and SPSS. However, CRISP-DM is of a hopelessly solipsistic character and only of little value.

Before we start, we should note that the scheme above reflects an ideal and rather simple situation. More often than not, a nested, if not fractal, structure appears, especially regarding loop levels L1 and L2.

Loop Level 1: Resonance

Here we find the associative structure, e.g. a self-organizing map (SOM). An important requirement is that this mechanism works bottom-up; a consequence of this is that it is an approximate mechanism.

The purpose of this element is to perform a segmentation of the available data, given a particular data space as defined by the “features,” or, more appropriately, the assignates (see the chapter about the generalized model).

It is important to understand that it is impossible for the segmentation mechanism to change the structure of the available data space. Loop level L1 also provides what is called the transition from an extensional to an intensional description.
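As a rough illustration of such a bottom-up, approximate segmentation mechanism, here is a deliberately minimal one-dimensional self-organizing map. All parameters (unit count, decay schedule, neighborhood radius) are arbitrary choices for this sketch; they do not reflect the modified SOM discussed further below.

```python
import random

def train_som(data, n_units=4, epochs=200, lr=0.5, radius=1.0, seed=0):
    """Minimal 1-D self-organizing map: bottom-up, approximate
    segmentation of the data into n_units prototype profiles."""
    rng = random.Random(seed)
    dim = len(data[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        frac = t / epochs
        x = rng.choice(data)
        # best-matching unit (BMU) by squared Euclidean distance
        b = min(range(n_units),
                key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
        for i in range(n_units):
            # neighborhood and learning rate both shrink over time
            if abs(i - b) <= radius * (1 - frac):
                step = lr * (1 - frac)
                units[i] = [u + step * (v - u) for u, v in zip(units[i], x)]
    return units

# two well-separated clusters in a 2-D data space
data = [[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]]
prototypes = train_som(data)
for p in prototypes:
    print([round(v, 2) for v in p])  # prototypes drift toward the clusters
```

Note that the map can only move its prototypes around inside the given data space; it cannot change the structure of that space, exactly as stated above.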

L1 also performs feature selection. Given a set of features FO, representing descriptional “dimensions” or aspects of the observations O, many of those features are not related to the intended target of the model. Hence, they introduce noise and have to be removed, which amounts, in other words, to a selection of the remaining features.

In many applications there are large numbers of variables, especially if L2 is repeated, resulting in a vast number of possible selections. The number of possible combinations from the set of assignates easily exceeds 10^20, and sometimes even 10^100, a larger quantity than the number of subatomic particles in the visible universe. The only way to find a reasonable proposal for a “good” selection is by means of an evolutionary mechanism; formal, probabilistic approaches will fail.
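The evolutionary mechanism can be sketched as a simple genetic algorithm over feature bit masks. The fitness function below (class-mean separation minus a parsimony penalty) is a stand-in of my own devising; it is not the cost function actually used in SPELA.

```python
import random

def fitness(mask, data, target):
    """Toy cost function: score a feature subset by how far apart the
    two target classes lie on the selected columns, with a small
    penalty per selected feature to favor parsimonious subsets."""
    cols = [i for i, m in enumerate(mask) if m]
    if not cols:
        return 0.0
    score = 0.0
    for c in cols:
        a = [row[c] for row, t in zip(data, target) if t == 0]
        b = [row[c] for row, t in zip(data, target) if t == 1]
        score += abs(sum(a) / len(a) - sum(b) / len(b))
    return score - 0.01 * len(cols)

def evolve(data, target, n_feat, pop=20, gens=30, seed=1):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_feat)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda m: fitness(m, data, target), reverse=True)
        parents = population[:pop // 2]          # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            p, q = rng.sample(parents, 2)
            cut = rng.randrange(1, n_feat)       # one-point crossover
            child = p[:cut] + q[cut:]
            if rng.random() < 0.2:               # point mutation
                j = rng.randrange(n_feat)
                child[j] = 1 - child[j]
            children.append(child)
        population = parents + children
    return max(population, key=lambda m: fitness(m, data, target))

# feature 0 is informative for the target; features 1 and 2 are noise
data = [[0.0, 5.0, 2.0], [0.1, 4.0, 3.0], [1.0, 5.0, 2.5], [0.9, 4.5, 2.2]]
target = [0, 0, 1, 1]
best = evolve(data, target, n_feat=3)
print(best)  # the informative feature (index 0) should be selected
```

With 3 features this is overkill, of course; the point is that the same loop scales to search spaces where exhaustive enumeration of 10^20 subsets is out of the question.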

The result of this step is a segmentation that can be represented as a table. The rows represent the profiles of prototypes, while the columns show the selected features (for further details see the section about the core process below).

Loop Level 2: Hypothetico-deductive Transformation

This step starts with a fixed segmentation based on a particular selection ℱ out of FO. The prototypes identified by L1 are the input data for a screening that employs analytic transformations of values within (mostly) pairwise selected variables, such as f(a,b) = a·b or f(a,b) = 1/(a+b). Given a fixed set of analytic functions, a complete screening is performed for all possible combinations. Typically, several million individual checks are performed.
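A screening of this kind can be sketched in a few lines. The transform set and the correlation-based ranking criterion are illustrative assumptions on my part; the text does not specify how candidate transformations are actually scored.

```python
import itertools

# a fixed set of candidate analytic transformations on variable pairs
TRANSFORMS = {
    "a*b":     lambda a, b: a * b,
    "a-b":     lambda a, b: a - b,
    "1/(a+b)": lambda a, b: 1.0 / (a + b) if a + b else 0.0,
}

def corr(x, y):
    # Pearson correlation, written out for self-containment
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def screen_pairs(profiles, target):
    """Apply every transform to every variable pair of the intensional
    profiles and rank the derived variables by |correlation| with the
    target; the best candidates would be fed back into T (loop L2)."""
    n_var = len(profiles[0])
    results = []
    for i, j in itertools.combinations(range(n_var), 2):
        for name, f in TRANSFORMS.items():
            derived = [f(row[i], row[j]) for row in profiles]
            results.append((abs(corr(derived, target)), name, i, j))
    return sorted(results, reverse=True)

# prototype profiles (intensional level), one target value per prototype
profiles = [[1.0, 2.0], [2.0, 3.0], [3.0, 1.0], [4.0, 4.0]]
target = [2.0, 6.0, 3.0, 16.0]  # exactly the product of the two variables
best_score, best_name, i, j = screen_pairs(profiles, target)[0]
print(best_name)  # -> a*b
```

Note that the screening runs on prototype profiles, not on the raw observations, which keeps the combinatorial load manageable.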

It is very important to understand that it is not the original data that are used as input, but the data on the level of the intensional description, i.e. a first-order abstraction of the data.

Once the most promising transformations have been identified, they are automatically introduced into the set of original transformations in element T of figure 1.

Loop Level 3: Adaptive Sampling

See the legend of figure 1.

Loop Level 4: Re-Orientation

While the use aspects are of course already reflected in the target variable and the selected risk structure, there is a further important aspect concerning the “usage” of models. Up to level 3 the whole modeling process can be run autonomously; not so on level 4.

Level 4 and its associated loop have been included in the modeling scheme as a dedicated means of re-orientation. The results of an L3 modeling raid could lead to “insights” that change the preference structure of the user. Upon this change in his or her preferences, the user could choose a different risk structure, or even a different target, perhaps also creating a further model with a complementary target.

These choices obviously depend on external influences such as organizational issues, or limitations and opportunities regarding the available resources.

Structure of the Modeling Core Process

Figure 2: Organizational elements of the modeling core process: 1. Transformation of Data; 2. Goal-oriented Segmentation; 3. Artificial Evolution; 4. Dependencies. The bottom row of the figure shows important keywords for each step: for the transformation, P = putative property (“assignate”), F = arbitrary function, var = “raw” variable(s); for the segmentation, profiles, prototypes, concepts; for the artificial evolution, combinatorial exploration of associations between variables; for the dependencies, complete calculation of relations as analytic functions.

Transformation of Data

This step performs a purely formal, arithmetic, and hence analytic transformation of values in a data table. Examples are:

  • the log-transformation of a single variable, shifting the mode of the distribution to the right and thus allowing for a better discrimination of small values; one can also use it to create missing values in order to adaptively filter certain values, and thus observations;
  • the combinatorial synthesis of new variables from two or more variables, which results in a stretching, warping, or folding of the parameter space;
  • separating the values of one variable into two new and mutually exclusive variables;
  • binning, that is, reducing the scale of a variable, say from numeric to ordinal;
  • any statistical measure or procedure changing the quality of an observation: the resulting values do not reflect observations, but instead represent a weight relative to the statistical measure.
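A few of these transformations, sketched concretely; the particular values and bin edges are arbitrary examples:

```python
import math

values = [1.0, 10.0, 100.0, 1000.0]

# log-transformation: compresses large values, spreads out small ones
logged = [math.log10(v) for v in values]
print([round(v, 6) for v in logged])

# binning: reduce a numeric scale to an ordinal one
def to_bins(xs, edges):
    # bin index = number of edges at or below the value
    return [sum(x >= e for e in edges) for x in xs]

print(to_bins(values, edges=[5.0, 50.0, 500.0]))  # -> [0, 1, 2, 3]

# separating one variable into two mutually exclusive ones,
# using None as an adaptively created missing value
lo = [v if v < 50 else None for v in values]
hi = [v if v >= 50 else None for v in values]
print(lo, hi)
```

Each derived column is a new assignate; as noted below, the number of variables grows with every such transformation.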

A salient effect of the transformation of data is the increase in the number of variables. Also note that any of those analytic transformations destroys a little bit of the total information, although it also leads to a better discriminability of certain sub-spaces of the parameter space. Most important, however, is to understand that any analytic transformation is conceived as a hypothesis. Whether it is appropriate or not can be revealed ONLY by means of a targeted (goal-oriented) segmentation, which implies a cost function that in turn comprises the operationalization of risk (see the chapter about the generalized model).

Any of the resulting variables consists of assignates, i.e. the assigned attributes or features. Due to the transformations, they comprise not just the “raw” or primary properties registered upon the first contact of the observer with the observed, but also all of the transformations applied to such raw properties (aka variables). This results in an extended set of assignates.

We can now also see that transformations of measured data take on the same role as measurement devices. Initial differences in signals are received and selectively filtered according to the quasi-material properties of the device. The first step in figure 2 above thus also represents what could be called generalized measurement.

Transforming data by whatever algorithm or analytic method does NOT create a model. In other words, the model-aspect of statistical models does not reside in the statistical procedure, precisely because statistical models are not built upon associative mechanisms. The same is true for the widespread “physicalist” modeling, e.g. in the social sciences or in urbanism. In these areas, measured data are often represented by a “formula,” i.e. a purely formal denotation, often in the form of a system of differential equations. Such systems are not by themselves a model, because they are analytic rewritings of the data. The model-aspect of such formulas gets instantiated only through associating parts of the measured data with a target variable as an operationalization of the purpose. Without a target variable, no purpose; without a purpose, no model; without a model, no association, hence no prediction, no diagnostics, and no notion of risk whatsoever. Formal approaches always need further side-conditions and premises before they can be applied. Yet it is silly to come up with conditions for the instantiation of “models” after the model has been built, since those conditions would inevitably lead to a different model. The modeling-aspect, again, is completely shifted onto the person(s) applying the model; hence such modeling is deeply subjective, implying serious and, above all, completely invisible risks regarding reproducibility and stability.

We conclude that the pretended modeling by formal methods has to be rated as bad practice.

Goal-oriented Segmentation

The segmentation of the data can be represented as a table. The rows represent the profiles of prototypes, while the columns show the selected assignates (features); for further details see the section about the core process (to be added at a future date!).

In order to allow for a comparison of the profiles, the profiles have to be “homogeneous” with respect to their normalized variance. The standard SOM tends to collect “waste” or noise in some clusters, i.e. deeply dissimilar observations are collected in a single group precisely because of their dissimilarity. Here we find one of the important modifications of the standard SOM as it is widely used. The effect of this modification is of vital importance. For other design issues around the Self-organizing Map see the discussion here.
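
The homogeneity criterion can be illustrated by a small sketch; the threshold and the variance measure below are assumptions for the purpose of illustration, not the actual modification of the SOM:

```python
import numpy as np

def inhomogeneous_segments(observations, labels, limit=2.0):
    """Flag segments whose mean within-segment variance exceeds `limit`
    times the median segment variance -- a stand-in for the homogeneity
    criterion discussed in the text."""
    segs = np.unique(labels)
    variances = np.array([observations[labels == k].var(axis=0).mean()
                          for k in segs])
    reference = np.median(variances)
    return segs[variances > limit * reference]

rng = np.random.default_rng(1)
tight_a = rng.normal(0.0, 0.1, size=(20, 3))   # two homogeneous segments
tight_b = rng.normal(5.0, 0.1, size=(20, 3))
waste = rng.normal(0.0, 5.0, size=(20, 3))     # "waste": deeply dissimilar points
data = np.vstack([tight_a, tight_b, waste])
labels = np.repeat([0, 1, 2], 20)

flagged = inhomogeneous_segments(data, labels)
```

The flagged segments are exactly those where dissimilar observations were lumped together; such groups must not be compared profile-by-profile with the homogeneous ones.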

Artificial Evolution

Artificial evolution is necessary, and even inevitable, for screening the vast parameter space.
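
A minimal sketch of such an evolutionary screening; the objective function is a stand-in for what would in practice be the validated quality of a model built with the given parameter configuration:

```python
import random

def fitness(params):
    # Stand-in objective: in practice, the (negated) validated error of a
    # model built with this parameter configuration.
    x, y = params
    return -((x - 3.0) ** 2 + (y + 1.0) ** 2)

def evolve(generations=60, population=30, keep=10, sigma=0.5, seed=0):
    rng = random.Random(seed)
    # random initial screen of the parameter space
    pop = [(rng.uniform(-10, 10), rng.uniform(-10, 10)) for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:keep]                      # selection (elitist)
        offspring = [
            (p[0] + rng.gauss(0, sigma), p[1] + rng.gauss(0, sigma))  # mutation
            for p in [rng.choice(parents) for _ in range(population - keep)]
        ]
        pop = parents + offspring
    return max(pop, key=fitness)

best = evolve()
```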

Dependencies

See the section about Loop Level 2 above.

Bad Habits

In the practice of modeling one can find bad habits regarding any of the elements, loops and steps outlined above. Beginning with the preparation of data, there is the myth that missing values need to be “guessed” before an analysis can be done. What would be the justification for the selection of a particular method to “infer” a value that is missing due to incomplete measurement? What do people expect to find in such data? Of course, filling gaps in data before creating a model from it is deep nonsense.

Another myth, still in the early phases of the modeling process, is represented by the belief that analytical methods applied to measurement data “create” a model. They don’t. They just destroy information. As soon as we align the working of the modeling mechanism to some target variable, the whole endeavor is not analytic any more. Yet, without a target variable we would not create a model, just re-written measurement values that don’t even measure “anything”: measurement also needs a purpose. So it would be just silly first to pretend to do measurement and then to drop that intention by removing the target variable. All of statistics works like that. Whatever statistics is doing, it is not modeling. If someone uses statistics, that person uses just a rewriting tool; the modeling itself remains deeply opaque, based on personal preferences, in short: unscientific.

People recognize more and more that clustering is indispensable for modeling. Yet, many people, particularly in the biological sciences (all the -omics), believe that there is a meaningful distinction between unsupervised and supervised clustering, and that both varieties produce models. That’s deeply wrong. One cannot apply, say, K-means clustering, or a SOM, without a target variable, that is, a cost function, just for checking whether there is “something in the data.” Any clustering algorithm applies some criteria to separate the observations. Why then should someone believe that precisely the more or less opaque, but surely purely formal, criteria of an arbitrary clustering algorithm should perfectly match the data at hand? Of course, nobody should believe that. Instead of surrendering oneself blindly to some arbitrary algorithmic properties, one should think of those criteria as free parameters that have to be tested according to the purpose of the modeling activity.
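
The point can be sketched as follows: the clustering criterion (here simply the number of segments k of a toy K-means, deterministically initialized) is treated as a free parameter and evaluated against the purpose, encoded as a target variable, instead of being trusted blindly:

```python
import numpy as np

def kmeans(data, k, iters=25):
    # toy K-means with a deterministic quantile-based start
    qs = np.linspace(0, 1, k + 2)[1:-1]
    centers = np.quantile(data, qs, axis=0)
    for _ in range(iters):
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([data[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

def purity(labels, target):
    # fraction of observations matching their segment's majority class:
    # the purpose, encoded as a target, judges the clustering criterion
    return sum(max((target[labels == j]).sum(),
                   (labels == j).sum() - (target[labels == j]).sum())
               for j in np.unique(labels)) / len(target)

rng = np.random.default_rng(2)
data = np.vstack([rng.normal(0, 0.3, size=(30, 2)),
                  rng.normal(3, 0.3, size=(30, 2))])
target = np.repeat([0, 1], 30)

# the criterion k as a free parameter, tested against the purpose
scores = {k: purity(kmeans(data, k), target) for k in (2, 3, 4)}
```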

Another widespread misbehavior concerns what is called “feature selection.” It is a common practice first to apply logistic regression to reduce the number of properties, and then, in a completely separated second step, to apply some kind of “pattern matching” approach. Of course, the logistic regression acts as a kind of filter. But: is this filter compatible with the second method, is it appropriate to the data and the purpose at hand? You will never find out, because you have applied two different methods. It is thus impossible to play the ceteris paribus game. It appears comprehensible to proceed according to the split-method approach if you have just paper and pencil at your disposal. It is inexcusable to do so if there are computers available.

In contrast to the split-method approach, one should use a single method that is able to perform feature selection AND data segmentation in the same process.
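
As an illustration only (not the particular method referred to in the text), the following sketch alternates segmentation and feature weighting within a single process, so that selection and segmentation are never split across two incompatible methods:

```python
import numpy as np

def weighted_kmeans_fs(data, k=2, rounds=5):
    """Sketch: segmentation under feature weights, then reweighting of the
    features ("assignates") by their between-segment separation -- selection
    and segmentation in one and the same process."""
    w = np.ones(data.shape[1])
    qs = np.linspace(0, 1, k + 2)[1:-1]
    centers = np.quantile(data, qs, axis=0)        # deterministic start
    for _ in range(rounds):
        # segmentation step, under the current feature weights
        d = ((data[:, None, :] - centers[None, :, :]) ** 2 * w).sum(axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([data[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
        # weighting step: between-segment variance relative to total variance
        between = centers.var(axis=0)
        w = between / (data.var(axis=0) + 1e-12)
        w = w * len(w) / w.sum()                   # keep total weight constant
    return labels, w

rng = np.random.default_rng(3)
informative = np.concatenate([rng.normal(0, 0.3, 30), rng.normal(3, 0.3, 30)])
noise = rng.normal(0, 1.0, 60)                     # an irrelevant assignate
data = np.column_stack([informative, noise])

labels, weights = weighted_kmeans_fs(data)
```

The irrelevant assignate fades out while the segmentation forms, in one pass; a filter method run beforehand could never be checked against this segmentation ceteris paribus.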

There are further problematic attitudes concerning the validation of models, especially concerning sampling and risk, which we won’t discuss here.

Conclusion

In this essay we provide the first general and complete scheme for target oriented modeling. The main structural achievements comprise (1) the separation of analytic transformation, (2) associative sorting, (3) the evolutionary optimization of the selection of assignates, and (4) the constructive and combinatorial derivation of new assignates.

Note that any (computational) procedure of modeling fits into this scheme, even this scheme itself. Ultimately, any modeling results in a supervised mapping. In the chapters about the abstract formalization of models as categories we argue that models are level-2-categories.

It is precisely this separation that allows for an autonomous execution of modeling once the user has determined her target and the risk that appears acceptable. It depends completely on the context—whether external, organizational, or internal and more psychological—and on individual habits how these dimensions of purpose and safety are configured and handled.

From the perspective of our general interest in machine-based epistemology we can clearly see that target oriented modeling by itself does not contribute too much to that capability. Modeling, even if creating new hypotheses, and even if we can reject the claim that modeling is an analytic activity, necessarily remains within the borders of the space determined by the purpose and the observations.

There is no particular difficulty in running even advanced modeling in an autonomous manner. Performing modeling is an almost material performance. Defining the target and selecting a risk attitude are of course not. Thus, in any predictive or diagnostic modeling the crucial point is to determine those. Particularly the risk attitude implies unrooted beliefs and thus the embedding into social processes. Frequently, humans even change the target in order to obey certain limits concerning risk. Thus, in commercial projects the risk should be the only dimension one has to talk about when it comes to predictive/diagnostic modeling. Discussing methods or tools is nothing but silly.

It is pretty clear that approaching the capability for theory-building needs more than modeling, although target oriented modeling is a necessary ingredient. We will see in further chapters how we can achieve that. The important step will be to drop the target from modeling. The result will be a pre-specific modeling, or associative storage, which serves as a substrate for any modeling that serves a particular target.

This article was first published 21/12/2011, last revision is from 5/2/2012

۞

Mental States

October 23, 2011 § Leave a comment

The issue we are dealing with here is the question whether we are justified to assign “mental states” to other people on the basis of our experience, that is, based on weakly valid predictions and the use of some language upon them.

Hilary Putnam, in an early writing (at least before 1975), used the notion of mental states, and today almost everybody does so. In the following passage he tries to justify the reasonability of inferring mental states (italics by H. Putnam, colored emphasis by me); I think this passage is no longer compatible with his results in “Representation and Reality”, although most people, particularly from computer science, cite him as a representative of a (rather crude) machine-state functionalism:

“These facts show that our reasons for accepting it that others have mental states are not an ordinary induction, any more than our reasons for accepting it that material objects exist are an ordinary induction. Yet, what can be said in the case of material objects can also be said here: our acceptance of the proposition that others have mental states is both analogous and disanalogous to the acceptance of ordinary empirical theories on the basis of explanatory induction. It is disanalogous insofar as ‘other people have mental states’ is, in the first instance, not an empirical theory at all, but rather a consequence of a host of specific hypotheses, theories, laws, and garden variety empirical statements that we accept. […] It is analogous, however, in that part of the justification for the assertion that other people have mental states is that to give up the proposition would require giving up all of the theories, statements, etc., that we accept implying that proposition; […] But if I say that other people do not have minds, that is, if I say that other people do not have mental states, that is, if I say that other people are never angry, suspicious, lustful, sad, etc., I am giving up propositions that are implied by the explanations that I give on specific occasions of the behavior of other people. So I would have to give up all of these explanations.”

Suppose we observe someone for a few minutes while he or she is getting increasingly stressed or relaxed, and suddenly the person starts to shout and to cry, or to smile. More professionally: if we use a coding system like the one proposed by Scherer and Ekman, the famous “Facial Action Coding System,” recently popularized by the TV series “Lie to me,” are we allowed to assign them a “mental state”?

Of course, we intuitively and instinctively start trying to guess what’s going on with the person, in order to make some prediction or diagnosis (which essentially is the same thing), for instance because we feel inclined to help, to care, to console the person, to flee, or to get chummy with her. Yet, is such a diagnosis, probably taking place in the course of a mutual interpretation of almost non-verbal behavior, the same as assigning “mental states”?

We are deeply convinced that the correct answer is ‘NO’.

The answer to this question is somewhat important for an appropriate handling of machines that start to be able to open their own epistemology, which is the correct phrase for the flawed notion of “intelligent” machines. Our answer rests on two different pillars: we invoke complexity theory, and a philosophical argument as well. Complexity theory forbids states for empirical reasons; the philosophical argument forbids the concept’s usage regarding the mind, due to the fact that empirical observations can never be linked to statefulness, neither by language nor by mathematics. Statefulness is then identified as a concept from the area of (machine) design.

Yet, things are a bit tricky. Hence, we have to extend the analysis a bit. We also have to refer to what we said (or will say) about theory and modeling.

Reductionism, Complexity, and the Mental

Since the concept of “mental state” involves the concept of state, our investigation has to follow two branches. Besides the concept of “state” we have the concept of the “mental,” which is still a very blurry one. The compound concept of “mental state” just does not seem to be blurry, because of the state-part. But what if the assignment of states to the personal inner life of the conscious vis-à-vis is not justified? We think indeed that we are not allowed to assign states to other persons, at least when it comes to philosophy or science about the mind (if you would like to call psychology a ‘science’). In this case, the concept of the mental remains blurry, of course. One could suspect that the phrase “mental state” just arose to create the illusion of a well-defined topic when talking about the mind or mindfulness.

“State” denotes a context of empirical activity. It assumes that there have been preceding measurements yielding a range of different values, which we a posteriori classify and interpret. As a result of these empirical activities we distinguish several levels of rather similar values, give them a label and call them a “state.” This labeling always remains partially arbitrary, by principle. Looking backward we can see that the concept of “state” invokes measurability, interpretation and, above all, identifiability. The language game of “state” excludes basic non-identifiability. Though we may speak about a “mixed state,” which still assumes identifiability in principle, there are well-known cases of empirical subjects to which we can not assign any distinct value in principle. Prigogine [2] gave many examples, and even an analytic one, based on number theory. In short, we can take it for sure that complex systems may traverse regions in their parameter space where it is not possible to assign anything identifiable. In some sense, the object does not exist as a particular thing; it just exists as a trajectory, or more precisely, a compound made from history and pure potential. A slightly more graspable example of those regions are the bifurcation “points” (which are not really points for real systems).

An experimental example that is also well visible is represented by arrangements like so-called Reaction-Diffusion Systems [3]. How could such a system be described? An atomic description is not possible if we try to refer to any kind of rules. The reason is that the description of a point in its parameter space around the indeterminate area of a bifurcation is the description of the whole system itself, including its trajectory through phase space. Now, who would deny that the brain, and the mind springing off from it, by far exceeds in complexity those “simple” complex systems which are used as “model systems” in the laboratory, in Petri dishes, or even in computer simulations?

So, we conclude that brains can not “have” states in the analytic sense. But what about meta-stability? After all, it seems that the trajectories of psychological or behavioral parameters are somehow predictable. The point is that the concept of meta-stability does not help very much. That concept directly refers to complexity, and thus it references the whole “system,” including a large part of its history. As a realist, or a scientist believing in empiricism, we would not gain anything. We may summarize that there is no possible reduction of the brain to a perspective that would justify the usage of the notion of “state.”

But what about the mind? Let the brain be chaotic; the mind need not be, probably. Nobody knows. Yet, an optimistic reductionist could argue for its possibility. Is it then allowed to assign states to the mind, that is, to separate the brain from the mind with respect to stability and “statefulness”? Firstly, again, the reductionist would lose all his points, since in this case the mind and its states would turn into something metaphysical, if not from “another reality.” Secondly, measurability would fall apart, since mind is nothing you could measure as an explanans. It is not possible to split off the mind of a person from that very person, at least not for anybody who would try to justify the assignment of states to minds, brains or “mental matter.” The reason is a logical one: such an attempt would commit a petitio principii.

Obviously, we have to resort to the perspective of language games. Of course, everything is a language game; we knew that even before refuting the state as an appropriate concept to describe the brain. Yet, we have demonstrated that even an enlightened reductionist, in the best case a contemporary psychologist, or probably also William James, must acknowledge that it is not possible to speak scientifically (or philosophically) about states concerning mental issues. Before starting with the state as a Language Game, I would first like to visit the concepts of automata in their relation to language.

Automata, Mechanism, and Language

Automata are positive definite, meaning that they consist of a finite set of well-defined states. At any point in time they are exactly defined, even if the particular automaton is a probabilistic one. Well, complexity theory tells us that this is not possible for real objects. Yet, “we” (i.e. computer hardware engineers) learned to suppress deviations far enough in order to build machines which come close to what is called the “Universal Turing Machine,” i.e. nowadays physical computers. A logical machine, or a “logics machine,” if you like, then is an automaton. Therefore, standard computer programs are perfectly predictable. They can be stopped, hibernated, restarted etc., and weeks later you can proceed at the last point of your work, because the computer did not change a single one of more than 8’000’000’000 dual-valued bits. All of the software running on computers is completely defined at any point in time. Hence, logical machines exist outside of time, at least from their own perspective. It is perfectly reasonable to assign them “states,” and the sequence of these states is fully reversible in the sense that either the totality of a state can be stored and mapped onto the machine, or that it can be identically reproduced.

For a long period of time, people thought that such a thing would be an ideal machine. Since it was supposed to be ideal, it was also a matter of God, and in turn, since God could not do nonsense (as it was believed), the world had to be a machine. In essence, this was the reasoning in the startup-phase of the Renaissance; remember Descartes’s or Leibniz’s ideas about machines. Later, Laplace claimed perfect predictability for the universe, if only he could measure everything, as he said. Not quite randomly, Leibniz also thought about the possibility to create any thought by combination from a rather limited set of primitives, and in that vein he also proposed binary encoding. Elsewhere we will discuss whether real computers, as simulators of logical machines, can only behave deterministically. (They do not…)

Note that we are not just talking about the rather trivial case of Finite State Automata. We explicitly include the so-called Universal Turing Machine (UTM) in our considerations, as well as Cellular Automata, for which some interesting rules are known that produce unpredictable though not random behavior. The common property of all these entities is their positive definiteness. It is important to understand that physical computers must not be conceived as UTMs. The UTM is a logical machine, while the computer is a physical instance of it. At the same time it is more, but also less, than a UTM. The UTM consists of operations virtually without a body and without matter, and thus also without the challenge of a time viz. signal horizon: things which usually cause trouble when it comes to exactness. The particular quality of the unfolding self-organization in Reaction-Diffusion Systems is—besides other design principles—dependent on effective signal horizons.

Complex systems are different, and so are living systems (see posts about complexity). Their travel through parameter space is not reversible. Even “simple” chemical processes are not reversible. So, neither the brain nor the mind can be described as a reversible entity. Even if we could measure a complex system at a given point in time “perfectly,” i.e. far beyond quantum mechanic thresholds (if such a statement makes any sense at all), even then the complex system will return to increasing unpredictability, because such systems are able to generate information [4]. Besides this instability, they are also deeply nested, where each level of integration can’t be reduced to the available descriptions of the next lower level. Standard computer programs are thus an inappropriate metaphor for the brain as well as for the mind. Again, there is the strategic problem for the reductionist trying to defend the usage of the concept of states to describe mental issues, as reversibility would a priori assume complete measurability, which first has to be demonstrated before we could talk about “states” in the brain or “in” the mind.

So, we drop the possibility that the brain or the mind is an automaton. A philosophically inspired biological reductionist will then probably resort to the concept of mechanism. Mechanisms are habits of matter. They are micrological and more local with respect to the more global explanandum. Mechanisms do not claim a deterministic causality for all the parts of a system, as the naive mechanists of earlier days did. Yet, referring to mechanisms imports the claim that there is a linkage between probabilistic micrological (often material) components and a reproducible overall behavior of the “system.” The micro-component can be modeled deterministically or probabilistically following very strong rules; the overall system then shows some behavior which can not be described by the terms appropriate for the micro-level. Adapted to our case of mental states, that would lead us to the assumption that there are mechanisms. We could not say that these mechanisms lead to states, because the reductionist first has to prove that mechanisms lead to stability. However, mechanisms do not provide any means to argue on the more integrated level. Thus we conclude that—funny enough—resorting to the concept of probabilistic mechanism includes the assumption that it is not appropriate to talk about states. Again a bad card for the reductionist heading for the states in the mind.

Instead, systems theory uses concepts like open systems, dynamic equilibrium (which actually is not an equilibrium), etc. The result of the story is that we can not separate a “something” in the mental processes that we could call a state. We have to speak about processes. But that is a completely different game, as Whitehead was the first to demonstrate.

The assignment of a “mental state” itself is empty. The reason is that there is nothing we could compare it with. We only can compare behavior and language across subjects, since any other comparison of two minds always includes behavior and language. This difficulty is nicely demonstrated by the so-called Turing test, as well as Searle’s example of the Chinese Chamber. Both examples describe situations where it is impossible to separate something in the “inner being” (of computers, people or chambers with Chinese dictionaries); it is impossible because that “inner being” has no neighbor, as Wittgenstein would have said. As already said, there is nothing we could compare it with. Indeed, Wittgenstein said so about the “I” and refuted its reasonability, ultimately arriving at a position of “realistic solipsism.” Here we have to oppose the misunderstanding that an attitude like ours denies the existence of mental affairs of other people. It is totally o.k. to believe, and to act according to this belief, that other people have mental affairs in their own experience; but it is not o.k. to call that a state, because we can not know anything about the inner experience of the private realities of other people which would justify the assignment of the quality of a “state.”

We also could refer to Wittgenstein’s example of pain: it is nonsense to deny that other people have pain, but it is also nonsense to try to speak about the pain of others in a way that claims private knowledge. It is even nonsense to speak about one’s own pain in a way that would claim private knowledge—not because it is private, but because it is not a kind of knowledge. Although we are used to thinking that we “know” the pain, we do not. If we did, we could speak exactly about it, and for others it would not be unclear in any sense, much like: I know that 5>3, or things like that. But it is not possible to speak in this way about pain.
There is a subtle translation or transformation process between the physiological process of releasing prostaglandin at the cellular level and the final utterance of the sentence “I have a certain pain.” The sentence is public, and mandatorily so. Before that sentence, the pain has no face and no location, even for the person feeling the pain.

You might say: o.k., there are physics and biology and molecules and all the things we have no direct access to either. Yet, again, these systems behave deterministically; at least, some of them we can force to behave regularly. Electrons, atoms and molecules do not have individuality beyond their materiality; they can not be distinguished, they have no memory, and they do not act in their own symbolic space. If they did, we would have the same problem as with the mental affairs of our conspecifics (and chimpanzees, whales, etc.).

Some philosophers, particularly those calling themselves analytic, claim that not only feelings like happiness, anger etc. require states, but that intentions do so as well. This, however, would aggravate the attempt to justify the assignment of states to mental affairs, since intentions are the result of activities and processes in the brain and the mind. Yet, from that perspective one could try to claim that mental states are the result of calculations or deterministic processes. As for mathematical calculations, there could be many ways leading to the same result. (The identity theory between physical and mental affairs was first refuted by Putnam in 1967 [5].) On the level of the result we unfortunately can not tell anything about the way it was achieved. This asymmetry is true even for simple mathematics.

Mental states are often conceived as “dispositions”; we just talked about anger and happiness, notwithstanding more “theoretical” concepts. Regarding this usage of “state,” I suppose it is circular, or empty. We can not talk about the other’s psychic affairs except through the linkage we derive by experience. This experience links certain types of histories or developments with certain outcomes. Yet, there is no fixation of any kind, and especially not in the sense of a finite state automaton. That means that we are mapping probability densities onto each other. It may be natural to label those, but we can not claim that these labels denote “states.” Those labels are just that: labels. Perhaps negotiated into some convention, but still just labels. Not to be aware of this means to forget about language, which really is a pity in the case of “philosophers.” The concept of “state” is basically a concept that applies to the design of (logical) machines. For these reasons it is thus not possible to use “state” as a concept where we attempt to compare (hence to explain) different entities, one of which is not the result of design. Thus, it is also not possible to use “states” as a kind of “explaining principle” for any further description.

One way to express the reason for the failure of the supervenience claim is that it mixes matter with information. A physical state (if that would be meaningful at all) can not be equated with a mind state, in any of its possible ways. If the physical parameters of a brain change, the mind affairs may or may not be affected in a measurable manner. If the physical state remains the same, the mental affairs may remain the same; yet, this does not matter: since any sensory perception alters the physical makeup of the brain, a constant brain would simply be dead.

Were we to accept the computationalist hypothesis about the brain/mind, we would have to call the “result” a state, or the “state” a result. Both alternatives feel weird, at least with respect to a dynamic entity like the brain, though they feel weird even with respect to arithmetics. There is no such thing in the brain as a finite algorithm that stops when finished. There are no “results” in the brain, something even hard-core reductionistic neurobiologists would admit. Yet, again, exactly this determinability would have to be demonstrated in order to justify the usage of “state”; the reductionist can not refer to it as an assumption.

The misunderstanding is quite likely caused by the private experience of stability in thinking. We can calculate 73+54 with stable results. Yet, this does not tell us anything about the relation between matter and mind. The same is true for language. Again, the hypothesis underlying the claim of supervenience denies the difference between matter and information.

Besides the fact that the reductionist runs again into the same serious tactical difficulties as before, this now is a very interesting point, since it is related to the relation between brain and mind on the one side and actions and language on the other. Where do the words we utter come from? How is it possible to express thoughts such that they are meaningful?

Of course, we do not run a database with a dictionary in it inside our head. Not only do we not do so; such a setup would not make it possible to produce and to understand language at all, even to the slightest extent. Secondly, we learn language; it is not innate. Even the capability to learn language is not innate, contrary to a popular guess. Just think about Kaspar Hauser, who never mastered it better than a 6-year-old child. We need an appropriately trained brain to become able to learn a language. Were the capability for language innate, we would not have difficulties learning any language. We all know that the opposite is true, many people having severe difficulties to learn even a single one.

Now, the questions of (1) how to become able to learn a language and (2) how to program a computer such that it becomes able to understand language are closely related. The programmer can NOT put the words into the machine a priori, as that would be self-delusory. Also, the meaning of something can not be determined a priori without referring to the whole Lebenswelt. That is the result of Wittgenstein’s philosophy as well as Putnam’s final conclusion. Meaning is not a mental category, since it always requires several brains to create something we call “meaning” (emphasis on several). The words are somewhere in between, between matter and culture. In other words, there must be some kind of process that includes modeling, binding, symbolization, habituation, directed both to its substrate, the brain matter, and to its supply, the cultural life.

We will discuss this aspect elsewhere in more detail. Yet, for the reductionist trying to defend the usage of the concept of states for the description of mental affairs, this special dynamics between the outer world and the cognitively established reality, which is embedding our private use of language, is the final defeat for state-oriented reductionisms.

Nevertheless we humans often feel inclined to use that strange concept. The question is why we do so, and what the potential role of that linguistic behavior is. If we take the habit of assigning a state to the mental affairs of other people as a language game, a bunch of interesting questions come to the fore. These are by far too complex and too rich to be discussed here. Language games are embedded into social situations, and after all, we always have to infer the intentions of our partners in discourse, we have to establish meaning throughout the discourse, etc. Assigning a mental state to another being probably just means “Hey, look, I am trying to understand you! Would you like to play the mutual interpretation game?” That’s ok, of course, for the pragmatics of a social situation, like any invitation to mutual inferentialism [6], and like any inferentialism it is even necessary—from the perspective of the pragmatics of a given social situation. Yet, this designation of understanding should not mistake the flag for the message. Demonstrating such an interest need not even be a valid hypothesis within the real-world situation. Ascribing states in this way, as an invitation to infer my own utterances, is even unavoidable, since any modeling requires categorization. We just have to resist assigning these activities any kind of objectivity that would refer to the inner mental affairs of our partner in discourse. In real life, doing so is instead inevitably and always a sign of deep disrespect of the other.

In philosophy, Deleuze and Guattari in their “Thousand Plateaus” (p.48) were among the first to recognize the important abstract contribution Darwin made by means of his theory: he opened the possibility to replace types and species by populations, degrees by differential relations. Darwin himself, however, was not able to complete this move. It took another 100 years until Manfred Eigen coined the term quasi-species as an increased density in a probability distribution. Talking about mental states is nothing but a fallback into Linnaean times, when science was the endeavor to organize lists according to an uncritical use of concepts.

Some Consequences

The conclusion is that we cannot use the concept of state for dealing with mental or cognitive affairs in any imaginable way without stumbling into serious difficulties. We should definitely drop it from our vocabulary about the mind (and about the brain as well). Assuming mental states in other people renders those other people into deterministic machines; doing so would thus even have serious ethical consequences. Unfortunately, the works of many philosophers are rendered into mere garbage by their mistaken reference to this bad concept of "mental states."

Well, what are the consequences for our endeavor of machine-based epistemology?

The most salient one is that we cannot use digital computers to produce language understanding as long as we use these computers as deterministic machines. If we still want to try (and we do), then we need mechanisms that introduce aspects that

  • are (at least) non-deterministic;
  • produce manifolds with respect to representations, both on the structural level and "content-wise";
  • start with probabilized concepts instead of compound symbolic "whole-sale" items (see also the chapter about representation);
  • acknowledge the impossibility of analyzing a kind of causality or, equivalently, states inside the machine in order to "understand" the process of language at a microscopic level: claiming "mental states" is a garbage concept, whether it is applied to people or to machines.

Fortunately enough, we have found further important constraints for our implementation of a machine that is able to understand language. Of course, we need further ingredients, but for now these results are seminal. You may wonder about such mechanisms and the possibility of implementing them on a computer. Be sure, they are there!

  • [1] Hilary Putnam, Mind, Language, and Reality. Cambridge University Press, 1979. p. 346.
  • [2] Ilya Prigogine.
  • [3] Reaction-diffusion systems: Gray-Scott systems, Turing systems.
  • [4] Grassberger, 1988. Physica A.
  • [5] Hilary Putnam (1967), "The Nature of Mental States," in Mind, Language, and Reality. Cambridge University Press, 1975.
  • [6] Robert Brandom, Making It Explicit. 1994.

۞
