A Model for Analogical Thinking

February 13, 2012

Analogy is an ill-defined term, and quite naturally so.

If something is said to be analogous to something else, one supposes more than just a distant resemblance. A mere similarity could indeed be described by a similarity mapping based on some well-identified function. Yet analogy is more than that, much more than just similarity.

Analogical thinking is significantly different from determining similarity, or from “selecting” a similar thought. Actually, it is not a selection at all, and it is also not based on modeling. Although it rests on experience, it is not based on identifiable models. We may even conclude that analogical thinking is (1) non-empiric, and (2) based on a constructive use of theories. In other words, and derived from our theory of theory, an analogy is itself a freshly derived model! Yet that model is of a particular kind, since it does not contain any data. It looks like a basic definition, to be used as the primer for surrogating simulated data, which in turn can be used to create a SOM-based expectancy. We may also say it in simple terms: analogical thinking is about producing an ongoing stream of ideas. Folding, inventing.

In 2006, on the occasion of the yearly Presidential Lecture at Stanford, Douglas Hofstadter made some remarkable statements in an interview, which I’d like to quote here, since they express some issues we have argued for throughout our writings.

I knew some people wouldn’t like what I was going to say, since it was diametrically opposed to the standard cognitive-science party line that analogy is simply a part of “reasoning” in the service of some kind of “problem solving” (makes me think of doing problem sets in a physics course). For some bizarre reason, don’t ask me why, people in cognitive science only think of analogies in connection with what they call “analogical reasoning” — they have to elevate it, to connect it with fancy things like logic and reasoning and truth. They don’t seem to see that analogy is dirt-cheap, that it has nothing to do with logical thinking, that it simply pervades every tiny nook and cranny of cognition, it shapes our every thinking moment. Not seeing that is like fish not perceiving water, I guess.

[…]

The point is that thinking amounts to putting one’s finger on the essence of situations one is faced with, which amounts to categorization in a very deep and general sense, and the point is that in order to categorize, one has to compare situations “out there” with things already in one’s head, and such comparisons are analogies. Thus at the root of all acts of thought, every last one of them, is the making of analogies. Cognitive scientists talk a lot about categories, but unfortunately the categories that they study are far too limited. For them, categories are essentially always nouns, and not only that, they are objects that we can see, like “tree” or “car.” You get the impression that they think that categorization is very simple, no more complicated than looking at an object and identifying the standard “features” […]

The most salient issues here are, in preliminary terms for the time being:

  • (1) Whenever thinking happens, it is performed as “making analogies”; there are no “precise = perfectly defined” items in the brain or the mind.
  • (2) Thinking cannot be equated with problem-solving and logic.
  • (3) Comparison and categorization are the most basic operations, and both take place in a fluid, open manner.

It is a fallacy to think that there is analogical reasoning alongside some other “kinds” (yet the journals and libraries are filled with this kind of shortcoming, e.g. [1,2,3]), or, vice versa, that there is logical reasoning and then something else, such as analogical reasoning. To think so would mean putting logic into the world (of neurons, in this case). Yet we know that a logic free from interpretive parts can’t be part of the real world. We always and inevitably deal only with quasi-logic, a semantically contaminated instance of transcendental logic. The issue here is not just one of wording. It is the self-referentiality, which is always present in multiple respects when dealing with cognitive capacities, that forces us not to be lazy about the wording. Dropping the claim posited by the term “analogical reasoning,” we quickly arrive at the insight that all thinking is analogical, or even that “making analogies” is the label for the visible parts of the phenomenon that we call thinking.

The next issue mentioned by Hofstadter concerns categories. Categories in the mind are NOT about objects; they are not even about any external reference. According to Hofstadter, this is a major and widespread misunderstanding among cognitive scientists. It is clear that such referentialism, call it materialism or naive realism, is conceptually primitive and inappropriate. We could also refer to Charles Peirce, the great American philosopher, who repeatedly stated (and was the first to do so) that signs always refer only to other signs. Yet signs are not objects, of course. Similarly, in §198 of the Philosophical Investigations, Wittgenstein notes:

[…] any interpretation still hangs in the air along with what it interprets, and cannot give it any support. The interpretations alone do not determine the meaning.

The materiality of a road sign should not be taken as its meaning; the semiotic sign associated with it is only partially to be found in the matter standing there near the road. The only thing that ties “us” (i.e. our thinking) to the world is modeling, whether this world is taken as the external or the internal one: it is the creation of tools (the models) for anticipation, given the expectation of weak repeatability. That means we have to believe and to trust in the first instance, which yet cannot really be taken as “ties.”

The kind of modeling we have in mind is covered neither by model theory, nor by the constructivist framework, nor by the empiricist account of it. Even if we can go on believing in an ontology of models (“a model is…”) as long as we play blind man’s buff, modeling can nevertheless not be separated from entities like concepts, code, mediality, or virtuality. We firmly believe in the impossibility of reductionism when it comes to human affairs. And, I guess, we could agree on the proposal that thinking is such a human affair, even if we are going to “implement” it in a “machine.” In the end, we are anyway just striving to understand ourselves.

It is thus extremely important to investigate the issue of analogy in an appropriate model system. The only feasible model known to date is the one published by Douglas Hofstadter and Melanie Mitchell.

They describe their results on a non-technical level in the wonderful book “Fluid Concepts and Creative Analogies”, while Mitchell focuses on the more technical aspects and the computer experiments in “Analogy-Making as Perception: A Computer Model”.

Though we do not completely agree with their theoretical foundation, particularly the role they ascribe to the concept of perception, their approach is nevertheless brilliant. Hofstadter discusses in a very detailed manner why other approaches are either fakes or failures (see also [4]), while Mitchell provides a detailed account of the model system, which they call “CopyCat”.

CopyCat

CopyCat is the last and most advanced variant of a series of similar programs. It deals with a funny example from the letter domain. The task that the program should solve is the following:

Given the transformation of a short sequence of letters, say “abc”, into a similar one, say “abd”, what is the result of applying the very “same” transformation to a different sequence, say “ijk”?

Most people will answer “ijl”. The rule seems to be “obvious”. Yet there are several possible solutions, though there is a strong propensity towards “ijl”. One of the other solutions would be “ijd”. However, this solution is “intellectually” not appealing, notably for humans…
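
To make the ambiguity concrete, here is a minimal sketch in Java (my own toy code, not part of CopyCat; the rule names are invented): two rules that both explain the given transformation “abc” → “abd”, yet disagree about “ijk”.

```java
// Toy illustration of the ambiguity in the letter domain (hypothetical code,
// not CopyCat). Two rules both explain "abc" -> "abd", yet disagree on "ijk".
public class LetterAnalogy {

    // Rule 1: "replace the last letter by its alphabetic successor"
    static String successorRule(String s) {
        char last = s.charAt(s.length() - 1);
        return s.substring(0, s.length() - 1) + (char) (last + 1);
    }

    // Rule 2: "replace the last letter by the literal letter 'd'"
    static String literalRule(String s) {
        return s.substring(0, s.length() - 1) + 'd';
    }

    public static void main(String[] args) {
        // Both rules reproduce the given transformation ...
        System.out.println(successorRule("abc")); // abd
        System.out.println(literalRule("abc"));   // abd
        // ... but they diverge on the new sequence.
        System.out.println(successorRule("ijk")); // ijl (the "human" favorite)
        System.out.println(literalRule("ijk"));   // ijd (formally valid, but flat)
    }
}
```

The point is that no amount of formal analysis of the single given example can decide between such rules; the preference for “ijl” stems from the abstractness of the relation, not from logic.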

Astonishingly, CopyCat reproduces the probabilistic distribution of solutions produced by a population of humans.


Now the important question: How does CopyCat work?

First, the solutions are indeed produced; there is no stored collection of solutions to select from. Thus CopyCat does not derive its proposals from experience that could be expressed by empiric models.

Second, the solutions are created by a multi-layered, multi-component random process. In some respects it is reminiscent of the behavior of a developed ant colony, where different functional roles are performed by different sub-populations. In other words, it is more than mere swarming behavior; there is a division of labor among different populations of agents.

The third and most important component is a structure that represents a network of symmetry relations, i.e. a set of symmetry relations arranged as a graph, where the weights of the relations are dynamically adapted to the given task.

Based on these architectural principles, Copycat produces its answers, and it is indeed a creative production.
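
As a rough caricature of how these three principles interact, consider the following sketch; all names, weights, and the selection scheme are my own simplifying assumptions, not CopyCat’s actual implementation (in Hofstadter’s terminology, the weighted concept network is the “Slipnet”, and the stochastic agents are “codelets”). A small map of dynamically weighted relations biases a population of random proposals:

```java
import java.util.*;

// Caricature of the architecture (hypothetical code, not CopyCat itself):
// a small "slipnet"-like map of relation weights biases stochastic proposals.
public class CopycatSketch {

    // Dynamic weights on abstract relations; the values here are invented
    // for illustration, while CopyCat adapts such weights during a run.
    static Map<String, Double> slipnet = new HashMap<>(Map.of(
            "successor", 0.7,   // "take the alphabetic successor"
            "literal",   0.2,   // "use the literal letter d"
            "identity",  0.1)); // "change nothing"

    static final Random rng = new Random();

    // One stochastic "codelet": pick a rule with probability proportional
    // to its current weight and apply it to the target string.
    static String propose(String target) {
        double total = slipnet.values().stream().mapToDouble(Double::doubleValue).sum();
        double r = rng.nextDouble() * total;
        for (Map.Entry<String, Double> e : slipnet.entrySet()) {
            r -= e.getValue();
            if (r <= 0) return apply(e.getKey(), target);
        }
        return target;
    }

    static String apply(String rule, String s) {
        char last = s.charAt(s.length() - 1);
        switch (rule) {
            case "successor": return s.substring(0, s.length() - 1) + (char) (last + 1);
            case "literal":   return s.substring(0, s.length() - 1) + 'd';
            default:          return s; // identity
        }
    }

    public static void main(String[] args) {
        // Many independent proposals yield a distribution of answers,
        // not a single deterministic one.
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i < 1000; i++)
            counts.merge(propose("ijk"), 1, Integer::sum);
        System.out.println(counts); // roughly {ijl≈700, ijd≈200, ijk≈100}
    }
}
```

Repeating the proposal step many times thus yields a distribution over answers rather than a single deterministic result, which corresponds to the observation above that CopyCat reproduces the distribution of answers given by a human population.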

Conclusion

Of course, Copycat is a model system. The greatest challenge is to establish the “Platonic” sphere (Hofstadter’s term) that comprises the dynamical relational system of symmetry relations. In a real system, this sphere of relations has to be fed by other parts of the system, most likely the modeling sub-system. This sub-system has to be able to extract abstract relations from data, which could then be assembled into the “Platonic” device. All of those functional parts can be covered or served by self-organizing maps, and all of them are planned. The theory of these parts can be found scattered throughout this blog.
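
For readers unfamiliar with self-organizing maps, here is a minimal sketch of the generic Kohonen update rule; the sizes, rates, and the one-dimensional lattice are arbitrary illustrative choices, not the specific architecture envisaged here:

```java
import java.util.Random;

// Minimal self-organizing map (SOM) sketch: a 1-D lattice of weight vectors
// is pulled toward the data, so that neighboring nodes come to represent
// similar inputs. All sizes and rates are arbitrary illustrative choices.
public class SomSketch {
    static final int NODES = 10, DIM = 2;
    static double[][] w = new double[NODES][DIM];

    public static void main(String[] args) {
        Random rng = new Random(42);
        // Random initialization of the node weights.
        for (double[] node : w)
            for (int d = 0; d < DIM; d++) node[d] = rng.nextDouble();

        // Training: present random 2-D points, adapt winner and neighbors.
        for (int t = 0; t < 5000; t++) {
            double[] x = {rng.nextDouble(), rng.nextDouble()};
            // Find the best-matching node (smallest squared distance).
            int best = 0;
            double bestDist = Double.MAX_VALUE;
            for (int i = 0; i < NODES; i++) {
                double dist = 0;
                for (int d = 0; d < DIM; d++)
                    dist += (x[d] - w[i][d]) * (x[d] - w[i][d]);
                if (dist < bestDist) { bestDist = dist; best = i; }
            }
            double rate = 0.1 * (1.0 - t / 5000.0); // decaying learning rate
            for (int i = 0; i < NODES; i++) {
                // Neighborhood function: lattice neighbors of the winner move too.
                double h = Math.exp(-Math.abs(i - best) / 2.0);
                for (int d = 0; d < DIM; d++)
                    w[i][d] += rate * h * (x[d] - w[i][d]);
            }
        }
        // After training, the lattice approximates the input distribution.
        for (double[] node : w)
            System.out.printf("%.3f %.3f%n", node[0], node[1]);
    }
}
```

The property relevant to the argument above is that such a map builds its representation from the data stream itself, without a stored collection of prototypes fixed in advance.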

Copycat has been created as a piece of software. Years ago it was publicly available from an ftp site at the University of Illinois at Urbana-Champaign. Then it vanished. Fortunately, I was able to grab it before it disappeared.

Since I rate this piece as one of the most important contributions to machine-based episteme, I created a mirror for downloading it from Google Code. But be aware: so far you need to be a programmer to run it, since it needs a development environment for the Java programming language. You can check it out from the source repository behind the link given above. In the near future I will provide a version that runs more easily as a standalone program.

  • [1] Rumelhart D.E., Abrahamson A.A. (1973). “A model for analogical reasoning.” Cognitive Psychology 5(1), 1-28.
  • [2] Sowa J.F., Majumdar A.K. (2003). “Analogical Reasoning.” In: de Moor A., Lex W., Ganter B. (eds.), Conceptual Structures for Knowledge Creation and Communication (Proc. Intl. Conf. on Conceptual Structures, Dresden, July 2003), LNAI 2746, Springer-Verlag, pp. 16-36.
  • [3] Morrison R.G., Krawczyk D.C., Holyoak K.J., Hummel J.E., Chow T.W., Miller B.L., Knowlton B.J. (2004). “A neurocomputational model of analogical reasoning and its breakdown in frontotemporal lobar degeneration.” J Cogn Neurosci 16(2), 260-271.
  • [4] Chalmers D.J., French R.M., Hofstadter D.R. (1992). “High-level perception, representation, and analogy: A critique of artificial intelligence methodology.” J Exp Theor Artificial Intelligence 4, 185-211.

۞
