When we think analogically, we do much more than just compare two analogs based on obvious similarities between their elements. Rather, analogical reasoning is a complex process of retrieving structured knowledge from long-term memory, representing and manipulating role-filler bindings in working memory, performing self-supervised learning to form new inferences, and finding structured intersections between analogs to form new abstract schemas. The entire process is governed by the core constraints provided by isomorphism, similarity of elements, and the goals of the reasoner (Holyoak & Thagard, 1989a). These constraints apply in all components of analogical reasoning: retrieval, mapping, inference, and relational generalization. When analogs are retrieved from memory, the constraint of element similarity plays a large role, but relational structure is also important - especially when multiple source analogs similar to the target are competing to be selected. For mapping, structure is the most important constraint but requires adequate working memory resources; similarity and purpose also contribute. The success of analogical inference ultimately depends on whether the purpose of the analogy is achieved, but satisfying this constraint is intimately connected with the structural relations between the analogs. Finally, relational generalization occurs when schemas are formed from the source and target to capture those structural patterns in the analogs that are most relevant to the reasoner's purpose in exploiting the analogy.
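The interplay of these three constraints during mapping can be illustrated with a toy sketch. This is not an implementation of any published model (such as ACME or LISA); it is a minimal illustration, with made-up propositions from the familiar Rutherford solar-system/atom analogy, of how a mapping score might combine structural consistency with optional element-similarity and purpose weightings.

```python
from itertools import permutations

# Toy analogs (illustrative, not from the chapter): each analog is a set
# of (relation, arg1, arg2) propositions.
source = {("attracts", "sun", "planet"), ("more_massive", "sun", "planet")}
target = {("attracts", "nucleus", "electron"), ("more_massive", "nucleus", "electron")}

def elements(props):
    """Collect the individual elements (role fillers) of an analog."""
    return sorted({a for _, a1, a2 in props for a in (a1, a2)})

def score(mapping, src, tgt, similarity=(), purpose=()):
    """Combine the three constraints: isomorphism (structural consistency),
    similarity of elements, and the reasoner's purpose. The 0.5 weights
    are arbitrary choices for this sketch."""
    s = 0.0
    for rel, a1, a2 in src:
        # Structural constraint: a source proposition supports the mapping
        # if its image under the mapping is a proposition of the target.
        if (rel, mapping[a1], mapping[a2]) in tgt:
            s += 1.0
    s += sum(0.5 for pair in mapping.items() if pair in similarity)  # similarity
    s += sum(0.5 for e in purpose if e in mapping)                   # purpose
    return s

# Exhaustively score all one-to-one element mappings and keep the best.
src_el, tgt_el = elements(source), elements(target)
best = max(
    (dict(zip(src_el, perm)) for perm in permutations(tgt_el, len(src_el))),
    key=lambda m: score(m, source, target),
)
```

Under this scoring, the structurally consistent mapping (sun to nucleus, planet to electron) wins even though no elements are literally similar, which is the sense in which structure dominates mapping while similarity and purpose merely add further support.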
Several current research directions are likely to continue to develop. Computational models of analogy, such as LISA (Hummel & Holyoak, 1997, 2003), have begun to connect behavioral work on analogy with research in cognitive neuroscience (Morrison et al., 2004). We already have some knowledge of the general neural circuits that underlie analogy and other forms of reasoning (see Goel, Chap. 20). As more sophisticated noninvasive neuroimaging methodologies are developed, it should become possible to test detailed hypotheses about the neural mechanisms underlying analogy, such as those based on temporal properties of neural systems.
Most research and modeling in the field of analogy has emphasized quasilinguistic knowledge representations, but there is good reason to believe that reasoning in general has close connections to perception (e.g., Pedone et al., 2001). Perception provides an important starting point for grounding at least some "higher" cognitive representations (Barsalou, 1999). Some progress has been made in integrating analogy with perception. For example, the LISA model has been augmented with a Metric Array Module (MAM; Hummel & Holyoak, 2001), which provides specialized processing of metric information at a level of abstraction applicable to both perception and quasispatial concepts. However, models of analogy have generally failed to address evidence that the difficulty of solving problems and transferring solution methods to isomorphic problems is dependent on the difficulty of perceptually encoding key relations. The ease of solving apparently isomorphic problems (e.g., isomorphs of the well-known Tower of Hanoi) can vary enormously, depending on perceptual cues (Kotovsky & Simon, 1990; see Novick & Bassok, Chap. 14).
More generally, models of analogy have not been well integrated with models of problem solving (see Novick & Bassok, Chap. 14), even though analogy clearly affords an important mechanism for solving problems. In its general form, problem solving requires sequencing multiple operators, establishing subgoals, and using combinations of rules to solve related but non-isomorphic problems. These basic requirements are beyond the capabilities of virtually all computational models of analogy (but see Holyoak & Thagard, 1989b, for an early although limited effort to integrate analogy within a rule-based problem-solving system). The most successful models of human problem solving have been formulated as production systems (see Lovett & Anderson, Chap. 17), and Salvucci and Anderson (2001) developed a model of analogy based on the ACT-R production system. However, this model cannot reliably solve any analogy that requires the integration of multiple relations - a class that includes analogies within the grasp of young children (Halford, 1993; Richland et al., 2004; see Halford, Chap. 22). The integration of analogy models with models of general problem solving remains an important research goal.
Perhaps the most serious limitation of current computational models of analogy is that their knowledge representations must be hand-coded by the modeler, whereas human knowledge representations are formed autonomously. Closely related to the challenge of avoiding hand-coding of representations is the need to flexibly rerepresent knowledge to render potential analogies perspicuous. Concepts often have a close conceptual relationship with more complex relational forms (e.g., Jackendoff, 1983). For example, causative verbs such as lift (e.g., "John lifted the hammer") have very similar meanings to structures based on an explicit higher-order relation, cause (e.g., "John caused the hammer to rise"). In such cases, the causative verb serves as a "chunked" representation of a more elaborate predicate-argument structure. People are able to "see" analogies even when the analogs have very different linguistic forms (e.g., "John lifted the hammer in order to strike the nail" might be mapped onto "The Federal Reserve used an increase in interest rates as a tool in its efforts to drive down inflation"). A deeper understanding of human knowledge representation is a prerequisite for a complete theory of analogical reasoning.
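The kind of rerepresentation at issue can be sketched concretely. The following toy decomposition expands "chunked" causative verbs into explicit higher-order cause structures so that analogs with different linguistic forms become alignable; the lexicon entries and helper names are illustrative assumptions, not part of any published model.

```python
# Illustrative decomposition lexicon: each causative verb is expanded into
# an explicit CAUSE(agent, state-change(object)) predicate-argument structure.
DECOMPOSE = {
    "lift":  lambda agent, obj: ("cause", agent, ("rise", obj)),
    "raise": lambda agent, obj: ("cause", agent, ("rise", obj)),
    "kill":  lambda agent, obj: ("cause", agent, ("die", obj)),
}

def rerepresent(prop):
    """Expand a chunked predicate into its elaborated structure, if known."""
    pred, *args = prop
    return DECOMPOSE[pred](*args) if pred in DECOMPOSE else prop

# "John lifted the hammer" and "The Fed raised interest rates" use
# different surface verbs, but after rerepresentation both share the
# form CAUSE(agent, RISE(object)), so the roles align:
# John <-> Fed, hammer <-> rates.
a = rerepresent(("lift", "John", "hammer"))
b = rerepresent(("raise", "Fed", "rates"))
```

The sketch makes the point of the lift/cause example explicit: once both analogs are unpacked to a common relational form, a structure-mapping process can align their role fillers even though the original linguistic encodings were dissimilar. The open problem the paragraph identifies is that such lexicons and the decision of when to unpack them are hand-built here, whereas people perform this rerepresentation autonomously and flexibly.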