Category Learning and Inference

One nice aspect of Rosch and Mervis's (1975) studies of typicality effects is that they used both natural language categories and artificially created categories. Finding typicality effects with natural (real world) categories shows that the phenomenon is of broad interest; finding these same effects with artificial categories provides systematic control for potentially confounding variables (e.g., exemplar frequency) in a way that cannot be done for lexical concepts. This general strategy linking the natural to the artificial has often been followed over the past few decades. Although researchers using artificial categories have sometimes been guilty of treating these categories as ends in themselves, there are enough parallels between results with artificial and natural categories that each area of research informs the other (see Medin & Coley, 1998, for a review).

prototype versus exemplar models

One idea compatible with Rosch's family resemblance hypothesis is the prototype view. It proposes that people learn the characteristic features (or central tendency) of categories and use them to represent the category (e.g., Reed, 1972). This abstract prototype need not correspond to any experienced example. According to this theory, categorization depends on similarity to the prototypes. For example, to decide whether some animal is a bird or a mammal, a person would compare the (representation of) that animal to both the bird and the mammal prototypes and assign it to the category whose prototype it most resembled. The prototype view accounts for typicality effects in a straightforward manner. Good examples have many characteristic properties of their category and have few characteristics in common with the prototypes of contrasting categories.
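
To make this contrast concrete before turning to the evidence, here is a minimal Python sketch of a prototype classifier: each category is represented solely by the feature-wise mean of its training items, and a new item is assigned to the nearest prototype. The toy feature values, category names, and use of Euclidean distance are our own illustrative assumptions rather than part of any particular published model.

```python
import numpy as np

def prototype_classify(item, categories):
    """Assign `item` to the category whose prototype (the feature-wise
    mean of its training exemplars) it most resembles.

    `categories` maps a label to an array of exemplars
    (rows = items, columns = feature values).
    """
    best_label, best_dist = None, float("inf")
    for label, exemplars in categories.items():
        prototype = exemplars.mean(axis=0)       # central tendency of the category
        dist = np.linalg.norm(item - prototype)  # Euclidean distance as dissimilarity
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical toy data: two continuous features per animal.
categories = {
    "bird":   np.array([[0.9, 0.8], [0.8, 0.9], [1.0, 0.7]]),
    "mammal": np.array([[0.1, 0.2], [0.2, 0.1], [0.0, 0.3]]),
}
print(prototype_classify(np.array([0.85, 0.75]), categories))  # -> "bird"
```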

Early research appeared to provide striking confirmation of the idea of prototype abstraction. Using random dot patterns as the prototypes, Posner and Keele (1968, 1970) produced a category from each prototype. The instances in a category were "distortions" of the prototype generated by moving constituent dots varying distances from their original positions. Posner and Keele first trained participants to classify examples that they created by distorting the prototypes. Then they gave a transfer test in which they presented both the old patterns and new low or high distortions that had not appeared during training. In addition, the prototypes, which the participants had never seen, were presented during transfer. Participants had to categorize these transfer patterns, but unlike the training procedure, the transfer test gave participants no feedback about the correctness of their responses. The tests either immediately followed training or appeared after a 1-week delay.
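
For readers who want a feel for these stimuli, here is a rough sketch of how distortions of a random dot prototype can be generated. The dot count, field size, and the mapping from distortion level onto dot displacement are invented for illustration; Posner and Keele used a specific statistical distortion scheme that we do not reproduce exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_prototype(n_dots=9, field=50.0):
    """A random dot pattern: n_dots points scattered over a square field."""
    return rng.uniform(0.0, field, size=(n_dots, 2))

def distort(prototype, level):
    """Move each dot in a random direction by a distance centered on `level`.

    Larger `level` values yield "high" distortions, i.e., patterns farther
    from the prototype; small values yield "low" distortions.
    """
    angles = rng.uniform(0.0, 2.0 * np.pi, size=len(prototype))
    steps = rng.normal(loc=level, scale=level / 4.0, size=len(prototype))
    offsets = np.column_stack([np.cos(angles), np.sin(angles)]) * steps[:, None]
    return prototype + offsets

proto = make_prototype()
low = distort(proto, level=1.0)   # training-like, close to the prototype
high = distort(proto, level=5.0)  # much farther from the prototype
```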

Posner and Keele (1970) found that correct classification of the new patterns decreased as distortion (distance from a category prototype) increased. This is the standard typicality effect. The most striking result was that a delay differentially affected categorization of prototypic versus old training patterns. Specifically, correct categorization of old patterns decreased over time to a reliably greater extent than performance on prototypes. In the immediate test, participants classified old patterns more accurately than prototypes; however, in the delayed test, accuracy on old patterns and prototypes was about the same. This differential forgetting is compatible with the idea that training leaves participants with representations of both training examples and abstracted prototypes but that memory for examples fades more rapidly than memory for prototypes. The Posner and Keele results were quickly replicated by others and constituted fairly compelling evidence for the prototype view.

However, this proved to be the beginning of the story rather than the end. Other researchers (e.g., Brooks, 1978; Medin & Schaffer, 1978) put forth an exemplar view of categorization. Their idea was that memory for old exemplars by itself could account for transfer patterns without the need for positing memory for prototypes. On this view, new examples are classified by assessing their similarity to stored examples and assigning the new example to the category that has the most similar examples. For instance, some unfamiliar bird (e.g., a heron) might be correctly categorized as a bird not because it is similar to a bird prototype, but rather because it is similar to flamingos, storks, and other shore birds.
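
A correspondingly minimal sketch of exemplar-based classification, in the spirit of (though not identical to) the Medin and Schaffer context model and its descendants: no prototype is ever computed; the item is assigned to the category whose stored exemplars yield the greatest summed similarity. The exponential similarity gradient and the `sensitivity` parameter follow common practice in this literature, but the specific function here is our own simplification.

```python
import numpy as np

def exemplar_classify(item, categories, sensitivity=2.0):
    """Classify `item` by summed similarity to stored exemplars; no
    prototype is ever computed.

    Similarity decays exponentially with distance, a common choice in
    exemplar models; `sensitivity` controls how steeply it falls off.
    """
    scores = {}
    for label, exemplars in categories.items():
        dists = np.linalg.norm(exemplars - item, axis=1)
        scores[label] = np.exp(-sensitivity * dists).sum()
    return max(scores, key=scores.get)
```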

In general, similarity to prototypes and similarity to stored examples will tend to be highly correlated (Estes, 1986). Nonetheless, for some category structures and for some specific exemplar and prototype models, it is possible to develop differential predictions. Medin and Schaffer (1978), for example, pitted the number of typical features against high similarity to particular training examples and found that categorization was more strongly influenced by the latter. A prototype model would make the opposite prediction.
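
Using the two sketches above, one can build a toy version of this contrast: a transfer item that lies nearer to category A's prototype overall but is almost identical to one particular stored B exemplar. With a sufficiently steep similarity gradient (a high `sensitivity`, standing in for the context model's multiplicative similarity rule), the nearest stored exemplar dominates and the two models disagree. All feature values below are invented:

```python
import numpy as np

# Invented transfer item T: near category A's prototype overall, but almost
# identical to one particular training exemplar stored in category B.
A = np.array([[0.9, 0.9, 0.2, 0.3],
              [0.8, 0.9, 0.1, 0.3],
              [0.9, 0.8, 0.3, 0.1],
              [1.0, 0.9, 0.2, 0.2]])
B = np.array([[0.9, 0.9, 0.1, 0.2],   # nearly identical to T
              [0.1, 0.1, 0.1, 0.1]])
T = np.array([0.9, 0.9, 0.1, 0.1])
cats = {"A": A, "B": B}

print(prototype_classify(T, cats))                 # -> "A" (nearest prototype)
print(exemplar_classify(T, cats, sensitivity=20))  # -> "B" (nearest exemplar wins)
```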

Another contrast between exemplar and prototype models revolves around sensitivity to within-category correlations (Medin, Altom, Edelson, & Freko, 1982). A prototype representation captures what is on average true of a category, but is insensitive to within-category feature distributions. For example, a bird prototype could not represent the impression that small birds are more likely to sing than large birds (unless one had separate prototypes for large and small birds). Medin et al. (1982) found that people are sensitive to within-category correlations (see also Malt & Smith, 1984, for corresponding results with natural object categories). Exemplar theorists were also able to show that exemplar models could readily predict other effects that originally appeared to support prototype theories - differential forgetting of prototypes versus training examples, and prototypes being categorized as accurately or more accurately than training examples. In short, early skirmishes strongly favored exemplar models over prototype models. Parsimony suggested no need to posit prototypes if stored instances could do the job. Since the early 1980s, there have been a number of trends and developments in research and theory with artificially constructed categories, and we give only the briefest of summaries here.
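
A tiny numerical illustration of the correlation point: two categories can have identical prototypes (feature means) yet differ completely in within-category structure, so any model that retains only the means must treat them alike. The 0/1 feature values are invented:

```python
import numpy as np

# Two invented categories with identical prototypes (feature means) but
# different internal structure. Features: size (0 = small, 1 = large) and
# sings (0 = no, 1 = yes). In A the features are perfectly (negatively)
# correlated -- small birds sing; in B they are independent.
A = np.array([[0, 1], [0, 1], [1, 0], [1, 0]])
B = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

print(A.mean(axis=0), B.mean(axis=0))                  # [0.5 0.5] [0.5 0.5] -- same prototype
print(np.corrcoef(A.T)[0, 1], np.corrcoef(B.T)[0, 1])  # -1.0 vs. 0.0 -- different structure
```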

new models

There are now more contending models for categorizing artificial stimuli, and the early models have been extensively elaborated. For example, researchers have generalized the original Medin and Schaffer (1978) exemplar model to handle continuous dimensions (Nosofsky, 1986), to address the time course of categorization (Lamberts, 1995; Nosofsky & Palmeri, 1997a; Palmeri, 1997), to generate probability estimates in inference tasks (Juslin & Persson, 2002), and to embed it in a neural network (Kruschke, 1992).

Three new kinds of classification theories have been added to the discussion: rational approaches, decision-bound models, and neural network models. Anderson (1990, 1991) proposed that an effective approach to modeling cognition in general and categorization in particular is to analyze the information available to a person in the situation of interest and then to determine abstractly what an efficient, if not optimal, strategy might be. This approach has led to some new sorts of experimental evidence (e.g., Anderson & Fincham, 1996; Clapper & Bower, 2002) and pointed researchers more in the direction of the inference function of categories. Interestingly, the Medin and Schaffer exemplar model corresponds to a special case of the rational model, and Nosofsky (1991) discussed the issue of whether the rational model adds significant explanatory power. However, there is also some evidence undermining the rational model's predictions concerning inference (e.g., Malt, Ross, & Murphy, 1995; Murphy & Ross, 1994; Palmeri, 1999; Ross & Murphy, 1996).

Decision-bound models (e.g., Ashby & Maddox, 1993; Maddox & Ashby, 1993) draw their inspiration from psychophysics and signal detection theory. Their primary claim is that category learning consists of developing decision bounds around the category that will allow people to categorize examples successfully. The closer an item is to the decision bound, the harder it should be to categorize. This framework offers a new perspective on categorization in that it may lead investigators to ask questions such as "How do the decision bounds that humans adopt compare with what is optimal?" and "What kinds of decision functions are easy or hard to acquire?" Researchers have also directed efforts to distinguish decision-bound and exemplar models (e.g., Maddox, 1999; Maddox & Ashby, 1998; McKinley & Nosofsky, 1995; Nosofsky, 1998; Nosofsky & Palmeri, 1997b). One possible difficulty with decision-bound models is that they contain no obvious mechanism by which stimulus familiarity can affect performance, contrary to empirical evidence that it does (Verguts, Storms, & Tuerlinckx, 2001).
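
In sketch form, a decision-bound account of a two-category task might look like the following: the learner is assumed to have acquired a (here, linear) boundary, and the item's distance from that boundary determines both the response and its difficulty. The linear form and the learned parameters `w` and `b` are simplifying assumptions; Ashby and colleagues' models also allow quadratic bounds and perceptual noise, which we omit.

```python
import numpy as np

def decision_bound_classify(item, w, b):
    """Classify with a linear decision bound w . x + b = 0.

    The signed distance from the bound determines the response; on the
    decision-bound view, items closer to the bound (smaller |distance|)
    are harder to categorize. `w` and `b` are assumed to have been learned.
    """
    signed_dist = (np.dot(w, item) + b) / np.linalg.norm(w)
    label = "A" if signed_dist > 0 else "B"
    return label, abs(signed_dist)  # smaller distance -> harder item

# Hypothetical bound separating two categories on two feature dimensions.
label, difficulty = decision_bound_classify(np.array([0.6, 0.4]),
                                            w=np.array([1.0, -1.0]), b=0.0)
print(label, difficulty)  # "A", ~0.14 (close to the bound, so a hard item)
```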

Neural network or connectionist models are the third type of new model on the scene (see Knapp & Anderson, 1984, and Kruschke, 1992, for examples, and Doumas & Hummel, Chap. 4, for further discussion of connectionism). It may be a mistake to think of connectionist models as comprising a single category because they take many forms, depending on assumptions about hidden units, attentional processes, recurrence, and the like. There is one sense in which neural network models with hidden units may represent a clear advance on prototype models: They can form prototypes in a bottom-up manner that reflects within-category structure (e.g., Love, Medin, & Gureckis, 2004). That is, if a category comprises two distinct clusters of examples, network models can create a separate hidden unit for each chunk (e.g., large birds versus small birds) and thereby show sensitivity to within-category correlations.
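
The following sketch conveys that bottom-up clustering idea (one "hidden unit" per within-category cluster) using a simple k-means-style update. Real models such as SUSTAIN (Love et al., 2004) recruit clusters adaptively during learning rather than fixing their number in advance, so treat this only as an illustration:

```python
import numpy as np

def cluster_prototypes(exemplars, n_clusters=2, iters=10, seed=0):
    """Recruit one "hidden unit" per within-category cluster, k-means
    style, so that, e.g., large and small birds each end up with their
    own subprototype. The fixed cluster count and batch update are
    illustrative simplifications of adaptive cluster recruitment.
    """
    rng = np.random.default_rng(seed)
    picks = rng.choice(len(exemplars), n_clusters, replace=False)
    centers = exemplars[picks].astype(float)
    for _ in range(iters):
        # Assign each exemplar to its nearest hidden unit...
        dists = np.linalg.norm(exemplars[:, None] - centers[None], axis=2)
        nearest = dists.argmin(axis=1)
        # ...then move each hidden unit to the mean of its assigned exemplars.
        for k in range(n_clusters):
            if (nearest == k).any():
                centers[k] = exemplars[nearest == k].mean(axis=0)
    return centers
```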

mixed models and multiple categorization systems

A common response to hearing about various models of categorization is to suggest that all the models may be capturing important aspects of categorization and that research should determine in which contexts one strategy versus another is likely to dominate. One challenge to this divide-and-conquer program is that the predictions of alternative models tend to be highly correlated, and separating them is far from trivial. Nonetheless, there is both empirical research (e.g., Johansen & Palmeri, 2002; Nosofsky, Clark, & Shin, 1989; Regehr & Brooks, 1993) and theoretical modeling that support the idea that mixed models of categorization are useful and perhaps necessary. Current efforts combine rules and examples (e.g., Erickson & Kruschke, 1998; Nosofsky, Palmeri, & McKinley, 1994), as well as rules and decision bounds (Ashby, Alfonso-Reese, Turken, & Waldron, 1998). Some models also combine exemplars and prototypes (e.g., Homa, Sterling, & Trepel, 1981; Minda & Smith, 2001; Smith & Minda, 1998, 2000; Smith, Murray, & Minda, 1997), but it remains controversial whether the addition of prototypes is needed (e.g., Busemeyer, Dewey, & Medin, 1984; Nosofsky & Johansen, 2000; Nosofsky & Zaki, 2002; Stanton, Nosofsky, & Zaki, 2002).

The upsurge of cognitive neuroscience has reinforced the interest in multiple memory systems. One intriguing line of research by Knowlton, Squire, and associates (Knowlton, Mangels, & Squire, 1996; Knowlton & Squire, 1993; Squire & Knowlton, 1995) favoring multiple categorization systems involves a dissociation between categorization and recognition. Knowlton and Squire (1993) used the Posner and Keele dot pattern stimuli to test amnesic and matched control patients on either category learning and transfer or a new-old recognition task (involving five previously studied patterns versus five new patterns). The amnesiacs performed very poorly on the recognition task but were not reliably different from control participants on the categorization task. Knowlton and Squire took this as evidence for a two-system model, one based on explicit memory for examples and one based on an implicit system (possibly prototype abstraction). On this view, amnesiacs have lost access to the explicit system but can perform the classification task using their intact implicit memory.

These claims have provoked a number of counterarguments. First, Nosofsky and Zaki (1998) showed that a single system (exemplar) model could account for both types of data from both groups (by assuming the exemplar-based memory of amnesiacs was impaired but not absent). Second, investigators have raised questions about the details of Knowlton and Squire's procedures. Specifically, Palmeri and Flanery (1999) suggested that the transfer tests themselves may have provided cues concerning category membership. They showed that undergraduates who had never been exposed to training examples (the students believed they were being shown patterns subliminally) performed above chance on transfer tests in this same paradigm. The debate is far from resolved, and there are strong advocates both for and against the multiple systems view (e.g., Filoteo, Maddox, & Davis, 2001; Maddox, 2002; Nosofsky & Johansen, 2000; Palmeri & Flanery, 2002; Reber, Stark, & Squire, 1998a, 1998b). It is safe to predict that this issue will receive continuing attention.

inference learning

More recently, investigators have begun to worry about extending the scope of category learning studies by looking at inference. Often, we categorize some entity to help us accomplish some function or goal. Ross (1997, 1999, 2000) showed that the category representations people develop in laboratory studies depend on use and that use affects later categorization. In other words, models of categorization ignore inference and use at their peril. Other work suggests that having a cohesive category structure is more important for inference learning than it is for classification (Yamauchi, Love, & Markman, 2002; Yamauchi & Markman, 1998, 2000a, 2000b; for modeling implications see Love, Markman, & Yamauchi, 2000; Love et al., 2004). More generally, this work raises the possibility that diagnostic rules based on superficial features, which appear so prominently in pure categorization tasks, may not be especially relevant for contexts involving multiple functions or more meaningful stimuli (e.g., Markman & Makin, 1998; Wisniewski & Medin, 1994).

feature learning

The final topic on our "must mention" list for work with artificial categories is feature learning. It is a common assumption in both models of object recognition and category learning that the basic units of analysis or features remain unchanged during learning. There is increasing evidence and supporting computational modeling that indicate this assumption is incorrect. Learning may increase or decrease the distinctiveness of features and may even create new features (see Goldstone, 1998, 2003; Goldstone, Lippa, & Shiffrin, 2001; Goldstone & Steyvers, 2001; Schyns, Goldstone, & Thibaut, 1998; Schyns & Rodet, 1997).

Feature learning has important implications for our understanding of the role of similarity in categorization. It is intuitively compelling to think of similarity as a causal factor supporting categorization - things belong to the same category because they are similar. However, this may have things backward. Even standard models of categorization assume learners selectively attend to features that are diagnostic, and the work on feature learning suggests that learners may create new features that help partition examples into categories. In that sense, similarity (in the sense of overlap in features) is the by-product, not the cause, of category learning. We take up this point again in discussing the theory theory of categorization later in this review.

reasoning

As we noted earlier, one of the central functions of categorization is to support reasoning. Having categorized some entity as a bird, one may predict with reasonable confidence that it builds a nest, sings, and can fly, although none of these inferences is certain. In addition, between-category relations may guide reasoning. For example, from the knowledge that robins have some enzyme in their blood, one is likely to be more confident that the enzyme is in sparrows than in raccoons. The basis for this confidence may be that robins are more similar to sparrows than to raccoons or that robins and sparrows share a more specific superordinate category than do robins and raccoons (birds versus vertebrates). We do not review this literature here because Sloman and Lagnado (Chap. 5) summarize it nicely.

summary

Bowing to practicalities, we have glossed over a lot of research and skipped numerous other relevant studies. The distinction between artificially created and natural categories is itself artificial - at least in the sense that it has no clear definition or marker. When we take up the idea that concepts may be organized in terms of theories, we return to some laboratory studies that illustrate this fuzzy boundary. For the moment, however, we shift attention to the more language-like functions of concepts.
