
BEHAVIORAL AND BRAIN SCIENCES (1994) 17, 655-692 Printed in the United States of America

Toward a theory of human memory: Data structures and access processes Michael S. Humphreys Department of Psychology, The University of Queensland, QLD 4072, Australia Electronic mail: [email protected]

Janet Wiles Departments of Psychology and Computer Science, The University of Queensland, QLD 4072, Australia

Simon Dennis Department of Computer Science, The University of Queensland, QLD 4072, Australia

Abstract: Starting from Marr's ideas about levels of explanation, a theory of the data structures and access processes in human memory is demonstrated on 10 tasks. Functional characteristics of human memory are captured implementation-independently. Our theory generates a multidimensional task classification subsuming existing classifications such as the distinctions between tasks that are implicit versus explicit, data driven versus conceptually driven, and simple associative (two-way bindings) versus higher order (three-way bindings), providing a broad basis for new experiments. The formal language clarifies the binding problem in episodic memory, the role of input pathways in both episodic and semantic (lexical) memory, the importance of the input set in episodic memory, and the ubiquitous calculation of an intersection in theories of episodic and lexical access. Keywords: amnesia; binding; context; data structure; lexical decision; memory access; perceptual identification; recall; recognition; representation

1. Introduction

An insight gained from computer science was that information storage can be characterized in terms of a data structure and the processes that operate on it. In applying this idea, cognitive psychology and artificial intelligence adopted the traditional data structures used in computer science (e.g., lists, trees, stacks, hash tables, etc.). Cognitive modellers would use the data structures and access processes which were familiar to them and appeared most suited to the task at hand. This choice was not always driven by the properties of the human memory system, even though the properties of human memory are in some respects radically different from those of traditional computer memories. For example, many contemporary models of human memory use various forms of direct memory access (Hintzman 1986; Humphreys et al. 1989b; Metcalfe 1990; Murdock 1982; Raaijmakers & Shiffrin 1981). Such theories also allow the same cues to be used in different ways. A new viewpoint is emerging from these models, with aspects of what had previously been regarded as reasoning and other higher-order cognitive processes being attributed to memory-access processes (Halford et al., in press; Wiles et al., in press). In this target article we describe data structures and memory-access processes from a "human memory" perspective.


Our two starting points are (1) Marr's (1982) three levels for understanding an information-processing device, and (2) the standard experimental paradigms used to demonstrate the power and complexity of human memory. Marr argued that an information-processing device must be understood at three levels. The highest level is an abstract computational theory of the task. A theory at this level must specify the informational content of the inputs and outputs and the goal of the computation. The second level must specify the structure or form of the input and output information and the algorithm which transforms the input representations into the output representation. (Note that explicit mathematical theories or computer simulations are commonly referred to as "computational" theories in cognitive psychology, but we will refer to them as algorithm-level theories.) The third level specifies the physical implementation of the representations and algorithm. Before presenting our computational theory we discuss how Marr's ideas about specifying the informational content and the goal of the computation can be applied to human memory (sect. 1.1).

We then discuss how a computation-level theory can be interpreted (sect. 1.2) and how we chose the tasks (sect. 1.3). After presenting our theory (sect. 2), we discuss how we chose the data structures and the access processes (sect. 3.1), and how the theory can be extended to new tasks (sect. 3.2). We then discuss applications to memory theory and research and how the theory can be invalidated (sect. 4). Finally, we discuss how the theory can be used in modelling higher cognitive processes (sect. 5).

1.1. The information available and the goal of the task

The first of Marr's (1982) levels is where the information that enters into a computation and the goal of the computation are specified. In human memory tasks information can be equated with what is made available by the test environment (e.g., the experimenter-supplied cues and the experimental instructions indicating which cues and memories are relevant) or by the subject (e.g., subject-supplied cues and self-instructions). Besides the information made available at the time of testing, the information abstracted from the study episode and the previous learning history of the subject must be specified: does the task require information about the occurrence of a single item or a pair of items in a list? Does it require information about a pairwise relationship between two words (e.g., table and chair are associated), a particular relationship between two words (e.g., hot is the antonym of cold), or perceptual cues linking a string of graphemes to a memorial item? There is a precedent in the memory literature for specifying the goal of the computation, although it has not been thought of in those terms. Tulving's (1972) distinction between episodic and semantic memory is widely regarded as useful when applied to tasks, but there are substantial doubts about its applicability to memories or memory systems (see the BBS commentary on Tulving 1984). We think researchers intuitively appreciated the episodic/semantic distinction as a reflection of the fact that different memory tasks have different goals, but the distinction's usefulness is largely independent of the way it is implemented, and distinct memory systems are only one of many ways it can be implemented.

1.2. Interpreting a computation-level theory

A computation-level theory specifies the inputs, outputs, and goal of a task. This answers the question of what is computed; each specified task can be considered as a function in its own right. However, our goal is to go beyond computation-level theories for individual tasks by showing how each task computation can be composed of simpler functions. We try to identify a finite set of functions (computational primitives) out of which all other functions can be composed. Once identified, they can be used to describe or make hypotheses about an indefinite number of tasks. This goal is in keeping with Marr's suggestion that a computation-level theory can provide information about the logical structure of the computation. This paper accordingly shows that there is enough structure in a limited set of tasks to allow parsimonious and plausible computational primitives to emerge. It is also possible to propose that computational primitives are directly represented at the algorithm level. We do not wish to discourage this view; it provides an important source of hypotheses about the similarities and differences between tasks.


The reification of tasks or task components as memories or memory components has a long history in theories of human and animal memory (e.g., classical conditioning vs. instrumental conditioning, short-term memory vs. long-term memory, implicit memory vs. explicit memory, etc.). However, at several points we will be drawing attention to some of the pitfalls in this view. Although Marr's concept of a computational theory is also the starting point for Anderson and Milson's (1989) rational analysis, their conclusion is very different because we are answering a different question. Anderson and Milson adopted the standard memory-access assumption in cognitive psychology and artificial intelligence (access is simply the retrieval of memory traces) and focused on the question of why access occurs. Their answer was that human memory has adapted to the environment in which it operates. In contrast, we focus on the question of what computation is performed and conclude that there is more to memory than trace retrieval. Both of these questions (What is the computation? Why is it appropriate?) are part of Marr's concept of a computational theory.

1.3. Choosing the tasks

A specification of the input/output relationship for a task can also be considered a definition of what constitutes a correct response for that task. This is clearly a fundamental issue for any memory theory, although it may appear trivial given the tasks we have chosen. We chose laboratory tasks as the second of our starting points precisely so as to achieve near-unanimous agreement about the goal of the task. From laboratory experiments memory researchers have learned, with considerable effort, that they cannot isolate current learning from previous learning. Nonsense syllables are not really nonsense, and learning and/or retention and/or retrieval are all heavily influenced by the knowledge the subjects bring to the laboratory. A naive subject will retain 80% of a recently acquired list over a 24-hour interval, but an experienced subject who has previously learned many other lists in the laboratory will retain only 20% over the same interval (Underwood 1957). Some examples of the power of the human memory system came from list discrimination experiments, where subjects can readily separate the information learned in two different lists (Anderson & Bower 1972), and from transfer paradigms such as the AB-ABr paradigm (McCullers 1965), which is formally equivalent to an exclusive-or problem. In more recent work it has been shown that there are important differences between word and part-word cues (Nelson & McEvoy 1979; Roediger et al. 1992) and that instructions to use a cue (e.g., a word or a part of a word) to recall a memory from a particular episode can produce a very different outcome from instructions to complete the part-word or to free associate to the word (Graf et al. 1984; Richardson-Klavehn & Bjork 1988). There have also been many insights about memory tasks. From a contemporary perspective these insights may seem obvious, but that was certainly not true a few years ago.

Through the work of Murdock (1974), Norman (1966), and Norman and Wickelgren (1965), among others, memory researchers learned to distinguish between single-item recognition, pair recognition, and recall. As we have already indicated, Tulving (1972) taught memory researchers to think differently about free associating to a cue and using it to recall a word that occurred in a specified list. Bain and Humphreys (1989) built on a substantial body of earlier work when they emphasized the role played by instructions about the relevant list or episode. In constructing our computational theory we propose a representation and a small set of computational primitives which suffice to specify the goals of a large number of tasks. This involves an interaction between the tasks chosen and the computational primitives proposed: as our ideas about the computational primitives have become more definite, our choice of tasks has changed to convey the role of the primitives more fully. Some insight into our preliminary thinking about the choice of tasks and primitives can be derived from the task analysis proposed by Humphreys et al. (1989b). We will also discuss the choice of computational primitives in section 3.1. Some specific criteria for task selection were formulated in the construction of our theory. We decided to exclude tasks which involved multiple retrieval, such as free recall and analogies. Our feeling was that once we had identified the basic retrieval functions, we could then use them to construct theories for the multiple-retrieval tasks. Tasks were also excluded when they did not have clear inputs and goals. We only chose tasks in which the test instructions specified the inputs and told the subject rather directly what to do. We thereby excluded tasks such as repetition priming, in which the experimenter is investigating the effects of a prior study opportunity although the instructions make no mention of this prior experience. This criterion also excluded tasks such as recall in response to an adjective and a noun, in which the instructions do not inform subjects whether they should recall a word that is related to one cue, to both cues, or to the cues in combination. Finally, tasks were excluded in which extra details about the decision process would have complicated the task specifications without illuminating the access processes. This criterion excluded forced-choice item recognition.


2. A partial description of the structures and processes involved in memory access


The issues to be discussed here include (1) notation, (2) specifying inputs and outputs, (3) data structures, (4) computational primitives, and (5) functional specifications of the tasks.


2.1. Definitions

In Table 1 we present our notation and define the inputs to memory tasks (contexts, relations, and words) and the outputs from the same tasks (words and decisions). We also define two data structures, M and L, and five functions (NotEmpty, Choose, Retrieve, Compatible, and Intersection) for accessing the information stored in those structures. These functions are the computational primitives for this level of the theory. They are primitives in the sense that they are computational mechanisms (or processes), they are limited in number, and all other functions are constructed from them. In introducing these primitives we describe a range of possible implementations in order to convey the basic concept.

Table 1. Notation, definitions of inputs and outputs, data structures, and computational primitives

Standard notation
∈ means "element of"
∃ means "there exists"
∅ is the "empty set"
⊥ is "undefined"
⊆ is "subset of"
∨ means "or"

Inputs and outputs
x and y are items (words, relations, contexts, etc.). They are singleton sets.
X and Y are sets of items.
X × Y = {x ∪ y | x ∈ X and y ∈ Y}
X − Y = {x | x ∈ X and x ∉ Y}
a and b are words. They are singleton sets.
The name relation stands for a singleton set containing a binary relation such as antonym-of or isa. RELATION is a set of relations.
list is a context: information about the time, place, or circumstances in which the learning occurred (a singleton set). LIST is a set of contexts.
S is a set of sets of items; s is an element of S. It can be a singleton set, a doubleton set, or a higher-order set.
I is the set of all items.
p and q are perceptual stimuli (singleton sets); P and Q are sets of perceptual stimuli.
The output from tasks such as single-item recognition and lexical decision is a decision, yes or no. The output from tasks such as recall and perceptual identification is a singleton set containing an item.

Data structures
M is a subset of the power set of I; m is an element of M. It is a binding between items. For example, a binding which records the occurrence of a pair of words in a context would be {list, a, b}.
L is the set of bindings between perceptual stimuli and the words which are compatible with those stimuli ({{p, a}}).

Computational primitives
The NotEmpty primitive takes a set and returns Yes if the set contains an element and No if it does not.
NotEmpty(X) = Yes if X ≠ ∅; No if X = ∅
The Choose primitive chooses an element from a set. When the set is not empty it returns a singleton set; when the set is empty the result is undefined.
Choose(X) = x, where x ∈ X and X ≠ ∅; ⊥ if X = ∅
The Retrieve primitive takes a set of sets and the memory structure M. It returns the set of items which are in a binding with any element of the input set.
Retrieve(S, M) = {x | ∃ s ∈ S, m ∈ M where s ⊆ m and x ∈ m − s}
The Compatible primitive takes a physical stimulus and returns the set of words which are bound to that stimulus.
Compatible(P, L) = {a | ∃ p ∈ P such that {p, a} ∈ L}
The Intersection primitive finds the intersection between two sets.
X ∩ Y = {x | x ∈ X and x ∈ Y}
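The set-theoretic definitions in Table 1 can be read almost directly as executable functions. The sketch below (in Python) is our own illustrative rendering and not part of the theory: it assumes that items are represented as strings, that a binding is a frozenset of items, and that the memories M and L are sets of such bindings.

# Illustrative sketch of the Table 1 primitives. The representation is assumed for
# illustration only: an item is a string, a binding is a frozenset of items, and the
# memories M and L are Python sets of bindings.

def not_empty(X):
    # NotEmpty(X): "Yes" if X contains an element, "No" if it is empty.
    return "Yes" if X else "No"

def choose(X):
    # Choose(X): a singleton set containing one element of X; undefined (None) if X is empty.
    return {next(iter(X))} if X else None

def retrieve(S, M):
    # Retrieve(S, M) = {x | there is an s in S and an m in M with s a subset of m and x in m - s}.
    out = set()
    for s in S:                      # s is a cue set (singleton, doubleton, ...)
        for m in M:                  # m is a stored binding
            if s <= m:               # the binding contains every cue in s
                out |= (m - s)       # keep the remaining members of the binding
    return out

def compatible(P, L):
    # Compatible(P, L) = {a | there is a p in P with {p, a} in L}.
    out = set()
    for p in P:
        for binding in L:
            if p in binding:
                out |= (binding - {p})
    return out

def intersection(X, Y):
    # X intersected with Y = {x | x in X and x in Y}.
    return X & Y

For example, with M = {frozenset({"list1", "pen", "book"})}, the doubleton cue S = {frozenset({"list1", "pen"})} gives retrieve(S, M) == {"book"}.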




2.1.1. Inputs and outputs. The inputs to the memory tasks are sets of items. At times we will distinguish between contexts, relations, and words. The outputs from tasks such as recall and perceptual identification are singleton sets of items and the outputs from tasks such as recognition and lexical decision are decisions. At this level we represent these inputs and outputs as symbols or names (e.g., a context list, a relation isa, words a and b, and decisions yes or no).

2.2. Data structures

2.2.1. Relations, contexts, and words. The first data structure (M) contains bindings between items (relations, contexts, and words). In this notation, a binding is simply a set of items. We will argue that different types of bindings suffice for different tasks. We focus on two types of bindings in particular. The first is a pairwise binding linking two items {x,y}; the second is a three-way binding. We are concerned with two examples of three-way bindings. The first links a context and a pair of words {list,a,b}. The second links two words and a relationship {relation,a,b}. In representing a three-way binding in this fashion we intend to allow any solution to the problem of binding three elements, provided that it preserves the identity of the individual components. For example, some solutions to the context-binding problem, derived from models of memory which assume the separate storage of memories, include: (1) the assignment of a unique identifier to every pair of items in a list (Anderson 1983); (2) storing features derived from the context and from both items in a separate memory for every occurrence of a pair of items in a context (Flexser & Tulving 1978); and (3) storing a unique image for every occurrence of a pair of items in a context (Raaijmakers & Shiffrin 1981). Other examples, derived from models of memory which assume that memories are superimposed, include: (1) using the tensor product of the three vectors which represent the context and the two items (Humphreys et al. 1989b); (2) using the convolution of the same three vectors (Weber & Murdock 1989); and (3) multiplying the activation values of the context and the items to form a unique representation of the pair of items in that context (Sloman & Rumelhart 1992). Anderson and Bower's (1973) use of labelled associations is an example of a three-way binding between a relation and a pair of words. In including context in our data structures we are simply providing a language for specifying that some tasks require the use of information about the study list or episode. Context is often reified as a tag (Anderson & Bower 1972) or a cue (Humphreys et al. 1989b). The context required for short-term memory paradigms could also be implemented as a special structure in which the last list is stored, along with processes which direct when the memory system should or should not use that structure. Nor is context necessarily a unitary construct.

2.2.2. Perceptual stimuli and words. The second data structure (L) consists of bindings between perceptual stimuli and the representations of the words in the data structure M. Our intention is to provide a language in which we can differentiate bindings between physical stimuli and words from bindings between words. There is also a physical input when the experimenter reinstates a context by reminding subjects of the list they learned last week or asks them to recall a word which is in a particular relationship with another word. We do not attempt to provide a language for discussing these physical inputs, however, because we do not have a theory of how physical and conceptual inputs combine to reinstate contexts and to specify relations.
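As a concrete, and again purely illustrative, rendering of these two data structures, the sketch below builds M from three-way {list, a, b} bindings for a hypothetical pair of study lists and builds L from {p, a} bindings between orthographic stimuli and words; the item names and the choice of frozensets are our assumptions, carried over from the earlier sketch of the primitives.

# Hypothetical study episode: pen-book and cot-pole are studied in list 1, and the same
# words are re-paired (pen-pole, cot-book) in list 2. Each binding preserves the identity
# of its components, as required of any solution to the binding problem.
M = {
    frozenset({"list1", "pen", "book"}),   # three-way binding {list, a, b}
    frozenset({"list1", "cot", "pole"}),
    frozenset({"list2", "pen", "pole"}),
    frozenset({"list2", "cot", "book"}),
}

# L binds perceptual stimuli to the words compatible with them ({p, a} bindings).
# "PEN" stands for the intact printed word; "P__" stands for a word stem that is
# shared by more than one word in the vocabulary.
L = {
    frozenset({"PEN", "pen"}),
    frozenset({"COT", "cot"}),
    frozenset({"P__", "pen"}),
    frozenset({"P__", "pole"}),
}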



2.3. Computational primitives

As mentioned above, the computational primitives are processes that form the basis for all memory operations. Like the basic instruction set of a computer, they can be combined in many ways to produce more complex functions.

2.3.1. Is the output a word or a decision? The first two functions are NotEmpty and Choose. These are used to differentiate between tasks which require a decision as an output (recognition and lexical decision) and those which require a word as an output (recall and perceptual identification). NotEmpty is a function used for making decisions. It takes a set as an input and outputs yes if the input set is not empty and no if it is empty. Choose takes a set as an input, and if the input set is not empty it outputs one element of the input set. The Choose and NotEmpty functions could be implemented as a series of search and decision operations that might be describable in a flowchart. In Chappell and Humphreys (1994) they are implemented in a single setting of an artificial neural network. The Choose and NotEmpty functions are logically required if we are going to map inputs onto outputs. That is, they are an essential part of a definition of what constitutes a correct response. In our formal system, they are also defined very generally. For example, the Choose function indicates that the output must be an element of the input set but not how that element is selected. When defined in this very general way, these functions simply serve as a reminder that a model is incomplete until a solution to the problem of producing an output has been proposed. Such a reminder has at times been needed. For example, Tulving (1983) required some persuasion before he acknowledged that recognition and recall would require different conversion operations. It should also serve as a reminder that neurophysiological explanations of memory are not complete until an explanation as to how a response is made is provided.

2.3.2. Retrieval from M. Elements that have been stored in the data structure M can be retrieved in several ways, depending on the number and type of cues available. The Retrieve function accepts sets (either contexts LIST, relations RELATION, words A, or a combination) and the data structure M as inputs. An important distinction is whether the input set is a set of singleton sets or a set of higher-order sets such as doubletons. For example, if the input consists of a set of doubleton sets (e.g., a context and word cue, LIST × A) the output of the Retrieve function will be the set of all items which occur in a binding with an instance of the context set and an instance of the cue set. If the input consists of a set of singleton sets the output of the Retrieve function will be the set of all items which occur in a binding with an element of the input set. Note that if the input set contains a context list, and the binding {list,a,b} is in memory, both a and b will be in the output. Similarly, if the input set contains a word a, and the binding {list,a,b} is in memory, then both list and b will be in the output.

Our primary concern was to retrieve the components in a three-way binding {list, a, b} by cuing with a pair of cues ({list, a}) or with any of the component cues (list or a). By allowing the input to be a multi-item set, we can accommodate the situation where the instructions do not differentiate between two or more contexts. The use of multi-item sets is also inherent in the use of superimposed memories, and we wished to allow for this possibility (van Gelder 1991).

2.3.3. Going from a perceptual stimulus to a word. The Compatible function takes a set of perceptual stimuli P and the data structure L as inputs, and outputs a set of words. The output is the set of words that are bound to any of the perceptual stimuli in P. When the physical stimulus is an intact word the output should consist of a singleton set (i.e., the word itself). When the input is a word stem or ending, the output should be the set of all words in the subject's vocabulary that share that stem or ending. We should also expect the outcome to contain more than one word when the input is a physically degraded stimulus. It is also possible, even when a word fragment uniquely specifies a word, that the functional stimulus will be a portion of the fragment. Under these conditions the Compatible function could still produce a multi-item set. The reasons for having sets of perceptual stimuli as potential inputs (rather than individual stimuli) and sets of items as potential outputs are similar to the reasons given for the Retrieve function. Tulving and Schacter's (1990) ideas about a perceptual representational system provide a starting point for one way to implement the Compatible function. In this approach the perceptual representational system would accept the physical input and transform it into a representation which could then be passed on to the memory system. Humphreys et al.'s (1989b) proposal that a distributed memory (an artificial neural network) maps peripheral codes onto central codes provides a starting point for a very different implementation in which an artificial neural net would be taught to map strings of graphemes onto memory representations. The Compatible function is then defined as the mapping between inputs and outputs produced by the trained network.

2.3.4. Intersecting two sets. The final computational primitive is Intersection. This function finds the intersection of sets which are made available by the Retrieve and the Compatible functions. An intersection function is very common in memory theories, although it is not always acknowledged that the proposed algorithm is computing an intersection. For example, searching a list to see whether a list member is related to a cue, or generating associates of a cue followed by an attempt to determine whether the generated instance was a member of the study list, are implementations of an intersection function using search processes. These ideas are preserved in such contemporary theories as those of Jacoby and Hollingshead (1990) and Nelson et al. (1992). The SAM (search of associative memory) framework can also be used to compute an intersection (Humphreys et al. 1993; Raaijmakers & Shiffrin 1981). The theories for semantic priming proposed by Becker (1980), Marslen-Wilson (1987), and Norris (1986) all involve the computation of an intersection between a set of items which are compatible with (elicited by) the physical stimulus and a set of items which are semantically related to the prime. Wiles et al.
(1991) have discussed a variety of ways to compute an intersection using artificial neural networks, and Chappell and Humphreys (1994) have used the computation of an intersection in an artificial neural network as part of a model for single-item recognition and cued recall.
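The difference between cuing Retrieve with singleton sets and with doubleton sets, and the way Intersection combines the outputs of Retrieve and Compatible, can be made concrete with a small worked example. The sketch below reuses the illustrative primitives and the hypothetical M and L defined above; the composition shown for part-word cued recall anticipates the CRPWC specification in Table 2 and is not the only possible implementation.

# Singleton cue: everything bound with "pen" comes back, contexts and targets alike.
retrieve({frozenset({"pen"})}, M)
# -> {"list1", "book", "list2", "pole"}

# Doubleton cue: only items bound with BOTH "list1" and "pen" come back, i.e., only the
# information carried by the three-way binding.
retrieve({frozenset({"list1", "pen"})}, M)
# -> {"book"}

# Cued recall with a part-word cue, composed as Choose(Retrieve(LIST, M) intersected with Compatible(P, L)):
list_items = retrieve({frozenset({"list1"})}, M)   # everything that occurred in list 1
candidates = compatible({"P__"}, L)                # words compatible with the stem
choose(intersection(list_items, candidates))       # -> {"pen"} or {"pole"}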

2.4. Functional specifications

The functional specifications for a task indicate its inputs and goal (the input/output relationship).

The specification of the input/output relationship consists of the series of computational primitives that are required. Each of these is identified by an abbreviation of the task name. The functional specifications for the five episodic and five semantic tasks are given in Table 2.

2.5. AB-ABr learning

In AB-ABr learning subjects study two lists of pairs. The cue and the target terms are the same in list 2 as they were in list 1, but they are re-paired (e.g., the pairs in list 1 are pen book, cot pole, and top road, whereas the pairs in list 2 are pen pole, cot road, and top book). Subjects can be given the cue term and asked to recall either the list-1 or the list-2 target. The inputs required by the task AB-ABr are the context (information about which list is relevant), the perceptual stimulus (the cue), and the data structures L and M. The output is a word and the goal is to recall a word which was paired with that cue in that context. Our specification of the relationship between the inputs and the outputs starts with the transformation of the perceptual stimulus into a representation of the cue. To do this, the Compatible function is applied to the perceptual stimulus. Because this stimulus is generally intact and not degraded, the output should be a singleton set (the representation of the cue). To increase the generality of our notation in this and in all of the other tasks, however, we allow for the possibility that the inputs to and the outputs from these functions are multi-item sets. One example of where this generality would be needed in AB-ABr learning is when the instructions do not differentiate between two or more contexts. The next step is to use the representation of the cue and the context to retrieve the target. This is accomplished by using the Retrieve function. The input to this function is a doubleton set, consisting of a context and a representation of the cue. The use of a doubleton set as an input to the Retrieve function provides the ability to utilize just the information stored in three-way or higher-order bindings. That is, the only items retrieved are those which occur in a binding with both cues. Thus, the output from Retrieve is the set of all items paired with that cue in that context. Because more than one item may occur in the output of Retrieve, the Choose function is applied. A three-way binding between the context, the cue, and the target must be present in some form in any algorithm-level theory which solves AB-ABr learning. This does not mean, however, that we must necessarily reify the concept of a three-way binding. For example, consider the situation where a subject studies the pairs AB and CD in list 1 and AD and CB in list 2. Now, at the time of study, provide the subjects with, or encourage them to select, mediators M1 for AB in list 1 and M2 for AD in list 2. Both M1 and M2 are preexisting associates of A. In addition, require B to be a preexisting associate of M1 and D to be a preexisting associate of M2. In addition, assume that M1 becomes associated with the list-1 context and M2 becomes associated with the list-2 context. To recall the list-1 target paired with A, first recall M1 by finding the intersection between the associates of A and the items (including mediators) which occurred in or during list 1. Then find the intersection between the associates of M1 and the items which occurred in or during list 1. The preexisting pairwise associations, along with the sequence of retrievals, produce a three-way binding between the context, the cue, and the target, but the three-way binding is not directly represented in the data structure M.
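Read as a composition of the primitives, the AB-ABr specification, AB-ABr(LIST, P, L, M) = Choose(Retrieve(LIST × Compatible(P, L), M)), can be sketched as follows. The cross-product helper and the example call are again illustrative assumptions built on the earlier sketches (in particular, compatible() returns bare word strings in this sketch, so its output is wrapped back into singleton sets before the cues are combined).

def cross(X, Y):
    # X × Y = {x union y | x in X and y in Y}: the cue-combination operation of Table 1.
    return {frozenset(x | y) for x in X for y in Y}

def ab_abr(LIST, P, L, M):
    # AB-ABr(LIST, P, L, M) = Choose(Retrieve(LIST × Compatible(P, L), M))
    cue_words = compatible(P, L)                            # perceptual stimulus -> word(s)
    cue_sets = cross(LIST, {frozenset({w}) for w in cue_words})
    return choose(retrieve(cue_sets, M))

# With the hypothetical M and L above, cuing with the printed word PEN under list-2
# instructions recalls the re-paired target:
ab_abr({frozenset({"list2"})}, {"PEN"}, L, M)               # -> {"pole"}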

2.6. List-specific pair recognition

In list-specific pair recognition (LSPR), the subject studies at least two lists of pairs. After each list the subject can be asked to discriminate between the pairs which occurred and pairs which did not occur in that list (e.g., a pair from a previous list or a pair formed from two words which were studied in different pairs in



Table 2. Formal specifications for five episodic and five semantic (lexical) tasks

Episodic memory tasks

AB-ABr learning
AB-ABr(LIST, P, L, M) = Choose(Retrieve(LIST × Compatible(P, L), M))

List-specific pair recognition (LSPR)
P and Q refer to the two members of the test pair
Alternative 1
LSPR(LIST, P, Q, L, M) = NotEmpty(Retrieve(LIST × Compatible(P, L), M) ∩ Compatible(Q, L)) ∨ NotEmpty(Retrieve(LIST × Compatible(Q, L), M) ∩ Compatible(P, L))
Alternative 2
LSPR(LIST, P, Q, L, M) = NotEmpty(Retrieve(Compatible(P, L) × Compatible(Q, L), M) ∩ LIST)

Cued recall with an extralist associate (CREA)
CREA(LIST, P, L, M) = Choose(Retrieve(LIST, M) ∩ Retrieve(Compatible(P, L), M))

Cued recall with a part-word cue (CRPWC)
CRPWC(LIST, P, L, M) = Choose(Retrieve(LIST, M) ∩ Compatible(P, L))

List-specific item recognition (LSIR)
Alternative 1
LSIR(LIST, P, L, M) = NotEmpty(Retrieve(LIST, M) ∩ Compatible(P, L))
Alternative 2
LSIR(LIST, P, L, M) = NotEmpty(Retrieve(Compatible(P, L), M) ∩ LIST)

Semantic (lexical) memory tasks

Relational recall (RR)
RR(RELATION, P, L, M) = Choose(Retrieve(Compatible(P, L) × RELATION, M))

Perceptual identification (PID)
PID(P, L, M) = Choose(Retrieve(word × isa, M)
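As one further illustration of how the specifications in Table 2 decompose into the primitives, here is a sketch of the two list-specific item recognition (LSIR) alternatives; the data representation and helper functions are the illustrative assumptions carried over from the earlier sketches, and the unpacking of LIST in the second alternative reflects that representation rather than the theory itself.

def lsir_alt1(LIST, P, L, M):
    # LSIR alternative 1: NotEmpty(Retrieve(LIST, M) intersected with Compatible(P, L))
    return not_empty(intersection(retrieve(LIST, M), compatible(P, L)))

def lsir_alt2(LIST, P, L, M):
    # LSIR alternative 2: NotEmpty(Retrieve(Compatible(P, L), M) intersected with LIST)
    word_cues = {frozenset({w}) for w in compatible(P, L)}
    contexts = {c for cue in LIST for c in cue}    # unpack the singleton context sets
    return not_empty(intersection(retrieve(word_cues, M), contexts))

# Both alternatives accept a word studied in the cued list and reject it otherwise:
lsir_alt1({frozenset({"list1"})}, {"PEN"}, L, M)   # -> "Yes"
lsir_alt2({frozenset({"list3"})}, {"PEN"}, L, M)   # -> "No"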