Neurocase (2002) Vol. 8, pp. 274–295

© Oxford University Press 2002

A Third Route for Reading? Implications from a Case of Phonological Dyslexia

Denise H. Wu, Randi C. Martin and Markus F. Damian1

Rice University, Houston, Texas, USA and 1University of Bristol, Bristol, UK

Abstract
Models of reading in the neuropsychological literature sometimes only include two routes from print to sound, a lexical semantic route and a sublexical phonological route. Other researchers hypothesize an additional route that involves a direct connection between lexical orthographic representations and lexical phonological representations. This so-called ‘third route’ has been invoked to account for the preserved oral reading of some patients who show severe semantic impairments and a disruption of the sublexical phonological route. In their summation hypothesis, Hillis and Caramazza proposed that reading in these cases could result from a combination of partial lexical semantic information and partial sublexical phonological information, thus obviating the need for the third route. The present study examined the case of a phonological dyslexic patient (ML) who exhibited preserved word reading, even for items he could not name, along with a non-word reading impairment. The relationship between ML’s naming and reading, and the influence of semantic variables on his reading were examined. The results of this examination are interpreted as supporting the existence of the third route.

Introduction
How many possible routes for reading a word aloud are available to a fluent English speaker? A general dual-route approach was initially proposed more than two decades ago (Coltheart, 1978). The fundamental property of dual-route models of reading is the idea that skilled readers have at their disposal two different procedures for converting print to speech. These two routes are, roughly speaking, a dictionary look-up procedure and a letter-to-sound conversion rule procedure. The former procedure is also called the lexical semantic route, as it goes through the lexicons and the semantic system, while the latter procedure is also called the sublexical route because it produces the sound of a word by mapping sublexical letter units (e.g. graphemes, syllables) onto sounds without consulting the lexicon (see Fig. 1). Some people refer to the sublexical route as the grapheme-to-phoneme conversion route, although there is considerable evidence that units larger than single graphemes are involved (Treiman and Zukowski, 1988; Lesch and Martin, 1998). According to the dual-route model, oral reading can be achieved through the lexical semantic route, the sublexical route, or cooperation between the two. Print is first analysed visually and letter detectors are activated. These detectors then send activation to both the input orthographic lexicon and the sublexical system. After accessing the lexical entry in the input orthographic lexicon, the activation in the lexical

semantic route proceeds to the semantic system. The relevant semantic attributes of the word would be activated and then the activation would be sent to the output phonological lexicon. The intended lexical unit should receive the most activation and be highly activated. It in turn would send activation to the phoneme system. On the other hand, for the sublexical route, only sublexical orthographic-to-sound conversion would be involved and there would be no activation of semantic or lexical properties of the word through this route. This procedure would simply convert the visual input into the corresponding phonemes based on correspondences in the language. Those representations in the phoneme system that correspond to the input graphemes are thus retrieved. According to recent versions of the dual-route model, the lexical representations that share these phonemes in the output phonological lexicon would also receive backward activation from the phoneme system (e.g. Coltheart et al., 2001). Thus, both the phoneme system and the output phonological lexicon would receive input from the lexical semantic route and the sublexical route. The selected response could be based on the most activated unit at either the output phonological level or the phonemic level (see Fig. 1). In either case, both the sublexical and lexical semantic routes would contribute to the resulting activation. An illustrative example is the computational dual-route cascaded model

Correspondence to: R. C. Martin, Psychology Department MS 25, Rice University, Houston, TX 77251, USA. Tel: +1 713 348 3417; Fax: +1 713 348 5221; e-mail: [email protected]


Fig. 1. Dual-route model of reading.

Fig. 2. Three-route model of reading.

proposed by Coltheart et al. (2001). In this model, every connection is bi-directional and, hence, each level would receive feed-forward activation and feedback from connected systems. Strong evidence for the existence of the lexical semantic route comes from deep dyslexic patients who make semantic errors in reading and show a large effect of concreteness in reading accuracy (Coltheart, 1980; Kremin, 1982). The existence of the sublexical route is also well accepted as normal readers can easily produce the sound of a pseudoword (a pronounceable non-word) that does not have a representation in the lexicon. Additional evidence for the sublexical (grapheme-to-phoneme conversion) route comes from surface dyslexia. The characteristic of this impairment is that the ability to read non-words and regular words aloud is selectively preserved relative to that of reading irregular words (e.g. reading ‘one’ as ‘own’). Moreover, exception words are often read as the grapheme-to-phoneme conversion rules specify. For example, patient KT (McCarthy and Warrington, 1986) achieved an accuracy of 100% with non-words and 81% with regular words, whereas his accuracy was only 41% with irregular words. Among his errors on irregular words, KT regularized at least 71% of them. In addition to the two routes described above, a third direct lexical route has been proposed to account for the behavior of some brain-damaged patients (e.g. Funnell, 1983; Coltheart

and Funnell, 1987; Ellis and Young, 1988; Coslett, 1991; Lambon Ralph et al., 1995). These patients showed fluent and accurate reading of at least some irregular words, despite having very impaired comprehension of the same words. Because sublexical rules cannot be applied to irregular words, word reading cannot be achieved by this route. The patients’ poor comprehension also makes it unlikely that correct word reading is achieved by the semantic route. Therefore, it has been argued that a direct route is needed for accessing the pronunciation of words without accessing semantic information. Specifically, this third route directly connects the input orthographic lexicon and the output phonological lexicon without going through the semantic system (see Fig. 2). One piece of evidence supporting this view comes from a patient reported by Coltheart et al. (1983). Their patient made errors in the comprehension of an irregular, printed word that corresponded to the meaning of its homophone (i.e. the word with the same pronunciation but different spelling). For example, this patient read ‘steak’ aloud correctly but defined it as ‘fencing post’. Because ‘steak’ is irregular, and the patient was evidently not accessing the correct semantic information, it appears that neither the sublexical nor the lexical semantic route was responsible for his correct reading. A patient, WB, reported by Funnell (1983), also provided strong evidence for the third, direct route for word reading.


WB showed a striking disruption of the sublexical route as he could not produce the sound of any single letters or non-words. He also could not pronounce pseudohomophones (e.g. ‘brane’), whereas he was able to read correctly words sharing the same phonology with these pseudohomophones (e.g. ‘brain’). This latter finding indicates that his poor non-word reading could not be attributed to difficulty in producing unfamiliar phonological forms. His word reading across a wide range of frequencies was generally good (86–93% correct). However, his word reading was probably not achieved through the semantic route. For example, on a set of words for which WB showed an impaired ability to make semantic judgments for both spoken and written forms, his ability to match spoken forms of the same words to their written forms was perfect. Given WB’s disrupted sublexical and semantic systems, it seems necessary to have a third, direct lexical route for accessing phonology without knowing the meaning of a printed word to account for his good word-reading ability.

Coslett (1991) reported a patient, WT, whose performance also supported the existence of the third route for reading. Similarly to WB, WT showed impairments on both the sublexical route and the lexical semantic route. Her reading, writing, and repetition of non-words were poor (0–67% correct). In writing, repetition, and a semantic judgment task, WT performed poorly on low-imageability words. However, her word-reading performance was excellent and unaffected by imageability.

Although the deficits observed in WB and WT strongly suggested that there is a third, direct lexical route for word reading, Hillis and Caramazza (1991, 1995) provided a dual-route account of these findings, termed the summation hypothesis. They argued that these patients’ preserved word-reading abilities were the product of the summation of output from a partially preserved semantic route and a partially preserved sublexical route. Although neither of these two mechanisms was well preserved enough in these patients to support the performance on semantic tasks (which solely rely on the semantic system) or on non-word tasks (which solely rely on the sublexical route), the cooperation of these two routes enabled relatively spared word reading, as word reading could draw on both routes. According to the summation hypothesis, only two routes were needed to account for the patients’ performance, and this was more parsimonious than hypothesizing the third route.

How can the summation hypothesis account for the patients’ behavior summarized above? For a patient with impairments on both the semantic and the sublexical routes, the dual-route model assumes that the summation of the activation from these two routes is sufficient to activate the correct response in the phonemic system and the output phonological lexicon. Suppose that a patient has a semantic deficit which results in difficulties in picture naming (e.g. Caramazza et al., 1990; Hillis et al., 1990). For example, due to this semantic deficit, the picture ‘tulip’ may activate the semantic representations of ‘rose’ and ‘daisy’ as much as ‘tulip’. Due to the activation from these semantic repres-

Fig. 3. The summation hypothesis.

entations, the corresponding representations in the output phonological lexicon and the phoneme system for these words would all in turn receive a certain amount of activation (see left-hand side of Fig. 3). In the case of picture naming, there is no other information from the input that would help to select the correct response from these candidates and the patient would thus probably produce a semantic error. Suppose that the patient also has some problem with the sublexical route, as revealed through poor non-word reading. Even though the patient cannot produce the correct response for, say, the non-word ‘trelp’, the phoneme system may still receive subthreshold activation for some of the sounds (e.g. the phonemes /t/ and /p/) from the disrupted sublexical route. In turn, the representations in the output phonological lexicon sharing these phonemes would be weakly activated through feedback from the phonemic level (see right-hand side of Fig. 3). Now, suppose that the patient is given the written word ‘tulip’. Processing via the semantic route would result in some activation of ‘tulip’, together with other semantically related words in the output phonological lexicon. At the same time, the sublexical route would also feed some activation to the phonemes in ‘tulip’ in the phoneme system and then, through feedback, to those words which share these phonemes in the output phonological lexicon. In the output phonological lexicon, the word ‘tulip’ would receive the most activation from both routes and, according to the summation hypothesis, the summation of these two sources of input could potentially


be enough for the patient to select and then produce the correct response. Based on the summation hypothesis, correct word reading for patients with disrupted lexical semantic and sublexical routes would be achieved only if semantic representations were partially preserved and the sublexical mechanism was partially functional. Therefore, these patients should not be able to read words for which they showed no semantic knowledge at all. Although patients WB and WT showed severe deficits on semantic tasks like picture naming, picture–word matching, and semantic judgment, they seemed to have some spared semantic knowledge of those words that they could correctly read but failed to make correct responses to in semantic tasks. For example, WB’s performance on picture–word matching was influenced by the semantic relatedness between the target and the foil, which would not be predicted if he had no semantic knowledge of the words. WT’s word reading was slightly influenced by the regularity of the stimuli (making eight errors out of 70 irregular words versus one error out of 70 regular words), which was also not predicted by complete disruption to the sublexical route. However, the regularity effect was not significant.

In support of their arguments, Hillis and Caramazza (1991, 1995) reported several patients who fulfilled the predictions of the summation hypothesis. One of their cases, JJ, demonstrated the strongest evidence for it. JJ never made a word-reading error on a word for which he showed some semantic knowledge. For example, he read the irregular word ‘sword’ correctly and defined it as ‘a weapon...I can’t recall anymore’. JJ read correctly words for which he showed no comprehension only if those words had a regular spelling. Therefore, Hillis and Caramazza proposed that JJ used a combination of residual semantic knowledge and partially preserved sublexical processing to accomplish reading.

GLT was another patient providing evidence for the summation hypothesis (Hillis and Caramazza, 1995). Like WB and WT, GLT showed relatively good word reading (88% correct) despite having disrupted sublexical and semantic routes. Evidence for the interaction of the two was obtained from pseudohomophone-reading and picture-naming tasks. His pseudohomophone reading was poor (20% correct) when these stimuli were presented alone. However, he showed improved pseudohomophone reading if some semantic information was provided. When a pseudohomophone was written under the category label of the corresponding real word, he correctly read 73% of these pseudohomophones. When a pseudohomophone was presented with a picture corresponding to the real word, he correctly read 88% of them, even though the items were chosen to be pictures that he could not name alone. When the same pseudohomophones were tested again without any semantic information, his performance dropped to 21% correct again. Therefore, his relatively good performance was not caused by recovery from his deficit; rather, he received aid from the relevant semantic information. Similarly, picture naming improved when some phonological information was provided.

Among those pictures that he could not name, GLT’s picture-naming ability improved when a correct phonological cue was given. His accuracy of picture naming given a phonological cue was 9/10 for cues coming from words he had read correctly and only 2/10 for cues coming from words he had misread. For example, the word ‘trumpet’ had been read correctly and a pictured trumpet plus the cue /tr/ elicited ‘trumpet’, whereas the word ‘harp’ had been read as ‘carp’ and a pictured harp plus the cue /h/ elicited ‘I don’t know’. Interestingly, incorrect but semantically related phonemic cues elicited semantic errors for 7/10 of the items he had produced correctly in response to the written word (e.g. a pictured trumpet + /fl/ elicited ‘flute’) and for 9/10 items he had misread (e.g. a pictured harp + /fl/ elicited ‘flute’ in a different session). Other evidence consistent with the summation hypothesis was that GLT’s word reading was better preserved for words he comprehended correctly. His word reading was 95% correct for the items for which he showed correct comprehension in the picture verification task, but only 72% correct for the words to which he responded incorrectly. The same pattern was found for the items used in a synonym-judgment task: his accuracy was 92% for the items for which he showed correct comprehension, but only 44% correct for the items on which he made errors.

While the results from JJ and GLT are consistent with the summation hypothesis, they do not rule out a three-route approach; that is, if the third route is damaged, reading would be expected to rely on a combination of semantic and sublexical information. Thus, to account for the data from these patients, one would have to assume that they have damage to all three routes. Hillis and Caramazza’s point, however, is that two routes may be sufficient to account for previously reported patients who, despite an impairment to the sublexical route, are able to read words for which they have poor comprehension. Their approach also provides a motivated account of why reading success should be related to the degree of disruption of semantic knowledge for certain words.

There are problems with the summation approach, though. For example, to explain WT’s non-homogeneous performance on the reading, writing, and repetition tasks (Coslett, 1991), Hillis and Caramazza suggest an additional impairment in the parsing of auditorily presented words into sublexical units. Even though this account provides a means to explain poor repetition and writing to dictation due to insufficient input from the auditorily presented stimuli into the sublexical mechanism, the summation hypothesis then loses its appeal of simplicity by suggesting more loci of deficits. Another problem with the summation hypothesis is that it seems very difficult to falsify. That is, even if a patient shows a very severe impairment in either the semantic system or the sublexical mechanism yet has good reading, it could still be possible to argue that there is some spared information within the damaged route, although there is no objective evidence of it from accuracy data (as for WB’s grapheme-to-phoneme conversion ability). For example, even though a


patient was completely unable to read aloud any non-words or sound out any individual letters, a critic may argue that the patient should show a subtle effect indicating processing of non-word phonology if properly tested, such as showing longer reaction times to reject pseudohomophones than other non-words in a lexical decision task [see Buchanan et al. (1994)]. Recently, some researchers have presented additional cases that seem to provide a strong challenge to the summation hypothesis by providing more convincing evidence of very impaired comprehension of irregular words that are read correctly. Cipolotti and Warrington (1995) reported a patient, DRN, whose comprehension of both low-frequency regular and exception words showed a sharp contrast to word reading of the same items. Although DRN could only name five of 20 pictures whose written forms consisted of irregularly spelled picturable words, such as ‘yacht’ and ‘bouquet’, he correctly read aloud 17 of the 20 irregular words. Among 69 low-frequency irregular words that he was tested on, DRN achieved an accuracy of 96% on word reading. On the other hand, he only defined 29% of the written forms of these words correctly. A similar case, DC, was reported by Lambon Ralph et al. (1995). Among the 182 irregular words given to her, DC read aloud 94% of them, but only defined 26% correctly when judged by a lax criterion. Even when a very lax criterion to score the definition was applied, she comprehended only 48%, which was still much lower than her nearly perfect reading of these words. Despite the strong evidence for the existence of a third route for reading, proponents of the summation hypothesis can still argue that it is difficult to establish convincingly a complete lack of comprehension for those words that can be pronounced correctly [see Rapp et al. (2001) for a discussion along these lines]. According to this reasoning, the summation hypothesis cannot be falsified even when there is a dramatic discrepancy between the accuracy of comprehension and reading on irregular words, as some other test may reveal the preservation of at least some semantic information. One possible means of testing the summation hypothesis, however, is to examine reaction time data for word reading and picture naming from a patient, rather than just comparing the patient’s accuracy on different tasks. As mentioned earlier, picture naming is assumed to draw on the same semantic representations as employed in reading [see also Riddoch et al. (1988)]. Thus, a disruption in naming certain pictures that is due to a semantic deficit or a deficit in accessing a phonological representation from a semantic representation should lead to slower reading times for the names of those pictures for patients with a disruption of the sublexical route. The logic of this approach is as follows: whenever the lexical semantic route is damaged for some words (and the sublexical route is disrupted), summation should be involved in the accurate reading of these words. Even though summation may result in these words being read accurately, one would expect them to be read more slowly than words for which semantics are preserved. That is, if only weak activation

Fig. 4. Computational dual-route model.

from semantics and the sublexical route is available for these words, the activation in the output phonological lexicon should take longer to reach threshold than for words for which there is greater activation coming from the lexical semantic route. Consequently, for a patient with a disruption of both the lexical semantic and the sublexical route, there should be a greater correlation between word-reading and picture-naming latencies than for control subjects, and, in particular, long reading times should be seen for words corresponding to pictures that he or she cannot name. Note that these predictions should be fulfilled if the summation hypothesis is true, even when there is no complete lack of lexical semantic or sublexical processing. As we will demonstrate in the next section via computational modeling, reaction time data are sensitive to any partial impairment on the lexical semantic and the sublexical routes. Therefore, we exploit the reaction time data for word reading and comprehension from a patient to examine the validity of the summation hypothesis.

Computational modeling
These predictions seem to us to follow rather directly from the summation hypothesis. To validate our intuitions, we implemented a connectionist dual-route model, albeit in a simplified form.

Table 1. Network specifications

(A) Letters and phonemes allowed in each position

Position    Letters          Phonemes
1           BCDGHMLNPRT      bkdghlmnprt
2           AEIOU            aeiou
3           BDGNMPT          bdgnmpt

(B) Words in each semantic category

Indoor objects    Animals    Body parts    Foods    Outdoor objects
BED               BUG        GUT           BUN      BOG
CAN               CAT        HIP           HAM      LOG
COT               COW        LEG           NUT      MUD
CUP               DOG        LIP           POP
MAT               PIG        RIB           RUM
MUG               RAM
PAN               RAT

The model, which is schematized in Fig. 4, consists of five layers of nodes: an orthographic (i.e. letter) layer, a phonological (i.e. phoneme) layer, two separate lexical layers for input orthography and output phonology, and one semantic layer. The lexical semantic pathway consists of a route in which input orthographic units map onto input orthographic lexical representations which are themselves connected to semantic features. These semantic features send activation to the output phonological lexical layer. The sublexical pathway proposed in the summation hypothesis consists of a route in which input orthographic units directly map onto output phonological units that then feed activation to the output phonological lexicon. Thus, the lexical units in the output phonological lexicon receive independent input from both the lexical semantic and the sublexical pathway.

The stimulus specifications were taken from Plaut and Shallice’s (1993) study on deep dyslexia. Their model implemented the reading of a set of 40 three- or four-letter words from five semantic categories. On the orthographic level, individual letters were coded in a position-specific manner. At the semantic level, each word was represented by the activation of, on average, 15 out of a total of 68 semantic features such that within-category similarity was greater than between-category similarity. For current purposes, these stimuli were simplified in that only three-letter words were employed, and words were excluded that did not permit a straightforward grapheme-to-phoneme transformation. Thus, 13 words were eliminated from the set. The remaining 27 words and their corresponding orthographic and phonological specifications are displayed in Table 1 [for a description of the semantic features and their assignment to the stimulus words, see Plaut and Shallice (1993)].

The general architecture of the current model was closely derived from the Interactive Activation framework described in McClelland and Rumelhart (1981, 1986), although again a few details were simplified. Each node in the network possesses a real valued activation level ranging between 0 and 1. Activation proceeds exclusively in a forward fashion through the network, and between-level connections are always excitatory.

Table 2. Parameter values used in the simulations

Parameter                                        Value
Orthography–input lexicon excitation             0.05
Input lexicon–semantic feature excitation        0.15
Semantic feature–output lexicon excitation       0.0075
Sublexical route (orthography-to-phonology)      0.05
Phonology–output lexicon excitation              0.05
Input lexicon word–word inhibition               0.15
Output lexicon word–word inhibition              0.15
Decay                                            0.05

Furthermore, the input orthographic and output phonological lexical layers implement the principle of competition by having within-level inhibitory connections. At the beginning of each trial simulation, all nodes in the trained model were set to 0. Reading of a particular word was simulated by setting the corresponding orthographic input units to a value of 1. For each simulated time step, the net input for each node in the network (excluding the orthographic input layer) was calculated by summing the weighted incoming excitatory activation and subtracting inhibitory activation (in the case of the lexical layers). Activation of each node was updated by adding the net input to the activation state, scaling it to a range of between 0 and 1, and having it decay a certain amount [for details of the activation and updating functions, see McClelland and Rumelhart (1981)]. Activation at the lexical output layer was taken to constitute the dependent variable of the simulation.

Parameter settings for the following simulations were chosen such that additional assumptions entering the model were minimized. Inhibition within both lexical layers was set to the same value. Furthermore, all connection weights in the model were chosen so that the sum of activation passing through them was equalized. For instance, the lexical input layer receives input from three orthographic units, whereas semantic features receive input from only one lexical unit. As a result, the connection weights between the input lexicon and semantics were chosen to be three times as strong as those between orthography and the input lexicon. All other weights were chosen in a similar manner (see Table 2). The only deviation from this principle is described in the next section.

As a first step in attempting to model the reading process as laid out by the summation hypothesis, the parameters were adjusted such that the lexical semantic and the sublexical routes both had an approximately equal influence on the activation level and selection probability of the target lexical node. To achieve this end, each of the two routes was selectively disabled, and the influence of the intact route was assessed. In order to assure approximately equal weight to both routes, the connections between the semantic layer and the output lexicon had to be set to a slightly lower value than suggested by the principle described in the preceding section (w = 0.0075 instead of w = 0.01) (see Table 2).
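As a concrete illustration, the update step just described can be sketched in a few lines of Python. This is not the published simulation code: the hard clipping to the range [0, 1] and the decay toward zero are simplifying assumptions, whereas the model itself follows McClelland and Rumelhart’s (1981) update functions, which differ in detail.

```python
def update_node(activation, excitation, inhibition=0.0, decay=0.05):
    """One simulated time step for a single node: sum the weighted excitatory
    input, subtract any within-level inhibition, add the net input to the
    current activation, keep the result in [0, 1], and let it decay."""
    net = excitation - inhibition
    activation = min(max(activation + net, 0.0), 1.0)
    return activation * (1.0 - decay)

# Weight-equalization principle behind Table 2: a word unit in the input lexicon
# receives input from three letter units at w = 0.05 each (3 * 0.05 = 0.15),
# matching the single w = 0.15 connection it sends on to each semantic feature.
print(update_node(0.0, excitation=3 * 0.05))  # activation after one cycle of full letter input
```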


Fig. 5. Performance of the model presented with the input ‘leg’, receiving input from both routes combined or each route separately.

Fig. 6. Activation of the target ‘leg’ as well as competing lexical items with normal setting, reduced sublexical route, and additional semantic impairment.

Figure 5 shows the resulting activation levels for each route separately as well as both routes combined when the model was presented with the input ‘leg’. With the output lexicon receiving input from both the lexical semantic and the sublexical pathways, activation levels were at a value of approximately 0.75 at cycle 50. Each route separately yielded a maximum activation of only approximately 0.60 (the influence of both routes was non-additive due to the nonlinearity of the activation function). Other sampled words showed equivalent results.

As a next step, we attempted to simulate phonological dyslexia in which, by definition, the sublexical conversion route is impaired. The performance of the model under normal settings was compared with a configuration in which the connection weights for the sublexical route were reduced from w = 0.05 to w = 0.005. Note that this manipulation did not totally eliminate the sublexical route, but greatly weakened its influence. Figure 6 shows the activation of the target word ‘leg’. Not surprisingly, the activation was substantially reduced and was maximal at 0.63.

Finally, we attempted to model the observed deficit in the lexical semantic route for some selected items (body parts, for example). For the following simulation, the weights connecting the semantic features corresponding to the category of body parts and their corresponding lexical units in the output lexicon were reduced from the default setting of 0.0075 to 0.0045. The category of body parts was selected because a patient (ML, reported later) showed selective naming difficulty on these items.

The results display a further reduction in the level of target activation. Given that the sublexical route is damaged and the lexical semantic route is selectively impaired for body part items (i.e. the word ‘leg’), the model achieved a lower level of activation for these items at 90 cycles and took more cycles to exceed the same level of activation (see Fig. 6).

Let us assume that the threshold for correct word reading, for the current trained network, is only 0.4 in terms of activation level. That is to say, when a word is presented to the model, the correct response would be produced whenever the activation level in the output phonological level reaches 0.4, and the number of cycles needed is an index of the reaction time for such a response [for an example, see Grainger and Jacobs (1996)]. If the summation hypothesis is correct, according to the performance of this computational model we would expect to observe longer word-reading latency for those words with a lexical semantic deficit in addition to the sublexical impairment because it would take more cycles to reach this threshold. Thus, a reaction time difference should be observed even if reading accuracy was normal. In order to ensure that the above patterns are not specific to the chosen input word but generalize to other words as well, all items from the semantic category ‘body parts’ were tested. Figure 7 shows the averaged outcome from these simulations.

The absolute activation level of the target word is not the only way in which the outcome of a simulation can be conceptualized. Alternatively, one can compute the selection probability of the target word, which takes into account the activation of competing units via Luce’s (1959) choice rule as well as a weighted average of the activation level at preceding time steps [see McClelland and Rumelhart (1981) for details]. These values are shown in Fig. 7B. Finally, a commonly used measure is the deviation of the activation vector from the desired pattern, as measured by the summed squared (least mean square, LMS) error. This measure is reported in Fig. 7C. For any of these measures, assuming some threshold for selecting and producing a particular word, longer reaction times are predicted for items with some lexical semantic impairment when the contribution of the sublexical pathway is diminished.
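To make the latency prediction concrete, the sketch below is an illustration rather than the published five-layer simulation: each route is collapsed into a single input line whose activation ramps up over cycles (standing in for the upstream layers), and the target node in the output lexicon integrates the weighted input with decay until it crosses the 0.4 threshold. The effective route weights multiply the per-connection values in Table 2 by an assumed number of contributing units (roughly 15 semantic features and three phonemes), and the damaged values mirror the manipulations described above. The absolute cycle counts are arbitrary; only the ordering matters.

```python
def cycles_to_threshold(w_semantic, w_sublexical,
                        threshold=0.4, decay=0.05, ramp=0.05, max_cycles=500):
    """Cycles needed for the target output-lexicon node to reach the response
    threshold, with each route reduced to one input line that ramps toward 1."""
    target = route_sem = route_sub = 0.0
    for cycle in range(1, max_cycles + 1):
        route_sem = min(1.0, route_sem + ramp)   # upstream activity building up
        route_sub = min(1.0, route_sub + ramp)
        net = w_semantic * route_sem + w_sublexical * route_sub
        target = min(1.0, target + net) * (1.0 - decay)
        if target >= threshold:
            return cycle
    return None  # threshold never reached

conditions = {
    "intact":                       (15 * 0.0075, 3 * 0.05),
    "sublexical damage":            (15 * 0.0075, 3 * 0.005),
    "sublexical + semantic damage": (15 * 0.0045, 3 * 0.005),
}
for label, (w_sem, w_sub) in conditions.items():
    print(label, cycles_to_threshold(w_sem, w_sub))
# With these toy settings the threshold is crossed after 8, 13 and 16 cycles,
# respectively: simulated reading latency grows with each additional impairment.
```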

Case study
The predictions derived from the summation hypothesis were examined by studying a patient, ML, who fits the classification of phonological dyslexic, as he reads words very well but has great difficulty with sublexical phonological coding (Lesch and Martin, 1998). According to the summation hypothesis, ML should rely more heavily on the semantic route to achieve word reading, given that his sublexical route is severely damaged. If this prediction is true, then ML should show a greater influence of semantic variables, like imageability and frequency, on reaction time for word reading. However, this pattern was not found in ML’s word-reading responses (Park and Martin, 2001).

Table 3. ML’s and five control subjects’ reading accuracy (%) and reading latency (ms) for high- and low-imageability words

                     High imageability    Low imageability
ML
  High frequency
    Accuracy         100                  99
    Reaction time    744                  752
  Low frequency
    Accuracy         95                   91
    Reaction time    760                  874
Five controls
  High frequency
    Accuracy         100                  100
    Reaction time    594                  625
  Low frequency
    Accuracy         99                   98
    Reaction time    602                  645

Table 4. Correct percentage of ML’s word reading across different word classes

Word type     Roeltgen et al. (1983)    PALPA
Nouns         100                       100
Adjectives    –                         100
Verbs         –                         100
Function      75                        85

PALPA, Psycholinguistic Assessments of Language Processing in Aphasia.

Fig. 7. Performance averaged across all five targets from the semantic category ‘body parts’ with normal setting, reduced sublexical route, and additional semantic impairment. (A) Activation, (B) selection probability, and (C) least mean square error (LMS).

He only showed a much greater imageability effect than control subjects on low-frequency words but not on high-frequency words (see Table 3 and more discussion below). If ML achieves word reading mainly via the lexical semantic route, he should show a greater correlation between picture-naming time and word-reading time than control subjects, as picture naming and word reading draw on the same semantics-to-phonology connections (see Fig. 3). That is, ML should rely on the semantics-to-phonology connections in the lexical semantic route for both tasks, whereas control subjects will use that route for the picture-naming task but that route plus the sublexical route for the word-reading task2.

A further prediction derives from ML’s selective difficulty in naming items from the category of body parts. We will demonstrate that this naming deficit is due to a disruption in the connections between semantic representations and phonological representations for these items. That is, his semantic representations for these items appear to be intact, but he has difficulty accessing their names from semantics. For those pictures that ML has difficulty naming, his word reading should be slower (relative to control items that he can name) because the weakened input from the semantic route for these items should lead to longer times for their phonological representations to reach threshold.

Patient description
ML is a 60-year-old male who suffered a left-hemisphere cerebral vascular accident in May 1990. A computed tomography scan revealed an infarction involving the left frontal and parietal opercula. Atrophy in the left temporal operculum was noted, as was mild diffuse cortical atrophy. ML had completed 2 years of college study and prior to his injury had been employed as a draftsman. ML exhibits mild agrammatism and word-finding difficulties in his spontaneous speech.

ML’s word reading is mostly intact. While he read most word classes at a very high level, testing completed several years ago indicated that he appeared to have a slight impairment in reading function words (Lesch and Martin, 1998). When tested on word lists obtained from Roeltgen et al. (1983), ML obtained 100% (40/40) correct on nouns and 75% (30/40) correct on function words (see Table 4). On the word lists from the Psycholinguistic Assessments of Language Processing in Aphasia (PALPA) (Kay et al., 1992), ML also obtained 100% (20/20) correct for nouns, adjectives, and verbs, and 85% (17/20) correct for function words (see Table 4).

Table 5. Performance of ML’s non-word reading

PALPA non-words
                 Example    Score
Three letters    cug        2/6
Four letters     boak       2/6
Five letters     snite      4/6
Six letters      dringe     0/6

Roeltgen et al. (1983) non-words
                          Simple              Complex
                          Example    Score    Example    Score
Two phonemes              ep         6/10     aub        5/10
Three phonemes            nud        4/10     shev       4/10
Four phonemes             scod       4/10     tharp      2/10
Five to seven phonemes    epzim      2/4      endurf     0/4

PALPA, Psycholinguistic Assessments of Language Processing in Aphasia.

More recent testing indicates that his function word reading is now highly accurate, but he still shows significantly longer reaction times for reading function words than other words matched in frequency and imageability. Other evidence confirms that ML’s deficit in reading function words cannot be due to their low imageability, as his reading of low-imageability words appears to be mildly impaired only for low-frequency words, and function words are high-frequency words. Table 3 shows his reading reaction times and accuracy on a set of content words from different word classes (nouns, verbs, and adjectives) where frequency and imageability were manipulated. ML did not show an imageability effect on high-frequency words. For low-frequency words, although his imageability effect was within the normal range for accuracy, his imageability effect with reaction times was much greater than that of any of the controls (see Table 3).

The greater influence of imageability on low-frequency words may be regarded as evidence for the reliance on the lexical semantic route in reading. However, this pattern does not necessarily contradict the existence of the third route, as the strength of activation deriving from the direct lexical route would be expected to be sensitive to frequency. In other words, the activation from the direct lexical route to the output phonological level would accumulate more slowly for low-frequency words, hence there would be a greater opportunity for input from the lexical semantic route to affect the reading of such words. The greater imageability effect on low-frequency words is actually consistent with the fact that ML’s sublexical route is severely impaired.

In contrast to his near perfect accuracy on reading almost all classes of words, ML performed poorly on non-word reading. When tested on non-word lists obtained from Roeltgen et al. (1983) and a set of non-words obtained from the PALPA (Kay et al., 1992), ML only obtained 38% correct across the two sets of non-words (see Table 5) (Lesch and Martin, 1998). His errors on the PALPA were mainly (11/16) lexicalizations (e.g. soaf → ‘soft’, dringe → ‘dirge’), which indicated that he utilized lexical information, rather than sublexical processing, to perform the task. Among all of the 169 non-words tested, ML correctly read only 69 (41%) of them.

When provided with the written letter, ML produced an appropriate sound for only 9/26 letters, even though he could name 23 out of 26 individual letters correctly (Lesch and Martin, 1998). More recent testing on non-word reading indicates that ML’s deficit persists. His accuracy and error patterns on the non-words from the PALPA were the same (8/26 correct with mainly lexicalization errors). Among the 26 letters, he only correctly sounded out two in lowercase and one in uppercase, which is worse than previously reported (Park and Martin, 2001). Clearly, ML’s sublexical route is impaired. Note that we are not claiming that ML’s sublexical route is completely disrupted as he is able to read some non-words correctly. However, the output of this route is degraded and, according to the modeling results, a degraded sublexical route should lead to slowed word reading.

As discussed in Martin and Lesch (1996), ML generally showed good comprehension, as evidenced by his performance on single-word processing tasks. On the Peabody Picture Vocabulary task (Dunn and Dunn, 1981), in which the subject must select from four pictures the one matching a word, he obtained a standard score of 113 (control µ = 100, σ = 15). On a task in which he had to choose from two items the one more related to a third, he scored 88% correct, where the mean for young controls was 86%. ML’s picture-naming ability was generally quite accurate, which is also an indication of preserved semantic representations. He was 97% correct on the 175-item Philadelphia Naming Test (Roach et al., 1994), which was above the mean for controls.

Experiment 1. Correlations between reading and naming
As discussed above, picture naming is thought to draw on the same semantics-to-phonology connections that are used in reading via the lexical semantic route (Riddoch et al., 1988; Caramazza et al., 1990; Hillis et al., 1990). According to the summation hypothesis, ML’s word reading should involve mainly these semantics-to-phonology connections because of his disrupted sublexical route for reading. For normal readers, word reading would rely on both the semantic and the sublexical routes. Thus, it is expected that the correlation between picture-naming and word-reading times should be higher for ML than for control subjects. On the other hand, if the correlation for ML is not higher than for control subjects, the result would indicate that ML relies on a third route for reading which is non-semantic.

Method
Participants. ML and eight age-matched controls participated in this experiment. All participants were reimbursed at the rate of $7/h.

Materials. Two hundred and fifty pictures from Snodgrass and Vanderwart (1980) were prepared for use in the naming part of this experiment.


Each picture was digitized to about 9 × 7 cm and presented in the center of the computer screen. The names of these same pictures were prepared for use in the reading part of this experiment. Each word was in 12-point font and presented in the center of the computer screen. The 250 stimuli were divided into two lists. In one session, a subject saw 125 stimuli in a picture-naming task and the other 125 in a word-naming task. In the second session, the assignment of the lists to the picture- or word-naming task was reversed. Thus, a particular item was seen only once in a session in either a picture or a word format. The order of the stimuli within a list was randomized across different subjects.

Procedure. ML and all control subjects completed two 1 h sessions approximately 1 week apart. ML and three control subjects were tested twice for the two sessions to assess the consistency of performance over time. The second session of the test and the first session of the retest were 3 weeks apart. In the first session, the first part was either picture naming or word reading, and the second part was the other task. In the second session, the order of the two tasks was the same as that in the first session. The order of the two tasks in the first and second sessions of the retest was reversed from the initial test.

In both the naming and the reading tasks, there were 10 practice trials prior to the experimental trials. Each of the stimuli (pictures or words) was presented one at a time, in the center of a Macintosh computer screen. On every trial, a fixation point was presented for 500 ms accompanied by a beep. Three hundred milliseconds after the removal of the fixation point, either a picture or a word appeared. The subjects were instructed to name the picture or to read the word as quickly and accurately as possible. The reaction time of the subject’s verbal response was recorded by a voice-activated key. The picture or the word disappeared as soon as the voice-activated key was triggered. The experimenter recorded online whether the trial was valid by pressing a key. An invalid trial may be due to the malfunction of the voice-activated key, a subject’s incorrect response, or a hesitation. The next trial was initiated 1 s after the experimenter’s online judgment. The whole experiment was tape-recorded at the same time and the subject’s naming and reading responses were transcribed after the experiment.

Results and discussion
Across the test and retest sessions, ML’s picture naming and word reading were at a near normal level of accuracy, even though he had slightly more invalid trials for reading than the controls (see Table 6). All but two of these invalid trials were caused by hesitations or stuttering, rather than other reading difficulties. ML’s mean picture-naming time was 1274 ms and his mean word-reading time was 658 ms. The standard deviation of the picture-naming times was very large because ML seemed to have a very long reaction time when more than one name could possibly be applied to the picture.

The mean reaction times for the controls (including the retest for three controls) were 939 ms for naming and 623 ms for reading (see Table 6). ML’s mean reaction time for picture naming was slightly outside the normal range whereas his mean reaction time for word reading was close to the normal subjects’ mean. Thus, across both tasks, his accuracy was not very different from the normal subjects’ and he only showed slightly slower reaction times than theirs in picture naming.

The main concern of this experiment was whether ML showed a greater correlation between the reaction times for picture naming and word reading than the controls. The uncorrected correlations are shown for ML and the controls in the top half of Table 7, where it is clear that ML did not show a greater correlation than the controls. However, given that a brain-damaged patient like ML may show greater variability in reaction times, the correlations may be low due to unreliability in the measures, and thus these correlations should be corrected for unreliability. The correlation between the test and the retest for each task was calculated for ML and the three control subjects who were tested twice. For ML, the test–retest correlation was 0.20 for reading and 0.39 for naming. For the three controls, the test–retest correlation was 0.49 for reading and 0.47 for naming (see bottom half of Table 7). The correlations between picture-naming times and word-reading times were corrected for attenuation (which adjusts for unreliability as reflected in the test–retest correlations; Pedhazur, 1982, pp. 112–114). After this correction, ML’s correlation between naming and reading latencies increased from 0.02 to 0.08. The correlation between naming and reading latencies for the controls before the correction for task reliability was 0.03, with a range from –0.12 to 0.15. After the correction, the correlation for the controls became 0.27 with a range from 0.05 to 0.40. It is clear that ML did not show a greater correlation between picture-naming times and word-reading times. Thus, the summation hypothesis was not supported.
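For reference, the correction for attenuation used here is the standard formula in which the observed correlation is divided by the square root of the product of the two test–retest reliabilities (Pedhazur, 1982). A quick check with ML’s rounded values (the 0.08 reported above presumably reflects unrounded inputs):

```python
from math import sqrt

def correct_for_attenuation(r_xy, r_xx, r_yy):
    """r_xy: observed correlation; r_xx, r_yy: test-retest reliabilities."""
    return r_xy / sqrt(r_xx * r_yy)

# ML: observed naming-reading correlation 0.02,
# naming reliability 0.39, reading reliability 0.20
print(round(correct_for_attenuation(0.02, 0.39, 0.20), 2))  # ~0.07
```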

Experiment 2. Body part naming and reading
Because ML has a deficit in the sublexical route, the summation hypothesis predicts that ML should have particular difficulty reading those words corresponding to the names of the pictures that he had difficulty naming. Although ML’s naming was highly accurate overall, the few errors that he did make seemed to be concentrated in the category of body parts. A test sampling items from several different categories verified that ML has a specific deficit in naming body parts. According to the summation hypothesis, ML should show at least some degree of difficulty reading the words corresponding to these body parts. In experiment 2, body parts and words from other categories were used as stimuli for both naming and reading. Experiment 2 had four parts. In part A, we first verified that ML showed significantly worse naming of body part pictures than control pictures.

Table 6. Naming and reading accuracy (%) and response time (ms) in experiment 1 for ML and the control subjects

                   Naming                               Reading
ML
  Accuracy         84.4                                 92.4
  Reaction time    1274 (SD = 855)                      658 (SD = 133)
Controls
  Accuracy         82.7 (range 72.8–89.6)               97.9 (range 96–99.2)
  Reaction time    939 (range 683–1199; SD 124–370)     623 (range 475–920; SD 36–155)

SD, standard deviation.

Table 7

(A) Uncorrected correlations between naming and reading times for ML and 10 control subjects

            Correlation of naming and reading times
ML          r = 0.02 (P > 0.10)
Controls    r = 0.05 (range –0.12 to 0.20)

(B) Consistency of naming and reading, and correlations between naming and reading times after correction for attenuation for ML and three controls

            Consistency of naming         Consistency of reading        Correlation of naming and reading times after correction
ML          r = 0.39 (P < 0.001)          r = 0.20 (P < 0.003)          r = 0.08
Controls    r = 0.49 (range 0.32–0.59)    r = 0.47 (range 0.36–0.53)    r = 0.27 (range 0.05–0.40)

We then attempted to elucidate the locus of this body part naming deficit. If this deficit arose at a visual or a visual-to-semantic stage of processing, then the results should not be relevant to the summation hypothesis. To address this issue, in part B we assessed naming to definition, where a deficit in body part naming would not be expected if the difficulty was in a visual stage of processing specific to pictures. In part C, we assessed comprehension of body parts versus control pictures using a picture–word matching task. Good performance on body parts on the comprehension test would indicate that the naming deficit was not due to a visual processing or semantic deficit, but rather to a category-specific deficit in accessing output phonology from semantics. Finally, in part D, we assessed word reading for the body part and control picture names.

Part A. Body part versus control picture naming

Method
Participants. ML and eight controls participated in this experiment. The control participants were matched to ML in terms of age and education. All participants were reimbursed at the rate of $7/h.

Materials. Fifty pictures of body parts were prepared for use in the naming part of this experiment. Each picture was digitized to about 9 × 7 cm and presented in the center of the computer screen.

Due to the ambiguity of some pictures depicting certain body parts, an arrow pointing to a specific body part was added to 35 of the pictures. Another 50 control pictures from several different categories were also prepared (see Appendix A). The control pictures matched the pictures of body parts in frequency (Francis and Kucera, 1982) on a one-to-one basis. An arrow pointing to a specific part of the intended object was also added to 35 control pictures and the subjects were asked to name the specific part. This was to equate the specificity of the intended response (whether it is the whole object or just a part of the object) between the two sets of pictures.

In order to verify that the body part pictures were not more visually complex than the control pictures, we obtained ratings of complexity. Ten college students were instructed to rate the visual complexity of each of the pictures of the body parts and the control objects on a five-point scale (1, very simple; 5, very complex). Complexity was defined as the amount of detail or intricacy of lines, including the arrow (if there was one) and everything else, in the picture. They were told to rate the complexity of the drawing itself, rather than the complexity of the real-life object it represented. Each subject was tested individually and shown 12 pictures from the whole set to allow them to anchor the scale. Rather than the body parts being more complex, they were actually rated as significantly less complex than the control pictures (t(9) = 6.03, P < 0.001; mean for body parts = 2.28; mean for control pictures = 2.83).

Table 8. Picture-naming accuracy (%) and response time (ms) in experiment 2A for ML and the control subjects

                   Body parts                          Control objects                     Difference
ML
  Accuracy         50 (25/50)                          74 (37/50)                          –24
  Reaction time    2365 (SD = 1784)                    1904 (SD = 1095)                    461
Controls
  Accuracy         73.3 (range 62–84)                  75.5 (range 66–86)                  –2.2 (range –20 to 14)
  Reaction time    1229 (SD = 319; range 1037–1555)    1138 (SD = 324; range 912–1423)     91 (range 10–146)

SD, standard deviation.

Procedure. In this naming task, the body part pictures were randomly intermixed with the control pictures. The procedure was the same as for picture naming in experiment 1, except that there were 100 stimuli rather than 125.

Results
Accuracy. Both the control subjects and ML had higher error rates overall on these materials than for the materials in experiment 1. The invalid trials in this task for the control subjects came mainly from hesitations, although they usually produced the intended or an acceptable response eventually. This was not true for ML’s errors on body parts. In 25 invalid trials with body parts, ML hesitated but gave the correct response in four trials. He did not produce any response in five trials, gave an inappropriate name of a body part in eight trials, and produced a non-word in one trial. For the other invalid trials, ML produced acceptable but not the intended responses. The controls performed at about the same level on the body parts and the other stimuli (73.3 versus 75.5% correct). The difference in accuracy between body parts and control stimuli was not significant for the control subjects at the group level (P > 0.82), nor for seven of eight individual subjects (all but one, P > 0.11). The accuracy difference for the control subjects ranged from –20 to 14%. One control subject showed a significantly higher error rate on body parts than control objects (62 versus 82%, χ2(n=100) = 4.96, P < 0.03). ML showed a significantly worse performance on the body part pictures (50 versus 74% correct, χ2(n=100) = 6.11, P < 0.01) and showed a larger difference in accuracy between the body part and control pictures than any of the controls (see Table 8). His accuracy in naming body part pictures was below the range of the controls, whereas his accuracy for the control pictures was close to the controls’ mean.

Reaction times. ML’s naming latency for these pictures of body parts was 2365 ms and for the control pictures was 1904 ms, although the difference was not significant due to his extremely large standard deviations (P > 0.21). The control subjects also showed somewhat longer naming times for body parts than control pictures (see Table 8). The difference was significant at the group level (F(1,7) = 21.49, P < 0.002), and for two of the eight control subjects when their data were analysed individually.

However, ML showed a much larger difference in reaction times than the controls. It took him 461 ms longer to name a body part picture than a control picture, compared with 91 ms for the control subjects (range: 10–146 ms) (see Table 8). These data confirmed the error rate data in showing that ML had specific difficulty in naming pictures of body parts.
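For concreteness, ML’s accuracy comparison amounts to a Pearson chi-square on a 2 × 2 table of correct versus incorrect responses for the two picture sets. The sketch below, using the counts from Table 8 and assuming no continuity correction, reproduces the value of about 6.11 cited above.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for a 2 x 2 table:
    a, b = correct/incorrect body-part trials; c, d = correct/incorrect control trials."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

print(round(chi_square_2x2(25, 25, 37, 13), 2))  # ~6.11 for ML's naming accuracy
```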

Part B. Naming to definition

Method
Participants. ML and four controls matched for age and education participated in this experiment. All participants were reimbursed at the rate of $7/h.

Materials. Definitions for each body part and control object used in experiment 2A were created (see Appendix B). The only item in part A not included in this experiment was ‘shoulder blade’, as no appropriate definition could be given without mentioning the word ‘shoulder’. In the definition of a body part, we tried to avoid using body parts other than very common ones (e.g. ear, foot). From ML’s performance in the previous experiment, we found that he showed no naming difficulty with pictures of very common body parts. However, it should be noted that if ML showed worse performance on the definitions of body parts, the deficit could arise either from his impaired understanding of the provided definition or from the connection between semantics and the output phonological lexicon, but not from the visual processing of pictures. Twenty fillers were also included in the testing.

Procedure. In this task, the definitions of the body parts, the control objects, and the fillers were all intermixed and presented in a random order. There were 10 practice trials prior to the experimental trials. On every trial, a beep sounded, followed at 500 ms by a fixation point presented in the center of a Macintosh computer screen. Five hundred milliseconds later, a definition appeared below the fixation point. The subjects were instructed to name a word that matched the definition as quickly and as accurately as possible. The fixation point disappeared immediately after the voice key was triggered, indicating that the subject had initiated a response. The definition stayed on the screen until the experimenter pressed a key on the response box to indicate whether the trial was valid or not and whether the response was correct.

Table 9. Naming to definition accuracy (%) in experiment 2B for ML and the control subjects

              Body parts                          Control objects                 Difference
ML            73.5 (36/49)                        90 (45/50)                      -16.5
Controls      87.8 (43/49; range 83.7-95.9)       93.5 (47/50; range 92-96)       -5.7 (range -10.3 to 3.9)

the experimenter pressed a key on the response box to indicate whether the trial was valid or not and whether the response was correct. The next trial was initiated 1 s after the experimenter’s key press. The whole experiment was simultaneously tape-recorded and the subject’s naming responses were digitized and transcribed following the testing.

Results
Consistent with his performance on picture naming, ML performed substantially worse on naming to definitions of body parts (73.5% correct) than control objects (90% correct) (χ² = 4.5, P < 0.05, n = 99). Three of the four controls also showed a somewhat worse performance on the body parts than control objects. However, neither the group difference for the controls (87.8 versus 93.5%) nor the difference for individual subjects reached significance. The difference between ML's performance on body parts and control objects was outside the normal range (see Table 9). ML made seven more errors on body parts than the control mean, whereas he made only two more errors on control objects than the control mean. A more fine-grained analysis of ML's responses revealed that he failed to provide any response to the definitions of four body parts (finger, heel, toes, tongue), while he failed to do so for only one control object (sleeves). For the four body parts that ML did not respond to at all, only 'tongue' was named by one control subject as 'buds' and by another control subject as 'mouth', both of which were acceptable responses given the definition ('the thing that people use to taste'). Among the 36 body parts that he named correctly, ML took longer than 10 s to name 42% (15/36) of them. On the other hand, this degree of slowness was true only for 24% (11/45) of the 45 control objects he named correctly. Although these long response latencies may have been due in part to ML's difficulty in reading and understanding the definitions, such difficulties would not have been expected to give rise to longer times for body parts than control items. ML's difficulty with naming to definitions of body parts mirrored his performance in experiment 2A, as the 16.5% disadvantage for body parts in definitions was similar to the 24% disadvantage for pictures (Note 3). This correspondence indicates that his naming deficit for body parts does not result from difficulty processing their visual representations or accessing semantics from their visual representations. Rather, the results suggest that his impairment with body parts arises from either the semantic system or the connections from semantics to the output phonological lexicon. In the following experiment, we used a picture–word matching task to assess ML's semantic representations of body parts in order to adjudicate between these two loci.

Part C. Comprehension of body parts and control items In this experiment, we assessed ML’s comprehension of body part and control pictures using a timed picture–word matching task for the same stimuli used in the picture-naming task in part A. If ML has an impairment in the semantic representation of body parts, we should observe a similar inferior performance on body part pictures relative to pictures of control objects, as observed in the naming task. On the other hand, if ML’s difficulty is limited to an impaired connection between the semantic system and the output phonological lexicon, then he should perform normally on picture–word matching for body part pictures. It should be noted that if ML performs well on this task, it will provide further confirmation that his difficulty with body parts is not related to visual processing or the visual representations of body parts.

Method Participants. ML and five controls participated in the picture–word matching task. The control participants were matched to ML in age and education. All participants were reimbursed at the rate of $7/h. Materials. The same set of pictures of body parts and control objects used in experiment 2A (the naming task) was used in this experiment. Each of the 100 pictures used was paired with the correct name of the picture, a semantically related distracter, or an unrelated distracter (see Appendix C). For the body part stimuli, the semantic distracter was always another body part used in experiment 2A. However, as the control objects were selected from a wide range of categories, there were not enough semantically related distracters to choose from this set. Thus, except for two items, words other than those used in experiment 2A were selected as the semantically related distracters for the control objects. The frequencies (Francis and Kucera, 1982) of the correct name of the picture, the semantically related distracter, and the unrelated distracter were matched across the two sets of stimuli (body parts: 35.8, 35.8, and 36.1; control objects: 33.6, 31, and 33, respectively). Three lists of stimuli were prepared to be tested in three separate sessions. In every list there were 50 body part pictures, 50 control object pictures, and 36 filler pictures.


Among the 50 body part pictures and the 50 control object pictures in every list, one third were paired with the correct picture names, one third with the semantically related distracters, and one third with the unrelated distracters. The 36 filler pictures were always paired with the correct picture names to balance the numbers of 'yes' and 'no' trials in every list. The same picture was paired with its correct name in, say, list 1, with a semantically related distracter in list 2, and with an unrelated distracter in list 3. Special care was taken to ensure that within every list no picture and no word was presented more than once. In every list, the pictures were presented in a random order. The order in which every subject received the lists was counterbalanced (a construction sketch of this counterbalancing follows the Procedure description below).
Procedure. All subjects completed three 20 min sessions, separated by approximately 1 week. There were 12 practice trials prior to the 136 experimental trials in every session. All the pictures (12 practice objects, 50 body parts, 50 control objects, and 36 fillers) were presented one at a time in the center of a Macintosh computer screen. On every trial, a fixation point was presented for 800 ms accompanied by a beep; 200 ms after the removal of the fixation point, a picture and a word written below the picture appeared simultaneously. The written word was the correct name of the picture, a semantically related distracter, or an unrelated distracter. The subjects were instructed to judge as quickly and as accurately as possible whether the picture and the word matched by pressing the M or V key on the keyboard. Due to ML's mild right-sided hemiparesis, he and the control subjects made the 'yes' response by pressing the M key with the left index finger and the 'no' response by pressing the V key with the left ring finger. The reaction time of the subject's key-press response was recorded by the computer. The picture and the word disappeared as soon as the key-press response was made, and, after 1200 ms, the next trial began.
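The rotation of each picture through the three pairing conditions across the three lists amounts to a Latin-square assignment. The sketch below illustrates one way such counterbalanced lists could be constructed; the item dictionaries, the shuffling, and the filler handling are illustrative assumptions rather than the authors' actual materials script (the three example items and their distracters are taken from Appendix C).

    # Sketch: building three counterbalanced picture-word matching lists.
    # Each picture appears once per list and cycles through the three pairing
    # conditions (target name, semantically related distracter, unrelated
    # distracter) across lists, so roughly one third of each category falls in
    # each condition within a list. Item dictionaries are hypothetical.
    import random

    CONDITIONS = ("target", "semantic", "unrelated")

    def build_lists(items, fillers, n_lists=3, seed=0):
        rng = random.Random(seed)
        rotation = list(range(len(items)))
        rng.shuffle(rotation)  # fixed random assignment of items to starting conditions
        lists = []
        for list_idx in range(n_lists):
            trials = []
            for pos, item in zip(rotation, items):
                cond = CONDITIONS[(pos + list_idx) % len(CONDITIONS)]
                trials.append({"picture": item["picture"], "word": item[cond],
                               "condition": cond, "match": cond == "target"})
            # fillers are always paired with their correct names ('yes' trials)
            trials += [{"picture": f["picture"], "word": f["target"],
                        "condition": "target", "match": True} for f in fillers]
            rng.shuffle(trials)  # random presentation order within the session
            lists.append(trials)
        return lists

    # Illustrative items: correct name plus the two distracters (from Appendix C).
    items = [{"picture": "ankle", "target": "ankle", "semantic": "knee", "unrelated": "choir"},
             {"picture": "arch", "target": "arch", "semantic": "shin", "unrelated": "globe"},
             {"picture": "arm", "target": "arm", "semantic": "leg", "unrelated": "sign"}]
    fillers = [{"picture": "apple", "target": "apple"}]  # hypothetical filler
    lists = build_lists(items, fillers)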

Results and discussion
In contrast to the results for picture naming and naming to definition, ML performed close to ceiling in terms of accuracy for both body parts and control pictures (see Table 10). Even with semantically related distracters, ML's performance on the body part pictures was very high and within the range of the controls. With regard to reaction latencies, the controls showed longer reaction times for body parts than for control pictures (see Table 10), consistent with the findings for picture naming. The difference was not significant at the group level (F < 1, P = 0.54), nor for three of the five controls when their data were analysed individually. For the other two controls, the reaction times on body part pictures were significantly longer than those on control object pictures (F(1,87) = 6.32, P = 0.014, and F(1,81) = 5.62, P = 0.020, respectively). ML's latencies were on average 3113 ms for pictures of body parts and 2974 ms for control pictures, but this difference was far from significant (F < 1, P = 0.88). In contrast to his picture-naming performance, ML's difference in reaction times to these two sets of pictures was within the normal range. It took him 140 ms longer to respond to a body part picture than to a control picture, much closer to the mean of 94 ms for the control subjects (range: 14–236 ms) (see Table 10). To make sure that ML did not have more difficulty in matching the body part picture with its name than the control subjects, a subset of 45 body parts and 45 control objects was selected for which the controls' performance on the two categories was relatively similar, in terms of both accuracy and response latency (see Table 11). On these pictures, ML, like the controls, showed almost perfect accuracy. Although ML's reaction times were longer overall, he was no slower with body parts than with control objects. In fact, his means went in the opposite direction, with faster times for the body parts (see Table 11). One particularly important point to note is that ML actually responded much faster to a body part picture than to a control object picture when they were paired with a semantically related distracter. The results so far indicate that ML's deficit in naming body parts cannot be attributed to a disruption of visual processing or a disruption of visual or semantic representations for body parts. First, the pictures of the body parts were rated as less complex than the control pictures. A deficit in the visual analysis of pictures should be manifested as more difficulty with more complex pictures, rather than the pattern observed. Second, ML showed a similar discrepancy between body parts and control items when naming from definitions as when naming pictures. Third, there was no evidence indicating that ML showed any specific difficulty in perceiving and making semantic judgments on pictures of body parts. In other words, ML's difficulty with body part pictures does not result from processes prior to access of the semantic representation. Instead, his selective difficulty is most probably caused by a disruption of the connection between the semantic system and the output phonological lexicon, a connection that is also part of the lexical semantic route for word reading.

Part D. Reading of body part and control item names Method Participants. ML and the eight controls who participated in experiment 2A (picture naming) also participated in this experiment. The control participants were matched to ML in age and education. All participants were reimbursed at the rate of $7/h. Materials and procedures. The written names of both the body part and control pictures used in experiment 2A (picture naming) were prepared for use in this experiment. The names of the body parts and the control objects were mixed together and presented in a random order. The procedure was the same as for word reading in experiment 1, except that there were 100 stimuli rather than 125.

Table 10. Picture–word matching accuracy (%) and latency (ms) in experiment 2C for ML and five control subjects

Accuracy                        Body parts                  Control objects             Difference
ML
  Target                        98                          98                          0
  Semantically related          96                          96                          0
  Unrelated                     96                          100                         -4
  Mean                          96                          98                          -2
Controls
  Target                        98 (range 96-100)           99 (range 96-100)           -1 (range -4 to 2)
  Semantically related          95 (range 90-100)           97 (range 92-100)           -2 (range -4 to 6)
  Unrelated                     100 (range 98-100)          100 (range 100-100)         0 (range -3 to 2)
  Mean                          97 (range 96-99)            99 (range 96-100)           -1 (range -3 to 2)

Latency
ML
  Target                        3343                        3061                        282
  Semantically related          3567                        3664                        -96
  Unrelated                     2430                        2197                        233
  Mean                          3113                        2974                        140
Controls
  Target                        1485 (range 1057-2113)      1331 (range 988-1796)       154 (range 60-245)
  Semantically related          1525 (range 1089-2240)      1452 (range 1038-1960)      73 (range -56 to 280)
  Unrelated                     1323 (range 992-1839)       1269 (range 949-1728)       54 (range 29-111)
  Mean                          1444 (range 1050-2064)      1351 (range 1004-1828)      94 (range 14-236)

Table 11. A subset (45 body parts and 45 control objects) of picture–word matching accuracy (%) and latency (ms) for ML and five control subjects

Accuracy                        Body parts                  Control objects             Difference
ML
  Target                        98                          98                          0
  Semantically related          100                         96                          4
  Unrelated                     96                          100                         -4
  Mean                          98                          98                          0
Controls
  Target                        98 (range 93-100)           99 (range 96-100)           -1 (range -2 to 0)
  Semantically related          95 (range 91-98)            95 (range 89-100)           0 (range -4 to 9)
  Unrelated                     100 (range 98-100)          100 (range 100-100)         0 (range -2 to 0)
  Mean                          97 (range 96-99)            98 (range 95-100)           -1 (range -3 to 2)

Latency
ML
  Target                        3276                        3164                        113
  Semantically related          3460                        3821                        -360
  Unrelated                     2156                        2244                        -88
  Mean                          2973                        3070                        -97
Controls
  Target                        1449 (range 1065-2022)      1347 (range 1004-1823)      102 (range 10-199)
  Semantically related          1453 (range 1057-1985)      1512 (range 1121-2057)      -59 (range -104 to 33)
  Unrelated                     1306 (range 982-1799)       1288 (range 950-1762)       18 (range 1-37)
  Mean                          1402 (range 1034-1936)      1383 (range 1040-1883)      19 (range -14 to 60)

Results
Accuracy. In contrast to his difficulty naming body part pictures compared with control pictures, ML read the body part and control words nearly perfectly (98 versus 96% correct, respectively). The control subjects performed at ceiling, with a mean accuracy of 99% for both sets of words (see Table 12). The invalid trials in this task for both ML

and the control subjects came mainly from the malfunction of the voice-activated key. Reaction times. ML’s word reading was slower overall than the mean for the controls (754 versus 594 ms). Both ML’s and the control subjects’ mean reaction times were very similar for the body part and control words: 7 ms faster

Table 12. Word-reading accuracy (%) and response time (ms) in experiment 2D for ML and the control subjects

                    Body parts                          Control objects                     Difference
ML
  Accuracy          98 (49/50)                          96 (48/50)                          2
  Reaction time     750 (SD = 217)                      757 (SD = 238)                      -7
Controls
  Accuracy          99.1 (range 94-100)                 98.9 (range 98-100)                 0.2
  Reaction time     588 (SD = 63; range 459-715)        600 (SD = 70; range 469-717)        -12 (range -2 to 29)

SD, standard deviation.

for body parts for ML and 12 ms faster for body parts for the control subjects (see Table 12). The differences were non-significant for ML and for seven of the eight controls analysed individually. When analysed as a group, the 12 ms advantage for body parts for the control subjects was significant (F(1,7) = 9.35, P < 0.02). The 7 ms effect for ML was clearly within the range of effects shown by the controls (Note 4).
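The group-level test reported here (and for the naming latencies earlier) is a repeated-measures comparison over the eight control subjects' per-subject means; with one numerator degree of freedom it is equivalent to a paired t-test, since F(1,7) = t(7)². A minimal sketch under that assumption is shown below; the per-subject means are hypothetical placeholders, not the actual data.

    # Sketch: the group-level F(1,7) test on the controls' reading times is
    # equivalent to a paired t-test on per-subject condition means (F = t**2).
    # The per-subject means below are hypothetical placeholders; only the
    # analysis logic is illustrated.
    import numpy as np
    from scipy.stats import ttest_rel

    # hypothetical mean word-reading RT (ms) for each of the eight control subjects
    body_parts      = np.array([560, 590, 575, 610, 585, 600, 570, 615])
    control_objects = np.array([565, 605, 580, 625, 590, 615, 585, 630])

    t, p = ttest_rel(body_parts, control_objects)
    print(f"t(7) = {t:.2f}, p = {p:.4f}, equivalent F(1,7) = {t**2:.2f}")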

Discussion
The contrasting results for ML on picture naming and word reading are not consistent with the summation hypothesis. That is, he showed a very large disadvantage for naming body parts in both error rates and reaction times, but no difference between body parts and control words in oral reading. As discussed earlier, the summation hypothesis would predict longer reaction times for items for which there is some disruption in the lexical semantic route. Two concerns need to be addressed, however, before strong conclusions can be drawn. First, a substantial number of trials in the picture-naming task were excluded for ML due to the production of extraneous utterances like 'uh' or 'um', even though he eventually produced the correct response. We wanted to determine whether including such trials for ML and the controls would still confirm the previous results. Second, given that the controls showed longer reaction times for body parts than control objects in the picture-naming task, the results raise the issue of whether there is something in the early visual processing of some of the body part pictures that makes these trials difficult. That is, both the controls and ML may have some difficulty in identifying body parts from pictures that is unrelated to complexity (e.g. within-category visual similarity). Because ML's picture-naming reaction times were overall much longer than the controls', he may simply have been showing an exaggerated effect of this visual processing difficulty. While ML's performance in experiment 2C argues against the hypothesis that he shows an exaggerated effect of any difficulty in the visual processing of pictures, we thought it wise to address this concern in the current experiment as well. To do so, we analysed a subset of pictures for which naming times for body parts and control pictures were closely matched for control subjects.

Further analyses
Reaction times for all correct responses. The responses of ML and five of the eight age- and education-matched controls had been recorded on tape and digitized. Reaction times were determined from the digitized responses for all trials in which the subject eventually produced the correct response. Response latency was measured as the interval between the end of the fixation point and the beginning of the intended response. The results are summarized in Table 13. ML gave the intended responses on only 31 out of the 50 pictures of body parts and his accuracy was outside the normal range. On the other hand, he named 38 out of 50 pictures of control objects correctly and performed within the normal range. Although his accuracy difference on body parts and control objects did not reach significance (χ²(n = 100) = 2.29, P = 0.13), ML also had slower responses to body parts. His naming time on body parts (3543 ms) was 848 ms longer than that on control objects (2695 ms), a difference which reached significance (F(1,67) = 4.062, P = 0.048). Again, although the controls also showed a longer naming time on body parts (1985 ms) than on control objects (1911 ms), the difference was relatively small (74 ms) and was not significant at the group level (F(1,4) = 3.87, P = 0.12), nor for four out of five control subjects when analysed individually. ML's difference was even further outside the range of the control subjects when these trials with hesitations were included.
Matching body part and control pictures on naming latencies for controls. A subset of stimuli was selected that consisted of 38 body parts and 38 control objects which had approximately the same naming times for the control subjects, 1944 and 1949 ms, respectively (see Table 14). At least three out of five control subjects produced the intended response for the pictures in this subset and the reaction time analysis showed no difference between body parts and control objects at group and individual levels (all Ps > 0.37). For these stimuli, ML still showed a much longer naming time for body parts than for control objects. The difference was 748 ms and clearly outside the normal range, but this difference failed to reach significance (P = 0.12) on this relatively small number of trials, given his large standard deviation for the body part pictures. Although the accuracy difference

Table 13. Digitized responses of picture naming and voice-key recorded responses of word reading for ML and five control subjects

                        Body parts                            Control objects                       Difference
Picture-naming task
ML
  Accuracy              62 (31/50)                            76 (38/50)                            -14
  Reaction time         3543                                  2695                                  848
Controls
  Accuracy              77.2 (39/50; range 66-84)             81.2 (41/50; range 72-86)             -4 (range -20 to 10)
  Reaction time         1985 (range 1820-2147)                1911 (range 1723-2107)                74 (range 1-140)
Word-reading task
ML
  Accuracy              98 (49/50)                            96 (48/50)                            2
  Reaction time         750                                   757                                   -7
Controls
  Accuracy              98.5 (49/50; range 94-100)            99 (49.5/50; range 98-100)            -0.5
  Reaction time         649 (range 537-715)                   659 (range 546-717)                   -10 (range -2 to 25)

Table 14. A subset (38 body parts and 38 control objects) of digitized responses of picture naming and voice-key recorded responses of word reading for ML and five control subjects

                        Body parts                            Control objects                       Difference
Picture-naming task
ML
  Accuracy              68.4 (26/38)                          78.9 (30/38)                          -10.5
  Reaction time         3475 (SD = 2355)                      2728 (SD = 930)                       748
Controls
  Accuracy              89.5 (34/38; range 79-97)             87.4 (33/38; range 76-92)             2.1 (range -10.5 to 31.6)
  Reaction time         1944 (SD = 267; range 1805-2109)      1949 (SD = 328; range 1754-2149)      -5 (range -50 to 51)
Word-reading task
ML
  Accuracy              100 (38/38)                           94.7 (36/38)                          5.3
  Reaction time         752                                   767                                   -15
Controls
  Accuracy              98.7 (37/38; range 95-100)            98.7 (37/38; range 97-100)            0
  Reaction time         639 (range 530-694)                   665 (range 545-730)                   -26 (range -49 to 3)

SD, standard deviation.

also failed to reach significance (P = 0.30), ML's results conformed to the pattern for the entire set. He named only 26 out of these 38 body parts correctly, a performance level which fell outside the normal range, whereas his accuracy on control objects was within the normal range (see Table 14). ML's word reading for the names corresponding to this subset of pictures again showed the pattern reported previously for the entire set. He was slightly more accurate and had somewhat faster reaction times for the body part names than for the control names. The reaction time advantage for the body part names was within the range shown by the controls (see Table 14). If the summation hypothesis is correct, ML should have had at least some degree of difficulty reading those words for which he had problems naming the corresponding pictures. This prediction was not fulfilled. ML read all of the selected body parts and most of the selected control objects correctly. Although his reading times were generally slower than the control subjects', ML showed no difference between reading the words of body parts and of control objects. In fact, ML had slightly faster reading times on body parts than on control objects, just like the control subjects (see Table 14). Unlike

his performance on picture naming, there was no evidence to suggest that ML had any more reading difficulty with words for body parts compared with those for control objects. As discussed earlier, the results from experiments 2A–C indicate that ML's deficit in naming body parts is due to a disruption in the link between semantics and output phonology for body parts. This link is used in the lexical semantic pathway for reading. It is clear, however, that ML's word-reading performance showed no correspondence to his picture-naming performance. Given that his sublexical route was severely damaged, as demonstrated in previous studies, a third lexical route is needed to account for his normal reading of words that caused him difficulty in picture naming (Note 5).
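The matched subset used in the analyses above equated the controls' mean naming latencies for the two categories by discarding items that pulled the category means apart. The paper does not describe the selection algorithm, so the greedy procedure below is only an illustrative sketch of one way such a subset could be chosen; the item RTs are hypothetical placeholders.

    # Sketch: selecting item subsets so that the controls' mean naming RTs for
    # body parts and control objects are approximately equal. The greedy rule
    # (drop whichever surplus item best reduces the gap between category means)
    # is an assumption; the RT dictionaries are hypothetical.
    def match_on_control_rt(body_rt, object_rt, target_n=38):
        """body_rt / object_rt map item name -> mean control naming RT (ms)."""
        body, objects = dict(body_rt), dict(object_rt)

        def mean(d):
            return sum(d.values()) / len(d)

        while len(body) > target_n or len(objects) > target_n:
            candidates = []
            for pool in (body, objects):
                if len(pool) > target_n:
                    other = objects if pool is body else body
                    for item in pool:
                        trial = {k: v for k, v in pool.items() if k != item}
                        candidates.append((abs(mean(trial) - mean(other)), pool, item))
            _, pool, item = min(candidates, key=lambda c: c[0])
            pool.pop(item)  # remove the item whose exclusion best equates the means
        return body, objects

    # hypothetical control naming RTs for a handful of items
    body_rt = {"ankle": 2100, "elbow": 1800, "shin": 2300, "waist": 1900, "iris": 2500}
    object_rt = {"wrench": 1900, "hinge": 2200, "veil": 2400, "corn": 1700, "moon": 1600}
    matched_body, matched_objects = match_on_control_rt(body_rt, object_rt, target_n=4)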

General discussion Some researchers have argued that there is a third route for reading which connects the input orthographic/graphemic lexicon directly to the output phonological lexicon. This third route has been postulated to account for preserved reading in patients with a disruption of both sublexical and lexical semantic routes (e.g. Funnell, 1983; Coslett, 1991). Hillis


and Caramazza (1991, 1995) have argued that the third route is superfluous and have offered the summation hypothesis to account for these patients’ relatively preserved word-reading abilities. According to the summation hypothesis, word reading can be achieved through the cooperation of the partially impaired lexical semantic and sublexical routes. If this summation hypothesis is correct, word reading should heavily rely on one route when the other route is damaged. In other words, the performance on word reading should be more highly correlated with the performance of using the lexical semantic route if the sublexical route is disrupted (and more highly correlated with the performance relying on sublexical grapheme-to-phoneme correspondences if the semantic route is impaired). In this study, we examined a brain-damaged patient ML who showed a severe deficit in his sublexical route. According to the summation hypothesis, ML’s word reading should rely more on the lexical semantic route and should show a large influence of semantic factors. ML’s word reading should also be more highly related to picture naming than for the control subjects. Contrary to these predictions, ML showed no larger correlation between picture naming and word reading than the controls. Moreover, for a particular set of stimuli that he had difficulty naming (body parts), ML did not show longer times reading those words than control words. The evidence indicates that his body part-naming deficit is due to a disruption in the connections between semantics and output phonology for body parts—connections that should also be used in the lexical semantic route for reading. According to the summation hypothesis, ML should show at least some degree of difficulty reading body part names, given his severely impaired sublexical route. This pattern was not found. The current discussion has been framed in terms of traditional dual-route models in which the sublexical route carries out grapheme-to-phoneme conversions to determine the pronunciation of a written word. Plaut et al. (1996) proposed a dual-route computational model to account for both normal and impaired word reading in which the nonsemantic route does not carry out grapheme-to-phoneme conversion according to rules. Instead, letter sequences are mapped to sounds using a connectionist architecture in which hidden units intervene between graphemic and phonological representations. This architecture allows for mapping between letters and sounds for both regular and irregular spelling– sound patterns in the same set of nodes and connections. The way this model accounted for the relatively preserved word reading demonstrated by patients WB and WT was very similar to the summation hypothesis. Basically, Plaut et al. (1996) claimed that the cooperation between the phonological mechanism (i.e. orthographic–phonological correspondence) and the semantic mechanism may be sufficient for good word reading (Plaut et al., 1996, pp. 102–103). As indicated by the modeling work described earlier, such an account would predict longer reading latencies for words for which there is a deficit in the lexical semantic route. This pattern was not

observed for ML, however. Thus, this dual-route connectionist model could not account for the present results either. In summary, the predictions derived from the summation hypothesis were not supported by the patient data presented here. The results for patient ML instead indicate that a direct route from lexical orthographic representations to lexical phonological representations is needed.
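The correlation-based prediction rehearsed in this discussion (that, with a disrupted sublexical route, ML's word-reading latencies should track his picture-naming latencies more closely than the controls' do) can be made concrete as an item-level correlation. A minimal sketch, assuming per-item naming and reading latencies are available as paired arrays, is given below; the data shown are hypothetical.

    # Sketch: item-level Pearson correlation between picture-naming RT and
    # word-reading RT for the same items. Under the summation hypothesis, a
    # patient relying heavily on the lexical semantic route should show a
    # larger correlation than controls; ML did not. The RTs are hypothetical.
    import numpy as np
    from scipy.stats import pearsonr

    naming_rt  = np.array([2100, 3400, 1900, 2800, 2500, 3100])  # ms, per item
    reading_rt = np.array([ 720,  760,  705,  735,  750,  745])  # ms, same items

    r, p = pearsonr(naming_rt, reading_rt)
    print(f"r = {r:.2f}, p = {p:.3f}")  # compare ML's r with each control subject's r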

Acknowledgements
This research was supported by NIH grant no. DC-00218 to Rice University. We would like to thank ML and the control subjects for their participation in this project. We would also like to thank Michael McCloskey for helpful discussion, and Jessica Mejia, Angela McHardy, and Laura Matzen for their assistance in testing.

Notes
1. Further reduction in the semantic influence causes the selection of an erroneous lexical item. This is due to the fact that under Plaut and Shallice's semantic specification, the similarity between items (even across semantic categories) is quite high. Therefore, when the weights between the semantic and the lexical output layer are reduced below a certain value, related words from semantically unimpaired categories will receive more net input than target words from semantically impaired categories. In the above simulation, the impaired sublexical pathway was simulated by modifying connection weights such that the incoming net input was 10% of the original level. In contrast, the semantic impairment was modeled by merely reducing semantic input to 60% of the original level. However, both modifications show comparable reductions in target activation levels. This again demonstrates the non-linearity of the activation function: when one source of input has already been substantially reduced (as in the impairment of the sublexical pathway), the modification of the remaining pathway will show a magnified impact (see the numerical sketch after these notes).
2. Even though there are other processes involved in picture naming that are not shared by word reading (i.e. visual analysis of pictures, access to stored structural knowledge about objects, access to semantic knowledge from the structural representations), a greater correlation between word-reading and picture-naming latencies should still be expected for patients with a disruption to the sublexical route than for control subjects, as the contribution from the sublexical route would mitigate the effect of the lexical route for normal subjects.
3. One point to be noted is that both ML and the controls performed better in naming to definitions than to pictures (see Tables 8 and 9). Although this result is somewhat counterintuitive, as one might expect definitions to be more ambiguous than pictures, it should be kept in mind that the invalid trials of picture naming included hesitations (especially so for the control subjects) even though correct responses were eventually given. Such trials were regarded as accurate in the naming to definition task as long as the intended responses were produced. In addition, due to the nature of some pictures (e.g. the pictures of 'cheek' and 'bread' could also be named as 'face' and 'sandwich', respectively), their definitions ('the part of the face where rouge is put' for 'cheek' and 'the food that is cut into slices and eaten with butter' for 'bread') are actually less ambiguous, thus yielding higher accuracy.
4. The one control subject who performed significantly worse on naming the body part pictures compared with control items did not show any evidence of difficulty reading the body part names. For this subject, the mean accuracies on the body part pictures versus the control pictures were 62 and 82%, respectively, and the mean reaction times were 1262 and 1245 ms, respectively.
5. Given that ML showed an imageability effect on word reading with low-frequency words, his word reading of low-frequency body parts may also be expected to show some influence from the impaired lexical semantic route. Presumably this was not found because the frequency of the body part stimuli was substantially higher than that for the low-imageability words (mean frequencies 33.64 and 8.58, respectively).
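To make the non-linearity point in Note 1 concrete, the sketch below runs a single logistic output unit that sums a sublexical and a semantic input. The 10% and 60% scaling factors follow the note; the baseline input strengths and the unit itself are illustrative assumptions, not the actual Plaut and Shallice simulation.

    # Sketch: why reducing the semantic input matters more once the sublexical
    # input has already been cut. A single logistic unit sums the two inputs;
    # the baseline strengths (3.0 + 3.0) are illustrative assumptions, while the
    # 10% and 60% scaling factors are the ones described in Note 1.
    import math

    def activation(sublexical, semantic):
        return 1.0 / (1.0 + math.exp(-(sublexical + semantic)))

    base_sub, base_sem = 3.0, 3.0

    intact       = activation(base_sub,       base_sem)
    sem_cut_only = activation(base_sub,       0.6 * base_sem)   # semantic reduced to 60%
    sub_cut_only = activation(0.1 * base_sub, base_sem)         # sublexical reduced to 10%
    both_cut     = activation(0.1 * base_sub, 0.6 * base_sem)

    print(f"intact:                 {intact:.3f}")
    print(f"semantic cut alone:     {sem_cut_only:.3f}  (drop {intact - sem_cut_only:.3f})")
    print(f"sublexical cut alone:   {sub_cut_only:.3f}")
    print(f"semantic cut on top:    {both_cut:.3f}  (drop {sub_cut_only - both_cut:.3f})")
    # The same 40% semantic reduction produces a much larger drop in activation
    # when the sublexical input has already been reduced, because the unit is no
    # longer in the saturated region of the logistic function.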

References
Buchanan L, Hildebrandt N, MacKinnon G. Phonological processing of nonwords by a deep dyslexic patient: A rowse is implicitly a rose. Journal of Neurolinguistics 1994; 8: 163–81.
Caramazza A, Hillis AE, Rapp BC, Romani C. The multiple semantic hypothesis: Multiple confusions? Cognitive Neuropsychology 1990; 7: 161–89.
Cipolotti L, Warrington EK. Semantic memory and reading abilities: a case report. Journal of the International Neuropsychological Society 1995; 1: 104–10.
Coltheart M. Lexical access in simple reading tasks. In: Underwood G, editor. Strategies of information processing. San Diego, CA: Academic Press, 1978: 151–216.
Coltheart M. Deep dyslexia: A review of the syndrome. In: Coltheart M, Patterson KE, Marshall JC, editors. Deep dyslexia. London: Routledge & Kegan Paul, 1980.
Coltheart M, Funnell E. Reading and writing: One lexicon or two? In: Allport DA, Mackay DG, Prinz W, Scheerer E, editors. Language perception and production: Relationships among listening, speaking, reading, and writing. London: Academic Press, 1987.
Coltheart M, Patterson K, Marshall JC. Deep dyslexia. London: Routledge & Kegan Paul, 1980.
Coltheart M, Masterson J, Byng S, Prior M, Riddoch J. Surface dyslexia. Quarterly Journal of Experimental Psychology 1983; 35A: 469–95.
Coltheart M, Rastle K, Perry C, Langdon R, Ziegler J. DRC: A dual route cascaded model of visual word recognition. Psychological Review 2001; 108: 204–56.
Coslett HB. Read but not write 'idea': Evidence for a third reading mechanism. Brain and Language 1991; 40: 425–43.
Dunn L, Dunn L. Peabody Picture Vocabulary Test-Revised. Circle Pines, MN: American Guidance Service, 1981.
Ellis AW, Young AW. Human cognitive neuropsychology. Hove: Lawrence Erlbaum Associates, 1988.
Francis WN, Kucera H. Frequency analysis of English usage: Lexicon and grammar. Boston: Houghton Mifflin, 1982.
Funnell E. Phonological processes in reading: New evidence from acquired dyslexia. British Journal of Psychology 1983; 74: 159–80.
Grainger J, Jacobs AM. Orthographic processing in visual word recognition: a multiple read-out model. Psychological Review 1996; 103: 518–65.
Hillis AE, Caramazza A. Mechanisms for accessing lexical representations for output: Evidence from a category-specific semantic deficit. Brain and Language 1991; 40: 106–44.
Hillis AE, Caramazza A. Converging evidence for the interaction of semantic and sublexical phonological information in accessing lexical representations for spoken output. Cognitive Neuropsychology 1995; 12: 187–227.
Hillis AE, Rapp BC, Romani C, Caramazza A. Selective impairment of semantics in lexical processing. Cognitive Neuropsychology 1990; 7: 191–243.
Kay J, Lesser R, Coltheart M. Psycholinguistic Assessments of Language Processing in Aphasia: reading and spelling. Hove: Lawrence Erlbaum Associates, 1992.
Kremin H. Alexia: theory and research. In: Malatesha RN, Aaron PG, editors. Reading disorders: Varieties and treatments. New York: Academic Press, 1982.
Lambon Ralph MA, Ellis AW, Franklin S. Semantic loss without surface dyslexia. Neurocase 1995; 1: 363–9.
Lesch MF, Martin RC. The representation of sublexical orthographic–phonologic correspondences: Evidence from phonological dyslexia. Quarterly Journal of Experimental Psychology 1998; 51A: 905–38.
Luce RD. Individual choice behavior. New York: Wiley, 1959.
Martin RC, Lesch MF. Associations and dissociations between language impairment and list recall: Implications for models of STM. In: Gathercole S, editor. Models of short-term memory. Hove: Lawrence Erlbaum Associates, 1996.
McCarthy R, Warrington EK. Phonological reading: Phenomena and paradoxes. Cortex 1986; 22: 359–80.
McClelland JL, Rumelhart DE. An Interactive Activation model of context effects in letter perception: Part 1. An account of basic findings. Psychological Review 1981; 88: 375–407.
McClelland JL, Rumelhart DE, PDP Research Group. Parallel distributed processing: explorations in the microstructure of cognition. Vol. 2: Psychological and biological models. Cambridge, MA: MIT Press, 1986.
Park N, Martin RC. Reading versus writing: evidence for the dissociation of input and output graphemic lexicons. Poster presented at the Cognitive Neuroscience Society 8th Annual Meeting, New York, 2001.
Pedhazur EJ. Multiple regression in behavioral research, 2nd edn. Fort Worth, TX: Holt, Rinehart, & Winston, 1982.
Plaut DC, Shallice T. Deep dyslexia: A case study of connectionist neuropsychology. Cognitive Neuropsychology 1993; 10: 377–500.
Plaut DC, McClelland JL, Seidenberg MS, Patterson K. Understanding normal and impaired word reading: Computational principles in quasi-regular domains. Psychological Review 1996; 103: 56–115.
Rapp B, Folk JR, Tainturier M. Word reading. In: Rapp B, editor. The handbook of cognitive neuropsychology: What deficits reveal about the human mind. Philadelphia: Psychology Press, 2001.
Riddoch MJ, Humphreys GW, Coltheart M, Funnell E. Semantic systems or system? Neuropsychological evidence re-examined. Cognitive Neuropsychology 1988; 5: 3–25.
Roach A, Schwartz MF, Martin N, Grewal RS, Brecher A. The Philadelphia Naming Test: scoring and rationale. Clinical Aphasiology 1996; 24: 121–33.
Roeltgen DP, Cordell C, Sevush S. A battery of linguistic analysis for writing and reading. International Neuropsychological Society Bulletin 1983; 31 October.
Snodgrass JG, Vanderwart M. A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Learning, Memory, and Cognition 1980; 6: 174–215.
Treiman R, Zukowski A. Units in reading and spelling. Journal of Memory and Language 1988; 27: 466–77.

Received on 3 January, 2001; resubmitted on 15 February, 2002; accepted on 8 March, 2002


A third route for reading? Implications from a case of phonological dyslexia



Journal: Neurocase 2002; 8: 274–95
Neurocase Reference Number: O261
Primary diagnosis of interest: Phonological dyslexia
Author's designation of case: ML
Key theoretical issue: The necessity of a direct route from input orthographic lexicon to output phonological lexicon for reading
Key words: word reading; summation hypothesis; phonological dyslexia
Scan, EEG and related measures: Computed tomography scan
Other assessment: Picture naming and word reading of pictures from Snodgrass and Vanderwart (1980). Picture naming, definition naming, picture–word matching and word reading of body parts and control objects
Lesion location: Infarction of left frontal and parietal opercula and mild atrophy of left temporal operculum
Lesion type: Infarction and atrophy
Language: English

Appendix A. Pictures of body parts and control objects used in experiment 2A

Body part (frequency a): Ankle (8), Arch (13), Arm (94), Beard (26), Belly button (<1), Calf (11), Cheek (20), Chin (27), Ear (29), Earlobe (<1), Elbow (10), Eye (122), Eyebrow (4), Eyelash (<1), Finger (40), Fingernail (<1), Fist (26), Foot (70), Forehead (16), Hair (148), Hand (431), Heel (9), Hip (10), Iris (<1), Jaw (16), Knee (35), Knuckle (3), Leg (58), Lips (18), Mouth (103), Mustache (5), Neck (81), Nipple (<1), Nose (60), Nostril (1), Palm (22), Pupil (20), Rib (1), Shin (3), Shoulder (61), Shoulder blade (<1), Skull (3), Teeth (103), Thigh (9), Thumb (10), Thumb nail (<1), Toes (9), Tongue (35), Waist (11), Wrist (10). Mean 42.64; range <1–431.

Control object (frequency a): Diamond (8), Lens (12), Key (88), Cap (27), Tusk (<1), Sleeve (11), String (19), Shirt (27), Belt (29), Filament (1), Button (10), Window (122), Pedal (4), Doorknob (<1), Bread (41), Parachute (1), Cigarette (25), Box (70), Arrow (14), Paper (157), Church (348), Pants (9), Trunk (8), Shoelace (<1), Onion (16), Fish (35), Mane (2), Roof (59), Candle (18), Doctor (100), Buckle (5), Bottle (76), Giraffe (<1), Moon (60), Hinge (1), Envelope (21), Crown (19), Accordion (1), Ruler (3), Wheel (56), Wrench (<1), Pouch (2), Bridge (98), Drawer (8), Hose (9), Snowman (<1), Veil (8), Corn (34), Leaf (12), Pillow (8). Mean 38.23; range <1–348.

a Number of occurrences among 1 014 000 graphic words in the corpus.


Appendix B. Definitions of body parts and control objects used in experiment 2B

Body part: definition
Ankle: The part of the leg which can be sprained
Arch: The curved part of the bottom of the foot
Arm: The part of the body used to throw things
Beard: The hair on a man's face
Belly button: The indentation on the surface of the abdomen
Calf: The back part of the lower half of a leg
Cheek: The part of the face where rouge is put
Chin: The lower part of the face below the mouth
Ear: The thing that people hear with
Earlobe: The part of the ear that can be pierced
Elbow: The joint in the middle of the arm
Eye: The thing that people see with
Eyebrow: The hair on the bony arch just above the eye
Eyelash: The little hairs around the eye
Finger: The part of the hand on which a ring is worn
Fingernail: The hard substance that protects the end of the finger
Fist: The ball that a hand is clenched into
Foot: The body part that you wear a shoe on
Forehead: The top part of the face
Hair: The thing that grows on people's heads
Hand: The thing that you shake when you meet someone
Heel: The back part of the bottom of the foot
Hip: The joint at the top of the leg
Iris: The colored part of the eye
Jaw: The bone that moves up and down for chewing
Knee: The joint in the middle of the leg
Knuckle: The joint in the middle of a finger
Leg: The body part used to walk or run
Lips: The body part that chap stick is put on
Mouth: The body part used for talking or eating
Mustache: The hair on a man's upper lip
Neck: The part of the body that holds the head up
Nipple: The body part that an infant sucks on
Nose: The thing that people smell with
Nostril: The opening in the nose
Palm: The part of the hand that touches when people clap
Pupil: The black circle in the center of the eye
Rib: The bone in a person's chest
Shin: The front part of the lower half of the leg
Shoulder: The joint at the top of the arm
Shoulder blade: n/a
Skull: The bone inside a person's head
Teeth: The things inside the mouth used for chewing
Thigh: The upper part of the leg
Thumb: The appendage on a hand that is shorter than the fingers
Thumb nail: The hard substance at the end of a person's thumb
Toes: The appendages on the front of a foot
Tongue: The thing that people use to taste
Waist: The part of the body where a belt is worn
Wrist: The joint between the arm and the hand

Control object: definition
Diamond: The expensive stone on an engagement ring
Lens: The glass part of a pair of glasses
Key: The thing for opening and closing locks
Cap: The kind of hat that a baseball player wears
Tusk: The long front teeth of an elephant
Sleeve: The piece of clothing that covers the arm
String: The part of a violin that the bow touches
Shirt: The item of clothing worn on the top half of the body
Belt: The item of clothing worn to hold pants up
Filament: The part of a light bulb that burns
Button: The common fastener used on shirts
Window: The opening on the wall that a curtain covers
Pedal: The thing that a bicycle rider turns to power the bicycle
Doorknob: The thing that a person turns to open a door
Bread: The food that is cut into slices and eaten with butter
Parachute: The thing that helps a skydiver to land safely
Cigarette: The thing that people smoke
Box: The thing that is used for packing or storing things
Arrow: The sharp thing shot from a bow
Paper: The flat thing for writing or typing on
Church: The building where people go to worship on Sunday
Pants: The clothing that covers the legs
Trunk: The nose of an elephant
Shoelace: The part of a shoe that gets tied
Onion: The vegetable that can make people cry
Fish: The animal that has fins and lives underwater
Mane: The hair on a horse's neck
Roof: The part of a house covered with shingles
Candle: The thing made of wax that has a wick that burns
Doctor: The professional who prescribes drugs to sick people
Buckle: The fastener on a belt
Bottle: The container that you buy wine in
Giraffe: The spotted animal with a very long neck
Moon: The large disk that shines in the sky at night
Hinge: The part of a door that attaches to the frame
Envelope: The thing used to send letters in
Crown: The thing that a king wears on his head
Accordion: The portable instrument with a keyboard and bellows
Ruler: The measuring tool that can help to draw straight lines
Wheel: The part of a car that is round and rolls along the road
Wrench: The tool used for twisting bolts
Pouch: The pocket where baby kangaroos are carried
Bridge: The road over water
Drawer: The part of a desk or dresser where things are stored
Hose: The long rubber tube for carrying water
Snowman: The thing that children build that has a carrot nose
Veil: The thing that a bride wears over her face
Corn: The yellow vegetable that gets eaten on the cob
Leaf: The green things that grow on trees
Pillow: The thing for people to rest their heads on in bed


Appendix C. Pictures of body parts and control objects used in experiment 2C

Body part (semantically related distracter; unrelated distracter): Ankle (Knee; Choir), Arch (Shin; Globe), Arm (Leg; Sign), Beard (Mustache; Storm), Belly button (Nipple; Gaggle), Calf (Thigh; Bee), Cheek (Forehead; Sauce), Chin (Ear; Ranch), Ear (Chin; Movie), Earlobe (Nostril; Pebble), Elbow (Wrist; Puzzle), Eye (Heel; Radi), Eyebrow (Pupil; Dime), Eyelash (Iris; Scorpion), Finger (Thumb; Highway), Fingernail (Thumb nail; Pecan), Fist (Palm; Treat), Foot (Hand; Rain), Forehead (Cheek; Jar), Hair (Skull; Letter), Hand (Foot; Water), Heel (Eye; Cork), Hip (Waist; Rag), Iris (Eyelash; Strawberry), Jaw (Teeth; Rail), Knee (Ankle; Yard), Knuckle (Toes; Chalk), Leg (Arm; Railroad), Lips (Tongue; Brick), Mouth (Nose; Afternoon), Mustache (Beard; Kitten), Neck (Shoulder; File), Nipple (Belly button; Macaroni), Nose (Mouth; Garden), Nostril (Earlobe; Wagon), Palm (Fist; Joke), Pupil (Eyebrow; Vacuum), Rib (Shoulder blade; Kite), Shin (Arch; Magnet), Shoulder (Neck; Beach), Shoulder blade (Rib; Clamp), Skull (Hair; Crumb), Teeth (Jaw; Ball), Thigh (Calf; Duck), Thumb (Finger; Hail), Thumb nail (Fingernail; Neighbor), Toes (Knuckle; Wheat), Tongue (Lips; Bench), Waist (Hip; Shed), Wrist (Elbow; Luggage).

Control object (semantically related distracter; unrelated distracter): Diamond (Gold; Sack), Lens (Magnifying glass; Grill), Key (Lock; Poetry), Cap (Helmet; Horizon), Tusk (Ivory; Pancake), Sleeve (Pocket; Log), String (Bow; Spy), Shirt (Pants; Mirror), Belt (Rope; Lesson), Filament (Battery; Robot), Button (Zipper; Cigar), Window (Curtain; Game), Pedal (Step; Tattoo), Doorknob (Hinge; Pin cushion), Bread (Butter; Tissue), Parachute (Airplane; Scribble), Cigarette (Lighter; Pond), Box (Bowl; Dust), Arrow (Knife; Bush), Paper (Notebook; Floor), Church (Office; Family), Pants (Shirt; Toast), Trunk (Hoof; Recipe), Shoelace (Sock; Claw), Onion (Carrot; Pottery), Fish (Lobster; Tape), Mane (Tail; Bakery), Roof (Chimney; Tree), Candle (Torch; Lemon), Doctor (Patient; Jazz), Buckle (Snap; Swamp), Bottle (Glass; Telephone), Giraffe (Zebra; Scarecrow), Moon (Star; Scale), Hinge (Doorknob; Loaf), Envelope (Stamp; Jail), Crown (Hat; Flood), Accordion (Trumpet; Oasis), Ruler (Inch; Cricket), Wheel (Spoke; Vision), Wrench (Hammer; Manhole), Pouch (Bag; Eraser), Bridge (Road; Gas), Drawer (Lamp; Umbrella), Hose (Pump; Pineapple), Snowman (Sled; Beak), Veil (Cloak; Dock), Corn (Potato; Pencil), Leaf (Root; Bubble), Pillow (Cushion; Glue).