IEICE TRANS. INF. & SYST., VOL.E91–D, NO.6 JUNE 2008


PAPER

Special Section on Human Communication III

An MEG Study of Temporal Characteristics of Semantic Integration in Japanese Noun Phrases
Hirohisa KIGUCHI†a), Nonmember and Nobuhiko ASAKURA††, Member

SUMMARY Many studies of on-line comprehension of semantic violations have shown that the human sentence processor rapidly constructs a higher-order semantic interpretation of the sentence. What remains unclear, however, is how much time is required to detect semantic anomalies while concatenating two words to form a phrase under very rapid stimulus presentation. We aimed to examine the time course of semantic integration in concatenating two words during phrase structure building, using magnetoencephalography (MEG). In the MEG experiment, subjects decided whether two words (a classifier and its corresponding noun), each presented for 66 ms, formed a semantically correct noun phrase. Half of the stimuli were matched pairs of classifiers and nouns; the other half were mismatched pairs. In the analysis of the MEG data, three primary peaks were found at approximately 25 ms (M1), 170 ms (M2) and 250 ms (M3) after the presentation of the target words. Only the M3 latencies were significantly affected by the stimulus conditions. Thus, the present results indicate that semantic integration in concatenating two words starts at approximately 250 ms.
key words: semantic integration, noun phrase, genitive numeral classifier, magnetoencephalography

1. Introduction

It has been shown that the human brain can rapidly retrieve semantic information from a visual word when it is briefly presented. Previous studies using the masked priming paradigm, in which primes were visually presented for very short durations (50–60 ms), have found associative/semantic priming effects in the lexical decision task [10] and the picture naming task [19]. Little is known, however, about the temporal characteristics of the semantic interpretation of rapidly presented verbal stimuli beyond the isolated word level. Rapid Serial Visual Presentation (RSVP) has sometimes been used in sentence processing experiments. Since the words are presented rapidly, syntactic and interpretive processes seem to be affected. Accordingly, in the sentence processing literature, the aim of RSVP experiments has been to compare RSVP results to aphasic data in order to study what kind of deficit is involved in aphasia [5], [14], [17] (cf. Gouvea [11]). However, these studies do not directly address the time course of semantic integration. Hence it is still unclear how much time is required for concatenating words in sentence processing.

Manuscript received September 11, 2007. Manuscript revised December 10, 2007.
† The author is with the Department of English, Miyagi Gakuin Women's University, Sendai-shi, 981–8557 Japan.
†† The author is with the Human Information System Laboratory, Kanazawa Institute of Technology, Hakusan-shi, 924–0838 Japan.
a) E-mail: [email protected]
DOI: 10.1093/ietisy/e91–d.6.1656

In the visual processing literature, it is well known that human visual processing is very fast in ultra-rapid categorization tasks, where subjects must decide whether a briefly presented image belongs to a target category or not. Van Rullen and Thorpe [27] reported that subjects can successfully differentiate complex visual categories flashed on the screen (20 ms) within 250 ms of behavioral reaction time. Rousselet, Fabre-Thorpe and Thorpe [22] demonstrated that event-related potentials (ERPs) start to reflect the categorical identity of visual images as early as 150 ms after stimulus onset. Thus, by using the rapid presentation technique, behavioral and electrophysiological evidence was obtained that gives an upper limit to the time required for visual categorization, shedding light on the time course of the underlying neural processing.

In the present study, we aimed to examine the time course of semantic integration in sentence processing by employing the RSVP paradigm with magnetoencephalography (MEG). The question to address was how rapidly presented words can be interpreted beyond the isolated word level, that is, in the process of phrase structure building. We can begin answering this question by examining simple noun phrases consisting of two words, which avoids complicating factors that might arise from further sentence processing. This experimental paradigm enables us to concentrate on a pure one-on-one selectional relation in concatenating two words, which is the essential and simplest operation in phrase structure building. Specifically, in this study, numeral classifier mismatch effects in Japanese noun phrases were employed in order to understand how the violation of selectional relations within a phrase is detected under very short stimulus presentation durations.
By integrating the current study with our previous behavioral study using the same RSVP paradigm [13], we were able to determine how much time the human brain requires to carry out semantic integration.

1.1 Japanese Numeral Classifier System

The Japanese numeral classifier system has unique properties and accordingly merits a detailed discussion here. First, numeral classifiers consist of one numeral expression and one classifier. In (1), for example, the numeral classifier san-satsu can be dissected into two parts: san (the numeral expression) and satsu (the classifier).
(1)

a. san-satsu-no hon
   3-Cl(book)-Gen book
b. san-hiki-no kaeru
   3-Cl(animal)-Gen frog

Copyright © 2008 The Institute of Electronics, Information and Communication Engineers

Since the genitive case marker must be attached to the classifier as an affix when a quantifier and a noun are combined to form a noun phrase, noun phrases with numeral quantifiers as shown above are often called genitive numeral classifier (GNC) constructions. It is this type of construction that we are concerned with in the current study. The examples below illustrate Japanese noun phrases with numeral classifiers. The numeral classifier can precede the host noun as in (2a) and (2b), or it can follow the host noun as in (2c) and (2d).
(2)

a. Mituo-wa san-satsu-(no) hon-wo yonda.
   Mitsuo-Top three-Cl(book)-(Gen) book-Acc read
   "Mitsuo read three books."
b. san-satsu Mituo-wa hon-wo yonda.
   three-Cl(book) Mitsuo-Top book-Acc read
   "Mitsuo read three books."
c. Mituo-wa hon-wo san-satsu yonda.
   Mitsuo-Top book-Acc three-Cl(book) read
   "Mitsuo read three books."
d. hon-wo Mituo-wa san-satsu yonda.
   book-Acc Mitsuo-Top three-Cl(book) read
   "Mitsuo read three books."

On the other hand, if the genitive case marker "-no" is attached to the classifier, the distribution of the classifier is more restricted. The construction is acceptable only when the GNC is left-adjacent to the host noun, as in (3a).
(3)

a. Mituo-wa san-satsu-no hon-wo yonda.
   Mitsuo-Top three-Cl(book)-Gen book-Acc read
   "Mitsuo read three books."
b. *san-satsu-no Mituo-wa hon-wo yonda.
   three-Cl(book)-Gen Mitsuo-Top book-Acc read
   "Mitsuo read three books."
c. *Mituo-wa hon-wo san-satsu-no yonda.
   Mitsuo-Top book-Acc three-Cl(book)-Gen read
   "Mitsuo read three books."
d. *hon-wo Mituo-wa san-satsu-no yonda.
   book-Acc Mitsuo-Top three-Cl(book)-Gen read
   "Mitsuo read three books."

This is because the GNC must form part of a noun phrase with the host noun. It also follows that the GNC need not be immediately adjacent to the host noun, as long as they form a proper noun phrase.
(4)

Mituo-wa san-satsu-no atui hon-wo yonda.
Mitsuo-Top three-Cl(book)-Gen thick book-Acc read

"Mitsuo read three thick books."
This is similar to English determiners, which also require the subsequent occurrence of a host noun to form a noun phrase, while not having to be immediately adjacent to it.
(5)

a. John read the book.
b. John read the thick book.
c. *John read book the.

As discussed above, a GNC must be associated with its host noun. Furthermore, the choice of the classifier depends on the semantics of the host noun; a given GNC must be associated with particular referents. To put it another way, it is the host noun that selects a specific classifier. For example, a book, or a noun that denotes something book-like (e.g. magazine, dictionary, among others), selects the classifier "satsu". Nouns denoting animals, on the other hand, select the classifier "hiki". As shown in (6), once the combination of classifier and host noun is exchanged, the derived noun phrase is unacceptable because of the semantic mismatch between the classifier and its host noun. Thus, the glosses for "satsu" and "hiki" in (6) are marked as Cl(book) and Cl(animal), respectively, meaning that each classifier is reserved for host nouns with a particular semantic value.
(6)

a. #san-satsu-no kaeru
   3-Cl(book)-Gen frog
b. #san-hiki-no hon
   3-Cl(animal)-Gen book
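The selectional restriction illustrated in (6) can be encoded as a toy lookup table. This is purely illustrative: the class labels and the noun "zasshi" (magazine) are our own choices, and only the two classifiers discussed in the text are included; it is a sketch of the one-on-one selectional relation, not a model of the full classifier system.

```python
# Toy encoding of the GNC selectional restriction (illustrative only).
# A classifier selects a semantic class; the noun phrase is well formed
# iff the host noun belongs to that class.
CLASSIFIER_CLASS = {"satsu": "book-like", "hiki": "animal"}
NOUN_CLASS = {"hon": "book-like", "zasshi": "book-like", "kaeru": "animal"}

def gnc_matches(classifier: str, noun: str) -> bool:
    """True iff the classifier's selectional class matches the noun's class."""
    cls = CLASSIFIER_CLASS.get(classifier)
    return cls is not None and cls == NOUN_CLASS.get(noun)
```

For instance, `gnc_matches("satsu", "hon")` corresponds to the acceptable (1a), while `gnc_matches("satsu", "kaeru")` corresponds to the mismatch in (6a).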

To summarize, the distributional properties of the genitive numeral classifiers can be described as follows:
• The occurrence of the genitive classifier requires a subsequent host noun to form a noun phrase.
• There are strict selectional restrictions between the genitive numeral classifier and its host noun.

1.2 Related Work and the Purpose of Our Study

Yoshida, Aoshima and Phillips [28] reported in their sentence processing studies that GNCs which semantically mismatch their adjacent noun cause a slowdown in reading time compared to matched cases. However, Yoshida et al.'s purpose was to see whether Japanese readers could predict relative clauses in sentence processing. That is, Yoshida et al. used the classifier mismatch effects merely as "cues" for the parser to predict incoming structures: if the GNC semantically mismatches the immediately following noun, the parser must correct its prediction for the incoming structure. Specifically, the parser would not associate the GNC with the immediately following noun, but would wait for a host noun compatible with the GNC to become available in the incoming structure. If so, the reading time delay at the mismatched noun caused by the semantic mismatch


could stem not only from the difficulty of semantically integrating the GNC and the following noun, but also from the cost of reanalyzing the immediate phrase structure to allow the GNC to be associated with a possible incoming noun. Therefore, in Yoshida et al.'s experiments, pure activity caused by semantic mismatch in phrase structure building could be obscured. Furthermore, the task in their experiment was self-paced reading. Since the reading time of the stimuli was controlled by the subjects, their results cannot address how much time is required to detect a semantic anomaly under very rapid stimulus presentation.

In our previous behavioral study [13], we investigated the properties of the responses caused by the (mis)match between the GNC and its host noun under highly constrained temporal conditions. Unlike Yoshida et al. [28], only pairs of a GNC and its host noun as in (6) were presented as experimental stimuli, and each word was presented for 58 or 83 ms with forward and backward masks. These presentation times were chosen because, in pretests, the former yielded over 70% accuracy and the latter over 85% accuracy, the threshold at which we take subjects to begin consciously registering the content of the stimuli. Following the methods of an ultra-rapid categorization study by Van Rullen and Thorpe [27], an analysis of the reaction time distributions revealed that subjects discriminated matched from mismatched cases at approximately 340 to 360 ms after the onset of the host noun. Since the reaction time also includes the time needed to generate the motor command [12], [27], we conjectured from these results that semantic integration in concatenating words starts at approximately 250 ms.
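The distribution-based divergence analysis described above can be sketched as follows. This is an illustrative reconstruction with synthetic reaction times, not the actual procedure of [13] or [27]: the bin width, the per-bin two-proportion test, and the significance level are our assumptions.

```python
import numpy as np
from scipy import stats

def rt_divergence_point(rt_match, rt_mismatch, bin_ms=20, t_max=1000, alpha=0.05):
    """Earliest time bin where two reaction-time distributions differ.

    Bins RTs into fixed-width histograms and runs a pooled two-proportion
    z-test per bin on the fraction of responses falling in that bin,
    returning the left edge (ms) of the first significant bin, or None.
    """
    edges = np.arange(0, t_max + bin_ms, bin_ms)
    h1, _ = np.histogram(rt_match, bins=edges)
    h2, _ = np.histogram(rt_mismatch, bins=edges)
    n1, n2 = len(rt_match), len(rt_mismatch)
    for i, (c1, c2) in enumerate(zip(h1, h2)):
        p = (c1 + c2) / (n1 + n2)            # pooled proportion
        se = np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
        if se > 0:
            z = (c1 / n1 - c2 / n2) / se
            if 2 * (1 - stats.norm.cdf(abs(z))) < alpha:
                return int(edges[i])
    return None
```

With two RT distributions whose means differ, the returned bin edge approximates the moment the faster distribution starts accumulating responses that the slower one lacks.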
If this estimate is on the right track, we predict that neural responses in this time window should be modulated by the semantic properties of the stimuli. In the present study, we aimed to confirm this prediction using magnetoencephalography (MEG). Electrophysiological brain recordings provide direct measures of early neural activity during tasks such as language processing, and their millisecond temporal resolution makes them well suited to testing predictions about online incremental language processing. MEG is a technique that, like electroencephalography (EEG), directly measures neuronal electric activity. Given its advantage in signal-to-noise ratio, MEG enables within-subject data analysis, detecting even subtle within-subject processing differences that would be unrealistic to detect with standard ERP analysis methods. In the MEG experiment, stimuli were presented in an RSVP fashion following our behavioral experiment, which allows a fair comparison between the current MEG recording and our previous behavioral results [13]. The very short stimulus duration has the additional advantage of minimizing eye movements, which can severely contaminate MEG responses with task-irrelevant noise. Moreover, if the stimulus duration were not short enough, overt/covert vocalization of the stimuli during their presentation might occur, and the corresponding brain activity could obscure the MEG responses responsible for semantic integration. The current study thus enables a close comparison of the behavioral and MEG data, linking the behavioral results to their neural correlates.

2. Material and Method

2.1 Subjects

Eight right-handed native speakers of Japanese (1 female; age 21 to 24, mean age 22) with no history of neurological disorders participated in the experiment. All participants gave their informed consent and had normal or corrected-to-normal vision. They were all students at Kanazawa Institute of Technology and were paid for their participation.

2.2 Materials

A total of 400 items, consisting of 200 matched and 200 mismatched pairs of GNCs and host nouns, served as the materials. For the classifiers in the GNCs, we employed ten types of frequently used classifiers. In a survey by Downing [7], at least 14 out of 15 informants claimed to use these ten classifiers (名-mei, 本-hon, 枚-mai, 匹-hiki, 件-ken, 冊-satsu, 台-dai, 発-hatsu, 曲-kyoku, 件-ken). All ten classifiers appeared exactly the same number of times in one experiment. For the numeral expression of the GNCs, we randomly assigned to each classifier one of the Chinese numerals from 3 to 6 (三, 四, 五, 六). Then, 200 host nouns were prepared so that each appeared exactly once in a matched pair and once in a mismatched pair. Since each participant would otherwise encounter each host noun twice, the items were divided into two stimulus lists such that each list contained only one appearance of each host noun. Further, the order of list presentation was counterbalanced across subjects. All the GNCs in the items consist of three characters: one Kanji character (based on Chinese ideograms) denoting a number, one Kanji character for the classifier, and the Japanese genitive marker "-no" in Hiragana (syllabic characters in Japanese orthography). All host nouns consist of two Kanji characters.
The pronunciation of all GNCs was restricted to 4 or 5 moras and that of all host nouns to 3 or 4 moras, in order to avoid possible confounds from the difficulty of mapping orthography to phonology. Finally, all the host nouns were matched for familiarity [1] (Mean: 5.4868 out of 7.00, SD: 0.0978).

2.3 Procedure

During the experiment, participants were seated inside a magnetically shielded room in the KIT MEG laboratory. Stimuli were presented with Cogent software (Wellcome Department of Imaging Neuroscience, London, UK). In each trial, participants were presented with a fixation mark (+++) projected onto the center of a rear projection screen


for 266 ms, followed by 500 ms of blank screen. This was followed by presentation of the stimulus. In each trial, a forward mask was presented for 133 ms in the center of the screen†. The mask consisted of a jumble of short lines with random orientations, the length of each line being comparable to that of the strokes of Kanji characters††. Next, a GNC was presented in the center of the screen for 66 ms. The GNC was immediately replaced by the host noun, which was also presented for 66 ms. Then a backward mask, identical in shape to the forward mask, immediately followed and was presented for 133 ms. After an 800 ms blank screen following the backward mask, a series of crosshatches (###) prompting the participant's response appeared in the center of the screen. Participants were instructed to decide, once this mark appeared, whether the presented pair of words formed a semantically proper noun phrase. Decisions were made by button press with the right hand. Participants were first given a practice session of 10 items to familiarize themselves with the task. The two lists were presented in separate blocks, with a brief break between the stimulus blocks.

2.4 MEG Recording

The magnetic activity was measured in a magnetically shielded room with a 160-channel whole-head magnetometer (Kanazawa Institute of Technology, Kanazawa, Japan). Data were sampled at 1000 Hz with an acquisition bandwidth of 0.03–200 Hz. The event trigger was synchronized to the onset of the presentation of the host noun. The time window of data acquisition was 1500 ms: from 500 ms before the trigger to 1000 ms after it. Prior to the MEG recording, magnetic fields elicited by the electric current of marker coils were recorded to identify the head position in the 3-D coordinate system of the MEG device. The recording for each participant lasted approximately 50 minutes.
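The trial structure of Sect. 2.3 can be summarized as a simple event schedule (a sketch; the event names and the helper function are ours, while the durations are those reported above):

```python
# Durations (ms) of successive trial events, as reported in the Procedure.
TRIAL_EVENTS = [
    ("fixation", 266),
    ("blank", 500),
    ("forward_mask", 133),
    ("gnc", 66),                # genitive numeral classifier
    ("host_noun", 66),          # the MEG trigger is locked to this onset
    ("backward_mask", 133),
    ("blank2", 800),
    ("response_prompt", None),  # stays up until the button press
]

def event_onsets(events):
    """Return the cumulative onset time (ms) of each event from trial start."""
    onsets, t = {}, 0
    for name, dur in events:
        onsets[name] = t
        if dur is not None:
            t += dur
    return onsets
```

For example, the host noun, to which the MEG trigger is locked, appears 965 ms after trial onset (266 + 500 + 133 + 66 ms).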
Responses to target words were averaged by stimulus condition after rejecting trials with eye blinks or other artifacts, defined as amplitudes exceeding ±2.5 pT. Only MEG averages consisting of more than 50 trials after artifact and error rejection were submitted to further analysis; all participants and all conditions survived this criterion. Following averaging, data were lowpass-filtered with a cutoff frequency of 30 Hz and then baseline-adjusted using a 100 ms pre-stimulus interval.

2.5 Data Analysis

In the analysis of the MEG data, three primary peaks were found at approximately 25, 170, and 250 ms after the presentation of the host noun (see Fig. 1). The magnetic distributions of the M1 component were not consistent across subjects, mostly failed to show a dipolar distribution, and the component was absent in one condition for one subject. The other components, M2 and M3, showed left-lateralized dipolar distributions consistently across subjects and

conditions. The magnetic field distributions of M2 and M3 for a representative subject are illustrated in Fig. 2. These two components have been reported in prior MEG studies of visual word or character presentation, as discussed later. The amplitudes and latencies of the three peaks, within the M1 (5–50 ms), M2 (120–205 ms), and M3 (225–355 ms) windows, were determined individually for each participant by calculating the root mean square (RMS) field strength over all 76 channels of the left hemisphere. RMS analysis has been employed in previous MEG studies of visual word recognition [3], [8], [9], [23], [25].

3. Results

In the MEG data, the amplitudes of the three components were not modulated by stimulus condition, i.e. semantically matched vs. mismatched noun phrases (M1: p = 0.79; M2: p = 0.38; M3: p = 0.61; two-tailed t-tests). On the other hand, the M3 latencies were significantly affected by the stimulus conditions: M3 latencies were shorter for matched noun phrases (mean peak latency 268.4 ms) than for mismatched noun phrases (mean 277.2 ms) (p < 0.02, two-tailed t-test). In contrast, the latencies of M1 and M2 were not affected by the stimulus conditions (M1: p = 0.89; M2: p = 0.54; two-tailed t-tests). The mean differences in amplitudes and latencies for each component are shown in Figs. 3 and 4, respectively (see the Appendix for the results of source localization of the M3 component).

4. Discussion

The magnetoencephalographic results from the current study, together with the results of our previous behavioral study, show that the human brain requires approximately 250 ms to carry out semantic integration in concatenating words, even under highly constrained visual conditions. In our MEG experiment, we observed three MEG peaks, M1, M2 and M3, as primary components.
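The analysis pipeline described in Sects. 2.4–3, from artifact rejection through peak latency extraction, can be sketched as follows. This is an illustrative reconstruction in Python with NumPy/SciPy, not the authors' actual code: the array shapes, function names, and filter order are our assumptions, while the rejection threshold, trial minimum, cutoff frequency, baseline, and component windows are those reported in the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000                 # sampling rate (Hz), as reported in Sect. 2.4
TRIGGER = 100             # sample index of host-noun onset (100 ms baseline)
WINDOWS = {"M1": (5, 50), "M2": (120, 205), "M3": (225, 355)}  # ms post-onset

def average_epochs(epochs, reject_pt=2.5e-12, min_trials=50,
                   lowpass_hz=30.0, baseline_ms=100):
    """Reject, average, filter, and baseline-correct one condition.

    epochs: (n_trials, n_channels, n_samples); returns None if no more
    than `min_trials` clean trials remain.
    """
    # reject trials whose amplitude exceeds +/-2.5 pT on any channel
    keep = np.all(np.abs(epochs) <= reject_pt, axis=(1, 2))
    if keep.sum() <= min_trials:
        return None
    avg = epochs[keep].mean(axis=0)
    # 30 Hz zero-phase lowpass (Butterworth; the order is our choice)
    b, a = butter(4, lowpass_hz / (FS / 2), btype="low")
    avg = filtfilt(b, a, avg, axis=-1)
    # subtract the mean of the 100 ms pre-stimulus baseline
    base = avg[:, :int(baseline_ms * FS / 1000)].mean(axis=1, keepdims=True)
    return avg - base

def rms_peak(avg_left, component):
    """Peak RMS amplitude and latency (ms post-onset) of one component,
    computed over the left-hemisphere channels (76 in the study)."""
    lo, hi = WINDOWS[component]
    rms = np.sqrt((avg_left ** 2).mean(axis=0))  # RMS across channels
    seg = rms[TRIGGER + lo : TRIGGER + hi + 1]
    i = int(np.argmax(seg))
    return seg[i], lo + i
```

Per-subject latencies for the two conditions would then be compared with a two-tailed t-test (presumably paired, given the within-subject design, e.g. `scipy.stats.ttest_rel`).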
Previous MEG studies of visual word recognition with longer stimulus durations have also found dipolar field distributions that characteristically appear in the 100–220 ms (M170) and 200–300 ms (M250) windows. In addition, Monahan, Fiorentino and Poeppel [18] recently reported in their repetition priming study that two robust components corresponding to the M170 and M250 time windows were identified in
† We used the forward mask to equate the visibility of the GNC and its host noun. Since the host noun was sandwiched between the GNC and a backward mask, it suffered not only from backward masking but also from forward masking, with the GNC playing the role of masker. Therefore, if the forward mask were absent, the GNC would not suffer from forward masking and would stand out more saliently than its host noun. This would cause participants to pay more attention to the GNC per se, rather than to the whole GNC construction.
†† We confirmed in our previous behavioral experiment that this mask is quite an effective masker for noun phrases presented in Kanji characters. We also found that a mask consisting of crosshatches, which is often used for masking English characters, does not produce a sufficient masking effect on Kanji stimuli.


Fig. 1 Waveform illustrating the M1, M2 and M3 components in one representative subject.

Fig. 2 The magnetic field distributions of the M2 and M3 response components at their respective peak times in one representative subject.

Fig. 3 The mean amplitudes of MEG components for match and mismatch conditions (mean and S.E.) (n = 8).

their forward and backward masking experiment, though the magnetic distributions were slightly different from those obtained in other MEG studies of visual word recognition (see the Appendix on the variability of the source distribution of the relevant component). The M2 and M3 components found in our experiment thus appear to correspond to the M170 and M250, respectively. In particular, we found a correlation with the stimulus manipulation in the M3 component, which

Fig. 4 The mean latencies of MEG components for match and mismatch conditions (mean and S.E.) (n = 8).

has been regarded as the M250 in the previous literature. Though the component observed in this time window has been reported in a number of studies of visual word recognition, it remains unclear what process the M250 indexes, because its latency and amplitude were not sensitive to stimulus manipulations such as frequency or repetition in the lexical decision paradigm [3], [8], [9], [23], [25]. However, Stockall,


Stringfellow and Marantz [25] recently reported that the latency of M250 component is affected by phonotactic probability. In their study, the effects of probability were observed in the latency of M250 time-window. In their lexical decision experiment, the words with high phonotactic probability inhibited the latency of M250 component. Hence it suggests that M250 reflects the stage of lexical selection where the preferred candidate is decided after a set of compatible entries is activated at the stage of lexical access. This conclusion/speculation is consistent with the results of our MEG experiment. While lexical selection appears to be still a single word processing stage, it could be that it is also a context-sensitive stage. That is, the semantic context initially activates semantically compatible lexical entries. The process of lexical selection is therefore easier when the incoming word is semantically compatible to the context than when it is incompatible [2], [6]. Thus, it could be argued that M250, at least partially, represents the cost of some sort of lexical selection. Since the stages of lexical selection and lexical integration are not completely separable, for both stages are tightly constrained by available context, it is reasonable to assume that M250 component would be affected by the semantic integration process involved with the selectional restriction. If so, the latency of M250 (M3 in our experiment), which is elicited after presenting the host noun could reflect the cost of its integration with a given semantic context, in our case, the GNC. This view is, in fact, in line with the interactive lexical processing model proposed by Van den Brink and Hagoort [26] who claim that semantic integration processes should be initiated before the preferred candidate is selected from a set of compatible entries activated in the stage of the lexical access. All in all, the results of our studies and Stockall et al. 
[25] conspire to suggest that the stage of lexical access should come earlier than previously claimed by Embick et al. [8] and Pylkk¨anen and McElree [23] among others whose MEG experiments indicate that the stage of lexical access is approximately at 350 ms after the presentation of a given word. Finally, as for the relationship between previous ERP studies and the finding in our experiments, Sakai et al. [24] reported in their ERP study that the mismatched pairs of GNCs and their host nouns elicited N400. The result confirms that the relation between GNCs and their host noun is semantic but not morph-syntactic such as agreements in person, number and gender found in Indo-European languages. In addition, though the N400 waveform peaked at approximately 380 ms after the presentation of the host noun, it started to diverge from the one in the control condition at approximately 250 ms. This is consistent with the findings of our studies. Second, several studies have reported that Recognition Potential (RP) reflects semantic processing of visually presented stimuli including words [15], [20], [21]. RP is elicited by visually presented words or pictures, which appears at the latency of approximately 225–300 ms from an inferior parieto-occipital area in topological representations. Mart´ın-Loeches, Hinojosa, Casado, Munoz and Fern´andes-

Fr´ıas [16] found in their sentence processing study that the amplitude of RP was larger to semantically normal words than to semantically anomalous words. It is intriguing that RP and M3 in our study share a similar latency, and these two components both appear to index semantic integration in language processing. However, in RP, semantic integration was pronounced in its amplitude on one hand, and in M3 it is in its latency that semantic integration is represented on the other hand. Additionally, it is unknown, at least to our knowledge, whether RP is elicited by very short visual stimuli durations. All in all, further research is called for in order to reveal the more detailed relationship between these two components. Acknowledgment This work was supported by the Academic Frontier Project for Human Information System Laboratory, Kanazawa Institute of Technology from Ministry of Education, Culture, Sports, Science and Technology, Japan, 2002–2006. We thank Roberto Fiorentino, Phill Monahan, David Poeppel and Masaya Yoshida for discussion. References [1] S. Amano and T. Kondo, Nihongo no goitokusei [lexical properties of Japanese] vol.2. NTT Database Series, Sanseido, Tokyo, 2003. [2] J. Aydelott and E. Bates, “Effects of acoustic distortion and semantic context on lexical access,” Lang. Cogn. Proc., vol.19, pp.29–56, 2004. [3] A. Beretta, R. Fiorentino, and D. Poeppel, “The effects of hononymy and polysemy on lexical access: An MEG study,” Cogn. Brain Res., vol.24, pp.57–65, 2005. [4] D. Boatman, B. Gordon, J. Hart, O. Selnes, D. Miglioretti, and F. Lenz, “Transcortical sensory aphasia: Revisited and revised,” Brain, vol.123, pp.1634–1642, 2000. [5] D. Caplan and G. Waters, “Aphasic disorders of syntactic comprehension and working memory capacity,” Cogn. Neuropsychol., vol.12, pp.637–649, 1995. [6] J.F. Connolly and N.A. Phillips, “Event-related potential components reflect phonological and semantic processing of the terminal word of spoken sentences,” J. Cogn. 
Neurosci., vol.6, pp.256–266, 1994. [7] P. Downing, Numeral classifier systems, John Benjamins, Amsterdam, 1996. [8] D. Embick, M. Hackl, and A. Marantz, “A magnetoencephalographic component whose latency reflects lexical frequency,” Cogn. Brain Res., vol.10, pp.345–348, 2001. [9] R. Fiorentino and D. Poeppel, “Decomposition of compound words: An MEG measure of early access to constituents,” Proc. 25th Ann. Conf. Cogn. Sci. Soc., Lawrence Erlbaum Associates, New Jersey, 2004. [10] K.I. Forster and D. Davis, “Repetition priming and frequency attenuation in lexical access,” J. Exp. Psychol. Learn. Mem. and Cogn., vol.10, pp.680–698, 1984. [11] A. Gouvea, “Syntactic complexity in Brazilian Portuguese and English using rapid serial visual presentation (RSVP) technique,” University of Maryland Working Papers of Linguistics, pp.22–40, University of Maryland, College Park, 2000. [12] J.F. Kalaska and D.J. Crammond, “Cerebral cortical mechanisms of reaching movements,” Science, vol.255, pp.1517–1523, 1992. [13] H. Kiguchi and N. Asakura, “Temporal characteristics of semantic integration in Japanese noun phrases,” Proc. AMLaP 2005, Ghent,


Belgium, 2005.
[14] R. Martin, "Working memory doesn't work: A critique of Miyake et al.'s capacity theory of aphasic comprehension deficits," Cogn. Neuropsychol., vol.12, pp.623–636, 1995.
[15] M. Martín-Loeches, J.A. Hinojosa, G. Gómez-Jarabo, and F.J. Rubia, "The recognition potential: An ERP index of lexical access," Brain Lang., vol.70, pp.364–384, 1999.
[16] M. Martín-Loeches, J.A. Hinojosa, P. Casado, F. Muñoz, and C. Fernández-Frías, "Electrophysiological evidence of an early effect of sentence context in reading," Biol. Psychol., vol.65, pp.265–280, 2004.
[17] A. Miyake, P. Carpenter, and M. Just, "A capacity approach to syntactic comprehension disorders: Making normal adults perform like aphasic patients," Cogn. Neuropsychol., vol.11, pp.671–717, 1994.
[18] P. Monahan, R. Fiorentino, and D. Poeppel, "Spectrotemporal analysis of masked priming using MEG," poster presented at the 15th Annu. Conf. on Biomagnetism, Vancouver, BC, 2006.
[19] M. Perea and A. Gotor, "Associative and semantic priming effects occur at very short SOAs in lexical decision and naming," Cognition, vol.67, pp.223–240, 1997.
[20] A.P. Rudell, "The recognition potential contrasted with the P300," Int. J. Neurosci., vol.60, pp.85–111, 1991.
[21] A.P. Rudell and J. Hua, "The recognition potential, word difficulty, and individual reading ability: On using event-related potentials to study perception," J. Exp. Psychol. Hum. Percept. Perform., vol.23, pp.1170–1195, 1997.
[22] G.A. Rousselet, M. Fabre-Thorpe, and S.J. Thorpe, "Parallel processing in high-level categorization of natural images," Nature Neurosci., vol.5, no.7, pp.629–630, 2002.
[23] L. Pylkkänen and B. McElree, "An MEG study of silent meaning," J. Cogn. Neurosci., vol.19, pp.1905–1921, 2007.
[24] Y. Sakai, K. Iwata, J. Riera, X. Wan, S. Yokoyama, Y. Shimoda, R. Kawashima, K. Yoshimoto, and M. Koizumi, "An ERP study of the integration process between a noun and a numeral classifier: Semantic or morpho-syntactic?," Cognitive Studies, vol.13, no.3, pp.443–454, 2006.
[25] L. Stockall, A. Stringfellow, and A. Marantz, "The precise time course of lexical activation: MEG measurement of the effects of frequency, probability and density in lexical decision," Brain Lang., vol.90, pp.88–94, 2004.
[26] D. Van den Brink and P. Hagoort, "The influence of semantic and syntactic context constraints on lexical selection and integration in spoken-word comprehension as revealed by ERPs," J. Cogn. Neurosci., vol.16, pp.1068–1084, 2004.
[27] R. Van Rullen and S.J. Thorpe, "Is it a bird? Is it a plane? Ultra-rapid visual categorization of natural and artifactual objects," Perception, vol.30, pp.655–668, 2001.
[28] M. Yoshida, S. Aoshima, and C. Phillips, "Relative clause prediction in Japanese," 17th Annu. CUNY Sentence Process. Conf., University of Maryland, College Park, MD, 2004.

Appendix: On Source Localization of M3

Here, we briefly report the results of source localization of M3 in order to take advantage of the spatial resolution of MEG. Equivalent current dipoles (ECDs) were calculated using a single-dipole model, which approximates the brain as a sphere. ECDs were estimated at the times of the RMS peak in the M3 time window, using all channels over the left hemisphere. Only ECDs that remained in the same anatomical area for more than 20 ms with a maximal goodness of fit above 75% were considered reliable. The ECDs of both conditions from all subjects survived these criteria. The ECDs were superimposed on magnetic resonance images (MRI) of each subject.

Fig. A·1 An ECD localized at a parietal area for M3 from a representative subject.

Fig. A·2 An ECD localized at a temporal area for M3 from a representative subject.

We found spatial variance in the localization of M3 across subjects. Of our eight subjects, ECDs were obtained at parietal areas for three subjects (Fig. A·1), at inferior or superior temporal areas for three subjects (Fig. A·2), at frontal areas for one subject, and at the occipito-temporal junction for one subject. Dipoles for each subject were obtained in a similar area in both experimental conditions. Pylkkänen and McElree [23] report similar variance in localization in this time window in an MEG study of sentence comprehension. In addition, results from electrical interference studies of transcortical sensory aphasia have shown that the localization of lexical access can in fact vary from the parietal lobe to the inferior temporal cortex [4]. At present, we can only speculate that different pathways within the left hemisphere are employed individually for semantic detection. Thus, a further understanding of the spatial organization of the brain activation underlying this process is clearly called for.
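The reliability criterion described above (a run of dipole fits that stays in one anatomical area for more than 20 ms, with a maximal goodness of fit above 75%) can be made concrete in a short sketch. The code below is illustrative only, under the assumption of a hypothetical per-sample fit record (`EcdFit` with a latency, an anatomical label, and a goodness-of-fit value); it is not the software actually used in the study.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class EcdFit:
    """One single-dipole fit at one time sample (hypothetical structure)."""
    time_ms: float   # latency relative to target-word onset
    area: str        # anatomical label of the dipole location
    gof: float       # goodness of fit, 0.0-1.0

def reliable_area(fits: List[EcdFit],
                  min_duration_ms: float = 20.0,
                  min_gof: float = 0.75) -> Optional[str]:
    """Return the anatomical area of the first run of consecutive fits that
    stays in the same area for more than min_duration_ms and whose maximal
    goodness of fit exceeds min_gof; None if no run qualifies."""
    start = 0
    for i in range(1, len(fits) + 1):
        # A run ends at the last sample or when the anatomical label changes.
        if i == len(fits) or fits[i].area != fits[start].area:
            run = fits[start:i]
            duration = run[-1].time_ms - run[0].time_ms
            if duration > min_duration_ms and max(f.gof for f in run) > min_gof:
                return run[0].area
            start = i
    return None
```

For example, a run of fits labelled "parietal" spanning 28 ms with a peak goodness of fit of 0.80 would be accepted, while a run of the same length peaking at 0.60, or a 8 ms run at 0.90, would be rejected.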


Hirohisa Kiguchi received his B.A. in English from Meiji Gakuin University in 1995, and his Ph.D. in linguistics from the University of Maryland in 2002. He was a post-doctoral fellow at Kanazawa Institute of Technology from 2003 to 2006. He is currently an Associate Professor at Miyagi Gakuin Women’s University. His specialization is linguistics. He is a member of the English Linguistic Society of Japan and the Linguistic Society of Japan.

Nobuhiko Asakura received his B.A., M.A., and Ph.D. in experimental psychology from Kyoto University in 1992, 1995, and 1998, respectively. He is currently a Research Associate in the Human Information System Laboratory at Kanazawa Institute of Technology. His research interests include human visual perception and computational neuroscience. He is a member of the Japanese Psychological Association and the Vision Society of Japan.