The Selectivity of the Occipitotemporal M170 for Faces

Jia Liu1,CA Masanori Higuchi2, Alec Marantz3, Nancy Kanwisher1
1 Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
2 Applied Electronics Laboratory, Kanazawa Institute of Technology, Kanazawa, Japan
3 Department of Linguistics and Philosophy, MIT, Cambridge, MA, USA

Correspondence: Jia Liu, Department of Brain and Cognitive Sciences, MIT, NE20-443, 77 Mass Ave., Cambridge, MA 02139. Telephone: (617) 258-0670; FAX: (617) 253-9767; Email: liujia@psyche.mit.edu

Abstract: Evidence from fMRI, ERPs, and intracranial recordings suggests the existence of face-specific mechanisms in the primate occipitotemporal cortex. The present study used a 64-channel MEG system to monitor neural activity while normal subjects viewed a sequence of grayscale photographs of a variety of unfamiliar faces and non-face stimuli. In fourteen out of fifteen subjects, face stimuli evoked a larger response than non-face stimuli at a latency of 160 msec after stimulus onset at bilateral occipitotemporal sensors. Inverted face stimuli elicited responses that were no different in amplitude but 13 msec later in latency than those to upright faces. The profile of this M170 response across stimulus conditions is largely consistent with prior results using scalp and subdural ERPs.

Key words: Magnetoencephalography; Face-selective M170; Face perception

Introduction: Extensive evidence from a wide variety of techniques suggests the existence of face-specific mechanisms in primate occipitotemporal cortex. The aim of this study was to provide a detailed characterization of the neural response to face stimuli using MEG. Behavioral evidence from normal and brain-damaged subjects has suggested a functional dissociation between face and nonface processing.
The strongest evidence comes from the double dissociation between face and object recognition, with prosopagnosic patients impaired at face but not object recognition [1], and other patients showing the opposite pattern of deficit [2].

Many techniques have been used to explore face-processing mechanisms. Functional brain imaging studies have localized a focal region in the fusiform gyrus, the fusiform face area (FFA), that responds in a highly selective fashion to faces compared to a wide variety of other stimulus types [3-6]. However, fMRI provides little information about the temporal characteristics of face processing. Electrical recordings from the scalp surface have revealed a posterior-lateral negative peak at a latency of 170 msec elicited by human faces but not by animal faces, cars, scrambled faces, items of furniture, or human hands [7-9]. However, the poor spatial resolution of ERPs prevents precise localization of the neural source(s) of the N170. In contrast, intracranial recording has provided impressive evidence for selective neural responses to faces, with both high temporal and spatial resolution. Specifically, multiple distinct regions in the temporal lobes and hippocampus of epilepsy patients have been found to produce an N200 response to faces but not to cars, butterflies and scrambled faces, or letter strings [10-13]. Nevertheless, these recordings are possible only in severely epileptic patients, in whom the degree of cortical re-organization caused by the seizures is not known.

In contrast to the techniques described above, MEG provides excellent temporal resolution, good spatial resolution, and can be used safely in neurologically normal subjects. Several recent studies have found a strong magnetic response (M170) to face stimuli in comparison to nonface stimuli over occipitotemporal brain regions [14-19]. These studies suggest that the M170 is quite selective for faces; however, only a few stimulus conditions were compared in each study.
To provide a stronger test of face selectivity, the present study measured the amplitude and latency of the M170 to 13 different stimulus types. In Experiment 1, we compared the magnetic response elicited by faces and a variety of non-face images, in order to test whether the M170 is in fact specific to face processing, as opposed to a more general process such as subordinate-level categorization or the processing of anything animate or human. In Experiment 2, we tested the generality of the M170 response across faces that varied in format, surface detail, and viewpoint. In Experiment 3, we tested whether the M170 is sensitive to stimulus inversion.

Three critical design features were used in the present study. First, we ran all subjects on a "localizer" experiment with face, object and hand stimuli in order to identify candidate face-selective sensors for each subject on a data set independent from the data collected in the main experiments. Second, in the three main experiments subjects performed a one-back task (pressing a button whenever two identical images were repeated consecutively), which obligated them to attend to all stimuli regardless of inherent interest. Finally, all stimulus classes in each experiment were interleaved in a random order to eliminate any effects of stimulus predictability.

Materials and Methods: Seventeen healthy normal adults, aged 19-40, volunteered or participated for payment in all four experiments in a single testing session. All were right-handed and reported normal or corrected-to-normal vision. The data from two subjects were omitted from further analysis because they fell asleep during the experiment. Subjects lay on the scanner bed in a dimly lit, sound-attenuated, and magnetically shielded room, with a response button under their right hands. A mirror was placed 120 cm in front of the subject's eyes and the screen center was positioned on the subject's horizontal line of sight.
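The one-back task described above amounts to flagging immediate repeats in the stimulus sequence; a minimal sketch (the function name and string encoding of stimuli are ours, not the paper's):

```python
def one_back_targets(sequence):
    """Return the indices of one-back targets: trials on which the
    image is an exact repeat of the immediately preceding one.
    Subjects press a button on these trials, and such repetition
    trials are excluded from the averaged MEG response."""
    return [i for i in range(1, len(sequence))
            if sequence[i] == sequence[i - 1]]

# Example: the third trial repeats the second, so only index 2 is a target.
print(one_back_targets(["face_01", "house_03", "house_03", "face_02"]))  # [2]
```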
The stimuli consisted of gray-scale photographs (256 gray levels) of a variety of unfamiliar faces and non-face stimulus categories. Each image subtended 5.7 × 5.7 degrees of visual angle and was presented at the center of gaze for a duration of 200 msec by a projector. The onset-to-onset interstimulus interval was randomized from 600 to 1000 msec and stimuli were presented in a pseudorandom order. During the experiment, a small fixation cross was continuously present at the screen center.

The experiment consisted of 8 experimental blocks, divided into 4 experiments (the localizer experiment, plus Experiments 1-3). Each subject was first run on the localizer experiment, which involved passively viewing 200 trials each of faces, objects and hands (intermixed). In this experiment, subjects were simply instructed to attentively view the sequence of images. In the following three experiments, subjects performed a one-back task in which they were asked to press a button whenever two consecutive images were identical. In each of the three main experiments, subjects performed 110 trials for each of 5 or 6 different stimulus categories. Only 7 subjects participated in Experiment 2. On average, 10% of trials were repetition targets; these were excluded from the analysis.

The magnetic brain activity was digitized continuously (1000 Hz sampling rate, with 1 Hz high-pass and 200 Hz low-pass cutoffs and a 60 Hz notch filter) from a 64-channel whole-head system with SQUID-based first-order gradiometer sensors (KIT MEG SYSTEM). Five-hundred-millisecond epochs (100 msec pre-stimulus baseline and 400 msec post-stimulus) were acquired for each stimulus. All 200 trials (localizer experiment) or 100 trials (the three main experiments) of each type were averaged together, separately for each sensor, stimulus category, and subject.

Results: The most face-selective sensor (i.e. the one showing the greatest increase in response to faces compared to hands and objects) was identified independently for each subject and hemisphere from an inspection of the data from the localizer experiment (see Figure 1, top, for an example). The independent definition of our sensor of interest (SOI) allowed us to objectively characterize the response properties of the M170 in the three following experiments, which were run on the same subjects in the same session. Figure 1 (top) shows the MEG response in each channel to faces and objects in a typical subject, with the face-selective SOI in each hemisphere indicated. Figure 1 (bottom) shows the magnetic responses in the SOIs from the left and right hemispheres for this subject. Only one subject's data did not show any clear face-selective SOI; this subject was excluded from further analyses. In all other subjects, a clear face-selective SOI was found in the ventral occipitotemporal region of each hemisphere.

The response to each stimulus type for each sensor of interest was averaged across the subjects in each experiment and is shown in Figure 2. For each subject individually, the amplitude and latency of the M170 were determined for each stimulus in each hemisphere. These values were then analyzed in six different ANOVAs (three experiments and two dependent measures, amplitude and latency), with hemisphere and stimulus condition as factors in each. All six ANOVAs found main effects of stimulus condition (all ps<0.02), but the main effects of hemisphere did not reach significance (all ps>0.05). Because there was no hint of an interaction of condition by hemisphere in any of the six ANOVAs (all Fs<1), in subsequent analyses the data from the left and right hemispheres were averaged within each subject. The averages across subjects of each individual subject's M170 amplitude and latency for each condition are shown in Figure 3.
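The epoch averaging and SOI selection just described can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code: the array layout, the latency window, and the peak-difference selectivity score are our assumptions.

```python
import numpy as np

def average_epochs(epochs):
    """Average single-trial epochs (trials x sensors x time samples)
    into one evoked response per sensor, as done separately for each
    sensor, stimulus category, and subject."""
    return np.asarray(epochs).mean(axis=0)

def find_soi(evoked, times, window=(0.13, 0.20)):
    """Pick the sensor of interest (SOI): the sensor whose evoked
    response to faces most exceeds the mean response to hands and
    objects, scored here by the peak difference inside an assumed
    latency window around the M170."""
    mask = (times >= window[0]) & (times <= window[1])
    faces = np.abs(evoked["faces"][:, mask])
    nonfaces = (np.abs(evoked["hands"][:, mask]) +
                np.abs(evoked["objects"][:, mask])) / 2
    selectivity = (faces - nonfaces).max(axis=1)  # one score per sensor
    return int(np.argmax(selectivity))
```

In the paper the SOI was chosen by visual inspection of the 64 channel waveforms; the scoring function above is just one way to make that choice explicit.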
In Experiment 1, the amplitude of the M170 was significantly larger for faces than for animals (t[13]=7.69, p<0.0001), human hands (t[13]=5.72, p<0.0001), houses (t[13]=5.18, p<0.0001) and common objects (t[13]=8.34, p<0.0001). In addition, the M170 latency was significantly later (by 9 msec on average) to animals than to human faces (t[13]=3.78, p<0.001).

In Experiment 2, all face stimuli produced a significantly larger response than objects (all ps<0.005), except for cartoon faces, where this difference did not reach significance (t[6]=1.84, p>0.05). The amplitude of the M170 elicited by front-view human faces was significantly larger than that for profile faces (t[6]=2.6, p<0.05) and cartoon faces (t[6]=4.93, p<0.001), but not significantly different from that for cat faces or line-drawing faces (both ps>0.2). In addition, the M170 latency was significantly later to cat faces than to human faces (t[6]=4.48, p<0.001); however, the latencies for line-drawing and profile faces did not differ from that for human front-view faces (all ps>0.1).

In Experiment 3, the M170 latency was significantly later (by 13 msec on average) to inverted faces than to upright ones (t[13]=8.99, p<0.0001), but no significant difference was found in amplitude (t[13]=0.47, p>0.1). In addition, two-tone Mooney faces failed to elicit as large an M170 as human faces did (t[13]=6.07, p<0.0001).

Discussion: The main results of this study can be summarized as follows. A clear and bilateral M170 response to faces was found at occipitotemporal sites in 14 out of 15 subjects tested. Neither animal stimuli nor human hands elicited an M170 as large as that elicited by faces, showing that the M170 is selective for faces, not for human or animal forms.
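The condition contrasts above are within-subject comparisons: each subject contributes one amplitude (or latency) per condition, and paired t-tests with df = n − 1 assess the differences (hence t[13] for 14 subjects and t[6] for the 7 subjects of Experiment 2). A numpy-only sketch of the test statistic, using made-up amplitude values rather than the paper's data:

```python
import numpy as np

def paired_t(cond_a, cond_b):
    """Paired t statistic: one difference score per subject,
    so degrees of freedom = n - 1."""
    d = np.asarray(cond_a, float) - np.asarray(cond_b, float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Illustrative (not the paper's) per-subject M170 amplitudes,
# averaged across hemispheres, for 14 subjects:
rng = np.random.default_rng(1)
faces = 120 + rng.normal(0, 15, 14)
objects = 70 + rng.normal(0, 15, 14)

t, df = paired_t(faces, objects)
# With 14 subjects df = 13; |t| > 2.16 (the two-tailed critical value
# at alpha = 0.05 for df = 13) indicates a significant difference.
```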
Further, because the M170 response was low to houses and hands even though the task required subjects to discriminate between exemplars of these categories, our data suggest that the M170 does not simply reflect subordinate-level categorization of any stimulus class. Experiment 2 found that the M170 was not significantly lower in amplitude for cat faces and line-drawing faces than for grayscale human front-view faces, demonstrating that the M170 generalizes across face stimuli with very different low-level features. On the other hand, the longer latency of the M170 elicited by cat faces and the lower amplitude of the M170 elicited by profile-view human faces suggest that any deviation from the configuration of human front-view faces reduces the efficiency of the processing underlying this response. In Experiment 3, the M170 to inverted faces was as large as that to upright faces, but it was delayed by 13 msec in latency.

Our results are generally consistent with prior studies of the M170 (see also Halgren et al. [20]), except that where we find bilateral face-specific M170s, other studies have found the M170 to be larger [19] or exclusively located [18] in the right hemisphere. The most direct and extensive investigations of face-specific neural responses have been carried out using subdural electrode recordings from the surface of the human ventral pathway [11, 21, 22]. These studies have included most of the stimulus conditions tested in the present study. The response profile we observed for the M170 and the bilaterality of the M170 response are both consistent with the results from direct electrical recordings reported by Allison and colleagues. Our results are also consistent with prior studies using scalp ERPs, except that animal faces did not produce an N170 [9] but did produce an M170 in the present study.

Although the response properties of the M170 are similar to those of the FFA observed with fMRI in most respects, there are several apparent differences.
First, M170 responses were bilateral and, if anything, larger over the left hemisphere, whereas the FFA is typically larger in the right hemisphere. Second, our experiment showed that the M170 elicited by cartoon and Mooney faces was more like that elicited by common objects than that elicited by human faces. These results differ from the pattern of results found for the FFA using fMRI [23]. These differences suggest that the M170 may reflect processing that occurs not only in the FFA but also in other neural sites.

Does the M170 reflect face detection or face recognition, or both? Behavioral studies have shown that surface information is critical in face recognition [24]. However, in our MEG study, line-drawing faces elicited as large a magnetic response as did grayscale faces. Furthermore, inversion of faces did not reduce the amplitude of the M170, although face recognition performance is greatly reduced by inversion [25]. These considerations suggest that the M170 may be engaged in detecting the presence of faces rather than in extracting the critical stimulus information necessary for face recognition.

Conclusions: Our results strongly suggest that the M170 response is tuned to the broad stimulus category of faces. The face-selective M170 is similar in many respects to the N170 and N200 observed with scalp and subdural ERPs [9, 11, 21, 22]. Our study lays the groundwork for future MEG investigations into the number and locus of the neural sources that generate the M170. Further, the evidence provided here for the selectivity of the M170 enables us to use the M170 as a marker of face processing in future work.

ACKNOWLEDGEMENTS: This work was supported by grants to N.K. from Human Frontiers, NIMH (56037), and the Charles E. Reed Faculty Initiatives Fund. We thank Alison Harris for preparing the line-drawing faces, John Kanwisher for assistance with the optics, and Anders Dale for discussions of the research.

References:
1. De Renzi E. Current issues in prosopagnosia.
In: Ellis HD, Jeeves MA, Newcombe F and Young AW, eds. Aspects of Face Processing. Dordrecht: Martinus Nijhoff, 1986: 153-252.
2. Moscovitch M, Winocur G, Behrmann M. J Cogn Neurosci 9, 555-604 (1997).
3. Kanwisher N, McDermott J, Chun M. J Neurosci 17, 4302-4311 (1997).
4. McCarthy G, Puce A, Gore J et al. J Cogn Neurosci 9, 605-610 (1997).
5. Sergent J, Ohta S, MacDonald B. Brain 115, 15-36 (1992).
6. Haxby JV, Ungerleider LG, Clark VP et al. Neuron 22, 189-199 (1999).
7. Jeffreys DA. Exp Brain Res 78, 193-202 (1989).
8. George N, Evans J, Fiori N et al. Brain Res Cogn Brain Res 4, 65-76 (1996).
9. Bentin S, Allison T, Puce A et al. J Cogn Neurosci 8, 551-565 (1996).
10. Allison T, Ginter H, McCarthy G et al. J Neurophysiol 71, 821-825 (1994).
11. Allison T, Puce A, Spencer DD et al. Cereb Cortex 9, 415-430 (1999).
12. Fried I, MacDonald KA, Wilson CL. Neuron 18, 753-765 (1997).
13. Seeck M, Michel CM, Mainwaring N et al. Neuroreport 8, 2749-2754 (1997).
14. Linkenkaer-Hansen K, Palva JM, Sams M et al. Neurosci Lett 253, 147-150 (1998).
15. Lu ST, Hämäläinen MS, Hari R et al. Neuroscience 43, 287-290 (1991).
16. Sams M, Hietanen JK, Hari R et al. Neuroscience 77, 49-55 (1997).
17. Streit M, Ioannides AA, Liu L et al. Brain Res Cogn Brain Res 7, 125-142 (1999).
18. Swithenby SJ, Bailey AJ, Brautigam S et al. Exp Brain Res 118, 501-510 (1998).
19. Watanabe S, Kakigi R, Koyama S et al. Brain Res Cogn Brain Res 8, 125-142 (1999).
20. Halgren E, Raij T, Marinkovic K et al. Cereb Cortex (in press).
21. McCarthy G, Puce A, Belger A et al. Cereb Cortex 9, 431-444 (1999).
22. Puce A, Allison T, McCarthy G. Cereb Cortex 9, 445-458 (1999).
23. Tong F, Nakayama K, Moscovitch M et al. Cogn Neuropsychol (in press).
24. Davies GM, Ellis HD, Shepherd JW. J Appl Psychol 63, 180-187 (1978).
25. Farah MJ. Dissociable systems for visual recognition: A cognitive neuropsychology approach. In: Kosslyn SM and Osherson DN, eds. Visual Cognition. Cambridge: MIT Press, 1995: 101-119.
Figure Legends

Figure 1: (Top) The average response of each of the 64 channels elicited by faces (black waveform) and objects (red waveform) from a typical subject in the localizer experiment. As can be seen, at least one sensor in each hemisphere shows a much stronger response to faces than to objects; these sensors were selected as the SOIs for the analyses of the subsequent experiments in the same subject. (L: left; R: right; F: frontal; P: posterior). (Bottom) The response to faces (red), hands (green), and objects (blue) at these two SOIs in the localizer experiment.

Figure 2: The M170 response from the SOIs in the left (top) and right (bottom) hemispheres, averaged across subjects for each experiment.

Figure 3: The average amplitudes and latencies for each condition from the three main experiments.