An important human capacity is the ability to imagine performing an action, and its consequences, without actually executing it. Here we seek neural representations of specific manual actions that are common across visuo-motor performance and imagery. Participants were scanned with fMRI while they performed and observed themselves performing two different manual actions during some trials, and imagined performing and observing themselves performing the same actions during other trials. We used multi-variate pattern analysis to identify areas where representations of specific actions generalize across imagined and performed actions. The left anterior parietal cortex showed this property. In this region, we also found that activity patterns for imagined actions generalize better to performed actions than vice versa, and we provide simulation results that can explain this asymmetry. The present results are the first demonstration of action-specific representations that are similar irrespective of whether actions are actively performed or covertly imagined. Further, they demonstrate concretely how the apparent cross-modal visuo-motor coding of actions identified in studies of a human “mirror neuron system” could, at least partially, reflect imagery.
The discovery of mirror neurons—neurons that code specific actions both when executed and observed—in area F5 of the macaque provides a potential neural mechanism underlying action understanding. To date, neuroimaging evidence for similar coding of specific actions across the visual and motor modalities in human ventral premotor cortex (PMv)—the putative homologue of macaque F5—is limited to the case of actions observed from a first-person perspective. However, it is the third-person perspective that figures centrally in our understanding of the actions and intentions of others. To address this gap in the literature, we scanned participants with functional magnetic resonance imaging while they viewed two actions from either a first- or third-person perspective during some trials and executed the same actions during other trials. Using multivoxel pattern analysis, we found action-specific cross-modal visual–motor representations in PMv for the first-person but not for the third-person perspective. Additional analyses showed no evidence for spatial or attentional differences across the two perspective conditions. In contrast, more posterior areas in the parietal and occipitotemporal cortex did show cross-modal coding regardless of perspective. These findings point to a stronger role for these latter regions, relative to PMv, in supporting the understanding of others' actions with reference to one's own actions.
How is working memory for different visual categories supported in the brain? Do the same principles of cortical specialization that govern the initial processing and encoding of visual stimuli also apply to their short-term maintenance? We investigated these questions with a delayed discrimination paradigm for faces, bodies, flowers and scenes and applied both univariate and multivariate analyses to functional magnetic resonance imaging (fMRI) data. Activity during encoding followed the well-known specialization in posterior areas. During the delay interval, activity shifted to frontal and parietal regions, but was not specialized for category. Conversely, activity in visual areas returned to baseline during that interval, but showed some evidence of category specialization. We conclude that principles of cortical activation differ between encoding and maintenance of visual material. In contrast to perceptual processes that rely on specialized regions in occipitotemporal cortex, maintenance is supported by a fronto-parietal network that seems to require little specialization at the category level.
Motivation improves the efficiency of intentional behavior, but how this performance modulation is instantiated in the human brain remains unclear. We used a reward-cued antisaccade paradigm to investigate how motivational goals (the expectation of a reward for good performance) modulate patterns of neural activation and functional connectivity to improve preparation for antisaccade performance. Behaviorally, subjects performed better (faster and more accurate antisaccades) when they knew they would be rewarded for good performance. Reward anticipation was associated with increased activation in the ventral and dorsal striatum, and cortical oculomotor regions. Functional connectivity between the caudate nucleus and cortical oculomotor control structures predicted individual differences in the behavioral benefit of reward anticipation. We conclude that while both dorsal and ventral striatal circuitry are involved in the anticipation of reward, only the dorsal striatum and its connected cortical network is involved in the direct modulation of oculomotor behavior by motivational incentive.
Many lines of evidence point to a tight linkage between the perceptual and motoric representations of actions. Numerous demonstrations show how the visual perception of an action engages compatible activity in the observer's motor system. This is seen for both intransitive actions (e.g., in the case of unconscious postural imitation) and transitive actions (e.g., grasping an object). Although the discovery of “mirror neurons” in macaques has inspired explanations of these processes in human action behaviors, the evidence for areas in the human brain that similarly form a crossmodal visual/motor representation of actions remains incomplete. To address this, in the present study, participants performed and observed hand actions while being scanned with functional MRI. We took a data-driven approach by applying whole-brain information mapping using a multivoxel pattern analysis (MVPA) classifier, performed on reconstructed representations of the cortical surface. The aim was to identify regions in which local voxelwise patterns of activity can distinguish among different actions, across the visual and motor domains. Experiment 1 tested intransitive, meaningless hand movements, whereas experiment 2 tested object-directed actions (all right-handed). Our analyses of both experiments revealed crossmodal action regions in the lateral occipitotemporal cortex (bilaterally) and in the left postcentral gyrus/anterior parietal cortex. Furthermore, in experiment 2 we identified a gradient of bias in the patterns of information in the left hemisphere postcentral/parietal region. The postcentral gyrus carried more information about the effectors used to carry out the action (fingers vs. whole hand), whereas anterior parietal regions carried more information about the goal of the action (lift vs. punch).
Taken together, these results provide evidence for common neural coding in these areas of the visual and motor aspects of actions, and demonstrate further how MVPA can contribute to our understanding of the nature of distributed neural representations.
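The crossmodal logic described above — training a classifier on activity patterns from one modality and testing it on the other — can be sketched with synthetic data. Everything below (voxel counts, noise levels, the nearest-centroid classifier, the action labels) is an illustrative assumption, not the analysis pipeline actually used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 20

# Hypothetical action-specific patterns shared across modalities,
# plus modality-specific offsets (names and numbers are illustrative).
action_patterns = {"lift": rng.normal(size=n_voxels),
                   "punch": rng.normal(size=n_voxels)}
modality_offset = {"motor": rng.normal(scale=0.5, size=n_voxels),
                   "visual": rng.normal(scale=0.5, size=n_voxels)}

def simulate(action, modality):
    """One noisy trial pattern for a given action and modality."""
    return (action_patterns[action] + modality_offset[modality]
            + rng.normal(scale=0.8, size=n_voxels))

# Train nearest-centroid templates on motor trials ...
train = {a: np.mean([simulate(a, "motor") for _ in range(n_trials)], axis=0)
         for a in action_patterns}

def classify(pattern):
    # Assign the label whose training centroid correlates best with the trial.
    return max(train, key=lambda a: np.corrcoef(pattern, train[a])[0, 1])

# ... and test on visual trials: above-chance accuracy indicates a
# modality-invariant (crossmodal) action representation.
test_trials = [(a, simulate(a, "visual")) for a in action_patterns
               for _ in range(n_trials)]
accuracy = np.mean([classify(p) == a for a, p in test_trials])
print(round(accuracy, 2))
```

Because the shared action component survives the modality change, decoding transfers across modalities; in a region without crossmodal coding, the same test would hover at chance.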
For functional magnetic resonance imaging (fMRI), multi-voxel pattern analysis (MVPA) has been shown to be a sensitive method to detect areas that encode certain stimulus dimensions. By moving a searchlight through the volume of the brain, one can continuously map the information content about the experimental conditions of interest throughout the brain.
Traditionally, the searchlight is defined as a volume sphere that does not take into account the anatomy of the cortical surface. Here we present a method that uses a cortical surface reconstruction to guide voxel selection for information mapping. This approach differs in two important respects from a volume-based searchlight definition. First, it uses only voxels that are classified as grey matter based on an anatomical scan. Second, it uses a surface-based geodesic distance metric to define neighbourhoods of voxels, and does not select voxels across a sulcus. We study here the influence of these two factors on classification accuracy and on the spatial specificity of the resulting information map.
In our example data set, participants pressed one of four fingers while undergoing fMRI. We used MVPA to identify regions in which local fMRI patterns can successfully discriminate which finger was moved. We show that surface-based information mapping is a more sensitive measure of local information content, and provides better spatial selectivity. This makes surface-based information mapping a useful technique for a data-driven analysis of information representation in the cerebral cortex.
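The key idea of the surface-based searchlight — selecting neighbouring voxels by distance along the cortical sheet rather than through the volume, so a neighbourhood never jumps across a sulcus — can be illustrated on a toy mesh. The graph below and the hop count used as a stand-in for geodesic distance are assumptions for illustration, not the actual surface reconstruction:

```python
from collections import deque

# Toy cortical mesh: vertices connected along the surface. Vertices 3 and 4
# face each other across a "sulcus": close in 3-D space, but far apart
# (here: disconnected) along the cortical sheet.
surface_edges = {
    0: [1], 1: [0, 2], 2: [1, 3], 3: [2],   # one bank of the sulcus
    4: [5], 5: [4, 6], 6: [5, 7], 7: [6],   # opposite bank
}

def surface_neighbourhood(seed, max_hops):
    """Vertices within `max_hops` edges of `seed` (breadth-first search),
    a crude stand-in for a geodesic-distance threshold on a real mesh."""
    seen, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == max_hops:
            continue
        for nb in surface_edges[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, dist + 1))
    return seen

# A volume sphere centred near vertex 3 would also capture vertex 4 on the
# opposite bank; the surface-based neighbourhood never crosses the sulcus.
print(sorted(surface_neighbourhood(3, max_hops=2)))  # -> [1, 2, 3]
```

In the real method, each such surface neighbourhood selects the grey-matter voxels it intersects, and an MVPA classifier is run on that voxel set to produce one accuracy value per surface node.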
In two fMRI experiments (n = 44), using tasks with different demands, approach–avoidance versus one-back recognition decisions, we measured the responses to the social value of faces. The face stimuli were produced by a parametric model of face evaluation that reduces multiple social evaluations to two orthogonal dimensions of valence and power [Oosterhof, N. N., & Todorov, A. The functional basis of face evaluation. Proceedings of the National Academy of Sciences, U.S.A., 105, 11087–11092, 2008]. Independent of the task, the response within regions of the occipital, fusiform, and lateral prefrontal cortices was sensitive to the valence dimension, with larger responses to low-valence faces. Additionally, there were extensive quadratic responses in the fusiform gyri and dorsal amygdala, with larger responses to faces at the extremes of the face valence continuum than faces in the middle. In all these regions, participants' avoidance decisions correlated with brain responses, with faces more likely to be avoided evoking stronger responses. The findings suggest that both explicit and implicit face evaluation engage multiple brain regions involved in attention, affect, and decision making.
The face is our primary source of visual information for identifying people and reading their emotional and mental states. With the exception of prosopagnosics (who are unable to recognize faces) and those suffering from such disorders of social cognition as autism, people are extremely adept at these two tasks. However, our cognitive powers in this regard come at the price of reading too much into the human face. The face is often treated as a window into a person's true nature. Given the agreement in social perception of faces, this paper argues that it should be possible to model this perception.
Perception of both gaze-direction and symbolic directional cues (e.g. arrows) orients an observer’s attention toward the indicated location. It is unclear, however, whether these similar behavioral effects are examples of the same attentional phenomenon and, therefore, subserved by the same neural substrate. It has been proposed that gaze, given its evolutionary significance, constitutes a ‘special’ category of spatial cue. As such, it is predicted that the neural systems supporting spatial reorienting will be different for gaze than for non-biological symbols. We tested this prediction using functional magnetic resonance imaging to measure the brain’s response during target localization in which laterally presented targets were preceded by uninformative gaze or arrow cues. Reaction times were faster during valid than invalid trials for both arrow and gaze cues. However, differential patterns of activity were evoked in the brain. Trials including invalid rather than valid arrow cues resulted in a stronger hemodynamic response in the ventral attention network. No such difference was seen during trials including valid and invalid gaze cues. This differential engagement of the ventral reorienting network is consistent with the notion that the facilitation of target detection by gaze cues and arrow cues is subserved by different neural substrates.
Previous studies have shown that trustworthiness judgments from facial appearance approximate general valence evaluation of faces (Oosterhof & Todorov, 2008) and are made after as little as 100 ms exposure to novel faces (Willis & Todorov, 2006). In Experiment 1, using better masking procedures and shorter exposures, we replicate the latter findings. In Experiment 2, we systematically manipulate the exposure to faces and show that a sigmoid function almost perfectly describes how judgments change as a function of exposure time. The agreement of these judgments with time-unconstrained judgments is above chance after 33 ms, improves with additional exposure, and does not improve with exposures longer than 167 ms. In Experiment 3, using a priming paradigm, we show that effects of face trustworthiness are detectable even when the faces are presented below the threshold of objective awareness as measured by a forced choice recognition test of the primes. The findings suggest that people automatically make valence/trustworthiness judgments from facial appearance.
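The sigmoid relationship reported in Experiment 2 can be sketched as follows; the parameter values (floor, ceiling, midpoint, slope) are invented for illustration and are not the fitted values from the paper:

```python
import math

def sigmoid_agreement(t_ms, floor=0.5, ceiling=0.9, midpoint=80.0, slope=25.0):
    """Hypothetical sigmoid linking exposure time (ms) to agreement with
    time-unconstrained judgments. All parameter values are illustrative."""
    return floor + (ceiling - floor) / (1 + math.exp(-(t_ms - midpoint) / slope))

# Agreement rises above the floor at the briefest exposures and saturates:
# additional exposure beyond ~167 ms adds almost nothing.
for t_ms in (33, 100, 167, 500):
    print(t_ms, round(sigmoid_agreement(t_ms), 3))
```

The qualitative shape matches the abstract: above-chance agreement at 33 ms, improvement with additional exposure, and a plateau beyond 167 ms.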
Using a composite-face paradigm, we show that social judgments from faces rely on holistic processing. Participants judged facial halves more positively when aligned with trustworthy than with untrustworthy halves, despite instructions to ignore the aligned parts (experiment 1). This effect was substantially reduced when the faces were inverted (experiments 2 and 3) and when the halves were misaligned (experiment 3). In all three experiments, judgments were affected to a larger extent by the to-be-attended than the to-be-ignored halves, suggesting that there is partial control of holistic processing. However, after rapid exposures to faces (33 to 100 ms), judgments of trustworthy and untrustworthy halves aligned with incongruent halves were indistinguishable (experiment 4a). Differences emerged with exposures longer than 100 ms. In contrast, when participants were not instructed to attend to specific facial parts, these differences did not emerge (experiment 4b). These findings suggest that the initial pass of information is holistic and that additional time allows participants to partially ignore the task-irrelevant context.
Using a dynamic stimuli paradigm, in which faces expressed either happiness or anger, the authors tested the hypothesis that perceptions of trustworthiness are related to these expressions. Although the same emotional intensity was added to both trustworthy and untrustworthy faces, trustworthy faces that expressed happiness were perceived as happier than untrustworthy faces, and untrustworthy faces that expressed anger were perceived as angrier than trustworthy faces. The authors also manipulated changes in face trustworthiness simultaneously with the change in expression. Whereas transitions in face trustworthiness in the direction of the expressed emotion (e.g., high-to-low trustworthiness and anger) increased the perceived intensity of the emotion, transitions in the opposite direction decreased this intensity. For example, changes from high to low trustworthiness increased the intensity of perceived anger but decreased the intensity of perceived happiness. These findings support the hypothesis that changes along the trustworthiness dimension correspond to subtle changes resembling expressions signaling whether the person displaying the emotion should be avoided or approached.
People reliably and automatically make personality inferences from facial appearance despite little evidence for their accuracy. Although such inferences are highly inter-correlated, research has traditionally focused on studying specific traits such as trustworthiness. We advocate an alternative, data-driven approach to identify and model the structure of face evaluation. Initial findings indicate that specific trait inferences can be represented within a 2D space defined by valence/trustworthiness and power/dominance evaluation of faces. Inferences along these dimensions are based on similarity to expressions signaling approach or avoidance behavior and features signaling physical strength, respectively, indicating that trait inferences from faces originate in functionally adaptive mechanisms. We conclude with a discussion of the potential role of the amygdala in face evaluation.
People automatically evaluate faces on multiple trait dimensions, and these evaluations predict important social outcomes, ranging from electoral success to sentencing decisions. Based on behavioral studies and computer modeling, we develop a 2D model of face evaluation. First, using a principal components analysis of trait judgments of emotionally neutral faces, we identify two orthogonal dimensions, valence and dominance, that are sufficient to describe face evaluation and show that these dimensions can be approximated by judgments of trustworthiness and dominance. Second, using a data-driven statistical model for face representation, we build and validate models for representing face trustworthiness and face dominance. Third, using these models, we show that, whereas valence evaluation is more sensitive to features resembling expressions signaling whether the person should be avoided or approached, dominance evaluation is more sensitive to features signaling physical strength/weakness. Fourth, we show that important social judgments, such as threat, can be reproduced as a function of the two orthogonal dimensions of valence and dominance. The findings suggest that face evaluation involves an overgeneralization of adaptive mechanisms for inferring harmful intentions and the ability to cause harm and can account for rapid, yet not necessarily accurate, judgments from faces.
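The first analysis step above — a principal components analysis recovering two dominant, orthogonal dimensions from inter-correlated trait judgments — can be sketched on synthetic ratings. The data-generating assumptions below (two latent dimensions, eight traits, the noise level) are illustrative only, not the study's actual judgment data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_faces, n_traits = 100, 8

# Synthetic ratings: each of 8 trait judgments is a mixture of two latent
# dimensions ("valence" and "dominance") plus rating noise.
valence = rng.normal(size=n_faces)
dominance = rng.normal(size=n_faces)
loadings = rng.normal(size=(2, n_traits))
ratings = (np.outer(valence, loadings[0]) + np.outer(dominance, loadings[1])
           + rng.normal(scale=0.3, size=(n_faces, n_traits)))

# PCA via singular value decomposition of the mean-centred rating matrix.
centred = ratings - ratings.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centred, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)

# The first two components absorb nearly all the variance, mirroring the
# 2-D valence/dominance structure the abstract describes.
print(round(float(explained[:2].sum()), 2))
```

With real judgment data the split is of course noisier, but the logic is the same: many correlated trait scales collapsing onto a low-dimensional evaluative space.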
Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional magnetic resonance imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response—as the untrustworthiness of faces increased so did the amygdala response. Areas in the left and right putamen, the latter area extended into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic—strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.
We present a generalization of patterns as used in definitions in functional languages, called application patterns. They consist of a function applied to arguments. While matching such a pattern against an actual argument, inverse functions are used to find the binding of variables to values. Application patterns are universal in the sense that they include list, tuple, algebraic and n+k patterns.
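The central idea — recovering a variable binding by applying the inverse of the function in the pattern — can be sketched in a few lines. The helper functions below are hypothetical illustrations of the matching semantics for two of the pattern classes named above, not the paper's formalism:

```python
# Matching an "application pattern" f(x) against a value v: the binding
# for x is recovered by applying an inverse of f. Two examples follow,
# an n+k pattern and a list (cons) pattern.

def match_n_plus_k(value, k):
    """Match `value` against the pattern `n + k`: invert the addition to
    bind n, and fail (None) when the binding would be negative."""
    n = value - k          # inverse of (+ k)
    return n if n >= 0 else None

def match_cons(lst):
    """Match a list against the pattern `cons(head, tail)` by inverting
    list construction into its head and tail."""
    return (lst[0], lst[1:]) if lst else None

print(match_n_plus_k(7, 2))   # binds n = 5
print(match_n_plus_k(1, 2))   # no match: would bind n = -1
print(match_cons([1, 2, 3]))  # binds head = 1, tail = [2, 3]
```

The "universal" claim in the abstract is that list, tuple, algebraic and n+k patterns are all instances of this single scheme: each is a constructor application whose inverse yields the sub-bindings.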
An important human capacity is the ability to imagine performing an action, and its consequences,... more An important human capacity is the ability to imagine performing an action, and its consequences, without actually executing it. Here we seek neural representations of specific manual actions that are common across visuo-motor performance and imagery. Participants were scanned with fMRI while they performed and observed themselves performing two different manual actions during some trials, and imagined performing and observing themselves performing the same actions during other trials. We used multi-variate pattern analysis to identify areas where representations of specific actions generalize across imagined and performed actions. The left anterior parietal cortex showed this property. In this region, we also found that activity patterns for imagined actions generalize better to performed actions than vice versa, and we provide simulation results that can explain this asymmetry. The present results are the first demonstration of action-specific representations that are similar irrespective of whether actions are actively performed or covertly imagined. Further, they demonstrate concretely how the apparent cross-modal visuo-motor coding of actions identified in studies of a human “mirror neuron system” could, at least partially, reflect imagery.
The discovery of mirror neurons—neurons that code specific actions both when executed and observe... more The discovery of mirror neurons—neurons that code specific actions both when executed and observed—in area F5 of the macaque provides a potential neural mechanism underlying action understanding. To date, neuroimaging evidence for similar coding of specific actions across the visual and motor modalities in human ventral premotor cortex (PMv)—the putative homologue of macaque F5—is limited to the case of actions observed from a first-person perspective. However, it is the third-person perspective that figures centrally in our understanding of the actions and intentions of others. To address this gap in the literature, we scanned participants with functional magnetic resonance imaging while they viewed two actions from either a first- or third-person perspective during some trials and executed the same actions during other trials. Using multivoxel pattern analysis, we found action-specific cross-modal visual–motor representations in PMv for the first-person but not for the third-person perspective. Additional analyses showed no evidence for spatial or attentional differences across the two perspective conditions. In contrast, more posterior areas in the parietal and occipitotemporal cortex did show cross-modal coding regardless of perspective. These findings point to a stronger role for these latter regions, relative to PMv, in supporting the understanding of others' actions with reference to one's own actions.
How is working memory for different visual categories supported in the brain? Do the same princip... more How is working memory for different visual categories supported in the brain? Do the same principles of cortical specialization that govern the initial processing and encoding of visual stimuli also apply to their short-term maintenance? We investigated these questions with a delayed discrimination paradigm for faces, bodies, flowers and scenes and applied both univariate and multivariate analyses to functional magnetic resonance imaging (fMRI) data. Activity during encoding followed the well-known specialization in posterior areas. During the delay interval, activity shifted to frontal and parietal regions, but was not specialized for category. Conversely, activity in visual areas returned to baseline during that interval, but showed some evidence of category specialization. We conclude that principles of cortical activation differ between encoding and maintenance of visual material. In contrast to perceptual processes that rely on specialized regions in occipitotemporal cortex, maintenance is supported by a fronto-parietal network that seems to require little specialization at the category level.
Motivation improves the efficiency of intentional behavior, but how this performance modulation i... more Motivation improves the efficiency of intentional behavior, but how this performance modulation is instantiated in the human brain remains unclear. We used a reward-cued antisaccade paradigm to investigate how motivational goals (the expectation of a reward for good performance) modulate patterns of neural activation and functional connectivity to improve preparation for antisaccade performance. Behaviorally, subjects performed better (faster and more accurate antisaccades) when they knew they would be rewarded for good performance. Reward anticipation was associated with increased activation in the ventral and dorsal striatum, and cortical oculomotor regions. Functional connectivity between the caudate nucleus and cortical oculomotor control structures predicted individual differences in the behavioral benefit of reward anticipation. We conclude that while both dorsal and ventral striatal circuitry are involved in the anticipation of reward, only the dorsal striatum and its connected cortical network is involved in the direct modulation of oculomotor behavior by motivational incentive.
Many lines of evidence point to a tight linkage between the perceptual and motoric representation... more Many lines of evidence point to a tight linkage between the perceptual and motoric representations of actions. Numerous demonstrations show how the visual perception of an action engages compatible activity in the observer's motor system. This is seen for both intransitive actions (e.g., in the case of unconscious postural imitation) and transitive actions (e.g., grasping an object). Although the discovery of “mirror neurons” in macaques has inspired explanations of these processes in human action behaviors, the evidence for areas in the human brain that similarly form a crossmodal visual/motor representation of actions remains incomplete. To address this, in the present study, participants performed and observed hand actions while being scanned with functional MRI. We took a data-driven approach by applying whole-brain information mapping using a multivoxel pattern analysis (MVPA) classifier, performed on reconstructed representations of the cortical surface. The aim was to identify regions in which local voxelwise patterns of activity can distinguish among different actions, across the visual and motor domains. Experiment 1 tested intransitive, meaningless hand movements, whereas experiment 2 tested object-directed actions (all right-handed). Our analyses of both experiments revealed crossmodal action regions in the lateral occipitotemporal cortex (bilaterally) and in the left postcentral gyrus/anterior parietal cortex. Furthermore, in experiment 2 we identified a gradient of bias in the patterns of information in the left hemisphere postcentral/parietal region. The postcentral gyrus carried more information about the effectors used to carry out the action (fingers vs. whole hand), whereas anterior parietal regions carried more information about the goal of the action (lift vs. punch). 
Taken together, these results provide evidence for common neural coding in these areas of the visual and motor aspects of actions, and demonstrate further how MVPA can contribute to our understanding of the nature of distributed neural representations.
For functional magnetic resonance imaging (fMRI), multi-voxel pattern analysis (MVPA) has been sh... more For functional magnetic resonance imaging (fMRI), multi-voxel pattern analysis (MVPA) has been shown to be a sensitive method to detect areas that encode certain stimulus dimensions. By moving a searchlight through the volume of the brain, one can continuously map the information content about the experimental conditions of interest to the brain.
Traditionally, the searchlight is defined as a volume sphere that does not take into account the anatomy of the cortical surface. Here we present a method that uses a cortical surface reconstruction to guide voxel selection for information mapping. This approach differs in two important aspects from a volume-based searchlight definition. First, it uses only voxels that are classified as grey matter based on an anatomical scan. Second, it uses a surface-based geodesic distance metric to define neighbourhoods of voxels, and does not select voxels across a sulcus. We study here the influence of these two factors onto classification accuracy and onto the spatial specificity of the resulting information map.
In our example data set, participants pressed one of four fingers while undergoing fMRI. We used MVPA to identify regions in which local fMRI patterns can successfully discriminate which finger was moved. We show that surface-based information mapping is a more sensitive measure of local information content, and provides better spatial selectivity. This makes surface-based information mapping a useful technique for a data-driven analysis of information representation in the cerebral cortex.
In two fMRI experiments (n = 44), using tasks with different demands, approach–avoidance versus o... more In two fMRI experiments (n = 44), using tasks with different demands, approach–avoidance versus one-back recognition decisions, we measured the responses to the social value of faces. The face stimuli were produced by a parametric model of face evaluation that reduces multiple social evaluations to two orthogonal dimensions of valence and power [Oosterhof, N. N., & Todorov, A. The functional basis of face evaluation. Proceedings of the National Academy of Sciences, U.S.A., 105, 11087–11092, 2008]. Independent of the task, the response within regions of the occipital, fusiform, and lateral prefrontal cortices was sensitive to the valence dimension, with larger responses to low-valence faces. Additionally, there were extensive quadratic responses in the fusiform gyri and dorsal amygdala, with larger responses to faces at the extremes of the face valence continuum than faces in the middle. In all these regions, participants' avoidance decisions correlated with brain responses, with faces more likely to be avoided evoking stronger responses. The findings suggest that both explicit and implicit face evaluation engage multiple brain regions involved in attention, affect, and decision making.
The face is our primary source of visual information for identifying people and reading their emo... more The face is our primary source of visual information for identifying people and reading their emotional and mental states. With the exception of prosopagnosics (who are unable to recognize faces) and those suffering from such disorders of social cognition as autism, people are extremely adept at these two tasks. However, our cognitive powers in this regard come at the price of reading too much into the human face. The face is often treated as a window into a person's true nature. Given the agreement in social perception of faces, this paper discusses that it should be possible to model this perception.
Perception of both gaze-direction and symbolic directional cues (e.g. arrows) orient an observer’... more Perception of both gaze-direction and symbolic directional cues (e.g. arrows) orient an observer’s attention toward the indicated location. It is unclear, however, whether these similar behavioral effects are examples of the same attentional phenomenon and, therefore, subserved by the same neural substrate. It has been proposed that gaze, given its evolutionary significance, constitutes a ‘special’ category of spatial cue. As such, it is predicted that the neural systems supporting spatial reorienting will be different for gaze than for non-biological symbols. We tested this prediction using functional magnetic resonance imaging to measure the brain’s response during target localization in which laterally presented targets were preceded by uninformative gaze or arrow cues. Reaction times were faster during valid than invalid trials for both arrow and gaze cues. However, differential patterns of activity were evoked in the brain. Trials including invalid rather than valid arrow cues resulted in a stronger hemodynamic response in the ventral attention network. No such difference was seen during trials including valid and invalid gaze cues. This differential engagement of the ventral reorienting network is consistent with the notion that the facilitation of target detection by gaze cues and arrow cues is subserved by different neural substrates.
Previous studies have shown that trustworthiness judgments from facial appearance approximate general valence evaluation of faces (Oosterhof & Todorov, 2008) and are made after as little as 100 ms exposure to novel faces (Willis & Todorov, 2006). In Experiment 1, using better masking procedures and shorter exposures, we replicate the latter findings. In Experiment 2, we systematically manipulate the exposure to faces and show that a sigmoid function almost perfectly describes how judgments change as a function of exposure time. The agreement of these judgments with time-unconstrained judgments is above chance after 33 ms, improves with additional exposure, and does not improve with exposures longer than 167 ms. In Experiment 3, using a priming paradigm, we show that effects of face trustworthiness are detectable even when the faces are presented below the threshold of objective awareness as measured by a forced-choice recognition test of the primes. The findings suggest that people automatically make valence/trustworthiness judgments from facial appearance.
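The sigmoid relationship reported in Experiment 2 can be sketched as a four-parameter logistic curve. This is a minimal illustration only: the function name and the parameter values below are assumptions for demonstration, not the fitted values from the paper.

```python
import math

def agreement(exposure_ms, midpoint, slope, chance, asymptote):
    """Logistic curve for agreement with time-unconstrained judgments.

    Rises from `chance` toward `asymptote` as exposure increases;
    `midpoint` is the exposure at which agreement is halfway between them.
    """
    return chance + (asymptote - chance) / (
        1.0 + math.exp(-slope * (exposure_ms - midpoint))
    )

# illustrative parameters: already above chance at 33 ms,
# close to the asymptote by 167 ms
curve = [agreement(t, 60.0, 0.05, 0.5, 0.9) for t in (33, 100, 167)]
```

With these hypothetical parameters the curve reproduces the qualitative pattern: above-chance agreement at the shortest exposure and a plateau by the longest.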
Using a composite-face paradigm, we show that social judgments from faces rely on holistic processing. Participants judged facial halves more positively when aligned with trustworthy than with untrustworthy halves, despite instructions to ignore the aligned parts (experiment 1). This effect was substantially reduced when the faces were inverted (experiments 2 and 3) and when the halves were misaligned (experiment 3). In all three experiments, judgments were affected to a larger extent by the to-be-attended than the to-be-ignored halves, suggesting that there is partial control of holistic processing. However, after rapid exposures to faces (33 to 100 ms), judgments of trustworthy and untrustworthy halves aligned with incongruent halves were indistinguishable (experiment 4a). Differences emerged with exposures longer than 100 ms. In contrast, when participants were not instructed to attend to specific facial parts, these differences did not emerge (experiment 4b). These findings suggest that the initial pass of information is holistic and that additional time allows participants to partially ignore the task-irrelevant context.
Using a dynamic stimuli paradigm, in which faces expressed either happiness or anger, the authors tested the hypothesis that perceptions of trustworthiness are related to these expressions. Although the same emotional intensity was added to both trustworthy and untrustworthy faces, trustworthy faces who expressed happiness were perceived as happier than untrustworthy faces, and untrustworthy faces who expressed anger were perceived as angrier than trustworthy faces. The authors also manipulated changes in face trustworthiness simultaneously with the change in expression. Whereas transitions in face trustworthiness in the direction of the expressed emotion (e.g., high-to-low trustworthiness and anger) increased the perceived intensity of the emotion, transitions in the opposite direction decreased this intensity. For example, changes from high to low trustworthiness increased the intensity of perceived anger but decreased the intensity of perceived happiness. These findings support the hypothesis that changes along the trustworthiness dimension correspond to subtle changes resembling expressions signaling whether the person displaying the emotion should be avoided or approached.
People reliably and automatically make personality inferences from facial appearance despite little evidence for their accuracy. Although such inferences are highly inter-correlated, research has traditionally focused on studying specific traits such as trustworthiness. We advocate an alternative, data-driven approach to identify and model the structure of face evaluation. Initial findings indicate that specific trait inferences can be represented within a 2D space defined by valence/trustworthiness and power/dominance evaluation of faces. Inferences along these dimensions are based on similarity to expressions signaling approach or avoidance behavior and features signaling physical strength, respectively, indicating that trait inferences from faces originate in functionally adaptive mechanisms. We conclude with a discussion of the potential role of the amygdala in face evaluation.
People automatically evaluate faces on multiple trait dimensions, and these evaluations predict important social outcomes, ranging from electoral success to sentencing decisions. Based on behavioral studies and computer modeling, we develop a 2D model of face evaluation. First, using a principal components analysis of trait judgments of emotionally neutral faces, we identify two orthogonal dimensions, valence and dominance, that are sufficient to describe face evaluation and show that these dimensions can be approximated by judgments of trustworthiness and dominance. Second, using a data-driven statistical model for face representation, we build and validate models for representing face trustworthiness and face dominance. Third, using these models, we show that, whereas valence evaluation is more sensitive to features resembling expressions signaling whether the person should be avoided or approached, dominance evaluation is more sensitive to features signaling physical strength/weakness. Fourth, we show that important social judgments, such as threat, can be reproduced as a function of the two orthogonal dimensions of valence and dominance. The findings suggest that face evaluation involves an overgeneralization of adaptive mechanisms for inferring harmful intentions and the ability to cause harm and can account for rapid, yet not necessarily accurate, judgments from faces.
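The first step of the model, extracting two orthogonal dimensions from inter-correlated trait judgments, can be sketched as a PCA via singular value decomposition. The judgment matrix below is simulated, not the paper's data; the two planted latent factors stand in for valence and dominance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_faces, n_traits = 60, 10

# synthetic stand-in for trait judgments: two latent dimensions
# (e.g. valence, dominance) drive all traits, plus a little noise
latent = rng.normal(size=(n_faces, 2))
loadings = rng.normal(size=(2, n_traits))
judgments = latent @ loadings + 0.1 * rng.normal(size=(n_faces, n_traits))

# PCA via SVD of the mean-centered judgment matrix
X = judgments - judgments.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S**2 / np.sum(S**2)      # variance explained per component
scores_2d = X @ Vt[:2].T             # each face as a point in the 2D space
```

Because only two latent factors generate the data, the first two components absorb nearly all the variance, mirroring the finding that two orthogonal dimensions suffice to describe face evaluation.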
Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional magnetic resonance imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response—as the untrustworthiness of faces increased so did the amygdala response. Areas in the left and right putamen, the latter extending into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic—strongest for faces on both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.
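The contrast between linear and quadratic response profiles can be illustrated by comparing polynomial fits to a hypothetical U-shaped response, the shape reported for the left amygdala. The data below are simulated for illustration; nothing here reproduces the study's measurements.

```python
import numpy as np

# hypothetical trustworthiness levels and a U-shaped (quadratic) response
trust = np.linspace(-3, 3, 25)
rng = np.random.default_rng(1)
response = 0.5 * trust**2 + 0.05 * rng.normal(size=trust.size)

def sse(degree):
    """Sum of squared errors for a polynomial fit of the given degree."""
    coefs = np.polyfit(trust, response, degree)
    return float(np.sum((np.polyval(coefs, trust) - response) ** 2))

linear_err, quadratic_err = sse(1), sse(2)
```

For a genuinely U-shaped profile the quadratic model fits far better than the linear one, which is the kind of model comparison that distinguishes the two response types described above.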
We present a generalization of patterns as used in definitions in functional languages, called application patterns. They consist of a function applied to arguments. While matching such a pattern against an actual argument, inverse functions are used to find the binding of variables to values. Application patterns are universal in the sense that they include list, tuple, algebraic and n+k patterns.
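The core idea, matching a value against a pattern of the form `f(x)` by running an inverse of `f`, can be sketched in Python. This is a simplified illustration, not the paper's formalism; `match_application` is a hypothetical helper, and the n+k example stands in for the general case.

```python
def match_application(value, func, inverse):
    """Match `value` against the pattern `func(x)` using `inverse`.

    The candidate binding is x = inverse(value); the match succeeds
    only if applying `func` to it reproduces `value` exactly.
    Returns the binding on success, None on failure.
    """
    x = inverse(value)
    return x if func(x) == value else None

# n+k pattern: match `n + 2` against 7 -> binds n = 5
bind = match_application(7, lambda n: n + 2, lambda v: v - 2)
```

The round-trip check (`func(inverse(value)) == value`) is what makes non-injective or partial inverses safe: matching `2 * n` against an odd number fails instead of binding a wrong value.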
Papers by Nick Oosterhof
An important human capacity is the ability to imagine performing an action, and its consequences, without actually executing it. Here we seek neural representations of specific manual actions that are common across visuo-motor performance and imagery. Participants were scanned with fMRI while they performed and observed themselves performing two different manual actions during some trials, and imagined performing and observing themselves performing the same actions during other trials. We used multi-variate pattern analysis to identify areas where representations of specific actions generalize across imagined and performed actions. The left anterior parietal cortex showed this property. In this region, we also found that activity patterns for imagined actions generalize better to performed actions than vice versa, and we provide simulation results that can explain this asymmetry. The present results are the first demonstration of action-specific representations that are similar irrespective of whether actions are actively performed or covertly imagined. Further, they demonstrate concretely how the apparent cross-modal visuo-motor coding of actions identified in studies of a human “mirror neuron system” could, at least partially, reflect imagery.
the ventral and dorsal striatum, and cortical oculomotor regions. Functional connectivity between the caudate nucleus and cortical oculomotor control structures predicted individual differences in the behavioral benefit of reward anticipation. We conclude that while both dorsal and ventral striatal circuitry are involved in the anticipation of reward, only the dorsal striatum and its connected cortical network is involved in the direct modulation of oculomotor behavior by motivational incentive.
Traditionally, the searchlight is defined as a volume sphere that does not take into account the anatomy of the cortical surface. Here we present a method that uses a cortical surface reconstruction to guide voxel selection for information mapping. This approach differs in two important aspects from a volume-based searchlight definition. First, it uses only voxels that are classified as grey matter based on an anatomical scan. Second, it uses a surface-based geodesic distance metric to define neighbourhoods of voxels, and does not select voxels across a sulcus. We study here the influence of these two factors on classification accuracy and on the spatial specificity of the resulting information map.
In our example data set, participants pressed one of four fingers while undergoing fMRI. We used MVPA to identify regions in which local fMRI patterns can successfully discriminate which finger was moved. We show that surface-based information mapping is a more sensitive measure of local information content, and provides better spatial selectivity. This makes surface-based information mapping a useful technique for a data-driven analysis of information representation in the cerebral cortex.
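The key step, selecting a neighbourhood by geodesic rather than Euclidean distance, amounts to a shortest-path search on the surface mesh graph. The sketch below uses Dijkstra's algorithm on a toy mesh; the adjacency structure, node indices and edge lengths are hypothetical, not from the method's actual implementation.

```python
import heapq

def geodesic_neighborhood(adjacency, seed, radius):
    """Select mesh nodes within `radius` of `seed` along the surface.

    adjacency: {node: [(neighbour, edge_length), ...]} for a cortical mesh.
    Unlike a Euclidean sphere, nodes on the opposite bank of a sulcus are
    excluded: the path along the surface around the sulcus exceeds `radius`
    even when the straight-line distance is small.
    """
    dist = {seed: 0.0}
    heap = [(0.0, seed)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue  # stale queue entry
        for neighbour, length in adjacency[node]:
            nd = d + length
            if nd <= radius and nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return set(dist)

# toy mesh: a chain of 5 nodes with unit-length edges
chain = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.0)], 2: [(1, 1.0), (3, 1.0)],
         3: [(2, 1.0), (4, 1.0)], 4: [(3, 1.0)]}
selected = geodesic_neighborhood(chain, 0, 2.0)
```

On the chain, a radius of 2.0 selects nodes 0, 1 and 2 only; a volume sphere of the same radius could pick up anatomically distant voxels that happen to lie nearby in 3D space.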