Semantic memory relies on highly distributed neural machinery mediating both retrieval operations and perceptual knowledge. The current study leverages multivariate pattern analysis to decode retrieval states under varying task demands. Neural response patterns were recorded with fMRI while 24 participants performed real-world size and sound comparisons for various object pairs. A factorial design crossed visual and auditory retrieval modalities with easy and hard selection difficulty. Univariate contrasts demonstrated an effect of selection difficulty in left ventrolateral prefrontal cortex (vlPFC) and posterior perceptual areas. Auditory retrieval recruited left-lateralized association cortices, while visual retrieval recruited right-lateralized parietal and lateral occipitotemporal cortices. A linear SVM classifier decoded retrieval states across subjects from distributed whole-brain activity patterns with accuracies exceeding 80%. Significant cross-classification accuracies for retrieval modality and difficulty suggest both factors are encoded partially independently. Both whole-brain sensitivity analysis and searchlight classification were used to localize cortical contributions to task decoding. Several classification analyses were also performed within anatomically defined ROIs. Selection difficulty impacted response patterns in the precuneus and ventral temporal cortex independently of retrieval modality. Late-stage perceptual areas were modulated by difficulty in both modalities, whereas early sensory cortices were impacted primarily within their preferred modality. Overall, these results indicate that semantic retrieval states can be robustly decoded across participants. Findings also reveal that response patterns in vlPFC encode both modality and difficulty during retrieval, and that selection difficulty impacts processing in both early and late perceptual areas.
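The cross-subject decoding described above can be illustrated with a minimal sketch, assuming a hypothetical trial-by-voxel feature matrix, binary retrieval-state labels, and one group label per participant; the authors' actual preprocessing and feature selection are not reproduced here.

```python
# A minimal sketch of cross-subject decoding with a linear SVM and
# leave-one-subject-out cross-validation; all data below are placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((24 * 40, 500))   # hypothetical: 24 subjects x 40 trials x 500 voxels
y = np.tile([0, 1], 24 * 20)              # hypothetical retrieval-state labels
subjects = np.repeat(np.arange(24), 40)   # one group label per participant

# Train on 23 subjects, test on the held-out one, so accuracy reflects
# generalization across participants rather than within-subject structure.
scores = cross_val_score(LinearSVC(C=1.0), X, y,
                         groups=subjects, cv=LeaveOneGroupOut())
print(f"mean cross-subject accuracy: {scores.mean():.2f}")
```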
... tremendous amount of help with my research and academic career. I would like to thank lab members: Melissa Rundle, Sergey Fogelson, Amy Palmer, Stephanie Gagnon, Geethmala Sridaran, Carlton Frost and Samuel Lloyd. Lastly, I would like to extend my ...
Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex.
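The signal detection framing above can be made concrete with a small sketch, assuming hypothetical binary trial labels (1 = human voice) and classifier predictions: sensitivity is summarized as d′, the difference between z-transformed hit and false-alarm rates.

```python
# A minimal sketch of scoring pattern classification in signal detection
# terms: "voice" trials are treated as signal and d-prime is computed
# from the classifier's hits and false alarms. Labels below are made up.
import numpy as np
from scipy.stats import norm

def d_prime(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    hit_rate = np.mean(y_pred[y_true == 1] == 1)
    fa_rate = np.mean(y_pred[y_true == 0] == 1)
    # Clip rates away from 0 and 1 so the z-transform stays finite.
    hit_rate, fa_rate = np.clip([hit_rate, fa_rate], 0.01, 0.99)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1]))  # toy example
```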
The information contained in a sensory signal plays a critical role in determining what neural processes are engaged. Here we used interleaved silent steady-state (ISSS) functional magnetic resonance imaging (fMRI) to explore how human listeners cope with different degrees of acoustic richness during auditory sentence comprehension. Twenty-six healthy young adults underwent scanning while hearing sentences that varied in acoustic richness (high vs. low spectral detail) and syntactic complexity (subject-relative vs. object-relative center-embedded clause structures). We manipulated acoustic richness by presenting the stimuli as unprocessed full-spectrum speech, or noise-vocoded with 24 channels. Importantly, although the vocoded sentences were spectrally impoverished, all sentences were highly intelligible. These manipulations allowed us to test how intelligible speech processing was affected by orthogonal linguistic and acoustic demands. Acoustically rich speech showed stronger activation than acoustically less-detailed speech in a bilateral temporoparietal network with more pronounced activity in the right hemisphere. By contrast, listening to sentences with greater syntactic complexity resulted in increased activation of a left-lateralized network including left posterior lateral temporal cortex, left inferior frontal gyrus, and left dorsolateral prefrontal cortex. Significant interactions between acoustic richness and syntactic complexity occurred in left supramarginal gyrus, right superior temporal gyrus, and right inferior frontal gyrus, indicating that the regions recruited for syntactic challenge differed as a function of acoustic properties of the speech. Our findings suggest that the neural systems involved in speech perception are finely tuned to the type of information available, and that reducing the richness of the acoustic signal dramatically alters the brain's response to spoken language, even when intelligibility is high.
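The 2 x 2 design above implies three standard contrasts; here is a minimal sketch, assuming hypothetical per-voxel beta estimates in a fixed condition order (the actual GLM specification is not given in the abstract).

```python
# A minimal sketch of main-effect and interaction contrasts for the
# 2x2 factorial design (acoustic richness x syntactic complexity).
import numpy as np

# Assumed condition order: [rich/simple, rich/complex, vocoded/simple, vocoded/complex]
betas = np.array([2.1, 2.9, 1.8, 3.4])        # placeholder betas for one voxel

richness_main   = np.array([+1, +1, -1, -1])  # rich > vocoded
complexity_main = np.array([-1, +1, -1, +1])  # complex > simple
interaction     = np.array([+1, -1, -1, +1])  # richness x complexity

for name, c in [("richness", richness_main),
                ("complexity", complexity_main),
                ("interaction", interaction)]:
    print(name, c @ betas)
```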
Individual participants vary greatly in their ability to estimate and discriminate intervals of time. This heterogeneity of performance may be caused by reliance on different time perception networks as well as individual differences in the activation of brain structures utilized for timing within those networks. To address these possibilities, we used event-related functional magnetic resonance imaging (fMRI) while human participants (n=25) performed a temporal or color discrimination task. Additionally, based on our previous research, we genotyped participants for DRD2/ANKK1-Taq1a, a single-nucleotide polymorphism associated with a 30-40% reduction in striatal D2 density and with poorer timing performance. Similar to previous reports, a wide range of performance was found across our sample; crucially, better performance on the timing versus color task was associated with greater activation in prefrontal and subcortical regions previously associated with timing. Furt...
Melody recognition entails the encoding of pitch intervals between successive notes. While it has been shown that a whole melodic sequence is better encoded than the sum of its constituent intervals, the underlying reasons have remained opaque. Here, we compared listeners' accuracy in encoding the relative pitch distance between the two notes of an interval (for example, C, E) with their accuracy under the following three modifications: (1) doubling the duration of each note (C - E -), (2) repetition of each note (C, C, E, E), and (3) adding a preceding note (G, C, E). Repeating (2) or adding an extra note (3) improved encoding of relative pitch distance when the melodic sequences were transposed to other keys, but lengthening the duration (1) did not improve encoding relative to the standard two-note interval sequences. Crucially, encoding accuracy was higher with the four-note sequences than with the long two-note sequences even though sensory (pitch) information was held constant. We interpret the results to show that re-forming the Gestalts of two-note intervals into two-note "melodies" results in more accurate encoding of relational pitch information, owing to a richer structural context in which to embed the interval.
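The four conditions compared above can be written out in a minimal sketch, representing each note as a (MIDI pitch, duration) pair; the specific pitches and the transposition interval are assumptions for illustration.

```python
# A minimal sketch of the interval conditions: standard two-note interval,
# (1) doubled note durations, (2) repeated notes, (3) a preceding note.
C, E, G = 60, 64, 67                     # MIDI note numbers (assumed)

standard   = [(C, 1), (E, 1)]            # two-note interval
lengthened = [(C, 2), (E, 2)]            # (1) each note doubled in duration
repeated   = [(C, 1), (C, 1), (E, 1), (E, 1)]  # (2) each note repeated
preceded   = [(G, 1), (C, 1), (E, 1)]    # (3) extra preceding note

def transpose(seq, semitones):
    """Shift a sequence to another key, preserving relative pitch."""
    return [(pitch + semitones, dur) for pitch, dur in seq]

print(transpose(repeated, 3))  # same relative pitches, transposed up
```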
Although much effort has been directed toward understanding the neural basis of speech processing, the neural processes involved in the categorical perception of speech have been relatively less studied, and many questions remain open. In this functional magnetic resonance imaging (fMRI) study, we probed the cortical regions mediating categorical speech perception using an advanced brain-mapping technique, whole-brain multivariate pattern-based analysis (MVPA). Normal healthy human subjects (native English speakers) were scanned while they listened to 10 consonant-vowel syllables along the /ba/-/da/ continuum. Outside of the scanner, each individual's own category boundary was measured and used to divide that subject's fMRI data into /ba/ and /da/ conditions. The whole-brain MVPA revealed that Broca's area and the left pre-supplementary motor area evoked distinct neural activity patterns between the two perceptual categories (/ba/ vs /da/). Broca's area also emerged when the same analysis was applied to another dataset (Raizada and Poldrack, 2007), which had previously yielded the supramarginal gyrus under a univariate adaptation-fMRI paradigm. The consistent MVPA findings from two independent datasets strongly indicate that Broca's area participates in categorical speech perception, with a possible role in translating speech signals into articulatory codes. The difference in results between univariate and multivariate pattern-based analyses of the same data suggests that processes in different cortical areas along the dorsal speech perception stream are distributed on different spatial scales.
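The per-subject category boundaries mentioned above are conventionally taken as the 50% point of a psychometric function; here is a minimal sketch, assuming made-up identification proportions along the 10-step continuum and a logistic fit.

```python
# A minimal sketch of estimating a listener's /ba/-/da/ category boundary
# by fitting a logistic function to identification data (values made up).
import numpy as np
from scipy.optimize import curve_fit

steps = np.arange(1, 11)                    # continuum positions 1..10
p_da = np.array([.02, .03, .05, .10, .30,   # hypothetical proportion of
                 .65, .90, .95, .98, .99])  # /da/ responses per step

def logistic(x, x0, k):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, steps, p_da, p0=[5.5, 1.0])
print(f"category boundary at step {x0:.2f}")  # 50% point divides the trials
```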
Spatial smoothness is helpful when averaging fMRI signals across multiple subjects, as it allows different subjects' corresponding brain areas to be pooled together even if they are slightly misaligned. However, smoothing is usually not applied when performing multivoxel pattern-based analyses (MVPA), as it runs the risk of blurring away the information that fine-grained spatial patterns contain. It would therefore be desirable, if possible, to carry out pattern-based analyses which take unsmoothed data as their input but which produce smooth images as output. We show here that the Gaussian Naive Bayes (GNB) classifier does precisely this, when it is used in "searchlight" pattern-based analyses. We explain why this occurs, and illustrate the effect in real fMRI data. Moreover, we show that analyses using GNBs produce results at the multi-subject level which are statistically robust, neurally plausible, and which replicate across two independent data sets. By contrast, SVM classifiers applied to the same data do not generate a replication, even if the SVM-derived searchlight maps have smoothing applied to them. An additional advantage of GNB classifiers for searchlight analyses is that they are orders of magnitude faster to compute than more complex alternatives such as SVMs. Collectively, these results suggest that Gaussian Naive Bayes classifiers may be a highly non-naive choice for multi-subject pattern-based fMRI studies.
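A minimal sketch of the GNB searchlight idea follows, assuming toy unsmoothed voxel data and a cubic neighborhood in place of a true sphere; this is only the shape of the computation, not the authors' pipeline.

```python
# A minimal sketch of a searchlight analysis with a Gaussian Naive Bayes
# classifier: for each center voxel, classify from its local neighborhood
# and store the cross-validated accuracy in a brain-shaped map.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, dims = 60, (10, 10, 10)
data = rng.standard_normal((n_trials, *dims))  # toy unsmoothed voxel data
labels = np.tile([0, 1], n_trials // 2)        # toy condition labels

acc_map = np.zeros(dims)
r = 1  # "radius": a 3x3x3 cube around each center voxel (assumed)
for x in range(r, dims[0] - r):
    for y in range(r, dims[1] - r):
        for z in range(r, dims[2] - r):
            cube = data[:, x-r:x+r+1, y-r:y+r+1, z-r:z+r+1]
            X = cube.reshape(n_trials, -1)
            # GNB fits one diagonal Gaussian per class, which keeps the
            # computation fast and, as argued above, yields smooth maps.
            acc_map[x, y, z] = cross_val_score(GaussianNB(), X, labels, cv=5).mean()
print(acc_map[5, 5, 5])
```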
Music perception generally involves processing the frequency relationships between successive pitches and extraction of the melodic contour. Previous evidence has suggested that the 'ups' and 'downs' of melodic contour are categorically and automatically processed, but knowledge of the brain regions that discriminate different types of contour is limited. Here, we examined melodic contour discrimination using multivariate pattern analysis (MVPA) of fMRI data. Twelve non-musicians were presented with various ascending and descending melodic sequences while being scanned. Whole-brain MVPA was used to identify regions in which the local pattern of activity accurately discriminated between contour categories. We identified three distinct cortical loci: the right superior temporal sulcus (rSTS), the left inferior parietal lobule (lIPL), and the anterior cingulate cortex (ACC). These results complement previous findings of melodic processing within the rSTS, and extend our understanding of the way in which abstract auditory sequences are categorized by the human brain.
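The contour categories used above reduce to the sign of successive pitch steps; here is a minimal sketch of that labeling with made-up MIDI sequences (these labels would then serve as classification targets for the MVPA described).

```python
# A minimal sketch of labeling melodic sequences as ascending or
# descending from the sign of their pitch steps (sequences made up).
import numpy as np

def contour_label(pitches):
    steps = np.diff(pitches)
    if np.all(steps > 0):
        return "ascending"
    if np.all(steps < 0):
        return "descending"
    return "mixed"

print(contour_label([60, 62, 65, 69]))  # ascending
print(contour_label([69, 65, 62, 60]))  # descending
```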