Control processes allow us to constrain the retrieval of semantic information from long-term memory so that it is appropriate for the task or context. Control demands are influenced by the strength of the target information itself and by the circumstances in which it is retrieved, with more control needed when relatively weak aspects of knowledge are required and after the sustained retrieval of related concepts. To investigate the neurocognitive basis of individual differences in these aspects of semantic control, we used resting-state fMRI to characterise the intrinsic connectivity of left ventrolateral prefrontal cortex (VLPFC), implicated in controlled retrieval, and examined associations on a paced serial semantic task, in which participants were asked to detect category members amongst distractors. This task manipulated both the strength of target associations and the requirement to sustain retrieval within a narrow semantic category over time. We found that individuals with stronger connectivity between VLPFC and medial prefrontal cortex within the default mode network (DMN) showed better retrieval of strong associations (which are thought to be recalled more automatically). Stronger connectivity between the same VLPFC seed and another DMN region in medial parietal cortex was associated with larger declines in retrieval over the course of the category. In contrast, participants with stronger connectivity between VLPFC and cognitive control regions within the ventral attention network (VAN) had better controlled retrieval of weak associations and were better able to sustain their comprehension throughout the category. These effects overlapped in left insular cortex within the VAN, indicating that a common pattern of connectivity is associated with different aspects of controlled semantic retrieval induced by both the structure of long-term knowledge and the sustained retrieval of related information.
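At its core, the seed-based intrinsic connectivity analysis described here reduces to correlating a seed region's resting-state time series with the time series of other regions, typically Fisher z-transformed before group statistics. A minimal illustrative sketch, not the authors' actual pipeline; the function and variable names are hypothetical:

```python
import numpy as np

def seed_connectivity(seed_ts, roi_ts):
    """Pearson correlation between a seed time series (n_timepoints,)
    and each column of roi_ts (n_timepoints, n_rois), Fisher
    r-to-z transformed. Illustrative sketch only."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    rois = (roi_ts - roi_ts.mean(axis=0)) / roi_ts.std(axis=0)
    r = (seed[:, None] * rois).mean(axis=0)  # Pearson r per ROI
    return np.arctanh(r)                     # Fisher z
```

Individual differences in these z values (e.g., VLPFC-to-DMN connectivity) can then be correlated with behavioural measures across participants.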
Often, as we read, we find ourselves thinking about something other than the text; this tendency to mind-wander is linked to poor comprehension and reduced subsequent memory for texts. Contemporary accounts argue that periods of off-task thought are related to the tendency for attention to be decoupled from external input. We used fMRI to understand the neural processes that underpin this phenomenon. First, we found that individuals with poorer text-based memory tend to show reduced recruitment of left middle temporal gyrus in response to orthographic input, within a region located at the intersection of default mode, dorsal attention and frontoparietal networks. Voxels within these networks were taken as seeds in a subsequent resting-state study. The default mode network region (i) had greater connectivity with medial prefrontal cortex, falling within the same network, for individuals with better text-based memory, and (ii) was more decoupled from medial visual regions in participa...
Although the default mode network (DMN) is associated with off-task states, recent evidence shows it can support tasks. This raises the question of how DMN activity can be both beneficial and detrimental to task performance. The decoupling hypothesis proposes that these opposing states occur because DMN supports modes of cognition driven by external input, as well as retrieval states unrelated to input. To test this account, we capitalised on the fact that during reading, regions in DMN are thought to represent the meaning of words through their coupling with visual cortex; the absence of visual coupling should occur when attention drifts away from the text. We examined individual differences in reading comprehension and off-task thought while participants read an expository text in the laboratory, and related variation in these measures to (i) the neural response during reading in the scanner (Experiment 1), and (ii) patterns of intrinsic connectivity measured in the absence of ...
Differing patterns of verbal short-term memory (STM) impairment have provided unique insights into the relationship between STM and broader language function. Lexicality effects (i.e., better recall for words than nonwords) are larger in patients with phonological deficits following left temporoparietal lesions, and smaller in patients with semantic impairment and anterior temporal damage, supporting linguistic accounts of STM. However, interpretation of these patient dissociations is complicated by (i) non-focal damage and (ii) confounding factors and secondary impairments. This study addressed these issues by examining the impact of inhibitory transcranial magnetic stimulation (TMS) on auditory-verbal STM performance in healthy individuals. We compared the effects of TMS to left anterior supramarginal gyrus (SMG) and left anterior middle temporal gyrus (ATL) on STM for lists of nonwords and random words. SMG stimulation disrupted nonword recall, in a pattern analogous to that obs...
Distinct neural processes are thought to support the retrieval of semantic information that is (i) coherent with strongly-encoded aspects of knowledge, and (ii) non-dominant yet relevant for the current task or context. While the brain regions that support coherent and controlled patterns of semantic retrieval are relatively well-characterised, the temporal dynamics of these processes are not well-understood. This study used magnetoencephalography (MEG) and dual-pulse chronometric transcranial magnetic stimulation (cTMS) in two separate experiments to examine temporal dynamics within the temporal lobe during the retrieval of strong and weak associations. MEG results revealed a dissociation within left temporal cortex: anterior temporal lobe (ATL) showed greater oscillatory response for strong than weak associations, while posterior middle temporal gyrus (pMTG) showed the reverse pattern. In the cTMS experiment, stimulation of ATL at ~150ms disrupted the efficient retrieval of strong...
Our ability to hold a sequence of speech sounds in mind, in the correct configuration, supports many aspects of communication, but the contribution of conceptual information to this basic phonological capacity remains controversial. Previous research has shown modest and inconsistent benefits of meaning on phonological stability in short-term memory, but these studies were based on sets of unrelated words. Using a novel design, we examined the immediate recall of sentence-like sequences with coherent meaning, alongside both standard word lists and mixed lists containing words and nonwords. We found, and replicated, substantial effects of coherent meaning on phoneme-level accuracy: The phonemes of both words and nonwords within conceptually coherent sequences were more likely to be produced together and in the correct order. Since nonwords do not exist as items in long-term memory, the semantic enhancement of phoneme-level recall for both item types cannot be explained by a lexically based item reconstruction process employed at the point of retrieval (“redintegration”). Instead, our data show, for naturalistic input, that when meaning emerges from the combination of words, the phonological traces that support language are reinforced by a semantic-binding process that has been largely overlooked by past short-term memory research.
Verbal short-term memory (STM) is a crucial cognitive function central to language learning, comprehension and reasoning, yet the processes that underlie this capacity are not fully understood. In particular, although STM primarily draws on a phonological code, interactions between long-term phonological and semantic representations might help to stabilise the phonological trace for words (“semantic binding hypothesis”). This idea was first proposed to explain the frequent phoneme recombination errors made by patients with semantic dementia when recalling words that are no longer fully understood. However, converging evidence in support of semantic binding is scant: it is unusual for studies of healthy participants to examine serial recall at the phoneme level, and it is also difficult to separate the contribution of phonological-lexical knowledge from effects of word meaning. We used a new method to disentangle these influences in healthy individuals by training new ‘words’ with or without associated semantic information. We examined phonological coherence in immediate serial recall (ISR), both immediately and the day after training. Trained items were more likely to be recalled than novel nonwords, confirming the importance of phonological-lexical knowledge, and items with semantic associations were also produced more accurately than those with no meaning, at both time points. For semantically-trained items, there were fewer phoneme ordering and identity errors, and consequently more complete target items were produced in both correct and incorrect list positions. These data show that lexical-semantic knowledge improves the robustness of verbal STM at the sub-item level, even when the effect of phonological familiarity is taken into account.
In three immediate serial recall (ISR) experiments we tested the hypothesis that interactive processing between semantics and phonology supports phonological coherence in verbal short-term memory (STM). Participants categorised spoken words in six-item lists as they were presented, according to their semantic or phonological properties, then repeated the items in presentation order (Experiment 1). Despite matched categorisation performance between conditions, semantically-categorised words were correctly recalled more often than phonologically-categorised words. This accuracy advantage in the semantic condition was accompanied by fewer phoneme recombination errors. Comparisons with a no-categorisation ISR baseline (Experiment 2) indicated that, although categorisations were disruptive overall, recombination errors were specifically rarer following semantic categorisation. Experiment 3 replicated the key findings from Experiment 1 and also revealed fewer phonologically-related errors following semantic categorisation compared to a perceptual categorisation of high or low pitch. Therefore, augmented activation of semantic representations stabilises the phonological traces of words within verbal short-term memory, in line with the “semantic binding” hypothesis.
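The phoneme-level scoring used across these serial recall studies can be illustrated with a small sketch: score each phoneme against its target position, and flag response items whose phonemes were all present in the list but recombined into a non-list item. The scoring scheme and names below are hypothetical, for illustration only:

```python
def score_recall(target, response):
    """Phoneme-level ISR scoring. Items are tuples of phonemes.
    Returns (proportion of phonemes correct in position,
    number of phoneme-recombination errors). Hypothetical scheme."""
    correct_in_position = sum(
        t == r
        for item_t, item_r in zip(target, response)
        for t, r in zip(item_t, item_r))
    total = sum(len(item) for item in target)
    # Recombination error: a response item absent from the target list
    # whose phonemes all appear somewhere in the target list.
    pool = {p for item in target for p in item}
    recombinations = sum(
        1 for item in response
        if item not in target and all(p in pool for p in item))
    return correct_in_position / total, recombinations
```

For example, recalling "cag, dot" for the list "cat, dog" yields four of six phonemes in position and two recombination errors.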
Research has shown that transcranial direct current stimulation (tDCS) over left temporoparietal cortex – a region implicated in phonological processing – aids new word learning. The locus of this effect remains unclear since (i) experiments have not empirically separated the acquisition of phonological forms from lexical-semantic links and (ii) outcome measures have focused on learnt associations with a referent rather than phonological stability. We tested the hypothesis that left temporoparietal tDCS would strengthen the acquisition of phonological forms, even in the absence of the opportunity to acquire lexical-semantic associations. Participants were familiarised with nonwords paired with (i) photographs of concrete referents or (ii) blurred images where no clear features were visible. Nonword familiarisation proceeded under conditions of anodal tDCS and sham stimulation in different sessions. We examined the impact of these manipulations on the stability of the phonological trace in an immediate serial recall (ISR) task the following day, ensuring that any effects were due to the influence of tDCS on long-term learning and not a direct consequence of short-term changes in neural excitability. We found that only a few exposures to the phonological forms of nonwords were sufficient to enhance nonword ISR overall compared to entirely novel items. Anodal tDCS during familiarisation further enhanced the acquisition of phonological forms, producing a specific reduction in the frequency of phoneme migrations when sequences of nonwords were maintained in verbal short-term memory. More of the phonemes that were recalled were bound together as a whole correct nonword following tDCS.
These data show that tDCS to left temporoparietal cortex can facilitate word learning by strengthening the acquisition of long-term phonological forms, irrespective of the availability of a concrete referent, and that the consequences of this learning can be seen beyond the learning task as strengthened phonological coherence in verbal short-term memory.
Grapheme-to-phoneme mapping regularity is thought to determine the grain size of orthographic information extracted whilst encoding letter strings. Here we tested whether learning to read in two languages differing in their orthographic transparency yields different strategies for encoding letter strings compared to learning to read in one (opaque) language only. Sixteen English monolingual and sixteen early Welsh-English bilingual readers undergoing event-related brain potential (ERP) recordings were asked to report whether or not a target letter displayed at fixation was present in either a nonword (consonant string) or an English word presented immediately before. In word and nonword probe trials, behavioural performance was overall unaffected by target letter position in the probe, suggesting similar orthographic encoding in the two groups. By contrast, the amplitudes of ERPs locked to the target letters (P3b, 340-570 ms post target onset, and a late frontal positive component, 600-1000 ms post target onset) were differentially modulated by the position of the target letter in words and nonwords between bilinguals and monolinguals. P3b results show that bilinguals, who learnt to read simultaneously in an opaque and a transparent orthography, encoded orthographic information presented to the right of fixation more poorly than monolinguals. In contrast, only monolinguals exhibited a position effect on the late positive component for both words and nonwords, interpreted as a sign of better re-evaluation of their responses. The present study sheds light on how orthographic transparency constrains the grain size and visual strategies underlying letter-string encoding, and how those constraints are influenced by bilingualism.
Whilst there is general consensus that phonological processing is deficient in developmental dyslexia, recent research also implicates visuo-attentional contributions. Capitalising on the P3a wave of event-related potentials as an index of attentional capture, we tested dyslexic and normal readers on a novel variant of a visual oddball task to examine the interplay of orthographic-phonological integration and attentional engagement. Targets were animal words (10% occurrence). Amongst nontarget stimuli were two critical conditions: pseudohomophones of targets (10%) and control pseudohomophones (of fillers; 10%). Pseudohomophones of targets (but not control pseudohomophones) elicited a large P3 wave in normal readers only, revealing a lack of attentional engagement with these phonologically salient stimuli in dyslexic participants. Critically, both groups showed similar early phonological discrimination as indexed by posterior P2 modulations. Furthermore, phonological engagement, as indexed by P3a differences between pseudohomophone conditions, correlated with several measures of reading. Meanwhile, an analogous experiment using coloured shapes instead of orthographic stimuli failed to show group differences between experimental modulations in the P2 or P3 ranges. Overall, our results show that, whilst automatic aspects of phonological processing appear intact in developmental dyslexia, the breakdown in pseudoword reading occurs at a later stage, when attention is oriented to orthographic-phonological information.
Event-related potential (ERP) studies of word recognition have provided fundamental insights into the time-course and stages of visual and auditory word form processing in reading. Here, we used ERPs to track the time-course of phonological processing in dyslexic adults and matched controls. Participants engaged in semantic judgments of visually presented high-cloze probability sentences ending either with (a) their best completion word, (b) a homophone of the best completion, (c) a pseudohomophone of the best completion, or (d) an unrelated word, to examine the interplay of phonological and orthographic processing in reading and the stage(s) of processing affected in developmental dyslexia. Early ERP peaks (N1, P2, N2) were modulated in amplitude similarly in the two groups of participants. However, dyslexic readers failed to show the P3a modulation seen in control participants for unexpected homophones and pseudohomophones (i.e., sentence completions that are acceptable phonologically but are misspelt). Furthermore, P3a amplitudes significantly correlated with reaction times in each experimental condition. Our results showed no sign of a deficit in accessing phonological representations during reading, since sentence primes yielded phonological priming effects that did not differ between participant groups in the early phases of processing. On the other hand, we report new evidence for a deficient attentional engagement with orthographically unexpected but phonologically expected words in dyslexia, irrespective of task focus on orthography or phonology. In our view, this result is consistent with deficiency in reading occurring from the point at which attention is oriented to phonological analysis, which may underlie broader difficulties in sublexical decoding.
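Quantifying ERP components such as the P3a typically comes down to averaging voltage within a latency window time-locked to stimulus onset. A minimal sketch of this step; the window bounds and names are illustrative, not taken from the study:

```python
import numpy as np

def mean_amplitude(erp, times, window):
    """Mean ERP amplitude within a latency window.
    erp: 1-D voltage array; times: matching latencies in seconds;
    window: (low, high) bounds, inclusive. Illustrative only."""
    lo, hi = window
    mask = (times >= lo) & (times <= hi)
    return erp[mask].mean()
```

Per-condition mean amplitudes computed this way can then be entered into group comparisons or correlated with behavioural measures such as reaction times.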
Behavioral studies with proficient late bilinguals have revealed the existence of orthographic neighborhood density effects across languages when participants read either in their first (L1) or second (L2) language. Words with many cross-language neighbors have been found to elicit more negative event-related potentials (ERPs) than words with few cross-language neighbors (Midgley et al., 2008); the effect started earlier, and was larger, for L2 words. Here, 14 late and 14 early English-Welsh bilinguals performed a semantic categorization task on English and Welsh words presented in separate blocks. The pattern of cross-language activation was different for the two groups of bilinguals. In late bilinguals, words with high cross-language neighborhood density elicited more negative ERP amplitudes than words with low cross-language neighborhood density starting around 175 ms after word onset and lasting until 500 ms. This effect interacted with language in the 300-500 ms time window. A more complex pattern of early effects was revealed in early bilinguals and there were no effects in the N400 window. These results suggest that cross-language activation of orthographic neighbors is highly sensitive to the bilinguals’ learning experience of the two languages.
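Orthographic neighborhood density (Coltheart's N) counts the words in a lexicon that differ from a given word by exactly one letter in the same position. A toy sketch, using a hypothetical mini-lexicon; within- and cross-language density are computed the same way, just against different lexicons:

```python
def neighborhood_density(word, lexicon):
    """Coltheart's N: number of same-length lexicon entries that
    differ from `word` by exactly one letter. Toy sketch."""
    return sum(
        len(w) == len(word)
        and sum(a != b for a, b in zip(w, word)) == 1
        for w in lexicon)
```

For instance, against the lexicon {"cat", "cot", "bat", "dog", "cart"}, the word "cat" has two neighbors ("cot", "bat") and "dog" has none.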
Deteriorated phonological representations are widely assumed to be the underlying cause of reading difficulties in developmental dyslexia; however, existing evidence also implicates degraded orthographic processing. Here, we used event-related potentials whilst dyslexic and control adults performed a pseudoword-word priming task requiring deep phonological analysis to examine phonological and orthographic priming, respectively. Pseudowords were manipulated to be homophonic or non-homophonic to a target word and more or less orthographically similar. Since previous ERP research with normal readers has established phonologically driven differences as early as 250 ms from word presentation, degraded phonological representations were expected to reveal reduced phonological priming in dyslexic readers from 250 ms after target word onset. However, phonological priming main effects in both the N2 and P3 ranges were indistinguishable in amplitude between groups. Critically, we found group differences in the N1 range, such that orthographic modulations observed in controls were absent in the dyslexic group. Furthermore, early group differences in phonological priming emerged as interactions with orthographic priming (in the P2, N2 and P3 ranges). A group difference in phonological priming did not emerge until the P600 range, in which the dyslexic group showed significantly attenuated priming. As the P600 is classically associated with online monitoring and reanalysis, this pattern of results suggests that during deliberate phonological processing, the phonological deficit in reading may relate more to inefficient monitoring than to deficient detection. Meanwhile, early differences in perceptual processing of phonological information may be driven by the strength of engagement with orthographic information.
Whether humans spontaneously sound out words in their mind during silent reading is a matter of debate. Some models of reading postulate that skilled readers access the meaning directly from print but others involve print-to-sound transcoding mechanisms. Here, we provide evidence that silent reading activates the sound form of words before accessing their meaning by comparing event-related potentials induced by highly expected words and their homophones. We found that expected words and words that sound the same but have a different orthography (homophones and pseudohomophones) reduce scalp activity to the same extent within 300 ms of presentation compared with unexpected words. This shows that phonological access during silent reading, which is critical for literacy acquisition, remains active in adulthood.
We investigated the lateralization of the posterior event-related potential (ERP) component N1 (120–170 ms) to written words in two groups of bilinguals. Fourteen early English–Welsh bilinguals and 14 late learners of Welsh performed a semantic categorization task on separate blocks of English and Welsh words. In both groups, the N1 was strongly lateralized over the left posterior sites for both languages. A robust correlation was found between N1 asymmetry for English and N1 asymmetry for Welsh words in both groups. Furthermore, in late bilinguals, the N1 asymmetry for Welsh words increased with years of experience in Welsh. These data suggest that, in late bilinguals, the lateralization of neural circuits involved in written word recognition for the second language is associated with the organization for the first language, and that increased experience with the second language is associated with a larger functional cerebral asymmetry in favor of the left hemisphere.
The human face expresses emotion asymmetrically. Whereas the left cheek is more emotionally expressive, the right cheek appears more impassive, hence the appropriate cheek to put forward depends on the circumstance. Nicholls, Clode, Wood, and Wood (1999, Proceedings of the Royal Society (Section B), 266, 1517-1522) demonstrated that people posing for family portraits offer the left cheek, whereas those posing as a Royal Society scientist favour the right. Given that the stereotypical representations of members of different academic disciplines differ markedly in their perceived openness and emotionality (e.g., “serious” scientist vs. “creative” writer), we reasoned that people may use cheek as a cue when determining a model's area of academic interest. Two hundred and nine participants (M=90, F=119) viewed pairs of left and right cheek poses, and made a forced-choice decision indicating which image depicted a Chemistry, Psychology or English student. Half the images were mirror-reversed to control for perceptual and aesthetic biases. Consistent with prediction, participants were more likely to select left cheek images for English students, and right cheek images for Chemistry students, irrespective of image orientation. The results confirm that determining the best cheek to put forward depends on your academic expertise: an impassive right cheek suggests hard science, whereas an emotive left cheek implies the arts. Psychology produced no left or right bias, consistent with its position as a discipline perpetually straddling the boundary between art and science.