Nicola Savill
York St John University, Psychological and Social Sciences, Faculty Member
- Phonology, Semantic Memory, Neuromodulation, Psychology, Cognitive Neuroscience, Psychophysiology, Orthography, Dyslexia, ERP (Cognitive Psychology), Reading (Psychology), Short-term memory (auditory-verbal), Neuroscience, Cognitive Science, Cognitive Psychology, Psycholinguistics, and Language Acquisition
- My research centres on cognitive aspects of reading and verbal short-term memory, both in 'normal' language function and in language-impaired populations. I'm interested in various topics related to how and when our linguistic and semantic representations influence ongoing language and cognitive processing (particularly phonological function) and when attentional processes do or do not have an important role. I use a range of experimental psycholinguistic and cognitive neuroscientific methods (e.g., ERPs, tDCS, TMS, fMRI) in my research.
Control processes allow us to constrain the retrieval of semantic information from long-term memory so that it is appropriate for the task or context. Control demands are influenced by the strength of the target information itself and by the circumstances in which it is retrieved, with more control needed when relatively weak aspects of knowledge are required and after the sustained retrieval of related concepts. To investigate the neurocognitive basis of individual differences in these aspects of semantic control, we used resting-state fMRI to characterise the intrinsic connectivity of left ventrolateral prefrontal cortex (VLPFC), implicated in controlled retrieval, and examined associations on a paced serial semantic task, in which participants were asked to detect category members amongst distractors. This task manipulated both the strength of target associations and the requirement to sustain retrieval within a narrow semantic category over time. We found that individuals with stronger connectivity between VLPFC and medial prefrontal cortex within the default mode network (DMN) showed better retrieval of strong associations (which are thought to be recalled more automatically). Stronger connectivity between the same VLPFC seed and another DMN region in medial parietal cortex was associated with larger declines in retrieval over the course of the category. In contrast, participants with stronger connectivity between VLPFC and cognitive control regions within the ventral attention network (VAN) had better controlled retrieval of weak associations and were better able to sustain their comprehension throughout the category. These effects overlapped in left insular cortex within the VAN, indicating that a common pattern of connectivity is associated with different aspects of controlled semantic retrieval induced by both the structure of long-term knowledge and the sustained retrieval of related information.
Deteriorated phonological representations are widely assumed to be the underlying cause of reading difficulties in developmental dyslexia; however, existing evidence also implicates degraded orthographic processing. Here, we used event-related potentials whilst dyslexic and control adults performed a pseudoword-word priming task requiring deep phonological analysis, in order to examine phonological and orthographic priming. Pseudowords were manipulated to be homophonic or non-homophonic to a target word and more or less orthographically similar. Since previous ERP research with normal readers has established phonologically driven differences as early as 250 ms from word presentation, degraded phonological representations were expected to reveal reduced phonological priming in dyslexic readers from 250 ms after target word onset. However, phonological priming main effects in both the N2 and P3 ranges were indistinguishable in amplitude between groups. Critically, we found group di...
Often, as we read, we find ourselves thinking about something other than the text; this tendency to mind-wander is linked to poor comprehension and reduced subsequent memory for texts. Contemporary accounts argue that periods of off-task thought are related to the tendency for attention to be decoupled from external input. We used fMRI to understand the neural processes that underpin this phenomenon. First, we found that individuals with poorer text-based memory tend to show reduced recruitment of left middle temporal gyrus in response to orthographic input, within a region located at the intersection of default mode, dorsal attention and frontoparietal networks. Voxels within these networks were taken as seeds in a subsequent resting-state study. The default mode network region (i) had greater connectivity with medial prefrontal cortex, falling within the same network, for individuals with better text-based memory, and (ii) was more decoupled from medial visual regions in participa...
Although the default mode network (DMN) is associated with off-task states, recent evidence shows it can support tasks. This raises the question of how DMN activity can be both beneficial and detrimental to task performance. The decoupling hypothesis proposes that these opposing states occur because the DMN supports modes of cognition driven by external input, as well as retrieval states unrelated to input. To test this account, we capitalised on the fact that during reading, regions in the DMN are thought to represent the meaning of words through their coupling with visual cortex; the absence of visual coupling should occur when attention drifts away from the text. We examined individual differences in reading comprehension and off-task thought while participants read an expository text in the laboratory, and related variation in these measures to (i) the neural response during reading in the scanner (Experiment 1), and (ii) patterns of intrinsic connectivity measured in the absence of ...
Differing patterns of verbal short-term memory (STM) impairment have provided unique insights into the relationship between STM and broader language function. Lexicality effects (i.e., better recall for words than nonwords) are larger in patients with phonological deficits following left temporoparietal lesions, and smaller in patients with semantic impairment and anterior temporal damage, supporting linguistic accounts of STM. However, interpretation of these patient dissociations is complicated by (i) non-focal damage and (ii) confounding factors and secondary impairments. This study addressed these issues by examining the impact of inhibitory transcranial magnetic stimulation (TMS) on auditory-verbal STM performance in healthy individuals. We compared the effects of TMS to left anterior supramarginal gyrus (SMG) and left anterior middle temporal gyrus (ATL) on STM for lists of nonwords and random words. SMG stimulation disrupted nonword recall, in a pattern analogous to that obs...
Distinct neural processes are thought to support the retrieval of semantic information that is (i) coherent with strongly-encoded aspects of knowledge, and (ii) non-dominant yet relevant for the current task or context. While the brain regions that support coherent and controlled patterns of semantic retrieval are relatively well-characterised, the temporal dynamics of these processes are not well-understood. This study used magnetoencephalography (MEG) and dual-pulse chronometric transcranial magnetic stimulation (cTMS) in two separate experiments to examine temporal dynamics within the temporal lobe during the retrieval of strong and weak associations. MEG results revealed a dissociation within left temporal cortex: anterior temporal lobe (ATL) showed greater oscillatory response for strong than weak associations, while posterior middle temporal gyrus (pMTG) showed the reverse pattern. In the cTMS experiment, stimulation of ATL at ~150ms disrupted the efficient retrieval of strong...
Our ability to hold a sequence of speech sounds in mind, in the correct configuration, supports many aspects of communication, but the contribution of conceptual information to this basic phonological capacity remains controversial. Previous research has shown modest and inconsistent benefits of meaning on phonological stability in short-term memory, but these studies were based on sets of unrelated words. Using a novel design, we examined the immediate recall of sentence-like sequences with coherent meaning, alongside both standard word lists and mixed lists containing words and nonwords. We found, and replicated, substantial effects of coherent meaning on phoneme-level accuracy: The phonemes of both words and nonwords within conceptually coherent sequences were more likely to be produced together and in the correct order. Since nonwords do not exist as items in long-term memory, the semantic enhancement of phoneme-level recall for both item types cannot be explained by a lexically based item reconstruction process employed at the point of retrieval (“redintegration”). Instead, our data show, for naturalistic input, that when meaning emerges from the combination of words, the phonological traces that support language are reinforced by a semantic-binding process that has been largely overlooked by past short-term memory research.
Verbal short-term memory (STM) is a crucial cognitive function central to language learning, comprehension and reasoning, yet the processes that underlie this capacity are not fully understood. In particular, although STM primarily draws on a phonological code, interactions between long-term phonological and semantic representations might help to stabilise the phonological trace for words (“semantic binding hypothesis”). This idea was first proposed to explain the frequent phoneme recombination errors made by patients with semantic dementia when recalling words that are no longer fully understood. However, converging evidence in support of semantic binding is scant: it is unusual for studies of healthy participants to examine serial recall at the phoneme level and also it is difficult to separate the contribution of phonological-lexical knowledge from effects of word meaning. We used a new method to disentangle these influences in healthy individuals by training new ‘words’ with or without associated semantic information. We examined phonological coherence in immediate serial recall (ISR), both immediately and the day after training. Trained items were more likely to be recalled than novel nonwords, confirming the importance of phonological-lexical knowledge, and items with semantic associations were also produced more accurately than those with no meaning, at both time points. For semantically-trained items, there were fewer phoneme ordering and identity errors, and consequently more complete target items were produced in both correct and incorrect list positions. These data show that lexical-semantic knowledge improves the robustness of verbal STM at the sub-item level, even when the effect of phonological familiarity is taken into account.
Grapheme-to-phoneme mapping regularity is thought to determine the grain size of orthographic information extracted whilst encoding letter strings. Here we tested whether learning to read in two languages differing in orthographic transparency yields different letter-string encoding strategies compared with learning to read in one (opaque) language only. Sixteen English monolingual and 16 early Welsh-English bilingual readers undergoing event-related brain potential (ERP) recordings were asked to report whether or not a target letter displayed at fixation was present in either a nonword (consonant string) or an English word presented immediately before. In word and nonword probe trials, behavioural performance was overall unaffected by target letter position in the probe, suggesting similar orthographic encoding in the two groups. By contrast, the amplitudes of ERPs locked to the target letters (P3b, 340-570 ms post target onset, and a late frontal positive component, 600-1000 ms post target onset) were modulated differently by the position of the target letter in words and nonwords between bilinguals and monolinguals. P3b results show that bilinguals who learnt to read simultaneously in opaque and transparent orthographies encoded orthographic information presented to the right of fixation more poorly than monolinguals. In contrast, only monolinguals exhibited a position effect on the late positive component for both words and nonwords, interpreted as a sign of better re-evaluation of their responses. The present study sheds light on how orthographic transparency constrains the grain size and visual strategies underlying letter-string encoding, and how those constraints are influenced by bilingualism.
Whilst there is general consensus that phonological processing is deficient in developmental dyslexia, recent research also implicates visuo-attentional contributions. Capitalising on the P3a wave of event-related potentials as an index of attentional capture, we tested dyslexic and normal readers on a novel variant of a visual oddball task to examine the interplay of orthographic-phonological integration and attentional engagement. Targets were animal words (10% occurrence). Amongst nontarget stimuli were two critical conditions: pseudohomophones of targets (10%) and control pseudohomophones (of fillers; 10%). Pseudohomophones of targets (but not control pseudohomophones) elicited a large P3 wave in normal readers only, revealing a lack of attentional engagement with these phonologically salient stimuli in dyslexic participants. Critically, both groups showed similar early phonological discrimination as indexed by posterior P2 modulations. Furthermore, phonological engagement, as indexed by P3a differences between pseudohomophone conditions, correlated with several measures of reading. Meanwhile, an analogous experiment using coloured shapes instead of orthographic stimuli failed to show group differences between experimental modulations in the P2 or P3 ranges. Overall, our results show that, whilst automatic aspects of phonological processing appear intact in developmental dyslexia, the breakdown in pseudoword reading occurs at a later stage, when attention is oriented to orthographic-phonological information.
Behavioral studies with proficient late bilinguals have revealed the existence of orthographic neighborhood density effects across languages when participants read either in their first (L1) or second (L2) language. Words with many cross-language neighbors have been found to elicit more negative event-related potentials (ERPs) than words with few cross-language neighbors (Midgley et al., 2008); the effect started earlier, and was larger, for L2 words. Here, 14 late and 14 early English-Welsh bilinguals performed a semantic categorization task on English and Welsh words presented in separate blocks. The pattern of cross-language activation was different for the two groups of bilinguals. In late bilinguals, words with high cross-language neighborhood density elicited more negative ERP amplitudes than words with low cross-language neighborhood density starting around 175 ms after word onset and lasting until 500 ms. This effect interacted with language in the 300-500 ms time window. A more complex pattern of early effects was revealed in early bilinguals and there were no effects in the N400 window. These results suggest that cross-language activation of orthographic neighbors is highly sensitive to the bilinguals’ learning experience of the two languages.
Whether humans spontaneously sound out words in their mind during silent reading is a matter of debate. Some models of reading postulate that skilled readers access the meaning directly from print but others involve print-to-sound transcoding mechanisms. Here, we provide evidence that silent reading activates the sound form of words before accessing their meaning by comparing event-related potentials induced by highly expected words and their homophones. We found that expected words and words that sound the same but have a different orthography (homophones and pseudohomophones) reduce scalp activity to the same extent within 300 ms of presentation compared with unexpected words. This shows that phonological access during silent reading, which is critical for literacy acquisition, remains active in adulthood.