The Laboratory of NeuroDynamics of Human Communication (NeDComm Lab) is an international research group at Aarhus University (AU), Denmark. We are part of the multi-disciplinary Center of Functionally Integrative Neuroscience (CFIN) at AU's Institute for Clinical Medicine (Health Faculty), and are physically located at Aarhus University Hospital.
Language is a uniquely human neurocognitive function that plays a defining role in our lives. The importance of language as our main communication tool cannot be overstated; its deficits are devastating for sufferers and their families, and costly for society. Developmental language deficits are estimated to affect ~10% of pre-school children; in adults, these are complemented by acquired deficits, such as aphasia. Linguistic deficits also accompany a range of conditions, such as autism and schizophrenia. In spite of the obvious importance of language and the high cost of its deficits, it remains one of the least understood neurocognitive functions. One reason for this is that, as a communication system, human language is unparalleled in its complexity and has no suitable animal models.

In our work, we explore the dynamic processes of storage and access of linguistic representations in the brain. We combine neuroscientific, linguistic, behavioural and clinical approaches, and our team includes scientists from diverse cross-disciplinary backgrounds. State-of-the-art neuroimaging techniques are combined with psycholinguistic and neuropsychological experimentation to address a range of questions related to the language function, from phonological processes to lexical semantics, syntax and pragmatics.
Coordinator: Yury Shtyrov
How are words, the basic building blocks of language, represented and accessed in the human brain? Cognitive accounts of linguistic processes range from parallel to cascaded to sequential models of information access, and neurobiological attempts at delineating them diverge even more. Fast processing of all incoming information is vital for survival in a highly dynamic environment; from this perspective, the widely assumed first activation of word representations at 350-400ms is evolutionarily and neurobiologically untenable. Indeed, our previous studies using EEG/MEG, backed up by fMRI and behavioural research, demonstrated parallel access to various types of linguistic information well before 200ms, and sometimes as early as ~100ms after the information becomes available (Shtyrov & Pulvermüller, J Psychophys 2007). However, our theoretical calculations show that existing neuroanatomical connections allow the speech signal to activate the entire neurolinguistic network much earlier, within 30-60ms, raising the question of whether even the 100-200ms activity is itself secondary. Indeed, using strictly controlled stimulus sets and experimental tasks in MEG, we have been able to uncover the earliest stages of lexical processing (~50ms), involving bilateral temporo-frontal networks, and their relationship to subsequent processing steps (100-200ms, 350-450ms), fundamentally challenging current thinking (MacGregor et al, Nature Comms 2012). We are now extending this work from purely lexical to semantic and contextual processes. A number of further experiments are under way, aimed at detailing the temporal dynamics of neural lexico-semantic processes, their structural substrates and their dependence on psycholinguistic variables, with cross-validation of results in different languages.
Do all aspects of language processing require our active attention? Or can some of them take place irrespective of attention allocation and be, in this sense, automatic? By comparing brain responses under different task demands, we are teasing apart involuntary automatic and controlled stages in language comprehension. We have found that, even under attentional withdrawal, the size and topography of neurophysiological responses reflect the activation of memory traces for language elements and the dynamic interactions between these representations. Our studies show that the language function does possess a certain automaticity, which applies to different types of information: from phonological to lexico-semantic to syntactic (Shtyrov, Mental Lexicon 2010). This can be explained by the robustness of strongly connected neurolinguistic circuits, which can activate fully even when attentional resources are low (Shtyrov et al, J Cogn Neurosci 2010). At the same time, this automaticity is limited to the very first stages of processing (<200ms after the relevant information arrives at the input), and later processing steps are affected by attention modulation. These later steps, possibly reflecting more in-depth secondary processing or the re-analysis and repair of incoming speech, are therefore dependent on the amount of resources allocated to language. Full processing of spoken language is thus impossible without allocating attentional resources to it; this allocation may itself be triggered by the early automatic stages. A range of current studies investigates this intrinsic interplay between automatic and controlled stages in the processing of different types of linguistic information, in both auditory and visual modalities. The most recent strand of this research looks at neuropragmatics, i.e. the neural processing of language in social contexts.
Are words real mental objects, or are they merely sequences of morphemes held together by rules similar to those of syntax? We have established distinct neurophysiological patterns reflecting syntactic and lexical processes in the brain: an enhanced automatic response for neural access to a meaningful lexical entry (e.g. Shtyrov et al, Neuroimage 2005) vs. enhanced activity for ungrammatical stimuli when parsing a syntactic structure (e.g. Shtyrov et al, J Cogn Neurosci 2003; Pulvermuller et al, Brain and Lang 2008). This allows us to make predictions for neurobiological experiments addressing the nature of morphosyntactic processing, which is still debated in (psycho)linguistics. For example, we have shown that particle verbs form unified supra-lexical representations in the brain rather than engaging syntactic parsing mechanisms (Cappelle et al, Brain & Lang 2010). This work is now being extended to investigate, using different imaging methods and target languages:
The human capacity to quickly learn new words, critical for our ability to communicate using language, is well known from behavioural studies and observations. Yet the neural bases of this vital skill of fast word acquisition (so-called ‘fast mapping’) are not yet understood. Using neurophysiology, we have been able to show that the human cortex is capable of a rapid build-up of novel memory traces within minutes of exposure to novel word stimuli (Shtyrov et al, J Neuroscience 2010), and that this rapid learning capacity is specific to spoken sounds (Shtyrov, Front in Psychol 2011). This research is being extended to include:
Patients who are unable to cooperate with behavioural tests and follow instructions (aphasic or demented individuals, language-impaired children, etc.) present a challenge for clinicians assessing their linguistic abilities; clearly, objective tools for assessing the neural language function without relying on overt behaviour are needed.
Using passive unattended speech presentation in EEG, MEG and fMRI, we have revealed automatic brain responses specific to different types of information (lexical, semantic, syntactic) that are independent of attention to the stimuli and do not require any overt behavioural responses from the subjects. This approach has substantial potential as a means of assessing neurocognitive functions objectively and non-invasively, using e.g. patient-friendly MEG and/or the inexpensive EEG equipment available at most hospitals. We are working on improving these methodologies to optimise them as patient-friendly, time-efficient protocols; these experiments include selecting sets of acoustically and psycholinguistically matched stimuli with robust and relevant linguistic contrasts, fine-tuning presentation modes to minimise testing time, and streamlining analysis approaches.
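As a purely illustrative sketch of the core analysis step behind such passive-presentation protocols (not the lab's actual pipeline), the snippet below averages simulated single-trial EEG epochs time-locked to word onset and reads off the peak latency of the resulting evoked response. All parameters (sampling rate, trial count, response latency, noise level) are hypothetical values invented for the example.

```python
import math
import random

# Hypothetical recording parameters (for illustration only)
FS = 1000          # sampling rate in Hz -> one sample per millisecond
N_SAMPLES = 500    # epoch length: 0-500 ms after word onset
N_TRIALS = 60      # passive presentations of the same spoken word

def simulated_epoch(peak_ms=120, rng=random):
    """One noisy single-trial epoch with an evoked component near peak_ms."""
    return [math.exp(-((t - peak_ms) ** 2) / (2 * 15 ** 2))  # evoked bump
            + rng.gauss(0, 0.5)                              # trial noise
            for t in range(N_SAMPLES)]

def average_epochs(epochs):
    """Point-by-point average across trials: the evoked-response estimate."""
    n = len(epochs)
    return [sum(trial[t] for trial in epochs) / n for t in range(N_SAMPLES)]

def peak_latency_ms(evoked):
    """Latency (in ms) of the largest deflection in the averaged response."""
    peak_index = max(range(len(evoked)), key=lambda t: evoked[t])
    return peak_index * 1000 // FS

rng = random.Random(42)  # fixed seed so the simulation is reproducible
epochs = [simulated_epoch(rng=rng) for _ in range(N_TRIALS)]
evoked = average_epochs(epochs)
latency = peak_latency_ms(evoked)  # expected near the simulated 120 ms
```

Averaging across trials suppresses the single-trial noise (roughly by the square root of the trial count), which is why robust linguistic contrasts and sufficient repetitions matter for keeping such recording sessions short.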
For example, we have recently suggested a paradigm which can assess, in a short (20-minute), task-free, patient-friendly recording session, a range of neural processes, from basic auditory discrimination to attention allocation to lexical memory-trace activation (Shtyrov et al, Neuropsychologia 2012). Our further work shows the sensitivity of this approach to tracing the decay of word representations in ageing and dementia, evident as marked changes in response dynamics from as early as 50ms and in the structural redistribution of responses. Further studies are focused on aphasia, neurodegenerative diseases and schizophrenia, with specific predictions of functional impairments within lexical circuits.