Poorer auditory sensitivity is related to stronger visual enhancement of the human auditory mismatch negativity (MMNm)

2019
Author(s):
Cecilie Møller
Andreas Højlund
Klaus B. Bærentsen
Niels Chr. Hansen
Joshua C. Skewes
...  

Abstract Multisensory processing facilitates perception of our everyday environment and becomes particularly important when sensory information is degraded or close to the discrimination threshold. Here, we used magnetoencephalography and an audiovisual oddball paradigm to assess the complementary role of visual information in subtle pitch discrimination at the neural level in participants with varying levels of pitch discrimination abilities, i.e., musicians and nonmusicians. The amplitude of the auditory mismatch negativity (MMNm) served as an index of sensitivity. The gain in amplitude resulting from compatible audiovisual information was larger in participants whose MMNm amplitude was smaller in the condition deviating only in the auditory dimension, in accordance with the multisensory principle of inverse effectiveness. These findings show that discrimination of even a sensory-specific feature such as pitch is facilitated by multisensory information at a pre-attentive level, and they highlight the importance of considering inter-individual differences in unisensory abilities when assessing multisensory processing.
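
The principle of inverse effectiveness invoked above predicts that the weaker a participant's unisensory response, the larger the gain from a compatible multisensory cue. A minimal sketch of that logic, using invented per-participant amplitudes (all values are illustrative, not data from the study):

```python
def pearson_r(xs, ys):
    """Plain Pearson correlation, no external dependencies."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-participant MMNm amplitudes (absolute values, arbitrary
# units): auditory-only deviants vs. compatible audiovisual deviants.
mmn_auditory = [1.2, 0.8, 2.1, 0.5, 1.6, 0.9]
mmn_audiovisual = [1.6, 1.5, 2.2, 1.4, 1.8, 1.7]

# Gain afforded by the visual cue, per participant
gain = [av - a for a, av in zip(mmn_auditory, mmn_audiovisual)]

# Inverse effectiveness predicts a negative correlation: the weaker the
# auditory-only response, the larger the audiovisual gain.
r = pearson_r(mmn_auditory, gain)
print(round(r, 2))  # strongly negative with these values
```

With these toy numbers the correlation is strongly negative, mirroring the pattern the abstract reports across musicians and nonmusicians.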

2020
Vol 30 (8)
pp. 4410-4423
Author(s):
You Li
Carol Seger
Qi Chen
Lei Mo

Abstract Humans are able to categorize things they encounter in the world (e.g., a cat) by integrating multisensory information from the auditory and visual modalities with ease and speed. However, how the brain learns multisensory categories remains elusive. The present study used functional magnetic resonance imaging to investigate, for the first time, the neural mechanisms underpinning multisensory information-integration (II) category learning. A sensory-modality-general network, including the left insula, right inferior frontal gyrus (IFG), supplementary motor area, left precentral gyrus, bilateral parietal cortex, and right caudate and globus pallidus, was recruited for II categorization, regardless of whether the information came from a single modality or from multiple modalities. Putamen activity was higher in correct categorization than incorrect categorization. Critically, the left IFG and left body and tail of the caudate were activated in multisensory II categorization but not in unisensory II categorization, which suggests this network plays a specific role in integrating multisensory information during category learning. The present results extend our understanding of the role of the left IFG in multisensory processing from the linguistic domain to a broader role in audiovisual learning.


2011
Vol 105 (2)
pp. 846-859
Author(s):
Lore Thaler
Melvyn A. Goodale

Studies that have investigated how sensory feedback about the moving hand is used to control hand movements have relied on paradigms such as pointing or reaching that require subjects to acquire target locations. In the context of these target-directed tasks, it has been found repeatedly that the human sensory-motor system relies heavily on visual feedback to control the ongoing movement. This finding has been formalized within the framework of statistical optimality, according to which different sources of sensory feedback are combined so as to minimize variance in sensory information during movement control. Importantly, however, many hand movements that people perform every day are not target-directed, but based on allocentric (object-centered) visual information. Examples of allocentric movements are gesture imitation, drawing, or copying. Here we tested whether visual feedback about the moving hand is used in the same way to control target-directed and allocentric hand movements. The results show that visual feedback is used significantly more to reduce movement scatter in the target-directed than in the allocentric movement task. Furthermore, we found that differences in the use of visual feedback between target-directed and allocentric hand movements cannot be explained by differences in uncertainty about the movement goal. We conclude that the role played by visual feedback in movement control is fundamentally different for target-directed and allocentric movements. The results suggest that current computational and neural models of sensorimotor control, which are based entirely on data derived from target-directed paradigms, have to be modified to accommodate performance in the allocentric tasks used in our experiments.
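
The "statistical optimality" framework referenced in this abstract is the standard minimum-variance (maximum-likelihood) cue-combination rule, under which each feedback source is weighted by its reliability (inverse variance). A minimal sketch with hypothetical visual and proprioceptive values (the numbers are illustrative, not from the study):

```python
def optimal_combination(x_vis, var_vis, x_prop, var_prop):
    """Minimum-variance (maximum-likelihood) combination of two cues:
    each cue is weighted in proportion to its reliability (inverse variance)."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
    estimate = w_vis * x_vis + (1.0 - w_vis) * x_prop
    combined_var = 1.0 / (1.0 / var_vis + 1.0 / var_prop)
    return estimate, combined_var

# Hypothetical hand-position estimates (cm): vision is the more reliable
# cue here, so the combined estimate sits closer to the visual one.
est, var = optimal_combination(x_vis=10.0, var_vis=1.0, x_prop=12.0, var_prop=4.0)
print(round(est, 2), round(var, 2))  # 10.4 0.8
```

Note that the combined variance (0.8) is lower than that of either cue alone, which is exactly why a variance-minimizing controller would lean on visual feedback in target-directed tasks.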


2019
Author(s):
Michael J. Crosse
John J. Foxe
Sophie Molholm

Abstract Children with autism spectrum disorder (ASD) are often impaired in their ability to cope with and process multisensory information, which may contribute to some of the social and communicative deficits prevalent in this population. Amelioration of such deficits in adolescence has been observed for ecologically relevant stimuli such as speech. However, it is not yet known whether this recovery generalizes to the processing of nonsocial stimuli such as the more basic beeps and flashes typically used in cognitive neuroscience research. We hypothesize that engagement of different neural processes and lack of environmental exposure to such artificial stimuli lead to protracted developmental trajectories in both neurotypical (NT) individuals and individuals with ASD, thus delaying the age at which we observe this “catch up”. Here, we test this hypothesis using a bisensory detection task, measuring human response times to randomly presented auditory, visual, and audiovisual stimuli. By measuring the behavioral gain afforded by an audiovisual signal, we show that the multisensory deficit previously reported in children with ASD recovers in adulthood by the mid-twenties. In addition, we examine the effects of switching between sensory modalities and show that teenagers with ASD incur less of a behavioral cost than their NT peers. Computational modelling reveals that multisensory information interacts according to different rules in children and adults, and that sensory evidence is weighted differently as well. In ASD, the weighting of sensory information and the allocation of attention during multisensory processing differ from those of NT individuals. Based on our findings, we propose a theoretical framework of multisensory development in NT and ASD individuals.
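
The behavioral gain from a redundant audiovisual signal is commonly benchmarked against Miller's race-model inequality, P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t): violations of the bound indicate genuine multisensory integration rather than mere statistical facilitation. A minimal sketch with hypothetical response times (the paper's actual modelling is more elaborate than this):

```python
def ecdf(rts, t):
    """Empirical cumulative probability of a response by time t."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_violation(rt_a, rt_v, rt_av, t):
    """Positive values mean the audiovisual responses are faster than the
    race model's upper bound permits (Miller's inequality)."""
    bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
    return ecdf(rt_av, t) - bound

# Hypothetical response times (ms) for one participant
rt_a = [320, 350, 400, 430, 500]
rt_v = [340, 380, 410, 460, 520]
rt_av = [260, 280, 300, 330, 360]

violation = race_model_violation(rt_a, rt_v, rt_av, t=300)
print(violation)  # 0.6
```

At t = 300 ms neither unisensory condition has produced any responses, yet 60% of audiovisual responses have already occurred, so the bound is violated.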


2021
Vol 15
Author(s):
Yan H. Yu
Valerie L. Shafer

Many studies have observed modulation of the amplitude of the mismatch negativity (MMN), a neural index of auditory change detection, depending on which member of a phoneme contrast [phoneme A, phoneme B] serves as the frequent (standard) and which serves as the infrequent (deviant) stimulus (i.e., AAAB vs. BBBA) in an oddball paradigm. Explanations for this amplitude modulation range from acoustic to linguistic factors. We tested whether exchanging the roles of the mid vowel /ε/ and high vowel /ɪ/ of English modulated MMN amplitude and whether the pattern of modulation was compatible with an underspecification account, in which the underspecified height values are [−high] and [−low]. MMN was larger for /ε/ as the deviant, but only when compared across conditions to itself as the standard. For the within-condition comparison, MMN was larger to the /ɪ/ deviant minus the /ε/ standard than to the reverse. A condition order effect was also observed: MMN amplitude was smaller to the deviant stimulus if it had previously served as the standard. In addition, the amplitude of the late discriminative negativity (LDN) showed a similar asymmetry; LDN was larger for deviant /ε/ than deviant /ɪ/ when each was compared to itself as the standard. These findings were compatible with an underspecification account, but also with other accounts, such as the Natural Referent Vowel model and a prototype model; we also suggest that non-linguistic factors need to be carefully considered as additional sources of speech processing asymmetries.
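
The MMN amplitudes compared in such designs are derived from difference waves: the ERP elicited by a sound as deviant minus the ERP elicited by the standard (within a condition, or by the same vowel serving as standard in the other condition for the "identity" comparison). A toy sketch with hypothetical waveforms (sample values and the 50-ms sampling grid are invented for illustration):

```python
def difference_wave(deviant_erp, standard_erp):
    """MMN difference wave: deviant-elicited ERP minus standard-elicited ERP."""
    return [d - s for d, s in zip(deviant_erp, standard_erp)]

def mean_amplitude(wave, window):
    """Mean amplitude (µV) over a latency window given as sample indices;
    the MMN appears as a negative deflection in the difference wave."""
    lo, hi = window
    seg = wave[lo:hi]
    return sum(seg) / len(seg)

# Hypothetical ERPs (µV), one value per 50-ms sample from stimulus onset
standard = [0.0, 0.2, 0.5, 0.4, 0.1, 0.0]
deviant = [0.0, 0.1, -1.0, -1.6, -0.4, 0.0]

diff = difference_wave(deviant, standard)
amp = mean_amplitude(diff, window=(2, 5))  # roughly the 100-250 ms window
print(round(amp, 2))  # -1.33
```

A "larger MMN" in the abstract's sense means a more negative mean amplitude over the analysis window.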


2019
Author(s):
Laura Rachman
Stéphanie Dubal
Jean-Julien Aucouturier

Abstract In social interactions, people have to pay attention to both the what and the who. In particular, expressive changes heard in speech signals have to be integrated with speaker identity, differentiating e.g. self- and other-produced signals. While previous research has shown that processing of self-related visual information is facilitated compared to non-self stimuli, evidence in the auditory modality remains mixed. Here, we compared electroencephalography (EEG) responses to expressive changes in sequences of self- or other-produced speech sounds, using a mismatch negativity (MMN) passive oddball paradigm. Critically, to control for speaker differences, we used programmable acoustic transformations to create voice deviants that differed from standards in exactly the same manner, making EEG responses to such deviations comparable between sequences. Our results indicate that expressive changes on a stranger’s voice are highly prioritized in auditory processing compared to identical changes on the self-voice: other-voice deviants generate earlier MMN onset responses and involve stronger cortical activations in a left motor and somatosensory network, suggestive of an increased recruitment of resources for less internally predictable, and therefore perhaps more socially relevant, signals.


2007
Vol 21 (3-4)
pp. 147-163
Author(s):
István Winkler

The widely accepted “memory-mismatch” interpretation of the mismatch negativity (MMN) event-related brain potential (ERP) suggests that an MMN is elicited when an acoustic event deviates from a memory record describing the immediate history of the sound sequence. The first variant of the memory-mismatch theory suggested that the memory underlying MMN generation was a strong auditory sensory memory trace, which encoded the repetitive standard sound. This “trace-mismatch” explanation of MMN has been primarily based on results obtained in the auditory oddball paradigm. However, in recent years, MMN has been observed in stimulus paradigms containing no frequently repeating sound. We now suggest a different variant of the memory-mismatch interpretation of MMN in order to provide a unified explanation of all MMN phenomena. The regularity-violation explanation of MMN assumes that the memory records retaining the history of auditory stimulation are regularity representations. These representations encode rules extracted from the regular intersound relationships, which are mapped to the concrete sound sequence by finely detailed auditory sensory information. Auditory events are compared with temporally aligned predictions drawn from the regularity representations (predictive models) and the observable MMN response reflects a process updating the representations of those detected regularities whose prediction was mismatched by the acoustic input. It is further suggested that the auditory deviance detection system serves to organize sound in the brain: The predictive models maintained by the MMN-generating process provide the basis of temporal grouping, a crucial step in the formation of auditory objects.
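
The regularity-violation account can be caricatured as a predictive model that extracts the prevailing regularity, compares each incoming sound against its prediction, and updates its representation when the prediction fails. A deliberately simple toy (repetition regularity only; the class name, memory span, and modal-prediction rule are invented for illustration, whereas real regularity representations encode far richer intersound rules):

```python
class RegularityModel:
    """Toy regularity representation: predicts the next tone from the
    most frequent recent tone and updates when its prediction fails."""

    def __init__(self, memory_span=5):
        self.memory = []
        self.memory_span = memory_span

    def predict(self):
        if not self.memory:
            return None
        # Predict the modal (most frequent) recent tone
        return max(set(self.memory), key=self.memory.count)

    def process(self, tone):
        prediction = self.predict()
        mismatch = prediction is not None and tone != prediction
        self.memory.append(tone)               # update the regularity record
        self.memory = self.memory[-self.memory_span:]
        return mismatch                        # True ~ an MMN would be elicited

model = RegularityModel()
sequence = ["A", "A", "A", "A", "B", "A"]
responses = [model.process(t) for t in sequence]
print(responses)  # mismatch is flagged only at the deviant "B"
```

The point of the abstract is that the real system tracks abstract rules and temporally aligned predictions, not merely the repeating standard; this sketch only captures the simplest oddball case.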


eLife
2019
Vol 8
Author(s):
Hame Park
Christoph Kayser

Perception adapts to mismatching multisensory information, both when different cues appear simultaneously and when they appear sequentially. While both multisensory integration and adaptive trial-by-trial recalibration are central for behavior, it remains unknown whether they are mechanistically linked and arise from a common neural substrate. To relate the neural underpinnings of sensory integration and recalibration, we measured whole-brain magnetoencephalography while human participants performed an audio-visual ventriloquist task. Using single-trial multivariate analysis, we localized the perceptually-relevant encoding of multisensory information within and between trials. While we found neural signatures of multisensory integration within temporal and parietal regions, only medial superior parietal activity encoded past and current sensory information and mediated the perceptual recalibration within and between trials. These results highlight a common neural substrate of sensory integration and perceptual recalibration, and reveal a role of medial parietal regions in linking present and previous multisensory evidence to guide adaptive behavior.
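
Trial-by-trial recalibration of the kind measured in the ventriloquist task is often modelled as an incremental shift of the auditory spatial estimate toward the preceding audiovisual discrepancy. A minimal sketch with a hypothetical learning rate and geometry (illustrative only, not the authors' model):

```python
def recalibrate(bias, audio_loc, visual_loc, rate=0.1):
    """Shift the auditory bias by a fraction of the remaining audiovisual
    discrepancy on the current trial (ventriloquist aftereffect)."""
    perceived_audio = audio_loc + bias
    return bias + rate * (visual_loc - perceived_audio)

# Hypothetical trials: the visual source sits 10 degrees right of the sound
bias = 0.0
for _ in range(20):
    bias = recalibrate(bias, audio_loc=0.0, visual_loc=10.0)
print(round(bias, 2))  # drifts toward, but never fully reaches, the offset
```

Because each update closes only a fraction of the remaining discrepancy, the bias converges geometrically toward the visual offset, capturing the between-trial adaptation the study localizes to medial superior parietal cortex.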


2019
Author(s):
Alexia Bourgeois
Carole Guedj
Emmanuel Carrera
Patrik Vuilleumier

Selective attention is a fundamental cognitive function that guides behavior by selecting and prioritizing salient or relevant sensory information in our environment. Despite early evidence and theoretical proposals pointing to an implication of thalamic control in attention, most studies in the past two decades have focused on cortical substrates, largely ignoring the contribution of subcortical regions as well as cortico-subcortical interactions. Here, we suggest a key role for the pulvinar in the selection of salient and relevant information via its involvement in the computation of priority maps. Prioritization may be achieved through a pulvinar-mediated generation of alpha oscillations, which may then modulate neuronal gain in thalamo-cortical circuits. Such a mechanism might orchestrate the synchrony of cortico-cortical interactions by rendering neural communication more effective, precise, and selective. We propose that this theoretical framework will support a timely shift from the prevailing cortico-centric view of cognition to a more integrative perspective of thalamic contributions to attention and executive control processes.

