The co-occurrence of multisensory facilitation and cross-modal conflict in the human brain

2011 · Vol 106 (6) · pp. 2896–2909
Author(s): Andreea Oliviana Diaconescu, Claude Alain, Anthony Randal McIntosh

Perceptual objects often comprise visual and auditory signatures that arrive simultaneously through distinct sensory channels, and cross-modal features are linked by virtue of being attributed to a specific object. Continued exposure to cross-modal events sets up expectations about what a given object most likely "sounds" like, and vice versa, thereby facilitating object detection and recognition. The binding of familiar auditory and visual signatures is referred to as semantic multisensory integration. Whereas integration of semantically related cross-modal features is behaviorally advantageous, sensory dominance of one modality at the expense of another impairs performance. In the present study, magnetoencephalography recordings during presentation of semantically related cross-modal and unimodal stimuli captured the spatiotemporal patterns underlying multisensory processing at multiple stages. At early stages, 100 ms after stimulus onset, posterior parietal brain regions responded preferentially to cross-modal stimuli, irrespective of task instructions or the degree of semantic relatedness between the auditory and visual components. When participants were required to classify cross-modal stimuli into semantic categories, activity in superior temporal and posterior cingulate cortices increased between 200 and 400 ms. When task instructions changed to incorporate cross-modal conflict, a process whereby the auditory and visual components of cross-modal stimuli are compared to estimate their degree of congruence, multisensory processes were captured in parahippocampal, dorsomedial, and orbitofrontal cortices between 100 and 400 ms after stimulus onset. Our results suggest that multisensory facilitation is associated with posterior parietal activity as early as 100 ms after stimulus onset. However, when participants are required to evaluate cross-modal stimuli by semantic category or degree of congruence, multisensory processes extend into cingulate, temporal, and prefrontal cortices.

2013 · Vol 4 (3)
Author(s): Evangelos Paraskevopoulos, Sibylle Herholz

Abstract There is a strong interaction between multisensory processing and the neuroplasticity of the human brain. On the one hand, recent research demonstrates that experience and training in various domains modify how information from the different senses is integrated; on the other hand, multisensory training paradigms seem to be particularly effective in driving functional and structural plasticity. Multisensory training affects early sensory processing within separate sensory domains, as well as the functional and structural connectivity between uni- and multisensory brain regions. In this review, we discuss the evidence for interactions between multisensory processes and brain plasticity and give an outlook on promising clinical applications and open questions.


1990 · Vol 3 (2) · pp. 109–115
Author(s): Guido Gainotti

In recent years several papers have shown that different verbal and non-verbal semantic categories can be selectively disrupted by brain damage and that consistent anatomical localizations correspond to each category-specific semantic disorder. This paper suggests that the brain regions typically damaged in a given type of category-specific semantic disorder may be critically involved in processing the kind of information that mainly contributes to organizing that semantic category and to distinguishing among its members. This general hypothesis is discussed taking into account: (a) comprehension and production of object names (nouns) and of action names (verbs) in agrammatic and anomic aphasic patients; (b) verbal and non-verbal identification of body parts; and (c) verbal and non-verbal identification of living beings and of man-made artefacts.


2019 · Vol 121 (5) · pp. 1588–1590
Author(s): Luca Casartelli

Neural, oscillatory, and computational counterparts of multisensory processing remain a crucial challenge for neuroscientists. Converging evidence points to an efficient balance between stability and flexibility in sensory sampling, supporting the general idea that multiple parallel, hierarchically organized processing stages in the brain contribute to our understanding of the (sensory/perceptual) world. Intriguingly, how temporal dynamics impact and modulate multisensory processes in the brain can be investigated by drawing on studies of perceptual illusions.


2021 · pp. 1–12
Author(s): Anna Borgolte, Ahmad Bransi, Johanna Seifert, Sermin Toto, Gregor R. Szycik, et al.

Abstract Synaesthesia is a multimodal phenomenon in which the activation of one sensory modality leads to an involuntary additional experience in another sensory modality. To date, normal multisensory processing has hardly been investigated in synaesthetes. In the present study we examine processes of audiovisual separation in synaesthesia by using a simultaneity judgement task. Subjects were asked to indicate whether an acoustic and a visual stimulus occurred simultaneously or not. Stimulus onset asynchronies (SOAs) as well as the temporal order of the stimuli were systematically varied. Our results demonstrate that synaesthetes are better at separating auditory and visual events than control subjects, but only when vision leads.
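The simultaneity judgement task described above is typically summarized by the proportion of "simultaneous" responses at each SOA, separately for audio-leading and vision-leading orders. The sketch below is an illustrative reconstruction of that tabulation step, not the authors' analysis code; all trial data and names are made up:

```python
# Hedged sketch: summarizing a simultaneity judgement (SJ) task.
# Convention assumed here: negative SOA = audio leads, positive = vision
# leads. A lower proportion of "simultaneous" judgements at a given SOA
# indicates better audiovisual separation at that asynchrony.

def proportion_simultaneous(trials):
    """trials: list of (soa_ms, judged_simultaneous) tuples.
    Returns {soa_ms: proportion of 'simultaneous' judgements}."""
    counts, hits = {}, {}
    for soa, judged in trials:
        counts[soa] = counts.get(soa, 0) + 1
        hits[soa] = hits.get(soa, 0) + (1 if judged else 0)
    return {soa: hits[soa] / counts[soa] for soa in counts}

# Illustrative trials: at 0 ms the pair is almost always judged
# simultaneous; at a large vision-leading SOA (+300 ms) it rarely is.
trials = [(0, True)] * 9 + [(0, False)] + [(300, True)] * 2 + [(300, False)] * 8
print(proportion_simultaneous(trials))  # {0: 0.9, 300: 0.2}
```

Comparing these proportions between synaesthetes and controls at matched vision-leading SOAs is one way the group difference reported above could be expressed.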


2002 · Vol 88 (1) · pp. 540–543
Author(s): John J. Foxe, Glenn R. Wylie, Antigona Martinez, Charles E. Schroeder, Daniel C. Javitt, et al.

Using high-field (3 Tesla) functional magnetic resonance imaging (fMRI), we demonstrate that auditory and somatosensory inputs converge in a subregion of human auditory cortex along the superior temporal gyrus. Further, simultaneous stimulation in both sensory modalities resulted in activity exceeding that predicted by summing the responses to the unisensory inputs, thereby showing multisensory integration in this convergence region. Recently, intracranial recordings in macaque monkeys have shown similar auditory-somatosensory convergence in a subregion of auditory cortex directly caudomedial to primary auditory cortex (area CM). The multisensory region identified in the present investigation may be the human homologue of CM. Our finding of auditory-somatosensory convergence in early auditory cortices contributes to mounting evidence for multisensory integration early in the cortical processing hierarchy, in brain regions that were previously assumed to be unisensory.
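The criterion used above, activity to bimodal stimulation exceeding the sum of the unisensory responses, is the classic superadditivity test for multisensory integration. A minimal sketch of that comparison follows; the function name and the sample values are illustrative, not taken from the study:

```python
# Hedged sketch of the superadditive criterion: a region is flagged as
# integrative when its response to combined auditory + somatosensory
# stimulation exceeds the sum of its responses to each modality alone.

def is_superadditive(resp_auditory, resp_somatosensory, resp_bimodal):
    """Return True if the bimodal response exceeds the unisensory sum."""
    return resp_bimodal > resp_auditory + resp_somatosensory

# Example with hypothetical percent-signal-change values for one region.
print(is_superadditive(0.4, 0.3, 0.9))  # 0.9 > 0.7 -> True
print(is_superadditive(0.4, 0.3, 0.6))  # 0.6 <= 0.7 -> False
```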


2019
Author(s): Lore Goetschalckx, Johan Wagemans

This is a preprint. Please find the published, peer-reviewed version of the paper here: https://peerj.com/articles/8169/. Images differ in their memorability in consistent ways across observers. What makes an image memorable is not fully understood to date. Most current insight is in terms of high-level semantic aspects related to content. However, research still shows consistent differences within semantic categories, suggesting a role for factors at other levels of processing in the visual hierarchy. To aid investigations into this role, as well as contributions to the understanding of image memorability more generally, we present MemCat. MemCat is a category-based image set consisting of 10K images representing five broader, memorability-relevant categories (animal, food, landscape, sports, and vehicle), further divided into subcategories (e.g., bear). The images were sampled from existing source image sets that offer bounding box annotations or more detailed segmentation masks. We collected memorability scores for all 10K images, each based on the responses of, on average, 99 participants in a repeat-detection memory task. Replicating previous research, the collected memorability scores show high consistency across observers. Currently, MemCat is the second largest memorability image set and the largest offering a category-based structure. MemCat can be used to study the factors underlying variability in image memorability, including variability within semantic categories. In addition, it offers a new benchmark dataset for the automatic prediction of memorability scores (e.g., with convolutional neural networks). Finally, MemCat makes it possible to study neural and behavioral correlates of memorability while controlling for semantic category.
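Cross-observer consistency of memorability scores, as mentioned above, is commonly quantified with a split-half correlation: participants are split into two random halves, each half yields one hit rate per image, and the two score vectors are correlated. The sketch below illustrates that procedure on simulated data; it is not the MemCat analysis code, and all names and values are made up:

```python
# Hedged sketch: split-half consistency of memorability scores.
import random

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def split_half_consistency(responses_by_image, seed=0):
    """responses_by_image: {image_id: list of 0/1 hits across participants}.
    Shuffles each image's responses, splits them in half, and correlates
    the per-image hit rates of the two halves."""
    rng = random.Random(seed)
    half_a, half_b = [], []
    for responses in responses_by_image.values():
        shuffled = responses[:]
        rng.shuffle(shuffled)
        mid = len(shuffled) // 2
        a, b = shuffled[:mid], shuffled[mid:]
        half_a.append(sum(a) / len(a))
        half_b.append(sum(b) / len(b))
    return pearson(half_a, half_b)

# Simulated data: image k is "hit" by 4*k of 40 observers (k = 0..10),
# so true memorability varies systematically across images and the
# split-half correlation should come out high.
data = {f"img{k}": [1] * (4 * k) + [0] * (40 - 4 * k) for k in range(11)}
r = split_half_consistency(data)
```

A high value of `r` here plays the role of the "high consistency across observers" reported for the real dataset.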


2020 · Vol 30 (7) · pp. 4076–4091
Author(s): Ryu Ohata, Tomohisa Asai, Hiroshi Kadota, Hiroaki Shigemasu, Kenji Ogawa, et al.

Abstract The sense of agency is defined as the subjective experience that "I" am the one who is causing the action. Theoretical studies postulate that this subjective experience develops through multistep processes extending from the sensorimotor to the cognitive level. However, it remains unclear how the brain processes such different levels of information and constitutes the neural substrates for the sense of agency. To answer this question, we combined two strategies: an experimental paradigm in which self-agency gradually evolves according to sensorimotor experience, and multivoxel pattern analysis. The combined strategies revealed that the sensorimotor, posterior parietal, anterior insula, and higher visual cortices contained information on self-other attribution during movement. In addition, we investigated whether the identified regions showed a preference for self-other attribution or for sensorimotor information. The right supramarginal gyrus, a portion of the inferior parietal lobe (IPL), was the most sensitive to self-other attribution among these regions, whereas the bilateral precentral gyri and left IPL dominantly reflected sensorimotor information. Our results demonstrate that multiple brain regions are involved in the development of the sense of agency and that they show specific preferences for different levels of information.


2019 · Vol 116 (32) · pp. 16056–16061
Author(s): Elie Rassi, Andreas Wutz, Nadia Müller-Voggel, Nathan Weisz

Ongoing fluctuations in neural excitability and in networkwide activity patterns before stimulus onset have been proposed to underlie variability in near-threshold stimulus detection paradigms—that is, whether or not an object is perceived. Here, we investigated the impact of prestimulus neural fluctuations on the content of perception—that is, whether one or another object is perceived. We recorded neural activity with magnetoencephalography (MEG) before and while participants briefly viewed an ambiguous image, the Rubin face/vase illusion, and asked them to report their perceived interpretation on each trial. Using multivariate pattern analysis, we showed robust decoding of the perceptual report during the poststimulus period. Applying source localization to the classifier weights suggested early recruitment of primary visual cortex (V1) and recruitment of the category-sensitive fusiform face area (FFA) at ∼160 ms. These poststimulus effects were accompanied by stronger oscillatory power in the gamma frequency band for face vs. vase reports. In prestimulus intervals, we found no differences in oscillatory power between face vs. vase reports in V1 or in FFA, indicating similar levels of neural excitability. Despite this, we found stronger connectivity between V1 and FFA before face reports for low-frequency oscillations. Specifically, the strength of prestimulus feedback connectivity (i.e., Granger causality) from FFA to V1 predicted not only the category of the upcoming percept but also the strength of poststimulus neural activity associated with the percept. Our work shows that prestimulus network states can help shape future processing in category-sensitive brain regions and in this way bias the content of visual experiences.


2017 · Vol 29 (2) · pp. 368–381
Author(s): Jordan E. Pierce, Jennifer E. McDowell

Cognitive control is engaged to facilitate stimulus–response mappings for novel, complex tasks and to supervise performance in unfamiliar, challenging contexts—processes supported by pFC, ACC, and posterior parietal cortex. With repeated task practice, however, the appropriate task set can be selected in a more automatic fashion, with less need for top–down cognitive control and weaker activation in these brain regions. One model system for investigating cognitive control is the ocular motor circuitry underlying saccade production, with basic prosaccade trials (look toward a stimulus) and complex antisaccade trials (look to the mirror image location) representing low and high levels of cognitive control, respectively. Previous studies have shown behavioral improvements on saccade tasks after practice, with contradictory results regarding the direction of functional MRI BOLD signal change. The current study presented healthy young adults with prosaccade and antisaccade trials in five mixed blocks with varying probability of each trial type (0%, 25%, 50%, 75%, or 100% anti vs. pro) at baseline and posttest MRI sessions. Between the scans, participants practiced either the specific probability blocks used during testing or only a general 100% antisaccade block. Results indicated an overall reduction in BOLD activation within pFC, ACC, and posterior parietal cortex and across saccade circuitry for antisaccade trials. In the specific practice group, additional regions, including ACC, insula, and thalamus, showed decreased activation after practice, whereas the general practice group showed little change from baseline in those clusters. These findings demonstrate that cognitive control regions recruited to support novel task behaviors were engaged less after practice, especially with exposure to mixed task contexts rather than a novel task in isolation.


2019 · Vol 14 (7) · pp. 699–708
Author(s): James A Dungan, Liane Young

Abstract Recent work in psychology and neuroscience has revealed important differences in the cognitive processes underlying judgments of harm and purity violations. In particular, research has demonstrated that whether a violation was committed intentionally vs. accidentally has a larger impact on moral judgments of harm violations (e.g. assault) than purity violations (e.g. incest). Here, we manipulate the instructions provided to participants for a moral judgment task to further probe the boundary conditions of this intent effect. Specifically, we instructed participants undergoing functional magnetic resonance imaging to attend to either a violator's mental states (why they acted that way) or their low-level behavior (how they acted) before delivering moral judgments. Results revealed that task instructions enhanced rather than diminished differences between how harm and purity violations are processed in brain regions for mental state reasoning, or theory of mind. In particular, activity in the right temporoparietal junction increased when participants were instructed to attend to why vs. how a violator acted, to a greater extent for harm than for purity violations. This result constrains the potential accounts of why intentions matter less for purity violations compared with harm violations and provides further insight into the differences between distinct moral norms.

