Modality-specific and multisensory mechanisms of spatial attention and expectation

2019 ◽  
Author(s):  
Arianna Zuanazzi ◽  
Uta Noppeney

Abstract
In our natural environment, the brain needs to combine signals from multiple sensory modalities into a coherent percept. While spatial attention guides perceptual decisions by prioritizing processing of signals that are task-relevant, spatial expectations encode the probability of signals over space. Previous studies have shown that behavioral effects of spatial attention generalize across sensory modalities. However, because they manipulated spatial attention as signal probability over space, these studies could not dissociate attention and expectation or assess their interaction.
In two experiments, we orthogonally manipulated spatial attention (i.e., task relevance) and expectation (i.e., signal probability) selectively in one sensory modality (the primary modality; experiment 1: audition, experiment 2: vision) and assessed their effects on the primary and a secondary sensory modality in which attention and expectation were held constant.
Our results show behavioral effects of spatial attention that are comparable for audition and vision as primary modalities; yet signal probabilities were learnt more slowly in audition, so that spatial expectations were formed later in audition than in vision. Critically, when these differences in learning between audition and vision were accounted for, both spatial attention and expectation affected responses more strongly in the primary modality in which they were manipulated, and generalized to the secondary modality only in an attenuated fashion. Collectively, our results suggest that both spatial attention and expectation rely on modality-specific and multisensory mechanisms.

2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Lucilla Cardinali ◽  
Andrea Serino ◽  
Monica Gori

Abstract
Cortical body size representations are distorted in adults, from low-level motor and sensory maps to higher-level multisensory and cognitive representations. Little is known about how such representations are built and how they evolve during infancy and childhood. Here we investigated how hand size is represented in typically developing children aged 6 to 10. Participants were asked to estimate their hand size using two different sensory modalities (visual or haptic). We found a distortion (underestimation) already present in the youngest children. Crucially, this distortion increases with age, regardless of the sensory modality used to access the representation. Finally, the underestimation is specific to the body, as no bias was found for object estimation. This study suggests that the brain does not keep up with natural body growth. However, since neither motor behavior nor perception was impaired, the distortion seems to be functional and/or compensated for, allowing proper interaction with the external environment.


2017 ◽  
Author(s):  
Arianna Zuanazzi ◽  
Uta Noppeney

Abstract
Spatial attention and expectation are two critical top-down mechanisms controlling perceptual inference. Based on previous research, it remains unclear whether their influence on perceptual decisions is additive or interactive.
We developed a novel multisensory approach that orthogonally manipulated spatial attention (i.e., task relevance) and expectation (i.e., signal probability) selectively in audition and evaluated their effects on observers' responses in vision. Critically, while experiment 1 manipulated expectation directly via the probability of task-relevant auditory targets across hemifields, experiment 2 manipulated it indirectly via task-irrelevant auditory non-targets.
Surprisingly, our results demonstrate that spatial attention and signal probability can influence perceptual decisions either additively or interactively. These seemingly contradictory results are explained parsimoniously by a model that combines spatial attention with general and spatially selective response probabilities as predictors, with no direct influence of signal probability. Our model provides a novel perspective on how spatial attention and expectations facilitate effective interactions with the environment.
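The winning model is described only qualitatively here. As a hedged sketch (not the authors' actual model, data, or coefficients), a model of this form, response times predicted additively from spatial attention plus general and spatially selective response probabilities, with no signal-probability term, can be fit by ordinary least squares:

```python
import numpy as np

# Illustrative sketch only: the simulated data and coefficients are invented,
# not taken from the study. The model form follows the abstract: responses
# predicted additively from spatial attention plus general and spatially
# selective response probabilities (no direct signal-probability predictor).
rng = np.random.default_rng(0)
n = 400
attended = rng.integers(0, 2, n).astype(float)  # 1 = stimulus on attended side
p_resp_general = rng.uniform(0.5, 0.9, n)       # overall response probability
p_resp_spatial = rng.uniform(0.2, 0.8, n)       # response prob. at that location

X = np.column_stack([np.ones(n), attended, p_resp_general, p_resp_spatial])
beta_true = np.array([550.0, -30.0, -80.0, -60.0])  # ms; negative = faster
rt = X @ beta_true + rng.normal(0.0, 20.0, n)       # simulated response times

# Recover the additive weights by ordinary least squares.
beta_hat, *_ = np.linalg.lstsq(X, rt, rcond=None)
print(np.round(beta_hat, 1))
```

In a model comparison as described in the abstract, such a fit would be pitted against alternatives that include a direct signal-probability predictor or attention-by-probability interaction terms.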


Life ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 296
Author(s):  
Rodrigo Araneda ◽  
Sandra Silva Moura ◽  
Laurence Dricot ◽  
Anne G. De Volder

Using functional magnetic resonance imaging, we monitored the brain activity of 12 early blind subjects and 12 blindfolded control subjects, matched for age, gender and musical experience, during a beat detection task. Subjects were required to discriminate regular ("beat") from irregular ("no beat") rhythmic sequences composed of sounds or vibrotactile stimulations. In both sensory modalities, the brain activity differences between the two groups involved heteromodal brain regions, including parietal and frontal cortical areas, as well as occipital brain areas that were recruited in the early blind group only. Accordingly, early blindness induced plastic changes in the cerebral pathways involved in rhythm perception, with participation of the visually deprived occipital areas regardless of the input sensory modality. We conclude that the visually deprived cortex switches its input modality from vision to audition and the vibrotactile sense to perform this temporal processing task, supporting the concept of a metamodal, multisensory organization of this cortex.


2004 ◽  
Vol 16 (2) ◽  
pp. 272-288 ◽  
Author(s):  
Martin Eimer ◽  
José van Velzen ◽  
Jon Driver

Previous ERP studies have uncovered cross-modal interactions in endogenous spatial attention. Directing attention to one side to judge stimuli from one particular modality can modulate early modality-specific ERP components not only for that modality, but also for other currently irrelevant modalities. However, past studies could not determine whether the spatial focus of attention in the task-irrelevant secondary modality was similar to the primary modality, or was instead diffuse across one hemifield. Here, auditory or visual stimuli could appear at any one of four locations (two on each side). In different blocks, subjects judged stimuli at only one of these four locations, for an auditory (Experiment 1) or visual (Experiment 2) task. Early attentional modulations of visual and auditory ERPs were found for stimuli at the currently relevant location, compared with those at the irrelevant location within the same hemifield, thus demonstrating within-hemifield tuning of spatial attention. Crucially, this was found not only for the currently relevant modality, but also for the currently irrelevant modality. Moreover, these within-hemifield attention effects were statistically equivalent regardless of the task relevance of the modality, for both the auditory and visual ERP data. These results demonstrate that within-hemifield spatial attention for one task-relevant modality can transfer cross-modally to a task-irrelevant modality, consistent with spatial selection at a multimodal level of representation.


2021 ◽  
pp. 214-220
Author(s):  
Wei Lin Toh ◽  
Neil Thomas ◽  
Susan L. Rossell

There has been burgeoning interest in studying hallucinations in psychosis that occur across multiple sensory modalities. The current study aimed to characterize the auditory hallucination and delusion profiles in patients with auditory hallucinations only versus those with multisensory hallucinations. Participants with psychosis were partitioned into groups with voices only (AVH; n = 50) versus voices plus hallucinations in at least one other sensory modality (AVH+; n = 50), based on their responses on the Scale for the Assessment of Positive Symptoms (SAPS). Basic demographic and clinical information was collected, and the Questionnaire for Psychotic Experiences (QPE) was used to assess psychosis phenomenology. Relative to the AVH group, the AVH+ group reported significantly greater compliance with perceived commands, auditory illusions, and sensed presences. The latter group also had greater levels of delusion-related distress and functional impairment and was more likely to endorse delusions of reference and misidentification. This preliminary study uncovered important phenomenological differences in those with multisensory hallucinations. Future hallucination research extending beyond the auditory modality is needed.


2017 ◽  
Vol 8 (1) ◽  
pp. e00877 ◽  
Author(s):  
Fabio Richlan ◽  
Juliane Schubert ◽  
Rebecca Mayer ◽  
Florian Hutzler ◽  
Martin Kronbichler

2016 ◽  
Vol 14 (3) ◽  
pp. 21-31 ◽  
Author(s):  
O.B. Bogdashina

Synaesthesia is a perceptual phenomenon in which stimulation of one sensory modality triggers a perception in one or more other sensory modalities. Synaesthesia is not uniform and can manifest itself in different ways. Because the sensations and their interpretation vary over time, the phenomenon is difficult to study. The article presents a classification of different forms of synaesthesia, including sensory and cognitive, and bimodal and multimodal synaesthesia. Some synaesthetes have several forms and variants of synaesthesia, while others have just one. Although synaesthesia is not specific to autism spectrum disorders, it is quite common among autistic individuals. The article discusses the most common forms of synaesthesia in autism and the advantages and problems of synaesthetic perception in children with autism spectrum disorders, and offers advice to parents on how to recognise synaesthesia in their children.


Author(s):  
Zahra Mousavi ◽  
Mohammad Mahdi Kiani ◽  
Hamid Aghajan

Abstract
The brain constantly anticipates future sensory inputs based on past experience. When new sensory data differ from predictions shaped by recent trends, neural signals are generated to report this surprise. Existing models for quantifying surprise assume an ideal observer operating under one of three definitions of surprise: Shannon, Bayesian, or confidence-corrected surprise. In this paper, we analyze visual and auditory EEG and auditory MEG signals recorded during oddball tasks to examine which temporal components of these signals are sufficient to decode the brain's surprise under each of these three definitions. We found that for both recording systems, Shannon surprise is always decoded significantly better than Bayesian surprise, regardless of the sensory modality and the temporal features selected for decoding.
Author summary
A regression model is proposed for decoding the level of the brain's surprise in response to sensory sequences using selected temporal components of recorded EEG and MEG data. Three definitions of surprise (Shannon, Bayesian, and confidence-corrected) are compared in terms of the decoding power they offer. Four regimes for selecting temporal samples of the EEG and MEG data are used to evaluate which parts of the recorded response carry signatures of the brain's surprise, i.e., offer high decoding power. We found that both the middle and late components of the EEG response offer strong decoding power for surprise, while the early components are significantly weaker. In the MEG response, the middle components have the highest decoding power, while the late components offer moderate decoding power. When a single temporal sample is used for decoding, samples from the middle segment possess the highest decoding power. Shannon surprise is decoded better than the other definitions of surprise across all four temporal feature-selection regimes, and this superiority holds for both the EEG and MEG data across the entire range of temporal samples used in our analysis.
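The first two surprise measures named above have standard definitions that can be made concrete. As a minimal sketch (not the authors' code), assuming a Beta-Bernoulli ideal observer over a binary oddball sequence: Shannon surprise is the negative log predictive probability of the current stimulus, and Bayesian surprise is the KL divergence from the prior to the posterior over the deviant probability.

```python
import math

def digamma(x):
    """Digamma via recurrence plus an asymptotic series (adequate here)."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1.0 / 12 - f * (1.0 / 120 - f / 252))

def kl_beta(a1, b1, a0, b0):
    """KL divergence KL( Beta(a1, b1) || Beta(a0, b0) )."""
    return (math.lgamma(a0) + math.lgamma(b0) - math.lgamma(a0 + b0)
            - math.lgamma(a1) - math.lgamma(b1) + math.lgamma(a1 + b1)
            + (a1 - a0) * digamma(a1) + (b1 - b0) * digamma(b1)
            + (a0 + b0 - a1 - b1) * digamma(a1 + b1))

def surprise_trace(sequence):
    """Per-trial Shannon and Bayesian surprise for a 0/1 oddball sequence.

    Shannon surprise:  -log p(x_t | x_1..t-1)   (predictive improbability)
    Bayesian surprise: KL(posterior || prior) over the deviant probability,
    starting from a flat Beta(1, 1) prior.
    """
    a, b = 1.0, 1.0  # Beta counts: deviants seen, standards seen (plus prior)
    shannon, bayesian = [], []
    for x in sequence:  # x = 1 for a deviant, 0 for a standard
        p_dev = a / (a + b)                    # predictive prob. of a deviant
        p_x = p_dev if x == 1 else 1.0 - p_dev
        shannon.append(-math.log(p_x))
        a_new, b_new = (a + 1.0, b) if x == 1 else (a, b + 1.0)
        bayesian.append(kl_beta(a_new, b_new, a, b))
        a, b = a_new, b_new
    return shannon, bayesian
```

On a run of standards ending in a deviant, e.g. `[0, 0, 0, 0, 1]`, the deviant's Shannon surprise (log 6 ≈ 1.79 nats) exceeds that of the first trial (log 2 ≈ 0.69 nats). The confidence-corrected surprise of the third definition would additionally penalize the observer's confidence in its (wrong) prediction.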


2021 ◽  
Vol 1 (1) ◽  
pp. 30-43
Author(s):  
Surjo Soekadar ◽  
Jennifer Chandler ◽  
Marcello Ienca ◽  
Christoph Bublitz

Recent advances in neurotechnology allow for an increasingly tight integration of the human brain and mind with artificial cognitive systems, blending persons with technologies and creating an assemblage that we call a hybrid mind. In some ways the mind has always been a hybrid, emerging from the interaction of biology, culture (including technological artifacts) and the natural environment. However, with the emergence of neurotechnologies enabling bidirectional flows of information between the brain and AI-enabled devices, integrated into mutually adaptive assemblages, we have arrived at a point where the specific examination of this new instantiation of the hybrid mind is essential. Among the critical questions raised by this development are the effects of these devices on the user’s perception of the self, and on the user’s experience of their own mental contents. Questions arise related to the boundaries of the mind and body and whether the hardware and software that are functionally integrated with the body and mind are to be viewed as parts of the person or separate artifacts subject to different legal treatment. Other questions relate to how to attribute responsibility for actions taken as a result of the operations of a hybrid mind, as well as how to settle questions of the privacy and security of information generated and retained within a hybrid mind.


2006 ◽  
Vol 16 (10) ◽  
pp. 1045-1050
Author(s):  
Song Weiqun ◽  
Lou Yuejia ◽  
Chi Song ◽  
Ji Xunming ◽  
Ling Feng ◽  
...  
