Improvement in auditory spatial discrimination from ambiguous visual stimuli is not explained by ideal observer causal inference

2019 ◽  
Author(s):  
Madeline S. Cappelloni ◽  
Sabyasachi Shivkumar ◽  
Ralf M. Haefner ◽  
Ross K. Maddox

Abstract

In order to survive and function in the world, we must understand the content of our environment. This requires us to gather and parse complex, sometimes conflicting, information. Yet the brain is capable of translating sensory stimuli from disparate modalities into a cohesive and accurate percept with little conscious effort. Previous studies of multisensory integration have suggested that the brain's integration of cues is well approximated by an ideal observer implementing Bayesian causal inference. However, behavioral data from tasks that include only one stimulus in each modality fail to capture what is, in nature, a complex process. Here we employed an auditory spatial discrimination task in which listeners were asked to determine on which side they heard one of two concurrently presented sounds. We compared two visual conditions in which task-uninformative shapes were presented in the center of the screen, or spatially aligned with the auditory stimuli. We found that performance on the auditory task improved when the visual stimuli were spatially aligned with the auditory stimuli—even though the shapes provided no information about which side the auditory target was on. We also demonstrate that a model of a Bayesian ideal observer performing causal inference cannot explain this improvement, indicating that humans deviate systematically from the ideal observer model.
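As context for the ideal observer referenced above, a minimal sketch of Bayesian causal inference for one auditory and one visual cue, following the standard formulation with Gaussian likelihoods and a Gaussian spatial prior. All parameter values are illustrative assumptions, not those of the study:

```python
import math

def causal_inference_posterior(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common):
    """Posterior probability that auditory and visual cues share one cause.

    x_a, x_v: noisy auditory and visual location measurements.
    sigma_a, sigma_v: sensory noise standard deviations.
    sigma_p: width of a zero-mean Gaussian prior over locations.
    p_common: prior probability of a common cause (C = 1).
    """
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
    # Likelihood of the cue pair under a common cause, location integrated out
    var1 = va * vv + va * vp + vv * vp
    l1 = math.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va)
                  / var1) / (2 * math.pi * math.sqrt(var1))
    # Likelihood under two independent causes
    l2 = math.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp))) \
         / (2 * math.pi * math.sqrt((va + vp) * (vv + vp)))
    # Bayes' rule over the two causal structures
    return p_common * l1 / (p_common * l1 + (1 - p_common) * l2)
```

Spatially aligned cues yield a high posterior on a common cause, while widely separated cues favor independent causes—the structural inference at the heart of the model the study tests.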

PLoS ONE ◽  
2019 ◽  
Vol 14 (9) ◽  
pp. e0215417

2012 ◽  
Vol 25 (0) ◽  
pp. 169
Author(s):  
Tomoaki Nakamura ◽  
Yukio P. Gunji

Most research on audio–visual interaction has focused on spatio-temporal factors and synesthesia-like phenomena. Research on synesthesia-like phenomena, in particular, was advanced by Marks and colleagues, who found synesthesia-like correlations between the brightness and size of visual stimuli and the pitch of auditory stimuli (Marks, 1987). The main interest of such research has been the perceptual similarities and differences between synesthetes and non-synesthetes. We hypothesized that, in non-synesthetes, cross-modal phenomena at the perceptual level emerge as a function that compensates for the absence or ambiguity of a stimulus. To test this hypothesis, we investigated audio–visual interaction using the movement (speed) of an object as the visual stimulus and sine waves as the auditory stimuli. In each trial, objects (circles) moved at a fixed speed and were masked at arbitrary positions, and auditory stimuli (high, middle, or low pitch) were presented simultaneously with the disappearance of the objects. Subjects reported the expected position of the objects when the auditory stimulus stopped. Results showed a correlation between the reported position of the object, i.e., its inferred movement speed, and the pitch of the sound. We conjecture that cross-modal phenomena in non-synesthetes tend to occur when one of the sensory stimuli is absent or ambiguous.


1999 ◽  
Vol 11 (2) ◽  
pp. 206-213 ◽  
Author(s):  
Tracy L. Taylor ◽  
Raymond M. Klein ◽  
Douglas P. Munoz

Relative to when a fixated stimulus remains visible, saccadic latencies are facilitated when a fixated stimulus is extinguished simultaneously with or prior to the appearance of an eccentric auditory, visual, or combined visual-auditory target. In a study of nine human subjects, we determined whether such facilitation (the “gap effect”) occurs equivalently for the disappearance of fixated auditory stimuli and fixated visual stimuli. In the present study, a fixated auditory (noise) stimulus remained present (overlap) or else was extinguished simultaneously with (step) or 200 msec prior to (gap) the appearance of a visual, auditory (tone), or combined visual-auditory target 10° to the left or right of fixation. The results demonstrated equivalent facilitatory effects due to the disappearance of fixated auditory and visual stimuli and are consistent with the presumed role of the superior colliculus in the gap effect.


Perception ◽  
10.1068/p5849 ◽  
2007 ◽  
Vol 36 (10) ◽  
pp. 1507-1512 ◽  
Author(s):  
Kerstin Königs ◽  
Jonas Knöll ◽  
Frank Bremmer

Previous studies have shown that the perceived location of visual stimuli briefly flashed during smooth pursuit, saccades, or optokinetic nystagmus (OKN) is not veridical. We investigated whether these mislocalisations can also be observed for brief auditory stimuli presented during OKN. Experiments were carried out in a lightproof sound-attenuated chamber. Participants performed eye movements elicited by visual stimuli. An auditory target (white noise) was presented for 5 ms. Our data clearly indicate that auditory targets are mislocalised during reflexive eye movements. OKN induces a shift of perceived location in the direction of the slow eye movement, which is modulated in the temporal vicinity of the fast phase. The mislocalisation is stronger for look- as compared to stare-nystagmus. The size and temporal pattern of the observed mislocalisation are different from those found for visual targets. This suggests that different neural mechanisms are at play to integrate oculomotor signals and information on the spatial location of visual as well as auditory stimuli.


1999 ◽  
Vol 82 (1) ◽  
pp. 330-342 ◽  
Author(s):  
Alexander Grunewald ◽  
Jennifer F. Linden ◽  
Richard A. Andersen

The lateral intraparietal area (LIP) of macaques has been considered unresponsive to auditory stimulation. Recent reports, however, indicate that neurons in this area respond to auditory stimuli in the context of an auditory-saccade task. Is this difference in auditory responsiveness of LIP due to auditory-saccade training? To address this issue, LIP responses in two monkeys were recorded at two different times: before and after auditory-saccade training. Before auditory-saccade training, the animals had never been trained on any auditory task, but had been trained on visual tasks. In both sets of experiments, activity of LIP neurons was recorded while auditory and visual stimuli were presented and the animals were fixating. Before training, 172 LIP neurons were recorded. Among these, the number of cells responding to auditory stimuli did not reach significance, whereas about one-half of the cells responded to visual stimuli. An information theory analysis confirmed that no information about auditory stimulus location was available in LIP neurons in the experiments before training. After training, activity from 160 cells was recorded. These experiments showed that 12% of cells in area LIP responded to auditory stimuli, whereas the proportion of cells responding to visual stimuli remained about the same as before training. The information theory analysis confirmed that, after training, information about auditory stimulus location was available in LIP neurons. Auditory-saccade training therefore generated responsiveness to auditory stimuli de novo in LIP neurons. Thus some LIP cells become active for auditory stimuli in a passive fixation task, once the animals have learned that these stimuli are important for oculomotor behavior.


2020 ◽  
Author(s):  
Madeline S. Cappelloni ◽  
Sabyasachi Shivkumar ◽  
Ralf M. Haefner ◽  
Ross K. Maddox

ABSTRACT

The brain combines information from multiple sensory modalities to interpret the environment. Multisensory integration is often modeled by ideal Bayesian causal inference, a model proposing that perceptual decisions arise from a statistical weighting of information from each sensory modality based on its reliability and relevance to the observer's task. However, ideal Bayesian causal inference fails to describe human behavior in a simultaneous auditory spatial discrimination task in which spatially aligned visual stimuli improve performance despite providing no information about the correct response. This work tests the hypothesis that humans weight auditory and visual information in this task based on their relative reliabilities, even though the visual stimuli are task-uninformative, carrying no information about the correct response, and should be given zero weight. Listeners performed an auditory spatial discrimination task in which relative reliabilities were modulated by the stimulus durations. By comparing conditions in which task-uninformative visual stimuli were spatially aligned with auditory stimuli or centrally located (control condition), listeners were shown to have a larger multisensory effect when their auditory thresholds were worse. Even when visual stimuli are not task-informative, the brain combines sensory information that is scene-relevant, especially when the task is difficult due to unreliable auditory information.
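The reliability weighting hypothesized above follows the standard forced-fusion rule, in which each cue contributes in proportion to its inverse variance. A minimal sketch, with illustrative noise values rather than the study's measured thresholds:

```python
def fuse_cues(x_a, x_v, sigma_a, sigma_v):
    """Reliability-weighted fusion of auditory and visual location estimates.

    Each cue's weight is its inverse variance (its reliability), normalized
    to sum to one; as auditory noise sigma_a grows, the visual cue
    dominates the fused estimate.
    """
    r_a, r_v = 1.0 / sigma_a**2, 1.0 / sigma_v**2
    w_a = r_a / (r_a + r_v)
    return w_a * x_a + (1.0 - w_a) * x_v
```

Under this rule, a less reliable auditory estimate (e.g., at short stimulus durations) pulls the percept toward the visual location—consistent with the larger multisensory effect observed when auditory thresholds are worse.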


2020 ◽  
Vol 2020 (16) ◽  
pp. 41-1-41-7
Author(s):  
Orit Skorka ◽  
Paul J. Kane

Many of the metrics developed for informational imaging are useful in automotive imaging, since many of the tasks—for example, object detection and identification—are similar. This work discusses sensor characterization parameters for the Ideal Observer SNR model and elaborates on the noise power spectrum. It presents cross-correlation analysis results for matched-filter detection of a tribar pattern in sets of resolution target images that were captured with three image sensors over a range of illumination levels. Lastly, the work compares the cross-correlation data to predictions made by the Ideal Observer Model and demonstrates good agreement between the two methods on relative evaluation of detection capabilities.
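The matched-filter detection described above can be illustrated in one dimension with a cross-correlation search; the bar pattern, noise level, and embedding offset below are illustrative assumptions, not the study's sensor data:

```python
import numpy as np

def matched_filter_detect(signal, template):
    """Locate a known pattern in a noisy 1-D signal via cross-correlation.

    The template is made zero-mean (the matched filter), slid across the
    signal, and the offset with the strongest correlation is returned.
    """
    t = template - template.mean()
    scores = np.correlate(signal, t, mode="valid")
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
tribar = np.tile([1.0, 0.0], 3).repeat(4)  # three bright bars of width 4
scene = rng.normal(0.0, 0.2, 200)          # noisy background
scene[60:60 + tribar.size] += tribar       # embed the pattern at offset 60
```

Calling `matched_filter_detect(scene, tribar)` recovers the embedded offset; the correlation peak height relative to the noise floor plays the role of the detection SNR in the Ideal Observer comparison.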


Animals ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 2233
Author(s):  
Loïc Pougnault ◽  
Hugo Cousillas ◽  
Christine Heyraud ◽  
Ludwig Huber ◽  
Martine Hausberger ◽  
...  

Attention is defined as the ability to process one aspect of the environment selectively over others and is at the core of all cognitive processes such as learning, memorization, and categorization. Thus, evaluating and comparing attentional characteristics between individuals and across situations is an important aspect of cognitive studies. Recent studies have shown the value of analyzing spontaneous attention in standardized situations, but data are still scarce, especially for songbirds. The present study adapted three tests of attention (towards visual non-social, visual social, and auditory stimuli) as tools for future comparative research in the European starling (Sturnus vulgaris), a species well known to show individual variation in social learning and engagement. Our results reveal that attentional characteristics (glances versus gazes) vary according to the stimulus broadcast: more gazes towards unusual visual stimuli and species-specific auditory stimuli, and more glances towards species-specific visual stimuli and hetero-specific auditory stimuli. By revealing individual variations, this study shows that these tests constitute a useful and easy-to-use tool for evaluating spontaneous individual attentional characteristics and their modulation by a variety of factors. Our results also indicate that attentional skill is not a uniform concept and depends upon the modality and the stimulus type.


1954 ◽  
Vol 100 (419) ◽  
pp. 462-477 ◽  
Author(s):  
K. R. L. Hall ◽  
E. Stride

A number of studies on reaction time (R.T.) latency to visual and auditory stimuli in psychotic patients have been reported since the first investigations of the personal equation were carried out. The general trends of the work up to 1943 are well summarized by Hunt (1944), while Granger's (1953) review of "Personality and visual perception" contains a summary of the studies on R.T. to visual stimuli.

