Impoverished auditory cues fail to engage brain networks controlling spatial selective attention

2019
Author(s): Yuqi Deng, Inyong Choi, Barbara Shinn-Cunningham, Robert Baumgartner

Abstract: Spatial selective attention enables listeners to process a signal of interest in natural settings. However, most past studies of auditory spatial attention used impoverished spatial cues: presenting competing sounds to different ears, using only interaural differences in time (ITDs) and/or intensity (IIDs), or using non-individualized head-related transfer functions (HRTFs). Here we tested the hypothesis that impoverished spatial cues impair spatial auditory attention by only weakly engaging relevant cortical networks. Eighteen normal-hearing listeners reported the content of one of two competing syllable streams simulated at roughly +30° and −30° azimuth. The competing streams consisted of syllables from two different-sex talkers. Spatialization was based on natural spatial cues (individualized HRTFs), individualized IIDs, or generic ITDs. We measured behavioral performance as well as electroencephalographic markers of selective attention. Behaviorally, subjects recalled target streams most accurately with natural cues. Neurally, spatial attention significantly modulated early evoked sensory response magnitudes only for natural cues, not in conditions using only ITDs or IIDs. Consistent with this, parietal oscillatory power in the alpha band (8–14 Hz; associated with filtering out distracting events from unattended directions) showed significantly less attentional modulation with isolated spatial cues than with natural cues. Our findings support the hypothesis that spatial selective attention networks are only partially engaged by impoverished spatial auditory cues. These results not only suggest that studies using unnatural spatial cues underestimate the neural effects of spatial auditory attention, but also illustrate the importance of preserving natural spatial cues in assistive listening devices to support robust attentional control.
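The parietal alpha-power measure referred to above (8–14 Hz) is typically computed by bandpass filtering the EEG and averaging the squared envelope. The sketch below illustrates that computation only; the sampling rate, filter order, and synthetic signals are assumptions, not the study's actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_power(eeg, fs, band=(8.0, 14.0)):
    """Mean alpha-band power of a 1-D EEG trace: bandpass filter,
    then the squared Hilbert envelope, averaged over time."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, eeg)
    return float(np.mean(np.abs(hilbert(filtered)) ** 2))

# Synthetic check: a 10 Hz oscillation (inside the alpha band) yields
# more alpha-band power than broadband noise of similar variance.
fs = 256
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
p_alpha = alpha_power(np.sin(2 * np.pi * 10 * t), fs)
p_noise = alpha_power(rng.standard_normal(t.size), fs)
```

Attentional modulation would then be quantified by comparing such power estimates across attend-left and attend-right conditions.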

i-Perception
2021
Vol 12 (4)
pp. 204166952110271
Author(s): Aijun Wang, Heng Zhou, Yuanyuan Hu, Qiong Wu, Tianyang Zhang, et al.

The Colavita effect refers to the phenomenon wherein people tend not to respond to an auditory stimulus when a visual stimulus is presented simultaneously. Although previous studies have shown that endogenous modality attention influences the Colavita effect, whether the Colavita effect is influenced by endogenous spatial attention remains unknown. In the present study, we used endogenous spatial cues to investigate whether the size of the Colavita effect changes under visual or auditory cueing. We measured three indexes to assess the effect of endogenous spatial attention on the size of the Colavita effect. These indexes were developed from the following observations in bimodal trials: (a) the proportion of “only vision” responses was significantly higher than that of “only audition” responses; (b) the proportion of “vision precedes audition” responses was significantly higher than that of “audition precedes vision” responses; and (c) the reaction time difference for “vision precedes audition” responses was significantly larger than that for “audition precedes vision” responses. Our results showed that the Colavita effect was consistently influenced by endogenous spatial attention: its size was larger at the cued location than at the uncued location, while the cue modality (visual vs. auditory) had no effect on its size. Taken together, the present results shed light on how endogenous spatial attention affects the Colavita effect.
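The three indexes above reduce to simple proportions and a mean reaction-time difference over bimodal trials. The sketch below shows one way to compute them; the response labels and toy data are illustrative assumptions, not the study's coding scheme.

```python
def colavita_indexes(trials):
    """Compute three Colavita-style indexes from bimodal trials.

    trials: list of (response_label, reaction_time_s) tuples. The labels
    ('only_vision', 'only_audition', 'vision_first', 'audition_first')
    are hypothetical names for illustration only.
    """
    n = len(trials)

    def prop(label):
        return sum(1 for lab, _ in trials if lab == label) / n

    def mean_rt(label):
        rts = [rt for lab, rt in trials if lab == label]
        return sum(rts) / len(rts) if rts else float("nan")

    return {
        # (a) visual-only minus auditory-only response proportions
        "vision_dominance": prop("only_vision") - prop("only_audition"),
        # (b) vision-first minus audition-first response proportions
        "order_dominance": prop("vision_first") - prop("audition_first"),
        # (c) mean RT difference between the two response orders
        "rt_difference": mean_rt("vision_first") - mean_rt("audition_first"),
    }

# Toy data, for illustration only
example = [
    ("only_vision", 0.45), ("only_vision", 0.50),
    ("only_audition", 0.55),
    ("vision_first", 0.40), ("vision_first", 0.42),
    ("audition_first", 0.60),
]
idx = colavita_indexes(example)
```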


2021
Vol 25
pp. 233121652110453
Author(s): Z. Ellen Peng, Ruth Y. Litovsky

In complex listening environments, children can benefit from auditory spatial cues to understand speech in noise. When a spatial separation is introduced between the target and masker, and/or when listening with two ears versus one ear, children can gain intelligibility benefits from one or more auditory cues for unmasking: monaural head shadow, binaural redundancy, and interaural differences. This study systematically quantified the contribution of individual auditory cues to binaural speech intelligibility benefits for children with normal hearing between 6 and 15 years old. In virtual auditory space, target speech was presented from +90° azimuth (i.e., the listener's right), and two-talker babble maskers were either co-located (+90° azimuth) or separated by 180° (−90° azimuth, the listener's left). Testing was conducted over headphones in monaural (i.e., right ear) or binaural (i.e., both ears) conditions. Results showed continuous improvement of speech reception thresholds (SRTs) between 6 and 15 years of age, with performance still immature at 15 for both SRTs and intelligibility benefits derived from more than one auditory cue. With early maturation of head shadow, the prolonged maturation of unmasking was likely driven by children's poorer ability to gain full benefits from interaural difference cues. In addition, children demonstrated a trade-off between the benefits from head shadow versus interaural differences, suggesting an important aspect of individual differences in accessing auditory cues for binaural intelligibility benefits during development.
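Benefits like head shadow, binaural redundancy, and interaural differences are commonly quantified as differences between SRTs measured in the four ear-by-masker conditions. The sketch below shows one common decomposition under assumed condition labels and made-up SRT values; it is not necessarily the study's exact analysis.

```python
def cue_benefits(srt):
    """Decompose binaural intelligibility benefits from four SRTs
    (in dB SNR; lower is better). Keys are (ears, masker) with ears in
    {'monaural', 'binaural'} and masker in {'colocated', 'separated'}.
    """
    return {
        # moving maskers away from the target helps the listening ear
        "head_shadow": srt[("monaural", "colocated")]
                       - srt[("monaural", "separated")],
        # adding the second ear when everything is co-located
        "binaural_redundancy": srt[("monaural", "colocated")]
                               - srt[("binaural", "colocated")],
        # adding the second ear when target and maskers are separated
        "interaural_differences": srt[("monaural", "separated")]
                                  - srt[("binaural", "separated")],
    }

# Hypothetical SRTs for illustration (not data from the study)
srt = {
    ("monaural", "colocated"): -2.0,
    ("monaural", "separated"): -8.0,
    ("binaural", "colocated"): -4.0,
    ("binaural", "separated"): -11.0,
}
benefits = cue_benefits(srt)
```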


2019
Author(s): Daniel P. Kumpik, Connor Campbell, Jan W.H. Schnupp, Andrew J. King

Abstract: Sound localization requires the integration in the brain of auditory spatial cues generated by interactions with the external ears, head and body. Perceptual learning studies have shown that the relative weighting of these cues can change in a context-dependent fashion if their relative reliability is altered. One factor that may influence this process is vision, which tends to dominate localization judgments when both modalities are present and induces a recalibration of auditory space if they become misaligned. It is not known, however, whether vision can alter the weighting of individual auditory localization cues. Using non-individualized head-related transfer functions, we measured changes in subjects’ sound localization biases and binaural localization cue weights after ~55 minutes of training on an audiovisual spatial oddball task. Four different configurations of spatial congruence between visual and auditory cues (interaural time differences (ITDs) and frequency-dependent interaural level differences (interaural level spectra, ILS)) were used. When visual cues were spatially congruent with both auditory spatial cues, we observed an improvement in sound localization, as shown by a reduction in the variance of subjects’ localization biases, which was accompanied by an up-weighting of the more salient ILS cue. However, if the position of either one of the auditory cues was randomized during training, no overall improvement in sound localization occurred. Nevertheless, the spatial gain of whichever cue was matched with vision increased, with different effects observed on the gain for the randomized cue depending on whether ITDs or ILS were matched with vision. As a result, we observed a similar up-weighting of ILS when this cue alone was matched with vision, but no overall change in binaural cue weighting when ITDs corresponded to the visual cues and ILS were randomized. Consistently misaligning both cues with vision produced the ventriloquism aftereffect, i.e., a corresponding shift in auditory localization bias, without affecting the variability of the subjects’ sound localization judgments or producing any overall change in binaural cue weighting. These data show that visual contextual information can invoke a reweighting of auditory localization cues, although concomitant improvements in sound localization are only likely to accompany training with fully congruent audiovisual information.
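Binaural cue weights of the kind estimated above can be illustrated as a linear regression of reported azimuths on the azimuths signaled independently by each cue. The data below are synthetic and the 0.3/0.7 weights are assumptions for the demonstration, not the study's estimates.

```python
import numpy as np

# Hypothetical trials: each presents an ITD-defined azimuth and an
# independently manipulated ILS-defined azimuth (degrees); the listener
# reports a perceived azimuth. Cue weights fall out of least squares.
rng = np.random.default_rng(1)
itd_az = rng.uniform(-30, 30, 200)
ils_az = rng.uniform(-30, 30, 200)
true_w_itd, true_w_ils = 0.3, 0.7   # assumed ILS up-weighting
responses = true_w_itd * itd_az + true_w_ils * ils_az + rng.normal(0, 2, 200)

# Regress responses on the two cue-defined azimuths
X = np.column_stack([itd_az, ils_az])
w, *_ = np.linalg.lstsq(X, responses, rcond=None)
w_itd, w_ils = w
```

Because the two cue azimuths are decorrelated across trials, the regression can attribute a separate spatial gain to each cue.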


2018
Vol 115 (14)
pp. E3286-E3295
Author(s): Lengshi Dai, Virginia Best, Barbara G. Shinn-Cunningham

Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing.


2019
Author(s): Lia M. Bonacci, Lengshi Dai, Barbara G. Shinn-Cunningham

Abstract: Spatial attention may be used to select target speech in one location while suppressing irrelevant speech in another. However, if perceptual resolution of spatial cues is weak, spatially focused attention may work poorly, leading to difficulty communicating in noisy settings. In electroencephalography (EEG), the distribution of alpha (8–14 Hz) power over parietal sensors reflects the spatial focus of attention (Banerjee et al., 2011; Foxe and Snyder, 2011). If spatial attention is degraded, however, alpha may not be modulated across parietal sensors. A previously published behavioral and EEG study found that, compared to normal-hearing (NH) listeners, hearing-impaired (HI) listeners often had higher interaural time difference (ITD) thresholds, worse performance when asked to report the content of an acoustic stream from a particular location, and weaker attentional modulation of neural responses evoked by sounds in a mixture (Dai et al., 2018). This study explored whether these same HI listeners also showed weaker alpha lateralization during the previously reported task. In NH listeners, hemispheric parietal alpha power was greater when the ipsilateral location was attended; this lateralization was stronger when competing melodies were separated by a larger spatial difference. In HI listeners, however, alpha was not lateralized across parietal sensors, consistent with a degraded ability to use spatial features to selectively attend.
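Lateralization of the kind described above is often summarized with a normalized modulation index per hemisphere. This is a minimal sketch with made-up power values, not the study's analysis.

```python
def modulation_index(p_attend_ipsi, p_attend_contra):
    """Normalized alpha-power difference for one hemisphere's parietal
    sensors: positive when alpha is higher while the ipsilateral side
    is attended (the NH pattern above); near zero when alpha is not
    lateralized (the HI pattern)."""
    return (p_attend_ipsi - p_attend_contra) / (p_attend_ipsi + p_attend_contra)

# Made-up alpha-power values, for illustration only
nh_index = modulation_index(2.4, 1.6)   # NH-like lateralization
hi_index = modulation_index(2.0, 2.0)   # HI-like: no lateralization
```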


2019
Vol 62 (3)
pp. 745-757
Author(s): Jessica M. Wess, Joshua G. W. Bernstein

Purpose: For listeners with single-sided deafness, a cochlear implant (CI) can improve speech understanding by giving the listener access to the ear with the better target-to-masker ratio (TMR; head shadow) or by providing interaural difference cues to facilitate the perceptual separation of concurrent talkers (squelch). CI simulations presented to listeners with normal hearing examined how these benefits could be affected by interaural differences in loudness growth in a speech-on-speech masking task.
Method: Experiment 1 examined a target–masker spatial configuration where the vocoded ear had a poorer TMR than the nonvocoded ear. Experiment 2 examined the reverse configuration. Generic head-related transfer functions simulated free-field listening. Compression or expansion was applied independently to each vocoder channel (power-law exponents: 0.25, 0.5, 1, 1.5, or 2).
Results: Compression reduced the benefit provided by the vocoder ear in both experiments. There was some evidence that expansion increased squelch in Experiment 1 but reduced the benefit in Experiment 2, where the vocoder ear provided a combination of head-shadow and squelch benefits.
Conclusions: The effects of compression and expansion are interpreted in terms of envelope distortion and changes in the vocoded-ear TMR (for head shadow) or changes in perceived target–masker spatial separation (for squelch). The compression parameter is a candidate for clinical optimization to improve single-sided deafness CI outcomes.
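The power-law manipulation in the Method amounts to raising each channel's normalized envelope to an exponent: exponents below 1 compress (raising low levels and shrinking the dynamic range), exponents above 1 expand it. The sketch below illustrates only that mapping; the study's vocoder is not reproduced here.

```python
import numpy as np

def apply_power_law(envelope, exponent):
    """Map a normalized vocoder-channel envelope through a power law.
    exponent < 1 compresses (raises low levels), exponent > 1 expands
    (lowers them), and exponent = 1 leaves the envelope unchanged."""
    env = np.clip(np.asarray(envelope, dtype=float), 0.0, 1.0)
    return env ** exponent

env = np.array([0.1, 0.5, 1.0])
compressed = apply_power_law(env, 0.5)   # smaller dynamic range
expanded = apply_power_law(env, 2.0)     # larger dynamic range
```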


2000
Vol 83 (4)
pp. 2300-2314
Author(s): U. Koch, B. Grothe

To date, most physiological studies that investigated binaural auditory processing have addressed the topic rather exclusively in the context of sound localization. However, there is strong psychophysical evidence that binaural processing serves more than only sound localization. This raises the question of how binaural processing of spatial cues interacts with cues important for feature detection. The temporal structure of a sound is one such feature important for sound recognition. As a first approach, we investigated the influence of binaural cues on temporal processing in the mammalian auditory system. Here, we present evidence that binaural cues, namely interaural intensity differences (IIDs), have profound effects on filter properties for stimulus periodicity of auditory midbrain neurons in the echolocating big brown bat, Eptesicus fuscus. Our data indicate that these effects are partially due to changes in strength and timing of binaural inhibitory inputs. We measured filter characteristics for the periodicity (modulation frequency) of sinusoidally frequency modulated sounds (SFM) under different binaural conditions. As criteria, we used 50% filter cutoff frequencies of modulation transfer functions based on discharge rate as well as synchronicity of discharge to the sound envelope. The binaural conditions were contralateral stimulation only, equal stimulation at both ears (IID = 0 dB), and stimulation more intense at the ipsilateral ear (IID = −20, −30 dB). In 32% of neurons, the range of modulation frequencies the neurons responded to changed considerably between monaural and binaural (IID = 0 dB) stimulation. Moreover, in ∼50% of neurons the range of modulation frequencies was narrower when the ipsilateral ear was favored (IID = −20 dB) compared with equal stimulation at both ears (IID = 0 dB). In ∼10% of the neurons, synchronization differed when comparing different binaural cues. 
Blockade of the GABAergic or glycinergic inputs to the cells recorded from revealed that inhibitory inputs were at least partially responsible for the observed changes in SFM filtering. In 25% of the neurons, drug application abolished those changes. Experiments using electronically introduced interaural time differences showed that the strength of ipsilaterally evoked inhibition increased with increasing modulation frequencies in one third of the cells tested. Thus glycinergic and GABAergic inhibition is at least one source responsible for the observed interdependence of temporal structure of a sound and spatial cues.
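The 50% filter-cutoff criterion used above can be illustrated as finding, by interpolation, the modulation frequency at which a rate-based modulation transfer function falls to half its maximum. The function and data below are an illustrative sketch, not the recorded MTFs.

```python
import numpy as np

def cutoff_50(mod_freqs, rates):
    """Modulation frequency (Hz) at which a rate MTF falls to 50% of its
    maximum, via linear interpolation on the high-frequency flank."""
    rates = np.asarray(rates, dtype=float)
    half = 0.5 * rates.max()
    above = np.where(rates >= half)[0]
    i = above[-1]                      # last point at or above 50%
    if i == len(rates) - 1:
        return float(mod_freqs[i])     # MTF never falls below 50%
    # interpolate between the last point above and the first point below
    f0, f1 = mod_freqs[i], mod_freqs[i + 1]
    r0, r1 = rates[i], rates[i + 1]
    return float(f0 + (half - r0) * (f1 - f0) / (r1 - r0))

# Hypothetical band-pass-shaped rate MTF (spikes/s vs. SFM rate in Hz)
freqs = [10, 20, 40, 80, 160]
rates = [20, 40, 36, 10, 2]
fc = cutoff_50(freqs, rates)
```

Shifts in this cutoff across binaural conditions would then quantify how IIDs reshape a neuron's periodicity filtering.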


2020
Vol 27 (2)
pp. 315-321
Author(s): Miranda L. Johnson, John Palmer, Cathleen M. Moore, Geoffrey M. Boynton

Abstract: Spatial cues help participants detect a visual target when it appears at the cued location. One hypothesis for this cueing effect, called selective perception, is that cueing a location enhances perceptual encoding at that location. Another hypothesis, called selective decision, is that the cue has no effect on perception, but instead provides prior information that facilitates decision-making. We distinguished these hypotheses by comparing a simultaneous display with two spatial locations to sequential displays with two temporal intervals. The simultaneous condition had a partially valid spatial cue, and the sequential condition had a partially valid temporal cue. Selective perception predicts no cueing effect for sequential displays, provided there is enough time to switch attention. In contrast, selective decision predicts cueing effects for sequential displays regardless of time. We used endogenous cueing of a detection-like coarse orientation discrimination task with clear displays (no external noise or postmasks). Results showed cueing effects for the sequential condition, supporting a decision account of selective attention for endogenous cueing of detection-like tasks.


Perception
2016
Vol 46 (1)
pp. 6-17
Author(s): N. Van der Stoep, S. Van der Stigchel, T. C. W. Nijboer, C. Spence

Multisensory integration (MSI) and exogenous spatial attention can both speed up responses to perceptual events. Recently, it has been shown that audiovisual integration at exogenously attended locations is reduced relative to unattended locations. This effect was observed at short cue-target intervals (200–250 ms). At longer intervals, however, the initial benefits of exogenous shifts of spatial attention at the cued location are often replaced by response time (RT) costs (also known as Inhibition of Return, IOR). Given these opposing cueing effects at shorter versus longer intervals, we decided to investigate whether MSI would also be affected by IOR. Uninformative exogenous visual spatial cues were presented between 350 and 450 ms prior to the onset of auditory, visual, and audiovisual targets. As expected, IOR was observed for visual targets (invalid cue RT < valid cue RT). For auditory and audiovisual targets, neither IOR nor any spatial cueing effects were observed. The amount of relative multisensory response enhancement and race model inequality violation was larger for uncued than for cued locations, indicating that IOR reduces MSI. The results are discussed in the context of changes in unisensory signal strength at cued as compared with uncued locations.
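The race model inequality test mentioned above (Miller, 1982) compares the empirical cumulative RT distribution for audiovisual targets against the sum of the unisensory distributions. The sketch below shows the core computation with synthetic RTs; the data are illustrative, not the study's.

```python
import numpy as np

def cdf_at(rts, t):
    """Empirical probability of a response by time t (ms)."""
    return float(np.mean(np.asarray(rts, dtype=float) <= t))

def race_model_violation(rt_auditory, rt_visual, rt_av, t):
    """Miller's race-model inequality at time t: under a race model,
    P(RT_AV <= t) <= min(1, P(RT_A <= t) + P(RT_V <= t)).
    A positive return value indicates a violation, commonly taken as
    evidence of multisensory integration."""
    bound = min(1.0, cdf_at(rt_auditory, t) + cdf_at(rt_visual, t))
    return cdf_at(rt_av, t) - bound

# Synthetic RTs (ms), illustration only: audiovisual responses are
# faster at t = 280 ms than the race bound predicts.
rt_a = [300, 320, 340, 360, 380]
rt_v = [310, 330, 350, 370, 390]
rt_av = [250, 260, 270, 280, 290]
violation = race_model_violation(rt_a, rt_v, rt_av, 280)
```

In practice the inequality is evaluated at many quantiles of the RT distributions, and the violation is summed or averaged per cueing condition.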

