Evidence for a neural source of the precedence effect in sound localization

2015 ◽  
Vol 114 (5) ◽  
pp. 2991-3001 ◽  
Author(s):  
Andrew D. Brown ◽  
Heath G. Jones ◽  
Alan Kan ◽  
Tanvi Thakkar ◽  
G. Christopher Stecker ◽  
...  

Normal-hearing human listeners and a variety of studied animal species localize sound sources accurately in reverberant environments by responding to the directional cues carried by the first-arriving sound rather than spurious cues carried by later-arriving reflections, which are not perceived discretely. This phenomenon is known as the precedence effect (PE) in sound localization. Despite decades of study, the biological basis of the PE remains unclear. Though the PE was once widely attributed to central processes such as synaptic inhibition in the auditory midbrain, a more recent hypothesis holds that the PE may arise essentially as a by-product of normal cochlear function. Here we evaluated the PE in a unique human patient population with demonstrated sensitivity to binaural information but without functional cochleae. Users of bilateral cochlear implants (CIs) were tested in a psychophysical task that assessed the number and location(s) of auditory images perceived for simulated source-echo (lead-lag) stimuli. A parallel experiment was conducted in a group of normal-hearing (NH) listeners. Key findings were as follows: 1) Subjects in both groups exhibited lead-lag fusion. 2) Fusion was marginally weaker in CI users than in NH listeners but could be augmented by systematically attenuating the amplitude of the lag stimulus to coarsely simulate adaptation observed in acoustically stimulated auditory nerve fibers. 3) Dominance of the lead in localization varied substantially among both NH and CI subjects but was evident in both groups. Taken together, data suggest that aspects of the PE can be elicited in CI users, who lack functional cochleae, thus suggesting that neural mechanisms are sufficient to produce the PE.
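The simulated source-echo (lead-lag) stimuli described above, including the systematic attenuation of the lag, can be sketched in code. This is a minimal illustration only, not the authors' actual stimulus-generation procedure; the sampling rate, burst duration, and the `lead_lag_stimulus` helper are assumptions made for the example.

```python
import random

def lead_lag_stimulus(sr=44100, burst_ms=25, delay_ms=4, lag_atten_db=0.0, seed=0):
    """Build a lead-lag (source-echo) pair as one mono waveform.

    The lag is a copy of the lead, delayed by `delay_ms` and scaled
    down by `lag_atten_db`; attenuating the lag coarsely mimics the
    manipulation described in the abstract.
    """
    rng = random.Random(seed)
    n_burst = int(sr * burst_ms / 1000)
    n_delay = int(sr * delay_ms / 1000)
    lead = [rng.uniform(-1.0, 1.0) for _ in range(n_burst)]
    gain = 10 ** (-lag_atten_db / 20)   # dB attenuation -> linear amplitude
    out = [0.0] * (n_burst + n_delay)
    for i, s in enumerate(lead):
        out[i] += s                      # leading sound
        out[i + n_delay] += gain * s     # delayed, attenuated lag
    return out
```

In a real experiment the lead and lag would be presented from different directions (or with different interaural cues); here both are summed into one channel purely to show the timing and amplitude relationship.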

Neuron ◽  
2009 ◽  
Vol 62 (1) ◽  
pp. 123-134 ◽  
Author(s):  
Sasha Devore ◽  
Antje Ihlefeld ◽  
Kenneth Hancock ◽  
Barbara Shinn-Cunningham ◽  
Bertrand Delgutte

2020 ◽  
Vol 31 (03) ◽  
pp. 195-208 ◽  
Author(s):  
Erica E. Bennett ◽  
Ruth Y. Litovsky

Spatial hearing abilities in children with bilateral cochlear implants (BiCIs) are typically improved when two implants are used compared with a single implant. However, even with BiCIs, spatial hearing is still worse than that of normal-hearing (NH) age-matched children. Here, we focused on children younger than three years, hence in their toddler years. Prior research with this age group focused on measuring discrimination of sounds from the right versus left. This study measured both discrimination and sound location identification in a nine-alternative forced-choice paradigm using the "reaching for sound" method, whereby children reached for sounding objects as a means of capturing their spatial hearing abilities. Discrimination was measured with sounds randomly presented to the left versus right, with loudspeakers at fixed angles ranging from ±60° to ±15°. On a separate task, sound location identification was measured for locations ranging from ±60° in 15° increments. Participants were thirteen children with BiCIs (27–42 months old) and fifteen age-matched NH toddlers. Discrimination and sound localization tasks were completed by all subjects. For the left–right discrimination task, participants were required to reach a criterion of 4/5 correct trials (80%) at each angular separation prior to beginning the localization task. For sound localization, data were analyzed in two ways. First, percent correct scores were tallied for each participant. Second, for each participant, the root-mean-square error was calculated to determine the average distance between the response and stimulus, indicative of localization accuracy. All BiCI users were able to discriminate left versus right at angles as small as ±15° when listening with two implants; however, performance was significantly worse when listening with a single implant. All NH toddlers also scored >80% correct at ±15°. Sound localization results revealed root-mean-square errors averaging 11.15° in NH toddlers. Children in the BiCI group were generally unable to identify source location on this complex task (average error 37.03°). Although some toddlers with BiCIs are able to localize sound in a manner consistent with NH toddlers, for the majority of toddlers with BiCIs, sound localization abilities are still emerging.
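The root-mean-square error used above as the localization-accuracy measure is straightforward to compute. A minimal sketch (the `rms_localization_error` name is ours, not from the study):

```python
import math

def rms_localization_error(responses_deg, targets_deg):
    """Root-mean-square error between response and stimulus angles,
    the localization-accuracy measure described in the abstract."""
    if len(responses_deg) != len(targets_deg):
        raise ValueError("responses and targets must be paired")
    sq = [(r - t) ** 2 for r, t in zip(responses_deg, targets_deg)]
    return math.sqrt(sum(sq) / len(sq))
```

For example, a perfect responder scores 0°, while responding 0° to a ±30° pair of targets yields an RMS error of about 21.2°.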


2006 ◽  
Vol 95 (6) ◽  
pp. 3571-3584 ◽  
Author(s):  
Matthew W. Spitzer ◽  
Terry T. Takahashi

We examined the accuracy and precision with which the barn owl (Tyto alba) turns its head toward sound sources under conditions that evoke the precedence effect (PE) in humans. Stimuli consisted of 25-ms noise bursts emitted from two sources, separated horizontally by 40°, and temporally by 3–50 ms. At delays from 3 to 10 ms, head turns were always directed at the leading source, and were nearly as accurate and precise as turns toward single sources, indicating that the leading source dominates perception. This lead dominance is particularly remarkable, first, because on some trials, the lagging source was significantly higher in amplitude than the lead, arising from the directionality of the owl's ears, and second, because the temporal overlap of the two sounds can degrade the binaural cues with which the owl localizes sounds. With increasing delays, the influence of the lagging source became apparent as the head saccades became increasingly biased toward the lagging source. Furthermore, on some of the trials at delays ≥20 ms, the owl turned its head, first, in the direction of one source, and then the other, suggesting that it was able to resolve two separately localizable sources. At all delays <50 ms, response latencies were longer for paired sources than for single sources. With the possible exception of response latency, these findings demonstrate that the owl exhibits precedence phenomena in sound localization similar to those in humans and cats, and provide a basis for comparison with neurophysiological data.


2017 ◽  
Vol 26 (4) ◽  
pp. 519-530
Author(s):  
Yunfang Zheng ◽  
Janet Koehnke ◽  
Joan Besing

Purpose This study examined the individual and combined effects of noise and reverberation on the ability of listeners with normal hearing (NH) and with bilateral cochlear implants (BCIs) to localize speech. Method Six adults with BCIs and 10 with NH participated. All subjects completed a virtual localization test in quiet and at 0-, −4-, and −8-dB signal-to-noise ratios (SNRs) in simulated anechoic and reverberant (0.2-, 0.6-, and 0.9-s RT60) environments. BCI users were also tested at +8- and +4-dB SNR. A 3-word phrase was presented at 70 dB SPL from 9 simulated locations in the frontal horizontal plane (±90°), with the noise source at 0°. Results BCI users had significantly poorer localization than listeners with NH in all conditions. BCI users' performance started to decrease at a higher SNR (+4 dB) and shorter RT60 (0.2 s) than listeners with NH (−4 dB and 0.6 s). The combination of noise and reverberation began to degrade localization of BCI users at a higher SNR and a shorter RT60 than listeners with NH. Conclusion The clear effect of noise and reverberation on the performance of BCI users provides information that should be useful for refining cochlear implant processing strategies and developing cochlear implant rehabilitation plans to optimize binaural benefit for BCI users in everyday listening situations.
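Setting a mixture to a target SNR, as in the −8- to +8-dB conditions above, amounts to scaling the noise relative to the signal power. A minimal sketch under the assumption of sample lists; the `scale_noise_to_snr` helper is hypothetical, not from the study:

```python
import math

def scale_noise_to_snr(signal, noise, target_snr_db):
    """Return a copy of `noise` scaled so that the ratio of signal
    power to noise power, in dB, equals `target_snr_db`
    (e.g., 0, -4, or -8 dB as in the abstract)."""
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    gain = math.sqrt(p_sig / (p_noise * 10 ** (target_snr_db / 10)))
    return [gain * n for n in noise]
```

At 0-dB SNR the scaled noise has exactly the same average power as the signal; each 10-dB drop in target SNR raises the noise power tenfold.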


2013 ◽  
Vol 109 (4) ◽  
pp. 924-931 ◽  
Author(s):  
Caitlin S. Baxter ◽  
Brian S. Nelson ◽  
Terry T. Takahashi

Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi, Neuron 67: 643–655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.
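The rule summarized above (a source gains representation at moments when its envelope exceeds the other's while the summed amplitude is rising) can be caricatured in a few lines. This toy sketch is our paraphrase of the described features, not the published Nelson and Takahashi implementation:

```python
def map_representation_strength(env_lead, env_lag):
    """Toy tally of the masking rule described in the abstract: a
    source accrues 'votes' at samples where (a) its envelope exceeds
    the other source's and (b) the summed envelope is rising."""
    lead_votes = lag_votes = 0
    prev_sum = None
    for a, b in zip(env_lead, env_lag):
        s = a + b
        rising = prev_sum is not None and s > prev_sum
        if rising:
            if a > b:
                lead_votes += 1
            elif b > a:
                lag_votes += 1
        prev_sum = s
    return lead_votes, lag_votes
```

With similar envelopes, the lead's peaks precede and dominate the lag's, so the lead collects nearly all the votes; with dissimilar envelopes, the lag retains unmasked peaks of its own and can win.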


2004 ◽  
Vol 92 (4) ◽  
pp. 2051-2070 ◽  
Author(s):  
Matthew W. Spitzer ◽  
Avinash D. S. Bala ◽  
Terry T. Takahashi

Sound localization in echoic conditions depends on a precedence effect (PE), in which the first arriving sound dominates the perceived location of later reflections. Previous studies have demonstrated neurophysiological correlates of the PE in several species, but the underlying mechanisms remain unknown. The present study documents responses of space-specific neurons in the barn owl's inferior colliculus (IC) to stimuli simulating direct sounds and reflections that overlap in time at the listener's ears. Responses to 100-ms noises with lead-lag delays from 1 to 100 ms were recorded from neurons in the space-mapped subdivisions of IC in anesthetized owls (N2O/isoflurane). Responses to a target located at a unit's best location were usually suppressed by a masker located outside the excitatory portion of the spatial receptive field. The least spatially selective units exhibited temporally symmetric effects, in that the amount of suppression was the same whether the masker led or lagged. Such effects mirror the alteration of localization cues caused by acoustic superposition of leading and lagging sounds. In more spatially selective units, the suppression was often temporally asymmetric, being more pronounced when the masker led. The masker often evoked small changes in spatial tuning that were not related to the magnitude of suppressive effects. The association of temporally asymmetric suppression with spatial selectivity suggests that this property emerges within IC, and not at earlier stages of auditory processing. Asymmetric suppression reduces the ability of highly spatially selective neurons to encode the location of lagging sounds, providing a possible basis for the PE.


2020 ◽  
Vol 123 (5) ◽  
pp. 1791-1807 ◽  
Author(s):  
Ryan Dorkoski ◽  
Kenneth E. Hancock ◽  
Gareth A. Whaley ◽  
Timothy R. Wohl ◽  
Noelle C. Stroud ◽  
...  

A “division of labor” has previously been assumed in which the directions of low- and high-frequency sound sources are thought to be encoded by neurons preferentially sensitive to low and high frequencies, respectively. Contrary to this, we found that auditory midbrain neurons encode the directions of both low- and high-frequency sounds regardless of their preferred frequencies. Neural responses were shaped by different sound localization cues depending on the stimulus spectrum—even within the same neuron.


Author(s):  
Huakang Li ◽  
Jie Huang ◽  
Minyi Guo ◽  
Qunfei Zhao

Mobile robots communicating with people would benefit from being able to detect sound sources to help localize interesting events in real-life settings. We propose using a spherical robot with four microphones to determine the spatial locations of multiple sound sources in ordinary rooms. Arrival-time disparities derived from phase-difference histograms are used to calculate the time differences between microphones. A precedence-effect model suppresses the influence of echoes in reverberant environments. To integrate the spatial cues from different microphones, we map the correlations between microphone pairs onto a 3D map corresponding to the azimuth and elevation of sound source direction. Experimental results indicate that the proposed system resolves the sound source distribution clearly and precisely, even for concurrent sources in reverberant environments, using the Echo Avoidance (EA) model.
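The time-difference estimation underlying the phase-difference histograms can be illustrated with a brute-force cross-correlation over candidate lags. This is a generic stand-in for the abstract's method, and the `estimate_delay` helper is an assumption for the example:

```python
def estimate_delay(x, y, max_lag):
    """Estimate the time difference of arrival (in samples) between
    two microphone signals as the lag maximizing their cross-correlation;
    a generic stand-in for the phase-difference-histogram approach."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        c = 0.0
        for i in range(len(x)):
            j = i + lag
            if 0 <= j < len(y):
                c += x[i] * y[j]
        if c > best_corr:
            best_corr, best_lag = c, lag
    return best_lag
```

With four microphones, each of the six pairwise delay estimates constrains the source direction; combining them on an azimuth-elevation grid yields the kind of 3D correlation map the abstract describes.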

