Threshold interaural time differences and the centroid model of sound localization

2013 ◽  
Author(s):  
William M. Hartmann ◽  
Andrew Brughera

eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Antje Ihlefeld ◽  
Nima Alamatsaz ◽  
Robert M Shapley

Human sound localization is an important computation performed by the brain. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here we observe that two prevalent theories of sound localization make opposing predictions. The labelled-line model encodes location through tuned representations of spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike-rate and predicts that perceived direction becomes medially biased at low sound levels. Here, behavioral experiments find that softer sounds are perceived closer to midline than louder sounds, favoring rate-coding models of human sound localization. Analogously, visual depth perception, which is based on interocular disparity, depends on the contrast of the target. The similar results in hearing and vision suggest that the brain may use a canonical computation of location: encoding perceived location through population spike rate relative to baseline.
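The rate-coding prediction in this abstract can be illustrated with a toy version of the hemispheric-difference model. This is a minimal sketch, not the authors' model: the sigmoid tuning, the `slope_us` constant, and the level-dependent `gain` values are all illustrative assumptions. The point it demonstrates is only the qualitative one made above: under a raw rate-difference readout, the same ITD yields a smaller (more medial) response at lower sound level.

```python
import math

def hemisphere_rate(itd_us, gain, slope_us=200.0):
    # Spike rate of one hemispheric channel: a sigmoid of ITD,
    # scaled by a level-dependent gain (both parameters hypothetical).
    return gain / (1.0 + math.exp(-itd_us / slope_us))

def perceived_laterality(itd_us, gain):
    # Rate-difference code: right-channel rate minus left-channel rate.
    # The two channels mirror each other around ITD = 0.
    right = hemisphere_rate(itd_us, gain)
    left = hemisphere_rate(-itd_us, gain)
    return right - left

loud = perceived_laterality(300.0, gain=100.0)  # same ITD, high level
soft = perceived_laterality(300.0, gain=20.0)   # same ITD, low level
# soft < loud: the softer sound produces a smaller rate difference,
# i.e. a percept biased toward the midline, as the abstract reports.
```

A labelled-line model, by contrast, would read out the *identity* of the most active tuned channel, which does not shrink with level, so it predicts level invariance.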


2011 ◽  
Vol 106 (1) ◽  
pp. 4-14 ◽  
Author(s):  
R. Michael Burger ◽  
Iwao Fukui ◽  
Harunori Ohmori ◽  
Edwin W. Rubel

Interaural time differences (ITDs) are the primary cue animals, including humans, use to localize low-frequency sounds. In vertebrate auditory systems, dedicated ITD processing neural circuitry performs an exacting task, the discrimination of microsecond differences in stimulus arrival time at the two ears by coincidence-detecting neurons. These neurons modulate responses over their entire dynamic range to sounds differing in ITD by mere hundreds of microseconds. The well-understood function of this circuitry in birds has provided a fruitful system to investigate how inhibition contributes to neural computation at the synaptic, cellular, and systems level. Our recent studies in the chicken have made significant progress in bringing together many of these findings to provide a cohesive picture of inhibitory function.


1991 ◽  
Vol 62 (6) ◽  
pp. 1211 ◽  
Author(s):  
Daniel H. Ashmead ◽  
DeFord L. Davis ◽  
Tracy Whalen ◽  
Richard D. Odom

2019 ◽  
Vol 23 ◽  
pp. 233121651984387 ◽  

Author(s):  
Stefan Zirn ◽  
Julian Angermeier ◽  
Susan Arndt ◽  
Antje Aschendorff ◽  
Thomas Wesarg

In users of a cochlear implant (CI) together with a contralateral hearing aid (HA), so-called bimodal listeners, differences in processing latency between the digital HA and the CI of up to 9 ms are constantly superimposed on interaural time differences. In the present study, the effect of this device delay mismatch on sound localization accuracy was investigated. For this purpose, localization accuracy in the frontal horizontal plane was measured with the original and the minimized device delay mismatch. The reduction was achieved by delaying the CI stimulation according to the delay of the individually worn HA. For this, a portable, programmable, battery-powered delay line based on a ring buffer running on a microcontroller was designed and assembled. After an acclimatization period of 1 hr to the delayed CI stimulation, the nine bimodal study participants showed a highly significant improvement in localization accuracy of 11.6% compared with the everyday situation without the delay line (p < .01). In conclusion, delaying CI stimulation to minimize the device delay mismatch appears to be a promising method for increasing sound localization accuracy in bimodal listeners.
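The core of the device described here, a fixed-delay ring buffer, is simple enough to sketch. The version below is illustrative Python rather than the authors' microcontroller firmware; the 9-step delay and the frame-per-millisecond framing are assumptions chosen only to mirror the 9-ms mismatch figure above.

```python
class DelayLine:
    # Fixed-delay ring buffer: each push returns the sample that
    # entered `delay` steps earlier, so the stream emerges intact
    # but uniformly delayed.
    def __init__(self, delay, fill=0.0):
        self.buf = [fill] * delay
        self.pos = 0

    def push(self, sample):
        out = self.buf[self.pos]              # oldest stored sample
        self.buf[self.pos] = sample           # overwrite with newest
        self.pos = (self.pos + 1) % len(self.buf)
        return out

# Delay the CI stream by 9 frames (e.g. 9 ms at one frame per ms).
dl = DelayLine(delay=9)
out = [dl.push(x) for x in range(20)]
# The first 9 outputs are the fill value; input sample 0 emerges at step 9.
```

A ring buffer is the natural choice on a microcontroller because each sample costs one read, one write, and one index increment, with constant memory equal to the delay length.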


1999 ◽  
Vol 09 (05) ◽  
pp. 447-452 ◽  
Author(s):  
CARSTEN SCHAUER ◽  
PETER PASCHKE

This paper describes a spike-based model of binaural sound localization using interaural time differences (ITDs). To handle the problem of temporal coding and to facilitate a hardware implementation, all neurons are simulated with a spike response model, which includes postsynaptic potentials (PSPs) and a refractory period. A winner-take-all (WTA) network selects the dominant source from the representation of the sound's angles of incidence, and can be biased by multisensory support. We use simulations on real audio data to investigate the function and practical application of the system.
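The WTA selection stage mentioned above can be sketched with a simple iterative competition over rate-coded units; this is a generic WTA, not the paper's spike response model, and the inhibition constant, iteration count, and candidate angles are all hypothetical. An optional `bias` argument stands in for the multisensory support the abstract mentions.

```python
def winner_take_all(activations, bias=None, inhibition=0.2, steps=50):
    # Iterative winner-take-all: each unit keeps its own activation
    # but is suppressed in proportion to the total activity of its
    # competitors; weak units are driven to zero and one unit survives.
    a = list(activations)
    if bias is not None:
        a = [x + b for x, b in zip(a, bias)]  # e.g. multisensory support
    for _ in range(steps):
        total = sum(a)
        a = [max(0.0, x - inhibition * (total - x)) for x in a]
    return a.index(max(a))

# Units represent candidate angles of incidence; unit 2 (0 degrees,
# straight ahead) carries the most energy in this toy input.
angles = [-90, -45, 0, 45, 90]
act = [0.1, 0.3, 0.9, 0.4, 0.2]
winner = winner_take_all(act)
# angles[winner] is the selected source direction: 0 degrees
```

In the paper's spiking setting the same competition is implemented with PSPs and refractoriness rather than an explicit subtraction, but the selection behavior is analogous.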


2013 ◽  
Vol 133 (1) ◽  
pp. 417-424 ◽  
Author(s):  
Rachel N. Dingle ◽  
Susan E. Hall ◽  
Dennis P. Phillips
