Neural derivation of sound source location: Resolution of spatial ambiguities in binaural cues

1992 ◽  
Vol 91 (2) ◽  
pp. 1015-1027 ◽  
Author(s):  
Michael S. Brainard ◽  
Eric I. Knudsen ◽  
Steven D. Esterly
2016 ◽  
Vol 27 (07) ◽  
pp. 588-600 ◽  
Author(s):  
W. Owen Brimijoin ◽  
Michael A. Akeroyd

Background: There are two cues that listeners use to disambiguate the front/back location of a sound source: high-frequency spectral cues associated with the head and pinnae, and self-motion–related binaural cues. The use of these cues can be compromised in listeners with hearing impairment and in users of hearing aids.

Purpose: To determine how age, hearing impairment, and the use of hearing aids affect a listener's ability to distinguish front from back on the basis of self-motion and spectral cues.

Research Design: We used a previously published front/back illusion: signals whose physical source location is rotated around the head at twice the angular rate of the listener's head movements are perceived in the hemifield opposite to where they physically are. In normal-hearing listeners, the strength of this illusion decreases as a function of low-pass filter cutoff frequency; this decrease results from a conflict between spectral cues and dynamic binaural cues to sound source location. The illusion was used as an assay of self-motion processing in listeners with hearing impairment and users of hearing aids.

Study Sample: We recruited 40 hearing-impaired participants with an average age of 62 yr. The data for three listeners were discarded because they did not move their heads enough during the experiment.

Data Collection and Analysis: Listeners sat at the center of a ring of 24 loudspeakers, turned their heads back and forth, and used a wireless keypad to report the front/back location of statically presented signals and of dynamically moving signals with illusory locations. Front/back accuracy for static signals, the strength of front/back illusions, and the minimum audible movement angle were measured for each listener in each condition. All measurements were made in each listener both aided and unaided.

Results: Hearing-impaired listeners were less accurate at front/back discrimination in both static and illusory conditions. Neither condition was affected by high-frequency content. Hearing aids had heterogeneous effects from listener to listener, but, independent of other factors, listeners wearing aids on average exhibited a spectrally dependent increase in "front" responses: the more high-frequency energy in the signal, the more likely they were to report it as coming from the front.

Conclusions: Hearing impairment was associated with decreased accuracy of self-motion processing for both static and moving signals. Hearing aids may not always reproduce dynamic self-motion–related cues with sufficient fidelity to support reliable front/back discrimination.
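The geometry behind the front/back illusion described above can be made concrete. The sketch below (an illustration, not the authors' code; the sinusoidal cue model and function names are assumptions) shows why a source rotated at twice the head's angular rate produces the same lateral binaural cue, for every head angle, as a static source at the front/back-mirrored location:

```python
import math

def lateral_cue(azimuth_deg):
    # Crude proxy for the binaural (ITD/ILD) cue: it depends chiefly on
    # the lateral component of the source direction, sin(azimuth).
    return math.sin(math.radians(azimuth_deg))

def dynamic_cue(source_start_deg, head_deg):
    # Source physically rotated at twice the head's angular rate:
    # world azimuth = start + 2*head; head-relative azimuth subtracts head.
    world_az = source_start_deg + 2.0 * head_deg
    return lateral_cue(world_az - head_deg)

def mirrored_static_cue(source_start_deg, head_deg):
    # The front/back mirror of azimuth a is 180 - a
    # (e.g., 30 deg in front maps to 150 deg behind).
    mirror_az = 180.0 - source_start_deg
    return lateral_cue(mirror_az - head_deg)

# For every head angle, the dynamic source yields the same lateral cue
# as a static source at the front/back-mirrored location, so a listener
# relying on dynamic binaural cues localizes it to the wrong hemifield.
for head in (-30.0, -10.0, 0.0, 15.0, 40.0):
    assert abs(dynamic_cue(30.0, head) - mirrored_static_cue(30.0, head)) < 1e-9
```

The identity follows from sin(a + h) = sin(180° − a − h): the head-relative azimuth of the doubled-rate source matches that of its mirror image under a static-world assumption.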


2001 ◽  
Vol 109 (1) ◽  
pp. 430-433 ◽  
Author(s):  
Karsten Brensing ◽  
Katrin Linke ◽  
Dietmar Todt

2002 ◽  
Vol 87 (4) ◽  
pp. 1749-1762 ◽  
Author(s):  
Shigeto Furukawa ◽  
John C. Middlebrooks

Previous studies have demonstrated that the spike patterns of cortical neurons vary systematically as a function of sound-source location such that the response of a single neuron can signal the location of a sound source throughout 360° of azimuth. The present study examined specific features of spike patterns that might transmit information related to sound-source location. Analysis was based on responses of well-isolated single units recorded from cortical area A2 in α-chloralose-anesthetized cats. Stimuli were 80-ms noise bursts presented from loudspeakers in the horizontal plane; source azimuths ranged through 360° in 20° steps. Spike patterns were averaged across samples of eight trials. A competitive artificial neural network (ANN) identified sound-source locations by recognizing spike patterns; the ANN was trained using the learning vector quantization learning rule. The information about stimulus location that was transmitted by spike patterns was computed from joint stimulus-response probability matrices. Spike patterns were manipulated in various ways to isolate particular features. Full-spike patterns, which contained all spike-count information and spike timing with 100-μs precision, transmitted the most stimulus-related information. Transmitted information was sensitive to disruption of spike timing on a scale of more than ∼4 ms and was reduced by an average of ∼35% when spike-timing information was obliterated entirely. In a condition in which all but the first spike in each pattern were eliminated, transmitted information decreased by an average of only ∼11%. In many cases, that condition showed essentially no loss of transmitted information. Three unidimensional features were extracted from spike patterns. Of those features, spike latency transmitted ∼60% more information than that transmitted either by spike count or by a measure of latency dispersion. 
Information transmission by spike patterns recorded on single trials was substantially reduced compared with the information transmitted by averages of eight trials. In a comparison of averaged and nonaveraged responses, however, the information transmitted by latencies was reduced by only ∼29%, whereas information transmitted by spike counts was reduced by 79%. Spike counts clearly are sensitive to sound-source location and could transmit information about sound-source locations. Nevertheless, the present results demonstrate that the timing of the first poststimulus spike carries a substantial amount, probably the majority, of the location-related information present in spike patterns. The results indicate that any complete model of the cortical representation of auditory space must incorporate the temporal characteristics of neuronal response patterns.
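The abstract's "transmitted information" is the mutual information between stimulus location and the classifier's response, estimated from the joint stimulus-response probability matrix. A minimal sketch of that estimate (illustrative only; the function name and toy data are assumptions, and the paper's ANN-based decoding step is omitted):

```python
import math
from collections import Counter

def transmitted_information(stimuli, responses):
    """Mutual information (bits) between paired stimulus and response
    labels, estimated from their joint probability matrix."""
    n = len(stimuli)
    joint = Counter(zip(stimuli, responses))   # joint counts N(s, r)
    p_s = Counter(stimuli)                     # marginal counts N(s)
    p_r = Counter(responses)                   # marginal counts N(r)
    info = 0.0
    for (s, r), count in joint.items():
        p_sr = count / n
        # p_sr * log2( p_sr / (p(s) * p(r)) ), with marginals as counts/n
        info += p_sr * math.log2(p_sr * n * n / (p_s[s] * p_r[r]))
    return info

# Toy example: a decoder that always answers correctly over 4 equally
# likely source locations transmits log2(4) = 2 bits.
stims = [0, 1, 2, 3] * 10
resps = list(stims)  # perfect identification
assert abs(transmitted_information(stims, resps) - 2.0) < 1e-9
```

With 18 source azimuths (360° in 20° steps) the ceiling is log2(18) ≈ 4.17 bits; degrading spike timing, as in the abstract, lowers the estimate toward the information carried by spike counts alone. Note that plug-in estimates like this one are upwardly biased for small trial counts.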


2000 ◽  
Vol 20 (3) ◽  
pp. 1216-1228 ◽  
Author(s):  
Shigeto Furukawa ◽  
Li Xu ◽  
John C. Middlebrooks

1983 ◽  
Vol 265 (2) ◽  
pp. 317-321 ◽  
Author(s):  
L.M. Aitkin ◽  
J.A. Rawson
