The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl

2013 · Vol 109 (4) · pp. 924–931
Author(s): Caitlin S. Baxter, Brian S. Nelson, Terry T. Takahashi

Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi, Neuron 67: 643–655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.
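The two features the abstract attributes to space-map neurons — firing when the in-field stimulus is louder than the out-of-field one, and when the averaged amplitude of both sounds is rising — can be caricatured in a few lines. This is a minimal toy sketch of that idea, not the Nelson–Takahashi model itself; the function name and the sample-counting measure of "representation strength" are assumptions for illustration.

```python
import numpy as np

def map_strength(env_a, env_b):
    """Toy strength of source A's space-map representation: count the
    samples where A's envelope exceeds B's while the averaged amplitude
    of both sounds is rising (the two discharge conditions above)."""
    avg = (env_a + env_b) / 2.0
    rising = np.diff(avg) > 0           # averaged amplitude rising
    louder = env_a[1:] > env_b[1:]      # A louder than B at that moment
    return int(np.sum(rising & louder))
```

With identical envelopes offset by a short delay, the leading copy captures most of the rising-and-louder samples, which mirrors why the echo's map representation is weakened at short delays.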

2006 · Vol 95 (6) · pp. 3571–3584
Author(s): Matthew W. Spitzer, Terry T. Takahashi

We examined the accuracy and precision with which the barn owl (Tyto alba) turns its head toward sound sources under conditions that evoke the precedence effect (PE) in humans. Stimuli consisted of 25-ms noise bursts emitted from two sources, separated horizontally by 40°, and temporally by 3–50 ms. At delays from 3 to 10 ms, head turns were always directed at the leading source, and were nearly as accurate and precise as turns toward single sources, indicating that the leading source dominates perception. This lead dominance is particularly remarkable, first, because on some trials, the lagging source was significantly higher in amplitude than the lead, arising from the directionality of the owl's ears, and second, because the temporal overlap of the two sounds can degrade the binaural cues with which the owl localizes sounds. With increasing delays, the influence of the lagging source became apparent as the head saccades became increasingly biased toward the lagging source. Furthermore, on some of the trials at delays ≥20 ms, the owl turned its head, first, in the direction of one source, and then the other, suggesting that it was able to resolve two separately localizable sources. At all delays <50 ms, response latencies were longer for paired sources than for single sources. With the possible exception of response latency, these findings demonstrate that the owl exhibits precedence phenomena in sound localization similar to those in humans and cats, and provide a basis for comparison with neurophysiological data.


2021 · Vol 17 (11) · pp. e1009569
Author(s): Julia C. Gorman, Oliver L. Tufte, Anna V. R. Miller, William M. DeBello, José L. Peña, ...

Emergent response properties of sensory neurons depend on circuit connectivity and somatodendritic processing. Neurons of the barn owl’s external nucleus of the inferior colliculus (ICx) display emergence of spatial selectivity. These neurons use interaural time difference (ITD) as a cue for the horizontal direction of sound sources. ITD is detected by upstream brainstem neurons with narrow frequency tuning, resulting in spatially ambiguous responses. This spatial ambiguity is resolved by ICx neurons integrating inputs over frequency, a processing step relevant to sound localization across species. Previous models have predicted that ICx neurons function as point neurons that linearly integrate inputs across frequency. However, the complex dendritic trees and spines of ICx neurons raise the question of whether this prediction is accurate. Data from in vivo intracellular recordings of ICx neurons were used to address this question. Results revealed diverse frequency integration properties: some ICx neurons showed responses consistent with the point-neuron hypothesis and others with nonlinear dendritic integration. Modeling showed that varied connectivity patterns and forms of dendritic processing may underlie the observed frequency integration properties of ICx neurons. These results corroborate the ability of neurons with complex dendritic trees to implement diverse linear and nonlinear integration of synaptic inputs, of relevance for adaptive coding and learning, and support a fundamental mechanism in sound localization.


2013 · Vol 109 (6) · pp. 1658–1668
Author(s): Daniel J. Tollin, Janet L. Ruhland, Tom C. T. Yin

Sound localization along the azimuthal dimension depends on interaural time and level disparities, whereas localization in elevation depends on broadband power spectra resulting from the filtering properties of the head and pinnae. We trained cats with their heads unrestrained, using operant conditioning to indicate the apparent locations of sounds via gaze shift. Targets consisted of broadband (BB), high-pass (HP), or low-pass (LP) noise, tones from 0.5 to 14 kHz, and 1/6 octave narrow-band (NB) noise with center frequencies ranging from 6 to 16 kHz. For each sound type, localization performance was summarized by the slope of the regression relating actual gaze shift to desired gaze shift. Overall localization accuracy for BB noise was comparable in azimuth and in elevation but was markedly better in azimuth than in elevation for sounds with limited spectra. Gaze shifts to targets in azimuth were most accurate for BB noise, less accurate for HP, LP, and NB sounds, and considerably less accurate for tones. In elevation, cats were most accurate in localizing BB noise, somewhat less accurate for HP, and less still for LP noise (although still with slopes ∼0.60), but they localized NB noise much worse and were unable to localize tones. Deterioration of localization as bandwidth narrows is consistent with the hypothesis that spectral information is critical for sound localization in elevation. For NB noise or tones in elevation, unlike humans, most cats did not have unique responses at different frequencies, and some appeared to respond with a “default” location at all frequencies.
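The performance summary used here — the slope of the regression relating actual gaze shift to desired gaze shift — is a standard least-squares fit. A minimal sketch of that metric (function name assumed for illustration):

```python
import numpy as np

def localization_slope(desired, actual):
    """Least-squares slope of actual gaze shift regressed on desired
    gaze shift; a slope of 1.0 means accurate localization on average,
    and slopes below 1.0 indicate undershooting of the target."""
    slope, _intercept = np.polyfit(np.asarray(desired, float),
                                   np.asarray(actual, float), 1)
    return float(slope)
```

A cat that consistently undershoots elevation targets by 40% would yield a slope near 0.60, matching the LP-noise figure quoted above.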


2008 · Vol 20 (3) · pp. 603–635
Author(s): Murat Aytekin, Cynthia F. Moss, Jonathan Z. Simon

Sound localization is known to be a complex phenomenon, combining multisensory information processing, experience-dependent plasticity, and movement. Here we present a sensorimotor model that addresses the question of how an organism could learn to localize sound sources without any a priori neural representation of its head-related transfer function or prior experience with auditory spatial information. We demonstrate quantitatively that the experience of the sensory consequences of its voluntary motor actions allows an organism to learn the spatial location of any sound source. Using examples from humans and echolocating bats, our model shows that a naive organism can learn the auditory space based solely on acoustic inputs and their relation to motor states.


2015 · Vol 114 (5) · pp. 2991–3001
Author(s): Andrew D. Brown, Heath G. Jones, Alan Kan, Tanvi Thakkar, G. Christopher Stecker, ...

Normal-hearing human listeners and a variety of studied animal species localize sound sources accurately in reverberant environments by responding to the directional cues carried by the first-arriving sound rather than spurious cues carried by later-arriving reflections, which are not perceived discretely. This phenomenon is known as the precedence effect (PE) in sound localization. Despite decades of study, the biological basis of the PE remains unclear. Though the PE was once widely attributed to central processes such as synaptic inhibition in the auditory midbrain, a more recent hypothesis holds that the PE may arise essentially as a by-product of normal cochlear function. Here we evaluated the PE in a unique human patient population with demonstrated sensitivity to binaural information but without functional cochleae. Users of bilateral cochlear implants (CIs) were tested in a psychophysical task that assessed the number and location(s) of auditory images perceived for simulated source-echo (lead-lag) stimuli. A parallel experiment was conducted in a group of normal-hearing (NH) listeners. Key findings were as follows: 1) Subjects in both groups exhibited lead-lag fusion. 2) Fusion was marginally weaker in CI users than in NH listeners but could be augmented by systematically attenuating the amplitude of the lag stimulus to coarsely simulate adaptation observed in acoustically stimulated auditory nerve fibers. 3) Dominance of the lead in localization varied substantially among both NH and CI subjects but was evident in both groups. Taken together, the data suggest that aspects of the PE can be elicited in CI users, who lack functional cochleae, indicating that central neural mechanisms are sufficient to produce the PE.


2008 · Vol 211 (18) · pp. 2976–2988
Author(s): L. Hausmann, D. T. T. Plachta, M. Singheiser, S. Brill, H. Wagner

2017
Author(s): Fanny Cazettes, Brian J. Fischer, Michael V. Beckert, Jose L. Pena

Abstract
The midbrain map of auditory space commands sound-orienting responses in barn owls. Owls precisely localize sounds in frontal space but underestimate the direction of peripheral sound sources. This bias for central locations was proposed to be adaptive to the decreased reliability in the periphery of sensory cues used for sound localization by the owl. Understanding the neural pathway supporting this biased behavior provides a means to address how adaptive motor commands are implemented by neurons. Here we find that the sensory input for sound direction is weighted by its reliability in premotor neurons of the owl’s midbrain tegmentum such that the mean population firing rate approximates the head-orienting behavior. We provide evidence that this coding may emerge through convergence of upstream projections from the midbrain map of auditory space. We further show that manipulating the sensory input yields changes predicted by the convergent network in both premotor neural responses and behavior. This work demonstrates how a topographic sensory representation can be linearly read out to adjust behavioral responses by the reliability of the sensory input.

Significance statement
This research shows how statistics of the sensory input can be integrated into a behavioral command by readout of a sensory representation. The firing rate of midbrain premotor neurons receiving sensory information from a topographic representation of auditory space is weighted by the reliability of sensory cues. We show that these premotor responses are consistent with a weighted convergence from the topographic sensory representation. This convergence was also tested behaviorally, where manipulation of stimulus properties led to bidirectional changes in sound localization errors. Thus a topographic representation of auditory space is translated into a premotor command for sound localization that is modulated by sensory reliability.
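The linear readout described here — a premotor command formed by converging projections from a topographic map, with each location's contribution weighted by its firing rate — is essentially a population-vector average. A minimal sketch of that readout (names assumed, not the authors' model):

```python
import numpy as np

def readout_direction(preferred_dirs, rates):
    """Population-vector readout: the premotor command is the
    firing-rate-weighted mean of the map neurons' preferred directions."""
    rates = np.asarray(rates, float)
    dirs = np.asarray(preferred_dirs, float)
    return float(np.dot(dirs, rates) / rates.sum())
```

If cue reliability down-weights the firing of peripheral units, the weighted mean is pulled toward the center — the underestimation of peripheral source directions described above.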


e-Neuroforum · 2015 · Vol 21 (1)
Author(s): C. Leibold, B. Grothe

Abstract
The Jeffress model for the computation and encoding of interaural time differences (ITDs) is one of the most widely known theoretical models of a neuronal microcircuit. In archosaurs (birds and reptiles), several features envisioned by Jeffress in 1948 seem to be implemented, such as a topographic map of space and axonal delay lines. In mammals, however, most of the model's predictions could not be verified or have been disproved. This has led to an ongoing competition between alternative models and hypotheses that is far from settled. In particular, the role of the feed-forward inhibitory inputs to the binaural coincidence-detector neurons in the medial superior olive (MSO) remains a matter of debate. In this paper, we review the present state of the field and indicate what, in our opinion, are the most important gaps in our understanding of the mammalian circuitry. Approaching these issues requires integrating all levels of neuroscience, from cellular biophysics to behavior and even evolution.
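The Jeffress circuit — an array of coincidence detectors fed through opposed axonal delay lines, so that the detector whose internal delay compensates the external ITD fires most — is computationally a cross-correlation over candidate lags. A toy sketch of that computation (circular shift via `np.roll` is a simplification, not part of the original model):

```python
import numpy as np

def jeffress_itd(left, right, fs, max_lag):
    """Jeffress-style ITD estimate: each coincidence detector multiplies
    the two ears' signals at one internal delay; the best-matching delay
    (peak of the cross-correlation) gives the ITD in seconds."""
    lags = list(range(-max_lag, max_lag + 1))
    coincidence = [float(np.sum(left * np.roll(right, d))) for d in lags]
    return lags[int(np.argmax(coincidence))] / fs
```

In the archosaur reading, `lags` corresponds to the topographic axis of the map; the mammalian debate reviewed here concerns whether the MSO implements anything like this labeled-line arrangement.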


2013 · Vol 44 (1) · pp. 16–25
Author(s): Sabrina Pierucci, Olivier Klein, Andrea Carnaghi

This article investigates the role of relational motives in the saying-is-believing effect (Higgins & Rholes, 1978). Building on shared reality theory, we expected this effect to be most likely when communicators were motivated to “get along” with the audience. In the current study, participants were asked to describe an ambiguous target to an audience who either liked or disliked the target. The audience had been previously evaluated as a desirable vs. undesirable communication partner. Only participants who communicated with a desirable audience tuned their messages to suit their audience’s attitude toward the target. In line with predictions, they also displayed an audience-congruent memory bias in later recall.

