The "opposite direction shift" phenomenon in sound localization using two sound sources

2001 ◽  
Vol 110 (5) ◽  
pp. 2679-2679
Author(s):  
Hiroyuki Fujii ◽  
Kazuhiko Kakehi


Acta Acustica ◽
2020 ◽  
Vol 5 ◽  
pp. 3
Author(s):  
Aida Hejazi Nooghabi ◽  
Quentin Grimal ◽  
Anthony Herrel ◽  
Michael Reinwald ◽  
Lapo Boschi

We implement a new algorithm to model acoustic wave propagation through and around a dolphin skull, using the k-Wave software package [1]. The equation of motion is integrated numerically in a complex three-dimensional structure via a pseudospectral scheme which, importantly, accounts for lateral heterogeneities in the mechanical properties of bone. Modeling wave propagation in the skull of dolphins contributes to our understanding of how their sound localization and echolocation mechanisms work. Dolphins are known to be highly effective at localizing sound sources; in particular, they have been shown to be equally sensitive to changes in the elevation and azimuth of the sound source, while other studied species, e.g. humans, are much more sensitive to the latter than to the former. A laboratory experiment conducted by our team on a dry skull [2] has shown that sound reverberated in bones could possibly play an important role in enhancing localization accuracy, and it has been speculated that the dolphin sound localization system could somehow rely on the analysis of this information. We employ our new numerical model to simulate the response of the same skull used by [2] to sound sources at a wide and dense set of locations on the vertical plane. This work is the first step towards the implementation of a new tool for modeling source (echo)location in dolphins; in future work, this will allow us to effectively explore a wide variety of emitted signals and anatomical features.
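To make the numerical approach concrete, below is a minimal two-dimensional pseudospectral sketch in Python: the Laplacian is evaluated spectrally via FFTs and the wavefield is stepped with a leapfrog scheme through a medium whose sound speed varies laterally. All grid and material values are illustrative placeholders, and k-Wave itself goes well beyond this (k-space dispersion correction, absorption, absorbing boundary layers, full 3-D support).

```python
# Minimal 2-D pseudospectral acoustic solver (illustrative sketch only;
# the k-Wave package adds k-space correction, absorption, PML boundaries,
# and 3-D heterogeneous media). All values are placeholders.
import numpy as np

N, dx = 256, 1e-3                      # grid points per side, spacing [m]
c = np.full((N, N), 1500.0)            # background sound speed [m/s] (water)
c[100:140, 80:180] = 3000.0            # crude "bone" inclusion (hypothetical)

dt = 0.3 * dx / c.max()                # CFL-limited time step
k = 2 * np.pi * np.fft.fftfreq(N, dx)  # angular wavenumbers
k2 = k[:, None] ** 2 + k[None, :] ** 2 # |k|^2 on the 2-D grid

p_old = np.zeros((N, N))
p = np.zeros((N, N))
p[N // 2, 20] = 1.0                    # impulsive point source

for _ in range(500):
    # spectral Laplacian: ifft( -|k|^2 * fft(p) ), exact to grid resolution
    lap = np.fft.ifft2(-k2 * np.fft.fft2(p)).real
    p_new = 2 * p - p_old + (dt * c) ** 2 * lap   # leapfrog in time
    p_old, p = p, p_new

print("peak pressure after 500 steps:", np.abs(p).max())
```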


2021 ◽  
Author(s):  
Guus C. van Bentum ◽  
John Van Opstal ◽  
Marc Mathijs van Wanrooij

Sound localization and identification are challenging in acoustically rich environments, and the relation between these two processes is still poorly understood. As natural sound sources rarely occur exactly simultaneously, we wondered whether the auditory system could identify ('what') and localize ('where') two spatially separated sounds with synchronous onsets. While listeners typically report hearing a single source at an average location, one study found that both sounds may be accurately localized if listeners are explicitly told that two sources exist. Here we tested whether simultaneous source identification (one vs. two) and localization are possible, by letting listeners choose to make either one or two head-orienting saccades to the perceived location(s). Results show that listeners could identify two sounds only when they were presented on different sides of the head, and that identification accuracy increased with their spatial separation. Notably, listeners were unable to accurately localize either sound, irrespective of whether one or two sounds were identified. Instead, the first (or only) response always landed near the average location, while second responses were unrelated to the targets. We conclude that localization of synchronous sounds in the absence of prior information is impossible. We discuss that the putative cortical 'what' pathway may not transmit relevant information to the 'where' pathway. We examine how a broadband interaural correlation cue could help to correctly identify the presence of two sounds without enabling their localization. We propose that the persistent averaging behavior reveals that the 'where' system intrinsically assumes that synchronous sounds originate from a single source.
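The broadband interaural correlation cue invoked here can be illustrated with a toy binaural simulation (a pure-ITD ear model with hypothetical numbers, no HRTF filtering): a single source yields near-unity peak interaural correlation at its ITD, whereas two synchronous sources at different ITDs pull the peak well below one, signaling "two" without saying "where".

```python
# Toy demonstration: one source -> peak interaural correlation near 1;
# two synchronous sources at different ITDs -> clearly lower peak.
import numpy as np

fs = 48000
rng = np.random.default_rng(0)

def ear_signals(itds_samples):
    """Left/right mixtures of independent noises, each lagged by its ITD."""
    n = fs // 10                                  # 100-ms bursts
    left = np.zeros(n + 64)
    right = np.zeros(n + 64)
    for itd in itds_samples:
        s = rng.standard_normal(n)
        left[32:32 + n] += s
        right[32 + itd:32 + itd + n] += s         # interaural delay only
    return left, right

def peak_iac(left, right, max_lag=32):
    """Peak normalized interaural cross-correlation over +/- max_lag."""
    denom = np.sqrt(np.dot(left, left) * np.dot(right, right))
    return max(np.dot(left, np.roll(right, -lag)) / denom
               for lag in range(-max_lag, max_lag + 1))

print("one source :", round(peak_iac(*ear_signals([+12])), 3))       # ~1.0
print("two sources:", round(peak_iac(*ear_signals([+12, -12])), 3))  # ~0.5
```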


2021 ◽  
Vol 17 (11) ◽  
pp. e1009569
Author(s):  
Julia C. Gorman ◽  
Oliver L. Tufte ◽  
Anna V. R. Miller ◽  
William M. DeBello ◽  
José L. Peña ◽  
...  

Emergent response properties of sensory neurons depend on circuit connectivity and somatodendritic processing. Neurons of the barn owl's external nucleus of the inferior colliculus (ICx) display emergent spatial selectivity. These neurons use interaural time difference (ITD) as a cue for the horizontal direction of sound sources. ITD is detected by upstream brainstem neurons with narrow frequency tuning, resulting in spatially ambiguous responses. This spatial ambiguity is resolved by ICx neurons integrating inputs over frequency, a computation relevant to sound localization across species. Previous models have predicted that ICx neurons function as point neurons that linearly integrate inputs across frequency. However, the complex dendritic trees and spines of ICx neurons raise the question of whether this prediction is accurate. Data from in vivo intracellular recordings of ICx neurons were used to address this question. Results revealed diverse frequency integration properties: some ICx neurons showed responses consistent with the point-neuron hypothesis, and others with nonlinear dendritic integration. Modeling showed that varied connectivity patterns and forms of dendritic processing may underlie the frequency integration observed in ICx neurons. These results corroborate the ability of neurons with complex dendritic trees to implement diverse linear and nonlinear integration of synaptic inputs, a capability relevant to adaptive coding and learning that supports a fundamental mechanism in sound localization.
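The two integration schemes at issue can be contrasted with a toy model (all numbers hypothetical): per-frequency inputs are summed linearly at the soma in the point-neuron case, or passed through a saturating per-branch nonlinearity first in the dendritic case. Two inputs with the same total drive then dissociate the two models.

```python
# Toy contrast of point-neuron vs. dendritic frequency integration
# (equal weights and tanh saturation are arbitrary modeling choices).
import numpy as np

channels = 8                                # ITD-tuned inputs at 8 frequencies
weights = np.ones(channels)                 # equal synaptic weights (toy choice)

def point_neuron(drive):
    """Linear somatic summation across frequency channels."""
    return float(weights @ drive)

def dendritic_neuron(drive, gain=3.0):
    """Each branch saturates (tanh) before somatic summation."""
    return float(np.tanh(gain * weights * drive).sum())

focused = np.zeros(channels); focused[0] = 4.0   # one strong channel
spread = np.full(channels, 0.5)                  # same total drive, spread out

for name, drive in [("focused", focused), ("spread", spread)]:
    print(f"{name:8s} linear={point_neuron(drive):5.2f} "
          f"dendritic={dendritic_neuron(drive):5.2f}")
# Linear sums are identical (4.00) while dendritic outputs differ
# (~1.00 vs ~7.24): the asymmetry that distinguishes the two modes.
```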


2006 ◽  
Vol 95 (6) ◽  
pp. 3571-3584 ◽  
Author(s):  
Matthew W. Spitzer ◽  
Terry T. Takahashi

We examined the accuracy and precision with which the barn owl (Tyto alba) turns its head toward sound sources under conditions that evoke the precedence effect (PE) in humans. Stimuli consisted of 25-ms noise bursts emitted from two sources, separated horizontally by 40° and temporally by 3–50 ms. At delays from 3 to 10 ms, head turns were always directed at the leading source and were nearly as accurate and precise as turns toward single sources, indicating that the leading source dominates perception. This lead dominance is particularly remarkable, first, because on some trials the lagging source was significantly higher in amplitude than the lead, owing to the directionality of the owl's ears, and second, because the temporal overlap of the two sounds can degrade the binaural cues with which the owl localizes sounds. With increasing delays, the influence of the lagging source became apparent as the head saccades became increasingly biased toward it. Furthermore, on some trials at delays ≥20 ms, the owl turned its head first in the direction of one source and then the other, suggesting that it was able to resolve two separately localizable sources. At all delays <50 ms, response latencies were longer for paired sources than for single sources. With the possible exception of response latency, these findings demonstrate that the owl exhibits precedence phenomena in sound localization similar to those in humans and cats, and they provide a basis for comparison with neurophysiological data.
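A sketch of the lead-lag stimulus configuration described above, with placeholder values rather than the published calibration: the same noise token is presented from a leading and a lagging channel, and shorter delays leave more temporal overlap, which is what degrades the binaural cues.

```python
# Lead-lag noise-burst stimulus sketch (values are placeholders, not the
# study's calibration): a 25-ms burst and a delayed copy of the same token.
import numpy as np

fs = 48000                                  # sample rate [Hz]
burst_ms, delay_ms = 25, 10                 # burst length and lead-lag delay
rng = np.random.default_rng(2)

burst = rng.standard_normal(int(fs * burst_ms / 1000))
delay = int(fs * delay_ms / 1000)

total = delay + burst.size
lead = np.zeros(total)
lag = np.zeros(total)
lead[:burst.size] = burst                   # leading speaker
lag[delay:] = burst                         # same burst, delayed copy

overlap_ms = max(0, burst_ms - delay_ms)    # temporal overlap of the bursts
print(f"lead-lag delay {delay_ms} ms -> {overlap_ms} ms of overlap")
```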


1975 ◽  
Vol 63 (3) ◽  
pp. 569-585 ◽  
Author(s):  
D. L. Renaud ◽  
A. N. Popper

1. Sound localization was measured behaviourally for the Atlantic bottlenose porpoise (Tursiops truncatus) using a wide range of pure-tone pulses as well as clicks simulating the species' echolocation click. 2. Measurements of the minimum audible angle (MAA) in the horizontal plane give localization discrimination thresholds of between 2 and 3 degrees for sounds from 20 to 90 kHz, and thresholds from 2.8 to 4 degrees at 6, 10 and 100 kHz. With the azimuth of the animal changed relative to the speakers, the MAAs were 1.3–1.5 degrees at an azimuth of 15 degrees and about 5 degrees at an azimuth of 30 degrees. 3. MAAs to clicks were 0.7–0.8 degrees. 4. The animal was able to do almost as well in determining the position of vertical sound sources as it could for horizontal localization. 5. The data indicate that at low frequencies the animal may have been localizing by using the region around the external auditory meatus as a detector, but at frequencies above 20 kHz it is likely that the animal was detecting sounds through the lateral sides of the lower jaw. 6. Above 20 kHz, it is likely that the animal was localizing using binaural intensity cues. 7. Our data support evidence that the lower jaw is an important channel for sound detection in Tursiops.


2013 ◽  
Vol 109 (4) ◽  
pp. 924-931 ◽  
Author(s):  
Caitlin S. Baxter ◽  
Brian S. Nelson ◽  
Terry T. Takahashi

Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perceptual phenomenon called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay; their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When two sources concomitantly emit sounds with overlapping amplitude spectra, space-map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitude of both sounds is rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi, Neuron 67: 643–655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when the envelopes are similar, peaks in the leading sound mask the corresponding peaks in the echo, weakening the echo's space-map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source the model predicts to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.
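The envelope-comparison rule summarized above can be rendered as a toy computation (signal details are hypothetical placeholders, not the published model code): count the moments at which each source's envelope is the louder one while the averaged amplitude is rising.

```python
# Toy version of the rising-envelope comparison rule: with similar
# envelopes at a short delay, the leading source captures most
# rising-amplitude moments, so its map representation dominates.
import numpy as np

fs = 10000
rng = np.random.default_rng(3)

def envelope(x, win=50):
    """Crude amplitude envelope: moving average of |x| (5-ms window)."""
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

noise = rng.standard_normal(fs // 4)        # 250-ms broadband burst
delay = int(0.003 * fs)                     # 3-ms echo delay
lead = envelope(noise)
lag = envelope(np.r_[np.zeros(delay), noise[:-delay]])

rising = np.gradient((lead + lag) / 2) > 0  # averaged amplitude rising?
lead_drive = np.sum((lead > lag) & rising)  # moments favoring the lead
lag_drive = np.sum((lag > lead) & rising)   # moments favoring the lag

print(f"lead:lag drive ratio = {lead_drive / max(lag_drive, 1):.1f}")
```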


2013 ◽  
Vol 109 (6) ◽  
pp. 1658-1668 ◽  
Author(s):  
Daniel J. Tollin ◽  
Janet L. Ruhland ◽  
Tom C. T. Yin

Sound localization along the azimuthal dimension depends on interaural time and level disparities, whereas localization in elevation depends on the broadband power spectra resulting from the filtering properties of the head and pinnae. Using operant conditioning, we trained cats, with their heads unrestrained, to indicate the apparent locations of sounds via gaze shift. Targets consisted of broadband (BB), high-pass (HP), or low-pass (LP) noise, tones from 0.5 to 14 kHz, and 1/6-octave narrow-band (NB) noise with center frequencies ranging from 6 to 16 kHz. For each sound type, localization performance was summarized by the slope of the regression relating actual gaze shift to desired gaze shift. Overall localization accuracy for BB noise was comparable in azimuth and in elevation but was markedly better in azimuth than in elevation for sounds with limited spectra. Gaze shifts to targets in azimuth were most accurate for BB noise, less accurate for HP, LP, and NB sounds, and considerably less accurate for tones. In elevation, cats were most accurate in localizing BB noise, somewhat less accurate for HP, and less still for LP noise (although still with slopes of ∼0.60), but they localized NB noise much worse and were unable to localize tones. The deterioration of localization as bandwidth narrows is consistent with the hypothesis that spectral information is critical for sound localization in elevation. For NB noise or tones in elevation, unlike humans, most cats did not produce unique responses at different frequencies, and some appeared to respond with a "default" location at all frequencies.
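For readers unfamiliar with the convention, the localization gain quoted above is simply the slope of a regression of actual on desired gaze shift: a slope of 1.0 means accurate localization, 0 means no systematic response. The numbers below are fabricated placeholders that only show the calculation.

```python
# Localization gain as a regression slope (placeholder data, not from
# the study; they merely illustrate the ~0.60-slope convention).
import numpy as np

desired = np.array([-40, -20, -10, 10, 20, 40], dtype=float)  # target shifts [deg]
actual = np.array([-25, -13, -6, 5, 11, 24], dtype=float)     # measured shifts [deg]

slope, intercept = np.polyfit(desired, actual, 1)
print(f"localization gain (slope) = {slope:.2f}")             # ~0.60 here
```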


2015 ◽  
Vol 39 (1) ◽  
pp. 81-88 ◽  
Author(s):  
Daniel Fernández Comesana ◽  
Keith R. Holland ◽  
Dolores García Escribano ◽  
Hans-Elias de Bree

Sound localization problems are usually tackled by acquiring data from phased microphone arrays and applying acoustic holography or beamforming algorithms. However, the number of sensors required to achieve reliable results is often prohibitive, particularly if the frequency range of interest is wide. It is shown that the number of sensors required can be reduced dramatically, provided the sound field is time-stationary. Scanning techniques such as "Scan & Paint" allow data to be gathered across a sound field quickly and efficiently, using only a single sensor and a webcam. It is also possible to characterize the relative phase field by including an additional static microphone during the acquisition process. This paper presents the theoretical and experimental basis of the proposed method for localizing sound sources using only one fixed microphone and one moving acoustic sensor. The accuracy and resolution of the method have been shown to be comparable to those of large microphone arrays, thus constituting so-called "virtual phased arrays".
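The phase-referencing idea can be sketched with standard cross-spectral estimation (a generic implementation, not necessarily the authors' exact processing chain): for a time-stationary field, the cross-spectrum between the fixed reference microphone and the moving sensor recovers the relative phase at each scan position.

```python
# Relative phase recovery via the cross-spectrum between a fixed
# reference mic and the moving sensor (synthetic tone-in-noise example).
import numpy as np
from scipy.signal import csd

fs = 51200
t = np.arange(fs) / fs                        # 1 s of data
rng = np.random.default_rng(4)

f0, true_phase = 1000.0, 0.7                  # tone, position-dependent phase [rad]
ref = np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(fs)
mov = np.sin(2 * np.pi * f0 * t - true_phase) + 0.1 * rng.standard_normal(fs)

f, S_rm = csd(ref, mov, fs=fs, nperseg=4096)  # Welch cross-spectral density
bin0 = np.argmin(np.abs(f - f0))
print(f"estimated relative phase: {-np.angle(S_rm[bin0]):.2f} rad "
      f"(true {true_phase})")
```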


2008 ◽  
Vol 20 (3) ◽  
pp. 603-635 ◽  
Author(s):  
Murat Aytekin ◽  
Cynthia F. Moss ◽  
Jonathan Z. Simon

Sound localization is known to be a complex phenomenon, combining multisensory information processing, experience-dependent plasticity, and movement. Here we present a sensorimotor model that addresses the question of how an organism could learn to localize sound sources without any a priori neural representation of its head-related transfer function or prior experience with auditory spatial information. We demonstrate quantitatively that experiencing the sensory consequences of its voluntary motor actions allows an organism to learn the spatial location of any sound source. Using examples from humans and echolocating bats, our model shows that a naive organism can learn auditory space based solely on acoustic inputs and their relation to motor states.
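A heavily simplified sketch of the sensorimotor idea (all modeling choices below are illustrative, not the paper's actual model): the agent explores random head orientations, behaviorally defines "facing the source" as the motor state that nulls the binaural cue, labels its own exploration record with relative directions, and thereby learns a cue-to-direction map with no prior acoustics.

```python
# Self-calibrated cue-to-direction learning (toy model; the cue function,
# cubic fit, and all parameters are hypothetical).
import numpy as np

rng = np.random.default_rng(5)

def cue(head_deg, src_deg):
    """Physics unknown to the agent: an ITD-like cue plus sensor noise."""
    return np.sin(np.radians(src_deg - head_deg)) + 0.02 * rng.standard_normal()

# Exploration: random head orientations toward one fixed, unknown source.
src = 15.0
heads = rng.uniform(-60, 60, 400)
cues = np.array([cue(h, src) for h in heads])

# Self-calibration: the head state that nulls the cue is, behaviorally,
# "facing the source"; each exploratory sample is then labeled with a
# relative direction using only the agent's own motor record.
facing = heads[np.argmin(np.abs(cues))]
coeffs = np.polyfit(cues, facing - heads, 3)   # learned cue -> direction map

# Test on a new source: one cue sample predicts the corrective head turn.
new_src = -30.0
turn = np.polyval(coeffs, cue(0.0, new_src))
print(f"true offset {new_src:.0f} deg, predicted turn {turn:.1f} deg")
```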

