Spectral Cues Explain Illusory Elevation Effects With Stereo Sounds in Cats

2003 · Vol 90 (1) · pp. 525-530
Author(s): Daniel J. Tollin, Tom C.T. Yin

Mammals localize sound sources in azimuth based on two binaural cues, interaural differences in the time of arrival and level of the sounds at the ears. In contrast, the cue for elevation is based on patterns of the broadband power spectra at each ear that result from the direction-dependent acoustic filtering properties of the head and pinnae. Although the exact form of this “spectral shape” cue is unknown, most attention has been directed toward a prominent direction-dependent energy minimum, or “notch,” because its location in frequency, for both humans and cats, moves predictably from low to high as a source is moved from low to high elevations. However, there is little direct evidence that these spectral notches are important elevational cues in animals other than humans. Here we demonstrate a striking illusion in the localization of sounds in elevation by cats using stimulus configurations that elicit summing localization and the precedence effect that can be explained by spectral shape cues.
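
The notch cue described above can be illustrated with a small sketch: given a transfer-function magnitude spectrum, locate the first prominent local energy minimum within a band. This is not the paper's analysis; the band limits, depth criterion, and the synthetic spectrum are illustrative assumptions.

```python
import numpy as np

def first_notch_frequency(freqs, mag_db, band=(5e3, 18e3), depth_db=10.0):
    """Locate the first prominent spectral notch (local energy minimum)
    within `band` of a magnitude spectrum given in dB.

    A bin counts as a notch if it is a local minimum lying at least
    `depth_db` below the median level of the band.
    """
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    f, m = freqs[mask], mag_db[mask]
    floor = np.median(m) - depth_db
    for i in range(1, len(m) - 1):
        if m[i] < m[i - 1] and m[i] < m[i + 1] and m[i] <= floor:
            return f[i]
    return None

# Toy spectrum: flat at 0 dB with a 15-dB Gaussian notch near 9 kHz.
freqs = np.linspace(1e3, 20e3, 400)
mag = -15.0 * np.exp(-((freqs - 9e3) / 400.0) ** 2)
print(first_notch_frequency(freqs, mag))  # ≈ 9000 Hz (the synthetic notch center)
```

Tracking how the returned frequency shifts as a source moves in elevation is the essence of the spectral-notch cue the abstract refers to.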

Author(s): Michel Olagnon, Zakoua Guédé

Rainflow counting is widely accepted as the method best suited to analyzing the fatigue damage of materials subjected to irregular loading. Formulas such as the Wirsching-Light and Dirlik ones make it possible to take spectral shape and bandwidth into account in an empirical or semi-empirical manner, yielding a best-estimate damage reduction of the rainflow count with respect to the narrow-band approximation. However, if one considers parametric shape families in common use for the spectra, a more straightforward way is to make damage depend on the shape parameter of the family rather than on the spectral moments. We provide here such semi-empirical parametric formulas for the Jonswap, Wallops, Triangle and Power-tail families. In addition, the ICA formula allows us to extend the above formulas to the well-known bimodal spectral shape proposed by Ochi-Hubble.
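
As a sketch of the moment-based corrections the abstract contrasts with its shape-parameter approach: the Wirsching-Light factor scales the narrow-band damage estimate using the spectral bandwidth parameter ε. The constants below are the commonly quoted ones; the PSDs are toy examples, not the Jonswap/Wallops families treated in the paper.

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integral of samples y over grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def wirsching_light_factor(freqs, psd, m):
    """Wirsching-Light correction factor lambda(eps, m) applied to the
    narrow-band damage estimate; m is the S-N curve slope exponent."""
    m0 = _trapz(psd, freqs)
    m2 = _trapz(freqs ** 2 * psd, freqs)
    m4 = _trapz(freqs ** 4 * psd, freqs)
    alpha2 = m2 / np.sqrt(m0 * m4)      # regularity factor
    eps = np.sqrt(1.0 - alpha2 ** 2)    # spectral bandwidth parameter
    a = 0.926 - 0.033 * m
    b = 1.587 * m - 2.323
    return a + (1.0 - a) * (1.0 - eps) ** b

# Narrow rectangular PSD: lambda stays close to 1 (narrow-band limit).
freqs = np.linspace(0.9, 1.1, 201)
psd = np.where((freqs >= 0.95) & (freqs <= 1.05), 1.0, 0.0)
lam_narrow = wirsching_light_factor(freqs, psd, m=3.0)

# Broadband PSD: a markedly smaller correction factor.
freqs_bb = np.linspace(0.1, 2.0, 400)
psd_bb = np.ones_like(freqs_bb)
lam_broad = wirsching_light_factor(freqs_bb, psd_bb, m=3.0)
```

The paper's point is that for a parametric spectral family one can shortcut these moment integrals and express the damage reduction directly in the family's shape parameter.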


2006 · Vol 95 (6) · pp. 3571-3584
Author(s): Matthew W. Spitzer, Terry T. Takahashi

We examined the accuracy and precision with which the barn owl (Tyto alba) turns its head toward sound sources under conditions that evoke the precedence effect (PE) in humans. Stimuli consisted of 25-ms noise bursts emitted from two sources, separated horizontally by 40°, and temporally by 3–50 ms. At delays from 3 to 10 ms, head turns were always directed at the leading source, and were nearly as accurate and precise as turns toward single sources, indicating that the leading source dominates perception. This lead dominance is particularly remarkable, first, because on some trials, the lagging source was significantly higher in amplitude than the lead, arising from the directionality of the owl's ears, and second, because the temporal overlap of the two sounds can degrade the binaural cues with which the owl localizes sounds. With increasing delays, the influence of the lagging source became apparent as the head saccades became increasingly biased toward the lagging source. Furthermore, on some of the trials at delays ≥20 ms, the owl turned its head, first, in the direction of one source, and then the other, suggesting that it was able to resolve two separately localizable sources. At all delays <50 ms, response latencies were longer for paired sources than for single sources. With the possible exception of response latency, these findings demonstrate that the owl exhibits precedence phenomena in sound localization similar to those in humans and cats, and provide a basis for comparison with neurophysiological data.
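
The paired-source stimuli can be sketched as follows. The burst duration mirrors the abstract (25 ms) and the delay is one value from its 3–50 ms range, but the construction is illustrative, not the authors' stimulus code (which drove two loudspeakers).

```python
import numpy as np

def lead_lag_pair(fs=44100, burst_ms=25.0, delay_ms=5.0, seed=0):
    """Build lead and lag channels for a precedence-effect trial:
    identical noise bursts, with the lag copy delayed by `delay_ms`."""
    rng = np.random.default_rng(seed)
    n_burst = int(fs * burst_ms / 1000)
    n_delay = int(fs * delay_ms / 1000)
    burst = rng.standard_normal(n_burst)
    total = n_burst + n_delay
    lead = np.zeros(total)
    lag = np.zeros(total)
    lead[:n_burst] = burst          # leading source starts at t = 0
    lag[n_delay:] = burst           # lagging source starts delay_ms later
    return lead, lag
```

At short delays the two bursts overlap almost entirely, which is why the abstract notes that binaural cues can be degraded even while the lead still dominates localization.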


2013 · Vol 109 (4) · pp. 924-931
Author(s): Caitlin S. Baxter, Brian S. Nelson, Terry T. Takahashi

Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643–655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.
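
The two firing conditions the abstract attributes to space-map neurons (the in-field sound is the louder one, and the averaged envelope of both sounds is rising) can be reduced to a toy per-sample rule. This is a minimal sketch, not the Nelson & Takahashi model itself; the gating and summation here are illustrative assumptions.

```python
import numpy as np

def map_drive(env_in_rf, env_out_rf):
    """Toy drive to a space-map neuron: accumulate the in-field envelope
    only on samples where (a) the averaged envelope of both sounds is
    rising and (b) the in-field sound is the louder of the two."""
    mean_env = (env_in_rf + env_out_rf) / 2.0
    rising = np.diff(mean_env) > 0
    louder = env_in_rf[1:] > env_out_rf[1:]
    return float(np.sum(env_in_rf[1:] * (rising & louder)))
```

Under this rule a leading sound whose envelope peaks precede and mask the echo's corresponding peaks will capture most of the drive, which is the intuition behind the model's account of lead dominance.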


2013 · Vol 109 (6) · pp. 1658-1668
Author(s): Daniel J. Tollin, Janet L. Ruhland, Tom C. T. Yin

Sound localization along the azimuthal dimension depends on interaural time and level disparities, whereas localization in elevation depends on broadband power spectra resulting from the filtering properties of the head and pinnae. We trained cats with their heads unrestrained, using operant conditioning to indicate the apparent locations of sounds via gaze shift. Targets consisted of broadband (BB), high-pass (HP), or low-pass (LP) noise, tones from 0.5 to 14 kHz, and 1/6 octave narrow-band (NB) noise with center frequencies ranging from 6 to 16 kHz. For each sound type, localization performance was summarized by the slope of the regression relating actual gaze shift to desired gaze shift. Overall localization accuracy for BB noise was comparable in azimuth and in elevation but was markedly better in azimuth than in elevation for sounds with limited spectra. Gaze shifts to targets in azimuth were most accurate to BB, less accurate for HP, LP, and NB sounds, and considerably less accurate for tones. In elevation, cats were most accurate in localizing BB, somewhat less accurate to HP, and less yet to LP noise (although still with slopes ∼0.60), but they localized NB noise much worse and were unable to localize tones. Deterioration of localization as bandwidth narrows is consistent with the hypothesis that spectral information is critical for sound localization in elevation. For NB noise or tones in elevation, unlike humans, most cats did not have unique responses at different frequencies, and some appeared to respond with a “default” location at all frequencies.
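
The performance summary used in the abstract (the slope of the regression relating actual gaze shift to desired gaze shift) can be sketched directly; the sample data below are invented for illustration.

```python
import numpy as np

def localization_slope(desired, actual):
    """Slope of the linear regression of actual gaze shift on desired
    gaze shift: 1.0 indicates ideally scaled responses, values below
    1.0 indicate systematic undershoot."""
    desired = np.asarray(desired, dtype=float)
    actual = np.asarray(actual, dtype=float)
    slope, _intercept = np.polyfit(desired, actual, 1)
    return float(slope)

# Hypothetical trials: targets at several eccentricities (degrees),
# responses undershooting by 40% with a small constant bias.
desired = np.array([-30.0, -15.0, 0.0, 15.0, 30.0])
actual = 0.6 * desired + 2.0
print(localization_slope(desired, actual))  # ≈ 0.60
```

A slope near 0.60, as in this toy example, matches the range the abstract reports for low-pass noise in elevation.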


2015 · Vol 114 (5) · pp. 2991-3001
Author(s): Andrew D. Brown, Heath G. Jones, Alan Kan, Tanvi Thakkar, G. Christopher Stecker, ...

Normal-hearing human listeners and a variety of studied animal species localize sound sources accurately in reverberant environments by responding to the directional cues carried by the first-arriving sound rather than spurious cues carried by later-arriving reflections, which are not perceived discretely. This phenomenon is known as the precedence effect (PE) in sound localization. Despite decades of study, the biological basis of the PE remains unclear. Though the PE was once widely attributed to central processes such as synaptic inhibition in the auditory midbrain, a more recent hypothesis holds that the PE may arise essentially as a by-product of normal cochlear function. Here we evaluated the PE in a unique human patient population with demonstrated sensitivity to binaural information but without functional cochleae. Users of bilateral cochlear implants (CIs) were tested in a psychophysical task that assessed the number and location(s) of auditory images perceived for simulated source-echo (lead-lag) stimuli. A parallel experiment was conducted in a group of normal-hearing (NH) listeners. Key findings were as follows: 1) Subjects in both groups exhibited lead-lag fusion. 2) Fusion was marginally weaker in CI users than in NH listeners but could be augmented by systematically attenuating the amplitude of the lag stimulus to coarsely simulate adaptation observed in acoustically stimulated auditory nerve fibers. 3) Dominance of the lead in localization varied substantially among both NH and CI subjects but was evident in both groups. Taken together, data suggest that aspects of the PE can be elicited in CI users, who lack functional cochleae, thus suggesting that neural mechanisms are sufficient to produce the PE.


2013 · Vol 280 (1769) · pp. 20131428
Author(s): Ludwig Wallmeier, Nikodemus Geßele, Lutz Wiegrebe

Several studies have shown that blind humans can gather spatial information through echolocation. However, when localizing sound sources, the precedence effect suppresses spatial information of echoes, and thereby conflicts with effective echolocation. This study investigates the interaction of echolocation and echo suppression in terms of discrimination suppression in virtual acoustic space. In the ‘Listening’ experiment, sighted subjects discriminated between positions of a single sound source, the leading or the lagging of two sources, respectively. In the ‘Echolocation’ experiment, the sources were replaced by reflectors. Here, the same subjects evaluated echoes generated in real time from self-produced vocalizations and thereby discriminated between positions of a single reflector, the leading or the lagging of two reflectors, respectively. Two key results were observed. First, sighted subjects can learn to discriminate positions of reflective surfaces echo-acoustically with accuracy comparable to sound source discrimination. Second, in the Listening experiment, the presence of the leading source affected discrimination of lagging sources much more than vice versa. In the Echolocation experiment, however, the presence of both the lead and the lag strongly affected discrimination. These data show that the classically described asymmetry in the perception of leading and lagging sounds is strongly diminished in an echolocation task. Additional control experiments showed that the effect is due both to the direct sound of the vocalization that precedes the echoes and to the fact that the subjects actively vocalize in the echolocation task.


2021 · Vol 10 (1) · pp. 15
Author(s): Dídac D. Tortosa, Iván Herrero-Durá, Jorge E. Otero

The localization of sound sources has received increasing interest over the last few decades, given its wide range of applications. The triangulation method using the Time of Arrival (ToA) of a signal has been shown to be useful and easy to use while providing accurate results. In this work, the acoustic trilateration method is applied in experimental measurements to study and demonstrate its precision in air. First, the method is tested in an anechoic chamber (a low-reverberation environment), demonstrating its functionality and accuracy. The method is then applied with a low-cost system to demonstrate how a non-anechoic environment affects the accuracy of the localization. In both cases, the received signal is detected using a time-domain cross-correlation method. Furthermore, the influence of the number and positions of the receivers on the accuracy of the results is also studied.
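
A minimal sketch of the two processing steps the abstract names: estimating the time of arrival by time-domain cross-correlation, and trilaterating a 2-D position from the resulting ranges. The linearized least-squares solver is one common choice for the trilateration step, not necessarily the authors'; all geometry below is invented for illustration.

```python
import numpy as np

def toa_delay(received, reference, fs):
    """Estimate the time of arrival (seconds) as the lag of the peak of
    the time-domain cross-correlation between the received signal and
    the emitted reference."""
    corr = np.correlate(received, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    return lag / fs

def trilaterate(receivers, distances):
    """2-D source position from receiver coordinates and source-receiver
    distances: subtract the range equation of the first receiver from
    the others to linearize, then solve the least-squares system."""
    p0, d0 = receivers[0], distances[0]
    A, b = [], []
    for p, d in zip(receivers[1:], distances[1:]):
        A.append(2.0 * (p - p0))
        b.append(d0**2 - d**2 + np.dot(p, p) - np.dot(p0, p0))
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol
```

With three or more non-collinear receivers the linear system is overdetermined, which is where the number and placement of receivers, studied in the paper, directly affects accuracy.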


1992 · Vol 36 (3) · pp. 253-257
Author(s): Michael D. Good, Robert H. Gilkey

The development of optimal three-dimensional auditory displays requires a more complete understanding of the interactions among spatially separated sounds. Free-field masking was investigated as a function of the spatial separation between signal and masker sounds within the horizontal, frontal, and median planes. The detectability of filtered pulse trains in the presence of noise maskers was measured using a cued, two-alternative, forced-choice, adaptive staircase procedure. Signal and masker combinations in low (below 2.3 kHz), middle (1.0–8.5 kHz), and high (above 3.5 kHz) frequency regions were examined. As the sound sources were separated within the horizontal plane, signal detectability increased dramatically. Similar improvement in detectability was observed within the frontal plane. As suggested by traditional binaural models, interaural time cues and interaural intensity cues are likely to play a major role in mediating masking release in both the horizontal and frontal planes. Because no interaural cues exist for stimuli presented within the median plane, traditional models would not predict a release from masking when the stimuli are separated within this plane. However, with high frequency signals, masking release similar to that observed in the horizontal and frontal planes could be observed in the median plane. The current literature suggests that sound localization in the median plane may depend on direction-specific spectral cues that are introduced by the pinna at high frequencies. The masking release observed here may also depend on these “pinna cues.”
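
The adaptive staircase procedure mentioned above can be sketched as a standard two-down one-up track, which converges near the 70.7% correct point. The abstract does not specify the rule or step size, so those details (and the simulated observer) are illustrative assumptions.

```python
import numpy as np

def run_staircase(respond, start_level=60.0, step=2.0, n_reversals=8):
    """Two-down one-up adaptive staircase: the signal level drops after
    two consecutive correct responses and rises after each error.
    `respond(level)` returns True for a correct trial. The threshold
    estimate is the mean level at the last `n_reversals` reversals."""
    level, correct_run, direction = start_level, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_run += 1
            if correct_run == 2:            # two in a row: step down
                correct_run = 0
                if direction == +1:         # was going up: reversal
                    reversals.append(level)
                direction = -1
                level -= step
        else:                               # any error: step up
            correct_run = 0
            if direction == -1:             # was going down: reversal
                reversals.append(level)
            direction = +1
            level += step
    return float(np.mean(reversals))

# Deterministic toy observer with a hard threshold at 50 dB:
# the staircase settles just above it.
print(run_staircase(lambda level: level > 50.0))
```

In the study this level would be the signal level relative to the masker, tracked separately for each signal-masker spatial configuration.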

