Behavioral and modeling studies of sound localization in cats: effects of stimulus level and duration

2013 · Vol 110 (3) · pp. 607-620 · Author(s): Yan Gai, Janet L. Ruhland, Tom C. T. Yin, Daniel J. Tollin

Sound localization accuracy in elevation can be affected by alteration of the sound spectrum. Correspondingly, any stimulus manipulation that changes the peripheral representation of the spectrum may degrade localization ability in elevation. The present study examined the influence of sound duration and level on localization performance in cats with the head unrestrained. Two cats were trained using operant conditioning to indicate the apparent location of a sound via gaze shift, which was measured with a search-coil technique. Overall, neither sound level nor duration had a notable effect on localization accuracy in azimuth, except at near-threshold levels. In contrast, localization accuracy in elevation improved as sound duration increased, and sound level also had a large effect on localization in elevation. For short-duration noise, performance peaked at intermediate levels and deteriorated at low and high levels; for long-duration noise, this “negative level effect” at high levels was not observed. Simulations based on an auditory nerve model were used to explain these observations and to test several hypotheses. Our results indicated that neither the flatness of the sound spectrum (before the sound reaches the inner ear) nor peripheral adaptation influenced spectral coding at the periphery for localization in elevation, whereas neural computation that relies on “multiple looks” at the spectral analysis was critical in explaining the effect of sound duration, but not level. The release from the negative level effect observed for long-duration sound could not be explained at the periphery and is therefore likely a result of processing at higher centers.
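The "multiple looks" idea can be sketched numerically: each brief time window yields a noisy estimate of the sound spectrum at the eardrum, and averaging more looks (a longer sound) reduces the variance of that estimate, making an elevation cue such as a spectral notch easier to detect. A minimal sketch, assuming illustrative values for the noise level, notch depth, and number of looks (none of these are taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_estimate(true_spectrum_db, noise_sd_db, n_looks):
    """Average n_looks independent noisy spectral 'looks'.

    Each look is the true spectrum corrupted by independent
    estimation noise; averaging reduces the noise standard
    deviation by roughly sqrt(n_looks)."""
    looks = true_spectrum_db + rng.normal(
        0.0, noise_sd_db, size=(n_looks, true_spectrum_db.size))
    return looks.mean(axis=0)

# Toy HRTF-like spectrum with an elevation-dependent notch at bin 40
spectrum = np.zeros(128)
spectrum[40] = -15.0  # spectral notch (elevation cue)

short_est = spectral_estimate(spectrum, noise_sd_db=6.0, n_looks=2)   # short sound
long_est = spectral_estimate(spectrum, noise_sd_db=6.0, n_looks=30)   # long sound

# Residual error away from the notch: smaller with more looks
err_short = np.std(np.delete(short_est, 40))
err_long = np.std(np.delete(long_est, 40))
print(f"residual error, 2 looks: {err_short:.2f} dB; 30 looks: {err_long:.2f} dB")
```

With more looks the notch stands out clearly above the residual estimation noise, consistent with the duration effect the abstract describes.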

2017 · Vol 142 (4) · pp. 2676-2676 · Author(s): Akio Honda, Sayaka Tsunokake, Yôiti Suzuki, Shuichi Sakamoto

2018 · Vol 39 (4) · pp. 305-307 · Author(s): Akio Honda, Sayaka Tsunokake, Yôiti Suzuki, Shuichi Sakamoto

2013 · Vol 109 (6) · pp. 1658-1668 · Author(s): Daniel J. Tollin, Janet L. Ruhland, Tom C. T. Yin

Sound localization along the azimuthal dimension depends on interaural time and level disparities, whereas localization in elevation depends on broadband power spectra resulting from the filtering properties of the head and pinnae. We trained cats with their heads unrestrained, using operant conditioning to indicate the apparent locations of sounds via gaze shift. Targets consisted of broadband (BB), high-pass (HP), or low-pass (LP) noise, tones from 0.5 to 14 kHz, and 1/6 octave narrow-band (NB) noise with center frequencies ranging from 6 to 16 kHz. For each sound type, localization performance was summarized by the slope of the regression relating actual gaze shift to desired gaze shift. Overall localization accuracy for BB noise was comparable in azimuth and in elevation but was markedly better in azimuth than in elevation for sounds with limited spectra. Gaze shifts to targets in azimuth were most accurate to BB, less accurate for HP, LP, and NB sounds, and considerably less accurate for tones. In elevation, cats were most accurate in localizing BB, somewhat less accurate to HP, and less yet to LP noise (although still with slopes ∼0.60), but they localized NB noise much worse and were unable to localize tones. Deterioration of localization as bandwidth narrows is consistent with the hypothesis that spectral information is critical for sound localization in elevation. For NB noise or tones in elevation, unlike humans, most cats did not have unique responses at different frequencies, and some appeared to respond with a “default” location at all frequencies.
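The performance metric described above, the slope of the regression relating actual gaze shift to desired gaze shift, can be computed in a few lines. The data here are hypothetical undershoot responses for illustration, not the cats' measurements:

```python
import numpy as np

def localization_slope(desired_deg, actual_deg):
    """Slope of the regression of actual on desired gaze shift;
    1.0 indicates perfect accuracy, values below 1.0 undershoot."""
    slope, _intercept = np.polyfit(desired_deg, actual_deg, 1)
    return slope

# Hypothetical data: responses undershooting elevation targets by ~40%
targets = np.array([-30.0, -15.0, 0.0, 15.0, 30.0])
responses = 0.6 * targets + np.array([1.2, -0.8, 0.5, -1.1, 0.9])
print(f"slope = {localization_slope(targets, responses):.2f}")
```

A slope near 0.60 would correspond to the LP-noise elevation performance quoted in the abstract.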


2005 · Vol 93 (3) · pp. 1223-1234 · Author(s): Daniel J. Tollin, Luis C. Populin, Jordan M. Moore, Janet L. Ruhland, Tom C. T. Yin

In oculomotor research, there are two common methods by which the apparent location of visual and/or auditory targets are measured, saccadic eye movements with the head restrained and gaze shifts (combined saccades and head movements) with the head unrestrained. Because cats have a small oculomotor range (approximately ±25°), head movements are necessary when orienting to targets at the extremes of or outside this range. Here we tested the hypothesis that the accuracy of localizing auditory and visual targets using more ethologically natural head-unrestrained gaze shifts would be superior to head-restrained eye saccades. The effect of stimulus duration on localization accuracy was also investigated. Three cats were trained using operant conditioning with their heads initially restrained to indicate the location of auditory and visual targets via eye position. Long-duration visual targets were localized accurately with little error, but the locations of short-duration visual and both long- and short-duration auditory targets were markedly underestimated. With the head unrestrained, localization accuracy improved substantially for all stimuli and all durations. While the improvement for long-duration stimuli with the head unrestrained might be expected given that dynamic sensory cues were available during the gaze shifts and the lack of a memory component, surprisingly, the improvement was greatest for the auditory and visual stimuli with the shortest durations, where the stimuli were extinguished prior to the onset of the eye or head movement. The underestimation of auditory targets with the head restrained is explained in terms of the unnatural sensorimotor conditions that likely result during head restraint.


Acta Acustica · 2020 · Vol 5 · pp. 3 · Author(s): Aida Hejazi Nooghabi, Quentin Grimal, Anthony Herrel, Michael Reinwald, Lapo Boschi

We implement a new algorithm to model acoustic wave propagation through and around a dolphin skull, using the k-Wave software package [1]. The equation of motion is integrated numerically in a complex three-dimensional structure via a pseudospectral scheme which, importantly, accounts for lateral heterogeneities in the mechanical properties of bone. Modeling wave propagation in the skull of dolphins contributes to our understanding of how their sound localization and echolocation mechanisms work. Dolphins are known to be highly effective at localizing sound sources; in particular, they have been shown to be equally sensitive to changes in the elevation and azimuth of the sound source, while other studied species, e.g. humans, are much more sensitive to the latter than to the former. A laboratory experiment conducted by our team on a dry skull [2] has shown that sound reverberated in bones could possibly play an important role in enhancing localization accuracy, and it has been speculated that the dolphin sound localization system could somehow rely on the analysis of this information. We employ our new numerical model to simulate the response of the same skull used by [2] to sound sources at a wide and dense set of locations on the vertical plane. This work is the first step towards the implementation of a new tool for modeling source (echo)location in dolphins; in future work, this will allow us to effectively explore a wide variety of emitted signals and anatomical features.
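The core of a pseudospectral scheme such as the one k-Wave uses is evaluating spatial derivatives in the wavenumber domain, where smooth fields are differentiated to near machine precision with few grid points per wavelength. A minimal one-dimensional illustration of that building block (this is not the k-Wave API, just the underlying FFT differentiation):

```python
import numpy as np

def spectral_derivative(f, dx):
    """First spatial derivative via FFT: multiply by i*k in the
    wavenumber domain, the key step of pseudospectral solvers."""
    n = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# Verify spectral accuracy on a smooth periodic pressure-like field
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
p = np.sin(3.0 * x)
dp = spectral_derivative(p, dx)
err = np.max(np.abs(dp - 3.0 * np.cos(3.0 * x)))
print(f"max derivative error: {err:.2e}")  # near machine precision
```

This spectral accuracy is what makes k-space methods attractive for wave propagation through heterogeneous structures such as bone.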


2016 · Vol 140 (4) · pp. 3269-3269 · Author(s): Sayaka Tsunokake, Akio Honda, Yôiti Suzuki, Shuichi Sakamoto

2019 · Vol 23 · pp. 233121651984387 · Author(s): Stefan Zirn, Julian Angermeier, Susan Arndt, Antje Aschendorff, Thomas Wesarg

In users of a cochlear implant (CI) together with a contralateral hearing aid (HA), so-called bimodal listeners, differences in processing latency between the digital HA and the CI of up to 9 ms are constantly superimposed on interaural time differences. In the present study, the effect of this device delay mismatch on sound localization accuracy was investigated. For this purpose, localization accuracy in the frontal horizontal plane was measured with the original and with a minimized device delay mismatch. The reduction was achieved by delaying the CI stimulation according to the delay of the individually worn HA. For this, a portable, programmable, battery-powered delay line based on a ring buffer running on a microcontroller was designed and assembled. After a 1-hr acclimatization period to the delayed CI stimulation, the nine bimodal study participants showed a highly significant improvement in localization accuracy of 11.6% compared with the everyday situation without the delay line (p < .01). In conclusion, delaying CI stimulation to minimize the device delay mismatch seems to be a promising method to increase sound localization accuracy in bimodal listeners.
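A ring-buffer delay line of the kind described can be sketched in a few lines. The sampling rate and buffer arithmetic below are illustrative, not the authors' firmware:

```python
class DelayLine:
    """Fixed-length ring-buffer delay line: each call returns the
    sample written delay_samples calls ago, delaying the stream."""

    def __init__(self, delay_samples):
        self.buf = [0.0] * delay_samples
        self.idx = 0

    def process(self, x):
        y = self.buf[self.idx]          # oldest stored sample
        self.buf[self.idx] = x          # overwrite with the new one
        self.idx = (self.idx + 1) % len(self.buf)
        return y

# A 9 ms device mismatch at an assumed 16 kHz rate is 144 samples
fs = 16000
delay = DelayLine(int(0.009 * fs))      # 144-sample delay
out = [delay.process(s) for s in range(200)]
print(out[144], out[145])  # → 0 1  (input delayed by 144 samples)
```

The constant memory footprint and O(1) per-sample cost are what make this structure practical on a microcontroller.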


1987 · Vol 30 (1) · pp. 28-36 · Author(s): Richard L. Freyman, David A. Nelson

This investigation explored the effects of stimulus level on the frequency discrimination of long- and short-duration pure tones by 5 subjects with normal hearing and 7 with sensorineural hearing impairment. Frequency difference limens (DLs) were obtained as a function of signal intensity for 5-ms and 300-ms tones at 500, 1000, and 2000 Hz. The performance of most of the hearing-impaired subjects was poorer than normal for 300-ms tones, but not for 5-ms tones. This result was relatively independent of the stimulus sensation levels at which the data were compared. However, the current results also show an unexpected dependence of the frequency DL on the sensation level of short-duration tones. In several normal-hearing subjects, frequency discrimination performance for these short tones is poorer at moderately high levels than at low levels.


2016 · Vol 41 (2) · pp. 323-330 · Author(s): Maurycy J. Kin, Andrzej Dobrucki

The paper presents results of research on the influence of listening fatigue on the detection of changes in the spectrum and envelope of musical samples. The experiment was carried out under conditions that normally exist in a studio or on stage when sound material is recorded and/or mixed. The equivalent level of the presented sound samples was 90 dB, an average sound level in a control room during various recording activities. Such musical material may be treated as noise, so a Temporary Threshold Shift may occur after several sessions, which can lead to a listening-fatigue effect. Fourteen subjects, all with normal hearing thresholds, participated in the first part of the experiment. The stimuli contained musical material with introduced changes in sound spectrum of up to ±6 dB in the low- (100 Hz), middle- (1 kHz), and high-frequency (10 kHz) octave bands. In the second part of the research, five subjects listened to musical samples with envelope changes of up to ±6 dB at intervals of 1 s. The loud-music exposure lasted 60, 90, or 120 minutes, and this material was completely different from the tested samples. It turned out that listening to music at Leq = 90 dB for 1 hour affects hearing thresholds in the middle-frequency region (about 1-2 kHz), and this was reflected in the perception of spectral changes. Peaks/notches of 3 dB were detected 70% of the time, and changes in the low and high ranges of the spectrum were perceived at a similar level. After longer exposure, the thresholds shifted by up to 4.5 dB for all investigated stimuli. It was also found that hearing fatigue after 1 hour of listening affects the perception of the envelope, which worsens by 2 dB compared with fresh-ear listening. As listening time to loud music increases, the smallest detectable envelope change rises to 6 dB after 90 minutes of exposure and does not increase with further prolongation of listening time.
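The equivalent continuous level Leq used above is the level of a steady sound carrying the same energy as the fluctuating signal: Leq = 10·log10(mean(p²)/p_ref²), with p_ref = 20 µPa. A short sketch (the tone frequency and duration are illustrative):

```python
import numpy as np

P_REF = 20e-6  # reference pressure, 20 µPa

def leq_db(p):
    """Equivalent continuous sound level of a pressure signal (Pa)."""
    return 10.0 * np.log10(np.mean(p ** 2) / P_REF ** 2)

# A steady tone whose RMS pressure corresponds to 90 dB SPL
rms_90db = P_REF * 10 ** (90 / 20)               # ≈ 0.632 Pa
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
tone = np.sqrt(2) * rms_90db * np.sin(2 * np.pi * 1000 * t)
print(f"Leq = {leq_db(tone):.1f} dB")  # → Leq = 90.0 dB
```

Because Leq is an energy average, short loud passages and quieter stretches of a mix can produce the same 90 dB figure over a session.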

