The effect of head rotations on vertical plane sound localization
1997 · Vol 102 (4) · pp. 2325-2332
Author(s): Stephen Perrett, William Noble

2003 · Vol 114 (1) · pp. 430-445
Author(s): Ewan A. Macpherson, John C. Middlebrooks

2013 · Vol 306 · pp. 76-92
Author(s): Ewan A. Macpherson, Andrew T. Sabin

Acta Acustica · 2020 · Vol 5 · pp. 3
Author(s): Aida Hejazi Nooghabi, Quentin Grimal, Anthony Herrel, Michael Reinwald, Lapo Boschi

We implement a new algorithm to model acoustic wave propagation through and around a dolphin skull, using the k-Wave software package [1]. The equation of motion is integrated numerically in a complex three-dimensional structure via a pseudospectral scheme that, importantly, accounts for lateral heterogeneities in the mechanical properties of bone. Modeling wave propagation in the dolphin skull contributes to our understanding of how dolphin sound localization and echolocation mechanisms work. Dolphins are known to be highly effective at localizing sound sources; in particular, they have been shown to be equally sensitive to changes in source elevation and azimuth, whereas other studied species, e.g. humans, are much more sensitive to the latter than to the former. A laboratory experiment conducted by our team on a dry skull [2] showed that sound reverberating in bone could play an important role in enhancing localization accuracy, and it has been speculated that the dolphin sound localization system relies in part on the analysis of this information. We employ our new numerical model to simulate the response of the same skull used in [2] to sound sources at a wide, dense set of locations in the vertical plane. This work is a first step toward a new tool for modeling sound-source (echo)location in dolphins; in future work, it will allow us to explore a wide variety of emitted signals and anatomical features.
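The core numerical idea in the abstract above — a pseudospectral scheme in which spatial derivatives are evaluated in the wavenumber domain while a spatially varying sound-speed map captures heterogeneous bone properties — can be illustrated with a minimal 2-D NumPy sketch. This is not the authors' k-Wave model (k-Wave is a separate MATLAB/C++ toolbox with its own k-space-corrected solvers); the grid size, sound speeds, and leapfrog time stepping below are illustrative assumptions only.

```python
import numpy as np

def pseudospectral_step(p, p_prev, c, dt, kx, ky):
    """Advance the 2-D acoustic wave equation p_tt = c(x)^2 * Laplacian(p)
    by one time step: FFT-based (pseudospectral) Laplacian, second-order
    leapfrog in time. The sound-speed map `c` may vary point-wise,
    which is how lateral heterogeneity enters the update."""
    P = np.fft.fft2(p)
    lap = np.real(np.fft.ifft2(-(kx**2 + ky**2) * P))  # spectral Laplacian
    return 2.0 * p - p_prev + (c * dt) ** 2 * lap

def run_simulation(nx=64, ny=64, dx=1e-3, steps=50):
    # Illustrative medium: water-like background with a faster "bone" block
    c = np.full((nx, ny), 1500.0)
    c[20:40, 20:40] = 3000.0
    dt = 0.3 * dx / c.max()  # comfortably inside the leapfrog stability limit

    # Angular wavenumber grids for the spectral derivative
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)[:, None]
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)[None, :]

    # Gaussian pressure pulse in the grid centre as the initial source
    x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    p = np.exp(-((x - nx // 2) ** 2 + (y - ny // 2) ** 2) / 8.0)
    p_prev = p.copy()

    for _ in range(steps):
        p, p_prev = pseudospectral_step(p, p_prev, c, dt, kx, ky), p
    return p
```

Because the FFT-based derivative is exact up to the Nyquist wavenumber, pseudospectral schemes need far fewer grid points per wavelength than finite differences — one reason they suit broadband skull-scale simulations.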


2021 · Vol 150 (4) · pp. A340-A340
Author(s): Nathaniel J. Spencer, Zachariah N. Ennis, Natalie Jackson, Brian D. Simpson, Eric R. Thompson

2008 · Vol 17 (4) · pp. 392-404
Author(s): Iwaki Toshima, Shigeaki Aoki, Tatsuya Hirahara

We built TeleHead I, an acoustical telepresence robot, on the concept that remote sound localization is best achieved with a user-like dummy head whose movement synchronizes with the user's head movement in real time. Here we characterize the latest version, TeleHead II, and verify the validity of this concept through sound localization experiments. TeleHead II synchronizes stably with the user's head movement with a 120-ms delay. Its driving noise, measured through headphones, is below 24 dB SPL from 1 to 4 kHz. The shape difference between the dummy head and the user is about 3% in head width and 5% in head length. An overall measurement metric indicated that the head-related transfer functions (HRTFs) of the dummy head and the modeled listener differ by about 5 dB. The sound localization experiments with TeleHead II showed that head movement improves horizontal-plane localization performance even when the dummy head's shape differs from the user's. In the median plane, by contrast, the results for head movement with a differently shaped dummy head were inconsistent. Localization accuracy with a same-shape dummy head whose movement was yoked to the user's head movement was consistently good. These results show that the TeleHead concept is valid for building an acoustical telepresence robot, and that the physical characteristics of TeleHead II are sufficient for conducting sound localization experiments.
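The abstract above quantifies the mismatch between dummy-head and listener HRTFs as an overall difference of about 5 dB. The exact metric used for TeleHead II is not specified here, but a common and simple choice — mean absolute log-magnitude difference over a frequency band — can be sketched as follows; the FFT size, sample rate, and band limits are illustrative assumptions.

```python
import numpy as np

def hrtf_spectral_distance(h_user, h_dummy, n_fft=512, fs=48000,
                           f_lo=1000.0, f_hi=12000.0):
    """Mean absolute magnitude-spectrum difference (in dB) between two
    head-related impulse responses, restricted to [f_lo, f_hi].
    A simple stand-in for an overall HRTF-difference metric; not
    necessarily the one used in the TeleHead II study."""
    H_user = np.abs(np.fft.rfft(h_user, n_fft))
    H_dummy = np.abs(np.fft.rfft(h_dummy, n_fft))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    eps = 1e-12  # guard against log of zero in empty bins
    diff_db = 20 * np.log10((H_user[band] + eps) / (H_dummy[band] + eps))
    return np.mean(np.abs(diff_db))
```

With identical impulse responses the metric is 0 dB, and a uniform factor-of-two gain mismatch yields about 6 dB, so a reported ~5 dB difference corresponds to spectral deviations somewhat smaller than a doubling of magnitude on average.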


2011 · Vol 34 (7) · pp. 1149-1160
Author(s): Denise C. P. B. M. Van Barneveld, Floor Binkhorst, A. John Van Opstal
