Object localization using a biosonar beam: how opening your mouth improves localization

2015 ◽  
Vol 2 (8) ◽  
pp. 150225 ◽  
Author(s):  
G. Arditi ◽  
A. J. Weiss ◽  
Y. Yovel

Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions.
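The binaural time cue discussed above can be illustrated with the standard far-field interaural-time-difference (ITD) model. This is a sketch, not the paper's beam analysis; the bat-sized ear spacing and the function names are illustrative assumptions:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, air at ~20 °C

def itd_for_azimuth(azimuth_rad, ear_distance=0.015):
    """ITD (seconds) for a far-field source.

    ear_distance is an illustrative bat-sized head width (1.5 cm);
    the far-field model gives ITD = d * sin(theta) / c.
    """
    return ear_distance * np.sin(azimuth_rad) / SPEED_OF_SOUND

def azimuth_from_itd(itd, ear_distance=0.015):
    """Invert the far-field model to recover azimuth from a measured ITD."""
    s = np.clip(itd * SPEED_OF_SOUND / ear_distance, -1.0, 1.0)
    return np.arcsin(s)

theta = np.deg2rad(30.0)
recovered = np.rad2deg(azimuth_from_itd(itd_for_azimuth(theta)))
```

Note that the model is ambiguous about front versus back (sin is symmetric about 90°), which is one reason spectral cues matter.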

Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 532
Author(s):  
Henglin Pu ◽  
Chao Cai ◽  
Menglan Hu ◽  
Tianping Deng ◽  
Rong Zheng ◽  
...  

Multiple blind sound source localization is a key technology for a myriad of applications such as robotic navigation and indoor localization. However, existing solutions can only locate a few sound sources simultaneously due to the limitation imposed by the number of microphones in an array. To this end, this paper proposes a novel multiple blind sound source localization algorithm using Source seParation and BeamForming (SPBF). Our algorithm overcomes the limitations of existing solutions and can locate more blind sources than the number of microphones in an array. Specifically, we propose a novel microphone layout that enables salient multiple-source separation while still preserving arrival time information. We then perform source localization via beamforming on each demixed source. This design minimizes mutual interference between sound sources, thereby enabling finer angle-of-arrival (AoA) estimation. To further enhance localization performance, we design a new spectral weighting function that improves the signal-to-noise ratio, allowing a relatively narrow beam and thus finer AoA estimation. Simulation experiments under typical indoor conditions demonstrate a maximum localization error of only 4° even with up to 14 sources.
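For illustration, the beamforming stage that follows separation can be sketched as a delay-and-sum beamformer steering a linear array toward a single narrowband source. The array geometry and function names are assumptions for the sketch, not the SPBF implementation:

```python
import numpy as np

def delay_and_sum_aoa(signals, mic_positions, fs, freq, c=343.0):
    """Estimate the AoA of a narrowband source with a delay-and-sum
    beamformer over a linear array.

    signals: (n_mics, n_samples); mic_positions: (n_mics,) in metres.
    Returns the angle (degrees, broadside = 0) that maximizes beam power.
    """
    angles = np.deg2rad(np.arange(-90, 91))
    spectrum = np.fft.rfft(signals, axis=1)
    bin_idx = int(round(freq * signals.shape[1] / fs))
    x = spectrum[:, bin_idx]  # one frequency bin per microphone
    powers = []
    for theta in angles:
        delays = mic_positions * np.sin(theta) / c
        steering = np.exp(-2j * np.pi * freq * delays)  # expected phases
        powers.append(np.abs(np.vdot(steering, x)) ** 2)  # coherent sum
    return float(np.rad2deg(angles[int(np.argmax(powers))]))

# Toy usage: 4 mics spaced 5 cm, 1 kHz tone arriving from 20 degrees.
fs, f, n = 16000, 1000.0, 1600
pos = np.array([0.0, 0.05, 0.10, 0.15])
t = np.arange(n) / fs
tau = pos * np.sin(np.deg2rad(20.0)) / 343.0
sigs = np.cos(2 * np.pi * f * (t[None, :] - tau[:, None]))
est = delay_and_sum_aoa(sigs, pos, fs, f)
```

With the 5 cm spacing, d/λ ≈ 0.15 at 1 kHz, so no grating lobes appear in the scan; separating sources first, as the abstract describes, is what keeps such a simple scan usable when many sources are active.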


Energies ◽  
2021 ◽  
Vol 14 (12) ◽  
pp. 3446
Author(s):  
Muhammad Usman Liaquat ◽  
Hafiz Suliman Munawar ◽  
Amna Rahman ◽  
Zakria Qadir ◽  
Abbas Z. Kouzani ◽  
...  

Sound localization is a field of signal processing that deals with identifying the origin of a detected sound signal, that is, determining the direction and distance of its source. Useful applications of this capability exist in speech enhancement, communication, radar and the medical field. The experimental arrangement requires microphone arrays to record the sound signal; some methods use ad hoc microphone arrays because of their demonstrated advantages over other arrays. In this research project, existing sound localization methods have been explored to analyze the advantages and disadvantages of each. A novel sound localization routine has been formulated that uses both the direction of arrival (DOA) of the sound signal and location estimation in three-dimensional space to precisely locate a sound source. The experimental arrangement consists of four microphones and a single sound source. Previously, sound sources have been localized using six or more microphones, and the precision of localization has been shown to increase with the number of microphones. In this research, however, we minimized the number of microphones to reduce both the complexity of the algorithm and the computation time. The method is novel in the field of sound source localization in that it uses fewer resources while providing results on par with more complex methods that require more microphones and additional tools. The average accuracy of the system is found to be 96.77% with an error factor of 3.8%.
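The DOA step in pipelines like this is commonly built on time differences of arrival between microphone pairs. A minimal GCC-PHAT sketch for one pair, an assumption about the general approach rather than this paper's exact routine, looks like:

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Time delay of `sig` relative to `ref` via GCC-PHAT.

    Returns the delay in seconds (positive: sig lags ref).
    """
    n = sig.shape[0] + ref.shape[0]
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(ref, n=n)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-12  # PHAT: keep phase, whiten magnitude
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = int(np.argmax(np.abs(cc))) - max_shift
    return shift / fs

def doa_from_tdoa(tdoa, mic_distance, c=343.0):
    """Broadside DOA (degrees) from a single pair's TDOA."""
    s = np.clip(tdoa * c / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```

With four microphones there are up to six such pairwise delays, which is what makes a 3D position estimate possible in addition to direction.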


2017 ◽  
Vol 111 (2) ◽  
pp. 148-164 ◽  
Author(s):  
Oana Bălan ◽  
Alin Moldoveanu ◽  
Florica Moldoveanu ◽  
Hunor Nagy ◽  
György Wersényi ◽  
...  

Introduction As the number of people with visual impairments (that is, those who are blind or have low vision) continuously increases, rehabilitation and engineering researchers have identified the need to design sensory-substitution devices that offer assistance and guidance for navigational tasks. Auditory and haptic cues have been shown to be effective in creating a rich spatial representation of the environment, so they are considered for inclusion in assistive tools that would enable people with visual impairments to acquire knowledge of the surrounding space in a way close to the visually based perception of sighted individuals. However, achieving efficiency with a sensory-substitution device requires extensive training, as visually impaired users must learn to process the artificial auditory cues and convert them into spatial information. Methods Considering the potential advantages of game-based learning, we propose a new method for training the sound localization and virtual navigation skills of visually impaired people in a 3D audio game with hierarchical levels of difficulty. The training procedure follows a multimodal (auditory and haptic) learning approach in which subjects listen to 3D sounds while simultaneously perceiving a series of vibrations on a haptic headband corresponding to the direction of the sound source in space. Results The results of a sound-localization experiment with 10 visually impaired people showed that the proposed training strategy produced significant improvements in the subjects' auditory performance and navigation skills, thus ensuring behavioral gains in spatial perception of the environment.


2014 ◽  
Vol 25 (09) ◽  
pp. 791-803 ◽  
Author(s):  
Evelyne Carette ◽  
Tim Van den Bogaert ◽  
Mark Laureyns ◽  
Jan Wouters

Background: Several studies have demonstrated negative effects of directional microphone configurations on left-right and front-back (FB) sound localization. New processing schemes in recent commercial hearing aids, such as frequency-dependent directionality and front focus with wireless ear-to-ear communication, may preserve the binaural cues necessary for left-right localization and may introduce useful spectral cues necessary for FB disambiguation. Purpose: In this study, two hearing aids with different processing schemes, both designed to preserve the ability to localize sounds in the horizontal plane (left-right and FB), were compared. Research Design: We compared horizontal (left-right and FB) sound localization performance of hearing-aid users fitted with two types of behind-the-ear (BTE) devices. The first type of BTE device had four different programs that provided (1) no directionality, (2–3) symmetric frequency-dependent directionality, and (4) an asymmetric configuration. The second pair of BTE devices was evaluated in its omnidirectional setting, which automatically activates a soft forward-oriented directional scheme that mimics the pinna effect; wireless communication between the hearing aids was also present in this configuration (5). A broadband stimulus was used as the target signal. The directional hearing abilities of the listeners were also evaluated without hearing aids as a reference. Study Sample: A total of 12 listeners with moderate to severe hearing loss participated in this study. All were experienced hearing-aid users. As a reference, 11 listeners with normal hearing participated. Data Collection and Analysis: The participants were positioned in a 13-speaker array (left-right, –90°/+90°) or 7-speaker array (FB, 0–180°) and were asked to report the number of the loudspeaker located closest to where the sound was perceived.
The root mean square error was calculated for the left-right experiment, and the percentage of FB errors was used as a FB performance measure. Results were analyzed with repeated-measures analysis of variance. Results: For the left-right localization task, no significant differences could be proven between the unaided condition and both partial directional schemes and the omnidirectional scheme. The soft forward-oriented system and the asymmetric system did show a detrimental effect compared with the unaided condition. On average, localization was worst when users used the asymmetric condition. Analysis of the results of the FB experiment showed good performance, similar to unaided, with both the partial directional systems and the asymmetric configuration. Significantly worse performance was found with the omnidirectional and the omnidirectional soft forward-oriented BTE systems compared with the other hearing-aid systems. Conclusions: Bilaterally fitted partial directional systems preserve (part of) the binaural cues necessary for left-right localization and introduce, preserve, or enhance useful spectral cues that allow FB disambiguation. Omnidirectional systems, although good for left-right localization, do not provide the user with enough spectral information for an optimal FB localization performance.
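The two performance measures used in this study are straightforward to compute; a minimal sketch with hypothetical variable names, assuming presented and reported loudspeaker angles in degrees:

```python
import numpy as np

def rms_localization_error(presented_deg, reported_deg):
    """Root-mean-square error (degrees) for the left-right task."""
    diff = np.asarray(reported_deg, float) - np.asarray(presented_deg, float)
    return float(np.sqrt(np.mean(diff ** 2)))

def front_back_error_rate(presented_deg, reported_deg):
    """Fraction of trials where a front source (0-90 deg) was reported
    in the back hemifield (90-180 deg) or vice versa; 90 deg counts as
    neither hemifield."""
    p = np.asarray(presented_deg, float)
    r = np.asarray(reported_deg, float)
    confused = ((p < 90) & (r > 90)) | ((p > 90) & (r < 90))
    return float(np.mean(confused))
```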


1999 ◽  
Vol 105 (2) ◽  
pp. 1036-1036 ◽  
Author(s):  
Erno H. Langendijk ◽  
Adelbert W. Bronkhorst

2015 ◽  
Vol 19 (3-4) ◽  
pp. 213-222 ◽  
Author(s):  
Jonathan Lam ◽  
Bill Kapralos ◽  
Kamen Kanev ◽  
Karen Collins ◽  
Andrew Hogue ◽  
...  

2011 ◽  
Vol 48-49 ◽  
pp. 551-554 ◽  
Author(s):  
Yuan Yuan Cheng ◽  
Hai Yan Li ◽  
Qi Xiao ◽  
Yu Feng Zhang ◽  
Xin Ling Shi

A novel method is put forward for filtering Gaussian noise effectively using the variable-step time matrix of a simplified pulse-coupled neural network (PCNN). First, the time matrix of the PCNN, which is related to the grayscale and spatial information of an image, is calculated to identify noise-polluted pixels. Subsequently, a variable step based on the time matrix, a long step for strong noise and a short step for weak noise, is applied to modify the grayscale of the noised pixels in a sliding window. A Wiener filter is then applied to the image to further suppress the noise. Experiments show that the proposed filter removes Gaussian noise more effectively than other noise-reduction methods such as the median, mean and Wiener filters, and that the filtered image is smooth while details and edges remain sharp. Compared with existing PCNN-based Gaussian noise filters, the proposed filter achieves a higher peak signal-to-noise ratio (PSNR) and better performance.

