Integration of Multiple Microphone Arrays and Use of Sound Reflections for 3D Localization of Sound Sources

Author(s): Carlos T. Ishi, Jani Even, Norihiro Hagita

Sensors, 2020, Vol 20 (3), pp. 597
Author(s): Alberto Izquierdo, Lara del Val, Juan J. Villacorta, Weikun Zhen, Sebastian Scherer, ...

Detecting and finding people are complex tasks when visibility is reduced, for example during a fire, when heat sources and large amounts of smoke are generated. Under these circumstances, locating survivors with thermal or conventional cameras is not possible, and alternative techniques are needed. The challenge of this work was to analyze whether it is feasible to integrate an acoustic camera, developed at the University of Valladolid, on an unmanned aerial vehicle (UAV) to locate, by sound, people calling for help in enclosed environments with reduced visibility. The acoustic array, based on MEMS (micro-electro-mechanical system) microphones, locates acoustic sources in space, while the UAV navigates autonomously through closed enclosures. This paper presents the first experimental results on locating the angles of arrival of multiple sound sources, including the cries for help of a person, in an enclosed environment. The results are promising: the system proves able to discriminate the noise generated by the propellers of the UAV while identifying the angles of arrival of the direct sound signal and of its first echoes from reflective surfaces.
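The core operation above, estimating angles of arrival with a microphone array, can be sketched with frequency-domain delay-and-sum beamforming. The sketch below is illustrative only: a uniform linear array and free-field plane waves are assumed, and the actual acoustic camera uses its own geometry and processing.

```python
import numpy as np

def steered_power(signals, fs, mic_x, c=343.0, angles=None):
    """Delay-and-sum beamformer output power over candidate arrival angles.

    signals : (M, N) array, one row per microphone of a linear array
    mic_x   : (M,) microphone positions along the array axis [m]
    Returns the angle grid [deg] and the summed output power per angle.
    """
    if angles is None:
        angles = np.linspace(-90.0, 90.0, 181)
    M, N = signals.shape
    freqs = np.fft.rfftfreq(N, 1.0 / fs)
    X = np.fft.rfft(signals, axis=1)                      # (M, F)
    power = np.empty(len(angles))
    for i, a in enumerate(angles):
        tau = mic_x * np.sin(np.deg2rad(a)) / c           # per-mic delays [s]
        W = np.exp(2j * np.pi * freqs[None, :] * tau[:, None])
        y = (W * X).sum(axis=0)                           # align and sum
        power[i] = np.sum(np.abs(y) ** 2)
    return angles, power
```

Peaks of `power` over the angle grid indicate arrival directions; a direct sound and its first reflections would appear as separate peaks.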


2017, Vol 29 (1), pp. 83-93
Author(s): Kouhei Sekiguchi, Yoshiaki Bando, Katsutoshi Itoyama, Kazuyoshi Yoshii

[Figure: Optimizing robot positions for source separation] The active audition method presented here improves source separation performance by moving multiple mobile robots to optimal positions. One advantage of using multiple mobile robots, each equipped with a microphone array, is that each robot can work independently or as part of a large reconfigurable array. To determine the optimal layout of the robots, we must be able to predict source separation performance from source position information, because the actual source signals are unknown and the actual separation performance cannot be calculated. Our method therefore simulates delay-and-sum beamforming for a candidate layout to calculate the gain theoretically, i.e., the expected ratio of a target sound source to the other sound sources in the corresponding separated signal. The robots are moved into the layout with the highest average gain over the target sources. Experimental results showed that our method improved the harmonic mean of signal-to-distortion ratios (SDRs) by 5.5 dB in simulation and by 3.5 dB in a real environment.
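The gain-prediction step can be sketched as follows. The function below is a minimal free-field model, not the authors' implementation: it steers a delay-and-sum beamformer at the target source and returns the ratio of average target power to average interference power over a set of frequencies. All positions, frequencies, and the 1/r spreading model are assumptions.

```python
import numpy as np

def layout_gain(mic_pos, src_pos, target, freqs, c=343.0):
    """Predicted delay-and-sum gain of a microphone layout for one target.

    mic_pos : (M, 2) pooled microphone positions of all robots [m]
    src_pos : (S, 2) known source positions [m]
    target  : index of the target source in src_pos
    Returns average target power / average interference power over freqs.
    """
    d = np.linalg.norm(mic_pos[:, None, :] - src_pos[None, :, :], axis=2)  # (M, S)
    target_power = 0.0
    interference = 0.0
    for f in freqs:
        k = 2.0 * np.pi * f / c
        w = np.exp(1j * k * d[:, target])        # steer (focus) on the target
        a = np.exp(-1j * k * d) / d              # free-field response, 1/r decay
        resp = np.abs(w @ a) ** 2                # (S,) output power per source
        target_power += resp[target]
        interference += resp.sum() - resp[target]
    return target_power / interference
```

Choosing among candidate robot layouts then amounts to evaluating this gain for each layout and keeping the one with the highest value.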


Author(s): Kouhei Sekiguchi, Yoshiaki Bando, Keisuke Nakamura, Kazuhiro Nakadai, Katsutoshi Itoyama, ...

Author(s): Taiki Yamada, Katsutoshi Itoyama, Kenji Nishida, Kazuhiro Nakadai

Drone audition techniques are helpful for listening to target sound sources from the sky and can be used for human search tasks at disaster sites. Among the many techniques required for drone audition, sound source tracking is essential, and several tracking methods have therefore been proposed. The authors have also proposed a sound source tracking method that utilizes multiple microphone arrays to obtain the likelihood distribution of the sound source locations. These methods have been demonstrated in benchmark experiments. However, their performance against various sound sources with different distances and signal-to-noise ratios (SNRs) has been less thoroughly evaluated. Since drone audition often needs to listen to distant sound sources, and the input acoustic signal generally has a low SNR due to drone noise, a performance assessment against source distance and SNR is essential. This paper therefore presents a concrete evaluation of sound source tracking methods using numerical simulation, focusing on various source distances and SNRs. The simulated results capture how the tracking performance changes as the sound source distance and SNR change. The proposed approach based on location distribution estimation tended to be more robust against increasing distance, while existing approaches based on direction estimation tended to be more robust against decreasing SNR.
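One simple way to picture a likelihood distribution of source locations obtained from multiple microphone arrays is to fuse per-array direction-of-arrival (DOA) estimates on a position grid. The sketch below assumes 2D geometry and von Mises angular noise; it is a hypothetical stand-in, not the authors' method.

```python
import numpy as np

def fuse_doa_likelihood(array_pos, doa_meas, kappa, grid_x, grid_y):
    """Fuse per-array DOA estimates into a source-location likelihood grid.

    array_pos : list of (x, y) microphone-array positions
    doa_meas  : measured direction of arrival per array [rad]
    kappa     : von Mises concentration per array (larger = more confident)
    Returns the joint likelihood on the (x, y) grid, scaled to max 1.
    """
    X, Y = np.meshgrid(grid_x, grid_y, indexing="ij")
    loglik = np.zeros_like(X)
    for (ax, ay), mu, k in zip(array_pos, doa_meas, kappa):
        theta = np.arctan2(Y - ay, X - ax)   # bearing from array to each cell
        loglik += k * np.cos(theta - mu)     # von Mises log-density (+ const)
    return np.exp(loglik - loglik.max())
```

The grid's maximum lies where the arrays' bearing cones intersect; a lower kappa (e.g. at low SNR) broadens the distribution rather than breaking the estimate outright.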


2021
Author(s): Matthew Kamrath, Vladimir Ostashev, D. Wilson, Michael White, Carl Hart, ...

Sound propagation along vertical and slanted paths through the near-ground atmosphere impacts the detection and localization of low-altitude sound sources, such as small unmanned aerial vehicles, from ground-based microphone arrays. This article experimentally investigates the amplitude and phase fluctuations of acoustic signals propagating along such paths. The experiment involved nine microphones on three horizontal booms mounted at different heights on a 135-m meteorological tower at the National Wind Technology Center (Boulder, CO). A ground-based loudspeaker was placed at the base of the tower for vertical propagation, or 56 m from the base of the tower for slanted propagation. Phasor scatterplots qualitatively characterize the amplitude and phase fluctuations of the received signals during different meteorological regimes. The measurements are also compared to a theory describing the log-amplitude and phase variances based on the spectrum of shear- and buoyancy-driven turbulence near the ground. Generally, the theory correctly predicts the measured log-amplitude variances, which are affected primarily by small-scale, isotropic turbulent eddies. However, the theory overpredicts the measured phase variances, which are affected primarily by large-scale, anisotropic, buoyantly driven eddies. Ground blocking of these large eddies likely explains the overprediction.
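The basic statistics behind the phasor scatterplots, the log-amplitude and phase variances of the received narrowband signals, can be computed from the complex demodulates. A minimal sketch follows; the estimator choices (e.g. referencing phase to the circular mean) are assumptions, not the paper's exact processing.

```python
import numpy as np

def fluctuation_variances(phasors):
    """Log-amplitude and phase variances of narrowband received phasors.

    phasors : complex array of receptions A * exp(1j * phi)
    The log-amplitude fluctuation is chi = ln A - <ln A>; phase is taken
    relative to the circular mean so a constant offset does not matter.
    """
    A = np.abs(phasors)
    chi = np.log(A) - np.mean(np.log(A))               # log-amplitude fluctuation
    mean_dir = np.angle(np.mean(phasors))              # circular mean phase
    phi = np.angle(phasors * np.exp(-1j * mean_dir))   # phase about the mean
    return np.var(chi), np.var(phi)
```

These two variances are exactly the quantities the measured data are compared against in the turbulence theory.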


2017, Vol 16 (4-5), pp. 418-430
Author(s): Gert Herold, Florian Zenger, Ennes Sarradj

Microphone arrays can be used to detect sound sources on rotating machinery. For this study, experiments with three different axial fans, featuring backward-skewed, unskewed, and forward-skewed blades, were conducted in a standardized fan test chamber. The measured data are processed using the virtual rotating array method. Subsequent application of beamforming and deconvolution in the frequency domain allows the localization and quantification of separate sources as they appear at different regions on the blades. By evaluating broadband spectra of the leading and trailing edges of the blades, the phenomena governing the acoustic characteristics of the fans at different operating points are identified. This enables a detailed discussion of the influence of blade design on the radiated noise.
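The virtual rotating array idea, resampling a static ring of microphones so that the virtual channels co-rotate with the fan, can be sketched with per-sample interpolation between adjacent microphones. This is a simplified time-domain sketch under assumed parameters (equally spaced ring, linear interpolation), not the processing used in the study.

```python
import numpy as np

def virtual_rotating_array(signals, fs, rpm):
    """Interpolate a static microphone ring into the rotating frame.

    signals : (M, N) signals of M mics equally spaced on a circle
    rpm     : rotor speed; positive = rotation toward increasing mic index
    Returns (M, N) virtual channels that co-rotate with the rotor, built by
    linear interpolation between the two nearest physical mics per sample.
    """
    M, N = signals.shape
    t = np.arange(N) / fs
    shift = (rpm / 60.0) * t * M               # rotation in units of mic spacing
    cols = np.arange(N)
    out = np.empty_like(signals)
    for m in range(M):
        idx = (m + shift) % M                  # fractional mic index per sample
        lo = np.floor(idx).astype(int) % M
        hi = (lo + 1) % M
        w = idx - np.floor(idx)
        out[m] = (1 - w) * signals[lo, cols] + w * signals[hi, cols]
    return out
```

In the rotating frame, a source fixed to a blade becomes stationary, so standard beamforming and deconvolution can then be applied to the virtual channels.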


2016, Vol 138 (6)
Author(s): R. N. Miles

An analysis is presented of the performance benefits that can be achieved by introducing acoustic coupling between the diaphragms in an array of miniature microphones. This coupling is analogous to the principles employed in the ears of small animals that are able to localize sound sources. Measured results are shown which indicate that a dramatic improvement in acoustic sensitivity and noise performance can be achieved by packaging a pair of small microphones so that their diaphragms share a common back volume of air. The coupling is also shown to reduce the adverse effects of mismatches in the mechanical properties of the microphones on the directional response.


2015, Vol 39 (1), pp. 81-88
Author(s): Daniel Fernández Comesana, Keith R. Holland, Dolores García Escribano, Hans-Elias de Bree

Sound localization problems are usually tackled by acquiring data from phased microphone arrays and applying acoustic holography or beamforming algorithms. However, the number of sensors required to achieve reliable results is often prohibitive, particularly if the frequency range of interest is wide. It is shown that the number of sensors required can be reduced dramatically, provided the sound field is time-stationary. Scanning techniques such as “Scan & Paint” allow data to be gathered across a sound field quickly and efficiently, using only a single sensor and a webcam. It is also possible to characterize the relative phase field by including an additional static microphone during the acquisition process. This paper presents the theoretical and experimental basis of the proposed method for localizing sound sources using only one fixed microphone and one moving acoustic sensor. The accuracy and resolution of the method have been shown to be comparable to those of large microphone arrays, thus constituting so-called “virtual phased arrays”.
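The phase-referencing idea, making scans recorded at different times phase-comparable through a fixed reference microphone, can be sketched with cross-spectra. Everything below (the segment format, Hann window, single-frequency evaluation) is an illustrative assumption, not the “Scan & Paint” implementation.

```python
import numpy as np

def relative_phase_map(scans, fs, f0):
    """Amplitude and relative phase of a scanned field at frequency f0.

    scans : list of (moving_mic_segment, reference_mic_segment) pairs, one
            per scan position, each pair recorded simultaneously.
    The phase at each position is taken from the cross-spectrum with the
    fixed reference, so segments recorded at different times stay
    phase-comparable.
    """
    amps, phases = [], []
    for x, r in scans:
        n = len(x)
        f = np.fft.rfftfreq(n, 1.0 / fs)
        k = np.argmin(np.abs(f - f0))                  # bin closest to f0
        X = np.fft.rfft(x * np.hanning(n))[k]
        R = np.fft.rfft(r * np.hanning(n))[k]
        amps.append(np.abs(X))
        phases.append(np.angle(X * np.conj(R)))        # phase of x vs reference
    return np.array(amps), np.array(phases)
```

Because each position's phase is measured against the same static reference, the recovered amplitude-and-phase map plays the role of a simultaneous multichannel array snapshot: the “virtual phased array”.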

