Deep-learning source localization using autocorrelation functions from a single hydrophone in deep ocean

2021 ◽  
Vol 1 (3) ◽  
pp. 036002
Author(s):  
Yining Liu ◽  
Haiqiang Niu ◽  
Zhenglin Li ◽  
Mengyuan Wang
2021 ◽  
Vol 150 (4) ◽  
pp. A315-A315
Author(s):  
Jhon A. Castro-Correa ◽  
Christian D. Escobar-Amado ◽  
Mohsen Badiey ◽  
Tracianne B. Neilsen ◽  
David P. Knobles

Author(s):  
Luca Comanducci ◽  
Federico Borra ◽  
Paolo Bestagini ◽  
Fabio Antonacci ◽  
Stefano Tubaro ◽  
...  

2019 ◽  
Vol 105 (5) ◽  
pp. 888-891
Author(s):  
Wenxu Liu ◽  
Yixin Yang ◽  
Xijing Guo ◽  
Yong Wang ◽  
Yang Shi

Bottom bounce is a typical sound propagation mode in the deep ocean. A receiver moored below the sea surface receives sound waves transmitted from a submerged source along multiple paths, four of which travel by the bottom-bounce mode and play the leading role in source localization. A passive broadband source localization method is proposed herein based on these four arrival paths and a single receiver. Through autocorrelation of the signal, which is mainly composed of the four multipath arrivals, six positive relative time differences that depend on the source range and depth are obtained by pairwise combination. The time differences are sorted in ascending order to form a measured vector. A ray-tracing model is used to predict the replica vector, and the source is localized by fitting the replica vector to the measured one. An experiment was conducted in the South China Sea, in which the proposed method successfully localized a source at a depth of 47 m below the surface and a distance of 13.6 km.
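The construction of the measured vector described above can be sketched as follows. This is a minimal illustration under the assumption of an ideal, impulse-like multipath signal; the function name `multipath_lags` and its simple local-maximum peak picker are our own hypothetical choices, not the authors' implementation (which fits the sorted lags against ray-tracing replicas).

```python
import numpy as np

def multipath_lags(x, fs, n_peaks=6, min_lag_s=0.01):
    """Pick the n_peaks strongest positive-lag autocorrelation peaks
    as candidate inter-path time differences (hypothetical sketch).

    With four dominant arrivals, the 4-choose-2 = 6 pairwise delay
    differences appear as positive-lag peaks of the autocorrelation."""
    # Autocorrelation, positive lags only.
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    # Suppress the zero-lag lobe so it is not picked as a peak.
    ac[: int(min_lag_s * fs)] = 0.0
    # Naive local-maximum peak picking.
    peaks = [i for i in range(1, len(ac) - 1)
             if ac[i] > ac[i - 1] and ac[i] > ac[i + 1]]
    # Keep the strongest peaks, then sort lags ascending (measured vector).
    peaks = sorted(peaks, key=lambda i: ac[i], reverse=True)[:n_peaks]
    return np.sort(np.array(peaks, dtype=float) / fs)
```

For example, a signal containing four unit impulses at 0, 50, 120, and 200 samples yields the six sorted pairwise lags 50, 70, 80, 120, 150, and 200 samples; a real hydrophone signal would require a more robust peak picker.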


Author(s):  
Alif Bin Abdul Qayyum ◽  
K. M. Naimul Hassan ◽  
Adrita Anika ◽  
Md. Farhan Shadiq ◽  
Md Mushfiqur Rahman ◽  
...  

Abstract Drone-embedded sound source localization (SSL) has interesting application prospects in challenging search and rescue scenarios with bad lighting conditions or occlusions. However, the problem is complicated by severe drone ego-noise, which may result in negative signal-to-noise ratios in the recorded microphone signals. In this paper, we present our work on drone-embedded SSL using recordings from an 8-channel cube-shaped microphone array embedded in an unmanned aerial vehicle (UAV). As baselines, we use angular-spectrum-based TDOA (time difference of arrival) estimation methods such as generalized cross-correlation with phase transform (GCC-PHAT) and minimum variance distortionless response (MVDR), which are state-of-the-art techniques for SSL. Although we improve the baselines by reducing ego-noise with a speed-correlated harmonics cancellation (SCHC) technique, our main focus is on using deep learning to solve this challenging problem. We propose an end-to-end deep learning model for SSL, called DOANet, based on a one-dimensional dilated convolutional neural network that computes the azimuth and elevation angles of the target sound source from the raw audio signal. The advantage of DOANet is that it requires neither hand-crafted audio features nor ego-noise reduction for DOA estimation. We then evaluate SSL performance using the proposed and baseline methods and find that DOANet shows promising results compared with the angular-spectrum methods both with and without SCHC. To compare the different methods, we also introduce a well-known metric, the area under the curve (AUC) of cumulative histograms of angular deviations, as a performance indicator which, to our knowledge, has not previously been used for this type of problem.
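The GCC-PHAT baseline mentioned above estimates the TDOA between two microphone channels by whitening the cross-power spectrum before the inverse transform, which sharpens the correlation peak. The following is a generic textbook sketch, not the authors' code; the function name and the small epsilon guarding the whitening division are our own assumptions.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the TDOA (seconds) of sig relative to ref via GCC-PHAT."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    # Cross-power spectrum, whitened by the phase transform.
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12
    cc = np.fft.irfft(R, n=n)
    # Rearrange so index max_shift corresponds to zero lag.
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[: max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs
```

In an array, the TDOAs between microphone pairs are then mapped to azimuth and elevation via the known array geometry; `max_tau` can be set to the maximum physically possible delay (inter-microphone spacing divided by the speed of sound) to constrain the peak search.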

