Enhanced Pulsed-Source Localization with 3 Hydrophones: Uncertainty Estimates

2021 ◽  
Vol 13 (9) ◽  
pp. 1817
Author(s):  
Despoina Pavlidi ◽  
Emmanuel K. Skarsoulis

The uncertainty behavior of an enhanced three-dimensional (3D) localization scheme for pulsed sources, based on relative travel times at a large-aperture three-hydrophone array, is studied. The localization scheme extends a two-hydrophone approach based on time differences between direct and surface-reflected arrivals, an approach with significant advantages but also drawbacks, such as left-right ambiguity, high range/depth uncertainties for broadside sources, and high bearing uncertainties for endfire sources. These drawbacks can be removed by adding a third hydrophone. The 3D localization problem separates into two subproblems: a range/depth estimation problem, for which only the hydrophone depths are needed, and a bearing estimation problem, which additionally requires the horizontal geometry of the hydrophones. The refraction of acoustic paths is taken into account using ray theory. The condition for the existence of surface-reflected arrivals can be relaxed by considering arrivals with an upper turning point, allowing localization at longer ranges. A Bayesian framework is adopted, allowing the estimation of localization uncertainties. Uncertainty estimates are obtained through analytic predictions and simulations, and are compared against two-hydrophone localization uncertainties as well as against two-dimensional localization based on direct arrivals.
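The range/depth part of such a scheme can be illustrated with a much simpler model than the paper's ray-theoretic Bayesian machinery: under a straight-line (isovelocity) image-source assumption, the surface reflection behaves as if emitted by a mirror source above the surface, and the reflected-minus-direct time difference at each hydrophone constrains the source range and depth. The geometry, sound speed, and grid-search inversion below are illustrative assumptions, not the authors' method:

```python
import numpy as np

C = 1500.0  # nominal sound speed (m/s); isovelocity, no refraction

def dt_surface_direct(x, z, h):
    """Surface-reflected minus direct travel time (image-source model).

    x: horizontal range to the hydrophone, z: source depth, h: hydrophone depth.
    The reflection is modeled as coming from a mirror source at depth -z.
    """
    t_direct = np.sqrt(x**2 + (z - h)**2) / C
    t_reflect = np.sqrt(x**2 + (z + h)**2) / C
    return t_reflect - t_direct

# Hypothetical geometry: two hydrophones in the vertical plane of the source.
h1, h2 = 100.0, 150.0           # hydrophone depths (m)
sep = 500.0                     # horizontal separation (m)
x_true, z_true = 3000.0, 200.0  # source range from hydrophone 1 and depth (m)

meas1 = dt_surface_direct(x_true, z_true, h1)
meas2 = dt_surface_direct(x_true - sep, z_true, h2)

# Grid search over the (range, depth) half-plane.  This resolves range/depth
# only; bearing (and the left-right ambiguity) needs the third hydrophone.
xs = np.arange(0.0, 5001.0, 10.0)
zs = np.arange(10.0, 501.0, 5.0)
X, Z = np.meshgrid(xs, zs)
resid = (dt_surface_direct(X, Z, h1) - meas1)**2 \
      + (dt_surface_direct(X - sep, Z, h2) - meas2)**2
iz, ix = np.unravel_index(np.argmin(resid), resid.shape)
x_est, z_est = xs[ix], zs[iz]
```

With noise-free time differences the minimizer lands on the true range and depth; in a Bayesian treatment the shape of this residual surface is what drives the uncertainty estimates.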

Author(s):  
Meiyan Zhang ◽  
Wenyu Cai

Background: Effective 3D localization in mobile underwater sensor networks remains an active research topic. Because underwater sensor networks are sparse, AUVs (Autonomous Underwater Vehicles) with precise positioning abilities can aid cooperative localization, so accurate localization methods are of practical importance. Methods: In this paper, a cooperative and distributed 3D-localization algorithm for sparse underwater sensor networks is proposed. The algorithm combines the advantages of recursive location estimation among reference nodes with the strong self-positioning ability of a mobile AUV. It applies MMSE (Minimum Mean Squared Error) recursive location estimation in the 2D horizontal plane projected from the 3D region, and then refines the positions of un-localized sensor nodes through multiple Time of Arrival (ToA) measurements to the mobile AUV. Results: Simulation results verify that the proposed cooperative 3D-localization scheme improves localization coverage ratio, average localization error and localization confidence level. Conclusion: The approach improves localization accuracy and coverage ratio for the whole underwater sensor network.
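The core ToA fix can be sketched with linearized least squares: ranges from several known AUV waypoints to a node give quadratic equations that become linear after subtracting one of them. This is a generic stand-in for the paper's MMSE recursive estimator, and the waypoints and node position are hypothetical:

```python
import numpy as np

def toa_lstsq(anchors, ranges):
    """Least-squares position fix from ranges to known anchor positions.

    Linearizes |x - p_i|^2 = d_i^2 by subtracting the first equation,
    then solves the resulting linear system in the least-squares sense.
    """
    p0, d0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - p0)
    b = d0**2 - ranges[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Hypothetical scenario: the AUV ranges a node from four waypoints
# (range = sound speed * one-way travel time, positions in metres).
waypoints = np.array([[0., 0., 5.], [100., 0., 5.],
                      [0., 100., 5.], [100., 100., 40.]])
node = np.array([40., 60., 80.])
ranges = np.linalg.norm(waypoints - node, axis=1)
est = toa_lstsq(waypoints, ranges)
```

With noise-free ranges and non-coplanar waypoints the fix is exact; repeated noisy measurements along the AUV track are what the recursive MMSE estimator averages over.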


2020 ◽  
Vol 24 (4) ◽  
pp. 2061-2081 ◽  
Author(s):  
Xudong Zhou ◽  
Jan Polcher ◽  
Tao Yang ◽  
Ching-Sheng Huang

Abstract. Ensemble estimates based on multiple datasets are frequently applied once many datasets are available for the same climatic variable. An uncertainty estimate based on the differences between the ensemble members is usually provided along with the ensemble mean to show to what extent the members are consistent with each other. However, a fundamental flaw of classic uncertainty estimates is that they consider the uncertainty in only one dimension (either the temporal variability or the spatial heterogeneity), while the variation along the other dimension is dismissed owing to limitations of their algorithms, resulting in an incomplete assessment of the uncertainties. This study introduces a three-dimensional variance partitioning approach and proposes a new uncertainty estimate (Ue) that includes the data uncertainties at both spatial and temporal scales. The new approach avoids pre-averaging in either of the spatiotemporal dimensions, and as a result the Ue estimate is around 20 % higher than the classic uncertainty metrics. The deviation of Ue from the classic metrics is most apparent for regions with strong spatial heterogeneity and where the variations differ significantly between temporal and spatial scales. This shows that the classic metrics underestimate the uncertainty through averaging, which amounts to a loss of information about the variations across spatiotemporal scales. Decomposing the formula for Ue shows that it integrates four different variations across the ensemble members, of which only two are represented in the classic uncertainty estimates. This decomposition explains both the correlation and the differences between the newly proposed Ue and the two classic uncertainty metrics. The new approach is implemented and analysed with multiple precipitation products of different types (e.g. gauge-based products, merged products and GCMs) which contain different sources of uncertainty with different magnitudes. Ue of the gauge-based precipitation products is the smallest, while Ue of the other products is generally larger because additional uncertainty sources are included and the observational constraints are not as strong as in gauge-based products. This new three-dimensional approach is flexible in its structure and particularly suitable for a comprehensive assessment of multiple datasets over large regions within any given period.
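The effect of pre-averaging can be demonstrated on a toy ensemble: a classic metric that averages over space before taking the cross-member spread misses member-dependent cell-level variation, while an Ue-style estimate computed at every (time, cell) sample retains it. The array sizes and the choice of averaging dimension below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy ensemble: 5 members x 120 months x 50 grid cells, with member-dependent
# spatial patterns, so that spatial pre-averaging hides part of the spread.
E = rng.normal(size=(5, 120, 50)) + rng.normal(size=(5, 1, 50))

# Classic metric: average over space first, then spread across members.
classic = np.mean(np.std(E.mean(axis=2), axis=0, ddof=1))

# Ue-style metric: spread across members at every (time, cell) sample,
# so variations in both dimensions contribute before any averaging.
ue = np.mean(np.std(E, axis=0, ddof=1))
```

On this toy example `ue` comes out far larger than `classic`, because the member-specific spatial patterns largely cancel in the spatial mean; the paper's formal decomposition identifies exactly which variance components are lost this way.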


Author(s):  
Muhammad Tariq Mahmood ◽  
Tae-Sun Choi

Three-dimensional (3D) shape reconstruction is a fundamental problem in machine vision applications. Shape from focus (SFF) is a passive optical method for 3D shape recovery that uses the degree of focus as a cue to estimate 3D shape. In this approach, a single focus measure operator is usually applied to measure the focus quality of each pixel in an image sequence. However, a single focus measure is of limited use for accurately estimating depth maps of diverse real-world objects. To address this problem, we develop an optimal composite depth (OCD) function through genetic programming (GP) for accurate depth estimation. The OCD function optimally combines primary information extracted using either a single focus measure (homogeneous features) or several focus measures (heterogeneous features). The genetically developed composite function is then used to compute the optimal depth map of objects. Its performance is investigated on both synthetic and real-world image sequences. Experimental results demonstrate that the proposed estimator is more accurate than existing SFF methods; moreover, the heterogeneous function proves more effective than the homogeneous one.
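A minimal SFF sketch along these lines: two standard focus measures are combined into a composite focus volume (fixed weights stand in for the GP-evolved OCD function) and depth is read off as the per-pixel argmax over a synthetic focus stack. The stack construction and weights are illustrative assumptions:

```python
import numpy as np

def modified_laplacian(img):
    # Sum-modified-Laplacian: a classic SFF focus measure
    return (np.abs(2*img - np.roll(img, 1, 1) - np.roll(img, -1, 1))
          + np.abs(2*img - np.roll(img, 1, 0) - np.roll(img, -1, 0)))

def gradient_energy(img):
    # Squared first differences: a second, independent focus cue
    return (np.roll(img, -1, 1) - img)**2 + (np.roll(img, -1, 0) - img)**2

# Synthetic focus stack: texture contrast peaks at each pixel's true depth.
rng = np.random.default_rng(1)
texture = rng.normal(size=(64, 64))
true_depth = np.broadcast_to(np.arange(64) // 8, (64, 64))  # 8 depth steps
stack = np.stack([texture * np.exp(-0.5 * (k - true_depth)**2)
                  for k in range(8)])

# Composite focus volume: fixed equal weights stand in for the OCD function.
ml = np.stack([modified_laplacian(f) for f in stack])
ge = np.stack([gradient_energy(f) for f in stack])
focus = 0.5 * ml / ml.max() + 0.5 * ge / ge.max()
depth_map = np.argmax(focus, axis=0)
```

The recovered depth map tracks the true depth closely except near depth discontinuities, where neighbouring pixels at different depths contaminate the focus measures; it is this kind of limitation that a learned combination of cues aims to reduce.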


Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5765 ◽  
Author(s):  
Seiya Ito ◽  
Naoshi Kaneko ◽  
Kazuhiko Sumi

This paper proposes a novel 3D representation, namely a latent 3D volume, for joint depth estimation and semantic segmentation. Most previous studies encoded an input scene (typically given as a 2D image) into a set of feature vectors arranged over a 2D plane. However, considering that the real world is three-dimensional, this 2D arrangement discards one dimension and may limit the capacity of the feature representation. In contrast, we examine the idea of arranging the feature vectors in 3D space rather than in a 2D plane, and we refer to this volumetric arrangement as a latent 3D volume. We show that the latent 3D volume benefits depth estimation and semantic segmentation because both tasks require an understanding of the 3D structure of the scene. Our network first constructs an initial 3D volume from image features and then generates the latent 3D volume by passing the initial volume through several 3D convolutional layers. Depth regression and semantic segmentation are performed by projecting the latent 3D volume onto a 2D plane. Evaluation results show that our method outperforms previous approaches on the NYU Depth v2 dataset.
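The projection step can be sketched in plain NumPy: a channel of the volume is interpreted as per-voxel scores along the depth axis, collapsed into a per-pixel depth expectation, while other channels are pooled over depth into segmentation logits. The channel assignments, depth bins and pooling choices here are hypothetical, not the paper's trained network:

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
C, D, H, W = 8, 16, 4, 4
latent = rng.normal(size=(C, D, H, W))  # stands in for the 3D-conv output

# Depth head: softmax one channel along the depth axis and take the
# expectation of (hypothetical) metric depth-plane values at each pixel.
depth_bins = np.linspace(0.5, 8.0, D)
weights = softmax(latent[0], axis=0)             # (D, H, W), sums to 1 over D
depth = np.tensordot(depth_bins, weights, axes=(0, 0))  # (H, W)

# Segmentation head: collapse the depth axis (max-pool) to obtain
# per-pixel class logits from the remaining channels.
num_classes = 5
logits = latent[1:1 + num_classes].max(axis=1)   # (num_classes, H, W)
labels = np.argmax(logits, axis=0)
```

Both heads read the same latent volume, which is the point of the representation: the depth structure is shared rather than re-estimated per task.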


2019 ◽  
Vol 11 (7) ◽  
pp. 577-583 ◽  
Author(s):  
Aleksey Barkhatov ◽  
Evgenii Vorobev ◽  
Vladimir Veremyev ◽  
Vladimir Kutuzov

This article presents the configuration and technical specification of a passive radar exploiting third-party transmitters of the second-generation digital video broadcasting standard DVB-T2 as illuminators of opportunity. The performance of the two-dimensional (2D) passive radar, estimated through theoretical and experimental studies, is described. A possible configuration of a 2D non-equidistant antenna array for a three-dimensional (3D) passive radar is proposed to enable 3D localization of detected targets. Experimental results on drone detection with the 3D passive radar show that a radar with a 2D antenna array is capable of measuring not only azimuth but also elevation, and consequently target altitude.
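Why a 2D (planar) array yields elevation as well as azimuth can be sketched with ideal phase-based direction finding: element phases of a plane wave are linear in the two direction cosines, which an array extended along both axes can solve for. The L-shaped geometry, wavelength, and assumption of ideal noise-free unwrapped phases below are illustrative, not the article's antenna design:

```python
import numpy as np

wavelength = 0.5    # m, roughly DVB-T2 UHF (~600 MHz)
d = wavelength / 2  # half-wavelength spacing avoids phase ambiguity

# Hypothetical L-shaped planar array: elements along x and along y (metres).
px = np.array([[i * d, 0.0] for i in range(4)])
py = np.array([[0.0, i * d] for i in range(1, 4)])
pos = np.vstack([px, py])

az_true, el_true = np.deg2rad(40.0), np.deg2rad(25.0)
u = np.array([np.cos(el_true) * np.cos(az_true),
              np.cos(el_true) * np.sin(az_true)])  # direction cosines (x, y)

phases = 2 * np.pi / wavelength * pos @ u  # ideal element phases

# Least-squares fit of the two direction cosines from the phases,
# then conversion to azimuth and elevation.
A = 2 * np.pi / wavelength * pos
u_est, *_ = np.linalg.lstsq(A, phases, rcond=None)
az = np.arctan2(u_est[1], u_est[0])
el = np.arcsin(np.sqrt(max(0.0, 1 - u_est @ u_est)))
```

A purely linear (1D) array would constrain only one direction cosine, which is exactly why the 2D array is needed to recover elevation and hence target altitude.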


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 500 ◽  
Author(s):  
Luca Palmieri ◽  
Gabriele Scrofani ◽  
Nicolò Incardona ◽  
Genaro Saavedra ◽  
Manuel Martínez-Corral ◽  
...  

Light field technologies have risen in recent years, and microscopy is a field where they have had a deep impact. The ability to capture spatial and angular information simultaneously, in a single shot, brings several advantages and enables new applications. A common goal in these applications is the calculation of a depth map to reconstruct the three-dimensional geometry of the scene. Many approaches are applicable, but most of them cannot achieve high accuracy because of the nature of such images: biological samples are usually poor in features and do not exhibit sharp colors like natural scenes. Under such conditions, standard approaches produce noisy depth maps. In this work, a robust approach is proposed that produces accurate depth maps by exploiting the information recorded in the light field, in particular images produced with a Fourier integral microscope. The proposed approach divides into three main parts. First, it creates two cost volumes using different focus cues, namely correspondence and defocus. Second, it applies filtering methods that exploit multi-scale and superpixel cost aggregation to reduce noise and enhance accuracy. Finally, it merges the two cost volumes and extracts a depth map through multi-label optimization.
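The merge-and-extract step can be sketched as follows, with synthetic cost volumes and plain winner-takes-all in place of the paper's multi-scale/superpixel aggregation and multi-label optimization; the volume sizes and noise model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
D, H, W = 16, 32, 32
true_depth = rng.integers(0, D, size=(H, W))

# Two noisy cost volumes (lower = better), as if built from correspondence
# and defocus cues; each has its minimum near the true depth label.
base = np.abs(np.arange(D)[:, None, None] - true_depth[None])
cost_corr = base + 0.8 * rng.random((D, H, W))
cost_defoc = base + 0.8 * rng.random((D, H, W))

def normalize(c):
    # Rescale each volume to [0, 1] so the two cues are comparable
    return (c - c.min()) / (c.max() - c.min())

# Merge the cues and take winner-takes-all over depth labels.
merged = 0.5 * normalize(cost_corr) + 0.5 * normalize(cost_defoc)
depth_map = np.argmin(merged, axis=0)
```

Averaging the two normalized volumes suppresses cue-specific noise before label selection; in the paper, multi-label optimization additionally enforces spatial smoothness that plain per-pixel argmin cannot.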


2015 ◽  
Vol 764-765 ◽  
pp. 1227-1233
Author(s):  
Kuan Yu Chen ◽  
Chien Hung Chen ◽  
Cheng Chin Chien

Acquiring three-dimensional data from a pair of stereo images is called stereovision, which has been studied for decades. However, most previous studies on this topic focused on stereovision parameter matching and drew conclusions under the premise of fixed focus. With the rapid development of multimedia technology, varifocal digital cameras have recently come into wide use in robotic applications. In general, the error in the depth estimate grows when the focus and aperture are unknown or not fixed. For that reason, a three-stage framework is proposed in this paper to modify the conventional stereovision model and improve the accuracy of depth estimation. The first stage modifies the computational model of conventional stereovision for varifocal cameras. The second alters the spacing of the non-uniformly spaced discrete depth levels so that it is unaffected by changes in focal length. Finally, considering the affine transformation, a deformation coefficient is added to the modified stereovision model to correct three-dimensional affine deformations. Experimental results demonstrate that depth estimation from stereo images using the proposed scheme is more accurate than with the conventional method.
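Why an unknown focal length biases depth follows directly from the rectified pinhole-stereo relation Z = f·B/d: a relative error in the assumed focal length maps one-to-one into a relative depth error. The rig parameters below are hypothetical, and this shows only the motivation, not the paper's three-stage correction:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Rectified pinhole-stereo depth: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Hypothetical varifocal rig: 60 mm baseline; the camera has zoomed, but
# the stereo model still assumes the old (stale) focal length.
B = 0.06
f_true, f_assumed = 1400.0, 1200.0  # focal lengths in pixels
Z_true = 2.0                        # metres
d = f_true * B / Z_true             # disparity actually observed

Z_wrong = stereo_depth(d, f_assumed, B)   # depth with the stale focal length
rel_err = abs(Z_wrong - Z_true) / Z_true  # equals |f_assumed/f_true - 1|
```

Here the ~14 % focal-length mismatch produces a ~14 % depth error regardless of range, which is why a varifocal stereovision model must track the focal length rather than treat it as fixed.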

