Application of a Real-Time Visualization Method of AUVs in Underwater Visual Localization

2019 ◽  
Vol 9 (7) ◽  
pp. 1428 ◽  
Author(s):  
Ran Wang ◽  
Xin Wang ◽  
MingMing Zhu ◽  
YinFu Lin

Autonomous underwater vehicles (AUVs) are widely used, but guaranteeing their underwater localization accuracy remains a major challenge. In this paper, a novel method is proposed to improve the accuracy of vision-based localization systems in feature-poor underwater environments. The traditional stereo visual simultaneous localization and mapping (SLAM) algorithm, which relies on the detection and tracking of features, is used to estimate the position of the camera and build a map of the environment. However, it is hard to find enough reliable point features in underwater environments, which degrades the algorithm's performance. To resolve this problem, a stereo point-and-line SLAM (PL-SLAM) algorithm for localization, which utilizes point and line information simultaneously, was investigated in this study. Experiments with an AR (augmented reality) marker were carried out to validate the accuracy and effectiveness of the investigated algorithm.
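The point-and-line idea can be illustrated with a small sketch of the two residual types such a system minimizes. This is an illustrative reconstruction, not the authors' implementation: the function names, the pinhole model, and the homogeneous line parameterization are all assumptions.

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D point X into the image with intrinsics K and camera pose (R, t)."""
    x_cam = R @ X + t
    x_img = K @ x_cam
    return x_img[:2] / x_img[2]

def point_error(K, R, t, X, obs):
    """Reprojection error of a 3D point against its 2D observation."""
    return np.linalg.norm(project(K, R, t, X) - obs)

def line_error(K, R, t, P, Q, line_obs):
    """Line reprojection error: distance of the projected 3D segment endpoints
    P, Q to the detected 2D line, given in homogeneous form l = (a, b, c) with
    a^2 + b^2 = 1, so that l . (u, v, 1) is a signed point-to-line distance."""
    e1 = line_obs @ np.append(project(K, R, t, P), 1.0)
    e2 = line_obs @ np.append(project(K, R, t, Q), 1.0)
    return np.hypot(e1, e2)
```

In a PL-SLAM-style back end, both residual types would be stacked into one least-squares problem over poses and landmarks, so line segments keep constraining the pose when point features become scarce.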

2019 ◽  
Vol 11 (23) ◽  
pp. 2827 ◽  
Author(s):  
Narcís Palomeras ◽  
Marc Carreras ◽  
Juan Andrade-Cetto

Exploration of a complex underwater environment without an a priori map is beyond the state of the art for autonomous underwater vehicles (AUVs). Despite several efforts regarding simultaneous localization and mapping (SLAM) and view planning, there is no exploration framework, tailored to underwater vehicles, that addresses exploration by combining mapping, active localization, and view planning in a unified way. We propose an exploration framework, based on an active SLAM strategy, that combines three main elements: a view planner, an iterative closest point (ICP)-based pose-graph SLAM algorithm, and an action selection mechanism that exploits the joint reduction of map and state entropy. To demonstrate the benefits of the active SLAM strategy, several tests were conducted with the Girona 500 AUV, both in simulation and in the real world. The article shows how the proposed framework makes it possible to plan exploratory trajectories that keep the vehicle's uncertainty bounded, thus producing more consistent maps.
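The action selection step described above can be sketched as scoring each candidate view by the joint (map + pose) entropy it is predicted to leave behind, minus a travel cost. This is a minimal sketch under assumed inputs: the candidate dictionary fields and the Gaussian pose model are hypothetical, not the paper's actual data structures.

```python
import math

def gaussian_entropy(cov_det, dim):
    """Differential entropy of a dim-dimensional Gaussian with covariance determinant cov_det."""
    return 0.5 * math.log(((2 * math.pi * math.e) ** dim) * cov_det)

def select_action(candidates):
    """Pick the candidate view maximizing joint (map + pose) entropy reduction.
    Each candidate is a dict with a predicted map entropy, a predicted pose
    covariance determinant, and a travel cost (all hypothetical fields)."""
    def utility(c):
        pose_h = gaussian_entropy(c["pose_cov_det"], dim=3)
        return -(c["map_entropy"] + pose_h) - c["cost"]
    return max(candidates, key=utility)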


2018 ◽  
Vol 37 (12) ◽  
pp. 1500-1516 ◽  
Author(s):  
Simon Rohou ◽  
Peter Franek ◽  
Clément Aubry ◽  
Luc Jaulin

In this paper we present a reliable method to verify the existence of loops along the uncertain trajectory of a robot, based on proprioceptive measurements only, within a bounded-error context. Loop closure detection is one of the key points in simultaneous localization and mapping (SLAM) methods, especially in homogeneous environments where scene recognition is difficult. The proposed approach is generic and can be coupled with conventional SLAM algorithms to reliably reduce their computational burden, thus improving the localization and mapping processes in the most challenging environments, such as unexplored underwater extents. To prove that a robot performed a loop regardless of the uncertainties in its evolution, we employ the notion of topological degree, which originates in the field of differential topology. We show that a verification tool based on the topological degree is an optimal method for proving robot loops. This is demonstrated both on datasets from real missions involving autonomous underwater vehicles and by a mathematical discussion.
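The topological-degree idea can be illustrated in a simplified planar setting: a loop between times t1 and t2 is a zero of the displacement map f(t1, t2) = p(t2) - p(t1), and a nonzero degree of f on the boundary of a (t1, t2) box proves a zero exists inside it. The sketch below computes that degree as a winding number on a sampled box boundary; it is an illustrative simplification (point-valued trajectory, fixed sampling), not the paper's guaranteed interval-based test.

```python
import math

def winding_number(points):
    """Total signed angle of a closed planar curve around the origin, in turns.
    A nonzero result proves the curve encloses the origin, i.e. f has a zero
    inside the parameter box."""
    total = 0.0
    for i in range(len(points)):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % len(points)]
        total += math.atan2(x1 * y2 - y1 * x2, x1 * x2 + y1 * y2)
    return round(total / (2 * math.pi))

def loop_degree(traj, t1_range, t2_range, samples=64):
    """Degree of f(t1, t2) = traj(t2) - traj(t1) on the boundary of the box
    t1_range x t2_range; traj maps a float parameter to an (x, y) tuple."""
    a, b = t1_range
    c, d = t2_range
    boundary = []
    # walk the boundary of the (t1, t2) box counter-clockwise
    for k in range(samples):
        s = k / samples
        boundary.append((a + s * (b - a), c))
    for k in range(samples):
        s = k / samples
        boundary.append((b, c + s * (d - c)))
    for k in range(samples):
        s = k / samples
        boundary.append((b - s * (b - a), d))
    for k in range(samples):
        s = k / samples
        boundary.append((a, d - s * (d - c)))
    vals = []
    for t1, t2 in boundary:
        x1, y1 = traj(t1)
        x2, y2 = traj(t2)
        vals.append((x2 - x1, y2 - y1))
    return winding_number(vals)
```

For the self-intersecting curve p(t) = (t^2 - 1, t^3 - t), which crosses itself at t = -1 and t = 1, the degree over a box around (-1, 1) is nonzero, certifying the loop without ever locating it exactly.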


Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4442 ◽  
Author(s):  
Xu Bo ◽  
Asghar Razzaqi ◽  
Xiaoyu Wang

The cooperative localization of submerged autonomous underwater vehicles (AUVs) using Time Difference of Arrival (TDOA) measurements from surface AUV sensors is an effective method for many AUV applications. Proper positioning of the sensors to maximize the observability of the AUVs is critical for cooperative localization. In this paper, a novel method for obtaining the optimal formation of sensor AUVs is presented for the three-dimensional (3D) cooperative localization of targets using the TDOA technique. An evaluation function for estimating the optimal formation is derived from Fisher information matrix (FIM) theory, for single-target as well as multiple-target cooperative localization systems. An iterative stepping algorithm is used to solve the evaluation function and obtain the optimal positions of the sensors. The algorithm ensures that the computational complexity remains limited even as the number of sensor AUVs increases. Various simulation examples are then presented to calculate the optimal formation for different systems and situations. The effect of the position of the reference sensor and the operating depth of the target AUVs on the optimal formation of the sensors is also studied, and conclusions are drawn. To address more practical scenarios, a simulation example is also presented for the case in which the target's position is known only with uncertainty.
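The FIM-based evaluation can be sketched for a single target: each TDOA measurement is a range difference relative to a reference sensor, its Jacobian row is a difference of unit line-of-sight vectors, and the FIM follows from stacking those rows. This is a minimal sketch under assumed i.i.d. Gaussian noise; the function names and the D-optimality (determinant) score are illustrative choices, not necessarily the paper's exact evaluation function.

```python
import numpy as np

def tdoa_fim(sensors, target, sigma=1.0):
    """Fisher information matrix for a 3D target position estimated from TDOA
    (range-difference) measurements. sensors[0] is the reference; measurement i
    is ||s_i - p|| - ||s_0 - p|| with i.i.d. Gaussian noise of std sigma."""
    p = np.asarray(target, dtype=float)
    s = np.asarray(sensors, dtype=float)
    u = (p - s) / np.linalg.norm(p - s, axis=1, keepdims=True)  # unit vectors sensor -> target
    J = u[1:] - u[0]  # Jacobian of the range differences w.r.t. the target position
    return (J.T @ J) / sigma ** 2

def formation_score(sensors, target):
    """Evaluation function: D-optimality, the determinant of the FIM.
    Larger is better; zero means the formation is unobservable."""
    return np.linalg.det(tdoa_fim(sensors, target))
```

A spread-out 3D formation yields a well-conditioned FIM, while collinear sensors give a singular one, which is why the optimal-formation search in the paper matters: an iterative stepping algorithm would move each sensor to increase this score until convergence.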


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 2068 ◽  
Author(s):  
César Debeunne ◽  
Damien Vivet

Autonomous navigation requires both a precise and robust mapping and localization solution. In this context, Simultaneous Localization and Mapping (SLAM) is a well-suited solution. SLAM is used in many applications, including mobile robotics, self-driving cars, unmanned aerial vehicles, and autonomous underwater vehicles. In these domains, both visual and visual-inertial (visual-IMU) SLAM are well studied, and improvements are regularly proposed in the literature. However, LiDAR-SLAM techniques have changed relatively little over the past ten or twenty years. Moreover, few research works focus on vision-LiDAR approaches, even though such a fusion would have many advantages. Indeed, hybridized solutions improve SLAM performance, especially with respect to aggressive motion, lack of light, or lack of visual features. This study provides a comprehensive survey of visual-LiDAR SLAM. After a summary of the basic idea of SLAM and its implementation, we give a complete review of the state of the art of SLAM research, focusing on solutions using vision, LiDAR, and a fusion of both modalities.


2019 ◽  
Vol 38 (14) ◽  
pp. 1549-1559 ◽  
Author(s):  
Maxime Ferrera ◽  
Vincent Creuze ◽  
Julien Moras ◽  
Pauline Trouvé-Peloux

We present a new dataset dedicated to the development of simultaneous localization and mapping methods for underwater vehicles navigating close to the seabed. The data sequences composing this dataset were recorded in three different environments: a harbor at a depth of a few meters, a first archaeological site at a depth of 270 meters, and a second site at a depth of 380 meters. The data acquisition was performed using remotely operated vehicles equipped with a monocular monochromatic camera, a low-cost inertial measurement unit, a pressure sensor, and a computing unit, all embedded in a single enclosure. The sensors' measurements are recorded synchronously on the computing unit, and 17 sequences have been created from the acquired data. These sequences are made available in the form of ROS bags and as raw data. For each sequence, a trajectory has also been computed offline using a structure-from-motion library, to allow comparison with real-time localization methods. With the release of this dataset, we wish to provide data that are difficult to acquire and to encourage the development of vision-based localization methods dedicated to the underwater environment. The dataset can be downloaded from: http://www.lirmm.fr/aqualoc/


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4545
Author(s):  
Ning Zhang ◽  
Yongjia Zhao

When the camera moves quickly and the image is blurred, or when texture in the scene is missing, a Simultaneous Localization and Mapping (SLAM) algorithm based on point features has difficulty tracking enough effective feature points; its positioning accuracy and robustness suffer, and it may even fail to work at all. To address this problem, we propose a monocular visual odometry algorithm based on point and line features that incorporates IMU measurement data. On this basis, an environmental feature map with geometric information is constructed, and the IMU measurements provide prior and scale information for the visual localization algorithm. An initial pose estimate is obtained from motion estimation via sparse image alignment, and feature alignment is then performed to obtain sub-pixel feature correspondences. Finally, more accurate poses and 3D landmarks are obtained by minimizing the reprojection errors of local map points and lines. Experimental results on the EuRoC public datasets show that the proposed algorithm outperforms the Open Keyframe-based Visual-Inertial SLAM (OKVIS-mono) algorithm and the Oriented FAST and Rotated BRIEF SLAM (ORB-SLAM) algorithm, demonstrating its accuracy and speed.
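The sparse image alignment step mentioned above can be sketched as a photometric residual: intensities at sparse reference pixels are compared against the current image at locations predicted by a candidate pose. This is a minimal sketch, assuming a generic `warp` callable standing in for the projection under the candidate pose; the function name and nearest-neighbor sampling are simplifications, not the paper's implementation.

```python
import numpy as np

def photometric_residuals(img_ref, img_cur, pixels, warp):
    """Sparse image alignment residuals: for each sparse reference pixel (u, v),
    compare the reference intensity with the current-image intensity at the
    warped location warp(u, v). Minimizing these residuals over the candidate
    pose yields the initial pose estimate (nearest-neighbor lookup for brevity;
    a real system would interpolate and use small patches)."""
    res = []
    for (u, v) in pixels:
        u2, v2 = warp(u, v)
        res.append(float(img_cur[int(round(v2)), int(round(u2))]) -
                   float(img_ref[int(round(v)), int(round(u))]))
    return np.array(res)
```

With the correct pose, the warp maps each pixel onto the same scene point and the residuals vanish; a wrong pose leaves nonzero residuals whose gradient drives the alignment.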

