Panoramic Visual SLAM Technology for Spherical Images

Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 705
Author(s):  
Yi Zhang ◽  
Fei Huang

Simultaneous Localization and Mapping (SLAM) technology is one of the best methods for fast 3D reconstruction and mapping. However, the accuracy of SLAM is not always high enough, which is currently the subject of much research interest. Panoramic vision provides a wide field of view, abundant feature points, and rich scene information. The multi-view cross-imaging property of panoramic cameras can be used to acquire omnidirectional spatial information instantaneously and to improve the positioning accuracy of SLAM. In this study, we investigated panoramic visual SLAM positioning technology, including three core research points: (1) the spherical imaging model; (2) spherical image feature extraction and matching methods, including the Spherical Oriented FAST and Rotated BRIEF (SPHORB) and ternary scale-invariant feature transform (SIFT) algorithms; and (3) the panoramic visual SLAM algorithm. The experimental results show that panoramic visual SLAM can improve the robustness and accuracy of a SLAM system.
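The spherical imaging model mentioned above maps each pixel of a panoramic image to a viewing ray on the unit sphere. As a minimal illustration (not the authors' code), the following sketch assumes an equirectangular panorama, where the horizontal pixel coordinate maps to longitude and the vertical coordinate to latitude:

```python
import math

def pixel_to_sphere(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit viewing ray.
    Longitude spans [-pi, pi) across the width; latitude spans
    [pi/2, -pi/2] from the top row to the bottom row."""
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    return (x, y, z)
```

Every pixel thus yields a unit-length bearing vector, which is what makes omnidirectional feature matching and triangulation possible on the sphere.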

Author(s):  
A. Masiero ◽  
H. Perakis ◽  
J. Gabela ◽  
C. Toth ◽  
V. Gikas ◽  
...  

Abstract. The increasing demand for reliable indoor navigation systems is leading the research community to investigate various approaches to obtain effective solutions usable with mobile devices. Among recently proposed strategies, Ultra-Wide Band (UWB) positioning systems are worth mentioning because of their good performance over a wide range of operating conditions. However, such performance can be significantly degraded by large UWB range errors, mostly due to non-line-of-sight (NLOS) measurements. This paper considers the integration of UWB with vision to support navigation and mapping applications. In particular, this work compares positioning results obtained with a simultaneous localization and mapping (SLAM) algorithm, exploiting a standard and a Time-of-Flight (ToF) camera, with those obtained with UWB, and then with the integration of UWB and vision. For the latter, a deep learning-based recognition approach was developed to detect UWB devices in camera frames. This information is both introduced into the navigation algorithm and used to detect NLOS UWB measurements. Integrating this information allowed a 20% positioning error reduction in this case study.
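The NLOS-aware fusion described above can be illustrated, purely as a sketch, by a weighted least-squares trilateration in which ranges flagged as NLOS (e.g. by the camera-based detector) are down-weighted rather than trusted equally. The function name and the 0.1 weight are illustrative assumptions, not the paper's implementation:

```python
import math

def trilaterate(anchors, ranges, nlos_flags, iters=20):
    """Weighted 2-D trilateration via Gauss-Newton. Ranges flagged as
    NLOS receive a low weight instead of being trusted equally."""
    x = sum(a for a, _ in anchors) / len(anchors)  # start at anchor centroid
    y = sum(b for _, b in anchors) / len(anchors)
    weights = [0.1 if flagged else 1.0 for flagged in nlos_flags]
    for _ in range(iters):
        # accumulate the normal equations (J^T W J) d = J^T W r
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (ax, ay), rng, w in zip(anchors, ranges, weights):
            d = math.hypot(x - ax, y - ay) or 1e-9
            jx, jy = (x - ax) / d, (y - ay) / d
            res = rng - d  # measured minus predicted range
            a11 += w * jx * jx; a12 += w * jx * jy; a22 += w * jy * jy
            b1 += w * jx * res; b2 += w * jy * res
        det = a11 * a22 - a12 * a12
        if abs(det) < 1e-12:
            break
        dx = (a22 * b1 - a12 * b2) / det
        dy = (a11 * b2 - a12 * b1) / det
        x, y = x + dx, y + dy
        if math.hypot(dx, dy) < 1e-12:
            break
    return x, y
```

With one range inflated by an NLOS bias, the down-weighted solution stays closer to the true position than a solution that treats all ranges equally, which is the intuition behind the reported 20% error reduction.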


Author(s):  
Tianmiao Wang ◽  
Chaolei Wang ◽  
Jianhong Liang ◽  
Yicheng Zhang

Purpose – The purpose of this paper is to present a Rao–Blackwellized particle filter (RBPF) approach for the visual simultaneous localization and mapping (SLAM) of small unmanned aerial vehicles (UAVs). Design/methodology/approach – Measurements from an inertial measurement unit, a barometric altimeter and a monocular camera are fused to estimate the state of the vehicle while building a feature map. In this SLAM framework, an additional factorization method is proposed to partition the vehicle model into internal and external state subspaces. The internal state is estimated by an extended Kalman filter (EKF); a particle filter is employed for the external state estimation, and parallel EKFs are used for map management. Findings – Simulation results indicate that the proposed approach is more stable and accurate than existing marginalized particle filter-based SLAM algorithms. Experiments were also carried out to verify the effectiveness of this SLAM method by comparison with a reference global positioning system/inertial navigation system. Originality/value – The main contribution of this paper is the theoretical derivation and experimental application of the Rao–Blackwellized visual SLAM algorithm with vehicle model partition for small UAVs.
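The Rao-Blackwellized structure, particles for one part of the state with analytic Kalman estimation of the marginalized part, can be sketched in one dimension. This toy example (an illustration under simplifying assumptions, not the paper's algorithm) samples the robot's 1-D position with particles while each particle runs a Kalman filter over a landmark position, observed as a relative offset:

```python
import random, math

def rbpf_1d(controls, observations, n_particles=200, q=0.1, r=0.2):
    """Minimal Rao-Blackwellized filter sketch. Particles carry the robot's
    1-D position (the sampled, 'external' part); the landmark position is
    estimated analytically by a per-particle Kalman filter (the marginalized,
    'internal' part). Observation model: z = landmark - robot + noise."""
    parts = [{'x': 0.0, 'm': 0.0, 'P': 100.0, 'w': 1.0} for _ in range(n_particles)]
    for u, z in zip(controls, observations):
        for p in parts:
            p['x'] += u + random.gauss(0.0, q)        # sample the motion model
            S = p['P'] + r * r                        # innovation variance
            K = p['P'] / S                            # Kalman gain
            innov = z - (p['m'] - p['x'])
            p['m'] += K * innov                       # update landmark mean
            p['P'] *= (1.0 - K)                       # update landmark variance
            # importance weight: likelihood of the innovation
            p['w'] *= math.exp(-0.5 * innov * innov / S) / math.sqrt(2 * math.pi * S)
        tot = sum(p['w'] for p in parts) or 1e-300
        for p in parts:
            p['w'] /= tot
        # simple multinomial resampling (systematic resampling would be better)
        parts = [dict(random.choices(parts, [p['w'] for p in parts])[0])
                 for _ in range(n_particles)]
        for p in parts:
            p['w'] = 1.0
    x = sum(p['x'] for p in parts) / n_particles
    m = sum(p['m'] for p in parts) / n_particles
    return x, m
```

The key point is that only the low-dimensional sampled state needs particles; everything conditionally linear-Gaussian is handled by cheap per-particle EKFs, exactly the partition the paper exploits.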


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4922
Author(s):  
Like Cao ◽  
Jie Ling ◽  
Xiaohui Xiao

Noise appears in images captured by real cameras. This paper studies the influence of noise on monocular feature-based visual Simultaneous Localization and Mapping (SLAM). First, an open-source synthetic dataset with different noise levels is introduced. Then the images in the dataset are denoised using the Fast and Flexible Denoising convolutional neural Network (FFDNet), and the matching performance of Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF) and Oriented FAST and Rotated BRIEF (ORB), which are commonly used in feature-based SLAM, is analyzed. The results show that ORB has a higher correct matching rate than SIFT and SURF, and that denoised images have a higher correct matching rate than noisy ones. Next, the Absolute Trajectory Error (ATE) of the noisy and denoised sequences is evaluated on ORB-SLAM2; the denoised sequences perform better than the noisy sequences at every noise level. Finally, the completely clean sequence in the dataset and the sequences in the KITTI dataset are denoised and compared with the original sequences through comprehensive experiments. For the clean sequence, the Root-Mean-Square Error (RMSE) of the ATE after denoising decreased by 16.75%; for the KITTI sequences, 7 out of 10 sequences have a lower RMSE than the original sequences. These results show that denoised images can achieve higher accuracy in monocular feature-based visual SLAM under certain conditions.
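The ATE metric used above is straightforward once the estimated trajectory has been aligned and time-associated with the ground truth; a minimal sketch (illustrative, not the paper's evaluation code):

```python
import math

def ate_rmse(gt, est):
    """Root-mean-square Absolute Trajectory Error between ground-truth
    and estimated 3-D positions, assumed already aligned and paired."""
    assert len(gt) == len(est) and len(gt) > 0
    se = sum((gx - ex) ** 2 + (gy - ey) ** 2 + (gz - ez) ** 2
             for (gx, gy, gz), (ex, ey, ez) in zip(gt, est))
    return math.sqrt(se / len(gt))
```

A 16.75% reduction in this quantity, as reported for the clean sequence, means the average positional deviation of the whole trajectory shrank by that fraction.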


2018 ◽  
Vol 2 (3) ◽  
pp. 151 ◽  
Author(s):  
Fethi Denim ◽  
Abdelkrim Nemra ◽  
Kahina Louadj ◽  
Abdelghani Boucheloukh ◽  
Mustapha Hamerlain ◽  
...  

Simultaneous localization and mapping (SLAM) is an essential capability for Unmanned Ground Vehicles (UGVs) travelling in unknown environments where globally accurate position data such as GPS are not available, and it is an important topic in autonomous mobile robot research. This paper presents an Adaptive Decentralized Cooperative Vision-based SLAM solution for multiple UGVs, using the Adaptive Covariance Intersection (ACI) supported by a stereo vision sensor. In recent years, the SLAM problem has received particular attention; the most commonly used approaches are the EKF-SLAM and FastSLAM algorithms. The former, which requires accurate process and observation models, suffers from linearization problems; the latter is not suitable for real-time implementation. In our work, a solution to the Visual SLAM (VSLAM) problem based on the Smooth Variable Structure Filter (SVSF) is proposed. This filter is robust and stable under modelling uncertainties, making it suitable for the UGV localization and mapping problem. The new strategy retains the near-optimal performance of the SVSF when applied to an uncertain system, with the added benefit of a considerable improvement in the robustness of the estimation process. Each UGV contributes feature data, fused by the ACI, to estimate positions on the global map. The result is a large, reliable map constructed jointly by the group of UGVs. This paper presents a Cooperative SVSF-VSLAM algorithm that contributes to solving the Adaptive Cooperative Vision SLAM problem for multiple UGVs. The algorithm was implemented on three Pioneer 3-AT mobile robots using stereo vision sensors. Simulation results show the efficiency of the proposed algorithm and its advantage over the Cooperative EKF-VSLAM algorithm, mainly with respect to noise quality.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
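The Covariance Intersection rule at the core of the ACI can be shown in its scalar form. This sketch is illustrative only (the paper's ACI is adaptive and operates on full covariance matrices): it fuses two estimates whose cross-correlation is unknown by choosing the weight omega that minimizes the fused variance, here with a simple grid search although closed-form choices exist:

```python
def covariance_intersection(x1, P1, x2, P2, steps=100):
    """Scalar Covariance Intersection: fuse estimates (x1, P1) and (x2, P2)
    with unknown cross-correlation. Returns (fused state, fused variance,
    chosen omega). The fused variance is consistent for any correlation."""
    best = None
    for i in range(1, steps):
        w = i / steps
        P = 1.0 / (w / P1 + (1.0 - w) / P2)   # fused variance for this omega
        if best is None or P < best[1]:
            x = P * (w * x1 / P1 + (1.0 - w) * x2 / P2)
            best = (x, P, w)
    return best
```

In the scalar case the optimal omega sits at an endpoint (CI effectively keeps the lower-variance estimate); in the matrix case used for cooperative mapping the trade-off is genuinely interior, which is what makes the adaptive weighting worthwhile.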


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Tianji Ma ◽  
Nanyang Bai ◽  
Wentao Shi ◽  
Xi Wu ◽  
Lutao Wang ◽  
...  

In the automatic navigation robot field, robotic autonomous positioning is one of the most difficult challenges. Simultaneous localization and mapping (SLAM) technology can incrementally construct a map of the robot's path through an unknown environment while estimating the robot's position within that map, providing an effective solution for fully autonomous navigation. A camera captures two-dimensional digital images of the real three-dimensional world. These images contain very rich colour and texture information and highly recognizable features, which provide indispensable information for robots to understand and recognize the environment while autonomously exploring it. Therefore, more and more researchers use cameras to solve SLAM problems, an approach known as visual SLAM. Visual SLAM must process large amounts of image data collected by the camera, which places high demands on computing hardware and thus greatly limits its application on embedded mobile platforms. This paper presents a parallelization method for embedded hardware equipped with an embedded GPU, using CUDA, a parallel computing platform, to accelerate the visual front-end processing of the visual SLAM algorithm. Extensive experiments verify the effectiveness of the method. The results show that the presented method effectively improves the operating efficiency of the visual SLAM algorithm while preserving its original accuracy.
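The data-parallel pattern that such a CUDA front-end exploits, independent image tiles processed concurrently, can be mimicked on the CPU. In the following sketch a thread pool stands in for GPU thread blocks and a toy local-maximum counter stands in for real feature extraction; all names and the detector itself are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def detect_tile(tile):
    """Stand-in for per-tile feature extraction (e.g. FAST corner tests):
    counts strict local maxima in a 1-D list of intensities."""
    return sum(1 for i in range(1, len(tile) - 1)
               if tile[i - 1] < tile[i] > tile[i + 1])

def parallel_frontend(image_tiles, n_workers=4):
    """Split the image into independent tiles and process them concurrently,
    then reduce the per-tile results. On a GPU each tile would map to a
    thread block; here a thread pool plays that role."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(detect_tile, image_tiles))
```

Because the tiles share no state, the same decomposition transfers directly to CUDA kernels, which is why the front end (rather than the sequential back-end optimization) is the natural target for GPU acceleration.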


2011 ◽  
Vol 23 (2) ◽  
pp. 292-301 ◽  
Author(s):  
Taro Suzuki ◽  
Yoshiharu Amano ◽  
Takumi Hashizume ◽  
Shinji Suzuki ◽  
...  

This paper describes a Simultaneous Localization And Mapping (SLAM) algorithm using a monocular camera for a small Unmanned Aerial Vehicle (UAV). Small UAVs have attracted attention as an effective means of collecting aerial information, yet few practical applications exist because their small payload limits 3D measurement. We propose an extended Kalman filter SLAM to estimate UAV position and attitude and to construct 3D terrain maps using a small monocular camera. The 3D measurement is based on triangulating Scale-Invariant Feature Transform (SIFT) features extracted from the captured images. Field-experiment results show that our proposal effectively estimates the position and attitude of the UAV and constructs the 3D terrain map.
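Triangulating a feature observed from two camera poses amounts to finding the point nearest both viewing rays. A minimal midpoint-method sketch (an illustration of the geometric step, not the authors' implementation):

```python
def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation of two viewing rays, each given as an
    origin and a direction. Returns the 3-D point halfway between the
    closest points of the two rays."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w = tuple(p - q for p, q in zip(o1, o2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; feature cannot be triangulated")
    s = (b * e - c * d) / denom   # parameter along ray 1
    t = (a * e - b * d) / denom   # parameter along ray 2
    p1 = tuple(o + s * v for o, v in zip(o1, d1))
    p2 = tuple(o + t * v for o, v in zip(o2, d2))
    return tuple((x + y) / 2.0 for x, y in zip(p1, p2))
```

In practice the ray directions come from the camera intrinsics and the EKF's pose estimates, so triangulation accuracy is tied to how well the filter tracks position and attitude.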


Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3604 ◽  
Author(s):  
Peixin Liu ◽  
Xianfeng Yuan ◽  
Chengjin Zhang ◽  
Yong Song ◽  
Chuanzheng Liu ◽  
...  

To solve the illumination sensitivity problems of mobile ground equipment, an enhanced visual SLAM algorithm based on the sparse direct method was proposed in this paper. Firstly, the vignette and response functions of the input sequences were optimized based on the photometric formation model of the camera. Secondly, the Shi–Tomasi corners of the input sequence were tracked, and optimization equations were established using the pixel tracking of sparse direct visual odometry (VO). Thirdly, the Levenberg–Marquardt (L–M) method was applied to solve the joint optimization equation, and the photometric calibration parameters in the VO were updated to realize real-time dynamic compensation of the exposure of the input sequences, which reduced the effects of light variations on the accuracy and robustness of SLAM (simultaneous localization and mapping). Finally, a Shi–Tomasi corner filtering strategy was designed to reduce the computational complexity of the proposed algorithm, and loop closure detection was realized based on the oriented FAST and rotated BRIEF (ORB) features. The proposed algorithm was tested on TUM, KITTI, EuRoC, and a real-world environment, and the experimental results show that the positioning and mapping performance of the proposed algorithm is promising.
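The photometric formation model underlying such calibration relates recorded intensity to scene radiance through the response function, the vignette, and the exposure. A toy inversion assuming a hypothetical gamma-curve response (all parameters here are illustrative, not the calibrated quantities from the paper):

```python
def photometric_correct(intensity, exposure, vignette, gamma=2.2):
    """Invert a simple photometric model
        I = (exposure * vignette * B) ** (1 / gamma)
    to recover the scene radiance B from a recorded intensity I.
    Direct VO needs B to be comparable across frames with different
    exposures, which is what this compensation provides."""
    return (intensity ** gamma) / (exposure * vignette)
```

After this correction, the same scene point yields the same radiance value in frames taken at different exposures, so the direct method's brightness-constancy assumption holds again.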


2018 ◽  
Vol 147 ◽  
pp. 07002 ◽  
Author(s):  
Fu-Hsuan Yeh ◽  
Chun-Jia Huang ◽  
Jen-Yu Han ◽  
Louis Ge

Nowadays, a wide range of site planning, field investigation and slope analysis needs to be carried out for slope protection and landslide-related disaster reduction. To enhance the efficiency of topography modeling, the unmanned aerial vehicle (UAV) has become a new surveying technique for obtaining spatial information. This study aims to determine the topography of a slope by using a digital camera mounted on a UAV to photograph it with a high degree of overlap. The 3D point cloud data were generated through image feature point extraction integrated with accurate GPS ground control points. It is found in this study that the obtained Digital Surface Model (DSM) data perform far better than traditional field surveying techniques. The resolution of the DSM reached 1.58 cm, and the error in elevation and distance between the DSM and actual 3D coordinates obtained by a traditional total station survey is acceptable. It is clear that such a UAV surveying technique can replace conventional surveying methods and provide complete and accurate 3D topography information in an automatic and highly efficient manner for most geotechnical applications.
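The achievable DSM resolution is governed by the ground sample distance of the imagery. A quick sketch of the standard pinhole relation (the camera parameters below are illustrative, not those of the study):

```python
def ground_sample_distance(pixel_size_m, focal_length_m, altitude_m):
    """Ground sample distance (metres per pixel) for a nadir-pointing
    camera: GSD = pixel_size * altitude / focal_length."""
    return pixel_size_m * altitude_m / focal_length_m
```

For example, a hypothetical 4 µm pixel behind a 20 mm lens flown at 80 m gives a GSD of 1.6 cm, the same order as the 1.58 cm DSM resolution reported above.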


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Nick Le Large ◽  
Frank Bieder ◽  
Martin Lauer

Abstract For the application of an automated, driverless race car, we aim to assure high map and localization quality for successful driving on previously unknown, narrow race tracks. To achieve this goal, it is essential to choose an algorithm that fulfills the requirements in terms of accuracy, computational resources and run time. We propose both a filter-based and a smoothing-based Simultaneous Localization and Mapping (SLAM) algorithm and evaluate them using real-world data collected by a Formula Student Driverless race car. The accuracy is measured by comparing the SLAM-generated map to a ground-truth map acquired with high-precision Differential GPS (DGPS) measurements. The results of the evaluation show that both algorithms meet the required time constraints thanks to a parallelized architecture, with GraphSLAM consuming computational resources much faster than Extended Kalman Filter (EKF) SLAM. However, the analysis of the maps generated by the algorithms shows that GraphSLAM outperforms EKF SLAM in terms of accuracy.
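One plausible way to score a SLAM-generated map against the DGPS ground truth (the paper's exact metric may differ) is the RMS nearest-neighbour distance from each mapped landmark, e.g. a track cone, to the ground-truth map:

```python
import math

def map_rmse(gt_landmarks, slam_landmarks):
    """RMS nearest-neighbour distance from each SLAM-mapped 2-D landmark
    to the ground-truth landmark set; lower is a more accurate map."""
    errs = [min(math.hypot(sx - gx, sy - gy) for gx, gy in gt_landmarks)
            for sx, sy in slam_landmarks]
    return math.sqrt(sum(e * e for e in errs) / len(errs))
```

A nearest-neighbour score like this ignores missed and spurious landmarks, so in practice it would be paired with detection/false-positive counts when comparing GraphSLAM and EKF SLAM maps.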

