SOFT-SLAM: Computationally efficient stereo visual simultaneous localization and mapping for autonomous unmanned aerial vehicles

2017 ◽  
Vol 35 (4) ◽  
pp. 578-595 ◽  
Author(s):  
Igor Cvišić ◽  
Josip Ćesić ◽  
Ivan Marković ◽  
Ivan Petrović

Sensors ◽  
2017 ◽  
Vol 17 (4) ◽  
pp. 802 ◽  
Author(s):  
Elena López ◽  
Sergio García ◽  
Rafael Barea ◽  
Luis Bergasa ◽  
Eduardo Molinos ◽  
...  

2017 ◽  
Vol 34 (4) ◽  
pp. 1217-1239 ◽  
Author(s):  
Chen-Chien Hsu ◽  
Cheng-Kai Yang ◽  
Yi-Hsing Chien ◽  
Yin-Tien Wang ◽  
Wei-Yen Wang ◽  
...  

Purpose: FastSLAM is a popular method for solving the simultaneous localization and mapping (SLAM) problem. However, as the number of landmarks in real environments increases, every measurement must be compared with all existing landmarks in each particle. As a result, execution becomes too slow for real-time navigation. This paper therefore aims to improve the computational efficiency and estimation accuracy of conventional SLAM algorithms.

Design/methodology/approach: To address this problem, the paper presents a computationally efficient SLAM (CESLAM) algorithm in which odometer information is used to update the robot's pose in each particle. When a measurement has maximum likelihood with a known landmark in the particle, the particle state is updated before the landmark estimates.

Findings: Simulation results show that the proposed CESLAM overcomes the heavy computational burden while improving the accuracy of localization and map building. To evaluate the method in practice, a Pioneer 3-DX robot with a Kinect sensor is used to develop an RGB-D-based computationally efficient visual SLAM (CEVSLAM) system built on Speeded-Up Robust Features (SURF). Experimental results confirm that the proposed CEVSLAM system successfully estimates the robot pose and builds the map with satisfactory accuracy.

Originality/value: The proposed CESLAM algorithm eliminates the time-consuming, unnecessary comparisons of existing FastSLAM algorithms. Simulations show that the accuracy of robot pose and landmark estimation is greatly improved by CESLAM. Combining CESLAM and SURF, the authors establish a CEVSLAM system that significantly improves estimation accuracy and computational efficiency. Practical experiments with a Kinect visual sensor show that the variance and average error of the proposed CEVSLAM are smaller than those of other visual SLAM algorithms.
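The CESLAM update order described above (pose correction before landmark update, after a maximum-likelihood association) can be illustrated with a minimal 1-D sketch. The range-only measurement model, the gains `k_pose` and `k_lm`, and all function names are illustrative assumptions, not the authors' implementation:

```python
import math


def ml_association(z, pose, landmarks, sigma=0.5):
    """Index and likelihood of the landmark whose predicted range (m - pose)
    best explains measurement z, under an assumed Gaussian likelihood."""
    liks = [math.exp(-0.5 * ((z - (m - pose)) / sigma) ** 2) for m in landmarks]
    j = max(range(len(liks)), key=liks.__getitem__)
    return j, liks[j]


def ceslam_step(pose, landmarks, odom, z, k_pose=0.5, k_lm=0.5):
    """One CESLAM-style particle update (1-D toy model):
    1. propagate the pose with odometry,
    2. find the maximum-likelihood landmark for measurement z,
    3. correct the particle pose toward consistency with z *before*
    4. updating the matched landmark estimate (simple Kalman-style blend)."""
    pose = pose + odom                           # odometry propagation
    j, _ = ml_association(z, pose, landmarks)    # ML data association
    innovation = z - (landmarks[j] - pose)       # measurement residual
    pose = pose - k_pose * innovation            # pose refined first (CESLAM)
    landmarks[j] += k_lm * (z - (landmarks[j] - pose))
    return pose, landmarks
```

The point of the sketch is the ordering: because the pose is corrected with the associated measurement before the landmark is re-estimated, the landmark update sees a less noisy pose, which is the intuition behind CESLAM's accuracy gain.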


2015 ◽  
Vol 40 (5) ◽  
pp. 881-902 ◽  
Author(s):  
Pedro Lourenço ◽  
Bruno J. Guerreiro ◽  
Pedro Batista ◽  
Paulo Oliveira ◽  
Carlos Silvestre

Proceedings ◽  
2018 ◽  
Vol 4 (1) ◽  
pp. 44 ◽  
Author(s):  
Ankit Ravankar ◽  
Abhijeet Ravankar ◽  
Yukinori Kobayashi ◽  
Takanori Emaru

Mapping and exploration are important tasks for mobile robots in applications such as search and rescue, inspection, and surveillance. Unmanned aerial vehicles (UAVs) are well suited for such tasks because they have a larger field of view than ground robots. Autonomous operation of UAVs is desirable for exploration in unknown environments. In such environments, the UAV must build a map of the environment and simultaneously localize itself within it, a task commonly known as the SLAM (simultaneous localization and mapping) problem. This is also required to navigate safely through open spaces and to make informed decisions about exploration targets. UAVs have physical constraints, including limited payload, and are generally equipped with low-spec embedded computational devices and sensors. It is therefore often challenging to achieve robust SLAM on UAVs, which in turn affects exploration. In this paper, we present autonomous exploration of UAVs in completely unknown environments using low-cost sensors such as a LIDAR and an RGB-D camera. A sensor fusion method is proposed to build a dense 3D map of the environment. Multiple images from the scene are geometrically aligned as the UAV explores the environment, and a frontier exploration technique is then used to select the next target in the mapped area so as to explore the maximum possible area. The results show that the proposed algorithm can build precise maps even with low-cost sensors, and can explore the environment efficiently.
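The frontier exploration step mentioned above can be sketched on an occupancy grid: frontier cells are free cells adjacent to unknown space, and the next target is chosen among them. The cell-value convention (0 = free, 1 = occupied, -1 = unknown) and the nearest-frontier selection rule are illustrative assumptions; the paper's actual scoring may differ:

```python
def find_frontiers(grid):
    """Frontier cells: free cells (0) with at least one unknown (-1)
    4-neighbour. Grid convention assumed: 0 free, 1 occupied, -1 unknown."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers


def next_target(frontiers, robot):
    """Pick the nearest frontier (squared Euclidean distance) as the
    next exploration goal; a real planner would also weigh information gain."""
    return min(frontiers,
               key=lambda f: (f[0] - robot[0]) ** 2 + (f[1] - robot[1]) ** 2)
```

Repeatedly flying to the nearest frontier and re-mapping shrinks the unknown region until no frontiers remain, which is the stopping condition for exploration.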


Drones ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 121 ◽  
Author(s):  
Buğra ŞİMŞEK ◽  
Hasan Şakir BİLGE

Localization and mapping technologies are of great importance for all varieties of Unmanned Aerial Vehicles (UAVs) to perform their operations. In the near future, the use of micro/nano-size UAVs is expected to increase. Such vehicles are sometimes expendable platforms, and reuse may not be possible. Compact, body-mounted, low-cost cameras are preferred on these UAVs due to weight, cost and size limitations. Visual simultaneous localization and mapping (vSLAM) methods are used to provide situational awareness for micro/nano-size UAVs. Fast rotational movements during flight with gimbal-free, body-mounted cameras cause motion blur. Above a certain level of motion blur, tracking losses occur and vSLAM algorithms fail to operate effectively. In this study, a novel vSLAM framework is proposed that prevents tracking losses in micro/nano-UAVs due to motion blur. In the proposed framework, the blur level of the frames obtained from the platform camera is determined, and frames whose focus measure score falls below a threshold are restored by dedicated motion-deblurring methods. The major causes of tracking losses have been analyzed in experimental studies, and the proposed framework makes vSLAM algorithms robust against them. It has been observed that the framework can prevent tracking losses at processing speeds of 5, 10 and 20 fps. vSLAM algorithms continue normal operation at processing speeds at which standard vSLAM algorithms previously failed, which demonstrates the advantage of this study.
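The blur-gating idea described above can be sketched with a common focus measure, the variance of the Laplacian response: sharp frames score high, blurred frames low, and frames below a threshold would be routed to a deblurring step. The paper does not specify its exact focus measure or threshold, so both the measure and the value 100.0 here are illustrative assumptions:

```python
def laplacian_variance(img):
    """Focus-measure score: variance of a 4-neighbour Laplacian response
    over the interior pixels of a grayscale image (list of lists of ints)."""
    rows, cols = len(img), len(img[0])
    responses = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            lap = (img[r - 1][c] + img[r + 1][c]
                   + img[r][c - 1] + img[r][c + 1]
                   - 4 * img[r][c])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((x - mean) ** 2 for x in responses) / len(responses)


def frame_is_sharp(img, threshold=100.0):
    """True if the frame is sharp enough to feed vSLAM directly; otherwise
    it should first be restored by a motion-deblurring method (threshold
    is an illustrative value, not the paper's)."""
    return laplacian_variance(img) >= threshold
```

Gating on a cheap focus score before running the expensive deblurring step is what keeps the pipeline viable at the low frame rates (5-20 fps) reported in the study.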

