Online quadrotor trajectory generation and autonomous navigation on point clouds

Author(s):  
Fei Gao ◽  
Shaojie Shen


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 230
Author(s):  
Xiangwei Dang ◽  
Zheng Rong ◽  
Xingdong Liang

Accurate localization and reliable mapping are essential for the autonomous navigation of robots. As one of the core technologies for autonomous navigation, Simultaneous Localization and Mapping (SLAM) has attracted widespread attention in recent decades. Based on vision or LiDAR sensors, great efforts have been devoted to achieving real-time SLAM that can support a robot’s state estimation. However, most mature SLAM methods work under the assumption that the environment is static, and in dynamic environments they degrade or even fail. In this paper, we first quantitatively evaluate the performance of state-of-the-art LiDAR-based SLAM systems under different patterns of moving objects in the environment. Through semi-physical simulation, we observed that the shape, size, and distribution of moving objects can all significantly impact SLAM performance, and we obtained instructive results from a quantitative comparison between LOAM and LeGO-LOAM. Secondly, based on this investigation, we propose EMO, a novel approach that eliminates moving objects for SLAM by fusing LiDAR and mmW-radar, aiming to improve the accuracy and robustness of state estimation. The method exploits the complementary characteristics of the two sensors to fuse information at two different resolutions: moving objects are efficiently detected by radar via the Doppler effect, accurately segmented and localized by LiDAR, and then filtered out of the point clouds through data association after accurate synchronization in time and space. Finally, the point clouds representing the static environment are used as the input to SLAM. The proposed approach is evaluated through experiments on both semi-physical simulation and real-world datasets. The results demonstrate the effectiveness of the method at improving SLAM accuracy (at least a 30% decrease in absolute position error) and robustness in dynamic environments.
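The core filtering step the abstract describes (drop LiDAR points spatially associated with radar returns whose Doppler speed indicates motion) can be sketched as follows. This is a minimal illustration, not the paper's EMO implementation: the function name, the distance-based association, and both thresholds are assumptions for the example.

```python
import numpy as np

def remove_moving_points(lidar_pts, radar_dets, doppler_thresh=0.5, radius=1.0):
    """Drop LiDAR points near radar detections whose Doppler speed
    exceeds a threshold (i.e. likely moving objects).

    lidar_pts:  (N, 3) array of x, y, z points.
    radar_dets: (M, 4) array of x, y, z, radial_speed (m/s).
    """
    moving = radar_dets[np.abs(radar_dets[:, 3]) > doppler_thresh]
    if moving.size == 0:
        return lidar_pts
    # Distance from every LiDAR point to every moving radar detection.
    d = np.linalg.norm(lidar_pts[:, None, :] - moving[None, :, :3], axis=2)
    keep = (d > radius).all(axis=1)
    return lidar_pts[keep]
```

The surviving points, now representing only the static environment, would then be passed to the SLAM front end.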


2020 ◽  
Vol 10 (3) ◽  
pp. 1140 ◽  
Author(s):  
Jorge L. Martínez ◽  
Mariano Morán ◽  
Jesús Morales ◽  
Alfredo Robles ◽  
Manuel Sánchez

Autonomous navigation of ground vehicles in natural environments requires continuously looking for traversable terrain. This paper develops traversability classifiers for the three-dimensional (3D) point clouds acquired by the mobile robot Andabata on non-slippery solid ground. To this end, different supervised learning techniques from the Python library Scikit-learn are employed. Training and validation are performed with synthetic 3D laser scans that were labelled point by point automatically with the robotic simulator Gazebo. Good prediction results are obtained for most of the developed classifiers, which have also been tested successfully on real 3D laser scans acquired by Andabata in motion.
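The supervised pipeline described above maps per-point features to traversable/non-traversable labels. A minimal Scikit-learn sketch, using synthetic random features and a toy labelling rule in place of the paper's Gazebo-labelled scans and actual feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic per-point features (stand-ins for e.g. slope, roughness, height).
rng = np.random.default_rng(0)
X = rng.random((500, 3))
# Toy rule: a point is non-traversable when the first feature is high.
y = (X[:, 0] > 0.6).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```

Any of Scikit-learn's classifiers (SVMs, gradient boosting, k-NN) slots into the same fit/score interface, which is presumably why the paper could compare several of them.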


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Xiang Song ◽  
Weiqin Zhan ◽  
Xiaoyu Che ◽  
Huilin Jiang ◽  
Biao Yang

Three-dimensional object detection can provide precise positions of objects, which is beneficial to many robotics applications, such as self-driving cars, housekeeping robots, and autonomous navigation. In this work, we focus on accurate object detection in 3D point clouds and propose a new detection pipeline called scale-aware attention-based PillarsNet (SAPN). SAPN is a one-stage 3D object detection approach similar to PointPillars, but achieves better performance by introducing the following strategies. First, we extract multiresolution pillar-level features from the point clouds to make the detection approach more scale-aware. Second, a spatial-attention mechanism is used to highlight the object activations in the feature maps, which improves detection performance. Finally, SE-attention is employed to reweight the features fed into the detection head, which performs 3D object detection in a multitask learning manner. Experiments on the KITTI benchmark show that SAPN achieves performance similar to or better than several state-of-the-art LiDAR-based 3D detection methods. An ablation study reveals the effectiveness of each proposed strategy. Furthermore, the strategies used in this work can be easily embedded into other LiDAR-based 3D detection approaches, improving their detection performance with only slight modifications.
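The SE-attention reweighting mentioned in the third strategy follows the standard squeeze-and-excitation pattern: globally pool each channel, pass the pooled vector through a small bottleneck, and gate the channels with a sigmoid. A minimal NumPy sketch (the weight shapes and reduction ratio are assumptions; the paper's exact block is not specified in the abstract):

```python
import numpy as np

def se_reweight(features, w1, w2):
    """Squeeze-and-Excitation style channel reweighting.
    features: (C, H, W) feature map; w1: (C//r, C); w2: (C, C//r)."""
    # Squeeze: global average pool over the spatial dims -> (C,)
    z = features.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with ReLU, then a sigmoid gate per channel.
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))
    # Scale each channel of the feature map by its gate.
    return features * s[:, None, None]
```

In a trained network, `w1` and `w2` are learned so that informative channels receive gates near 1 and uninformative ones are suppressed before the detection head.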


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2113
Author(s):  
Can Chen ◽  
Luca Zanotti Fragonara ◽  
Antonios Tsourdos

Autonomous systems need to localize and track surrounding objects in 3D space for safe motion planning. As a result, 3D multi-object tracking (MOT) plays a vital role in autonomous navigation. Most MOT methods use a tracking-by-detection pipeline, which includes both object detection and data association tasks. However, many approaches detect objects in 2D RGB sequences for tracking, which lacks reliability when localizing objects in 3D space. Furthermore, it is still challenging to learn discriminative features for temporally consistent detection across frames, and the affinity matrix is typically learned from independent object features without considering the feature interaction between objects detected in different frames. To address these problems, we first employ a joint feature extractor to fuse the appearance feature and the motion feature captured from 2D RGB images and 3D point clouds, and then we propose a novel convolutional operation, named RelationConv, to better exploit the correlation between each pair of objects in adjacent frames and learn a deep affinity matrix for further data association. We finally provide an extensive evaluation to show that our proposed model achieves state-of-the-art performance on the KITTI tracking benchmark.
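The data-association step that consumes the affinity matrix can be illustrated with a simple stand-in: a cosine-similarity affinity between detection features in consecutive frames, solved with the Hungarian algorithm. This replaces the paper's learned RelationConv affinity with a hand-crafted one purely for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(feats_t, feats_t1):
    """Match detections across frames t and t+1.
    feats_t, feats_t1: (N, D) and (M, D) per-detection feature arrays.
    Returns (row, col) index pairs of matched detections."""
    a = feats_t / np.linalg.norm(feats_t, axis=1, keepdims=True)
    b = feats_t1 / np.linalg.norm(feats_t1, axis=1, keepdims=True)
    affinity = a @ b.T                             # higher = more similar
    rows, cols = linear_sum_assignment(-affinity)  # maximize total similarity
    return list(zip(rows, cols))
```

In the paper's pipeline, the affinity entries would instead come from RelationConv over fused appearance and motion features, but the assignment stage is conceptually the same.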


Sensors ◽  
2019 ◽  
Vol 20 (1) ◽  
pp. 220 ◽  
Author(s):  
Noé Pérez-Higueras ◽  
Alberto Jardón ◽  
Ángel Rodríguez ◽  
Carlos Balaguer

Navigation and exploration in 3D environments are still challenging tasks for autonomous robots that move on the ground. Robots for Search and Rescue missions must deal with unstructured and very complex scenarios. This paper presents a path planning system for navigation and exploration of ground robots in such situations. We use (unordered) point clouds as the main sensory input without building any explicit representation of the environment from them. These 3D points are employed as space samples by an optimal RRT planner (RRT*) to compute safe and efficient paths. The use of an objective function for path construction and the natural exploratory behaviour of the RRT* planner make it well suited to these tasks. The approach is evaluated in different simulations showing the viability of autonomous navigation and exploration in complex 3D scenarios.
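The key idea (using raw point-cloud points as the planner's sample space instead of a map) can be sketched with a stripped-down RRT-style tree. The rewiring step that makes RRT* asymptotically optimal is omitted for brevity, and all names and parameters here are illustrative, not the paper's:

```python
import numpy as np

def grow_tree(cloud, start, iters=200, step=0.5, seed=0):
    """Grow an RRT-style tree whose random samples are drawn directly
    from the point cloud: steer from the nearest tree node toward each
    sampled cloud point by a fixed step. (RRT* would additionally
    rewire nearby nodes to minimize path cost.)"""
    rng = np.random.default_rng(seed)
    nodes = [np.asarray(start, float)]
    parents = [-1]                      # root has no parent
    for _ in range(iters):
        sample = cloud[rng.integers(len(cloud))]
        dists = [np.linalg.norm(n - sample) for n in nodes]
        i = int(np.argmin(dists))
        direction = sample - nodes[i]
        norm = np.linalg.norm(direction)
        if norm < 1e-9:                 # sample coincides with a node
            continue
        nodes.append(nodes[i] + step * direction / norm)
        parents.append(i)
    return nodes, parents
```

A collision check against the same cloud (e.g. a minimum clearance radius) would be inserted before accepting each new node; the tree's leaves then serve both path queries and exploration frontiers.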


2020 ◽  
Vol 17 (1) ◽  
pp. 172988142090515 ◽  
Author(s):  
Hanzhang Wang ◽  
Yisha Liu

In this article, we design a low-cost navigation system for a quadrotor working in unknown outdoor environments. To reduce the computing burden on the quadrotor, we build a separated system and transfer the computing load from the onboard side to the ground station side; communication between the two sides is carried over 5G wireless networks. We utilize a stereo camera to acquire point clouds and build an Octomap for the quadrotor’s navigation. The trajectory is then generated in two stages. In the first stage, a modified RRT*-CONNECT algorithm generates a set of collision-free waypoints. In the second stage, a curve fitting algorithm produces a smooth piecewise Bezier trajectory. The advantage of the proposed method is that it optimizes the path into a safe, smooth, and dynamically feasible trajectory in real time. The modules for state estimation, dense mapping, and motion planning are integrated into a DJI Matrice 100 quadrotor. Finally, both simulation and experiments are conducted to show the validity and practicality of our method.
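The second stage (smoothing waypoints into a piecewise Bezier trajectory) rests on evaluating Bezier segments from control points. A minimal sketch using de Casteljau's algorithm, which is one standard way to evaluate such curves (the paper's fitting procedure and segment degrees are not specified in the abstract):

```python
import numpy as np

def bezier(ctrl, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] using de
    Casteljau's algorithm; ctrl is an (n, d) array of control points."""
    pts = np.asarray(ctrl, float)
    # Repeatedly interpolate adjacent points until one remains.
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]
```

A piecewise trajectory strings several such segments together, with the fitting step choosing control points so that the curve stays near the RRT*-CONNECT waypoints while meeting smoothness and dynamic-feasibility constraints.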


2021 ◽  
pp. 1-14
Author(s):  
Ana Luisa Ballinas-Hernández ◽  
Ivan Olmos-Pineda ◽  
José Arturo Olvera-López

A current challenge for autonomous vehicles is the detection of irregularities on road surfaces in order to prevent accidents; in particular, speed bump detection is an important task for safe and comfortable autonomous navigation. Some techniques achieve acceptable speed bump detection under optimal road surface conditions, especially when signs are well-marked. However, in developing countries it is very common to find unmarked speed bumps, on which existing techniques fail. In this paper, a methodology to detect both marked and unmarked speed bumps is proposed. For clearly painted speed bumps, we apply the local binary patterns technique to extract features from an image dataset. For unmarked speed bump detection, we apply stereo vision: the point clouds obtained by 3D reconstruction are converted to triangular meshes using Delaunay triangulation, and the most relevant features of speed bump elevations are then selected and extracted from the surface meshes. The results obtained make an important contribution and improve on some existing techniques, since the reconstruction of three-dimensional meshes provides relevant information for detecting speed bumps through surface elevations even when they are not marked.
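The mesh-building step for unmarked bumps (Delaunay triangulation of reconstructed points, followed by elevation features on the resulting faces) can be sketched with SciPy. The per-triangle elevation range used here is a simplistic stand-in for the paper's feature selection:

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_from_points(xyz):
    """Triangulate the x-y footprint of a reconstructed point cloud;
    each face then carries the z (elevation) of its vertices, from
    which bump-like elevation features can be extracted.
    xyz: (N, 3) array of reconstructed points."""
    tri = Delaunay(xyz[:, :2])       # 2D triangulation over x, y
    faces = tri.simplices            # (n_tri, 3) vertex indices
    # Per-triangle elevation range as a crude bump indicator.
    z = xyz[:, 2][faces]
    elev_range = z.max(axis=1) - z.min(axis=1)
    return faces, elev_range
```

Faces with a large elevation range flag candidate bump regions, which the methodology would then confirm with its selected features.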

