Integrated Pose Estimation Using 2D Lidar and INS Based on Hybrid Scan Matching

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5670
Author(s):  
Gwangsoo Park ◽  
Byungjin Lee ◽  
Sangkyung Sung

Point cloud data is essential measurement information that has broadened the functional horizon of urban mobility. While 3D lidar and image-depth sensors are superior for mapping and localization, sense and avoidance, and cognitive exploration in unknown areas, 2D lidar remains the practical choice for systems with limited weight and computational resources, for instance, an aerial mobility system. In this paper, we propose a new pose estimation scheme that reflects the characteristics of feature points extracted from 2D lidar within the NDT framework to achieve improved point cloud registration. In a 2D lidar point cloud, vertices and corners serve as representative feature points. Based on this feature point information, a point-to-point relationship is formulated and incorporated into the voxelized map matching process to deliver more efficient and reliable matching performance. To assess the navigation performance of a mobile platform running the proposed algorithm, the matching result is combined with inertial navigation through an integration filter. The proposed algorithm was then verified through a simulation study using a high-fidelity flight simulator and through an indoor experiment. For performance validation, both sets of results were compared and analyzed against previous techniques. In conclusion, the proposed algorithms were demonstrated to achieve improved accuracy and computational efficiency.
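The core idea of weighting an NDT-style voxel score by per-point feature importance can be illustrated with a minimal 2D sketch (this is not the authors' implementation; the voxel size, covariance regularisation, and the choice of weights for corner/vertex points are illustrative assumptions):

```python
import numpy as np

def build_ndt_map(points, voxel_size=1.0):
    """Voxelize 2D points and fit a Gaussian (mean, inverse covariance) per cell."""
    cells = {}
    for p in points:
        key = tuple(np.floor(p / voxel_size).astype(int))
        cells.setdefault(key, []).append(p)
    ndt = {}
    for key, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) < 3:
            continue  # too few points to fit a covariance
        mu = pts.mean(axis=0)
        cov = np.cov(pts.T) + 1e-6 * np.eye(2)  # regularised covariance
        ndt[key] = (mu, np.linalg.inv(cov))
    return ndt

def weighted_ndt_score(ndt, points, weights, voxel_size=1.0):
    """Sum of per-point Gaussian likelihoods; feature points (e.g. corners,
    vertices) would carry larger weights than ordinary scan points."""
    score = 0.0
    for p, w in zip(points, weights):
        key = tuple(np.floor(p / voxel_size).astype(int))
        if key not in ndt:
            continue
        mu, icov = ndt[key]
        d = p - mu
        score += w * np.exp(-0.5 * d @ icov @ d)
    return score
```

In a registration loop, this score would be maximised over candidate scan poses; a well-aligned scan scores higher than a displaced one.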

2019 ◽  
Vol 9 (16) ◽  
pp. 3273 ◽  
Author(s):  
Wen-Chung Chang ◽  
Van-Toan Pham

This paper develops a registration architecture for estimating the relative pose, including the rotation and the translation, of an object with respect to a model in 3-D space based on 3-D point clouds captured by a 3-D camera. In particular, it addresses the time-consuming nature of 3-D point cloud registration, which is critical for closed-loop industrial automated assembly systems that demand accurate pose estimation within a fixed time. Firstly, two different descriptors are developed to extract coarse and detailed features of these point cloud data sets, creating training data sets across diversified orientations. Secondly, to guarantee fast pose estimation in fixed time, a novel registration architecture employing two consecutive convolutional neural network (CNN) models is proposed. After training, the proposed CNN architecture estimates the rotation between the model point cloud and a data point cloud, followed by a translation estimate computed from average values. Because the second CNN model covers a smaller range of orientation uncertainty than the full range covered by the first, it can precisely estimate the orientation of the 3-D point cloud. Finally, the performance of the proposed algorithm has been validated by experiments in comparison with baseline methods. Based on these results, the proposed algorithm significantly reduces estimation time while maintaining high precision.
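The coarse-to-fine structure (a first stage covering the full orientation range, a second stage covering the reduced uncertainty around the coarse answer, and translation from average values) can be sketched without the CNNs by replacing each stage with a grid search (a 2D toy stand-in; the step sizes and error metric are illustrative assumptions, not the paper's descriptors):

```python
import numpy as np

def rot2d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def alignment_error(model, data, theta):
    """Mean nearest-point distance after rotating the model by theta and
    translating by the centroid difference (the 'average value' step)."""
    rotated = model @ rot2d(theta).T
    shifted = rotated + (data.mean(axis=0) - rotated.mean(axis=0))
    d = np.linalg.norm(shifted[:, None, :] - data[None, :, :], axis=2)
    return d.min(axis=1).mean()

def coarse_to_fine_rotation(model, data, coarse_step=np.radians(10)):
    # Stage 1: coarse sweep over the full orientation range.
    coarse = np.arange(-np.pi, np.pi, coarse_step)
    t0 = min(coarse, key=lambda t: alignment_error(model, data, t))
    # Stage 2: fine sweep over the reduced uncertainty around the coarse answer.
    fine = np.linspace(t0 - coarse_step, t0 + coarse_step, 41)
    return min(fine, key=lambda t: alignment_error(model, data, t))
```

The second sweep only needs to resolve one coarse bin, which is what lets the second stage be precise without covering the full range.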


Author(s):  
Lê Văn Hùng

3D hand pose estimation from egocentric vision is an important problem in building assistance systems and in modeling robot hands in robotics. In this paper, we propose a complete method for estimating 3D hand pose from the complex scene data obtained from an egocentric sensor, including a simple yet highly efficient pre-processing step for hand segmentation. In the estimation process, we fine-tune Hand PointNet (HPN), V2V-PoseNet (V2V), and Point-to-Point Regression PointNet (PtoP) to estimate the 3D hand pose from data collected with the egocentric sensor, such as the CVAR and FPHA (First-Person Hand Action) datasets. HPN, V2V, and PtoP are deep networks/Convolutional Neural Networks (CNNs) that estimate 3D hand pose from point cloud data of the hand. We evaluate the estimation results with and without the pre-processing step to assess the effectiveness of the proposed method. The results show that the 3D distance error is many times larger than estimates on unoccluded hand datasets (hand data captured by surveillance cameras from top, front, and side views) such as the MSRA, NYU, and ICVL datasets. The results are quantified, analyzed, and shown on the point cloud data of the CVAR dataset and projected onto the color images of the FPHA dataset.
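The abstract does not detail the pre-processing step, but a common hand-segmentation heuristic for egocentric depth data is a near-depth band, since the hand is usually the closest surface to a head- or chest-mounted sensor (a generic sketch, not necessarily the paper's method; the depth thresholds are illustrative):

```python
import numpy as np

def segment_hand(depth, d_min=0.1, d_max=0.6):
    """Keep only pixels within a near-depth band (metres). Returns the
    masked depth map and the boolean hand mask."""
    mask = (depth > d_min) & (depth < d_max)
    return np.where(mask, depth, 0.0), mask
```

The surviving pixels would then be back-projected to the point cloud fed to HPN, V2V, or PtoP.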


2021 ◽  
Vol 13 (18) ◽  
pp. 3651
Author(s):  
Weiqi Wang ◽  
Xiong You ◽  
Xin Zhang ◽  
Lingyu Chen ◽  
Lantian Zhang ◽  
...  

Facing the realistic demands of robot application environments, simultaneous localisation and mapping (SLAM) has gradually moved from static environments to complex dynamic ones, where traditional SLAM methods typically suffer pose estimation deviations caused by data-association errors introduced by dynamic elements in the environment. The present study effectively addresses this problem by proposing a SLAM approach based on light detection and ranging (LiDAR) under semantic constraints in dynamic environments. Four main modules handle the projection of point cloud data, semantic segmentation, dynamic element screening, and semantic map construction. A LiDAR point cloud semantic segmentation network, SANet, based on a spatial attention mechanism is proposed, which significantly improves the real-time performance and accuracy of point cloud semantic segmentation. A dynamic element selection algorithm is designed and used with prior knowledge to significantly reduce the pose estimation deviations caused by dynamic elements in SLAM. Experiments conducted on the public datasets SemanticKITTI, KITTI, and SemanticPOSS show that the accuracy and robustness of the proposed approach are significantly improved.
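The dynamic-element-screening step reduces to masking out points whose semantic label belongs to a potentially moving class before scan matching. A minimal sketch (the class names below are hypothetical placeholders; the paper's SANet labels follow its own dataset taxonomy):

```python
import numpy as np

# Hypothetical dynamic class names for illustration only.
DYNAMIC_CLASSES = {"person", "car", "cyclist"}

def screen_dynamic_elements(points, labels, dynamic=DYNAMIC_CLASSES):
    """Drop points whose semantic label is a dynamic class, so the
    downstream scan matcher registers only against static structure."""
    mask = np.array([lab not in dynamic for lab in labels])
    return points[mask], mask
```

Prior knowledge (e.g. which classes can move) enters through the `dynamic` set; the remaining static points feed both pose estimation and semantic map construction.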


Author(s):  
Prem Rachakonda ◽  
Bala Muralikrishnan ◽  
Luc Cournoyer ◽  
Daniel Sawyer

Terrestrial laser scanners (TLSs) are instruments that can measure the 3D coordinates of objects at high speed using a laser, producing high-density 3D point cloud data. The Dimensional Metrology Group (DMG) at NIST performed research to support the development of documentary standards within the ASTM E57 committee on 3D imaging systems. This led to the publication of the ASTM E3125-2017 standard on point-to-point distance performance evaluation of 3D imaging systems such as TLSs. To ensure that data from different TLS systems are processed identically, ASTM E3125-2017 mandates the use of a common algorithm to determine the center of a sphere from point cloud data. This paper describes this algorithm, and the software code is provided as a download.
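For intuition, a generic linear least-squares sphere fit recovers a centre and radius from point cloud data by solving x²+y²+z² = 2c·p + (r² − |c|²) (this is a common textbook formulation, not necessarily the exact algorithm mandated by ASTM E3125-2017, which is specified in the standard and the downloadable code):

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit.
    For each point p on the sphere: |p|^2 = 2 p.c + (r^2 - |c|^2),
    which is linear in the unknowns [c_x, c_y, c_z, r^2 - |c|^2]."""
    p = np.asarray(points, dtype=float)
    A = np.hstack([2 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c = sol[:3]
    r = np.sqrt(sol[3] + c @ c)
    return c, r
```

A fit like this is often used to seed a nonlinear orthogonal-distance refinement when measurement noise is significant.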


2019 ◽  
Vol 165 ◽  
pp. 298-311 ◽  
Author(s):  
Tae W. Lim ◽  
Charles E. Oestreich

2018 ◽  
Vol 37 (12) ◽  
pp. 1463-1483 ◽  
Author(s):  
Thomas Westfechtel ◽  
Kazunori Ohno ◽  
Bärbel Mertsching ◽  
Ryunosuke Hamada ◽  
Daniel Nickchen ◽  
...  

One of the major challenges for mobile robots in human-made environments is navigating stairways. This study presents a method for accurately detecting, localizing, and estimating the characteristics of stairways using point cloud data. The main challenge is the wide variety of structures and shapes of stairways. This challenge is often aggravated by an unfavorable sensor position, which leaves large parts of the stairway occluded, and can be further aggravated by sparse point data. We overcome these difficulties by introducing a three-dimensional graph-based stairway-detection method combined with competing initializations. The stairway graph characterizes the general structural design of stairways in a generic way that can describe a large variety of different stairways. By using multiple ways to initialize the graph, we can robustly detect stairways even if parts of the stairway are occluded. Furthermore, by letting the initializations compete against each other, we find the best initialization that accurately describes the measured stairway. The detection algorithm utilizes a plane-based approach. We also investigate different planar segmentation algorithms and compare them experimentally in an application-orientated manner. Our system accurately detects and estimates the stairway parameters with an average error of only [Formula: see text] for a variety of stairways, including ascending, descending, and spiral stairways. Our method works robustly with different depth sensors, for both small- and large-scale environments, and for dense and sparse point cloud data. Despite this generality, our system's accuracy is higher than most state-of-the-art stairway-detection methods.
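A heavily simplified 1-D stand-in for the plane-based idea: if horizontal tread surfaces have been segmented, the stairway's riser height follows from the vertical spacing of those surfaces (a toy sketch only; the actual method fits 3D planes and matches them against a stairway graph with competing initializations; the clustering gap is an illustrative assumption):

```python
import numpy as np

def tread_heights(z, gap=0.05):
    """Cluster point heights: split the sorted z values wherever the jump
    exceeds `gap`; each cluster approximates one tread surface."""
    z = np.sort(np.asarray(z))
    splits = np.where(np.diff(z) > gap)[0]
    return np.array([c.mean() for c in np.split(z, splits + 1)])

def riser_height(z, gap=0.05):
    """Average vertical spacing between consecutive tread surfaces."""
    return np.diff(tread_heights(z, gap)).mean()
```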


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6473
Author(s):  
Tyson Phillips ◽  
Tim D’Adamo ◽  
Peter McAree

The capability to estimate the pose of known geometry from point cloud data is a frequently arising requirement in robotics and automation applications. This problem is directly addressed by Iterative Closest Point (ICP); however, that method has several limitations and lacks robustness. This paper makes the case for an alternative method that seeks the most likely solution based on the available evidence. Specifically, an evidence-based metric is described that seeks the pose of the object that would maximise the conditional likelihood of reproducing the observed range measurements. A seedless search heuristic is also provided to find the most likely pose estimate in light of these measurements. The method is demonstrated to provide pose estimation (2D and 3D shape poses as well as joint-space searches), object identification/classification, and platform localisation. Furthermore, the method is shown to be robust in cluttered or non-segmented point cloud data, as well as robust to measurement uncertainty and extrinsic sensor calibration errors.
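The evidence-based idea can be sketched in 2D for a disc of known radius observed by a range sensor at the origin: each candidate pose predicts the range every ray would return, and the pose maximising the Gaussian likelihood of the observed ranges wins (a minimal sketch with a grid search standing in for the paper's seedless search heuristic; the noise model and geometry are illustrative assumptions):

```python
import numpy as np

def predicted_range(bearing, centre, radius, r_max=10.0):
    """Range a sensor at the origin would measure along `bearing` for a
    disc of known radius at `centre`; r_max if the ray misses."""
    d = np.array([np.cos(bearing), np.sin(bearing)])
    b = d @ centre
    disc = b * b - (centre @ centre - radius ** 2)
    if disc < 0:
        return r_max
    t = b - np.sqrt(disc)  # first intersection along the ray
    return t if t > 0 else r_max

def log_likelihood(centre, bearings, ranges, radius, sigma=0.05):
    """Gaussian log-likelihood of the observed ranges given a candidate pose."""
    err = [r - predicted_range(th, centre, radius) for th, r in zip(bearings, ranges)]
    return -0.5 * np.sum(np.square(err)) / sigma ** 2

def most_likely_pose(bearings, ranges, radius, candidates):
    """Grid-search stand-in for a seedless search over candidate poses."""
    return max(candidates,
               key=lambda c: log_likelihood(np.asarray(c), bearings, ranges, radius))
```

Because the score is a likelihood over raw range measurements rather than a point-to-point distance, clutter rays that miss the hypothesised object penalise wrong poses instead of corrupting correspondences.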

