Large Common Plansets-4-Points Congruent Sets for Point Cloud Registration

2020 ◽  
Vol 9 (11) ◽  
pp. 647
Author(s):  
Cedrique Fotsing ◽  
Nafissetou Nziengam ◽  
Christophe Bobda

Point cloud registration combines multiple point cloud data sets, collected from different positions using the same or different devices, into a single point cloud within a single coordinate system. Registration is usually achieved through spatial transformations that align and merge multiple point clouds into one globally consistent model. In this paper, we present a new segmentation-based approach for point cloud registration. Our method extracts plane structures from the point clouds and then uses the 4-Point Congruent Sets (4PCS) technique to estimate transformations that align those plane structures. Instead of a global alignment using all the points in the dataset, our method aligns two point clouds using their local plane structures. This considerably reduces the data size, computational workload, and execution time. Unlike conventional methods that seek to align the largest number of common points between entities, the new method aims to align the largest number of planes. Using partial point clouds of multiple real-world scenes, we demonstrate the superiority of our method over raw 4PCS in terms of quality of result (QoS) and execution time. Our method requires about half the execution time of 4PCS on all the tested datasets and produces better alignment of the point clouds.
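The transformation-estimation step such a pipeline ends with can be sketched as a least-squares fit. The code below is not the authors' 4PCS method; it shows only the standard SVD-based (Kabsch) estimation of a rotation and translation from already-matched point pairs, which a plane-alignment pipeline would apply once correspondences are found.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Given exact correspondences, this recovers the rigid motion in closed form; robust pipelines wrap it in an outlier-rejection loop.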

2019 ◽  
Vol 9 (16) ◽  
pp. 3273 ◽  
Author(s):  
Wen-Chung Chang ◽  
Van-Toan Pham

This paper develops a registration architecture for estimating the relative pose, including the rotation and the translation, of an object with respect to a model in 3-D space, based on 3-D point clouds captured by a 3-D camera. In particular, this paper addresses the time-consuming nature of 3-D point cloud registration, which matters for closed-loop industrial automated assembly systems that demand accurate pose estimation in fixed time. Firstly, two different descriptors are developed to extract coarse and detailed features of the point cloud data sets, for the purpose of creating training data sets covering diversified orientations. Secondly, to guarantee fast pose estimation in fixed time, a novel registration architecture employing two consecutive convolutional neural network (CNN) models is proposed. After training, the proposed CNN architecture can estimate the rotation between the model point cloud and a data point cloud, followed by a translation estimate based on computing average values. By covering a smaller range of orientation uncertainty than the full range covered by the first CNN model, the second CNN model can precisely estimate the orientation of the 3-D point cloud. Finally, the performance of the proposed algorithm has been validated by experiments in comparison with baseline methods. Based on these results, the proposed algorithm significantly reduces the estimation time while maintaining high precision.
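The two-stage idea, in which the second model covers a smaller range of orientation uncertainty than the first, can be illustrated without a neural network. The sketch below is a non-learning analogue only: a coarse-to-fine grid search over a single 2-D rotation angle that narrows the uncertainty range at each level. Function names and parameter values are illustrative, not from the paper.

```python
import numpy as np

def align_error(angle, src, dst):
    """Residual after rotating src by `angle` (2-D, exact correspondences)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.linalg.norm(src @ np.array([[c, -s], [s, c]]).T - dst)

def coarse_to_fine_rotation(src, dst, levels=2, bins=36):
    """Stage 1 searches the full angular range; each later stage covers a
    smaller uncertainty window around the previous best estimate."""
    lo, hi = -np.pi, np.pi
    best = 0.0
    for _ in range(levels):
        angles = np.linspace(lo, hi, bins)
        best = angles[int(np.argmin([align_error(a, src, dst) for a in angles]))]
        span = (hi - lo) / bins               # shrink the search window
        lo, hi = best - span, best + span
    return best
```

Once the rotation is fixed, the translation follows from averages, `t = dst.mean(0) - (src @ R.T).mean(0)`, mirroring the abstract's average-based translation estimate.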


2019 ◽  
Vol 12 (1) ◽  
pp. 61
Author(s):  
Miloš Prokop ◽  
Salman Ahmed Shaikh ◽  
Kyoung-Sook Kim

Modern robotic exploration strategies assume multi-agent cooperation, which raises the need for an effective exchange of acquired scans of the environment in the absence of a reliable global positioning system. In such situations, agents compare their scans of the outside world to determine whether they overlap in some region and, if so, determine the right matching between them. The process of matching multiple point-cloud scans is called point-cloud registration. Using existing point-cloud registration approaches, a good match between any two point clouds is achieved if and only if there is a large overlap between them. However, this limits the advantage of using multiple robots, for instance, for time-effective 3D mapping. Hence, a point-cloud registration approach that can work with low-overlap scans is highly desirable. This work proposes a novel solution for the point-cloud registration problem with a very low overlapping area between the two scans. In doing so, no initial relative positions of the point clouds are assumed. Most state-of-the-art point-cloud registration approaches iteratively match keypoints in the scans, which is computationally expensive. In contrast to the traditional approaches, a more efficient line-features-based point-cloud registration approach is proposed in this work. Besides reducing the computational cost, this approach avoids the problem of the high false-positive rate of existing keypoint detection algorithms, which becomes especially significant in low-overlap point-cloud registration. The effectiveness of the proposed approach is demonstrated with the help of experiments.
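A line feature of the kind this approach relies on can be extracted from a cluster of roughly collinear points by principal component analysis. This is a generic sketch, not the paper's detector; `fit_line` is an illustrative name.

```python
import numpy as np

def fit_line(points):
    """Fit a 3-D line (centroid + unit direction) to points via PCA/SVD."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    return centroid, Vt[0]        # first principal axis = line direction
```

The recovered direction is defined only up to sign, so matching line features between scans should compare `|d1 . d2|` rather than the raw dot product.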


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Siyuan Huang ◽  
Limin Liu ◽  
Jian Dong ◽  
Xiongjun Fu ◽  
Leilei Jia

Purpose Most existing ground filtering algorithms are based on the Cartesian coordinate system, which is not compatible with the working principle of mobile light detection and ranging and makes it difficult to obtain good filtering accuracy. The purpose of this paper is to improve the accuracy of ground filtering by making full use of the ordering information between points in spherical coordinates. Design/methodology/approach First, the cloth simulation (CS) algorithm is modified into a sorting algorithm for scattered point clouds to obtain the adjacency relationships of the point cloud and to generate a matrix containing its adjacency information. Then, according to the adjacency information of the points, a projection distance comparison and a local slope analysis are performed simultaneously. These results are integrated to process the point cloud details further, and the algorithm is finally used to filter a point cloud in a scene from the KITTI data set. Findings The results show that the accuracy of KITTI point cloud sorting is 96.3% and the kappa coefficient of the ground filtering result is 0.7978. Compared with other algorithms applied to the same scene, the proposed algorithm has higher processing accuracy. Research limitations/implications The steps of the algorithm can be computed in parallel, which saves time owing to the small amount of computation. In addition, the generality of the algorithm is improved, and it can be used for different data sets from urban streets. However, due to the lack of point clouds from field environments with labeled ground points, the filtering result of this algorithm in field environments needs further study. Originality/value In this study, the point cloud neighboring information was obtained by a modified CS algorithm. The ground filtering algorithm distinguishes ground points from off-ground points according to the flatness, continuity and minimality of ground points in point cloud data. In addition, changing the thresholds has little effect on the algorithm's results.
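The local slope analysis mentioned in the approach can be illustrated with a minimal sketch that walks an ordered scanline, the kind of point ordering the sorting step provides, and labels a point off-ground when the height change exceeds a slope threshold times the horizontal step. The threshold value and function are illustrative, not the paper's.

```python
import numpy as np

def slope_filter(scanline, max_slope=0.15):
    """Label ordered scanline points as ground (True) / off-ground (False)
    by comparing the local slope between consecutive points."""
    labels = np.zeros(len(scanline), dtype=bool)
    labels[0] = True                       # assume the first return is ground
    for i in range(1, len(scanline)):
        dxy = np.linalg.norm(scanline[i, :2] - scanline[i - 1, :2])
        dz = abs(scanline[i, 2] - scanline[i - 1, 2])
        labels[i] = dz <= max_slope * max(dxy, 1e-9)
    return labels
```

A real filter would combine this with the projection distance comparison and clean up points at object boundaries, where a descent back to ground also registers as a steep slope.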


2021 ◽  
Vol 30 ◽  
pp. 126-130
Author(s):  
Jan Voříšek ◽  
Bořek Patzák ◽  
Edita Dvořáková ◽  
Daniel Rypl

Laser scanning is widely used in architecture and construction to document existing buildings by providing accurate data for creating a 3D model. The output is a set of data points in space, a so-called point cloud. While point clouds can be directly rendered and inspected, they do not hold any semantics. Typically, engineers manually obtain floor plans, structural models, or the whole BIM model, which is a very time-consuming task for large building projects. In this contribution, we present the design and concept of a PointCloud2BIM library [1]. It provides a set of algorithms for automated or user-assisted detection of fundamental entities from scanned point cloud data sets, such as floors, rooms, walls, and openings, and for identification of the mutual relationships between them. The entity detection relies on a reasonable degree of human interaction (e.g., specifying the expected wall thickness). The results reside in a platform-agnostic JSON database, allowing future integration into any existing BIM software.
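A platform-agnostic JSON database of detected entities and their mutual relationships might look like the sketch below. The keys and nesting here are purely illustrative and are not the actual PointCloud2BIM schema.

```python
import json

# Hypothetical entity records; relationships are expressed as id references.
entities = {
    "floors": [{"id": "floor-1", "elevation": 0.0}],
    "rooms": [{"id": "room-1", "floor": "floor-1"}],
    "walls": [{"id": "wall-1", "room": "room-1", "thickness": 0.3}],
    "openings": [{"id": "door-1", "wall": "wall-1", "width": 0.9}],
}
db = json.dumps(entities, indent=2)   # serialised, tool-agnostic database
```

Because the output is plain JSON, any BIM tool with an import path can consume it without linking against the library itself.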


Author(s):  
Y. D. Rajendra ◽  
S. C. Mehrotra ◽  
K. V. Kale ◽  
R. R. Manza ◽  
R. K. Dhumal ◽  
...  

Terrestrial Laser Scanners (TLS) are used to obtain dense point samples of a large object's surface. TLS is a new and efficient method to digitize large objects or scenes. The collected point samples come in different formats and coordinate systems, and multiple scans are required to capture a large object such as a heritage site. Point cloud registration is therefore an important task for bringing the different scans into a whole 3D model in one coordinate system. Point clouds can be registered using one of three approaches, or a combination of them: target-based, feature-based, and point-cloud-based. For the present study we adopted the point-cloud-based registration approach. We collected partially overlapping 3D point cloud data of the Department of Computer Science & IT (DCSIT) building located in Dr. Babasaheb Ambedkar Marathwada University, Aurangabad. To obtain complete point cloud information of the building we took 12 scans: 4 scans of the exterior and 8 scans of the interior façade. Various algorithms are available in the literature, but Iterative Closest Point (ICP) is the most dominant, and researchers have developed variants of ICP to improve the registration process. The ICP registration algorithm searches for pairs of nearest points in two adjacent scans and calculates the transformation parameters between them; it has the advantage that no artificial target is required for the registration process. We studied and implemented three variants of the ICP algorithm (Brute Force, KDTree, and Partial Matching) in MATLAB. The results show that the implemented ICP variants give better speed and accuracy of registration compared with the CloudCompare open-source software.
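The Brute Force ICP variant can be sketched in a few lines: each iteration matches every point to its nearest neighbour in the other scan, then updates the pose with a least-squares (SVD/Kabsch) fit. A Python sketch of the idea (the study's implementation is in MATLAB):

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def icp_brute_force(src, dst, iterations=20):
    """Minimal point-to-point ICP with brute-force nearest-neighbour search."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]  # nearest point in the other scan
        R, t = kabsch(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

The KDTree variant replaces the O(n^2) distance matrix with a spatial index, which is exactly where its speed advantage comes from.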


Author(s):  
T. Sumi ◽  
H. Date ◽  
S. Kanai

In this paper, an efficient and robust registration method for multiple point clouds is proposed. In our research, we assume that the point clouds are acquired by Terrestrial Laser Scanning (TLS) systems and that the scanned environments have a relatively flat base plane such as the ground or a floor. Our method builds on an existing pairwise registration method using point projection images, which can quickly register point clouds under the above assumptions. In that method, sliced point clouds are projected onto the base plane, and a binary image with feature points is created. The registration is done using feature points of the images based on the sample consensus strategy. In this paper, first, we improve the efficiency of the pairwise registration method by introducing height and occlusion information into the image. Then, a validity check method for pairwise registration using space-classified images is proposed to avoid exhaustive pairwise registration in the multiple point cloud registration process. Finally, an efficient multiple point cloud registration algorithm is proposed, based on progressive creation of a point cloud connectivity graph using iterative rough and precise pairwise registration together with the validity check method. The effectiveness of our method is shown through its application to three datasets of outdoor environments.
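Projecting a sliced point cloud onto the base plane to form a binary image, the core data structure of the pairwise method, can be sketched as follows; the cell size and function name are illustrative.

```python
import numpy as np

def project_to_image(points, cell=0.5):
    """Project a sliced point cloud onto the base (XY) plane and build a
    binary occupancy image with the given cell size."""
    xy = points[:, :2]
    ij = ((xy - xy.min(axis=0)) / cell).astype(int)   # grid indices
    h, w = ij.max(axis=0) + 1
    img = np.zeros((h, w), dtype=bool)
    img[ij[:, 0], ij[:, 1]] = True
    return img
```

The paper's refinement augments such an image with height and occlusion channels; this sketch shows only the basic occupancy projection.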


2021 ◽  
Author(s):  
Simone Müller ◽  
Dieter Kranzlmüller

Based on the depth perception of individual stereo cameras, spatial structures can be derived as point clouds. The quality of such three-dimensional data is technically restricted by sensor limitations, recording latency, and insufficient object reconstruction caused by the surface representation. Additionally, external physical effects such as lighting conditions, material properties, and reflections can lead to deviations between real and virtual object perception. Such physical influences appear in rendered point clouds as geometric imaging errors on surfaces and edges. We propose the simultaneous use of multiple, dynamically arranged cameras. The increased information density leads to more detail in the detection of the surroundings and the representation of objects. During a pre-processing phase the collected data are merged and prepared. Subsequently, a logical analysis stage examines and allocates the captured images to three-dimensional space. For this purpose, it is necessary to create a new metadata set consisting of image and localisation data. The post-processing reworks and matches the locally assigned images. As a result, the dynamically moving images become comparable so that a more accurate point cloud can be generated. For evaluation and better comparability we decided to use synthetically generated data sets. Our approach builds the foundation for dynamic, real-time generation of digital twins with the aid of real sensor data.


2020 ◽  
Vol 10 (10) ◽  
pp. 3340 ◽  
Author(s):  
Pavel Chmelar ◽  
Lubos Rejfek ◽  
Tan N. Nguyen ◽  
Duy-Hung Ha

Nowadays, mobile robot exploration needs a rangefinder to obtain a large number of measurement points that give a detailed and precise description of the surrounding area and objects; this set of points is called a point cloud. However, a single point cloud scan does not cover the whole area, so multiple scans must be acquired and compared to find the right matching between them, in a process called registration. This process requires further computation and places high demands on memory consumption, especially for the small embedded devices in mobile robots. This paper describes a novel method to reduce the processing burden of multiple point cloud scans. We introduce an approach that preprocesses an input point cloud to detect planar surfaces, simplify the space description, fill gaps in the point cloud, and extract important space features. All of these steps are achieved by applying advanced image processing methods in combination with the quantization of physical space points. The results show the reliability of our approach in detecting close parallel walls with suitable parameter settings. More importantly, planar surface detection decreases the number of necessary descriptive points by 99% in almost all cases. The proposed approach is verified on real indoor point clouds.
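The quantization of physical space points mentioned above can be sketched as snapping coordinates to a regular grid and dropping duplicate cells, which is one simple way the descriptive point count can drop sharply. The step size and function name are illustrative.

```python
import numpy as np

def quantize(points, step=0.05):
    """Snap coordinates onto a regular grid and drop duplicate cells,
    shrinking the space description."""
    q = np.round(points / step).astype(int)   # integer grid coordinates
    return np.unique(q, axis=0) * step        # one representative per cell
```

Planar regions quantize especially well, since many raw samples collapse into the same grid cells along the surface.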


2021 ◽  
Vol 13 (17) ◽  
pp. 3474
Author(s):  
Jian Li ◽  
Shuowen Huang ◽  
Hao Cui ◽  
Yurong Ma ◽  
Xiaolong Chen

As an important and fundamental step in 3D reconstruction, point cloud registration aims to find the rigid transformation that registers two point sets. The major challenge in point cloud registration techniques is finding correct correspondences in scenes that may contain many repetitive structures and noise. This paper is primarily concerned with improving registration by using a priori semantic information in the search for correspondences. In particular, we present a new point cloud registration pipeline for large outdoor scenes that takes advantage of semantic segmentation. Our method consists of extracting semantic segments from the point clouds using an efficient deep neural network; then detecting the key points of the point cloud and using a feature descriptor to obtain the initial correspondence set; and finally applying a Random Sample Consensus (RANSAC) strategy to estimate the transformations that align segments with the same labels. Instead of using all points to estimate a global alignment, our method aligns the two point clouds using the transformation calculated from the segment with the highest inlier ratio. We evaluate our method on the publicly available WHU-TLS registration dataset. These experiments demonstrate how a priori semantic information improves registration in terms of precision and speed.
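The RANSAC stage described above, sampling minimal sets of correspondences, fitting a rigid transform, and keeping the model with the most inliers, can be sketched as follows. This is a generic RANSAC rigid fit over putative correspondences, not the paper's exact pipeline; the threshold and names are illustrative.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def ransac_rigid(src, dst, iters=100, thresh=0.1, seed=0):
    """RANSAC over putative correspondences src[i] <-> dst[i]: sample three
    pairs, fit a rigid transform, keep the model with the most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        R, t = kabsch(src[idx], dst[idx])
        resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return kabsch(src[best_inliers], dst[best_inliers])   # refit on inliers
```

In a segment-aware pipeline, correspondences would first be restricted to matching semantic labels, which shrinks the sampling space and raises the inlier ratio.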

