Performance Evaluation of Robot Localization Using 2D and 3D Point Clouds

2017 ◽  
Vol 29 (5) ◽  
pp. 928-934
Author(s):  
Kiyoaki Takahashi ◽  
Takafumi Ono ◽  
Tomokazu Takahashi ◽  
Masato Suzuki ◽  
...  

Autonomous mobile robots need to acquire information about their surrounding environment in order to perform self-localization. Current autonomous mobile robots often use point cloud data acquired by laser range finders (LRFs) instead of image data. In the virtual autonomous traveling tests conducted in this study, we evaluated the robot’s self-localization performance with Normal Distributions Transform (NDT) scan matching, using both 2D and 3D point cloud data to assess whether self-localization performs better with 3D or with 2D point clouds.
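A minimal 2D sketch of NDT scan matching, assuming an isotropic per-cell variance and a coarse grid search over candidate poses (real implementations keep the full per-cell covariance and refine the pose iteratively; all function and parameter names here are illustrative):

```python
import math

def build_ndt(map_points, cell=1.0):
    # Bin the reference map into grid cells; keep each cell's mean and an
    # isotropic variance (full NDT keeps the complete 2x2 covariance).
    bins = {}
    for x, y in map_points:
        bins.setdefault((int(x // cell), int(y // cell)), []).append((x, y))
    model = {}
    for key, pts in bins.items():
        n = len(pts)
        mx = sum(px for px, _ in pts) / n
        my = sum(py for _, py in pts) / n
        var = sum((px - mx) ** 2 + (py - my) ** 2 for px, py in pts) / n + 1e-3
        model[key] = (mx, my, var)
    return model

def ndt_score(model, scan, pose, cell=1.0):
    # Likelihood of a scan under the NDT model for a candidate pose (tx, ty, theta).
    tx, ty, th = pose
    s, c = math.sin(th), math.cos(th)
    total = 0.0
    for x, y in scan:
        wx, wy = c * x - s * y + tx, s * x + c * y + ty
        key = (int(wx // cell), int(wy // cell))
        if key in model:
            mx, my, var = model[key]
            total += math.exp(-0.5 * ((wx - mx) ** 2 + (wy - my) ** 2) / var)
    return total

def localize(model, scan, candidate_poses):
    # Coarse search; a real matcher refines the pose with Newton iterations.
    return max(candidate_poses, key=lambda p: ndt_score(model, scan, p))
```

The 3D variant replaces grid cells with voxels and the 3-parameter pose with a full 6-DOF transform, but the score-and-optimize structure is the same.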


2021 ◽  
Vol 65 (1) ◽  
pp. 10501-1-10501-9
Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

Abstract The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which results in a decrease in the density of the collected scanning points and affects registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the problem of registering point cloud data and image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show that the proposed method achieves higher registration accuracy and fusion speed, demonstrating its accuracy and effectiveness.
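The first step, rasterising a point cloud into an intensity image, can be sketched as a simple top-down orthographic binning (the function name and parameters are illustrative; the subsequent feature matching and collinearity solve are omitted):

```python
def intensity_image(points, res=1.0):
    # points: iterable of (x, y, z, intensity); res: ground size of one pixel.
    # Average the intensity of all points falling into each grid cell.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    w = int((max(xs) - x0) // res) + 1
    h = int((max(ys) - y0) // res) + 1
    img = [[0.0] * w for _ in range(h)]
    cnt = [[0] * w for _ in range(h)]
    for x, y, z, inten in points:
        c, r = int((x - x0) // res), int((y - y0) // res)
        img[r][c] += inten
        cnt[r][c] += 1
    for r in range(h):
        for c in range(w):
            if cnt[r][c]:
                img[r][c] /= cnt[r][c]
    return img
```

Once both the intensity image and the optical image exist, standard feature detectors (e.g. SIFT) can supply the corresponding points from which the exterior orientation is solved.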


2021 ◽  
Vol 10 (11) ◽  
pp. 762
Author(s):  
Kaisa Jaalama ◽  
Heikki Kauhanen ◽  
Aino Keitaanniemi ◽  
Toni Rantanen ◽  
Juho-Pekka Virtanen ◽  
...  

The importance of ensuring the adequacy of urban ecosystem services and green infrastructure has been widely highlighted in multidisciplinary research. Meanwhile, the consolidation of cities has been a dominant trend in urban development and has led to the development and implementation of the green factor tool in cities such as Berlin, Melbourne, and Helsinki. In this study, elements of the green factor tool were monitored with laser-scanned and photogrammetrically derived point cloud datasets encompassing a yard in Espoo, Finland. The results show that with the support of 3D point clouds, it is possible to support the monitoring of the local green infrastructure, including elements of smaller size in green areas and yards. However, point clouds generated by distinct means have differing abilities in conveying information on green elements, and canopy covers, for example, might hinder these abilities. Additionally, some green factor elements are more promising for 3D measurement-based monitoring than others, such as those with clear geometrical form. The results encourage the involvement of 3D measuring technologies for monitoring local urban green infrastructure (UGI), also of small scale.


2010 ◽  
Vol 22 (2) ◽  
pp. 158-166 ◽  
Author(s):  
Taro Suzuki ◽  
Yoshiharu Amano ◽  
Takumi Hashizume

This paper describes outdoor localization for a mobile robot using a laser scanner and three-dimensional (3D) point cloud data. A Mobile Mapping System (MMS) measures outdoor 3D point clouds easily and precisely. The full six-dimensional state of a mobile robot is estimated by combining dead reckoning and 3D point cloud data. Two-dimensional (2D) position and orientation are extended to 3D using 3D point clouds, assuming that the mobile robot remains in continuous contact with the road surface. Our approach applies a particle filter to correct the position error of the laser measurement model in 3D point cloud space. Field experiments were conducted to evaluate the accuracy of the proposed method, and the results confirmed that a localization precision of 0.2 m (RMS) is achievable.
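The predict-weight-resample loop of such a particle filter can be sketched in a simplified one-dimensional form (the function name, the single landmark, and the shared noise parameter are illustrative, not from the paper):

```python
import math
import random

def particle_filter_step(particles, control, measurement, landmark, noise=0.1):
    # 1) Predict: apply the dead-reckoning control with process noise.
    moved = [p + control + random.gauss(0.0, noise) for p in particles]
    # 2) Weight: likelihood of the range measurement to a known landmark
    #    under a Gaussian measurement model (stand-in for the laser model).
    weights = [math.exp(-0.5 * ((landmark - p) - measurement) ** 2 / noise ** 2)
               for p in moved]
    total = sum(weights)
    if total == 0.0:
        # No particle explains the measurement: fall back to uniform weights.
        weights = [1.0 / len(moved)] * len(moved)
    else:
        weights = [w / total for w in weights]
    # 3) Resample proportionally to weight.
    return random.choices(moved, weights=weights, k=len(moved))
```

In the paper's setting the state is six-dimensional and the weight comes from matching the laser scan against the 3D point cloud map, but the three-step structure is the same.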


Author(s):  
M. Weinmann ◽  
A. Schmidt ◽  
C. Mallet ◽  
S. Hinz ◽  
F. Rottensteiner ◽  
...  

The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components in the processing workflow have been investigated extensively, but separately, in recent years, the respective connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (<i>i</i>) individually optimized 3D neighborhoods for (<i>ii</i>) the extraction of distinctive geometric features and (<i>iii</i>) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process and that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
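The eigenvalue-based geometric features that such pointwise classification typically builds on can be sketched in a reduced 2D form (the 2D restriction and all names are illustrative; the paper works with full 3D neighborhoods):

```python
import math

def eig2(cov):
    # Closed-form eigenvalues of a symmetric 2x2 covariance [[a, b], [b, c]].
    a, b, c = cov
    tr, det = a + c, a * c - b * b
    d = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 + d, tr / 2.0 - d  # lambda1 >= lambda2

def neighborhood_features(points, center, k):
    # Take the k nearest neighbours of `center`, build their covariance,
    # and derive eigenvalue-based shape descriptors such as linearity.
    nn = sorted(points,
                key=lambda p: (p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2)[:k]
    mx = sum(p[0] for p in nn) / k
    my = sum(p[1] for p in nn) / k
    a = sum((p[0] - mx) ** 2 for p in nn) / k
    c = sum((p[1] - my) ** 2 for p in nn) / k
    b = sum((p[0] - mx) * (p[1] - my) for p in nn) / k
    l1, l2 = eig2((a, b, c))
    return {"linearity": (l1 - l2) / (l1 + 1e-12)}
```

Choosing `k` per point rather than globally is exactly the "individually optimized neighborhood" question the paper addresses: a neighborhood that is too small or too large blurs these descriptors.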


2020 ◽  
Vol 12 (11) ◽  
pp. 1729 ◽  
Author(s):  
Saifullahi Aminu Bello ◽  
Shangshu Yu ◽  
Cheng Wang ◽  
Jibril Muhmmad Adam ◽  
Jonathan Li

A point cloud is a set of points defined in a 3D metric space. Point clouds have become one of the most significant data formats for 3D representation and are gaining popularity owing to the increasing availability of acquisition devices and their growing application in areas such as robotics, autonomous driving, and augmented and virtual reality. Deep learning is now the most powerful tool for data processing in computer vision and is becoming the preferred technique for tasks such as classification, segmentation, and detection. While deep learning techniques are mainly applied to data with a structured grid, point clouds are unstructured, which makes their direct processing with deep learning very challenging. This paper reviews recent state-of-the-art deep learning techniques, mainly focusing on raw point cloud data. The initial work on deep learning directly with raw point cloud data did not model local regions; therefore, subsequent approaches model local regions through sampling and grouping. More recently, several approaches have been proposed that not only model the local regions but also explore the correlation between points in the local regions. From the survey, we conclude that approaches that model local regions and take into account the correlation between points in those regions perform better. Contrary to existing reviews, this paper provides a general structure for learning with raw point clouds, and various methods are compared based on this general structure. This work also introduces the popular 3D point cloud benchmark datasets and discusses the application of deep learning in popular 3D vision tasks, including classification, segmentation, and detection.
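The sampling-and-grouping stage that these networks use to model local regions, farthest point sampling followed by ball grouping in PointNet++-style architectures, can be sketched as follows (names are illustrative):

```python
def dist2(p, q):
    # Squared Euclidean distance between two points of any dimension.
    return sum((a - b) ** 2 for a, b in zip(p, q))

def farthest_point_sampling(points, m):
    # Greedy FPS: repeatedly pick the point farthest from the chosen set,
    # giving m well-spread region centres.
    chosen = [points[0]]
    d = [dist2(p, chosen[0]) for p in points]
    for _ in range(m - 1):
        i = max(range(len(points)), key=lambda j: d[j])
        chosen.append(points[i])
        d = [min(d[j], dist2(points[j], points[i])) for j in range(len(points))]
    return chosen

def group_points(points, centers, radius):
    # Ball grouping: collect, for each centre, all points within `radius`.
    r2 = radius * radius
    return [[p for p in points if dist2(p, c) <= r2] for c in centers]
```

Each group is then fed through a shared point-wise network and pooled, which is how local structure enters an otherwise unstructured representation.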


2019 ◽  
Vol 53 (2) ◽  
pp. 487-504 ◽  
Author(s):  
Abdul Rahman El Sayed ◽  
Abdallah El Chakik ◽  
Hassan Alabboud ◽  
Adnan Yassine

Many computer vision approaches for point cloud processing consider 3D simplification an important preprocessing phase. At the same time, the large amount of point cloud data describing a 3D object requires excessively large storage and long processing times. In this paper, we present an efficient simplification method for 3D point clouds using a weighted graph representation that optimizes the point cloud while maintaining the characteristics of the initial data. The method detects the feature regions that describe the geometry of the surface; these regions are detected using the saliency degree of vertices. Then, we define feature points in each feature region and remove redundant vertices. Finally, we show the robustness of our method via different experimental results. Moreover, we study the stability of our method with respect to noise.
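A saliency-driven simplification of this flavour can be sketched as follows; the saliency measure used here (distance of a vertex from the centroid of its k nearest neighbours) is a simple stand-in for the paper's graph-based saliency degree, and the names are illustrative:

```python
def simplify(points, k=4, keep_ratio=0.5):
    # Rank vertices by saliency and keep only the most salient fraction.
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    def saliency(p):
        # A vertex on a flat stretch sits at the centroid of its neighbours
        # (saliency ~ 0); a corner or boundary vertex deviates from it.
        nn = sorted(points, key=lambda q: dist2(p, q))[1:k + 1]
        cx = sum(q[0] for q in nn) / k
        cy = sum(q[1] for q in nn) / k
        return dist2(p, (cx, cy))

    ranked = sorted(points, key=saliency, reverse=True)
    return ranked[:max(1, int(len(points) * keep_ratio))]
```

Redundant vertices in flat regions score near zero and are removed first, while geometric features such as corners survive the simplification.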


2021 ◽  
Vol 2107 (1) ◽  
pp. 012003
Author(s):  
N I Boslim ◽  
S A Abdul Shukor ◽  
S N Mohd Isa ◽  
R Wong

Abstract 3D point clouds are sets of point coordinates that can be obtained using a sensing device such as a Terrestrial Laser Scanner (TLS). Because a TLS is highly capable of collecting data and produces a dense point cloud of its surroundings, segmentation is needed to extract information from the massive point cloud, which contains different types of objects apart from the object of interest. The Bell Tower of Tawau, Sabah was chosen as the object of interest for studying the performance of different classifiers in segmenting the point cloud data. A state-of-the-art TLS was used to collect the data. The aim of this research is to segment the point cloud data of the historical building from its scene using two different classifiers and to study their performance. Two classifiers commonly used for segmenting point clouds of objects of interest such as buildings are tested here: Random Forest (RF) and k-Nearest Neighbour (kNN). The results show that the Random Forest classifier performs better than the k-Nearest Neighbour classifier in segmenting the point cloud data representing the historic building.
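The kNN side of such a comparison is simple enough to sketch from scratch (the toy features and labels are invented for illustration; in practice one would use library implementations of both Random Forest and kNN on real per-point features):

```python
from collections import Counter

def knn_predict(train, labels, query, k=3):
    # Classify `query` by majority vote among its k nearest training points.
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    idx = sorted(range(len(train)), key=lambda i: dist2(train[i], query))[:k]
    return Counter(labels[i] for i in idx).most_common(1)[0][0]
```

Applied per point with geometric features as input, this yields a segmentation of the cloud into classes such as "building" and "ground"; a Random Forest replaces the distance vote with an ensemble of decision trees over the same features.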


Author(s):  
R. Boerner ◽  
M. Kröhnert

3D point clouds, acquired by state-of-the-art terrestrial laser scanning techniques (TLS), provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data has no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images with point cloud data, either by mounting optical camera systems on top of laser scanners or by using ground control points.

The approach addressed in this paper aims to match 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage, free movement, benefits augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, produced by reverse projecting the 3D point cloud to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal, distortion-free camera.
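The synthetic-image generation step, reverse projection of 3D points through an ideal distortion-free pinhole camera, can be sketched as follows (the identity rotation and the function name are simplifying assumptions):

```python
def project_to_synthetic(points, camera_center, f, w, h):
    # Project world points into a w x h image through an ideal pinhole
    # camera at `camera_center` looking along +z (identity rotation assumed;
    # a full implementation applies the exterior orientation's rotation).
    tx, ty, tz = camera_center
    img = {}
    for x, y, z in points:
        zc = z - tz
        if zc <= 0:
            continue  # point behind the camera
        u = int(f * (x - tx) / zc + w / 2)
        v = int(f * (y - ty) / zc + h / 2)
        if 0 <= u < w and 0 <= v < h:
            # z-buffer: keep only the nearest point per pixel.
            if (u, v) not in img or zc < img[(u, v)]:
                img[(u, v)] = zc
    return img
```

The resulting depth (or intensity) raster is the "synthetic image" that is then matched against the real smartphone photograph.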

