Automatic Registration of Optical Images with Airborne LiDAR Point Cloud in Urban Scenes Based on Line-Point Similarity Invariant and Extended Collinearity Equations

Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 1086 ◽  
Author(s):  
Shubiao Peng ◽  
Liang Zhang

This paper proposes a novel method to achieve the automatic registration of optical images and Light Detection and Ranging (LiDAR) points in urban areas. The whole procedure, which adopts a coarse-to-precise registration strategy, can be summarized as follows: Coarse registration is performed through a conventional point-feature-based method. The points needed can be extracted from both datasets through a mature point extractor, such as the Förstner operator, followed by the extraction of straight lines. Considering that lines mainly come from building roof edges in urban scenes, and being aware of their inaccuracy when extracted from an irregularly spaced point cloud, an "infinitesimal feature analysis method" fully utilizing LiDAR scanning characteristics is proposed to refine edge lines. Points matched between the image and LiDAR data are then applied as guidance to search for matched lines via the line-point similarity invariant. Finally, a transformation function based on extended collinearity equations is applied to achieve precise registration. The experimental results show that the proposed method outperforms conventional ones in terms of registration accuracy and automation level.
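As background for the precise-registration step, the standard collinearity equations map a 3D ground point into image coordinates given the camera pose; the paper's *extended* form adds correction terms that the abstract does not specify, so this minimal sketch shows only the standard equations, with illustrative parameter names:

```python
import numpy as np

def collinearity_project(points, camera_xyz, rotation, focal):
    """Project 3D points (N x 3) into image coordinates with the
    standard collinearity equations (principal point at the origin).
    `rotation` rows are the camera axes in world coordinates."""
    d = points - camera_xyz          # vectors from the projection centre
    cam = d @ rotation.T             # rotate into the camera frame
    x = -focal * cam[:, 0] / cam[:, 2]
    y = -focal * cam[:, 1] / cam[:, 2]
    return np.stack([x, y], axis=1)
```

With an identity rotation and the camera at the origin, a point at (1, 1, −2) with focal length 1 projects to (0.5, 0.5), as expected for a pinhole model.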

GEOMATICA ◽  
2011 ◽  
Vol 65 (4) ◽  
pp. 375-385 ◽  
Author(s):  
Haiyan Guan ◽  
Jonathan Li ◽  
Michael A. Chapman

This paper presents an effective approach to integrating airborne lidar data and colour imagery acquired simultaneously for urban mapping. Texture and height information extracted from the lidar point cloud is integrated with the spectral channels of aerial imagery in an image segmentation process. The segmented polygons are then combined with the extracted geometric features (height difference between first and last return, eigenvalue-based local variation and filtered height data) and spectral features (line segments) in a supervised classifier. The results for two different urban areas in Toronto, Canada, demonstrate that a satisfactory overall accuracy of 84.96% and Kappa of 0.76 were achieved in Scene I, while a building detection rate of 92.11%, commission error of 2.10% and omission error of 9.25% were obtained in Scene II.
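Two of the geometric features named above are easy to illustrate. A minimal sketch, assuming a neighbourhood given as an N x 3 array (the abstract does not define the exact formulation, so the eigenvalue ratio below is one common definition of local variation):

```python
import numpy as np

def local_variation(neighborhood):
    """Eigenvalue-based local variation lambda_min / (l1 + l2 + l3) of a
    point neighbourhood: ~0 on planes, ~1/3 for isotropic scatter."""
    cov = np.cov(neighborhood.T)
    w = np.sort(np.linalg.eigvalsh(cov))   # ascending eigenvalues
    return w[0] / w.sum()

def first_last_height(first_return_z, last_return_z):
    """Height difference between first and last return (large in
    vegetation, near zero on bare surfaces)."""
    return first_return_z - last_return_z
```

On a perfectly planar neighbourhood the smallest eigenvalue vanishes, so the variation is zero; this is what lets the classifier separate flat roofs from canopy.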


Author(s):  
X.-F. Xing ◽  
M. A. Mostafavi ◽  
G. Edwards ◽  
N. Sabo

Abstract. Automatic semantic segmentation of point clouds observed in a complex 3D urban scene is a challenging issue. Semantic segmentation of urban scenes based on machine learning algorithms requires appropriate features to distinguish objects in mobile terrestrial and airborne LiDAR point clouds at the point level. In this paper, we propose a pointwise semantic segmentation method based on features derived from the Difference of Normals and on “directional height above” features, which compare the height difference between a given point and its neighbors in eight directions, in addition to features based on normal estimation. A random forest classifier is chosen to classify points in mobile terrestrial and airborne LiDAR point clouds. The results obtained from our experiments show that the proposed features are effective for semantic segmentation of mobile terrestrial and airborne LiDAR point clouds, especially for the vegetation, building and ground classes in airborne LiDAR point clouds of urban areas.


2014 ◽  
Vol 1 (4) ◽  
pp. 223-232 ◽  
Author(s):  
Hao Men ◽  
Kishore Pochiraju

Abstract This paper describes a variant of the extended Gaussian image-based registration algorithm for point clouds with surface color information. The method correlates distributions of surface normals for rotational alignment and grid occupancy for translational alignment, with hue filters applied during the construction of the surface normal histograms and occupancy grids. In this method, the size of the point cloud is reduced by a hue-based down-sampling that is independent of the point sample density or local geometry. Experimental results show that use of the hue filters increases the registration speed and improves the registration accuracy. The coarse rigid transformations determined in this step enable fine alignment with dense, unfiltered point clouds, e.g. using Iterative Closest Point (ICP) alignment techniques.
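The two colour-aware ingredients can be sketched in a few lines: a hue band-pass that keeps only points of a given colour (the density-independent down-sampling), and a histogram of normal directions, shown here in 1-D azimuth form as a stand-in for the full extended Gaussian image. All thresholds and bin counts are illustrative:

```python
import colorsys
import numpy as np

def hue_downsample(points, colors, hue_lo, hue_hi):
    """Keep only points whose hue (RGB in [0,1]) falls in [hue_lo, hue_hi)."""
    hues = np.array([colorsys.rgb_to_hsv(*c)[0] for c in colors])
    mask = (hues >= hue_lo) & (hues < hue_hi)
    return points[mask], colors[mask]

def normal_histogram(normals, n_bins=18):
    """Normalised histogram of surface-normal azimuths; correlating two
    such histograms under rotation gives the rotational alignment."""
    az = np.arctan2(normals[:, 1], normals[:, 0]) % (2 * np.pi)
    hist, _ = np.histogram(az, bins=n_bins, range=(0, 2 * np.pi))
    return hist / hist.sum()
```

Because the filter acts on colour alone, the surviving subset is unbiased with respect to sampling density, which is the property the abstract emphasises.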


Author(s):  
X. H. Chen ◽  
J. Q. Dai ◽  
Y. R. He ◽  
W. W. Ma

Abstract. Traditional electrical power line inspection suffers from high labor intensity, low efficiency and long re-inspection cycles. Airborne LiDAR can quickly acquire high-precision three-dimensional spatial information about a transmission line, and the collected data make it possible to accurately detect dangerous points. We propose to divide the data into multiple regions with a grid and apply an elevation-histogram statistical method to obtain the power line point cloud in complex mountainous terrain. Among the non-ground points, part of the vegetation point cloud is first separated according to point cloud dimensionality features; the power line points and pole points are then distinguished by point density characteristics, realizing the point cloud classification of the transmission line corridor. On this basis, safety distance detection is carried out between the extracted power line points and vegetation points, completing the early-warning analysis of dangerous tree-barrier points along the transmission line. The experimental results show that the method can classify the acquired power line corridor point cloud and extract complete power lines, effectively revealing hidden dangers, which has practical significance.
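The elevation-histogram idea within one grid cell can be sketched as follows: power-line points sit above an empty vertical gap in the histogram, so points above the first sufficiently tall run of empty bins are flagged as candidates. Bin size and gap threshold are illustrative, not from the paper:

```python
import numpy as np

def powerline_candidates(points, bin_size=1.0, z_gap=5.0):
    """Within one grid cell, return points lying above the first empty
    elevation-histogram gap of at least `z_gap` metres (a simplified
    sketch; real data would need further density-based filtering)."""
    z = points[:, 2]
    edges = np.arange(z.min(), z.max() + 2 * bin_size, bin_size)
    hist, _ = np.histogram(z, bins=edges)
    run_len = 0
    for i, count in enumerate(hist):
        if count == 0:
            run_len += 1
            if run_len * bin_size >= z_gap:
                cut = edges[i + 1]          # top of the empty run
                return points[z >= cut]
        else:
            run_len = 0
    return points[:0]                        # no gap: no candidates
```

For a cell with terrain points near the ground and wire returns 15 m up, only the wire returns survive the cut.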


Author(s):  
J. Niemeyer ◽  
F. Rottensteiner ◽  
U. Soergel ◽  
C. Heipke

We propose a novel hierarchical approach for the classification of airborne 3D lidar points. Spatial and semantic context is incorporated via a two-layer Conditional Random Field (CRF). The first layer operates on a point level and utilises higher-order cliques. Segments are generated from the labelling obtained in this way. They are the entities of the second layer, which incorporates larger-scale context. The classification result of the segments is introduced as an energy term for the next iteration of the point-based layer. This framework iterates and mutually propagates context to improve the classification results. Potentially wrong decisions can be revised at later stages. The output is a labelled point cloud as well as segments roughly corresponding to object instances. Moreover, we present two new contextual features for the segment classification: the distance and the orientation of a segment with respect to the closest road. It is shown that the classification benefits from these features. In our experiments, the hierarchical framework improves the overall accuracies by 2.3% on a point-based level and by 3.0% on a segment-based level, respectively, compared to a purely point-based classification.
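A much-simplified stand-in for the segment layer's feedback: each segment summarised by the majority label of its points, which would then re-enter the point layer as a prior (the actual paper uses CRF energy terms, not a hard vote):

```python
import numpy as np

def segment_feedback(point_labels, segment_ids, n_classes):
    """Map each segment id to the majority class of its points -- a toy
    version of the segment-level result fed back to the point layer."""
    seg_label = {}
    for s in np.unique(segment_ids):
        counts = np.bincount(point_labels[segment_ids == s], minlength=n_classes)
        seg_label[int(s)] = int(counts.argmax())
    return seg_label
```

Even this crude vote shows how a segment can overrule isolated mislabelled points inside it, which is the mechanism that lets the full framework revise wrong decisions in later iterations.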


Author(s):  
R. Boerner ◽  
Y. Xu ◽  
L. Hoegner ◽  
R. Baran ◽  
F. Steinbacher ◽  
...  

This paper presents a method to register photogrammetric point clouds generated from optical images acquired by UAV with aerial LIDAR point clouds. Normally, the registration of two airborne scans of the same scene is solved by the use of control points or by direct registration using GNSS and INS information. However, the registration of multi-sensor point clouds without control points is more complicated and challenging. For non-urban scenes, the registration task gets even more complicated, because it is hard to extract sufficient geometric primitives from building structures. For our proposed method, an outdoor scene providing almost no man-made objects is tested, so it is nearly impossible to search for planar objects and use them for registration. With no geometric primitives extracted, the proposed method utilizes the structure of a 2.5D DEM created from the ground points of the point cloud. Moreover, instead of using control points or key points, the method automatically detects key planes in the 2.5D DEM as correspondences. These key planes are detected on a regular grid by the use of a predefined mask; to mark a DEM grid cell as a key plane, a histogram of the sums of angles with respect to the center cell is used. Afterwards, similarity values between two key planes are calculated based on the histogram differences, and a RANSAC-based strategy is adopted to find corresponding key planes and estimate the transformation parameters. Experiments conducted in this paper indicate that it is feasible to register multi-sensor point clouds with a large difference in their ground sampling distances with respect to the used cell size of the 2.5D DEM.
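A minimal sketch of the per-cell descriptor and the similarity score, assuming a 3 x 3 mask (the paper's mask size and binning are not given in the abstract): the histogram of slope angles from a DEM cell to its neighbours characterises the local terrain shape, and two key planes are compared by histogram distance.

```python
import numpy as np

def cell_angle_histogram(dem, r, c, n_bins=9):
    """Normalised histogram of slope angles between DEM cell (r, c) and
    its 8 neighbours (grid-cell units for the horizontal run)."""
    z0 = dem[r, c]
    angles = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            dz = dem[r + dr, c + dc] - z0
            angles.append(np.arctan2(dz, np.hypot(dr, dc)))
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi / 2, np.pi / 2))
    return hist / hist.sum()

def plane_similarity(h1, h2):
    """Similarity in [0, 1] from the L1 histogram difference; candidate
    pairs scoring high would be fed to the RANSAC matching stage."""
    return 1.0 - 0.5 * np.abs(h1 - h2).sum()
```

On flat terrain every angle is zero, so the histogram collapses into the central bin and any two flat cells score a similarity of 1, which is why the method relies on terrain relief rather than flat areas.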


2019 ◽  
Vol 11 (14) ◽  
pp. 1727 ◽  
Author(s):  
Elyta Widyaningrum ◽  
Ben Gorte ◽  
Roderik Lindenbergh

Many urban applications require building polygons as input. However, manual extraction from point cloud data is time- and labor-intensive. The Hough transform is a well-known procedure for extracting line features. Unfortunately, current Hough-based approaches lack the flexibility to effectively extract outlines from arbitrary buildings. We found that available point order information is actually never used. Using ordered building edge points allows us to present a novel ordered points-aided Hough Transform (OHT) for extracting high-quality building outlines from an airborne LiDAR point cloud. First, a Hough accumulator matrix is constructed based on a voting scheme in parametric line space (θ, r). The variance of angles in each column is used to determine dominant building directions. We propose a hierarchical filtering and clustering approach to obtain accurate lines based on detected hotspots and ordered points. An Ordered Point List matrix consisting of ordered building edge points enables the detection of line segments of arbitrary direction, resulting in high-quality building roof polygons. We tested our method on three datasets of different characteristics: a new dataset in Makassar, Indonesia, and two benchmark datasets in Vaihingen, Germany. To the best of our knowledge, our algorithm is the first Hough method that is highly adaptable, since it works for buildings with edges of different lengths and arbitrary relative orientations. The results prove that our method delivers high completeness (between 90.1% and 96.4%) and correctness percentages (all over 96%). The positional accuracy of the building corners is between 0.2 and 0.57 m RMSE. The quality rate (89.6%) for the Vaihingen-B benchmark outperforms all existing state-of-the-art methods. Other solutions for the challenging Vaihingen-A dataset are not yet available, while we achieve a quality score of 93.2%.
Results with arbitrary directions are demonstrated on the complex buildings around the EYE museum in Amsterdam.
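The first step, voting edge points into a (θ, r) accumulator, is standard Hough machinery and can be sketched directly; the ordered-point bookkeeping that makes OHT novel is not reproduced here, and the resolution parameters are illustrative:

```python
import numpy as np

def hough_accumulator(points, n_theta=180, r_step=0.5):
    """Vote 2-D edge points (N x 2) into a (theta, r) accumulator using
    the normal-form line parameterisation r = x cos(t) + y sin(t)."""
    thetas = np.deg2rad(np.arange(n_theta))
    r = points[:, 0, None] * np.cos(thetas) + points[:, 1, None] * np.sin(thetas)
    r_max = np.abs(r).max() + r_step
    n_r = int(2 * r_max / r_step) + 1
    acc = np.zeros((n_theta, n_r), dtype=int)
    r_idx = ((r + r_max) / r_step).astype(int)
    for t in range(n_theta):
        np.add.at(acc[t], r_idx[:, t], 1)    # one vote per point per theta
    return acc, thetas
```

Three collinear points on the vertical line x = 2 all vote into the same bin at θ = 0, producing the hotspot that the hierarchical filtering stage would then refine into a line.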


Author(s):  
E. Maset ◽  
B. Padova ◽  
A. Fusiello

Abstract. Nowadays, we are witnessing an increasing availability of large-scale airborne LiDAR (Light Detection and Ranging) data, which greatly improves our knowledge of urban areas and the natural environment. In order to extract useful information from these massive point clouds, appropriate data processing is required, including point cloud classification. In this paper we present a deep learning method to efficiently perform the classification of large-scale LiDAR data, ensuring a good trade-off between speed and accuracy. The algorithm projects the point cloud into a two-dimensional image, where every pixel stores the height, intensity, and echo information of the points falling in the pixel. The image is then segmented by a Fully Convolutional Network (FCN), assigning a label to each pixel and, consequently, to the corresponding point. In particular, the proposed approach is applied to process a dataset of 7700 km2 that covers the entire Friuli Venezia Giulia region (Italy), distinguishing among five classes (ground, vegetation, roof, overground and power line) with an overall accuracy of 92.9%.
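The projection step can be sketched as a simple rasterisation: each pixel aggregates the points that fall into it. Here two channels are shown (maximum height, mean intensity); the echo channel and the aggregation rules actually used by the paper are not specified in the abstract, so these choices are assumptions:

```python
import numpy as np

def rasterize(points, intensity, cell=1.0):
    """Project a point cloud (N x 3) onto a 2-D grid: per pixel, the
    maximum height and the mean intensity of its points."""
    ij = ((points[:, :2] - points[:, :2].min(axis=0)) / cell).astype(int)
    h, w = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    height = np.full((h, w), np.nan)         # NaN marks empty pixels
    inten_sum = np.zeros((h, w))
    count = np.zeros((h, w))
    for (x, y), z, it in zip(ij, points[:, 2], intensity):
        height[y, x] = z if np.isnan(height[y, x]) else max(height[y, x], z)
        inten_sum[y, x] += it
        count[y, x] += 1
    mean_int = np.divide(inten_sum, count,
                         out=np.zeros_like(inten_sum), where=count > 0)
    return height, mean_int
```

Once the cloud is an image, any off-the-shelf FCN segmenter applies, and per-pixel labels transfer back to the contributing points, which is what makes the approach fast at regional scale.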


Author(s):  
Yi-Chen Chen ◽  
Chao-Hung Lin

With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly using a shape descriptor. However, most geometric descriptors in related work are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems, owing to their efficient scene scanning and spatial information collection. Using point clouds, with their sparse, noisy, and incomplete sampling, as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts in an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds of input queries and the 3D models in databases. The main goal of the data encoding is that models in the database and input point clouds can be consistently encoded. First, top-view depth images of buildings are generated to represent the geometry of the building roof. Second, geometric features are extracted from the depth images based on the height, edges, and planes of the building. Finally, descriptors are extracted via spatial histograms and used in the 3D model retrieval system. For retrieval, models are matched by comparing the encoding coefficients of point clouds and building models. In the experiments, a database of about 900,000 3D models collected from the Internet is used to evaluate retrieval.
The results of the proposed method show a clear superiority over related methods.
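The encode-then-match pipeline can be sketched with a deliberately simple descriptor, a height histogram of the top-view depth image, standing in for the paper's richer height/edge/plane features; the L1 ranking and bin count are illustrative assumptions:

```python
import numpy as np

def height_descriptor(depth_img, n_bins=8):
    """Normalised height histogram of a top-view depth image (NaN marks
    empty pixels) -- a simple stand-in for the paper's descriptors."""
    valid = depth_img[~np.isnan(depth_img)]
    lo, hi = valid.min(), valid.max()
    hist, _ = np.histogram(valid, bins=n_bins, range=(lo, hi + 1e-9))
    return hist / hist.sum()

def retrieve(query_desc, db_descs):
    """Rank database models by L1 distance to the query descriptor;
    index 0 of the result is the best match."""
    d = np.abs(np.asarray(db_descs) - query_desc).sum(axis=1)
    return np.argsort(d)
```

Because the query point cloud and the database models pass through the same encoder, a noisy LiDAR roof and its clean polygonal counterpart land near each other in descriptor space, which is the consistency property the abstract highlights.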

