Automatic Road Marking Extraction and Vectorization from Vehicle-Borne Laser Scanning Data

2021 ◽  
Vol 13 (13) ◽  
pp. 2612
Author(s):  
Lianbi Yao ◽  
Changcai Qin ◽  
Qichao Chen ◽  
Hangbin Wu

Automatic driving technology is becoming one of the main areas of development for future intelligent transportation systems. The high-precision map, which is an important supplement to the on-board sensors during shielding or limited observation distance, provides a priori information for high-precision positioning and path planning in automatic driving. The position and semantic information of the road markings, such as absolute coordinates of the solid lines and dashed lines, are the basic components of the high-precision map. In this paper, we study the automatic extraction and vectorization of road markings. Firstly, scan lines are extracted from the vehicle-borne laser point cloud data, and the pavement is extracted from scan lines according to the geometric mutation at the road boundary. On this basis, the pavement point clouds are transformed into raster images with a certain resolution by using the method of inverse distance weighted interpolation. An adaptive threshold segmentation algorithm is used to convert the raster images into binary images. Adaptive threshold segmentation is followed by Euclidean clustering, which extracts road-marking point clouds from the binary image. Solid lines are detected by feature attribute filtering. All of the solid lines and guidelines in the sample data are correctly identified. The deep learning network framework PointNet++ is used for semantic recognition of the remaining road markings, including dashed lines, guidelines and arrows. Finally, the vectorization of the identified solid lines and dashed lines is carried out based on a line segmentation self-growth algorithm. The vectorization of the identified guidelines is carried out according to an alpha shape algorithm. Point cloud data from four experimental areas are used for road marking extraction and identification.
The F-scores of the identification of dashed lines, guidelines, straight arrows and right turn arrows are 0.97, 0.66, 0.84 and 1, respectively.
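The rasterization step in the pipeline above, converting pavement points into an intensity image by inverse distance weighted (IDW) interpolation, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; function and parameter names are ours, and each point is weighted against the centre of its own grid cell.

```python
import numpy as np

def idw_raster(points, intensity, resolution=0.05, power=2.0):
    """Rasterize scattered pavement points (x, y) into an intensity grid
    by inverse-distance-weighted interpolation (illustrative sketch)."""
    xy = points[:, :2]
    x0, y0 = xy.min(axis=0)
    cols = np.floor((xy[:, 0] - x0) / resolution).astype(int)
    rows = np.floor((xy[:, 1] - y0) / resolution).astype(int)
    h, w = rows.max() + 1, cols.max() + 1
    num = np.zeros((h, w))   # accumulated weighted intensity
    den = np.zeros((h, w))   # accumulated weights
    # distance from each point to the centre of its grid cell
    cx = (cols + 0.5) * resolution + x0
    cy = (rows + 0.5) * resolution + y0
    d = np.hypot(xy[:, 0] - cx, xy[:, 1] - cy) + 1e-6
    wgt = 1.0 / d ** power
    np.add.at(num, (rows, cols), wgt * intensity)
    np.add.at(den, (rows, cols), wgt)
    with np.errstate(invalid="ignore"):
        img = num / den      # empty cells become NaN
    return img
```

Cells that receive no points stay NaN, which downstream binarization would need to mask out.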

Author(s):  
L. Yao ◽  
C. Qin ◽  
Q. Chen ◽  
H. Wu ◽  
S. Zhang

Abstract. At present, automatic driving technology has become one of the development directions of future intelligent transportation systems. The high-precision map, which is an important supplement to the on-board sensors under shielding or restricted observation distance, provides a priori information for high-precision positioning and path planning in automatic driving at level L3 and above. The position and semantic information of the road markings, such as the absolute coordinates of the solid line and the broken line, are the basic components of the high-precision map. At present, point cloud data remain one of the most important data sources for the high-precision map, so automatically extracting road-marking information from raw point clouds deserves study. In this paper, the point cloud is sliced by the mileage of the road, and each slice is projected onto its vertical section. The Random Sample Consensus (RANSAC) algorithm is applied to establish a road-surface buffer area. Finally, moving-window filtering is used to extract the road-surface point cloud from the buffer area. On this basis, the road-surface point cloud is transformed into a raster image with a certain resolution using inverse distance weighted interpolation, and the raster image is converted into a binary image using adaptive threshold segmentation based on the integral image. Euclidean clustering is then used to extract the road-marking point cloud from the binary image. Characteristic attribute detection is applied to recognize solid-line markings among all clusters. The deep learning network framework PointNet++ is applied to recognize the remaining road markings, including guidelines, broken lines, straight arrows, and right turn arrows.
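The RANSAC step described above, fitting a plane to establish the road-surface buffer, can be sketched in a few lines. This is a generic RANSAC plane fit as an illustration, not the paper's exact slicing-plus-buffer procedure; names and tolerances are ours.

```python
import numpy as np

def ransac_plane(pts, n_iter=200, tol=0.05, rng=None):
    """Fit a plane to a point set with RANSAC.
    Returns ((normal, d), inlier_mask); plane is n.x + d = 0."""
    rng = np.random.default_rng(rng)
    best_mask = np.zeros(len(pts), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        dist = np.abs(pts @ n + d)          # point-to-plane distances
        mask = dist < tol
        if mask.sum() > best_mask.sum():    # keep the largest consensus set
            best_mask, best_model = mask, (n, d)
    return best_model, best_mask
```

Points within `tol` of the winning plane would then form the buffer from which moving-window filtering extracts the final road surface.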


Author(s):  
L. Yao ◽  
Q. Chen ◽  
C. Qin ◽  
H. Wu ◽  
S. Zhang

With the development of intelligent transportation, high-precision road information has been widely applied in many fields. This paper proposes a concise and practical way to extract road-marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. First, the road surface is segmented through edge detection on scan lines. Then an intensity image is generated by inverse distance weighted (IDW) interpolation, and the road markings are extracted using adaptive threshold segmentation based on the integral image, without intensity calibration. Noise is further reduced by removing small plaque-like pixel clusters from the binary image. Finally, the point cloud mapped from the binary image is clustered into marking objects according to Euclidean distance, and a series of algorithms, including template matching and feature-attribute filtering, is used to classify linear markings, arrow markings, and guidelines. Processing the point cloud data collected by a RIEGL VUX-1 in the case area shows that the F-score of marking extraction is 0.83 and the average classification rate is 0.9.
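The integral-image adaptive thresholding used in both pipelines above can be sketched as follows. This is a Bradley-style mean threshold as a minimal NumPy illustration; the window size and bias factor are our own assumed parameters, not values from the papers.

```python
import numpy as np

def adaptive_threshold(img, win=15, bias=0.9):
    """Binarize an intensity image: a pixel is foreground when it exceeds
    `bias` times the local window mean, computed in O(1) per pixel from an
    integral image (illustrative sketch)."""
    h, w = img.shape
    # integral image with a zero row/column prepended: ii[i, j] = sum img[:i, :j]
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    r = win // 2
    y0 = np.clip(np.arange(h) - r, 0, h); y1 = np.clip(np.arange(h) + r + 1, 0, h)
    x0 = np.clip(np.arange(w) - r, 0, w); x1 = np.clip(np.arange(w) + r + 1, 0, w)
    Y0, X0 = np.meshgrid(y0, x0, indexing="ij")
    Y1, X1 = np.meshgrid(y1, x1, indexing="ij")
    area = (Y1 - Y0) * (X1 - X0)            # window shrinks at the borders
    s = ii[Y1, X1] - ii[Y0, X1] - ii[Y1, X0] + ii[Y0, X0]
    return img > bias * s / area
```

Because the threshold adapts to the local mean, bright markings survive even when the intensity image is uncalibrated, which is the point made in the abstract above.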


2021 ◽  
Vol 13 (21) ◽  
pp. 4382
Author(s):  
Ziyang Wang ◽  
Lin Yang ◽  
Yehua Sheng ◽  
Mi Shen

Real-time acquisition and intelligent classification of pole-like street-object point clouds are of great significance in the construction of smart cities. Efficient point cloud processing technology for road scenes can accelerate the development of intelligent transportation and promote the development of high-precision maps. However, available algorithms suffer from incomplete extraction and low recognition accuracy for pole-like objects. In this paper, we propose a segmentation method for pole-like objects under geometric structural constraints, and for classification we fuse the results obtained at different scales. First, the point cloud data, excluding ground points, were divided into voxels, and the rod-shaped parts of the pole-like objects were extracted according to vertical continuity. Second, voxel-based region growing seeded from the rod part retained the non-rod parts of the pole-like objects, with a one-way double-coding strategy adopted to preserve details. Spatially overlapping entities were divided using multi-rule supervoxels. Finally, a random forest model classified the pole-like objects based on local- and global-scale features, and the two classification results at the different scales were fused to obtain the final result. Experiments showed that the proposed method can effectively extract pole-like objects from point clouds of road scenes, achieving high-precision classification and identification on lightweight data; the approach can also inform the processing of larger datasets.
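The vertical-continuity test for finding rod parts can be sketched as a voxel-column scan: a column of (x, y) voxels is a pole seed when its occupied z-voxels form a long enough consecutive run. This is our own minimal reading of the step, with assumed voxel size and run length, not the authors' code.

```python
import numpy as np

def pole_seed_columns(pts, voxel=0.2, min_run=8):
    """Return the (ix, iy) voxel columns whose occupied z-voxels contain a
    vertical run of at least `min_run` consecutive cells (rod-part test)."""
    idx = np.floor(pts / voxel).astype(int)
    cols = {}
    for ix, iy, iz in idx:                 # group occupied z-cells per column
        cols.setdefault((ix, iy), set()).add(iz)
    seeds = []
    for key, zs in cols.items():
        zs = sorted(zs)
        run = best = 1
        for a, b in zip(zs, zs[1:]):       # longest consecutive-integer run
            run = run + 1 if b == a + 1 else 1
            best = max(best, run)
        if best >= min_run:
            seeds.append(key)
    return seeds
```

In the full method these seed columns would then be grown outward voxel by voxel to recover the non-rod parts (lamp heads, signs) attached to each pole.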


2021 ◽  
Vol 87 (9) ◽  
pp. 639-648
Author(s):  
Chengming Ye ◽  
Hongfu Li ◽  
Ruilong Wei ◽  
Lixuan Wang ◽  
Tianbo Sui ◽  
...  

Due to the large volume and high redundancy of point clouds, road-marking extraction algorithms face many challenges, especially with uneven lidar point clouds. To extract road markings efficiently, this study presents a novel method for handling the uneven density distribution of point clouds and the high reflection intensity of road markings. The method first segments the point-cloud data into blocks perpendicular to the vehicle trajectory. Then it applies a double adaptive intensity-threshold method to extract road markings from road surfaces. Finally, it performs adaptive spatial density filtering based on the density distribution of the point-cloud data to remove false road-marking points. The average completeness, correctness, and F-measure of road-marking extraction are 0.827, 0.887, and 0.854, respectively, indicating that the proposed method is efficient and robust.
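The final filtering step, rejecting candidate marking points that sit in abnormally sparse neighbourhoods, can be illustrated with a simple density filter. This is a stand-in sketch, not the paper's adaptive filter: we compare each point's neighbour count to a fraction of the median count, and the radius and fraction are assumed values.

```python
import numpy as np

def density_filter(pts, radius=0.1, min_frac=0.5):
    """Drop candidate marking points whose neighbour count within `radius`
    falls below `min_frac` of the median count (brute-force O(n^2) sketch,
    fine for small candidate sets)."""
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    counts = (d2 < radius ** 2).sum(1) - 1       # exclude the point itself
    keep = counts >= min_frac * np.median(counts)
    return pts[keep], keep
```

A production version would use a spatial index (e.g. a k-d tree) instead of the dense distance matrix, and would adapt the threshold to the local scan-line density as the paper describes.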



Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 884
Author(s):  
Chia-Ming Tsai ◽  
Yi-Horng Lai ◽  
Yung-Da Sun ◽  
Yu-Jen Chung ◽  
Jau-Woei Perng

Numerous sensors can obtain images or point cloud data on land; underwater, however, the rapid attenuation of electromagnetic signals and the lack of light restrict sensing. This study expands the use of two- and three-dimensional detection technologies to the underwater detection of abandoned tires. A three-dimensional acoustic sensor, the BV5000, is used to collect underwater point cloud data. Pre-processing steps are proposed to remove noise and the seabed from the raw data. The point clouds are then processed into two data types: a 2D image and a 3D point cloud. Deep learning methods of matching dimensionality are used to train the models. In the two-dimensional method, the point cloud is transformed into a bird's-eye-view image, and the Faster R-CNN and YOLOv3 network architectures are used to detect tires. In the three-dimensional method, the point cloud associated with a tire is cut out of the raw data and used as training data, and the PointNet and PointConv network architectures are used for tire classification. The results show that both approaches provide good accuracy.
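The bird's-eye-view conversion that feeds the 2D detectors above can be sketched as a projection of the point cloud onto a height image. This is a generic BEV rasterization as an illustration; the ranges, resolution, and choice of max-height encoding are our assumptions, not the study's settings.

```python
import numpy as np

def bev_image(pts, x_range=(0, 10), y_range=(-5, 5), res=0.05):
    """Project a 3-D point cloud (x, y, z) onto a bird's-eye-view image
    whose pixel value is the maximum z in each cell (illustrative sketch;
    cells with no points, or only sub-zero heights, stay at 0)."""
    m = ((pts[:, 0] >= x_range[0]) & (pts[:, 0] < x_range[1]) &
         (pts[:, 1] >= y_range[0]) & (pts[:, 1] < y_range[1]))
    p = pts[m]                                   # crop to the region of interest
    col = ((p[:, 0] - x_range[0]) / res).astype(int)
    row = ((p[:, 1] - y_range[0]) / res).astype(int)
    h = int(round((y_range[1] - y_range[0]) / res))
    w = int(round((x_range[1] - x_range[0]) / res))
    img = np.zeros((h, w), dtype=np.float32)
    np.maximum.at(img, (row, col), p[:, 2])      # keep max height per cell
    return img
```

The resulting single-channel image can be stacked with density or intensity channels before being passed to an image detector such as YOLOv3.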


Author(s):  
Y. Hori ◽  
T. Ogawa

The implementation of laser scanning in the field of archaeology provides an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together in what are referred to as "point clouds". Visualizations of the point cloud data, which can be used in the final report by archaeologists and architects, are usually produced as JPG or TIFF files. Beyond visualization, the re-examination of older data and new remote-sensing surveys of Roman construction, with precise and detailed measurements, yield information that may lead to revising drawings of ancient buildings which had previously been adduced as evidence without any consideration of their degree of accuracy, and can thus open new lines of research on ancient buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy, and flexibility of data manipulation. We therefore "skipped" much of the post-processing and focused on the images created from the metadata, aligned simply with a tool that extends an automatic feature-matching algorithm and rendered with a popular renderer.


2021 ◽  
Vol 10 (9) ◽  
pp. 617
Author(s):  
Su Yang ◽  
Miaole Hou ◽  
Ahmed Shaker ◽  
Songnian Li

The digital documentation of cultural relics plays an important role in archiving, protection, and management. In the field of cultural heritage, three-dimensional (3D) point cloud data are effective at expressing complex geometric structures and geometric details on the surface of cultural relics, but lack semantic information. To elaborate the geometric information of cultural relics and add meaningful semantic information, we propose a modeling and processing method for smart point clouds of cultural relics with complex geometries. An information modeling framework for complex geometric cultural relics was designed based on the concept of smart point clouds, in which 3D point cloud data are organized through the time dimension and across spatial scales indicating different geometric details. The proposed model allows smart point clouds, or any subset of them, to be linked with semantic information or related documents, so the framework can describe both rich semantics and high levels of geometric detail. A case study of the Dazu Thousand-Hand Bodhisattva Statue, which is characterized by a variety of complex geometries, shows that the proposed framework models and processes the statue with excellent applicability and expansibility. This work provides insights into the sustainable development of cultural heritage protection globally.
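The core idea above, a point subset that carries its own scale, epoch, semantics, and document links, can be illustrated with a tiny data structure. This is our own minimal reading of the "smart point cloud" concept; the field names and the example values are illustrative, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class SmartPointSet:
    """A named subset of a point cloud carrying semantics, a spatial
    scale, a time stamp, and links to related documents (illustrative
    sketch of the smart-point-cloud idea; field names are assumptions)."""
    name: str
    point_ids: list                     # indices into the full point cloud
    scale: int                          # level of geometric detail
    epoch: str                          # time dimension, e.g. survey date
    semantics: dict = field(default_factory=dict)
    documents: list = field(default_factory=list)

# hypothetical subset of the statue's cloud, for illustration only
hand = SmartPointSet("left_hand_42", point_ids=[10, 11, 12], scale=3,
                     epoch="2015-06", semantics={"material": "gilded stone"},
                     documents=["restoration_report_2015.pdf"])
```

Organizing subsets this way lets a query such as "all gilded parts surveyed in 2015" resolve to concrete point indices without touching the raw geometry.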


2021 ◽  
Vol 65 (1) ◽  
pp. 10501-1-10501-9
Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

Abstract The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used for geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, sensors integrated in the ULS must be small and lightweight, which decreases the density of the collected scanning points and hampers registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the registration of point cloud data and image data into the matching of feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show the high registration accuracy and fusion speed of the proposed method, demonstrating its accuracy and effectiveness.
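The GNSS time-index step, deciding which sequence image should colour each laser point, can be illustrated with a nearest-timestamp lookup. This is a simplified sketch of only the time-matching part: the real fusion then projects each point through that image's exterior orientation, and all names and the `max_dt` cutoff are our assumptions.

```python
import numpy as np

def colorize_by_gnss_time(point_times, image_times, image_colors, max_dt=0.5):
    """For each laser point, pick the sequence image whose GNSS time stamp
    is nearest; points with no image within `max_dt` seconds get NaN."""
    order = np.argsort(image_times)
    it = image_times[order]
    pos = np.searchsorted(it, point_times)       # insertion index per point
    pos = np.clip(pos, 1, len(it) - 1)
    left, right = it[pos - 1], it[pos]           # bracketing time stamps
    nearest = np.where(point_times - left <= right - point_times, pos - 1, pos)
    dt = np.abs(it[nearest] - point_times)
    colors = image_colors[order][nearest].astype(float)
    colors[dt > max_dt] = np.nan                 # no image close enough in time
    return colors
```

Points scanned outside the image sequence's time span are flagged rather than mis-coloured, which keeps the true-colour cloud honest at the ends of a flight strip.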

