A Knowledge Base for Automatic Feature Recognition from Point Clouds in an Urban Scene

2018 ◽  
Vol 7 (1) ◽  
pp. 28 ◽  
Author(s):  
Xu-Feng Xing ◽  
Mir-Abolfazl Mostafavi ◽  
Seyed Chavoshi
2020 ◽  
Vol 28 (10) ◽  
pp. 2301-2310
Author(s):  
Chun-kang ZHANG ◽  
Hong-mei LI ◽  
Xia ZHANG

2021 ◽  
Vol 13 (15) ◽  
pp. 3021
Author(s):  
Bufan Zhao ◽  
Xianghong Hua ◽  
Kegen Yu ◽  
Xiaoxing He ◽  
Weixing Xue ◽  
...  

Urban object segmentation and classification are critical data processing steps in scene understanding, intelligent vehicles, and 3D high-precision maps. Semantic segmentation of 3D point clouds is the foundational step in object recognition. To identify intersecting objects and improve classification accuracy, this paper proposes a segment-based classification method for 3D point clouds. The method first divides points into multi-scale supervoxels and groups them through the proposed inverse node graph (IN-Graph) construction, which requires no prior information about the nodes; instead, it partitions supervoxels by judging the connection state of the edges between them. The method reaches a minimum global energy by graph cutting, obtaining structural segments as completely as possible while retaining boundaries. A random forest classifier is then used for supervised classification. To deal with the mislabeling of scattered fragments, a higher-order CRF with small-label-cluster optimization is proposed to refine the classification results. Experiments were carried out on a mobile laser scanning (MLS) point cloud dataset and a terrestrial laser scanning (TLS) point cloud dataset, yielding overall accuracies of 97.57% and 96.39%, respectively. Object boundaries were retained well, and the method achieved good results in the classification of cars and motorcycles. Further experimental analyses verify the advantages of the proposed method and demonstrate its practicability and versatility.
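The small-label-cluster refinement idea in this abstract can be illustrated with a minimal sketch: find connected clusters of same-labeled points and, when a cluster falls below a size threshold, relabel it to the majority label of its neighbors. This is a simplified stand-in for the paper's higher-order CRF optimization, not the authors' actual algorithm; the function name and adjacency representation are illustrative assumptions.

```python
from collections import Counter, deque

def refine_small_clusters(labels, adjacency, min_size=3):
    """Relabel same-label connected clusters smaller than min_size to the
    majority label of their outside neighbors (toy stand-in for a
    higher-order CRF small-label-cluster refinement)."""
    n = len(labels)
    comp = [-1] * n
    comps = []
    # Connected components over edges joining identically labeled points.
    for seed in range(n):
        if comp[seed] != -1:
            continue
        cid = len(comps)
        members = []
        q = deque([seed])
        comp[seed] = cid
        while q:
            u = q.popleft()
            members.append(u)
            for v in adjacency[u]:
                if comp[v] == -1 and labels[v] == labels[u]:
                    comp[v] = cid
                    q.append(v)
        comps.append(members)
    out = list(labels)
    # Relabel undersized clusters by a majority vote of outside neighbors.
    for members in comps:
        if len(members) >= min_size:
            continue
        mset = set(members)
        votes = Counter()
        for u in members:
            for v in adjacency[u]:
                if v not in mset:
                    votes[labels[v]] += 1
        if votes:
            new_label, _ = votes.most_common(1)[0]
            for u in members:
                out[u] = new_label
    return out
```

On a five-point chain labeled `car, car, tree, car, car`, the isolated `tree` point is absorbed into the surrounding `car` label once the threshold exceeds its cluster size.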


Author(s):  
Eric Wang

Abstract Interfacing CAD to CAPP (computer-aided process planning) is crucial to the eventual success of a fully-automated computer-integrated manufacturing (CIM) environment. Current CAD and CAPP systems are separated by a “semantic gap” that represents a fundamental difference in the ways in which they represent information. This semantic gap makes the interfacing of CAD to CAPP a non-trivial task. This paper argues that automatic feature recognition is an indispensable technique in interfacing CAD to CAPP. It then surveys the current literature on automatic feature recognition methods and systems, and analyzes their suitability as CAD/CAPP interfaces. It also describes a relatively recent automatic feature recognition method based on volumetric decomposition, using Kim’s alternating sum of volumes with partitioning (ASVP) algorithm. The paper’s main theses are: (1) that most previous automatic feature recognition approaches are ultimately based on pattern-matching; (2) that pattern-matching approaches are unlikely to scale up to the real world; and (3) that volumetric decomposition is an alternative to pattern-matching that avoids its shortcomings. The paper concludes that automatic feature recognition by volumetric decomposition is a promising approach to the interfacing of CAD to CAPP.
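The pattern-matching style of feature recognition that the survey critiques can be sketched in miniature: represent a part as faces annotated with edge convexity, and match a simple pattern such as "a face bounded only by concave edges is a candidate depression (pocket or slot bottom)". This toy rule is illustrative only; the data layout and function name are assumptions, not taken from the paper.

```python
def find_depressions(face_edges):
    """Toy pattern-matching feature recognizer: flag faces whose boundary
    edges are all concave as candidate depression features.
    face_edges maps a face id to a list of edge convexities
    ('concave' / 'convex')."""
    return [face for face, edges in face_edges.items()
            if edges and all(e == 'concave' for e in edges)]
```

The brittleness of such rules under feature interaction (e.g. two intersecting slots producing convex edges on the shared face) is precisely the scaling problem the paper attributes to pattern-matching approaches.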


Author(s):  
X.-F. Xing ◽  
M. A. Mostafavi ◽  
G. Edwards ◽  
N. Sabo

<p><strong>Abstract.</strong> Automatic semantic segmentation of point clouds observed in a complex 3D urban scene is a challenging issue. Machine-learning-based semantic segmentation of urban scenes requires appropriate features to distinguish objects in mobile terrestrial and airborne LiDAR point clouds at the point level. In this paper, we propose a pointwise semantic segmentation method based on features derived from the Difference of Normals, on the proposed “directional height above” features, which compare the height difference between a given point and its neighbors in eight directions, and on features based on normal estimation. A random forest classifier is chosen to classify points in mobile terrestrial and airborne LiDAR point clouds. Our experimental results show that the proposed features are effective for semantic segmentation of mobile terrestrial and airborne LiDAR point clouds, especially for the vegetation, building, and ground classes in airborne LiDAR point clouds over urban areas.</p>
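The “directional height above” idea can be sketched as follows: bin a point's horizontal neighbors into eight azimuth sectors and record, per sector, the maximum height difference to the point. This is a minimal interpretation of the feature as described in the abstract; the sector convention, radius parameter, and function name are assumptions.

```python
import math

def directional_height_features(points, idx, radius=1.0):
    """Sketch of 'directional height above' features for point `idx`:
    the maximum z-difference to neighbors within `radius` in each of
    eight horizontal azimuth sectors (45 degrees each)."""
    px, py, pz = points[idx]
    feats = [0.0] * 8
    for j, (x, y, z) in enumerate(points):
        if j == idx:
            continue
        dx, dy = x - px, y - py
        if math.hypot(dx, dy) > radius:
            continue
        # Azimuth in [0, 2*pi) mapped to a sector index 0..7.
        angle = (math.atan2(dy, dx) + 2 * math.pi) % (2 * math.pi)
        sector = int(angle // (math.pi / 4)) % 8
        feats[sector] = max(feats[sector], z - pz)
    return feats
```

For a ground point, all eight values tend to be large near vertical structures, whereas a rooftop point sees small or zero values, which is what makes the feature discriminative for the ground and building classes.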


Author(s):  
Michele Bici ◽  
Saber Seyed Mohammadi ◽  
Francesca Campana

Abstract Reverse Engineering (RE) may help tolerance inspection during production through digitalization of the analyzed components and their comparison with design requirements. RE techniques are already applied for geometrical and tolerance shape control. Plastic injection molding is one field where they may be applied, in particular for die set-up of multi-cavity molds, since no severe accuracy is required of the acquisition system. In this field, RE techniques integrated with computer-aided tools for tolerancing and inspection may contribute to so-called “Smart Manufacturing”. Their integration with PLM and suppliers’ incoming components may provide the information necessary to evaluate each component and die. Intensive application of shape digitalization has to face several issues: accuracy of the data acquisition hardware and software; automation of the experimental and post-processing steps; and updates of industrial protocols and workers’ knowledge, among others. Concerning post-processing automation, many advantages arise from computer vision, since it is based on the same concepts developed in RE post-processing (detection, segmentation and classification). Recently, deep learning has been applied to classify point clouds for object and/or feature recognition. This can be done in two ways: with a 3D voxel grid that increases regularity before feeding data to a deep network architecture, or by acting directly on the point cloud. Literature data demonstrate high accuracy, depending on the quality of network training. In this paper, a preliminary study of CNNs for 3D point segmentation is provided. Their characteristics are compared with an automatic approach previously implemented by the authors. The VoxNet and PointNet architectures are compared on the specific task of feature recognition for tolerance inspection, and investigations on test cases are discussed to assess their performance.
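The voxel-grid regularization step mentioned here (the preprocessing used by VoxNet-style networks, as opposed to PointNet's direct point consumption) amounts to binning points into a fixed 3D occupancy grid. A minimal sketch, with grid resolution and bounds as assumed parameters:

```python
def voxelize(points, grid=(4, 4, 4), bounds=((0.0, 1.0),) * 3):
    """Bin 3D points into a fixed occupancy grid: returns per-cell point
    counts. Points outside `bounds` are discarded (a common convention,
    assumed here)."""
    nx, ny, nz = grid
    occ = [[[0] * nz for _ in range(ny)] for _ in range(nx)]
    for p in points:
        idx = []
        inside = True
        for coord, (lo, hi), n in zip(p, bounds, grid):
            if not (lo <= coord <= hi):
                inside = False
                break
            # Clamp the upper boundary into the last cell.
            idx.append(min(int((coord - lo) / (hi - lo) * n), n - 1))
        if inside:
            i, j, k = idx
            occ[i][j][k] += 1
    return occ
```

A count grid like this (or its binarized occupancy version) is what a 3D CNN then convolves over; the trade-off versus point-based networks is the regular structure gained against the resolution lost to quantization.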


Author(s):  
D. Tosic ◽  
S. Tuttas ◽  
L. Hoegner ◽  
U. Stilla

<p><strong>Abstract.</strong> This work proposes an approach for semantic classification of an outdoor-scene point cloud acquired with a high-precision Mobile Mapping System (MMS), with the major goal of contributing to the automatic creation of High Definition (HD) maps. Automatic point labeling is achieved by combining a feature-based approach for semantic classification of point clouds with a deep learning approach for semantic segmentation of images. Both the point cloud data and the data from a multi-camera system are used to gain spatial information about an urban scene. Two types of classification are applied to this task: 1) A feature-based approach, in which the point cloud is organized into a supervoxel structure to capture the geometric characteristics of points. Several geometric features are extracted for an appropriate representation of the local geometry, followed by removing the effect of the local tendency for each supervoxel to enhance the distinction between similar structures. Lastly, the Random Forests (RF) algorithm is applied in the classification phase to assign labels to supervoxels and therefore to the points within them. 2) A deep learning approach employed for semantic segmentation of MMS images of the same scene, using an implementation of the Pyramid Scene Parsing Network. The resulting segmented images, with a class label for each pixel, are projected onto the point cloud, enabling label assignment for each point. Finally, experimental results from a complex urban scene are presented, and the performance of the method is evaluated on a manually labeled dataset, for the deep learning and feature-based classification individually as well as for the fusion of the labels. The achieved overall accuracy with the fused output is 0.87 on the final test set, significantly outperforming the results of the individual methods on the same point cloud. The labeled data are published on the TUM-PF Semantic-Labeling-Benchmark.</p>
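The final fusion step combines, per point, a label from the point-based Random Forests classifier with a label projected from the image segmentation. The abstract does not specify the fusion rule, so the sketch below assumes a simple confidence-based choice, with each prediction given as a (label, confidence) pair; this is an illustrative placeholder for the authors' actual fusion.

```python
def fuse_labels(rf_pred, img_pred):
    """Per-point label fusion: keep whichever of the two predictions has
    the higher classifier confidence. Both inputs are aligned lists of
    (label, confidence) pairs, one entry per point."""
    return [l1 if c1 >= c2 else l2
            for (l1, c1), (l2, c2) in zip(rf_pred, img_pred)]
```

Even a rule this simple can outperform either source alone when the two classifiers make errors on different classes, which is consistent with the fused accuracy reported above exceeding the individual results.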


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Shuhui Ding ◽  
Qiang Feng ◽  
Zhaoyang Sun ◽  
Fai Ma
