Proximity-based automatic data annotation for autonomous driving

2020 ◽  
Vol 7 (2) ◽  
pp. 395-404 ◽  
Author(s):  
Chen Sun ◽  
Jean M. Uwabeza Vianney ◽  
Ying Li ◽  
Long Chen ◽  
Li Li ◽  
...  
2021 ◽  
Vol 13 (15) ◽  
pp. 2868
Author(s):  
Yonglin Tian ◽  
Xiao Wang ◽  
Yu Shen ◽  
Zhongzheng Guo ◽  
Zilei Wang ◽  
...  

Three-dimensional information perception from point clouds is of vital importance for improving the ability of machines to understand the world, especially for autonomous driving and unmanned aerial vehicles. Data annotation for point clouds is one of the most challenging and costly tasks. In this paper, we propose a closed-loop, virtual–real interactive point cloud generation and model-upgrading framework called Parallel Point Clouds (PPCs). To the best of our knowledge, this is the first time the training process has been changed from an open-loop to a closed-loop mechanism: feedback from the evaluation results is used to update the training dataset, benefiting from the flexibility of artificial scenes. Under this framework, a point-based LiDAR simulation model is proposed, which greatly simplifies the scanning operation. In addition, a group-based placing method is put forward to integrate hybrid point clouds by locating candidate positions for virtual objects in real scenes. Taking advantage of CAD models and mobile LiDAR devices, two hybrid point cloud datasets, ShapeKITTI and MobilePointClouds, are built for 3D detection tasks. With almost zero labor cost for annotating newly added objects, models (PointPillars) trained with ShapeKITTI and MobilePointClouds achieved 78.6% and 60.0%, respectively, of the average precision of a model trained with real data on 3D detection.
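The closed-loop mechanism described above, in which evaluation feedback steers which virtual objects are added to the training set, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation; the `per_class_ap` dictionary and the sample records are hypothetical placeholders for per-class evaluation results and virtual CAD-derived samples.

```python
def weakest_classes(per_class_ap, k=1):
    """Classes with the lowest average precision, worst first."""
    return sorted(per_class_ap, key=per_class_ap.get)[:k]

def augment_dataset(dataset, virtual_pool, per_class_ap, n_new=10):
    """One closed-loop step: pick the weakest class from the evaluation
    feedback and add up to n_new virtual samples of that class."""
    target = weakest_classes(per_class_ap, k=1)[0]
    extra = [s for s in virtual_pool if s["label"] == target][:n_new]
    return dataset + extra
```

Iterating train → evaluate → `augment_dataset` closes the loop: the training set grows exactly where the evaluation shows the model is weakest, which is the flexibility artificial scenes provide.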


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 213296-213305
Author(s):  
Ying-Qian Zhang ◽  
Yi-Ran Jia ◽  
Xingyuan Wang ◽  
Qiong Niu ◽  
Nian-Dong Chen

Author(s):  
Giuliano Lancioni ◽  
Laura Garofalo ◽  
Raoul Villano ◽  
Francesca Romana Romani ◽  
Marta Campanelli ◽  
...  

2020 ◽  
Author(s):  
Bárbara C. Benato ◽  
Alexandru C. Telea ◽  
Alexandre X. Falcão

Data annotation using visual inspection (supervision) of each training sample can be laborious. Interactive solutions alleviate this by helping experts propagate labels from a few supervised samples to unlabeled ones based solely on visual analysis of their feature space projection (with no further sample supervision). We present a semi-automatic data annotation approach based on a suitable feature space projection and semi-supervised label estimation. We validate our method on the popular MNIST dataset and on images of human intestinal parasites with and without fecal impurities, a large and diverse dataset that makes classification very hard. We evaluate two approaches for semi-supervised learning, from the latent and projection spaces, to choose the one that best reduces user annotation effort while increasing classification accuracy on unseen data. Our results demonstrate the added value of visual analytics tools that combine the complementary abilities of humans and machines for more effective machine learning.
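The label-propagation idea, estimating labels for unlabeled samples from a few supervised ones in the projected feature space, can be sketched with a simple nearest-neighbor rule. This is an assumption for illustration only; the paper's semi-supervised estimator is more elaborate than plain nearest-neighbor assignment. Unlabeled samples are marked with `-1`.

```python
import numpy as np

def propagate_labels(proj, labels):
    """Assign each unlabeled point (label == -1) the label of its
    nearest labeled neighbor in the 2-D projection space."""
    labeled = labels >= 0
    out = labels.copy()
    for i in np.where(~labeled)[0]:
        d = np.linalg.norm(proj[labeled] - proj[i], axis=1)
        out[i] = labels[labeled][np.argmin(d)]
    return out
```

In an interactive setting the expert supervises only the few points that seed `labels`; everything else is filled in from the projection, which is what keeps the annotation effort low.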


2021 ◽  
Vol 7 (11) ◽  
pp. 236
Author(s):  
Javier Gibran Apud Baca ◽  
Thomas Jantos ◽  
Mario Theuermann ◽  
Mohamed Amin Hamdad ◽  
Jan Steinbrener ◽  
...  

Accurately estimating the six-degree-of-freedom (6-DoF) pose of objects in images is essential for a variety of applications such as robotics, autonomous driving, and autonomous, AI- and vision-based navigation for unmanned aircraft systems (UAS). Developing such algorithms requires large datasets; however, generating those is tedious, as it requires annotating the 6-DoF pose of each object of interest present in the image relative to the camera. Therefore, this work presents a novel approach that automates the data acquisition and annotation process and thus reduces the annotation effort to the duration of the recording. To maximize the quality of the resulting annotations, we employ an optimization-based approach for determining the extrinsic calibration parameters of the camera. Our approach can handle multiple objects in the scene, automatically providing ground-truth labels for each object while taking occlusion effects between different objects into account. Moreover, our approach can not only generate data for 6-DoF pose estimation and the corresponding 3D models, but can also be extended to automatic dataset generation for object detection, instance segmentation, or volume estimation for any kind of object.
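The core annotation step, expressing each object's known world pose in the calibrated camera frame to obtain its 6-DoF label, can be sketched as below. This is a minimal sketch assuming poses are given as homogeneous 4x4 transforms; the occlusion handling and the optimization of the extrinsic calibration itself are omitted.

```python
import numpy as np

def relative_pose(T_world_cam, T_world_obj):
    """6-DoF annotation: the object's pose expressed in the camera
    frame, given the (calibrated) camera pose and the object pose,
    both as homogeneous 4x4 transforms in the world frame."""
    return np.linalg.inv(T_world_cam) @ T_world_obj
```

Because the camera extrinsics come from calibration rather than manual labeling, this relative pose can be computed for every object in every frame of a recording, which is what reduces the annotation effort to the recording time itself.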


2021 ◽  
Vol 109 ◽  
pp. 107612
Author(s):  
Bárbara C. Benato ◽  
Jancarlo F. Gomes ◽  
Alexandru C. Telea ◽  
Alexandre X. Falcão
