Data Augmentation of Automotive LIDAR Point Clouds under Adverse Weather Situations

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4503
Author(s):  
Jose Roberto Vargas Rivero ◽  
Thiemo Gerbich ◽  
Boris Buschardt ◽  
Jia Chen

In contrast to previous works on data augmentation using LIDAR (Light Detection and Ranging), which mostly consider point clouds under good weather conditions, this paper uses point clouds affected by spray. Spray water can cause phantom braking, and understanding how to handle the extra detections it produces is an important step in the development of ADAS (Advanced Driver Assistance Systems)/AV (Autonomous Vehicle) functions. These extra detections cannot be safely removed without considering cases in which real solid objects may be present in the same region as the spray detections. As collecting real examples would be extremely difficult, the use of synthetic data is proposed. Real scenes are reconstructed virtually with an extra object added in the spray region, such that the detections caused by this obstacle match the characteristics (intensity, echo number, and occlusion) a real object in the same position would have. The detections generated by the obstacle are then used to augment the real data, obtaining, after occlusion effects are added, a good approximation of the desired training data. These data are used to train a classifier achieving an average F-Score of 92. The performance of the classifier is analyzed in detail based on the characteristics of the synthetic object: size, position, reflection, and duration. The proposed method can easily be extended to different kinds of obstacles and classifier types.
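As a rough illustration of the augmentation step described above, the sketch below places a hypothetical solid object inside the spray region of a 2D point cloud, gives its returns an intensity and echo number, and applies a simple shadow-style occlusion model. All names and the occlusion geometry are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def add_synthetic_object(real_points, obj_center, obj_radius, reflectivity):
    """Augment a real (x, y, intensity, echo) point cloud with detections
    from a hypothetical solid object placed in the spray region, then apply
    a simple occlusion model: real points lying behind the object (similar
    bearing, larger range) are removed, approximating what the LIDAR would
    actually see if the object were present."""
    ox, oy = obj_center
    obj_range = math.hypot(ox, oy)
    obj_bearing = math.atan2(oy, ox)
    half_width = math.atan2(obj_radius, obj_range)  # angular half-extent

    # Synthetic first-echo detections on the object surface facing the sensor.
    synthetic = []
    for _ in range(50):
        a = obj_bearing + random.uniform(-half_width, half_width)
        r = obj_range - obj_radius * abs(math.cos(a - obj_bearing))
        synthetic.append((r * math.cos(a), r * math.sin(a), reflectivity, 1))

    # Occlusion: drop real returns hidden behind the inserted object.
    visible = []
    for (x, y, intensity, echo) in real_points:
        bearing, rng = math.atan2(y, x), math.hypot(x, y)
        if abs(bearing - obj_bearing) < half_width and rng > obj_range:
            continue
        visible.append((x, y, intensity, echo))
    return visible + synthetic
```

A real pipeline would work in 3D and model sensor-specific intensity and multi-echo behaviour; the sketch only shows the augment-then-occlude order of operations.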

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Rahee Walambe ◽  
Aboli Marathe ◽  
Ketan Kotecha ◽  
George Ghinea

The computer vision systems driving autonomous vehicles are judged by their ability to detect objects and obstacles in the vicinity of the vehicle in diverse environments. Enhancing a self-driving car's ability to distinguish between the elements of its environment under adverse conditions is an important challenge in computer vision. For example, poor weather conditions like fog and rain lead to image corruption, which can cause a drastic drop in object detection (OD) performance. The primary navigation of autonomous vehicles depends on the effectiveness of the image processing techniques applied to the data collected from various visual sensors. Therefore, it is essential to develop the capability to detect objects like vehicles and pedestrians under challenging conditions such as unpleasant weather. Ensembling multiple baseline deep learning models under different voting strategies for object detection, combined with data augmentation to boost the models' performance, is proposed to solve this problem. The data augmentation technique is particularly useful as it works with limited training data for OD applications. Furthermore, using baseline models significantly speeds up the OD process compared to custom models, owing to transfer learning. The ensembling approach can therefore be highly effective on resource-constrained devices deployed in autonomous vehicles under uncertain weather conditions. The applied techniques demonstrated an increase in accuracy over the baseline models, reaching 32.75% mean average precision (mAP) and 52.56% average precision (AP) in detecting cars under the adverse fog and rain conditions present in the dataset.
The effectiveness of multiple voting strategies for bounding box predictions on the dataset is also demonstrated. These strategies help increase the explainability of object detection in autonomous systems and improve the performance of the ensemble techniques over the baseline models.
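The voting strategies referred to above can be sketched as follows. The affirmative/consensus/unanimous naming follows common ensemble-detection practice, and the IoU threshold is an illustrative assumption, not the paper's exact procedure.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def vote(detections_per_model, strategy="consensus", iou_thr=0.5):
    """Merge boxes predicted by several models under a voting strategy:
    - affirmative: keep any detection proposed by at least one model
    - consensus:   keep boxes a majority of models agree on
    - unanimous:   keep boxes every model agrees on
    Near-duplicate boxes (IoU above threshold) are reported once."""
    n_models = len(detections_per_model)
    needed = {"affirmative": 1,
              "consensus": n_models // 2 + 1,
              "unanimous": n_models}[strategy]
    merged = []
    for boxes in detections_per_model:
        for box in boxes:
            # A model "votes" for a box if it predicted an overlapping one.
            votes = sum(
                any(iou(box, other) >= iou_thr for other in other_boxes)
                for other_boxes in detections_per_model)
            if votes >= needed and not any(iou(box, m) >= iou_thr for m in merged):
                merged.append(box)
    return merged
```

With three models where two agree on one object and a third predicts a spurious far-away box, consensus voting keeps only the agreed box, affirmative keeps both, and unanimous keeps neither.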


2021 ◽  
Author(s):  
Sachindra Dahal ◽  
Jeffery Roesler
Autonomous vehicles (AV) and advanced driver-assistance systems (ADAS) offer multiple safety benefits for drivers and road agencies. However, maintaining the lateral position of an AV or a vehicle with ADAS within a lane is a challenge, especially in adverse weather conditions when lane markings are occluded. For significant penetration of AV without compromising safety, vehicle-to-infrastructure sensing capabilities are necessary, especially during severe weather conditions. This research proposes a method to create a continuous electromagnetic (EM) signature on the roadway, using materials compatible with existing paving materials and construction methods. Laboratory testing of the proposed concept was performed on notched concrete-slab specimens and concrete prisms containing EM materials. An induction-based eddy-current sensor and magnetometers were implemented to detect the EM signature. The detected signals were compared to evaluate the effects of sensor height above the concrete surface, type of EM materials, EM-material volume, material shape, and volume of EM concrete prisms. A layer of up to 2 in. (5.1 cm) of water, ice, snow, or sand was placed between the sensor and the concrete slab to represent adverse weather conditions. Results showed that factors such as sensor height, EM-material volume, EM dosage, types of the EM material, and shape of the EM material in the prism were significant attenuators of the EM signal and must be engineered properly. Presence of adverse surface conditions had a negligible effect, as compared to normal conditions, indicating robustness of the presented method. This study proposes a promising method to complement existing sensors’ limitations in AVs and ADAS for effective lane-keeping during normal and adverse weather conditions with the help of vehicle-to-pavement interaction.
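As a toy illustration of how a vehicle might turn the detected EM signature into a lane-keeping cue, the sketch below estimates lateral offset from two magnetometer readings straddling the buried stripe. The linear fall-off assumption and sensor layout are purely illustrative; the paper itself reports laboratory signal measurements, not a positioning algorithm.

```python
def lateral_offset(left_reading, right_reading, spacing):
    """Toy lane-keeping cue from two magnetometers mounted a fixed
    distance apart: a stronger left reading means the EM stripe lies to
    the left. Assumes signal strength falls off roughly linearly with
    lateral distance over the small baseline (an illustrative
    simplification). Returns None when no signature is detected, e.g.
    if the signal were fully attenuated."""
    total = left_reading + right_reading
    if total == 0:
        return None
    # Offset in metres within [-spacing/2, +spacing/2]; 0 when balanced.
    return (right_reading - left_reading) / total * (spacing / 2)
```

Because the study found adverse surface layers (water, ice, snow, sand) to have negligible effect on the signal, a cue of this kind would keep working when camera-based lane marking detection fails.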


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Boyuan Ma ◽  
Xiaoyan Wei ◽  
Chuni Liu ◽  
Xiaojuan Ban ◽  
Haiyou Huang ◽  
...  

Abstract Recent progress in material data mining has been driven by high-capacity models trained on large datasets. However, collecting experimental data (real data) is extremely costly owing to the amount of human effort and expertise required. Here, we develop a novel transfer learning strategy to address problems of small or insufficient data. This strategy realizes the fusion of real and simulated data and the augmentation of training data in a data mining procedure. For the specific task of grain instance image segmentation, the strategy generates synthetic data by fusing images obtained from simulating the physical mechanism of grain formation with the “image style” information in real images. The results show that a model trained with the acquired synthetic data and only 35% of the real data already achieves segmentation performance competitive with a model trained on all of the real data. Because the time required to perform grain simulation and generate synthetic data is almost negligible compared to the effort of obtaining real data, our proposed strategy exploits the strong prediction power of deep learning without significantly increasing the experimental burden of training data preparation.
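A crude stand-in for the fusion step described above: impose the first-order intensity statistics (a minimal notion of "image style") of a real micrograph on a simulated grain image, then mix the resulting synthetic samples with a 35% slice of the real annotated data. The actual work learns the style transfer; matching mean and standard deviation here is only an illustrative sketch, and all names are assumptions.

```python
import numpy as np

def stylize(simulated, real):
    """Give a simulated grain image the first-order intensity statistics
    of a real micrograph: normalize the simulation, then rescale to the
    real image's mean and standard deviation."""
    sim = simulated.astype(np.float64)
    styled = (sim - sim.mean()) / (sim.std() + 1e-8)
    styled = styled * real.std() + real.mean()
    return np.clip(styled, 0, 255).astype(np.uint8)

def build_training_set(sim_pairs, real_pairs, real_fraction=0.35):
    """Combine stylized synthetic samples (whose labels come free from
    the simulation) with a reduced slice of the expensive real annotated
    data, mirroring the 35% figure reported above."""
    synthetic = [(stylize(img, real_pairs[i % len(real_pairs)][0]), mask)
                 for i, (img, mask) in enumerate(sim_pairs)]
    n_real = max(1, int(real_fraction * len(real_pairs)))
    return synthetic + real_pairs[:n_real]
```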


Author(s):  
J. Schachtschneider ◽  
C. Brenner

Abstract. The development of automated and autonomous vehicles requires highly accurate long-term maps of the environment. Urban areas contain a large number of dynamic objects which change over time. Since permanent observation of the environment is impossible and there will always be a first-time visit of an unknown or changed area, a map of an urban environment needs to model such dynamics. In this work, we use LiDAR point clouds from a large long-term measurement campaign to investigate temporal changes. The data set was recorded along a 20 km route in Hannover, Germany, with a Mobile Mapping System over a period of one year in bi-weekly measurements. It covers a variety of urban objects and areas, weather conditions, and seasons. Based on this data set, we show how scene and seasonal effects influence the measurement likelihood, and that multi-temporal maps lead to the best positioning results.
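A minimal sketch of why multi-temporal maps can help positioning: score the current scan's likelihood under each stored map epoch with a Gaussian point-to-map model and keep the best-matching epoch. The likelihood model, epoch structure, and parameter values are illustrative assumptions, not the authors' pipeline.

```python
import math

def best_epoch(scan, maps, sigma=0.1):
    """Pick the map epoch under which the current scan is most likely.
    Each epoch is scored by a Gaussian log-likelihood over nearest-
    neighbour residuals, so a seasonal epoch matching the scan's
    conditions scores higher than a mismatched (or averaged) one."""
    def log_likelihood(points, ref):
        ll = 0.0
        for p in points:
            d = min(math.dist(p, q) for q in ref)  # nearest map point
            ll += -0.5 * (d / sigma) ** 2
        return ll
    return max(maps, key=lambda epoch: log_likelihood(scan, maps[epoch]))
```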


2020 ◽  
Vol 6 (12) ◽  
pp. 142
Author(s):  
Vicent Ortiz Castelló ◽  
Ismael Salvador Igual ◽  
Omar del Tejo Catalá ◽  
Juan-Carlos Perez-Cortes

Vulnerable Road User (VRU) detection is a major application of object detection, with the aim of helping reduce accidents in advanced driver-assistance systems and enabling the development of autonomous vehicles. Owing to the intrinsic complexity of computer vision and to limitations in processing capacity and bandwidth, this task has not yet been completely solved. For these reasons, the well-established YOLOv3 net and the newer YOLOv4 are assessed by training them on a huge, recent on-road image dataset (BDD100K), both for VRU and full on-road classes, yielding a great improvement in detection quality compared to their MS-COCO-trained generic counterparts from the original authors, at a negligible cost in forward-pass time. Additionally, some models were retrained after replacing the original Leaky ReLU convolutional activation functions of the original YOLO implementation with two cutting-edge activation functions: the self-regularized non-monotonic function (Mish) and its self-gated counterpart (Swish), with significant improvements over the original activation function's detection performance. Finally, some trials were carried out with recent data augmentation techniques (mosaic and CutMix) and several grid-size configurations, yielding cumulative improvements over the previous results and comprising different performance-throughput trade-offs.
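For reference, the three activation functions compared above have simple closed forms. A scalar sketch follows (the real networks apply them elementwise to convolution outputs):

```python
import math

def leaky_relu(x, slope=0.1):
    """YOLO's default convolutional activation: identity for positive
    inputs, a small fixed slope for negative ones."""
    return x if x > 0 else slope * x

def swish(x):
    """Self-gated activation: x * sigmoid(x)."""
    return x / (1.0 + math.exp(-x))

def mish(x):
    """Self-regularized non-monotonic activation: x * tanh(softplus(x))."""
    return x * math.tanh(math.log1p(math.exp(x)))
```

Unlike Leaky ReLU, both Swish and Mish are smooth and non-monotonic near zero, which is often credited for the small accuracy gains observed when they replace ReLU variants.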


Author(s):  
Du Chunqi ◽  
Shinobu Hasegawa

In computer vision and computer graphics, 3D reconstruction is the process of capturing real objects’ shapes and appearances. 3D models can be constructed either by active methods, which use high-quality scanner equipment, or by passive methods, which learn from a dataset. However, both kinds of methods aim only to construct the 3D models, without showing which elements affect their generation. Therefore, the goal of this research is to apply deep learning to automatically generate 3D models and to find the latent variables which affect the reconstruction process. Existing research shows that GANs can be trained on little data using two networks, called the Generator and the Discriminator: the Generator produces synthetic data, and the Discriminator discriminates between the Generator’s output and real data. Existing research also shows that InfoGAN can maximize the mutual information between latent variables and observations. In our approach, we generate 3D models based on InfoGAN and design two constraints: a shape constraint and a parameters constraint. The shape constraint utilizes data augmentation to limit the synthetic data generated to the models’ profiles, while the parameters constraint is employed to find the relationship between the 3D models and the corresponding latent variables. Furthermore, our approach tackles the challenge of building a 3D-model-generating architecture on InfoGAN. Finally, in the process of generation, we may discover the contribution of the latent variables influencing the 3D models to the whole network.
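A toy sketch of the InfoGAN idea this approach builds on: if the latent code actually drives a visible property of the generated sample, an auxiliary network Q can recover the code, and the mutual-information term (which, for a continuous code, reduces to a reconstruction error) stays low. G and Q below are deliberately trivial stand-ins, not real networks.

```python
import numpy as np

def G(noise, code):
    """Toy generator, a stand-in for InfoGAN's G(z, c): the latent code
    scales the generated object's dimensions, the noise perturbs them."""
    return code * np.array([1.0, 2.0, 3.0]) + 0.01 * noise

def Q(sample):
    """Toy auxiliary network recovering the code from a sample. Here the
    code is (approximately) the first coordinate of the output."""
    return sample[0]

def info_loss(codes, noises):
    """Variational mutual-information term for a continuous code:
    reconstruction error of c from G(z, c). Low values mean the code
    demonstrably controls the generated output."""
    preds = [Q(G(z, c)) for c, z in zip(codes, noises)]
    return float(np.mean((np.array(codes) - np.array(preds)) ** 2))
```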


Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5479 ◽  
Author(s):  
Maryam Rahnemoonfar ◽  
Jimmy Johnson ◽  
John Paden

Significant resources have been spent in collecting and storing large and heterogeneous radar datasets during expensive Arctic and Antarctic fieldwork. The vast majority of data available is unlabeled, and the labeling process is both time-consuming and expensive. One possible alternative to the labeling process is the use of synthetically generated data with artificial intelligence. Instead of labeling real images, we can generate synthetic data based on arbitrary labels. In this way, training data can be quickly augmented with additional images. In this research, we evaluated the performance of synthetically generated radar images based on modified cycle-consistent adversarial networks. We conducted several experiments to test the quality of the generated radar imagery. We also tested the quality of a state-of-the-art contour detection algorithm on synthetic data and on different combinations of real and synthetic data. Our experiments show that synthetic radar images generated by a generative adversarial network (GAN) can be used in combination with real images for data augmentation and training of deep neural networks. However, the synthetic images generated by GANs cannot be used solely for training a neural network (training on synthetic and testing on real), as they cannot simulate all of the radar characteristics such as noise or Doppler effects. To the best of our knowledge, this is the first work to create radar sounder imagery with a generative adversarial network.
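The cycle-consistency constraint at the heart of a cycle-consistent adversarial network can be sketched directly. G and F below stand for the two translators (label maps to radar images and back), and the L1 form follows the standard CycleGAN objective rather than the authors' exact losses.

```python
import numpy as np

def cycle_consistency_loss(real_radar, real_label, G, F):
    """L1 cycle losses from the CycleGAN objective: translating
    label -> synthetic radar -> label (and radar -> label -> radar)
    should return the input unchanged. G maps label maps to radar
    images; F maps radar images back to label maps."""
    loss_label = np.abs(F(G(real_label)) - real_label).mean()
    loss_radar = np.abs(G(F(real_radar)) - real_radar).mean()
    return float(loss_label + loss_radar)
```

During training this term is added to the adversarial losses; it is what forces the synthetic radar image generated for a given layer-contour label to actually carry that label, so the pair can be used as labeled training data.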


2020 ◽  
Vol 29 (05) ◽  
pp. 2050013
Author(s):  
Oualid Araar ◽  
Abdenour Amamra ◽  
Asma Abdeldaim ◽  
Ivan Vitanov

Traffic Sign Recognition (TSR) is a crucial component in many automotive applications, such as driver assistance, sign maintenance, and vehicle autonomy. In this paper, we present an efficient approach to training a machine learning-based TSR solution. In our choice of recognition method, we have opted for convolutional neural networks, which have demonstrated best-in-class performance in previous works on TSR. One of the challenges related to training deep neural networks is the requirement for a large amount of training data. To circumvent the tedious process of acquiring and manually labelling real data, we investigate the use of synthetically generated images. Our networks, trained on only synthetic data, are capable of recognising traffic signs in challenging real-world footage. The classification results achieved on the GTSRB benchmark are seen to outperform existing state-of-the-art solutions.
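A minimal sketch of a synthetic-image pipeline in the spirit described above: perturb a sign template photometrically and geometrically so that each variant comes with a free label, sidestepping manual annotation. The transforms, magnitudes, and the `speed_30` label are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_sign(template, n_variants=10, label="speed_30"):
    """Generate labelled training images from a single sign pictogram by
    random photometric and geometric perturbation. The label travels with
    every variant for free, which is the main appeal of synthetic data."""
    out = []
    for _ in range(n_variants):
        img = template.astype(np.float64)
        img = img * rng.uniform(0.5, 1.5) + rng.uniform(-20.0, 20.0)  # exposure shift
        img = img + rng.normal(0.0, 5.0, img.shape)                   # sensor noise
        img = np.roll(img, int(rng.integers(-2, 3)), axis=1)          # horizontal jitter
        out.append((np.clip(img, 0, 255).astype(np.uint8), label))
    return out
```

A production pipeline would also vary perspective, blur, background, and partial occlusion; the point here is only that every generated image is born with its class label attached.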


2018 ◽  
Vol 42 (3) ◽  
pp. 457-467 ◽  
Author(s):  
A. N. Kamaev ◽  
D. A. Karmanov

A task of autonomous underwater vehicle (AUV) navigation is considered in the paper. The images obtained from an onboard stereo camera are used to build point clouds attached to a particular AUV position. Quantized SIFT descriptors of points are stored in a metric tree to organize an effective search procedure using a best bin first approach. Correspondences for a new point cloud are searched in a compact group of point clouds that have the largest number of similar descriptors stored in the tree. The new point cloud can be positioned relative to the other clouds without any prior information about the AUV position and the uncertainty of this position. This approach increases the reliability of the AUV navigation system and makes it insensitive to data losses, textureless seafloor regions, and long passes without trajectory intersections. Several algorithms are described in the paper: an algorithm for point cloud computation, an algorithm for establishing point cloud correspondence, and an algorithm for building groups of potentially linked point clouds to speed up the global search for correspondences. The general navigation algorithm, consisting of three parallel subroutines (image adding, search tree updating, and global optimization), is also presented. The proposed navigation system is tested on real and synthetic data. Tests on real data showed that the trajectory can be built even for an image sequence with 60% data losses, in which successive images have either small or zero overlap. Tests on synthetic data showed that the constructed trajectory is close to the true one even for long missions. The average speed of image processing by the proposed navigation system is about 3 frames per second on a mid-range desktop CPU.
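The grouping step (matching a new cloud only against stored clouds that share many similar descriptors) can be sketched with a toy quantized index. A real system would use a metric tree with best-bin-first search over full SIFT descriptors; the grid quantization here is only an illustrative stand-in.

```python
from collections import Counter, defaultdict

class DescriptorIndex:
    """Toy stand-in for the metric-tree index: descriptors are quantized
    to coarse integer codes, and each code remembers which stored point
    clouds contain it. A new cloud is then matched only against the few
    clouds sharing the most codes, instead of against the whole map."""

    def __init__(self, cell=16):
        self.cell = cell
        self.code_to_clouds = defaultdict(set)

    def _quantize(self, descriptor):
        return tuple(int(v) // self.cell for v in descriptor)

    def add_cloud(self, cloud_id, descriptors):
        for d in descriptors:
            self.code_to_clouds[self._quantize(d)].add(cloud_id)

    def candidate_clouds(self, descriptors, k=3):
        """Return up to k stored clouds ranked by shared-descriptor votes."""
        votes = Counter()
        for d in descriptors:
            for cid in self.code_to_clouds.get(self._quantize(d), ()):
                votes[cid] += 1
        return [cid for cid, _ in votes.most_common(k)]
```

This candidate-group lookup is what lets a cloud be positioned without any prior pose estimate: correspondence search stays cheap even when the stored map grows over a long mission.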

