Spatial Attention Fusion for Obstacle Detection Using MmWave Radar and Vision Sensor

Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 956 ◽  
Author(s):  
Shuo Chang ◽  
Yifan Zhang ◽  
Fan Zhang ◽  
Xiaotong Zhao ◽  
Sai Huang ◽  
...  

For autonomous driving, it is important to detect obstacles at all scales accurately for safety. In this paper, we propose a new spatial attention fusion (SAF) method for obstacle detection using an mmWave radar and a vision sensor, where the sparsity of radar points is taken into account. The proposed fusion method can be embedded in the feature-extraction stage, leveraging the features of the mmWave radar and vision sensor effectively. Based on the SAF, an attention weight matrix is generated to fuse the vision features, which differs from concatenation fusion and element-wise add fusion. Moreover, the proposed SAF can be trained in an end-to-end manner together with recent deep learning object detection frameworks. In addition, we build a generation model that converts radar points to radar images for neural network training. Numerical results suggest that the newly developed fusion method achieves superior performance on a public benchmark. The source code will be released on GitHub.
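The core idea of the SAF described above can be sketched in a few lines: a spatial attention weight matrix is derived from the radar feature map and multiplied element-wise with the vision features, instead of concatenating or adding the two modalities. The following is a minimal, hypothetical NumPy sketch (function and variable names are illustrative, not the authors' released code, and a learned convolution would normally produce the attention logits):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_fusion(vision_feat, radar_feat):
    """Fuse vision features with a radar-derived spatial attention map.

    vision_feat: (H, W, C) vision feature map
    radar_feat:  (H, W) single-channel radar feature map (e.g. from a
                 rendered radar image)
    Returns the attended vision features, same shape as vision_feat.
    """
    # Attention weight matrix in (0, 1), one weight per spatial location.
    attn = sigmoid(radar_feat)            # (H, W)
    # Broadcast over channels: element-wise product, not concatenation.
    return vision_feat * attn[..., None]  # (H, W, C)

# Toy example: a 4x4 feature map with 8 channels.
vision = np.ones((4, 4, 8))
radar = np.zeros((4, 4))
radar[1, 2] = 5.0                         # strong radar return at one cell
fused = spatial_attention_fusion(vision, radar)
```

Locations with strong radar returns keep (or amplify) their vision features, while the sparse empty cells are attenuated toward the neutral weight sigmoid(0) = 0.5.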

Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3672 ◽  
Author(s):  
Chao Lu ◽  
Jianwei Gong ◽  
Chen Lv ◽  
Xin Chen ◽  
Dongpu Cao ◽  
...  

As the main component of an autonomous driving system, the motion planner plays an essential role in safe and efficient driving. However, traditional motion planners cannot make full use of on-board sensing information and lack the ability to adapt efficiently to different driving scenes and to the behaviors of different drivers. To overcome this limitation, a personalized behavior learning system (PBLS) is proposed in this paper to improve the performance of the traditional motion planner. The system is based on the neural reinforcement learning (NRL) technique, which learns from human drivers online using on-board sensing information and realizes human-like longitudinal speed control (LSC) through the learning from demonstration (LFD) paradigm. Under the LFD framework, the desired speed of human drivers is learned by the PBLS and converted to low-level control commands by a proportional-integral-derivative (PID) controller. Experiments using a driving simulator and real driving data show that the PBLS can adapt to different drivers by reproducing their driving behaviors for LSC in different scenes. Moreover, in a comparative experiment with a traditional adaptive cruise control (ACC) system, the proposed PBLS demonstrates superior performance in maintaining driving comfort and smoothness.
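The last stage of the pipeline above, converting a learned desired speed into low-level commands with a PID controller, is standard and can be sketched as follows. This is a generic discrete PID with a crude integrator plant for illustration; the gains and the vehicle model are invented for the example, not taken from the paper:

```python
class PID:
    """Minimal discrete PID controller: converts the speed error
    (desired speed minus current speed) into a control command."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, desired, current):
        error = desired - current
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Track a desired speed of 20 m/s with a first-order vehicle model
# (speed integrates the commanded acceleration).
pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.1)
speed = 0.0
for _ in range(500):
    accel = pid.step(desired=20.0, current=speed)
    speed += accel * 0.1   # integrate acceleration over one time step
```

In the PBLS, the `desired` input would come from the learned model of the driver's preferred speed rather than a fixed setpoint.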


2020 ◽  
Vol 9 (12) ◽  
pp. 734
Author(s):  
Chunsen Zhang ◽  
Shu Shi ◽  
Yingwei Ge ◽  
Hengheng Liu ◽  
Weihong Cui

The digital elevation model (DEM) is a digital representation of ground terrain over a certain range, generated from 3D point cloud data, and is an important source of information for spatial modeling. For various reasons, however, generated DEMs contain data holes. Based on deep learning, this paper trains a deep generation model (DGM) to complete the DEM void filling task. A certain amount of DEM data and a randomly generated mask are taken as network inputs, and a reconstruction loss and a generative adversarial network (GAN) loss are used to guide training, so that the network perceives the overall known elevation information and, in combination with a contextual attention layer, generates reliable data to fill the void areas. The experimental results show that this method achieves good feature expression and reconstruction accuracy in DEM void filling, outperforming traditional interpolation methods.
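Two building blocks of this training setup, the randomly generated void mask and a reconstruction loss evaluated on the masked region, are easy to make concrete. The sketch below is a simplified NumPy illustration with invented helper names; the paper's actual network, GAN loss, and contextual attention layer are not reproduced here:

```python
import numpy as np

def random_void_mask(h, w, size, rng):
    """Randomly place a size x size square void, simulating a DEM
    data hole during training (1 = void, 0 = known elevation)."""
    mask = np.zeros((h, w))
    top = rng.integers(0, h - size)
    left = rng.integers(0, w - size)
    mask[top:top + size, left:left + size] = 1.0
    return mask

def masked_l1_loss(pred, target, mask):
    """L1 reconstruction loss restricted to the void region, so the
    network is scored on how well it fills the hole."""
    void = mask.astype(bool)
    return np.abs(pred[void] - target[void]).mean()

rng = np.random.default_rng(0)
dem = rng.normal(size=(32, 32))           # stand-in elevation grid
mask = random_void_mask(32, 32, size=8, rng=rng)
# A trivially bad "prediction" (all zeros) yields a nonzero loss.
loss = masked_l1_loss(np.zeros_like(dem), dem, mask)
```

During GAN training this reconstruction term would be combined with an adversarial term from a discriminator that judges whether the filled region looks like plausible terrain.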


2019 ◽  
Vol 9 (14) ◽  
pp. 2843 ◽  
Author(s):  
Pierre Duthon ◽  
Michèle Colomb ◽  
Frédéric Bernardin

Autonomous driving is based on innovative technologies that have to ensure that vehicles are driven safely. LiDARs are one of the reference sensors for obstacle detection. However, this technology is affected by adverse weather conditions, especially fog. Different wavelengths (905 nm vs. 1550 nm) are investigated to meet this challenge. The influence of wavelength on light transmission in fog is examined and the results are reported. A theoretical approach, calculating the extinction coefficient for different wavelengths, is presented and compared to measurements with a spectroradiometer in the 350 nm–2450 nm range. The experiment took place in the French Cerema PAVIN BP platform for intelligent vehicles, which makes it possible to reproduce controlled fog of different densities for two types of droplet size distribution. Direct spectroradiometer extinction measurements vary in the same way as the models. Finally, the wavelengths for LiDARs should not be chosen on the basis of fog conditions: there is a small difference (<10%) between the extinction coefficients at 905 nm and 1550 nm for the same emitted power in fog.
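The quantities discussed above follow from two textbook relations (these are standard optics formulas, not the paper's full wavelength-dependent model): the Koschmieder relation linking meteorological visibility to the extinction coefficient, and Beer-Lambert attenuation of a beam over distance:

```python
import math

def extinction_from_visibility(visibility_m, contrast=0.02):
    """Koschmieder relation: extinction coefficient (1/m) from
    meteorological visibility, using the usual 2% contrast threshold,
    giving the classic beta = 3.912 / V."""
    return -math.log(contrast) / visibility_m

def transmission(extinction, distance_m):
    """Beer-Lambert transmission of a beam over the given path length."""
    return math.exp(-extinction * distance_m)

# Dense fog with 50 m visibility, target at 30 m (one-way path).
beta = extinction_from_visibility(50.0)   # ~0.078 per metre
t = transmission(beta, 30.0)              # fraction of power surviving
```

The paper's finding that the 905 nm and 1550 nm extinction coefficients differ by less than 10% means `beta`, and hence `t`, would be nearly the same for either LiDAR wavelength in the same fog.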


Sensors ◽  
2016 ◽  
Vol 16 (3) ◽  
pp. 311 ◽  
Author(s):  
Tae-Jae Lee ◽  
Dong-Hoon Yi ◽  
Dong-Il Cho

Author(s):  
Huanbing Gao ◽  
Lei Liu ◽  
Ya Tian ◽  
Shouyin Lu

This paper presents a 3D reconstruction method for road scenes aided by obstacle detection. 3D reconstruction of road scenes can be used in autonomous driving, driver assistance systems, and car navigation systems. However, errors often arise in 3D reconstruction due to occlusion by moving objects in the road scene. The presented method, with obstacle detection feedback, avoids this problem. Firstly, this paper offers a framework for the 3D reconstruction of road scenes by laser scanning and vision. A calibration method based on the location of the horizon is proposed, and an attitude angle measurement method based on the vanishing point is proposed to revise the 3D reconstruction result. Secondly, the reconstruction framework is extended with an object recognition module that automatically detects and discriminates obstacles in the input video streams by a RANSAC approach and a threshold filter, and localizes them in the 3D model. 3D reconstruction and obstacle detection are tightly integrated and benefit from each other. Experimental results verify the feasibility and practicability of the proposed method.
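A common way to combine a RANSAC approach with a threshold filter for road-scene obstacle detection is to fit the ground plane robustly and treat off-plane points as obstacle candidates. The sketch below is a generic NumPy illustration of that pattern (all names, thresholds, and the synthetic scene are invented for the example; the paper's exact pipeline may differ):

```python
import numpy as np

def ransac_ground_plane(points, n_iters=200, threshold=0.05, rng=None):
    """Fit a ground plane to a 3D point cloud with RANSAC; points farther
    than `threshold` from the plane are treated as obstacle candidates.

    points: (N, 3) array. Returns (normal, d, inlier_mask) for the plane
    normal . x + d = 0.
    """
    rng = rng or np.random.default_rng()
    best_mask, best_model = None, None
    for _ in range(n_iters):
        # Hypothesis: plane through 3 randomly sampled points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Consensus: threshold filter on point-to-plane distance.
        dist = np.abs(points @ normal + d)
        mask = dist < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model[0], best_model[1], best_mask

# Synthetic road scene: flat noisy ground plus a box-shaped obstacle.
rng = np.random.default_rng(1)
ground = np.c_[rng.uniform(-10, 10, (500, 2)), rng.normal(0, 0.01, 500)]
obstacle = np.c_[rng.uniform(2, 3, (50, 2)), rng.uniform(0.5, 1.5, 50)]
pts = np.vstack([ground, obstacle])
normal, d, inliers = ransac_ground_plane(pts, rng=rng)
```

Points flagged as outliers (`~inliers`) would then be clustered and localized in the 3D model as obstacles, closing the feedback loop with the reconstruction.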


10.5772/56603 ◽  
2013 ◽  
Vol 10 (6) ◽  
pp. 261 ◽  
Author(s):  
Hao Sun ◽  
Huanxin Zou ◽  
Shilin Zhou ◽  
Cheng Wang ◽  
Naser El-Sheimy
