A Study on Development of the Camera-Based Blind Spot Detection System Using the Deep Learning Methodology

2019 ◽  
Vol 9 (14) ◽  
pp. 2941 ◽  
Author(s):  
Donghwoon Kwon ◽  
Ritesh Malaiya ◽  
Geumchae Yoon ◽  
Jeong-Tak Ryu ◽  
Su-Young Pi

A recent news headline reported that a pedestrian was killed by an autonomous vehicle because the vehicle's safety features failed to detect an object on the road correctly. Following this accident, some global automobile companies announced plans to postpone development of autonomous vehicles. There is thus no doubt about the importance of safety features for autonomous vehicles. For this reason, our research goal is the development of a very safe and lightweight camera-based blind spot detection system that can be applied to future autonomous vehicles. The blind spot detection system was implemented in open source software. Approximately 2000 vehicle images and 9000 non-vehicle images were adopted for training the Fully Connected Network (FCN) model. Other data processing concepts such as the Histogram of Oriented Gradients (HOG), heat map, and thresholding were also employed. We achieved 99.43% training accuracy and 98.99% testing accuracy with the FCN model. Source code for all the methodologies was then deployed to an off-the-shelf embedded board for actual testing on a road. Testing was conducted with consideration of various factors, and we confirmed 93.75% average detection accuracy with three false positives.
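The heat-map and thresholding steps the abstract mentions are a standard way to merge overlapping sliding-window detections and suppress one-off false positives. The sketch below is illustrative only (not the authors' code); the grid size, box format, and threshold value are assumptions:

```python
# Illustrative sketch of the heat-map + thresholding stage of a
# HOG-based sliding-window detector (not the authors' implementation).

def build_heat_map(width, height, boxes):
    """Accumulate +1 over every pixel covered by a detection window.

    boxes: list of (x1, y1, x2, y2) window coordinates, max exclusive.
    """
    heat = [[0] * width for _ in range(height)]
    for x1, y1, x2, y2 in boxes:
        for y in range(y1, y2):
            for x in range(x1, x2):
                heat[y][x] += 1
    return heat

def apply_threshold(heat, threshold):
    """Zero out pixels supported by fewer than `threshold` overlapping
    windows, suppressing spurious single-window false positives."""
    return [[v if v >= threshold else 0 for v in row] for row in heat]
```

With a threshold of 2, a region covered by two overlapping windows survives while an isolated single-window hit is zeroed out, which is the false-positive filtering effect the pipeline relies on.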

2020 ◽  
Vol 4 (Supplement_1) ◽  
pp. 555-555
Author(s):  
Neil Charness ◽  
Dustin Souders ◽  
Ryan Best ◽  
Nelson Roque ◽  
JongSung Yoon ◽  
...  

Abstract: Older adults are at greater risk of death and serious injury in transportation crashes, which have been increasing in older adult cohorts relative to younger cohorts. Can technology provide a safer road environment? Even if technology can mitigate crash risk, is it acceptable to older road users? We outline the results of several studies that tested 1) whether advanced driver assistance systems (ADAS) can improve older adults' driving performance, 2) older adults' acceptance of ADAS and Autonomous Vehicle (AV) systems, and 3) perceptions of value for ADAS, particularly blind spot detection systems. We found that collision avoidance warning systems improved older adults' simulator driving performance, but lane departure warning systems did not. In a young to middle-aged sample, the factor "concern with AV" showed age effects, with older drivers less favorable. Older drivers, however, valued an active blind spot detection system more than younger drivers did.


2013 ◽  
Vol 694-697 ◽  
pp. 1008-1012
Author(s):  
Shou Xiao Li ◽  
Yun Xia Cao ◽  
Xin Bi

To address the problem of rearview mirror blind spots during driving, this paper studied and designed a blind spot detection system based on millimeter-wave (MMW) radar. The radar is installed at an appropriate position and detects targets by transmitting a signal; when another car enters the detection area, a small warning light beside the A-pillar flashes, or an audible alarm sounds several times, to remind the driver to change lanes carefully. The system's performance is not affected by weather or time of day. For the radar sensor's application environment, triangular-wave LFMCW modulation can effectively resolve the coupling between range and velocity measurements. The paper presents experimental and simulation data.
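The range-velocity decoupling that triangular-wave LFMCW provides comes from the fact that the range contribution to the beat frequency flips sign with the ramp direction while the Doppler contribution does not. A minimal numerical sketch (the 24 GHz carrier, bandwidth, and chirp time below are illustrative values, not taken from the paper):

```python
# Triangular LFMCW: range adds to the beat frequency on the up-ramp
# and subtracts on the down-ramp (or vice versa), so the average of
# the two beats isolates range and the half-difference isolates Doppler.

C = 3.0e8  # speed of light (m/s)

def beat_frequencies(r, v, bandwidth, t_chirp, f_carrier):
    """Forward model: up- and down-ramp beat frequencies for a target
    at range r (m) closing at v (m/s)."""
    slope = bandwidth / t_chirp          # sweep slope (Hz/s)
    f_range = 2.0 * r * slope / C        # range-induced beat
    f_doppler = 2.0 * v * f_carrier / C  # Doppler shift
    return f_range - f_doppler, f_range + f_doppler

def resolve(f_up, f_down, bandwidth, t_chirp, f_carrier):
    """Invert the coupling: recover range and velocity from the two beats."""
    slope = bandwidth / t_chirp
    f_range = 0.5 * (f_up + f_down)
    f_doppler = 0.5 * (f_down - f_up)
    r = f_range * C / (2.0 * slope)
    v = f_doppler * C / (2.0 * f_carrier)
    return r, v
```

Running the forward model for a target at 30 m closing at 5 m/s and inverting it recovers both quantities exactly, which is precisely what a single-ramp FMCW waveform cannot do on its own.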


2021 ◽  
Vol 34 (1) ◽  
Author(s):  
Ze Liu ◽  
Yingfeng Cai ◽  
Hai Wang ◽  
Long Chen

Abstract: Radar and LiDAR are two environmental sensors commonly used in autonomous vehicles. LiDAR is accurate in determining objects' positions but significantly less accurate than radar at measuring their velocities; conversely, radar is more accurate at measuring objects' velocities but less accurate at determining their positions because of its lower spatial resolution. To compensate for the low detection accuracy, incomplete target attributes, and poor environmental adaptability of single sensors such as radar and LiDAR, this paper proposes an effective method for high-precision detection and tracking of targets surrounding an autonomous vehicle. By employing the Unscented Kalman Filter, radar and LiDAR information is effectively fused to achieve high-precision estimation of the position and speed of targets around the autonomous vehicle. Finally, real-vehicle tests under various driving environment scenarios were carried out. The experimental results show that the proposed sensor fusion method can effectively detect and track vehicle-peripheral targets with high accuracy. Compared with a single sensor, it has obvious advantages and can improve the intelligence level of autonomous cars.
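The complementary-accuracy argument (LiDAR for position, radar for velocity) can be made concrete with a Kalman filter that applies each sensor's measurement to the state component it observes well. The sketch below is a simplified linear stand-in for the paper's UKF, a 1D constant-velocity model with all noise values invented for illustration:

```python
# Simplified linear Kalman fusion sketch (NOT the paper's UKF):
# state x = [position, velocity]; LiDAR updates position, radar velocity.

def predict(x, P, dt, q=0.1):
    """Constant-velocity predict step with F = [[1, dt], [0, 1]];
    process noise q is added to the diagonal as a simplification."""
    x = [x[0] + dt * x[1], x[1]]
    p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    return x, [[p00, p01], [p10, p11]]

def update_scalar(x, P, z, idx, r):
    """Scalar measurement update on state component `idx`
    (0 = position from LiDAR, 1 = velocity from radar)."""
    y = z - x[idx]                        # innovation
    s = P[idx][idx] + r                   # innovation variance
    k = [P[0][idx] / s, P[1][idx] / s]    # Kalman gain
    x = [x[0] + k[0] * y, x[1] + k[1] * y]
    P = [[P[i][j] - k[i] * P[idx][j] for j in range(2)] for i in range(2)]
    return x, P
```

Feeding it a target moving at a constant 2 m/s, with the low-noise position measurements routed through index 0 and the low-noise velocity measurements through index 1, the estimate locks onto both states within a few cycles — the same division of labor the abstract attributes to the LiDAR/radar split.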


Road safety has become a greater concern due to the number of accidents, which keeps increasing every year. Safety systems range from simple installations such as seat belts, airbags, and rear cameras to more complicated and intelligent systems such as braking assist, lane change assist, and blind spot monitoring. This paper proposes a Smart Vehicle Blind Spot Detection System (VBDS) to observe the blind spot region based on ISO 17387:2008(E). The system mounts two programmable 24 GHz radar sensors on the left and right rear sides of the car. In addition, it provides an audible and a visual alert to the driver if it senses any vehicle in the blind spot region, using a buzzer and an LED, respectively. To analyze the performance of the system, tests were conducted under different demographic conditions. The accuracy of the system is analyzed by comparing the number of vehicles detected within the blind spot region against ground truth data. The system alerts the driver automatically to ensure driver safety and reduce road accidents. In conclusion, the system proved applicable for use under different demographic conditions.
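The core alert logic such a system needs is a zone-membership test on each radar detection, mapped to the buzzer and LED outputs. The sketch below is a hypothetical illustration: the zone rectangle's dimensions are invented, not the ISO 17387 figures, and the coordinate convention is an assumption:

```python
# Hypothetical blind-spot zone geometry (NOT the ISO 17387 values):
# x is longitudinal offset in metres (negative = behind the mirror),
# y is lateral offset in metres on the monitored side.
BLIND_ZONE = {"x_min": -3.0, "x_max": 0.5, "y_min": 0.9, "y_max": 3.5}

def in_blind_zone(x, y, zone=BLIND_ZONE):
    """True if a radar detection at (x, y) falls inside the zone;
    abs(y) lets the same zone serve either side of the car."""
    return (zone["x_min"] <= x <= zone["x_max"]
            and zone["y_min"] <= abs(y) <= zone["y_max"])

def alert_state(detections, zone=BLIND_ZONE):
    """Map a list of (x, y) radar detections to the buzzer/LED outputs
    the abstract describes: both fire when any target is in the zone."""
    hit = any(in_blind_zone(x, y, zone) for x, y in detections)
    return {"led": hit, "buzzer": hit}
```

A car alongside at (-1.5, 2.0) would trigger both outputs, while a distant vehicle at (10.0, 2.0) would not; a real implementation would also debounce the outputs and apply the standard's timing requirements.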


2021 ◽  
Vol 23 (06) ◽  
pp. 1288-1293
Author(s):  
Dr. S. Rajkumar ◽  
Aklilu Teklemariam ◽  
Addisalem Mekonnen ◽  
...  

Autonomous Vehicles (AV) reduce human intervention by perceiving the vehicle's location with respect to the environment. In this regard, utilization of multiple sensors corresponding to various features of environment perception yields not only detection but also enables tracking and classification of objects, leading to high security and reliability. Therefore, we propose to deploy hybrid multi-sensors such as radar, LiDAR, and camera sensors. However, the data acquired by these hybrid sensors overlap within the wide viewing angles of the individual sensors, and hence a convolutional neural network and Kalman Filter (KF) based data fusion framework was implemented with the goal of facilitating a robust object detection system to avoid collisions on roads. The complete system, tested over 1000 road scenarios for real-time environment perception, showed that our hardware and software configurations outperformed numerous conventional systems. Hence, this system could potentially find application in object detection, tracking, and classification in a real-time environment.


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4357 ◽  
Author(s):  
Babak Shahian Jahromi ◽  
Theja Tulabandhula ◽  
Sabri Cetin

There are many sensor fusion frameworks proposed in the literature using different combinations and configurations of sensors and fusion methods. More focus has been placed on improving accuracy; however, the implementation feasibility of these frameworks in an autonomous vehicle is less explored. Some fusion architectures perform very well under lab conditions using powerful computational resources; however, in real-world applications they cannot be implemented on an embedded edge computer due to their high cost and computational needs. We propose a new hybrid multi-sensor fusion pipeline configuration that performs environment perception for autonomous vehicles, including road segmentation, obstacle detection, and tracking. The fusion framework uses a proposed encoder-decoder based Fully Convolutional Neural Network (FCNx) and a traditional Extended Kalman Filter (EKF) nonlinear state estimator, together with a configuration of camera, LiDAR, and radar sensors best suited to each fusion method. The goal of this hybrid framework is to provide a cost-effective, lightweight, modular, and robust (in case of a sensor failure) fusion solution. The FCNx algorithm improves road detection accuracy compared to benchmark models while maintaining real-time efficiency suitable for an autonomous vehicle's embedded computer. Tested on over 3K road scenes, our fusion algorithm shows better performance in various environment scenarios compared to baseline benchmark networks. Moreover, the algorithm has been implemented in a vehicle and tested using actual sensor data collected from the vehicle, performing real-time environment perception.


2020 ◽  
Author(s):  
Ze Liu ◽  
Feng Ying Cai

Abstract: Radar and LiDAR are two environmental sensors commonly used in autonomous vehicles. LiDAR is accurate in determining objects' positions but significantly less accurate at measuring their velocities; radar, in contrast, is more accurate at measuring objects' velocities but less accurate at determining their positions because of its lower spatial resolution. To compensate for the low detection accuracy, incomplete target attributes, and poor environmental adaptability of single sensors such as radar and LiDAR, we propose an effective method for high-precision detection and tracking of targets surrounding an autonomous vehicle. By employing the Unscented Kalman Filter, radar and LiDAR information is effectively fused to achieve high-precision estimation of the position and speed of targets around the autonomous vehicle. Finally, we carried out algorithm verification tests on a real vehicle under a variety of driving environments. The experimental results show that the proposed sensor fusion method can effectively detect and track vehicle-peripheral targets with high accuracy. Compared with a single sensor, it has obvious advantages and can improve the intelligence level of driverless cars.


Work ◽  
2012 ◽  
Vol 41 ◽  
pp. 4213-4217
Author(s):  
Giulio Francesco Piccinini ◽  
Anabela Simões ◽  
Carlos Manuel Rodrigues ◽  
Miguel Leitão

Author(s):  
Michael Person ◽  
Mathew Jensen ◽  
Anthony O. Smith ◽  
Hector Gutierrez

In order for autonomous vehicles to safely navigate the roadways, accurate object detection must take place before safe path planning can occur. Currently, general-purpose object detection convolutional neural network (CNN) models have the highest detection accuracies of any method. However, there is a gap in the proposed detection frameworks: those that provide the high detection accuracy necessary for deployment do not perform inference in real time, and those that perform inference in real time have low detection accuracy. We propose the multimodel fusion detection system (MFDS), a sensor fusion system that combines the speed of a fast image detection CNN model with the accuracy of light detection and ranging (LiDAR) point cloud data through a decision tree approach. The primary objective is to bridge the tradeoff between performance and accuracy. The motivation for MFDS is to reduce the computational complexity associated with using a CNN model to extract features from an image. To improve efficiency, MFDS extracts complementary features from the LiDAR point cloud in order to obtain comparable detection accuracy. MFDS is novel in not only using the image detections to aid three-dimensional (3D) LiDAR detection but also using the LiDAR data to jointly bolster the image detections and provide 3D detections. MFDS achieves 3.7% higher accuracy than the base CNN detection model and is able to operate at 10 Hz. Additionally, the memory requirement for MFDS is small enough to fit on the NVIDIA TX1 when deployed on an embedded device.
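The "LiDAR bolsters the image detections" idea can be illustrated with a toy decision rule of the kind a decision-tree fusion stage might encode. This is not the actual MFDS logic; the confidence and point-count thresholds below are invented for illustration:

```python
# Toy decision-rule fusion in the spirit of MFDS's decision-tree stage
# (thresholds are illustrative, not from the paper).

def fuse_detection(cnn_conf, lidar_points_in_box,
                   high_conf=0.7, low_conf=0.3, min_points=10):
    """Decide whether to keep a camera detection, optionally rescued
    by LiDAR evidence inside its 3D frustum:
      - a confident camera detection is accepted outright;
      - a weak detection backed by a LiDAR point cluster is accepted
        (LiDAR bolsters the image detection);
      - anything else is rejected as a likely false positive."""
    if cnn_conf >= high_conf:
        return True
    if cnn_conf >= low_conf and lidar_points_in_box >= min_points:
        return True
    return False
```

The appeal of such a rule-based stage is cost: once the cheap CNN and the point-cloud clustering have run, the fusion decision itself is a handful of comparisons, which is what makes a 10 Hz embedded deployment plausible.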

