Night-Time Vehicle Detection Algorithm Based on Visual Saliency and Deep Learning

2016, Vol 2016, pp. 1-7
Author(s): Yingfeng Cai, Xiaoqiang Sun, Hai Wang, Long Chen, Haobin Jiang

Night vision systems are receiving more and more attention in the field of automotive active safety. In this area, a number of researchers have proposed far-infrared sensor based night-time vehicle detection algorithms. However, existing algorithms perform poorly on indicators such as detection rate and processing time. To solve this problem, we propose a far-infrared image vehicle detection algorithm based on visual saliency and deep learning. Firstly, most of the non-vehicle pixels are removed by visual saliency computation. Then, vehicle candidates are generated using prior information such as camera parameters and vehicle size. Finally, a classifier trained with deep belief networks is applied to verify the candidates generated in the previous step. The proposed algorithm was tested on around 6000 images and achieves a detection rate of 92.3% at a processing speed of 25 Hz, which is better than existing methods.
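The first step above, suppressing non-vehicle pixels via visual saliency, can be sketched as follows. The abstract does not name a specific saliency model, so this illustrative sketch uses spectral-residual saliency (a common choice for such pre-filtering); the `keep_ratio` parameter and the toy frame are assumptions, not values from the paper:

```python
import numpy as np

def spectral_residual_saliency(img):
    """Spectral-residual saliency map for a 2-D grayscale image.
    log1p is used instead of log to avoid log(0) at empty frequencies."""
    f = np.fft.fft2(img.astype(np.float64))
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    # Spectral residual: log amplitude minus its 3x3 local average.
    pad = np.pad(log_amp, 1, mode='edge')
    avg = sum(pad[i:i + log_amp.shape[0], j:j + log_amp.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    residual = log_amp - avg
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()  # normalise to [0, 1]

def candidate_mask(sal, keep_ratio=0.1):
    """Keep only the most salient pixels as potential vehicle pixels."""
    thresh = np.quantile(sal, 1.0 - keep_ratio)
    return sal >= thresh

# Toy FIR-like frame: dark background with one hot (bright) blob.
frame = np.zeros((64, 64))
frame[20:30, 24:36] = 1.0
mask = candidate_mask(spectral_residual_saliency(frame))
```

Candidate generation with camera and vehicle-size priors would then operate only on the surviving `mask` pixels.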

2016, Vol 2016, pp. 1-8
Author(s): Hai Wang, Yingfeng Cai, Xiaobo Chen, Long Chen

The use of night vision systems in vehicles is becoming increasingly common. Several approaches using infrared sensors have been proposed in the literature to detect vehicles in far infrared (FIR) images. However, these systems still have low vehicle detection rates, and performance could be improved. This paper presents a novel method to detect vehicles using a far infrared automotive sensor. Firstly, vehicle candidates are generated using a constant threshold on the infrared frame. Contours are then generated using a local adaptive threshold based on maximum distance, which decreases the number of regions to be classified and reduces the false positive rate. Finally, vehicle candidates are verified using a deep belief network (DBN) based classifier. A detection rate of 93.9% is achieved on a database of 5000 images and video streams. This result is approximately a 2.5% improvement over previously reported methods, and the false detection rate is also the lowest among them.
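The local adaptive thresholding step can be sketched as below. The abstract does not fully specify the "maximum distance" criterion, so a generic block-wise adaptive threshold stands in here; the `block` and `offset` parameters and the toy frame are assumptions:

```python
import numpy as np

def local_adaptive_threshold(img, block=16, offset=0.05):
    """Per-block adaptive threshold: a pixel is foreground when it exceeds
    its block's mean by `offset`. A generic stand-in for the paper's
    maximum-distance criterion. Uniform blocks yield no foreground, which
    naturally suppresses flat background regions."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = tile > tile.mean() + offset
    return out

# Hot vehicle region on a cooler background, as in a FIR frame.
frame = np.full((32, 32), 0.2)
frame[8:16, 8:20] = 0.9
fg = local_adaptive_threshold(frame)
```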


2021, Vol 2021, pp. 1-7
Author(s): Zhaoli Wu, Xin Wang, Chao Chen

Due to limitations on energy and power consumption, embedded platforms cannot meet the real-time requirements of far-infrared pedestrian detection algorithms. To solve this problem, this paper proposes a new real-time infrared pedestrian detection algorithm (RepVGG-YOLOv4, Rep-YOLO), which uses RepVGG to reconstruct the YOLOv4 backbone network, reducing the number of model parameters and calculations and improving detection speed; spatial pyramid pooling (SPP) is used to obtain information from different receptive fields and improve detection accuracy; and a channel-pruning compression method reduces redundant parameters, model size, and computational complexity. The experimental results show that, compared with the YOLOv4 detection algorithm, Rep-YOLO reduces model size by 90% and floating-point operations by 93.4%, increases inference speed by a factor of 4, and reaches a detection accuracy of 93.25% after compression.
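The SPP idea referenced above (multi-scale receptive fields without changing spatial resolution) can be sketched with plain numpy; the kernel sizes 5/9/13 follow the common YOLO-style SPP block, and the random feature map is only illustrative:

```python
import numpy as np

def maxpool_same(x, k):
    """Stride-1 max pooling with 'same' padding on a (C, H, W) feature map."""
    c, h, w = x.shape
    p = k // 2
    padded = np.pad(x, ((0, 0), (p, p), (p, p)), mode='constant',
                    constant_values=-np.inf)
    out = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            out[:, i, j] = padded[:, i:i + k, j:j + k].max(axis=(1, 2))
    return out

def spp(x, kernels=(5, 9, 13)):
    """YOLO-style SPP block: concatenate the input with max-pooled copies at
    several kernel sizes, enlarging the receptive field while keeping H x W."""
    return np.concatenate([x] + [maxpool_same(x, k) for k in kernels], axis=0)

feat = np.random.default_rng(0).standard_normal((4, 13, 13))
out = spp(feat)  # channels grow 4x; spatial size is unchanged
```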


2021
Author(s): Ming Ji, Chuanxia Sun, Yinglei Hu

Abstract: In order to solve the increasingly serious problem of traffic congestion, intelligent transportation systems are widely used in dynamic traffic management, effectively alleviating congestion and improving road traffic efficiency. With the continuous development of traffic data acquisition technology, it is now possible to obtain real-time traffic data from the road network. This large amount of traffic information provides a data guarantee for analysing and predicting the road network's traffic state. Based on a deep learning framework, this paper studies vehicle recognition and road environment discrimination algorithms, which greatly improve the accuracy of highway vehicle recognition. Highway video surveillance images in different environments are collected to establish a complete original database; a deep learning model for environment discrimination is built, and the classification model is trained to realize real-time recognition of the highway environment. This serves as the basic condition for vehicle recognition and traffic event discrimination, and provides basic information for vehicle detection model selection. To improve the accuracy of road vehicle detection, vehicle targets are labeled and samples are preprocessed for each environment. On this basis, the vehicle recognition algorithm is studied, and a vehicle detection algorithm based on weather environment recognition and the Fast R-CNN model is proposed. Finally, the performance of the proposed vehicle detection algorithm is verified by comparing detection accuracy across per-environment and overall dataset models, different network structures and deep learning methods, and other approaches.
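The two-stage idea above (classify the environment first, then hand the frame to a detector trained for that environment) can be sketched as a simple dispatch. Everything here is illustrative: the paper trains a CNN for environment discrimination and Fast R-CNN detectors, while mean brightness and lambda stand-ins are used below purely to show the control flow:

```python
# Hypothetical sketch of environment-conditioned detector selection.
# Class names, thresholds, and detector stand-ins are assumptions.

def classify_environment(frame):
    """Placeholder environment classifier: bright frames -> 'day', dark -> 'night'.
    The paper uses a trained deep model here; mean brightness stands in."""
    return 'day' if sum(frame) / len(frame) > 0.5 else 'night'

DETECTORS = {
    'day':   lambda frame: ['car@day'],    # stand-in for a day-trained detector
    'night': lambda frame: ['car@night'],  # stand-in for a night-trained detector
}

def detect_vehicles(frame):
    """Pick the per-environment detection model, then run it on the frame."""
    env = classify_environment(frame)
    return env, DETECTORS[env](frame)

env, dets = detect_vehicles([0.8, 0.9, 0.7])
```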


2019, Vol 2019, pp. 1-9
Author(s): Hai Wang, Xinyu Lou, Yingfeng Cai, Yicheng Li, Long Chen

Vehicle detection is one of the most important environment perception tasks for autonomous vehicles. Traditional vision-based vehicle detection methods are not accurate enough, especially for small and occluded targets, while light detection and ranging (lidar) based methods detect obstacles well but are time-consuming and have a low classification rate for different target types. To address these shortcomings and make full use of the depth information from lidar and the obstacle classification ability of vision, this work proposes a real-time vehicle detection algorithm that fuses vision and lidar point cloud information. Firstly, obstacles are detected by the grid projection method using the lidar point cloud. Then, the obstacles are mapped to the image to get several separated regions of interest (ROIs). After that, the ROIs are expanded based on a dynamic threshold and merged to generate the final ROI. Finally, a deep learning method named You Only Look Once (YOLO) is applied on the ROI to detect vehicles. The experimental results on the KITTI dataset demonstrate that the proposed algorithm has high detection accuracy and good real-time performance. Compared with a detection method based on YOLO deep learning alone, the mean average precision (mAP) is increased by 17%.
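The ROI expansion and merging step can be sketched as follows; the fixed `margin` stands in for the paper's dynamic threshold, and the single-pass merge and box coordinates are illustrative assumptions:

```python
def expand(roi, margin):
    """Grow an ROI (x1, y1, x2, y2) by `margin` pixels on every side."""
    x1, y1, x2, y2 = roi
    return (x1 - margin, y1 - margin, x2 + margin, y2 + margin)

def overlaps(a, b):
    """True when two axis-aligned boxes intersect."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def merge_rois(rois, margin=10):
    """Expand each lidar-derived ROI, then union any overlapping pair into one
    final ROI. A simplified stand-in for the paper's dynamic-threshold
    expansion and merging step."""
    rois = [expand(r, margin) for r in rois]
    merged = []
    for r in rois:
        for i, m in enumerate(merged):
            if overlaps(r, m):
                merged[i] = (min(r[0], m[0]), min(r[1], m[1]),
                             max(r[2], m[2]), max(r[3], m[3]))
                break
        else:
            merged.append(r)
    return merged

# Two nearby obstacles collapse into one ROI; a distant one stays separate.
final = merge_rois([(100, 50, 150, 90), (155, 55, 200, 95), (400, 60, 450, 100)])
```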


Author(s): Keke Geng, Wei Zou, Guodong Yin, Yang Li, Zihao Zhou, ...

Environment perception is a basic and necessary technology for autonomous vehicles to ensure safe and reliable driving. Many studies have focused on ideal environments, while much less work has been done on the perception of low-observable targets, whose features may not be obvious in a complex environment. However, it is inevitable for autonomous vehicles to drive in conditions such as rain, snow, and night-time, in which target features are not obvious and detection models trained on images with significant features fail to detect low-observable targets. This article studies efficient and intelligent recognition algorithms for low-observable targets in complex environments, focuses on developing an engineering method for dual-modal image (color-infrared) low-observable target recognition, and explores the applications of infrared and color imaging for an intelligent perception system in autonomous vehicles. A dual-modal deep neural network is established to fuse the color and infrared images and detect low-observable targets in dual-modal images. A manually labeled color-infrared image dataset of low-observable targets is built, and the network is trained to optimize its internal parameters so that the system can recognize both pedestrians and vehicles in complex environments. The experimental results indicate that the dual-modal deep neural network performs better on low-observable target detection and recognition in complex environments than traditional methods.
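One simple way to feed a dual-modal network is early fusion, stacking the color and infrared channels into a single input tensor; this is only an illustrative scheme (the paper's network fuses the modalities internally), and the array shapes are assumptions:

```python
import numpy as np

def fuse_dual_modal(color, infrared):
    """Early fusion: stack 3 color channels and 1 infrared channel into a
    single 4-channel (C, H, W) input tensor for a detection backbone."""
    assert color.shape[1:] == infrared.shape, "images must share H x W"
    return np.concatenate([color, infrared[None]], axis=0)

rgb = np.zeros((3, 8, 8))   # toy color image, channels-first
ir = np.ones((8, 8))        # toy infrared image
x = fuse_dual_modal(rgb, ir)
```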


Sensors, 2019, Vol 19 (14), pp. 3166
Author(s): Cao, Song, Song, Xiao, Peng

Lane detection is an important foundation in the development of intelligent vehicles. To address problems such as the low detection accuracy of traditional methods and the poor real-time performance of deep learning-based methodologies, a lane detection algorithm for intelligent vehicles in complex road conditions and dynamic environments is proposed. Firstly, after correcting the distorted image and applying a superposition threshold algorithm for edge detection, an aerial view of the lane was obtained via region-of-interest extraction and inverse perspective transformation. Secondly, the random sample consensus algorithm was adopted to fit the curves of lane lines based on a third-order B-spline curve model, and fitting evaluation and curvature radius calculation were then carried out on the curve. Lastly, simulation experiments for the lane detection algorithm were performed using road driving video under complex road conditions and the Tusimple dataset. The experimental results show that the average detection accuracy based on road driving video reached 98.49%, with an average processing time of 21.5 ms; on the Tusimple dataset, the average detection accuracy reached 98.42%, with an average processing time of 22.2 ms. Compared with traditional methods and deep learning-based methodologies, this lane detection algorithm had excellent accuracy and real-time performance, high detection efficiency, and strong anti-interference ability. The accurate recognition rate and average processing time were significantly improved. The proposed algorithm is crucial in promoting the technological level of intelligent vehicle driving assistance and conducive to further improving the driving safety of intelligent vehicles.
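The RANSAC curve-fitting step can be sketched as below. For brevity a third-order polynomial stands in for the paper's third-order B-spline (the consensus loop is the same idea), and the trial count, inlier tolerance, and synthetic lane points are assumptions:

```python
import numpy as np

def ransac_polyfit(xs, ys, degree=3, trials=200, tol=2.0, seed=0):
    """RANSAC fit of a degree-3 polynomial x = f(y) to lane pixels in a
    bird's-eye view: repeatedly fit a minimal sample, keep the model with
    the most inliers, then refit on all of its inliers."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(trials):
        idx = rng.choice(len(ys), size=degree + 1, replace=False)
        coeffs = np.polyfit(ys[idx], xs[idx], degree)
        inliers = np.abs(np.polyval(coeffs, ys) - xs) < tol
        if inliers.sum() > best_inliers:
            best, best_inliers = coeffs, inliers.sum()
    inliers = np.abs(np.polyval(best, ys) - xs) < tol
    return np.polyfit(ys[inliers], xs[inliers], degree)

ys = np.linspace(0, 100, 50)
xs = 0.001 * ys**2 + 0.5 * ys + 10   # a gently curving lane
xs_noisy = xs.copy()
xs_noisy[::10] += 50                 # a few gross outliers
coeffs = ransac_polyfit(xs_noisy, ys)
```

The outliers injected above are rejected by the consensus step, so the refit recovers the underlying lane curve.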


Author(s): Kapil Kumar Gupta, Rizwan Beg, Jitendra Kumar Niranjan

In this study, the authors present an enhanced approach to face detection using the bacteria foraging technique. The approach is based on chemotaxis, reproduction, and elimination-dispersal steps. The authors analyse a face detection algorithm based on human skin color and ellipse fitting, since a human face can be approximated by an ellipse. Their approach requires no initial pre-processing of the image. A number of bacterial agents with evolutionary behaviours are uniformly distributed in the 2-D image environment to search for skin-like pixels and locate each face-like region by evaluating the local color distribution. This approach has the advantage of very fast face detection, since image pre-processing time is eliminated, and it significantly improves the face detection rate.
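The chemotaxis step at the heart of bacterial foraging can be sketched as follows: an agent tumbles into a random direction, then keeps swimming that way while its fitness improves. This is a generic BFO chemotaxis step; the skin-color fitness from the study is replaced by an assumed toy fitness (distance to a "face centre"):

```python
import numpy as np

def chemotaxis_step(pos, fitness, step_size=0.5, swim_len=4, rng=None):
    """One chemotaxis step of bacterial foraging (lower fitness is better):
    tumble in a random unit direction, then swim up to `swim_len` steps
    while each move improves fitness."""
    rng = rng or np.random.default_rng(0)
    direction = rng.standard_normal(pos.shape)
    direction /= np.linalg.norm(direction)
    best, best_fit = pos, fitness(pos)
    for _ in range(swim_len):
        trial = best + step_size * direction
        trial_fit = fitness(trial)
        if trial_fit < best_fit:
            best, best_fit = trial, trial_fit
        else:
            break
    return best, best_fit

# Toy fitness: distance to an assumed 'face centre' at (5, 5).
target = np.array([5.0, 5.0])
f = lambda p: np.linalg.norm(p - target)
pos, fit = chemotaxis_step(np.array([0.0, 0.0]), f)
```

Reproduction then duplicates the fittest agents, and elimination-dispersal randomly relocates a few, which together keep the swarm from stalling on local optima.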

