Front Vehicle Detection Algorithm for Smart Car Based on Improved SSD Model

Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4646 ◽  
Author(s):  
Jingwei Cao ◽  
Chuanxue Song ◽  
Shixin Song ◽  
Silun Peng ◽  
Da Wang ◽  
...  

Vehicle detection is an indispensable part of environmental perception technology for smart cars. Conventional vehicle detection is easily constrained by environmental conditions and struggles to deliver accuracy and real-time performance at the same time. To address these issues, this article proposes a front vehicle detection algorithm for smart cars based on an improved SSD model. The single shot multibox detector (SSD) is one of the current mainstream deep learning object detection frameworks. This work first briefly introduces the SSD network model and analyzes and summarizes its shortcomings in vehicle detection. Targeted improvements are then made to the SSD network model, including major changes to its basic structure, the use of a weighted mask in network training, and an enhanced loss function. Finally, vehicle detection experiments are carried out on the KITTI vision benchmark suite and a self-made vehicle dataset to observe the algorithm's performance in different complicated environments and weather conditions. On the KITTI dataset, the mAP reaches 92.18% and the average processing time per frame is 15 ms. Compared with existing deep learning-based detection methods, the proposed algorithm achieves accuracy and real-time performance simultaneously. It also shows excellent robustness and environmental adaptability in complicated traffic environments, and anti-jamming capability in bad weather conditions. These qualities support the accurate and efficient operation of smart cars in real traffic scenarios, helping to reduce the incidence of traffic accidents and protect people's lives and property.
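The abstract does not give the exact form of the weighted mask used in training, but its role can be sketched as a per-anchor weighting of the localization loss. This is a minimal sketch, not the paper's implementation; the weight values and function name are illustrative assumptions:

```python
import numpy as np

def weighted_smooth_l1(pred, target, weights):
    """Smooth L1 (Huber-style) localization loss with per-anchor weights.

    A weighted mask lets training emphasize hard or important anchors
    (e.g. small or partially occluded vehicles) over easy background.
    """
    diff = np.abs(pred - target)
    # Quadratic near zero, linear beyond |diff| = 1 (standard smooth L1).
    per_anchor = np.where(diff < 1.0, 0.5 * diff ** 2, diff - 0.5)
    return float(np.sum(weights * per_anchor) / np.sum(weights))

# Hypothetical offsets for three anchors; vehicle anchors are up-weighted.
pred = np.array([0.2, 1.5, 0.0])
target = np.array([0.0, 1.0, 0.0])
weights = np.array([2.0, 1.0, 0.5])
loss = weighted_smooth_l1(pred, target, weights)
```

The normalization by the weight sum keeps the loss scale comparable across batches with different weight distributions.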

2021 ◽  
Author(s):  
Ming Ji ◽  
Chuanxia Sun ◽  
Yinglei Hu

Abstract To address the increasingly serious problem of traffic congestion, intelligent transportation systems are widely used in dynamic traffic management, effectively alleviating congestion and improving road traffic efficiency. With the continuous development of traffic data acquisition technology, real-time traffic data can now be obtained across the road network in time, and this wealth of traffic information provides a data foundation for analyzing and predicting road network traffic state. Based on a deep learning framework, this paper studies a vehicle recognition algorithm and a road environment discrimination algorithm, which greatly improve the accuracy of highway vehicle recognition. Highway video surveillance images are collected in different environments to establish a complete original database; a deep learning environment discrimination model is built and its classification model trained to recognize the highway environment in real time, serving as the basic condition for vehicle recognition and traffic event discrimination and providing information for vehicle detection model selection. To improve road vehicle detection accuracy, vehicle targets are labeled and samples from the different environments are preprocessed. On this basis, the vehicle recognition algorithm is studied, and a vehicle detection algorithm based on weather environment recognition and the Faster R-CNN model is proposed. The performance of the proposed vehicle detection algorithm is then verified by comparing detection accuracy across per-environment and overall dataset models, different network structures and deep learning methods, and other approaches.
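The environment-conditioned detection idea can be illustrated roughly as routing each frame to a per-environment detector based on the environment classifier's output. This is a sketch under stated assumptions; the class names and weight-file names below are hypothetical, not from the paper:

```python
# Hypothetical mapping from recognized environment to per-environment
# Faster R-CNN weights trained on that environment's samples.
DETECTORS = {
    "sunny": "frcnn_sunny.weights",
    "rain":  "frcnn_rain.weights",
    "fog":   "frcnn_fog.weights",
    "night": "frcnn_night.weights",
}

def select_detector(env_scores):
    """Pick the detector for the most confident environment class.

    env_scores: dict mapping environment class -> classifier confidence.
    """
    env = max(env_scores, key=env_scores.get)
    return env, DETECTORS[env]

env, weights = select_detector(
    {"sunny": 0.10, "rain": 0.70, "fog": 0.15, "night": 0.05}
)
```

The design choice here is that environment recognition runs first and cheaply on every frame, so the heavier detection model only ever sees inputs resembling its training distribution.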


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Hai Wang ◽  
Xinyu Lou ◽  
Yingfeng Cai ◽  
Yicheng Li ◽  
Long Chen

Vehicle detection is one of the most important environment perception tasks for autonomous vehicles. Traditional vision-based vehicle detection methods are not accurate enough, especially for small and occluded targets, while light detection and ranging (lidar) based methods detect obstacles well but are time-consuming and have a low classification rate for different target types. To make full use of the depth information of lidar and the obstacle classification ability of vision, this work proposes a real-time vehicle detection algorithm that fuses vision and lidar point cloud information. Firstly, obstacles are detected by the grid projection method using the lidar point cloud. Then, the obstacles are mapped onto the image to obtain several separate regions of interest (ROIs). After that, the ROIs are expanded based on a dynamic threshold and merged to generate the final ROI. Finally, a deep learning method named You Only Look Once (YOLO) is applied to the ROI to detect vehicles. Experimental results on the KITTI dataset demonstrate that the proposed algorithm has high detection accuracy and good real-time performance; compared with detection based on YOLO alone, the mean average precision (mAP) is increased by 17%.
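The ROI expansion and merging steps can be sketched as below. The fixed margin and the (x1, y1, x2, y2) box format are assumptions for illustration; the paper's actual expansion uses a dynamic threshold:

```python
def expand_roi(box, margin, img_w, img_h):
    """Expand a lidar-derived ROI by a margin, clamped to image bounds."""
    x1, y1, x2, y2 = box
    return (max(0, x1 - margin), max(0, y1 - margin),
            min(img_w, x2 + margin), min(img_h, y2 + margin))

def merge_rois(boxes):
    """Merge expanded ROIs into one region that is fed to YOLO."""
    xs1, ys1, xs2, ys2 = zip(*boxes)
    return (min(xs1), min(ys1), max(xs2), max(ys2))

# Two hypothetical obstacle boxes projected from the point cloud.
rois = [(100, 120, 180, 200), (160, 110, 260, 210)]
expanded = [expand_roi(b, 10, 640, 480) for b in rois]
final_roi = merge_rois(expanded)
```

Running the detector only on the merged ROI, rather than the full frame, is what buys the real-time performance the abstract claims.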


Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3166 ◽  
Author(s):  
Cao ◽  
Song ◽  
Song ◽  
Xiao ◽  
Peng

Lane detection is an important foundation in the development of intelligent vehicles. To address the low detection accuracy of traditional methods and the poor real-time performance of deep learning-based methods, a lane detection algorithm for intelligent vehicles in complex road conditions and dynamic environments is proposed. Firstly, the distorted image is corrected and a superposition threshold algorithm is applied for edge detection; an aerial view of the lane is then obtained via region of interest extraction and inverse perspective transformation. Secondly, the random sample consensus (RANSAC) algorithm is adopted to fit lane line curves based on a third-order B-spline curve model, followed by fitting evaluation and curvature radius calculation. Lastly, simulation experiments were performed on road driving video under complex road conditions and on the Tusimple dataset. The average detection accuracy reached 98.49% on the road driving video with an average processing time of 21.5 ms, and 98.42% on the Tusimple dataset with an average processing time of 22.2 ms. Compared with traditional methods and deep learning-based methods, this lane detection algorithm has excellent accuracy and real-time performance, high detection efficiency, and strong anti-interference ability; both the recognition rate and the average processing time are significantly improved. The proposed algorithm is crucial to raising the technological level of intelligent vehicle driving assistance and conducive to further improving the driving safety of intelligent vehicles.
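The RANSAC curve-fitting step might be sketched as follows. This is a simplified stand-in: a cubic polynomial replaces the paper's third-order B-spline model, and the synthetic lane pixels, outliers, and inlier threshold are illustrative assumptions:

```python
import numpy as np

def ransac_cubic_fit(xs, ys, iters=200, thresh=2.0, seed=0):
    """RANSAC fit of x = f(y) with a cubic curve to candidate lane pixels.

    Repeatedly fits a minimal 4-point sample and keeps the model
    supported by the most inliers (residual below thresh).
    """
    rng = np.random.default_rng(seed)
    best_coeffs, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(len(ys), size=4, replace=False)
        coeffs = np.polyfit(ys[idx], xs[idx], 3)
        resid = np.abs(np.polyval(coeffs, ys) - xs)
        n_in = int(np.sum(resid < thresh))
        if n_in > best_inliers:
            best_coeffs, best_inliers = coeffs, n_in
    return best_coeffs, best_inliers

# Synthetic lane pixels: a cubic curve plus a few spurious edge points.
ys = np.arange(50, dtype=float)
xs = 0.0005 * ys**3 - 0.02 * ys**2 + ys + 100.0
xs[:5] += 80.0  # outliers from road clutter
coeffs, n_inliers = ransac_cubic_fit(xs, ys)
```

Because each candidate model comes from only four points, RANSAC stays fast while remaining robust to the spurious edges that survive thresholding.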


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6785
Author(s):  
Song Han ◽  
Xiaoping Liu ◽  
Xing Han ◽  
Gang Wang ◽  
Shaobo Wu

Visual sorting of express parcels in complex scenes has always been a key issue in intelligent logistics sorting systems. With existing methods, it is still difficult to sort disorderly stacked parcels quickly and accurately. To achieve accurate detection and efficient sorting of disorderly stacked express parcels, we propose a robot sorting method based on multi-task deep learning. Firstly, a lightweight object detection network is proposed to improve the real-time performance of the system: a scale variable and the joint weights of the network are used to sparsify the model and automatically identify unimportant channels, and pruning strategies reduce the model size and increase detection speed without losing accuracy. Then, an optimal sorting position and pose estimation network based on multi-task deep learning is proposed. Using an end-to-end network structure, the optimal sorting positions and poses of express parcels are estimated in real time by jointly training on pose and position information, which is shown to further improve sorting accuracy. Finally, the accuracy and real-time performance of the method are verified by robotic sorting experiments.
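The pruning idea, where small learned scale factors mark unimportant channels, can be sketched as a ranking-and-keep step. This is a minimal sketch; the scale values, pruning ratio, and function name are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

def prune_channels(scales, prune_ratio):
    """Keep the channels with the largest learned scale factors.

    Channels whose scale has been driven near zero (e.g. by sparsity
    regularization during training) are treated as unimportant and cut.
    Returns the sorted indices of the channels to keep.
    """
    n_keep = max(1, int(round(len(scales) * (1 - prune_ratio))))
    order = np.argsort(np.abs(scales))[::-1]  # largest scales first
    return np.sort(order[:n_keep])

# Hypothetical per-channel scale factors after sparsity training.
scales = np.array([0.9, 0.01, 0.5, 0.002, 0.7, 0.03])
keep = prune_channels(scales, prune_ratio=0.5)  # keep 3 of 6 channels
```

After pruning, the kept channel indices are used to slice the corresponding convolution weights, shrinking the model without retraining the surviving channels from scratch.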


Author(s):  
Andres Bell ◽  
Tomas Mantecon ◽  
Cesar Diaz ◽  
Carlos R. del-Blanco ◽  
Fernando Jaureguizar ◽  
...  

2021 ◽  
Vol 13 (3) ◽  
pp. 809-820
Author(s):  
V. Sowmya ◽  
R. Radha

Vehicle detection and recognition demand advanced computational intelligence and resources in a real-time traffic surveillance system for effective management of all possible traffic contingencies. One focus area of deep intelligent systems is vehicle detection and recognition techniques for robust traffic management of heavy vehicles, with sophisticated mechanisms such as the Support Vector Machine (SVM), Convolutional Neural Networks (CNN), Region-based Convolutional Neural Networks (R-CNN), and the You Only Look Once (YOLO) model. Accordingly, it is pivotal to choose the right algorithm for vehicle detection and recognition in a real-time environment. In this study, deep learning algorithms (Faster R-CNN, YOLOv2, YOLOv3, and YOLOv4) are compared across diverse aspects of their features. Two classes of heavy transport vehicles, buses and trucks, constitute the detection and recognition targets in this work. Data augmentation and transfer learning are implemented to build, train, and test the models, avoiding over-fitting and improving speed and accuracy. Extensive empirical evaluation is conducted on two standard datasets, COCO and PASCAL VOC 2007. Finally, comparative results and analyses are presented based on real-time performance.
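As a small illustration of the data augmentation used to combat over-fitting (the box format and image width are assumptions, not from the paper), a horizontal flip must transform the bounding box annotations along with the pixels:

```python
def hflip_bbox(box, img_w):
    """Horizontally flip an (x1, y1, x2, y2) bounding box.

    When an image is mirrored for augmentation, each box's x-extent
    must be mirrored too, swapping which edge is left and which is right.
    """
    x1, y1, x2, y2 = box
    return (img_w - x2, y1, img_w - x1, y2)

# Hypothetical truck annotation in a 640-pixel-wide frame.
flipped = hflip_bbox((10, 20, 110, 220), img_w=640)
```

The same principle applies to the other geometric augmentations commonly paired with transfer learning, such as scaling and cropping: every annotation must follow the image transform exactly.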


Author(s):  
Ni Nyoman Ayu Marlina ◽  
Denden Mohammad Ariffin ◽  
Arief Suryadi Satyawan ◽  
Mohammed Ikrom Asysyakuur ◽  
Muhammad Farhan Utamajaya ◽  
...  
Keyword(s):  

As technology advances, every car manufacturer keeps making its newest products more sophisticated. This idea gave rise to the concept of the autonomous electric vehicle (KLO, kendaraan listrik otonom): vehicles that satisfy ever-evolving consumer tastes while also being environmentally friendly. Autonomous electric vehicles will certainly arrive in Indonesia, whose society has come to depend on cars for transportation. This situation requires us to prepare for the era of Mobility in Society 5.0, in which we must be able to master the supporting technologies. An autonomous electric vehicle can only be realized if its system detects objects reliably. This study therefore develops a pedestrian detection system based on deep learning that makes use of 360° images. The object detection software is built on the Single Shot Multibox Detector (SSD) MobileNetV1, and the hardware used for development is a Jetson AGX Xavier. Development started with capturing normalized 360° images containing pedestrian information around the Universitas Nurtanio campus, used as the training dataset and test data; SSD MobileNetV1 was then trained on this dataset (19,038 images), and the trained model was tested both in real time and offline. Offline tests on 735 daytime 360° images show that 55.5% of the images were detected perfectly, while of 595 late-afternoon 360° images, 51.2% were detected perfectly. Real-time testing showed that 98% of pedestrians were detected during the day, versus only 95% in the late afternoon. The average processing time for a daytime image was 32.81283 ms on the CPU and 32.79766 ms on the GPU. For an image with the same information in late-afternoon conditions, the processing time was 37.42598 ms on the CPU and 37.45174 ms on the GPU.

