A Dual-Stage Robust Vehicle Detection and Tracking for Real-Time Traffic Monitoring

Author(s):  
J. Batista ◽  
P. Peixoto ◽  
C. Fernandes ◽  
M. Ribeiro

2020 ◽  
Vol 21 (2) ◽  
pp. 125-133
Author(s):  
De Rosal Ignatius Moses Setiadi ◽  
Rizki Ramadhan Fratama ◽  
Nurul Diyah Ayu Partiningsih

Abstract: This research proposes a background subtraction method with a truncate threshold to improve the accuracy of vehicle detection and tracking in real-time video streams. In previous research, vehicle detection accuracy still needed optimization. Within the vehicle detection pipeline, several stages strongly affect the result; one of them is the thresholding technique, since different thresholding methods change how background and foreground are separated. Based on the test results, the proposed method improves accuracy by more than 20% compared with the previous method, confirming that the thresholding method has a considerable influence on the final vehicle detection result. The average accuracy across the three time periods, i.e. morning, daytime, and afternoon, reached 96.01%. These results indicate that the vehicle counting accuracy is very satisfactory; moreover, the method has also been deployed in a real setting and runs smoothly.
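The abstract above does not give implementation details, but the truncate-threshold idea it names can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' code), assuming grayscale frames stored as nested lists; the function names and the threshold value are assumptions. Truncation clamps difference values above the threshold (as in OpenCV's `THRESH_TRUNC`), and pixels that hit the ceiling are treated as foreground.

```python
def truncate_threshold(diff, t):
    """Truncate threshold: values above t are clamped to t,
    values at or below t are kept unchanged (cf. OpenCV THRESH_TRUNC)."""
    return [[min(p, t) for p in row] for row in diff]

def foreground_mask(background, frame, t=40):
    """Background subtraction sketch: absolute frame difference,
    truncate threshold, then a per-pixel binary decision."""
    diff = [[abs(f - b) for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
    trunc = truncate_threshold(diff, t)
    # Pixels clamped at the truncation ceiling are marked as foreground.
    return [[1 if p >= t else 0 for p in row] for row in trunc]

background = [[10, 10, 10], [10, 10, 10]]
frame      = [[12, 90, 10], [10, 200, 11]]  # two bright "vehicle" pixels
print(foreground_mask(background, frame))   # [[0, 1, 0], [0, 1, 0]]
```

In a real system the mask would then be cleaned with morphological operations before connected components are counted as vehicles.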


2020 ◽  
Vol 39 (3) ◽  
pp. 2693-2710 ◽  
Author(s):  
Wael Farag

In this paper, an advanced and reliable vehicle detection-and-tracking technique is proposed and implemented. The Real-Time Vehicle Detection-and-Tracking (RT_VDT) technique is well suited for Advanced Driving Assistance Systems (ADAS) applications or Self-Driving Cars (SDC). The RT_VDT is mainly a pipeline of reliable computer vision and machine learning algorithms that augment each other, taking in raw RGB images and producing the bounding boxes of the vehicles that appear in the front driving space of the car. The main contribution of this paper is the careful fusion of the employed algorithms, some of which work in parallel to strengthen each other, in order to produce a precise and sophisticated real-time output. In addition, the RT_VDT is computationally fast enough to be embedded in the CPUs currently employed by ADAS systems. The particulars of the employed algorithms and their implementation are described in detail. Additionally, these algorithms and their various integration combinations are tested and evaluated on actual road images and videos captured by the car's front-mounted camera, as well as on the KITTI benchmark, where 87% average precision is achieved. The evaluation of the RT_VDT shows that it reliably detects and tracks vehicle boundaries under various conditions.
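The abstract does not specify how detections are linked into tracks across frames; a common building block for this kind of pipeline is greedy intersection-over-union (IoU) association, sketched below as a hypothetical illustration (not the RT_VDT algorithm itself). Boxes are `(x1, y1, x2, y2)` tuples, and the `min_iou` gate is an assumed parameter.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def associate(tracks, detections, min_iou=0.3):
    """Greedily match previous-frame tracks to new detections by best IoU.

    tracks: {track_id: box}; detections: [box, ...].
    Returns {track_id: detection_index} for matches above min_iou."""
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_j = min_iou, None
        for j, dbox in enumerate(detections):
            if j in used:
                continue
            score = iou(tbox, dbox)
            if score >= best:
                best, best_j = score, j
        if best_j is not None:
            matches[tid] = best_j
            used.add(best_j)
    return matches

tracks = {1: (0, 0, 10, 10), 2: (50, 50, 60, 60)}
dets = [(52, 51, 61, 62), (1, 0, 11, 10)]
print(associate(tracks, dets))  # {1: 1, 2: 0}
```

Production trackers typically replace the greedy loop with Hungarian assignment and add a motion model, but the IoU gate is the same.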


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Cheng-Jian Lin ◽  
Shiou-Yun Jeng ◽  
Hong-Wei Lioa

In recent years, vehicle detection and classification have become essential tasks of intelligent transportation systems, and real-time, accurate vehicle detection from image and video data for traffic monitoring remains challenging. The most noteworthy challenges are operating in real time to accurately locate and classify vehicles in traffic flows, and working around total occlusions that hinder vehicle tracking. To address these challenges, we present a traffic monitoring approach that employs convolutional neural networks based on You Only Look Once (YOLO). Digitally processing and analyzing traffic videos in real time is crucial for extracting reliable data on traffic flow, and such systems have attracted significant attention from traffic management departments. Therefore, this study presents a real-time traffic monitoring system based on a virtual detection zone, a Gaussian mixture model (GMM), and YOLO to increase vehicle counting and classification efficiency. The GMM and the virtual detection zone are used for vehicle counting, and YOLO is used to classify vehicles. Moreover, the distance and time traveled by a vehicle are used to estimate its speed. In this study, the Montevideo Audio and Video Dataset (MAVD), the GARM Road-Traffic Monitoring data set (GRAM-RTM), and our own collected data sets are used to verify the proposed method. Experimental results indicate that the proposed method with YOLOv4 achieved the highest classification accuracy, 98.91% and 99.5% on the MAVD and GRAM-RTM data sets, respectively. Moreover, the proposed method with YOLOv4 also achieves the highest classification accuracy of 99.1%, 98.6%, and 98% in daytime, nighttime, and rainy conditions, respectively. In addition, the average absolute percentage error of the vehicle speed estimation with the proposed method is about 7.6%.
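The counting and speed-estimation ideas named in this abstract (a virtual detection zone and speed from distance traveled over time) can be sketched without the full GMM/YOLO stack. The following is a hypothetical, minimal illustration, not the authors' implementation: the counting line position, the zone length, and the frame rate are all assumed values, and per-frame centroids stand in for tracker output.

```python
def crossed_line(prev_y, cur_y, line_y):
    """True when a tracked centroid crosses the virtual counting line
    between two consecutive frames (in either direction)."""
    return prev_y < line_y <= cur_y or cur_y < line_y <= prev_y

def estimate_speed_kmh(distance_m, frames, fps):
    """Speed from a known ground distance and the frames taken to cover it."""
    elapsed_s = frames / float(fps)
    return (distance_m / elapsed_s) * 3.6  # m/s -> km/h

# Centroid (x, y) of one tracked vehicle over four frames.
track = [(50, 80), (55, 95), (60, 110), (65, 130)]
line_y = 100  # assumed position of the virtual counting line

count = 0
for (_, py), (_, cy) in zip(track, track[1:]):
    if crossed_line(py, cy, line_y):
        count += 1
print(count)  # 1 (the vehicle is counted exactly once)

# A vehicle covering an assumed 20 m zone in 24 frames at 30 fps:
print(round(estimate_speed_kmh(20.0, 24, 30), 1))  # 90.0
```

Checking the crossing between consecutive frames, rather than testing "is the centroid past the line", is what prevents one vehicle from being counted in every frame it spends beyond the line.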

