Shadow-Based Vehicle Detection in Urban Traffic

Sensors ◽  
2017 ◽  
Vol 17 (5) ◽  
pp. 975 ◽  
Author(s):  
Manuel Ibarra-Arenado ◽  
Tardi Tjahjadi ◽  
Juan Pérez-Oria ◽  
Sandra Robla-Gómez ◽  
Agustín Jiménez-Avello

Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6218 ◽  
Author(s):  
Rodrigo Carvalho Barbosa ◽  
Muhammad Shoaib Ayub ◽  
Renata Lopes Rosa ◽  
Demóstenes Zegarra Rodríguez ◽  
Lunchakorn Wuttisittikulkij

Minimizing human intervention in traffic control devices, such as traffic lights, through automatic applications and sensors has been the focus of many studies. Deep Learning (DL) algorithms have thus been studied for traffic sign and vehicle identification in an urban traffic context. However, there is a lack of priority vehicle classification algorithms that combine high accuracy, fast processing, and a lightweight design. To fill those gaps, a vehicle detection system integrated with an intelligent traffic light is proposed. This work proposes (1) a novel vehicle detection model named Priority Vehicle Image Detection Network (PVIDNet), based on YOLOV3; (2) a lightweight design strategy for the PVIDNet model using an activation function to decrease the execution time of the proposed model; (3) a traffic control algorithm based on the Brazilian Traffic Code; and (4) a database containing Brazilian vehicle images. The effectiveness of the proposed solutions was evaluated using the Simulation of Urban MObility (SUMO) tool. Results show that PVIDNet reached an accuracy higher than 0.95, and the waiting time of priority vehicles was reduced by up to 50%, demonstrating the effectiveness of the proposed solution.
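The abstract does not specify the traffic control algorithm beyond its basis in the Brazilian Traffic Code. A minimal sketch of a priority-aware phase selector, assuming the detector emits a vehicle class per approach (the class names, priority ranks, and function names below are hypothetical, not taken from the paper):

```python
# Minimal sketch of a priority-aware traffic light phase selector.
# A detector (PVIDNet in the paper) is assumed to report a list of
# vehicle classes waiting on each approach. Priority ranks here are
# illustrative stand-ins, not the Brazilian Traffic Code's rules.

PRIORITY = {"ambulance": 0, "fire_truck": 0, "bus": 1, "car": 2}

def next_green(approaches):
    """Return the approach whose highest-priority waiting vehicle wins.

    approaches: dict mapping approach name -> list of detected classes.
    Ties on priority are broken by queue length (longer queue first).
    """
    def key(item):
        name, vehicles = item
        best = min((PRIORITY.get(v, 3) for v in vehicles), default=3)
        return (best, -len(vehicles))
    return min(approaches.items(), key=key)[0]

demo = {
    "north": ["car", "car", "bus"],
    "east": ["car", "ambulance"],
    "south": ["car"],
}
print(next_green(demo))  # prints "east": the ambulance outranks all
```

Giving emergency vehicles the lowest (best) rank is what produces the reduced priority-vehicle waiting times the abstract reports in SUMO.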


2014 ◽  
Vol 529 ◽  
pp. 370-374
Author(s):  
Shao Ping Zhu

In this paper, we propose an effective approach for detecting moving vehicles in nighttime traffic scenes. We use Multiple Instance Learning (MIL) to automatically detect vehicles in video sequences by constructing a MIL model for nighttime conditions. First, we extract SIFT features, which are used to characterize moving vehicles at night. The MIL model is then applied to the on-road detection of vehicles at night; to improve detection accuracy, class label information is used when learning the MIL model. Finally, the proposed method was evaluated at night under urban traffic conditions. The experimental results show an average detection accuracy of over 96.2%, which validates that the proposed approach is feasible and effective for the on-road detection and identification of vehicles in various nighttime environments.
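The core MIL assumption behind this approach is that a candidate region is a "bag" of local descriptors (SIFT in the paper) and is labelled positive if at least one instance looks like a vehicle. A toy illustration of that bag rule, with a hypothetical stand-in for the trained instance classifier:

```python
# Toy illustration of the Multiple Instance Learning (MIL) bag rule:
# a bag of local descriptors is positive iff at least one instance
# scores above a threshold. The scoring function and threshold are
# hypothetical stand-ins for a learned classifier, not the paper's model.

def score_instance(descriptor):
    # Stand-in for a learned instance classifier: the mean of the
    # descriptor values.
    return sum(descriptor) / len(descriptor)

def classify_bag(bag, threshold=0.5):
    """A bag is positive iff any instance exceeds the threshold."""
    return any(score_instance(d) > threshold for d in bag)

# Two bags of toy 4-D "descriptors":
vehicle_bag = [[0.1, 0.2, 0.1, 0.2], [0.9, 0.8, 0.7, 0.9]]
background_bag = [[0.1, 0.2, 0.1, 0.2], [0.3, 0.2, 0.4, 0.1]]
print(classify_bag(vehicle_bag))     # True: one strong instance suffices
print(classify_bag(background_bag))  # False: no instance crosses 0.5
```

This "any instance" rule is what lets MIL learn from weakly labelled regions, where only the region-level (bag) label is known, not which descriptor inside it belongs to the vehicle.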


2016 ◽  
Vol 31 (3) ◽  
pp. 1609-1620 ◽  
Author(s):  
Yunsheng Zhang ◽  
Chihang Zhao ◽  
Aiwei Chen ◽  
Xingzhi Qi

Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 594 ◽  
Author(s):  
Fukai Zhang ◽  
Ce Li ◽  
Feng Yang

Vehicle detection with category inference on video sequence data is an important but challenging task for urban traffic surveillance. The difficulty of this task lies in the fact that it requires accurate localization of relatively small vehicles in complex scenes while meeting real-time requirements. In this paper, we present a vehicle detection framework that improves on the conventional Single Shot MultiBox Detector (SSD) and effectively detects different types of vehicles in real time. Our approach, denoted DP-SSD, uses different feature extractors for the localization and classification tasks in a single network, and enhances these two feature extractors through deconvolution (D) and pooling (P) between layers in the feature pyramid. In addition, we extend the scope of the default boxes by adjusting their scales so that smaller default boxes can be exploited to guide DP-SSD training. Experimental results on the UA-DETRAC and KITTI datasets demonstrate that DP-SSD achieves efficient vehicle detection on real-world traffic surveillance data in real time. On the UA-DETRAC test set, trained with the UA-DETRAC trainval set, DP-SSD with a 300 × 300 input achieves 75.43% mAP (mean average precision) at 50.47 FPS (frames per second), and with a 512 × 512 input reaches 77.94% mAP at 25.12 FPS on an NVIDIA GeForce GTX 1080Ti GPU. DP-SSD thus attains accuracy better than that of the compared state-of-the-art models, except for YOLOv3.
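The default boxes that DP-SSD rescales follow, in the original SSD paper (Liu et al.), a linear rule over the feature maps: s_k = s_min + (s_max − s_min)(k − 1)/(m − 1). The abstract does not give DP-SSD's adjusted values, so this sketch uses SSD's published defaults:

```python
# Default box scales per feature map as defined in the original SSD
# paper, which DP-SSD adjusts toward smaller boxes. DP-SSD's exact
# adjusted scales are not stated in the abstract, so the standard
# s_min = 0.2 and s_max = 0.9 are used here.

def default_box_scales(m, s_min=0.2, s_max=0.9):
    """Scales for m feature maps:
    s_k = s_min + (s_max - s_min) * (k - 1) / (m - 1), for k = 1..m
    """
    return [round(s_min + (s_max - s_min) * (k - 1) / (m - 1), 3)
            for k in range(1, m + 1)]

print(default_box_scales(6))
# [0.2, 0.34, 0.48, 0.62, 0.76, 0.9]
```

Lowering s_min (or shifting the whole range down) yields smaller default boxes on the early feature maps, which is how adjusted scales can help with the small-vehicle localization problem the abstract highlights.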


2012 ◽  
Vol 182-183 ◽  
pp. 530-534
Author(s):  
Cheng Jun Jin ◽  
Gui Ran Chang ◽  
Wei Cheng ◽  
Hui Yan Jiang

In computer vision-based Intelligent Transportation Systems (ITS), one of the key techniques is accurate vehicle detection. In this paper, we propose a histogram-based background extraction and vehicle detection method in the YCbCr color space. Using the YCbCr color space reduces the influence of illumination changes and shadows. To cope with changes in the background itself, we propose a background update method based on a pixel change count and the histogram. Experimental results show that the proposed algorithm can effectively extract and update the background information in complicated urban traffic environments, and that it improves the accuracy of vehicle detection.
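A common form of histogram-based background extraction, consistent with the abstract's description, is to take the mode of each pixel's temporal histogram as its background value. A minimal sketch for a single pixel's luma (Y) channel, with a simplified binning scheme standing in for the paper's pixel-change-count update rule:

```python
# Sketch of histogram-based background extraction for one pixel:
# the background value is the mode of its temporal histogram over
# N frames. Only the luma (Y) channel of YCbCr is shown; the bin
# width and update policy are simplified stand-ins for the paper's
# pixel-change-count method.

from collections import Counter

def rgb_to_y(r, g, b):
    """ITU-R BT.601 luma, the Y channel of YCbCr."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def background_pixel(values, bin_width=8):
    """Mode of a per-pixel temporal histogram (bin centre returned)."""
    bins = Counter(int(v) // bin_width for v in values)
    best_bin, _ = max(bins.items(), key=lambda kv: kv[1])
    return best_bin * bin_width + bin_width // 2

# A pixel observed over 8 frames: mostly road (~100), briefly a car (~200).
samples = [101, 99, 103, 200, 198, 100, 102, 98]
print(background_pixel(samples))  # prints 100: the static road dominates
```

Because the mode ignores short-lived outliers, passing vehicles do not corrupt the background estimate, while a genuine background change (e.g. a parked car) eventually shifts the dominant bin, which motivates the paper's count-based update trigger.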


2010 ◽  
Vol 59 (8) ◽  
pp. 3694-3709 ◽  
Author(s):  
Manuel Vargas ◽  
Jose Manuel Milla ◽  
Sergio L. Toral ◽  
Federico Barrero
