Nighttime Vehicle Detection and Tracking with Occlusion Handling by Pairing Headlights and Taillights

2020 ◽  
Vol 10 (11) ◽  
pp. 3986
Author(s):  
Tuan-Anh Pham ◽  
Myungsik Yoo

In recent years, vision-based vehicle detection has received considerable attention in the literature. Depending on the ambient illuminance, vehicle detection methods are classified as daytime and nighttime detection methods. In this paper, we propose a nighttime vehicle detection and tracking method with occlusion handling based on vehicle lights. First, bright blobs that may be vehicle lights are segmented in the captured image. Then, a machine learning-based method is proposed to classify whether the bright blobs are headlights, taillights, or other illuminant objects. Subsequently, the detected vehicle lights are tracked to further facilitate the determination of the vehicle position. As one vehicle is indicated by one or two light pairs, a light pairing process using spatiotemporal features is applied to pair vehicle lights. Finally, vehicle tracking with occlusion handling is applied to refine incorrect detections under various traffic situations. Experiments on two-lane and four-lane urban roads are conducted, and a quantitative evaluation of the results shows the effectiveness of the proposed method.
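The first two stages described above (bright-blob segmentation and light pairing) can be illustrated with a minimal sketch: threshold a grayscale frame, label connected bright regions, and pair blobs that lie on roughly the same horizontal line with similar area. This is an assumption-laden illustration, not the authors' implementation; the parameters `thresh`, `max_dy`, and `size_ratio` are illustrative placeholders, and the paper's pairing additionally uses temporal features that are omitted here.

```python
import numpy as np

def segment_bright_blobs(gray, thresh=200):
    """Threshold the frame and label connected bright regions (4-connectivity).
    Returns a list of (cx, cy, area) blob descriptors."""
    mask = gray >= thresh
    labels = np.zeros(gray.shape, dtype=int)
    blobs, next_label = [], 0
    h, w = gray.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and labels[y, x] == 0:
                next_label += 1
                labels[y, x] = next_label
                stack, pix = [(y, x)], []
                while stack:  # iterative flood fill
                    cy, cx = stack.pop()
                    pix.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
                ys = [p[0] for p in pix]
                xs = [p[1] for p in pix]
                blobs.append((sum(xs) / len(xs), sum(ys) / len(ys), len(pix)))
    return blobs

def pair_lights(blobs, max_dy=3.0, size_ratio=1.5):
    """Pair blobs at roughly the same image height with similar area,
    as two lights of one vehicle should be."""
    pairs = []
    for i in range(len(blobs)):
        for j in range(i + 1, len(blobs)):
            (x1, y1, a1), (x2, y2, a2) = blobs[i], blobs[j]
            if abs(y1 - y2) <= max_dy and max(a1, a2) / max(min(a1, a2), 1) <= size_ratio:
                pairs.append((i, j))
    return pairs
```

In a full pipeline, each candidate pair would then be handed to the classifier and tracker stages the abstract describes.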

Multiple-vehicle detection and tracking is an active research topic in intelligent transportation systems, image processing, computer vision, and robotics, with applications in real-time traffic monitoring, lane estimation, accident avoidance, and road-accident alerting for public safety. These higher-level applications motivate researchers to develop advanced machine learning and deep learning techniques that track multiple vehicles accurately in real traffic. Existing machine learning and deep learning based multiple vehicle detection and tracking algorithms face several challenges, namely camera oscillation, shadowing, background-motion changes, clutter, and camouflage; moreover, the detection rate drops dramatically when the distributions of the training samples and the scene target samples do not match. To address these issues, this paper proposes a new hybrid model: a two-tier Haar+HOG, SVM+AdaBoost classifier built on a feature extraction algorithm. Inspired by the adaptive discrete classifier mechanism, multiple relatively independent source samples are first used to build multiple classifiers, and particle grouping is then used to generate target training samples with confidence scores. The global feature extraction ability of a deep convolutional neural network is then used to compute source-target scene feature similarity with a deep autoencoder, in order to design a composite deep-structure-based adaptive discrete classifier and its global training method. The main contributions of this paper are threefold: 1) improving the overall accuracy of detecting and tracking front-view vehicles rather than full-sided vehicles; 2) particle grouping of multiple vehicle classes such as cars, buses, and lorries; 3) tracking front-view multi-vehicles in linear and non-linear motion using particle and extended Kalman filters together with the proposed hybrid multi-vehicle tracking algorithm, attaining 93.6% accuracy in the experimental results. We evaluate the proposed method on the standard PETS 2016 dataset and five self-collected iROAD datasets gathered manually on traffic roads, and compare it with existing state-of-the-art approaches; experiments on the KITTI dataset and three additional self-collected datasets captured by our group demonstrate that the proposed method outperforms existing machine-learning-based vehicle detection methods. In addition, compared with existing automatic feature extraction and region-based object detection methods, the new hybrid method improves the overall detection rate by approximately 5% on average.
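The two-tier classifier idea above can be sketched as a cheap first-stage gate followed by a more expensive second-stage score. The following is a hedged illustration, not the paper's implementation: `haar_vertical_edge` stands in for the Haar-like tier, a single-cell orientation histogram stands in for a full HOG descriptor, and a fixed linear score (`svm_w`, `svm_b`, both illustrative placeholders) stands in for the trained SVM+AdaBoost stage.

```python
import numpy as np

def integral_image(img):
    """Summed-area table, so any box sum costs O(1)."""
    return img.cumsum(0).cumsum(1)

def haar_vertical_edge(ii, y, x, h, w):
    """Haar-like feature: left-half box sum minus right-half box sum."""
    def rect(y0, x0, y1, x1):  # box sum over [y0, y1) x [x0, x1)
        total = ii[y1 - 1, x1 - 1]
        if y0 > 0:
            total -= ii[y0 - 1, x1 - 1]
        if x0 > 0:
            total -= ii[y1 - 1, x0 - 1]
        if y0 > 0 and x0 > 0:
            total += ii[y0 - 1, x0 - 1]
        return total
    half = w // 2
    return rect(y, x, y + h, x + half) - rect(y, x + half, y + h, x + w)

def hog_descriptor(patch, bins=9):
    """Single-cell histogram of unsigned gradient orientations, L2-normalized."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # fold to [0, pi)
    idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hist = np.zeros(bins)
    for b in range(bins):
        hist[b] = mag[idx == b].sum()
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

def two_tier_classify(patch, haar_thresh, svm_w, svm_b):
    """Tier 1: cheap Haar gate rejects most windows early.
    Tier 2: linear score on the HOG-like descriptor."""
    ii = integral_image(patch.astype(float))
    h, w = patch.shape
    if abs(haar_vertical_edge(ii, 0, 0, h, w)) < haar_thresh:
        return False  # rejected early; the HOG stage is never computed
    return float(np.dot(svm_w, hog_descriptor(patch)) + svm_b) > 0
```

The cascade's benefit is that the integral-image gate is a handful of additions per window, so the gradient-histogram stage only runs on the few windows that survive it.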


Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4062 ◽  
Author(s):  
Roberto López-Sastre ◽  
Carlos Herranz-Perdiguero ◽  
Ricardo Guerrero-Gómez-Olmedo ◽  
Daniel Oñoro-Rubio ◽  
Saturnino Maldonado-Bascón

In this work, we address the problem of multi-vehicle detection and tracking for traffic monitoring applications. We present a novel intelligent visual sensor for tracking-by-detection with simultaneous pose estimation. Essentially, we adapt an Extended Kalman Filter (EKF) to work not only with the detections of the vehicles but also with their estimated coarse viewpoints, directly obtained with the vision sensor. We show that enhancing the tracking with observations of the vehicle pose results in a better estimation of the vehicles' trajectories. For the simultaneous object detection and viewpoint estimation task, we present and evaluate two independent solutions. One is based on a fast GPU implementation of a Histogram of Oriented Gradients (HOG) detector with Support Vector Machines (SVMs). For the second, we suitably modify and train the Faster R-CNN deep learning model, in order to recover from it not only the object localization but also an estimation of its pose. Finally, we publicly release a challenging dataset, the GRAM Road Traffic Monitoring (GRAM-RTM) dataset, which has been especially designed for evaluating multi-vehicle tracking approaches within the context of traffic monitoring applications. It comprises more than 700 unique vehicles annotated across more than 40,300 frames of three videos. We expect the GRAM-RTM to become a benchmark in vehicle detection and tracking, providing the computer vision and intelligent transportation systems communities with a standard set of images, annotations, and evaluation procedures for multi-vehicle tracking. We present a thorough experimental evaluation of our approaches on the GRAM-RTM, which will be useful for establishing further comparisons. The results obtained confirm that the simultaneous integration of vehicle localizations and pose estimations as observations in an EKF improves the tracking results.
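The core idea of feeding both the detected position and a coarse pose into an EKF can be sketched with a constant-velocity state `[x, y, vx, vy]` and a nonlinear measurement `[x, y, atan2(vy, vx)]`, where the heading observation plays the role of the viewpoint estimate (its nonlinearity is what makes the filter "extended"). This is a minimal sketch under those modeling assumptions, not the authors' exact filter; the noise parameters `q`, `r_pos`, and `r_ang` are illustrative.

```python
import numpy as np

def ekf_predict(s, P, dt, q=1.0):
    """Constant-velocity prediction of state s = [x, y, vx, vy] and covariance P."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    Q = q * np.eye(4)  # illustrative process noise
    return F @ s, F @ P @ F.T + Q

def ekf_update(s, P, z, r_pos=1.0, r_ang=0.1):
    """Update with measurement z = [x, y, heading]; the heading observation
    stands in for the coarse viewpoint provided by the detector."""
    x, y, vx, vy = s
    v2 = max(vx * vx + vy * vy, 1e-9)        # guard against zero velocity
    hx = np.array([x, y, np.arctan2(vy, vx)])  # nonlinear measurement model
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, -vy / v2, vx / v2]])  # Jacobian of h at s
    R = np.diag([r_pos, r_pos, r_ang])
    innov = z - hx
    innov[2] = (innov[2] + np.pi) % (2 * np.pi) - np.pi  # wrap angle residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return s + K @ innov, (np.eye(4) - K @ H) @ P
```

Because the heading residual is wrapped to (-pi, pi], a detector pose that disagrees with the current velocity direction pulls the velocity estimate toward the observed viewpoint rather than producing a spurious full-turn correction.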

