Vehicle detection methods for surveillance applications

Author(s):  
O. Sidla ◽  
E. Wildling ◽  
Y. Lypetskyy

Author(s):  
Anan Banharnsakun ◽  
Supannee Tanathong

Purpose
Developing algorithms for automated detection and tracking of multiple objects is one challenge in the field of object tracking. In traffic video monitoring systems in particular, vehicle detection is an essential and challenging task. In previous studies, many vehicle detection methods have been presented. These approaches mostly used either motion information or characteristic information to detect vehicles. Although these methods are effective in detecting vehicles, their detection accuracy still needs to be improved. Moreover, the headlights and windshields, which these methods use as vehicle features for detection, are easily obscured in some traffic conditions. The paper aims to discuss these issues.

Design/methodology/approach
First, each frame is captured from a video sequence and background subtraction is performed using the Mixture-of-Gaussians background model. Next, the Shi-Tomasi corner detection method is employed to extract feature points from the objects of interest in each foreground scene, and a hierarchical clustering approach is then applied to cluster them into feature blocks. These feature blocks are used to track the moving objects frame by frame.

Findings
Using the proposed method, vehicles can be detected in both day-time and night-time scenarios with a 95 percent accuracy rate, and the method can cope with irrelevant movement (e.g., waving trees), which has to be treated as background. In addition, the proposed method is able to deal with different vehicle shapes such as cars, vans, and motorcycles.

Originality/value
This paper presents a hierarchical-clustering-of-features approach for tracking multiple vehicles in traffic environments, improving detection and tracking in cases where vehicle features are obscured by traffic conditions.
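The clustering step described above — grouping detected corner points into "feature blocks" — can be sketched as single-linkage agglomerative clustering with a distance cutoff. The point coordinates and the cutoff value below are illustrative assumptions, not values from the paper:

```python
# Sketch: group corner points (e.g., Shi-Tomasi output) into feature
# blocks by single-linkage agglomerative clustering with a pixel cutoff.
import math

def cluster_points(points, cutoff):
    """Merge points into clusters while the nearest pair of clusters
    (single linkage) is closer than `cutoff` pixels."""
    clusters = [[p] for p in points]

    def linkage(a, b):
        return min(math.dist(p, q) for p in a for q in b)

    while len(clusters) > 1:
        # find the closest pair of clusters
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]),
        )
        if linkage(clusters[i], clusters[j]) >= cutoff:
            break
        clusters[i].extend(clusters.pop(j))
    return clusters

# Toy corners from two nearby vehicles
corners = [(10, 12), (14, 15), (11, 18), (200, 90), (205, 95)]
blocks = cluster_points(corners, cutoff=30.0)
print(len(blocks))  # → 2 feature blocks
```

In a real pipeline the input points would come from a corner detector run on the foreground mask, and each resulting block would seed one tracked object.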


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Hai Wang ◽  
Xinyu Lou ◽  
Yingfeng Cai ◽  
Yicheng Li ◽  
Long Chen

Vehicle detection is one of the most important environment perception tasks for autonomous vehicles. Traditional vision-based vehicle detection methods are not accurate enough, especially for small and occluded targets, while light detection and ranging (lidar) based methods detect obstacles well but are time-consuming and have a low classification rate for different target types. To address these shortcomings and make full use of the depth information of lidar and the obstacle-classification ability of vision, this work proposes a real-time vehicle detection algorithm which fuses vision and lidar point cloud information. Firstly, the obstacles are detected by the grid projection method using the lidar point cloud information. Then, the obstacles are mapped to the image to get several separated regions of interest (ROIs). After that, the ROIs are expanded based on a dynamic threshold and merged to generate the final ROI. Finally, a deep learning method named You Only Look Once (YOLO) is applied on the ROI to detect vehicles. The experimental results on the KITTI dataset demonstrate that the proposed algorithm has high detection accuracy and good real-time performance. Compared with the detection method based only on the YOLO deep learning, the mean average precision (mAP) is increased by 17%.
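The grid-projection step can be sketched as binning lidar returns into a ground-plane grid and flagging cells with enough above-ground points as obstacle cells. The cell size, ground threshold, and point counts below are illustrative assumptions, not the paper's parameters:

```python
# Sketch: project a lidar point cloud (x, y, z) onto a 2D grid and
# flag cells holding enough above-ground returns as obstacle cells.
from collections import defaultdict

def obstacle_cells(points, cell=0.5, z_min=0.3, min_pts=3):
    grid = defaultdict(int)
    for x, y, z in points:
        if z > z_min:                      # ignore ground returns
            grid[(int(x // cell), int(y // cell))] += 1
    return {c for c, n in grid.items() if n >= min_pts}

# Toy cloud: a cluster of returns from a vehicle plus two ground hits
cloud = [(5.1, 2.0, 1.0), (5.2, 2.1, 1.2), (5.3, 2.0, 0.9),
         (9.0, 4.0, 0.05), (1.0, 1.0, 0.1)]
cells = obstacle_cells(cloud)
print(cells)  # → {(10, 4)}
```

Flagged cells would then be mapped through the camera calibration to produce the image-space ROIs that YOLO is run on.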


Author(s):  
Kai-Mao Cheng ◽  
Cheng-Yen Lin ◽  
Yu-Chun Chen ◽  
Te-Feng Su ◽  
Shang-Hong Lai ◽  
...  

2014 ◽  
Vol 2014 ◽  
pp. 1-11
Author(s):  
Wenhui Li ◽  
Peixun Liu ◽  
Ying Wang ◽  
Hongyin Ni

Vision-based multivehicle detection plays an important role in Forward Collision Warning Systems (FCWS) and Blind Spot Detection Systems (BSDS). The performance of these systems depends on the real-time capability, accuracy, and robustness of the vehicle detection method. To improve the accuracy of vehicle detection, we propose a multifeature fusion vehicle detection algorithm based on the Choquet integral. The algorithm divides the vehicle detection problem into two phases: feature similarity measurement and multifeature fusion. In the feature similarity phase, we first propose a taillight-based vehicle detection method and define a taillight feature similarity measure. Then, building on the definition of the Choquet integral, the vehicle symmetry similarity measure and the HOG + AdaBoost feature similarity measure are defined. Finally, these three features are fused by the Choquet integral. Evaluated on public test collections and our own test images, our method achieves effective and robust multivehicle detection in complicated environments. It not only improves the detection rate but also reduces the false alarm rate, meeting the engineering requirements of Advanced Driving Assistance Systems (ADAS).
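The fusion step — combining taillight, symmetry, and HOG + AdaBoost similarity scores — uses a discrete Choquet integral. A minimal sketch follows; the fuzzy measure values and similarity scores are illustrative assumptions, not the measure learned in the paper:

```python
# Sketch: discrete Choquet integral fusing three feature similarities.
def choquet(scores, mu):
    """scores: {criterion: value in [0, 1]}.
    mu: fuzzy measure mapping frozensets of criteria to weights,
    with mu(all criteria) = 1."""
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending
    total, prev = 0.0, 0.0
    for i, (name, val) in enumerate(items):
        coalition = frozenset(n for n, _ in items[i:])    # scores >= val
        total += (val - prev) * mu[coalition]
        prev = val
    return total

# Hypothetical fuzzy measure over the three criteria
mu = {
    frozenset({"taillight", "symmetry", "hog"}): 1.0,
    frozenset({"symmetry", "hog"}): 0.7,
    frozenset({"hog"}): 0.5,
}
sims = {"taillight": 0.2, "symmetry": 0.6, "hog": 0.9}
print(round(choquet(sims, mu), 3))  # → 0.63
```

Unlike a weighted average, the fuzzy measure lets the fusion reward (or discount) specific coalitions of agreeing features, which is the motivation for using the Choquet integral here.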


2016 ◽  
Vol 17 (1) ◽  
pp. 264-278 ◽  
Author(s):  
Damien Dooley ◽  
Brian McGinley ◽  
Ciaran Hughes ◽  
Liam Kilmartin ◽  
Edward Jones ◽  
...  

2021 ◽  
Vol 17 (1) ◽  
pp. 83-92
Author(s):  
Mikhail Gorobetz ◽  
Andrey Potapov ◽  
Aleksandr Korneyev ◽  
Ivars Alps

Abstract To manage traffic flow effectively and reduce congestion, the volume and quantitative indicators of that flow must be known. Various methods exist for detecting a vehicle in a lane, each with its own advantages and disadvantages. To detect vehicles and analyse traffic intensity, the authors use a pulse coherent radar (PCR) sensor module. Various operating modes of the radar sensor were tested to select the optimal mode for detecting vehicles. The paper describes a method for registering vehicles of different sizes, filtering the measurements, and separating individual vehicles from the traffic flow. The developed vehicle detection device works in conjunction with signal traffic lights, through which traffic control takes place. The traffic lights, each with its own sensors and control unit, communicate via a radio channel, so no cable laying is needed. The system is designed to work at road maintenance sites. The paper presents experimental data from testing on a separate section of road. The experiment showed that traffic lights regulated using the calculated traffic flow outperformed normal traffic light operation: cars passed the regulated traffic light faster. Reducing downtime in traffic jams, in turn, benefits the environmental situation, since internal combustion engines currently prevail among vehicles.
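The step of separating individual vehicles from a stream of radar distance readings can be sketched as run-length detection: a vehicle is counted when consecutive samples fall below a presence threshold for long enough. The distances and thresholds below are illustrative assumptions about the PCR output, not the paper's calibration:

```python
# Sketch: count vehicles in a radar distance stream. A vehicle is a run
# of at least `min_run` samples closer than the free-road distance.
def count_vehicles(samples, free_dist=10.0, min_run=2):
    count, run = 0, 0
    for d in samples:
        if d < free_dist:          # something occupies the lane
            run += 1
        else:                      # lane clear again; close the run
            if run >= min_run:
                count += 1
            run = 0
    if run >= min_run:             # stream ended mid-vehicle
        count += 1
    return count

# Distances (m): clear road ~12 m, two passing vehicles, one noise spike
readings = [12, 12, 4.1, 3.9, 4.0, 12, 12, 5.0, 5.2, 12, 8.0, 12]
print(count_vehicles(readings))  # → 2 (single-sample spike rejected)
```

The `min_run` filter is what rejects single-sample noise while still registering short vehicles; in practice it would be tuned to the sensor's sampling rate and typical vehicle speeds.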


Sensors ◽  
2019 ◽  
Vol 19 (15) ◽  
pp. 3336 ◽  
Author(s):  
Zhongyuan Wu ◽  
Jun Sang ◽  
Qian Zhang ◽  
Hong Xiang ◽  
Bin Cai ◽  
...  

Vehicle detection is a challenging task in computer vision. In recent years, numerous vehicle detection methods have been proposed. Since vehicles may appear at varying sizes in a scene, and vehicle and background regions are often heavily imbalanced, detection performance suffers. To obtain better performance, a multi-scale vehicle detection method is proposed in this paper by improving YOLOv2. The main contributions of this paper are: (1) a new anchor box generation method, Rk-means++, is proposed to improve adaptation to varying vehicle sizes and achieve multi-scale detection; (2) Focal Loss is introduced into YOLOv2 for vehicle detection to reduce the negative influence on training of the imbalance between vehicles and background. Experimental results on the Beijing Institute of Technology (BIT)-Vehicle public dataset demonstrate that the proposed method obtains better performance on vehicle localization and recognition than other existing methods.
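Anchor generation for YOLO-style detectors is commonly done by k-means over ground-truth box sizes with a 1 − IoU distance; the sketch below shows that standard scheme, not the paper's Rk-means++ variant, and the box sizes and k are toy assumptions:

```python
# Sketch: IoU-based k-means over (width, height) boxes, in the spirit
# of YOLOv2's anchor clustering. Boxes are compared as if co-centered.
import random

def iou(wh1, wh2):
    inter = min(wh1[0], wh2[0]) * min(wh1[1], wh2[1])
    return inter / (wh1[0] * wh1[1] + wh2[0] * wh2[1] - inter)

def kmeans_anchors(boxes, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(boxes, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for b in boxes:
            # assign each box to the anchor it overlaps most
            i = max(range(k), key=lambda j: iou(b, centers[j]))
            groups[i].append(b)
        centers = [
            (sum(b[0] for b in g) / len(g), sum(b[1] for b in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers

boxes = [(30, 20), (32, 22), (100, 60), (110, 66)]  # small cars vs. buses
anchors = kmeans_anchors(boxes, k=2)
print(sorted(anchors))  # → [(31.0, 21.0), (105.0, 63.0)]
```

Using 1 − IoU instead of Euclidean distance keeps large boxes from dominating the clustering, which is the property the anchor-generation step relies on; the Focal Loss contribution is a training-loss change and is not shown here.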

