A Pedestrian Detection Algorithm Based on Score Fusion for Multi-LiDAR Systems

Sensors, 2021, Vol 21 (4), pp. 1159
Authors: Tao Wu, Jun Hu, Lei Ye, Kai Ding

Pedestrian detection plays an essential role in the navigation system of autonomous vehicles. Multisensor fusion-based approaches are commonly used to improve detection performance. In this study, we developed a score fusion-based pedestrian detection algorithm that integrates the data of two light detection and ranging systems (LiDARs). We first evaluated a two-stage object-detection pipeline for each LiDAR, consisting of object proposal and fine classification. The scores from the two classifiers were then fused using the Bayesian rule to generate the final result. To improve proposal performance, we applied two features: a central-point density feature, which acts as a filter to speed up the process and reduce false alarms, and a location feature, comprising the density distribution and height-difference distribution of the point cloud, which describes an object’s profile and location within a sliding window. Extensive experiments on the KITTI and a self-built dataset show that our method produces highly accurate pedestrian detection results in real time. The proposed method considers not only accuracy and efficiency but also flexibility across different modalities.
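The Bayes-rule score fusion described above can be sketched as follows, assuming each classifier outputs a posterior-probability score and the two LiDAR classifiers are treated as conditionally independent (the function name and interface are illustrative, not the paper's):

```python
def bayesian_score_fusion(s1: float, s2: float) -> float:
    """Fuse two classifier scores (interpreted as posterior probabilities)
    via Bayes' rule under a conditional-independence assumption.

    Agreeing confident scores reinforce each other; a neutral score (0.5)
    leaves the other score unchanged.
    """
    num = s1 * s2
    den = num + (1.0 - s1) * (1.0 - s2)
    return num / den if den > 0 else 0.5
```

With this rule, fusing 0.5 with 0.7 yields 0.7 (the neutral score contributes nothing), while fusing 0.9 with 0.9 yields a score above 0.98, reflecting agreement between the two sensors.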

2014, Vol 9 (3), pp. 402-414
Authors: Hui Li, Yun Liu, Shengwu Xiong, Lin Wang

2019, Vol 2019, pp. 1-9
Authors: Hai Wang, Xinyu Lou, Yingfeng Cai, Yicheng Li, Long Chen

Vehicle detection is one of the most important environment-perception tasks for autonomous vehicles. Traditional vision-based vehicle detection methods are not accurate enough, especially for small and occluded targets, while light detection and ranging (lidar)-based methods detect obstacles well but are time-consuming and have a low classification rate for different target types. To address these shortcomings and make full use of the depth information of lidar and the obstacle-classification ability of vision, this work proposes a real-time vehicle detection algorithm that fuses vision and lidar point cloud information. First, obstacles are detected by the grid projection method using the lidar point cloud. Then, the obstacles are mapped onto the image to obtain several separate regions of interest (ROIs). After that, the ROIs are expanded based on a dynamic threshold and merged to generate the final ROI. Finally, a deep learning method named You Only Look Once (YOLO) is applied to the ROI to detect vehicles. Experimental results on the KITTI dataset demonstrate that the proposed algorithm has high detection accuracy and good real-time performance. Compared with detection based only on YOLO, the mean average precision (mAP) is increased by 17%.
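The projection-and-merge steps described above can be sketched as follows; the pinhole projection matrix and the single-pass greedy merge rule are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def project_points(points: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Project Nx3 lidar points into pixel coordinates with a 3x4
    camera projection matrix (homogeneous coordinates, perspective divide)."""
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    uvw = pts_h @ P.T
    return uvw[:, :2] / uvw[:, 2:3]

def merge_rois(rois):
    """Single-pass greedy merge of overlapping (x1, y1, x2, y2) boxes
    into larger regions of interest."""
    merged = []
    for box in sorted(rois):
        for i, m in enumerate(merged):
            # merge if the two boxes overlap
            if box[0] <= m[2] and box[1] <= m[3] and m[0] <= box[2] and m[1] <= box[3]:
                merged[i] = (min(m[0], box[0]), min(m[1], box[1]),
                             max(m[2], box[2]), max(m[3], box[3]))
                break
        else:
            merged.append(box)
    return merged
```

For example, two overlapping ROIs (0, 0, 10, 10) and (5, 5, 15, 15) merge into (0, 0, 15, 15), while a distant ROI stays separate; the merged regions would then be passed to the YOLO detector.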


2020, Vol 17 (6), pp. 172988142097227
Author: Thomas Andzi-Quainoo Tawiah

Autonomous vehicles include driverless, self-driving, and robotic cars, and other platforms capable of sensing and interacting with their environment and navigating without human help. Semiautonomous vehicles, by contrast, achieve partial autonomy with human intervention, for example, in driver-assisted vehicles. Autonomous vehicles first interact with their surroundings using mounted sensors. Typically, visual sensors acquire images, and computer vision, signal processing, machine learning, and other techniques are applied to acquire, process, and extract information. The control subsystem interprets sensory information to identify an appropriate navigation path to the destination and an action plan to carry out tasks. Feedback is also elicited from the environment to improve behavior. To increase sensing accuracy, autonomous vehicles are equipped with many sensors [light detection and ranging (LiDAR), infrared, sonar, inertial measurement units, etc.] as well as a communication subsystem. Autonomous vehicles face several challenges, such as unknown environments, blind spots (unseen views), non-line-of-sight scenarios, poor sensor performance due to weather conditions, sensor errors, false alarms, limited energy, limited computational resources, algorithmic complexity, human–machine communication, and size and weight constraints. To tackle these problems, several algorithmic approaches have been implemented covering the design of sensors, processing, control, and navigation. This review seeks to provide up-to-date information on the requirements, algorithms, and main challenges in the use of machine vision–based techniques for navigation and control in autonomous vehicles. An application using a land-based vehicle as an Internet of Things-enabled platform for pedestrian detection and tracking is also presented.


2020, Vol 38 (5), pp. 2019-2036
Authors: Bao Peng, Zhi-Bin Chen, Erkang Fu, Zi-Chuan Yi

Intelligent surveillance is an important management method for the construction and operation of power stations such as wind and solar plants. The identification and detection of equipment, facilities, and personnel and their behavior are key technologies for the ubiquitous power Internet of Things. This paper proposes a video solution based on the support vector machine (SVM) and histogram of oriented gradients (HOG) methods for pedestrian safety problems that are common in night driving. First, a series of image-preprocessing methods is used to optimize night images and detect lane lines. Second, the image is divided into intelligent regions to adapt to different road environments. Finally, the HOG and SVM methods are used to optimize pedestrian detection on a Linux system, which reduces the number of false alarms and the workload of the detection algorithm. Test results show that the system can successfully detect pedestrians at night. With image-preprocessing optimization, the correct rate of nighttime pedestrian detection is significantly improved and can reach 92.4%. After the division regions are optimized, the number of false alarms decreases significantly, and the average frame rate of the optimized video reaches 28 frames per second.
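As a rough illustration of the night-image preprocessing step, a gamma correction such as the following lifts dark regions before HOG features are extracted (the gamma value and function name are illustrative; the paper's exact preprocessing chain is not specified here):

```python
import numpy as np

def enhance_night_image(img: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Brighten a dark 8-bit frame with gamma correction.

    gamma < 1 lifts shadows: intensities are normalized to [0, 1],
    raised to the power `gamma`, and rescaled back to [0, 255].
    """
    norm = img.astype(np.float64) / 255.0
    return (np.power(norm, gamma) * 255.0).astype(np.uint8)
```

A pixel value of 64 is roughly doubled to 127 with gamma = 0.5, while black (0) stays black, so dark pedestrians gain contrast without clipping bright headlights.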


2014, Vol 945-949, pp. 1837-1841
Authors: Mei Hua Xu, Huai Meng Zheng, Chen Jun Xia

Pedestrian detection has broad application prospects in automotive driver-assistance systems, but real-time performance is very poor in most commonly used detection methods. This paper presents a fast algorithm for real-time pedestrian detection. Local Binary Patterns (LBP) describe the local texture information at low computational cost, a HOG descriptor extracts the typical features of a pedestrian’s edges, and an SVM is then trained and used for classification on the INRIA and MIT databases. While scanning the images, regions of interest are extracted to speed up detection. A series of experiments shows that the proposed pedestrian detection strategy is effective and efficient.
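The LBP texture feature mentioned above can be illustrated with the basic 8-neighbour operator, shown here for a single 3x3 patch (a sketch; the paper's exact LBP variant may differ):

```python
import numpy as np

def lbp_code(patch: np.ndarray) -> int:
    """Basic 8-neighbour Local Binary Pattern code of the centre pixel
    of a 3x3 patch: each neighbour >= centre sets one bit, giving an
    8-bit texture code in [0, 255]."""
    center = patch[1, 1]
    # clockwise from the top-left neighbour
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, n in enumerate(neighbours) if n >= center)
```

A uniform patch yields code 255 (all neighbours equal the centre), while a bright centre surrounded by darker pixels yields 0; a histogram of these codes over a detection window forms the texture descriptor.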


2021, Vol 11 (13), pp. 6016
Authors: Jinsoo Kim, Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing RGB images and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks using an RGB image and a height map converted from the BEV representation of LiDAR’s point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite dataset. The results demonstrate that detection accuracy with integrated PCD BEV representations is superior to that achieved with an RGB camera alone. In addition, robustness is improved by significantly enhancing detection accuracy even when the target objects are partially occluded when viewed from the front, which demonstrates that the proposed algorithm outperforms the conventional RGB-based model.
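The parallel-branch fusion via non-maximum suppression described above can be sketched as follows, assuming each branch returns (box, score) pairs; the names and the IoU threshold are illustrative assumptions:

```python
def iou(a, b) -> float:
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def fuse_detections(rgb_dets, bev_dets, iou_thr: float = 0.5):
    """Pool (box, score) detections from the RGB and BEV branches and
    apply greedy non-maximum suppression: keep the highest-scoring box
    and drop any box overlapping a kept one above `iou_thr`."""
    dets = sorted(rgb_dets + bev_dets, key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in dets:
        if all(iou(box, k[0]) < iou_thr for k in kept):
            kept.append((box, score))
    return kept
```

When the RGB and BEV branches fire on the same object, the lower-scoring duplicate is suppressed, while an object seen by only one branch (e.g. occluded in the camera view but visible in the height map) survives fusion.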


2021, Vol 13 (10), pp. 1930
Authors: Gabriel Loureiro, André Dias, Alfredo Martins, José Almeida

The use and study of Unmanned Aerial Vehicles (UAVs) have been increasing over the years due to their applicability in operations such as search and rescue, delivery, and surveillance. Considering the increased presence of these vehicles in the airspace, it becomes necessary to consider the safety issues or failures that UAVs may experience and the appropriate responses. Moreover, in many missions the vehicle will not return to its original location; if it cannot reach the intended landing spot, it needs the onboard capability to estimate the best area in which to land safely. This paper addresses the scenario of detecting a safe landing spot during operation. The algorithm classifies incoming Light Detection and Ranging (LiDAR) data and stores the locations of suitable areas. The method analyzes geometric features of the point cloud data and detects potentially suitable spots. It uses Principal Component Analysis (PCA) to find planes in point cloud clusters; areas whose slope is below a threshold are considered potential landing spots. These spots are then evaluated with respect to ground and vehicle conditions such as the distance to the UAV, the presence of obstacles, the area’s roughness, and the spot’s slope. The output of the algorithm is the optimal spot to land, which can vary during operation. The algorithm was evaluated in simulated scenarios and on an experimental dataset, demonstrating its suitability for real-time operation.
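The PCA plane fit and slope threshold described above can be sketched as follows: the eigenvector of the point cluster's covariance matrix with the smallest eigenvalue approximates the surface normal, and its angle to the vertical gives the slope (the function names and the 10-degree threshold are illustrative assumptions):

```python
import numpy as np

def slope_deg(points: np.ndarray) -> float:
    """Fit a plane to an Nx3 point cluster via PCA and return its slope
    in degrees. The covariance eigenvector with the smallest eigenvalue
    is the plane normal; its angle to the z-axis is the slope."""
    centred = points - points.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(centred.T))  # eigenvalues ascending
    normal = vecs[:, 0]
    cos_tilt = abs(normal[2]) / np.linalg.norm(normal)
    return float(np.degrees(np.arccos(np.clip(cos_tilt, -1.0, 1.0))))

def is_landing_candidate(points: np.ndarray, max_slope_deg: float = 10.0) -> bool:
    """Accept a cluster as a potential landing spot if its slope is below
    the threshold (10 degrees here is an assumed value)."""
    return slope_deg(points) <= max_slope_deg
```

A horizontal cluster yields a slope near 0 degrees and passes the test, while a cluster lying on the plane z = x yields 45 degrees and is rejected; the surviving candidates would then be ranked by distance, roughness, and obstacles.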


2016, Vol 14 (1), pp. 172988141769231
Authors: Yingfeng Cai, Youguo He, Hai Wang, Xiaoqiang Sun, Long Chen, et al.

The emergence and development of deep learning in the machine learning field provide a new method for vision-based pedestrian recognition. To achieve better performance in this application, an improved weakly supervised hierarchical deep learning pedestrian recognition algorithm with two-dimensional deep belief networks is proposed. The improvements address weaknesses in the structure and training methods of existing classifiers. First, the traditional one-dimensional deep belief network is expanded to two dimensions, which allows the image matrix to be loaded directly and preserves more information from the sample space. Then, a discrimination regularization term with a small weight is added to the traditional unsupervised training objective, transforming the original unsupervised training into weakly supervised training and giving the extracted features discriminative ability. Multiple sets of comparative experiments show that the proposed algorithm achieves a better recognition rate than other deep learning algorithms and outperforms most existing state-of-the-art methods on the non-occluded pedestrian dataset, while performing fairly on the weakly and heavily occluded datasets.
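The idea of adding a small-weight discriminative regularizer to an unsupervised objective can be illustrated generically as follows; this is a sketch of the concept, not the paper's exact objective or network:

```python
import numpy as np

def weakly_supervised_loss(x, x_recon, logits, label, reg_weight=0.01):
    """Unsupervised reconstruction loss plus a small-weight discriminative
    regularization term (softmax cross-entropy on a class label).

    With reg_weight = 0 this reduces to purely unsupervised training;
    a small positive weight nudges the learned features toward class
    separability without dominating the reconstruction objective.
    """
    recon = np.mean((x - x_recon) ** 2)          # unsupervised term
    probs = np.exp(logits - logits.max())         # stable softmax
    probs /= probs.sum()
    xent = -np.log(probs[label] + 1e-12)          # discriminative term
    return recon + reg_weight * xent
```

Perfect reconstruction with a confident, correct class prediction gives a near-zero loss, while a wrong prediction raises the loss only mildly because the regularizer carries a small weight.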

