Pedestrian detection in poor weather conditions using moving camera

Author(s):  
Imen Jegham ◽  
Anouar Ben Khalifa
2018 ◽  
Vol 7 (3.6) ◽  
pp. 294
Author(s):  
Shantanu Misra ◽  
Vedika Parvez ◽  
Tarush Singh ◽  
E Chitra

Vehicle collisions leading to life-threatening accidents are a common problem that is increasing noticeably. This has necessitated Driver Assistance Systems (DAS), which help drivers sense nearby obstacles and drive safely. However, their inefficiency in unfavorable weather conditions, overcrowded roads, and low signal penetration rates in India posed many challenges during implementation. In this paper, we present a portable Driver Assistance System that uses augmented reality. The headset model comprises five subsystems working in conjunction to assist the driver. The pedestrian detection module, along with the driver alert system, helps the driver focus attention on obstacles in his line of sight, while the speech recognition, gesture recognition, and GPS navigation modules together prevent the driver from getting distracted while driving. By addressing these two root causes of accidents, a cost-effective, portable, and holistic driver assistance system has been developed.


2020 ◽  
Vol 60 ◽  
pp. 77-96 ◽  
Author(s):  
Anouar Ben Khalifa ◽  
Ihsen Alouani ◽  
Mohamed Ali Mahjoub ◽  
Najoua Essoukri Ben Amara

2020 ◽  
Vol 17 (6) ◽  
pp. 172988142097227
Author(s):  
Thomas Andzi-Quainoo Tawiah

Autonomous vehicles include driverless, self-driving, and robotic cars, and other platforms capable of sensing and interacting with their environment and navigating without human help. Semiautonomous vehicles, on the other hand, achieve partial autonomy with human intervention, for example, in driver-assisted vehicles. Autonomous vehicles first interact with their surroundings using mounted sensors. Typically, visual sensors are used to acquire images, and computer vision, signal processing, machine learning, and other techniques are applied to acquire, process, and extract information. The control subsystem interprets sensory information to identify an appropriate navigation path to the destination and an action plan to carry out tasks. Feedback is also elicited from the environment to improve behavior. To increase sensing accuracy, autonomous vehicles are equipped with many sensors [light detection and ranging (LiDAR), infrared, sonar, inertial measurement units, etc.], as well as a communication subsystem. Autonomous vehicles face several challenges, such as unknown environments, blind spots (unseen views), non-line-of-sight scenarios, poor sensor performance due to weather conditions, sensor errors, false alarms, limited energy, limited computational resources, algorithmic complexity, human-machine communication, and size and weight constraints. To tackle these problems, several algorithmic approaches have been implemented, covering the design of sensors, processing, control, and navigation. This review seeks to provide up-to-date information on the requirements, algorithms, and main challenges in the use of machine vision-based techniques for navigation and control in autonomous vehicles. An application using a land-based vehicle as an Internet of Things-enabled platform for pedestrian detection and tracking is also presented.


2020 ◽  
Vol 10 (3) ◽  
pp. 809 ◽  
Author(s):  
Yunfan Chen ◽  
Hyunchul Shin

Pedestrian-related accidents are much more likely to occur at night, when visible-light (VI) cameras are much less effective. Unlike VI cameras, infrared (IR) cameras work in total darkness. However, IR images have several drawbacks, such as low resolution, noise, and thermal energy characteristics that differ depending on the weather. To overcome these drawbacks, we propose an IR camera system that identifies pedestrians at night using a novel attention-guided encoder-decoder convolutional neural network (AED-CNN). In AED-CNN, encoder-decoder modules are introduced to generate multi-scale features, with new skip connection blocks incorporated into the decoder to combine the feature maps from the encoder and decoder modules. This architecture increases context information, which helps extract discriminative features from low-resolution, noisy IR images. Furthermore, we propose an attention module to re-weight the multi-scale features generated by the encoder-decoder module. The attention mechanism effectively highlights pedestrians while suppressing background interference, which helps detect pedestrians under various weather conditions. Experiments on two challenging datasets demonstrate the superior performance of our method, which improves the precision of the state-of-the-art by 5.1% and 23.78% on the Keimyung University (KMU) and Computer Vision Center (CVC)-09 pedestrian datasets, respectively.
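
The attention re-weighting step described in this abstract can be illustrated with a minimal sketch in plain Python. The softmax-over-scales weighting and the function names below are illustrative assumptions, not the actual AED-CNN implementation:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def reweight_scales(feature_maps, scores):
    """Attention-style re-weighting of multi-scale feature maps.

    feature_maps: list of 2-D grids (one per scale), all the same size.
    scores: one attention score per scale (learned in the real network).
    Returns the attention-weighted sum of the maps.
    """
    weights = softmax(scores)
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for fmap, wgt in zip(feature_maps, weights):
        for i in range(h):
            for j in range(w):
                fused[i][j] += wgt * fmap[i][j]
    return fused
```

With equal scores the fusion reduces to a plain average of the scales; in the paper's setting the scores would instead be produced by the attention module, so pedestrian-bearing scales dominate the fused map.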


2020 ◽  
Vol 37 (2) ◽  
pp. 209-216
Author(s):  
Bilel Tarchoun ◽  
Anouar Ben Khalifa ◽  
Selma Dhifallah ◽  
Imen Jegham ◽  
Mohamed Mahjoub

2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Nicola Bernini ◽  
Massimo Bertozzi ◽  
Pietro Cerri ◽  
Rean Isabella Fedriga

This paper presents the results obtained by the 2WIDE_SENSE Project, an EU-funded project aimed at developing a low-cost camera sensor able to acquire the full spectrum from the visible bandwidth to the Short Wave InfraRed (SWIR) one (from 400 to 1700 nm). Two specific applications have been evaluated, both related to the automotive field: one regarding the possibility of detecting icy and wet surfaces in front of the vehicle, and the other regarding pedestrian detection capability. The former application relies on the physical fact that water strongly absorbs electromagnetic radiation in the SWIR band around 1450 nm, so an icy or wet pavement should appear dark; the latter is based on the observation that the amount of radiation in the SWIR band remains quite high even at night and in poor weather conditions. Results show that the combined use of the SWIR and visible spectra is a promising approach, although it is not always effective in outdoor environments.
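
The dark-pavement cue from the water-absorption argument can be sketched as a toy per-pixel test. The 0.5 darkness ratio and the function name below are hypothetical illustrations, not values or code from the 2WIDE_SENSE project:

```python
def flag_wet_or_icy(swir_1450, reference, ratio_threshold=0.5):
    """Flag pixels whose 1450 nm SWIR response is unusually dark
    relative to a reference band, hinting at water absorption.

    swir_1450, reference: 2-D grids of pixel intensities (same shape).
    ratio_threshold: a pixel is flagged when its SWIR value falls below
        this fraction of the reference value (illustrative choice).
    Returns a boolean mask of suspect (wet or icy) pixels.
    """
    mask = []
    for row_s, row_r in zip(swir_1450, reference):
        mask.append([r > 0 and s < ratio_threshold * r
                     for s, r in zip(row_s, row_r)])
    return mask
```

In practice such a ratio test would need calibration against illumination and surface reflectance, which is part of what makes the outdoor case harder, as the abstract notes.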


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 62775-62784 ◽  
Author(s):  
P. Tumas ◽  
A. Nowosielski ◽  
A. Serackis

2013 ◽  
Vol 11 ◽  
pp. 101-105 ◽  
Author(s):  
H. Lietz ◽  
M. Ritter ◽  
R. Manthey ◽  
G. Wanielik

Abstract. During the last decade, modern Pedestrian Detection Systems have made massive use of the steadily growing number of high-performance image acquisition sensors. Within a naturalistic driving environment, many different and heterogeneous scenes occur, caused by varying illumination and weather conditions. Unfortunately, current systems do not work properly under these harsh conditions. The aim of this article is to investigate and evaluate video scenes from an open-source dataset using various image features, in order to create a basis for robust and more accurate object detection.


Author(s):  
Li Tang ◽  
Yunpeng Shi ◽  
Qing He ◽  
Adel W. Sadek ◽  
Chunming Qiao

This paper analyzes the performance of the Light Detection and Ranging (Lidar) sensor in detecting pedestrians under different weather conditions. Lidar is a key sensor in autonomous vehicles, providing high-resolution object information, so it is important to analyze its performance. The study involves an autonomous bus performing several pedestrian detection tests in a parking lot at the University at Buffalo. Comparing pedestrian detection results on rainy days with those on sunny days shows that rain can cause unstable performance and even failures of Lidar sensors to detect pedestrians in time. After analyzing the test data, three logit models are built to estimate the probability of Lidar detection failure. Rainy weather plays an important role in Lidar detection performance; the distance between the vehicle and the pedestrian, as well as the vehicle's velocity, are also important. This paper suggests a way to improve Lidar detection performance in autonomous vehicles.
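
A binary logit of the kind described can be sketched as follows. The coefficient values and variable names here are placeholders for illustration, not the fitted models from the paper:

```python
import math

def failure_probability(rain, distance_m, speed_mps, coef):
    """Binary logit: P(Lidar fails to detect the pedestrian in time).

    rain: 1 on rainy days, 0 otherwise.
    distance_m: vehicle-to-pedestrian distance in meters.
    speed_mps: vehicle speed in meters per second.
    coef: (b0, b_rain, b_dist, b_speed) -- placeholder coefficients,
          not the paper's estimates.
    """
    b0, b_rain, b_dist, b_speed = coef
    utility = b0 + b_rain * rain + b_dist * distance_m + b_speed * speed_mps
    # Logistic link maps the linear utility to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-utility))
```

With a positive rain coefficient, the model reproduces the qualitative finding that rainy days raise the estimated failure probability at the same distance and speed.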

