Developing a New Driving Simulator Task to Assess Drivers' Functional Object Detection

2010 ◽  
Author(s):  
Richard R. Goodenough ◽  
Johnell O. Brooks ◽  
Matthew C. Crisler ◽  
William L. Logan
2012 ◽  
Vol 26 (4) ◽  
pp. 240-256
Author(s):  
Richard R. Goodenough ◽  
Johnell O. Brooks ◽  
Matthew C. Crisler ◽  
Patrick J. Rosopa


2021 ◽  
Vol 12 (1) ◽  
pp. 281
Author(s):  
Jaesung Jang ◽  
Hyeongyu Lee ◽  
Jong-Chan Kim

For safe autonomous driving, deep neural network (DNN)-based perception systems play an essential role, and they require a vast amount of driving images to be manually collected and labeled with ground truth (GT) for training and validation. Motivated by the high cost and unavoidable human errors of manual GT generation, this study presents an open-source automatic GT generation tool, CarFree, based on the Carla autonomous driving simulator. In this way, we aim to democratize object detection dataset generation, a daunting task previously feasible only for large companies and institutes due to its high cost. CarFree comprises (i) a data extraction client that automatically collects relevant information from the Carla simulator’s server and (ii) post-processing software that produces precise 2D bounding boxes of vehicles and pedestrians on the gathered driving images. Our evaluation results show that CarFree can generate a considerable amount of realistic driving images along with their GTs in a reasonable time. Moreover, using synthesized training images with artificially made unusual weather and lighting conditions, which are difficult to obtain in real-world driving scenarios, CarFree significantly improves object detection accuracy in the real world, particularly in harsh environments. With CarFree, we expect users to generate a variety of object detection datasets in a hassle-free way.
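The post-processing step the abstract describes, turning the simulator's 3D object state into precise 2D image-space boxes, can be illustrated with a pinhole-camera projection. The sketch below is illustrative only: the function names, camera intrinsics, and camera-frame box representation are assumptions for the example, not CarFree's actual code or the Carla API.

```python
def box_corners(center, extent):
    """Eight corners of an axis-aligned 3D box given its center and
    half-extents, all in the camera frame (x right, y down, z forward)."""
    cx, cy, cz = center
    ex, ey, ez = extent
    return [(cx + sx * ex, cy + sy * ey, cz + sz * ez)
            for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]

def project_bbox2d(center, extent, fx, fy, u0, v0, width, height):
    """Project the box corners through a pinhole camera (focal lengths
    fx, fy; principal point u0, v0) and return the clipped axis-aligned
    2D box (u_min, v_min, u_max, v_max), or None if the box is entirely
    behind the camera or outside the image."""
    us, vs = [], []
    for x, y, z in box_corners(center, extent):
        if z <= 0.0:          # corner behind the image plane: skip it
            continue
        us.append(fx * x / z + u0)
        vs.append(fy * y / z + v0)
    if not us:
        return None
    u_min, u_max = max(0.0, min(us)), min(float(width), max(us))
    v_min, v_max = max(0.0, min(vs)), min(float(height), max(vs))
    if u_min >= u_max or v_min >= v_max:
        return None           # projects entirely outside the image
    return (u_min, v_min, u_max, v_max)
```

In a real pipeline the box center and extents would first be transformed from world coordinates into the camera frame using the sensor's pose, and occlusion between objects would also need to be handled.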


Author(s):  
Carlos Gómez-Huélamo ◽  
Javier Del Egido ◽  
Luis Miguel Bergasa ◽  
Rafael Barea ◽  
Elena López-Guillén ◽  
...  

Autonomous Driving (AD) promises an efficient, comfortable and safe driving experience. Nevertheless, fatalities involving vehicles equipped with Automated Driving Systems (ADSs) are on the rise, especially those related to the perception module of the vehicle. This paper presents a real-time and power-efficient 3D Multi-Object Detection and Tracking (DAMOT) method proposed for Intelligent Vehicles (IV) applications, allowing the vehicle to track 360° surrounding objects as a preliminary stage for trajectory forecasting, preventing collisions and preparing the ego-vehicle for future traffic scenarios. First, we present our DAMOT pipeline based on Fast Encoders for object detection and a combination of a 3D Kalman Filter and the Hungarian Algorithm, used for state estimation and data association respectively. We extend our previous work by elaborating a preliminary version of sensor-fusion-based DAMOT, merging the features extracted by a Convolutional Neural Network (CNN) from camera information for long-term re-identification with the obstacles retrieved by the 3D object detector. Both pipelines exploit lightweight Linux containers using the Docker approach to provide the system with isolation, flexibility and portability, and use the Robot Operating System (ROS) for standard communication in robotics. Second, both pipelines are validated using the recently proposed KITTI-3DMOT evaluation tool, which demonstrates the full strength of 3D localization and tracking of a MOT system. Finally, the most efficient architecture is validated in several traffic scenarios implemented in the CARLA (Car Learning to Act) open-source driving simulator and in our real-world autonomous electric car using the NVIDIA AGX Xavier, an AI embedded system for autonomous machines, studying its performance in a controlled but realistic urban environment with real-time execution.
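The state-estimation and data-association stage the abstract describes (a 3D Kalman Filter combined with the Hungarian Algorithm) can be sketched minimally. In this illustrative pure-Python version the predicted track positions are assumed to come from the Kalman filter's predict step, and brute-force enumeration of assignments stands in for the Hungarian algorithm, which is fine for a handful of objects but not for production use.

```python
import math
from itertools import permutations

def associate(tracks, detections, max_dist=2.0):
    """Match predicted track positions to new detections by minimizing
    the total 3D Euclidean distance, then gate matches by max_dist.
    Brute-force enumeration of injective assignments stands in for the
    Hungarian algorithm (same optimum, exponential cost).
    Returns (matches, unmatched_track_ids, unmatched_detection_ids)."""
    matches = []
    if tracks and detections:
        n_t, n_d = len(tracks), len(detections)
        best, best_cost = [], float("inf")
        if n_t <= n_d:
            # Assign each track i a distinct detection p[i].
            candidates = ([(i, p[i]) for i in range(n_t)]
                          for p in permutations(range(n_d), n_t))
        else:
            # More tracks than detections: assign detections to tracks.
            candidates = ([(p[j], j) for j in range(n_d)]
                          for p in permutations(range(n_t), n_d))
        for pairs in candidates:
            cost = sum(math.dist(tracks[i], detections[j]) for i, j in pairs)
            if cost < best_cost:
                best, best_cost = pairs, cost
        # Gate: discard pairs farther apart than the matching threshold.
        matches = [(i, j) for i, j in best
                   if math.dist(tracks[i], detections[j]) <= max_dist]
    matched_t = {i for i, _ in matches}
    matched_d = {j for _, j in matches}
    unmatched_t = [i for i in range(len(tracks)) if i not in matched_t]
    unmatched_d = [j for j in range(len(detections)) if j not in matched_d]
    return matches, unmatched_t, unmatched_d
```

Unmatched detections would then spawn new tracks and unmatched tracks would be aged out, with each matched track's Kalman state updated from its paired detection.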


2004 ◽  
Author(s):  
Guihua Yang ◽  
Farnaz Baniahmad ◽  
Beverly K. Jaeger ◽  
Ronald R. Mourant

CICTP 2019 ◽  
2019 ◽  
Author(s):  
Lanfang Zhang ◽  
Kun Zhao ◽  
Xuekun Wang ◽  
Shuo Liu
