A Sensor Simulation Framework for Training and Testing Robots and Autonomous Vehicles

2021 ◽  
Vol 1 (2) ◽  
Author(s):  
Asher Elmquist ◽  
Radu Serban ◽  
Dan Negrut

Abstract Computer simulation can be a useful tool when designing robots expected to operate independently in unstructured environments. In this context, one needs to simulate the dynamics of the robot’s mechanical system, the environment in which the robot operates, and the sensors which facilitate the robot’s perception of the environment. Herein, we focus on the sensing simulation task by presenting a virtual sensing framework built alongside an open-source, multi-physics simulation platform called Chrono. This framework supports camera, lidar, GPS, and IMU simulation. We discuss their modeling as well as the noise and distortion implemented to increase the realism of the synthetic sensor data. We close with two examples that show the sensing simulation framework at work: one pertains to a reduced scale autonomous vehicle and the second is related to a vehicle driven in a digital replica of a Madison neighborhood.
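The noise and distortion models in the framework are specific to Chrono and are not reproduced here; as a generic illustration of the idea, GPS and IMU readings are commonly synthesized by corrupting the ideal signal with white Gaussian noise plus a slowly drifting bias. A minimal sketch, with illustrative parameter values not taken from the paper:

```python
import numpy as np

def noisy_gps(true_pos, sigma=0.5, rng=None):
    """Corrupt an ideal 3-D GPS position (metres) with white Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    return true_pos + rng.normal(0.0, sigma, size=3)

class NoisyIMU:
    """Ideal accelerometer reading corrupted by a random-walk bias
    plus white measurement noise."""
    def __init__(self, bias_walk=1e-4, white=1e-2, rng=None):
        self.rng = rng if rng is not None else np.random.default_rng(0)
        self.bias = np.zeros(3)
        self.bias_walk = bias_walk
        self.white = white

    def read(self, true_accel):
        # Bias drifts a little at every sample (random walk).
        self.bias += self.rng.normal(0.0, self.bias_walk, size=3)
        return true_accel + self.bias + self.rng.normal(0.0, self.white, size=3)
```

In a simulation loop, the ideal quantities would come from the dynamics engine and these wrappers would sit between the engine and the perception stack.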

Automation ◽  
2020 ◽  
Vol 1 (1) ◽  
pp. 17-32
Author(s):  
Thomas Kent ◽  
Anthony Pipe ◽  
Arthur Richards ◽  
Jim Hutchinson ◽  
Wolfgang Schuster

VENTURER was one of the first three UK government-funded research and innovation projects on Connected Autonomous Vehicles (CAVs) and was conducted predominantly in the South West region of the country. A series of increasingly complex scenarios conducted in an urban setting were used to: (i) evaluate the technology created as part of the project; (ii) systematically assess participant responses to CAVs; and (iii) inform the development of potential insurance models and legal frameworks. Developing this understanding contributed key steps towards facilitating the deployment of CAVs on UK roads. This paper aims to describe the VENTURER Project trials and their objectives, and to detail some of the key technologies used. Importantly, we aim to introduce some of the informative challenges that were overcome, and the subsequent project and technological lessons learned, in the hope of helping others plan and execute future CAV research. The project successfully integrated several technologies crucial to CAV development. These included: a Decision Making System using behaviour trees to make high-level decisions; a pilot-control system to smoothly and comfortably turn plans into throttle and steering actuation; sensing and perception systems to make sense of raw sensor data; and inter-CAV wireless communication capable of demonstrating vehicle-to-vehicle communication of potential hazards. The closely coupled technology integration, testing, and participant-focused trial schedule led to a greatly improved understanding of the engineering and societal barriers that CAV development faces. From a behavioural standpoint, the importance of reliability and repeatability far outweighs the need for novel trajectories; and while the sensor-to-perception capabilities are critical, the process of verification and validation is extremely time-consuming. Additionally, the capabilities that can be leveraged from inter-CAV communications show the potential for improved road safety.
Importantly, to effectively conduct human factors experiments in the CAV sector under consistent and repeatable conditions, one needs to define a scripted and stable set of scenarios that uses reliable equipment and a controllable environmental setting. This requirement can often be at odds with making significant technology developments, and if both are part of a project’s goals then they may need to be separated from each other.
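The behaviour trees used by the Decision Making System are not published in detail; the sketch below shows only the generic mechanism by which such trees reach a high-level decision. The node types follow standard behaviour-tree conventions, and the hazard/brake example is invented for illustration, not taken from the project:

```python
# Minimal behaviour-tree sketch: Sequence succeeds only if all children
# succeed; Selector returns the first non-failure result.

class Sequence:
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != "success":
                return status
        return "success"

class Selector:
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != "failure":
                return status
        return "failure"

class Condition:
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return "success" if self.fn() else "failure"

class Action:
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        self.fn()
        return "success"

# Illustrative tree: brake if a hazard is ahead, otherwise cruise.
commands = []
root = Selector(
    Sequence(Condition(lambda: True), Action(lambda: commands.append("brake"))),
    Action(lambda: commands.append("cruise")),
)
root.tick()
```

Ticking the tree at a fixed rate yields exactly the kind of repeatable, scripted decision logic the trials required.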


2016 ◽  
Author(s):  
Georg Tanzmeister

This dissertation focuses on the environment model for automated vehicles. A reliable model of the local environment, available in real time, is a prerequisite for almost any useful activity performed by a robot, such as planning motions to fulfill tasks. It is particularly important in safety-critical applications, such as autonomous vehicles in regular traffic. In this thesis, novel concepts for local mapping, tracking, the detection of principal moving directions, cost evaluations in motion planning, and road course estimation have been developed. An object- and sensor-independent grid representation forms the basis of all presented methods, enabling a generic and robust estimation of the environment. All approaches have been evaluated with sensor data from real road scenarios, and their performance has been experimentally demonstrated with a test vehicle. ...
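Object- and sensor-independent grid representations of the kind described are commonly implemented as occupancy grids updated in log-odds form, so that evidence from successive scans accumulates additively. A minimal sketch; the update constants are illustrative and not taken from the dissertation:

```python
import numpy as np

def update_grid(log_odds, hits, misses, l_hit=0.85, l_miss=-0.4):
    """One measurement update of a 2-D occupancy grid in log-odds form.
    `hits`/`misses` are boolean masks of cells observed occupied/free."""
    log_odds = log_odds.copy()
    log_odds[hits] += l_hit
    log_odds[misses] += l_miss
    # Clamp to avoid numerical saturation from repeated observations.
    return np.clip(log_odds, -10.0, 10.0)

def to_probability(log_odds):
    """Recover occupancy probability from log-odds."""
    return 1.0 / (1.0 + np.exp(-log_odds))
```

Cells never observed stay at log-odds 0, i.e. probability 0.5 (unknown), which is what makes the representation sensor-independent: any sensor that can label cells as hit or miss can feed the same grid.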


Author(s):  
Sai Rajeev Devaragudi ◽  
Bo Chen

Abstract This paper presents a Model Predictive Control (MPC) approach for longitudinal and lateral control of autonomous vehicles with a real-time local path planning algorithm. A heuristic graph search method (the A* algorithm) combined with piecewise Bezier curve generation is implemented for obstacle avoidance in autonomous driving applications. Constant time headway control is implemented for longitudinal motion to track lead vehicles and maintain a constant time gap. MPC is used to control the steering angle and the tractive force of the autonomous vehicle. Furthermore, a new method of developing Advanced Driver Assistance Systems (ADAS) algorithms and vehicle controllers using Model-In-the-Loop (MIL) testing is explored with the use of PreScan®. With PreScan®, various traffic scenarios are modeled and the sensor data are simulated using physics-based sensor models, which are fed to the controller for data processing and motion planning. Obstacle detection and collision avoidance are demonstrated using the presented MPC controller.
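Constant time headway control regulates the gap to the lead vehicle toward a target that grows linearly with the ego vehicle's speed, which is what keeps the time gap constant. A minimal sketch of the control law; the gains and spacing constants are illustrative, not taken from the paper:

```python
def headway_accel(gap, v_ego, v_lead, h=1.5, d0=5.0, k_gap=0.23, k_vel=0.74):
    """Constant-time-headway longitudinal controller.

    gap     -- current distance to the lead vehicle (m)
    v_ego   -- ego speed (m/s); v_lead -- lead vehicle speed (m/s)
    h       -- desired time headway (s); d0 -- standstill spacing (m)

    The desired gap is d0 + h * v_ego; the commanded acceleration
    corrects both the spacing error and the relative speed.
    """
    desired_gap = d0 + h * v_ego
    return k_gap * (gap - desired_gap) + k_vel * (v_lead - v_ego)
```

At the equilibrium (gap equal to d0 + h·v and matched speeds) the commanded acceleration is zero; a shrinking gap or slower lead vehicle produces braking.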


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 5035 ◽  
Author(s):  
Son ◽  
Jeong ◽  
Lee

When blind and deaf people are passengers in fully autonomous vehicles, an intuitive and accurate visualization screen should be provided for the deaf, and an audification system with speech-to-text (STT) and text-to-speech (TTS) functions should be provided for the blind. However, such systems cannot convey the fault self-diagnosis information and the instrument cluster information that indicate the current state of the vehicle while driving. This paper proposes an audification and visualization system (AVS) for an autonomous vehicle, based on deep learning and aimed at blind and deaf people, to solve this problem. The AVS consists of three modules. The data collection and management module (DCMM) stores and manages the data collected from the vehicle. The audification conversion module (ACM) has a speech-to-text submodule (STS) that recognizes a user's speech and converts it to text data, and a text-to-wave submodule (TWS) that converts text data to voice. The data visualization module (DVM) visualizes the collected sensor data, fault self-diagnosis data, etc., and places the visualized data according to the size of the vehicle's display. The experiments show that adjusting visualization graphic components in on-board diagnostics (OBD) was approximately 2.5 times faster than in a cloud server. In addition, the overall computation time of the AVS was approximately 2 ms less than that of the existing instrument cluster. Therefore, because the proposed AVS enables blind and deaf people to select only what they want to hear and see, it reduces transmission overload and greatly increases the safety of the vehicle. If the AVS were introduced in a real vehicle, it could prevent accidents involving disabled and other passengers in advance.


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4357 ◽  
Author(s):  
Babak Shahian Jahromi ◽  
Theja Tulabandhula ◽  
Sabri Cetin

Many sensor fusion frameworks have been proposed in the literature, using different combinations and configurations of sensors and fusion methods. Most of the focus has been on improving accuracy; however, the feasibility of implementing these frameworks in an autonomous vehicle is less explored. Some fusion architectures can perform very well in lab conditions using powerful computational resources; in real-world applications, however, they cannot be implemented on an embedded edge computer due to their high cost and computational needs. We propose a new hybrid multi-sensor fusion pipeline configuration that performs environment perception for autonomous vehicles, including road segmentation, obstacle detection, and tracking. This fusion framework uses a proposed encoder-decoder-based Fully Convolutional Neural Network (FCNx) and a traditional Extended Kalman Filter (EKF) nonlinear state estimator. It also uses a configuration of camera, LiDAR, and radar sensors best suited to each fusion method. The goal of this hybrid framework is to provide a cost-effective, lightweight, modular, and robust (in case of a sensor failure) fusion system. The FCNx algorithm improves road detection accuracy compared to benchmark models while maintaining real-time efficiency achievable on an autonomous vehicle's embedded computer. Tested on over 3K road scenes, our fusion algorithm shows better performance in various environment scenarios compared to baseline benchmark networks. Moreover, the algorithm has been implemented in a vehicle and tested using actual sensor data collected from it, performing real-time environment perception.
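The EKF half of such a hybrid pipeline is a standard nonlinear state estimator; its measurement-update step can be sketched generically as below. The state and measurement models used in the paper are not reproduced here, so the function takes the measurement model and its Jacobian as arguments:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update.

    x, P -- prior state estimate and covariance
    z    -- measurement vector
    h    -- nonlinear measurement function, H its Jacobian at x
    R    -- measurement noise covariance
    """
    y = z - h(x)                      # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

With radar and LiDAR each supplying their own `h`, `H`, and `R`, the same update fuses both sensors into one track, which is what makes the EKF branch cheap enough for an embedded edge computer.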


Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 899 ◽  
Author(s):  
Veli Ilci ◽  
Charles Toth

Recent developments in sensor technologies such as Global Navigation Satellite Systems (GNSS), Inertial Measurement Units (IMU), Light Detection and Ranging (LiDAR), radar, and cameras have led to emerging state-of-the-art autonomous systems, such as driverless vehicles or UAS (Unmanned Airborne System) swarms. These technologies necessitate accurate object-space information about the physical environment around the platform. This information can generally be provided by a suitable selection of the sensors, including sensor types and capabilities, the number of sensors, and their spatial arrangement. Since all these sensor technologies have different error sources and characteristics, rigorous sensor modeling is needed to eliminate or mitigate errors and obtain an accurate, reliable, and robust integrated solution. Mobile mapping systems are very similar to autonomous vehicles in their ability to reconstruct the environment around the platform; however, they differ considerably in operations and objectives. Mobile mapping vehicles use professional-grade sensors, such as geodetic-grade GNSS, tactical-grade IMUs, mobile LiDAR, and metric cameras, and the solution is created in post-processing. In contrast, autonomous vehicles use simple, inexpensive sensors, require real-time operation, and are primarily interested in identifying and tracking moving objects. In this study, the main objective was to assess the performance potential of autonomous vehicle sensor systems for obtaining high-definition maps, using only Velodyne sensor data to create accurate point clouds; no other sensor data were considered in this investigation. The results confirm that cm-level accuracy can be achieved.
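Creating a point cloud from a spinning LiDAR such as a Velodyne amounts to converting each return's range, azimuth, and elevation into Cartesian coordinates. A minimal sketch of that conversion; per-beam calibration and motion compensation, which matter greatly for cm-level accuracy, are deliberately omitted:

```python
import numpy as np

def lidar_to_xyz(ranges, azimuth_deg, elevation_deg):
    """Convert spinning-LiDAR returns (range in m, azimuth and elevation
    in degrees) to N x 3 Cartesian points in the sensor frame."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    x = ranges * np.cos(el) * np.cos(az)
    y = ranges * np.cos(el) * np.sin(az)
    z = ranges * np.sin(el)
    return np.stack([x, y, z], axis=-1)
```

Georeferencing these sensor-frame points into a map frame is where the rest of the navigation solution (or, as in this study, point-cloud-only registration) comes in.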


Author(s):  
Dwarkesh Iyengar ◽  
Diane L. Peters

Autonomous vehicles are a subject of intense research interest and are being approached by many researchers in different ways. Some of these approaches are based on pure simulation, while others involve investigations using hardware. One possible approach, which can be useful when investigating how autonomous vehicles might interact, involves the use of physical scaled-model vehicles, and the development of an appropriate vehicle is the focus of this paper. For this purpose, a commercially available 1:18 radio-controlled car is remodeled and modified. An onboard microcontroller unit (MCU) is used for sensor data acquisition and preliminary signal conditioning, as well as actuator control. The sensor array includes a gyroscope/accelerometer, a compass, and a speed encoder to find the angular and linear position of the car in a local coordinate frame, as well as a range finder to detect impending obstacles in the vehicle's planned path. This information is sent over a serial communication protocol to a master station via a 2.4 GHz wireless module. The master station consists of a National Instruments (NI) myRIO real-time FPGA module, where the local coordinates are used to formulate the position of the car in global coordinates; a user-defined control scheme is implemented, and the appropriate actuator signal is sent back wirelessly to the MCU on the car. The main purpose of using an independent, offsite control station is to isolate the main processing and increase the response speed to changing environmental factors. Furthermore, the myRIO contains the dynamic model of the car, which can be modified by linking it to a personal computer running the LabVIEW graphical user interface (GUI). This adds greater flexibility to the overall system, allowing the user to focus on the different control schemes to be implemented through the hardware setup.
This setup will be replicated for more cars, set in an urban traffic environment, and the interactions between the cars can then be studied and optimized.
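The conversion from the car's local coordinate frame to global coordinates performed on the master station is, in the planar case, a standard rotation by the heading followed by a translation to the car's global position. A minimal sketch, not the authors' LabVIEW/FPGA implementation:

```python
import numpy as np

def local_to_global(x_local, y_local, heading, origin):
    """Map a point from the car's local frame into the global frame.

    heading -- car yaw in radians (0 = global +x axis)
    origin  -- (x, y) of the car in the global frame
    """
    c, s = np.cos(heading), np.sin(heading)
    gx = origin[0] + c * x_local - s * y_local
    gy = origin[1] + s * x_local + c * y_local
    return gx, gy
```

With several cars reporting local odometry, applying this transform per car puts all of them into one shared frame, which is what makes studying their interactions possible.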


Author(s):  
Mhafuzul Islam ◽  
Mashrur Chowdhury ◽  
Hongda Li ◽  
Hongxin Hu

Vision-based navigation of autonomous vehicles primarily depends on deep neural network (DNN)-based systems, in which the controller obtains input from sensors/detectors, such as cameras, and produces a vehicle control output, such as a steering wheel angle, to navigate the vehicle safely in a roadway traffic environment. Typically, these DNN-based systems in an autonomous vehicle are trained through supervised learning; however, recent studies show that a trained DNN-based system can be compromised by perturbations or adverse inputs. Such perturbations can be introduced into the DNN-based systems of autonomous vehicles by unexpected roadway hazards, such as debris or roadblocks. In this study, we first introduce a hazardous roadway environment that can compromise the DNN-based navigational system of an autonomous vehicle and produce an incorrect steering wheel angle, which could cause crashes resulting in fatalities or injuries. Then, we develop a DNN-based autonomous vehicle driving system using object detection and semantic segmentation to mitigate the adverse effect of this type of hazard, helping the autonomous vehicle navigate safely around such hazards. We find that our DNN-based driving system, including hazardous object detection and semantic segmentation, improves the navigational ability of an autonomous vehicle to avoid a potential hazard by 21% compared with the traditional DNN-based autonomous vehicle driving system.
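The paper's DNN models are not reproduced here. As a toy illustration of how a semantic segmentation output can feed a steering decision around a hazard, the sketch below steers toward the image column with the most drivable pixels; the mask format and the offset-to-angle mapping are invented for illustration and are not the authors' method:

```python
import numpy as np

def steer_from_free_space(seg_mask, fov_deg=60.0):
    """seg_mask: H x W boolean array, True where a pixel is drivable.

    Pick the image column with the most drivable pixels and map its
    offset from the image centre to a steering angle within the FOV."""
    h, w = seg_mask.shape
    free_per_col = seg_mask.sum(axis=0)
    best_col = int(np.argmax(free_per_col))
    offset = (best_col - (w - 1) / 2.0) / ((w - 1) / 2.0)  # in [-1, 1]
    return offset * (fov_deg / 2.0)  # degrees; positive = steer right
```

A hazard segmented in the centre of the image removes drivable pixels there, pushing the chosen column, and hence the steering command, toward the unobstructed side.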


Author(s):  
Xing Xu ◽  
Minglei Li ◽  
Feng Wang ◽  
Ju Xie ◽  
Xiaohan Wu ◽  
...  

A human-like trajectory could give the occupants of an autonomous vehicle a feeling of safety and comfort, especially in corners. This paper focuses on planning a human-like trajectory along a section of road on a test track, using an optimal control method that reflects natural driving behaviour and the passengers' sense of naturalness and comfort, which could improve the future acceptability of driverless vehicles. A point-mass vehicle dynamics model is built in a curvilinear coordinate system, and an optimal trajectory is then generated using an optimal control method. The optimal control problem is formulated and solved with the Matlab tool GPOPS-II. Trials are carried out on a test track; the test data are collected and processed, and trajectory data in different corners are obtained. Different time-to-line-crossing (TLC) calculations are derived and applied to different track sections. The human drivers' trajectories and the optimal line are then compared using the TLC methods to assess their correlation. The results show that the optimal trajectory follows a similar trend to the human trajectories when driving through a corner, although it is not perfectly aligned with the tested trajectories; this conforms with people's driving intuition and could improve occupant comfort when cornering, and in turn the acceptability of AVs in the automotive market. Drivers tend to move gradually to the outside of the lane after passing the apex when cornering on roads with hard lines on both sides.
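Time-to-line-crossing (TLC) measures how long until the vehicle would cross a lane edge if its current lateral motion continued. The paper derives several TLC variants; the sketch below is only the simplest first-order formulation, not one of the paper's specific calculations:

```python
def tlc(lateral_offset, lane_half_width, lateral_speed):
    """First-order time-to-line-crossing (seconds).

    lateral_offset  -- current offset from lane centre (m, + toward left edge)
    lane_half_width -- distance from centre to either edge (m)
    lateral_speed   -- current lateral speed (m/s, + toward left edge)

    Returns the time until the relevant edge is reached, or infinity if
    the vehicle is not drifting toward an edge.
    """
    if lateral_speed == 0.0:
        return float("inf")
    edge = lane_half_width if lateral_speed > 0 else -lane_half_width
    t = (edge - lateral_offset) / lateral_speed
    return t if t > 0 else float("inf")
```

Comparing TLC profiles of the optimal line and the recorded human trajectories through each corner is what allows the similarity claim to be quantified rather than judged by eye.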


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2244
Author(s):  
S. M. Yang ◽  
Y. A. Lin

A safe path planning method for obstacle avoidance in autonomous vehicles has been developed. Based on the Rapidly Exploring Random Trees (RRT) algorithm, an improved algorithm integrating path pruning, smoothing, and optimization with geometric collision detection is shown to improve planning efficiency. Path pruning, a prerequisite to path smoothing, removes the redundant points generated by the random trees for a new path without introducing collisions with the obstacles. Path smoothing modifies the path so that it becomes continuously differentiable, with curvature implementable by the vehicle. Optimization selects a near-optimal path of the shortest distance among the feasible paths for motion efficiency. In the experimental verification, both a pure pursuit steering controller and a proportional-integral speed controller are applied to keep an autonomous vehicle tracking the planned path predicted by the improved RRT algorithm. It is shown that the vehicle can track the path efficiently and reach the destination safely, with an average tracking control deviation of 5.2% of the vehicle width. The path planning is also applied to lane changes, and the average deviation from the lane during and after lane changes remains within 8.3% of the vehicle width.
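Path pruning of the kind described, removing redundant random-tree waypoints without introducing collisions, is commonly done as a greedy shortcutting pass. A minimal sketch, with the collision check abstracted as a caller-supplied predicate; details will differ from the paper's implementation:

```python
def prune_path(path, collision_free):
    """Greedy shortcut pruning of an RRT path.

    path           -- list of waypoints from start to goal
    collision_free -- predicate (a, b) -> True if the straight segment
                      between waypoints a and b avoids all obstacles

    From each kept waypoint, jump to the farthest later waypoint
    reachable by a collision-free straight segment.
    """
    pruned = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        # Fall back toward i until the straight shortcut is collision-free;
        # j == i + 1 is always acceptable (it is an original tree edge).
        while j > i + 1 and not collision_free(path[i], path[j]):
            j -= 1
        pruned.append(path[j])
        i = j
    return pruned
```

The pruned polyline is what the subsequent smoothing stage turns into a continuously differentiable, curvature-feasible path.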

