Grid-based Environment Estimation for Local Autonomous Vehicle Navigation

2016 ◽  
Author(s):  
Georg Tanzmeister

This dissertation focuses on the environment model for automated vehicles. A reliable model of the local environment, available in real time, is a prerequisite for almost any useful activity performed by a robot, such as planning motions to fulfill tasks. It is particularly important in safety-critical applications, such as autonomous vehicles in regular traffic. In this thesis, novel concepts for local mapping, tracking, the detection of principal moving directions, cost evaluations in motion planning, and road course estimation have been developed. An object- and sensor-independent grid representation forms the basis of all presented methods, enabling a generic and robust estimation of the environment. All approaches have been evaluated with sensor data from real road scenarios, and their performance has been experimentally demonstrated with a test vehicle. ...
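The grid representation the dissertation builds on is commonly realized as a log-odds occupancy grid. As a loose illustration only (the increments, grid helpers, and update rule below are generic textbook assumptions, not the thesis's actual model):

```python
import math

# Minimal log-odds occupancy grid sketch. Each cell stores the log-odds
# of being occupied; sensor returns nudge it up, free-space observations
# nudge it down. The increment values are illustrative assumptions.
L_OCC, L_FREE = 0.85, -0.4

def make_grid(width, height):
    return [[0.0] * width for _ in range(height)]

def update_cell(grid, x, y, hit):
    """Fuse one measurement into cell (x, y); hit=True means a return."""
    grid[y][x] += L_OCC if hit else L_FREE

def occupancy_prob(grid, x, y):
    """Convert the cell's log-odds back to a probability in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + math.exp(grid[y][x]))

grid = make_grid(4, 4)
for _ in range(3):
    update_cell(grid, 1, 2, hit=True)   # three consistent returns
p = occupancy_prob(grid, 1, 2)          # confidence grows with evidence
```

Because the update is additive in log-odds, fusing measurements from different sensors is a matter of summing their increments, which is what makes the representation object- and sensor-independent.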

Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 899 ◽  
Author(s):  
Veli Ilci ◽  
Charles Toth

Recent developments in sensor technologies such as Global Navigation Satellite Systems (GNSS), Inertial Measurement Units (IMU), Light Detection and Ranging (LiDAR), radar, and cameras have led to emerging state-of-the-art autonomous systems, such as driverless vehicles or UAS (Unmanned Airborne Systems) swarms. These technologies necessitate the use of accurate object space information about the physical environment around the platform. This information can generally be provided by the suitable selection of the sensors, including sensor types and capabilities, the number of sensors, and their spatial arrangement. Since all these sensor technologies have different error sources and characteristics, rigorous sensor modeling is needed to eliminate or mitigate errors and obtain an accurate, reliable, and robust integrated solution. Mobile mapping systems are very similar to autonomous vehicles in terms of being able to reconstruct the environment around the platform. However, they differ considerably in operation and objectives. Mobile mapping vehicles use professional-grade sensors, such as geodetic-grade GNSS, tactical-grade IMU, mobile LiDAR, and metric cameras, and the solution is created in post-processing. In contrast, autonomous vehicles use simple, inexpensive sensors, require real-time operation, and are primarily interested in identifying and tracking moving objects. In this study, the main objective was to assess the performance potential of autonomous vehicle sensor systems to obtain high-definition maps using only Velodyne sensor data to create accurate point clouds. In other words, no other sensor data were considered in this investigation. The results have confirmed that cm-level accuracy can be achieved.
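Building a point cloud from a spinning LiDAR such as a Velodyne starts from converting each return's range, azimuth, and elevation into Cartesian coordinates. A minimal sketch of that conversion (the function name and argument conventions are assumptions, not Velodyne's packet format):

```python
import math

def spherical_to_cartesian(r, azimuth_deg, elevation_deg):
    """Convert one LiDAR return (range in m, angles in degrees) to x, y, z."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)   # forward
    y = r * math.cos(el) * math.sin(az)   # left
    z = r * math.sin(el)                  # up
    return x, y, z

# A return 10 m ahead at zero azimuth/elevation lies on the x-axis.
pt = spherical_to_cartesian(10.0, 0.0, 0.0)
```

In practice, each point must also be transformed by the platform's pose at the firing time before accumulation, which is where the trajectory accuracy discussed in the study enters.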


Automation ◽  
2020 ◽  
Vol 1 (1) ◽  
pp. 17-32
Author(s):  
Thomas Kent ◽  
Anthony Pipe ◽  
Arthur Richards ◽  
Jim Hutchinson ◽  
Wolfgang Schuster

VENTURER was one of the first three UK government-funded research and innovation projects on Connected Autonomous Vehicles (CAVs) and was conducted predominantly in the South West region of the country. A series of increasingly complex scenarios conducted in an urban setting were used to: (i) evaluate the technology created as a part of the project; (ii) systematically assess participant responses to CAVs; and (iii) inform the development of potential insurance models and legal frameworks. Developing this understanding contributed key steps towards facilitating the deployment of CAVs on UK roads. This paper aims to describe the VENTURER Project trials and their objectives, and to detail some of the key technologies used. Importantly, we aim to present some informative challenges that were overcome, and the subsequent project and technological lessons learned, in the hope of helping others plan and execute future CAV research. The project successfully integrated several technologies crucial to CAV development. These included: a decision-making system using behaviour trees to make high-level decisions; a pilot-control system to smoothly and comfortably turn plans into throttle and steering actuation; sensing and perception systems to make sense of raw sensor data; and inter-CAV wireless communication capable of demonstrating vehicle-to-vehicle communication of potential hazards. The closely coupled technology integration, testing, and participant-focused trial schedule led to a greatly improved understanding of the engineering and societal barriers that CAV development faces. From a behavioural standpoint, the importance of reliability and repeatability far outweighs the need for novel trajectories; and while the sensor-to-perception capabilities are critical, the process of verification and validation is extremely time consuming. Additionally, the added capabilities that can be leveraged from inter-CAV communications demonstrate the potential for improved road safety.
Importantly, to effectively conduct human factors experiments in the CAV sector under consistent and repeatable conditions, one needs to define a scripted and stable set of scenarios that uses reliable equipment and a controllable environmental setting. This requirement can often be at odds with making significant technology developments, and if both are part of a project’s goals then they may need to be separated from each other.


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6586
Author(s):  
Andrzej Stateczny ◽  
Marta Wlodarczyk-Sielicka ◽  
Pawel Burdziakowski

Autonomous vehicle navigation has been at the center of several major developments, both in civilian and defense applications [...]


2021 ◽  
Vol 1 (2) ◽  
Author(s):  
Asher Elmquist ◽  
Radu Serban ◽  
Dan Negrut

Computer simulation can be a useful tool when designing robots expected to operate independently in unstructured environments. In this context, one needs to simulate the dynamics of the robot’s mechanical system, the environment in which the robot operates, and the sensors which facilitate the robot’s perception of the environment. Herein, we focus on the sensing simulation task by presenting a virtual sensing framework built alongside an open-source, multi-physics simulation platform called Chrono. This framework supports camera, lidar, GPS, and IMU simulation. We discuss their modeling as well as the noise and distortion implemented to increase the realism of the synthetic sensor data. We close with two examples that show the sensing simulation framework at work: one pertains to a reduced-scale autonomous vehicle and the second is related to a vehicle driven in a digital replica of a Madison neighborhood.
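The noise and distortion mentioned for the synthetic sensors are typically additive models layered on the ideal measurement. As a hedged sketch of the general idea only (class name, parameters, and values are illustrative assumptions, not Chrono's actual sensor API):

```python
import random

# Toy additive noise model for a simulated gyroscope: true rate plus a
# constant bias plus zero-mean Gaussian noise. All parameters are
# illustrative assumptions.
class NoisyGyro:
    def __init__(self, bias=0.01, sigma=0.05, seed=0):
        self.bias = bias              # constant bias term (rad/s)
        self.sigma = sigma            # Gaussian noise std dev (rad/s)
        self.rng = random.Random(seed)

    def read(self, true_rate):
        """Return the true angular rate corrupted by bias and noise."""
        return true_rate + self.bias + self.rng.gauss(0.0, self.sigma)

gyro = NoisyGyro()
samples = [gyro.read(0.0) for _ in range(2000)]
mean_err = sum(samples) / len(samples)  # averages toward the bias term
```

Richer models add a slowly drifting bias (random walk) and quantization, but the structure stays the same: corrupt an ideal signal drawn from the physics simulation.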


Author(s):  
Sai Rajeev Devaragudi ◽  
Bo Chen

This paper presents a Model Predictive Control (MPC) approach for longitudinal and lateral control of autonomous vehicles with a real-time local path planning algorithm. A heuristic graph search method (the A* algorithm) combined with piecewise Bezier curve generation is implemented for obstacle avoidance in autonomous driving applications. Constant time headway control is implemented for longitudinal motion, to track lead vehicles and maintain a constant time gap. MPC is used to control the steering angle and the tractive force of the autonomous vehicle. Furthermore, a new method of developing Advanced Driver Assistance Systems (ADAS) algorithms and vehicle controllers using Model-In-the-Loop (MIL) testing is explored with the use of PreScan®. With PreScan®, various traffic scenarios are modeled and the sensor data are simulated by physics-based sensor models, which are fed to the controller for data processing and motion planning. Obstacle detection and collision avoidance are demonstrated using the presented MPC controller.
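The constant time headway policy named here tracks a desired gap that grows with the ego vehicle's speed. A minimal sketch of such a spacing controller (the gains and parameter values are assumptions for illustration, not the paper's tuned values):

```python
# Constant time headway spacing policy: desired gap = d0 + h * ego_speed,
# regulated with proportional terms on gap error and relative speed.
# Gains h, d0, kp, kv are illustrative assumptions.
def headway_accel(gap, ego_speed, lead_speed, h=1.5, d0=5.0, kp=0.3, kv=0.5):
    """Return a longitudinal acceleration command (m/s^2)."""
    desired_gap = d0 + h * ego_speed
    gap_error = gap - desired_gap
    speed_error = lead_speed - ego_speed
    return kp * gap_error + kv * speed_error

# Too close and closing fast -> the command is braking (negative).
a = headway_accel(gap=10.0, ego_speed=20.0, lead_speed=15.0)
```

At the desired gap with matched speeds the command is zero, which is the constant-time-gap equilibrium the abstract refers to.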


Author(s):  
Xin Jin ◽  
Jacqueline M. Luff ◽  
Shalabh Gupta ◽  
Asok Ray

This paper presents a Statistical Mechanics-inspired navigation algorithm with dynamic adaptation and complete coverage of unknown environments, built upon the concept of the generalized Ising model. The algorithm enables autonomous vehicles to cover all areas in the environment, avoid unknown obstacles, and adapt to target neighborhoods. Potential applications of this algorithm are humanitarian de-mining, hazard detection, and floor-cleaning tasks. The algorithm has been validated on a Player/Stage simulator with a minesweeping example.


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 5035 ◽  
Author(s):  
Son ◽  
Jeong ◽  
Lee

When blind and deaf people are passengers in fully autonomous vehicles, an intuitive and accurate visualization screen should be provided for the deaf, and an audification system with speech-to-text (STT) and text-to-speech (TTS) functions should be provided for the blind. However, such systems do not convey the fault self-diagnosis information and the instrument cluster information that indicate the current state of the vehicle while driving. This paper proposes an audification and visualization system (AVS) of an autonomous vehicle for blind and deaf people, based on deep learning, to solve this problem. The AVS consists of three modules. The data collection and management module (DCMM) stores and manages the data collected from the vehicle. The audification conversion module (ACM) has a speech-to-text submodule (STS) that recognizes a user’s speech and converts it to text data, and a text-to-wave submodule (TWS) that converts text data to voice. The data visualization module (DVM) visualizes the collected sensor data, fault self-diagnosis data, etc., and places the visualized data according to the size of the vehicle’s display. The experiment shows that adjusting visualization graphic components in on-board diagnostics (OBD) was approximately 2.5 times faster than doing so in a cloud server. In addition, the overall computational time of the AVS was approximately 2 ms faster than that of the existing instrument cluster. Therefore, because the AVS proposed in this paper enables blind and deaf people to select only what they want to hear and see, it reduces the transmission overload and greatly increases the safety of the vehicle. If the AVS is introduced in a real vehicle, it can prevent accidents for disabled and other passengers in advance.


2021 ◽  
Vol 11 (8) ◽  
pp. 3464
Author(s):  
Balázs Németh ◽  
Péter Gáspár

The design of the motion of autonomous vehicles at non-signalized intersections, with the consideration of multiple criteria and safety constraints, is a challenging problem involving several tasks. In this paper, a learning-based control solution with guarantees for collision avoidance is proposed. The design problem is formulated in a novel way by dividing the control problem, which reduces complexity and enables real-time computation. First, an environment model for the intersection was created based on constrained quadratic optimization, with which guarantees on collision avoidance can be provided. A robust cruise controller for the autonomous vehicle was also designed. Second, the environment model was used in the training process, which was based on a reinforcement learning method. The goal of the training was to improve the energy economy of autonomous vehicles while guaranteeing collision avoidance. The effectiveness of the method is presented through simulation examples in non-signalized intersection scenarios with varying numbers of vehicles.


2021 ◽  
Vol 19 (3) ◽  
pp. 95-104
Author(s):  
M. Rutendo ◽  
M. A. Al Akkad
The object of this paper is to create a system that can control any vehicle in any gaming environment in order to simulate, study, experiment with, and improve how self-driving vehicles operate. It is to be taken as the basis for future work on autonomous vehicles with real hardware devices. The long-term goal is to eliminate human error. Perception, localisation, planning, and control subsystems were developed. LiDAR and radar sensors were used in addition to a normal web camera. After receiving information from the perception module, the system is able to localise where the vehicle is; the planning module then determines the location to which the vehicle will move, using localisation module data to draw up the best path. Knowing the best path, the system controls the vehicle so that it moves autonomously without human help. A Proportional-Integral-Derivative (PID) controller was used for control. The Python programming language, computer vision, and machine learning were used in developing the system, where the only hardware required is a computer with a powerful GPU that can run a game featuring a vehicle, roads with lane lines, and a map of the road. The developed system is intended to be a good tool for conducting experiments towards achieving reliable autonomous vehicle navigation.
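The PID controller named here can be sketched in a few lines. The gains, time step, and toy lateral model below are illustrative assumptions (the paper's tuning is not given in the abstract):

```python
class PID:
    """Minimal PID controller, here driving steering on cross-track error."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order lateral model back to the lane centre.
pid = PID(kp=1.2, ki=0.05, kd=0.3, dt=0.1)
offset = 1.0                      # start 1 m off the lane centre
for _ in range(100):
    steer = pid.step(-offset)     # error = target (0) - current offset
    offset += steer * 0.1         # toy plant: offset responds to steering
```

The proportional term pulls toward the lane centre, the derivative term damps overshoot, and the integral term removes any steady offset, which is why PID is a common first controller for lane keeping in simulated environments.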


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4357 ◽  
Author(s):  
Babak Shahian Jahromi ◽  
Theja Tulabandhula ◽  
Sabri Cetin

Many sensor fusion frameworks have been proposed in the literature, using different combinations and configurations of sensors and fusion methods. Most of the focus has been on improving accuracy; the implementation feasibility of these frameworks in an autonomous vehicle is less explored. Some fusion architectures perform very well in lab conditions using powerful computational resources; however, in real-world applications they cannot be implemented on an embedded edge computer due to their high cost and computational demands. We propose a new hybrid multi-sensor fusion pipeline configuration that performs environment perception for autonomous vehicles, including road segmentation, obstacle detection, and tracking. This fusion framework uses a proposed encoder-decoder-based Fully Convolutional Neural Network (FCNx) and a traditional Extended Kalman Filter (EKF) nonlinear state estimator. It also uses a configuration of camera, LiDAR, and radar sensors that is best suited to each fusion method. The goal of this hybrid framework is to provide a cost-effective, lightweight, modular, and robust (in case of a sensor failure) fusion system solution. It uses an FCNx algorithm that improves road detection accuracy compared to benchmark models while maintaining the real-time efficiency needed on an autonomous vehicle's embedded computer. Tested on over 3K road scenes, our fusion algorithm shows better performance in various environment scenarios compared to baseline benchmark networks. Moreover, the algorithm was implemented in a vehicle and tested using actual sensor data collected from it, performing real-time environment perception.
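The EKF branch of such a pipeline reduces, in the linear constant-velocity case, to the classic Kalman predict/update cycle. A minimal single-object tracking sketch (matrices, noise levels, and the 1D constant-velocity model are assumptions for illustration, not the paper's tuned filter):

```python
import numpy as np

# Constant-velocity Kalman filter (the linear special case of the EKF).
# State: [position, velocity]; we observe position only.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
H = np.array([[1.0, 0.0]])                # measurement model
Q = np.diag([1e-3, 1e-2])                 # process noise covariance
R = np.array([[0.25]])                    # measurement noise covariance

x = np.array([[0.0], [0.0]])              # initial state estimate
P = np.eye(2)                             # initial covariance

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track an object moving at a constant 2 m/s from position measurements.
for k in range(1, 101):
    z = np.array([[2.0 * k * dt]])
    x, P = kf_step(x, P, z)
est_speed = float(x[1, 0])                # converges toward 2.0 m/s
```

In the full EKF used for nonlinear measurement models (e.g., radar range and bearing), F and H are replaced by Jacobians evaluated at the current estimate; the predict/update structure is unchanged.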

