Real-time Driving Context Understanding using Deep Grid Net: A Granular Approach

2020 ◽  
Vol 2 (2) ◽  
Author(s):  
Liviu A. Marina

Numerous self-driving car algorithms rely on grid maps for motion planning, obstacle avoidance, and environment perception. Obtained from fused sensory information, occupancy grids (OGs) are nowadays among the most popular solutions used in series production in the automotive industry. In this paper, we extend Deep Grid Net (DGN) [18], a deep learning (DL) system designed for understanding the context in which an autonomous car is driving. We consider this paper a granular approach to the DGN method, owing to the improvements added to the original research [18]. DGN incorporates a learned driving-environment representation based on OGs obtained from raw real-world Lidar data and built on top of Dempster-Shafer (DS) theory. Our system predicts in real time whether the vehicle is driving on a highway, on county roads, inside a city, in a parking lot, or is stuck in a traffic jam. The predicted driving context is further used to switch between different autonomous driving strategies implemented within EB robinos, Elektrobit's Autonomous Driving (AD) software platform. We propose a neuroevolutionary approach to search for the optimal hyperparameter set of DGN. Genetic algorithms (GAs) were selected because of their demonstrated ability to evolve deep neural networks with improved accuracy and processing speed. The performance of the proposed deep network has been evaluated against similar competing driving-context estimation classifiers.
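The neuroevolutionary hyperparameter search described in the abstract can be sketched as a plain genetic algorithm over a discrete search space. The sketch below is a minimal illustration under assumed hyperparameter ranges and a stand-in fitness function; in the real system the fitness would be the validation accuracy of a trained DGN, and none of the ranges or names here come from the paper.

```python
import random

# Hypothetical hyperparameter search space for a DGN-like classifier;
# the options are illustrative, not taken from the paper.
SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "conv_filters": [16, 32, 64, 128],
    "dropout": [0.1, 0.3, 0.5],
}

def random_genome():
    return {k: random.choice(v) for k, v in SPACE.items()}

def mutate(genome, rate=0.2):
    # Resample each gene with a small probability.
    child = dict(genome)
    for k, options in SPACE.items():
        if random.random() < rate:
            child[k] = random.choice(options)
    return child

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent.
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def evolve(fitness, pop_size=12, generations=10):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        elite = scored[: pop_size // 3]  # keep the best third
        offspring = [
            mutate(crossover(random.choice(elite), random.choice(elite)))
            for _ in range(pop_size - len(elite))
        ]
        population = elite + offspring
    return max(population, key=fitness)

# Stand-in fitness: rewards one known-good combination; the paper's
# fitness would be the accuracy of a fully trained network instead.
def toy_fitness(g):
    return -abs(g["learning_rate"] - 1e-3) - abs(g["conv_filters"] - 64) / 100

random.seed(42)
best = evolve(toy_fitness)
```

The elitist select-crossover-mutate loop is the generic GA skeleton; swapping `toy_fitness` for a train-and-validate call is what makes the search expensive in practice.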

2019 ◽  
Vol 1 ◽  
pp. 1-2
Author(s):  
Márton Pál ◽  
Fanni Vörös ◽  
István Elek ◽  
Béla Kovács

A self-driving car is a vehicle able to perceive its surroundings and navigate them without human input. Radar sensors, lasers, computer vision, and GPS technologies help it drive on its own (Figure 1). They interpret the sensed information to calculate routes and navigate between obstacles and traffic elements.

Sufficiently accurate navigation and information about the current position of the vehicle are indispensable for transport. A human driver fulfils these expectations: knowledge of traffic rules and signs makes it possible to navigate even difficult situations. Self-driving systems substitute for humans by monitoring and evaluating the surrounding environment and its objects without the driver's background knowledge. This analysis process is vulnerable: sudden or unexpected situations may occur, but high-precision navigation and background GPS databases can complement sensor-detected data.

Global navigation assistance has been used in cars for decades. Drivers can easily plan their routes and reach their destinations using in-car GPS units. However, these devices do not provide accurate positioning: the reported position may differ from the real location by several metres. Self-driving cars also use navigation to complement sensor data. Although autonomous systems are already being tested on motorways and country roads, in densely built-up areas the technology faces complications due to accuracy problems. Dilution of precision (DOP) values can be extremely high in larger settlements because tall buildings may block the southern sky (from which satellite signals are received at our latitude).

With geodetic RTK (real-time kinematic) GPS systems we can achieve centimetre-level accuracy under ideal conditions. This high-precision position data is derived from satellite-based positioning systems: measurements of the phase of the signal's carrier wave are corrected in real time by a single reference station or an interpolated virtual station.

In this research we use RTK GPS technology to build a spatial database. These measurements can also be less precise in dense cities, but during fieldwork there is time to try to eliminate inaccuracy. We chose a sample area in the inner city of Budapest, Hungary, where we located all traffic signs, pedestrian crossings, and other important elements. As self-driving cars need precise positions for these terrain objects, we aimed for a maximum error of a few decimetres.

We examined whether online map providers offer a feasible data structure and some base data. The implemented structure is similar to the OpenStreetMap database, which already contains some traffic lights at important crossings. With this preliminary test database we would like to filter out dangerous situations. If the car's camera does not see a traffic sign because of a tree or a truck, information about it will be available from the database. If a pedestrian crossing is hardly visible and the sensor does not recognize it, the background GIS data will warn the car that inattentive people may be on the road.

A test application has also been developed (Figure 2), into which our Postgres/PostGIS database records have been inserted. In the next phase of the project we will test the database in traffic: we plan to drive through the sample area and observe the GPS accuracy in the recognition of the located signs.

This research aims to achieve higher safety in autonomous driving. With a refreshable cartographic GIS database in the memory of a self-driving car, there is a smaller chance of risking human life. However, maintenance demands a large amount of work, so we should concentrate only on the most important signs. The cars themselves may even be able to verify the contents of the database once a large number of them are on the road. Frequent production and analysis of point clouds is another option for moving closer to safe automated traffic.
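The database lookup that backs up an occluded camera can be illustrated in a few lines of Python. This is a hypothetical sketch: the sign records, coordinates, and the 50 m radius are invented for the example, and a real deployment would issue the equivalent proximity query against the Postgres/PostGIS store rather than compute distances in application code.

```python
import math

# Hypothetical records from a PostGIS-style sign database:
# (sign_type, latitude, longitude). The coordinates are illustrative
# points in inner-city Budapest, not actual surveyed data.
SIGNS = [
    ("pedestrian_crossing", 47.4979, 19.0402),
    ("stop", 47.4985, 19.0410),
    ("traffic_light", 47.5002, 19.0455),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def signs_nearby(lat, lon, radius_m=50.0):
    """Return signs within radius_m of the vehicle position, nearest
    first -- the lookup a camera-occluded sign would fall back on."""
    hits = [(haversine_m(lat, lon, s[1], s[2]), s[0]) for s in SIGNS]
    return sorted((d, t) for d, t in hits if d <= radius_m)

nearby = signs_nearby(47.4980, 19.0403)
```

In PostGIS the same filter would typically be a single `ST_DWithin` query over a spatially indexed geography column, which scales to a citywide sign inventory.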


2019 ◽  
Vol 12 (2) ◽  
pp. 120-127 ◽  
Author(s):  
Wael Farag

Background: In this paper, a Convolutional Neural Network (CNN) that learns safe driving behavior and smooth steering maneuvering is proposed as an enabler of autonomous driving technologies. The training data were collected from a front-facing camera together with the steering commands issued by an experienced driver driving in traffic as well as on urban roads. Methods: These data were then used to train the proposed CNN to perform what is called "behavioral cloning". The proposed behavior-cloning CNN is named "BCNet", and its deep seventeen-layer architecture was selected after extensive trials. BCNet was trained using the Adam optimization algorithm, a variant of the Stochastic Gradient Descent (SGD) technique. Results: The paper describes the development and training process in detail and presents the image-processing pipeline used in the development. Conclusion: Extensive simulations showed that the proposed approach successfully clones the driving behavior embedded in the training data set.
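To make the optimizer choice concrete, here is a hand-rolled Adam update applied to a toy linear steering predictor standing in for the seventeen-layer BCNet. Everything here (the synthetic "demonstrations", the learning rate, the linear model) is an illustrative assumption; only the Adam update rule itself follows the standard algorithm.

```python
import math
import random

def adam_step(params, grads, state, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: biased first/second moment estimates with
    bias correction, then a per-parameter scaled step."""
    state["t"] += 1
    for i, g in enumerate(grads):
        state["m"][i] = b1 * state["m"][i] + (1 - b1) * g
        state["v"][i] = b2 * state["v"][i] + (1 - b2) * g * g
        m_hat = state["m"][i] / (1 - b1 ** state["t"])
        v_hat = state["v"][i] / (1 - b2 ** state["t"])
        params[i] -= lr * m_hat / (math.sqrt(v_hat) + eps)

# Synthetic "driver demonstrations": steering = 0.5 * feature - 0.2.
random.seed(1)
data = [(x, 0.5 * x - 0.2) for x in [random.uniform(-1, 1) for _ in range(200)]]

params = [0.0, 0.0]  # w, b of the toy steering model y = w*x + b
state = {"t": 0, "m": [0.0, 0.0], "v": [0.0, 0.0]}
for _ in range(500):
    # Full-batch gradients of the mean squared steering error.
    gw = sum(2 * (params[0] * x + params[1] - y) * x for x, y in data) / len(data)
    gb = sum(2 * (params[0] * x + params[1] - y) for x, y in data) / len(data)
    adam_step(params, [gw, gb], state)
```

Cloning the demonstrated behavior here means the fitted weights approach the demonstrator's (0.5, -0.2); the real BCNet does the same regression from camera frames to steering angles.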


Author(s):  
Wulf Loh ◽  
Janina Loh

In this chapter, we give a brief overview of the traditional notion of responsibility and introduce a concept of distributed responsibility within a responsibility network of engineers, driver, and autonomous driving system. In order to evaluate this concept, we explore the notion of man–machine hybrid systems with regard to self-driving cars and conclude that the unit comprising the car and the operator/driver constitutes such a hybrid system, one that can assume a shared responsibility distinct from the responsibility of other actors in the responsibility network. Discussing certain moral dilemma situations that are structured much like trolley cases, we deduce that as long as there is something like a driver in autonomous cars as part of the hybrid system, she will have to bear the responsibility for making the morally relevant decisions that are not covered by traffic rules.


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 657
Author(s):  
Aoki Takanose ◽  
Yoshiki Atsumi ◽  
Kanamu Takikawa ◽  
Junichi Meguro

Autonomous driving support systems and self-driving cars require reliable vehicle positions with high accuracy. The real-time kinematic (RTK) algorithm with a global navigation satellite system (GNSS) is generally employed to obtain highly accurate position information. Because RTK can estimate the fix solution, a centimeter-level positioning solution, it is also used as an indicator of position reliability. However, in urban areas, degradation of the GNSS signal environment poses a challenge: multipath noise caused by surrounding tall buildings degrades the positioning accuracy, leading to large errors in the fix solution that serves as the measure of reliability. We propose a novel position reliability estimation method based on two observations: first, GNSS errors are more likely to occur in the height direction than in the horizontal plane; second, the height variation of the actual vehicle travel path is small compared with the horizontal movement. Based on these considerations, we propose a method to detect a reliable fix solution by estimating the height variation during driving. To verify the effectiveness of the proposed method, an evaluation test was conducted in an urban area of Tokyo. The test achieved a reliability judgment rate of 99% in an urban environment and a horizontal accuracy of less than 0.3 m RMS. The results indicate that the accuracy of the proposed method is higher than that of the conventional fix solution, demonstrating its effectiveness.
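The core idea, trusting a fix solution only when the estimated height varies no faster than the vehicle plausibly can, can be sketched in a few lines. The 0.5 m/s bound and the 10 Hz epoch rate below are illustrative assumptions, not the thresholds tuned in the paper.

```python
# Hypothetical bound on how fast a road vehicle's height can change.
MAX_HEIGHT_RATE = 0.5  # metres per second, illustrative

def reliable_fix(heights, dt=0.1, max_rate=MAX_HEIGHT_RATE):
    """Return a reliability flag per epoch: True when the height
    change from the previous epoch is physically plausible, False
    when it suggests a multipath-corrupted fix solution."""
    flags = [True]  # first epoch has nothing to compare against
    for prev, cur in zip(heights, heights[1:]):
        flags.append(abs(cur - prev) / dt <= max_rate)
    return flags

# Heights in metres at 10 Hz: a multipath event injects a 2 m jump
# at epoch 3 and the solution recovers at epoch 4.
heights = [35.00, 35.01, 35.02, 37.05, 35.03, 35.04]
flags = reliable_fix(heights)
```

The flagged epochs would be excluded from the positions treated as centimeter-level reliable; the horizontal estimate exploits the anisotropy the paper describes, since the jump shows up almost entirely in height.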


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 15
Author(s):  
Filippo Aleotti ◽  
Giulio Zaccaroni ◽  
Luca Bartolomei ◽  
Matteo Poggi ◽  
Fabio Tosi ◽  
...  

Depth perception is paramount for tackling real-world problems, ranging from autonomous driving to consumer applications. For the latter, depth estimation from a single image would represent the most versatile solution since a standard camera is available on almost any handheld device. Nonetheless, two main issues limit the practical deployment of monocular depth estimation methods on such devices: (i) the low reliability when deployed in the wild and (ii) the resources needed to achieve real-time performance, often not compatible with low-power embedded systems. Therefore, in this paper, we deeply investigate all these issues, showing how they are both addressable by adopting appropriate network design and training strategies. Moreover, we also outline how to map the resulting networks on handheld devices to achieve real-time performance. Our thorough evaluation highlights the ability of such fast networks to generalize well to new environments, a crucial feature required to tackle the extremely varied contexts faced in real applications. Indeed, to further support this evidence, we report experimental results concerning real-time, depth-aware augmented reality and image blurring with smartphones in the wild.


2020 ◽  
Vol 13 (1) ◽  
pp. 89
Author(s):  
Manuel Carranza-García ◽  
Jesús Torres-Mateo ◽  
Pedro Lara-Benítez ◽  
Jorge García-Gutiérrez

Object detection using remote sensing data is a key task of the perception systems of self-driving vehicles. While many generic deep learning architectures have been proposed for this problem, there is little guidance on their suitability when using them in a particular scenario such as autonomous driving. In this work, we aim to assess the performance of existing 2D detection systems on a multi-class problem (vehicles, pedestrians, and cyclists) with images obtained from the on-board camera sensors of a car. We evaluate several one-stage (RetinaNet, FCOS, and YOLOv3) and two-stage (Faster R-CNN) deep learning meta-architectures under different image resolutions and feature extractors (ResNet, ResNeXt, Res2Net, DarkNet, and MobileNet). These models are trained using transfer learning and compared in terms of both precision and efficiency, with special attention to the real-time requirements of this context. For the experimental study, we use the Waymo Open Dataset, which is the largest existing benchmark. Despite the rising popularity of one-stage detectors, our findings show that two-stage detectors still provide the most robust performance. Faster R-CNN models outperform one-stage detectors in accuracy, being also more reliable in the detection of minority classes. Faster R-CNN Res2Net-101 achieves the best speed/accuracy tradeoff but needs lower resolution images to reach real-time speed. Furthermore, the anchor-free FCOS detector is a slightly faster alternative to RetinaNet, with similar precision and lower memory usage.
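A speed/accuracy selection like the one the study performs can be framed as a simple filter-then-rank rule: discard models below a real-time frame-rate floor, then take the most accurate survivor. The benchmark numbers below are illustrative placeholders, not the Waymo Open Dataset results reported in the paper.

```python
# Hypothetical benchmark entries: (model, mAP, frames per second).
RESULTS = [
    ("Faster R-CNN Res2Net-101", 0.72, 18.0),
    ("RetinaNet ResNet-50", 0.66, 24.0),
    ("FCOS ResNet-50", 0.66, 26.0),
    ("YOLOv3 DarkNet-53", 0.60, 45.0),
]

def best_tradeoff(results, min_fps=15.0):
    """Among models meeting the real-time floor, pick the most
    accurate; ties are broken by speed."""
    feasible = [r for r in results if r[2] >= min_fps]
    return max(feasible, key=lambda r: (r[1], r[2]))

choice = best_tradeoff(RESULTS)
```

Raising the floor changes the winner, which mirrors the paper's finding that the most accurate two-stage model needs lower-resolution inputs (hence higher FPS) to stay real-time.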


Electronics ◽  
2019 ◽  
Vol 8 (9) ◽  
pp. 943 ◽  
Author(s):  
Il Bae ◽  
Jaeyoung Moon ◽  
Jeongseok Seo

The convergence of mechanical, electrical, and advanced ICT technologies, driven by artificial intelligence and 5G vehicle-to-everything (5G-V2X) connectivity, will help to develop high-performance autonomous driving vehicles and services that are usable and convenient for self-driving passengers. Despite widespread research on self-driving, user acceptance remains an essential part of successful market penetration; this forms the motivation behind studies on human factors associated with autonomous shuttle services. We address this by providing a comfortable driving experience while not compromising safety. We focus on the accelerations and jerks of vehicles to reduce the risk of motion sickness and to improve the driving experience for passengers. Furthermore, this study proposes a time-optimal velocity planning method for guaranteeing comfort criteria when an explicit reference path is given. The overall controller and planning method were verified using real-time, software-in-the-loop (SIL) environments for a real-time vehicle dynamics simulation; the performance was then compared with a typical planning approach. The proposed optimized planning shows a relatively better performance and enables a comfortable passenger experience in a self-driving shuttle bus according to the recommended criteria.
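The comfort criteria can be illustrated with a jerk-limited velocity ramp: acceleration is raised toward a cap no faster than a jerk bound allows. The limits below are illustrative assumptions, not the paper's recommended criteria, and the approach to the target speed is clamped rather than jerk-limited, a simplification of the time-optimal formulation.

```python
A_MAX = 1.5  # m/s^2, hypothetical comfort acceleration bound
J_MAX = 0.9  # m/s^3, hypothetical comfort jerk bound

def comfort_profile(v_target, dt=0.1, steps=120):
    """Ramp speed from rest toward v_target, raising acceleration no
    faster than J_MAX and never exceeding A_MAX. The final approach is
    clamped to avoid overshoot (a simplification: decelerating jerk is
    not bounded here)."""
    v, a, profile = 0.0, 0.0, []
    for _ in range(steps):
        if v < v_target:
            a = min(a + J_MAX * dt, A_MAX, (v_target - v) / dt)
        else:
            a = 0.0
        v = min(v + a * dt, v_target)
        profile.append(v)
    return profile

profile = comfort_profile(v_target=10.0)
```

The resulting profile is monotone, reaches the target speed, and never changes speed by more than `A_MAX * dt` per step, which is the property the comfort criteria constrain.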

