Road-Aware Trajectory Prediction for Autonomous Driving on Highways

Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4703
Author(s):  
Yookhyun Yoon ◽  
Taeyeon Kim ◽  
Ho Lee ◽  
Jahnghyon Park

For safe and comfortable driving, long-term trajectory prediction of surrounding vehicles is essential for autonomous vehicles. To handle the uncertain nature of trajectory prediction, deep-learning-based approaches have been proposed previously. An on-road vehicle must obey the road geometry, i.e., it should run within the constraints of the road shape. Herein, we present a novel road-aware trajectory prediction method that leverages high-definition maps together with a deep learning network. We developed a data-efficient learning framework for the trajectory prediction network in the curvilinear coordinate system of the road, along with a lane assignment for the surrounding vehicles. We then propose a novel output-constrained sequence-to-sequence trajectory prediction network that incorporates the structural constraints of the road. Our method uses these structural constraints as prior knowledge for the prediction network: they serve not only as an input to the trajectory prediction network but also as part of the constrained loss function of the maneuver recognition network. Accordingly, the proposed method can predict a feasible, realistic driver intention and trajectory. Our method has been evaluated using a real traffic dataset, and the results show that it is data-efficient and can predict reasonable trajectories at merging sections.
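As a rough illustration of the road-aligned curvilinear coordinates the abstract refers to (not the authors' implementation), the sketch below projects a Cartesian point onto a polyline road centerline, returning the arc length `s` along the road and the signed lateral offset `d`. The function name and the polyline representation are illustrative assumptions.

```python
import numpy as np

def to_curvilinear(path, point):
    """Project a Cartesian point onto a polyline road centerline,
    returning (s, d): arc length along the path and signed lateral offset."""
    path = np.asarray(path, dtype=float)
    point = np.asarray(point, dtype=float)
    seg = np.diff(path, axis=0)                    # segment vectors
    seg_len = np.linalg.norm(seg, axis=1)
    cum_s = np.concatenate(([0.0], np.cumsum(seg_len)))
    best = (np.inf, 0.0, 0.0)                      # (squared dist, s, d)
    for i, (p0, v, length) in enumerate(zip(path[:-1], seg, seg_len)):
        # closest point on this segment, clamped to its endpoints
        t = np.clip(np.dot(point - p0, v) / (length * length), 0.0, 1.0)
        proj = p0 + t * v
        d2 = np.sum((point - proj) ** 2)
        if d2 < best[0]:
            rel = point - p0
            # 2D cross product sign: +1 means the point lies left of travel
            side = np.sign(v[0] * rel[1] - v[1] * rel[0])
            best = (d2, cum_s[i] + t * length, side * np.sqrt(d2))
    return best[1], best[2]
```

For a straight centerline along the x-axis, a point at (2, 1) maps to s = 2 (two meters along the road) and d = 1 (one meter left of the centerline).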

2020 ◽  
Vol 34 (08) ◽  
pp. 13255-13260
Author(s):  
Mahdi Elhousni ◽  
Yecheng Lyu ◽  
Ziming Zhang ◽  
Xinming Huang

In a world where autonomous cars are becoming increasingly common, creating an adequate infrastructure for this new technology is essential. This includes building and labeling high-definition (HD) maps accurately and efficiently. Today, the process of creating HD maps requires substantial human input, which takes time and is prone to errors. In this paper, we propose a novel method capable of generating labeled HD maps from raw sensor data. We implemented and tested our method on several urban scenarios using data collected from our test vehicle. The results show that the proposed deep-learning-based method can produce highly accurate HD maps. This approach speeds up the process of building and labeling HD maps, which can make a meaningful contribution to the deployment of autonomous vehicles.


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8152
Author(s):  
Dongyeon Yu ◽  
Honggyu Lee ◽  
Taehoon Kim ◽  
Sung-Ho Hwang

It is essential for autonomous vehicles at level 3 or higher to be able to predict the trajectories of surrounding vehicles in order to plan and drive safely and effectively in complex traffic situations. However, predicting the future behavior of vehicles is challenging because each vehicle has a different driver with different tendencies and intentions, and the vehicles interact with one another. This paper presents a Long Short-Term Memory (LSTM) encoder–decoder model that uses an attention mechanism to focus on the most relevant information when predicting vehicles' trajectories. The proposed model was trained using the Highway Drone (HighD) dataset, a high-precision, large-scale traffic dataset, and compared against previous studies. Our model effectively predicted future trajectories by using the attention mechanism to weigh the importance of the driving flow of the target and adjacent vehicles and of the target vehicle's dynamics in each driving situation. Furthermore, this study presents a method of linearizing the road geometry so that the trajectory prediction model can be used in a variety of road environments. We verified that the road geometry linearization mechanism improves the trajectory prediction model's performance in various road environments, using a virtual test-driving simulator constructed from actual road data.
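The core of the attention mechanism described above can be sketched as dot-product attention over encoder hidden states: score each encoder state against the current decoder state, normalize with softmax, and take the weighted sum as the context vector. This is a minimal generic sketch, not the paper's exact architecture; the function names and shapes are assumptions.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_context(decoder_state, encoder_states):
    """Dot-product attention: weigh each encoder state (one per time step)
    by its similarity to the decoder state; return context and weights."""
    scores = encoder_states @ decoder_state          # (T,) similarity scores
    weights = softmax(scores)                        # attention distribution
    context = weights @ encoder_states               # (H,) weighted sum
    return context, weights
```

In an encoder–decoder trajectory predictor, the context vector is typically concatenated with the decoder input at every prediction step, letting the model emphasize the most relevant past time steps.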


Author(s):  
M. L. R. Lagahit ◽  
Y. H. Tseng

Abstract. The concept of autonomous vehicles (AVs), or self-driving cars, has recently been gaining popularity, and much research is being done to develop the technology. One key component is high-definition (HD) maps: centimeter-level-precision 3D maps containing rich geometric and semantic information about the road that can assist the AV when driving. An important element of HD maps is the road markings, which indicate the rules for how a vehicle should navigate the road; lane lines, for example, indicate which part of the road a vehicle can drive on in a given direction. This research proposes a methodology that uses deep learning techniques to detect road arrows (road markings that show possible driving directions) in LiDAR-derived images and extract them as polyline vector shapefiles. The general workflow consists of (1) converting the LiDAR point cloud to images, (2) training and applying U-Net, a fully convolutional neural network, (3) creating masks from image segmentation results that have been transformed to fit the local coordinates, (4) extracting the polygons and polylines, and finally (5) exporting the vectors in shapefile format. The proposed methodology has shown promising results, with object segmentation accuracies comparable to previous related works.
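Step (1) of the workflow, converting a LiDAR point cloud to an image, is commonly done by rasterizing points into a top-view (bird's-eye-view) grid. The sketch below is a minimal illustration under assumed ranges and resolution, not the authors' pipeline: each cell keeps the maximum intensity of the points that fall into it.

```python
import numpy as np

def points_to_bev_image(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), res=0.1):
    """Rasterize LiDAR points (x, y, intensity) into a top-view image;
    each cell stores the maximum intensity of the points inside it."""
    w = round((x_range[1] - x_range[0]) / res)       # columns along x
    h = round((y_range[1] - y_range[0]) / res)       # rows along y
    img = np.zeros((h, w), dtype=np.float32)
    for x, y, inten in points:
        if x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]:
            col = min(int((x - x_range[0]) / res), w - 1)
            row = min(int((y - y_range[0]) / res), h - 1)
            img[row, col] = max(img[row, col], inten)
    return img
```

The resulting intensity image is what a segmentation network such as U-Net would consume; road paint is typically highly retroreflective, so markings stand out in the intensity channel.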


2020 ◽  
Vol 13 (1) ◽  
pp. 89
Author(s):  
Manuel Carranza-García ◽  
Jesús Torres-Mateo ◽  
Pedro Lara-Benítez ◽  
Jorge García-Gutiérrez

Object detection using remote sensing data is a key task for the perception systems of self-driving vehicles. While many generic deep learning architectures have been proposed for this problem, there is little guidance on their suitability for a particular scenario such as autonomous driving. In this work, we assess the performance of existing 2D detection systems on a multi-class problem (vehicles, pedestrians, and cyclists) with images obtained from the on-board camera sensors of a car. We evaluate several one-stage (RetinaNet, FCOS, and YOLOv3) and two-stage (Faster R-CNN) deep learning meta-architectures under different image resolutions and feature extractors (ResNet, ResNeXt, Res2Net, DarkNet, and MobileNet). These models are trained using transfer learning and compared in terms of both precision and efficiency, with special attention to the real-time requirements of this context. For the experimental study, we use the Waymo Open Dataset, the largest existing benchmark. Despite the rising popularity of one-stage detectors, our findings show that two-stage detectors still provide the most robust performance. Faster R-CNN models outperform one-stage detectors in accuracy and are also more reliable in the detection of minority classes. Faster R-CNN with Res2Net-101 achieves the best speed/accuracy tradeoff but needs lower-resolution images to reach real-time speed. Furthermore, the anchor-free FCOS detector is a slightly faster alternative to RetinaNet, with similar precision and lower memory usage.


Author(s):  
Varsha R ◽  
Meghna Manoj Nair ◽  
Siddharth M. Nair ◽  
Amit Kumar Tyagi

Owing to recent technological advances, the Internet of Things (smart things) is used in many sectors and applications. One such application is the transportation system, which users rely on to move from one place to another. Smart devices embedded in vehicles help passengers resolve their queries, and future vehicles will be fully automated to an advanced stage, i.e., driverless cars. These autonomous cars will help people save time and increase productivity in their respective businesses. Today and in the near future, privacy preservation and trust will be major concerns between users and autonomous vehicles, and this paper aims to provide clarity on both. Many attempts in the previous decade have produced efficient mechanisms, but they all work only for vehicles with a driver; these mechanisms are not valid or useful for future vehicles. In this paper, we use deep learning techniques to build trust via recommender systems, and Blockchain technology for privacy preservation. We also maintain trust by preserving the highest level of privacy among users in a particular environment. In this research, we developed a framework that offers maximally trusted, reliable communication to users over the road network. It also preserves user privacy during travel, i.e., users reach their destination without revealing their identity to Trusted Third Parties or even Location-Based Services. Thus, a Deep-Learning-based Blockchain Solution (DLBS) is illustrated for providing an efficient recommendation system.


2021 ◽  
Vol 13 (22) ◽  
pp. 4525
Author(s):  
Junjie Zhang ◽  
Kourosh Khoshelham ◽  
Amir Khodabandeh

Accurate and seamless vehicle positioning is fundamental for autonomous driving tasks in urban environments and requires high-end measuring devices. Light Detection and Ranging (lidar) sensors, together with Global Navigation Satellite System (GNSS) receivers, are therefore commonly found onboard modern vehicles. In this paper, we propose an integration of lidar and GNSS code measurements at the observation level via a mixed measurement model. An Extended Kalman Filter (EKF) is implemented to capture the dynamics of the vehicle's movement and thus incorporate the vehicle velocity parameters into the measurement model. The lidar positioning component is realized using point cloud registration through a deep neural network, aided by a high-definition (HD) map comprising accurately georeferenced scans of the road environment. Experiments conducted in a densely built-up environment show that, by exploiting the abundant measurements of GNSS and the high accuracy of lidar, the proposed vehicle positioning approach can maintain centimeter- to meter-level accuracy for the entire driving duration in urban canyons.
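To make the filtering idea concrete, the sketch below runs one predict/update cycle of a plain linear Kalman filter with a constant-velocity state and a position-only measurement. It is a deliberately simplified stand-in for the paper's EKF with its mixed lidar/GNSS measurement model; the state layout, noise values, and time step are illustrative assumptions.

```python
import numpy as np

def kf_step(x, P, z, R, dt=0.1, q=0.5):
    """One predict/update cycle of a linear Kalman filter with a
    constant-velocity state [px, py, vx, vy] and position measurement z."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                         # position integrates velocity
    Q = q * np.eye(4)                              # simplified process noise
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # observe position only
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

An EKF replaces `F` and `H` with Jacobians of nonlinear motion and measurement functions evaluated at the current estimate; the predict/update structure stays the same.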


Author(s):  
Armando Vieira

Deep Learning (DL) took Artificial Intelligence (AI) by storm and has infiltrated business at an unprecedented rate. Access to vast amounts of data, extensive computational power, and a new wave of efficient learning algorithms helped Artificial Neural Networks achieve state-of-the-art results in almost all AI challenges. DL is the cornerstone technology behind products for image recognition and video annotation, voice recognition, personal assistants, automated translation, and autonomous vehicles. DL works similarly to the brain by extracting high-level, complex abstractions from data in a hierarchical and discriminative or generative way. The implications of DL-supported AI for business are tremendous, shaking many industries to their foundations. In this chapter, I present the most significant algorithms and applications, including Natural Language Processing (NLP), image and video processing, and finance.


Author(s):  
Irfan Khan ◽  
Stefano Feraco ◽  
Angelo Bonfitto ◽  
Nicola Amati

Abstract. This paper presents a controller for the lateral and longitudinal vehicle dynamics in autonomous driving. The proposed strategy uses Model Predictive Control to perform lateral guidance and speed regulation. To this end, the algorithm controls the steering angle and the throttle and brake pedals to minimize the vehicle's lateral deviation and relative yaw angle with respect to the reference trajectory, while the vehicle speed is controlled to the maximum acceptable longitudinal speed given road adhesion and legal speed limits. The technique exploits data computed by a simulated camera mounted on top of the vehicle while moving in different driving scenarios. The longitudinal control strategy is based on a reference speed generator, which computes the maximum speed considering the road geometry and the lateral motion of the vehicle at the same time. The proposed controller is tested in highway, interurban, and urban driving scenarios to check its performance in different driving environments.
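A common way to derive a curvature-aware reference speed like the one the generator above computes is to cap speed so that lateral acceleration, v² · |κ|, stays below a comfort/adhesion bound, and then apply the legal limit. This is a generic sketch of that idea, not the paper's generator; the limit values are illustrative assumptions.

```python
import math

def reference_speed(curvature, v_legal=27.8, a_lat_max=3.0):
    """Maximum longitudinal speed (m/s) that keeps lateral acceleration
    v^2 * |curvature| below a_lat_max, capped by the legal limit."""
    if abs(curvature) < 1e-9:
        return v_legal                       # straight road: legal limit governs
    return min(v_legal, math.sqrt(a_lat_max / abs(curvature)))
```

For example, with a 100 m curve radius (curvature 0.01 1/m) and a 3 m/s² lateral bound, the reference speed drops to about 17.3 m/s even if the legal limit is higher.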


2015 ◽  
Vol 27 (6) ◽  
pp. 660-670 ◽  
Author(s):  
Udara Eshan Manawadu ◽  
Masaaki Ishikawa ◽  
Mitsuhiro Kamezaki ◽  
Shigeki Sugano ◽  
...  

[Figure: Driving simulator]
Intelligent passenger vehicles with autonomous capabilities will be commonplace on our roads in the near future. These vehicles will reshape the existing relationship between driver and vehicle. To create a new, rewarding relationship, it is therefore important to analyze when drivers prefer autonomous vehicles over manually driven (conventional) vehicles. This paper documents a driving-simulator-based study conducted to identify the preferences and individual driving experiences of novice and experienced drivers in autonomous and conventional vehicles under different traffic and road conditions. We first developed a simplified driving simulator that could connect to different driver-vehicle interfaces (DVIs). We then created virtual environments consisting of scenarios and events that drivers encounter in real-world driving, and we implemented fully autonomous driving. We then conducted experiments to clarify how the autonomous driving experience differed for the two groups. The results showed that experienced drivers opt for conventional driving overall, mainly due to the flexibility and driving pleasure it offers, while novices tend to prefer autonomous driving due to its inherent ease and safety. A further analysis indicated that drivers preferred to use both autonomous and conventional driving methods interchangeably, depending on the road and traffic conditions.


2021 ◽  
Vol 11 (17) ◽  
pp. 7984
Author(s):  
Prabu Subramani ◽  
Khalid Nazim Abdul Sattar ◽  
Rocío Pérez de Prado ◽  
Balasubramanian Girirajan ◽  
Marcin Wozniak

Connected autonomous vehicles (CAVs) promise cooperation between vehicles, providing abundant, real-time information through wireless communication technologies. In this paper, a two-level fusion of classifiers (TLFC) approach is proposed, using deep learning classifiers to perform accurate road detection (RD). The proposed TLFC-RD approach improves classification through four key strategies: cross-fold operation at the input and pre-processing using superpixel generation, adequate features, multi-classifier feature fusion, and a deep learning classifier. Specifically, the road is classified into drivable and non-drivable areas by the TLFC built from deep learning classifiers, and the information detected by TLFC-RD is exchanged between autonomous vehicles to ease driving on the road. TLFC-RD is analyzed in terms of accuracy, sensitivity (recall), specificity, precision, F1-measure, and max-F measure. The TLFC-RD method is also compared with three existing methods: U-Net with a Domain Adaptation Model (DAM), a Two-Scale Fully Convolutional Network (TFCN), and a cooperative machine learning approach (TAAUWN). Experimental results show that the accuracy of the TLFC-RD method on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset is 99.12%, higher than that of its competitors.
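To illustrate the final fusion step in the most generic form (this is a simple soft-voting stand-in, not the paper's TLFC design), per-pixel drivable-road probabilities from several classifiers can be averaged and then thresholded into a drivable/non-drivable mask:

```python
import numpy as np

def soft_vote(prob_maps, threshold=0.5):
    """Fuse per-pixel drivable-road probabilities from several classifiers
    by averaging, then label each pixel drivable (1) or non-drivable (0)."""
    fused = np.mean(np.stack(prob_maps), axis=0)   # (H, W) mean probability
    return (fused > threshold).astype(np.uint8)
```

A two-level scheme would typically fuse features or intermediate scores first and feed the result to a final learned classifier, rather than using a fixed threshold as here.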

