Multi-Classifier Feature Fusion-Based Road Detection for Connected Autonomous Vehicles

2021 ◽  
Vol 11 (17) ◽  
pp. 7984
Author(s):  
Prabu Subramani ◽  
Khalid Nazim Abdul Sattar ◽  
Rocío Pérez de Prado ◽  
Balasubramanian Girirajan ◽  
Marcin Wozniak

Connected autonomous vehicles (CAVs) currently promise cooperation between vehicles, providing abundant real-time information through wireless communication technologies. In this paper, a two-level fusion of classifiers (TLFC) approach is proposed that uses deep learning classifiers to perform accurate road detection (RD). The proposed TLFC-RD approach improves classification by combining four key strategies: cross-fold operation at the input with pre-processing using superpixel generation, adequate feature selection, multi-classifier feature fusion, and a deep learning classifier. Specifically, the road is classified into drivable and non-drivable areas by designing the TLFC with deep learning classifiers, and the information detected by TLFC-RD is exchanged between the autonomous vehicles for ease of driving on the road. The TLFC-RD is analyzed in terms of accuracy, sensitivity (recall), specificity, precision, F1-measure, and max F-measure. The TLFC-RD method is also evaluated against three existing methods: U-Net with a Domain Adaptation Model (DAM), a Two-Scale Fully Convolutional Network (TFCN), and a cooperative machine learning approach (TAAUWN). Experimental results show that the accuracy of the TLFC-RD method on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset is 99.12%, higher than that of its competitors.
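The two levels of the fusion can be pictured with a minimal sketch: base classifiers first contribute per-superpixel features (level one), and their probability outputs are then fused into a drivable/non-drivable decision (level two). The function names, the concatenation-plus-averaging rule, and the threshold below are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical two-level fusion over one superpixel.
# Level 1: concatenate per-classifier feature vectors.
# Level 2: average per-classifier probabilities and threshold.

def fuse_features(feature_vectors):
    """Level-1 fusion: concatenate the feature vectors from each base classifier."""
    fused = []
    for vec in feature_vectors:
        fused.extend(vec)
    return fused

def fuse_decisions(probabilities, threshold=0.5):
    """Level-2 fusion: average the base classifiers' road probabilities
    for one superpixel and threshold into a binary label."""
    p = sum(probabilities) / len(probabilities)
    return "drivable" if p >= threshold else "non-drivable"

# Three base classifiers agree this superpixel looks like road:
label = fuse_decisions([0.9, 0.7, 0.8])   # mean 0.8 >= 0.5
fused = fuse_features([[0.1, 0.2], [0.3]])
```

In the paper the level-two decision is made by a deep learning classifier rather than a fixed threshold; the sketch only shows where the fusion boundaries sit.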

Author(s):  
M. L. R. Lagahit ◽  
Y. H. Tseng

Abstract. The concept of autonomous vehicles, or self-driving cars, has recently been gaining popularity, and considerable research is being done to develop the technology. One component of that technology is High Definition (HD) maps: centimeter-precision 3D maps containing rich geometric and semantic information about the road that can assist an AV while driving. An important element of HD maps is the road markings, which indicate the rules for how a vehicle should navigate the road. For example, lane lines indicate which part of the road a vehicle may drive on in a given direction. This research proposes a methodology that uses deep learning techniques to detect road arrows (road markings that show possible driving directions) on LiDAR-derived images and extract them as polyline vector shapefiles. The general workflow consists of (1) converting the LiDAR point cloud to images, (2) training and applying U-Net, a fully convolutional neural network, (3) creating masks from image segmentation results that have been transformed to fit the local coordinates, (4) extracting the polygons and polylines, and finally (5) exporting the vectors in shapefile format. The proposed methodology has shown promising results, with object segmentation accuracies comparable to previous related works.
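Step (1) of the workflow, converting the LiDAR point cloud to images, can be sketched as a top-down rasterisation: points are binned onto a fixed grid and each cell keeps, for example, the maximum intensity. The grid resolution, the max-intensity rule, and all names below are illustrative assumptions, not the authors' exact parameters.

```python
# Hedged sketch: rasterise (x, y, intensity) LiDAR points into a
# top-down 2D intensity image on a regular grid.

def rasterize(points, x_range, y_range, cell=0.1):
    """points: iterable of (x, y, intensity) tuples.
    Returns a row-major 2D list; each cell holds the max intensity
    of the points that fall inside it."""
    cols = int((x_range[1] - x_range[0]) / cell)
    rows = int((y_range[1] - y_range[0]) / cell)
    img = [[0.0] * cols for _ in range(rows)]
    for x, y, inten in points:
        c = int((x - x_range[0]) / cell)
        r = int((y - y_range[0]) / cell)
        if 0 <= r < rows and 0 <= c < cols:
            img[r][c] = max(img[r][c], inten)  # keep brightest return
    return img

# Two points landing in adjacent cells of a 2x2 grid:
img = rasterize([(0.05, 0.05, 0.9), (0.15, 0.05, 0.4)],
                (0.0, 0.2), (0.0, 0.2))
```

The resulting image is what a segmentation network such as U-Net would consume in step (2); bright road-marking returns stand out against the darker asphalt.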


Author(s):  
Varsha R ◽  
Meghna Manoj Nair ◽  
Siddharth M. Nair ◽  
Amit Kumar Tyagi

The Internet of Things (smart things) is used in many sectors and applications owing to recent technological advances. One such application is the transportation system, which people use daily to move from one place to another. Smart devices embedded in vehicles help passengers resolve their queries, and future vehicles will be fully automated to an advanced stage, i.e., driverless cars. These autonomous cars will help people save time and increase productivity in their respective businesses. Today and in the near future, privacy preservation and trust will be major concerns between users and autonomous vehicles, and this paper aims to provide clarity on both. Many attempts in the previous decade have produced efficient mechanisms, but they all apply only to vehicles with a human driver and are therefore not valid or useful for future vehicles. In this paper, we use deep learning techniques to build trust through recommender systems, and Blockchain technology for privacy preservation. We also maintain trust by preserving the highest level of privacy among users in a particular environment. In this research, we developed a framework that offers maximally trusted, reliable communication to users over the road network. At the same time, we preserve users' privacy while traveling, i.e., without revealing their identities to Trusted Third Parties or even Location-Based Services while reaching a destination. Thus, a Deep Learning-based Blockchain Solution (DLBS) is illustrated for providing an efficient recommendation system.


2019 ◽  
Vol 9 (5) ◽  
pp. 996
Author(s):  
Fenglei Ren ◽  
Xin He ◽  
Zhonghui Wei ◽  
Lei Zhang ◽  
Jiawei He ◽  
...  

Road detection is a crucial research topic in computer vision, especially in the framework of autonomous driving and driver assistance. Moreover, it is an invaluable step for other tasks such as collision warning, vehicle detection, and pedestrian detection. Nevertheless, road detection remains challenging due to the presence of continuously changing backgrounds, varying illumination (shadows and highlights), variability of road appearance (size, shape, and color), and differently shaped objects (lane markings, vehicles, and pedestrians). In this paper, we propose an algorithm fusing appearance and prior cues for road detection. Firstly, input images are preprocessed by simple linear iterative clustering (SLIC), morphological processing, and illuminant invariant transformation to get superpixels and remove lane markings, shadows, and highlights. Then, we design a novel seed superpixels selection method and model appearance cues using the Gaussian mixture model with the selected seed superpixels. Next, we propose to construct a road geometric prior model offline, which can provide statistical descriptions and relevant information to infer the location of the road surface. Finally, a Bayesian framework is used to fuse appearance and prior cues. Experiments are carried out on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) road benchmark where the proposed algorithm shows compelling performance and achieves state-of-the-art results among the model-based methods.
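The final fusion step of this pipeline is a direct application of Bayes' rule per superpixel: the GMM supplies an appearance likelihood, the offline geometric model supplies a location prior, and the posterior road probability combines them. The numbers and function names below are illustrative, not the paper's learned values.

```python
# Minimal Bayesian fusion of an appearance cue and a geometric prior
# for a binary road / non-road label on one superpixel.

def road_posterior(appearance_lik, prior_road):
    """appearance_lik: (p(cue | road), p(cue | non-road)) from the
    appearance model; prior_road: P(road) from the geometric prior.
    Returns P(road | cue) via Bayes' rule."""
    p_road, p_nonroad = appearance_lik
    num = p_road * prior_road
    den = num + p_nonroad * (1.0 - prior_road)
    return num / den if den > 0 else 0.0

# A superpixel low in the image (strong geometric prior) whose colour
# matches the GMM road model gets a high posterior:
post = road_posterior((0.8, 0.2), prior_road=0.7)
```

Shadows and highlights that fool the appearance model alone are damped here, because a superpixel in an unlikely road location needs much stronger appearance evidence to cross the decision threshold.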


Author(s):  
Yalda Rahmati ◽  
Mohammadreza Khajeh Hosseini ◽  
Alireza Talebpour ◽  
Benjamin Swain ◽  
Christopher Nelson

Despite numerous studies on general human–robot interactions, in the context of transportation, automated vehicle (AV)–human driver interaction is not a well-studied subject. These vehicles have fundamentally different decision-making logic compared with human drivers, and driving interactions between AVs and humans can potentially change traffic flow dynamics. Accordingly, through an experimental study, this paper investigates whether there is a difference between human–human and human–AV interactions on the road. The study focuses on car-following behavior, for which several experiments were conducted utilizing Texas A&M University's automated Chevy Bolt. Utilizing the NGSIM US-101 dataset, two scenarios for a platoon of three vehicles were considered. In both scenarios, the leader of the platoon follows a series of speed profiles extracted from the NGSIM dataset. The second vehicle in the platoon is either another human-driven vehicle (scenario A) or an AV (scenario B). Data were collected from the third vehicle in the platoon to characterize the changes in driving behavior when following an AV. A data-driven and a model-based approach were used to identify possible changes in driving behavior from scenario A to scenario B. The findings suggest there is a statistically significant difference between human drivers' behavior in the two scenarios, with human drivers feeling more comfortable following the AV. Simulation results also revealed the importance of capturing these changes in human behavior in microscopic simulation models of mixed driving environments.
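The kind of statistical comparison described can be sketched with a simple two-sample test on a following metric such as time headway. The headway values below are invented for illustration, and Welch's t-test is only one plausible choice for the data-driven comparison; the abstract does not state which test the authors used.

```python
# Hypothetical sketch: does mean time headway differ when the third
# driver follows a human (scenario A) versus an AV (scenario B)?
import math

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with
    possibly unequal variances."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

scenario_a = [1.8, 2.1, 1.9, 2.0, 2.2]   # headways behind a human (s), made up
scenario_b = [1.4, 1.5, 1.6, 1.5, 1.4]   # headways behind the AV (s), made up
t = welch_t(scenario_a, scenario_b)
```

A large positive t here would mirror the paper's finding: shorter, more consistent headways behind the AV, consistent with drivers feeling more comfortable following it.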


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4703
Author(s):  
Yookhyun Yoon ◽  
Taeyeon Kim ◽  
Ho Lee ◽  
Jahnghyon Park

For driving safely and comfortably, long-term trajectory prediction of surrounding vehicles is essential for autonomous vehicles. To handle the uncertain nature of trajectory prediction, deep-learning-based approaches have been proposed previously. An on-road vehicle must obey road geometry, i.e., it should run within the constraints of the road shape. Herein, we present a novel road-aware trajectory prediction method that leverages high-definition maps with a deep learning network. We developed a data-efficient learning framework for the trajectory prediction network based on the curvilinear coordinate system of the road and a lane assignment for the surrounding vehicles. We then propose a novel output-constrained sequence-to-sequence trajectory prediction network that incorporates the structural constraints of the road. Our method uses these structural constraints as prior knowledge for the prediction network: they serve not only as an input to the trajectory prediction network but also appear in the constrained loss function of the maneuver recognition network. Accordingly, the proposed method can predict a feasible and realistic driver intention and trajectory. Our method has been evaluated on a real traffic dataset, and the results show that it is data-efficient and can predict reasonable trajectories at merging sections.
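The curvilinear coordinate system mentioned above is the standard road-aligned (Frenet-style) frame: a vehicle position is projected onto the lane centerline to obtain an arc length s along the road and a signed lateral offset d. The polyline centerline and the query point below are invented for the example; the actual centerlines would come from the HD map.

```python
# Illustrative projection of a Cartesian position onto a polyline
# centerline, yielding road-aligned coordinates (s, d).
import math

def to_curvilinear(centerline, px, py):
    """centerline: list of (x, y) waypoints. Returns (s, d), where s is
    arc length along the centerline and d is the signed lateral offset
    (positive to the left of the travel direction)."""
    best = (float("inf"), 0.0, 0.0)   # (squared distance, s, d)
    s_acc = 0.0
    for (x0, y0), (x1, y1) in zip(centerline, centerline[1:]):
        dx, dy = x1 - x0, y1 - y0
        seg = math.hypot(dx, dy)
        # parameter of the closest point on this segment, clamped to [0, 1]
        t = max(0.0, min(1.0, ((px - x0) * dx + (py - y0) * dy) / seg**2))
        cx, cy = x0 + t * dx, y0 + t * dy
        d2 = (px - cx) ** 2 + (py - cy) ** 2
        d = ((px - x0) * -dy + (py - y0) * dx) / seg   # signed offset
        if d2 < best[0]:
            best = (d2, s_acc + t * seg, d)
        s_acc += seg
    return best[1], best[2]

# A vehicle 1.5 m left of a straight centerline, 4 m along it:
s, d = to_curvilinear([(0, 0), (10, 0), (20, 0)], 4.0, 1.5)
```

Predicting in (s, d) rather than (x, y) is what makes the road-shape constraint easy to express: the lateral offset d can be bounded by the lane width regardless of how the road curves.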


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4719
Author(s):  
Malik Haris ◽  
Jin Hou

Nowadays, autonomous vehicles are an active research area, especially after the emergence of machine vision tasks with deep learning. In such a visual navigation system for an autonomous vehicle, the controller captures images and predicts information so that the vehicle can navigate safely. In this paper, we first introduce small and medium-sized obstacles that are intentionally or unintentionally left on the road, which can pose hazards in both autonomous and human driving situations. We then discuss a Markov random field (MRF) model that fuses three potentials (gradient potential, curvature prior potential, and depth variance potential) to segment obstacles from non-obstacles in the hazardous environment. Because segmentation of the obstacles is performed by the MRF model, a DNN model can use the predicted information to safely navigate the autonomous vehicle away from hazards on the roadway. We found that our proposed method can accurately segment obstacles from the blended background road and improve the navigation skills of the autonomous vehicle.
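The fusion of the three potentials can be sketched as a weighted unary energy per image cell, with lower energy meaning more obstacle-like. The weights, the threshold rule, and the function names are illustrative assumptions; the actual MRF also includes pairwise terms coupling neighbouring cells, which this sketch omits.

```python
# Toy sketch: fuse gradient, curvature-prior, and depth-variance
# potentials into one unary obstacle energy per cell.

def obstacle_energy(gradient, curvature, depth_var, w=(1.0, 0.5, 0.8)):
    """Weighted sum of the three cues; evidence is negated so that
    strong obstacle cues give LOW energy."""
    wg, wc, wd = w
    return -(wg * gradient + wc * curvature + wd * depth_var)

def label_cell(gradient, curvature, depth_var, threshold=-1.0):
    """Unary-only decision: obstacle if the fused energy falls below
    the threshold (a real MRF would also smooth over neighbours)."""
    e = obstacle_energy(gradient, curvature, depth_var)
    return "obstacle" if e < threshold else "road"

lost_cargo = label_cell(1.2, 0.6, 0.9)   # strong cues on all three potentials
asphalt = label_cell(0.1, 0.0, 0.1)      # flat, smooth, low depth variance
```

Fusing three weak cues is what lets small obstacles stand out even when any single cue (say, image gradient alone) blends into the road texture.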


1979 ◽  
Vol 23 (1) ◽  
pp. 263-266
Author(s):  
Douglas H. Harris

Visual cues were identified and procedures were developed to enhance on-the-road detection of driving while intoxicated (DWI) by police patrol officers. Related research was reviewed; police officers with demonstrated effectiveness in DWI detection were interviewed; DWI arrest reports were analyzed; and a study was conducted to determine the frequency of occurrence and relative discriminability of potential visual cues. Based on the results, a DWI detection Guide was developed; the Guide is currently being verified and evaluated in a field-study involving a sample of 10 law enforcement agencies.


2015 ◽  
Vol 27 (6) ◽  
pp. 660-670 ◽  
Author(s):  
Udara Eshan Manawadu ◽  
Masaaki Ishikawa ◽  
Mitsuhiro Kamezaki ◽  
Shigeki Sugano ◽  
...  

Intelligent passenger vehicles with autonomous capabilities will be commonplace on our roads in the near future. These vehicles will reshape the existing relationship between the driver and vehicle. Therefore, to create a new type of rewarding relationship, it is important to analyze when drivers prefer autonomous vehicles to manually-driven (conventional) vehicles. This paper documents a driving simulator-based study conducted to identify the preferences and individual driving experiences of novice and experienced drivers of autonomous and conventional vehicles under different traffic and road conditions. We first developed a simplified driving simulator that could connect to different driver-vehicle interfaces (DVI). We then created virtual environments consisting of scenarios and events that drivers encounter in real-world driving, and we implemented fully autonomous driving. We then conducted experiments to clarify how the autonomous driving experience differed for the two groups. The results showed that experienced drivers opt for conventional driving overall, mainly due to the flexibility and driving pleasure it offers, while novices tend to prefer autonomous driving due to its inherent ease and safety. A further analysis indicated that drivers preferred to use both autonomous and conventional driving methods interchangeably, depending on the road and traffic conditions.


Author(s):  
Yiqi Gao ◽  
Theresa Lin ◽  
Francesco Borrelli ◽  
Eric Tseng ◽  
Davor Hrovat

Two frameworks based on Model Predictive Control (MPC) for obstacle avoidance with autonomous vehicles are presented. A given trajectory represents the driver's intent. The MPC must safely avoid obstacles on the road while trying to track the desired trajectory by controlling the front steering angle and differential braking. We present two different approaches to this problem. The first solves a single nonlinear MPC problem. The second uses a hierarchical scheme: at the high level, a trajectory is computed online, in a receding-horizon fashion, based on a simplified point-mass vehicle model in order to avoid an obstacle; at the low level, an MPC controller computes the vehicle inputs to best follow the high-level trajectory based on a nonlinear vehicle model. This article presents the design and comparison of both approaches, the method for implementing them, and successful experimental results on icy roads.
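The high-level point-mass planner can be caricatured in a few lines: roll a constant-speed point mass forward over a short horizon for each candidate lateral input, discard sequences that enter the obstacle corridor, and keep the cheapest tracking cost. The discretisation, obstacle corridor, costs, and action set below are invented for illustration; the actual high level solves a continuous receding-horizon problem, not this tiny enumeration.

```python
# Toy receding-horizon step with a point-mass model: constant
# longitudinal speed vx, lateral acceleration a as the only input.

def plan_lateral(obstacle_x, obstacle_y, vx=10.0, dt=0.1, horizon=10,
                 actions=(-4.0, 0.0, 4.0)):
    """Pick the constant lateral acceleration that tracks the reference
    lane (y = 0) at minimum cost while clearing the obstacle corridor."""
    best_a, best_cost = None, float("inf")
    for a in actions:
        x, y, vy, cost, safe = 0.0, 0.0, 0.0, 0.0, True
        for _ in range(horizon):
            x += vx * dt                 # point-mass kinematics
            vy += a * dt
            y += vy * dt
            cost += y ** 2               # penalty for leaving the lane
            if abs(x - obstacle_x) < 1.0 and abs(y - obstacle_y) < 0.5:
                safe = False             # inside the obstacle corridor
                break
        if safe and cost < best_cost:
            best_a, best_cost = a, cost
    return best_a

# Obstacle sitting in the lane 5 m ahead forces a lateral swerve:
a = plan_lateral(5.0, 0.0)
```

The low level would then track the resulting trajectory with a nonlinear vehicle model, which is where steering and differential-braking limits (critical on icy roads) actually enter.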

