Occlusion-Free Road Segmentation Leveraging Semantics for Autonomous Vehicles

Sensors ◽  
2019 ◽  
Vol 19 (21) ◽  
pp. 4711 ◽  
Author(s):  
Kewei Wang ◽  
Fuwu Yan ◽  
Bin Zou ◽  
Luqi Tang ◽  
Quan Yuan ◽  
...  

The deep convolutional neural network has led the trend of vision-based road detection; however, recovering the full road area despite occlusion from monocular vision remains challenging due to the dynamic scenes in autonomous driving. Inferring the occluded road area requires a comprehensive understanding of the geometry and the semantics of the visible scene. To this end, we create a small but effective dataset based on the KITTI dataset, named KITTI-OFRS (KITTI-occlusion-free road segmentation), and propose a lightweight, efficient, fully convolutional neural network called OFRSNet (occlusion-free road segmentation network) that learns to predict occluded portions of the road in the semantic domain by looking around foreground objects and the visible road layout. In particular, a global context module is used to build up the down-sampling and joint context up-sampling blocks in our network, which improves its performance. Moreover, a spatially-weighted cross-entropy loss is designed to significantly increase the accuracy of this task. Extensive experiments on different datasets verify the effectiveness of the proposed approach, and comparisons with current leading methods show that the proposed method outperforms the baseline models by obtaining a better trade-off between accuracy and runtime, which makes our approach applicable to autonomous vehicles in real time.
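The spatially-weighted cross-entropy mentioned above can be sketched as follows. The abstract does not specify the weighting scheme, so the per-pixel weights here are an assumption (e.g. larger weights near occluded-road pixels); this is a minimal illustration, not the paper's implementation:

```python
import math

def spatially_weighted_cross_entropy(p_true, weights):
    """Cross-entropy over pixels, each term scaled by a spatial weight.

    p_true[i]  : predicted probability of the ground-truth class at pixel i
    weights[i] : spatial weight (assumed larger for hard/occluded pixels)
    """
    total = sum(w * -math.log(max(p, 1e-12)) for p, w in zip(p_true, weights))
    return total / sum(weights)

# With uniform weights this reduces to the ordinary mean cross-entropy;
# up-weighting a low-confidence pixel increases the loss it contributes.
uniform = spatially_weighted_cross_entropy([0.9, 0.5], [1.0, 1.0])
boosted = spatially_weighted_cross_entropy([0.9, 0.5], [1.0, 3.0])
```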

2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Cheng-Jian Lin ◽  
Chun-Hui Lin ◽  
Shyh-Hau Wang

Deep learning has achieved huge success in computer vision applications such as self-driving vehicles, facial recognition, and robot control. The growing need to deploy systems in resource-constrained environments such as smart cameras, autonomous vehicles, robots, smartphones, and smart wearable devices drives one of the current mainstream developments of convolutional neural networks: reducing model complexity while maintaining good accuracy. In this study, the proposed efficient light convolutional neural network (ELNet) comprises three convolutional modules that let ELNet run with fewer computations, making it implementable on resource-constrained hardware. Classification on the CIFAR-10 and CIFAR-100 datasets was used to verify model performance. According to the experimental results, ELNet reached 92.3% and 69% accuracy on CIFAR-10 and CIFAR-100, respectively; moreover, ELNet effectively lowered the computational complexity and number of parameters required in comparison with other CNN architectures.
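The abstract does not detail ELNet's convolutional modules, but a standard way to cut parameters in lightweight CNNs is the depthwise-separable convolution; the following sketch (an assumption, not ELNet's actual design) compares parameter counts of a standard convolution against its depthwise-separable counterpart:

```python
def conv_params(c_in, c_out, k):
    # standard k x k convolution, bias terms ignored
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # k x k depthwise convolution followed by a 1x1 pointwise convolution
    return c_in * k * k + c_in * c_out

# For a 3x3 layer with 64 input and 128 output channels, the separable
# variant needs roughly one eighth of the parameters.
standard = conv_params(64, 128, 3)
separable = depthwise_separable_params(64, 128, 3)
```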


Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4703
Author(s):  
Yookhyun Yoon ◽  
Taeyeon Kim ◽  
Ho Lee ◽  
Jahnghyon Park

For safe and comfortable driving, long-term trajectory prediction of surrounding vehicles is essential for autonomous vehicles. To handle the uncertain nature of trajectory prediction, deep-learning-based approaches have been proposed previously. An on-road vehicle must obey road geometry, i.e., it should run within the constraint of the road shape. Herein, we present a novel road-aware trajectory prediction method which leverages high-definition maps with a deep learning network. We developed a data-efficient learning framework for the trajectory prediction network in the curvilinear coordinate system of the road, together with a lane assignment for the surrounding vehicles. We then proposed a novel output-constrained sequence-to-sequence trajectory prediction network that incorporates the structural constraints of the road. Our method uses these structural constraints as prior knowledge for the prediction network: they serve not only as an input to the trajectory prediction network but also as part of the constrained loss function of the maneuver recognition network. Accordingly, the proposed method can predict feasible and realistic driver intentions and trajectories. Our method has been evaluated using a real traffic dataset, and the results show that it is data-efficient and can predict reasonable trajectories at merging sections.
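Working in the curvilinear coordinate system of the road means projecting each vehicle position onto the lane centerline to obtain an arc length and a signed lateral offset. The paper's exact transform is not given here; this is a minimal sketch assuming the centerline is a polyline:

```python
import math

def to_curvilinear(point, centerline):
    """Project a Cartesian point onto a polyline centerline.

    Returns (s, d): arc length along the centerline to the closest
    projection, and the signed lateral offset (positive = left of the
    direction of travel, via the 2-D cross product).
    """
    best = None
    s_accum = 0.0
    px, py = point
    for (x1, y1), (x2, y2) in zip(centerline, centerline[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg_len = math.hypot(dx, dy)
        # clamp the projection parameter to stay on the segment
        t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / seg_len ** 2))
        cx, cy = x1 + t * dx, y1 + t * dy
        dist = math.hypot(px - cx, py - cy)
        side = math.copysign(1.0, dx * (py - y1) - dy * (px - x1)) if dist else 0.0
        if best is None or dist < best[0]:
            best = (dist, s_accum + t * seg_len, side)
        s_accum += seg_len
    dist, s, side = best
    return s, side * dist
```

For a straight east-bound centerline from (0, 0) to (10, 0), the point (3, 2) maps to s = 3 (three units along the road) and d = 2 (two units to the left).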


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 36612-36623 ◽  
Author(s):  
Muhammad Junaid ◽  
Mubeen Ghafoor ◽  
Ali Hassan ◽  
Shehzad Khalid ◽  
Syed Ali Tariq ◽  
...  

2015 ◽  
Vol 27 (6) ◽  
pp. 660-670 ◽  
Author(s):  
Udara Eshan Manawadu ◽  
Masaaki Ishikawa ◽  
Mitsuhiro Kamezaki ◽  
Shigeki Sugano ◽  
...  

[Figure: driving simulator]
Intelligent passenger vehicles with autonomous capabilities will be commonplace on our roads in the near future. These vehicles will reshape the existing relationship between the driver and vehicle. Therefore, to create a new type of rewarding relationship, it is important to analyze when drivers prefer autonomous vehicles to manually-driven (conventional) vehicles. This paper documents a driving simulator-based study conducted to identify the preferences and individual driving experiences of novice and experienced drivers of autonomous and conventional vehicles under different traffic and road conditions. We first developed a simplified driving simulator that could connect to different driver-vehicle interfaces (DVI). We then created virtual environments consisting of scenarios and events that drivers encounter in real-world driving, and we implemented fully autonomous driving. We then conducted experiments to clarify how the autonomous driving experience differed for the two groups. The results showed that experienced drivers opt for conventional driving overall, mainly due to the flexibility and driving pleasure it offers, while novices tend to prefer autonomous driving due to its inherent ease and safety. A further analysis indicated that drivers preferred to use both autonomous and conventional driving methods interchangeably, depending on the road and traffic conditions.


This research proposes a deep convolutional neural network (CNN) for the detection of roads and the segmentation of aerial images. These images are acquired by a UAV. The image segmentation algorithm has two phases: a learning phase and a working phase. The aerial images of the dataset were decomposed into their color components, pre-processed in MATLAB on the hue channel, and then divided into small 33 × 33 pixel boxes using a sliding-box algorithm. The CNN was designed with MatConvNet and had the following architecture: four convolutional layers, four pooling layers, a ReLU layer, a fully connected layer, and a softmax layer. The entire network was trained on 2,000 boxes. The CNN was implemented using MATLAB programming on the GPU, and the results are promising. The CNN output provides pixel-by-pixel labels for the class each pixel belongs to (road/non-road): road pixels are marked white, and non-road terrain is marked "0" (dark). Tracking roads is a difficult task in aerial image segmentation due to their widely varying sizes and surfaces. One of the most important steps in CNN training is the pre-processing phase. For highway segmentation, rejection structures and complexity enhancement were applied.
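The 33 × 33 sliding-box step above amounts to tiling the image with fixed-size patches; a minimal sketch of the corner enumeration (the stride value is an assumption, since the abstract only gives the box size):

```python
def sliding_boxes(height, width, box=33, stride=33):
    """Yield (row, col) top-left corners of box x box patches tiling an image.

    A non-overlapping stride equal to the box size is assumed; partial
    boxes at the right/bottom borders are skipped.
    """
    for y in range(0, height - box + 1, stride):
        for x in range(0, width - box + 1, stride):
            yield y, x

# A 66 x 99 image yields a 2 x 3 grid of six 33 x 33 patches.
corners = list(sliding_boxes(66, 99))
```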


In this paper, we propose a method to automatically segment the road area from input road images to support safe driving of autonomous vehicles. In the proposed method, a semantic segmentation network (SSN) is trained using deep learning, and the road area is segmented with the SSN. The SSN is a SegNet network whose weights are initialized from the VGG-16 network. To shorten training time and obtain results quickly, the classes are simplified so that the trained SegNet CNN distinguishes only two: road and non-road. To improve the accuracy of the road segmentation result, the straight boundary lines of the road region are detected through the Hough transform and combined with the segmentation result of the SSN to delineate the road region precisely. The proposed method can support safe autonomous driving by automatically classifying the road area during operation and applying it to a road-area departure warning system.
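The Hough transform step above detects straight road boundaries by letting each edge point vote for the (rho, theta) line parameters it could lie on. A minimal pure-Python accumulator (a sketch of the general technique, not the paper's implementation, which would typically use an optimized library routine):

```python
import math

def hough_peak(points, n_theta=180):
    """Vote in a (rho, theta) accumulator for each edge point and return
    the dominant line as (rho, theta), with rho quantized to integers.

    A line is parametrized as rho = x*cos(theta) + y*sin(theta); collinear
    points all vote for the same (rho, theta) bin.
    """
    votes = {}
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(rho, i)] = votes.get((rho, i), 0) + 1
    rho, i = max(votes, key=votes.get)
    return rho, math.pi * i / n_theta
```

Points on the vertical line x = 5 all vote for (rho = 5, theta = 0), so that bin dominates the accumulator.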


Author(s):  
Pranav Kale ◽  
Mayuresh Panchpor ◽  
Saloni Dingore ◽  
Saloni Gaikwad ◽  
Prof. Dr. Laxmi Bewoor

Deep learning is advancing at increasing speed, with many innovations and new algorithms being developed. In computer vision for the autonomous driving sector, traffic signs play an important role in providing real-time data about the environment. Different algorithms have been developed to classify these signs, but performance still needs to improve for real-time environments, and the computational power required to train such models is high. In this paper, a convolutional neural network model is used to classify traffic signs. The experiments are conducted on a real-world dataset with images and videos captured from ordinary car driving, as well as on the GTSRB dataset [15] available on Kaggle. The proposed model outperforms previous models, reaching an accuracy of 99.6% on the validation set. This idea has been granted an Innovation Patent by Australian IP to the authors of this research paper. [24]
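A traffic-sign classifier of this kind ends in a softmax layer, and validation accuracy is the fraction of images whose highest-scoring class matches the label. A minimal sketch of those two pieces (generic illustrations, not the paper's model):

```python
import math

def softmax(logits):
    """Convert raw class scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top1_accuracy(batch_logits, labels):
    """Fraction of samples whose argmax class equals the label."""
    correct = sum(
        1 for logits, y in zip(batch_logits, labels)
        if max(range(len(logits)), key=logits.__getitem__) == y
    )
    return correct / len(labels)
```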

