Robust Lane-Detection Method for Low-Speed Environments

Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4274 ◽  
Author(s):  
Qingquan Li ◽  
Jian Zhou ◽  
Bijun Li ◽  
Yuan Guo ◽  
Jinsheng Xiao

Vision-based lane-detection methods provide low-cost density information about roads for autonomous vehicles. In this paper, we propose a robust and efficient method to expand the application of these methods to cover low-speed environments. First, the reliable region near the vehicle is initialized and a series of rectangular detection regions are dynamically constructed along the road. Then, an improved symmetrical local threshold edge extraction is introduced to extract the edge points of the lane markings based on accurate marking width limitations. In order to meet real-time requirements, a novel Bresenham line voting space is proposed to improve the process of line segment detection. Combined with straight lines, polylines, and curves, the proposed geometric fitting method has the ability to adapt to various road shapes. Finally, different state vectors and Kalman filter transition matrices are used to track the key points of the linear and nonlinear parts of the lane. The proposed method was tested on a public database and our autonomous platform. The experimental results show that the method is robust and efficient and can meet the real-time requirements of autonomous vehicles.
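The final tracking step described above can be sketched in outline. The following is a minimal constant-velocity Kalman filter in Python/NumPy, not the paper's implementation: the pixel-space state layout and the noise covariances are illustrative assumptions.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for tracking one lane key point.
# State x = [u, v, du, dv]: pixel position and per-frame velocity.
# The transition matrix F advances position by velocity each frame.

F = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # only (u, v) is observed

def kalman_step(x, P, z, Q, R):
    """One predict/update cycle for a pixel measurement z = (u, v)."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Per the abstract, the linear and nonlinear lane parts would use different state vectors and transition matrices; the constant-velocity model above corresponds only to the simplest (linear) case.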

Author(s):  
Robert D. Leary ◽  
Sean Brennan

Currently, there is a lack of low-cost, real-time solutions for accurate autonomous vehicle localization. The fusion of a precise a priori map and a forward-facing camera can provide an alternative low-cost method for achieving centimeter-level localization. This paper analyzes the position and orientation bounds, or region of attraction, with which a real-time vehicle pose estimator can localize using monocular vision and a lane marker map. A pose estimation algorithm minimizes the residual pixel-level error between the estimated and detected lane marker features via Gauss-Newton nonlinear least-squares. Simulations of typical road scenes were used as ground truth to ensure the pose estimator will converge to the true vehicle pose. A successful convergence was defined as a pose estimate that fell within 5 cm and 0.25 degrees of the true vehicle pose. The results show that the longitudinal vehicle state is weakly observable with the smallest region of attraction. Estimating the remaining five vehicle states gives repeatable convergence within the prescribed convergence bounds over a relatively large region of attraction, even for the simple lane detection methods used herein. A main contribution of this paper is to demonstrate a repeatable and verifiable method to assess and compare lane-based vehicle localization strategies.
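The Gauss-Newton minimisation at the core of such a pose estimator can be illustrated on a toy problem. The sketch below aligns 2D point sets by a planar pose (tx, ty, theta); it is a simplification, not the paper's six-state monocular estimator, and the point sets and iteration count are arbitrary assumptions.

```python
import numpy as np

def gauss_newton_pose(src, dst, iters=20):
    """Estimate a 2D pose (tx, ty, theta) mapping src points onto dst points
    by Gauss-Newton minimisation of the stacked point residuals."""
    tx, ty, th = 0.0, 0.0, 0.0
    for _ in range(iters):
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])
        pred = src @ R.T + np.array([tx, ty])
        r = (pred - dst).ravel()                 # residual vector (2N,)
        # Jacobian of each residual w.r.t. (tx, ty, theta)
        J = np.zeros((2 * len(src), 3))
        for i, (x, y) in enumerate(src):
            J[2 * i]     = [1.0, 0.0, -s * x - c * y]
            J[2 * i + 1] = [0.0, 1.0,  c * x - s * y]
        # Normal equations: (J^T J) delta = -J^T r
        delta = np.linalg.solve(J.T @ J, -J.T @ r)
        tx, ty, th = tx + delta[0], ty + delta[1], th + delta[2]
    return tx, ty, th
```

In the paper's setting the residuals are pixel-level differences between estimated and detected lane marker features rather than point coordinates, but the iteration structure is the same.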


Electronics ◽  
2018 ◽  
Vol 7 (11) ◽  
pp. 276 ◽  
Author(s):  
Jiyoung Jung ◽  
Sung-Ho Bae

The generation of digital maps with lane-level resolution is rapidly becoming a necessity, as semi- or fully-autonomous driving vehicles are now commercially available. In this paper, we present a practical real-time working prototype for road lane detection using LiDAR data, which can be further extended to automatic lane-level map generation. Conventional lane detection methods are limited to simple road conditions and are not suitable for complex urban roads with various road signs on the ground. Given a 3D point cloud scanned by a 3D LiDAR sensor, we categorized the points of the drivable region and distinguished the points of the road signs on the ground. Then, we developed an expectation-maximization method to detect parallel lines and update the 3D line parameters in real time, as the probe vehicle equipped with the LiDAR sensor moved forward. The detected and recorded line parameters were integrated to build a lane-level digital map with the help of a GPS/INS sensor. The proposed system was tested to generate accurate lane-level maps of two complex urban routes. The experimental results showed that the proposed system was fast and practical in terms of effectively detecting road lines and generating lane-level maps.


2020 ◽  
Vol 13 (1) ◽  
pp. 89
Author(s):  
Manuel Carranza-García ◽  
Jesús Torres-Mateo ◽  
Pedro Lara-Benítez ◽  
Jorge García-Gutiérrez

Object detection using remote sensing data is a key task of the perception systems of self-driving vehicles. While many generic deep learning architectures have been proposed for this problem, there is little guidance on their suitability when using them in a particular scenario such as autonomous driving. In this work, we aim to assess the performance of existing 2D detection systems on a multi-class problem (vehicles, pedestrians, and cyclists) with images obtained from the on-board camera sensors of a car. We evaluate several one-stage (RetinaNet, FCOS, and YOLOv3) and two-stage (Faster R-CNN) deep learning meta-architectures under different image resolutions and feature extractors (ResNet, ResNeXt, Res2Net, DarkNet, and MobileNet). These models are trained using transfer learning and compared in terms of both precision and efficiency, with special attention to the real-time requirements of this context. For the experimental study, we use the Waymo Open Dataset, which is the largest existing benchmark. Despite the rising popularity of one-stage detectors, our findings show that two-stage detectors still provide the most robust performance. Faster R-CNN models outperform one-stage detectors in accuracy, being also more reliable in the detection of minority classes. Faster R-CNN Res2Net-101 achieves the best speed/accuracy tradeoff but needs lower resolution images to reach real-time speed. Furthermore, the anchor-free FCOS detector is a slightly faster alternative to RetinaNet, with similar precision and lower memory usage.


Author(s):  
Gautham G ◽  
Deepika Venkatesh ◽  
A. Kalaiselvi

In recent years, the increasing density of traffic has made it a hassle for drivers in metropolitan cities to maintain lane position and speed on the road. Drivers waste time and effort idling their cars in traffic conditions, and are easily frustrated when trying to hold their lane amid the resulting havoc. The Transportation Institute found that the odds of a crash (or near-crash) more than doubled when the driver took his or her eyes off the road for more than two seconds. Failure to follow lane paths contributes to about 23% of accidents. In the worst case, fuel economy drops, wasting fuel and increasing pollution by about 28% to 36% per vehicle annually. Owing to this problem, we propose an ingenious method by which lane detection can be made affordable and applicable to existing automobiles. The proposed lane-detection prototype is a temporary autonomous bot interfaced with a Raspberry Pi processor loaded with the lane-detection algorithm. The bot captures live video, which is then processed by the algorithm. The preliminary setup is designed to be easily implemented and accessible at low cost with better efficiency, providing a better impact on future automobiles.
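The core of a low-cost lane-detection algorithm of this kind is typically an edge map voted into a Hough accumulator. Below is a minimal pure-NumPy sketch of that voting step; the accumulator resolution is an arbitrary assumption, and a real Raspberry Pi pipeline would more likely use OpenCV's built-in edge and Hough routines on each video frame.

```python
import numpy as np

def hough_lines(edges, n_theta=180, n_rho=200):
    """Vote edge pixels into a (rho, theta) Hough accumulator.
    Peaks in the accumulator correspond to dominant lines (lane markings)."""
    h, w = edges.shape
    diag = np.hypot(h, w)                                 # max possible |rho|
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    ys, xs = np.nonzero(edges)                            # edge pixel coords
    # rho = x*cos(theta) + y*sin(theta), one value per (pixel, theta) pair
    rho = xs[:, None] * np.cos(thetas) + ys[:, None] * np.sin(thetas)
    rho_idx = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for t in range(n_theta):
        np.add.at(acc[:, t], rho_idx[:, t], 1)            # cast the votes
    return acc, thetas, diag
```

Selecting the strongest accumulator peaks, restricted to plausible lane angles, yields the lane-line candidates drawn back onto the live video.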


2019 ◽  
Vol 72 (04) ◽  
pp. 917-930
Author(s):  
Fang-Shii Ning ◽  
Xiaolin Meng ◽  
Yi-Ting Wang

Connected and Autonomous Vehicles (CAVs) have been researched extensively for solving traffic issues and for realising the concept of an intelligent transport system. A well-developed positioning system is critical for CAVs to achieve these aims. The system should provide high accuracy, mobility, continuity, flexibility and scalability. However, high-performance equipment is too expensive for the commercial use of CAVs; therefore, the use of a low-cost Global Navigation Satellite System (GNSS) receiver to achieve real-time, high-accuracy and ubiquitous positioning performance will be a future trend. This research used RTKLIB software to develop a low-cost GNSS receiver positioning system and assessed the developed positioning system according to the requirements of CAV applications. Kinematic tests were conducted to evaluate the positioning performance of the low-cost receiver in a CAV driving environment based on the accuracy requirements of CAVs. The results showed that the low-cost receiver satisfied the “Where in Lane” accuracy level (0.5 m) and achieved a similar positioning performance in rural, interurban, urban and motorway areas.


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Hai Wang ◽  
Xinyu Lou ◽  
Yingfeng Cai ◽  
Yicheng Li ◽  
Long Chen

Vehicle detection is one of the most important environment perception tasks for autonomous vehicles. Traditional vision-based vehicle detection methods are not accurate enough, especially for small and occluded targets, while light detection and ranging- (lidar-) based methods are good at detecting obstacles but are time-consuming and have a low classification rate for different target types. To address these shortcomings and make full use of lidar's depth information and vision's obstacle classification ability, this work proposes a real-time vehicle detection algorithm that fuses vision and lidar point cloud information. Firstly, the obstacles are detected by the grid projection method using the lidar point cloud information. Then, the obstacles are mapped to the image to get several separated regions of interest (ROIs). After that, the ROIs are expanded based on the dynamic threshold and merged to generate the final ROI. Finally, a deep learning method named You Only Look Once (YOLO) is applied on the ROI to detect vehicles. The experimental results on the KITTI dataset demonstrate that the proposed algorithm has high detection accuracy and good real-time performance. Compared with the detection method based only on the YOLO deep learning, the mean average precision (mAP) is increased by 17%.
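The ROI expansion and merging step can be sketched as follows. This single-pass sketch uses a fixed fractional margin in place of the paper's dynamic threshold; the margin, image dimensions, and box format are all illustrative assumptions.

```python
def overlaps(a, b):
    """True if axis-aligned boxes a and b = (x1, y1, x2, y2) intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def expand_and_merge(rois, expand=0.2, img_w=1280, img_h=720):
    """Expand each lidar-projected ROI by a fractional margin, clip to the
    image, then merge overlapping boxes into final detection regions."""
    boxes = [[max(0.0, x1 - expand * (x2 - x1)),
              max(0.0, y1 - expand * (y2 - y1)),
              min(float(img_w), x2 + expand * (x2 - x1)),
              min(float(img_h), y2 + expand * (y2 - y1))]
             for x1, y1, x2, y2 in rois]
    merged = []
    for b in boxes:                       # single pass: grow the first
        for m in merged:                  # overlapping merged box, if any
            if overlaps(m, b):
                m[0], m[1] = min(m[0], b[0]), min(m[1], b[1])
                m[2], m[3] = max(m[2], b[2]), max(m[3], b[3])
                break
        else:
            merged.append(list(b))
    return merged
```

Each merged region would then be cropped from the image and passed to the YOLO detector, so the network only sees areas where the lidar already found obstacles.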


Author(s):  
Simon Roberts

The CoDRIVE solution builds on R&D in the development of connected and autonomous vehicles (CAVs). The mainstay of the system is a low-cost GNSS receiver integrated with a MEMS grade IMU powered with CoDRIVE algorithms and high precision data processing software. The solution integrates RFID (radio-frequency identification) localisation information derived from tags installed in the roads around the University of Nottingham. This aids the positioning solution by correcting the long-term drift of inertial navigation technology in the absence of GNSS. The solution is informed of obscuration of GNSS through city models of skyview and elevation masks derived from 360-degree photography. The results show that predictive intelligence of the denial of GNSS and RFID aiding realises significant benefits compared to the inertial only solution. According to the validation, inertial only solutions drift over time, with an overall RMS accuracy over a 300 metres section of GNSS outage of 10 to 20 metres. After deploying the RFID tags on the road, experiments show that the RFID aided algorithm is able to constrain the maximum error to within 3.76 metres, and with 93.9% of points constrained to 2 metres accuracy overall.


Author(s):  
M. L. R. Lagahit ◽  
Y. H. Tseng

Abstract. The concept of Autonomous Vehicles (AV), or self-driving cars, has become increasingly popular these past few years. As such, research and development of AVs have also escalated around the world. One such research topic is High-Definition (HD) maps. HD maps are very detailed maps that provide all the geometric and semantic information on the road, which helps the AV in positioning itself on the lanes as well as mapping objects and markings on the road. This research focuses on the early stages of updating said HD maps. The methodology mainly consists of (1) running YOLOv3, a real-time object detection system, on a photo taken from a stereo camera to detect the object of interest, in this case a traffic cone, (2) applying the theories of stereo-photogrammetry to determine the 3D coordinates of the traffic cone, and (3) executing all of it at the same time on a Python-based platform. Results have shown centimeter-level accuracy in terms of the obtained distance and height of the detected traffic cone from the camera setup. In future works, observed coordinates can be uploaded to a database and then connected to an application for real-time data storage/management and interactive visualization.
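Step (2), recovering 3D coordinates from a rectified stereo pair, reduces to the standard disparity relation Z = f·B/d. A minimal sketch, with all camera parameters (focal length, baseline, principal point) assumed for illustration:

```python
def stereo_xyz(u_l, v_l, u_r, f, baseline, cx, cy):
    """3D camera-frame coordinates of a point matched across a rectified
    stereo pair. f and pixel coords in pixels, baseline in metres.
    Depth Z = f * B / d, with disparity d = u_l - u_r."""
    d = u_l - u_r
    if d <= 0:
        raise ValueError("non-positive disparity: point at or behind infinity")
    z = f * baseline / d                 # depth along the optical axis
    x = (u_l - cx) * z / f               # lateral offset
    y = (v_l - cy) * z / f               # vertical offset
    return x, y, z
```

In the described pipeline, (u_l, v_l) and u_r would come from the YOLOv3 detection of the traffic cone in the left and right images; centimetre-level accuracy then depends chiefly on the calibration and the sub-pixel quality of the disparity.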


2019 ◽  
Vol 11 (3) ◽  
pp. 287 ◽  
Author(s):  
Francesco Nex ◽  
Diogo Duarte ◽  
Anne Steenbeek ◽  
Norman Kerle

The timely and efficient generation of detailed damage maps is of fundamental importance following disaster events to speed up first responders’ (FR) rescue activities and help trapped victims. Several works dealing with the automated detection of building damages have been published in the last decade. The increasingly widespread availability of inexpensive UAV platforms has also driven their recent adoption for rescue operations (i.e., search and rescue). Their deployment, however, remains largely limited to visual image inspection by skilled operators, limiting their applicability in time-constrained real conditions. This paper proposes a new solution to autonomously map building damages with a commercial UAV in near real-time. The solution integrates different components that allow the live streaming of the images on a laptop and their processing on the fly. Advanced photogrammetric techniques and deep learning algorithms are combined to deliver a true-orthophoto showing the position of building damages, which are already processed by the time the UAV returns to base. These algorithms have been customized to deliver fast results, fulfilling the near real-time requirements. The complete solution has been tested in different conditions, and received positive feedback by the FR involved in the EU funded project INACHUS. Two realistic pilot tests are described in the paper. The achieved results show the great potential of the presented approach, how close the proposed solution is to FR’ expectations, and where more work is still needed.


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 4896 ◽  
Author(s):  
Mohamed Elsheikh ◽  
Walid Abdelfatah ◽  
Aboelmagd Noureldin ◽  
Umar Iqbal ◽  
Michael Korenberg

The last decade has witnessed a growing demand for precise positioning in many applications including car navigation. Navigating automated land vehicles requires at least sub-meter level positioning accuracy with the lowest possible cost. The Global Navigation Satellite System (GNSS) Single-Frequency Precise Point Positioning (SF-PPP) is capable of achieving sub-meter level accuracy in benign GNSS conditions using low-cost GNSS receivers. However, SF-PPP alone cannot be employed for land vehicles due to frequent signal degradation and blockage. In this paper, real-time SF-PPP is integrated with a low-cost consumer-grade Inertial Navigation System (INS) to provide a continuous and precise navigation solution. The PPP accuracy and the applied estimation algorithm contributed to reducing the effects of INS errors. The system was evaluated through two road tests which included open-sky, suburban, momentary outages, and complete GNSS outage conditions. The results showed that the developed PPP/INS system maintained horizontal sub-meter Root Mean Square (RMS) accuracy in open-sky and suburban environments. Moreover, the PPP/INS system could provide a continuous real-time positioning solution within the lane the vehicle is moving in. This lane-level accuracy was preserved even when passing under bridges and overpasses on the road. The developed PPP/INS system is expected to benefit low-cost precise land vehicle navigation applications including level 2 of vehicle automation which comprises services such as lane departure warning and lane-keeping assistance.

