Research on Lane Detection Based on Global Search of Dynamic Region of Interest (DROI)

2020 ◽  
Vol 10 (7) ◽  
pp. 2543 ◽  
Author(s):  
Jianjun Hu ◽  
Songsong Xiong ◽  
Yuqi Sun ◽  
Junlin Zha ◽  
Chunyun Fu

A novel lane detection approach, based on dynamic region of interest (DROI) selection in the horizontal and vertical safety vision, is proposed in this paper to improve the accuracy of lane detection. The curvature of each point on the edge of the road and the maximum safe distance, which are solved from the lane line equation and vehicle speed data of the previous frame, are used to accurately select the DROI at the current moment. Next, a global search of the DROI is applied to identify the lane line feature points. Subsequently, the discontinuous points are processed by interpolation. To achieve fast and accurate matching of lane feature points and mathematical equations, the lane line is fitted with a polar coordinate equation. The proposed approach was verified on the Caltech database while maintaining real-time performance. The accuracy rate was 99.21%, which is superior to other mainstream methods described in the literature. Furthermore, to test the robustness of the proposed method, it was tested on 5683 frames of complicated real road pictures, and the positive detection rate was 99.07%.
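The polar-coordinate fit mentioned above can be illustrated with a small sketch. The paper's exact polar model is not given here, so this fits feature points to a line in the Hesse normal (polar) form x·cos θ + y·sin θ = ρ by total least squares; the function name and this particular formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_polar_line(points):
    """Fit a line in polar (Hesse normal) form x*cos(theta) + y*sin(theta) = rho.

    Uses total least squares: the line normal is the direction of least
    variance of the centered points (last right singular vector).
    """
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    nx, ny = vt[-1]                      # unit normal of the fitted line
    theta = np.arctan2(ny, nx)
    rho = nx * centroid[0] + ny * centroid[1]
    if rho < 0:                          # keep rho non-negative by convention
        rho, theta = -rho, theta + np.pi
    return rho, theta
```

Matching feature points against a line stored as (ρ, θ) is then a single dot product per point, which is what makes this representation convenient for fast matching.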

2021 ◽  
Vol 309 ◽  
pp. 01016
Author(s):  
A. Sai Hanuman ◽  
G. Prasanna Kumar

In Advanced Driver Assistance Systems (ADAS), lane detection plays a vital role in avoiding road accidents involving autonomous vehicles. An autonomous vehicle must also be able to navigate by itself; to do so, it needs to understand its surroundings like a human driver, so that it can determine its path on streets and highways and maintain its lane. Lane detection has therefore become a fundamental aspect of current ADAS research. One of the major hurdles in self-driving vehicle research is identifying curved lanes and multiple lanes under challenging light and weather conditions, especially in Indian highway scenarios. Since this is a vision-based lane detection approach, we use the OpenCV library, applying an optimized Canny edge detector to extract the edges and features of the lane, and the Hough transform to generate lane lines within a particular region of interest.


2018 ◽  
Vol 2018 ◽  
pp. 1-15 ◽  
Author(s):  
Zengcai Wang ◽  
Xiaojin Wang ◽  
Lei Zhao ◽  
Guoxin Zhang

This paper presents a lane departure detection approach that utilizes a stacked sparse autoencoder (SSAE) for vehicles driving on motorways or similar roads. Image preprocessing techniques are executed in the initialization procedure to obtain robust region-of-interest extraction. Lane detection operations based on the Hough transform with a polar angle constraint and a matching algorithm are then implemented for two-lane boundary extraction. The slopes and intercepts of the lines are obtained by converting the two lanes from polar to Cartesian space. Lateral offsets are also computed, as an important step of feature extraction, in the image pixel coordinate system without any intrinsic or extrinsic camera parameters. Subsequently, a softmax classifier is designed with the proposed SSAE. The slopes and intercepts of the lines and the lateral offsets are the feature inputs. A greedy, layer-wise method is employed on the inputs to pretrain the weights of the entire deep network. Fine-tuning is conducted to determine the globally optimal parameters by simultaneously altering all layer parameters. The outputs are three detection labels. Experimental results indicate that the proposed approach detects lane departure robustly with a high detection rate. The efficiency of the proposed method is demonstrated on several real images.
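The polar-to-Cartesian conversion used for the slope/intercept features can be written down directly: a Hough line (ρ, θ) satisfies x·cos θ + y·sin θ = ρ, which rearranges to y = mx + b with m = −cos θ / sin θ and b = ρ / sin θ. A minimal sketch (the helper name is an assumption):

```python
import numpy as np

def hough_polar_to_slope_intercept(rho, theta):
    """Convert a Hough line (rho, theta) to Cartesian y = m*x + b.

    The line satisfies x*cos(theta) + y*sin(theta) = rho, so
    m = -cos(theta)/sin(theta) and b = rho/sin(theta).
    """
    sin_t, cos_t = np.sin(theta), np.cos(theta)
    if abs(sin_t) < 1e-9:                # vertical line: slope undefined
        raise ValueError("vertical line has no finite slope")
    return -cos_t / sin_t, rho / sin_t
```

These (m, b) pairs for the two boundaries, plus the lateral offsets, form the feature vector fed to the SSAE in the approach described above.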


2014 ◽  
Vol 1042 ◽  
pp. 126-130 ◽  
Author(s):  
Yu Chai ◽  
Su Jing Wei ◽  
Xin Chun Li

In order to improve the accuracy of lane detection for automatic vehicle driving, a method for detecting the straight part of the lane is proposed: a multi-scale Hough transform method for lane detection based on the Otsu and Canny algorithms. First, Otsu's method is used to segment the image, and morphological erosion and dilation remove information such as roadside trees and fences to strengthen the road boundary characteristics. Then the lane edges and features are obtained with the Canny operator. Finally, the standard Hough transform, the progressive probabilistic Hough transform, and the multi-scale Hough transform complete the detection of the straight part of the lane. The experimental results show that the multi-scale Hough transform method can accurately detect the lane line and provide a reliable basis for path planning, automatic vehicle following, and lane departure warning.
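The Otsu segmentation step can be sketched without library support; this is the standard between-class-variance formulation of Otsu's method, not the authors' implementation:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold for an 8-bit grayscale image.

    Picks the threshold t that maximizes the between-class variance
    w0*w1*(mu0 - mu1)^2 of the background/foreground split.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    cum_p = np.cumsum(prob)                       # class-0 probability up to t
    cum_mean = np.cumsum(prob * np.arange(256))   # cumulative intensity mean
    global_mean = cum_mean[-1]
    best_t, best_var = 0, 0.0
    for t in range(1, 255):
        w0, w1 = cum_p[t], 1.0 - cum_p[t]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / w0
        mu1 = (global_mean - cum_mean[t]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

The binarized result is then cleaned up with morphological erosion and dilation before the Canny and Hough stages described above.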


2011 ◽  
Vol 305 ◽  
pp. 164-167
Author(s):  
Xin Sheng He ◽  
Shi Shi Wang ◽  
Zhi Yong Cai ◽  
Dong Yun Wang

Lane line detection under special conditions is a focal and difficult point of computer-vision-based lane departure warning systems. This article first performs image compression and grayscale conversion, establishes a reasonable region of interest, and removes the non-road information from the image. Then the image pixels are processed statistically to draw the gray-level histogram. By analyzing the dynamic gray-level histogram, we identify the gray values of the lane line and the road surface and automatically calculate a reasonable threshold to binarize and then denoise the images. Finally, we label the images to identify the lane line and establish the distance between the lane line and the vehicle. The test results show that the algorithm presented in this paper can not only detect the lane line accurately in real time, but also enjoys a wide range of applicability, providing a reference for the improvement of lane departure warning systems.
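The histogram-based threshold selection might look like the following sketch. The paper's dynamic gray-level histogram analysis is not reproduced here; picking the valley between the dominant road-surface peak and the brighter lane-marking peak is an illustrative assumption, as are the function names:

```python
import numpy as np

def histogram_valley_threshold(gray):
    """Choose a binarization threshold at the valley between the road peak
    and the brighter lane-marking peak of the gray-level histogram
    (hypothetical stand-in for the paper's dynamic histogram analysis)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    road_peak = int(np.argmax(hist))              # dominant peak: road surface
    upper = hist[road_peak + 1:]
    lane_peak = road_peak + 1 + int(np.argmax(upper)) if upper.size else 255
    lo, hi = sorted((road_peak, lane_peak))
    # valley = least-populated gray level between the two peaks
    return lo + int(np.argmin(hist[lo:hi + 1]))

def binarize(gray, t):
    """Pixels brighter than the threshold become lane candidates (255)."""
    return (gray > t).astype(np.uint8) * 255
```

The binary image is then denoised and labeled, as the abstract describes, to isolate the lane-line components.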


2014 ◽  
Vol 513-517 ◽  
pp. 2876-2879
Author(s):  
Hong She Dang ◽  
Chu Jia Guo

In this paper, we propose a method for structured-road lane detection based on two features: color and direction. This method can improve the robustness and accuracy of lane detection. Two kinds of saliency maps are calculated: a color saliency map and a direction saliency map. The final saliency map is the combination of the two maps mentioned above. A binary image is obtained from the final saliency map, and the feature points used for fitting are selected from it. The road region is then segmented by the lanes. Experimental results show that the proposed method performs better than some other methods.
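The fusion of the two saliency maps can be sketched simply; the equal weighting, min-max normalization, and fixed threshold below are assumptions, since the abstract does not specify the combination rule:

```python
import numpy as np

def fuse_saliency(color_sal, dir_sal, w_color=0.5):
    """Normalize each saliency map to [0, 1] and blend them linearly."""
    def norm(m):
        m = m.astype(float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return w_color * norm(color_sal) + (1 - w_color) * norm(dir_sal)

def to_binary(saliency, t=0.5):
    """Threshold the fused map to get the binary image used for fitting."""
    return (saliency >= t).astype(np.uint8)
```

Pixels that are salient in both maps survive the threshold, which is what makes the combination more robust than either cue alone.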


Every year in India, many car accidents occur, affecting a large number of lives. Most road accidents occur due to driver inattention and fatigue. Drivers have to focus on different circumstances, including vehicle speed and path, the separation between vehicles, passing vehicles, and potentially risky or uncommon events ahead. Accidents also occur because drivers use cell phones while driving, drink and drive, and so on. For this reason, most automobile companies try to provide the best possible Advanced Driver Assistance System (ADAS) to their customers to avoid accidents. Lane detection, in which the vehicle must follow the lane, is one of the functions automobile companies provide in ADAS; it reduces the chance of an accident, and the information obtained from the lane is used to alert the driver. Most researchers are therefore attracted to this field. However, due to varying road conditions, it is very difficult to detect the lane. Computer vision and machine learning approaches are presented in most articles. In this article, we present a deep learning scheme for lane identification. There are two phases in this work: in the first phase the image transformation is done, and in the second phase lane detection takes place. First, the proposed model takes numerous lane pictures and transforms each picture into its corresponding bird's-eye view using an inverse perspective mapping (IPM) transformation. A Deep Convolutional Neural Network (DCNN) classifier then identifies the lane from the bird's-eye view image. The Earth Worm-Crow Search Algorithm (EW-CSA) is designed to supply the DCNN with optimal weights: the DCNN classifier is trained on the bird's-eye view pictures, and the optimal weights are selected through the newly developed EW-CSA algorithm. All of these algorithms are implemented in MATLAB.
The simulation results show exact detection of the road lane. The accuracy, sensitivity, and specificity are also calculated; their values are 0.99512, 0.9925, and 0.995, respectively.
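The first phase, the inverse perspective mapping to a bird's-eye view, amounts to applying a homography that maps a road trapezoid in the camera image to a rectangle. A minimal sketch (in Python rather than the paper's MATLAB), with illustrative, uncalibrated source and destination points:

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 homography H mapping src -> dst (4 point pairs),
    via the standard direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.array(A, float))
    H = vt[-1].reshape(3, 3)       # null-space vector = homography entries
    return H / H[2, 2]

def warp_point(H, x, y):
    """Map one image point through the homography (bird's-eye coordinates)."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

In a full pipeline the whole frame is warped with this H (e.g. OpenCV's warpPerspective) before the classifier sees it; the point form above just shows the underlying math.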


Sensors ◽  
2020 ◽  
Vol 20 (1) ◽  
pp. 324 ◽  
Author(s):  
Dae-Hyun Kim

An advanced driver-assistance system (ADAS), based on lane detection technology, detects dangerous situations through various sensors and either warns the driver or takes over direct control of the vehicle. At present, cameras are commonly used for lane detection; however, their performance varies widely depending on the lighting conditions. Consequently, many studies have focused on using radar for lane detection. However, when using radar, it is difficult to distinguish between the plain road surface and painted lane markers, necessitating the use of radar reflectors for guidance. Previous studies have used long-range radars, which may receive interference signals from various objects, including other vehicles, pedestrians, and buildings, thereby hampering lane detection. Therefore, we propose a lane detection method that uses an impulse radio ultra-wideband radar with high range resolution and metal lane markers installed at regular intervals on the road. Lane detection and departure warning are realized using the periodically reflected signals as well as vehicle speed data as inputs. For verification, a field test was conducted by attaching the radar to a vehicle and installing metal lane markers on the road. Experimental scenarios were established by varying the position and movement of the vehicle, and the measured data demonstrated that the proposed method enables lane detection.


2021 ◽  
Author(s):  
Chuangxin Cai ◽  
Shangbing Gao ◽  
Zhigeng Pan ◽  
Hao Zheng ◽  
Zihe Huang

Lane detection embedded in intelligent vehicles can greatly improve the security of automatic driving. This work offers a new approach to real-time lane detection in video, combining multi-feature fusion and window searching. As pre-processing, polygon filling is adopted to locate the ROI (region of interest) in the video frames, which contains the lane lines to be detected. To remove the backgrounds in the ROIs, we extract and fuse the color, histogram, and gradient features of the lane lines. Based on the density distribution of the pixels in the lane lines, the initial location is found by homography transformation. Then all candidate pixel points in the whole lane line are extracted by window searching. Finally, on the basis of the obtained lane-mark coordinates, a curve model is defined, and the model parameters are obtained by least squares estimation (LSE). Experimental results show the robustness and instantaneity of the proposed algorithm, with an accuracy of 96% and a detection time of only 20.7 ms. In addition, lane lines with misleading backdrops, such as yellow lane lines on the ground, shadow, bright light, lane-line defects, and traffic lights, can also be detected.
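The window-searching and least-squares stages described above can be sketched as follows; the window count, search margin, and quadratic curve model x = ay² + by + c are assumptions, not the paper's exact parameters:

```python
import numpy as np

def window_search_fit(xs, ys, img_h, base_x, n_windows=9, margin=50):
    """Collect lane pixels by sliding fixed-height windows upward from a
    base x position, recentering each window on the mean of its hits, then
    fit x = a*y^2 + b*y + c to the collected pixels by least squares."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    win_h = img_h // n_windows
    current = base_x
    keep = np.zeros(len(xs), dtype=bool)
    for i in range(n_windows):
        y_hi = img_h - i * win_h
        y_lo = y_hi - win_h
        in_win = (ys >= y_lo) & (ys < y_hi) & (np.abs(xs - current) <= margin)
        keep |= in_win
        if in_win.any():
            current = xs[in_win].mean()   # recenter the next window
    # least-squares fit of x as a function of y (LSE via np.polyfit)
    return np.polyfit(ys[keep], xs[keep], 2)
```

Fitting x as a function of y (rather than y of x) is the usual choice for near-vertical lane lines in a bird's-eye view, since it keeps the model single-valued.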


Author(s):  
Tom Partridge ◽  
Lorelei Gherman ◽  
David Morris ◽  
Roger Light ◽  
Andrew Leslie ◽  
...  

Transferring sick premature infants between hospitals increases the risk of severe brain injury, potentially linked to excessive exposure to noise, vibration, and driving-related accelerations. One method of reducing these levels may be to travel along smoother and quieter roads at an optimal speed; however, this requires mass data on the effect of roads on the environment within ambulances. An app for the Android operating system has been developed for the purpose of recording vibration, noise levels, location, and speed data during ambulance journeys. Smartphone accelerometers were calibrated using sinusoidal excitation and the microphones using calibrated pink noise. Four smartphones were provided to the local neonatal transport team and mounted on their neonatal transport systems to collect data. Repeatability of app recordings was assessed by comparing 37 journeys, made during the study period, along an 8.5 km single carriageway. The smartphones were found to have an accelerometer accurate to 5% up to 55 Hz and a microphone accurate to 0.8 dB up to 80 dB. Use of the app was readily adopted by the neonatal transport team, recording more than 97,000 km of journeys in 1 year. To enable comparison between journeys, the 8.5 km route was split into 10 m segments. Interquartile ranges for vehicle speed, vertical acceleration, and maximum noise level were consistent across all segments (within 0.99 m·s−1, 0.13 m·s−2, and 1.4 dB, respectively). Vertical accelerations registered were representative of the road surface. Noise levels correlated with vehicle speed. Android smartphones are a viable method of accurate mass data collection for this application. We now propose to utilise this approach to reduce potentially harmful exposure, from vibration and noise, by routing ambulances along the most comfortable roads.


2021 ◽  
Vol 11 (2) ◽  
pp. 196
Author(s):  
Sébastien Laurent ◽  
Laurence Paire-Ficout ◽  
Jean-Michel Boucheix ◽  
Stéphane Argon ◽  
Antonio Hidalgo-Muñoz

The question of the possible impact of deafness on temporal processing remains unanswered. Different findings, based on behavioral measures, show contradictory results. The goal of the present study is to analyze the brain activity underlying time estimation by using functional near infrared spectroscopy (fNIRS) techniques, which allow examination of the frontal, central and occipital cortical areas. A total of 37 participants (19 deaf) were recruited. The experimental task involved processing a road scene to determine whether the driver had time to safely execute a driving task, such as overtaking. The road scenes were presented in animated format, or in sequences of 3 static images showing the beginning, mid-point, and end of a situation. The latter presentation required a clocking mechanism to estimate the time between the samples to evaluate vehicle speed. The results show greater frontal region activity in deaf people, which suggests that more cognitive effort is needed to process these scenes. The central region, which is involved in clocking according to several studies, is particularly activated by the static presentation in deaf people during the estimation of time lapses. Exploration of the occipital region yielded no conclusive results. Our results on the frontal and central regions encourage further study of the neural basis of time processing and its links with auditory capacity.

