Fast Ellipse Detection Algorithm Using Hough Transform on the GPU

Author(s):  
Yasuaki Ito ◽  
Kouhei Ogawa ◽  
Koji Nakano

1996 ◽  
Vol 17 (7) ◽  
pp. 777-784 ◽  
Author(s):  
P.S. Nair ◽  
A.T. Saunders

2021 ◽  
Vol 18 (2) ◽  
pp. 172988142110087
Author(s):  
Qiao Huang ◽  
Jinlong Liu

The vision-based road lane detection technique plays a key role in driver assistance systems. While existing lane recognition algorithms have demonstrated detection rates above 90%, validation tests were usually conducted on limited scenarios, and significant gaps remain when the algorithms are applied to real-life autonomous driving. The goal of this article was to identify these gaps and to suggest research directions that can bridge them. A straight-lane detection algorithm based on the linear Hough transform (HT) was used in this study as an example to evaluate possible perception issues under challenging scenarios, including various road types, different weather conditions and shade, changing lighting conditions, and so on. The study found that the HT-based algorithm delivered an acceptable detection rate against simple backgrounds, such as highway driving or scenes with distinguishable contrast between lane boundaries and their surroundings. However, it failed to recognize road dividing lines under varied lighting conditions; the failure was attributed to the binarization step failing to extract lane features before detection. In addition, the existing HT-based algorithm was disturbed by lane-like interference such as guardrails, railways, bikeways, utility poles, pedestrian sidewalks, and buildings. Overall, these findings support the need for further improvement of current road lane detection algorithms so that they are robust against interference and illumination variations. Moreover, the widely used algorithm could raise the lane boundary detection rate if an appropriate search-range restriction and illumination classification process were added.
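As an illustration of the voting step the linear Hough transform relies on (a minimal pure-Python sketch, not the authors' implementation; the function name and parameters are hypothetical):

```python
import math

def hough_lines(points, width, height, n_theta=180, threshold=3):
    """Vote in (theta, rho) space for lines through the given edge points.

    points: iterable of (x, y) edge-pixel coordinates.
    Returns a list of (theta_radians, rho, votes) peaks with at least
    `threshold` votes. Each line is parameterized as
    rho = x*cos(theta) + y*sin(theta).
    """
    diag = int(math.hypot(width, height)) + 1  # max possible |rho|
    acc = {}  # (theta_index, shifted_rho_index) -> vote count
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (t, int(round(rho)) + diag)  # shift rho so the index is non-negative
            acc[key] = acc.get(key, 0) + 1
    peaks = []
    for (t, r), votes in acc.items():
        if votes >= threshold:
            peaks.append((math.pi * t / n_theta, r - diag, votes))
    return peaks
```

For example, four collinear pixels on the vertical line x = 2 produce a peak at theta = 0, rho = 2 with four votes. Real detectors (e.g. OpenCV's `cv2.HoughLinesP`) add binarized edge input, accumulator smoothing, and non-maximum suppression on top of this core.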


2019 ◽  
Vol 52 (3-4) ◽  
pp. 252-261 ◽  
Author(s):  
Xiaohua Cao ◽  
Daofan Liu ◽  
Xiaoyu Ren

Position deviation always appears while an automated guided vehicle (AGV) walks. Current edge-detection approaches applied in visual navigation struggle to meet the demands of complex factory environments, since they are easily affected by noise, which results in low measurement accuracy and instability. To avoid the defects of edge detection, an improved method based on image thinning and the Hough transform is proposed to solve the problem of AGV walking deviation. First, the image of the lane line is preprocessed with gray-scale conversion, threshold segmentation, and mathematical morphology; then a thinning algorithm is employed to obtain the skeleton of the lane line; next, combining Hough detection with line fitting, the equation of the guide line is generated; finally, the AGV's walking deviation is calculated. Experimental results show that the proposed methodology can handle non-ideal factors of the actual environment, such as bright areas, path breaks, and clutter on the road, and can extract the parameters of the guide line effectively, after which the AGV's walking deviation is obtained. The method is proved feasible for AGV visual navigation in indoor environments.
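The final step of the pipeline (fit a line to the thinned skeleton, then read off the deviation) can be sketched as follows; this is a hypothetical illustration under simple assumptions, not the paper's actual fitting code:

```python
import math

def walking_deviation(skeleton, img_center_x):
    """Fit x = a*y + b to skeleton pixels by least squares, then report the
    AGV's lateral offset (pixels, at the bottom-most skeleton row) and
    heading error (degrees; 0 means a perfectly vertical guide line).

    skeleton: list of (x, y) pixels of the thinned guide line.
    Fitting x as a function of y avoids an infinite slope for the
    near-vertical lines typical of a forward-facing AGV camera.
    """
    n = len(skeleton)
    sx = sum(x for x, _ in skeleton)
    sy = sum(y for _, y in skeleton)
    sxy = sum(x * y for x, y in skeleton)
    syy = sum(y * y for _, y in skeleton)
    a = (n * sxy - sx * sy) / (n * syy - sy * sy)  # slope dx/dy
    b = (sx - a * sy) / n                          # intercept
    y_bottom = max(y for _, y in skeleton)
    lateral = (a * y_bottom + b) - img_center_x    # + means the line lies right of center
    heading = math.degrees(math.atan(a))
    return lateral, heading
```

A perfectly vertical skeleton at x = 10 with the image center at x = 8 yields a lateral offset of 2 pixels and zero heading error; converting pixels to millimetres would require the camera calibration the paper's system provides.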


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
BinBin Zhang ◽  
Fumin Zhang ◽  
Xinghua Qu

Purpose — Laser-based measurement techniques offer various advantages over conventional measurement techniques: they are non-destructive and non-contact, fast, and work over long measuring distances. In cooperative laser ranging systems, it is crucial to extract the center coordinates of retroreflectors to accomplish automatic measurement. To solve this problem, this paper aims to propose a novel method.
Design/methodology/approach — We propose a method using Mask R-CNN (Region-based Convolutional Neural Network), with ResNet101 (Residual Network, 101 layers) and an FPN (Feature Pyramid Network) as the backbone, to localize retroreflectors, realizing automatic recognition against different backgrounds. Compared with two other deep learning algorithms, experiments show that the recognition rate of Mask R-CNN is better, especially for small-scale targets. On this basis, an ellipse detection algorithm is introduced to obtain the ellipses of the retroreflectors from the recognized target areas, and the center coordinates of the retroreflectors in the camera coordinate system are then derived mathematically.
Findings — To verify the accuracy of this method, an experiment was carried out: the distance between two retroreflectors with a known separation of 1,000.109 mm was measured, yielding a root-mean-square error of 2.596 mm, which meets the requirements for coarse location of retroreflectors.
Research limitations/implications — (i) As the data set has only 200 pictures, and although data augmentation methods such as rotating, mirroring, and cropping were used, there is still room to improve the generalization ability of the detector. (ii) The ellipse detection algorithm needs to work in relatively dark conditions, as the retroreflector is made of stainless steel, which easily reflects light.
Originality/value — The value of the article lies in obtaining the center coordinates of multiple retroreflectors automatically, even against a cluttered background; recognizing retroreflectors of different sizes, especially small targets; meeting the recognition requirement of multiple targets in a large field of view; and obtaining the 3D centers of the targets by monocular model-based vision.
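The accuracy check reported in the Findings reduces to two small computations: the Euclidean distance between the two recovered 3D centers, and the root-mean-square error of repeated measurements against the known separation. A minimal sketch (function names are illustrative, not from the paper):

```python
import math

def distance_3d(p, q):
    """Euclidean distance between two 3D retroreflector centers (mm)."""
    return math.dist(p, q)

def rms_error(measured, truth):
    """Root-mean-square error of repeated distance measurements
    against a known reference separation (all in mm)."""
    return math.sqrt(sum((m - truth) ** 2 for m in measured) / len(measured))
```

With the paper's setup, `truth` would be 1,000.109 mm and `measured` the distances computed from the monocular model-based 3D centers across repeated trials; the reported figure of 2.596 mm is the resulting RMS error.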


2014 ◽  
Vol 22 (4) ◽  
pp. 1104-1111 ◽  
Author(s):  
叶峰 YE Feng ◽  
陈灿杰 CHEN Can-jie ◽  
赖乙宗 LAI Yi-zong ◽  
陈剑东 CHEN Jian-dong

2018 ◽  
Vol 29 (5) ◽  
pp. 845-860 ◽  
Author(s):  
Ion Martinikorena ◽  
Rafael Cabeza ◽  
Arantxa Villanueva ◽  
Iñaki Urtasun ◽  
Andoni Larumbe

10.5772/63540 ◽  
2016 ◽  
Vol 13 (3) ◽  
pp. 98 ◽  
Author(s):  
Abhijeet Ravankar ◽  
Ankit A. Ravankar ◽  
Yohei Hoshino ◽  
Takanori Emaru ◽  
Yukinori Kobayashi
