Efficient extraction of road information for car navigation applications using road pavement markings obtained from aerial images

2006 ◽  
Vol 33 (10) ◽  
pp. 1320-1331 ◽  
Author(s):  
Jin Gon Kim ◽  
Dong Yeob Han ◽  
Ki Yun Yu ◽  
Yong Il Kim ◽  
Sung Mo Rhee

The efficient extraction of road information is increasingly important with the rapid growth of road-related services such as car navigation systems, telematics, and location-based services. Conventional methods of creating and updating road information are expensive and time consuming, so a set of processes is required that collects the same information more efficiently. We propose a new method for collecting road information in complex urban areas from road pavement markings in aerial images. This information includes lane and symbol markings that guide direction; the geometric properties of the pavement markings and their spatial relationships are analyzed. Road construction manuals and a series of cutting-edge techniques, including template matching, are used in our analysis. To validate our approach, the accuracy of our results was evaluated by comparison with manually extracted ground truth data. Our approach demonstrates that road information can, to a useful extent, be extracted efficiently in a complex urban area.

Key words: aerial image, automatic extraction, pavement marking, road information, CNS.
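The template-matching step mentioned in the abstract can be illustrated with a minimal normalized cross-correlation search; the function names and the tiny synthetic "marking" below are illustrative sketches, not the authors' implementation.

```python
import math

def ncc(patch, template):
    """Normalized cross-correlation between two equal-sized greyscale patches."""
    n = len(template) * len(template[0])
    pm = sum(map(sum, patch)) / n
    tm = sum(map(sum, template)) / n
    num = dp = dt = 0.0
    for pr, tr in zip(patch, template):
        for p, t in zip(pr, tr):
            num += (p - pm) * (t - tm)
            dp += (p - pm) ** 2
            dt += (t - tm) ** 2
    denom = math.sqrt(dp * dt)
    return num / denom if denom else 0.0

def match_template(image, template):
    """Slide the template over the image; return (row, col, score) of the best match."""
    th, tw = len(template), len(template[0])
    best = (0, 0, -1.0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [row[c:c + tw] for row in image[r:r + th]]
            s = ncc(patch, template)
            if s > best[2]:
                best = (r, c, s)
    return best
```

A real pipeline would run this against templates for each symbol marking defined in the road construction manuals and threshold the score.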

2020 ◽  
Vol 34 (01) ◽  
pp. 1037-1045 ◽  
Author(s):  
Hao Wu ◽  
Hanyuan Zhang ◽  
Xinyu Zhang ◽  
Weiwei Sun ◽  
Baihua Zheng ◽  
...  

Automatic map extraction is of great importance to urban computing and location-based services. Aerial images and GPS trajectories are two different data sources that can be leveraged to generate maps, as they carry complementary types of information. Most previous works on data fusion between aerial images and data from auxiliary sensors do not fully utilize the information of both modalities and hence suffer from information loss. We propose a deep convolutional neural network called DeepDualMapper, which fuses aerial image and trajectory data in a more seamless manner to extract the digital map. We design a gated fusion module to explicitly control the information flows from both modalities in a complementary-aware manner. Moreover, we propose a novel densely supervised refinement decoder that generates the prediction in a coarse-to-fine way. Our comprehensive experiments demonstrate that DeepDualMapper fuses the information of images and trajectories much more effectively than existing approaches and generates maps with higher accuracy.
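The gated-fusion idea — explicitly controlling how much each modality contributes, element by element — can be sketched with a scalar sigmoid gate; the weights below are hypothetical stand-ins for the learned parameters of the paper's module.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fuse(img_feat, traj_feat, w_img, w_traj, bias):
    """Complementary-aware fusion: a per-element gate decides how much the
    image feature vs. the trajectory feature contributes to the output.
    g = sigmoid(w_img*a + w_traj*b + bias); fused = g*a + (1-g)*b."""
    fused = []
    for a, b in zip(img_feat, traj_feat):
        g = sigmoid(w_img * a + w_traj * b + bias)
        fused.append(g * a + (1.0 - g) * b)
    return fused
```

With weights that favour whichever modality carries signal, the gate routes each element to the stronger source — e.g. roads occluded by trees in the image can still be recovered from trajectories.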


Land ◽  
2021 ◽  
Vol 10 (8) ◽  
pp. 771
Author(s):  
Athos Agapiou

Land cover mapping is often performed via satellite or aerial multispectral/hyperspectral datasets. This paper explores new potential for characterising land cover from archive greyscale satellite sources by applying classification analysis to colourised images. In particular, a CORONA satellite image over Larnaca city in Cyprus was used for this study. The DeOldify deep-learning method embedded in the MyHeritage platform was first applied to colourise the CORONA image. The new image was then compared against the original greyscale image using various quality metrics. Next, geometric correction of the colourised CORONA image was performed using common ground control points taken from aerial images. A segmentation of the image was then completed, and segments were selected and characterised for training during the classification process. The latter was performed using the support vector machine (SVM) classifier. Five main land cover classes were selected: land, water, salt lake, vegetation, and urban areas. The overall results of the classification were then evaluated and were very promising (>85% classification accuracy, 0.91 kappa coefficient). The outcomes show that this method can be applied to any archive greyscale satellite or aerial image to characterise past landscapes. These results are improved compared to other methods, such as using texture filters.
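The two evaluation figures reported here (overall accuracy and the kappa coefficient) are standard derivations from a confusion matrix; the sketch below is a generic implementation, not the study's evaluation code.

```python
def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    # Chance agreement: sum over classes of (row marginal * column marginal).
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / (total * total)
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa
```

Kappa discounts agreement expected by chance, which is why it is reported alongside raw accuracy for land cover classification.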


Author(s):  
Zille Hussnain ◽  
Sander Oude Elberink ◽  
George Vosselman

In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method that utilizes corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description, and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step, the MLSPC is cropped patch-wise and converted to ortho images; each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique that exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondences reaches pixel level at an image resolution of 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.
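The outlier-filtering idea — checking that relative Euclidean distances and angles between corresponding feature points agree across the two images — can be sketched as a voting scheme. The tolerances and the majority rule below are illustrative assumptions, not the authors' actual thresholds.

```python
import math

def filter_matches(pts_a, pts_b, dist_tol=2.0, ang_tol=0.1):
    """Keep the indices of correspondences whose pairwise distances and
    angles to the other matches agree between the two point sets.
    Tolerances (pixels, radians) are hypothetical defaults."""
    keep = []
    for i, (a, b) in enumerate(zip(pts_a, pts_b)):
        votes = 0
        for j, (a2, b2) in enumerate(zip(pts_a, pts_b)):
            if i == j:
                continue
            da = math.dist(a, a2)                 # distance in image A
            db = math.dist(b, b2)                 # distance in image B
            ang_a = math.atan2(a2[1] - a[1], a2[0] - a[0])
            ang_b = math.atan2(b2[1] - b[1], b2[0] - b[0])
            if abs(da - db) < dist_tol and abs(ang_a - ang_b) < ang_tol:
                votes += 1
        if votes >= (len(pts_a) - 1) / 2:         # majority of the others agree
            keep.append(i)
    return keep
```

A mismatched correspondence distorts its distances and angles to every correctly matched point, so it collects few votes and is discarded.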



2019 ◽  
Vol 11 (10) ◽  
pp. 1157 ◽  
Author(s):  
Jorge Fuentes-Pacheco ◽  
Juan Torres-Olivares ◽  
Edgar Roman-Rangel ◽  
Salvador Cervantes ◽  
Porfirio Juarez-Lopez ◽  
...  

Crop segmentation is an important task in precision agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown in open fields under difficult circumstances: complex lighting conditions and the non-ideal crop maintenance practices of local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and the highly variable visual appearance of the leaves. We make our CNN code available to the research community, along with the aerial image data set and a hand-made, pixel-precise ground truth segmentation, to facilitate comparison among different algorithms.
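One common definition of the mean accuracy reported for binary segmentation is the average of per-class pixel recalls over the crop and non-crop classes; the sketch below is a generic metric implementation, not the released evaluation code, and assumes that definition.

```python
def mean_accuracy(pred, truth):
    """Mean of per-class recall over {0: non-crop, 1: crop}.
    Inputs are equally sized masks given as lists of rows of 0/1 labels."""
    stats = {0: [0, 0], 1: [0, 0]}  # class -> [correct, total]
    for pr, tr in zip(pred, truth):
        for p, t in zip(pr, tr):
            stats[t][1] += 1
            if p == t:
                stats[t][0] += 1
    present = [(c, n) for c, n in stats.values() if n]
    return sum(c / n for c, n in present) / len(present)
```

Averaging per-class recalls keeps the metric honest when one class (typically non-crop background) dominates the image.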


2021 ◽  
Vol 13 (14) ◽  
pp. 2656
Author(s):  
Furong Shi ◽  
Tong Zhang

Deep-learning technologies, especially convolutional neural networks (CNNs), have achieved great success in building extraction from aerial images. However, shape details are often lost during down-sampling, resulting in discontinuous segmentation or inaccurate segmentation boundaries. To compensate for the loss of shape information, two shape-related auxiliary tasks (boundary prediction and distance estimation) were jointly learned with the building segmentation task in our proposed network. Meanwhile, two consistency-constraint losses were designed on top of the multi-task network to exploit the duality between the mask prediction and the two shape-related predictions. Specifically, an atrous spatial pyramid pooling (ASPP) module was appended to the top of the encoder of a U-shaped network to obtain multi-scale features. Based on these features, one regression loss and two classification losses were used for predicting the distance-transform map, the segmentation, and the boundary. Two inter-task consistency-loss functions were constructed to ensure consistency between distance maps and masks, and between masks and boundary maps. Experimental results on three public aerial image data sets showed that our method achieved superior performance over recent state-of-the-art models.
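The distance-estimation auxiliary task needs a per-pixel distance-transform target derived from the building mask. A minimal multi-source BFS giving the Manhattan distance to the nearest background pixel might look like this; a real pipeline would more likely use a Euclidean transform on full-size masks, so treat this as an illustrative sketch.

```python
from collections import deque

def distance_transform(mask):
    """4-neighbour (Manhattan) distance from each pixel to the nearest
    non-building pixel; background pixels (0) get distance 0."""
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for r in range(h):
        for c in range(w):
            if mask[r][c] == 0:          # every background pixel seeds the BFS
                dist[r][c] = 0
                q.append((r, c))
    while q:                             # multi-source breadth-first expansion
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist
```

Supervising a regression head on such a map forces the network to reason about how far inside a building each pixel lies, which sharpens the recovered boundaries.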


Author(s):  
Linying Zhou ◽  
Zhou Zhou ◽  
Hang Ning

Road detection from aerial images is still a challenging task, since it is heavily influenced by spectral reflectance, shadows, and occlusions. To increase road detection accuracy, this paper studies a method for road detection using a geodesic active contour (GAC) model with edge feature extraction and segmentation. First, the edge feature is extracted using the proposed gradient magnitude with the Canny operator. Then, a reconstructed gradient map is used in the watershed transformation, whose segmentation provides the initial contour. Last, combining the edge feature and the initial contour, the boundary stopping function is applied in the GAC model, yielding the final road boundary. Experimental results, compared against other methods under the F-measure, show that the proposed method achieves satisfactory results.
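The gradient-magnitude computation underlying the edge-feature step can be sketched with a plain Sobel operator; this is the textbook version, not the paper's modified gradient.

```python
def gradient_magnitude(img):
    """Sobel gradient magnitude on a greyscale image (list of rows of numbers).
    Border pixels are left at zero for simplicity."""
    h, w = len(img), len(img[0])
    kx = ((-1, 0, 1), (-2, 0, 2), (-1, 0, 1))   # horizontal derivative kernel
    ky = ((-1, -2, -1), (0, 0, 0), (1, 2, 1))   # vertical derivative kernel
    out = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = sum(kx[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(ky[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            out[r][c] = (gx * gx + gy * gy) ** 0.5
    return out
```

Strong responses trace road edges; in the paper's pipeline this map feeds both the watershed transformation and the GAC boundary stopping function.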


2021 ◽  
Vol 87 (4) ◽  
pp. 237-248
Author(s):  
Nahed Osama ◽  
Bisheng Yang ◽  
Yue Ma ◽  
Mohamed Freeshah

The Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) can provide new measurements of the Earth's elevations through photon-counting technology. Most research has focused on extracting the ground and canopy photons in vegetated areas, yet the extraction of ground photons from urban areas, where vegetation is mixed with artificial constructions, has not been fully investigated. This article proposes a new method to estimate ground surface elevations in urban areas. The ICESat-2 signal photons were detected by an improved Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm and the Advanced Topographic Laser Altimeter System (ATLAS) algorithm. The Advanced Land Observing Satellite-1 PALSAR-derived digital surface model was utilized to separate the terrain surface from the ICESat-2 data. A set of ground-truth data was used to evaluate the accuracy of these two methods; the achieved accuracy was up to 2.7 cm, which makes our method effective and accurate for determining ground elevation in urban scenes.
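The density-based photon detection can be illustrated with a minimal DBSCAN over (along-track distance, height) pairs: signal photons form dense clusters while solar background noise stays sparse. The `eps` and `min_pts` values are illustrative; the paper's improved variant adapts such parameters to the data.

```python
import math

def dbscan(points, eps, min_pts):
    """Label each 2-D point with a cluster id, or -1 for noise."""
    labels = [None] * len(points)

    def neighbours(i):
        return [j for j, q in enumerate(points) if math.dist(points[i], q) <= eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1               # provisionally noise
            continue
        cluster += 1                     # i is a core point: start a cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster      # noise absorbed as a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbours(j)
            if len(nb) >= min_pts:       # only core points expand the cluster
                queue.extend(nb)
    return labels
```

In the photon-filtering setting, points labelled -1 are treated as background and discarded before estimating the ground surface.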


Author(s):  
WANG WEI ◽  
YANG XIN

This paper describes an innovative aerial image segmentation algorithm based on multiscale geometric analysis using the contourlet transform, which can extract an image's intrinsic geometric structure efficiently. The contourlet transform is introduced to represent the most distinctive, rotation-invariant features of the image. A modified Mumford–Shah model then segments the aerial image through a multi-feature level-set evolution. To avoid local minima in the evolution, the weighting coefficients of the multiscale features are adjusted across evolution periods: the global features receive larger weights in the early stages, which roughly define the shape of the contour, and larger weights are then assigned to the detailed features to segment the precise shape. When the algorithm is applied to aerial images with several classes, satisfactory experimental results are achieved.
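The stage-dependent weighting of global versus detailed features can be sketched as a simple linear schedule over the evolution steps. The abstract gives only this qualitative description, so the schedule below is purely illustrative.

```python
def feature_weights(step, total_steps):
    """Linearly shift weight from coarse (global) features early in the
    level-set evolution to fine (detail) features late in it."""
    t = step / float(total_steps)
    return {"global": 1.0 - t, "detail": t}
```

Early iterations are dominated by global features that settle the rough contour; later iterations hand control to detail features that refine its precise shape.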

