MRENet: Simultaneous Extraction of Road Surface and Road Centerline in Complex Urban Scenes from Very High-Resolution Images

2021 · Vol 13 (2) · pp. 239
Author(s):  
Zhenfeng Shao ◽  
Zifan Zhou ◽  
Xiao Huang ◽  
Ya Zhang

Automatic extraction of the road surface and road centerline from very high-resolution (VHR) remote sensing images has always been a challenging task in the field of feature extraction. Most existing road datasets are based on data with simple and clear backgrounds under ideal conditions, such as images derived from Google Earth. Therefore, studies on road surface extraction and road centerline extraction under complex scenes are insufficient. Meanwhile, most existing efforts address these two tasks separately, without considering the possible joint extraction of road surface and centerline. With the introduction of multitask convolutional neural network models, it is possible to carry out these two tasks simultaneously by facilitating information sharing within a multitask deep learning model. In this study, we first design a challenging dataset using remote sensing images from the GF-2 satellite. The dataset contains complex road scenes with manually annotated images. We then propose a two-task, end-to-end convolutional neural network, termed Multitask Road-related Extraction Network (MRENet), for road surface extraction and road centerline extraction. We take features extracted from the road as the condition of centerline extraction, and the information transmission and parameter sharing between the two tasks compensate for the potential problem of insufficient road centerline samples. In the network design, we use atrous convolutions and a pyramid scene parsing pooling module (PSP pooling), aiming to expand the network receptive field, integrate multilevel features, and obtain more abundant information. In addition, we use a weighted binary cross-entropy function to alleviate the imbalance between road and background pixels. Experimental results show that the proposed algorithm outperforms several comparative methods in terms of classification precision and visual interpretation.
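The weighted binary cross-entropy mentioned in the abstract can be sketched in plain Python. This is a minimal illustration, not the paper's implementation; in particular, the default `pos_weight=10.0` up-weighting of the sparse road class is an assumed value, not a setting reported by the authors.

```python
import math

def weighted_bce(y_true, y_pred, pos_weight=10.0, eps=1e-7):
    """Weighted binary cross-entropy for class-imbalanced masks.

    pos_weight up-weights the sparse positive (road) pixels relative
    to the dominant background class. The value 10.0 is illustrative.
    """
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        total += -(pos_weight * t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)
```

With `pos_weight > 1`, a missed road pixel costs more than a missed background pixel, which is the standard way such a loss counteracts a background-dominated mask.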

2020 · Vol 12 (18) · pp. 2935
Author(s):  
Zixia Tang ◽  
Mengmeng Li ◽  
Xiaoqin Wang

Tea is an important economic plant, which is widely cultivated in many countries, particularly in China. Accurately mapping tea plantations is crucial in the operations, management, and supervision of the growth and development of the tea industry. We propose an object-based convolutional neural network (CNN) to extract tea plantations from very high-resolution remote sensing images. Image segmentation was performed to obtain image objects, while a fine-tuned CNN model was used to extract deep image features. We conducted feature selection based on the Gini index to reduce the dimensionality of deep features, and the selected features were then used for classifying tea objects via a random forest. The proposed method was first applied to Google Earth images and then transferred to GF-2 satellite images. We compared the proposed classification with existing methods: object-based classification using random forest, Mask R-CNN, and object-based CNN without fine-tuning. The results show the proposed method achieved a higher classification accuracy than other methods and produced smaller over- and under-classification geometric errors than Mask R-CNN in terms of shape integrity and boundary consistency. The proposed approach, trained using Google Earth images, achieved comparable results when transferred to the classification of tea objects from GF-2 images. We conclude that the proposed method is effective for mapping tea plantations using very high-resolution remote sensing images even with limited training samples, and has great potential for mapping tea plantations over large areas.
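Gini-index feature selection, as used above to prune deep features before the random forest, can be sketched as ranking each feature by the best single-threshold Gini impurity decrease it yields. This is a minimal sketch under that assumption; the paper's exact selection procedure may differ, and all function names here are illustrative.

```python
def gini(labels):
    # Gini impurity of a binary label list
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 1.0 - p * p - (1 - p) * (1 - p)

def gini_gain(values, labels):
    # Best single-threshold impurity decrease for one feature column
    base = gini(labels)
    best = 0.0
    for thr in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= thr]
        right = [l for v, l in zip(values, labels) if v > thr]
        if not left or not right:
            continue
        w = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        best = max(best, base - w)
    return best

def select_top_k(feature_columns, labels, k):
    # Keep the indices of the k most informative feature columns
    gains = [(gini_gain(col, labels), i) for i, col in enumerate(feature_columns)]
    gains.sort(reverse=True)
    return [i for _, i in gains[:k]]
```

The retained feature indices would then index into the deep-feature vectors fed to the random forest classifier.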


2018 · Vol 10 (8) · pp. 1284
Author(s):  
Zhiqiang Zhang ◽  
Xinchang Zhang ◽  
Ying Sun ◽  
Pengcheng Zhang

Road networks provide key information for a broad range of applications such as urban planning, urban management, and navigation. The fast-developing technology of remote sensing, which acquires high-resolution observational data of the land surface, offers opportunities for automatic extraction of road networks. However, road networks extracted from remote sensing images are likely affected by shadows and trees, making the road map irregular and inaccurate. This research aims to improve the extraction of road centerlines using both very-high-resolution (VHR) aerial images and light detection and ranging (LiDAR) data by accounting for road connectivity. The proposed method first applies the fractal net evolution approach (FNEA) to segment remote sensing images into image objects and then classifies image objects using a machine learning classifier, random forest. A post-processing approach based on the minimum area bounding rectangle (MABR) is proposed, and a structure feature index is adopted to obtain the complete road networks. Finally, a multistep approach, that is, a morphology thinning, Harris corner detection, and least squares fitting (MHL) approach, is designed to accurately extract the road centerlines from the complex road networks. The proposed method is applied to three datasets: the New York dataset obtained from the object identification dataset, the Vaihingen dataset obtained from the International Society for Photogrammetry and Remote Sensing (ISPRS) 2D semantic labelling benchmark, and the Guangzhou dataset. Compared with two state-of-the-art methods, the proposed method obtains the highest completeness, correctness, and quality on all three datasets. The experimental results show that the proposed method is an efficient solution for extracting road centerlines in complex scenes from VHR aerial images and LiDAR data.
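The final step of the MHL pipeline fits straight centerline segments to thinned road pixels between detected corner points. A minimal sketch of just that least-squares fitting step, assuming points come as `(x, y)` pixel coordinates and the segment is not vertical:

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b to a set of (x, y) road pixels.

    Illustrates only the 'L' (least squares fitting) stage of MHL;
    thinning and Harris corner detection are assumed to have already
    produced the point set for one segment.
    """
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    denom = n * sxx - sx * sx  # zero only for a vertical segment
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b
```

In practice each run of pixels between two corners would be fitted separately, so the centerline stays straight between junctions and bends.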


2021 · Vol 42 (21) · pp. 8318-8344
Author(s):  
Xianwei Lv ◽  
Zhenfeng Shao ◽  
Dongping Ming ◽  
Chunyuan Diao ◽  
Keqi Zhou ◽  
...  

2020 · Vol 86 (3) · pp. 153-160
Author(s):  
Xiaoyan Lu ◽  
Yanfei Zhong ◽  
Zhuo Zheng ◽  
Ji Zhao ◽  
Liangpei Zhang

Road detection in very-high-resolution remote sensing imagery is a hot research topic. However, the high resolution results in highly complex data distributions, which introduce much noise for road detection; for example, shadows and occlusions caused by disturbance on the roadside make it difficult to accurately recognize roads. In this article, a novel edge-reinforced convolutional neural network, combining multiscale feature extraction and edge reinforcement, is proposed to alleviate this problem. First, multiscale feature extraction is used in the center part of the proposed network to extract multiscale context information. Then edge reinforcement, applying a simplified U-Net to learn additional edge information, is used to restore the road information. The two operations can be used with different convolutional neural networks. Finally, two public road datasets are adopted to verify the effectiveness of the proposed approach, with experimental results demonstrating its superiority.
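An edge branch like the simplified U-Net above needs edge labels to learn from, and these are typically derived from the binary road mask itself. A minimal sketch, assuming 4-connectivity (how the paper derives its edge supervision is not stated in the abstract):

```python
def edge_map(mask):
    """Derive an edge label map from a binary road mask.

    A pixel counts as edge if it is road and at least one of its
    4-neighbours is background (or lies outside the image). Such a
    map could supervise an auxiliary edge-reinforcement branch.
    """
    h, w = len(mask), len(mask[0])
    edges = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            if any(not (0 <= a < h and 0 <= b < w) or not mask[a][b]
                   for a, b in nbrs):
                edges[i][j] = 1
    return edges
```

Training the edge branch against this map pushes the shared features to stay sharp along road boundaries, which is where shadow and occlusion noise does the most damage.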

