Land Cover Mapping Method Using a Competitive Neural Network.

Author(s): Yosuke ITO, Sigeru OMATU

2020, Vol. 12 (14), pp. 2292
Author(s): Xin Luo, Xiaohua Tong, Zhongwen Hu, Guofeng Wu

Moderate spatial resolution (MSR) satellite images, which offer a trade-off among radiometric, spectral, spatial and temporal characteristics, are extremely popular data for acquiring land cover information. However, the low accuracy of existing classification methods for MSR images remains a fundamental issue restricting their use in urban land cover mapping. In this study, we proposed a hybrid convolutional neural network (H-ConvNet) for improving urban land cover mapping with MSR Sentinel-2 images. The H-ConvNet is structured with two streams: a lightweight 1D ConvNet for deep spectral feature extraction and a lightweight 2D ConvNet for deep context feature extraction. To obtain a well-trained 2D ConvNet, a training sample expansion strategy was introduced to assist context feature learning. The H-ConvNet was tested in six highly heterogeneous urban regions around the world and compared with a support vector machine (SVM), object-based image analysis (OBIA), a Markov random field model (MRF) and a recently proposed patch-based ConvNet system. The results showed that the H-ConvNet performed best. We hope that the proposed H-ConvNet will benefit land cover mapping with MSR images in highly heterogeneous urban regions.
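The two-stream design described above can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: the band count, patch size, kernel sizes and the single-feature global-average pooling per stream are all assumptions chosen to keep the example small; the real H-ConvNet stacks multiple learned layers per stream.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_valid(x, k):
    """Valid 1-D convolution (cross-correlation) of a spectral vector with kernel k."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

def conv2d_valid(img, k):
    """Valid 2-D convolution (cross-correlation) of a patch with kernel k."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# Hypothetical input: 10 Sentinel-2 band values for one pixel, plus a 9x9
# single-band context patch centred on that pixel.
spectrum = rng.random(10)       # spectral-stream input
patch = rng.random((9, 9))      # context-stream input

# Spectral stream: 1-D conv + ReLU, global average -> one deep spectral feature.
spec_feat = np.maximum(conv1d_valid(spectrum, rng.standard_normal(3)), 0).mean()

# Context stream: 2-D conv + ReLU, global average -> one deep context feature.
ctx_feat = np.maximum(conv2d_valid(patch, rng.standard_normal((3, 3))), 0).mean()

# Fusion: concatenate the two deep features and score each land cover class
# with a linear classifier (random weights stand in for trained ones).
features = np.array([spec_feat, ctx_feat])
n_classes = 5
logits = rng.standard_normal((n_classes, 2)) @ features
predicted_class = int(np.argmax(logits))
print(features.shape, predicted_class)
```

The point of the two streams is that spectral signatures and spatial context carry complementary evidence; fusing them before classification is what distinguishes the hybrid design from a purely spectral or purely patch-based ConvNet.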


2021, Vol. 87 (6), pp. 405-412
Author(s): Qiutong Yu, Wei Liu, Wesley Nunes Gonçalves, José Marcato Junior, Jonathan Li

Multispectral satellite imagery is the primary data source for monitoring land cover change and characterizing land cover globally. However, the consistency of land cover monitoring is limited by the spatial and temporal resolutions of the acquired satellite images, and publicly available daily high-resolution images remain scarce. This paper aims to fill this gap by proposing a novel spatiotemporal fusion method that enhances daily low-spatial-resolution land cover mapping using a weakly supervised deep convolutional neural network. We merge Sentinel images and Moderate Resolution Imaging Spectroradiometer (MODIS)-derived thematic land cover maps, motivated by the massive volume of available remote sensing data and the large spatial resolution gap between MODIS data and Sentinel images. The network was trained on the public SEN12MS data set, while validation and testing used ground truth data from the 2020 IEEE Geoscience and Remote Sensing Society data fusion contest. The results show that the synthesized land cover map has significantly higher spatial resolution than the corresponding MODIS-derived land cover map. The ensemble approach can be used to generate high-resolution time series of satellite images by fusing fine images from Sentinel-1 and -2 with daily coarse images from MODIS.
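The core fusion idea, coarse daily thematic labels refined onto a fine grid, can be sketched as below. This is a hedged illustration, not the paper's method: the 4x scale factor, the confidence threshold and the random class scores standing in for the weakly supervised CNN output are all assumptions for the sake of a runnable toy example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical grids: a 4x4 coarse MODIS-derived label map and a 16x16 fine
# grid matching Sentinel resolution (scale factor 4 is an assumption).
SCALE = 4
coarse_labels = rng.integers(0, 5, size=(4, 4))      # 5 land cover classes

# Step 1: nearest-neighbour upsampling carries each coarse thematic label
# onto the SCALE x SCALE block of fine pixels it covers.
upsampled = np.repeat(np.repeat(coarse_labels, SCALE, axis=0), SCALE, axis=1)

# Step 2 (stand-in for the weakly supervised CNN): per-pixel class scores
# predicted from the fine Sentinel imagery; random here for illustration.
fine_scores = rng.random((16, 16, 5))
fine_pred = fine_scores.argmax(axis=-1)
fine_conf = fine_scores.max(axis=-1)

# Step 3: fuse -- keep the fine-resolution prediction where it is confident,
# fall back to the upsampled coarse MODIS label elsewhere.
fused = np.where(fine_conf > 0.8, fine_pred, upsampled)
print(fused.shape)
```

The fused map inherits MODIS's daily temporal coverage from the coarse labels while the fine stream restores spatial detail, which is the trade-off the spatiotemporal fusion is designed to resolve.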


2019, Vol. 11 (3), pp. 326
Author(s): Rui Ba, Weiguo Song, Xiaolian Li, Zixi Xie, Siuming Lo

Since wildfires have occurred frequently in recent years, accurate burned area mapping is required for wildfire severity assessment and burned land reconstruction. Satellite remote sensing is an effective technology that can provide valuable information for wildfire assessment. However, common approaches that rely on a single satellite image to promptly detect burned areas have low accuracy and limited applicability. This paper develops a new burned area mapping method that surpasses the detection accuracy of previous methods while still using a single Moderate Resolution Imaging Spectroradiometer (MODIS) sensor image. The key innovation is the integration of optimal spectral indices with a neural network algorithm. We used the traditional empirical formula method, a multi-threshold method and visual interpretation to extract sample sets of five typical types (burned area, vegetation, cloud, bare soil and cloud shadow) from MODIS data of several 2016 wildfires in the American states of Nevada, Washington and California. The separability index M was then adopted to assess the capacity of seven spectral bands and 13 spectral indices to distinguish the burned area from the four unburned land cover types. Based on this separability analysis, the spectral indices with an M value above 1.0 were used to generate training sample sets, which were assessed to have an overall accuracy of 98.68% and a Kappa coefficient of 97.46%. Finally, we trained a back-propagation neural network (BPNN) on the spectral differences among the types in the training sample sets to produce the output burned area map. The proposed method was applied to three 2017 wildfire cases in the American states of Idaho, Nevada and Oregon.
A comparison between the new MODIS-based burned area map and a reference burned area map compiled from Landsat-8 Operational Land Imager (OLI) data indicates that the proposed method effectively exploits the spectral characteristics of various land cover types. Compared with the traditional empirical formula method, the new method also achieves higher accuracy, reducing commission error (CE) by more than 10% and omission error (OE) by more than 6%. The new burned area mapping method could help managers and the public perform more effective wildfire assessments and emergency management.
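The separability screening step above can be illustrated with a short sketch. The abstract does not spell out the formula, so this assumes the common definition of the separability index, M = |mu_1 - mu_2| / (sigma_1 + sigma_2), and uses synthetic Normalized Burn Ratio (NBR) values as a hypothetical example of one of the 13 candidate indices.

```python
import numpy as np

def separability_m(class_a, class_b):
    """Separability index M = |mu_a - mu_b| / (sigma_a + sigma_b).
    Values above 1.0 are conventionally read as good class separation."""
    mu_a, mu_b = np.mean(class_a), np.mean(class_b)
    s_a, s_b = np.std(class_a), np.std(class_b)
    return abs(mu_a - mu_b) / (s_a + s_b)

# Synthetic NBR samples for two of the five typical types (illustrative
# distributions, not the paper's data): burned areas have low NBR,
# healthy vegetation has high NBR.
rng = np.random.default_rng(2)
burned = rng.normal(-0.3, 0.10, 500)
vegetation = rng.normal(0.5, 0.15, 500)

m = separability_m(burned, vegetation)
usable = m > 1.0   # only indices with M > 1.0 were kept for training
print(round(m, 2), usable)
```

Repeating this test for each band and index against all four unburned types is what lets the method keep only the inputs that actually discriminate burned pixels before the BPNN ever sees the training samples.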

