Spatiotemporal Fusion of Remote Sensing Image Based on Deep Learning

2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Xiaofei Wang ◽  
Xiaoyi Wang

High spatial and temporal resolution remote sensing data play an important role in monitoring rapid changes of the Earth's surface. However, there is an irreconcilable contradiction between the spatial and temporal resolutions of remote sensing images acquired by the same sensor. Spatiotemporal fusion of remote sensing data is an effective way to resolve this contradiction. In this paper, we study a spatiotemporal fusion method based on a convolutional neural network, which fuses Landsat data with high spatial but low temporal resolution and MODIS data with low spatial but high temporal resolution, and generates time-series data with high spatial resolution. To improve the accuracy of spatiotemporal fusion, a residual convolutional neural network is proposed: the MODIS image is used as the input to predict the residual image between MODIS and Landsat, and the sum of the predicted residual image and the MODIS data is taken as the predicted Landsat-like image. The residual network not only increases the depth of the super-resolution network but also avoids the vanishing-gradient problem caused by the deep network structure. The experimental results show that the prediction accuracy of our method is greater than that of several mainstream methods.
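The residual formulation described above (predicted Landsat-like image = coarse input + predicted residual) can be sketched in a few lines. In this minimal NumPy sketch, a fixed Laplacian high-pass filter stands in for the trained residual CNN; the filter, function names, and patch size are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def predict_residual(modis_patch):
    """Stand-in for the trained residual CNN: a fixed Laplacian
    high-pass filter approximating the high-frequency detail that
    the network would learn to predict (hypothetical)."""
    kernel = np.array([[0, -1, 0],
                       [-1, 4, -1],
                       [0, -1, 0]], dtype=float)
    padded = np.pad(modis_patch, 1, mode="edge")
    out = np.zeros_like(modis_patch, dtype=float)
    for i in range(modis_patch.shape[0]):
        for j in range(modis_patch.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def fuse(modis_patch):
    # Landsat-like prediction = coarse input + predicted residual
    return modis_patch + predict_residual(modis_patch)

coarse = np.random.rand(32, 32)   # toy stand-in for a MODIS patch
fine = fuse(coarse)
```

Because the network only has to model the residual rather than the full fine-resolution image, the identity mapping comes for free: on a perfectly flat patch the residual is zero and the input passes through unchanged.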

2020 ◽  
Vol 12 (23) ◽  
pp. 3888
Author(s):  
Mingyuan Peng ◽  
Lifu Zhang ◽  
Xuejian Sun ◽  
Yi Cen ◽  
Xiaoyang Zhao

With the rapid development of remote sensors, huge volumes of remote sensing data are being used in related applications, bringing new challenges to the efficiency and capability of processing huge datasets. Spatiotemporal remote sensing data fusion can restore high-spatial, high-temporal resolution remote sensing data from multiple remote sensing datasets. However, current methods require long computing times and are of low efficiency, especially the newly proposed deep learning-based methods. Here, we propose a fast three-dimensional convolutional neural network-based spatiotemporal fusion method (STF3DCNN) using a spatial-temporal-spectral dataset. This method fuses low-spatial, high-temporal resolution data (HTLS) and high-spatial, low-temporal resolution data (HSLT) in a four-dimensional spatial-temporal-spectral dataset with improved efficiency while maintaining accuracy. The method was tested on three datasets, and the network parameters were discussed. In addition, the method was compared with commonly used spatiotemporal fusion methods to verify our conclusions.
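The core operation of a 3D CNN such as STF3DCNN is a convolution that slides jointly over the temporal and both spatial axes. The NumPy sketch below shows that operation on a hypothetical four-dimensional spatial-temporal-spectral array laid out as (bands, times, y, x); the array shape, kernel, and loop-based implementation are illustrative assumptions, not the STF3DCNN architecture.

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid-mode 3-D correlation over a (time, height, width)
    volume -- the core operation of a 3-D CNN layer."""
    kt, kh, kw = kernel.shape
    out_shape = tuple(v - k + 1 for v, k in zip(volume.shape, kernel.shape))
    out = np.zeros(out_shape)
    for i in range(out_shape[0]):
        for j in range(out_shape[1]):
            for k in range(out_shape[2]):
                out[i, j, k] = np.sum(volume[i:i + kt, j:j + kh, k:k + kw] * kernel)
    return out

# hypothetical 4-D spatial-temporal-spectral dataset: (bands, times, y, x)
data = np.random.rand(6, 8, 16, 16)
kernel = np.full((3, 3, 3), 1 / 27.0)   # simple averaging filter
# apply the same 3-D kernel band by band
features = np.stack([conv3d(band, kernel) for band in data])
print(features.shape)
```

A framework implementation (e.g. a `Conv3d` layer) would vectorize these loops and learn the kernel weights; the point here is only the shape arithmetic: each valid 3x3x3 convolution shrinks the time axis from 8 to 6 and each spatial axis from 16 to 14.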


2020 ◽  
Vol 86 (6) ◽  
pp. 383-392
Author(s):  
Liguo Wang ◽  
Xiaoyi Wang ◽  
Qunming Wang

Spatiotemporal fusion is an important technique for solving the incompatibility between the temporal and spatial resolutions of remote sensing data. In this article, we studied the fusion of Landsat data with fine spatial but coarse temporal resolution and Moderate Resolution Imaging Spectroradiometer (MODIS) data with coarse spatial but fine temporal resolution. The goal of fusion is to produce time-series data with the fine spatial resolution of Landsat and the fine temporal resolution of MODIS. In recent years, learning-based spatiotemporal fusion methods, in particular the sparse representation-based spatiotemporal reflectance fusion model (SPSTFM), have gained increasing attention because of their great restoration ability for heterogeneous landscapes. However, remote sensing data from different sensors differ greatly in spatial resolution, which limits the performance of spatiotemporal fusion methods (including SPSTFM) to some extent. To increase the accuracy of spatiotemporal fusion, in this article we used the existing 250-m MODIS bands (i.e., the red and near-infrared bands) to downscale the observed 500-m MODIS bands to 250 m before SPSTFM-based fusion of MODIS and Landsat data. The experimental results show that the fusion accuracy of SPSTFM increases when using 250-m MODIS data, and that the accuracy of SPSTFM coupled with 250-m MODIS data is greater than that of the benchmark methods.
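The preprocessing step above, downscaling a 500-m MODIS band to 250 m with the help of a 250-m band, can be sketched with a simple high-pass detail-injection scheme: upsample the coarse band and add the high-frequency detail of the fine reference band. This is a generic pan-sharpening-style stand-in, not the downscaling scheme of the article; all function names and the 2x factor are illustrative assumptions.

```python
import numpy as np

def upsample2x(img):
    """Nearest-neighbour 2x upsampling (stand-in for a proper
    interpolation scheme)."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def aggregate2x(img):
    """Average 2x2 blocks: simulates observing a 250-m band at 500 m."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def downscale_band(band_500m, ref_250m):
    """Inject the high-frequency detail of the 250-m reference band
    (e.g. red or NIR) into the upsampled 500-m band."""
    detail = ref_250m - upsample2x(aggregate2x(ref_250m))
    return upsample2x(band_500m) + detail

sharpened = downscale_band(np.random.rand(4, 4), np.random.rand(8, 8))
```

By construction the injected detail averages to zero over each 2x2 block, so aggregating the 250-m result back to 500 m reproduces the original observation, which is a common consistency requirement for downscaling schemes.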


2021 ◽  
Vol 973 (7) ◽  
pp. 21-31
Author(s):  
E.A. Rasputina ◽  
A.S. Korepova

Mapping and analysis of the dates of snow cover onset and melt in the Baikal region for 2000–2010 were carried out based on eight-day MODIS "snow cover" composites with a spatial resolution of 500 m, and verified against data from 17 meteorological stations. For each year of the decade under study and for each meteorological station, the difference between the dates determined from the MODIS data and those recorded by the weather stations was calculated. The absolute deviations vary from 0 to 36 days for the onset dates and from 0 to 47 days for the dates of stable snow cover melt; the mean absolute deviation over all meteorological stations and years is 9–10 days. It is assumed that 83 % of the cases for the onset dates can be considered admissible (with deviations up to 16 days), and 79 % for the melt dates. Possible causes of the deviations are analyzed. The largest deviations correspond to coastal meteorological stations and are associated with the inhomogeneity of snow-cover characteristics within pixels containing both water and land. The dates of onset and melt of a stable snow cover derived from the images turned out to be later than those of the weather stations by about 10 days. Snow is first established (from late August to mid-September) on the summits of the Barguzinsky, Baikalsky, and Khamar-Daban ranges; later (in late November–December) a stable cover appears in the Barguzin valley, the Selenga lowland, and Priolkhonye. The predominant part of the Baikal region is covered with snow in October and becomes snow-free from late April to mid-May.
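The validation statistics described above reduce to simple arithmetic over the per-station, per-year date differences: take absolute values, average them, and count the share within the 16-day admissibility threshold. The sketch below uses a small hypothetical sample of deviations (in days), not the article's data.

```python
import numpy as np

# hypothetical onset-date deviations (days) between MODIS-derived and
# station-observed dates, one value per station-year (illustrative only)
deviations = np.array([3, -12, 7, 0, 25, -16, 9, 14, -5, 36])

abs_dev = np.abs(deviations)                 # modulus of each deviation
mean_abs = abs_dev.mean()                    # mean absolute deviation, days
admissible = np.mean(abs_dev <= 16) * 100    # % of cases within 16 days

print(f"mean |deviation| = {mean_abs:.1f} days, "
      f"admissible = {admissible:.0f} %")
```

With the article's full sample these two quantities correspond to the reported 9–10 day mean deviation and the 83 % / 79 % admissible shares.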

