Improved SRGAN for Remote Sensing Image Super-Resolution Across Locations and Sensors

2020 ◽  
Vol 12 (8) ◽  
pp. 1263 ◽  
Author(s):  
Yingfei Xiong ◽  
Shanxin Guo ◽  
Jinsong Chen ◽  
Xinping Deng ◽  
Luyi Sun ◽  
...  

Detailed and accurate information on the spatial variation of land cover and land use is a critical component of local ecological and environmental research. These tasks require high-spatial-resolution images. Given the trade-off between high spatial and high temporal resolution in remote sensing images, many learning-based models (e.g., convolutional neural networks, sparse coding, Bayesian networks) have been established to improve the spatial resolution of coarse images in both the computer vision and remote sensing fields. However, the training and testing data in these learning-based methods are usually limited to a certain location and a specific sensor, so the resulting models generalize poorly across locations and sensors. Recently, generative adversarial networks (GANs), a new learning model from the deep learning field, have shown many advantages for capturing high-dimensional nonlinear features over large samples. In this study, we test whether the GAN method, with some modification, can improve generalization across locations and sensors, realizing the idea of "training once, applying everywhere and to different sensors" for remote sensing images. This work builds on the super-resolution generative adversarial network (SRGAN): we modify the loss function and network structure of the SRGAN and propose the improved SRGAN (ISRGAN), which makes model training more stable and enhances generalization across locations and sensors. In the experiment, the training and testing data were collected from two sensors (Landsat 8 OLI and Chinese GF 1) at different locations (Guangdong and Xinjiang in China). For the cross-location test, the model was trained on Chinese GF 1 (8 m) data from Guangdong and tested on GF 1 data from Xinjiang. For the cross-sensor test, the same model trained in Guangdong with GF 1 was tested on Landsat 8 OLI images from Xinjiang.
The proposed method was compared with the neighbor-embedding (NE) method, the sparse representation method (SCSR), and the SRGAN. The peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) were chosen for the quantitative assessment. In the cross-location test, the ISRGAN (PSNR: 35.816, SSIM: 0.988) was superior to the NE (PSNR: 30.999, SSIM: 0.944) and SCSR (PSNR: 29.423, SSIM: 0.876) methods and to the SRGAN (PSNR: 31.378, SSIM: 0.952). A similar result was seen in the cross-sensor test, where the ISRGAN again performed best (PSNR: 38.092, SSIM: 0.988) compared to the NE (PSNR: 35.000, SSIM: 0.982) and SCSR (PSNR: 33.639, SSIM: 0.965) methods and the SRGAN (PSNR: 32.820, SSIM: 0.949). We also tested the improvement in land cover classification accuracy before and after super-resolution by the ISRGAN. The results showed that classification accuracy after super-resolution improved significantly; in particular, accuracy for the impervious surface class (roads and buildings with high-resolution texture) improved by 15%.
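The PSNR figures reported above follow the standard definition, and SSIM follows the usual luminance/contrast/structure comparison. A minimal NumPy sketch (note: this uses a simplified *global* SSIM over whole images; the SSIM typically reported in papers is averaged over local sliding windows):

```python
import numpy as np

def psnr(reference, estimate, max_val=255.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """Simplified global SSIM: single means/variances over the whole image."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give SSIM = 1 and infinite PSNR; a constant offset of 16 grey levels on 8-bit imagery, for example, yields a PSNR of about 24 dB.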

2020 ◽  
Vol 12 (5) ◽  
pp. 758 ◽  
Author(s):  
Mengjiao Qin ◽  
Sébastien Mavromatis ◽  
Linshu Hu ◽  
Feng Zhang ◽  
Renyi Liu ◽  
...  

Super-resolution (SR) can improve the spatial resolution of remote sensing images, which is critical for many practical applications such as fine-grained urban monitoring. In this paper, a new single-image SR method, the deep gradient-aware network with image-specific enhancement (DGANet-ISE), was proposed to improve the spatial resolution of remote sensing images. First, DGANet was proposed to model the complex relationship between low- and high-resolution images, with a new gradient-aware loss designed for the training phase to preserve more gradient details in super-resolved remote sensing images. Then, the ISE approach was applied in the testing phase to further improve SR performance: by using the specific features of each test image, ISE further boosts the generalization capability and adaptability of the method on unseen datasets. Finally, three datasets were used to verify the effectiveness of the method. The results indicate that DGANet-ISE outperforms 14 other methods in remote sensing image SR, and the cross-database test results demonstrate that the method exhibits satisfactory generalization in adapting to new data.
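The abstract does not give the exact form of the gradient-aware loss, but the common pattern is to penalize differences between finite-difference image gradients alongside a pixel-wise term. A hedged NumPy sketch of that idea (the weighting `alpha` and the plain L1 terms are illustrative assumptions, not the paper's definition):

```python
import numpy as np

def gradient_aware_loss(sr, hr, alpha=1.0):
    """Pixel L1 loss plus an L1 penalty on horizontal/vertical gradient maps.

    sr, hr: 2-D float arrays (super-resolved and ground-truth images).
    alpha:  weight of the gradient term (illustrative choice).
    """
    def grads(img):
        gx = img[:, 1:] - img[:, :-1]   # horizontal finite differences
        gy = img[1:, :] - img[:-1, :]   # vertical finite differences
        return gx, gy

    pixel_l1 = np.mean(np.abs(sr - hr))
    sr_gx, sr_gy = grads(sr)
    hr_gx, hr_gy = grads(hr)
    grad_l1 = np.mean(np.abs(sr_gx - hr_gx)) + np.mean(np.abs(sr_gy - hr_gy))
    return pixel_l1 + alpha * grad_l1
```

Because the gradient term compares edge maps rather than raw intensities, blurry outputs that match pixel averages but smear edges are penalized more heavily, which is the stated motivation for preserving gradient detail.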


2019 ◽  
Vol 11 (24) ◽  
pp. 3000 ◽  
Author(s):  
Francisco Alonso-Sarria ◽  
Carmen Valdivieso-Ros ◽  
Francisco Gomariz-Castillo

Supervised land cover classification from remote sensing imagery is based on gathering a set of training areas to characterise each of the classes and training a predictive model that is then used to predict land cover in the rest of the image. This procedure relies mainly on two assumptions: statistical separability of the classes and representativeness of the training areas. This paper uses isolation forests, a type of random-tree ensemble, to analyse both assumptions and to correct a lack of representativeness by digitising new training areas where needed, improving the Random Forest classification of a set of Landsat-8 images. The results show that the training-area set improved after the isolation forest analysis is more representative of the whole image and increases classification accuracy. In addition, the distribution of isolation values can be used to estimate class separability; a class separability parameter that summarises these distributions is proposed. This parameter correlates more strongly with omission and commission errors than other separability measures such as the Jeffries–Matusita distance.
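The key property the paper exploits is that isolation trees separate anomalous samples with fewer random splits than typical samples, so short average path lengths flag training pixels that are unrepresentative of their class. A toy single-mechanism sketch (real implementations, such as scikit-learn's `IsolationForest`, build many trees on random subsamples and normalise the scores; the parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def path_length(x, X, depth=0, max_depth=8):
    """Random splits until the sample x is isolated; return the split count."""
    if depth >= max_depth or len(X) <= 1:
        return depth
    f = rng.integers(X.shape[1])            # random feature
    lo, hi = X[:, f].min(), X[:, f].max()
    if lo == hi:
        return depth
    s = rng.uniform(lo, hi)                 # random split value
    side = X[:, f] < s
    subset = X[side] if x[f] < s else X[~side]
    return path_length(x, subset, depth + 1, max_depth)

def isolation_score(x, X, n_trees=50):
    """Mean path length over many random trees; low values mean 'anomalous'."""
    return float(np.mean([path_length(x, X) for _ in range(n_trees)]))
```

An outlying training pixel is cut off after one or two splits, while a pixel inside the main cluster keeps most of the data on its side of each split and accumulates a long path; comparing these scores per class is the essence of the representativeness check.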


2020 ◽  
Vol 17 (6) ◽  
pp. 1057-1061 ◽  
Author(s):  
Qianbo Sang ◽  
Yin Zhuang ◽  
Shan Dong ◽  
Guanqun Wang ◽  
He Chen

Author(s):  
Cui Guo Qing ◽  
Lv Zhi Yong ◽  
Li GuangFei ◽  
Jón Atli Benediktsson ◽  
Lu Yu Dong

Land-cover classification using very-high-resolution (VHR) remote sensing images is a topic of considerable interest. Although many classification methods have been developed, there is still room for improvement in the accuracy and usability of classification systems. In this paper, a novel post-processing approach based on a dual-adaptive majority voting strategy (D-AMVS) is proposed to improve the performance of initial classification maps. D-AMVS refines the labels of classification maps obtained by different classification methods from the same original image and fuses the refined maps into a final classification result. The proposed D-AMVS contains three main blocks. 1) An adaptive region is generated by gradually extending the region around a central pixel based on two predefined parameters (T1 and T2), in order to exploit the spatial features of ground targets in a VHR image. 2) For each classified map, the label of the central pixel is refined according to the majority voting rule within the adaptive region; this is defined as adaptive majority voting (AMV), and each initial classified map is refined in this manner pixel by pixel. 3) Finally, the refined classified maps are used to generate a final classification map, whose labels are determined by applying AMV again. The entire classified map is scanned and refined pixel by pixel with the proposed D-AMVS. The accuracy of the proposed D-AMVS approach is investigated on two remote sensing images with high spatial resolutions of 1.0 and 1.3 m, respectively. Compared with the classical majority voting method and a relatively new post-processing method called the general post-classification framework, the proposed D-AMVS achieves land-cover classification maps with less noise and higher classification accuracy.
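The core AMV step in block 2) can be sketched as a majority vote inside a window that grows until a clear majority emerges. This is a simplified stand-in: `t1`/`t2` are illustrative analogues of the paper's T1 and T2 (initial and maximum half-widths of the adaptive region), and the "clear majority" stopping rule here is an assumption, not the paper's exact growth criterion:

```python
import numpy as np
from collections import Counter

def adaptive_majority_vote(label_map, row, col, t1=1, t2=3):
    """Refine one pixel's label by majority voting in a growing window.

    t1: initial window half-width; t2: maximum half-width (stand-ins
    for the paper's predefined parameters T1 and T2).
    """
    h, w = label_map.shape
    best = label_map[row, col]
    for half in range(t1, t2 + 1):
        r0, r1 = max(0, row - half), min(h, row + half + 1)
        c0, c1 = max(0, col - half), min(w, col + half + 1)
        window = label_map[r0:r1, c0:c1].ravel()
        best, count = Counter(window.tolist()).most_common(1)[0]
        if count > window.size / 2:   # clear majority reached: stop growing
            break
    return best
```

Scanning every pixel of each classified map with this vote, and then voting once more across the refined maps, mirrors the two "adaptive" stages that give D-AMVS its name; the windowed vote is what removes isolated salt-and-pepper labels.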

