Deriving Non-Cloud Contaminated Sentinel-2 Images with RGB and Near-Infrared Bands from Sentinel-1 Images Based on a Conditional Generative Adversarial Network

2021 ◽  
Vol 13 (8) ◽  
pp. 1512
Author(s):  
Quan Xiong ◽  
Liping Di ◽  
Quanlong Feng ◽  
Diyou Liu ◽  
Wei Liu ◽  
...  

Sentinel-2 images have been widely used in studying land surface phenomena and processes, but they inevitably suffer from cloud contamination. To solve this critical optical data availability issue, it is ideal to fuse Sentinel-1 and Sentinel-2 images to create fused, cloud-free Sentinel-2-like images for facilitating land surface applications. In this paper, we propose a new data fusion model, the Multi-channels Conditional Generative Adversarial Network (MCcGAN), based on the conditional generative adversarial network, which is able to convert images from Domain A to Domain B. With the model, we were able to generate fused, cloud-free Sentinel-2-like images for a target date by using a pair of reference Sentinel-1/Sentinel-2 images and target-date Sentinel-1 images as inputs. In order to demonstrate the superiority of our method, we also compared it with other state-of-the-art methods using the same data. To make the evaluation more objective and reliable, we calculated the root-mean-square error (RMSE), R2, Kling–Gupta efficiency (KGE), structural similarity index (SSIM), spectral angle mapper (SAM), and peak signal-to-noise ratio (PSNR) of the simulated Sentinel-2 images generated by different methods. The results show that the simulated Sentinel-2 images generated by the MCcGAN have higher quality and accuracy than those produced by the previous methods.
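Several of the listed metrics are straightforward to reproduce. Below is a minimal numpy sketch of three of them (RMSE, PSNR, and SAM), assuming reflectances scaled to [0, 1] and band-first arrays; this is an illustrative implementation, not the authors' evaluation code:

```python
import numpy as np

def rmse(sim, ref):
    """Root-mean-square error between simulated and reference images."""
    return np.sqrt(np.mean((sim - ref) ** 2))

def psnr(sim, ref, max_val=1.0):
    """Peak signal-to-noise ratio in dB."""
    return 20.0 * np.log10(max_val / rmse(sim, ref))

def sam(sim, ref):
    """Mean spectral angle in radians; arrays have shape (bands, height, width)."""
    dot = np.sum(sim * ref, axis=0)
    norms = np.linalg.norm(sim, axis=0) * np.linalg.norm(ref, axis=0)
    return np.mean(np.arccos(np.clip(dot / (norms + 1e-12), -1.0, 1.0)))
```

A uniform offset of 0.1 between simulated and reference reflectances, for instance, yields an RMSE of 0.1 and a PSNR of 20 dB, while leaving the spectral angle near zero because the per-pixel spectra keep the same direction.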

2017 ◽  
Author(s):  
Andreas Kääb ◽  
Bas Altena ◽  
Joseph Mascaro

Abstract. Satellite measurements of coseismic displacements are typically based on Synthetic Aperture Radar (SAR) interferometry or amplitude tracking, or based on optical data such as from Landsat, Sentinel-2, SPOT, ASTER, very-high resolution satellites, or air photos. Here, we evaluate a new class of optical satellite images for this purpose – data from cubesats. More specifically, we investigate the PlanetScope cubesat constellation for horizontal surface displacements by the 14 November 2016 Mw 7.8 Kaikoura, New Zealand, earthquake. Single PlanetScope scenes are 2–4 m resolution visible and near-infrared frame images of approximately 20–30 km × 9–15 km in size, acquired in continuous sequence along an orbit of approximately 375–475 km height. From single scenes or mosaics from before and after the earthquake we observe surface displacements of up to almost 10 m and estimate a matching accuracy from PlanetScope data of up to ±0.2 pixels (~ ±0.6 m). This accuracy, the daily revisit anticipated for the PlanetScope constellation for the entire land surface of Earth, and a number of other features, together offer new possibilities for investigating coseismic and other Earth surface displacements and managing related hazards and disasters, and complement existing SAR and optical methods. For comparison and for a better regional overview we also match the coseismic displacements by the 2016 Kaikoura earthquake using Landsat 8 and Sentinel-2 data.


2017 ◽  
Vol 17 (5) ◽  
pp. 627-639 ◽  
Author(s):  
Andreas Kääb ◽  
Bas Altena ◽  
Joseph Mascaro

Abstract. Satellite measurements of coseismic displacements are typically based on synthetic aperture radar (SAR) interferometry or amplitude tracking, or based on optical data such as from Landsat, Sentinel-2, SPOT, ASTER, very high-resolution satellites, or air photos. Here, we evaluate a new class of optical satellite images for this purpose – data from cubesats. More specifically, we investigate the PlanetScope cubesat constellation for horizontal surface displacements by the 14 November 2016 Mw 7.8 Kaikoura, New Zealand, earthquake. Single PlanetScope scenes are 2–4 m resolution visible and near-infrared frame images of approximately 20–30 km × 9–15 km in size, acquired in continuous sequence along an orbit of approximately 375–475 km height. From single scenes or mosaics from before and after the earthquake, we observe surface displacements of up to almost 10 m and estimate matching accuracies from PlanetScope data between ±0.25 and ±0.7 pixels (∼ ±0.75 to ±2.0 m), depending on time interval and image product type. Thereby, the most optimistic accuracy estimate of ±0.25 pixels might actually be typical for the final, sun-synchronous, and near-polar-orbit PlanetScope constellation when unrectified data are used for matching. This accuracy, the daily revisit anticipated for the PlanetScope constellation for the entire land surface of Earth, and a number of other features, together offer new possibilities for investigating coseismic and other Earth surface displacements and managing related hazards and disasters, and complement existing SAR and optical methods. For comparison and for a better regional overview we also match the coseismic displacements by the 2016 Kaikoura earthquake using Landsat 8 and Sentinel-2 data.
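The offset tracking underlying such displacement maps can be illustrated with a plain-numpy phase-correlation sketch. This is not the matching software used in the study, and it recovers only integer-pixel shifts, whereas the paper reports sub-pixel accuracy:

```python
import numpy as np

def estimate_shift(ref, tgt):
    """Integer-pixel phase correlation between two equally sized images.

    Returns the (row, col) shift that, applied to tgt with np.roll,
    re-aligns it with ref."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(tgt))
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real          # delta-like peak at the shift
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    for i, n in enumerate(corr.shape):       # wrap to signed shifts
        if peak[i] > n // 2:
            peak[i] -= n
    return peak

# Synthetic check: a scene circularly shifted by (3, 5) pixels
ref = np.random.default_rng(0).random((64, 64))
tgt = np.roll(ref, (3, 5), axis=(0, 1))
dy, dx = estimate_shift(ref, tgt)            # (-3.0, -5.0)
```

Real pipelines work on small matching windows across the image pair and refine the correlation peak to sub-pixel precision; the sketch keeps only the core frequency-domain step.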


Author(s):  
F. Pineda ◽  
V. Ayma ◽  
C. Beltran

Abstract. High-resolution satellite images have always been in high demand due to the greater detail and precision they offer, as well as the wide scope of the fields in which they can be applied. Although the number of operational satellites offering very high-resolution (VHR) images has increased considerably, they still represent a small proportion compared with existing high-resolution (HR) satellites. Recent convolutional neural network (CNN) models are well suited to image-processing applications such as resolution enhancement; however, to obtain an acceptable result it is important to define not only the CNN architecture but also the reference set of images used to train the model. Our work proposes an alternative for improving the spatial resolution of HR images obtained by the Sentinel-2 satellite by using VHR images from PeruSat-1, a Peruvian satellite, which serve as the reference for a super-resolution approach based on a Generative Adversarial Network (GAN) model, as an alternative for obtaining VHR images. The VHR PeruSat-1 image dataset is used for the training process of the network. The results obtained were analyzed considering the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM). Finally, some visual outcomes over a given testing dataset are presented so that the performance of the model can be analyzed as well.


2019 ◽  
Vol 11 (19) ◽  
pp. 2304 ◽  
Author(s):  
Hanna Huryna ◽  
Yafit Cohen ◽  
Arnon Karnieli ◽  
Natalya Panov ◽  
William P. Kustas ◽  
...  

A spatially distributed land surface temperature is important for many studies. The recent launch of the Sentinel satellite programs paves the way for an abundance of opportunities for both large-area and long-term investigations. However, the spatial resolution of Sentinel-3 thermal images is not suitable for monitoring small fragmented fields. Thermal sharpening is one of the primary methods used to obtain thermal images at finer spatial resolution at a daily revisit time. In the current study, the utility of the TsHARP method to sharpen the low-resolution Sentinel-3 thermal data was examined using Sentinel-2 visible-near-infrared imagery. Compared to Landsat 8 fine thermal images, the sharpening resulted in mean absolute errors of ~1 °C, with errors increasing as the difference between the native and the target resolutions increases. Part of the error is attributed to the discrepancy between the thermal images acquired by the two platforms. Further research is needed to test additional sites and conditions, and potentially additional sharpening methods, applied to the Sentinel platforms.
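As a rough illustration of this kind of thermal sharpening: TsHARP itself regresses temperature against a fractional-vegetation-cover transform of NDVI, but a plain linear NDVI regression with a coarse-scale residual correction conveys the idea. The following is a simplified sketch, not the study's implementation:

```python
import numpy as np

def tsharp_linear(ndvi_fine, t_coarse, factor):
    """Sharpen a coarse temperature field using fine-resolution NDVI.

    ndvi_fine: (H, W) NDVI at the fine resolution
    t_coarse:  (H//factor, W//factor) coarse land surface temperature
    """
    H, W = ndvi_fine.shape
    # Aggregate NDVI to the coarse grid by block averaging
    ndvi_coarse = ndvi_fine.reshape(H // factor, factor,
                                    W // factor, factor).mean(axis=(1, 3))
    # Fit T ~ NDVI at the coarse scale (slope b, intercept a)
    b, a = np.polyfit(ndvi_coarse.ravel(), t_coarse.ravel(), 1)
    # Apply the fit at the fine scale, then add back the coarse-scale
    # residual so that re-aggregating the result reproduces t_coarse
    residual = t_coarse - (a + b * ndvi_coarse)
    return a + b * ndvi_fine + np.kron(residual, np.ones((factor, factor)))
```

The residual step is what makes the scheme conservative: block-averaging the sharpened field returns exactly the coarse input, regardless of how well the regression fits.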


2020 ◽  
Vol 10 (1) ◽  
pp. 375 ◽  
Author(s):  
Zetao Jiang ◽  
Yongsong Huang ◽  
Lirui Hu

The super-resolution generative adversarial network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied by unpleasant artifacts. To further enhance the visual quality, we propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The method is based on the depthwise separable convolution super-resolution generative adversarial network (DSCSRGAN). A new depthwise separable convolution dense block (DSC Dense Block) was designed for the generator network, which improved the ability to represent and extract image features while greatly reducing the total number of parameters. For the discriminator network, the batch normalization (BN) layer was discarded, and the problem of artifacts was reduced. A frequency energy similarity loss function was designed to constrain the generator network to generate better super-resolution images. Experiments on several different datasets showed that the peak signal-to-noise ratio (PSNR) was improved by more than 3 dB, the structural similarity index (SSIM) was increased by 16%, and the total number of parameters was reduced to 42.8% of the original model. Combining various objective indicators and subjective visual evaluation, the algorithm was shown to generate richer image details, clearer texture, and lower complexity.
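The parameter saving from depthwise separable convolution is easy to quantify with the standard textbook formulas; this per-layer arithmetic is illustrative only (the paper's 42.8% figure refers to the whole network):

```python
def standard_conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k filter per input channel plus a 1x1 pointwise mix."""
    return c_in * k * k + c_in * c_out

# One 3x3 layer with 64 input and 64 output channels:
print(standard_conv_params(64, 64, 3))        # 36864
print(depthwise_separable_params(64, 64, 3))  # 4672, roughly 12.7% of standard
```

The split into spatial filtering (depthwise) and channel mixing (pointwise) is what removes the c_in × c_out × k × k coupling of the standard layer.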


2021 ◽  
Author(s):  
Ziyu Li ◽  
Qiyuan Tian ◽  
Chanon Ngamsombat ◽  
Samuel Cartmell ◽  
John Conklin ◽  
...  

Purpose: To improve the signal-to-noise ratio (SNR) of highly accelerated volumetric MRI while preserving realistic textures using a generative adversarial network (GAN). Methods: A hybrid GAN for denoising entitled "HDnGAN", with a 3D generator and a 2D discriminator, was proposed to denoise 3D T2-weighted fluid-attenuated inversion recovery (FLAIR) images acquired in 2.75 minutes (R = 3×2) using wave-controlled aliasing in parallel imaging (Wave-CAIPI). HDnGAN was trained on data from 25 multiple sclerosis patients by minimizing a combined mean squared error and adversarial loss with adjustable weight λ. Results were evaluated on eight separate patients by comparison with standard T2-SPACE FLAIR images acquired in 7.25 minutes (R = 2×2) using mean absolute error (MAE), peak SNR (PSNR), structural similarity index (SSIM), and VGG perceptual loss, and by two neuroradiologists using a five-point score regarding gray-white matter contrast, sharpness, SNR, lesion conspicuity, and overall quality. Results: HDnGAN (λ = 0) produced the lowest MAE and the highest PSNR and SSIM. HDnGAN (λ = 10⁻³) produced the lowest VGG loss. In the reader study, HDnGAN (λ = 10⁻³) significantly improved the gray-white matter contrast and SNR of Wave-CAIPI images, and outperformed BM4D and HDnGAN (λ = 0) regarding image sharpness. The overall quality score from HDnGAN (λ = 10⁻³) was significantly higher than those from Wave-CAIPI, BM4D, and HDnGAN (λ = 0), with no significant difference compared to standard images. Conclusion: HDnGAN concurrently benefits from the improved image synthesis performance of 3D convolution and the increased number of samples for training the 2D discriminator on limited data. HDnGAN generates images with high SNR and realistic textures, similar to those acquired in longer times and preferred by neuroradiologists.
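The adjustable weight λ can be pictured as a simple combined objective. The exact formulation in the paper may differ, so the following is an illustrative sketch only: with λ = 0 the loss reduces to pure MSE (the setting that maximized PSNR/SSIM), while a small λ rewards outputs the discriminator accepts as real:

```python
import numpy as np

def combined_loss(denoised, target, disc_scores, lam):
    """Content (MSE) loss plus a lambda-weighted adversarial term.

    disc_scores: discriminator probabilities in (0, 1) that the denoised
    slices are real; lam trades pixel fidelity against texture realism."""
    mse = np.mean((denoised - target) ** 2)
    adv = -np.mean(np.log(disc_scores + 1e-12))  # non-saturating generator loss
    return mse + lam * adv
```

In the study, λ = 10⁻³ was the setting that gave the lowest VGG perceptual loss and the reader scores closest to the standard acquisition.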


2018 ◽  
Vol 7 (10) ◽  
pp. 389 ◽  
Author(s):  
Wei He ◽  
Naoto Yokoya

In this paper, we present optical image simulation from synthetic aperture radar (SAR) data using deep-learning-based methods. Two models, i.e., optical image simulation directly from SAR data and from multi-temporal SAR-optical data, are proposed to test the possibilities. The deep-learning-based methods that we chose to realize the models are a convolutional neural network (CNN) with a residual architecture and a conditional generative adversarial network (cGAN). We validate our models using the Sentinel-1 and -2 datasets. The experiments demonstrate that the model with multi-temporal SAR-optical data can successfully simulate the optical image, whereas the state-of-the-art model with SAR data alone as input fails. The optical image simulation results indicate the potential of SAR-optical information blending for subsequent applications such as large-scale cloud removal and optical data temporal super-resolution. We also investigate the sensitivity of the proposed models to the training samples, and reveal possible future directions.
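The multi-temporal input to such a conditional model can be pictured as a channel-wise stack of reference-date SAR, reference-date optical, and target-date SAR data. The band counts, ordering, and patch size below are assumptions for illustration, not taken from the paper:

```python
import numpy as np

# Channel-first patches; zeros stand in for real imagery
sar_ref = np.zeros((2, 256, 256))  # VV, VH backscatter at the reference date
opt_ref = np.zeros((4, 256, 256))  # cloud-free R, G, B, NIR at the reference date
sar_tgt = np.zeros((2, 256, 256))  # VV, VH backscatter at the target date

# The conditional generator sees all three stacked along the channel axis
cond_input = np.concatenate([sar_ref, opt_ref, sar_tgt], axis=0)
print(cond_input.shape)  # (8, 256, 256)
```

The SAR-only model corresponds to feeding just sar_tgt; the abstract's finding is that the extra reference-date channels are what make the simulation succeed.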


2020 ◽  
Vol 12 (18) ◽  
pp. 3062 ◽  
Author(s):  
Michel E. D. Chaves ◽  
Michelle C. A. Picoli ◽  
Ieda D. Sanches

Recent applications of Landsat 8 Operational Land Imager (L8/OLI) and Sentinel-2 MultiSpectral Instrument (S2/MSI) data for acquiring information about land use and land cover (LULC) provide a new perspective in remote sensing data analysis. Jointly, these sources permit researchers to improve operational classification and change detection, supporting better reasoning about landscapes and intrinsic processes such as deforestation and agricultural expansion. However, the results of their applications have not yet been synthesized in order to provide coherent guidance on their effect in different classification processes, or to identify promising approaches and issues that affect classification performance. In this systematic review, we present trends, potentialities, challenges, current gaps, and future possibilities for the use of L8/OLI and S2/MSI for LULC mapping and change detection. In particular, we highlight the possibility of using medium-resolution (Landsat-like, 10–30 m) time series and multispectral optical data provided by the harmonization between these sensors, together with data cube architectures for analysis-ready data, supported by open-data policies and open-science principles. We also reinforce the potential of exploring more spectral band combinations, especially using the three red-edge and the two near-infrared and shortwave-infrared bands of S2/MSI, to calculate vegetation indices more sensitive to phenological variations that were applied less frequently in the past but have become practical since the S2/MSI mission. Summarizing peer-reviewed papers can guide the scientific community in the use of L8/OLI and S2/MSI data, enabling detailed knowledge of LULC mapping and change detection in different landscapes, especially in agricultural and natural vegetation scenarios.
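As a concrete example of the red-edge indices referred to above, a red-edge analogue of NDVI (often called NDRE) can be computed from the S2/MSI red (B4), red-edge (B5), and NIR (B8) bands. The reflectance values below are invented for illustration:

```python
def normalized_difference(band_a, band_b):
    """Generic normalized-difference index, with a small epsilon for safety."""
    return (band_a - band_b) / (band_a + band_b + 1e-12)

# Hypothetical surface reflectances for a healthy vegetation pixel
red, red_edge, nir = 0.04, 0.15, 0.45  # S2/MSI B4, B5, B8

ndvi = normalized_difference(nir, red)       # classic NDVI, ~0.84 here
ndre = normalized_difference(nir, red_edge)  # red-edge variant, 0.5 here
```

Because red-edge reflectance sits between the red absorption and the NIR plateau, NDRE saturates later than NDVI over dense canopies, which is one reason these indices track phenological variation more sensitively.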


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1969
Author(s):  
Hongrui Liu ◽  
Shuoshi Li ◽  
Hongquan Wang ◽  
Xinshan Zhu

The existing face image completion approaches cannot rationally complete damaged face images whose identity information is completely lost due to being obscured by center masks. Hence, in this paper, a reference-guided double-pipeline face image completion network (RG-DP-FICN) is designed within the framework of the generative adversarial network (GAN), which completes the identity information of damaged images by utilizing reference images with the same identity as the damaged images. To reasonably integrate the identity information of reference images into completed images, the reference image is decoupled into identity features (e.g., the contours of the eyes, eyebrows, and nose) and pose features (e.g., the orientation of the face and the positions of the facial features), and the resulting identity features are then fused with the pose features of damaged images. Specifically, a lightweight identity predictor is used to extract the pose features; an identity extraction module is designed to compress and globally extract the identity features of the reference images; and an identity transfer module is proposed to effectively fuse identity and pose features by performing identity rendering on different receptive fields. Furthermore, quantitative and qualitative evaluations are conducted on the public dataset CelebA-HQ. Compared to state-of-the-art methods, the evaluation metrics peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and L1 loss are improved by 2.22 dB, 0.033, and 0.79%, respectively. The results indicate that RG-DP-FICN can generate completed images with reasonable identity, with a superior completion effect compared to existing completion approaches.

