Landsat Super-Resolution Enhancement Using Convolution Neural Networks and Sentinel-2 for Training

2018 ◽  
Vol 10 (3) ◽  
pp. 394 ◽  
Author(s):  
Darren Pouliot ◽  
Rasim Latifovic ◽  
Jon Pasher ◽  
Jason Duffe
2019 ◽  
Vol 11 (22) ◽  
pp. 2635 ◽  
Author(s):  
Massimiliano Gargiulo ◽  
Antonio Mazza ◽  
Raffaele Gaetano ◽  
Giuseppe Ruello ◽  
Giuseppe Scarpa

Images provided by the ESA Sentinel-2 mission are rapidly becoming the main source of information for the entire remote sensing community, thanks to their unprecedented combination of spatial, spectral and temporal resolution, as well as their associated open access policy. Due to a sensor design trade-off, images are acquired (and delivered) at different spatial resolutions (10, 20 and 60 m) according to specific sets of wavelengths, with only the four visible and near-infrared bands provided at the highest resolution (10 m). Although this is not a limiting factor in general, many applications are emerging in which the resolution enhancement of the 20 m bands may be beneficial, motivating the development of specific super-resolution methods. In this work, we propose to leverage Convolutional Neural Networks (CNNs) to provide a fast, scalable method for the single-sensor fusion of Sentinel-2 (S2) data, whose aim is to provide a 10 m super-resolution of the original 20 m bands. Experimental results demonstrate that the proposed solution outperforms most state-of-the-art methods, including other deep-learning-based ones, at a considerably lower computational cost.
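As a concrete illustration of the single-sensor fusion setup described above, the sketch below (not from the paper; function names are illustrative) prepares the usual CNN input: the six 20 m bands are brought onto the 10 m grid and stacked with the four 10 m bands. Nearest-neighbour upsampling stands in for whatever interpolation the network would refine.

```python
import numpy as np

def upsample2x(band):
    """Nearest-neighbour 2x upsampling of a single 20 m band (sketch;
    the fusion CNN itself learns a far better interpolation)."""
    return np.kron(band, np.ones((2, 2), dtype=band.dtype))

def make_cnn_input(bands_10m, bands_20m):
    """Stack the four 10 m bands with the six 20 m bands (upsampled to
    the 10 m grid) into a single (10, H, W) array, a typical input
    layout for a single-sensor S2 fusion network."""
    h, w = bands_10m[0].shape
    up = [upsample2x(b) for b in bands_20m]
    assert all(b.shape == (h, w) for b in up)
    return np.stack(list(bands_10m) + up)
```

The network then only has to predict the high-frequency residual for the upsampled channels, which is what keeps such models fast.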


2017 ◽  
Vol 6 (4) ◽  
pp. 15
Author(s):  
Janardhan Chidadala ◽  
Ramanaiah K.V. ◽  
Babulu K ◽  
...  

Author(s):  
F. Pineda ◽  
V. Ayma ◽  
C. Beltran

Abstract. High-resolution satellite images have always been in high demand due to the greater detail and precision they offer, as well as the wide scope of the fields in which they can be applied; however, although the number of operational satellites offering very high-resolution (VHR) images has increased considerably, they remain a small proportion compared with existing high-resolution (HR) satellites. Recent convolutional neural network (CNN) models are well suited to image-processing applications such as resolution enhancement; but in order to obtain an acceptable result, it is important to define not only the CNN architecture but also the reference set of images used to train the model. Our work proposes an alternative to improve the spatial resolution of HR images obtained by the Sentinel-2 satellite by using VHR images from PeruSat-1, a Peruvian satellite, which serve as the reference for a super-resolution approach based on a Generative Adversarial Network (GAN) model, as an alternative for obtaining VHR images. The VHR PeruSat-1 image dataset is used for the training process of the network. The results obtained were analyzed considering the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity (SSIM) index. Finally, some visual outcomes over a given testing dataset are presented so that the performance of the model can also be assessed qualitatively.
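The two quality metrics mentioned above are standard and easy to compute. The sketch below uses a simplified single-window (global) SSIM rather than the windowed average most toolboxes report, but the formula is the same.

```python
import math
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher is better."""
    mse = float(np.mean((ref.astype(float) - test.astype(float)) ** 2))
    if mse == 0.0:
        return math.inf  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    """Simplified global SSIM: the standard SSIM formula evaluated
    once over the whole image instead of over sliding windows."""
    x = x.astype(float)
    y = y.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For super-resolution papers, both metrics are usually reported against a held-out reference image at the target resolution.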


2020 ◽  
Vol 12 (15) ◽  
pp. 2366
Author(s):  
Nicolas Latte ◽  
Philippe Lejeune

Sentinel-2 (S2) imagery is used in many research areas and for diverse applications. Its spectral resolution and quality are high, but its spatial resolution, at best 10 m, is not sufficient for fine-scale analysis. A novel method was thus proposed to super-resolve S2 imagery to 2.5 m. For a given S2 tile, the 10 S2 bands (four at 10 m and six at 20 m) were fused with additional images acquired at higher spatial resolution by the PlanetScope (PS) constellation. The radiometric inconsistencies between PS microsatellites were normalized. Radiometric normalization and super-resolution were achieved simultaneously using state-of-the-art super-resolution residual convolutional neural networks adapted to the particularities of S2 and PS imagery (including masks of clouds and shadows). The method is described in detail, from image selection and downloading to neural network architecture, training, and prediction. The quality was thoroughly assessed visually (photointerpretation) and quantitatively, confirming that the proposed method is highly spatially and spectrally accurate. The method is also robust and can be applied to S2 images acquired worldwide at any date.
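In the paper the radiometric normalization is learned jointly with super-resolution inside the network; as a rough classical stand-in, a per-band linear gain/offset fit over co-registered pixels looks like this (illustrative, not the authors' implementation):

```python
import numpy as np

def linear_radiometric_match(src, ref):
    """Match the radiometry of `src` (e.g. one PS microsatellite band)
    to `ref` (e.g. the corresponding S2 band) with a least-squares
    gain/offset fit over co-registered pixels, then apply it."""
    gain, offset = np.polyfit(src.ravel(), ref.ravel(), 1)
    return gain * src + offset
```

A learned normalization can additionally absorb spatially varying and nonlinear effects, which is why the paper folds it into the CNN.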


2021 ◽  
Vol 13 (13) ◽  
pp. 2614
Author(s):  
Yu Tao ◽  
Siting Xiong ◽  
Rui Song ◽  
Jan-Peter Muller

Higher spatial resolution imaging data are considered desirable in many Earth observation applications. In this work, we propose and demonstrate the TARSGAN (learning Terrestrial image deblurring using Adaptive weighted dense Residual Super-resolution Generative Adversarial Network) system for Super-resolution Restoration (SRR) of 10 m/pixel Sentinel-2 “true” colour images as well as all the other multispectral bands. In parallel, the ELF (automated image Edge detection and measurements of edge spread function, Line spread function, and Full width at half maximum) system is proposed to achieve automated and precise assessments of the effective resolutions of the input and SRR images. Subsequent ELF measurements of the TARSGAN SRR results suggest an averaged effective resolution enhancement factor of about 2.91 times (equivalent to ~3.44 m/pixel for the 10 m/pixel bands) given a nominal SRR upscaling factor of 4 times. Several examples are provided for different types of scenes from urban landscapes to agricultural scenes and sea-ice floes.
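A minimal sketch of the ELF idea, assuming a clean 1-D edge profile is already in hand: differentiate the edge spread function (ESF) to obtain the line spread function (LSF), then measure its full width at half maximum (FWHM). The automated edge detection that the actual ELF system performs is omitted here.

```python
import numpy as np

def fwhm_from_edge(esf, dx=1.0):
    """Differentiate a 1-D edge spread function to get the line spread
    function, then count its width at half maximum (in units of dx).
    Smaller FWHM means a sharper effective resolution."""
    lsf = np.abs(np.gradient(esf, dx))
    half = lsf.max() / 2.0
    return np.count_nonzero(lsf >= half) * dx
```

For a Gaussian blur of standard deviation sigma, this recovers the familiar FWHM of about 2.355 * sigma, so comparing FWHM before and after SRR gives the effective enhancement factor the abstract reports.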


2020 ◽  
Author(s):  
Patrice Carbonneau ◽  
Barbara Belletti ◽  
Marco Micotti ◽  
Andrea Casteletti ◽  
Stefano Mariani ◽  
...  

In current fluvial remote sensing approaches, there exists a certain dichotomy between the analysis of small channels at local scales, which is generally done with airborne data, and the analysis of entire basins at regional and national scales with satellite data. One possible solution to this challenge is to use low-altitude imagery from low-cost UAVs to provide sub-metric scale class information which can then be used to train fuzzy classification models for entire Sentinel 2 tiles. The fuzzy classification approach can allow for sub-pixel information and, when extended to entire Sentinel 2 tiles, the method therefore develops information at a resolution of less than 10 meters (the best spatial resolution of Sentinel 2 bands) at regional scales. In this contribution, we present such a method where UAV imagery is used as the training data for the fully fuzzy classification of Sentinel 2 imagery. We partition the fluvial corridor into three simple classes: water, dry sediment and vegetation. Then we manually classify the local UAV imagery into highly accurate class rasters. In order to augment the value of the Sentinel 2 data, we use an established super-resolution method that delivers 10 meter spatial resolution across all 11 Sentinel 2 bands. We then use the sub-metric UAV classifications as training data for the 10 meter super-resolved Sentinel 2 imagery, and we train fuzzy classification models using random forests, dense neural networks and convolutional neural networks (CNN). We find that CNN architectures perform best and can predict class membership within a pixel of a new Sentinel 2 tile not seen in the training phase with a mean error of 0% and an RMS error of 18%. Crisp class predictions derived from the fuzzy models range in accuracy from 88% to 99%, even in the case of tiles never seen in the training phase. With this approach, it is now possible to deploy a low-cost UAV in order to train a transferable CNN model that can predict fuzzy classes at very large scales from freely available Sentinel 2 imagery. This approach can therefore serve as the basis for multi-temporal classification and change detection of the Sentinel 2 archives.
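The fuzzy-to-crisp conversion and the two reported error figures can be sketched as follows (illustrative helper names, not from the paper): crisp labels come from the dominant class membership, while the mean and RMS errors compare predicted and reference membership fractions.

```python
import numpy as np

def crisp_from_fuzzy(memberships):
    """Collapse per-pixel fuzzy class memberships, shape (N, n_classes),
    to a crisp label by taking the dominant class."""
    return np.argmax(memberships, axis=1)

def membership_errors(pred, ref):
    """Mean (bias) and RMS error between predicted and reference
    membership fractions, the two figures reported in the abstract."""
    diff = pred - ref
    return float(diff.mean()), float(np.sqrt((diff ** 2).mean()))
```

A mean error near zero with a nonzero RMS, as reported above, indicates unbiased but noisy per-pixel membership estimates.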

