Deep learning for ocean remote sensing: an application of convolutional neural networks for super-resolution on satellite-derived SST data

Author(s):  
Aurelien Ducournau ◽  
Ronan Fablet
2021 ◽  
Vol 12 (3) ◽  
pp. 46-47
Author(s):  
Nikita Saxena

Space-borne satellite radiometers measure Sea Surface Temperature (SST), which is pivotal to studies of air-sea interactions and ocean features. Under clear-sky conditions, high-resolution measurements are obtainable, but under cloudy conditions, data analysis is constrained to the available low-resolution measurements. We assess the efficiency of Deep Learning (DL) architectures, particularly Convolutional Neural Networks (CNNs), for downscaling oceanographic data from low spatial resolution (SR) to high SR. Focusing on SST fields of the Bay of Bengal, this study shows that a Very Deep Super-Resolution CNN can successfully reconstruct SST observations from 15 km SR to 5 km SR, and from 5 km SR to 1 km SR. This outcome calls attention to the significance of DL models explicitly trained to reconstruct high-SR SST fields from low-SR data. Inference with DL models can act as a substitute for the existing computationally expensive downscaling technique, dynamical downscaling. The complete code is available on this Github Repository.
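The core idea behind VDSR-style super-resolution is residual learning: the network predicts only the high-frequency detail that plain interpolation misses, and that residual is added back to the interpolated field. A minimal NumPy sketch of this formulation follows; the nearest-neighbour upsampler and the identity residual net are placeholders, not the paper's trained model.

```python
import numpy as np

def upscale(lr_field, factor):
    """Nearest-neighbour upsampling as a stand-in for bicubic interpolation."""
    return np.kron(lr_field, np.ones((factor, factor)))

def vdsr_reconstruct(lr_field, residual_net, factor):
    """VDSR-style reconstruction: the network predicts only the residual
    (high-frequency detail) that interpolation cannot recover."""
    coarse = upscale(lr_field, factor)
    return coarse + residual_net(coarse)

# Toy 2x2 "SST patch" upscaled 3x (analogous to 15 km -> 5 km),
# with a zero residual net standing in for the trained CNN.
lr = np.array([[28.0, 28.5], [27.5, 28.0]])
hr = vdsr_reconstruct(lr, lambda x: np.zeros_like(x), factor=3)
```

A trained model would replace the lambda with a deep stack of convolutions whose output is the learned residual.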


2020 ◽  
Vol 12 (5) ◽  
pp. 765 ◽  
Author(s):  
Calimanut-Ionut Cira ◽  
Ramon Alcarria ◽  
Miguel-Ángel Manso-Callejo ◽  
Francisco Serradilla

Remote sensing imagery combined with deep learning strategies is often regarded as an ideal solution for interpreting scenes and monitoring infrastructures with remarkable performance levels. In addition, the road network plays an important part in transportation, and currently one of the main related challenges is detecting and monitoring the occurring changes in order to update the existing cartography. This task is challenging due to the nature of the object (continuous and often with no clearly defined borders) and the nature of remotely sensed images (noise, obstructions). In this paper, we propose a novel framework based on convolutional neural networks (CNNs) to classify secondary roads in high-resolution aerial orthoimages divided into tiles of 256 × 256 pixels. We will evaluate the framework’s performance on unseen test data and compare the results with those obtained by other popular CNNs trained from scratch.
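The tiling step the abstract mentions can be sketched simply: a large orthoimage is cut into non-overlapping 256 × 256 patches that become individual classification samples. This NumPy sketch (edge-handling policy is an assumption, since the paper does not specify it) discards partial tiles at the borders:

```python
import numpy as np

def tile_image(img, tile=256):
    """Split an H x W x C orthoimage into non-overlapping tile x tile patches,
    discarding any partial tiles at the right/bottom edges."""
    rows, cols = img.shape[0] // tile, img.shape[1] // tile
    return [img[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            for r in range(rows) for c in range(cols)]

# A 600 x 520 toy orthoimage yields a 2 x 2 grid of full tiles.
tiles = tile_image(np.zeros((600, 520, 3)), tile=256)
```

Each tile can then be fed to the CNN as one road / no-road classification sample.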


2020 ◽  
Vol 12 (7) ◽  
pp. 1092
Author(s):  
David Browne ◽  
Michael Giering ◽  
Steven Prestwich

Scene classification is an important aspect of image/video understanding and segmentation. However, remote-sensing scene classification is a challenging image recognition task, partly due to the limited training data, which causes deep-learning Convolutional Neural Networks (CNNs) to overfit. Another difficulty is that images often have very different scales and orientations (viewing angles). Yet another is that the resulting networks may be very large, again making them prone to overfitting and unsuitable for deployment on memory- and energy-limited devices. We propose an efficient deep-learning approach to tackle these problems. We use transfer learning to compensate for the lack of data, and data augmentation to tackle varying scale and orientation. To reduce network size, we use a novel unsupervised learning approach based on k-means clustering, applied to all parts of the network: most network reduction methods use computationally expensive supervised learning methods and apply only to the convolutional or fully connected layers, but not both. In experiments, we set new standards in classification accuracy on four remote-sensing and two scene-recognition image datasets.
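A common way k-means is used for network reduction is weight sharing: a layer's weights are clustered into k shared values, so each weight can be stored as a small cluster index plus a k-entry codebook. A minimal sketch of that idea (the details of the paper's method may differ; this 1-D Lloyd iteration is an assumption):

```python
import numpy as np

def kmeans_quantize(weights, k, iters=20, seed=0):
    """Cluster a layer's weights into k shared values (weight sharing).
    Returns the quantized weights and the k-entry codebook."""
    rng = np.random.default_rng(seed)
    flat = weights.ravel()
    centroids = rng.choice(flat, size=k, replace=False)
    for _ in range(iters):
        # Assign each weight to its nearest centroid, then recompute centroids.
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = flat[assign == j].mean()
    return centroids[assign].reshape(weights.shape), centroids

# Four weights that form two natural clusters (~0.10 and ~0.50).
w = np.array([[0.11, 0.09], [0.52, 0.48]])
compressed, codebook = kmeans_quantize(w, k=2)
```

After quantization only the codebook and per-weight indices need to be stored, which is where the memory saving comes from.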


2021 ◽  
Author(s):  
Dario Spiller ◽  
Luigi Ansalone ◽  
Nicolas Longépé ◽  
James Wheeler ◽  
Pierre Philippe Mathieu

<p>Over the last few years, wildfires have become more severe and destructive, having extreme consequences on local and global ecosystems. Fire detection and accurate monitoring of risk areas are becoming increasingly important. Satellite remote sensing offers unique opportunities for mapping, monitoring, and analysing the evolution of wildfires, providing helpful contributions to counteract dangerous situations.</p><p>Among the different remote sensing technologies, hyper-spectral (HS) imagery presents nonpareil features in support of fire detection. In this study, HS images from the Italian satellite PRISMA (PRecursore IperSpettrale della Missione Applicativa) will be used. The PRISMA satellite, launched on 22 March 2019, holds a hyperspectral and panchromatic payload able to acquire images with worldwide coverage. The hyperspectral camera works in the spectral range of 0.4–2.5 µm, with 66 and 173 channels in the VNIR (Visible and Near InfraRed) and SWIR (Short-Wave InfraRed) regions, respectively. The average spectral resolution is less than 10 nm over the entire range with an accuracy of ±0.1 nm, while the ground sampling distance of PRISMA images is about 5 m and 30 m for the panchromatic and hyperspectral cameras, respectively.</p><p>This work will investigate how PRISMA HS images can be used to support fire detection and related crisis management. To this aim, deep learning methodologies will be investigated, such as 1D convolutional neural networks for spectral analysis of the data, or 3D convolutional neural networks for joint spatial and spectral analysis. Semantic segmentation of the input HS data will be discussed, where an output label will be associated with each pixel of the input image.
The overall goal of this work is to highlight how PRISMA hyperspectral data can contribute to remote sensing and Earth-observation data analysis with regard to natural hazard and risk studies, focusing especially on wildfires, also considering the benefits with respect to standard multi-spectral imagery or previous hyperspectral sensors such as Hyperion.</p><p>The contributions of this work to the state of the art are the following:</p><ul><li>Demonstrating the advantages of using PRISMA HS data over multi-spectral data.</li> <li>Discussing the potential of deep learning methodologies based on 1D and 3D convolutional neural networks to capture spectral (and, in the 3D case, spatial) dependencies, which is crucial when dealing with HS images.</li> <li>Discussing the possibility and benefit of integrating HS-based approaches into future monitoring systems for wildfire alerts and disasters.</li> <li>Discussing the opportunity to design and develop future HS remote sensing missions dedicated to fire detection with on-board analysis.</li> </ul><p>To conclude, this work will raise awareness of the potential of PRISMA HS data for disaster monitoring, with a special focus on wildfires.</p>
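The 1-D CNN approach mentioned above operates along the spectral axis of a single hyperspectral pixel: a small kernel slides over the band vector and responds to spectral shapes. A minimal NumPy sketch of that core operation (the toy pixel values and the difference kernel are illustrative assumptions, not PRISMA data):

```python
import numpy as np

def conv1d_spectral(spectrum, kernel):
    """'Valid' 1-D convolution along the spectral axis -- the core operation
    a 1-D CNN applies to each hyperspectral pixel's band vector."""
    n, k = len(spectrum), len(kernel)
    return np.array([spectrum[i:i + k] @ kernel for i in range(n - k + 1)])

# Toy pixel with 6 bands; a difference kernel responds to spectral edges
# such as a sharp reflectance rise between band groups.
pixel = np.array([0.1, 0.1, 0.1, 0.6, 0.9, 0.9])
features = conv1d_spectral(pixel, np.array([-1.0, 0.0, 1.0]))
```

Stacking many such learned kernels, followed by nonlinearities and a classifier head, gives the per-pixel spectral network; the 3-D variant additionally convolves over the two spatial axes.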


2020 ◽  
Vol 12 (7) ◽  
pp. 1117 ◽  
Author(s):  
Wenyang Duan ◽  
Ke Yang ◽  
Limin Huang ◽  
Xuewen Ma

X-band marine radar is an effective tool for sea wave remote sensing. Conventional physics-based methods for acquiring wave parameters from radar sea clutter images use three-dimensional Fourier transform and spectral analysis. They are limited by some assumptions, empirical formulas and the calibration process while obtaining the modulation transfer function (MTF) and signal-to-noise ratio (SNR). Therefore, further improvement of wave inversion accuracy with the physics-based method presents a challenge. Inspired by the capability of convolutional neural networks (CNNs) in image characteristic processing, a deep-learning inversion method based on a deep CNN is proposed. No intermediate step or parameter is needed in the CNN-based method, so fewer errors are introduced. Wave parameter inversion models were constructed based on CNNs to invert the wave’s spectral peak period and significant wave height. In the present paper, numerically simulated X-band radar image data were used for a numerical investigation of wave parameters. Results of the conventional spectral analysis and CNN-based methods were compared, and the CNN-based method had higher accuracy on the same data set. The influence of training strategy on CNN-based inversion models was studied to analyze the dependence of a deep-learning inversion model on training data. Additionally, the effects of target parameters on the inversion accuracy of CNN-based models were also studied.
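The structural point of the CNN-based method is that it maps the sea-clutter image directly to the wave parameters, with no MTF or SNR calibration in between. A minimal one-layer sketch of such an end-to-end regressor (kernel and head weights here are arbitrary placeholders, not trained values):

```python
import numpy as np

def conv2d_valid(img, k):
    """Plain 'valid' 2-D convolution (no padding)."""
    kh, kw = k.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    return np.array([[np.sum(img[i:i + kh, j:j + kw] * k) for j in range(w)]
                     for i in range(h)])

def invert_wave_params(radar_img, kernel, head_w, head_b):
    """One conv layer -> global average pooling -> linear head that regresses
    (significant wave height Hs, spectral peak period Tp) directly."""
    feat = conv2d_valid(radar_img, kernel).mean()   # pooled scalar feature
    return head_w * feat + head_b                   # [Hs, Tp]

img = np.ones((8, 8))                               # toy sea-clutter image
hs, tp = invert_wave_params(img, np.ones((3, 3)) / 9.0,
                            head_w=np.array([2.0, 8.0]),
                            head_b=np.array([0.5, 1.0]))
```

A real model stacks many such conv layers and learns the weights from simulated radar images labelled with known wave parameters.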


We describe our achievements in fabricating diffractive lenses with a thickness of 7 μm and focal lengths of 200 and 350 mm, combined with shadow correction, deconvolution and deep neural network training, to obtain reconstructions approaching photographic visual quality. Although images taken using diffractive optics have been shown in previous papers, here deep neural architectures are used in the recovery phase. We use the imaging component of our structure to enable ultralight remote sensing cameras for nano- and pico-satellites, as well as small drones and solar-powered aircraft for aeronautical remote sensing systems. We extend the customizability of the liquid-centre focus to non-circular surfaces, forcing movement at the liquid convergence point of the surface. We study their trends and whether we can use them in optical structures.


Author(s):  
Z. Hu ◽  
L. Xu ◽  
B. Yu

An empirical model is established to analyse the daily retrieval of soil moisture from passive microwave remote sensing using convolutional neural networks (CNNs). Soil moisture plays an important role in the water cycle. However, with the rapid growth of remotely sensed data acquisition, it is a hard task for remote sensing practitioners to find a fast and convenient model to deal with the massive data. In this paper, AMSR-E brightness temperatures are used to train a CNN to predict soil moisture from the European Centre for Medium-Range Weather Forecasts (ECMWF) model. Compared with classical inversion methods, the deep learning-based method is more suitable for global soil moisture retrieval. It is well supported by graphics processing unit (GPU) acceleration, which can meet the demand of massive data inversion. Once the model is trained, a global soil moisture map can be predicted in less than 10 seconds. Moreover, the deep learning-based retrieval method can learn complex texture features from big remote sensing data. In this experiment, the results demonstrate that the CNN deployed to retrieve global soil moisture achieves better performance than support vector regression (SVR) for soil moisture retrieval.
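The speed claim rests on inference being a single vectorized pass over the global grid once the weights are fixed. A minimal pixelwise sketch of that pass (two toy channels and made-up weights; the real model is a trained CNN, not this linear map):

```python
import numpy as np

def predict_soil_moisture(tb, w, b):
    """Map brightness-temperature channels (C x H x W) to a soil-moisture
    grid in one vectorized pass; clipping keeps values in a physical
    volumetric range (m^3/m^3)."""
    return np.clip(np.tensordot(w, tb, axes=1) + b, 0.0, 0.5)

tb = np.full((2, 4, 4), 250.0)       # two toy AMSR-E-like channels, 4 x 4 grid
sm = predict_soil_moisture(tb, w=np.array([0.001, 0.0002]), b=-0.1)
```

Because the whole grid is processed as one array operation (and maps directly onto GPU kernels), scaling the toy 4 × 4 grid to a global grid changes only the array size, not the code.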


Author(s):  
M. Brandmeier ◽  
Y. Chen

<p><strong>Abstract.</strong> Deep learning has been used successfully in computer vision problems, e.g. image classification, target detection and many more. We use deep learning in conjunction with ArcGIS to implement a model with advanced convolutional neural networks (CNNs) for lithological mapping in the Mount Isa region (Australia). The area is ideal for spectral remote sensing as there is only sparse vegetation and, besides freely available Sentinel-2 and ASTER data, several geophysical datasets are available from exploration campaigns. By fusing the data and thus covering a wide spectral range as well as capturing geophysical properties of rocks, we aim at improving classification accuracies and supporting geological mapping. We also evaluate the performance of the sensors on their own compared to their joint use, as the Sentinel-2 satellites are relatively new and as of now only a few studies exist for geological applications. We developed an end-to-end deep learning model using Keras and TensorFlow that consists of several convolutional, pooling and deconvolutional layers. Our model was inspired by the family of U-Net architectures, where low-level feature maps (encoders) are concatenated with high-level ones (decoders), which enables precise localization. This type of network architecture was especially designed to effectively solve pixel-wise classification problems, which is appropriate for lithological classification. We spatially resampled and fused the multi-sensor remote sensing data with different bands and geophysical data into image cubes as input for our model. Pre-processing was done in ArcGIS and the final, fine-tuned model was imported into a toolbox to be used on further scenes directly in the GIS environment. The tool classifies each pixel of the multiband imagery into different types of rocks according to a defined probability threshold.
Results highlight the power of using Sentinel-2 in conjunction with ASTER data, with accuracies of 75% in comparison to only 70% and 73% for ASTER or Sentinel-2 data alone. These results are similar, but examining the different classes shows that there are significant improvements for classes such as dolerite or carbonate sediments that are not that widely distributed in the area. Adding geophysical datasets reduced accuracies to 60%, probably due to an order-of-magnitude difference in spatial resolution. In comparison, Random Forest (RF) and Support Vector Machines (SVMs) trained on the same data only achieve accuracies of 46% and 36%, respectively. Most uncertainty is due to labelling errors and labels with mixed lithologies. However, the results show that the U-Net model is a powerful alternative to other classifiers for medium-resolution multispectral data.</p>
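The U-Net trait the abstract emphasizes is the skip connection: each decoder feature map is concatenated with the same-resolution encoder map, so fine spatial detail survives the pooling. A shape-level NumPy sketch of one encoder/decoder level (pooling and upsampling stand in for the learned convolutions, which are omitted):

```python
import numpy as np

def down(x):
    """2x max pooling over an H x W x C map (one encoder step)."""
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2, -1).max(axis=(1, 3))

def up(x):
    """2x nearest-neighbour upsampling (one decoder step)."""
    return np.kron(x, np.ones((2, 2, 1)))

def unet_skip(x):
    """Structural core of U-Net: the decoder map is concatenated with the
    same-resolution encoder map along the channel axis."""
    enc = down(x)                 # encoder output at half resolution
    dec = up(down(enc))           # bottleneck, then decode back up one level
    return np.concatenate([enc, dec], axis=-1)

cube = np.zeros((8, 8, 6))        # toy fused Sentinel-2/ASTER image cube
out = unet_skip(cube)             # half resolution, doubled channel count
```

In the full network each level also applies learned convolutions before and after these steps, and a final 1 × 1 convolution produces the per-pixel class probabilities.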


2022 ◽  
Vol 14 (2) ◽  
pp. 359
Author(s):  
Ali Jamali ◽  
Masoud Mahdianpari

The use of machine learning algorithms to classify complex landscapes has been revolutionized by the introduction of deep learning techniques, particularly in remote sensing. Convolutional neural networks (CNNs) have shown great success in the classification of complex high-dimensional remote sensing imagery, specifically in wetland classification. On the other hand, the state-of-the-art natural language processing (NLP) algorithms are transformers. Although transformers have been studied for a few remote sensing applications, the integration of deep CNNs and transformers has not been studied, particularly in wetland mapping. As such, in this study we explore the potential, and the limitations to be overcome, of a multi-model deep learning network that integrates a modified version of the well-known deep CNN VGG-16, a 3D CNN, and the Swin transformer for complex coastal wetland classification. Moreover, we discuss the potential and limitations of the proposed multi-model technique relative to several solo models, including random forest (RF), support vector machine (SVM), VGG-16, 3D CNN, and Swin transformer, in the pilot site of Saint John city, located in New Brunswick, Canada. In terms of F-1 score, the multi-model network obtained values of 0.87, 0.88, 0.89, 0.91, 0.93, 0.93, and 0.93 for the recognition of shrub wetland, fen, bog, aquatic bed, coastal marsh, forested wetland, and freshwater marsh, respectively. The results suggest that the multi-model network is superior to the other solo classifiers by margins of 3.36% to 33.35% in average accuracy. The results achieved in this study suggest high potential for integrating CNN networks with cutting-edge transformers for the classification of complex landscapes in remote sensing.
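The paper's multi-model network fuses the CNN and transformer branches internally; a far simpler way to illustrate the ensemble idea is decision-level fusion, where the per-class probability maps of the individual models are averaged and the argmax is taken. This sketch is that simpler variant, not the authors' architecture:

```python
import numpy as np

def fuse_predictions(prob_maps, weights=None):
    """Late-fusion sketch: combine per-class probability maps from several
    models (e.g. a CNN branch and a transformer branch) and take the argmax."""
    stacked = np.stack(prob_maps)                       # (models, pixels, classes)
    if weights is not None:
        stacked = stacked * np.asarray(weights)[:, None, None]
    return stacked.sum(axis=0).argmax(axis=-1)

# Two toy models, two pixels, two wetland classes.
cnn  = np.array([[0.8, 0.2], [0.4, 0.6]])
swin = np.array([[0.6, 0.4], [0.1, 0.9]])
labels = fuse_predictions([cnn, swin])
```

Feature-level fusion, as used in the paper, instead concatenates intermediate feature maps before a shared classification head, letting the head learn how to weight each branch.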


2020 ◽  
Vol 10 (6) ◽  
pp. 1959
Author(s):  
Hyeongyeom Ahn ◽  
Changhoon Yim

In this paper, we propose a deep learning method with convolutional neural networks (CNNs) using skip connections with layer groups for super-resolution image reconstruction. In the proposed method, entire CNN layers for residual data processing are divided into several layer groups, and skip connections with different multiplication factors are applied from input data to these layer groups. With the proposed method, the processed data in hidden layer units tend to be distributed in a wider range. Consequently, the feature information from input data is transmitted to the output more robustly. Experimental results show that the proposed method yields a higher peak signal-to-noise ratio and better subjective quality than existing methods for super-resolution image reconstruction.
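The mechanism described above, skip connections from the input to successive layer groups with different multiplication factors, can be sketched in a few lines. The toy "layer groups" here are simple linear maps standing in for stacks of convolutional layers, and the factors are arbitrary illustrative values:

```python
import numpy as np

def grouped_skip_forward(x, groups, factors):
    """Skip connections with layer groups: the input, scaled by a per-group
    multiplication factor, is added to each group's output, so feature
    information from the input reaches deep groups directly."""
    out = x
    for group, alpha in zip(groups, factors):
        out = group(out) + alpha * x      # per-group skip from the input
    return out

# Two toy "layer groups" (scalar linear maps) with factors 1.0 and 0.5.
groups = [lambda v: 0.9 * v, lambda v: 0.9 * v]
y = grouped_skip_forward(np.array([1.0, 2.0]), groups, factors=[1.0, 0.5])
```

Because each group receives a scaled copy of the input in addition to the processed features, hidden activations spread over a wider range and gradients have short paths back to the input, which is the robustness effect the abstract describes.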

