Deep Learning for SAR Image Despeckling

2019 ◽  
Vol 11 (13) ◽  
pp. 1532 ◽  
Author(s):  
Francesco Lattari ◽  
Borja Gonzalez Leon ◽  
Francesco Asaro ◽  
Alessio Rucci ◽  
Claudio Prati ◽  
...  

Speckle filtering is an unavoidable step in applications that involve amplitude or intensity images acquired by coherent systems, such as Synthetic Aperture Radar (SAR). Speckle is a target-dependent phenomenon; thus, its estimation and reduction require identifying specific properties of the image features. Speckle filtering is one of the most prominent topics in the SAR image processing research community, which first tackled the issue with handcrafted feature-based filters. While classical algorithms have slowly but steadily achieved better and better performance, the more recent Convolutional Neural Networks (CNNs) have proven to be a promising alternative, in light of their outstanding capability to efficiently learn task-specific filters. Currently, only simplistic CNN architectures have been exploited for the speckle filtering task. While these architectures outperform classical algorithms, they still show weaknesses in texture preservation. In this work, a deep encoder–decoder CNN architecture, tailored to the specific context of SAR images, is proposed to enhance speckle filtering while preserving texture. This objective is addressed by adapting the U-Net CNN, which has been modified and optimized accordingly. The architecture extracts features at different scales and can produce detailed reconstructions through its system of skip connections. A two-phase learning strategy is adopted: the model is first pre-trained on a synthetic dataset and then adapted to the real SAR image domain through a fast fine-tuning procedure. During the fine-tuning phase, a modified version of total variation (TV) regularization is introduced to improve the network's performance on real SAR data. Finally, experiments on simulated and real data compare the performance of the proposed method with state-of-the-art methodologies.
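The abstract does not spell out the paper's modified TV term, so the sketch below uses plain anisotropic total variation as the fine-tuning regularizer; the L2 fidelity term and the weight value are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation: sum of absolute differences
    between vertically and horizontally adjacent pixels."""
    dv = np.abs(np.diff(img, axis=0)).sum()
    dh = np.abs(np.diff(img, axis=1)).sum()
    return dv + dh

def fine_tuning_loss(pred, target, tv_weight=1e-4):
    """Fidelity term plus a TV penalty on the despeckled output.
    The weight and the plain (unmodified) TV term are illustrative."""
    fidelity = np.mean((pred - target) ** 2)
    return fidelity + tv_weight * total_variation(pred)
```

A constant image has zero TV, while residual speckle raises the penalty, which is why a TV term discourages noisy reconstructions without forbidding sharp edges.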

Energies ◽  
2021 ◽  
Vol 14 (12) ◽  
pp. 3650
Author(s):  
Zhe Yan ◽  
Zheng Zhang ◽  
Shaoyong Liu

Fault interpretation is an important part of seismic structural interpretation and reservoir characterization. In the conventional approach, faults are detected as reflection discontinuities or abrupt terminations and are manually tracked in post-stack seismic data, which is time-consuming. To improve efficiency, a variety of automatic fault detection methods have been proposed, among which deep learning-based methods have received widespread attention. However, deep learning techniques require a large number of labeled seismic samples as a training dataset. Although the quantity of synthetic seismic data can be guaranteed and its labels are accurate, a gap between synthetic and real data still exists. To overcome this drawback, we apply a transfer learning strategy to improve the performance of deep learning-based automatic fault detection. We first pre-train a deep neural network with synthetic seismic data. Then we retrain the network with real seismic samples. We use a random sample consensus (RANSAC) method to obtain real seismic samples and generate the corresponding labels automatically. Three real 3D examples demonstrate that the fault detection accuracy of the pre-trained network models can be greatly improved by retraining with a small number of real seismic samples.
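The abstract does not detail how RANSAC selects real samples and labels; the following is a generic RANSAC sketch (line fitting in NumPy) that illustrates the consensus idea the method relies on: hypothesize a model from a minimal random sample, count inliers, and keep the model with the largest consensus set.

```python
import numpy as np

def ransac_line(points, n_iters=100, threshold=0.1, rng_seed=0):
    """Generic RANSAC: repeatedly fit a line y = a*x + b to two random
    points and keep the model with the largest consensus set."""
    rng = np.random.default_rng(rng_seed)
    best_model, best_inliers = None, 0
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue  # degenerate sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int((residuals < threshold).sum())
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

The same robustness to outliers is what makes RANSAC attractive for harvesting training samples from noisy real seismic data.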


Atmosphere ◽  
2021 ◽  
Vol 12 (5) ◽  
pp. 577
Author(s):  
Gabriele Graffieti ◽  
Davide Maltoni

In this paper, we present a novel defogging technique, named CurL-Defog, with the aim of minimizing the insertion of artifacts while maintaining good contrast restoration and visibility enhancement. Many learning-based defogging approaches rely on paired data, where fog is artificially added to clear images; this usually provides good results on mildly fogged images but is not effective for difficult cases. On the other hand, the models trained with real data can produce visually impressive results, but unwanted artifacts are often present. We propose a curriculum learning strategy and an enhanced CycleGAN model to reduce the number of produced artifacts, where both synthetic and real data are used in the training procedure. We also introduce a new metric, called HArD (Hazy Artifact Detector), to numerically quantify the number of artifacts in the defogged images, thus avoiding the tedious and subjective manual inspection of the results. HArD is then combined with other defogging indicators to produce a solid metric that is not deceived by the presence of artifacts. The proposed approach compares favorably with state-of-the-art techniques on both real and synthetic datasets.
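A minimal sketch of the curriculum idea, assuming a simple linear schedule that ramps the share of real (hard) foggy images per batch; the actual schedule shape and mixing ratios are not given in the abstract:

```python
def curriculum_real_fraction(epoch, total_epochs, start=0.0, end=0.8):
    """Linearly increase the fraction of real foggy images in each batch
    as training progresses; the 0.0 -> 0.8 range is illustrative."""
    t = min(max(epoch / max(total_epochs - 1, 1), 0.0), 1.0)
    return start + t * (end - start)

def mix_batch(synthetic, real, epoch, total_epochs, batch_size=8):
    """Compose a batch: easy synthetic pairs first, real foggy images
    mixed in at the scheduled fraction."""
    frac = curriculum_real_fraction(epoch, total_epochs)
    n_real = int(round(frac * batch_size))
    return synthetic[: batch_size - n_real] + real[:n_real]
```

Early epochs then train almost entirely on easy paired synthetic data, and hard unpaired real images are introduced only once the model has stabilized, which is the mechanism the paper credits for fewer artifacts.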


Author(s):  
Minghui Wu ◽  
Canghong Jin ◽  
Wenkang Hu ◽  
Yabo Chen

Understanding mathematical topics is important for both educators and students to capture the latent concepts of questions, evaluate study performance, and recommend content in online learning systems. Compared to traditional text classification, mathematical topic classification has several main challenges: (1) mathematical questions are relatively short; (2) the same mathematical concept has various representations (e.g., calculation and application); (3) the content of a question is complex, spanning algebra, geometry, and calculus. To overcome these problems, we propose a framework that combines content tokens and mathematical knowledge concepts throughout the whole procedure. We embed entities from mathematical knowledge graphs, integrate entities into tokens in a masked language model, set up semantic similarity-based tasks for next-sentence prediction, and fuse knowledge vectors and token vectors during the fine-tuning procedure. We also build a Chinese mathematical topic prediction dataset consisting of more than 70,000 mathematical questions with topics. Our experiments on real data demonstrate that our knowledge graph-based mathematical topic prediction model outperforms other state-of-the-art methods.
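As a rough illustration of fusing knowledge vectors with token vectors during fine-tuning, the sketch below mean-pools the embeddings of a question's linked knowledge-graph entities and mixes the result with the token representation; the pooling scheme and mixing weight are assumptions for illustration, not the paper's fusion layer:

```python
import numpy as np

def fuse_vectors(token_vec, entity_vecs, alpha=0.5):
    """Fuse a question's token representation with the mean embedding
    of its linked knowledge-graph entities via a weighted sum."""
    if len(entity_vecs) == 0:
        return token_vec  # question with no linked concepts
    entity_vec = np.mean(entity_vecs, axis=0)
    return alpha * token_vec + (1.0 - alpha) * entity_vec
```

The fused vector can then feed the topic classifier, so that two short questions with different surface wording but the same linked concepts land close together.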


2021 ◽  
pp. 323-346
Author(s):  
Ruliang Yang ◽  
Bowei Dai ◽  
Lulu Tan ◽  
Xiuqing Liu ◽  
Zhen Yang ◽  
...  

2020 ◽  
Vol 309 ◽  
pp. 03037
Author(s):  
Dongqiu Xing ◽  
Rui Chen ◽  
Lihua Qi ◽  
Jing Zhao ◽  
Yi Wang

This study establishes a multi-source fault identification method based on a combined deep learning strategy to effectively identify multi-source faults in the diagnosis of complex industrial systems. The framework is composed of feature extraction and classifier design. In the first stage, the signal is transformed to the time-frequency domain and the time-frequency features are learned using stacked denoising autoencoders. A learning method consisting of unsupervised pre-training and supervised fine-tuning is used to train this deep model. In the second stage, an ensemble of multiple support vector machine classifiers is built to recognize fault information. Ten types of rolling bearing signals were adopted in a simulation experiment to validate the effectiveness of the proposed framework. The results demonstrate that the joint model obtains higher recognition accuracy.
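A minimal sketch of the second-stage idea, majority voting over an ensemble of trained classifiers; the members here are stand-in callables, and the actual SVM training and the paper's combination rule are omitted:

```python
from collections import Counter

def ensemble_predict(classifiers, x):
    """Majority vote over an ensemble of trained classifiers (e.g.,
    multiple SVMs). With tied counts, the label seen first among the
    votes wins, since Counter.most_common is insertion-stable."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]
```

Voting lets individually imperfect per-fault classifiers cancel each other's errors, which is the usual motivation for ensembling multiple SVMs.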


2020 ◽  
Vol 9 (2) ◽  
pp. 61
Author(s):  
Hongwei Zhao ◽  
Lin Yuan ◽  
Haoyu Zhao

Recently, with the rapid growth in the number of remote sensing image datasets, an effective image retrieval method is urgently needed to manage and use such image data. In this paper, we propose a deep metric learning strategy based on a Similarity Retention Loss (SRL) for content-based remote sensing image retrieval. We improve current metric learning methods in three aspects: sample mining, network model structure, and the metric loss function. After redefining hard and easy samples, we mine positive and negative samples according to the size and spatial distribution of the dataset classes. At the same time, the Similarity Retention Loss is proposed: the ratio of easy samples to hard samples within a class assigns dynamic weights to the mined hard samples, so that the sample structure characteristics within the class are learned. For negative samples, different weights are set based on the spatial distribution of the surrounding samples to maintain the consistency of similar structures among classes. Finally, we conduct extensive experiments on two remote sensing datasets with the fine-tuned network. The experimental results show that the proposed method achieves state-of-the-art performance.
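The exact SRL weighting formula is not given in the abstract; the sketch below assumes, purely for illustration, that the dynamic weight on mined hard positives is the easy-to-total ratio within the class:

```python
import numpy as np

def hard_sample_weight(easy_count, hard_count):
    """Illustrative dynamic weight for hard positives, driven by the
    easy-to-hard ratio within a class (not the paper's formula)."""
    if hard_count == 0:
        return 0.0
    return easy_count / (easy_count + hard_count)

def weighted_hard_loss(hard_dists, easy_count):
    """Average embedding distance of mined hard positives, scaled by
    the class-level dynamic weight."""
    w = hard_sample_weight(easy_count, len(hard_dists))
    return w * float(np.mean(hard_dists)) if hard_dists else 0.0
```

Under this assumption, classes dominated by easy samples push harder on their few difficult members, while a class that is mostly hard is down-weighted to avoid over-fitting noise.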


2019 ◽  
Vol 11 (10) ◽  
pp. 1169 ◽  
Author(s):  
Yu Wang ◽  
Guoqing Zhou ◽  
Haotian You

To extract more structural features, which can help segment a synthetic aperture radar (SAR) image accurately, and to explore their roles in the segmentation procedure, this paper presents an energy-based SAR image segmentation method with weighted features. To segment a SAR image precisely, multiple structural features are incorporated, in a weighted way, into a block- and energy-based segmentation model. The multiple features of a pixel, comprising the spectral feature obtained from the original SAR image and the texture and boundary features extracted by a curvelet transform, form a feature vector; the feature vectors of all pixels form the feature set of a SAR image. To automatically determine the roles of the multiple features in the segmentation procedure, weight variables are assigned to them, forming a weight set. The image domain is then partitioned into a set of blocks by regular tessellation. Afterwards, an energy function and a non-constrained Gibbs probability distribution combine the feature and weight sets into a feature-weighted, block-based energy segmentation model on the partitioned image domain. Further, a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is designed to simulate from the segmentation model; three move types were designed according to the model. Finally, the proposed method was tested on SAR images, and the quantitative and qualitative results demonstrate its effectiveness.
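A toy version of a feature-weighted energy over a labeling, assuming the energy penalizes the weighted squared deviation of each pixel's feature vector from its class mean; the actual model's Gibbs distribution, block tessellation, and RJMCMC moves are omitted:

```python
import numpy as np

def weighted_energy(features, labels, weights):
    """Energy of a labeling: for each class, the weighted squared
    deviation of member pixels' feature vectors (spectral, texture,
    boundary, ...) from the class mean. The weight vector plays the
    role of the model's weight set (sketch only)."""
    energy = 0.0
    for lab in np.unique(labels):
        members = features[labels == lab]
        mean = members.mean(axis=0)
        energy += np.sum(weights * (members - mean) ** 2)
    return float(energy)
```

Lower energy means more homogeneous classes under the current feature weights, which is the quantity an MCMC sampler would drive down when proposing label changes.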


2020 ◽  
Vol 12 (3) ◽  
pp. 548 ◽  
Author(s):  
Xinzheng Zhang ◽  
Guo Liu ◽  
Ce Zhang ◽  
Peter M. Atkinson ◽  
Xiaoheng Tan ◽  
...  

Change detection is one of the fundamental applications of synthetic aperture radar (SAR) images. However, speckle noise present in SAR images has a negative effect on change detection, leading to frequent false alarms in the mapping products. In this research, a novel two-phase, object-based deep learning approach is proposed for multi-temporal SAR image change detection. Compared with traditional methods, the proposed approach brings two main innovations. One is to classify all pixels into three categories rather than two: unchanged pixels, changed pixels caused by strong speckle (false changes), and changed pixels formed by real terrain variation (real changes). The other is to group neighbouring pixels into superpixel objects so as to exploit local spatial context. The methodology comprises two phases: (1) generate objects with the simple linear iterative clustering (SLIC) algorithm, and discriminate these objects into changed and unchanged classes using fuzzy c-means (FCM) clustering and a deep PCANet; the output of this phase is the set of changed and unchanged superpixels. (2) Apply deep learning on the pixel sets over the changed superpixels only, obtained in the first phase, to discriminate real changes from false changes. SLIC is employed again to obtain new superpixels in the second phase; low-rank and sparse decomposition is applied to these new superpixels to suppress speckle noise significantly, a further FCM clustering step is applied, and a new PCANet is then trained to classify the two kinds of changed superpixels into the final change maps. Numerical experiments demonstrate that, compared with benchmark methods, the proposed approach distinguishes real changes from false changes effectively, with significantly reduced false alarm rates, and achieves up to 99.71% change detection accuracy on multi-temporal SAR imagery.
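Both phases rely on fuzzy c-means clustering; the standard FCM membership update can be sketched as follows (NumPy, fuzzifier m = 2, given fixed cluster centers; center re-estimation and the rest of the pipeline are omitted):

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """One fuzzy c-means membership update: u[i, k] is sample i's
    degree of membership in cluster k, computed from its distances
    to all cluster centers. Rows of the result sum to 1."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)  # avoid division by zero at a center
    power = 2.0 / (m - 1.0)
    ratio = (d[:, :, None] / d[:, None, :]) ** power
    return 1.0 / ratio.sum(axis=2)
```

Soft memberships, rather than hard assignments, are what let FCM express uncertainty for superpixels sitting between the changed and unchanged clusters.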


2012 ◽  
Vol 2012 ◽  
pp. 1-16 ◽  
Author(s):  
A. E. Abdelkareem ◽  
B. S. Sharif ◽  
C. C. Tsimenidis ◽  
J. A. Neasham

In particular cases, such as under acceleration, the receiver structure must be capable of time-varying Doppler compensation. In this paper, two approaches are used to estimate the symbol timing offset parameter. The first is based upon centroid localization; it is reinforced by a second technique that uses linear prediction, under the assumption that the speed changes linearly over the OFDM symbol duration. The two estimates of the symbol timing offset are then smoothed to obtain a fine-tuned approximation of the Doppler scale. The effects of the weighting coefficients on smoothing the Doppler scale, and on the performance of the receiver, are also investigated. The proposed receiver incorporates an improvement that fine-tunes the coarse timing synchronization to accommodate the time-varying Doppler. Based on this fine-tuned timing synchronization, an extension to the improved receiver is presented to assess the performance of two-point correlations. The performance of the proposed algorithms was investigated using real data from an experiment in the North Sea in 2009.
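The combine-then-smooth step can be sketched as a weighted average of the two timing-offset estimates followed by exponential smoothing against the previous Doppler-scale value; the weight values here are illustrative stand-ins for the coefficients whose effect the paper studies:

```python
def smooth_doppler(centroid_est, linear_est, prev_scale, w=0.5, beta=0.9):
    """Combine the centroid-based and linear-prediction estimates of
    the symbol timing offset with weight w, then exponentially smooth
    the result against the previous Doppler-scale estimate."""
    combined = w * centroid_est + (1.0 - w) * linear_est
    return beta * prev_scale + (1.0 - beta) * combined
```

A large beta trusts the track history and damps estimation noise; a small beta reacts faster to genuine acceleration, which is the trade-off the weighting-coefficient study examines.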


Geophysics ◽  
1984 ◽  
Vol 49 (5) ◽  
pp. 550-565 ◽  
Author(s):  
Chong‐Yung Chi ◽  
Jerry M. Mendel ◽  
Dan Hampson

In this paper we derive and implement a maximum‐likelihood deconvolution (MLD) algorithm, based on the same channel and statistical models used by Kormylo and Mendel (1983a), that leads to many fewer computations than their MLD algorithm. Both algorithms can simultaneously estimate a nonminimum phase wavelet and statistical parameters, detect locations of significant reflectors, and deconvolve the data. Our MLD algorithm is implemented by a two‐phase block component method (BCM). The phase‐1 block functions like a coarse adjustment of unknown quantities and provides a set of good initial conditions for the phase‐2 block, which functions like a fine adjustment of unknown quantities. We demonstrate good performance of our algorithm for both synthetic and real data.
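As a much coarser illustration of the underlying convolutional model and the reflector-detection step, the sketch below builds a synthetic trace and locates reflectors with a naive matched filter; this is not the paper's maximum-likelihood detector, only the signal model it operates on:

```python
import numpy as np

def synth_trace(wavelet, reflectivity):
    """Convolutional model: observed trace = wavelet * reflectivity."""
    return np.convolve(reflectivity, wavelet)

def detect_reflectors(trace, wavelet, k):
    """Naive matched filter: convolve with the time-reversed wavelet,
    take the k largest magnitudes, and shift each peak index back by
    len(wavelet) - 1 to recover the reflector location."""
    mf = np.convolve(trace, wavelet[::-1])
    peaks = np.argsort(np.abs(mf))[-k:]
    return sorted(int(p) - (len(wavelet) - 1) for p in peaks)
```

MLD improves on this kind of coarse detection by jointly estimating the (possibly nonminimum-phase) wavelet, the statistical parameters, and the reflector locations, which a fixed matched filter cannot do.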

