Semi-Supervised Learning for Seismic Impedance Inversion Using Generative Adversarial Networks

2021 ◽  
Vol 13 (5) ◽  
pp. 909
Author(s):  
Bangyu Wu ◽  
Delin Meng ◽  
Haixia Zhao

Seismic impedance inversion is essential for characterizing hydrocarbon reservoirs and detecting fluids in geophysics. However, it is nonlinear and ill-posed due to the unknown seismic wavelet, the band limitation of the observed data, and noise, and it requires a forward operator that characterizes the physical relation between the measured data and the model parameters. Deep learning methods have recently been applied successfully to geophysical inversion problems. They can obtain results with higher resolution than traditional inversion methods, but their performance is often not fully realized because of the lack of adequate labeled data (i.e., well logs) in the training process. To alleviate this problem, we propose a semi-supervised learning workflow based on a generative adversarial network (GAN) for acoustic impedance inversion. The workflow contains three networks: a generator, a discriminator, and a forward model. The training of the generator and discriminator is guided by well logs and constrained by unlabeled data via the forward model. The benchmark models Marmousi2 and SEAM and a field dataset are used to demonstrate the performance of our method. Results show that the impedance predicted by the presented method, by making use of both labeled and unlabeled data, is more consistent with the ground truth than that of conventional deep learning methods.
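The unsupervised constraint described above can be illustrated with a toy forward model: a minimal numpy sketch (not the paper's networks) in which impedance is mapped to reflectivity and convolved with a wavelet, so that well logs contribute a supervised loss while unlabeled traces contribute only a data-consistency loss.

```python
import numpy as np

def forward_model(impedance, wavelet):
    """Physics-based forward operator: impedance -> reflectivity -> seismic trace."""
    z = impedance
    refl = (z[1:] - z[:-1]) / (z[1:] + z[:-1])      # reflection coefficients
    return np.convolve(refl, wavelet, mode="same")  # band-limited synthetic seismic

def semi_supervised_loss(pred_imp, well_log, observed_seis, wavelet, labeled):
    """Labeled traces are matched to well-log impedance; unlabeled traces are
    constrained only through the forward model's fit to the observed seismic."""
    if labeled:
        return float(np.mean((pred_imp - well_log) ** 2))    # supervised term
    synth = forward_model(pred_imp, wavelet)
    return float(np.mean((synth - observed_seis) ** 2))      # consistency term
```

In the paper's workflow these two terms would steer the generator jointly; here they are kept separate for clarity.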

Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3913 ◽  
Author(s):  
Mingxuan Li ◽  
Ou Li ◽  
Guangyi Liu ◽  
Ce Zhang

With the recent explosive growth of deep learning, automatic modulation recognition has undergone rapid development. Most newly proposed methods depend on large numbers of labeled samples. We are committed to using fewer labeled samples to perform automatic modulation recognition in the cognitive radio domain. Here, a semi-supervised learning method based on adversarial training is proposed, called the signal classifier generative adversarial network. Most prior methods based on this technology involve computer vision applications. We improve the existing network structure of the generative adversarial network by adding an encoder network and a signal spatial transform module, allowing our framework to address radio signal processing tasks more efficiently. These two technical improvements effectively avoid the non-convergence and mode collapse problems caused by the complexity of radio signals. Simulation results show that, compared with well-known deep learning methods, our method improves classification accuracy on a synthetic radio frequency dataset by 0.1% to 12%. In addition, we verify the advantages of our method in a semi-supervised scenario and obtain a significant increase in accuracy compared with traditional semi-supervised learning methods.
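The semi-supervised part of such an adversarial classifier is commonly realized with a discriminator that outputs K real modulation classes plus one "fake" class. The numpy sketch below shows the three resulting loss terms; it is an illustration of that general idea, not the paper's exact architecture.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sgan_discriminator_loss(logits, y=None, is_fake=False):
    """Discriminator over K modulation classes plus one 'fake' class (last index).
    Labeled real samples: cross-entropy on their true class.
    Generated samples: push probability mass onto the fake class.
    Unlabeled real samples: push mass away from the fake class."""
    p = softmax(logits)
    fake_idx = logits.shape[-1] - 1
    if y is not None:                       # labeled real sample
        return -np.log(p[..., y])
    if is_fake:                             # generated (fake) sample
        return -np.log(p[..., fake_idx])
    return -np.log(1.0 - p[..., fake_idx])  # unlabeled real sample
```

The unlabeled-data term is what lets real but unannotated signals shape the decision boundary.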


Author(s):  
Mohamed Nadjib Boufenara ◽  
Mahmoud Boufaida ◽  
Mohamed Lamine Berkane

With the exponential growth of biological data, labeling this kind of data becomes difficult and costly. Although unlabeled data are comparatively more plentiful than labeled data, most supervised learning methods are not designed to use unlabeled data. Semi-supervised learning methods are motivated by the availability of large unlabeled datasets alongside only a small number of labeled examples. However, incorporating unlabeled data into learning does not guarantee an improvement in classification performance. This paper introduces an approach based on a semi-supervised learning model, namely self-training with a deep learning algorithm, to predict missing classes from labeled and unlabeled data. In order to assess the performance of the proposed approach, two datasets are used with four performance measures: precision, recall, F-measure, and area under the ROC curve (AUC).
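The self-training loop described here can be sketched minimally, with a nearest-centroid classifier standing in for the deep network (an assumption for illustration only): train on labeled data, pseudo-label the most confident unlabeled examples, and repeat.

```python
import numpy as np

def self_train(Xl, yl, Xu, rounds=5, conf=0.9):
    """Minimal self-training sketch: grow the labeled set with
    high-confidence predictions on the unlabeled pool."""
    Xl, yl = Xl.copy(), yl.copy()
    for _ in range(rounds):
        if len(Xu) == 0:
            break
        cents = np.stack([Xl[yl == c].mean(axis=0) for c in np.unique(yl)])
        d = np.linalg.norm(Xu[:, None] - cents[None], axis=-1)
        p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)  # soft scores
        pred, score = p.argmax(1), p.max(1)
        keep = score >= conf                # only confident pseudo-labels
        if not keep.any():
            break
        Xl = np.vstack([Xl, Xu[keep]])
        yl = np.concatenate([yl, pred[keep]])
        Xu = Xu[~keep]
    return Xl, yl
```

The confidence threshold is the key knob: too low and label noise accumulates, which is exactly why, as the abstract notes, adding unlabeled data does not guarantee improvement.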


Entropy ◽  
2019 ◽  
Vol 21 (10) ◽  
pp. 988 ◽  
Author(s):  
Fazakis ◽  
Kanas ◽  
Aridas ◽  
Karlos ◽  
Kotsiantis

One of the major aspects affecting the performance of classification algorithms is the amount of labeled data available during the training phase. It is widely accepted that labeling vast amounts of data is both expensive and time-consuming since it requires human expertise. In a wide variety of scientific fields, unlabeled examples are easy to collect but hard to exploit in a way that enriches the information contained in a given dataset. In this context, a variety of learning methods have been studied in the literature aiming to efficiently utilize the vast amounts of unlabeled data during the learning process. The most common approaches tackle problems of this kind by individually applying active learning or semi-supervised learning methods. In this work, a combination of active learning and semi-supervised learning methods is proposed, under a common self-training scheme, in order to efficiently utilize the available unlabeled data. The entropy and the probability distribution over the unlabeled set are used as effective and robust metrics to select the most suitable unlabeled examples for augmenting the initial labeled set. The superiority of the proposed scheme is validated by comparing it against the base approaches of supervised, semi-supervised, and active learning across a wide range of fifty-five benchmark datasets.
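The entropy-based selection step can be sketched as follows; the split sizes and the routing of low-entropy examples to pseudo-labeling and high-entropy examples to an oracle are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of each row of class probabilities."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def split_by_entropy(probs, k_confident, k_uncertain):
    """Combined scheme sketch: the most confident (lowest-entropy) examples
    are pseudo-labeled (semi-supervised side); the most uncertain
    (highest-entropy) examples are queried from the oracle (active side)."""
    h = entropy(probs)
    order = np.argsort(h)
    return order[:k_confident], order[-k_uncertain:]
```

This captures why the two paradigms are complementary: they consume opposite ends of the same uncertainty ranking.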


Author(s):  
SHI ZHONG

Using unlabeled data to help supervised learning has become an increasingly attractive methodology that has proven effective in many applications. This paper applies semi-supervised classification algorithms, based on hidden Markov models, to classify sequences. For model-based classification, semi-supervised learning amounts to using both labeled and unlabeled data to train model parameters. We examine three different strategies for using labeled and unlabeled data in the model training process, which differ in how and when the two kinds of data contribute to training. We also compare regular semi-supervised learning, where there are separate unlabeled training data and unlabeled test data, with transductive learning, where we do not differentiate between unlabeled training data and unlabeled test data. Our experimental results on synthetic and real EEG time series show that these semi-supervised learning strategies can achieve substantially improved classification accuracy. The effect of model complexity on semi-supervised learning is also studied in our experiments.
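One such strategy, initialize on labeled data, then alternately label the unlabeled pool and refit on the union, can be sketched with per-class Gaussians standing in for the per-class hidden Markov models (an illustrative simplification of model-based semi-supervised training):

```python
import numpy as np

def fit_gaussians(X, y):
    """Per-class 1-D Gaussian (a simple stand-in for per-class HMMs)."""
    return {c: (X[y == c].mean(), X[y == c].std() + 1e-6) for c in np.unique(y)}

def loglik(x, mu, sd):
    return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)

def em_refine(Xl, yl, Xu, steps=3):
    """Hard-EM sketch: start from labeled-only parameters, then alternately
    assign classes to the unlabeled pool and refit on labeled + unlabeled."""
    params = fit_gaussians(Xl, yl)
    for _ in range(steps):
        classes = sorted(params)
        ll = np.stack([loglik(Xu, *params[c]) for c in classes])
        yu = np.array(classes)[ll.argmax(axis=0)]   # label unlabeled pool
        params = fit_gaussians(np.concatenate([Xl, Xu]),
                               np.concatenate([yl, yu]))
    return params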


2021 ◽  
Author(s):  
Roberto Augusto Philippi Martins ◽  
Danilo Silva

The lack of labeled data is one of the main issues holding back the development of deep learning models, as they rely on large labeled datasets to achieve high accuracy in complex tasks. Our objective is to evaluate the performance gain from having additional unlabeled data in the training of a deep learning model when working with medical imaging data. We present a semi-supervised learning algorithm that uses a teacher-student paradigm to leverage unlabeled data in the classification of chest X-ray images. Using our algorithm on the ChestX-ray14 dataset, we achieve a substantial increase in performance when using small labeled datasets. With our method, a model achieves an AUROC of 0.822 with only 2% labeled data and 0.865 with 5% labeled data, while a fully supervised method achieves an AUROC of 0.807 with 5% labeled data and only 0.845 with 10%.
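A teacher-student scheme of this general kind can be sketched with a linear model: the student fits the labeled batch and is pulled toward the teacher's predictions on unlabeled data, while the teacher tracks the student by exponential moving average. This is a hedged illustration under those assumptions; the paper's actual networks and update rule may differ.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def teacher_student_step(w_s, w_t, Xl, yl, Xu, lr=0.1, lam=0.1, decay=0.99):
    """One training step: supervised gradient on labeled data, consistency
    gradient toward the teacher on unlabeled data, then EMA teacher update."""
    ps_l = sigmoid(Xl @ w_s)
    grad_sup = Xl.T @ (ps_l - yl) / len(yl)           # supervised gradient
    ps_u, pt_u = sigmoid(Xu @ w_s), sigmoid(Xu @ w_t)
    grad_con = lam * Xu.T @ (ps_u - pt_u) / len(Xu)   # consistency gradient
    w_s = w_s - lr * (grad_sup + grad_con)
    w_t = decay * w_t + (1 - decay) * w_s             # teacher tracks student
    return w_s, w_t
```

The consistency weight `lam` and EMA `decay` are the hyperparameters that control how strongly unlabeled images influence the student.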


2019 ◽  
Vol 490 (1) ◽  
pp. 371-384 ◽  
Author(s):  
Aristide Doussot ◽  
Evan Eames ◽  
Benoit Semelin

Within the next few years, the Square Kilometre Array (SKA) or one of its pathfinders will hopefully detect the 21-cm signal fluctuations from the Epoch of Reionization (EoR). The goal will then be to accurately constrain the underlying astrophysical parameters. Currently, this is mainly done with Bayesian inference. Recently, neural networks have been trained to perform inverse modelling and, ideally, predict the maximum-likelihood values of the model parameters. We build on this work by improving the accuracy of the predictions using several supervised learning methods: neural networks, kernel regressions, and ridge regressions. Based on a large training set of 21-cm power spectra, we compare the performances of these methods. When using a noise-free signal generated by the model itself as input, we improve on previous neural network accuracy by one order of magnitude and, using a local ridge kernel regression, we gain another factor of a few. We then reach an accuracy level on the reconstruction of the maximum-likelihood parameter values of a few per cent compared to the 1σ confidence level due to SKA thermal noise (as estimated with Bayesian inference). For an input signal affected by SKA-like thermal noise but constrained to yield the same maximum-likelihood parameter values as the noise-free signal, our neural network exhibits an error within half of the 1σ confidence level due to the SKA thermal noise. This accuracy improves to 10 per cent of the 1σ level when using the local ridge kernel. We are thus reaching a performance level where supervised learning methods are a viable alternative for determining the maximum-likelihood parameter values.
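Kernel ridge regression, the family to which the local ridge kernel method belongs, is compact enough to sketch directly in numpy (a generic illustration, not the authors' exact estimator): fit dual coefficients from the training kernel matrix, then predict with cross-kernels.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """RBF kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, alpha=1e-3, gamma=1.0):
    """Dual coefficients: (K + alpha * I)^-1 y."""
    K = rbf(X, X, gamma)
    return np.linalg.solve(K + alpha * np.eye(len(X)), y)

def kernel_ridge_predict(Xtr, coef, Xte, gamma=1.0):
    """Prediction is a kernel-weighted sum over training points."""
    return rbf(Xte, Xtr, gamma) @ coef
```

In the 21-cm setting, X would be (summary statistics of) power spectra and y the astrophysical parameters; a "local" variant restricts the fit to training spectra near each test input.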


2019 ◽  
Author(s):  
Aymeric-Pierre Peyret ◽  
Joaquín Ambía ◽  
Carlos Torres-Verdín ◽  
Joachim Strobel

2021 ◽  
Author(s):  
Zhang Jian ◽  
Wanjuan Song

Image dehazing has always been a challenging topic in image processing. The development of deep learning methods, especially generative adversarial networks (GANs), provides a new way to approach image dehazing, and in recent years many GAN-based deep learning methods have been applied to it. However, GANs present two problems for image dehazing. First, haze not only reduces the quality of an image but also blurs its details, and it is difficult for the generator to restore the details of the whole image while removing the haze. Second, the GAN model is defined as a minimax problem, which weakens the loss function, making it difficult to tell whether the GAN is making progress during training. Therefore, we propose a Guided Generative Adversarial Dehazing Network (GGADN). Unlike other generative adversarial networks, GGADN adds a guided module to the generator. The guided module verifies each layer of the generator and strengthens the details of the map generated by that layer. Network training is based on a pre-trained VGG feature model and an L1-regularized gradient prior, incorporated through new loss-function terms. Dehazing results on synthetic and real images show that the proposed method outperforms state-of-the-art dehazing methods.
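An L1 gradient prior of the kind mentioned here can be written down directly; this numpy sketch shows only the total-variation-style term, not the full GGADN loss with its VGG feature component.

```python
import numpy as np

def l1_gradient_prior(img):
    """L1 norm of horizontal and vertical image gradients. Used as a
    regularizer, it favors piecewise-smooth outputs with sharp edges
    rather than the diffuse low-contrast gradients typical of haze."""
    dx = np.abs(np.diff(img, axis=1)).sum()  # horizontal differences
    dy = np.abs(np.diff(img, axis=0)).sum()  # vertical differences
    return float(dx + dy)
```

In training, a weighted sum of this term, the adversarial loss, and a VGG perceptual loss would form the generator's objective; the weights are tuning choices not specified here.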

