Disentangling Information in Artificial Images of Plant Seedlings Using Semi-Supervised GAN

2019 ◽  
Vol 11 (22) ◽  
pp. 2671
Author(s):  
Simon Leminen Madsen ◽  
Anders Krogh Mortensen ◽  
Rasmus Nyholm Jørgensen ◽  
Henrik Karstoft

Lack of annotated data for training of deep learning systems is a challenge for many visual recognition tasks. This is especially true for domain-specific applications, such as plant detection and recognition, where the annotation process can be both time-consuming and error-prone. Generative models can be used to alleviate this issue by producing artificial data that mimic properties of real data. This work presents a semi-supervised generative adversarial network (GAN) model to produce artificial samples of plant seedlings. By applying the semi-supervised approach, we are able to produce visually distinct samples for nine unique plant species using a single GAN model, while still maintaining a relatively high visual variance in the produced samples for each species. Additionally, we are able to control the appearance of the generated samples with respect to rotation and size through a set of latent variables, despite these not being annotated features in the training data. The generated samples resemble the intended species with an average recognition accuracy of ∼64.3%, evaluated using an external state-of-the-art plant seedling classification model. Additionally, we explore the potential of using the GAN model’s discriminator as a quality assessment tool to remove poor representations of plant seedlings from the artificial samples.
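The controllable latent structure described in the abstract can be illustrated with a minimal sketch of how such a generator's input might be assembled: an unstructured noise vector, a one-hot species condition for the nine classes, and continuous codes for rotation and size (InfoGAN-style). All names and dimensions below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def make_generator_input(species_idx, rotation_code, size_code,
                         noise_dim=100, n_species=9, rng=None):
    """Assemble the latent input for a class-conditional GAN generator.

    The species label is one-hot encoded; rotation and size are
    continuous latent codes that are not annotated in the training
    data but can still steer the generated sample (InfoGAN-style).
    All names and dimensions here are illustrative, not from the paper.
    """
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(noise_dim)            # unstructured noise
    c = np.zeros(n_species)
    c[species_idx] = 1.0                          # species condition
    codes = np.array([rotation_code, size_code])  # controllable codes
    return np.concatenate([z, c, codes])

x = make_generator_input(species_idx=3, rotation_code=0.5, size_code=-0.2)
print(x.shape)  # (111,) = 100 noise + 9 classes + 2 codes
```

Varying only `rotation_code` or `size_code` while keeping the rest fixed is what would let one steer the appearance of a generated seedling.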

2021 ◽  
Vol 263 (5) ◽  
pp. 1527-1538
Author(s):  
Xenofon Karakonstantis ◽  
Efren Fernandez Grande

The characterization of Room Impulse Responses (RIRs) over an extended region of a room by means of measurements requires dense spatial sampling with many microphones, which can become intractable and time-consuming in practice. Well-established reconstruction methods such as plane wave regression show that the sound field in a room can be reconstructed from sparsely distributed measurements. However, these reconstructions usually rely on assuming physical sparsity (i.e., that few waves compose the sound field) or some other specific trait of the measured sound field, making the models less generalizable and problem-specific. In this paper we introduce a method to reconstruct a sound field in an enclosure with the use of a Generative Adversarial Network (GAN), which synthesizes new variants of the data distributions it is trained upon. The goal of the proposed GAN model is to estimate the underlying distribution of plane waves in any source-free region, and to map these distributions from a stochastic latent representation. The GAN is trained on a large number of synthesized sound fields represented by random wave fields and then tested on both simulated and real data sets of lightly damped and reverberant rooms.
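The "random wave field" training data mentioned above can be sketched as a superposition of plane waves with uniformly random directions and phases, evaluated at a few microphone positions. This is a generic diffuse-field construction under assumed parameters, not the authors' exact synthesis code.

```python
import numpy as np

def random_plane_wave_field(mic_xy, k, n_waves=500, rng=None):
    """Synthesize a 2-D random wave field: a superposition of plane
    waves with uniformly random propagation directions and phases.
    Such fields are a common model for diffuse sound in a room; the
    parameters here are illustrative, not those used by the authors."""
    rng = rng or np.random.default_rng(0)
    theta = rng.uniform(0, 2 * np.pi, n_waves)   # propagation directions
    phi = rng.uniform(0, 2 * np.pi, n_waves)     # random phases
    kx, ky = k * np.cos(theta), k * np.sin(theta)
    # complex pressure at each microphone position
    phase = mic_xy[:, 0:1] * kx + mic_xy[:, 1:2] * ky + phi
    return np.exp(1j * phase).sum(axis=1) / np.sqrt(n_waves)

mics = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
p = random_plane_wave_field(mics, k=2 * np.pi * 500 / 343)  # 500 Hz, c = 343 m/s
print(p.shape)  # (3,)
```

A GAN trained on many such realizations would then be asked to produce plane-wave distributions consistent with sparse microphone measurements.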


2021 ◽  
Vol 8 ◽  
Author(s):  
Rodrigo F. Cádiz ◽  
Agustín Macaya ◽  
Manuel Cartagena ◽  
Denis Parra

Deep learning, one of the fastest-growing branches of artificial intelligence, has become one of the most relevant research and development areas of recent years, especially since 2012, when a neural network surpassed the most advanced image classification techniques of the time. This spectacular development has not been alien to the world of the arts, as recent advances in generative networks have made possible the artificial creation of high-quality content such as images, movies or music. We believe that these novel generative models pose a great challenge to our current understanding of computational creativity. If a robot can now create music that an expert cannot distinguish from music composed by a human, or create novel musical entities that were not known at training time, or exhibit conceptual leaps, does it mean that the machine is creative? We believe that the emergence of these generative models clearly signals that much more research needs to be done in this area. We would like to contribute to this debate with two case studies of our own: TimbreNet, a variational auto-encoder network trained to generate audio-based musical chords, and StyleGAN Pianorolls, a generative adversarial network capable of creating short musical excerpts despite having been trained with images rather than musical data. We discuss and assess these generative models in terms of their creativity, and we show that they are in practice capable of learning musical concepts that are not obvious from the training data. Based on our current understanding of creativity in robots and machines, we hypothesize that these deep models can, in fact, be considered creative.


2020 ◽  
Vol 143 (3) ◽  
Author(s):  
Wei Chen ◽  
Faez Ahmed

Abstract Deep generative models are proven to be a useful tool for automatic design synthesis and design space exploration. When applied in engineering design, existing generative models face three challenges: (1) generated designs lack diversity and do not cover all areas of the design space, (2) it is difficult to explicitly improve the overall performance or quality of generated designs, and (3) existing models generally do not generate novel designs, outside the domain of the training data. In this article, we simultaneously address these challenges by proposing a new determinantal point process-based loss function for probabilistic modeling of diversity and quality. With this new loss function, we develop a variant of the generative adversarial network, named “performance augmented diverse generative adversarial network” (PaDGAN), which can generate novel high-quality designs with good coverage of the design space. By using three synthetic examples and one real-world airfoil design example, we demonstrate that PaDGAN can generate diverse and high-quality designs. In comparison to a vanilla generative adversarial network, on average, it generates samples with a 28% higher mean quality score with larger diversity and without the mode collapse issue. Unlike typical generative models that usually generate new designs by interpolating within the boundary of training data, we show that PaDGAN expands the design space boundary outside the training data towards high-quality regions. The proposed method is broadly applicable to many tasks including design space exploration, design optimization, and creative solution recommendation.
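The determinantal point process idea behind the loss can be sketched numerically: build a kernel that couples pairwise similarity with per-sample quality, and use its log-determinant as a joint diversity-plus-quality score. The function below is an illustrative stand-in for the PaDGAN loss term, not the authors' implementation; the RBF similarity and all parameters are assumptions.

```python
import numpy as np

def padgan_style_dpp_score(samples, quality, sigma=1.0):
    """Determinantal point process score used as a diversity+quality
    signal: build a kernel L[i, j] = q_i * S_ij * q_j from an RBF
    similarity S and per-sample quality q, then take log det(L).
    Higher values mean the batch is both diverse and high quality.
    A sketch of the idea behind PaDGAN's loss, not the exact code."""
    d2 = ((samples[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    S = np.exp(-d2 / (2 * sigma ** 2))            # pairwise similarity
    L = quality[:, None] * S * quality[None, :]   # quality-weighted kernel
    # small ridge keeps the determinant finite for duplicated samples
    sign, logdet = np.linalg.slogdet(L + 1e-6 * np.eye(len(quality)))
    return logdet

rng = np.random.default_rng(0)
diverse = rng.uniform(-1, 1, (8, 2))          # spread-out batch
collapsed = np.tile(diverse[0], (8, 1))       # mode-collapsed batch
q = np.ones(8)
print(padgan_style_dpp_score(diverse, q) > padgan_style_dpp_score(collapsed, q))  # True
```

Maximizing such a score during training penalizes mode collapse (identical samples drive the determinant toward zero) while rewarding high per-sample quality.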



2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Makoto Naruse ◽  
Takashi Matsubara ◽  
Nicolas Chauvet ◽  
Kazutaka Kanno ◽  
Tianyu Yang ◽  
...  

Abstract Generative adversarial networks (GANs) are becoming increasingly important in the artificial construction of natural images and related functionalities, wherein two types of networks called generators and discriminators evolve through adversarial mechanisms. Using deep convolutional neural networks and related techniques, high-resolution and highly realistic scenes, human faces, etc. have been generated. GANs generally require large amounts of genuine training data, as well as vast amounts of pseudorandom numbers. In this study, we utilized chaotic time series generated experimentally by semiconductor lasers as the latent variables of a GAN, whereby the inherent nature of chaos could be reflected in, or transformed into, the generated output data. We show that the similarity in proximity, which describes the robustness of the generated images with respect to minute changes in the input latent variables, is enhanced, while the overall versatility is not severely degraded. Furthermore, we demonstrate that a surrogate chaotic time series eliminates a signature originally observed in the generated images that corresponds to the negative autocorrelation inherent in the chaotic sequence. We also address the effects of utilizing chaotic time series to retrieve images from the trained generator.
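As a software stand-in for the experimentally measured laser chaos, the idea of feeding a chaotic time series into a GAN's latent space can be sketched with the logistic map. The map, the seed, and the standardization step are all illustrative substitutes chosen here, not the authors' setup.

```python
import numpy as np

def chaotic_latents(n_vectors, dim, x0=0.3, r=3.9999):
    """Generate GAN latent vectors from a chaotic time series.

    The paper uses chaos measured from semiconductor lasers; as a
    software stand-in this sketch iterates the logistic map in its
    chaotic regime, then standardizes the sequence so it can replace
    the usual Gaussian noise input of a generator."""
    n = n_vectors * dim
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)   # logistic-map iteration (chaotic regime)
        xs[i] = x
    xs = (xs - xs.mean()) / xs.std()   # zero mean, unit variance
    return xs.reshape(n_vectors, dim)

z = chaotic_latents(n_vectors=4, dim=100)
print(z.shape)  # (4, 100)
```

Each row would be passed to the trained generator in place of a pseudorandom latent vector, so that temporal correlations of the chaos carry over into the sequence of generated images.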


2020 ◽  
Vol 34 (04) ◽  
pp. 4140-4149
Author(s):  
Zhiwei Hong ◽  
Xiaocheng Fan ◽  
Tao Jiang ◽  
Jianxing Feng

Image denoising is a classic low-level vision problem that attempts to recover a noise-free image from a noisy observation. Recent advances in deep neural networks have outperformed traditional prior based methods for image denoising. However, the existing methods either require paired noisy and clean images for training or impose certain assumptions on the noise distribution and data types. In this paper, we present an end-to-end unpaired image denoising framework (UIDNet) that denoises images with only unpaired clean and noisy training images. The critical component of our model is a noise learning module based on a conditional Generative Adversarial Network (cGAN). The model learns the noise distribution from the input noisy images and uses it to transform the input clean images to noisy ones without any assumption on the noise distribution and data types. This process results in pairs of clean and pseudo-noisy images. Such pairs are then used to train another denoising network similar to the existing denoising methods based on paired images. The noise learning and denoising components are integrated together so that they can be trained end-to-end. Extensive experimental evaluation has been performed on both synthetic and real data including real photographs and computed tomography (CT) images. The results demonstrate that our model outperforms the previous models trained on unpaired images as well as the state-of-the-art methods based on paired training data when proper training pairs are unavailable.
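The pseudo-pair construction at the heart of the framework can be sketched as follows. Here the learned cGAN noise generator is replaced by a simple additive-noise callable, which is purely an illustrative stand-in for the noise learning module.

```python
import numpy as np

def make_pseudo_pairs(clean_images, noise_model, rng=None):
    """Turn unpaired clean images into (clean, pseudo-noisy) training
    pairs by applying a learned noise model, the key step of an
    unpaired denoising pipeline like UIDNet. Here the cGAN noise
    generator is replaced by a simple callable stand-in."""
    rng = rng or np.random.default_rng(0)
    return [(img, noise_model(img, rng)) for img in clean_images]

# Toy stand-in for the learned noise generator: additive Gaussian noise
# whose distribution would, in the real model, be inferred by the cGAN
# from the unpaired noisy images rather than assumed.
toy_noise = lambda img, rng: img + rng.normal(0.0, 0.1, img.shape)

clean = [np.zeros((8, 8)) for _ in range(3)]
pairs = make_pseudo_pairs(clean, toy_noise)
print(len(pairs), pairs[0][1].shape)  # 3 (8, 8)
```

The resulting pairs can then feed any supervised denoiser, which is what lets the framework reuse paired-training machinery without ever observing true clean/noisy pairs.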


2021 ◽  
Vol 5 (45) ◽  
pp. 736-748
Author(s):  
A.S. Konushin ◽  
B.V. Faizov ◽  
V.I. Shakhuro

Traffic sign recognition is a well-researched problem in computer vision. However, state-of-the-art methods work only for frequent sign classes that are well represented in training datasets. We consider the task of rare traffic sign detection and classification. We aim to solve this problem by using synthetic training data, obtained by embedding synthetic images of signs into real photos. We propose three methods for making the synthetic signs consistent in appearance with the scene. These methods are based on modern generative adversarial network (GAN) architectures. Our proposed methods allow realistic embedding of rare traffic sign classes that are absent from the training set. We adapt a variational autoencoder for sampling plausible locations of new traffic signs in images. We demonstrate that mixing our synthetic data with real data improves the accuracy of both the classifier and the detector.


Author(s):  
Jian Zhao ◽  
Lin Xiong ◽  
Yu Cheng ◽  
Yi Cheng ◽  
Jianshu Li ◽  
...  

Learning from synthetic faces, though appealing for its high data efficiency, may not bring satisfactory performance due to the distribution discrepancy between synthetic and real face images. To mitigate this gap, we propose a 3D-Aided Deep Pose-Invariant Face Recognition Model (3D-PIM), which automatically recovers realistic frontal faces from arbitrary poses through a 3D face model in a novel way. Specifically, 3D-PIM incorporates a simulator with the aid of a 3D Morphable Model (3D MM) to obtain shape and appearance priors that accelerate face normalization learning, requiring less training data. It further leverages a global-local Generative Adversarial Network (GAN) with multiple critical improvements as a refiner to enhance the realism of both global structures and local details of the face simulator's output using only unlabelled real data, while preserving identity information. Qualitative and quantitative experiments on both controlled and in-the-wild benchmarks clearly demonstrate the superiority of the proposed model over state-of-the-art methods.


2020 ◽  
pp. 1-11
Author(s):  
Xianzhong Jian ◽  
Xutao Wang

Existing methods for classification of power quality disturbance signals (PQDs) suffer from a tedious and imprecise signal feature selection process, classification accuracy that provides no guidance for feature extraction, and a lack of adequately labelled training data. To solve these problems, this paper proposes a new semi-supervised method for classification of PQDs based on a generative adversarial network (GAN). Firstly, a GAN model, which we call PQDGAN, is designed. After unsupervised pre-training with unlabelled training data, the trained discriminator is extracted and fine-tuned with supervised training on a small amount of labelled training data. Finally, the discriminator becomes a classifier with high accuracy. This model performs feature extraction and selection efficiently. In addition, only a small amount of labelled training data is used, which greatly reduces the dependence of the classification model on labelled data. Experiments show that this method achieves high classification accuracy, low computational cost, and strong robustness. It is a new semi-supervised method for classification of PQDs.
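The two-stage schedule described in the abstract (unsupervised GAN pre-training of the discriminator, then supervised fine-tuning with a small classification head) can be sketched with toy linear stand-ins. All sizes are placeholders, and the "training" itself is simulated by random weights; only the structure of the weight reuse is the point here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (unsupervised GAN pre-training, simulated): the discriminator
# learns a feature extractor W while telling real PQD signals from fakes.
feature_dim, signal_len, n_classes = 16, 64, 5     # illustrative sizes
W = rng.standard_normal((feature_dim, signal_len))  # pretrained features
real_fake_head = rng.standard_normal(feature_dim)   # discarded after stage 1

# Stage 2 (supervised fine-tuning): keep W, attach a small classification
# head and train it on a few labelled disturbances. Only the head is new,
# which is why little labelled data is needed.
class_head = rng.standard_normal((n_classes, feature_dim))

def classify(signal):
    feats = np.tanh(W @ signal)          # reused GAN-discriminator features
    return int(np.argmax(class_head @ feats))

label = classify(rng.standard_normal(signal_len))
print(0 <= label < n_classes)  # True
```

In the real model both stages would use gradient-based training of a deep discriminator; the sketch only shows which parameters are kept and which are replaced between the stages.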


Author(s):  
Annapoorani Gopal ◽  
Lathaselvi Gandhimaruthian ◽  
Javid Ali

Deep neural networks have gained prominence in the biomedical domain, becoming one of the most commonly used machine learning technologies. Mammograms can be used to detect breast cancers with high precision with the help of the Convolutional Neural Network (CNN), a deep learning technique. Exhaustively labelled data are required to train a CNN from scratch. This can be overcome by deploying a Generative Adversarial Network (GAN), which comparatively needs less training data during mammogram screening. In the proposed study, the applications of GANs in estimating breast density, synthesizing high-resolution mammograms for clustered microcalcification analysis, effectively segmenting breast tumors, analyzing breast tumor shape, extracting features, and augmenting images during mammogram classification have been extensively reviewed.

