Conditional Generative Adversarial Network for Structured Domain Adaptation

Author(s):  
Weixiang Hong ◽  
Zhenzhen Wang ◽  
Ming Yang ◽  
Junsong Yuan
Author(s):  
Cara Murphy ◽  
John Kerekes

The classification of trace chemical residues through active spectroscopic sensing is challenging due to the lack of physics-based models that can accurately predict spectra. To overcome this challenge, we leveraged the field of domain adaptation to translate data from the simulated to the measured domain for training a classifier. We developed the first 1D conditional generative adversarial network (GAN) to perform spectrum-to-spectrum translation of reflectance signatures. We applied the 1D conditional GAN to a library of simulated spectra and quantified the improvement in classification accuracy on real data using the translated spectra for training the classifier. Using the GAN-translated library, the average classification accuracy increased from 0.622 to 0.723 on real chemical reflectance data, including data from chemicals not included in the GAN training set.
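The authors' 1D conditional GAN itself is not reproduced here, but the adversarial objective such a spectrum-to-spectrum translator optimizes can be sketched numerically. This is a minimal NumPy illustration; `cgan_losses` and the toy probabilities are hypothetical, not the paper's code:

```python
import numpy as np

def cgan_losses(d_real, d_fake, eps=1e-8):
    """Standard conditional-GAN objectives, given the discriminator's
    probabilities on real (measured) spectra and on generator-translated
    spectra. In a conditional GAN, both are judged alongside the simulated
    input spectrum that conditions the generator."""
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))  # non-saturating generator loss
    return d_loss, g_loss

# A confident discriminator (real ~ 1, fake ~ 0) yields a small d_loss and
# a large g_loss, which pushes the generator to produce more realistic spectra.
d_loss, g_loss = cgan_losses(np.array([0.90, 0.95]), np.array([0.10, 0.05]))
```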


2020 ◽  
Vol 34 (03) ◽  
pp. 2661-2668
Author(s):  
Chuang Lin ◽  
Sicheng Zhao ◽  
Lei Meng ◽  
Tat-Seng Chua

Existing domain adaptation methods for visual sentiment classification are typically investigated under the single-source scenario, where knowledge learned from a source domain with sufficient labeled data is transferred to a target domain of loosely labeled or unlabeled data. However, in practice, data from a single source domain usually have a limited volume and can hardly cover the characteristics of the target domain. In this paper, we propose a novel multi-source domain adaptation (MDA) method, termed Multi-source Sentiment Generative Adversarial Network (MSGAN), for visual sentiment classification. To handle data from multiple source domains, it learns to find a unified sentiment latent space where data from both the source and target domains share a similar distribution. This is achieved via cycle-consistent adversarial learning in an end-to-end manner. Extensive experiments conducted on four benchmark datasets demonstrate that MSGAN significantly outperforms state-of-the-art MDA approaches for visual sentiment classification.
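The cycle-consistent adversarial learning that MSGAN relies on can be sketched generically: a sample mapped to the other domain and back should reconstruct itself. A minimal NumPy sketch of that cycle term, with toy generators standing in for the learned networks:

```python
import numpy as np

def cycle_consistency_loss(G_ab, G_ba, x_a):
    """L1 penalty on mapping a sample to the other domain and back:
    || G_ba(G_ab(x_a)) - x_a ||_1. A generic sketch of the cycle term,
    not MSGAN's actual networks."""
    return np.mean(np.abs(G_ba(G_ab(x_a)) - x_a))

# Toy generators that exactly invert each other give zero cycle loss.
G_ab = lambda x: 2.0 * x
G_ba = lambda x: 0.5 * x
x = np.random.rand(4, 32, 32, 3)
loss = cycle_consistency_loss(G_ab, G_ba, x)
```

In practice the two generators are neural networks and this term is added to the adversarial losses to keep the shared latent space semantically consistent.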


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4718
Author(s):  
Tho Nguyen Duc ◽  
Chanh Minh Tran ◽  
Phan Xuan Tan ◽  
Eiji Kamioka

Imitation learning is an effective approach for an autonomous agent to learn control policies when an explicit reward function is unavailable, using demonstrations provided by an expert. However, standard imitation learning methods assume that the agent and the expert demonstrations share the same domain configuration. This assumption makes the learned policies difficult to apply in another, distinct domain. The problem is formalized as domain-adaptive imitation learning: learning how to perform a task optimally in a learner domain, given demonstrations of the task in a distinct expert domain. We address the problem by proposing a model based on the Generative Adversarial Network. The model learns both domain-shared and domain-specific features and utilizes them to find an optimal policy across domains. Experimental results show the effectiveness of our model on a number of tasks, ranging from low-dimensional control to complex, high-dimensional problems.


2020 ◽  
Vol 34 (04) ◽  
pp. 3462-3469
Author(s):  
Jiawei Chen ◽  
Yuexiang Li ◽  
Kai Ma ◽  
Yefeng Zheng

Endoscopic videos from multiple centres often have different imaging conditions, e.g., color and illumination, which makes models trained on one domain usually fail to generalize well to another. Domain adaptation is one of the potential solutions to this problem. However, few existing works have focused on the translation of video-based data. In this work, we propose a novel generative adversarial network (GAN), namely VideoGAN, to transfer video-based data across different domains. As the frames of a video may have similar content and imaging conditions, the proposed VideoGAN has an X-shape generator to preserve intra-video consistency during translation. Furthermore, a loss function, namely the color histogram loss, is proposed to tune the color distribution of each translated frame. Two colonoscopic datasets from different centres, i.e., CVC-Clinic and ETIS-Larib, are adopted to evaluate the domain adaptation performance of our VideoGAN. Experimental results demonstrate that the adapted colonoscopic videos generated by our VideoGAN can significantly boost the segmentation accuracy of colorectal polyps, i.e., an improvement of 5%, on multicentre datasets. As our VideoGAN is a general network architecture, we also evaluate its performance with the CamVid driving video dataset on the cloudy-to-sunny translation task. Comprehensive experiments show that the domain gap can be substantially narrowed by our VideoGAN.
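A color histogram loss of the kind named above compares the color distributions of two frames rather than their pixels. The following NumPy function is a hypothetical stand-in for VideoGAN's loss, illustrating the idea of an L1 distance between per-channel normalized histograms:

```python
import numpy as np

def color_histogram_loss(frame_a, frame_b, bins=16):
    """L1 distance between per-channel normalized color histograms of two
    frames (values assumed in [0, 1]). An illustrative sketch, not the
    authors' implementation; bin count is a hypothetical choice."""
    total = 0.0
    for c in range(frame_a.shape[-1]):
        ha, _ = np.histogram(frame_a[..., c], bins=bins, range=(0.0, 1.0))
        hb, _ = np.histogram(frame_b[..., c], bins=bins, range=(0.0, 1.0))
        total += np.abs(ha / ha.sum() - hb / hb.sum()).sum()
    return total / frame_a.shape[-1]

frame = np.random.rand(64, 64, 3)
same = color_histogram_loss(frame, frame)                          # identical colors
shifted = color_histogram_loss(frame, np.clip(frame + 0.3, 0.0, 1.0))  # brightened copy
```

Because the loss depends only on histograms, it can steer a translated frame's overall color distribution toward the target domain without constraining individual pixel locations.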


2021 ◽  
Author(s):  
Surabhi Sinha ◽  
Sophia I. Thomopoulos ◽  
Pradeep Lam ◽  
Alexandra Muir ◽  
Paul M. Thompson

Alzheimer's disease (AD) accounts for 60% of dementia cases worldwide; patients with the disease typically suffer from irreversible memory loss and progressive decline in multiple cognitive domains. With brain imaging techniques such as magnetic resonance imaging (MRI), microscopic brain changes are detectable even before abnormal memory loss is detected clinically. Patterns of brain atrophy can be measured using MRI, which gives us an opportunity to facilitate AD detection using image classification techniques. Even so, MRI scanning protocols and scanners differ across studies. The resulting differences in image contrast and signal-to-noise ratio make it important to train and test classification models on multiple datasets, and to handle shifts in image characteristics across protocols (also known as domain transfer or domain adaptation). Here, we examined whether adversarial domain adaptation can boost the performance of a Convolutional Neural Network (CNN) model designed to classify AD. To test this, we used an Attention-Guided Generative Adversarial Network (AG-GAN) to harmonize images from three publicly available brain MRI datasets - ADNI, AIBL and OASIS - adjusting for scanner-dependent effects. Our AG-GAN optimized a joint objective function that included attention loss, pixel loss, cycle-consistency loss and adversarial loss; the model was trained bidirectionally in an end-to-end fashion. For AD classification, we adapted the popular 2D AlexNet CNN to handle 3D images. Classification based on harmonized MR images significantly outperformed classification based on the three datasets in non-harmonized form, motivating further work on image harmonization using adversarial techniques.
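The joint objective described above combines several weighted loss terms, with attention guidance concentrating the pixel penalty on salient regions. A minimal NumPy sketch of that structure; the `attention_pixel_loss` helper and the lambda weights are hypothetical, not the authors' AG-GAN code or reported hyperparameters:

```python
import numpy as np

def attention_pixel_loss(real, fake, attn):
    """Pixel-wise L1 difference modulated by an attention map in [0, 1],
    so the penalty concentrates on salient (e.g. brain) regions."""
    return np.mean(attn * np.abs(real - fake))

def joint_objective(l_adv, l_pix, l_cyc, l_attn,
                    lam_pix=10.0, lam_cyc=10.0, lam_attn=1.0):
    # Weighted sum of the four terms named in the abstract; the lambda
    # weights are illustrative placeholders.
    return l_adv + lam_pix * l_pix + lam_cyc * l_cyc + lam_attn * l_attn

real = np.random.rand(32, 32)
fake = np.random.rand(32, 32)
focused = attention_pixel_loss(real, fake, np.zeros_like(real))  # fully masked out
full = attention_pixel_loss(real, fake, np.ones_like(real))      # plain L1 loss
```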


Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 583
Author(s):  
Gabriel Villalonga ◽  
Joost Van de Weijer ◽  
Antonio M. López

On-board vision systems may need to increase the number of classes that can be recognized in a relatively short period. For instance, a traffic sign recognition system may suddenly be required to recognize new signs. Since collecting and annotating samples of such new classes may need more time than we wish, especially for uncommon signs, we propose a method to generate these samples by combining synthetic images and Generative Adversarial Network (GAN) technology. In particular, the GAN is trained on synthetic and real-world samples from known classes to perform synthetic-to-real domain adaptation, but applied to synthetic samples of the new classes. Using the Tsinghua dataset with a synthetic counterpart, SYNTHIA-TS, we have run an extensive set of experiments. The results show that the proposed method is indeed effective, provided that we use a proper Convolutional Neural Network (CNN) to perform the traffic sign recognition (classification) task as well as a proper GAN to transform the synthetic images. Here, a ResNet101-based classifier and domain adaptation based on CycleGAN performed extremely well for a new-to-known class ratio of ∼1/4; even for more challenging ratios such as ∼4/1, the results remain very positive.


2019 ◽  
Vol 11 (22) ◽  
pp. 2631 ◽  
Author(s):  
Bo Fang ◽  
Rong Kou ◽  
Li Pan ◽  
Pengfei Chen

Since manually labeling aerial images for pixel-level classification is expensive and time-consuming, developing strategies for land cover mapping without reference labels is essential and meaningful. As an efficient solution to this issue, domain adaptation has been widely utilized in numerous semantic labeling-based applications. However, current approaches generally pursue marginal distribution alignment between the source and target features and ignore category-level alignment. Therefore, directly applying them to land cover mapping leads to unsatisfactory performance in the target domain. In our research, to address this problem, we embed a geometry-consistent generative adversarial network (GcGAN) into a co-training adversarial learning network (CtALN), and then develop a category-sensitive domain adaptation (CsDA) method for land cover mapping using very-high-resolution (VHR) optical aerial images. The GcGAN aims to eliminate the domain discrepancies between labeled and unlabeled images while retaining their intrinsic land cover information by translating the features of the labeled images from the source domain to the target domain. Meanwhile, the CtALN aims to learn a semantic labeling model in the target domain with the translated features and corresponding reference labels. By training this hybrid framework, our method learns to distill knowledge from the source domain and transfer it to the target domain, preserving not only global domain consistency but also category-level consistency between labeled and unlabeled images in the feature space. The experimental results on two airborne benchmark datasets and the comparison with other state-of-the-art methods verify the robustness and superiority of our proposed CsDA.
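The geometry-consistency idea behind GcGAN can be sketched as a constraint: translating a geometrically transformed input (e.g. a rotation) and then undoing the transform should match translating the original input directly. A minimal NumPy illustration with toy functions; the pixel-wise "translator" here is hypothetical and chosen so the property holds exactly:

```python
import numpy as np

def geometry_consistency_loss(G, T, T_inv, x):
    """GcGAN-style constraint: || T_inv(G(T(x))) - G(x) ||_1 should be
    small, i.e. translation G commutes with the geometric transform T.
    An illustrative sketch, not the authors' implementation."""
    return np.mean(np.abs(T_inv(G(T(x))) - G(x)))

G = lambda x: np.clip(1.2 * x, 0.0, 1.0)        # toy pixel-wise "translator"
T = lambda x: np.rot90(x, k=1, axes=(0, 1))     # 90-degree rotation
T_inv = lambda x: np.rot90(x, k=-1, axes=(0, 1))
x = np.random.rand(8, 8, 3)
loss = geometry_consistency_loss(G, T, T_inv, x)  # zero: pixel-wise G commutes with rotation
```

For a learned generator this term is added as a regularizer, discouraging translations that depend on absolute image geometry rather than land cover content.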

