Source data‐free domain adaptation of object detector through domain‐specific perturbation

Author(s):  
Lin Xiong ◽  
Mao Ye ◽  
Dan Zhang ◽  
Yan Gan ◽  
Yiguang Liu

2021 ◽  
pp. 108436

Author(s):  
Si Wu ◽  
Jian Zhong ◽  
Wenming Cao ◽  
Rui Li ◽  
Zhiwen Yu ◽  
...  

For unsupervised domain adaptation, the process of learning domain-invariant representations can be dominated by the labeled source data, such that the specific characteristics of the target domain may be ignored. In order to improve the performance of inferring target labels, we propose a target-specific network which learns collaboratively with a domain adaptation network, instead of directly minimizing domain discrepancy. A clustering regularization is also utilized to improve the generalization capability of the target-specific network by forcing target data points to be close to accumulated class centers. As this network learns and specializes to the target domain, its performance in inferring target labels improves, which in turn facilitates the learning process of the adaptation network. Therefore, there is a mutually beneficial relationship between these two networks. We perform extensive experiments on multiple digit and object datasets, and the effectiveness and superiority of the proposed approach are demonstrated on multiple visual adaptation benchmarks, e.g., we improve the state-of-the-art on the task of MNIST→SVHN from 76.5% to 84.9% without specific augmentation.
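The clustering regularization described above pulls target features toward per-class centers that are accumulated over training. A minimal sketch of one common way to realize this (exponential-moving-average centers plus a squared-distance penalty; function names and the momentum value are hypothetical, not the authors' implementation):

```python
import numpy as np

def update_centers(centers, feats, pseudo_labels, momentum=0.9):
    """Accumulate per-class centers with an exponential moving average
    over the features assigned to each (pseudo-)class in the batch."""
    centers = centers.copy()
    for c in np.unique(pseudo_labels):
        batch_mean = feats[pseudo_labels == c].mean(axis=0)
        centers[c] = momentum * centers[c] + (1 - momentum) * batch_mean
    return centers

def clustering_loss(feats, pseudo_labels, centers):
    """Mean squared distance from each target feature to its class center;
    minimizing this forces target points close to the accumulated centers."""
    diffs = feats - centers[pseudo_labels]
    return float((diffs ** 2).sum(axis=1).mean())
```

In practice such a loss would be added to the target-specific network's objective, with centers updated each iteration from current pseudo-labels.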


Author(s):  
Jogendra Nath Kundu ◽  
Naveen Venkat ◽  
M. V. Rahul ◽  
R. Venkatesh Babu

2020 ◽  
Vol 34 (07) ◽  
pp. 11386-11393 ◽  
Author(s):  
Shuang Li ◽  
Chi Liu ◽  
Qiuxia Lin ◽  
Binhui Xie ◽  
Zhengming Ding ◽  
...  

Tremendous research efforts have been made to advance deep domain adaptation (DA) by seeking domain-invariant features. Most existing deep DA models only focus on aligning feature representations of task-specific layers across domains while using a fully shared convolutional architecture for source and target. However, we argue that such strongly shared convolutional layers might be harmful for domain-specific feature learning when the source and target data distributions differ to a large extent. In this paper, we relax the shared-convnets assumption made by previous DA methods and propose a Domain Conditioned Adaptation Network (DCAN), which aims to excite distinct convolutional channels with a domain conditioned channel attention mechanism. As a result, the critical low-level domain-dependent knowledge can be explored appropriately. As far as we know, this is the first work to explore domain-wise convolutional channel activation for deep DA networks. Moreover, to effectively align high-level feature distributions across the two domains, we further deploy domain conditioned feature correction blocks after the task-specific layers, which explicitly correct the domain discrepancy. Extensive experiments on three cross-domain benchmarks demonstrate that the proposed approach outperforms existing methods by a large margin, especially on very tough cross-domain learning tasks.
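The domain conditioned channel attention above can be pictured as a squeeze-and-excitation-style gate with separate excitation branches per domain. A minimal sketch under that assumption (single-layer branches, names hypothetical; DCAN's actual architecture is more elaborate):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def domain_conditioned_attention(feat, domain, w_src, w_tgt):
    """Re-weight convolutional channels with a domain-specific gate.

    feat:         (N, C, H, W) convolutional feature map
    domain:       'source' or 'target', selecting the excitation branch
    w_src, w_tgt: (C, C) weights of the two hypothetical branches
    """
    squeezed = feat.mean(axis=(2, 3))          # global average pool -> (N, C)
    w = w_src if domain == 'source' else w_tgt
    gate = sigmoid(squeezed @ w)               # channel-wise gate in (0, 1)
    return feat * gate[:, :, None, None]       # excite/suppress channels
```

Because the two branches have independent weights, source and target inputs can activate distinct subsets of channels, which is the intuition behind domain-wise channel activation.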


2020 ◽  
Vol 39 (6) ◽  
pp. 8149-8159
Author(s):  
Ping Li ◽  
Zhiwei Ni ◽  
Xuhui Zhu ◽  
Juan Song

Domain adaptation (DA) aims to train a robust predictor by transferring rich knowledge from a well-labeled source domain to annotate a newly arriving target domain; however, the two domains are usually drawn from very different distributions. Most current methods either learn common features by matching inter-domain feature distributions and training the classifier separately, or align inter-domain label distributions to directly obtain an adaptive classifier based on the original features despite feature distortion. Moreover, intra-domain information may be greatly degraded during the DA process; i.e., source data samples from different classes might grow closer. To this end, this paper proposes a novel DA approach, referred to as inter-class distribution alienation and inter-domain distribution alignment based on manifold embedding (IDAME). Specifically, IDAME adapts the classifier on the Grassmann manifold using structural risk minimization, where inter-domain feature distributions are aligned to mitigate feature distortion, and the target pseudo-labels are exploited using distances on the Grassmann manifold. During the classifier adaptation process, we simultaneously consider inter-class distribution alienation, inter-domain distribution alignment, and manifold consistency. Extensive experiments validate that IDAME can outperform several comparative state-of-the-art methods on real-world cross-domain image datasets.
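The two core ingredients above, embedding both domains into a shared subspace and then aligning their distributions, can be illustrated with a deliberately simplified sketch: a pooled PCA projection as a crude stand-in for the Grassmann-manifold embedding, and the squared distance between domain means (a linear-kernel MMD) as the alignment statistic. All names are hypothetical; this is not IDAME's actual procedure:

```python
import numpy as np

def subspace_embed(X_s, X_t, dim=2):
    """Crude stand-in for manifold embedding: project both domains onto
    the top principal directions of the pooled, mean-centered data."""
    X = np.vstack([X_s, X_t])
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    P = Vt[:dim].T                      # (d, dim) projection basis
    return (X_s - mu) @ P, (X_t - mu) @ P

def mean_discrepancy(Z_s, Z_t):
    """Squared distance between the embedded domain means; zero when
    the two domains coincide in the shared subspace."""
    d = Z_s.mean(axis=0) - Z_t.mean(axis=0)
    return float(d @ d)
```

Minimizing such a discrepancy while fitting the classifier in the embedded space is the general pattern that subspace-based DA methods, including manifold-embedding ones, build upon.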

