"Effective Visual Domain Adaptation via Generative Adversarial Distribu" by Qi Kang, Siya Yao et al.
 

Effective Visual Domain Adaptation via Generative Adversarial Distribution Matching

Document Type

Article

Publication Date

9-1-2021

Abstract

In the field of computer vision, it is challenging to train an accurate model without sufficient labeled images. However, through visual adaptation from a source to a target domain, a relevant labeled dataset can help solve this problem. Many methods apply adversarial learning to diminish the cross-domain distribution difference and can thereby greatly enhance performance on target classification tasks. The generative adversarial network (GAN) loss is widely used in adversarial adaptation learning methods to reduce the cross-domain distribution difference. However, it becomes difficult to reduce this difference when the generator or discriminator in the GAN fails to work as expected and degrades in performance. To solve such cross-domain classification problems, we put forward a novel adaptation framework called generative adversarial distribution matching (GADM). In GADM, we improve the objective function by taking the cross-domain discrepancy distance into consideration and further minimize this difference through the competition between the generator and discriminator, thereby greatly decreasing the cross-domain distribution difference. Experimental results and comparisons with several state-of-the-art methods verify GADM's superiority in image classification across domains.
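
As a rough illustration of the idea described in the abstract (not the authors' exact formulation, which is not given here), the sketch below combines a feature-level GAN loss with a maximum mean discrepancy (MMD) term standing in for the cross-domain discrepancy distance. The networks G and D, the RBF kernel, the batch sizes, and the trade-off weight lambda_mmd are all illustrative assumptions.

```python
import torch
import torch.nn as nn

def gaussian_mmd(x, y, sigma=1.0):
    """Squared MMD between two feature batches with an RBF kernel (a common
    choice of cross-domain discrepancy distance; the paper's exact metric is
    not specified in the abstract)."""
    def rbf(a, b):
        dist = torch.cdist(a, b) ** 2
        return torch.exp(-dist / (2 * sigma ** 2))
    return rbf(x, x).mean() + rbf(y, y).mean() - 2 * rbf(x, y).mean()

# Hypothetical networks: a feature generator G and a domain discriminator D.
G = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))
D = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

bce = nn.BCEWithLogitsLoss()
src, tgt = torch.randn(32, 256), torch.randn(32, 256)  # toy source/target batches
f_src, f_tgt = G(src), G(tgt)

# Discriminator loss: tell source features (label 1) from target features (label 0).
d_loss = bce(D(f_src.detach()), torch.ones(32, 1)) + \
         bce(D(f_tgt.detach()), torch.zeros(32, 1))

# Generator loss: fool the discriminator while also shrinking the discrepancy
# distance between source and target feature distributions.
lambda_mmd = 0.5  # illustrative trade-off weight
g_loss = bce(D(f_tgt), torch.ones(32, 1)) + lambda_mmd * gaussian_mmd(f_src, f_tgt)
```

Alternating updates of d_loss and g_loss drive the generator to produce target features that both fool the discriminator and lie close to the source features under the discrepancy measure, which is the general mechanism the abstract describes.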

Identifier

85091316102 (Scopus)

Publication Title

IEEE Transactions on Neural Networks and Learning Systems

External Full Text Location

https://doi.org/10.1109/TNNLS.2020.3016180

e-ISSN

2162-2388

ISSN

2162-237X

PubMed ID

32915748

First Page

3919

Last Page

3929

Issue

9

Volume

32

Grant

51775385

Fund Ref

National Natural Science Foundation of China

