MU-GAN: Facial Attribute Editing Based on Multi-Attention Mechanism

Document Type

Article

Publication Date

9-1-2021

Abstract

Facial attribute editing has two main objectives: 1) translating an image from a source domain to a target one, and 2) changing only the facial regions related to a target attribute while preserving attribute-excluding details. In this work, we propose a multi-attention U-Net-based generative adversarial network (MU-GAN). First, we replace the classic convolutional encoder-decoder in the generator with a symmetric U-Net-like structure, and apply an additive attention mechanism to build attention-based U-Net connections that adaptively transfer encoder representations to complement the decoder with attribute-excluding details and enhance attribute editing ability. Second, a self-attention (SA) mechanism is incorporated into the convolutional layers to model long-range, multi-level dependencies across image regions. Experimental results indicate that our method balances attribute editing ability against detail preservation and can decouple correlations among attributes. It outperforms state-of-the-art methods in terms of attribute manipulation accuracy and image quality. Our code is available at https://github.com/SuSir1996/MU-GAN.
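
The abstract names two attention mechanisms: an additive attention gate on the U-Net skip connections and a self-attention layer in the convolutional path. Below is a minimal PyTorch sketch of both, assuming an Attention-U-Net-style additive gate and a SAGAN-style self-attention block; the module names, channel sizes, and exact gating formulation are illustrative assumptions, not the authors' released implementation (see the GitHub link above for that).

```python
# Hypothetical sketch of the two attention mechanisms described in the
# abstract; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdditiveAttentionGate(nn.Module):
    """Gates an encoder feature map with a decoder (gating) signal so the
    skip connection passes attribute-excluding details selectively."""

    def __init__(self, enc_channels, dec_channels, inter_channels):
        super().__init__()
        self.theta = nn.Conv2d(enc_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(dec_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, enc_feat, dec_feat):
        # Project both inputs to a shared space, add, and squash to a
        # spatial attention mask in [0, 1].
        g = self.phi(dec_feat)
        if g.shape[-2:] != enc_feat.shape[-2:]:
            g = F.interpolate(g, size=enc_feat.shape[-2:],
                              mode="bilinear", align_corners=False)
        mask = torch.sigmoid(self.psi(F.relu(self.theta(enc_feat) + g)))
        return enc_feat * mask  # attention-weighted skip connection


class SelfAttention2d(nn.Module):
    """Self-attention over all spatial positions of a conv feature map,
    modeling long-range dependencies across image regions."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C//8)
        k = self.key(x).flatten(2)                    # (B, C//8, HW)
        v = self.value(x).flatten(2)                  # (B, C, HW)
        attn = torch.softmax(q @ k, dim=-1)           # (B, HW, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual connection


# Usage example with illustrative shapes: gate a 64-channel encoder map
# with a 128-channel decoder signal at half resolution.
enc = torch.randn(1, 64, 32, 32)
dec = torch.randn(1, 128, 16, 16)
gated = AdditiveAttentionGate(64, 128, 32)(enc, dec)  # -> (1, 64, 32, 32)
refined = SelfAttention2d(64)(gated)                  # -> (1, 64, 32, 32)
```
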

Identifier

85112252032 (Scopus)

Publication Title

IEEE/CAA Journal of Automatica Sinica

External Full Text Location

https://doi.org/10.1109/JAS.2020.1003390

e-ISSN

2329-9274

ISSN

2329-9266

First Page

1614

Last Page

1626

Issue

9

Volume

8

Grant

61302163

Fund Ref

Nvidia
