"Fairness via Group Contribution Matching" by Tianlin Li, Zhiming Li et al.
 

Fairness via Group Contribution Matching

Document Type

Conference Proceeding

Publication Date

1-1-2023

Abstract

Fairness issues in deep learning models have recently received increasing attention due to their significant societal impact. Although methods for mitigating unfairness are constantly proposed, little research has been conducted to understand how discrimination and bias develop during the standard training process. In this study, we propose analyzing the contribution of each subgroup (i.e., a group of data samples sharing the same sensitive attribute) to the training process in order to understand how such bias develops. We propose a gradient-based metric to assess training subgroup contribution disparity, showing that unequal contributions from different subgroups are one source of such unfairness. One way to balance the contribution of each subgroup is through oversampling, which ensures that an equal number of samples is drawn from each subgroup during each training iteration. However, we find that even with a balanced number of samples, the contribution of each subgroup remains unequal, so unfairness persists under such a strategy. To address these issues, we propose a simple but effective group contribution matching (GCM) method to match the contribution of each subgroup. Our experiments show that GCM effectively improves fairness and significantly outperforms existing methods.
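The abstract's core idea can be illustrated with a minimal sketch: measure each subgroup's training contribution as the norm of its average loss gradient, then rescale the per-subgroup gradients so their norms match before taking an update step. This is a hypothetical toy implementation on logistic regression, not the authors' actual GCM method; all function names and the rescaling rule here are assumptions for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def subgroup_gradients(w, X, y, groups):
    """Average logistic-loss gradient for each sensitive subgroup."""
    grads = {}
    for g in np.unique(groups):
        Xg, yg = X[groups == g], y[groups == g]
        p = sigmoid(Xg @ w)
        # Mean gradient of the cross-entropy loss over subgroup g.
        grads[g] = Xg.T @ (p - yg) / len(yg)
    return grads

def matched_step(w, X, y, groups, lr=0.1):
    """One update step with subgroup contributions matched by gradient norm.

    Each subgroup's gradient is rescaled to a common target norm, so no
    subgroup dominates the update (a toy stand-in for contribution matching).
    """
    grads = subgroup_gradients(w, X, y, groups)
    target = np.mean([np.linalg.norm(g) for g in grads.values()])
    total = sum(
        (target / (np.linalg.norm(g) + 1e-12)) * g for g in grads.values()
    )
    return w - lr * total / len(grads)

# Toy data: feature X, binary label y, binary sensitive attribute `groups`.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
groups = rng.integers(0, 2, size=200)
y = (X[:, 0] + 0.5 * groups > 0).astype(float)

w = np.zeros(3)
for _ in range(100):
    w = matched_step(w, X, y, groups)
```

By construction, the rescaled subgroup gradients entering each step have equal norms, which is the balancing property the abstract attributes to matching contributions rather than merely balancing sample counts.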

Identifier

85170355946 (Scopus)

ISBN

9781956792034

Publication Title

IJCAI International Joint Conference on Artificial Intelligence

External Full Text Location

https://doi.org/10.24963/ijcai.2023/49

ISSN

1045-0823

First Page

436

Last Page

445

Volume

2023-August

Grant

AISG2-RP-2020-019

Fund Ref

National Research Foundation Singapore

