ProxiMix: Enhancing Fairness with Proximity Samples in Subgroups

Document Type

Conference Proceeding

Publication Date

1-1-2024

Abstract

Many bias mitigation methods have been developed to address fairness issues in machine learning. We have found that using linear mixup, a data augmentation technique, on its own for bias mitigation can still retain biases present in dataset labels. The research presented in this paper addresses this issue by proposing a novel pre-processing strategy in which an existing mixup method and our new bias mitigation algorithm are combined to improve the generation of labels for augmented samples, making them proximity aware. Specifically, we propose ProxiMix, which keeps both pairwise and proximity relationships for fairer data augmentation. We have conducted thorough experiments with three datasets, three ML models, and different hyperparameter settings. Our experimental results show the effectiveness of ProxiMix from both the fairness-of-predictions and fairness-of-recourse perspectives.
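As an illustrative sketch only, not the authors' published algorithm, the snippet below shows the general idea the abstract describes: linear mixup generates synthetic samples as convex combinations of sample pairs, and the synthetic label is then blended with the labels of the new point's nearest original neighbours so that it also reflects its local neighbourhood. The function name proximix_style_augment and the parameters k and w are hypothetical.

```python
import numpy as np

def proximix_style_augment(X, Y, n_new=100, alpha=0.4, k=5, w=0.5, seed=0):
    """Generate synthetic samples with linear mixup, blending the pairwise
    mixup label with labels of the new point's k nearest original neighbours
    (an illustrative, proximity-aware labelling sketch)."""
    rng = np.random.default_rng(seed)
    X_aug, Y_aug = [], []
    for _ in range(n_new):
        i, j = rng.choice(len(X), size=2, replace=False)   # pick a sample pair
        lam = rng.beta(alpha, alpha)                       # mixup coefficient
        x_new = lam * X[i] + (1.0 - lam) * X[j]            # mixed features
        y_pair = lam * Y[i] + (1.0 - lam) * Y[j]           # pairwise mixup label
        dists = np.linalg.norm(X - x_new, axis=1)          # distance to originals
        y_prox = Y[np.argsort(dists)[:k]].mean()           # neighbourhood label
        X_aug.append(x_new)
        Y_aug.append(w * y_pair + (1.0 - w) * y_prox)      # blended soft label
    return np.vstack(X_aug), np.array(Y_aug)
```

Setting w = 1 recovers plain linear mixup labels; smaller values of w give more weight to the neighbourhood estimate, which is the proximity-aware behaviour the abstract refers to.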

Identifier

85210029563 (Scopus)

Publication Title

CEUR Workshop Proceedings

ISSN

1613-0073

Volume

3808

Grant

EP/W524414/1/2894964
