Document Type
Thesis
Date of Award
5-31-2022
Degree Name
Master of Science in Computer Engineering - (M.S.)
Department
Electrical and Computer Engineering
First Advisor
Abdallah Khreishah
Second Advisor
Cong Wang
Third Advisor
Hai Nhat Phan
Fourth Advisor
Qing Gary Liu
Abstract
Machine learning models have been shown to be vulnerable to various backdoor and data poisoning attacks that adversely affect model behavior. These attacks have also been shown to induce unfair predictions with respect to certain protected features. In federated learning, where multiple local models contribute to a single global model by communicating only local gradients, attacks become more prevalent and complex. Previously published works address these issues both individually and jointly; however, there has been little study of the effects of such attacks on model fairness. This work demonstrates that a flexible attack, which we call Un-Fair Trojan, can target model fairness while remaining stealthy and can have devastating effects on machine learning models.
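For context, the sketch below illustrates the setting the abstract describes: a simple federated-averaging round in which one malicious client poisons labels only for a protected group, skewing the aggregated global model's fairness. It is a minimal illustration under assumed names and hyperparameters, not the Un-Fair Trojan implementation from the thesis.

```python
# Illustrative sketch (not the thesis implementation): one FedAvg training
# loop with logistic regression, where a single malicious client flips
# labels only for protected-group samples, biasing the global model.
import numpy as np

rng = np.random.default_rng(0)

def local_gradient(w, X, y):
    """Mean logistic-regression gradient on a client's local data."""
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
    return X.T @ (p - y) / len(y)

def poison(y, protected):
    """Fairness-targeting poisoning: flip labels for the protected group only."""
    y = y.copy()
    y[protected == 1] = 1 - y[protected == 1]
    return y

# Synthetic data: 4 clients, 2 features plus a protected-attribute column.
clients = []
for _ in range(4):
    X = rng.normal(size=(200, 2))
    a = rng.integers(0, 2, size=200)            # protected attribute
    y = (X[:, 0] + 0.5 * a > 0).astype(float)   # ground-truth labels
    clients.append((np.column_stack([X, a]), y, a))

w = np.zeros(3)
for _round in range(100):
    grads = []
    for i, (X, y, a) in enumerate(clients):
        y_used = poison(y, a) if i == 0 else y  # client 0 is the attacker
        grads.append(local_gradient(w, X, y_used))
    w -= 0.5 * np.mean(grads, axis=0)           # FedAvg of local gradients

# Demographic-parity gap: difference in positive-prediction rates by group.
X_all = np.vstack([c[0] for c in clients])
a_all = np.concatenate([c[2] for c in clients])
pred = (X_all @ w > 0).astype(float)
gap = abs(pred[a_all == 1].mean() - pred[a_all == 0].mean())
print(f"demographic-parity gap after attack: {gap:.3f}")
```

Because only one of four clients is malicious and the server sees nothing but averaged gradients, the poisoned update blends into the aggregate, which is the stealth property the abstract refers to.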
Recommended Citation
Furth, Nicholas, "Un-fair trojan: Targeted backdoor attacks against model fairness" (2022). Theses. 1996.
https://digitalcommons.njit.edu/theses/1996
Included in
Artificial Intelligence and Robotics Commons, Computer Engineering Commons, Data Science Commons