Investigating the Factors Impacting Adversarial Attack and Defense Performances in Federated Learning

Document Type

Article

Publication Date

1-1-2024

Abstract

Despite the promising success of federated learning in various application areas, its inherent vulnerability to adversarial attacks hinders its applicability in security-critical areas. This calls for the development of viable defense measures against such attacks. A prerequisite for this development, however, is an understanding of what creates, promotes, and aggravates this vulnerability. To date, developing this understanding remains an outstanding gap in the literature. Accordingly, this paper presents an attempt to develop such an understanding, primarily from two main perspectives. The first perspective concerns the factors, elements, and parameters that contribute to the vulnerability of federated learning models to adversarial attacks, their degrees of severity, and their combined effects. This includes diverse operating conditions, attack types and scenarios, and collaborations between attacking agents. The second perspective concerns how the adversarial property of a model manifests in the way it updates its coefficients, and how this can be exploited for defense purposes. These analyses are conducted through extensive experiments on image and text classification tasks. Simulation results reveal the influence of specific parameters and factors on the severity of this vulnerability. In addition, the proposed defense strategy is shown to provide promising performance.
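The update-analysis defense is only summarized at this level; the abstract does not specify the detection rule. As a minimal illustrative sketch, assuming a simple robust-statistics filter over the norms of client updates before aggregation (the function federated_round, the z-score threshold, and the fallback behavior are hypothetical illustrations, not the authors' method):

# Illustrative sketch only: the paper analyzes how a model's coefficients
# are updated to detect adversarial behavior; the exact rule is not given
# in the abstract. This hypothetical filter flags client updates whose
# norm deviates strongly from the median before federated averaging.
import numpy as np

def federated_round(global_weights, client_updates, z_thresh=2.5):
    """Aggregate client updates, discarding outliers by update norm.

    global_weights : np.ndarray          -- current global coefficients
    client_updates : list of np.ndarray  -- per-client weight deltas
    z_thresh       : float               -- hypothetical robust z cutoff
    """
    norms = np.array([np.linalg.norm(u) for u in client_updates])
    med = np.median(norms)
    mad = np.median(np.abs(norms - med)) + 1e-12   # robust spread
    z = 0.6745 * (norms - med) / mad               # modified z-scores
    kept = [u for u, s in zip(client_updates, z) if abs(s) <= z_thresh]
    if not kept:                                   # fallback: keep all
        kept = client_updates
    return global_weights + np.mean(kept, axis=0)

# Toy usage: nine benign clients plus one with an inflated update.
rng = np.random.default_rng(0)
w = np.zeros(10)
updates = [rng.normal(0, 0.01, 10) for _ in range(9)]
updates.append(rng.normal(0, 5.0, 10))             # anomalous client
w = federated_round(w, updates)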

Identifier

85130497453 (Scopus)

Publication Title

IEEE Transactions on Engineering Management

External Full Text Location

https://doi.org/10.1109/TEM.2022.3155353

e-ISSN

1558-0040

ISSN

0018-9391

First Page

12542

Last Page

12555

Volume

71

Grant

1120

Fund Ref

Ministry of Education - Kingdom of Saudi Arabia
