Complement Sparsification: Low-Overhead Model Pruning for Federated Learning
Document Type
Conference Proceeding
Publication Date
6-27-2023
Abstract
Federated Learning (FL) is a privacy-preserving distributed deep learning paradigm that involves substantial communication and computation effort, which is a problem for resource-constrained mobile and IoT devices. Model pruning/sparsification develops sparse models that could solve this problem, but existing sparsification solutions cannot simultaneously satisfy the requirements for low bidirectional communication overhead between the server and the clients, low computation overhead at the clients, and good model accuracy, under the FL assumption that the server does not have access to raw data to fine-tune the pruned models. We propose Complement Sparsification (CS), a pruning mechanism that satisfies all these requirements through complementary and collaborative pruning done at the server and the clients. In each round, CS creates a global sparse model that contains the weights capturing the general data distribution of all clients, while the clients create local sparse models with the weights pruned from the global model to capture the local trends. For improved model performance, these two types of complementary sparse models are aggregated into a dense model in each round, which is subsequently pruned in an iterative process. CS requires little computation overhead on top of vanilla FL for both the server and the clients. We demonstrate that CS is an approximation of vanilla FL and, thus, its models perform well. We evaluate CS experimentally with two popular FL benchmark datasets. CS achieves substantial reduction in bidirectional communication, while achieving performance comparable with vanilla FL. In addition, CS outperforms baseline pruning mechanisms for FL.
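The abstract's core idea of splitting a dense model into two complementary sparse models can be sketched as follows. This is an illustrative approximation only, assuming magnitude-based pruning with a sparsity ratio `s`; the function name `complement_sparsify` and all parameters are hypothetical, not the authors' implementation.

```python
import numpy as np

def complement_sparsify(dense_weights: np.ndarray, s: float):
    """Split a dense weight tensor into two complementary sparse tensors.

    Hypothetical sketch: the server keeps the largest-magnitude (1 - s)
    fraction of weights (the global sparse model); the complementary,
    pruned weights capture the locally updated part at the clients.
    """
    k = int(s * dense_weights.size)  # number of weights pruned from the global model
    if k > 0:
        threshold = np.sort(np.abs(dense_weights).ravel())[k - 1]
    else:
        threshold = -np.inf
    global_mask = np.abs(dense_weights) > threshold
    global_sparse = dense_weights * global_mask    # server-side global sparse model
    client_sparse = dense_weights * ~global_mask   # complementary client-side weights
    return global_sparse, client_sparse

w = np.array([0.9, -0.05, 0.4, 0.01, -0.7, 0.2])
g, c = complement_sparsify(w, s=0.5)
# The two complementary sparse models aggregate back into the dense model,
# which is then pruned again in the next round of the iterative process.
assert np.allclose(g + c, w)
```

Because the two masks are exact complements, summing the global and client sparse models reconstructs a dense model with no weight counted twice, which is what allows the round-by-round aggregate-then-prune iteration described in the abstract.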
Identifier
85168252309 (Scopus)
ISBN
9781577358800
Publication Title
Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI 2023)
External Full Text Location
https://doi.org/10.1609/aaai.v37i7.25977
First Page
8087
Last Page
8095
Volume
37
Grant
DGE 2043104
Fund Ref
National Science Foundation
Recommended Citation
Jiang, Xiaopeng and Borcea, Cristian, "Complement Sparsification: Low-Overhead Model Pruning for Federated Learning" (2023). Faculty Publications. 1631.
https://digitalcommons.njit.edu/fac_pubs/1631