Straggler-Resilient Differentially-Private Decentralized Learning

Document Type

Conference Proceeding

Publication Date

1-1-2022

Abstract

We consider straggler resilience in decentralized learning using stochastic gradient descent under the notion of network differential privacy (DP). In particular, we extend the recently proposed framework of privacy amplification by decentralization by Cyffers and Bellet to include training latency, comprising both computation and communication latency. Analytical results on both the convergence speed and the DP level are derived for training over a logical ring, for both a skipping scheme (which ignores stragglers after a timeout) and a baseline scheme that waits for each node to finish before training continues. Our results show a trade-off between training latency, accuracy, and privacy, parameterized by the timeout of the skipping scheme. Finally, results from training a logistic regression model on a real-world dataset are presented.
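
For intuition, below is a minimal, self-contained sketch of the skipping scheme described in the abstract: a model is passed around a logical ring, each node takes one noisy (Gaussian-mechanism) logistic-regression SGD step on its local data, and any node whose simulated latency exceeds the timeout is skipped. This is not the authors' implementation; the names (local_dp_step, ring_walk), the exponential latency model, and all parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def local_dp_step(theta, X, y, lr=0.1, clip=1.0, sigma=1.0):
        """One noisy logistic-regression SGD step on a node's local data."""
        p = 1.0 / (1.0 + np.exp(-X @ theta))
        g = X.T @ (p - y) / len(y)                   # logistic-loss gradient
        g = g / max(1.0, np.linalg.norm(g) / clip)   # clip gradient norm
        g += rng.normal(0.0, sigma * clip, g.shape)  # Gaussian noise for DP
        return theta - lr * g

    def ring_walk(theta, nodes, timeout, mean_latency):
        """Pass the model once around the ring, skipping slow nodes.

        nodes: list of (X, y) local datasets. A node whose simulated
        latency (computation plus communication) exceeds `timeout` is
        skipped and the model moves on to its successor (the skipping
        scheme); setting timeout = infinity recovers the baseline that
        waits for every node.
        """
        for X, y in nodes:
            latency = rng.exponential(mean_latency)  # toy latency model
            if latency > timeout:
                continue                             # skip the straggler
            theta = local_dp_step(theta, X, y)
        return theta

    # Toy setup: 8 nodes, each holding 50 samples of a 5-feature problem.
    d = 5
    true_w = rng.normal(size=d)
    nodes = []
    for _ in range(8):
        X = rng.normal(size=(50, d))
        y = (X @ true_w + 0.1 * rng.normal(size=50) > 0).astype(float)
        nodes.append((X, y))

    theta = np.zeros(d)
    for _ in range(20):  # 20 passes around the logical ring
        theta = ring_walk(theta, nodes, timeout=1.5, mean_latency=1.0)

Raising the timeout lets more nodes contribute per pass (better accuracy per unit of wall-clock time spent waiting), while lowering it cuts latency at the cost of skipped updates, which is the latency-accuracy-privacy trade-off the paper parameterizes by the timeout.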

Identifier

85144592566 (Scopus)

ISBN

9781665483414

Publication Title

2022 IEEE Information Theory Workshop (ITW 2022)

External Full Text Location

https://doi.org/10.1109/ITW54588.2022.9965898

First Page

708

Last Page

713
