Improved Information-Theoretic Generalization Bounds for Distributed, Federated, and Iterative Learning
Document Type
Article
Publication Date
September 1, 2022
Abstract
We consider information-theoretic bounds on the expected generalization error for statistical learning problems in a network setting. In this setting, there are K nodes, each with its own independent dataset, and the models from the K nodes must be aggregated into a final centralized model. We consider both simple averaging of the models and more complicated multi-round algorithms. We give upper bounds on the expected generalization error for a variety of problems, such as those with Bregman divergence or Lipschitz continuous losses, that demonstrate an improved dependence of O(1/K) on the number of nodes. These "per node" bounds are in terms of the mutual information between the training dataset and the trained weights at each node, and are therefore useful in describing the generalization properties inherent to having communication or privacy constraints at each node.
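For readers unfamiliar with bounds of this type, the sketch below restates the classical single-dataset mutual-information bound of Xu and Raginsky (2017) and an illustrative per-node form of the O(1/K) improvement described in the abstract. The second display is a hedged sketch of the general shape of such a bound, assuming sigma-subgaussian losses, independent per-node datasets of n samples each, and simple averaging of the node models; the paper's exact statements, constants, and conditions appear in the full text.

% A minimal LaTeX sketch (compilable with pdflatex). The notation
% (S_k, W_k, sigma, n, K) is assumed for illustration, not taken
% verbatim from the paper.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Classical single-dataset bound (Xu--Raginsky, 2017): for a loss that
% is \sigma-subgaussian under the data distribution, a dataset S of n
% i.i.d. samples, and learned weights W,
\[
  \bigl| \mathbb{E}[\operatorname{gen}(S,W)] \bigr|
  \;\le\; \sqrt{\frac{2\sigma^{2}\, I(S;W)}{n}} .
\]
% Illustrative per-node form of the improvement: K nodes hold
% independent datasets S_1, ..., S_K of n samples each, node k trains
% weights W_k, and the aggregated model is the simple average
% \bar{W} = (1/K) \sum_k W_k. A bound of the improved type reads
\[
  \bigl| \mathbb{E}[\operatorname{gen}(\bar{W})] \bigr|
  \;\le\; \frac{1}{K} \cdot \frac{1}{K} \sum_{k=1}^{K}
  \sqrt{\frac{2\sigma^{2}\, I(S_{k};W_{k})}{n}} ,
\]
% i.e., a factor of O(1/K) smaller than the naive average of the K
% single-node bounds; the exact conditions differ by problem class
% (Bregman divergence vs. Lipschitz continuous losses).
\end{document}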
Identifier
85138553459 (Scopus)
Publication Title
Entropy
External Full Text Location
https://doi.org/10.3390/e24091178
e-ISSN
1099-4300
Issue
9
Volume
24
Grant
CCF-1908308
Fund Ref
National Science Foundation
Recommended Citation
Barnes, Leighton Pate; Dytso, Alex; and Poor, Harold Vincent, "Improved Information-Theoretic Generalization Bounds for Distributed, Federated, and Iterative Learning" (2022). Faculty Publications. 2689.
https://digitalcommons.njit.edu/fac_pubs/2689