Continual Learning with Differential Privacy
Document Type
Conference Proceeding
Publication Date
1-1-2021
Abstract
In this paper, we focus on preserving differential privacy (DP) in continual learning (CL), in which we train ML models to learn a sequence of new tasks while retaining knowledge of previous tasks. We first introduce the notion of continual adjacent databases to bound the sensitivity of any data record participating in the CL training process. Building on this notion, we develop a new DP-preserving algorithm for CL with a data sampling strategy, and we quantify the privacy risk of the training data in the well-known Averaged Gradient Episodic Memory (A-GEM) approach by applying a moments accountant. Our algorithm provides formal privacy guarantees for data records across tasks in CL. Preliminary theoretical analysis and evaluations show that our mechanism tightens the privacy loss while maintaining promising model utility.
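To make the mechanism described above concrete, the following is a minimal NumPy sketch of one DP-preserving A-GEM-style update step, not the authors' implementation: per-example gradients are clipped and noised in the DP-SGD style that a moments accountant analyzes, and the noisy gradient is then projected so it does not conflict with a reference gradient computed on the episodic memory. The function name dp_a_gem_step and the parameters clip_norm and noise_multiplier are illustrative assumptions, and the per-task privacy accounting itself is not shown.

import numpy as np

def dp_a_gem_step(grad_batch, grad_ref, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP A-GEM-style update direction (illustrative sketch).

    grad_batch: (B, d) array of per-example gradients on the current task.
    grad_ref:   (d,) reference gradient computed on the episodic memory.
    """
    rng = rng if rng is not None else np.random.default_rng()
    # 1. Clip each per-example gradient to norm clip_norm, bounding the
    #    sensitivity of any single record (as in DP-SGD).
    norms = np.linalg.norm(grad_batch, axis=1, keepdims=True)
    clipped = grad_batch / np.maximum(1.0, norms / clip_norm)
    # 2. Average and add Gaussian noise calibrated to the clipping bound;
    #    a moments accountant would track the accumulated privacy loss.
    g = clipped.mean(axis=0)
    g = g + rng.normal(0.0, noise_multiplier * clip_norm / grad_batch.shape[0], size=g.shape)
    # 3. A-GEM projection: if the noisy gradient would increase the loss on
    #    past tasks (negative dot product with the memory gradient), project
    #    it onto the feasible half-space.
    dot = float(g @ grad_ref)
    if dot < 0.0:
        g = g - (dot / float(grad_ref @ grad_ref)) * grad_ref
    return g

# Toy usage with random stand-in gradients.
rng = np.random.default_rng(0)
update = dp_a_gem_step(rng.normal(size=(32, 10)), rng.normal(size=10), rng=rng)

Clipping before noising is what lets the Gaussian mechanism's sensitivity be bounded per record; the projection step is the standard A-GEM constraint and does not itself consume privacy budget when the memory gradient is computed on already-protected quantities, an assumption that the paper's analysis would need to make precise.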
Identifier
85121908638 (Scopus)
ISBN
978-3-030-92309-9
Publication Title
Communications in Computer and Information Science
External Full Text Location
https://doi.org/10.1007/978-3-030-92310-5_39
e-ISSN
1865-0937
ISSN
1865-0929
First Page
334
Last Page
343
Volume
1517 CCIS
Grant
2041096
Fund Ref
National Science Foundation
Recommended Citation
Desai, Pradnya; Lai, Phung; Phan, Nhat Hai; and Thai, My T., "Continual Learning with Differential Privacy" (2021). Faculty Publications. 4477.
https://digitalcommons.njit.edu/fac_pubs/4477