Scalable differential privacy with certified robustness in adversarial learning
Document Type
Conference Proceeding
Publication Date
1-1-2020
Abstract
In this paper, we aim to develop a scalable algorithm to preserve differential privacy (DP) in adversarial learning for deep neural networks (DNNs), with certified robustness to adversarial examples. By leveraging sequential composition in DP, we randomize both the input and latent spaces to strengthen our certified robustness bounds. To address the trade-off among model utility, privacy loss, and robustness, we design an original adversarial objective function, based on the post-processing property of DP, to tighten the sensitivity of our model. A new stochastic batch training method is proposed to apply our mechanism to large DNNs and datasets, bypassing the vanilla iterative batch-by-batch training of DP DNNs. An end-to-end theoretical analysis and evaluations show that our mechanism notably improves the robustness and scalability of DP DNNs.
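As a rough illustration only, and not the authors' published algorithm, the sketch below shows two ideas the abstract mentions: injecting calibrated noise into both the input and a latent layer (whose privacy costs would compose under DP's sequential composition), and training on independently sampled random batches instead of a fixed batch-by-batch sweep. All names and noise scales (NoisyNet, sigma_input, sigma_latent, train_step) are hypothetical assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyNet(nn.Module):
    """Hypothetical DNN that randomizes both the input and a latent space."""
    def __init__(self, in_dim=784, hidden_dim=256, num_classes=10,
                 sigma_input=0.5, sigma_latent=0.5):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, num_classes)
        self.sigma_input = sigma_input    # noise scale for the input space
        self.sigma_latent = sigma_latent  # noise scale for the latent space

    def forward(self, x):
        # Randomize the input space (noise scale chosen by a DP analysis).
        x = x + self.sigma_input * torch.randn_like(x)
        h = F.relu(self.fc1(x))
        # Randomize the latent space; the two perturbations would be
        # accounted for jointly via sequential composition.
        h = h + self.sigma_latent * torch.randn_like(h)
        return self.fc2(h)

def train_step(model, optimizer, data, labels, batch_size=128):
    # Stochastic batch: draw a fresh random batch each step rather than
    # iterating batch-by-batch over a fixed ordering of the dataset.
    idx = torch.randint(0, data.size(0), (batch_size,))
    x, y = data[idx], labels[idx]
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```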
Identifier
85105337663 (Scopus)
ISBN
9781713821120
Publication Title
37th International Conference on Machine Learning, ICML 2020
First Page
7639
Last Page
7650
Volume
PartF168147-10
Grant
CNS-1747798
Fund Ref
National Science Foundation
Recommended Citation
Phan, Nhat Hai; Thai, My T.; Hu, Han; Jin, Ruoming; Sun, Tong; and Dou, Dejing, "Scalable differential privacy with certified robustness in adversarial learning" (2020). Faculty Publications. 5834.
https://digitalcommons.njit.edu/fac_pubs/5834
