Heterogeneous Gaussian mechanism: Preserving differential privacy in deep learning with provable robustness

Document Type

Conference Proceeding

Publication Date

1-1-2019

Abstract

In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM) to preserve differential privacy in deep neural networks, with provable robustness against adversarial examples. We first relax the constraint of the privacy budget in the traditional Gaussian Mechanism from (0, 1] to (0, ∞), with a new bound of the noise scale to preserve differential privacy. The noise in our mechanism can be arbitrarily redistributed, offering a distinctive ability to address the trade-off between model utility and privacy loss. To derive provable robustness, our HGM is applied to inject Gaussian noise into the first hidden layer. Then, a tighter robustness bound is proposed. Theoretical analysis and thorough evaluations show that our mechanism notably improves the robustness of differentially private deep neural networks, compared with baseline approaches, under a variety of model attacks.
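For context, the traditional Gaussian Mechanism that the abstract refers to calibrates its noise scale as σ = √(2 ln(1.25/δ)) · Δf / ε, a bound valid only for ε ∈ (0, 1] — the constraint the paper relaxes. A minimal Python sketch of that classical mechanism (function names are illustrative, not from the paper):

```python
import math
import random

def gaussian_sigma(sensitivity, epsilon, delta):
    """Noise scale for the classical Gaussian mechanism.

    Valid only for epsilon in (0, 1] -- the restriction the HGM lifts
    by deriving a new noise-scale bound for epsilon in (0, infinity).
    """
    if not 0 < epsilon <= 1:
        raise ValueError("classical Gaussian mechanism requires epsilon in (0, 1]")
    return math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / epsilon

def gaussian_mechanism(value, sensitivity, epsilon, delta):
    """Release `value` with Gaussian noise calibrated to (epsilon, delta)-DP."""
    return value + random.gauss(0.0, gaussian_sigma(sensitivity, epsilon, delta))
```

In the paper's setting, noise of this kind is injected into the first hidden layer of the network rather than the final output, which is what enables the robustness certification against adversarial examples.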

Identifier

85074919621 (Scopus)

ISBN

9780999241141

Publication Title

IJCAI International Joint Conference on Artificial Intelligence

External Full Text Location

https://doi.org/10.24963/ijcai.2019/660

ISSN

1045-0823

First Page

4753

Last Page

4759

Volume

2019-August

Grant

CNS-1747798

Fund Ref

National Science Foundation

