Adaptive Laplace mechanism: Differential privacy preservation in deep learning

Document Type

Conference Proceeding

Publication Date

12-15-2017

Abstract

In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) the privacy budget consumption is entirely independent of the number of training steps; (2) it adaptively injects noise into features based on each feature's contribution to the output; and (3) it can be applied to a variety of deep neural networks. To achieve this, we develop a way to perturb the affine transformations of neurons and the loss functions used in deep neural networks. In addition, our mechanism intentionally adds more noise to features that are less relevant to the model output, and vice versa. Our theoretical analysis further derives the sensitivities and error bounds of our mechanism. Rigorous experiments conducted on the MNIST and CIFAR-10 datasets show that our mechanism is highly effective and outperforms existing solutions.
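The core idea in the abstract, adding larger Laplace noise to less relevant features, can be illustrated with a minimal sketch. Note this is not the paper's actual algorithm: the `adaptive_laplace_perturb` helper, the proportional budget split, the relevance scores, and the unit sensitivity below are all hypothetical assumptions made for the example; the paper itself derives sensitivities for perturbed affine transformations and loss functions.

```python
import numpy as np

def adaptive_laplace_perturb(features, relevance, epsilon, sensitivity=1.0, rng=None):
    """Perturb a feature vector with Laplace noise allocated by relevance.

    Hypothetical sketch: the per-feature budget eps_j is made proportional
    to the feature's relevance, so less relevant features get a smaller
    budget and therefore more noise (the Laplace scale is sensitivity/eps_j),
    mirroring the adaptive idea described in the abstract.
    """
    rng = rng or np.random.default_rng()
    relevance = np.asarray(relevance, dtype=float)
    # Normalize relevance scores into a budget allocation summing to epsilon.
    eps_per_feature = (relevance / relevance.sum()) * epsilon
    # The Laplace scale grows as the per-feature budget shrinks.
    scales = sensitivity / eps_per_feature
    noise = rng.laplace(loc=0.0, scale=scales)
    return np.asarray(features, dtype=float) + noise

# Example: the third feature is least relevant, so it receives the most noise.
x = np.array([0.8, 0.5, 0.1])
r = np.array([0.6, 0.3, 0.1])  # hypothetical relevance scores (e.g., from LRP)
x_noisy = adaptive_laplace_perturb(x, r, epsilon=1.0)
```

Allocating the budget proportionally to relevance is one simple way to realize the "more noise into less relevant features" behavior: the total budget spent is epsilon regardless of the number of features or training steps that reuse the perturbed values.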

Identifier

85043981824 (Scopus)

ISBN

978-1-5386-3834-7

Publication Title

Proceedings - IEEE International Conference on Data Mining, ICDM

External Full Text Location

https://doi.org/10.1109/ICDM.2017.48

ISSN

1550-4786

First Page

385

Last Page

394

Volume

2017-November

Grant

R01GM103309

Fund Ref

National Institutes of Health
