XRAND: Differentially Private Defense against Explanation-Guided Attacks

Document Type

Conference Proceeding

Publication Date

6-27-2023

Abstract

Recent developments in the field of explainable artificial intelligence (XAI) have helped improve trust in Machine-Learning-as-a-Service (MLaaS) systems, in which an explanation is provided together with the model prediction in response to each query. However, XAI also opens the door for adversaries to gain insights into the black-box models in MLaaS, thereby making the models more vulnerable to several attacks. For example, feature-based explanations (e.g., SHAP) can expose the top important features that a black-box model focuses on. Such disclosure has been exploited to craft effective backdoor triggers against malware classifiers. To address this trade-off, we introduce a new concept of achieving local differential privacy (LDP) in the explanations, and from it we establish a defense, called XRAND, against such attacks. We show that our mechanism restricts the information that the adversary can learn about the top important features, while maintaining the faithfulness of the explanations.
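The abstract's core idea, releasing an explanation under local differential privacy so that the true top important features are obscured, can be loosely illustrated with a randomized-response sketch. This is not the XRAND mechanism from the paper; all names and parameters below (`ldp_top_k_mask`, the per-bit epsilon budget) are hypothetical, and a per-feature membership flip is only the simplest LDP primitive one could apply here.

```python
import math
import random

def ldp_top_k_mask(scores, k, epsilon, rng=random.Random(0)):
    """Randomized response over top-k membership bits (epsilon-LDP per bit).

    scores: per-feature importance scores (e.g., SHAP values).
    Returns a noisy boolean mask of features reported as important.
    NOTE: illustrative sketch only, not the mechanism from the paper.
    """
    # True top-k features ranked by absolute importance.
    order = sorted(range(len(scores)), key=lambda i: -abs(scores[i]))
    top = set(order[:k])
    # Report each membership bit truthfully with probability
    # e^eps / (e^eps + 1); otherwise flip it (classic randomized response).
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return [(i in top) if rng.random() < p_keep else (i not in top)
            for i in range(len(scores))]

# Small epsilon -> noisy disclosure of which features mattered.
noisy = ldp_top_k_mask([0.9, 0.1, 0.5, 0.05], k=2, epsilon=1.0)
```

An adversary observing `noisy` can no longer pinpoint the true top-k set with certainty; the smaller `epsilon`, the more membership bits are flipped, trading explanation faithfulness for privacy, which mirrors the trade-off the abstract describes.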

Identifier

85168249381 (Scopus)

ISBN

9781577358800

Publication Title

Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023

External Full Text Location

https://doi.org/10.1609/aaai.v37i10.26401

First Page

11873

Last Page

11881

Volume

37

Grant

IIS-2123809

Fund Ref

National Science Foundation

