Black-box Backdoor Defense via Zero-shot Image Purification
Document Type
Conference Proceeding
Publication Date
2023
Abstract
Backdoor attacks inject poisoned samples into the training data, causing poisoned inputs to be misclassified once the model is deployed. Defending against such attacks is challenging, especially for real-world black-box models where only query access is permitted. In this paper, we propose a novel defense framework against backdoor attacks through Zero-shot Image Purification (ZIP). Our framework can be applied to poisoned models without requiring internal information about the model or any prior knowledge of the clean/poisoned samples. Our defense framework involves two steps. First, we apply a linear transformation (e.g., blurring) to the poisoned image to destroy the backdoor pattern. Then, we use a pre-trained diffusion model to recover the semantic information removed by the transformation. In particular, we design a new reverse process that uses the transformed image to guide the generation of high-fidelity purified images and works in zero-shot settings. We evaluate our ZIP framework on multiple datasets with different types of attacks. Experimental results demonstrate the superiority of our ZIP framework compared to state-of-the-art backdoor defense baselines. We believe that our results will provide valuable insights for future defense methods for black-box models.
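To make the two-step pipeline in the abstract concrete, the sketch below shows one plausible realization in PyTorch: a Gaussian blur serves as the linear transformation, and an ILVR-style low-frequency substitution stands in for the paper's guided reverse process. Everything here (`zip_purify`, the blur parameters, the linear DDPM schedule, the placeholder denoiser) is a hypothetical assumption for illustration, not the authors' released implementation.

```python
# Illustrative sketch only (hypothetical names throughout); a real pipeline
# would load a pre-trained diffusion model in place of the placeholder below.
import torch
import torch.nn.functional as F


def gaussian_kernel(size: int = 9, sigma: float = 3.0) -> torch.Tensor:
    """Depthwise 2-D Gaussian kernel, shape (3, 1, size, size)."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    k = k / k.sum()
    return k.unsqueeze(0).unsqueeze(0).repeat(3, 1, 1, 1)


def blur(x: torch.Tensor, kernel: torch.Tensor) -> torch.Tensor:
    """Linear transformation: destroys high-frequency trigger patterns."""
    pad = kernel.shape[-1] // 2
    return F.conv2d(F.pad(x, (pad,) * 4, mode="reflect"), kernel, groups=3)


@torch.no_grad()
def zip_purify(x_poisoned: torch.Tensor, denoiser, T: int = 1000) -> torch.Tensor:
    """Blur the input, then run a blur-guided DDPM-style reverse process."""
    kernel = gaussian_kernel()
    y = blur(x_poisoned, kernel)            # step 1: wipe out the trigger

    betas = torch.linspace(1e-4, 0.02, T)   # standard linear DDPM schedule
    alphas = 1.0 - betas
    abar = torch.cumprod(alphas, dim=0)

    x = torch.randn_like(y)                 # start the reverse process from noise
    for t in reversed(range(T)):
        eps = denoiser(x, torch.tensor([t]))                # predicted noise
        mean = (x - betas[t] / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean
        # Step 2 guidance (ILVR-style substitution): keep the low-frequency
        # content of the blurred guide, let the model fill in the semantics.
        if t > 0:
            y_t = abar[t - 1].sqrt() * y + (1 - abar[t - 1]).sqrt() * torch.randn_like(y)
        else:
            y_t = y
        x = x - blur(x, kernel) + blur(y_t, kernel)
    return x


if __name__ == "__main__":
    # Placeholder denoiser: a real pre-trained model is needed for useful output.
    denoiser = lambda x, t: torch.zeros_like(x)
    purified = zip_purify(torch.rand(1, 3, 64, 64), denoiser, T=50)
    print(purified.shape)  # torch.Size([1, 3, 64, 64])
```

The blur appears on both sides of the guidance step because the transformation is linear: the sample is forced to agree with the guide image in the blurred (low-frequency) subspace, while the diffusion prior restores the high-frequency semantics the blur removed, which is the division of labor the abstract describes.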
Identifier
85180328261 (Scopus)
ISBN
9781713899921
Publication Title
Advances in Neural Information Processing Systems
ISSN
1049-5258
Volume
36
Grant
2223768
Fund Ref
National Science Foundation
Recommended Citation
Shi, Yucheng; Du, Mengnan; Wu, Xuansheng; Guan, Zihan; Sun, Jin; and Liu, Ninghao, "Black-box Backdoor Defense via Zero-shot Image Purification" (2023). Faculty Publications. 2139.
https://digitalcommons.njit.edu/fac_pubs/2139