A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples
Document Type
Conference Proceeding
Publication Date
1-1-2021
Abstract
The pervasiveness of neural networks (NNs) in critical computer vision and image processing applications makes them very attractive targets for adversarial manipulation. A large body of existing research thoroughly investigates two broad categories of attacks targeting the integrity of NN models. The first category of attacks, commonly called Adversarial Examples, perturbs the model's inference by carefully adding noise into input examples. In the second category of attacks, adversaries try to manipulate the model during the training process by implanting Trojan backdoors. Researchers show that such attacks pose severe threats to the growing applications of NNs and propose several defenses against each attack type individually. However, such one-sided defense approaches leave potentially unknown risks in real-world scenarios when an adversary can unify different attacks to create new and more lethal ones bypassing existing defenses. In this work, we show how to jointly exploit adversarial perturbation and model poisoning vulnerabilities to practically launch a new stealthy attack, dubbed AdvTrojan. AdvTrojan is stealthy because it can be activated only when: 1) a carefully crafted adversarial perturbation is injected into the input examples during inference, and 2) a Trojan backdoor is implanted during the training process of the model. We leverage adversarial noise in the input space to move Trojan-infected examples across the model's decision boundary, making the attack difficult to detect. This stealthy behavior fools users into trusting the infected model as a classifier robust against adversarial examples. AdvTrojan can be implemented by only poisoning the training data, similar to conventional Trojan backdoor attacks. Our thorough analysis and extensive experiments on several benchmark datasets show that AdvTrojan can bypass existing defenses with a success rate close to 100% in most of our experimental scenarios and can be extended to attack federated learning as well as high-resolution images.
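The abstract describes a two-part activation: a backdoor trigger stamped on the input plus an adversarial perturbation added at inference. The sketch below is purely illustrative and is not the authors' implementation of AdvTrojan; the trigger shape, FGSM-style step, toy model, and target label are all assumptions made for demonstration.

```python
# Illustrative sketch only -- NOT the AdvTrojan implementation from the paper.
# It shows, under assumed names and a toy model, the two ingredients the abstract
# combines: (1) a backdoor trigger stamped onto the input and (2) a small
# adversarial (FGSM-style) perturbation crafted at inference time.
import torch
import torch.nn as nn
import torch.nn.functional as F

def stamp_trigger(x, patch_size=3, value=1.0):
    """Place a small square trigger in the bottom-right corner (assumed trigger shape)."""
    x = x.clone()
    x[..., -patch_size:, -patch_size:] = value
    return x

def targeted_fgsm(model, x, target, epsilon=0.03):
    """One FGSM-style step toward an attacker-chosen class (stand-in for the crafted noise)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), target)
    loss.backward()
    # Descend the loss w.r.t. the target label to push the example toward that class.
    return (x - epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy classifier on 1x28x28 inputs; weights are random, purely for demonstration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)       # a clean example
    y_target = torch.tensor([7])       # attacker-chosen target label (assumed)

    # AdvTrojan-style activation: the trigger alone or the perturbation alone is
    # meant to look benign; only their combination activates a backdoored model.
    x_triggered = stamp_trigger(x)
    x_activated = targeted_fgsm(model, x_triggered, y_target)
    print(model(x_activated).argmax(dim=1))
```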
Identifier
85125302120 (Scopus)
ISBN
9781665439022
Publication Title
Proceedings - 2021 IEEE International Conference on Big Data (Big Data 2021)
External Full Text Location
https://doi.org/10.1109/BigData52589.2021.9671964
First Page
834
Last Page
846
Recommended Citation
Liu, Guanxiong; Khalil, Issa; Khreishah, Abdallah; and Phan, Nhat Hai, "A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples" (2021). Faculty Publications. 4488.
https://digitalcommons.njit.edu/fac_pubs/4488