Quantifying the Vulnerability of Anomaly Detection Implementations to Nondeterminism-based Attacks

Document Type

Conference Proceeding

Publication Date

1-1-2024

Abstract

Anomaly Detection (AD) is widely used in security applications such as intrusion detection, but its vulnerability to nondeterminism attacks has gone unnoticed, and its robustness against such attacks has not been studied. Nondeterminism, i.e., output variation on the same input dataset, is a common trait of AD implementations. We show that nondeterminism can be exploited by an attacker who tries to have a malicious input point (outlier) classified as benign input (inlier). In our threat model, the attacker has extremely limited capabilities - they can only retry the attack; they cannot influence the model, manipulate the AD/IDS implementation, or insert noise. We focus on three concrete, orthogonal attack scenarios: (1) a restart attack that exploits a simple re-run, (2) a resource attack that exploits the use of less computationally expensive parameter settings, and (3) an inconsistency attack that exploits the differences between toolkits implementing the same algorithm. We quantify attack vulnerability in popular implementations of four AD algorithms - Isolation Forest (IF), Robust Covariance (RobCov), Local Outlier Factor (LOF), and One-Class SVM (OCSVM) - and offer mitigation strategies. We show that in each scenario, despite attackers' limited capabilities, attacks have a high likelihood of success.
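The restart attack described above can be illustrated with a minimal sketch. The toy detector below is hypothetical and stands in for the randomized AD implementations studied in the paper: it fits a decision threshold from a random subsample of the training data, so each re-run may classify a borderline point differently, and an attacker who can only retry eventually gets the malicious point accepted as an inlier.

```python
import random
import statistics

def fit_threshold(train, seed, subsample=20):
    # Toy randomized AD model (illustrative only): the anomaly threshold
    # is mean + 2*stdev of a random subsample, so it varies run to run.
    rng = random.Random(seed)
    sample = rng.sample(train, subsample)
    return statistics.mean(sample) + 2 * statistics.stdev(sample)

def is_inlier(x, train, seed):
    # A point at or below the threshold is classified as benign.
    return x <= fit_threshold(train, seed)

# Benign training data clustered around 10; the attack point is a
# borderline outlier near the (nondeterministic) decision boundary.
rng = random.Random(0)
train = [rng.gauss(10, 1) for _ in range(200)]
attack_point = 12.5

# Restart attack: the attacker simply retries across fresh runs
# (fresh seeds) until the same point is classified as an inlier.
runs = 1000
successes = sum(is_inlier(attack_point, train, seed) for seed in range(runs))
print(f"restart attack success rate: {successes / runs:.2%}")
```

The attack succeeds on some fraction of runs even though the input never changes, which is exactly the output variation the paper exploits; real implementations expose the same effect through unseeded random number generators.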

Identifier

85206479273 (Scopus)

ISBN

9798350365054

Publication Title

Proceedings - 6th IEEE International Conference on Artificial Intelligence Testing, AITest 2024

External Full Text Location

https://doi.org/10.1109/AITest62860.2024.00013

First Page

37

Last Page

46

Grant

CCF-2007730

Fund Ref

National Science Foundation
