Adversarial Attacks on Multiagent Deep Reinforcement Learning Models in Continuous Action Space

Document Type

Article

Publication Date

1-1-2024

Abstract

Multiagent deep reinforcement learning (MADRL) has recently been applied in many fields, including Industry 5.0, but it is sensitive to adversarial attacks. Although adversarial attacks can be detrimental, they are crucial for testing models and helping enhance their robustness. Existing attacks on MADRL-based models are insufficient because they perturb a fixed set of agents and do not account for cases where the perturbed agents change. In this article, we present a novel adversarial attack framework. In this framework, we define critical agents, which change over time: when they are perturbed only slightly, the whole multiagent system is perturbed greatly. We then identify critical agents through their worst-case joint actions. In this identification process, we use gradient information, differential evolution, and SARSA to address the challenge posed by changing perturbed agents and to compute the worst-case joint actions. After identifying the critical agents, we perturb them using a targeted attack method. We apply our method to attack models trained by two state-of-the-art MADRL algorithms in three environments, including two industry-related ones. The experimental results demonstrate that our method has a stronger perturbing ability than existing methods.
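The abstract describes the pipeline only at a high level. As a rough illustration of the critical-agent selection step it mentions, the sketch below searches, per agent, for a small bounded perturbation of that agent's continuous action that most degrades a learned SARSA value estimate of the joint state-action pair, and marks the agent with the largest worst-case drop as critical. This is a hypothetical sketch, not the authors' implementation: the names sarsa_q, joint_actions, and eps are illustrative assumptions, and differential evolution (here via SciPy) stands in for the paper's combined use of gradients, differential evolution, and SARSA.

```python
# Hypothetical sketch of critical-agent identification via worst-case joint actions.
# Assumes sarsa_q(state, joint_actions) returns a scalar value estimate.
import numpy as np
from scipy.optimize import differential_evolution


def find_critical_agent(sarsa_q, state, joint_actions, eps=0.1):
    """Return (agent_index, worst_joint_actions) for the agent whose eps-bounded
    action perturbation causes the largest drop in the SARSA value estimate."""
    n_agents, act_dim = joint_actions.shape
    base_value = sarsa_q(state, joint_actions)  # value of the unperturbed joint action
    best_drop, critical, worst_joint = -np.inf, None, None

    for i in range(n_agents):
        def perturbed_value(delta, i=i):
            # Perturb only agent i's action and evaluate the joint value;
            # minimizing this value corresponds to the worst case for agent i.
            perturbed = joint_actions.copy()
            perturbed[i] = joint_actions[i] + delta
            return sarsa_q(state, perturbed)

        bounds = [(-eps, eps)] * act_dim
        result = differential_evolution(perturbed_value, bounds, maxiter=20, tol=1e-3)
        drop = base_value - result.fun
        if drop > best_drop:
            worst = joint_actions.copy()
            worst[i] = joint_actions[i] + result.x
            best_drop, critical, worst_joint = drop, i, worst

    return critical, worst_joint
```

Differential evolution is a natural stand-in here because the action space is continuous and the worst-case search need not be differentiable end to end; the returned critical agent would then be the target of the subsequent perturbation step.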

Identifier

85204945432 (Scopus)

Publication Title

IEEE Transactions on Systems, Man, and Cybernetics: Systems

External Full Text Location

https://doi.org/10.1109/TSMC.2024.3454118

e-ISSN

2168-2232

ISSN

2168-2216

First Page

7633

Last Page

7646

Issue

12

Volume

54

Grant

62172299

Fund Ref

National Natural Science Foundation of China
