A Robust Mean-Field Actor-Critic Reinforcement Learning Against Adversarial Perturbations on Agent States

Document Type

Article

Publication Date

1-1-2024

Abstract

Multiagent deep reinforcement learning (DRL) makes optimal decisions based on the system states observed by agents, but any uncertainty in these observations may mislead agents into taking wrong actions. Mean-field actor-critic (MFAC) reinforcement learning is well known in the multiagent field because it effectively handles the scalability problem. However, it is sensitive to state perturbations, which can significantly degrade the team rewards. This work proposes Robust MFAC (RoMFAC) reinforcement learning, which has two innovations: 1) a new objective function for training actors, composed of a policy gradient function related to the expected cumulative discounted reward on sampled clean states and an action loss function representing the difference between actions taken on clean and adversarial states and 2) a repetitive regularization of the action loss, ensuring that the trained actors achieve excellent performance. Furthermore, this work proposes a game model named the state-adversarial stochastic game (SASG). Although the Nash equilibrium of the SASG may not exist, adversarial perturbations to states in RoMFAC are proven to be defensible based on the SASG. Experimental results show that RoMFAC is robust against adversarial perturbations while maintaining competitive performance in environments without perturbations.
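The abstract describes an actor objective that combines a policy gradient term on clean states with an action loss penalizing the gap between the policies on clean and adversarially perturbed states. Below is a minimal, hypothetical sketch of such a combined loss; the network architecture, the KL-based action loss, and the weighting coefficient `kappa` are illustrative assumptions and not the authors' implementation.

```python
# Hypothetical sketch of a RoMFAC-style actor objective (assumed form, not the
# authors' code): policy gradient on clean states + action loss on perturbed states.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Actor(nn.Module):
    """Small mean-field policy: maps (state, mean action of neighbors) to action probabilities."""

    def __init__(self, state_dim: int, mean_action_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + mean_action_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state, mean_action):
        return F.softmax(self.net(torch.cat([state, mean_action], dim=-1)), dim=-1)


def actor_loss(actor, state, mean_action, action, advantage, state_adv, kappa=0.5):
    """Combined objective (assumed form):
    - pg_loss: policy-gradient surrogate on sampled clean states
    - action_loss: discrepancy between policies on clean vs. adversarial states
    kappa is an assumed regularization weight for the action loss."""
    probs_clean = actor(state, mean_action)
    log_prob_taken = torch.log(
        probs_clean.gather(1, action.unsqueeze(1)).squeeze(1) + 1e-8
    )
    pg_loss = -(log_prob_taken * advantage).mean()

    probs_adv = actor(state_adv, mean_action)
    action_loss = F.kl_div(
        torch.log(probs_adv + 1e-8), probs_clean.detach(), reduction="batchmean"
    )
    return pg_loss + kappa * action_loss
```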

Identifier

85161516711 (Scopus)

Publication Title

IEEE Transactions on Neural Networks and Learning Systems

External Full Text Location

https://doi.org/10.1109/TNNLS.2023.3278715

e-ISSN

2162-2388

ISSN

2162-237X

PubMed ID

37276092

First Page

14370

Last Page

14381

Issue

10

Volume

35

Grant

22511105500

Fund Ref

Science and Technology Commission of Shanghai Municipality
