Reinforcement Learning Approaches for Traffic Signal Control under Missing Data
Document Type
Conference Proceeding
Publication Date
1-1-2023
Abstract
Reinforcement learning (RL) methods have achieved promising results in traffic signal control (TSC) tasks. Most RL approaches require observations of the environment so that the agent can decide which action is optimal for the long-term reward. However, in real-world urban scenarios, observations of traffic states are frequently missing due to the lack of sensors, which makes existing RL methods inapplicable to road networks with missing observations. In this work, we aim to control traffic signals in a realistic setting where some intersections in the road network are not equipped with sensors and therefore have no direct observations around them. To the best of our knowledge, we are the first to use RL methods to tackle the TSC problem in this setting. Specifically, we propose two solutions: 1) the first imputes the traffic states to enable adaptive control; 2) the second imputes both states and rewards to enable adaptive control and the training of RL agents. Through extensive experiments on both synthetic and real-world road-network traffic, we show that our methods outperform conventional approaches and perform consistently across different missing rates. We also investigate how missing data influences the performance of our model.
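To illustrate the first solution described in the abstract, the sketch below shows one simple way an unobserved intersection's state could be imputed from observed neighbors before an RL controller acts on it. This is a minimal, assumed example: the function names, the neighbor-average rule, and the fallback to a global mean are illustrative and not the paper's specific imputation method.

```python
# Minimal sketch (assumed, not the paper's exact method): impute the state of
# intersections without sensors from their observed neighbors, then hand the
# completed state vector to an RL policy as if it were fully observed.
import numpy as np

def impute_states(states, observed_mask, adjacency):
    """states: (N, D) per-intersection features (e.g., lane queue lengths);
    observed_mask: (N,) bool, True where sensors exist;
    adjacency: (N, N) 0/1 matrix of neighboring intersections."""
    imputed = states.copy()
    for i in np.flatnonzero(~observed_mask):
        # observed neighbors of the unobserved intersection i
        neighbors = np.flatnonzero((adjacency[i] > 0) & observed_mask)
        if neighbors.size:
            imputed[i] = states[neighbors].mean(axis=0)   # average observed neighbors
        else:
            imputed[i] = states[observed_mask].mean(axis=0)  # fallback: global mean
    return imputed

# Usage: an RL agent (e.g., a per-intersection DQN) would then select a signal
# phase from imputed[i] exactly as it would from a directly observed state.
```

The second solution in the abstract would additionally impute rewards in the same spirit, so that agents at unobserved intersections can also be trained, not just controlled.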
Identifier
85170392450 (Scopus)
ISBN
9781956792034
Publication Title
IJCAI International Joint Conference on Artificial Intelligence
External Full Text Location
https://doi.org/10.24963/ijcai.2023/251
ISSN
1045-0823
First Page
2261
Last Page
2269
Volume
2023-August
Grant
2153311
Fund Ref
National Science Foundation
Recommended Citation
Mei, Hao; Li, Junxian; Shi, Bin; and Wei, Hua, "Reinforcement Learning Approaches for Traffic Signal Control under Missing Data" (2023). Faculty Publications. 2201.
https://digitalcommons.njit.edu/fac_pubs/2201