Learning Crowd Motion Dynamics with Crowds
Document Type
Article
Publication Date
5-13-2024
Abstract
Reinforcement Learning (RL) has become a popular framework for learning desired behaviors for computational agents in graphics and games. In a multi-agent crowd, one major goal is for agents to avoid collisions while navigating a dynamic environment. Another goal is to simulate natural-looking crowds, which is difficult to define due to the ambiguity of what constitutes natural crowd motion. We introduce a novel methodology for simulating crowds that learns the most-preferred crowd simulation behaviors from crowd-sourced votes via Bayesian optimization. Our method uses deep reinforcement learning to simulate crowds, with crowdsourcing used to select policy hyper-parameters. Training agents with these parameters results in a crowd simulation that users prefer. We demonstrate our method's robustness across multiple scenarios and metrics, showing that it is superior to alternative policies and prior work.
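To make the high-level idea in the abstract concrete, the following is a minimal, hypothetical sketch of selecting RL policy hyper-parameters by maximizing a preference score derived from crowd-sourced votes. The function names (train_policy, collect_votes), the choice of reward weights as hyper-parameters, and the scoring rule are all illustrative assumptions, not the authors' implementation; the paper uses Bayesian optimization, whereas this sketch substitutes a simple random search as a stand-in for the optimizer.

```python
# Hypothetical sketch: pick crowd-simulation policy hyper-parameters that
# maximize a crowd-sourced preference score. Not the authors' code.
import random

def train_policy(params):
    """Placeholder for training a crowd-navigation RL policy with given reward weights."""
    return params  # a real system would return a trained policy here

def collect_votes(policy):
    """Placeholder for crowd-sourced votes; returns a preference score in [0, 1]."""
    # Stand-in scoring rule combining assumed reward weights with noise.
    return 0.2 * random.random() + 0.8 * (
        0.6 * policy["collision_weight"] + 0.4 * policy["goal_weight"]
    )

def search_hyperparameters(n_trials=20, seed=0):
    """Random-search stand-in for Bayesian optimization over policy hyper-parameters."""
    random.seed(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        # Sample candidate reward weights for the RL policy.
        params = {
            "collision_weight": random.uniform(0.0, 1.0),
            "goal_weight": random.uniform(0.0, 1.0),
        }
        policy = train_policy(params)
        score = collect_votes(policy)  # user preference acts as the objective
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

if __name__ == "__main__":
    params, score = search_hyperparameters()
    print("preferred hyper-parameters:", params, "score:", round(score, 3))
```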
Identifier
85193437241 (Scopus)
Publication Title
Proceedings of the ACM on Computer Graphics and Interactive Techniques
External Full Text Location
https://doi.org/10.1145/3651302
e-ISSN
2577-6193
Issue
1
Volume
7
Recommended Citation
Talukdar, Bilas; Zhang, Yunhao; and Weiss, Tomer, "Learning Crowd Motion Dynamics with Crowds" (2024). Faculty Publications. 425.
https://digitalcommons.njit.edu/fac_pubs/425