STAS: Spatial-Temporal Return Decomposition for Solving Sparse Rewards Problems in Multi-agent Reinforcement Learning

Authors

  • Sirui Chen, Renmin University of China
  • Zhaowei Zhang, Institute for Artificial Intelligence, Peking University; National Key Laboratory of General Artificial Intelligence, BIGAI
  • Yaodong Yang, Institute for Artificial Intelligence, Peking University
  • Yali Du, King's College London

DOI:

https://doi.org/10.1609/aaai.v38i16.29681

Keywords:

MAS: Coordination and Collaboration, MAS: Multiagent Learning

Abstract

Centralized Training with Decentralized Execution (CTDE) has been proven to be an effective paradigm in cooperative multi-agent reinforcement learning (MARL). One of the major challenges is credit assignment, which aims to credit agents according to their contributions. Existing methods lack the ability to model the complicated relations of the delayed global reward in the temporal dimension and suffer from inefficiencies. To tackle this, we introduce Spatial-Temporal Attention with Shapley (STAS), a novel method that learns credit assignment in both temporal and spatial dimensions. It first decomposes the global return back to each time step, then utilizes the Shapley Value to redistribute the individual payoff from the decomposed global reward. To mitigate the computational complexity of the Shapley Value, we introduce an approximation of marginal contribution and utilize Monte Carlo sampling to estimate it. We evaluate our method on an Alice & Bob example and MPE environments across different scenarios. Our results demonstrate that our method effectively assigns spatial-temporal credit, outperforming all state-of-the-art baselines.
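To illustrate the Monte Carlo approximation of the Shapley Value mentioned in the abstract, below is a minimal, generic sketch of permutation-based sampling of marginal contributions. The `value_fn` coalition-value function is a hypothetical stand-in for a learned per-timestep payoff model; this is a textbook estimator, not the authors' STAS implementation.

```python
import random

def monte_carlo_shapley(agents, value_fn, n_samples=1000, seed=0):
    """Estimate each agent's Shapley value by sampling random agent orderings.

    agents:   iterable of hashable agent identifiers.
    value_fn: maps a frozenset coalition -> scalar payoff (hypothetical
              placeholder for a learned coalition value).
    Returns a dict agent -> estimated Shapley value.
    """
    rng = random.Random(seed)
    agents = list(agents)
    totals = {a: 0.0 for a in agents}
    for _ in range(n_samples):
        order = agents[:]
        rng.shuffle(order)  # a uniformly random permutation of the agents
        coalition = frozenset()
        prev_value = value_fn(coalition)
        for a in order:
            # Marginal contribution of agent a to the growing coalition.
            coalition = coalition | {a}
            cur_value = value_fn(coalition)
            totals[a] += cur_value - prev_value
            prev_value = cur_value
    return {a: t / n_samples for a, t in totals.items()}

# Example with an additive coalition value, for which the Shapley value
# equals each agent's own weight (hypothetical numbers for illustration).
weights = {"alice": 1.0, "bob": 2.0, "carol": 3.0}
estimates = monte_carlo_shapley(
    weights, lambda s: sum(weights[a] for a in s), n_samples=200
)
```

For an additive game every sampled marginal contribution is exact, so the estimator recovers the weights; in the general (non-additive) case the estimate converges to the true Shapley value as `n_samples` grows, avoiding the exponential cost of enumerating all coalitions.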

Published

2024-03-24

How to Cite

Chen, S., Zhang, Z., Yang, Y., & Du, Y. (2024). STAS: Spatial-Temporal Return Decomposition for Solving Sparse Rewards Problems in Multi-agent Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17337-17345. https://doi.org/10.1609/aaai.v38i16.29681

Issue

Section

AAAI Technical Track on Multiagent Systems