Neural Network Approximation for Pessimistic Offline Reinforcement Learning

Authors

  • Di Wu, Wuhan University
  • Yuling Jiao, Wuhan University
  • Li Shen, JD Explore Academy
  • Haizhao Yang, University of Maryland, College Park
  • Xiliang Lu, Wuhan University

DOI:

https://doi.org/10.1609/aaai.v38i14.29517

Keywords:

ML: Reinforcement Learning, ML: Deep Learning Theory

Abstract

Deep reinforcement learning (RL) has shown remarkable success in specific offline decision-making scenarios, yet its theoretical guarantees are still under development. Existing works on offline RL theory primarily focus on a few restrictive settings, such as linear MDPs or general function approximation under strong assumptions and independent data, and therefore offer little guidance for practical use. The coupling of deep learning with Bellman residuals, together with the dependence in the data, makes this problem challenging. In this paper, we establish a non-asymptotic estimation error bound for pessimistic offline RL with general neural network approximation and C-mixing data, expressed in terms of the network structure, the dimension of the data, and the concentrability of the data coverage, under mild assumptions. Our result shows that the estimation error consists of two parts: the first converges to zero at a desirable rate in the sample size with partially controllable concentrability, and the second becomes negligible if the residual constraint is tight. This result demonstrates the explicit efficiency of deep adversarial offline RL frameworks. To achieve it, we employ empirical process tools for C-mixing sequences and neural network approximation theory for the Hölder class. We also develop methods to bound the Bellman estimation error caused by function approximation under empirical Bellman constraint perturbations. Additionally, we present a result that alleviates the curse of dimensionality by using data with low intrinsic dimensionality and function classes of low complexity. Our estimate provides valuable insights into the development of deep offline RL and offers guidance for algorithm and model design.
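To make the pessimistic, constrained-Bellman-residual formulation described above concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' code): a neural network Q-function is trained to minimize the estimated value of the target policy while an empirical Bellman residual constraint is enforced in penalty form. All names, the toy dataset, and the constants (eps, lam, gamma) are illustrative assumptions.

```python
# Hypothetical sketch of pessimistic offline RL with a Bellman residual constraint:
#   V_hat(pi) = min over Q in a neural network class of  E_{s ~ d0}[ Q(s, pi(s)) ]
#   subject to the empirical Bellman residual  E_n[ (Q - T_hat Q)^2 ] <= eps,
# solved here in penalty (Lagrangian) form on synthetic offline data.
import torch
import torch.nn as nn

torch.manual_seed(0)

state_dim, n_actions, gamma = 4, 3, 0.99
n_samples, eps, lam = 512, 1e-2, 10.0  # residual tolerance and penalty weight

# Synthetic offline batch D = {(s, a, r, s')} from some behavior policy.
s = torch.randn(n_samples, state_dim)
a = torch.randint(0, n_actions, (n_samples,))
r = torch.randn(n_samples)
s_next = torch.randn(n_samples, state_dim)
s0 = torch.randn(n_samples, state_dim)  # draws from the initial distribution d0

q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, n_actions))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def greedy_policy(states):
    # Target policy pi: greedy w.r.t. the current Q estimate (illustrative choice).
    with torch.no_grad():
        return q_net(states).argmax(dim=1)

for step in range(500):
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    a_next = greedy_policy(s_next)
    q_next = q_net(s_next).gather(1, a_next.unsqueeze(1)).squeeze(1)
    bellman_residual = ((q_sa - (r + gamma * q_next)) ** 2).mean()

    # Pessimistic objective: the value of pi under Q, to be minimized ...
    v_pi = q_net(s0).gather(1, greedy_policy(s0).unsqueeze(1)).mean()
    # ... plus a penalty keeping the empirical Bellman residual below eps.
    loss = v_pi + lam * torch.clamp(bellman_residual - eps, min=0.0)

    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"pessimistic value estimate: {v_pi.item():.3f}, "
      f"Bellman residual: {bellman_residual.item():.4f}")
```

In this sketch the penalty term plays the role of the residual constraint: when the constraint is tight (small eps), the second error component discussed in the abstract is driven toward zero, while the first component shrinks with the number of offline samples.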

Published

2024-03-24

How to Cite

Wu, D., Jiao, Y., Shen, L., Yang, H., & Lu, X. (2024). Neural Network Approximation for Pessimistic Offline Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 15868-15877. https://doi.org/10.1609/aaai.v38i14.29517

Issue

Vol. 38 No. 14 (2024)

Section

AAAI Technical Track on Machine Learning V