Exact Policy Recovery in Offline RL with Both Heavy-Tailed Rewards and Data Corruption

Authors

  • Yiding Chen, University of Wisconsin-Madison
  • Xuezhou Zhang, Boston University
  • Qiaomin Xie, University of Wisconsin-Madison
  • Xiaojin Zhu, University of Wisconsin-Madison

DOI:

https://doi.org/10.1609/aaai.v38i10.29022

Keywords:

ML: Reinforcement Learning, ML: Learning Theory

Abstract

We study offline reinforcement learning (RL) with heavy-tailed rewards and data corruption: (i) moving beyond sub-Gaussian reward distributions, we allow the rewards to have infinite variance; (ii) we allow corruption in which an attacker can arbitrarily modify a small fraction of the rewards and transitions in the dataset. We first derive a sufficient optimality condition for generalized Pessimistic Value Iteration (PEVI), which accommodates various estimators with proper confidence bounds and applies to multiple learning settings. To handle the combination of data corruption and heavy-tailed rewards, we prove that the trimmed-mean estimator achieves the minimax optimal error rate for robust mean estimation under heavy-tailed distributions. We then plug the trimmed-mean estimator and its confidence bound into the PEVI algorithm to solve the robust offline RL problem. A standard analysis reveals that data corruption induces a bias term in the suboptimality gap, giving the false impression that any amount of corruption prevents optimal policy learning. Using the optimality condition for generalized PEVI, we show that as long as the bias term is smaller than the "action gap", the policy returned by PEVI attains the optimal value given sufficient data.
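The robust primitive at the core of the abstract is easy to state. Below is a minimal sketch (not the paper's exact estimator or parameter choices) of a symmetric trimmed mean in Python; the trim fraction `eps` is a hypothetical knob that, in the paper's setting, would be chosen based on the corruption level and the target confidence level.

```python
import numpy as np

def trimmed_mean(samples, eps):
    """Trimmed-mean estimator: sort the samples, discard a fraction
    `eps` from EACH tail, and average the rest. This is robust both to
    heavy tails and to a small fraction of adversarially corrupted points.
    (`eps` is a hypothetical parameter for illustration.)"""
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    k = int(np.ceil(eps * n))  # number of points trimmed per tail
    if 2 * k >= n:
        raise ValueError("trim fraction too large for sample size")
    return x[k : n - k].mean()
```

On data mixing a heavy-tailed distribution with a small corrupted fraction, the plain mean is dominated by the outliers while the trimmed mean stays stable:

```python
rng = np.random.default_rng(0)
r = rng.standard_t(df=1.5, size=1000)  # heavy-tailed rewards (infinite variance for df <= 2)
r[:20] = 1e6                           # attacker arbitrarily modifies 2% of the rewards
print(np.mean(r))                      # ruined by the corrupted points
print(trimmed_mean(r, eps=0.05))       # close to the true location
```

The plug-in step can likewise be sketched at a high level. The following tabular, finite-horizon pessimistic value iteration is an illustrative reconstruction under assumed inputs, not the paper's algorithm verbatim: here `r_hat` would come from a trimmed-mean estimate per state-action pair and `bonus` from its confidence bound.

```python
import numpy as np

def pevi(r_hat, P_hat, bonus, H):
    """Sketch of tabular, finite-horizon Pessimistic Value Iteration.
    r_hat[h]  : (S, A) estimated mean rewards (e.g., trimmed means)
    P_hat[h]  : (S, A, S) estimated transition probabilities
    bonus[h]  : (S, A) confidence-bound penalty subtracted for pessimism
    Returns a greedy policy pi, where pi[h][s] is the chosen action."""
    S, A = r_hat[0].shape
    V = np.zeros(S)
    pi = [np.zeros(S, dtype=int) for _ in range(H)]
    for h in reversed(range(H)):
        Q = r_hat[h] - bonus[h] + P_hat[h] @ V  # pessimistic Bellman backup
        Q = np.clip(Q, 0.0, H - h)              # keep values in a valid range
        pi[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return pi
```

Intuitively, the exact-recovery claim then says: if the corruption-induced bias in each Q-value is smaller than the gap between the best and second-best actions, the argmax in the backup above still selects the optimal action, so the learned policy is exactly optimal given sufficient data.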

Published

2024-03-24

How to Cite

Chen, Y., Zhang, X., Xie, Q., & Zhu, X. (2024). Exact Policy Recovery in Offline RL with Both Heavy-Tailed Rewards and Data Corruption. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11416-11424. https://doi.org/10.1609/aaai.v38i10.29022

Section

AAAI Technical Track on Machine Learning I