False Correlation Reduction for Offline Reinforcement Learning

Offline reinforcement learning (RL) leverages massive datasets to solve sequential decision-making problems. Most existing work focuses on defending against out-of-distribution (OOD) actions, whereas we investigate a broader issue: false correlations between epistemic uncertainty and decision-making, an essential factor that causes suboptimality. In this paper, we propose falSe COrrelation REduction (SCORE) for offline RL, a practically effective and theoretically provable algorithm. We empirically show that SCORE achieves state-of-the-art (SoTA) performance with 3.1x acceleration on various tasks in a standard benchmark (D4RL). The proposed algorithm introduces an annealing behavior cloning regularizer that helps produce a high-quality estimate of uncertainty, which is critical for eliminating false correlations and the suboptimality they cause. Theoretically, we justify the rationality of the proposed method and prove its convergence to the optimal policy at a sublinear rate under mild assumptions.
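Since the abstract only describes SCORE at a high level (pessimism driven by an epistemic-uncertainty estimate plus an annealing behavior cloning regularizer), the following is a minimal sketch of that general idea, not the authors' implementation. The ensemble-disagreement penalty, the linear annealing schedule, and all names (`annealed_bc_weight`, `actor_loss`, `q_ensemble`) are illustrative assumptions.

```python
# Illustrative sketch only (not the SCORE code): a pessimistic actor loss with
# an annealed behavior-cloning (BC) regularizer. Hyperparameters and the
# ensemble-std uncertainty penalty are assumptions, not taken from the paper.
import torch
import torch.nn.functional as F


def annealed_bc_weight(step: int, total_steps: int, init_weight: float = 1.0) -> float:
    """Linearly anneal the BC coefficient from init_weight to 0 over training."""
    frac = min(step / total_steps, 1.0)
    return init_weight * (1.0 - frac)


def actor_loss(policy_actions, dataset_actions, q_ensemble, states, step, total_steps):
    """Pessimistic actor loss plus an annealed BC term.

    q_ensemble: a list of critic callables; the std of their predictions is
    used here as a stand-in for an epistemic-uncertainty penalty.
    """
    q_values = torch.stack([q(states, policy_actions) for q in q_ensemble], dim=0)
    pessimistic_q = q_values.mean(0) - q_values.std(0)   # penalize uncertain actions
    bc_term = F.mse_loss(policy_actions, dataset_actions)  # keep policy near the data
    lam = annealed_bc_weight(step, total_steps)             # BC weight decays to 0
    return -pessimistic_q.mean() + lam * bc_term
```

Early in training the BC term dominates and keeps the policy close to the dataset, which stabilizes the uncertainty estimates; as the weight anneals toward zero, the pessimistic value term takes over.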

Media type:

E-article

Year of publication:

2024

Published:

2024

Contained in:

IEEE Transactions on Pattern Analysis and Machine Intelligence - 46(2024), no. 2, 27 Jan., pages 1199-1211

Language:

English

Contributors:

Deng, Zhihong [author]
Fu, Zuyue [author]
Wang, Lingxiao [author]
Yang, Zhuoran [author]
Bai, Chenjia [author]
Zhou, Tianyi [author]
Wang, Zhaoran [author]
Jiang, Jing [author]

Links:

Full text

Subjects:

Journal Article

Notes:

Date Revised 12.01.2024

published: Print-Electronic

Citation Status PubMed-not-MEDLINE

doi:

10.1109/TPAMI.2023.3328397

PPN (catalogue ID):

NLM363950591