Computer Science > Computation and Language
[Submitted on 4 Jan 2025 (v1), last revised 10 Nov 2025 (this version, v9)]
Title: REINFORCE++: Stabilizing Critic-Free Policy Optimization with Global Advantage Normalization
Abstract: Reinforcement Learning from Human Feedback (RLHF) plays a crucial role in aligning Large Language Models (LLMs). The dominant algorithm, Proximal Policy Optimization (PPO), employs a critic network to estimate advantages, which introduces significant computational and memory overhead. To address this, a family of critic-free algorithms (e.g., GRPO, RLOO) has emerged. However, these methods typically rely on prompt-level (local) advantage normalization, which suffers from inaccurate advantage estimation and a tendency to overfit, and which, as we show, is a theoretically biased estimator. To overcome these challenges, we introduce REINFORCE++, a critic-free framework centered on Global Advantage Normalization. By normalizing advantages across the entire global batch rather than within small, prompt-specific groups, our method provides a more stable and theoretically sound, effectively unbiased estimate (whose bias vanishes as batch size increases). We introduce two variants: REINFORCE++, a highly efficient and general algorithm ($k \ge 1$) for general-domain RLHF, and REINFORCE++ w/ baseline, a robust group-sampling variant ($k > 1$) for complex reasoning tasks. Our empirical evaluation demonstrates that each variant shows superior stability and performance in its respective domain, outperforming existing methods, and even PPO, in complex agentic settings.
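The contrast between local and global normalization is easy to state in code. Below is a minimal sketch, not the authors' implementation: the function names, tensor shapes, and the reading of the "w/ baseline" variant are assumptions drawn only from the abstract.

```python
import torch

def local_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Prompt-level (local) normalization, GRPO-style: each row of `rewards`
    # holds the k sampled rewards for one prompt, and statistics are
    # computed within that small group.
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

def global_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Global normalization: statistics are computed over the entire batch,
    # so the mean/std estimates are much less noisy than per-prompt ones.
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def baseline_variant_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # One plausible reading of the "w/ baseline" variant (k > 1): subtract
    # the per-prompt group mean as a baseline, then normalize across the
    # whole batch. This is an assumption, not the paper's exact recipe.
    centered = rewards - rewards.mean(dim=1, keepdim=True)
    return (centered - centered.mean()) / (centered.std() + eps)

# Toy batch: 4 prompts with k = 4 sampled responses each.
rewards = torch.randn(4, 4)
print(local_advantages(rewards))
print(global_advantages(rewards))
print(baseline_variant_advantages(rewards))
```

The intuition behind the bias claim: with small k, the per-prompt mean and standard deviation are noisy estimates, and dividing by a noisy standard deviation yields a biased advantage estimator; computing the same statistics over the whole batch uses far more samples, so the bias shrinks as the batch size grows.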
Submission history
From: Jian Hu
[v1] Sat, 4 Jan 2025 02:08:06 UTC (1,284 KB)
[v2] Thu, 3 Apr 2025 03:20:56 UTC (2,387 KB)
[v3] Sun, 6 Apr 2025 02:23:29 UTC (2,387 KB)
[v4] Thu, 3 Jul 2025 04:17:04 UTC (2,597 KB)
[v5] Fri, 4 Jul 2025 03:51:01 UTC (2,229 KB)
[v6] Mon, 14 Jul 2025 02:04:47 UTC (2,238 KB)
[v7] Mon, 28 Jul 2025 03:53:55 UTC (2,244 KB)
[v8] Sun, 3 Aug 2025 16:48:29 UTC (2,245 KB)
[v9] Mon, 10 Nov 2025 15:11:13 UTC (718 KB)