arxiv:2410.01679

VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment

Published on Oct 2 · Submitted by kazemnejad on Oct 4

Abstract

Large language models (LLMs) are increasingly applied to complex reasoning tasks that require executing several steps before receiving any reward. Properly assigning credit to these steps is essential for enhancing model performance. Proximal Policy Optimization (PPO), a state-of-the-art reinforcement learning (RL) algorithm used for LLM finetuning, employs value networks to tackle credit assignment. However, value networks face challenges in predicting the expected cumulative rewards accurately in complex reasoning tasks, often leading to high-variance updates and suboptimal performance. In this work, we systematically evaluate the efficacy of value networks and reveal their significant shortcomings in reasoning-heavy LLM tasks, showing that they barely outperform a random baseline when comparing alternative steps. To address this, we propose VinePPO, a straightforward approach that leverages the flexibility of language environments to compute unbiased Monte Carlo-based estimates, bypassing the need for large value networks. Our method consistently outperforms PPO and RL-free baselines across the MATH and GSM8K datasets while requiring up to 9x fewer gradient updates and up to 3.0x less wall-clock time. These results emphasize the importance of accurate credit assignment in RL finetuning of LLMs and demonstrate VinePPO's potential as a superior alternative.
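
For concreteness, the credit-assignment mechanism the abstract describes can be sketched as follows (the notation here is introduced for illustration and is not necessarily the paper's): the value of a partial solution $s_t$ is estimated by sampling $K$ completions from the current policy and averaging their final rewards,

$$
\hat{V}(s_t) = \frac{1}{K}\sum_{k=1}^{K} R(\tau_k), \qquad \tau_k \sim \pi_\theta(\cdot \mid s_t),
$$

and per-step advantages are then formed from these estimates, e.g. $\hat{A}(s_t, a_t) = r(s_t, a_t) + \hat{V}(s_{t+1}) - \hat{V}(s_t)$. Standard PPO instead trains a separate value network $V_\phi$ to predict this quantity, which is the component the paper identifies as unreliable in reasoning-heavy tasks.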

Community

Paper author · Paper submitter

VinePPO is a straightforward modification to PPO that unlocks RL’s true potential for LLM Reasoning.

  • It outperforms RL-free methods (e.g., DPO and RestEM) as well as PPO, surpassing PPO with up to 9x fewer gradient steps, up to 3x less wall-clock time, lower KL divergence, and half the GPU memory.
  • VinePPO exposes a tunable training-compute knob: spending more compute at training time yields better model performance (similar to the training phase of OpenAI o1).
  • VinePPO achieves this by fixing a weak link in many RL post-training methods: credit assignment (a minimal sketch of the idea follows below).

💻Code: https://github.com/McGill-NLP/VinePPO
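
To complement the repository above, here is a minimal, self-contained Python sketch of the Monte Carlo credit-assignment idea. It is an illustration under my own assumptions, not the authors' implementation: `sample_completion` and `reward` are hypothetical stand-ins for a policy rollout and a final-answer correctness check (e.g., on GSM8K), and the rollout count is an arbitrary placeholder.

```python
import random
from statistics import mean


def sample_completion(prefix: str) -> str:
    """Hypothetical stand-in: sample the rest of a solution from the current policy."""
    # A real setup would decode from the LLM being finetuned; here we return a dummy string.
    return prefix + " ... <final answer>"


def reward(full_solution: str) -> float:
    """Hypothetical stand-in: 1.0 if the final answer is correct, else 0.0."""
    return float(random.random() > 0.5)


def mc_value_estimate(prefix: str, num_rollouts: int = 4) -> float:
    """Monte Carlo estimate of the value of a partial solution: average final reward
    over a few completions sampled from the current policy (no value network)."""
    return mean(reward(sample_completion(prefix)) for _ in range(num_rollouts))


def step_advantages(question: str, steps: list[str], num_rollouts: int = 4) -> list[float]:
    """Per-step advantages computed from MC value estimates. With terminal-only reward,
    the advantage of step t reduces to V(s_{t+1}) - V(s_t)."""
    prefixes = [question] + [question + "".join(steps[: t + 1]) for t in range(len(steps))]
    values = [mc_value_estimate(p, num_rollouts) for p in prefixes]
    return [values[t + 1] - values[t] for t in range(len(steps))]


if __name__ == "__main__":
    q = "Q: If x + 3 = 7, what is x?\n"
    steps = ["Subtract 3 from both sides. ", "So x = 4."]
    print(step_advantages(q, steps))
```

In PPO these per-step values would come from a learned value network; replacing them with fresh rollouts trades extra inference-time sampling for unbiased estimates, which is the trade-off the paper argues pays off in gradient steps and wall-clock time.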

