feat(experimental): Divergence Proximal Policy Optimization #5117
Open

LeonEricsson wants to merge 8 commits into huggingface:main
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Author
oh, I completely missed that there was already a draft PR for this #5065, sorry @catherinelee274
qgallouedec reviewed Feb 18, 2026
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
note this PR incorporates #5107
Read "Rethinking the Trust Region in LLM Reinforcement Learning" over the weekend. Really like this approach. It continues the recent push toward improved off-policy regulation. The work comes from the same authors as the DAPO paper.
What does this PR do?
Implements the proposed method, DPPO, as an experimental trainer.
DPPO replaces PPO/GRPO clipping with a principled trust region based on direct policy divergence estimates. The paper argues that PPO-style clipping, being based on the probability ratio of the sampled token, acts as a noisy single-sample Monte Carlo estimate of the true policy divergence, over-penalizing low-probability tokens and under-penalizing high-probability ones.
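To make the contrast concrete, here is a rough sketch of the idea, not the trainer's actual implementation: instead of clipping the importance ratio, a per-token divergence estimate gates which tokens contribute gradient. The binary-KL estimator, the masking scheme, the helper names, and the `delta` budget below are all illustrative assumptions.

```python
import torch


def binary_kl(logp_pi: torch.Tensor, logp_mu: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Per-token divergence between pi and mu_old estimated from the sampled token
    alone: each distribution is collapsed to a Bernoulli over {sampled token,
    everything else}. Illustrative helper, not the trainer's API."""
    p_pi, p_mu = logp_pi.exp(), logp_mu.exp()
    return p_pi * (logp_pi - logp_mu) + (1 - p_pi) * (
        torch.log((1 - p_pi).clamp_min(eps)) - torch.log((1 - p_mu).clamp_min(eps))
    )


def trust_region_pg_loss(logp_pi, logp_mu, advantages, delta=0.02):
    """Policy-gradient loss where tokens whose estimated divergence from the
    sampling policy exceeds a budget `delta` are masked out, rather than
    clipping the probability ratio as in PPO/GRPO."""
    ratio = (logp_pi - logp_mu.detach()).exp()
    in_region = (binary_kl(logp_pi, logp_mu).detach() <= delta).float()
    # Tokens outside the trust region contribute no gradient signal.
    return -(in_region * ratio * advantages).mean()
```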
Implementation diffs to `GRPOTrainer`

DPPO uses a two-policy setup, in contrast to GRPO's three-policy formulation: a sampling policy $\mu_{\text{old}}$ (used to generate rollouts) and a current policy $\pi$ (updated during training). Unlike GRPO's old/new-policy structure, DPPO computes its trust-region terms directly between $\pi$ and $\mu_{\text{old}}$, using rollout-time statistics collected from $\mu_{\text{old}}$. We remove any recomputation of the logprobs at rollout time.

Depending on the divergence type, DPPO requires additional statistics from the policy distributions that are not computed in GRPO. The `binary_*` objectives only require sampled-token log-probabilities (already available in GRPO-style rollouts). In contrast, the `topk_*` objectives additionally require $\mu_{\text{old}}$'s top-K distribution, as well as $\pi$ evaluated on those same token IDs. This constitutes the main deviation from `GRPOTrainer`. To support this, we modify the generation step to compute top-K log-probabilities at rollout time. During each gradient step, we then evaluate $\pi$ on the token IDs corresponding to $\mu_{\text{old}}$'s top-K positions (see the sketch below).
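Below is a minimal, self-contained sketch of that top-K bookkeeping. The function names, tensor shapes, and top-K size are illustrative assumptions; in the actual trainer the rollout-time log-probs would come from the generation backend rather than a separate forward pass as shown here.

```python
import torch
import torch.nn.functional as F


def collect_rollout_topk(sampling_logits: torch.Tensor, k: int = 20):
    """At rollout time, keep mu_old's top-k log-probs and the matching token ids.
    `sampling_logits` has shape (batch, seq_len, vocab); `k` is illustrative."""
    logprobs = F.log_softmax(sampling_logits, dim=-1)
    topk_logprobs, topk_ids = logprobs.topk(k, dim=-1)   # (B, T, k)
    return topk_logprobs, topk_ids


def gather_policy_topk(policy_logits: torch.Tensor, topk_ids: torch.Tensor):
    """During a gradient step, evaluate pi on mu_old's top-k token ids."""
    logprobs = F.log_softmax(policy_logits, dim=-1)
    return logprobs.gather(-1, topk_ids)                  # (B, T, k)


# Toy usage with random logits standing in for model outputs.
B, T, V, K = 2, 5, 32, 4
mu_logits = torch.randn(B, T, V)
pi_logits = torch.randn(B, T, V, requires_grad=True)

mu_topk_logprobs, mu_topk_ids = collect_rollout_topk(mu_logits, k=K)
pi_topk_logprobs = gather_policy_topk(pi_logits, mu_topk_ids)

# A top-k divergence estimate can then be formed from the paired tensors,
# e.g. a truncated forward KL restricted to mu_old's top-k support.
topk_kl = (mu_topk_logprobs.exp() * (mu_topk_logprobs - pi_topk_logprobs)).sum(-1)
```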
Before submitting

- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue? Please add a link to it if that's the case.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.