
feat(experimental): Divergence Proximal Policy Optimization #5117

Open
LeonEricsson wants to merge 8 commits into huggingface:main from LeonEricsson:feature/dppo

Conversation

@LeonEricsson
Collaborator

@LeonEricsson LeonEricsson commented Feb 17, 2026

Note: this PR incorporates #5107.

Read "Rethinking the Trust Region in LLM Reinforcement Learning" over the weekend. Really like this approach. It continues the recent push toward improved off-policy regulation. The work comes from the same authors as the DAPO paper.

What does this PR do?

Implements the proposed method, DPPO, as an experimental trainer.

DPPO replaces PPO/GRPO clipping with a principled trust region based on direct policy divergence estimates. The paper argues that PPO-style clipping, being based on the probability ratio of the sampled token, acts as a noisy single-sample Monte Carlo estimate of the true policy divergence, over-penalizing low-probability tokens and under-penalizing high-probability ones.
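To spell out that argument in my own notation (a paraphrase, not the paper's exact formulation): the divergence at position $t$ is an expectation over the whole vocabulary,

$$
D_{\mathrm{KL}}\big(\mu_{\text{old}}(\cdot \mid s_t)\,\|\,\pi(\cdot \mid s_t)\big)
= \sum_{v \in \mathcal{V}} \mu_{\text{old}}(v \mid s_t)\,\log\frac{\mu_{\text{old}}(v \mid s_t)}{\pi(v \mid s_t)},
$$

whereas clipping only constrains the ratio $r_t = \pi(a_t \mid s_t)/\mu_{\text{old}}(a_t \mid s_t)$ at the single sampled token $a_t \sim \mu_{\text{old}}(\cdot \mid s_t)$. Since $\mathbb{E}_{a_t \sim \mu_{\text{old}}}[-\log r_t]$ equals the KL above, $-\log r_t$ is a one-sample Monte Carlo estimate of it, and that estimate is very noisy whenever $\mu_{\text{old}}(a_t \mid s_t)$ is small.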


Implementation differences from GRPOTrainer

DPPO uses a two-policy setup, in contrast to GRPO’s three-policy formulation: a sampling policy $\mu_{\text{old}}$ (used to generate rollouts) and a current policy $\pi$ (updated during training). Instead of GRPO’s old/new-policy structure, DPPO computes its trust-region terms directly between $\pi$ and $\mu_{\text{old}}$, using statistics collected from $\mu_{\text{old}}$ at rollout time, so the separate recomputation of the rollout log-probabilities is removed.
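A minimal sketch of what that two-policy bookkeeping looks like (function and variable names are illustrative, not the trainer's actual API): the sampler's per-token log-probs are stored as $\mu_{\text{old}}$'s statistics, and only $\pi$ is run during optimization.

```python
import torch

def policy_token_logps(model, input_ids, completion_ids):
    """log pi(a_t | s_t) for each completion token (illustrative helper).
    Assumes the completion tokens are the last positions of `input_ids`."""
    T = completion_ids.size(1)
    logits = model(input_ids).logits[:, -T - 1 : -1]                  # (B, T, V)
    logps = torch.log_softmax(logits, dim=-1)
    return logps.gather(-1, completion_ids.unsqueeze(-1)).squeeze(-1)  # (B, T)

# Rollout: mu_old's per-token log-probs come back with the sampled completions
# and are stored as-is -- no extra old-policy forward pass before optimization.
# Gradient step: only the current policy pi is evaluated; the binary_* trust-region
# terms are built from the per-token log-ratio, e.g.
#   log_ratio = policy_token_logps(model, input_ids, completion_ids) - mu_old_logps
```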

Depending on the divergence type, DPPO requires additional statistics from the policy distributions that are not computed in GRPO. The binary_* objectives only require sampled-token log-probabilities (already available in GRPO-style rollouts). In contrast, the topk_* objectives additionally require $\mu_{\text{old}}$’s top-K distribution, as well as $\pi$ evaluated on those same token IDs. This constitutes the main deviation from GRPOTrainer.

To support this, we modify the generation step to compute top-K log-probabilities at rollout time. During each gradient step, we then evaluate $\pi$ on the token IDs corresponding to $\mu_{\text{old}}$’s top-K positions.
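A rough sketch of the topk_* wiring under those assumptions (names, shapes, and the default K are mine, not the PR's actual code): take top-K log-probs from the sampling distribution at generation time, then gather $\pi$'s log-probs at the same token IDs during each gradient step.

```python
import torch

def topk_rollout_stats(sampling_logits, k=20):
    """Rollout time: mu_old's top-K log-probs and token ids per position.
    `sampling_logits` (B, T, V) are the logits produced while generating;
    k=20 is illustrative, not the trainer's default."""
    logps = torch.log_softmax(sampling_logits, dim=-1)
    topk_logps, topk_ids = logps.topk(k, dim=-1)             # (B, T, K) each
    return topk_logps, topk_ids

def policy_topk_logps(model, input_ids, topk_ids):
    """Gradient step: evaluate the current policy pi at mu_old's top-K ids."""
    T = topk_ids.size(1)
    logits = model(input_ids).logits[:, -T - 1 : -1]          # (B, T, V)
    logps = torch.log_softmax(logits, dim=-1)
    return logps.gather(-1, topk_ids)                         # (B, T, K)

# A topk_* divergence estimate between mu_old and pi can then be formed from
# the pair (mu_old_topk_logps, pi_topk_logps) at each token position.
```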

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@LeonEricsson
Collaborator Author

oh, I completely missed that there was already a draft PR for this #5065, sorry @catherinelee274

