feat: support GDPO #1986

Status: Closed

Commits (10):
- `f614e4a` squash (remove dependency update)
- `295e315` update dataset to new structure (yuki-97)
- `578698e` register env and remove run_gdpo_gsm8k.py (yuki-97)
- `16b0d53` align compute_advantage interface and move the initialization forward (yuki-97)
- `6e1fd96` fix a small bug forgot to import re
- `e6a28ad` revert dependency
- `0428ef1` update math_gdpo_data_processor and system_prompt (yuki-97)
- `665219d` add assert and revert some missing comments (yuki-97)
- `b7597c6` revert async change and enable fast test (yuki-97)
- `e898d01` addressed all comments and fixed bugs (yuki-97)
Submodules updated:

- `Megatron-Bridge` (432 files)
- `Megatron-LM` (751 files)
New file (`+62` lines), the GDPO training config:

```yaml
# GDPO: inherits from grpo_math_1B.yaml and overrides only what differs.
defaults: grpo_math_1B.yaml

grpo:
  adv_estimator:
    name: "gdpo"
    normalize_rewards: true
    use_leave_one_out_baseline: false

checkpointing:
  checkpoint_dir: "results/gdpo"

policy:
  model_name: "Qwen/Qwen2.5-1.5B-Instruct"
  logprob_batch_size: 4
  max_total_sequence_length: 1024
  megatron_cfg:
    optimizer:
      weight_decay: 0.0
    scheduler:
      lr_decay_style: "cosine"
      lr_warmup_iters: 10

# GDPO uses a single flat data config (GSM8K + math_gdpo_data_processor); replace parent's train/validation/default.
data:
  _override_: true

  max_input_seq_length: ${policy.max_total_sequence_length}
  shuffle: true
  num_workers: 1

  use_multiple_dataloader: false

  train:
    dataset_name: "gsm8k"
    split: train
  validation:
    dataset_name: "gsm8k"
    split: test

  default:
    prompt_file: null
    system_prompt_file: "examples/prompts/gsm8k.txt"
    processor: "math_gdpo_data_processor"
    env_name: "math_multi_reward"

env:
  math_multi_reward:
    num_workers: 8
    math_verify_impl: "hf_math_verify"

logger:
  wandb_enabled: true
  wandb:
    project: "gdpo-dev"
    name: "gdpo-dev-logger"
  swanlab:
    project: "gdpo-dev"
    name: "gdpo-dev-logger"
  mlflow:
    experiment_name: "gdpo-dev"
    run_name: "gdpo-dev-logger"
```
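Unlike plain GRPO's single scalar reward, the `math_multi_reward` environment feeds GDPO several reward signals per rollout. A hypothetical sketch of the resulting batch layout (the `reward_accuracy`/`reward_format` key names are illustrative assumptions, not the environment's actual schema):

```python
import torch

# Hypothetical per-component rewards for 4 rollouts (key names are made up).
repeated_batch = {
    "reward_accuracy": torch.tensor([1.0, 0.0, 1.0, 1.0]),  # answer correctness
    "reward_format": torch.tensor([1.0, 1.0, 0.0, 1.0]),    # <think>/<answer> tags present
}
# The aggregate that the existing GRPO code path consumes.
repeated_batch["total_reward"] = (
    repeated_batch["reward_accuracy"] + repeated_batch["reward_format"]
)
```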
New file (`+17` lines), the system prompt (referenced by the config above as `examples/prompts/gsm8k.txt`):

```text
You are a helpful AI assistant.

For every request, you should carefully think through the math problem step by step, then provide the final answer in integer format.

Steps for Each Request:
1. Think: Provide detailed, step-by-step reasoning, calculations, or derivations.
2. Produce Final Answer: After step-by-step reasoning, output the final answer in integer format.

Output Format:
<think>Your thoughts and reasoning</think>
<answer>Final answer in integer format</answer>

Important Notes:
1. You must include your reasoning steps inside <think>.
2. You must always output the Final Answer within <answer> after the reasoning steps are done.
3. You should consistently work through the solution step by step before giving the final answer.
4. The final answer can only be an integer.
```
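Since the prompt pins completions to the `<think>…</think><answer>…</answer>` format, a data processor or verifier can recover the integer answer with a regular expression (presumably what the "forgot to import re" fix below relates to). A minimal, hypothetical sketch; `extract_integer_answer` is not a function from this PR:

```python
import re

def extract_integer_answer(completion: str) -> int | None:
    """Return the integer inside <answer>...</answer>, or None if absent."""
    match = re.search(r"<answer>\s*(-?\d+)\s*</answer>", completion, re.DOTALL)
    return int(match.group(1)) if match else None

# Example:
assert extract_integer_answer("<think>7 * 6 = 42</think><answer>42</answer>") == 42
```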
Changes to the GRPO training module:

```diff
@@ -13,6 +13,7 @@
 # limitations under the License.
 import gc
 import os
+import re
 import time
 import warnings
 from concurrent.futures import ThreadPoolExecutor
@@ -29,6 +30,7 @@

 from nemo_rl.algorithms.advantage_estimator import (
     GRPOAdvantageEstimator,
+    GDPOAdvantageEstimator,
     ReinforcePlusPlusAdvantageEstimator,
 )
 from nemo_rl.algorithms.loss import (
@@ -46,6 +48,7 @@
     log_generation_metrics_to_wandb,
     print_performance_metrics,
     set_seed,
+    get_gdpo_reward_component_keys,
 )
 from nemo_rl.data import DataConfig
 from nemo_rl.data.collate_fn import rl_collate_fn
```
The advantage-estimator config docstrings now mention GDPO:

```diff
@@ -121,9 +124,9 @@ class AsyncGRPOConfig(TypedDict):


 class AdvEstimatorConfig(TypedDict):
-    """Configuration for advantage estimator (GRPO or Reinforce++)."""
+    """Configuration for advantage estimator (GRPO, GDPO, or Reinforce++)."""

-    name: str  # "grpo" or "reinforce_plus_plus"
+    name: str  # "grpo", "gdpo", or "reinforce_plus_plus"
     # GRPO specific
     normalize_rewards: NotRequired[bool]
     use_leave_one_out_baseline: NotRequired[bool]
```
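As a quick illustration (not part of the diff), the `grpo.adv_estimator` block from the YAML config above deserializes into this TypedDict roughly as:

```python
from typing import NotRequired, TypedDict

class AdvEstimatorConfig(TypedDict):
    name: str  # "grpo", "gdpo", or "reinforce_plus_plus"
    normalize_rewards: NotRequired[bool]
    use_leave_one_out_baseline: NotRequired[bool]

# Mirrors grpo.adv_estimator in the GDPO YAML config shown earlier.
cfg: AdvEstimatorConfig = {
    "name": "gdpo",
    "normalize_rewards": True,
    "use_leave_one_out_baseline": False,
}
```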
`scale_rewards` is refactored so the same clamp-and-rescale mapping is applied to the aggregate reward and to every GDPO reward component:

```diff
@@ -966,11 +969,16 @@ def scale_rewards(
     )

-    # Clamp and scale
-    rewards = torch.clamp(rewards, min=source_min, max=source_max)
-    scaled_rewards = target_min + (rewards - source_min) / (
-        source_max - source_min
-    ) * (target_max - target_min)
+    def _scale(reward_tensor: torch.Tensor) -> torch.Tensor:
+        r = torch.clamp(reward_tensor, min=source_min, max=source_max)
+        return target_min + (r - source_min) / (
+            source_max - source_min
+        ) * (target_max - target_min)
+
+    scaled_rewards = _scale(rewards)
     repeated_batch["total_reward"] = scaled_rewards
+    for key in get_gdpo_reward_component_keys(repeated_batch):
+        repeated_batch[key] = _scale(repeated_batch[key])

     return repeated_batch
```
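`get_gdpo_reward_component_keys` is imported from the utils module but its body is not part of this excerpt. A minimal sketch of what such a helper might look like, assuming per-component rewards are stored under a shared `reward_` key prefix (both the prefix and the signature are assumptions):

```python
def get_gdpo_reward_component_keys(batch) -> list[str]:
    """Collect keys of per-component reward tensors, excluding the aggregate.

    Assumes (hypothetically) that component keys use a "reward_" prefix.
    """
    return [
        key
        for key in batch.keys()
        if key.startswith("reward_") and key != "total_reward"
    ]
```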
```diff
@@ -1031,7 +1039,7 @@ def _create_advantage_estimator(master_config: MasterConfig):
         master_config: The master configuration dictionary.

     Returns:
-        An advantage estimator instance (GRPOAdvantageEstimator or ReinforcePlusPlusAdvantageEstimator).
+        An advantage estimator instance (GRPO, GDPO, or ReinforcePlus).

     Raises:
         ValueError: If the advantage estimator name is not recognized.
```
The factory gains a `"gdpo"` branch, which also guards against async rollouts:

```diff
@@ -1055,7 +1063,14 @@ def _create_advantage_estimator(master_config: MasterConfig):
     )

     adv_estimator_name = adv_estimator_config["name"]
-    if adv_estimator_name == "grpo":
+    if adv_estimator_name == "gdpo":
+        assert not _should_use_async_rollouts(master_config), (
+            "GDPO is not supported for async rollouts, "
+            "please set policy.generation.vllm_cfg.async_engine to false in your config."
+        )
+        adv_estimator = GDPOAdvantageEstimator(adv_estimator_config, loss_config)
+        print("  ✓ Using GDPO advantage estimator (multi-reward)")
+    elif adv_estimator_name == "grpo":
         adv_estimator = GRPOAdvantageEstimator(adv_estimator_config, loss_config)
         print("  ✓ Using GRPO advantage estimator")
     elif adv_estimator_name == "reinforce_plus_plus":
```
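`GDPOAdvantageEstimator` lives in `nemo_rl.algorithms.advantage_estimator` and is not shown in this diff. Purely as a reading aid, here is a sketch of the decoupled multi-reward idea the config suggests: compute a group-relative advantage per reward component, then sum. All names and details here are assumptions, not the repository's implementation:

```python
import torch

def gdpo_style_advantages(
    component_rewards: dict[str, torch.Tensor],  # each tensor has shape (batch,)
    group_ids: torch.Tensor,  # shape (batch,); rollouts of one prompt share an id
    normalize_rewards: bool = True,
    eps: float = 1e-6,
) -> torch.Tensor:
    """Sum of per-component, group-normalized advantages (illustrative only)."""
    total = torch.zeros_like(next(iter(component_rewards.values())))
    for rewards in component_rewards.values():
        adv = torch.zeros_like(rewards)
        for g in group_ids.unique():
            m = group_ids == g
            centered = rewards[m] - rewards[m].mean()  # group-mean baseline
            if normalize_rewards:
                centered = centered / (rewards[m].std() + eps)
            adv[m] = centered
        total = total + adv
    return total
```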
In `grpo_train`, the prompt token ids are stashed in the batch so dynamic-sampling filters keep them aligned with the per-component rewards:

```diff
@@ -1590,6 +1605,10 @@ def grpo_train(
             with timer.time("reward_calculation"):
                 # Extract rewards from final_batch
                 rewards = repeated_batch["total_reward"]
+                # Store input_ids in batch so that after dynamic_sampling it stays aligned with
+                # the (possibly filtered) batch: select_indices / from_batches / slice all
+                # apply to this key, so per-reward baselines use the same prompts as reward components.
+                repeated_batch["_input_ids_for_baseline"] = input_ids

             print("▶ Computing advantages...", flush=True)
             if master_config["grpo"].get("calculate_advantages_on_gpu"):
```
The hard-coded single-reward advantage computation is dropped; advantages now come from the estimator's `compute_advantage` (see below):

```diff
@@ -1644,10 +1663,10 @@
             # If the current batch is not enough to fill the buffer during dynamic sampling, we update the cache and process the next batch.
             if not is_batch_complete:
                 continue

             gen_step_metrics = {}
             if hasattr(policy_generation, "get_step_metrics"):
                 gen_step_metrics = policy_generation.get_step_metrics()
-            advantages = (rewards - baseline).unsqueeze(-1)

             # Save baseline for logging (before deletion)
             baseline_for_log = baseline.clone()
```
The `compute_advantage` call now passes the full batch through:

```diff
@@ -1778,6 +1797,7 @@
                 train_data["advantages"] = adv_estimator.compute_advantage(
                     prompt_ids=prompt_ids_for_adv,
                     rewards=rewards,
+                    repeated_batch=repeated_batch,
                     mask=mask,
                     logprobs_policy=train_data["prev_logprobs"],
                     logprobs_reference=train_data.get("reference_policy_logprobs"),
```

> Contributor (inline comment on the `repeated_batch=repeated_batch,` line, with a suggested change): wdyt just keep …
`async_grpo_train` gets the same baseline bookkeeping:

```diff
@@ -2724,6 +2744,8 @@ def async_grpo_train(
         del prompt_batched_flat

         rewards = repeated_batch["total_reward"]
+        # All estimators read _input_ids_for_baseline from repeated_batch
+        repeated_batch["_input_ids_for_baseline"] = prompt_ids_for_adv

         print(
             f" 📊 Rewards stats: min={rewards.min():.4f}, max={rewards.max():.4f}, mean={rewards.mean():.4f}, std={rewards.std():.4f}"
```
And its `compute_advantage` call is updated identically:

```diff
@@ -2809,6 +2831,7 @@
                 train_data["advantages"] = adv_estimator.compute_advantage(
                     prompt_ids=prompt_ids_for_adv,
                     rewards=rewards,
+                    repeated_batch=repeated_batch,
                     mask=mask,
                     logprobs_policy=train_data["prev_logprobs"],
                     logprobs_reference=train_data.get("reference_policy_logprobs"),
```
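Inferred from the two call sites above, the aligned `compute_advantage` interface that every estimator now implements looks roughly like the following Protocol (the exact annotations and return shape are assumptions):

```python
from typing import Any, Optional, Protocol

import torch

class SupportsComputeAdvantage(Protocol):
    def compute_advantage(
        self,
        prompt_ids: torch.Tensor,
        rewards: torch.Tensor,
        repeated_batch: Any,  # GDPO reads component rewards and "_input_ids_for_baseline" here
        mask: torch.Tensor,
        logprobs_policy: torch.Tensor,
        logprobs_reference: Optional[torch.Tensor],
    ) -> torch.Tensor: ...
```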