Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
📝 Walkthrough

This PR adds Mixture of Experts (MoE) metrics tracking and per-layer logging capabilities to the NeMo RL training pipeline. Configuration flags are introduced to enable optional MoE metrics collection, a new function computes and aggregates MoE auxiliary losses, and training algorithms are updated to propagate these metrics through their results.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches
❌ Failed checks (2 warnings)
✅ Passed checks (2 passed)
Actionable comments posted: 1
🧹 Nitpick comments (1)
nemo_rl/models/policy/__init__.py (1)
128-129: MoE flags correctly added to MegatronConfig
`track_moe_metrics` and `moe_per_layer_logging` are cleanly integrated as explicit booleans and align with how configs use them; no behavioral issues here. Consider adding brief docs/comments for these fields alongside other Megatron MoE knobs in a follow-up for discoverability.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
examples/configs/dpo.yaml (1 hunks)
examples/configs/grpo_math_1B.yaml (1 hunks)
examples/configs/sft.yaml (1 hunks)
examples/configs/vlm_grpo_3B.yaml (1 hunks)
examples/configs/vlm_grpo_3B_megatron.yaml (1 hunks)
examples/penguin/grpo_dapo17k_bytedtsinghua_qwen3_4binstruct_nf.yaml (1 hunks)
nemo_rl/algorithms/dpo.py (1 hunks)
nemo_rl/algorithms/grpo.py (2 hunks)
nemo_rl/algorithms/sft.py (1 hunks)
nemo_rl/models/megatron/common.py (2 hunks)
nemo_rl/models/policy/__init__.py (1 hunks)
nemo_rl/models/policy/lm_policy.py (1 hunks)
nemo_rl/models/policy/megatron_policy_worker.py (4 hunks)
tests/unit/models/megatron/test_moe_metrics.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
!(**/tests/**|**/test_*.py|**/test_*.sh)
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year
Files:
examples/configs/sft.yaml
examples/configs/vlm_grpo_3B.yaml
examples/penguin/grpo_dapo17k_bytedtsinghua_qwen3_4binstruct_nf.yaml
nemo_rl/algorithms/dpo.py
nemo_rl/algorithms/sft.py
examples/configs/grpo_math_1B.yaml
nemo_rl/models/policy/__init__.py
examples/configs/dpo.yaml
tests/unit/models/megatron/test_moe_metrics.py
nemo_rl/models/policy/lm_policy.py
nemo_rl/models/policy/megatron_policy_worker.py
nemo_rl/models/megatron/common.py
examples/configs/vlm_grpo_3B_megatron.yaml
nemo_rl/algorithms/grpo.py
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code
Files:
nemo_rl/algorithms/dpo.py
nemo_rl/algorithms/sft.py
nemo_rl/models/policy/__init__.py
tests/unit/models/megatron/test_moe_metrics.py
nemo_rl/models/policy/lm_policy.py
nemo_rl/models/policy/megatron_policy_worker.py
nemo_rl/models/megatron/common.py
nemo_rl/algorithms/grpo.py
nemo_rl/**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes
Files:
nemo_rl/algorithms/dpo.py
nemo_rl/algorithms/sft.py
nemo_rl/models/policy/__init__.py
nemo_rl/models/policy/lm_policy.py
nemo_rl/models/policy/megatron_policy_worker.py
nemo_rl/models/megatron/common.py
nemo_rl/algorithms/grpo.py
**/*.{py,sh}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)
Files:
nemo_rl/algorithms/dpo.py
nemo_rl/algorithms/sft.py
nemo_rl/models/policy/__init__.py
tests/unit/models/megatron/test_moe_metrics.py
nemo_rl/models/policy/lm_policy.py
nemo_rl/models/policy/megatron_policy_worker.py
nemo_rl/models/megatron/common.py
nemo_rl/algorithms/grpo.py
🧬 Code graph analysis (4)
nemo_rl/algorithms/dpo.py (1)
nemo_rl/data/packing/metrics.py (1)
`update` (52-91)
nemo_rl/algorithms/sft.py (1)
nemo_rl/data/packing/metrics.py (1)
`update` (52-91)
tests/unit/models/megatron/test_moe_metrics.py (1)
nemo_rl/models/megatron/common.py (1)
`get_moe_metrics` (602-645)
nemo_rl/models/policy/megatron_policy_worker.py (1)
nemo_rl/models/megatron/common.py (1)
`get_moe_metrics` (602-645)
🪛 Ruff (0.14.6)
tests/unit/models/megatron/test_moe_metrics.py
38-38: Unused lambda argument: args
(ARG005)
38-38: Unused lambda argument: kwargs
(ARG005)
82-82: Unused lambda argument: args
(ARG005)
82-82: Unused lambda argument: kwargs
(ARG005)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: sphinx-build / Build docs
- GitHub Check: Lint check
- GitHub Check: Lint check
- GitHub Check: Post automodel integration comment / Comment on PR
- GitHub Check: Post submodule check comment / Comment on PR
🔇 Additional comments (18)
examples/configs/vlm_grpo_3B_megatron.yaml (1)
146-147: MoE metric flags are wired correctly in VLM GRPO config
`track_moe_metrics` and `moe_per_layer_logging` are added under `megatron_cfg` with explicit `False` defaults and consistent naming with the TypedDict.

nemo_rl/algorithms/dpo.py (1)
606-609: DPO now cleanly surfaces MoE metrics into logged metrics

Conditionally merging `train_results["moe_metrics"]` under a `moe/` prefix is consistent with GRPO/SFT and keeps default behavior unchanged when MoE tracking is disabled. This looks good.

examples/configs/dpo.yaml (1)
121-122: MoE flags added consistently in DPO config

Adding `track_moe_metrics` and `moe_per_layer_logging` with explicit `False` defaults under `megatron_cfg` keeps configs aligned with the new TypedDict while remaining inert when `enabled: false`.

nemo_rl/models/policy/lm_policy.py (1)
533-535: MoE metrics correctly exposed from policy workers

Forwarding `results[0]["moe_metrics"]` into `aggregated_results` behind a presence check mirrors the existing pattern for global loss/grad_norm and lets higher-level algorithms log MoE metrics without further refactoring.

examples/configs/grpo_math_1B.yaml (1)
117-118: GRPO math config MoE flags are consistent and explicit

The added `track_moe_metrics` and `moe_per_layer_logging` entries under `megatron_cfg` match the TypedDict and other example configs, with clear `False` defaults.

nemo_rl/algorithms/sft.py (1)
489-492: SFT training now surfaces MoE metrics consistently

The conditional `moe/`-prefixed merge mirrors DPO/GRPO, leaving existing behavior intact when MoE tracking is disabled and making MoE stats available to the logger when present.

nemo_rl/algorithms/grpo.py (1)
1328-1331: GRPO (sync & async) cleanly integrate optional MoE metrics

Both `grpo_train` and `async_grpo_train` now attach `moe/`-prefixed entries from `train_results["moe_metrics"]` into the metrics dict behind a simple presence check, so logging gains MoE visibility without affecting runs where MoE tracking is disabled.

Also applies to: 2251-2254
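The prefix-merge pattern described in this comment can be sketched in isolation. This is not the actual GRPO code, just a minimal illustration of the presence-checked merge; the helper name is invented.

```python
# Hedged sketch of merging optional "moe_metrics" into a flat metrics
# dict under a "moe/" prefix; a missing key leaves metrics untouched.
def merge_moe_metrics(metrics: dict, train_results: dict) -> dict:
    if "moe_metrics" in train_results:  # no-op for non-MoE runs
        for name, value in train_results["moe_metrics"].items():
            metrics[f"moe/{name}"] = value
    return metrics


merged = merge_moe_metrics(
    {"loss": 0.5}, {"moe_metrics": {"load_balancing_loss": 0.01}}
)
print(merged)  # {'loss': 0.5, 'moe/load_balancing_loss': 0.01}
```

Because the guard is a plain membership test, runs with MoE tracking disabled pay no cost and log exactly the same keys as before.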
examples/configs/vlm_grpo_3B.yaml (1)
105-106: LGTM! Consistent configuration.

The MOE metric flags are added consistently with other config files in this PR.
examples/penguin/grpo_dapo17k_bytedtsinghua_qwen3_4binstruct_nf.yaml (1)
103-104: LGTM! Configuration consistent across examples.

nemo_rl/models/policy/megatron_policy_worker.py (4)
119-119: LGTM! Import added correctly.
987-987: LGTM! Microbatch counter initialization.

The counter is correctly initialized before the global batch loop to accumulate across all microbatches.

1054-1055: LGTM! Microbatch tracking logic is sound.

The counter correctly accumulates microbatches across global batches with a clear explanatory comment.
1185-1193: LGTM! MOE metrics collection implemented correctly.

The metrics collection logic is sound:
- Properly guarded by the config flag
- Loss scaling averages across all microbatches
- The `max(1, total_num_microbatches)` safeguard prevents division by zero
- Only adds metrics when non-empty
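The division-by-zero safeguard in that list is worth seeing in isolation. A toy illustration (the function name is made up; only the `max(1, ...)` idiom comes from the review):

```python
# Averaging an accumulated quantity over microbatches; max(1, n) keeps a
# run with zero microbatches from raising ZeroDivisionError and simply
# returns the (zero) accumulator unchanged.
def average_over_microbatches(accumulated: float, total_num_microbatches: int) -> float:
    return accumulated / max(1, total_num_microbatches)


print(average_over_microbatches(6.0, 3))  # 2.0
print(average_over_microbatches(0.0, 0))  # 0.0, not an error
```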
tests/unit/models/megatron/test_moe_metrics.py (3)
1-27: LGTM! Test file setup is correct.

Copyright header and imports are appropriate. The helper function `_make_fake_tracker` correctly mimics the tracker structure.
29-60: LGTM! Empty tracker test is thorough.

The test correctly validates that an empty tracker returns an empty metrics dict and that cleanup is called.

Note: The static analysis warnings about unused lambda arguments (lines 38, 82) are false positives; these lambdas mock functions that require those signatures.
63-117: LGTM! Aggregation test validates core functionality.

The test thoroughly validates:
- Correct scaling of losses by `loss_scale`
- Proper averaging across layers
- Per-layer logging when enabled
- Cleanup is called

All assertions are mathematically correct.
nemo_rl/models/megatron/common.py (2)
29-33: LGTM! MOE utilities imported correctly.

The imports are specific and from the appropriate Megatron-Core module.
602-645: LGTM! MOE metrics function is well-implemented.

The implementation is clean and correct:
- Reduces losses across ranks before aggregation
- Properly scales and averages across MoE layers
- Handles edge cases (empty tracker, zero elements)
- Clears tracker to prevent accumulation across steps
- Comprehensive docstring

The safeguard `num_layers = int(loss_list.numel()) if loss_list.numel() > 0 else 1` prevents division by zero.
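The scale-average-clear pattern this comment describes can be sketched without torch. This is not the real `get_moe_metrics` (which also reduces losses across ranks, omitted here); all names besides the pattern itself are assumptions.

```python
# Torch-free sketch: scale each tracked per-layer aux loss, average
# across MoE layers, and clear the tracker so values do not accumulate
# into the next training step.
def aggregate_moe_tracker(
    tracker: dict[str, list[float]], loss_scale: float
) -> dict[str, float]:
    metrics = {}
    for name, per_layer_losses in tracker.items():
        num_layers = len(per_layer_losses) if per_layer_losses else 1  # avoid /0
        metrics[name] = sum(x * loss_scale for x in per_layer_losses) / num_layers
    tracker.clear()  # mirror of the "clears tracker" step above
    return metrics


tracker = {"load_balancing_loss": [0.25, 0.75]}
print(aggregate_moe_tracker(tracker, loss_scale=2.0))  # {'load_balancing_loss': 1.0}
print(tracker)  # {} after cleanup
```

An empty tracker falls through the loop and returns an empty dict, matching the empty-tracker behavior the tests validate.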
Signed-off-by: yuanhangs <yuanhangs@nvidia.com>
What does this PR do?

Port MoE load-balancing metrics from Megatron-LM to nemo-rl.
Usage
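Based on the flags this PR adds to the example configs, enabling the metrics would look like the following YAML fragment. The `policy.megatron_cfg` nesting is assumed from the review comments; both flags ship disabled.

```yaml
policy:
  megatron_cfg:
    enabled: true
    # Both flags default to false in examples/configs/*.yaml.
    track_moe_metrics: true
    moe_per_layer_logging: true  # also emit per-layer aux losses
```

When enabled, the resulting metrics are logged under a `moe/` prefix by GRPO, SFT, and DPO.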
Before your PR is "Ready for review"
Pre checks:
Additional Information
Summary by CodeRabbit
Release Notes
New Features
Configuration
`track_moe_metrics` and `moe_per_layer_logging` flags across training configurations (disabled by default)

Tests