
feat: Add moe load balancing metrics #1520

Merged
terrykong merged 9 commits into main from yifu/moe_metrics_main
Dec 3, 2025

Conversation

@yfw
Contributor

@yfw yfw commented Nov 13, 2025

What does this PR do ?

Port moe load balancing metrics from Megatron-LM to nemo-rl

Issues

List issues that this PR closes (syntax):

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this
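A minimal example of enabling these metrics could look like the fragment below. The two flag names come from this PR's config changes; the surrounding `policy.megatron_cfg` nesting is an assumption based on the example configs listed in the review, and the values shown are illustrative:

```yaml
policy:
  megatron_cfg:
    track_moe_metrics: true      # collect MoE load-balancing metrics (default: false)
    moe_per_layer_logging: true  # additionally log each MoE layer's losses (default: false)
```

When enabled, the resulting metrics appear in the training logs under a `moe/` prefix.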

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

Summary by CodeRabbit

Release Notes

  • New Features

    • Added MOE (Mixture of Experts) metrics tracking to training pipelines
    • Per-layer metrics logging available for detailed MOE diagnostics
  • Configuration

    • Introduced track_moe_metrics and moe_per_layer_logging flags across training configurations (disabled by default)
  • Tests

    • Added comprehensive unit tests for MOE metrics collection and aggregation


Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
@github-actions

✅ Submodule Fast-Forward Check Results

Check based on commit: b9f870f (PR #1520 from yifu/moe_metrics_main)

✅ Submodules that are properly updated:

Megatron-Bridge: ✅ PR branch is ahead of main branch (fast-forward)

All submodule changes look good! ✨

Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
@yfw yfw marked this pull request as ready for review December 1, 2025 18:49
@yfw yfw requested review from a team as code owners December 1, 2025 18:49
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
@coderabbitai
Contributor

coderabbitai Bot commented Dec 1, 2025

📝 Walkthrough

Walkthrough

This PR adds Mixture of Experts (MOE) metrics tracking and per-layer logging capabilities to the NeMo RL training pipeline. Configuration flags are introduced to enable optional MOE metrics collection, a new function computes and aggregates MOE auxiliary losses, and training algorithms are updated to propagate these metrics through their results.

Changes

Cohort / File(s) — Summary

  • Configuration files (examples/configs/dpo.yaml, examples/configs/grpo_math_1B.yaml, examples/configs/sft.yaml, examples/configs/vlm_grpo_3B.yaml, examples/configs/vlm_grpo_3B_megatron.yaml, examples/penguin/grpo_dapo17k_bytedtsinghua_qwen3_4binstruct_nf.yaml): Added two new boolean configuration flags under megatron_cfg, track_moe_metrics and moe_per_layer_logging, both defaulting to False
  • Algorithm metric aggregation (nemo_rl/algorithms/dpo.py, nemo_rl/algorithms/grpo.py, nemo_rl/algorithms/sft.py): Updated to conditionally include moe_metrics from train_results in the training metrics dictionary with a "moe/" prefix
  • MOE metrics computation (nemo_rl/models/megatron/common.py): Added a new get_moe_metrics() function that computes, scales, and aggregates MOE auxiliary losses with optional per-layer logging
  • Policy configuration schema (nemo_rl/models/policy/__init__.py): Extended the MegatronConfig TypedDict with track_moe_metrics and moe_per_layer_logging boolean fields
  • Policy training integration (nemo_rl/models/policy/lm_policy.py, nemo_rl/models/policy/megatron_policy_worker.py): Added conditional MOE metrics propagation from worker results and integrated metrics computation in the training loop using the new get_moe_metrics function
  • MOE metrics tests (tests/unit/models/megatron/test_moe_metrics.py): Added unit tests covering get_moe_metrics behavior with empty trackers and aggregation with per-layer logging

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Key areas requiring attention:
    • nemo_rl/models/megatron/common.py — Verify loss scaling logic and averaging across MOE layers is correct
    • nemo_rl/models/policy/megatron_policy_worker.py — Confirm proper initialization and increment of total_num_microbatches and correct loss scale calculation (1/max(1, total_num_microbatches))
    • tests/unit/models/megatron/test_moe_metrics.py — Ensure test coverage accurately reflects expected aggregation behavior and per-layer logging format

Possibly related PRs

Suggested labels

CI:L1

Suggested reviewers

  • terrykong
  • chtruong814
  • parthchadha
  • yaoyu-33

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 61.54%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
  • Test Results For Major Changes — ⚠️ Warning: The PR introduces a major feature affecting 3 algorithms across 11+ files, but the description lacks testing information, convergence validation, performance analysis, and concrete details; it is essentially a template with placeholders. Update the description with unit test results, integration testing evidence across DPO/GRPO/SFT, convergence validation showing no regression, performance impact analysis, example metric outputs, and the requested docstrings for the configuration flags.
✅ Passed checks (2 passed)
  • Title check — ✅ Passed: The title "feat: Add moe load balancing metrics" accurately reflects the main changes: introducing MOE metrics tracking with configuration flags and new metric extraction logic across multiple algorithm files.
  • Description Check — ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.


@yfw yfw added the CI:L1 Run doctests, unit tests, and functional tests label Dec 1, 2025
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
nemo_rl/models/policy/__init__.py (1)

128-129: MoE flags correctly added to MegatronConfig

track_moe_metrics and moe_per_layer_logging are cleanly integrated as explicit booleans and align with how configs use them; no behavioral issues here. Consider adding brief docs/comments for these fields alongside other Megatron MoE knobs in a follow-up for discoverability.
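The follow-up the reviewer suggests (brief docs for the new fields) might look roughly like the sketch below. The class and field names match the PR; all other fields are omitted, and the docstring wording is hypothetical:

```python
from typing import TypedDict


class MegatronConfig(TypedDict):
    """Sketch of the new MoE-related fields only (other Megatron knobs omitted).

    Attributes:
        track_moe_metrics: Collect MoE load-balancing (auxiliary-loss) metrics
            during training. Recommended default: False.
        moe_per_layer_logging: Additionally emit one metric per MoE layer for
            detailed diagnostics. Recommended default: False.
    """

    track_moe_metrics: bool
    moe_per_layer_logging: bool


# Defaults mirror the example YAML configs touched by this PR.
cfg: MegatronConfig = {"track_moe_metrics": False, "moe_per_layer_logging": False}
print(cfg["track_moe_metrics"])  # False
```

Per the repository's coding guidelines, the YAML files remain the source of truth for these defaults; the TypedDict only documents them.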

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 25ff3f6 and 75696fd.

📒 Files selected for processing (14)
  • examples/configs/dpo.yaml (1 hunks)
  • examples/configs/grpo_math_1B.yaml (1 hunks)
  • examples/configs/sft.yaml (1 hunks)
  • examples/configs/vlm_grpo_3B.yaml (1 hunks)
  • examples/configs/vlm_grpo_3B_megatron.yaml (1 hunks)
  • examples/penguin/grpo_dapo17k_bytedtsinghua_qwen3_4binstruct_nf.yaml (1 hunks)
  • nemo_rl/algorithms/dpo.py (1 hunks)
  • nemo_rl/algorithms/grpo.py (2 hunks)
  • nemo_rl/algorithms/sft.py (1 hunks)
  • nemo_rl/models/megatron/common.py (2 hunks)
  • nemo_rl/models/policy/__init__.py (1 hunks)
  • nemo_rl/models/policy/lm_policy.py (1 hunks)
  • nemo_rl/models/policy/megatron_policy_worker.py (4 hunks)
  • tests/unit/models/megatron/test_moe_metrics.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
!(**/tests/**|**/test_*.py|**/test_*.sh)

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year

Files:

  • examples/configs/sft.yaml
  • examples/configs/vlm_grpo_3B.yaml
  • examples/penguin/grpo_dapo17k_bytedtsinghua_qwen3_4binstruct_nf.yaml
  • nemo_rl/algorithms/dpo.py
  • nemo_rl/algorithms/sft.py
  • examples/configs/grpo_math_1B.yaml
  • nemo_rl/models/policy/__init__.py
  • examples/configs/dpo.yaml
  • tests/unit/models/megatron/test_moe_metrics.py
  • nemo_rl/models/policy/lm_policy.py
  • nemo_rl/models/policy/megatron_policy_worker.py
  • nemo_rl/models/megatron/common.py
  • examples/configs/vlm_grpo_3B_megatron.yaml
  • nemo_rl/algorithms/grpo.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code

Files:

  • nemo_rl/algorithms/dpo.py
  • nemo_rl/algorithms/sft.py
  • nemo_rl/models/policy/__init__.py
  • tests/unit/models/megatron/test_moe_metrics.py
  • nemo_rl/models/policy/lm_policy.py
  • nemo_rl/models/policy/megatron_policy_worker.py
  • nemo_rl/models/megatron/common.py
  • nemo_rl/algorithms/grpo.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes

Files:

  • nemo_rl/algorithms/dpo.py
  • nemo_rl/algorithms/sft.py
  • nemo_rl/models/policy/__init__.py
  • nemo_rl/models/policy/lm_policy.py
  • nemo_rl/models/policy/megatron_policy_worker.py
  • nemo_rl/models/megatron/common.py
  • nemo_rl/algorithms/grpo.py
**/*.{py,sh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)

Files:

  • nemo_rl/algorithms/dpo.py
  • nemo_rl/algorithms/sft.py
  • nemo_rl/models/policy/__init__.py
  • tests/unit/models/megatron/test_moe_metrics.py
  • nemo_rl/models/policy/lm_policy.py
  • nemo_rl/models/policy/megatron_policy_worker.py
  • nemo_rl/models/megatron/common.py
  • nemo_rl/algorithms/grpo.py
🧬 Code graph analysis (4)
nemo_rl/algorithms/dpo.py (1)
nemo_rl/data/packing/metrics.py (1)
  • update (52-91)
nemo_rl/algorithms/sft.py (1)
nemo_rl/data/packing/metrics.py (1)
  • update (52-91)
tests/unit/models/megatron/test_moe_metrics.py (1)
nemo_rl/models/megatron/common.py (1)
  • get_moe_metrics (602-645)
nemo_rl/models/policy/megatron_policy_worker.py (1)
nemo_rl/models/megatron/common.py (1)
  • get_moe_metrics (602-645)
🪛 Ruff (0.14.6)
tests/unit/models/megatron/test_moe_metrics.py

38-38: Unused lambda argument: args

(ARG005)


38-38: Unused lambda argument: kwargs

(ARG005)


82-82: Unused lambda argument: args

(ARG005)


82-82: Unused lambda argument: kwargs

(ARG005)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: sphinx-build / Build docs
  • GitHub Check: Lint check
  • GitHub Check: Lint check
  • GitHub Check: Post automodel integration comment / Comment on PR
  • GitHub Check: Post submodule check comment / Comment on PR
🔇 Additional comments (18)
examples/configs/vlm_grpo_3B_megatron.yaml (1)

146-147: MoE metric flags are wired correctly in VLM GRPO config

track_moe_metrics and moe_per_layer_logging are added under megatron_cfg with explicit False defaults and consistent naming with the TypedDict.

nemo_rl/algorithms/dpo.py (1)

606-609: DPO now cleanly surfaces MoE metrics into logged metrics

Conditionally merging train_results["moe_metrics"] under a moe/ prefix is consistent with GRPO/SFT and keeps default behavior unchanged when MoE tracking is disabled. This looks good.
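The conditional merge described here can be sketched generically. The `train_results`/`moe_metrics` names and the `moe/` prefix follow the review text; the helper function itself is hypothetical (the real code inlines this at each algorithm's call site):

```python
def merge_moe_metrics(metrics: dict, train_results: dict) -> dict:
    # Only attach MoE entries when the worker actually produced them,
    # so runs with track_moe_metrics disabled are unaffected.
    if "moe_metrics" in train_results:
        metrics.update(
            {f"moe/{name}": value for name, value in train_results["moe_metrics"].items()}
        )
    return metrics


metrics = merge_moe_metrics(
    {"loss": 0.5}, {"moe_metrics": {"load_balancing_loss": 0.01}}
)
print(metrics)  # {'loss': 0.5, 'moe/load_balancing_loss': 0.01}
```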

examples/configs/dpo.yaml (1)

121-122: MoE flags added consistently in DPO config

Adding track_moe_metrics and moe_per_layer_logging with explicit False defaults under megatron_cfg keeps configs aligned with the new TypedDict while remaining inert when enabled: false.

nemo_rl/models/policy/lm_policy.py (1)

533-535: MoE metrics correctly exposed from policy workers

Forwarding results[0]["moe_metrics"] into aggregated_results behind a presence check mirrors the existing pattern for global loss/grad_norm and lets higher-level algorithms log MoE metrics without further refactoring.

examples/configs/grpo_math_1B.yaml (1)

117-118: GRPO math config MoE flags are consistent and explicit

The added track_moe_metrics and moe_per_layer_logging entries under megatron_cfg match the TypedDict and other example configs, with clear False defaults.

nemo_rl/algorithms/sft.py (1)

489-492: SFT training now surfaces MoE metrics consistently

The conditional moe/-prefixed merge mirrors DPO/GRPO, leaving existing behavior intact when MoE tracking is disabled and making MoE stats available to the logger when present.

nemo_rl/algorithms/grpo.py (1)

1328-1331: GRPO (sync & async) cleanly integrate optional MoE metrics

Both grpo_train and async_grpo_train now attach moe/-prefixed entries from train_results["moe_metrics"] into the metrics dict behind a simple presence check, so logging gains MoE visibility without affecting runs where MoE tracking is disabled.

Also applies to: 2251-2254

examples/configs/vlm_grpo_3B.yaml (1)

105-106: LGTM! Consistent configuration.

The MOE metric flags are added consistently with other config files in this PR.

examples/penguin/grpo_dapo17k_bytedtsinghua_qwen3_4binstruct_nf.yaml (1)

103-104: LGTM! Configuration consistent across examples.

nemo_rl/models/policy/megatron_policy_worker.py (4)

119-119: LGTM! Import added correctly.


987-987: LGTM! Microbatch counter initialization.

The counter is correctly initialized before the global batch loop to accumulate across all microbatches.


1054-1055: LGTM! Microbatch tracking logic is sound.

The counter correctly accumulates microbatches across global batches with a clear explanatory comment.


1185-1193: LGTM! MOE metrics collection implemented correctly.

The metrics collection logic is sound:

  • Properly guarded by the config flag
  • Loss scaling averages across all microbatches
  • The max(1, total_num_microbatches) safeguard prevents division by zero
  • Only adds metrics when non-empty
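The two points the reviewer highlights — the counter accumulating across global batches and the division-by-zero safeguard in the loss scale — can be illustrated with a small sketch (the microbatch schedule shown is hypothetical):

```python
def compute_loss_scale(total_num_microbatches: int) -> float:
    # max(1, ...) guards against division by zero if no microbatches ran.
    return 1.0 / max(1, total_num_microbatches)


# The counter accumulates across all global batches, mirroring the
# worker's training loop.
total_num_microbatches = 0
for num_microbatches_in_batch in [4, 4, 2]:  # hypothetical schedule
    total_num_microbatches += num_microbatches_in_batch

print(compute_loss_scale(total_num_microbatches))  # 0.1
print(compute_loss_scale(0))  # 1.0
```

Scaling each accumulated auxiliary loss by this factor yields a per-microbatch average rather than a sum that grows with batch count.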
tests/unit/models/megatron/test_moe_metrics.py (3)

1-27: LGTM! Test file setup is correct.

Copyright header and imports are appropriate. The helper function _make_fake_tracker correctly mimics the tracker structure.


29-60: LGTM! Empty tracker test is thorough.

The test correctly validates that an empty tracker returns an empty metrics dict and that cleanup is called.

Note: The static analysis warnings about unused lambda arguments (lines 38, 82) are false positives—these lambdas are mocking functions that require those signatures.
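For reference, ARG005 flags unused lambda parameters; Ruff skips parameters whose names match its dummy-variable pattern (underscore-prefixed by default), so mocks that must accept an arbitrary signature can be written to satisfy the linter without behavior changes. A hypothetical sketch:

```python
# Flagged by Ruff ARG005: `args` and `kwargs` are accepted but never used.
noisy_mock = lambda *args, **kwargs: None

# Equivalent mock that Ruff accepts: underscore-prefixed parameters signal
# the arguments are intentionally ignored.
quiet_mock = lambda *_args, **_kwargs: None

print(noisy_mock(1, key="value"), quiet_mock(1, key="value"))  # None None
```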


63-117: LGTM! Aggregation test validates core functionality.

The test thoroughly validates:

  • Correct scaling of losses by loss_scale
  • Proper averaging across layers
  • Per-layer logging when enabled
  • Cleanup is called

All assertions are mathematically correct.

nemo_rl/models/megatron/common.py (2)

29-33: LGTM! MOE utilities imported correctly.

The imports are specific and from the appropriate Megatron-Core module.


602-645: LGTM! MOE metrics function is well-implemented.

The implementation is clean and correct:

  • Reduces losses across ranks before aggregation
  • Properly scales and averages across MoE layers
  • Handles edge cases (empty tracker, zero elements)
  • Clears tracker to prevent accumulation across steps
  • Comprehensive docstring

The safeguard num_layers = int(loss_list.numel()) if loss_list.numel() > 0 else 1 prevents division by zero.
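A dependency-free sketch of the aggregation described above, for orientation only: the real get_moe_metrics() in nemo_rl/models/megatron/common.py operates on Megatron-Core's auxiliary-loss tracker and also reduces losses across ranks, whereas the `{name: [per-layer losses]}` tracker layout here is a simplification.

```python
def get_moe_metrics_sketch(
    tracker: dict[str, list[float]],
    loss_scale: float,
    per_layer_logging: bool = False,
) -> dict[str, float]:
    metrics: dict[str, float] = {}
    for name, per_layer_losses in tracker.items():
        if not per_layer_losses:  # empty tracker entry: nothing to report
            continue
        scaled = [loss * loss_scale for loss in per_layer_losses]
        num_layers = max(1, len(scaled))  # guard against division by zero
        metrics[name] = sum(scaled) / num_layers  # average across MoE layers
        if per_layer_logging:
            for layer_idx, loss in enumerate(scaled):
                metrics[f"{name}_layer_{layer_idx}"] = loss
    tracker.clear()  # prevent accumulation across training steps
    return metrics


tracker = {"load_balancing_loss": [0.2, 0.4]}
result = get_moe_metrics_sketch(tracker, loss_scale=0.5, per_layer_logging=True)
print(sorted(result))
print(tracker)  # {} — the tracker is cleared after aggregation
```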

Comment thread on examples/configs/sft.yaml (outdated)
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
@yfw yfw requested a review from a team as a code owner December 2, 2025 02:31
@yfw yfw added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Dec 2, 2025
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
@yfw yfw added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Dec 2, 2025
Comment thread on nemo_rl/models/megatron/common.py
Comment thread on nemo_rl/models/policy/megatron_policy_worker.py (outdated)
Comment thread on nemo_rl/models/policy/megatron_policy_worker.py
@yfw yfw added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Dec 3, 2025
@terrykong terrykong merged commit 859a89a into main Dec 3, 2025
40 of 42 checks passed
@terrykong terrykong deleted the yifu/moe_metrics_main branch December 3, 2025 21:40
DeL-TaiseiOzaki pushed a commit to DeL-TaiseiOzaki/RL that referenced this pull request Jan 8, 2026
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
yuanhangsu1986 pushed a commit to yuanhangsu1986/RL-Nemontron-Edge-Omni that referenced this pull request Feb 21, 2026
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
Signed-off-by: yuanhangs <yuanhangs@nvidia.com>
seonjinn pushed a commit that referenced this pull request Mar 8, 2026
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
seonjinn pushed a commit that referenced this pull request Mar 8, 2026
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
seonjinn pushed a commit that referenced this pull request Mar 9, 2026
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>

Labels

CI:L1 Run doctests, unit tests, and functional tests
