
feat: Using mcore cpu optimizer #1242

Merged
terrykong merged 9 commits into NVIDIA-NeMo:main from guyueh1:opt_cpu_offload
Oct 9, 2025

Conversation

@guyueh1
Contributor

@guyueh1 guyueh1 commented Oct 1, 2025

What does this PR do ?

Add the necessary support to use the mcore CPU optimizer.

Issues

List issues that this PR closes:

Closes #915

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this
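A hedged sketch of the overrides that might enable this feature, mirroring the +policy.megatron_cfg.optimizer.* syntax quoted later in this thread; the exact key paths are assumptions inferred from the config lookups in the review comments:

policy.megatron_cfg.optimizer.optimizer_cpu_offload=true \
policy.megatron_cfg.optimizer.optimizer_offload_fraction=1.0 \
+policy.megatron_cfg.optimizer.overlap_cpu_optimizer_d2h_h2d=true

The assertion added in this PR requires the offload fraction to be exactly 1.0; the overlap flag is optional and, per the discussion below, untested in RL.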

Before your PR is "Ready for review"

Pre checks:

  • Make sure you have read and followed the Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

Summary by CodeRabbit

  • Bug Fixes
    • Enforced valid configuration for optimizer CPU offload (requires 100% offload), preventing misconfiguration.
    • Prevented unintended transfers of optimizer state to GPU when CPU offload is enabled, reducing memory pressure and related failures.
    • Standardized offload behavior across training and refit workflows for consistent, predictable execution.
    • Added early runtime checks to surface configuration issues sooner with clearer errors.

Signed-off-by: Guyue Huang <guyueh@nvidia.com>
@guyueh1 guyueh1 requested a review from a team as a code owner October 1, 2025 04:51
@coderabbitai
Contributor

coderabbitai Bot commented Oct 1, 2025

📝 Walkthrough

Introduces runtime guards in MegatronPolicyWorker to enforce optimizer CPU offload only when optimizer_offload_fraction is 1.0, and conditions CUDA transfers of optimizer state based on optimizer_cpu_offload across initialization, prepare_for_training, and refit/offload paths.

Changes

Cohort: Optimizer offload guards and state transfer gating
File(s): nemo_rl/models/policy/megatron_policy_worker.py
Summary: Added assertions enforcing that optimizer_cpu_offload implies optimizer_offload_fraction == 1.0. Gated optimizer-state moves to CUDA when CPU offload is enabled, updating initialization, prepare_for_training, offload_before_refit, and related transitions to maintain consistent offload behavior.
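A minimal sketch of that guard, assuming the keys are read from the megatron_cfg optimizer section as shown in the review comments below; the standalone function shape is illustrative, not the actual diff:

from typing import Any

def validate_optimizer_offload_cfg(optimizer_cfg: dict[str, Any]) -> None:
    # Partial CPU offload is rejected: enabling offload implies a fraction of exactly 1.0.
    if optimizer_cfg.get("optimizer_cpu_offload", False):
        fraction = optimizer_cfg.get("optimizer_offload_fraction")
        assert fraction == 1.0, (
            "optimizer_cpu_offload=True requires "
            f"optimizer_offload_fraction == 1.0, got {fraction}"
        )

# Passes: full offload.
validate_optimizer_offload_cfg(
    {"optimizer_cpu_offload": True, "optimizer_offload_fraction": 1.0}
)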

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Trainer
  participant Worker as MegatronPolicyWorker
  participant Optim as Optimizer
  participant GPU
  participant CPU

  Trainer->>Worker: __init__(config)
  Worker->>Worker: Read optimizer_cpu_offload, optimizer_offload_fraction
  alt CPU offload enabled
    Worker->>Worker: Assert optimizer_offload_fraction == 1.0
  else
    Worker->>Worker: Continue
  end

  Trainer->>Worker: prepare_for_training()
  alt optimizer_cpu_offload == True
    note over Worker,Optim: Keep optimizer state on CPU<br/>(skip move to CUDA)
    Worker-xGPU: Do not transfer optimizer state
    Worker->>CPU: Maintain optimizer state
  else
    note over Worker,Optim: Move optimizer state to CUDA
    Worker->>GPU: Transfer optimizer state
  end

  Trainer->>Worker: offload_before_refit()
  alt optimizer_cpu_offload == True
    note over Worker: Skip CUDA migration during refit/offload
  else
    Worker->>GPU: Move optimizer state as needed
  end
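A hedged, self-contained sketch of the gating shown in the diagram; maybe_move_optimizer_state is a hypothetical helper written against a plain torch optimizer, whereas the real worker gates Megatron's distributed optimizer state:

import torch

def maybe_move_optimizer_state(
    optimizer: torch.optim.Optimizer | None,
    optimizer_cpu_offload: bool,
    device: str = "cuda",
) -> None:
    # When CPU offload owns the optimizer state, skip the transfer entirely.
    if optimizer is None or optimizer_cpu_offload:
        return
    # Otherwise migrate every state tensor (e.g. Adam moments) to the target device.
    for state in optimizer.state.values():
        for key, value in state.items():
            if torch.is_tensor(value):
                state[key] = value.to(device, non_blocking=True)

The same helper shape covers prepare_for_training (device="cuda") and the refit/offload paths, which is why the diagram shows the identical branch in both flows.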

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 33.33%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Test Results For Major Changes ⚠️ Warning: The PR introduces a new capability to use the mcore CPU optimizer, a significant feature change that can affect training behavior and potentially convergence, yet the PR description does not document any tests, performance validation, or numerical checks demonstrating the change is safe, so this check cannot be considered satisfied. Resolution: update the PR description with the relevant test results (and, if applicable, convergence or performance metrics with configuration details) demonstrating that enabling the CPU optimizer integration works correctly without regressions.

✅ Passed checks (2 passed)

  • Title Check ✅ Passed: The title succinctly captures the primary change (enabling use of the mcore CPU optimizer) and adheres to conventional commit style, making it clear and directly related to the pull request's objectives without extraneous detail.
  • Description Check ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
nemo_rl/models/policy/megatron_policy_worker.py (2)

750-752: Consider caching the optimizer_cpu_offload flag for consistency.

The conditional logic correctly prevents moving optimizer state to CUDA when CPU offload is enabled. However, reading self.cfg['megatron_cfg']['optimizer'].get('optimizer_cpu_offload', False) on every call to prepare_for_training is repetitive. Consider caching this flag as an instance variable (e.g., self.optimizer_cpu_offload) in __init__ after line 616 for consistency and clarity.

Example:

# In __init__ after line 616:
self.optimizer_cpu_offload = optimizer_cpu_offload

# In prepare_for_training:
if hasattr(self, "optimizer") and self.optimizer is not None and (
    not self.optimizer_cpu_offload
):
    ...  # body unchanged: move optimizer state to CUDA

779-781: Logic is correct; caching suggestion applies here too.

The conditional logic correctly prevents moving optimizer state to CPU when CPU offload is already enabled. The same caching suggestion from prepare_for_training applies here: consider storing optimizer_cpu_offload as an instance variable to avoid repeated config lookups.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cc8a93e and efe72f1.

📒 Files selected for processing (1)
  • nemo_rl/models/policy/megatron_policy_worker.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Follow the Google Python Style Guide for all Python code
Target Python 3.12+ for all Python code in NeMo-RL
Indent Python code with 4 spaces; do not use tabs
Python filenames should be snake_case (e.g., some_file.py)
Class names should be PascalCase
Function and method names should be snake_case
Local variable names should be snake_case; if starting with a number, prefix with k (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE and prefixed with G_ (e.g., G_MY_GLOBAL)
Constants should be UPPER_SNAKE_CASE
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
For public interfaces used outside a file, prefer docstrings over comments
Use comments mainly for code within a function or interfaces local to a file
Commented-out code must include a nearby comment explaining usage and why it is commented out; otherwise remove before merging
Use Google-style docstrings for classes and functions (Sphinx-parseable)
Avoid using reflection when functionality can be easily achieved without it
Limit except clauses to the smallest specific set of exceptions possible
For duck-typing via try/except, keep the try body minimal and use else for main logic
Add the NVIDIA copyright header (with current year) at the top of all Python files, excluding tests/ and test-only scripts

Files:

  • nemo_rl/models/policy/megatron_policy_worker.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

nemo_rl/**/*.py: Do not set non-None configuration defaults in code; YAML is the single source of truth for defaults
Access required config attributes directly (e.g., policy_cfg["precision"]) and assume presence; do not introduce hidden defaults
Express configuration optionality via TypedDict using typing.NotRequired
When adding a new config key to a TypedDict subclass, document the key’s purpose, valid values/types, and recommended default in code
For any class or function decorated with @ray.remote, add '# pragma: no cover' on the class/def line (and on remote functions)

Files:

  • nemo_rl/models/policy/megatron_policy_worker.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Lint check
  • GitHub Check: Post automodel integration comment / Comment on PR
  • GitHub Check: Post submodule check comment / Comment on PR

Comment thread nemo_rl/models/policy/megatron_policy_worker.py Outdated
Signed-off-by: Guyue Huang <guyueh@nvidia.com>
@guyueh1 guyueh1 requested a review from terrykong October 1, 2025 16:14
Comment thread nemo_rl/models/policy/megatron_policy_worker.py Outdated
Signed-off-by: Guyue Huang <guyueh@nvidia.com>
@guyueh1 guyueh1 requested review from a team as code owners October 1, 2025 20:19
Comment thread nemo_rl/models/policy/megatron_policy_worker.py Outdated
Comment thread nemo_rl/models/policy/megatron_policy_worker.py Outdated
Co-authored-by: Terry Kong <terrycurtiskong@gmail.com>
Signed-off-by: Guyue Huang <140554423+guyueh1@users.noreply.github.com>
terrykong previously approved these changes Oct 1, 2025
@terrykong terrykong requested a review from a team October 1, 2025 20:56
@terrykong
Collaborator

lgtm. @yaoyu-33 to review

@terrykong
Collaborator

@guyueh1 you'll need to lint the PR

Signed-off-by: Guyue Huang <guyueh@nvidia.com>
@yfw
Contributor

yfw commented Oct 2, 2025

Closes #915

@yfw
Contributor

yfw commented Oct 2, 2025

I saw here (https://github.com/NVIDIA/Megatron-LM/tree/main/megatron/core/optimizer/cpu_offloading#configuration-recommendataions) that it is recommended to set the flag --overlap-cpu-optimizer-d2h-h2d. Is that necessary here?

@guyueh1 guyueh1 added the CI:L0 Run doctests and unit tests label Oct 2, 2025
Signed-off-by: Guyue Huang <guyueh@nvidia.com>
@guyueh1 guyueh1 requested a review from a team as a code owner October 2, 2025 04:45
@guyueh1
Contributor Author

guyueh1 commented Oct 2, 2025

I saw here (https://github.com/NVIDIA/Megatron-LM/tree/main/megatron/core/optimizer/cpu_offloading#configuration-recommendataions) that it is recommended to set the flag --overlap-cpu-optimizer-d2h-h2d. Is that necessary here?

Users who want to try this feature can use +policy.megatron_cfg.optimizer.overlap_cpu_optimizer_d2h_h2d=true, but I haven't tested it locally, so I'm not sure it works in RL. CPU optimizer compute is going to be slow regardless; this option is intended for users with limited resources who can tolerate lower performance, so it's fine not to enable it by default.

yfw
yfw previously approved these changes Oct 2, 2025
@terrykong terrykong removed the CI:L0 Run doctests and unit tests label Oct 2, 2025
@terrykong terrykong added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Oct 4, 2025
@guyueh1
Contributor Author

guyueh1 commented Oct 7, 2025

@terrykong this PR needs an approval; also, do you think it should go into v0.4?

@terrykong terrykong added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Oct 8, 2025
@guyueh1
Contributor Author

guyueh1 commented Oct 8, 2025

@terrykong this PR went stale, so I merged main again; could you approve it again?

@guyueh1
Contributor Author

guyueh1 commented Oct 9, 2025

@terrykong is this OK to merge, or are there any remaining TODOs?

@terrykong
Collaborator

@guyueh1 strange, I don't know why the CI was skipped; trying again.

@terrykong terrykong added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Oct 9, 2025
@terrykong terrykong merged commit 52cd68d into NVIDIA-NeMo:main Oct 9, 2025
40 of 41 checks passed
chtruong814 pushed a commit that referenced this pull request Oct 9, 2025
Signed-off-by: Guyue Huang <guyueh@nvidia.com>
Signed-off-by: Guyue Huang <140554423+guyueh1@users.noreply.github.com>
Co-authored-by: Terry Kong <terrycurtiskong@gmail.com>
Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
PrinsYin pushed a commit to PrinsYin/RL that referenced this pull request Nov 30, 2025
Signed-off-by: Guyue Huang <guyueh@nvidia.com>
Signed-off-by: Guyue Huang <140554423+guyueh1@users.noreply.github.com>
Co-authored-by: Terry Kong <terrycurtiskong@gmail.com>
yuanhangsu1986 pushed a commit to yuanhangsu1986/RL-Nemontron-Edge-Omni that referenced this pull request Feb 21, 2026
Signed-off-by: Guyue Huang <guyueh@nvidia.com>
Signed-off-by: Guyue Huang <140554423+guyueh1@users.noreply.github.com>
Co-authored-by: Terry Kong <terrycurtiskong@gmail.com>
Signed-off-by: yuanhangs <yuanhangs@nvidia.com>

Labels

CI:L1 Run doctests, unit tests, and functional tests
r0.4.0

Projects

None yet

Development

Successfully merging this pull request may close these issues.

  • Need offload parameter for Megatron backend
  • Add megatron cpu optimizer support

3 participants