
cp: fix: apply offloading change from v2 to v1 (1726) into r0.5.0 #1737

Merged

terrykong merged 1 commit into r0.5.0 from cherry-pick-1726-r0.5.0 on Jan 8, 2026

Conversation

chtruong814 (Contributor) commented Jan 7, 2026

beep boop [🤖]: Hi @terrykong 👋,

we've cherry-picked #1726 into r0.5.0 for you! 🚀

Please review and approve this cherry-pick at your convenience!

Summary by CodeRabbit

  • Refactor
    • Updated internal buffer transfer mechanism for improved memory handling in policy workers.
  • Tests
    • Enhanced test coverage to validate multiple training backend configurations.


Signed-off-by: Terry Kong <terryk@nvidia.com>
Signed-off-by: Yuki Huang <yukih@nvidia.com>
Co-authored-by: Yuki Huang <yukih@nvidia.com>
Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
@chtruong814 chtruong814 requested review from a team as code owners January 7, 2026 20:33
@chtruong814 chtruong814 requested a review from terrykong January 7, 2026 20:33
@chtruong814 chtruong814 requested a review from a team as a code owner January 7, 2026 20:33
github-actions (Bot) commented Jan 7, 2026

ℹ️ File Consistency Check

Check based on commit: ed92b26 (PR #1737 from cherry-pick-1726-r0.5.0)

✅ DTensor Policy Worker Synchronization Check

Both DTensor policy worker files were modified in this PR:

  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py

Please ensure that the changes are consistent between both files where applicable.


This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.

coderabbitai (Bot) commented Jan 7, 2026

📝 Walkthrough

Modified DTensor policy workers to use torch.utils.swap_tensors for buffer device migration instead of in-place assignment. Extended vLLM weight update memory test with parameterized training backend support (dtensor_v1, dtensor_v2, megatron). Public APIs remain unchanged; internal buffer movement mechanics altered.

Changes

DTensor Policy Buffer Migration
  • Files: nemo_rl/models/policy/workers/dtensor_policy_worker.py, nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • Replaced the in-place buffer assignment (v.data = v.data.to(device)) with torch.utils.swap_tensors(v, v.to(device)) in move_buffer_to_device for FSDP module buffer transfers; see the sketch below. Buffers are now swapped in place, with no change to the public API or control flow.

vLLM Memory Test Parameterization
  • Files: tests/unit/models/generation/test_vllm_generation.py
  • Added a train_backend parameter to test_vllm_weight_update_memory via pytest.mark.parametrize (dtensor_v1, dtensor_v2, megatron). train_config is selected conditionally per backend; dtensor_v2 sets dtensor_cfg["_v2"] = True. Validation was added for invalid backend values.
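
As a minimal sketch of the buffer-move pattern described above (the loop shape and function signature are assumptions for illustration; only the swap_tensors call is taken from the diff):

    import torch
    import torch.nn as nn

    def move_buffer_to_device(model: nn.Module, device: str) -> None:
        # Hypothetical shape of the worker helper; the real method may differ.
        for v in model.buffers():
            # Before: `v.data = v.data.to(device)` rebound storage under the object.
            # After: swap the tensor in place so every outstanding reference
            # (e.g. one captured by FSDP) observes the moved storage.
            torch.utils.swap_tensors(v, v.to(device))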

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

  • PR #1726: Applies the same torch.utils.swap_tensors buffer-move change to DTensor policy workers v1 and v2, with corresponding test updates for multi-backend support.

Suggested labels

CI:L1, r0.5.0

Suggested reviewers

  • terrykong
  • yfw
🚥 Pre-merge checks | ✅ 2 | ❌ 2
❌ Failed checks (2 warnings)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Test Results For Major Changes ⚠️ Warning — The PR description lacks test-execution results and validation for the significant buffer-transfer logic changes across all three parameterized backends (dtensor_v1, dtensor_v2, megatron). Resolution: update the PR description with test-execution confirmation for all three backends, results showing no regression, performance/memory profiling data, and backward-compatibility validation.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed — Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title clearly indicates this is a cherry-pick of PR #1726 (offloading change from v2 to v1) into the r0.5.0 branch, accurately reflecting the PR's objective.


coderabbitai (Bot) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/unit/models/generation/test_vllm_generation.py (1)

1667-1669: Backend parameterization for vLLM weight-update memory test is sound; consider deepcopy and cleanup

The new train_backend parameter and config switch correctly exercise both DTensor backends:

  • dtensor_v1 → base basic_dtensor_test_config (v1 worker).
  • dtensor_v2 → deepcopy + dtensor_cfg["_v2"] = True (v2 worker), matching how Policy selects DTensorPolicyWorkerV2.
  • The extra "megatron" branch is currently unused but kept for possible future expansion, consistent with the comment about only testing DTensor for now.

Two minor, non-blocking polish suggestions:

  1. For symmetry and to avoid any accidental shared-state mutations in future changes, you could also deepcopy the v1 config:

    if train_backend == "dtensor_v1":
        train_config = deepcopy(basic_dtensor_test_config)
  2. If you don’t expect to bring back the Megatron case here, you could drop the "megatron" branch and ValueError to reduce dead code; otherwise this is fine as is.

Also applies to: 1690-1699
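
A condensed sketch of the parameterization under review (basic_dtensor_test_config is a stand-in for the shared baseline config, and the test body is elided; treat everything here as illustrative rather than the file's actual code):

    from copy import deepcopy

    import pytest

    # Stand-in for the real shared baseline config referenced in the comment above.
    basic_dtensor_test_config = {"dtensor_cfg": {"_v2": False}}

    @pytest.mark.parametrize("train_backend", ["dtensor_v1", "dtensor_v2", "megatron"])
    def test_vllm_weight_update_memory(train_backend):
        if train_backend == "dtensor_v1":
            # Deepcopy suggested by the review so backends never share mutable state.
            train_config = deepcopy(basic_dtensor_test_config)
        elif train_backend == "dtensor_v2":
            train_config = deepcopy(basic_dtensor_test_config)
            train_config["dtensor_cfg"]["_v2"] = True  # selects DTensorPolicyWorkerV2
        elif train_backend == "megatron":
            pytest.skip("Only DTensor backends are exercised for now.")
        else:
            raise ValueError(f"Unknown train_backend: {train_backend}")
        # ... build the Policy from train_config and assert weight-update memory ...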

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5aeca40 and ed92b26.

📒 Files selected for processing (3)
  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • tests/unit/models/generation/test_vllm_generation.py
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • tests/unit/models/generation/test_vllm_generation.py
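
To illustrate the typing.NotRequired guideline above (a hypothetical TypedDict; only the precision and _v2 keys echo names seen elsewhere in this PR):

    from typing import NotRequired, TypedDict

    class DTensorConfig(TypedDict):
        precision: str          # required: access directly, e.g. cfg["precision"]
        _v2: NotRequired[bool]  # optional key marked NotRequired; default lives in YAML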
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
!(**/tests/**|**/test_*.py|**/test_*.sh)

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • tests/unit/models/generation/test_vllm_generation.py
**/*.{py,sh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • tests/unit/models/generation/test_vllm_generation.py
🧠 Learnings (1)
📚 Learning: 2025-09-18T14:20:36.297Z
Learnt from: zpqiu
Repo: NVIDIA-NeMo/RL PR: 1006
File: examples/configs/recipes/llm/distillation-qwen3-32b-to-8b-base-2n8g-fsdp2tp2.v1.yaml:113-120
Timestamp: 2025-09-18T14:20:36.297Z
Learning: In distillation workflows, the teacher policy does not perform generation - it only does inference/logprob computation on sequences generated by the student policy. Therefore, teacher generation configuration mismatches (like vLLM tensor parallelism settings) and colocation concerns are not relevant.

Applied to files:

  • tests/unit/models/generation/test_vllm_generation.py
🧬 Code graph analysis (1)
tests/unit/models/generation/test_vllm_generation.py (1)
nemo_rl/models/policy/lm_policy.py (1)
  • Policy (58-887)
🪛 Ruff (0.14.10)
tests/unit/models/generation/test_vllm_generation.py

1699-1699: Avoid specifying long messages outside the exception class

(TRY003)
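
For reference, one way to address TRY003 is to move the long message into a dedicated exception class (names here are hypothetical, not a prescribed fix for this test):

    class UnknownTrainBackendError(ValueError):
        # Keeps the long message inside the exception class, satisfying TRY003.
        def __init__(self, backend: str) -> None:
            super().__init__(f"Unknown train_backend: {backend}")

    raise UnknownTrainBackendError("not-a-backend")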

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Lint check
  • GitHub Check: Lint check
  • GitHub Check: Lint check
  • GitHub Check: Post submodule check comment / Comment on PR
🔇 Additional comments (2)
nemo_rl/models/policy/workers/dtensor_policy_worker.py (1)

1837-1844: swap_tensors-based buffer move looks correct and matches v2 implementation

Using torch.utils.swap_tensors(v, v.to(device)) here preserves buffer object identity while moving storage to the target device, which is exactly what you want under FSDP2/DTensor and aligns this worker with the v2 implementation. Assumes model.buffers() only yields torch.Tensor (not DTensor), which is consistent with the rest of this module.

Please double-check against the PyTorch 2.9 swap_tensors docs that its semantics are stable for non-parameter buffers and that no DTensor buffers are expected from model.buffers() in your FSDP2 setup.
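
A quick illustration of the identity-preservation point (assumes a CUDA device is available; the shape is arbitrary):

    import torch

    buf = torch.ones(4)   # stands in for a registered module buffer
    alias = buf           # e.g. a reference FSDP has captured

    # Rebinding via `buf.data = ...` would move storage but can leave stale
    # tensor state behind; swap_tensors exchanges the full tensor state while
    # the Python object itself stays the same.
    torch.utils.swap_tensors(buf, buf.to("cuda"))

    assert alias is buf                 # object identity preserved
    assert alias.device.type == "cuda"  # the alias sees the moved storage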

nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py (1)

1890-1897: Consistent swap_tensors buffer migration for DTensor v2

Mirroring v1, using torch.utils.swap_tensors(v, v.to(device)) for buffers is appropriate here: it moves buffer storage across devices without rebinding attributes, which plays nicely with FSDP2 and the Automodel stack.

Please confirm against your current PyTorch version that swap_tensors is supported and officially recommended for device moves in this context.

@terrykong terrykong enabled auto-merge (squash) January 7, 2026 21:13
@terrykong terrykong added the CI:L0 (Run doctests and unit tests) and CI:L1 (Run doctests, unit tests, and functional tests) labels and removed the CI:L0 label on Jan 7, 2026
@terrykong terrykong merged commit 0e576a6 into r0.5.0 Jan 8, 2026
78 of 81 checks passed
@terrykong terrykong deleted the cherry-pick-1726-r0.5.0 branch January 8, 2026 00:10
avenkateshha pushed a commit to avenkateshha/RL that referenced this pull request Apr 10, 2026
…NVIDIA-NeMo#1737)

Signed-off-by: Terry Kong <terryk@nvidia.com>
Signed-off-by: Yuki Huang <yukih@nvidia.com>
Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
Co-authored-by: Terry Kong <terrycurtiskong@gmail.com>
Co-authored-by: Yuki Huang <yukih@nvidia.com>

Labels

cherry-pick · CI:L1 (Run doctests, unit tests, and functional tests) · Run CICD

2 participants