
fix: apply offloading change from v2 to v1#1726

Merged
yfw merged 2 commits into main from test-fix-4
Jan 7, 2026
Conversation


@terrykong terrykong commented Jan 6, 2026

What does this PR do?

Add a one line overview of what this PR aims to accomplish.

Issues

List issues that this PR closes (syntax):

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

Summary by CodeRabbit

  • Tests

    • Expanded vLLM weight update memory testing to support multiple training backend configurations.
  • Chores

    • Optimized buffer device migration mechanism in policy workers for improved memory handling.


Signed-off-by: Terry Kong <terryk@nvidia.com>
@terrykong terrykong requested review from yfw and yuki-97 January 6, 2026 08:42
@terrykong terrykong requested review from a team as code owners January 6, 2026 08:42
@terrykong terrykong added CI:L0 Run doctests and unit tests r0.5.0 labels Jan 6, 2026

coderabbitai Bot commented Jan 6, 2026

📝 Walkthrough

Buffer movement logic in two policy worker modules is modified to use torch.utils.swap_tensors instead of direct data assignment. Parametrized testing is added to vLLM generation tests to support multiple training backends (dtensor_v1, dtensor_v2, megatron).

Changes

  • Buffer Movement Refactoring — nemo_rl/models/policy/workers/dtensor_policy_worker.py, nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
    Changed the move_buffer_to_device logic from direct buffer assignment (v.data = v.data.to(device) / v = v.to(device)) to tensor swapping via torch.utils.swap_tensors(v, v.to(device)). The buffer-relocation mechanism changes without altering the function signature or external behavior.
  • Test Parametrization — tests/unit/models/generation/test_vllm_generation.py
    Added a @pytest.mark.parametrize decorator with a train_backend parameter covering the dtensor_v1, dtensor_v2, and megatron backends. Replaced the single DTensor policy creation with backend-specific conditional logic, and updated the log message and test signature to accept the train_backend argument.
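
For illustration, here is a minimal sketch of the buffer-move pattern described above. It is not the repository's exact code; the function shape and the model/device arguments are assumptions inferred from the summary.

```python
# Hedged sketch of the change summarized above, not the repo's exact code.
import torch


def move_buffer_to_device(model: torch.nn.Module, device: torch.device) -> None:
    """Move all registered buffers of `model` to `device` in place."""
    for v in model.buffers():
        # Old v1 approach (direct data reassignment):
        #     v.data = v.data.to(device)
        # Approach ported from v2: swap the contents of the existing tensor
        # object with an on-device copy, so every holder of a reference to
        # `v` observes the move.
        torch.utils.swap_tensors(v, v.to(device))
```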

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested labels

CI

Suggested reviewers

  • yaoyu-33
  • hemildesai
🚥 Pre-merge checks | ✅ 2 passed | ❌ 2 failed

❌ Failed checks (2 warnings)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.
  • Test Results For Major Changes ⚠️ Warning — The PR description lacks testing documentation for the buffer-management changes in distributed tensor handling, which could affect memory and performance. Resolution: add test results, performance benchmarks, convergence validation, and the specific testing configurations to the PR description before merging.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed — Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed — The title accurately reflects the main change: applying an offloading mechanism (swap_tensors) from the v2 worker implementation to the v1 worker, as evidenced by identical buffer-moving logic changes in both dtensor_policy_worker.py and dtensor_policy_worker_v2.py.




github-actions Bot commented Jan 6, 2026

⚠️ File Consistency Check

Check based on commit: ddce3f7 (PR #1726 from test-fix-4)

⚠️ DTensor Policy Worker Synchronization Warning

The file nemo_rl/models/policy/workers/dtensor_policy_worker.py was modified in this PR, but nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py was not updated.

Why this matters:
These files contain related DTensor policy worker implementations that should be kept synchronized to ensure consistency across different versions.

Action required:

  • Please review if the changes in nemo_rl/models/policy/workers/dtensor_policy_worker.py should also be applied to nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • Update nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py if necessary to maintain consistency
  • If the files are intentionally different, please add a comment in the PR explaining why

Files to check:

  • Modified: nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • Not modified: nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py

This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.

yuki-97 previously approved these changes Jan 6, 2026
@yuki-97 yuki-97 added CI:L0 Run doctests and unit tests and removed CI:L0 Run doctests and unit tests labels Jan 6, 2026
@yuki-97 yuki-97 enabled auto-merge (squash) January 6, 2026 09:49
yfw previously approved these changes Jan 6, 2026
@yuki-97 yuki-97 disabled auto-merge January 7, 2026 11:16
Signed-off-by: Yuki Huang <yukih@nvidia.com>
@yfw yfw dismissed stale reviews from yuki-97 and themself via b9b98f4 January 7, 2026 17:30
@yfw yfw requested a review from a team as a code owner January 7, 2026 17:30

github-actions Bot commented Jan 7, 2026

ℹ️ File Consistency Check

Check based on commit: b9b98f4 (PR #1726 from test-fix-4)

✅ DTensor Policy Worker Synchronization Check

Both DTensor policy worker files were modified in this PR:

  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py

Please ensure that the changes are consistent between both files where applicable.


This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.

1 similar comment

@yfw yfw added CI:L1 Run doctests, unit tests, and functional tests and removed CI:L0 Run doctests and unit tests labels Jan 7, 2026
@yfw yfw enabled auto-merge (squash) January 7, 2026 17:32

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/unit/models/generation/test_vllm_generation.py (1)

1667-1700: Backend parametrization is good; consider cleaning up the Megatron branch and copying configs

The new train_backend parameterization over ["dtensor_v1", "dtensor_v2"] is a nice way to cover both DTensor paths in test_vllm_weight_update_memory.

Two small cleanups you might want to do:

  1. The "megatron" branch is currently unreachable under this parametrization. Either:

    • remove that branch for now, or
    • add "megatron" to the @pytest.mark.parametrize list so it’s actually exercised.
  2. For consistency and to avoid accidental in‑place mutation of the shared template, it would be slightly safer to deepcopy the base config in all branches (see the sketch below this comment), e.g.:

    • start with train_config = deepcopy(basic_dtensor_test_config) for DTensor v1, and
    • then set train_config["dtensor_cfg"]["_v2"] = True for the v2 case.

This keeps the test behavior the same today but reduces surprises if Policy or future tests ever mutate train_config.
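
A minimal sketch of the suggested pattern, under stated assumptions: the config templates below are hypothetical stand-ins for the module-level fixtures in the real test file, and the actual Policy setup is elided.

```python
# Sketch of the reviewer's suggestion; basic_dtensor_test_config and
# basic_megatron_test_config are hypothetical stand-ins for real fixtures.
from copy import deepcopy

import pytest

basic_dtensor_test_config = {"dtensor_cfg": {"_v2": False}}  # placeholder
basic_megatron_test_config = {"megatron_cfg": {}}            # placeholder


@pytest.mark.parametrize("train_backend", ["dtensor_v1", "dtensor_v2", "megatron"])
def test_vllm_weight_update_memory(train_backend):
    # Deep-copy the shared template so in-place edits cannot leak between
    # parametrized runs.
    if train_backend.startswith("dtensor"):
        train_config = deepcopy(basic_dtensor_test_config)
        train_config["dtensor_cfg"]["_v2"] = train_backend == "dtensor_v2"
    else:  # "megatron"
        train_config = deepcopy(basic_megatron_test_config)
    assert train_config is not basic_dtensor_test_config  # no shared state
```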

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1720466 and b9b98f4.

📒 Files selected for processing (3)
  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • tests/unit/models/generation/test_vllm_generation.py
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • tests/unit/models/generation/test_vllm_generation.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
!(**/tests/**|**/test_*.py|**/test_*.sh)

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • tests/unit/models/generation/test_vllm_generation.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
**/*.{py,sh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • tests/unit/models/generation/test_vllm_generation.py
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
🧠 Learnings (1)
📚 Learning: 2025-09-18T14:20:36.297Z
Learnt from: zpqiu
Repo: NVIDIA-NeMo/RL PR: 1006
File: examples/configs/recipes/llm/distillation-qwen3-32b-to-8b-base-2n8g-fsdp2tp2.v1.yaml:113-120
Timestamp: 2025-09-18T14:20:36.297Z
Learning: In distillation workflows, the teacher policy does not perform generation - it only does inference/logprob computation on sequences generated by the student policy. Therefore, teacher generation configuration mismatches (like vLLM tensor parallelism settings) and colocation concerns are not relevant.

Applied to files:

  • tests/unit/models/generation/test_vllm_generation.py
🧬 Code graph analysis (1)
tests/unit/models/generation/test_vllm_generation.py (1)
nemo_rl/models/policy/lm_policy.py (1)
  • Policy (58-887)
🪛 Ruff (0.14.10)
tests/unit/models/generation/test_vllm_generation.py

1699-1699: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: build-container / main
  • GitHub Check: sphinx-build / Build docs
  • GitHub Check: Lint check
  • GitHub Check: Lint check
  • GitHub Check: Post submodule check comment / Comment on PR
  • GitHub Check: Post submodule check comment / Comment on PR
🔇 Additional comments (1)
nemo_rl/models/policy/workers/dtensor_policy_worker.py (1)

1837-1844: Buffer move via torch.utils.swap_tensors looks correct and reference‑safe

Using torch.utils.swap_tensors(v, v.to(device)) for each buffer preserves existing v references while swapping storage to the target device, which is what you want here and matches the v2 worker pattern. This should be a no‑op for already‑on‑device buffers and keeps FSDP/DTensor state intact as long as model.buffers() only yields plain Tensor instances.

If you haven’t already, please double‑check the PyTorch docs for torch.utils.swap_tensors in 2.9 to confirm there are no caveats for non‑leaf tensors or unusual buffer dtypes/devices in this path.
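
As a standalone illustration of the reference-safety point (not code from this PR), the snippet below shows that an alias of a tensor observes a swap, which a plain rebind would not achieve; in the workers, the second argument is v.to(device) rather than a fresh tensor.

```python
# Standalone illustration of torch.utils.swap_tensors semantics;
# not code from this PR.
import torch

buf = torch.zeros(4)  # stand-in for a registered module buffer
alias = buf           # a reference captured elsewhere (e.g., framework state)

# Swap the contents of `buf` with a new tensor. `alias` is the same Python
# object as `buf`, so it sees the new values; a plain rebind
# (buf = new_tensor) would leave `alias` pointing at the old storage.
torch.utils.swap_tensors(buf, torch.ones(4))
assert torch.equal(alias, torch.ones(4))
```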

@yfw yfw merged commit ba46741 into main Jan 7, 2026
58 of 67 checks passed
@yfw yfw deleted the test-fix-4 branch January 7, 2026 20:33
chtruong814 pushed a commit that referenced this pull request Jan 7, 2026
Signed-off-by: Terry Kong <terryk@nvidia.com>
Signed-off-by: Yuki Huang <yukih@nvidia.com>
Co-authored-by: Yuki Huang <yukih@nvidia.com>
Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
parthmannan pushed a commit to parthmannan/RL that referenced this pull request Jan 15, 2026
Signed-off-by: Terry Kong <terryk@nvidia.com>
Signed-off-by: Yuki Huang <yukih@nvidia.com>
Co-authored-by: Yuki Huang <yukih@nvidia.com>
Signed-off-by: Parth Mannan <pmannan@nvidia.com>
yuanhangsu1986 pushed a commit to yuanhangsu1986/RL-Nemontron-Edge-Omni that referenced this pull request Feb 12, 2026
Signed-off-by: Terry Kong <terryk@nvidia.com>
Signed-off-by: Yuki Huang <yukih@nvidia.com>
Co-authored-by: Yuki Huang <yukih@nvidia.com>
Signed-off-by: yuanhangs <yuanhangs@nvidia.com>
yuanhangsu1986 pushed a commit to yuanhangsu1986/RL-Nemontron-Edge-Omni that referenced this pull request Feb 21, 2026
Signed-off-by: Terry Kong <terryk@nvidia.com>
Signed-off-by: Yuki Huang <yukih@nvidia.com>
Co-authored-by: Yuki Huang <yukih@nvidia.com>
Signed-off-by: yuanhangs <yuanhangs@nvidia.com>
seonjinn pushed a commit that referenced this pull request Mar 8, 2026
Signed-off-by: Terry Kong <terryk@nvidia.com>
Signed-off-by: Yuki Huang <yukih@nvidia.com>
Co-authored-by: Yuki Huang <yukih@nvidia.com>
seonjinn pushed a commit that referenced this pull request Mar 8, 2026
Signed-off-by: Terry Kong <terryk@nvidia.com>
Signed-off-by: Yuki Huang <yukih@nvidia.com>
Co-authored-by: Yuki Huang <yukih@nvidia.com>
seonjinn pushed a commit that referenced this pull request Mar 9, 2026
Signed-off-by: Terry Kong <terryk@nvidia.com>
Signed-off-by: Yuki Huang <yukih@nvidia.com>
Co-authored-by: Yuki Huang <yukih@nvidia.com>

Labels

CI:L1 Run doctests, unit tests, and functional tests r0.5.0


3 participants