
cp: fix: Fix crash when using activation_checkpointing (1676) into r0.5.0#1682

Merged
terrykong merged 1 commit into r0.5.0 from cherry-pick-1676-r0.5.0 on Dec 22, 2025

Conversation

@chtruong814 (Contributor) commented Dec 22, 2025

beep boop [🤖]: Hi @yfw 👋,

we've cherry-picked #1676 into r0.5.0 for you! 🚀

Please review and approve this cherry-pick at your convenience!

Summary by CodeRabbit

  • Bug Fixes
    • Resolved metadata mismatch errors in attention mechanisms during activation checkpointing.
    • Improved activation checkpointing support for distributed tensor training configurations.


Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
@github-actions

⚠️ File Consistency Check

Check based on commit: eb24a98 (PR #1682 from cherry-pick-1676-r0.5.0)

⚠️ DTensor Policy Worker Synchronization Warning

The file nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py was modified in this PR, but nemo_rl/models/policy/workers/dtensor_policy_worker.py was not updated.

Why this matters:
These files contain related DTensor policy worker implementations that should be kept synchronized to ensure consistency across different versions.

Action required:

  • Please review if the changes in nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py should also be applied to nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • Update nemo_rl/models/policy/workers/dtensor_policy_worker.py if necessary to maintain consistency
  • If the files are intentionally different, please add a comment in the PR explaining why

Files to check:

  • Modified: nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • Not modified: nemo_rl/models/policy/workers/dtensor_policy_worker.py

This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.

@terrykong terrykong added the CI:L1 (Run doctests, unit tests, and functional tests) label Dec 22, 2025
@terrykong terrykong enabled auto-merge (squash) December 22, 2025 07:11
@coderabbitai
Contributor

coderabbitai Bot commented Dec 22, 2025

📝 Walkthrough

Modified the condition for SDPA method construction in DTensorPolicyWorkerV2's __init__ to also trigger when activation checkpointing is enabled, alongside the existing cp_size > 1 check. Added clarification that CUDNN_ATTENTION must be excluded when activation checkpointing is active to prevent forward-backward metadata mismatch errors.
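The backend-selection logic described in this walkthrough can be sketched as follows. This is a minimal illustration, not the actual source: the helper name `select_sdpa_method`, its parameters, and the `SDPBackend` stand-in enum (which mirrors the real `torch.nn.attention.SDPBackend` so the sketch runs without torch) are all assumptions.

```python
from enum import Enum, auto

# Stand-in for torch.nn.attention.SDPBackend, so this sketch runs without torch.
class SDPBackend(Enum):
    FLASH_ATTENTION = auto()
    EFFICIENT_ATTENTION = auto()
    CUDNN_ATTENTION = auto()

def select_sdpa_method(cp_size, activation_checkpointing):
    """Hypothetical helper mirroring the expanded condition in this PR.

    Before the fix, only cp_size > 1 triggered the restricted backend list;
    the fix also triggers it when activation checkpointing is enabled.
    """
    if cp_size > 1 or activation_checkpointing:
        # Exclude CUDNN_ATTENTION: under activation checkpointing, the
        # recomputed forward tensors can hit forward-backward metadata
        # mismatch errors with that backend.
        return [SDPBackend.FLASH_ATTENTION, SDPBackend.EFFICIENT_ATTENTION]
    return None  # no restriction; all backends remain eligible
```

In the real worker, a non-None list like this would be passed to something like PyTorch's `sdpa_kernel` context manager to constrain which scaled-dot-product-attention backends may run.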

Changes

Cohort / File(s): SDPA Backend Selection Logic — nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
Summary: Expanded the condition for SDPA method construction from cp_size > 1 to also include the activation_checkpointing flag; added commentary on the CUDNN_ATTENTION exclusion requirement; updated the SDPBackend import usage context.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Verify the expanded conditional logic is semantically correct and aligns with intended behavior when both cp_size > 1 and activation_checkpointing are evaluated
  • Confirm CUDNN_ATTENTION exclusion is properly enforced under the new condition to prevent the documented metadata mismatch error

Possibly related PRs

  • PR #1676 — Directly modifies the same DTensorPolicyWorkerV2 __init__ logic to select SDPA backend when activation_checkpointing is enabled with CUDNN_ATTENTION exclusion
  • PR #1665 — Modifies the same file to refactor SDPA backend (sdpa_method/SDPBackend) determination and propagation into model construction

Suggested labels

r0.5.0, CI:L1

Suggested reviewers

  • yfw
  • terrykong

Pre-merge checks and finishing touches

✅ Passed checks (4 passed)

  • Description Check — ✅ Passed — Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed — The title accurately reflects the main change: fixing a crash when using activation_checkpointing, which aligns with the code modifications in dtensor_policy_worker_v2.py.
  • Docstring Coverage — ✅ Passed — Docstring coverage is 100.00%, which meets the required threshold of 80.00%.
  • Test Results For Major Changes — ✅ Passed — PR #1682 is a minor bug-fix cherry-pick that expands a condition to handle activation_checkpointing and excludes CUDNN_ATTENTION to prevent crashes. The code modification is limited and backward compatible.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bc352c4 and eb24a98.

📒 Files selected for processing (1)
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
!(**/tests/**|**/test_*.py|**/test_*.sh)

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
**/*.{py,sh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: sphinx-build / Build docs
  • GitHub Check: Lint check
  • GitHub Check: Lint check
  • GitHub Check: Lint check
  • GitHub Check: Lint check
  • GitHub Check: Post submodule check comment / Comment on PR
🔇 Additional comments (1)
nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py (1)

285-300: Fix correctly excludes CUDNN_ATTENTION backend when activation checkpointing is enabled, preventing metadata mismatch errors.

The expanded condition appropriately triggers SDPA backend selection when either context parallelism is enabled or activation checkpointing is active. By restricting to FLASH_ATTENTION and EFFICIENT_ATTENTION backends in these cases, the fix resolves compatibility issues where CUDNN_ATTENTION causes recomputed tensor metadata mismatches during activation checkpointing. The implementation is clean, the comment explains the technical rationale, and the import is properly scoped. Code follows all guidelines including copyright header presence and config attribute access patterns.



@terrykong terrykong merged commit 31a4a72 into r0.5.0 Dec 22, 2025
68 of 71 checks passed
@terrykong terrykong deleted the cherry-pick-1676-r0.5.0 branch December 22, 2025 11:29
avenkateshha pushed a commit to avenkateshha/RL that referenced this pull request Apr 10, 2026
…`r0.5.0` (NVIDIA-NeMo#1682)

Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
Co-authored-by: Yi-Fu Wu <yifu.wu@gmail.com>

Labels

cherry-pick · CI:L1 (Run doctests, unit tests, and functional tests) · Run CICD


3 participants