
fix: Fix crash when using activation_checkpointing#1676

Merged
terrykong merged 1 commit into main from yifu/fix_actckpt
Dec 22, 2025
Conversation

@yfw
Contributor

@yfw yfw commented Dec 21, 2025

What does this PR do ?

Fixes a crash that occasionally occurs when activation_checkpointing is enabled in the DTensor path.

Previously, we were seeing the following error:

torch.utils.checkpoint.CheckpointError: torch.utils.checkpoint: Recomputed values for the following tensors have different metadata than during the forward pass.
tensor at position 25:
saved metadata: {'shape': torch.Size([16, 4, 448, 1]), 'dtype': torch.float32, 'device': device(type='cuda', index=0)}
recomputed metadata: {'shape': torch.Size([16, 4, 448]), 'dtype': torch.float32, 'device': device(type='cuda', index=0)}
tensor at position 27:
saved metadata: {'shape': torch.Size([]), 'dtype': torch.int64, 'device': device(type='cuda', index=0)}
recomputed metadata: {'shape': torch.Size([2]), 'dtype': torch.uint64, 'device': device(type='cuda', index=0)}
tensor at position 28:
saved metadata: {'shape': torch.Size([]), 'dtype': torch.int64, 'device': device(type='cuda', index=0)}
recomputed metadata: {'shape': torch.Size([]), 'dtype': torch.uint64, 'device': device(type='cuda', index=0)}

This appeared to happen when the CUDNN_ATTENTION SDPA implementation was selected while activation_checkpointing was enabled. As a workaround, we disallow CUDNN_ATTENTION when activation_checkpointing is enabled in the DTensor path.

Issues

List issues that this PR closes (syntax):

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

Summary by CodeRabbit

Release Notes

  • Bug Fixes
    • Enhanced backend selection logic for attention computation to support additional configuration scenarios, improving stability and performance in more use cases.


Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
@yfw yfw requested review from a team as code owners December 21, 2025 21:44
@github-actions

⚠️ File Consistency Check

Check based on commit: 6de8b8f (PR #1676 from yifu/fix_actckpt)

⚠️ DTensor Policy Worker Synchronization Warning

The file nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py was modified in this PR, but nemo_rl/models/policy/workers/dtensor_policy_worker.py was not updated.

Why this matters:
These files contain related DTensor policy worker implementations that should be kept synchronized to ensure consistency across different versions.

Action required:

  • Please review if the changes in nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py should also be applied to nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • Update nemo_rl/models/policy/workers/dtensor_policy_worker.py if necessary to maintain consistency
  • If the files are intentionally different, please add a comment in the PR explaining why

Files to check:

  • Modified: nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • Not modified: nemo_rl/models/policy/workers/dtensor_policy_worker.py

This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.

@coderabbitai
Contributor

coderabbitai Bot commented Dec 21, 2025

📝 Walkthrough

Walkthrough

The change modifies the condition in DTensorPolicyWorkerV2.__init__ that determines when to apply a restricted SDPA backend list. The condition now triggers when cp_size > 1 or activation_checkpointing is enabled, whereas previously it checked only cp_size > 1. A comment clarifies that activation_checkpointing requires excluding CUDNN_ATTENTION due to a known error.
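The conditional expansion described above can be shown as a before/after sketch. The function names are illustrative only; they mirror the walkthrough's wording, not the actual source.

```python
# Illustrative only: these helpers mirror the walkthrough's description of the
# condition, not the exact code in DTensorPolicyWorkerV2.__init__.

def needs_sdpa_restriction_old(cp_size: int, activation_checkpointing: bool) -> bool:
    # Before the fix: only context parallelism triggered the restriction.
    return cp_size > 1

def needs_sdpa_restriction_new(cp_size: int, activation_checkpointing: bool) -> bool:
    # After the fix: activation checkpointing also triggers it, because
    # CUDNN_ATTENTION is known to error under checkpoint recomputation.
    return cp_size > 1 or activation_checkpointing

# Behavior changes only for cp_size == 1 with checkpointing enabled:
print(needs_sdpa_restriction_old(1, True), needs_sdpa_restriction_new(1, True))  # False True
```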

Changes

  • SDPA Backend Selection Logic — nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py: Expanded the condition for choosing a specialized sdpa_method to include the activation_checkpointing flag alongside the cp_size check; added a clarifying comment regarding the CUDNN_ATTENTION exclusion requirement.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Single file modification affecting only initialization logic
  • Straightforward conditional expansion with clarifying comments
  • Verify the logic change aligns with the intended behavior for both cp_size > 1 and activation_checkpointing scenarios

Possibly related PRs

Suggested labels

CI:L1, r0.5.0

Suggested reviewers

  • yuki-97
  • joyang-nv
  • terrykong

Pre-merge checks and finishing touches

✅ Passed checks (4 passed)
  • Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: The title accurately describes the main change: fixing a crash when activation_checkpointing is enabled, which directly matches the PR's purpose and the code modification.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.
  • Test Results For Major Changes — ✅ Passed: This PR contains a minor, targeted bug fix addressing a specific compatibility issue between CUDNN_ATTENTION and activation checkpointing.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch yifu/fix_actckpt

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b238e41 and 6de8b8f.

📒 Files selected for processing (1)
  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
!(**/tests/**|**/test_*.py|**/test_*.sh)

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
**/*.{py,sh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)

Files:

  • nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Lint check
  • GitHub Check: Post submodule check comment / Comment on PR
🔇 Additional comments (1)
nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py (1)

285-298: SDPA backend restriction for activation checkpointing is correct.

The expanded condition properly addresses the incompatibility with CUDNN_ATTENTION when activation checkpointing is enabled. Restricting to FLASH_ATTENTION and EFFICIENT_ATTENTION backends when either context parallelism or activation checkpointing is enabled is a valid workaround.

Confirm before merge:

  1. Tests in test_dtensor_worker_v2.py cover activation checkpointing scenarios.
  2. If a tracking issue exists for the CUDNN_ATTENTION metadata mismatch with activation checkpointing, document it in the code comment for future reference.


@yfw yfw added the CI:L1 label (Run doctests, unit tests, and functional tests) Dec 21, 2025
@terrykong terrykong added the r0.5.0 and CI:L1 labels and removed the CI:L1 label Dec 21, 2025
@terrykong terrykong merged commit 02d5142 into main Dec 22, 2025
64 of 72 checks passed
@terrykong terrykong deleted the yifu/fix_actckpt branch December 22, 2025 07:09
chtruong814 pushed a commit that referenced this pull request Dec 22, 2025
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
DeL-TaiseiOzaki pushed a commit to DeL-TaiseiOzaki/RL that referenced this pull request Jan 8, 2026
parthmannan pushed a commit to parthmannan/RL that referenced this pull request Jan 15, 2026
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
Signed-off-by: Parth Mannan <pmannan@nvidia.com>
yuanhangsu1986 pushed a commit to yuanhangsu1986/RL-Nemontron-Edge-Omni that referenced this pull request Feb 12, 2026
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
Signed-off-by: yuanhangs <yuanhangs@nvidia.com>
yuanhangsu1986 pushed a commit to yuanhangsu1986/RL-Nemontron-Edge-Omni that referenced this pull request Feb 21, 2026
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
Signed-off-by: yuanhangs <yuanhangs@nvidia.com>
seonjinn pushed a commit that referenced this pull request Mar 8, 2026
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
seonjinn pushed a commit that referenced this pull request Mar 8, 2026
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
seonjinn pushed a commit that referenced this pull request Mar 9, 2026
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>

Labels

CI:L1 Run doctests, unit tests, and functional tests r0.5.0
