fix: Fix crash when using cp in dtensor path (#1663)
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com>
📝 Walkthrough

Context-parallel (cp) handling is added to DTensor policy worker initialization. The worker computes `cp_size` from the config and conditionally imports `SDPBackend`, constructing `sdpa_method` with `FLASH_ATTENTION` and `EFFICIENT_ATTENTION` when `cp_size > 1`. `sdpa_method` is then passed to `model_config` creation to enable SDPA backend selection during model initialization.
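The selection logic in the walkthrough can be sketched in a few lines of plain Python. This is a toy model, not the worker code: `SDPBackend` is mimicked with a local `Enum` so the sketch runs without torch, and `build_sdpa_method` is a hypothetical helper name introduced here for illustration.

```python
from enum import Enum, auto


class SDPBackend(Enum):
    # Stand-in for torch.nn.attention.SDPBackend (illustration only)
    FLASH_ATTENTION = auto()
    EFFICIENT_ATTENTION = auto()
    CUDNN_ATTENTION = auto()
    MATH = auto()


def build_sdpa_method(cp_size: int):
    """Mirror the worker logic sketched in the walkthrough.

    With cp_size > 1, only flash and efficient attention are allowed;
    otherwise no restriction is passed along (None).
    """
    if cp_size > 1:
        return [SDPBackend.FLASH_ATTENTION, SDPBackend.EFFICIENT_ATTENTION]
    return None


# With context parallelism on, the cuDNN backend is never in the list.
assert SDPBackend.CUDNN_ATTENTION not in (build_sdpa_method(2) or [])
# Without it, no restriction is constructed.
assert build_sdpa_method(1) is None
```

The restricted list is what gets handed to `model_config` creation, so the backend choice is fixed at model-initialization time rather than per-forward-pass.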
Actionable comments posted: 0
🧹 Nitpick comments (1)
nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py (1)
289-289: Consider moving the import to the top of the file.

The conditional import of `SDPBackend` works correctly but deviates from Python conventions. Moving it to the top-level imports would improve consistency and make the dependency more visible.

🔎 Suggested import placement
At the top of the file, add the import alongside other torch imports (around line 26):
```diff
 import torch
+from torch.nn.attention import SDPBackend
 from accelerate import init_empty_weights
```

Then simplify the conditional block:

```diff
 if cp_size > 1:
     # Match Automodel's `get_train_context` in `cp_utils.py` where only
     # flash and efficient backends are supported
     # Ref: https://github.com/NVIDIA-NeMo/Automodel/blob/81788d6f4848f5f066c4a6a2bece4689a6a83687/nemo_automodel/components/distributed/cp_utils.py#L57
-    from torch.nn.attention import SDPBackend
-
     sdpa_method = [
         SDPBackend.FLASH_ATTENTION,
         SDPBackend.EFFICIENT_ATTENTION,
     ]
```
🔇 Additional comments (3)
nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py (3)
280-280: LGTM - Early cp_size computation enables downstream SDPA backend selection.

Computing `cp_size` early is necessary for the conditional SDPA backend logic that follows. The change aligns with the PR objective to restrict attention backends when context parallelism is enabled.
285-296: Correctly restricts SDPA backends for context parallelism to prevent the H100 crash.

PyTorch 2.9.0 supports `torch.nn.attention.SDPBackend` with `FLASH_ATTENTION` and `EFFICIENT_ATTENTION` backends. The conditional logic properly limits attention backends to these two when `cp_size > 1`, addressing the cuDNN SDP backend crash on H100 machines. The approach aligns with Automodel's `get_train_context` implementation referenced in the comment.
306-306: The parameter is correctly passed to `model_class.from_config()`.

NeMoAutoModel's `from_config` method accepts `sdpa_method` as an optional parameter specifying an ordered list of `SDPBackend` implementations, and the code properly uses it to restrict SDPA backends to `FLASH_ATTENTION` and `EFFICIENT_ATTENTION` when context parallelism is enabled.
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com> Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com> Signed-off-by: Parth Mannan <pmannan@nvidia.com>
Signed-off-by: Yi-Fu Wu <yifu.wu@gmail.com> Signed-off-by: yuanhangs <yuanhangs@nvidia.com>
What does this PR do?
Fixes a crash when `cp > 1` in the dtensor path that was introduced in 5bf56a9.
After this version bump, we found that the cuDNN SDPA backend was getting selected even when `cp > 1` on certain machines (we noticed this for H100, but not A100), which was causing a crash due to this bug: pytorch/pytorch#162743. This is unexpected because we restrict to `SDPBackend.FLASH_ATTENTION` and `SDPBackend.EFFICIENT_ATTENTION` when `cp > 1`: https://github.com/NVIDIA-NeMo/Automodel/blob/81788d6f4848f5f066c4a6a2bece4689a6a83687/nemo_automodel/components/distributed/cp_utils.py#L57.

The issue is that we patch the attention with all possible backends here: https://github.com/NVIDIA-NeMo/Automodel/blob/81788d6f4848f5f066c4a6a2bece4689a6a83687/nemo_automodel/components/_transformers/auto_model.py#L65-L71, which overrides the previous restriction. To fix this, we pass in only `SDPBackend.FLASH_ATTENTION` and `SDPBackend.EFFICIENT_ATTENTION` when calling `from_config` with `cp > 1`, restricting attention to only these backends, as is expected when `cp > 1`.

Issues
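The override-and-fix behavior described in this PR can be modeled in a few lines. This is a toy illustration of "the last backend list installed wins", with plain strings standing in for `SDPBackend` members and a hypothetical `patch_attention` helper; it is not torch or Automodel code.

```python
ALL_BACKENDS = ["FLASH_ATTENTION", "EFFICIENT_ATTENTION", "CUDNN_ATTENTION", "MATH"]
CP_SAFE = ["FLASH_ATTENTION", "EFFICIENT_ATTENTION"]


def patch_attention(restriction, patch_backends):
    """Toy model of the attention patch: whatever backend list the patch
    installs replaces any restriction that was set up earlier."""
    return patch_backends


# Before the fix: the patch installed every backend, silently undoing
# the cp > 1 restriction, so cuDNN could still be chosen (and crash).
before = patch_attention(CP_SAFE, ALL_BACKENDS)
assert "CUDNN_ATTENTION" in before

# After the fix: the restricted list itself is passed into from_config,
# so the patch only ever installs flash/efficient attention.
after = patch_attention(CP_SAFE, CP_SAFE)
assert "CUDNN_ATTENTION" not in after
```

The design point: rather than relying on an outer restriction that a later patch can clobber, the fix makes the restriction part of the patch's own input.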