feat: Add attention_backend config support for Megatron policy #1628

yuki-97 merged 11 commits into NVIDIA-NeMo:main from
Conversation
📝 Walkthrough: Adds optional wiring to configure Megatron's attention backend in the policy worker initialization.
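A minimal sketch of what such wiring could look like. The `AttnBackend` enum here is a self-contained stand-in mirroring Megatron core's backend names; the helper function and its name are hypothetical, not the PR's actual implementation:

```python
from enum import Enum
from typing import Optional


class AttnBackend(Enum):
    """Stand-in for Megatron's attention-backend enum (assumption for illustration)."""
    flash = 1
    fused = 2
    unfused = 3
    local = 4
    auto = 5


def resolve_attention_backend(megatron_cfg: dict) -> Optional[AttnBackend]:
    """Map an optional YAML string to the backend enum.

    Returns None when the key is absent, so Megatron's own default
    selection stays in effect (no hidden default in code).
    """
    name = megatron_cfg.get("attention_backend")
    if name is None:
        return None
    try:
        return AttnBackend[name]
    except KeyError:
        valid = ", ".join(b.name for b in AttnBackend)
        raise ValueError(
            f"attention_backend must be one of: {valid}; got {name!r}"
        )
```

Returning `None` for an absent key (rather than defaulting to a backend in code) matches the repo guideline that YAML is the single source of truth for configuration defaults.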
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
nemo_rl/models/policy/workers/megatron_policy_worker.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Conform code to Python 3.12+
Indent code with 4 spaces. Do not use tabs
Use snake_case for file names
Use PascalCase for class names
Use snake_case for function and method names
Use snake_case for local variables
Prefix variable names that start with a number with 'k' (e.g., k_99th_percentile)
Use upper snake_case with 'G' prefix for global variables (e.g., G_MY_GLOBAL)
Use upper snake_case for constants
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
Prefer docstrings over comments for interfaces that may be used outside a file
Reserve comments for code within a function or interfaces that are local to a file
If a piece of code is commented out, include a comment describing its usage and why it's commented out. Remove debug comments before merging
Use Google style docstrings for classes and functions in Python, which can be parsed by Sphinx
Avoid using reflection when functionality can be easily achieved without reflection
When using try-except blocks, limit the except clause to the smallest set of specific errors possible
When using try-except blocks for duck-typing, keep the body of the try as small as possible and use the else block for logic
YAML is the single source of truth for configuration defaults. Do not set non-None defaults in code for configuration values
For required configuration attributes, access config directly and expect presence (e.g., policy_cfg['precision']) without hidden defaults
Use typing.NotRequired to mark optional attributes in TypedDict for configuration
When adding a new config key to a TypedDict subclass, document the key's purpose, valid values/types, and recommended default, and reflect the default in exemplar YAMLs under examples/configs/*.yaml
Follow the Google Python Style Guide for Python code
Files:
nemo_rl/models/policy/workers/megatron_policy_worker.py
nemo_rl/**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
For any source file under nemo_rl/*.py that defines a class or function decorated with @ray.remote, add a coverage pragma (# pragma: no cover) because these run in separate Ray processes
Files:
nemo_rl/models/policy/workers/megatron_policy_worker.py
!(**/tests/**|**/test_*.py|**/test_*.sh)
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Add the NVIDIA copyright header to all Python files and shell scripts (excluding tests). The header should include the current year
Files:
nemo_rl/models/policy/workers/megatron_policy_worker.py
**/*.{py,sh}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
The NVIDIA copyright header should appear at the top of all Python files and shell scripts (excluding tests)
Files:
nemo_rl/models/policy/workers/megatron_policy_worker.py
force-pushed 284865b to 5b401cc
force-pushed 7884009 to 9a97ffe
force-pushed 9a97ffe to 7a6c3aa
Hi @terrykong — just checking in on this PR. I rebased it onto the latest main, so it should apply cleanly. This builds on my previously merged PRs (#1610, #1611) and addresses MoE scalability. Today, the default (…). Could you take a look when you have a moment? Thanks!
terrykong left a comment:
thanks for the contribution @sahgerlad. sorry this one slipped. @yaoyu-33 could you review?
force-pushed 7a6c3aa to fcf9964
force-pushed fcf9964 to 963531e
guyueh1 left a comment:
LGTM, except for one comment; I'll approve.
@sahgerlad there has been a refactor on the Megatron policy worker causing a merge conflict. Can you resolve it? Also please fix the lint.
@sahgerlad looks like the new FA test you added fails. TE can't find flash. Did it work for you locally?
Head branch was pushed to by a user without write access
FA doesn't support FP32. Specified BF16. Should be okay now.
force-pushed 19da43d to d4efbb3
… remove env var that causes issues
Signed-off-by: Sahger Lad <lad.sahger@gmail.com>
force-pushed d4efbb3 to c58a83f
On my end I ran:
Co-authored-by: Yuki Huang <yukih@nvidia.com>
Signed-off-by: sahgerlad <36946563+sahgerlad@users.noreply.github.com>
Signed-off-by: Sahger Lad <lad.sahger@gmail.com>
/ok to test 86f92f8
hi @sahgerlad , overall lgtm, could you take a look at the remaining comments? |
@yuki-97 I believe all comments have been addressed. Let me know if there is anything remaining |
hi @sahgerlad , just a small Q here #1628 (comment) |
Sorry, had a few comments sitting in review. Just posted. Let me know if there are remaining open items |
yuki-97 left a comment:
thanks for the contribution again! lgtm
/ok to test b7c8ccd
What does this PR do?
Enable configuring the attention backend (flash, fused, unfused, local, auto) via `megatron_cfg.attention_backend` in the YAML configuration.
- Adds `policy.megatron_cfg.attention_backend` to the YAML configuration
- Supports the `flash`, `fused`, `unfused`, `local`, or `auto` attention implementations

Usage
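A sketch of how the new key could be set in YAML. The `policy.megatron_cfg` nesting follows the PR description; the chosen value and comments are illustrative:

```yaml
policy:
  megatron_cfg:
    # Optional; one of: flash, fused, unfused, local, auto.
    # Omit this key to keep Megatron's default backend selection.
    attention_backend: flash
```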