Pass packed boundary metadata to Qwen3.5 linear-attention fast kernels#44867
Closed
sdharani91 wants to merge 1 commit into huggingface:main from
Conversation
…s for issue 44717
Contributor
[For maintainers] Suggested jobs to run (before merge): run-slow: qwen3_5
This was referenced Mar 25, 2026
Author
Follow-up draft PR: https://github.com/huggingface/transformers/pull/45034/changes#diff-6064941ca492d13e60e6c551ee54b967d803c2fdc75dc0751676563eb615ae63 based on comments from #44717 (comment)
What does this PR do?
Fixes #44717
This PR fixes packed-sequence handling for the Qwen3.5 linear-attention fast path.
Before this change, Qwen3.5 produced different outputs for:
- a padded representation of multiple sequences
- a packed representation of the same sequences using reset position_ids
The issue was specific to the linear-attention fast path. Full-attention layers already respected packed boundaries through the shared masking logic, but the Qwen3.5 fast linear-attention path was not passing packed-boundary metadata into its kernels.
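To make the two input layouts concrete, here is a minimal illustrative sketch (not code from the PR) of a padded batch versus a single packed row whose position_ids restart at 0 at each sequence boundary. The token ids and the `PAD` value are made up for illustration.

```python
# Hypothetical example of the two layouts being compared.
PAD = 0  # assumed padding token id, for illustration only
seqs = [[11, 12, 13], [21, 22]]

# Padded layout: one row per sequence, right-padded to the longest length.
padded_ids = [
    [11, 12, 13],
    [21, 22, PAD],
]

# Packed layout: sequences concatenated into one row; position_ids restart
# at 0 at each boundary so the boundaries remain recoverable downstream.
packed_ids = [11, 12, 13, 21, 22]
packed_position_ids = [0, 1, 2, 0, 1]
```

With correct packed-boundary handling, a model should produce the same per-token outputs for both layouts (ignoring the padded positions).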
This PR fixes that by:
- deriving packed-boundary metadata from packed position_ids
- passing seq_idx to the causal-convolution fast path
- passing cu_seqlens to the FLA gated-delta-rule fast path
The change is intentionally scoped to the Qwen3.5 fast path for packed prefill inputs. The slow fallback path is not changed in this PR.
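The metadata derivation step can be sketched roughly as follows. This is an illustrative pure-Python version, not the merged helper: the function name and exact return shapes are assumptions, but it shows how `seq_idx` (per-token segment index, as consumed by the causal-conv path) and `cu_seqlens` (cumulative segment boundaries, as consumed by the gated-delta-rule path) fall out of position_ids that reset to 0 at each segment start.

```python
# Hypothetical sketch; in the real model these would be tensors, and the
# helper name derive_packed_metadata is invented for illustration.
def derive_packed_metadata(position_ids):
    """Given flat position ids for packed sequences (each segment restarts
    at 0), return (seq_idx, cu_seqlens)."""
    seq_idx, cu_seqlens = [], [0]
    segment = -1
    for i, pos in enumerate(position_ids):
        if pos == 0:  # a reset marks the start of a new segment
            segment += 1
            if i > 0:
                cu_seqlens.append(i)
        seq_idx.append(segment)
    cu_seqlens.append(len(position_ids))
    return seq_idx, cu_seqlens
```

For example, `[0, 1, 2, 0, 1]` yields `seq_idx = [0, 0, 0, 1, 1]` and `cu_seqlens = [0, 3, 5]`.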
How was this tested?
Manual validation:
Reproduced the bug before the fix on Qwen3.5 using a tiny local config with one full-attention layer and one linear-attention layer.
Compared:
- padded inputs for multiple sequences
- packed inputs for the same sequences with reset position_ids
Before the fix on the fast path:
- allclose: False
- max abs diff was about 8e-3
After the fix on the fast path:
- the original 2-segment packed-vs-padded repro matches
- a multi-segment packed-vs-padded repro also matches, with max abs diff around 6e-8
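The packed-vs-padded comparison above can be sketched as follows: slice each sequence's outputs out of both layouts and take the worst element-wise difference. This is an illustrative pure-Python stand-in (the real repro would compare model output tensors, e.g. via torch.allclose); the function name is invented.

```python
# Hypothetical sketch of the repro's comparison logic.
def max_abs_diff(padded_out, packed_out, seq_lens):
    """Compare each sequence's outputs between the padded rows (padding
    positions ignored) and the flat packed row."""
    worst, offset = 0.0, 0
    for row, n in zip(padded_out, seq_lens):
        for j in range(n):
            worst = max(worst, abs(row[j] - packed_out[offset + j]))
        offset += n
    return worst
```

A near-zero result (here, around 6e-8 after the fix) indicates the two layouts agree up to floating-point noise.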
Sanity checks:
- Verified Qwen3.5 was using the fast kernels:
  - causal_conv1d_fn present: True
  - fla.ops.gated_delta_rule.chunk
  - fla.ops.gated_delta_rule.fused_recurrent
- Verified a normal unpacked Qwen3.5 forward still works after the change.
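A check like the one above can be done by probing whether the optional kernel packages import. This is a minimal sketch assuming the `causal_conv1d` and `fla` (flash-linear-attention) packages expose the names referenced in this PR; the helper name is invented, and transformers has its own availability utilities for this purpose.

```python
# Hypothetical availability probe for the optional fast-kernel packages.
def fast_kernels_available():
    try:
        from causal_conv1d import causal_conv1d_fn  # noqa: F401
        conv_ok = True
    except ImportError:
        conv_ok = False
    try:
        from fla.ops.gated_delta_rule import chunk_gated_delta_rule  # noqa: F401
        fla_ok = True
    except ImportError:
        fla_ok = False
    # Both kernels are needed for the full linear-attention fast path.
    return conv_ok and fla_ok
```

If this returns False, the model would fall back to the slow path, which this PR intentionally leaves unchanged.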
Unit tests:
Added tests for the packed-metadata helper in tests/models/qwen3_5/test_modeling_qwen3_5.py, including:
- simple packed input
- multi-segment packed input
- cases where packed metadata should be skipped, such as cached inputs or unsupported batch layouts
Before submitting
- This change was discussed in a GitHub issue beforehand: Support packed sequences for linear attention models (i.e. Qwen3.5) #44717 (comment)
Who can review?
@vasqu