support cp, fix qwen3.5 gdn sp #138
Merged
meichangsu1 merged 7 commits into modelscope:main on Apr 21, 2026
Conversation
Contributor
Code Review
This pull request significantly enhances sequence parallelism support by implementing ZigZag Ring Attention for long-sequence training and Ulysses-style sequence parallelism for Qwen3.5 linear attention. It also introduces multimodal deepstack patching for Qwen3-VL and refactors the SequenceParallel strategy to better handle complex device meshes and packed/varlen inputs. Feedback focuses on improving code maintainability and robustness, specifically by grouping attributes in the SequenceParallel constructor, removing redundant logic and unused imports, replacing deprecated inspection methods, and centralizing duplicated loss-gathering logic.
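To make the sharding concrete, here is a minimal illustrative sketch of how ZigZag Ring Attention typically splits a causal sequence across ranks; the function name and shapes are assumptions, not the PR's actual code:

```python
# Illustrative zigzag sharding for ring attention; names and shapes are
# assumptions, not taken from this PR.
import torch

def zigzag_split(x: torch.Tensor, rank: int, world_size: int, dim: int = 1) -> torch.Tensor:
    """Cut the sequence into 2 * world_size chunks and give rank i chunks
    i and (2 * world_size - 1 - i), so every rank holds an equal mix of
    early (cheap) and late (expensive) tokens under a causal mask."""
    chunks = x.chunk(2 * world_size, dim=dim)
    return torch.cat([chunks[rank], chunks[2 * world_size - 1 - rank]], dim=dim)
```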
- Refactor linear attention sequence parallel import error message into a constant
- Fix token counting in TransformersModel by using raw DP/FSDP world size instead of data_world_size
- Enhance Framework.gather_object to check distributed initialization before accessing world size (see the sketch below)
- Add test utility for creating padded labels in sequence parallel tests
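The `gather_object` change amounts to guarding the collective call behind a distributed-initialization check. A minimal sketch of that pattern; the signature here is inferred from the commit message, not copied from the repo:

```python
# Sketch of the "check distributed init before accessing world size" guard;
# the real Framework.gather_object may differ.
import torch.distributed as dist

def gather_object(obj):
    """Gather a Python object from every rank, degrading gracefully to a
    single-element list when torch.distributed is not initialized."""
    if not (dist.is_available() and dist.is_initialized()):
        return [obj]
    gathered = [None] * dist.get_world_size()
    dist.all_gather_object(gathered, obj)
    return gathered
```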
- Add `num_tokens` field to `ModelOutput` TypedDict for explicit token denominator
- Update `LossOutput` to use `OutputType` for `num_tokens` instead of `int`
- Refactor `LossMetric` to prefer `num_tokens` from outputs, with fallback to labels (see the sketch below)
- Remove `_get_raw_dp_fsdp_world_size` helper and use `_device_mesh._get_dp_fsdp_world_size`
- Use `InputProcessor.postprocess_tensor_sp` for loss tensor gathering in TransformersModel
- Simplify sequence-parallel loss normalization by relying on output `num_tokens`
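A rough sketch of the `num_tokens` plumbing described above; field and helper names are illustrative, and the repo's actual `ModelOutput`/`LossMetric` may differ:

```python
from typing import TypedDict
import torch

class ModelOutput(TypedDict, total=False):
    loss: torch.Tensor
    logits: torch.Tensor
    num_tokens: torch.Tensor  # explicit denominator for loss averaging

def loss_denominator(outputs: ModelOutput, labels: torch.Tensor,
                     ignore_index: int = -100) -> torch.Tensor:
    """Prefer the model-reported num_tokens; fall back to counting valid labels."""
    num_tokens = outputs.get('num_tokens')
    if num_tokens is not None:
        return num_tokens
    return (labels != ignore_index).sum()
```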
Force-pushed from 0e8600d to f13bc37
tastelikefeet approved these changes on Apr 21, 2026
PR type
PR information
This PR adds context parallel and Qwen3.5 Gated DeltaNet sequence parallel support to the transformers stack, and refactors sequence parallel into a package-based implementation.
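For context, Ulysses-style sequence parallelism for the linear-attention path swaps a sequence-sharded layout for a head-sharded one via all-to-all. A minimal sketch under assumed shapes and names, not the PR's actual implementation:

```python
# Illustrative Ulysses-style all-to-all for a linear-attention SP path;
# shapes, names, and group handling are assumptions, not this PR's code.
import torch
import torch.distributed as dist

def seq_to_head_all_to_all(x: torch.Tensor, sp_group=None) -> torch.Tensor:
    """[batch, seq/sp, heads, dim] (sequence-sharded) ->
       [batch, seq, heads/sp, dim] (head-sharded)."""
    sp = dist.get_world_size(sp_group)
    b, s_local, h, d = x.shape
    # Group heads by destination rank: dim 0 indexes the target rank.
    send = x.reshape(b, s_local, sp, h // sp, d).permute(2, 0, 1, 3, 4).contiguous()
    recv = torch.empty_like(send)
    dist.all_to_all_single(recv, send, group=sp_group)
    # dim 0 of recv now indexes the source rank, i.e. the sequence chunk.
    return recv.permute(1, 0, 2, 3, 4).reshape(b, sp * s_local, h // sp, d)
```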
Main changes:
- Refactor `sequence_parallel.py` into `sequence_parallel/` and add shared utilities.
- Qwen3.5 GDN sequence parallel support in `linear_attention_sp.py`; Ring attention is not supported for this path yet.
- Tests: `sp_fsdp_dense`, `tests/moe/test_expert_parallel_qwen3_fsdp_sp.py`.

Experiment results