
sequence parallel add tests#36

Merged
meichangsu1 merged 4 commits into dev from sp_ljl_dev on Feb 5, 2026

Conversation

@meichangsu1
Collaborator

add tests for accuracy

@gemini-code-assist
Contributor

Summary of Changes

Hello @meichangsu1, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the testing coverage for the twinkle framework's sequence parallelism implementation. The primary goal is to ensure the numerical accuracy and correct integration of Sequence Parallelism with other distributed training strategies like Expert Parallelism and Fully Sharded Data Parallelism, particularly for Qwen3 Mixture-of-Experts models. This is achieved through the addition of a new, detailed test suite that performs rigorous alignment checks on model outputs and gradients. A minor, related simplification was also made to a utility function for determining ulysses_size.

Highlights

  • Refactored _get_ulysses_size function: Simplified the logic for determining ulysses_size in sequence_parallel.py by removing a conditional check that relied on device_mesh.has_dim("sp").
  • Added comprehensive accuracy tests for Sequence Parallelism: Introduced a new test file test_expert_parallel_qwen3_fsdp_sp.py to validate the numerical accuracy of Sequence Parallel (SP) when combined with Expert Parallel (EP) and Fully Sharded Data Parallel (FSDP) for Qwen3 MoE models.
  • Implemented detailed alignment checks: The new tests include functions to compare forward pass logits and backward pass gradients between baseline configurations (without SP) and SP-enabled configurations, ensuring correctness in distributed environments.


Changelog
  • src/twinkle/model/transformers/strategy/sequence_parallel.py
    • Removed redundant logic for calculating ulysses_size based on device_mesh dimensions.
  • tests/moe/test_expert_parallel_qwen3_fsdp_sp.py
    • New File: Added a dedicated test file to verify the accuracy of sequence parallelism in conjunction with expert parallelism and FSDP for Qwen3 MoE models.
    • EP+FSDP+SP Alignment Test: Implemented _run_worker_ep_fsdp_sp_align to compare router states, logits, and expert gradients between EP+FSDP and EP+FSDP+SP configurations.
    • FSDP+SP Alignment Test: Implemented _run_worker_fsdp_sp_align to compare logits and input embedding gradients between FSDP-only and FSDP+SP configurations.
    • Distributed Utilities: Included various helper functions for setting up distributed processes, ensuring determinism, and capturing model states for comparison.
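The alignment checks listed in the changelog boil down to comparing baseline and SP-variant outputs within numerical tolerances. A minimal sketch of such a check is below; the real tests most likely operate on torch tensors (e.g. via `torch.testing.assert_close`), while this simplified helper compares flat lists of floats:

```python
import math

def assert_aligned(baseline, variant, rtol=1e-5, atol=1e-6, name="logits"):
    """Hypothetical sketch of a logits/gradient alignment check:
    verify two flat sequences of floats match element-wise within
    relative and absolute tolerances."""
    assert len(baseline) == len(variant), f"{name}: length mismatch"
    for i, (a, b) in enumerate(zip(baseline, variant)):
        if not math.isclose(a, b, rel_tol=rtol, abs_tol=atol):
            raise AssertionError(f"{name}[{i}] differs: {a} vs {b}")
```

The same pattern applies to forward logits, router states, and expert gradients: capture both runs under identical seeds, then assert closeness rather than exact equality, since distributed reductions can reorder floating-point operations.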
Activity
  • The pull request was created by meichangsu1 to add accuracy tests for sequence parallelism, as indicated by the title "sequence parallel add tests" and the description "add tests for accurancy". The patch directly addresses this goal by introducing a new, comprehensive test file alongside a minor related code adjustment.

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request adds comprehensive alignment tests for sequence parallelism combined with FSDP and expert parallelism. The tests are well-structured, comparing baseline configurations against sequence parallel variants to ensure correctness of logits and gradients. The change in sequence_parallel.py to remove implicit derivation of ulysses_size makes the configuration more explicit, which is a good improvement. I've added one suggestion to improve test performance.


# --- SP variant: EP+FSDP+SP ---
# Note: SP does global patching; keep it after baseline in this process.
model_sp = _load_qwen3_moe_pretrained(model_id, local_files_only, device)


Severity: medium

You are loading the pretrained model again here. For better test performance, you could load it once at the beginning of the test, then use copy.deepcopy to create the two model instances for the baseline and the variant. This avoids a second expensive from_pretrained call. You've already used this optimization in _run_worker_fsdp_sp_align.
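The reviewer's suggestion amounts to the following pattern. This is a minimal sketch, assuming a generic loader callable; the helper name `build_model_pair` is hypothetical, and in the real test the loader would be `_load_qwen3_moe_pretrained`:

```python
import copy

def build_model_pair(load_fn):
    """Hypothetical sketch of the suggested optimization: run the
    expensive pretrained load once, then deep-copy the result so the
    baseline and the SP variant start from identical weights."""
    base = load_fn()                      # single expensive load
    model_baseline = copy.deepcopy(base)  # e.g. EP+FSDP baseline
    model_sp = copy.deepcopy(base)        # e.g. EP+FSDP+SP variant
    return model_baseline, model_sp
```

Besides saving a `from_pretrained` call, deep-copying also guarantees both runs begin from bit-identical parameters, which tightens the alignment comparison.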

Move sequence-parallel strategy construction to a lazy method `_ensure_sp_strategy` to reduce side effects during model initialization. The strategy is now created only when needed, after the underlying Hugging Face model is fully initialized and before wrapping. This improves initialization performance and avoids unnecessary overhead when sequence parallelism is not enabled.
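The lazy-construction pattern described in that commit message can be sketched as follows. Everything here is illustrative: the class name, the `wrap` method, and the placeholder strategy object are assumptions; only `_ensure_sp_strategy` comes from the commit message:

```python
class SequenceParallelModel:
    """Hypothetical sketch of lazy SP-strategy construction: the
    strategy object is built on first use, after the underlying
    model is fully initialized, rather than eagerly in __init__."""

    def __init__(self, sp_enabled: bool):
        self.sp_enabled = sp_enabled
        self._sp_strategy = None  # not constructed yet

    def _ensure_sp_strategy(self):
        # Build the strategy only when SP is enabled and it has not
        # been created yet; avoids side effects (e.g. global patching)
        # during model initialization when SP is off.
        if self.sp_enabled and self._sp_strategy is None:
            self._sp_strategy = {"name": "ulysses"}  # placeholder object
        return self._sp_strategy

    def wrap(self):
        # Called once the underlying HF model is ready, before wrapping.
        strategy = self._ensure_sp_strategy()
        return "sp-wrapped" if strategy else "plain"
```

Deferring construction this way keeps the non-SP path entirely free of SP-related overhead, which matters because, as noted in the diff comment above, the SP strategy performs global patching.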
@meichangsu1 meichangsu1 merged commit 43ada26 into dev Feb 5, 2026
