
Add MoE to Gemma4 TP plan #45219

Merged
Cyrilvallez merged 2 commits into huggingface:main from sywangyi:reduce_memory_gemma4 on Apr 8, 2026
Conversation

@sywangyi (Contributor) commented Apr 3, 2026

What does this PR do?

For google/gemma-4-26B-A4B-it at TP 2, memory is 46 GB per rank without the change and drops to about 25 GB per rank with it.

Signed-off-by: Wang, Yi <yi.a.wang@intel.com>
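The memory saving comes from sharding the MoE expert weights across tensor-parallel ranks instead of replicating them. As a minimal sketch of what such a change could look like, here is a string-based TP plan in the style transformers uses; the module names and sharding-style strings below are illustrative assumptions, not copied from this PR's diff:

```python
# Hypothetical tensor-parallel plan for a Gemma-style MoE decoder layer.
# Keys are glob patterns over module names; values name a sharding style.
# All names here are illustrative, not taken from the actual PR.
base_model_tp_plan = {
    # Attention: shard input projections column-wise, output row-wise
    "layers.*.self_attn.q_proj": "colwise",
    "layers.*.self_attn.k_proj": "colwise",
    "layers.*.self_attn.v_proj": "colwise",
    "layers.*.self_attn.o_proj": "rowwise",
    # Dense MLP entries (per the review discussion, these stay in the plan)
    "layers.*.mlp.gate_proj": "colwise",
    "layers.*.mlp.up_proj": "colwise",
    "layers.*.mlp.down_proj": "rowwise",
    # MoE experts: without entries like these, every rank keeps a full
    # replica of all expert weights, which is what inflates per-rank memory
    "layers.*.mlp.experts.*.gate_proj": "colwise",
    "layers.*.mlp.experts.*.up_proj": "colwise",
    "layers.*.mlp.experts.*.down_proj": "rowwise",
}
```

Under this scheme each rank holds roughly 1/TP of the expert parameters, which is consistent with the reported drop from 46 GB to about 25 GB per rank at TP 2.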
@Rocketknight1 (Member) previously approved these changes Apr 8, 2026 and left a comment:

Happy to approve this one and people can yell at me later if there's any problem!

@Rocketknight1 Rocketknight1 enabled auto-merge April 8, 2026 13:06
@Rocketknight1 Rocketknight1 added this pull request to the merge queue Apr 8, 2026
@HuggingFaceDocBuilderDev commented:

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@github-merge-queue github-merge-queue Bot removed this pull request from the merge queue due to failed status checks Apr 8, 2026
@sywangyi (Contributor, Author) commented Apr 8, 2026

@Rocketknight1 thanks for the approval. I checked the failure in CI; it has nothing to do with this PR.

@Cyrilvallez Cyrilvallez dismissed Rocketknight1’s stale review April 8, 2026 13:58

TP plan not correct

@Cyrilvallez Cyrilvallez changed the title reduce memory for gemma4 moe model in tp Improve Gemma4 TP plan Apr 8, 2026
@Cyrilvallez Cyrilvallez changed the title Improve Gemma4 TP plan Add MoE to Gemma4 TP plan Apr 8, 2026
@Cyrilvallez (Member) left a comment:

Thanks, indeed the MoE part was forgotten! Sorry @Rocketknight1, I dismissed your review when I saw it in the merge queue and panicked, thinking the mlp part should be removed, but it should stay as well haha

@github-actions (Bot) commented Apr 8, 2026

[For maintainers] Suggested jobs to run (before merge)

run-slow: gemma4

@Cyrilvallez Cyrilvallez merged commit 7f6cc4b into huggingface:main Apr 8, 2026
15 of 18 checks passed
bigshanedogg pushed a commit to bigshanedogg/transformers that referenced this pull request Apr 9, 2026
reduce memory for gemma4 moe model in tp

Signed-off-by: Wang, Yi <yi.a.wang@intel.com>
Co-authored-by: Cyril Vallez <cyril.vallez@huggingface.co>
Cyrilvallez added a commit that referenced this pull request Apr 9, 2026
reduce memory for gemma4 moe model in tp

Signed-off-by: Wang, Yi <yi.a.wang@intel.com>
Co-authored-by: Cyril Vallez <cyril.vallez@huggingface.co>
sirzechs66 pushed a commit to sirzechs66/transformers that referenced this pull request Apr 18, 2026
reduce memory for gemma4 moe model in tp

Signed-off-by: Wang, Yi <yi.a.wang@intel.com>
Co-authored-by: Cyril Vallez <cyril.vallez@huggingface.co>

4 participants