Add tuned a8w8 blockscale GEMM config for Qwen3-Next-80B-A3B on MI355X #2868
Open
Tuned 1482 shapes (TP1/TP2/TP4) for Qwen/Qwen3-Next-80B-A3B-Instruct-FP8 on MI355X using the CK and CK-TILE backends with splitK support.

Depends on:
- PR ROCm#2862 (CK bump for stride fix in CK-TILE blockscale)
- PR ROCm#2541 (splitK support for CK/CK-TILE blockscale GEMMs)
- PR ROCm#2487 (AQLayout tunable for CK-TILE blockscale 8-warp kernels)
Full retune of all 1482 shapes on MI355X (gfx950, cu_num=256). Key changes:
- SplitK usage dropped from 613 to 88 CK shapes (splitK > 0)
- All shapes validated via --run_config (1482/1482 OK)
- E2E perf: 2-8% output throughput improvement vs the untuned heuristic
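Conceptually, a tuned config like this CSV is a lookup table from GEMM shape to kernel choice and splitK factor, with the heuristic as fallback for untuned shapes. A minimal sketch of that dispatch, assuming a hypothetical column layout (the names `load_tuned_configs`, `pick_kernel`, and the CSV schema below are illustrative, not aiter's actual format):

```python
import csv
import io

# Illustrative rows: one tuned (M, N, K) shape per line, mapping to the
# winning kernel and its splitK factor. The real CSV has more columns.
TUNED_CSV = """M,N,K,kernel,splitK
1,2048,4096,ck_tile_blockscale,0
32,4096,4096,ck_blockscale,4
"""

def load_tuned_configs(text):
    """Parse the CSV into a {(M, N, K): (kernel, splitK)} lookup table."""
    table = {}
    for row in csv.DictReader(io.StringIO(text)):
        key = (int(row["M"]), int(row["N"]), int(row["K"]))
        table[key] = (row["kernel"], int(row["splitK"]))
    return table

def pick_kernel(table, m, n, k):
    """Return the tuned entry for a shape, or fall back to the heuristic."""
    return table.get((m, n, k), ("heuristic_default", 0))

configs = load_tuned_configs(TUNED_CSV)
assert pick_kernel(configs, 32, 4096, 4096) == ("ck_blockscale", 4)
assert pick_kernel(configs, 7, 7, 7) == ("heuristic_default", 0)
```

Untuned shapes fall through to the default heuristic, which is why validating all 1482 tuned entries via --run_config matters: a bad row silently shadows the fallback for that shape.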
Motivation
Add tuned and untuned blockscale a8w8 GEMM configs for Qwen/Qwen3-Next-80B-A3B-Instruct-FP8 on MI355X (gfx950, 256 CUs). The tuned config covers 1482 shapes across TP1, TP2, and TP4, delivering +3.9% to +8.5% e2e output throughput over the unmodified vLLM 0.19 baseline (heuristic default kernels).
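For context, an a8w8 blockscale GEMM multiplies 8-bit activations and weights that each carry one scale factor per block along the reduction dimension, rather than one scale per tensor. A toy NumPy sketch of the idea (the block size, int8 storage, and function names are illustrative; real kernels use FP8 and block sizes like 128):

```python
import numpy as np

BLOCK = 4  # illustrative K-block size; real blockscale kernels use e.g. 128

def quantize_blockwise(x, block=BLOCK):
    """Quantize to int8 with one scale per (row, K-block)."""
    m, k = x.shape
    xb = x.reshape(m, k // block, block)
    scale = np.abs(xb).max(axis=2, keepdims=True) / 127.0
    q = np.round(xb / scale).astype(np.int8)
    return q.reshape(m, k), scale.squeeze(2)

def blockscale_gemm(qa, sa, qb, sb, block=BLOCK):
    """Accumulate per-K-block int8 partial products, applying the
    activation scale and weight scale of each block before summing."""
    m, k = qa.shape
    n = qb.shape[0]  # qb is stored as (N, K)
    out = np.zeros((m, n), dtype=np.float32)
    for i in range(k // block):
        a_blk = qa[:, i * block:(i + 1) * block].astype(np.float32)
        b_blk = qb[:, i * block:(i + 1) * block].astype(np.float32)
        out += (sa[:, i:i + 1] * (a_blk @ b_blk.T)) * sb[:, i]
    return out

x = np.random.rand(4, 8).astype(np.float32)
w = np.random.rand(3, 8).astype(np.float32)
qa, sa = quantize_blockwise(x)
qw, sw = quantize_blockwise(w)
assert np.allclose(blockscale_gemm(qa, sa, qw, sw), x @ w.T, atol=0.1)
```

Per-block scales keep outlier values in one block from washing out the precision of every other block, which is why blockscale FP8 holds accuracy better than per-tensor scaling on large models.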
Depends on:
- PR ROCm#2862 (CK bump for stride fix in CK-TILE blockscale)
- PR ROCm#2541 (splitK support for CK/CK-TILE blockscale GEMMs)
- PR ROCm#2487 (AQLayout tunable for CK-TILE blockscale 8-warp kernels)
Technical Details
Kernel distribution in the tuned CSV:
72% of CK shapes benefit from splitK > 0. CK-TILE wins 29% of shapes (all splitK=0), primarily at large N (gate/up projections, MoE shared experts).
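For context, splitK partitions the reduction (K) dimension across workgroups, each computing a partial GEMM, with the partials summed afterwards; it helps when M and N are too small to fill the GPU. A minimal NumPy sketch of the decomposition (the `splitk_gemm` name and `split_k` parameter are illustrative, not aiter's API):

```python
import numpy as np

def splitk_gemm(a, b, split_k):
    """Toy split-K GEMM: slice the K dimension into `split_k` chunks,
    compute one partial product per chunk (independent workgroups on a
    GPU), then reduce the partials with a sum."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2 and k % split_k == 0
    chunk = k // split_k
    partials = [
        a[:, i * chunk:(i + 1) * chunk] @ b[i * chunk:(i + 1) * chunk, :]
        for i in range(split_k)
    ]
    return np.sum(partials, axis=0)

# Split-K matches a plain GEMM up to floating-point reduction order.
a = np.random.rand(4, 8).astype(np.float32)
b = np.random.rand(8, 3).astype(np.float32)
assert np.allclose(splitk_gemm(a, b, split_k=4), a @ b, atol=1e-5)
```

The cross-workgroup reduction has a fixed cost, which is consistent with splitK winning mainly at small M/N and CK-TILE (splitK=0) winning at large N.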
Test Plan
vllm bench serve (random dataset, --request-rate inf)

Test Result
Accuracy: GSM8K 5-shot flexible-extract 0.8522 ± 0.0098 (matches CK baseline 0.8499 ± 0.0098). Coherence PASS at all concurrency levels.
E2E throughput (output tok/s, TP1, MI355X):
Submission Checklist