fix: MoE dispatch for Quark W4A6 models (MXFP4 weights with QuantType.No) #2457
Open
vecheruk-amd wants to merge 1 commit into ROCm:main from
Conversation
Contributor
🏷️ CI Guide
Runs automatically on every PR:
Extended tests (opt-in via labels):
valarLip requested changes on May 1, 2026
```python
quant_type = quant_remap.get(quant_type, quant_type)
# W4A6: remap QuantType.No -> per_1x32 for fp4x2 weights so activations
# get quantized to fp4x2 at runtime (CK MoE only supports A4W4).
if quant_type == QuantType.No and w1.dtype == dtypes.fp4x2:
```
Collaborator
why not just send `QuantType.per_1x32` in, instead of this hack?
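(For illustration, a sketch of the alternative the reviewer suggests: have the caller choose `QuantType.per_1x32` up front for fp4x2 weights rather than remapping inside the kernel dispatch. The helper below is hypothetical and not part of the PR or the aiter codebase.)

```python
# Hypothetical caller-side selection: pick the quant type before calling the
# fused MoE path instead of remapping inside ck_moe_2stages. The enum and
# dtype objects are passed in so the sketch stays self-contained.
def pick_moe_quant_type(weight_dtype, fp4x2_dtype, quant_type_enum):
    if weight_dtype == fp4x2_dtype:
        # CK MoE only supports A4W4, so activations must be quantized to fp4x2.
        return quant_type_enum.per_1x32
    # Otherwise leave activations unquantized.
    return quant_type_enum.No
```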
sunway513 added a commit that referenced this pull request on May 4, 2026
…e.py

- Restore import to match main: use `from aiter import fused_dynamic_mxfp4_quant_moe_sort, mxfp4_moe_sort_fwd` instead of importing from internal triton path and fp4_utils
- Replace all fp4_utils.moe_mxfp4_sort() calls with mxfp4_moe_sort_fwd() using correct parameter names (cols= instead of block_size=)
- Remove all moe_buf preallocated buffer additions (PR #2687 rejected): parameter defaults, if-guards, and pass-throughs in _moe_sorting_impl, moe_sorting, fused_moe, fused_moe_fake, and fused_moe_
- Fix moe_sorting_dispatch_policy type annotation: bool -> int in fused_moe_fake and fused_moe_
- Remove moe_buf pass-through test from test_moe_sorting.py
- Preserve legitimate fp4_utils usage (mxfp4_to_f32, e8m0_to_f32) with local imports in stage1/stage2 fallback functions
Motivation
W4A6 models store MoE weights in MXFP4 (`fp4x2` dtype) but use MXFP6 for activation quantization. Because the Quark quantization scheme handles activation quantization separately, it passes `QuantType.No` to the AITER CK-based fused MoE kernel. However, the CK kernel only supports A4W4 (both activations and weights in fp4), and there is no codepath for bf16 activations with fp4x2 weights.

Technical Details
After the existing `quant_remap` lookup in `ck_moe_2stages` (and the equivalent in `ck_moe_2stages_dp`), detect the unsupported combination of `QuantType.No` with `fp4x2` weights and remap it to `QuantType.per_1x32`. This ensures activations are dynamically quantized to fp4x2 at runtime, matching what the CK kernel expects.
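A minimal sketch of where the change sits, assuming it lands directly after the existing `quant_remap` lookup: the lookup and the `if` condition mirror the diff excerpt above, the assignment in the `if` body is inferred from this description, and the import path is an assumption (`quant_remap` and `w1` are locals of `ck_moe_2stages`, not shown here).

```python
from aiter import QuantType, dtypes  # assumed import path

# Inside ck_moe_2stages (and ck_moe_2stages_dp), after the existing remap table:
quant_type = quant_remap.get(quant_type, quant_type)

# W4A6: Quark quantizes activations itself and passes QuantType.No, but the
# CK MoE kernel only supports A4W4, so remap to per_1x32 and let the bf16
# activations be dynamically quantized to fp4x2 at runtime.
if quant_type == QuantType.No and w1.dtype == dtypes.fp4x2:
    quant_type = QuantType.per_1x32
```

This keeps the caller-facing behaviour unchanged: a caller that passes `QuantType.No` together with fp4x2 weights is transparently routed onto the A4W4 path.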
Test Plan

Verified with `ziliangpeng/DeepSeek-V3-Quark-MXFP4-v4-w4a6` on MI355X (gfx950) / ROCm 7.2 / vLLM 0.17.1. MoE layers execute successfully in both eager and compile modes. Results of the experiment can be found here: https://github.com/AMD-AGI/di-recipes/blob/main/tools/prompt_replay/baselines/DeepSeek-V3-0324/MI355/serve_dsr1_0528_mxfp4-v4-w4a6_20260316_smci355-ccs-aus-m15-13.cs-aus.dcgpu.log

Test Result
Submission Checklist