mla: drop max_split_per_batch=16 cap to match vLLM #611
Open
peizhang56 wants to merge 1 commit into main
Conversation
ATOM was passing `max_split_per_batch=16` to aiter's `get_mla_metadata_v1`
at three call sites (one in `atom/plugin/attention.py`, two in
`atom/model_ops/attentions/aiter_mla.py`). aiter computes the work
split as `min(num_clusters, max_split_per_batch * bs)`, so the cap severely
under-utilizes the GPU at small batch / large KV. vLLM's
`AiterMLAMetadataBuilder._build_decode` (in
`vllm/v1/attention/backends/mla/rocm_aiter_mla.py`) omits the parameter
entirely, letting it default to -1 so the kernel uses all `num_clusters`
splits.
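For intuition, a minimal Python sketch of the split arithmetic. The helper name is made up; the `min(...)` formula and the -1 default come from the description above, and `num_clusters` ≈ CU count is an assumption consistent with the numbers quoted below:

```python
# Illustrative sketch only: how aiter's work split reacts to the cap.
# Assumption: num_clusters tracks the number of CUs the kernel can occupy.

def work_splits(num_clusters: int, bs: int, max_split_per_batch: int) -> int:
    if max_split_per_batch <= 0:  # -1, the default: no cap
        return num_clusters
    return min(num_clusters, max_split_per_batch * bs)

# The bs=2 / 256-CU case from the description:
print(work_splits(256, bs=2, max_split_per_batch=16))  # 32  -> 32/256 CUs busy
print(work_splits(256, bs=2, max_split_per_batch=-1))  # 256 -> full occupancy
```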
This change drops the cap so the FP8 MLA decode-stage1 kernel
(`mla_a8w8_qh16_qseqlen1_gqaratio16_ps`) gets full CU utilization.
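Schematically, the fix at each of the three call sites is just dropping the keyword; the `metadata` name is a placeholder and the other arguments, elided as `...`, stay unchanged:

```diff
-    metadata = get_mla_metadata_v1(..., max_split_per_batch=16)
+    metadata = get_mla_metadata_v1(...)  # defaults to -1: use all num_clusters splits
```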
The aiter persistent-MLA op test makes the win clear:

```bash
python op_tests/test_mla_persistent.py -d fp8 -kvd fp8 -n 16,1 \
    -k 512 -qr 64 -vh 512 -blk 1 -b 4 -c 100000 -ms 16
```

(`-ms 16` = the previous ATOM behavior; `-ms -1` = the new behavior =
vLLM behavior. Compare the reported decode kernel time across the two
runs.)
Buffer safety: `get_mla_metadata_info_v1` already pre-sizes the
reduce/partial buffers to `~2 * num_clusters` tiles, so any value of
`max_split_per_batch` (including -1) is within the pre-allocated
capacity. The other `max_split_per_batch` references in the repo
(`aiter_attention.py` already uses -1; the SGLang backend keeps its
own configurable knob) are intentionally left untouched.
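That capacity argument can be stated as a one-line check. A sketch only: the `2 * num_clusters` budget mirrors the pre-sizing described above, and the split computation is the same assumed helper logic as in the earlier sketch:

```python
# Sketch of the capacity argument: whatever the cap is, the split count
# never exceeds num_clusters, which stays below the ~2 * num_clusters
# budget that get_mla_metadata_info_v1 pre-allocates.

def splits_fit(num_clusters: int, bs: int, max_split_per_batch: int) -> bool:
    capacity = 2 * num_clusters
    splits = (num_clusters if max_split_per_batch <= 0
              else min(num_clusters, max_split_per_batch * bs))
    return splits <= capacity  # min(...) <= num_clusters < capacity

assert all(splits_fit(256, bs, ms)
           for bs in (1, 2, 64)
           for ms in (-1, 16, 1_000_000))
```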
Made-with: Cursor

Summary
ATOM was passing
max_split_per_batch=16to aiter'sget_mla_metadata_v1in three sites (one in
atom/plugin/attention.py, two inatom/model_ops/attentions/aiter_mla.py). aiter then computes the worksplit as
min(num_clusters, max_split_per_batch * bs), which severelyunder-utilizes the GPU at small batch / large KV.
vLLM's
AiterMLAMetadataBuilder._build_decode(invllm/v1/attention/backends/mla/rocm_aiter_mla.py) omits the parameterentirely, letting it default to
-1so the kernel uses allnum_clusterssplits. This PR aligns ATOM with vLLM.Why
- `mla_a8w8_qh16_qseqlen1_gqaratio16_ps` was running ~4x slower in ATOM than in vLLM on the same workload.
- `min(num_clusters, 16 * bs)` caps splits far below the available CUs at small batches: e.g. `bs=2` on a 256-CU GPU yields only 32 splits used out of 256 CUs.
- `get_mla_metadata_info_v1` pre-sizes the reduce/partial buffers for ~2 * num_clusters tiles, so passing -1 is within the pre-allocated capacity.
Test plan
The aiter persistent-MLA op test reproduces the difference directly.
`-ms` is the same `max_split_per_batch` knob exposed at the test layer:

```bash
python op_tests/test_mla_persistent.py -d fp8 -kvd fp8 -n 16,1 \
    -k 512 -qr 64 -vh 512 -blk 1 -b 4 -c 100000 -ms 16
```

vs.

```bash
python op_tests/test_mla_persistent.py -d fp8 -kvd fp8 -n 16,1 \
    -k 512 -qr 64 -vh 512 -blk 1 -b 4 -c 100000 -ms -1
```

The `-ms -1` run reports a substantially lower MLA decode kernel time.
Mirror runs through ATOM's serving stack on the same GPU show the same
speedup on the `mla_a8w8_qh16_qseqlen1_gqaratio16_ps` invocation.

- `-ms 16` vs. `-ms -1` compared on the op test
- `mla_a8w8_qh16_qseqlen1_gqaratio16_ps` kernel time matches vLLM

Notes
Other `max_split_per_batch` references are intentionally left untouched:
`atom/model_ops/attentions/aiter_attention.py` already passes -1, and
the SGLang backend in
`atom/plugin/sglang/attention_backend/sgl_attn_backend.py` keeps its own
configurable knob.
Historical context: the cap was introduced in #47 ("limit
max_split_per_batch to 16") with no recorded rationale, and propagated
to the plugin via #304. vLLM's MLA backend never set the cap.