Add B300 config: kimi-k2.5-fp4-vllm #1056

Merged

functionstackx merged 3 commits into main from claude/add-kimi-k2.5-fp4-b300-vllm on Apr 17, 2026
Conversation

@functionstackx
Contributor

Summary

  • Add kimik2.5-fp4-b300-vllm benchmark config and the corresponding benchmarks/single_node/kimik2.5_fp4_b300.sh launch script
  • At the time of submission, the vLLM Kimi-K2.5 recipes page does not have a B300-specific recipe, so this reuses the existing Kimi-K2.5 FP4 (NVFP4) B200 vLLM recipe as-is until B300-specific tuning is available
  • Image: vllm/vllm-openai:v0.17.0 (same as B200), runner: b300, same TP/EP/concurrency search-space as B200
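For context, here is a minimal sketch of what the new launch script might look like, based only on the details above. The model ID, TP/EP sizes, ports, and concurrency values are placeholders rather than the actual tuned search space, and the compilation flag is the one quoted in the review comment further down.

```bash
#!/bin/bash
# Hypothetical sketch of benchmarks/single_node/kimik2.5_fp4_b300.sh; values marked
# "assumed" are illustrative placeholders, not the real configuration.
set -euo pipefail

nvidia-smi

MODEL="moonshotai/Kimi-K2.5"   # assumed FP4 (NVFP4) checkpoint identifier
TP=8                           # assumed tensor-parallel size
PORT=8000

# Serve inside the vllm/vllm-openai:v0.17.0 image (same as the B200 config).
# The fuse_allreduce_rms flag is quoted from the review comment below.
vllm serve "$MODEL" \
  --tensor-parallel-size "$TP" \
  --enable-expert-parallel \
  --port "$PORT" \
  --compilation_config.pass_config.fuse_allreduce_rms true &

# Wait for the server, then sweep the same concurrency levels as the B200 config.
until curl -sf "http://localhost:${PORT}/health"; do sleep 10; done
for CONCURRENCY in 32 64 128; do   # assumed concurrency search space
  vllm bench serve --model "$MODEL" --port "$PORT" --max-concurrency "$CONCURRENCY"
done
```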

Test plan

  • CI config validation passes
  • Run kimik2.5-fp4-b300-vllm single-node benchmark on a B300 node and confirm server starts, benchmark completes, and result file is produced

🤖 Generated with Claude Code

@github-actions
Contributor

Thanks for the contribution! For vLLM & SGLang, please ensure that your recipes are similar to the official vLLM recipes and/or the SGLang cookbook.

If they are not, please create a documentation PR first before we can merge your PR into the master branch. Let's ensure that the documentation is first class so that the entire ML community can benefit from your hard work. Thank you!

PR authors are responsible for ensuring that all GitHub Actions jobs fully pass after merging. Often, failures are just flakes and simply re-running the failed jobs will fix them; if re-running failed jobs is attempted, PR authors are still responsible for ensuring they pass. See GitHub's docs on re-running failed jobs: https://docs.github.com/en/actions/how-tos/manage-workflow-runs/re-run-workflows-and-jobs#re-running-failed-jobs-in-a-workflow

If additional help is needed, PR authors can reach out to core maintainers over Slack.

1 similar comment

nvidia-smi

export TORCH_CUDA_ARCH_LIST="10.0"
Contributor

🟡 The new kimik2.5_fp4_b300.sh script carries over export TORCH_CUDA_ARCH_LIST="10.0" from its B200 equivalent, but the B300 runner (launch_b300-nv.sh) never sets this variable, unlike the B200 Docker runner, which explicitly passes -e TORCH_CUDA_ARCH_LIST="10.0". Every other B300 benchmark script (dsr1, qwen3.5) leaves it unset and lets PyTorch auto-detect the correct architecture; the new vLLM script is the sole exception. If B300 uses a different SM variant than B200 (e.g., SM 10.0a), hardcoding 10.0 could prevent B300-native torch.compile kernel optimizations from taking effect. Remove this line to match the B300 convention.

Extended reasoning...

What the bug is: Line 27 of kimik2.5_fp4_b300.sh sets export TORCH_CUDA_ARCH_LIST="10.0", copied verbatim from kimik2.5_fp4_b200.sh. While this value is correct for B200 (Blackwell SM 10.0), it was never verified for B300 and contradicts the established B300 scripting convention.

The specific code path: The B300 single-node runner (runners/launch_b300-nv.sh) uses srun --export=ALL with Slurm/enroot and does not set TORCH_CUDA_ARCH_LIST anywhere. This contrasts with the B200 Docker runner (runners/launch_b200-dgxc.sh) and H100/H200 runners, which explicitly inject the architecture via -e TORCH_CUDA_ARCH_LIST="..." into the container environment. The benchmark scripts on B200/H100/H200 mirror that runner-level value redundantly; B300 scripts correctly reflect the runner convention of leaving it unset.
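For illustration, the runner-level contrast described here looks roughly like the following; only the quoted flags are taken from this comment, and the surrounding arguments are placeholders rather than the actual runner contents.

```bash
# runners/launch_b200-dgxc.sh style (Docker): the arch is injected into the
# container, so benchmark scripts that also export it are merely redundant.
docker run --gpus all \
  -e TORCH_CUDA_ARCH_LIST="10.0" \
  "$IMAGE" bash "$BENCHMARK_SCRIPT"

# runners/launch_b300-nv.sh style (Slurm/enroot): no arch override anywhere, so
# the variable is only set if the benchmark script itself exports it.
srun --export=ALL \
  --container-image="$IMAGE" \
  bash "$BENCHMARK_SCRIPT"
```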

Why existing code doesn't prevent it: A refutation argues this is a vLLM-specific convention (vLLM scripts set it; SGLang scripts don't). However, the true pattern is runner-level: B200/H100/H200 runners all set it; the B300 runner never does. All three pre-existing B300 single-node scripts (qwen3.5_fp8_b300.sh, qwen3.5_fp8_b300_mtp.sh, dsr1_fp4_b300.sh) — regardless of framework — leave TORCH_CUDA_ARCH_LIST unset, consistent with the B300 runner's behavior. The new vLLM B300 script is the outlier.

Impact: vLLM uses torch.compile via --compilation_config.pass_config.fuse_allreduce_rms true (present in this script). PyTorch compiles kernels for the arch list specified; if B300 has a distinct SM variant from exactly 10.0 (e.g., sm_100a), the compiled kernels may be suboptimal or miss B300-specific optimizations. Impact is uncertain since B200 and B300 may share SM 10.0, but the inconsistency with infrastructure is clear.
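A quick way to see the two behaviors on a node (a sketch; `_get_cuda_arch_flags` is a private PyTorch helper, used here only to show which architecture kernel/extension builds would target):

```bash
# What the GPU actually reports; PyTorch falls back to this when
# TORCH_CUDA_ARCH_LIST is unset, e.g. (10, 0) on Blackwell.
python3 -c "import torch; print(torch.cuda.get_device_capability(0))"

# With the script's export in place, builds target exactly sm_100,
# regardless of any B300-specific variant the device might report.
TORCH_CUDA_ARCH_LIST="10.0" python3 -c \
  "from torch.utils.cpp_extension import _get_cuda_arch_flags; print(_get_cuda_arch_flags())"
```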

Fix: Remove line 27 (export TORCH_CUDA_ARCH_LIST="10.0") to match the pattern of all other B300 scripts and the B300 runner itself.
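A one-off way to apply the fix (the path and the exact line are taken from this comment; run from the repo root):

```bash
# Drop the hardcoded arch so PyTorch auto-detects on B300, matching the other
# *_b300.sh scripts and the B300 runner.
sed -i '/^export TORCH_CUDA_ARCH_LIST="10.0"$/d' benchmarks/single_node/kimik2.5_fp4_b300.sh
```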

Step-by-step proof:

  1. runners/launch_b200-dgxc.sh passes -e TORCH_CUDA_ARCH_LIST="10.0" to Docker — B200 vLLM scripts also set it (double-coverage, consistent).
  2. runners/launch_b300-nv.sh uses srun --export=ALL with no TORCH_CUDA_ARCH_LIST assignment anywhere in the file.
  3. benchmarks/single_node/dsr1_fp4_b300.sh, qwen3.5_fp8_b300.sh, and qwen3.5_fp8_b300_mtp.sh all omit TORCH_CUDA_ARCH_LIST — consistent with the runner.
  4. benchmarks/single_node/kimik2.5_fp4_b300.sh line 27 sets it to 10.0 — inconsistent with the runner and every other B300 script.
  5. If the CI environment does not pre-set TORCH_CUDA_ARCH_LIST and B300 reports a slightly different SM, PyTorch auto-detection would choose the correct architecture but is blocked by the hardcoded value.
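Steps 1-4 can be double-checked with a repo-wide grep (directories taken from the paths above); before the fix, the only benchmarks/single_node/*_b300.sh hit should be the new Kimi script, and runners/launch_b300-nv.sh should not appear at all:

```bash
grep -rn 'TORCH_CUDA_ARCH_LIST' runners/ benchmarks/single_node/
```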

functionstackx and others added 3 commits April 17, 2026 09:15
At the time of submission, the vLLM Kimi-K2.5 recipes page
(https://docs.vllm.ai/projects/recipes/en/latest/moonshotai/Kimi-K2.5.html)
does not have a B300-specific recipe, so this config reuses the existing
Kimi-K2.5 FP4 (NVFP4) B200 vLLM recipe as-is until B300-specific tuning
is available.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Align with the standard B300 vLLM image used by other B300 vLLM configs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
functionstackx force-pushed the claude/add-kimi-k2.5-fp4-b300-vllm branch from 26b13d2 to fe13a8d on April 17, 2026 13:15
functionstackx merged commit a35e536 into main on Apr 17, 2026
17 checks passed
functionstackx deleted the claude/add-kimi-k2.5-fp4-b300-vllm branch on April 17, 2026 13:20
functionstackx restored the claude/add-kimi-k2.5-fp4-b300-vllm branch on April 17, 2026 23:09
cquil11 added a commit that referenced this pull request Apr 20, 2026
cquil11 added a commit that referenced this pull request Apr 20, 2026
cquil11 mentioned this pull request Apr 20, 2026