23 changes: 23 additions & 0 deletions .github/configs/nvidia-master.yaml
@@ -2045,6 +2045,29 @@ kimik2.5-fp4-b200-vllm:
        - { tp: 8, ep: 1, conc-start: 4, conc-end: 4 }
        - { tp: 4, ep: 1, conc-start: 4, conc-end: 64 }

# NOTE: At the time of submission, https://docs.vllm.ai/projects/recipes/en/latest/moonshotai/Kimi-K2.5.html
# does not have a B300-specific recipe, so this config reuses the existing
# Kimi-K2.5 FP4 B200 vLLM recipe as-is until B300-specific tuning is available.
kimik2.5-fp4-b300-vllm:
  image: vllm/vllm-openai:v0.19.0-cu130
  model: nvidia/Kimi-K2.5-NVFP4
  model-prefix: kimik2.5
  runner: b300
  precision: fp4
  framework: vllm
  multinode: false
  seq-len-configs:
    - isl: 1024
      osl: 1024
      search-space:
        - { tp: 8, ep: 1, conc-start: 4, conc-end: 4 }
        - { tp: 4, ep: 1, conc-start: 4, conc-end: 64 }
    - isl: 8192
      osl: 1024
      search-space:
        - { tp: 8, ep: 1, conc-start: 4, conc-end: 4 }
        - { tp: 4, ep: 1, conc-start: 4, conc-end: 64 }

dsr1-fp8-b200-sglang-mtp:
  image: lmsysorg/sglang:v0.5.9-cu130
  model: deepseek-ai/DeepSeek-R1-0528
80 changes: 80 additions & 0 deletions benchmarks/single_node/kimik2.5_fp4_b300.sh
@@ -0,0 +1,80 @@
#!/usr/bin/env bash

# NOTE: At the time of submission, https://docs.vllm.ai/projects/recipes/en/latest/moonshotai/Kimi-K2.5.html
# does not have a B300-specific recipe, so this script reuses the existing
# Kimi-K2.5 FP4 B200 vLLM recipe as-is until B300-specific tuning is available.

source "$(dirname "$0")/../benchmark_lib.sh"

check_env_vars \
  MODEL \
  TP \
  CONC \
  ISL \
  OSL \
  MAX_MODEL_LEN \
  RANDOM_RANGE_RATIO \
  RESULT_FILENAME

if [[ -n "$SLURM_JOB_ID" ]]; then
echo "JOB $SLURM_JOB_ID running on $SLURMD_NODENAME"
fi

hf download "$MODEL"

nvidia-smi

export TORCH_CUDA_ARCH_LIST="10.0"
Contributor
🟡 The new kimik2.5_fp4_b300.sh script carries over export TORCH_CUDA_ARCH_LIST="10.0" from the B200 equivalent, but the B300 runner (launch_b300-nv.sh) never sets this variable — unlike the B200 Docker runner which explicitly passes -e TORCH_CUDA_ARCH_LIST="10.0". Every other B300 benchmark script (dsr1, qwen3.5) leaves it unset, letting PyTorch auto-detect the correct architecture; the new vLLM script is the sole exception. If B300 uses a different SM variant than B200 (e.g., SM 10.0a), hardcoding 10.0 could prevent B300-native torch.compile kernel optimizations from taking effect — remove this line to match B300 convention.

Extended reasoning...

What the bug is: Line 27 of kimik2.5_fp4_b300.sh sets export TORCH_CUDA_ARCH_LIST="10.0", copied verbatim from kimik2.5_fp4_b200.sh. While this value is correct for B200 (Blackwell SM 10.0), it was never verified for B300 and contradicts the established B300 scripting convention.

The specific code path: The B300 single-node runner (runners/launch_b300-nv.sh) uses srun --export=ALL with Slurm/enroot and does not set TORCH_CUDA_ARCH_LIST anywhere. This contrasts with the B200 Docker runner (runners/launch_b200-dgxc.sh) and H100/H200 runners, which explicitly inject the architecture via -e TORCH_CUDA_ARCH_LIST="..." into the container environment. The benchmark scripts on B200/H100/H200 mirror that runner-level value redundantly; B300 scripts correctly reflect the runner convention of leaving it unset.

Why existing code doesn't prevent it: A refutation argues this is a vLLM-specific convention (vLLM scripts set it; SGLang scripts don't). However, the true pattern is runner-level: B200/H100/H200 runners all set it; the B300 runner never does. All three pre-existing B300 single-node scripts (qwen3.5_fp8_b300.sh, qwen3.5_fp8_b300_mtp.sh, dsr1_fp4_b300.sh) — regardless of framework — leave TORCH_CUDA_ARCH_LIST unset, consistent with the B300 runner's behavior. The new vLLM B300 script is the outlier.

Impact: vLLM uses torch.compile via --compilation_config.pass_config.fuse_allreduce_rms true (present in this script). PyTorch compiles kernels for the arch list specified; if B300 has a distinct SM variant from exactly 10.0 (e.g., sm_100a), the compiled kernels may be suboptimal or miss B300-specific optimizations. Impact is uncertain since B200 and B300 may share SM 10.0, but the inconsistency with infrastructure is clear.

Fix: Remove line 27 (export TORCH_CUDA_ARCH_LIST="10.0") to match the pattern of all other B300 scripts and the B300 runner itself.

Step-by-step proof:

  1. runners/launch_b200-dgxc.sh passes -e TORCH_CUDA_ARCH_LIST="10.0" to Docker — B200 vLLM scripts also set it (double-coverage, consistent).
  2. runners/launch_b300-nv.sh uses srun --export=ALL with no TORCH_CUDA_ARCH_LIST assignment anywhere in the file.
  3. benchmarks/single_node/dsr1_fp4_b300.sh, qwen3.5_fp8_b300.sh, and qwen3.5_fp8_b300_mtp.sh all omit TORCH_CUDA_ARCH_LIST — consistent with the runner.
  4. benchmarks/single_node/kimik2.5_fp4_b300.sh line 27 sets it to 10.0 — inconsistent with the runner and every other B300 script.
  5. If the CI environment does not pre-set TORCH_CUDA_ARCH_LIST and B300 reports a slightly different SM, PyTorch auto-detection would choose the correct architecture but is blocked by the hardcoded value.
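One way to settle the question (not part of this PR) would be to query what the B300 actually reports from inside the same container image. A minimal sketch, assuming the driver's nvidia-smi supports the compute_cap query field and the image's bundled PyTorch is importable:

# Hypothetical verification sketch: compare the hardcoded "10.0" against what
# the B300 device and the bundled torch build actually report before deciding
# whether to keep or drop the export.
nvidia-smi --query-gpu=compute_cap --format=csv,noheader | head -n 1   # e.g. "10.0"

python3 - <<'EOF'
import torch
print(torch.cuda.get_device_capability(0))  # capability tuple, e.g. (10, 0)
print(torch.cuda.get_arch_list())           # arch variants this torch build was compiled for
EOF

If the device reports exactly 10.0, the hardcoded value is redundant but harmless; if it reports anything else, the export forces builds for the wrong target, and removing it (as suggested above) lets PyTorch auto-detect the right one.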

export PYTHONNOUSERSITE=1

SERVER_LOG=/workspace/server.log
PORT=${PORT:-8888}

if [ "${EVAL_ONLY}" = "true" ]; then
setup_eval_context
MAX_MODEL_LEN="$EVAL_MAX_MODEL_LEN"
fi
# Start GPU monitoring (power, temperature, clocks every second)
start_gpu_monitor

set -x
vllm serve $MODEL --host 0.0.0.0 --port $PORT \
  --tensor-parallel-size=$TP \
  --gpu-memory-utilization 0.90 \
  --max-model-len $MAX_MODEL_LEN \
  --max-num-seqs $CONC \
  --reasoning-parser kimi_k2 \
  --tool-call-parser kimi_k2 \
  --compilation_config.pass_config.fuse_allreduce_rms true \
  --no-enable-prefix-caching \
  --trust-remote-code > $SERVER_LOG 2>&1 &

SERVER_PID=$!

# Wait for server to be ready
wait_for_server_ready --port "$PORT" --server-log "$SERVER_LOG" --server-pid "$SERVER_PID"

pip install -q datasets pandas

run_benchmark_serving \
  --model "$MODEL" \
  --port "$PORT" \
  --backend vllm \
  --input-len "$ISL" \
  --output-len "$OSL" \
  --random-range-ratio "$RANDOM_RANGE_RATIO" \
  --num-prompts $(( CONC * 10 )) \
  --max-concurrency "$CONC" \
  --result-filename "$RESULT_FILENAME" \
  --result-dir /workspace/ \
  --trust-remote-code

# After throughput, run evaluation only if RUN_EVAL is true
if [ "${RUN_EVAL}" = "true" ]; then
run_eval --framework lm-eval --port "$PORT"
append_lm_eval_summary
fi

# Stop GPU monitoring
stop_gpu_monitor
set +x
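For reference, a hypothetical local invocation of the script with placeholder values. TP, CONC, ISL and OSL match one point of the config's search space; MAX_MODEL_LEN, RANDOM_RANGE_RATIO and RESULT_FILENAME are illustrative only (in CI they are injected by the harness); and the script is expected to run inside the vllm/vllm-openai:v0.19.0-cu130 container with benchmark_lib.sh available one directory up from the script.

# Hypothetical invocation sketch, not part of the PR. MAX_MODEL_LEN,
# RANDOM_RANGE_RATIO and RESULT_FILENAME below are placeholders.
MODEL=nvidia/Kimi-K2.5-NVFP4 \
TP=8 CONC=4 ISL=1024 OSL=1024 \
MAX_MODEL_LEN=4096 \
RANDOM_RANGE_RATIO=0.8 \
RESULT_FILENAME=kimik2.5_fp4_b300_tp8_conc4.json \
bash benchmarks/single_node/kimik2.5_fp4_b300.sh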
8 changes: 8 additions & 0 deletions perf-changelog.yaml
@@ -1460,3 +1460,11 @@
- "Image: vllm/vllm-openai:v0.19.0-cu130"
- "At the time of submission, https://docs.vllm.ai/projects/recipes/en/latest/MiniMax/MiniMax-M2.html does not have a B300-specific recipe, so this reuses the existing MiniMax-M2.5 FP8 B200 vLLM recipe as-is"
pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1054

- config-keys:
- kimik2.5-fp4-b300-vllm
description:
- "Add Kimi-K2.5 FP4 (NVFP4) B300 vLLM benchmark"
- "Image: vllm/vllm-openai:v0.19.0-cu130"
- "At the time of submission, https://docs.vllm.ai/projects/recipes/en/latest/moonshotai/Kimi-K2.5.html does not have a B300-specific recipe, so this reuses the existing Kimi-K2.5 FP4 B200 vLLM recipe as-is"
pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1056