Add B300 config: kimi-k2.5-int4-vllm (vLLM 0.20.0 + TP=4/EP=1 sweep) #1071
Status: Closed · 3 commits · +111 −0
The PR adds the following benchmark script (an 80-line new file):

```bash
#!/usr/bin/env bash

# NOTE: At the time of submission, https://docs.vllm.ai/projects/recipes/en/latest/moonshotai/Kimi-K2.5.html
# does not have a B300-specific recipe, so this script reuses the existing
# Kimi-K2.5 INT4 B200 vLLM recipe as-is until B300-specific tuning is available.

source "$(dirname "$0")/../benchmark_lib.sh"

check_env_vars \
  MODEL \
  TP \
  CONC \
  ISL \
  OSL \
  MAX_MODEL_LEN \
  RANDOM_RANGE_RATIO \
  RESULT_FILENAME

if [[ -n "$SLURM_JOB_ID" ]]; then
  echo "JOB $SLURM_JOB_ID running on $SLURMD_NODENAME"
fi

hf download "$MODEL"

nvidia-smi

export PYTHONNOUSERSITE=1
export VLLM_USE_FLASHINFER_MOE_INT4=1

SERVER_LOG=/workspace/server.log
PORT=${PORT:-8888}

if [ "${EVAL_ONLY}" = "true" ]; then
  setup_eval_context
  MAX_MODEL_LEN="$EVAL_MAX_MODEL_LEN"
fi

# Start GPU monitoring (power, temperature, clocks every second)
start_gpu_monitor

set -x
vllm serve $MODEL --host 0.0.0.0 --port $PORT \
  --gpu-memory-utilization 0.95 \
  --tensor-parallel-size $TP \
  --max-model-len $MAX_MODEL_LEN \
  --max-num-seqs $CONC \
  --reasoning-parser kimi_k2 \
  --tool-call-parser kimi_k2 \
  --compilation_config.pass_config.fuse_allreduce_rms true \
  --trust-remote-code \
  --no-enable-prefix-caching > $SERVER_LOG 2>&1 &

SERVER_PID=$!

# Wait for server to be ready
wait_for_server_ready --port "$PORT" --server-log "$SERVER_LOG" --server-pid "$SERVER_PID"

pip install -q datasets pandas

run_benchmark_serving \
  --model "$MODEL" \
  --port "$PORT" \
  --backend vllm \
  --input-len "$ISL" \
  --output-len "$OSL" \
  --random-range-ratio "$RANDOM_RANGE_RATIO" \
  --num-prompts $(( CONC * 10 )) \
  --max-concurrency "$CONC" \
  --result-filename "$RESULT_FILENAME" \
  --result-dir /workspace/ \
  --trust-remote-code

# After throughput, run evaluation only if RUN_EVAL is true
if [ "${RUN_EVAL}" = "true" ]; then
  run_eval --framework lm-eval --port "$PORT"
  append_lm_eval_summary
fi

# Stop GPU monitoring
stop_gpu_monitor
set +x
```
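For context, a hypothetical invocation of this script: every value below is illustrative, and the model ID and script filename are assumptions, since neither appears in this excerpt of the PR. Only `TP=4` is taken from the sweep named in the PR title.

```bash
# All values are illustrative; model ID and script name are assumptions.
export MODEL=moonshotai/Kimi-K2.5        # hypothetical checkpoint ID; the INT4 weights name is not shown here
export TP=4                              # matches the TP=4 sweep in the PR title
export CONC=64                           # request concurrency for this sweep point
export ISL=1024                          # input sequence length
export OSL=1024                          # output sequence length
export MAX_MODEL_LEN=4096                # must cover ISL + OSL
export RANDOM_RANGE_RATIO=0.8
export RESULT_FILENAME=kimi-k2.5-int4-b300-tp4-c64.json

bash kimi-k2.5-int4-vllm.sh              # hypothetical script filename
```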
Review comment:

🟡 The new `kimik2.5-int4-b300-vllm` entry in `perf-changelog.yaml` has two documentation issues: (1) `pr-link` points to the reverted PR #1057 instead of the current PR #1071, and (2) the entry was inserted before the existing `gptoss-fp4-mi300x-vllm` entry rather than appended at the very end of the file, violating the AGENTS.md ordering convention. Both can be fixed by moving the new entry to the bottom of the file and updating the `pr-link` to https://github.com/SemiAnalysisAI/InferenceX/pull/1071.

Extended reasoning:
**Issue 1: Wrong PR link**

The new `kimik2.5-int4-b300-vllm` entry carries `pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1057`. PR #1057 was the original submission, and it was subsequently reverted in its entirety by PR #1070. The current PR, #1071, is explicitly described as a reopen of #1057 with otherwise identical contents. Since PR #1057 never landed in `main` (it was reverted), the canonical PR that actually introduces this config is #1071. The `pr-link` field serves as historical documentation linking a config to the PR that merged it, so pointing it at a reverted PR is incorrect: anyone following the link will reach a reverted (and confusing) PR rather than the one that introduced the change.

**Issue 2: Entry inserted in the wrong position**

AGENTS.md line 159 states: "The file is read in chronological order: oldest at the top, newest at the bottom. New entries MUST be appended to the END of the file — never insert in the middle or prepend." The diff shows that the new `kimik2.5-int4-b300-vllm` block was inserted between the `kimik2.5-fp4-b300-vllm` entry (PR #1056) and the `gptoss-fp4-mi300x-vllm` entry (PR #1053). After this PR merges, `gptoss-fp4-mi300x-vllm` remains the last entry in the file while the newer `kimik2.5-int4-b300-vllm` entry sits above it: a clear ordering inversion.

**Why existing code doesn't prevent this**

There is no automated enforcement of the append-only rule or of the `pr-link` value in the CI pipeline; the AGENTS.md instruction is a human convention. When the author copied the entry from PR #1057, both the `pr-link` value and the insertion position were carried over as-is, bypassing the update.
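A CI guard could enforce the append-only convention automatically. The sketch below is hypothetical: it assumes each changelog entry begins with a `- config-keys:` line (as the diff context suggests) and that the base branch is available as `origin/main`; neither is confirmed by the PR.

```bash
#!/usr/bin/env bash
# Hypothetical append-only check for perf-changelog.yaml: the base
# branch's sequence of entries must be a literal prefix of the new
# sequence, so any added entry can only appear at the end of the file.
set -euo pipefail

FILE=perf-changelog.yaml

old_entries=$(git show "origin/main:$FILE" | grep '^- config-keys:')
new_entries=$(grep '^- config-keys:' "$FILE")

# Quoted "$old_entries" is matched literally inside [[ ]]; the trailing
# unquoted * is a glob, so this is a plain prefix test.
if [[ "$new_entries" != "$old_entries"* ]]; then
  echo "ERROR: entries in $FILE must be appended at the end, not inserted mid-file" >&2
  exit 1
fi
```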
**Impact**

The impact is limited to documentation accuracy. The wrong `pr-link` means anyone using the changelog to trace config history will land on the reverted PR, potentially causing confusion about whether the config is actually live. The ordering violation makes the changelog harder to read chronologically and sets a precedent for future out-of-order insertions.

**How to fix**

Move the entire `kimik2.5-int4-b300-vllm` block to the very bottom of `perf-changelog.yaml` (after the `gptoss-fp4-mi300x-vllm` entry) and update its `pr-link` to https://github.com/SemiAnalysisAI/InferenceX/pull/1071.
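After that fix, the tail of the file would look roughly like the sketch below. Only the `config-keys` and `pr-link` fields come from the discussion above; whatever other fields the real entries carry are elided as comments.

```yaml
# ...older entries above...

- config-keys: [gptoss-fp4-mi300x-vllm]
  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1053
  # ...remaining fields elided...

# Newest entry, appended at the very end of the file:
- config-keys: [kimik2.5-int4-b300-vllm]
  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1071
  # ...remaining fields elided...
```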
**Step-by-step proof**

1. Before this PR, the last entry in `perf-changelog.yaml` is `gptoss-fp4-mi300x-vllm` (PR "[AMD][MI300X] Expand GPT-OSS FP4 TP=1 concurrency from 64 to 256" #1053).
2. The new block lands directly above `gptoss-fp4-mi300x-vllm`, confirmed by the diff context showing `- config-keys: [gptoss-fp4-mi300x-vllm]…` immediately after the added block.
3. After the merge, `perf-changelog.yaml` shows the order …`kimik2.5-fp4-b300-vllm` → `kimik2.5-int4-b300-vllm` → `gptoss-fp4-mi300x-vllm`, so PR #1053's entry is still last despite being older.
4. The `pr-link` value in the new entry is `pull/1057`, which maps to the reverted PR, not the current one (`pull/1071`).