Add B300 config: kimi-k2.5-fp4-vllm #1100
Merged (2 commits)
```bash
#!/usr/bin/env bash

# NOTE: At the time of submission, https://docs.vllm.ai/projects/recipes/en/latest/moonshotai/Kimi-K2.5.html
# does not have a B300-specific recipe, so this script reuses the existing
# Kimi-K2.5 FP4 B200 vLLM recipe as-is until B300-specific tuning is available.

source "$(dirname "$0")/../benchmark_lib.sh"

check_env_vars \
    MODEL \
    TP \
    CONC \
    ISL \
    OSL \
    MAX_MODEL_LEN \
    RANDOM_RANGE_RATIO \
    RESULT_FILENAME

if [[ -n "$SLURM_JOB_ID" ]]; then
    echo "JOB $SLURM_JOB_ID running on $SLURMD_NODENAME"
fi

hf download "$MODEL"

nvidia-smi

export TORCH_CUDA_ARCH_LIST="10.0"
export PYTHONNOUSERSITE=1

SERVER_LOG=/workspace/server.log
PORT=${PORT:-8888}

if [ "${EVAL_ONLY}" = "true" ]; then
    setup_eval_context
    MAX_MODEL_LEN="$EVAL_MAX_MODEL_LEN"
fi

# Start GPU monitoring (power, temperature, clocks every second)
start_gpu_monitor

set -x
vllm serve $MODEL --host 0.0.0.0 --port $PORT \
    --tensor-parallel-size=$TP \
    --gpu-memory-utilization 0.90 \
    --max-model-len $MAX_MODEL_LEN \
    --max-num-seqs $CONC \
    --reasoning-parser kimi_k2 \
    --tool-call-parser kimi_k2 \
    --compilation_config.pass_config.fuse_allreduce_rms true \
    --no-enable-prefix-caching \
    --trust-remote-code > $SERVER_LOG 2>&1 &

SERVER_PID=$!

# Wait for server to be ready
wait_for_server_ready --port "$PORT" --server-log "$SERVER_LOG" --server-pid "$SERVER_PID"

pip install -q datasets pandas

run_benchmark_serving \
    --model "$MODEL" \
    --port "$PORT" \
    --backend vllm \
    --input-len "$ISL" \
    --output-len "$OSL" \
    --random-range-ratio "$RANDOM_RANGE_RATIO" \
    --num-prompts $(( CONC * 10 )) \
    --max-concurrency "$CONC" \
    --result-filename "$RESULT_FILENAME" \
    --result-dir /workspace/ \
    --trust-remote-code

# After throughput, run evaluation only if RUN_EVAL is true
if [ "${RUN_EVAL}" = "true" ]; then
    run_eval --framework lm-eval --port "$PORT"
    append_lm_eval_summary
fi

# Stop GPU monitoring
stop_gpu_monitor
set +x
```
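For reference, a hypothetical invocation of this script: the script path and all values below are illustrative (not taken from the PR), and the variable names are the ones `check_env_vars` requires above.

```bash
# Hypothetical invocation; path and values are illustrative only.
MODEL=moonshotai/Kimi-K2.5 TP=8 CONC=64 ISL=1024 OSL=1024 \
MAX_MODEL_LEN=4096 RANDOM_RANGE_RATIO=0.8 \
RESULT_FILENAME=kimik2.5-fp4-b300-vllm.json RUN_EVAL=false \
bash kimik2.5-fp4-b300-vllm/run.sh
```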
🟡 The `perf-changelog.yaml` entry for `kimik2.5-fp4-b300-vllm` has `pr-link` pointing to PR #1056, which was explicitly reverted by PR #1099. Since this PR (#1100) is the one that will actually land the change, the `pr-link` should reference https://github.com/SemiAnalysisAI/InferenceX/pull/1100 instead.

Extended reasoning
What the bug is and how it manifests
The new `perf-changelog.yaml` entry for `kimik2.5-fp4-b300-vllm` (added in this PR) sets `pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1056`. PR #1056 was the original submission of this B300 config, but it was subsequently reverted by PR #1099 due to an error. The PR description for #1100 explicitly states: "This PR is a reopen of #1056, which was reverted in #1099 due to an error with the first PR." This means PR #1056 exists on GitHub in a reverted/superseded state, while PR #1100 is the change that will actually merge the configuration into `main`.

The specific code path that triggers it
In the diff, the new changelog block reads:
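The snippet itself was not captured here; the following is a reconstruction from the comment's description, and only the entry key and the `pr-link` value are confirmed — any other fields in the real entry are assumptions:

```yaml
# Reconstructed sketch of the new changelog entry; only the key and
# pr-link are confirmed by the review comment.
kimik2.5-fp4-b300-vllm:
  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1056  # the reverted PR
```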
This was carried over from the original PR #1056 submission without being updated to reflect the current PR number (#1100).
Why existing code does not prevent it
The changelog is a manually maintained YAML file with no automated validation that cross-checks `pr-link` values against the actual PR being merged. There is no CI check to enforce that the link matches the current PR. The error is a straightforward copy-paste oversight when reopening the PR.
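Such a guard is easy to imagine; as a hypothetical sketch (not part of this repository — the file layout and the `PR_NUMBER` variable are assumptions), CI could fail whenever a `pr-link` line added in the current PR references a different PR:

```bash
#!/usr/bin/env bash
# Hypothetical CI check, not from this repository: every pr-link line added
# to perf-changelog.yaml in this PR must reference the PR under review.
# Assumes PR_NUMBER is exported by the CI environment.
added=$(git diff origin/main -- perf-changelog.yaml | grep '^+.*pr-link' || true)
while IFS= read -r line; do
    [[ -z "$line" ]] && continue
    if [[ "$line" != *"/pull/${PR_NUMBER}"* ]]; then
        echo "ERROR: ${line#+} does not reference PR #${PR_NUMBER}" >&2
        exit 1
    fi
done <<< "$added"
```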
What the impact would be

When anyone later looks up the changelog entry for `kimik2.5-fp4-b300-vllm` to understand when and how it was introduced, they will follow the `pr-link` and land on PR #1056, which GitHub shows as reverted. This is misleading: it appears the feature was reverted rather than merged. The actual merging PR (#1100) would not be linked anywhere in the changelog, making historical tracking inaccurate.

How to fix it
Change the `pr-link` in the new changelog entry from `pull/1056` to `pull/1100`.
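A minimal sketch of the corrected entry (only the key and the link are taken from the comment; anything else in the real entry stays unchanged):

```yaml
kimik2.5-fp4-b300-vllm:
  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1100  # was .../pull/1056
```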
Step-by-step proof

1. This PR adds the `kimik2.5-fp4-b300-vllm` config.
2. The new changelog entry sets `pr-link` to `/pull/1056` (the reverted PR) instead of `/pull/1100` (this PR).
3. Comparable B300 entries link to the PR that actually landed them: `minimaxm2.5-fp8-b300-vllm` -> Add B300 config: minimaxm2.5-fp8-vllm #1054, `minimaxm2.5-fp4-b300-vllm` -> Add B300 config: minimaxm2.5-fp4-vllm #1055, and `dsr1-fp8-b300-sglang-mtp` -> Add B300 config: dsr1-fp8-sglang-mtp #1059.
4. The `pr-link` should therefore be updated to `https://github.com/SemiAnalysisAI/InferenceX/pull/1100`.