
feat: add DeepSeek-V4-Flash FP8 B300 SGLang benchmark #1135

Closed

cquil11 wants to merge 1 commit into main from chore/dsv4-fp8-sgl-b300

Conversation

@cquil11 (Collaborator) commented Apr 24, 2026

Summary

  • Add dsv4-fp8-b300-sglang to .github/configs/nvidia-master.yaml (TP=4/EP=4/dp-attn=true, conc 4–1024 for 1k1k and 4–512 for 8k1k)
  • Add benchmarks/single_node/dsv4_fp8_b300.sh mirroring the H200 Flash Max-Throughput recipe (DP + DeepEP, no MTP) on the Blackwell image, with SGLANG_DSV4_FP4_EXPERTS=0 to swap the MoE experts to FP8 (launch sketched below)
  • Prefix caching disabled (--disable-radix-cache) and no speculative decoding
  • Append perf-changelog entry to trigger the sweep
  • Also carries the B300 runner fixes from [NVIDIA] chore: B300 single node DeepSeek v4 SGLang #1132 so this PR is independently mergeable

Model: sgl-project/DeepSeek-V4-Flash-FP8 — the Pro-FP8 checkpoint is still pending upload per the cookbook, so Flash is the only live FP8 variant.
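
For reviewers, a minimal sketch of what the server launch in benchmarks/single_node/dsv4_fp8_b300.sh looks like, assuming current SGLang flag names; the exact flags baked into the Blackwell image may differ, and the DeepEP enable flag is omitted because its name varies across SGLang versions:

```bash
# Sketch only: mirrors the H200 Flash Max-Throughput recipe (DP + DeepEP, no MTP).
# SGLANG_DSV4_FP4_EXPERTS=0 keeps the MoE experts in FP8 instead of FP4 on Blackwell.
docker run --rm --gpus all \
  -e SGLANG_DSV4_FP4_EXPERTS=0 \
  lmsysorg/sglang:deepseek-v4-blackwell \
  python -m sglang.launch_server \
    --model-path sgl-project/DeepSeek-V4-Flash-FP8 \
    --tp-size 4 --ep-size 4 \
    --enable-dp-attention \
    --disable-radix-cache \
    --trust-remote-code
```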

Test plan

  • generate_sweep_configs.py --runner-type b300 --model-prefix dsv4 --precision fp8 → 17 matrix entries, validation passes
  • pytest utils/matrix_logic/ -q → 149 passed
  • Sweep run on B300 completes end-to-end
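
The same commands in runnable form (the sweep-config script's location in the repo is assumed here):

```bash
# Regenerate the sweep matrix for the new config; expect 17 entries.
python generate_sweep_configs.py --runner-type b300 --model-prefix dsv4 --precision fp8

# Matrix-logic unit tests; expect 149 passed.
pytest utils/matrix_logic/ -q
```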

Mirrors #1132 (FP4 Pro) but for FP8 Flash:
- Config key dsv4-fp8-b300-sglang (TP=4/EP=4/dp-attn=true; conc 4-1024
  for 1k1k, 4-512 for 8k1k; sweep loop sketched below).
- Model sgl-project/DeepSeek-V4-Flash-FP8 (the Pro-FP8 checkpoint is
  still pending upload per the cookbook).
- Image lmsysorg/sglang:deepseek-v4-blackwell with
  SGLANG_DSV4_FP4_EXPERTS=0 to swap MoE experts to FP8 on Blackwell.
- Reuses the H200 Flash Max-Throughput recipe (DP + DeepEP, no MTP)
  from the cookbook; prefix caching disabled.
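
A hypothetical driver for the 1k1k concurrency grid, assuming a power-of-two sweep and SGLang's bench_serving client (the real grid is produced by generate_sweep_configs.py):

```bash
# Hypothetical sweep loop; the 8k1k case is identical but stops at 512 and uses
# --random-input-len 8192. Prompt count scaling with concurrency is an assumption.
for conc in 4 8 16 32 64 128 256 512 1024; do
  python -m sglang.bench_serving --backend sglang \
    --dataset-name random \
    --random-input-len 1024 --random-output-len 1024 \
    --num-prompts $((conc * 10)) \
    --max-concurrency "$conc"
done
```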

Also includes the B300 runner fixes from #1132 so the PR can be
merged independently: paths moved to /data/home/sa-shared/gharunners/
{squash,hf-hub-cache}, HF cache mount target changed to $HF_HUB_CACHE,
flock-guarded squash import (sketched below), and the /scratch/models
Qwen3.5 override removed.
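
The flock guard is the standard fd-based pattern; the import helper below is a placeholder for whatever the runner actually does with the squash image:

```bash
# Serialize the squash import across concurrent runner jobs: the first job takes
# an exclusive lock and imports; later jobs block on flock until it is released.
SQUASH_DIR=/data/home/sa-shared/gharunners/squash
(
  flock -x 9
  import_squash_image "$SQUASH_DIR"   # hypothetical import step
) 9>"$SQUASH_DIR/.import.lock"
```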

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@github-actions (Contributor) commented

Thanks for the contribution! For vLLM & SGLang, please ensure that your recipe is similar to the official vLLM recipes and/or the SGLang cookbook.

If it is not, please create a documentation PR there first before we can merge your PR into the master branch. Let's ensure that the documentation is first-class so that the entire ML community can benefit from your hard work. Thank you!

PR authors are responsible for ensuring that, after merging, all GitHub Actions jobs fully pass. Often, failures are just flakes, and simply re-running the failed jobs will fix them. If re-running failed jobs is attempted, PR authors are responsible for ensuring the re-run passes. See GitHub's docs on re-running failed jobs: https://docs.github.com/en/actions/how-tos/manage-workflow-runs/re-run-workflows-and-jobs#re-running-failed-jobs-in-a-workflow

As a rule of thumb, PR authors should request a review and get approval from the respective company's CODEOWNERS before requesting a review from core maintainers.

If additional help is needed, PR authors can reach out to core maintainers over Slack.

@cquil11 (Collaborator, Author) commented Apr 24, 2026

Closing: this PR picked Flash over Pro, but Pro was the correct target. The Pro-FP8 checkpoint is not publicly available yet (the cookbook still has a <TO_BE_UPLOADED_DeepSeek-V4-Pro-FP8> placeholder). Will reopen once the Pro-FP8 model is published.

