Merged
6 changes: 3 additions & 3 deletions .github/configs/amd-master.yaml
@@ -22,7 +22,7 @@ dsr1-fp4-mi355x-sglang:
     - { tp: 8, conc-start: 4, conc-end: 64 }
 
 dsr1-fp8-mi300x-sglang:
-  image: rocm/7.0:rocm7.0_ubuntu_22.04_sgl-dev-v0.5.2-rocm7.0-mi30x-20250915
+  image: lmsysorg/sglang:v0.5.5.post3-rocm700-mi30x
   model: deepseek-ai/DeepSeek-R1-0528
   model-prefix: dsr1
   runner: mi300x
@@ -44,7 +44,7 @@ dsr1-fp8-mi300x-sglang:
     - { tp: 8, conc-start: 4, conc-end: 64 }
 
 dsr1-fp8-mi325x-sglang:
-  image: rocm/7.0:rocm7.0_ubuntu_22.04_sgl-dev-v0.5.2-rocm7.0-mi30x-20250915
+  image: lmsysorg/sglang:v0.5.5.post3-rocm700-mi30x
   model: deepseek-ai/DeepSeek-R1-0528
   model-prefix: dsr1
   runner: mi325x
@@ -66,7 +66,7 @@ dsr1-fp8-mi325x-sglang:
     - { tp: 8, conc-start: 4, conc-end: 64 }
 
 dsr1-fp8-mi355x-sglang:
-  image: rocm/7.0:rocm7.0_ubuntu_22.04_sgl-dev-v0.5.2-rocm7.0-mi35x-20250915
+  image: lmsysorg/sglang:v0.5.5.post3-rocm700-mi35x
   model: deepseek-ai/DeepSeek-R1-0528
   model-prefix: dsr1
   runner: mi355x
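All three updated entries point at upstream images whose tags appear to follow a `lmsysorg/sglang:<version>-rocm<rocm-version>-<gpu-family>` pattern. As a quick sketch (the tag layout is inferred from the values in this config, not documented by SGLang), the repository and tag can be split with plain parameter expansion:

```shell
# Split an upstream SGLang image reference into repository and tag.
# Tag structure inferred from the config values above (an assumption).
IMAGE="lmsysorg/sglang:v0.5.5.post3-rocm700-mi30x"
REPO=${IMAGE%%:*}   # strip the longest suffix starting at ":"
TAG=${IMAGE#*:}     # strip the shortest prefix ending at ":"
echo "$REPO"        # prints lmsysorg/sglang
echo "$TAG"         # prints v0.5.5.post3-rocm700-mi30x
```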
4 changes: 4 additions & 0 deletions benchmarks/dsr1_fp8_mi355x_docker.sh
@@ -15,10 +15,13 @@
 # https://rocm.docs.amd.com/en/docs-7.0-docker/benchmark-docker/inference-sglang-deepseek-r1-fp8.html
 
 export SGLANG_USE_AITER=1
+export RCCL_MSCCL_ENABLE=0
+export ROCM_QUICK_REDUCE_QUANTIZATION=INT4
 
 SERVER_LOG=$(mktemp /tmp/server-XXXXXX.log)
 
 python3 -m sglang.launch_server \
+    --attention-backend aiter \
     --model-path $MODEL \
     --host=0.0.0.0 \
     --port $PORT \
@@ -28,6 +31,7 @@ python3 -m sglang.launch_server \
     --mem-fraction-static 0.8 --disable-radix-cache \
     --num-continuous-decode-steps 4 \
     --max-prefill-tokens 196608 \
+    --enable-torch-compile \
MI300x and MI325x scripts missing flags for new image

The PR updates the SGLang image from v0.5.2 to v0.5.5.post3 for all three platforms (mi300x, mi325x, mi355x), but only the mi355x benchmark scripts were updated with the new flags (--attention-backend aiter, --enable-torch-compile, RCCL_MSCCL_ENABLE=0, ROCM_QUICK_REDUCE_QUANTIZATION=INT4). The existing dsr1_fp8_mi300x_*.sh and dsr1_fp8_mi325x_*.sh scripts lack these flags despite also receiving the new image version. This inconsistency may cause mi300x and mi325x benchmarks to fail or produce suboptimal results with the new upstream image.
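For concreteness, these are the four additions the comment refers to, copied verbatim from the mi355x scripts in this PR. Whether the mi300x/mi325x scripts need all four with the new image is the comment's open question, not a settled fact:

```shell
# Environment additions present in the mi355x scripts (this PR),
# absent from the mi300x/mi325x scripts:
export RCCL_MSCCL_ENABLE=0
export ROCM_QUICK_REDUCE_QUANTIZATION=INT4

# Launch-flag additions, likewise mi355x-only in this PR:
#   --attention-backend aiter
#   --enable-torch-compile
echo "$RCCL_MSCCL_ENABLE $ROCM_QUICK_REDUCE_QUANTIZATION"   # prints 0 INT4
```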


--cuda-graph-max-bs 128 > $SERVER_LOG 2>&1 &

SERVER_PID=$!
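The surrounding script uses a common shell pattern: redirect the server's output to a temp log, run it in the background, and keep its PID for later health checks and teardown. A minimal stand-alone sketch, with `sleep` standing in for `python3 -m sglang.launch_server ...`:

```shell
#!/bin/sh
# Minimal sketch of the background-server pattern used above.
SERVER_LOG=$(mktemp /tmp/server-XXXXXX.log)

sleep 30 > "$SERVER_LOG" 2>&1 &   # background the "server", capture output
SERVER_PID=$!                     # PID of the most recent background job

# ...later: poll the log or a health endpoint, then tear down.
kill "$SERVER_PID"
rm -f "$SERVER_LOG"
```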
6 changes: 5 additions & 1 deletion benchmarks/dsr1_fp8_mi355x_slurm.sh
@@ -13,11 +13,14 @@
 # PORT_OFFSET
 
 export SGLANG_USE_AITER=1
+export RCCL_MSCCL_ENABLE=0
+export ROCM_QUICK_REDUCE_QUANTIZATION=INT4
 
 SERVER_LOG=$(mktemp /tmp/server-XXXXXX.log)
 PORT=$(( 8888 + $PORT_OFFSET ))
 
 python3 -m sglang.launch_server \
+    --attention-backend aiter \
     --model-path $MODEL \
     --host=0.0.0.0 \
     --port $PORT \
@@ -27,7 +30,8 @@ python3 -m sglang.launch_server \
     --mem-fraction-static 0.8 --disable-radix-cache \
     --num-continuous-decode-steps 4 \
     --max-prefill-tokens 196608 \
-    --cuda-graph-max-bs 128 > $SERVER_LOG 2>&1 &
+    --cuda-graph-max-bs 128 \
+    --enable-torch-compile > $SERVER_LOG 2>&1 &

SERVER_PID=$!
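Unlike the Docker variant, the SLURM script derives its port from a caller-supplied `PORT_OFFSET` so that multiple jobs sharing a node don't collide. The arithmetic is plain shell:

```shell
# Per-job port selection as in the SLURM script: base port 8888 plus an
# offset (e.g. one offset per node-local job; the exact source of
# PORT_OFFSET is up to the caller).
PORT_OFFSET=3
PORT=$(( 8888 + PORT_OFFSET ))
echo "$PORT"   # prints 8891
```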

8 changes: 8 additions & 0 deletions perf-changelog.yaml
@@ -133,3 +133,11 @@
     - "Update vLLM image from v0.11.2 to v0.13.0"
     - "Add VLLM_MXFP4_USE_MARLIN=1 to H100 and H200 benchmark scripts"
   pr-link: https://github.com/InferenceMAX/InferenceMAX/pull/327
+
+- config-keys:
+  - dsr1-fp8-mi300x-sglang
+  - dsr1-fp8-mi325x-sglang
+  - dsr1-fp8-mi355x-sglang
+  description:
+  - Use upstream SGLang images on mi300, mi325 and mi355 for dsr1fp8
+  pr-link: https://github.com/InferenceMAX/InferenceMAX/pull/332
Loading