Note: This is only a preliminary demo for now, with many workarounds and possibly some bugs. We will keep improving it.
Two options are provided:
- AFD with ZMQ: Easy to deploy, but poor performance.
- AFD with StepMesh: Better performance.
Download this branch and install it. (Install SGLang)
Example:
Attn

```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3
# Address of the ffn scheduler; required for inter-node AFD (default: 127.0.0.1)
export AFD_SCHED_HOST=<ffn_ip>
# Two new options for AFD:
# --afd-perspective <attn/ffn>
# --afd-mirco-batch <1/2/3>: number of micro batches for overlap; only 1, 2, and 3 are supported for now
python -m sglang.launch_server --model-path <Qwen3-moe model> --disable-overlap-schedule --disable-cuda-graph --afd-perspective attn --afd-mirco-batch 3
```

FFN

```bash
export CUDA_VISIBLE_DEVICES=4,5,6,7
export AFD_SCHED_HOST=<ffn_ip>
python -m sglang.launch_server --model-path <Qwen3-moe model> --disable-overlap-schedule --disable-cuda-graph --port <port> --skip-server-warmup --watchdog-timeout 3600 --afd-perspective ffn --afd-mirco-batch 3
```

Note:
- Some unsupported features must be disabled explicitly, such as CUDA graph and overlap scheduling.
- Some ports are hardcoded in the AFD implementation, including 4000x, 5000x, and 65300. Please avoid using them for anything else.
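Once both servers are up, you can send a quick test request. A minimal smoke-test sketch, assuming the attn-side server is the one that accepts client requests and that it listens on SGLang's default port 30000 (adjust the host and port to your launch settings):

```python
# Hypothetical smoke test for the AFD deployment above.
# Assumptions: the attn-side server accepts client requests and listens on
# SGLang's default port 30000 on this host; adjust as needed.
import requests

resp = requests.post(
    "http://127.0.0.1:30000/generate",
    json={
        "text": "The capital of France is",
        "sampling_params": {"max_new_tokens": 16, "temperature": 0},
    },
    timeout=120,
)
print(resp.json())
```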
Set up StepMesh in addition to SGLang. (StepMesh README)
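Both sides rely on a handful of StepMesh-related environment variables, shown in the example below. As an optional pre-flight step, a small hypothetical check like the following (not part of SGLang or StepMesh) can catch missing or malformed values on each node before launching:

```python
# Hypothetical pre-flight check for the StepMesh environment variables used
# in the example below; run it on each node before launching the servers.
import os

required = [
    "AFD_SCHED_HOST",
    "MLC_INTERFACE",
    "DMLC_PS_ROOT_URI",
    "DMLC_NUM_SERVER",
    "DMLC_NUM_WORKER",
    "DMLC_GROUP_SIZE",
]
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {missing}")

# The DMLC_* counts must be positive integers.
for name in ("DMLC_NUM_SERVER", "DMLC_NUM_WORKER", "DMLC_GROUP_SIZE"):
    if int(os.environ[name]) <= 0:
        raise SystemExit(f"{name} must be a positive integer")

print("StepMesh environment looks consistent.")
```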
Example:
Attn

```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3
export AFD_SCHED_HOST=<ffn_ip>
# RDMA NIC name such as "bond0"; setting this enables StepMesh
export MLC_INTERFACE=<nic_name>
# StepMesh root IP; must be the same on the attn and ffn sides
export DMLC_PS_ROOT_URI=<root_ip>
# Number of ffn nodes
export DMLC_NUM_SERVER=1
# Number of attn nodes
export DMLC_NUM_WORKER=1
# Number of GPUs
export DMLC_GROUP_SIZE=1
python -m sglang.launch_server --model-path <Qwen3-moe model> --disable-overlap-schedule --disable-cuda-graph --afd-perspective attn --afd-mirco-batch 3
```

FFN
```bash
export CUDA_VISIBLE_DEVICES=4,5,6,7
export AFD_SCHED_HOST=<ffn_ip>
export MLC_INTERFACE=<nic_name>
export DMLC_PS_ROOT_URI=<root_ip>
export DMLC_NUM_SERVER=1
export DMLC_NUM_WORKER=1
export DMLC_GROUP_SIZE=1
python -m sglang.launch_server --model-path <Qwen3-moe model> --disable-overlap-schedule --disable-cuda-graph --port <port> --skip-server-warmup --watchdog-timeout 3600 --afd-perspective ffn --afd-mirco-batch 3
```

| Blog | Documentation | Join Slack | Join Bi-Weekly Development Meeting | Roadmap | Slides |
- [2025/06] 🔥 SGLang, the high-performance serving infrastructure powering trillions of tokens daily, has been awarded the third batch of the Open Source AI Grant by a16z (a16z blog).
- [2025/06] 🔥 Deploying DeepSeek on GB200 NVL72 with PD and Large Scale EP (Part I): 2.7x Higher Decoding Throughput (blog).
- [2025/05] 🔥 Deploying DeepSeek with PD Disaggregation and Large-scale Expert Parallelism on 96 H100 GPUs (blog).
- [2025/03] Supercharge DeepSeek-R1 Inference on AMD Instinct MI300X (AMD blog)
- [2025/03] SGLang Joins PyTorch Ecosystem: Efficient LLM Serving Engine (PyTorch blog)
- [2024/12] v0.4 Release: Zero-Overhead Batch Scheduler, Cache-Aware Load Balancer, Faster Structured Outputs (blog).
- [2024/07] v0.2 Release: Faster Llama3 Serving with SGLang Runtime (vs. TensorRT-LLM, vLLM) (blog).
More
- [2025/02] Unlock DeepSeek-R1 Inference Performance on AMD Instinct™ MI300X GPU (AMD blog)
- [2025/01] SGLang provides day one support for DeepSeek V3/R1 models on NVIDIA and AMD GPUs with DeepSeek-specific optimizations. (instructions, AMD blog, 10+ other companies)
- [2024/10] The First SGLang Online Meetup (slides).
- [2024/09] v0.3 Release: 7x Faster DeepSeek MLA, 1.5x Faster torch.compile, Multi-Image/Video LLaVA-OneVision (blog).
- [2024/02] SGLang enables 3x faster JSON decoding with compressed finite state machine (blog).
- [2024/01] SGLang provides up to 5x faster inference with RadixAttention (blog).
- [2024/01] SGLang powers the serving of the official LLaVA v1.6 release demo (usage).
SGLang is a fast serving framework for large language models and vision language models. It makes your interaction with models faster and more controllable by co-designing the backend runtime and frontend language. The core features include:
- Fast Backend Runtime: Provides efficient serving with RadixAttention for prefix caching, zero-overhead CPU scheduler, prefill-decode disaggregation, speculative decoding, continuous batching, paged attention, tensor parallelism, pipeline parallelism, expert parallelism, structured outputs, chunked prefill, quantization (FP8/INT4/AWQ/GPTQ), and multi-lora batching.
- Flexible Frontend Language: Offers an intuitive interface for programming LLM applications, including chained generation calls, advanced prompting, control flow, multi-modal inputs, parallelism, and external interactions (a short sketch follows this list).
- Extensive Model Support: Supports a wide range of generative models (Llama, Gemma, Mistral, Qwen, DeepSeek, LLaVA, etc.), embedding models (e5-mistral, gte, mcdse) and reward models (Skywork), with easy extensibility for integrating new models.
- Active Community: SGLang is open-source and backed by an active community with industry adoption.
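To illustrate the frontend language, here is a minimal sketch of a multi-turn generation program; it assumes the `sglang` Python package is installed and an SGLang server is already running locally on the default port 30000:

```python
# A minimal frontend-language sketch, assuming a local SGLang server on
# the default port 30000.
import sglang as sgl

@sgl.function
def multi_turn_question(s, question_1, question_2):
    # Chained generation calls: the second answer can depend on the first.
    s += sgl.user(question_1)
    s += sgl.assistant(sgl.gen("answer_1", max_tokens=256))
    s += sgl.user(question_2)
    s += sgl.assistant(sgl.gen("answer_2", max_tokens=256))

sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

state = multi_turn_question.run(
    question_1="What is the capital of France?",
    question_2="Name one famous landmark there.",
)
print(state["answer_1"])
print(state["answer_2"])
```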
Learn more in the release blogs: v0.2 blog, v0.3 blog, v0.4 blog, Large-scale expert parallelism.
SGLang has been deployed at large scale, generating trillions of tokens in production each day. It is trusted and adopted by a wide range of leading enterprises and institutions, including xAI, AMD, NVIDIA, Intel, LinkedIn, Cursor, Oracle Cloud, Google Cloud, Microsoft Azure, AWS, Atlas Cloud, Voltage Park, Nebius, DataCrunch, Novita, InnoMatrix, MIT, UCLA, the University of Washington, Stanford, UC Berkeley, Tsinghua University, Jam & Tea Studios, Baseten, and other major technology organizations across North America and Asia. As an open-source LLM inference engine, SGLang has become the de facto industry standard, with deployments running on over 1,000,000 GPUs worldwide.
For enterprises interested in adopting or deploying SGLang at scale, including technical consulting, sponsorship opportunities, or partnership inquiries, please contact us at contact@sglang.ai.
We learned the design and reused code from the following projects: Guidance, vLLM, LightLLM, FlashInfer, Outlines, and LMQL.

