An imperative language for AI workload orchestration
Updated Mar 11, 2026 · Python
A policy layer above transport for KV movement, workload-aware admissibility, and explainable routing in disaggregated inference.
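The description above mentions workload-aware admissibility and explainable routing but does not show an interface; a minimal sketch of what an explainable admission check might look like (all names, the block size, and the queue limit are hypothetical, not taken from the repo):

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt_tokens: int
    max_new_tokens: int

@dataclass
class NodeState:
    free_kv_blocks: int
    queue_depth: int

BLOCK_TOKENS = 16  # hypothetical KV-cache block size

def admit(req: Request, node: NodeState, max_queue: int = 8):
    """Return (admitted, reason) so every routing decision is explainable."""
    # Ceiling-divide total tokens by the block size to get KV blocks needed.
    needed = -(-(req.prompt_tokens + req.max_new_tokens) // BLOCK_TOKENS)
    if needed > node.free_kv_blocks:
        return False, f"needs {needed} KV blocks, only {node.free_kv_blocks} free"
    if node.queue_depth >= max_queue:
        return False, f"queue depth {node.queue_depth} at limit {max_queue}"
    return True, f"admitted: {needed} KV blocks reserved"
```

Returning a reason string alongside the boolean is one simple way to make each admit/reject decision auditable after the fact.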
A fault-tolerant LLM routing system that decouples inference from AWS Bedrock by routing prefill and decode tasks through SQS, with a graceful-drain sidecar ensuring zero-downtime scaling.
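The graceful-drain idea above can be illustrated without AWS: the sketch below swaps SQS for the standard-library `queue.Queue` so it stays runnable, and all names are hypothetical stand-ins for the repo's actual components. A drained worker finishes everything already queued but accepts nothing new.

```python
import queue
import threading

prefill_q: "queue.Queue[dict]" = queue.Queue()  # stands in for the SQS prefill queue
decode_q: "queue.Queue[dict]" = queue.Queue()   # stands in for the SQS decode queue

def prefill_worker(stop: threading.Event) -> None:
    """Pull prefill jobs and hand results to the decode queue.

    Graceful-drain semantics: once `stop` is set, flush everything already
    queued, then exit -- no in-flight work is dropped.
    """
    while not (stop.is_set() and prefill_q.empty()):
        try:
            req = prefill_q.get(timeout=0.05)
        except queue.Empty:
            continue
        # A real system would run prefill and ship the KV cache here;
        # this sketch just tags the request and forwards it.
        decode_q.put({"id": req["id"], "kv": f"kv-{req['id']}"})
        prefill_q.task_done()

for i in range(3):
    prefill_q.put({"id": i})
stop = threading.Event()
stop.set()  # drain signalled before the worker starts: it must still flush the queue
worker = threading.Thread(target=prefill_worker, args=(stop,))
worker.start()
worker.join()
```

After the drain, all three queued requests have crossed to the decode queue even though the stop signal was already set, which is the property a drain sidecar exists to guarantee.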
Research fork of LLMServingSim 2.0 investigating Head-of-Line (HoL) blocking mitigation in disaggregated LLM serving via novel scheduling algorithms.