Serve the home! An inference stack for your NVIDIA DGX Spark, aka the Grace Blackwell AI supercomputer on your desk. Mostly vLLM-based for now.
Updated Mar 14, 2026 · JavaScript
🚀 Serve large language models efficiently at home with this Docker-based inference stack on your NVIDIA DGX Spark, featuring intelligent resource management.
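As a rough illustration of what a Docker-based vLLM serving setup can look like, here is a minimal `docker-compose.yml` sketch. The service name, image tag, model, and memory settings below are illustrative assumptions, not taken from this repository; on the DGX Spark's Grace (ARM64) CPU, the chosen image must also provide an arm64 build.

```yaml
# Hypothetical compose sketch -- model and settings are illustrative,
# not this repository's actual configuration.
services:
  vllm:
    image: vllm/vllm-openai:latest    # vLLM's OpenAI-compatible server image
    command: >
      --model Qwen/Qwen2.5-7B-Instruct
      --gpu-memory-utilization 0.90
    ports:
      - "8000:8000"                   # OpenAI-compatible API endpoint
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia          # requires the NVIDIA Container Toolkit
              count: all
              capabilities: [gpu]
```

Once the container is up, any OpenAI-compatible client can point at `http://localhost:8000/v1` to run completions against the served model.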