
fix(vllm): make vllm-omni respect the selected torch backend #4

Merged
rushilbhat merged 1 commit into main from fix/vllm-source-build-torch-mismatch on Apr 28, 2026
Conversation

@rushilbhat

Summary

Fix a Docker build failure in the vLLM source-install path by making the vllm-omni install respect the selected Torch backend.

Problem

When install_vllm.sh is used to install vLLM from source in a CUDA environment, it installs vllm-omni before building vLLM.

The issue was that the vllm-omni install did not pass the selected Torch backend. It could therefore pull in a Torch build for a different CUDA version than the one the image being built expects, which caused the Docker build to fail with a CUDA/Torch version mismatch.
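For illustration, the problematic shape is roughly the following; the variable name and exact commands are assumptions, not quotes from install_vllm.sh, and only the --torch-backend flag comes from this PR:

```bash
# Hypothetical sketch of the pre-fix flow (names are assumptions).
TORCH_BACKEND="cu128"   # backend selected for the vLLM build

# vllm-omni is installed first, without the backend pin, so the resolver may
# pick a Torch wheel built against a different CUDA version.
uv pip install vllm-omni

# Only the main vLLM install respected the selected backend.
uv pip install vllm --torch-backend="${TORCH_BACKEND}"
```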

Solution

Update container/deps/vllm/install_vllm.sh so that CUDA installs of vllm-omni use the same --torch-backend selected for the rest of the vLLM install flow. This applies to both the PyPI install path and the source fallback path.
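A minimal sketch of the fixed behaviour, assuming a uv-based install and a shell variable named TORCH_BACKEND (both assumptions; only the --torch-backend flag and the two install paths are taken from this PR):

```bash
# Hypothetical sketch of the fix (variable names and command shapes are assumptions).
TORCH_BACKEND="cu128"   # same backend selected for the rest of the vLLM install

# PyPI path: vllm-omni now reuses the selected backend.
if uv pip install vllm-omni --torch-backend="${TORCH_BACKEND}"; then
  echo "Installed vllm-omni from PyPI"
else
  # Source fallback path: the same backend pin is applied when installing
  # from a local checkout of vllm-omni.
  uv pip install ./vllm-omni --torch-backend="${TORCH_BACKEND}"
fi
```

With both paths pinned, the Torch wheel resolved for vllm-omni matches the CUDA version of the image being built, which avoids the mismatch that broke the Docker build.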

@rushilbhat rushilbhat merged commit a72b788 into main Apr 28, 2026
6 of 10 checks passed
