Connecting Nemoclaw to self-hosted vLLM model on same host #893
Labels: Getting Started, Provider: OpenAI, bug, enhancement: provider, priority: high
Description
Using the https URL
I've successfully run `nemoclaw onboard` against my self-hosted vLLM instance of `gpt-oss-120b`, using the `3) Other OpenAI-compatible endpoint` option. But when I try to interact with this model through the `openclaw tui` I get `run error: LLM request timed out.`, and within the logs (`nemoclaw ai-lab-test logs --follow`) I see errors each time I attempt inference. I know the `/v1/responses` endpoint is working because I've tested it with cURL from my local machine, but if I send the same test cURL request from inside the `ai-lab-test` sandbox, errors show up in the nemoclaw logs there as well. Is the self-signed cert the cause of all of these problems? How can I add this custom cert to the nemoclaw instance?
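If the self-signed certificate is the problem, one common way to confirm and work around it is sketched below. This is an assumption-heavy sketch, not anything from NemoClaw's docs: it assumes the sandbox is a Debian/Ubuntu-style container with `curl` and the `ca-certificates` package available, and the hostname, port, and certificate filename are all placeholders.

```shell
# Placeholder endpoint; substitute your actual vLLM host and port.
VLLM_URL=https://my-vllm-host:8000

# 1) Isolate the cert as the cause: -k tells curl to skip TLS verification.
#    If this succeeds from inside the sandbox, the network path is fine and
#    the failure is certificate trust.
curl -sk "$VLLM_URL/v1/models"

# 2) Trust the self-signed cert system-wide (Debian/Ubuntu trust-store
#    layout; the .pem filename is a placeholder for your cert).
sudo cp vllm-selfsigned.pem /usr/local/share/ca-certificates/vllm-selfsigned.crt
sudo update-ca-certificates

# 3) Node.js-based tools do not read the system trust store by default;
#    if nemoclaw runs on Node, it may also need:
export NODE_EXTRA_CA_CERTS=/usr/local/share/ca-certificates/vllm-selfsigned.crt
```

If step 1 also fails from inside the sandbox, the problem is network reachability rather than TLS trust, and adding the cert alone won't fix it.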
Using the localhost URL
I've also tried adding a new provider for the same service, but using the localhost URL of the vLLM container, which is running on the same host (``). I know this endpoint is also working.
Any help verifying that this setup is possible, or getting it configured, would be greatly appreciated!
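One likely explanation for the localhost attempt (a guess, not confirmed by NemoClaw's docs): if the `ai-lab-test` sandbox is a container, `localhost` inside it refers to the container's own loopback interface, not the host where vLLM is listening. A quick check, assuming a Docker-style sandbox (the port is a placeholder):

```shell
# From inside the sandbox: this hits the *container's* loopback and will
# fail if vLLM only listens on the host machine.
curl -s http://localhost:8000/v1/models

# Docker containers can usually reach the host via this special hostname
# (on Linux the container needs --add-host=host.docker.internal:host-gateway):
curl -s http://host.docker.internal:8000/v1/models
```

If the second command works where the first fails, pointing the provider URL at `host.docker.internal` (or the host's LAN IP) instead of `localhost` may resolve the localhost variant of the problem.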
Reproduction Steps
1. Run `nemoclaw onboard` for my model provider successfully.
2. Test the model by saying "hello" in the `openclaw tui`.
3. No inference occurs; I receive a `run error: LLM request timed out.` message.
Environment
Debug Output
Logs
Checklist