feat(opencode): add local server provider with auto model discovery #19959

Open

hmblair wants to merge 2 commits into anomalyco:dev from
Conversation
added 2 commits
March 29, 2026 16:02
Add a custom loader for a "local" provider that auto-discovers models from any OpenAI-compatible local server (llama.cpp, ollama, vLLM, LM Studio, etc.) by querying the standard /v1/models endpoint at startup. Users configure only a baseURL and optional apiKey — no manual model listing required. Discovered models are merged with any manually configured models without overwriting them. Closes anomalyco#6231
…local provider tests The CUSTOM_LOADERS loop skipped providers not registered in models.dev, which prevented the local provider from ever being invoked. Create a stub Info as a fallback so custom loaders can bootstrap themselves. Adds three tests for the local provider: successful auto-discovery, unreachable endpoint, and missing baseURL.
Issue for this PR
Closes #6231
Type of change
What does this PR do?
Adds a `local` provider to `CUSTOM_LOADERS` that auto-discovers models from any OpenAI-compatible `/v1/models` endpoint at startup. When configured with a `baseURL`, it fetches the model list, registers each model, and returns `autoload: true`. If the endpoint is unreachable or returns nothing, it returns `autoload: false` and the provider is silently skipped.

Also fixes the `CUSTOM_LOADERS` loop to allow custom loaders that don't have a models.dev entry; previously they were silently skipped because `database[providerID]` returned undefined and the loop hit `continue`.

Usage:

```json
{
  "provider": {
    "local": {
      "options": {
        "baseURL": "http://localhost:11434/v1",
        "apiKey": "optional-key"
      }
    }
  }
}
```

This is an alternative to #17670 that solves the same problem in ~50 lines with no new config surface.
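To make the behavior above concrete, here is a minimal sketch of how such a loader could work. The names (`toLoaderResult`, `discoverLocalModels`, `LoaderResult`) are illustrative, not opencode's actual internal API; the response shape assumed is the standard OpenAI `GET /v1/models` payload (`{ "data": [{ "id": ... }] }`).

```typescript
// Hypothetical sketch of the "local" custom loader described above.
// Assumes an OpenAI-compatible /v1/models response shape.

type ModelList = { data?: { id: string }[] };

interface LoaderResult {
  autoload: boolean;
  models: Record<string, { name: string }>;
}

// Turn a /v1/models response body into the loader's model map.
// An empty or missing list yields autoload: false.
function toLoaderResult(body: ModelList | undefined): LoaderResult {
  const models: Record<string, { name: string }> = {};
  for (const entry of body?.data ?? []) {
    models[entry.id] = { name: entry.id };
  }
  return { autoload: Object.keys(models).length > 0, models };
}

// Network wrapper: any failure (unreachable server, non-2xx status,
// malformed JSON) degrades to autoload: false so the provider is
// silently skipped, matching the behavior described in the PR.
async function discoverLocalModels(
  baseURL: string,
  apiKey?: string,
): Promise<LoaderResult> {
  try {
    const res = await fetch(`${baseURL}/models`, {
      headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
    });
    if (!res.ok) return { autoload: false, models: {} };
    return toLoaderResult((await res.json()) as ModelList);
  } catch {
    return { autoload: false, models: {} };
  }
}
```

Keeping the response parsing (`toLoaderResult`) separate from the network call makes the discovery logic trivially unit-testable without a running server.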
How did you verify your code works?
Three tests covering:

- successful auto-discovery from the `/models` endpoint
- an unreachable endpoint
- `baseURL` is not configured

Tested locally against llama-server and LM Studio.
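The unreachable-endpoint case hinges on discovery swallowing connection errors rather than throwing. An illustrative, self-contained sketch (not the PR's actual test code; `probe` is a hypothetical helper, and Node 18+ global `fetch` is assumed):

```typescript
// Probe an OpenAI-compatible base URL; a connection failure is treated
// as "no local server running here" rather than an error.
async function probe(baseURL: string): Promise<boolean> {
  try {
    const res = await fetch(`${baseURL}/models`);
    return res.ok;
  } catch {
    // Connection refused / DNS failure: degrade gracefully.
    return false;
  }
}

// Port 1 is essentially never listening, so this resolves to false
// instead of rejecting.
probe("http://127.0.0.1:1/v1").then((ok) => console.log("reachable:", ok));
```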
Screenshots / recordings
N/A — no UI changes.
Checklist