
feat(opencode): add local server provider with auto model discovery #19959

Open

hmblair wants to merge 2 commits into anomalyco:dev from hmblair:autoload-models

Conversation


@hmblair hmblair commented Mar 29, 2026

Issue for this PR

Closes #6231

Type of change

  • Bug fix
  • New feature
  • Refactor / code improvement
  • Documentation

What does this PR do?

Adds a local provider to CUSTOM_LOADERS that auto-discovers models from any OpenAI-compatible /v1/models endpoint at startup. When configured with a baseURL, it fetches the model list, registers each model, and returns autoload: true. If the endpoint is unreachable or returns nothing, it returns autoload: false and the provider is silently skipped.
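The discovery step described above can be sketched roughly as follows. This is an illustrative reconstruction, not the PR's actual code: the helper name `parseDiscoveredModels` and the shape of the returned model entries are assumptions, though the `/v1/models` response format (`{ "data": [{ "id": ... }] }`) is the standard OpenAI shape.

```typescript
// Hypothetical sketch: turn an OpenAI-compatible /v1/models response
// into a map of model entries keyed by model ID.
interface ModelsResponse {
  data?: { id: string }[]
}

function parseDiscoveredModels(body: ModelsResponse): Record<string, { name: string }> {
  const models: Record<string, { name: string }> = {}
  for (const entry of body.data ?? []) {
    models[entry.id] = { name: entry.id }
  }
  return models
}

// A loader built on this might then look like (pseudocode):
//   const res = await fetch(new URL("models", baseURL)).catch(() => undefined)
//   if (!res?.ok) return { autoload: false }
//   const models = parseDiscoveredModels(await res.json())
//   return { autoload: Object.keys(models).length > 0, models }
```

An unreachable endpoint or an empty `data` array yields an empty map, which maps naturally onto the `autoload: false` path.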

Also fixes the CUSTOM_LOADERS loop to allow custom loaders that don't have a models.dev entry — previously they were silently skipped because database[providerID] returned undefined and the loop hit continue.
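The loop fix amounts to substituting a stub record where the models.dev lookup comes back empty. A minimal sketch of that idea, with assumed field names (the real `Info` type in the codebase will differ):

```typescript
// Illustrative only: a stub fallback so custom loaders without a
// models.dev entry are still invoked instead of being skipped.
interface ProviderInfo {
  id: string
  models: Record<string, unknown>
}

function infoFor(
  database: Record<string, ProviderInfo>,
  providerID: string,
): ProviderInfo {
  // Previously the loop did the equivalent of
  //   if (!database[providerID]) continue
  // which silently dropped custom loaders. A stub lets the loader
  // bootstrap itself and register discovered models.
  return database[providerID] ?? { id: providerID, models: {} }
}
```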

Usage:

{
  "provider": {
    "local": {
      "options": {
        "baseURL": "http://localhost:11434/v1",
        "apiKey": "optional-key"
      }
    }
  }
}

This is an alternative to #17670 that solves the same problem in ~50 lines with no new config surface.

How did you verify your code works?

Three tests covering:

  • Successful discovery from a mocked /models endpoint
  • Graceful handling when the endpoint is unreachable
  • Graceful handling when baseURL is not configured

Tested locally against llama-server and LM Studio.

Screenshots / recordings

N/A — no UI changes.

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

Hamish M. Blair added 2 commits March 29, 2026 16:02
Add a custom loader for a "local" provider that auto-discovers models
from any OpenAI-compatible local server (llama.cpp, ollama, vLLM, LM
Studio, etc.) by querying the standard /v1/models endpoint at startup.

Users configure only a baseURL and optional apiKey — no manual model
listing required. Discovered models are merged with any manually
configured models without overwriting them.

Closes anomalyco#6231
…local provider tests

The CUSTOM_LOADERS loop skipped providers not registered in models.dev,
which prevented the local provider from ever being invoked. Create a
stub Info as a fallback so custom loaders can bootstrap themselves.

Adds three tests for the local provider: successful auto-discovery,
unreachable endpoint, and missing baseURL.
@github-actions
Contributor

The following comment was made by an LLM, it may be inaccurate:

Potential Duplicate Found:



Development

Successfully merging this pull request may close these issues.

Auto-discover models from OpenAI-compatible provider endpoints
