feat(provider): native LM Studio support with dynamic model discovery #19578
Aarogaming wants to merge 2 commits into anomalyco:dev
Conversation
This PR doesn't fully meet our contributing guidelines and PR template. What needs to be fixed:
Please edit this PR description to address the above within 2 hours, or it will be automatically closed. If you believe this was flagged incorrectly, please let a maintainer know.
The following comment was made by an LLM, it may be inaccurate: Based on my search, I found two potentially related PRs that address similar functionality:
These PRs likely address the same or overlapping feature set as PR #19578. You may want to check their status (open/closed/merged) and review their implementation to ensure there's no duplication of effort.
Pull request overview
Adds a native LM Studio provider integration in opencode intended to auto-configure an OpenAI-compatible endpoint and dynamically discover locally loaded models from a running LM Studio instance.
Changes:
- Adds "lmstudio" as a bundled provider entry.
- Introduces a CUSTOM_LOADERS.lmstudio loader with a default LM_STUDIO_URL / http://127.0.0.1:1234/v1 configuration and /models discovery logic.
- Generates "LM Studio: <Model Name>" entries while skipping embeddings.
```
const res = await fetch(`${baseURL}/models`)
if (!res.ok) return {}
const data = await res.json() as any
```
The LM Studio /models request has no timeout/abort signal. If the endpoint is slow or a connection hangs, this can stall provider initialization (especially once discovery is wired up). Consider adding an AbortSignal.timeout(...) (and/or a short connect timeout) similar to how models.dev refresh is handled, and log a debug/warn on non-OK responses to aid troubleshooting.
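A minimal sketch of the suggested guard, assuming a Node 18+ runtime where global `fetch` and `AbortSignal.timeout` are available; the helper name and the 2-second budget are illustrative, not values from the PR:

```typescript
// Sketch: bound the LM Studio /models request so a hung connection cannot
// stall provider initialization. Returns an empty list on any failure,
// matching the loader's "LM Studio may not be running" behavior.
async function fetchModels(baseURL: string, timeoutMs = 2000): Promise<unknown[]> {
  try {
    const res = await fetch(`${baseURL}/models`, {
      signal: AbortSignal.timeout(timeoutMs), // aborts slow/hung requests
    })
    if (!res.ok) {
      // Surface non-OK responses to aid troubleshooting, per the review note.
      console.warn(`lmstudio: /models responded ${res.status}`)
      return []
    }
    const data = (await res.json()) as { data?: unknown[] }
    return data.data ?? []
  } catch {
    // Covers both connection-refused (not running) and the timeout abort.
    return []
  }
}
```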
```
async lmstudio() {
  const baseURL = Env.get("LM_STUDIO_URL") || "http://127.0.0.1:1234/v1"
  return {
    autoload: true,
    options: { baseURL, apiKey: "lm-studio" },
    async discoverModels() {
      try {
        const res = await fetch(`${baseURL}/models`)
        if (!res.ok) return {}
        const data = await res.json() as any
        const models: Record<string, any> = {}
        for (const m of (data.data || [])) {
          if (m.id.includes("embedding")) continue // skip embeddings

          const prettyName = m.id.split("/").pop() || m.id

          models[`lmstudio/${m.id}`] = {
            id: m.id,
            name: `LM Studio: ${prettyName}`,
            providerID: "lmstudio",
            family: "lmstudio-local",
            api: {
              id: m.id,
              url: baseURL,
              npm: "@ai-sdk/openai-compatible",
            },
            status: "active",
            headers: {},
            options: {},
            cost: {
              input: 0,
              output: 0,
              cache: { read: 0, write: 0 },
            },
            limit: {
              context: 32000,
              output: 4096,
            },
            capabilities: {
              temperature: true,
              reasoning: false,
              attachment: false,
              toolcall: true,
              interleaved: false,
              input: {
                text: true,
                audio: false,
                image: false,
                video: false,
                pdf: false,
              },
              output: {
                text: true,
                audio: false,
                image: false,
                video: false,
                pdf: false,
              },
            },
            release_date: "2025-01-01",
            variants: {},
          }
        }
        return models
      } catch (e) {
        return {} // Return empty if LM Studio is not currently running
      }
    },
  }
}
```
This change introduces a new provider integration path (LM Studio autoload + model discovery) but there’s no accompanying test. The repo already has extensive Provider.list()/config/env precedence tests; please add a test that stubs an LM Studio /v1/models response and asserts the provider is present and models are added/skipped as expected (e.g., embeddings filtered).
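A sketch of such a test with `globalThis.fetch` stubbed so no running LM Studio instance is needed; `discoverLmStudioModels` is a simplified stand-in for the real loader, and the test-runner wiring (bun/vitest `describe`/`expect`) is omitted:

```typescript
// Stand-in for the lmstudio loader's discovery: fetch /models, drop
// embedding models, and key the rest as "lmstudio/<id>".
async function discoverLmStudioModels(baseURL: string): Promise<string[]> {
  const res = await fetch(`${baseURL}/models`)
  if (!res.ok) return []
  const data = (await res.json()) as { data?: { id: string }[] }
  return (data.data ?? [])
    .filter((m) => !m.id.includes("embedding"))
    .map((m) => `lmstudio/${m.id}`)
}

// Stub the global fetch with a canned /v1/models response: one chat model
// and one embedding model that should be filtered out.
globalThis.fetch = (async () =>
  new Response(
    JSON.stringify({ data: [{ id: "qwen/qwen3-8b" }, { id: "text-embedding-nomic" }] }),
    { status: 200, headers: { "content-type": "application/json" } },
  )) as typeof fetch

const keys = await discoverLmStudioModels("http://127.0.0.1:1234/v1")
// Only the chat model should survive the embedding filter.
```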
```
"@ai-sdk/vercel": createVercel,
"gitlab-ai-provider": createGitLab,
"@ai-sdk/github-copilot": createGitHubCopilotOpenAICompatible,
"lmstudio": createOpenAICompatible,
```
BUNDLED_PROVIDERS is keyed by model.api.npm, but the LM Studio-discovered models below set api.npm to "@ai-sdk/openai-compatible". As written, the "lmstudio" entry here will never be selected (and it’s also not an actual npm spec). Consider removing this mapping, or ensure LM Studio models use the same api.npm key you intend to bundle so this entry has an effect.
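A small self-contained demonstration of the mismatch; the map and model shapes below are simplified stand-ins for the repo's actual types:

```typescript
// Lookup happens by model.api.npm, so a bare "lmstudio" key is dead code.
const bundled: Record<string, string> = {
  "lmstudio": "createOpenAICompatible",              // never selected
  "@ai-sdk/openai-compatible": "createOpenAICompatible",
}

// A discovered LM Studio model carries the openai-compatible npm spec.
const discovered = { api: { npm: "@ai-sdk/openai-compatible" } }
const loader = bundled[discovered.api.npm] // resolved via the npm-spec key only
```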
```
async lmstudio() {
  const baseURL = Env.get("LM_STUDIO_URL") || "http://127.0.0.1:1234/v1"
  return {
    autoload: true,
    options: { baseURL, apiKey: "lm-studio" },
    async discoverModels() {
      try {
        const res = await fetch(`${baseURL}/models`)
```
discoverModels() is implemented for lmstudio, but provider state only runs discovery for GitLab (hard-coded later in this file). That means LM Studio models will never actually be discovered/populated, so the model dropdown won’t change as described. Please wire discoveryLoaders into the general provider initialization flow (or add an LM Studio-specific discovery call) before providers/models are finalized.
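One possible shape for the generic wiring, iterating every loader that exposes discoverModels() instead of special-casing GitLab; all names here are illustrative, not the repo's actual identifiers:

```typescript
// Simplified loader result: the real type carries options, autoload, etc.
type LoadedProvider = {
  autoload?: boolean
  discoverModels?: () => Promise<Record<string, unknown>>
}

// Merge discovered models from every loader into one map before the
// provider/model state is finalized, so new loaders (lmstudio, gitlab, ...)
// participate without further hard-coding.
async function runDiscovery(loaded: Record<string, LoadedProvider>) {
  const models: Record<string, unknown> = {}
  for (const provider of Object.values(loaded)) {
    if (!provider.discoverModels) continue
    Object.assign(models, await provider.discoverModels())
  }
  return models
}
```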
```
models[`lmstudio/${m.id}`] = {
  id: m.id,
  name: `LM Studio: ${prettyName}`,
  providerID: "lmstudio",
  family: "lmstudio-local",
```
The discovered model objects don’t conform to the Provider.Model schema used elsewhere in this file: id should be the model record key wrapped with ModelID.make(...) (not m.id), and providerID should be a ProviderID value. Also, the record key lmstudio/${m.id} currently won’t match the stored model.id, which can break lookups and downstream assumptions. Please construct Model values the same way as config/models.dev parsing does.
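A sketch of keeping the record key and the stored id in sync; plain strings stand in for the ModelID.make / ProviderID branded helpers the review mentions:

```typescript
const providerID = "lmstudio" // stand-in for a ProviderID value

// Build [key, model] pairs where the stored id equals the record key, so
// lookups by key always find a model whose id matches, while api.id keeps
// the raw LM Studio identifier for the wire request.
function toModelEntry(rawId: string) {
  const key = `${providerID}/${rawId}` // stand-in for ModelID.make(...)
  return [key, { id: key, providerID, api: { id: rawId } }] as const
}
```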
This pull request has been automatically closed because it was not updated to meet our contributing guidelines within the 2-hour window. Feel free to open a new pull request that follows our guidelines.
Issue for this PR
Closes #19582
Type of change
What does this PR do?
Adds native auto-discovery for LM Studio instances running locally, streamlining user experience for offline models.
How did you verify your code works?
Ran the local LM Studio instance and confirmed the opencode provider system properly discovered the local models without manual URL entry. Typecheck and unit tests were updated.
Screenshots / recordings
Checklist