feat(provider): native LM Studio support with dynamic model discovery #19578

Closed

Aarogaming wants to merge 2 commits into anomalyco:dev from Aarogaming:feat/lm-studio-provider

Conversation


Aarogaming commented Mar 29, 2026

Issue for this PR

Closes #19582

Type of change

  • Bug fix
  • New feature
  • Refactor / code improvement
  • Documentation

What does this PR do?

Adds native auto-discovery for LM Studio instances running locally, streamlining the user experience for offline models.

How did you verify your code works?

Ran a local LM Studio instance and confirmed that the opencode provider system discovered the local models without manual URL entry. Typechecking passes, and unit tests were updated.

Screenshots / recordings

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

Copilot AI review requested due to automatic review settings March 29, 2026 02:07
@github-actions
Contributor

This PR doesn't fully meet our contributing guidelines and PR template.

What needs to be fixed:

  • PR description is missing required template sections. Please use the PR template.

Please edit this PR description to address the above within 2 hours, or it will be automatically closed.

If you believe this was flagged incorrectly, please let a maintainer know.

@github-actions github-actions bot added the needs:compliance This means the issue will auto-close after 2 hours. label Mar 29, 2026
@github-actions
Contributor

The following comment was generated by an LLM and may be inaccurate:

Based on my search, I found two potentially related PRs that address similar functionality:

  1. PR feat(opencode): add dynamic configuration and context discovery for LM Studio #15732 - "feat(opencode): add dynamic configuration and context discovery for LM Studio"

  2. PR feat(opencode): dynamic model discovery for local providers (LM Studio, llama.cpp, etc.) #17670 - "feat(opencode): dynamic model discovery for local providers (LM Studio, llama.cpp, etc.)"

These PRs likely address the same or overlapping feature set as PR #19578. You may want to check their status (open/closed/merged) and review their implementation to ensure there's no duplication of effort.

Contributor

Copilot AI left a comment


Pull request overview

Adds a native LM Studio provider integration in opencode intended to auto-configure an OpenAI-compatible endpoint and dynamically discover locally loaded models from a running LM Studio instance.

Changes:

  • Adds "lmstudio" as a bundled provider entry.
  • Introduces a CUSTOM_LOADERS.lmstudio loader with a default baseURL of http://127.0.0.1:1234/v1 (overridable via LM_STUDIO_URL) and /models discovery logic.
  • Generates LM Studio: <Model Name> entries while skipping embeddings.


Comment on lines +165 to +167
const res = await fetch(`${baseURL}/models`)
if (!res.ok) return {}
const data = await res.json() as any

Copilot AI Mar 29, 2026


The LM Studio /models request has no timeout/abort signal. If the endpoint is slow or a connection hangs, this can stall provider initialization (especially once discovery is wired up). Consider adding an AbortSignal.timeout(...) (and/or a short connect timeout) similar to how models.dev refresh is handled, and log a debug/warn on non-OK responses to aid troubleshooting.
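A sketch of the guard the reviewer suggests (not the PR's actual code): the /models call is bounded by AbortSignal.timeout (Node 17.3+), and non-OK responses are logged. The helper name `fetchModels` and the 2-second default are illustrative.

```typescript
// Hedged sketch: bound the LM Studio /models request with a timeout so a
// hung connection cannot stall provider initialization.
async function fetchModels(baseURL: string, timeoutMs = 2000): Promise<Record<string, unknown>> {
  try {
    const res = await fetch(`${baseURL}/models`, { signal: AbortSignal.timeout(timeoutMs) })
    if (!res.ok) {
      console.warn(`LM Studio /models returned HTTP ${res.status}`)
      return {}
    }
    return (await res.json()) as Record<string, unknown>
  } catch {
    // Covers timeout (TimeoutError), connection refused, and DNS failures,
    // so a stopped LM Studio simply yields no discovered models.
    return {}
  }
}
```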

Copilot uses AI. Check for mistakes.
Comment on lines +158 to +226
async lmstudio() {
  const baseURL = Env.get("LM_STUDIO_URL") || "http://127.0.0.1:1234/v1"
  return {
    autoload: true,
    options: { baseURL, apiKey: "lm-studio" },
    async discoverModels() {
      try {
        const res = await fetch(`${baseURL}/models`)
        if (!res.ok) return {}
        const data = await res.json() as any
        const models: Record<string, any> = {}
        for (const m of (data.data || [])) {
          if (m.id.includes("embedding")) continue // skip embeddings

          const prettyName = m.id.split("/").pop() || m.id

          models[`lmstudio/${m.id}`] = {
            id: m.id,
            name: `LM Studio: ${prettyName}`,
            providerID: "lmstudio",
            family: "lmstudio-local",
            api: {
              id: m.id,
              url: baseURL,
              npm: "@ai-sdk/openai-compatible",
            },
            status: "active",
            headers: {},
            options: {},
            cost: {
              input: 0,
              output: 0,
              cache: { read: 0, write: 0 },
            },
            limit: {
              context: 32000,
              output: 4096,
            },
            capabilities: {
              temperature: true,
              reasoning: false,
              attachment: false,
              toolcall: true,
              interleaved: false,
              input: {
                text: true,
                audio: false,
                image: false,
                video: false,
                pdf: false,
              },
              output: {
                text: true,
                audio: false,
                image: false,
                video: false,
                pdf: false,
              },
            },
            release_date: "2025-01-01",
            variants: {},
          }
        }
        return models
      } catch (e) {
        return {} // Return empty if LM Studio is not currently running
      }
    }
  }

Copilot AI Mar 29, 2026


This change introduces a new provider integration path (LM Studio autoload + model discovery) but there’s no accompanying test. The repo already has extensive Provider.list()/config/env precedence tests; please add a test that stubs an LM Studio /v1/models response and asserts the provider is present and models are added/skipped as expected (e.g., embeddings filtered).
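One possible shape for such a test, with the stubbing approach and names as illustrative assumptions rather than the repo's real helpers: stub the global fetch with a canned /v1/models payload and assert that chat models are kept while embeddings are filtered. `discoverLMStudioModels` stands in for the PR's discoverModels loader.

```typescript
// Minimal sketch of the requested test; no running LM Studio is needed.
async function discoverLMStudioModels(baseURL: string): Promise<Record<string, { id: string }>> {
  const res = await fetch(`${baseURL}/models`)
  if (!res.ok) return {}
  const data = (await res.json()) as { data?: { id: string }[] }
  const models: Record<string, { id: string }> = {}
  for (const m of data.data ?? []) {
    if (m.id.includes("embedding")) continue // embeddings are skipped
    models[`lmstudio/${m.id}`] = { id: m.id }
  }
  return models
}

// Stub global fetch to return a canned LM Studio /v1/models response.
globalThis.fetch = (async () =>
  new Response(
    JSON.stringify({ data: [{ id: "qwen2.5-7b-instruct" }, { id: "nomic-embedding-v1.5" }] }),
  )) as typeof fetch

const models = await discoverLMStudioModels("http://127.0.0.1:1234/v1")
// "lmstudio/qwen2.5-7b-instruct" is present; the embedding model is filtered out.
```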

"@ai-sdk/vercel": createVercel,
"gitlab-ai-provider": createGitLab,
"@ai-sdk/github-copilot": createGitHubCopilotOpenAICompatible,
"lmstudio": createOpenAICompatible,

Copilot AI Mar 29, 2026


BUNDLED_PROVIDERS is keyed by model.api.npm, but the LM Studio-discovered models below set api.npm to "@ai-sdk/openai-compatible". As written, the "lmstudio" entry here will never be selected (and it’s also not an actual npm spec). Consider removing this mapping, or ensure LM Studio models use the same api.npm key you intend to bundle so this entry has an effect.

Suggested change
"lmstudio": createOpenAICompatible,
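An illustration of the review point, with made-up stand-ins for the real factory functions: since the map is looked up by model.api.npm, a key that is not an npm package spec can never match.

```typescript
// Stand-in factory; the real code imports createOpenAICompatible from the SDK.
const createOpenAICompatible = () => "openai-compatible"

const BUNDLED_PROVIDERS: Record<string, () => string> = {
  "@ai-sdk/openai-compatible": createOpenAICompatible,
  "lmstudio": createOpenAICompatible, // dead entry: no model sets api.npm = "lmstudio"
}

// Discovered LM Studio models set api.npm to the package spec, so the
// lookup always resolves through the "@ai-sdk/openai-compatible" key.
const model = { api: { npm: "@ai-sdk/openai-compatible" } }
const factory = BUNDLED_PROVIDERS[model.api.npm]
```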

Comment on lines +158 to +165
async lmstudio() {
  const baseURL = Env.get("LM_STUDIO_URL") || "http://127.0.0.1:1234/v1"
  return {
    autoload: true,
    options: { baseURL, apiKey: "lm-studio" },
    async discoverModels() {
      try {
        const res = await fetch(`${baseURL}/models`)

Copilot AI Mar 29, 2026


discoverModels() is implemented for lmstudio, but provider state only runs discovery for GitLab (hard-coded later in this file). That means LM Studio models will never actually be discovered/populated, so the model dropdown won’t change as described. Please wire discoveryLoaders into the general provider initialization flow (or add an LM Studio-specific discovery call) before providers/models are finalized.
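A hypothetical sketch of the generic wiring the review describes: during provider initialization, iterate every loader that exposes discoverModels instead of hard-coding one provider. The Loader shape and function names are assumptions, not the repo's actual types.

```typescript
// Assumed loader shape for this sketch.
type Loader = {
  autoload?: boolean
  discoverModels?: () => Promise<Record<string, unknown>>
}

// Run discovery for every autoloaded loader that implements it; each loader
// is expected to handle its own failures by returning an empty record.
async function runDiscovery(loaders: Record<string, Loader>): Promise<Record<string, unknown>> {
  const discovered: Record<string, unknown> = {}
  for (const loader of Object.values(loaders)) {
    if (!loader.autoload || !loader.discoverModels) continue
    Object.assign(discovered, await loader.discoverModels())
  }
  return discovered
}

const discovered = await runDiscovery({
  lmstudio: { autoload: true, discoverModels: async () => ({ "lmstudio/demo-model": {} }) },
  static: { autoload: true }, // no discoverModels: skipped
})
```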

Comment on lines +174 to +178
models[`lmstudio/${m.id}`] = {
  id: m.id,
  name: `LM Studio: ${prettyName}`,
  providerID: "lmstudio",
  family: "lmstudio-local",

Copilot AI Mar 29, 2026


The discovered model objects don’t conform to the Provider.Model schema used elsewhere in this file: id should be the model record key wrapped with ModelID.make(...) (not m.id), and providerID should be a ProviderID value. Also, the record key lmstudio/${m.id} currently won’t match the stored model.id, which can break lookups and downstream assumptions. Please construct Model values the same way as config/models.dev parsing does.
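A sketch of the keying fix the review asks for: keep the record key and the stored model id identical so later lookups by id do not diverge. ModelID.make and ProviderID from the real codebase are replaced by plain strings here.

```typescript
// The /models response yields raw ids like "qwen2.5-7b-instruct"; the fix is
// to store the prefixed key as the model's id rather than the raw m.id.
const m = { id: "qwen2.5-7b-instruct" }
const key = `lmstudio/${m.id}`

const models: Record<string, { id: string; providerID: string }> = {}
models[key] = {
  id: key, // key and id match, unlike the original `id: m.id`
  providerID: "lmstudio",
}
```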

@github-actions
Contributor

This pull request has been automatically closed because it was not updated to meet our contributing guidelines within the 2-hour window.

Feel free to open a new pull request that follows our guidelines.

@github-actions github-actions bot removed the needs:compliance This means the issue will auto-close after 2 hours. label Mar 29, 2026
@github-actions github-actions bot closed this Mar 29, 2026


Development

Successfully merging this pull request may close these issues.

Feature Request: Native LM Studio auto-discovery

2 participants