
feat(provider): native lm studio auto-discovery#20524

Open
Aarogaming wants to merge 4 commits into anomalyco:dev from Aarogaming:feat/lm-studio-provider

Conversation


@Aarogaming Aarogaming commented Apr 1, 2026

Issue for this PR

Closes #20609

Type of change

  • Bug fix
  • New feature
  • Refactor / code improvement
  • Documentation

What does this PR do?

Adds native auto-discovery for LM Studio instances running locally, streamlining user experience for offline models.
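For context, LM Studio serves an OpenAI-compatible API (by default at http://localhost:1234/v1) whose GET /models endpoint returns a list of the form { data: [{ id, ... }] }. A minimal sketch of the discovery mapping, using illustrative names (`ModelsPayload`, `toModelEntries` are not the PR's actual identifiers):

```typescript
// Sketch: map an LM Studio /models payload into provider model entries.
// The payload shape follows the OpenAI-compatible list format LM Studio serves.
interface ModelsPayload {
  data: { id: string }[]
}

interface ModelEntry {
  id: string
  name: string // human-friendly name derived from the id
}

function toModelEntries(payload: ModelsPayload): Record<string, ModelEntry> {
  const models: Record<string, ModelEntry> = {}
  for (const m of payload.data) {
    // e.g. "lmstudio-community/qwen2.5-7b-instruct" -> "qwen2.5-7b-instruct"
    const prettyName = m.id.split("/").pop() || m.id
    models[m.id] = { id: m.id, name: prettyName }
  }
  return models
}

// Usage against a canned payload (no running server needed):
const entries = toModelEntries({
  data: [{ id: "lmstudio-community/qwen2.5-7b-instruct" }],
})
```

In practice the real loader would fetch this payload from the running server and fill in capabilities, limits, and cost per model, as the review below discusses.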

How did you verify your code works?

Ran a local LM Studio instance and confirmed that the opencode provider system discovered the local models without manual URL entry. Typecheck and unit tests were updated.

Screenshots / recordings

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

Copilot AI review requested due to automatic review settings April 1, 2026 17:59

github-actions bot commented Apr 1, 2026

The following comment was made by an LLM; it may be inaccurate:

Potential Duplicate/Related PRs Found

I found three highly related PRs that may be addressing similar functionality:

  1. PR #17670 - feat(opencode): dynamic model discovery for local providers (LM Studio, llama.cpp, etc.)

  2. PR #19959 - feat(opencode): add local server provider with auto model discovery

  3. PR #15732 - feat(opencode): add dynamic configuration and context discovery for LM Studio

Why they might be duplicates: These PRs all address auto-discovery functionality for LM Studio and local model providers. You should verify whether PR #20524 is a newer/improved version of this functionality or if there's overlap that needs to be resolved.


Copilot AI left a comment


Pull request overview

Adds an LM Studio provider integration intended to auto-discover locally running LM Studio instances and expose their models via the existing OpenAI-compatible provider plumbing.

Changes:

  • Registers a new lmstudio provider mapping to the bundled OpenAI-compatible SDK.
  • Adds a CUSTOM_LOADERS.lmstudio() loader that sets default connection options and implements /models discovery.
  • Generates model metadata for discovered LM Studio models (capabilities, limits, cost, etc.).


Comment on lines +163 to +166
async discoverModels() {
  try {
    const res = await fetch(`${baseURL}/models`)
    if (!res.ok) return {}

Copilot AI Apr 1, 2026


discoverModels() is wired into CUSTOM_LOADERS, but provider state currently only invokes discoveryLoaders[...]() for GitLab. As-is, LM Studio models returned here will never be merged into providers["lmstudio"].models, so the advertised auto-discovery won’t actually surface any models. Consider generalizing the discovery step to run for any provider with a discoverModels loader (and merge results) before the empty-models filter, or add an explicit LM Studio discovery/merge like the GitLab block.
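A generalized version of that discovery step might look like the following sketch. The loader shape and all names here are assumptions based on the review comment, not the actual provider-state code:

```typescript
// Sketch: run every provider's optional discoverModels loader and merge the
// results into that provider's model map, instead of special-casing one
// provider (as the comment says currently happens for GitLab).
type ModelInfo = { name: string }
type Provider = { models: Record<string, ModelInfo> }
type DiscoveryLoader = () => Promise<Record<string, ModelInfo>>

async function runDiscovery(
  providers: Record<string, Provider>,
  loaders: Record<string, DiscoveryLoader | undefined>,
): Promise<void> {
  for (const [id, provider] of Object.entries(providers)) {
    const loader = loaders[id]
    if (!loader) continue
    try {
      // Merge discovered models before any empty-models filtering runs.
      Object.assign(provider.models, await loader())
    } catch {
      // A failed discovery leaves the provider's static models untouched.
    }
  }
}

// Usage with an in-memory loader (no network):
const providers = { lmstudio: { models: {} as Record<string, ModelInfo> } }
const ready = runDiscovery(providers, {
  lmstudio: async () => ({ "qwen2.5-7b": { name: "qwen2.5-7b" } }),
})
```

Running discovery generically like this would let any future local provider opt in by supplying a loader, rather than adding another provider-specific block.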


const prettyName = m.id.split("/").pop() || m.id

models[`lmstudio/${m.id}`] = {

Copilot AI Apr 1, 2026


The discovered models map uses keys like lmstudio/${m.id} but sets providerID: "lmstudio". Provider model maps elsewhere use unprefixed model IDs as keys (the provider already scopes them), and parseModel() splits provider/model strings. With the current key format, model lookup/selection is likely to break (especially when m.id already contains /). Use m.id as the map key (scoped by provider) and keep the model id consistent with that key.

Suggested change
models[`lmstudio/${m.id}`] = {
models[m.id] = {
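To illustrate the mismatch, here is a small self-contained sketch; `parseModel` below is a simplified stand-in that splits on the first slash, as the comment describes, not opencode's actual implementation:

```typescript
// Sketch: why provider-prefixed map keys break lookup when the provider
// already scopes model IDs.
function parseModel(input: string): { providerID: string; modelID: string } {
  const idx = input.indexOf("/")
  return { providerID: input.slice(0, idx), modelID: input.slice(idx + 1) }
}

const modelID = "lmstudio-community/qwen2.5-7b"

// Buggy: the map key becomes "lmstudio/lmstudio-community/qwen2.5-7b".
const discovered: Record<string, object> = {}
discovered[`lmstudio/${modelID}`] = {}

// Selecting "lmstudio/lmstudio-community/qwen2.5-7b" parses to
// modelID "lmstudio-community/qwen2.5-7b", which is NOT a key in the map.
const parsed = parseModel(`lmstudio/${modelID}`)
const found = parsed.modelID in discovered // false with prefixed keys

// Fixed: key the map by the bare model id, as the suggestion proposes.
const fixed: Record<string, object> = { [modelID]: {} }
const foundFixed = parsed.modelID in fixed // true
```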

Comment on lines +160 to +161
return {
autoload: true,

Copilot AI Apr 1, 2026


autoload: true here enables the LM Studio provider unconditionally whenever it exists in the models.dev database, even when the local LM Studio server isn’t running (in which case discoverModels() returns {}). That can expose non-functional models in the UI and lead to confusing runtime errors. Consider making autoload conditional on a quick reachability check / successful discovery (or only autoload after discovery yields at least one model).

Suggested change
return {
autoload: true,
// Determine whether LM Studio is reachable and has at least one model.
let autoload = false
try {
  const res = await fetch(`${baseURL}/models`)
  if (res.ok) {
    const data = (await res.json()) as any
    const hasModels = Array.isArray(data?.data) && data.data.length > 0
    autoload = hasModels
  }
} catch {
  autoload = false
}
return {
  autoload,

options: { baseURL, apiKey: "lm-studio" },
async discoverModels() {
  try {
    const res = await fetch(`${baseURL}/models`)

Copilot AI Apr 1, 2026


fetch(`${baseURL}/models`) has no timeout/abort signal. If LM Studio is bound but slow/hung, provider initialization could block state loading indefinitely. Consider adding an AbortSignal.timeout(...) (and handling abort) to keep startup responsive.

Suggested change
const res = await fetch(`${baseURL}/models`)
const timeoutSignal =
  typeof AbortSignal !== "undefined" && "timeout" in AbortSignal
    ? AbortSignal.timeout(5000)
    : undefined
const res = await fetch(
  `${baseURL}/models`,
  timeoutSignal ? { signal: timeoutSignal } : undefined
)

@Aarogaming Aarogaming requested a review from adamdotdevin as a code owner April 1, 2026 18:28


Development

Successfully merging this pull request may close these issues.

Feature Request: Native LM Studio auto-discovery

2 participants