feat(provider): native lm studio auto-discovery #20524
Aarogaming wants to merge 4 commits into anomalyco:dev from
Conversation
The following comment was made by an LLM; it may be inaccurate:

Potential Duplicate/Related PRs Found
I found two highly related PRs that may be addressing similar functionality.
Why they might be duplicates: these PRs all address auto-discovery functionality for LM Studio and local model providers. You should verify whether PR #20524 is a newer/improved version of this functionality or whether there is overlap that needs to be resolved.
Pull request overview
Adds an LM Studio provider integration intended to auto-discover locally running LM Studio instances and expose their models via the existing OpenAI-compatible provider plumbing.
Changes:
- Registers a new `lmstudio` provider mapping to the bundled OpenAI-compatible SDK.
- Adds a `CUSTOM_LOADERS.lmstudio()` loader that sets default connection options and implements `/models` discovery.
- Generates model metadata for discovered LM Studio models (capabilities, limits, cost, etc.).
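Pulling the fragments below together, the loader might look roughly like this. This is a hypothetical consolidation, not the PR's actual diff: the loader signature, option names, and metadata fields are assumptions based on the description and review snippets.

```typescript
// Hypothetical sketch of the CUSTOM_LOADERS.lmstudio() loader described
// above; the real signature and return shape in the PR may differ.
const baseURL = "http://localhost:1234/v1" // LM Studio's default local endpoint

const CUSTOM_LOADERS = {
  async lmstudio() {
    return {
      autoload: true,
      options: { baseURL, apiKey: "lm-studio" },
      async discoverModels(): Promise<Record<string, { name: string }>> {
        try {
          const res = await fetch(`${baseURL}/models`)
          if (!res.ok) return {}
          const data = (await res.json()) as { data?: Array<{ id: string }> }
          const models: Record<string, { name: string }> = {}
          for (const m of data.data ?? []) {
            // Derive a human-friendly name from the id's last path segment.
            models[m.id] = { name: m.id.split("/").pop() || m.id }
          }
          return models
        } catch {
          return {} // LM Studio not running: discover nothing
        }
      },
    }
  },
}
```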
```ts
async discoverModels() {
  try {
    const res = await fetch(`${baseURL}/models`)
    if (!res.ok) return {}
```
`discoverModels()` is wired into `CUSTOM_LOADERS`, but provider state currently only invokes `discoveryLoaders[...]()` for GitLab. As-is, the LM Studio models returned here will never be merged into `providers["lmstudio"].models`, so the advertised auto-discovery won't actually surface any models. Consider generalizing the discovery step to run for any provider with a `discoverModels` loader (and merge its results) before the empty-models filter, or add an explicit LM Studio discovery/merge like the GitLab block.
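A generalized discovery step could be sketched as follows. The names `providers` and `discoveryLoaders` and their shapes are assumptions taken from this comment, not the project's actual types:

```typescript
// Sketch: run discoverModels for every provider that supplies one,
// instead of special-casing a single provider like GitLab.
type ModelInfo = { name: string }
type DiscoveryLoader = {
  discoverModels?: () => Promise<Record<string, ModelInfo>>
}

async function mergeDiscoveredModels(
  providers: Record<string, { models: Record<string, ModelInfo> }>,
  discoveryLoaders: Record<string, DiscoveryLoader>,
): Promise<void> {
  for (const [providerID, loader] of Object.entries(discoveryLoaders)) {
    const provider = providers[providerID]
    if (!provider || !loader.discoverModels) continue
    // Merge discovered models before the empty-models filter runs,
    // so providers with only dynamically discovered models are not dropped.
    Object.assign(provider.models, await loader.discoverModels())
  }
}
```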
```ts
const prettyName = m.id.split("/").pop() || m.id
// …
models[`lmstudio/${m.id}`] = {
```
The discovered models map uses keys like `lmstudio/${m.id}` but sets `providerID: "lmstudio"`. Provider model maps elsewhere use unprefixed model IDs as keys (the provider already scopes them), and `parseModel()` splits provider/model strings. With the current key format, model lookup/selection is likely to break (especially when `m.id` already contains `/`). Use `m.id` as the map key (scoped by provider) and keep the model id consistent with that key.
```diff
- models[`lmstudio/${m.id}`] = {
+ models[m.id] = {
```
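To illustrate why prefixed keys break selection, here is a toy `parseModel` that splits on the first slash, as the comment describes; the project's real implementation may differ:

```typescript
// Toy parseModel splitting "provider/model" on the first "/"; the real
// implementation in the codebase may differ in details.
function parseModel(selector: string): { providerID: string; modelID: string } {
  const i = selector.indexOf("/")
  return { providerID: selector.slice(0, i), modelID: selector.slice(i + 1) }
}

// An LM Studio model id that itself contains a slash:
const id = "mlx-community/Llama-3.2-3B"

// Unprefixed map key: the selection string round-trips cleanly.
const ok = parseModel(`lmstudio/${id}`)
// ok.providerID === "lmstudio", ok.modelID === id

// Prefixed map key: the selection string is double-prefixed, and the
// model id no longer matches what the server reported.
const broken = parseModel(`lmstudio/lmstudio/${id}`)
// broken.modelID === `lmstudio/${id}`, not id
```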
```ts
return {
  autoload: true,
```
`autoload: true` here enables the LM Studio provider unconditionally whenever it exists in the models.dev database, even when the local LM Studio server isn't running (in which case `discoverModels()` returns `{}`). That can expose non-functional models in the UI and lead to confusing runtime errors. Consider making `autoload` conditional on a quick reachability check / successful discovery (or only autoloading after discovery yields at least one model).
```diff
- return {
-   autoload: true,
+ // Determine whether LM Studio is reachable and has at least one model.
+ let autoload = false
+ try {
+   const res = await fetch(`${baseURL}/models`)
+   if (res.ok) {
+     const data = (await res.json()) as any
+     const hasModels = Array.isArray(data?.data) && data.data.length > 0
+     autoload = hasModels
+   }
+ } catch {
+   autoload = false
+ }
+ return {
+   autoload,
```
```ts
options: { baseURL, apiKey: "lm-studio" },
async discoverModels() {
  try {
    const res = await fetch(`${baseURL}/models`)
```
`fetch(`${baseURL}/models`)` has no timeout/abort signal. If LM Studio is bound but slow/hung, provider initialization could block state loading indefinitely. Consider adding an `AbortSignal.timeout(...)` (and handling the abort) to keep startup responsive.
```diff
- const res = await fetch(`${baseURL}/models`)
+ const timeoutSignal =
+   typeof AbortSignal !== "undefined" && "timeout" in AbortSignal
+     ? AbortSignal.timeout(5000)
+     : undefined
+ const res = await fetch(
+   `${baseURL}/models`,
+   timeoutSignal ? { signal: timeoutSignal } : undefined
+ )
```
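Wrapped as a helper, the whole timed-out discovery fetch could look like the sketch below. The helper name is hypothetical, and the 5-second default mirrors the suggestion above; `AbortSignal.timeout()` rejects the fetch with a `TimeoutError` `DOMException` when the deadline passes.

```typescript
// Sketch: a discovery fetch that degrades gracefully when LM Studio is
// unreachable or hangs, returning an empty model map in either case.
async function fetchModelsWithTimeout(
  baseURL: string,
  timeoutMs = 5000,
): Promise<unknown> {
  try {
    const res = await fetch(`${baseURL}/models`, {
      signal: AbortSignal.timeout(timeoutMs),
    })
    return res.ok ? await res.json() : {}
  } catch {
    // Covers the TimeoutError from the aborted signal as well as plain
    // network errors (server not running): report no models.
    return {}
  }
}
```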
Issue for this PR
Closes #20609
Type of change
What does this PR do?
Adds native auto-discovery for LM Studio instances running locally, streamlining user experience for offline models.
How did you verify your code works?
Ran the local LM Studio instance and confirmed the opencode provider system properly discovered the local models without manual URL entry. Typecheck and unit tests were updated.
Screenshots / recordings
Checklist