Add OpenAI-compatible provider support for local LLMs#2
Conversation
Co-authored-by: tnglemongrass <113173292+tnglemongrass@users.noreply.github.com>
> Learn more about [agents](https://opencode.ai/docs/agents).

> ### Using OpenAI-Compatible Providers
The feature seems to be a nice one, however I don't really understand as a user what needs to be done so that it fetches its models automatically. I thought it was about the /models slash command, but that shows nothing relevant? Maybe add documentation and make sure the flow works end to end; it might just be that it is currently not fully implemented yet.
Updated the documentation to clarify how the feature works end-to-end. The workflow is:

- Set the `OPENAI_API_BASE` env var (or use the `--api-base` flag)
- Run `opencode models` - this automatically fetches models from your endpoint's `/models` API
- Models appear under the `openai-compatible` provider
- Use them like: `opencode run --model openai-compatible/llama-3-8b "Hello"`

Added a "Quick Start with Ollama" section showing the complete flow. The feature is fully implemented and tested - it fetches models automatically when you configure the API endpoint. (1dc5e0b)
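The steps above boil down to a short session against a local Ollama endpoint (the URL assumes Ollama's default port; the model name is only an example):

```shell
# 1. Point opencode at the local OpenAI-compatible endpoint
export OPENAI_API_BASE="http://localhost:11434/v1"

# 2. List models - opencode fetches them from the endpoint's /models API
opencode models

# 3. Run a prompt against one of the discovered models
opencode run --model openai-compatible/llama-3-8b "Hello"
```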
Does it do some caching? How often does it update? Do I need to run `opencode models` on the CLI? Or does it also work with the slash command /models to update that list? Does it update on startup, or once a day? Does it still update even if the previous update run failed, or do users need to wait until the cache is expired? Wouldn't it be much easier for users to understand if they could run `opencode models --refresh` to trigger a forced update - is that possible?
Models are cached per session (fetched once when first accessed). The `--refresh` flag now forces re-fetching from the endpoint. No automatic time-based updates - use `opencode models --refresh` to manually update when you add/remove models. This is a CLI feature only, not a slash command. (2d6f7f9)
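The cache-or-refresh decision described here can be sketched in a few lines of shell. This is an illustrative stand-in, not the actual implementation: the cache path and JSON stub are invented, and the real CLI queries `${OPENAI_API_BASE}/models` instead of writing a fixture.

```shell
# Illustrative sketch of `opencode models [--refresh]` caching behavior.
CACHE_FILE="${TMPDIR:-/tmp}/opencode-models-cache.json"
rm -f "$CACHE_FILE"  # start with an empty cache for this session

fetch_models() {
  # The real CLI hits the endpoint's /models API; we write a stub instead.
  printf '{"data": [{"id": "llama-3-8b"}]}\n' > "$CACHE_FILE"
}

list_models() {
  # Re-fetch when --refresh is passed or nothing is cached yet;
  # otherwise reuse the per-session cache.
  if [ "$1" = "--refresh" ] || [ ! -f "$CACHE_FILE" ]; then
    fetch_models
  fi
  cat "$CACHE_FILE"
}

list_models            # first call: fetches and caches
list_models            # second call: served from cache
list_models --refresh  # forced re-fetch from the endpoint
```

Note there is deliberately no time-based expiry in the sketch: as stated above, the only way to invalidate the cache is an explicit `--refresh`.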
Co-authored-by: tnglemongrass <113173292+tnglemongrass@users.noreply.github.com>
README.md (outdated)
> # 2. Set the API base URL
> export OPENAI_API_BASE="http://localhost:11434/v1"
>
> # 3. List available models - OpenCode fetches them automatically
Automatically, yes - but when? It would be useful to add "on first call", "once a day", or similar here; otherwise it's useless.
Updated documentation to clarify: "Models are fetched once per session and cached. Use `opencode models --refresh` to force a refresh when you add/remove models from your OpenAI-compatible endpoint." No automatic time-based refresh - manual refresh only. (2d6f7f9)
You have a much nicer quickstart description in the PR description. Use that instead.
Replaced with the simpler Quick Start from the PR description. Removed numbered steps and extra commentary for a cleaner, more concise guide. (108062e)
Implementation Complete ✅
All tasks have been completed successfully:
- `--refresh` flag support for OpenAI-compatible models

Recent Changes

Documentation simplification (latest commit):

Caching and refresh improvements:

- Added `--refresh` flag support to force re-fetching models from OpenAI-compatible endpoints

Previous documentation improvements:

- Clarified that models are fetched from the endpoint's `/models` endpoint (not a slash command)
- Noted that models appear under the `openai-compatible` provider

Earlier changes (based on PR feedback):
Summary
This PR enables OpenCode to connect to any OpenAI-compatible API endpoint:
Key Features
- Automatic model discovery via the `/models` API
- `--refresh` flag to force re-fetching models from the endpoint

Caching Behavior

- Models are cached per session; run `opencode models --refresh` to force-update the model list

Quick Start