Problem
When using the `github-copilot` provider, GPT-5+ models are currently routed through the Chat API. This prevents users from enabling Responses-API-only controls (e.g. `reasoningEffort`, `reasoningSummary`) for those models, even when the model supports them and they are useful for agent behavior tuning.
This is especially noticeable for `github-copilot/gpt-5.*`, where users expect to be able to configure reasoning parameters the same way they can with other OpenAI-compatible providers.
Proposed Solution
Add an opt-in provider option to route supported Copilot models through the Responses API:
- New config flag: `provider.github-copilot.options.useResponsesApi: true`
- Behavior:
  - Always route `codex*` models through `sdk.responses(modelID)` (existing behavior)
  - If `useResponsesApi` is enabled and the model is GPT-5 or later, route through `sdk.responses(modelID)`
  - Otherwise, default to `sdk.chat(modelID)` (no behavior change by default)
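The decision described above could be sketched roughly as follows. This is an illustrative sketch only; the function and helper names (`selectApiRoute`, `isGpt5OrLater`) are hypothetical and not part of the actual opencode codebase:

```typescript
// Hypothetical routing helper for the github-copilot provider.
// Returns which API family a given Copilot model should use.

function isGpt5OrLater(modelID: string): boolean {
  // Matches gpt-5, gpt-5-mini, gpt-5.1, gpt-5.2, ... by major version.
  const m = modelID.match(/^gpt-(\d+)/);
  return m !== null && Number(m[1]) >= 5;
}

function selectApiRoute(
  modelID: string,
  useResponsesApi: boolean,
): "responses" | "chat" {
  // codex* models always use the Responses API (existing behavior).
  if (modelID.includes("codex")) return "responses";
  // Opt-in flag extends Responses routing to GPT-5+ models.
  if (useResponsesApi && isGpt5OrLater(modelID)) return "responses";
  // Default: Chat API, so behavior is unchanged unless opted in.
  return "chat";
}
```

For example, `selectApiRoute("gpt-5.1", true)` yields `"responses"`, while the same model without the flag stays on `"chat"`.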
Example config:

```json
{
  "provider": {
    "github-copilot": {
      "options": {
        "useResponsesApi": true
      }
    }
  }
}
```
Future (Deprecation)
Once GitHub Copilot / VS Code enables Responses API usage broadly (the VS Code setting is currently labeled Experimental), this option can likely be deprecated, or enabled by default for supported models, to reduce configuration surface area.
References
Current API route per model for the `github-copilot` provider:
| Model | API Route |
| --- | --- |
| gpt-4.1 | chat |
| gpt-4o | chat |
| gpt-5 | responses |
| gpt-5-codex | responses |
| gpt-5-mini | chat |
| gpt-5.1 | responses |
| gpt-5.1-codex | responses |
| gpt-5.1-codex-max | responses |
| gpt-5.1-codex-mini | responses |
| gpt-5.2 | responses |
- VS Code UI (settings) relevant to this feature:
  - `GitHub > Copilot > Chat: Use Responses Api` (Experimental)
  - `GitHub > Copilot > Chat: Responses Api Reasoning Effort` (Experimental)
  - `GitHub > Copilot > Chat: Responses Api Reasoning Summary` (Experimental)
- Screenshot of the VS Code settings panel showing the above Copilot settings.
Environment
- Repo: `sst/opencode` (dev)
- Platform observed: Windows (but the behavior is provider/model routing specific, not OS-specific)