Description
Anthropic models (Claude Sonnet) became extremely slow starting with v1.3.7 when used via the GitHub Copilot provider. GPT models over the same Copilot endpoint are unaffected. Downgrading to v1.3.6 immediately restores fast responses.

Tested by downloading release binaries and comparing identical prompts with Sonnet via GitHub Copilot:

| Version | Sonnet Speed |
| --- | --- |
| v1.3.5 | Fast |
| v1.3.6 | Fast |
| v1.3.7 | Extremely slow |
| v1.3.10 | Extremely slow |
| v1.3.12 | Extremely slow |
| v1.3.13 | Extremely slow |

GPT models remain fast across all versions.

Supposition:
- In v1.3.4, the bash tool description was deliberately made static to improve cache hit rates (changelog: "Adjust bash tool description to increase cache hit rates between projects").
- In v1.3.7, the PowerShell support PR (#16069) reintroduced dynamic, per-project values into the bash tool description.

Environment
- Provider: GitHub Copilot
- Model: Claude Sonnet 4.6
- Fast version: v1.3.6
- Slow versions: v1.3.7 through v1.3.13

Related
- v1.3.4 changelog: "Adjust bash tool description to increase cache hit rates between projects"
- applyCaching() in packages/opencode/src/provider/transform.ts sets cache breakpoints on system messages and last conversation turns

Plugins
None
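The caching supposition can be illustrated with a minimal, hypothetical sketch (the names `cacheKey`/`isCacheHit` are illustrative, not OpenCode code): provider-side prompt caching matches on the exact bytes of the cached prefix (system prompt plus tool descriptions), so any per-project value interpolated into a tool description changes the prefix and forces a cache miss on every project switch.

```typescript
import { createHash } from "node:crypto";

// Stand-in for a provider-side prompt cache keyed on the exact bytes
// of the cacheable prefix (system prompt + tool descriptions).
const cache = new Set<string>();

function cacheKey(systemPrompt: string, toolDescription: string): string {
  return createHash("sha256").update(systemPrompt + "\n" + toolDescription).digest("hex");
}

function isCacheHit(systemPrompt: string, toolDescription: string): boolean {
  const key = cacheKey(systemPrompt, toolDescription);
  if (cache.has(key)) return true;
  cache.add(key); // warm the cache for subsequent identical prefixes
  return false;
}

const system = "You are a coding agent.";

// Static description (v1.3.4 behavior): same key in every project.
const staticDesc = "Executes shell commands.";
isCacheHit(system, staticDesc);              // first request warms the cache
console.log(isCacheHit(system, staticDesc)); // true: identical prefix is reused

// Per-project value in the description (v1.3.7 behavior): key differs per project.
const dynamicDesc = (cwd: string) => `Executes shell commands in ${cwd}.`;
isCacheHit(system, dynamicDesc("/home/alice/project-a"));
console.log(isCacheHit(system, dynamicDesc("/home/alice/project-b"))); // false: prefix differs
```

A cache miss means the provider re-processes the entire prefix instead of reusing it, which would show up as exactly the kind of latency regression described above.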
OpenCode version
1.3.13
Steps to reproduce
Configure OpenCode with GitHub Copilot as provider and select a Claude Sonnet model
Download v1.3.6 and v1.3.7 binaries:

```bash
mkdir -p /tmp/opencode-test
curl -L -o /tmp/opencode-test/v1.3.6.zip "https://github.com/anomalyco/opencode/releases/download/v1.3.6/opencode-darwin-arm64.zip"
curl -L -o /tmp/opencode-test/v1.3.7.zip "https://github.com/anomalyco/opencode/releases/download/v1.3.7/opencode-darwin-arm64.zip"
unzip -o /tmp/opencode-test/v1.3.6.zip -d /tmp/opencode-test/v1.3.6
unzip -o /tmp/opencode-test/v1.3.7.zip -d /tmp/opencode-test/v1.3.7
```
Open any project directory and run v1.3.6:

```bash
/tmp/opencode-test/v1.3.6/opencode
```
Send a simple prompt (e.g. "list all files in this directory") → observe fast response
Exit and run v1.3.7 in the same project:

```bash
/tmp/opencode-test/v1.3.7/opencode
```
Send the same prompt → observe significantly slower response
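To replace "fast vs. extremely slow" with numbers, each binary can be wall-clocked on the same one-shot prompt. The sketch below uses `sleep` commands as stand-ins because the exact non-interactive opencode invocation is not assumed here; substitute the real binary calls when reproducing:

```shell
#!/bin/sh
# Time a single command in whole seconds.
time_run() {
  start=$(date +%s)
  "$@" > /dev/null 2>&1
  echo $(( $(date +%s) - start ))
}

fast=$(time_run sleep 0)   # stand-in for the v1.3.6 binary + prompt
slow=$(time_run sleep 2)   # stand-in for the v1.3.7 binary + prompt
echo "v1.3.6: ${fast}s  v1.3.7: ${slow}s"
```

Reporting concrete per-version timings would make the regression table above much easier for maintainers to verify.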
Expected: Both versions respond at similar speed.
Actual: v1.3.7+ is dramatically slower with Anthropic models. GPT models remain fast in both versions.
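For context on the applyCaching() note under Related: based only on that one-line description, a cache-breakpoint transform presumably tags the system message and the most recent conversation turns so the provider can reuse everything up to each marked block. A hypothetical sketch of that shape (not the actual transform.ts code; the two-breakpoint choice and type names are guesses):

```typescript
type Part = { text: string; cacheControl?: { type: "ephemeral" } };
type Msg = { role: "system" | "user" | "assistant"; parts: Part[] };

// Mark the last part of each system message and of the last two
// non-system messages as cache breakpoints.
function applyCachingSketch(msgs: Msg[]): Msg[] {
  for (const m of msgs.filter((m) => m.role === "system")) {
    m.parts[m.parts.length - 1].cacheControl = { type: "ephemeral" };
  }
  const convo = msgs.filter((m) => m.role !== "system");
  for (const m of convo.slice(-2)) {
    m.parts[m.parts.length - 1].cacheControl = { type: "ephemeral" };
  }
  return msgs;
}

const msgs: Msg[] = [
  { role: "system", parts: [{ text: "system prompt + tool descriptions" }] },
  { role: "user", parts: [{ text: "first prompt" }] },
  { role: "assistant", parts: [{ text: "first reply" }] },
  { role: "user", parts: [{ text: "second prompt" }] },
];
applyCachingSketch(msgs);
```

If the system-message content (which includes tool descriptions) varies per project, the breakpoint on it never matches a previously cached prefix, which is consistent with the supposition above.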
Screenshot and/or share link
No response
Operating System
macOS 26.4
Terminal
iTerm2