
feat(opencode): add copilot specific provider to properly handle copilot reasoning tokens #8900

Merged

rekram1-node merged 19 commits into anomalyco:dev from SteffenDE:sd-copilot-provider on Jan 31, 2026
Conversation

@SteffenDE (Contributor) commented Jan 16, 2026

What does this PR do?

This PR adds a copilot-specific provider for the copilot completions API (there's already copilot-specific code for the responses API). The code already states that it is only meant for copilot, so I renamed the folder accordingly for clarity. While it would be great not to need this, I currently don't see a way to avoid the copilot specifics (apart from extracting them into a separate repository).

It is similar to #5346, but it changes the completions code to properly store the reasoning_opaque field and send it back to the copilot API.

This PR is based on code I wrote for tidewave. I used Claude to implement the same changes based on a fresh copy of the upstream openai-compatible provider: https://github.com/vercel/ai/tree/%40ai-sdk/openai-compatible%401.0.30/packages/openai-compatible/src/chat

There are multiple small commits to make reviewing this easier, and I also added tests for the important cases I encountered when handling the reasoning fields.

In the past, the Copilot API failed if the reasoning signature (reasoning_opaque) was not sent back for the Gemini 3 models. At some point GitHub seems to have changed this. Note, though, that the models still behave differently if the reasoning tokens are not passed back. For example, in Tidewave we've often seen Copilot's Gemini 2.5 spiral into a loop when the reasoning is omitted, while it doesn't when the reasoning is included.
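
For illustration, here is a minimal sketch of the round-trip this PR implements. The reasoning_text and reasoning_opaque field names come from the Copilot API; the message shape is otherwise simplified and hypothetical:

    // Minimal sketch, not the actual provider code. reasoning_text and
    // reasoning_opaque are Copilot's fields on assistant messages; the rest
    // of the shape is simplified.
    interface CopilotAssistantMessage {
      role: "assistant"
      content: string | null
      reasoning_text?: string // the model's thinking tokens from the last turn
      reasoning_opaque?: string // opaque signature that must be echoed back
    }

    // On the next turn, the previous assistant message is sent back with both
    // fields intact so the model keeps its reasoning context:
    const history: CopilotAssistantMessage[] = [
      {
        role: "assistant",
        content: "Here is the answer.",
        reasoning_text: "Considered approaches A and B...",
        reasoning_opaque: "signature-returned-by-the-api",
      },
    ]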

How did you verify your code works?

You can chat with Gemini 2.5 Pro (Gemini 3 Pro Preview is currently broken because of #8829, but it works in the Tidewave version of the code and all the fields are the same).


Closes #6864.

@github-actions (Contributor)

Hey! Your PR title Add copilot specific provider to properly handle copilot reasoning tokens doesn't follow conventional commit format.

Please update it to start with one of:

  • feat: or feat(scope): new feature
  • fix: or fix(scope): bug fix
  • docs: or docs(scope): documentation changes
  • chore: or chore(scope): maintenance tasks
  • refactor: or refactor(scope): code refactoring
  • test: or test(scope): adding or updating tests

Where scope is the package name (e.g., app, desktop, opencode).

See CONTRIBUTING.md for details.

@github-actions (Contributor)

The following comment was made by an LLM; it may be inaccurate:

Related PRs Found

PR #5346: feat: reasoning text for Gemini 3 Pro in GH Copilot

Why it's related: This PR is explicitly mentioned in the description as similar work. PR #8900 builds on the approach from #5346 but extends it to properly handle reasoning_opaque field storage and transmission for the completions API, whereas #5346 focused on reasoning text for the responses API.


PR #5877: fix(github-copilot): auto-route GPT-5+ models to Responses API

Why it's related: Deals with GitHub Copilot routing and API handling, which is in the same domain as the copilot-specific provider changes.

@SteffenDE changed the title from "Add copilot specific provider to properly handle copilot reasoning tokens" to "feat(opencode): add copilot specific provider to properly handle copilot reasoning tokens" on Jan 16, 2026
@github-actions (Contributor)

Thanks for your contribution!

This PR doesn't have a linked issue. All PRs must reference an existing issue.

Please:

  1. Open an issue describing the bug/feature (if one doesn't exist)
  2. Add Fixes #<number> or Closes #<number> to this PR description

See CONTRIBUTING.md for details.

@Coruscant11

Hello, I can confirm the reasoning works with Gemini 2.5 Pro; however, it doesn't seem to work with GPT 5.2 Codex / GPT 5.2 or Opus 4.5.

@SteffenDE (Contributor, Author)

@Coruscant11 thanks for trying! GPT-5 variants use the responses API version, which this PR does not affect. I need to check Opus - it's possible that reasoning needs to be explicitly enabled.

christso added a commit to EntityProcess/opencode that referenced this pull request Jan 17, 2026
Document findings from investigating Claude extended thinking via
GitHub Copilot:

- Responses API rejects Claude models entirely
- Chat API accepts thinking params but ignores them
- Gemini models DO return reasoning_text, Claude does not
- Feature blocked on GitHub enabling it server-side

Include test scripts used during investigation for future reference.

See also: PR anomalyco#8900 which adds proper reasoning handling for Gemini.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@aadishv commented Jan 18, 2026

Here's the debugging I did so far -- lmk if there's something wrong with my methodology!

I added basic logging for the opaque signatures 1) received from the API and 2) seen by convertToOpenAICompatibleChatMessages. See aadishv@5776493

Then I ran a basic test conversation.

The logs indicated that convertToOpenAICompatibleChatMessages didn't receive any opaque signatures back (see the "Parsed back:" log).

The research I did earlier indicates that this is because the AI SDK version currently being used (5.0.x?) doesn't automatically propagate providerMetadata from the response to the next request. IIRC, we can get this PR to work by upgrading to a version of the AI SDK that does do this, but I don't recall the specifics.
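
For context, a sketch of the round-trip being discussed, based on this PR's commit notes (reasoning metadata is stored under a copilot key in providerMetadata and read back from providerOptions.copilot); the types here are simplified, not the AI SDK's actual ones:

    // Simplified sketch: the provider attaches reasoning_opaque to the
    // response under providerMetadata.copilot, and expects to read it back
    // from providerOptions.copilot when converting the next request.
    type ReasoningPart = {
      type: "reasoning"
      text: string
      providerMetadata?: { copilot?: { reasoningOpaque?: string } }
    }

    // If the SDK does not copy providerMetadata from the response back onto
    // the next request's message parts as providerOptions, this returns
    // undefined and reasoning_opaque is never sent back:
    function extractReasoningOpaque(part: {
      providerOptions?: { copilot?: { reasoningOpaque?: string } }
    }): string | undefined {
      return part.providerOptions?.copilot?.reasoningOpaque
    }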

Hope this helps @SteffenDE :)

@aadishv commented Jan 18, 2026

I also remember that @rekram1-node (not pinging right now) was quite knowledgeable about the Copilot API/AI SDK relationship, so he might be able to give some insights.

@SteffenDE (Contributor, Author)

@aadishv the place you log in convertToOpenAICompatibleChatMessages is wrong though; it's just after initializing the variable, so you'll never see opaque being anything other than undefined there. Try logging here: https://github.com/aadishv/opencode/blob/5776493d4add243cf2bb155ed3989dac835328a9/packages/opencode/src/provider/sdk/copilot/chat/openai-compatible-chat-language-model.ts#L320, which is where I tested and did see it working. But as mentioned, I'm only on my phone right now; I will verify tomorrow :)

I was also using ai-sdk v5 in another project, from which I adapted the code, and it's definitely passing the metadata back.

@aadishv commented Jan 18, 2026

Hmm interesting, I added this log, but I still don't see the signature. (Screenshots: the added log statement, the log output without the signature, and the conversation; the last message is the one the logs are from.)

Not sure if there's something else wrong with my local setup though. If it worked for you then no worries!

@SteffenDE (Contributor, Author)

@aadishv the reasoning_opaque is part of the previous assistant message, not the user message. So your check would need to be args.messages.at(-2).reasoning_opaque

SteffenDE and others added 10 commits on January 19, 2026 at 10:31
Consolidate all copilot provider code under src/provider/sdk/copilot/:
- Move responses/ folder from openai-compatible
- Move provider setup files
- Rename openai-compatible-provider.ts to copilot-provider.ts

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add type-only imports for verbatimModuleSyntax compliance
- Update provider.ts to import from ./sdk/copilot
- Update index.ts to reference renamed copilot-provider.ts

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add Copilot-specific cache control to all messages for prompt caching:
- System messages: use content array format with cache control on each part
- User messages: add cache control at message level
- Assistant messages: add cache control at message level
- Tool messages: add cache control at message level

Also update OpenAICompatibleSystemMessage type to support content array format.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add Copilot-specific reasoning fields to response and streaming schemas:
- reasoning_text: the actual reasoning text (like reasoning_content)
- reasoning_opaque: opaque signature for multi-turn reasoning

Update doGenerate and doStream to:
- Extract reasoning from reasoning_text in addition to reasoning_content/reasoning
- Include reasoning_opaque in providerMetadata for multi-turn context
- Emit reasoning-end BEFORE text-start or tool-input-start when they arrive
  in the same chunk (critical for proper event ordering)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…er options

- Change provider options key from openaiCompatible to copilot
- Add reasoning_text and reasoning_opaque extraction from assistant messages
- Extract reasoningOpaque from providerOptions.copilot on any message part
- Use null instead of empty string for content when assistant has no text
- Add Copilot-specific types to OpenAICompatibleMessage interfaces:
  - CopilotCacheControl type for cache control fields
  - reasoning_text and reasoning_opaque on assistant messages
- Update tests for new implementation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Update copilot-provider.ts to import OpenAICompatibleChatLanguageModel
from local chat folder instead of @ai-sdk/openai-compatible, enabling
the Copilot-specific features:
- copilot_cache_control on all messages
- reasoning_text/reasoning_opaque schema support
- Proper reasoning-end event ordering in streaming

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add validation to throw InvalidResponseDataError if multiple
reasoning_opaque values are received in a single response, as
only one thinking part per response is supported.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Remove reasoning_content and reasoning fields from schemas and
extraction logic. Copilot only uses reasoning_text for thinking
tokens, so we don't need the fallback chain.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The new copilot provider looks for providerOptions.copilot, not
providerOptions.openaiCompatible. This ensures that custom options
are passed through.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
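
As an aside on the event-ordering commit above, here is a minimal sketch of the pattern with a simplified chunk handler (illustrative only, not the provider's actual doStream code):

    // Simplified sketch of the ordering rule: when one chunk carries both the
    // end of reasoning and the start of text, emit reasoning-end first.
    type StreamEvent =
      | { type: "reasoning-end"; reasoningOpaque?: string }
      | { type: "text-start" }
      | { type: "text-delta"; text: string }

    function handleChunk(
      delta: { reasoning_text?: string; reasoning_opaque?: string; content?: string },
      state: { reasoningOpen: boolean; textOpen: boolean; opaque?: string },
      emit: (event: StreamEvent) => void,
    ): void {
      if (delta.reasoning_opaque) state.opaque = delta.reasoning_opaque
      // Close the reasoning part BEFORE opening the text part, even when both
      // arrive in the same chunk, so consumers see well-ordered events.
      if (state.reasoningOpen && delta.content !== undefined) {
        emit({ type: "reasoning-end", reasoningOpaque: state.opaque })
        state.reasoningOpen = false
      }
      if (delta.content !== undefined) {
        if (!state.textOpen) {
          emit({ type: "text-start" })
          state.textOpen = true
        }
        emit({ type: "text-delta", text: delta.content })
      }
    }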
@SteffenDE force-pushed the sd-copilot-provider branch from 5f712a1 to 82ca0a3 on January 19, 2026 at 10:34
@caozhiyuan (Contributor) commented Jan 20, 2026

@SteffenDE Perhaps you can refer to https://github.com/caozhiyuan/copilot-api/tree/all/src/routes/messages, which supports Claude thinking & interleaved thinking (not native, via prompt), Gemini thinking, and the GPT responses API. The Copilot models API returns model information, such as which endpoints a model supports (responses API / chat completions / messages API; the messages API may be supported in the next version of Claude or the next major version of VS Code):

[{
	"billing": {
		"is_premium": true,
		"multiplier": 1,
		"restricted_to": [
			"pro",
			"pro_plus",
			"business",
			"enterprise"
		]
	},
	"capabilities": {
		"family": "gpt-5.2-codex",
		"limits": {
			"max_context_window_tokens": 400000,
			"max_output_tokens": 128000,
			"max_prompt_tokens": 272000,
			"vision": {
				"max_prompt_image_size": 3145728,
				"max_prompt_images": 1,
				"supported_media_types": [
					"image/jpeg",
					"image/png",
					"image/webp",
					"image/gif"
				]
			}
		},
		"object": "model_capabilities",
		"supports": {
			"parallel_tool_calls": true,
			"streaming": true,
			"structured_outputs": true,
			"tool_calls": true,
			"vision": true
		},
		"tokenizer": "o200k_base",
		"type": "chat"
	},
	"id": "gpt-5.2-codex",
	"is_chat_default": false,
	"is_chat_fallback": false,
	"model_picker_category": "powerful",
	"model_picker_enabled": true,
	"name": "GPT-5.2-Codex",
	"object": "model",
	"policy": {
		"state": "enabled",
		"terms": "Enable access to the latest GPT-5.2-Codex model from OpenAI. [Learn more about how GitHub Copilot serves GPT-5.2-Codex](https://gh.io/copilot-openai)."
	},
	"preview": false,
	"supported_endpoints": [
		"/responses"
	],
	"vendor": "OpenAI",
	"version": "gpt-5.2-codex"
},{
    "billing": {
      "is_premium": true,
      "multiplier": 3,
      "restricted_to": [
        "pro",
        "pro_plus",
        "max",
        "business",
        "enterprise"
      ]
    },
    "capabilities": {
      "family": "claude-opus-4.5",
      "limits": {
        "max_context_window_tokens": 160000,
        "max_output_tokens": 16000,
        "max_prompt_tokens": 128000,
        "vision": {
          "max_prompt_image_size": 3145728,
          "max_prompt_images": 5,
          "supported_media_types": [
            "image/jpeg",
            "image/png",
            "image/webp"
          ]
        }
      },
      "object": "model_capabilities",
      "supports": {
        "max_thinking_budget": 32000,
        "min_thinking_budget": 1024,
        "parallel_tool_calls": true,
        "streaming": true,
        "tool_calls": true,
        "vision": true
      },
      "tokenizer": "o200k_base",
      "type": "chat"
    },
    "id": "claude-opus-4.5",
    "is_chat_default": false,
    "is_chat_fallback": false,
    "model_picker_category": "powerful",
    "model_picker_enabled": true,
    "name": "Claude Opus 4.5",
    "object": "model",
    "policy": {
      "state": "disabled",
      "terms": "Enable access to the latest Claude Opus 4.5 model from Anthropic. [Learn more about how GitHub Copilot serves Claude Opus 4.5](https://gh.io/copilot-anthropic)."
    },
    "preview": false,
    "supported_endpoints": [
      "/chat/completions"
    ],
    "vendor": "Anthropic",
    "version": "claude-opus-4.5"
  }]
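
For illustration, a client could use the supported_endpoints field from this payload to route requests per model; a minimal sketch (not opencode's actual routing logic):

    // Sketch only: route per model based on supported_endpoints from the
    // Copilot models API payload above.
    interface CopilotModel {
      id: string
      supported_endpoints: string[] // e.g. ["/responses"] or ["/chat/completions"]
    }

    function endpointFor(model: CopilotModel): string {
      // gpt-5.2-codex advertises only /responses, while claude-opus-4.5 only
      // advertises /chat/completions, so routing on this field avoids
      // hardcoding model lists.
      return model.supported_endpoints.includes("/responses")
        ? "/responses"
        : "/chat/completions"
    }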

@caozhiyuan (Contributor)

I discovered through copilot-api that it's possible to bypass the max_prompt_tokens limit, allowing even Claude models to exceed 200k, though I'm not sure if this is a bug in Copilot. However, I haven't abused this.

@rekram1-node (Collaborator)

okay I played w/ this a bit, I think it's good now

@rekram1-node merged commit d9f18e4 into anomalyco:dev on Jan 31, 2026 (5 of 7 checks passed)
@caozhiyuan (Contributor)

It seems we need to change "Openai-Intent" to "conversation-agent", and for Claude model thinking the variants logic needs to change to:

    if (model.id.includes("claude")) {
      return {
        high: { thinking_budget: Math.min(15_999, Math.floor(model.limit.output / 2 - 1)) },
        max: { thinking_budget: Math.min(31_999, model.limit.output - 1) },
      }
    }

However, it can only think in the first turn (chat completions does not support interleaved thinking). @SteffenDE Also, the Gemini 3 Flash model is missing the reasoning_opaque thinking-signature processing.

@caozhiyuan (Contributor)

@rekram1-node could you add a Flag.OPENCODE_EXPERIMENTAL_xxx (default false) for the Claude messages API?

@rekram1-node (Collaborator)

@caozhiyuan sure, but note that there's a little more to it. Did you see I was actually using it in prod for a while, but we ran into rate limit issues for its users?

@rekram1-node (Collaborator)

> It seems we need to change "Openai-Intent" to "conversation-agent"

What's this for again?

@rekram1-node (Collaborator)

lol u know the api rlly well

@caozhiyuan (Contributor)

> It seems we need to change "Openai-Intent" to "conversation-agent"
>
> What's this for again?

@rekram1-node for the chat completions API, Claude thinking needs conversation-agent.
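
For illustration, the suggested change amounts to sending a different intent header on chat completions requests. The "Openai-Intent" name and "conversation-agent" value are from this thread; the previous value is an assumption based on the later comment about changing "edits" to "agent":

    // Illustrative only: intent header for Copilot chat completions requests.
    const headers = {
      "Openai-Intent": "conversation-agent", // was presumably "conversation-edits"
    }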

@NateSmyth (Contributor)

@caozhiyuan I'm pretty sure the opencode client is actually blocked on Copilot's messages proxy now. It just 404s on messages for me now (unless I spoof being VS Code, with a token obtained with the VS Code client id). Could just be me, but the fact that it works with the VS Code auth makes me think it's actually blocked.

And yeah, this does need to handle Claude vs. Gemini reasoning_opaque; right now, if you switch from Gemini to Claude mid-session, Claude will get a Gemini reasoning signature and 400.
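
A sketch of the kind of guard being described, assuming the stored signature is tagged with the model that produced it (hypothetical; the actual fix landed in the PR linked below):

    // Hypothetical guard: only echo a reasoning signature back to the model
    // family that produced it, so switching Gemini -> Claude mid-session
    // doesn't send Claude a Gemini signature (which returns a 400).
    interface StoredReasoning {
      modelId: string // e.g. "gemini-2.5-pro"
      reasoningOpaque: string
    }

    function reasoningOpaqueFor(
      stored: StoredReasoning | undefined,
      currentModelId: string,
    ): string | undefined {
      if (!stored) return undefined
      // Crude family check for illustration: compare the vendor prefix.
      const family = (id: string) => id.split("-")[0]
      return family(stored.modelId) === family(currentModelId)
        ? stored.reasoningOpaque
        : undefined
    }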

@rekram1-node (Collaborator)

#11569

working on a fix for all that

@rekram1-node (Collaborator)

> for the chat completions API, Claude thinking needs conversation-agent.

Does this actually make a meaningful difference, or is this a consistency thing?

I think the Copilot team was telling me this header may not be necessary, but I forget.

@caozhiyuan (Contributor)

> It seems we need to change "Openai-Intent" to "conversation-agent", and for Claude model thinking the variants logic needs to change (see the snippet above), but it can only think in the first turn (chat completions does not support interleaved thinking). @SteffenDE Also, the Gemini 3 Flash model is missing the reasoning_opaque thinking-signature processing.

@rekram1-node This morning I tried the dev branch code, and the Claude model wasn't outputting thinking information. After changing "edits" to "agent" in the header and adding the high/max variants, it worked.

@rekram1-node (Collaborator) commented Feb 1, 2026

It won't output thinking unless you use a thinking variant though; hit ctrl+t and you'll see it. We should probably default it to on, though.

@SteffenDE (Contributor, Author)

@caozhiyuan I explicitly tested thinking with both intent header values and it did not make a difference

@rekram1-node (Collaborator)

cool, I'll go over my PR one more time to make sure everything still works and then merge it; that should fix some small issues

@caozhiyuan (Contributor)

It's possible that Copilot has relaxed its restrictions. Previously, when the thinking budget was first supported, only agent calls were supported. This morning, I installed the latest version of OpenCode via npm. Typing a prompt didn't display any output (although output was returned). After restarting OpenCode, it worked normally, and switching to the previous session also displayed the information correctly. Pressing Ctrl+T didn't display variant information either, so I tried making some changes. @SteffenDE @rekram1-node

@caozhiyuan (Contributor)

@rekram1-node It would be better if it could support different variations of the thinking budget.

@rekram1-node (Collaborator)

Yeah, we can do that; I'm game.

alexyaroshuk pushed a commit to alexyaroshuk/opencode that referenced this pull request Feb 1, 2026
…lot reasoning tokens (anomalyco#8900)

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Aiden Cline <63023139+rekram1-node@users.noreply.github.com>
Co-authored-by: Aiden Cline <aidenpcline@gmail.com>
ishaksebsib pushed a commit to ishaksebsib/opencode that referenced this pull request Feb 4, 2026
…lot reasoning tokens (anomalyco#8900)

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Aiden Cline <63023139+rekram1-node@users.noreply.github.com>
Co-authored-by: Aiden Cline <aidenpcline@gmail.com>
pRizz added a commit to pRizz/opencode that referenced this pull request Feb 6, 2026
* feat(opencode): add copilot specific provider to properly handle copilot reasoning tokens (anomalyco#8900)

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Aiden Cline <63023139+rekram1-node@users.noreply.github.com>
Co-authored-by: Aiden Cline <aidenpcline@gmail.com>
fanjia1024 pushed a commit to fanjia1024/opencode that referenced this pull request Feb 10, 2026
…lot reasoning tokens (anomalyco#8900)

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Aiden Cline <63023139+rekram1-node@users.noreply.github.com>
Co-authored-by: Aiden Cline <aidenpcline@gmail.com>


Development

Successfully merging this pull request may close these issues.

No reasoning at all through Github Copilot provider

7 participants