
fix: disable assistant prefill for Claude 4.6 models#14772

Open
hsuanguo wants to merge 7 commits into anomalyco:dev from hsuanguo:fix/github-copilot-claude-prefill-issue-13768

Conversation


@hsuanguo hsuanguo commented Feb 23, 2026

Issue for this PR

Closes #13768

Type of change

  • [x] Bug fix
  • [ ] New feature
  • [ ] Refactor / code improvement
  • [ ] Documentation

What does this PR do?

Problem:
Claude Opus 4.6 and Sonnet 4.6 models reject requests where the last message in the conversation is an assistant message, returning: "This model does not support assistant message prefill. The conversation must end with a user message."

This happens in at least two scenarios:

  1. When OpenCode reaches max steps and appends an assistant message as a hint to the model
  2. When a prior turn was interrupted/aborted mid-session, leaving an assistant message as the last entry in the conversation

The issue affects these models across all providers — native Anthropic API, GitHub Copilot, OpenRouter, etc.
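For illustration, the failing request shape looks like this (hypothetical types, not OpenCode's actual ones). The Messages API treats a trailing assistant message as a prefill of the model's reply, which Claude 4.6 rejects:

```typescript
type ChatMessage = { role: "user" | "assistant"; content: string };

// An interrupted turn leaves the assistant message last in the array.
const messages: ChatMessage[] = [
  { role: "user", content: "Refactor this function." },
  { role: "assistant", content: "Done. Next I will..." }, // aborted mid-turn
];

// Sending this as-is makes Claude 4.6 respond with:
// "This model does not support assistant message prefill."
const endsWithAssistant = messages[messages.length - 1].role === "assistant";
console.log(endsWithAssistant); // true
```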

Fix:
Added stripTrailingAssistant() inside normalizeMessages() in provider/transform.ts. Since normalizeMessages is called from the single ProviderTransform.message() choke point used for all outgoing requests, this covers every code path rather than patching individual call sites.

For models that don't support prefill (detected via supportsAssistantPrefill()), any trailing assistant messages are stripped before the request is sent. For all other models, nothing changes.
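The helper names come from the PR description; the bodies below are only a hedged sketch, since the actual model-ID matching and message shapes in provider/transform.ts may differ:

```typescript
// Sketch only: real OpenCode message objects carry more fields than this.
type Msg = { role: "user" | "assistant" | "system"; content: unknown };

// Assumption: Claude 4.6 models are identified by ID pattern; the real
// check may instead consult provider/model metadata.
function supportsAssistantPrefill(modelID: string): boolean {
  return !/claude-(opus|sonnet)-4[.-]6/.test(modelID);
}

// Drop any trailing assistant messages so the conversation ends with a
// user (or system) message before the request is sent.
function stripTrailingAssistant(messages: Msg[], modelID: string): Msg[] {
  if (supportsAssistantPrefill(modelID)) return messages;
  let end = messages.length;
  while (end > 0 && messages[end - 1].role === "assistant") end--;
  return messages.slice(0, end);
}
```

Hooking this into normalizeMessages means every outgoing request passes through the guard, which is why it covers both the max-steps hint and interrupted-turn cases without touching individual call sites.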

How did you verify your code works?

Confirmed the root cause using logs from a live session that reproduced the error:

  • providerID=anthropic, modelID=claude-opus-4-6, URL https://api.anthropic.com/v1/messages
  • The last message in the 134-message conversation was role: assistant from a prior interrupted turn — not a max steps prefill
  • This showed the original fix was too narrow (only covered max steps)

Added integration tests ("ProviderTransform.message - strip trailing assistant for Claude 4.6") that verify:

  • Trailing assistant message is stripped for Claude 4.6
  • Trailing assistant message is preserved for Claude 3.5 (which supports prefill)
  • Multiple consecutive trailing assistant messages are all stripped
  • No-op when last message is already a user message

Screenshots / recordings

Not applicable - backend logic fix

Checklist

  • [x] I have tested my changes locally
  • [x] I have not included unrelated changes in this PR

Claude Opus 4.6 and Sonnet 4.6 models don't support assistant message
prefill across all providers (Anthropic, GitHub Copilot, OpenRouter, etc),
causing errors when OpenCode reaches max steps.

This fix detects Claude 4.6 models and uses a user message instead of
assistant prefill for max steps handling, while preserving prefill for
models that support it.

Changes:
- Add supportsAssistantPrefill() to detect Claude 4.6 models
- Update max steps handling to conditionally use user/assistant role
- Add comprehensive tests for various providers and model versions

Fixes anomalyco#13768
@github-actions github-actions bot added the needs:compliance This means the issue will auto-close after 2 hours. label Feb 23, 2026
@hsuanguo hsuanguo closed this Feb 23, 2026
@github-actions github-actions bot removed the needs:compliance This means the issue will auto-close after 2 hours. label Feb 23, 2026
@github-actions (Contributor)

Thanks for updating your PR! It now meets our contributing guidelines. 👍

…co#13768)

The original fix only covered the isLastStep/MAX_STEPS prefill case.
Logs revealed the error also occurs in normal conversation flow when
any assistant message ends up last in the message array (e.g. from an
aborted or interrupted turn).

Broader fix: add stripTrailingAssistant() inside normalizeMessages()
which is the single choke point for all message processing. This
strips any trailing assistant messages for models that don't support
prefill (Claude opus-4.6 and sonnet-4.6 across all providers), covering
all code paths rather than just the max steps case.

Also reverts the prompt.ts role switch since normalizeMessages handles
it more correctly at the transport layer.
@adlternative

@hsuanguo Has this been fixed yet? This feature is very important to me.

@hsuanguo (Author)

> @hsuanguo Has this been fixed yet? This feature is very important to me.

I haven't seen the error in the last couple of days, so I assume it's fixed.


coygeek commented Mar 2, 2026

I validated PR #14772 locally against issue #13768 by testing it side-by-side with other candidate fixes and replaying the real failing session exports shared in the issue discussion.

To make sure the result was not specific to one path, I used separate worktrees for each approach: the PR branch itself (pr-14772), plus two alternative implementations (a session loop-guard approach and a Copilot-adapter-only cleanup approach). In every worktree, I ran bun run typecheck, bun run build, and bun run lint (which in this repo executes the full coverage test suite), and then replayed the exported failing sessions.

For replay validation, I used the two exported sessions attached in issue comments by @sbrunecker and @Yiximail. On baseline dev, both replays still produced an outbound prompt that ended with an assistant message, which matches the failure condition for Claude 4.6 models. On PR #14772, both replays no longer had a trailing assistant message at send time, which is the expected behavior for this fix.

The alternatives were informative but incomplete: both the loop-guard variant and the Copilot-adapter-only variant had passing tests, but replay still showed a trailing assistant in normalized outbound messages for real failing data. In other words, they reduced risk in some paths but did not eliminate the root transport-shape problem.

Based on these comparisons, PR #14772 is the correct fix for #13768. It addresses the issue at the transport normalization layer (ProviderTransform.message), which is why it consistently handles both max-step and interrupted-turn trailing-assistant scenarios across provider paths.
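The replay check described above can be approximated like this (the export file shape and path are assumptions; OpenCode's actual session export format may differ):

```typescript
import { readFileSync } from "node:fs";

// Assumed minimal shape of an exported session's outbound prompt.
type ExportedSession = { messages: { role: string }[] };

// True when the prompt would trigger the Claude 4.6 prefill rejection.
function hasTrailingAssistant(session: ExportedSession): boolean {
  const last = session.messages[session.messages.length - 1];
  return last?.role === "assistant";
}

// Illustrative usage against an exported session file:
// const session: ExportedSession = JSON.parse(readFileSync("session-export.json", "utf8"));
// hasTrailingAssistant(session) -> true on baseline dev, false on the PR branch
```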

@adlternative

Is nobody going to merge this?

@klondenberg-bioptimus

I tried this and it also fixed the issue for me (both Claude Opus 4.6 on Bedrock and on GitHub Copilot).

@hulk-ilmtec

Please prioritize merging this PR. This bug makes Claude 4.6 models (Opus and Sonnet) essentially unreliable on AWS Bedrock, which is the primary provider for enterprise users.

Impact

  • Every multi-turn conversation eventually hits this error — it's not edge-case, it's a matter of when, not if
  • The error is invisible to the user — the model's response from step 0 is generated and saved, but the step 1 prefill error overwrites it in the web UI, making it look like the agent returned nothing
  • There is no config workaround — users cannot disable prefill behavior, and the continue workaround is fragile and breaks Slack/API integrations where there's no human to retry
  • This affects all providers serving Claude 4.6: Bedrock, Anthropic direct, GitHub Copilot, OpenRouter, Vertex
  • The issue has 43 comments and 16+ upvotes on #13768 (This model does not support assistant message prefill / Github Copilot with Opus 4.6) — it's the most-requested bug fix in the tracker right now

Why this PR specifically

  • Clean, focused fix — single function (stripTrailingAssistant) in the right choke point (normalizeMessages in provider/transform.ts)
  • Covers all code paths (max steps hint, interrupted streams, normal continuation)
  • Has integration tests
  • Author has kept it rebased on dev for 33 days
  • No side effects for models that support prefill (Claude 3.5, etc.)

Our situation

We run OpenCode headless on a server with Bedrock Opus 4.6, accessed via web UI, Desktop app, and Slack. This bug forced us to build from this PR branch ourselves. We'd much rather track stable releases.

cc @adamdotdevin @rekram1-node

superdav42 pushed a commit to superdav42/aidevops that referenced this pull request Apr 8, 2026
…aude 4.x prefill error (GH#17790)

Adds prefill-guard.mjs to opencode-aidevops plugin and composes it into the
experimental.chat.messages.transform hook after the existing TTSR hook.

The guard strips trailing assistant messages from the outgoing LLM payload
when safe — preserving messages with finish=tool-calls or active tool parts
so legitimate tool-call flows are untouched. Session DB is never modified.

Fixes the mobile webui error 'This model does not support assistant message
prefill' on Claude Opus/Sonnet 4.x. Mirrors upstream PRs anomalyco/opencode#14772,
marcusquinn#16921, and #18091 which add the same logic in provider/transform.ts but are
not yet merged in opencode v1.3.17 / v1.4.0.

Fixes marcusquinn#17790
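A hedged sketch of the "strip only when safe" rule this commit describes (the finish and parts fields are assumptions about the plugin's payload shape, not a confirmed API):

```typescript
type OutMsg = {
  role: "user" | "assistant";
  finish?: string; // e.g. "tool-calls" when the turn ended by calling tools
  parts?: { type: string }[];
};

// Preserve legitimate tool-call flows: an assistant message that finished
// with tool calls, or still carries tool parts, must not be stripped.
function isSafeToStrip(msg: OutMsg): boolean {
  if (msg.finish === "tool-calls") return false;
  if (msg.parts?.some((p) => p.type.startsWith("tool"))) return false;
  return true;
}

// Intended for the experimental.chat.messages.transform hook: drop trailing
// assistant messages from the outgoing payload only when doing so cannot
// break a tool flow. The session DB itself is never modified.
function prefillGuard(messages: OutMsg[]): OutMsg[] {
  let end = messages.length;
  while (end > 0) {
    const m = messages[end - 1];
    if (m.role !== "assistant" || !isSafeToStrip(m)) break;
    end--;
  }
  return messages.slice(0, end);
}
```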
marcusquinn pushed a commit to superdav42/aidevops that referenced this pull request Apr 9, 2026
…aude 4.x prefill error (GH#17790)
Development

Successfully merging this pull request may close these issues.

This model does not support assistant message prefill / Github Copilot with Opus 4.6

6 participants