## Bug Description
When using OpenCode with Azure AI Claude (via LiteLLM proxy), aborting a tool call (Ctrl+C or interrupt) causes the next LLM request to fail with:

```
litellm.BadRequestError: Azure_aiException - "This model does not support assistant message prefill. The conversation must end with a user message."
```
The raw XML of the attempted tool call leaks into the conversation as visible text:

```
<invoke name="task">
<parameter name="description">...</parameter>
<parameter name="prompt">...</parameter>
</invoke>
litellm.BadRequestError: Azure_aiException - ...
```
The user must send a new message to reset the conversation turn. The `question` tool also consistently triggers this error on Azure AI Claude.
## Environment

- OpenCode 1.3.0 (Homebrew)
- Provider: Azure AI Claude (Opus 4.6) via LiteLLM proxy (Orion)
- Model group: `azure_ai/claude-opus-4-6`
## Root Cause

Azure AI's Claude inference layer does not support "assistant message prefill" (a trailing `role: "assistant"` entry in the messages array). The standard Anthropic API does support this, so OpenCode works fine with direct Anthropic but fails through Azure AI / LiteLLM.
There are two code paths that produce a trailing assistant message:
### 1. Aborted assistant message with text but no tool calls
When a user aborts mid-stream after text has started streaming but before any tool call is registered, the aborted assistant message (with partial text) gets included in history.
The error filtering logic in `toModelMessages()` intentionally keeps aborted messages that have "substantive" parts (text qualifies):

`packages/opencode/src/session/message-v2.ts` lines 690-698:
```typescript
if (
  msg.info.error &&
  !(
    MessageV2.AbortedError.isInstance(msg.info.error) &&
    msg.parts.some((part) => part.type !== "step-start" && part.type !== "reasoning")
  )
) {
  continue // skip this assistant message
}
```
When an aborted message has tool calls, the tool results get appended as `role: "user"` (Anthropic convention), so the sequence is fine. But when it only has text, the sequence ends with `role: "assistant"`.
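As a minimal illustration (hypothetical message contents and a simplified message type, not OpenCode's actual shapes), the two abort cases produce histories like this:

```typescript
type ModelMessage = { role: "user" | "assistant"; content: string }

// Aborted mid-tool-call: the tool result is appended as a user-role
// message (Anthropic convention), so the history still ends with "user".
const abortedWithToolCall: ModelMessage[] = [
  { role: "user", content: "run the task" },
  { role: "assistant", content: "[tool call: task]" },
  { role: "user", content: "[tool result: aborted]" },
]

// Aborted with only partial streamed text: nothing follows the assistant
// message, so the history ends with "assistant" -- the case Azure AI
// Claude rejects as unsupported prefill.
const abortedTextOnly: ModelMessage[] = [
  { role: "user", content: "run the task" },
  { role: "assistant", content: "partial streamed text..." },
]

// The property the provider enforces:
const endsWithAssistant = (msgs: ModelMessage[]) =>
  msgs[msgs.length - 1]?.role === "assistant"
```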
### 2. `isLastStep` prefill injection

When the agentic loop hits the step limit, a hardcoded assistant message is appended unconditionally:
`packages/opencode/src/session/prompt.ts` lines 700-710:

```typescript
messages: [
  ...(await MessageV2.toModelMessages(msgs, model)),
  ...(isLastStep
    ? [{ role: "assistant" as const, content: MAX_STEPS }]
    : []),
],
```
This is valid prefill for the direct Anthropic API but rejected by Azure AI Claude.
## Suggested Fix

### Option A (recommended): Strip trailing assistant messages in `normalizeMessages()`

`packages/opencode/src/provider/transform.ts` function `normalizeMessages()` (line 49) already performs provider-specific message normalization. For providers that don't support prefill, add logic that detects when the message array ends with `role: "assistant"` and either removes that message or appends a synthetic user message.
The codebase already has precedents for this pattern:

- Synthetic user messages for Gemini reasoning models (`prompt.ts` lines 520-543)
- Message sequence fixes for Mistral (`transform.ts` lines 138-151)
- LiteLLM dummy tool injection (`llm.ts` lines 160-178)
Detection criteria for "no prefill support":

- Provider uses `@ai-sdk/openai-compatible` (LiteLLM setups)
- Provider ID contains `azure_ai`
- Or a provider-level capability flag
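A rough sketch of what Option A could look like. `supportsPrefill`, `stripTrailingAssistant`, and the simplified message type are all hypothetical names for illustration, not the actual `transform.ts` API:

```typescript
type ModelMessage = { role: "system" | "user" | "assistant" | "tool"; content: string }

// Hypothetical capability check built from the detection criteria above;
// real detection would live alongside the other provider checks in
// transform.ts (or come from a provider-level capability flag).
function supportsPrefill(providerID: string, npm?: string): boolean {
  if (npm === "@ai-sdk/openai-compatible") return false // LiteLLM setups
  if (providerID.includes("azure_ai")) return false
  return true
}

// Drop a trailing assistant message for providers that reject prefill.
// An alternative would be to append { role: "user", content: "Continue." }
// instead of removing the partial text.
function stripTrailingAssistant(msgs: ModelMessage[]): ModelMessage[] {
  const last = msgs[msgs.length - 1]
  if (last?.role !== "assistant") return msgs
  return msgs.slice(0, -1)
}
```

Removing the message loses the partial streamed text from the aborted turn; appending a synthetic user message preserves it, at the cost of an extra turn in the history.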
### Option B: Guard `isLastStep` by provider capability

Only inject the `MAX_STEPS` assistant message for providers known to support prefill (direct Anthropic).
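A sketch of the Option B guard as a standalone helper; `withStepLimitPrefill` and the `MAX_STEPS` text are illustrative, not OpenCode's real code:

```typescript
type ModelMessage = { role: "user" | "assistant"; content: string }

// Illustrative stand-in for OpenCode's actual MAX_STEPS prompt text.
const MAX_STEPS = "Reached the step limit; summarize and stop."

// Only inject the assistant-message prefill when the provider is known
// to accept a trailing assistant message (e.g. direct Anthropic).
function withStepLimitPrefill(
  msgs: ModelMessage[],
  isLastStep: boolean,
  providerSupportsPrefill: boolean,
): ModelMessage[] {
  return [
    ...msgs,
    ...(isLastStep && providerSupportsPrefill
      ? [{ role: "assistant" as const, content: MAX_STEPS }]
      : []),
  ]
}
```

This is narrower than Option A: it fixes the step-limit path but not the aborted-text path, so the two options are complementary rather than alternatives.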
### Option C: Proxy-side workaround
Configure LiteLLM to strip trailing assistant messages before forwarding to Azure AI. Less ideal since it papers over the issue.
## Related Code Paths

| File | Lines | Role |
| --- | --- | --- |
| `packages/opencode/src/session/message-v2.ts` | 690-698 | Error filtering includes aborted messages |
| `packages/opencode/src/session/message-v2.ts` | 756-766 | Pending/running tool call -> error result conversion |
| `packages/opencode/src/session/prompt.ts` | 700-710 | `isLastStep` prefill injection |
| `packages/opencode/src/session/processor.ts` | 354-424 | Abort handling |
| `packages/opencode/src/provider/transform.ts` | 49-190 | `normalizeMessages()` - provider-specific cleanup |
| `packages/opencode/src/session/llm.ts` | 160-178 | Existing LiteLLM workarounds |
## Workaround
After the error occurs, send any new user message (even just "continue") to reset the conversation turn. This works because the new user message ensures the history no longer ends with an assistant message.