Description
When using OpenAI-compatible providers (e.g., Gemini 3 Flash via @ai-sdk/openai-compatible, LiteLLM), the agent loop stops after executing a tool call instead of continuing to process.
Root cause: these providers return `finish_reason: "stop"` even when the response contains tool calls. The OpenAI spec returns `"tool_calls"` in that case (which the AI SDK normalizes to `"tool-calls"`), but providers like Gemini and LiteLLM don't follow this convention.
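A minimal sketch of the mismatch and one possible normalization (the helper name and response shape are hypothetical, not opencode's actual code): if the message carries tool calls, treat the turn as `"tool-calls"` regardless of what the provider reported.

```typescript
// Hypothetical helper: trust the presence of tool calls over finish_reason.
type Choice = {
  finish_reason: string
  message: { tool_calls?: unknown[] }
}

function normalizeFinishReason(choice: Choice): string {
  // Gemini/LiteLLM may report "stop" even when tool_calls is non-empty.
  if (choice.message.tool_calls?.length) return "tool-calls"
  return choice.finish_reason
}

// A Gemini-style response: tool calls present, but finish_reason is "stop".
const geminiChoice: Choice = {
  finish_reason: "stop",
  message: { tool_calls: [{ id: "call_1", type: "function" }] },
}

console.log(normalizeFinishReason(geminiChoice)) // → "tool-calls"
```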
In `packages/opencode/src/session/prompt.ts`, the loop exit condition only checks `finish_reason`:
```ts
if (
  lastAssistant?.finish &&
  !["tool-calls", "unknown"].includes(lastAssistant.finish) &&
  lastUser.id < lastAssistant.id
) {
  break // Agent stops here even though tools were called
}
```
Since `finish_reason` is `"stop"` (not `"tool-calls"`), the loop exits prematurely.
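One way the exit condition could be hardened (a sketch with hypothetical types, not the actual opencode fix): only break when the assistant neither reported `"tool-calls"` nor actually emitted any tool-call parts.

```typescript
// Hypothetical message shape: toolCalls counts the tool-call parts
// attached to the assistant message.
type AssistantMsg = {
  id: number
  finish?: string
  toolCalls: number
}

function shouldStop(lastAssistant: AssistantMsg | undefined, lastUserId: number): boolean {
  if (!lastAssistant?.finish) return false
  // Keep looping if the provider said "tool-calls" / "unknown", or if tool
  // calls are present despite a misleading finish_reason like "stop".
  if (["tool-calls", "unknown"].includes(lastAssistant.finish)) return false
  if (lastAssistant.toolCalls > 0) return false
  return lastUserId < lastAssistant.id
}

// Gemini-style case: finish is "stop" but a tool call was made → keep going.
console.log(shouldStop({ id: 2, finish: "stop", toolCalls: 1 }, 1)) // → false
// Clean stop: no tool calls → exit the loop.
console.log(shouldStop({ id: 2, finish: "stop", toolCalls: 0 }, 1)) // → true
```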
Steps to reproduce
- Configure an OpenAI-compatible provider with Gemini (e.g., gemini-3-flash-preview) or LiteLLM
- Send a prompt that triggers tool usage (e.g., "read file X")
- The agent executes the tool but then stops instead of continuing to process the tool result
Expected behavior
After executing a tool call, the agent should continue processing (make another LLM call with the tool results) regardless of the provider's finish_reason.
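The expected control flow can be illustrated with a toy loop over a scripted provider (the `Turn` shape and `countLlmCalls` are stand-ins for illustration, not opencode's real session loop): when a turn contains tool calls, the agent makes another call even though the provider reported `"stop"`.

```typescript
// Scripted provider turn: what the LLM "returned" on each call.
type Turn = { finish: string; toolCalls: string[] }

function countLlmCalls(script: Turn[]): number {
  let calls = 0
  for (const turn of script) {
    calls++
    // Tool calls present → execute them and loop again, ignoring finish_reason.
    if (turn.toolCalls.length > 0) continue
    // No tool calls → honor the finish reason and exit on a real stop.
    if (turn.finish !== "tool-calls" && turn.finish !== "unknown") break
  }
  return calls
}

// Gemini-style script: the first turn calls a tool but reports "stop";
// the agent should still make a second call to process the tool result.
const script: Turn[] = [
  { finish: "stop", toolCalls: ["read_file"] },
  { finish: "stop", toolCalls: [] },
]
console.log(countLlmCalls(script)) // → 2
```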
Actual behavior
The agent stops after executing the tool. The user sees the tool output but the agent never processes it further.
Related issues
- @ai-sdk/openai-compatible#14063 (Gemini models in Task/subagent session immediately stopped)
OpenCode version
1.2.11
Operating System
macOS