
fix(opencode): recover from truncated tool calls instead of failing silently#21688

Open
nickveenhof wants to merge 2 commits into anomalyco:dev from nickveenhof:fix/truncated-tool-call-recovery

Conversation


@nickveenhof nickveenhof commented Apr 9, 2026

Issue for this PR

Closes #18108
Closes #17471
Refs #14519, #13102, #18151, #18131

Type of change

  • Bug fix
  • New feature
  • Refactor / code improvement
  • Documentation

What does this PR do?

When a model's output hits max_tokens mid-tool-call, the streamed JSON arguments get truncated. The tool parser receives undefined for required fields, producing:

The bash tool was called with invalid arguments: [
  {"expected": "string", "code": "invalid_type", "path": ["command"],
   "message": "Invalid input: expected string, received undefined"}
]
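To make the failure mode concrete, here is a minimal TypeScript sketch (variable names are illustrative, not from the PR) of why truncation surfaces as `undefined` required fields:

```typescript
// A streamed tool call cut off at max_tokens leaves the JSON arguments
// incomplete, so parsing fails and required fields come back undefined.
const streamed = '{"command": "cat very-long-fi'; // truncated mid-string
let args: { command?: string } = {};
try {
  args = JSON.parse(streamed); // throws: unterminated string
} catch {
  // args stays empty, so schema validation sees `command: undefined`
}
console.log(args.command); // undefined
```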

Two fixes:

1. Truncation detection in experimental_repairToolCall (llm.ts)

repairToolCall currently handles only tool-name case sensitivity; every other parse failure is routed to the generic invalid-tool handler. But when toolName matches a registered tool and only the arguments failed to parse, the likely cause is truncation, not an invalid tool. The new check distinguishes the two cases and returns an actionable error telling the model to split its operation into smaller pieces.
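A hedged sketch of the detection logic. The real change lives in experimental_repairToolCall in llm.ts; the function and parameter names here (truncationRepairMessage, registeredTools) are illustrative, not the actual identifiers:

```typescript
type ToolCall = { toolName: string; args: string };

// Distinguish "unknown tool" from "known tool whose streamed args were
// cut off at max_tokens" and return a recovery hint for the latter.
function truncationRepairMessage(
  call: ToolCall,
  registeredTools: Set<string>,
): string | undefined {
  // Unknown tool name: genuinely invalid call, keep the existing handling.
  if (!registeredTools.has(call.toolName)) return undefined;
  try {
    JSON.parse(call.args);
    return undefined; // args parse cleanly, so this is not a truncation
  } catch {
    // Known tool + unparsable args: the output was most likely truncated,
    // so tell the model how to recover instead of failing generically.
    return (
      `The arguments for "${call.toolName}" appear to have been truncated ` +
      `by the output token limit. Retry with the operation split into smaller pieces.`
    );
  }
}
```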

2. Auto-continue on finishReason: "length" (prompt.ts)

The session loop exits when finishReason is anything other than "tool-calls". This means "length" (token limit hit) causes the session to stop silently. Now the loop detects "length", injects a synthetic continuation message, and keeps going so the model can resume.
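The loop change can be sketched as follows; this is a simplified model of the session loop in prompt.ts, with illustrative names (runSession, step) and an assumed retry cap that may not match the actual implementation:

```typescript
type Message = { role: "user" | "assistant"; content: string };
type StepResult = { finishReason: string; message: Message };

// Keep stepping while the model emits tool calls; on a "length" finish,
// inject a synthetic continuation instead of exiting silently.
async function runSession(
  step: (history: Message[]) => Promise<StepResult>,
  history: Message[],
  maxContinues = 3, // illustrative cap so a stuck model cannot loop forever
): Promise<Message[]> {
  let continues = 0;
  while (true) {
    const result = await step(history);
    history.push(result.message);
    if (result.finishReason === "tool-calls") continue;
    if (result.finishReason === "length" && continues < maxContinues) {
      continues++;
      history.push({ role: "user", content: "Continue from where you left off." });
      continue;
    }
    return history; // any other finish reason ends the session as before
  }
}
```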

How did you verify your code works?

  • bun run --cwd packages/opencode tsc --noEmit passes with no new type errors
  • Changes are additive only: 51 lines added, 0 removed, no changes to tests or public API
  • Manually tested by triggering large file writes that exceed OUTPUT_TOKEN_MAX and verifying the model receives the truncation error message and retries with smaller operations

Screenshots / recordings

N/A - no UI changes

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

fix(opencode): recover from truncated tool calls instead of failing silently

When a model's output hits the token limit mid-tool-call, the JSON
arguments are truncated and tool parsing fails with 'expected string,
received undefined'. Two fixes:

1. experimental_repairToolCall now detects truncation: when the tool
   name is a valid registered tool but args failed to parse, return an
   actionable error telling the model to split its operation into
   smaller pieces. Previously all parse failures were routed to a
   generic 'invalid tool' handler with no recovery guidance.

2. The session loop now handles finishReason 'length': instead of
   silently exiting when the model is cut off by the token limit,
   inject a synthetic continuation message so the model can resume
   where it left off.

Fixes anomalyco#18108
Fixes anomalyco#17471
Refs anomalyco#14519, anomalyco#13102
@github-actions github-actions bot added needs:compliance This means the issue will auto-close after 2 hours. and removed needs:compliance This means the issue will auto-close after 2 hours. labels Apr 9, 2026
github-actions bot commented Apr 9, 2026

Thanks for updating your PR! It now meets our contributing guidelines. 👍

@github-actions github-actions bot mentioned this pull request Apr 9, 2026
