Summary
When the default OpenCode agent talks to local OpenAI-compatible backends, two brittle cases can break or stall normal tool use:
- the bash tool can fail hard if the model omits `description` but provides a valid `command`
- some local backends return `finish_reason: "tool_calls"` together with `tool_calls: []`, which should not be treated as a real tool-call turn
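The second case above can be sketched as a small guard over the completion's finish reason. The shapes and names below are illustrative (an OpenAI-style choice object), not OpenCode internals:

```typescript
// Illustrative OpenAI-style chat completion choice; field names follow the
// OpenAI-compatible wire format, the interface itself is a sketch.
interface ChatChoice {
  finish_reason: string;
  message: { content?: string; tool_calls?: unknown[] };
}

// Downgrade finish_reason: "tool_calls" to a plain stop when the tool_calls
// array is empty or missing, so the session loop does not wait on a tool
// turn that will never come.
function effectiveFinishReason(choice: ChatChoice): string {
  const calls = choice.message.tool_calls ?? [];
  if (choice.finish_reason === "tool_calls" && calls.length === 0) {
    return "stop";
  }
  return choice.finish_reason;
}

// Degenerate response as returned by some local backends:
const choice: ChatChoice = {
  finish_reason: "tool_calls",
  message: { content: "", tool_calls: [] },
};
console.log(effectiveFinishReason(choice)); // "stop"
```

A real tool-call turn (non-empty `tool_calls`) passes through unchanged, so normal tool use is unaffected.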
Reproduction
Use the default build agent against a local OpenAI-compatible backend such as llama.cpp or LM Studio.
Observed failures:
- `The bash tool was called with invalid arguments ... expected string, received undefined` on `description`
- session loop/hang behavior when an empty `tool_calls: []` is returned
Expected
- harmless missing bash `description` values should be normalized or defaulted
- empty `tool_calls` arrays should be treated like a normal stop completion
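The first expectation can be sketched as a normalization step before schema validation, assuming bash-tool arguments arrive as a parsed object. `normalizeBashArgs` and the fallback choice are hypothetical, not the actual OpenCode fix:

```typescript
// Hypothetical normalized argument shape; only `description` and `command`
// mirror the error message in this report.
interface BashArgs {
  command: string;
  description: string;
}

// Accept a missing or empty description instead of failing hard, as long as
// a valid command is present; fall back to the command text itself.
function normalizeBashArgs(raw: { command?: unknown; description?: unknown }): BashArgs {
  if (typeof raw.command !== "string" || raw.command.length === 0) {
    throw new Error("bash tool requires a non-empty command string");
  }
  const description =
    typeof raw.description === "string" && raw.description.length > 0
      ? raw.description
      : raw.command; // default for models that omit description
  return { command: raw.command, description };
}
```

A genuinely invalid call (no `command`) still fails, so the change only forgives the harmless omission.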
Verification
This was reproduced against a local llama.cpp backend with the default agent/tool stack.