fix: filter orphaned function_call_output even when tools present #63
code-yeongyu wants to merge 2 commits into numman-ali:main
Conversation
Previously, orphaned function_call_output items were only filtered when body.tools was falsy. This caused 400 errors when:

1. Previous conversation had tool calls stored as item_reference
2. filterInput() removed item_reference (which contained function_call)
3. function_call_output remained with dangling call_id
4. API returned: 'No tool call found for function call output'

Now always filter orphaned function_call_output regardless of tools presence.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)
Previously, orphan function_call_output items were removed, causing the LLM to lose tool results and retry infinitely. Now converts them to assistant messages to preserve context while avoiding API 400 errors.

Changes:
- Convert orphan to message with [Previous tool result; call_id=...] format
- Add try/catch for JSON.stringify failures (circular refs, BigInt)
- Truncate large outputs at 16KB to prevent token explosion
- Include call_id in message for debugging

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)
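The conversion described in the commit message can be sketched roughly as below. This is an illustrative reconstruction, not the plugin's actual code: the type shape and the names `ResponseItem`, `safeStringify`, and `convertOrphanToMessage` are assumptions, and only the behaviors listed above (message format, stringify fallback, 16KB truncation) come from the PR.

```typescript
// Illustrative item shape; the real plugin's types will differ.
type ResponseItem = {
  type: string;
  call_id?: string;
  output?: unknown;
  role?: string;
  content?: { type: string; text: string }[];
};

const MAX_OUTPUT_CHARS = 16 * 1024; // truncate large outputs to prevent token explosion

function safeStringify(value: unknown): string {
  try {
    // JSON.stringify throws on circular references and BigInt values
    return typeof value === "string" ? value : JSON.stringify(value) ?? String(value);
  } catch {
    return String(value);
  }
}

// Convert an orphaned function_call_output into an assistant message so the
// model keeps the tool result without the API rejecting a dangling call_id.
function convertOrphanToMessage(item: ResponseItem): ResponseItem {
  let text = safeStringify(item.output);
  if (text.length > MAX_OUTPUT_CHARS) {
    text = text.slice(0, MAX_OUTPUT_CHARS) + "…[truncated]";
  }
  return {
    type: "message",
    role: "assistant",
    content: [
      {
        type: "output_text",
        // call_id is kept in the text for debugging
        text: `[Previous tool result; call_id=${item.call_id}] ${text}`,
      },
    ],
  };
}
```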
---
hey this totally saved me!
---
Fast hotfix hack: point the dependency at the fix branch in package.json:

```json
{
  "dependencies": {
    "@ai-sdk/google-vertex": "^3.0.86",
    "@opencode-ai/plugin": "1.0.152",
    "oh-my-opencode": "^0.3.4",
    "opencode-openai-codex-auth": "code-yeongyu/opencode-openai-codex-auth#fix/orphaned-function-call-output-with-tools"
  }
}
```
---
Appreciate the PR! I need to understand the root cause before merging; I have found no way to replicate it. Can you do a clean opencode install with only this plugin installed, and test everything, i.e. tool calls, image input, compaction, agent creation, etc.? Let me know if there are issues. If there are none, proceed with adding your setup back in and tell me where the issues appear. Otherwise the PR looks good and can merge, but this will help ensure there is no other underlying issue.
---
Thanks for the review! In my experience, this happens in multi-turn conversations when tool usage is mixed in and the session gets longer. I tested with a clean OpenCode install with only this plugin - same issue. Based on my analysis with OpenCode, here's the exact flow causing the problem:

Step 1: `filterInput()` drops every `item_reference` item:

```ts
if (item.type === "item_reference") {
  return false; // AI SDK only - references server state
}
```

Step 2: The `function_call` that lived inside the removed `item_reference` is gone.

Step 3: The matching `function_call_output` remains with a dangling `call_id`.

But the orphan handling only runs when `body.tools` is falsy:

```ts
// Filter orphaned function_call_output items (where function_call was an item_reference that got filtered)
// Keep matched pairs for compaction context
if (!body.tools && body.input) {
```

The problem: OpenCode stores previous tool calls as `item_reference` items, while the current request almost always carries `tools`, so the orphan filter never runs. This explains why Issue #48 kept appearing even after v4.0.2 - the condition was too narrow. Honestly, I'm more surprised that you haven't encountered this issue.
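The always-on orphan filtering the PR proposes can be sketched as follows. The names `InputItem` and `filterOrphanedOutputs` are illustrative, not the plugin's actual identifiers; the logic only reflects the flow described in this thread (drop `item_reference`, then drop any `function_call_output` whose `call_id` has no surviving `function_call`, regardless of `tools`).

```typescript
// Illustrative input item shape.
type InputItem = { type: string; call_id?: string };

// Remove item_reference entries (AI SDK only), then drop any
// function_call_output whose call_id no longer has a matching
// function_call - unconditionally, not just when tools is falsy.
function filterOrphanedOutputs(input: InputItem[]): InputItem[] {
  const kept = input.filter((item) => item.type !== "item_reference");
  const callIds = new Set(
    kept.filter((i) => i.type === "function_call").map((i) => i.call_id)
  );
  return kept.filter(
    (item) => item.type !== "function_call_output" || callIds.has(item.call_id)
  );
}
```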
---
Okay, thanks for the write-up. Only other question: are you using a personal ChatGPT account or a team ChatGPT account? They might differ in their usage and response from the API; people have had issues in relation to that before. And what computer is it, macOS, Linux, or Windows?
Oh, good to hear. I am using the team plan, on macOS. What about yours?
---
Personal. I think that's where there might be something going on. I'll try to test your change this week, and will give you back full testing instructions too. If we both pass, it's a merge, my friend.
---
@numman-ali I have tested this fix on my personal ChatGPT Pro account, and the fix from @code-yeongyu seems to be working great! I pulled this PR locally and built it.
…fix, and HTML version update

This release combines fixes and features from PRs #62, #63, and #64:

### Added
- "none" reasoning effort support for GPT-5.2 and GPT-5.1 general purpose models
- gpt-5.2-none and gpt-5.1-none model mappings and presets (now 18 total models)
- 4 new unit tests for "none" reasoning behavior (197 total tests)

### Fixed
- Orphaned function_call_output 400 API errors - now converts orphans to assistant messages to preserve context while avoiding API errors
- OAuth HTML version display updated from 1.0.4 to 4.1.0

### Technical Details
- getReasoningConfig() detects GPT-5.1 general purpose models and allows "none"
- Codex variants auto-convert "none" to "low" (or "medium" for Codex Mini)
- Orphan handling now works regardless of tools presence in request

Contributors: @code-yeongyu (PR #63), @kanemontreuil (PR #64), @ben-vargas (PR #62)

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
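The "none" reasoning mapping in the release notes can be sketched like this. Only the conversion rules come from the notes; the `getReasoningConfig` name appears there, but its signature and the substring-based model detection are assumptions for illustration.

```typescript
type ReasoningEffort = "none" | "low" | "medium" | "high";

// Sketch of the behavior described in the release notes: general purpose
// GPT-5.1/5.2 models accept "none", while Codex variants auto-convert
// "none" to "low" (or "medium" for Codex Mini).
function getReasoningConfig(
  model: string,
  effort: ReasoningEffort
): { effort: ReasoningEffort } {
  const isCodex = model.includes("codex"); // assumed detection heuristic
  if (effort === "none" && isCodex) {
    return { effort: model.includes("codex-mini") ? "medium" : "low" };
  }
  return { effort };
}
```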
---
This has been added in the latest release. Thanks for the guidance guys, really appreciate it!
---
Closed by v4.3.0. Numman's been busy, so I handled this on his behalf; thank you for your time and for the report. This was a tough job, but Sam Altman had my back getting it over the line. Release: https://github.com/numman-ali/opencode-openai-codex-auth/releases/tag/v4.3.0
Summary
`function_call_output` items were only filtered when `!body.tools`.

Root Cause

When conversation history contains tool calls stored as `item_reference`:

1. `filterInput()` removes `item_reference` (which contained `function_call`)
2. `function_call_output` remains with dangling `call_id`

The `!body.tools` condition was incorrect because `item_reference` filtering happens regardless of the current request having tools.

Testing
All 181 existing tests pass.