
fix: filter orphaned function_call_output even when tools present#63

Closed
code-yeongyu wants to merge 2 commits into numman-ali:main from code-yeongyu:fix/orphaned-function-call-output-with-tools

Conversation

@code-yeongyu

Summary

  • Fix 400 error: "No tool call found for function call output with call_id"
  • Previously, orphaned function_call_output items were filtered only when !body.tools was true
  • Now orphans are always filtered, regardless of whether tools are present

Root Cause

When conversation history contains tool calls stored as item_reference:

  1. filterInput() removes item_reference (which contained function_call)
  2. function_call_output remains with dangling call_id
  3. API returns 400: "No tool call found for function call output"

The !body.tools condition was incorrect because item_reference filtering happens regardless of whether the current request has tools.
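The fix described above can be sketched as a standalone filter. This is a minimal illustration, not the plugin's actual code: the `InputItem` shape and the `filterOrphanedOutputs` name are assumptions, and the real transformer handles more item types.

```typescript
// Hypothetical sketch of the fix; names and types are illustrative.
type InputItem =
  | { type: "function_call"; call_id: string }
  | { type: "function_call_output"; call_id: string; output: string }
  | { type: "message"; role: string; content: string };

// Always drop function_call_output items whose call_id has no surviving
// function_call — independent of whether the request carries tools.
function filterOrphanedOutputs(input: InputItem[]): InputItem[] {
  const callIds = new Set<string>();
  for (const item of input) {
    if (item.type === "function_call") callIds.add(item.call_id);
  }
  return input.filter(
    (item) => item.type !== "function_call_output" || callIds.has(item.call_id),
  );
}
```

The key point is that the filter consults only the input itself, never `body.tools`, so orphans produced by item_reference removal are cleaned up on every request.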

Testing

All 181 existing tests pass.

Previously, orphaned function_call_output items were only filtered when
body.tools was falsy. This caused 400 errors when:
1. Previous conversation had tool calls stored as item_reference
2. filterInput() removed item_reference (which contained function_call)
3. function_call_output remained with dangling call_id
4. API returned: 'No tool call found for function call output'

Now always filter orphaned function_call_output regardless of tools presence.

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)
Previously, orphan function_call_output items were removed, causing the LLM
to lose tool results and retry infinitely. Now they are converted to assistant
messages to preserve context while avoiding API 400 errors.

Changes:
- Convert orphan to message with [Previous tool result; call_id=...] format
- Add try/catch for JSON.stringify failures (circular refs, BigInt)
- Truncate large outputs at 16KB to prevent token explosion
- Include call_id in message for debugging
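The conversion described in the list above can be sketched as follows. This is a hedged illustration of the commit message, not the actual implementation: `safeStringify`, `orphanToMessage`, and the exact message shape are assumptions; the `[Previous tool result; call_id=...]` format and the 16KB cap come from the PR description.

```typescript
// Illustrative sketch only; helper names and message shape are assumptions.
const MAX_OUTPUT_CHARS = 16 * 1024; // 16KB cap from the PR description

function safeStringify(value: unknown): string {
  try {
    // JSON.stringify throws on circular references and BigInt values.
    return typeof value === "string" ? value : JSON.stringify(value);
  } catch {
    return String(value);
  }
}

function orphanToMessage(callId: string, output: unknown) {
  let text = safeStringify(output);
  if (text.length > MAX_OUTPUT_CHARS) {
    text = text.slice(0, MAX_OUTPUT_CHARS) + "[truncated]";
  }
  return {
    type: "message" as const,
    role: "assistant" as const,
    // call_id is kept in the text for debugging, per the change list.
    content: `[Previous tool result; call_id=${callId}] ${text}`,
  };
}
```

Converting rather than dropping keeps the tool result visible to the model, which is what stops the infinite-retry behavior mentioned above.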

🤖 GENERATED WITH ASSISTANCE OF [OhMyOpenCode](https://github.com/code-yeongyu/oh-my-opencode)
@junhoyeo

hey this totally saved me!

@junhoyeo

Fast hotfix hack:

{
  "dependencies": {
    "@ai-sdk/google-vertex": "^3.0.86",
    "@opencode-ai/plugin": "1.0.152",
    "oh-my-opencode": "^0.3.4",
+    "opencode-openai-codex-auth": "code-yeongyu/opencode-openai-codex-auth#fix/orphaned-function-call-output-with-tools"
  }
}

then run bun i

@numman-ali
Owner

Appreciate the PR!

I need to understand the root cause for this before merging

I have found no way to replicate it

Can you do a clean opencode install with only this plugin installed and test everything, i.e. tool calls, image input, compaction, agent creation, etc.

Let me know if there are issues

If there are no issues, proceed with adding your setup back in and tell me where the issues are

Otherwise the PR looks good and can be merged, but this will help ensure there is no other underlying issue

@code-yeongyu
Author

Thanks for the review!

In my experience, this happens in multi-turn conversations when tool usage is mixed in and the session gets longer. I tested with a clean OpenCode install with only this plugin - same issue.

Based on my analysis with OpenCode, here's the exact flow causing the problem:

Step 1: filterInput() always removes item_reference items (stateless mode requirement)
https://github.com/numman-ali/opencode-openai-codex-auth/blob/78d10cd/lib/request/request-transformer.ts#L221-L225

if (item.type === "item_reference") {
    return false; // AI SDK only - references server state
}

Step 2: filterInput() is called unconditionally
https://github.com/numman-ali/opencode-openai-codex-auth/blob/78d10cd/lib/request/request-transformer.ts#L431

Step 3: But the orphan handling only runs when !body.tools
https://github.com/numman-ali/opencode-openai-codex-auth/blob/78d10cd/lib/request/request-transformer.ts#L455-L457

// Filter orphaned function_call_output items (where function_call was an item_reference that got filtered)
// Keep matched pairs for compaction context
if (!body.tools && body.input) {

The problem: OpenCode stores previous function_call as item_reference. When the plugin filters it out, the corresponding function_call_output becomes orphaned. But since the orphan cleanup only runs when !body.tools, requests that DO have tools send the orphaned output to the API, causing the 400 error.
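The condition change at the heart of the fix can be sketched as two predicates. This is a paraphrase of the linked `request-transformer.ts` excerpt, assuming a simplified `RequestBody` shape; it is not the file's literal code.

```typescript
// Simplified body shape; the real request has many more fields.
interface RequestBody {
  tools?: unknown[];
  input?: { type: string; call_id?: string }[];
}

// Before: orphan cleanup was gated on the request having no tools.
const shouldCleanBefore = (body: RequestBody) => !body.tools && !!body.input;

// After: clean whenever there is input, since item_reference filtering
// happens regardless of whether the current request carries tools.
const shouldCleanAfter = (body: RequestBody) => !!body.input;
```

With the old predicate, any request that included tools skipped the cleanup entirely, which is exactly the case that triggered the 400 error.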

This explains why Issue #48 kept appearing even after v4.0.2 - the condition was too narrow.

Honestly, I'm more surprised that you haven't encountered this issue.

@numman-ali
Owner

Okay thanks for the write up

Only other question is are you using a personal ChatGPT account or a team ChatGPT account?

They might differ in their usage and response from the API

People have had issues in relation to that before

And what computer are you on: macOS, Linux, or Windows?

@code-yeongyu
Author


Oh good to hear that. I am using a team plan on macOS. What about you?

@numman-ali
Owner

Personal

I think that's where there might be something going on

I'll try test your change this week

And will give you back full testing instructions too

If we both pass, it's a merge, my friend

@tayiorbeii

@numman-ali I have tested this fix on my personal ChatGPT Pro account and the fix from @code-yeongyu seems to be working great!

I pulled this PR locally, built it, and in the plugins section of opencode.jsonc I specified a string leading to the PR's built dist/index.js. I have been able to have several long sessions with GPT 5.2 High with full thinking tags.

numman-ali pushed a commit that referenced this pull request Dec 17, 2025
…fix, and HTML version update

This release combines fixes and features from PRs #62, #63, and #64:

### Added
- "none" reasoning effort support for GPT-5.2 and GPT-5.1 general purpose models
- gpt-5.2-none and gpt-5.1-none model mappings and presets (now 18 total models)
- 4 new unit tests for "none" reasoning behavior (197 total tests)

### Fixed
- Orphaned function_call_output 400 API errors - now converts orphans to assistant
  messages to preserve context while avoiding API errors
- OAuth HTML version display updated from 1.0.4 to 4.1.0

### Technical Details
- getReasoningConfig() detects GPT-5.1 general purpose models and allows "none"
- Codex variants auto-convert "none" to "low" (or "medium" for Codex Mini)
- Orphan handling now works regardless of tools presence in request
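The "none" reasoning-effort behavior in the release notes can be sketched as a small resolver. This is a hedged illustration: `resolveEffort` and the substring checks are assumptions standing in for the actual `getReasoningConfig()` logic; only the fallback values ("low" for Codex variants, "medium" for Codex Mini) come from the notes above.

```typescript
// Illustrative sketch; function name and model-detection logic are assumptions.
type Effort = "none" | "low" | "medium" | "high";

function resolveEffort(model: string, requested: Effort): Effort {
  if (requested !== "none") return requested;
  // Codex variants don't accept "none": auto-convert per the release notes.
  if (model.includes("codex-mini")) return "medium";
  if (model.includes("codex")) return "low";
  // General-purpose GPT-5.1/5.2 models pass "none" through.
  return "none";
}
```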

Contributors: @code-yeongyu (PR #63), @kanemontreuil (PR #64), @ben-vargas (PR #62)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
numman-ali added a commit that referenced this pull request Dec 17, 2025
@numman-ali
Owner

This has been added in the latest release

Thanks for the guidance guys, really appreciate it!

@numman-ali
Owner

Closed by release v4.3.0 (tag v4.3.0). Thank you for your time and for the report.

Release: https://github.com/numman-ali/opencode-openai-codex-auth/releases/tag/v4.3.0
