fix: use custom OpenAI-compatible provider to resolve empty responses #15804

Closed

eliya-mazoz wants to merge 1 commit into anomalyco:dev from
Conversation
Fixes anomalyco#5210, anomalyco#5674, anomalyco#10573, anomalyco#15756

## Problem

Custom OpenAI-compatible providers configured via `@ai-sdk/openai-compatible` return empty responses (`{ data: {}, request: {}, response: {} }`) despite the provider API working correctly when called directly.

## Root Cause

The official `@ai-sdk/openai-compatible` package does not properly emit `text-start` and `text-delta` events during streaming, causing the SDK's `session.prompt()` to receive no content even though the API returns valid responses.

## Solution

Replace the official `createOpenAICompatible` with the custom `createGitHubCopilotOpenAICompatible` implementation, which properly handles:

- `text-start` event emission before streaming begins
- `text-delta` events with actual content
- `reasoning-delta` events for o1 models
- proper state management for `isActiveText`/`isActiveReasoning`

## Technical Details

The custom implementation in `openai-compatible-chat-language-model.ts` (lines 477-503) correctly handles `delta.content` by:

1. Emitting a `text-start` event when streaming begins
2. Emitting `text-delta` events with the delta content
3. Managing state transitions between reasoning and text

This fix applies the same working implementation used for `@ai-sdk/github-copilot` to all `@ai-sdk/openai-compatible` providers.

## Testing

Tested with a custom OpenAI-compatible endpoint (llmlite-dev) running gpt-4o:

- Before: empty responses
- After: full responses with proper streaming

Co-authored-by: OpenCode AI <opencode@anomaly.co>
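The core idea of the fix can be sketched as a mapper from raw OpenAI-compatible streaming deltas to stream parts, emitting `text-start` before the first `text-delta`. This is a minimal illustrative sketch, not the actual `@ai-sdk/openai-compatible` internals; the type and function names here are assumptions for illustration only.

```typescript
// Illustrative stream-part types (simplified from the real SDK's event shapes).
type StreamPart =
  | { type: "text-start" }
  | { type: "text-delta"; text: string };

interface ChatDelta {
  content?: string;
}

// Map a sequence of chat deltas to stream parts. A `text-start` part must
// be emitted before the first `text-delta`, otherwise downstream consumers
// never open a text block and the response appears empty.
function mapDeltasToParts(deltas: ChatDelta[]): StreamPart[] {
  const parts: StreamPart[] = [];
  let isActiveText = false; // tracks whether text-start was already emitted
  for (const delta of deltas) {
    if (delta.content) {
      if (!isActiveText) {
        parts.push({ type: "text-start" });
        isActiveText = true;
      }
      parts.push({ type: "text-delta", text: delta.content });
    }
  }
  return parts;
}
```
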
Contributor
This PR doesn't fully meet our contributing guidelines and PR template. What needs to be fixed:

Please edit this PR description to address the above within 2 hours, or it will be automatically closed. If you believe this was flagged incorrectly, please let a maintainer know.
Contributor
Thanks for your contribution! This PR doesn't have a linked issue. All PRs must reference an existing issue. Please:

See CONTRIBUTING.md for details.
This was referenced Mar 3, 2026
Contributor
This pull request has been automatically closed because it was not updated to meet our contributing guidelines within the 2-hour window. Feel free to open a new pull request that follows our guidelines.
Summary
This PR fixes a critical bug where custom OpenAI-compatible providers return empty responses despite the provider API working correctly. The fix applies the same working streaming implementation used for GitHub Copilot to all OpenAI-compatible providers.
Fixes: #5210, #5674, #10573, #15756
Problem
Users configuring custom OpenAI-compatible providers via `@ai-sdk/openai-compatible` experience `session.prompt()` returning empty responses: `{ data: {}, request: {}, response: {} }`. This has been affecting users since December 2024 with no workaround available.
Root Cause
The official `@ai-sdk/openai-compatible` package does not properly emit streaming events:

- `text-start` event emission
- `text-delta` events with actual content

The SDK receives valid API responses but cannot process them into visible content.
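The symptom follows directly from how a streaming consumer assembles the final text. Sketched below under the assumption (illustrative names, not the SDK's actual consumer code) that the response text is accumulated only from `text-delta` parts: if the provider never emits them, the assembled text is empty even though the API responded.

```typescript
// Illustrative stream parts a consumer might receive.
type Part =
  | { type: "text-start" }
  | { type: "text-delta"; text: string }
  | { type: "finish" };

// Assemble the visible response text from the stream. Only `text-delta`
// parts contribute content; a stream consisting solely of a `finish`
// part yields an empty string -- the "empty response" symptom.
function collectText(parts: Part[]): string {
  let text = "";
  for (const part of parts) {
    if (part.type === "text-delta") text += part.text;
  }
  return text;
}
```
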
Solution
One-line change in `packages/opencode/src/provider/provider.ts` (line 95). This replaces the buggy official implementation with the custom `OpenAICompatibleChatLanguageModel`, which properly handles:

**Text streaming** (`openai-compatible-chat-language-model.ts`, lines 477-503):

- `text-start` event when content begins
- `text-delta` events with actual delta content
- `isActiveText` state managed correctly

**Reasoning support** (for o1 models):

- `reasoning-delta` events
- `isActiveReasoning` state

**Tool calls:**
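The `isActiveText`/`isActiveReasoning` bookkeeping above amounts to a small state machine: when a chunk switches from reasoning to text content (or back), the active block is closed before the new one opens. A minimal sketch, assuming illustrative event and field names rather than the SDK's real ones:

```typescript
interface StreamState {
  isActiveText: boolean;
  isActiveReasoning: boolean;
}

interface Delta {
  content?: string;
  reasoning?: string;
}

type StreamEvent = { type: string; text?: string };

// Process one streamed delta, closing any active reasoning block before
// opening a text block (and vice versa), so start/end events stay paired.
function processDelta(state: StreamState, delta: Delta): StreamEvent[] {
  const events: StreamEvent[] = [];
  if (delta.reasoning) {
    if (state.isActiveText) {
      events.push({ type: "text-end" });
      state.isActiveText = false;
    }
    if (!state.isActiveReasoning) {
      events.push({ type: "reasoning-start" });
      state.isActiveReasoning = true;
    }
    events.push({ type: "reasoning-delta", text: delta.reasoning });
  }
  if (delta.content) {
    if (state.isActiveReasoning) {
      events.push({ type: "reasoning-end" });
      state.isActiveReasoning = false;
    }
    if (!state.isActiveText) {
      events.push({ type: "text-start" });
      state.isActiveText = true;
    }
    events.push({ type: "text-delta", text: delta.content });
  }
  return events;
}
```
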
Testing
Verified with a custom OpenAI-compatible endpoint running `gpt-4o`:

Before fix:

After fix:
Impact
Related Issues
Additional Notes
The custom implementation has comprehensive test coverage in:
`packages/opencode/test/provider/copilot/copilot-chat-model.test.ts`

All existing tests should continue passing, as this change only affects the internal provider implementation, not the public API.