Summary
When using a custom OpenAI-compatible provider with the OpenCode SDK, session.prompt() returns empty responses ({ data: {}, request: {}, response: {} }), even though direct API calls to the same endpoint work perfectly.
Environment
- OpenCode SDK versions tested: 1.0.0, 1.2.15, 0.0.0-beta-202603021851
- Node.js: v23.3.0
- Platform: macOS (darwin)
- Custom provider: OpenAI-compatible endpoint at llmlite_url
Configuration
opencode.json:
{
"model": "custom-llm/gpt-4o",
"provider": {
"custom-llm": {
"npm": "@ai-sdk/openai-compatible",
"name": "Custom OpenAI Provider",
"options": {
"baseURL": "llmlite_url",
"apiKey": "sk-***"
},
"models": {
"gpt-4o": {
"name": "GPT-4o",
"limit": { "context": 128000, "output": 16384 }
},
"gpt-4o-mini": {
"name": "GPT-4o Mini",
"limit": { "context": 128000, "output": 16384 }
}
}
}
}
}
~/.local/share/opencode/auth.json:
{
"custom-llm": {
"type": "api",
"key": "sk-***"
}
}
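As a quick local sanity check that the config shape is what the SDK expects, the following sketch mirrors the opencode.json above and asserts the `model` string resolves to an entry under `provider` (this only validates structure, not that the SDK actually consumes it correctly):

```javascript
// Mirrors the opencode.json above; values are the same placeholders.
const config = {
  model: 'custom-llm/gpt-4o',
  provider: {
    'custom-llm': {
      npm: '@ai-sdk/openai-compatible',
      options: { baseURL: 'llmlite_url', apiKey: 'sk-***' },
      models: { 'gpt-4o': {}, 'gpt-4o-mini': {} },
    },
  },
};

// The "<provider>/<model>" string must resolve to a configured provider and model.
const [providerId, modelId] = config.model.split('/');
const provider = config.provider[providerId];
if (!provider) throw new Error(`provider "${providerId}" missing from config.provider`);
if (!provider.models[modelId]) throw new Error(`model "${modelId}" missing from provider.models`);
if (!provider.options.baseURL) throw new Error('baseURL missing from provider.options');
console.log('config shape OK:', providerId, modelId);
```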
Test Results
✅ Direct API Call - WORKS
const response = await fetch('llmlite_url', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': 'Bearer sk-***'
},
body: JSON.stringify({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Say "Direct API call works!"' }]
})
});
// Returns: { choices: [{ message: { content: "Direct API call works!" } }] }
❌ OpenCode SDK - EMPTY RESPONSE
import { OpencodeClient } from '@opencode-ai/sdk';
const client = new OpencodeClient({ embeddedServer: { cwd: process.cwd() } });
await client.start();
const session = await client.createSession({ model: 'custom-llm/gpt-4o' });
const response = await session.prompt('Say "SDK works!"');
// Returns: { data: {}, request: {}, response: {} }
// Expected: { data: { parts: [...] } }
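To make the failure easier to report, a small hypothetical helper (assuming only the `{ data, request, response }` envelope shape shown above) can summarize whether any parts came back:

```javascript
// Hypothetical diagnostic helper: summarizes a session.prompt() result.
// Assumes only the { data, request, response } envelope shape observed above.
function summarizePromptResult(result) {
  const parts = result?.data?.parts;
  return {
    hasParts: Array.isArray(parts) && parts.length > 0,
    dataKeys: Object.keys(result?.data ?? {}),
  };
}

console.log(summarizePromptResult({ data: {}, request: {}, response: {} }));
// → { hasParts: false, dataKeys: [] }
```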
Verification
- ✅ OpenCode CLI recognizes provider:
$ opencode models custom-llm
# Shows: custom-llm/gpt-4o, custom-llm/gpt-4o-mini
- ✅ Config loads in SDK:
const config = await client.config.get();
// Returns full config with custom-llm provider details
- ✅ Server starts successfully:
  - No errors in server logs
  - Server URL available (e.g., http://127.0.0.1:65235)
- ✅ Session creates successfully:
  - Returns valid session ID (e.g., ses_34fed36ccffeO0TI9rPks1UiMw)
- ❌ Prompt returns empty:
  - response.data is always {}
  - response.data.parts is undefined
  - No error messages
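One possibility we have not ruled out (an assumption about how @ai-sdk/openai-compatible builds its request URL, not something we have confirmed from its source): OpenAI-compatible providers generally POST to `${baseURL}/chat/completions`. If llmlite_url already includes that path, the SDK would request a doubled path, which some gateways may answer without a visible error. The sketch below, using a hypothetical stand-in URL, shows the endpoint the SDK would presumably construct:

```javascript
// Assumption: the provider appends /chat/completions to the configured baseURL.
// If baseURL already ends with /chat/completions, the result is a doubled path.
const baseURL = 'https://example.com/v1'; // hypothetical stand-in for llmlite_url
const sdkEndpoint = baseURL.replace(/\/+$/, '') + '/chat/completions';
console.log(sdkEndpoint); // → https://example.com/v1/chat/completions
```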
Tested Across Versions
Same behavior across:
- SDK v1.0.0 (first stable release)
- SDK v1.2.15 (latest stable)
- SDK v0.0.0-beta-202603021851 (latest beta)
This suggests it's a fundamental issue with custom OpenAI-compatible providers, not a version-specific regression.
Complete Test Script
import { OpencodeClient } from '@opencode-ai/sdk';
// Direct API call - WORKS
const directResponse = await fetch('llmlite_url', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': 'Bearer sk-***'
},
body: JSON.stringify({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Hello!' }]
})
});
console.log('Direct API:', await directResponse.json()); // ✅ Returns content
// SDK call - EMPTY
const client = new OpencodeClient({ embeddedServer: { cwd: process.cwd() } });
await client.start();
const session = await client.createSession({ model: 'custom-llm/gpt-4o' });
const sdkResponse = await session.prompt('Hello!');
console.log('SDK Response:', sdkResponse); // ❌ Returns { data: {}, request: {}, response: {} }
Expected Behavior
When using a custom OpenAI-compatible provider, session.prompt() should return responses in the same format as when using built-in providers like GitHub Copilot.
Actual Behavior
session.prompt() returns { data: {}, request: {}, response: {} } with no content or error messages.
Questions
- Are custom OpenAI-compatible providers fully supported in embedded server mode?
- Is there additional configuration required for custom providers?
- Is this a known limitation?
- Should we use non-embedded server mode (opencode serve) instead?
Workaround Needed
We're building a Microsoft Teams bot that needs to use a custom OpenAI-compatible LLM (not GitHub Copilot, not Azure OpenAI). We'd like to use OpenCode SDK for tool orchestration, but this blocker prevents us from proceeding.
Any guidance would be greatly appreciated!