
SDK session.prompt() returns empty responses with custom OpenAI-compatible provider #15756

@eliya-mazoz

Description

Summary

When using a custom OpenAI-compatible provider with the OpenCode SDK, session.prompt() returns empty responses ({ data: {}, request: {}, response: {} }), even though direct API calls to the same endpoint work perfectly.

Environment

  • OpenCode SDK versions tested: 1.0.0, 1.2.15, 0.0.0-beta-202603021851
  • Node.js: v23.3.0
  • Platform: macOS (darwin)
  • Custom provider: OpenAI-compatible endpoint at llmlite_url

Configuration

opencode.json:

{
  "model": "custom-llm/gpt-4o",
  "provider": {
    "custom-llm": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Custom OpenAI Provider",
      "options": {
        "baseURL": "llmlite_url",
        "apiKey": "sk-***"
      },
      "models": {
        "gpt-4o": {
          "name": "GPT-4o",
          "limit": { "context": 128000, "output": 16384 }
        },
        "gpt-4o-mini": {
          "name": "GPT-4o Mini",
          "limit": { "context": 128000, "output": 16384 }
        }
      }
    }
  }
}
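For what it's worth, the provider block above can be sanity-checked programmatically before handing it to the SDK. This is a minimal sketch that mirrors the JSON shape shown in this report; the `validateProvider` helper and its error messages are illustrative, not part of the OpenCode SDK:

```typescript
// Minimal sanity check for the opencode.json provider block above.
// The interface mirrors the config in this report; the helper is
// illustrative, not an SDK API.

interface ProviderConfig {
  npm: string;
  name?: string;
  options: { baseURL: string; apiKey: string };
  models: Record<string, unknown>;
}

function validateProvider(id: string, p: ProviderConfig): string[] {
  const problems: string[] = [];
  if (!p.npm) problems.push(`${id}: missing "npm" package`);
  if (!p.options?.baseURL) problems.push(`${id}: missing options.baseURL`);
  if (!p.options?.apiKey) problems.push(`${id}: missing options.apiKey`);
  if (!p.models || Object.keys(p.models).length === 0)
    problems.push(`${id}: no models declared`);
  return problems;
}

// The config from this report passes the check:
const customLlm: ProviderConfig = {
  npm: '@ai-sdk/openai-compatible',
  name: 'Custom OpenAI Provider',
  options: { baseURL: 'llmlite_url', apiKey: 'sk-***' },
  models: { 'gpt-4o': {}, 'gpt-4o-mini': {} },
};
console.log(validateProvider('custom-llm', customLlm)); // []
```

In our case the config passes this kind of check, which is why we believe the problem is downstream of config loading.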

~/.local/share/opencode/auth.json:

{
  "custom-llm": {
    "type": "api",
    "key": "sk-***"
  }
}

Test Results

✅ Direct API Call - WORKS

const response = await fetch('llmlite_url', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer sk-***'
  },
  body: JSON.stringify({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Say "Direct API call works!"' }]
  })
});

// Returns: { choices: [{ message: { content: "Direct API call works!" } }] }

❌ OpenCode SDK - EMPTY RESPONSE

import { OpencodeClient } from '@opencode-ai/sdk';

const client = new OpencodeClient({ embeddedServer: { cwd: process.cwd() } });
await client.start();

const session = await client.createSession({ model: 'custom-llm/gpt-4o' });
const response = await session.prompt('Say "SDK works!"');

// Returns: { data: {}, request: {}, response: {} }
// Expected: { data: { parts: [...] } }

Verification

  1. ✅ OpenCode CLI recognizes provider:

    $ opencode models custom-llm
    # Shows: custom-llm/gpt-4o, custom-llm/gpt-4o-mini
  2. ✅ Config loads in SDK:

    const config = await client.config.get();
    // Returns full config with custom-llm provider details
  3. ✅ Server starts successfully:

    • No errors in server logs
    • Server URL available (e.g., http://127.0.0.1:65235)
  4. ✅ Session creates successfully:

    • Returns valid session ID (e.g., ses_34fed36ccffeO0TI9rPks1UiMw)
  5. ❌ Prompt returns empty:

    • response.data is always {}
    • response.data.parts is undefined
    • No error messages
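Since no error is surfaced, we currently wrap the call in a guard so the failure is at least loud. This is a sketch assuming the response shapes shown above; `assertHasParts` is our own hypothetical helper, not an SDK API:

```typescript
// Guard that turns the silent empty response into a diagnosable error.
// The expected/actual shapes are the ones shown in this report; the
// helper itself is illustrative, not part of the SDK.

interface PromptResponse {
  data: { parts?: unknown[] };
  request: unknown;
  response: unknown;
}

function assertHasParts(resp: PromptResponse): unknown[] {
  if (!Array.isArray(resp.data?.parts) || resp.data.parts.length === 0) {
    throw new Error(`Empty prompt response: data=${JSON.stringify(resp.data)}`);
  }
  return resp.data.parts;
}

// The empty response reported above trips the guard:
try {
  assertHasParts({ data: {}, request: {}, response: {} });
} catch (e) {
  console.log((e as Error).message); // "Empty prompt response: data={}"
}
```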

Tested Across Versions

Same behavior across:

  • SDK v1.0.0 (first stable release)
  • SDK v1.2.15 (latest stable)
  • SDK v0.0.0-beta-202603021851 (latest beta)

This suggests it's a fundamental issue with custom OpenAI-compatible providers, not a version-specific regression.

Complete Test Script

import { OpencodeClient } from '@opencode-ai/sdk';

// Direct API call - WORKS
const directResponse = await fetch('llmlite_url', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer sk-***'
  },
  body: JSON.stringify({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello!' }]
  })
});
console.log('Direct API:', await directResponse.json()); // ✅ Returns content

// SDK call - EMPTY
const client = new OpencodeClient({ embeddedServer: { cwd: process.cwd() } });
await client.start();
const session = await client.createSession({ model: 'custom-llm/gpt-4o' });
const sdkResponse = await session.prompt('Hello!');
console.log('SDK Response:', sdkResponse); // ❌ Returns { data: {}, request: {}, response: {} }

Expected Behavior

When using a custom OpenAI-compatible provider, session.prompt() should return responses in the same format as when using built-in providers like GitHub Copilot.

Actual Behavior

session.prompt() returns { data: {}, request: {}, response: {} } with no content or error messages.

Questions

  1. Are custom OpenAI-compatible providers fully supported in embedded server mode?
  2. Is there additional configuration required for custom providers?
  3. Is this a known limitation?
  4. Should we use non-embedded server mode (opencode serve) instead?

Workaround Needed

We're building a Microsoft Teams bot that needs to use a custom OpenAI-compatible LLM (not GitHub Copilot, not Azure OpenAI). We'd like to use the OpenCode SDK for tool orchestration, but this blocker prevents us from proceeding.

Any guidance would be greatly appreciated!

Metadata

Labels

core: Anything pertaining to core functionality of the application (opencode server stuff)
