Critical Issues Event Streaming & Model Selection #46

@Dekode1859

Description

[BUG] OpenCode Python SDK: Event Streaming & Model Selection

Issue Type: Bug Report
Severity: Critical
Component: Python SDK (opencode-ai package)
Affects: All Python SDK versions tested (v0.x alpha)
Status: Confirmed Bugs - Workarounds Available
Related Issues: SDK Issue #42 (model selection), Event streaming issue


Summary

The OpenCode Python SDK has two critical bugs:

  1. Event Streaming Broken: client.event.list() does not deliver AI assistant response events (message.part.updated with role='assistant'). Only system events arrive (heartbeats, connection status).

  2. Model Selection Broken (SDK Issue #42): The SDK sends modelID/providerID as flat top-level fields, but the server expects a nested model object. The server ignores the selection and falls back to the default model.

Tested: Python SDK against local OpenCode server with big-pickle model


Bug #1: Event Streaming Non-Functional

According to the official OpenCode Python SDK README:

# Official SDK Documentation Example
from opencode_ai import Opencode

client = Opencode()

stream = client.event.list()
for event in stream:
    print(event)  # Should yield AI response events in real-time

And from the SDK TypeScript documentation:

// TypeScript SDK (reference for expected behavior)
const events = await client.event.subscribe();
for await (const event of events.stream) {
  switch (event.type) {
    case "message.part.updated":
      const part = event.properties.part;
      if (part.type === "text") {
        console.log("AI Response:", part.text);  // Real-time streaming
      }
      break;
    case "step_finish":
      console.log("Generation complete");
      break;
  }
}

Expected Event Types:

  • message.part.updated - AI response text chunks
  • text - Text content from AI
  • step_finish - Generation completion
  • session.status - Session state changes

Actual Behavior

What Actually Happens

from opencode_ai import Opencode

client = Opencode(base_url="http://localhost:4096")

# Send a message (this works)
client.session.chat(
    id=session_id,
    model_id="big-pickle",  # OpenCode's free model
    provider_id="opencode/big-pickle",  # OpenCode's own provider
    parts=[TextPartInputParam(text="What model are you and what are your capabilities? Briefly mention if you were built by OpenCode.", type="text")],
)

# Try to stream events (THIS FAILS)
print("Attempting event streaming...")
for event in client.event.list():
    print(f"Event type: {event.type}")
    # Only yields:
    # Event type: server.connected
    # Event type: server.heartbeat
    # Event type: server.heartbeat
    # ... (repeats until timeout)
    # NEVER yields: message.part.updated, text, or step_finish

Observed Results

Test                                            Result  Duration     Events Received
Event streaming with client.event.list()        FAILS   30s timeout  Only server.connected, server.heartbeat
Message polling with client.session.messages()  WORKS   ~1-2s        Complete AI response

Backend Confirmation:

  • ✅ OpenCode server IS generating AI responses (visible in OpenCode UI)
  • ✅ client.session.messages() returns complete responses
  • ❌ client.event.list() NEVER receives AI response events

Technical Evidence

SDK Source Code Analysis

From opencode-sdk-python/src/opencode_ai/resources/event.py:

# Line 67-79: SDK explicitly enables streaming
class EventResource(SyncAPIResource):
    def list(
        self,
        *,
        extra_headers: Headers | None = None,
        extra_query: Query | None = None,
        extra_body: Body | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    ) -> Stream[EventListResponse]:
        """Get events"""
        return self._get(
            "/event",
            options=make_request_options(
                extra_headers=extra_headers, 
                extra_query=extra_query, 
                extra_body=extra_body, 
                timeout=timeout
            ),
            cast_to=cast(Any, EventListResponse),
            stream=True,  # <-- STREAMING EXPLICITLY ENABLED
            stream_cls=Stream[EventListResponse],
        )

The SDK code clearly shows:

  1. stream=True is set
  2. Returns Stream[EventListResponse]
  3. Uses /event endpoint
  4. Should yield events as they arrive

Yet the events never arrive.


Reproduction Steps

Prerequisites

  • OpenCode server running (e.g., localhost:4096)
  • Python 3.11+
  • pip install --pre opencode-ai
  • API key for LLM provider (Anthropic, OpenAI, etc.)

Minimal Reproducible Example

Save as test_streaming_bug.py:

#!/usr/bin/env python3
"""
Minimal reproduction of OpenCode Python SDK event streaming bug.

Expected: client.event.list() should yield AI response events in real-time
Actual: Only yields server.connected/server.heartbeat, never AI events
"""

import time
from opencode_ai import Opencode
from opencode_ai.types.text_part_input_param import TextPartInputParam


def test_event_streaming():
    client = Opencode(base_url="http://localhost:4096")
    
    # Create session
    session = client.session.create(extra_body={"title": "Streaming Bug Test"})
    session_id = session.id
    print(f"[INFO] Created session: {session_id}")
    
    # Send message
    part = TextPartInputParam(text="Say hello world", type="text")
    client.session.chat(
        id=session_id,
        model_id="big-pickle",  # OpenCode's free model
        provider_id="opencode/big-pickle",  # OpenCode's own provider
        parts=[part],
    )
    print("[INFO] Message sent to AI")
    
    # TEST 1: Event streaming (FAILS)
    print("\n[TEST 1] Attempting event streaming...")
    print("Expected: Should receive message.part.updated events with text chunks")
    print("Actual: (see below)")
    
    event_count = 0
    ai_events = 0
    
    try:
        # Set timeout to avoid hanging forever
        for event in client.event.list(timeout=20.0):
            event_count += 1
            print(f"  Event #{event_count}: type={event.type}")
            
            if hasattr(event, 'properties') and event.properties:
                # Check if this is an AI response event
                event_type = getattr(event.properties, 'type', None)
                if event_type in ['text', 'message.part.updated', 'step_finish']:
                    ai_events += 1
                    print(f"    -> AI EVENT FOUND: {event_type}")
                    
    except Exception as e:
        print(f"  [ERROR] Event streaming failed: {e}")
    
    print(f"\n[RESULT] Total events: {event_count}, AI events: {ai_events}")
    
    if ai_events == 0:
        print("[BUG CONFIRMED] No AI response events received via streaming")
    
    # TEST 2: Message polling (WORKS)
    print("\n[TEST 2] Attempting message polling (workaround)...")
    
    for poll_attempt in range(20):  # 10 second max wait
        time.sleep(0.5)
        messages = client.session.messages(session_id)
        
        for msg in messages:
            if hasattr(msg, "info") and msg.info.role == "assistant":
                if hasattr(msg, "parts") and msg.parts:
                    for part in msg.parts:
                        if hasattr(part, "type") and part.type == "text":
                            text = getattr(part, "text", "")
                            if text:
                                print(f"  [SUCCESS] Response found after {poll_attempt + 1} polls")
                                print(f"  Response: {text[:100]}...")
                                return True
    
    print("  [FAIL] No response found via polling either")
    return False


if __name__ == "__main__":
    print("=" * 60)
    print("OpenCode Python SDK Event Streaming Bug Reproduction")
    print("=" * 60)
    test_event_streaming()

Expected Output (If Working)

[TEST 1] Attempting event streaming...
  Event #1: type=server.connected
  Event #2: type=message.part.updated
    -> AI EVENT FOUND: text
  Event #3: type=message.part.updated
    -> AI EVENT FOUND: text
  Event #4: type=step_finish
[RESULT] Total events: 4, AI events: 3

Actual Output (Bug)

[TEST 1] Attempting event streaming...
  Event #1: type=server.connected
  Event #2: type=server.heartbeat
  Event #3: type=server.heartbeat
  ... (repeats until timeout)
[RESULT] Total events: 20, AI events: 0
[BUG CONFIRMED] No AI response events received via streaming

[TEST 2] Attempting message polling (workaround)...
  [SUCCESS] Response found after 2 polls
  Response: Hello! How can I help you today?

Workaround (Message Polling)

Until this bug is fixed, use message polling instead of event streaming:

import asyncio

from opencode_ai import Opencode
from opencode_ai.types.text_part_input_param import TextPartInputParam


async def get_ai_response_polling(session_id, user_message):
    """
    Workaround for broken event streaming.
    Polls session.messages() until the AI response arrives.

    Note: the sync client blocks the event loop during each call.
    """
    client = Opencode(base_url="http://localhost:4096")

    # Send message
    part = TextPartInputParam(text=user_message, type="text")
    client.session.chat(
        id=session_id,
        model_id="big-pickle",  # OpenCode's free model
        provider_id="opencode/big-pickle",  # OpenCode's own provider
        parts=[part],
    )

    # Poll for response (workaround)
    for attempt in range(60):  # 30 second timeout
        await asyncio.sleep(0.5)
        messages = client.session.messages(session_id)

        for msg in messages:
            if (hasattr(msg, "info") and msg.info.role == "assistant" and
                hasattr(msg, "parts") and msg.parts):
                for part in msg.parts:
                    if hasattr(part, "type") and part.type == "text":
                        text = getattr(part, "text", "")
                        if text:
                            return text

    return None

Trade-offs:

  • ✅ Works reliably
  • ✅ Gets complete AI responses
  • ❌ No real-time streaming (choppy UI updates)
  • ❌ Higher latency (500ms polling interval)
  • ❌ More API calls
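The choppy-UI trade-off can be softened by diffing each poll against the last text seen and emitting only the new suffix, which approximates streaming on top of the same polling calls. This is a sketch only; `text_delta` and `stream_via_polling` are hypothetical helpers, not part of the SDK, and they assume the message/part shapes used elsewhere in this report.

```python
import time


def text_delta(previous: str, current: str) -> str:
    """Return the newly appended suffix when `current` extends `previous`."""
    if current.startswith(previous):
        return current[len(previous):]
    return current  # text was rewritten from scratch; emit it whole


def stream_via_polling(client, session_id, interval=0.5, max_polls=60):
    """Yield AI text chunks by polling session.messages() and diffing.

    Sketch: assumes msg.info.role, part.type, and part.text as seen in
    the polling workaround above.
    """
    seen = ""
    for _ in range(max_polls):
        time.sleep(interval)
        for msg in client.session.messages(session_id):
            if getattr(getattr(msg, "info", None), "role", None) != "assistant":
                continue
            for part in getattr(msg, "parts", None) or []:
                if getattr(part, "type", None) == "text" and getattr(part, "text", ""):
                    chunk = text_delta(seen, part.text)
                    if chunk:
                        seen = part.text
                        yield chunk
```

This still pays the polling latency, but the UI receives incremental chunks instead of one large block per poll.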

Real-World Example

I discovered this bug while building Align - a Reflex-based chat interface for OpenCode. The broken streaming forced me to implement message polling, which causes a "janky" user experience with chunked updates instead of smooth streaming.

Repository: https://github.com/Dekode1859/Align
Use Case: Real-time chat UI with OpenCode SDK integration


Bug #2: Model Selection Ignored (SDK Issue #42)

Related: opencode-sdk-python Issue #42

The Problem

The Python SDK sends modelID/providerID as flat top-level fields, but the server expects them nested inside a model object, so the server ignores the selection and falls back to the default model.

What Python SDK sends (WRONG):

{
  "modelID": "big-pickle",
  "providerID": "opencode/big-pickle",
  "parts": [...]
}

What server expects (CORRECT):

{
  "model": {
    "providerID": "opencode",
    "modelID": "big-pickle"
  },
  "parts": [...]
}

Workaround

Use extra_body to send correct format:

client.session.chat(
    id=session_id,
    model_id="big-pickle",  # Required but ignored by server
    provider_id="opencode/big-pickle",  # Required but ignored by server
    parts=[part],
    extra_body={
        "model": {
            "providerID": "opencode",      # Just provider name
            "modelID": "big-pickle"      # Just model name
        }
    }
)

Evidence

From SDK Issue #42 (session_chat_params.py):

class SessionChatParams(TypedDict, total=False):
    model_id: Required[Annotated[str, PropertyInfo(alias="modelID")]]
    provider_id: Required[Annotated[str, PropertyInfo(alias="providerID")]]

The Python SDK serializes these as flat top-level fields, whereas the TypeScript SDK sends a nested model object per the official docs.
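Until the serialization is fixed, the extra_body workaround can be centralized in a thin wrapper so the nested shape is applied on every call. `nested_model_body` and `chat_with_model` below are hypothetical helpers, not SDK API.

```python
def nested_model_body(provider_id: str, model_id: str) -> dict:
    """Build the nested model object the server actually expects."""
    return {"model": {"providerID": provider_id, "modelID": model_id}}


def chat_with_model(client, session_id, parts,
                    provider_id="opencode", model_id="big-pickle"):
    """Send a chat request with the model selection in the server's shape.

    The flat model_id/provider_id kwargs are still required by the SDK's
    signature but ignored by the server; extra_body carries the real
    selection.
    """
    return client.session.chat(
        id=session_id,
        model_id=model_id,
        provider_id=provider_id,
        parts=parts,
        extra_body=nested_model_body(provider_id, model_id),
    )
```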


Environment

Component            Version
OpenCode Server      Latest (tested 2026-04-05)
OpenCode Python SDK  opencode-ai (alpha/pre-release)
Python               3.11+
OS                   Windows 11 / Linux
HTTP Client          httpx (default)

I'm happy to:

  • Provide additional debugging information
  • Test potential fixes
  • Verify solutions in my production use case

Report generated: 2026-04-05
Contact: Dekode1859 (https://github.com/Dekode1859)
Related Repository: https://github.com/Dekode1859/Align
