[BUG] OpenCode Python SDK: Event Streaming & Model Selection

Summary

The OpenCode Python SDK has two critical bugs:
1. Event Streaming Broken: client.event.list() does not deliver AI assistant response events (message.part.updated with role='assistant'). Only system events arrive (heartbeats, connection status).
2. Model Selection Broken (SDK Issue #42): The SDK sends modelID/providerID in the wrong format (flat), but the server expects a nested model object. The server ignores the selection and falls back to the default model.

Tested: Python SDK against a local OpenCode server with the big-pickle model.
```python
# Official SDK Documentation Example
from opencode_ai import Opencode

client = Opencode()

stream = client.event.list()
for event in stream:
    print(event)  # Should yield AI response events in real-time
```
```python
from opencode_ai import Opencode

client = Opencode(base_url="http://localhost:4096")

# Send a message (this works)
client.session.chat(
    id=session_id,
    model_id="big-pickle",              # OpenCode's free model
    provider_id="opencode/big-pickle",  # OpenCode's own provider
    parts=[TextPartInputParam(
        text="What model are you and what are your capabilities? "
             "Briefly mention if you were built by OpenCode.",
        type="text",
    )],
)

# Try to stream events (THIS FAILS)
print("Attempting event streaming...")
for event in client.event.list():
    print(f"Event type: {event.type}")

# Only yields:
#   Event type: server.connected
#   Event type: server.heartbeat
#   Event type: server.heartbeat
#   ... (repeats until timeout)
# NEVER yields: message.part.updated, text, or step_finish
```
Observed Results

| Test | Result | Duration | Events Received |
| --- | --- | --- | --- |
| Event streaming with client.event.list() | ❌ FAILS | 30s timeout | Only server.connected, server.heartbeat |
| Message polling with client.session.messages() | ✅ WORKS | ~1-2s | Complete AI response |
Backend Confirmation:
✅ OpenCode server IS generating AI responses (visible in the OpenCode UI)
✅ client.session.messages() returns complete responses
❌ client.event.list() NEVER receives AI response events
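To isolate whether the bug lives in the SDK or the server, it may help to read the raw SSE feed directly, bypassing the SDK entirely. A minimal stdlib sketch, assuming the /event endpoint (named in the SDK source) speaks standard Server-Sent Events on the report's localhost:4096 server; `parse_sse_event` and `dump_raw_events` are my own names, not SDK APIs:

```python
import urllib.request


def parse_sse_event(raw_block: str) -> dict:
    """Parse one Server-Sent Events block ("event: ..." / "data: ...") into a dict."""
    event = {"event": "message", "data": ""}
    for line in raw_block.splitlines():
        if line.startswith("event:"):
            event["event"] = line[len("event:"):].strip()
        elif line.startswith("data:"):
            event["data"] += line[len("data:"):].strip()
    return event


def dump_raw_events(base_url: str = "http://localhost:4096", timeout: float = 30.0) -> None:
    """Read the server's raw /event SSE feed and print each event as it arrives."""
    with urllib.request.urlopen(f"{base_url}/event", timeout=timeout) as resp:
        buffer = []
        for raw_line in resp:
            line = raw_line.decode("utf-8").rstrip("\r\n")
            if line:
                buffer.append(line)
            elif buffer:  # a blank line terminates one SSE event block
                print("raw SSE event:", parse_sse_event("\n".join(buffer)))
                buffer = []
```

If AI events show up here but not via client.event.list(), the bug is in the SDK's stream handling; if they never appear, the server is not publishing them to this endpoint.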
Prerequisites:
- Running OpenCode server (localhost:4096)
- Python SDK: pip install --pre opencode-ai
- API key for an LLM provider (Anthropic, OpenAI, etc.)

Minimal Reproducible Example

Save as test_streaming_bug.py:
```python
#!/usr/bin/env python3
"""Minimal reproduction of the OpenCode Python SDK event streaming bug.

Expected: client.event.list() should yield AI response events in real-time.
Actual:   Only yields server.connected/server.heartbeat, never AI events.
"""
import time

from opencode_ai import Opencode
from opencode_ai.types.text_part_input_param import TextPartInputParam


def test_event_streaming():
    client = Opencode(base_url="http://localhost:4096")

    # Create session
    session = client.session.create(extra_body={"title": "Streaming Bug Test"})
    session_id = session.id
    print(f"[INFO] Created session: {session_id}")

    # Send message
    part = TextPartInputParam(text="Say hello world", type="text")
    client.session.chat(
        id=session_id,
        model_id="big-pickle",              # OpenCode's free model
        provider_id="opencode/big-pickle",  # OpenCode's own provider
        parts=[part],
    )
    print("[INFO] Message sent to AI")

    # TEST 1: Event streaming (FAILS)
    print("\n[TEST 1] Attempting event streaming...")
    print("Expected: Should receive message.part.updated events with text chunks")
    print("Actual: (see below)")
    event_count = 0
    ai_events = 0
    try:
        # Set a timeout to avoid hanging forever
        for event in client.event.list(timeout=20.0):
            event_count += 1
            print(f"  Event #{event_count}: type={event.type}")
            if hasattr(event, "properties") and event.properties:
                # Check if this is an AI response event
                event_type = getattr(event.properties, "type", None)
                if event_type in ["text", "message.part.updated", "step_finish"]:
                    ai_events += 1
                    print(f"    -> AI EVENT FOUND: {event_type}")
    except Exception as e:
        print(f"  [ERROR] Event streaming failed: {e}")

    print(f"\n[RESULT] Total events: {event_count}, AI events: {ai_events}")
    if ai_events == 0:
        print("[BUG CONFIRMED] No AI response events received via streaming")

    # TEST 2: Message polling (WORKS)
    print("\n[TEST 2] Attempting message polling (workaround)...")
    for poll_attempt in range(20):  # 10 second max wait
        time.sleep(0.5)
        messages = client.session.messages(session_id)
        for msg in messages:
            if hasattr(msg, "info") and msg.info.role == "assistant":
                if hasattr(msg, "parts") and msg.parts:
                    for part in msg.parts:
                        if hasattr(part, "type") and part.type == "text":
                            text = getattr(part, "text", "")
                            if text:
                                print(f"  [SUCCESS] Response found after {poll_attempt} polls")
                                print(f"  Response: {text[:100]}...")
                                return True
    print("  [FAIL] No response found via polling either")
    return False


if __name__ == "__main__":
    print("=" * 60)
    print("OpenCode Python SDK Event Streaming Bug Reproduction")
    print("=" * 60)
    test_event_streaming()
```
Expected Output (If Working)

```
[TEST 1] Attempting event streaming...
  Event #1: type=server.connected
  Event #2: type=message.part.updated
    -> AI EVENT FOUND: text
  Event #3: type=message.part.updated
    -> AI EVENT FOUND: text
  Event #4: type=step_finish
[RESULT] Total events: 4, AI events: 3
```

Actual Output (Bug)

```
[TEST 1] Attempting event streaming...
  Event #1: type=server.connected
  Event #2: type=server.heartbeat
  Event #3: type=server.heartbeat
  ... (repeats until timeout)
[RESULT] Total events: 20, AI events: 0
[BUG CONFIRMED] No AI response events received via streaming

[TEST 2] Attempting message polling (workaround)...
  [SUCCESS] Response found after 2 polls
  Response: Hello! How can I help you today?
```
Workaround (Message Polling)
Until this bug is fixed, use message polling instead of event streaming:
```python
import asyncio

from opencode_ai import Opencode
from opencode_ai.types.text_part_input_param import TextPartInputParam


async def get_ai_response_polling(session_id, user_message):
    """Workaround for broken event streaming.
    Polls session.messages() until the AI response arrives."""
    client = Opencode(base_url="http://localhost:4096")

    # Send message
    part = TextPartInputParam(text=user_message, type="text")
    client.session.chat(
        id=session_id,
        model_id="big-pickle",              # OpenCode's free model
        provider_id="opencode/big-pickle",  # OpenCode's own provider
        parts=[part],
    )

    # Poll for response (workaround)
    for attempt in range(60):  # 30 second timeout
        await asyncio.sleep(0.5)
        for msg in client.session.messages(session_id):
            if (hasattr(msg, "info") and msg.info.role == "assistant"
                    and hasattr(msg, "parts") and msg.parts):
                for part in msg.parts:
                    if hasattr(part, "type") and part.type == "text":
                        text = getattr(part, "text", "")
                        if text:
                            return text
    return None
```
Trade-offs:
✅ Works reliably
✅ Gets complete AI responses
❌ No real-time streaming (choppy UI updates)
❌ Higher latency (500ms polling interval)
❌ More API calls
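The fixed 500 ms interval is a compromise between latency and API-call volume. A small backoff schedule can keep the first polls fast while reducing calls on slow responses; this is a sketch, and `poll_intervals` is my own helper, not part of the SDK:

```python
def poll_intervals(initial: float = 0.25, factor: float = 1.5,
                   cap: float = 2.0, total: float = 30.0):
    """Yield sleep intervals that grow from `initial` toward `cap`,
    stopping once the cumulative wait would exceed `total` seconds."""
    elapsed, delay = 0.0, initial
    while elapsed + delay <= total:
        yield delay
        elapsed += delay
        delay = min(delay * factor, cap)
```

In the workaround above, the fixed `await asyncio.sleep(0.5)` loop could be replaced by iterating `for delay in poll_intervals(): await asyncio.sleep(delay)` before each check.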
Real-World Example

I discovered this bug while building Align, a Reflex-based chat interface for OpenCode. The broken streaming forced me to implement message polling, which causes a "janky" user experience with chunked updates instead of smooth streaming.

Repository: https://github.com/Dekode1859/Align
Use Case: Real-time chat UI with OpenCode SDK integration
Bug #2: Model Selection Ignored (SDK Issue #42)

The Python SDK sends modelID/providerID as flat top-level fields, but the server expects them nested inside a model object. The server ignores the selection and falls back to the default model. Workaround: use extra_body to send the correctly nested model:
```python
client.session.chat(
    id=session_id,
    model_id="big-pickle",              # Required but ignored by server
    provider_id="opencode/big-pickle",  # Required but ignored by server
    parts=[part],
    extra_body={
        "model": {
            "providerID": "opencode",   # Just the provider name
            "modelID": "big-pickle",    # Just the model name
        }
    },
)
```
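To avoid repeating the extra_body boilerplate at every call site, the workaround can be wrapped in a small helper. A sketch only: `build_model_body` and `chat_with_model` are my own names, and the flat provider/model field convention is copied from the examples in this report:

```python
def build_model_body(provider_id: str, model_id: str) -> dict:
    """Build the nested `model` object the server expects (see Bug #2)."""
    return {"model": {"providerID": provider_id, "modelID": model_id}}


def chat_with_model(client, session_id: str, provider_id: str, model_id: str, text: str):
    """Send a chat message, forcing the nested model format via extra_body.

    `client` is an opencode_ai.Opencode instance. The flat model_id/provider_id
    arguments are still passed because the SDK signature requires them, but the
    server should honor the nested object sent through extra_body.
    """
    return client.session.chat(
        id=session_id,
        model_id=model_id,
        provider_id=f"{provider_id}/{model_id}",
        parts=[{"text": text, "type": "text"}],
        extra_body=build_model_body(provider_id, model_id),
    )
```

Once the SDK fix lands, only the `build_model_body` call site needs deleting.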
Technical Evidence for Bug #1 (Event Streaming Non-Functional)

Expected Event Types (per the SDK README and TypeScript documentation):
- message.part.updated - AI response text chunks
- text - Text content from AI
- step_finish - Generation completion
- session.status - Session state changes

SDK Source Code Analysis

From opencode-sdk-python/src/opencode_ai/resources/event.py, the SDK code clearly shows:
- stream=True is set
- the return type is Stream[EventListResponse]
- the request targets the /event endpoint

Yet the events never arrive.
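While debugging, it can help to bucket incoming event types the same way this report does (AI vs. system vs. session events). A minimal sketch; the bucket names and `classify_event` helper are mine:

```python
AI_EVENT_TYPES = {"message.part.updated", "text", "step_finish"}
SYSTEM_EVENT_TYPES = {"server.connected", "server.heartbeat"}


def classify_event(event_type: str) -> str:
    """Bucket an event type string into 'ai', 'system', 'session', or 'other'."""
    if event_type in AI_EVENT_TYPES:
        return "ai"
    if event_type in SYSTEM_EVENT_TYPES:
        return "system"
    if event_type == "session.status":
        return "session"
    return "other"
```

With the bug present, every event observed via client.event.list() classifies as "system".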
Evidence for Bug #2 (Model Selection)

Related: opencode-sdk-python Issue #42

What the Python SDK sends (WRONG):

```
{ "modelID": "big-pickle", "providerID": "opencode/big-pickle", "parts": [...] }
```

What the server expects (CORRECT):

```
{ "model": { "providerID": "opencode", "modelID": "big-pickle" }, "parts": [...] }
```

From SDK Issue #42 (session_chat_params.py): the Python SDK sends flat modelID/providerID fields, while the JS/TS SDK sends the nested model object, matching the official docs.
Environment

- SDK: opencode-ai (alpha/pre-release, installed with pip install --pre opencode-ai)
- Server: local OpenCode server (localhost:4096)
- Model: big-pickle (opencode provider)

References

- opencode-sdk-python Issue #42 (model selection)
- Align repository: https://github.com/Dekode1859/Align
I'm happy to provide additional details on request.
Report generated: 2026-04-05
Contact: [Dekode1859](https://github.com/Dekode1859)
Related Repository: https://github.com/Dekode1859/Align