feat: embed OpenCode web UI in worker detail view #314
Conversation
LLMs consistently produce tilde-prefixed paths (e.g. ~/Projects/foo) because that's what appears in conversation context. Path::canonicalize() doesn't expand tildes — that's a shell feature — so these paths fail with 'directory does not exist'. Expand ~ to the home directory before passing to the server pool.
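A minimal std-only sketch of this expansion (illustrative: the actual implementation may resolve the home directory via the `dirs` crate rather than the `HOME` environment variable assumed here):

```rust
use std::path::PathBuf;

/// Expand a leading "~" or "~/" using the HOME environment variable.
/// If HOME is unavailable, the original path is preserved unchanged.
fn expand_tilde(path: &str) -> PathBuf {
    let home = std::env::var_os("HOME").map(PathBuf::from);
    match (path, home) {
        ("~", Some(h)) => h,
        (p, Some(h)) if p.starts_with("~/") => h.join(&p[2..]),
        // No home dir resolved, or no tilde prefix: pass the path through.
        (p, _) => PathBuf::from(p),
    }
}

fn main() {
    // Absolute paths are untouched.
    assert_eq!(expand_tilde("/absolute/path"), PathBuf::from("/absolute/path"));
    // "~/x" expands under HOME when it is set; otherwise it is preserved.
    let expanded = expand_tilde("~/Projects/foo");
    match std::env::var_os("HOME") {
        Some(h) => assert_eq!(expanded, PathBuf::from(h).join("Projects/foo")),
        None => assert_eq!(expanded, PathBuf::from("~/Projects/foo")),
    }
    println!("ok");
}
```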
Add a reverse proxy and iframe-based embed so users can see the full
OpenCode coding interface (diffs, file tree, timeline, terminal) when
viewing OpenCode worker runs in the workers tab.
Backend:
- New migration adding opencode_session_id and opencode_port to worker_runs
- New ProcessEvent::OpenCodeSessionCreated emitted after session creation
- Reverse proxy at /api/opencode/{port}/{*path} forwarding to localhost
OpenCode servers with port range validation and SSE streaming support
- Worker list/detail API responses now include OpenCode metadata
Frontend:
- Tabbed detail view (OpenCode / Transcript) for OpenCode workers
- OpenCodeEmbed component with health check and iframe loading states
- Falls back to transcript tab when server is unreachable
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings, and use review commands or the checkboxes below to manage reviews.
Walkthrough
Adds OpenCode session metadata through DB, runtime events, persistence, API fields, a reverse proxy, spawn/path handling, and a frontend tab + iframe embed to surface OpenCode sessions in the UI.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 8
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@interface/src/routes/AgentWorkers.tsx`:
- Around line 570-576: The health-check fetch and the iframe embed in
AgentWorkers.tsx are failing under Bearer-token API auth because they call
/api/opencode/... directly (the fetch at the block that sets setState and the
iframe src around lines ~607-615) without an Authorization header and iframes
cannot set Bearer headers; fix by routing these requests through a backend proxy
endpoint that attaches the server-side Bearer token (or returns a
tokenized/self-authenticating URL): implement a new server route (e.g., GET
/internal/proxy/opencode/:port/global/health and GET
/internal/proxy/opencode/:port/embed) which performs server-side fetch to
/api/opencode/:port/... with the required Authorization header and returns the
response (or the embed HTML/URL), then update the fetch in AgentWorkers.tsx to
call the new proxy health endpoint and change the iframe src to the proxy embed
URL so auth is applied server-side.
- Around line 19-24: Remove the unused function base64UrlEncode from
AgentWorkers.tsx to satisfy the noUnusedLocals rule: locate the declaration of
function base64UrlEncode (the RFC 4648 base64url encoder) and delete the entire
function definition and its comment since it is never referenced; if you believe
encoding is needed later, alternatively export it from a util module and import
where used, but do not leave an unused local function in this file.
In `@src/agent/channel_dispatch.rs`:
- Around line 666-682: The expand_tilde function currently falls back to
filesystem root when dirs::home_dir() returns None; change it so that if the
home directory cannot be resolved you preserve and return the original path
instead of "/" — i.e., in the "~" case and the "~/..." case, only join with the
resolved home_dir when dirs::home_dir() returns Some(dir), otherwise return
std::path::PathBuf::from(path). Update the logic in expand_tilde to check
dirs::home_dir() and fall back to PathBuf::from(path) rather than
PathBuf::from("/") to avoid operating on the filesystem root.
In `@src/agent/channel.rs`:
- Around line 1577-1584: The OpenCode metadata UPDATE can silently affect 0 rows
if the INSERT from log_worker_started hasn't completed; in log_opencode_metadata
check the query result's rows_affected() and handle 0-rows cases by either
retrying a small number of times with backoff or switching to an upsert (INSERT
... ON CONFLICT DO UPDATE / INSERT OR REPLACE) so metadata isn't lost; update
the ProcessEvent::OpenCodeSessionCreated handling notes and add test(s) for the
race between log_worker_started and log_opencode_metadata to document the
terminal-state reasoning per guidelines.
In `@src/api/opencode_proxy.rs`:
- Around line 44-47: The handler is wrongly stripping "/api/opencode/" from the
incoming path, but Axum nests remove the "/api" segment so the handler actually
receives paths starting with "/opencode/"; update the call to path.strip_prefix
to use "/opencode/" (replace the path.strip_prefix("/api/opencode/") usage that
sets after_prefix) and adjust the preceding comment to state that the handler
receives "/opencode/{port}/{path}" so valid requests are parsed correctly.
In `@src/api/server.rs`:
- Around line 95-98: The opencode proxy route currently trusts the numeric
{port} and only enforces a range; update the opencode_proxy::opencode_proxy
handler to reject any request whose port is not explicitly bound to an active
authenticated worker/session. Inside opencode_proxy, resolve the incoming
request's identity (session/agent) and look up the session->port mapping (or
worker registry) to confirm the port belongs to that active worker; if it does
not, return a 403/400 and do not forward. Ensure the lookup and binding check
are performed before any socket connect/forward logic so only agent-scoped,
active worker ports are proxied.
- Around line 95-98: The handler opencode_proxy (opencode_proxy::opencode_proxy)
currently strips "/api/opencode/" but because the route is nested under "/api"
Axum already removes that prefix; update the handler to strip "/opencode/"
instead (or better, use the "{*path}" path parameter provided by the route to
get the remaining path rather than manually striping prefixes) so the path
parsing succeeds when the route defined in server.rs
("/opencode/{port}/{*path}") is invoked.
In `@src/conversation/history.rs`:
- Around line 375-393: log_opencode_metadata currently does a fire-and-forget
UPDATE that can silently fail if the worker row isn't inserted yet (see
log_worker_started); change it to detect rows_affected == 0 and retry (with a
short backoff and a reasonable total timeout) until the row exists or
timeout/attempts exhausted, logging on persistent failure; implement this inside
the tokio::spawn task around the sqlx::query(...) execution (check the execute
result for .rows_affected()) and avoid unbounded retries by using
exponential/backoff sleep and a max attempts or deadline.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 5389cb66-848e-4719-915b-b5ebc017ea15
📒 Files selected for processing (12)
- interface/src/api/client.ts
- interface/src/routes/AgentWorkers.tsx
- migrations/20260303000001_opencode_worker_metadata.sql
- src/agent/channel.rs
- src/agent/channel_dispatch.rs
- src/api.rs
- src/api/opencode_proxy.rs
- src/api/server.rs
- src/api/workers.rs
- src/conversation/history.rs
- src/lib.rs
- src/opencode/worker.rs
```ts
/** RFC 4648 base64url encoding (no padding), matching OpenCode's directory encoding. */
function base64UrlEncode(value: string): string {
  const bytes = new TextEncoder().encode(value);
  const binary = Array.from(bytes, (b) => String.fromCharCode(b)).join("");
  return btoa(binary).replace(/\+/g, "-").replace(/\//g, "_").replace(/=/g, "");
}
```
🧩 Analysis chain
🏁 Scripts executed:

```bash
# Verify call sites for base64UrlEncode in TS/TSX files
rg -nP --type=ts --type=tsx '\bbase64UrlEncode\s*\('

# Search for base64UrlEncode usage more broadly, and inspect the file
rg 'base64UrlEncode' --type=js interface/
cat -n interface/src/routes/AgentWorkers.tsx | head -50

# Search the entire repo for base64UrlEncode and locate TS/ESLint configs
rg 'base64UrlEncode' .
find . -maxdepth 2 -name "tsconfig.json" -o -name ".eslintrc*" -o -name "eslint.config.*" | head -10

# Check tsconfig for unused-variable checking
cat ./interface/tsconfig.json
```
Remove base64UrlEncode to unblock CI.
The function is declared but never called, triggering TS6133 in the interface/tsconfig.json where noUnusedLocals is enabled. CI will fail until this is removed.
Fix

```diff
-/** RFC 4648 base64url encoding (no padding), matching OpenCode's directory encoding. */
-function base64UrlEncode(value: string): string {
-  const bytes = new TextEncoder().encode(value);
-  const binary = Array.from(bytes, (b) => String.fromCharCode(b)).join("");
-  return btoa(binary).replace(/\+/g, "-").replace(/\//g, "_").replace(/=/g, "");
-}
```
🧰 Tools
🪛 GitHub Actions: Interface CI
[error] 20-20: TS6133: 'base64UrlEncode' is declared but its value is never read. Command 'bunx tsc --noEmit' exited with code 2.
```ts
fetch(`/api/opencode/${port}/global/health`, { signal: controller.signal })
  .then((response) => {
    setState(response.ok ? "ready" : "error");
  })
  .catch(() => {
    setState("error");
  });
```
OpenCode embed requests will fail under Bearer-token API auth.
Lines 276-281 of src/api/server.rs enforce `Authorization: Bearer ...` for `/api/*`. Here, line 570 uses a plain fetch without auth headers, and the iframe request (line 611) cannot send Bearer headers, so secured deployments will return 401.
Also applies to: 607-615
```rust
/// Expand a leading `~` or `~/` in a path to the user's home directory.
///
/// LLMs consistently produce tilde-prefixed paths because that's what appears
/// in conversation context. `std::path::Path::canonicalize()` doesn't expand
/// tildes (that's a shell feature), so paths like `~/Projects/foo` fail with
/// "directory does not exist". This handles the common cases.
fn expand_tilde(path: &str) -> std::path::PathBuf {
    if path == "~" {
        dirs::home_dir().unwrap_or_else(|| std::path::PathBuf::from("/"))
    } else if let Some(rest) = path.strip_prefix("~/") {
        dirs::home_dir()
            .unwrap_or_else(|| std::path::PathBuf::from("/"))
            .join(rest)
    } else {
        std::path::PathBuf::from(path)
    }
}
```
Do not fall back to filesystem root when the home dir is unavailable.
Returning `/` for `~` or `~/...` can run OpenCode against an unintended directory. Preserve the original path when the home directory cannot be resolved.
🔧 Proposed fix

```diff
 fn expand_tilde(path: &str) -> std::path::PathBuf {
     if path == "~" {
-        dirs::home_dir().unwrap_or_else(|| std::path::PathBuf::from("/"))
+        dirs::home_dir().unwrap_or_else(|| std::path::PathBuf::from(path))
     } else if let Some(rest) = path.strip_prefix("~/") {
         dirs::home_dir()
-            .unwrap_or_else(|| std::path::PathBuf::from("/"))
-            .join(rest)
+            .map(|home| home.join(rest))
+            .unwrap_or_else(|| std::path::PathBuf::from(path))
     } else {
         std::path::PathBuf::from(path)
     }
 }
```
```rust
ProcessEvent::OpenCodeSessionCreated {
    worker_id,
    session_id,
    port,
    ..
} => {
    run_logger.log_opencode_metadata(*worker_id, session_id, *port);
}
```
🧩 Analysis chain
🏁 Scripts executed:

```bash
# Find and examine the logger implementation for both functions
rg -n "fn log_worker_started|fn log_opencode_metadata" src/conversation/history.rs -A 15

# Examine the ProcessEvent handler context to understand sequencing
rg -n "ProcessEvent::OpenCodeSessionCreated|ProcessEvent::WorkerStarted" src/agent/channel.rs -B 5 -A 10

# Check how these logger methods are called in the worker lifecycle
rg -n "log_worker_started|log_opencode_metadata" src/agent/channel.rs -B 3 -A 2

# Look for worker_runs table schema and insert/update patterns
rg -n "INSERT INTO worker_runs|UPDATE worker_runs" src/ -C 3

# Get the complete implementation of log_worker_started
sed -n '305,340p' src/conversation/history.rs

# Check for explicit ordering between WorkerStarted and OpenCodeSessionCreated
rg -n "ProcessEvent::WorkerStarted|ProcessEvent::OpenCodeSessionCreated" src/ -B 2 -A 2

# Verify whether sqlx surfaces an error or just rows_affected for a 0-row UPDATE
rg -n "execute.*worker_runs" src/conversation/history.rs -A 2
```
Handle zero-row UPDATE race in OpenCode metadata persistence.
Both log_worker_started (INSERT) and log_opencode_metadata (UPDATE) are fire-and-forget spawned tasks with no ordering guarantee. If the ProcessEvent::OpenCodeSessionCreated handler executes before the worker row is inserted, the `UPDATE worker_runs ... WHERE id = ?` will succeed but affect 0 rows, and the current code only logs on query errors, not on 0 rows affected, so the metadata is silently lost.
Suggested fix: check rows_affected() in log_opencode_metadata and either retry, or coalesce with an upsert pattern (e.g., INSERT ... ON CONFLICT DO UPDATE).
Per coding guidelines, async/stateful paths like the worker lifecycle must include explicit race/terminal-state reasoning in the PR summary and run targeted tests beyond just gate-pr.
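The bounded-retry shape suggested above can be sketched std-only (no sqlx or tokio here; `try_update` is a hypothetical stand-in for executing the UPDATE and reporting rows affected, and the blocking sleep would be `tokio::time::sleep` in the real async task):

```rust
use std::{thread, time::Duration};

/// Generic bounded retry: run `try_update` until it reports at least one
/// affected row, sleeping with linear backoff between attempts.
/// Returns true if the update eventually took effect.
fn update_with_retry(mut try_update: impl FnMut() -> u64, max_attempts: u32) -> bool {
    for attempt in 0..max_attempts {
        if try_update() > 0 {
            return true;
        }
        if attempt + 1 < max_attempts {
            // Backoff grows with the attempt count; bounded, never infinite.
            thread::sleep(Duration::from_millis(5 * u64::from(attempt + 1)));
        }
    }
    false
}

fn main() {
    // Simulate the race: the worker row "appears" on the third attempt.
    let mut calls = 0;
    let ok = update_with_retry(
        || {
            calls += 1;
            if calls >= 3 { 1 } else { 0 }
        },
        5,
    );
    assert!(ok);
    assert_eq!(calls, 3);
    println!("ok");
}
```

This keeps the fire-and-forget shape but converts a silent 0-row update into either a late success or a logged, bounded failure.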
```rust
let after_prefix = match path.strip_prefix("/api/opencode/") {
    Some(rest) => rest,
    None => return (StatusCode::BAD_REQUEST, "invalid proxy path").into_response(),
};
```
🧩 Analysis chain
🏁 Scripts executed:

```bash
# Find the route setup in server.rs
find . -name "server.rs" -path "*/src/api/*" -type f

# Search for route mounting patterns related to /api and /opencode
rg -n "nest|merge|route" src/api/server.rs -A 3 -B 1 | head -80

# Examine the opencode_proxy.rs handler at lines 44-47
cat -n src/api/opencode_proxy.rs | sed -n '40,50p'

# Check for other path prefix usages
rg -n "strip_prefix" src/api/opencode_proxy.rs -B 2 -A 2

# Find where api_routes is used after its definition
rg -n "api_routes" src/api/server.rs -A 2 -B 1

# Check the full handler function for context
cat -n src/api/opencode_proxy.rs | head -60
```
Path prefix mismatch breaks opencode proxy requests.
When api_routes is mounted with .nest("/api", api_routes) at server.rs:238, Axum strips the /api prefix before routing to nested handlers. The opencode proxy handler receives /opencode/{port}/{path}, not /api/opencode/{port}/{path}. Line 44's strip_prefix("/api/opencode/") therefore fails on every request, returning BAD_REQUEST for all valid traffic.
🔧 Fix: strip the prefix Axum actually passes

```diff
-    let after_prefix = match path.strip_prefix("/api/opencode/") {
+    let after_prefix = match path.strip_prefix("/opencode/") {
         Some(rest) => rest,
         None => return (StatusCode::BAD_REQUEST, "invalid proxy path").into_response(),
     };
```

Also update the comment at line 43 to reflect the actual path format received by the handler.
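The prefix behavior is easy to verify in isolation. A std-only sketch (the function name and the example port are illustrative, not from the handler):

```rust
/// Parse "/opencode/{port}/{rest}" as the nested handler actually receives it,
/// i.e. after Axum has already stripped the "/api" mount prefix.
fn parse_proxy_path(path: &str) -> Option<(u16, &str)> {
    let after = path.strip_prefix("/opencode/")?;
    let (port, rest) = after.split_once('/')?;
    Some((port.parse().ok()?, rest))
}

fn main() {
    // The nested router sees the stripped path, so this parses...
    assert_eq!(
        parse_proxy_path("/opencode/41500/global/health"),
        Some((41500, "global/health"))
    );
    // ...while stripping the full external prefix "/api/opencode/" would
    // reject every request the router actually delivers.
    assert_eq!(parse_proxy_path("/api/opencode/41500/global/health"), None);
    println!("ok");
}
```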
```rust
.route(
    "/opencode/{port}/{*path}",
    any(opencode_proxy::opencode_proxy),
)
```
Port-range validation alone is insufficient for this proxy endpoint.
The route exposes a reverse proxy, and current handler logic only checks numeric range. That still allows authenticated callers to probe arbitrary localhost services in 10000..=60000. Bind port to an active worker/session (agent-scoped) before forwarding.
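The binding check described here can be sketched std-only (the registry shape and status codes are illustrative; the real lookup would consult the worker registry or session store):

```rust
use std::collections::HashMap;

/// Authorize a proxy target: the port must be in the allowed range AND
/// bound to an active worker. Range validation alone still lets callers
/// probe arbitrary localhost services.
fn authorize_port(registry: &HashMap<u16, String>, port: u16) -> Result<&str, u16> {
    if !(10000..=60000).contains(&port) {
        return Err(400); // outside the OpenCode server port range
    }
    match registry.get(&port) {
        Some(worker_id) => Ok(worker_id.as_str()),
        None => Err(403), // in range, but not bound to an active worker: refuse
    }
}

fn main() {
    let mut registry = HashMap::new();
    registry.insert(41500u16, "worker-abc".to_string());
    assert_eq!(authorize_port(&registry, 41500), Ok("worker-abc"));
    assert_eq!(authorize_port(&registry, 41501), Err(403)); // unbound local port
    assert_eq!(authorize_port(&registry, 80), Err(400)); // outside range
    println!("ok");
}
```

Performing this lookup before any socket connect keeps the proxy from ever dialing an unregistered port.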
🧩 Analysis chain
🏁 Scripts executed:

```bash
# Check the route structure and nesting in server.rs
fd "server.rs" --type f | head -5
rg -n "route\(" src/api/server.rs -A 2 -B 1 | head -40
rg -n "nest\(" src/api/server.rs -A 3 -B 1

# Find and read the proxy handler
fd "opencode_proxy.rs" --type f
cat -n src/api/opencode_proxy.rs

# Check the axum version in Cargo.toml
rg "axum" Cargo.toml

# Locate the proxy route and look for tests of this functionality
rg -n "opencode" src/api/server.rs -B 2 -A 2
rg -l "opencode_proxy" --type rs
rg "opencode" --type rs | grep -i test
```

🌐 Web query: axum 0.8 nested router — does request.uri().path() return the full or the stripped path?

💡 Result: In axum 0.8, when a router is mounted with Router::nest(...), the request URI seen inside the nested router has the matched prefix stripped. So req.uri().path() (or the Uri extractor) returns the stripped path, not the full original path. If you need the full original path, use axum::extract::OriginalUri (feature: original-uri) and read OriginalUri.0.path(), available both as an extractor and via req.extensions(). If you need the nest mount point (the stripped prefix), use axum::extract::NestedPath. Example (middleware):

```rust
use axum::extract::OriginalUri;

let path = if let Some(OriginalUri(uri)) = req.extensions().get::<OriginalUri>() {
    uri.path()
} else {
    req.uri().path()
};
```

Sources: Router "How the URI changes" note (prefix stripped, use OriginalUri); OriginalUri docs and middleware example; NestedPath docs.

🏁 Follow-up scripts executed:

```bash
# Check for integration tests and path-handling workarounds
find . -name "*.rs" -type f | xargs grep -l "opencode_proxy"
rg "OriginalUri" src/api/

# Check git history for when the proxy was added
git log --oneline src/api/opencode_proxy.rs | head -5
```
Proxy path parsing will fail for all requests due to axum nested router path stripping.
This route is nested under /api, and in axum 0.8 Router::nest() strips the matched prefix before passing the request to the nested router. The handler will receive /opencode/{port}/{path} but tries to strip_prefix("/api/opencode/"), which will always fail and reject all requests with BAD_REQUEST "invalid proxy path".
Fix in src/api/opencode_proxy.rs

```diff
-    let after_prefix = match path.strip_prefix("/api/opencode/") {
+    let after_prefix = match path.strip_prefix("/opencode/") {
         Some(rest) => rest,
         None => return (StatusCode::BAD_REQUEST, "invalid proxy path").into_response(),
     };
```
Verify each finding against the current code and only fix it if needed.
In `@src/api/server.rs` around lines 95 - 98, The handler opencode_proxy
(opencode_proxy::opencode_proxy) currently strips "/api/opencode/" but because
the route is nested under "/api" Axum already removes that prefix; update the
handler to strip "/opencode/" instead (or better, use the "{*path}" path
parameter provided by the route to get the remaining path rather than manually
striping prefixes) so the path parsing succeeds when the route defined in
server.rs ("/opencode/{port}/{*path}") is invoked.
```rust
pub fn log_opencode_metadata(&self, worker_id: WorkerId, session_id: &str, port: u16) {
    let pool = self.pool.clone();
    let id = worker_id.to_string();
    let session_id = session_id.to_string();

    tokio::spawn(async move {
        if let Err(error) = sqlx::query(
            "UPDATE worker_runs SET opencode_session_id = ?, opencode_port = ? WHERE id = ?",
        )
        .bind(&session_id)
        .bind(port as i32)
        .bind(&id)
        .execute(&pool)
        .await
        {
            tracing::warn!(%error, worker_id = %id, "failed to persist OpenCode metadata");
        }
    });
}
```
log_opencode_metadata can lose updates due to a write-order race.
log_worker_started and this method are both fire-and-forget. If this UPDATE runs before the worker row exists, rows_affected = 0 and the metadata is silently lost.
Suggested resilient update (retry on zero rows)

```diff
 pub fn log_opencode_metadata(&self, worker_id: WorkerId, session_id: &str, port: u16) {
     let pool = self.pool.clone();
     let id = worker_id.to_string();
     let session_id = session_id.to_string();
     tokio::spawn(async move {
-        if let Err(error) = sqlx::query(
-            "UPDATE worker_runs SET opencode_session_id = ?, opencode_port = ? WHERE id = ?",
-        )
-        .bind(&session_id)
-        .bind(port as i32)
-        .bind(&id)
-        .execute(&pool)
-        .await
-        {
-            tracing::warn!(%error, worker_id = %id, "failed to persist OpenCode metadata");
-        }
+        for attempt in 0..5 {
+            match sqlx::query(
+                "UPDATE worker_runs SET opencode_session_id = ?, opencode_port = ? WHERE id = ?",
+            )
+            .bind(&session_id)
+            .bind(i32::from(port))
+            .bind(&id)
+            .execute(&pool)
+            .await
+            {
+                Ok(result) if result.rows_affected() > 0 => return,
+                Ok(_) if attempt < 4 => {
+                    tokio::time::sleep(std::time::Duration::from_millis(25)).await;
+                }
+                Ok(_) => {
+                    tracing::warn!(worker_id = %id, "worker row not found while persisting OpenCode metadata");
+                    return;
+                }
+                Err(error) => {
+                    tracing::warn!(%error, worker_id = %id, "failed to persist OpenCode metadata");
+                    return;
+                }
+            }
+        }
     });
 }
```

As per coding guidelines for **/*.rs: "For changes in async/stateful paths … include explicit race/terminal-state reasoning …".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/conversation/history.rs` around lines 375 - 393, log_opencode_metadata
currently does a fire-and-forget UPDATE that can silently fail if the worker row
isn't inserted yet (see log_worker_started); change it to detect rows_affected
== 0 and retry (with a short backoff and a reasonable total timeout) until the
row exists or timeout/attempts exhausted, logging on persistent failure;
implement this inside the tokio::spawn task around the sqlx::query(...)
execution (check the execute result for .rows_affected()) and avoid unbounded
retries by using exponential/backoff sleep and a max attempts or deadline.
Actionable comments posted: 1
♻️ Duplicate comments (1)
interface/src/routes/AgentWorkers.tsx (1)
Lines 570-576: ⚠️ Potential issue | 🟠 Major — OpenCode embed path still breaks in Bearer-only API auth setups.
The health-check fetch at line 570 and the iframe at lines 607–615 both hit /api/opencode/... directly. If /api/* requires Authorization: Bearer, the fetch can 401 and the iframe cannot attach Bearer headers, so the embed won't load.
#!/bin/bash
set -euo pipefail
echo "== Check API auth enforcement and OpenCode route wiring =="
rg -n -C3 '/api/|Authorization|Bearer|auth' src/api/server.rs
echo
echo "== Check OpenCode proxy handler =="
rg -n -C3 'opencode_proxy|/api/opencode|PORT_MIN|PORT_MAX' src/api/opencode_proxy.rs src/api/server.rs
echo
echo "== Check frontend direct OpenCode calls =="
rg -n -C2 '/api/opencode/' interface/src/routes/AgentWorkers.tsx
Also applies to: lines 607-615
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@interface/src/routes/AgentWorkers.tsx` around lines 570 - 576, The frontend is calling /api/opencode/... directly (see AgentWorkers fetch using controller.signal and the iframe src block), which fails under Bearer-only API auth because the iframe can’t send Authorization headers; instead route these requests through the server-side OpenCode proxy that can inject the Bearer token from the authenticated session. Update AgentWorkers to call the proxy path (e.g., /opencode/<port>/health and /opencode/<port>/embed) for both the health-check fetch and iframe src, and ensure the server-side handler (opencode_proxy / opencode_proxy.rs or the handler referenced in server.rs) forwards to /api/opencode/... attaching the Authorization header from the user session or using server credentials so the embed and health check work without the client needing to set Bearer headers.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1b9f6309-eefc-4a8b-bc43-3478452c13cf
📒 Files selected for processing (2)
interface/src/routes/AgentWorkers.tsx
src/api/opencode_proxy.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- src/api/opencode_proxy.rs
useEffect(() => {
  setState("loading");
  const controller = new AbortController();

  fetch(`/api/opencode/${port}/global/health`, {signal: controller.signal})
    .then((response) => {
      setState(response.ok ? "ready" : "error");
    })
    .catch(() => {
      setState("error");
    });
Ignore aborted health-check requests to avoid transient false error state.
When switching workers quickly, an aborted request can still land in catch and set "error", causing a flicker before the next request resolves.
Suggested fix
useEffect(() => {
setState("loading");
const controller = new AbortController();
fetch(`/api/opencode/${port}/global/health`, {signal: controller.signal})
.then((response) => {
- setState(response.ok ? "ready" : "error");
+ if (!controller.signal.aborted) {
+ setState(response.ok ? "ready" : "error");
+ }
})
- .catch(() => {
- setState("error");
+ .catch((error) => {
+ if (error?.name === "AbortError" || controller.signal.aborted) return;
+ setState("error");
});
return () => controller.abort();
}, [port, sessionId]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@interface/src/routes/AgentWorkers.tsx` around lines 566 - 576, The
health-check useEffect should ignore aborted fetches to avoid transient "error"
state: in the catch handler for the fetch to
`/api/opencode/${port}/global/health` (inside the useEffect in AgentWorkers.tsx)
check whether the thrown error is an abort (e.g., error.name === 'AbortError' or
controller.signal.aborted) and if so do nothing, otherwise call
setState("error"); also ensure the effect returns a cleanup that calls
controller.abort() so stale requests are cancelled when port/worker changes. Use
the existing AbortController, setState, and fetch call sites to implement this
change.
The OpenCode SPA is served from app.opencode.ai via a catch-all proxy in
the OpenCode server. Asset URLs in the HTML are absolute paths like
/assets/index.js which resolve against the iframe origin (Spacebot),
not the proxy path. Similarly, the SDK uses location.origin for API
calls, bypassing our proxy entirely.
Fix by intercepting HTML responses and injecting:
- <base href="/api/opencode/{port}/"> so asset URLs resolve through
the proxy path
- A script that sets localStorage defaultServerUrl so the OpenCode SDK
routes all API fetch calls through the proxy
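The two injections described above can be sketched as a small HTML-rewriting helper. This is a hedged illustration only: `inject_proxy_base` is a hypothetical name, and the real proxy also has to check content types and handle streamed bodies; the `defaultServerUrl` localStorage key is the one named in this commit.

```rust
// Hedged sketch of the HTML rewrite described above. inject_proxy_base is
// a hypothetical helper, not the actual proxy code.
fn inject_proxy_base(html: &str, port: u16) -> String {
    let injected = format!(
        "<base href=\"/api/opencode/{port}/\"><script>localStorage.setItem(\"defaultServerUrl\", \"/api/opencode/{port}\");</script>"
    );
    match html.find("<head>") {
        // Insert immediately after the opening <head> tag so the base
        // applies to every subsequent relative URL.
        Some(position) => {
            let insert_at = position + "<head>".len();
            format!("{}{}{}", &html[..insert_at], injected, &html[insert_at..])
        }
        // No <head>? Prepend so the injection still takes effect.
        None => format!("{injected}{html}"),
    }
}

fn main() {
    let rewritten = inject_proxy_base("<html><head><title>x</title></head></html>", 4097);
    assert!(rewritten.contains("<base href=\"/api/opencode/4097/\">"));
    assert!(rewritten.starts_with("<html><head><base"));
}
```

A production rewrite would match `<head ...>` with attributes as well; the plain-string find keeps the sketch short.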
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/api/opencode_proxy.rs (1)
Lines 54-57: ⚠️ Potential issue | 🔴 Critical — Fix nested-route prefix parsing (line 54).
`api_routes` is nested under `/api`, so this handler receives `/opencode/{port}/{...}`. Stripping `"/api/opencode/"` causes valid requests to fail with `BAD_REQUEST`.
🔧 Proposed fix
- // Path format: /api/opencode/{port}/{rest...}
- let after_prefix = match path.strip_prefix("/api/opencode/") {
+ // Path format inside nested api router: /opencode/{port}/{rest...}
+ let after_prefix = match path.strip_prefix("/opencode/") {
      Some(rest) => rest,
      None => return (StatusCode::BAD_REQUEST, "invalid proxy path").into_response(),
  };
#!/bin/bash
# Verify nested route + parsed prefix mismatch
rg -nP 'nest\("/api",\s*api_routes\)' src/api/server.rs -C2
rg -nP 'route\("/opencode/\{port\}/\{\*path\}"' src/api/server.rs -C2
rg -nP 'strip_prefix\("/api/opencode/"\)|strip_prefix\("/opencode/"\)' src/api/opencode_proxy.rs -C2
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/opencode_proxy.rs` around lines 54 - 57, The handler incorrectly strips "/api/opencode/" from the incoming path (the variable computed as after_prefix via path.strip_prefix("/api/opencode/")), but api_routes is already nested under "/api" so the request path at this handler starts with "/opencode/..."; update the strip_prefix call to use "/opencode/" (i.e. replace path.strip_prefix("/api/opencode/") with path.strip_prefix("/opencode/")) so valid requests parse correctly and keep the existing error branch for the None case unchanged.
🧹 Nitpick comments (1)
src/api/opencode_proxy.rs (1)
59-61: Use descriptive tuple bindings for parsed path parts (Line 60).
`p`/`r` are abbreviated and reduce readability. Prefer explicit names.
✏️ Proposed refactor
- let (port_str, remainder) = match after_prefix.split_once('/') {
-     Some((p, r)) => (p, r),
+ let (port_str, remainder) = match after_prefix.split_once('/') {
+     Some((port_segment, path_remainder)) => (port_segment, path_remainder),
      None => (after_prefix, ""),
  };
As per coding guidelines, "Don't abbreviate variable names; use `queue` not `q`, `message` not `msg`, `channel` not `ch` (common abbreviations like `config` are acceptable)".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/opencode_proxy.rs` around lines 59 - 61, Replace the single-letter tuple bindings in the split_once match with descriptive names: when matching after_prefix.split_once('/'), change (p, r) to (port_part, path_remainder) (keeping the resulting variables port_str and remainder as needed) so the match reads Some((port_part, path_remainder)) => (port_part, path_remainder), and keep the None branch unchanged; update any usage sites if you rename the final variables to preserve clarity around port_str and remainder in the function that processes after_prefix.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 8407d76e-e062-48b5-9ba6-cfc183be0123
📒 Files selected for processing (1)
src/api/opencode_proxy.rs
let client = reqwest::Client::builder()
    .no_proxy()
    // No read timeout — SSE connections are long-lived
    .connect_timeout(std::time::Duration::from_secs(5))
    .build()
    .unwrap_or_default();
🧩 Analysis chain
🏁 Scripts executed (repository: spacedriveapp/spacebot):
# Check the file exists and read lines 70-90 for context
wc -l src/api/opencode_proxy.rs
head -90 src/api/opencode_proxy.rs | tail -30
# Check the function signature containing these lines
sed -n '60,90p' src/api/opencode_proxy.rs
# Get the function signature and context
sed -n '1,85p' src/api/opencode_proxy.rs | head -50
# Check the function declaration
rg -nP 'async fn|fn' src/api/opencode_proxy.rs -A 3 | head -40
# Understand unwrap_or_default behavior; check imports and other context
head -30 src/api/opencode_proxy.rs | grep -E "use|import"
# Check the full function to see error handling patterns used elsewhere
sed -n '48,120p' src/api/opencode_proxy.rs
Avoid silently falling back to a default reqwest client when builder fails.
Lines 78–83 use unwrap_or_default() on ClientBuilder::build(), which silently creates a default client on failure, losing the configured .no_proxy() and .connect_timeout(5s) settings. This violates the guideline: don't silently discard errors.
Handle this with a match expression and error logging, consistent with the error patterns already used in this function (e.g., body read and upstream send failures):
Suggested fix
- let client = reqwest::Client::builder()
+ let client = match reqwest::Client::builder()
.no_proxy()
// No read timeout — SSE connections are long-lived
.connect_timeout(std::time::Duration::from_secs(5))
.build()
- .unwrap_or_default();
+ {
+ Ok(client) => client,
+ Err(error) => {
+ tracing::error!(%error, "failed to build OpenCode proxy HTTP client");
+ return (StatusCode::INTERNAL_SERVER_ERROR, "proxy client init error").into_response();
+ }
+     };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/opencode_proxy.rs` around lines 78 - 83, The reqwest client builder
currently uses unwrap_or_default(), which silently discards builder errors and
loses .no_proxy() and .connect_timeout() settings; replace that with a matched
build() call that logs the build error (using the same logger used for body
read/upstream send failures) and returns/propagates an Err/HTTP 500 response
instead of falling back to a default client. Locate the
reqwest::Client::builder() invocation (the variable client) and change the
unwrap_or_default() to a match or map_err that logs the error with context
(e.g., "failed to build reqwest client") and returns the appropriate error
result consistent with the function's existing error handling.
// Forward headers, skipping hop-by-hop and host
for (name, value) in request.headers() {
    if name == header::HOST || is_hop_by_hop(name) {
        continue;
    }
    proxy_request = proxy_request.header(name.clone(), value.clone());
}
Do not forward Authorization/Cookie headers upstream (Line 87).
Forwarding inbound auth/session headers to 127.0.0.1:{port} can leak Spacebot credentials to the OpenCode process. Filter them explicitly.
🔒 Proposed fix
for (name, value) in request.headers() {
- if name == header::HOST || is_hop_by_hop(name) {
+ if name == header::HOST
+ || name == header::AUTHORIZATION
+ || name == header::COOKIE
+ || is_hop_by_hop(name)
+ {
continue;
}
proxy_request = proxy_request.header(name.clone(), value.clone());
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/opencode_proxy.rs` around lines 87 - 93, The loop that forwards
headers (iterating request.headers() and building proxy_request) incorrectly
forwards sensitive headers like Authorization and Cookie; update that loop in
src/api/opencode_proxy.rs to explicitly skip header::AUTHORIZATION and
header::COOKIE in addition to header::HOST and is_hop_by_hop(name) so
proxy_request is never given inbound auth/session headers; locate the for (name,
value) in request.headers() block and add checks for these headers before
calling proxy_request.header(...) to filter them out.
…follow-up loop to exit
Interactive OpenCode workers blocked the channel retrigger indefinitely. The channel only retriggered on WorkerComplete, which only fired when worker.run() returned, which blocked on input_rx.recv() waiting for follow-ups, which never came because the channel held the input_tx and only dropped it on WorkerComplete — a circular dependency. Fix by adding a WorkerInitialResult event, emitted when the initial task completes but the worker stays alive for follow-ups. The channel handles this like WorkerComplete (queues the result, retriggers) but does NOT remove the worker from the active set, preserving the follow-up capability.
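The circular wait described above, and the fix of emitting the initial result before blocking, can be sketched with std channels. A minimal sketch with stand-in names (`Event`, `run_worker`) rather than the real async `ProcessEvent` plumbing:

```rust
use std::sync::mpsc;
use std::thread;

// Simplified stand-ins for ProcessEvent variants.
enum Event {
    InitialResult(String),
    Complete,
}

// Worker: finish the initial task, then block waiting for follow-ups.
// Before the fix, only Complete was emitted -- and only after the
// follow-up loop exited, which required the channel to drop input_tx,
// which it only did on Complete: a circular wait.
fn run_worker(event_tx: mpsc::Sender<Event>, input_rx: mpsc::Receiver<String>) {
    // Fix: emit the initial result *before* blocking on follow-up input.
    event_tx.send(Event::InitialResult("task done".into())).ok();
    while let Ok(_follow_up) = input_rx.recv() {
        // handle follow-up work here
    }
    event_tx.send(Event::Complete).ok();
}

fn main() {
    let (event_tx, event_rx) = mpsc::channel();
    let (input_tx, input_rx) = mpsc::channel::<String>();
    let worker = thread::spawn(move || run_worker(event_tx, input_rx));

    // Channel side: retrigger on the initial result, without waiting for Complete.
    assert!(matches!(event_rx.recv().unwrap(), Event::InitialResult(_)));

    // Dropping the input sender ends the follow-up loop; Complete then arrives.
    drop(input_tx);
    assert!(matches!(event_rx.recv().unwrap(), Event::Complete));
    worker.join().unwrap();
}
```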
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/agent/channel.rs (1)
Lines 1577-1584: ⚠️ Potential issue | 🟠 Major — Handle the OpenCode metadata write-ordering race.
Line 1583 triggers log_opencode_metadata, but the worker-start insert and the metadata update are both fire-and-forget writes; if the update runs first, it can affect 0 rows and silently drop the metadata.
🐛 Suggested fix (in src/conversation/history.rs)
pub fn log_opencode_metadata(&self, worker_id: WorkerId, session_id: &str, port: u16) {
    let pool = self.pool.clone();
    let id = worker_id.to_string();
    let session_id = session_id.to_string();
    tokio::spawn(async move {
-        if let Err(error) = sqlx::query(
-            "UPDATE worker_runs SET opencode_session_id = ?, opencode_port = ? WHERE id = ?",
-        )
-        .bind(&session_id)
-        .bind(port as i32)
-        .bind(&id)
-        .execute(&pool)
-        .await
-        {
-            tracing::warn!(%error, worker_id = %id, "failed to persist OpenCode metadata");
-        }
+        match sqlx::query(
+            "UPDATE worker_runs SET opencode_session_id = ?, opencode_port = ? WHERE id = ?",
+        )
+        .bind(&session_id)
+        .bind(port as i32)
+        .bind(&id)
+        .execute(&pool)
+        .await
+        {
+            Ok(done) if done.rows_affected() == 0 => {
+                tracing::warn!(worker_id = %id, "OpenCode metadata update affected 0 rows; consider retry/upsert");
+            }
+            Ok(_) => {}
+            Err(error) => {
+                tracing::warn!(%error, worker_id = %id, "failed to persist OpenCode metadata");
+            }
+        }
    });
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/channel.rs` around lines 1577 - 1584, The OpenCode metadata write races with the worker-start insert because run_logger.log_opencode_metadata (called in ProcessEvent::OpenCodeSessionCreated) does a fire-and-forget update that can run before the insert and silently affect 0 rows; fix by making the write atomic/ordered: update log_opencode_metadata to perform an upsert or wrap the insert + metadata update in a single transaction (or have the update check rows affected and perform an insert if 0), or change the caller so the worker-start insert is awaited before calling log_opencode_metadata; reference run_logger.log_opencode_metadata, ProcessEvent::OpenCodeSessionCreated and the insert/update functions in conversation/history.rs when implementing the change.
Verify each finding against the current code and only fix it if needed. In `@src/agent/channel.rs` around lines 1577 - 1584, The OpenCode metadata write races with the worker-start insert because run_logger.log_opencode_metadata (called in ProcessEvent::OpenCodeSessionCreated) does a fire-and-forget update that can run before the insert and silently affect 0 rows; fix by making the write atomic/ordered: update log_opencode_metadata to perform an upsert or wrap the insert+metadata update in a single transaction (or have the update check rows affected and perform an insert if 0), or change the caller so the worker-start insert is awaited before calling log_opencode_metadata; reference run_logger.log_opencode_metadata, ProcessEvent::OpenCodeSessionCreated and the insert/update functions in conversation/history.rs when implementing the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib.rs`:
- Around line 200-215: The new ProcessEvent variants OpenCodeSessionCreated and
WorkerInitialResult are never forwarded because src/api/state.rs currently
swallows unknown variants with `_ => {}`; update the event dispatch path in the
function that matches ProcessEvent (the match with `_ => {}`) to add explicit
arms for OpenCodeSessionCreated { agent_id, worker_id, session_id, port } and
WorkerInitialResult { agent_id, worker_id, channel_id, result } that convert
them into the corresponding API/SSE payloads (or add new ApiEvent variants if
missing) and send/broadcast them to subscribers the same way other cases do;
ensure serialization types (ApiEvent enum or payload structs) are added/updated
if needed and the match no longer falls through to `_ => {}` for these variants.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 8f39e1c2-a368-4d74-9339-dd37cb66dc3a
📒 Files selected for processing (3)
src/agent/channel.rs
src/lib.rs
src/opencode/worker.rs
// Emit the initial result immediately so the channel can retrigger
// and tell the user what the worker found. Without this, the channel
// would block until the entire follow-up loop exits — which only
// happens when the channel drops the input sender, creating a deadlock.
let scrubbed_result = self.scrub_text(&result_text);
let scrubbed_result = crate::secrets::scrub::scrub_leaks(&scrubbed_result);
let _ = self.event_tx.send(ProcessEvent::WorkerInitialResult {
    agent_id: self.agent_id.clone(),
    worker_id: self.id,
    channel_id: self.channel_id.clone(),
    result: scrubbed_result,
});
Avoid let _ = when sending WorkerInitialResult.
Line 201 silently discards the send Result with let _ =. Use .ok() (or log the error) to match project error-handling rules.
🔧 Minimal fix
- let _ = self.event_tx.send(ProcessEvent::WorkerInitialResult {
+ self.event_tx.send(ProcessEvent::WorkerInitialResult {
agent_id: self.agent_id.clone(),
worker_id: self.id,
channel_id: self.channel_id.clone(),
result: scrubbed_result,
- });
+ }).ok();
As per coding guidelines: "Don't silently discard errors; no let _ = on Results except .ok() on channel sends where the receiver may be dropped".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/opencode/worker.rs` around lines 195 - 207, The send of
ProcessEvent::WorkerInitialResult currently uses "let _ =" which silently
discards the Result; change the send call in src/opencode/worker.rs (the
self.event_tx.send(...) that emits WorkerInitialResult after
scrub_text/scrub_leaks) to handle the Result per project rules — either append
.ok() on the send to explicitly ignore the dropped-receiver case or match the
Result and log the error (e.g., if let Err(e) = self.event_tx.send(...) {
warn!(...e) }) so errors are not silently discarded.
Instead of opaque 'working' and 'running: read' messages, derive meaningful descriptions from tool input fields:
- read/write/edit: show file path (e.g. 'running: src/api/server.rs')
- bash: show LLM description or command (e.g. 'running: check git status')
- grep: show pattern (e.g. 'running: search "TODO"')
- glob: show pattern (e.g. 'running: glob **/*.ts')
- task: show description
- completed tools: show 'done: <title>' instead of 'working'
The title field from ToolState::Running is often None for most tools. Fall back to extracting filePath, description, pattern, or command from the input JSON to build a status the user can actually follow.
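The fallback order above can be sketched as a small helper. `describe_tool` is a hypothetical name, and the real code reads these fields out of the tool-call input JSON; plain `Option<&str>` parameters stand in for that here:

```rust
// Hypothetical sketch of the status-derivation fallback described above.
fn describe_tool(
    tool: &str,
    title: Option<&str>,
    file_path: Option<&str>,
    description: Option<&str>,
    pattern: Option<&str>,
    command: Option<&str>,
) -> String {
    // Prefer the explicit title when the tool provides one.
    if let Some(title) = title {
        return format!("running: {title}");
    }
    // Otherwise fall back to the most informative input field per tool.
    let detail = match tool {
        "read" | "write" | "edit" => file_path.map(str::to_owned),
        "bash" => description.or(command).map(str::to_owned),
        "grep" => pattern.map(|pattern| format!("search \"{pattern}\"")),
        "glob" => pattern.map(|pattern| format!("glob {pattern}")),
        "task" => description.map(str::to_owned),
        _ => None,
    };
    match detail {
        Some(detail) => format!("running: {detail}"),
        None => "working".to_owned(),
    }
}

fn main() {
    assert_eq!(
        describe_tool("read", None, Some("src/api/server.rs"), None, None, None),
        "running: src/api/server.rs"
    );
    assert_eq!(
        describe_tool("grep", None, None, None, Some("TODO"), None),
        "running: search \"TODO\""
    );
    assert_eq!(describe_tool("mystery", None, None, None, None, None), "working");
}
```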
Two bugs fixed:
1. Result text only captured the last Part::Text fragment. For multi-step sessions (code reviews, complex tasks), the final text would be something like 'The full report is above' — referring to content that was overwritten. Now all text parts are accumulated into all_text and concatenated as the full result.
2. OpenCode worker transcripts were never persisted. The transcript column stayed NULL because OpenCode workers don't use Rig's message history (which is what the builtin Worker::persist_transcript serializes). Now the worker builds TranscriptStep entries directly from SSE events (text parts, tool calls, tool results) and persists them on completion via channel_dispatch.
Stream finalized OpenCode parts to the frontend in real time so the dashboard shows a live, updating transcript of OpenCode sessions.
Rust:
- Add OpenCodePart type (Text, Tool, StepStart, StepFinish) with conversion from the internal Part enum
- Add ProcessEvent::OpenCodePartUpdated + ApiEvent + SSE event name
- Emit OpenCodePartUpdated on every message.part.updated SSE event
- On completion, fetch the full message history from the OpenCode API and convert it to TranscriptStep[] for persistence (replaces SSE-based transcript accumulation)
- Simplify EventState: remove all_text/transcript fields, keep last_text for status and initial result delivery
Frontend:
- Add OpenCodePart/OpenCodeToolState/OpenCodePartUpdatedEvent types
- Handle opencode_part_updated in useLiveContext with per-worker insertion-ordered Maps keyed by part ID (upsert semantics)
- OpenCode-native transcript renderer: text as markdown, tools as collapsible cards with status/input/output, step boundaries as dividers. Falls back to the TranscriptStep view for completed workers.
Interactive workers (OpenCode and builtin) now emit a dedicated WorkerIdle event when waiting for follow-up input, instead of perpetually showing 'running'. The idle state flows through the full event pipeline:
Backend:
- ProcessEvent::WorkerIdle variant with agent_id, worker_id, channel_id
- ApiEvent::WorkerIdle → SSE 'worker_idle' event
- DB persistence: 'idle' status in worker_runs, resumes to 'running'
- Startup reconciliation treats idle the same as running (marks failed)
- SpacebotHook::send_worker_idle() for builtin workers
- OpenCodeWorker::send_idle() for OpenCode workers
Frontend:
- ActiveWorker.isIdle flag, set by the worker_idle SSE handler
- Cleared automatically when worker_status arrives (resume)
- Blue 'idle' badge (new Badge variant) replaces the amber 'running' badge
- LiveDuration counter stops for idle workers
- 'Waiting for follow-up input' indicator in the detail view
- Idle filter added to the worker list status pills
… fails
The OpenCode worker fetches the full transcript from the OpenCode server's messages API after completion. If that API call fails (server recycled, process exited, timeout), the transcript was lost — the DB row kept a NULL transcript blob, and after a page refresh the UI showed 'Waiting for first tool call...' instead of the actual conversation history.
Fix: accumulate OpenCodeParts in EventState during SSE processing. When get_messages() fails, convert the accumulated parts to TranscriptSteps via a new convert_opencode_parts() function (with deduplication, since parts are upserted by ID through state transitions like Pending → Running → Completed). This ensures every OpenCode worker persists a transcript.
Also fix the frontend fallback: only show the 'Waiting for first tool call' spinner when there's actual SSE state (liveWorker exists), not for stale 'running' DB rows after a page refresh.
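The deduplication that the part-accumulation fallback needs can be sketched like this — a hypothetical `Part` struct and `dedupe_parts` helper, keeping only the last upserted state per ID while preserving first-seen order:

```rust
use std::collections::HashMap;

// Hypothetical sketch of the deduplication described above: SSE upserts
// each part by ID through state transitions (Pending -> Running ->
// Completed), so keep only the last occurrence per ID, in first-seen order.
#[derive(Clone, Debug, PartialEq)]
struct Part {
    id: String,
    state: String,
}

fn dedupe_parts(parts: &[Part]) -> Vec<Part> {
    let mut first_seen_order: Vec<String> = Vec::new();
    let mut latest_by_id: HashMap<String, Part> = HashMap::new();
    for part in parts {
        if !latest_by_id.contains_key(&part.id) {
            first_seen_order.push(part.id.clone());
        }
        // Later occurrences overwrite earlier states for the same ID.
        latest_by_id.insert(part.id.clone(), part.clone());
    }
    first_seen_order
        .into_iter()
        .map(|id| latest_by_id.remove(&id).expect("id recorded on first sight"))
        .collect()
}

fn main() {
    let parts = vec![
        Part { id: "a".into(), state: "pending".into() },
        Part { id: "a".into(), state: "running".into() },
        Part { id: "b".into(), state: "pending".into() },
        Part { id: "a".into(), state: "completed".into() },
    ];
    let deduped = dedupe_parts(&parts);
    assert_eq!(deduped.len(), 2);
    assert_eq!(deduped[0].state, "completed"); // "a" keeps its final state
    assert_eq!(deduped[1].id, "b");
}
```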
…n merge

Add WorkerIdle, OpenCodeSessionCreated, OpenCodePartUpdated, and WorkerInitialResult to the exhaustive match statements in channel_history::event_is_for_channel and cortex::signal_from_event that were introduced by the worker-outcome-gate PR on main.

WorkerIdle and WorkerInitialResult carry channel_id and route to the correct channel. OpenCode UI events have no channel affinity (return false). For cortex signals, WorkerIdle maps to WorkerStatus with 'idle' text; OpenCode/InitialResult events are UI plumbing and return None (signal_from_event now returns Option<Signal>).
The supervisor was still killing idle OpenCode workers after 10 minutes of inactivity because last_activity_at stops updating when a worker enters the idle state (waiting for follow-up input).

Add is_idle flag to WorkerTracker:
- Set when a WorkerIdle event is observed
- Cleared by track_worker_activity (any WorkerStatus/ToolStarted/ToolCompleted event means the worker resumed)
- Overdue filter skips workers with is_idle=true

This complements the last_activity_at fix from #332 — that fix prevents killing workers that are actively running tools; this fix prevents killing workers that are legitimately waiting for follow-up.
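The idle-aware overdue check can be sketched like this. It is a simplified model — the real WorkerTracker carries more state, and the struct and function names here beyond `is_idle` are assumptions:

```rust
use std::time::{Duration, Instant};

// Simplified stand-in for the supervisor's per-worker tracking state.
struct WorkerTracker {
    last_activity_at: Instant,
    is_idle: bool,
}

impl WorkerTracker {
    // Any activity event (WorkerStatus/ToolStarted/ToolCompleted) means the
    // worker resumed: refresh the timestamp and clear the idle flag.
    fn track_activity(&mut self, now: Instant) {
        self.last_activity_at = now;
        self.is_idle = false;
    }
}

// Idle workers are legitimately waiting for follow-up input, so they are
// never considered overdue no matter how stale last_activity_at is.
fn is_overdue(w: &WorkerTracker, now: Instant, timeout: Duration) -> bool {
    !w.is_idle && now.duration_since(w.last_activity_at) > timeout
}

fn main() {
    let start = Instant::now();
    let mut w = WorkerTracker { last_activity_at: start, is_idle: true };
    let later = start + Duration::from_secs(700); // past a 10-minute cutoff
    // Idle worker with stale activity: not overdue.
    assert!(!is_overdue(&w, later, Duration::from_secs(600)));
    // Worker resumes; timestamp refreshed, still not overdue.
    w.track_activity(later);
    assert!(!is_overdue(&w, later, Duration::from_secs(600)));
}
```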
…encode worker colors

Thread the interactive bool from spawn time through DB persistence, API responses, SSE events, and frontend rendering so the UI can show whether a worker accepts follow-up input. Differentiate OpenCode workers visually with zinc/neutral colors (vs amber for builtin) across the channel timeline, channel cards, and webchat panel.
Both OpenCode and builtin interactive workers now emit a WorkerInitialResult after each successful follow-up cycle, not just the initial task. The channel already handles this event by queuing a retrigger, so routed follow-up results now relay to users automatically. Result text is scrubbed through the secrets store before emission, matching the initial-result path.
Summary
Architecture
Changes
Real-time transcript mirroring (new)
- **Typed part model** (`src/opencode/types.rs`) — `OpenCodePart` enum (Text, Tool, StepStart, StepFinish) with `OpenCodeToolState` (Pending, Running, Completed, Error). `part_to_opencode_part()` converts internal SSE parts to the frontend-facing type, truncating tool inputs/outputs.
- **Event pipeline** — `ProcessEvent::OpenCodePartUpdated` → `ApiEvent::OpenCodePartUpdated` → SSE event `"opencode_part_updated"`. Emitted on every `message.part.updated` from OpenCode (finalized parts only, not character deltas).
- **API-based transcript persistence** (`src/opencode/worker.rs`) — on session completion, calls `server.get_messages(session_id)` and converts via `convert_opencode_messages()` in `worker_transcript.rs`. This replaces the previous SSE-based transcript/`all_text` accumulation, producing accurate transcripts that match OpenCode's own view. `EventState` is simplified to track only `last_text` (for initial result delivery) and `tool_calls`.
- **Frontend live transcript** (`useLiveContext.tsx`) — maintains `liveOpenCodeParts: Record<string, Map<string, OpenCodePart>>` per worker. Parts are upserted by ID into insertion-ordered Maps, so tool parts update in place as they transition through states (pending → running → completed).
- **OpenCode-native renderer** (`AgentWorkers.tsx`) — for running OpenCode workers, renders from live parts: text as markdown, tools as collapsible cards with status indicator/input/output, step boundaries as dividers. Completed workers fall back to the persisted `TranscriptStep` view.
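The event pipeline bottoms out in a per-variant SSE event name. A sketch of that mapping: the enum here is a stand-in for the real `ApiEvent` (payloads elided), and of the names only `"opencode_part_updated"` and `"worker_idle"` are confirmed by this PR — the third is an assumption:

```rust
// Stand-in for the real ApiEvent enum; variant payloads are elided.
#[allow(dead_code)]
enum ApiEvent {
    OpenCodePartUpdated,
    WorkerIdle,
    WorkerInitialResult,
}

// Each API event variant maps to a fixed SSE event name the frontend
// subscribes to. "worker_initial_result" is a guessed name.
fn sse_event_name(ev: &ApiEvent) -> &'static str {
    match ev {
        ApiEvent::OpenCodePartUpdated => "opencode_part_updated",
        ApiEvent::WorkerIdle => "worker_idle",
        ApiEvent::WorkerInitialResult => "worker_initial_result",
    }
}

fn main() {
    assert_eq!(
        sse_event_name(&ApiEvent::OpenCodePartUpdated),
        "opencode_part_updated"
    );
    assert_eq!(sse_event_name(&ApiEvent::WorkerIdle), "worker_idle");
}
```

An exhaustive match (no `_` arm) is what forces merges like the one above to extend `event_is_for_channel` and `signal_from_event` when new variants land.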
- **Reverse proxy** (`src/api/opencode_proxy.rs`) — handler at `/api/opencode/{port}/{*path}` that forwards to the OpenCode server on `127.0.0.1:{port}`. HTML rewriting injects `<base href>` and a localStorage SDK URL override so the iframe resolves assets correctly.
- **Persistence** — migration adds `opencode_session_id` and `opencode_port` columns to `worker_runs`. `ProcessEvent::OpenCodeSessionCreated` is emitted from the worker and persisted via `ProcessRunLogger::log_opencode_metadata()`.
- **Tabbed detail view** — OpenCode workers show "OpenCode" / "Transcript" tabs. The OpenCode tab renders an iframe with a health check and graceful fallback.
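The proxy's port-range validation and loopback targeting can be sketched as below. This is not the real handler — the allowed range and names are assumptions; only "forwards to 127.0.0.1 with port range validation" comes from the PR:

```rust
// Assumed bounds: reject well-known/system ports; u16 caps the top end.
const OPENCODE_PORT_MIN: u16 = 1024;
const OPENCODE_PORT_MAX: u16 = 65535;

// Parse the {port} path segment and gate it to the allowed range.
fn validate_port(raw: &str) -> Option<u16> {
    let port: u16 = raw.parse().ok()?;
    if (OPENCODE_PORT_MIN..=OPENCODE_PORT_MAX).contains(&port) {
        Some(port)
    } else {
        None
    }
}

// The proxy only ever targets loopback, so a caller cannot use it to
// reach arbitrary hosts.
fn upstream_url(port: u16, path: &str) -> String {
    format!("http://127.0.0.1:{}/{}", port, path.trim_start_matches('/'))
}

fn main() {
    assert_eq!(validate_port("4096"), Some(4096));
    assert_eq!(validate_port("80"), None); // below the allowed range
    assert_eq!(validate_port("not-a-port"), None);
    assert_eq!(upstream_url(4096, "/session"), "http://127.0.0.1:4096/session");
}
```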
Bug fixes
- `expand_tilde()` in `channel_dispatch.rs` resolves `~/` paths before `canonicalize()`
- `ProcessEvent::WorkerInitialResult` breaks the circular dependency between the channel waiting for WorkerComplete and the worker waiting for `input_rx` close
- `describe_tool_input()` extracts file paths, descriptions, and patterns from tool input JSON instead of showing bare tool names
- `<base href>` injection and a localStorage SDK URL override fix asset resolution for the proxied OpenCode SPA
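The tilde fix can be sketched as below. The real helper in `channel_dispatch.rs` likely looks up the home directory itself; here it is passed in as a parameter for testability, which is an assumption about the signature:

```rust
use std::path::{Path, PathBuf};

// Expand a leading "~" or "~/" to the home directory; all other paths pass
// through untouched. canonicalize() is a filesystem call and never does
// this — tilde expansion is a shell feature.
fn expand_tilde(path: &str, home: &Path) -> PathBuf {
    if path == "~" {
        return home.to_path_buf();
    }
    match path.strip_prefix("~/") {
        Some(rest) => home.join(rest),
        None => PathBuf::from(path), // absolute/relative paths unchanged
    }
}

fn main() {
    let home = Path::new("/home/demo");
    // The LLM-produced form that previously failed with
    // "directory does not exist":
    assert_eq!(
        expand_tilde("~/Projects/foo", home),
        PathBuf::from("/home/demo/Projects/foo")
    );
    assert_eq!(expand_tilde("/tmp/x", home), PathBuf::from("/tmp/x"));
    assert_eq!(expand_tilde("~", home), PathBuf::from("/home/demo"));
}
```

The expanded path is then handed to `canonicalize()` and the server pool as before.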
- `migrations/20260303000001_opencode_worker_metadata.sql`
- `src/opencode/types.rs` — `OpenCodePart`, `OpenCodeToolState`, `part_to_opencode_part()`
- `src/lib.rs` — `OpenCodeSessionCreated`, `OpenCodePartUpdated`, `WorkerInitialResult` added to `ProcessEvent`
- `src/api/state.rs` — `ApiEvent::OpenCodePartUpdated` + translation
- `src/api/system.rs` — `"opencode_part_updated"` SSE event name
- `src/opencode/worker.rs`
- `src/conversation/worker_transcript.rs` — `convert_opencode_messages()`
- `src/api/opencode_proxy.rs`
- `src/agent/channel.rs`
- `src/agent/channel_dispatch.rs` — `expand_tilde()`, rich status via `describe_tool_input()`
- `interface/src/api/client.ts` — `OpenCodePart`, `OpenCodePartUpdatedEvent` types
- `interface/src/hooks/useLiveContext.tsx` — `liveOpenCodeParts` state + `opencode_part_updated` handler
- `interface/src/routes/AgentWorkers.tsx` — `OpenCodeEmbed`