feat: Projects — workspace layout awareness for agents#346
Conversation
…, and UI Implement the Projects system that gives Spacebot first-class understanding of a developer's workspace layout (projects, repos, git worktrees). Phase 1 — Data Layer: migration files, ProjectStore with full CRUD, git helpers (discover_repos, list/create/remove worktrees), wired into AgentDeps + ApiState. Phase 2 — Prompt Injection: projects_context template, project_manage_description template, render_projects_context() method, project_context parameter in channel prompt, conditional block in channel.md.j2, build_project_context() on Channel. Phase 3 — Tool Integration: project_manage tool with 7 actions (list, create, scan, add_repo, create_worktree, remove_worktree, disk_usage), spawn_worker extended with project_id/worktree_id for directory resolution, worker-project association. Phase 4 — API: 10 REST endpoints for projects, repos, worktrees, scanning, and disk usage, registered in server.rs. Phase 5 — UI: Project types + 11 API methods in client.ts, Projects tab in AgentTabs, route at /agents/$agentId/projects, full AgentProjects page with project list (card grid), project detail (repos, worktrees, disk usage), create project dialog, create worktree dialog, and delete confirmations.
|
Note Reviews pausedIt looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
WalkthroughAdds a Projects feature across stack: DB migrations and ProjectStore, git helpers, REST API and server routes, project_manage tool, worker run linking, prompt-context injection, frontend API/types and UI, plus tests and runtime/config wiring. Changes
Estimated code review effort🎯 5 (Critical) | ⏱️ ~120 minutes Possibly related PRs
🚥 Pre-merge checks | ✅ 3✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing Touches🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
There was a problem hiding this comment.
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/tools/spawn_worker.rs (1)
126-157:⚠️ Potential issue | 🟠 MajorExpose
project_idandworktree_idoutside the OpenCode gate.These fields are currently hidden whenever
opencode_enabledis false, so builtin workers cannot request project/worktree context through the tool schema even thoughSpawnWorkerArgsand the runtime path support it.Suggested fix
- if opencode_enabled && let Some(obj) = properties.as_object_mut() { + if let Some(obj) = properties.as_object_mut() { + obj.insert( + "project_id".to_string(), + serde_json::json!({ + "type": "string", + "description": "Project ID to associate this worker with. When set, the worker gets project context. If directory is not specified, defaults to the project root." + }), + ); + obj.insert( + "worktree_id".to_string(), + serde_json::json!({ + "type": "string", + "description": "Worktree ID within the project. If set, the worker's directory is automatically set to the worktree path." + }), + ); + } + + if opencode_enabled && let Some(obj) = properties.as_object_mut() { obj.insert( "worker_type".to_string(), serde_json::json!({ @@ obj.insert( "directory".to_string(), serde_json::json!({ "type": "string", "description": "Working directory for the worker. Required when worker_type is \"opencode\" unless project_id or worktree_id is set. The OpenCode agent operates in this directory." }), ); - obj.insert( - "project_id".to_string(), - serde_json::json!({ - "type": "string", - "description": "Project ID to associate this worker with. When set, the worker gets project context. If directory is not specified, defaults to the project root." - }), - ); - obj.insert( - "worktree_id".to_string(), - serde_json::json!({ - "type": "string", - "description": "Worktree ID within the project. If set, the worker's directory is automatically set to the worktree path." - }), - ); }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tools/spawn_worker.rs` around lines 126 - 157, The schema currently only inserts "project_id" and "worktree_id" when opencode_enabled is true, hiding them from builtin workers even though SpawnWorkerArgs and the runtime accept them; move the obj.insert calls for "project_id" and "worktree_id" out of the opencode_enabled conditional (keep "worker_type" and "directory" gated if desired) so that properties.as_object_mut() always receives inserts for project_id and worktree_id, preserving their descriptions and behavior used by the runtime path and SpawnWorkerArgs.src/main.rs (1)
2588-2619:⚠️ Potential issue | 🟠 Major
project_storesis only initialized as a startup snapshot.After this call, the
agent_rx/agent_remove_rxbranches only update the localagentsmap, notapi_state.project_stores. That means project endpoints will miss newly created agents and can keep serving stale stores for removed agents until restart. Please update the API registries on runtime add/remove as well, or derive these lookups from the live agent registry instead.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main.rs` around lines 2588 - 2619, The startup code populates project_stores (and other per-agent registries) once via api_state.set_project_stores(...) but the agent add/remove branches (agent_rx / agent_remove_rx) only update the local agents map, causing api_state.project_stores to become stale; update the runtime add/remove logic to mirror changes into api_state by calling the same setters (e.g., api_state.set_project_stores, api_state.set_agent_pools, api_state.set_memory_searches, api_state.set_mcp_managers, api_state.set_task_stores and api_state.set_agent_configs) when handling new agents or removals, or refactor api_state lookup to derive stores from the live agents registry (the agents HashMap) so lookups always reflect current agents (refer to project_stores, agent_pools, api_state, agents, and the agent_rx/agent_remove_rx handling).
🧹 Nitpick comments (6)
tests/context_dump.rs (1)
108-131: This harness still bypasses the new project-context path.Adding
project_storehere is necessary, butbuild_channel_system_prompt()below still renders the channel prompt without loading or passingproject_context. That meansdump_channel_contextcan stay green while the new prompt-injection path is broken. Consider seeding a sample project and rendering through the same path the channel uses in production.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/context_dump.rs` around lines 108 - 131, The test harness is bypassing the project-context path: seed a sample project into the ProjectStore used in AgentDeps and ensure the test renders the channel prompt via the same production path that uses project_context (e.g., call the same function(s) that production uses to build channel prompts rather than directly invoking build_channel_system_prompt if it bypasses project loading). Specifically, in tests/context_dump.rs populate project_store (the ProjectStore inside AgentDeps) with a sample project, ensure project_context is loaded and passed into dump_channel_context or the channel prompt builder the code uses in production, and update the test to assert the rendered prompt comes from that full project-context flow.src/tools/project_manage.rs (2)
163-174: Silent error swallowing could hide database issues.Using
.unwrap_or_default()on database operations silently ignores failures when listing repos and worktrees. While the tool returns structured errors for critical operations, these silent fallbacks could mask database connectivity issues.Consider logging the errors for observability:
♻️ Suggested improvement
let repos = self .project_store .list_repos(&project.id) .await - .unwrap_or_default(); + .unwrap_or_else(|error| { + tracing::warn!(%error, project_id = %project.id, "failed to list repos"); + Vec::new() + }); let worktrees = self .project_store .list_worktrees_with_repos(&project.id) .await - .unwrap_or_default(); + .unwrap_or_else(|error| { + tracing::warn!(%error, project_id = %project.id, "failed to list worktrees"); + Vec::new() + });🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tools/project_manage.rs` around lines 163 - 174, The code is silently swallowing DB errors by calling unwrap_or_default() on project_store.list_repos and list_worktrees_with_repos; replace these calls with proper error handling: either propagate the error (use ? on list_repos/list_worktrees_with_repos so the caller can handle it) or explicitly match the Result, log the error via your logger and handle a fallback, rather than using unwrap_or_default. Update the loop in the function that builds results (the block referencing results, project_store.list_repos, and project_store.list_worktrees_with_repos) to surface failures for observability and avoid hiding connectivity/DB issues.
696-735: Blocking I/O on async context may cause thread pool starvation.
std::fs::read_diris synchronous and will block the async runtime's thread. For large directories this could impact performance. Consider usingtokio::fs::read_dirfor consistency with the async context, similar to howdir_sizeis implemented asynchronously insrc/api/projects.rs(lines 713-735).♻️ Suggested async implementation
- if let Ok(read_dir) = std::fs::read_dir(root) { - for entry in read_dir.flatten() { - let is_dir = entry.file_type().map(|ft| ft.is_dir()).unwrap_or(false); - let size = if is_dir { - dir_size(&entry.path()) - } else { - entry.metadata().map(|m| m.len()).unwrap_or(0) - }; + if let Ok(mut read_dir) = tokio::fs::read_dir(root).await { + while let Ok(Some(entry)) = read_dir.next_entry().await { + let metadata = match entry.metadata().await { + Ok(m) => m, + Err(_) => continue, + }; + let is_dir = metadata.is_dir(); + let size = if is_dir { + dir_size_async(&entry.path()).await + } else { + metadata.len() + };And update
dir_sizeto be async as well (seesrc/api/projects.rs:713-735for reference implementation).🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tools/project_manage.rs` around lines 696 - 735, The code currently uses blocking std::fs::read_dir and synchronous metadata calls inside an async context; replace the sync directory walk with an async implementation using tokio::fs::read_dir and its async DirEntry methods (e.g. next_entry().await, entry.file_type().await, entry.metadata().await) inside the function that builds entries/total_bytes, and convert dir_size to an async function (dir_size -> async fn dir_size(...).await) so you can await sizes for directories; update the surrounding function signature to async and await dir_size, and keep building the same serde_json entries and final ProjectManageOutput (retaining format_bytes, total_bytes, entries, and project.name) but without blocking the async runtime.src/projects/store.rs (1)
494-518: Consider usingrowinstead ofrfor consistency with naming guidelines.The closure parameter uses abbreviated
rinstead ofrow. While this is a common pattern, the coding guidelines specify non-abbreviated variable names.♻️ Suggested refactor
- row.map(|r| row_to_project(&r)).transpose() + row.map(|row| row_to_project(&row)).transpose()Apply similarly to other occurrences at lines 333, 344, 370, 408, 419, 476, 488.
As per coding guidelines: "Use non-abbreviated variable names in Rust code:
queuenotq,messagenotmsg".🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/projects/store.rs` around lines 494 - 518, The closure parameter uses the abbreviated name `r` in the project row parsing code; update the closure parameter to `row` for consistency with naming guidelines used in functions like `row_to_project`, and apply the same rename to the other similar closures referenced (occurrences around the functions/blocks at the indicated locations) so that all usages (e.g., calls to `row.try_get(...)`) use `row` instead of `r`. Ensure you rename both the parameter and all its usages inside each closure to avoid unused-variable or unresolved-name errors.interface/src/api/client.ts (1)
1376-1379:ProjectDetailResponsetype is overly permissive and unused.The index signature
[key: string]: unknowndoesn't provide type safety. SincegetProjectalready returnsProjectWithRelationsdirectly (line 2259), this type appears to be unused. Consider removing it or replacing with a proper type alias.♻️ Suggested fix
-export interface ProjectDetailResponse { - /** The flattened project + repos + worktrees (serde #[flatten]) */ - [key: string]: unknown; -} +// ProjectDetailResponse is not needed — getProject returns ProjectWithRelations directlyIf you need a wrapper type for consistency with other endpoints:
export interface ProjectDetailResponse { project: ProjectWithRelations; }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@interface/src/api/client.ts` around lines 1376 - 1379, The ProjectDetailResponse interface is overly permissive and appears unused; either remove the ProjectDetailResponse declaration entirely or replace it with a strongly typed wrapper (e.g., a property typed as ProjectWithRelations) and update any callers: locate the ProjectDetailResponse interface, remove or change it to something like project: ProjectWithRelations, and ensure you import/retain the ProjectWithRelations type and update any references (e.g., places using getProject) to use the new type or the direct ProjectWithRelations return.src/api/projects.rs (1)
496-518: Consider validating repo ownership before deletion.The
delete_repohandler extractsproject_idfrom the path but doesn't use it. While the current implementation works because repo IDs are globally unique, validating that the repo belongs to the specified project would be more defensive and consistent with the route structure/agents/projects/{id}/repos/{repo_id}.🛡️ Suggested validation
pub(super) async fn delete_repo( State(state): State<Arc<ApiState>>, - Path((_, repo_id)): Path<(String, String)>, + Path((project_id, repo_id)): Path<(String, String)>, Query(query): Query<AgentQuery>, ) -> Result<Json<ActionResponse>, StatusCode> { let stores = state.project_stores.load(); let store = stores.get(&query.agent_id).ok_or(StatusCode::NOT_FOUND)?; + // Verify repo belongs to the specified project. + let repo = store.get_repo(&repo_id).await.map_err(|error| { + tracing::error!(%error, "failed to get repo"); + StatusCode::INTERNAL_SERVER_ERROR + })? + .ok_or(StatusCode::NOT_FOUND)?; + + if repo.project_id != project_id { + return Err(StatusCode::NOT_FOUND); + } + let deleted = store.delete_repo(&repo_id).await.map_err(|error| {🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/projects.rs` around lines 496 - 518, The handler delete_repo currently ignores the extracted project_id (Path((_, repo_id))) so add a defensive ownership check before calling store.delete_repo: extract project_id from the Path, fetch the repo metadata from the store (e.g., a method like store.get_repo(repo_id) or add one if missing) and verify that the repo's project_id equals the path project_id; if the repo is missing or the project_id does not match return NOT_FOUND (or FORBIDDEN if you prefer), otherwise proceed to call store.delete_repo(&repo_id). Keep existing logging and error mapping around store.delete_repo but perform the ownership check first and return early on mismatch.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@interface/src/routes/AgentProjects.tsx`:
- Around line 1-12: Remove the unused imports causing TS6133: delete useCallback
from the React import and remove the unused types ProjectWithRelations and
DiskUsageResponse from the api import list in AgentProjects.tsx; update the
import line(s) so only used symbols (e.g., useState, useQuery, useMutation,
useQueryClient, and the actually referenced api types like Project,
ProjectWorktreeWithRepo, ProjectRepo, CreateProjectRequest,
CreateWorktreeRequest) remain.
- Around line 424-444: The icon-only delete Button components (the Button
instances using onDelete and isDeleting in AgentProjects.tsx) lack accessible
names; add an explicit accessible name (e.g., aria-label="Delete project" or
aria-label="Delete worktree" as appropriate) to each Button (both the instance
around the trash SVG at the first occurrence and the second occurrence) so
screen readers can announce the control and distinguish what will be deleted;
ensure the aria-label matches the action/context and keep existing props like
onDelete and disabled intact.
- Around line 259-273: Seeded repoId only on first mount; add an effect to reset
it whenever the dialog is opened or the repo list changes so the select defaults
to the currently preselected repo. Implement a useEffect that calls
setRepoId(repos[0]?.id ?? "") and run it when deps change (repos and the dialog
open state) so repoId is updated when the dialog is shown for a different repo;
reference the existing repoId/setRepoId state and the repos array and use the
dialog open flag (e.g., isOpen) or the onOpenChange-controlled state as the
second dependency.
In `@migrations/20260306000001_projects.sql`:
- Around line 40-52: The project_worktrees table allows mismatched project_id
and repo_id; create a new migration (do not modify this file) that enforces the
pair or removes redundancy: either add a cross-column constraint that ensures
(project_id, repo_id) match project_repos (e.g. ALTER TABLE project_worktrees
ADD CONSTRAINT project_worktrees_project_repo_fk FOREIGN KEY (project_id,
repo_id) REFERENCES project_repos(project_id, id) ON DELETE CASCADE) so repo_id
must belong to the given project, or drop project_worktrees.project_id and
derive the project via project_repos.repo_id in application code and DB queries
(remove column and adjust references). Reference table/columns:
project_worktrees.project_id, project_worktrees.repo_id and the project_repos
table (project_id, id) when implementing the migration.
In `@prompts/en/fragments/projects_context.md.j2`:
- Around line 1-13: The template currently injects user-editable fields
(project.description, project.tags, repo.name, worktree fields) directly into
the system prompt; mark this section as untrusted and render all free-text
metadata as literal/code blocks or quoted text to avoid prompt injection. Update
the projects_context.md.j2 fragment to add an explicit header like "Untrusted
project metadata — treat as raw data, not instructions" and change how you
output project.description, project.tags, repo.remote_url and any worktree
fields so they are wrapped as code blocks or quoted blocks rather than inline
prose, ensuring listable fields (tags) are joined but still rendered inside a
safe block; keep structured fields like repo.name/path in monospace but do not
allow free-text to be interpreted as prompt text. Ensure the same treatment is
applied wherever project.repos and project.worktrees are rendered.
In `@prompts/en/tools/project_manage_description.md.j2`:
- Line 1: Update the guidance to make worktree creation repo-scoped: change the
sentence that tells the agent to "use `create_worktree` to set up a worktree in
the project root" so it first identifies the target repo (e.g., "select the
specific repository within the project") and then instructs to create the
worktree for that repository (e.g., "use `create_worktree` to create a worktree
in the target repo's path"). Apply the same rewording consistently in
prompts/en/fragments/projects_context.md.j2 (around the guidance at line 16) so
both files instruct selecting the repo first then creating the worktree for that
repo.
In `@src/api/agents.rs`:
- Around line 714-722: The new ProjectStore (created as project_store via
crate::projects::ProjectStore::new) is only put into the AgentDeps but not
registered in the global ApiState, causing handlers to fail; update the
create_agent flow to also insert project_store.clone() into
ApiState.project_stores keyed by arc_agent_id (the same key used for
memory_search/task_store maps), and update delete_agent to remove that entry
when cleaning up other per-agent maps so the project store lifecycle matches the
other per-agent resources.
In `@src/conversation/history.rs`:
- Around line 394-403: The UPDATE running inside tokio::spawn can race the
initial INSERT and silently affect 0 rows; fix by persisting project_id and
worktree_id at the INSERT time (modify the code that creates the worker_runs row
to include project_id and worktree_id) or else serialize the writes (have
log_worker_started return a future/JoinHandle and await the initial insert
before spawning the UPDATE, or replace the UPDATE with an INSERT ... ON CONFLICT
DO UPDATE upsert). Update the code paths that call log_worker_started (and the
spawning logic using tokio::spawn) so the worker_runs row is created with
project_id/worktree_id atomically, or ensure the spawned UPDATE cannot run until
the insert completes.
In `@src/projects/git.rs`:
- Around line 249-264: The get_remote_url function currently returns raw remote
URLs which may contain embedded credentials; update get_remote_url to sanitize
the returned URL by removing any userinfo (username/password/PAT) before
returning. Specifically, after obtaining url from get-output, parse it with the
Url parser (url::Url) and if parsing succeeds call set_username("") and
set_password(None) (or otherwise reconstruct the URL without the userinfo) and
then return the sanitized string; if parsing fails, fallback to stripping any
userinfo via a safe regex that removes "userinfo@" from the start of authority.
Keep the function name get_remote_url and the existing git command usage, only
change the post-processing of the url variable so no credentials are ever
returned.
In `@src/tools/spawn_worker.rs`:
- Around line 259-292: Change resolve_directory_from_project to return
Result<Option<String>, SpawnWorkerError> (instead of Option<String>) and
propagate database errors rather than collapsing them to None: keep the
explicit-directory short-circuit, but when calling
store.get_worktree(worktree_id).await and store.get_project(agent_id,
...).await, match on the Result and return
Err(SpawnWorkerError::from_db_error(...)) (or an appropriate variant) when the
call is Err(_); treat Ok(None) as a not-found situation to return Ok(None) or a
specific not-found error per your SpawnWorkerError design, and when
Ok(Some(...)) continue to build the path (using project.root_path and
worktree.path) and return Ok(Some(path)). Ensure all references use
resolve_directory_from_project, deps.project_store, deps.agent_id, get_worktree,
and get_project so callers can handle the Result.
- Around line 274-281: The code currently uses the caller-supplied project_id
when resolving a worktree path; instead, always use the worktree's own
project_id (worktree.project_id) to build the absolute path and, if a project_id
was supplied by the caller, check for equality up front and return None (or
otherwise fail fast) if they differ; update the branch that calls
store.get_worktree(worktree_id).await and store.get_project(agent_id, pid).await
to set pid = &worktree.project_id, compare with the original project_id when
Some, and only proceed to compute abs_path =
Path::new(&project.root_path).join(&worktree.path) if the IDs match.
---
Outside diff comments:
In `@src/main.rs`:
- Around line 2588-2619: The startup code populates project_stores (and other
per-agent registries) once via api_state.set_project_stores(...) but the agent
add/remove branches (agent_rx / agent_remove_rx) only update the local agents
map, causing api_state.project_stores to become stale; update the runtime
add/remove logic to mirror changes into api_state by calling the same setters
(e.g., api_state.set_project_stores, api_state.set_agent_pools,
api_state.set_memory_searches, api_state.set_mcp_managers,
api_state.set_task_stores and api_state.set_agent_configs) when handling new
agents or removals, or refactor api_state lookup to derive stores from the live
agents registry (the agents HashMap) so lookups always reflect current agents
(refer to project_stores, agent_pools, api_state, agents, and the
agent_rx/agent_remove_rx handling).
In `@src/tools/spawn_worker.rs`:
- Around line 126-157: The schema currently only inserts "project_id" and
"worktree_id" when opencode_enabled is true, hiding them from builtin workers
even though SpawnWorkerArgs and the runtime accept them; move the obj.insert
calls for "project_id" and "worktree_id" out of the opencode_enabled conditional
(keep "worker_type" and "directory" gated if desired) so that
properties.as_object_mut() always receives inserts for project_id and
worktree_id, preserving their descriptions and behavior used by the runtime path
and SpawnWorkerArgs.
---
Nitpick comments:
In `@interface/src/api/client.ts`:
- Around line 1376-1379: The ProjectDetailResponse interface is overly
permissive and appears unused; either remove the ProjectDetailResponse
declaration entirely or replace it with a strongly typed wrapper (e.g., a
property typed as ProjectWithRelations) and update any callers: locate the
ProjectDetailResponse interface, remove or change it to something like project:
ProjectWithRelations, and ensure you import/retain the ProjectWithRelations type
and update any references (e.g., places using getProject) to use the new type or
the direct ProjectWithRelations return.
In `@src/api/projects.rs`:
- Around line 496-518: The handler delete_repo currently ignores the extracted
project_id (Path((_, repo_id))) so add a defensive ownership check before
calling store.delete_repo: extract project_id from the Path, fetch the repo
metadata from the store (e.g., a method like store.get_repo(repo_id) or add one
if missing) and verify that the repo's project_id equals the path project_id; if
the repo is missing or the project_id does not match return NOT_FOUND (or
FORBIDDEN if you prefer), otherwise proceed to call store.delete_repo(&repo_id).
Keep existing logging and error mapping around store.delete_repo but perform the
ownership check first and return early on mismatch.
In `@src/projects/store.rs`:
- Around line 494-518: The closure parameter uses the abbreviated name `r` in
the project row parsing code; update the closure parameter to `row` for
consistency with naming guidelines used in functions like `row_to_project`, and
apply the same rename to the other similar closures referenced (occurrences
around the functions/blocks at the indicated locations) so that all usages
(e.g., calls to `row.try_get(...)`) use `row` instead of `r`. Ensure you rename
both the parameter and all its usages inside each closure to avoid
unused-variable or unresolved-name errors.
In `@src/tools/project_manage.rs`:
- Around line 163-174: The code is silently swallowing DB errors by calling
unwrap_or_default() on project_store.list_repos and list_worktrees_with_repos;
replace these calls with proper error handling: either propagate the error (use
? on list_repos/list_worktrees_with_repos so the caller can handle it) or
explicitly match the Result, log the error via your logger and handle a
fallback, rather than using unwrap_or_default. Update the loop in the function
that builds results (the block referencing results, project_store.list_repos,
and project_store.list_worktrees_with_repos) to surface failures for
observability and avoid hiding connectivity/DB issues.
- Around line 696-735: The code currently uses blocking std::fs::read_dir and
synchronous metadata calls inside an async context; replace the sync directory
walk with an async implementation using tokio::fs::read_dir and its async
DirEntry methods (e.g. next_entry().await, entry.file_type().await,
entry.metadata().await) inside the function that builds entries/total_bytes, and
convert dir_size to an async function (dir_size -> async fn dir_size(...).await)
so you can await sizes for directories; update the surrounding function
signature to async and await dir_size, and keep building the same serde_json
entries and final ProjectManageOutput (retaining format_bytes, total_bytes,
entries, and project.name) but without blocking the async runtime.
In `@tests/context_dump.rs`:
- Around line 108-131: The test harness is bypassing the project-context path:
seed a sample project into the ProjectStore used in AgentDeps and ensure the
test renders the channel prompt via the same production path that uses
project_context (e.g., call the same function(s) that production uses to build
channel prompts rather than directly invoking build_channel_system_prompt if it
bypasses project loading). Specifically, in tests/context_dump.rs populate
project_store (the ProjectStore inside AgentDeps) with a sample project, ensure
project_context is loaded and passed into dump_channel_context or the channel
prompt builder the code uses in production, and update the test to assert the
rendered prompt comes from that full project-context flow.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 23709dcd-06e5-4587-9c21-92637a72d059
📒 Files selected for processing (28)
interface/src/api/client.tsinterface/src/components/AgentTabs.tsxinterface/src/router.tsxinterface/src/routes/AgentProjects.tsxmigrations/20260306000001_projects.sqlmigrations/20260306000002_worker_project_link.sqlprompts/en/channel.md.j2prompts/en/fragments/projects_context.md.j2prompts/en/tools/project_manage_description.md.j2src/agent/channel.rssrc/api.rssrc/api/agents.rssrc/api/projects.rssrc/api/server.rssrc/api/state.rssrc/conversation/history.rssrc/lib.rssrc/main.rssrc/projects.rssrc/projects/git.rssrc/projects/store.rssrc/prompts/engine.rssrc/prompts/text.rssrc/tools.rssrc/tools/project_manage.rssrc/tools/spawn_worker.rstests/bulletin.rstests/context_dump.rs
| CREATE TABLE IF NOT EXISTS project_worktrees ( | ||
| id TEXT PRIMARY KEY, | ||
| project_id TEXT NOT NULL, | ||
| repo_id TEXT NOT NULL, | ||
| name TEXT NOT NULL, | ||
| path TEXT NOT NULL, | ||
| branch TEXT NOT NULL, | ||
| created_by TEXT NOT NULL DEFAULT 'user', | ||
| created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, | ||
| updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, | ||
| FOREIGN KEY (project_id) REFERENCES projects(id) ON DELETE CASCADE, | ||
| FOREIGN KEY (repo_id) REFERENCES project_repos(id) ON DELETE CASCADE | ||
| ); |
There was a problem hiding this comment.
Enforce that each worktree repo belongs to the same project.
project_worktrees.project_id and repo_id are validated independently here, so a bad insert can still link a worktree to repo A and project B. Please add a follow-up migration that enforces (project_id, repo_id) together, or drop the redundant project_id from project_worktrees and derive it from repo_id.
As per coding guidelines "Never edit an existing migration file in place once it has been committed or applied in any environment; treat migration files as immutable and create new migration files with new timestamps for schema changes".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@migrations/20260306000001_projects.sql` around lines 40 - 52, The
project_worktrees table allows mismatched project_id and repo_id; create a new
migration (do not modify this file) that enforces the pair or removes
redundancy: either add a cross-column constraint that ensures (project_id,
repo_id) match project_repos (e.g. ALTER TABLE project_worktrees ADD CONSTRAINT
project_worktrees_project_repo_fk FOREIGN KEY (project_id, repo_id) REFERENCES
project_repos(project_id, id) ON DELETE CASCADE) so repo_id must belong to the
given project, or drop project_worktrees.project_id and derive the project via
project_repos.repo_id in application code and DB queries (remove column and
adjust references). Reference table/columns: project_worktrees.project_id,
project_worktrees.repo_id and the project_repos table (project_id, id) when
implementing the migration.
| async fn resolve_directory_from_project( | ||
| deps: &crate::AgentDeps, | ||
| directory: Option<&str>, | ||
| project_id: Option<&str>, | ||
| worktree_id: Option<&str>, | ||
| ) -> Option<String> { | ||
| // Explicit directory takes precedence. | ||
| if let Some(dir) = directory { | ||
| return Some(dir.to_string()); | ||
| } | ||
|
|
||
| let store = &deps.project_store; | ||
| let agent_id = &deps.agent_id; | ||
|
|
||
| // Worktree resolution: look up the worktree, derive absolute path from project root. | ||
| if let Some(worktree_id) = worktree_id | ||
| && let Ok(Some(worktree)) = store.get_worktree(worktree_id).await | ||
| { | ||
| // Need the project to resolve the absolute path. | ||
| let pid = project_id.unwrap_or(&worktree.project_id); | ||
| if let Ok(Some(project)) = store.get_project(agent_id, pid).await { | ||
| let abs_path = std::path::Path::new(&project.root_path).join(&worktree.path); | ||
| return Some(abs_path.to_string_lossy().to_string()); | ||
| } | ||
| } | ||
|
|
||
| // Project root resolution. | ||
| if let Some(project_id) = project_id | ||
| && let Ok(Some(project)) = store.get_project(agent_id, project_id).await | ||
| { | ||
| return Some(project.root_path.clone()); | ||
| } | ||
|
|
||
| None |
There was a problem hiding this comment.
Return a structured error here instead of collapsing lookup failures to None.
get_worktree() and get_project() already distinguish Err from Ok(None), but this helper discards both by matching only Ok(Some(...)). That turns an unknown worktree_id / project_id or a DB failure into either a misleading "directory is required" error or a successful spawn with a broken project link. This helper should return Result<Option<String>, SpawnWorkerError> and propagate not-found / lookup failures explicitly. As per coding guidelines, "Never silently discard errors; no let _ = on Results; handle, log, or propagate errors" and "Tool implementations must return errors as structured results, not panics; tools should be error-as-result for LLM recovery".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tools/spawn_worker.rs` around lines 259 - 292, Change
resolve_directory_from_project to return Result<Option<String>,
SpawnWorkerError> (instead of Option<String>) and propagate database errors
rather than collapsing them to None: keep the explicit-directory short-circuit,
but when calling store.get_worktree(worktree_id).await and
store.get_project(agent_id, ...).await, match on the Result and return
Err(SpawnWorkerError::from_db_error(...)) (or an appropriate variant) when the
call is Err(_); treat Ok(None) as a not-found situation to return Ok(None) or a
specific not-found error per your SpawnWorkerError design, and when
Ok(Some(...)) continue to build the path (using project.root_path and
worktree.path) and return Ok(Some(path)). Ensure all references use
resolve_directory_from_project, deps.project_store, deps.agent_id, get_worktree,
and get_project so callers can handle the Result.
| if let Some(worktree_id) = worktree_id | ||
| && let Ok(Some(worktree)) = store.get_worktree(worktree_id).await | ||
| { | ||
| // Need the project to resolve the absolute path. | ||
| let pid = project_id.unwrap_or(&worktree.project_id); | ||
| if let Ok(Some(project)) = store.get_project(agent_id, pid).await { | ||
| let abs_path = std::path::Path::new(&project.root_path).join(&worktree.path); | ||
| return Some(abs_path.to_string_lossy().to_string()); |
There was a problem hiding this comment.
Reject mismatched project_id / worktree_id pairs before resolving the path.
A worktree already knows its owning project. Using the caller-supplied project_id here means a mismatched pair resolves worktree.path against the wrong root, which can spawn the worker in an unrelated checkout. Resolve against worktree.project_id and, if project_id was also supplied, fail fast when it does not match.
Suggested fix
- if let Some(worktree_id) = worktree_id
- && let Ok(Some(worktree)) = store.get_worktree(worktree_id).await
- {
- // Need the project to resolve the absolute path.
- let pid = project_id.unwrap_or(&worktree.project_id);
- if let Ok(Some(project)) = store.get_project(agent_id, pid).await {
- let abs_path = std::path::Path::new(&project.root_path).join(&worktree.path);
- return Some(abs_path.to_string_lossy().to_string());
- }
- }
+ if let Some(worktree_id) = worktree_id {
+ let worktree = store
+ .get_worktree(worktree_id)
+ .await
+ .map_err(|error| SpawnWorkerError(format!("failed to fetch worktree {worktree_id}: {error}")))?
+ .ok_or_else(|| SpawnWorkerError(format!("unknown worktree_id: {worktree_id}")))?;
+
+ if let Some(project_id) = project_id
+ && project_id != worktree.project_id
+ {
+ return Err(SpawnWorkerError(format!(
+ "worktree_id {worktree_id} does not belong to project_id {project_id}"
+ )));
+ }
+
+ let project = store
+ .get_project(agent_id, &worktree.project_id)
+ .await
+ .map_err(|error| SpawnWorkerError(format!("failed to fetch project {}: {error}", worktree.project_id)))?
+ .ok_or_else(|| SpawnWorkerError(format!("unknown project_id: {}", worktree.project_id)))?;
+
+ let abs_path = std::path::Path::new(&project.root_path).join(&worktree.path);
+ return Ok(Some(abs_path.to_string_lossy().to_string()));
+ }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tools/spawn_worker.rs` around lines 274 - 281, The code currently uses
the caller-supplied project_id when resolving a worktree path; instead, always
use the worktree's own project_id (worktree.project_id) to build the absolute
path and, if a project_id was supplied by the caller, check for equality up
front and return None (or otherwise fail fast) if they differ; update the branch
that calls store.get_worktree(worktree_id).await and store.get_project(agent_id,
pid).await to set pid = &worktree.project_id, compare with the original
project_id when Some, and only proceed to compute abs_path =
Path::new(&project.root_path).join(&worktree.path) if the IDs match.
…rs in repo discovery Two bugs fixed: 1. discover_repos() now checks for .git *directory* (not just existence) to skip worktree directories that have a .git file pointing to the main repo. 2. create_project API handler now discovers worktrees after discovering repos, matching the behavior of scan_project. Extracted shared helper discover_and_register_worktrees() used by both handlers.
src/api/projects.rs
Outdated
| Path((_, repo_id)): Path<(String, String)>, | ||
| Query(query): Query<AgentQuery>, | ||
| ) -> Result<Json<ActionResponse>, StatusCode> { | ||
| let stores = state.project_stores.load(); | ||
| let store = stores.get(&query.agent_id).ok_or(StatusCode::NOT_FOUND)?; | ||
|
|
||
| let deleted = store.delete_repo(&repo_id).await.map_err(|error| { | ||
| tracing::error!(%error, "failed to delete repo"); | ||
| StatusCode::INTERNAL_SERVER_ERROR | ||
| })?; |
There was a problem hiding this comment.
This route ignores the project_id param and deletes purely by repo_id. That makes it easy to delete a repo from a different project/agent if you can guess an ID.
| Path((_, repo_id)): Path<(String, String)>, | |
| Query(query): Query<AgentQuery>, | |
| ) -> Result<Json<ActionResponse>, StatusCode> { | |
| let stores = state.project_stores.load(); | |
| let store = stores.get(&query.agent_id).ok_or(StatusCode::NOT_FOUND)?; | |
| let deleted = store.delete_repo(&repo_id).await.map_err(|error| { | |
| tracing::error!(%error, "failed to delete repo"); | |
| StatusCode::INTERNAL_SERVER_ERROR | |
| })?; | |
| Path((project_id, repo_id)): Path<(String, String)>, | |
| Query(query): Query<AgentQuery>, | |
| ) -> Result<Json<ActionResponse>, StatusCode> { | |
| let stores = state.project_stores.load(); | |
| let store = stores.get(&query.agent_id).ok_or(StatusCode::NOT_FOUND)?; | |
| // Verify project exists for this agent, and the repo belongs to it. | |
| store | |
| .get_project(&query.agent_id, &project_id) | |
| .await | |
| .map_err(|error| { | |
| tracing::error!(%error, "failed to verify project"); | |
| StatusCode::INTERNAL_SERVER_ERROR | |
| })? | |
| .ok_or(StatusCode::NOT_FOUND)?; | |
| let repo = store | |
| .get_repo(&repo_id) | |
| .await | |
| .map_err(|error| { | |
| tracing::error!(%error, "failed to get repo"); | |
| StatusCode::INTERNAL_SERVER_ERROR | |
| })? | |
| .ok_or(StatusCode::NOT_FOUND)?; | |
| if repo.project_id != project_id { | |
| return Err(StatusCode::NOT_FOUND); | |
| } | |
| let deleted = store.delete_repo(&repo_id).await.map_err(|error| { | |
| tracing::error!(%error, "failed to delete repo"); | |
| StatusCode::INTERNAL_SERVER_ERROR | |
| })?; |
| })? | ||
| .ok_or(StatusCode::NOT_FOUND)?; | ||
|
|
||
| let root = std::path::PathBuf::from(&project.root_path); |
There was a problem hiding this comment.
Worth validating that the repo_id actually belongs to the {project_id} in the URL before using repo.path (otherwise you can mix/match repo IDs across projects and resolve paths incorrectly).
| let root = std::path::PathBuf::from(&project.root_path); | |
| if repo.project_id != project_id { | |
| return Err(StatusCode::UNPROCESSABLE_ENTITY); | |
| } | |
| let root = std::path::PathBuf::from(&project.root_path); |
| // Verify project exists. | ||
| store | ||
| .get_project(&request.agent_id, &project_id) | ||
| .await | ||
| .map_err(|error| { | ||
| tracing::error!(%error, "failed to verify project"); | ||
| StatusCode::INTERNAL_SERVER_ERROR | ||
| })? | ||
| .ok_or(StatusCode::NOT_FOUND)?; | ||
|
|
There was a problem hiding this comment.
request.path is persisted and later used for path joins (e.g., root.join(&repo.path)); rejecting absolute paths / .. components here avoids escaping the project root via API input.
| // Verify project exists. | |
| store | |
| .get_project(&request.agent_id, &project_id) | |
| .await | |
| .map_err(|error| { | |
| tracing::error!(%error, "failed to verify project"); | |
| StatusCode::INTERNAL_SERVER_ERROR | |
| })? | |
| .ok_or(StatusCode::NOT_FOUND)?; | |
| // Verify project exists. | |
| store | |
| .get_project(&request.agent_id, &project_id) | |
| .await | |
| .map_err(|error| { | |
| tracing::error!(%error, "failed to verify project"); | |
| StatusCode::INTERNAL_SERVER_ERROR | |
| })? | |
| .ok_or(StatusCode::NOT_FOUND)?; | |
| let repo_path = std::path::Path::new(&request.path); | |
| if repo_path.is_absolute() | |
| || repo_path | |
| .components() | |
| .any(|component| matches!(component, std::path::Component::ParentDir)) | |
| { | |
| return Err(StatusCode::UNPROCESSABLE_ENTITY); | |
| } |
| // Look up worktree and project for the git removal. | ||
| let worktree = store | ||
| .get_worktree(&worktree_id) | ||
| .await | ||
| .map_err(|error| { | ||
| tracing::error!(%error, "failed to get worktree"); | ||
| StatusCode::INTERNAL_SERVER_ERROR | ||
| })? | ||
| .ok_or(StatusCode::NOT_FOUND)?; |
There was a problem hiding this comment.
Worth verifying the {project_id} URL param matches the fetched worktree before resolving paths and deleting; otherwise worktree_id can be mixed with an unrelated project.
| // Look up worktree and project for the git removal. | |
| let worktree = store | |
| .get_worktree(&worktree_id) | |
| .await | |
| .map_err(|error| { | |
| tracing::error!(%error, "failed to get worktree"); | |
| StatusCode::INTERNAL_SERVER_ERROR | |
| })? | |
| .ok_or(StatusCode::NOT_FOUND)?; | |
| // Look up worktree and project for the git removal. | |
| let worktree = store | |
| .get_worktree(&worktree_id) | |
| .await | |
| .map_err(|error| { | |
| tracing::error!(%error, "failed to get worktree"); | |
| StatusCode::INTERNAL_SERVER_ERROR | |
| })? | |
| .ok_or(StatusCode::NOT_FOUND)?; | |
| if worktree.project_id != project_id { | |
| return Err(StatusCode::NOT_FOUND); | |
| } |
Add disk_usage_bytes column to project_repos and project_worktrees via new migration. Sizes are computed and cached during project creation (auto-discover) and scan. Extracted compute_and_cache_disk_usage() helper used by both handlers. UI changes: total disk usage shown as a pill in the project detail header, per-repo size shown on each repo card, per-worktree size shown inline. Removed the separate expandable DiskUsagePanel section at the bottom.
There was a problem hiding this comment.
Actionable comments posted: 7
♻️ Duplicate comments (4)
interface/src/routes/AgentProjects.tsx (2)
422-442:⚠️ Potential issue | 🟠 MajorGive the icon-only delete buttons accessible names.
These controls render only an SVG, so screen readers announce unlabeled buttons and can’t distinguish deleting a repo from deleting a worktree.
♿ Suggested change
<Button variant="outline" size="icon" + aria-label={`Remove repository ${repo.name}`} onClick={onDelete} disabled={isDeleting} className="h-6 w-6 text-red-500 hover:text-red-400" > @@ <Button variant="outline" size="icon" + aria-label={`Remove worktree ${worktree.name}`} onClick={onDelete} disabled={isDeleting} className="h-6 w-6 text-red-500 hover:text-red-400" >Also applies to: 508-528
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@interface/src/routes/AgentProjects.tsx` around lines 422 - 442, The icon-only delete Button lacks an accessible name; update the Button components rendering the trash SVG (the one using onDelete and disabled={isDeleting}) to include a descriptive accessible label (e.g., add aria-label="Delete project" or aria-label that matches the action like "Delete repository" / "Delete worktree") or include visually-hidden text inside the Button so screen readers can distinguish the actions; apply the same change to the other similar Button instance in this file that also renders only an SVG.
257-272:⚠️ Potential issue | 🟠 MajorReset the selected repo whenever the dialog opens.
repoIdis only initialized on the first render. If the dialog is opened from repo B after repo A, submit still targets repo A unless the user notices and changes the select.🔁 Suggested change
-import { useState } from "react"; +import { useEffect, useState } from "react"; @@ const queryClient = useQueryClient(); const [repoId, setRepoId] = useState(repos[0]?.id ?? ""); const [branch, setBranch] = useState(""); const [worktreeName, setWorktreeName] = useState(""); + + useEffect(() => { + if (open) { + setRepoId(repos[0]?.id ?? ""); + } + }, [open, repos]);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@interface/src/routes/AgentProjects.tsx` around lines 257 - 272, The repoId state is only set on first render so reopening the dialog can leave an outdated selection; add a useEffect that watches the dialog open flag (e.g. isOpen or whatever prop indicates the dialog is open) and the repos array and calls setRepoId(repos[0]?.id ?? "") when the dialog opens to reset the selection; reference the repoId/setRepoId state and repos in the effect and ensure it runs on changes to isOpen and repos.src/api/projects.rs (2)
525-543:⚠️ Potential issue | 🟠 MajorValidate
request.pathbefore storing it.This value is later joined onto
project.root_pathfor git and disk-usage operations. Absolute paths or..components let a repo escape the project root.🛡️ Suggested change
store .get_project(&request.agent_id, &project_id) .await @@ })? .ok_or(StatusCode::NOT_FOUND)?; + + let repo_path = std::path::Path::new(&request.path); + if repo_path.is_absolute() + || repo_path + .components() + .any(|component| matches!(component, std::path::Component::ParentDir)) + { + return Err(StatusCode::UNPROCESSABLE_ENTITY); + } let repo = store .create_repo(CreateRepoInput {🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/projects.rs` around lines 525 - 543, Validate and sanitize request.path before calling store.create_repo: ensure it is not absolute and does not contain parent components ("..") that could escape project.root_path (e.g., using Path::new(request.path) checks). In the handler where get_project and create_repo are called, reject or normalize any path containing an absolute prefix or ".." components and return a 400/INVALID_ARGUMENT; only allow a relative, clean path (no leading '/' and no components equal to "..") when constructing CreateRepoInput for create_repo so subsequent filesystem/git joins against project.root_path cannot escape the project directory.
554-565:⚠️ Potential issue | 🟠 MajorEnforce that nested repo/worktree IDs belong to
{project_id}.These handlers fetch by raw ID and never confirm the resource belongs to the project in the URL, so callers can mix IDs across projects within the same agent and operate on the wrong repo/worktree.
Also applies to: 596-605, 655-663
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/projects.rs` around lines 554 - 565, The delete_repo handler (and the other handlers around the file: the blocks at ~596-605 and ~655-663) are deleting/fetching by raw repo/worktree ID without verifying the resource belongs to the project in the URL; update these handlers to first load the resource (e.g., via the store method that returns the repo or worktree) and assert its project_id equals the {project_id} path param before calling store.delete_repo or other mutators; if the IDs don't match return an appropriate error (NOT_FOUND or FORBIDDEN) instead of operating across projects. Ensure you reference the same store API used currently (the store.* methods) and perform the check in delete_repo (and the analogous worktree handlers) using the repo.project_id/worktree.project_id field.
🧹 Nitpick comments (1)
interface/src/api/client.ts (1)
1378-1381: AliasProjectDetailResponseto the real shape or remove it.The endpoint already returns
ProjectWithRelations. Leaving a separate[key: string]: unknownexport around makes it easy for future callers to throw away type safety and drift from the server contract.🧹 Suggested change
-export interface ProjectDetailResponse { - /** The flattened project + repos + worktrees (serde #[flatten]) */ - [key: string]: unknown; -} +export type ProjectDetailResponse = ProjectWithRelations;🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@interface/src/api/client.ts` around lines 1378 - 1381, ProjectDetailResponse is a lax index-signature that undermines the actual server contract; replace it with or alias it to the precise shape returned by the endpoint (ProjectWithRelations) so callers keep type safety — change the export for ProjectDetailResponse to export type ProjectDetailResponse = ProjectWithRelations; and update any local imports/usages to use the new alias (or remove ProjectDetailResponse entirely and use ProjectWithRelations everywhere).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/api/projects.rs`:
- Around line 25-30: ProjectListQuery currently deserializes status as
Option<String> and later uses and_then(ProjectStatus::parse), which silently
ignores invalid values; change the query and any similar structs (e.g., the
query/DTO types used around the other occurrences) to use Option<ProjectStatus>
so serde will reject unknown values at deserialization time, remove the manual
ProjectStatus::parse calls (or replace callers that accept String with the
enum-typed fields), and update handlers/signatures that read these structs (and
the update payload type) to expect Option<ProjectStatus> rather than
Option<String> so invalid status inputs return a deserialization error instead
of being treated as “no status.”
- Around line 688-703: The handler currently deletes the DB row regardless of
git removal errors; change the flow so you only call
store.delete_worktree(&worktree_id) when remove_worktree succeeded or when the
remove_worktree error specifically indicates the worktree is already absent
(e.g., a "not found"/"already removed" variant or message). Update the match on
the result of crate::projects::git::remove_worktree(&repo_abs_path,
&worktree_abs_path).await to: on Ok => proceed to delete and return success; on
Err(e) => if e is the explicit "worktree missing" variant/string then log at
info/debug and proceed to delete the DB record; otherwise log the real git
failure and return an error (StatusCode::INTERNAL_SERVER_ERROR) without calling
store.delete_worktree. Ensure you reference remove_worktree, delete_worktree,
repo_abs_path, worktree_abs_path, and worktree_id when locating the change.
- Around line 769-792: The dir_size function currently uses
entry.metadata().is_dir() which follows symlinks — change the logic to call
tokio::fs::symlink_metadata for each entry to detect symlinks, check
file_type().is_symlink() and skip symlinked entries (or at minimum skip
symlinked directories), and only then treat non-symlink entries: if
symlink_metadata indicates a directory push entry.path() onto the stack,
otherwise use the (non-symlink) metadata.len() for files; reference dir_size,
tokio::fs::symlink_metadata, entry.metadata/entry.path, metadata.is_dir and
file_type().is_symlink when making the change.
- Around line 223-245: The current code masks DB failures by using
unwrap_or_default on store.list_repos and store.list_worktrees; change both
calls to handle their Result explicitly (e.g., match or use ?/map_err) so that
on Err you log the error with context (including project_id) and return early
instead of treating it as an empty list; update the block around list_repos and
list_worktrees and their subsequent loops (references: list_repos,
list_worktrees, set_repo_disk_usage, set_worktree_disk_usage) to propagate or
return after logging the DB error.
- Around line 608-633: worktree_name coming from request.worktree_name is
treated as a path and passed to root.join(&worktree_name) which allows absolute
paths and parent components; validate/sanitize worktree_name before joining:
ensure it is exactly one normal path segment (no '/' or '\\', no '..'
components, not absolute) and reject with a client error if invalid; apply this
check where worktree_name is computed (the variable worktree_name derived from
request.worktree_name or request.branch), before calling
crate::projects::git::create_worktree and before calling store.create_worktree
to guarantee you only register safe directory names.
In `@src/projects/store.rs`:
- Around line 518-527: The row_to_project function currently swallows malformed
persisted data by using unwrap_or_default/unwrap_or on tags, settings, and
status; change these to propagate errors instead: parse tags via
serde_json::from_str(&tags_raw).context("invalid project tags")? (producing
Vec<String>), parse settings via
serde_json::from_str(&settings_raw).context("invalid project settings")?
(producing serde_json::Value), and handle status by converting
ProjectStatus::parse(&status_raw) into a Result and .context("invalid project
status")? so malformed rows return an Err with context instead of silently
defaulting; keep the existing row.try_get(...) context checks and ensure the
function's Result<Project> propagates these new errors.
- Around line 582-590: The in-memory SQLite pool created in async fn setup_pool
currently uses SqlitePool::connect("sqlite::memory:"), which yields multiple
distinct in-memory DBs across pooled connections causing flaky tests; fix by
either pinning the pool to a single connection (use
sqlx::sqlite::SqlitePoolOptions::new().max_connections(1).connect("sqlite::memory:").await)
or use a shared-cache memory DSN (e.g. "file::memory:?cache=shared") when
calling connect, then run sqlx::migrate! on that same pool; apply the same
change to the similar pool creation in src/messaging/webchat.rs (around the code
that calls SqlitePool::connect).
---
Duplicate comments:
In `@interface/src/routes/AgentProjects.tsx`:
- Around line 422-442: The icon-only delete Button lacks an accessible name;
update the Button components rendering the trash SVG (the one using onDelete and
disabled={isDeleting}) to include a descriptive accessible label (e.g., add
aria-label="Delete project" or aria-label that matches the action like "Delete
repository" / "Delete worktree") or include visually-hidden text inside the
Button so screen readers can distinguish the actions; apply the same change to
the other similar Button instance in this file that also renders only an SVG.
- Around line 257-272: The repoId state is only set on first render so reopening
the dialog can leave an outdated selection; add a useEffect that watches the
dialog open flag (e.g. isOpen or whatever prop indicates the dialog is open) and
the repos array and calls setRepoId(repos[0]?.id ?? "") when the dialog opens to
reset the selection; reference the repoId/setRepoId state and repos in the
effect and ensure it runs on changes to isOpen and repos.
In `@src/api/projects.rs`:
- Around line 525-543: Validate and sanitize request.path before calling
store.create_repo: ensure it is not absolute and does not contain parent
components ("..") that could escape project.root_path (e.g., using
Path::new(request.path) checks). In the handler where get_project and
create_repo are called, reject or normalize any path containing an absolute
prefix or ".." components and return a 400/INVALID_ARGUMENT; only allow a
relative, clean path (no leading '/' and no components equal to "..") when
constructing CreateRepoInput for create_repo so subsequent filesystem/git joins
against project.root_path cannot escape the project directory.
- Around line 554-565: The delete_repo handler (and the other handlers around
the file: the blocks at ~596-605 and ~655-663) are deleting/fetching by raw
repo/worktree ID without verifying the resource belongs to the project in the
URL; update these handlers to first load the resource (e.g., via the store
method that returns the repo or worktree) and assert its project_id equals the
{project_id} path param before calling store.delete_repo or other mutators; if
the IDs don't match return an appropriate error (NOT_FOUND or FORBIDDEN) instead
of operating across projects. Ensure you reference the same store API used
currently (the store.* methods) and perform the check in delete_repo (and the
analogous worktree handlers) using the repo.project_id/worktree.project_id
field.
---
Nitpick comments:
In `@interface/src/api/client.ts`:
- Around line 1378-1381: ProjectDetailResponse is a lax index-signature that
undermines the actual server contract; replace it with or alias it to the
precise shape returned by the endpoint (ProjectWithRelations) so callers keep
type safety — change the export for ProjectDetailResponse to export type
ProjectDetailResponse = ProjectWithRelations; and update any local
imports/usages to use the new alias (or remove ProjectDetailResponse entirely
and use ProjectWithRelations everywhere).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b962e2a3-2224-44e9-b2fc-323867018827
📒 Files selected for processing (5)
interface/src/api/client.tsinterface/src/routes/AgentProjects.tsxmigrations/20260306000003_project_disk_usage.sqlsrc/api/projects.rssrc/projects/store.rs
| #[derive(Deserialize)] | ||
| pub(super) struct ProjectListQuery { | ||
| agent_id: String, | ||
| #[serde(default)] | ||
| status: Option<String>, | ||
| } |
There was a problem hiding this comment.
Reject unknown status values instead of ignoring them.
and_then(ProjectStatus::parse) turns ?status=bogus into “list everything” and a bad update payload into a silent no-op. Deserialize status as Option<ProjectStatus> so invalid input fails at the boundary instead.
🛠️ Suggested change
pub(super) struct ProjectListQuery {
agent_id: String,
#[serde(default)]
- status: Option<String>,
+ status: Option<ProjectStatus>,
}
@@
pub(super) struct UpdateProjectRequest {
@@
#[serde(default)]
- status: Option<String>,
+ status: Option<ProjectStatus>,
}
@@
- let status = query.status.as_deref().and_then(ProjectStatus::parse);
+ let status = query.status;
@@
- let status = request.status.as_deref().and_then(ProjectStatus::parse);
+ let status = request.status;Also applies to: 50-65, 261-271, 378-390
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/projects.rs` around lines 25 - 30, ProjectListQuery currently
deserializes status as Option<String> and later uses
and_then(ProjectStatus::parse), which silently ignores invalid values; change
the query and any similar structs (e.g., the query/DTO types used around the
other occurrences) to use Option<ProjectStatus> so serde will reject unknown
values at deserialization time, remove the manual ProjectStatus::parse calls (or
replace callers that accept String with the enum-typed fields), and update
handlers/signatures that read these structs (and the update payload type) to
expect Option<ProjectStatus> rather than Option<String> so invalid status inputs
return a deserialization error instead of being treated as “no status.”
| let repos = store.list_repos(project_id).await.unwrap_or_default(); | ||
| for repo in &repos { | ||
| let abs_path = root.join(&repo.path); | ||
| if abs_path.is_dir() { | ||
| let bytes = dir_size(&abs_path).await; | ||
| if let Err(error) = store.set_repo_disk_usage(&repo.id, bytes as i64).await { | ||
| tracing::warn!(%error, repo = %repo.name, "failed to cache repo disk usage"); | ||
| } | ||
| } | ||
| } | ||
|
|
||
| let worktrees = store.list_worktrees(project_id).await.unwrap_or_default(); | ||
| for worktree in &worktrees { | ||
| let abs_path = root.join(&worktree.path); | ||
| if abs_path.is_dir() { | ||
| let bytes = dir_size(&abs_path).await; | ||
| if let Err(error) = store | ||
| .set_worktree_disk_usage(&worktree.id, bytes as i64) | ||
| .await | ||
| { | ||
| tracing::warn!(%error, worktree = %worktree.name, "failed to cache worktree disk usage"); | ||
| } | ||
| } |
There was a problem hiding this comment.
Don't turn store failures into empty disk-usage refreshes.
Both list_repos and list_worktrees fall back to [] on error here, so a database failure is silently reported as “nothing to update”. Log and return early instead of masking the failure.
🛠️ Suggested change
- let repos = store.list_repos(project_id).await.unwrap_or_default();
+ let repos = match store.list_repos(project_id).await {
+ Ok(repos) => repos,
+ Err(error) => {
+ tracing::warn!(%error, "failed to list repos for disk usage refresh");
+ return;
+ }
+ };
@@
- let worktrees = store.list_worktrees(project_id).await.unwrap_or_default();
+ let worktrees = match store.list_worktrees(project_id).await {
+ Ok(worktrees) => worktrees,
+ Err(error) => {
+ tracing::warn!(%error, "failed to list worktrees for disk usage refresh");
+ return;
+ }
+ };📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| let repos = store.list_repos(project_id).await.unwrap_or_default(); | |
| for repo in &repos { | |
| let abs_path = root.join(&repo.path); | |
| if abs_path.is_dir() { | |
| let bytes = dir_size(&abs_path).await; | |
| if let Err(error) = store.set_repo_disk_usage(&repo.id, bytes as i64).await { | |
| tracing::warn!(%error, repo = %repo.name, "failed to cache repo disk usage"); | |
| } | |
| } | |
| } | |
| let worktrees = store.list_worktrees(project_id).await.unwrap_or_default(); | |
| for worktree in &worktrees { | |
| let abs_path = root.join(&worktree.path); | |
| if abs_path.is_dir() { | |
| let bytes = dir_size(&abs_path).await; | |
| if let Err(error) = store | |
| .set_worktree_disk_usage(&worktree.id, bytes as i64) | |
| .await | |
| { | |
| tracing::warn!(%error, worktree = %worktree.name, "failed to cache worktree disk usage"); | |
| } | |
| } | |
| let repos = match store.list_repos(project_id).await { | |
| Ok(repos) => repos, | |
| Err(error) => { | |
| tracing::warn!(%error, "failed to list repos for disk usage refresh"); | |
| return; | |
| } | |
| }; | |
| for repo in &repos { | |
| let abs_path = root.join(&repo.path); | |
| if abs_path.is_dir() { | |
| let bytes = dir_size(&abs_path).await; | |
| if let Err(error) = store.set_repo_disk_usage(&repo.id, bytes as i64).await { | |
| tracing::warn!(%error, repo = %repo.name, "failed to cache repo disk usage"); | |
| } | |
| } | |
| } | |
| let worktrees = match store.list_worktrees(project_id).await { | |
| Ok(worktrees) => worktrees, | |
| Err(error) => { | |
| tracing::warn!(%error, "failed to list worktrees for disk usage refresh"); | |
| return; | |
| } | |
| }; | |
| for worktree in &worktrees { | |
| let abs_path = root.join(&worktree.path); | |
| if abs_path.is_dir() { | |
| let bytes = dir_size(&abs_path).await; | |
| if let Err(error) = store | |
| .set_worktree_disk_usage(&worktree.id, bytes as i64) | |
| .await | |
| { | |
| tracing::warn!(%error, worktree = %worktree.name, "failed to cache worktree disk usage"); | |
| } | |
| } |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/projects.rs` around lines 223 - 245, The current code masks DB
failures by using unwrap_or_default on store.list_repos and
store.list_worktrees; change both calls to handle their Result explicitly (e.g.,
match or use ?/map_err) so that on Err you log the error with context (including
project_id) and return early instead of treating it as an empty list; update the
block around list_repos and list_worktrees and their subsequent loops
(references: list_repos, list_worktrees, set_repo_disk_usage,
set_worktree_disk_usage) to propagate or return after logging the DB error.
src/api/projects.rs
Outdated
| if let Err(error) = | ||
| crate::projects::git::remove_worktree(&repo_abs_path, &worktree_abs_path).await | ||
| { | ||
| tracing::warn!(%error, "git worktree remove failed, deleting DB record anyway"); | ||
| } | ||
|
|
||
| // Delete from database. | ||
| store.delete_worktree(&worktree_id).await.map_err(|error| { | ||
| tracing::error!(%error, "failed to delete worktree record"); | ||
| StatusCode::INTERNAL_SERVER_ERROR | ||
| })?; | ||
|
|
||
| Ok(Json(ActionResponse { | ||
| success: true, | ||
| message: "worktree removed".into(), | ||
| })) |
There was a problem hiding this comment.
Only drop the DB record when the worktree is actually gone.
A generic git worktree remove failure currently still returns success and deletes the row. That leaves the checkout on disk but makes it invisible in the UI. Distinguish “already deleted” from real git failures and only unlink the record in the former case.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/projects.rs` around lines 688 - 703, The handler currently deletes
the DB row regardless of git removal errors; change the flow so you only call
store.delete_worktree(&worktree_id) when remove_worktree succeeded or when the
remove_worktree error specifically indicates the worktree is already absent
(e.g., a "not found"/"already removed" variant or message). Update the match on
the result of crate::projects::git::remove_worktree(&repo_abs_path,
&worktree_abs_path).await to: on Ok => proceed to delete and return success; on
Err(e) => if e is the explicit "worktree missing" variant/string then log at
info/debug and proceed to delete the DB record; otherwise log the real git
failure and return an error (StatusCode::INTERNAL_SERVER_ERROR) without calling
store.delete_worktree. Ensure you reference remove_worktree, delete_worktree,
repo_abs_path, worktree_abs_path, and worktree_id when locating the change.
| fn row_to_project(row: &sqlx::sqlite::SqliteRow) -> Result<Project> { | ||
| let tags_raw: String = row.try_get("tags").context("missing tags")?; | ||
| let tags: Vec<String> = serde_json::from_str(&tags_raw).unwrap_or_default(); | ||
|
|
||
| let settings_raw: String = row.try_get("settings").context("missing settings")?; | ||
| let settings: Value = | ||
| serde_json::from_str(&settings_raw).unwrap_or(Value::Object(Default::default())); | ||
|
|
||
| let status_raw: String = row.try_get("status").context("missing status")?; | ||
| let status = ProjectStatus::parse(&status_raw).unwrap_or(ProjectStatus::Active); |
There was a problem hiding this comment.
Return an error for malformed persisted project data.
Falling back to [], {}, or active makes a corrupted row look valid and can silently overwrite the bad value on the next update. These deserializations should fail fast with context instead of defaulting.
🛠️ Suggested change
fn row_to_project(row: &sqlx::sqlite::SqliteRow) -> Result<Project> {
let tags_raw: String = row.try_get("tags").context("missing tags")?;
- let tags: Vec<String> = serde_json::from_str(&tags_raw).unwrap_or_default();
+ let tags: Vec<String> =
+ serde_json::from_str(&tags_raw).context("failed to parse project tags")?;
let settings_raw: String = row.try_get("settings").context("missing settings")?;
- let settings: Value =
- serde_json::from_str(&settings_raw).unwrap_or(Value::Object(Default::default()));
+ let settings: Value =
+ serde_json::from_str(&settings_raw).context("failed to parse project settings")?;
let status_raw: String = row.try_get("status").context("missing status")?;
- let status = ProjectStatus::parse(&status_raw).unwrap_or(ProjectStatus::Active);
+ let status = ProjectStatus::parse(&status_raw).context("invalid project status")?;📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| fn row_to_project(row: &sqlx::sqlite::SqliteRow) -> Result<Project> { | |
| let tags_raw: String = row.try_get("tags").context("missing tags")?; | |
| let tags: Vec<String> = serde_json::from_str(&tags_raw).unwrap_or_default(); | |
| let settings_raw: String = row.try_get("settings").context("missing settings")?; | |
| let settings: Value = | |
| serde_json::from_str(&settings_raw).unwrap_or(Value::Object(Default::default())); | |
| let status_raw: String = row.try_get("status").context("missing status")?; | |
| let status = ProjectStatus::parse(&status_raw).unwrap_or(ProjectStatus::Active); | |
| fn row_to_project(row: &sqlx::sqlite::SqliteRow) -> Result<Project> { | |
| let tags_raw: String = row.try_get("tags").context("missing tags")?; | |
| let tags: Vec<String> = | |
| serde_json::from_str(&tags_raw).context("failed to parse project tags")?; | |
| let settings_raw: String = row.try_get("settings").context("missing settings")?; | |
| let settings: Value = | |
| serde_json::from_str(&settings_raw).context("failed to parse project settings")?; | |
| let status_raw: String = row.try_get("status").context("missing status")?; | |
| let status = ProjectStatus::parse(&status_raw).context("invalid project status")?; |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/projects/store.rs` around lines 518 - 527, The row_to_project function
currently swallows malformed persisted data by using unwrap_or_default/unwrap_or
on tags, settings, and status; change these to propagate errors instead: parse
tags via serde_json::from_str(&tags_raw).context("invalid project tags")?
(producing Vec<String>), parse settings via
serde_json::from_str(&settings_raw).context("invalid project settings")?
(producing serde_json::Value), and handle status by converting
ProjectStatus::parse(&status_raw) into a Result and .context("invalid project
status")? so malformed rows return an Err with context instead of silently
defaulting; keep the existing row.try_get(...) context checks and ensure the
function's Result<Project> propagates these new errors.
| async fn setup_pool() -> SqlitePool { | ||
| let pool = SqlitePool::connect("sqlite::memory:") | ||
| .await | ||
| .expect("failed to create in-memory pool"); | ||
| sqlx::migrate!("./migrations") | ||
| .run(&pool) | ||
| .await | ||
| .expect("failed to run migrations"); | ||
| pool |
There was a problem hiding this comment.
🧩 Analysis chain
🌐 Web query:
In SQLite and SQLx, does SqlitePool::connect("sqlite::memory:")create a separate in-memory database per pooled connection, and ismax_connections(1)orfile::memory:?cache=shared required for reliable shared test state?
💡 Result:
Yes.
What happens with SqlitePool::connect("sqlite::memory:") in SQLx
- In “plain SQLite”,
:memory:creates a private in-memory database per connection (so different connections do not see the same schema/data). (learn.microsoft.com) - In SQLx, the URL
sqlite::memory:is parsed into an in-memory DB with shared-cache enabled, but each connection gets a different URI name (file:sqlx-in-memory-{seqno}), so pool connections still end up pointing at different in-memory databases. (docs.rs)
This is exactly the “migrations applied on the first connection, later connections are empty” failure mode people hit with pools + in-memory SQLite. (github.com)
Do you need max_connections(1) or file::memory:?cache=shared for shared test state?
- If you want reliable shared state across
Pool::acquire()calls, you generally needmax_connections(1)(and also avoid the pool dropping/replacing that one connection via timeouts/lifetimes). Otherwise, acquiring a different pooled connection can give you a different empty in-memory DB. (github.com) - If you want multiple connections to share the same in-memory DB, you must use a URI filename with shared cache, e.g.
sqlite:file::memory:?cache=shared(or a namedfile:memdb1?mode=memory&cache=shared). SQLite only shares an in-memory DB between connections when opened this way. (sqlite.org)
Citations:
- 1: https://learn.microsoft.com/en-us/dotnet/standard/data/sqlite/in-memory-databases?utm_source=openai
- 2: https://docs.rs/crate/sqlx-core-guts/latest/source/src/sqlite/options/parse.rs
- 3: sqlite in-memory databases do not seem to work with connection pools launchbadge/sqlx#2510
- 4: sqlite in-memory databases do not seem to work with connection pools launchbadge/sqlx#2510
- 5: https://www.sqlite.org/sharedcache.html?utm_source=openai
🏁 Script executed:
cd /tmp/repo 2>/dev/null || true
git ls-files src/projects/store.rs | head -5Repository: spacedriveapp/spacebot
Length of output: 87
🏁 Script executed:
wc -l src/projects/store.rsRepository: spacedriveapp/spacebot
Length of output: 91
🏁 Script executed:
sed -n '575,595p' src/projects/store.rsRepository: spacedriveapp/spacebot
Length of output: 545
🏁 Script executed:
rg -n "SqlitePool::connect" --type rs -B2 -A2Repository: spacedriveapp/spacebot
Length of output: 92
🏁 Script executed:
rg -n "SqlitePool::connect" -B2 -A2Repository: spacedriveapp/spacebot
Length of output: 1417
Use a single shared connection for the in-memory SQLite tests.
sqlite::memory: in SQLx creates a separate in-memory database for each pooled connection (SQLx assigns each connection a unique URI file:sqlx-in-memory-{seqno}), so migrations can run on one connection and later queries on another connection will hit an empty database. This causes flaky tests.
Pin the pool to a single connection or use a shared-cache memory DSN.
🧪 Suggested change
async fn setup_pool() -> SqlitePool {
- let pool = SqlitePool::connect("sqlite::memory:")
+ let pool = sqlx::sqlite::SqlitePoolOptions::new()
+ .max_connections(1)
+ .connect("sqlite::memory:")
.await
.expect("failed to create in-memory pool");Note: The same pattern exists in src/messaging/webchat.rs:136.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| async fn setup_pool() -> SqlitePool { | |
| let pool = SqlitePool::connect("sqlite::memory:") | |
| .await | |
| .expect("failed to create in-memory pool"); | |
| sqlx::migrate!("./migrations") | |
| .run(&pool) | |
| .await | |
| .expect("failed to run migrations"); | |
| pool | |
| async fn setup_pool() -> SqlitePool { | |
| let pool = sqlx::sqlite::SqlitePoolOptions::new() | |
| .max_connections(1) | |
| .connect("sqlite::memory:") | |
| .await | |
| .expect("failed to create in-memory pool"); | |
| sqlx::migrate!("./migrations") | |
| .run(&pool) | |
| .await | |
| .expect("failed to run migrations"); | |
| pool | |
| } |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/projects/store.rs` around lines 582 - 590, The in-memory SQLite pool
created in async fn setup_pool currently uses
SqlitePool::connect("sqlite::memory:"), which yields multiple distinct in-memory
DBs across pooled connections causing flaky tests; fix by either pinning the
pool to a single connection (use
sqlx::sqlite::SqlitePoolOptions::new().max_connections(1).connect("sqlite::memory:").await)
or use a shared-cache memory DSN (e.g. "file::memory:?cache=shared") when
calling connect, then run sqlx::migrate! on that same pool; apply the same
change to the similar pool creation in src/messaging/webchat.rs (around the code
that calls SqlitePool::connect).
…ore lifecycle - Clarify prompt templates to direct LLM to select a specific repo before creating a worktree, instead of suggesting worktree in project root - Insert project_store into ApiState map when creating agents via API - Remove project_store from ApiState map when deleting agents via API
There was a problem hiding this comment.
🧹 Nitpick comments (1)
src/api/agents.rs (1)
407-408: Consider retrieving fromstate.project_storesfirst for consistency.The
task_storelookup at lines 395-398 tries to retrieve from state before falling back to creating a new instance. For consistency,project_storecould follow the same pattern, though functionally it doesn't matter sinceProjectStoreis stateless.♻️ Optional consistency fix
+ let project_stores = state.project_stores.load(); let task_store = task_stores .get(agent_id) .cloned() .unwrap_or_else(|| Arc::new(crate::tasks::TaskStore::new(sqlite_pool.clone()))); let llm_manager = llm_manager.clone(); let force = request.force; let agent_id = agent_id.clone(); let task_store_registry = state.task_store_registry.clone(); let injection_tx = state.injection_tx.clone(); + let project_store = project_stores + .get(&agent_id) + .cloned() + .unwrap_or_else(|| Arc::new(crate::projects::ProjectStore::new(sqlite_pool.clone()))); tokio::spawn(async move { let (event_tx, memory_event_tx) = crate::create_process_event_buses(); - let project_store = - std::sync::Arc::new(crate::projects::ProjectStore::new(sqlite_pool.clone())); let deps = crate::AgentDeps {🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/agents.rs` around lines 407 - 408, Follow the same lookup pattern used by task_store: attempt to retrieve project_store from state.project_stores first and only create a new Arc<ProjectStore> with ProjectStore::new(sqlite_pool.clone()) as a fallback; update the code around project_store so it checks state.project_stores.get(&project_id) (or the same key used for task_store) and uses that Arc if present, otherwise constructs and inserts/uses the newly created Arc<ProjectStore>.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/api/agents.rs`:
- Around line 407-408: Follow the same lookup pattern used by task_store:
attempt to retrieve project_store from state.project_stores first and only
create a new Arc<ProjectStore> with ProjectStore::new(sqlite_pool.clone()) as a
fallback; update the code around project_store so it checks
state.project_stores.get(&project_id) (or the same key used for task_store) and
uses that Arc if present, otherwise constructs and inserts/uses the newly
created Arc<ProjectStore>.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e4f72416-a26f-40e4-8bb3-f95af79c3337
📒 Files selected for processing (3)
prompts/en/fragments/projects_context.md.j2prompts/en/tools/project_manage_description.md.j2src/api/agents.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- prompts/en/fragments/projects_context.md.j2
src/tools/project_manage.rs
Outdated
| } | ||
|
|
||
| // Compute relative path from project root | ||
| let relative_path = abs_path |
There was a problem hiding this comment.
If abs_path is outside the project root, falling back to persisting repo_path as-is can store an absolute/.. path and later escape root.join(&repo.path).
| let relative_path = abs_path | |
| // Compute relative path from project root. | |
| let relative_path = match abs_path.strip_prefix(&project.root_path) { | |
| Ok(path) => path.to_string_lossy().to_string(), | |
| Err(_) => { | |
| return Ok(ProjectManageOutput { | |
| success: false, | |
| action: "add_repo".to_string(), | |
| message: "repo_path must be within the project root".into(), | |
| data: None, | |
| }); | |
| } | |
| }; |
| .get_repo(&repo_id) | ||
| .await | ||
| .map_err(|error| ProjectManageError(format!("failed to get repo: {error}")))? | ||
| .ok_or_else(|| ProjectManageError(format!("repo not found: {repo_id}")))?; |
There was a problem hiding this comment.
Probably worth asserting the repo_id actually belongs to this project_id before using repo.path / creating the worktree.
| .ok_or_else(|| ProjectManageError(format!("repo not found: {repo_id}")))?; | |
| if repo.project_id != project_id { | |
| return Err(ProjectManageError(format!( | |
| "repo_id {repo_id} does not belong to project_id {project_id}" | |
| ))); | |
| } |
|
|
||
| let root = Path::new(&project.root_path); | ||
| let repo_abs_path = root.join(&repo.path); | ||
| let worktree_abs_path = root.join(&worktree_dir_name); |
There was a problem hiding this comment.
worktree_name is treated like a directory name here, but root.join(&worktree_dir_name) will honor absolute paths / parent components. I’d validate it’s exactly one normal path segment before joining.
| let worktree_abs_path = root.join(&worktree_dir_name); | |
| let worktree_dir_name_path = Path::new(&worktree_dir_name); | |
| if worktree_dir_name_path.is_absolute() | |
| || worktree_dir_name_path.components().count() != 1 | |
| || worktree_dir_name_path | |
| .components() | |
| .any(|c| matches!(c, std::path::Component::CurDir | std::path::Component::ParentDir)) | |
| { | |
| return Err(ProjectManageError( | |
| "'worktree_name' must be a single directory name".into(), | |
| )); | |
| } |
| .get_worktree(&worktree_id) | ||
| .await | ||
| .map_err(|error| ProjectManageError(format!("failed to get worktree: {error}")))? | ||
| .ok_or_else(|| ProjectManageError(format!("worktree not found: {worktree_id}")))?; |
There was a problem hiding this comment.
Guard against mixing {project_id} + an unrelated worktree_id (otherwise this can remove the wrong worktree / delete the wrong DB record).
| .ok_or_else(|| ProjectManageError(format!("worktree not found: {worktree_id}")))?; | |
| if worktree.project_id != project_id { | |
| return Err(ProjectManageError(format!( | |
| "worktree_id {worktree_id} does not belong to project_id {project_id}" | |
| ))); | |
| } |
…d, API config section, and UI config page Phase 6 of the Projects feature. Adds ProjectsConfig with 6 fields (use_worktrees, worktree_name_template, auto_create_worktrees, auto_discover_repos, auto_discover_worktrees, disk_usage_warning_threshold) to the full config pipeline: TOML schema, config resolution, RuntimeConfig with ArcSwap hot-reload, API GET/PUT handlers, and the Projects section in the AgentConfig UI with toggles, input, and number stepper controls.
There was a problem hiding this comment.
Actionable comments posted: 3
♻️ Duplicate comments (2)
src/projects/store.rs (2)
545-554:⚠️ Potential issue | 🟠 MajorReturn an error for malformed persisted project data.
Lines 547, 550-551, and 554 silently fall back to defaults when JSON parsing fails or status is invalid. This masks data corruption and violates the coding guideline against silently discarding errors.
🛠️ Suggested fix
fn row_to_project(row: &sqlx::sqlite::SqliteRow) -> Result<Project> { let tags_raw: String = row.try_get("tags").context("missing tags")?; - let tags: Vec<String> = serde_json::from_str(&tags_raw).unwrap_or_default(); + let tags: Vec<String> = + serde_json::from_str(&tags_raw).context("failed to parse project tags")?; let settings_raw: String = row.try_get("settings").context("missing settings")?; - let settings: Value = - serde_json::from_str(&settings_raw).unwrap_or(Value::Object(Default::default())); + let settings: Value = + serde_json::from_str(&settings_raw).context("failed to parse project settings")?; let status_raw: String = row.try_get("status").context("missing status")?; - let status = ProjectStatus::parse(&status_raw).unwrap_or(ProjectStatus::Active); + let status = ProjectStatus::parse(&status_raw) + .context("invalid project status")?;As per coding guidelines: "Never silently discard errors; no
let _ =on Results; handle, log, or propagate errors".🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/projects/store.rs` around lines 545 - 554, The row_to_project function is silently swallowing JSON and parse errors for tags, settings, and status; change it to propagate errors instead: when parsing tags_raw and settings_raw use serde_json::from_str(...) and return Err (use ? or map_err with context) instead of unwrap_or_default, and when parsing status via ProjectStatus::parse(&status_raw) return an error (with context) if parsing fails rather than defaulting to ProjectStatus::Active; update error contexts to reference the fields ("tags", "settings", "status") so malformed persisted project data surfaces as a failure from row_to_project.
609-617:⚠️ Potential issue | 🟡 MinorUse a single connection for in-memory SQLite tests.
sqlite::memory:in SQLx creates a separate in-memory database for each pooled connection. This can cause flaky tests where migrations run on one connection but queries execute on another empty database.🧪 Suggested fix
async fn setup_pool() -> SqlitePool { - let pool = SqlitePool::connect("sqlite::memory:") + let pool = sqlx::sqlite::SqlitePoolOptions::new() + .max_connections(1) + .connect("sqlite::memory:") .await .expect("failed to create in-memory pool");🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/projects/store.rs` around lines 609 - 617, The in-memory DB is created per pooled connection causing flakiness in setup_pool; change the pool creation to use a single connection so migrations and queries hit the same DB (e.g. use SqlitePoolOptions::new().max_connections(1).connect("sqlite::memory:").await or use a shared in-memory URI like "file::memory:?cache=shared" and ensure pool max connections is 1), keeping the rest of the function (sqlx::migrate! and returning pool) intact and referencing setup_pool, SqlitePool::connect, and sqlx::migrate! so the migrations run on the same connection used for queries.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@interface/src/routes/AgentConfig.tsx`:
- Around line 1065-1072: The UI currently displays the token example with spaces
("{ branch }") which mismatches the supported syntax; update the help text
paragraph (the <p> near the "Worktree Name Template" label) to list the exact
supported tokens "{branch}", "{feature}", and "{repo}" (no spaces) and change
the Input component's placeholder prop (where value is
localValues.worktree_name_template and onChange calls
handleChange("worktree_name_template", ...)) to the exact token format e.g.
"{branch}" so users see and can copy the correct syntax.
- Around line 1092-1099: The NumberStepper is showing raw bytes but the UI is
GB; change its value, onChange, and step to use gigabytes instead of bytes: pass
value={Math.round((localValues.disk_usage_warning_threshold as number) /
1073741824)} (or divide without rounding if fractional GBs are allowed), set
step={1} (1 GB), and in onChange convert back to bytes before saving by calling
handleChange("disk_usage_warning_threshold", v * 1073741824). Keep the
description text as-is (it already converts to GB) and ensure types/casts match
NumberStepper's props.
In `@src/api/config.rs`:
- Around line 821-823: The code casts several u64 config fields (e.g.,
projects.disk_usage_warning_threshold, rate_limit_cooldown_secs,
max_concurrent_branches, max_turns, branch_max_turns, context_window,
tick_interval_secs, worker_timeout_secs) directly to i64 which will wrap for
values > i64::MAX; instead validate each u64 before writing TOML by checking it
fits in i64 (use i64::try_from or threshold <= i64::MAX) and reject or return an
error for values larger than i64::MAX, and only then convert and assign to
toml_edit::value(...) — apply this same check-and-convert pattern to every
u64-to-i64 cast in src/api/config.rs where those config fields are written.
---
Duplicate comments:
In `@src/projects/store.rs`:
- Around line 545-554: The row_to_project function is silently swallowing JSON
and parse errors for tags, settings, and status; change it to propagate errors
instead: when parsing tags_raw and settings_raw use serde_json::from_str(...)
and return Err (use ? or map_err with context) instead of unwrap_or_default, and
when parsing status via ProjectStatus::parse(&status_raw) return an error (with
context) if parsing fails rather than defaulting to ProjectStatus::Active;
update error contexts to reference the fields ("tags", "settings", "status") so
malformed persisted project data surfaces as a failure from row_to_project.
- Around line 609-617: The in-memory DB is created per pooled connection causing
flakiness in setup_pool; change the pool creation to use a single connection so
migrations and queries hit the same DB (e.g. use
SqlitePoolOptions::new().max_connections(1).connect("sqlite::memory:").await or
use a shared in-memory URI like "file::memory:?cache=shared" and ensure pool max
connections is 1), keeping the rest of the function (sqlx::migrate! and
returning pool) intact and referencing setup_pool, SqlitePool::connect, and
sqlx::migrate! so the migrations run on the same connection used for queries.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f06140f9-6e6d-432b-9449-87f0ebf99d78
📒 Files selected for processing (9)
interface/src/api/client.tsinterface/src/routes/AgentConfig.tsxsrc/api/agents.rssrc/api/config.rssrc/config/load.rssrc/config/runtime.rssrc/config/toml_schema.rssrc/config/types.rssrc/projects/store.rs
| <label className="text-sm font-medium text-ink">Worktree Name Template</label> | ||
| <p className="text-tiny text-ink-faint">Template for naming new worktrees. Use {"{"} branch {"}"} for the branch name.</p> | ||
| <Input | ||
| value={localValues.worktree_name_template as string} | ||
| onChange={(e) => handleChange("worktree_name_template", e.target.value)} | ||
| placeholder="{branch}" | ||
| className="border-app-line/50 bg-app-darkBox/30" | ||
| /> |
There was a problem hiding this comment.
Show the placeholder tokens exactly as supported.
Line 1066 renders { branch }, while src/config/types.rs, Lines 888-889 document {branch}, {feature}, and {repo}. The current copy is easy to paste back verbatim and misleads about the supported syntax.
Suggested fix
- <p className="text-tiny text-ink-faint">Template for naming new worktrees. Use {"{"} branch {"}"} for the branch name.</p>
+ <p className="text-tiny text-ink-faint">
+ Template for naming new worktrees. Available variables: <code>{"{branch}"}</code>,{" "}
+ <code>{"{feature}"}</code>, and <code>{"{repo}"}</code>.
+ </p>📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| <label className="text-sm font-medium text-ink">Worktree Name Template</label> | |
| <p className="text-tiny text-ink-faint">Template for naming new worktrees. Use {"{"} branch {"}"} for the branch name.</p> | |
| <Input | |
| value={localValues.worktree_name_template as string} | |
| onChange={(e) => handleChange("worktree_name_template", e.target.value)} | |
| placeholder="{branch}" | |
| className="border-app-line/50 bg-app-darkBox/30" | |
| /> | |
| <label className="text-sm font-medium text-ink">Worktree Name Template</label> | |
| <p className="text-tiny text-ink-faint"> | |
| Template for naming new worktrees. Available variables: <code>{"{branch}"}</code>,{" "} | |
| <code>{"{feature}"}</code>, and <code>{"{repo}"}</code>. | |
| </p> | |
| <Input | |
| value={localValues.worktree_name_template as string} | |
| onChange={(e) => handleChange("worktree_name_template", e.target.value)} | |
| placeholder="{branch}" | |
| className="border-app-line/50 bg-app-darkBox/30" | |
| /> |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@interface/src/routes/AgentConfig.tsx` around lines 1065 - 1072, The UI
currently displays the token example with spaces ("{ branch }") which mismatches
the supported syntax; update the help text paragraph (the <p> near the "Worktree
Name Template" label) to list the exact supported tokens "{branch}",
"{feature}", and "{repo}" (no spaces) and change the Input component's
placeholder prop (where value is localValues.worktree_name_template and onChange
calls handleChange("worktree_name_template", ...)) to the exact token format
e.g. "{branch}" so users see and can copy the correct syntax.
| <NumberStepper | ||
| label="Disk Usage Warning" | ||
| description={`Warn when total project disk usage exceeds this threshold (${Math.round((localValues.disk_usage_warning_threshold as number) / 1073741824)} GB)`} | ||
| value={localValues.disk_usage_warning_threshold as number} | ||
| onChange={(v) => handleChange("disk_usage_warning_threshold", v)} | ||
| min={0} | ||
| step={1073741824} | ||
| /> |
There was a problem hiding this comment.
Display the disk threshold in GB, not raw bytes.
Line 1095 binds the stepper directly to the byte count, so the default renders as 53687091200 even though the UI copy speaks in GB. That makes manual edits much easier to misconfigure.
Suggested fix
<NumberStepper
label="Disk Usage Warning"
- description={`Warn when total project disk usage exceeds this threshold (${Math.round((localValues.disk_usage_warning_threshold as number) / 1073741824)} GB)`}
- value={localValues.disk_usage_warning_threshold as number}
- onChange={(v) => handleChange("disk_usage_warning_threshold", v)}
+ description="Warn when total project disk usage exceeds this threshold."
+ value={(localValues.disk_usage_warning_threshold as number) / 1073741824}
+ onChange={(valueGb) =>
+ handleChange("disk_usage_warning_threshold", valueGb * 1073741824)
+ }
min={0}
- step={1073741824}
+ step={1}
+ suffix=" GB"
/>📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| <NumberStepper | |
| label="Disk Usage Warning" | |
| description={`Warn when total project disk usage exceeds this threshold (${Math.round((localValues.disk_usage_warning_threshold as number) / 1073741824)} GB)`} | |
| value={localValues.disk_usage_warning_threshold as number} | |
| onChange={(v) => handleChange("disk_usage_warning_threshold", v)} | |
| min={0} | |
| step={1073741824} | |
| /> | |
| <NumberStepper | |
| label="Disk Usage Warning" | |
| description="Warn when total project disk usage exceeds this threshold." | |
| value={(localValues.disk_usage_warning_threshold as number) / 1073741824} | |
| onChange={(valueGb) => | |
| handleChange("disk_usage_warning_threshold", valueGb * 1073741824) | |
| } | |
| min={0} | |
| step={1} | |
| suffix=" GB" | |
| /> |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@interface/src/routes/AgentConfig.tsx` around lines 1092 - 1099, The
NumberStepper is showing raw bytes but the UI is GB; change its value, onChange,
and step to use gigabytes instead of bytes: pass
value={Math.round((localValues.disk_usage_warning_threshold as number) /
1073741824)} (or divide without rounding if fractional GBs are allowed), set
step={1} (1 GB), and in onChange convert back to bytes before saving by calling
handleChange("disk_usage_warning_threshold", v * 1073741824). Keep the
description text as-is (it already converts to GB) and ensure types/casts match
NumberStepper's props.
| if let Some(threshold) = projects.disk_usage_warning_threshold { | ||
| table["disk_usage_warning_threshold"] = toml_edit::value(threshold as i64); | ||
| } |
There was a problem hiding this comment.
🧩 Analysis chain
🌐 Web query:
In Rust, what happens when casting a u64larger thani64::MAXtoi64usingas?
💡 Result:
Casting a u64 to i64 with as is a same-size integer cast, so Rust does no range checking: it keeps the same 64 bits and interprets them as an i64 (two’s-complement). (doc.rust-lang.org)
So if the u64 value is larger than i64::MAX, the highest bit is 1, and the result becomes a negative i64, effectively wrapping modulo 2⁶⁴ (i.e., result == (value as i128) - 2^64). (doc.rust-lang.org)
Example: ((i64::MAX as u64) + 1) as i64 == i64::MIN. (doc.rust-lang.org)
Citations:
- 1: https://doc.rust-lang.org/beta/reference/expressions/operator-expr.html
- 2: https://doc.rust-lang.org/beta/reference/expressions/operator-expr.html
- 3: https://doc.rust-lang.org/beta/reference/expressions/operator-expr.html
🏁 Script executed:
# Check the actual code at the specified lines
if [ -f src/api/config.rs ]; then
echo "=== Checking lines 821-823 in src/api/config.rs ==="
sed -n '815,830p' src/api/config.rs
else
echo "File not found, searching for config.rs files..."
find . -name "config.rs" -type f | head -20
fiRepository: spacedriveapp/spacebot
Length of output: 735
🏁 Script executed:
# Search for similar u64 to i64 conversions in the codebase
echo "=== Searching for similar u64 to i64 casts ==="
rg 'as i64' src/api/config.rs -B 2 -A 1Repository: spacedriveapp/spacebot
Length of output: 2773
🏁 Script executed:
# Check if there are other disk_usage related settings with similar patterns
echo "=== Searching for disk_usage patterns ==="
rg 'disk_usage' src/ -B 2 -A 2Repository: spacedriveapp/spacebot
Length of output: 11371
Reject thresholds above i64::MAX before writing TOML.
Line 821 accepts a u64, but line 822 stores it with threshold as i64. Values above i64::MAX wrap to a negative integer, corrupting config.toml.
This pattern affects 19+ similar fields in this file (rate_limit_cooldown_secs, max_concurrent_branches, max_turns, branch_max_turns, context_window, tick_interval_secs, worker_timeout_secs, and others). Apply the same validation to all u64-to-i64 casts:
Suggested fix
if let Some(threshold) = projects.disk_usage_warning_threshold {
- table["disk_usage_warning_threshold"] = toml_edit::value(threshold as i64);
+ table["disk_usage_warning_threshold"] =
+ toml_edit::value(i64::try_from(threshold).map_err(|_| StatusCode::BAD_REQUEST)?);
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/config.rs` around lines 821 - 823, The code casts several u64 config
fields (e.g., projects.disk_usage_warning_threshold, rate_limit_cooldown_secs,
max_concurrent_branches, max_turns, branch_max_turns, context_window,
tick_interval_secs, worker_timeout_secs) directly to i64 which will wrap for
values > i64::MAX; instead validate each u64 before writing TOML by checking it
fits in i64 (use i64::try_from or threshold <= i64::MAX) and reject or return an
error for values larger than i64::MAX, and only then convert and assign to
toml_edit::value(...) — apply this same check-and-convert pattern to every
u64-to-i64 cast in src/api/config.rs where those config fields are written.
…paths into sandbox allowlist Two improvements: 1. Branch detection: repos now track current_branch separately from default_branch. The UI shows the actual checked-out branch (with an accent badge when it differs from default). Scan/rescan refreshes current_branch on existing repos instead of skipping them. 2. Sandbox integration: project root paths are automatically injected into the sandbox allowlist so workers can access project directories outside the workspace. Refresh happens at startup, agent creation, and after project create/delete/scan. The shell/exec/file tools now check all allowed paths (workspace + user writable_paths + project paths) instead of just the workspace.
# Conflicts: # src/prompts/text.rs # src/tools.rs
- Sanitize user-supplied paths to prevent directory traversal in repo/worktree creation - Validate cross-resource ownership (repo/worktree belongs to project) in all endpoints - Scrub embedded credentials from git remote URLs before storing - Render user-editable fields (description, tags) as code in prompt template - Fix worktree/project_id mismatch in spawn_worker directory resolution - Don't delete worktree DB record when git worktree remove fails - Use symlink_metadata in dir_size to avoid following symlinks - Use async IO (tokio::fs) instead of blocking std::fs in tool disk_usage - Fix race condition in worker_runs project linking with retry loop - Clamp u64-to-i64 cast for disk_usage_warning_threshold - Reset worktree dialog state when reopened for different repo - Add aria-labels to delete buttons for accessibility - Fix placeholder token syntax in config UI
Summary
What's included
Phase 1 — Data Layer
projects,project_repos,project_worktreestables +worker_runsproject columnsProjectStorewith full CRUD for projects, repos, and worktreesdiscover_repos,list_worktrees,create_worktree,remove_worktreeAgentDepsandApiState(all 5 construction sites updated)Phase 2 — Prompt Injection
projects_context.md.j2template rendering project/repo/worktree infoproject_manage_description.md.j2tool description templaterender_projects_context()method + data structs onPromptEngineproject_contextparameter inrender_channel_prompt_with_links()build_project_context()method onChannelPhase 3 — Tool Integration
project_managetool with 7 actions:list,create,scan,add_repo,create_worktree,remove_worktree,disk_usagespawn_workerextended withproject_id/worktree_idargs + directory resolution from project worktreeslog_worker_project_link()for worker-project associationPhase 4 — REST API
server.rsPhase 5 — Dashboard UI
client.tsAgentTabs(between Workers and Tasks)/agents/$agentId/projectsAgentProjects.tsx(~650 lines): project list (card grid with icon/name/tags/status), project detail (repos, worktrees, expandable disk usage), create project dialog, create worktree dialog, delete confirmationsDelivery gates
just preflight— passedjust gate-pr— passed (433 tests, zero clippy warnings, formatting clean)bunx vite buildNote
End-to-end implementation of workspace layout awareness. Agents can now manage developer projects, repositories, and worktrees as first-class entities. All five phases complete: SQLite schema, prompt injection templates, agent tools, REST API, and dashboard UI. All tests passing and delivery gates green.
Written by Tembo for commit aedf1bd. This will update automatically on new commits.