
feat: implement 11 feature services + queue/steer follow-up behavior#1

Merged
Josh-wt merged 3 commits into main from claude/understand-codebase-9Fmym
Apr 10, 2026

Conversation


@Josh-wt Josh-wt commented Apr 10, 2026

Summary

  • 9 new server services (CostTracking, AuditLog, CI/CD Integration, ProviderRouter, Pipeline, Workflow, TaskDecomposition, ProjectMemory, Presence) with full SQLite persistence via migration 020
  • RPC endpoints in ws.ts for all new services; contract schemas, client stores, and a Features settings panel on the web
  • Queue/Steer follow-up behavior (inspired by upstream PR pingdotgg/t3code#1479, "feat(web): add configurable queue/steer follow-up behavior"): hold messages while a turn is running and auto-dispatch them when it settles, or send them immediately as real-time guidance

Feature Details

9 New Services (Server + Client)

  • CostTracking: per-turn token usage and cost recording with budget alerts
  • AuditLog: structured audit trail for all significant actions
  • CIIntegration: GitHub Actions / CI pipeline monitoring and auto-fix policies
  • ProviderRouter: rule-based routing of tasks to specific AI providers
  • Pipeline: multi-stage AI task chains with dependency ordering
  • Workflow: reusable templates (e.g. "Implement & Test") executed as pipelines
  • TaskDecomposition: break a prompt into a tree of subtasks
  • ProjectMemory: full-text-searchable project knowledge store
  • Presence: session sharing with participant tracking

Queue/Steer Follow-up Behavior

  • Steer mode (default): messages sent while AI is running go immediately — current behavior unchanged
  • Queue mode: messages typed while a turn is running are held client-side and auto-dispatched when the turn settles
  • Cmd/Ctrl+Shift+Enter inverts the active mode for one message
  • ComposerQueuedFollowUpsPanel shows queued items with per-item remove buttons
  • Queue clears on thread switch; failed dispatches are re-queued
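The decision logic described above can be sketched as a small pure function. This is a hypothetical simplification (the PR implements the behavior inside ChatView, not as a standalone helper); `invertOnce` models the Cmd/Ctrl+Shift+Enter one-off override:

```typescript
// Hypothetical sketch of the queue/steer decision described above.
type FollowUpBehavior = "steer" | "queue";
type TurnPhase = "running" | "settled";

function shouldQueue(
  behavior: FollowUpBehavior,
  phase: TurnPhase,
  invertOnce: boolean, // true when Cmd/Ctrl+Shift+Enter was held
): boolean {
  // Idle threads always send immediately; queuing only makes sense mid-turn.
  if (phase !== "running") return false;
  const effective: FollowUpBehavior = invertOnce
    ? (behavior === "queue" ? "steer" : "queue")
    : behavior;
  return effective === "queue";
}
```

Note that gating everything on `phase === "running"` means a forced queue on an already-idle thread falls back to an immediate send instead of enqueueing a message nothing will dispatch.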

Test plan

  • Server starts without errors; migration 020 runs cleanly
  • All 9 service RPC endpoints respond correctly (cost summary, audit query, pipeline create/execute, etc.)
  • Features settings panel loads at /settings/features
  • In queue mode, messages sent while AI is running appear in the queue panel and dispatch automatically on turn completion
  • Cmd/Ctrl+Shift+Enter correctly bypasses queue (in queue mode) or force-queues (in steer mode)
  • Queue clears when switching threads
  • bun fmt, bun lint, bun typecheck (web) all pass

https://claude.ai/code/session_01XF5adFusgTx5Fb1Qhc8MeC


Summary by cubic

Adds nine new feature services with SQLite persistence and WS RPC, a queue/steer follow‑up option in chat, a Features settings panel, and support for branching a thread from a checkpoint. Also hardens service queries and presence/memory handling.

  • New Features

    • Server: CostTracking, AuditLog, CI Integration, ProviderRouter, Pipeline, Workflow, TaskDecomposition, ProjectMemory, Presence; persisted via migration 020 and exposed over WS RPC.
    • Web: Client stores for each service and a new Features panel at /settings/features.
    • Chat composer: Queue/steer follow‑ups; steer (default) sends immediately, queue holds until the current turn settles; Cmd/Ctrl+Shift+Enter inverts once; queued items panel with remove; queue clears on thread switch and failed sends re‑queue.
    • Orchestration: Add thread.branch-from-checkpoint command/event to clone a thread at a checkpoint.
    • Contracts/RPC: New schemas for costs, audit, CI, routing, pipelines, workflows, tasks, memory, presence; orchestration and WS RPC expanded accordingly.
  • Bug Fixes

    • Parameterized SQL across feature services and added pagination guards to prevent unbounded queries.
    • Refined presence participant targeting and improved project memory handling to reduce misrouting and query issues.

Written for commit 55d5017. Summary will update on new commits.

Summary by CodeRabbit

  • New Features
    • Cost tracking with budgets, alerts and summaries
    • Audit logging with queries and live events
    • CI/CD integration: run history, reruns, feedback policies
    • Provider routing with health, rules and decisions
    • Pipelines & workflows: define, execute, stream executions
    • Task decomposition: create/manage task trees and events
    • Project memory: add, search, index, and FTS-backed results
    • Presence & session sharing with live participant streams
    • Web UI: Features settings panel, stores, and queued follow-up composer behavior

Server:
- Add CostTracking, AuditLog, CI, ProviderRouter, TaskDecomposition,
  ProjectMemory, Presence, Pipeline, and Workflow services with full
  SQLite persistence (migration 020)
- Wire all services into server.ts using Layer.mergeAll / provideMerge
  to stay within TypeScript's 20-arg pipe limit
- Add comprehensive RPC handlers in ws.ts for all new services
- Add mock stubs in server.test.ts to satisfy test layer requirements

Client:
- Add contract schemas and types for all 9 feature domains
- Add client-side stores (costStore, auditStore, ciStore, etc.)
- Add Features settings panel (FeaturesPanels.tsx + route)
- Extend RPC client (wsRpcClient.ts) with new service methods

Queue/Steer follow-up behavior (inspired by upstream PR pingdotgg#1479):
- Add followUpBehavior: "steer" | "queue" setting to AppSettings
- Queue mode holds follow-up messages while a turn is running and
  auto-dispatches them when the turn settles (transition-based effect)
- Steer mode (default) sends messages immediately as before
- Cmd/Ctrl+Shift+Enter inverts behavior one-off in either mode
- ComposerQueuedFollowUpsPanel shows queued items with remove buttons
- Queue clears on thread switch; failed dispatches re-queue

https://claude.ai/code/session_01XF5adFusgTx5Fb1Qhc8MeC

coderabbitai bot commented Apr 10, 2026

Caution

Review failed

The pull request is closed.

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0b2b85ee-b948-4e88-ba93-a9a2fa6e4807

📥 Commits

Reviewing files that changed from the base of the PR and between 64c26c1 and 55d5017.

📒 Files selected for processing (5)
  • apps/server/src/audit/Services/AuditLogService.ts
  • apps/server/src/cost/Services/CostTrackingService.ts
  • apps/server/src/memory/Services/ProjectMemoryService.ts
  • apps/server/src/presence/Services/PresenceService.ts
  • packages/contracts/src/presence.ts

📝 Walkthrough

Walkthrough

Adds nine backend feature services with SQL persistence and PubSub streams, a new DB migration, orchestration checkpoint branching, RPC/WS wiring and contracts, plus web client stores, UI components, and a settings route for the new features.

Changes

Backend Service Implementations
  Files: apps/server/src/audit/Services/AuditLogService.ts, apps/server/src/ci/Services/CIIntegrationService.ts, apps/server/src/cost/Services/CostTrackingService.ts, apps/server/src/memory/Services/ProjectMemoryService.ts, apps/server/src/pipeline/Services/PipelineService.ts, apps/server/src/presence/Services/PresenceService.ts, apps/server/src/routing/Services/ProviderRouterService.ts, apps/server/src/task/Services/TaskDecompositionService.ts, apps/server/src/workflow/Services/WorkflowService.ts
  Nine new service modules. Each defines a ServiceShape, a Service tag, and a Live layer; implements SQL persistence, event publishing via an unbounded PubSub, and Effect-based control flow (mostly Effect.gen and Effect.orDie); and exposes event streams via Stream.fromPubSub.

Database Migration
  Files: apps/server/src/persistence/Migrations.ts, apps/server/src/persistence/Migrations/020_NewFeatureTables.ts
  Added migration entry ID 20 and a new migration that creates tables/indexes/triggers for cost tracking, audit_log, CI runs/policies, pipelines/executions, workflow_templates, task_trees, memory_entries plus a memory_fts FTS5 virtual table with triggers, session_shares, and routing_rules.

Orchestration: command + projector + schema
  Files: apps/server/src/orchestration/decider.ts, apps/server/src/orchestration/projector.ts, apps/server/src/orchestration/Schemas.ts, packages/contracts/src/orchestration.ts
  Introduces thread.branch-from-checkpoint command handling that emits thread.created and thread.branched-from-checkpoint events; the projector applies the branched-from-checkpoint event by trimming source checkpoints/messages/activities up to the checkpoint turn count and patching the new thread. Added the corresponding payload schema export.

Server integration & RPC wiring
  Files: apps/server/src/server.ts, apps/server/src/ws.ts, apps/server/src/server.test.ts
  Composed the new feature layers into the server runtime (IndependentFeaturesLive, PipelineAndWorkflowLive), cast the runServer launch effect to avoid exposing SqlClient in types, and wired many new WS RPC handlers into WsRpcLayer. Tests updated to provide mock implementations for the new services.

Contracts & RPC method definitions
  Files: packages/contracts/src/*.ts (e.g. auditLog.ts, ciIntegration.ts, costTracking.ts, projectMemory.ts, pipelines.ts, presence.ts, routing.ts, taskDecomposition.ts, workflows.ts), packages/contracts/src/rpc.ts, packages/contracts/src/index.ts
  Added nine new contract modules defining schema-driven types, inputs/results, stream event unions, and tagged errors. Extended WS_METHODS, added many Ws*Rpc constants and subscriptions in rpc.ts, and re-exported the new contract modules from index.ts.

Web client stores & RPC client
  Files: apps/web/src/auditStore.ts, apps/web/src/ciStore.ts, apps/web/src/costStore.ts, apps/web/src/memoryStore.ts, apps/web/src/pipelineStore.ts, apps/web/src/presenceStore.ts, apps/web/src/routingStore.ts, apps/web/src/taskStore.ts, apps/web/src/workflowStore.ts, apps/web/src/wsRpcClient.ts
  Added nine Zustand stores with async actions and subscribe...Events() helpers that register WS event handlers. Extended WsRpcClient with new namespaces (cost, audit, ci, routing, pipeline, workflow, task, memory, presence) exposing typed unary and streaming methods.

Web UI: Features panels, composer, settings route
  Files: apps/web/src/components/settings/FeaturesPanels.tsx, apps/web/src/components/chat/ComposerQueuedFollowUpsPanel.tsx, apps/web/src/components/settings/SettingsSidebarNav.tsx, apps/web/src/components/ChatView.tsx, apps/web/src/routeTree.gen.ts, apps/web/src/routes/settings.features.tsx, apps/web/src/appSettings.ts
  Added a Features settings panel and route (/settings/features), a queued follow-ups composer panel, integrated queued follow-up behavior into ChatView (queueing, auto-dispatch, keyboard shortcuts), and added followUpBehavior to AppSettingsSchema. The sidebar nav now includes a Features entry.

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Poem

🐇 A hop, a patch, new tables in the ground,
New streams that sing, new services gather round.
I nibble logs and costs with happy cheer,
Pipelines and memories all appear —
A rabbit's hop brings features far and near!

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 74.07%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2)

  • Title check ✅ Passed: the title 'feat: implement 11 feature services + queue/steer follow-up behavior' clearly and concisely summarizes the main changes: nine new feature services and the queue/steer follow-up behavior enhancement.
  • Description check ✅ Passed: the PR description is comprehensive and well-structured, covering a clear summary of all major changes, a detailed feature breakdown, test plan items, and relevant context.





@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 64c26c1ce8

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +151 to +153
if (input.projectId) conditions.push(`project_id = '${input.projectId}'`);
if (input.threadId) conditions.push(`thread_id = '${input.threadId}'`);
const whereClause = `WHERE ${conditions.join(" AND ")}`;

P1 Badge Parameterize cost summary filters

This query path builds SQL conditions by interpolating RPC input directly into string fragments and then executes them with sql.unsafe, so crafted values in fields like projectId or threadId can break query structure (or worse) instead of being treated as data. Because cost.getSummary is externally callable, this should use bound parameters/tagged SQL fragments throughout.

Useful? React with 👍 / 👎.
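A minimal sketch of the suggested fix: collect conditions and bound parameters separately so user input always travels as data. The helper name and `?` placeholder style are assumptions for illustration; the actual services would use tagged SQL fragments via the Effect SQL client.

```typescript
// Hypothetical helper: never splice RPC input into the SQL text itself.
interface CostSummaryFilter {
  projectId?: string;
  threadId?: string;
}

function buildCostSummaryWhere(input: CostSummaryFilter): {
  whereClause: string;
  params: string[];
} {
  const conditions: string[] = [];
  const params: string[] = [];
  if (input.projectId) {
    conditions.push("project_id = ?");
    params.push(input.projectId);
  }
  if (input.threadId) {
    conditions.push("thread_id = ?");
    params.push(input.threadId);
  }
  return {
    whereClause: conditions.length > 0 ? `WHERE ${conditions.join(" AND ")}` : "",
    params,
  };
}
```

A crafted `projectId` such as `p' OR '1'='1` then ends up in `params`, where the driver binds it as a literal value rather than query structure.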

Comment on lines +840 to +842
stages: JSON.parse(
(def as unknown as { stagesJson: string }).stagesJson ?? "[]",
) as import("@t3tools/contracts").PipelineStage[],

P1 Badge Return persisted stages in pipeline list results

pipeline.list reads definitions from PipelineService.list, but this mapper tries to parse a non-existent stagesJson field and falls back to [], so every listed pipeline loses its stage graph. Any client relying on list data for editing/execution previews will see empty pipelines and behave incorrectly.


Comment on lines +92 to +94
for (const [participantId] of threadMap) {
threadMap.delete(participantId);
yield* PubSub.publish(pubsub, {

P1 Badge Delete only the caller when handling presence leave

This leave implementation iterates the entire thread map and deletes every participant, so one user leaving a shared thread forces all other participants to be removed and emits presence.left for everyone. In multi-user sessions this collapses presence state globally instead of removing a single participant.

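The scoping this comment asks for can be sketched with plain map operations. The types and names below are hypothetical simplifications of the service's in-memory state:

```typescript
// Hypothetical presence state: threadId -> participantId -> record.
interface ParticipantRecord {
  participantId: string;
  lastSeenAt: number;
}
type PresenceState = Map<string, Map<string, ParticipantRecord>>;

// Remove only the caller; leave every other participant untouched.
function leave(
  state: PresenceState,
  threadId: string,
  participantId: string,
): ParticipantRecord | undefined {
  const threadMap = state.get(threadId);
  if (!threadMap) return undefined;
  const removed = threadMap.get(participantId);
  if (removed !== undefined) threadMap.delete(participantId);
  // The caller would publish a single presence.left event for this record.
  return removed;
}
```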

Comment on lines +673 to +675
aggregateKind: "thread",
aggregateId: command.sourceThreadId,
occurredAt: command.createdAt,

P2 Badge Emit branched thread creation on the new thread stream

The thread.created event for branch-from-checkpoint is written with aggregateId set to the source thread even though the payload creates newThreadId. Since event stream identity is keyed by aggregateId, this puts the creation event in the wrong stream, skewing per-thread stream history/versioning for consumers that read by aggregate.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

🟠 Major comments (20)
apps/web/src/components/ChatView.tsx-2384-2413 (1)

2384-2413: ⚠️ Potential issue | 🟠 Major

Route queued auto-dispatch through the same send path as manual sends.

Line 2393 removes the head item before the dispatch path is even confirmed, and Lines 2398-2408 bypass the normal send flow entirely. That means queued follow-ups skip sendInFlightRef, beginLocalDispatch, persistThreadSettingsForNextTurn, and formatOutgoingPrompt, so they can be dropped, double-sent, or behave differently from an immediate send. The catch also requeues into component state rather than a thread-scoped queue, so a failure after navigation can resurrect the item in the wrong thread.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/ChatView.tsx` around lines 2384 - 2413, The queued
follow-up auto-dispatch removes the head and calls orchestration.dispatchCommand
directly (using queuedFollowUpsRef, setQueuedFollowUps, readNativeApi,
api.orchestration.dispatchCommand, newMessageId) which bypasses the normal send
path (sendInFlightRef, beginLocalDispatch, persistThreadSettingsForNextTurn,
formatOutgoingPrompt) and requeues into component state on failure; instead
route the queued item through the same send flow used for manual sends: do not
mutate setQueuedFollowUps until the send path is invoked successfully, call the
existing send/beginLocalDispatch routine (or extract a send helper used by
manual sends) with the thread from activeThreadRef and the
queuedFollowUpsRef.current[0] payload so that sendInFlightRef,
persistThreadSettingsForNextTurn and formatOutgoingPrompt are used, and on send
failure requeue into the thread-scoped queue (not component state) so navigation
won't resurrect the item in the wrong thread.
apps/web/src/components/ChatView.tsx-3023-3029 (1)

3023-3029: ⚠️ Potential issue | 🟠 Major

Don't queue one-off overrides when the thread is already idle.

Line 3027 lets forceQueue bypass the phase === "running" check. In steer mode, Cmd/Ctrl+Shift+Enter on an already-settled thread will enqueue the message, but nothing will auto-dispatch it until some unrelated future turn settles. Restrict the override to active turns, or fall back to immediate send when the thread is idle.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/ChatView.tsx` around lines 3023 - 3029, The current
shouldQueue expression allows opts?.forceQueue to bypass the phase === "running"
check so a forced queue can enqueue on an already-idle thread; update the
condition so forced queuing only applies when the thread is active. Modify the
shouldQueue logic (the const shouldQueue near the composer logic) to require
phase === "running" for both the followUpBehavior === "queue" branch and the
opts?.forceQueue branch (or explicitly add && phase === "running" to the outer
condition), so that when the thread is idle the override falls back to immediate
send instead of enqueuing.
apps/server/src/orchestration/decider.ts-671-691 (1)

671-691: ⚠️ Potential issue | 🟠 Major

Incorrect aggregateId for thread.created event.

The branchedThreadEvent uses command.sourceThreadId as its aggregateId (Line 674), but this event represents the creation of the new thread (command.newThreadId). This is inconsistent with the regular thread.create handler (Line 152), where aggregateId matches the thread being created.

🐛 Proposed fix
       const branchedThreadEvent: Omit<OrchestrationEvent, "sequence"> = {
         ...withEventBase({
           aggregateKind: "thread",
-          aggregateId: command.sourceThreadId,
+          aggregateId: command.newThreadId,
           occurredAt: command.createdAt,
           commandId: command.commandId,
         }),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/orchestration/decider.ts` around lines 671 - 691, The
branchedThreadEvent sets aggregateId to command.sourceThreadId but this event
represents creation of the new thread; update branchedThreadEvent (created via
withEventBase and used to emit the "thread.created" payload) to use
command.newThreadId as aggregateId so aggregateId matches the new thread
(consistent with the thread.create handler) and ensure any other references
inside that event payload remain unchanged.
apps/server/src/ws.ts-885-917 (1)

885-917: ⚠️ Potential issue | 🟠 Major

Use Effect.fail instead of Effect.die for missing execution.

Effect.die terminates the fiber with an unrecoverable defect, which is inappropriate for a user-recoverable "not found" scenario. The RPC client will receive an opaque crash instead of a structured error it can handle gracefully.

🐛 Proposed fix: Return a typed error
           Effect.flatMap((exec) =>
             exec
               ? Effect.succeed({
                   // ... existing mapping ...
                 })
-              : Effect.die(new Error("Execution not found")),
+              : Effect.fail(new Error("Execution not found")),
           ),

Consider defining a dedicated PipelineExecutionNotFoundError schema for typed error handling on the client.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/ws.ts` around lines 885 - 917, Change the unrecoverable
defect to a recoverable typed failure: in the WS_METHODS.pipelineGetExecution
handler (around pipelineService.getExecution(...).pipe(Effect.flatMap(...)))
replace the Effect.die(new Error("Execution not found")) with Effect.fail(...)
returning a typed error value (e.g., a PipelineExecutionNotFoundError object or
error tag) so the RPC client receives a structured, recoverable error; ensure
the error type/schema is defined and used consistently so the RPC layer can
pattern-match it on the client.
apps/server/src/orchestration/projector.ts-667-714 (1)

667-714: ⚠️ Potential issue | 🟠 Major

Add proposedPlans retention to branched-from-checkpoint handler, consistent with thread.reverted.

The thread.reverted handler retains proposedPlans filtered by retainedTurnIds (lines 604-607), but the thread.branched-from-checkpoint handler skips proposedPlans entirely despite using the same retainedTurnIds logic for messages and activities. Both operations filter data by checkpoint turn count and should retain proposed plans up to that point.

Add the following to the branching handler's thread update:

const retainedProposedPlans = retainThreadProposedPlansAfterRevert(
  sourceThread.proposedPlans,
  retainedTurnIds,
).slice(-200);

And include proposedPlans: retainedProposedPlans in the thread update object.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/orchestration/projector.ts` around lines 667 - 714, The
branch-from-checkpoint handler is missing retention of proposedPlans like
thread.reverted does; compute retainedProposedPlans by calling
retainThreadProposedPlansAfterRevert(sourceThread.proposedPlans,
retainedTurnIds).slice(-200) and add proposedPlans: retainedProposedPlans to the
object passed into updateThread (inside the "thread.branched-from-checkpoint"
Effect.gen), using the same retainedTurnIds derived from retainedCheckpoints so
proposed plans are filtered consistently.
apps/web/src/presenceStore.ts-67-70 (1)

67-70: ⚠️ Potential issue | 🟠 Major

Don't clear the whole thread on one leave().

This drops participantsByThread[input.threadId] entirely. If other participants are still present, the UI shows the thread as empty until the next refresh or stream event. Remove only the leaving participant, or wait for presence.left before mutating local state.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/presenceStore.ts` around lines 67 - 70, The current set call is
removing the entire participantsByThread[input.threadId] entry (const {
[input.threadId]: _removed, ...rest } = s.participantsByThread), which clears
the whole thread on a single leave; instead modify leave() so you only remove
the leaving participant from the thread's participant list (e.g., find
participantsByThread[input.threadId] and filter/delete the specific
participantId) and only delete the thread key if the resulting list is empty, or
alternatively defer local state changes until handling presence.left; update
logic that references participantsByThread, input.threadId and the leave()
handler accordingly.
apps/server/src/workflow/Services/WorkflowService.ts-186-205 (1)

186-205: ⚠️ Potential issue | 🟠 Major

Reject missing required workflow variables before execution.

substituteVariables() only fills provided values plus non-null defaults. Required variables with defaultValue: null are left as raw {{VAR_NAME}} tokens, so execute() can build pipeline prompts with unresolved placeholders instead of failing fast. Validate required variables before stage creation.

Also applies to: 327-330

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/workflow/Services/WorkflowService.ts` around lines 186 - 205,
substituteVariables currently leaves placeholders for required WorkflowVariable
entries with defaultValue === null unresolved; update logic to validate and
reject missing required variables before execution by scanning defaults and
provided variables, e.g. in substituteVariables (or a new
validateRequiredVariables helper) compute required names where defaultValue ===
null and not present in variables, and throw a clear Error listing the missing
variable names so execute()/stage creation cannot build prompts with raw
{{VAR_NAME}} tokens; also apply the same validation at the other usage site
referenced (lines ~327-330) to ensure required vars are checked before stage
creation.
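The fail-fast validation this comment asks for can be sketched as follows. The WorkflowVariable shape is inferred from the comment (`defaultValue: null` marks a required variable); helper names are hypothetical:

```typescript
interface WorkflowVariable {
  name: string;
  defaultValue: string | null; // null means required, no fallback
}

// Return the names of required variables the caller did not provide.
function findMissingRequired(
  declared: WorkflowVariable[],
  provided: Record<string, string>,
): string[] {
  return declared
    .filter((v) => v.defaultValue === null && !(v.name in provided))
    .map((v) => v.name);
}

// execute() would call this before building any pipeline prompts, so no
// raw {{VAR_NAME}} token can survive into a stage prompt.
function assertVariablesProvided(
  declared: WorkflowVariable[],
  provided: Record<string, string>,
): void {
  const missing = findMissingRequired(declared, provided);
  if (missing.length > 0) {
    throw new Error(`Missing required workflow variables: ${missing.join(", ")}`);
  }
}
```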
apps/server/src/workflow/Services/WorkflowService.ts-304-310 (1)

304-310: ⚠️ Potential issue | 🟠 Major

Deletion should fail when nothing was removed.

This silently succeeds for built-in or missing templates. The client can remove the template locally even though the row still exists and will come back on the next refetch. Check the affected row count and surface a typed error when deletion is rejected.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/workflow/Services/WorkflowService.ts` around lines 304 - 310,
The deleteTemplate implementation currently ignores the SQL result so deletions
that affect 0 rows (built-in or missing templates) silently succeed; change
deleteTemplate to capture the SQL execution result (the value returned by the
sql`` call), inspect its affected-row count (e.g.,
rowCount/numUpdatedRows/changes field returned by your DB driver), and if the
count is 0 return a typed failure (e.g., Effect.fail(new
TemplateNotDeletedError(templateId)) or a NotFound/DeletionRejected error)
instead of proceeding to orDie; keep the WHERE id = input.templateId AND
is_built_in = 0 check and surface the typed error when no rows were removed so
clients can react accordingly.
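A sketch of the affected-row check. The `changes` field name varies by driver (bun:sqlite and better-sqlite3 both expose a `changes` count, but treat that as an assumption here), and the error type is a stand-in for whatever tagged error the service defines:

```typescript
interface DeleteOutcome {
  changes: number; // rows affected by the DELETE
}

// Turn a zero-row deletion into an explicit, catchable failure instead of
// a silent success that the client cannot distinguish from a real delete.
function assertTemplateDeleted(result: DeleteOutcome, templateId: string): void {
  if (result.changes === 0) {
    throw new Error(
      `Template ${templateId} was not deleted: it is built-in or does not exist`,
    );
  }
}
```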
apps/server/src/presence/Services/PresenceService.ts-87-120 (1)

87-120: ⚠️ Potential issue | 🟠 Major

Scope leave and cursor updates to a single participant.

Both methods iterate the entire thread map. A single leave removes everybody from the session, and one cursor update overwrites every participant’s cursor state for that thread.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/presence/Services/PresenceService.ts` around lines 87 - 120,
The leave and updateCursor implementations currently iterate the entire
threadMap and affect every participant; change both to operate only on the
targeted participantId from the input: in leave (function leave) get threadMap =
presenceMap.get(input.threadId), if not present return, then delete only
threadMap.delete(input.participantId) and publish a single PubSub.publish with
type "presence.left" for that participantId and threadId; in updateCursor
(function updateCursor) similarly lookup threadMap, retrieve the single entry
with threadMap.get(input.participantId), if found update only that participant's
record (cursor and lastSeenAt), threadMap.set(input.participantId, updated) and
publish one "presence.cursor.updated" event for that participantId and threadId;
ensure you handle the case where the participant entry is missing by returning
silently.
apps/web/src/components/settings/FeaturesPanels.tsx-255-260 (1)

255-260: ⚠️ Potential issue | 🟠 Major

Fetch CI state on mount instead of relying only on the stream.

This panel only opens the live subscription. On a cold load, runsByProject stays empty until some future CI event arrives, so existing runs never render when the user first opens the page.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/settings/FeaturesPanels.tsx` around lines 255 - 260,
CIPanel currently only subscribes to CI events (subscribeCIEvents) so
runsByProject stays empty on cold load; update CIPanel to also fetch the current
CI state on mount by calling the store's load/fetch method (e.g., a
fetchCIState/fetchRuns/init method exposed by useCIStore) before returning the
subscription, and keep the existing return subscribeCIEvents() cleanup; ensure
you reference CIPanel, useCIStore, runsByProject, and subscribeCIEvents when
adding the fetch call.
apps/web/src/components/settings/FeaturesPanels.tsx-559-566 (1)

559-566: ⚠️ Potential issue | 🟠 Major

Memory entries are never loaded here.

The useEffect is a no-op, and this panel never calls a list/search action. Direct navigation to the Features page will therefore show an empty memory view even when persisted entries already exist.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/settings/FeaturesPanels.tsx` around lines 559 - 566,
MemoryPanel never triggers loading of persisted entries because the useEffect is
a no-op; update MemoryPanel to call the memory store's load/list action on mount
(the action exposed by useMemoryStore — e.g., listEntries, loadMemoryEntries or
similar) so entriesByProject and searchResults are populated; specifically,
inside the existing useEffect in MemoryPanel invoke the store's fetch/list
method and handle any returned promise/errors (and optionally set loading state
via isLoading) so direct navigation shows persisted entries.
apps/server/src/pipeline/Services/PipelineService.ts-112-129 (1)

112-129: ⚠️ Potential issue | 🟠 Major

Reject cycles and unknown dependencies before execution.

This DFS only tracks visited, so a cycle like A -> B -> A is accepted, and missing dependency IDs are silently ignored. That lets impossible pipelines execute in an arbitrary order instead of failing fast when the definition is invalid.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/pipeline/Services/PipelineService.ts` around lines 112 - 129,
The topologicalSort function currently only uses a single visited set so it
silently ignores missing dependencies and fails to detect cycles; update
topologicalSort to validate dependencies and reject invalid graphs by (1)
checking stageMap.has(depId) inside visit and throwing a clear error including
the current stage.id and missing depId when a dependency is unknown, and (2)
adding a second "inProgress" (or tempMark) Set used during DFS so if visit
encounters a depId already inProgress it throws a cycle error that names the
involved stage IDs (detect A->B->A). Keep the existing result
push-after-visiting-deps logic but ensure errors are thrown before returning the
sorted array.
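A DFS with a second in-progress ("gray") set makes both failure modes explicit. A minimal sketch, assuming a simplified stage shape (the real `PipelineStage` type and error wording will differ):

```typescript
interface Stage {
  id: string;
  dependsOn: string[];
}

// Topologically sort stages, rejecting unknown dependencies and cycles.
function topologicalSort(stages: Stage[]): Stage[] {
  const stageMap = new Map(stages.map((s) => [s.id, s]));
  const visited = new Set<string>();
  const inProgress = new Set<string>(); // gray set for cycle detection
  const result: Stage[] = [];

  const visit = (stage: Stage): void => {
    if (visited.has(stage.id)) return;
    if (inProgress.has(stage.id)) {
      throw new Error(`Cycle detected involving stage "${stage.id}"`);
    }
    inProgress.add(stage.id);
    for (const depId of stage.dependsOn) {
      const dep = stageMap.get(depId);
      if (!dep) {
        throw new Error(`Stage "${stage.id}" depends on unknown stage "${depId}"`);
      }
      visit(dep);
    }
    inProgress.delete(stage.id);
    visited.add(stage.id);
    result.push(stage); // push after deps, so dependencies sort first
  };

  for (const stage of stages) visit(stage);
  return result;
}
```

The only behavioral change from the current code is the two throws; the deps-first ordering is preserved.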
packages/contracts/src/costTracking.ts-70-80 (1)

70-80: ⚠️ Potential issue | 🟠 Major

Tighten the budget invariants at the contract boundary.

These shapes currently allow limitCents = 0, periodDays = 0, and threshold values above 100. That admits budgets the service cannot handle cleanly: a zero limit feeds divide-by-zero alert math, and a zero-day period has no valid rollover semantics. Please make the limit/period strictly positive and cap alertThresholdPercent to 0..100.

Also applies to: 101-108

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/contracts/src/costTracking.ts` around lines 70 - 80, the CostBudget
contract allows limitCents and periodDays to be zero and alertThresholdPercent
>100; update the schema for CostBudget (and the similar shape at lines 101-108)
so limitCents and periodDays are strictly positive and alertThresholdPercent is
constrained to 0..100: change the limitCents field from a non-negative/CostCents
type to a strictly positive variant or add a refinement (e.g.,
refine/withConstraint/greaterThan(0)) on limitCents and periodDays to enforce >
0, and replace the alertThresholdPercent schema (currently
NonNegativeInt.pipe(...)) with a ranged/validated schema that enforces 0 <=
value <= 100 (or add a refinement that clamps/throws outside that range) while
keeping the default of 80 via withDecodingDefault.
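Whatever refinement combinators the schema library exposes, the invariants themselves are simple predicates. A plain-TypeScript sketch of the checks the contract should enforce (field names follow the finding; this is not the Effect Schema encoding itself):

```typescript
interface CostBudgetInput {
  limitCents: number;
  periodDays: number;
  alertThresholdPercent?: number; // contract default is 80
}

// Returns a list of violations; empty means the budget is valid.
function validateBudget(input: CostBudgetInput): string[] {
  const errors: string[] = [];
  if (!Number.isInteger(input.limitCents) || input.limitCents <= 0) {
    errors.push("limitCents must be a positive integer");
  }
  if (!Number.isInteger(input.periodDays) || input.periodDays <= 0) {
    errors.push("periodDays must be a positive integer");
  }
  const threshold = input.alertThresholdPercent ?? 80;
  if (threshold < 0 || threshold > 100) {
    errors.push("alertThresholdPercent must be between 0 and 100");
  }
  return errors;
}
```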
apps/web/src/components/settings/FeaturesPanels.tsx-377-385 (1)

377-385: ⚠️ Potential issue | 🟠 Major

Pipelines panel has no bootstrap path for existing data.

allPipelines/allExecutions are derived entirely from store state, but this component never loads any initial pipeline data. After a refresh, the panel will stay empty unless some other screen already populated the store or a new event happens while this tab is open.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/settings/FeaturesPanels.tsx` around lines 377 - 385,
PipelinesPanel never bootstraps existing data on mount, so update the component
to call the pipeline store's initial load routine before/when subscribing:
inside the useEffect in PipelinesPanel, invoke a store method (e.g.,
loadInitialPipelines / fetchPipelines / bootstrapPipelines — add one to
usePipelineStore if it doesn't exist) to populate pipelinesByProject and
executions, then call subscribePipelineEvents() and return its unsubscribe; keep
the empty dependency array and surface errors/loading from that initial load via
the existing isLoading/error fields.
apps/server/src/cost/Services/CostTrackingService.ts-107-123 (1)

107-123: ⚠️ Potential issue | 🟠 Major

Budget periods are ignored, and editing a budget wipes its spend.

recordUsage only ever increments current_spend_cents, so periodDays never actually influences alerting. Then setBudget does INSERT OR REPLACE with current_spend_cents = 0 and a fresh created_at, which resets any existing budget whenever it is edited. Period-based budgets will behave like lifetime counters until they are changed, then suddenly drop back to zero.

Also applies to: 201-218

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/cost/Services/CostTrackingService.ts` around lines 107 - 123,
recordUsage and setBudget ignore budget periods and editing a budget wipes spend
because recordUsage always increments current_spend_cents without checking
periodDays and setBudget uses INSERT OR REPLACE which resets current_spend_cents
and created_at; fix by implementing period-aware resets and preserving spend on
edits: in recordUsage (the function that queries cost_budgets and updates
current_spend_cents) detect whether budget.period_days (or a new
period_start/last_reset_at column) indicates a new period and, if so, reset
current_spend_cents to 0 and update period_start/last_reset_at before adding
input.costCents; in setBudget replace INSERT OR REPLACE with INSERT ... ON
CONFLICT(...) DO UPDATE that preserves existing current_spend_cents unless the
incoming periodDays differs or the period has rolled over (in which case reset
appropriately), and add/declare a period_start/last_reset_at column if one does
not exist to make period logic reliable.
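The rollover decision itself reduces to date arithmetic. One way to sketch it, assuming the suggested `period_start` column exists (names here are illustrative, and the in-memory row stands in for the SQL update):

```typescript
// Decide whether a period-based budget should reset before recording new spend.
function periodHasRolledOver(
  periodStartIso: string,
  periodDays: number,
  now: Date = new Date(),
): boolean {
  const periodStart = new Date(periodStartIso);
  const elapsedMs = now.getTime() - periodStart.getTime();
  return elapsedMs >= periodDays * 24 * 60 * 60 * 1000;
}

interface BudgetRow {
  currentSpendCents: number;
  periodDays: number;
  periodStart: string; // ISO timestamp of the current period's start
}

// Period-aware usage recording: reset spend when the period has elapsed,
// otherwise accumulate into the current period.
function recordUsage(budget: BudgetRow, costCents: number, now = new Date()): BudgetRow {
  if (periodHasRolledOver(budget.periodStart, budget.periodDays, now)) {
    return { ...budget, currentSpendCents: costCents, periodStart: now.toISOString() };
  }
  return { ...budget, currentSpendCents: budget.currentSpendCents + costCents };
}
```

In SQL the same logic maps onto an `INSERT ... ON CONFLICT DO UPDATE` that preserves `current_spend_cents` unless the rollover condition holds.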
apps/server/src/presence/Services/PresenceService.ts-68-85 (1)

68-85: ⚠️ Potential issue | 🟠 Major

Honor share visibility and capacity when someone joins.

join() unconditionally inserts a participant into the thread map. It never checks whether the thread is shared, whether the share is public, or whether maxParticipants has already been reached, so private or full sessions can still be joined.
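Each guard should fail fast before the participant map is mutated. A hedged sketch with illustrative types (the real share record and rejection handling will differ):

```typescript
interface SessionShare {
  isPublic: boolean;
  maxParticipants: number;
}

type JoinRejection = "not-shared" | "private" | "full";

// Validate a join attempt before inserting into the participant map.
function checkJoin(
  share: SessionShare | undefined,
  currentParticipants: number,
  isInvited: boolean,
): JoinRejection | "ok" {
  if (!share) return "not-shared"; // thread was never shared
  if (!share.isPublic && !isInvited) return "private";
  if (currentParticipants >= share.maxParticipants) return "full";
  return "ok";
}
```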

apps/web/src/components/settings/FeaturesPanels.tsx-626-633 (1)

626-633: ⚠️ Potential issue | 🟠 Major

Presence can only show future activity, not already-active sessions.

This panel seeds threadIds from local store state and only subscribes to new events. After a reload, any existing shared sessions remain invisible until another presence event happens, so the “Active Sessions” view is stale by default.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/components/settings/FeaturesPanels.tsx` around lines 626 - 633,
PresencePanel currently only subscribes to future presence events via
subscribePresenceEvents and seeds threadIds from the local participantsByThread
store, so existing active sessions after a reload are invisible; fix by loading
the current presence snapshot on mount before subscribing: in PresencePanel
(usePresenceStore, participantsByThread, sharesByThread) call an async
initializer (e.g., fetchPresenceSnapshot or presenceStore.loadInitialState)
inside useEffect, await and merge that initial data into the store, then call
subscribePresenceEvents to stream updates—ensure threadIds computation still
derives from participantsByThread/sharesByThread after the initial load so the
“Active Sessions” view shows already-active sessions immediately.
apps/server/src/pipeline/Services/PipelineService.ts-182-290 (1)

182-290: ⚠️ Potential issue | 🟠 Major

execute() reports successful stage execution without running anything.

Each stage flips from running to completed in the same call, and threadId is never used. Consumers will see completed pipelines even though no stage prompt was dispatched or awaited, which makes the execution status misleading.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/pipeline/Services/PipelineService.ts` around lines 182 - 290,
the execute function currently marks each stage as "completed" immediately
without dispatching or awaiting work; stop flipping status to "completed" inside
execute and instead dispatch the stage work (include the stage's threadId if
present) and persist the "running" state so an external worker can perform the
task and later update the stage result. Concretely: in execute (the
PipelineService.execute loop) remove the immediate assignment that sets
stageExecutions[i].status = "completed" and the subsequent
publishEvent("stage.completed")/UPDATE that follows it; replace that with a
publishEvent (e.g., "stage.dispatched" or "stage.start.requested") that includes
executionId, stage.id, and stage.threadId (or other dispatch metadata) and
persist the stageExecutions with status "running" to pipeline_executions; rely
on the external worker/consumer to publish "stage.completed"/"stage.failed"
events or call an API that updates pipeline_executions to mark
completedAt/error/final status and let the finalization step read those statuses
rather than assuming completion within execute.
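The dispatch-then-settle split can be modeled with a small tracker: `execute()` only records "running" and publishes a dispatch event, and an external worker callback supplies the terminal status later. A simplified in-memory sketch (class and event names are illustrative):

```typescript
type StageStatus = "pending" | "running" | "completed" | "failed";

interface StageExecution {
  stageId: string;
  status: StageStatus;
}

// Minimal sketch: dispatch records "running"; completion arrives later
// via an external callback rather than inside execute() itself.
class ExecutionTracker {
  readonly stages = new Map<string, StageExecution>();

  dispatch(stageId: string, publish: (event: string) => void): void {
    this.stages.set(stageId, { stageId, status: "running" });
    publish(`stage.dispatched:${stageId}`); // a worker picks this up
  }

  // Called by the worker/consumer once the stage actually finished.
  complete(stageId: string, ok: boolean): void {
    const stage = this.stages.get(stageId);
    if (!stage) throw new Error(`Unknown stage ${stageId}`);
    stage.status = ok ? "completed" : "failed";
  }

  isSettled(): boolean {
    return [...this.stages.values()].every(
      (s) => s.status === "completed" || s.status === "failed",
    );
  }
}
```

Finalization then reads the persisted statuses instead of assuming success inside the execute loop.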
apps/web/src/pipelineStore.ts-134-158 (1)

134-158: ⚠️ Potential issue | 🟠 Major

The stream handler is wired to the wrong event contract.

The server publishes stage.* / pipeline.* events, but this code only handles "pipeline.execution.updated" and "pipeline.stage.updated" and expects event.execution / event.stage payloads. As written, every real pipeline event is ignored, so live execution updates never reach the store.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/pipelineStore.ts` around lines 134 - 158, the stream handler in
client.pipeline.onEvent is wired to wrong event names and payload shape; update
the handler to match the server's contract by checking event.type for the server
prefixes (e.g., event.type.startsWith("pipeline.") and
event.type.startsWith("stage.") or the exact "pipeline.*"/"stage.*" names your
server sends) instead of only "pipeline.execution.updated" and
"pipeline.stage.updated", and extract the payload from the correct field (e.g.,
event.data.execution or event.data.stage) rather than
event.execution/event.stage; then call usePipelineStore.setState the same way to
upsert the execution and stage (referencing the existing functions/variables
client.pipeline.onEvent, usePipelineStore.setState, execution/stage, and
executionId) so real pipeline and stage events update the store.
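A prefix-based dispatch keeps the handler aligned with whatever `stage.*` / `pipeline.*` names the server emits. A sketch with an assumed event envelope (the actual payload field names should be checked against the server contract):

```typescript
interface PipelineEvent {
  type: string;
  data: { execution?: { id: string }; stage?: { id: string } };
}

type Handled = "execution" | "stage" | "ignored";

// Route server events by prefix instead of matching two exact names.
function routePipelineEvent(event: PipelineEvent): Handled {
  if (event.type.startsWith("pipeline.") && event.data.execution) {
    return "execution"; // upsert event.data.execution into the store here
  }
  if (event.type.startsWith("stage.") && event.data.stage) {
    return "stage"; // upsert event.data.stage into the matching execution
  }
  return "ignored";
}
```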
packages/contracts/src/rpc.ts-559-563 (1)

559-563: ⚠️ Potential issue | 🟠 Major

Allow getExecution to represent “not found.”

PipelineService.getExecution() returns PipelineExecution | null, but this RPC only accepts PipelineExecution. A miss will decode as a protocol error instead of a clean not-found response.

Suggested contract fix
 export const WsPipelineGetExecutionRpc = Rpc.make(WS_METHODS.pipelineGetExecution, {
   payload: PipelineGetExecutionInput,
-  success: PipelineExecution,
+  success: Schema.NullOr(PipelineExecution),
   error: PipelineError,
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/contracts/src/rpc.ts` around lines 559 - 563, the RPC contract for
WsPipelineGetExecutionRpc currently declares success: PipelineExecution but
PipelineService.getExecution can return null; update the RPC success schema to
allow null (e.g., success: PipelineExecution | null or a nullable wrapper) so a
not-found result decodes as a valid response rather than a protocol error;
ensure Rpc.make(WS_METHODS.pipelineGetExecution, { payload:
PipelineGetExecutionInput, success: <nullable PipelineExecution>, error:
PipelineError }) is updated accordingly and keep
WS_METHODS.pipelineGetExecution, PipelineGetExecutionInput, PipelineExecution
and PipelineError references intact.
🟡 Minor comments (6)
apps/server/src/ws.ts-840-842 (1)

840-842: ⚠️ Potential issue | 🟡 Minor

Unsafe JSON.parse without error handling.

The stagesJson field is cast via as unknown as { stagesJson: string } and parsed without try/catch. If the persisted JSON is malformed, this will throw and crash the RPC handler.

🛡️ Proposed defensive fix
-                stages: JSON.parse(
-                  (def as unknown as { stagesJson: string }).stagesJson ?? "[]",
-                ) as import("@t3tools/contracts").PipelineStage[],
+                stages: (() => {
+                  try {
+                    return JSON.parse(
+                      (def as unknown as { stagesJson: string }).stagesJson ?? "[]",
+                    ) as import("@t3tools/contracts").PipelineStage[];
+                  } catch {
+                    return [];
+                  }
+                })(),

Alternatively, consider using Effect's Schema.decode for validated parsing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/ws.ts` around lines 840 - 842, the JSON.parse call for (def
as unknown as { stagesJson: string }).stagesJson is unguarded and can throw on
malformed input; wrap the parse in a safe decode with try/catch (or use a schema
decoder) inside the codepath that builds stages so that parsing errors do not
bubble and crash the RPC handler—on error log the defect via
processLogger/errorLogger and fall back to an empty array ([]) for stages;
alternatively replace JSON.parse with a validated decode (e.g., Effect
Schema.decode) and map decode failures to the same logged fallback.
apps/web/src/auditStore.ts-30-44 (1)

30-44: ⚠️ Potential issue | 🟡 Minor

Type cast bypasses partial input validation.

The query method accepts Partial<AuditQueryInput> but casts to AuditQueryInput (Line 34). If the RPC endpoint expects certain required fields, passing an empty object {} could cause server-side errors.

🛡️ Suggested approach

Either:

  1. Ensure the server's auditQuery handler accepts partial input with defaults, or
  2. Provide client-side defaults:
   query: async (input = {}) => {
     set({ isLoading: true, error: null });
     try {
       const client = getWsRpcClient();
-      const result: AuditQueryResult = await client.audit.query(input as AuditQueryInput);
+      const result: AuditQueryResult = await client.audit.query({
+        limit: 100,
+        offset: 0,
+        ...input,
+      });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/auditStore.ts` around lines 30 - 44, the query function
currently casts a Partial<AuditQueryInput> to AuditQueryInput before calling
client.audit.query which can send missing required fields; instead, validate and
merge client-side defaults for required fields (or change the call to accept
Partial if server supports it). Concretely, in the query function, create/use a
defaultAuditQueryInput and merge it with the incoming input (or validate
required properties and throw/return early), then pass the merged/validated
object to client.audit.query; reference the query method,
AuditQueryInput/AuditQueryResult types, and getWsRpcClient()/client.audit.query
to locate where to add the merge/validation. Ensure set({ error: ... }) still
handles validation failures before invoking the RPC.
apps/web/src/costStore.ts-52-57 (1)

52-57: ⚠️ Potential issue | 🟡 Minor

Budget stream updates can break project-scoped views.

fetchBudgets(projectId) can populate a filtered list, but the event handler always merges any incoming event.budget into the single budgets array. After subscribing, a project-specific view can start showing budgets from other projects. Key budgets by project or track the active filter before applying cost.budget.updated.

Also applies to: 78-83

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/costStore.ts` around lines 52 - 57, fetchBudgets is populating a
project-scoped list but the cost.budget.updated event handler blindly merges
event.budget into the single budgets array, which can introduce budgets from
other projects; update the handler(s) referenced alongside fetchBudgets (the
cost.budget.updated subscriber around lines 78-83) to respect the active project
filter by either keying stored budgets by projectId or checking that
event.budget.projectId matches the current projectId before merging/replacing,
and when subscribing from fetchBudgets pass and store the projectId filter so
the handler can ignore updates for other projects.
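Keying stored budgets by project makes the leak impossible regardless of which filter was active when the subscription started. A sketch of the store shape this implies (field names are illustrative):

```typescript
interface Budget {
  id: string;
  projectId: string;
  limitCents: number;
}

// Key budgets by project so stream updates cannot leak across project views.
function applyBudgetUpdate(
  byProject: Record<string, Budget[]>,
  budget: Budget,
): Record<string, Budget[]> {
  const existing = byProject[budget.projectId] ?? [];
  const merged = existing.some((b) => b.id === budget.id)
    ? existing.map((b) => (b.id === budget.id ? budget : b)) // replace in place
    : [...existing, budget]; // first sighting of this budget
  return { ...byProject, [budget.projectId]: merged };
}
```

A project-scoped view then selects `byProject[projectId]` and never sees other projects' updates.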
apps/server/src/task/Services/TaskDecompositionService.ts-57-57 (1)

57-57: ⚠️ Potential issue | 🟡 Minor

Dead code: ternary condition always produces the same result.

The expression order > 0 && !isNumbered ? [] : [] evaluates to an empty array in both branches, making the condition meaningless. If this is intentional placeholder logic, consider simplifying to just []. If dependencies were intended to be populated based on this logic, the implementation is incomplete.

Proposed fix
-      dependsOn: order > 0 && !isNumbered ? [] : [],
+      dependsOn: [],
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/task/Services/TaskDecompositionService.ts` at line 57, the
dependsOn assignment in TaskDecompositionService is dead code because the
ternary order > 0 && !isNumbered ? [] : [] always yields an empty array; remove
the needless condition and set dependsOn to [] directly (or implement the
intended dependency logic if tasks should populate dependencies based on order
and isNumbered). Locate the dependsOn property usage in the
TaskDecompositionService method where order and isNumbered are in scope and
either replace the ternary with a plain [] or replace it with the correct
array-building logic that uses order and isNumbered to produce dependencies.
apps/server/src/task/Services/TaskDecompositionService.ts-165-170 (1)

165-170: ⚠️ Potential issue | 🟡 Minor

Potential undefined access: updatedNode may be undefined.

If input.taskId does not match any task in updatedTasks, find returns undefined, but the non-null assertion (!) suppresses the type error. The subsequent PubSub.publish would then emit an event with node: undefined, which could cause downstream issues.

Proposed fix: validate task exists before updating
-      const updatedNode = updatedTasks.find((t) => t.id === input.taskId)!;
-      yield* PubSub.publish(pubsub, {
-        type: "task.node.updated" as const,
-        treeId: input.treeId,
-        node: updatedNode,
-      });
+      const updatedNode = updatedTasks.find((t) => t.id === input.taskId);
+      if (!updatedNode) {
+        return yield* Effect.die(new Error(`Task ${input.taskId} not found in tree ${input.treeId}`));
+      }
+      yield* PubSub.publish(pubsub, {
+        type: "task.node.updated" as const,
+        treeId: input.treeId,
+        node: updatedNode,
+      });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/task/Services/TaskDecompositionService.ts` around lines 165 -
170, in TaskDecompositionService, the code assumes updatedNode exists by using a
non-null assertion on updatedTasks.find(...); change this to check for undefined
(e.g., const updatedNode = updatedTasks.find(t => t.id === input.taskId); if
(!updatedNode) { /* handle */ }) before calling PubSub.publish so you never
publish node: undefined; handle the missing case by throwing a clear error,
returning early, or logging and skipping the publish (use the same
PubSub.publish call only when updatedNode is defined) and keep references to
updatedTasks, updatedNode, PubSub.publish, and input.taskId to locate the fix.
apps/server/src/routing/Services/ProviderRouterService.ts-150-169 (1)

150-169: ⚠️ Potential issue | 🟡 Minor

Default provider selection may choose an unhealthy provider.

When no matching rule exists or preferred providers are unavailable, the code defaults to "claudeAgent" (line 154) without checking if it's healthy. The alternatives are filtered for health (line 168), but selected remains the hardcoded default.

Proposed fix: verify default provider health
      let selected: ProviderKind = "claudeAgent";
      let reason = "default";
      let alternatives: ProviderKind[] = [];

      if (matchingRule && matchingRule.preferredProviders.length > 0) {
        const preferred = matchingRule.preferredProviders.filter(
          (p) => healthyProviders.includes(p) && !matchingRule.excludedProviders.includes(p),
        );
        if (preferred.length > 0) {
          selected = preferred[0]!;
          alternatives = preferred.slice(1);
          reason = `rule:${matchingRule.name}:${matchingRule.strategy}`;
        }
      } else {
-        alternatives = healthyProviders.filter((p) => p !== selected);
+        // Ensure default is healthy, otherwise pick first healthy provider
+        if (!healthyProviders.includes(selected)) {
+          selected = healthyProviders[0] ?? "claudeAgent";
+          reason = "fallback-healthy";
+        }
+        alternatives = healthyProviders.filter((p) => p !== selected);
      }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/routing/Services/ProviderRouterService.ts` around lines 150 -
169, the code currently sets selected: ProviderKind = "claudeAgent" without
verifying health; update the selection logic in ProviderRouterService (use
healthMap / healthyProviders /matchingRule /preferredProviders) so that when
there is no matchingRule or no preferred healthy providers, you check whether
the default ("claudeAgent") is present in healthyProviders and if not pick the
first healthyProviders[0] (or leave selected undefined/null and set reason to
"no-healthy-provider" if none exist); also recompute alternatives from
healthyProviders excluding the final selected and update reason accordingly
(e.g., "default:healthy" or "default:unhealthy->fallback").
🧹 Nitpick comments (8)
apps/server/src/persistence/Migrations/020_NewFeatureTables.ts (2)

256-270: Redundant index on thread_id.

The session_shares table already has a UNIQUE constraint on thread_id (Line 259), which implicitly creates an index in SQLite. The explicit idx_session_shares_thread index (Lines 268-270) is redundant and wastes storage.

♻️ Remove redundant index
   yield* sql`
     CREATE TABLE IF NOT EXISTS session_shares (
       id TEXT PRIMARY KEY,
       thread_id TEXT NOT NULL UNIQUE,
       owner_id TEXT NOT NULL,
       max_participants INTEGER NOT NULL DEFAULT 10,
       is_public INTEGER NOT NULL DEFAULT 0,
       created_at TEXT NOT NULL,
       expires_at TEXT
     )
   `;
-
-  yield* sql`
-    CREATE INDEX IF NOT EXISTS idx_session_shares_thread ON session_shares(thread_id)
-  `;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/persistence/Migrations/020_NewFeatureTables.ts` around lines
256 - 270, the CREATE INDEX idx_session_shares_thread on
session_shares(thread_id) is redundant because thread_id already has a UNIQUE
constraint in the session_shares table; remove the explicit index creation (the
SQL block that creates idx_session_shares_thread) so only the table definition
with UNIQUE(thread_id) remains, keeping the session_shares table and its UNIQUE
constraint intact.

1-286: Consider adding composite indexes for common query patterns.

The individual column indexes are good, but if queries commonly filter on multiple columns (e.g., project_id + created_at for cost_entries, or project_id + timestamp for audit_log), composite indexes would improve performance.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/persistence/Migrations/020_NewFeatureTables.ts` around lines
1 - 286, add composite indexes for common multi-column filters: create new
CREATE INDEX IF NOT EXISTS statements (placed near the existing index blocks)
such as idx_cost_entries_project_created on cost_entries(project_id, created_at)
and idx_cost_entries_project_thread on cost_entries(project_id, thread_id) if
you frequently query both, and idx_audit_log_project_timestamp on
audit_log(project_id, timestamp) to speed project+time queries; follow the same
pattern for other tables you often filter by multiple columns (e.g.,
ci_runs(project_id, branch) or pipeline_executions(pipeline_id, project_id))
using unique index names like idx_ci_runs_project_branch and
idx_pipeline_exec_pipeline_project. Ensure each is added using yield* sql`CREATE
INDEX IF NOT EXISTS ...` alongside the other CREATE INDEX statements in this
migration (refer to cost_entries, audit_log, ci_runs, pipeline_executions and
their existing idx_* names).
apps/server/src/server.ts (1)

296-302: Acceptable workaround for TypeScript inference limitation.

The as unknown as cast is a pragmatic solution for the documented TS limitation with deep Layer.provideMerge chains. The comment clearly explains the rationale. Consider adding a follow-up issue to revisit if Effect updates its Layer inference.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/server.ts` around lines 296 - 302, the cast workaround on
runServer (export const runServer = Layer.launch(makeServerLayer) as unknown as
Effect.Effect<never, any, ServerConfig>) is acceptable per the comment, but
please add a brief TODO or create a follow-up issue referencing this TypeScript
inference limitation so we can revisit when Effect/Layer typing improves; update
the comment near runServer to include a TODO tag and optionally an issue ID or
link for future tracking.
packages/contracts/src/auditLog.ts (1)

64-65: Bound limit at the contract edge.

NonNegativeInt alone lets a caller request arbitrarily large audit pages. Adding an upper bound here keeps the RPC surface safer and avoids expensive queries leaking through to the service layer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/contracts/src/auditLog.ts` around lines 64 - 65, the limit field
currently uses NonNegativeInt allowing unbounded large requests; change the
contract to enforce an upper bound at the edge by replacing the type for limit
with a bounded/int schema (e.g., a NonNegativeInt piped/combined with a max
constraint) and keep the decoding default to the desired page size (currently
100); update the line that defines limit (the symbol "limit" using
NonNegativeInt.pipe(Schema.withDecodingDefault(...))) to use the bounded schema
(e.g.,
"NonNegativeInt.pipe(Schema.withMax(X)).pipe(Schema.withDecodingDefault(() =>
100))" or a named BoundedLimit schema) while leaving offset as-is. Ensure the
chosen max value is documented in the schema comment or name.
apps/web/src/memoryStore.ts (1)

114-114: Avoid as any type assertion; use the proper branded type.

The projectId as any cast bypasses type safety. The projectId parameter should be typed as ProjectId from @t3tools/contracts to ensure type consistency with the RPC client.

Proposed fix
+import type { ProjectId } from "@t3tools/contracts";
+
 export interface MemoryStore extends MemoryState {
   // ...
-  index: (projectId: string, forceReindex?: boolean) => Promise<void>;
+  index: (projectId: ProjectId, forceReindex?: boolean) => Promise<void>;

Then remove the cast:

-      await client.memory.index({ projectId: projectId as any, forceReindex });
+      await client.memory.index({ projectId, forceReindex });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/memoryStore.ts` at line 114, the call to client.memory.index
currently uses a type-escaping cast (`projectId as any`); change the parameter
that supplies projectId to be typed as the branded ProjectId from
`@t3tools/contracts` (import ProjectId) and remove the `as any` cast in the call
to client.memory.index so you pass `projectId` with its proper type; update the
function/method signature that declares projectId (and any callers) to use
ProjectId to maintain type safety.
apps/server/src/ci/Services/CIIntegrationService.ts (1)

107-123: Effect.fail wrapped by Effect.orDie converts expected errors to defects.

In triggerRerun, Effect.fail(new Error(...)) on line 113 is wrapped by Effect.orDie on line 123. This converts an expected business error ("CI run not found") into an unrecoverable defect. Consider either:

  1. Remove Effect.orDie and properly type the error in the service shape
  2. Use Effect.die directly if this should truly be a defect
Option 1: Use Effect.die for defect semantics
-      if (!run) return yield* Effect.fail(new Error(`CI run ${input.runId} not found`));
+      if (!run) return yield* Effect.die(new Error(`CI run ${input.runId} not found`));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/server/src/ci/Services/CIIntegrationService.ts` around lines 107 - 123,
the triggerRerun implementation uses Effect.fail(new Error(...)) to signal a
missing CI run but then unconditionally calls .pipe(Effect.orDie), which turns
that expected business error into a defect; fix by either (A) removing
Effect.orDie and updating CIIntegrationServiceShape["triggerRerun"] to reflect a
failure channel (i.e., return Effect<Error, RequeuedRun> or similar) so callers
can handle the "CI run not found" error, or (B) if this truly should be
unrecoverable, replace Effect.fail(new Error(...)) with Effect.die(new
Error(...)) to intentionally produce a defect; locate the logic in triggerRerun
and adjust either the error-channel typing on the service shape or the failing
primitive accordingly.
packages/contracts/src/routing.ts (1)

46-55: Reject rules that both prefer and exclude the same provider.

This schema currently accepts contradictory routing rules. Once persisted, the router has to guess whether exclusion wins or whether no candidates remain. Add a cross-field refinement here so invalid rules are rejected at the contract boundary.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/contracts/src/routing.ts` around lines 46 - 55, the RoutingRule
schema currently allows preferredProviders and excludedProviders to contain the
same ProviderKind values; add a cross-field refinement on RoutingRule to reject
any object where the intersection of preferredProviders and excludedProviders is
non-empty. Update the schema construction for RoutingRule to apply a refinement
(e.g., using Schema.Refinement/Schema.Pipe or the library's cross-field
validation utility) that inspects preferredProviders and excludedProviders and
returns a validation error if any provider appears in both arrays, referencing
the RoutingRule, preferredProviders, and excludedProviders symbols so the check
is applied at the contract boundary.
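The predicate behind such a refinement is a set intersection; the schema refinement would wrap something like this (types are illustrative, not the contract's branded kinds):

```typescript
interface RoutingRuleInput {
  preferredProviders: string[];
  excludedProviders: string[];
}

// Cross-field check: a provider may not be both preferred and excluded.
// Returns the offending providers; empty means the rule is consistent.
function findContradictions(rule: RoutingRuleInput): string[] {
  const excluded = new Set(rule.excludedProviders);
  return rule.preferredProviders.filter((p) => excluded.has(p));
}
```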
apps/web/src/wsRpcClient.ts (1)

288-389: Consider a tiny factory for these transport wrappers.

Most of this block is mechanical transport.request / transport.subscribe glue. Pulling that into a helper would reduce the copy-paste surface and make future RPC additions less drift-prone.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/src/wsRpcClient.ts` around lines 288 - 389, extract the repeated
transport.request/transport.subscribe patterns into small helpers and replace
the inline wrappers with calls to them: add helpers like makeRequest =
(methodName) => (input) => transport.request(client =>
client[methodName](input)) and makeSubscribe = (methodName) => (listener,
options) => transport.subscribe(client => client[methodName]({}), listener,
options), then refactor each namespace (e.g., cost, audit, ci, routing,
pipeline, workflow, task, memory, presence) to use makeRequest(WS_METHODS.xxx)
for request methods and makeSubscribe(WS_METHODS.subscribeXxx) for onEvent to
eliminate duplication and centralize the glue logic.
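The request half of that factory can be sketched against a generic transport; the subscribe half follows the same shape. Method names and the `Transport` interface below are assumptions, not the client's actual API:

```typescript
interface Transport {
  request(method: string, input: unknown): Promise<unknown>;
}

// Factory that turns a method name into a typed request wrapper,
// replacing the hand-written per-RPC glue.
function makeRequest<I, O>(transport: Transport, method: string) {
  return (input: I): Promise<O> => transport.request(method, input) as Promise<O>;
}
```

Each namespace entry then becomes a one-liner, e.g. `record: makeRequest(transport, WS_METHODS.costRecord)`, so adding an RPC touches one line instead of a copied block.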

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 3e0f5739-ab45-4e65-9fd3-973f9954fda0

📥 Commits

Reviewing files that changed from the base of the PR and between 0da9c13 and 64c26c1.

📒 Files selected for processing (46)
  • apps/server/src/audit/Services/AuditLogService.ts
  • apps/server/src/ci/Services/CIIntegrationService.ts
  • apps/server/src/cost/Services/CostTrackingService.ts
  • apps/server/src/memory/Services/ProjectMemoryService.ts
  • apps/server/src/orchestration/Schemas.ts
  • apps/server/src/orchestration/decider.ts
  • apps/server/src/orchestration/projector.ts
  • apps/server/src/persistence/Migrations.ts
  • apps/server/src/persistence/Migrations/020_NewFeatureTables.ts
  • apps/server/src/pipeline/Services/PipelineService.ts
  • apps/server/src/presence/Services/PresenceService.ts
  • apps/server/src/routing/Services/ProviderRouterService.ts
  • apps/server/src/server.test.ts
  • apps/server/src/server.ts
  • apps/server/src/task/Services/TaskDecompositionService.ts
  • apps/server/src/workflow/Services/WorkflowService.ts
  • apps/server/src/ws.ts
  • apps/web/src/appSettings.ts
  • apps/web/src/auditStore.ts
  • apps/web/src/ciStore.ts
  • apps/web/src/components/ChatView.tsx
  • apps/web/src/components/chat/ComposerQueuedFollowUpsPanel.tsx
  • apps/web/src/components/settings/FeaturesPanels.tsx
  • apps/web/src/components/settings/SettingsSidebarNav.tsx
  • apps/web/src/costStore.ts
  • apps/web/src/memoryStore.ts
  • apps/web/src/pipelineStore.ts
  • apps/web/src/presenceStore.ts
  • apps/web/src/routeTree.gen.ts
  • apps/web/src/routes/settings.features.tsx
  • apps/web/src/routingStore.ts
  • apps/web/src/taskStore.ts
  • apps/web/src/workflowStore.ts
  • apps/web/src/wsRpcClient.ts
  • packages/contracts/src/auditLog.ts
  • packages/contracts/src/ciIntegration.ts
  • packages/contracts/src/costTracking.ts
  • packages/contracts/src/index.ts
  • packages/contracts/src/orchestration.ts
  • packages/contracts/src/pipelines.ts
  • packages/contracts/src/presence.ts
  • packages/contracts/src/projectMemory.ts
  • packages/contracts/src/routing.ts
  • packages/contracts/src/rpc.ts
  • packages/contracts/src/taskDecomposition.ts
  • packages/contracts/src/workflows.ts

Comment on lines +92 to +94
export const MemoryForgetInput = Schema.Struct({
entryId: MemoryEntryId,
});


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Expectation:
# - the WS handler forwards enough context to scope the delete, and/or
# - the service resolves the row by entryId and verifies its projectId before deleting.

rg -n -C4 'memoryForget|forget\(|projectId|entryId|DELETE' \
  apps/server/src/ws.ts \
  apps/server/src/memory/Services/ProjectMemoryService.ts

Repository: Josh-wt/t3code

Length of output: 16972


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check if there's authorization/context middleware that validates projectId
rg -n "projectId|context|auth|Permission" apps/server/src/ws.ts \
  | head -50

# Check the database schema for memory_entries
fd -e "sql" -e "ts" -e "js" | xargs rg -l "memory_entries|CREATE TABLE" | head -10

Repository: Josh-wt/t3code

Length of output: 1380


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Find database migrations or schema definitions
find . -type f \( -name "*migration*" -o -name "*schema*" \) \
  | grep -i -E "(memory|db|sql)" | head -20

Repository: Josh-wt/t3code

Length of output: 40


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check MemoryForgetInput definition in contracts
fd "projectMemory" -e ts | xargs rg -l "MemoryForgetInput" | head -5

# Then look at the actual input type definition
rg -n -A5 "MemoryForgetInput.*=" packages/contracts/src/projectMemory.ts

Repository: Josh-wt/t3code

Length of output: 444


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check if ws.ts has any context or projectId validation for the memory handlers
rg -n -B10 "WS_METHODS.memoryForget" apps/server/src/ws.ts | head -40

Repository: Josh-wt/t3code

Length of output: 630


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check MemoryAddInput to see if it requires projectId
rg -n -A10 "export const MemoryAddInput" packages/contracts/src/projectMemory.ts

# Check MemorySearchInput too
rg -n -A10 "export const MemorySearchInput" packages/contracts/src/projectMemory.ts

Repository: Josh-wt/t3code

Length of output: 903


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Look for database schema/migrations mentioning memory_entries
fd -t f | xargs rg -l "memory_entries" | head -10

Repository: Josh-wt/t3code

Length of output: 364


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check the WS handler pattern - is there context/middleware for projectId validation?
# Look around the memoryForget handler for context
rg -n -B30 "WS_METHODS.memoryForget" apps/server/src/ws.ts | tail -50

Repository: Josh-wt/t3code

Length of output: 1653


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Check the migration for memory_entries table schema
cat apps/server/src/persistence/Migrations/020_NewFeatureTables.ts | head -150

Repository: Josh-wt/t3code

Length of output: 4150


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Look for database constraint on entryId - is it unique globally or scoped to project?
rg -n -A15 "CREATE TABLE.*memory_entries|memory_entries.*project_id" apps/server/src/persistence/Migrations/020_NewFeatureTables.ts

Repository: Josh-wt/t3code

Length of output: 40


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Find memory_entries in the migration file
cat apps/server/src/persistence/Migrations/020_NewFeatureTables.ts | grep -n -A20 "memory_entries"

Repository: Josh-wt/t3code

Length of output: 2438


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Get the full migration file to see memory_entries definition
wc -l apps/server/src/persistence/Migrations/020_NewFeatureTables.ts

# Then read from line that contains memory_entries onwards
tail -n +100 apps/server/src/persistence/Migrations/020_NewFeatureTables.ts | head -100

Repository: Josh-wt/t3code

Length of output: 2797


Add projectId validation to the MemoryForgetInput and verify project ownership before deletion.

MemoryForgetInput lacks projectId, making it the only memory mutation without project scoping. The forget operation deletes by entryId alone without verifying the entry belongs to the requesting project. Since entryId is globally unique in the database schema, a leaked or cross-project ID can delete another project's memory.

Update the input schema to require projectId and add a WHERE clause to the DELETE query: WHERE id = ${input.entryId} AND project_id = ${input.projectId}.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/contracts/src/projectMemory.ts` around lines 92 - 94,
MemoryForgetInput currently includes only MemoryEntryId, which allows
cross-project deletes. Add a required projectId (using the existing ProjectId
type) so the schema becomes { entryId: MemoryEntryId, projectId: ProjectId }.
Then update the forget/delete handler (the resolver or function that performs
the DELETE by entryId) to scope the delete to the requesting project — WHERE
id = ${input.entryId} AND project_id = ${input.projectId} — or perform an
existence check that the entry belongs to the provided project before
deleting.
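
The scoped delete can be sketched over an in-memory stand-in for `memory_entries` (column names taken from the migration; the real fix is the extra `AND project_id` predicate in the SQL):

```typescript
// In-memory stand-in for the memory_entries table.
type MemoryRow = { id: string; project_id: string; text: string };

// Equivalent of:
//   DELETE FROM memory_entries
//   WHERE id = ${input.entryId} AND project_id = ${input.projectId}
// A delete with a mismatched projectId is a no-op instead of a cross-project delete.
const forgetScoped = (
  rows: MemoryRow[],
  input: { entryId: string; projectId: string },
): MemoryRow[] =>
  rows.filter((r) => !(r.id === input.entryId && r.project_id === input.projectId));
```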


@cubic-dev-ai cubic-dev-ai bot left a comment


19 issues found across 46 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="apps/web/src/costStore.ts">

<violation number="1" location="apps/web/src/costStore.ts:40">
P2: `isLoading` is shared across independent requests. If `fetchSummary` and `fetchBudgets` run at the same time, whichever finishes first clears `isLoading` even though the other call is still pending, so the UI can indicate loading is done prematurely. Consider separate loading flags or a request counter.</violation>
</file>

<file name="apps/server/src/audit/Services/AuditLogService.ts">

<violation number="1" location="apps/server/src/audit/Services/AuditLogService.ts:94">
P0: SQL injection in `query`: all filter values (`projectId`, `threadId`, `category`, `severity`, `actor`, timestamps) are string-interpolated into raw SQL passed to `sql.unsafe()`. An attacker-controlled input can escape the quoted string and execute arbitrary SQL. This is especially dangerous in an audit log service — an attacker could tamper with or exfiltrate the security audit trail.

Use parameterized queries (the `sql` tagged template) or build the query with proper parameter binding instead of string concatenation.</violation>
</file>

<file name="apps/web/src/presenceStore.ts">

<violation number="1" location="apps/web/src/presenceStore.ts:68">
P2: `leave` removes the entire thread’s participant list, so one user leaving clears all remaining participants for that thread. Remove only the leaving participant instead of deleting the whole thread entry.</violation>
</file>

<file name="apps/server/src/ci/Services/CIIntegrationService.ts">

<violation number="1" location="apps/server/src/ci/Services/CIIntegrationService.ts:109">
P1: `triggerRerun` ignores `projectId` and `failedOnly`, so any run ID can be re-queued even if it belongs to another project or isn’t failed. Filter by project and enforce the `failedOnly` guard before re-queuing.</violation>
</file>

<file name="apps/server/src/orchestration/projector.ts">

<violation number="1" location="apps/server/src/orchestration/projector.ts:700">
P2: The branch-from-checkpoint projection drops proposedPlans entirely. This will leave the new thread without any proposed plans associated with the retained turns, unlike the revert logic which preserves them.</violation>
</file>

<file name="apps/server/src/pipeline/Services/PipelineService.ts">

<violation number="1" location="apps/server/src/pipeline/Services/PipelineService.ts:244">
P2: Stages are marked `completed` immediately with no dispatch or wait for actual execution, so pipelines always finish instantly even if real work hasn’t run. Either dispatch the stage (or wait for an external executor) before setting `completed`, or leave the stage pending until real completion is reported.</violation>
</file>

<file name="apps/web/src/components/ChatView.tsx">

<violation number="1" location="apps/web/src/components/ChatView.tsx:3027">
P2: `forceQueue` currently queues even when no turn is running, which can leave the message stuck because auto-dispatch only happens on running→settled. Gate force-queued items on `phase === "running"` so idle sends go out immediately.</violation>
</file>

<file name="apps/server/src/orchestration/decider.ts">

<violation number="1" location="apps/server/src/orchestration/decider.ts:674">
P1: `thread.created` should be recorded under the new thread aggregate, not the source thread. Using `aggregateId: command.sourceThreadId` will attach the creation event to the wrong thread.</violation>
</file>

<file name="apps/server/src/ws.ts">

<violation number="1" location="apps/server/src/ws.ts:841">
P1: Use `def.stages` here instead of parsing a non-existent `stagesJson` field; the current mapping returns empty stage arrays for listed pipelines.</violation>

<violation number="2" location="apps/server/src/ws.ts:913">
P1: `Effect.die` is for unrecoverable defects, not expected "not found" cases. This will crash the fiber with an untyped error rather than returning a proper error response to the client. Use `Effect.fail` with a typed error instead.</violation>

<violation number="3" location="apps/server/src/ws.ts:932">
P2: Pipeline events are broadcast with hardcoded empty strings for `pipelineId`, `projectId`, and an empty `stages` array. Clients won't be able to correlate these events to actual pipelines or projects. Either enrich the event by looking up the execution, or extend `PipelineStreamEvent` to carry these fields.</violation>
</file>

<file name="apps/server/src/cost/Services/CostTrackingService.ts">

<violation number="1" location="apps/server/src/cost/Services/CostTrackingService.ts:122">
P2: Division by zero when `budget.limit_cents` is 0 produces `Infinity` or `NaN`, causing either spurious alerts on every call or silently skipped alerts. Guard against a zero limit before computing `percentUsed`.</violation>

<violation number="2" location="apps/server/src/cost/Services/CostTrackingService.ts:148">
P0: SQL injection: user-supplied `projectId`, `threadId`, `periodStart`, and `periodEnd` are string-interpolated into raw SQL passed to `sql.unsafe()`. Use the parameterized `sql` tagged template instead of building a raw WHERE clause. For conditional filters, you can compose queries with optional conditions or build separate branches.</violation>

<violation number="3" location="apps/server/src/cost/Services/CostTrackingService.ts:205">
P2: `INSERT OR REPLACE` resets `current_spend_cents` to 0 and `created_at` to now on every update. Editing a budget's threshold or limit silently discards all accumulated spend data. Use `INSERT … ON CONFLICT(id) DO UPDATE SET` to preserve `current_spend_cents` and `created_at`.</violation>
</file>

<file name="apps/server/src/memory/Services/ProjectMemoryService.ts">

<violation number="1" location="apps/server/src/memory/Services/ProjectMemoryService.ts:80">
P1: Avoid string-interpolating `input.kind` into the sql.unsafe query; parameterize it to prevent SQL injection in the search endpoint.</violation>
</file>

<file name="apps/server/src/presence/Services/PresenceService.ts">

<violation number="1" location="apps/server/src/presence/Services/PresenceService.ts:92">
P2: `leave` clears the entire thread instead of the caller. With only `threadId` in the input, any client leaving will evict all participants and emit left events for them.</violation>

<violation number="2" location="apps/server/src/presence/Services/PresenceService.ts:106">
P2: `updateCursor` updates every participant in the thread. One user’s cursor change will overwrite all other participants’ cursor states and broadcast updates for all of them.</violation>
</file>

<file name="apps/server/src/workflow/Services/WorkflowService.ts">

<violation number="1" location="apps/server/src/workflow/Services/WorkflowService.ts:248">
P2: Include `is_built_in` in the upsert update so built-in templates can’t remain user-deletable when an existing row conflicts.</violation>
</file>

<file name="apps/server/src/routing/Services/ProviderRouterService.ts">

<violation number="1" location="apps/server/src/routing/Services/ProviderRouterService.ts:148">
P2: Rule selection ignores task patterns and the provided taskHint, so task-specific routing rules never apply. Incorporate taskHint/taskPatterns when picking the matching rule.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
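
Both P0 SQL-injection findings (AuditLogService and CostTrackingService) share the same shape of fix: keep filter values out of the SQL text and pass them as bound parameters. A generic sketch — placeholder-style binding, not the project's actual `sql` tagged-template API:

```typescript
// Optional filters mirroring the audit/cost queries flagged above.
type Filters = Partial<Record<"projectId" | "threadId" | "category" | "severity", string>>;

// Build SQL text with `?` placeholders; values travel separately as params.
const buildQuery = (filters: Filters): { text: string; params: string[] } => {
  const columns: Record<keyof Filters, string> = {
    projectId: "project_id",
    threadId: "thread_id",
    category: "category",
    severity: "severity",
  };
  const clauses: string[] = [];
  const params: string[] = [];
  for (const [key, value] of Object.entries(filters)) {
    if (value === undefined) continue;
    params.push(value); // value is bound, never interpolated into the text
    clauses.push(`${columns[key as keyof Filters]} = ?`);
  }
  const where = clauses.length > 0 ? ` WHERE ${clauses.join(" AND ")}` : "";
  return { text: `SELECT * FROM audit_log${where}`, params };
};
```

With the tagged template the same idea applies: compose optional conditions as `sql` fragments so every value stays a bound parameter.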


const triggerRerun: CIIntegrationServiceShape["triggerRerun"] = (input) =>
Effect.gen(function* () {
const rows = yield* sql<

@cubic-dev-ai cubic-dev-ai bot Apr 10, 2026


P1: triggerRerun ignores projectId and failedOnly, so any run ID can be re-queued even if it belongs to another project or isn’t failed. Filter by project and enforce the failedOnly guard before re-queuing.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/server/src/ci/Services/CIIntegrationService.ts, line 109:

<comment>`triggerRerun` ignores `projectId` and `failedOnly`, so any run ID can be re-queued even if it belongs to another project or isn’t failed. Filter by project and enforce the `failedOnly` guard before re-queuing.</comment>

<file context>
@@ -0,0 +1,168 @@
+
+  const triggerRerun: CIIntegrationServiceShape["triggerRerun"] = (input) =>
+    Effect.gen(function* () {
+      const rows = yield* sql<
+        Record<string, unknown>
+      >`SELECT * FROM ci_runs WHERE id = ${input.runId}`;
</file context>
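
The scoped lookup the comment asks for, simulated over an in-memory `ci_runs` table (column names assumed from the snippet above):

```typescript
// In-memory stand-in for ci_runs.
type CiRun = { id: string; project_id: string; status: string };

// Equivalent of: SELECT * FROM ci_runs WHERE id = ${input.runId}
//   AND project_id = ${input.projectId} [AND status = 'failed' when failedOnly]
// Cross-project IDs and non-failed runs (under failedOnly) yield no match.
const selectRerunnable = (
  runs: CiRun[],
  input: { runId: string; projectId: string; failedOnly: boolean },
): CiRun | undefined =>
  runs.find(
    (r) =>
      r.id === input.runId &&
      r.project_id === input.projectId &&
      (!input.failedOnly || r.status === "failed"),
  );
```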

const branchedThreadEvent: Omit<OrchestrationEvent, "sequence"> = {
...withEventBase({
aggregateKind: "thread",
aggregateId: command.sourceThreadId,

@cubic-dev-ai cubic-dev-ai bot Apr 10, 2026


P1: thread.created should be recorded under the new thread aggregate, not the source thread. Using aggregateId: command.sourceThreadId will attach the creation event to the wrong thread.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/server/src/orchestration/decider.ts, line 674:

<comment>`thread.created` should be recorded under the new thread aggregate, not the source thread. Using `aggregateId: command.sourceThreadId` will attach the creation event to the wrong thread.</comment>

<file context>
@@ -647,6 +647,67 @@ export const decideOrchestrationCommand = Effect.fn("decideOrchestrationCommand"
+      const branchedThreadEvent: Omit<OrchestrationEvent, "sequence"> = {
+        ...withEventBase({
+          aggregateKind: "thread",
+          aggregateId: command.sourceThreadId,
+          occurredAt: command.createdAt,
+          commandId: command.commandId,
</file context>
Suggested change
aggregateId: command.sourceThreadId,
aggregateId: command.newThreadId,

completedAt: exec.completedAt,
updatedAt: exec.updatedAt,
})
: Effect.die(new Error("Execution not found")),

@cubic-dev-ai cubic-dev-ai bot Apr 10, 2026


P1: Effect.die is for unrecoverable defects, not expected "not found" cases. This will crash the fiber with an untyped error rather than returning a proper error response to the client. Use Effect.fail with a typed error instead.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/server/src/ws.ts, line 913:

<comment>`Effect.die` is for unrecoverable defects, not expected "not found" cases. This will crash the fiber with an untyped error rather than returning a proper error response to the client. Use `Effect.fail` with a typed error instead.</comment>

<file context>
@@ -708,6 +726,397 @@ const WsRpcLayer = WsRpcGroup.toLayer(
+                    completedAt: exec.completedAt,
+                    updatedAt: exec.updatedAt,
+                  })
+                : Effect.die(new Error("Execution not found")),
+            ),
+          ),
</file context>
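
The pattern in plain TypeScript, using a hand-rolled `Result` as a stand-in for Effect's error channel (in the real code this would be a `Data.TaggedError` failed with `Effect.fail`, which the RPC layer can then map to a client-visible error response):

```typescript
// Typed, expected failure — not a defect.
class ExecutionNotFound {
  readonly _tag = "ExecutionNotFound";
  constructor(readonly executionId: string) {}
}

type Result<E, A> = { _tag: "Failure"; error: E } | { _tag: "Success"; value: A };

type Execution = { id: string; status: string };

// A missing execution is an expected miss: return a typed failure the caller
// can handle, rather than crashing the fiber with an untyped Error (Effect.die).
const getExecution = (
  byId: Map<string, Execution>,
  executionId: string,
): Result<ExecutionNotFound, Execution> => {
  const exec = byId.get(executionId);
  return exec === undefined
    ? { _tag: "Failure", error: new ExecutionNotFound(executionId) }
    : { _tag: "Success", value: exec };
};
```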

Effect.gen(function* () {
const now = new Date().toISOString();

yield* sql`INSERT OR REPLACE INTO cost_budgets (id, project_id, limit_cents, period_days, current_spend_cents, alert_threshold_percent, enabled, created_at, updated_at)

@cubic-dev-ai cubic-dev-ai bot Apr 10, 2026


P2: INSERT OR REPLACE resets current_spend_cents to 0 and created_at to now on every update. Editing a budget's threshold or limit silently discards all accumulated spend data. Use INSERT … ON CONFLICT(id) DO UPDATE SET to preserve current_spend_cents and created_at.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/server/src/cost/Services/CostTrackingService.ts, line 205:

<comment>`INSERT OR REPLACE` resets `current_spend_cents` to 0 and `created_at` to now on every update. Editing a budget's threshold or limit silently discards all accumulated spend data. Use `INSERT … ON CONFLICT(id) DO UPDATE SET` to preserve `current_spend_cents` and `created_at`.</comment>

<file context>
@@ -0,0 +1,256 @@
+    Effect.gen(function* () {
+      const now = new Date().toISOString();
+
+      yield* sql`INSERT OR REPLACE INTO cost_budgets (id, project_id, limit_cents, period_days, current_spend_cents, alert_threshold_percent, enabled, created_at, updated_at)
+        VALUES (${input.budgetId}, ${input.projectId}, ${input.limitCents}, ${input.periodDays}, 0, ${input.alertThresholdPercent}, ${input.enabled ? 1 : 0}, ${now}, ${now})`;
+
</file context>
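
The `ON CONFLICT … DO UPDATE` semantics, simulated over an in-memory budgets map so the preserved columns are explicit (column names from the snippet above):

```typescript
type Budget = {
  id: string;
  limit_cents: number;
  current_spend_cents: number;
  created_at: string;
  updated_at: string;
};

// Mirrors: INSERT INTO cost_budgets (...) VALUES (...)
//   ON CONFLICT(id) DO UPDATE SET
//     limit_cents = excluded.limit_cents,
//     updated_at = excluded.updated_at
// current_spend_cents and created_at are deliberately NOT in the update list,
// so editing a budget's limit no longer wipes accumulated spend.
const upsertBudget = (table: Map<string, Budget>, incoming: Budget): void => {
  const existing = table.get(incoming.id);
  if (existing === undefined) {
    table.set(incoming.id, incoming);
    return;
  }
  table.set(incoming.id, {
    ...existing,
    limit_cents: incoming.limit_cents,
    updated_at: incoming.updated_at,
  });
};
```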

description = excluded.description,
category = excluded.category,
variables = excluded.variables,
steps = excluded.steps,

@cubic-dev-ai cubic-dev-ai bot Apr 10, 2026


P2: Include is_built_in in the upsert update so built-in templates can’t remain user-deletable when an existing row conflicts.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/server/src/workflow/Services/WorkflowService.ts, line 248:

<comment>Include `is_built_in` in the upsert update so built-in templates can’t remain user-deletable when an existing row conflicts.</comment>

<file context>
@@ -0,0 +1,360 @@
+        description = excluded.description,
+        category = excluded.category,
+        variables = excluded.variables,
+        steps = excluded.steps,
+        updated_at = excluded.updated_at
+    `;
</file context>

const now = new Date().toISOString();

// Find first matching rule for this project
const matchingRule = rules.find((r) => r.projectId === null || r.projectId === projectId);

@cubic-dev-ai cubic-dev-ai bot Apr 10, 2026


P2: Rule selection ignores task patterns and the provided taskHint, so task-specific routing rules never apply. Incorporate taskHint/taskPatterns when picking the matching rule.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/server/src/routing/Services/ProviderRouterService.ts, line 148:

<comment>Rule selection ignores task patterns and the provided taskHint, so task-specific routing rules never apply. Incorporate taskHint/taskPatterns when picking the matching rule.</comment>

<file context>
@@ -0,0 +1,216 @@
+      const now = new Date().toISOString();
+
+      // Find first matching rule for this project
+      const matchingRule = rules.find((r) => r.projectId === null || r.projectId === projectId);
+
+      const healthyProviders = Array.from(healthMap.values())
</file context>
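
One way the matcher could honour task patterns, sketched with assumed field names (`taskPatterns` as simple `*` globs) since the review text doesn't show the rule schema:

```typescript
type RoutingRule = {
  projectId: string | null;
  taskPatterns: string[]; // e.g. ["refactor*", "test*"]; empty = match any task
  provider: string;
};

// Escape regex metacharacters so only our `*` wildcard has special meaning.
const escapeRe = (s: string): string => s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");

const matchesPattern = (pattern: string, hint: string): boolean =>
  new RegExp(`^${pattern.split("*").map(escapeRe).join(".*")}$`).test(hint);

// First rule that matches BOTH the project scope and the task hint wins;
// pattern-less rules act as project-level fallbacks.
const pickRule = (
  rules: RoutingRule[],
  projectId: string,
  taskHint: string | undefined,
): RoutingRule | undefined =>
  rules.find(
    (r) =>
      (r.projectId === null || r.projectId === projectId) &&
      (r.taskPatterns.length === 0 ||
        (taskHint !== undefined && r.taskPatterns.some((p) => matchesPattern(p, taskHint)))),
  );
```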

@Josh-wt Josh-wt merged commit 5441092 into main Apr 10, 2026
1 of 2 checks passed