feat: implement 11 feature services + queue/steer follow-up behavior #1
Conversation
Server:
- Add CostTracking, AuditLog, CI, ProviderRouter, TaskDecomposition, ProjectMemory, Presence, Pipeline, and Workflow services with full SQLite persistence (migration 020)
- Wire all services into server.ts using Layer.mergeAll / provideMerge to stay within TypeScript's 20-arg pipe limit
- Add comprehensive RPC handlers in ws.ts for all new services
- Add mock stubs in server.test.ts to satisfy test layer requirements

Client:
- Add contract schemas and types for all 9 feature domains
- Add client-side stores (costStore, auditStore, ciStore, etc.)
- Add Features settings panel (FeaturesPanels.tsx + route)
- Extend RPC client (wsRpcClient.ts) with new service methods

Queue/Steer follow-up behavior (inspired by upstream PR pingdotgg#1479):
- Add `followUpBehavior: "steer" | "queue"` setting to AppSettings
- Queue mode holds follow-up messages while a turn is running and auto-dispatches them when the turn settles (transition-based effect)
- Steer mode (default) sends messages immediately as before
- Cmd/Ctrl+Shift+Enter inverts behavior one-off in either mode
- ComposerQueuedFollowUpsPanel shows queued items with remove buttons
- Queue clears on thread switch; failed dispatches re-queue

https://claude.ai/code/session_01XF5adFusgTx5Fb1Qhc8MeC
Caution: Review failed. The pull request is closed.
ℹ️ Recent review info · ⚙️ Configuration used: defaults · Review profile: CHILL · Plan: Pro
📒 Files selected for processing (5)
📝 Walkthrough
Adds nine backend feature services with SQL persistence and PubSub streams, a new DB migration, orchestration checkpoint branching, RPC/WS wiring and contracts, plus web client stores, UI components, and a settings route for the new features.
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 64c26c1ce8
```ts
if (input.projectId) conditions.push(`project_id = '${input.projectId}'`);
if (input.threadId) conditions.push(`thread_id = '${input.threadId}'`);
const whereClause = `WHERE ${conditions.join(" AND ")}`;
```
Parameterize cost summary filters
This query path builds SQL conditions by interpolating RPC input directly into string fragments and then executes them with sql.unsafe, so crafted values in fields like projectId or threadId can break query structure (or worse) instead of being treated as data. Because cost.getSummary is externally callable, this should use bound parameters/tagged SQL fragments throughout.
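A minimal sketch of the parameterized alternative (the function and input names here are illustrative, not the service's actual API): collect placeholder fragments and bound values side by side, so user-controlled ids are always passed as data and never spliced into the SQL text.

```typescript
// Illustrative sketch: build the WHERE clause and its bound parameters
// together, so ids like projectId/threadId can never alter query structure.
type CostSummaryInput = { projectId?: string; threadId?: string };

function buildCostSummaryFilter(input: CostSummaryInput): {
  clause: string;
  params: string[];
} {
  const conditions: string[] = [];
  const params: string[] = [];
  if (input.projectId) {
    conditions.push("project_id = ?");
    params.push(input.projectId);
  }
  if (input.threadId) {
    conditions.push("thread_id = ?");
    params.push(input.threadId);
  }
  const clause =
    conditions.length > 0 ? `WHERE ${conditions.join(" AND ")}` : "";
  return { clause, params };
}
```

In the actual service the same idea is expressible with tagged `sql` fragments instead of manual `?` placeholders; the key property is that the caller's values only ever travel through the parameter list.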
```ts
stages: JSON.parse(
  (def as unknown as { stagesJson: string }).stagesJson ?? "[]",
) as import("@t3tools/contracts").PipelineStage[],
```
Return persisted stages in pipeline list results
pipeline.list reads definitions from PipelineService.list, but this mapper tries to parse a non-existent stagesJson field and falls back to [], so every listed pipeline loses its stage graph. Any client relying on list data for editing/execution previews will see empty pipelines and behave incorrectly.
```ts
for (const [participantId] of threadMap) {
  threadMap.delete(participantId);
  yield* PubSub.publish(pubsub, {
```
Delete only the caller when handling presence leave
This leave implementation iterates the entire thread map and deletes every participant, so one user leaving a shared thread forces all other participants to be removed and emits presence.left for everyone. In multi-user sessions this collapses presence state globally instead of removing a single participant.
```ts
aggregateKind: "thread",
aggregateId: command.sourceThreadId,
occurredAt: command.createdAt,
```
Emit branched thread creation on the new thread stream
The thread.created event for branch-from-checkpoint is written with aggregateId set to the source thread even though the payload creates newThreadId. Since event stream identity is keyed by aggregateId, this puts the creation event in the wrong stream, skewing per-thread stream history/versioning for consumers that read by aggregate.
Actionable comments posted: 4
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (20)
apps/web/src/components/ChatView.tsx-2384-2413 (1)
2384-2413: ⚠️ Potential issue | 🟠 Major
Route queued auto-dispatch through the same send path as manual sends.
Line 2393 removes the head item before the dispatch path is even confirmed, and Lines 2398-2408 bypass the normal send flow entirely. That means queued follow-ups skip `sendInFlightRef`, `beginLocalDispatch`, `persistThreadSettingsForNextTurn`, and `formatOutgoingPrompt`, so they can be dropped, double-sent, or behave differently from an immediate send. The catch also requeues into component state rather than a thread-scoped queue, so a failure after navigation can resurrect the item in the wrong thread.
apps/web/src/components/ChatView.tsx-3023-3029 (1)
3023-3029: ⚠️ Potential issue | 🟠 Major
Don't queue one-off overrides when the thread is already idle.
Line 3027 lets `forceQueue` bypass the `phase === "running"` check. In steer mode, Cmd/Ctrl+Shift+Enter on an already-settled thread will enqueue the message, but nothing will auto-dispatch it until some unrelated future turn settles. Restrict the override to active turns, or fall back to immediate send when the thread is idle.
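Assuming the names from the finding (`phase`, `followUpBehavior`, `forceQueue`, plus a hypothetical `forceSteer` inverse for the keyboard override), the corrected predicate could be sketched as:

```typescript
// Hypothetical sketch of the corrected condition: queueing (default or
// forced) only ever applies while a turn is running; an idle thread
// always falls back to an immediate send.
type Phase = "idle" | "running";

function shouldQueue(
  phase: Phase,
  followUpBehavior: "steer" | "queue",
  opts?: { forceQueue?: boolean; forceSteer?: boolean },
): boolean {
  if (phase !== "running") return false; // idle: always send immediately
  if (opts?.forceSteer) return false; // one-off override back to steer
  return followUpBehavior === "queue" || opts?.forceQueue === true;
}
```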
apps/server/src/orchestration/decider.ts-671-691 (1)
671-691: ⚠️ Potential issue | 🟠 Major
Incorrect `aggregateId` for `thread.created` event.
The `branchedThreadEvent` uses `command.sourceThreadId` as its `aggregateId` (Line 674), but this event represents the creation of the new thread (`command.newThreadId`). This is inconsistent with the regular `thread.create` handler (Line 152), where `aggregateId` matches the thread being created.
🐛 Proposed fix
```diff
 const branchedThreadEvent: Omit<OrchestrationEvent, "sequence"> = {
   ...withEventBase({
     aggregateKind: "thread",
-    aggregateId: command.sourceThreadId,
+    aggregateId: command.newThreadId,
     occurredAt: command.createdAt,
     commandId: command.commandId,
   }),
```
apps/server/src/ws.ts-885-917 (1)
885-917: ⚠️ Potential issue | 🟠 Major
Use `Effect.fail` instead of `Effect.die` for missing execution.
`Effect.die` terminates the fiber with an unrecoverable defect, which is inappropriate for a user-recoverable "not found" scenario. The RPC client will receive an opaque crash instead of a structured error it can handle gracefully.
🐛 Proposed fix: Return a typed error
```diff
 Effect.flatMap((exec) =>
   exec
     ? Effect.succeed({
         // ... existing mapping ...
       })
-    : Effect.die(new Error("Execution not found")),
+    : Effect.fail(new Error("Execution not found")),
 ),
```
Consider defining a dedicated `PipelineExecutionNotFoundError` schema for typed error handling on the client.
apps/server/src/orchestration/projector.ts-667-714 (1)
667-714: ⚠️ Potential issue | 🟠 Major
Add proposedPlans retention to the branched-from-checkpoint handler, consistent with thread.reverted.
The `thread.reverted` handler retains `proposedPlans` filtered by `retainedTurnIds` (lines 604-607), but the `thread.branched-from-checkpoint` handler skips `proposedPlans` entirely despite using the same `retainedTurnIds` logic for messages and activities. Both operations filter data by checkpoint turn count and should retain proposed plans up to that point.
Add the following to the branching handler's thread update:
```ts
const retainedProposedPlans = retainThreadProposedPlansAfterRevert(
  sourceThread.proposedPlans,
  retainedTurnIds,
).slice(-200);
```
And include `proposedPlans: retainedProposedPlans` in the thread update object.
apps/web/src/presenceStore.ts-67-70 (1)
67-70: ⚠️ Potential issue | 🟠 Major
Don't clear the whole thread on one `leave()`.
This drops `participantsByThread[input.threadId]` entirely. If other participants are still present, the UI shows the thread as empty until the next refresh or stream event. Remove only the leaving participant, or wait for `presence.left` before mutating local state.
apps/server/src/workflow/Services/WorkflowService.ts-186-205 (1)
186-205: ⚠️ Potential issue | 🟠 Major
Reject missing required workflow variables before execution.
`substituteVariables()` only fills provided values plus non-null defaults. Required variables with `defaultValue: null` are left as raw `{{VAR_NAME}}` tokens, so `execute()` can build pipeline prompts with unresolved placeholders instead of failing fast. Validate required variables before stage creation.
Also applies to: 327-330
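A sketch of the fail-fast approach (types and signature are assumptions modelled on the finding, not the service's actual code): treat a variable with `defaultValue: null` as required, and reject the call before any substitution happens.

```typescript
// Illustrative sketch: validate required variables up front so no raw
// {{VAR_NAME}} token can survive into a dispatched pipeline prompt.
type WorkflowVariable = { name: string; defaultValue: string | null };

function substituteVariables(
  template: string,
  declared: WorkflowVariable[],
  provided: Record<string, string>,
): string {
  // A variable is required when it has no default; reject if unprovided.
  const missing = declared
    .filter((v) => v.defaultValue === null && !(v.name in provided))
    .map((v) => v.name);
  if (missing.length > 0) {
    throw new Error(
      `Missing required workflow variables: ${missing.join(", ")}`,
    );
  }
  // Safe to substitute: every placeholder now has a value or a default.
  return declared.reduce(
    (text, v) =>
      text
        .split(`{{${v.name}}}`)
        .join(provided[v.name] ?? v.defaultValue ?? ""),
    template,
  );
}
```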
apps/server/src/workflow/Services/WorkflowService.ts-304-310 (1)
304-310: ⚠️ Potential issue | 🟠 Major
Deletion should fail when nothing was removed.
This silently succeeds for built-in or missing templates. The client can remove the template locally even though the row still exists and will come back on the next refetch. Check the affected row count and surface a typed error when deletion is rejected.
apps/server/src/presence/Services/PresenceService.ts-87-120 (1)
87-120: ⚠️ Potential issue | 🟠 Major
Scope leave and cursor updates to a single participant.
Both methods iterate the entire thread map. A single leave removes everybody from the session, and one cursor update overwrites every participant’s cursor state for that thread.
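A participant-scoped sketch of both operations over a `Map` keyed by participant id (the data shape is assumed from the finding; events are returned rather than published to keep the example self-contained):

```typescript
// Illustrative sketch: mutate exactly one participant per call.
type Participant = { cursor?: string; lastSeenAt: number };
type ThreadPresence = Map<string, Participant>;

function leave(
  threads: Map<string, ThreadPresence>,
  threadId: string,
  participantId: string,
): { type: "presence.left"; threadId: string; participantId: string } | null {
  const thread = threads.get(threadId);
  // Unknown thread or participant: nothing to do, emit nothing.
  if (!thread?.delete(participantId)) return null;
  return { type: "presence.left", threadId, participantId };
}

function updateCursor(
  threads: Map<string, ThreadPresence>,
  threadId: string,
  participantId: string,
  cursor: string,
  now: number,
): boolean {
  const entry = threads.get(threadId)?.get(participantId);
  if (!entry) return false; // silently ignore unknown participants
  entry.cursor = cursor;
  entry.lastSeenAt = now;
  return true;
}
```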
apps/web/src/components/settings/FeaturesPanels.tsx-255-260 (1)
255-260: ⚠️ Potential issue | 🟠 Major
Fetch CI state on mount instead of relying only on the stream.
This panel only opens the live subscription. On a cold load, `runsByProject` stays empty until some future CI event arrives, so existing runs never render when the user first opens the page.
apps/web/src/components/settings/FeaturesPanels.tsx-559-566 (1)
559-566: ⚠️ Potential issue | 🟠 Major
Memory entries are never loaded here.
The `useEffect` is a no-op, and this panel never calls a list/search action. Direct navigation to the Features page will therefore show an empty memory view even when persisted entries already exist.
apps/server/src/pipeline/Services/PipelineService.ts-112-129 (1)
112-129: ⚠️ Potential issue | 🟠 Major
Reject cycles and unknown dependencies before execution.
This DFS only tracks `visited`, so a cycle like `A -> B -> A` is accepted, and missing dependency IDs are silently ignored. That lets impossible pipelines execute in an arbitrary order instead of failing fast when the definition is invalid.
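A self-contained sketch of the fix (the `Stage` shape is assumed from the finding): a DFS topological sort with the classic three-state marking, where an `inProgress` set catches cycles and a lookup failure surfaces unknown dependencies.

```typescript
// Illustrative sketch: topological sort that fails fast on invalid graphs.
type Stage = { id: string; dependsOn: string[] };

function topologicalSort(stages: Stage[]): Stage[] {
  const stageMap = new Map(stages.map((s) => [s.id, s]));
  const done = new Set<string>(); // fully processed
  const inProgress = new Set<string>(); // on the current DFS path
  const result: Stage[] = [];

  const visit = (stage: Stage): void => {
    if (done.has(stage.id)) return;
    if (inProgress.has(stage.id)) {
      throw new Error(`Cycle detected involving stage "${stage.id}"`);
    }
    inProgress.add(stage.id);
    for (const depId of stage.dependsOn) {
      const dep = stageMap.get(depId);
      if (!dep) {
        throw new Error(
          `Stage "${stage.id}" depends on unknown stage "${depId}"`,
        );
      }
      visit(dep);
    }
    inProgress.delete(stage.id);
    done.add(stage.id);
    result.push(stage); // all dependencies already emitted
  };

  for (const stage of stages) visit(stage);
  return result;
}
```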
packages/contracts/src/costTracking.ts-70-80 (1)
70-80: ⚠️ Potential issue | 🟠 Major
Tighten the budget invariants at the contract boundary.
These shapes currently allow `limitCents = 0`, `periodDays = 0`, and threshold values above 100. That admits budgets the service cannot handle cleanly: a zero limit feeds divide-by-zero alert math, and a zero-day period has no valid rollover semantics. Please make the limit/period strictly positive and cap `alertThresholdPercent` to `0..100`.
Also applies to: 101-108
apps/web/src/components/settings/FeaturesPanels.tsx-377-385 (1)
377-385: ⚠️ Potential issue | 🟠 Major
Pipelines panel has no bootstrap path for existing data.
`allPipelines`/`allExecutions` are derived entirely from store state, but this component never loads any initial pipeline data. After a refresh, the panel will stay empty unless some other screen already populated the store or a new event happens while this tab is open.
apps/server/src/cost/Services/CostTrackingService.ts-107-123 (1)
107-123: ⚠️ Potential issue | 🟠 Major
Budget periods are ignored, and editing a budget wipes its spend.
`recordUsage` only ever increments `current_spend_cents`, so `periodDays` never actually influences alerting. Then `setBudget` does `INSERT OR REPLACE` with `current_spend_cents = 0` and a fresh `created_at`, which resets any existing budget whenever it is edited. Period-based budgets will behave like lifetime counters until they are changed, then suddenly drop back to zero.
Also applies to: 201-218
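A sketch of period-aware spend accounting (field names like `periodStart` are assumptions; the finding suggests adding such a column): before applying new usage, roll the window forward when `periodDays` have elapsed, resetting the counter instead of accumulating forever.

```typescript
// Illustrative sketch: reset the spend counter when the budget's period
// has rolled over, keeping periodStart aligned to window boundaries.
type Budget = {
  limitCents: number;
  periodDays: number;
  periodStart: number; // epoch ms of the current window's start
  currentSpendCents: number;
};

const DAY_MS = 24 * 60 * 60 * 1000;

function recordUsage(budget: Budget, costCents: number, now: number): Budget {
  const periodMs = budget.periodDays * DAY_MS;
  let { periodStart, currentSpendCents } = budget;
  if (now - periodStart >= periodMs) {
    // Advance to the window containing `now` and reset the spend.
    const elapsedPeriods = Math.floor((now - periodStart) / periodMs);
    periodStart += elapsedPeriods * periodMs;
    currentSpendCents = 0;
  }
  return {
    ...budget,
    periodStart,
    currentSpendCents: currentSpendCents + costCents,
  };
}
```

The same rollover check belongs in the persistence layer's upsert (e.g. `ON CONFLICT DO UPDATE` preserving the existing spend) so that editing a budget does not reset it.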
apps/server/src/presence/Services/PresenceService.ts-68-85 (1)
68-85: ⚠️ Potential issue | 🟠 Major
Honor share visibility and capacity when someone joins.
`join()` unconditionally inserts a participant into the thread map. It never checks whether the thread is shared, whether the share is public, or whether `maxParticipants` has already been reached, so private or full sessions can still be joined.
apps/web/src/components/settings/FeaturesPanels.tsx-626-633 (1)
626-633: ⚠️ Potential issue | 🟠 Major
Presence can only show future activity, not already-active sessions.
This panel seeds `threadIds` from local store state and only subscribes to new events. After a reload, any existing shared sessions remain invisible until another presence event happens, so the "Active Sessions" view is stale by default.
apps/server/src/pipeline/Services/PipelineService.ts-182-290 (1)
182-290: ⚠️ Potential issue | 🟠 Major
`execute()` reports successful stage execution without running anything.
Each stage flips from `running` to `completed` in the same call, and `threadId` is never used. Consumers will see completed pipelines even though no stage prompt was dispatched or awaited, which makes the execution status misleading.
apps/web/src/pipelineStore.ts-134-158 (1)
134-158: ⚠️ Potential issue | 🟠 Major
The stream handler is wired to the wrong event contract.
The server publishes `stage.*`/`pipeline.*` events, but this code only handles `"pipeline.execution.updated"` and `"pipeline.stage.updated"` and expects `event.execution`/`event.stage` payloads. As written, every real pipeline event is ignored, so live execution updates never reach the store.
packages/contracts/src/rpc.ts-559-563 (1)
559-563: ⚠️ Potential issue | 🟠 Major
Allow `getExecution` to represent "not found."
`PipelineService.getExecution()` returns `PipelineExecution | null`, but this RPC only accepts `PipelineExecution`. A miss will decode as a protocol error instead of a clean not-found response.
Suggested contract fix
```diff
 export const WsPipelineGetExecutionRpc = Rpc.make(WS_METHODS.pipelineGetExecution, {
   payload: PipelineGetExecutionInput,
-  success: PipelineExecution,
+  success: Schema.NullOr(PipelineExecution),
   error: PipelineError,
 });
```
🟡 Minor comments (6)
apps/server/src/ws.ts-840-842 (1)
840-842: ⚠️ Potential issue | 🟡 Minor
Unsafe `JSON.parse` without error handling.
The `stagesJson` field is cast via `as unknown as { stagesJson: string }` and parsed without try/catch. If the persisted JSON is malformed, this will throw and crash the RPC handler.
🛡️ Proposed defensive fix
```diff
-  stages: JSON.parse(
-    (def as unknown as { stagesJson: string }).stagesJson ?? "[]",
-  ) as import("@t3tools/contracts").PipelineStage[],
+  stages: (() => {
+    try {
+      return JSON.parse(
+        (def as unknown as { stagesJson: string }).stagesJson ?? "[]",
+      ) as import("@t3tools/contracts").PipelineStage[];
+    } catch {
+      return [];
+    }
+  })(),
```
Alternatively, consider using Effect's `Schema.decode` for validated parsing.
apps/web/src/auditStore.ts-30-44 (1)
30-44: ⚠️ Potential issue | 🟡 Minor

Type cast bypasses partial input validation.

The `query` method accepts `Partial<AuditQueryInput>` but casts to `AuditQueryInput` (Line 34). If the RPC endpoint expects certain required fields, passing an empty object `{}` could cause server-side errors.

🛡️ Suggested approach

Either:

- Ensure the server's `auditQuery` handler accepts partial input with defaults, or
- Provide client-side defaults:

```diff
   query: async (input = {}) => {
     set({ isLoading: true, error: null });
     try {
       const client = getWsRpcClient();
-      const result: AuditQueryResult = await client.audit.query(input as AuditQueryInput);
+      const result: AuditQueryResult = await client.audit.query({
+        limit: 100,
+        offset: 0,
+        ...input,
+      });
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/web/src/auditStore.ts` around lines 30 - 44, the query function currently casts a Partial<AuditQueryInput> to AuditQueryInput before calling client.audit.query, which can send missing required fields; instead, validate and merge client-side defaults for required fields (or change the call to accept Partial if the server supports it). Concretely, in the query function, create/use a defaultAuditQueryInput and merge it with the incoming input (or validate required properties and throw/return early), then pass the merged/validated object to client.audit.query; reference the query method, AuditQueryInput/AuditQueryResult types, and getWsRpcClient()/client.audit.query to locate where to add the merge/validation. Ensure set({ error: ... }) still handles validation failures before invoking the RPC.

apps/web/src/costStore.ts-52-57 (1)
52-57: ⚠️ Potential issue | 🟡 Minor

Budget stream updates can break project-scoped views.

`fetchBudgets(projectId)` can populate a filtered list, but the event handler always merges any incoming `event.budget` into the single `budgets` array. After subscribing, a project-specific view can start showing budgets from other projects. Key budgets by project or track the active filter before applying `cost.budget.updated`.

Also applies to: 78-83

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/web/src/costStore.ts` around lines 52 - 57, fetchBudgets is populating a project-scoped list but the cost.budget.updated event handler blindly merges event.budget into the single budgets array, which can introduce budgets from other projects; update the handler(s) referenced alongside fetchBudgets (the cost.budget.updated subscriber around lines 78-83) to respect the active project filter by either keying stored budgets by projectId or checking that event.budget.projectId matches the current projectId before merging/replacing, and when subscribing from fetchBudgets pass and store the projectId filter so the handler can ignore updates for other projects.

apps/server/src/task/Services/TaskDecompositionService.ts-57-57 (1)
57-57: ⚠️ Potential issue | 🟡 Minor

Dead code: ternary condition always produces the same result.

The expression `order > 0 && !isNumbered ? [] : []` evaluates to an empty array in both branches, making the condition meaningless. If this is intentional placeholder logic, consider simplifying to just `[]`. If dependencies were intended to be populated based on this logic, the implementation is incomplete.

Proposed fix

```diff
-      dependsOn: order > 0 && !isNumbered ? [] : [],
+      dependsOn: [],
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/server/src/task/Services/TaskDecompositionService.ts` at line 57, the dependsOn assignment in TaskDecompositionService is dead code because the ternary order > 0 && !isNumbered ? [] : [] always yields an empty array; remove the needless condition and set dependsOn to [] directly (or implement the intended dependency logic if tasks should populate dependencies based on order and isNumbered). Locate the dependsOn property usage in the TaskDecompositionService method where order and isNumbered are in scope and either replace the ternary with a plain [] or replace it with the correct array-building logic that uses order and isNumbered to produce dependencies.

apps/server/src/task/Services/TaskDecompositionService.ts-165-170 (1)
165-170: ⚠️ Potential issue | 🟡 Minor

Potential undefined access: `updatedNode` may be undefined.

If `input.taskId` does not match any task in `updatedTasks`, `find` returns `undefined`, but the non-null assertion (`!`) suppresses the type error. The subsequent `PubSub.publish` would then emit an event with `node: undefined`, which could cause downstream issues.

Proposed fix: validate task exists before updating

```diff
-      const updatedNode = updatedTasks.find((t) => t.id === input.taskId)!;
-      yield* PubSub.publish(pubsub, {
-        type: "task.node.updated" as const,
-        treeId: input.treeId,
-        node: updatedNode,
-      });
+      const updatedNode = updatedTasks.find((t) => t.id === input.taskId);
+      if (!updatedNode) {
+        return yield* Effect.die(new Error(`Task ${input.taskId} not found in tree ${input.treeId}`));
+      }
+      yield* PubSub.publish(pubsub, {
+        type: "task.node.updated" as const,
+        treeId: input.treeId,
+        node: updatedNode,
+      });
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/server/src/task/Services/TaskDecompositionService.ts` around lines 165 - 170, the code assumes updatedNode exists by using a non-null assertion on updatedTasks.find(...); change this to check for undefined (e.g., const updatedNode = updatedTasks.find(t => t.id === input.taskId); if (!updatedNode) { /* handle */ }) before calling PubSub.publish so you never publish node: undefined; handle the missing case by throwing a clear error, returning early, or logging and skipping the publish (use the same PubSub.publish call only when updatedNode is defined) and keep references to updatedTasks, updatedNode, PubSub.publish, and input.taskId to locate the fix.

apps/server/src/routing/Services/ProviderRouterService.ts-150-169 (1)
150-169: ⚠️ Potential issue | 🟡 Minor

Default provider selection may choose an unhealthy provider.

When no matching rule exists or preferred providers are unavailable, the code defaults to `"claudeAgent"` (line 154) without checking if it's healthy. The `alternatives` are filtered for health (line 168), but `selected` remains the hardcoded default.

Proposed fix: verify default provider health

```diff
     let selected: ProviderKind = "claudeAgent";
     let reason = "default";
     let alternatives: ProviderKind[] = [];

     if (matchingRule && matchingRule.preferredProviders.length > 0) {
       const preferred = matchingRule.preferredProviders.filter(
         (p) =>
           healthyProviders.includes(p) &&
           !matchingRule.excludedProviders.includes(p),
       );
       if (preferred.length > 0) {
         selected = preferred[0]!;
         alternatives = preferred.slice(1);
         reason = `rule:${matchingRule.name}:${matchingRule.strategy}`;
       }
     } else {
-      alternatives = healthyProviders.filter((p) => p !== selected);
+      // Ensure default is healthy, otherwise pick first healthy provider
+      if (!healthyProviders.includes(selected)) {
+        selected = healthyProviders[0] ?? "claudeAgent";
+        reason = "fallback-healthy";
+      }
+      alternatives = healthyProviders.filter((p) => p !== selected);
     }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/server/src/routing/Services/ProviderRouterService.ts` around lines 150 - 169, the code currently sets selected: ProviderKind = "claudeAgent" without verifying health; update the selection logic in ProviderRouterService (use healthMap / healthyProviders / matchingRule / preferredProviders) so that when there is no matchingRule or no preferred healthy providers, you check whether the default ("claudeAgent") is present in healthyProviders and if not pick the first healthyProviders[0] (or leave selected undefined/null and set reason to "no-healthy-provider" if none exist); also recompute alternatives from healthyProviders excluding the final selected and update reason accordingly (e.g., "default:healthy" or "default:unhealthy->fallback").
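The fallback logic in that suggestion reduces to a small pure function. A minimal sketch in plain TypeScript, assuming only string provider names; the name `pickProvider` and the reason strings are illustrative, not from the service:

```typescript
// Hypothetical health-aware selection: keep the default only when it is
// healthy, otherwise fall back to the first healthy provider; alternatives
// always exclude the final pick.
function pickProvider(
  healthy: string[],
  preferredDefault = "claudeAgent",
): { selected: string; reason: string; alternatives: string[] } {
  const defaultIsHealthy = healthy.includes(preferredDefault);
  const selected = defaultIsHealthy
    ? preferredDefault
    : healthy[0] ?? preferredDefault; // nothing healthy: keep default, caller decides
  const reason = defaultIsHealthy ? "default" : "fallback-healthy";
  return { selected, reason, alternatives: healthy.filter((p) => p !== selected) };
}
```

With this shape, the "default may be unhealthy" branch in the diff above becomes a single call site, which is easier to test in isolation.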
🧹 Nitpick comments (8)
apps/server/src/persistence/Migrations/020_NewFeatureTables.ts (2)
256-270: Redundant index on `thread_id`.

The `session_shares` table already has a `UNIQUE` constraint on `thread_id` (Line 259), which implicitly creates an index in SQLite. The explicit `idx_session_shares_thread` index (Lines 268-270) is redundant and wastes storage.

♻️ Remove redundant index

```diff
   yield* sql`
     CREATE TABLE IF NOT EXISTS session_shares (
       id TEXT PRIMARY KEY,
       thread_id TEXT NOT NULL UNIQUE,
       owner_id TEXT NOT NULL,
       max_participants INTEGER NOT NULL DEFAULT 10,
       is_public INTEGER NOT NULL DEFAULT 0,
       created_at TEXT NOT NULL,
       expires_at TEXT
     )
   `;
-
-  yield* sql`
-    CREATE INDEX IF NOT EXISTS idx_session_shares_thread ON session_shares(thread_id)
-  `;
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/server/src/persistence/Migrations/020_NewFeatureTables.ts` around lines 256 - 270, the CREATE INDEX idx_session_shares_thread on session_shares(thread_id) is redundant because thread_id already has a UNIQUE constraint in the session_shares table; remove the explicit index creation (the SQL block that creates idx_session_shares_thread) so only the table definition with UNIQUE(thread_id) remains, keeping the session_shares table and its UNIQUE constraint intact.
1-286: Consider adding composite indexes for common query patterns.

The individual column indexes are good, but if queries commonly filter on multiple columns (e.g., `project_id + created_at` for cost_entries, or `project_id + timestamp` for audit_log), composite indexes would improve performance.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/server/src/persistence/Migrations/020_NewFeatureTables.ts` around lines 1 - 286, add composite indexes for common multi-column filters: create new CREATE INDEX IF NOT EXISTS statements (placed near the existing index blocks) such as idx_cost_entries_project_created on cost_entries(project_id, created_at) and idx_audit_log_project_timestamp on audit_log(project_id, timestamp) to speed project+time queries; follow the same pattern for other tables you often filter by multiple columns (e.g., ci_runs(project_id, branch) or pipeline_executions(pipeline_id, project_id)) using unique index names like idx_ci_runs_project_branch and idx_pipeline_exec_pipeline_project. Ensure each is added using yield* sql`CREATE INDEX IF NOT EXISTS ...` alongside the other CREATE INDEX statements in this migration (refer to cost_entries, audit_log, ci_runs, pipeline_executions and their existing idx_* names).

apps/server/src/server.ts (1)
296-302: Acceptable workaround for TypeScript inference limitation.

The `as unknown as` cast is a pragmatic solution for the documented TS limitation with deep `Layer.provideMerge` chains. The comment clearly explains the rationale. Consider adding a follow-up issue to revisit if Effect updates its Layer inference.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/server/src/server.ts` around lines 296 - 302, the cast workaround on runServer (export const runServer = Layer.launch(makeServerLayer) as unknown as Effect.Effect<never, any, ServerConfig>) is acceptable per the comment, but please add a brief TODO or create a follow-up issue referencing this TypeScript inference limitation so we can revisit when Effect/Layer typing improves; update the comment near runServer to include a TODO tag and optionally an issue ID or link for future tracking.

packages/contracts/src/auditLog.ts (1)
64-65: Bound `limit` at the contract edge.

`NonNegativeInt` alone lets a caller request arbitrarily large audit pages. Adding an upper bound here keeps the RPC surface safer and avoids expensive queries leaking through to the service layer.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/contracts/src/auditLog.ts` around lines 64 - 65, the limit field currently uses NonNegativeInt, allowing unbounded large requests; change the contract to enforce an upper bound at the edge by replacing the type for limit with a bounded int schema (e.g., a NonNegativeInt piped/combined with a max constraint) and keep the decoding default to the desired page size (currently 100); update the line that defines limit (the symbol "limit" using NonNegativeInt.pipe(Schema.withDecodingDefault(...))) to use the bounded schema (e.g., "NonNegativeInt.pipe(Schema.withMax(X)).pipe(Schema.withDecodingDefault(() => 100))" or a named BoundedLimit schema) while leaving offset as-is. Ensure the chosen max value is documented in the schema comment or name.

apps/web/src/memoryStore.ts (1)
114-114: Avoid `as any` type assertion; use the proper branded type.

The `projectId as any` cast bypasses type safety. The `projectId` parameter should be typed as `ProjectId` from `@t3tools/contracts` to ensure type consistency with the RPC client.

Proposed fix

```diff
+import type { ProjectId } from "@t3tools/contracts";
+
 export interface MemoryStore extends MemoryState {
   // ...
-  index: (projectId: string, forceReindex?: boolean) => Promise<void>;
+  index: (projectId: ProjectId, forceReindex?: boolean) => Promise<void>;
```

Then remove the cast:

```diff
-      await client.memory.index({ projectId: projectId as any, forceReindex });
+      await client.memory.index({ projectId, forceReindex });
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/web/src/memoryStore.ts` at line 114, the call to client.memory.index currently uses a type-escaping cast (`projectId as any`); change the parameter that supplies projectId to be typed as the branded ProjectId from `@t3tools/contracts` (import ProjectId) and remove the `as any` cast in the call to client.memory.index so you pass `projectId` with its proper type; update the function/method signature that declares projectId (and any callers) to use ProjectId to maintain type safety.

apps/server/src/ci/Services/CIIntegrationService.ts (1)
107-123: `Effect.fail` wrapped by `Effect.orDie` converts expected errors to defects.

In `triggerRerun`, `Effect.fail(new Error(...))` on line 113 is wrapped by `Effect.orDie` on line 123. This converts an expected business error ("CI run not found") into an unrecoverable defect. Consider either:

- Remove `Effect.orDie` and properly type the error in the service shape
- Use `Effect.die` directly if this should truly be a defect

Option 1: Use Effect.die for defect semantics

```diff
-      if (!run) return yield* Effect.fail(new Error(`CI run ${input.runId} not found`));
+      if (!run) return yield* Effect.die(new Error(`CI run ${input.runId} not found`));
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/server/src/ci/Services/CIIntegrationService.ts` around lines 107 - 123, the triggerRerun implementation uses Effect.fail(new Error(...)) to signal a missing CI run but then unconditionally calls .pipe(Effect.orDie), which turns that expected business error into a defect; fix by either (A) removing Effect.orDie and updating CIIntegrationServiceShape["triggerRerun"] to reflect a failure channel (i.e., return Effect<Error, RequeuedRun> or similar) so callers can handle the "CI run not found" error, or (B) if this truly should be unrecoverable, replacing Effect.fail(new Error(...)) with Effect.die(new Error(...)) to intentionally produce a defect; locate the logic in triggerRerun and adjust either the error-channel typing on the service shape or the failing primitive accordingly.

packages/contracts/src/routing.ts (1)
46-55: Reject rules that both prefer and exclude the same provider.

This schema currently accepts contradictory routing rules. Once persisted, the router has to guess whether exclusion wins or whether no candidates remain. Add a cross-field refinement here so invalid rules are rejected at the contract boundary.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/contracts/src/routing.ts` around lines 46 - 55, the RoutingRule schema currently allows preferredProviders and excludedProviders to contain the same ProviderKind values; add a cross-field refinement on RoutingRule to reject any object where the intersection of preferredProviders and excludedProviders is non-empty. Update the schema construction for RoutingRule to apply a refinement (e.g., using Schema.Refinement/Schema.Pipe or the library's cross-field validation utility) that inspects preferredProviders and excludedProviders and returns a validation error if any provider appears in both arrays, referencing the RoutingRule, preferredProviders, and excludedProviders symbols so the check is applied at the contract boundary.

apps/web/src/wsRpcClient.ts (1)
288-389: Consider a tiny factory for these transport wrappers.

Most of this block is mechanical `transport.request`/`transport.subscribe` glue. Pulling that into a helper would reduce the copy-paste surface and make future RPC additions less drift-prone.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/web/src/wsRpcClient.ts` around lines 288 - 389, extract the repeated transport.request/transport.subscribe patterns into small helpers and replace the inline wrappers with calls to them: add helpers like makeRequest = (methodName) => (input) => transport.request(client => client[methodName](input)) and makeSubscribe = (methodName) => (listener, options) => transport.subscribe(client => client[methodName]({}), listener, options), then refactor each namespace (e.g., cost, audit, ci, routing, pipeline, workflow, task, memory, presence) to use makeRequest(WS_METHODS.xxx) for request methods and makeSubscribe(WS_METHODS.subscribeXxx) for onEvent to eliminate duplication and centralize the glue logic.
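One possible shape for such a helper, sketched against an assumed minimal `Transport` interface; the real wsRpcClient transport and method names differ, so treat `makeRequest` and `cost.getSummary` here as illustrative:

```typescript
// Assumed minimal transport surface; the actual client's transport is richer.
interface Transport {
  request(method: string, input: unknown): Promise<unknown>;
}

// Factory producing a typed request wrapper for one RPC method, so each
// service namespace becomes one-liners instead of repeated glue code.
function makeRequest<I, O>(transport: Transport, method: string) {
  return (input: I): Promise<O> => transport.request(method, input) as Promise<O>;
}

// Example wiring for a hypothetical "cost.getSummary" method:
function makeCostNamespace(transport: Transport) {
  return {
    getSummary: makeRequest<{ projectId: string }, { totalCents: number }>(
      transport,
      "cost.getSummary",
    ),
  };
}
```

Adding a new RPC then means one `makeRequest` line per method, and a single place to change if the transport's calling convention evolves.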
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3e0f5739-ab45-4e65-9fd3-973f9954fda0
📒 Files selected for processing (46)
apps/server/src/audit/Services/AuditLogService.ts
apps/server/src/ci/Services/CIIntegrationService.ts
apps/server/src/cost/Services/CostTrackingService.ts
apps/server/src/memory/Services/ProjectMemoryService.ts
apps/server/src/orchestration/Schemas.ts
apps/server/src/orchestration/decider.ts
apps/server/src/orchestration/projector.ts
apps/server/src/persistence/Migrations.ts
apps/server/src/persistence/Migrations/020_NewFeatureTables.ts
apps/server/src/pipeline/Services/PipelineService.ts
apps/server/src/presence/Services/PresenceService.ts
apps/server/src/routing/Services/ProviderRouterService.ts
apps/server/src/server.test.ts
apps/server/src/server.ts
apps/server/src/task/Services/TaskDecompositionService.ts
apps/server/src/workflow/Services/WorkflowService.ts
apps/server/src/ws.ts
apps/web/src/appSettings.ts
apps/web/src/auditStore.ts
apps/web/src/ciStore.ts
apps/web/src/components/ChatView.tsx
apps/web/src/components/chat/ComposerQueuedFollowUpsPanel.tsx
apps/web/src/components/settings/FeaturesPanels.tsx
apps/web/src/components/settings/SettingsSidebarNav.tsx
apps/web/src/costStore.ts
apps/web/src/memoryStore.ts
apps/web/src/pipelineStore.ts
apps/web/src/presenceStore.ts
apps/web/src/routeTree.gen.ts
apps/web/src/routes/settings.features.tsx
apps/web/src/routingStore.ts
apps/web/src/taskStore.ts
apps/web/src/workflowStore.ts
apps/web/src/wsRpcClient.ts
packages/contracts/src/auditLog.ts
packages/contracts/src/ciIntegration.ts
packages/contracts/src/costTracking.ts
packages/contracts/src/index.ts
packages/contracts/src/orchestration.ts
packages/contracts/src/pipelines.ts
packages/contracts/src/presence.ts
packages/contracts/src/projectMemory.ts
packages/contracts/src/routing.ts
packages/contracts/src/rpc.ts
packages/contracts/src/taskDecomposition.ts
packages/contracts/src/workflows.ts
```ts
export const MemoryForgetInput = Schema.Struct({
  entryId: MemoryEntryId,
});
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
# Expectation:
# - the WS handler forwards enough context to scope the delete, and/or
# - the service resolves the row by entryId and verifies its projectId before deleting.
rg -n -C4 'memoryForget|forget\(|projectId|entryId|DELETE' \
  apps/server/src/ws.ts \
  apps/server/src/memory/Services/ProjectMemoryService.ts
```

Repository: Josh-wt/t3code

Length of output: 16972
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
# Check if there's authorization/context middleware that validates projectId
rg -n "projectId|context|auth|Permission" apps/server/src/ws.ts \
  | head -50
# Check the database schema for memory_entries
fd -e "sql" -e "ts" -e "js" | xargs rg -l "memory_entries\|CREATE TABLE" | head -10
```
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
# Find database migrations or schema definitions
find . -type f \( -name "*migration*" -o -name "*schema*" \) \
  | grep -i -E "(memory|db|sql)" | head -20
```
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
# Check MemoryForgetInput definition in contracts
fd "projectMemory" -e ts | xargs rg -l "MemoryForgetInput" | head -5
# Then look at the actual input type definition
rg -n -A5 "MemoryForgetInput.*=" packages/contracts/src/projectMemory.ts
```
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
# Check if ws.ts has any context or projectId validation for the memory handlers
rg -n -B10 "WS_METHODS.memoryForget" apps/server/src/ws.ts | head -40
```
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
# Check MemoryAddInput to see if it requires projectId
rg -n -A10 "export const MemoryAddInput" packages/contracts/src/projectMemory.ts
# Check MemorySearchInput too
rg -n -A10 "export const MemorySearchInput" packages/contracts/src/projectMemory.ts
```
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
# Look for database schema/migrations mentioning memory_entries
fd -type f | xargs rg -l "memory_entries" | head -10
```
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
# Check the WS handler pattern - is there context/middleware for projectId validation?
# Look around the memoryForget handler for context
rg -n -B30 "WS_METHODS.memoryForget" apps/server/src/ws.ts | tail -50
```
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
# Check the migration for memory_entries table schema
cat apps/server/src/persistence/Migrations/020_NewFeatureTables.ts | head -150
```
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
# Look for database constraint on entryId - is it unique globally or scoped to project?
rg -n -A15 "CREATE TABLE.*memory_entries\|memory_entries.*project_id" apps/server/src/persistence/Migrations/020_NewFeatureTables.ts
```
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
# Find memory_entries in the migration file
cat apps/server/src/persistence/Migrations/020_NewFeatureTables.ts | grep -n -A20 "memory_entries"
```
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
# Get the full migration file to see memory_entries definition
wc -l apps/server/src/persistence/Migrations/020_NewFeatureTables.ts
# Then read from line that contains memory_entries onwards
tail -n +100 apps/server/src/persistence/Migrations/020_NewFeatureTables.ts | head -100
```
Add projectId validation to the MemoryForgetInput and verify project ownership before deletion.
MemoryForgetInput lacks projectId, making it the only memory mutation without project scoping. The forget operation deletes by entryId alone without verifying the entry belongs to the requesting project. Since entryId is globally unique in the database schema, a leaked or cross-project ID can delete another project's memory.
Update the input schema to require projectId and add a WHERE clause to the DELETE query: WHERE id = ${input.entryId} AND project_id = ${input.projectId}.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/contracts/src/projectMemory.ts` around lines 92 - 94,
MemoryForgetInput currently only includes MemoryEntryId which allows
cross-project deletes; add a required projectId (use the existing ProjectId
type) to the MemoryForgetInput schema so it becomes { entryId: MemoryEntryId,
projectId: ProjectId }, then update the forget/delete handler (the resolver or
function that performs the DELETE by entryId) to include and check project
ownership by changing the DELETE to include WHERE id = ${input.entryId} AND
project_id = ${input.projectId} (or perform an existence check that the entry
belongs to the provided project before deleting) so deletes are scoped to the
requesting project.
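The ownership-scoped delete described above can be sketched in plain TypeScript over a generic parameterized statement; the `Exec` signature and `forgetMemoryEntry` name stand in for the service's real `sql` tagged template and are assumptions, not the codebase's API:

```typescript
// Hypothetical executor: SQL text with "?" placeholders plus bound params,
// standing in for the service's real parameterized sql template.
type Exec = (text: string, params: unknown[]) => { changes: number };

function forgetMemoryEntry(
  exec: Exec,
  input: { entryId: string; projectId: string },
): boolean {
  // Scoping by project_id means a leaked entryId from another project
  // matches zero rows instead of deleting foreign data.
  const result = exec(
    "DELETE FROM memory_entries WHERE id = ? AND project_id = ?",
    [input.entryId, input.projectId],
  );
  return result.changes > 0;
}
```

The boolean return also gives the handler a clean way to surface "not found or not yours" without revealing whether the entry exists in another project.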
19 issues found across 46 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="apps/web/src/costStore.ts">
<violation number="1" location="apps/web/src/costStore.ts:40">
P2: `isLoading` is shared across independent requests. If `fetchSummary` and `fetchBudgets` run at the same time, whichever finishes first clears `isLoading` even though the other call is still pending, so the UI can indicate loading is done prematurely. Consider separate loading flags or a request counter.</violation>
</file>
<file name="apps/server/src/audit/Services/AuditLogService.ts">
<violation number="1" location="apps/server/src/audit/Services/AuditLogService.ts:94">
P0: SQL injection in `query`: all filter values (`projectId`, `threadId`, `category`, `severity`, `actor`, timestamps) are string-interpolated into raw SQL passed to `sql.unsafe()`. An attacker-controlled input can escape the quoted string and execute arbitrary SQL. This is especially dangerous in an audit log service — an attacker could tamper with or exfiltrate the security audit trail.
Use parameterized queries (the `sql` tagged template) or build the query with proper parameter binding instead of string concatenation.</violation>
</file>
<file name="apps/web/src/presenceStore.ts">
<violation number="1" location="apps/web/src/presenceStore.ts:68">
P2: `leave` removes the entire thread’s participant list, so one user leaving clears all remaining participants for that thread. Remove only the leaving participant instead of deleting the whole thread entry.</violation>
</file>
<file name="apps/server/src/ci/Services/CIIntegrationService.ts">
<violation number="1" location="apps/server/src/ci/Services/CIIntegrationService.ts:109">
P1: `triggerRerun` ignores `projectId` and `failedOnly`, so any run ID can be re-queued even if it belongs to another project or isn’t failed. Filter by project and enforce the `failedOnly` guard before re-queuing.</violation>
</file>
<file name="apps/server/src/orchestration/projector.ts">
<violation number="1" location="apps/server/src/orchestration/projector.ts:700">
P2: The branch-from-checkpoint projection drops proposedPlans entirely. This will leave the new thread without any proposed plans associated with the retained turns, unlike the revert logic which preserves them.</violation>
</file>
<file name="apps/server/src/pipeline/Services/PipelineService.ts">
<violation number="1" location="apps/server/src/pipeline/Services/PipelineService.ts:244">
P2: Stages are marked `completed` immediately with no dispatch or wait for actual execution, so pipelines always finish instantly even if real work hasn’t run. Either dispatch the stage (or wait for an external executor) before setting `completed`, or leave the stage pending until real completion is reported.</violation>
</file>
<file name="apps/web/src/components/ChatView.tsx">
<violation number="1" location="apps/web/src/components/ChatView.tsx:3027">
P2: `forceQueue` currently queues even when no turn is running, which can leave the message stuck because auto-dispatch only happens on running→settled. Gate force-queued items on `phase === "running"` so idle sends go out immediately.</violation>
</file>
<file name="apps/server/src/orchestration/decider.ts">
<violation number="1" location="apps/server/src/orchestration/decider.ts:674">
P1: `thread.created` should be recorded under the new thread aggregate, not the source thread. Using `aggregateId: command.sourceThreadId` will attach the creation event to the wrong thread.</violation>
</file>
<file name="apps/server/src/ws.ts">
<violation number="1" location="apps/server/src/ws.ts:841">
P1: Use `def.stages` here instead of parsing a non-existent `stagesJson` field; the current mapping returns empty stage arrays for listed pipelines.</violation>
<violation number="2" location="apps/server/src/ws.ts:913">
P1: `Effect.die` is for unrecoverable defects, not expected "not found" cases. This will crash the fiber with an untyped error rather than returning a proper error response to the client. Use `Effect.fail` with a typed error instead.</violation>
<violation number="3" location="apps/server/src/ws.ts:932">
P2: Pipeline events are broadcast with hardcoded empty strings for `pipelineId`, `projectId`, and an empty `stages` array. Clients won't be able to correlate these events to actual pipelines or projects. Either enrich the event by looking up the execution, or extend `PipelineStreamEvent` to carry these fields.</violation>
</file>
<file name="apps/server/src/cost/Services/CostTrackingService.ts">
<violation number="1" location="apps/server/src/cost/Services/CostTrackingService.ts:122">
P2: Division by zero when `budget.limit_cents` is 0 produces `Infinity` or `NaN`, causing either spurious alerts on every call or silently skipped alerts. Guard against a zero limit before computing `percentUsed`.</violation>
<violation number="2" location="apps/server/src/cost/Services/CostTrackingService.ts:148">
P0: SQL injection: user-supplied `projectId`, `threadId`, `periodStart`, and `periodEnd` are string-interpolated into raw SQL passed to `sql.unsafe()`. Use the parameterized `sql` tagged template instead of building a raw WHERE clause. For conditional filters, you can compose queries with optional conditions or build separate branches.</violation>
<violation number="3" location="apps/server/src/cost/Services/CostTrackingService.ts:205">
P2: `INSERT OR REPLACE` resets `current_spend_cents` to 0 and `created_at` to now on every update. Editing a budget's threshold or limit silently discards all accumulated spend data. Use `INSERT … ON CONFLICT(id) DO UPDATE SET` to preserve `current_spend_cents` and `created_at`.</violation>
</file>
<file name="apps/server/src/memory/Services/ProjectMemoryService.ts">
<violation number="1" location="apps/server/src/memory/Services/ProjectMemoryService.ts:80">
P1: Avoid string-interpolating `input.kind` into the sql.unsafe query; parameterize it to prevent SQL injection in the search endpoint.</violation>
</file>
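Both SQL-injection findings come down to the same fix: build the query from a fixed set of placeholder conditions and pass values separately. A framework-agnostic sketch of the pattern (the query/params shape is an assumption for illustration, not the actual `@effect/sql` API):

```typescript
// Optional filters mirroring the service inputs named in the review.
interface EntryFilters {
  projectId?: string
  threadId?: string
  periodStart?: string
  periodEnd?: string
}

// Compose a WHERE clause from fixed fragments; user input only ever lands
// in the params array, never in the SQL text itself.
function buildEntriesQuery(filters: EntryFilters): { text: string; params: unknown[] } {
  const conditions: string[] = []
  const params: unknown[] = []
  if (filters.projectId !== undefined) {
    conditions.push("project_id = ?")
    params.push(filters.projectId)
  }
  if (filters.threadId !== undefined) {
    conditions.push("thread_id = ?")
    params.push(filters.threadId)
  }
  if (filters.periodStart !== undefined) {
    conditions.push("created_at >= ?")
    params.push(filters.periodStart)
  }
  if (filters.periodEnd !== undefined) {
    conditions.push("created_at <= ?")
    params.push(filters.periodEnd)
  }
  const where = conditions.length > 0 ? ` WHERE ${conditions.join(" AND ")}` : ""
  return { text: `SELECT * FROM cost_entries${where}`, params }
}
```

With a tagged-template client like `@effect/sql`, the same effect is achieved by composing `sql` fragments instead of calling `sql.unsafe` on an interpolated string.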
<file name="apps/server/src/presence/Services/PresenceService.ts">
<violation number="1" location="apps/server/src/presence/Services/PresenceService.ts:92">
P2: `leave` clears the entire thread instead of the caller. With only `threadId` in the input, any client leaving will evict all participants and emit left events for them.</violation>
<violation number="2" location="apps/server/src/presence/Services/PresenceService.ts:106">
P2: `updateCursor` updates every participant in the thread. One user’s cursor change will overwrite all other participants’ cursor states and broadcast updates for all of them.</violation>
</file>
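Both presence findings suggest the same shape of fix: scope the operation to a single participant, which implies the input needs a client/participant id alongside `threadId`. A sketch under that assumption:

```typescript
// In-memory presence keyed by threadId, then by clientId.
// The extra clientId parameter is the assumed addition to the input.
type ThreadPresence = Map<string, Map<string, { cursor: string | null }>>

// Evict only the caller, not the whole thread; returns whether anyone left.
function leave(presence: ThreadPresence, threadId: string, clientId: string): boolean {
  const participants = presence.get(threadId)
  if (participants === undefined) return false
  const removed = participants.delete(clientId)
  if (participants.size === 0) presence.delete(threadId)
  return removed
}

// Update one participant's cursor without touching the others.
function updateCursor(
  presence: ThreadPresence,
  threadId: string,
  clientId: string,
  cursor: string | null,
): boolean {
  const participant = presence.get(threadId)?.get(clientId)
  if (participant === undefined) return false
  participant.cursor = cursor
  return true
}
```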
<file name="apps/server/src/workflow/Services/WorkflowService.ts">
<violation number="1" location="apps/server/src/workflow/Services/WorkflowService.ts:248">
P2: Include `is_built_in` in the upsert update so built-in templates can’t remain user-deletable when an existing row conflicts.</violation>
</file>
<file name="apps/server/src/routing/Services/ProviderRouterService.ts">
<violation number="1" location="apps/server/src/routing/Services/ProviderRouterService.ts:148">
P2: Rule selection ignores task patterns and the provided taskHint, so task-specific routing rules never apply. Incorporate taskHint/taskPatterns when picking the matching rule.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
const triggerRerun: CIIntegrationServiceShape["triggerRerun"] = (input) =>
  Effect.gen(function* () {
    const rows = yield* sql<
P1: triggerRerun ignores projectId and failedOnly, so any run ID can be re-queued even if it belongs to another project or isn’t failed. Filter by project and enforce the failedOnly guard before re-queuing.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/server/src/ci/Services/CIIntegrationService.ts, line 109:
<comment>`triggerRerun` ignores `projectId` and `failedOnly`, so any run ID can be re-queued even if it belongs to another project or isn’t failed. Filter by project and enforce the `failedOnly` guard before re-queuing.</comment>
<file context>
@@ -0,0 +1,168 @@
+
+ const triggerRerun: CIIntegrationServiceShape["triggerRerun"] = (input) =>
+ Effect.gen(function* () {
+ const rows = yield* sql<
+ Record<string, unknown>
+ >`SELECT * FROM ci_runs WHERE id = ${input.runId}`;
</file context>
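The guard this comment asks for can be sketched as a filter over the fetched rows (in practice it would likely live in the SQL WHERE clause; the row shape and `failedOnly` semantics here are assumptions based on the comment):

```typescript
// Assumed row shape for ci_runs; column names follow the quoted query.
interface CiRunRow {
  id: string
  project_id: string
  status: string
}

// Only returns runs that belong to the caller's project and, when
// failedOnly is set, only runs that actually failed.
function selectRerunnable(
  rows: CiRunRow[],
  input: { runId: string; projectId: string; failedOnly: boolean },
): CiRunRow[] {
  return rows.filter(
    (r) =>
      r.id === input.runId &&
      r.project_id === input.projectId &&
      (!input.failedOnly || r.status === "failed"),
  )
}
```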
const branchedThreadEvent: Omit<OrchestrationEvent, "sequence"> = {
  ...withEventBase({
    aggregateKind: "thread",
    aggregateId: command.sourceThreadId,
P1: thread.created should be recorded under the new thread aggregate, not the source thread. Using aggregateId: command.sourceThreadId will attach the creation event to the wrong thread.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/server/src/orchestration/decider.ts, line 674:
<comment>`thread.created` should be recorded under the new thread aggregate, not the source thread. Using `aggregateId: command.sourceThreadId` will attach the creation event to the wrong thread.</comment>
<file context>
@@ -647,6 +647,67 @@ export const decideOrchestrationCommand = Effect.fn("decideOrchestrationCommand"
+ const branchedThreadEvent: Omit<OrchestrationEvent, "sequence"> = {
+ ...withEventBase({
+ aggregateKind: "thread",
+ aggregateId: command.sourceThreadId,
+ occurredAt: command.createdAt,
+ commandId: command.commandId,
</file context>
Suggested change:
- aggregateId: command.sourceThreadId,
+ aggregateId: command.newThreadId,
  completedAt: exec.completedAt,
  updatedAt: exec.updatedAt,
})
: Effect.die(new Error("Execution not found")),
P1: Effect.die is for unrecoverable defects, not expected "not found" cases. This will crash the fiber with an untyped error rather than returning a proper error response to the client. Use Effect.fail with a typed error instead.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/server/src/ws.ts, line 913:
<comment>`Effect.die` is for unrecoverable defects, not expected "not found" cases. This will crash the fiber with an untyped error rather than returning a proper error response to the client. Use `Effect.fail` with a typed error instead.</comment>
<file context>
@@ -708,6 +726,397 @@ const WsRpcLayer = WsRpcGroup.toLayer(
+ completedAt: exec.completedAt,
+ updatedAt: exec.updatedAt,
+ })
+ : Effect.die(new Error("Execution not found")),
+ ),
+ ),
</file context>
Effect.gen(function* () {
  const now = new Date().toISOString();

  yield* sql`INSERT OR REPLACE INTO cost_budgets (id, project_id, limit_cents, period_days, current_spend_cents, alert_threshold_percent, enabled, created_at, updated_at)
P2: INSERT OR REPLACE resets current_spend_cents to 0 and created_at to now on every update. Editing a budget's threshold or limit silently discards all accumulated spend data. Use INSERT … ON CONFLICT(id) DO UPDATE SET to preserve current_spend_cents and created_at.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/server/src/cost/Services/CostTrackingService.ts, line 205:
<comment>`INSERT OR REPLACE` resets `current_spend_cents` to 0 and `created_at` to now on every update. Editing a budget's threshold or limit silently discards all accumulated spend data. Use `INSERT … ON CONFLICT(id) DO UPDATE SET` to preserve `current_spend_cents` and `created_at`.</comment>
<file context>
@@ -0,0 +1,256 @@
+ Effect.gen(function* () {
+ const now = new Date().toISOString();
+
+ yield* sql`INSERT OR REPLACE INTO cost_budgets (id, project_id, limit_cents, period_days, current_spend_cents, alert_threshold_percent, enabled, created_at, updated_at)
+ VALUES (${input.budgetId}, ${input.projectId}, ${input.limitCents}, ${input.periodDays}, 0, ${input.alertThresholdPercent}, ${input.enabled ? 1 : 0}, ${now}, ${now})`;
+
</file context>
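A conflict-targeted upsert along these lines would preserve the accumulated spend and original creation time (a sketch only; column names are taken from the quoted query):

```sql
INSERT INTO cost_budgets (id, project_id, limit_cents, period_days, current_spend_cents, alert_threshold_percent, enabled, created_at, updated_at)
VALUES (?, ?, ?, ?, 0, ?, ?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
  project_id = excluded.project_id,
  limit_cents = excluded.limit_cents,
  period_days = excluded.period_days,
  alert_threshold_percent = excluded.alert_threshold_percent,
  enabled = excluded.enabled,
  updated_at = excluded.updated_at
-- current_spend_cents and created_at are intentionally absent from the
-- update list, so editing a budget keeps the existing spend and creation time.
```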
description = excluded.description,
category = excluded.category,
variables = excluded.variables,
steps = excluded.steps,
P2: Include is_built_in in the upsert update so built-in templates can’t remain user-deletable when an existing row conflicts.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/server/src/workflow/Services/WorkflowService.ts, line 248:
<comment>Include `is_built_in` in the upsert update so built-in templates can’t remain user-deletable when an existing row conflicts.</comment>
<file context>
@@ -0,0 +1,360 @@
+ description = excluded.description,
+ category = excluded.category,
+ variables = excluded.variables,
+ steps = excluded.steps,
+ updated_at = excluded.updated_at
+ `;
</file context>
const now = new Date().toISOString();

// Find first matching rule for this project
const matchingRule = rules.find((r) => r.projectId === null || r.projectId === projectId);
P2: Rule selection ignores task patterns and the provided taskHint, so task-specific routing rules never apply. Incorporate taskHint/taskPatterns when picking the matching rule.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/server/src/routing/Services/ProviderRouterService.ts, line 148:
<comment>Rule selection ignores task patterns and the provided taskHint, so task-specific routing rules never apply. Incorporate taskHint/taskPatterns when picking the matching rule.</comment>
<file context>
@@ -0,0 +1,216 @@
+ const now = new Date().toISOString();
+
+ // Find first matching rule for this project
+ const matchingRule = rules.find((r) => r.projectId === null || r.projectId === projectId);
+
+ const healthyProviders = Array.from(healthMap.values())
</file context>
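One way to fold `taskHint` into rule selection, sketched with an assumed rule shape (`taskPatterns` as plain substrings; the real matching semantics may differ):

```typescript
// Assumed rule shape; an empty taskPatterns list matches any task.
interface RoutingRule {
  projectId: string | null
  taskPatterns: string[]
}

// Pick the first rule whose project scope AND task patterns both match,
// so task-specific rules actually apply when a taskHint is provided.
function findMatchingRule(
  rules: RoutingRule[],
  projectId: string,
  taskHint?: string,
): RoutingRule | undefined {
  return rules.find((r) => {
    const projectMatches = r.projectId === null || r.projectId === projectId
    const taskMatches =
      r.taskPatterns.length === 0 ||
      (taskHint !== undefined && r.taskPatterns.some((p) => taskHint.includes(p)))
    return projectMatches && taskMatches
  })
}
```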
…-plan Parameterize SQL queries, add pagination guards, and refine presence/memory handling
Summary
ws.ts for all new services; contract schemas, client stores, and a Features settings panel on the web
9 New Services (Server + Client)
Queue/Steer Follow-up Behavior
Cmd/Ctrl+Shift+Enter inverts the active mode for one message
ComposerQueuedFollowUpsPanel shows queued items with per-item remove buttons
/settings/features
Cmd/Ctrl+Shift+Enter correctly bypasses queue (in queue mode) or force-queues (in steer mode)
bun fmt, bun lint, bun typecheck (web) all pass
https://claude.ai/code/session_01XF5adFusgTx5Fb1Qhc8MeC
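The queue/steer decision described above can be sketched as a small pure function (names are illustrative, not the actual store API):

```typescript
type FollowUpBehavior = "steer" | "queue"

// Decide whether a follow-up message should be held in the queue.
// invertModifier corresponds to the Cmd/Ctrl+Shift+Enter one-off override.
function shouldQueue(
  behavior: FollowUpBehavior,
  turnRunning: boolean,
  invertModifier: boolean,
): boolean {
  // When no turn is running there is nothing to hold back; send immediately.
  if (!turnRunning) return false
  const effective: FollowUpBehavior = invertModifier
    ? behavior === "queue"
      ? "steer"
      : "queue"
    : behavior
  return effective === "queue"
}
```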
Summary by cubic
Adds nine new feature services with SQLite persistence and WS RPC, a queue/steer follow‑up option in chat, a Features settings panel, and support for branching a thread from a checkpoint. Also hardens service queries and presence/memory handling.
New Features
020 and exposed over WS RPC.
/settings/features.
thread.branch-from-checkpoint command/event to clone a thread at a checkpoint.
Bug Fixes
Written for commit 55d5017. Summary will update on new commits.
Summary by CodeRabbit