ℹ️ Review info

⚙️ Run configuration: defaults | Review profile: CHILL | Plan: Pro | Run ID:

📒 Files selected for processing (1)
📝 Walkthrough

Adds CI and staged/production deployment workflows, introduces audit and GDPR data-deletion features with DB migrations, adds PII masking and secret-rotation utilities, injects prompt-injection blocking into the LLM pipeline, and adds many unit/integration/E2E/load tests plus test/config infra updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant GH as GitHub Actions
    participant Prepare as Prepare Job (build shared)
    participant Artifact as Artifact Store
    participant Lint as Lint/Typecheck/Build
    participant Test as Test Job
    participant Staging as Deploy — Staging
    participant Prod as Deploy — Production
    GH->>Prepare: workflow trigger
    Prepare->>Artifact: upload built shared package (artifact)
    Lint->>Artifact: download artifact, install, run lint/typecheck/build
    Test->>Artifact: download artifact, install, run unit/integration tests (uses secrets)
    Test->>GH: upload test results/artifacts
    GH->>Staging: trigger staging deploy workflow
    Staging->>Artifact: download build artifacts
    Staging->>Staging: deploy to staging -> smoke/e2e/observability checks
    alt staging gate passes
        GH->>Prod: trigger production deploy workflow (manual/auto)
        Prod->>Prod: pre-flight health check -> manual approval -> deploy canary
        Prod->>Prod: run canary soak validation (scripts/canary-validate.sh)
        alt canary passes
            Prod->>Prod: promote canary to 100% and post-promote checks
        else canary fails
            Prod->>Prod: automated rollback and fail workflow
        end
    else staging gate fails
        Staging->>GH: fail gate, block production promotion
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 8
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/shared/src/logger.js (1)
4-17: ⚠️ Potential issue | 🟠 Major

Masking is currently off for QA/staging too.

`NODE_ENV !== 'production'` makes every non-production environment behave like dev, so raw PII still escapes in QA/staging/preview. If only local development should see unredacted logs, make that an explicit development check instead.

Suggested fix

```diff
-const isDev = processEnv?.NODE_ENV !== 'production';
+const isDev = processEnv?.NODE_ENV === 'development';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared/src/logger.js` around lines 4 - 17, The current isDev flag (isDev = processEnv?.NODE_ENV !== 'production') treats QA/staging as dev and leaks PII; change the environment check to explicitly enable unmasked logs only for development by setting isDev = processEnv?.NODE_ENV === 'development' (using the same processEnv variable and keeping existing logger, formatters, and maskPii usage), so maskPii is applied for any non-development environment (QA/staging/preview/production).
🟠 Major comments (22)
packages/llm/src/langchain/__tests__/provider-fallback.e2e.test.ts-42-58 (1)
42-58: ⚠️ Potential issue | 🟠 Major

This mock bypasses the real fallback logic.

`createStructuredModelRouter()` is mocked to catch the simulated OpenAI failure and return the Gemini response itself. That means `processMessage()` never has to recover from a provider error, so this test still passes if the real fallback handling regresses. To validate the fallback path, keep the router/orchestration real and mock the lower-level provider call sites instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/llm/src/langchain/__tests__/provider-fallback.e2e.test.ts` around lines 42 - 58, The test currently mocks createStructuredModelRouter to swallow the simulated OpenAI failure and return the Gemini result inside the mock, which bypasses the real fallback logic; instead, restore the real createStructuredModelRouter/RunnableLambda orchestration and mock the lower-level provider call sites that the router invokes so that the first provider call (OpenAI) throws (use providerCallLog to assert order) and the subsequent provider (Gemini) returns the fallback response, ensuring processMessage actually exercises the router's fallback handling.

packages/llm/src/langchain/__tests__/langchain-integration.test.ts-1-13 (1)
1-13: ⚠️ Potential issue | 🟠 Major

This suite never reaches composition or policy.

The file header says this exercises `compositionChain` and `policyChain`, but `runCoreChain()` stops after `routerChain`. So the mocked model router, final response composition, and blocking/policy behavior are not actually covered here. Please either narrow the test description or invoke the full pipeline/composed chain so the suite matches its stated scope.

Also applies to: 75-91
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/llm/src/langchain/__tests__/langchain-integration.test.ts` around lines 1 - 13, The test header and comments claim it exercises compositionChain and policyChain but the test only calls runCoreChain(), which stops after routerChain; update the test so the code and description match by either (A) narrowing the header/comments to state this only covers normalizationChain, classificationChain and routerChain, or (B) extend the test to invoke the full pipeline by replacing or augmenting the call to runCoreChain() with the function that runs the full composed pipeline (e.g. runFullPipeline, runCompositionAndPolicy, or the existing function that executes compositionChain and policyChain), ensuring the mocked model router and final composition/policy behavior are exercised; locate references to runCoreChain, routerChain, compositionChain, and policyChain in the test to implement the change and update the header lines accordingly.

packages/llm/src/langchain/chains/policy.ts-11-25 (1)
11-25: ⚠️ Potential issue | 🟠 Major

Narrow the hard-block patterns to explicit instruction-override attempts.

The roleplay-style regexes here are much broader than the comment suggests. Phrases like “act as a translator”, “pretend to be a customer”, or “roleplay as support” are benign in many products, but they now get a hard `BLOCKED` response. That will create avoidable false positives in normal user flows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/llm/src/langchain/chains/policy.ts` around lines 11 - 25, PROMPT_INJECTION_PATTERNS currently contains broad roleplay-style regexes that will BLOCK benign requests (e.g., translators or customer roleplay); narrow these to only match explicit instruction-override attempts by removing or tightening the generic roleplay rules (specifically the /act\s+as\s+(a|an)\s+/, /pretend\s+(you\s+are|to\s+be)\s+/, and /roleplay\s+as\s+/ entries in PROMPT_INJECTION_PATTERNS) and replace them with stricter patterns that require clear override language (e.g., phrases combined with “ignore/disregard/override/bypass” or explicit references to “instructions” or “system prompt”); alternatively move benign roleplay patterns out of PROMPT_INJECTION_PATTERNS into a non-blocking/monitoring list so the check in the policy enforcement logic (PROMPT_INJECTION_PATTERNS) only hard-blocks explicit instruction-override attempts.

apps/worker/src/__tests__/queue-flow.integration.test.ts-122-126 (1)
122-126: ⚠️ Potential issue | 🟠 Major

This dedup test only proves the test’s own `Set`, not the real worker path.

The idempotency guard lives entirely inside this inline processor, so the assertion passes even if the production consumer stops deduplicating. Right now this is a self-fulfilling test: enqueue duplicate jobs, dedupe them with `processedKeys`, then assert that `processedKeys` deduped them. Wire the queue through the actual worker/processor and assert on the real side effect it protects.

Also applies to: 130-139, 168-170
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/worker/src/__tests__/queue-flow.integration.test.ts` around lines 122 - 126, The test currently verifies deduplication using the test-local Set (processedKeys) inside an inline processor instead of exercising the real worker logic; update the test so duplicates are enqueued via queue.addBulk({ name: INGRESS_JOB_NAME, ... }) but processed by the actual worker/processor used in production (remove or disable the inline processor/processedKeys dedupe), start the real worker/processor instance in the test, and assert on the real side-effect the idempotency guard protects (e.g., an external call count, DB record creation, or a mock endpoint was invoked only once) rather than asserting against processedKeys; ensure you reference INGRESS_JOB_NAME and the real processor/worker setup used by the app so the test covers the actual dedup path.

scripts/canary-validate.sh-120-130 (1)
120-130: ⚠️ Potential issue | 🟠 Major

The metrics gate never actually gates promotion.

A non-200 `/metrics` response is only logged as info, and a 200 response only proves reachability. The script can therefore promote a canary without evaluating any concrete error-rate or latency threshold.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/canary-validate.sh` around lines 120 - 130, The current /metrics gate only checks reachability via METRICS_STATUS and never enforces any thresholds; change the logic in the METRICS_STATUS/METRICS_BODY block so non-200 responses fail promotion and a 200 response parses METRICS_BODY for concrete metrics (e.g., error_rate, request_error_count, request_total, or latency histograms) and evaluates them against configured thresholds; update the code paths around METRICS_STATUS and METRICS_BODY to: treat non-200 as a failing gate, populate METRICS_BODY once (avoid double-curl), extract the relevant metric(s) from METRICS_BODY, compute the metric (error rate = errors/total or P95 latency from histogram buckets), compare to configured env vars, and call pass(...) only when metrics are within thresholds and fail/log otherwise so promotion is actually gated by metric values.scripts/canary-validate.sh-47-50 (1)
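For reference, the kind of check the prompt describes can be sketched in shell. The metric names (`http_requests_total`, `http_request_errors_total`) and the `ERROR_RATE_THRESHOLD` variable are assumptions for illustration, not the script's actual contract:

```shell
#!/usr/bin/env bash
# Sketch: derive an error rate from scraped Prometheus-style counters and gate
# on a threshold, instead of only checking that /metrics is reachable.
ERROR_RATE_THRESHOLD="${ERROR_RATE_THRESHOLD:-0.01}"

check_error_rate() {
  local metrics_body="$1"
  local errors total
  # Sum counter samples; label sets like {code="500"} stay attached to field 1.
  errors=$(printf '%s\n' "$metrics_body" | awk '/^http_request_errors_total/ {sum += $2} END {print sum + 0}')
  total=$(printf '%s\n' "$metrics_body" | awk '/^http_requests_total/ {sum += $2} END {print sum + 0}')
  if [ "$total" -eq 0 ]; then
    echo "no traffic observed; failing closed"
    return 1
  fi
  # Compare errors/total against the threshold; non-zero exit fails the gate.
  awk -v e="$errors" -v t="$total" -v max="$ERROR_RATE_THRESHOLD" \
    'BEGIN { exit (e / t <= max) ? 0 : 1 }'
}
```

The caller would then run something like `check_error_rate "$METRICS_BODY" || fail "error rate above threshold"` so a 200 response with bad numbers still blocks promotion.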
scripts/canary-validate.sh-47-50 (1)

47-50: ⚠️ Potential issue | 🟠 Major

A single transient soak failure still forces rollback.

`fail()` increments the global `FAIL` counter for every failed poll, and Line 141 aborts whenever `FAIL > 0`. So even one isolated `/health` blip below `MAX_CONSECUTIVE_HEALTH_FAILURES` will fail the deployment, which defeats the purpose of having a consecutive-failure threshold.

💡 Proposed fix

```diff
 PASS=0
-FAIL=0
+POST_SOAK_FAILURES=0
+SOAK_FAILURES=0
 CONSECUTIVE_FAILURES=0
@@
-fail() {
+soak_fail() {
   log "[FAIL] $1"
-  FAIL=$((FAIL + 1))
+  SOAK_FAILURES=$((SOAK_FAILURES + 1))
   CONSECUTIVE_FAILURES=$((CONSECUTIVE_FAILURES + 1))
 }
+
+post_soak_fail() {
+  log "[FAIL] $1"
+  POST_SOAK_FAILURES=$((POST_SOAK_FAILURES + 1))
+}
@@
-    fail "Health check at ${ELAPSED}s — HTTP $STATUS"
+    soak_fail "Health check at ${ELAPSED}s — HTTP $STATUS"
@@
-  fail "POST-SOAK: /ready — HTTP $READY_STATUS"
+  post_soak_fail "POST-SOAK: /ready — HTTP $READY_STATUS"
@@
-if [ "$FAIL" -gt 0 ]; then
+if [ "$POST_SOAK_FAILURES" -gt 0 ]; then
   log "CANARY VALIDATION FAILED. Rollback will be triggered."
   exit 1
 fi
```

Also applies to: 90-100, 141-144
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/canary-validate.sh` around lines 47 - 50, The fail() function currently bumps the global FAIL counter on every poll which causes any single transient health blip to trigger aborts; change fail() so it only increments CONSECUTIVE_FAILURES and only sets/increments the global FAIL (or triggers the abort condition) once CONSECUTIVE_FAILURES exceeds MAX_CONSECUTIVE_HEALTH_FAILURES; ensure the success path (e.g., success() or the healthy branch) resets CONSECUTIVE_FAILURES to 0 and leaves FAIL at 0 so only sustained consecutive failures set FAIL and cause the rollback logic in the abort-check (the code referencing FAIL and CONSECUTIVE_FAILURES) to trigger.packages/shared/package.json-10-11 (1)
10-11:⚠️ Potential issue | 🟠 MajorAdd
vitestas a dependency to this workspace.The
testscript inpackages/shared/package.jsonusesvitest, butvitestis not declared as a dependency in this workspace. Onlypackages/llmdeclaresvitest(^4.0.18). In npm workspaces, each workspace must declare its own dependencies; runningnpm --workspace packages/shared run testwill fail on a clean install becausevitestwill not be available to this workspace.Add
vitestto thedevDependenciesin this manifest, or consider hoisting it to the root workspace if it's used across multiple packages.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared/package.json` around lines 10 - 11, The package manifest declares a "test" script that runs "vitest" but does not list vitest as a dependency; add vitest to this workspace's devDependencies in packages/shared's package.json (or hoist it to the root) so running npm --workspace packages/shared run test succeeds; ensure the version matches the workspace (e.g., ^4.0.18) or the chosen canonical version used by packages/llm.apps/api/src/repositories/data-deletion.ts-78-80 (1)
apps/api/src/repositories/data-deletion.ts-78-80 (1)

78-80: ⚠️ Potential issue | 🟠 Major

PII (phone number) in error message.

The error message includes the phone number, which could leak to logs. For a GDPR-compliance module, consider masking or omitting the PII.

🛡️ Proposed fix

```diff
 if (lookupError || !user) {
-  throw new Error(`User with phone ${phoneNumber} not found`);
+  throw new Error('User with provided phone number not found');
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/repositories/data-deletion.ts` around lines 78 - 80, The error thrown when lookupError || !user currently includes raw PII (phoneNumber); change the throw to avoid logging the full phone number by either omitting it (e.g., "User not found") or using a masking helper (e.g., maskPhoneNumber(phoneNumber) that preserves only non-PII suffixes) before including it in the message. Update the code surrounding lookupError, user, and phoneNumber in data-deletion.ts to use the mask helper or a generic message so no raw phoneNumber is emitted to logs or error handlers.scripts/smoke-test.sh-37-62 (1)
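The `maskPhoneNumber` helper mentioned above does not exist in the codebase; a minimal sketch of what it could look like, keeping only the last four digits:

```typescript
// Hypothetical helper (not in the repo): redact all but the last 4 digits.
export function maskPhoneNumber(phoneNumber: string): string {
  const digits = phoneNumber.replace(/\D/g, '');
  if (digits.length <= 4) {
    return '***';
  }
  return `***${digits.slice(-4)}`;
}

// The error path would then avoid emitting raw PII, e.g.:
// throw new Error(`User with phone ${maskPhoneNumber(phoneNumber)} not found`);
```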
scripts/smoke-test.sh-37-62 (1)

37-62: ⚠️ Potential issue | 🟠 Major

URL-encode the webhook verification parameters.

`WHATSAPP_VERIFY_TOKEN` and `CHALLENGE` are inserted into the URL raw. If either contains `+`, `=`, `&`, or similar characters, the query string changes meaning and this smoke test can fail against a healthy deployment.

🔧 Build the verification request with encoded query params

```diff
 check_http() {
   local description="$1"
   local url="$2"
   local expected_status="${3:-200}"
   local expected_body="${4:-}"
+  shift 4 || true
   local http_status
   local body
-  body=$(curl -s -w "\n%{http_code}" --max-time 10 "$url" 2>/dev/null || echo -e "\n000")
+  body=$(curl -s -w "\n%{http_code}" --max-time 10 "$@" "$url" 2>/dev/null || printf '\n000')
   http_status=$(echo "$body" | tail -1)
   body=$(echo "$body" | head -n -1)
   if [ "$http_status" = "$expected_status" ]; then
@@
 check_http \
   "GET /webhook (verification challenge)" \
-  "${API_URL}/webhook?hub.mode=subscribe&hub.verify_token=${WHATSAPP_VERIFY_TOKEN}&hub.challenge=${CHALLENGE}" \
+  "${API_URL}/webhook" \
   "200" \
-  "${CHALLENGE}"
+  "${CHALLENGE}" \
+  --get \
+  --data-urlencode "hub.mode=subscribe" \
+  --data-urlencode "hub.verify_token=${WHATSAPP_VERIFY_TOKEN}" \
+  --data-urlencode "hub.challenge=${CHALLENGE}"
```

Also applies to: 81-85
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/smoke-test.sh` around lines 37 - 62, The smoke test builds webhook verification URLs by inserting WHATSAPP_VERIFY_TOKEN and CHALLENGE raw into the query string, which breaks when those values contain characters like +, =, or &; URL-encode WHATSAPP_VERIFY_TOKEN and CHALLENGE before concatenating them into the URL (e.g., add a urlencode function or use curl --data-urlencode) and replace the raw inserts in the check_http usage and the other verification-request construction (the block that forms the webhook verification URL around CHALLENGE/WHATSAPP_VERIFY_TOKEN). Ensure all occurrences that build the verification URL use the encoded variables so the query string semantics remain correct.packages/shared/src/index.ts-322-323 (1)
packages/shared/src/index.ts-322-323 (1)

322-323: ⚠️ Potential issue | 🟠 Major

Avoid publishing the callback-reset helper through the shared API.

`export * from './secret-rotation.js'` pulls every secret-rotation symbol into the public surface. The new test suite shows that `clearRotationCallbacks` is exported from that module, so this change also makes it available to package consumers and gives them a way to wipe rotation listeners at runtime.

🔧 Narrow the public export surface

```diff
 export * from './telemetry.js';
 export * from './pii.js';
-export * from './secret-rotation.js';
+export {
+  REQUIRED_SECRET_NAMES,
+  validateSecrets,
+  rotateSecret,
+  onSecretRotated,
+} from './secret-rotation.js';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared/src/index.ts` around lines 322 - 323, The shared package is re-exporting everything from secret-rotation.js which exposes internal helpers like clearRotationCallbacks; remove the star export and instead explicitly export only the intended public symbols from secret-rotation.js (or omit that module entirely) so clearRotationCallbacks is not part of the public API—locate the export line in packages/shared/src/index.ts and replace export * from './secret-rotation.js' with explicit named exports for approved functions/classes, excluding clearRotationCallbacks.apps/api/src/index.ts-728-737 (1)
apps/api/src/index.ts-728-737 (1)

728-737: ⚠️ Potential issue | 🟠 Major

Reject bad pagination inputs before passing them to Supabase.

`Number(query['limit'])` / `Number(query['offset'])` will happily produce `NaN`, negatives, or huge values from malformed query strings. Right now that turns a client error into a 500 or an oversized audit read instead of a clean 400.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/index.ts` around lines 728 - 737, Validate and sanitize pagination inputs before calling getAuditEvents: parse and verify query['limit'] and query['offset'] are finite integers within acceptable bounds (e.g., limit between 1 and a sensible max like 1000, offset >= 0), reject NaN/negative/too-large values and return a 400 response; only if validated include limit and offset in the options object (use variables like limit and offset derived from req.query), leaving other filters (action, actor_id, resource_type) unchanged. Ensure you use Number/parseInt then Number.isInteger and isFinite checks and return a clear 400 error for invalid pagination instead of passing bad values to getAuditEvents.apps/api/src/index.ts-738-745 (1)
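A minimal sketch of the validation the prompt asks for; the bounds (1 to 100, default 50) are assumptions for illustration, not values from the repo:

```typescript
// Sketch: parse pagination params and signal malformed values to the caller,
// which should turn a null result into a 400 response.
export function parsePagination(
  query: Record<string, unknown>,
): { limit: number; offset: number } | null {
  const limit = Number(query['limit'] ?? 50);
  const offset = Number(query['offset'] ?? 0);
  if (!Number.isInteger(limit) || limit < 1 || limit > 100) return null;
  if (!Number.isInteger(offset) || offset < 0) return null;
  return { limit, offset };
}

// In the route handler:
// const page = parsePagination(req.query);
// if (!page) return res.status(400).json({ error: 'invalid pagination' });
```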
apps/api/src/index.ts-738-745 (1)

738-745: ⚠️ Potential issue | 🟠 Major

`await ...catch()` is still blocking this response.

This still waits for the audit insert to settle before sending `events`, so a slow or unhealthy audit table adds latency to every `/api/admin/audit` call. If this write is intentionally best-effort, fire it without awaiting it.

Suggested fix

```diff
-  await logAuditEvent({
+  void logAuditEvent({
     actor_id: actorId,
     actor_role: req.user?.role ?? 'admin',
     action: 'audit.query',
     resource_type: 'audit_events',
     metadata: { filters: query },
   }).catch(() => undefined); // non-blocking; audit-of-audit failure must not break response
   res.json({ events });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/index.ts` around lines 738 - 745, The audit write is still blocking because of the leading await; remove the await so the call becomes fire-and-forget (e.g. call logAuditEvent(...) without awaiting and keep the .catch to swallow errors) to avoid adding latency to the /api/admin/audit response; update the call that invokes logAuditEvent({ actor_id: actorId, actor_role: req.user?.role ?? 'admin', action: 'audit.query', resource_type: 'audit_events', metadata: { filters: query } }) to not await its Promise (use void or just drop await) while preserving the error swallowing callback.apps/api/src/__tests__/supabase-persistence.integration.test.ts-47-69 (1)
apps/api/src/__tests__/supabase-persistence.integration.test.ts-47-69 (1)

47-69: ⚠️ Potential issue | 🟠 Major

This stub returns the same rows regardless of filter predicates, making the filtered query test a false positive.

Line 86 in `getAuditEvents()` applies `.eq('action', filters.action)` when a filter is provided, but the stub's `eq()` method (line 59) ignores its arguments and always resolves `{ data: rows, error: null }`. The test at line 152 ("getAuditEvents with filter returns filtered rows") passes only because the test data is pre-filtered to match expectations—it would pass even if the `.eq()` call were removed entirely.

Additionally, the handwritten `then` methods at lines 60–61 and 82–83 violate Biome's `lint/suspicious/noThenProperty` rule (which is enabled by default in your biome.json).

Consider making the stub record and apply predicates (e.g., filter `rows` based on which `.eq()` was called and its arguments), or assert that the expected `.eq()` call was made with correct parameters. Then regenerate the `.js` mirror from the corrected TS source.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/__tests__/supabase-persistence.integration.test.ts` around lines 47 - 69, The Supabase stub created by makeSupabaseStub returns the same rows regardless of predicates and includes manual then properties (triggering lint errors), so update makeSupabaseStub to record and apply predicate arguments from eq(...) to actually filter the returned rows (i.e., when getAuditEvents calls .eq('action', filters.action) the stub should return only rows matching that action) and remove any manual then properties by returning real Promises instead; ensure the stub's from()->select()->order()->limit()->eq method takes the key and value, filters the provided rows array accordingly, and resolves a Promise<{ data: AuditEvent[]; error: null }> so the "getAuditEvents with filter returns filtered rows" test verifies real filtering and satisfies the lint rule.packages/shared/src/pii.js-66-92 (1)
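A sketch of a predicate-aware stub. It assumes the test can await an explicit `resolve()` terminal call instead of a thenable builder, which also sidesteps Biome's `noThenProperty` rule; the names here are illustrative, not the test file's actual helpers:

```typescript
type Row = Record<string, unknown>;

interface StubBuilder {
  select: (cols?: string) => StubBuilder;
  order: (col?: string, opts?: { ascending?: boolean }) => StubBuilder;
  limit: (n?: number) => StubBuilder;
  eq: (key: string, value: unknown) => StubBuilder;
  resolve: () => Promise<{ data: Row[]; error: null }>;
}

// Sketch: eq() records each predicate; resolve() applies them all to the rows,
// so the test fails if the production query stops calling .eq().
export function makeFilteringStub(rows: Row[]) {
  const filters: Array<[string, unknown]> = [];
  const builder: StubBuilder = {
    select: () => builder,
    order: () => builder,
    limit: () => builder,
    eq: (key, value) => {
      filters.push([key, value]);
      return builder;
    },
    resolve: () =>
      Promise.resolve({
        data: filters.reduce((acc, [k, v]) => acc.filter((r) => r[k] === v), rows),
        error: null,
      }),
  };
  return { from: (_table: string) => builder };
}
```

Note that `filters` accumulates per stub instance, so each test case should build a fresh stub.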
packages/shared/src/pii.js-66-92 (1)

66-92: ⚠️ Potential issue | 🟠 Major

The depth cap does not make circular input safe.

When `depth > 10`, the original object is inserted back into the cloned structure. That can preserve circular references and unmasked nested data in the logger path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared/src/pii.js` around lines 66 - 92, maskPii currently returns the original value when depth > 10, which can reintroduce circular refs and unmasked data; change the depth-safety branch in maskPii so it does not return the original object—return a safe placeholder (e.g., MASKED_FIELD) for objects/arrays instead of the original value while still allowing primitives to be returned as-is; update the depth > 10 check inside maskPii and ensure callers relying on MASKED_FIELD and maskString/PII_KEY_BLOCKLIST behavior remain consistent.packages/shared/src/secret-rotation.js-63-69 (1)
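A sketch of the safer depth-cap branch; `MASKED_FIELD` and the cap of 10 are taken from the finding, while the rest of the traversal is simplified for illustration:

```typescript
const MASKED_FIELD = '[REDACTED]';

// Sketch: past the depth cap, containers become a placeholder instead of the
// original reference, so a cycle can never be reinserted into the clone.
export function maskPii(value: unknown, depth = 0): unknown {
  if (depth > 10) {
    return typeof value === 'object' && value !== null ? MASKED_FIELD : value;
  }
  if (Array.isArray(value)) {
    return value.map((item) => maskPii(item, depth + 1));
  }
  if (typeof value === 'object' && value !== null) {
    return Object.fromEntries(
      Object.entries(value).map(([key, v]) => [key, maskPii(v, depth + 1)]),
    );
  }
  return value; // primitives pass through; the real module also masks strings here
}
```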
packages/shared/src/secret-rotation.js-63-69 (1)

63-69: ⚠️ Potential issue | 🟠 Major

Don't hide client reload failures during rotation.

If a registered Redis/Supabase re-init callback throws, `rotateSecret()` still returns success and leaves the process in a mixed old/new-secret state. This path needs to surface partial failures.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared/src/secret-rotation.js` around lines 63 - 69, rotateSecret currently swallows errors from rotationCallbacks (the for loop calling callback(name, newValue)), which lets rotateSecret return success despite partial failures; change the callback invocation to support async callbacks (await Promise.resolve(callback(name, newValue))) and capture any thrown/rejected error into a local errors array, and after the loop if errors.length > 0 throw or return an aggregated Error that includes which callback(s) failed and their error messages so callers see partial-failure of rotateSecret instead of silent success.packages/shared/src/secret-rotation.js-15-22 (1)
packages/shared/src/secret-rotation.js-15-22 (1)

15-22: ⚠️ Potential issue | 🟠 Major

`SUPABASE_URL` needs to be part of the required-secret set.

Both new repositories initialize Supabase from `SUPABASE_URL`. As written, `validateSecrets()` can report `healthy: true` while audit logging and data deletion are guaranteed to fail.

Suggested fix

```diff
 export const REQUIRED_SECRET_NAMES = [
+  'SUPABASE_URL',
   'WHATSAPP_APP_SECRET',
   'WHATSAPP_ACCESS_TOKEN',
   'WHATSAPP_VERIFY_TOKEN',
   'OPENAI_API_KEY',
   'SUPABASE_SERVICE_ROLE_KEY',
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared/src/secret-rotation.js` around lines 15 - 22, REQUIRED_SECRET_NAMES is missing SUPABASE_URL which lets validateSecrets() incorrectly report healthy; update the REQUIRED_SECRET_NAMES array to include 'SUPABASE_URL' so validateSecrets() and any secret audits will require the Supabase URL before reporting healthy (locate REQUIRED_SECRET_NAMES in packages/shared/src/secret-rotation.js and add the 'SUPABASE_URL' entry alongside the other secret names).apps/api/src/repositories/data-deletion.js-48-55 (1)
apps/api/src/repositories/data-deletion.js-48-55 (1)

48-55: ⚠️ Potential issue | 🟠 Major

Separate lookup failures from "not found", and drop the raw phone number from the error.

`lookupError` currently gets flattened into the same message as a missing user, which hides real DB/auth failures, and interpolating `phoneNumber` leaks PII into logs or HTTP responses.

Suggested fix

```diff
-  if (lookupError || !user) {
-    throw new Error(`User with phone ${phoneNumber} not found`);
-  }
+  if (lookupError) {
+    throw new Error(`deleteUserDataByPhone lookup failed: ${lookupError.message}`);
+  }
+  if (!user) {
+    throw new Error('User not found');
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/repositories/data-deletion.js` around lines 48 - 55, The lookup currently conflates DB errors and genuine "not found" cases and leaks the raw phoneNumber; update the client.from('users').select(...).eq(...).single() handling so that if lookupError is truthy you throw a distinct error like "Failed to lookup user" (and log lookupError details to server logs) and if !user you throw a separate "User not found" error that does NOT include the raw phoneNumber (optionally include a maskedPhone derived from phoneNumber, e.g. showing only last 2–4 digits, but do not interpolate the full phoneNumber). Ensure you reference and change the existing lookupError and user checks around the client.from('users') call and remove any use of the raw phoneNumber in thrown error messages.apps/api/src/repositories/audit.ts-31-36 (1)
apps/api/src/repositories/audit.ts-31-36 (1)

31-36: ⚠️ Potential issue | 🟠 Major

Fail fast when Supabase config is missing.

The audit repository currently constructs a client even when `SUPABASE_URL` or the service key is absent, so a deployment misconfig only shows up on the first audit read/write.

Suggested fix

```diff
 function getClient(): SupabaseClient {
   if (_client) return _client;
-  const url = process.env.SUPABASE_URL || '';
-  const key = process.env.SUPABASE_SERVICE_ROLE_KEY || process.env.SUPABASE_SERVICE_KEY || '';
+  const url = process.env.SUPABASE_URL;
+  const key = process.env.SUPABASE_SERVICE_ROLE_KEY || process.env.SUPABASE_SERVICE_KEY;
+  if (!url || !key) {
+    throw new Error('Missing Supabase configuration for audit repository');
+  }
   _client = createClient(url, key);
   return _client;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/repositories/audit.ts` around lines 31 - 36, getClient currently builds a Supabase client even when SUPABASE_URL or service key envs are missing, causing delayed failures; update getClient to validate that process.env.SUPABASE_URL and at least one of process.env.SUPABASE_SERVICE_ROLE_KEY or process.env.SUPABASE_SERVICE_KEY is present before calling createClient, and if not present throw a clear, fast-failing Error (include the env var names in the message) instead of returning/creating _client; reference the getClient function, the _client variable, and createClient when implementing the guard.apps/api/src/repositories/audit.ts-74-83 (1)
apps/api/src/repositories/audit.ts-74-83 (1)

74-83: ⚠️ Potential issue | 🟠 Major

Clamp `limit`/`offset` before issuing the query.

These values look externally controlled from the admin query path. Negative numbers or a very large `limit` can turn this into an invalid or unexpectedly expensive audit scan.

Suggested fix

```diff
 export async function getAuditEvents(filters: AuditQueryFilters = {}): Promise<AuditEvent[]> {
   const client = getClient();
+  const limit = Math.min(Math.max(filters.limit ?? 50, 1), 100);
+  const offset = Math.max(filters.offset ?? 0, 0);
   let query = client
     .from('audit_events')
     .select('*')
     .order('created_at', { ascending: false })
-    .limit(filters.limit ?? 50);
+    .limit(limit);
 
-  if (filters.offset) {
-    query = query.range(filters.offset, filters.offset + (filters.limit ?? 50) - 1);
+  if (offset > 0) {
+    query = query.range(offset, offset + limit - 1);
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/repositories/audit.ts` around lines 74 - 83, getAuditEvents allows externally controlled filters.limit and filters.offset which can be negative or unbounded; clamp these before building the query by normalizing limit = Math.min(Math.max(Number(filters.limit ?? 50), 1), 100) (use a sensible MAX like 100) and offset = Math.max(Number(filters.offset ?? 0), 0). Then use these normalized values in the .limit() and .range() calls in getAuditEvents so the DB query cannot be given negative or excessively large limits/offsets.packages/shared/src/pii.js-47-53 (1)
packages/shared/src/pii.js-47-53 (1)

47-53: ⚠️ Potential issue | 🟠 Major

`maskString()` still leaks extra matches.

It returns after the first email/phone branch, and each `replace()` only masks one occurrence. A string containing multiple emails/phones—or one of each—will only be partially redacted.

Suggested fix

```diff
 const PHONE_REGEX = /(\+?\d[\s\-()*]*){7,15}/;
+const PHONE_REGEX_GLOBAL = /(\+?\d[\s\-()*]*){7,15}/g;
 const EMAIL_REGEX = /[a-zA-Z0-9._%+]+[-a-zA-Z0-9._%+]*@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/;
+const EMAIL_REGEX_GLOBAL = /[a-zA-Z0-9._%+]+[-a-zA-Z0-9._%+]*@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g;
@@
 export function maskString(value) {
-  if (looksLikeEmail(value)) {
-    return value.replace(EMAIL_REGEX, MASKED_EMAIL);
-  }
-  if (looksLikePhone(value)) {
-    return value.replace(PHONE_REGEX, MASKED_PHONE);
-  }
-  return value;
+  return value
+    .replace(EMAIL_REGEX_GLOBAL, MASKED_EMAIL)
+    .replace(PHONE_REGEX_GLOBAL, MASKED_PHONE);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared/src/pii.js` around lines 47 - 53, maskString currently returns after the first branch and each replace only masks a single occurrence; update maskString so it does not return early and applies global masking for all matches: ensure EMAIL_REGEX and PHONE_REGEX are used with global replacement (or call replaceAll) and run both replacements (email then phone, or vice versa) so strings with multiple emails, multiple phones, or mixed PII are fully redacted; reference maskString, looksLikeEmail, EMAIL_REGEX, MASKED_EMAIL, looksLikePhone, PHONE_REGEX, MASKED_PHONE when making the changes.packages/shared/src/secret-rotation.ts-25-25 (1)
25-25: ⚠️ Potential issue | 🟠 Major

Reject unknown secret names instead of creating arbitrary env keys. `SecretName` is defined, but the public callback and rotation APIs still accept plain `string`. A typo here will silently add a new entry to `env`, skip any name-based reinitialization logic, and still look like a successful rotation. Constrain the API to `SecretName` and keep a runtime membership check for JS callers.

🛠️ Proposed fix

```diff
-export type SecretRotatedCallback = (name: string, newValue: string) => void;
+export type SecretRotatedCallback = (name: SecretName, newValue: string) => void;
 export function rotateSecret(
   env: Record<string, string | undefined>,
-  name: string,
+  name: SecretName,
   newValue: string,
 ): void {
+  if (!(REQUIRED_SECRET_NAMES as readonly string[]).includes(name)) {
+    throw new Error(`rotateSecret: unknown secret "${name}"`);
+  }
   if (!newValue || newValue.trim().length === 0) {
```

Also applies to: 37-37, 78-87
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared/src/secret-rotation.ts` at line 25, The API currently accepts plain strings which lets typos create arbitrary env keys; change the public callback and rotation API parameter types to use the SecretName type instead of string and add a runtime membership check using REQUIRED_SECRET_NAMES.includes(name) to reject unknown names (throw or return an error) for JS callers; update the function signatures that accept secret names (the public rotation entry points and callback registration functions referenced near the SecretName definition and lines 37 and 78-87) to accept SecretName and perform the runtime check before proceeding so only valid secret names are allowed.

packages/shared/src/secret-rotation.ts-87-95 (1)
87-95: ⚠️ Potential issue | 🟠 Major

Don't report success when dependent refreshes fail. Lines 89-94 swallow every callback exception after the env value has already been updated. If a Redis/Supabase re-init fails, the process is left partially rotated and the caller gets no signal to retry or roll back.

🛠️ Proposed fix

```diff
   env[name] = newValue;
+  let callbackFailed = false;
   for (const callback of rotationCallbacks) {
     try {
       callback(name, newValue);
-    } catch {
-      // Callbacks must not crash the rotation process
+    } catch {
+      callbackFailed = true;
     }
   }
+
+  if (callbackFailed) {
+    throw new Error(`rotateSecret: one or more callbacks failed for "${name}"`);
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared/src/secret-rotation.ts` around lines 87 - 95, The code currently sets env[name] = newValue then swallows errors from rotationCallbacks, leaving a partially-rotated state; change it to capture the previous value (e.g., oldValue = env[name]) before assigning, invoke each rotation callback and if any throws (or returns a rejected promise) restore env[name] back to oldValue and rethrow (or throw an aggregated error) so callers see failure and can retry/roll back; update the invocation of rotationCallbacks (handle async callbacks with await if needed) and ensure exceptions from callback(name, newValue) are not silently ignored.
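A self-contained sketch of the roll-back-on-failure shape described above (the names `rotateSecret` and `rotationCallbacks` mirror the review, but this is an illustration under assumed types, not the project's actual module):

```typescript
// Illustrative only: restore the previous value when any dependent refresh fails,
// so callers never observe a "successful" but partially-applied rotation.
type RotationCallback = (name: string, newValue: string) => void;
const rotationCallbacks: RotationCallback[] = [];

function rotateSecret(
  env: Record<string, string | undefined>,
  name: string,
  newValue: string,
): void {
  const oldValue = env[name];
  env[name] = newValue;

  const failures: unknown[] = [];
  for (const callback of rotationCallbacks) {
    try {
      callback(name, newValue);
    } catch (err) {
      failures.push(err);
    }
  }

  if (failures.length > 0) {
    env[name] = oldValue; // roll back so the caller can retry safely
    throw new Error(`rotateSecret: ${failures.length} callback(s) failed for "${name}"`);
  }
}
```

Whether to roll back or keep the new value and throw is a policy choice; the essential part is that the caller gets a signal instead of a silent partial rotation.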
🟡 Minor comments (8)
packages/llm/src/langchain/chains/__tests__/router.test.ts-65-68 (1)
65-68: ⚠️ Potential issue | 🟡 Minor

Fix the test name to match the assertion. This case expects `escalation_path`, but the title says `clarification_path`. If it fails later, the failure message will point readers in the wrong direction.
Verify each finding against the current code and only fix it if needed. In `@packages/llm/src/langchain/chains/__tests__/router.test.ts` around lines 65 - 68, Update the test title in the router.test.ts case that currently reads "falls back to clarification_path when confidence is undefined" to accurately reflect the assertion expecting 'escalation_path' (i.e., change the string to "falls back to escalation_path when confidence is undefined"); locate the test that calls route({ intent: 'RAG' }) and asserts toBe('escalation_path') and replace only the descriptive string so the name matches the assertion.

package.json-10-13 (1)
10-13: ⚠️ Potential issue | 🟡 Minor

`npm run typecheck` still skips the new top-level tests project. `tests/tsconfig.json` lives outside the configured workspaces, so type errors in `tests/e2e` and `tests/load` won't be caught by the default root typecheck entrypoint.

💡 Proposed fix

```diff
-    "typecheck": "npm run typecheck --workspaces --if-present",
+    "typecheck": "npm run typecheck --workspaces --if-present && tsc -p tests/tsconfig.json",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@package.json` around lines 10 - 13, The root "typecheck" script currently uses workspaces and therefore skips the top-level tests project (tests/tsconfig.json), so add the tests project to the type checking step; update the "typecheck" script in package.json (the "typecheck" npm script) to also run tsc -p tests/tsconfig.json (or include the tests folder in the workspace glob) so type errors in tests/e2e and tests/load are checked, e.g. run the existing workspace typecheck and then run tsc -p tests/tsconfig.json as part of the same script.

apps/api/src/repositories/data-deletion.test.js-7-8 (1)
7-8: ⚠️ Potential issue | 🟡 Minor

Avoid returning the `delete` result from the `forEach` callback. Line 8 returns the `delete` expression, which trips Biome's `useIterableCallbackReturn` rule and can fail lint for this file.

💡 Proposed fix

```diff
 function resetStore() {
-  Object.keys(userStore).forEach((k) => delete userStore[k]);
+  for (const k of Object.keys(userStore)) {
+    delete userStore[k];
+  }
   userStore['user-uuid-1'] = { id: 'user-uuid-1', phone_number: '+628111111111' };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/repositories/data-deletion.test.js` around lines 7 - 8, The forEach callback in resetStore currently returns the result of the delete expression which triggers the linter rule; change the callback to avoid returning anything—e.g., replace Object.keys(userStore).forEach((k) => delete userStore[k]) with a non-returning callback (forEach((k) => { delete userStore[k]; })) or use a for...in loop to delete keys; update the resetStore function to perform deletions without returning the delete expression (reference: resetStore, userStore, Object.keys(...).forEach).

apps/api/src/repositories/data-deletion.ts-25-31 (1)
25-31: ⚠️ Potential issue | 🟡 Minor

Missing validation for required environment variables. `getClient()` uses empty string fallbacks for `SUPABASE_URL` and `SUPABASE_SERVICE_ROLE_KEY`. If these are missing, `createClient` will create an invalid client that fails on first use with a confusing error.

🛠️ Proposed fix to validate env vars

```diff
 function getClient(): SupabaseClient {
   if (_client) return _client;
   const url = process.env.SUPABASE_URL || '';
   const key = process.env.SUPABASE_SERVICE_ROLE_KEY || process.env.SUPABASE_SERVICE_KEY || '';
+  if (!url || !key) {
+    throw new Error('Missing required SUPABASE_URL or SUPABASE_SERVICE_ROLE_KEY environment variables');
+  }
   _client = createClient(url, key);
   return _client;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/repositories/data-deletion.ts` around lines 25 - 31, The getClient function currently uses empty-string fallbacks for SUPABASE_URL and SUPABASE_SERVICE_ROLE_KEY/SUPABASE_SERVICE_KEY which can produce an invalid Supabase client; update getClient to validate those env vars (check process.env.SUPABASE_URL and at least one of process.env.SUPABASE_SERVICE_ROLE_KEY or process.env.SUPABASE_SERVICE_KEY), and if missing throw a clear Error (or processLogger/error + throw) with a descriptive message before calling createClient, leaving the existing _client caching logic intact; reference the getClient function, the createClient call, and the _client variable when making the change.

tests/load/load-test.ts-139-141 (1)
139-141: ⚠️ Potential issue | 🟡 Minor

Redundant error check. The condition `!res.ok && res.status !== 200` is redundant: `res.ok` is already `false` whenever the status falls outside the 2xx range, so the extra `res.status !== 200` clause never changes the outcome. Checking `!res.ok` alone is clearer and correctly treats all 2xx statuses (200, 201, 204) as successes.

🐛 Proposed fix

```diff
 .then((res) => {
   const latency = Date.now() - reqStart;
   latenciesMs.push(latency);
   completedCount++;
-  if (!res.ok && res.status !== 200) {
+  if (!res.ok) {
     errorCount++;
   }
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/load/load-test.ts` around lines 139 - 141, The current check uses "if (!res.ok && res.status !== 200)" whose second clause is redundant; change the condition to simply check the Response.ok flag (e.g., "if (!res.ok) errorCount++") where the variables "res" and "errorCount" are used so only non-2xx responses increment errorCount.

apps/api/src/repositories/audit.test.ts-8-19 (1)
8-19: ⚠️ Potential issue | 🟡 Minor

Compose `eq()` filters in the mock chain. Each `eq()` call replaces the previous predicate instead of narrowing it. If `getAuditEvents()` applies more than one filter, this double can still return rows that Supabase would exclude, so the test may pass against broken query logic.

🔧 Preserve earlier predicates when chaining filters

```diff
 const makeSelectChain = (filterFn: (e: AuditEvent) => boolean = () => true) => ({
   order: () => ({
     limit: () => ({ data: auditStore.filter(filterFn), error: null }),
     data: auditStore.filter(filterFn),
     error: null,
   }),
   eq: (field: string, value: string) =>
-    makeSelectChain((e) => (e as unknown as Record<string, unknown>)[field] === value),
+    makeSelectChain(
+      (e) => filterFn(e) && (e as unknown as Record<string, unknown>)[field] === value,
+    ),
   range: () => makeSelectChain(filterFn),
   data: auditStore.filter(filterFn),
   error: null,
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/repositories/audit.test.ts` around lines 8 - 19, The mock query chain in makeSelectChain currently replaces the previous predicate when eq() is called, so chained calls lose earlier filters; update eq in makeSelectChain to compose the new equality test with the existing filterFn (e.g., create a new predicate that returns filterFn(e) && fieldMatches) so subsequent eq() calls narrow results instead of overwriting them; ensure other chain methods (order, range) continue to return makeSelectChain with the composed filterFn so tests using getAuditEvents see the same filter semantics as Supabase.

apps/api/src/repositories/data-deletion.test.ts-10-12 (1)
10-12: ⚠️ Potential issue | 🟡 Minor

Make `resetStore()` lint-clean. `delete userStore[k]` is being returned from the `forEach` callback, which trips Biome's `useIterableCallbackReturn` and will fail the new lint gate.

Suggested fix

```diff
 function resetStore() {
-  Object.keys(userStore).forEach((k) => delete userStore[k]);
+  for (const key of Object.keys(userStore)) {
+    delete userStore[key];
+  }
   userStore['user-uuid-1'] = { id: 'user-uuid-1', phone_number: '+628111111111' };
   userStore['user-uuid-2'] = { id: 'user-uuid-2', phone_number: '+628222222222' };
   deletionLog.length = 0;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/repositories/data-deletion.test.ts` around lines 10 - 12, The resetStore function currently uses a concise arrow inside Object.keys(userStore).forEach that returns the result of delete userStore[k], triggering the useIterableCallbackReturn lint rule; modify resetStore so the forEach callback uses a block body (e.g., Object.keys(userStore).forEach((k) => { delete userStore[k]; });) or replace the forEach with a for...of loop to explicitly perform deletions without returning a value; update the function where resetStore and userStore are defined accordingly.

.github/workflows/deploy-staging.yml-30-32 (1)
30-32: ⚠️ Potential issue | 🟡 Minor

Quote `$GITHUB_OUTPUT` in the redirect. The redirect target on line 32 is unquoted, which ShellCheck and actionlint flag. Quote it with braces to match GitHub's documented best practice and ensure the step passes linting checks cleanly.

🛠️ Proposed fix

```diff
-        run: echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
+        run: echo "sha_short=$(git rev-parse --short HEAD)" >> "${GITHUB_OUTPUT}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/deploy-staging.yml around lines 30 - 32, The redirect target in the "Set short SHA" step (id: meta) is unquoted; update the run command that writes sha_short to use the quoted, braced GITHUB_OUTPUT variable (i.e., append to "${GITHUB_OUTPUT}" rather than $GITHUB_OUTPUT) so shellcheck/actionlint warnings are resolved and the workflow follows GitHub's recommended practice.
🧹 Nitpick comments (9)
packages/llm/src/parsers/__tests__/parsers.test.ts (2)
43-50: Pin the intended failure in the negative schema tests. These cases only assert `success === false`, so they still pass if an unrelated validation starts failing first. Checking `result.error.issues` for the expected field (`confidence`, `message`, `escalate_flag`, etc.) would make the suite much harder to satisfy accidentally.

Also applies to: 52-59, 70-72, 94-96, 111-125
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/llm/src/parsers/__tests__/parsers.test.ts` around lines 43 - 50, Update the failing negative tests to assert not just result.success but that the validation error specifically targets the intended field: after calling IntentSchema.safeParse(...) examine result.error.issues and assert there is an issue with path ['confidence'] (and similarly for other tests assert the expected paths like ['message'], ['escalate_flag'], etc.) and optionally check the issue.message or code to ensure the rejection reason matches; update the assertions in the tests around IntentSchema.safeParse to reference result.error.issues rather than only result.success to pin the intended failure.
160-165: Mirror the failure-path coverage for `standardResponseParser`. `intentParser` is exercised against malformed and schema-invalid JSON, but this parser only has the success case. Adding those two failure modes here would keep the wrapper coverage consistent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/llm/src/parsers/__tests__/parsers.test.ts` around lines 160 - 165, Add two failure-path tests for standardResponseParser: one that passes a malformed JSON string (e.g., 'not a json') to standardResponseParser.parse and asserts it rejects/throws, and another that passes a well-formed JSON that does not match the expected schema (e.g., JSON.stringify({ wrong: 'value' })) and asserts parse rejects/throws due to schema validation; use the same async/await + expect(...).rejects.toThrow pattern used by intentParser tests and reference standardResponseParser.parse in the new test cases.

apps/worker/src/__tests__/queue-flow.integration.test.ts (1)
57-59: Add explicit queue cleanup using `obliterate()` to prevent Redis test debris accumulation. `queue.close()` only closes the Redis connection—it leaves jobs and queue metadata behind. Using `Date.now()` for queue name isolation (line 57) makes each test run create persistent test state that accumulates in Redis. Call `await queue.obliterate()` before `await queue.close()` to fully remove queue contents and metadata after each test.

Pattern to apply at cleanup points:

```ts
await worker.close();
await queue.obliterate(); // Remove all jobs and metadata
await queue.close();
```

Applies to cleanup in: lines 85, 92, 100, 145, 154, 163
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/worker/src/__tests__/queue-flow.integration.test.ts` around lines 57 - 59, The tests create ephemeral queues using TEST_QUEUE_NAME and currently only call queue.close(), which leaves jobs/metadata in Redis; before calling queue.close() in each cleanup block (where worker.close() and queue.close() are used) insert an awaited call to queue.obliterate() so the sequence becomes await worker.close(); await queue.obliterate(); await queue.close(); to fully remove jobs and metadata for the queue and prevent accumulating Redis test debris.

apps/api/src/repositories/data-deletion.test.js (1)
36-49: Tighten the phone-lookup mock contract. Because `_field` is ignored here, this test still passes if `deleteUserDataByPhone` queries the wrong column. That makes the happy-path assertion weaker than the real Supabase contract.

💡 Proposed fix

```diff
 return {
   // eslint-disable-next-line @typescript-eslint/no-unused-vars
   select: (_sel) => ({
-    eq: (_field, phone) => ({
-      single: () => {
-        const found = Object.values(userStore).find((u) => u.phone_number === phone);
-        return Promise.resolve(found
-          ? { data: found, error: null }
-          : { data: null, error: { message: 'not found' } });
-      },
-    }),
+    eq: (field, phone) => {
+      expect(field).toBe('phone_number');
+      return {
+        single: () => {
+          const found = Object.values(userStore).find((u) => u.phone_number === phone);
+          return Promise.resolve(found
+            ? { data: found, error: null }
+            : { data: null, error: { message: 'not found' } });
+        },
+      };
+    },
   }),
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/repositories/data-deletion.test.js` around lines 36 - 49, The phone-lookup mock in mockFrom is too permissive because it ignores the eq field parameter; update the mock used by deleteUserDataByPhone to validate the column name (e.g., ensure the second arg's field equals 'phone_number') before matching userStore, and if the field is unexpected return a not-found or error result so tests fail when the wrong column is queried; locate the mockFrom/select/eq closure and add a check that the received field equals 'phone_number' (or otherwise throw/return an error) while keeping the existing phone lookup against userStore.

tests/e2e/webhook-to-outbound.e2e.test.ts (1)
96-112: Missing error handler on server listen. If the server fails to start (e.g., address binding error), there's no `'error'` event listener, which could leave the promise hanging or crash with an uncaught error.

🛠️ Proposed fix to add error handling

```diff
 return new Promise<T>((resolve, reject) => {
   const server = (app as ReturnType<(typeof import('express'))['default']>).listen(
     0,
     async () => {
       try {
         const addr = server.address() as { port: number };
         const baseUrl = `http://localhost:${addr.port}`;
         const result = await fn(baseUrl, enqueuedJobs);
         server.close();
         resolve(result);
       } catch (err) {
         server.close();
         reject(err);
       }
     },
   );
+  server.on('error', reject);
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/webhook-to-outbound.e2e.test.ts` around lines 96 - 112, The Promise created around starting the Express server lacks an 'error' event listener on the server instance (variable server), so add server.on('error', (err) => { server.close(); reject(err); }) before or immediately after calling listen to ensure binding failures are rejected and the server is closed; update the block that invokes fn(baseUrl, enqueuedJobs) to rely on this listener so both listen-time errors and errors in the async listen callback properly reject the Promise.

tests/e2e/escalation-handoff.e2e.test.ts (2)
93-109: Missing error handler on server listen. Same issue as in `webhook-to-outbound.e2e.test.ts`—no `'error'` event listener on the server.

🛠️ Proposed fix

```diff
 return new Promise<T>((resolve, reject) => {
   const server = (app as ReturnType<(typeof import('express'))['default']>).listen(
     0,
     async () => {
       try {
         const addr = server.address() as { port: number };
         const baseUrl = `http://localhost:${addr.port}`;
         const result = await fn(baseUrl, enqueuedJobs);
         server.close();
         resolve(result);
       } catch (err) {
         server.close();
         reject(err);
       }
     },
   );
+  server.on('error', reject);
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/escalation-handoff.e2e.test.ts` around lines 93 - 109, The server started in the Promise wrapper around server.listen (variable server created from (app as ReturnType<typeof import('express')>['default']).listen) lacks an 'error' event handler; add server.on('error', (err) => { server.close(); reject(err); }) right after creating server so any listen/startup errors reject the Promise and close the server. Ensure you reference the same server variable and the existing resolve/reject logic used in the async callback to avoid unhandled 'error' events.
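The same pattern can be sketched with Node's built-in `http` server (this `withServer` is a simplified stand-in for the Express-based test helper, not the project's actual code):

```typescript
import http from 'node:http';

// Reject on 'error' so a failed bind (e.g. EADDRINUSE) cannot leave the
// promise pending forever or crash the process with an unhandled 'error'.
function withServer<T>(fn: (baseUrl: string) => Promise<T>): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const server = http.createServer((_req, res) => res.end('ok'));
    server.on('error', reject); // fires before 'listening' on bind failures
    server.listen(0, async () => {
      try {
        const { port } = server.address() as { port: number };
        resolve(await fn(`http://localhost:${port}`));
      } catch (err) {
        reject(err);
      } finally {
        server.close();
      }
    });
  });
}
```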
17-110: Consider extracting shared test harness. The test utilities (`color`, `runTest`, `signBody`, `withServer`) are duplicated across E2E test files. Extracting to a shared `tests/e2e/harness.ts` would reduce maintenance burden.
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/escalation-handoff.e2e.test.ts` around lines 17 - 110, Extract the duplicated test utilities into a shared harness module: move the color constant and the functions runTest, signBody, and withServer into a single exported module and replace the duplicate definitions in each E2E test with imports from that harness; ensure the exported symbols keep the same names and signatures (color, runTest, signBody, withServer) and that withServer still accepts the same fn signature and constructs idempotencyStore/ingressQueue/observability the same way so existing tests continue to work.

packages/shared/src/pii.ts (1)
28-29: Phone regex may produce false positives. The pattern `(\+?\d[\s\-()*]*){7,15}` matches 7-15 digits with arbitrary separators including `*` and `()`. This could match numeric strings that aren't phone numbers (e.g., order IDs, timestamps). Consider tightening the pattern or accepting some false positives for security.

```diff
-const PHONE_REGEX = /(\+?\d[\s\-()*]*){7,15}/;
+const PHONE_REGEX = /(\+?\d[\s\-()]*){7,15}/;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared/src/pii.ts` around lines 28 - 29, The PHONE_REGEX is too permissive (currently (\+?\d[\s\-()*]*){7,15}); update the PHONE_REGEX constant to be stricter: add start/end anchors, disallow '*' and loose parentheses as separators, limit allowed separators to spaces, hyphens, or dots, require an optional leading '+' country code followed by a total digit count threshold (e.g. 10–15 digits) so it no longer matches arbitrary numeric strings; modify the PHONE_REGEX definition in packages/shared/src/pii.ts accordingly.

supabase/migrations/202603090100_l1_audit_data_deletion.sql (1)
26-45: The database layer comments on immutability are misleading. The comments at lines 44–45 ("No UPDATE policy → rows are effectively immutable" and "No DELETE policy → rows cannot be deleted") only hold for roles constrained by RLS. Superusers or roles with `BYPASSRLS` can still mutate rows, so the immutability guarantee depends on PostgreSQL's role isolation, not the table definition itself.

Mitigation: The application layer (audit.ts) enforces INSERT-only by exposing only `logAuditEvent()` and `getAuditEvents()`—no update/delete methods—and Supabase controls superuser access. For production defense-in-depth, add an `UPDATE`/`DELETE` trigger that raises an error, or explicitly revoke mutating privileges on the table.

Clarify the comments to reflect that immutability is enforced via application contracts and role isolation, not database policies alone.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@supabase/migrations/202603090100_l1_audit_data_deletion.sql` around lines 26 - 45, The comment claiming immutability is misleading because RLS policies on audit_events (service_role_insert_audit, service_role_select_audit) only restrict roles subject to RLS; superusers or BYPASSRLS roles can still UPDATE/DELETE. Update the migration comments to state that immutability is enforced by RLS plus application conventions (audit.ts exposing logAuditEvent/getAuditEvents) and Supabase role controls, and optionally implement defense‑in‑depth by adding a TRIGGER on audit_events that raises an error on UPDATE/DELETE or by revoking UPDATE/DELETE privileges for non‑superuser roles.
```yaml
environment:
  name: production
  url: ${{ secrets.PROD_API_URL }}
```
🧩 Analysis chain
🌐 Web query:
GitHub Actions environment.url secrets context support documentation
💡 Result:
jobs.<job_id>.environment.url does not support the secrets context. GitHub’s docs list the only supported contexts for environment.url as:
`github`, `needs`, `strategy`, `matrix`, `job`, `runner`, `env`, `vars`, `steps`, `inputs` (not `secrets`) [1]
So this is not allowed:
```yaml
environment:
  name: prod
  url: ${{ secrets.PROD_URL }} # not supported
```

Use a non-secret source instead (for example, a step output or vars), which are supported:
```yaml
- id: deploy
  run: echo "url=https://example.com" >> "$GITHUB_OUTPUT"
environment:
  name: prod
  url: ${{ steps.deploy.outputs.url }}
```

[1]
If you want environment-scoped configuration, note that **environment variables** you store in an Environment are available via the `vars` context (and environment secrets are only available once the job references the environment and any protection rules pass) [2].
**Sources:** GitHub Docs on contexts (allowed contexts for `jobs.<job_id>.environment.url`) [1], and environments (environment vars via `vars`, secrets gated by environment rules) [2].
[1] ([docs.github.com](https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/accessing-contextual-information-about-workflow-runs?ref=engineering-empathy-co&utm_source=openai))
[2] ([docs.github.com](https://docs.github.com/en/actions/reference/environments?utm_source=openai))
Citations:
- 1: https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/accessing-contextual-information-about-workflow-runs?ref=engineering-empathy-co&utm_source=openai
- 2: https://docs.github.com/en/actions/reference/environments?utm_source=openai
---
</details>
**`environment.url` cannot use the `secrets` context.**
GitHub Actions does not expose the `secrets` context in `jobs.<job_id>.environment.url`. The supported contexts are: `github`, `needs`, `strategy`, `matrix`, `job`, `runner`, `env`, `vars`, `steps`, and `inputs`.
Use one of the supported contexts instead—for example, `vars.PROD_API_URL` if the value is stored in an environment variable, or `steps.<id>.outputs.url` if set by a previous step.
<details>
<summary>🧰 Tools</summary>
<details>
<summary>🪛 actionlint (1.7.11)</summary>
[error] 82-82: context "secrets" is not allowed here. available contexts are "env", "github", "inputs", "job", "matrix", "needs", "runner", "steps", "strategy", "vars". see https://docs.github.com/en/actions/learn-github-actions/contexts#context-availability for more details
(expression)
</details>
</details>
<details>
<summary>🤖 Prompt for AI Agents</summary>
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/deploy-production.yml around lines 80 - 82, The workflow
sets environment.url using secrets.PROD_API_URL which is invalid because the
secrets context cannot be used in jobs.<job_id>.environment.url; update the job
to reference a supported context instead—for example, set environment.url to
vars.PROD_API_URL if you store the value in vars, or populate the URL in a prior
step and use steps.<step_id>.outputs.url; ensure you remove secrets.PROD_API_URL
and replace it with the chosen supported context (e.g., vars.PROD_API_URL or
steps.<step_id>.outputs.url) and, if using vars, add the PROD_API_URL to workflow
vars or repository variables so environment.name and environment.url resolve
correctly.
</details>
<!-- This is an auto-generated comment by CodeRabbit -->
```yaml
    needs: approval-gate
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          ref: ${{ needs.preflight.outputs.deploy_sha }}
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n .github/workflows/deploy-production.yml | head -250
```

Repository: Ryanakml/ChatFuse
Length of output: 11765
Add preflight as a direct dependency for all jobs that reference its outputs.
In GitHub Actions, jobs can only access outputs from jobs listed directly in their needs field. The following jobs reference needs.preflight.outputs.deploy_sha without declaring preflight as a direct dependency, which will cause the workflow to fail at runtime:
- `deploy-canary` (references output at lines 109, 132) — currently `needs: approval-gate`
- `promote` (references output at lines 194, 204, 221) — currently `needs: canary-validation`
- `rollback` (references output at line 249) — currently `needs: canary-validation`
Even though these jobs have transitive dependencies on preflight through the job chain, GitHub Actions does not allow accessing outputs from indirect dependencies. All three jobs must explicitly include preflight in their needs array.
Suggested fix
```diff
   deploy-canary:
     name: Deploy Canary
     runs-on: ubuntu-latest
-    needs: approval-gate
+    needs: [approval-gate, preflight]
@@
   promote:
     name: Promote to 100%
     runs-on: ubuntu-latest
-    needs: canary-validation
+    needs: [canary-validation, preflight]
@@
   rollback:
     name: Automated Rollback
     runs-on: ubuntu-latest
-    needs: canary-validation
+    needs: [canary-validation, preflight]
```
🪛 actionlint (1.7.11)
[error] 109-109: property "preflight" is not defined in object type {approval-gate: {outputs: {}; result: string}}
(expression)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/deploy-production.yml around lines 104 - 109, Jobs
deploy-canary, promote, and rollback are reading
needs.preflight.outputs.deploy_sha but do not list preflight as a direct
dependency, which breaks at runtime; update each job's needs to include
preflight (e.g., change deploy-canary's needs from "approval-gate" to an array
["approval-gate","preflight"], and similarly add "preflight" to the needs array
for promote (in addition to canary-validation) and rollback (in addition to
canary-validation)) so they can legally access
needs.preflight.outputs.deploy_sha.
```yaml
      # ── REPLACE THIS STEP WITH YOUR PROVIDER ──
      - name: Deploy canary API
        run: |
          echo "[PLACEHOLDER] Deploying canary slice to production..."
          echo "Replace with your provider's canary deploy command."
          echo "Example (Fly.io): fly deploy --app wa-chat-api-prod --strategy canary"
          echo "Example (k8s/Helm): helm upgrade --set image.tag=$SHA --set replicaCount.canary=1"
          echo "Example (no-native-canary): Deploy to a separate 'api-canary' service; route 10% traffic via load balancer."
        env:
          SHA: ${{ needs.preflight.outputs.deploy_sha }}
          PROD_SUPABASE_URL: ${{ secrets.PROD_SUPABASE_URL }}
          PROD_SUPABASE_SERVICE_ROLE_KEY: ${{ secrets.PROD_SUPABASE_SERVICE_ROLE_KEY }}
          PROD_REDIS_URL: ${{ secrets.PROD_REDIS_URL }}
          PROD_OPENAI_API_KEY: ${{ secrets.PROD_OPENAI_API_KEY }}
          PROD_GEMINI_API_KEY: ${{ secrets.PROD_GEMINI_API_KEY }}
          PROD_WHATSAPP_PHONE_NUMBER_ID: ${{ secrets.PROD_WHATSAPP_PHONE_NUMBER_ID }}
          PROD_WHATSAPP_ACCESS_TOKEN: ${{ secrets.PROD_WHATSAPP_ACCESS_TOKEN }}
          PROD_WHATSAPP_APP_SECRET: ${{ secrets.PROD_WHATSAPP_APP_SECRET }}
          PROD_WHATSAPP_VERIFY_TOKEN: ${{ secrets.PROD_WHATSAPP_VERIFY_TOKEN }}
```
This pipeline can succeed without changing production.
The canary deploy, promote, and rollback steps are all placeholder echoes. In the current state, a green run does not mean any revision was deployed or reverted.
Also applies to: 196-205, 236-245
🧰 Tools
🪛 actionlint (1.7.11)
[error] 132-132: property "preflight" is not defined in object type {approval-gate: {outputs: {}; result: string}}
(expression)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/deploy-production.yml around lines 123 - 141, The "Deploy
canary API" workflow step currently contains only placeholder echo commands, so
CI can pass without performing any real deployment; replace the placeholder step
named "Deploy canary API" (and the analogous placeholder steps for
promote/rollback) with your provider's actual deployment commands (e.g., fly
deploy --app ... --strategy canary, helm upgrade with --set image.tag=$SHA and
canary replica settings, or deploy to a separate api-canary service and adjust
load balancer traffic), ensure the step uses the SHA and production secrets
already in env (PROD_*), make promote/rollback steps execute the real provider
promotion/rollback commands and fail the job on non-zero exit, and optionally
add a manual approval/confirmation job before promoting to full production so
runs cannot silently claim production changes.
```yaml
rollback:
  name: Automated Rollback
  runs-on: ubuntu-latest
  needs: canary-validation
  if: failure()
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n .github/workflows/deploy-production.yml | head -250
```

Repository: Ryanakml/ChatFuse
Length of output: 11765
Rollback is skipped when the canary deploy fails before validation.
If deploy-canary fails (e.g., the health check at line 159), then canary-validation is automatically skipped because it depends on deploy-canary. Since the rollback job only depends on canary-validation and uses if: failure(), it is also skipped—the workflow never triggers a rollback despite a deployment failure.
The rollback needs to depend directly on deploy-canary and check both jobs for failure:
Suggested fix

```diff
 rollback:
   name: Automated Rollback
   runs-on: ubuntu-latest
-  needs: canary-validation
-  if: failure()
+  needs: [preflight, deploy-canary, canary-validation]
+  if: ${{ always() && (needs.deploy-canary.result == 'failure' || needs.canary-validation.result == 'failure') }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/deploy-production.yml around lines 227 - 231, The rollback
job currently only needs canary-validation and uses if: failure(), so it is
skipped when deploy-canary fails before validation; update the rollback job
(named "rollback") to depend on both deploy-canary and canary-validation (set
needs to include both job names) and change its if condition to explicitly
trigger when either dependency fails, e.g. check needs.deploy-canary.result ==
'failure' or needs.canary-validation.result == 'failure' (or combine with
failure() as appropriate) so the rollback runs whenever deploy-canary or
canary-validation fails.
```yaml
environment:
  name: staging
  url: ${{ secrets.STAGING_API_URL }}
```
🧩 Analysis chain
🌐 Web query:
GitHub Actions environment url secrets context allowed
💡 Result:
In jobs.<job_id>.environment.url, GitHub Actions lets you use an expression, but the secrets context is not allowed.
Allowed contexts for jobs.<job_id>.environment.url are: github, needs, strategy, matrix, job, runner, env, vars, steps, inputs (and not secrets). [1]
If you need to build a URL dynamically, use one of the allowed contexts (commonly a prior step’s output via steps.<id>.outputs...)—and avoid embedding secrets in URLs. [1]
Sources:
[1] GitHub Docs — Contexts reference (property table for jobs.<job_id>.environment.url) (docs.github.com)
🌐 Web query:
GitHub Actions environment.url available contexts
💡 Result:
For the workflow key jobs.<job_id>.environment.url, GitHub Actions expressions can use these contexts:
github, needs, strategy, matrix, job, runner, env, vars, steps, inputs. [1]
Notably, secrets is not available in jobs.<job_id>.environment.url. [1]
Sources:
[1] GitHub Docs — Contexts reference (Context availability table) (docs.github.com)
🏁 Script executed:

```shell
# Check if there are any workflow run logs or related documentation in the repo
find . -type f -name "*.md" -o -name "*.yml" -o -name "*.yaml" | grep -E "(workflow|deploy|github)" | head -20
```

Repository: Ryanakml/ChatFuse
Length of output: 255
🏁 Script executed:

```shell
# Look for the actual deploy-staging.yml file to understand the context better
find . -type f -name "deploy-staging.yml" -exec cat {} \;
```

Repository: Ryanakml/ChatFuse
Length of output: 8775
environment.url cannot use the secrets context.
GitHub Actions does not allow the secrets context in jobs.<job_id>.environment.url. The allowed contexts are: github, needs, strategy, matrix, job, runner, env, vars, steps, and inputs. Use vars.STAGING_API_URL instead, or output the URL from a prior step.
Proposed fix

```diff
 environment:
   name: staging
-  url: ${{ secrets.STAGING_API_URL }}
+  url: ${{ vars.STAGING_API_URL }}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
environment:
  name: staging
  url: ${{ vars.STAGING_API_URL }}
```
🧰 Tools
🪛 actionlint (1.7.11)
[error] 68-68: context "secrets" is not allowed here. available contexts are "env", "github", "inputs", "job", "matrix", "needs", "runner", "steps", "strategy", "vars". see https://docs.github.com/en/actions/learn-github-actions/contexts#context-availability for more details
(expression)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/deploy-staging.yml around lines 66 - 68, The
environment.url entry is using secrets.STAGING_API_URL which is not allowed;
replace the secrets context with an allowed source (for example use
vars.STAGING_API_URL or set and emit the URL from a prior step and reference it
via steps.<step_id>.outputs.url) so update the environment.url value to use
vars.STAGING_API_URL or the appropriate step output instead of
secrets.STAGING_API_URL.
```yaml
# ── REPLACE THIS STEP WITH YOUR PROVIDER ──
- name: Deploy API to staging
  run: |
    echo "[PLACEHOLDER] Deploying API service to staging..."
    echo "Replace this step with your hosting provider deploy command."
    echo "Example (Railway): railway up --service api --detach"
    echo "Example (Fly.io): fly deploy --app wa-chat-api-staging --remote-only"
    echo "Example (Render): curl -X POST '${{ secrets.RENDER_STAGING_DEPLOY_HOOK }}'"
  env:
    STAGING_SUPABASE_URL: ${{ secrets.STAGING_SUPABASE_URL }}
    STAGING_SUPABASE_SERVICE_ROLE_KEY: ${{ secrets.STAGING_SUPABASE_SERVICE_ROLE_KEY }}
    STAGING_REDIS_URL: ${{ secrets.STAGING_REDIS_URL }}
    STAGING_OPENAI_API_KEY: ${{ secrets.STAGING_OPENAI_API_KEY }}
    STAGING_GEMINI_API_KEY: ${{ secrets.STAGING_GEMINI_API_KEY }}
    STAGING_WHATSAPP_PHONE_NUMBER_ID: ${{ secrets.STAGING_WHATSAPP_PHONE_NUMBER_ID }}
    STAGING_WHATSAPP_ACCESS_TOKEN: ${{ secrets.STAGING_WHATSAPP_ACCESS_TOKEN }}
    STAGING_WHATSAPP_APP_SECRET: ${{ secrets.STAGING_WHATSAPP_APP_SECRET }}
    STAGING_WHATSAPP_VERIFY_TOKEN: ${{ secrets.STAGING_WHATSAPP_VERIFY_TOKEN }}

# ── REPLACE THIS STEP WITH YOUR PROVIDER ──
- name: Deploy Worker to staging
  run: |
    echo "[PLACEHOLDER] Deploying Worker service to staging..."
    echo "Replace this step with your hosting provider deploy command."
```
These steps never deploy the build.
Both deploy steps only echo placeholder text, so the health check, smoke tests, and E2E suite are validating whatever is already running in staging rather than this commit. Wire the real provider commands before merge, or fail fast here so the workflow can't produce a false green.
If you want, I can sketch the provider-specific step once you share whether staging is on Railway, Fly.io, Render, or something else.
🛠️ Safer interim behavior

```diff
 - name: Deploy API to staging
   run: |
-    echo "[PLACEHOLDER] Deploying API service to staging..."
-    echo "Replace this step with your hosting provider deploy command."
-    echo "Example (Railway): railway up --service api --detach"
-    echo "Example (Fly.io): fly deploy --app wa-chat-api-staging --remote-only"
-    echo "Example (Render): curl -X POST '${{ secrets.RENDER_STAGING_DEPLOY_HOOK }}'"
+    echo "Staging deploy is not implemented." >&2
+    exit 1
@@
 - name: Deploy Worker to staging
   run: |
-    echo "[PLACEHOLDER] Deploying Worker service to staging..."
-    echo "Replace this step with your hosting provider deploy command."
+    echo "Staging worker deploy is not implemented." >&2
+    exit 1
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
# ── REPLACE THIS STEP WITH YOUR PROVIDER ──
- name: Deploy API to staging
  run: |
    echo "Staging deploy is not implemented." >&2
    exit 1
  env:
    STAGING_SUPABASE_URL: ${{ secrets.STAGING_SUPABASE_URL }}
    STAGING_SUPABASE_SERVICE_ROLE_KEY: ${{ secrets.STAGING_SUPABASE_SERVICE_ROLE_KEY }}
    STAGING_REDIS_URL: ${{ secrets.STAGING_REDIS_URL }}
    STAGING_OPENAI_API_KEY: ${{ secrets.STAGING_OPENAI_API_KEY }}
    STAGING_GEMINI_API_KEY: ${{ secrets.STAGING_GEMINI_API_KEY }}
    STAGING_WHATSAPP_PHONE_NUMBER_ID: ${{ secrets.STAGING_WHATSAPP_PHONE_NUMBER_ID }}
    STAGING_WHATSAPP_ACCESS_TOKEN: ${{ secrets.STAGING_WHATSAPP_ACCESS_TOKEN }}
    STAGING_WHATSAPP_APP_SECRET: ${{ secrets.STAGING_WHATSAPP_APP_SECRET }}
    STAGING_WHATSAPP_VERIFY_TOKEN: ${{ secrets.STAGING_WHATSAPP_VERIFY_TOKEN }}

# ── REPLACE THIS STEP WITH YOUR PROVIDER ──
- name: Deploy Worker to staging
  run: |
    echo "Staging worker deploy is not implemented." >&2
    exit 1
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/deploy-staging.yml around lines 87 - 111, The workflow's
"Deploy API to staging" and "Deploy Worker to staging" steps only echo
placeholders and do not deploy the build, so replace these placeholder runs with
real provider deploy commands (e.g., Railway, Fly.io, Render) that push the
current commit/artifacts and use the existing env secrets (STAGING_*);
alternatively, if staging provider isn't available yet, fail fast by making
those steps exit non‑zero to prevent false-positive downstream tests. Locate the
steps named "Deploy API to staging" and "Deploy Worker to staging" and update
their run blocks to invoke your hosting CLI/HTTP deploy hook (using the
STAGING_* secrets) or change them to a failing command so the workflow cannot
proceed without a real deploy.
```typescript
export const policyChain = RunnableLambda.from(async (state: AgentState) => {
  // Simple mock policy enforcement
  // --- Input-side: prompt injection mitigation ---
  const inputStr =
    typeof state.originalInput === 'string'
      ? state.originalInput
      : JSON.stringify(state.originalInput ?? '');

  const isInjectionAttempt = PROMPT_INJECTION_PATTERNS.some((pattern) => pattern.test(inputStr));

  if (isInjectionAttempt) {
    return {
      ...state,
      isSafe: false,
      intent: 'BLOCKED',
      finalResponse: BLOCKED_RESPONSE,
    } as AgentState;
  }
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
# Verify where policyChain is wired relative to the rest of the LangChain steps.
PIPELINE=$(fd 'pipeline\.ts$' packages/llm/src/langchain | head -n1)
echo "Inspecting: $PIPELINE"
rg -n -C2 'normalizationChain|classificationChain|routerChain|compositionChain|policyChain' "$PIPELINE"
```

Repository: Ryanakml/ChatFuse
Length of output: 924
Move prompt-injection check to the start of the pipeline.
policyChain is currently the last chain in processMessagePipeline (after retrieval, classification, routing, and composition). The input-side prompt-injection guard at lines 34–50 runs too late—injected content can still reach retrieval queries, router decisions, and composition logic before being marked BLOCKED. Insert this check immediately after normalization (or at the pipeline start) to prevent injection patterns from influencing downstream processing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/llm/src/langchain/chains/policy.ts` around lines 34 - 50, The
prompt-injection detection currently inside policyChain (the RunnableLambda.from
block that uses PROMPT_INJECTION_PATTERNS, sets isSafe/intent/finalResponse to
BLOCKED_RESPONSE on AgentState) must run at the very start of the message
pipeline instead of at the end; move or extract that input-side check so it
executes immediately after normalization (or as the first chain in
processMessagePipeline) to short-circuit downstream retrieval, routing,
classification and composition logic when a match is found. Locate the
PROMPT_INJECTION_PATTERNS check and its early-return AgentState mutation in
policyChain and either (a) relocate that exact block to a new pre-processing
Runnable added first in processMessagePipeline, or (b) refactor it into a small
helper (e.g., isPromptInjection(inputStr)) and invoke it at pipeline start to
return the BLOCKED_RESPONSE AgentState before any retrieval/router/composition
runs.
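Option (b) from the prompt can be sketched as a small standalone predicate that runs before any downstream chain. The pattern list below is illustrative only, not the project's actual `PROMPT_INJECTION_PATTERNS` export, and `blockedState` is a hypothetical helper:

```typescript
// Sketch: extract the input-side guard so it can run as the FIRST pipeline step.
// These patterns are stand-ins; reuse the real PROMPT_INJECTION_PATTERNS module.
const PROMPT_INJECTION_PATTERNS: RegExp[] = [
  /ignore (all )?previous instructions/i,
  /you are now (?:in )?developer mode/i,
];

export function isPromptInjection(input: unknown): boolean {
  // Non-string inputs are serialized, mirroring the existing policyChain logic.
  const inputStr = typeof input === 'string' ? input : JSON.stringify(input ?? '');
  return PROMPT_INJECTION_PATTERNS.some((pattern) => pattern.test(inputStr));
}

// At pipeline start, short-circuit before retrieval/routing/composition, e.g.:
// if (isPromptInjection(state.originalInput)) return blockedState(state);
```

Because the helper is pure, it can be wrapped in a `RunnableLambda` and prepended to `processMessagePipeline` without touching the existing chains.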
```sql
create or replace function delete_user_data(
  target_user_id uuid,
  actor text default 'system',
  actor_role_val text default 'admin'
)
returns jsonb
language plpgsql
security definer
as $$
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, locate and examine the migration file
find . -name "202603090100_l1_audit_data_deletion.sql" -type f | head -5
```

Repository: Ryanakml/ChatFuse
Length of output: 122
🏁 Script executed:

```shell
# Get the file size and content
wc -l supabase/migrations/202603090100_l1_audit_data_deletion.sql
```

Repository: Ryanakml/ChatFuse
Length of output: 124
🏁 Script executed:

```shell
# Read the full migration file
cat -n supabase/migrations/202603090100_l1_audit_data_deletion.sql
```

Repository: Ryanakml/ChatFuse
Length of output: 5504
Harden this SECURITY DEFINER function before shipping.
In PostgreSQL, a new SECURITY DEFINER function defaults to executable by PUBLIC, and this one runs with elevated rights. It uses unqualified table names without a fixed search_path—an attacker could inject malicious tables in a different schema to intercept DELETE operations. Additionally, the audit_events table immutability is not enforced at the database layer; RLS policies don't protect against superusers or the table owner, and there's no trigger preventing UPDATE/DELETE on audit records.
Required fixes:
- Pin `search_path` to the correct schema
- Schema-qualify all table references
- Revoke `EXECUTE` from `PUBLIC`, `anon`, and `authenticated`; grant only to `service_role`
- Add a trigger to the `audit_events` table to prevent UPDATE/DELETE operations
Example hardening (search_path and execute grants)

```diff
 returns jsonb
 language plpgsql
 security definer
+set search_path = public
 as $$
@@
 $$;
+
+revoke all on function delete_user_data(uuid, text, text) from public, anon, authenticated;
+grant execute on function delete_user_data(uuid, text, text) to service_role;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@supabase/migrations/202603090100_l1_audit_data_deletion.sql` around lines 50
- 58, Harden the SECURITY DEFINER function delete_user_data by explicitly
setting and locking the search_path at the top of the function (e.g., set local
search_path = 'your_schema', pg_temp; or similar) and schema-qualify every table
reference inside the function (e.g., your_schema.users,
your_schema.user_profiles, your_schema.audit_events) so no unqualified lookup
can be hijacked; after deploying the function, revoke EXECUTE on the function
from PUBLIC and any anonymous/authenticated roles and only GRANT EXECUTE to the
service_role; additionally create a database-level trigger on
your_schema.audit_events that raises an exception to prevent UPDATE or DELETE
(i.e., an BEFORE UPDATE OR DELETE trigger that aborts) to enforce immutability
at the DB layer.
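The immutability trigger the prompt asks for can be sketched as follows. The `public` schema, the `audit_events` table name, and the trigger/function names are assumptions here; adjust them to the migration's actual schema:

```sql
-- Sketch: enforce append-only audit_events at the database layer.
create or replace function public.prevent_audit_mutation()
returns trigger
language plpgsql
as $$
begin
  raise exception 'audit_events is append-only; % is not allowed', tg_op;
end;
$$;

create trigger audit_events_immutable
  before update or delete on public.audit_events
  for each row execute function public.prevent_audit_mutation();
```

Note this does not stop a superuser from dropping the trigger itself, but it does block ordinary role-level UPDATE/DELETE regardless of RLS.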
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/api/tsconfig.json (1)
28-51: ⚠️ Potential issue | 🟠 Major

Don't pull sibling package `src` trees into this composite project.

With Line 28 enabling composite mode, Line 51 pulls `packages/shared` and `packages/config` source directly into `apps/api`'s compilation, bypassing project references entirely. This defeats composite-project isolation, can duplicate emit into `apps/api/dist`, and breaks the intended build graph—especially since `packages/shared` is already composite while `packages/config` is not. Additionally, path aliases (Lines 16–17) point to source files rather than compiled entrypoints, reinforcing direct dependency on source instead of built artifacts.

Replace direct source includes with project `references` and consume sibling packages through their public entrypoints:

Suggested direction

```diff
 "compilerOptions": {
 @@
   "skipLibCheck": true
 },
-"include": ["src", "../../packages/shared/src/**/*", "../../packages/config/src/**/*"],
+"include": ["src"],
+"references": [
+  { "path": "../../packages/shared" }
+]
```

Update path aliases to reference built/public entrypoints:

```diff
 "paths": {
-  "@wa-chat/config": ["../../packages/config/src/index.ts"],
-  "@wa-chat/shared": ["../../packages/shared/src/index.ts"]
+  "@wa-chat/shared": ["../../packages/shared/dist/index.d.ts"]
 }
```

`packages/config` should either enable composite mode and be added to `references`, or `apps/api` should depend on its published entrypoint instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/tsconfig.json` around lines 28 - 51: the tsconfig currently enables "composite": true but pulls sibling package source via the "include" entries (`../../packages/shared/src/**/*`, `../../packages/config/src/**/*`) and uses path aliases that point at source; remove those sibling "src" includes and stop pointing aliases at source files, and instead add proper project references in "references" pointing at the sibling packages' tsconfig files (e.g., `../../packages/shared/tsconfig.json` and `../../packages/config/tsconfig.json`) so the composite build graph is used; for `packages/config` either enable "composite": true or change the alias to consume its built/public entrypoint (dist or package main) rather than its src; update the path aliases (the ones around lines 16–17) to point to compiled/public entrypoints, not source folders.
🧹 Nitpick comments (2)
packages/config/src/env.test.js (1)
67-75: Remove cosmetic console output or wrap in verbose logging mode.

The console.log statements (lines 76-84 in the source) provide only visual feedback after tests pass and may clutter CI logs. Consider removing them or wrapping in a conditional verbose flag.
Note: The `sourceMappingURL` comment is correct and expected—this file is compiled from TypeScript, so the source map reference is appropriate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/config/src/env.test.js` around lines 67 - 75: Remove or guard the cosmetic console.log output that prints colored test summaries: the color variables green, cyan, reset and the console.log calls that print "ENV VALIDATION TEST", the three "✓ ... validation passed" lines, and "All env tests passed". Either delete these console.log lines entirely or wrap them behind a verbosity flag (e.g., check process.env.VERBOSE or a DEBUG constant) so they only run when verbose logging is enabled; keep the existing sourceMappingURL comment untouched.

packages/shared/src/pii.test.js (1)
41-54: Exercise the full blocklist, not just three keys.

`packages/shared/src/pii.ts:10-25` currently redacts 13 keys, including camelCase aliases and message fields. Locking this test to only `phone_number`, `display_name`, and `email` leaves the remaining keys unverified.
```diff
 describe('maskPii', () => {
-  it('redacts known PII fields by key', () => {
-    const input = {
-      phone_number: '+628123456789',
-      display_name: 'John Doe',
-      email: 'john@example.com',
-      status: 'open',
-    };
-    const result = maskPii(input);
-    expect(result['phone_number']).toBe(MASKED_FIELD);
-    expect(result['display_name']).toBe(MASKED_FIELD);
-    expect(result['email']).toBe(MASKED_FIELD);
-    // Non-PII field preserved
-    expect(result['status']).toBe('open');
-  });
+  it('redacts every blocklisted key', () => {
+    const piiKeys = [
+      'phone_number',
+      'phoneNumber',
+      'phone',
+      'email',
+      'display_name',
+      'displayName',
+      'name',
+      'body',
+      'content',
+      'message',
+      'sender',
+      'from',
+      'to',
+    ];
+    const input = Object.fromEntries(
+      piiKeys.map((key) => [key, `${key}-value`]).concat([['status', 'open']]),
+    );
+    const result = maskPii(input);
+    for (const key of piiKeys) {
+      expect(result[key]).toBe(MASKED_FIELD);
+    }
+    expect(result['status']).toBe('open');
+  });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared/src/pii.test.js` around lines 41 - 54, The test only asserts three keys but should exercise the full PII blocklist: update the test for maskPii to be table-driven over every PII key defined in the blocklist (including camelCase aliases and message fields referenced in the blocklist in maskPii) and assert each key's value equals MASKED_FIELD after masking; also keep a non-PII field (e.g., status) and assert it remains unchanged. Locate usage of maskPii and MASKED_FIELD in the test and replace the hard-coded expects with a loop/array of all blocklist keys (the same identifiers used in pii.ts) to ensure full coverage. Ensure the test fails if any blocklist key is not masked and that the non-PII assertion remains.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/config/src/env.test.js`:
- Around line 24-29: The test is brittle because it sets the optional key using
OPTIONAL_ENV_VARS[0] but asserts against the hardcoded LANGCHAIN_TRACING_V2;
update the assertion to use the same dynamic key so they stay in sync — e.g.,
after creating optionalEnv (which uses OPTIONAL_ENV_VARS[0]) call validateEnv
and assert that optionalResult[OPTIONAL_ENV_VARS[0]] equals '1' (or
alternatively set the optionalEnv using the explicit 'LANGCHAIN_TRACING_V2' key
and keep the existing assertion), making sure to reference OPTIONAL_ENV_VARS,
optionalEnv, validateEnv, and optionalResult so the test will not break if
OPTIONAL_ENV_VARS order changes.
In `@packages/shared/src/pii.test.js`:
- Around line 24-39: Add a regression test in pii.test.js that passes a string
containing both an email and a phone (e.g., "Contact user@example.com or call
+628123456789") and assert that maskString returns a string containing both
MASKED_EMAIL and MASKED_PHONE and contains neither original PII; then fix
maskString so it does not return early after doing the email replacement—ensure
both replacements run (apply email masking and then phone masking, or run both
regex replacements on the input) so mixed email+phone strings get both masked.
In `@packages/shared/src/telemetry.test.js`:
- Around line 21-25: The test currently calls setupTelemetry('test-service') and
asserts before attempting shutdown and swallows shutdown errors; change the test
to always perform teardown and propagate shutdown failures by wrapping the
assertion in a try/finally (or using Promise.finally) so sdk.shutdown() is
called unconditionally and its rejection is not caught. Ensure you reference
setupTelemetry and sdk.shutdown and let any shutdown error fail the test rather
than being ignored.
- Line 28: Remove the transpiled duplicate telemetry.test.js (the file that
contains a source map trailer) so only the TypeScript source telemetry.test.ts
remains; delete the telemetry.test.js file from the repository (e.g., git rm)
and ensure no build step emits tests into src (verify TypeScript emits to
dist/), then run the test suite to confirm setupTelemetry() is invoked only
once.
---
Outside diff comments:
In `@apps/api/tsconfig.json`:
- Around line 28-51: Your tsconfig currently enables "composite": true but pulls
sibling package source via the "include" entries
(../../packages/shared/src/**/*, ../../packages/config/src/**/*) and uses path
aliases that point at source; remove those sibling "src" includes and stop
pointing aliases at source files, and instead add proper project references in
"references" referencing the sibling packages' tsconfig (e.g., ../..
/packages/shared/tsconfig.json and packages/config/tsconfig.json) so the
composite build graph is used; for packages/config either enable "composite":
true or change the alias to consume its built/public entrypoint (dist or package
main) rather than its src; update the path aliases (the ones around lines 16–17)
to point to compiled/public entrypoints, not source folders.
---
Nitpick comments:
In `@packages/config/src/env.test.js`:
- Around line 67-75: Remove or guard the cosmetic console.log output that prints
colored test summaries: the color variables green, cyan, reset and the
console.log calls that print "ENV VALIDATION TEST", the three "✓ ... validation
passed" lines, and "All env tests passed". Either delete these console.log lines
entirely or wrap them behind a verbosity flag (e.g., check process.env.VERBOSE
or a DEBUG constant) so they only run when verbose logging is enabled; keep the
existing sourceMappingURL comment untouched.
In `@packages/shared/src/pii.test.js`:
- Around line 41-54: The test only asserts three keys but should exercise the
full PII blocklist: update the test for maskPii to be table-driven over every
PII key defined in the blocklist (including camelCase aliases and message fields
referenced in the blocklist in maskPii) and assert each key's value equals
MASKED_FIELD after masking; also keep a non-PII field (e.g., status) and assert
it remains unchanged. Locate usage of maskPii and MASKED_FIELD in the test and
replace the hard-coded expects with a loop/array of all blocklist keys (the same
identifiers used in pii.ts) to ensure full coverage. Ensure the test fails if
any blocklist key is not masked and that the non-PII assertion remains.
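The telemetry teardown advice above (try/finally with an un-swallowed shutdown) can be sketched as a small reusable pattern. The `ShutdownCapable` interface and `telemetryTestBody` name are illustrative, not part of the project's API:

```typescript
// Sketch: run assertions in try, shut the SDK down in finally, and let
// both assertion failures and shutdown rejections propagate to the runner.
interface ShutdownCapable {
  shutdown(): Promise<void>;
}

export async function telemetryTestBody(
  sdk: ShutdownCapable,
  assertFn: (sdk: ShutdownCapable) => void,
): Promise<void> {
  try {
    assertFn(sdk); // assertion failures propagate...
  } finally {
    // ...but shutdown always runs; a rejection here also fails the test.
    await sdk.shutdown();
  }
}
```

Wrapping the real `setupTelemetry('test-service')` result this way keeps teardown unconditional without a bare `.catch(() => {})` hiding shutdown errors.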
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3e015d0e-ebfd-4377-960a-4e20165d1c95
⛔ Files ignored due to path filters (8)
- `packages/config/src/env.test.d.ts.map` is excluded by `!**/*.map`
- `packages/config/src/env.test.js.map` is excluded by `!**/*.map`
- `packages/shared/src/pii.test.d.ts.map` is excluded by `!**/*.map`
- `packages/shared/src/pii.test.js.map` is excluded by `!**/*.map`
- `packages/shared/src/secret-rotation.test.d.ts.map` is excluded by `!**/*.map`
- `packages/shared/src/secret-rotation.test.js.map` is excluded by `!**/*.map`
- `packages/shared/src/telemetry.test.d.ts.map` is excluded by `!**/*.map`
- `packages/shared/src/telemetry.test.js.map` is excluded by `!**/*.map`
📒 Files selected for processing (17)
- `apps/api/tsconfig.json`
- `apps/api/tsconfig.tsbuildinfo`
- `apps/dashboard/tsconfig.json`
- `apps/worker/tsconfig.json`
- `apps/worker/tsconfig.tsbuildinfo`
- `packages/config/src/env.test.d.ts`
- `packages/config/src/env.test.js`
- `packages/llm/tsconfig.json`
- `packages/llm/tsconfig.tsbuildinfo`
- `packages/shared/src/pii.test.d.ts`
- `packages/shared/src/pii.test.js`
- `packages/shared/src/secret-rotation.test.d.ts`
- `packages/shared/src/secret-rotation.test.js`
- `packages/shared/src/telemetry.test.d.ts`
- `packages/shared/src/telemetry.test.js`
- `packages/shared/tsconfig.json`
- `packages/shared/tsconfig.tsbuildinfo`
✅ Files skipped from review due to trivial changes (3)
- packages/shared/src/telemetry.test.d.ts
- packages/config/src/env.test.d.ts
- packages/shared/src/secret-rotation.test.d.ts
```js
const optionalEnv = {
  ...baseEnv,
  [OPTIONAL_ENV_VARS[0]]: '1',
};
const optionalResult = validateEnv(optionalEnv);
assert.equal(optionalResult.LANGCHAIN_TRACING_V2, '1');
```
Brittle test: hardcoded key vs. indexed access mismatch.
Line 26 uses OPTIONAL_ENV_VARS[0] to set the value, but line 29 asserts against the hardcoded key LANGCHAIN_TRACING_V2. If the array order changes, this test will fail with a confusing error.
Proposed fix to use consistent indexing

```diff
 const optionalEnv = {
   ...baseEnv,
   [OPTIONAL_ENV_VARS[0]]: '1',
 };
 const optionalResult = validateEnv(optionalEnv);
-assert.equal(optionalResult.LANGCHAIN_TRACING_V2, '1');
+assert.equal(optionalResult[OPTIONAL_ENV_VARS[0]], '1');
```

🤖 Prompt for AI Agents
+assert.equal(optionalResult[OPTIONAL_ENV_VARS[0]], '1');🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/config/src/env.test.js` around lines 24 - 29, The test is brittle
because it sets the optional key using OPTIONAL_ENV_VARS[0] but asserts against
the hardcoded LANGCHAIN_TRACING_V2; update the assertion to use the same dynamic
key so they stay in sync — e.g., after creating optionalEnv (which uses
OPTIONAL_ENV_VARS[0]) call validateEnv and assert that
optionalResult[OPTIONAL_ENV_VARS[0]] equals '1' (or alternatively set the
optionalEnv using the explicit 'LANGCHAIN_TRACING_V2' key and keep the existing
assertion), making sure to reference OPTIONAL_ENV_VARS, optionalEnv,
validateEnv, and optionalResult so the test will not break if OPTIONAL_ENV_VARS
order changes.
```js
describe('maskString', () => {
  it('masks email addresses', () => {
    const result = maskString('Contact user@example.com for info');
    expect(result).toContain(MASKED_EMAIL);
    expect(result).not.toContain('user@example.com');
  });
  it('masks phone numbers', () => {
    const result = maskString('Call +628123456789 now');
    expect(result).toContain(MASKED_PHONE);
    expect(result).not.toContain('+628123456789');
  });
  it('returns non-PII strings unchanged', () => {
    expect(maskString('hello world')).toBe('hello world');
    expect(maskString('order status: shipped')).toBe('order status: shipped');
  });
});
```
Add a mixed email+phone regression case.
The current implementation in packages/shared/src/pii.ts:53-61 returns after the email replacement, so a string containing both an email and a phone number will leave the phone number unmasked. This suite won't catch that leak today.
🔎 Suggested regression test
describe('maskString', () => {
it('masks email addresses', () => {
const result = maskString('Contact user@example.com for info');
expect(result).toContain(MASKED_EMAIL);
expect(result).not.toContain('user@example.com');
});
it('masks phone numbers', () => {
const result = maskString('Call +628123456789 now');
expect(result).toContain(MASKED_PHONE);
expect(result).not.toContain('+628123456789');
});
+ it('masks email addresses and phone numbers when both appear', () => {
+ const result = maskString('Contact user@example.com or +628123456789');
+ expect(result).toContain(MASKED_EMAIL);
+ expect(result).toContain(MASKED_PHONE);
+ expect(result).not.toContain('user@example.com');
+ expect(result).not.toContain('+628123456789');
+ });
it('returns non-PII strings unchanged', () => {
expect(maskString('hello world')).toBe('hello world');
expect(maskString('order status: shipped')).toBe('order status: shipped');
});
});

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/shared/src/pii.test.js` around lines 24 - 39, Add a regression test
in pii.test.js that passes a string containing both an email and a phone (e.g.,
"Contact user@example.com or call +628123456789") and assert that maskString
returns a string containing both MASKED_EMAIL and MASKED_PHONE and contains
neither original PII; then fix maskString so it does not return early after
doing the email replacement—ensure both replacements run (apply email masking
and then phone masking, or run both regex replacements on the input) so mixed
email+phone strings get both masked.
it('should initialize OpenTelemetry SDK gracefully', () => {
  const sdk = setupTelemetry('test-service');
  expect(sdk).toBeDefined();
  // Attempt shutdown to avoid leaking in tests
  return sdk.shutdown().catch(() => { });
Make teardown unconditional and let shutdown failures fail the test.
Line 25 suppresses teardown errors, and cleanup only happens after the assertion on Line 23. If this test fails earlier, it can leave telemetry globals registered for the rest of the suite.
Proposed fix
- it('should initialize OpenTelemetry SDK gracefully', () => {
- const sdk = setupTelemetry('test-service');
- expect(sdk).toBeDefined();
- // Attempt shutdown to avoid leaking in tests
- return sdk.shutdown().catch(() => { });
- }, 15_000);
+ it('should initialize OpenTelemetry SDK gracefully', async () => {
+ const sdk = setupTelemetry('test-service');
+ try {
+ expect(sdk).toBeDefined();
+ }
+ finally {
+ await expect(sdk.shutdown()).resolves.toBeUndefined();
+ }
+ }, 15_000);

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-it('should initialize OpenTelemetry SDK gracefully', () => {
-  const sdk = setupTelemetry('test-service');
-  expect(sdk).toBeDefined();
-  // Attempt shutdown to avoid leaking in tests
-  return sdk.shutdown().catch(() => { });
+it('should initialize OpenTelemetry SDK gracefully', async () => {
+  const sdk = setupTelemetry('test-service');
+  try {
+    expect(sdk).toBeDefined();
+  }
+  finally {
+    await expect(sdk.shutdown()).resolves.toBeUndefined();
+  }
+}, 15_000);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/shared/src/telemetry.test.js` around lines 21 - 25, The test
currently calls setupTelemetry('test-service') and asserts before attempting
shutdown and swallows shutdown errors; change the test to always perform
teardown and propagate shutdown failures by wrapping the assertion in a
try/finally (or using Promise.finally) so sdk.shutdown() is called
unconditionally and its rejection is not caught. Ensure you reference
setupTelemetry and sdk.shutdown and let any shutdown error fail the test rather
than being ignored.
  return sdk.shutdown().catch(() => { });
}, 15_000);
});
//# sourceMappingURL=telemetry.test.js.map
(No newline at end of file)
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== telemetry test siblings =="
fd '^telemetry\.test\.(ts|js)$' packages/shared/src -a
echo
echo "== Vitest config include patterns =="
fd 'vitest(\.config)?\.(ts|js|mjs|cjs)$' -a . -x sh -c '
echo "--- $1 ---"
sed -n "1,220p" "$1" | grep -nE "include|exclude|testMatch|testRegex|\.(test|spec)\."
' sh {}
echo
echo "== package.json scripts mentioning vitest =="
fd '^package\.json$' -a . -x sh -c '
echo "--- $1 ---"
jq -r ".scripts // {}" "$1" | grep -n "vitest" || true
' sh {}

Repository: Ryanakml/ChatFuse
Length of output: 1930
🏁 Script executed:
# Check the content of both test files to see if they're duplicates
echo "=== telemetry.test.ts (first 30 lines) ==="
head -30 packages/shared/src/telemetry.test.ts
echo
echo "=== telemetry.test.js (entire file) ==="
cat packages/shared/src/telemetry.test.js

Repository: Ryanakml/ChatFuse
Length of output: 2451
🌐 Web query:
Vitest default include patterns test discovery behavior
💡 Result:
- Default test discovery (`test.include`): Vitest looks for files matching `**/*.{test,spec}.?(c|m)[jt]s?(x)` (e.g., `.test.ts`, `.spec.mjs`, `.test.jsx`). Patterns are resolved relative to `test.root` (defaults to `process.cwd()`), and Vitest resolves globs via tinyglobby. Setting `test.include` replaces the defaults; to extend, spread `configDefaults.include`. [1]
- Default exclusions (`test.exclude`): `['**/node_modules/**', '**/.git/**']`. Like `include`, these are resolved relative to the root and use tinyglobby. If you set `test.exclude` in config, it replaces defaults; to extend, spread `configDefaults.exclude`. [2]
- CLI interaction:
  - You can pass include globs positionally (e.g. `vitest **/*.test.js`). [1]
  - `--exclude` is additive: CLI `--exclude` patterns are appended to the configured `exclude` list (and it's the one option that doesn't fully override config when provided via CLI). [2]
Sources: [1] Vitest config: include https://vitest.dev/config/include, [2] Vitest config: exclude https://vitest.dev/config/exclude
🏁 Script executed:
# Check tsconfig and build config in packages/shared
echo "=== tsconfig in packages/shared ==="
find packages/shared -name "tsconfig*.json" -exec sh -c 'echo "--- $1 ---"; cat "$1"' sh {} \;
echo
echo "=== package.json build/scripts in packages/shared ==="
cat packages/shared/package.json | head -50

Repository: Ryanakml/ChatFuse
Length of output: 1416
Remove telemetry.test.js from packages/shared/src/.
Both telemetry.test.ts and telemetry.test.js exist in the same directory and contain identical test logic. The .js file is a transpiled copy (indicated by the source map trailer). Vitest's default test discovery pattern (**/*.{test,spec}.?(c|m)[jt]s?(x)) includes both .ts and .js extensions, so this test will run twice. This doubles the telemetry side effects from setupTelemetry() and wastes test execution time.
Remove the .js file from src; only the .ts source should be committed. TypeScript compilation is configured to emit to dist/, not src/.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/shared/src/telemetry.test.js` at line 28, Remove the transpiled
duplicate telemetry.test.js (the file that contains a source map trailer) so
only the TypeScript source telemetry.test.ts remains; delete the
telemetry.test.js file from the repository (e.g., git rm) and ensure no build
step emits tests into src (verify TypeScript emits to dist/), then run the test
suite to confirm setupTelemetry() is invoked only once.
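As a belt-and-braces complement to deleting the file, the test config can also be hardened against transpiled copies sneaking back into `src/`. This is a sketch of a possible `vitest.config.ts`, assuming the repo's workspace layout; the glob is illustrative:

```typescript
import { configDefaults, defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    // Spread the defaults so setting exclude does not wipe the built-in
    // node_modules/.git exclusions, then skip transpiled .js test copies
    // that may be emitted alongside their .ts sources under src/.
    exclude: [...configDefaults.exclude, 'packages/*/src/**/*.test.js'],
  },
});
```

This keeps Vitest's default discovery pattern intact while ensuring only the TypeScript sources run, even if a stray build artifact is committed again.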
Actionable comments posted: 1
🧹 Nitpick comments (1)
packages/shared/package.json (1)
7-12: Prefer `types` first and a `default` runtime target.

Because `packages/shared/src/index.ts` already funnels the public API through one entrypoint, this export can use the simpler `types` + `default` shape. The `import`-only runtime condition is less portable; `"types"` should appear first per the Node.js Packages documentation, and `"default"` is preferred over `"import"` when there is only one JavaScript entrypoint.

♻️ Suggested manifest shape

 "exports": {
   ".": {
-    "import": "./dist/index.js",
-    "types": "./dist/index.d.ts"
+    "types": "./dist/index.d.ts",
+    "default": "./dist/index.js"
   }
 },

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared/package.json` around lines 7 - 12, The package export shape is using an "import" runtime condition and places "types" second; update the exports for the shared package so "types" appears first and use a single runtime target named "default" (rather than "import") that points to the compiled JS. Edit the exports block in packages/shared/package.json to list "types": "./dist/index.d.ts" first and "default": "./dist/index.js" (matching the public entrypoint funneled by packages/shared/src/index.ts), removing or replacing the "import" condition.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/ci.yml:
- Around line 119-126: The CI job sets SUPABASE_SERVICE_ROLE_KEY but the app
code reads SUPABASE_SERVICE_KEY and integration tests require INTEGRATION=true;
update the workflow env block so names match the app or vice versa: either
change the workflow to export SUPABASE_SERVICE_KEY: ${{
secrets.TEST_SUPABASE_SERVICE_ROLE_KEY || '' }} (and add INTEGRATION: 'true' for
the integration test job) or change apps/api/src/auth.ts to read
process.env.SUPABASE_SERVICE_ROLE_KEY instead of SUPABASE_SERVICE_KEY; ensure
the CI env also includes INTEGRATION: 'true' for the integration test step so
queue-flow.integration.test.ts and supabase-persistence.integration.test.ts run.
---
Nitpick comments:
In `@packages/shared/package.json`:
- Around line 7-12: The package export shape is using an "import" runtime
condition and places "types" second; update the exports for the shared package
so "types" appears first and use a single runtime target named "default" (rather
than "import") that points to the compiled JS. Edit the exports block in
packages/shared/package.json to list "types": "./dist/index.d.ts" first and
"default": "./dist/index.js" (matching the public entrypoint funneled by
packages/shared/src/index.ts), removing or replacing the "import" condition.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1a4585c2-c94f-41e1-bb0f-b0259f14f594
📒 Files selected for processing (2)
.github/workflows/ci.yml
packages/shared/package.json
env:
  NODE_ENV: test
  # Integration tests may need these; they should be set as repo secrets
  # for branches that need real service connectivity.
  # Unit tests run without any env vars.
  SUPABASE_URL: ${{ secrets.TEST_SUPABASE_URL || '' }}
  SUPABASE_SERVICE_ROLE_KEY: ${{ secrets.TEST_SUPABASE_SERVICE_ROLE_KEY || '' }}
  REDIS_URL: ${{ secrets.TEST_REDIS_URL || '' }}
Environment variable mismatch and missing integration flag will prevent tests from running correctly.
Two issues in the test environment configuration:
1. Env var name mismatch: the workflow sets `SUPABASE_SERVICE_ROLE_KEY`, but `apps/api/src/auth.ts` reads `SUPABASE_SERVICE_KEY`. This causes auth initialization to fall back to the dummy value even when secrets are properly configured.
2. Missing `INTEGRATION` flag: integration tests in `apps/worker/src/__tests__/queue-flow.integration.test.ts` and `apps/api/src/__tests__/supabase-persistence.integration.test.ts` check for `INTEGRATION=true` before running live tests. Without this variable, all integration tests are skipped silently.
🔧 Proposed fix
env:
NODE_ENV: test
# Integration tests may need these; they should be set as repo secrets
# for branches that need real service connectivity.
# Unit tests run without any env vars.
SUPABASE_URL: ${{ secrets.TEST_SUPABASE_URL || '' }}
- SUPABASE_SERVICE_ROLE_KEY: ${{ secrets.TEST_SUPABASE_SERVICE_ROLE_KEY || '' }}
+ SUPABASE_SERVICE_KEY: ${{ secrets.TEST_SUPABASE_SERVICE_ROLE_KEY || '' }}
REDIS_URL: ${{ secrets.TEST_REDIS_URL || '' }}
+ # Set to 'true' when secrets are configured to enable integration tests
+ INTEGRATION: ${{ secrets.TEST_SUPABASE_URL && 'true' || 'false' }}

Alternatively, if `SUPABASE_SERVICE_ROLE_KEY` is the canonical name, update `apps/api/src/auth.ts` to read from that variable instead.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
env:
  NODE_ENV: test
  # Integration tests may need these; they should be set as repo secrets
  # for branches that need real service connectivity.
  # Unit tests run without any env vars.
  SUPABASE_URL: ${{ secrets.TEST_SUPABASE_URL || '' }}
-  SUPABASE_SERVICE_ROLE_KEY: ${{ secrets.TEST_SUPABASE_SERVICE_ROLE_KEY || '' }}
+  SUPABASE_SERVICE_KEY: ${{ secrets.TEST_SUPABASE_SERVICE_ROLE_KEY || '' }}
  REDIS_URL: ${{ secrets.TEST_REDIS_URL || '' }}
+  # Set to 'true' when secrets are configured to enable integration tests
+  INTEGRATION: ${{ secrets.TEST_SUPABASE_URL && 'true' || 'false' }}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/ci.yml around lines 119 - 126, The CI job sets
SUPABASE_SERVICE_ROLE_KEY but the app code reads SUPABASE_SERVICE_KEY and
integration tests require INTEGRATION=true; update the workflow env block so
names match the app or vice versa: either change the workflow to export
SUPABASE_SERVICE_KEY: ${{ secrets.TEST_SUPABASE_SERVICE_ROLE_KEY || '' }} (and
add INTEGRATION: 'true' for the integration test job) or change
apps/api/src/auth.ts to read process.env.SUPABASE_SERVICE_ROLE_KEY instead of
SUPABASE_SERVICE_KEY; ensure the CI env also includes INTEGRATION: 'true' for
the integration test step so queue-flow.integration.test.ts and
supabase-persistence.integration.test.ts run.
Summary by CodeRabbit
New Features
Improvements
Chores