Notification Improvements, Improved Auth Flows and Duplicate Elimination of Entries #58
fortune710 merged 59 commits into staging
Notification Updates, Font Improvements and OTP Feature for Phone Number Update
…and HTML formats, including status polling and download handling. Update dependencies in package.json and bun.lock for improved performance.
… management, improved error handling, and pagination for entry fetching. Update dependencies in package.json and bun.lock for better performance.
…rt and add background cleanup for export files after download. Enhance error handling in the frontend export process.
…xport process. Log specific errors for malformed timestamps and sign-out failures.
Implement async user data export queue and move export/delete to Privacy screen
… to Privacy screen"
Revert "Implement async user data export queue and move export/delete to Privacy screen"
…ination for entries and friendships, and improved error handling. Add export status polling and download capabilities in the frontend.
Fix export: paginate friendships, defer cleanup, guard unmount in download
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
Walkthrough: Adds large cross-cutting features: async user data export jobs (backend + frontend), password-reset/update flows, SaveLock provider with capture integration, environment-aware push token handling and DB migration, centralized frontend API client + SSE streaming, idempotent entry processing, realtime entry subscriptions, Twilio SDK SMS send, UI/typography/animation updates, and many tests and dependency updates.
Sequence Diagram(s)
sequenceDiagram
participant User
participant DetailsScreen
participant Backend
participant SaveLockProvider
participant CaptureScreen
User->>DetailsScreen: Tap "Save"
DetailsScreen->>Backend: POST /entries (include idempotencyKey)
Backend-->>DetailsScreen: 201 Created
DetailsScreen->>SaveLockProvider: lockSave()
SaveLockProvider-->>DetailsScreen: isSaveLocked = true
User->>CaptureScreen: Navigate / CaptureScreen mounts
CaptureScreen->>SaveLockProvider: unlockSave()
SaveLockProvider-->>CaptureScreen: isSaveLocked = false
sequenceDiagram
participant User
participant PrivacyUI
participant ApiClient
participant Backend
participant Poller
participant FileSystem
User->>PrivacyUI: Request "Export My Data"
PrivacyUI->>ApiClient: POST /{user_id}/export (uses apiFetch -> attaches token)
ApiClient->>Backend: Authenticated request
Backend-->>ApiClient: { job_id, status: pending }
ApiClient-->>PrivacyUI: job info
PrivacyUI->>Poller: start polling
Poller->>ApiClient: GET /{user_id}/export/{job_id}/status
ApiClient->>Backend: Authenticated status request
Backend-->>ApiClient: { status: completed }
PrivacyUI->>ApiClient: GET /{user_id}/export/{job_id}/download
ApiClient-->>FileSystem: stream/download file
PrivacyUI->>User: Present file for share/save
sequenceDiagram
participant User
participant ForgotPasswordScreen
participant AuthProvider
participant Supabase
User->>ForgotPasswordScreen: Submit email
ForgotPasswordScreen->>AuthProvider: resetPasswordForEmail(email)
AuthProvider->>Supabase: auth.resetPasswordForEmail(email, redirectTo)
Supabase-->>AuthProvider: success
AuthProvider-->>ForgotPasswordScreen: { error: null }
ForgotPasswordScreen->>User: Navigate to success screen
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks | ✅ 3 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 16
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (16)
frontend/components/profile/phone-number-input.tsx (1)
46-75: ⚠️ Potential issue | 🟠 Major
onChange is never fired when initialValue is parsed. When a parent provides or updates initialValue, this effect parses and sets internal state but never calls emitPhoneState. The parent won't learn the derived fullPhoneNumber, nationalNumber, or isValid until the user manually edits the field. If the parent relies on onChange to sync state (e.g. enabling a submit button), it will be out of sync.
Consider calling emitPhoneState at the end of each branch with the newly computed values:
Proposed fix (representative branch)
  if (longestMatch) {
    setCountryCode(longestMatch.code);
    setCountryIso(longestMatch.iso);
-   setValue(formatPhoneNumber(initialValue.slice(longestMatch.code.length).trim()));
+   const formatted = formatPhoneNumber(initialValue.slice(longestMatch.code.length).trim());
+   setValue(formatted);
+   emitPhoneState({ currentValue: formatted, currentCountryCode: longestMatch.code });
    return;
  }
Apply the same pattern to the other branches.
frontend/services/push-notification-service.ts (1)
139-150: 🛠️ Refactor suggestion | 🟠 MajorUse the
TABLESconstant instead of the plain'push_tokens'string.This applies to all Supabase
.from()calls in this file ('push_tokens'on lines 140 and 169,'notification_settings'on lines 217, 279, and 314). As per coding guidelines, "When making calls to the Supabase Database, use theTABLESobject inconstants/supabase.tsfile instead of writing the table name as a plain string".frontend/components/capture/editor/location-tab.tsx (1)
80-97:⚠️ Potential issue | 🟡 Minor
debouncedQueryis an unnecessary dependency inuseMemo.
debouncedQueryis listed in the dependency array but isn't referenced inside the memo callback. This causes unnecessary re-computations whenever the debounced query changes, even though the result depends only oncurrentLocationandmapboxPlacesResults.Also, the comment on line 93 says results are conditionally added based on whether a query exists, but the code unconditionally pushes
mapboxPlacesResults.Proposed fix
- }, [currentLocation, mapboxPlacesResults, debouncedQuery]); + }, [currentLocation, mapboxPlacesResults]);frontend/services/search-service.ts (2)
154-167: ⚠️ Potential issue | 🟠 Major
Double error reporting: the onError callback is invoked, then the error is re-thrown. When onError is provided, the caller receives the error via the callback and as a thrown exception. This forces callers to handle the error in two places (or risk unhandled rejections). Typically you'd do one or the other.
Proposed fix — don't re-throw if `onError` handled it
  if (onError) {
    onError(error);
  } else {
    logger.error('SearchService: unhandled error', { error: error.message });
+   throw error;
  }
- throw error;
150-152:⚠️ Potential issue | 🟡 MinorBuffer content logged at
infolevel.Line 152 logs the raw stream buffer at
infolevel, which will appear in production logs and may contain user-sensitive search content. Consider downgrading todebug.- logger.info('SearchService: buffer', { buffer }); + logger.debug('SearchService: buffer', { buffer });frontend/hooks/use-deep-linking.ts (1)
9-9: 🛠️ Refactor suggestion | 🟠 MajorMissing docstring on
useDeepLinkinghook.Proposed fix
+/** + * Registers deep-link listeners and routes incoming URLs to the appropriate screens. + */ export function useDeepLinking(): UseDeepLinkingResult {As per coding guidelines, "ALL custom hooks created must have docstrings".
frontend/app/calendar/index.tsx (1)
84-89:⚠️ Potential issue | 🟡 MinorBack button uses a right-pointing chevron icon.
ChevronRightis used for arouter.back()action, which typically uses a left-pointing chevron. If this is intentional (e.g., a "close" or forward-dismiss gesture), ignore — otherwise consider usingChevronLeft.backend/services/cache_service.py (2)
161-184:⚠️ Potential issue | 🟠 MajorCache key does not include the environment dimension.
The Supabase query now filters by environment (
line 173), but the cache key on line 161 is stillpush_tokens:{user_id}. If Redis is shared across environments, or if the environment value ever changes, cached tokens from one environment will be served for another.The same issue applies to
get_push_tokens_batch(line 205).Proposed fix
def get_push_tokens(self, user_id: str) -> List[str]: - cache_key = f"push_tokens:{user_id}" + environment = self._get_environment() + cache_key = f"push_tokens:{user_id}:{environment}" # Try Redis cache first cached_data = self._get_from_redis(cache_key) ... try: - environment = self._get_environment() response = self.supabase.table("push_tokens").select("token").eq("user_id", user_id).eq("environment", environment).execute()And similarly for
get_push_tokens_batch:- cache_keys = [f"push_tokens:{user_id}" for user_id in user_ids] + environment = self._get_environment() + cache_keys = [f"push_tokens:{user_id}:{environment}" for user_id in user_ids]
82-91:⚠️ Potential issue | 🟠 MajorOTP and phone number logged at INFO level — sensitive data leak.
Lines 89 and 91 log the OTP in plaintext and the phone number at
INFOlevel via theextradict. OTPs are security credentials and phone numbers are PII — neither should appear in production logs. Log only non-sensitive identifiers (e.g., a masked phone number or a request correlation ID).frontend/app/capture/index.tsx (2)
150-154:⚠️ Potential issue | 🟡 MinorType assertion on
maybeSingle()silently discards theerrorfield.Casting the result as
{ data: { id: string } | null }drops Supabase'serrorproperty. If the query fails,datawill benulland the error will go unnoticed.🛠️ Suggested fix
- const { data: pendingRecord } = await supabase + const { data: pendingRecord, error: pendingError } = await supabase .from('phone_number_updates') .select('id') .eq('user_id', user.id) - .maybeSingle() as { data: { id: string } | null }; + .maybeSingle(); + + if (pendingError) { + console.warn('Failed to check pending phone updates:', pendingError.message); + }
150-154:⚠️ Potential issue | 🟡 MinorUse
TABLESconstant instead of plain string'phone_number_updates'.The coding guidelines require using the
TABLESobject fromconstants/supabase.tsinstead of writing table names as plain strings. You'll need to importTABLESfromconstants/supabase.tsand update the code accordingly.🔧 Suggested fix
+import { TABLES } from '@/constants/supabase'; + const { data: pendingRecord } = await supabase - .from('phone_number_updates') + .from(TABLES.PHONE_NUMBER_UPDATES) .select('id') .eq('user_id', user.id) .maybeSingle() as { data: { id: string } | null };frontend/services/background-task-manager.ts (2)
188-192: ⚠️ Potential issue | 🟡 Minor
Unreachable code: duplicate console.log and return after an earlier return statement. Lines 191-192 are dead code — they follow the return { success: true, entry }; on line 189 and will never execute. This appears to be a copy-paste artifact.
🔧 Suggested fix
  console.log('Entry processed successfully:', data.entryId);
  return { success: true, entry };
-
- console.log('Entry processed successfully:', data.entryId);
- return { success: true, entry };
} catch (error) {
278-298:⚠️ Potential issue | 🟠 Major
retryEntryProcessingresets status to "pending" before checking for duplicates.If an entry with the same
idempotencyKeyis already in the queue (e.g., still being processed), the status is set to "pending" on lines 280-286 but the entry is not re-enqueued (line 296-298). This could leave the entry in a "pending" state while the already-queued copy might fail or complete independently, leading to a stale status.Consider moving the status reset to after the duplicate check, or only resetting when the item is actually enqueued.
🛠️ Suggested fix
export async function retryEntryProcessing(data: EntryProcessingData): Promise<void> { - // Reset status to pending - await deviceStorage.updateEntry(data.userId, data.entryId, { - status: 'pending', - processingStartedAt: null, - processingCompletedAt: null, - processingFailedAt: null, - error: null - }); - // Re-enqueue the task (check for duplicates) const queue = await loadQueue(); const existingEntry = queue.find(item => item.idempotencyKey === data.idempotencyKey); if (!existingEntry) { + // Reset status to pending only when actually re-enqueuing + await deviceStorage.updateEntry(data.userId, data.entryId, { + status: 'pending', + processingStartedAt: null, + processingCompletedAt: null, + processingFailedAt: null, + error: null + }); queue.push(data); await saveQueue(queue); void startForegroundQueueProcessor(); } else { console.log('Entry with idempotency key already in queue, skipping retry:', data.idempotencyKey); } }frontend/hooks/use-entry-operations.ts (1)
137-138:⚠️ Potential issue | 🟡 MinorMissing docstring on
useEntryOperationshook.Per the coding guidelines, all custom hooks must have docstrings.
generateIdempotencyKeyhas one (line 40), butuseEntryOperationsdoes not.🛠️ Suggested fix
+/** + * Hook for entry save and media upload operations. + * Handles background processing scheduling with idempotency-based deduplication. + */ export function useEntryOperations(): UseEntryOperationsResult {As per coding guidelines: "ALL custom hooks created must have docstrings."
frontend/app/capture/details.tsx (1)
255-264: ⚠️ Potential issue | 🟠 Major
Bug: duplicate toast shown on validation failure. Line 262 unconditionally fires toast('Cannot save entry', 'error') after the inner if/else has already shown a toast. This causes two toast notifications to appear simultaneously for every validation failure path.
🐛 Proposed fix — remove the redundant toast
  if (!capture || !user || !hasSelectedSharing()) {
    if (!hasSelectedSharing()) {
      toast('Please select who to share this entry with', 'error');
    } else {
      toast('Cannot save entry', 'error');
    }
-   toast('Cannot save entry', 'error');
    return;
  }
143-143:⚠️ Potential issue | 🟡 MinorStale comment: says "5 minutes" but the value is 10 minutes.
- staleTime: 1000 * 60 * 10, // 5 minutes + staleTime: 1000 * 60 * 10, // 10 minutes
🤖 Fix all issues with AI agents
In `@backend/requirements.txt`:
- Around line 43-84: The requirements.txt pins multiple packages with known
high-severity CVEs (starlette, python-multipart, urllib3, pillow, protobuf,
pyasn1); update those package pins in requirements.txt to the patched versions
(at minimum: starlette -> 0.52.1, python-multipart -> 0.0.22, urllib3 -> 2.6.3,
pillow -> 12.1.1, protobuf -> 5.29.6 or later, pyasn1 -> 0.6.2), run the test
suite and CI, regenerate any lockfile if used, and re-run a dependency
vulnerability scan to confirm the issues are resolved; if compatibility issues
surface, iterate on minimal version bumps and run full integration tests before
merging.
In `@backend/routers/phone_number.py`:
- Around line 82-92: The logs in the Twilio send block expose sensitive data:
remove the raw OTP from all logger extras and stop logging plaintext phone
numbers; update the logger calls around client.messages.create (the logger.info
and logger.exception uses) to omit the otp field and either mask the phone
number (e.g., show only last 4 digits) or remove it entirely, and make the same
change for the later logger call on line 109 so no raw OTP or full phone number
is ever written to logs; keep the rest of the error handling (raising
HTTPException) intact.
- Around line 82-88: The Twilio call client.messages.create(...) is synchronous
and will block the async event loop; update the async function containing this
call (look for the Client instantiation and client.messages.create usage) to run
the blocking call in a thread or switch to Twilio's async client: either import
asyncio and replace the direct call with await asyncio.to_thread(lambda:
client.messages.create(...)) so the blocking HTTP request runs off the event
loop, or instantiate and use the async Twilio HTTP client instead; ensure any
exceptions from client.messages.create are still caught/handled and logging
remains intact.
In `@backend/routers/user.py`:
- Around line 357-378: The _run_export_job function mutates the shared
export_jobs dict from a background thread without synchronization and declares
an unused exception variable exc; fix by introducing and using a shared lock
(e.g., export_jobs_lock) to guard all reads/writes to export_jobs (wrap the
assignments to export_jobs[job_id] in a lock context), and ensure all other
places that access export_jobs use the same lock; also reference the caught
exception when logging (e.g., include exc in the logger.exception/log message or
use exc_info) instead of leaving it unused so Ruff F841 is resolved.
- Around line 72-99: The /me endpoint (get_current_user_info) currently returns
current_user.user.app_metadata and current_user.user.user_metadata which can
contain sensitive internal info; replace those with an allowlist/sanitization
step that extracts only safe fields (e.g., specific profile fields like name,
avatar, roles if explicitly safe) before returning, or omit them entirely;
implement the sanitization in get_current_user_info (use a helper
sanitize_user_metadata or similar) and log/return only the safe subset.
Additionally, if this endpoint is truly for debugging, gate
get_current_user_info behind a dev flag (e.g., check a DEBUG/DEV_MODE env
variable) or remove it in production to prevent accidental exposure.
- Around line 554-798: The file contains a broken duplicate implementation block
that redefines and shadows the correct export logic; remove the entire duplicate
block that defines _fetch_export_data, _generate_html_export_content,
_prune_export_jobs, _run_export_job, and start_user_export (the second set that
references undefined export_jobs_lock and overrides the working originals),
leaving the original working implementations (e.g., _fetch_user_export_data and
_render_export_content, export_jobs_lock, _prune_export_jobs, _run_export_job,
start_user_export) intact; delete the duplicate definitions and any references
to them so the code uses the first, correct functions and then run the test
suite/linter to ensure no remaining references to the removed symbols exist.
- Around line 34-36: The in-memory export_jobs dict must be protected by a
threading.Lock to avoid race conditions and should be made durable for
production; create an export_jobs_lock = threading.Lock() and wrap every
access/mutation of export_jobs (reads and writes) in with export_jobs_lock:
blocks—specifically update the functions/methods that touch export_jobs (e.g.,
any handler that enqueues jobs, the background worker that updates job state,
and the status endpoint that returns job info such as get_export_status /
start_export / export background task) to acquire the lock before touching
export_jobs; for production, replace this in-memory approach with a persisted
store like Redis or a DB table to survive restarts and support multiple workers.
In `@frontend/app/onboarding/__tests__/reset-password.test.tsx`:
- Around line 32-48: The test fails because getByText('Reset Password') matches
both the screen title and the button; update the press action to target the
button explicitly by replacing fireEvent.press(getByText('Reset Password')) with
fireEvent.press(getByRole('button', { name: 'Reset Password' })) (keeping other
assertions that check the title via getByText('Reset Password') or change the
title check to a heading role if you prefer); modify the call site in the
ResetPasswordScreen test that uses getByText for the button and ensure getByRole
is imported/available from the test renderer being used.
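A minimal test sketch of that change, assuming `@testing-library/react-native` and that the pressable exposes the button accessibility role; the import path is illustrative:

```tsx
import React from 'react';
import { render, fireEvent } from '@testing-library/react-native';
import ResetPasswordScreen from '../reset-password';

it('presses the Reset Password button rather than the title', () => {
  const { getByRole } = render(<ResetPasswordScreen />);

  // Query by role + accessible name so only the button matches, not the screen title.
  fireEvent.press(getByRole('button', { name: 'Reset Password' }));
});
```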
In `@frontend/app/onboarding/auth.tsx`:
- Around line 156-174: In handleNext's switch on currentStep (case 'email'),
wrap the existing case body in a block { ... } to fix the noSwitchDeclarations
lint error so the local const exists is scoped to the case, and replace the weak
validation email.includes('@') with a stronger regex check (e.g. use
/\S+@\S+\.\S+/) before calling setCheckingEmail, checkEmailExists(email),
setCheckingEmail(false), and proceeding to setCurrentStep('password'); keep the
existing showToast calls and flow but use the regex to decide invalid emails.
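A sketch of what that case could look like; `checkEmailExists`, `showToast`, `setCheckingEmail`, and `setCurrentStep` are the existing helpers named above, and the regex and toast copy are placeholders:

```ts
// Inside the async handleNext handler; only the 'email' case is shown.
const EMAIL_REGEX = /\S+@\S+\.\S+/;

switch (currentStep) {
  case 'email': {
    // Block scope keeps `exists` local to this case (fixes noSwitchDeclarations).
    if (!EMAIL_REGEX.test(email)) {
      showToast('Please enter a valid email address', 'error');
      return;
    }
    setCheckingEmail(true);
    const exists = await checkEmailExists(email);
    setCheckingEmail(false);
    if (exists) {
      setCurrentStep('password');
    }
    break;
  }
  // ...remaining steps unchanged
}
```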
In `@frontend/app/onboarding/index.tsx`:
- Around line 75-103: The UI is gated on the videoFinished state so if the video
never fires playToEnd the user is stuck; add a robust fallback by wiring an
onError handler on the video player (or equivalent) to call
setVideoFinished(true) and also add a timeout-based fallback inside a useEffect
that starts a short timer (e.g., 8–12s) to setVideoFinished(true) if playToEnd
hasn't occurred, and ensure you clear the timer on unmount or when playToEnd
succeeds; update any existing playToEnd handler to call setVideoFinished(true)
so all three paths (success, error, timeout) enable the Animated.View with the
Get Started / Sign In buttons.
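For the timeout path, a minimal sketch; the 10-second window and the `videoFinished`/`setVideoFinished` names follow the prompt above, while the `playToEnd`/`onError` wiring stays whatever the screen already uses:

```tsx
import { useEffect } from 'react';

// Fallback: if playToEnd (or onError) never fires, reveal the buttons anyway.
const FALLBACK_MS = 10_000;

useEffect(() => {
  if (videoFinished) return;
  const timer = setTimeout(() => setVideoFinished(true), FALLBACK_MS);
  // Cleared on unmount, or when videoFinished flips via the success/error paths.
  return () => clearTimeout(timer);
}, [videoFinished]);
```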
In `@frontend/app/settings/privacy.tsx`:
- Around line 113-121: Replace raw fetch calls that manually set Authorization
headers with the centralized apiFetch client: update pollExportStatus (the
try/await fetch to `${BACKEND_URL}/user/${profile.id}/export/.../status`),
performExport (the export initiation fetch), and handleDeleteAccount (the
account deletion fetch) to call apiFetch with the same endpoint and method,
removing manual Authorization and session.access_token usage; ensure you pass
body/json options and handle the returned Response/JSON the same way as before
(or throw on non-OK) so existing response handling logic remains unchanged.
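A rough sketch of `pollExportStatus` after the migration, assuming `apiFetch` mirrors `fetch`'s `(path, init)` signature and attaches the token itself; the import path and response shape are assumptions:

```ts
import { apiFetch } from '@/lib/api-client';

async function pollExportStatus(userId: string, jobId: string) {
  const response = await apiFetch(`/user/${userId}/export/${jobId}/status`, { method: 'GET' });
  if (!response.ok) {
    throw new Error(`Export status check failed: ${response.status}`);
  }
  return response.json() as Promise<{ status: 'pending' | 'completed' | 'failed' }>;
}
```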
In `@frontend/hooks/use-auth.ts`:
- Around line 93-94: The console.log in the auth state change handler (the async
(event, session) => { ... } callback in use-auth.ts) prints the full session
including access_token/refresh_token; replace this with the project's logger
utility (which respects isDev) and redact sensitive fields or only log event
type: call logger.debug or logger.info with a minimal object like { event,
userId: session?.user?.id } or explicitly remove session.access_token and
session.refresh_token before logging; ensure the handler no longer uses
console.log and that any logging of session data is gated by isDev or uses the
logger so tokens are never printed.
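A minimal sketch of the redacted handler; `logger` and `supabase` are the project's existing modules, and the log field names are assumptions:

```ts
supabase.auth.onAuthStateChange((event, session) => {
  // Log only non-sensitive fields; never session.access_token / session.refresh_token.
  logger.debug('Auth state changed', {
    event,
    userId: session?.user?.id ?? null,
  });
  // ...existing session/state updates continue unchanged
});
```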
- Around line 67-87: The initializeAuth function contains a redundant
Platform.OS conditional where both branches call supabase.auth.getSession and
then call setSession, setUser, and setLoading; remove the if (Platform.OS ===
'web') / else block and replace it with a single unified flow: call await
supabase.auth.getSession(), extract data.session as initialSession, call
setSession(initialSession), setUser(initialSession?.user ?? null), catch and log
errors, and finally call setLoading(false) so the behavior of initializeAuth is
unchanged but the dead conditional is removed.
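A sketch of the unified flow, assuming `setSession`, `setUser`, and `setLoading` are the hook's existing state setters:

```ts
async function initializeAuth(): Promise<void> {
  try {
    const { data, error } = await supabase.auth.getSession();
    if (error) {
      logger.error('Failed to restore session:', error);
    }
    const initialSession = data.session;
    setSession(initialSession);
    setUser(initialSession?.user ?? null);
  } catch (error) {
    logger.error('Auth initialization error:', error);
  } finally {
    setLoading(false);
  }
}
```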
In `@frontend/hooks/use-user-entries.ts`:
- Line 128: The query in use-user-entries.ts has the .limit(20) commented out,
removing the pagination guard and allowing unbounded fetches; restore a sensible
default limit (e.g., re-enable .limit(20)) on the query used in the
useUserEntries hook (or the function that builds the query, e.g.,
fetchEntries/getUserEntriesQuery) and/or implement a proper pagination mechanism
(cursor or offset) that returns a nextCursor and a pageSize parameter from the
hook so the UI can request subsequent pages instead of loading all entries;
remove the commented-out line and ensure the hook exposes page size and a
continuation token for incremental loading.
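One possible shape for the paginated fetch, using `created_at` as the continuation token; `TABLES.ENTRIES` and the column names are assumptions about the schema:

```ts
const PAGE_SIZE = 20;

async function fetchEntriesPage(userId: string, cursor?: string) {
  let query = supabase
    .from(TABLES.ENTRIES)
    .select('*')
    .eq('user_id', userId)
    .order('created_at', { ascending: false })
    .limit(PAGE_SIZE);

  if (cursor) {
    // Continue strictly after the last entry of the previous page.
    query = query.lt('created_at', cursor);
  }

  const { data, error } = await query;
  if (error) throw error;

  const nextCursor = data.length === PAGE_SIZE ? data[data.length - 1].created_at : undefined;
  return { entries: data, nextCursor, pageSize: PAGE_SIZE };
}
```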
In `@frontend/providers/save-lock-provider.tsx`:
- Around line 15-17: Add a docstring above the custom hook export useSaveLock
describing its purpose, when/how to use it, the return value (what
SaveLockContext provides), and any important behavior or side effects; locate
the function export useSaveLock and insert a brief JSDoc/TSdoc block referencing
SaveLockContext and the shape of the returned value so the hook complies with
the project's docstring guidelines.
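A docstring sketch; the returned shape (`isSaveLocked`, `lockSave`, `unlockSave`) is inferred from the SaveLock sequence diagram above and may differ from the actual context value:

```ts
/**
 * Access the save-lock state used to prevent duplicate entry saves.
 *
 * Must be called inside a SaveLockProvider. Returns the SaveLockContext value,
 * e.g. { isSaveLocked, lockSave, unlockSave }.
 */
export function useSaveLock() {
  // ...existing implementation (reads SaveLockContext) stays unchanged
}
```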
In `@frontend/services/push-notification-service.ts`:
- Around line 56-63: The fallback device ID logic (fallbackId and the ultimate
fallback return that use Constants.sessionId || Date.now()) produces
non-deterministic IDs each session; change it to generate or read a stable UUID
stored on first launch (e.g., using expo-secure-store or AsyncStorage) and
return that stored UUID instead of sessionId/Date.now(); update the code paths
in the get-unique-device-id flow that currently produce fallbackId and the final
return to read/create-and-persist a single UUID key (e.g., "device_uuid") and
use that value for device_id so restarts will upsert rather than insert new
push_tokens rows.
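A persisted-UUID sketch, assuming `expo-crypto` and AsyncStorage are available (swap in expo-secure-store if preferred); the storage key is illustrative:

```ts
import AsyncStorage from '@react-native-async-storage/async-storage';
import * as Crypto from 'expo-crypto';

const DEVICE_UUID_KEY = 'device_uuid';

// Returns the same identifier across restarts, so push_tokens rows are upserted
// instead of accumulating one row per app session.
async function getStableDeviceId(): Promise<string> {
  const existing = await AsyncStorage.getItem(DEVICE_UUID_KEY);
  if (existing) return existing;

  const generated = Crypto.randomUUID();
  await AsyncStorage.setItem(DEVICE_UUID_KEY, generated);
  return generated;
}
```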
🟡 Minor comments (13)
frontend/services/push-notification-service.ts-111-111 (1)
111-111:⚠️ Potential issue | 🟡 MinorLogs
this.currentToken(which isnullat this point) instead of the actual token.
this.currentTokenis only assigned on line 115. This should logtokenData.data.Proposed fix
- logger.debug('Push token:', this.currentToken); + logger.debug('Push token:', tokenData.data);frontend/components/phone-number-bottom-sheet.tsx-258-263 (1)
258-263:⚠️ Potential issue | 🟡 MinorGestureDetector hit-test area may not move with the animated popover when the keyboard is open.
The
keyboardAwareStyleappliestransform: [{ translateY: -keyboard.height.value }]to shift the popover up when the keyboard appears. Since theGestureDetectorwraps this transformedAnimated.View, the gesture hit-test region may not follow the visual position on Android, causing the swipe-to-dismiss gesture to fail when the keyboard is open.Apply the keyboard offset to the outer
overlaycontainer or move gesture detection to an untransformed touch layer to keep gesture hit areas in sync with visual layout.frontend/components/capture/editor/text-tab.tsx-393-399 (1)
393-399:⚠️ Potential issue | 🟡 MinorMismatched
fontFamilyandfontWeightvalues.
fontFamily: 'Jost-Regular'implies weight 400, butfontWeight: '600'requests semi-bold. On Android,fontFamilytypically takes precedence, so thefontWeightis silently ignored; on iOS the behavior may differ. This mismatch appears in several styles across this PR (e.g.,attachmentTextusesOutfit-SemiBoldwithfontWeight: '500'). Align the weight values with the font variant name, or removefontWeightwhen the family variant already encodes it..github/workflows/test.yml-8-11 (1)
8-11:⚠️ Potential issue | 🟡 MinorRemoving
mainfrom push triggers means merges tomainno longer run backend tests.PR-triggered tests (Line 5) still cover pull requests targeting
main, but direct pushes or merge commits tomainwill no longer trigger this workflow. If your deployment pipeline pushes tomainafter staging, you lose that final CI safety net. Confirm this is intentional.frontend/AGENTS.md-28-28 (1)
28-28:⚠️ Potential issue | 🟡 MinorTypo: "removed" → "remove".
📝 Proposed fix
-2. Always removed unused imports +2. Always remove unused importsfrontend/AGENTS.md-17-17 (1)
17-17:⚠️ Potential issue | 🟡 MinorTypo: "evaulate" → "evaluate".
📝 Proposed fix
-1. Always evaulate negative conditions first eg. +1. Always evaluate negative conditions first, e.g.backend/routers/phone_number.py-117-117 (1)
117-117:⚠️ Potential issue | 🟡 MinorOTP hash and phone number logged at INFO level.
Line 117 logs
otp_hashandphone_numberin the extras. While the hash is less sensitive than the raw OTP, the phone number is still PII. Consider masking it here too for consistency with the fix above.frontend/hooks/__tests__/use-auth.test.ts-57-63 (1)
57-63:⚠️ Potential issue | 🟡 Minor
responseis untyped and accessed without null-guard outsideact.
responseis declared aslet response;(typeundefined), and TypeScript won't narrow it after theactblock. Accessingresponse.error.messageon line 63 will produce a TS error (Object is possibly 'undefined'). The same issue applies to lines 91-97.🛠️ Suggested fix
- let response;
+ let response: any;
  await act(async () => {
    response = await result.current.resetPasswordForEmail('test@example.com');
  });
Apply the same pattern to the updatePassword error test (line 91).
46-46:⚠️ Potential issue | 🟡 MinorRemove debug
console.logartifact.
console.log(error, "errorrr")looks like a leftover debug statement. Consider using the project'sloggerutility or removing it entirely.frontend/app/onboarding/reset-password.tsx-56-59 (1)
56-59:⚠️ Potential issue | 🟡 MinorMinor:
setTimeoutnot cleared on unmount.If the component unmounts within the 500 ms window (e.g., the user presses back), the callback will fire and call
router.replace, potentially causing an unexpected navigation. Consider storing the timeout ID and clearing it in a cleanup effect, or using a ref guard.frontend/app/onboarding/auth.tsx-69-88 (1)
69-88:⚠️ Potential issue | 🟡 MinorUse
TABLES.PROFILESinstead of hardcoded string for Supabase table name.Line 73 uses
supabase.from('profiles')with a hardcoded string. Replace withsupabase.from(TABLES.PROFILES)after importingTABLESfromconstants/supabase.ts.backend/routers/user.py-1-12 (1)
1-12: ⚠️ Potential issue | 🟡 Minor
Duplicate import uuid on lines 4 and 10. Static analysis (Ruff F811) confirms the redefinition.
  import threading
  import uuid
  import html
  from datetime import datetime, timezone
  from enum import Enum
-
- import tempfile
- import uuid
+ import tempfile
  from pathlib import Path
240-259:⚠️ Potential issue | 🟡 MinorRemove unused
Linkingimport and extract hardcoded redirect URL to configuration.Line 8 imports
Linkingfromexpo-linkingbut it's never used in this file and should be removed.For the hardcoded URL on line 245,
https://keepsafe.fortunealebiosu.dev/reset-passwordshould be extracted to a configuration constant or environment variable (following the existing pattern ofEXPO_PUBLIC_BACKEND_URLinlib/constants.ts). This allows the same code to work across staging and production environments without code changes.
🧹 Nitpick comments (32)
frontend/components/capture/canvas/vault-canvas.tsx (1)
81-93: Redundant default:VaultCanvasItemwill only receive items that already passed thetransformsfilter on Line 63.The defensive default on lines 82-87 is unreachable in the current call site because the parent already filters out items without
transforms. That said, it's a reasonable safeguard if this component is ever used independently. Consider adding a brief comment noting the filter upstream so future readers understand the belt-and-suspenders intent.frontend/app/settings/legal.tsx (2)
12-13:useLocalSearchParamsdoes not validate the param value at runtime.The generic
<{ doc: LegalDocument }>only narrows the TypeScript type — at runtime the value isstring | string[]. If someone navigates with an invalid value (e.g.,?doc=foo), it will be assigned toselectedDocand silently pass type checks. Thedefaultbranch inrenderDocumentContentdoes handle this gracefully, but the type gives a false sense of safety.Consider an explicit guard:
Suggested improvement
- const { doc } = useLocalSearchParams<{ doc: LegalDocument }>(); - const [selectedDoc, setSelectedDoc] = useState<LegalDocument | null>(doc || null); + const { doc } = useLocalSearchParams<{ doc: string }>(); + const validDocs: LegalDocument[] = ['terms', 'eula', 'privacy']; + const initialDoc = typeof doc === 'string' && validDocs.includes(doc as LegalDocument) ? (doc as LegalDocument) : null; + const [selectedDoc, setSelectedDoc] = useState<LegalDocument | null>(initialDoc);
240-240: Minor:docvariable shadowing.The
.map((doc) => ...)callback parameter shadows thedocdestructured fromuseLocalSearchParamson line 12. Not a bug since the outerdocisn't referenced inside the callback, but renaming the callback parameter (e.g.,itemorlegalDoc) would improve clarity.frontend/components/profile/phone-number-input.tsx (3)
79-81:fullPhoneNumberis now unused;normalizedis partially redundant.After introducing
emitPhoneState(which recomputes its owncurrentNormalizedandcurrentFullPhoneNumber), thefullPhoneNumbervariable on line 81 is dead code—nothing reads it.normalizedis still used indirectly viaisValidfor the inline validation display (line 142), but the duplication of the normalization logic between here andemitPhoneStateis a maintenance risk (divergence over time).Consider removing
fullPhoneNumberand extracting the normalization into a shared helper or reusingnormalized/isValidinsideemitPhoneState.
77-77:phoneRegexis re-created on every render.Since the regex is a constant, hoist it outside the component to avoid unnecessary allocations.
Suggested fix
+const phoneRegex = /^\d+$/; + export function PhoneNumberInput({ ... - const phoneRegex = /^\d+$/;
92-112: ThepayloadOverridesspread can silently overwrite computed values.Because the options type includes
Partial<PhoneNumberInputChangePayload>, a caller could accidentally pass{ fullPhoneNumber: "..." }and override the derived value without realising it. Currently no call site does this, but the type signature invites it.If overrides aren't needed today, narrow the type to just the two internal options:
- const emitPhoneState = (options?: { - currentValue?: string; - currentCountryCode?: string; - } & Partial<PhoneNumberInputChangePayload>) => { + const emitPhoneState = (options?: { + currentValue?: string; + currentCountryCode?: string; + }) => {and drop the spread of
payloadOverrides.frontend/services/streak-service.ts (2)
192-212: One-shot DATE trigger means no recurring reminders for active-streak users.Switching from
DAILYtoDATEmeans the notification fires exactly once (next day at noon). If the user ignores it and never opens the app, no further reminders are scheduled — unlike theDAILYtrigger in theelsebranch (Line 224) which repeats indefinitely.This is fine if intentional (less nagging for engaged users), but worth confirming that's the desired behavior. If the user goes silent after the one-shot fires, they'll only get re-scheduled when
checkAndUpdateStreakruns on next app open (which resets streak and falls back toDAILY).
188-230: Notification scheduling logic is duplicated across four call sites.The same notification content and scheduling pattern (cancel all → schedule with identifier
streak_${userId}) is repeated inupdateStreak,checkAndUpdateStreak(twice), andresetStreak. Consider extracting two helpers — e.g.,scheduleStreakReminder(userId, streak)andscheduleGeneralReminder(userId)— to reduce duplication and keep notification content in sync.Also applies to: 262-281, 300-317, 342-359
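A rough shape for one of those helpers, assuming expo-notifications' daily trigger type; the notification copy is illustrative, not the existing strings:

```ts
import * as Notifications from 'expo-notifications';

async function scheduleStreakReminder(userId: string, streak: number): Promise<void> {
  await Notifications.cancelAllScheduledNotificationsAsync();
  await Notifications.scheduleNotificationAsync({
    identifier: `streak_${userId}`,
    content: {
      title: 'Keep your streak going!',
      body: `You're on a ${streak}-day streak. Add an entry today.`,
    },
    trigger: {
      type: Notifications.SchedulableTriggerInputTypes.DAILY,
      hour: 12,
      minute: 0,
    },
  });
}
```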
frontend/services/push-notification-service.ts (1)
58-61: Inconsistent logging:console.warn/console.errorused instead oflogger.The rest of the PR adopts the
loggerutility (e.g., line 111), but these fallback paths still use rawconsolecalls. Consider usinglogger.warnandlogger.errorfor consistency and to benefit from the structured formatting.frontend/components/phone-number-bottom-sheet.tsx (2)
309-354: Large block of commented-out code and significant dead code left behind.The entire OTP verification UI (lines 309–354) is commented out, but all the supporting infrastructure remains active:
usePhoneNumberUpdateRecord,stepstate,otp/isVerifyingOtp/resendAttempts/cooldownSecondsstate,canVerifymemo,resendOtp(),verifyOtp(), threeuseEffecthooks, and theOtpInput/BACKEND_URL/supabaseimports. None of this code is reachable in the current render path.If the OTP flow is intentionally removed in favor of
updateProfile, clean up the dead code. If it's temporarily disabled, consider moving it behind a feature flag or extracting it to a separate module so it doesn't obscure the active logic.
129-156:isSendingOtpis semantically misleading and redundant withisProfileUpdating.
handleUpdatePhoneNumbermanually togglesisSendingOtparound theupdateProfilecall, butisProfileUpdating(fromuseProfileOperations) already tracks the same async operation. Both are checked together in the button'sdisabledand loading indicator (line 289–293), creating redundant overlapping state.Consider dropping the manual
isSendingOtptoggle in this handler and relying solely onisProfileUpdating, or rename the state to something semantically accurate likeisUpdatingif you need a local flag for the broader try/catch scope (e.g., to coverclearPhonePromptState).frontend/components/friends/entry-share-list.tsx (1)
117-119: Commented-outbackgroundColorremoves the selected-state highlight for friend options.With this gone, the only visual cues for a selected friend are the avatar border and purple text color. The container itself no longer visually distinguishes selected from unselected. If this is intentional, consider removing the comment entirely rather than leaving dead code. If not, restore it.
frontend/app/settings/index.tsx (1)
93-101: Inconsistent logging:console.errorvslogger.error.
handleLogoutusesconsole.error(line 98) while the rest of the file useslogger.error. Consider switching tologger.errorfor consistency.} catch (error) { - console.error('Sign out error:', error); + logger.error('Sign out error:', error); Alert.alert('Error', 'Failed to sign out. Please try again.'); }Same applies to line 240 in the delete account error handler.
frontend/supabase/migrations/20260209012200_add_environment_to_push_tokens.sql (1)
4-19: Migration ordering: backfill is redundant givenDEFAULT 'prod'.In PostgreSQL 11+,
ADD COLUMN ... DEFAULT 'prod'applies the default to all existing rows, so theUPDATEon lines 17–19 is a no-op. Not harmful, but worth noting for clarity.More importantly, the new unique constraint (line 14) is added before the backfill
UPDATE. If the column were actually nullable for existing rows (e.g., on PostgreSQL < 11, or ifIF NOT EXISTShits a pre-existing nullable column without a default), you could have rows withNULLenvironment that conflict unexpectedly whenSET NOT NULLis applied. Consider reordering so the backfill runs before adding the new constraint:-- Add environment column with CHECK constraint ALTER TABLE push_tokens ADD COLUMN IF NOT EXISTS environment text CHECK (environment IN ('prod', 'dev')) DEFAULT 'prod'; +-- Update existing rows to have 'prod' environment (if any exist) +UPDATE push_tokens +SET environment = 'prod' +WHERE environment IS NULL; + -- Drop the old unique constraint ALTER TABLE push_tokens DROP CONSTRAINT IF EXISTS push_tokens_user_device_key; -- Create new unique constraint that includes environment ALTER TABLE push_tokens ADD CONSTRAINT push_tokens_user_device_environment_key UNIQUE (user_id, device_id, environment); --- Update existing rows to have 'prod' environment (if any exist) -UPDATE push_tokens -SET environment = 'prod' -WHERE environment IS NULL; - -- Make environment NOT NULL after backfilling ALTER TABLE push_tokens ALTER COLUMN environment SET NOT NULL;frontend/app/onboarding/forgot-password-success.tsx (1)
67-128: Inconsistent use ofColorsconstants.
Colorsis imported and used forbackground(line 55) andborderColor(line 86), but many other styles hardcode the same hex values that are available inColors:
'#8B5CF6'→Colors.primary(lines 20, 118)'#059669'→Colors.success(lines 23, 82)'#1E293B'→Colors.text(line 95)'#64748B'→Colors.textMuted(line 101)'#94A3B8'→Colors.textSubtle(line 109)Using the constants consistently makes future theming changes easier.
frontend/app/onboarding/__tests__/forgot-password.test.tsx (1)
44-51: Test relies onfireEvent.pressbypassingdisabledprop.In the component, the "Send Reset Link" button is
disabledwhen the email is empty.fireEvent.pressin RNTL calls theonPresshandler directly, bypassing the nativedisabledguard. This means the test validates the handler's internal validation logic, which is valuable, but doesn't verify that the button is actually disabled from a user interaction perspective.Consider adding an explicit assertion for the disabled state if you want to test both layers:
const button = getByText('Send Reset Link'); expect(button).toBeDisabled();frontend/app/onboarding/forgot-password.tsx (1)
49-110:KeyboardAvoidingViewis imported but not used.
KeyboardAvoidingViewis imported on line 2 but the component wraps content in a plainView. On iOS, the keyboard may obscure the email input. Consider wrapping withKeyboardAvoidingViewfor better UX.frontend/app/_layout.tsx (1)
46-46: Simplify the platform variable or move it to module scope.
Platform.OSis a static value, so recalculating inside the render function on every render is unnecessary. Also, the ternary collapses all non-iOS platforms (web, windows, etc.) to'android', which is misleading since it's only used for the iOS animation check on Line 78. Consider simplifying:♻️ Suggested simplification
- const platform = Platform.OS === 'ios' ? 'ios' : 'android';And on Line 78:
- animation: platform === 'ios' ? 'ios_from_left' : 'default' + animation: Platform.OS === 'ios' ? 'ios_from_left' : 'default'frontend/app/vault.tsx (1)
252-252:Animated.Viewwrapper no longer uses any animation props.Since the entering/exiting animations were removed from this
Animated.View, it now behaves identically to a plainView. Consider replacing it withViewfor clarity, and theSlideInUp/SlideInDownimports on Lines 5–6 can also be removed.frontend/lib/api-client.ts (2)
40-76: Consider extracting shared header-building logic to reduce duplication.
apiFetchandapiFetchStreamshare nearly identical code for building headers and attaching the auth token (~20 lines each). A small internal helper likebuildAuthHeaders(headers, skipAuth)would DRY this up and make future changes (e.g., adding a custom header) a single-point edit.Also applies to: 94-133
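A possible shape for that helper; the `skipAuth` flag and the Supabase session lookup mirror the behavior described above, but treat the details as assumptions:

```ts
async function buildAuthHeaders(headers: HeadersInit | undefined, skipAuth: boolean): Promise<Headers> {
  const merged = new Headers(headers);
  if (!merged.has('Content-Type')) {
    merged.set('Content-Type', 'application/json');
  }
  if (!skipAuth) {
    const { data } = await supabase.auth.getSession();
    const token = data.session?.access_token;
    if (token) {
      merged.set('Authorization', `Bearer ${token}`);
    }
  }
  return merged;
}
```

Both apiFetch and apiFetchStream could then call this once before dispatching the request.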
127-132: Redundantbodyandsignalassignments.Lines 129–130 explicitly set
bodyandsignalvia nullish coalescing, but they are already included inrestOptionsfrom the spread on Line 128. These lines can be removed without changing behavior.♻️ Suggested simplification
return expoFetch(url, { ...restOptions, - body: restOptions.body ?? undefined, - signal: restOptions.signal ?? undefined, headers: requestHeaders, });frontend/app/settings/privacy.tsx (1)
188-189: Consider reducing log verbosity for the download result.
logger.info('Download result:', result)may log file paths and response metadata in production. Consider downgrading tologger.debugso it's only visible in development.frontend/providers/auth-provider.tsx (1)
76-82: Missing docstring onuseAuthContexthook.Per coding guidelines, all custom hooks must have docstrings.
Proposed fix
+/** + * Retrieves the current authentication context. + * Must be used within an AuthProvider. + */ export function useAuthContext() {As per coding guidelines, "ALL custom hooks created must have docstrings".
frontend/hooks/use-user-entries.ts (4)
90-90: Missing docstring onuseUserEntrieshook.Per coding guidelines, all custom hooks must have docstrings.
Proposed fix
+/** + * Manages the current user's entries including fetching, caching, real-time subscriptions + * for shared entries, optimistic updates, retry logic, and unseen entry tracking. + */ export function useUserEntries(): UseUserEntriesResult {As per coding guidelines, "ALL custom hooks created must have docstrings".
234-259: Circular dependency betweensetupSubscriptionandscheduleReconnect.
setupSubscriptioncallsscheduleReconnect(line 346/350/354), andscheduleReconnectcallssetupSubscription(line 376). However,scheduleReconnectlistssetupSubscriptionin its dependency array (line 378), whilesetupSubscriptiondoes not listscheduleReconnect(line 359). This meanssetupSubscriptioncaptures a stalescheduleReconnectif the latter is ever recreated.In practice, since both dependency arrays are fairly stable, this may not manifest as a bug today, but it's a fragile pattern. Consider using refs to break the cycle:
Sketch
+ const setupSubscriptionRef = useRef<(userId: string) => void>(); + // After defining setupSubscription: + setupSubscriptionRef.current = setupSubscription; const scheduleReconnect = useCallback((userId: string) => { // ... reconnectTimeoutRef.current = setTimeout(() => { reconnectTimeoutRef.current = null; - setupSubscription(userId); + setupSubscriptionRef.current?.(userId); }, delay) as unknown as number; - }, [setupSubscription]); + }, []);Also applies to: 362-378
44-59: Inconsistent logging:console.erroringetEntryOwnerProfilevsloggerelsewhere.The rest of the file uses the imported
loggerutility, but this helper usesconsole.erroron lines 54 and 85.Proposed fix
- console.error('Error fetching profile from Supabase:', error); + logger.error('Error fetching profile from Supabase:', error);- console.error('Error getting entry owner profile:', error); + logger.error('Error getting entry owner profile:', error);Also applies to: 84-86
248-260: Subscribing to all INSERTs on the entries table with client-side filtering.The subscription receives every single INSERT on the
entriestable across all users and filters client-side (lines 267-276). At scale, this will generate significant unnecessary traffic and processing. Consider adding a Supabase RLS policy or a server-side filter (e.g., onshared_with_everyone) to reduce the volume, or at minimum document this as a known scaling limitation.frontend/app/onboarding/index.tsx (2)
109-109: Unexplained magic numberscaleFactor = 7.
scaleFactoris used to extend the video container beyond the screen width (width + 10 * scaleFactor = width + 70) and offset it left by5 * scaleFactor = 35. A brief comment explaining why the video is 70pt wider and offset would help future maintainers.Also applies to: 117-122
26-28:player.seekBy(10000)works but should useplayer.currentTime = player.durationinstead.Seeking by an arbitrary large offset is fragile and assumes the video is shorter than ~2.7 hours. The
expo-videoAPI providesplayer.duration(read-only) andplayer.currentTime(writable setter) for more robust seeking. Setplayer.currentTime = player.durationto skip to the end, which is clearer thanseekBy(10000).One caveat:
player.durationmay not be immediately available during player initialization (when this callback runs), so you may need to defer this seek to a later point (e.g., in auseEffectafter the duration is loaded). SincevideoFinishedis already initialized totrueon subsequent visits (line 19), the current approach has low impact—this seek only affects the visual frame shown—but the code is worth refactoring for clarity and robustness.frontend/hooks/use-auth.ts (1)
57-57: Missing docstring onuseAuthhook.Per coding guidelines, all custom hooks must have docstrings.
Proposed fix
+/** + * Manages authentication state and operations including sign-up, sign-in, + * sign-out, password reset, and password update. Tracks auth change events + * and integrates with PostHog for user identification. + */ export function useAuth(): UseAuthResult {As per coding guidelines, "ALL custom hooks created must have docstrings".
backend/routers/user.py (2)
480-518: File served viaFileResponsethen deleted — potential race condition.
download_user_exportserves the file viaFileResponseand then schedules_cleanup_export_after_downloadas a background task to delete it. FastAPI'sFileResponsestreams the file, and the background task runs after the response is sent, so this should generally work. However, if the client is slow to download and the server has aggressive timeouts, or if the same job is downloaded concurrently, you may hit issues.Consider adding a guard in the cleanup to avoid deleting while another download is in progress, or simply let the TTL-based pruning handle cleanup.
139-139: Use constants for table names instead of hardcoded strings.Lines 139, 164, 182, and 216 in
_fetch_user_export_data, along with the duplicate_fetch_export_datafunction (lines 560, 565, 571), use hardcoded table names:"profiles","entries", and"friendships". Define these as constants at the module level to improve maintainability and reduce the risk of typos.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
frontend/hooks/use-auth.ts (1)
141-163:⚠️ Potential issue | 🟡 Minor
AccountDisabledErroris never thrown but is caught.The switch/case and pattern-matching blocks in
signInnever throwAccountDisabledError, yet lines 175-178 check for it in the catch block. Either add a mapping for the disabled-account Supabase error message, or remove it from the instanceof chain to avoid confusion.Example: add a mapping for it
case 'Too many requests': throw new TooManyAttemptsError(); + case 'User is banned': + throw new AccountDisabledError(); default:
🤖 Fix all issues with AI agents
In `@backend/routers/phone_number.py`:
- Around line 113-115: The resend_phone_otp flow currently upserts the new OTP
hash to the DB before calling _send_sms_otp, which can invalidate the old OTP if
SMS delivery fails; modify resend_phone_otp to send the SMS first (call
_send_sms_otp with the generated otp and payload.phone_number) and only perform
the DB upsert (the same upsert used in start_phone_otp) after the SMS send
succeeds, or alternatively delay updating the stored hash until _send_sms_otp
confirms delivery, mirroring the start_phone_otp ordering to avoid creating a
stale, undelivered OTP record.
- Around line 121-123: Remove the sensitive otp_hash from structured logs: in
the block that computes phone_number_masked and calls logger.info (the call that
currently includes otp_hash in extra), update the logger invocation to omit
otp_hash from the extra payload and only include non-sensitive fields (e.g.,
user_id, phone_number_masked, created_at). Leave the otp_hash variable and its
storage/usage unchanged elsewhere (e.g., database upsert logic) but do not log
it from the function that calls logger.info.
In `@backend/routers/user.py`:
- Around line 1-12: There is a duplicate import of the uuid module (two separate
import uuid statements) at the top of the file; remove the redundant import so
only a single "import uuid" remains among the top-level imports (keep the one
that fits existing import ordering/style), and run the linter to ensure Ruff
F811 is resolved.
- Around line 60-63: The try/except around
datetime.fromisoformat(reference_time_str) is silently swallowing parse errors
which leaves malformed jobs unpruned; change the except to capture the exception
(except Exception as e) and log it via the module/logger used in this file
(e.g., logger.exception or logger.error(..., exc_info=True)) mentioning
reference_time_str and the parse failure, then continue as before so operators
can detect corrupted data; keep the existing flow that skips the bad job after
logging.
In `@frontend/hooks/use-auth.ts`:
- Around line 30-34: Add a JSDoc comment immediately above the useAuth function
that documents the hook’s purpose, the shape of its return value (mentioning
user: User | null, session: Session | null, loading: boolean, authEvent:
AuthChangeEvent | null and any exposed setters or handlers included in
UseAuthResult), notes that it takes no arguments, and a short usage example
showing how to call the hook inside a component; ensure the comment references
the UseAuthResult type and the hook name useAuth so readers can quickly locate
related types and usage.
🧹 Nitpick comments (8)
frontend/components/toast-message.tsx (1)
3-4: UnusedSlideInDownimport.
SlideInDownis still imported on line 3 but no longer referenced anywhere — the entering animation now usesSlideInUp. Remove it to keep imports clean.Proposed fix
-import Animated, { SlideInDown, SlideInUp, SlideOutUp, runOnJS } from 'react-native-reanimated'; +import Animated, { SlideInUp, SlideOutUp, runOnJS } from 'react-native-reanimated';Also applies to: 17-18
backend/routers/phone_number.py (2)
91-95: Extract phone-number masking into a helper.The expression
f"...{phone_number[-4:]}" if phone_number and len(phone_number) >= 4 else "****"is duplicated four times. A small helper would reduce noise and ensure consistent masking.Suggested helper
+def _mask_phone(phone_number: str | None) -> str: + if phone_number and len(phone_number) >= 4: + return f"...{phone_number[-4:]}" + return "****"Also applies to: 113-113, 122-122
280-280: Use lazy%sformatting in logger call.
logger.warning(f"Failed to delete ...")eagerly formats the string. Preferlogger.warning("Failed to delete phone_number_updates record %s: %s", record_id, e)for consistency and to avoid formatting cost when the log level is disabled.backend/requirements.txt (1)
60-61: Test dependencies in production requirements.
pytestandpytest-asyncioare test-only dependencies and should not be in the productionrequirements.txt. They inflate the production image and widen the attack surface. Move them to a separaterequirements-dev.txtor[dev]extras group.backend/routers/user.py (4)
253-258: remove_none silently drops None elements from lists, not just None dict values. Line 257 filters out None items from lists (if v is not None). While this is probably fine for the current Supabase response shape, it could silently discard legitimate list entries if any are None. If the intent is only to strip null-valued dict keys, consider leaving list elements intact:
Proposed fix
  def remove_none(obj: Any) -> Any:
      if isinstance(obj, dict):
          return {k: remove_none(v) for k, v in obj.items() if v is not None}
      if isinstance(obj, list):
-         return [remove_none(v) for v in obj if v is not None]
+         return [remove_none(v) for v in obj]
      return obj
278-279: Redundantimport html— already imported at module level (line 5).This local import shadows the top-level one unnecessarily. Remove it since
html.escapeis already available from the top-level import.Proposed fix
if format == ExportFormat.html: - import html - def validate_url(url: str) -> str:
560-564: Quote thefilenameinContent-Dispositionheader.While
user_idis a UUID (no special chars), best practice is to quote the filename value to handle any edge cases:Proposed fix
- headers={"Content-Disposition": f"attachment; filename={filename}"}, + headers={"Content-Disposition": f'attachment; filename="{filename}"'},
91-91: Prefer%-style formatting inloggercalls for lazy evaluation.The file mixes f-strings (e.g., lines 91, 110, 130, 181, 217) with
%-style formatting (e.g., lines 194, 377, 393) in logger calls.%-style defers string interpolation until the message is actually emitted, avoiding unnecessary work when the log level is disabled.
try:
    reference_time = datetime.fromisoformat(reference_time_str)
except Exception:  # noqa: BLE001
    continue
Silently swallowing datetime parse errors means corrupted jobs are never pruned.
If reference_time_str is malformed, the continue skips the job forever — it will accumulate in memory indefinitely. Log the exception so operators can detect data corruption.
Proposed fix
  try:
      reference_time = datetime.fromisoformat(reference_time_str)
- except Exception:  # noqa: BLE001
+ except Exception:
+     logger.warning("Invalid timestamp in export job %s, removing it", job_id)
+     expired_job_ids.append(job_id)
      continue
🧰 Tools
🪛 Ruff (0.15.0)
[error] 62-63: try-except-continue detected, consider logging the exception
(S112)
🤖 Prompt for AI Agents
In `@backend/routers/user.py` around lines 60 - 63, The try/except around
datetime.fromisoformat(reference_time_str) is silently swallowing parse errors
which leaves malformed jobs unpruned; change the except to capture the exception
(except Exception as e) and log it via the module/logger used in this file
(e.g., logger.exception or logger.error(..., exc_info=True)) mentioning
reference_time_str and the parse failure, then continue as before so operators
can detect corrupted data; keep the existing flow that skips the bad job after
logging.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
frontend/services/push-notification-service.ts (1)
226-227:⚠️ Potential issue | 🟡 MinorUse
TABLES.NOTIFICATION_SETTINGSinstead of the string literal.All other call sites in this file already use the
TABLESconstant — this one was missed.Proposed fix
const { data, error } = await supabase - .from('notification_settings') + .from(TABLES.NOTIFICATION_SETTINGS)As per coding guidelines, "When making calls to the Supabase Database, use the
TABLESobject inconstants/supabase.tsfile instead of writing the table name as a plain string".frontend/hooks/use-user-entries.ts (2)
267-411: ⚠️ Potential issue | 🟠 Major

Circular dependency between `setupSubscription` and `scheduleReconnect` — stale closure risk.

`setupSubscription` (Line 267) calls `scheduleReconnect` but omits it from its dependency array. `scheduleReconnect` (Line 395) depends on `setupSubscription`. Adding the missing dep would create an infinite re-creation loop.

The practical result: after `fetchMissedEntries` or `queryClient` identity changes, `setupSubscription` is recreated, causing `scheduleReconnect` to be recreated, but `setupSubscription` still holds the old `scheduleReconnect` reference — so reconnection attempts after that point use stale logic.

🔧 Suggested fix: break the cycle with a ref

+ const setupSubscriptionRef = useRef<(userId: string) => void>(() => {});
+
  // Helper function to schedule reconnection with exponential backoff
  const scheduleReconnect = useCallback((userId: string) => {
    if (reconnectTimeoutRef.current) {
      clearTimeout(reconnectTimeoutRef.current);
    }
    const delay = Math.min(1000 * Math.pow(2, reconnectAttemptsRef.current), 30000);
    reconnectAttemptsRef.current += 1;
    logger.info(`Scheduling reconnection attempt ${reconnectAttemptsRef.current} in ${delay}ms`);
    reconnectTimeoutRef.current = setTimeout(() => {
      reconnectTimeoutRef.current = null;
-     setupSubscription(userId);
+     setupSubscriptionRef.current(userId);
    }, delay) as unknown as number;
- }, [setupSubscription]);
+ }, []);

  const setupSubscription = useCallback((userId: string) => {
    // ... existing body, now safe to list scheduleReconnect in deps ...
- }, [queryClient, fetchMissedEntries]);
+ }, [queryClient, fetchMissedEntries, scheduleReconnect]);
+
+ // Keep ref in sync
+ useEffect(() => {
+   setupSubscriptionRef.current = setupSubscription;
+ }, [setupSubscription]);

Move `scheduleReconnect` above `setupSubscription` so it's in scope, then add it to the deps. The ref indirection in `scheduleReconnect` breaks the cycle.
116-164: ⚠️ Potential issue | 🟠 Major

Cache-first path can break pagination: `getNextPageParam` sees fewer than `DEFAULT_PAGE_SIZE` entries and stops.

When `deviceStorage` returns between `MIN_ENTRIES_TO_CACHE + 1` (11) and `DEFAULT_PAGE_SIZE - 1` (19) cached entries, `getNextPageParam` receives a page shorter than 20 items and returns `undefined`, signaling no more pages. The user is stuck with only cached data and cannot load more.

Additionally, if cached entries are returned as the first "page" and their count exceeds `DEFAULT_PAGE_SIZE`, the cursor from `getNextPageParam` may not align with actual DB page boundaries, risking duplicates or gaps.

🔧 Possible fix: always fetch from DB, use cache only as placeholder

One approach: return cached data as `placeholderData`/`initialData` on the query, and always let the first page come from the DB so page sizes are consistent:

-    // Try to get cached entries first for initial load
-    if (!pageParam) {
-      const cachedEntries = await deviceStorage.getEntries(user.id);
-      if (cachedEntries && cachedEntries.length > MIN_ENTRIES_TO_CACHE) {
-        // If we have cached data, we use it.
-        // Note: This might overlap with first page fetch but gives instant UI.
-        return cachedEntries;
-      }
-    }

Instead, provide cached data via `placeholderData` on the query config so the UI renders instantly while the real first page is still being fetched. This keeps page sizes consistent for the cursor logic (a sketch of this approach follows below).

Alternatively, if the cache-first approach is intentional, `getNextPageParam` must not rely on page length when the first page comes from the cache:

  getNextPageParam: (lastPage: EntryWithProfile[]) => {
-   if (!lastPage || lastPage.length < DEFAULT_PAGE_SIZE) {
+   if (!lastPage || lastPage.length === 0) {
      return undefined;
    }
    return lastPage[lastPage.length - 1].created_at;
  },

Note that this second approach means the hook never knows "there are no more pages" until it fetches an empty page, which is a minor UX trade-off.
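For reference, a minimal sketch of that placeholder approach, not code from this PR: it assumes TanStack Query v5-style options, reuses the PR's `deviceStorage`, `DEFAULT_PAGE_SIZE`, and `EntryWithProfile` names, and the `useUserEntriesSketch` wrapper plus `fetchEntriesPage` are illustrative stand-ins.

import { useEffect, useState } from 'react';
import { useInfiniteQuery } from '@tanstack/react-query';

function useUserEntriesSketch(user: { id: string } | null) {
  // Illustrative only: hold at most one page of cached entries for instant UI.
  const [cachedFirstPage, setCachedFirstPage] = useState<EntryWithProfile[] | null>(null);

  useEffect(() => {
    if (!user) return;
    deviceStorage.getEntries(user.id).then((cached) => {
      // Trim to one page so the cursor math matches real DB pages.
      if (cached && cached.length > 0) {
        setCachedFirstPage(cached.slice(0, DEFAULT_PAGE_SIZE));
      }
    });
  }, [user]);

  return useInfiniteQuery({
    queryKey: ['user-entries', user?.id],
    queryFn: fetchEntriesPage, // always hits the DB, including for page 1
    initialPageParam: null as string | null,
    getNextPageParam: (lastPage: EntryWithProfile[]) =>
      lastPage.length < DEFAULT_PAGE_SIZE ? undefined : lastPage[lastPage.length - 1].created_at,
    // Shown instantly but never written to the query cache; replaced once page 1 arrives.
    placeholderData: cachedFirstPage
      ? { pages: [cachedFirstPage], pageParams: [null] }
      : undefined,
  });
}

Because placeholder data is never persisted to the query cache, the realtime merge logic discussed elsewhere in this review only ever sees server-backed pages.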
🤖 Fix all issues with AI agents
In `@frontend/hooks/use-user-entries.ts`:
- Line 96: Add a docstring comment above the exported hook function
useUserEntries describing its purpose, inputs (none), and return type
(UseUserEntriesResult); include a short summary of what the hook does (e.g.,
fetches and returns user entry state and actions), mention any side effects or
context it uses, and give a minimal usage example or note about its return shape
to satisfy the "ALL custom hooks must have docstrings" guideline.
- Around line 217-241: The code treats the useInfiniteQuery cache as a flat
EntryWithProfile[] but queryClient.getQueryData(['user-entries', userId])
returns InfiniteData<EntryWithProfile[]>; update fetchMissedEntries and the
realtime callback to unwrap and rewrap the infinite structure: read current =
queryClient.getQueryData<InfiniteData<EntryWithProfile[]>>(['user-entries',
userId])?.pages ?? [], flatten pages to compute currentEntryIds and merge
missedEntries with the flattened entries, then rebuild pages (or replace only
the first page) into an InfiniteData shape before calling
queryClient.setQueryData<InfiniteData<EntryWithProfile[]>>(['user-entries',
userId], updatedInfiniteData); apply the same unwrap/flatten/merge/rewrap
pattern for the realtime callback variables currentEntries, missedEntries,
sortedEntries and for the cache writes using setQueryData so the InfiniteData
structure is preserved.
- Around line 325-338: The realtime handler is reading/writing the query cache
as a flat EntryWithProfile[] but the query was created with useInfiniteQuery
(stored as InfiniteData with { pages, pageParams }), so calling
queryClient.getQueryData and .some() and then setQueryData with a flat array
will break the cache; update the logic that uses queryClient.getQueryData and
queryClient.setQueryData for the key ['user-entries', userId] to treat the
cached value as InfiniteData<EntryWithProfile[]>: inspect pages (e.g., iterate
pages[].data or pages[] depending on your page shape) to deduplicate against
typedNewEntry.id, and when inserting prepend the new entry into the first page
(or create a new first page) rather than replacing the whole cache with a flat
array; use the same key and typedNewEntry variable, and ensure your setter
preserves pageParams and other pages.
In `@frontend/services/push-notification-service.ts`:
- Line 121: The debug statement is logging this.currentToken before it is
assigned; update the log in the push-notification flow so it logs the freshly
obtained token (tokenData.data) or move the logger.debug call to after the
assignment to this.currentToken; target the logger.debug call that currently
references this.currentToken and replace it with tokenData.data (or relocate it
to follow the assignment to this.currentToken) so the printed token is the
actual new value.
🧹 Nitpick comments (9)
frontend/hooks/use-auth.ts (2)
137-189: Sign-in error classification is solid; `AccountDisabledError` in catch is unreachable.

The `instanceof AccountDisabledError` check at line 180 has no corresponding `throw` in the try block, making it dead code. It's harmless for now, but if the intent is to handle a disabled-account response from Supabase, consider mapping it in the `switch`/`includes` block above (e.g., matching on a "banned" or "disabled" message).
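If that mapping is intended, a small sketch of the idea; the exact Supabase error wording and the `AccountDisabledError` constructor are assumptions, not code from this PR.

// Illustrative only: classify a banned/disabled account before the generic cases.
const message = (error?.message ?? '').toLowerCase();
if (message.includes('banned') || message.includes('disabled')) {
  // Constructor signature assumed; adjust to the real AccountDisabledError API.
  throw new AccountDisabledError('This account has been disabled.');
}
// ...fall through to the existing switch/includes classification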
209-228: New `resetPasswordForEmail` and `updatePassword` look correct overall.

One minor note: `signIn`, `signUp`, and `signOut` all toggle `setLoading(true/false)`, but these two new methods do not. If any UI depends on `loading` to show a spinner during password reset or update, it won't trigger. Consider adding `setLoading` for consistency, or document the intentional difference.

backend/routers/phone_number.py (2)
signIn,signUp, andsignOutall togglesetLoading(true/false), but these two new methods do not. If any UI depends onloadingto show a spinner during password reset or update, it won't trigger. Consider addingsetLoadingfor consistency, or document the intentional difference.backend/routers/phone_number.py (2)
91-94: Extract the repeated phone-number masking into a helper.The expression
f"...{phone_number[-4:]}" if phone_number and len(phone_number) >= 4 else "****"is duplicated five times (lines 91, 94, 113, 122, 179). A small helper keeps masking logic consistent and easy to change.♻️ Proposed helper
Add near the other utility functions at the top of the file:
def _mask_phone(phone_number: str | None) -> str: """Return the last 4 digits of a phone number, masked for logging.""" if phone_number and len(phone_number) >= 4: return f"...{phone_number[-4:]}" return "****"Then replace all inline occurrences, e.g.:
- phone_number_masked = f"...{phone_number[-4:]}" if phone_number and len(phone_number) >= 4 else "****" - logger.info("Twilio send successful", extra={"phone_number": phone_number_masked}) + logger.info("Twilio send successful", extra={"phone_number": _mask_phone(phone_number)})
83-83: Consider reusing the TwilioClientinstance instead of creating one per call.Instantiating
Clientinside_send_sms_otpcreates a new HTTP session on every OTP send. A module-level or cached instance avoids that overhead and is safe since the client is thread-safe.♻️ Example: lazy singleton
_twilio_client: Client | None = None def _get_twilio_client() -> Client: global _twilio_client if _twilio_client is None: _twilio_client = Client(settings.TWILIO_ACCOUNT_SID, settings.TWILIO_AUTH_TOKEN) return _twilio_clientThen in
_send_sms_otp:- client = Client(settings.TWILIO_ACCOUNT_SID, settings.TWILIO_AUTH_TOKEN) + client = _get_twilio_client()backend/routers/user.py (3)
278-278: Redundant `import html` — already imported at module level (line 5).

This inner import shadows the module-level `import html` without adding anything. Remove it.

Proposed fix

 if format == ExportFormat.html:
-    import html
-
     def validate_url(url: str) -> str:
562-562: Quote the `filename` in the `Content-Disposition` header.

If `filename` ever contains spaces or special characters, the header will be malformed. Use proper quoting to be safe per RFC 6266.

Proposed fix

- headers={"Content-Disposition": f"attachment; filename={filename}"},
+ headers={"Content-Disposition": f'attachment; filename="{filename}"'},
75-75: Remove stale `# noqa: BLE001` directives (also on lines 492 and 564).

Ruff RUF100 reports these `noqa` directives are unused. They can be removed.

frontend/app/vault.tsx (2)
216-224: Pagination integration looks correct.

Minor nit: `ListFooterComponent` is an inline arrow function recreated every render. Consider extracting it or memoizing to avoid unnecessary footer remounts, though the impact is likely negligible here.
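One hedged way to do that, as a sketch only: `isFetchingNextPage` and `Colors.primary` mirror what the surrounding comments describe, while `entries`, `renderEntry`, and `fetchNextPage` are illustrative names rather than this file's actual identifiers.

import React, { useMemo } from 'react';
import { ActivityIndicator, FlatList } from 'react-native';

// Illustrative only: keep the footer element stable across renders so FlatList
// does not remount it every time the parent re-renders.
const listFooter = useMemo(
  () => (isFetchingNextPage ? <ActivityIndicator size="small" color={Colors.primary} /> : null),
  [isFetchingNextPage]
);

return (
  <FlatList
    data={entries}
    renderItem={renderEntry}
    keyExtractor={(item) => item.id}
    ListFooterComponent={listFooter}
    onEndReached={fetchNextPage}
  />
);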
218-224: Nit: Use `Colors.primary` instead of hardcoded `"#8B5CF6"`.

Line 221 uses a hardcoded color. `Colors.primary` from `@/lib/constants` already defines this value.

Proposed fix

- <ActivityIndicator size="small" color="#8B5CF6" />
+ <ActivityIndicator size="small" color={Colors.primary} />
const DEFAULT_PAGE_SIZE = 20;

export function useUserEntries(): UseUserEntriesResult {
🛠️ Refactor suggestion | 🟠 Major
Missing docstring on useUserEntries hook.
The getEntryOwnerProfile helper above has a proper docstring, but the main exported hook does not. As per coding guidelines, "ALL custom hooks created must have docstrings".
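For illustration, the kind of docstring the guideline asks for might look like the sketch below; the wording and the example's field names are illustrative, not prescribed by this PR.

/**
 * Returns the signed-in user's entries as an infinite, cursor-paginated list and
 * keeps the React Query cache in sync via a realtime subscription.
 *
 * Side effects: subscribes to Supabase realtime changes and schedules reconnects.
 *
 * @returns {UseUserEntriesResult} the entry pages plus pagination/refetch helpers.
 *
 * @example
 * const { entries, fetchNextPage, hasNextPage } = useUserEntries();
 */
export function useUserEntries(): UseUserEntriesResult {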
🤖 Prompt for AI Agents
In `@frontend/hooks/use-user-entries.ts` at line 96, Add a docstring comment above
the exported hook function useUserEntries describing its purpose, inputs (none),
and return type (UseUserEntriesResult); include a short summary of what the hook
does (e.g., fetches and returns user entry state and actions), mention any side
effects or context it uses, and give a minimal usage example or note about its
return shape to satisfy the "ALL custom hooks must have docstrings" guideline.
console.log('Push token:', this.currentToken);
logger.debug('Push token:', this.currentToken);
Bug: Logging this.currentToken which is still null at this point.
this.currentToken is not assigned until line 125. You likely meant to log tokenData.data.
🐛 Proposed fix
- logger.debug('Push token:', this.currentToken);
+ logger.debug('Push token:', tokenData.data);

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- logger.debug('Push token:', this.currentToken);
+ logger.debug('Push token:', tokenData.data);
🤖 Prompt for AI Agents
In `@frontend/services/push-notification-service.ts` at line 121, The debug
statement is logging this.currentToken before it is assigned; update the log in
the push-notification flow so it logs the freshly obtained token
(tokenData.data) or move the logger.debug call to after the assignment to
this.currentToken; target the logger.debug call that currently references
this.currentToken and replace it with tokenData.data (or relocate it to follow
the assignment to this.currentToken) so the printed token is the actual new
value.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
frontend/hooks/use-user-entries.ts (1)
280-432: ⚠️ Potential issue | 🔴 Critical

Circular dependency between `setupSubscription` and `scheduleReconnect` — stale closure bug.
`setupSubscription` (Line 280) calls `scheduleReconnect` inside the subscribe callback but does not list it in its dependency array (Line 413). `scheduleReconnect` (Line 416) calls `setupSubscription` and does list it in its dependency array (Line 432). This means:

- `setupSubscription` always captures the initial `scheduleReconnect` reference from the first render.
- After any dependency change causes `setupSubscription` to be recreated, the old `scheduleReconnect` it closed over still points to a stale `setupSubscription`.

This breaks reconnection after the first attempt. Consider using a single `useRef` to hold the latest `setupSubscription` callback, or combine both into a single `useCallback`, or use `useRef` for `scheduleReconnect` so `setupSubscription` always calls the latest version.

🛠️ One approach: use a ref for the reconnect function
useRefto hold the latestsetupSubscriptioncallback, or combine both into a singleuseCallback, or useuseRefforscheduleReconnectsosetupSubscriptionalways calls the latest version.🛠️ One approach: use a ref for the reconnect function
+ const setupSubscriptionRef = useRef<(userId: string) => void>(); + const setupSubscription = useCallback((userId: string) => { // ... existing body ... }, [queryClient, fetchMissedEntries]); + // Keep ref in sync + useEffect(() => { + setupSubscriptionRef.current = setupSubscription; + }, [setupSubscription]); const scheduleReconnect = useCallback((userId: string) => { if (reconnectTimeoutRef.current) { clearTimeout(reconnectTimeoutRef.current); } const delay = Math.min(1000 * Math.pow(2, reconnectAttemptsRef.current), 30000); reconnectAttemptsRef.current += 1; logger.info(`Scheduling reconnection attempt ${reconnectAttemptsRef.current} in ${delay}ms`); reconnectTimeoutRef.current = setTimeout(() => { reconnectTimeoutRef.current = null; - setupSubscription(userId); + setupSubscriptionRef.current?.(userId); - }, delay) as unknown as number; + }, delay) as unknown as number; - }, [setupSubscription]); + }, []);Then
setupSubscriptioncan safely referencescheduleReconnectwithout a circular dep.
🤖 Fix all issues with AI agents
In `@frontend/hooks/use-user-entries.ts`:
- Around line 147-149: The current check in the useUserEntries hook uses
userEntriesError.message even when userEntriesError can be null, causing a
TypeError if entries is null and userEntriesError is null; update the error
handling in useUserEntries to first check entries and userEntriesError
separately and throw a safe Error: if userEntriesError exists rethrow or use its
message, otherwise throw a new Error with a clear message like "No entries
returned from Supabase" (use userEntriesError and entries identifiers to locate
and update the logic).
- Around line 116-127: The cached-entry return in queryFn uses the full
deviceStorage.getEntries(user.id) array which breaks pagination invariants;
modify queryFn so when pageParam is falsy you still use cached data for instant
UI but slice the cached array to DEFAULT_PAGE_SIZE before returning (so first
page size matches expected page size) and trigger a background refetch (e.g.,
via a refetch or invalidation) to load the true first page from the server;
ensure references to MIN_ENTRIES_TO_CACHE, DEFAULT_PAGE_SIZE,
deviceStorage.getEntries, and getNextPageParam remain consistent so
getNextPageParam sees correctly sized pages and hasNextPage is computed
properly.
- Around line 350-359: The realtime handler currently calls
queryClient.setQueryData for ['user-entries', userId] and returns undefined when
oldData is falsy, silently dropping realtime entries; change the update logic in
the setQueryData callback (and similarly in fetchMissedEntries) to initialize a
proper InfiniteData<EntryWithProfile[]> shape when oldData is undefined — create
a minimal object with pages: [[typedNewEntry]] and reasonable pageParams (e.g.,
an empty array or initial cursor) so the realtime entry is inserted into the
cache even if the query hasn't been fetched yet; update the code paths around
queryClient.setQueryData and any helper functions that assume oldData exists to
handle this initialized cache shape.
🧹 Nitpick comments (1)
frontend/hooks/use-user-entries.ts (1)
44-92: Inconsistent logging: `console.error` vs `logger.error`.

`getEntryOwnerProfile` uses `console.error` (Lines 58, 89) while the rest of the file consistently uses `logger.error`. Consider switching to `logger` for uniform structured logging.
  queryFn: async ({ pageParam }) => {
    if (!user) return [];

-   // Try to get cached entries first
-   const cachedEntries = await deviceStorage.getEntries(user.id);
-   if (cachedEntries && cachedEntries.length > MIN_ENTRIES_TO_CACHE) {
-     return cachedEntries;
+   // Try to get cached entries first for initial load
+   if (!pageParam) {
+     const cachedEntries = await deviceStorage.getEntries(user.id);
+     if (cachedEntries && cachedEntries.length > MIN_ENTRIES_TO_CACHE) {
+       // If we have cached data, we use it.
+       // Note: This might overlap with first page fetch but gives instant UI.
+       return cachedEntries;
+     }
    }
Cached entries bypass pagination sizing, which can prematurely stop pagination or create oversized pages.
When pageParam is falsy, this returns the full cached array regardless of size. Two problems:
- If cached entries count is between `MIN_ENTRIES_TO_CACHE` (10) and `DEFAULT_PAGE_SIZE` (20), `getNextPageParam` sees `lastPage.length < DEFAULT_PAGE_SIZE` and returns `undefined`, so `hasNextPage` becomes `false` — even though more entries exist in the DB.
- If cached entries count exceeds `DEFAULT_PAGE_SIZE`, the first "page" is larger than expected, creating inconsistent page sizes in the `InfiniteData` structure.
Consider slicing cached entries to DEFAULT_PAGE_SIZE or triggering a background refetch after returning cached data, so pagination invariants are preserved.
🛠️ Suggested approach
  if (!pageParam) {
    const cachedEntries = await deviceStorage.getEntries(user.id);
    if (cachedEntries && cachedEntries.length > MIN_ENTRIES_TO_CACHE) {
-     return cachedEntries;
+     // Return at most one page worth to preserve pagination invariants
+     return cachedEntries.slice(0, DEFAULT_PAGE_SIZE);
    }
  }

🤖 Prompt for AI Agents
In `@frontend/hooks/use-user-entries.ts` around lines 116 - 127, The cached-entry
return in queryFn uses the full deviceStorage.getEntries(user.id) array which
breaks pagination invariants; modify queryFn so when pageParam is falsy you
still use cached data for instant UI but slice the cached array to
DEFAULT_PAGE_SIZE before returning (so first page size matches expected page
size) and trigger a background refetch (e.g., via a refetch or invalidation) to
load the true first page from the server; ensure references to
MIN_ENTRIES_TO_CACHE, DEFAULT_PAGE_SIZE, deviceStorage.getEntries, and
getNextPageParam remain consistent so getNextPageParam sees correctly sized
pages and hasNextPage is computed properly.
queryClient.setQueryData<InfiniteData<EntryWithProfile[]>>(
  ['user-entries', userId],
  (oldData) => {
    if (!oldData) return undefined;
    return {
      ...oldData,
      pages: [[typedNewEntry, ...oldData.pages[0]], ...oldData.pages.slice(1)],
    };
  }
);
Realtime entry silently dropped if query cache is not yet initialized.
On Line 353, if oldData is undefined (e.g., query hasn't completed its first fetch yet when a realtime event arrives), returning undefined discards the new entry entirely with no fallback. The same applies to Line 235 in fetchMissedEntries. Consider initializing the cache structure if it doesn't exist:
- if (!oldData) return undefined;
+ if (!oldData) return {
+ pages: [[typedNewEntry]],
+ pageParams: [null],
+ };🤖 Prompt for AI Agents
In `@frontend/hooks/use-user-entries.ts` around lines 350 - 359, The realtime
handler currently calls queryClient.setQueryData for ['user-entries', userId]
and returns undefined when oldData is falsy, silently dropping realtime entries;
change the update logic in the setQueryData callback (and similarly in
fetchMissedEntries) to initialize a proper InfiniteData<EntryWithProfile[]>
shape when oldData is undefined — create a minimal object with pages:
[[typedNewEntry]] and reasonable pageParams (e.g., an empty array or initial
cursor) so the realtime entry is inserted into the cache even if the query
hasn't been fetched yet; update the code paths around queryClient.setQueryData
and any helper functions that assume oldData exists to handle this initialized
cache shape.
Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Tests