Added Improvements for Version 0.9.7 #45
Walkthrough
Recipient filtering and multi-device push handling standardized; PostHog events added for enqueue/send; realtime unseen-entry tracking and UI indicator added; global ErrorBoundary introduced; invite flow renamed to inviteCode; multiple frontend typography, UX, and utility hardening changes applied.
Sequence Diagram(s)
sequenceDiagram
autonumber
participant Enqueue as NotificationEnqueueService
participant Notification as NotificationService
participant DB as Database (profiles / push_tokens)
participant Push as Push Gateway
participant PH as PostHog
Enqueue->>Notification: enqueue_entry_notification(entry, recipients)
Notification->>DB: query push_tokens and profiles for recipients
DB-->>Notification: tokens (user→[tokens]), emails
Notification->>Push: send payload to flattened tokens list
Push-->>Notification: per-token responses (success/failure)
Notification->>PH: capture_notification_enqueued_event(metadata, recipients, title, body, priority)
Notification->>PH: capture_notification_sent_event(per-recipient metadata...)
Notification-->>Enqueue: return result
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (7)
frontend/components/toast-message.tsx (2)
12-13: ⚠️ Potential issue | 🔴 Critical
Exit animation and `onHide` callback will never execute.
When `visible` becomes `false`, the component immediately returns `null` and unmounts. This prevents the `SlideOutUp` exiting animation from playing and the `onHide` callback from being invoked.
To preserve exit animations, keep the component mounted and let Reanimated control visibility, or use a separate state to delay unmounting until after the exit animation completes.
🐛 Proposed fix using delayed unmounting
-import React, { useEffect } from 'react';
+import React, { useEffect, useState } from 'react';
 import { View, Text, StyleSheet } from 'react-native';
 import Animated, { SlideInDown, SlideOutUp, runOnJS } from 'react-native-reanimated';
 ...
 export default function ToastMessage({ message, type, visible, onHide }: ToastMessageProps) {
-  if (!visible) return null;
+  const [shouldRender, setShouldRender] = useState(visible);
+
+  useEffect(() => {
+    if (visible) {
+      setShouldRender(true);
+    }
+  }, [visible]);
+
+  const handleExitComplete = () => {
+    setShouldRender(false);
+    onHide?.();
+  };
+
+  if (!shouldRender) return null;

   return (
     <Animated.View
-      entering={SlideInDown.duration(300).springify().damping(20).stiffness(90)}
-      exiting={SlideOutUp.duration(200).withCallback(() => {
-        if (onHide) {
-          runOnJS(onHide)();
-        }
-      })}
+      entering={visible ? SlideInDown.duration(300).springify().damping(20).stiffness(90) : undefined}
+      exiting={!visible ? SlideOutUp.duration(200).withCallback(() => {
+        runOnJS(handleExitComplete)();
+      }) : undefined}
       style={styles.outerContainer}
     >
1-1: ⚠️ Potential issue | 🟡 Minor
Unused import: `useEffect`.
`useEffect` is imported but never used in this component.
🧹 Proposed fix
-import React, { useEffect } from 'react';
+import React from 'react';
Note: If you apply the fix for the exit animation bug above, `useEffect` will be needed.
frontend/app/capture/details.tsx (1)
253-262: ⚠️ Potential issue | 🟡 Minor
Duplicate error toast will show twice.
When `!hasSelectedSharing()` is true, the code shows the specific toast "Please select who to share this entry with" on Line 256, then immediately shows a second generic toast "Cannot save entry" on Line 260. The second toast call appears to be unintended duplication.
🐛 Proposed fix
 const handleSave = async () => {
   if (!capture || !user || !hasSelectedSharing()) {
     if (!hasSelectedSharing()) {
       toast('Please select who to share this entry with', 'error');
     } else {
       toast('Cannot save entry', 'error');
     }
-    toast('Cannot save entry', 'error');
     return;
   }
frontend/hooks/use-user-entries.ts (1)
80-80: ⚠️ Potential issue | 🟡 Minor
Stale comment doesn't match the actual value.
The comment says "5 minutes" but the value is `1000 * 60 * 10`, which equals 10 minutes.
🔧 Fix the comment
-  staleTime: 1000 * 60 * 10, // 5 minutes
+  staleTime: 1000 * 60 * 10, // 10 minutes
frontend/hooks/use-invite-acceptance.ts (2)
84-97: ⚠️ Potential issue | 🟠 Major
Null result from `maybeSingle()` not handled.
When `maybeSingle()` returns `null` (no matching profile found), the code continues and attempts to access properties on `null` at lines 106-114. This will cause a runtime error when no profile matches the invite code.
🐛 Proposed fix to handle null result
 const { data: invite, error } = await supabase
   .from(TABLES.PROFILES)
   .select('id, email, full_name, avatar_url, username, invite_code')
   .eq('invite_code', inviteCode.trim())
   .maybeSingle();

 if (error) {
   setError('This invitation link is invalid or has expired.');
   return null;
 }
+
+if (!invite) {
+  setError('This invitation link is invalid or has expired.');
+  return null;
+}
163-183: ⚠️ Potential issue | 🟡 Minor
`declineInvite` has no implementation.
The function body is empty but returns `success: true`, giving a false impression that the operation succeeded. Either implement the decline logic or remove this function from the interface.
frontend/services/friend-service.ts (1)
317-322: ⚠️ Potential issue | 🟡 Minor
Email validation allows commas, which could break the `.in()` filter when emails contain commas.
The `isValidEmail()` regex allows any non-whitespace, non-@ characters, including commas (e.g., `"first,last"@example.com`). When these emails are joined with commas at line 318, the resulting filter becomes malformed: `email.in.(user@ex.com,"first,last"@ex.com)` may be parsed as 3 values instead of 2. Phone numbers are safe since `isValidPhoneNumber()` enforces a digit-only format after cleaning. Quote email/phone values before joining, or restrict the email regex to exclude special characters.
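As a sketch of the quoting approach (not code from the PR): assuming the `in.(...)` filter string is assembled by hand in PostgREST syntax, each value can be wrapped in double quotes so an embedded comma cannot split it into extra entries. The helper names below are illustrative.

```typescript
// Hypothetical helpers, not part of friend-service.ts.
// PostgREST allows values inside in.(...) to be double-quoted; embedded
// quotes and backslashes are escaped with a backslash.
function quoteFilterValue(value: string): string {
  return `"${value.replace(/\\/g, '\\\\').replace(/"/g, '\\"')}"`;
}

function buildEmailInFilter(emails: string[]): string {
  return `email.in.(${emails.map(quoteFilterValue).join(',')})`;
}

// Stays well-formed even when an email contains a comma:
// email.in.("user@ex.com","\"first,last\"@ex.com")
console.log(buildEmailInFilter(['user@ex.com', '"first,last"@ex.com']));
```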
🤖 Fix all issues with AI agents
In `@backend/services/notification_enqueue_service.py`:
- Around line 142-144: entry_type_display already includes the indefinite
article for non-audio entries, so the notification body is adding a duplicate
"a"; change the body construction to use entry_type_display directly (e.g.
replace f"{owner_name} shared a {entry_type_display} with you" with
f"{owner_name} shared {entry_type_display} with you") so you don't prepend an
extra article; keep the existing entry_type_display logic (the variable named
entry_type_display) and only adjust the body variable.
In `@backend/services/notification_service.py`:
- Around line 721-724: The code is currently adding raw email addresses into the
analytics properties dict (properties["email"] = email), which raises privacy
concerns; change this so that you either remove the email field entirely (only
send properties["user_id"]) or only include a hashed/pseudonymized value (e.g.,
SHA256 of email) and only when a clear consent/config flag is enabled. Update
the logic around the properties dict (where properties is set and email is used)
to check a new config/feature-flag like ANALYTICS_INCLUDE_EMAIL or a user
consent helper (e.g., user_has_consented_to_analytics(user_id)) before adding
email, or replace email with a non-reversible hash; apply the same change at the
other occurrence referenced (the block around lines 778-781) so both places
behave consistently. Ensure any config/consent check names you add are used by
the code path that eventually calls the PostHog/send event function so raw
emails are never sent unless explicitly allowed.
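A minimal sketch of the hashed-email option described above; the `ANALYTICS_INCLUDE_EMAIL` flag comes from the prompt, and reading it from an environment variable is an assumption about how the service is configured.

```python
import hashlib
import os
from typing import Optional

# Assumption: the flag is exposed as an environment variable; the real config
# mechanism in the service may differ.
ANALYTICS_INCLUDE_EMAIL = os.getenv("ANALYTICS_INCLUDE_EMAIL", "false").lower() == "true"


def pseudonymize_email(email: str) -> str:
    """Return a non-reversible identifier derived from the email address."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()


def add_recipient_properties(properties: dict, user_id: Optional[str], email: Optional[str]) -> None:
    # The internal user id is always safe to include.
    if user_id:
        properties["user_id"] = user_id
    # Only attach a hashed email, and only when explicitly enabled.
    if email and ANALYTICS_INCLUDE_EMAIL:
        properties["email_sha256"] = pseudonymize_email(email)
```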
In `@backend/test_enqueue_service.py`:
- Around line 29-31: Replace the hard-coded real-looking UUIDs used in the test
constants ENTRY_ID and OWNER_USER_ID with non-production placeholders or
generated test IDs (e.g., use deterministic placeholders like "test-entry-id" /
"test-owner-id" or generate via uuid.uuid4() at test runtime) and ensure
ENTRY_TYPE remains a valid enum value ("photo"/"video"/"audio"/"entry"); update
the test to load these values from environment variables or generate them at
runtime so no real user/entry IDs are committed and tests remain portable and
deterministic.
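A minimal sketch of the constants block after that change; `TEST_ENTRY_ID`/`TEST_OWNER_USER_ID` are the environment variable names suggested later in this review, and the uuid4 fallback keeps the script runnable without committing any real data.

```python
import os
import uuid

# Prefer environment overrides; otherwise generate throwaway IDs at runtime so
# no real user or entry identifiers ever land in the repository.
ENTRY_ID = os.getenv("TEST_ENTRY_ID", str(uuid.uuid4()))
OWNER_USER_ID = os.getenv("TEST_OWNER_USER_ID", str(uuid.uuid4()))
ENTRY_TYPE = "photo"  # must remain one of: "photo", "video", "audio", "entry"
```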
In `@frontend/app/_layout.tsx`:
- Around line 45-74: The Stack.Screen entry for "vault" is misindented compared
to the other <Stack.Screen> elements; open the JSX block containing
<ErrorBoundary> → <GestureHandlerRootView> → <Stack> and align the <Stack.Screen
name="vault" /> line so it matches the same indentation level as the other
<Stack.Screen> lines (e.g., the lines for "invite/[id]" and "calendar") to keep
consistent formatting in _layout.tsx.
In `@frontend/app/friends.tsx`:
- Around line 109-119: The RefreshControl spinner stops too early because
handleRefresh sets setRefreshing(false) immediately after calling
refreshFriends(), which only invalidates queries synchronously; update
refreshFriends (or handleRefresh) to await the actual refetches (e.g., return
Promise.all of the refetch calls or use queryClient.refetchQueries/
invalidateQueries with await) and then await refreshFriends() inside
handleRefresh before calling setRefreshing(false) so the refreshing state
reflects real network completion (refer to handleRefresh, refreshFriends,
setRefreshing, and the relevant query refetch functions).
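A minimal sketch of the awaited-refresh shape, assuming React Query; the query keys and the exact hook wiring are assumptions, not taken from friends.tsx.

```typescript
import type { QueryClient } from '@tanstack/react-query';

// Illustrative only: the query keys are placeholders.
export async function handleRefresh(
  queryClient: QueryClient,
  setRefreshing: (value: boolean) => void,
): Promise<void> {
  setRefreshing(true);
  try {
    // Wait for the actual network refetches (not just cache invalidation) so
    // the RefreshControl spinner only stops once fresh data has arrived.
    await Promise.all([
      queryClient.refetchQueries({ queryKey: ['friends'] }),
      queryClient.refetchQueries({ queryKey: ['friendRequests'] }),
    ]);
  } finally {
    setRefreshing(false);
  }
}
```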
In `@frontend/app/vault.tsx`:
- Around line 110-126: The onViewableItemsChanged callback stored in useRef
captures stale entriesByDate and unseenEntryIds and must be updated to use
current values; change its implementation to either (A) use useCallback with
entriesByDate and unseenEntryIds in the dependency array so
onViewableItemsChanged sees updates, or (B) keep the ref but store live refs for
entriesByDate and unseenEntryIds (e.g., entriesByDateRef.current and
unseenEntryIdsRef.current) and read from those inside onViewableItemsChanged
before calling markEntriesAsSeen; update references to
entriesByDate/unseenEntryIds inside the function accordingly so visibleEntryIds
reflect current state.
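A sketch of option (B) above, keeping the callback identity stable while reading live values through refs; the types are simplified placeholders rather than the app's real entry types.

```typescript
import { useCallback, useEffect, useRef } from 'react';

export function useVisibleEntryTracking(
  entriesByDate: Record<string, { id: string }[]>,
  unseenEntryIds: Set<string>,
  markEntriesAsSeen: (ids: string[]) => void,
) {
  // Live refs so the stable callback always sees current data.
  const entriesByDateRef = useRef(entriesByDate);
  const unseenEntryIdsRef = useRef(unseenEntryIds);

  useEffect(() => {
    entriesByDateRef.current = entriesByDate;
    unseenEntryIdsRef.current = unseenEntryIds;
  }, [entriesByDate, unseenEntryIds]);

  // Stable identity (FlatList expects onViewableItemsChanged not to change),
  // but every read goes through the refs above.
  return useCallback(({ viewableItems }: { viewableItems: { item: string }[] }) => {
    const visibleEntryIds: string[] = [];
    viewableItems.forEach(({ item: dateKey }) => {
      (entriesByDateRef.current[dateKey] ?? []).forEach(entry => {
        if (unseenEntryIdsRef.current.has(entry.id)) {
          visibleEntryIds.push(entry.id);
        }
      });
    });
    if (visibleEntryIds.length > 0) {
      markEntriesAsSeen(visibleEntryIds);
    }
  }, [markEntriesAsSeen]);
}
```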
In `@frontend/lib/utils.ts`:
- Around line 90-98: The grouping uses date.toISOString().slice(0,10) which
creates UTC-day buckets that can differ from the user's local-day display;
change the logic in the block handling (key === 'updated_at' || key ===
'created_at') to produce a local-date key (e.g., build YYYY-MM-DD from
date.getFullYear(), date.getMonth()+1, date.getDate() with zero-padding or use
date.toLocaleDateString with a fixed locale/format) instead of using
toISOString(), so groupKey reflects the local timezone shown in DateContainer
and avoid mismatched buckets for entries near midnight UTC.
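A minimal sketch of the local-date key, assuming the grouping helper receives an ISO timestamp string or a Date:

```typescript
// Build the grouping key from local date parts instead of toISOString(), so
// entries near midnight UTC land in the calendar day the user actually sees.
export function toLocalDateKey(value: string | Date): string {
  const date = value instanceof Date ? value : new Date(value);
  const year = date.getFullYear();
  const month = String(date.getMonth() + 1).padStart(2, '0');
  const day = String(date.getDate()).padStart(2, '0');
  return `${year}-${month}-${day}`; // e.g. "2026-02-02" in the device's timezone
}
```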
In `@frontend/services/search-service.ts`:
- Around line 82-86: The object literal passed to logger.error in SearchService
contains an unintended empty line and a trailing comma; edit the logger.error
call (the object with status: response.status and statusText:
response.statusText) to remove the blank line and trailing comma so the object
is properly formatted (or if you intended another property, add it instead of
leaving the blank line), referencing the logger.error invocation around the
non-OK response handling.
In `@frontend/supabase/migrations/20260202060724_add_invite_code_to_profiles.sql`:
- Around line 38-41: The backfill UPDATE can fail if generate_invite_code()
produces a duplicate because of the UNIQUE constraint; change the migration to
iterate over each profile (SELECT id FROM public.profiles WHERE invite_code IS
NULL), for each row call generate_invite_code() and attempt an UPDATE inside a
retry loop that catches unique_violation and retries up to a max_attempts (e.g.,
10) before raising an error; reference generate_invite_code(), public.profiles,
invite_code and the profile id in the retry logic so each row is processed
individually and collisions are handled safely.
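A sketch of that per-row backfill with collision retries, assuming `profiles.id` is a uuid; names and limits would need to match the actual migration.

```sql
DO $$
DECLARE
  profile_id uuid;
  attempts int;
  max_attempts constant int := 10;
BEGIN
  FOR profile_id IN SELECT id FROM public.profiles WHERE invite_code IS NULL LOOP
    attempts := 0;
    LOOP
      BEGIN
        UPDATE public.profiles
        SET invite_code = generate_invite_code()
        WHERE id = profile_id;
        EXIT;  -- success, move on to the next profile
      EXCEPTION WHEN unique_violation THEN
        attempts := attempts + 1;
        IF attempts >= max_attempts THEN
          RAISE EXCEPTION 'Could not generate a unique invite_code for profile % after % attempts',
            profile_id, max_attempts;
        END IF;
      END;
    END LOOP;
  END LOOP;
END $$;
```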
🧹 Nitpick comments (23)
frontend/components/toast-message.tsx (1)
44-46: Redundant width declarations.
Specifying `left: 0`, `right: 0`, and `width: '100%'` together is redundant. Using `left: 0` and `right: 0` already stretches the element to full width for absolute positioning.
♻️ Proposed fix
 outerContainer: {
   position: 'absolute',
   top: 60,
   left: 0,
   right: 0,
-  width: '100%',
   justifyContent: 'center',
   alignItems: 'center',
   zIndex: 1000,
   paddingHorizontal: 20,
 },
frontend/app/capture/index.tsx (1)
639-650: Inconsistent scaling in `profileButton` and `profileImage` styles.
A few issues with these responsive scaling updates:
- `borderRadius: scale(20)` should be `scale(18)` to match half of the width/height `scale(36)` for a perfect circle.
- `borderWidth: 2` (line 642) is not scaled, unlike the other properties.
- `profileImage.borderRadius: 16` (line 649) remains hardcoded while its parent container uses scaled values, causing inconsistent appearance on different screen sizes.
♻️ Suggested fix for consistency
 profileButton: {
   width: scale(36),
   height: scale(36),
-  borderRadius: scale(20),
-  borderWidth: 2,
+  borderRadius: scale(18),
+  borderWidth: scale(2),
   borderColor: '#8B5CF6',
   padding: scale(2),
 },
 profileImage: {
   width: '100%',
   height: '100%',
-  borderRadius: 16,
+  borderRadius: scale(16),
 },
frontend/services/search-service.ts (1)
162-162: Consider removing or reducing log level of buffer logging.
Logging the buffer at `info` level after successful stream completion appears to be leftover debug instrumentation. The buffer should typically be empty at this point (after flushing at lines 157-159), and if it's not, that might warrant a `warn` instead. Consider removing this line or changing to `logger.debug` if it's still useful during development.
♻️ Option 1: Remove the log
 logger.info('SearchService: stream completed successfully', { url });
-logger.info('SearchService: buffer', { buffer });
 onFinish?.();
♻️ Option 2: Change to debug level
 logger.info('SearchService: stream completed successfully', { url });
-logger.info('SearchService: buffer', { buffer });
+logger.debug('SearchService: remaining buffer after stream', { buffer });
 onFinish?.();
backend/services/notification_service.py (3)
635-687: Consider caching or batching user info lookups to reduce DB load.
`_get_user_info_from_tokens` performs two database queries per call. Since this is called for every enqueued and sent notification event, it could create significant database load at scale. Consider:
- Using the existing `CacheService` pattern already established in `friend_service.py` and `notification_enqueue_service.py`
- Batching these lookups if multiple notifications are processed together
Additionally, per static analysis hint (TRY300): the `return result` on line 683 could be moved to an `else` block for cleaner control flow.
♻️ Suggested improvement for error handling style
         result[token] = {
             "user_id": user_id,
             "email": user_id_to_email.get(user_id)
         }
-
-    return result
-
+    else:
+        return result
 except Exception as e:
     logger.error(f"Error getting user info from tokens: {str(e)}")
     return {}
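A minimal sketch of the caching idea, using a plain in-process TTL map; in the real service this would presumably go through the existing `CacheService`, and the function and variable names here are illustrative.

```python
import time
from typing import Any, Callable, Dict, List, Tuple

_TOKEN_INFO_CACHE: Dict[str, Tuple[float, Dict[str, Any]]] = {}
_TOKEN_INFO_TTL_SECONDS = 300


def get_user_info_cached(
    tokens: List[str],
    fetch_from_db: Callable[[List[str]], Dict[str, Dict[str, Any]]],
) -> Dict[str, Dict[str, Any]]:
    """Return user info per token, hitting the database only for cache misses."""
    now = time.monotonic()
    result: Dict[str, Dict[str, Any]] = {}
    missing: List[str] = []
    for token in tokens:
        cached = _TOKEN_INFO_CACHE.get(token)
        if cached and now - cached[0] < _TOKEN_INFO_TTL_SECONDS:
            result[token] = cached[1]
        else:
            missing.append(token)
    if missing:
        fetched = fetch_from_db(missing)  # one batched round trip for all misses
        for token, info in fetched.items():
            _TOKEN_INFO_CACHE[token] = (now, info)
            result[token] = info
    return result
```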
689-744: Significant code duplication between event capture methods.
`_capture_notification_enqueued_event` and `_capture_notification_sent_event` are nearly identical, differing only in the event name. Consider extracting a common helper method.
♻️ Proposed refactor to reduce duplication
+ def _capture_notification_event( + self, + event_name: str, + metadata: Optional[Dict[str, Any]], + recipients: List[str], + title: str, + body: str, + priority: str + ) -> None: + """Capture PostHog event for notification lifecycle.""" + if not self.posthog_client: + return + + try: + notification_type = metadata.get("notification_type", "unknown") if metadata else "unknown" + token_user_info = self._get_user_info_from_tokens(recipients) + + for token in recipients: + user_info = token_user_info.get(token, {}) + user_id = user_info.get("user_id") + email = user_info.get("email") + + properties: Dict[str, Any] = { + "notification_type": notification_type, + "title": title, + "body": body, + "priority": priority, + "recipient_count": len(recipients), + } + + if user_id: + properties["user_id"] = user_id + if email: + properties["email"] = email + + if metadata: + for key, value in metadata.items(): + if key not in properties and key != "notification_type": + properties[f"metadata_{key}"] = value + + distinct_id = user_id if user_id else f"token_{token[:8]}" + + self.posthog_client.capture( + distinct_id=distinct_id, + event=event_name, + properties=properties + ) + + self.posthog_client.flush() + + except Exception as e: + logger.exception(f"Error capturing {event_name} event to PostHog") + def _capture_notification_enqueued_event( self, metadata: Optional[Dict[str, Any]], recipients: List[str], title: str, body: str, priority: str ) -> None: """Capture PostHog event when notification is enqueued.""" - if not self.posthog_client: - return - - try: - notification_type = metadata.get("notification_type", "unknown") if metadata else "unknown" - - # Get user info from tokens - token_user_info = self._get_user_info_from_tokens(recipients) - - # Capture event for each recipient - for token in recipients: - user_info = token_user_info.get(token, {}) - user_id = user_info.get("user_id") - email = user_info.get("email") - - properties: Dict[str, Any] = { - "notification_type": notification_type, - "title": title, - "body": body, - "priority": priority, - "recipient_count": len(recipients), - } - - if user_id: - properties["user_id"] = user_id - if email: - properties["email"] = email - - # Add any additional metadata - if metadata: - for key, value in metadata.items(): - if key not in properties and key != "notification_type": - properties[f"metadata_{key}"] = value - - # Use user_id as distinct_id if available, otherwise use token - distinct_id = user_id if user_id else f"token_{token[:8]}" - - self.posthog_client.capture( - distinct_id=distinct_id, - event="notification_enqueued", - properties=properties - ) - - self.posthog_client.flush() - - except Exception as e: - logger.error(f"Error capturing notification_enqueued event to PostHog: {str(e)}") + self._capture_notification_event("notification_enqueued", metadata, recipients, title, body, priority)Apply similar change to
`_capture_notification_sent_event`.
Also applies to: 746-801
743-744: Use `logger.exception` instead of `logger.error` with string interpolation.
Per static analysis hints (TRY400, RUF010), `logger.exception` automatically includes the traceback and is preferred for error logging in exception handlers.
♻️ Proposed fix
 except Exception as e:
-    logger.error(f"Error capturing notification_enqueued event to PostHog: {str(e)}")
+    logger.exception("Error capturing notification_enqueued event to PostHog")
Apply similarly on line 801.
Also applies to: 800-801
backend/test_enqueue_service.py (3)
1-14: Consider moving the test file to the `tests/` directory.
This file is located in the `backend/` root rather than `backend/tests/`. Following Python/pytest conventions, test files should be in a dedicated `tests/` directory. The existing `backend/tests/test_notification_enqueue_service_integration.py` suggests a `tests/` directory already exists.
If this is intended as a manual testing/debugging script rather than an automated test, consider renaming it to something like `scripts/manual_test_enqueue.py` to clarify its purpose.
84-84: Remove extraneous f-string prefix.
Per static analysis (F541), line 84 uses an f-string without any placeholders.
♻️ Proposed fix
-    print(f"\nEntry Configuration:")
+    print("\nEntry Configuration:")
93-94: Potential `TypeError` if `TEXT_CONTENT` is `None`.
Line 94 calls `len(TEXT_CONTENT)`, but `TEXT_CONTENT` is initialized to `None` on line 48. If this branch is reached when `TEXT_CONTENT` is truthy but somehow set to a non-string, or if the conditional on line 93 is modified, this could raise a `TypeError`.
The current code is technically safe because `if TEXT_CONTENT:` guards it, but the nested conditional makes this fragile.
♻️ Suggested simplification
 if TEXT_CONTENT:
-    print(f"  Text Content: {TEXT_CONTENT[:50]}..." if len(TEXT_CONTENT) > 50 else f"  Text Content: {TEXT_CONTENT}")
+    display_text = TEXT_CONTENT[:50] + "..." if len(str(TEXT_CONTENT)) > 50 else TEXT_CONTENT
+    print(f"  Text Content: {display_text}")
frontend/app/settings/blocked-users.tsx (1)
23-23: Mixed styling approach may cause confusion.
This line uses both `className` (Tailwind/NativeWind) and the `style` prop simultaneously. While this can work, it's inconsistent with the rest of this file, which exclusively uses `StyleSheet`. The `className` props (`flex flex-row items-center justify-between px-5 py-4`) overlap with what's defined in `styles.header`.
Consider using only the StyleSheet approach for consistency:
🔧 Suggested fix
-<View className="flex flex-row items-center justify-between px-5 py-4" style={styles.header}>
+<View style={styles.header}>
frontend/components/new-entries-indicator.tsx (2)
33-44: Hardcoded `top: 80` may cause positioning issues across devices.
The fixed `top: 80` value doesn't account for varying safe area insets or header heights across different devices. This could cause the indicator to overlap with the status bar or header on some devices. Consider using a relative position or accepting the top offset as a prop for flexibility.
🔧 Suggested improvement
+import { useSafeAreaInsets } from 'react-native-safe-area-context'; + +interface NewEntriesIndicatorProps { + count: number; + onPress: () => void; + visible: boolean; + topOffset?: number; +} + -export default function NewEntriesIndicator({ count, onPress, visible }: NewEntriesIndicatorProps) { +export default function NewEntriesIndicator({ count, onPress, visible, topOffset }: NewEntriesIndicatorProps) { + const insets = useSafeAreaInsets(); + const top = topOffset ?? (insets.top + 40); // ... rest of component return ( <Animated.View // ... - style={styles.container} + style={[styles.container, { top }]} >And remove
`top: 80` from the StyleSheet.
58-63: Typography inconsistency with PR pattern.
This file uses `fontWeight: '600'` while other files in this PR use explicit `fontFamily` values (e.g., 'Outfit-SemiBold'). For consistency with the typography updates across the codebase, consider using the font family approach.
🔧 Suggested fix
 text: {
   color: 'white',
   fontSize: 14,
-  fontWeight: '600',
+  fontFamily: 'Outfit-SemiBold',
   textAlign: 'center',
 },
frontend/components/error-boundary.tsx (1)
20-49: Consider adding `deviceInfo` to the dependency array or memoizing it.
The `useEffect` references `deviceInfo` but only lists `[error]` in the dependency array. While `deviceInfo` is computed fresh each render (so no stale closure issue), this pattern may trigger ESLint's `react-hooks/exhaustive-deps` rule.
Since `getDeviceInfo()` is called outside the effect, consider either:
- Moving the call inside the effect, or
- Memoizing it if device info is expensive to compute
🔧 Option 1: Move inside effect
function ErrorFallback({ error, resetError }: ErrorFallbackProps) { - // Collect device information - const deviceInfo = getDeviceInfo(); - // Capture error to PostHog React.useEffect(() => { + const deviceInfo = getDeviceInfo(); try { // ... rest of effect } // Log error locally logger.error('ErrorBoundary caught an error:', { error, deviceInfo, }); }, [error]); + + // For dev display, compute separately + const deviceInfo = __DEV__ ? getDeviceInfo() : {};frontend/hooks/use-user-entries.ts (2)
95-101: Static analysis: forEach callback should not return a value.
`Set.delete()` returns a boolean, which is implicitly returned from the arrow function. While functionally harmless, Biome flags this as suspicious.
🔧 Explicit void return
setUnseenEntryIds(prev => { const updated = new Set(prev); - entryIds.forEach(id => updated.delete(id)); + entryIds.forEach(id => { updated.delete(id); }); return updated; });
226-232: Async operations inside `forEach` are fire-and-forget.
Using `async` callbacks with `forEach` doesn't await each iteration—all `addEntry` calls execute in parallel without waiting for completion. While this may be acceptable here given the deduplication in `deviceStorage.addEntry`, any errors won't be properly aggregated.
Consider using `Promise.all` for explicit parallel execution with proper error handling:
♻️ Suggested refactor
- // Save to device storage - missedEntries.forEach(async (entry: EntryWithProfile) => { - try { - await deviceStorage.addEntry(user.id, entry); - } catch (storageError) { - console.error('Error saving missed entry to storage:', storageError); - } - }); + // Save to device storage + await Promise.all( + missedEntries.map(async (entry: EntryWithProfile) => { + try { + await deviceStorage.addEntry(user.id, entry); + } catch (storageError) { + console.error('Error saving missed entry to storage:', storageError); + } + }) + );frontend/services/device-storage.ts (1)
129-140: Good deduplication logic, but logging format is inconsistent.The deduplication check is correct. However, the
logger.infocall passesentry.idas a separate argument rather than in the meta object, making the log output inconsistent with other logger calls.🔧 Fix logging format
// Deduplicate: Check if entry already exists if (existingEntries.some(e => e.id === entry.id)) { - logger.info('DeviceStorage: Entry already exists in storage, skipping:', entry.id); + logger.info('DeviceStorage: Entry already exists in storage, skipping', { entryId: entry.id }); return; },
frontend/app/vault.tsx (1)
247-253: Redundantvisibleprop.The
visibleprop is redundant since the component is already conditionally rendered whenunseenEntryIds.size > 0. UnlessNewEntriesIndicatoruses this prop for internal animation logic, it can be removed.♻️ Suggested simplification
{unseenEntryIds.size > 0 && ( <NewEntriesIndicator count={unseenEntryIds.size} onPress={scrollToTop} - visible={unseenEntryIds.size > 0} /> )}frontend/supabase/migrations/20260202060724_add_invite_code_to_profiles.sql (1)
5-36: Duplicate function definitions with earlier migration.
The `generate_invite_code()` and `set_profile_invite_code()` functions are already defined in `20250825200450_misty_boat.sql`. While `CREATE OR REPLACE` makes this safe, having duplicate definitions can lead to maintenance confusion.
Consider either:
- Removing these definitions from this migration (relying on the earlier one)
- Consolidating all invite code logic into this later migration and removing from the earlier one
frontend/hooks/use-invite-acceptance.ts (1)
84-86: Consider explicit return value for clarity.
The early return at line 85 implicitly returns `undefined`. While the `enabled` flag guards against this, an explicit `return null` would make the intent clearer.
frontend/services/friend-service.ts (2)
253-259: Unnecessary `refreshSession()` before `getSession()`.
Calling `refreshSession()` unconditionally before every API call forces a token refresh even when the session is still valid. The Supabase client automatically refreshes tokens when needed. This adds unnecessary latency and network overhead.
♻️ Consider removing the explicit refresh
- await supabase.auth.refreshSession(); const { data: { session } } = await supabase.auth.getSession(); if (!session) { logger.error('No Active Session Found'); throw new Error('No session found'); }
156-177: Null check for `data` comes after it's already used.
At line 156, `data?.map()` is called, but the `if (!data) return []` guard at line 173 comes after the mapping. While the optional chaining prevents a crash, the logic order is confusing and the null check becomes redundant.
♻️ Move the null check before mapping
}) as { data: any[], error: any, }; + if (!data) return []; + + if (error) { + throw new Error(error.message); + } + const friends: FriendWithProfile[] = data?.map(friend => { // ... mapping logic }) - if (!data) return []; - - if (error) { - throw new Error(error.message); - }frontend/app/friends.tsx (2)
40-40: `refetchSuggestedFriends` is destructured but never used.
This variable is extracted from `useSuggestedFriends` but not utilized. If the intent was to refresh suggested friends alongside the friends list, it should be called in `handleRefresh`.
68-79: Consider keeping the toast as a fallback when accepting a friend request.
On success, only a push notification is sent. If the user has notifications disabled, there's no visible feedback that the action succeeded. Consider keeping the toast as a backup.
💡 Add toast alongside notification
const handleAcceptRequest = async (friendshipId: string) => { const result = await acceptFriendRequest(friendshipId); if (result.success) { await LocalNotificationService.sendNotification({ title: 'Friend Request Accepted', body: 'You are now friends!', sound: true, }); + showToast('Friend request accepted!', 'success'); } else { showToast(result.error || 'Failed to accept request', 'error'); } };
ENTRY_ID = "018eabe2-222d-4d14-974b-771b490f54c2"  # Replace with a real entry ID
OWNER_USER_ID = "64cbe7e2-1fd3-4370-8832-e3f96e010a88"  # Replace with a real user ID who owns the entry
ENTRY_TYPE = "photo"  # Options: "photo", "video", "audio", or "entry"
Avoid committing real user/entry IDs to the repository.
These appear to be real UUIDs from a database. Committing real data:
- Could leak information about your data structure/users
- Will cause test failures in other environments
- May violate privacy expectations
Consider using placeholder values with clear instructions, or loading test data from environment variables.
🛡️ Suggested fix
-ENTRY_ID = "018eabe2-222d-4d14-974b-771b490f54c2" # Replace with a real entry ID
-OWNER_USER_ID = "64cbe7e2-1fd3-4370-8832-e3f96e010a88" # Replace with a real user ID who owns the entry
+ENTRY_ID = os.getenv("TEST_ENTRY_ID", "your-entry-id-here") # Set via environment or replace
+OWNER_USER_ID = os.getenv("TEST_OWNER_USER_ID", "your-user-id-here") # Set via environment or replace
ENTRY_TYPE = "photo" # Options: "photo", "video", "audio", or "entry"
# Sharing configuration
# Option 1: Share with specific users (provide their user IDs)
SHARED_WITH_USER_IDS = [
- "d745ed21-5667-4595-bf76-8505ccf63d8b"
+ # Add user IDs here or set TEST_SHARED_WITH_USER_IDS env var
# "user-id-2",
]Also applies to: 35-38
🤖 Prompt for AI Agents
In `@backend/test_enqueue_service.py` around lines 29 - 31, Replace the hard-coded
real-looking UUIDs used in the test constants ENTRY_ID and OWNER_USER_ID with
non-production placeholders or generated test IDs (e.g., use deterministic
placeholders like "test-entry-id" / "test-owner-id" or generate via uuid.uuid4()
at test runtime) and ensure ENTRY_TYPE remains a valid enum value
("photo"/"video"/"audio"/"entry"); update the test to load these values from
environment variables or generate them at runtime so no real user/entry IDs are
committed and tests remain portable and deterministic.
| <ErrorBoundary> | ||
| <Host> | ||
| <SafeAreaProvider> | ||
| <QueryProvider> | ||
| <AuthProvider> | ||
| <ToastProvider> | ||
| <GestureHandlerRootView style={{ flex: 1 }}> | ||
| <Stack screenOptions={{ headerShown: false }}> | ||
| <Stack.Screen name="onboarding" /> | ||
| <Stack.Screen name="index" /> | ||
| <Stack.Screen name="capture" /> | ||
| <Stack.Screen name="invite/[id]" /> | ||
| <Stack.Screen name="vault" /> | ||
| <Stack.Screen name="calendar" /> | ||
| <Stack.Screen name="calendar/day" /> | ||
| <Stack.Screen name="social" /> | ||
| <Stack.Screen name="friends" /> | ||
| <Stack.Screen name="dreamscape" /> | ||
| <Stack.Screen name="search" /> | ||
| <Stack.Screen name="capture/details" /> | ||
| <Stack.Screen name="+not-found" /> | ||
| </Stack> | ||
| <StatusBar style="dark" /> | ||
| </GestureHandlerRootView> | ||
| </ToastProvider> | ||
| </AuthProvider> | ||
| </QueryProvider> | ||
| </SafeAreaProvider> | ||
| </Host> | ||
| </ErrorBoundary> |
Good addition of global error boundary.
The ErrorBoundary placement at the root level is correct and will catch errors from the entire component tree, including all providers. This aligns well with the new react-native-error-boundary integration.
Note: Line 57 has inconsistent indentation compared to surrounding Stack.Screen elements.
🔧 Suggested fix for indentation
<Stack.Screen name="invite/[id]" />
- <Stack.Screen name="vault" />
+ <Stack.Screen name="vault" />
<Stack.Screen name="calendar" />,
🤖 Prompt for AI Agents
In `@frontend/app/_layout.tsx` around lines 45 - 74, The Stack.Screen entry for
"vault" is misindented compared to the other <Stack.Screen> elements; open the
JSX block containing <ErrorBoundary> → <GestureHandlerRootView> → <Stack> and
align the <Stack.Screen name="vault" /> line so it matches the same indentation
level as the other <Stack.Screen> lines (e.g., the lines for "invite/[id]" and
"calendar") to keep consistent formatting in _layout.tsx.
logger.error('SearchService: non-OK response', {
  status: response.status,
  statusText: response.statusText,

});
Empty line inside object literal suggests incomplete edit.
Line 85 contains only whitespace before the closing brace, and the trailing comma on line 84 suggests additional properties may have been intended but weren't added. If this was intentional, remove the empty line and trailing comma for cleaner formatting.
🧹 Proposed fix
 logger.error('SearchService: non-OK response', {
   status: response.status,
   statusText: response.statusText,
-
 });
🤖 Prompt for AI Agents
In `@frontend/services/search-service.ts` around lines 82 - 86, The object literal
passed to logger.error in SearchService contains an unintended empty line and a
trailing comma; edit the logger.error call (the object with status:
response.status and statusText: response.statusText) to remove the blank line
and trailing comma so the object is properly formatted (or if you intended
another property, add it instead of leaving the blank line), referencing the
logger.error invocation around the non-OK response handling.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@backend/services/notification_service.py`:
- Around line 689-742: In _capture_notification_enqueued_event, remove the
unused local assignment email = user_info.get("email") (from the loop that
iterates over recipients and uses token_user_info) so the variable isn't created
when it's not used; if the email is intended to be tracked, instead add it to
the properties dict (e.g., properties["email"] = user_info.get("email")) or
otherwise drop the assignment entirely to eliminate the lint warning.
- Around line 248-256: The synchronous PostHog I/O in
_capture_notification_sent_event is being called directly inside the async
_process_message (which is used by process_queue with asyncio.gather), blocking
the event loop; change the call site to offload the synchronous work to a thread
using asyncio.to_thread, e.g. replace the direct call to
self._capture_notification_sent_event(...) with awaiting
asyncio.to_thread(self._capture_notification_sent_event, metadata, recipients,
title, body, priority) so the blocking capture()/flush() runs off the event loop
(if you intentionally want fire-and-forget instead, wrap the to_thread call in
asyncio.create_task(asyncio.to_thread(...)) rather than awaiting it).
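A minimal sketch of the offloading shape described above; the method and parameter names mirror the prompt, and the surrounding class is reduced to just the relevant pieces.

```python
import asyncio


class NotificationService:
    def _capture_notification_sent_event(self, metadata, recipients, title, body, priority) -> None:
        ...  # synchronous PostHog capture()/flush() happens here

    async def _process_message(self, metadata, recipients, title, body, priority) -> None:
        # Run the blocking analytics call in a worker thread so the event loop
        # driving asyncio.gather() in process_queue is never blocked.
        await asyncio.to_thread(
            self._capture_notification_sent_event,
            metadata, recipients, title, body, priority,
        )
        # Fire-and-forget variant, if sends should not wait on analytics:
        # asyncio.create_task(asyncio.to_thread(
        #     self._capture_notification_sent_event,
        #     metadata, recipients, title, body, priority,
        # ))
```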
🧹 Nitpick comments (1)
frontend/app/vault.tsx (1)
103-126: De-duplicate visible IDs before marking seen.
This avoids redundant updates if duplicate IDs appear in `viewableItems` or entries.
♻️ Proposed change
- const onViewableItemsChanged = useCallback(({ viewableItems }: { viewableItems: any[] }) => { - const visibleEntryIds: string[] = []; + const onViewableItemsChanged = useCallback(({ viewableItems }: { viewableItems: any[] }) => { + const visibleEntryIds = new Set<string>(); viewableItems.forEach(item => { const dateKey = item.item; const dateEntries = entriesByDate?.[dateKey] || []; dateEntries.forEach((entry: any) => { if (unseenEntryIds.has(entry.id)) { - visibleEntryIds.push(entry.id); + visibleEntryIds.add(entry.id); } }); }); - if (visibleEntryIds.length > 0) { - markEntriesAsSeen(visibleEntryIds); + if (visibleEntryIds.size > 0) { + markEntriesAsSeen([...visibleEntryIds]); } }, [entriesByDate, unseenEntryIds, markEntriesAsSeen]);
| def _capture_notification_enqueued_event( | ||
| self, | ||
| metadata: Optional[Dict[str, Any]], | ||
| recipients: List[str], | ||
| title: str, | ||
| body: str, | ||
| priority: str | ||
| ) -> None: | ||
| """Capture PostHog event when notification is enqueued.""" | ||
| if not self.posthog_client: | ||
| return | ||
|
|
||
| try: | ||
| notification_type = metadata.get("notification_type", "unknown") if metadata else "unknown" | ||
|
|
||
| # Get user info from tokens | ||
| token_user_info = self._get_user_info_from_tokens(recipients) | ||
|
|
||
| # Capture event for each recipient | ||
| for token in recipients: | ||
| user_info = token_user_info.get(token, {}) | ||
| user_id = user_info.get("user_id") | ||
| email = user_info.get("email") | ||
|
|
||
| properties: Dict[str, Any] = { | ||
| "notification_type": notification_type, | ||
| "title": title, | ||
| "body": body, | ||
| "priority": priority, | ||
| "recipient_count": len(recipients), | ||
| } | ||
|
|
||
| if user_id: | ||
| properties["user_id"] = user_id | ||
|
|
||
| # Add any additional metadata | ||
| if metadata: | ||
| for key, value in metadata.items(): | ||
| if key not in properties and key != "notification_type": | ||
| properties[f"metadata_{key}"] = value | ||
|
|
||
| # Use user_id as distinct_id if available, otherwise use token | ||
| distinct_id = user_id if user_id else f"token_{token[:8]}" | ||
|
|
||
| self.posthog_client.capture( | ||
| distinct_id=distinct_id, | ||
| event="notification_enqueued", | ||
| properties=properties | ||
| ) | ||
|
|
||
| self.posthog_client.flush() | ||
|
|
||
| except Exception as e: | ||
| logger.error(f"Error capturing notification_enqueued event to PostHog: {str(e)}") |
Remove the unused email variable in enqueue capture.
Line 711 assigns email but it’s never used, which triggers the lint warning and obscures intent. Drop the assignment (or use it intentionally).
🧹 Remove unused assignment
-email = user_info.get("email")
🧰 Tools
🪛 Ruff (0.14.14)
[error] 711-711: Local variable email is assigned to but never used
Remove assignment to unused variable email
(F841)
[warning] 741-741: Do not catch blind exception: Exception
(BLE001)
[warning] 742-742: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
[warning] 742-742: Use explicit conversion flag
Replace with conversion flag
(RUF010)
🤖 Prompt for AI Agents
In `@backend/services/notification_service.py` around lines 689 - 742, In
_capture_notification_enqueued_event, remove the unused local assignment email =
user_info.get("email") (from the loop that iterates over recipients and uses
token_user_info) so the variable isn't created when it's not used; if the email
is intended to be tracked, instead add it to the properties dict (e.g.,
properties["email"] = user_info.get("email")) or otherwise drop the assignment
entirely to eliminate the lint warning.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@frontend/app/friends.tsx`:
- Around line 68-75: The accept flow in handleAcceptRequest calls
acceptFriendRequest then awaits LocalNotificationService.sendNotification which
can reject and erroneously surface as a failure; wrap the sendNotification call
in a try/catch (inside handleAcceptRequest) and swallow or log the error (e.g.,
console.error or app logger) so notification failures do not break the
successful acceptFriendRequest flow, keeping the success branch behavior intact.
- Around line 121-127: The effect calls refreshFriends() without handling
promise rejections; wrap the call in an async-safe handler by creating an inner
async function (or call refreshFriends().catch(...)) inside the useEffect and
await the call, then catch and log or handle errors to prevent unhandled
rejections — reference the useEffect block that reads refresh/refreshParam and
invokes refreshFriends; ensure the guard still checks refreshParam === 'true'
before awaiting and catching refreshFriends()'s errors.
const handleAcceptRequest = async (friendshipId: string) => {
  const result = await acceptFriendRequest(friendshipId);
  if (result.success) {
    showToast('Friend request accepted', 'success');
    await LocalNotificationService.sendNotification({
      title: 'Friend Request Accepted',
      body: 'You are now friends!',
      sound: true,
    });
Avoid failing the accept flow if notification dispatch fails.
sendNotification can reject and would currently surface as an error despite a successful accept. Consider swallowing/logging the notification failure.
Proposed fix
if (result.success) {
- await LocalNotificationService.sendNotification({
- title: 'Friend Request Accepted',
- body: 'You are now friends!',
- sound: true,
- });
+ try {
+ await LocalNotificationService.sendNotification({
+ title: 'Friend Request Accepted',
+ body: 'You are now friends!',
+ sound: true,
+ });
+ } catch (error) {
+ logger.warn('Failed to send accept notification:', error);
+ }
} else {
showToast(result.error || 'Failed to accept request', 'error');
}
🤖 Prompt for AI Agents
In `@frontend/app/friends.tsx` around lines 68 - 75, The accept flow in
handleAcceptRequest calls acceptFriendRequest then awaits
LocalNotificationService.sendNotification which can reject and erroneously
surface as a failure; wrap the sendNotification call in a try/catch (inside
handleAcceptRequest) and swallow or log the error (e.g., console.error or app
logger) so notification failures do not break the successful acceptFriendRequest
flow, keeping the success branch behavior intact.
| // Check for refresh param and call refreshFriends if present | ||
| useEffect(() => { | ||
| const refreshParam = Array.isArray(refresh) ? refresh[0] : refresh; | ||
| if (refreshParam === 'true') { | ||
| refreshFriends(); | ||
| } | ||
| }, [refresh, refreshFriends]); |
Handle refresh promise rejections to avoid unhandled errors.
refreshFriends() is invoked without awaiting or catching in the effect. Add a guard to prevent unhandled rejections.
Proposed fix
useEffect(() => {
const refreshParam = Array.isArray(refresh) ? refresh[0] : refresh;
if (refreshParam === 'true') {
- refreshFriends();
+ void refreshFriends().catch(error => {
+ logger.warn('Error refreshing friends data from param:', error);
+ });
}
}, [refresh, refreshFriends]);🤖 Prompt for AI Agents
In `@frontend/app/friends.tsx` around lines 121 - 127, The effect calls
refreshFriends() without handling promise rejections; wrap the call in an
async-safe handler by creating an inner async function (or call
refreshFriends().catch(...)) inside the useEffect and await the call, then catch
and log or handle errors to prevent unhandled rejections — reference the
useEffect block that reads refresh/refreshParam and invokes refreshFriends;
ensure the guard still checks refreshParam === 'true' before awaiting and
catching refreshFriends()'s errors.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@frontend/supabase/migrations/20260202060724_add_invite_code_to_profiles.sql`:
- Around line 6-17: The generate_invite_code() function uses PostgreSQL
random(), which is not cryptographically secure; replace its implementation to
use pgcrypto's gen_random_bytes() and ensure the extension is enabled with
CREATE EXTENSION IF NOT EXISTS pgcrypto;; update the function
generate_invite_code to derive 8 characters by mapping secure random bytes to
the allowed charset (A–Z a–z 0–9) (e.g., use get_byte(gen_random_bytes(n), i) %
62 to index the charset) instead of substring(... FROM
(ceil(random()*62))::int), preserving the function name and return type.
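A sketch of the charset-mapping variant described in that prompt (the committed fix may instead use the base64 approach shown later in this thread); note that the `% 62` mapping carries a small modulo bias, which is generally acceptable for invite codes.

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE OR REPLACE FUNCTION generate_invite_code()
RETURNS text AS $$
DECLARE
  chars constant text := 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
  random_bytes bytea := gen_random_bytes(8);
  result text := '';
BEGIN
  -- Map each cryptographically secure random byte onto the 62-character alphabet.
  FOR i IN 0..7 LOOP
    result := result || substr(chars, (get_byte(random_bytes, i) % 62) + 1, 1);
  END LOOP;
  RETURN result;
END;
$$ LANGUAGE plpgsql VOLATILE;
```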
🧹 Nitpick comments (1)
frontend/supabase/migrations/20260202060724_add_invite_code_to_profiles.sql (1)
1-3: Ensure uniqueness is enforced even if the column already exists.
`ADD COLUMN IF NOT EXISTS ... UNIQUE` skips the UNIQUE constraint when the column already exists, which can leave `invite_code` non-unique on some environments. Consider adding a separate unique index to guarantee enforcement.
🔧 Suggested adjustment
 ALTER TABLE public.profiles
-ADD COLUMN IF NOT EXISTS invite_code text UNIQUE;
+ADD COLUMN IF NOT EXISTS invite_code text;
+
+CREATE UNIQUE INDEX IF NOT EXISTS profiles_invite_code_key
+  ON public.profiles (invite_code);
CREATE OR REPLACE FUNCTION generate_invite_code()
RETURNS text AS $$
BEGIN
  RETURN array_to_string(
    ARRAY(
      SELECT substring('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'
        FROM (ceil(random()*62))::int FOR 1)
      FROM generate_series(1, 8)
    ), ''
  );
END;
$$ LANGUAGE plpgsql VOLATILE;
🧩 Analysis chain
🏁 Script executed:
# Search for the migration file mentioned in the review
find . -name "*20260202060724*" -o -name "*add_invite_code_to_profiles*"
🏁 Script executed:
# Search for migration files in the frontend directory
fd -e sql . frontend/supabase/migrations/ 2>/dev/null | head -20
🏁 Script executed:
# Also check if there are any migration files in other locations
fd -e sql . --type f | grep -i migrat | head -20
🏁 Script executed:
# Read the migration file to verify the code
cat -n ./frontend/supabase/migrations/20260202060724_add_invite_code_to_profiles.sql
🌐 Web query:
Supabase PostgreSQL pgcrypto extension availability
💡 Result:
Yes—pgcrypto is available on Supabase Postgres (managed and self-hosted). Supabase lists pgcrypto as a supported/pre-installed Postgres extension, and extensions can be enabled/disabled from the Dashboard (Database → Extensions) or via SQL. (supabase.com)
To enable via SQL:
create extension if not exists pgcrypto;
🌐 Web query:
PostgreSQL random() function security cryptographic invite codes
💡 Result:
Is random() safe for invite codes?
No. PostgreSQL’s random() uses a deterministic PRNG (older docs describe it as a linear congruential generator) and is explicitly “not suitable for cryptographic applications.” Use pgcrypto instead. [1]
What to use instead (cryptographic)
Enable pgcrypto and generate bytes from a CSPRNG:
- `gen_random_bytes(n)` returns cryptographically strong random bytes. [2]
- `gen_random_uuid()` returns a v4 UUID (random); `pgcrypto` notes it calls the core function. [2]
Practical invite-code pattern (URL-safe)
128 bits of entropy (16 bytes) is a common baseline.
CREATE EXTENSION IF NOT EXISTS pgcrypto;
-- 22-char base64url-ish token (~128 bits), strip "=" padding
SELECT rtrim(translate(encode(gen_random_bytes(16), 'base64'), '+/', '-_'), '=')
  AS invite_code;
Still do this for safety
Even with strong randomness, enforce uniqueness and retry on rare collisions:
ALTER TABLE invites ADD CONSTRAINT invites_code_uniq UNIQUE (invite_code);
Sources:
[1] PostgreSQL math functions docs: random() is not suitable for cryptographic applications. https://www.postgresql.org/docs/16/functions-math.html
[2] PostgreSQL pgcrypto docs: gen_random_bytes is cryptographically strong; gen_random_uuid. https://www.postgresql.org/docs/current/pgcrypto.html
🌐 Web query:
PostgreSQL gen_random_bytes pgcrypto performance invite code generation
💡 Result:
Using gen_random_bytes() for invite-code generation (and what to expect performance-wise)
What it does
- `pgcrypto.gen_random_bytes(n)` returns cryptographically strong random bytes and is limited to 1024 bytes per call (to avoid draining the RNG pool). [1]
- `pgcrypto` is built/available only when PostgreSQL was built with OpenSSL support. [1]
Performance reality (practical guidance)
- For invite codes you typically request small outputs (8–32 bytes). At these sizes, the cost is usually dominated by the INSERT + unique-index check + WAL, not by generating the randomness.
- If you generate codes in large bursts (e.g.,
INSERT … SELECT generate_series(...)) and see CPU time climb, it can be either RNG cost or (more often) contention on the unique index due to retries on collisions—so design for “generate then enforce uniqueness” efficiently.
Recommended pattern (URL-safe, compact, low collision risk)
22-char URL-safe code (~128 bits)
-- 16 bytes = 128 bits; base64 is 24 chars with "==", so strip padding => 22 chars
SELECT rtrim(
translate(encode(gen_random_bytes(16), 'base64'), '+/', '-_'),
'='
) AS invite_code;
Enforce uniqueness
ALTER TABLE invites ADD CONSTRAINT invites_code_uk UNIQUE (invite_code);
Insert with automatic retry (one simple approach)
INSERT INTO invites (invite_code, created_at)
SELECT code, now()
FROM (
SELECT rtrim(translate(encode(gen_random_bytes(16),'base64'),'+/','-_'),'=') AS code
) s
ON CONFLICT (invite_code) DO NOTHING
RETURNING *;
If you tell me your target length (e.g., 8/10/12 chars) and expected volume (codes/day, peak QPS), I can suggest the smallest byte-length that still keeps collision probability negligible and avoids excess retries.
Sources: PostgreSQL pgcrypto docs (gen_random_bytes, limits, OpenSSL requirement). [1]
Use cryptographically secure randomness for invite codes.
PostgreSQL's random() is not suitable for cryptographic applications and should not be used for generating security-sensitive invite codes. Replace it with pgcrypto's gen_random_bytes() to ensure proper entropy.
pgcrypto is available and pre-installed on Supabase; enable it via CREATE EXTENSION IF NOT EXISTS pgcrypto;
Proposed fix (pgcrypto)
+CREATE EXTENSION IF NOT EXISTS pgcrypto;
+
CREATE OR REPLACE FUNCTION generate_invite_code()
RETURNS text AS $$
+DECLARE
+ raw text;
BEGIN
- RETURN array_to_string(
- ARRAY(
- SELECT substring('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'
- FROM (ceil(random()*62))::int FOR 1)
- FROM generate_series(1, 8)
- ), ''
- );
+ raw := encode(gen_random_bytes(6), 'base64');
+ raw := translate(raw, '+/', 'Aa');
+ RETURN raw;
END;
$$ LANGUAGE plpgsql VOLATILE;
🤖 Prompt for AI Agents
In `@frontend/supabase/migrations/20260202060724_add_invite_code_to_profiles.sql`
around lines 6 - 17, The generate_invite_code() function uses PostgreSQL
random(), which is not cryptographically secure; replace its implementation to
use pgcrypto's gen_random_bytes() and ensure the extension is enabled with
CREATE EXTENSION IF NOT EXISTS pgcrypto;; update the function
generate_invite_code to derive 8 characters by mapping secure random bytes to
the allowed charset (A–Z a–z 0–9) (e.g., use get_byte(gen_random_bytes(n), i) %
62 to index the charset) instead of substring(... FROM
(ceil(random()*62))::int), preserving the function name and return type.
Summary by CodeRabbit
New Features
Improvements