Fix export: paginate friendships, defer cleanup, guard unmount in download #56
fortune710 merged 19 commits into fortune710:dev from
Conversation
Added Places Search for API and Fixed Editor Popover Heights
Added User Identification
Added Notifications for Friend Request
Added App Logo, Removed Support for Tablet and Changed LLM
Feat: Added Custom Fonts and UI Changes
Completed Friends Page UI Update and Daily Notifications
Added Improvements for Version 0.9.7
Added OTP Verification for Phone Number Updates
Notification Updates, Font Improvements and OTP Feature for Phone Number Update
…and HTML formats, including status polling and download handling. Update dependencies in package.json and bun.lock for improved performance.
… management, improved error handling, and pagination for entry fetching. Update dependencies in package.json and bun.lock for better performance.
…rt and add background cleanup for export files after download. Enhance error handling in the frontend export process.
…xport process. Log specific errors for malformed timestamps and sign-out failures.
Implement async user data export queue and move export/delete to Privacy screen
… to Privacy screen"
Revert "Implement async user data export queue and move export/delete to Privacy screen"
…ination for entries and friendships, and improved error handling. Add export status polling and download capabilities in the frontend.
@MaestroDev19 is attempting to deploy a commit to the fortune710's projects Team on Vercel. A member of the Team first needs to authorize it.
Caution: Review failed. The pull request is closed.
🚥 Pre-merge checks: ✅ 2 | ❌ 2
❌ Failed checks (2 warnings)
✅ Passed checks (2 passed)
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@backend/routers/user.py`:
- Around line 324-344: The background task _run_export_job can KeyError if
export_jobs[job_id] was pruned concurrently and also currently binds exc without
using it; fix by always reading export_jobs.get(job_id) into a local variable
before mutating and only update if that dict exists (use job =
export_jobs.get(job_id) and guard each update/write), and in the except block
keep "except Exception as exc:" but use exc when setting the job error (e.g.
job["error"] = str(exc)) or at least include exc in the guarded update so the
variable is used; ensure all writes to export_jobs reference the guarded job
variable rather than indexing export_jobs[job_id] directly.
- Around line 47-50: Replace the broad except that swallows
datetime.fromisoformat failures with a ValueError handler that logs a warning
including the malformed reference_time_str and the exception before continuing;
specifically, in the block that calls datetime.fromisoformat(reference_time_str)
(in backend/routers/user.py), change "except Exception: continue" to "except
ValueError as e: logger.warning('Failed to parse reference_time_str=%r: %s',
reference_time_str, e)" (or use the module/app logger available) so malformed
timestamps are recorded rather than silently ignored.
- Around line 347-400: The export_jobs dict is mutated from multiple threads
causing race conditions; create a module-level threading.Lock named
export_jobs_lock and use it to protect all reads/writes/deletes of export_jobs
(i.e., wrap mutations in start_user_export, _run_export_job, and
_prune_export_jobs with lock acquisition/release) so all updates and pruning
happen while holding export_jobs_lock; ensure the lock is imported from
threading and used consistently wherever export_jobs is accessed for mutation.
In `@frontend/app/settings/privacy.tsx`:
- Around line 149-157: Move the isMountedRef.current guard to before any state
updates so we don't call setExportStatusMessage after unmount; specifically, in
the export flow where you check attempt to decide the message (the block that
calls setExportStatusMessage(...) based on attempt), first ensure
isMountedRef.current is true and return early if not, then call
setExportStatusMessage; update the code around the setExportStatusMessage calls
and the isMountedRef check (referencing setExportStatusMessage,
isMountedRef.current, and the attempt variable) so no state is set unless the
component is mounted.
- Around line 286-348: The handleDeleteAccount flow updates state and shows
alerts after async operations without checking isMountedRef, risking
setState/Alert on unmounted components; update handleDeleteAccount to guard all
post-async state changes and Alert.alert calls with if (!isMountedRef.current)
return (or wrap subsequent logic in if (isMountedRef.current) { ... }),
specifically before calling setIsDeleting(false), any Alert.alert invocations,
and before router.replace('/onboarding'); ensure you check isMountedRef.current
after the fetch response handling and again after supabase.auth.signOut() to
mirror the export flow's mount-guard pattern.
🧹 Nitpick comments (3)
frontend/app/settings/privacy.tsx (1)
191-201: `FileSystem.downloadAsync` does not throw on non-200 status; consider also checking for a missing/empty file.

The guard on `result.status !== 200` is good, but note that `downloadAsync` may write an error-response body to `fileUri`. If the backend returns an error payload, the user ends up with a corrupt export file on disk. Consider deleting the file before throwing.

Proposed enhancement

  if (result.status !== 200) {
+   // Clean up the file written by downloadAsync (it contains the error response body)
+   await FileSystem.deleteAsync(fileUri, { idempotent: true });
    throw new Error('Failed to download export file');
  }

backend/routers/user.py (2)
27-31: In-memory `export_jobs` dict is lost on restart and incompatible with multi-worker deployments.

If the app runs behind Gunicorn/Uvicorn with more than one worker, each worker gets its own `export_jobs` dict — a job started in worker A will return 404 when polled from worker B. A process restart also loses all pending/completed jobs.

This is acceptable for a single-worker prototype, but consider documenting this limitation or migrating to Redis / a database table before scaling out.
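For illustration only, a minimal sketch of what a Redis-backed job store could look like, assuming redis-py is available; the key prefix, TTL, and connection details are hypothetical and not part of this PR:

import json

import redis

# Hypothetical client setup; host/port are placeholders.
redis_client = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Expiring keys replaces manual pruning and survives worker restarts.
EXPORT_JOB_TTL_SECONDS = 3600

def save_export_job(job_id: str, job: dict) -> None:
    # One JSON blob per job; SET with EX keeps the write and expiry atomic.
    redis_client.set(f"export_job:{job_id}", json.dumps(job), ex=EXPORT_JOB_TTL_SECONDS)

def load_export_job(job_id: str) -> dict | None:
    raw = redis_client.get(f"export_job:{job_id}")
    return json.loads(raw) if raw else None

Because every worker talks to the same Redis instance, a job started in worker A becomes visible to polls handled by worker B, which is the core gap with the in-process dict.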
488-516: Synchronous compatibility endpoint performs the full export in the request thread.

For large datasets (the very reason pagination was added), this blocks the worker for the entire duration. This is acceptable short-term for backwards compatibility, but consider deprecating it with a warning header or logging when the payload exceeds a size threshold to nudge callers toward the async flow.
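A rough sketch of that suggestion; the route name, header choice, threshold, and payload stand-in are assumptions for illustration, not the endpoint's actual code:

import json
import logging

from fastapi import APIRouter, Response

logger = logging.getLogger(__name__)
router = APIRouter()

# Illustrative cutoff; tune to whatever "large" means for real exports.
LARGE_EXPORT_THRESHOLD_BYTES = 5 * 1024 * 1024

@router.get("/{user_id}/export-sync")  # hypothetical legacy route
async def export_user_data_sync(user_id: str, response: Response) -> dict:
    payload = {"user_id": user_id, "entries": []}  # stand-in for the real export
    body = json.dumps(payload).encode()

    # Soft deprecation: advertise a header and log oversized payloads so
    # callers are nudged toward the async job flow.
    response.headers["Deprecation"] = "true"
    if len(body) > LARGE_EXPORT_THRESHOLD_BYTES:
        logger.warning(
            "Synchronous export for user_id=%s is %d bytes; prefer the async export flow",
            user_id,
            len(body),
        )
    return payload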
try:
    reference_time = datetime.fromisoformat(reference_time_str)
except Exception:  # noqa: BLE001
    continue
Silently swallowing fromisoformat parse failures hides corrupted job state.
If reference_time_str is malformed, the job is silently skipped and will never be pruned — leaking both memory and disk. At minimum, log a warning.
Proposed fix
  try:
      reference_time = datetime.fromisoformat(reference_time_str)
- except Exception:  # noqa: BLE001
+ except Exception:
+     logger.warning("Invalid timestamp for export job %s, removing", job_id)
+     expired_job_ids.append(job_id)
      continue

🧰 Tools
🪛 Ruff (0.15.0)
[error] 49-50: try-except-continue detected, consider logging the exception
(S112)
def _run_export_job(user_id: str, format: ExportFormat, job_id: str) -> None:
    try:
        logger.info("Starting export job %s for user_id=%s", job_id, user_id)
        payload = _fetch_user_export_data(user_id)
        content, media_type, filename = _render_export_content(user_id, format, payload)

        EXPORT_BASE_DIR.mkdir(parents=True, exist_ok=True)
        file_path = EXPORT_BASE_DIR / f"{job_id}_{filename}"
        file_path.write_bytes(content)

        export_jobs[job_id]["status"] = "completed"
        export_jobs[job_id]["file_path"] = str(file_path)
        export_jobs[job_id]["media_type"] = media_type
        export_jobs[job_id]["filename"] = filename
        export_jobs[job_id]["completed_at"] = datetime.now(timezone.utc).isoformat()

        logger.info("Completed export job %s, file=%s", job_id, file_path)
    except Exception as exc:
        logger.exception("Failed export job %s for user_id=%s", job_id, user_id)
        export_jobs[job_id]["status"] = "failed"
        export_jobs[job_id]["error"] = "Export failed. Please try again or contact support."
Potential KeyError if the job is pruned before the background task completes, and unused `exc` variable.

Two issues in `_run_export_job`:

- If `_prune_export_jobs()` (called by concurrent requests) evicts `job_id` from `export_jobs` while this task is running, Lines 334–344 will raise `KeyError`. Guard with a check or use `.get()`.
- Line 341: `exc` is assigned but never used (Ruff F841).
Proposed fix
 def _run_export_job(user_id: str, format: ExportFormat, job_id: str) -> None:
     try:
         logger.info("Starting export job %s for user_id=%s", job_id, user_id)
         payload = _fetch_user_export_data(user_id)
         content, media_type, filename = _render_export_content(user_id, format, payload)
         EXPORT_BASE_DIR.mkdir(parents=True, exist_ok=True)
         file_path = EXPORT_BASE_DIR / f"{job_id}_{filename}"
         file_path.write_bytes(content)
+        job = export_jobs.get(job_id)
+        if job is None:
+            logger.warning("Export job %s was pruned before completion; discarding result", job_id)
+            file_path.unlink(missing_ok=True)
+            return
+
-        export_jobs[job_id]["status"] = "completed"
-        export_jobs[job_id]["file_path"] = str(file_path)
-        export_jobs[job_id]["media_type"] = media_type
-        export_jobs[job_id]["filename"] = filename
-        export_jobs[job_id]["completed_at"] = datetime.now(timezone.utc).isoformat()
+        job["status"] = "completed"
+        job["file_path"] = str(file_path)
+        job["media_type"] = media_type
+        job["filename"] = filename
+        job["completed_at"] = datetime.now(timezone.utc).isoformat()
         logger.info("Completed export job %s, file=%s", job_id, file_path)
-    except Exception as exc:
+    except Exception:
         logger.exception("Failed export job %s for user_id=%s", job_id, user_id)
-        export_jobs[job_id]["status"] = "failed"
-        export_jobs[job_id]["error"] = "Export failed. Please try again or contact support."
+        job = export_jobs.get(job_id)
+        if job is not None:
+            job["status"] = "failed"
+            job["error"] = "Export failed. Please try again or contact support."

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def _run_export_job(user_id: str, format: ExportFormat, job_id: str) -> None:
    try:
        logger.info("Starting export job %s for user_id=%s", job_id, user_id)
        payload = _fetch_user_export_data(user_id)
        content, media_type, filename = _render_export_content(user_id, format, payload)
        EXPORT_BASE_DIR.mkdir(parents=True, exist_ok=True)
        file_path = EXPORT_BASE_DIR / f"{job_id}_{filename}"
        file_path.write_bytes(content)
        job = export_jobs.get(job_id)
        if job is None:
            logger.warning("Export job %s was pruned before completion; discarding result", job_id)
            file_path.unlink(missing_ok=True)
            return
        job["status"] = "completed"
        job["file_path"] = str(file_path)
        job["media_type"] = media_type
        job["filename"] = filename
        job["completed_at"] = datetime.now(timezone.utc).isoformat()
        logger.info("Completed export job %s, file=%s", job_id, file_path)
    except Exception:
        logger.exception("Failed export job %s for user_id=%s", job_id, user_id)
        job = export_jobs.get(job_id)
        if job is not None:
            job["status"] = "failed"
            job["error"] = "Export failed. Please try again or contact support."
🧰 Tools
🪛 Ruff (0.15.0)
[error] 341-341: Local variable exc is assigned to but never used
Remove assignment to unused variable exc
(F841)
@router.post("/{user_id}/export", status_code=status.HTTP_202_ACCEPTED)
async def start_user_export(
    user_id: str,
    format: ExportFormat = ExportFormat.json,
    background_tasks: BackgroundTasks = Depends(),
    current_user=Depends(get_current_user),
):
    """
    Start an asynchronous export job for the user.

    Returns immediately with a job_id that can be polled for status and used
    to download the completed export once ready.
    """
    _prune_export_jobs()

    if current_user.user.id != user_id:
        logger.warning(
            "Unauthorized export start attempt. User %s tried to export %s",
            current_user.user.id,
            user_id,
        )
        raise HTTPException(status_code=403, detail="Not authorized to export this user's data")

    # Enforce per-user cap on pending jobs
    pending_jobs_for_user = sum(
        1
        for job in export_jobs.values()
        if job.get("user_id") == user_id and job.get("status") == "pending"
    )
    if pending_jobs_for_user >= MAX_PENDING_EXPORT_JOBS_PER_USER:
        logger.warning(
            "User %s has too many pending export jobs: %s",
            user_id,
            pending_jobs_for_user,
        )
        raise HTTPException(
            status_code=status.HTTP_429_TOO_MANY_REQUESTS,
            detail="Too many pending export jobs. Please try again later.",
        )

    job_id = f"{user_id}-{uuid.uuid4()}"
    export_jobs[job_id] = {
        "status": "pending",
        "user_id": user_id,
        "format": format,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "completed_at": None,
        "error": None,
    }

    background_tasks.add_task(_run_export_job, user_id, format, job_id)
    logger.info("Queued export job %s for user_id=%s", job_id, user_id)

    return {"job_id": job_id, "status": "pending"}
🧩 Analysis chain
🏁 Script executed:

# First, let's find the _run_export_job function
rg -n -A 20 'def _run_export_job' --type py

Repository: fortune710/keepsafe
Length of output: 1745

🏁 Script executed:

# Find get_supabase_client implementation
rg -n -A 10 'def get_supabase_client' --type py

Repository: fortune710/keepsafe
Length of output: 648

🏁 Script executed:

# Check for export_jobs initialization and understand its structure
rg -n -B 5 -A 5 'export_jobs' --type py | head -60

Repository: fortune710/keepsafe
Length of output: 3717

🏁 Script executed:

# Check for any locking mechanisms or thread-safety patterns
rg -n 'Lock|RLock|thread_local|ThreadLocal|queue|Queue' --type py | grep -i export

Repository: fortune710/keepsafe
Length of output: 162

🏁 Script executed:

# Find _fetch_user_export_data implementation
rg -n -A 15 'def _fetch_user_export_data' --type py

Repository: fortune710/keepsafe
Length of output: 1175

🏁 Script executed:

# Check for any threading imports and mutex patterns
rg -n '^import|^from' backend/routers/user.py | head -30

Repository: fortune710/keepsafe
Length of output: 556

🏁 Script executed:

# Check if there are any Lock objects instantiated in the file
rg -n 'Lock\(\)|RLock\(\)' backend/routers/user.py

Repository: fortune710/keepsafe
Length of output: 45
export_jobs dict mutations lack thread-safe synchronization — add a Lock.
_run_export_job runs in a thread pool and directly mutates export_jobs[job_id] (lines 363–367, 372–373) without locks. Meanwhile, start_user_export initializes entries and _prune_export_jobs deletes entries from the same dict on the main thread. This creates race conditions.
get_supabase_client() is thread-safe (creates a new instance each call), so that is not a concern. However, export_jobs itself must be protected. Add a module-level Lock and acquire it around all mutations:
import threading

export_jobs_lock = threading.Lock()

# In _run_export_job, _prune_export_jobs, and start_user_export:
with export_jobs_lock:
    export_jobs[job_id] = {...}  # or any other mutation

Alternatively, use threading.local() per-request storage if appropriate for your use case.
🧰 Tools
🪛 Ruff (0.15.0)
[warning] 351-351: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
[warning] 352-352: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
if (attempt === 0) {
  setExportStatusMessage('Preparing your export. This may take a few moments...');
} else {
  setExportStatusMessage('Still preparing your export...');
}

if (!isMountedRef.current) {
  return;
}
State updates at Lines 150/152 are not guarded by isMountedRef.
setExportStatusMessage is called before the isMountedRef.current check at Line 155. Between the initial guard at Line 91 and here, an unmount could have occurred (an await on Line 114 yields control). Move the guard before these state updates.
Proposed fix
+ if (!isMountedRef.current) {
+   return;
+ }
+
  if (attempt === 0) {
    setExportStatusMessage('Preparing your export. This may take a few moments...');
  } else {
    setExportStatusMessage('Still preparing your export...');
  }

- if (!isMountedRef.current) {
-   return;
- }

  exportPollTimeoutRef.current = setTimeout(() => {

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (!isMountedRef.current) {
  return;
}

if (attempt === 0) {
  setExportStatusMessage('Preparing your export. This may take a few moments...');
} else {
  setExportStatusMessage('Still preparing your export...');
}
 const handleDeleteAccount = () => {
   Alert.alert(
     'Delete Account',
-    'This action cannot be undone. All your data will be permanently deleted.',
+    'Are you sure you want to delete your account? This action cannot be undone and all your data will be permanently lost.',
     [
-      { text: 'Cancel', style: 'cancel' },
       {
-        text: 'Delete',
+        text: 'Cancel',
+        style: 'cancel',
+      },
+      {
+        text: 'Delete',
         style: 'destructive',
-        onPress: () => {
-          Alert.alert('Account Deleted', 'Your account has been deleted.');
-          router.replace('/onboarding');
-        }
-      }
+        onPress: async () => {
+          if (!profile?.id) return;
+
+          if (!session?.access_token) {
+            Alert.alert('Error', 'You need to be signed in to delete your account.');
+            return;
+          }
+
+          try {
+            setIsDeleting(true);
+
+            const response = await fetch(`${BACKEND_URL}/user/${profile.id}`, {
+              method: 'DELETE',
+              headers: {
+                Authorization: `Bearer ${session.access_token}`,
+                'Content-Type': 'application/json',
+              },
+            });
+
+            if (!response.ok) {
+              const errorData = await response.json().catch(() => ({}));
+              throw new Error(errorData.detail || 'Failed to delete account data');
+            }
+
+            try {
+              await supabase.auth.signOut();
+            } catch (signOutError: any) {
+              logger.error('Delete Account: signOut failed after backend delete', signOutError);
+            }
+
+            setIsDeleting(false);
+            Alert.alert(
+              'Account Deleted',
+              'Account deleted successfully, we hate to see you go',
+              [
+                {
+                  text: 'OK',
+                  onPress: () => router.replace('/onboarding'),
+                },
+              ]
+            );
+          } catch (error: any) {
+            logger.error('Delete Account Error', error);
+            Alert.alert('Error', error.message || 'Failed to delete account');
+            setIsDeleting(false);
+          }
+        },
+      },
     ]
   );
 };
handleDeleteAccount is missing isMountedRef guards, unlike the export flow.
The export flow carefully checks isMountedRef.current before every state update and Alert.alert call. However, handleDeleteAccount calls setIsDeleting(false) (Lines 328, 342) and Alert.alert (Lines 329, 341) after async operations (fetch, signOut) without any mount guard. After supabase.auth.signOut() the auth provider may trigger navigation/unmount, making the subsequent state update and alert fire on an unmounted component.
Proposed fix: add isMountedRef guards
  try {
    await supabase.auth.signOut();
  } catch (signOutError: any) {
    logger.error('Delete Account: signOut failed after backend delete', signOutError);
  }

+ if (!isMountedRef.current) return;
+
  setIsDeleting(false);
  Alert.alert(
    'Account Deleted',
    'Account deleted successfully, we hate to see you go',
    [
      {
        text: 'OK',
        onPress: () => router.replace('/onboarding'),
      },
    ]
  );
} catch (error: any) {
  logger.error('Delete Account Error', error);
- Alert.alert('Error', error.message || 'Failed to delete account');
- setIsDeleting(false);
+ if (isMountedRef.current) {
+   Alert.alert('Error', error.message || 'Failed to delete account');
+   setIsDeleting(false);
+ }
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const handleDeleteAccount = () => {
  Alert.alert(
    'Delete Account',
    'Are you sure you want to delete your account? This action cannot be undone and all your data will be permanently lost.',
    [
      {
        text: 'Cancel',
        style: 'cancel',
      },
      {
        text: 'Delete',
        style: 'destructive',
        onPress: async () => {
          if (!profile?.id) return;

          if (!session?.access_token) {
            Alert.alert('Error', 'You need to be signed in to delete your account.');
            return;
          }

          try {
            setIsDeleting(true);

            const response = await fetch(`${BACKEND_URL}/user/${profile.id}`, {
              method: 'DELETE',
              headers: {
                Authorization: `Bearer ${session.access_token}`,
                'Content-Type': 'application/json',
              },
            });

            if (!response.ok) {
              const errorData = await response.json().catch(() => ({}));
              throw new Error(errorData.detail || 'Failed to delete account data');
            }

            try {
              await supabase.auth.signOut();
            } catch (signOutError: any) {
              logger.error('Delete Account: signOut failed after backend delete', signOutError);
            }

            if (!isMountedRef.current) return;

            setIsDeleting(false);
            Alert.alert(
              'Account Deleted',
              'Account deleted successfully, we hate to see you go',
              [
                {
                  text: 'OK',
                  onPress: () => router.replace('/onboarding'),
                },
              ]
            );
          } catch (error: any) {
            logger.error('Delete Account Error', error);
            if (isMountedRef.current) {
              Alert.alert('Error', error.message || 'Failed to delete account');
              setIsDeleting(false);
            }
          }
        },
      },
    ]
  );
};
Backend – user export data
Paginate friendships in _fetch_user_export_data so exports don’t hit Supabase’s 1000-row limit. Use friendships_batch_size and friendships_offset, loop with .range(), accumulate batches into friendships_data, and keep existing remove_none behavior.
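A minimal sketch of that loop, assuming the supabase-py client; the table and column names, batch size, and helper signature are illustrative and may differ from the actual diff:

def _fetch_friendships(supabase, user_id: str) -> list[dict]:
    # Page through friendships in fixed-size batches so large accounts are
    # not silently truncated at Supabase's 1000-row response cap.
    friendships_batch_size = 1000
    friendships_offset = 0
    friendships_data: list[dict] = []

    while True:
        start = friendships_offset
        end = friendships_offset + friendships_batch_size - 1  # .range() is inclusive
        response = (
            supabase.table("friendships")   # table name assumed
            .select("*")
            .eq("user_id", user_id)         # column name assumed
            .range(start, end)
            .execute()
        )
        batch = response.data or []
        friendships_data.extend(batch)
        if len(batch) < friendships_batch_size:
            break  # final (possibly partial) page reached
        friendships_offset += friendships_batch_size

    return friendships_data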
Backend – export download
Stop deleting the export file and popping the job before the response finishes. Add _cleanup_export_after_download(job_id, file_path) and run it via FastAPI BackgroundTasks after returning FileResponse. Cleanup checks Path(file_path).is_file(), unlinks in try/except with logging, then export_jobs.pop(job_id, None) so downloads complete before cleanup.
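A rough sketch of that deferred-cleanup shape, assuming the module-level router, logger, and export_jobs used elsewhere in this PR; the route path and job fields mirror the description above but are illustrative:

from pathlib import Path

from fastapi import BackgroundTasks, HTTPException
from fastapi.responses import FileResponse

def _cleanup_export_after_download(job_id: str, file_path: str) -> None:
    # Runs only after the response has finished, so the download is never cut short.
    path = Path(file_path)
    if path.is_file():
        try:
            path.unlink()
        except OSError:
            logger.exception("Failed to remove export file for job %s", job_id)
    export_jobs.pop(job_id, None)

@router.get("/{user_id}/export/{job_id}/download")  # illustrative route path
async def download_user_export(user_id: str, job_id: str, background_tasks: BackgroundTasks):
    job = export_jobs.get(job_id)
    if job is None or job.get("status") != "completed":
        raise HTTPException(status_code=404, detail="Export not ready")
    # FastAPI executes background tasks only after the FileResponse is fully streamed.
    background_tasks.add_task(_cleanup_export_after_download, job_id, job["file_path"])
    return FileResponse(job["file_path"], media_type=job["media_type"], filename=job["filename"])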
Frontend – privacy export
In downloadExport, guard all state updates and UI (e.g. setIsExporting, setExportStatusMessage, Alert.alert, Sharing onPress) with isMountedRef.current so we don’t update state or show dialogs after the component unmounts.
Summary by CodeRabbit