Implement async user data export queue and move export/delete to Privacy screen (#54)
Conversation
…and HTML formats, including status polling and download handling. Update dependencies in package.json and bun.lock for improved performance.
@MaestroDev19 is attempting to deploy a commit to the fortune710's projects Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough

Adds an async, job-based user data export system (HTML/JSON) to the backend with TTL pruning and per-user limits; moves export/delete UI and the full export polling/download/share flow into the frontend privacy screen; simplifies settings UI and pins one frontend dependency.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Frontend as Privacy Settings
    participant Backend as User API
    participant Database as Supabase DB
    participant FileSystem as Export Storage
    User->>Frontend: Initiate export (format)
    Frontend->>Backend: POST /start_user_export (auth)
    Backend->>Backend: authorize, create job_id
    Backend-->>Frontend: {job_id, status: pending}
    rect rgba(100,150,200,0.5)
        Note over Backend,Database: Background export job
        Backend->>Database: Fetch profile, entries (paginated), friendships
        Database-->>Backend: User data
        Backend->>Backend: Sanitize & render (HTML/JSON)
        Backend->>FileSystem: Write export file, update job status -> ready
    end
    loop Polling
        Frontend->>Backend: GET /get_export_status/{job_id} (auth)
        Backend-->>Frontend: {status, timestamps, errors}
    end
    Frontend->>Backend: GET /download_user_export/{job_id} (auth)
    Backend->>FileSystem: Read file bytes
    FileSystem-->>Backend: File bytes
    Backend-->>Frontend: FileResponse (download)
    Frontend->>User: Save / Share
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
frontend/app/settings/index.tsx (1)
26-29: ⚠️ Potential issue | 🟡 Minor — Unused imports left over from refactor.

`BACKEND_URL`, `FileSystem`, `Sharing`, and `logger` are imported but no longer referenced in this file after the export/delete logic was moved to `privacy.tsx`. These should be removed to avoid confusion and unnecessary bundling.

Proposed fix

```diff
-import { BACKEND_URL } from '@/lib/constants';
-import * as FileSystem from 'expo-file-system/legacy';
-import * as Sharing from 'expo-sharing';
-import { logger } from '@/lib/logger';
```

frontend/app/settings/privacy.tsx (1)
362-372: ⚠️ Potential issue | 🟠 Major — Export button should be disabled while an export is in progress.

The export button remains tappable during an active export. Tapping it again will start a second parallel export job, causing confusing state and duplicate downloads. Disable it when `isExporting` is true.

Proposed fix

```diff
-          <TouchableOpacity style={styles.actionButton} onPress={handleExportData}>
+          <TouchableOpacity style={[styles.actionButton, isExporting && { opacity: 0.5 }]} onPress={handleExportData} disabled={isExporting}>
```
🤖 Fix all issues with AI agents
In `@backend/routers/user.py`:
- Around line 23-27: The current implementation leaves EXPORT_BASE_DIR files and
the in-memory export_jobs dict unbounded; add a TTL + eviction and file cleanup:
implement a per-job timestamp/expiry field in ExportJobState and enforce a
max-age (e.g., 1 hour) when creating/updating jobs in export_jobs, add a
background pruner task (or run-time cleanup hook) that scans export_jobs and
removes entries older than TTL and deletes their corresponding files under
EXPORT_BASE_DIR, ensure successful download endpoints remove the exported file
and delete the job from export_jobs, and enforce a per-user cap on pending jobs
(e.g., check user_id in export job metadata and reject/create with 429 when over
limit) so export_jobs, EXPORT_BASE_DIR, and concurrent jobs remain bounded.
- Around line 251-254: The code stores the raw exception string into
export_jobs[job_id]["error"] which is later returned by get_export_status,
risking leakage of sensitive internals; instead, leave logger.exception as-is to
record full details but replace export_jobs[job_id]["error"] with a generic
user-facing message (e.g., "export failed, contact support") and optionally
store the exception details in an internal-only field or monitoring system;
update the except block that references export_jobs, job_id, user_id, and
logger.exception to set a sanitized message while retaining full logging.
- Line 278: The current job_id generation using job_id =
f"{user_id}-{datetime.now(timezone.utc).timestamp()}" can collide under
concurrent requests; replace this with a UUID-based id by importing uuid and
setting job_id to a string combining user_id and uuid.uuid4() (e.g.,
f"{user_id}-{uuid.uuid4()}") wherever job_id is created (refer to the job_id
variable and the export_jobs insertion logic) so each export job gets a globally
unique identifier.
- Around line 95-99: The current fetch of entries using
supabase.table("entries").select("*").eq("user_id", user_id).execute() will
silently cap at 1000 rows; update the fetch logic in the entries retrieval block
(references: supabase.table("entries"), entries_response, entries_data, logger)
to paginate through results or set an explicit higher .limit(n) and loop using
.range(offset, offset+batchSize-1) until no more rows are returned; after each
batch append to entries_data and log progress via logger.info, and if you use a
fixed limit log a warning via logger.warn if len(entries_data) == limit to
indicate potential truncation.
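The paginated fetch described in the item above can be sketched as follows. `fetch_range` is a hypothetical stand-in for the Supabase call chain (`supabase.table("entries").select("*").eq("user_id", user_id).range(offset, offset + batch_size - 1).execute().data`); only the looping logic is illustrated, not the client API itself.

```python
from typing import Callable

def fetch_all_entries(
    fetch_range: Callable[[int, int], list[dict]],
    batch_size: int = 1000,
) -> list[dict]:
    """Accumulate all rows by paging in batches of batch_size.

    fetch_range(start, end) is assumed to return the rows for the
    inclusive index range [start, end], mirroring Supabase's .range().
    """
    entries: list[dict] = []
    offset = 0
    while True:
        batch = fetch_range(offset, offset + batch_size - 1)
        entries.extend(batch)
        if len(batch) < batch_size:  # last (possibly empty) page reached
            break
        offset += batch_size
    return entries
```

The short-page check (`len(batch) < batch_size`) terminates the loop without an extra empty round-trip in the common case.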
In `@frontend/app/settings/privacy.tsx`:
- Around line 74-136: pollExportStatus can continue scheduling setTimeouts after
the component unmounts causing state updates on an unmounted component; fix by
adding a mounted ref (e.g., isMountedRef) and a timeoutId ref used inside
pollExportStatus and the setTimeout callback to (1) check isMountedRef.current
before calling setIsExporting or setExportStatusMessage or Alert, (2) avoid
scheduling the next poll if !isMountedRef.current, and (3) clear the pending
timeout in a useEffect cleanup that sets isMountedRef.current = false and clears
timeoutId; update references in pollExportStatus, the setTimeout call, and the
component’s useEffect to use these refs.
- Around line 269-272: The error handling for the delete-account fetch uses
response.json() without a .catch fallback, so non-JSON error bodies will throw a
parse error; update the block that checks if (!response.ok) to parse the body
with await response.json().catch(() => ({})) and then throw new
Error(errorData.detail || 'Failed to delete account data') so it matches the
existing patterns used at the other locations (see where response is inspected
and the Error is thrown).
In `@frontend/package.json`:
- Around line 43-52: package.json lists several Expo packages with older patch
versions that are incompatible with SDK 54; update the versions for the listed
packages (expo-audio, expo-av, expo-camera, expo-blur, expo-clipboard,
expo-constants, expo-contacts, expo-crypto, expo-dev-client) to the
SDK-54-compatible versions from the comment (e.g., expo-audio → ~1.1.1, expo-av
→ ~16.0.8, expo-camera → ~17.0.10, expo-blur → ~15.0.8, expo-clipboard → ~8.0.8,
expo-constants → ~18.0.13, expo-contacts → ~15.0.11, expo-crypto → ~15.0.8,
expo-dev-client → ~6.0.20) and then run npx expo install for each to ensure
the lockfile (bun.lock in this repo) and node_modules are updated accordingly.
🧹 Nitpick comments (4)
backend/routers/user.py (1)
251-251: Remove unused `noqa` directives.

Per static analysis (Ruff RUF100), the `# noqa: BLE001` comments on Lines 251 and 385 are unnecessary — the `BLE001` rule is not triggered here. Remove them to keep the code clean.

Proposed fix

```diff
-    except Exception as exc:  # noqa: BLE001
+    except Exception as exc:
-    except Exception as e:  # noqa: BLE001
+    except Exception as e:
```

Also applies to: 385-385
frontend/app/settings/index.tsx (1)
67-74: Remove commented-out code or track it in an issue.

The commented-out "Storage & Data" settings item is dead code. If it's planned for a future release, track it in an issue rather than leaving it in the source.
frontend/app/settings/privacy.tsx (2)
286-286: Use `logger.error` instead of `console.error` for consistency.

The rest of this file uses the project's `logger` utility (imported on Line 14), but the delete error path uses `console.error` directly. This bypasses the structured logging format.

Proposed fix

```diff
-      console.error('❌ Delete Account Error:', error);
+      logger.error('Delete Account Error', error);
```
383-395: Consider disabling the delete button while `isDeleting` is true.

Similar to the export button, the delete button remains tappable during deletion. While the confirmation Alert provides some protection, rapidly dismissing and re-tapping could trigger duplicate delete requests.

Proposed fix

```diff
-          <TouchableOpacity style={styles.actionButton} onPress={handleDeleteAccount}>
+          <TouchableOpacity style={[styles.actionButton, isDeleting && { opacity: 0.5 }]} onPress={handleDeleteAccount} disabled={isDeleting}>
```
… management, improved error handling, and pagination for entry fetching. Update dependencies in package.json and bun.lock for better performance.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
frontend/app/settings/privacy.tsx (1)
449-460: ⚠️ Potential issue | 🟠 Major — Race condition: file deleted before `FileResponse` finishes streaming.

Lines 450–453 in `backend/routers/user.py` delete the export file immediately after constructing the `FileResponse`, but `FileResponse` streams the file — so the file may be removed before the response body is fully sent. The corresponding `downloadExport` call on the frontend (Line 191 here) could then receive a truncated or failed download.

This is the root cause in the backend; see the backend file review for the fix.
🤖 Fix all issues with AI agents
In `@backend/routers/user.py`:
- Around line 127-194: The friendships query in _fetch_user_export_data uses an
unbounded select (friendships_response/friendships_data) and can hit Supabase's
1000-row cap; change it to paginate like entries: introduce
friendships_batch_size and friendships_offset, loop calling
supabase.table("friendships").select("*").or_(f"user_id.eq.{user_id},friend_id.eq.{user_id}").range(offset,
offset+batch_size-1).execute(), accumulate batches into friendships_data, break
when a batch is empty or smaller than the batch size, and keep the existing
remove_none handling; alternatively, if you prefer not to paginate, add a
.limit() and a logger.warn when the returned count equals the limit to signal
truncation.
- Around line 443-459: The code deletes the export file and pops export_jobs
synchronously before FileResponse finishes streaming, causing failed downloads;
update the endpoint to accept FastAPI's BackgroundTasks (add BackgroundTasks
parameter to the function signature) and move the cleanup logic into a
background task function that accepts job_id and file_path, checks
Path(file_path).is_file() and unlinks it inside a try/except (logging
exceptions), then removes export_jobs.pop(job_id, None); return the FileResponse
immediately and schedule the background cleanup via
background_tasks.add_task(cleanup_func, job_id, file_path) so deletion happens
after the response is sent.
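The deferred-cleanup approach described above can be sketched as follows. FastAPI's `BackgroundTasks.add_task` is real API; the `export_jobs` store, the endpoint shape, and the field names are assumptions drawn from the review comments.

```python
import logging
from pathlib import Path

logger = logging.getLogger(__name__)

# Assumed in-memory job store, mirroring the review's description.
export_jobs: dict[str, dict] = {}

def cleanup_export(job_id: str, file_path: str) -> None:
    """Runs only after the FileResponse has finished streaming."""
    try:
        path = Path(file_path)
        if path.is_file():
            path.unlink()
    except Exception:
        logger.exception("Failed to clean up export file for job %s", job_id)
    export_jobs.pop(job_id, None)

# Endpoint sketch (not the PR's exact code):
#
# @router.get("/user/{user_id}/export/{job_id}/download")
# def download_user_export(job_id: str, background_tasks: BackgroundTasks):
#     job = export_jobs[job_id]
#     background_tasks.add_task(cleanup_export, job_id, job["file_path"])
#     return FileResponse(job["file_path"], media_type=job["media_type"])
```

Because Starlette runs background tasks after the response is fully sent, the file outlives the stream, fixing the truncated-download race.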
In `@frontend/app/settings/privacy.tsx`:
- Around line 176-231: The downloadExport function may update state after the
component unmounts; add isMountedRef guards around all state updates and UI
actions: before calling setIsExporting(false) and setExportStatusMessage(null)
after a successful download and inside the catch block, check if
(isMountedRef.current) then perform the state updates and Alert calls; similarly
guard the Sharing/Alert flow (the onPress handler) so it only runs if
isMountedRef.current is true. Reference downloadExport, setIsExporting,
setExportStatusMessage, and isMountedRef when making these conditional checks.
🧹 Nitpick comments (4)
frontend/app/settings/index.tsx (1)
1-71: Unused `HardDrive` import.

`HardDrive` (Line 18) is imported from `lucide-react-native` but never referenced in the component or `settingsItems` array.

Proposed fix

```diff
 import {
   ChevronRight,
   User,
   Bell,
   Shield,
-  HardDrive,
   Info,
   LogOut,
 } from 'lucide-react-native';
```

backend/routers/user.py (3)
316-321: Unused `exc` variable flagged by static analysis.

`exc` is bound on Line 318 but never read since the error message is now a generic string. Either prefix with `_` or remove the binding.

```diff
-        except Exception as exc:
+        except Exception:
             logger.exception("Failed export job %s for user_id=%s", job_id, user_id)
```
47-50: Silent exception swallowing during date parsing.

The `except Exception: continue` on Lines 49–50 silently skips jobs with unparseable timestamps. A debug-level log would help diagnose corrupt job state during troubleshooting.

```diff
     try:
         reference_time = datetime.fromisoformat(reference_time_str)
-    except Exception:  # noqa: BLE001
+    except Exception:
+        logger.debug("Skipping job %s with unparseable timestamp: %s", job_id, reference_time_str)
         continue
```
63-64: Stale `# noqa: BLE001` directives.

Ruff 0.15.0 reports these `noqa` comments as unused on Lines 63, 454, and 488. They can be removed to keep the code tidy.

Also applies to: 454-454, 488-488
…rt and add background cleanup for export files after download. Enhance error handling in the frontend export process.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@backend/routers/user.py`:
- Around line 34-66: In _prune_export_jobs(), stop silently swallowing malformed
timestamps: when datetime.fromisoformat(reference_time_str) raises, catch the
specific ValueError, log a warning including the job_id and the bad timestamp,
and then mark that job for deletion (e.g., append job_id to expired_job_ids) so
corrupt entries don't leak forever; also replace the broad except with
ValueError for timestamp parsing and remove the unused "noqa: BLE001" comment on
the unlink exception catch in the function.
In `@frontend/app/settings/privacy.tsx`:
- Around line 241-282: The catch block in performExport updates state and shows
an Alert without checking the component mount status; guard the state updates
and Alert by wrapping setIsExporting(false), setExportStatusMessage(null),
logger.error(...) and Alert.alert(...) in an if (isMountedRef.current) { ... }
check (or early return if !isMountedRef.current) so these side-effects only run
when the component is still mounted; ensure isMountedRef is the same ref used
elsewhere in the component and is in scope for performExport.
- Around line 284-341: The delete flow in handleDeleteAccount can leave the user
signed in if supabase.auth.signOut() throws; modify the try/catch so that after
a successful backend DELETE you attempt supabase.auth.signOut() in its own
try/catch, log any signOut error via logger.error, still show the success Alert
and call router.replace('/onboarding') (and clear UI state via
setIsDeleting(false) as needed) so the user is navigated away even if signOut
fails; ensure the outer catch only handles backend DELETE failures and shows the
appropriate error.
🧹 Nitpick comments (2)
backend/routers/user.py (2)
324-345: Unused exception variable `exc` (Ruff F841).

Line 341 binds the exception to `exc` but never uses it. Since the generic user-facing error message is intentional (good — addresses prior review feedback), just drop the binding.

Proposed fix

```diff
-    except Exception as exc:
+    except Exception:
         logger.exception("Failed export job %s for user_id=%s", job_id, user_id)
         export_jobs[job_id]["status"] = "failed"
         export_jobs[job_id]["error"] = "Export failed. Please try again or contact support."
```
506-516: Unquoted `filename` in `Content-Disposition` header.

Per RFC 6266, the filename value should be quoted to handle potential special characters. While the current filename format (`user_data_{uuid}.json`) is unlikely to cause problems, quoting is a low-effort safeguard.

Proposed fix

```diff
-            headers={"Content-Disposition": f"attachment; filename=(unknown)"},
+            headers={"Content-Disposition": f'attachment; filename="(unknown)"'},
```
```diff
   const handleDeleteAccount = () => {
     Alert.alert(
       'Delete Account',
-      'This action cannot be undone. All your data will be permanently deleted.',
+      'Are you sure you want to delete your account? This action cannot be undone and all your data will be permanently lost.',
       [
-        { text: 'Cancel', style: 'cancel' },
-        {
-          text: 'Delete',
+        {
+          text: 'Cancel',
+          style: 'cancel',
+        },
+        {
+          text: 'Delete',
           style: 'destructive',
-          onPress: () => {
-            Alert.alert('Account Deleted', 'Your account has been deleted.');
-            router.replace('/onboarding');
-          }
-        }
+          onPress: async () => {
+            if (!profile?.id) return;
+
+            if (!session?.access_token) {
+              Alert.alert('Error', 'You need to be signed in to delete your account.');
+              return;
+            }
+
+            try {
+              setIsDeleting(true);
+
+              const response = await fetch(`${BACKEND_URL}/user/${profile.id}`, {
+                method: 'DELETE',
+                headers: {
+                  Authorization: `Bearer ${session.access_token}`,
+                  'Content-Type': 'application/json',
+                },
+              });
+
+              if (!response.ok) {
+                const errorData = await response.json().catch(() => ({}));
+                throw new Error(errorData.detail || 'Failed to delete account data');
+              }
+
+              await supabase.auth.signOut();
+              Alert.alert(
+                'Account Deleted',
+                'Account deleted successfully, we hate to see you go',
+                [
+                  {
+                    text: 'OK',
+                    onPress: () => router.replace('/onboarding'),
+                  },
+                ]
+              );
+            } catch (error: any) {
+              logger.error('Delete Account Error', error);
+              Alert.alert('Error', error.message || 'Failed to delete account');
+            } finally {
+              setIsDeleting(false);
+            }
+          },
+        },
       ]
     );
   };
```
Account deletion succeeds but signOut failure leaves user in a broken UI state.
If supabase.auth.signOut() on Line 320 throws, the catch block on Line 331 shows a generic error alert, but the backend account is already deleted. The user is stuck: signed in with an invalid account, seeing an error, and unable to retry deletion.
Consider catching signOut failures separately and still navigating to onboarding:
Proposed fix
```diff
       if (!response.ok) {
         const errorData = await response.json().catch(() => ({}));
         throw new Error(errorData.detail || 'Failed to delete account data');
       }
-      await supabase.auth.signOut();
+      await supabase.auth.signOut().catch((err: any) => {
+        logger.error('Sign out after deletion failed', err);
+      });
       Alert.alert(
         'Account Deleted',
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
  const handleDeleteAccount = () => {
    Alert.alert(
      'Delete Account',
      'Are you sure you want to delete your account? This action cannot be undone and all your data will be permanently lost.',
      [
        {
          text: 'Cancel',
          style: 'cancel',
        },
        {
          text: 'Delete',
          style: 'destructive',
          onPress: async () => {
            if (!profile?.id) return;
            if (!session?.access_token) {
              Alert.alert('Error', 'You need to be signed in to delete your account.');
              return;
            }
            try {
              setIsDeleting(true);
              const response = await fetch(`${BACKEND_URL}/user/${profile.id}`, {
                method: 'DELETE',
                headers: {
                  Authorization: `Bearer ${session.access_token}`,
                  'Content-Type': 'application/json',
                },
              });
              if (!response.ok) {
                const errorData = await response.json().catch(() => ({}));
                throw new Error(errorData.detail || 'Failed to delete account data');
              }
              await supabase.auth.signOut().catch((err: any) => {
                logger.error('Sign out after deletion failed', err);
              });
              Alert.alert(
                'Account Deleted',
                'Account deleted successfully, we hate to see you go',
                [
                  {
                    text: 'OK',
                    onPress: () => router.replace('/onboarding'),
                  },
                ]
              );
            } catch (error: any) {
              logger.error('Delete Account Error', error);
              Alert.alert('Error', error.message || 'Failed to delete account');
            } finally {
              setIsDeleting(false);
            }
          },
        },
      ]
    );
  };
```
🤖 Prompt for AI Agents
In `@frontend/app/settings/privacy.tsx` around lines 284 - 341, The delete flow in
handleDeleteAccount can leave the user signed in if supabase.auth.signOut()
throws; modify the try/catch so that after a successful backend DELETE you
attempt supabase.auth.signOut() in its own try/catch, log any signOut error via
logger.error, still show the success Alert and call
router.replace('/onboarding') (and clear UI state via setIsDeleting(false) as
needed) so the user is navigated away even if signOut fails; ensure the outer
catch only handles backend DELETE failures and shows the appropriate error.
…xport process. Log specific errors for malformed timestamps and sign-out failures.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@backend/routers/user.py`:
- Around line 347-350: The except block defines an unused variable `exc`; change
the handler to not assign the exception (use `except Exception:` or `except
Exception as _:`) so Ruff F841 is resolved, leaving the body as-is where
`logger.exception("Failed export job %s for user_id=%s", job_id, user_id)` and
the updates to `export_jobs[job_id]["status"]` and
`export_jobs[job_id]["error"]` remain unchanged.
- Line 520: The except block in backend/routers/user.py currently has an unused
`# noqa: BLE001` comment on the line `except Exception as e:`; remove the `#
noqa: BLE001` suffix so the line is simply `except Exception as e:` (or, if you
prefer, replace the broad exception with a specific exception type in the
`except` clause used by the surrounding function/method) to eliminate the
unused-noqa RUF100 warning.
- Line 448: Remove the unused Ruff suppression by deleting the trailing comment
"# noqa: BLE001" from the bare exception handler so the line reads simply
"except Exception:"; update the except block in backend/routers/user.py (the
try/except that currently contains "except Exception: # noqa: BLE001") to
remove the unused directive and leave the exception handling logic unchanged.
🧹 Nitpick comments (2)
backend/routers/user.py (2)
24-31: In-memory job store acknowledged as single-process only — consider documenting thread-safety caveat.

The PR notes mention this is for single-process deployments. Be aware that `_run_export_job` runs in a Starlette thread pool (sync background tasks use `anyio.to_thread.run_sync`), so it writes to `export_jobs` concurrently with the event loop reading it. CPython's GIL makes individual dict operations atomic, but the multi-step update in `_run_export_job` (Lines 340–344) is not atomic — a request handler could observe `status="completed"` before `file_path` is set, causing a spurious 410 from the download endpoint.

A quick mitigation is to build the update as a local dict and assign it in one shot, or set `status` last:

Proposed fix

```diff
-        export_jobs[job_id]["status"] = "completed"
-        export_jobs[job_id]["file_path"] = str(file_path)
-        export_jobs[job_id]["media_type"] = media_type
-        export_jobs[job_id]["filename"] = filename
-        export_jobs[job_id]["completed_at"] = datetime.now(timezone.utc).isoformat()
+        export_jobs[job_id].update({
+            "file_path": str(file_path),
+            "media_type": media_type,
+            "filename": filename,
+            "completed_at": datetime.now(timezone.utc).isoformat(),
+            "status": "completed",
+        })
```

Note: `dict.update()` is also not truly atomic, but in CPython it holds the GIL for the C-level operation, making it safer than separate assignments. Setting `status` last in the dict literal ensures the fields are populated before the status signals readiness.
226-327: HTML rendering has proper XSS protections.

All user-controlled values are escaped via `html.escape()`, URLs are validated for `http(s)://` prefixes, and the template construction looks safe.

Minor nit: `format` as a parameter name shadows the Python built-in. Consider `export_format` for clarity, though this is non-blocking.
```python
        except Exception as exc:
            logger.exception("Failed export job %s for user_id=%s", job_id, user_id)
            export_jobs[job_id]["status"] = "failed"
            export_jobs[job_id]["error"] = "Export failed. Please try again or contact support."
```
Unused exception variable exc.
Ruff flags exc as assigned but never used (F841). Since logger.exception automatically includes the current exception info, you can simply use a bare except Exception: or name it _.
Proposed fix
```diff
-        except Exception as exc:
+        except Exception:
```

📝 Committable suggestion
```python
        except Exception:
            logger.exception("Failed export job %s for user_id=%s", job_id, user_id)
            export_jobs[job_id]["status"] = "failed"
            export_jobs[job_id]["error"] = "Export failed. Please try again or contact support."
```
🧰 Tools
🪛 Ruff (0.15.0)
[error] 347-347: Local variable exc is assigned to but never used
Remove assignment to unused variable exc
(F841)
🤖 Prompt for AI Agents
In `@backend/routers/user.py` around lines 347 - 350, The except block defines an
unused variable `exc`; change the handler to not assign the exception (use
`except Exception:` or `except Exception as _:`) so Ruff F841 is resolved,
leaving the body as-is where `logger.exception("Failed export job %s for
user_id=%s", job_id, user_id)` and the updates to
`export_jobs[job_id]["status"]` and `export_jobs[job_id]["error"]` remain
unchanged.
```python
            path_obj = Path(file_path)
            if path_obj.is_file():
                path_obj.unlink()
        except Exception:  # noqa: BLE001
```
Remove unused noqa directive.
Ruff flags # noqa: BLE001 as unused on this line (RUF100). The bare except Exception: without BLE001 suppression is fine here since Ruff doesn't trigger that rule in this context.
Proposed fix
```diff
-        except Exception:  # noqa: BLE001
+        except Exception:
```

📝 Committable suggestion
```python
        except Exception:
```
🧰 Tools
🪛 Ruff (0.15.0)
[warning] 448-448: Unused noqa directive (unused: BLE001)
Remove unused noqa directive
(RUF100)
🤖 Prompt for AI Agents
In `@backend/routers/user.py` at line 448, Remove the unused Ruff suppression by
deleting the trailing comment "# noqa: BLE001" from the bare exception handler
so the line reads simply "except Exception:"; update the except block in
backend/routers/user.py (the try/except that currently contains "except
Exception: # noqa: BLE001") to remove the unused directive and leave the
exception handling logic unchanged.
```python
            media_type=media_type,
            headers={"Content-Disposition": f"attachment; filename=(unknown)"},
        )
    except Exception as e:  # noqa: BLE001
```
Remove unused noqa directive.
Same as above — Ruff flags # noqa: BLE001 as unused (RUF100).
Proposed fix
```diff
-    except Exception as e:  # noqa: BLE001
+    except Exception as e:
```

📝 Committable suggestion
```python
    except Exception as e:
```
🧰 Tools
🪛 Ruff (0.15.0)
[warning] 520-520: Unused noqa directive (unused: BLE001)
Remove unused noqa directive
(RUF100)
🤖 Prompt for AI Agents
In `@backend/routers/user.py` at line 520, The except block in
backend/routers/user.py currently has an unused `# noqa: BLE001` comment on the
line `except Exception as e:`; remove the `# noqa: BLE001` suffix so the line is
simply `except Exception as e:` (or, if you prefer, replace the broad exception
with a specific exception type in the `except` clause used by the surrounding
function/method) to eliminate the unused-noqa RUF100 warning.
Summary
Implemented an asynchronous user data export pipeline in the FastAPI backend using BackgroundTasks, with job creation, status, and download endpoints.
Updated React Native/Expo frontend to use the new async export flow, including visible progress feedback, and centralized export/delete actions under the Privacy & Security screen.
Backend
Added async export workflow in routers/user.py:
POST /user/{user_id}/export?format=json|html to enqueue an export job and immediately return a job_id with 202 Accepted.
GET /user/{user_id}/export/{job_id}/status to poll job status (pending, completed, failed) and expose any error message.
GET /user/{user_id}/export/{job_id}/download to download the generated JSON or HTML export once the job is completed.
Refactored export logic into helpers to:
Fetch all user data (profile, entries, friendships) and strip None values.
Render either JSON or HTML export content safely (HTML escapes, URL validation).
Persist the resulting file to a temp keepsafe_exports directory and track jobs via an in-memory export_jobs map.
Kept the existing synchronous GET /user/{user_id}/export for backward compatibility, delegating to the new helper functions.
Fixed Supabase auth dependency wiring by pointing utils/auth.py to services.supabase_client.get_supabase_client instead of a non-existent app.deps.supabase module.
Frontend
app/settings/index.tsx (Settings):
Removed duplicate Export Data and Delete Account actions from the main Settings screen so they only live under Privacy & Security.
Left the Sign Out action as-is.
app/settings/privacy.tsx (Privacy & Security):
Moved the real export and delete account logic here:
Delete account now calls the backend /user/{id} DELETE endpoint, then signs out via Supabase and navigates to onboarding.
Export now:
Starts an async export job via POST /user/{id}/export?format=....
Polls /user/{id}/export/{job_id}/status until completion or failure.
Downloads the ready file via /user/{id}/export/{job_id}/download using expo-file-system and opens the share/save sheet with expo-sharing.
Added a lightweight progress UI for exports:
isExporting state toggling button label to “Exporting Data…”.
exportStatusMessage plus an ActivityIndicator under the Export My Data row, showing messages like “Starting your export…”, “Preparing your export…”, and “Export ready. Downloading your data…”.
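The start/poll/download flow above is implemented in TypeScript on the frontend; as a language-agnostic illustration, the poll-until-ready loop can be sketched in Python with an injected status getter. Here `get_status` stands in for `GET /user/{id}/export/{job_id}/status`, and the status values (`pending`, `completed`, `failed`) mirror the PR description; the function signature itself is an assumption for illustration.

```python
import time
from typing import Callable

def poll_export(
    get_status: Callable[[], dict],
    interval: float = 2.0,
    max_attempts: int = 30,
    sleep: Callable[[float], None] = time.sleep,
) -> dict:
    """Poll until the export job completes, fails, or we give up."""
    for _ in range(max_attempts):
        status = get_status()
        if status.get("status") == "completed":
            return status  # caller proceeds to the download endpoint
        if status.get("status") == "failed":
            raise RuntimeError(status.get("error", "Export failed"))
        sleep(interval)
    raise TimeoutError("Export did not complete in time")
```

Injecting `sleep` keeps the loop testable without real delays; the frontend's `setTimeout`-based version follows the same shape.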
Notes
Authorization for all user export and delete routes still relies on get_current_user to ensure users can only act on their own account.
The in-memory export job map is suitable for the current single-process deployment; if/when you scale horizontally, this can be swapped to Redis or another shared store without changing the API contract.
Chores