Conversation
📝 Walkthrough

Adds a mailbox notification subsystem: DB model + migration, core mailbox sync/emit/resolve logic, API endpoints and a new Django app, frontend mailbox feature and store with polling, wiring into workflow/archive/permission flows, tests, a backfill command, and optional-dep guards and minor polish.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Frontend as "Frontend"
    participant API as "Django API"
    participant Mailbox as "sql.mailbox"
    participant DB as "Database"
    participant Worker as "Async Worker"
    User->>Frontend: Submit / Review / Execute / Archive
    Frontend->>API: POST action
    API->>DB: create/update Workflow & Audit (transaction)
    API->>Mailbox: sync_approval_notifications / sync_execution_needed_notifications
    Mailbox->>DB: upsert / resolve MailboxItem(s)
    DB-->>Mailbox: persisted
    API->>DB: transaction commit
    API->>Worker: schedule async notify task (registered via on_commit)
    Worker->>Mailbox: emit_execution_finished_notifications (on execution finish)
    Mailbox->>DB: create/update EXECUTION_FINISHED MailboxItem(s)
    DB-->>Mailbox: persisted
    Frontend->>API: GET /v1/mailbox/summary or /items
    API->>Mailbox: preview_mailbox_items / query items
    Mailbox->>DB: select recipient items
    DB-->>Mailbox: items
    Mailbox-->>API: serialized response
    API-->>Frontend: 200 { unread_count, items }
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
api_access/views.py (1)
485-503: ⚠️ Potential issue | 🟠 Major

Keep mailbox sync in the same creation transaction.

Line 497 runs after `auditor.create_audit()` has committed. If mailbox sync fails here, the request/audit can already be persisted while the API returns an error; a retry then hits the duplicate pending-request check instead of returning the created request. Move the callback and mailbox sync into the existing atomic block so creation rolls back cleanly on sync failure.

🐛 Proposed transaction-boundary fix

```diff
     auditor = get_auditor(workflow=permission_request)
     try:
         with transaction.atomic():
             auditor.create_audit()
+            _permission_request_audit_callback(
+                auditor.workflow.request_id, auditor.audit.current_status
+            )
+            sync_approval_notifications(auditor.workflow)
     except AuditException:
         raise serializers.ValidationError(
             {"errors": "Failed to create approval flow, please contact admin."}
         )
-    _permission_request_audit_callback(
-        auditor.workflow.request_id, auditor.audit.current_status
-    )
-    sync_approval_notifications(auditor.workflow)
     async_task(
         notify_for_audit,
         workflow_audit=auditor.audit,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_access/views.py` around lines 485 - 503, The mailbox sync and callback run after the audit creation commits, so move the calls to _permission_request_audit_callback, sync_approval_notifications, and the async_task(...) scheduling for notify_for_audit inside the existing try/with transaction.atomic() that wraps auditor.create_audit() (the get_auditor/auditor.create_audit block) so any failure during callback or mailbox sync causes the transaction to roll back; update the except AuditException handler (or catch broader exceptions from the sync/callback) to re-raise serializers.ValidationError as before so the API returns an error instead of leaving a partially-created request/audit.
🧹 Nitpick comments (2)
sql/mailbox.py (2)
137-168: Hoist `SysConfig().get("ban_self_audit")` out of the reviewer loop.

`_is_self_audit_blocked` is invoked per user in `_current_reviewers`; each call constructs a `SysConfig` instance and reads the flag. For groups with many reviewers (and in `backfill_mailbox_notifications`, for every workflow/archive/request), this adds unnecessary overhead. Resolve the flag once per `_current_reviewers` call and pass it down, or cache it on the function scope.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sql/mailbox.py` around lines 137 - 168, The loop in _current_reviewers calls _is_self_audit_blocked for every user which recreates SysConfig() and reads SysConfig().get("ban_self_audit") repeatedly; hoist the flag once at the start of _current_reviewers (e.g. ban_self_audit = SysConfig().get("ban_self_audit")) and pass that boolean into _is_self_audit_blocked (or replace the call with a local check using the hoisted value), updating _is_self_audit_blocked's signature/uses accordingly so backfill_mailbox_notifications and other callers still behave correctly; ensure references to SqlWorkflow and resource/group id checks remain unchanged.
453-485: Avoid the `_reload_source` re-query inside `backfill_mailbox_notifications`.

Each top-level loop already prefetches the exact related objects (`instance`, `resource_group`, `src_instance`), but every call into `sync_approval_notifications` / `sync_execution_needed_notifications` then invokes `_reload_source`, which discards the prefetched instance and issues another SELECT. On large installations this turns the backfill into an O(N) extra-query pass and undermines the `select_related` above. Consider bypassing `_reload_source` in the already-hydrated path (e.g., accepting a `skip_reload=True` flag, or inlining the sync body on the hydrated object).

♻️ Sketch

```diff
-def sync_approval_notifications(source):
-    source = _reload_source(source)
+def sync_approval_notifications(source, *, reload=True):
+    if reload:
+        source = _reload_source(source)
     reviewers = _current_reviewers(source)
     ...
```

Then in `backfill_mailbox_notifications`, pass `reload=False` since objects are already fetched with the needed `select_related`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sql/mailbox.py` around lines 453 - 485, The backfill is reloading already select_related objects via _reload_source causing extra queries; modify sync_approval_notifications and sync_execution_needed_notifications to accept a reload/skip_reload boolean (default True) and only call _reload_source when reload is True, and change _reload_source to be used exclusively when reload is requested; then update backfill_mailbox_notifications to call the sync_* functions with reload=False (or skip_reload=True) so the pre-fetched SqlWorkflow, ArchiveConfig, and PermissionRequest objects are used directly without re-querying.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api_archives/views.py`:
- Around line 811-812: Wrap the post-persist mailbox sync calls so failures
don't bubble up: surround sync_approval_notifications(audit_handler.workflow)
and sync_execution_needed_notifications(audit_handler.workflow) with a
try/except that catches Exception, logs the error (including the exception) and
continues without re-raising; apply the same change to the other occurrences of
these calls referenced (the blocks around the other occurrences at the same
pattern). Ensure you reference the sync_approval_notifications and
sync_execution_needed_notifications calls and the audit_handler.workflow
argument when adding the safe-guarding try/except and logging.
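A minimal sketch of that safe-guarding wrapper. The function name `sync_mailbox_safely` is invented here; the real call sites would pass `sync_approval_notifications` / `sync_execution_needed_notifications` along with `audit_handler.workflow`.

```python
import logging

logger = logging.getLogger(__name__)


def sync_mailbox_safely(workflow, *sync_fns):
    """Run each mailbox sync best-effort: log failures, never re-raise."""
    for sync in sync_fns:
        try:
            sync(workflow)
        except Exception:
            logger.exception(
                "Mailbox sync %s failed for workflow %r", sync.__name__, workflow
            )
```

A failing sync is logged and skipped, and the remaining syncs (and the surrounding request handling) still run.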
In `@api_mailbox/tests.py`:
- Around line 214-216: Replace the naive datetime usage when setting
workflow.finish_time: in the block that sets workflow.status =
"workflow_finish", change the assignment workflow.finish_time =
datetime.datetime.now() to use Django's timezone.now()
(django.utils.timezone.now) so the test uses timezone-aware datetimes; then save
as before with workflow.save(update_fields=["status", "finish_time"]).
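The naive/aware distinction, illustrated with the stdlib (`django.utils.timezone.now()` likewise returns an aware value when `USE_TZ=True`):

```python
import datetime

naive = datetime.datetime.now()  # what the test currently assigns to finish_time
aware = datetime.datetime.now(datetime.timezone.utc)  # aware analogue of timezone.now()

assert naive.tzinfo is None  # naive: Django warns when saving this under USE_TZ=True
assert aware.tzinfo is not None  # aware: safe for a DateTimeField
```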
In `@api_mailbox/views.py`:
- Around line 78-86: The code currently treats any unrecognized state (e.g.,
"unred") as "all"; update the state handling to validate allowed values and
return a validation error for unsupported values: after reading state =
self.request.query_params.get("state", ...), check it against an allowed set
{"all","read","unread"} and if it's not in that set raise a DRF ValidationError
(or return a 400 response) with a clear message; keep the existing branches that
filter queryset (queryset.filter(is_unread=True) / is_unread=False) for the
valid "read"/"unread" values and leave "all" unfiltered.
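A framework-free sketch of that validation; a real DRF view would raise `rest_framework.exceptions.ValidationError` rather than `ValueError`, and `ALLOWED_STATES` / `validate_state` are names invented here.

```python
ALLOWED_STATES = {"all", "read", "unread"}


def validate_state(raw):
    """Reject unsupported values instead of silently treating them as 'all'."""
    state = (raw or "all").lower()
    if state not in ALLOWED_STATES:
        raise ValueError(
            f"Unsupported state {raw!r}; expected one of {sorted(ALLOWED_STATES)}"
        )
    return state
```

With this in place, a typo such as `?state=unred` produces a 400-style error instead of quietly returning the unfiltered list.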
In `@frontend/src/app/AppShell.vue`:
- Around line 154-163: openMailboxItem currently awaits mailboxStore.markRead
and will prevent navigation if markRead rejects; change it to treat markRead as
best-effort by calling it without blocking router.push (either fire-and-forget
or await inside a try/catch that swallows errors). Ensure
isMailboxMenuOpen.value is still set to false and router.push(actionPath) always
runs (use router.push after closing the menu and do not let markRead rejection
stop execution). Target symbols: function openMailboxItem,
mailboxStore.markRead, isMailboxMenuOpen, and router.push.
In `@frontend/src/features/archives/pages/ArchiveDetailPage.vue`:
- Around line 228-231: The current use of Promise.all([...]) couples
refreshArchiveDetail() with mailboxStore.refreshSummary() so any mailbox refresh
rejection marks the whole operation as failed; instead make the mailbox refresh
best-effort by awaiting refreshArchiveDetail() normally and running
mailboxStore.refreshSummary() in a non-fatal way (either await
Promise.allSettled([mailboxStore.refreshSummary()]) or run
mailboxStore.refreshSummary() inside a try/catch and swallow/log the error)
wherever Promise.all([...refreshArchiveDetail(), mailboxStore.refreshSummary()])
is used (refer to refreshArchiveDetail() and mailboxStore.refreshSummary() calls
in the file).
In `@frontend/src/features/mailbox/pages/MailboxPage.vue`:
- Around line 162-200: The item card currently renders as an outer <button> in
MailboxPage.vue while the inner action uses the Button component (which produces
another <button>), causing invalid nested buttons; change the outer clickable
item (the element handling openItem) to a non-button element (e.g., a div or
article) with the same classes and click handler (keep openItem call on `@click`
and ensure keyboard accessibility by adding tabindex="0" and an `@keydown.enter`
handler to call openItem), keep the inner markItemRead Button as-is with
`@click.stop` to prevent propagation, and ensure any aria/role attributes are
added to the outer element for accessibility.
- Around line 72-77: The openItem function currently awaits
mailboxStore.markRead(item.id) which can throw and prevent navigation; wrap the
markRead call in a try/catch (inside openItem) so any error is
caught/logged/ignored and then always call await router.push(item.action_path)
afterwards; reference the openItem function and mailboxStore.markRead to locate
the change and ensure router.push still runs even if markRead fails.
In `@frontend/src/features/permissions/pages/PermissionManagementPage.vue`:
- Around line 622-635: The watcher for route.query.requestId in
PermissionManagementPage.vue currently only closes the detail dialog when the
value is missing; update the invalid-id branch so that whenever requestId is not
a positive integer (e.g., non-numeric "abc"), if isDetailDialogOpen.value is
true then set isDetailDialogOpen.value = false and selectedRequestDetail.value =
null; modify the block inside the Number.isInteger(requestId) || requestId <= 0
check to always clear the dialog (not just when !value), keeping the existing
guards around canViewPermissionManagement.value.
In `@frontend/src/features/workflows/pages/WorkflowDetailPage.vue`:
- Around line 315-318: The Promise.all that awaits refreshSelectedWorkflow() and
mailboxStore.refreshSummary() causes mailboxStore.refreshSummary rejections to
surface as overall failure even though the review/execute mutation already
succeeded; change the pattern so mailbox refresh errors don't mask success by
using Promise.allSettled or by awaiting refreshSelectedWorkflow() separately and
calling mailboxStore.refreshSummary().catch(...) (log the error with
mailboxStore or console) so only the workflow refresh failure will block UI
success handling; update both occurrences that call
Promise.all([refreshSelectedWorkflow(), mailboxStore.refreshSummary()])
accordingly.
In `@frontend/src/stores/mailbox.ts`:
- Around line 130-147: When marking items read in markRead and markAllRead, if
listFilters.state === 'unread' the mutated items should be removed from the
current itemsPage (and itemsPage.count updated) or the page should be re-fetched
so the unread filter no longer shows now-read items; update markRead (after
updatePageItem) and markAllRead (after updating results) to either call the
existing page fetch/refresh routine or filter out items whose is_unread became
false and decrement itemsPage.count accordingly, and still call
refreshSummary(); reference markRead, markAllRead, listFilters.state,
itemsPage.count, updatePageItem, markAllMailboxItemsRead, and refreshSummary to
locate the places to change.
In `@sql/archiver.py`:
- Around line 64-67: The dedupe suffix function _archive_mailbox_dedupe_suffix
currently returns archive_info.last_archive_time or "archive-{id}", which causes
repeated failures before _record_archive_log updates last_archive_time to share
the same dedupe key; change the usage and signature to accept a per-attempt
identifier (e.g., callback_timestamp or attempt_id) or a task.success flag and
callback_time so that when task.success is False and
archive_info.last_archive_time is missing you generate a unique per-attempt
suffix (use callback_time.strftime(...) or the attempt_id) instead of reusing
"archive-{id}"; adjust callers that build dedupe keys to pass the attempt
identifier and update references to _archive_mailbox_dedupe_suffix and related
calls so failed runs always produce distinct dedupe values.
In `@sql/mailbox.py`:
- Around line 373-411: Make dedupe_suffix required on
emit_execution_finished_notifications by removing the default None (def
emit_execution_finished_notifications(source, outcome, actor=None,
dedupe_suffix):), delete the fallback timezone.now().strftime(...) generation
and use the provided dedupe_suffix directly when building the dedupe_key (the
code that composes
f"{MailboxCategory.EXECUTION_FINISHED}:{_source_type_for(source)}:{_source_id_for(source)}:{suffix}"
should use dedupe_suffix). Update all callers (execute_sql.py, archiver.py,
api_workflows/views.py) to pass their deterministic suffixes (finish_time / last
execution time) into emit_execution_finished_notifications so deduplication
remains deterministic; leave other behavior (metadata/outcome,
MailboxItem.update_or_create,
_source_type_for/_source_id_for/_source_title/_action_path_for) unchanged.
In `@sql/utils/execute_sql.py`:
- Around line 33-36: The calls to resolve_mailbox_items when loading or after
executing a SqlWorkflow (SqlWorkflow.objects.select_related("instance").get(...)
and resolve_mailbox_items(..., category="execution_needed")) must be made
best-effort so mailbox helper failures do not abort SQL execution or crash
post-persist callbacks; wrap each resolve_mailbox_items invocation (both the
pre-execution lookup and the post-execution callback) in a try/except that
catches Exception, logs the error with context (include workflow id and
category) via the module logger, and continues—do not re-raise; alternatively
schedule the mailbox call to run asynchronously/fire-and-forget after persisting
results, but ensure failures are only logged and do not affect SqlWorkflow
execution flow.
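One way to avoid repeating the try/except at both call sites is a small best-effort wrapper; `best_effort` is a name invented here, and the `resolve_mailbox_items(workflow, category="execution_needed")` call it would guard comes from the prompt above.

```python
import functools
import logging

logger = logging.getLogger(__name__)


def best_effort(fn):
    """Call fn, but swallow and log any exception, returning None on failure."""

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception:
            logger.exception(
                "Best-effort call %s(args=%r, kwargs=%r) failed", fn.__name__, args, kwargs
            )
            return None

    return wrapper
```

Usage would look like `best_effort(resolve_mailbox_items)(workflow, category="execution_needed")`, keeping the workflow id and category in the logged context.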
---
Outside diff comments:
In `@api_access/views.py`:
- Around line 485-503: The mailbox sync and callback run after the audit
creation commits, so move the calls to _permission_request_audit_callback,
sync_approval_notifications, and the async_task(...) scheduling for
notify_for_audit inside the existing try/with transaction.atomic() that wraps
auditor.create_audit() (the get_auditor/auditor.create_audit block) so any
failure during callback or mailbox sync causes the transaction to roll back;
update the except AuditException handler (or catch broader exceptions from the
sync/callback) to re-raise serializers.ValidationError as before so the API
returns an error instead of leaving a partially-created request/audit.
---
Nitpick comments:
In `@sql/mailbox.py`:
- Around line 137-168: The loop in _current_reviewers calls
_is_self_audit_blocked for every user which recreates SysConfig() and reads
SysConfig().get("ban_self_audit") repeatedly; hoist the flag once at the start
of _current_reviewers (e.g. ban_self_audit = SysConfig().get("ban_self_audit"))
and pass that boolean into _is_self_audit_blocked (or replace the call with a
local check using the hoisted value), updating _is_self_audit_blocked's
signature/uses accordingly so backfill_mailbox_notifications and other callers
still behave correctly; ensure references to SqlWorkflow and resource/group id
checks remain unchanged.
- Around line 453-485: The backfill is reloading already select_related objects
via _reload_source causing extra queries; modify sync_approval_notifications and
sync_execution_needed_notifications to accept a reload/skip_reload boolean
(default True) and only call _reload_source when reload is True, and change
_reload_source to be used exclusively when reload is requested; then update
backfill_mailbox_notifications to call the sync_* functions with reload=False
(or skip_reload=True) so the pre-fetched SqlWorkflow, ArchiveConfig, and
PermissionRequest objects are used directly without re-querying.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9b24771c-49b3-41b3-a516-3da266abb73d
📒 Files selected for processing (36)
api_access/views.py
api_archives/views.py
api_instances/serializers.py
api_mailbox/__init__.py
api_mailbox/apps.py
api_mailbox/management/__init__.py
api_mailbox/management/commands/__init__.py
api_mailbox/management/commands/backfill_mailbox.py
api_mailbox/serializers.py
api_mailbox/tests.py
api_mailbox/urls.py
api_mailbox/views.py
api_workflows/views.py
archery/settings.py
frontend/src/app/AppShell.vue
frontend/src/app/feature-registry.test.ts
frontend/src/app/feature-registry.ts
frontend/src/features/archives/pages/ArchiveDetailPage.vue
frontend/src/features/mailbox/api.ts
frontend/src/features/mailbox/manifest.ts
frontend/src/features/mailbox/pages/MailboxPage.vue
frontend/src/features/permissions/pages/PermissionManagementPage.vue
frontend/src/features/workflows/pages/WorkflowDetailPage.vue
frontend/src/lib/api.ts
frontend/src/stores/mailbox.test.ts
frontend/src/stores/mailbox.ts
sql/admin.py
sql/archiver.py
sql/form.py
sql/mailbox.py
sql/migrations/0013_remove_aliyunrdsconfig_ak_and_more.py
sql/migrations/0014_mailboxitem.py
sql/models.py
sql/slowlog.py
sql/utils/execute_sql.py
sql_api/urls.py
```python
def emit_execution_finished_notifications(
    source, outcome, actor=None, dedupe_suffix=None
):
    source = _reload_source(source)
    recipients = _execution_finished_recipients(source, actor=actor)
    if not recipients:
        return []

    normalized_outcome = "success" if outcome == "success" else "failure"
    status_text = (
        "finished successfully"
        if normalized_outcome == "success"
        else "finished with errors"
    )
    metadata = _base_metadata_for(source)
    metadata["outcome"] = normalized_outcome
    suffix = dedupe_suffix or timezone.now().strftime("%Y%m%d%H%M%S%f")
    created_items = []
    for recipient in recipients:
        created_items.append(
            MailboxItem.objects.update_or_create(
                recipient=recipient,
                dedupe_key=(
                    f"{MailboxCategory.EXECUTION_FINISHED}:"
                    f"{_source_type_for(source)}:{_source_id_for(source)}:{suffix}"
                ),
                defaults={
                    "category": MailboxCategory.EXECUTION_FINISHED,
                    "source_type": _source_type_for(source),
                    "source_id": _source_id_for(source),
                    "title": f"Execution finished: {_source_title(source)}",
                    "body": f"This execution {status_text}.",
                    "action_path": _action_path_for(source),
                    "metadata": metadata,
                    "resolved_at": None,
                },
            )[0]
        )
    return created_items
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Find every caller of emit_execution_finished_notifications and check whether they pass dedupe_suffix.
rg -nP -C2 'emit_execution_finished_notifications\s*\('
```

Repository: jruszo/Datamingle
Length of output: 3437

🏁 Script executed:

```bash
# Check if any caller passes dedupe_suffix parameter
rg -n 'emit_execution_finished_notifications\s*\([^)]*dedupe_suffix' --type py
```

Repository: jruszo/Datamingle
Length of output: 43

🏁 Script executed:

```bash
# Get more context around the test cases to see if they pass dedupe_suffix
sed -n '210,235p' api_mailbox/tests.py
```

Repository: jruszo/Datamingle
Length of output: 1004

🏁 Script executed:

```bash
# Check for retry logic in execute_sql.py around the call
sed -n '140,165p' sql/utils/execute_sql.py
```

Repository: jruszo/Datamingle
Length of output: 980

🏁 Script executed:

```bash
# Check for retry logic in archiver.py around the calls
sed -n '505,580p' sql/archiver.py
```

Repository: jruszo/Datamingle
Length of output: 2760

🏁 Script executed:

```bash
# Find where mailbox_suffix is defined in archiver.py
rg -n 'mailbox_suffix' sql/archiver.py | head -20
```

Repository: jruszo/Datamingle
Length of output: 356

🏁 Script executed:

```bash
# Check the api_workflows/views.py call context
sed -n '1755,1775p' api_workflows/views.py
```

Repository: jruszo/Datamingle
Length of output: 910

🏁 Script executed:

```bash
# Search more broadly for all calls with dedupe_suffix to get a complete picture
rg -B2 -A2 'dedupe_suffix=' --type py
```

Repository: jruszo/Datamingle
Length of output: 2373

🏁 Script executed:

```bash
# Check Black formatting of the function in sql/mailbox.py
python3 << 'EOF'
import subprocess
result = subprocess.run(['black', '--check', 'sql/mailbox.py'], capture_output=True, text=True)
print("Exit code:", result.returncode)
print("stdout:", result.stdout)
print("stderr:", result.stderr)
EOF
```

Repository: jruszo/Datamingle
Length of output: 614

🏁 Script executed:

```bash
# Find the definition of _archive_mailbox_dedupe_suffix
rg -n '_archive_mailbox_dedupe_suffix' sql/archiver.py
```

Repository: jruszo/Datamingle
Length of output: 183

🏁 Script executed:

```bash
# Get the function implementation
rg -A10 'def _archive_mailbox_dedupe_suffix' sql/archiver.py
```

Repository: jruszo/Datamingle
Length of output: 406

🏁 Script executed:

```bash
# Final comprehensive check: find ALL calls to emit_execution_finished_notifications
rg -c 'emit_execution_finished_notifications' --type py
```

Repository: jruszo/Datamingle
Length of output: 170

🏁 Script executed:

```bash
# Check the function for Black formatting manually
sed -n '373,411p' sql/mailbox.py
```

Repository: jruszo/Datamingle
Length of output: 1608
All production callers already pass explicit deterministic dedupe_suffix; consider making this parameter required instead of optional.
The default timezone.now().strftime("%Y%m%d%H%M%S%f") creates a unique key on every invocation and would disable deduplication. However, all current callers in execute_sql.py, archiver.py, and api_workflows/views.py explicitly pass deterministic suffixes (based on finish_time or last execution time). The optional parameter is unused in production and represents a code smell—future maintainers might accidentally omit it. Either make dedupe_suffix required, or add a comment explaining when/why the default fallback should be used.
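The deterministic alternative is to derive the suffix from the execution's finish time, so a re-emit for the same execution hits the same dedupe key. A sketch using the same `%Y%m%d%H%M%S%f` format (the category/source values in the example key are illustrative):

```python
import datetime


def dedupe_suffix_for(finish_time):
    """Same finish_time -> same suffix, so retries dedupe instead of duplicating."""
    return finish_time.strftime("%Y%m%d%H%M%S%f")


finish = datetime.datetime(2024, 5, 1, 12, 30, 45, 123456)
# Illustrative dedupe key for a hypothetical workflow id 42:
key = f"execution_finished:sqlworkflow:42:{dedupe_suffix_for(finish)}"
```

Calling the emitter twice for the same finished execution then targets the same `dedupe_key`, and `update_or_create` updates rather than duplicates.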
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sql/mailbox.py` around lines 373 - 411, Make dedupe_suffix required on
emit_execution_finished_notifications by removing the default None (def
emit_execution_finished_notifications(source, outcome, actor=None,
dedupe_suffix):), delete the fallback timezone.now().strftime(...) generation
and use the provided dedupe_suffix directly when building the dedupe_key (the
code that composes
f"{MailboxCategory.EXECUTION_FINISHED}:{_source_type_for(source)}:{_source_id_for(source)}:{suffix}"
should use dedupe_suffix). Update all callers (execute_sql.py, archiver.py,
api_workflows/views.py) to pass their deterministic suffixes (finish_time / last
execution time) into emit_execution_finished_notifications so deduplication
remains deterministic; leave other behavior (metadata/outcome,
MailboxItem.update_or_create,
_source_type_for/_source_id_for/_source_title/_action_path_for) unchanged.
Actionable comments posted: 4
♻️ Duplicate comments (1)
sql/archiver.py (1)
64-69: ⚠️ Potential issue | 🟡 Minor

Don't reuse stale `last_archive_time` for failed callbacks.

When a task fails before a new archive log updates `last_archive_time`, Line 65 can reuse the previous run's timestamp. That still dedupes distinct failures together; prefer the callback time for failed tasks.

Proposed fix
```diff
-def _archive_mailbox_dedupe_suffix(archive_info, callback_time=None):
-    if archive_info.last_archive_time:
+def _archive_mailbox_dedupe_suffix(
+    archive_info, callback_time=None, prefer_last_archive_time=True
+):
+    if prefer_last_archive_time and archive_info.last_archive_time:
         return archive_info.last_archive_time.strftime("%Y%m%d%H%M%S%f")
     if callback_time:
         return callback_time.strftime("%Y%m%d%H%M%S%f")
-    return datetime.datetime.utcnow().strftime("%Y%m%d%H%M%S%f")
+    return timezone.now().strftime("%Y%m%d%H%M%S%f")
```

```diff
     mailbox_suffix = _archive_mailbox_dedupe_suffix(
         archive_info,
         callback_time=getattr(task, "stopped", None),
+        prefer_last_archive_time=task.success,
     )
```

Also applies to: 505-509
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sql/archiver.py` around lines 64 - 69, The function _archive_mailbox_dedupe_suffix currently prefers archive_info.last_archive_time and can reuse a stale timestamp for failed callbacks; change the selection order so that if a callback_time is provided it is used first, otherwise fall back to archive_info.last_archive_time, and finally to utcnow; update the analogous logic in the other duplicate location (the block around the second occurrence) to match this behavior; keep the same formatting string ("%Y%m%d%H%M%S%f") and preserve the function signature (archive_info, callback_time=None).
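The reordered fallback chain from the prompt, as a standalone sketch (`archive_info` is assumed to expose `last_archive_time`; the format string is unchanged):

```python
import datetime

FMT = "%Y%m%d%H%M%S%f"


def archive_mailbox_dedupe_suffix(archive_info, callback_time=None):
    """Prefer the per-attempt callback time, then the last archive time, then now."""
    if callback_time:
        return callback_time.strftime(FMT)
    if archive_info.last_archive_time:
        return archive_info.last_archive_time.strftime(FMT)
    return datetime.datetime.now(datetime.timezone.utc).strftime(FMT)
```

With callback time first, each failed attempt carries its own timestamp and distinct failures no longer collapse into one dedupe key.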
🧹 Nitpick comments (1)
sql/storage.py (1)
8-13: Optional import guard looks correct; consider a debug log for discoverability.

The narrow `ModuleNotFoundError` check on `exc.name == "paramiko"` correctly avoids masking unrelated import failures inside `storages.backends.sftpstorage`. Nit: emitting a `logger.debug`/`logger.info` when falling back to `SFTPStorage = None` would make misconfigurations easier to diagnose, since the failure will otherwise only surface later via the `ImportError` in `_init_sftp_storage`. Note that `logger` is defined below this block, so any logging would need to be reordered or deferred.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sql/storage.py` around lines 8 - 13, The optional import fallback for SFTPStorage should log when paramiko is missing to aid debugging: move or create the module logger before the try/except (or obtain a temporary logger via logging.getLogger) and inside the except branch where you set SFTPStorage = None call logger.debug/info with the exception details and context; ensure you still only swallow ModuleNotFoundError for exc.name == "paramiko" and keep _init_sftp_storage’s existing ImportError behavior unchanged.
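The guard-plus-logging shape, sketched with a deliberately nonexistent module name (`hypothetical_missing_dep`) so it runs without paramiko installed; in `sql/storage.py` the real import target and the `exc.name == "paramiko"` check stay as they are:

```python
import logging

logger = logging.getLogger(__name__)

try:
    # Stand-in for `from storages.backends.sftpstorage import SFTPStorage`,
    # which fails with ModuleNotFoundError(name="paramiko") when paramiko is absent.
    import hypothetical_missing_dep  # hypothetical: always missing here
    SFTPStorage = object  # would be the real class
except ModuleNotFoundError as exc:
    if exc.name != "hypothetical_missing_dep":
        raise  # unrelated import failure: don't mask it
    logger.debug("Optional dependency %s missing; SFTPStorage disabled", exc.name)
    SFTPStorage = None
```

The `logger.debug` line is the only addition over the reviewed code; everything else mirrors the existing guard.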
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api_access/views.py`:
- Around line 492-503: The mailbox sync (sync_approval_notifications) and the
async_task registration must be moved out of the DB transaction so failures
don't roll back valid workflow mutations; extract these calls into a post-commit
best-effort helper (eg. enqueue_post_commit_mailbox_sync) and call that helper
inside transaction.on_commit so the DB commit happens first, and ensure the
helper wraps sync_approval_notifications and notify_for_audit scheduling in a
try/except that logs errors but does not raise; update both places that
currently call sync_approval_notifications and register async_task (the block
around _permission_request_audit_callback and the similar block at 584-588) to
call the new post-commit helper instead.
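A simulation of that post-commit helper: `on_commit`/`run_commit` below are a plain-list stand-in for Django's `transaction.on_commit` machinery, and `enqueue_post_commit_mailbox_sync` is the hypothetical name from the prompt.

```python
import logging

logger = logging.getLogger(__name__)
_on_commit_callbacks = []  # stand-in for Django's per-transaction callback list


def on_commit(cb):
    _on_commit_callbacks.append(cb)


def run_commit():
    """Simulate the transaction committing: fire the registered callbacks."""
    while _on_commit_callbacks:
        _on_commit_callbacks.pop(0)()


def enqueue_post_commit_mailbox_sync(workflow, sync_fn):
    def _sync():
        try:
            sync_fn(workflow)  # best-effort: never raise out of the callback
        except Exception:
            logger.exception("Post-commit mailbox sync failed for %r", workflow)

    on_commit(_sync)
```

Nothing runs until the commit fires, and a failing sync is logged without rolling back the already-committed workflow mutation.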
In `@api_mailbox/views.py`:
- Around line 20-23: The extend_schema annotations currently document raw
serializers (e.g., MailboxSummarySerializer) but the views actually return a
success_response wrapper {"detail": str, "data": ...}; update each
extend_schema(...) responses value to reflect that wrapper by using either a
small wrapper serializer (e.g., SuccessResponseSerializer with fields detail:
CharField and data: <original serializer>) or an OpenAPI Response object that
describes {"detail": string, "data": <original>}; for the list endpoint, make
the data field a pagination wrapper matching {"count", "next", "previous",
"results": [<item serializer>]} instead of the raw results array. Locate and
change the extend_schema decorators (the ones referencing
MailboxSummarySerializer and the three other annotated endpoints) so generated
OpenAPI matches the actual response shape.
In `@frontend/src/app/AppShell.vue`:
- Around line 147-151: toggleMailboxMenu currently awaits
mailboxStore.refreshSummary which can reject and produce unhandled promise
rejections; make the refresh best-effort by calling or awaiting
mailboxStore.refreshSummary inside a try/catch (or attach .catch) and swallow or
log the error so dropdown open/route watcher logic (the calls to
mailboxStore.refreshSummary from toggleMailboxMenu and the route watcher) never
propagates rejections to the UI/event loop. Ensure you update both the
toggleMailboxMenu function and the route watcher calls to
mailboxStore.refreshSummary to handle errors gracefully.
In `@frontend/src/features/mailbox/pages/MailboxPage.vue`:
- Around line 170-178: The interactive card (<article ... role="button">)
currently only responds to Enter; update the keyboard handler to also handle
Space: add a `@keydown.space.prevent` that calls the same openItem(item) method
(same handler used by `@keydown.enter.prevent` and `@click`) so Space activates the
card like a native button and keyboard users get equivalent behavior.
---
Duplicate comments:
In `@sql/archiver.py`:
- Around line 64-69: The function _archive_mailbox_dedupe_suffix currently
prefers archive_info.last_archive_time and can reuse a stale timestamp for
failed callbacks; change the selection order so that if a callback_time is
provided it is used first, otherwise fall back to
archive_info.last_archive_time, and finally to utcnow; update the analogous
logic in the other duplicate location (the block around the second occurrence)
to match this behavior; keep the same formatting string ("%Y%m%d%H%M%S%f") and
preserve the function signature (archive_info, callback_time=None).
---
Nitpick comments:
In `@sql/storage.py`:
- Around line 8-13: The optional import fallback for SFTPStorage should log when
paramiko is missing to aid debugging: move or create the module logger before
the try/except (or obtain a temporary logger via logging.getLogger) and inside
the except branch where you set SFTPStorage = None call logger.debug/info with
the exception details and context; ensure you still only swallow
ModuleNotFoundError for exc.name == "paramiko" and keep _init_sftp_storage’s
existing ImportError behavior unchanged.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8f39415b-0bb6-42b3-8182-a9e0374d24f3
📒 Files selected for processing (15)
- api_access/views.py
- api_archives/views.py
- api_mailbox/tests.py
- api_mailbox/views.py
- frontend/src/app/AppShell.vue
- frontend/src/features/archives/pages/ArchiveDetailPage.vue
- frontend/src/features/mailbox/pages/MailboxPage.vue
- frontend/src/features/permissions/pages/PermissionManagementPage.vue
- frontend/src/features/workflows/pages/WorkflowDetailPage.vue
- frontend/src/stores/mailbox.test.ts
- frontend/src/stores/mailbox.ts
- sql/archiver.py
- sql/mailbox.py
- sql/storage.py
- sql/utils/execute_sql.py
🚧 Files skipped from review as they are similar to previous changes (5)
- frontend/src/features/workflows/pages/WorkflowDetailPage.vue
- sql/utils/execute_sql.py
- frontend/src/features/permissions/pages/PermissionManagementPage.vue
- frontend/src/stores/mailbox.ts
- sql/mailbox.py
```python
_permission_request_audit_callback(
    auditor.workflow.request_id, auditor.audit.current_status
)
sync_approval_notifications(auditor.workflow)
transaction.on_commit(
    lambda workflow_audit=auditor.audit, request_id=auditor.workflow.request_id: async_task(
        notify_for_audit,
        workflow_audit=workflow_audit,
        timeout=60,
        task_name=f"permission-request-{request_id}",
    )
)
```
Keep mailbox sync from rolling back permission actions.
Line 495 and Line 587 run mailbox sync inside the approval transactions. If sync_approval_notifications() raises, creating or reviewing the permission request fails even though the core workflow mutation was valid. Move this to a post-commit best-effort helper.
Proposed fix

```diff
 logger = logging.getLogger("default")

+def _sync_permission_approval_notifications_safe(workflow):
+    try:
+        sync_approval_notifications(workflow)
+    except Exception:
+        logger.exception(
+            "Permission request mailbox sync failed for request_id=%s",
+            getattr(workflow, "request_id", None),
+        )
+
+
 def _require_permission(request, permission):
@@
     try:
         with transaction.atomic():
             auditor.create_audit()
             _permission_request_audit_callback(
                 auditor.workflow.request_id, auditor.audit.current_status
             )
-            sync_approval_notifications(auditor.workflow)
             transaction.on_commit(
                 lambda workflow_audit=auditor.audit, request_id=auditor.workflow.request_id: async_task(
                     notify_for_audit,
                     workflow_audit=workflow_audit,
                     timeout=60,
@@
         raise serializers.ValidationError(
             {"errors": "Failed to create approval flow, please contact admin."}
         ) from exc
+    _sync_permission_approval_notifications_safe(auditor.workflow)
     return success_response(
@@
         _permission_request_audit_callback(
             auditor.audit.workflow_id, auditor.audit.current_status
         )
-        sync_approval_notifications(auditor.workflow)
+
+        _sync_permission_approval_notifications_safe(auditor.workflow)
         async_task(
```

Also applies to: 584-588
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api_access/views.py` around lines 492 - 503, The mailbox sync
(sync_approval_notifications) and the async_task registration must be moved out
of the DB transaction so failures don't roll back valid workflow mutations;
extract these calls into a post-commit best-effort helper (eg.
enqueue_post_commit_mailbox_sync) and call that helper inside
transaction.on_commit so the DB commit happens first, and ensure the helper
wraps sync_approval_notifications and notify_for_audit scheduling in a
try/except that logs errors but does not raise; update both places that
currently call sync_approval_notifications and register async_task (the block
around _permission_request_audit_callback and the similar block at 584-588) to
call the new post-commit helper instead.
```python
@extend_schema(
    summary="Mailbox Summary",
    responses={200: MailboxSummarySerializer},
    description="Return unread count and recent preview items for the authenticated user.",
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n api_mailbox/views.py | head -150
```
Repository: jruszo/Datamingle
Length of output: 5440
🏁 Script executed:
```shell
find . -name "response.py" -path "*/api_core/*" -type f
```
Repository: jruszo/Datamingle
Length of output: 83
🏁 Script executed:
```shell
cat -n ./api_core/response.py
```
Repository: jruszo/Datamingle
Length of output: 388
🏁 Script executed:
```shell
find . -name "pagination.py" -path "*/api_core/*" -type f
```
Repository: jruszo/Datamingle
Length of output: 85
🏁 Script executed:
```shell
cat -n ./api_core/pagination.py
```
Repository: jruszo/Datamingle
Length of output: 1151
Update OpenAPI response schemas to match the success_response wrapper structure.
All four annotated endpoints document raw serializers, but the views return {"detail": "...", "data": ...}. The list endpoint additionally wraps pagination metadata under data as {"count": ..., "next": ..., "previous": ..., "results": [...]}. Generated clients will expect the wrong response shape unless the schemas are updated to reflect this wrapper.
Affects lines 20-23, 45-67, 103-106, 117-120.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api_mailbox/views.py` around lines 20 - 23, The extend_schema annotations
currently document raw serializers (e.g., MailboxSummarySerializer) but the
views actually return a success_response wrapper {"detail": str, "data": ...};
update each extend_schema(...) responses value to reflect that wrapper by using
either a small wrapper serializer (e.g., SuccessResponseSerializer with fields
detail: CharField and data: <original serializer>) or an OpenAPI Response object
that describes {"detail": string, "data": <original>}; for the list endpoint,
make the data field a pagination wrapper matching {"count", "next", "previous",
"results": [<item serializer>]} instead of the raw results array. Locate and
change the extend_schema decorators (the ones referencing
MailboxSummarySerializer and the three other annotated endpoints) so generated
OpenAPI matches the actual response shape.
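To make the expected shape concrete, here is a framework-free sketch of the envelope this finding describes; the field names (`detail`, `data`, and the pagination keys) come from the review above, while the dataclass names are invented for illustration:

```python
from dataclasses import dataclass, asdict
from typing import Any, Optional


@dataclass
class SuccessEnvelope:
    """Mirrors the success_response wrapper: {"detail": str, "data": ...}."""

    detail: str
    data: Any


@dataclass
class PaginatedData:
    """Pagination object nested under "data" on list endpoints."""

    count: int
    results: list
    next: Optional[str] = None
    previous: Optional[str] = None


# Both endpoint families share the same outer envelope; only "data" differs.
summary = asdict(SuccessEnvelope(detail="ok", data={"unread_count": 3}))
listing = asdict(
    SuccessEnvelope(detail="ok", data=asdict(PaginatedData(count=1, results=[{"id": 7}])))
)
```

A wrapper serializer documenting this nesting would let generated clients type `data` correctly instead of expecting the raw serializer at the top level.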
```typescript
async function toggleMailboxMenu() {
  isMailboxMenuOpen.value = !isMailboxMenuOpen.value
  if (isMailboxMenuOpen.value) {
    await mailboxStore.refreshSummary()
  }
```
Make header mailbox refresh best-effort.
refreshSummary() can reject on transient API/auth failures. Line 150 and Line 244 should not create unhandled promise rejections from a header dropdown or route watcher.
Proposed fix
```diff
+async function refreshMailboxSummaryBestEffort() {
+  try {
+    await mailboxStore.refreshSummary()
+  } catch (error) {
+    console.error('Failed to refresh mailbox summary.', error)
+  }
+}
+
 async function toggleMailboxMenu() {
   isMailboxMenuOpen.value = !isMailboxMenuOpen.value
   if (isMailboxMenuOpen.value) {
-    await mailboxStore.refreshSummary()
+    await refreshMailboxSummaryBestEffort()
   }
 }
@@
   if (
     previousPath?.startsWith('/workflows/')
     || previousPath?.startsWith('/archives/')
     || previousPath?.startsWith('/permission-management')
   ) {
-    void mailboxStore.refreshSummary()
+    void refreshMailboxSummaryBestEffort()
   }
```

Also applies to: 239-245
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/app/AppShell.vue` around lines 147 - 151, toggleMailboxMenu
currently awaits mailboxStore.refreshSummary which can reject and produce
unhandled promise rejections; make the refresh best-effort by calling or
awaiting mailboxStore.refreshSummary inside a try/catch (or attach .catch) and
swallow or log the error so dropdown open/route watcher logic (the calls to
mailboxStore.refreshSummary from toggleMailboxMenu and the route watcher) never
propagates rejections to the UI/event loop. Ensure you update both the
toggleMailboxMenu function and the route watcher calls to
mailboxStore.refreshSummary to handle errors gracefully.
Actionable comments posted: 1
♻️ Duplicate comments (2)
frontend/src/app/AppShell.vue (1)
147-151: ⚠️ Potential issue | 🟡 Minor

Make mailbox summary refresh best-effort.

`refreshSummary()` can reject on transient API/auth failures. The dropdown toggle currently propagates the rejection, and the route watcher can still create an unhandled rejection despite `void`.

Proposed fix

```diff
+async function refreshMailboxSummaryBestEffort() {
+  try {
+    await mailboxStore.refreshSummary()
+  } catch (error) {
+    console.error('Failed to refresh mailbox summary.', error)
+  }
+}
+
 async function toggleMailboxMenu() {
   isMailboxMenuOpen.value = !isMailboxMenuOpen.value
   if (isMailboxMenuOpen.value) {
-    await mailboxStore.refreshSummary()
+    await refreshMailboxSummaryBestEffort()
   }
 }
 ...
-  void mailboxStore.refreshSummary()
+  void refreshMailboxSummaryBestEffort()
```

Also applies to: 239-245
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/app/AppShell.vue` around lines 147 - 151, the toggleMailboxMenu function currently awaits mailboxStore.refreshSummary and will propagate rejections; change it to call refreshSummary in a best-effort manner by handling errors locally (e.g., wrap the await in try/catch and either swallow or log the error via the logger/store instead of rethrowing) so UI toggling cannot produce unhandled promise rejections; apply the same pattern to the other place that calls mailboxStore.refreshSummary (the route watcher branch that currently awaits/voids refreshSummary) so both uses catch and handle transient API/auth errors locally.

api_mailbox/views.py (1)
20-23: ⚠️ Potential issue | 🟡 Minor

Update OpenAPI schemas to include the `success_response` envelope.

These decorators still document raw serializers, but the endpoints return wrapped payloads like `{"detail": ..., "data": ...}`. The list endpoint also returns paginated data under that wrapper, so generated clients will expect the wrong shape.

Suggested direction

```diff
-    responses={200: MailboxSummarySerializer},
+    responses={200: OpenApiTypes.OBJECT},
```

Prefer a small wrapper serializer/OpenAPI response object so `data` is typed as `MailboxSummarySerializer`, a pagination object, `MailboxItemSerializer`, or `{updated: number}` as appropriate.

Also applies to: 45-67, 103-106, 117-120
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_mailbox/views.py` around lines 20 - 23, The OpenAPI decorators (the `@extend_schema` usages that reference MailboxSummarySerializer, MailboxItemSerializer, pagination responses, etc.) must reflect the actual envelope returned (e.g., {"detail":..., "data":...}) so clients get the correct shape: replace raw serializer responses with a small wrapper response schema that sets "data" to the appropriate type (MailboxSummarySerializer for the summary endpoint, paginated MailboxItemSerializer for list endpoints, or an object like {updated: number} where used), and update the `@extend_schema` responses on the affected decorators (including the Mailbox summary decorator shown and the decorators covering lines 45–67, 103–106, 117–120) to reference that wrapper schema instead of the raw serializers.
🧹 Nitpick comments (1)
src/docker/startup.sh (1)
41-45: Optional: quote expansion and use array for cleaner arg handling.

Relying on word-splitting of an unquoted `${GUNICORN_RELOAD_ARGS}` works here because the value is either empty or a single token, but it's fragile if more flags are added later and trips ShellCheck (SC2086). Consider a bash array:

♻️ Proposed refactor

```diff
 echo Start services
-GUNICORN_RELOAD_ARGS=""
+GUNICORN_EXTRA_ARGS=()
 if [[ "${GUNICORN_RELOAD:-0}" == "1" ]]; then
-    GUNICORN_RELOAD_ARGS="--reload"
+    GUNICORN_EXTRA_ARGS+=("--reload")
 fi
-gunicorn -w 4 -b 127.0.0.1:8888 --timeout 600 ${GUNICORN_RELOAD_ARGS} archery.wsgi:application
+gunicorn -w 4 -b 127.0.0.1:8888 --timeout 600 "${GUNICORN_EXTRA_ARGS[@]}" archery.wsgi:application
```

Also note: with 4 workers + `--reload`, each worker watches the filesystem independently; fine for local-arm dev, but don't let this configuration leak into non-local compose files.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/docker/startup.sh` around lines 41 - 45, The startup script uses an unquoted GUNICORN_RELOAD_ARGS token in the gunicorn command which is fragile (SC2086); change to build a bash array (e.g., create an array like GUNICORN_ARGS or similar), conditionally append "--reload" to that array when GUNICORN_RELOAD is "1" (use the existing GUNICORN_RELOAD variable and GUNICORN_RELOAD_ARGS logic as the locator), and invoke gunicorn with the array expansion (e.g., run gunicorn ... "${GUNICORN_ARGS[@]}") so arguments are safely handled and future flags won’t break word-splitting; also ensure the reload flag is only enabled in local/dev compose files so the --reload behavior doesn’t leak into production.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api_workflows/views.py`:
- Around line 1044-1045: The mailbox helper calls (e.g.,
sync_approval_notifications, sync_execution_needed_notifications,
resolve_mailbox_items and the review/manual execution mailbox calls) must be
treated as best-effort so they cannot abort or leave the workflow in an
inconsistent persisted state; wrap each call in a try/except that catches all
exceptions, logs the error with context (including workflow id/status and the
exception), and does not re-raise so the persisted transition and subsequent
async_task(...) are not prevented; mirror the same wrapper pattern used by the
archive API helper wrappers and apply it to the calls referenced
(sync_approval_notifications, sync_execution_needed_notifications,
resolve_mailbox_items and the mailbox calls around review/manual execution).
---
Duplicate comments:
In `@api_mailbox/views.py`:
- Around line 20-23: The OpenAPI decorators (the `@extend_schema` usages that
reference MailboxSummarySerializer, MailboxItemSerializer, pagination responses,
etc.) must reflect the actual envelope returned (e.g., {"detail":...,
"data":...}) so clients get the correct shape: replace raw serializer responses
with a small wrapper response schema that sets "data" to the appropriate type
(MailboxSummarySerializer for the summary endpoint, paginated
MailboxItemSerializer for list endpoints, or an object like {updated: number}
where used), and update the `@extend_schema` responses on the affected decorators
(including the Mailbox summary decorator shown and the decorators covering lines
45–67, 103–106, 117–120) to reference that wrapper schema instead of the raw
serializers.
In `@frontend/src/app/AppShell.vue`:
- Around line 147-151: The toggleMailboxMenu function currently awaits
mailboxStore.refreshSummary and will propagate rejections; change it to call
refreshSummary in a best-effort manner by handling errors locally (e.g., wrap
the await in try/catch and either swallow or log the error via the logger/store
instead of rethrowing) so UI toggling cannot produce unhandled promise
rejections; apply the same pattern to the other place that calls
mailboxStore.refreshSummary (the route watcher branch that currently
awaits/voids refreshSummary) so both uses catch and handle transient API/auth
errors locally.
---
Nitpick comments:
In `@src/docker/startup.sh`:
- Around line 41-45: The startup script uses an unquoted GUNICORN_RELOAD_ARGS
token in the gunicorn command which is fragile (SC2086); change to build a bash
array (e.g., create an array like GUNICORN_ARGS or similar), conditionally
append "--reload" to that array when GUNICORN_RELOAD is "1" (use the existing
GUNICORN_RELOAD variable and GUNICORN_RELOAD_ARGS logic as the locator), and
invoke gunicorn with the array expansion (e.g., run gunicorn ...
"${GUNICORN_ARGS[@]}") so arguments are safely handled and future flags won’t
break word-splitting; also ensure the reload flag is only enabled in local/dev
compose files so the --reload behavior doesn’t leak into production.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 31db7268-6163-4e33-9a05-5d76440b62f4
📒 Files selected for processing (10)
- .gitignore
- api_archives/views.py
- api_mailbox/tests.py
- api_mailbox/views.py
- api_workflows/views.py
- frontend/src/app/AppShell.vue
- frontend/src/features/mailbox/pages/MailboxPage.vue
- frontend/tests/e2e/mailbox.spec.ts
- src/docker-compose/docker-compose.local-arm.yml
- src/docker/startup.sh
✅ Files skipped from review due to trivial changes (1)
- .gitignore
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/src/features/mailbox/pages/MailboxPage.vue
```python
sync_approval_notifications(workflow_content.workflow)
sync_execution_needed_notifications(workflow_content.workflow)
```
Isolate mailbox side effects from committed workflow transitions.
These mailbox helpers can raise, and they now run after state changes are persisted. The riskiest path is Line 1699: if resolve_mailbox_items() fails after workflow.status = "workflow_queuing" but before async_task(...), the workflow can be left queued without an execution task. Treat mailbox sync/resolve/emit as best-effort here, like the archive API wrappers do.
Suggested direction
+def _workflow_mailbox_call_safely(label, workflow, callback):
+ try:
+ return callback()
+ except Exception:
+ logger.exception(
+ "Workflow mailbox side effect failed for workflow_id=%s while %s",
+ getattr(workflow, "id", None),
+ label,
+ )
+ return None
+
...
- sync_approval_notifications(workflow_content.workflow)
- sync_execution_needed_notifications(workflow_content.workflow)
+ _workflow_mailbox_call_safely(
+ "syncing approval notifications",
+ workflow_content.workflow,
+ lambda: sync_approval_notifications(workflow_content.workflow),
+ )
+ _workflow_mailbox_call_safely(
+ "syncing execution-needed notifications",
+ workflow_content.workflow,
+ lambda: sync_execution_needed_notifications(workflow_content.workflow),
+ )
...
- resolve_mailbox_items(workflow, category="execution_needed")
+ _workflow_mailbox_call_safely(
+ "resolving execution-needed notifications",
+ workflow,
+ lambda: resolve_mailbox_items(
+ workflow,
+ category="execution_needed",
+ ),
+ )Apply the same wrapper to the review/manual execution mailbox calls.
Also applies to: 1595-1605, 1697-1700, 1736-1770
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api_workflows/views.py` around lines 1044 - 1045, The mailbox helper calls
(e.g., sync_approval_notifications, sync_execution_needed_notifications,
resolve_mailbox_items and the review/manual execution mailbox calls) must be
treated as best-effort so they cannot abort or leave the workflow in an
inconsistent persisted state; wrap each call in a try/except that catches all
exceptions, logs the error with context (including workflow id/status and the
exception), and does not re-raise so the persisted transition and subsequent
async_task(...) are not prevented; mirror the same wrapper pattern used by the
archive API helper wrappers and apply it to the calls referenced
(sync_approval_notifications, sync_execution_needed_notifications,
resolve_mailbox_items and the mailbox calls around review/manual execution).
Actionable comments posted: 1
♻️ Duplicate comments (1)
api_access/views.py (1)
500-513: ⚠️ Potential issue | 🟠 Major

Move mailbox sync and audit dispatch behind a post-commit best-effort helper.

Line 505 and Line 597 still call `sync_approval_notifications()` before commit, and that helper writes mailbox rows (sql/mailbox.py:310-337). A mailbox failure here can still roll back an otherwise valid permission request/review. The review path also queues `notify_for_audit` directly on Lines 599-605 instead of via `transaction.on_commit`, so an outer transaction can still expose uncommitted state. Please keep both side effects post-commit and swallow/log their failures there instead of coupling them to the approval write.

Also applies to: 585-605
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api_access/views.py` around lines 500 - 513, The mailbox sync and audit notification side-effects are running inside the DB transaction and can cause rollbacks; refactor so that calls to _sync_permission_request_approval_notifications (currently invoked around auditor.workflow) and the notify_for_audit enqueue are executed via a post-commit best-effort helper (use transaction.on_commit) that wraps the call, logs errors, and swallows failures instead of letting them propagate; update both the path that currently calls _sync_permission_request_approval_notifications (around auditor.workflow) and the code that directly queues notify_for_audit (the lambda/async_task usage) to use this post-commit helper, and ensure any references to auditor.audit / auditor.workflow.request_id are captured into the on_commit closure to avoid referencing uncommitted state.
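The post-commit, best-effort pattern this finding keeps requesting is small enough to sketch without Django; the helper name and logger name below are illustrative, not taken from the PR:

```python
import logging

logger = logging.getLogger("default")


def call_best_effort(label, callback):
    """Run a side-effect callback, logging failures instead of raising.

    Meant for work (mailbox sync, notification dispatch) that must never
    abort or roll back the primary write it follows.
    """
    try:
        return callback()
    except Exception:
        logger.exception("Best-effort side effect failed while %s", label)
        return None
```

Under Django this would typically be deferred with `transaction.on_commit(lambda: call_best_effort("syncing approval notifications", lambda: sync_approval_notifications(workflow)))`, so the sync only runs after the approval row is committed.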
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api_access/views.py`:
- Around line 514-517: The except block catching AuditException should capture
the exception (e.g., except AuditException as exc:) and re-raise the
serializers.ValidationError while chaining it to the original exception using
"from exc" so the original traceback is preserved; update the except handler
around the AuditException to raise serializers.ValidationError({"errors":
"Failed to create approval flow, please contact admin."}) from exc.
---
Duplicate comments:
In `@api_access/views.py`:
- Around line 500-513: The mailbox sync and audit notification side-effects are
running inside the DB transaction and can cause rollbacks; refactor so that
calls to _sync_permission_request_approval_notifications (currently invoked
around auditor.workflow) and the notify_for_audit enqueue are executed via a
post-commit best-effort helper (use transaction.on_commit) that wraps the call,
logs errors, and swallows failures instead of letting them propagate; update
both the path that currently calls
_sync_permission_request_approval_notifications (around auditor.workflow) and
the code that directly queues notify_for_audit (the lambda/async_task usage) to
use this post-commit helper, and ensure any references to auditor.audit /
auditor.workflow.request_id are captured into the on_commit closure to avoid
referencing uncommitted state.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 09ab7123-e354-487a-b031-b00e33d273d3
📒 Files selected for processing (4)
- api_access/views.py
- api_core/legacy_tests.py
- sql/archiver.py
- sql/local_demo.py
💤 Files with no reviewable changes (1)
- sql/local_demo.py
🚧 Files skipped from review as they are similar to previous changes (1)
- sql/archiver.py
```python
except AuditException:
    raise serializers.ValidationError(
        {"errors": "Failed to create approval flow, please contact admin."}
    )
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, locate and examine the file at the specified lines
fd -name "views.py" -path "*/api_access/*" | head -1
```
Repository: jruszo/Datamingle
Length of output: 292
🏁 Script executed:
```shell
# Check the file size and read the relevant section
if [ -f "api_access/views.py" ]; then
  wc -l api_access/views.py
  echo "---"
  sed -n '510,520p' api_access/views.py
fi
```
Repository: jruszo/Datamingle
Length of output: 540
🏁 Script executed:
```shell
# Verify Ruff B904 rule details via web search would be ideal,
# but let's also check if there's any documentation in the repo
rg "B904|exception chaining" --type py -A 2 -B 2 | head -20
```
Repository: jruszo/Datamingle
Length of output: 43
🌐 Web query:
Ruff B904 rule exception chaining
💡 Result:
The Ruff rule B904, named "raise-without-from-inside-except", detects raise statements inside exception handlers (except clauses) that lack a 'from' clause. It enforces explicit exception chaining using 'raise ... from exc' or suppressing chaining with 'raise ... from None'.

What it checks: flags bare 'raise NewException(...)' inside except blocks without specifying the cause via 'from'.

Why: without 'from', Python implicitly chains via context, but explicit 'from' sets the cause for clearer debugging and distinguishes intentional raises from handling errors. The traceback phrasing differs slightly ("During handling..." for implicit vs "direct cause" for explicit).

Example violation:

```python
try:
    ...
except FileNotFoundError:
    raise RuntimeError("...")
```

Fixed:

```python
try:
    ...
except FileNotFoundError as exc:
    raise RuntimeError("...") from exc  # chain
    # or `from None` to suppress
```

Note: some debate exists in Ruff issues about docs accuracy since implicit chaining occurs by default, but the rule promotes explicitness. TRY200 was a duplicate rule, removed in v0.2.0 in favor of B904. Rule added in Ruff v0.0.138.
Citations:
- 1: https://docs.astral.sh/ruff/rules/raise-without-from-inside-except/
- 2: Misleading documentation for B904: raise-without-from-inside-except astral-sh/ruff#22541
- 3: Misleading descriptions for raise-from exception chaining rules (B904 and TRY200) astral-sh/ruff#8736
- 4: https://docs.astral.sh/ruff/rules/reraise-no-cause/
- 5: https://docs.astral.sh/ruff/rules/
🏁 Script executed:
```shell
# Check for other except blocks raising exceptions without chaining in the file
rg "except.*:\s*raise" api_access/views.py -A 1 | head -30
```
Repository: jruszo/Datamingle
Length of output: 43
🏁 Script executed:
```shell
# Verify the specific indentation and style of the original code
sed -n '514,517p' api_access/views.py | cat -A
```
Repository: jruszo/Datamingle
Length of output: 241
Chain the caught AuditException explicitly.
This handler raises a new ValidationError without from exc / from None, which violates Ruff B904 (raise-without-from-inside-except) and makes the original failure source ambiguous during debugging.
Suggested fix
```diff
-    except AuditException:
+    except AuditException as exc:
         raise serializers.ValidationError(
             {"errors": "Failed to create approval flow, please contact admin."}
-        )
+        ) from exc
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-except AuditException:
+except AuditException as exc:
     raise serializers.ValidationError(
         {"errors": "Failed to create approval flow, please contact admin."}
-    )
+    ) from exc
```
🧰 Tools
🪛 Ruff (0.15.11)
[warning] 515-517: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api_access/views.py` around lines 514 - 517, The except block catching
AuditException should capture the exception (e.g., except AuditException as
exc:) and re-raise the serializers.ValidationError while chaining it to the
original exception using "from exc" so the original traceback is preserved;
update the except handler around the AuditException to raise
serializers.ValidationError({"errors": "Failed to create approval flow, please
contact admin."}) from exc.
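The difference B904 enforces can be observed directly on the exception attributes; a minimal standalone sketch, independent of the Django code above:

```python
def wrap_error(chain):
    try:
        raise KeyError("original failure")
    except KeyError as exc:
        if chain:
            # Explicit chaining: sets __cause__ and suppresses the implicit context note.
            raise RuntimeError("wrapped") from exc
        # Implicit chaining: only __context__ is set.
        raise RuntimeError("wrapped")


try:
    wrap_error(chain=True)
except RuntimeError as err:
    assert isinstance(err.__cause__, KeyError)
    assert err.__suppress_context__ is True

try:
    wrap_error(chain=False)
except RuntimeError as err:
    assert err.__cause__ is None
    assert isinstance(err.__context__, KeyError)
```

This is why the traceback reads "the direct cause of" with `from exc` and "During handling of the above exception" without it.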
Summary
Testing
Summary by CodeRabbit
New Features
Database
Tests