
feat: add audit log management features#53

Merged
maximedogawa merged 4 commits into main from 20-feature-add-logging-strategy
Apr 18, 2026
Conversation

@maximedogawa
Collaborator

@maximedogawa maximedogawa commented Apr 18, 2026

  • Introduced functionality for listing, reading, and deleting daily NDJSON audit files.
  • Implemented a new AuditLogPanel component for displaying and managing audit logs in the UI.
  • Enhanced API with new commands for handling audit logs, including audit_list_files, audit_read_file, and audit_delete_file.
  • Updated Tauri permissions to allow access to audit log commands.
  • Integrated custom logging with Winston in Vite for improved log management.
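The daily NDJSON ("newline-delimited JSON") layout described in the first bullet can be sketched as one JSON object appended per line to a per-day file. This is only an illustration: the field names and the manual string formatting are assumptions, not the PR's actual schema (the real Rust code would serialize with serde_json and escape properly).

```rust
// Minimal NDJSON append sketch: one JSON object per line, appended to a
// per-day file. Field names are illustrative, not the PR's actual schema.
use std::fs::OpenOptions;
use std::io::Write;

fn append_ndjson_line(path: &str, timestamp: &str, kind: &str, message: &str) -> std::io::Result<()> {
    // Escaping is omitted for brevity; a real implementation would use serde_json.
    let line = format!(
        "{{\"timestamp\":\"{}\",\"kind\":\"{}\",\"message\":\"{}\"}}\n",
        timestamp, kind, message
    );
    let mut f = OpenOptions::new().create(true).append(true).open(path)?;
    f.write_all(line.as_bytes())
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("audit-2026-04-18.log");
    let path = path.to_str().unwrap().to_string();
    append_ndjson_line(&path, "2026-04-18T12:00:00Z", "info", "bot started")?;
    append_ndjson_line(&path, "2026-04-18T12:00:01Z", "warn", "retrying")?;
    // Each call appended exactly one line.
    let contents = std::fs::read_to_string(&path)?;
    assert!(contents.lines().count() >= 2);
    std::fs::remove_file(&path)?;
    Ok(())
}
```

Each audit file is then trivially streamable and greppable: readers split on newlines and parse each line independently, so one corrupt line doesn't poison the whole file.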

Made-with: Cursor

Summary by CodeRabbit

  • New Features
    • Added Audit Log Panel to Dashboard—browse and manage saved audit logs, view their contents, and delete entries individually
    • Enhanced terminal preview with a live status indicator for better visibility during active sessions

@maximedogawa maximedogawa self-assigned this Apr 18, 2026
@maximedogawa maximedogawa linked an issue Apr 18, 2026 that may be closed by this pull request (3 tasks)
@coderabbitai

coderabbitai Bot commented Apr 18, 2026

Warning

Rate limit exceeded

@maximedogawa has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 29 minutes and 14 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 29 minutes and 14 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: befdaa05-1418-4a78-8ad3-b6c102e5ac13

📥 Commits

Reviewing files that changed from the base of the PR and between 6fdc4a4 and 9fe458c.

📒 Files selected for processing (12)
  • package.json
  • src-tauri/src/app.rs
  • src-tauri/src/infrastructure/audit_log.rs
  • src-tauri/src/infrastructure/http_server.rs
  • src-tauri/src/modules/bot/commands.rs
  • src-tauri/src/shared/state.rs
  • src/modules/bot/api/index.ts
  • src/modules/bot/components/AuditLogPanel.tsx
  • src/modules/bot/types.ts
  • src/pages/DashboardPage.tsx
  • vite.config.ts
  • vite/pengine-logger.ts
📝 Walkthrough

Walkthrough

This PR implements a complete audit logging feature for Pengine. It adds daily JSON-line file-based audit log storage, Tauri command handlers, HTTP API endpoints, and a React UI component for viewing and managing audit logs. The system integrates with the existing emit_log infrastructure to persist audit events to disk.

Changes

Cohort / File(s) Summary
Dependencies & Configuration
package.json, vite.config.ts
Added winston dependency; configured Vite to use custom logger via createPengineViteLogger().
Tauri Permissions & Capabilities
src-tauri/capabilities/default.json, src-tauri/permissions/audit-logs.toml
Added allow-audit-logs permission with allowed commands for audit file listing, reading, and deletion.
Backend Infrastructure: Audit Logs
src-tauri/src/infrastructure/audit_log.rs, src-tauri/src/infrastructure/mod.rs
New module implementing daily NDJSON audit file storage, parsing, and listing; functions for appending, reading, and deleting audit logs by date.
Backend Infrastructure: HTTP & App Wiring
src-tauri/src/infrastructure/http_server.rs, src-tauri/src/app.rs
Added HTTP routes (GET/DELETE /v1/logs/audit{,/{date}}) and registered Tauri command handlers for audit operations; centralized IO error-to-HTTP mapping.
Backend State & Commands
src-tauri/src/shared/state.rs, src-tauri/src/modules/bot/commands.rs
Modified AppState::emit_log to persist audit events; added three Tauri command wrappers for listing, reading, and deleting audit logs.
Frontend API & Utilities
src/modules/bot/api/index.ts, src/shared/api/config.ts
Added auditListFiles, auditReadFile, and auditDeleteFile Tauri invoke functions; refactored timeout handling for existing health/connection commands; added makeTimeoutSignal utility with feature detection for AbortSignal.timeout.
Frontend Components & Types
src/modules/bot/components/AuditLogPanel.tsx, src/modules/bot/components/logLineKindClass.ts, src/modules/bot/components/TerminalPreview.tsx, src/modules/bot/types.ts, src/modules/bot/index.ts
Added AuditLogPanel component with file listing, log reading, deletion, and NDJSON parsing; extracted logLineKindClass helper for kind-to-CSS mapping; updated exports and imports.
Frontend Pages
src/pages/DashboardPage.tsx
Added AuditLogPanel section below live terminal preview to display saved disk-based audit logs.
Development Logging
vite/pengine-logger.ts
New custom Vite logger using Winston, with TTY-aware formatting, deduplication, and [pengine:dev] prefixing.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Poem

🐰 Hop, hop—logs now saved to disk,
Each audit line a historical risk;
From Tauri commands to React's display,
We track the app's every working day!
With Winston's help and Winston's care,
The past is there, for all to share. 📋✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 43.59%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'feat: add audit log management features' clearly and concisely summarizes the main change: adding audit log management capabilities (listing, reading, and deleting files, a UI panel, API commands, and Tauri permissions).

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch 20-feature-add-logging-strategy

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src-tauri/src/shared/state.rs (1)

188-204: ⚠️ Potential issue | 🟠 Major

Disk I/O on every log call will serialize latency on hot paths — consider async delegation or a bounded write channel.

The current emit_log() implementation awaits append_line_json() inline, blocking the caller until file open + write completes. This affects tool selection metrics (p95, lines 180–184), cron loops, MCP, auth, and bot handlers. Consider either (a) spawning via tokio::spawn so the caller isn't delayed, or (b) moving disk writes to a dedicated background writer via a bounded mpsc channel. Fire-and-forget (swallowing errors with log::warn!) already matches your current error model.
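Option (b) above, a dedicated background writer fed by a bounded channel, can be sketched with std threads. This is a simplified illustration: the PR's async code would use a tokio mpsc channel and a spawned task instead, and the function name, path, and channel capacity here are all hypothetical.

```rust
// Sketch of a bounded background writer: the logging call pushes a line
// into a sync_channel and returns immediately; a dedicated thread owns
// the file handle and performs all disk I/O in order.
use std::fs::OpenOptions;
use std::io::Write;
use std::sync::mpsc::{sync_channel, SyncSender};
use std::thread;

fn spawn_audit_writer(path: String) -> (SyncSender<String>, thread::JoinHandle<()>) {
    // Bounded to 1024 entries: if the disk stalls, senders block (or could
    // use try_send and drop) instead of growing memory without limit.
    let (tx, rx) = sync_channel::<String>(1024);
    let handle = thread::spawn(move || {
        let mut file = OpenOptions::new()
            .create(true)
            .append(true)
            .open(&path)
            .expect("open audit file");
        // The loop ends when every sender has been dropped.
        for line in rx {
            // Errors would be logged-and-dropped in the real design.
            let _ = file.write_all(line.as_bytes());
        }
    });
    (tx, handle)
}

fn main() {
    let path = std::env::temp_dir().join("audit-writer-demo.log");
    let path_s = path.to_str().unwrap().to_string();
    let _ = std::fs::remove_file(&path);
    let (tx, handle) = spawn_audit_writer(path_s);
    tx.send("{\"message\":\"first\"}\n".to_string()).unwrap();
    tx.send("{\"message\":\"second\"}\n".to_string()).unwrap();
    drop(tx); // closing the channel lets the writer thread exit
    handle.join().unwrap();
    let contents = std::fs::read_to_string(&path).unwrap();
    assert_eq!(contents.lines().count(), 2);
    let _ = std::fs::remove_file(&path);
}
```

Because a single thread performs every write, lines can never interleave mid-record, which also addresses the O_APPEND atomicity concern raised later in this review.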

User-identifying data is now persisted to audit files without a visible rotation or retention policy.

app.rs:160 logs the bot username (format!("Resuming bot @{}…", meta.bot_username)). Audit files accumulate daily (audit-YYYY-MM-DD.log) in the logs directory alongside app data, with no explicit cleanup, rotation, or permission-setting visible in the code. Before release, confirm: (a) a retention/rotation policy exists (at least daily cleanup or 30-day rolling window), (b) audit files are written with restrictive permissions, (c) no bot tokens or API keys are interpolated into any emit_log call site.

One-shot error alerting avoids log spam on misconfiguration.

If the audit directory becomes unreadable or full, append_line_json() will log::warn!() on every single event, potentially flooding logs. A static AtomicBool guard to emit only the first warning would surface the misconfiguration without noise.
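The suggested AtomicBool guard is small enough to sketch in full; the function name and message text here are illustrative, not code from the PR.

```rust
// One-shot warning guard: only the first failure emits a warning, so a
// persistently broken audit directory doesn't flood the logs.
use std::sync::atomic::{AtomicBool, Ordering};

static AUDIT_WRITE_WARNED: AtomicBool = AtomicBool::new(false);

/// Returns true if this call actually emitted the warning.
fn warn_once(msg: &str) -> bool {
    // swap returns the previous value: false means we are the first caller.
    if !AUDIT_WRITE_WARNED.swap(true, Ordering::Relaxed) {
        eprintln!("audit log write failed (further errors suppressed): {}", msg);
        return true;
    }
    false
}

fn main() {
    assert!(warn_once("disk full"));   // first call warns
    assert!(!warn_once("disk full"));  // subsequent calls are silent
}
```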

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/shared/state.rs` around lines 188 - 204, emit_log currently
awaits crate::infrastructure::audit_log::append_line_json inline, which
serializes disk I/O on hot paths; change this to delegate writes off the caller
by either spawning a background task (tokio::spawn) that calls append_line_json
and swallows/logs errors, or preferably implement a dedicated bounded mpsc
writer (create a background task owned by the State that consumes a channel) and
have emit_log push entries into that channel (use self.log_tx or add a new
audit_tx) so emit_log returns immediately; additionally, modify append_line_json
usage to emit only the first warning on repeated failures using a static
AtomicBool guard, and ensure audit file creation respects restrictive
permissions and that rotation/retention (e.g., daily rotation or 30-day cleanup)
is implemented elsewhere before release.
🧹 Nitpick comments (9)
vite/pengine-logger.ts (2)

38-39: Inconsistent indentation.

Line 39 is prefixed with extra spaces (tab-like indent) while the surrounding lines use 2-space indentation. Prettier will likely reformat this — worth fixing now to keep the blame clean.

Fix
-        const repeatCount = process.stdout.rows - 2;
-               const blank = repeatCount > 0 ? "\n".repeat(repeatCount) : "";
+        const repeatCount = process.stdout.rows - 2;
+        const blank = repeatCount > 0 ? "\n".repeat(repeatCount) : "";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@vite/pengine-logger.ts` around lines 38 - 39, The indentation for the blank
assignment is inconsistent: align the indentation of the line that defines
"const blank = repeatCount > 0 ? \"\\n\".repeat(repeatCount) : \"\";" with the
surrounding 2-space style used for "const repeatCount = process.stdout.rows -
2;" so both use the same 2-space indentation; update the whitespace before the
"const blank" declaration in pengine-logger.ts to match the file's 2-space
indentation convention.

57-63: type parameter is unused in formatLine.

formatLine receives type but never consults it — severity is applied later via winston level. Either drop the parameter or include the type in the formatted line (e.g., for file transports that lose winston's level field).

-  function formatLine(type: LogType, msg: string, opts: LogOptions = {}): string {
+  function formatLine(_type: LogType, msg: string, opts: LogOptions = {}): string {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@vite/pengine-logger.ts` around lines 57 - 63, The formatLine function
currently takes a type: LogType parameter that is never used; update formatLine
(and its signature) to either remove the unused parameter or, preferably,
include the log level in the returned string so file transports that lose
winston's level still have severity info. Modify formatLine (and usages) to
interpolate the type (e.g. append or prepend `${type}` or
`${type.toUpperCase()}` with a separating space/brackets) alongside the
timestamp/env/msg (timeFmt, opts.environment, msg) so the log line contains the
severity; ensure all callers of formatLine still pass the existing type argument
or adapt them if you remove the parameter.
src/modules/bot/types.ts (1)

9-13: Field shape matches the Rust AuditFileEntry serialization.

Keeping size_bytes in snake_case is intentional here to align with the serde output (no #[serde(rename_all = "camelCase")] on AuditFileEntry). Worth a brief comment on the type noting that the casing is dictated by the backend struct so future contributors don't "fix" it to camelCase and silently break deserialization at call sites.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/bot/types.ts` around lines 9 - 13, Add a short comment above the
AuditLogFileInfo type explaining that the field naming (size_bytes)
intentionally uses snake_case to match the backend Rust struct AuditFileEntry
serialization (serde) and warn future contributors not to rename to camelCase as
that will break deserialization; reference the AuditLogFileInfo type and the
size_bytes field in the comment so it’s clear why the casing is non-idiomatic in
TypeScript.
src-tauri/src/infrastructure/http_server.rs (1)

2213-2225: Reuse audit_io_error for consistency.

handle_audit_log_get and handle_audit_log_delete funnel IO errors through audit_io_error, producing 404/400/500 appropriately; this handler collapses everything to 500. Today list_audit_files turns NotFound into Ok(empty) so the divergence is mostly cosmetic, but any other IO failure (e.g., PermissionDenied reading logs/) gets a less useful status than sibling endpoints.

🔧 Suggested fix
 async fn handle_audit_logs_list(
     State(state): State<AppState>,
 ) -> Result<Json<Vec<audit_log::AuditFileEntry>>, (StatusCode, Json<ErrorResponse>)> {
-    match audit_log::list_audit_files(&state.store_path).await {
-        Ok(entries) => Ok(Json(entries)),
-        Err(e) => Err((
-            StatusCode::INTERNAL_SERVER_ERROR,
-            Json(ErrorResponse {
-                error: e.to_string(),
-            }),
-        )),
-    }
+    audit_log::list_audit_files(&state.store_path)
+        .await
+        .map(Json)
+        .map_err(audit_io_error)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/infrastructure/http_server.rs` around lines 2213 - 2225, The
handler handle_audit_logs_list currently maps all errors from
audit_log::list_audit_files to a 500 with a generic ErrorResponse; change it to
reuse the existing audit_io_error error-mapping logic so IO errors get the same
404/400/500 semantics as handle_audit_log_get and handle_audit_log_delete.
Replace the Err branch to call audit_io_error(e) (or translate the io::Error via
audit_io_error) and return its (StatusCode, Json<ErrorResponse>) tuple so the
function returns consistent error statuses for AppState/store_path read
failures.
src-tauri/src/infrastructure/audit_log.rs (3)

31-73: Appending reopens the file per line; consider a shared writer or at least be aware of large-line atomicity.

Every emit_log → append_line_json → append_line_for_date call pays for create_dir_all + OpenOptions::open + write_all + close. Under a busy turn (tool chatter, MCP reconnects) this is a lot of FS round-trips. A Mutex<tokio::fs::File> on AppState, or a dedicated writer task fed by an mpsc channel, would amortize and also serialize writes deterministically.

Separately on concurrency: with O_APPEND, POSIX guarantees atomic appends only up to PIPE_BUF (~4 KiB). If message ever carries large payloads (big tool outputs, stack traces), concurrent appenders on Linux could interleave mid-JSON and corrupt NDJSON lines; Windows semantics are weaker still. A single serialized writer would remove that class of bug entirely.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/infrastructure/audit_log.rs` around lines 31 - 73, The current
append_line_for_date + append_line_json implementation opens, writes, and closes
the audit file on every call which causes excessive FS round-trips and risks
interleaved/corrupted NDJSON when messages are large; replace ad-hoc per-call
file operations with a single serialized writer: add a long-lived writer owned
by your application state (e.g. store a Mutex<tokio::fs::File> or spawn a
dedicated async writer task fed via an mpsc channel in AppState), have
append_line_json send the JSON line (or enqueue it) to that writer instead of
calling append_line_for_date directly, and keep audit_file_path only for initial
file discovery/rotation; ensure the writer creates the parent directory once
(create_dir_all) and performs all OpenOptions::new().append/open and write_all
calls from one task to guarantee atomic, ordered NDJSON writes and reduce FS
overhead.

118-126: Whole-file read; worth capping for pathological cases.

read_audit_file slurps the entire file into a String, and handle_audit_log_get in http_server.rs then wraps it into a response body. For typical daily logs this is fine, but a runaway-logging day (tight loop emitting multi-KB messages) could produce a very large file that pins memory in both the Tauri process and anyone fetching via HTTP. A size check up front (with a clear error) or a streaming reader would be more defensive.
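The up-front size check suggested here might look like the following. MAX_AUDIT_BYTES, the function name, and the error text are illustrative choices, not values from the PR.

```rust
// Defensive size check before slurping an audit file into memory: reject
// files over a cap with a clear error instead of buffering them whole.
use std::fs;
use std::io::{Error, ErrorKind};

const MAX_AUDIT_BYTES: u64 = 8 * 1024 * 1024; // 8 MiB, an illustrative cap

fn read_audit_file_capped(path: &str) -> std::io::Result<String> {
    // metadata() is cheap relative to reading the whole file.
    let len = fs::metadata(path)?.len();
    if len > MAX_AUDIT_BYTES {
        return Err(Error::new(
            ErrorKind::InvalidData,
            format!("audit file is {} bytes, over the {} byte cap", len, MAX_AUDIT_BYTES),
        ));
    }
    fs::read_to_string(path)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("audit-cap-demo.log");
    fs::write(&path, "{\"message\":\"ok\"}\n")?;
    let body = read_audit_file_capped(path.to_str().unwrap())?;
    assert!(body.contains("ok"));
    fs::remove_file(&path)?;
    Ok(())
}
```

The HTTP handler can then map the InvalidData error to a distinct status rather than buffering an unbounded body.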

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/infrastructure/audit_log.rs` around lines 118 - 126,
read_audit_file currently loads the whole audit file into memory; add a
defensive size check before reading to avoid unbounded slurping: in
read_audit_file (and where audit_file_path is used) call tokio::fs::metadata on
the resolved path, compare metadata.len() against a defined MAX_AUDIT_BYTES
constant (choose a sensible cap, e.g. a few MB), and if it exceeds the cap
return an Err with a clear ErrorKind and message (so handle_audit_log_get can
map that to a proper HTTP error); alternatively, if huge files must be
supported, refactor read_audit_file to return an async stream/Reader instead of
String so the HTTP handler can stream the body rather than buffering it.

68-73: UTC-based filename may confuse users in non-UTC timezones.

date_str is computed from chrono::Utc::now(), so a user in PST logging at 5pm local sees activity filed under the next day's file. The timestamp field inside each line is also UTC, which is fine, but the file grouping surprise is visible in the panel (f.date). Consider either naming by local date or documenting the UTC choice in the UI tooltip.

Also worth thinking about at some point: there's no retention/rotation — after a year of daily use, logs/ will accumulate 365+ files indefinitely. A simple age-based cleanup would keep this bounded.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/infrastructure/audit_log.rs` around lines 68 - 73, The audit
filenames are generated using chrono::Utc::now() in append_line_json which
misgroups logs for users in non‑UTC zones; change the date_str computation in
append_line_json (and any uses in append_line_for_date) to use
chrono::Local::now().format("%Y-%m-%d") so file names reflect the user's local
date while keeping the per-line timestamp as UTC, and either update the UI
tooltip to document that timestamps are UTC or add that doc text where f.date is
shown; optionally add a simple retention sweep (e.g., remove files older than N
days) invoked from append_line_json or a new cleanup function to prevent
unbounded growth.
src/modules/bot/api/index.ts (1)

12-35: Blanket catch hides root cause from users and devtools.

All three audit helpers convert any failure (missing permission, invalid date, IO error, Tauri IPC failure) into null/false, so the AuditLogPanel can only show a generic "Could not load files" / "Empty or unreadable" message. That will make support debugging hard once this ships (e.g., if the new allow-audit-logs capability is missing, nothing in the UI or console will hint at it).

Recommend at minimum logging the error, or better, letting it throw and having the panel handle it:

🔧 Suggested fix
 export async function auditListFiles(): Promise<AuditLogFileInfo[] | null> {
   try {
     return await invoke<AuditLogFileInfo[]>("audit_list_files");
-  } catch {
+  } catch (e) {
+    console.warn("audit_list_files failed:", e);
     return null;
   }
 }

Apply similarly to auditReadFile and auditDeleteFile.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/bot/api/index.ts` around lines 12 - 35, The current blanket catch
in auditListFiles, auditReadFile, and auditDeleteFile swallows all errors from
invoke and hides root causes; update these functions to either remove the silent
catch or at minimum log the caught error (include the error object) before
rethrowing or returning so the UI (AuditLogPanel) and devtools can surface the
real failure (e.g., missing capability, IPC error); locate the three functions
(auditListFiles, auditReadFile, auditDeleteFile) and change their catch blocks
to console.error/processLogger.error with the error and a clear message, then
rethrow the error (or return the original null/false only if the panel will
handle it) so callers can decide how to present the failure.
src-tauri/src/modules/bot/commands.rs (1)

74-104: Consider preserving ErrorKind in command errors.

Collapsing all errors via e.to_string() means the frontend can't distinguish a missing file (404-ish) from an invalid date (400-ish) or a transient IO failure. Today the TS wrappers in src/modules/bot/api/index.ts swallow errors into null/false, so this doesn't break anything, but if you later want per-case UX (e.g., "already deleted" vs "permission denied") you'll want a structured error shape similar to audit_io_error in http_server.rs.

Optional — fine to defer.
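A structured mapping that preserves ErrorKind might look like this. The status values mirror common REST conventions and are illustrative; this is not the actual audit_io_error implementation from http_server.rs.

```rust
// Sketch of preserving io::ErrorKind across the command boundary instead
// of collapsing to a string: map kinds to an HTTP-like status code the
// frontend can branch on.
use std::io::ErrorKind;

fn status_for(kind: ErrorKind) -> u16 {
    match kind {
        ErrorKind::NotFound => 404,         // file for that date doesn't exist
        ErrorKind::InvalidInput => 400,     // malformed date parameter
        ErrorKind::PermissionDenied => 403, // logs dir unreadable
        _ => 500,                           // anything else is a server error
    }
}

fn main() {
    assert_eq!(status_for(ErrorKind::NotFound), 404);
    assert_eq!(status_for(ErrorKind::InvalidInput), 400);
    assert_eq!(status_for(ErrorKind::Interrupted), 500);
}
```

The Tauri command could serialize `(status_for(e.kind()), e.to_string())` as its error payload so the panel can distinguish "already deleted" from "permission denied".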

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/modules/bot/commands.rs` around lines 74 - 104, The command
handlers audit_list_files, audit_read_file, and audit_delete_file currently
collapse all errors via e.to_string(); instead preserve ErrorKind by mapping IO
errors into a structured error shape (reuse the existing audit_io_error helper
from http_server.rs or create a similar mapper) when calling
audit_log::list_audit_files / read_audit_file / remove_audit_file so the
frontend can distinguish 404 vs 400 vs transient IO; either change the handlers'
error return to that structured type (or serialize the structured error to JSON
string) and replace .map_err(|e| e.to_string()) with a call that converts the
error into the structured audit_io_error form.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/modules/bot/components/AuditLogPanel.tsx`:
- Around line 162-178: The delete handler can update state after unmounting; fix
by adding an isMounted ref (via useRef) and use the existing "alive" pattern:
create e.g. isMountedRef = useRef(true) in the component, set it to false in the
cleanup of the read effect (or a useEffect that sets true on mount and false on
unmount), then in the onClick async handler around auditDeleteFile(f.date) check
isMountedRef.current before calling setDeleting, setSelected, and before
invoking loadList; only call setDeleting(null), setSelected(null) and void
loadList() if isMountedRef.current is true. Ensure you add useRef to imports and
reference the same isMountedRef used by the read effect.

In `@vite.config.ts`:
- Around line 10-12: The custom logger created by createPengineViteLogger()
doesn't receive the resolved clearScreen flag, so it still allows TTY clears;
pass the resolved clearScreen (or allowClearScreen) option from the vite config
into createPengineViteLogger(...) and update createPengineViteLogger (in
vite/pengine-logger.ts) to accept an options parameter and honor that flag (use
it instead of solely relying on process.stdout.isTTY && !process.env.CI and
ensure the logger checks allowClearScreen before clearing the screen).

In `@vite/pengine-logger.ts`:
- Around line 21-33: The console transport currently prepends a redundant level
prefix via winston.format.printf which causes outputs like "info: ..." and
diverges from Vite's default; remove the explicit `${level}: ` prefix in the
printf formatter (or replace printf with a formatter that only returns the
message) so winston.format.colorize and Vite's message text (e.g. from
winstonLog / transports Console) control the visual severity; update the printf
in the winston.createLogger transports block to output only the message (and any
existing tags like `[pengine:dev]`) rather than including the level string.
- Around line 35-45: The logger currently clears the terminal based only on
TTY/CI checks; update the logic to also respect Vite's clearScreen flag by
honoring an allowClearScreen option: change the canClearScreen computation to
include a check like opts.allowClearScreen !== false (or treat undefined as
true) so that const canClearScreen = process.stdout.isTTY && !process.env.CI &&
opts.allowClearScreen !== false; ensure the clear function remains the same and
that callers (and vite.config.ts) pass { allowClearScreen: false } when they set
clearScreen: false so the logger will no longer clear the screen in that case.

---

Outside diff comments:
In `@src-tauri/src/shared/state.rs`:
- Around line 188-204: emit_log currently awaits
crate::infrastructure::audit_log::append_line_json inline, which serializes disk
I/O on hot paths; change this to delegate writes off the caller by either
spawning a background task (tokio::spawn) that calls append_line_json and
swallows/logs errors, or preferably implement a dedicated bounded mpsc writer
(create a background task owned by the State that consumes a channel) and have
emit_log push entries into that channel (use self.log_tx or add a new audit_tx)
so emit_log returns immediately; additionally, modify append_line_json usage to
emit only the first warning on repeated failures using a static AtomicBool
guard, and ensure audit file creation respects restrictive permissions and that
rotation/retention (e.g., daily rotation or 30-day cleanup) is implemented
elsewhere before release.

---

Nitpick comments:
In `@src-tauri/src/infrastructure/audit_log.rs`:
- Around line 31-73: The current append_line_for_date + append_line_json
implementation opens, writes, and closes the audit file on every call which
causes excessive FS round-trips and risks interleaved/corrupted NDJSON when
messages are large; replace ad-hoc per-call file operations with a single
serialized writer: add a long-lived writer owned by your application state (e.g.
store a Mutex<tokio::fs::File> or spawn a dedicated async writer task fed via an
mpsc channel in AppState), have append_line_json send the JSON line (or enqueue
it) to that writer instead of calling append_line_for_date directly, and keep
audit_file_path only for initial file discovery/rotation; ensure the writer
creates the parent directory once (create_dir_all) and performs all
OpenOptions::new().append/open and write_all calls from one task to guarantee
atomic, ordered NDJSON writes and reduce FS overhead.
- Around line 118-126: read_audit_file currently loads the whole audit file into
memory; add a defensive size check before reading to avoid unbounded slurping:
in read_audit_file (and where audit_file_path is used) call tokio::fs::metadata
on the resolved path, compare metadata.len() against a defined MAX_AUDIT_BYTES
constant (choose a sensible cap, e.g. a few MB), and if it exceeds the cap
return an Err with a clear ErrorKind and message (so handle_audit_log_get can
map that to a proper HTTP error); alternatively, if huge files must be
supported, refactor read_audit_file to return an async stream/Reader instead of
String so the HTTP handler can stream the body rather than buffering it.
- Around line 68-73: The audit filenames are generated using chrono::Utc::now()
in append_line_json which misgroups logs for users in non‑UTC zones; change the
date_str computation in append_line_json (and any uses in append_line_for_date)
to use chrono::Local::now().format("%Y-%m-%d") so file names reflect the user's
local date while keeping the per-line timestamp as UTC, and either update the UI
tooltip to document that timestamps are UTC or add that doc text where f.date is
shown; optionally add a simple retention sweep (e.g., remove files older than N
days) invoked from append_line_json or a new cleanup function to prevent
unbounded growth.

In `@src-tauri/src/infrastructure/http_server.rs`:
- Around line 2213-2225: The handler handle_audit_logs_list currently maps all
errors from audit_log::list_audit_files to a 500 with a generic ErrorResponse;
change it to reuse the existing audit_io_error error-mapping logic so IO errors
get the same 404/400/500 semantics as handle_audit_log_get and
handle_audit_log_delete. Replace the Err branch to call audit_io_error(e) (or
translate the io::Error via audit_io_error) and return its (StatusCode,
Json<ErrorResponse>) tuple so the function returns consistent error statuses for
AppState/store_path read failures.

In `@src-tauri/src/modules/bot/commands.rs`:
- Around line 74-104: The command handlers audit_list_files, audit_read_file,
and audit_delete_file currently collapse all errors via e.to_string(); instead
preserve ErrorKind by mapping IO errors into a structured error shape (reuse the
existing audit_io_error helper from http_server.rs or create a similar mapper)
when calling audit_log::list_audit_files / read_audit_file / remove_audit_file
so the frontend can distinguish 404 vs 400 vs transient IO; either change the
handlers' error return to that structured type (or serialize the structured
error to JSON string) and replace .map_err(|e| e.to_string()) with a call that
converts the error into the structured audit_io_error form.

In `@src/modules/bot/api/index.ts`:
- Around line 12-35: The current blanket catch in auditListFiles, auditReadFile,
and auditDeleteFile swallows all errors from invoke and hides root causes;
update these functions to either remove the silent catch or at minimum log the
caught error (include the error object) before rethrowing or returning so the UI
(AuditLogPanel) and devtools can surface the real failure (e.g., missing
capability, IPC error); locate the three functions (auditListFiles,
auditReadFile, auditDeleteFile) and change their catch blocks to
console.error/processLogger.error with the error and a clear message, then
rethrow the error (or return the original null/false only if the panel will
handle it) so callers can decide how to present the failure.

In `@src/modules/bot/types.ts`:
- Around line 9-13: Add a short comment above the AuditLogFileInfo type
explaining that the field naming (size_bytes) intentionally uses snake_case to
match the backend Rust struct AuditFileEntry serialization (serde) and warn
future contributors not to rename to camelCase as that will break
deserialization; reference the AuditLogFileInfo type and the size_bytes field in
the comment so it’s clear why the casing is non-idiomatic in TypeScript.

In `@vite/pengine-logger.ts`:
- Around line 38-39: The indentation for the blank assignment is inconsistent:
align the indentation of the line that defines "const blank = repeatCount > 0 ?
\"\\n\".repeat(repeatCount) : \"\";" with the surrounding 2-space style used for
"const repeatCount = process.stdout.rows - 2;" so both use the same 2-space
indentation; update the whitespace before the "const blank" declaration in
pengine-logger.ts to match the file's 2-space indentation convention.
- Around line 57-63: The formatLine function currently takes a type: LogType
parameter that is never used; update formatLine (and its signature) to either
remove the unused parameter or, preferably, include the log level in the
returned string so file transports that lose winston's level still have severity
info. Modify formatLine (and usages) to interpolate the type (e.g. append or
prepend `${type}` or `${type.toUpperCase()}` with a separating space/brackets)
alongside the timestamp/env/msg (timeFmt, opts.environment, msg) so the log line
contains the severity; ensure all callers of formatLine still pass the existing
type argument or adapt them if you remove the parameter.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 507ac868-bd82-4184-8d27-4c1e4789583e

📥 Commits

Reviewing files that changed from the base of the PR and between 2e1af21 and 6fdc4a4.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (19)
  • package.json
  • src-tauri/capabilities/default.json
  • src-tauri/permissions/audit-logs.toml
  • src-tauri/src/app.rs
  • src-tauri/src/infrastructure/audit_log.rs
  • src-tauri/src/infrastructure/http_server.rs
  • src-tauri/src/infrastructure/mod.rs
  • src-tauri/src/modules/bot/commands.rs
  • src-tauri/src/shared/state.rs
  • src/modules/bot/api/index.ts
  • src/modules/bot/components/AuditLogPanel.tsx
  • src/modules/bot/components/TerminalPreview.tsx
  • src/modules/bot/components/logLineKindClass.ts
  • src/modules/bot/index.ts
  • src/modules/bot/types.ts
  • src/pages/DashboardPage.tsx
  • src/shared/api/config.ts
  • vite.config.ts
  • vite/pengine-logger.ts

- Refactored audit log functions to include detailed error handling and logging for file operations.
- Updated the Vite configuration to allow for custom logger options, improving logging behavior.
- Enhanced the AuditLogPanel component with a mounted reference to prevent state updates on unmounted components.
- Introduced a background writer for audit logs to ensure ordered appends and manage file size limits.
- Added new error handling for HTTP responses related to audit logs, improving user feedback on failures.

Made-with: Cursor
@maximedogawa maximedogawa merged commit 21f0fc7 into main Apr 18, 2026
1 check passed
@maximedogawa maximedogawa deleted the 20-feature-add-logging-strategy branch April 18, 2026 18:55


Development

Successfully merging this pull request may close these issues.

[Feature] Add Logging Strategy

1 participant