feat: add audit log management features #53
Conversation
- Introduced functionality for listing, reading, and deleting daily NDJSON audit files.
- Implemented a new AuditLogPanel component for displaying and managing audit logs in the UI.
- Enhanced the API with new commands for handling audit logs, including `audit_list_files`, `audit_read_file`, and `audit_delete_file`.
- Updated Tauri permissions to allow access to the audit log commands.
- Integrated custom logging with Winston in Vite for improved log management.

Made-with: Cursor
⚠️ Warning: Rate limit exceeded

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 29 minutes and 14 seconds.

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source, and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (12)
📝 Walkthrough

This PR implements a complete audit logging feature for Pengine. It adds daily JSON-line file-based audit log storage, Tauri command handlers, HTTP API endpoints, and a React UI component for viewing and managing audit logs. The system integrates with the existing

Changes
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src-tauri/src/shared/state.rs (1)
188-204: ⚠️ Potential issue | 🟠 Major

Disk I/O on every log call will serialize latency on hot paths — consider async delegation or a bounded write channel.

The current `emit_log()` implementation awaits `append_line_json()` inline, blocking the caller until file open + write completes. This affects tool selection metrics (p95, lines 180–184), cron loops, MCP, auth, and bot handlers. Consider either (a) spawning via `tokio::spawn` so the caller isn't delayed, or (b) moving disk writes to a dedicated background writer via a bounded mpsc channel. Fire-and-forget (swallowing errors with `log::warn!`) already matches your current error model.

User-identifying data is now persisted to audit files without a visible rotation or retention policy.

`app.rs:160` logs the bot username (`format!("Resuming bot @{}…", meta.bot_username)`). Audit files accumulate daily (`audit-YYYY-MM-DD.log`) in the logs directory alongside app data, with no explicit cleanup, rotation, or permission-setting visible in the code. Before release, confirm: (a) a retention/rotation policy exists (at least daily cleanup or a 30-day rolling window), (b) audit files are written with restrictive permissions, (c) no bot tokens or API keys are interpolated into any `emit_log` call site.

One-shot error alerting avoids log spam on misconfiguration.

If the audit directory becomes unreadable or full, `append_line_json()` will `log::warn!()` on every single event, potentially flooding logs. A static `AtomicBool` guard to emit only the first warning would surface the misconfiguration without noise.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/shared/state.rs` around lines 188 - 204, emit_log currently awaits crate::infrastructure::audit_log::append_line_json inline, which serializes disk I/O on hot paths; change this to delegate writes off the caller by either spawning a background task (tokio::spawn) that calls append_line_json and swallows/logs errors, or preferably implement a dedicated bounded mpsc writer (create a background task owned by the State that consumes a channel) and have emit_log push entries into that channel (use self.log_tx or add a new audit_tx) so emit_log returns immediately; additionally, modify append_line_json usage to emit only the first warning on repeated failures using a static AtomicBool guard, and ensure audit file creation respects restrictive permissions and that rotation/retention (e.g., daily rotation or 30-day cleanup) is implemented elsewhere before release.
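The bounded-channel delegation suggested above can be sketched as follows. The review proposes tokio's mpsc; this sketch uses `std::sync::mpsc::sync_channel` and a plain thread so it stands alone, but the shape is the same: callers enqueue and return immediately, a single writer owns all file I/O, and a static `AtomicBool` makes the failure warning one-shot. The function name `spawn_audit_writer` is invented for illustration.

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::path::PathBuf;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::mpsc::{sync_channel, SyncSender};
use std::thread::{self, JoinHandle};

// One-shot guard: only the first write failure is reported.
static WARNED: AtomicBool = AtomicBool::new(false);

/// Spawn a background writer fed by a bounded channel. Because a single
/// thread performs every append, lines are ordered and never interleave.
fn spawn_audit_writer(path: PathBuf) -> (SyncSender<String>, JoinHandle<()>) {
    // Bounded to 1024 queued lines: backpressure instead of unbounded memory.
    let (tx, rx) = sync_channel::<String>(1024);
    let handle = thread::spawn(move || {
        // The loop ends when every sender has been dropped.
        for line in rx {
            let result = OpenOptions::new()
                .create(true)
                .append(true)
                .open(&path)
                .and_then(|mut f| writeln!(f, "{line}"));
            if let Err(e) = result {
                if !WARNED.swap(true, Ordering::Relaxed) {
                    eprintln!("audit writer failed (suppressing repeats): {e}");
                }
            }
        }
    });
    (tx, handle)
}
```

A tokio version would swap the thread for a spawned task and the channel for `tokio::sync::mpsc::channel`, but the ownership pattern is identical.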
🧹 Nitpick comments (9)
vite/pengine-logger.ts (2)
38-39: Inconsistent indentation.

Line 39 is prefixed with extra spaces (a tab-like indent) while the surrounding lines use 2-space indentation. Prettier will likely reformat this — worth fixing now to keep the blame clean.

Fix:

```diff
-  const repeatCount = process.stdout.rows - 2;
-    const blank = repeatCount > 0 ? "\n".repeat(repeatCount) : "";
+  const repeatCount = process.stdout.rows - 2;
+  const blank = repeatCount > 0 ? "\n".repeat(repeatCount) : "";
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@vite/pengine-logger.ts` around lines 38 - 39, The indentation for the blank assignment is inconsistent: align the indentation of the line that defines "const blank = repeatCount > 0 ? \"\\n\".repeat(repeatCount) : \"\";" with the surrounding 2-space style used for "const repeatCount = process.stdout.rows - 2;" so both use the same 2-space indentation; update the whitespace before the "const blank" declaration in pengine-logger.ts to match the file's 2-space indentation convention.
57-63: `type` parameter is unused in `formatLine`.

`formatLine` receives `type` but never consults it — severity is applied later via the winston level. Either drop the parameter or include the type in the formatted line (e.g., for file transports that lose winston's `level` field).

```diff
-function formatLine(type: LogType, msg: string, opts: LogOptions = {}): string {
+function formatLine(_type: LogType, msg: string, opts: LogOptions = {}): string {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@vite/pengine-logger.ts` around lines 57 - 63, The formatLine function currently takes a type: LogType parameter that is never used; update formatLine (and its signature) to either remove the unused parameter or, preferably, include the log level in the returned string so file transports that lose winston's level still have severity info. Modify formatLine (and usages) to interpolate the type (e.g. append or prepend `${type}` or `${type.toUpperCase()}` with a separating space/brackets) alongside the timestamp/env/msg (timeFmt, opts.environment, msg) so the log line contains the severity; ensure all callers of formatLine still pass the existing type argument or adapt them if you remove the parameter.

src/modules/bot/types.ts (1)
9-13: Field shape matches the Rust `AuditFileEntry` serialization.

Keeping `size_bytes` in snake_case is intentional here to align with the serde output (no `#[serde(rename_all = "camelCase")]` on `AuditFileEntry`). Worth a brief comment on the type noting that the casing is dictated by the backend struct, so future contributors don't "fix" it to camelCase and silently break deserialization at call sites.
Verify each finding against the current code and only fix it if needed. In `@src/modules/bot/types.ts` around lines 9 - 13, Add a short comment above the AuditLogFileInfo type explaining that the field naming (size_bytes) intentionally uses snake_case to match the backend Rust struct AuditFileEntry serialization (serde) and warn future contributors not to rename to camelCase as that will break deserialization; reference the AuditLogFileInfo type and the size_bytes field in the comment so it’s clear why the casing is non-idiomatic in TypeScript.src-tauri/src/infrastructure/http_server.rs (1)
2213-2225: Reuse `audit_io_error` for consistency.

`handle_audit_log_get` and `handle_audit_log_delete` funnel IO errors through `audit_io_error`, producing 404/400/500 appropriately; this handler collapses everything to 500. Today `list_audit_files` turns `NotFound` into `Ok(empty)`, so the divergence is mostly cosmetic, but any other IO failure (e.g., `PermissionDenied` reading `logs/`) gets a less useful status than sibling endpoints.

🔧 Suggested fix

```diff
 async fn handle_audit_logs_list(
     State(state): State<AppState>,
 ) -> Result<Json<Vec<audit_log::AuditFileEntry>>, (StatusCode, Json<ErrorResponse>)> {
-    match audit_log::list_audit_files(&state.store_path).await {
-        Ok(entries) => Ok(Json(entries)),
-        Err(e) => Err((
-            StatusCode::INTERNAL_SERVER_ERROR,
-            Json(ErrorResponse {
-                error: e.to_string(),
-            }),
-        )),
-    }
+    audit_log::list_audit_files(&state.store_path)
+        .await
+        .map(Json)
+        .map_err(audit_io_error)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/infrastructure/http_server.rs` around lines 2213 - 2225, The handler handle_audit_logs_list currently maps all errors from audit_log::list_audit_files to a 500 with a generic ErrorResponse; change it to reuse the existing audit_io_error error-mapping logic so IO errors get the same 404/400/500 semantics as handle_audit_log_get and handle_audit_log_delete. Replace the Err branch to call audit_io_error(e) (or translate the io::Error via audit_io_error) and return its (StatusCode, Json<ErrorResponse>) tuple so the function returns consistent error statuses for AppState/store_path read failures.src-tauri/src/infrastructure/audit_log.rs (3)
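The kind-to-status mapping that a shared helper centralizes can be sketched like this. The name `audit_io_error` comes from the review; its exact signature is not shown there, so this standalone function only illustrates the mapping the review describes (404 for missing, 400 for bad input, 500 otherwise).

```rust
use std::io::ErrorKind;

/// Map an IO error kind to the HTTP status the sibling audit endpoints
/// use, so every handler reports failures consistently.
fn audit_error_status(kind: ErrorKind) -> u16 {
    match kind {
        ErrorKind::NotFound => 404,     // missing daily file
        ErrorKind::InvalidInput => 400, // malformed date parameter
        _ => 500,                       // any other IO failure
    }
}
```

With a helper like this in one place, each handler reduces to `.map_err(audit_io_error)` instead of hand-rolling a 500.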
31-73: Appending reopens the file per line; consider a shared writer, or at least be aware of large-line atomicity.

Every `emit_log` → `append_line_json` → `append_line_for_date` call pays for `create_dir_all` + `OpenOptions::open` + `write_all` + close. Under a busy turn (tool chatter, MCP reconnects) this is a lot of FS round-trips. A `Mutex<tokio::fs::File>` on `AppState`, or a dedicated writer task fed by an mpsc channel, would amortize the cost and also serialize writes deterministically.

Separately, on concurrency: with `O_APPEND`, POSIX guarantees atomic appends only up to `PIPE_BUF` (~4 KiB). If `message` ever carries large payloads (big tool outputs, stack traces), concurrent appenders on Linux could interleave mid-JSON and corrupt NDJSON lines; Windows semantics are weaker still. A single serialized writer would remove that class of bug entirely.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/infrastructure/audit_log.rs` around lines 31 - 73, The current append_line_for_date + append_line_json implementation opens, writes, and closes the audit file on every call which causes excessive FS round-trips and risks interleaved/corrupted NDJSON when messages are large; replace ad-hoc per-call file operations with a single serialized writer: add a long-lived writer owned by your application state (e.g. store a Mutex<tokio::fs::File> or spawn a dedicated async writer task fed via an mpsc channel in AppState), have append_line_json send the JSON line (or enqueue it) to that writer instead of calling append_line_for_date directly, and keep audit_file_path only for initial file discovery/rotation; ensure the writer creates the parent directory once (create_dir_all) and performs all OpenOptions::new().append/open and write_all calls from one task to guarantee atomic, ordered NDJSON writes and reduce FS overhead.
118-126: Whole-file read; worth capping for pathological cases.

`read_audit_file` slurps the entire file into a `String`, and `handle_audit_log_get` in `http_server.rs` then wraps it into a response body. For typical daily logs this is fine, but a runaway-logging day (a tight loop emitting multi-KB messages) could produce a very large file that pins memory in both the Tauri process and anyone fetching via HTTP. A size check up front (with a clear error) or a streaming reader would be more defensive.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/infrastructure/audit_log.rs` around lines 118 - 126, read_audit_file currently loads the whole audit file into memory; add a defensive size check before reading to avoid unbounded slurping: in read_audit_file (and where audit_file_path is used) call tokio::fs::metadata on the resolved path, compare metadata.len() against a defined MAX_AUDIT_BYTES constant (choose a sensible cap, e.g. a few MB), and if it exceeds the cap return an Err with a clear ErrorKind and message (so handle_audit_log_get can map that to a proper HTTP error); alternatively, if huge files must be supported, refactor read_audit_file to return an async stream/Reader instead of String so the HTTP handler can stream the body rather than buffering it.
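The up-front size check can be sketched with std file APIs (the codebase uses tokio's async equivalents; `tokio::fs::metadata` has the same shape). The 8 MB cap and the function name are illustrative, not values from the PR.

```rust
use std::io::{Error, ErrorKind};
use std::path::Path;

/// Illustrative cap; the review only says "a few MB".
const MAX_AUDIT_BYTES: u64 = 8 * 1024 * 1024;

/// Check the file size before reading, so a runaway log can't pin
/// megabytes of memory in the process and in the HTTP response.
fn read_audit_file_capped(path: &Path) -> std::io::Result<String> {
    let len = std::fs::metadata(path)?.len();
    if len > MAX_AUDIT_BYTES {
        // A distinct error kind lets the HTTP layer map this to a clear status.
        return Err(Error::new(
            ErrorKind::InvalidData,
            format!("audit file is {len} bytes, over the {MAX_AUDIT_BYTES}-byte cap"),
        ));
    }
    std::fs::read_to_string(path)
}
```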
68-73: UTC-based filename may confuse users in non-UTC timezones.

`date_str` is computed from `chrono::Utc::now()`, so a user in PST logging at 5pm local sees activity filed under the next day's file. The `timestamp` field inside each line is also UTC, which is fine, but the file-grouping surprise is visible in the panel (`f.date`). Consider either naming by local date or documenting the UTC choice in the UI tooltip.

Also worth thinking about at some point: there's no retention/rotation — after a year of daily use, `logs/` will accumulate 365+ files indefinitely. A simple age-based cleanup would keep this bounded.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/infrastructure/audit_log.rs` around lines 68 - 73, The audit filenames are generated using chrono::Utc::now() in append_line_json which misgroups logs for users in non‑UTC zones; change the date_str computation in append_line_json (and any uses in append_line_for_date) to use chrono::Local::now().format("%Y-%m-%d") so file names reflect the user's local date while keeping the per-line timestamp as UTC, and either update the UI tooltip to document that timestamps are UTC or add that doc text where f.date is shown; optionally add a simple retention sweep (e.g., remove files older than N days) invoked from append_line_json or a new cleanup function to prevent unbounded growth.src/modules/bot/api/index.ts (1)
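The age-based cleanup mentioned above can key off the date already embedded in the filenames, avoiding any mtime bookkeeping. This is a sketch under the assumption that files follow the `audit-YYYY-MM-DD.log` pattern from the PR; the `cutoff` parameter (oldest date to keep, e.g. today minus 30 days) and the function name are hypothetical.

```rust
/// Decide expiry from the ISO date embedded in `audit-YYYY-MM-DD.log`
/// filenames. ISO dates compare correctly as plain strings, so no date
/// library is needed for the comparison itself.
fn is_expired(filename: &str, cutoff: &str) -> bool {
    filename
        .strip_prefix("audit-")
        .and_then(|rest| rest.strip_suffix(".log"))
        .map(|date| date < cutoff)
        .unwrap_or(false) // never delete files we don't recognize
}
```

A cleanup sweep would then iterate the `logs/` directory and remove entries where `is_expired` returns true, either on startup or after each day's first append.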
12-35: Blanket `catch` hides the root cause from users and devtools.

All three audit helpers convert any failure (missing permission, invalid date, IO error, Tauri IPC failure) into `null`/`false`, so the `AuditLogPanel` can only show a generic "Could not load files" / "Empty or unreadable" message. That will make support debugging hard once this ships (e.g., if the new `allow-audit-logs` capability is missing, nothing in the UI or console will hint at it).

Recommend at minimum logging the error, or better, letting it throw and having the panel handle it:

🔧 Suggested fix

```diff
 export async function auditListFiles(): Promise<AuditLogFileInfo[] | null> {
   try {
     return await invoke<AuditLogFileInfo[]>("audit_list_files");
-  } catch {
+  } catch (e) {
+    console.warn("audit_list_files failed:", e);
     return null;
   }
 }
```

Apply similarly to `auditReadFile` and `auditDeleteFile`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/modules/bot/api/index.ts` around lines 12 - 35, The current blanket catch in auditListFiles, auditReadFile, and auditDeleteFile swallows all errors from invoke and hides root causes; update these functions to either remove the silent catch or at minimum log the caught error (include the error object) before rethrowing or returning so the UI (AuditLogPanel) and devtools can surface the real failure (e.g., missing capability, IPC error); locate the three functions (auditListFiles, auditReadFile, auditDeleteFile) and change their catch blocks to console.error/processLogger.error with the error and a clear message, then rethrow the error (or return the original null/false only if the panel will handle it) so callers can decide how to present the failure.src-tauri/src/modules/bot/commands.rs (1)
74-104: Consider preserving `ErrorKind` in command errors.

Collapsing all errors via `e.to_string()` means the frontend can't distinguish a missing file (404-ish) from an invalid date (400-ish) or a transient IO failure. Today the TS wrappers in `src/modules/bot/api/index.ts` swallow errors into `null`/`false`, so this doesn't break anything, but if you later want per-case UX (e.g., "already deleted" vs "permission denied") you'll want a structured error shape similar to `audit_io_error` in `http_server.rs`.

Optional — fine to defer.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/modules/bot/commands.rs` around lines 74 - 104, The command handlers audit_list_files, audit_read_file, and audit_delete_file currently collapse all errors via e.to_string(); instead preserve ErrorKind by mapping IO errors into a structured error shape (reuse the existing audit_io_error helper from http_server.rs or create a similar mapper) when calling audit_log::list_audit_files / read_audit_file / remove_audit_file so the frontend can distinguish 404 vs 400 vs transient IO; either change the handlers' error return to that structured type (or serialize the structured error to JSON string) and replace .map_err(|e| e.to_string()) with a call that converts the error into the structured audit_io_error form.
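One way to preserve the kind across the Tauri command boundary (where errors travel as strings) is to encode a small structured payload instead of prose. The field names here are illustrative, not an existing API, and real code should build the JSON with a serializer so messages are escaped properly; `format!` is used only to keep the sketch dependency-free.

```rust
use std::io::{Error, ErrorKind};

/// Encode an IO error as a small JSON string so the frontend can branch
/// on `kind` instead of parsing an English message.
fn to_command_error(e: &Error) -> String {
    let kind = match e.kind() {
        ErrorKind::NotFound => "not_found",
        ErrorKind::InvalidInput => "invalid_input",
        ErrorKind::PermissionDenied => "permission_denied",
        _ => "io_error",
    };
    // Caution: no escaping here; use serde_json in production code.
    format!("{{\"kind\":\"{kind}\",\"message\":\"{e}\"}}")
}
```

The TS wrappers could then `JSON.parse` a rejected command's string and show "already deleted" vs "permission denied" without guessing.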
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/modules/bot/components/AuditLogPanel.tsx`:
- Around line 162-178: The delete handler can update state after unmounting; fix
by adding an isMounted ref (via useRef) and use the existing "alive" pattern:
create e.g. isMountedRef = useRef(true) in the component, set it to false in the
cleanup of the read effect (or a useEffect that sets true on mount and false on
unmount), then in the onClick async handler around auditDeleteFile(f.date) check
isMountedRef.current before calling setDeleting, setSelected, and before
invoking loadList; only call setDeleting(null), setSelected(null) and void
loadList() if isMountedRef.current is true. Ensure you add useRef to imports and
reference the same isMountedRef used by the read effect.
In `@vite.config.ts`:
- Around line 10-12: The custom logger created by createPengineViteLogger()
doesn't receive the resolved clearScreen flag, so it still allows TTY clears;
pass the resolved clearScreen (or allowClearScreen) option from the vite config
into createPengineViteLogger(...) and update createPengineViteLogger (in
vite/pengine-logger.ts) to accept an options parameter and honor that flag (use
it instead of solely relying on process.stdout.isTTY && !process.env.CI and
ensure the logger checks allowClearScreen before clearing the screen).
In `@vite/pengine-logger.ts`:
- Around line 21-33: The console transport currently prepends a redundant level
prefix via winston.format.printf which causes outputs like "info: ..." and
diverges from Vite's default; remove the explicit `${level}: ` prefix in the
printf formatter (or replace printf with a formatter that only returns the
message) so winston.format.colorize and Vite's message text (e.g. from
winstonLog / transports Console) control the visual severity; update the printf
in the winston.createLogger transports block to output only the message (and any
existing tags like `[pengine:dev]`) rather than including the level string.
- Around line 35-45: The logger currently clears the terminal based only on
TTY/CI checks; update the logic to also respect Vite's clearScreen flag by
honoring an allowClearScreen option: change the canClearScreen computation to
include a check like opts.allowClearScreen !== false (or treat undefined as
true) so that const canClearScreen = process.stdout.isTTY && !process.env.CI &&
opts.allowClearScreen !== false; ensure the clear function remains the same and
that callers (and vite.config.ts) pass { allowClearScreen: false } when they set
clearScreen: false so the logger will no longer clear the screen in that case.
---
Nitpick comments:
In `@src/modules/bot/types.ts`:
- Around line 9-13: Add a short comment above the AuditLogFileInfo type
explaining that the field naming (size_bytes) intentionally uses snake_case to
match the backend Rust struct AuditFileEntry serialization (serde) and warn
future contributors not to rename to camelCase as that will break
deserialization; reference the AuditLogFileInfo type and the size_bytes field in
the comment so it’s clear why the casing is non-idiomatic in TypeScript.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 507ac868-bd82-4184-8d27-4c1e4789583e
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (19)
package.json
src-tauri/capabilities/default.json
src-tauri/permissions/audit-logs.toml
src-tauri/src/app.rs
src-tauri/src/infrastructure/audit_log.rs
src-tauri/src/infrastructure/http_server.rs
src-tauri/src/infrastructure/mod.rs
src-tauri/src/modules/bot/commands.rs
src-tauri/src/shared/state.rs
src/modules/bot/api/index.ts
src/modules/bot/components/AuditLogPanel.tsx
src/modules/bot/components/TerminalPreview.tsx
src/modules/bot/components/logLineKindClass.ts
src/modules/bot/index.ts
src/modules/bot/types.ts
src/pages/DashboardPage.tsx
src/shared/api/config.ts
vite.config.ts
vite/pengine-logger.ts
- Refactored audit log functions to include detailed error handling and logging for file operations.
- Updated the Vite configuration to allow for custom logger options, improving logging behavior.
- Enhanced the AuditLogPanel component with a mounted reference to prevent state updates on unmounted components.
- Introduced a background writer for audit logs to ensure ordered appends and manage file size limits.
- Added new error handling for HTTP responses related to audit logs, improving user feedback on failures.

Made-with: Cursor
`audit_list_files`, `audit_read_file`, and `audit_delete_file`.

Made-with: Cursor
Summary by CodeRabbit