feat: implement cron job management features #47
Conversation
- Added new components for managing cron jobs, including CronPanel, CronJobCard, and CronFormPinnedSkills.
- Introduced API functions for fetching, creating, updating, and deleting cron jobs.
- Implemented a daily local time picker for scheduling and a drag-and-drop interface for managing pinned skills.
- Enhanced the SkillsPanel to support skill ordering and integration with cron job functionality.
- Updated package dependencies to include @dnd-kit for drag-and-drop capabilities and improved ESLint configuration.

Made-with: Cursor
📝 Walkthrough

Adds a full cron scheduling subsystem: persistent cron storage, a scheduler loop, HTTP CRUD and test endpoints, bot integration for system turns with optional skill filtering, frontend UI (CronPanel) with drag-and-drop skill pinning, and skill ordering persistence and gating.
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant UI as CronPanel
participant API as HTTP API
participant Repo as Cron Repository
participant Sched as Cron Scheduler
participant Agent as Bot Agent
participant TG as Telegram
UI->>API: POST/PUT /v1/cron (create/update)
API->>Repo: save(cron.json)
Repo-->>API: ok
API-->>UI: success
loop scheduler every 30s or on notify
Sched->>Sched: load state.cron_jobs
alt job due
Sched->>Agent: run_system_turn(prompt, skill_filter)
Agent-->>Sched: TurnResult (text, suppress flags)
alt should send
Sched->>TG: send_to_last_chat(chunks)
TG-->>Sched: success/error
else skip send
Sched-->>Sched: log skip
end
Sched->>Repo: save(updated cron.json with last_run_at)
Repo-->>Sched: ok
end
end
UI->>API: POST /v1/cron/{id}/test
API->>Agent: run_system_turn(composed_prompt, skill_filter)
Agent-->>API: TurnResult
API->>TG: send_to_last_chat (optional)
TG-->>API: result
API-->>UI: CronTestResponse
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 7
🧹 Nitpick comments (5)
src/modules/cron/index.ts (1)
1-3: Watch for symbol collisions between `./api` and `./types`.

Two `export *` barrels will fail the build (TS2308) if either file ever re-exports a name that exists in the other (e.g. a helper named `CronJob` alongside the type). Not a defect today, just a trap for later refactors — consider explicit named re-exports if this surface is meant to be public/stable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/modules/cron/index.ts` around lines 1 - 3, The two blanket re-exports (export * from "./api" and export * from "./types") risk future TS2308 collisions if the modules ever export the same identifier (e.g., a CronJob helper/type); replace these star exports with explicit named exports to make the public surface stable and collision-safe: import the specific symbols you want from "./api" (functions/classes) and from "./types" (type-only exports), then re-export them using explicit export { Foo, Bar } and export type { FooType, BarType } patterns; keep the existing named export for CronPanel as-is.

src-tauri/src/modules/cron/scheduler.rs (1)
51-64: Consider rate-limiting the "no Telegram chat known" log.

When due jobs exist but `last_chat_id` is `None`, this branch emits a log on every 30-second tick (and on every `cron_notify` wake) until the user first messages the bot. On an idle install with a few enabled jobs, that's a continuous stream of near-identical log lines. A simple `AtomicBool`/timestamp gate so the warning fires at most once per "unknown chat" episode (and again after `last_chat_id` transitions back to `None`) would keep the signal without the noise.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/modules/cron/scheduler.rs` around lines 51 - 64, The log spam when last_chat_id.read().await.is_none() occurs should be rate-limited: add a small state flag/timestamp (e.g., an AtomicBool or an Option<Instant> stored on state, e.g., last_chat_known_warned or last_unknown_chat_warn_at) checked before calling state.emit_log("cron", ...) so the message is emitted at most once per unknown-chat episode (or at most once per N minutes), and update/reset that flag/timestamp when last_chat_id transitions from None -> Some (clear the flag) or from Some -> None (start a new episode); modify the code around last_chat_id.read() and emit_log to consult and set this gate so repeated ticks/crons don't produce identical log lines.
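A minimal sketch of that episode gate. The `warned_no_chat: AtomicBool` field is hypothetical (it does not exist in `AppState` today), and the `emit_log` call shape is assumed from the snippets elsewhere in this review:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical addition to AppState: warned_no_chat: AtomicBool (false at startup).
fn warn_no_chat_once(state: &AppState) {
    // swap returns the previous value, so only the first tick of an
    // "unknown chat" episode actually emits the log line.
    if !state.warned_no_chat.swap(true, Ordering::Relaxed) {
        state.emit_log("cron", "due jobs exist but no Telegram chat is known yet");
    }
}

fn reset_no_chat_warning(state: &AppState) {
    // Call this when last_chat_id transitions None -> Some so a later
    // None episode warns again.
    state.warned_no_chat.store(false, Ordering::Relaxed);
}
```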
src-tauri/src/modules/cron/repository.rs (1)

23-31: Non-atomic write can corrupt `cron.json` on crash.

`std::fs::write` performs a direct open(O_TRUNC) + write + close. If the process is killed (or the host loses power) between truncate and the final write, `cron.json` is left truncated or empty, and on next startup `load()` will return `CronFile::default()` — silently dropping every cron job the user configured. Given this is invoked from every CRUD/toggle handler and from `mark_ran` after each firing, the exposure isn't hypothetical.

Use the standard write-temp-then-rename pattern (the same-directory rename is atomic on POSIX and on Windows via `ReplaceFileW`/`MoveFileEx`).

♻️ Proposed atomic write
```diff
 pub fn save(path: &Path, file: &CronFile) -> Result<(), String> {
-    if let Some(parent) = path.parent() {
-        std::fs::create_dir_all(parent)
-            .map_err(|e| format!("create parent dirs for cron.json: {e}"))?;
-    }
+    let parent = path.parent().unwrap_or_else(|| Path::new("."));
+    std::fs::create_dir_all(parent)
+        .map_err(|e| format!("create parent dirs for cron.json: {e}"))?;
     let pretty = serde_json::to_string_pretty(file).map_err(|e| format!("encode cron.json: {e}"))?;
-    std::fs::write(path, pretty).map_err(|e| format!("write cron.json: {e}"))
+    let tmp = parent.join(".cron.json.tmp");
+    std::fs::write(&tmp, pretty).map_err(|e| format!("write cron.json tmp: {e}"))?;
+    std::fs::rename(&tmp, path).map_err(|e| format!("rename cron.json: {e}"))
 }
```

Consider a crate like `tempfile` (`persist`) or `atomicwrites` if you prefer a battle-tested helper (handles fsync on the temp file and the parent directory, which the snippet above skips).

Additionally, `repository::save` is invoked concurrently from the scheduler (`mark_ran`) and HTTP CRUD handlers with no shared mutex. Two simultaneous writers can race on the rename; more importantly, last-writer-wins may drop a just-completed CRUD change if the scheduler's snapshot was read a moment earlier. Worth considering a module-level `Mutex` around load/mutate/save for cron persistence.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/modules/cron/repository.rs` around lines 23 - 31, The save(Path, CronFile) function is doing a non-atomic std::fs::write which can truncate cron.json mid-update and must be replaced with an atomic write-then-rename: create a temp file in the same directory (e.g., via tempfile::NamedTempFile or atomicwrites), write the pretty JSON to it, flush and fsync the temp file, atomically rename/persist it over the target path, and fsync the parent directory; alternatively use tempfile::NamedTempFile::persist or the atomicwrites crate which handle these steps. Also serialize concurrent access to the repo (load/mutate/save/mark_ran) with a module-level Mutex or other lock so two writers cannot race and lose changes; update repository::save and callers (including mark_ran and CRUD handlers) to use the mutex-protected code path.
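For reference, a sketch of the `tempfile`-based variant mentioned above, assuming the existing `CronFile` type; the error strings are illustrative, not the project's actual wording:

```rust
use std::io::Write;
use std::path::Path;
use tempfile::NamedTempFile;

pub fn save(path: &Path, file: &CronFile) -> Result<(), String> {
    let parent = path.parent().unwrap_or_else(|| Path::new("."));
    std::fs::create_dir_all(parent)
        .map_err(|e| format!("create parent dirs for cron.json: {e}"))?;
    let pretty =
        serde_json::to_string_pretty(file).map_err(|e| format!("encode cron.json: {e}"))?;
    // A temp file in the same directory keeps the final rename on one filesystem.
    let mut tmp = NamedTempFile::new_in(parent).map_err(|e| format!("create temp file: {e}"))?;
    tmp.write_all(pretty.as_bytes())
        .map_err(|e| format!("write temp file: {e}"))?;
    tmp.as_file()
        .sync_all()
        .map_err(|e| format!("fsync temp file: {e}"))?;
    // persist() renames the temp file over the target path.
    tmp.persist(path).map_err(|e| format!("persist cron.json: {e}"))?;
    Ok(())
}
```

Combined with a module-level `Mutex` around load/mutate/save, this closes both the torn-write and the lost-update windows.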
src-tauri/src/modules/skills/service.rs (1)

411-433: Minor: use `gather_skills_sorted` here to avoid an unnecessary order pass.

`list_skills` applies `.skill_order.json` ordering, which is irrelevant when you're just building a lowercase-slug lookup for deduplication. Swapping to `gather_skills_sorted` avoids reading the order file and the sort on every cron canonicalize call:

♻️ Proposed tweak
```diff
 pub fn canonicalize_skill_slug_list(store_path: &Path, requested: &[String]) -> Vec<String> {
-    let by_lower: HashMap<String, String> = list_skills(store_path)
+    let by_lower: HashMap<String, String> = gather_skills_sorted(store_path)
         .into_iter()
         .map(|s| (s.slug.to_lowercase(), s.slug))
         .collect();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/modules/skills/service.rs` around lines 411 - 433, The canonicalize_skill_slug_list function currently calls list_skills to build the by_lower lookup which triggers unnecessary .skill_order.json reads and sorting; change that call to gather_skills_sorted so you still get all existing skills but avoid the order pass. Update the line that builds by_lower (currently using list_skills(store_path).into_iter()...) to use gather_skills_sorted(store_path).into_iter() and keep the rest of canonicalize_skill_slug_list (seen, out, key logic) unchanged.

src-tauri/src/infrastructure/http_server.rs (1)
2057-2085: Dead `found` check — the `let Some(...) else { return }` branch already handles the missing case.

Inside the block we `return Err(not_found(...))` when the job is missing, so `found` is unconditionally `true` on the happy path. The `if !found { ... }` at lines 2070-2072 is unreachable.

♻️ Proposed simplification
```diff
-        let found = {
-            let mut jobs = state.cron_jobs.write().await;
-            let Some(job) = jobs.iter_mut().find(|j| j.id == id) else {
-                return Err(not_found(format!("cron job '{id}' not found")));
-            };
-            job.enabled = body.enabled;
-            true
-        };
-        if !found {
-            return Err(not_found(format!("cron job '{id}' not found")));
-        }
+        {
+            let mut jobs = state.cron_jobs.write().await;
+            let Some(job) = jobs.iter_mut().find(|j| j.id == id) else {
+                return Err(not_found(format!("cron job '{id}' not found")));
+            };
+            job.enabled = body.enabled;
+        }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/infrastructure/http_server.rs` around lines 2057 - 2085, The variable `found` and the subsequent `if !found { ... }` are dead code inside `handle_cron_set_enabled` because the `let Some(job) = ... else { return Err(not_found(...)) }` already returns on miss; remove `found` entirely, mutate `job.enabled = body.enabled` directly inside the `write().await` block (no boolean result needed), and delete the unreachable check (`if !found { ... }`); keep the calls to `persist_cron(&state).await.map_err(internal)?`, `state.cron_notify.notify_waiters()`, and `state.emit_log(...)` as-is so the function behavior is unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@package.json`:
- Around line 33-40: The peer-dependency warning from `@dnd-kit/core` and
`@dnd-kit/sortable` when installing with react@19.1.0 is expected and
non-blocking; update the PR by adding a short note (referencing `@dnd-kit/core`,
`@dnd-kit/sortable`, and react@19.1.0) to the change/PR description and the repo
README explaining that this is a known peerWarning, that these packages are
confirmed to work with React 19, and provide recommended mitigations (e.g.,
using --legacy-peer-deps, using skipLibCheck: true in tsconfig, or adding
suppressHydrationWarning fixes) so reviewers and users know why the warning can
be safely ignored.
In `@src-tauri/src/infrastructure/http_server.rs`:
- Around line 1979-2085: The in-memory cron list is mutated before persisting so
a persist_cron failure leaves memory inconsistent; for handle_cron_create,
handle_cron_update, handle_cron_delete and handle_cron_set_enabled make each
handler roll back the in-memory change when persist_cron returns an error: for
handle_cron_create, push the job, call persist_cron, and on error remove the job
with the generated job.id and return the error (do not call notify_waiters or
emit_log); for handle_cron_update, clone the original CronJob (or its fields)
before mutating, call persist_cron, and on error restore the original job in
place and return the error; for handle_cron_delete, snapshot the removed
CronJob(s) or index before retain/remove, call persist_cron, and on error
re-insert the removed job(s) into state.cron_jobs and return the error; for
handle_cron_set_enabled, save the previous enabled value (or clone the job),
call persist_cron, and on error restore the previous enabled state and return
the error; in all cases ensure notify_waiters() and emit_log() are only called
after a successful persist_cron. (A sketch of this rollback pattern follows this list.)
In `@src-tauri/src/modules/cron/service.rs`:
- Around line 61-72: The current is_no_message_reply(text: &str) treats
empty/whitespace strings as the "no message" sentinel and causes scheduler.rs to
silently skip delivery; change is_no_message_reply to only return true for
explicit sentinel tokens (NO_MESSAGE_SENTINEL, "no-message", "no message",
"<no_message>", "no_message") and remove the early t.is_empty() -> true path,
and then update scheduler.rs to detect an empty trimmed reply from the model as
an error case (log or surface it) instead of suppressing delivery so transient
inference/transport failures are visible. (A sentinel-only sketch follows this list.)
- Around line 140-191: The tests mutate process-global TZ via
ensure_daily_tests_use_utc which is fragile; instead remove that function and
any std::env::set_var calls and make tests deterministic by either (A)
constructing UTC-aware datetimes with FixedOffset or Utc using the helper dt and
asserting is_due(&s, ...) with those UTC DateTime values, or (B) modify
is_due/next_due to accept an explicit timezone parameter and pass Utc (or a
FixedOffset) from the tests (update calls in tests like
daily_at_waits_until_target_time and daily_at_rolls_to_tomorrow_after_firing to
pass the explicit tz); refer to symbols ensure_daily_tests_use_utc, dt,
Schedule::DailyAt, and is_due when applying the change. (A test sketch follows this list.)
In `@src-tauri/src/modules/mcp/native.rs`:
- Around line 458-481: The function set_cron_enabled performs a synchronous
filesystem write via crate::modules::cron::repository::save from an async
context which can block the runtime; change the call to run on a blocking thread
(e.g. wrap the save call in tokio::task::spawn_blocking or an equivalent
executor) or make repository::save async, invoking the new async/blocking
wrapper from set_cron_enabled after building the CronFile (references:
set_cron_enabled, state.cron_jobs, state.cron_path,
crate::modules::cron::repository::save, and state.cron_notify). (A spawn_blocking sketch follows this list.)
In `@src/modules/cron/components/CronPanel.tsx`:
- Around line 139-145: Update the validation error text in CronPanel.tsx to
reflect local time rather than UTC: where the code checks draft.schedule (the
block that uses hour/minute and calls setFormError("Time must be HH:MM in
24-hour UTC")), change the message to indicate local 24-hour clock (e.g., "Time
must be HH:MM in 24-hour local time") so it matches CronDailyLocalTimePicker and
the Schedule::DailyAt behavior; ensure you only alter the string passed to
setFormError and leave the hour/minute validation logic unchanged.
In `@src/modules/skills/components/SkillsPanel.tsx`:
- Around line 122-131: The drag handle button in SkillsPanel.tsx (the element
using {...attributes} and {...listeners} and aria-label={`Drag to reorder
${skill.slug}`}) removes the focus outline via "outline-none", which hides focus
for keyboard users using KeyboardSensor/sortableKeyboardCoordinates; restore a
visible keyboard focus by replacing or augmenting the className with a
keyboard-only focus style (e.g., add a focus-visible ring class such as
"focus-visible:ring-2 focus-visible:ring-offset-1 focus-visible:ring-primary" or
similar) so the handle shows a visible focus ring when focused via keyboard
while keeping the visual style for mouse users.
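For the `http_server.rs` persist-rollback item above, a sketch of the create-handler pattern. The handler signature, error type, and `emit_log` message are placeholders inferred from this review, not the real axum handler:

```rust
async fn create_cron_job(state: &AppState, job: CronJob) -> Result<(), String> {
    let id = job.id.clone();
    // 1. Apply the change in memory.
    state.cron_jobs.write().await.push(job);
    // 2. Persist; on failure, undo the in-memory change so memory and disk agree.
    if let Err(e) = persist_cron(state).await {
        state.cron_jobs.write().await.retain(|j| j.id != id);
        return Err(e);
    }
    // 3. Only notify observers and log once the change is durable.
    state.cron_notify.notify_waiters();
    state.emit_log("cron", "cron job created");
    Ok(())
}
```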
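For the `is_no_message_reply` item, a sketch of the sentinel-only check; the token list comes from the prompt text and the real `NO_MESSAGE_SENTINEL` value is not shown in this review:

```rust
fn is_no_message_reply(text: &str) -> bool {
    let t = text.trim().to_lowercase();
    // Deliberately no t.is_empty() fast-path: an empty reply should be
    // surfaced by the scheduler as an error, not silently suppressed.
    matches!(
        t.as_str(),
        "<no_message>" | "no_message" | "no-message" | "no message"
    )
}
```

The scheduler side would then branch on empty trimmed text separately and log it instead of skipping delivery.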
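For the timezone-test item, a sketch of option (A): fixed UTC datetimes instead of mutating the process-global `TZ`. The `is_due_at` helper, the `Schedule::DailyAt` field names, and the assertion values are all hypothetical:

```rust
use chrono::{DateTime, TimeZone, Utc};

// Test helper: build an explicit UTC datetime rather than reading the wall clock.
fn dt(y: i32, mo: u32, d: u32, h: u32, mi: u32) -> DateTime<Utc> {
    Utc.with_ymd_and_hms(y, mo, d, h, mi, 0).single().unwrap()
}

#[test]
fn daily_at_waits_until_target_time() {
    // Hypothetical is_due variant that takes "now" explicitly.
    let s = Schedule::DailyAt { hour: 9, minute: 30 };
    assert!(!is_due_at(&s, dt(2025, 1, 2, 9, 29)));
    assert!(is_due_at(&s, dt(2025, 1, 2, 9, 30)));
}
```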
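And for the `set_cron_enabled` item, the shape of a non-blocking save. `CronFile`'s fields match the `persist_cron` snippet quoted later in this thread; the helper name is hypothetical:

```rust
async fn save_cron_off_runtime(state: &AppState) -> Result<(), String> {
    // Snapshot under the async locks, then write on a blocking thread.
    let file = CronFile {
        jobs: state.cron_jobs.read().await.clone(),
        last_chat_id: *state.last_chat_id.read().await,
    };
    let path = state.cron_path.clone();
    tokio::task::spawn_blocking(move || crate::modules::cron::repository::save(&path, &file))
        .await
        .map_err(|e| format!("cron save task panicked or was cancelled: {e}"))?
}
```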
---
Nitpick comments:
In `@src-tauri/src/infrastructure/http_server.rs`:
- Around line 2057-2085: The variable `found` and the subsequent `if !found {
... }` are dead code inside `handle_cron_set_enabled` because the `let Some(job)
= ... else { return Err(not_found(...)) }` already returns on miss; remove
`found` entirely, mutate `job.enabled = body.enabled` directly inside the
`write().await` block (no boolean result needed), and delete the unreachable
check (`if !found { ... }`); keep the calls to
`persist_cron(&state).await.map_err(internal)?`,
`state.cron_notify.notify_waiters()`, and `state.emit_log(...)` as-is so the
function behavior is unchanged.
In `@src-tauri/src/modules/cron/repository.rs`:
- Around line 23-31: The save(Path, CronFile) function is doing a non-atomic
std::fs::write which can truncate cron.json mid-update and must be replaced with
an atomic write-then-rename: create a temp file in the same directory (e.g., via
tempfile::NamedTempFile or atomicwrites), write the pretty JSON to it, flush and
fsync the temp file, atomically rename/persist it over the target path, and
fsync the parent directory; alternatively use tempfile::NamedTempFile::persist
or the atomicwrites crate which handle these steps. Also serialize concurrent
access to the repo (load/mutate/save/mark_ran) with a module-level Mutex or
other lock so two writers cannot race and lose changes; update repository::save
and callers (including mark_ran and CRUD handlers) to use the mutex-protected
code path.
In `@src-tauri/src/modules/cron/scheduler.rs`:
- Around line 51-64: The log spam when last_chat_id.read().await.is_none()
occurs should be rate-limited: add a small state flag/timestamp (e.g., an
AtomicBool or an Option<Instant> stored on state, e.g., last_chat_known_warned
or last_unknown_chat_warn_at) checked before calling state.emit_log("cron", ...)
so the message is emitted at most once per unknown-chat episode (or at most once
per N minutes), and update/reset that flag/timestamp when last_chat_id
transitions from None -> Some (clear the flag) or from Some -> None (start a new
episode); modify the code around last_chat_id.read() and emit_log to consult and
set this gate so repeated ticks/crons don’t produce identical log lines.
In `@src-tauri/src/modules/skills/service.rs`:
- Around line 411-433: The canonicalize_skill_slug_list function currently calls
list_skills to build the by_lower lookup which triggers unnecessary
.skill_order.json reads and sorting; change that call to gather_skills_sorted so
you still get all existing skills but avoid the order pass. Update the line that
builds by_lower (currently using list_skills(store_path).into_iter()...) to use
gather_skills_sorted(store_path).into_iter() and keep the rest of
canonicalize_skill_slug_list (seen, out, key logic) unchanged.
In `@src/modules/cron/index.ts`:
- Around line 1-3: The two blanket re-exports (export * from "./api" and export
* from "./types") risk future TS2308 collisions if the modules ever export the
same identifier (e.g., a CronJob helper/type); replace these star exports with
explicit named exports to make the public surface stable and collision-safe:
import the specific symbols you want from "./api" (functions/classes) and from
"./types" (type-only exports), then re-export them using explicit export { Foo,
Bar } and export type { FooType, BarType } patterns; keep the existing named
export for CronPanel as-is.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b5058d63-fcc0-42a3-af72-8949ee25c5bf
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (28)
- package.json
- src-tauri/src/app.rs
- src-tauri/src/infrastructure/http_server.rs
- src-tauri/src/modules/bot/agent.rs
- src-tauri/src/modules/bot/service.rs
- src-tauri/src/modules/cron/mod.rs
- src-tauri/src/modules/cron/repository.rs
- src-tauri/src/modules/cron/scheduler.rs
- src-tauri/src/modules/cron/service.rs
- src-tauri/src/modules/cron/types.rs
- src-tauri/src/modules/mcp/native.rs
- src-tauri/src/modules/mcp/service.rs
- src-tauri/src/modules/mod.rs
- src-tauri/src/modules/skills/service.rs
- src-tauri/src/modules/skills/types.rs
- src-tauri/src/shared/state.rs
- src-tauri/tests/skills_brave_gate.rs
- src/modules/cron/api/index.ts
- src/modules/cron/components/CronDailyLocalTimePicker.tsx
- src/modules/cron/components/CronFormPinnedSkills.tsx
- src/modules/cron/components/CronJobCard.tsx
- src/modules/cron/components/CronPanel.tsx
- src/modules/cron/index.ts
- src/modules/cron/types.ts
- src/modules/skills/api/index.ts
- src/modules/skills/components/SkillsPanel.tsx
- src/modules/skills/types.ts
- src/pages/DashboardPage.tsx
| "@dnd-kit/core": "^6.3.1", | ||
| "@dnd-kit/modifiers": "^9.0.0", | ||
| "@dnd-kit/sortable": "^10.0.0", | ||
| "@dnd-kit/utilities": "^3.2.2", | ||
| "@radix-ui/react-accordion": "^1.2.12", | ||
| "@radix-ui/react-menubar": "^1.1.16", | ||
| "@radix-ui/react-scroll-area": "^1.2.10", | ||
| "@radix-ui/react-select": "^2.2.6", |
🧩 Analysis chain
🌐 Web query:
@dnd-kit/core 6.3 and @dnd-kit/sortable 10 React 19 peer dependency support
💡 Result:
No, @dnd-kit/core 6.3.x and @dnd-kit/sortable 10.x do not explicitly declare React 19 in their peer dependencies. Both packages specify "react": ">=16.8.0" and "react-dom": ">=16.8.0" in their peerDependencies on npm, which technically allows React 19 but does not explicitly confirm full compatibility or testing. However, evidence indicates practical React 19 support: - Users report successful usage with React 19 and Next.js 15, sometimes requiring workarounds like --legacy-peer-deps, skipLibCheck, or hydration fixes ("use client", suppressHydrationWarning). - Recent fixes in the repository (e.g., useRef readonly errors for React 19 type compatibility, JSX namespace wrapping) were merged before these versions' release (Dec 2024). - No breaking issues reported post-release; a StackOverflow example shows @dnd-kit/core@6.3.1 installing without React peer conflict alongside React 19. The library has since been rewritten (new @dnd-kit/react@0.4.0 etc. with explicit React 19 fixes), but for these specific legacy versions, compatibility is high-confidence in practice despite non-explicit peer deps.
Citations:
- 1: https://www.npmjs.com/package/@dnd-kit/core?activeTab=versions
- 2: https://www.npmjs.com/package/@dnd-kit/core
- 3: https://registry.npmjs.org/%40dnd-kit%2Fcore
- 4: https://www.npmjs.com/package/@dnd-kit/sortable/v/10.0.0?activeTab=dependencies
- 5: https://www.npmjs.com/package/@dnd-kit/sortable?activeTab=versions
- 6: Support React 19 & Nextjs 15 clauderic/dnd-kit#1511
- 7: Wrap JSX inside react module, in preparation of React 19 clauderic/dnd-kit#1818
- 8: With React 19 typings: Cannot find namespace 'JSX'. clauderic/dnd-kit#1559
- 9: https://www.shadcraft.com
- 10: https://docs.dndkit.com/introduction/installation
- 11: https://dndkit.com/introduction/installation
- 12: React 18 - @dnd-kit/accessibility clauderic/dnd-kit#988
- 13: https://github.com/clauderic/dnd-kit/blob/master/packages/core/CHANGELOG.md
@dnd-kit peer dependency warning is expected but not blocking with React 19.
@dnd-kit/core@6.3.1 and @dnd-kit/sortable@10.0.0 declare only react>=16.8.0 in their peerDependencies, so npm will likely warn when installing with react@19.1.0. Despite this technical mismatch, practical compatibility is solid—users successfully run these versions with React 19, and no breaking issues have been reported post-release. If peer warnings appear during install, --legacy-peer-deps, skipLibCheck: true, or hydration fixes (e.g., suppressHydrationWarning) can suppress them. No evidence of React 19 strict mode double-invocation causing regressions in sortable behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@package.json` around lines 33 - 40, The peer-dependency warning from
`@dnd-kit/core` and `@dnd-kit/sortable` when installing with react@19.1.0 is
expected and non-blocking; update the PR by adding a short note (referencing
`@dnd-kit/core`, `@dnd-kit/sortable`, and react@19.1.0) to the change/PR description
and the repo README explaining that this is a known peerWarning, that these
packages are confirmed to work with React 19, and provide recommended
mitigations (e.g., using --legacy-peer-deps, using skipLibCheck: true in
tsconfig, or adding suppressHydrationWarning fixes) so reviewers and users know
why the warning can be safely ignored.
- Added a note in the README regarding peer dependency warnings for `@dnd-kit/core` and `@dnd-kit/sortable` with React 19.1.0.
- Refactored cron job management functions to enhance error handling during job creation, update, and deletion, ensuring jobs are retained in case of persistence failures.
- Improved user feedback in the CronPanel component by clarifying time format error messages.
- Enhanced the SkillsPanel component's drag-and-drop interface with additional focus styles for better accessibility.
- Introduced a new macOS configuration file for Tauri to streamline the build process.

Made-with: Cursor
Actionable comments posted: 8
🧹 Nitpick comments (2)
src-tauri/tauri.macos.conf.json (1)
4-4: Minor: redundant error suppression and double shell wrapping in `beforeBundleCommand`.

`rm -f` already ignores missing files and exits successfully, and Tauri 2.0 automatically wraps `beforeBundleCommand` in a shell invocation (`sh -c` on Unix/macOS). The explicit `sh -c` wrapper and `2>/dev/null; true` are redundant. Simplify to:

♻️ Suggested simplification
- "beforeBundleCommand": "sh -c 'rm -f target/release/bundle/macos/*.dmg target/release/bundle/dmg/*.dmg 2>/dev/null; true'" + "beforeBundleCommand": "rm -f target/release/bundle/macos/*.dmg target/release/bundle/dmg/*.dmg"Glob expansion (
*.dmg) will still work correctly because Tauri's automatic shell wrapping handles it.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/tauri.macos.conf.json` at line 4, The beforeBundleCommand string is overly wrapped and suppresses errors redundantly; update the "beforeBundleCommand" value to remove the explicit "sh -c" wrapper and the trailing "2>/dev/null; true" so it simply runs rm -f target/release/bundle/macos/*.dmg target/release/bundle/dmg/*.dmg, relying on rm -f and Tauri's automatic shell wrapping (locate the beforeBundleCommand entry in tauri.macos.conf.json to change the value used during bundling).

src-tauri/src/modules/skills/service.rs (1)
334-404: Consider documenting the empty-`slug_filter` semantics.

`Some(&[])` is intentionally treated the same as `None` here (`filtered_run` stays `false`, no retain/reorder, gate still applies). That's sensible but surprising — cron callers passing an unfiltered `Vec<String>` will fall back to normal gating rather than running unfiltered-without-gate. Worth a one-liner in the doc comment so future callers don't rely on "pass `Some(&[])` to disable gating".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/modules/skills/service.rs` around lines 334 - 404, The function skills_prompt_hint_for_turn treats Some(&[]) the same as None (the local filtered_run remains false so gating via skill_passes_hint_gate still applies), which is surprising; update the doc comment above skills_prompt_hint_for_turn to add a one-line note explaining that slug_filter==Some(&[]) is intentionally treated as "no filter" (i.e., behaves like None) and that callers should pass Some(non_empty_slice) to apply explicit filtering or None to disable filtering, so cron callers don't rely on passing an empty slice to change gating behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/infrastructure/http_server.rs`:
- Around line 2105-2165: The call to bot_agent::run_system_turn inside
handle_cron_test can hang indefinitely; wrap that await in
tokio::time::timeout(...) with a bounded Duration (e.g., 30s or a configured
timeout) and handle Err(Elapsed) by logging via state.emit_log and returning an
appropriate timeout response (StatusCode::GATEWAY_TIMEOUT or
StatusCode::REQUEST_TIMEOUT with Json<ErrorResponse>), while keeping the
successful path identical; specifically replace await on
bot_agent::run_system_turn(&state, &prompt, skills_filter).await with a timeout
wrapper, map timeout into an early Err(...) returned from handle_cron_test, and
only proceed to use turn.text and telegram logic when the timeout did not
elapse. (A timeout sketch follows this list.)
- Around line 1945-1950: persist_cron currently performs synchronous filesystem
I/O on the async runtime and races with other saves; fix it by adding a
dedicated save mutex to AppState (e.g., cron_save_mutex: Arc<Mutex<()>>) and
acquiring that mutex around creating the snapshot and the subsequent blocking
save, then run the actual disk write via tokio::task::spawn_blocking calling
cron_repository::save; apply the same pattern to
modules/cron/scheduler.rs::mark_ran and modules/mcp/native.rs::set_cron_enabled
so they also lock cron_save_mutex before snapshotting and before calling
spawn_blocking(cron_repository::save) to ensure in-memory write order matches
on-disk order.
In `@src-tauri/src/modules/cron/scheduler.rs`:
- Around line 20-33: The scheduler currently only observes state.shutdown_notify
in run() between ticks so long-running work inside
tick()/execute_job()/run_system_turn() blocks shutdown; update tick() to check
state.shutdown_notify.notified().now_or_never() between iterating over due jobs
and skip starting any further jobs if shutdown was requested, and for extra
safety race each long-running per-job call (execute_job / run_system_turn)
against state.shutdown_notify via tokio::select! so the job can be aborted or
cancelled promptly when shutdown is signalled. (A select! sketch follows this list.)
- Around line 79-165: The code unconditionally calls mark_ran (advancing
last_run_at) even on agent::run_system_turn errors and performs synchronous fs
I/O in repository::save; change execute_job so mark_ran is only invoked for
successful runs (or explicitly-classified "ran" errors) — i.e. call
mark_ran(state, &job.id).await only inside the Ok(turn) branch after a
successful send (or where you define success), and surface the error from
run_system_turn instead of advancing last_run_at on Err; refactor mark_ran to
perform saving on a blocking thread by wrapping repository::save(...) in
tokio::task::spawn_blocking and await it, and make it use the same save mutex
used by persist_cron (follow the persist_cron concurrency pattern) so cron file
writes are serialized and last-writer-wins are preserved.
In `@src/modules/cron/components/CronPanel.tsx`:
- Around line 61-64: The current scalar states togglingId, testingId, and
deletingId cause races when multiple card actions run concurrently; replace each
with a Set<string> (e.g., togglingIds, testingIds, deletingIds) managed via
useState and update helpers (instead of
setTogglingId/setTestingId/setDeletingId) that add the job id when an operation
starts and remove it when it completes or errors, and update all UI/logic checks
(where code checks equality to togglingId/testingId/deletingId) to test Set
membership so each card shows its own busy state independently; ensure all
start/finish paths (including error paths) use the add/remove helpers so no id
is left stuck in the Set.
- Around line 147-165: The save flow can close a different/newer draft when the
in-flight save resolves because it always calls closeForm() and load()
regardless of whether the active editingId changed; fix by capturing the current
editingId (e.g., const sessionId = editingId) before awaiting
updateCronJob/createCronJob and, after the await, check that editingId ===
sessionId (or that the form session token still matches) before calling
setNotice(), closeForm(), and load(); otherwise only setSaving(false) and return
or set form error as appropriate. Apply this same pattern to the other save
blocks that call setNotice/closeForm/load (the other occurrences around the
indicated ranges).
- Around line 67-80: The load function can be racy because earlier
fetchCronJobs() responses can override later ones; fix by adding a
monotonically-increasing request id ref (e.g., fetchSeqRef) that you increment
before each call in load(), capture its value in the closure, and after awaiting
fetchCronJobs() verify that cancelledRef.current is false and the captured seq
matches fetchSeqRef.current before calling setJobs/setLastChatId/setError;
ensure the same check is used for refreshes and on unmount so stale responses
are ignored.
- Around line 133-145: The schedule validators in CronPanel (the block checking
draft.schedule.kind === "every_minutes" and the else branch checking { hour,
minute }) currently use Number.isFinite which permits fractional values and
elsewhere parseInt truncates decimals; update validation to require integers by
using Number.isInteger for minute/hour checks and ensure the minutes input
parsing (the code that reads the minutes value around the minutes input
handling) uses event.target.valueAsNumber (or Number(value) followed by
Number.isInteger) instead of Number.parseInt so fractional inputs are rejected
and the saved values reflect exact integers; keep error messages via
setFormError unchanged but trigger them when Number.isInteger fails or
valueAsNumber is NaN/out of range.
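For the `handle_cron_test` timeout item above, a sketch of the bounded agent call. The 30-second budget is the review's example figure, and the `TurnResult`/error types are assumptions about `run_system_turn`'s actual signature:

```rust
use std::time::Duration;

async fn run_system_turn_bounded(
    state: &AppState,
    prompt: &str,
    skills_filter: Option<Vec<String>>,
) -> Result<TurnResult, String> {
    match tokio::time::timeout(
        Duration::from_secs(30),
        bot_agent::run_system_turn(state, prompt, skills_filter),
    )
    .await
    {
        // The inner call finished (successfully or not) within the budget.
        Ok(result) => result,
        // Elapsed: surface a timeout instead of hanging the HTTP handler.
        Err(_elapsed) => {
            state.emit_log("cron", "cron test timed out waiting for the agent");
            Err("cron test timed out".to_string())
        }
    }
}
```

The handler would map the `Err` onto `StatusCode::GATEWAY_TIMEOUT`, as the prompt suggests.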
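And for the scheduler shutdown item, the per-job race; `execute_job`'s exact signature is assumed:

```rust
async fn run_job_or_shutdown(state: &AppState, job: &CronJob) {
    tokio::select! {
        // biased: always poll the shutdown branch first.
        biased;
        _ = state.shutdown_notify.notified() => {
            // Shutdown requested: drop the job future instead of finishing it.
        }
        _ = execute_job(state, job) => {}
    }
}
```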
---
Nitpick comments:
In `@src-tauri/src/modules/skills/service.rs`:
- Around line 334-404: The function skills_prompt_hint_for_turn treats Some(&[])
the same as None (the local filtered_run remains false so gating via
skill_passes_hint_gate still applies), which is surprising; update the doc
comment above skills_prompt_hint_for_turn to add a one-line note explaining that
slug_filter==Some(&[]) is intentionally treated as "no filter" (i.e., behaves
like None) and that callers should pass Some(non_empty_slice) to apply explicit
filtering or None to disable filtering, so cron callers don't rely on passing an
empty slice to change gating behavior.
In `@src-tauri/tauri.macos.conf.json`:
- Line 4: The beforeBundleCommand string is overly wrapped and suppresses errors
redundantly; update the "beforeBundleCommand" value to remove the explicit "sh
-c" wrapper and the trailing "2>/dev/null; true" so it simply runs rm -f
target/release/bundle/macos/*.dmg target/release/bundle/dmg/*.dmg, relying on rm
-f and Tauri's automatic shell wrapping (locate the beforeBundleCommand entry in
tauri.macos.conf.json to change the value used during bundling).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 037c0f87-cbc7-4697-b4ba-ec58a98782eb
📒 Files selected for processing (13)
- README.md
- src-tauri/Cargo.toml
- src-tauri/src/infrastructure/http_server.rs
- src-tauri/src/modules/cron/repository.rs
- src-tauri/src/modules/cron/scheduler.rs
- src-tauri/src/modules/cron/service.rs
- src-tauri/src/modules/mcp/native.rs
- src-tauri/src/modules/skills/service.rs
- src-tauri/src/shared/state.rs
- src-tauri/tauri.macos.conf.json
- src/modules/cron/components/CronPanel.tsx
- src/modules/cron/index.ts
- src/modules/skills/components/SkillsPanel.tsx
✅ Files skipped from review due to trivial changes (2)
- README.md
- src/modules/cron/index.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- src-tauri/src/shared/state.rs
- src/modules/skills/components/SkillsPanel.tsx
- src-tauri/src/modules/cron/repository.rs
```rust
async fn persist_cron(state: &AppState) -> Result<(), String> {
    let jobs = state.cron_jobs.read().await.clone();
    let last_chat_id = *state.last_chat_id.read().await;
    let file = CronFile { jobs, last_chat_id };
    cron_repository::save(&state.cron_path, &file)
}
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Confirm every direct caller of cron_repository::save to size the fix surface.
rg -nP --type=rust -C3 '\bcron::?(?:repository|_repository)?::save\b|\brepository::save\b' -g '**/cron/**' -g '**/http_server.rs' -g '**/mcp/native.rs'
```

Repository: pengine-ai/pengine
Length of output: 1118
🏁 Script executed:
```bash
#!/bin/bash
# Find persist_cron definition in http_server.rs
rg -n 'fn persist_cron' src-tauri/src/infrastructure/http_server.rs -A 10
```

Repository: pengine-ai/pengine
Length of output: 560
🏁 Script executed:
```bash
#!/bin/bash
# Find all callers of persist_cron
rg -n 'persist_cron' src-tauri/src/ --type=rust
```

Repository: pengine-ai/pengine
Length of output: 566
🏁 Script executed:
```bash
#!/bin/bash
# Check scheduler.rs::mark_ran to confirm it lacks spawn_blocking
rg -n 'fn mark_ran' src-tauri/src/modules/cron/scheduler.rs -A 15
```

Repository: pengine-ai/pengine
Length of output: 619
🏁 Script executed:
```bash
#!/bin/bash
# Check AppState definition to see if cron_save_mutex exists
rg -n 'struct AppState' src-tauri/src/ --type=rust -A 30
```

Repository: pengine-ai/pengine
Length of output: 2818
`persist_cron` does sync fs I/O on the Tokio runtime, and concurrent save paths race on `cron.json`.
Two related issues in a single persistence primitive:
- `cron_repository::save` is synchronous (tempfile create + write + rename). The MCP `set_cron_enabled` path in `modules/mcp/native.rs` (lines 479-481) wraps it in `tokio::task::spawn_blocking` for exactly this reason; `persist_cron` should do the same. Additionally, `modules/cron/scheduler.rs::mark_ran` (line 164) also calls `repository::save` directly without `spawn_blocking`, creating an asymmetric pattern. Both `persist_cron` and `mark_ran` block the async runtime on fs I/O.
- `persist_cron` and `mark_ran` each re-read `state.cron_jobs` under a fresh `read()` guard (or modify then release a `write()` guard) after releasing their lock, before calling `save()`. Nothing serializes concurrent saves across `handle_cron_*` (via `persist_cron`), `set_cron_enabled` (via MCP), and `scheduler::mark_ran`. Interleaving:
  - H1 writes job A, releases, reads snapshot_A = {A}, starts save.
  - H2 writes job B, releases, reads snapshot_B = {A, B}, saves → disk = {A, B}.
  - H1's save finishes → disk = {A}. B is lost.
The window extends across the entire save duration since each call takes a fresh lock.
🛡️ Suggested direction
Introduce a dedicated save mutex in AppState (e.g. `cron_save_mutex: Arc<Mutex<()>>`), hold it across the snapshot + `spawn_blocking(save)` in all three paths (`persist_cron`, `mark_ran`, and `set_cron_enabled`) so in-memory write order matches on-disk write order, and move the sync write off the async runtime:
```diff
 async fn persist_cron(state: &AppState) -> Result<(), String> {
-    let jobs = state.cron_jobs.read().await.clone();
-    let last_chat_id = *state.last_chat_id.read().await;
-    let file = CronFile { jobs, last_chat_id };
-    cron_repository::save(&state.cron_path, &file)
+    let _guard = state.cron_save_mutex.lock().await;
+    let jobs = state.cron_jobs.read().await.clone();
+    let last_chat_id = *state.last_chat_id.read().await;
+    let file = CronFile { jobs, last_chat_id };
+    let path = state.cron_path.clone();
+    tokio::task::spawn_blocking(move || cron_repository::save(&path, &file))
+        .await
+        .map_err(|e| format!("cron save task: {e}"))?
 }
```

Apply the same pattern to `modules/cron/scheduler.rs::mark_ran` and ensure `modules/mcp/native.rs::set_cron_enabled` takes the mutex before its `spawn_blocking` call.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src-tauri/src/infrastructure/http_server.rs` around lines 1945 - 1950,
persist_cron currently performs synchronous filesystem I/O on the async runtime
and races with other saves; fix it by adding a dedicated save mutex to AppState
(e.g., cron_save_mutex: Arc<Mutex<()>>) and acquiring that mutex around creating
the snapshot and the subsequent blocking save, then run the actual disk write
via tokio::task::spawn_blocking calling cron_repository::save; apply the same
pattern to modules/cron/scheduler.rs::mark_ran and
modules/mcp/native.rs::set_cron_enabled so they also lock cron_save_mutex before
snapshotting and before calling spawn_blocking(cron_repository::save) to ensure
in-memory write order matches on-disk order.
- Updated the CronDailyLocalTimePicker component to use Number.isInteger for hour and minute validation, ensuring only valid integers are accepted.
- Refactored the CronPanel component to manage toggling, testing, and deleting of cron jobs using Sets for better state handling.
- Enhanced error handling for cron job scheduling, ensuring that time values are validated as integers and within acceptable ranges.
- Introduced a mutex for synchronizing access to cron job state, improving data integrity during concurrent operations.

Made-with: Cursor
Summary by CodeRabbit
New Features
Documentation