
feat: implement cron job management features #47

Merged: maximedogawa merged 3 commits into main from 36-feature-add-cron-job-management on Apr 18, 2026

Conversation

@maximedogawa (Collaborator) commented Apr 18, 2026

  • Added new components for managing cron jobs, including CronPanel, CronJobCard, and CronFormPinnedSkills.
  • Introduced API functions for fetching, creating, updating, and deleting cron jobs.
  • Implemented a daily local time picker for scheduling and a drag-and-drop interface for managing pinned skills.
  • Enhanced the SkillsPanel to support skill ordering and integration with cron job functionality.
  • Updated package dependencies to include @dnd-kit for drag-and-drop capabilities and improved ESLint configuration.

Made-with: Cursor

Summary by CodeRabbit

  • New Features

    • Scheduled prompts (cron jobs): create, edit, delete, test, enable/disable, interval or daily local-time schedules
    • UI for managing cron jobs, including a panel, job cards, time picker, and pinned-skills panel with drag-and-drop ordering
    • Client API + server endpoints for cron and skill-order management; persistent cron storage and background scheduler with Telegram delivery
    • Skill filtering/gating and weather-prioritization in prompts
  • Documentation

    • Install note about dnd-kit peer-dependency warnings and workaround guidance

@maximedogawa maximedogawa self-assigned this Apr 18, 2026
@maximedogawa maximedogawa linked an issue Apr 18, 2026 that may be closed by this pull request
@coderabbitai Bot commented Apr 18, 2026

Warning

Rate limit exceeded

@maximedogawa has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 39 minutes and 10 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 39 minutes and 10 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7175bad4-18f2-42b7-9fef-453c0cafffc6

📥 Commits

Reviewing files that changed from the base of the PR and between f17079e and af5281c.

📒 Files selected for processing (8)
  • src-tauri/src/infrastructure/http_server.rs
  • src-tauri/src/modules/cron/scheduler.rs
  • src-tauri/src/modules/mcp/native.rs
  • src-tauri/src/modules/skills/service.rs
  • src-tauri/src/shared/state.rs
  • src-tauri/tauri.macos.conf.json
  • src/modules/cron/components/CronDailyLocalTimePicker.tsx
  • src/modules/cron/components/CronPanel.tsx
📝 Walkthrough

Adds a full cron scheduling subsystem: persistent cron storage, scheduler loop, HTTP CRUD and test endpoints, bot integration for system turns with optional skill filtering, frontend UI (CronPanel) with drag-and-drop skill pinning, and skill ordering persistence and gating.

Changes

Changes by cohort (files / summary):
  • Manifest / Packages (package.json, src-tauri/Cargo.toml): Added dnd-kit and @radix-ui/react-select to JS deps; moved tempfile to runtime Rust deps.
  • Core Cron Module, Rust (src-tauri/src/modules/cron/...): New cron module: types, repository (atomic load/save + mutex), service (validation, schedule math, sentinel, prompt compose), scheduler (background loop, execute/send, send_to_last_chat).
  • Backend Integration, Rust (src-tauri/src/app.rs, src-tauri/src/shared/state.rs, src-tauri/src/modules/mod.rs): AppState extended with cron fields; startup loads the persisted cron file before launching the scheduler; cron module exported.
  • Bot / MCP / Skills Backend, Rust (src-tauri/src/modules/bot/*, src-tauri/src/modules/mcp/*, src-tauri/src/modules/skills/*, src-tauri/src/modules/skills/types.rs): Added run_system_turn, per-turn skill filtering, weather prioritization, a cron-native manager provider, and skill slug ordering persistence with hint gating (hint_allow_substrings).
  • HTTP API, Rust (src-tauri/src/infrastructure/http_server.rs): New endpoints: skill order (PUT /v1/skills/order), cron CRUD (GET/POST/PUT/DELETE /v1/cron), an enabled toggle, and POST /v1/cron/{id}/test; includes persistence, in-memory state updates, notify + logging, and a test execution flow.
  • Frontend: Cron Types & API, TS (src/modules/cron/types.ts, src/modules/cron/api/index.ts, src/modules/cron/index.ts): Added types for schedules/jobs, a format helper, and client API helpers with timeout/abort and error parsing for fetch/create/update/delete/toggle/test.
  • Frontend: Cron UI, React/TSX (src/modules/cron/components/*, src/modules/cron/components/CronFormPinnedSkills.tsx): New CronPanel, CronJobCard, CronDailyLocalTimePicker, and a pinned-skills component with @dnd-kit sortable drag-and-drop and skill selection.
  • Frontend: Skills UI & API (src/modules/skills/components/SkillsPanel.tsx, src/modules/skills/api/index.ts): SkillsPanel made sortable with dnd-kit; added a putSkillSlugOrder API call to persist order.
  • Dashboard and Docs (src/pages/DashboardPage.tsx, README.md, src-tauri/tauri.macos.conf.json): Embedded CronPanel into the dashboard, documented dnd-kit peer-warning guidance, and added a macOS Tauri pre-bundle cleanup hook.
  • Tests (src-tauri/tests/skills_brave_gate.rs): Added a unit test for hint/brave gating substring behavior.

Sequence Diagram(s)

sequenceDiagram
    participant UI as CronPanel
    participant API as HTTP API
    participant Repo as Cron Repository
    participant Sched as Cron Scheduler
    participant Agent as Bot Agent
    participant TG as Telegram

    UI->>API: POST/PUT /v1/cron (create/update)
    API->>Repo: save(cron.json)
    Repo-->>API: ok
    API-->>UI: success

    loop scheduler every 30s or on notify
        Sched->>Sched: load state.cron_jobs
        alt job due
            Sched->>Agent: run_system_turn(prompt, skill_filter)
            Agent-->>Sched: TurnResult (text, suppress flags)
            alt should send
                Sched->>TG: send_to_last_chat(chunks)
                TG-->>Sched: success/error
            else skip send
                Sched-->>Sched: log skip
            end
            Sched->>Repo: save(updated cron.json with last_run_at)
            Repo-->>Sched: ok
        end
    end

    UI->>API: POST /v1/cron/{id}/test
    API->>Agent: run_system_turn(composed_prompt, skill_filter)
    Agent-->>API: TurnResult
    API->>TG: send_to_last_chat (optional)
    TG-->>API: result
    API-->>UI: CronTestResponse

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐇 I hopped in with a tiny plan,

Schedules set by paw and hand;
Skills pinned, dragged, and placed just right,
The bot replies by day and night.
The rabbit cheers — cron runs on!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): docstring coverage is 65.52%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the pull request title clearly and specifically describes the primary change: implementing cron job management features across the backend scheduler, API endpoints, and frontend UI components.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@coderabbitai Bot left a comment

Actionable comments posted: 7

🧹 Nitpick comments (5)
src/modules/cron/index.ts (1)

1-3: Watch for symbol collisions between ./api and ./types.

Two export * barrels will fail the build (TS2308) if either file ever re-exports a name that exists in the other (e.g. a helper named CronJob alongside the type). Not a defect today, just a trap for later refactors — consider explicit named re-exports if this surface is meant to be public/stable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/cron/index.ts` around lines 1 - 3, The two blanket re-exports
(export * from "./api" and export * from "./types") risk future TS2308
collisions if the modules ever export the same identifier (e.g., a CronJob
helper/type); replace these star exports with explicit named exports to make the
public surface stable and collision-safe: import the specific symbols you want
from "./api" (functions/classes) and from "./types" (type-only exports), then
re-export them using explicit export { Foo, Bar } and export type { FooType,
BarType } patterns; keep the existing named export for CronPanel as-is.
src-tauri/src/modules/cron/scheduler.rs (1)

51-64: Consider rate-limiting the "no Telegram chat known" log.

When due jobs exist but last_chat_id is None, this branch emits a log on every 30-second tick (and on every cron_notify wake) until the user first messages the bot. On an idle install with a few enabled jobs, that's a continuous stream of near-identical log lines. A simple AtomicBool/timestamp gate so the warning fires at most once per "unknown chat" episode (and again after last_chat_id transitions back to None) would keep the signal without the noise.
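The once-per-episode gating described here can be sketched with a single atomic flag. This is an illustrative sketch only: the UnknownChatGate name and its API are hypothetical, not code from this PR, and in the real scheduler the flag would live on AppState.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Hypothetical gate: lets the "no Telegram chat known" warning fire at most
/// once per unknown-chat episode. Resets when a chat becomes known again.
struct UnknownChatGate {
    warned: AtomicBool,
}

impl UnknownChatGate {
    fn new() -> Self {
        Self { warned: AtomicBool::new(false) }
    }

    /// Returns true only on the first tick of an episode with no known chat.
    fn should_warn(&self, chat_known: bool) -> bool {
        if chat_known {
            // Chat became known: reset so the next unknown episode warns again.
            self.warned.store(false, Ordering::Relaxed);
            false
        } else {
            // swap returns the previous value; warn only if we hadn't already.
            !self.warned.swap(true, Ordering::Relaxed)
        }
    }
}
```

The scheduler loop would call `gate.should_warn(last_chat_id.is_some())` before `emit_log`, so repeated 30-second ticks produce a single warning per episode.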

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/modules/cron/scheduler.rs` around lines 51 - 64, The log spam
when last_chat_id.read().await.is_none() occurs should be rate-limited: add a
small state flag/timestamp (e.g., an AtomicBool or an Option<Instant> stored on
state, e.g., last_chat_known_warned or last_unknown_chat_warn_at) checked before
calling state.emit_log("cron", ...) so the message is emitted at most once per
unknown-chat episode (or at most once per N minutes), and update/reset that
flag/timestamp when last_chat_id transitions from None -> Some (clear the flag)
or from Some -> None (start a new episode); modify the code around
last_chat_id.read() and emit_log to consult and set this gate so repeated
ticks/crons don’t produce identical log lines.
src-tauri/src/modules/cron/repository.rs (1)

23-31: Non-atomic write can corrupt cron.json on crash.

std::fs::write performs a direct open(O_TRUNC) + write + close. If the process is killed (or the host loses power) between truncate and the final write, cron.json is left truncated or empty, and on next startup load() will return CronFile::default() — silently dropping every cron job the user configured. Given this is invoked from every CRUD/toggle handler and from mark_ran after each firing, the exposure isn't hypothetical.

Use the standard write-temp-then-rename pattern (the same-directory rename is atomic on POSIX and on Windows via ReplaceFileW/MoveFileEx).

♻️ Proposed atomic write
 pub fn save(path: &Path, file: &CronFile) -> Result<(), String> {
-    if let Some(parent) = path.parent() {
-        std::fs::create_dir_all(parent)
-            .map_err(|e| format!("create parent dirs for cron.json: {e}"))?;
-    }
+    let parent = path.parent().unwrap_or_else(|| Path::new("."));
+    std::fs::create_dir_all(parent)
+        .map_err(|e| format!("create parent dirs for cron.json: {e}"))?;
     let pretty =
         serde_json::to_string_pretty(file).map_err(|e| format!("encode cron.json: {e}"))?;
-    std::fs::write(path, pretty).map_err(|e| format!("write cron.json: {e}"))
+    let tmp = parent.join(".cron.json.tmp");
+    std::fs::write(&tmp, pretty).map_err(|e| format!("write cron.json tmp: {e}"))?;
+    std::fs::rename(&tmp, path).map_err(|e| format!("rename cron.json: {e}"))
 }

Consider a crate like tempfile::persist or atomicwrites if you prefer a battle-tested helper (handles fsync on the temp file and the parent directory, which the snippet above skips).

Additionally, repository::save is invoked concurrently from the scheduler (mark_ran) and HTTP CRUD handlers with no shared mutex. Two simultaneous writers can race on the rename; more importantly, last-writer-wins may drop a just-completed CRUD change if the scheduler's snapshot was read a moment earlier. Worth considering a module-level Mutex around load/mutate/save for cron persistence.
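For reference, the write-temp-then-rename pattern can be sketched with only the standard library. The save_atomic name and fixed temp filename are illustrative; a hardened version would use tempfile::NamedTempFile::persist (or the atomicwrites crate) and also fsync the parent directory, as noted above.

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

/// Sketch of an atomic save: write to a sibling temp file, fsync it, then
/// rename over the target. Readers see either the old or the new file,
/// never a truncated one.
fn save_atomic(path: &Path, contents: &str) -> Result<(), String> {
    let parent = path.parent().unwrap_or_else(|| Path::new("."));
    fs::create_dir_all(parent).map_err(|e| format!("create parent dirs: {e}"))?;
    let tmp = parent.join(".cron.json.tmp");
    let mut f = File::create(&tmp).map_err(|e| format!("create tmp: {e}"))?;
    f.write_all(contents.as_bytes())
        .map_err(|e| format!("write tmp: {e}"))?;
    // Flush contents to disk before the rename makes them visible.
    f.sync_all().map_err(|e| format!("fsync tmp: {e}"))?;
    // Same-directory rename is atomic on POSIX.
    fs::rename(&tmp, path).map_err(|e| format!("rename: {e}"))
}
```

Note the fixed `.cron.json.tmp` name is itself a reason to prefer tempfile's unique names once concurrent writers are possible.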

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/modules/cron/repository.rs` around lines 23 - 31, The
save(Path, CronFile) function is doing a non-atomic std::fs::write which can
truncate cron.json mid-update and must be replaced with an atomic
write-then-rename: create a temp file in the same directory (e.g., via
tempfile::NamedTempFile or atomicwrites), write the pretty JSON to it, flush and
fsync the temp file, atomically rename/persist it over the target path, and
fsync the parent directory; alternatively use tempfile::NamedTempFile::persist
or the atomicwrites crate which handle these steps. Also serialize concurrent
access to the repo (load/mutate/save/mark_ran) with a module-level Mutex or
other lock so two writers cannot race and lose changes; update repository::save
and callers (including mark_ran and CRUD handlers) to use the mutex-protected
code path.
src-tauri/src/modules/skills/service.rs (1)

411-433: Minor: use gather_skills_sorted here to avoid an unnecessary order pass.

list_skills applies .skill_order.json ordering, which is irrelevant when you're just building a lowercase-slug lookup for deduplication. Swapping to gather_skills_sorted avoids reading the order file and the sort on every cron canonicalize call:

♻️ Proposed tweak
 pub fn canonicalize_skill_slug_list(store_path: &Path, requested: &[String]) -> Vec<String> {
-    let by_lower: HashMap<String, String> = list_skills(store_path)
+    let by_lower: HashMap<String, String> = gather_skills_sorted(store_path)
         .into_iter()
         .map(|s| (s.slug.to_lowercase(), s.slug))
         .collect();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/modules/skills/service.rs` around lines 411 - 433, The
canonicalize_skill_slug_list function currently calls list_skills to build the
by_lower lookup which triggers unnecessary .skill_order.json reads and sorting;
change that call to gather_skills_sorted so you still get all existing skills
but avoid the order pass. Update the line that builds by_lower (currently using
list_skills(store_path).into_iter()...) to use
gather_skills_sorted(store_path).into_iter() and keep the rest of
canonicalize_skill_slug_list (seen, out, key logic) unchanged.
src-tauri/src/infrastructure/http_server.rs (1)

2057-2085: Dead found check — the let Some(...) else { return } branch already handles the missing case.

Inside the block we return Err(not_found(...)) when the job is missing, so found is unconditionally true on the happy path. The if !found { ... } at Line 2070-2072 is unreachable.

♻️ Proposed simplification
-    let found = {
-        let mut jobs = state.cron_jobs.write().await;
-        let Some(job) = jobs.iter_mut().find(|j| j.id == id) else {
-            return Err(not_found(format!("cron job '{id}' not found")));
-        };
-        job.enabled = body.enabled;
-        true
-    };
-    if !found {
-        return Err(not_found(format!("cron job '{id}' not found")));
-    }
+    {
+        let mut jobs = state.cron_jobs.write().await;
+        let Some(job) = jobs.iter_mut().find(|j| j.id == id) else {
+            return Err(not_found(format!("cron job '{id}' not found")));
+        };
+        job.enabled = body.enabled;
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/infrastructure/http_server.rs` around lines 2057 - 2085, The
variable `found` and the subsequent `if !found { ... }` are dead code inside
`handle_cron_set_enabled` because the `let Some(job) = ... else { return
Err(not_found(...)) }` already returns on miss; remove `found` entirely, mutate
`job.enabled = body.enabled` directly inside the `write().await` block (no
boolean result needed), and delete the unreachable check (`if !found { ... }`);
keep the calls to `persist_cron(&state).await.map_err(internal)?`,
`state.cron_notify.notify_waiters()`, and `state.emit_log(...)` as-is so the
function behavior is unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@package.json`:
- Around line 33-40: The peer-dependency warning from `@dnd-kit/core` and
`@dnd-kit/sortable` when installing with react@19.1.0 is expected and
non-blocking; update the PR by adding a short note (referencing `@dnd-kit/core`,
`@dnd-kit/sortable`, and react@19.1.0) to the change/PR description and the repo
README explaining that this is a known peerWarning, that these packages are
confirmed to work with React 19, and provide recommended mitigations (e.g.,
using --legacy-peer-deps, using skipLibCheck: true in tsconfig, or adding
suppressHydrationWarning fixes) so reviewers and users know why the warning can
be safely ignored.

In `@src-tauri/src/infrastructure/http_server.rs`:
- Around line 1979-2085: The in-memory cron list is mutated before persisting so
a persist_cron failure leaves memory inconsistent; for handle_cron_create,
handle_cron_update, handle_cron_delete and handle_cron_set_enabled make each
handler roll back the in-memory change when persist_cron returns an error: for
handle_cron_create, push the job, call persist_cron, and on error remove the job
with the generated job.id and return the error (do not call notify_waiters or
emit_log); for handle_cron_update, clone the original CronJob (or its fields)
before mutating, call persist_cron, and on error restore the original job in
place and return the error; for handle_cron_delete, snapshot the removed
CronJob(s) or index before retain/remove, call persist_cron, and on error
re-insert the removed job(s) into state.cron_jobs and return the error; for
handle_cron_set_enabled, save the previous enabled value (or clone the job),
call persist_cron, and on error restore the previous enabled state and return
the error; in all cases ensure notify_waiters() and emit_log() are only called
after a successful persist_cron.

In `@src-tauri/src/modules/cron/service.rs`:
- Around line 61-72: The current is_no_message_reply(text: &str) treats
empty/whitespace strings as the "no message" sentinel and causes scheduler.rs to
silently skip delivery; change is_no_message_reply to only return true for
explicit sentinel tokens (NO_MESSAGE_SENTINEL, "no-message", "no message",
"<no_message>", "no_message") and remove the early t.is_empty() -> true path,
and then update scheduler.rs to detect an empty trimmed reply from the model as
an error case (log or surface it) instead of suppressing delivery so transient
inference/transport failures are visible.
- Around line 140-191: The tests mutate process-global TZ via
ensure_daily_tests_use_utc which is fragile; instead remove that function and
any std::env::set_var calls and make tests deterministic by either (A)
constructing UTC-aware datetimes with FixedOffset or Utc using the helper dt and
asserting is_due(&s, ...) with those UTC DateTime values, or (B) modify
is_due/next_due to accept an explicit timezone parameter and pass Utc (or a
FixedOffset) from the tests (update calls in tests like
daily_at_waits_until_target_time and daily_at_rolls_to_tomorrow_after_firing to
pass the explicit tz); refer to symbols ensure_daily_tests_use_utc, dt,
Schedule::DailyAt, and is_due when applying the change.

In `@src-tauri/src/modules/mcp/native.rs`:
- Around line 458-481: The function set_cron_enabled performs a synchronous
filesystem write via crate::modules::cron::repository::save from an async
context which can block the runtime; change the call to run on a blocking thread
(e.g. wrap the save call in tokio::task::spawn_blocking or an equivalent
executor) or make repository::save async, invoking the new async/blocking
wrapper from set_cron_enabled after building the CronFile (references:
set_cron_enabled, state.cron_jobs, state.cron_path,
crate::modules::cron::repository::save, and state.cron_notify).

In `@src/modules/cron/components/CronPanel.tsx`:
- Around line 139-145: Update the validation error text in CronPanel.tsx to
reflect local time rather than UTC: where the code checks draft.schedule (the
block that uses hour/minute and calls setFormError("Time must be HH:MM in
24-hour UTC")), change the message to indicate local 24-hour clock (e.g., "Time
must be HH:MM in 24-hour local time") so it matches CronDailyLocalTimePicker and
the Schedule::DailyAt behavior; ensure you only alter the string passed to
setFormError and leave the hour/minute validation logic unchanged.

In `@src/modules/skills/components/SkillsPanel.tsx`:
- Around line 122-131: The drag handle button in SkillsPanel.tsx (the element
using {...attributes} and {...listeners} and aria-label={`Drag to reorder
${skill.slug}`}) removes the focus outline via "outline-none", which hides focus
for keyboard users using KeyboardSensor/sortableKeyboardCoordinates; restore a
visible keyboard focus by replacing or augmenting the className with a
keyboard-only focus style (e.g., add a focus-visible ring class such as
"focus-visible:ring-2 focus-visible:ring-offset-1 focus-visible:ring-primary" or
similar) so the handle shows a visible focus ring when focused via keyboard
while keeping the visual style for mouse users.
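The sentinel-only check proposed for src-tauri/src/modules/cron/service.rs above (drop the empty-string early return so blank replies surface as errors) might look like the following. The constant value and exact token list are illustrative, not the PR's actual definitions.

```rust
/// Illustrative sentinel value; the PR defines its own NO_MESSAGE_SENTINEL.
const NO_MESSAGE_SENTINEL: &str = "<no_message>";

/// True only for explicit "no message" sentinel tokens. Empty or
/// whitespace-only replies are deliberately NOT sentinels, so the
/// scheduler can treat them as errors instead of silently skipping
/// delivery.
fn is_no_message_reply(text: &str) -> bool {
    let t = text.trim().to_lowercase();
    t == NO_MESSAGE_SENTINEL
        || matches!(t.as_str(), "no_message" | "no-message" | "no message")
}
```

With this shape, the scheduler's skip branch fires only on a deliberate sentinel, and an empty trimmed reply falls through to an error/log path where transient inference failures stay visible.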

---

Nitpick comments:
In `@src-tauri/src/infrastructure/http_server.rs`:
- Around line 2057-2085: The variable `found` and the subsequent `if !found {
... }` are dead code inside `handle_cron_set_enabled` because the `let Some(job)
= ... else { return Err(not_found(...)) }` already returns on miss; remove
`found` entirely, mutate `job.enabled = body.enabled` directly inside the
`write().await` block (no boolean result needed), and delete the unreachable
check (`if !found { ... }`); keep the calls to
`persist_cron(&state).await.map_err(internal)?`,
`state.cron_notify.notify_waiters()`, and `state.emit_log(...)` as-is so the
function behavior is unchanged.

In `@src-tauri/src/modules/cron/repository.rs`:
- Around line 23-31: The save(Path, CronFile) function is doing a non-atomic
std::fs::write which can truncate cron.json mid-update and must be replaced with
an atomic write-then-rename: create a temp file in the same directory (e.g., via
tempfile::NamedTempFile or atomicwrites), write the pretty JSON to it, flush and
fsync the temp file, atomically rename/persist it over the target path, and
fsync the parent directory; alternatively use tempfile::NamedTempFile::persist
or the atomicwrites crate which handle these steps. Also serialize concurrent
access to the repo (load/mutate/save/mark_ran) with a module-level Mutex or
other lock so two writers cannot race and lose changes; update repository::save
and callers (including mark_ran and CRUD handlers) to use the mutex-protected
code path.

In `@src-tauri/src/modules/cron/scheduler.rs`:
- Around line 51-64: The log spam when last_chat_id.read().await.is_none()
occurs should be rate-limited: add a small state flag/timestamp (e.g., an
AtomicBool or an Option<Instant> stored on state, e.g., last_chat_known_warned
or last_unknown_chat_warn_at) checked before calling state.emit_log("cron", ...)
so the message is emitted at most once per unknown-chat episode (or at most once
per N minutes), and update/reset that flag/timestamp when last_chat_id
transitions from None -> Some (clear the flag) or from Some -> None (start a new
episode); modify the code around last_chat_id.read() and emit_log to consult and
set this gate so repeated ticks/crons don’t produce identical log lines.

In `@src-tauri/src/modules/skills/service.rs`:
- Around line 411-433: The canonicalize_skill_slug_list function currently calls
list_skills to build the by_lower lookup which triggers unnecessary
.skill_order.json reads and sorting; change that call to gather_skills_sorted so
you still get all existing skills but avoid the order pass. Update the line that
builds by_lower (currently using list_skills(store_path).into_iter()...) to use
gather_skills_sorted(store_path).into_iter() and keep the rest of
canonicalize_skill_slug_list (seen, out, key logic) unchanged.

In `@src/modules/cron/index.ts`:
- Around line 1-3: The two blanket re-exports (export * from "./api" and export
* from "./types") risk future TS2308 collisions if the modules ever export the
same identifier (e.g., a CronJob helper/type); replace these star exports with
explicit named exports to make the public surface stable and collision-safe:
import the specific symbols you want from "./api" (functions/classes) and from
"./types" (type-only exports), then re-export them using explicit export { Foo,
Bar } and export type { FooType, BarType } patterns; keep the existing named
export for CronPanel as-is.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b5058d63-fcc0-42a3-af72-8949ee25c5bf

📥 Commits

Reviewing files that changed from the base of the PR and between 0a19e31 and 971edf8.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (28)
  • package.json
  • src-tauri/src/app.rs
  • src-tauri/src/infrastructure/http_server.rs
  • src-tauri/src/modules/bot/agent.rs
  • src-tauri/src/modules/bot/service.rs
  • src-tauri/src/modules/cron/mod.rs
  • src-tauri/src/modules/cron/repository.rs
  • src-tauri/src/modules/cron/scheduler.rs
  • src-tauri/src/modules/cron/service.rs
  • src-tauri/src/modules/cron/types.rs
  • src-tauri/src/modules/mcp/native.rs
  • src-tauri/src/modules/mcp/service.rs
  • src-tauri/src/modules/mod.rs
  • src-tauri/src/modules/skills/service.rs
  • src-tauri/src/modules/skills/types.rs
  • src-tauri/src/shared/state.rs
  • src-tauri/tests/skills_brave_gate.rs
  • src/modules/cron/api/index.ts
  • src/modules/cron/components/CronDailyLocalTimePicker.tsx
  • src/modules/cron/components/CronFormPinnedSkills.tsx
  • src/modules/cron/components/CronJobCard.tsx
  • src/modules/cron/components/CronPanel.tsx
  • src/modules/cron/index.ts
  • src/modules/cron/types.ts
  • src/modules/skills/api/index.ts
  • src/modules/skills/components/SkillsPanel.tsx
  • src/modules/skills/types.ts
  • src/pages/DashboardPage.tsx

Comment thread package.json
Comment on lines +33 to +40
"@dnd-kit/core": "^6.3.1",
"@dnd-kit/modifiers": "^9.0.0",
"@dnd-kit/sortable": "^10.0.0",
"@dnd-kit/utilities": "^3.2.2",
"@radix-ui/react-accordion": "^1.2.12",
"@radix-ui/react-menubar": "^1.1.16",
"@radix-ui/react-scroll-area": "^1.2.10",
"@radix-ui/react-select": "^2.2.6",
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

@dnd-kit/core 6.3 and @dnd-kit/sortable 10 React 19 peer dependency support

💡 Result:

No, @dnd-kit/core 6.3.x and @dnd-kit/sortable 10.x do not explicitly declare React 19 in their peer dependencies. Both packages specify "react": ">=16.8.0" and "react-dom": ">=16.8.0" in their peerDependencies on npm, which technically allows React 19 but does not explicitly confirm full compatibility or testing. However, evidence indicates practical React 19 support:
  • Users report successful usage with React 19 and Next.js 15, sometimes requiring workarounds like --legacy-peer-deps, skipLibCheck, or hydration fixes ("use client", suppressHydrationWarning).
  • Recent fixes in the repository (e.g., useRef readonly errors for React 19 type compatibility, JSX namespace wrapping) were merged before these versions' release (Dec 2024).
  • No breaking issues have been reported post-release; a StackOverflow example shows @dnd-kit/core@6.3.1 installing without a React peer conflict alongside React 19.
The library has since been rewritten (the new @dnd-kit/react@0.4.0 etc. with explicit React 19 fixes), but for these specific legacy versions, compatibility is high-confidence in practice despite the non-explicit peer deps.


@dnd-kit peer dependency warning is expected but not blocking with React 19.

@dnd-kit/core@6.3.1 and @dnd-kit/sortable@10.0.0 declare only react>=16.8.0 in their peerDependencies, so npm will likely warn when installing with react@19.1.0. Despite this technical mismatch, practical compatibility is solid—users successfully run these versions with React 19, and no breaking issues have been reported post-release. If peer warnings appear during install, --legacy-peer-deps, skipLibCheck: true, or hydration fixes (e.g., suppressHydrationWarning) can suppress them. No evidence of React 19 strict mode double-invocation causing regressions in sortable behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@package.json` around lines 33 - 40, The peer-dependency warning from
`@dnd-kit/core` and `@dnd-kit/sortable` when installing with react@19.1.0 is
expected and non-blocking; update the PR by adding a short note (referencing
`@dnd-kit/core`, `@dnd-kit/sortable`, and react@19.1.0) to the change/PR description
and the repo README explaining that this is a known peerWarning, that these
packages are confirmed to work with React 19, and provide recommended
mitigations (e.g., using --legacy-peer-deps, using skipLibCheck: true in
tsconfig, or adding suppressHydrationWarning fixes) so reviewers and users know
why the warning can be safely ignored.

Further comment threads (collapsed):
  • src-tauri/src/infrastructure/http_server.rs
  • src-tauri/src/modules/cron/service.rs
  • src-tauri/src/modules/cron/service.rs (outdated)
  • src-tauri/src/modules/mcp/native.rs
  • src/modules/cron/components/CronPanel.tsx
  • src/modules/skills/components/SkillsPanel.tsx
- Added a note in the README regarding peer dependency warnings for `@dnd-kit/core` and `@dnd-kit/sortable` with React 19.1.0.
- Refactored cron job management functions to enhance error handling during job creation, update, and deletion, ensuring jobs are retained in case of persistence failures.
- Improved user feedback in the CronPanel component by clarifying time format error messages.
- Enhanced the SkillsPanel component's drag-and-drop interface with additional focus styles for better accessibility.
- Introduced a new macOS configuration file for Tauri to streamline the build process.

Made-with: Cursor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 8

🧹 Nitpick comments (2)
src-tauri/tauri.macos.conf.json (1)

4-4: Minor: redundant error suppression and double shell wrapping in beforeBundleCommand.

rm -f already ignores missing files and exits successfully, and Tauri 2.0 automatically wraps beforeBundleCommand in a shell invocation (sh -c on Unix/macOS). The explicit sh -c wrapper and 2>/dev/null; true are redundant. Simplify to:

♻️ Suggested simplification
-    "beforeBundleCommand": "sh -c 'rm -f target/release/bundle/macos/*.dmg target/release/bundle/dmg/*.dmg 2>/dev/null; true'"
+    "beforeBundleCommand": "rm -f target/release/bundle/macos/*.dmg target/release/bundle/dmg/*.dmg"

Glob expansion (*.dmg) will still work correctly because Tauri's automatic shell wrapping handles it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/tauri.macos.conf.json` at line 4, The beforeBundleCommand string is
overly wrapped and suppresses errors redundantly; update the
"beforeBundleCommand" value to remove the explicit "sh -c" wrapper and the
trailing "2>/dev/null; true" so it simply runs rm -f
target/release/bundle/macos/*.dmg target/release/bundle/dmg/*.dmg, relying on rm
-f and Tauri's automatic shell wrapping (locate the beforeBundleCommand entry in
tauri.macos.conf.json to change the value used during bundling).
src-tauri/src/modules/skills/service.rs (1)

334-404: Consider documenting the empty-slug_filter semantics.

Some(&[]) is intentionally treated the same as None here (filtered_run stays false, no retain/reorder, gate still applies). That's sensible but surprising: cron callers passing an empty Vec<String> will fall back to normal gating rather than running unfiltered-without-gate. Worth a one-liner in the doc comment so future callers don't rely on "pass Some(&[]) to disable gating".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/modules/skills/service.rs` around lines 334 - 404, The function
skills_prompt_hint_for_turn treats Some(&[]) the same as None (the local
filtered_run remains false so gating via skill_passes_hint_gate still applies),
which is surprising; update the doc comment above skills_prompt_hint_for_turn to
add a one-line note explaining that slug_filter==Some(&[]) is intentionally
treated as "no filter" (i.e., behaves like None) and that callers should pass
Some(non_empty_slice) to apply explicit filtering or None to disable filtering,
so cron callers don't rely on passing an empty slice to change gating behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src-tauri/src/infrastructure/http_server.rs`:
- Around line 2105-2165: The call to bot_agent::run_system_turn inside
handle_cron_test can hang indefinitely; wrap that await in
tokio::time::timeout(...) with a bounded Duration (e.g., 30s or a configured
timeout) and handle Err(Elapsed) by logging via state.emit_log and returning an
appropriate timeout response (StatusCode::GATEWAY_TIMEOUT or
StatusCode::REQUEST_TIMEOUT with Json<ErrorResponse>), while keeping the
successful path identical; specifically replace await on
bot_agent::run_system_turn(&state, &prompt, skills_filter).await with a timeout
wrapper, map timeout into an early Err(...) returned from handle_cron_test, and
only proceed to use turn.text and telegram logic when the timeout did not
elapse.
- Around line 1945-1950: persist_cron currently performs synchronous filesystem
I/O on the async runtime and races with other saves; fix it by adding a
dedicated save mutex to AppState (e.g., cron_save_mutex: Arc<Mutex<()>>) and
acquiring that mutex around creating the snapshot and the subsequent blocking
save, then run the actual disk write via tokio::task::spawn_blocking calling
cron_repository::save; apply the same pattern to
modules/cron/scheduler.rs::mark_ran and modules/mcp/native.rs::set_cron_enabled
so they also lock cron_save_mutex before snapshotting and before calling
spawn_blocking(cron_repository::save) to ensure in-memory write order matches
on-disk order.
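As a language-neutral illustration of the bounded-timeout pattern described for handle_cron_test (the Rust fix would use tokio::time::timeout), here is a minimal TypeScript sketch that races the work against a timer. All names here are hypothetical, not from the PR:

```typescript
// Race a long-running promise against a timer so a handler cannot hang forever.
class TimeoutError extends Error {}

function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new TimeoutError(`timed out after ${ms}ms`)),
      ms,
    );
  });
  // Whichever settles first wins; always clear the timer afterwards so a
  // fast result does not leave a dangling timeout behind.
  return Promise.race([work, timeout]).finally(() => clearTimeout(timer));
}
```

The caller maps a TimeoutError to an error response (in the Rust handler, a 504/408 with a JSON body) while the success path stays unchanged.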

In `@src-tauri/src/modules/cron/scheduler.rs`:
- Around line 20-33: The scheduler currently only observes state.shutdown_notify
in run() between ticks so long-running work inside
tick()/execute_job()/run_system_turn() blocks shutdown; update tick() to check
state.shutdown_notify.notified().now_or_never() between iterating over due jobs
and skip starting any further jobs if shutdown was requested, and for extra
safety race each long-running per-job call (execute_job / run_system_turn)
against state.shutdown_notify via tokio::select! so the job can be aborted or
cancelled promptly when shutdown is signalled.
- Around line 79-165: The code unconditionally calls mark_ran (advancing
last_run_at) even on agent::run_system_turn errors and performs synchronous fs
I/O in repository::save; change execute_job so mark_ran is only invoked for
successful runs (or explicitly-classified "ran" errors) — i.e. call
mark_ran(state, &job.id).await only inside the Ok(turn) branch after a
successful send (or where you define success), and surface the error from
run_system_turn instead of advancing last_run_at on Err; refactor mark_ran to
perform saving on a blocking thread by wrapping repository::save(...) in
tokio::task::spawn_blocking and await it, and make it use the same save mutex
used by persist_cron (follow the persist_cron concurrency pattern) so cron file
writes are serialized and last-writer-wins are preserved.
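Purely as a sketch of the cooperative-cancellation idea above (the Rust scheduler would use state.shutdown_notify with tokio::select!), here is a TypeScript analog using AbortController: the loop checks the signal between jobs and races each job against it. Names are illustrative:

```typescript
// Run due jobs until a shutdown signal fires: skip starting further jobs
// once aborted, and race each in-flight job against the abort event.
async function runDueJobs(
  jobs: Array<() => Promise<void>>,
  signal: AbortSignal,
): Promise<number> {
  let started = 0;
  for (const job of jobs) {
    if (signal.aborted) break; // do not start new work after shutdown
    started++;
    await Promise.race([
      job(),
      new Promise<void>((resolve) =>
        signal.addEventListener("abort", () => resolve(), { once: true }),
      ),
    ]);
  }
  return started;
}
```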

In `@src/modules/cron/components/CronPanel.tsx`:
- Around line 61-64: The current scalar states togglingId, testingId, and
deletingId cause races when multiple card actions run concurrently; replace each
with a Set<string> (e.g., togglingIds, testingIds, deletingIds) managed via
useState and update helpers (instead of
setTogglingId/setTestingId/setDeletingId) that add the job id when an operation
starts and remove it when it completes or errors, and update all UI/logic checks
(where code checks equality to togglingId/testingId/deletingId) to test Set
membership so each card shows its own busy state independently; ensure all
start/finish paths (including error paths) use the add/remove helpers so no id
is left stuck in the Set.
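A minimal sketch of that Set-based busy-state pattern, written outside React for brevity (setTogglingIds mimics a useState updater; all names are illustrative):

```typescript
type SetUpdater = (prev: Set<string>) => Set<string>;

// Stand-in for React state: in the component this would be useState's setter.
let togglingIds = new Set<string>();
const setTogglingIds = (update: SetUpdater) => {
  togglingIds = update(togglingIds);
};

// Add/remove helpers that always produce a new Set (required for React
// to detect the change).
const addBusy = (id: string) => setTogglingIds((prev) => new Set(prev).add(id));
const removeBusy = (id: string) =>
  setTogglingIds((prev) => {
    const next = new Set(prev);
    next.delete(id);
    return next;
  });

async function toggleJob(id: string, doToggle: () => Promise<void>) {
  addBusy(id);
  try {
    await doToggle();
  } finally {
    removeBusy(id); // runs on success and on error, so no id is left stuck
  }
}
```

The try/finally guarantees the id is removed on every exit path, so two cards toggling concurrently each track their own busy flag via Set membership.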
- Around line 147-165: The save flow can close a different/newer draft when the
in-flight save resolves because it always calls closeForm() and load()
regardless of whether the active editingId changed; fix by capturing the current
editingId (e.g., const sessionId = editingId) before awaiting
updateCronJob/createCronJob and, after the await, check that editingId ===
sessionId (or that the form session token still matches) before calling
setNotice(), closeForm(), and load(); otherwise only setSaving(false) and return
or set form error as appropriate. Apply this same pattern to the other save
blocks that call setNotice/closeForm/load (the other occurrences around the
indicated ranges).
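The capture-then-compare guard can be sketched as follows (editingId, closeForm, and saveDraft are stand-in names, not the component's actual code):

```typescript
// Stand-ins for component state.
let editingId: string | null = null;
let formOpen = false;

const closeForm = () => {
  formOpen = false;
  editingId = null;
};

async function saveDraft(save: () => Promise<void>) {
  const sessionId = editingId; // capture before the await
  await save();
  // Only touch the UI if the same draft is still being edited; otherwise a
  // slow save resolving late would close a different, newer draft.
  if (editingId === sessionId) {
    closeForm();
  }
}
```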
- Around line 67-80: The load function can be racy because earlier
fetchCronJobs() responses can override later ones; fix by adding a
monotonically-increasing request id ref (e.g., fetchSeqRef) that you increment
before each call in load(), capture its value in the closure, and after awaiting
fetchCronJobs() verify that cancelledRef.current is false and the captured seq
matches fetchSeqRef.current before calling setJobs/setLastChatId/setError;
ensure the same check is used for refreshes and on unmount so stale responses
are ignored.
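A stripped-down sketch of that sequence-number guard (fetchSeq and jobs are stand-ins for the ref and state in the component):

```typescript
// Each load claims a fresh sequence number; only the response whose number
// still matches the latest one is allowed to update state.
let fetchSeq = 0;
let jobs: string[] = [];

async function load(fetchJobs: () => Promise<string[]>) {
  const seq = ++fetchSeq; // claim a fresh sequence number
  const result = await fetchJobs();
  if (seq !== fetchSeq) return; // a newer load started; drop this response
  jobs = result;
}
```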
- Around line 133-145: The schedule validators in CronPanel (the block checking
draft.schedule.kind === "every_minutes" and the else branch checking { hour,
minute }) currently use Number.isFinite which permits fractional values and
elsewhere parseInt truncates decimals; update validation to require integers by
using Number.isInteger for minute/hour checks and ensure the minutes input
parsing (the code that reads the minutes value around the minutes input
handling) uses event.target.valueAsNumber (or Number(value) followed by
Number.isInteger) instead of Number.parseInt so fractional inputs are rejected
and the saved values reflect exact integers; keep error messages via
setFormError unchanged but trigger them when Number.isInteger fails or
valueAsNumber is NaN/out of range.
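The integer-validation rule can be illustrated with two small hypothetical helpers: Number.isInteger rejects 9.5 where Number.isFinite would accept it, and Number() preserves the fraction that parseInt would silently truncate:

```typescript
// Hour must be 0-23 and minute 0-59, both exact integers.
function validDailyTime(hour: number, minute: number): boolean {
  return (
    Number.isInteger(hour) && hour >= 0 && hour <= 23 &&
    Number.isInteger(minute) && minute >= 0 && minute <= 59
  );
}

// Interval minutes: unlike parseInt ("7.5" -> 7), Number keeps the fraction,
// so Number.isInteger can reject it outright.
function parseMinutesInput(raw: string): number | null {
  const n = Number(raw);
  return Number.isInteger(n) && n >= 1 ? n : null;
}
```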

---

Nitpick comments:
In `@src-tauri/src/modules/skills/service.rs`:
- Around line 334-404: The function skills_prompt_hint_for_turn treats Some(&[])
the same as None (the local filtered_run remains false so gating via
skill_passes_hint_gate still applies), which is surprising; update the doc
comment above skills_prompt_hint_for_turn to add a one-line note explaining that
slug_filter==Some(&[]) is intentionally treated as "no filter" (i.e., behaves
like None) and that callers should pass Some(non_empty_slice) to apply explicit
filtering or None to disable filtering, so cron callers don't rely on passing an
empty slice to change gating behavior.

In `@src-tauri/tauri.macos.conf.json`:
- Line 4: The beforeBundleCommand string is overly wrapped and suppresses errors
redundantly; update the "beforeBundleCommand" value to remove the explicit "sh
-c" wrapper and the trailing "2>/dev/null; true" so it simply runs rm -f
target/release/bundle/macos/*.dmg target/release/bundle/dmg/*.dmg, relying on rm
-f and Tauri's automatic shell wrapping (locate the beforeBundleCommand entry in
tauri.macos.conf.json to change the value used during bundling).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 037c0f87-cbc7-4697-b4ba-ec58a98782eb

📥 Commits

Reviewing files that changed from the base of the PR and between 971edf8 and f17079e.

📒 Files selected for processing (13)
  • README.md
  • src-tauri/Cargo.toml
  • src-tauri/src/infrastructure/http_server.rs
  • src-tauri/src/modules/cron/repository.rs
  • src-tauri/src/modules/cron/scheduler.rs
  • src-tauri/src/modules/cron/service.rs
  • src-tauri/src/modules/mcp/native.rs
  • src-tauri/src/modules/skills/service.rs
  • src-tauri/src/shared/state.rs
  • src-tauri/tauri.macos.conf.json
  • src/modules/cron/components/CronPanel.tsx
  • src/modules/cron/index.ts
  • src/modules/skills/components/SkillsPanel.tsx
✅ Files skipped from review due to trivial changes (2)
  • README.md
  • src/modules/cron/index.ts
🚧 Files skipped from review as they are similar to previous changes (3)
  • src-tauri/src/shared/state.rs
  • src/modules/skills/components/SkillsPanel.tsx
  • src-tauri/src/modules/cron/repository.rs

Comment on lines +1945 to +1950
async fn persist_cron(state: &AppState) -> Result<(), String> {
let jobs = state.cron_jobs.read().await.clone();
let last_chat_id = *state.last_chat_id.read().await;
let file = CronFile { jobs, last_chat_id };
cron_repository::save(&state.cron_path, &file)
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Confirm every direct caller of cron_repository::save to size the fix surface.
rg -nP --type=rust -C3 '\bcron::?(?:repository|_repository)?::save\b|\brepository::save\b' -g '**/cron/**' -g '**/http_server.rs' -g '**/mcp/native.rs'

Repository: pengine-ai/pengine

Length of output: 1118


🏁 Script executed:

#!/bin/bash
# Find persist_cron definition in http_server.rs
rg -n 'fn persist_cron' src-tauri/src/infrastructure/http_server.rs -A 10

Repository: pengine-ai/pengine

Length of output: 560


🏁 Script executed:

#!/bin/bash
# Find all callers of persist_cron
rg -n 'persist_cron' src-tauri/src/ --type=rust

Repository: pengine-ai/pengine

Length of output: 566


🏁 Script executed:

#!/bin/bash
# Check scheduler.rs::mark_ran to confirm it lacks spawn_blocking
rg -n 'fn mark_ran' src-tauri/src/modules/cron/scheduler.rs -A 15

Repository: pengine-ai/pengine

Length of output: 619


🏁 Script executed:

#!/bin/bash
# Check AppState definition to see if cron_save_mutex exists
rg -n 'struct AppState' src-tauri/src/ --type=rust -A 30

Repository: pengine-ai/pengine

Length of output: 2818


persist_cron does sync fs I/O on the Tokio runtime, and concurrent save paths race on cron.json.

Two related issues in a single persistence primitive:

  1. cron_repository::save is synchronous (tempfile create + write + rename). The MCP set_cron_enabled path in modules/mcp/native.rs (Line 479-481) wraps it in tokio::task::spawn_blocking for exactly this reason; persist_cron should do the same. Additionally, modules/cron/scheduler.rs::mark_ran (Line 164) also calls repository::save directly without spawn_blocking, creating an asymmetric pattern. Both persist_cron and mark_ran block the async runtime on fs I/O.

  2. persist_cron and mark_ran each re-read state.cron_jobs under a fresh read() guard (or modify then release a write() guard) after releasing their lock, before calling save(). Nothing serializes concurrent saves across handle_cron_* (via persist_cron), set_cron_enabled (via MCP), and scheduler::mark_ran. Interleaving:

    • H1 writes job A, releases, reads snapshot_A = {A}, starts save.
    • H2 writes job B, releases, reads snapshot_B = {A, B}, saves → disk = {A, B}.
    • H1's save finishes → disk = {A}. B is lost.

    The window extends across the entire save duration since each call takes a fresh lock.

🛡️ Suggested direction

Introduce a dedicated save mutex in AppState (e.g. cron_save_mutex: Arc<Mutex<()>>) and held across the snapshot + spawn_blocking(save) in all three paths (persist_cron, mark_ran, and set_cron_enabled) so in-memory write order matches on-disk write order, and move the sync write off the async runtime:

 async fn persist_cron(state: &AppState) -> Result<(), String> {
-    let jobs = state.cron_jobs.read().await.clone();
-    let last_chat_id = *state.last_chat_id.read().await;
-    let file = CronFile { jobs, last_chat_id };
-    cron_repository::save(&state.cron_path, &file)
+    let _guard = state.cron_save_mutex.lock().await;
+    let jobs = state.cron_jobs.read().await.clone();
+    let last_chat_id = *state.last_chat_id.read().await;
+    let file = CronFile { jobs, last_chat_id };
+    let path = state.cron_path.clone();
+    tokio::task::spawn_blocking(move || cron_repository::save(&path, &file))
+        .await
+        .map_err(|e| format!("cron save task: {e}"))?
 }

Apply the same pattern to modules/cron/scheduler.rs::mark_ran and ensure modules/mcp/native.rs::set_cron_enabled takes the mutex before its spawn_blocking call.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/infrastructure/http_server.rs` around lines 1945 - 1950,
persist_cron currently performs synchronous filesystem I/O on the async runtime
and races with other saves; fix it by adding a dedicated save mutex to AppState
(e.g., cron_save_mutex: Arc<Mutex<()>>) and acquiring that mutex around creating
the snapshot and the subsequent blocking save, then run the actual disk write
via tokio::task::spawn_blocking calling cron_repository::save; apply the same
pattern to modules/cron/scheduler.rs::mark_ran and
modules/mcp/native.rs::set_cron_enabled so they also lock cron_save_mutex before
snapshotting and before calling spawn_blocking(cron_repository::save) to ensure
in-memory write order matches on-disk order.

Comment thread src-tauri/src/infrastructure/http_server.rs
Comment thread src-tauri/src/modules/cron/scheduler.rs
Comment thread src-tauri/src/modules/cron/scheduler.rs
Comment thread src/modules/cron/components/CronPanel.tsx Outdated
Comment thread src/modules/cron/components/CronPanel.tsx
Comment thread src/modules/cron/components/CronPanel.tsx
Comment thread src/modules/cron/components/CronPanel.tsx
- Updated the CronDailyLocalTimePicker component to use Number.isInteger for hour and minute validation, ensuring only valid integers are accepted.
- Refactored the CronPanel component to manage toggling, testing, and deleting of cron jobs using Sets for better state handling.
- Enhanced error handling for cron job scheduling, ensuring that time values are validated as integers and within acceptable ranges.
- Introduced a mutex for synchronizing access to cron job state, improving data integrity during concurrent operations.
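The mutex mentioned above is a tokio Mutex in the Rust code; as an illustration of the same serialization idea (snapshot and save run one at a time, so the final disk write always reflects the final in-memory state), here is a promise-chain lock in TypeScript. All names are stand-ins:

```typescript
// A promise chain acts as a FIFO lock: each save waits for the previous one,
// takes its snapshot only once it holds the "lock", then writes.
let saveChain: Promise<void> = Promise.resolve();
const savedSnapshots: string[][] = []; // stand-in for successive cron.json writes
let cronJobs: string[] = [];           // stand-in for in-memory cron state

function persistCron(): Promise<void> {
  const run = async () => {
    const snapshot = [...cronJobs]; // snapshot taken under the lock
    await new Promise<void>((r) => setTimeout(r, 5)); // simulated disk write
    savedSnapshots.push(snapshot);
  };
  // Chain onto the previous save; run even if the previous save failed so
  // one error cannot wedge the chain.
  saveChain = saveChain.then(run, run);
  return saveChain;
}
```

Because the snapshot is taken inside the lock, the interleaving from the review (an older snapshot overwriting a newer one on disk) cannot occur.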

Made-with: Cursor
@maximedogawa maximedogawa merged commit ef002ac into main Apr 18, 2026
1 check passed
@maximedogawa maximedogawa deleted the 36-feature-add-cron-job-management branch April 18, 2026 17:25


Development

Successfully merging this pull request may close these issues.

[Feature] Add Cron job management

1 participant