Problem
All pod-origin scheduled invocations in live OpenClaw desks are degrading in clawdash because `claw-api` repeatedly fails their wake command with `unknown cron job id`. After enough consecutive failures, the scheduler flips those invocations into degraded mode and only attempts about 10% of slots.
Observed live failure chain on April 9, 2026:
- The clawdash schedule page shows many invocations as degraded with `(~10% fire chance)`.
- `claw-api` logs show `schedule <id>: skipped (degraded-throttled)` after repeated wake failures.
- Manual repro inside an affected agent container:
  - `openclaw cron list` returns no jobs.
  - `openclaw cron run <job-id>` returns `GatewayClientRequestError: Error: unknown cron job id: <job-id>`.
- The generated runtime still contains the expected ids in `.claw-runtime/<service>/state/cron/jobs.json`.

This means the dashboard is correctly reflecting real scheduler degradation, not inventing it.
Root cause
Clawdapus's OpenClaw driver is still emitting pod-origin cron jobs in the older location and schema:
- Host/runtime path: `state/cron/jobs.json`
- Container mount target: `/app/state/cron`
- JSON shape: bare array `[{...}]`
The OpenClaw build currently running in these desks expects its cron store under config space and in a versioned envelope:
- Container path: `/app/config/cron/jobs.json`
- JSON shape: `{ "version": 1, "jobs": [...] }`
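For concreteness, a minimal store in the expected envelope might look like the fragment below; the individual job fields are assumptions for illustration, not OpenClaw's full cron schema.

```json
{
  "version": 1,
  "jobs": [
    { "id": "pod-origin-1", "schedule": "*/5 * * * *", "enabled": true }
  ]
}
```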
Because of that mismatch, OpenClaw never loads the pod-origin cron registry into memory. The wake adapter then calls `openclaw cron run <id>` against an empty registry, which fails with `unknown cron job id`. `claw-api` records those as wake failures, increments `consecutive_failures`, and eventually marks the invocation degraded.
Constraints
- Preserve the current scheduler behavior in `claw-api`; the bug is in the OpenClaw driver/runtime contract, not the schedule UI.
- Keep directory-level mounts for atomic writes. OpenClaw rewrites files by creating temp files adjacent to the target before renaming.
- Maintain deterministic invocation ids. Existing scheduled invocations should survive `claw up` without id churn.
- Do not regress non-pod-origin jobs. The driver still needs to preserve the enabled/disabled distinction introduced for scheduler-owned wake execution.
Scope
- Update the OpenClaw driver to write the cron store where the current runtime loads it.
- Update the emitted JSON shape to the current OpenClaw cron schema.
- Adjust OpenClaw driver tests and spike expectations away from `/app/state/cron/jobs.json` and bare-array parsing.
- Add regression coverage that would have caught the live mismatch.
Likely files
- internal/driver/openclaw/driver.go
- internal/driver/openclaw/jobs.go
- internal/driver/openclaw/driver_test.go
- internal/driver/openclaw/jobs_test.go
- cmd/claw/spike_test.go
- Any docs/examples that still assert the old `/app/state/cron/jobs.json` contract
Verification
- `go test ./internal/driver/openclaw/...`
- `go test ./cmd/claw/...` for any schedule/spike expectations that changed
- Re-run a pod so `openclaw cron list` shows the generated jobs and `openclaw cron run <id>` resolves instead of returning `unknown cron job id`.