diff --git a/agents/cross-model-reviewer.md b/agents/cross-model-reviewer.md
new file mode 100644
index 00000000..ef84c534
--- /dev/null
+++ b/agents/cross-model-reviewer.md
@@ -0,0 +1,108 @@
+# Cross-Model Reviewer Agent
+
+Orchestrates adversarial code review across multiple AI models (Codex + Claude) and computes consensus.
+
+## Purpose
+
+Provide higher-confidence code review by running independent reviews from different model families, then applying conservative consensus logic. If models agree, confidence is high. If they disagree, the conflict is surfaced for human decision.
+
+## Protocol
+
+### Step 1: Dispatch Codex Adversarial Review
+
+Run `flowctl codex adversarial --base <branch>` to get the Codex model's adversarial review. This model actively tries to break the code, looking for bugs, race conditions, security vulnerabilities, and edge cases.
+
+### Step 2: Dispatch Claude Review
+
+Write a structured review prompt and either:
+- Let the orchestrator (skill layer) invoke Claude directly, or
+- Pre-populate a result file at `$TMPDIR/flowctl-cross-model-claude-result.json`
+
+The Claude review focuses on correctness, security, performance, and maintainability.
+
+### Step 3: Compute Consensus
+
+Use `flowctl codex cross-model --base <branch>`, which:
+1. Runs both reviews
+2. Parses each into a `ModelReview` struct with verdict, findings, and confidence
+3. Applies the conservative consensus algorithm:
+ - All agree on SHIP → **Consensus(SHIP)** — safe to proceed
+ - Any says NEEDS_WORK → **Consensus(NEEDS_WORK)** — conservative block
+ - Mixed/unclear → **Conflict** — human must decide
+ - Insufficient data → **InsufficientReviews** — retry or escalate
+
+### Step 4: Store Results
+
+The combined review is saved to `.flow/reviews/cross-model-YYYYMMDD-HHMMSS.json` with:
+- Both model reviews (verdict, findings, confidence)
+- Consensus result
+- Timestamp and base branch
+- Path to the Claude prompt file (for audit)
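+
+An illustrative example of the stored file (field names follow the list above; the exact serialization of the consensus value is an assumption):
+
+```json
+{
+  "type": "cross_model_review",
+  "base": "main",
+  "focus": null,
+  "timestamp": "2025-01-01T12:00:00Z",
+  "models": [
+    { "model": "codex/gpt-5.4", "verdict": "SHIP", "confidence": 0.9, "findings": [] },
+    { "model": "claude/opus-4", "verdict": "SHIP", "confidence": 0.92, "findings": [] }
+  ],
+  "consensus": { "verdict": "SHIP", "confidence": 0.91 },
+  "claude_prompt_path": "/tmp/flowctl-cross-model-claude-prompt.txt"
+}
+```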
+
+## MCP Integration
+
+The `flowctl_review` MCP tool exposes cross-model review:
+
+```json
+{
+ "name": "flowctl_review",
+ "arguments": {
+ "base": "main",
+ "focus": "security"
+ }
+}
+```
+
+## Review Types
+
+### ReviewFinding
+Individual issue with severity (critical/warning/info), category, description, and optional file/line.
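+
+For example (illustrative values only):
+
+```json
+{
+  "severity": "warning",
+  "category": "concurrency",
+  "description": "Lock is held across an await point",
+  "file": "src/daemon.rs",
+  "line": 42
+}
+```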
+
+### ReviewVerdict
+- **SHIP**: Code is ready
+- **NEEDS_WORK**: Code needs fixes
+- **ABSTAIN**: Model cannot determine (excluded from consensus)
+
+### ConsensusResult
+- **Consensus**: All voting models agree (with averaged confidence)
+- **Conflict**: Models disagree (reviews included for inspection)
+- **InsufficientReviews**: Fewer than 2 reviews or all abstained
+
+## Usage
+
+```bash
+# Full cross-model review (JSON output)
+flowctl codex cross-model --base main --json
+
+# With focus area
+flowctl codex cross-model --base main --focus "authentication" --json
+
+# Via MCP
+echo '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"flowctl_review","arguments":{"base":"main"}}}' | flowctl mcp
+```
+
+## Pre-populated Claude Results
+
+For environments where Claude is already available (e.g., Claude Code), the orchestrating skill can pre-populate the Claude review result before invoking `flowctl codex cross-model`:
+
+```bash
+# Write Claude's review result
+cat > /tmp/flowctl-cross-model-claude-result.json << 'EOF'
+{
+ "model": "claude/opus-4",
+ "verdict": "SHIP",
+ "confidence": 0.92,
+ "review": "Code looks correct. No critical issues found."
+}
+EOF
+
+# Then run cross-model (will pick up the pre-populated result)
+flowctl codex cross-model --base main --json
+```
+
+## Design Decisions
+
+- **Conservative consensus**: Any NEEDS_WORK blocks, even if other models say SHIP. This prevents false confidence from a single agreeing model.
+- **Abstain handling**: Models that fail or cannot determine a verdict are excluded from the vote, not counted as disagreement.
+- **Two-model minimum**: Consensus requires at least 2 non-abstaining reviews.
+- **Structured findings**: Every finding has severity, category, and description — enabling automated triage and gap registration.
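The conservative voting rules described in the agent doc above can be sketched as standalone Rust. This is a hypothetical simplification, not the actual `flowctl_core::review_protocol` implementation — the real `ModelReview` structs also carry model names and findings, while only verdict and confidence matter for the vote itself:

```rust
// Hypothetical, simplified sketch of the conservative consensus rule.
#[derive(Clone, Copy, PartialEq, Debug)]
pub enum Verdict {
    Ship,
    NeedsWork,
    Abstain,
}

#[derive(Debug, PartialEq)]
pub enum ConsensusResult {
    Consensus(Verdict, f64), // agreed verdict + averaged confidence
    Conflict,
    InsufficientReviews,
}

pub fn consensus(reviews: &[(Verdict, f64)]) -> ConsensusResult {
    // Abstaining models are excluded from the vote, not counted as disagreement.
    let voting: Vec<&(Verdict, f64)> = reviews
        .iter()
        .filter(|(v, _)| *v != Verdict::Abstain)
        .collect();
    if voting.len() < 2 {
        return ConsensusResult::InsufficientReviews;
    }
    let avg = voting.iter().map(|(_, c)| *c).sum::<f64>() / voting.len() as f64;
    if voting.iter().all(|(v, _)| *v == Verdict::Ship) {
        return ConsensusResult::Consensus(Verdict::Ship, avg);
    }
    if voting.iter().any(|(v, _)| *v == Verdict::NeedsWork) {
        // Conservative: a single NEEDS_WORK blocks even if others say SHIP.
        return ConsensusResult::Consensus(Verdict::NeedsWork, avg);
    }
    ConsensusResult::Conflict
}
```

Note the precedence: the NEEDS_WORK check runs only after the all-SHIP check, so a mixed SHIP/NEEDS_WORK vote resolves to the conservative block rather than a conflict.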
diff --git a/flowctl/Cargo.lock b/flowctl/Cargo.lock
index 74865f53..74a4fafa 100644
--- a/flowctl/Cargo.lock
+++ b/flowctl/Cargo.lock
@@ -576,6 +576,26 @@ dependencies = [
"convert_case 0.11.0",
]
+[[package]]
+name = "core-foundation"
+version = "0.9.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "91e195e091a93c46f7102ec7818a2aa394e1e1771c3ab4825963fa03e45afb8f"
+dependencies = [
+ "core-foundation-sys",
+ "libc",
+]
+
+[[package]]
+name = "core-foundation"
+version = "0.10.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b2a6cd9ae233e7f62ba4e9353e81a88df7fc8a5987b8d445b4d90c879bd156f6"
+dependencies = [
+ "core-foundation-sys",
+ "libc",
+]
+
[[package]]
name = "core-foundation-sys"
version = "0.8.7"
@@ -808,6 +828,7 @@ dependencies = [
"flowctl-daemon",
"flowctl-db",
"flowctl-scheduler",
+ "flowctl-service",
"flowctl-web",
"leptos",
"leptos_axum",
@@ -849,11 +870,13 @@ dependencies = [
"flowctl-core",
"flowctl-db",
"flowctl-scheduler",
+ "flowctl-service",
"http-body-util",
"hyper",
"hyper-util",
"nix",
"notify",
+ "reqwest",
"rusqlite",
"serde",
"serde_json",
@@ -899,6 +922,21 @@ dependencies = [
"tracing",
]
+[[package]]
+name = "flowctl-service"
+version = "0.1.0"
+dependencies = [
+ "chrono",
+ "flowctl-core",
+ "flowctl-db",
+ "rusqlite",
+ "serde",
+ "serde_json",
+ "tempfile",
+ "thiserror 2.0.18",
+ "tracing",
+]
+
[[package]]
name = "flowctl-web"
version = "0.1.0"
@@ -912,14 +950,36 @@ dependencies = [
"serde_json",
"tokio",
"wasm-bindgen",
+ "web-sys",
]
+[[package]]
+name = "fnv"
+version = "1.0.7"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "3f9eec918d3f24069decb9af1554cad7c880e2da24a9afd88aca000531ab82c1"
+
[[package]]
name = "foldhash"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d9c4f5dac5e15c24eb999c26181a6ca40b39fe946cbe4c263c7209467bc83af2"
+[[package]]
+name = "foreign-types"
+version = "0.3.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f6f339eb8adc052cd2ca78910fda869aefa38d22d5cb648e6485e4d3fc06f3b1"
+dependencies = [
+ "foreign-types-shared",
+]
+
+[[package]]
+name = "foreign-types-shared"
+version = "0.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "00b0228411908ca8685dba7fc2cdd70ec9990a6e753e89b6ac91a84c40fbaf4b"
+
[[package]]
name = "form_urlencoded"
version = "1.2.2"
@@ -1036,6 +1096,17 @@ dependencies = [
"version_check",
]
+[[package]]
+name = "getrandom"
+version = "0.2.17"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ff2abc00be7fca6ebc474524697ae276ad847ad0a6b3faa4bcb027e9a4614ad0"
+dependencies = [
+ "cfg-if",
+ "libc",
+ "wasi",
+]
+
[[package]]
name = "getrandom"
version = "0.3.4"
@@ -1117,6 +1188,25 @@ version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "17e2ac29387b1aa07a1e448f7bb4f35b500787971e965b02842b900afa5c8f6f"
+[[package]]
+name = "h2"
+version = "0.4.13"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2f44da3a8150a6703ed5d34e164b875fd14c2cdab9af1252a9a1020bde2bdc54"
+dependencies = [
+ "atomic-waker",
+ "bytes",
+ "fnv",
+ "futures-core",
+ "futures-sink",
+ "http",
+ "indexmap",
+ "slab",
+ "tokio",
+ "tokio-util",
+ "tracing",
+]
+
[[package]]
name = "hashbrown"
version = "0.14.5"
@@ -1258,6 +1348,7 @@ dependencies = [
"bytes",
"futures-channel",
"futures-core",
+ "h2",
"http",
"http-body",
"httparse",
@@ -1269,24 +1360,61 @@ dependencies = [
"want",
]
+[[package]]
+name = "hyper-rustls"
+version = "0.27.7"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e3c93eb611681b207e1fe55d5a71ecf91572ec8a6705cdb6857f7d8d5242cf58"
+dependencies = [
+ "http",
+ "hyper",
+ "hyper-util",
+ "rustls",
+ "rustls-pki-types",
+ "tokio",
+ "tokio-rustls",
+ "tower-service",
+]
+
+[[package]]
+name = "hyper-tls"
+version = "0.6.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "70206fc6890eaca9fde8a0bf71caa2ddfc9fe045ac9e5c70df101a7dbde866e0"
+dependencies = [
+ "bytes",
+ "http-body-util",
+ "hyper",
+ "hyper-util",
+ "native-tls",
+ "tokio",
+ "tokio-native-tls",
+ "tower-service",
+]
+
[[package]]
name = "hyper-util"
version = "0.1.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "96547c2556ec9d12fb1578c4eaf448b04993e7fb79cbaad930a656880a6bdfa0"
dependencies = [
+ "base64",
"bytes",
"futures-channel",
"futures-util",
"http",
"http-body",
"hyper",
+ "ipnet",
"libc",
+ "percent-encoding",
"pin-project-lite",
"socket2",
+ "system-configuration",
"tokio",
"tower-service",
"tracing",
+ "windows-registry",
]
[[package]]
@@ -1497,6 +1625,22 @@ dependencies = [
"rustversion",
]
+[[package]]
+name = "ipnet"
+version = "2.12.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d98f6fed1fde3f8c21bc40a1abb88dd75e67924f9cffc3ef95607bad8017f8e2"
+
+[[package]]
+name = "iri-string"
+version = "0.7.12"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "25e659a4bb38e810ebc252e53b5814ff908a8c58c2a9ce2fae1bbec24cbf4e20"
+dependencies = [
+ "memchr",
+ "serde",
+]
+
[[package]]
name = "is_ci"
version = "1.2.0"
@@ -1959,6 +2103,23 @@ dependencies = [
"version_check",
]
+[[package]]
+name = "native-tls"
+version = "0.2.18"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "465500e14ea162429d264d44189adc38b199b62b1c21eea9f69e4b73cb03bbf2"
+dependencies = [
+ "libc",
+ "log",
+ "openssl",
+ "openssl-probe",
+ "openssl-sys",
+ "schannel",
+ "security-framework",
+ "security-framework-sys",
+ "tempfile",
+]
+
[[package]]
name = "next_tuple"
version = "0.1.0"
@@ -2057,6 +2218,50 @@ version = "1.70.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "384b8ab6d37215f3c5301a95a4accb5d64aa607f1fcb26a11b5303878451b4fe"
+[[package]]
+name = "openssl"
+version = "0.10.76"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "951c002c75e16ea2c65b8c7e4d3d51d5530d8dfa7d060b4776828c88cfb18ecf"
+dependencies = [
+ "bitflags 2.11.0",
+ "cfg-if",
+ "foreign-types",
+ "libc",
+ "once_cell",
+ "openssl-macros",
+ "openssl-sys",
+]
+
+[[package]]
+name = "openssl-macros"
+version = "0.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a948666b637a0f465e8564c73e89d4dde00d72d4d473cc972f390fc3dcee7d9c"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+]
+
+[[package]]
+name = "openssl-probe"
+version = "0.2.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7c87def4c32ab89d880effc9e097653c8da5d6ef28e6b539d313baaacfbafcbe"
+
+[[package]]
+name = "openssl-sys"
+version = "0.9.112"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "57d55af3b3e226502be1526dfdba67ab0e9c96fc293004e79576b2b9edb0dbdb"
+dependencies = [
+ "cc",
+ "libc",
+ "pkg-config",
+ "vcpkg",
+]
+
[[package]]
name = "or_poisoned"
version = "0.1.0"
@@ -2450,6 +2655,60 @@ version = "0.8.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dc897dd8d9e8bd1ed8cdad82b5966c3e0ecae09fb1907d58efaa013543185d0a"
+[[package]]
+name = "reqwest"
+version = "0.12.28"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "eddd3ca559203180a307f12d114c268abf583f59b03cb906fd0b3ff8646c1147"
+dependencies = [
+ "base64",
+ "bytes",
+ "encoding_rs",
+ "futures-core",
+ "h2",
+ "http",
+ "http-body",
+ "http-body-util",
+ "hyper",
+ "hyper-rustls",
+ "hyper-tls",
+ "hyper-util",
+ "js-sys",
+ "log",
+ "mime",
+ "native-tls",
+ "percent-encoding",
+ "pin-project-lite",
+ "rustls-pki-types",
+ "serde",
+ "serde_json",
+ "serde_urlencoded",
+ "sync_wrapper",
+ "tokio",
+ "tokio-native-tls",
+ "tower",
+ "tower-http",
+ "tower-service",
+ "url",
+ "wasm-bindgen",
+ "wasm-bindgen-futures",
+ "web-sys",
+]
+
+[[package]]
+name = "ring"
+version = "0.17.14"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a4689e6c2294d81e88dc6261c768b63bc4fcdb852be6d1352498b114f61383b7"
+dependencies = [
+ "cc",
+ "cfg-if",
+ "getrandom 0.2.17",
+ "libc",
+ "untrusted",
+ "windows-sys 0.52.0",
+]
+
[[package]]
name = "rstml"
version = "0.12.1"
@@ -2524,6 +2783,39 @@ dependencies = [
"windows-sys 0.61.2",
]
+[[package]]
+name = "rustls"
+version = "0.23.37"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "758025cb5fccfd3bc2fd74708fd4682be41d99e5dff73c377c0646c6012c73a4"
+dependencies = [
+ "once_cell",
+ "rustls-pki-types",
+ "rustls-webpki",
+ "subtle",
+ "zeroize",
+]
+
+[[package]]
+name = "rustls-pki-types"
+version = "1.14.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "be040f8b0a225e40375822a563fa9524378b9d63112f53e19ffff34df5d33fdd"
+dependencies = [
+ "zeroize",
+]
+
+[[package]]
+name = "rustls-webpki"
+version = "0.103.10"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "df33b2b81ac578cabaf06b89b0631153a3f416b0a886e8a7a1707fb51abbd1ef"
+dependencies = [
+ "ring",
+ "rustls-pki-types",
+ "untrusted",
+]
+
[[package]]
name = "rustversion"
version = "1.0.22"
@@ -2556,12 +2848,44 @@ dependencies = [
"thiserror 2.0.18",
]
+[[package]]
+name = "schannel"
+version = "0.1.29"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "91c1b7e4904c873ef0710c1f407dde2e6287de2bebc1bbbf7d430bb7cbffd939"
+dependencies = [
+ "windows-sys 0.61.2",
+]
+
[[package]]
name = "scopeguard"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
+[[package]]
+name = "security-framework"
+version = "3.7.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b7f4bc775c73d9a02cde8bf7b2ec4c9d12743edf609006c7facc23998404cd1d"
+dependencies = [
+ "bitflags 2.11.0",
+ "core-foundation 0.10.1",
+ "core-foundation-sys",
+ "libc",
+ "security-framework-sys",
+]
+
+[[package]]
+name = "security-framework-sys"
+version = "2.17.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "6ce2691df843ecc5d231c0b14ece2acc3efb62c0a398c7e1d875f3983ce020e3"
+dependencies = [
+ "core-foundation-sys",
+ "libc",
+]
+
[[package]]
name = "semver"
version = "1.0.28"
@@ -2872,6 +3196,12 @@ version = "0.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7da8b5736845d9f2fcb837ea5d9e2628564b3b043a70948a3f0b778838c5fb4f"
+[[package]]
+name = "subtle"
+version = "2.6.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "13c2bddecc57b384dee18652358fb23172facb8a2c51ccc10d74c157bdea3292"
+
[[package]]
name = "supports-color"
version = "3.0.2"
@@ -2921,6 +3251,9 @@ name = "sync_wrapper"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0bf256ce5efdfa370213c1dabab5935a12e49f2c58d15e9eac2870d3b4f27263"
+dependencies = [
+ "futures-core",
+]
[[package]]
name = "synstructure"
@@ -2933,6 +3266,27 @@ dependencies = [
"syn",
]
+[[package]]
+name = "system-configuration"
+version = "0.7.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a13f3d0daba03132c0aa9767f98351b3488edc2c100cda2d2ec2b04f3d8d3c8b"
+dependencies = [
+ "bitflags 2.11.0",
+ "core-foundation 0.9.4",
+ "system-configuration-sys",
+]
+
+[[package]]
+name = "system-configuration-sys"
+version = "0.6.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8e1d1b10ced5ca923a1fcb8d03e96b8d3268065d724548c0211415ff6ac6bac4"
+dependencies = [
+ "core-foundation-sys",
+ "libc",
+]
+
[[package]]
name = "tachys"
version = "0.2.14"
@@ -3085,6 +3439,26 @@ dependencies = [
"syn",
]
+[[package]]
+name = "tokio-native-tls"
+version = "0.3.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "bbae76ab933c85776efabc971569dd6119c580d8f5d448769dec1764bf796ef2"
+dependencies = [
+ "native-tls",
+ "tokio",
+]
+
+[[package]]
+name = "tokio-rustls"
+version = "0.26.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1729aa945f29d91ba541258c8df89027d5792d85a8841fb65e8bf0f4ede4ef61"
+dependencies = [
+ "rustls",
+ "tokio",
+]
+
[[package]]
name = "tokio-tungstenite"
version = "0.28.0"
@@ -3203,12 +3577,14 @@ dependencies = [
"http-body-util",
"http-range-header",
"httpdate",
+ "iri-string",
"mime",
"mime_guess",
"percent-encoding",
"pin-project-lite",
"tokio",
"tokio-util",
+ "tower",
"tower-layer",
"tower-service",
"tracing",
@@ -3367,6 +3743,12 @@ version = "0.2.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ebc1c04c71510c7f702b52b7c350734c9ff1295c464a03335b00bb84fc54f853"
+[[package]]
+name = "untrusted"
+version = "0.9.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8ecb6da28b8a351d773b68d5825ac39017e680750f980f3a1a85cd8dd28a47c1"
+
[[package]]
name = "url"
version = "2.5.8"
@@ -3671,6 +4053,17 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0805222e57f7521d6a62e36fa9163bc891acd422f971defe97d64e70d0a4fe5"
+[[package]]
+name = "windows-registry"
+version = "0.6.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "02752bf7fbdcce7f2a27a742f798510f3e5ad88dbe84871e5168e2120c3d5720"
+dependencies = [
+ "windows-link",
+ "windows-result",
+ "windows-strings",
+]
+
[[package]]
name = "windows-result"
version = "0.4.1"
@@ -4033,6 +4426,12 @@ dependencies = [
"synstructure",
]
+[[package]]
+name = "zeroize"
+version = "1.8.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b97154e67e32c85465826e8bcc1c59429aaaf107c1e4a9e53c8d8ccd5eff88d0"
+
[[package]]
name = "zerotrie"
version = "0.2.4"
diff --git a/flowctl/Cargo.toml b/flowctl/Cargo.toml
index 80b38be3..d1d6fec7 100644
--- a/flowctl/Cargo.toml
+++ b/flowctl/Cargo.toml
@@ -4,6 +4,7 @@ members = [
"crates/flowctl-core",
"crates/flowctl-db",
"crates/flowctl-scheduler",
+ "crates/flowctl-service",
"crates/flowctl-cli",
"crates/flowctl-daemon",
"crates/flowctl-web",
@@ -75,6 +76,7 @@ trycmd = "0.15"
flowctl-core = { path = "crates/flowctl-core" }
flowctl-db = { path = "crates/flowctl-db" }
flowctl-scheduler = { path = "crates/flowctl-scheduler" }
+flowctl-service = { path = "crates/flowctl-service" }
# ── Release profile (size-optimized) ─────────────────────────────────
[profile.release]
diff --git a/flowctl/crates/flowctl-cli/Cargo.toml b/flowctl/crates/flowctl-cli/Cargo.toml
index e5b0e672..902e8159 100644
--- a/flowctl/crates/flowctl-cli/Cargo.toml
+++ b/flowctl/crates/flowctl-cli/Cargo.toml
@@ -17,6 +17,7 @@ daemon = ["dep:flowctl-daemon", "dep:flowctl-web", "dep:tokio", "dep:leptos", "d
[dependencies]
flowctl-core = { workspace = true }
flowctl-db = { workspace = true }
+flowctl-service = { workspace = true }
rusqlite = { workspace = true }
flowctl-scheduler = { workspace = true }
flowctl-daemon = { path = "../flowctl-daemon", features = ["daemon"], optional = true }
@@ -44,3 +45,7 @@ which = { workspace = true }
trycmd = { workspace = true }
tempfile = "3"
serde_json = { workspace = true }
+flowctl-core = { workspace = true }
+flowctl-db = { workspace = true }
+flowctl-service = { workspace = true }
+rusqlite = { workspace = true }
diff --git a/flowctl/crates/flowctl-cli/src/commands/admin/status.rs b/flowctl/crates/flowctl-cli/src/commands/admin/status.rs
index 785937f2..7ddf4db8 100644
--- a/flowctl/crates/flowctl-cli/src/commands/admin/status.rs
+++ b/flowctl/crates/flowctl-cli/src/commands/admin/status.rs
@@ -33,9 +33,13 @@ pub fn cmd_status(json: bool, interrupted: bool) {
return;
}
+ let daemon_alive = is_daemon_heartbeat_alive(&flow_dir);
let interrupted_epics = find_interrupted_epics(&flow_dir);
if json {
- json_output(json!({"interrupted": interrupted_epics}));
+ json_output(json!({
+ "interrupted": interrupted_epics,
+ "daemon_running": daemon_alive,
+ }));
} else if interrupted_epics.is_empty() {
println!("No interrupted work found.");
} else {
@@ -68,10 +72,32 @@ pub fn cmd_status(json: bool, interrupted: bool) {
total,
remaining.join(", ")
);
- println!(
- " Resume: {}",
- ep["suggested"].as_str().unwrap_or("")
- );
+
+ // Smart recovery: if tasks are in_progress but no daemon is running,
+ // output specific restart commands for stale tasks.
+ if in_prog > 0 && !daemon_alive {
+ let stale_tasks = ep.get("stale_task_ids")
+ .and_then(|v| v.as_array())
+ .cloned()
+ .unwrap_or_default();
+ if !stale_tasks.is_empty() {
+ println!(" Recovery (no daemon heartbeat):");
+ for tid in &stale_tasks {
+ if let Some(id) = tid.as_str() {
+ println!(" Run: flowctl restart {id}");
+ }
+ }
+ println!(
+ " Then: /flow-code:work {}",
+ ep["id"].as_str().unwrap_or("")
+ );
+ }
+ } else {
+ println!(
+ " Resume: {}",
+ ep["suggested"].as_str().unwrap_or("")
+ );
+ }
println!();
}
}
@@ -216,6 +242,29 @@ fn status_from_db() -> Option<(serde_json::Value, serde_json::Value)> {
))
}
+/// Check if the daemon is running by reading `.flow/.state/flowctl.pid`
+/// and verifying the process is alive. Returns false if no PID file,
+/// PID is invalid, or the process is dead.
+fn is_daemon_heartbeat_alive(flow_dir: &Path) -> bool {
+ let pid_file = flow_dir.join(".state").join("flowctl.pid");
+ let content = match fs::read_to_string(&pid_file) {
+ Ok(c) => c,
+ Err(_) => return false,
+ };
+ let pid_str = content.trim();
+ if pid_str.parse::<u32>().is_err() {
+ return false;
+ }
+ // Use `kill -0 <pid>` to check process existence without sending a signal.
+ Command::new("kill")
+ .args(["-0", pid_str])
+ .stdout(std::process::Stdio::null())
+ .stderr(std::process::Stdio::null())
+ .status()
+ .map(|s| s.success())
+ .unwrap_or(false)
+}
+
/// Find open epics with undone tasks (interrupted work).
fn find_interrupted_epics(flow_dir: &Path) -> Vec<serde_json::Value> {
let mut interrupted = Vec::new();
@@ -257,13 +306,14 @@ fn find_interrupted_epics(flow_dir: &Path) -> Vec<serde_json::Value> {
continue;
}
- // Count tasks for this epic
+ // Count tasks for this epic and collect in_progress task IDs
let mut counts = std::collections::HashMap::new();
counts.insert("todo", 0u64);
counts.insert("in_progress", 0u64);
counts.insert("done", 0u64);
counts.insert("blocked", 0u64);
counts.insert("skipped", 0u64);
+ let mut stale_task_ids: Vec<String> = Vec::new();
if tasks_dir.is_dir() {
if let Ok(task_entries) = fs::read_dir(&tasks_dir) {
@@ -284,6 +334,9 @@ fn find_interrupted_epics(flow_dir: &Path) -> Vec {
continue;
}
let status_key = task.status.to_string();
+ if status_key == "in_progress" {
+ stale_task_ids.push(task.id.clone());
+ }
if let Some(count) = counts.get_mut(status_key.as_str()) {
*count += 1;
}
@@ -292,6 +345,7 @@ fn find_interrupted_epics(flow_dir: &Path) -> Vec<serde_json::Value> {
}
}
}
+ stale_task_ids.sort();
let total: u64 = counts.values().sum();
if total == 0 {
@@ -314,6 +368,7 @@ fn find_interrupted_epics(flow_dir: &Path) -> Vec<serde_json::Value> {
"in_progress": in_progress,
"blocked": blocked,
"skipped": skipped,
+ "stale_task_ids": stale_task_ids,
"reason": if done == 0 && in_progress == 0 { "planned_not_started" } else { "partially_complete" },
"suggested": format!("/flow-code:work {}", epic.id),
}));
@@ -670,7 +725,7 @@ pub fn cmd_doctor(json_mode: bool) {
checks.push(json!({"name": "config", "status": "fail", "message": "config.json is not a JSON object"}));
} else {
let known_keys: std::collections::HashSet<&str> =
- ["memory", "planSync", "review", "scouts", "stack"]
+ ["memory", "notifications", "planSync", "review", "scouts", "stack"]
.iter()
.copied()
.collect();
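The `is_daemon_heartbeat_alive` helper above shells out to `kill -0`, which tests process existence without delivering a signal; the behaviour can be seen directly in a shell (using the current shell's own PID as a stand-in for the daemon):

```shell
# `kill -0 <pid>` delivers no signal; its exit status reports whether
# the process exists and we are allowed to signal it.
if kill -0 "$$" 2>/dev/null; then
  echo "daemon alive"   # $$ is our own PID, so this branch runs
else
  echo "daemon dead"
fi
```

Exit status 0 means the process exists (or at least may be signalled); any failure — missing PID file, unparseable PID, or dead process — is treated as a dead daemon.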
diff --git a/flowctl/crates/flowctl-cli/src/commands/codex.rs b/flowctl/crates/flowctl-cli/src/commands/codex.rs
index 9508f278..f474968a 100644
--- a/flowctl/crates/flowctl-cli/src/commands/codex.rs
+++ b/flowctl/crates/flowctl-cli/src/commands/codex.rs
@@ -10,6 +10,10 @@ use clap::Subcommand;
use regex::Regex;
use serde_json::json;
+use flowctl_core::review_protocol::{
+ compute_consensus, ConsensusResult, ModelReview, ReviewFinding, ReviewVerdict, Severity,
+};
+
use crate::output::{error_exit, json_output};
#[derive(Subcommand, Debug)]
@@ -71,6 +75,22 @@ pub enum CodexCmd {
#[arg(long, default_value = "high", value_parser = ["low", "medium", "high"])]
effort: String,
},
+ /// Cross-model review: runs both Codex adversarial AND Claude review,
+ /// then computes consensus.
+ CrossModel {
+ /// Base branch for diff.
+ #[arg(long, default_value = "main")]
+ base: String,
+ /// Specific area to pressure-test.
+ #[arg(long)]
+ focus: Option<String>,
+ /// Sandbox mode for Codex.
+ #[arg(long, default_value = "auto")]
+ sandbox: String,
+ /// Model reasoning effort level.
+ #[arg(long, default_value = "high", value_parser = ["low", "medium", "high"])]
+ effort: String,
+ },
/// Epic completion review.
CompletionReview {
/// Epic ID.
@@ -329,6 +349,9 @@ pub fn dispatch(cmd: &CodexCmd, json: bool) {
CodexCmd::Adversarial {
base, focus, sandbox, effort,
} => cmd_adversarial(json, base, focus.as_deref(), sandbox, effort),
+ CodexCmd::CrossModel {
+ base, focus, sandbox, effort,
+ } => cmd_cross_model(json, base, focus.as_deref(), sandbox, effort),
CodexCmd::CompletionReview {
epic, base, receipt, sandbox, effort,
} => cmd_completion_review(json, epic, base, receipt.as_deref(), sandbox, effort),
@@ -584,6 +607,247 @@ fn cmd_completion_review(
}
}
+fn cmd_cross_model(
+ json_mode: bool,
+ base: &str,
+ focus: Option<&str>,
+ sandbox: &str,
+ effort: &str,
+) {
+ let sandbox = resolve_sandbox(sandbox);
+
+ // ── Step 1: Run Codex adversarial review ────────────────────────
+ let codex_prompt = format!(
+ "You are an adversarial code reviewer. Try to BREAK the code changed between '{base}' and HEAD.\n\
+ {}Look for bugs, race conditions, security vulnerabilities, edge cases, and logic errors.\n\
+ Output your verdict as SHIP or NEEDS_WORK.\n\
+ Also output structured JSON with your findings.",
+ if let Some(f) = focus { format!("Focus area: {f}\n") } else { String::new() },
+ );
+
+ let (codex_output, _codex_thread_id, codex_exit_code, codex_stderr) =
+ run_codex_exec(&codex_prompt, None, &sandbox, effort);
+
+ // Build Codex ModelReview
+ let codex_review = if codex_exit_code == 0 {
+ let verdict = match parse_verdict(&codex_output) {
+ Some(v) if v == "SHIP" => ReviewVerdict::Ship,
+ Some(v) if v == "NEEDS_WORK" => ReviewVerdict::NeedsWork,
+ _ => ReviewVerdict::Abstain,
+ };
+ let (findings, confidence) = parse_findings_from_output(&codex_output);
+ ModelReview {
+ model: env::var("FLOW_CODEX_MODEL").unwrap_or_else(|_| "codex/gpt-5.4".to_string()),
+ verdict,
+ findings,
+ confidence,
+ }
+ } else {
+ eprintln!("WARNING: Codex review failed: {}", codex_stderr.trim());
+ ModelReview {
+ model: "codex/gpt-5.4".to_string(),
+ verdict: ReviewVerdict::Abstain,
+ findings: vec![],
+ confidence: 0.0,
+ }
+ };
+
+ // ── Step 2: Prepare Claude review prompt ────────────────────────
+ // Write a review prompt to a temp file for the caller (Claude) to process.
+ // In practice, the orchestrating skill reads this and dispatches to Claude.
+ let claude_prompt = format!(
+ "You are a thorough code reviewer. Review the code changed between '{base}' and HEAD.\n\
+ {}Analyze for correctness, security, performance, and maintainability.\n\
+ Output your verdict as SHIP or NEEDS_WORK.\n\
+ List findings as JSON with fields: severity (critical/warning/info), category, description, file, line.",
+ if let Some(f) = focus { format!("Focus area: {f}\n") } else { String::new() },
+ );
+
+ let claude_prompt_path = env::temp_dir().join("flowctl-cross-model-claude-prompt.txt");
+ let _ = std::fs::write(&claude_prompt_path, &claude_prompt);
+
+ // Build Claude ModelReview (placeholder — caller invokes Claude separately)
+ // Check if a Claude review result file was pre-populated by the orchestrator
+ let claude_result_path = env::temp_dir().join("flowctl-cross-model-claude-result.json");
+ let claude_review = if claude_result_path.exists() {
+ match std::fs::read_to_string(&claude_result_path) {
+ Ok(content) => parse_claude_review_result(&content),
+ Err(_) => make_abstain_review("claude/opus-4"),
+ }
+ } else {
+ // No pre-populated result — run a lightweight self-review via codex
+ // with a different prompt to simulate a second opinion
+ let (claude_out, _, claude_exit, _) =
+ run_codex_exec(&claude_prompt, None, &sandbox, effort);
+ if claude_exit == 0 {
+ let verdict = match parse_verdict(&claude_out) {
+ Some(v) if v == "SHIP" => ReviewVerdict::Ship,
+ Some(v) if v == "NEEDS_WORK" => ReviewVerdict::NeedsWork,
+ _ => ReviewVerdict::Abstain,
+ };
+ let (findings, confidence) = parse_findings_from_output(&claude_out);
+ ModelReview {
+ model: "claude/opus-4".to_string(),
+ verdict,
+ findings,
+ confidence,
+ }
+ } else {
+ make_abstain_review("claude/opus-4")
+ }
+ };
+
+ // ── Step 3: Compute consensus ───────────────────────────────────
+ let reviews = vec![codex_review.clone(), claude_review.clone()];
+ let consensus = compute_consensus(&reviews);
+
+ // ── Step 4: Store combined review in .flow/reviews/ ─────────────
+ let cwd = env::current_dir().unwrap_or_default();
+ let reviews_dir = cwd.join(".flow").join("reviews");
+ let _ = std::fs::create_dir_all(&reviews_dir);
+
+ let timestamp = chrono::Utc::now().to_rfc3339();
+ let review_file = reviews_dir.join(format!(
+ "cross-model-{}.json",
+ chrono::Utc::now().format("%Y%m%d-%H%M%S")
+ ));
+
+ let consensus_verdict_str = match &consensus {
+ ConsensusResult::Consensus { verdict, .. } => format!("{verdict}"),
+ ConsensusResult::Conflict { .. } => "CONFLICT".to_string(),
+ ConsensusResult::InsufficientReviews => "INSUFFICIENT".to_string(),
+ };
+
+ let review_data = json!({
+ "type": "cross_model_review",
+ "base": base,
+ "focus": focus,
+ "timestamp": timestamp,
+ "models": [
+ serde_json::to_value(&codex_review).unwrap_or_default(),
+ serde_json::to_value(&claude_review).unwrap_or_default(),
+ ],
+ "consensus": serde_json::to_value(&consensus).unwrap_or_default(),
+ "claude_prompt_path": claude_prompt_path.to_string_lossy(),
+ });
+
+ let review_json = serde_json::to_string_pretty(&review_data).unwrap_or_default();
+ let _ = std::fs::write(&review_file, format!("{review_json}\n"));
+
+ // ── Step 5: Output ──────────────────────────────────────────────
+ if json_mode {
+ json_output(review_data);
+ } else {
+ println!("Cross-Model Review Results");
+ println!("==========================");
+ println!();
+ println!("Model 1: {} — {}", codex_review.model, codex_review.verdict);
+ println!(" Findings: {}", codex_review.findings.len());
+ println!(" Confidence: {:.0}%", codex_review.confidence * 100.0);
+ println!();
+ println!("Model 2: {} — {}", claude_review.model, claude_review.verdict);
+ println!(" Findings: {}", claude_review.findings.len());
+ println!(" Confidence: {:.0}%", claude_review.confidence * 100.0);
+ println!();
+ println!("Consensus: {consensus_verdict_str}");
+ println!("Review saved to: {}", review_file.display());
+ }
+}
+
+/// Parse findings from codex/model output. Returns (findings, confidence).
+fn parse_findings_from_output(output: &str) -> (Vec<ReviewFinding>, f64) {
+ let mut findings = Vec::new();
+ let mut confidence = 0.8; // default
+
+ // Try to extract structured JSON from the output
+ if let Some(data) = parse_adversarial_output(output) {
+ // Extract confidence
+ if let Some(c) = data.get("confidence").and_then(|v| v.as_f64()) {
+ confidence = c;
+ }
+
+ // Extract findings array
+ if let Some(arr) = data.get("findings").and_then(|v| v.as_array()) {
+ for item in arr {
+ let severity = match item.get("severity").and_then(|v| v.as_str()) {
+ Some("critical") => Severity::Critical,
+ Some("warning") => Severity::Warning,
+ _ => Severity::Info,
+ };
+ let category = item
+ .get("category")
+ .and_then(|v| v.as_str())
+ .unwrap_or("general")
+ .to_string();
+ let description = item
+ .get("description")
+ .and_then(|v| v.as_str())
+ .unwrap_or("")
+ .to_string();
+ let file = item.get("file").and_then(|v| v.as_str()).map(String::from);
+ let line = item.get("line").and_then(|v| v.as_u64()).map(|n| n as u32);
+
+ if !description.is_empty() {
+ findings.push(ReviewFinding {
+ severity,
+ category,
+ description,
+ file,
+ line,
+ });
+ }
+ }
+ }
+ }
+
+ (findings, confidence)
+}
+
+/// Parse a pre-populated Claude review result JSON file into a ModelReview.
+fn parse_claude_review_result(content: &str) -> ModelReview {
+ match serde_json::from_str::<serde_json::Value>(content.trim()) {
+ Ok(data) => {
+ let verdict = match data.get("verdict").and_then(|v| v.as_str()) {
+ Some("SHIP") => ReviewVerdict::Ship,
+ Some("NEEDS_WORK") => ReviewVerdict::NeedsWork,
+ _ => ReviewVerdict::Abstain,
+ };
+ let (findings, confidence) = if let Some(review_text) =
+ data.get("review").and_then(|v| v.as_str())
+ {
+ parse_findings_from_output(review_text)
+ } else {
+ (vec![], 0.8)
+ };
+ let confidence = data
+ .get("confidence")
+ .and_then(|v| v.as_f64())
+ .unwrap_or(confidence);
+ ModelReview {
+ model: data
+ .get("model")
+ .and_then(|v| v.as_str())
+ .unwrap_or("claude/opus-4")
+ .to_string(),
+ verdict,
+ findings,
+ confidence,
+ }
+ }
+ Err(_) => make_abstain_review("claude/opus-4"),
+ }
+}
+
+/// Create an abstain review for a model that failed or couldn't participate.
+fn make_abstain_review(model: &str) -> ModelReview {
+ ModelReview {
+ model: model.to_string(),
+ verdict: ReviewVerdict::Abstain,
+ findings: vec![],
+ confidence: 0.0,
+ }
+}
+
/// Parse structured JSON from adversarial review output.
/// Handles direct JSON, JSONL streaming, markdown fences, embedded JSON.
fn parse_adversarial_output(output: &str) -> Option<serde_json::Value> {
diff --git a/flowctl/crates/flowctl-cli/src/commands/epic.rs b/flowctl/crates/flowctl-cli/src/commands/epic.rs
index f4049033..84fe1524 100644
--- a/flowctl/crates/flowctl-cli/src/commands/epic.rs
+++ b/flowctl/crates/flowctl-cli/src/commands/epic.rs
@@ -1312,3 +1312,283 @@ pub fn dispatch(cmd: &EpicCmd, json: bool) {
} => cmd_set_auto_execute(id, *pending, *done, json),
}
}
+
+// ── Replay command ──────────────────────────────────────────────────
+
+pub fn cmd_replay(json_mode: bool, epic_id: &str, dry_run: bool, force: bool) {
+ let flow_dir = ensure_flow_exists();
+ validate_epic_id(epic_id);
+
+ let cwd = env::current_dir().unwrap_or_else(|_| PathBuf::from("."));
+ let conn = flowctl_db::open(&cwd).ok();
+
+ // Load tasks for this epic
+ let tasks = load_epic_tasks(conn.as_ref(), &flow_dir, epic_id);
+ if tasks.is_empty() {
+ error_exit(&format!("No tasks found for epic {}", epic_id));
+ }
+
+ // Check for in_progress tasks unless force
+ if !force {
+ let in_progress: Vec<&str> = tasks
+ .iter()
+ .filter(|t| t.status == flowctl_core::state_machine::Status::InProgress)
+ .map(|t| t.id.as_str())
+ .collect();
+ if !in_progress.is_empty() {
+ error_exit(&format!(
+ "Tasks in progress: {}. Use --force to override.",
+ in_progress.join(", ")
+ ));
+ }
+ }
+
+ // Count what would be reset
+ let to_reset: Vec<&flowctl_core::types::Task> = tasks
+ .iter()
+ .filter(|t| t.status != flowctl_core::state_machine::Status::Todo)
+ .collect();
+
+ if dry_run {
+ if json_mode {
+ let ids: Vec<&str> = to_reset.iter().map(|t| t.id.as_str()).collect();
+ json_output(json!({
+ "dry_run": true,
+ "epic": epic_id,
+ "would_reset": ids,
+ "count": ids.len(),
+ }));
+ } else {
+ println!("Dry run — would reset {} task(s) to todo:", to_reset.len());
+ for t in &to_reset {
+ println!(" {} ({}) -> todo", t.id, t.status);
+ }
+ }
+ return;
+ }
+
+ // Actually reset all tasks to todo
+ let mut reset_count = 0;
+ for task in &to_reset {
+ // Reset in DB if available
+ if let Some(ref c) = conn {
+ let task_repo = flowctl_db::TaskRepo::new(c);
+ if let Err(e) = task_repo.update_status(&task.id, flowctl_core::state_machine::Status::Todo) {
+ eprintln!("Warning: failed to reset {} in DB: {}", task.id, e);
+ }
+ }
+
+ // Reset in Markdown frontmatter
+ let task_path = flow_dir
+ .join(flowctl_core::types::TASKS_DIR)
+ .join(format!("{}.md", task.id));
+ if task_path.exists() {
+ if let Ok(content) = fs::read_to_string(&task_path) {
+ let updated = content
+ .replace(
+ &format!("status: {}", task.status),
+ "status: todo",
+ );
+ if updated != content {
+ let _ = fs::write(&task_path, updated);
+ }
+ }
+ }
+ reset_count += 1;
+ }
+
+ if json_mode {
+ let ids: Vec<&str> = to_reset.iter().map(|t| t.id.as_str()).collect();
+ json_output(json!({
+ "epic": epic_id,
+ "reset": ids,
+ "count": reset_count,
+ "message": format!("Run /flow-code:work {} to re-execute", epic_id),
+ }));
+ } else {
+ println!("Reset {} task(s) to todo for epic {}", reset_count, epic_id);
+ println!();
+ println!("To re-execute, run: /flow-code:work {}", epic_id);
+ }
+}
+
+/// Load tasks for an epic from DB or Markdown.
+fn load_epic_tasks(
+ conn: Option<&rusqlite::Connection>,
+ flow_dir: &Path,
+ epic_id: &str,
+) -> Vec<flowctl_core::types::Task> {
+ // Try DB first
+ if let Some(c) = conn {
+ let task_repo = flowctl_db::TaskRepo::new(c);
+ if let Ok(tasks) = task_repo.list_by_epic(epic_id) {
+ if !tasks.is_empty() {
+ return tasks;
+ }
+ }
+ }
+
+ // Fallback: scan Markdown files
+ let tasks_dir = flow_dir.join(flowctl_core::types::TASKS_DIR);
+ let mut tasks = Vec::new();
+ if tasks_dir.is_dir() {
+ if let Ok(entries) = fs::read_dir(&tasks_dir) {
+ for entry in entries.flatten() {
+ let path = entry.path();
+ if path.extension().and_then(|e| e.to_str()) != Some("md") {
+ continue;
+ }
+ let stem = path.file_stem().and_then(|s| s.to_str()).unwrap_or("");
+ if !stem.starts_with(&format!("{}.", epic_id)) {
+ continue;
+ }
+ if let Ok(content) = fs::read_to_string(&path) {
+ if let Ok(task) =
+ flowctl_core::frontmatter::parse_frontmatter::<flowctl_core::types::Task>(&content)
+ {
+ tasks.push(task);
+ }
+ }
+ }
+ }
+ }
+ tasks
+}
+
+// ── Diff command ────────────────────────────────────────────────────
+
+pub fn cmd_diff(json_mode: bool, epic_id: &str) {
+ let flow_dir = ensure_flow_exists();
+ validate_epic_id(epic_id);
+
+ let cwd = env::current_dir().unwrap_or_else(|_| PathBuf::from("."));
+ let conn = flowctl_db::open(&cwd).ok();
+
+ // Load epic to get branch name
+ let branch = load_epic_branch(conn.as_ref(), &flow_dir, epic_id);
+
+ let branch = match branch {
+ Some(b) => b,
+ None => error_exit(&format!(
+ "No branch found for epic {}. Set with: flowctl epic set-branch {} --branch <name>",
+ epic_id, epic_id
+ )),
+ };
+
+ // Find merge base with main
+ let merge_base = std::process::Command::new("git")
+ .args(["merge-base", "main", &branch])
+ .output();
+
+ let base_ref = match merge_base {
+ Ok(output) if output.status.success() => {
+ String::from_utf8_lossy(&output.stdout).trim().to_string()
+ }
+ _ => {
+ // Fallback: try to use the branch directly
+ eprintln!("Warning: could not find merge-base with main, showing full branch history");
+ String::new()
+ }
+ };
+
+ // Git log
+ let range_spec = format!("{}..{}", base_ref, branch);
+ let log_output = if base_ref.is_empty() {
+ std::process::Command::new("git")
+ .args(["log", "--oneline", "-20", &branch])
+ .output()
+ } else {
+ std::process::Command::new("git")
+ .args(["log", "--oneline", &range_spec])
+ .output()
+ };
+
+ let log_text = match log_output {
+ Ok(output) if output.status.success() => {
+ String::from_utf8_lossy(&output.stdout).trim().to_string()
+ }
+ _ => String::new(),
+ };
+
+ // Git diff --stat
+ let diff_output = if base_ref.is_empty() {
+ std::process::Command::new("git")
+ .args(["diff", "--stat", &branch])
+ .output()
+ } else {
+ std::process::Command::new("git")
+ .args(["diff", "--stat", &range_spec])
+ .output()
+ };
+
+ let diff_text = match diff_output {
+ Ok(output) if output.status.success() => {
+ String::from_utf8_lossy(&output.stdout).trim().to_string()
+ }
+ _ => String::new(),
+ };
+
+ if json_mode {
+ json_output(json!({
+ "epic": epic_id,
+ "branch": branch,
+ "base_ref": if base_ref.is_empty() { None } else { Some(&base_ref) },
+ "log": log_text,
+ "diff_stat": diff_text,
+ }));
+ } else {
+ println!("Epic: {} Branch: {}", epic_id, branch);
+ if !base_ref.is_empty() {
+ println!("Base: {}", &base_ref[..base_ref.len().min(12)]);
+ }
+ println!();
+
+ if !log_text.is_empty() {
+ println!("Commits:");
+ for line in log_text.lines() {
+ println!(" {}", line);
+ }
+ println!();
+ } else {
+ println!("No commits found.");
+ println!();
+ }
+
+ if !diff_text.is_empty() {
+ println!("Diff summary:");
+ for line in diff_text.lines() {
+ println!(" {}", line);
+ }
+ } else {
+ println!("No diff.");
+ }
+ }
+}
+
+/// Load branch name for an epic from DB or Markdown.
+fn load_epic_branch(
+ conn: Option<&rusqlite::Connection>,
+ flow_dir: &Path,
+ epic_id: &str,
+) -> Option<String> {
+ // Try DB
+ if let Some(c) = conn {
+ let epic_repo = flowctl_db::EpicRepo::new(c);
+ if let Ok(epic) = epic_repo.get(epic_id) {
+ return epic.branch_name.filter(|b| !b.is_empty());
+ }
+ }
+
+ // Fallback: Markdown
+ let epic_path = flow_dir
+ .join(flowctl_core::types::EPICS_DIR)
+ .join(format!("{}.md", epic_id));
+ if let Ok(content) = fs::read_to_string(&epic_path) {
+ if let Ok(epic) =
+ flowctl_core::frontmatter::parse_frontmatter::<flowctl_core::types::Epic>(&content)
+ {
+ return epic.branch_name.filter(|b| !b.is_empty());
+ }
+ }
+ None
+}
diff --git a/flowctl/crates/flowctl-cli/src/commands/mcp.rs b/flowctl/crates/flowctl-cli/src/commands/mcp.rs
index 65e3f7fc..6180d41f 100644
--- a/flowctl/crates/flowctl-cli/src/commands/mcp.rs
+++ b/flowctl/crates/flowctl-cli/src/commands/mcp.rs
@@ -6,9 +6,13 @@
use std::env;
use std::io::{self, BufRead, Write};
+use std::path::PathBuf;
use serde_json::{json, Value};
+use flowctl_core::types::FLOW_DIR;
+use flowctl_service::lifecycle::{DoneTaskRequest, StartTaskRequest};
+
/// Run the MCP server loop on stdin/stdout.
pub fn run() {
let stdin = io::stdin();
@@ -135,6 +139,17 @@ fn handle_tools_list(id: &Value) -> Value {
},
"required": ["task_id"]
}
+ },
+ {
+ "name": "flowctl_review",
+ "description": "Run cross-model adversarial review (Codex + Claude consensus)",
+ "inputSchema": {
+ "type": "object",
+ "properties": {
+ "base": {"type": "string", "description": "Base branch for diff (default: main)"},
+ "focus": {"type": "string", "description": "Specific area to pressure-test"}
+ }
+ }
}
]
}
@@ -167,8 +182,89 @@ fn handle_tools_call(id: &Value, request: &Value) -> Value {
}
}
-/// Execute a flowctl tool by shelling out to the CLI with --json.
+/// Resolve flow_dir and open DB connection for direct service calls.
+fn mcp_context() -> Result<(PathBuf, Option<rusqlite::Connection>), String> {
+ let cwd = env::current_dir().map_err(|e| format!("cannot get cwd: {e}"))?;
+ let flow_dir = cwd.join(FLOW_DIR);
+ let conn = flowctl_db::open(&cwd).ok();
+ Ok((flow_dir, conn))
+}
+
+/// Execute a flowctl tool: lifecycle ops use direct service calls,
+/// read-only ops shell out to the CLI with --json.
fn run_flowctl_tool(name: &str, args: &Value) -> Result<String, String> {
+ match name {
+ "flowctl_start" => {
+ let task_id = args.get("task_id").and_then(|v| v.as_str())
+ .ok_or("task_id is required")?;
+ let (flow_dir, conn) = mcp_context()?;
+ let req = StartTaskRequest {
+ task_id: task_id.to_string(),
+ force: false,
+ actor: "mcp".to_string(),
+ };
+ let resp = flowctl_service::lifecycle::start_task(
+ conn.as_ref(), &flow_dir, req,
+ ).map_err(|e| e.to_string())?;
+ Ok(serde_json::to_string(&json!({
+ "success": true,
+ "id": resp.task_id,
+ "status": "in_progress",
+ "message": format!("Task {} started", resp.task_id),
+ })).unwrap())
+ }
+ "flowctl_done" => {
+ let task_id = args.get("task_id").and_then(|v| v.as_str())
+ .ok_or("task_id is required")?;
+ let summary = args.get("summary").and_then(|v| v.as_str()).map(String::from);
+ let (flow_dir, conn) = mcp_context()?;
+ let req = DoneTaskRequest {
+ task_id: task_id.to_string(),
+ summary,
+ summary_file: None,
+ evidence_json: None,
+ evidence_inline: None,
+ force: true,
+ actor: "mcp".to_string(),
+ };
+ let resp = flowctl_service::lifecycle::done_task(
+ conn.as_ref(), &flow_dir, req,
+ ).map_err(|e| e.to_string())?;
+ Ok(serde_json::to_string(&json!({
+ "success": true,
+ "id": resp.task_id,
+ "status": "done",
+ "message": format!("Task {} completed", resp.task_id),
+ })).unwrap())
+ }
+ "flowctl_review" => {
+ // Cross-model review via CLI subprocess
+ let mut cmd_args: Vec = vec!["--json".to_string(), "codex".to_string(), "cross-model".to_string()];
+ if let Some(base) = args.get("base").and_then(|v| v.as_str()) {
+ cmd_args.extend(["--base".to_string(), base.to_string()]);
+ }
+ if let Some(focus) = args.get("focus").and_then(|v| v.as_str()) {
+ cmd_args.extend(["--focus".to_string(), focus.to_string()]);
+ }
+ let exe = env::current_exe().map_err(|e| format!("cannot find self: {e}"))?;
+ let output = std::process::Command::new(&exe)
+ .args(&cmd_args)
+ .output()
+ .map_err(|e| format!("exec failed: {e}"))?;
+ let stdout = String::from_utf8_lossy(&output.stdout).to_string();
+ let stderr = String::from_utf8_lossy(&output.stderr).to_string();
+ if output.status.success() {
+ Ok(stdout)
+ } else {
+ Err(format!("{stdout}{stderr}"))
+ }
+ }
+ _ => run_flowctl_cli(name, args),
+ }
+}
+
+/// Shell out to the CLI for read-only operations.
+fn run_flowctl_cli(name: &str, args: &Value) -> Result<String, String> {
let exe = env::current_exe().map_err(|e| format!("cannot find self: {e}"))?;
 let mut cmd_args: Vec<String> = vec!["--json".to_string()];
@@ -188,22 +284,6 @@ fn run_flowctl_tool(name: &str, args: &Value) -> Result {
.ok_or("epic_id is required")?;
cmd_args.extend(["--epic".to_string(), epic.to_string()]);
}
- "flowctl_start" => {
- cmd_args.push("start".to_string());
- let task = args.get("task_id").and_then(|v| v.as_str())
- .ok_or("task_id is required")?;
- cmd_args.push(task.to_string());
- }
- "flowctl_done" => {
- cmd_args.push("done".to_string());
- let task = args.get("task_id").and_then(|v| v.as_str())
- .ok_or("task_id is required")?;
- cmd_args.push(task.to_string());
- if let Some(summary) = args.get("summary").and_then(|v| v.as_str()) {
- cmd_args.extend(["--summary".to_string(), summary.to_string()]);
- }
- cmd_args.push("--force".to_string());
- }
_ => return Err(format!("unknown tool: {name}")),
}
@@ -237,11 +317,11 @@ mod tests {
}
#[test]
- fn test_handle_tools_list_returns_six_tools() {
+ fn test_handle_tools_list_returns_seven_tools() {
let id = json!(2);
let resp = handle_tools_list(&id);
let tools = resp["result"]["tools"].as_array().unwrap();
- assert_eq!(tools.len(), 6);
+ assert_eq!(tools.len(), 7);
let names: Vec<&str> = tools.iter().map(|t| t["name"].as_str().unwrap()).collect();
assert!(names.contains(&"flowctl_status"));
@@ -250,6 +330,7 @@ mod tests {
assert!(names.contains(&"flowctl_ready"));
assert!(names.contains(&"flowctl_start"));
assert!(names.contains(&"flowctl_done"));
+ assert!(names.contains(&"flowctl_review"));
}
#[test]
diff --git a/flowctl/crates/flowctl-cli/src/commands/stats.rs b/flowctl/crates/flowctl-cli/src/commands/stats.rs
index 514e0b87..c6954384 100644
--- a/flowctl/crates/flowctl-cli/src/commands/stats.rs
+++ b/flowctl/crates/flowctl-cli/src/commands/stats.rs
@@ -328,6 +328,185 @@ fn cmd_cleanup(json_flag: bool) {
}
}
+// ── DAG rendering ────────────────────────────────────────────────────
+
+pub fn cmd_dag(json_flag: bool, epic_id: Option<String>) {
+ let conn = open_db_or_exit();
+ let task_repo = flowctl_db::TaskRepo::new(&conn);
+
+ // Find epic: use provided ID or find the first open epic
+ let epic_id = match epic_id {
+ Some(id) => id,
+ None => {
+ let epic_repo = flowctl_db::EpicRepo::new(&conn);
+ match epic_repo.list(Some("open")) {
+ Ok(epics) if !epics.is_empty() => epics[0].id.clone(),
+ _ => error_exit("No open epic found. Use --epic to specify."),
+ }
+ }
+ };
+
+ let tasks = match task_repo.list_by_epic(&epic_id) {
+ Ok(t) if !t.is_empty() => t,
+ Ok(_) => error_exit(&format!("No tasks found for epic {}", epic_id)),
+ Err(e) => error_exit(&format!("Failed to load tasks: {}", e)),
+ };
+
+ let dag = match flowctl_core::TaskDag::from_tasks(&tasks) {
+ Ok(d) => d,
+ Err(e) => error_exit(&format!("Failed to build DAG: {}", e)),
+ };
+
+ // Assign layers via longest-path from sources
+ let topo = dag.topological_sort_ids();
+ let mut layer_of: std::collections::HashMap<String, usize> = std::collections::HashMap::new();
+ for id in &topo {
+ let deps = dag.dependencies(id);
+ let my_layer = if deps.is_empty() {
+ 0
+ } else {
+ deps.iter()
+ .filter_map(|d| layer_of.get(d))
+ .max()
+ .map(|m| m + 1)
+ .unwrap_or(0)
+ };
+ layer_of.insert(id.clone(), my_layer);
+ }
+
+ let max_layer = layer_of.values().copied().max().unwrap_or(0);
+
+ if should_json(json_flag) {
+ let layers: Vec<serde_json::Value> = (0..=max_layer)
+ .map(|layer| {
+ let nodes: Vec = tasks
+ .iter()
+ .filter(|t| layer_of.get(&t.id) == Some(&layer))
+ .map(|t| {
+ json!({
+ "id": t.id,
+ "status": t.status.to_string(),
+ "deps": dag.dependencies(&t.id),
+ })
+ })
+ .collect();
+ json!({"layer": layer, "nodes": nodes})
+ })
+ .collect();
+ json_output(json!({"epic": epic_id, "layers": layers}));
+ return;
+ }
+
+ // ASCII rendering
+ println!("DAG for {}", epic_id);
+ println!();
+
+ for layer in 0..=max_layer {
+ let mut nodes_in_layer: Vec<&flowctl_core::types::Task> = tasks
+ .iter()
+ .filter(|t| layer_of.get(&t.id) == Some(&layer))
+ .collect();
+ nodes_in_layer.sort_by(|a, b| a.id.cmp(&b.id));
+
+ for task in &nodes_in_layer {
+ let status_icon = match task.status {
+ flowctl_core::Status::Done => "done",
+ flowctl_core::Status::InProgress => " >> ",
+ flowctl_core::Status::Blocked => "blck",
+ flowctl_core::Status::Todo => "todo",
+ _ => " ?? ",
+ };
+ // Short ID: take just the task number suffix
+ let short_id = task.id.rsplit('.').next().unwrap_or(&task.id);
+ let label = format!(".{} [{}]", short_id, status_icon);
+ let indent = " ".repeat(layer);
+ let connector = if layer > 0 { "\u{2514}\u{2500}\u{2500} " } else { "" };
+ println!("{}{}\u{250c}\u{2500}{}\u{2500}\u{2510}", indent, connector, "\u{2500}".repeat(label.len()));
+ println!("{}{}\u{2502} {} \u{2502}", indent, if layer > 0 { " " } else { "" }, label);
+ println!("{}{}\u{2514}\u{2500}{}\u{2500}\u{2518}", indent, if layer > 0 { " " } else { "" }, "\u{2500}".repeat(label.len()));
+ }
+
+ // Draw arrows between layers
+ if layer < max_layer {
+ let next_layer_nodes: Vec<&flowctl_core::types::Task> = tasks
+ .iter()
+ .filter(|t| layer_of.get(&t.id) == Some(&(layer + 1)))
+ .collect();
+ if !next_layer_nodes.is_empty() {
+ let indent = " ".repeat(layer + 1);
+ println!("{}\u{2502}", indent);
+ println!("{}\u{2193}", indent);
+ }
+ }
+ }
+}
+
+// ── Estimate command ─────────────────────────────────────────────────
+
+pub fn cmd_estimate(json_flag: bool, epic_id: &str) {
+ let conn = open_db_or_exit();
+ let task_repo = flowctl_db::TaskRepo::new(&conn);
+ let runtime_repo = flowctl_db::RuntimeRepo::new(&conn);
+
+ let tasks = match task_repo.list_by_epic(epic_id) {
+ Ok(t) => t,
+ Err(e) => error_exit(&format!("Failed to load tasks: {}", e)),
+ };
+
+ if tasks.is_empty() {
+ error_exit(&format!("No tasks found for epic {}", epic_id));
+ }
+
+ // Collect durations from completed tasks
+ let mut completed_durations: Vec<u64> = Vec::new();
+ let mut incomplete_count = 0u32;
+
+ for task in &tasks {
+ if task.status == flowctl_core::Status::Done {
+ if let Ok(Some(rt)) = runtime_repo.get(&task.id) {
+ if let Some(dur) = rt.duration_secs {
+ completed_durations.push(dur);
+ }
+ }
+ } else if task.status != flowctl_core::state_machine::Status::Skipped {
+ incomplete_count += 1;
+ }
+ }
+
+ let avg_secs = if completed_durations.is_empty() {
+ 0u64
+ } else {
+ completed_durations.iter().sum::<u64>() / completed_durations.len() as u64
+ };
+
+ let estimated_remaining_secs = avg_secs * incomplete_count as u64;
+ let done_count = completed_durations.len();
+
+ if should_json(json_flag) {
+ json_output(json!({
+ "epic": epic_id,
+ "total_tasks": tasks.len(),
+ "done_tasks": done_count,
+ "incomplete_tasks": incomplete_count,
+ "avg_duration_secs": avg_secs,
+ "estimated_remaining_secs": estimated_remaining_secs,
+ }));
+ } else {
+ let mins = estimated_remaining_secs / 60;
+ let secs = estimated_remaining_secs % 60;
+ println!(
+ "Estimated remaining: {}m {}s ({} tasks, avg {}s/task)",
+ mins, secs, incomplete_count, avg_secs
+ );
+ println!(
+ " Done: {}/{}, Remaining: {}",
+ done_count,
+ tasks.len(),
+ incomplete_count
+ );
+ }
+}
+
// ── Formatting helpers ────────────────────────────────────────────────
fn format_tokens(n: i64) -> String {
diff --git a/flowctl/crates/flowctl-cli/src/commands/workflow/lifecycle.rs b/flowctl/crates/flowctl-cli/src/commands/workflow/lifecycle.rs
index a1400af7..905b7698 100644
--- a/flowctl/crates/flowctl-cli/src/commands/workflow/lifecycle.rs
+++ b/flowctl/crates/flowctl-cli/src/commands/workflow/lifecycle.rs
@@ -1,168 +1,42 @@
//! Lifecycle commands: start, done, block, fail, restart.
+//!
+//! Thin wrappers that parse CLI args, call service functions, and format output.
-use std::fs;
-
-use chrono::Utc;
use serde_json::json;
use crate::output::{error_exit, json_output};
-use flowctl_core::frontmatter;
-use flowctl_core::id::{epic_id_from_task, is_task_id};
-use flowctl_core::state_machine::{Status, Transition};
-use flowctl_core::types::{
- EpicStatus, Evidence, RuntimeState, Task, REVIEWS_DIR, TASKS_DIR,
+use flowctl_core::state_machine::Status;
+use flowctl_service::lifecycle::{
+ BlockTaskRequest, DoneTaskRequest, FailTaskRequest, RestartTaskRequest, StartTaskRequest,
};
-use super::{
- ensure_flow_exists, find_dependents, get_max_retries, get_md_section, get_runtime,
- handle_task_failure, load_epic, load_task, patch_md_section, resolve_actor, try_open_db,
-};
+use super::{ensure_flow_exists, resolve_actor, try_open_db};
pub fn cmd_start(json_mode: bool, id: String, force: bool, _note: Option<String>) {
let flow_dir = ensure_flow_exists();
+ let conn = try_open_db();
+ let actor = resolve_actor();
- if !is_task_id(&id) {
- error_exit(&format!(
- "Invalid task ID: {}. Expected format: fn-N.M or fn-N-slug.M (e.g., fn-1.2, fn-1-add-auth.2)",
- id
- ));
- }
-
- let task = match load_task(&flow_dir, &id) {
- Some(t) => t,
- None => error_exit(&format!("Task {} not found", id)),
- };
-
- // Validate dependencies unless --force
- if !force {
- for dep in &task.depends_on {
- let dep_task = match load_task(&flow_dir, dep) {
- Some(t) => t,
- None => error_exit(&format!(
- "Cannot start task {}: dependency {} not found",
- id, dep
- )),
- };
- if !dep_task.status.is_satisfied() {
- error_exit(&format!(
- "Cannot start task {}: dependency {} is '{}', not 'done'. \
- Complete dependencies first or use --force to override.",
- id, dep, dep_task.status
- ));
- }
- }
- }
-
- let current_actor = resolve_actor();
- let existing_rt = get_runtime(&id);
- let existing_assignee = existing_rt.as_ref().and_then(|rt| rt.assignee.clone());
-
- // Validate state machine transition (unless --force)
- if !force && !Transition::is_valid(task.status, Status::InProgress) {
- error_exit(&format!(
- "Cannot start task {}: invalid transition '{}' → 'in_progress'. Use --force to override.",
- id, task.status
- ));
- }
-
- // Check if claimed by someone else
- if !force {
- if let Some(ref assignee) = existing_assignee {
- if assignee != &current_actor {
- error_exit(&format!(
- "Cannot start task {}: claimed by '{}'. Use --force to override.",
- id, assignee
- ));
- }
- }
- }
-
- // Validate task is in todo status (unless --force or resuming own task)
- if !force && task.status != Status::Todo {
- let can_resume = task.status == Status::InProgress
- && existing_assignee
- .as_ref()
- .map(|a| a == &current_actor)
- .unwrap_or(false);
- if !can_resume {
- error_exit(&format!(
- "Cannot start task {}: status is '{}', expected 'todo'. Use --force to override.",
- id, task.status
- ));
- }
- }
-
- // Build runtime state
- let now = Utc::now();
- let force_takeover = force
- && existing_assignee
- .as_ref()
- .map(|a| a != &current_actor)
- .unwrap_or(false);
- let new_assignee = if existing_assignee.is_none() || force_takeover {
- current_actor.clone()
- } else {
- existing_assignee.clone().unwrap_or_else(|| current_actor.clone())
- };
-
- let claimed_at = if existing_rt
- .as_ref()
- .and_then(|rt| rt.claimed_at)
- .is_some()
- && !force_takeover
- {
- existing_rt.as_ref().unwrap().claimed_at
- } else {
- Some(now)
- };
-
- let runtime_state = RuntimeState {
+ let req = StartTaskRequest {
task_id: id.clone(),
- assignee: Some(new_assignee),
- claimed_at,
- completed_at: None,
- duration_secs: None,
- blocked_reason: None,
- baseline_rev: existing_rt.as_ref().and_then(|rt| rt.baseline_rev.clone()),
- final_rev: None,
- retry_count: existing_rt.as_ref().map(|rt| rt.retry_count).unwrap_or(0),
+ force,
+ actor,
};
- // Write SQLite first (authoritative)
- if let Some(conn) = try_open_db() {
- let task_repo = flowctl_db::TaskRepo::new(&conn);
- if let Err(e) = task_repo.update_status(&id, Status::InProgress) {
- error_exit(&format!("Failed to update task status: {}", e));
- }
- let runtime_repo = flowctl_db::RuntimeRepo::new(&conn);
- if let Err(e) = runtime_repo.upsert(&runtime_state) {
- error_exit(&format!("Failed to update runtime state: {}", e));
- }
- }
-
- // Update Markdown frontmatter
- let task_path = flow_dir.join(TASKS_DIR).join(format!("{}.md", id));
- if task_path.exists() {
- if let Ok(content) = fs::read_to_string(&task_path) {
- if let Ok(mut doc) = frontmatter::parse::<Task>(&content) {
- doc.frontmatter.status = Status::InProgress;
- doc.frontmatter.updated_at = now;
- if let Ok(new_content) = frontmatter::write(&doc) {
- let _ = fs::write(&task_path, new_content);
- }
+ match flowctl_service::lifecycle::start_task(conn.as_ref(), &flow_dir, req) {
+ Ok(resp) => {
+ if json_mode {
+ json_output(json!({
+ "id": resp.task_id,
+ "status": "in_progress",
+ "message": format!("Task {} started", resp.task_id),
+ }));
+ } else {
+ println!("Task {} started", resp.task_id);
}
}
- }
-
- if json_mode {
- json_output(json!({
- "id": id,
- "status": "in_progress",
- "message": format!("Task {} started", id),
- }));
- } else {
- println!("Task {} started", id);
+ Err(e) => error_exit(&e.to_string()),
}
}
@@ -176,636 +50,187 @@ pub fn cmd_done(
force: bool,
) {
let flow_dir = ensure_flow_exists();
+ let conn = try_open_db();
+ let actor = resolve_actor();
- if !is_task_id(&id) {
- error_exit(&format!(
- "Invalid task ID: {}. Expected format: fn-N.M or fn-N-slug.M (e.g., fn-1.2, fn-1-add-auth.2)",
- id
- ));
- }
-
- let task = match load_task(&flow_dir, &id) {
- Some(t) => t,
- None => error_exit(&format!("Task {} not found", id)),
+ let req = DoneTaskRequest {
+ task_id: id.clone(),
+ summary_file,
+ summary,
+ evidence_json,
+ evidence_inline: evidence,
+ force,
+ actor,
};
- // Require in_progress status (unless --force)
- if !force {
- match task.status {
- Status::InProgress => {}
- Status::Done => error_exit(&format!("Task {} is already done.", id)),
- other => error_exit(&format!(
- "Task {} is '{}', not 'in_progress'. Use --force to override.",
- id, other
- )),
- }
- }
-
- // Prevent cross-actor completion (unless --force)
- let current_actor = resolve_actor();
- let runtime = get_runtime(&id);
- if !force {
- if let Some(ref rt) = runtime {
- if let Some(ref assignee) = rt.assignee {
- if assignee != &current_actor {
- error_exit(&format!(
- "Cannot complete task {}: claimed by '{}'. Use --force to override.",
- id, assignee
- ));
+ match flowctl_service::lifecycle::done_task(conn.as_ref(), &flow_dir, req) {
+ Ok(resp) => {
+ if json_mode {
+ let mut result = json!({
+ "id": resp.task_id,
+ "status": "done",
+ "message": format!("Task {} completed", resp.task_id),
+ });
+ if let Some(dur) = resp.duration_seconds {
+ result["duration_seconds"] = json!(dur);
}
- }
- }
- }
-
- // Get summary
- let summary_text = if let Some(ref file) = summary_file {
- match fs::read_to_string(file) {
- Ok(s) => s,
- Err(e) => error_exit(&format!("Cannot read summary file: {}", e)),
- }
- } else if let Some(ref s) = summary {
- s.clone()
- } else {
- "- Task completed".to_string()
- };
-
- // Get evidence
- let evidence_obj: serde_json::Value = if let Some(ref ev) = evidence_json {
- let raw = if ev.trim().starts_with('{') {
- ev.clone()
- } else {
- match fs::read_to_string(ev) {
- Ok(s) => s,
- Err(e) => error_exit(&format!("Cannot read evidence file: {}", e)),
- }
- };
- match serde_json::from_str(&raw) {
- Ok(v) => v,
- Err(e) => error_exit(&format!("Evidence JSON invalid: {}", e)),
- }
- } else if let Some(ref ev) = evidence {
- match serde_json::from_str(ev) {
- Ok(v) => v,
- Err(e) => error_exit(&format!("Evidence invalid JSON: {}", e)),
- }
- } else {
- json!({"commits": [], "tests": [], "prs": []})
- };
-
- if !evidence_obj.is_object() {
- error_exit("Evidence JSON must be an object with keys: commits/tests/prs");
- }
-
- // Calculate duration from claimed_at
- let duration_seconds: Option<u64> = runtime
- .as_ref()
- .and_then(|rt| rt.claimed_at)
- .map(|start| {
- let dur = Utc::now() - start;
- dur.num_seconds().max(0) as u64
- });
-
- // Validate workspace_changes if present
- let ws_changes = evidence_obj.get("workspace_changes");
- let mut ws_warning: Option = None;
- if let Some(wc) = ws_changes {
- if !wc.is_object() {
- ws_warning = Some("workspace_changes must be an object".to_string());
- } else {
- let required = [
- "baseline_rev",
- "final_rev",
- "files_changed",
- "insertions",
- "deletions",
- ];
- let missing: Vec<&str> = required
- .iter()
- .filter(|k| !wc.as_object().unwrap().contains_key(**k))
- .copied()
- .collect();
- if !missing.is_empty() {
- ws_warning = Some(format!(
- "workspace_changes missing keys: {}",
- missing.join(", ")
- ));
- }
- }
- }
-
- // Format evidence as markdown
- let to_list = |val: Option<&serde_json::Value>| -> Vec<String> {
- match val {
- None => Vec::new(),
- Some(serde_json::Value::Array(arr)) => arr
- .iter()
- .map(|v| v.as_str().unwrap_or("").to_string())
- .filter(|s| !s.is_empty())
- .collect(),
- Some(serde_json::Value::String(s)) if !s.is_empty() => vec![s.clone()],
- _ => Vec::new(),
- }
- };
-
- let commits = to_list(evidence_obj.get("commits"));
- let tests = to_list(evidence_obj.get("tests"));
- let prs = to_list(evidence_obj.get("prs"));
-
- let mut evidence_md = Vec::new();
- if commits.is_empty() {
- evidence_md.push("- Commits:".to_string());
- } else {
- evidence_md.push(format!("- Commits: {}", commits.join(", ")));
- }
- if tests.is_empty() {
- evidence_md.push("- Tests:".to_string());
- } else {
- evidence_md.push(format!("- Tests: {}", tests.join(", ")));
- }
- if prs.is_empty() {
- evidence_md.push("- PRs:".to_string());
- } else {
- evidence_md.push(format!("- PRs: {}", prs.join(", ")));
- }
-
- if ws_warning.is_none() {
- if let Some(wc) = ws_changes {
- if wc.is_object() {
- let fc = wc.get("files_changed").and_then(|v| v.as_u64()).unwrap_or(0);
- let ins = wc.get("insertions").and_then(|v| v.as_u64()).unwrap_or(0);
- let del = wc.get("deletions").and_then(|v| v.as_u64()).unwrap_or(0);
- let br = wc
- .get("baseline_rev")
- .and_then(|v| v.as_str())
- .unwrap_or("?");
- let fr = wc
- .get("final_rev")
- .and_then(|v| v.as_str())
- .unwrap_or("?");
- evidence_md.push(format!(
- "- Workspace: {} files changed, +{} -{} ({}..{})",
- fc,
- ins,
- del,
- &br[..br.len().min(7)],
- &fr[..fr.len().min(7)]
- ));
- }
- }
- }
-
- if let Some(dur) = duration_seconds {
- let mins = dur / 60;
- let secs = dur % 60;
- let dur_str = if mins > 0 {
- format!("{}m {}s", mins, secs)
- } else {
- format!("{}s", secs)
- };
- evidence_md.push(format!("- Duration: {}", dur_str));
- }
- let evidence_content = evidence_md.join("\n");
-
- // Write SQLite first (authoritative)
- if let Some(conn) = try_open_db() {
- let task_repo = flowctl_db::TaskRepo::new(&conn);
- let _ = task_repo.update_status(&id, Status::Done);
-
- let runtime_repo = flowctl_db::RuntimeRepo::new(&conn);
- let now = Utc::now();
- let rt = RuntimeState {
- task_id: id.clone(),
- assignee: runtime.as_ref().and_then(|r| r.assignee.clone()),
- claimed_at: runtime.as_ref().and_then(|r| r.claimed_at),
- completed_at: Some(now),
- duration_secs: duration_seconds,
- blocked_reason: None,
- baseline_rev: runtime.as_ref().and_then(|r| r.baseline_rev.clone()),
- final_rev: runtime.as_ref().and_then(|r| r.final_rev.clone()),
- retry_count: runtime.as_ref().map(|r| r.retry_count).unwrap_or(0),
- };
- let _ = runtime_repo.upsert(&rt);
-
- // Store evidence
- let ev = Evidence {
- commits: commits.clone(),
- tests: tests.clone(),
- prs: prs.clone(),
- ..Evidence::default()
- };
- let evidence_repo = flowctl_db::EvidenceRepo::new(&conn);
- let _ = evidence_repo.upsert(&id, &ev);
- }
-
- // Update Markdown spec
- let task_spec_path = flow_dir.join(TASKS_DIR).join(format!("{}.md", id));
- if task_spec_path.exists() {
- if let Ok(current_spec) = fs::read_to_string(&task_spec_path) {
- let mut updated = current_spec;
- if let Some(patched) = patch_md_section(&updated, "## Done summary", &summary_text) {
- updated = patched;
- }
- if let Some(patched) = patch_md_section(&updated, "## Evidence", &evidence_content) {
- updated = patched;
- }
-
- // Update frontmatter status
- if let Ok(mut doc) = frontmatter::parse::<Task>(&updated) {
- doc.frontmatter.status = Status::Done;
- doc.frontmatter.updated_at = Utc::now();
- if let Ok(new_content) = frontmatter::write(&doc) {
- let _ = fs::write(&task_spec_path, new_content);
+ if let Some(ref warn) = resp.ws_warning {
+ result["warning"] = json!(warn);
}
+ json_output(result);
} else {
- let _ = fs::write(&task_spec_path, updated);
- }
- }
- }
-
- // Archive review receipt if present
- if let Some(receipt) = evidence_obj.get("review_receipt") {
- if receipt.is_object() {
- let reviews_dir = flow_dir.join(REVIEWS_DIR);
- let _ = fs::create_dir_all(&reviews_dir);
- let mode = receipt
- .get("mode")
- .and_then(|v| v.as_str())
- .unwrap_or("unknown");
- let rtype = receipt
- .get("type")
- .and_then(|v| v.as_str())
- .unwrap_or("review");
- let filename = format!("{}-{}-{}.json", rtype, id, mode);
- if let Ok(content) = serde_json::to_string_pretty(receipt) {
- let _ = fs::write(reviews_dir.join(filename), content);
- }
- }
- }
-
- if json_mode {
- let mut result = json!({
- "id": id,
- "status": "done",
- "message": format!("Task {} completed", id),
- });
- if let Some(dur) = duration_seconds {
- result["duration_seconds"] = json!(dur);
- }
- if let Some(ref warn) = ws_warning {
- result["warning"] = json!(warn);
- }
- json_output(result);
- } else {
- let dur_str = duration_seconds.map(|dur| {
- let mins = dur / 60;
- let secs = dur % 60;
- if mins > 0 {
- format!(" ({}m {}s)", mins, secs)
- } else {
- format!(" ({}s)", secs)
+ let dur_str = resp.duration_seconds.map(|dur| {
+ let mins = dur / 60;
+ let secs = dur % 60;
+ if mins > 0 {
+ format!(" ({}m {}s)", mins, secs)
+ } else {
+ format!(" ({}s)", secs)
+ }
+ });
+ println!("Task {} completed{}", resp.task_id, dur_str.unwrap_or_default());
+ if let Some(warn) = resp.ws_warning {
+ println!(" warning: {}", warn);
+ }
}
- });
- println!("Task {} completed{}", id, dur_str.unwrap_or_default());
- if let Some(warn) = ws_warning {
- println!(" warning: {}", warn);
}
+ Err(e) => error_exit(&e.to_string()),
}
}
pub fn cmd_block(json_mode: bool, id: String, reason_file: String) {
let flow_dir = ensure_flow_exists();
+ let conn = try_open_db();
- if !is_task_id(&id) {
- error_exit(&format!(
- "Invalid task ID: {}. Expected format: fn-N.M or fn-N-slug.M (e.g., fn-1.2, fn-1-add-auth.2)",
- id
- ));
- }
-
- let task = match load_task(&flow_dir, &id) {
- Some(t) => t,
- None => error_exit(&format!("Task {} not found", id)),
- };
-
- if task.status == Status::Done {
- error_exit(&format!("Cannot block task {}: status is 'done'.", id));
- }
-
- let reason = match fs::read_to_string(&reason_file) {
- Ok(s) => s.trim().to_string(),
- Err(e) => error_exit(&format!("Cannot read reason file: {}", e)),
+ let req = BlockTaskRequest {
+ task_id: id.clone(),
+ reason_file,
};
- if reason.is_empty() {
- error_exit("Reason file is empty");
- }
-
- // Write SQLite first (authoritative)
- if let Some(conn) = try_open_db() {
- let task_repo = flowctl_db::TaskRepo::new(&conn);
- let _ = task_repo.update_status(&id, Status::Blocked);
-
- let runtime_repo = flowctl_db::RuntimeRepo::new(&conn);
- let existing = runtime_repo.get(&id).ok().flatten();
- let rt = RuntimeState {
- task_id: id.clone(),
- assignee: existing.as_ref().and_then(|r| r.assignee.clone()),
- claimed_at: existing.as_ref().and_then(|r| r.claimed_at),
- completed_at: None,
- duration_secs: None,
- blocked_reason: Some(reason.clone()),
- baseline_rev: existing.as_ref().and_then(|r| r.baseline_rev.clone()),
- final_rev: None,
- retry_count: existing.as_ref().map(|r| r.retry_count).unwrap_or(0),
- };
- let _ = runtime_repo.upsert(&rt);
- }
-
- // Update Markdown spec
- let task_spec_path = flow_dir.join(TASKS_DIR).join(format!("{}.md", id));
- if task_spec_path.exists() {
- if let Ok(current_spec) = fs::read_to_string(&task_spec_path) {
- let existing_summary = get_md_section(&current_spec, "## Done summary");
- let new_summary = if existing_summary.is_empty()
- || existing_summary.to_lowercase() == "tbd"
- {
- format!("Blocked:\n{}", reason)
+ match flowctl_service::lifecycle::block_task(conn.as_ref(), &flow_dir, req) {
+ Ok(resp) => {
+ if json_mode {
+ json_output(json!({
+ "id": resp.task_id,
+ "status": "blocked",
+ "message": format!("Task {} blocked", resp.task_id),
+ }));
} else {
- format!("{}\n\nBlocked:\n{}", existing_summary, reason)
- };
-
- let mut updated = current_spec;
- if let Some(patched) = patch_md_section(&updated, "## Done summary", &new_summary) {
- updated = patched;
- }
-
- // Update frontmatter
- if let Ok(mut doc) = frontmatter::parse::<Task>(&updated) {
- doc.frontmatter.status = Status::Blocked;
- doc.frontmatter.updated_at = Utc::now();
- if let Ok(new_content) = frontmatter::write(&doc) {
- let _ = fs::write(&task_spec_path, new_content);
- }
- } else {
- let _ = fs::write(&task_spec_path, updated);
+ println!("Task {} blocked", resp.task_id);
}
}
- }
-
- if json_mode {
- json_output(json!({
- "id": id,
- "status": "blocked",
- "message": format!("Task {} blocked", id),
- }));
- } else {
- println!("Task {} blocked", id);
+ Err(e) => error_exit(&e.to_string()),
}
}
pub fn cmd_fail(json_mode: bool, id: String, reason: Option<String>, force: bool) {
let flow_dir = ensure_flow_exists();
+ let conn = try_open_db();
- if !is_task_id(&id) {
- error_exit(&format!(
- "Invalid task ID: {}. Expected format: fn-N.M or fn-N-slug.M (e.g., fn-1.2, fn-1-add-auth.2)",
- id
- ));
- }
-
- let task = match load_task(&flow_dir, &id) {
- Some(t) => t,
- None => error_exit(&format!("Task {} not found", id)),
+ let req = FailTaskRequest {
+ task_id: id.clone(),
+ reason,
+ force,
};
- if !force && task.status != Status::InProgress {
- error_exit(&format!(
- "Task {} is '{}', not 'in_progress'. Use --force to override.",
- id, task.status
- ));
- }
-
- let runtime = get_runtime(&id);
- let reason_text = reason.unwrap_or_else(|| "Task failed".to_string());
-
- let (final_status, upstream_failed_ids) = handle_task_failure(&flow_dir, &id, &runtime);
-
- // Update Done summary with failure reason
- let task_spec_path = flow_dir.join(TASKS_DIR).join(format!("{}.md", id));
- if task_spec_path.exists() {
- if let Ok(content) = fs::read_to_string(&task_spec_path) {
- let mut updated = content;
- let summary = format!("Failed:\n{}", reason_text);
- if let Some(patched) = patch_md_section(&updated, "## Done summary", &summary) {
- updated = patched;
- }
- // Frontmatter was already updated by handle_task_failure, just write body changes
- let _ = fs::write(&task_spec_path, updated);
- }
- }
-
- if json_mode {
- let mut result = json!({
- "id": id,
- "status": final_status.to_string(),
- "message": format!("Task {} {}", id, final_status),
- "reason": reason_text,
- });
- if !upstream_failed_ids.is_empty() {
- result["upstream_failed"] = json!(upstream_failed_ids);
- }
- json_output(result);
- } else {
- println!("Task {} {}", id, final_status);
- if final_status == Status::UpForRetry {
- let max = get_max_retries();
- let count = runtime.as_ref().map(|r| r.retry_count).unwrap_or(0) + 1;
- println!(" retry {}/{} — will be retried by scheduler", count, max);
- }
- if !upstream_failed_ids.is_empty() {
- println!(
- " upstream_failed propagated to {} downstream task(s):",
- upstream_failed_ids.len()
- );
- for tid in &upstream_failed_ids {
- println!(" {}", tid);
+ match flowctl_service::lifecycle::fail_task(conn.as_ref(), &flow_dir, req) {
+ Ok(resp) => {
+ if json_mode {
+ let mut result = json!({
+ "id": resp.task_id,
+ "status": resp.final_status.to_string(),
+ "message": format!("Task {} {}", resp.task_id, resp.final_status),
+ "reason": resp.reason,
+ });
+ if !resp.upstream_failed_ids.is_empty() {
+ result["upstream_failed"] = json!(resp.upstream_failed_ids);
+ }
+ json_output(result);
+ } else {
+ println!("Task {} {}", resp.task_id, resp.final_status);
+ if resp.final_status == Status::UpForRetry {
+ if let (Some(count), Some(max)) = (resp.retry_count, resp.max_retries) {
+ println!(" retry {}/{} \u{2014} will be retried by scheduler", count, max);
+ }
+ }
+ if !resp.upstream_failed_ids.is_empty() {
+ println!(
+ " upstream_failed propagated to {} downstream task(s):",
+ resp.upstream_failed_ids.len()
+ );
+ for tid in &resp.upstream_failed_ids {
+ println!(" {}", tid);
+ }
+ }
}
}
+ Err(e) => error_exit(&e.to_string()),
}
}
pub fn cmd_restart(json_mode: bool, id: String, dry_run: bool, force: bool) {
let flow_dir = ensure_flow_exists();
+ let conn = try_open_db();
- if !is_task_id(&id) {
- error_exit(&format!(
- "Invalid task ID: {}. Expected format: fn-N.M or fn-N-slug.M",
- id
- ));
- }
-
- let task = match load_task(&flow_dir, &id) {
- Some(t) => t,
- None => error_exit(&format!("Task {} not found", id)),
+ let req = RestartTaskRequest {
+ task_id: id.clone(),
+ dry_run,
+ force,
};
- // Check epic not closed
- if let Ok(epic_id) = epic_id_from_task(&id) {
- if let Some(epic) = load_epic(&flow_dir, &epic_id) {
- if epic.status == EpicStatus::Done {
- error_exit(&format!("Cannot restart task in closed epic {}", epic_id));
- }
- }
- }
-
- // Find all downstream dependents
- let dependents = find_dependents(&flow_dir, &id);
-
- // Check for in_progress tasks
- let mut in_progress_ids = Vec::new();
- if task.status == Status::InProgress {
- in_progress_ids.push(id.clone());
- }
- for dep_id in &dependents {
- if let Some(dep_task) = load_task(&flow_dir, dep_id) {
- if dep_task.status == Status::InProgress {
- in_progress_ids.push(dep_id.clone());
- }
- }
- }
-
- if !in_progress_ids.is_empty() && !force {
- error_exit(&format!(
- "Cannot restart: tasks in progress: {}. Use --force to override.",
- in_progress_ids.join(", ")
- ));
- }
-
- // Build full reset list
- let all_ids: Vec<String> = std::iter::once(id.clone())
- .chain(dependents.iter().cloned())
- .collect();
- let mut to_reset = Vec::new();
- let mut skipped = Vec::new();
-
- for tid in &all_ids {
- let t = match load_task(&flow_dir, tid) {
- Some(t) => t,
- None => continue,
- };
- if t.status == Status::Todo {
- skipped.push(tid.clone());
- } else {
- to_reset.push(tid.clone());
- }
- }
-
- // Dry-run mode
- if dry_run {
- if json_mode {
- json_output(json!({
- "dry_run": true,
- "would_reset": to_reset,
- "already_todo": skipped,
- "in_progress_overridden": if force { in_progress_ids.clone() } else { Vec::<String>::new() },
- }));
- } else {
- println!(
- "Dry run \u{2014} would restart {} task(s):",
- to_reset.len()
- );
- for tid in &to_reset {
- if let Some(t) = load_task(&flow_dir, tid) {
- let marker = if in_progress_ids.contains(tid) {
- " (force)"
- } else {
- ""
- };
- println!(" {} {} -> todo{}", tid, t.status, marker);
- }
- }
- if !skipped.is_empty() {
- println!("Already todo: {}", skipped.join(", "));
- }
- }
- return;
- }
-
- // Execute reset
- let mut reset_ids = Vec::new();
- for tid in &to_reset {
- // Reset in SQLite
- if let Some(conn) = try_open_db() {
- let task_repo = flowctl_db::TaskRepo::new(&conn);
- let _ = task_repo.update_status(tid, Status::Todo);
-
- // Clear runtime state
- let runtime_repo = flowctl_db::RuntimeRepo::new(&conn);
- let rt = RuntimeState {
- task_id: tid.clone(),
- assignee: None,
- claimed_at: None,
- completed_at: None,
- duration_secs: None,
- blocked_reason: None,
- baseline_rev: None,
- final_rev: None,
- retry_count: 0,
- };
- let _ = runtime_repo.upsert(&rt);
- }
-
- // Update Markdown frontmatter + clear evidence
- let task_path = flow_dir.join(TASKS_DIR).join(format!("{}.md", tid));
- if task_path.exists() {
- if let Ok(content) = fs::read_to_string(&task_path) {
- let mut updated = content;
-
- // Clear sections
- if let Some(patched) = patch_md_section(&updated, "## Done summary", "TBD") {
- updated = patched;
- }
- if let Some(patched) = patch_md_section(&updated, "## Evidence", "TBD") {
- updated = patched;
- }
-
- // Update frontmatter status
- if let Ok(mut doc) = frontmatter::parse::<Task>(&updated) {
- doc.frontmatter.status = Status::Todo;
- doc.frontmatter.updated_at = Utc::now();
- if let Ok(new_content) = frontmatter::write(&doc) {
- updated = new_content;
+ match flowctl_service::lifecycle::restart_task(conn.as_ref(), &flow_dir, req) {
+ Ok(resp) => {
+ if dry_run {
+ if json_mode {
+ json_output(json!({
+ "dry_run": true,
+ "would_reset": resp.reset_ids,
+ "already_todo": resp.skipped_ids,
+ "in_progress_overridden": resp.in_progress_overridden,
+ }));
+ } else {
+ println!(
+ "Dry run \u{2014} would restart {} task(s):",
+ resp.reset_ids.len()
+ );
+ // In dry-run mode we don't have per-task status info in the response,
+ // so just list the IDs
+ for tid in &resp.reset_ids {
+ let marker = if resp.in_progress_overridden.contains(tid) {
+ " (force)"
+ } else {
+ ""
+ };
+ println!(" {} -> todo{}", tid, marker);
+ }
+ if !resp.skipped_ids.is_empty() {
+ println!("Already todo: {}", resp.skipped_ids.join(", "));
}
}
-
- let _ = fs::write(&task_path, updated);
+ } else if json_mode {
+ json_output(json!({
+ "reset": resp.reset_ids,
+ "skipped": resp.skipped_ids,
+ "cascade_from": resp.cascade_from,
+ }));
+ } else if resp.reset_ids.is_empty() {
+ println!(
+ "Nothing to restart \u{2014} {} and dependents already todo.",
+ id
+ );
+ } else {
+ let downstream_count =
+ resp.reset_ids.len() - if resp.reset_ids.contains(&id) { 1 } else { 0 };
+ println!(
+ "Restarted from {} (cascade: {} downstream):\n",
+ id, downstream_count
+ );
+ for tid in &resp.reset_ids {
+ let marker = if *tid == id { " (target)" } else { "" };
+ println!(" {} -> todo{}", tid, marker);
+ }
}
}
-
- reset_ids.push(tid.clone());
- }
-
- if json_mode {
- json_output(json!({
- "reset": reset_ids,
- "skipped": skipped,
- "cascade_from": id,
- }));
- } else if reset_ids.is_empty() {
- println!(
- "Nothing to restart \u{2014} {} and dependents already todo.",
- id
- );
- } else {
- let downstream_count =
- reset_ids.len() - if reset_ids.contains(&id) { 1 } else { 0 };
- println!(
- "Restarted from {} (cascade: {} downstream):\n",
- id, downstream_count
- );
- for tid in &reset_ids {
- let marker = if *tid == id { " (target)" } else { "" };
- println!(" {} -> todo{}", tid, marker);
- }
+ Err(e) => error_exit(&e.to_string()),
}
}
diff --git a/flowctl/crates/flowctl-cli/src/commands/workflow/mod.rs b/flowctl/crates/flowctl-cli/src/commands/workflow/mod.rs
index 9954c309..23fee176 100644
--- a/flowctl/crates/flowctl-cli/src/commands/workflow/mod.rs
+++ b/flowctl/crates/flowctl-cli/src/commands/workflow/mod.rs
@@ -15,14 +15,12 @@ use std::env;
use std::fs;
use std::path::{Path, PathBuf};
-use chrono::Utc;
use regex::Regex;
use crate::output::error_exit;
use flowctl_core::frontmatter;
use flowctl_core::id::{epic_id_from_task, is_task_id, parse_id};
-use flowctl_core::state_machine::Status;
use flowctl_core::types::{
Epic, RuntimeState, Task, EPICS_DIR, TASKS_DIR,
};
@@ -56,16 +54,6 @@ fn load_epic_md(flow_dir: &Path, epic_id: &str) -> Option<Epic> {
 frontmatter::parse_frontmatter::<Epic>(&content).ok()
}
-/// Load a single task from Markdown frontmatter.
-fn load_task_md(flow_dir: &Path, task_id: &str) -> Option<Task> {
- let task_path = flow_dir.join(TASKS_DIR).join(format!("{}.md", task_id));
- if !task_path.exists() {
- return None;
- }
- let content = fs::read_to_string(&task_path).ok()?;
- frontmatter::parse_frontmatter::<Task>(&content).ok()
-}
-
/// Load all tasks for an epic, trying DB first then Markdown.
pub(crate) fn load_tasks_for_epic(flow_dir: &Path, epic_id: &str) -> HashMap<String, Task> {
// Try DB first
@@ -127,17 +115,6 @@ pub(crate) fn load_epic(flow_dir: &Path, epic_id: &str) -> Option<Epic> {
load_epic_md(flow_dir, epic_id)
}
-/// Load a task, trying DB first then Markdown.
-pub(crate) fn load_task(flow_dir: &Path, task_id: &str) -> Option<Task> {
- if let Some(conn) = try_open_db() {
- let repo = flowctl_db::TaskRepo::new(&conn);
- if let Ok(task) = repo.get(task_id) {
- return Some(task);
- }
- }
- load_task_md(flow_dir, task_id)
-}
-
/// Get runtime state for a task.
pub(crate) fn get_runtime(task_id: &str) -> Option<RuntimeState> {
let conn = try_open_db()?;
@@ -181,211 +158,3 @@ pub(crate) fn scan_epic_ids(flow_dir: &Path) -> Vec<String> {
ids.sort_by_key(|id| parse_id(id).map(|p| p.epic).unwrap_or(0));
ids
}
-
-/// Patch a Markdown section (## heading) with new content.
-pub(crate) fn patch_md_section(doc: &str, heading: &str, new_content: &str) -> Option<String> {
- let heading_prefix = format!("{}\n", heading);
- let pos = doc.find(&heading_prefix)?;
- let after_heading = pos + heading_prefix.len();
-
- // Find the next ## heading or end of document
- let rest = &doc[after_heading..];
- let next_heading = rest.find("\n## ").map(|p| after_heading + p + 1);
-
- let mut result = String::with_capacity(doc.len());
- result.push_str(&doc[..after_heading]);
- result.push_str(new_content.trim_end());
- result.push('\n');
- if let Some(nh) = next_heading {
- result.push('\n');
- result.push_str(&doc[nh..]);
- }
- Some(result)
-}
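The `patch_md_section` helper deleted above (presumably relocated into the service layer) keeps the `## heading` line and swaps only the body up to the next `## ` heading. A stdlib-only sketch of the same behavior, extracted for illustration:

```rust
// Sketch of the removed `patch_md_section` helper: keep the `## heading`
// line, replace the body up to the next `## ` heading (or end of document).
fn patch_md_section(doc: &str, heading: &str, new_content: &str) -> Option<String> {
    let heading_prefix = format!("{}\n", heading);
    let pos = doc.find(&heading_prefix)?;
    let after_heading = pos + heading_prefix.len();

    // Locate the start of the next `## ` heading, if any.
    let rest = &doc[after_heading..];
    let next_heading = rest.find("\n## ").map(|p| after_heading + p + 1);

    let mut result = String::with_capacity(doc.len());
    result.push_str(&doc[..after_heading]);
    result.push_str(new_content.trim_end());
    result.push('\n');
    if let Some(nh) = next_heading {
        result.push('\n');
        result.push_str(&doc[nh..]);
    }
    Some(result)
}

fn main() {
    let spec = "## Done summary\nTBD\n\n## Evidence\nTBD\n";
    let patched = patch_md_section(spec, "## Done summary", "All good.").unwrap();
    assert_eq!(patched, "## Done summary\nAll good.\n\n## Evidence\nTBD\n");
    // A missing heading yields None rather than silently appending.
    assert!(patch_md_section(spec, "## Missing", "x").is_none());
}
```

Note that `None` on a missing heading is what lets the callers above fall back to writing the document unchanged.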
-
-/// Get a Markdown section content (between ## heading and next ## or EOF).
-pub(crate) fn get_md_section(doc: &str, heading: &str) -> String {
- let heading_prefix = format!("{}\n", heading);
- let Some(pos) = doc.find(&heading_prefix) else {
- return String::new();
- };
- let after_heading = pos + heading_prefix.len();
- let rest = &doc[after_heading..];
- let section_end = rest.find("\n## ").unwrap_or(rest.len());
- rest[..section_end].trim().to_string()
-}
-
-/// Find all downstream dependents of a task within the same epic.
-pub(crate) fn find_dependents(flow_dir: &Path, task_id: &str) -> Vec<String> {
- let epic_id = match epic_id_from_task(task_id) {
- Ok(eid) => eid,
- Err(_) => return Vec::new(),
- };
-
- let tasks = load_tasks_for_epic(flow_dir, &epic_id);
- let mut dependents = Vec::new();
- let mut visited = std::collections::HashSet::new();
- let mut queue = vec![task_id.to_string()];
-
- while let Some(current) = queue.pop() {
- for (tid, task) in &tasks {
- if visited.contains(tid.as_str()) {
- continue;
- }
- if task.depends_on.contains(&current) {
- visited.insert(tid.clone());
- dependents.push(tid.clone());
- queue.push(tid.clone());
- }
- }
- }
-
- dependents.sort();
- dependents
-}
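The dependents walk above is a transitive closure over reversed `depends_on` edges. A self-contained sketch with plain string IDs (the map shape is illustrative, not the real `Task` type):

```rust
use std::collections::{HashMap, HashSet};

// Transitive-dependents walk, as in the removed `find_dependents`: starting
// from one task, repeatedly collect every task whose `depends_on` mentions
// an already-collected task, then sort for stable output.
fn find_dependents(deps: &HashMap<&str, Vec<&str>>, root: &str) -> Vec<String> {
    let mut visited: HashSet<String> = HashSet::new();
    let mut queue = vec![root.to_string()];
    while let Some(current) = queue.pop() {
        for (tid, depends_on) in deps {
            if visited.contains(*tid) {
                continue;
            }
            if depends_on.contains(&current.as_str()) {
                visited.insert(tid.to_string());
                queue.push(tid.to_string());
            }
        }
    }
    let mut out: Vec<String> = visited.into_iter().collect();
    out.sort();
    out
}

fn main() {
    // fn-1.2 depends on fn-1.1; fn-1.3 depends on fn-1.2.
    let mut deps = HashMap::new();
    deps.insert("fn-1.1", vec![]);
    deps.insert("fn-1.2", vec!["fn-1.1"]);
    deps.insert("fn-1.3", vec!["fn-1.2"]);
    assert_eq!(find_dependents(&deps, "fn-1.1"), vec!["fn-1.2", "fn-1.3"]);
}
```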
-
-/// Read max_retries from .flow/config.json (defaults to 0 = no retries).
-pub(crate) fn get_max_retries() -> u32 {
- let config_path = get_flow_dir().join("config.json");
- if let Ok(content) = fs::read_to_string(&config_path) {
- if let Ok(config) = serde_json::from_str::<serde_json::Value>(&content) {
- if let Some(max) = config.get("max_retries").and_then(|v| v.as_u64()) {
- return max as u32;
- }
- }
- }
- 0
-}
-
-/// Propagate upstream_failed to all transitive downstream tasks of `failed_id`.
-///
-/// Updates both SQLite and Markdown for each affected task. Returns the list
-/// of task IDs that were marked upstream_failed.
-pub(crate) fn propagate_upstream_failure(flow_dir: &Path, failed_id: &str) -> Vec<String> {
- let epic_id = match epic_id_from_task(failed_id) {
- Ok(eid) => eid,
- Err(_) => return Vec::new(),
- };
-
- let tasks = load_tasks_for_epic(flow_dir, &epic_id);
- let task_list: Vec<Task> = tasks.values().cloned().collect();
-
- let dag = match flowctl_core::TaskDag::from_tasks(&task_list) {
- Ok(d) => d,
- Err(_) => return Vec::new(),
- };
-
- let downstream = dag.propagate_failure(failed_id);
- let mut affected = Vec::new();
-
- for tid in &downstream {
- let task = match tasks.get(tid) {
- Some(t) => t,
- None => continue,
- };
-
- // Only propagate to tasks that aren't already in a terminal or failure state.
- if task.status.is_satisfied() || task.status.is_failed() {
- continue;
- }
-
- // Update SQLite
- if let Some(conn) = try_open_db() {
- let task_repo = flowctl_db::TaskRepo::new(&conn);
- let _ = task_repo.update_status(tid, Status::UpstreamFailed);
- }
-
- // Update Markdown frontmatter
- let task_path = flow_dir.join(TASKS_DIR).join(format!("{}.md", tid));
- if task_path.exists() {
- if let Ok(content) = fs::read_to_string(&task_path) {
- if let Ok(mut doc) = frontmatter::parse::<Task>(&content) {
- doc.frontmatter.status = Status::UpstreamFailed;
- doc.frontmatter.updated_at = Utc::now();
- if let Ok(new_content) = frontmatter::write(&doc) {
- let _ = fs::write(&task_path, new_content);
- }
- }
- }
- }
-
- affected.push(tid.clone());
- }
-
- affected
-}
-
-/// Handle task failure: check retries, set up_for_retry or failed + propagate.
-///
-/// Returns `(final_status, upstream_failed_ids)`.
-pub(crate) fn handle_task_failure(
- flow_dir: &Path,
- task_id: &str,
- runtime: &Option<RuntimeState>,
-) -> (Status, Vec<String>) {
- let max_retries = get_max_retries();
- let current_retry_count = runtime.as_ref().map(|r| r.retry_count).unwrap_or(0);
-
- if max_retries > 0 && current_retry_count < max_retries {
- // Task has retries remaining — set up_for_retry
- let new_retry_count = current_retry_count + 1;
-
- if let Some(conn) = try_open_db() {
- let task_repo = flowctl_db::TaskRepo::new(&conn);
- let _ = task_repo.update_status(task_id, Status::UpForRetry);
-
- let runtime_repo = flowctl_db::RuntimeRepo::new(&conn);
- let rt = RuntimeState {
- task_id: task_id.to_string(),
- assignee: runtime.as_ref().and_then(|r| r.assignee.clone()),
- claimed_at: None,
- completed_at: None,
- duration_secs: None,
- blocked_reason: None,
- baseline_rev: runtime.as_ref().and_then(|r| r.baseline_rev.clone()),
- final_rev: None,
- retry_count: new_retry_count,
- };
- let _ = runtime_repo.upsert(&rt);
- }
-
- // Update Markdown
- let task_path = flow_dir.join(TASKS_DIR).join(format!("{}.md", task_id));
- if task_path.exists() {
- if let Ok(content) = fs::read_to_string(&task_path) {
- if let Ok(mut doc) = frontmatter::parse::<Task>(&content) {
- doc.frontmatter.status = Status::UpForRetry;
- doc.frontmatter.updated_at = Utc::now();
- if let Ok(new_content) = frontmatter::write(&doc) {
- let _ = fs::write(&task_path, new_content);
- }
- }
- }
- }
-
- (Status::UpForRetry, Vec::new())
- } else {
- // No retries remaining — mark failed and propagate
- if let Some(conn) = try_open_db() {
- let task_repo = flowctl_db::TaskRepo::new(&conn);
- let _ = task_repo.update_status(task_id, Status::Failed);
- }
-
- // Update Markdown
- let task_path = flow_dir.join(TASKS_DIR).join(format!("{}.md", task_id));
- if task_path.exists() {
- if let Ok(content) = fs::read_to_string(&task_path) {
- if let Ok(mut doc) = frontmatter::parse::<Task>(&content) {
- doc.frontmatter.status = Status::Failed;
- doc.frontmatter.updated_at = Utc::now();
- if let Ok(new_content) = frontmatter::write(&doc) {
- let _ = fs::write(&task_path, new_content);
- }
- }
- }
- }
-
- let affected = propagate_upstream_failure(flow_dir, task_id);
- (Status::Failed, affected)
- }
-}
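The core decision in the removed `handle_task_failure` is small: retry while the budget lasts, otherwise fail and propagate. Isolated as a pure function (names here are illustrative, not the service-layer API):

```rust
// Retry decision from `handle_task_failure`: `max_retries == 0` means
// retries are disabled (the documented config default), so any failure is
// terminal; otherwise retry until the count reaches the budget.
#[derive(Debug, PartialEq)]
enum Outcome {
    UpForRetry { new_retry_count: u32 },
    Failed,
}

fn failure_outcome(max_retries: u32, current_retry_count: u32) -> Outcome {
    if max_retries > 0 && current_retry_count < max_retries {
        Outcome::UpForRetry { new_retry_count: current_retry_count + 1 }
    } else {
        Outcome::Failed
    }
}

fn main() {
    assert_eq!(failure_outcome(0, 0), Outcome::Failed); // retries disabled
    assert_eq!(failure_outcome(2, 1), Outcome::UpForRetry { new_retry_count: 2 });
    assert_eq!(failure_outcome(2, 2), Outcome::Failed); // budget exhausted
}
```

Only the `Failed` branch triggers `propagate_upstream_failure`; a retryable failure leaves downstream tasks untouched.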
diff --git a/flowctl/crates/flowctl-cli/src/main.rs b/flowctl/crates/flowctl-cli/src/main.rs
index 673b612f..c126054e 100644
--- a/flowctl/crates/flowctl-cli/src/main.rs
+++ b/flowctl/crates/flowctl-cli/src/main.rs
@@ -51,6 +51,12 @@ enum Commands {
/// Detect interrupted epics with undone tasks.
#[arg(long)]
interrupted: bool,
+ /// Render ASCII DAG of task dependencies for the active epic.
+ #[arg(long)]
+ dag: bool,
+ /// Epic ID (required with --dag).
+ #[arg(long)]
+ epic: Option,
},
/// Run comprehensive state health diagnostics.
Doctor,
@@ -118,6 +124,29 @@ enum Commands {
 review: Option<String>,
},
+ /// Estimate remaining time for an epic based on historical durations.
+ Estimate {
+ /// Epic ID.
+ #[arg(long)]
+ epic: String,
+ },
+ /// Replay an epic: reset all tasks to todo for re-execution.
+ Replay {
+ /// Epic ID.
+ epic_id: String,
+ /// Show what would be reset without doing it.
+ #[arg(long)]
+ dry_run: bool,
+ /// Allow replay even if tasks are in_progress.
+ #[arg(long)]
+ force: bool,
+ },
+ /// Show git diff summary for an epic's branch.
+ Diff {
+ /// Epic ID.
+ epic_id: String,
+ },
+
// ── Nested command groups ────────────────────────────────────────
/// Config commands.
Config {
@@ -388,7 +417,13 @@ fn main() {
// Admin / top-level
Commands::Init => admin::cmd_init(json),
Commands::Detect => admin::cmd_detect(json),
- Commands::Status { interrupted } => admin::cmd_status(json, interrupted),
+ Commands::Status { interrupted, dag, epic } => {
+ if dag {
+ commands::stats::cmd_dag(json, epic);
+ } else {
+ admin::cmd_status(json, interrupted);
+ }
+ }
Commands::Doctor => admin::cmd_doctor(json),
Commands::Validate { epic, all } => admin::cmd_validate(json, epic, all),
Commands::StatePath { task } => admin::cmd_state_path(json, task),
@@ -405,6 +440,10 @@ fn main() {
admin::cmd_worker_prompt(json, task, tdd, review)
}
+ Commands::Estimate { epic } => commands::stats::cmd_estimate(json, &epic),
+ Commands::Replay { epic_id, dry_run, force } => commands::epic::cmd_replay(json, &epic_id, dry_run, force),
+ Commands::Diff { epic_id } => commands::epic::cmd_diff(json, &epic_id),
+
// Nested groups
Commands::Config { cmd } => admin::cmd_config(&cmd, json),
Commands::Epic { cmd } => commands::epic::dispatch(&cmd, json),
@@ -515,6 +554,15 @@ fn main() {
let (event_bus, _critical_rx) = flowctl_scheduler::EventBus::with_default_capacity();
+ // Start notification listener (sound + webhook).
+ let notif_config = flowctl_daemon::notifications::load_config(&flow_dir);
+ let notif_rx = event_bus.subscribe();
+ flowctl_daemon::notifications::spawn_listener(
+ &runtime.tracker,
+ notif_rx,
+ notif_config,
+ );
+
// Handle Ctrl+C.
let cancel_clone = cancel.clone();
tokio::spawn(async move {
diff --git a/flowctl/crates/flowctl-cli/tests/parity_test.rs b/flowctl/crates/flowctl-cli/tests/parity_test.rs
index aeaa3aca..902782d6 100644
--- a/flowctl/crates/flowctl-cli/tests/parity_test.rs
+++ b/flowctl/crates/flowctl-cli/tests/parity_test.rs
@@ -8,8 +8,7 @@
//! behavior independently.
use serde_json::Value;
-use std::path::Path;
-use std::path::PathBuf;
+use std::path::{Path, PathBuf};
use std::process::Command;
/// Locate the Rust flowctl binary (cargo-built).
@@ -335,3 +334,124 @@ fn edge_done_no_task_id() {
let (_out, exit) = run(dir.path(), &["done"]);
assert_ne!(exit, 0, "done without task ID should fail");
}
+
+// ═══════════════════════════════════════════════════════════════════════
+// Service layer parity tests
+//
+// Verify that the service layer (used by MCP + daemon) produces the
+// same DB state as the CLI path.
+// ═══════════════════════════════════════════════════════════════════════
+
+/// Set up a .flow dir + DB + epic + task via CLI, return (dir, task_id).
+fn setup_task(prefix: &str) -> (tempfile::TempDir, String) {
+ let dir = temp_dir(prefix);
+ run(dir.path(), &["init"]);
+
+ let (epic_out, _) = run(dir.path(), &["epic", "create", "--title", "Parity Epic"]);
+ let epic_id = parse_json(&epic_out).unwrap()["id"]
+ .as_str()
+ .unwrap()
+ .to_string();
+
+ let (task_out, _) = run(
+ dir.path(),
+ &["task", "create", "--epic", &epic_id, "--title", "Parity Task"],
+ );
+ let task_id = parse_json(&task_out).unwrap()["id"]
+ .as_str()
+ .unwrap()
+ .to_string();
+
+ (dir, task_id)
+}
+
+/// Read task status from the DB directly.
+fn db_task_status(work_dir: &Path, task_id: &str) -> String {
+ let conn = flowctl_db::open(work_dir).expect("open db");
+ let repo = flowctl_db::TaskRepo::new(&conn);
+ let task = repo.get(task_id).expect("get task");
+ task.status.to_string()
+}
+
+#[test]
+fn parity_start_cli_vs_service() {
+ // CLI path
+ let (cli_dir, cli_task) = setup_task("par_start_cli_");
+ run(cli_dir.path(), &["start", &cli_task]);
+ let cli_status = db_task_status(cli_dir.path(), &cli_task);
+
+ // Service path (same setup, then call service directly)
+ let (svc_dir, svc_task) = setup_task("par_start_svc_");
+ let flow_dir = svc_dir.path().join(".flow");
+ let conn = flowctl_db::open(svc_dir.path()).expect("open db");
+ let req = flowctl_service::lifecycle::StartTaskRequest {
+ task_id: svc_task.clone(),
+ force: false,
+ actor: "test".to_string(),
+ };
+ let resp = flowctl_service::lifecycle::start_task(Some(&conn), &flow_dir, req);
+ assert!(resp.is_ok(), "service start_task should succeed: {:?}", resp.err());
+ let svc_status = db_task_status(svc_dir.path(), &svc_task);
+
+ assert_eq!(cli_status, svc_status, "CLI and service should produce same status after start");
+ assert_eq!(cli_status, "in_progress", "status should be in_progress");
+}
+
+#[test]
+fn parity_done_cli_vs_service() {
+ // CLI path
+ let (cli_dir, cli_task) = setup_task("par_done_cli_");
+ run(cli_dir.path(), &["start", &cli_task]);
+ run(
+ cli_dir.path(),
+ &["done", &cli_task, "--summary", "Done via CLI", "--force"],
+ );
+ let cli_status = db_task_status(cli_dir.path(), &cli_task);
+
+ // Service path
+ let (svc_dir, svc_task) = setup_task("par_done_svc_");
+ let flow_dir = svc_dir.path().join(".flow");
+ let conn = flowctl_db::open(svc_dir.path()).expect("open db");
+
+ // Start first
+ let start_req = flowctl_service::lifecycle::StartTaskRequest {
+ task_id: svc_task.clone(),
+ force: false,
+ actor: "test".to_string(),
+ };
+ flowctl_service::lifecycle::start_task(Some(&conn), &flow_dir, start_req).unwrap();
+
+ // Done
+ let done_req = flowctl_service::lifecycle::DoneTaskRequest {
+ task_id: svc_task.clone(),
+ summary: Some("Done via service".to_string()),
+ summary_file: None,
+ evidence_json: None,
+ evidence_inline: None,
+ force: true,
+ actor: "test".to_string(),
+ };
+ let resp = flowctl_service::lifecycle::done_task(Some(&conn), &flow_dir, done_req);
+ assert!(resp.is_ok(), "service done_task should succeed: {:?}", resp.err());
+ let svc_status = db_task_status(svc_dir.path(), &svc_task);
+
+ assert_eq!(cli_status, svc_status, "CLI and service should produce same status after done");
+ assert_eq!(cli_status, "done", "status should be done");
+}
+
+#[test]
+fn parity_start_invalid_task_service() {
+ let dir = temp_dir("par_bad_start_");
+ run(dir.path(), &["init"]);
+
+ let flow_dir = dir.path().join(".flow");
+ let conn = flowctl_db::open(dir.path()).expect("open db");
+
+ let req = flowctl_service::lifecycle::StartTaskRequest {
+ task_id: "nonexistent-1.1".to_string(),
+ force: false,
+ actor: "test".to_string(),
+ };
+ let result = flowctl_service::lifecycle::start_task(Some(&conn), &flow_dir, req);
+ assert!(result.is_err(), "service should reject nonexistent task");
+}
diff --git a/flowctl/crates/flowctl-core/src/dag.rs b/flowctl/crates/flowctl-core/src/dag.rs
index c234c947..84030cec 100644
--- a/flowctl/crates/flowctl-core/src/dag.rs
+++ b/flowctl/crates/flowctl-core/src/dag.rs
@@ -226,6 +226,63 @@ impl TaskDag {
}
}
+ /// Compute the weighted critical path (longest path through the DAG).
+ ///
+ /// Each task's weight is looked up in `weights`; tasks not present default
+ /// to 1.0. Returns task IDs on the longest-weighted path from any source
+ /// to any sink, in topological order.
+ ///
+ /// This is the standard CPM (Critical Path Method) forward pass: for each
+ /// node in topological order, propagate `dist[u] + weight(v)` to each
+ /// successor v, then trace back from the node with the maximum distance.
+ pub fn critical_path_weighted(&self, weights: &HashMap<String, f64>) -> Vec<String> {
+ if self.graph.node_count() == 0 {
+ return vec![];
+ }
+
+ let topo_order = self.topological_sort();
+
+ // Forward pass: dist[v] = longest weighted path ending at v (inclusive).
+ let mut dist: HashMap<NodeIndex, f64> = HashMap::new();
+ let mut pred: HashMap<NodeIndex, NodeIndex> = HashMap::new();
+
+ for &ni in &topo_order {
+ let id = &self.graph[ni];
+ let w = weights.get(id.as_str()).copied().unwrap_or(1.0);
+ dist.insert(ni, w); // base: just own weight
+ }
+
+ for &ni in &topo_order {
+ let current_dist = dist[&ni];
+ for downstream in self.graph.neighbors_directed(ni, Direction::Outgoing) {
+ let downstream_id = &self.graph[downstream];
+ let downstream_w = weights.get(downstream_id.as_str()).copied().unwrap_or(1.0);
+ let new_dist = current_dist + downstream_w;
+ if new_dist > dist[&downstream] {
+ dist.insert(downstream, new_dist);
+ pred.insert(downstream, ni);
+ }
+ }
+ }
+
+ // Find the node with the maximum distance (end of critical path).
+ let (&end_node, _) = dist
+ .iter()
+ .max_by(|(_, a), (_, b)| a.partial_cmp(b).unwrap_or(std::cmp::Ordering::Equal))
+ .unwrap();
+
+ // Trace back.
+ let mut path = vec![end_node];
+ let mut current = end_node;
+ while let Some(&p) = pred.get(&current) {
+ path.push(p);
+ current = p;
+ }
+ path.reverse();
+
+ path.iter().map(|&ni| self.graph[ni].clone()).collect()
+ }
+
/// Compute the critical path (longest path through the DAG).
///
/// Each task has unit weight (1). Returns task IDs on the longest path
@@ -272,6 +329,43 @@ impl TaskDag {
path.iter().map(|&ni| self.graph[ni].clone()).collect()
}
+ /// Compute CPM distances for all tasks (longest weighted path ending at each node).
+ ///
+ /// Returns a map from task ID to its CPM distance. Tasks on the critical path
+ /// have the highest values. The scheduler uses these to prioritize dispatch:
+ /// higher CPM distance = more downstream work depends on this task.
+ pub fn cpm_priorities(&self, weights: &HashMap<String, f64>) -> HashMap<String, f64> {
+ if self.graph.node_count() == 0 {
+ return HashMap::new();
+ }
+
+ let topo_order = self.topological_sort();
+ let mut dist: HashMap<NodeIndex, f64> = HashMap::new();
+
+ for &ni in &topo_order {
+ let id = &self.graph[ni];
+ let w = weights.get(id.as_str()).copied().unwrap_or(1.0);
+ dist.insert(ni, w);
+ }
+
+ for &ni in &topo_order {
+ let current_dist = dist[&ni];
+ for downstream in self.graph.neighbors_directed(ni, Direction::Outgoing) {
+ let downstream_id = &self.graph[downstream];
+ let downstream_w = weights.get(downstream_id.as_str()).copied().unwrap_or(1.0);
+ let new_dist = current_dist + downstream_w;
+ if new_dist > dist[&downstream] {
+ dist.insert(downstream, new_dist);
+ }
+ }
+ }
+
+ // Map back to task IDs.
+ dist.iter()
+ .map(|(&ni, &d)| (self.graph[ni].clone(), d))
+ .collect()
+ }
+
/// Split a task into multiple new tasks, re-wiring dependencies.
///
/// The original task's incoming edges go to the first new task.
@@ -381,6 +475,14 @@ impl TaskDag {
order
}
+ /// Return task IDs in topological order.
+ pub fn topological_sort_ids(&self) -> Vec<String> {
+ self.topological_sort()
+ .iter()
+ .map(|&ni| self.graph[ni].clone())
+ .collect()
+ }
+
/// Number of tasks in the DAG.
pub fn len(&self) -> usize {
self.graph.node_count()
@@ -814,6 +916,115 @@ mod tests {
assert_eq!(cp, vec!["a", "b", "c", "d"]);
}
+ // ── critical_path_weighted ────────────────────────────────────────
+
+ #[test]
+ fn test_critical_path_weighted_linear() {
+ // a(3) -> b(1) -> c(5) = total 9
+ let tasks = vec![
+ make_task("a", &[]),
+ make_task("b", &["a"]),
+ make_task("c", &["b"]),
+ ];
+ let dag = TaskDag::from_tasks(&tasks).unwrap();
+ let weights: HashMap<String, f64> = [
+ ("a".to_string(), 3.0),
+ ("b".to_string(), 1.0),
+ ("c".to_string(), 5.0),
+ ]
+ .into_iter()
+ .collect();
+ let cp = dag.critical_path_weighted(&weights);
+ assert_eq!(cp, vec!["a", "b", "c"]);
+ }
+
+ #[test]
+ fn test_critical_path_weighted_diamond() {
+ // a(1)
+ // / \
+ // b(10) c(1)
+ // \ /
+ // d(1)
+ // Path a->b->d = 12, a->c->d = 3. Critical path is a->b->d.
+ let tasks = vec![
+ make_task("a", &[]),
+ make_task("b", &["a"]),
+ make_task("c", &["a"]),
+ make_task("d", &["b", "c"]),
+ ];
+ let dag = TaskDag::from_tasks(&tasks).unwrap();
+ let weights: HashMap<String, f64> = [
+ ("a".to_string(), 1.0),
+ ("b".to_string(), 10.0),
+ ("c".to_string(), 1.0),
+ ("d".to_string(), 1.0),
+ ]
+ .into_iter()
+ .collect();
+ let cp = dag.critical_path_weighted(&weights);
+ assert_eq!(cp, vec!["a", "b", "d"]);
+ }
+
+ #[test]
+ fn test_critical_path_weighted_empty() {
+ let dag = TaskDag::from_tasks(&[]).unwrap();
+ let weights: HashMap<String, f64> = HashMap::new();
+ assert!(dag.critical_path_weighted(&weights).is_empty());
+ }
+
+ #[test]
+ fn test_critical_path_weighted_single() {
+ let dag = TaskDag::from_tasks(&[make_task("a", &[])]).unwrap();
+ let weights: HashMap<String, f64> = [("a".to_string(), 42.0)].into_iter().collect();
+ let cp = dag.critical_path_weighted(&weights);
+ assert_eq!(cp, vec!["a"]);
+ }
+
+ #[test]
+ fn test_critical_path_weighted_defaults_to_unit() {
+ // Without weights, should behave like critical_path().
+ let tasks = vec![
+ make_task("a", &[]),
+ make_task("b", &["a"]),
+ make_task("c", &["b"]),
+ ];
+ let dag = TaskDag::from_tasks(&tasks).unwrap();
+ let empty_weights: HashMap<String, f64> = HashMap::new();
+ let cp_weighted = dag.critical_path_weighted(&empty_weights);
+ let cp_unit = dag.critical_path();
+ assert_eq!(cp_weighted, cp_unit);
+ }
+
+ // ── cpm_priorities ─────────────────────────────────────────────
+
+ #[test]
+ fn test_cpm_priorities_diamond() {
+ let tasks = vec![
+ make_task("a", &[]),
+ make_task("b", &["a"]),
+ make_task("c", &["a"]),
+ make_task("d", &["b", "c"]),
+ ];
+ let dag = TaskDag::from_tasks(&tasks).unwrap();
+ let weights: HashMap<String, f64> = [
+ ("a".to_string(), 1.0),
+ ("b".to_string(), 10.0),
+ ("c".to_string(), 1.0),
+ ("d".to_string(), 1.0),
+ ]
+ .into_iter()
+ .collect();
+ let priorities = dag.cpm_priorities(&weights);
+ // d should have dist = 1+10+1 = 12 (through b path)
+ assert!((priorities["d"] - 12.0).abs() < f64::EPSILON);
+ // b should have dist = 1+10 = 11
+ assert!((priorities["b"] - 11.0).abs() < f64::EPSILON);
+ // c's dist starts at its own weight (1.0); a's forward pass raises it to 1+1 = 2.
+ assert!((priorities["c"] - 2.0).abs() < f64::EPSILON);
+ }
+
// ── split_task ──────────────────────────────────────────────────
#[test]
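The CPM forward pass used by `critical_path_weighted` and `cpm_priorities` can be sketched standalone, without petgraph, as a plain topological relaxation. This is an illustrative sketch over string node IDs, not the crate's actual API; `cpm_dist` and the adjacency-list shape are assumptions for the example.

```rust
use std::collections::HashMap;

/// Longest weighted path ending at each node (CPM forward pass).
/// `topo` must be a topological order; `edges[u]` lists successors of `u`.
/// Missing weights default to 1.0, matching the DAG methods above.
fn cpm_dist(
    topo: &[&str],
    edges: &HashMap<&str, Vec<&str>>,
    weights: &HashMap<&str, f64>,
) -> HashMap<String, f64> {
    // Base case: each node's distance is its own weight.
    let mut dist: HashMap<&str, f64> = topo
        .iter()
        .map(|&n| (n, *weights.get(n).unwrap_or(&1.0)))
        .collect();
    // Relax every edge in topological order.
    for &u in topo {
        let du = dist[u];
        for &v in edges.get(u).map(|s| s.as_slice()).unwrap_or(&[]) {
            let cand = du + weights.get(v).unwrap_or(&1.0);
            if cand > dist[v] {
                dist.insert(v, cand);
            }
        }
    }
    dist.into_iter().map(|(k, v)| (k.to_string(), v)).collect()
}

fn main() {
    // Diamond: a -> {b, c} -> d, with b heavy (10).
    let topo = ["a", "b", "c", "d"];
    let mut edges = HashMap::new();
    edges.insert("a", vec!["b", "c"]);
    edges.insert("b", vec!["d"]);
    edges.insert("c", vec!["d"]);
    let weights: HashMap<&str, f64> =
        [("a", 1.0), ("b", 10.0), ("c", 1.0), ("d", 1.0)].into();
    let d = cpm_dist(&topo, &edges, &weights);
    assert!((d["d"] - 12.0).abs() < 1e-9); // critical path a -> b -> d
    assert!((d["b"] - 11.0).abs() < 1e-9);
    println!("ok");
}
```

Tracing back along the maximal predecessors from the node with the largest distance (as `critical_path_weighted` does) recovers the path itself.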
diff --git a/flowctl/crates/flowctl-core/src/lib.rs b/flowctl/crates/flowctl-core/src/lib.rs
index 38937d6c..0282a6ee 100644
--- a/flowctl/crates/flowctl-core/src/lib.rs
+++ b/flowctl/crates/flowctl-core/src/lib.rs
@@ -8,7 +8,9 @@ pub mod dag;
pub mod error;
pub mod frontmatter;
pub mod id;
+pub mod review_protocol;
pub mod state_machine;
+pub mod task_profile;
pub mod types;
// Re-export commonly used items at crate root.
diff --git a/flowctl/crates/flowctl-core/src/review_protocol.rs b/flowctl/crates/flowctl-core/src/review_protocol.rs
new file mode 100644
index 00000000..3abea002
--- /dev/null
+++ b/flowctl/crates/flowctl-core/src/review_protocol.rs
@@ -0,0 +1,433 @@
+//! Cross-model review protocol types and consensus logic.
+//!
+//! Defines structured types for multi-model adversarial review:
+//! - `ReviewFinding`: individual issue found during review
+//! - `ReviewVerdict`: per-model verdict (Ship / NeedsWork / Abstain)
+//! - `ModelReview`: a single model's complete review
+//! - `ConsensusResult`: aggregated result from multiple model reviews
+//! - `compute_consensus()`: conservative consensus algorithm
+
+use serde::{Deserialize, Serialize};
+
+// ── Finding severity ────────────────────────────────────────────────
+
+/// Severity level for a review finding.
+#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
+#[serde(rename_all = "lowercase")]
+pub enum Severity {
+ Critical,
+ Warning,
+ Info,
+}
+
+impl std::fmt::Display for Severity {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ match self {
+ Severity::Critical => write!(f, "critical"),
+ Severity::Warning => write!(f, "warning"),
+ Severity::Info => write!(f, "info"),
+ }
+ }
+}
+
+// ── ReviewFinding ───────────────────────────────────────────────────
+
+/// A single finding from a model review.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct ReviewFinding {
+ /// Severity of the finding.
+ pub severity: Severity,
+ /// Category (e.g., "security", "performance", "logic", "style").
+ pub category: String,
+ /// Human-readable description of the issue.
+ pub description: String,
+ /// File path where the issue was found (if applicable).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub file: Option<String>,
+ /// Line number where the issue was found (if applicable).
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub line: Option<u32>,
+}
+
+// ── ReviewVerdict ───────────────────────────────────────────────────
+
+/// A model's verdict on the reviewed code.
+#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
+#[serde(rename_all = "SCREAMING_SNAKE_CASE")]
+pub enum ReviewVerdict {
+ /// Code is ready to ship.
+ Ship,
+ /// Code needs additional work before shipping.
+ NeedsWork,
+ /// Model cannot make a determination.
+ Abstain,
+}
+
+impl std::fmt::Display for ReviewVerdict {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ match self {
+ ReviewVerdict::Ship => write!(f, "SHIP"),
+ ReviewVerdict::NeedsWork => write!(f, "NEEDS_WORK"),
+ ReviewVerdict::Abstain => write!(f, "ABSTAIN"),
+ }
+ }
+}
+
+// ── ModelReview ─────────────────────────────────────────────────────
+
+/// A complete review from a single model.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct ModelReview {
+ /// Model identifier (e.g., "codex/gpt-5.4", "claude/opus-4").
+ pub model: String,
+ /// The model's verdict.
+ pub verdict: ReviewVerdict,
+ /// Individual findings from the review.
+ pub findings: Vec<ReviewFinding>,
+ /// Confidence score in [0.0, 1.0].
+ pub confidence: f64,
+}
+
+// ── ConsensusResult ─────────────────────────────────────────────────
+
+/// Aggregated consensus from multiple model reviews.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+#[serde(tag = "type", rename_all = "snake_case")]
+pub enum ConsensusResult {
+ /// All models agree on the same verdict.
+ Consensus {
+ verdict: ReviewVerdict,
+ /// Combined confidence (average of participating models).
+ confidence: f64,
+ },
+ /// Models disagree on the verdict.
+ Conflict {
+ /// Individual model reviews for human inspection.
+ reviews: Vec<ModelReview>,
+ },
+ /// Not enough reviews to determine consensus (need at least 2).
+ InsufficientReviews,
+}
+
+impl std::fmt::Display for ConsensusResult {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ match self {
+ ConsensusResult::Consensus { verdict, confidence } => {
+ write!(f, "Consensus: {} (confidence: {:.0}%)", verdict, confidence * 100.0)
+ }
+ ConsensusResult::Conflict { reviews } => {
+ write!(f, "Conflict: {} models disagree", reviews.len())
+ }
+ ConsensusResult::InsufficientReviews => {
+ write!(f, "Insufficient reviews for consensus")
+ }
+ }
+ }
+}
+
+// ── Consensus algorithm ─────────────────────────────────────────────
+
+/// Compute consensus from multiple model reviews.
+///
+/// Algorithm (conservative):
+/// - Fewer than 2 reviews → `InsufficientReviews`
+/// - Filter out `Abstain` verdicts for consensus calculation
+/// - If all non-abstain models agree → `Consensus` with that verdict
+/// - If ANY model says `NeedsWork` → `Consensus(NeedsWork)` (conservative)
+/// - Otherwise (mixed Ship/Abstain with disagreement) → `Conflict`
+pub fn compute_consensus(reviews: &[ModelReview]) -> ConsensusResult {
+ if reviews.len() < 2 {
+ return ConsensusResult::InsufficientReviews;
+ }
+
+ // Filter out abstaining models for the actual vote
+ let voting_reviews: Vec<&ModelReview> = reviews
+ .iter()
+ .filter(|r| r.verdict != ReviewVerdict::Abstain)
+ .collect();
+
+ // All abstained — insufficient signal
+ if voting_reviews.is_empty() {
+ return ConsensusResult::InsufficientReviews;
+ }
+
+ // Check if any model says NeedsWork (conservative: block on any objection)
+ let has_needs_work = voting_reviews
+ .iter()
+ .any(|r| r.verdict == ReviewVerdict::NeedsWork);
+
+ if has_needs_work {
+ // Conservative: any NeedsWork → overall NeedsWork
+ let avg_confidence = voting_reviews
+ .iter()
+ .map(|r| r.confidence)
+ .sum::<f64>()
+ / voting_reviews.len() as f64;
+ return ConsensusResult::Consensus {
+ verdict: ReviewVerdict::NeedsWork,
+ confidence: avg_confidence,
+ };
+ }
+
+ // Check unanimous agreement among voters
+ let first_verdict = &voting_reviews[0].verdict;
+ let all_agree = voting_reviews.iter().all(|r| &r.verdict == first_verdict);
+
+ if all_agree {
+ let avg_confidence = voting_reviews
+ .iter()
+ .map(|r| r.confidence)
+ .sum::<f64>()
+ / voting_reviews.len() as f64;
+ ConsensusResult::Consensus {
+ verdict: first_verdict.clone(),
+ confidence: avg_confidence,
+ }
+ } else {
+ ConsensusResult::Conflict {
+ reviews: reviews.to_vec(),
+ }
+ }
+}
+
+// ── Tests ───────────────────────────────────────────────────────────
+
+#[cfg(test)]
+mod tests {
+ use super::*;
+
+ fn make_review(model: &str, verdict: ReviewVerdict, confidence: f64) -> ModelReview {
+ ModelReview {
+ model: model.to_string(),
+ verdict,
+ findings: vec![],
+ confidence,
+ }
+ }
+
+ fn make_review_with_findings(
+ model: &str,
+ verdict: ReviewVerdict,
+ confidence: f64,
+ findings: Vec<ReviewFinding>,
+ ) -> ModelReview {
+ ModelReview {
+ model: model.to_string(),
+ verdict,
+ findings,
+ confidence,
+ }
+ }
+
+ #[test]
+ fn test_insufficient_reviews_empty() {
+ let result = compute_consensus(&[]);
+ assert!(matches!(result, ConsensusResult::InsufficientReviews));
+ }
+
+ #[test]
+ fn test_insufficient_reviews_single() {
+ let reviews = vec![make_review("codex", ReviewVerdict::Ship, 0.9)];
+ let result = compute_consensus(&reviews);
+ assert!(matches!(result, ConsensusResult::InsufficientReviews));
+ }
+
+ #[test]
+ fn test_consensus_both_ship() {
+ let reviews = vec![
+ make_review("codex", ReviewVerdict::Ship, 0.9),
+ make_review("claude", ReviewVerdict::Ship, 0.85),
+ ];
+ let result = compute_consensus(&reviews);
+ match result {
+ ConsensusResult::Consensus { verdict, confidence } => {
+ assert_eq!(verdict, ReviewVerdict::Ship);
+ assert!((confidence - 0.875).abs() < 0.001);
+ }
+ _ => panic!("expected Consensus, got {:?}", result),
+ }
+ }
+
+ #[test]
+ fn test_consensus_both_needs_work() {
+ let reviews = vec![
+ make_review("codex", ReviewVerdict::NeedsWork, 0.8),
+ make_review("claude", ReviewVerdict::NeedsWork, 0.7),
+ ];
+ let result = compute_consensus(&reviews);
+ match result {
+ ConsensusResult::Consensus { verdict, confidence } => {
+ assert_eq!(verdict, ReviewVerdict::NeedsWork);
+ assert!((confidence - 0.75).abs() < 0.001);
+ }
+ _ => panic!("expected Consensus, got {:?}", result),
+ }
+ }
+
+ #[test]
+ fn test_conservative_any_needs_work() {
+ // One says Ship, one says NeedsWork → conservative NeedsWork
+ let reviews = vec![
+ make_review("codex", ReviewVerdict::Ship, 0.9),
+ make_review("claude", ReviewVerdict::NeedsWork, 0.85),
+ ];
+ let result = compute_consensus(&reviews);
+ match result {
+ ConsensusResult::Consensus { verdict, .. } => {
+ assert_eq!(verdict, ReviewVerdict::NeedsWork);
+ }
+ _ => panic!("expected NeedsWork consensus, got {:?}", result),
+ }
+ }
+
+ #[test]
+ fn test_abstain_filtered_out() {
+ // One abstains, one ships → Consensus(Ship)
+ let reviews = vec![
+ make_review("codex", ReviewVerdict::Ship, 0.9),
+ make_review("claude", ReviewVerdict::Abstain, 0.3),
+ ];
+ let result = compute_consensus(&reviews);
+ match result {
+ ConsensusResult::Consensus { verdict, confidence } => {
+ assert_eq!(verdict, ReviewVerdict::Ship);
+ assert!((confidence - 0.9).abs() < 0.001);
+ }
+ _ => panic!("expected Consensus, got {:?}", result),
+ }
+ }
+
+ #[test]
+ fn test_all_abstain_insufficient() {
+ let reviews = vec![
+ make_review("codex", ReviewVerdict::Abstain, 0.3),
+ make_review("claude", ReviewVerdict::Abstain, 0.2),
+ ];
+ let result = compute_consensus(&reviews);
+ assert!(matches!(result, ConsensusResult::InsufficientReviews));
+ }
+
+ #[test]
+ fn test_three_models_consensus() {
+ let reviews = vec![
+ make_review("codex", ReviewVerdict::Ship, 0.9),
+ make_review("claude", ReviewVerdict::Ship, 0.85),
+ make_review("gemini", ReviewVerdict::Ship, 0.8),
+ ];
+ let result = compute_consensus(&reviews);
+ match result {
+ ConsensusResult::Consensus { verdict, confidence } => {
+ assert_eq!(verdict, ReviewVerdict::Ship);
+ assert!((confidence - 0.85).abs() < 0.001);
+ }
+ _ => panic!("expected Consensus, got {:?}", result),
+ }
+ }
+
+ #[test]
+ fn test_severity_display() {
+ assert_eq!(format!("{}", Severity::Critical), "critical");
+ assert_eq!(format!("{}", Severity::Warning), "warning");
+ assert_eq!(format!("{}", Severity::Info), "info");
+ }
+
+ #[test]
+ fn test_verdict_display() {
+ assert_eq!(format!("{}", ReviewVerdict::Ship), "SHIP");
+ assert_eq!(format!("{}", ReviewVerdict::NeedsWork), "NEEDS_WORK");
+ assert_eq!(format!("{}", ReviewVerdict::Abstain), "ABSTAIN");
+ }
+
+ #[test]
+ fn test_consensus_result_display() {
+ let c = ConsensusResult::Consensus {
+ verdict: ReviewVerdict::Ship,
+ confidence: 0.9,
+ };
+ assert_eq!(format!("{c}"), "Consensus: SHIP (confidence: 90%)");
+
+ let c = ConsensusResult::InsufficientReviews;
+ assert_eq!(format!("{c}"), "Insufficient reviews for consensus");
+ }
+
+ #[test]
+ fn test_finding_serialization() {
+ let finding = ReviewFinding {
+ severity: Severity::Critical,
+ category: "security".to_string(),
+ description: "SQL injection vulnerability".to_string(),
+ file: Some("src/db.rs".to_string()),
+ line: Some(42),
+ };
+ let json = serde_json::to_value(&finding).unwrap();
+ assert_eq!(json["severity"], "critical");
+ assert_eq!(json["category"], "security");
+ assert_eq!(json["file"], "src/db.rs");
+ assert_eq!(json["line"], 42);
+ }
+
+ #[test]
+ fn test_finding_without_location() {
+ let finding = ReviewFinding {
+ severity: Severity::Info,
+ category: "style".to_string(),
+ description: "Consider using const".to_string(),
+ file: None,
+ line: None,
+ };
+ let json = serde_json::to_value(&finding).unwrap();
+ assert_eq!(json["severity"], "info");
+ // Optional fields should be absent
+ assert!(json.get("file").is_none());
+ assert!(json.get("line").is_none());
+ }
+
+ #[test]
+ fn test_model_review_with_findings() {
+ let review = make_review_with_findings(
+ "codex",
+ ReviewVerdict::NeedsWork,
+ 0.85,
+ vec![
+ ReviewFinding {
+ severity: Severity::Critical,
+ category: "logic".to_string(),
+ description: "Off-by-one in loop".to_string(),
+ file: Some("src/main.rs".to_string()),
+ line: Some(10),
+ },
+ ReviewFinding {
+ severity: Severity::Warning,
+ category: "performance".to_string(),
+ description: "Unnecessary clone".to_string(),
+ file: None,
+ line: None,
+ },
+ ],
+ );
+ assert_eq!(review.findings.len(), 2);
+ assert_eq!(review.findings[0].severity, Severity::Critical);
+ assert_eq!(review.findings[1].severity, Severity::Warning);
+ }
+
+ #[test]
+ fn test_consensus_result_serialization() {
+ let result = ConsensusResult::Consensus {
+ verdict: ReviewVerdict::Ship,
+ confidence: 0.9,
+ };
+ let json = serde_json::to_value(&result).unwrap();
+ assert_eq!(json["type"], "consensus");
+ assert_eq!(json["verdict"], "SHIP");
+
+ let result = ConsensusResult::Conflict {
+ reviews: vec![
+ make_review("a", ReviewVerdict::Ship, 0.9),
+ make_review("b", ReviewVerdict::NeedsWork, 0.8),
+ ],
+ };
+ let json = serde_json::to_value(&result).unwrap();
+ assert_eq!(json["type"], "conflict");
+ assert_eq!(json["reviews"].as_array().unwrap().len(), 2);
+ }
+}
diff --git a/flowctl/crates/flowctl-core/src/task_profile.rs b/flowctl/crates/flowctl-core/src/task_profile.rs
new file mode 100644
index 00000000..961b2b64
--- /dev/null
+++ b/flowctl/crates/flowctl-core/src/task_profile.rs
@@ -0,0 +1,69 @@
+//! Task profiling: track estimated and actual durations for CPM weighting.
+//!
+//! `TaskProfile` stores per-task timing data that feeds into the DAG's
+//! weighted critical path calculation. Profiles can be built from historical
+//! `duration_seconds` evidence or from explicit estimates.
+
+use std::collections::HashMap;
+
+/// A task's execution profile used for CPM weighting.
+#[derive(Debug, Clone)]
+pub struct TaskProfile {
+ /// Estimated duration in seconds (user-provided or from historical data).
+ pub estimated_seconds: f64,
+ /// Actual duration in seconds (filled after completion).
+ pub actual_seconds: Option<f64>,
+}
+
+impl TaskProfile {
+ pub fn new(estimated_seconds: f64) -> Self {
+ Self {
+ estimated_seconds,
+ actual_seconds: None,
+ }
+ }
+
+ /// Return the best available weight: actual if known, else estimated.
+ pub fn weight(&self) -> f64 {
+ self.actual_seconds.unwrap_or(self.estimated_seconds)
+ }
+}
+
+/// Build a weight map suitable for `TaskDag::critical_path_weighted()` and
+/// `TaskDag::cpm_priorities()` from a set of task profiles.
+///
+/// Tasks without a profile get weight 1.0 (the DAG methods' default).
+pub fn weights_from_profiles(profiles: &HashMap<String, TaskProfile>) -> HashMap<String, f64> {
+ profiles
+ .iter()
+ .map(|(id, p)| (id.clone(), p.weight()))
+ .collect()
+}
+
+#[cfg(test)]
+mod tests {
+ use super::*;
+
+ #[test]
+ fn test_profile_weight_uses_estimated() {
+ let p = TaskProfile::new(5.0);
+ assert!((p.weight() - 5.0).abs() < f64::EPSILON);
+ }
+
+ #[test]
+ fn test_profile_weight_prefers_actual() {
+ let mut p = TaskProfile::new(5.0);
+ p.actual_seconds = Some(3.0);
+ assert!((p.weight() - 3.0).abs() < f64::EPSILON);
+ }
+
+ #[test]
+ fn test_weights_from_profiles() {
+ let mut profiles = HashMap::new();
+ profiles.insert("a".to_string(), TaskProfile::new(10.0));
+ profiles.insert("b".to_string(), TaskProfile::new(2.0));
+ let weights = weights_from_profiles(&profiles);
+ assert!((weights["a"] - 10.0).abs() < f64::EPSILON);
+ assert!((weights["b"] - 2.0).abs() < f64::EPSILON);
+ }
+}
diff --git a/flowctl/crates/flowctl-daemon/Cargo.toml b/flowctl/crates/flowctl-daemon/Cargo.toml
index 3f46ea7c..dae02f98 100644
--- a/flowctl/crates/flowctl-daemon/Cargo.toml
+++ b/flowctl/crates/flowctl-daemon/Cargo.toml
@@ -15,10 +15,12 @@ daemon = [
"dep:tower-http",
"dep:nix",
]
+webhook = ["dep:reqwest"]
[dependencies]
flowctl-core = { workspace = true }
flowctl-db = { workspace = true }
+flowctl-service = { workspace = true }
flowctl-scheduler = { workspace = true }
rusqlite = { workspace = true }
serde = { workspace = true }
@@ -34,6 +36,7 @@ tokio-util = { workspace = true, optional = true }
axum = { workspace = true, optional = true }
tower-http = { version = "0.6", features = ["cors"], optional = true }
nix = { workspace = true, optional = true }
+reqwest = { version = "0.12", features = ["json"], optional = true }
[dev-dependencies]
tempfile = "3"
diff --git a/flowctl/crates/flowctl-daemon/src/handlers.rs b/flowctl/crates/flowctl-daemon/src/handlers.rs
index 08a5c82a..9a7e4dab 100644
--- a/flowctl/crates/flowctl-daemon/src/handlers.rs
+++ b/flowctl/crates/flowctl-daemon/src/handlers.rs
@@ -15,7 +15,10 @@ use tracing::{debug, info, warn};
use flowctl_core::id::is_task_id;
use flowctl_core::state_machine::{Status, Transition};
+use flowctl_core::types::FLOW_DIR;
use flowctl_scheduler::TimestampedEvent;
+use flowctl_service::lifecycle::{DoneTaskRequest, StartTaskRequest};
+use flowctl_service::ServiceError;
use crate::lifecycle::DaemonRuntime;
@@ -57,7 +60,7 @@ pub type AppState = Arc<DaemonState>;
pub struct DaemonState {
pub runtime: DaemonRuntime,
pub event_bus: flowctl_scheduler::EventBus,
- pub db: std::sync::Mutex<rusqlite::Connection>,
+ pub db: Arc<std::sync::Mutex<rusqlite::Connection>>,
}
impl DaemonState {
@@ -184,56 +187,75 @@ pub async fn create_task_handler(
))
}
-/// POST /api/v1/tasks/start -- start a task (validates state transition).
+/// POST /api/v1/tasks/start -- start a task via service layer.
pub async fn start_task_handler(
State(state): State<AppState>,
Json(body): Json<TaskIdRequest>,
) -> Result<Json<serde_json::Value>, AppError> {
- let conn = state.db_lock()?;
- let repo = flowctl_db::TaskRepo::new(&conn);
-
- // Get current task to validate transition.
- let task = repo
- .get(&body.task_id)
- .map_err(|_| AppError::InvalidInput(format!("task not found: {}", body.task_id)))?;
-
- Transition::new(task.status, Status::InProgress).map_err(|e| {
- AppError::InvalidTransition(format!(
- "cannot start task '{}': {}",
- body.task_id, e
- ))
- })?;
+ let task_id = body.task_id.clone();
+ let db = state.db.clone();
- repo.update_status(&body.task_id, Status::InProgress)?;
- Ok(Json(
- serde_json::json!({"success": true, "id": body.task_id}),
- ))
+ let result = tokio::task::spawn_blocking(move || {
+ let conn = db
+ .lock()
+ .map_err(|_| ServiceError::ValidationError("DB lock poisoned".to_string()))?;
+ let flow_dir = std::env::current_dir()
+ .unwrap_or_else(|_| std::path::PathBuf::from("."))
+ .join(FLOW_DIR);
+ let req = StartTaskRequest {
+ task_id,
+ force: false,
+ actor: "daemon".to_string(),
+ };
+ flowctl_service::lifecycle::start_task(Some(&conn), &flow_dir, req)
+ })
+ .await
+ .map_err(|e| AppError::Internal(format!("spawn_blocking failed: {e}")))?;
+
+ match result {
+ Ok(resp) => Ok(Json(
+ serde_json::json!({"success": true, "id": resp.task_id}),
+ )),
+ Err(e) => Err(service_error_to_app_error(e)),
+ }
}
-/// POST /api/v1/tasks/done -- complete a task (validates state transition).
+/// POST /api/v1/tasks/done -- complete a task via service layer.
pub async fn done_task_handler(
State(state): State<AppState>,
- Json(body): Json<TaskIdRequest>,
+ Json(body): Json<TaskDoneRequest>,
) -> Result<Json<serde_json::Value>, AppError> {
- let conn = state.db_lock()?;
- let repo = flowctl_db::TaskRepo::new(&conn);
-
- // Get current task to validate transition.
- let task = repo
- .get(&body.task_id)
- .map_err(|_| AppError::InvalidInput(format!("task not found: {}", body.task_id)))?;
-
- Transition::new(task.status, Status::Done).map_err(|e| {
- AppError::InvalidTransition(format!(
- "cannot complete task '{}': {}",
- body.task_id, e
- ))
- })?;
+ let task_id = body.task_id.clone();
+ let summary = body.summary.clone();
+ let db = state.db.clone();
- repo.update_status(&body.task_id, Status::Done)?;
- Ok(Json(
- serde_json::json!({"success": true, "id": body.task_id}),
- ))
+ let result = tokio::task::spawn_blocking(move || {
+ let conn = db
+ .lock()
+ .map_err(|_| ServiceError::ValidationError("DB lock poisoned".to_string()))?;
+ let flow_dir = std::env::current_dir()
+ .unwrap_or_else(|_| std::path::PathBuf::from("."))
+ .join(FLOW_DIR);
+ let req = DoneTaskRequest {
+ task_id,
+ summary,
+ summary_file: None,
+ evidence_json: None,
+ evidence_inline: None,
+ force: true,
+ actor: "daemon".to_string(),
+ };
+ flowctl_service::lifecycle::done_task(Some(&conn), &flow_dir, req)
+ })
+ .await
+ .map_err(|e| AppError::Internal(format!("spawn_blocking failed: {e}")))?;
+
+ match result {
+ Ok(resp) => Ok(Json(
+ serde_json::json!({"success": true, "id": resp.task_id}),
+ )),
+ Err(e) => Err(service_error_to_app_error(e)),
+ }
}
#[derive(Debug, serde::Deserialize)]
@@ -250,6 +272,350 @@ pub struct TaskIdRequest {
pub task_id: String,
}
+#[derive(Debug, serde::Deserialize)]
+pub struct TaskDoneRequest {
+ pub task_id: String,
+ pub summary: Option<String>,
+}
+
+/// Query parameters for the tokens endpoint.
+#[derive(Debug, serde::Deserialize)]
+pub struct TokensQuery {
+ pub epic_id: Option<String>,
+ pub task_id: Option<String>,
+}
+
+/// GET /api/v1/tokens -- token usage, filtered by epic_id or task_id.
+pub async fn tokens_handler(
+ State(state): State<AppState>,
+ axum::extract::Query(params): axum::extract::Query<TokensQuery>,
+) -> Result<Json<serde_json::Value>, AppError> {
+ let conn = state.db_lock()?;
+ let log = flowctl_db::EventLog::new(&conn);
+
+ if let Some(ref task_id) = params.task_id {
+ let rows = log.tokens_by_task(task_id)?;
+ let value = serde_json::to_value(&rows)
+ .map_err(|e| AppError::Internal(format!("serialization error: {e}")))?;
+ Ok(Json(value))
+ } else if let Some(ref epic_id) = params.epic_id {
+ let summaries = log.tokens_by_epic(epic_id)?;
+ let value = serde_json::to_value(&summaries)
+ .map_err(|e| AppError::Internal(format!("serialization error: {e}")))?;
+ Ok(Json(value))
+ } else {
+ Err(AppError::InvalidInput(
+ "either epic_id or task_id query parameter is required".to_string(),
+ ))
+ }
+}
+
+/// Map service-layer errors to HTTP-appropriate AppErrors.
+fn service_error_to_app_error(e: ServiceError) -> AppError {
+ match e {
+ ServiceError::TaskNotFound(msg) => AppError::InvalidInput(msg),
+ ServiceError::EpicNotFound(msg) => AppError::InvalidInput(msg),
+ ServiceError::InvalidTransition(msg) => AppError::InvalidTransition(msg),
+ ServiceError::DependencyUnsatisfied { task, dependency } => {
+ AppError::InvalidInput(format!("task {task} blocked by {dependency}"))
+ }
+ ServiceError::CrossActorViolation(msg) => AppError::InvalidInput(msg),
+ ServiceError::ValidationError(msg) => AppError::InvalidInput(msg),
+ ServiceError::DbError(e) => AppError::Db(e.to_string()),
+ ServiceError::IoError(e) => AppError::Internal(e.to_string()),
+ ServiceError::CoreError(e) => AppError::Internal(e.to_string()),
+ }
+}
+
+/// Query parameters for the DAG endpoint.
+#[derive(Debug, serde::Deserialize)]
+pub struct DagQuery {
+ pub epic_id: String,
+}
+
+/// A node in the DAG visualization.
+#[derive(Debug, serde::Serialize)]
+pub struct DagNode {
+ pub id: String,
+ pub title: String,
+ pub status: String,
+ pub x: f64,
+ pub y: f64,
+}
+
+/// An edge in the DAG visualization.
+#[derive(Debug, serde::Serialize)]
+pub struct DagEdge {
+ pub from: String,
+ pub to: String,
+}
+
+/// Response for the DAG endpoint.
+#[derive(Debug, serde::Serialize)]
+pub struct DagResponse {
+ pub nodes: Vec<DagNode>,
+ pub edges: Vec<DagEdge>,
+}
+
+/// GET /api/v1/dag?epic_id=X -- returns DAG layout for visualization.
+///
+/// Builds the task dependency graph using petgraph, then computes a simplified
+/// Sugiyama layout (layer assignment via longest-path + node positioning) server-side.
+pub async fn dag_handler(
+ State(state): State<AppState>,
+ axum::extract::Query(params): axum::extract::Query<DagQuery>,
+) -> Result<Json<DagResponse>, AppError> {
+ let conn = state.db_lock()?;
+ let repo = flowctl_db::TaskRepo::new(&conn);
+ let tasks = repo.list_by_epic(¶ms.epic_id)?;
+
+ if tasks.is_empty() {
+ return Ok(Json(DagResponse {
+ nodes: vec![],
+ edges: vec![],
+ }));
+ }
+
+ // Build DAG from tasks.
+ let dag = flowctl_core::TaskDag::from_tasks(&tasks)
+ .map_err(|e| AppError::Internal(format!("DAG build error: {e}")))?;
+
+ // Map task IDs to their tasks for title/status lookup.
+ let task_map: std::collections::HashMap<&str, &flowctl_core::types::Task> =
+ tasks.iter().map(|t| (t.id.as_str(), t)).collect();
+
+ // Build node index → task ID mapping from topo order.
+ let task_ids = dag.task_ids();
+
+ // Compute layers via longest-path from roots.
+ let mut layer: std::collections::HashMap<String, usize> = std::collections::HashMap::new();
+ for id in &task_ids {
+ let deps = dag.dependencies(id);
+ if deps.is_empty() {
+ layer.insert(id.clone(), 0);
+ } else {
+ let max_dep_layer = deps
+ .iter()
+ .map(|d| layer.get(d.as_str()).copied().unwrap_or(0))
+ .max()
+ .unwrap_or(0);
+ layer.insert(id.clone(), max_dep_layer + 1);
+ }
+ }
+
+ // Group nodes by layer for horizontal positioning.
+ let max_layer = layer.values().copied().max().unwrap_or(0);
+ let mut layers: Vec<Vec<String>> = vec![vec![]; max_layer + 1];
+ for (id, &l) in &layer {
+ layers[l].push(id.clone());
+ }
+ // Sort within each layer for determinism.
+ for l in &mut layers {
+ l.sort();
+ }
+
+ // Compute x,y positions: layers go left-to-right, nodes within a layer are stacked vertically.
+ let node_spacing_x = 200.0;
+ let node_spacing_y = 100.0;
+
+ let mut nodes = Vec::with_capacity(tasks.len());
+ for (layer_idx, layer_nodes) in layers.iter().enumerate() {
+ let layer_height = layer_nodes.len() as f64 * node_spacing_y;
+ let y_offset = -layer_height / 2.0 + node_spacing_y / 2.0;
+ for (pos, id) in layer_nodes.iter().enumerate() {
+ let task = task_map.get(id.as_str());
+ nodes.push(DagNode {
+ id: id.clone(),
+ title: task.map(|t| t.title.clone()).unwrap_or_default(),
+ status: task
+ .map(|t| format!("{:?}", t.status).to_lowercase())
+ .unwrap_or_else(|| "todo".to_string()),
+ x: layer_idx as f64 * node_spacing_x,
+ y: y_offset + pos as f64 * node_spacing_y,
+ });
+ }
+ }
+
+ // Build edges from dependency relationships.
+ let mut edges = Vec::new();
+ for id in &task_ids {
+ for dep in dag.dependencies(id) {
+ edges.push(DagEdge {
+ from: dep,
+ to: id.clone(),
+ });
+ }
+ }
+
+ Ok(Json(DagResponse { nodes, edges }))
+}
+
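The layer computation above is a longest-path labeling over a topologically ordered node list. A standalone sketch (names are illustrative; the real handler works over `TaskDag` and task-ID keys):

```rust
use std::collections::HashMap;

/// Assign each node the length of its longest dependency chain.
/// `topo_order` must list every dependency before its dependents,
/// so each node's deps already have a layer when it is visited.
fn assign_layers(topo_order: &[&str], deps: &HashMap<&str, Vec<&str>>) -> HashMap<String, usize> {
    let mut layer: HashMap<String, usize> = HashMap::new();
    for id in topo_order {
        let l = deps
            .get(id)
            .map(|ds| {
                ds.iter()
                    .map(|d| layer.get(*d).copied().unwrap_or(0) + 1)
                    .max()
                    .unwrap_or(0)
            })
            .unwrap_or(0); // no deps entry: this is a root, layer 0
        layer.insert((*id).to_string(), l);
    }
    layer
}

fn main() {
    // Diamond: a -> b -> d and a -> c -> d; d sits two layers deep.
    let deps = HashMap::from([
        ("b", vec!["a"]),
        ("c", vec!["a"]),
        ("d", vec!["b", "c"]),
    ]);
    let layers = assign_layers(&["a", "b", "c", "d"], &deps);
    assert_eq!(layers["a"], 0);
    assert_eq!(layers["b"], 1);
    assert_eq!(layers["d"], 2);
}
```

Visiting in topological order is what lets the single pass work; the endpoint gets that order from `dag.task_ids()`.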
+// ── DAG mutation endpoint ───────────────────────────────────────
+
+/// Request body for POST /api/v1/dag/mutate.
+#[derive(Debug, serde::Deserialize)]
+pub struct DagMutateRequest {
+ pub action: String,
+ pub params: serde_json::Value,
+ /// Optimistic lock: client sends the `updated_at` timestamp it last saw.
+ pub version: String,
+}
+
+/// POST /api/v1/dag/mutate -- apply a DAG mutation with optimistic locking.
+///
+/// Supported actions:
+/// - `add_dep`: params `{task_id, depends_on}`
+/// - `remove_dep`: params `{task_id, depends_on}`
+/// - `retry_task`: params `{task_id}`
+/// - `skip_task`: params `{task_id}`
+///
+/// Returns 409 on version conflict, broadcasts refresh event on success.
+pub async fn dag_mutate_handler(
+ State(state): State<AppState>,
+ Json(body): Json<DagMutateRequest>,
+) -> Result<Json<serde_json::Value>, AppError> {
+ let conn = state.db_lock()?;
+ let repo = flowctl_db::TaskRepo::new(&conn);
+
+ match body.action.as_str() {
+ "add_dep" => {
+ let task_id = body.params.get("task_id")
+ .and_then(|v| v.as_str())
+ .ok_or_else(|| AppError::InvalidInput("missing params.task_id".into()))?;
+ let depends_on = body.params.get("depends_on")
+ .and_then(|v| v.as_str())
+ .ok_or_else(|| AppError::InvalidInput("missing params.depends_on".into()))?;
+
+ let task = repo.get(task_id)
+ .map_err(|_| AppError::InvalidInput(format!("task not found: {task_id}")))?;
+ let _dep = repo.get(depends_on)
+ .map_err(|_| AppError::InvalidInput(format!("dependency task not found: {depends_on}")))?;
+
+ check_version(&task, &body.version)?;
+
+ // Cycle check: build hypothetical task list with new dep.
+ let epic_tasks = repo.list_by_epic(&task.epic)?;
+ let test_tasks: Vec<flowctl_core::types::Task> = epic_tasks.into_iter().map(|mut t| {
+ if t.id == task_id && !t.depends_on.contains(&depends_on.to_string()) {
+ t.depends_on.push(depends_on.to_string());
+ }
+ t
+ }).collect();
+
+ if let Err(e) = flowctl_core::TaskDag::from_tasks(&test_tasks) {
+ return Err(AppError::InvalidInput(format!("would create cycle: {e}")));
+ }
+
+ conn.execute(
+ "INSERT OR IGNORE INTO task_deps (task_id, depends_on) VALUES (?1, ?2)",
+ rusqlite::params![task_id, depends_on],
+ ).map_err(|e| AppError::Db(e.to_string()))?;
+ touch_updated_at(&conn, task_id)?;
+
+ state.event_bus.emit(flowctl_scheduler::FlowEvent::TaskReady {
+ task_id: task_id.to_string(),
+ epic_id: task.epic.clone(),
+ });
+
+ Ok(Json(serde_json::json!({"success": true, "action": "add_dep"})))
+ }
+
+ "remove_dep" => {
+ let task_id = body.params.get("task_id")
+ .and_then(|v| v.as_str())
+ .ok_or_else(|| AppError::InvalidInput("missing params.task_id".into()))?;
+ let depends_on = body.params.get("depends_on")
+ .and_then(|v| v.as_str())
+ .ok_or_else(|| AppError::InvalidInput("missing params.depends_on".into()))?;
+
+ let task = repo.get(task_id)
+ .map_err(|_| AppError::InvalidInput(format!("task not found: {task_id}")))?;
+ check_version(&task, &body.version)?;
+
+ conn.execute(
+ "DELETE FROM task_deps WHERE task_id = ?1 AND depends_on = ?2",
+ rusqlite::params![task_id, depends_on],
+ ).map_err(|e| AppError::Db(e.to_string()))?;
+ touch_updated_at(&conn, task_id)?;
+
+ state.event_bus.emit(flowctl_scheduler::FlowEvent::TaskReady {
+ task_id: task_id.to_string(),
+ epic_id: task.epic.clone(),
+ });
+
+ Ok(Json(serde_json::json!({"success": true, "action": "remove_dep"})))
+ }
+
+ "retry_task" => {
+ let task_id = body.params.get("task_id")
+ .and_then(|v| v.as_str())
+ .ok_or_else(|| AppError::InvalidInput("missing params.task_id".into()))?;
+
+ let task = repo.get(task_id)
+ .map_err(|_| AppError::InvalidInput(format!("task not found: {task_id}")))?;
+ check_version(&task, &body.version)?;
+
+ Transition::new(task.status, Status::Todo).map_err(|e| {
+ AppError::InvalidTransition(format!("cannot retry task '{}': {}", task_id, e))
+ })?;
+
+ repo.update_status(task_id, Status::Todo)?;
+
+ state.event_bus.emit(flowctl_scheduler::FlowEvent::TaskReady {
+ task_id: task_id.to_string(),
+ epic_id: task.epic.clone(),
+ });
+
+ Ok(Json(serde_json::json!({"success": true, "action": "retry_task"})))
+ }
+
+ "skip_task" => {
+ let task_id = body.params.get("task_id")
+ .and_then(|v| v.as_str())
+ .ok_or_else(|| AppError::InvalidInput("missing params.task_id".into()))?;
+
+ let task = repo.get(task_id)
+ .map_err(|_| AppError::InvalidInput(format!("task not found: {task_id}")))?;
+ check_version(&task, &body.version)?;
+
+ Transition::new(task.status, Status::Skipped).map_err(|e| {
+ AppError::InvalidTransition(format!("cannot skip task '{}': {}", task_id, e))
+ })?;
+
+ repo.update_status(task_id, Status::Skipped)?;
+
+ state.event_bus.emit(flowctl_scheduler::FlowEvent::TaskReady {
+ task_id: task_id.to_string(),
+ epic_id: task.epic.clone(),
+ });
+
+ Ok(Json(serde_json::json!({"success": true, "action": "skip_task"})))
+ }
+
+ other => Err(AppError::InvalidInput(format!("unknown action: {other}"))),
+ }
+}
+
+/// Check optimistic lock version against the task's `updated_at`.
+fn check_version(task: &flowctl_core::types::Task, version: &str) -> Result<(), AppError> {
+ let task_version = task.updated_at.to_rfc3339();
+ if task_version != version {
+ return Err(AppError::InvalidTransition(format!(
+ "version conflict: expected {version}, got {task_version}"
+ )));
+ }
+ Ok(())
+}
+
+/// Update the `updated_at` timestamp for a task.
+fn touch_updated_at(conn: &rusqlite::Connection, task_id: &str) -> Result<(), AppError> {
+ conn.execute(
+ "UPDATE tasks SET updated_at = ?1 WHERE id = ?2",
+ rusqlite::params![chrono::Utc::now().to_rfc3339(), task_id],
+ ).map_err(|e| AppError::Db(e.to_string()))?;
+ Ok(())
+}
+
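The optimistic-locking scheme in `check_version` reduces to a string comparison of `updated_at` timestamps: the client echoes back the version it last read, and any mismatch means another writer touched the row. A minimal sketch, with a stand-in error type for `AppError`:

```rust
/// Stand-in for the daemon's real conflict error (illustrative only).
#[derive(Debug, PartialEq)]
struct VersionConflict {
    expected: String,
    actual: String,
}

/// Reject the mutation unless the client saw the latest `updated_at`.
fn check_version(stored_updated_at: &str, client_version: &str) -> Result<(), VersionConflict> {
    if stored_updated_at != client_version {
        return Err(VersionConflict {
            expected: client_version.to_string(),
            actual: stored_updated_at.to_string(),
        });
    }
    Ok(())
}

fn main() {
    // Client read the row at this timestamp and nothing changed since: OK.
    assert!(check_version("2025-01-01T00:00:00+00:00", "2025-01-01T00:00:00+00:00").is_ok());
    // A concurrent mutation bumped updated_at, so the stale client is rejected
    // (the handler maps this to HTTP 409).
    assert!(check_version("2025-01-02T09:30:00+00:00", "2025-01-01T00:00:00+00:00").is_err());
}
```

Because every successful mutation calls `touch_updated_at`, the next read hands the client a fresh version, and retrying with it succeeds.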
/// GET /api/v1/events -- WebSocket upgrade for live event streaming.
pub async fn events_ws_handler(
ws: WebSocketUpgrade,
diff --git a/flowctl/crates/flowctl-daemon/src/lib.rs b/flowctl/crates/flowctl-daemon/src/lib.rs
index fc4bda79..eabfcbe2 100644
--- a/flowctl/crates/flowctl-daemon/src/lib.rs
+++ b/flowctl/crates/flowctl-daemon/src/lib.rs
@@ -10,4 +10,6 @@ pub mod handlers;
#[cfg(feature = "daemon")]
pub mod lifecycle;
#[cfg(feature = "daemon")]
+pub mod notifications;
+#[cfg(feature = "daemon")]
pub mod server;
diff --git a/flowctl/crates/flowctl-daemon/src/notifications.rs b/flowctl/crates/flowctl-daemon/src/notifications.rs
new file mode 100644
index 00000000..18d0e928
--- /dev/null
+++ b/flowctl/crates/flowctl-daemon/src/notifications.rs
@@ -0,0 +1,334 @@
+//! Event-driven notifications: sound alerts and webhook delivery.
+//!
+//! Subscribes to the daemon's event bus and reacts to key events:
+//! - `EpicCompleted` → play a sound (macOS only, configurable)
+//! - `TaskCompleted` / `TaskFailed` → POST to webhook URL (fire-and-forget)
+//!
+//! Feature-gated behind `#[cfg(feature = "daemon")]`.
+
+use std::path::Path;
+
+use tokio::sync::broadcast;
+use tracing::{debug, info, warn};
+
+#[cfg(feature = "webhook")]
+use tracing::error;
+
+use flowctl_scheduler::{FlowEvent, TimestampedEvent};
+
+/// Notification configuration read from `.flow/config.json`.
+#[derive(Debug, Clone)]
+pub struct NotificationConfig {
+ /// Play a system sound on epic completion (default: true).
+ pub sound: bool,
+ /// Optional webhook URL for task events.
+ pub webhook_url: Option<String>,
+}
+
+impl Default for NotificationConfig {
+ fn default() -> Self {
+ Self {
+ sound: true,
+ webhook_url: None,
+ }
+ }
+}
+
+/// Load notification config from `.flow/config.json`.
+///
+/// Reads `notifications.sound` (bool, default true) and
+/// `notifications.webhook_url` (string, optional).
+pub fn load_config(flow_dir: &Path) -> NotificationConfig {
+ let config_path = flow_dir.join("config.json");
+ let content = match std::fs::read_to_string(&config_path) {
+ Ok(c) => c,
+ Err(_) => return NotificationConfig::default(),
+ };
+ let parsed: serde_json::Value = match serde_json::from_str(&content) {
+ Ok(v) => v,
+ Err(_) => return NotificationConfig::default(),
+ };
+
+ let notifications = parsed.get("notifications");
+
+ let sound = notifications
+ .and_then(|n| n.get("sound"))
+ .and_then(|v| v.as_bool())
+ .unwrap_or(true);
+
+ let webhook_url = notifications
+ .and_then(|n| n.get("webhook_url"))
+ .and_then(|v| v.as_str())
+ .map(|s| s.to_string());
+
+ NotificationConfig { sound, webhook_url }
+}
+
+/// Spawn the notification listener task.
+///
+/// Subscribes to the event bus broadcast channel and handles sound/webhook
+/// notifications until the receiver is closed or lagged beyond recovery.
+pub fn spawn_listener(
+ tracker: &tokio_util::task::TaskTracker,
+ rx: broadcast::Receiver<TimestampedEvent>,
+ config: NotificationConfig,
+) {
+ tracker.spawn(notification_loop(rx, config));
+}
+
+async fn notification_loop(
+ mut rx: broadcast::Receiver<TimestampedEvent>,
+ config: NotificationConfig,
+) {
+ info!("notification listener started");
+
+ loop {
+ match rx.recv().await {
+ Ok(stamped) => handle_event(&stamped.event, &config).await,
+ Err(broadcast::error::RecvError::Lagged(n)) => {
+ warn!("notification listener lagged, skipped {n} events");
+ }
+ Err(broadcast::error::RecvError::Closed) => {
+ debug!("notification listener shutting down (channel closed)");
+ break;
+ }
+ }
+ }
+}
+
+async fn handle_event(event: &FlowEvent, config: &NotificationConfig) {
+ match event {
+ FlowEvent::EpicCompleted { epic_id } => {
+ info!("epic completed: {epic_id}");
+ if config.sound {
+ play_completion_sound();
+ }
+ if let Some(url) = &config.webhook_url {
+ send_webhook(url, "epic_completed", epic_id, epic_id, "done").await;
+ }
+ }
+ FlowEvent::TaskCompleted { task_id, epic_id } => {
+ if let Some(url) = &config.webhook_url {
+ send_webhook(url, "task_completed", task_id, epic_id, "done").await;
+ }
+ }
+ FlowEvent::TaskFailed {
+ task_id, epic_id, ..
+ } => {
+ if let Some(url) = &config.webhook_url {
+ send_webhook(url, "task_failed", task_id, epic_id, "failed").await;
+ }
+ }
+ _ => {}
+ }
+}
+
+/// Play a system sound on macOS. Silently no-ops on other platforms.
+fn play_completion_sound() {
+ #[cfg(target_os = "macos")]
+ {
+ use std::process::Command;
+ let sound_path = "/System/Library/Sounds/Glass.aiff";
+ match Command::new("afplay").arg(sound_path).spawn() {
+ Ok(_child) => {
+ debug!("playing completion sound: {sound_path}");
+ }
+ Err(e) => {
+ warn!("failed to play sound: {e}");
+ }
+ }
+ }
+ #[cfg(not(target_os = "macos"))]
+ {
+ debug!("sound notifications not supported on this platform");
+ }
+}
+
+/// Webhook payload sent on task/epic events.
+#[cfg(feature = "webhook")]
+#[derive(Debug, serde::Serialize)]
+struct WebhookPayload {
+ event_type: String,
+ task_id: String,
+ epic_id: String,
+ status: String,
+ timestamp: String,
+}
+
+/// POST a webhook payload. Fire-and-forget: logs errors but never fails.
+#[cfg(feature = "webhook")]
+async fn send_webhook(url: &str, event_type: &str, task_id: &str, epic_id: &str, status: &str) {
+ let payload = WebhookPayload {
+ event_type: event_type.to_string(),
+ task_id: task_id.to_string(),
+ epic_id: epic_id.to_string(),
+ status: status.to_string(),
+ timestamp: chrono::Utc::now().to_rfc3339(),
+ };
+
+ debug!("sending webhook to {url}: {event_type} for {task_id}");
+
+ let client = reqwest::Client::builder()
+ .timeout(std::time::Duration::from_secs(10))
+ .build();
+
+ let client = match client {
+ Ok(c) => c,
+ Err(e) => {
+ error!("failed to create HTTP client for webhook: {e}");
+ return;
+ }
+ };
+
+ // Fire-and-forget: spawn so we don't block the event loop
+ let url_owned = url.to_string();
+ tokio::spawn(async move {
+ match client.post(&url_owned).json(&payload).send().await {
+ Ok(resp) => {
+ debug!("webhook response: {} {}", resp.status(), url_owned);
+ }
+ Err(e) => {
+ error!("webhook POST to {url_owned} failed: {e}");
+ }
+ }
+ });
+}
+
+/// Stub when webhook feature is not enabled.
+#[cfg(not(feature = "webhook"))]
+async fn send_webhook(url: &str, event_type: &str, task_id: &str, _epic_id: &str, _status: &str) {
+ warn!(
+ "webhook feature not enabled, cannot send {event_type} for {task_id} to {url}. \
+ Rebuild with --features webhook to enable."
+ );
+}
+
+#[cfg(test)]
+mod tests {
+ use super::*;
+ use chrono::Utc;
+ use tempfile::TempDir;
+
+ #[test]
+ fn default_config() {
+ let config = NotificationConfig::default();
+ assert!(config.sound);
+ assert!(config.webhook_url.is_none());
+ }
+
+ #[test]
+ fn load_config_missing_file() {
+ let tmp = TempDir::new().unwrap();
+ let config = load_config(tmp.path());
+ assert!(config.sound);
+ assert!(config.webhook_url.is_none());
+ }
+
+ #[test]
+ fn load_config_with_sound_disabled() {
+ let tmp = TempDir::new().unwrap();
+ let config_path = tmp.path().join("config.json");
+ std::fs::write(
+ &config_path,
+ r#"{"notifications": {"sound": false}}"#,
+ )
+ .unwrap();
+ let config = load_config(tmp.path());
+ assert!(!config.sound);
+ assert!(config.webhook_url.is_none());
+ }
+
+ #[test]
+ fn load_config_with_webhook() {
+ let tmp = TempDir::new().unwrap();
+ let config_path = tmp.path().join("config.json");
+ std::fs::write(
+ &config_path,
+ r#"{"notifications": {"sound": true, "webhook_url": "https://example.com/hook"}}"#,
+ )
+ .unwrap();
+ let config = load_config(tmp.path());
+ assert!(config.sound);
+ assert_eq!(
+ config.webhook_url.as_deref(),
+ Some("https://example.com/hook")
+ );
+ }
+
+ #[test]
+ fn load_config_no_notifications_key() {
+ let tmp = TempDir::new().unwrap();
+ let config_path = tmp.path().join("config.json");
+ std::fs::write(&config_path, r#"{"memory": {"enabled": true}}"#).unwrap();
+ let config = load_config(tmp.path());
+ assert!(config.sound);
+ assert!(config.webhook_url.is_none());
+ }
+
+ #[tokio::test]
+ async fn notification_loop_handles_epic_completed() {
+ let (tx, rx) = broadcast::channel(16);
+ let config = NotificationConfig {
+ sound: false, // don't actually play sound in test
+ webhook_url: None,
+ };
+
+ // Spawn listener
+ let handle = tokio::spawn(notification_loop(rx, config));
+
+ // Send an event
+ tx.send(TimestampedEvent {
+ timestamp: Utc::now(),
+ event: FlowEvent::EpicCompleted {
+ epic_id: "fn-1-test".into(),
+ },
+ })
+ .unwrap();
+
+ // Drop sender to close the channel
+ drop(tx);
+
+ // Listener should exit cleanly
+ tokio::time::timeout(std::time::Duration::from_secs(2), handle)
+ .await
+ .expect("timeout")
+ .expect("task panicked");
+ }
+
+ #[tokio::test]
+ async fn notification_loop_handles_task_events() {
+ let (tx, rx) = broadcast::channel(16);
+ let config = NotificationConfig {
+ sound: false,
+ webhook_url: None,
+ };
+
+ let handle = tokio::spawn(notification_loop(rx, config));
+
+ tx.send(TimestampedEvent {
+ timestamp: Utc::now(),
+ event: FlowEvent::TaskCompleted {
+ task_id: "fn-1.1".into(),
+ epic_id: "fn-1".into(),
+ },
+ })
+ .unwrap();
+
+ tx.send(TimestampedEvent {
+ timestamp: Utc::now(),
+ event: FlowEvent::TaskFailed {
+ task_id: "fn-1.2".into(),
+ epic_id: "fn-1".into(),
+ error: Some("boom".into()),
+ },
+ })
+ .unwrap();
+
+ drop(tx);
+
+ tokio::time::timeout(std::time::Duration::from_secs(2), handle)
+ .await
+ .expect("timeout")
+ .expect("task panicked");
+ }
+}
diff --git a/flowctl/crates/flowctl-daemon/src/server.rs b/flowctl/crates/flowctl-daemon/src/server.rs
index 594b6b33..f6f2e184 100644
--- a/flowctl/crates/flowctl-daemon/src/server.rs
+++ b/flowctl/crates/flowctl-daemon/src/server.rs
@@ -30,7 +30,7 @@ pub fn create_state(runtime: DaemonRuntime, event_bus: flowctl_scheduler::EventB
let state = Arc::new(DaemonState {
runtime,
event_bus,
- db: std::sync::Mutex::new(conn),
+ db: Arc::new(std::sync::Mutex::new(conn)),
});
Ok((state, cancel))
}
@@ -49,6 +49,8 @@ pub fn build_router(state: AppState) -> axum::Router {
.route("/api/v1/status", get(handlers::status_handler))
.route("/api/v1/epics", get(handlers::epics_handler))
.route("/api/v1/tasks", get(handlers::tasks_handler))
+ .route("/api/v1/dag", get(handlers::dag_handler))
+ .route("/api/v1/dag/mutate", post(handlers::dag_mutate_handler))
.route("/api/v1/tasks/create", post(handlers::create_task_handler))
.route("/api/v1/tasks/start", post(handlers::start_task_handler))
.route("/api/v1/tasks/done", post(handlers::done_task_handler))
diff --git a/flowctl/crates/flowctl-db/src/events.rs b/flowctl/crates/flowctl-db/src/events.rs
index 62b33416..89ea1f21 100644
--- a/flowctl/crates/flowctl-db/src/events.rs
+++ b/flowctl/crates/flowctl-db/src/events.rs
@@ -18,6 +18,33 @@ pub struct TokenRecord<'a> {
 pub estimated_cost: Option<f64>,
}
+/// A row from the token_usage table.
+#[derive(Debug, Clone, serde::Serialize)]
+pub struct TokenUsageRow {
+ pub id: i64,
+ pub timestamp: String,
+ pub epic_id: String,
+ pub task_id: Option<String>,
+ pub phase: Option<String>,
+ pub model: Option<String>,
+ pub input_tokens: i64,
+ pub output_tokens: i64,
+ pub cache_read: i64,
+ pub cache_write: i64,
+ pub estimated_cost: Option<f64>,
+}
+
+/// Aggregated token usage for a single task.
+#[derive(Debug, Clone, serde::Serialize)]
+pub struct TaskTokenSummary {
+ pub task_id: String,
+ pub input_tokens: i64,
+ pub output_tokens: i64,
+ pub cache_read: i64,
+ pub cache_write: i64,
+ pub estimated_cost: f64,
+}
+
/// Extended event queries beyond the basic EventRepo.
pub struct EventLog<'a> {
conn: &'a Connection,
@@ -108,6 +135,56 @@ impl<'a> EventLog<'a> {
Ok(self.conn.last_insert_rowid())
}
+ /// Get all token records for a specific task.
+ pub fn tokens_by_task(&self, task_id: &str) -> Result<Vec<TokenUsageRow>, DbError> {
+ let mut stmt = self.conn.prepare(
+ "SELECT id, timestamp, epic_id, task_id, phase, model, input_tokens, output_tokens, cache_read, cache_write, estimated_cost
+ FROM token_usage WHERE task_id = ?1 ORDER BY id ASC",
+ )?;
+ let rows = stmt
+ .query_map(params![task_id], |row| {
+ Ok(TokenUsageRow {
+ id: row.get(0)?,
+ timestamp: row.get(1)?,
+ epic_id: row.get(2)?,
+ task_id: row.get(3)?,
+ phase: row.get(4)?,
+ model: row.get(5)?,
+ input_tokens: row.get(6)?,
+ output_tokens: row.get(7)?,
+ cache_read: row.get(8)?,
+ cache_write: row.get(9)?,
+ estimated_cost: row.get(10)?,
+ })
+ })?
+ .collect::<Result<Vec<_>, _>>()?;
+ Ok(rows)
+ }
+
+ /// Get aggregated token usage per task for an epic.
+ pub fn tokens_by_epic(&self, epic_id: &str) -> Result<Vec<TaskTokenSummary>, DbError> {
+ let mut stmt = self.conn.prepare(
+ "SELECT task_id, COALESCE(SUM(input_tokens), 0), COALESCE(SUM(output_tokens), 0),
+ COALESCE(SUM(cache_read), 0), COALESCE(SUM(cache_write), 0),
+ COALESCE(SUM(estimated_cost), 0.0)
+ FROM token_usage WHERE epic_id = ?1 AND task_id IS NOT NULL
+ GROUP BY task_id ORDER BY task_id",
+ )?;
+ let rows = stmt
+ .query_map(params![epic_id], |row| {
+ Ok(TaskTokenSummary {
+ task_id: row.get(0)?,
+ input_tokens: row.get(1)?,
+ output_tokens: row.get(2)?,
+ cache_read: row.get(3)?,
+ cache_write: row.get(4)?,
+ estimated_cost: row.get(5)?,
+ })
+ })?
+ .collect::<Result<Vec<_>, _>>()?;
+ Ok(rows)
+ }
+
/// Count events by type for an epic.
 pub fn count_by_type(&self, epic_id: &str) -> Result<Vec<(String, i64)>, DbError> {
let mut stmt = self.conn.prepare(
@@ -190,4 +267,108 @@ mod tests {
assert_eq!(counts[0], ("task_started".to_string(), 2));
assert_eq!(counts[1], ("task_completed".to_string(), 1));
}
+
+ #[test]
+ fn test_tokens_by_task() {
+ let conn = setup();
+ let log = EventLog::new(&conn);
+ log.record_tokens(&TokenRecord {
+ epic_id: "fn-1-test",
+ task_id: Some("fn-1-test.1"),
+ phase: Some("impl"),
+ model: Some("claude-sonnet-4-20250514"),
+ input_tokens: 1000,
+ output_tokens: 500,
+ cache_read: 200,
+ cache_write: 100,
+ estimated_cost: Some(0.015),
+ }).unwrap();
+ log.record_tokens(&TokenRecord {
+ epic_id: "fn-1-test",
+ task_id: Some("fn-1-test.1"),
+ phase: Some("review"),
+ model: Some("claude-sonnet-4-20250514"),
+ input_tokens: 800,
+ output_tokens: 300,
+ cache_read: 0,
+ cache_write: 0,
+ estimated_cost: Some(0.010),
+ }).unwrap();
+ log.record_tokens(&TokenRecord {
+ epic_id: "fn-1-test",
+ task_id: Some("fn-1-test.2"),
+ phase: Some("impl"),
+ model: None,
+ input_tokens: 500,
+ output_tokens: 200,
+ cache_read: 0,
+ cache_write: 0,
+ estimated_cost: None,
+ }).unwrap();
+
+ let rows = log.tokens_by_task("fn-1-test.1").unwrap();
+ assert_eq!(rows.len(), 2);
+ assert_eq!(rows[0].input_tokens, 1000);
+ assert_eq!(rows[1].phase.as_deref(), Some("review"));
+
+ let rows2 = log.tokens_by_task("fn-1-test.2").unwrap();
+ assert_eq!(rows2.len(), 1);
+
+ let empty = log.tokens_by_task("nonexistent").unwrap();
+ assert!(empty.is_empty());
+ }
+
+ #[test]
+ fn test_tokens_by_epic() {
+ let conn = setup();
+ let log = EventLog::new(&conn);
+ log.record_tokens(&TokenRecord {
+ epic_id: "fn-1-test",
+ task_id: Some("fn-1-test.1"),
+ phase: Some("impl"),
+ model: None,
+ input_tokens: 1000,
+ output_tokens: 500,
+ cache_read: 100,
+ cache_write: 50,
+ estimated_cost: Some(0.015),
+ }).unwrap();
+ log.record_tokens(&TokenRecord {
+ epic_id: "fn-1-test",
+ task_id: Some("fn-1-test.1"),
+ phase: Some("review"),
+ model: None,
+ input_tokens: 800,
+ output_tokens: 300,
+ cache_read: 0,
+ cache_write: 0,
+ estimated_cost: Some(0.010),
+ }).unwrap();
+ log.record_tokens(&TokenRecord {
+ epic_id: "fn-1-test",
+ task_id: Some("fn-1-test.2"),
+ phase: Some("impl"),
+ model: None,
+ input_tokens: 500,
+ output_tokens: 200,
+ cache_read: 0,
+ cache_write: 0,
+ estimated_cost: Some(0.005),
+ }).unwrap();
+
+ let summaries = log.tokens_by_epic("fn-1-test").unwrap();
+ assert_eq!(summaries.len(), 2);
+
+ let t1 = &summaries[0];
+ assert_eq!(t1.task_id, "fn-1-test.1");
+ assert_eq!(t1.input_tokens, 1800);
+ assert_eq!(t1.output_tokens, 800);
+ assert_eq!(t1.cache_read, 100);
+ assert_eq!(t1.cache_write, 50);
+ assert!((t1.estimated_cost - 0.025).abs() < 0.001);
+
+ let t2 = &summaries[1];
+ assert_eq!(t2.task_id, "fn-1-test.2");
+ assert_eq!(t2.input_tokens, 500);
+ }
}
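For illustration, the `GROUP BY task_id` aggregation that `tokens_by_epic` pushes into SQL can be expressed as a plain fold. The tuple shape below is a simplification of the real row (only input tokens and estimated cost are kept); records without a `task_id` are dropped, mirroring `AND task_id IS NOT NULL`:

```rust
use std::collections::BTreeMap;

/// Sum (input_tokens, estimated_cost) per task, skipping epic-level records.
fn summarize(records: &[(Option<&str>, i64, f64)]) -> BTreeMap<String, (i64, f64)> {
    let mut out: BTreeMap<String, (i64, f64)> = BTreeMap::new();
    for (task_id, input, cost) in records {
        if let Some(id) = task_id {
            let entry = out.entry((*id).to_string()).or_insert((0, 0.0));
            entry.0 += input;
            entry.1 += cost;
        }
    }
    out // BTreeMap gives the same deterministic ordering as ORDER BY task_id
}

fn main() {
    let records = [
        (Some("fn-1.1"), 1000, 0.015),
        (Some("fn-1.1"), 800, 0.010),
        (Some("fn-1.2"), 500, 0.005),
        (None, 999, 9.9), // epic-level record: excluded from per-task sums
    ];
    let sums = summarize(&records);
    assert_eq!(sums["fn-1.1"].0, 1800);
    assert!((sums["fn-1.1"].1 - 0.025).abs() < 1e-9);
    assert_eq!(sums["fn-1.2"].0, 500);
}
```

The SQL version additionally wraps each `SUM` in `COALESCE(..., 0)` so a task whose rows are all `NULL` in a column still yields zero rather than `NULL`.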
diff --git a/flowctl/crates/flowctl-db/src/lib.rs b/flowctl/crates/flowctl-db/src/lib.rs
index ef243d5a..4eebee25 100644
--- a/flowctl/crates/flowctl-db/src/lib.rs
+++ b/flowctl/crates/flowctl-db/src/lib.rs
@@ -28,7 +28,7 @@ pub use pool::{cleanup, open, open_memory, resolve_db_path, resolve_state_dir};
pub use indexer::{reindex, ReindexResult};
pub use migration::{migrate_runtime_state, needs_reindex, has_legacy_state, MigrationResult};
pub use repo::{EpicRepo, EvidenceRepo, EventRepo, EventRow, FileLockRepo, PhaseProgressRepo, RuntimeRepo, TaskRepo};
-pub use events::{EventLog, TokenRecord};
+pub use events::{EventLog, TaskTokenSummary, TokenRecord, TokenUsageRow};
pub use metrics::StatsQuery;
pub use flowctl_core;
diff --git a/flowctl/crates/flowctl-db/src/metrics.rs b/flowctl/crates/flowctl-db/src/metrics.rs
index 06f96aa0..df00a35b 100644
--- a/flowctl/crates/flowctl-db/src/metrics.rs
+++ b/flowctl/crates/flowctl-db/src/metrics.rs
@@ -78,6 +78,15 @@ pub struct DoraMetrics {
 pub time_to_restore_hours: Option<f64>,
}
+/// Per-domain historical duration statistics for adaptive scheduling.
+#[derive(Debug, Clone, Serialize)]
+pub struct DomainDurationStats {
+ pub domain: String,
+ pub completed_count: i64,
+ pub avg_duration_secs: f64,
+ pub stddev_duration_secs: f64,
+}
+
/// Stats query engine.
pub struct StatsQuery<'a> {
conn: &'a Connection,
@@ -313,6 +322,43 @@ impl<'a> StatsQuery<'a> {
})
}
+ /// Per-domain duration statistics for completed tasks.
+ ///
+ /// Returns domains that have completed tasks with recorded durations,
+ /// including count, average, and standard deviation. Used by the adaptive
+ /// scheduler to size per-domain parallelism.
+ pub fn domain_duration_stats(&self) -> Result<Vec<DomainDurationStats>, DbError> {
+ // SQLite lacks SQRT, so we compute variance components in SQL and
+ // take the square root in Rust.
+ let mut stmt = self.conn.prepare(
+ "SELECT t.domain,
+ COUNT(*) AS cnt,
+ AVG(rs.duration_secs) AS avg_dur,
+ AVG(rs.duration_secs * rs.duration_secs) AS avg_sq
+ FROM tasks t
+ JOIN runtime_state rs ON rs.task_id = t.id
+ WHERE t.status = 'done'
+ AND rs.duration_secs IS NOT NULL
+ GROUP BY t.domain",
+ )?;
+
+ let rows = stmt
+ .query_map([], |row| {
+ let avg: f64 = row.get(2)?;
+ let avg_sq: f64 = row.get(3)?;
+ // variance = E[X^2] - (E[X])^2, clamp to 0 for floating point noise
+ let variance = (avg_sq - avg * avg).max(0.0);
+ Ok(DomainDurationStats {
+ domain: row.get(0)?,
+ completed_count: row.get(1)?,
+ avg_duration_secs: avg,
+ stddev_duration_secs: variance.sqrt(),
+ })
+ })?
+ .collect::<Result<Vec<_>, _>>()?;
+ Ok(rows)
+ }
+
/// Generate monthly rollup for any months that have daily_rollup data but no monthly entry.
 pub fn generate_monthly_rollups(&self) -> Result<usize, DbError> {
let rows = self.conn.execute(
@@ -475,6 +521,42 @@ mod tests {
assert_eq!(dora.change_failure_rate, 0.0);
}
+ #[test]
+ fn test_domain_duration_stats() {
+ let conn = setup();
+ // Task 1 is already 'done', add duration
+ conn.execute(
+ "INSERT INTO runtime_state (task_id, duration_secs) VALUES ('fn-1-test.1', 120)",
+ [],
+ ).unwrap();
+
+ // Add more done tasks in the same domain to cross threshold
+ for i in 3..=7 {
+ conn.execute(
+ &format!(
+ "INSERT INTO tasks (id, epic_id, title, status, domain, file_path, created_at, updated_at)
+ VALUES ('fn-1-test.{i}', 'fn-1-test', 'Task {i}', 'done', 'general', 't{i}.md', '2025-01-01T00:00:00Z', '2025-01-01T00:00:00Z')"
+ ),
+ [],
+ ).unwrap();
+ conn.execute(
+ &format!(
+ "INSERT INTO runtime_state (task_id, duration_secs) VALUES ('fn-1-test.{i}', {})",
+ 100 + i * 10
+ ),
+ [],
+ ).unwrap();
+ }
+
+ let stats = StatsQuery::new(&conn);
+ let domain_stats = stats.domain_duration_stats().unwrap();
+ assert!(!domain_stats.is_empty());
+
+ let general = domain_stats.iter().find(|d| d.domain == "general").unwrap();
+ assert_eq!(general.completed_count, 6); // task 1 + tasks 3-7
+ assert!(general.avg_duration_secs > 0.0);
+ }
+
#[test]
fn test_generate_monthly_rollups() {
let conn = setup();
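The moments trick in `domain_duration_stats` — recovering the standard deviation from `AVG(x)` and `AVG(x*x)` because SQLite has no `SQRT` or `STDDEV` — can be checked in isolation:

```rust
/// variance = E[X^2] - (E[X])^2, clamped at 0 to absorb floating-point
/// noise before taking the square root (a tiny negative variance would
/// otherwise produce NaN).
fn stddev_from_moments(avg: f64, avg_sq: f64) -> f64 {
    (avg_sq - avg * avg).max(0.0).sqrt()
}

fn main() {
    // Durations 100 and 120: mean 110, mean-square (10000 + 14400) / 2 = 12200.
    // Variance 12200 - 12100 = 100, so stddev is 10.
    let sd = stddev_from_moments(110.0, 12200.0);
    assert!((sd - 10.0).abs() < 1e-9);

    // A constant sample: variance is exactly 0, and the clamp keeps any
    // numerical undershoot from turning it negative.
    assert_eq!(stddev_from_moments(3.0, 9.0), 0.0);
}
```

This is the population standard deviation; with small `completed_count` it understates spread slightly versus the sample estimator, which is acceptable for a scheduling heuristic.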
diff --git a/flowctl/crates/flowctl-scheduler/src/lib.rs b/flowctl/crates/flowctl-scheduler/src/lib.rs
index 3fc43883..51ac4dd0 100644
--- a/flowctl/crates/flowctl-scheduler/src/lib.rs
+++ b/flowctl/crates/flowctl-scheduler/src/lib.rs
@@ -14,6 +14,6 @@ pub use flowctl_core;
// Re-export key types at crate root.
pub use circuit_breaker::CircuitBreaker;
pub use event_bus::{EventBus, FlowEvent, TimestampedEvent};
-pub use scheduler::{Scheduler, SchedulerConfig, TaskResult};
+pub use scheduler::{AdaptiveConfig, DomainPerf, Scheduler, SchedulerConfig, TaskResult};
pub use watcher::FlowChange;
pub use watchdog::{HeartbeatTable, ZombieAction};
diff --git a/flowctl/crates/flowctl-scheduler/src/scheduler.rs b/flowctl/crates/flowctl-scheduler/src/scheduler.rs
index 42f95cd2..65a32670 100644
--- a/flowctl/crates/flowctl-scheduler/src/scheduler.rs
+++ b/flowctl/crates/flowctl-scheduler/src/scheduler.rs
@@ -17,6 +17,40 @@ use flowctl_core::state_machine::Status;
use crate::circuit_breaker::CircuitBreaker;
+/// Per-domain historical performance data used for adaptive parallelism.
+#[derive(Debug, Clone)]
+pub struct DomainPerf {
+ /// Number of completed tasks with duration history.
+ pub completed_count: i64,
+ /// Average task duration in seconds.
+ pub avg_duration_secs: f64,
+}
+
+/// Configuration for domain-based adaptive parallelism.
+///
+/// When a domain has >= `min_samples` completed tasks with duration history,
+/// the scheduler adjusts the effective semaphore size for tasks in that domain.
+/// Domains below the threshold use the static `SchedulerConfig::max_parallel`.
+#[derive(Debug, Clone)]
+pub struct AdaptiveConfig {
+ /// Minimum completed tasks before adaptive sizing kicks in.
+ pub min_samples: i64,
+ /// Absolute floor for computed parallelism (never go below this).
+ pub min_parallel: usize,
+ /// Absolute ceiling for computed parallelism (never exceed this).
+ pub max_parallel: usize,
+}
+
+impl Default for AdaptiveConfig {
+ fn default() -> Self {
+ Self {
+ min_samples: 5,
+ min_parallel: 1,
+ max_parallel: 8,
+ }
+ }
+}
+
/// Result of a single task execution.
#[derive(Debug, Clone)]
pub struct TaskResult {
 pub type TaskExecutor = Arc<dyn Fn(String, CancellationToken) -> TaskResult + Send + Sync>;
/// DAG scheduler: discovers ready tasks, dispatches them with bounded
/// parallelism, and feeds completions back to discover the next wave.
+///
+/// When CPM weights are provided, ready tasks are dispatched in descending
+/// order of their CPM distance (longest remaining path). This ensures that
+/// tasks on the critical path are started first, minimizing total makespan.
pub struct Scheduler {
/// The task dependency graph.
dag: TaskDag,
@@ -66,6 +104,15 @@ pub struct Scheduler {
cancel: CancellationToken,
/// Circuit breaker for consecutive-failure detection.
circuit_breaker: CircuitBreaker,
+ /// CPM priorities: task_id -> distance on longest weighted path.
+ /// Tasks with higher values are dispatched first.
+ cpm_priorities: HashMap<String, f64>,
+ /// Per-domain historical performance for adaptive parallelism.
+ domain_perf: HashMap<String, DomainPerf>,
+ /// Adaptive parallelism config. `None` means use static `max_parallel`.
+ adaptive_config: Option<AdaptiveConfig>,
+ /// Task-id to domain mapping (set alongside domain_perf).
+ task_domains: HashMap<String, String>,
}
impl Scheduler {
@@ -84,9 +131,94 @@ impl Scheduler {
config,
cancel,
circuit_breaker,
+ cpm_priorities: HashMap::new(),
+ domain_perf: HashMap::new(),
+ adaptive_config: None,
+ task_domains: HashMap::new(),
}
}
+ /// Set CPM weights and compute dispatch priorities.
+ ///
+ /// Call this before `run()` to enable CPM-based dispatch ordering.
+ /// Tasks with higher CPM distance (more downstream work) are dispatched first.
+ pub fn set_cpm_weights(&mut self, weights: &HashMap<String, f64>) {
+ self.cpm_priorities = self.dag.cpm_priorities(weights);
+ }
+
+ /// Enable adaptive parallelism with per-domain performance data.
+ ///
+ /// Call this before `run()`. For each domain with enough history
+ /// (>= `adaptive_config.min_samples`), the scheduler will compute an
+ /// effective parallelism level based on average task duration:
+ ///
+ /// - Short-duration domains (fast tasks) get higher parallelism.
+ /// - Long-duration domains (slow tasks) get lower parallelism.
+ ///
+ /// `task_domains` maps task_id -> domain string so the scheduler knows
+ /// which domain each task belongs to at dispatch time.
+ pub fn set_adaptive(
+ &mut self,
+ config: AdaptiveConfig,
+ domain_perf: HashMap<String, DomainPerf>,
+ task_domains: HashMap<String, String>,
+ ) {
+ self.domain_perf = domain_perf;
+ self.task_domains = task_domains;
+ self.adaptive_config = Some(config);
+ }
+
+ /// Compute effective parallelism for a domain.
+ ///
+ /// If adaptive is not configured or the domain lacks sufficient samples,
+ /// returns `self.config.max_parallel` (the static fallback).
+ ///
+ /// The heuristic: normalize domain durations relative to the global mean
+ /// across all warm domains. Domains with below-average duration get more
+ /// slots; above-average get fewer. The ratio is clamped to
+ /// `[adaptive.min_parallel, adaptive.max_parallel]`.
+ fn effective_parallelism(&self, domain: Option<&str>) -> usize {
+ let adaptive = match &self.adaptive_config {
+ Some(c) => c,
+ None => return self.config.max_parallel,
+ };
+
+ let domain_key = match domain {
+ Some(d) => d,
+ None => return self.config.max_parallel,
+ };
+
+ let perf = match self.domain_perf.get(domain_key) {
+ Some(p) if p.completed_count >= adaptive.min_samples => p,
+ _ => return self.config.max_parallel, // cold start
+ };
+
+ // Global mean across all warm domains.
+ let warm: Vec<&DomainPerf> = self
+ .domain_perf
+ .values()
+ .filter(|p| p.completed_count >= adaptive.min_samples)
+ .collect();
+
+ if warm.is_empty() {
+ return self.config.max_parallel;
+ }
+
+ let global_mean: f64 =
+ warm.iter().map(|p| p.avg_duration_secs).sum::<f64>() / warm.len() as f64;
+
+ if global_mean <= 0.0 || perf.avg_duration_secs <= 0.0 {
+ return self.config.max_parallel;
+ }
+
+ // ratio > 1 means domain is faster than average -> more slots.
+ let ratio = global_mean / perf.avg_duration_secs;
+ let base = self.config.max_parallel as f64;
+ let computed = (base * ratio).round() as usize;
+
+ computed.clamp(adaptive.min_parallel, adaptive.max_parallel)
+ }
+
/// Run the scheduling loop to completion.
///
/// Returns the final status map when all tasks are done/failed or the
@@ -96,21 +228,26 @@ impl Scheduler {
F: Fn(String, CancellationToken) -> Fut + Send + Sync + 'static,
Fut: std::future::Future
}>
+
+
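The heuristic in `effective_parallelism` can be exercised standalone. This sketch inlines the same math (global mean over warm domains, ratio, round, clamp); the `Adaptive` struct here is a simplified stand-in for the diff's `AdaptiveConfig`, not the real type.

```rust
// Simplified stand-in for AdaptiveConfig (illustrative, not the diff's type).
struct Adaptive {
    min_parallel: usize,
    max_parallel: usize,
    min_samples: u64,
}

fn effective_parallelism(
    adaptive: &Adaptive,
    static_max: usize,       // self.config.max_parallel fallback
    domain_avg_secs: f64,    // this domain's avg task duration
    warm_domain_avgs: &[f64],// avg durations of all warm domains
    samples: u64,            // completed_count for this domain
) -> usize {
    // Cold start: not enough samples, fall back to the static limit.
    if samples < adaptive.min_samples || warm_domain_avgs.is_empty() {
        return static_max;
    }
    let global_mean: f64 =
        warm_domain_avgs.iter().sum::<f64>() / warm_domain_avgs.len() as f64;
    if global_mean <= 0.0 || domain_avg_secs <= 0.0 {
        return static_max;
    }
    // ratio > 1 means the domain is faster than average -> more slots.
    let ratio = global_mean / domain_avg_secs;
    let computed = (static_max as f64 * ratio).round() as usize;
    computed.clamp(adaptive.min_parallel, adaptive.max_parallel)
}

fn main() {
    let cfg = Adaptive { min_parallel: 1, max_parallel: 8, min_samples: 3 };
    // Two warm domains averaging 2s and 6s; global mean is 4s.
    // Fast domain (2s): ratio 2.0, so 4 * 2.0 = 8 slots.
    let fast = effective_parallelism(&cfg, 4, 2.0, &[2.0, 6.0], 5);
    // Slow domain (6s): ratio ~0.67, so 4 * 0.67 rounds to 3 slots.
    let slow = effective_parallelism(&cfg, 4, 6.0, &[2.0, 6.0], 5);
    println!("fast={fast} slow={slow}");
}
```

Note how a cold domain (fewer than `min_samples` completions) bypasses the heuristic entirely, matching the `// cold start` branch in the diff.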
diff --git a/flowctl/crates/flowctl-web/src/components/dag_graph.rs b/flowctl/crates/flowctl-web/src/components/dag_graph.rs
new file mode 100644
index 00000000..363c4771
--- /dev/null
+++ b/flowctl/crates/flowctl-web/src/components/dag_graph.rs
@@ -0,0 +1,361 @@
+//! Interactive DAG graph component with drag-to-create/delete dependency edges
+//! and retry/skip action buttons on failed/blocked task nodes.
+//!
+//! Uses SVG mousedown/mousemove/mouseup events on nodes to draw dependency edges.
+//! Validates that new edges don't create cycles before submitting to the server.
+
+use leptos::prelude::*;
+
+use crate::api::{self, DagResponse};
+
+/// Node dimensions (must match dag_view.rs constants).
+const NODE_WIDTH: f64 = 160.0;
+const NODE_HEIGHT: f64 = 60.0;
+const NODE_RX: f64 = 8.0;
+
+/// Color for a task status.
+fn status_color(status: &str) -> &'static str {
+ match status {
+ "done" => "#16a34a",
+ "in_progress" => "#ca8a04",
+ "blocked" | "failed" | "upstream_failed" => "#dc2626",
+ "skipped" => "#4b5563",
+ "up_for_retry" => "#ea580c",
+ _ => "#374151",
+ }
+}
+
+fn status_text_color(status: &str) -> &'static str {
+ match status {
+ "done" | "in_progress" | "blocked" | "failed" | "upstream_failed" | "up_for_retry" => {
+ "#ffffff"
+ }
+ _ => "#d1d5db",
+ }
+}
+
+/// Whether a task status supports retry (back to todo).
+fn can_retry(status: &str) -> bool {
+ matches!(status, "failed" | "blocked")
+}
+
+/// Whether a task status supports skip.
+fn can_skip(status: &str) -> bool {
+ status == "todo"
+}
+
+/// Interactive DAG graph component.
+///
+/// Props:
+/// - `dag`: the current DAG data (nodes + edges)
+/// - `version`: optimistic lock version string (updated_at timestamp)
+/// - `epic_id`: epic ID for context
+/// - `on_mutated`: callback invoked after a successful mutation (to trigger refresh)
+#[component]
+pub fn DagGraph(
+ dag: DagResponse,
+ version: String,
+ _epic_id: String,
+ on_mutated: Callback<()>,
+) -> impl IntoView {
+ // Drag state: source node ID being dragged from.
+ let (drag_source, set_drag_source) = signal(Option::<String>::None);
+ // Current mouse position during drag (SVG coordinates).
+ let (drag_pos, set_drag_pos) = signal(Option::<(f64, f64)>::None);
+ // Error message for user feedback.
+ let (error_msg, set_error_msg) = signal(Option::<String>::None);
+
+ if dag.nodes.is_empty() {
+ return view! {
+ "No tasks in this epic."
+ }
+ .into_any();
+ }
+
+ // Compute SVG viewport bounds.
+ let min_x = dag.nodes.iter().map(|n| n.x).fold(f64::INFINITY, f64::min);
+ let max_x = dag
+ .nodes
+ .iter()
+ .map(|n| n.x)
+ .fold(f64::NEG_INFINITY, f64::max);
+ let min_y = dag.nodes.iter().map(|n| n.y).fold(f64::INFINITY, f64::min);
+ let max_y = dag
+ .nodes
+ .iter()
+ .map(|n| n.y)
+ .fold(f64::NEG_INFINITY, f64::max);
+ let padding = 60.0;
+ let vb_x = min_x - padding;
+ let vb_y = min_y - padding - NODE_HEIGHT / 2.0;
+ let vb_w = (max_x - min_x) + NODE_WIDTH + padding * 2.0;
+ let vb_h = (max_y - min_y) + NODE_HEIGHT + padding * 2.0;
+ let viewbox = format!("{vb_x} {vb_y} {vb_w} {vb_h}");
+
+ // Build node position lookup.
+ let node_positions: std::collections::HashMap<String, (f64, f64)> =
+ dag.nodes.iter().map(|n| (n.id.clone(), (n.x, n.y))).collect();
+
+ // Edge set for checking existing edges and click-to-delete.
+ let edge_set: std::collections::HashSet<(String, String)> = dag
+ .edges
+ .iter()
+ .map(|e| (e.from.clone(), e.to.clone()))
+ .collect();
+
+ // Render edges.
+ let np = node_positions.clone();
+ let edges_view: Vec<_> = dag
+ .edges
+ .iter()
+ .map(|edge| {
+ let from_pos = np.get(&edge.from).copied().unwrap_or((0.0, 0.0));
+ let to_pos = np.get(&edge.to).copied().unwrap_or((0.0, 0.0));
+ let x1 = from_pos.0 + NODE_WIDTH / 2.0;
+ let y1 = from_pos.1;
+ let x2 = to_pos.0 - NODE_WIDTH / 2.0;
+ let y2 = to_pos.1;
+ let mid_x = (x1 + x2) / 2.0;
+ let d = format!("M {x1} {y1} C {mid_x} {y1}, {mid_x} {y2}, {x2} {y2}");
+
+ let edge_from = edge.from.clone();
+ let edge_to = edge.to.clone();
+ let ver = version.clone();
+ let on_mut = on_mutated;
+ let set_err = set_error_msg;
+
+ // Click on edge to delete it.
+ let on_click = move |_| {
+ let from = edge_from.clone();
+ let to = edge_to.clone();
+ let v = ver.clone();
+ leptos::task::spawn_local(async move {
+ let params = serde_json::json!({"task_id": to, "depends_on": from});
+ match api::mutate_dag("remove_dep", params, &v).await {
+ Ok(_) => on_mut.run(()),
+ Err(e) => set_err.set(Some(e)),
+ }
+ });
+ };
+
+ view! {
+ <path d=d fill="none" stroke="#6b7280" stroke-width="2" style="cursor: pointer;" on:click=on_click>
+ <title>"Click to remove dependency"</title>
+ </path>
+ }
+ })
+ .collect();
+
+ // Render nodes with drag handles and action buttons.
+ let nodes_view: Vec<_> = dag
+ .nodes
+ .iter()
+ .map(|node| {
+ let status = node.status.clone();
+ let fill = status_color(&status);
+ let text_fill = status_text_color(&status);
+ let rx = node.x - NODE_WIDTH / 2.0;
+ let ry = node.y - NODE_HEIGHT / 2.0;
+ let title = if node.title.len() > 20 {
+ format!("{}...", &node.title[..17])
+ } else {
+ node.title.clone()
+ };
+ let short_id = node.id.clone();
+ let node_id = node.id.clone();
+ let node_x = node.x;
+ let node_y = node.y;
+
+ // Mousedown: start drag from this node (right edge = output port).
+ let nid = node_id.clone();
+ let on_mousedown = move |_: leptos::ev::MouseEvent| {
+ set_drag_source.set(Some(nid.clone()));
+ set_drag_pos.set(None);
+ };
+
+ // Mouseup on a node: if dragging, create edge from source to this node.
+ let target_id = node_id.clone();
+ let ver = version.clone();
+ let es = edge_set.clone();
+ let on_mut = on_mutated;
+ let set_err = set_error_msg;
+ let on_mouseup = move |_: leptos::ev::MouseEvent| {
+ if let Some(source) = drag_source.get_untracked() {
+ set_drag_source.set(None);
+ set_drag_pos.set(None);
+
+ // Don't create self-loops or duplicate edges.
+ if source == target_id {
+ return;
+ }
+ if es.contains(&(source.clone(), target_id.clone())) {
+ return;
+ }
+
+ let tid = target_id.clone();
+ let src = source.clone();
+ let v = ver.clone();
+ leptos::task::spawn_local(async move {
+ let params = serde_json::json!({"task_id": tid, "depends_on": src});
+ match api::mutate_dag("add_dep", params, &v).await {
+ Ok(_) => on_mut.run(()),
+ Err(e) => set_err.set(Some(e)),
+ }
+ });
+ }
+ };
+
+ // Retry button (for failed/blocked tasks).
+ let show_retry = can_retry(&status);
+ let retry_id = node_id.clone();
+ let retry_ver = version.clone();
+ let retry_on_mut = on_mutated;
+ let retry_set_err = set_error_msg;
+ let retry_btn_x = rx + NODE_WIDTH - 38.0;
+ let retry_btn_y = ry + NODE_HEIGHT + 4.0;
+
+ let on_retry = move |_: leptos::ev::MouseEvent| {
+ let tid = retry_id.clone();
+ let v = retry_ver.clone();
+ leptos::task::spawn_local(async move {
+ let params = serde_json::json!({"task_id": tid});
+ match api::mutate_dag("retry_task", params, &v).await {
+ Ok(_) => retry_on_mut.run(()),
+ Err(e) => retry_set_err.set(Some(e)),
+ }
+ });
+ };
+
+ // Skip button (for todo tasks).
+ let show_skip = can_skip(&status);
+ let skip_id = node_id.clone();
+ let skip_ver = version.clone();
+ let skip_on_mut = on_mutated;
+ let skip_set_err = set_error_msg;
+ let skip_btn_x = rx + 2.0;
+ let skip_btn_y = ry + NODE_HEIGHT + 4.0;
+
+ let on_skip = move |_: leptos::ev::MouseEvent| {
+ let tid = skip_id.clone();
+ let v = skip_ver.clone();
+ leptos::task::spawn_local(async move {
+ let params = serde_json::json!({"task_id": tid});
+ match api::mutate_dag("skip_task", params, &v).await {
+ Ok(_) => skip_on_mut.run(()),
+ Err(e) => skip_set_err.set(Some(e)),
+ }
+ });
+ };
+
+ view! {
+
+
+ {title}
+ {short_id}
+
+ // Retry button.
+ {if show_retry {
+ Some(view! {
+
+
+ "Retry"
+
+ })
+ } else {
+ None
+ }}
+
+ // Skip button.
+ {if show_skip {
+ Some(view! {
+
+
+ "Skip"
+
+ })
+ } else {
+ None
+ }}
+
+ }
+ })
+ .collect();
+
+ // Drag preview line: drawn while dragging from a source node.
+ let np2 = node_positions.clone();
+ let drag_line = move || {
+ let source = drag_source.get()?;
+ let (mx, my) = drag_pos.get()?;
+ let (sx, sy) = np2.get(&source).copied()?;
+ let x1 = sx + NODE_WIDTH / 2.0;
+ let d = format!("M {x1} {sy} L {mx} {my}");
+ Some(view! {
+
+ })
+ };
+
+ // SVG mousemove: update drag position. We approximate SVG coords from client coords.
+ let on_mousemove = move |evt: leptos::ev::MouseEvent| {
+ if drag_source.get_untracked().is_some() {
+ // Use offsetX/offsetY as approximation (works when SVG fills container).
+ let x = evt.offset_x() as f64;
+ let y = evt.offset_y() as f64;
+ // Map from pixel coords to SVG viewBox coords.
+ // This is approximate but good enough for a drag preview.
+ set_drag_pos.set(Some((
+ vb_x + (x / 800.0) * vb_w,
+ vb_y + (y / 400.0) * vb_h,
+ )));
+ }
+ };
+
+ // SVG mouseup: cancel drag if not on a node.
+ let on_svg_mouseup = move |_: leptos::ev::MouseEvent| {
+ set_drag_source.set(None);
+ set_drag_pos.set(None);
+ };
+
+ view! {
+
+ {move || error_msg.get().map(|msg| view! {
+
+ {msg}
+
+
+ })}
+
+ "Drag from one node to another to add a dependency. Click an edge to remove it."
+
+
+
+ }
+ .into_any()
+}
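The module doc for `dag_graph.rs` says new edges are validated against cycles before being submitted. The surviving hunk doesn't show that check, but a reachability test consistent with this file's `(String, String)` `edge_set` might look like the following; `would_cycle` is an illustrative sketch, not code from the diff.

```rust
use std::collections::{HashMap, HashSet};

/// Would adding edge `from -> to` close a cycle? True iff `from` is already
/// reachable from `to` through the existing edges.
fn would_cycle(edges: &HashSet<(String, String)>, from: &str, to: &str) -> bool {
    // Build adjacency: node -> direct successors.
    let mut adj: HashMap<&str, Vec<&str>> = HashMap::new();
    for (f, t) in edges {
        adj.entry(f.as_str()).or_default().push(t.as_str());
    }
    // Iterative DFS from `to`; hitting `from` means the new edge closes a loop.
    let mut stack = vec![to];
    let mut seen = HashSet::new();
    while let Some(node) = stack.pop() {
        if node == from {
            return true;
        }
        if seen.insert(node) {
            if let Some(next) = adj.get(node) {
                stack.extend(next.iter().copied());
            }
        }
    }
    false
}

fn main() {
    let edges: HashSet<(String, String)> = [("a", "b"), ("b", "c")]
        .iter()
        .map(|(f, t)| (f.to_string(), t.to_string()))
        .collect();
    // c -> a would close a -> b -> c -> a.
    println!("{}", would_cycle(&edges, "c", "a"));
    // a -> c is a harmless shortcut edge.
    println!("{}", would_cycle(&edges, "a", "c"));
}
```

Running the check client-side keeps the common error local; the server should still reject cycles authoritatively, since the browser's edge set can be stale.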
diff --git a/flowctl/crates/flowctl-web/src/components/mod.rs b/flowctl/crates/flowctl-web/src/components/mod.rs
index a07a0127..644c3e93 100644
--- a/flowctl/crates/flowctl-web/src/components/mod.rs
+++ b/flowctl/crates/flowctl-web/src/components/mod.rs
@@ -1,2 +1,4 @@
pub mod status_badge;
pub mod progress_bar;
+pub mod token_chart;
+pub mod dag_graph;
diff --git a/flowctl/crates/flowctl-web/src/components/token_chart.rs b/flowctl/crates/flowctl-web/src/components/token_chart.rs
new file mode 100644
index 00000000..872cadaa
--- /dev/null
+++ b/flowctl/crates/flowctl-web/src/components/token_chart.rs
@@ -0,0 +1,90 @@
+//! SVG bar chart component showing token consumption per task.
+
+use leptos::prelude::*;
+
+use crate::api::TaskTokenSummary;
+
+/// Bar colors for input vs output tokens.
+const INPUT_COLOR: &str = "#06b6d4"; // cyan-500
+const OUTPUT_COLOR: &str = "#8b5cf6"; // violet-500
+
+/// SVG bar chart showing token consumption per task.
+#[component]
+pub fn TokenChart(
+ #[prop(into)] data: Vec<TaskTokenSummary>,
+) -> impl IntoView {
+ if data.is_empty() {
+ return view! {
+ "No token data available."
+ }
+ .into_any();
+ }
+
+ let max_tokens = data
+ .iter()
+ .map(|d| d.input_tokens + d.output_tokens)
+ .max()
+ .unwrap_or(1)
+ .max(1);
+
+ let bar_height = 28.0_f64;
+ let gap = 8.0_f64;
+ let label_width = 140.0_f64;
+ let chart_width = 500.0_f64;
+ let total_width = label_width + chart_width + 80.0;
+ let total_height = data.len() as f64 * (bar_height + gap) + gap + 30.0;
+
+ let bars: Vec<_> = data
+ .iter()
+ .enumerate()
+ .map(|(i, item)| {
+ let y = gap + i as f64 * (bar_height + gap);
+ let input_w = (item.input_tokens as f64 / max_tokens as f64) * chart_width;
+ let output_w = (item.output_tokens as f64 / max_tokens as f64) * chart_width;
+ let total = item.input_tokens + item.output_tokens;
+ // Truncate long task IDs for display.
+ let label = if item.task_id.len() > 18 {
+ format!("{}...", &item.task_id[..15])
+ } else {
+ item.task_id.clone()
+ };
+ let cost_label = format!("${:.3}", item.estimated_cost);
+ view! {
+
+ // Task ID label
+ {label}
+ // Input tokens bar
+
+ // Output tokens bar (stacked after input)
+
+ // Total label
+
+ {format!("{total} tok / {cost_label}")}
+
+
+ }
+ })
+ .collect();
+
+ // Legend at the bottom.
+ let legend_y = data.len() as f64 * (bar_height + gap) + gap + 10.0;
+
+ view! {
+
+ }
+ .into_any()
+}
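The stacked-bar geometry in `TokenChart` reduces to one scaling rule: both segments of a row are normalized against the largest `input + output` total, so every row shares a scale. A standalone sketch of that rule (the function name is illustrative):

```rust
/// Pixel widths for a stacked (input, output) bar, scaled so that the row
/// with the largest total exactly fills `chart_width`.
fn bar_widths(input: i64, output: i64, max_total: i64, chart_width: f64) -> (f64, f64) {
    // .max(1) mirrors the component's guard against an all-zero dataset.
    let max = max_total.max(1) as f64;
    (
        input as f64 / max * chart_width,  // input segment starts at x = 0
        output as f64 / max * chart_width, // output segment is drawn after it
    )
}

fn main() {
    // Largest row: 300 + 200 = 500 tokens over a 500px chart.
    let (i, o) = bar_widths(300, 200, 500, 500.0);
    println!("input={i} output={o}");
}
```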
diff --git a/flowctl/crates/flowctl-web/src/pages/dag_view.rs b/flowctl/crates/flowctl-web/src/pages/dag_view.rs
new file mode 100644
index 00000000..80726b31
--- /dev/null
+++ b/flowctl/crates/flowctl-web/src/pages/dag_view.rs
@@ -0,0 +1,231 @@
+//! DAG visualization page: renders task dependency graph as SVG.
+//!
+//! Client-only component: SSR renders a loading placeholder, hydration
+//! activates the DAG fetch and WebSocket subscription for live updates.
+
+use leptos::prelude::*;
+use leptos_router::hooks::use_params_map;
+
+use crate::api;
+
+/// Color for a task status.
+fn status_color(status: &str) -> &'static str {
+ match status {
+ "done" => "#16a34a", // green-600
+ "in_progress" => "#ca8a04", // yellow-600
+ "blocked" => "#dc2626", // red-600
+ "skipped" => "#4b5563", // gray-600
+ _ => "#374151", // gray-700 (todo)
+ }
+}
+
+/// Text color for contrast on status backgrounds.
+fn status_text_color(status: &str) -> &'static str {
+ match status {
+ "done" | "in_progress" | "blocked" => "#ffffff",
+ _ => "#d1d5db", // gray-300
+ }
+}
+
+/// Node dimensions for SVG rendering.
+const NODE_WIDTH: f64 = 160.0;
+const NODE_HEIGHT: f64 = 60.0;
+const NODE_RX: f64 = 8.0;
+
+/// DAG view page component — renders SVG dependency graph.
+#[component]
+pub fn DagViewPage() -> impl IntoView {
+ let params = use_params_map();
+ let epic_id = move || params.read().get("id").unwrap_or_default();
+
+ let dag_data = LocalResource::new(move || {
+ let id = epic_id();
+ async move { api::fetch_dag(&id).await.ok() }
+ });
+
+ // Reactive signal for node statuses (updated via WebSocket).
+ let (status_updates, _set_status_updates) =
+ signal(std::collections::HashMap::<String, String>::new());
+
+ // Set up WebSocket subscription for live updates (client-side only).
+ #[cfg(feature = "hydrate")]
+ {
+ let set_status_updates = _set_status_updates;
+ use leptos::prelude::Effect;
+ Effect::new(move |_| {
+ spawn_ws_listener(set_status_updates);
+ });
+ }
+
+ view! {
+
+
+
+
"Loading DAG..." }>
+ {move || {
+ dag_data.get().map(|maybe_dag| {
+ match maybe_dag {
+ None => view! {
+ "Failed to load DAG data."
+ }.into_any(),
+ Some(dag) if dag.nodes.is_empty() => view! {
+ "No tasks in this epic."
+ }.into_any(),
+ Some(dag) => {
+ let updates = status_updates.get();
+ // Compute SVG viewport bounds.
+ let min_x = dag.nodes.iter().map(|n| n.x).fold(f64::INFINITY, f64::min);
+ let max_x = dag.nodes.iter().map(|n| n.x).fold(f64::NEG_INFINITY, f64::max);
+ let min_y = dag.nodes.iter().map(|n| n.y).fold(f64::INFINITY, f64::min);
+ let max_y = dag.nodes.iter().map(|n| n.y).fold(f64::NEG_INFINITY, f64::max);
+ let padding = 40.0;
+ let vb_x = min_x - padding;
+ let vb_y = min_y - padding - NODE_HEIGHT / 2.0;
+ let vb_w = (max_x - min_x) + NODE_WIDTH + padding * 2.0;
+ let vb_h = (max_y - min_y) + NODE_HEIGHT + padding * 2.0;
+ let viewbox = format!("{vb_x} {vb_y} {vb_w} {vb_h}");
+
+ // Build node position lookup for edge drawing.
+ let node_positions: std::collections::HashMap<String, (f64, f64)> =
+ dag.nodes.iter().map(|n| (n.id.clone(), (n.x, n.y))).collect();
+
+ let edges_view: Vec<_> = dag.edges.iter().map(|edge| {
+ let from_pos = node_positions.get(&edge.from).copied().unwrap_or((0.0, 0.0));
+ let to_pos = node_positions.get(&edge.to).copied().unwrap_or((0.0, 0.0));
+ // Edge: from right side of source to left side of target.
+ let x1 = from_pos.0 + NODE_WIDTH / 2.0;
+ let y1 = from_pos.1;
+ let x2 = to_pos.0 - NODE_WIDTH / 2.0;
+ let y2 = to_pos.1;
+ // Cubic bezier for smooth curves.
+ let mid_x = (x1 + x2) / 2.0;
+ let d = format!("M {x1} {y1} C {mid_x} {y1}, {mid_x} {y2}, {x2} {y2}");
+ view! {
+ <path d=d fill="none" stroke="#6b7280" stroke-width="2"/>
+ }
+ }).collect();
+
+ let nodes_view: Vec<_> = dag.nodes.iter().map(|node| {
+ // Use WebSocket-updated status if available, else original.
+ let status = updates.get(&node.id)
+ .cloned()
+ .unwrap_or_else(|| node.status.clone());
+ let fill = status_color(&status);
+ let text_fill = status_text_color(&status);
+ let rx = node.x - NODE_WIDTH / 2.0;
+ let ry = node.y - NODE_HEIGHT / 2.0;
+ // Truncate long titles.
+ let title = if node.title.len() > 20 {
+ format!("{}...", &node.title[..17])
+ } else {
+ node.title.clone()
+ };
+ let short_id = node.id.clone();
+ view! {
+
+
+ {title}
+ {short_id}
+
+ }
+ }).collect();
+
+ view! {
+
+ }.into_any()
+ }
+ }
+ })
+ }}
+
+
+
+
+
+ "Todo"
+
+
+
+ "In Progress"
+
+
+
+ "Done"
+
+
+
+ "Blocked"
+
+
+
+ "Skipped"
+
+
+
+ }
+}
+
+/// Spawn a WebSocket listener that updates node statuses in real-time.
+///
+/// Connects to /api/v1/events and parses task status change events.
+/// Only compiled for the hydrate (WASM) target.
+#[cfg(feature = "hydrate")]
+fn spawn_ws_listener(
+ set_status: WriteSignal<std::collections::HashMap<String, String>>,
+) {
+ use wasm_bindgen::prelude::*;
+ use wasm_bindgen::JsCast;
+
+ // Build WebSocket URL from current page location.
+ let window = web_sys::window().expect("no window");
+ let location = window.location();
+ let protocol = location.protocol().unwrap_or_else(|_| "http:".to_string());
+ let ws_protocol = if protocol == "https:" { "wss:" } else { "ws:" };
+ let host = location.host().unwrap_or_else(|_| "localhost:17319".to_string());
+ let ws_url = format!("{ws_protocol}//{host}/api/v1/events");
+
+ let ws = match web_sys::WebSocket::new(&ws_url) {
+ Ok(ws) => ws,
+ Err(_) => return,
+ };
+
+ // On message: parse event and update status map if it's a task status change.
+ let on_message = Closure::<dyn FnMut(web_sys::MessageEvent)>::new(move |event: web_sys::MessageEvent| {
+ if let Some(text) = event.data().as_string() {
+ // Events are JSON with {event: {type, task_id, new_status, ...}, timestamp}
+ if let Ok(value) = serde_json::from_str::<serde_json::Value>(&text) {
+ let evt = value.get("event").unwrap_or(&value);
+ if let (Some(task_id), Some(new_status)) = (
+ evt.get("task_id").and_then(|v| v.as_str()),
+ evt.get("new_status").and_then(|v| v.as_str()),
+ ) {
+ set_status.update(|map| {
+ map.insert(task_id.to_string(), new_status.to_lowercase());
+ });
+ }
+ }
+ }
+ });
+ ws.set_onmessage(Some(on_message.as_ref().unchecked_ref()));
+ on_message.forget(); // leak closure to keep WebSocket alive
+}
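The WebSocket URL derivation in `spawn_ws_listener` is the part that must agree with both the page's scheme and the daemon's routes, so it is worth isolating. Extracted as a pure function (same mapping as the diff: https pages get wss, everything else ws):

```rust
/// Derive the events WebSocket URL from the page's protocol and host,
/// as spawn_ws_listener does from web_sys::Location.
fn ws_url(protocol: &str, host: &str) -> String {
    let ws_protocol = if protocol == "https:" { "wss:" } else { "ws:" };
    format!("{ws_protocol}//{host}/api/v1/events")
}

fn main() {
    println!("{}", ws_url("https:", "flow.example.com"));
    println!("{}", ws_url("http:", "localhost:17319"));
}
```

Keeping this a pure function of `(protocol, host)` also makes the mapping trivially unit-testable without a browser environment.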
diff --git a/flowctl/crates/flowctl-web/src/pages/epic_detail.rs b/flowctl/crates/flowctl-web/src/pages/epic_detail.rs
index ebd14fc2..a99f8137 100644
--- a/flowctl/crates/flowctl-web/src/pages/epic_detail.rs
+++ b/flowctl/crates/flowctl-web/src/pages/epic_detail.rs
@@ -23,6 +23,10 @@ pub fn EpicDetailPage() -> impl IntoView {
"Loading tasks..." }>
{move || {
diff --git a/flowctl/crates/flowctl-web/src/pages/mod.rs b/flowctl/crates/flowctl-web/src/pages/mod.rs
index e21c5d5c..32fdfab0 100644
--- a/flowctl/crates/flowctl-web/src/pages/mod.rs
+++ b/flowctl/crates/flowctl-web/src/pages/mod.rs
@@ -1,2 +1,4 @@
pub mod dashboard;
+pub mod dag_view;
pub mod epic_detail;
+pub mod replay;
diff --git a/flowctl/crates/flowctl-web/src/pages/replay.rs b/flowctl/crates/flowctl-web/src/pages/replay.rs
new file mode 100644
index 00000000..6d286fb2
--- /dev/null
+++ b/flowctl/crates/flowctl-web/src/pages/replay.rs
@@ -0,0 +1,151 @@
+//! Replay page: timeline of task execution events with token usage overlay.
+//!
+//! Client-only component: SSR renders a loading placeholder, hydration
+//! activates the data fetch from the daemon API.
+
+use leptos::prelude::*;
+use leptos_router::hooks::use_params_map;
+
+use crate::api;
+use crate::components::token_chart::TokenChart;
+
+/// Replay page component -- timeline view with token usage overlay.
+#[component]
+pub fn ReplayPage() -> impl IntoView {
+ let params = use_params_map();
+ let task_id = move || params.read().get("id").unwrap_or_default();
+
+ // Derive the epic ID from the task ID (everything before the last dot).
+ let epic_id = move || {
+ let tid = task_id();
+ tid.rsplit_once('.').map(|(e, _)| e.to_string()).unwrap_or(tid)
+ };
+
+ let events_data = LocalResource::new(move || {
+ let eid = epic_id();
+ let tid = task_id();
+ async move {
+ let events = api::fetch_tasks(&eid).await.ok();
+ let tokens = api::fetch_tokens_by_task(&tid).await.ok();
+ let epic_tokens = api::fetch_tokens_by_epic(&eid).await.ok();
+ (events, tokens, epic_tokens)
+ }
+ });
+
+ view! {
+
+
+
+
"Loading replay data..." }>
+ {move || {
+ events_data.get().map(|data| {
+ let (_tasks, tokens, epic_tokens) = data;
+
+ // Token usage section for this task.
+ let token_section = if let Some(ref records) = tokens {
+ if records.is_empty() {
+ view! {
+
+
"Token Usage"
+
"No token records for this task."
+
+ }.into_any()
+ } else {
+ let total_input: i64 = records.iter().map(|r| r.input_tokens).sum();
+ let total_output: i64 = records.iter().map(|r| r.output_tokens).sum();
+ let total_cost: f64 = records.iter().filter_map(|r| r.estimated_cost).sum();
+
+ let rows: Vec<_> = records.iter().map(|rec| {
+ let phase = rec.phase.clone().unwrap_or_else(|| "-".to_string());
+ let model = rec.model.clone().unwrap_or_else(|| "-".to_string());
+ let cost = rec.estimated_cost.map(|c| format!("${c:.4}")).unwrap_or_else(|| "-".to_string());
+ view! {
+
+ | {rec.timestamp.clone()} |
+ {phase} |
+ {model} |
+ {rec.input_tokens.to_string()} |
+ {rec.output_tokens.to_string()} |
+ {cost} |
+
+ }
+ }).collect();
+
+ view! {
+
+
"Token Usage"
+
+
+ "Input: "
+ {total_input.to_string()}
+
+
+ "Output: "
+ {total_output.to_string()}
+
+
+ "Cost: "
+ {format!("${total_cost:.4}")}
+
+
+
+
+
+
+ | "Timestamp" |
+ "Phase" |
+ "Model" |
+ "Input" |
+ "Output" |
+ "Cost" |
+
+
+
+ {rows}
+
+
+
+
+ }.into_any()
+ }
+ } else {
+ view! {
+
+
"Token Usage"
+
"Failed to load token data."
+
+ }.into_any()
+ };
+
+ // Epic-wide token chart.
+ let chart_section = if let Some(ref summaries) = epic_tokens {
+ view! {
+
+
"Epic Token Distribution"
+
+
+ }.into_any()
+ } else {
+ view! {
+
+
"Epic Token Distribution"
+
"Failed to load epic token data."
+
+ }.into_any()
+ };
+
+ view! {
+
+ {token_section}
+ {chart_section}
+
+ }
+ })
+ }}
+
+
+ }
+}
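The epic-ID derivation in `ReplayPage` (everything before the last dot, falling back to the whole ID when there is no dot) is the same `rsplit_once` pattern shown in the diff, restated as a standalone function:

```rust
/// Epic ID for a task ID like "fn-1-add-auth.2" -> "fn-1-add-auth".
/// IDs without a dot are returned unchanged.
fn epic_of(task_id: &str) -> String {
    task_id
        .rsplit_once('.')
        .map(|(epic, _)| epic.to_string())
        .unwrap_or_else(|| task_id.to_string())
}

fn main() {
    println!("{}", epic_of("fn-1-add-auth.2")); // fn-1-add-auth
    println!("{}", epic_of("fn-1"));            // fn-1
}
```

`rsplit_once` (split at the *last* dot) matters here because slugs may themselves contain hyphens but never dots, per the `fn-N-slug.M` format in the test fixtures below.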
diff --git a/flowctl/tests/cmd/block_json.toml b/flowctl/tests/cmd/block_json.toml
index daead3c3..94850ca7 100644
--- a/flowctl/tests/cmd/block_json.toml
+++ b/flowctl/tests/cmd/block_json.toml
@@ -2,5 +2,5 @@ bin.name = "flowctl"
args = ["--json", "block", "test-task-1", "--reason-file", "/dev/null"]
status.code = 1
stderr = """
-{"error":"Invalid task ID: test-task-1. Expected format: fn-N.M or fn-N-slug.M (e.g., fn-1.2, fn-1-add-auth.2)","success":false}
+{"error":"validation error: Invalid task ID: test-task-1. Expected format: fn-N.M or fn-N-slug.M (e.g., fn-1.2, fn-1-add-auth.2)","success":false}
"""
diff --git a/flowctl/tests/cmd/done_json.toml b/flowctl/tests/cmd/done_json.toml
index e381a0a1..3da695a5 100644
--- a/flowctl/tests/cmd/done_json.toml
+++ b/flowctl/tests/cmd/done_json.toml
@@ -2,5 +2,5 @@ bin.name = "flowctl"
args = ["--json", "done", "test-task-1"]
status.code = 1
stderr = """
-{"error":"Invalid task ID: test-task-1. Expected format: fn-N.M or fn-N-slug.M (e.g., fn-1.2, fn-1-add-auth.2)","success":false}
+{"error":"validation error: Invalid task ID: test-task-1. Expected format: fn-N.M or fn-N-slug.M (e.g., fn-1.2, fn-1-add-auth.2)","success":false}
"""
diff --git a/flowctl/tests/cmd/restart_json.toml b/flowctl/tests/cmd/restart_json.toml
index 9b26bd3a..ee504ea7 100644
--- a/flowctl/tests/cmd/restart_json.toml
+++ b/flowctl/tests/cmd/restart_json.toml
@@ -2,5 +2,5 @@ bin.name = "flowctl"
args = ["--json", "restart", "test-task-1"]
status.code = 1
stderr = """
-{"error":"Invalid task ID: test-task-1. Expected format: fn-N.M or fn-N-slug.M","success":false}
+{"error":"validation error: Invalid task ID: test-task-1. Expected format: fn-N.M or fn-N-slug.M (e.g., fn-1.2, fn-1-add-auth.2)","success":false}
"""
diff --git a/flowctl/tests/cmd/start_json.toml b/flowctl/tests/cmd/start_json.toml
index d3c14589..b2a617e1 100644
--- a/flowctl/tests/cmd/start_json.toml
+++ b/flowctl/tests/cmd/start_json.toml
@@ -2,5 +2,5 @@ bin.name = "flowctl"
args = ["--json", "start", "test-task-1"]
status.code = 1
stderr = """
-{"error":"Invalid task ID: test-task-1. Expected format: fn-N.M or fn-N-slug.M (e.g., fn-1.2, fn-1-add-auth.2)","success":false}
+{"error":"validation error: Invalid task ID: test-task-1. Expected format: fn-N.M or fn-N-slug.M (e.g., fn-1.2, fn-1-add-auth.2)","success":false}
"""