diff --git a/CHANGELOG.md b/CHANGELOG.md new file mode 100644 index 0000000..07fa507 --- /dev/null +++ b/CHANGELOG.md @@ -0,0 +1,108 @@ +# Changelog + +All notable changes to spar are documented here. Format follows +[Keep a Changelog](https://keepachangelog.com/en/1.1.0/) and the project +follows [Semantic Versioning](https://semver.org/spec/v2.0.0.html). + +## [0.7.1] — 2026-04-27 + +This release closes the v0.7.x line. Headline: full IRQ-aware response-time +analysis with priority-inheritance / priority-ceiling blocking, machine-checked +in Lean. Plus the entire v0.7.x verification-infrastructure ratchet. + +Track D Phase 1 (TSN/Ethernet WCTT) and Track E (migration oracle, commits 1-4) +are also on `main` at the time of this tag — they will be promoted in the next +release (v0.8.0). They are functional and tested but the Track E surface is +not yet at its commit-8 close-out. + +### Added — Track A v0.7.0 (IRQ-aware RTA) + +- `Spar_Timing::*` and `Spar_Trace::*` non-standard property sets + (`ISR_Priority`, `ISR_Execution_Time`, `Interrupt_Latency_Bound`, + `Bottom_Half_Server`; `Probe_Point`, `Expected_BCET`, `Expected_WCET`, + `Expected_Mean`). +- Hierarchical two-tier RTA: ISR layer steals CPU capacity first, residual + feeds task RTA. `Dispatch_Jitter` woven into the Tindell-Clark recurrence. + `Compute_Execution_Time`'s Time_Range consumed as `(BCET, WCET)`. +- New diagnostics: `IrqResponseBudget`, `IrqBudgetViolated`, + `IsrOverloadedCpu`, `MissingBottomHalfServer`, `ResponseBand`. +- Lean theorems for jittered RTA convergence (`proofs/Proofs/Scheduling/RTAJittered.lean`). +- Non-regression: models without `Spar_Timing::*` produce byte-identical + RTA output to v0.6.0. + +### Added — Track A v0.7.1 (PIP/PCP blocking) + +- `Thread_Properties::Locking_Protocol` (`Priority_Inheritance_Protocol`, + `Priority_Ceiling_Protocol`, `Stop_For_Lock`, `None`) + + `Spar_Timing::Critical_Section_Blocking` property recognition. 
+- Blocking term `B_i` folded into the hierarchical-RTA recurrence per + Joseph & Pandya 1986 / Buttazzo. New `BlockingInflated` Info diagnostic. +- Non-regression: models without `Locking_Protocol` produce byte-identical + output to v0.7.0. + +### Added — Track B v0.7.x foundation (variants) + +- `docs/contracts/rivet-spar-variant-v1.md` — interchange contract between + rivet (PLE truth) and spar (HIR consumer). Shape 1: rivet emits a JSON + context blob; spar consumes and filters HIR. +- New crate `spar-variants`: reads the v1 context blob, applies + intersection-semantics binding rules, exposes `keep_in_variant` predicate. + CLI integration arrives once rivet's emitter side ships. + +### Added — v0.7.x verification infrastructure + +- **Lean + Bazel + proptest CI gates** (`.github/workflows/proofs.yml` + + `bazel-test` + `Rivet validate (artifacts)` jobs in `ci.yml`). + Lean proofs now machine-checked on every PR via Mathlib precompiled cache. + Closes #135. +- **Kani harnesses** (`crates/spar-{solver,codegen}/tests/kani_*.rs`) bounded- + model-check ARINC653 solver invariants (closes #136). +- **cargo-fuzz scaffolding** (`fuzz/`, three targets: parser, solver, + codegen-roundtrip, with PR smoke + nightly soak workflows) (closes #138). +- **Criterion benchmarks** (`crates/spar-{solver,codegen}/benches/`, + PR compile-gate + nightly baseline) (closes #137). +- Status badges + AGENTS.md regeneration via `rivet init --agents`. + +### Added — v0.8.0 in flight (on main, not feature-promoted in this release) + +- **Track D Phase 1 — TSN/Ethernet WCTT analysis (6/6 commits)**: new + `spar-network` crate with NetworkGraph extraction + Network Calculus + primitives + Lean theorems. New `WcttAnalysis` pass produces per-stream + end-to-end traversal-time bounds. `latency.rs` integration alternates + RTA-derived WCET on compute hops with WCTT on network hops, replacing the + scalar `Bus_Properties::Latency` placeholder when `Spar_Network::*` is + annotated. 
+- **Track E commits 1-4 (4/8)**: `Spar_Migration::*` property set, + `BindingOverlay` for hypothetical-binding queries, `spar moves verify` + CLI returning JSON pass/fail, `spar moves enumerate` listing valid + rebinding candidates ranked by slack. + +### Changed + +- COMPLIANCE.md narrative updated for v0.7.0 / v0.7.1 / partial v0.8.0. +- Test count: ~1900+ across 17 crates (previously ~1200 across 16). +- `rivet validate` pin in CI bumped from v0.1.0 to v0.4.3 to match the schema- + tolerance behaviour of current artifacts. +- Migration: `cargo-fuzz` job now pinned to `x86_64-unknown-linux-gnu` + (avoids ASan / static-libc conflict). + +### Fixed + +- Two Lean import-order / comment-style issues in `RTAJittered.lean` and + `Network/MinPlus.lean` surfaced (and resolved) by the new Lean CI gate. +- Cargo-vet exemption ordering bug after appending criterion + pretty_assertions + dev-deps; sorter Python script now keeps the store-format check happy. + +### Documentation + +- `docs/designs/v0.7.0-hierarchical-rta.md` — design doc for Track A commit 2. +- `docs/designs/track-d-tsn-wctt-research.md` — TSN/WCTT design space + commercial-tool comparison (RTaW-Pegase et al.). +- `docs/designs/track-e-migration-research.md` — migration / design-space oracle research, MCP boundary design. +- `docs/designs/track-f-sysml-kerml-engagement.md` — SysML v2 / KerML community engagement strategy. +- `docs/contracts/rivet-spar-variant-v1.md` — variant-context interchange contract. + +--- + +## [0.6.0] — 2026-04-05 + +Earlier releases — see git history (no formal changelog kept before v0.7.1). 
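The changelog's Track A entries describe folding a blocking term into the jittered response-time recurrence. A minimal self-contained sketch of that fixed-point core follows; this is not spar's implementation (which adds the ISR layer and hierarchical capacity), and the `Task` struct, field names, and divergence cut-off are assumptions made for illustration:

```rust
// Hedged sketch of the blocking-aware, jitter-aware response-time
// recurrence (Joseph & Pandya 1986; Tindell-Clark jitter extension):
//
//   w_i^(n+1) = C_i + B_i + sum_{j in hp(i)} ceil((w_i^(n) + J_j) / T_j) * C_j
//   R_i       = w_i + J_i
//
// Assumptions: tasks are index-ordered by priority (0 = highest),
// deadlines are implicit (D = T), single CPU, no ISR layer.
#[derive(Clone, Copy)]
struct Task {
    c: u64, // WCET
    t: u64, // period
    j: u64, // release jitter
}

/// Fixed-point iteration for `tasks[i]` with blocking term `b` (B_i).
/// Returns `None` when the busy window exceeds the implicit deadline.
fn response_time(tasks: &[Task], i: usize, b: u64) -> Option<u64> {
    let me = tasks[i];
    let mut w = me.c + b;
    loop {
        let mut next = me.c + b;
        for hp in &tasks[..i] {
            // ceil((w + J_j) / T_j) * C_j via integer arithmetic
            next += ((w + hp.j + hp.t - 1) / hp.t) * hp.c;
        }
        if next == w {
            return Some(w + me.j); // converged: R_i = w_i + J_i
        }
        if next + me.j > me.t {
            return None; // past the implicit deadline: treat as infeasible
        }
        w = next;
    }
}

fn main() {
    let tasks = [Task { c: 1, t: 4, j: 0 }, Task { c: 2, t: 10, j: 0 }];
    // Low-priority task, no blocking: w = 2 + ceil(3/4) * 1 = 3.
    assert_eq!(response_time(&tasks, 1, 0), Some(3));
    // One unit of priority-inversion blocking inflates the bound to 4.
    assert_eq!(response_time(&tasks, 1, 1), Some(4));
    println!("ok");
}
```

The `BlockingInflated` diagnostic mentioned above corresponds to the `b > 0` case: the bound grows, but the recurrence shape is unchanged.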
diff --git a/Cargo.lock b/Cargo.lock index a495300..7cee06c 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -1215,7 +1215,7 @@ dependencies = [ [[package]] name = "spar" -version = "0.6.0" +version = "0.7.1" dependencies = [ "etch", "la-arena", @@ -1238,12 +1238,13 @@ dependencies = [ "spar-solver", "spar-syntax", "spar-sysml2", + "spar-variants", "toml", ] [[package]] name = "spar-analysis" -version = "0.6.0" +version = "0.7.1" dependencies = [ "la-arena", "rustc-hash 2.1.2", @@ -1256,7 +1257,7 @@ dependencies = [ [[package]] name = "spar-annex" -version = "0.6.0" +version = "0.7.1" dependencies = [ "rowan", "spar-syntax", @@ -1264,7 +1265,7 @@ dependencies = [ [[package]] name = "spar-base-db" -version = "0.6.0" +version = "0.7.1" dependencies = [ "rowan", "salsa", @@ -1274,7 +1275,7 @@ dependencies = [ [[package]] name = "spar-codegen" -version = "0.6.0" +version = "0.7.1" dependencies = [ "criterion", "la-arena", @@ -1288,7 +1289,7 @@ dependencies = [ [[package]] name = "spar-hir" -version = "0.6.0" +version = "0.7.1" dependencies = [ "salsa", "serde", @@ -1301,7 +1302,7 @@ dependencies = [ [[package]] name = "spar-hir-def" -version = "0.6.0" +version = "0.7.1" dependencies = [ "la-arena", "rowan", @@ -1315,7 +1316,7 @@ dependencies = [ [[package]] name = "spar-network" -version = "0.6.0" +version = "0.7.1" dependencies = [ "spar-base-db", "spar-hir-def", @@ -1323,7 +1324,7 @@ dependencies = [ [[package]] name = "spar-parser" -version = "0.6.0" +version = "0.7.1" dependencies = [ "expect-test", "proptest", @@ -1333,7 +1334,7 @@ dependencies = [ [[package]] name = "spar-render" -version = "0.6.0" +version = "0.7.1" dependencies = [ "etch", "la-arena", @@ -1344,7 +1345,7 @@ dependencies = [ [[package]] name = "spar-solver" -version = "0.6.0" +version = "0.7.1" dependencies = [ "criterion", "good_lp", @@ -1358,7 +1359,7 @@ dependencies = [ [[package]] name = "spar-syntax" -version = "0.6.0" +version = "0.7.1" dependencies = [ "expect-test", "rowan", @@ -1367,7 +1368,7 
@@ dependencies = [ [[package]] name = "spar-sysml2" -version = "0.6.0" +version = "0.7.1" dependencies = [ "expect-test", "la-arena", @@ -1378,7 +1379,7 @@ dependencies = [ [[package]] name = "spar-transform" -version = "0.6.0" +version = "0.7.1" dependencies = [ "la-arena", "serde", @@ -1389,7 +1390,7 @@ dependencies = [ [[package]] name = "spar-variants" -version = "0.6.0" +version = "0.7.1" dependencies = [ "pretty_assertions", "serde", @@ -1399,14 +1400,14 @@ dependencies = [ [[package]] name = "spar-verify" -version = "0.6.0" +version = "0.7.1" dependencies = [ "spar-verify-macros", ] [[package]] name = "spar-verify-macros" -version = "0.6.0" +version = "0.7.1" dependencies = [ "proc-macro2", "quote", @@ -1415,7 +1416,7 @@ dependencies = [ [[package]] name = "spar-wasm" -version = "0.6.0" +version = "0.7.1" dependencies = [ "etch", "la-arena", diff --git a/Cargo.toml b/Cargo.toml index ef5ce53..9e91ee0 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -22,7 +22,7 @@ members = [ ] [workspace.package] -version = "0.6.0" +version = "0.7.1" edition = "2024" license = "MIT" repository = "https://github.com/pulseengine/spar" diff --git a/artifacts/requirements.yaml b/artifacts/requirements.yaml index 29aa7d7..63413bb 100644 --- a/artifacts/requirements.yaml +++ b/artifacts/requirements.yaml @@ -1263,6 +1263,36 @@ artifacts: status: planned tags: [migration, track-e, v080, cli, solver] + - id: REQ-MIGRATION-008 + type: requirement + title: spar moves verify/enumerate variant scoping (rivet v1 contract) + description: > + System shall accept `--variant NAME` (implicit form; spar shells + out to `rivet resolve --variant NAME --format spar-context-json` + per docs/contracts/rivet-spar-variant-v1.md) and + `--variant-context PATH` (explicit form; PATH is `-` for stdin or + a filesystem path) on both `spar moves verify` and + `spar moves enumerate`. The two flags are mutually exclusive.
When + a variant is in play the move-oracle scopes its analysis to the + resolved-variant subset of HIR: components dropped by + keep_in_variant are unreachable as `--component` / `--to` targets + and are excluded from the candidate-target set in enumerate. + Resolution failures map to typed CLI errors — RivetNotFound + points at the explicit form, ComponentNotInVariant / + TargetNotInVariant identify dropped-by-variant inputs, and + VariantContextSchema surfaces the v1 reader's strict-version + refusal of v2+ blobs. The variant scope is non-mutating: + [`crate::variants_bridge::VariantScope`] wraps the SystemInstance + and exposes lookup-time accessors that hide dropped components + without touching the parsed model. JSON output adds top-level + `variant` and `feature_model_hash` fields; text output prefixes + the summary line with `(variant=NAME)`. Track E commit 6/8 of + the v0.8.0 migration design research, wiring the spar-variants + consumer crate (REQ-VARIANT-001) into the verify/enumerate + pipelines (REQ-MIGRATION-005, REQ-MIGRATION-006). + status: planned + tags: [migration, variants, track-e, track-b, v080, cli] + - id: REQ-NETWORK-004 type: requirement title: Typed network graph extraction from a SystemInstance diff --git a/artifacts/verification.yaml b/artifacts/verification.yaml index 9f2ae96..dca0a46 100644 --- a/artifacts/verification.yaml +++ b/artifacts/verification.yaml @@ -1626,6 +1626,53 @@ artifacts: - type: satisfies target: REQ-MIGRATION-004 + + - id: TEST-MOVES-VARIANT + type: feature + title: spar moves verify/enumerate variant integration tests + description: > + Integration tests in crates/spar-cli/tests/moves_variant.rs that + shell out to the `spar` binary and exercise the variant-aware + flags on `spar moves verify` and `spar moves enumerate` (Track E + commit 6/8).
Coverage: + (1) verify with --variant-context filters the analysis surface to + the kept subset and emits ok=true plus a top-level + `variant` field for the petrol-only fixture; + (2) --component pointing at a dropped-by-variant subcomponent + yields a "not part of variant NAME" diagnostic and + non-zero exit; + (3) enumerate with --variant-context excludes processors gated + on a missing feature from the candidate list; + (4) --variant-context accepts both a filesystem path and `-` + (stdin) per the v1 contract's CLI section; + (5) --variant NAME shells out to rivet via $RIVET_BIN / $PATH; + a test seam (SPAR_VARIANT_TEST_RIVET_OUTPUT env var) + short-circuits the shell-out so CI runners without rivet + can still cover the implicit-form code path end-to-end; + (6) --variant NAME with no rivet on PATH and no $RIVET_BIN + emits the documented diagnostic that points back to the + explicit form and the contract doc; + (7) any rivet_spar_context_version other than "1" is refused + with a clear error message, satisfying the contract's + strict-version-rejection clause; + (8) JSON output includes both `variant` and `feature_model_hash` + top-level fields when a variant context is active, supplying + the audit-trail metadata MCP / rivet downstream consume.
+ fields: + method: automated-test + steps: + - run: cargo test -p spar --test moves_variant + status: passing + tags: [migration, variants, track-e, track-b, v080, cli] + links: + - type: satisfies + target: REQ-MIGRATION-008 + - type: satisfies + target: REQ-VARIANT-001 + - type: satisfies + target: REQ-MIGRATION-005 + - type: satisfies + target: REQ-MIGRATION-006 + - id: TEST-SPAR-NETWORK-GRAPH type: feature title: Network graph extraction tests diff --git a/crates/spar-cli/Cargo.toml b/crates/spar-cli/Cargo.toml index f20bcb7..c4eab37 100644 --- a/crates/spar-cli/Cargo.toml +++ b/crates/spar-cli/Cargo.toml @@ -22,7 +22,9 @@ spar-sysml2.workspace = true spar-solver.workspace = true spar-render.workspace = true spar-codegen.workspace = true +spar-variants.workspace = true etch.workspace = true +la-arena.workspace = true rowan.workspace = true salsa.workspace = true serde.workspace = true @@ -32,6 +34,5 @@ serde_json = "1" toml.workspace = true [dev-dependencies] -la-arena.workspace = true rustc-hash = "2" proptest.workspace = true diff --git a/crates/spar-cli/src/main.rs b/crates/spar-cli/src/main.rs index 351fe85..b2920a2 100644 --- a/crates/spar-cli/src/main.rs +++ b/crates/spar-cli/src/main.rs @@ -4,6 +4,7 @@ mod lsp; mod moves; mod refactor; mod sarif; +mod variants_bridge; mod verify; use std::{env, fs, process}; diff --git a/crates/spar-cli/src/moves.rs b/crates/spar-cli/src/moves.rs index 72136c2..11c63ae 100644 --- a/crates/spar-cli/src/moves.rs +++ b/crates/spar-cli/src/moves.rs @@ -64,15 +64,20 @@ use std::collections::BTreeMap; use std::fs; +use std::io::Read; use std::process; +use std::sync::Arc; use serde::Serialize; use spar_analysis::{AnalysisDiagnostic, Severity}; use spar_hir_def::instance::{ComponentInstanceIdx, SystemInstance}; -use spar_hir_def::item_tree::ComponentCategory; +use spar_hir_def::item_tree::{ComponentCategory, ItemTree}; use spar_hir_def::{AllowedTargetsViolation, BindingOverlay, FrozenViolation, OverlayDiagnostic}; use 
spar_solver::enumerate::{CandidateRank, EnumerationObjective, rank_candidate}; +use spar_variants::{ContextError, VariantContext}; + +use crate::variants_bridge::{SourcePathMap, VariantScope}; /// Parsed CLI arguments for `spar moves verify`. /// @@ -80,7 +85,7 @@ /// the design-research-style clap struct in track-e-migration-research §6.3 /// without dragging clap into spar-cli (which still uses hand-rolled /// `args[i]` matching for every other subcommand). -#[derive(Debug)] +#[derive(Debug, Default)] pub struct VerifyArgs { /// Path(s) to the AADL model file(s) to load. pub model_files: Vec<String>, @@ -92,6 +97,15 @@ pub struct VerifyArgs { pub target: String, /// Output format: `text` (default) or `json`. pub format: String, + /// Implicit-form variant name. When set (and `variant_context` is + /// not), spar shells out to `rivet resolve --variant NAME + /// --format spar-context-json` per the v1 contract. Mutually + /// exclusive with [`Self::variant_context`]. + pub variant: Option<String>, + /// Explicit-form variant-context source. `Some("-")` reads stdin; + /// any other path is read from the filesystem. Mutually exclusive + /// with [`Self::variant`]. + pub variant_context: Option<String>, } /// All ways `spar moves verify` can fail before producing a report. @@ -122,6 +136,31 @@ pub enum MovesError { /// (`max-response`, `total-load`, `total-power`, `total-weight`, /// `balanced`). UnknownObjective(String), + /// `--variant` and `--variant-context` were both supplied. The v1 + /// contract specifies they are mutually exclusive. + VariantArgsConflict, + /// `--variant-context` could not be read from the named file or stdin. + VariantContextIo(String, std::io::Error), + /// The variant context blob failed schema validation. Wrapped from + /// [`spar_variants::ContextError`] so unknown-version refusal and + /// JSON-parse failures are reported with their original message.
+ VariantContextSchema(ContextError), + /// `--variant NAME` was supplied but rivet could not be located on + /// `$PATH` (and `$RIVET_BIN` was unset). Per the v1 contract we + /// point the user at the explicit form. + RivetNotFound, + /// rivet was located but exited non-zero. The captured stderr is + /// surfaced to the user. + RivetFailed { stderr: String, code: Option<i32> }, + /// rivet emitted output we could not capture or decode. + RivetIo(std::io::Error), + /// A user-supplied `--component` value resolves to a component that + /// the variant filter dropped. Per the contract — and the + /// commit-spec — the move-oracle scopes its analysis to the kept + /// subset only. + ComponentNotInVariant { name: String, variant: String }, + /// As above but for `--to`. + TargetNotInVariant { name: String, variant: String }, } impl std::fmt::Display for MovesError { @@ -150,6 +189,37 @@ "--objective {o} is not recognised (expected max-response | total-load | \ total-power | total-weight | balanced)", ), + MovesError::VariantArgsConflict => write!( + f, + "--variant and --variant-context are mutually exclusive (see docs/contracts/rivet-spar-variant-v1.md)", + ), + MovesError::VariantContextIo(path, err) => { + write!(f, "Cannot read variant context from {path}: {err}") + } + MovesError::VariantContextSchema(err) => { + write!(f, "Variant context: {err}") + } + MovesError::RivetNotFound => write!( + f, + "rivet not found on $PATH and $RIVET_BIN is unset; \ + either install rivet or use the explicit form: \ + `rivet resolve --variant NAME --format spar-context-json > ctx.json` \ + then pass `--variant-context ctx.json` \ + (see docs/contracts/rivet-spar-variant-v1.md)", + ), + MovesError::RivetFailed { stderr, code } => { + let suffix = code + .map(|c| format!("exit {c}")) + .unwrap_or_else(|| "killed".into()); + write!(f, "rivet resolve failed ({suffix}): {stderr}") + } + MovesError::RivetIo(err) => write!(f, "Cannot run rivet: {err}"), + 
MovesError::ComponentNotInVariant { name, variant } => { + write!(f, "--component {name} is not part of variant {variant}") + } + MovesError::TargetNotInVariant { name, variant } => { + write!(f, "--to {name} is not part of variant {variant}") + } } } } @@ -236,6 +306,19 @@ pub struct MoveVerifyReport { /// Per-pass diagnostic stream from the analysis suite, keyed by pass /// name. Empty when there were no analysis diagnostics for a pass. pub diagnostics_by_pass: BTreeMap>, + /// Resolved-variant name when the run was scoped by a rivet + /// variant context per the v1 contract, otherwise `None`. + /// Promoted to a top-level field so MCP consumers can route a + /// follow-up call back to the same variant without parsing the + /// audit trail. + #[serde(skip_serializing_if = "Option::is_none")] + pub variant: Option<String>, + /// Stable hash of the feature model that produced the variant + /// resolution. Used as a salsa cache key; surfaced here so audit + /// trails can pin the exact feature model the analysis was run + /// against. `None` when no variant was applied. + #[serde(skip_serializing_if = "Option::is_none")] + pub feature_model_hash: Option<String>, } /// Wire-format mirror of [`AnalysisDiagnostic`]. @@ -284,41 +367,25 @@ pub fn run_verify(args: VerifyArgs) -> Result<i32, MovesError> { // 1. Parse + instantiate. We mirror the path used by `spar analyze`, // short-circuiting the same way on parse errors so users see a // diagnostic rather than a stack trace.
- let db = spar_hir_def::HirDefDatabase::default(); - let mut trees = Vec::new(); - for file_path in &args.model_files { - let source = - fs::read_to_string(file_path).map_err(|e| MovesError::Io(file_path.clone(), e))?; - let parsed = spar_syntax::parse(&source); - if !parsed.ok() { - let mut msg = String::new(); - for err in parsed.errors() { - let (line, col) = spar_base_db::offset_to_line_col(&source, err.offset); - msg.push_str(&format!("{file_path}:{line}:{col}: {}\n", err.msg)); - } - return Err(MovesError::Parse(file_path.clone(), msg)); - } - let sf = spar_base_db::SourceFile::new(&db, file_path.clone(), source); - trees.push(spar_hir_def::file_item_tree(&db, sf)); - } - - let (pkg_name, type_name, impl_name) = parse_root_ref(&args.root)?; - let scope = spar_hir_def::GlobalScope::from_trees(trees); - let inst = SystemInstance::instantiate( - &scope, - &spar_hir_def::Name::new(&pkg_name), - &spar_hir_def::Name::new(&type_name), - &spar_hir_def::Name::new(&impl_name), - ); - if inst.component_count() == 0 { - return Err(MovesError::UnknownRoot(args.root.clone())); - } - - // 2. Resolve component + target FQNs. - let comp_idx = resolve_component(&inst, &args.component) - .ok_or_else(|| MovesError::UnknownComponent(args.component.clone()))?; - let target_idx = resolve_component(&inst, &args.target) - .ok_or_else(|| MovesError::UnknownTarget(args.target.clone()))?; + let (inst, source_paths) = parse_and_instantiate(&args.model_files, &args.root)?; + + // 2. Optional variant scope. Variant filtering happens *before* + // overlay validation so dropped components can never appear as + // `--component` or `--to` resolution targets — matching the + // contract's intersection semantics. The scope itself is non- + // mutating; the analysis suite still sees the full instance and + // is expected to remain monotonic w.r.t. the kept subset. 
+ let variant_ctx = + load_variant_context(args.variant.as_deref(), args.variant_context.as_deref())?; + let scope_holder = variant_ctx + .as_ref() + .map(|ctx| VariantScope::new(&inst, ctx, &source_paths)); + + // 3. Resolve component + target FQNs (variant-aware). + let comp_idx = resolve_component_in_scope(&inst, scope_holder.as_ref(), &args.component) + .ok_or_else(|| component_not_found_error(&args.component, scope_holder.as_ref()))?; + let target_idx = resolve_component_in_scope(&inst, scope_holder.as_ref(), &args.target) + .ok_or_else(|| target_not_found_error(&args.target, scope_holder.as_ref()))?; let target_cat = inst.component(target_idx).category; if target_cat != ComponentCategory::Processor && target_cat != ComponentCategory::VirtualProcessor @@ -329,12 +396,12 @@ pub fn run_verify(args: VerifyArgs) -> Result { }); } - // 3. Build the overlay and validate against the platform / application split. + // 4. Build the overlay and validate against the platform / application split. let mut overlay = BindingOverlay::new(); overlay.add_move(comp_idx, target_idx); let overlay_diags = overlay.validate(&inst); - // 4. Run the analysis suite. + // 5. Run the analysis suite. // // Per commit 3 scope: the suite reads the un-overlayed instance. // The overlay still surfaces its own constraint-layer @@ -346,16 +413,20 @@ pub fn run_verify(args: VerifyArgs) -> Result { // binding rather than the declared one. let analysis_diags = run_all_analyses(&inst); - // 5. Build the structured report. - let report = build_report(&inst, comp_idx, target_idx, &overlay_diags, &analysis_diags); + // 6. Build the structured report. + let mut report = build_report(&inst, comp_idx, target_idx, &overlay_diags, &analysis_diags); + if let Some(scope) = scope_holder.as_ref() { + report.variant = Some(scope.variant_name().to_string()); + report.feature_model_hash = Some(scope.feature_model_hash().to_string()); + } - // 6. Render. + // 7. Render. 
match args.format.as_str() { "json" => render_json(&report), _ => render_text(&report), } - // 7. Compute exit code. + // 8. Compute exit code. Ok(exit_code_for(&report, &overlay_diags)) } @@ -432,6 +503,8 @@ fn build_report( target: fqn(instance, target_idx), violations, diagnostics_by_pass: by_pass, + variant: None, + feature_model_hash: None, } } @@ -448,7 +521,14 @@ /// analysis output when an `AnalysisError` is reported. fn render_text(report: &MoveVerifyReport) { let status = if report.ok { "OK" } else { "FAIL" }; - println!("{} move {} -> {}", status, report.component, report.target); + let variant_prefix = match &report.variant { + Some(v) => format!("(variant={v}) "), + None => String::new(), + }; + println!( + "{}{} move {} -> {}", + variant_prefix, status, report.component, report.target, + ); if report.violations.is_empty() { println!(" no violations"); @@ -605,6 +685,238 @@ fn run_all_analyses(inst: &SystemInstance) -> Vec<AnalysisDiagnostic> { runner.run_all(inst) } +/// Parse the model files, build the global scope, instantiate the root, +/// and return the instance plus a `(package, type) -> path` map for the +/// variant-bridge layer. +/// +/// Centralised so verify and enumerate share the exact same parse + +/// instantiate pipeline; differences live in the variant scope and the +/// candidate-target enumeration respectively.
+fn parse_and_instantiate( + model_files: &[String], + root: &str, +) -> Result<(SystemInstance, SourcePathMap), MovesError> { + let db = spar_hir_def::HirDefDatabase::default(); + let mut trees = Vec::new(); + let mut path_pairs: Vec<(String, Arc<ItemTree>)> = Vec::new(); + for file_path in model_files { + let source = + fs::read_to_string(file_path).map_err(|e| MovesError::Io(file_path.clone(), e))?; + let parsed = spar_syntax::parse(&source); + if !parsed.ok() { + let mut msg = String::new(); + for err in parsed.errors() { + let (line, col) = spar_base_db::offset_to_line_col(&source, err.offset); + msg.push_str(&format!("{file_path}:{line}:{col}: {}\n", err.msg)); + } + return Err(MovesError::Parse(file_path.clone(), msg)); + } + let sf = spar_base_db::SourceFile::new(&db, file_path.clone(), source); + let tree = spar_hir_def::file_item_tree(&db, sf); + path_pairs.push((file_path.clone(), tree.clone())); + trees.push(tree); + } + + let (pkg_name, type_name, impl_name) = parse_root_ref(root)?; + let scope = spar_hir_def::GlobalScope::from_trees(trees); + let inst = SystemInstance::instantiate( + &scope, + &spar_hir_def::Name::new(&pkg_name), + &spar_hir_def::Name::new(&type_name), + &spar_hir_def::Name::new(&impl_name), + ); + if inst.component_count() == 0 { + return Err(MovesError::UnknownRoot(root.to_string())); + } + let source_paths = SourcePathMap::from_trees(&path_pairs); + Ok((inst, source_paths)) +} + +/// Resolve the variant-context source — either implicit (`--variant +/// NAME`, shells out to rivet) or explicit (`--variant-context PATH`, +/// where `PATH` is `-` for stdin or a filesystem path) — and return +/// the parsed [`VariantContext`]. +/// +/// `None` is returned when neither flag was supplied, in which case +/// the run is a no-op variant-pass-through (the legacy v0.7.x path). +/// +/// Mutual exclusion is enforced: passing both flags is a hard error +/// per the v1 contract's CLI section.
+/// +/// # Test override +/// +/// The `SPAR_VARIANT_TEST_RIVET_OUTPUT` environment variable, if set +/// when `--variant NAME` is in play, replaces the rivet shell-out with +/// a direct read of the variable's value. This is the seam the +/// integration test uses to exercise the implicit-form path without +/// requiring a real rivet binary on the test runner. +fn load_variant_context( + variant: Option<&str>, + variant_context: Option<&str>, +) -> Result<Option<VariantContext>, MovesError> { + match (variant, variant_context) { + (Some(_), Some(_)) => Err(MovesError::VariantArgsConflict), + (None, None) => Ok(None), + (None, Some(path)) => { + let blob = read_variant_context_file(path)?; + VariantContext::from_json(&blob) + .map(Some) + .map_err(MovesError::VariantContextSchema) + } + (Some(name), None) => { + // Test seam: the integration tests set this to the JSON + // payload they want spar to read. In production this is + // never set, so we fall through to shelling out. + if let Ok(payload) = std::env::var("SPAR_VARIANT_TEST_RIVET_OUTPUT") { + return VariantContext::from_json(&payload) + .map(Some) + .map_err(MovesError::VariantContextSchema); + } + let blob = shell_out_to_rivet(name)?; + VariantContext::from_json(&blob) + .map(Some) + .map_err(MovesError::VariantContextSchema) + } + } +} + +/// Read a `--variant-context` payload from the named source. +/// +/// The path `-` reads stdin to EOF. Any other path is a filesystem +/// path; failures are reported with a context-rich error. +fn read_variant_context_file(path: &str) -> Result<String, MovesError> { + if path == "-" { + let mut buf = String::new(); + std::io::stdin() + .read_to_string(&mut buf) + .map_err(|e| MovesError::VariantContextIo("<stdin>".to_string(), e))?; + Ok(buf) + } else { + fs::read_to_string(path).map_err(|e| MovesError::VariantContextIo(path.to_string(), e)) + } +} + +/// Shell out to `rivet resolve --variant NAME --format spar-context-json` +/// and return its stdout.
+/// +/// The rivet binary is located via `$RIVET_BIN` first, then via the +/// host `$PATH`. Failures map to typed [`MovesError`] variants so the +/// CLI surface emits actionable messages rather than raw OS errors. +fn shell_out_to_rivet(variant: &str) -> Result<String, MovesError> { + let bin = match std::env::var_os("RIVET_BIN") { + Some(v) => std::path::PathBuf::from(v), + None => match which_rivet() { + Some(p) => p, + None => return Err(MovesError::RivetNotFound), + }, + }; + + let output = process::Command::new(&bin) + .args([ + "resolve", + "--variant", + variant, + "--format", + "spar-context-json", + ]) + .output() + .map_err(|e| { + // `not found` -> RivetNotFound; everything else -> IO error + // bubble. + if e.kind() == std::io::ErrorKind::NotFound { + MovesError::RivetNotFound + } else { + MovesError::RivetIo(e) + } + })?; + + if !output.status.success() { + let stderr = String::from_utf8_lossy(&output.stderr).into_owned(); + return Err(MovesError::RivetFailed { + stderr, + code: output.status.code(), + }); + } + String::from_utf8(output.stdout).map_err(|e| { + MovesError::RivetIo(std::io::Error::new( + std::io::ErrorKind::InvalidData, + format!("rivet stdout was not UTF-8: {e}"), + )) + }) +} + +/// Best-effort lookup of `rivet` on `$PATH`. Returns `None` when no +/// `rivet` (or `rivet.exe`) is found in any `$PATH` entry. +fn which_rivet() -> Option<std::path::PathBuf> { + let path = std::env::var_os("PATH")?; + for dir in std::env::split_paths(&path) { + for name in ["rivet", "rivet.exe"] { + let candidate = dir.join(name); + if candidate.is_file() { + return Some(candidate); + } + } + } + None +} + +/// Variant-aware component resolution. +/// +/// When a [`VariantScope`] is supplied, the resolver first tries to +/// match against the kept subset only — so `--component X` resolving to +/// a dropped component is reported as "not in variant" rather than +/// silently snapping to a same-named-but-kept sibling.
When no scope is +/// in play we fall through to the v0.7.x [`resolve_component`] path +/// untouched. +fn resolve_component_in_scope( + inst: &SystemInstance, + scope: Option<&VariantScope<'_>>, + name: &str, +) -> Option<ComponentInstanceIdx> { + let raw = resolve_component(inst, name)?; + if let Some(scope) = scope { + if scope.is_kept(raw) { + Some(raw) + } else { + // Dropped by the variant. The resolver's caller turns this + // into a typed error (`ComponentNotInVariant` / + // `TargetNotInVariant`) — we just signal "not findable". + None + } + } else { + Some(raw) + } +} + +/// Lift a "name not found" failure into the right [`MovesError`] +/// variant: when a variant scope is active the dropped-by-variant case +/// gets a more specific diagnostic so users know to check the variant +/// definition rather than the model. +fn component_not_found_error(name: &str, scope: Option<&VariantScope<'_>>) -> MovesError { + match scope { + Some(scope) if resolve_component(scope.instance, name).is_some() => { + MovesError::ComponentNotInVariant { + name: name.to_string(), + variant: scope.variant_name().to_string(), + } + } + _ => MovesError::UnknownComponent(name.to_string()), + } +} + +/// As [`component_not_found_error`] but for `--to`. +fn target_not_found_error(name: &str, scope: Option<&VariantScope<'_>>) -> MovesError { + match scope { + Some(scope) if resolve_component(scope.instance, name).is_some() => { + MovesError::TargetNotInVariant { + name: name.to_string(), + variant: scope.variant_name().to_string(), + } + } + _ => MovesError::UnknownTarget(name.to_string()), + } +} + +// ── CLI dispatch helpers ───────────────────────────────────────────── /// Print top-level `spar moves` usage to stderr and exit non-zero.
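The code above leans on `crate::variants_bridge::VariantScope`, which is not part of this hunk. A minimal stand-alone sketch of the shape the moves code relies on; the field layout, the stand-in `SystemInstance`, and the plain `usize` index are assumptions, but the property illustrated is the documented one: the scope is non-mutating and hides dropped components at lookup time only:

```rust
use std::collections::HashSet;

// Stand-in for the real component arena; illustration only.
struct SystemInstance {
    names: Vec<String>,
}

// Hypothetical sketch of the non-mutating variant scope wrapper.
struct VariantScope<'a> {
    instance: &'a SystemInstance,
    kept: HashSet<usize>, // indices surviving keep_in_variant
    variant_name: String,
    feature_model_hash: String,
}

impl<'a> VariantScope<'a> {
    fn new(
        instance: &'a SystemInstance,
        kept: HashSet<usize>,
        variant_name: String,
        feature_model_hash: String,
    ) -> Self {
        VariantScope { instance, kept, variant_name, feature_model_hash }
    }
    fn is_kept(&self, idx: usize) -> bool {
        self.kept.contains(&idx)
    }
    fn variant_name(&self) -> &str {
        &self.variant_name
    }
    fn feature_model_hash(&self) -> &str {
        &self.feature_model_hash
    }
}

fn main() {
    let inst = SystemInstance {
        names: vec!["ecu".into(), "petrol_ctrl".into(), "diesel_ctrl".into()],
    };
    let scope = VariantScope::new(&inst, HashSet::from([0, 1]), "petrol".into(), "abc123".into());
    assert!(scope.is_kept(1));
    assert!(!scope.is_kept(2)); // diesel_ctrl dropped by the petrol variant
    assert_eq!(scope.instance.names.len(), 3); // underlying model untouched
    println!("ok");
}
```

This matches the resolver pattern above: resolution runs against the full instance, then `is_kept` decides whether the hit is visible, so a dropped component can be reported as "not in variant" rather than "unknown".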
@@ -624,15 +936,24 @@ pub fn print_verify_usage() {
     eprintln!(
         "Usage: spar moves verify --root Pkg::Type.Impl --component <name> --to <name> \\"
     );
-    eprintln!("              [--format text|json] ...");
+    eprintln!(
+        "              [--variant NAME | --variant-context PATH] [--format text|json] \\"
+    );
+    eprintln!("              ...");
     eprintln!();
     eprintln!("Options:");
-    eprintln!("  --root             Root system implementation in Pkg::Type.Impl form");
+    eprintln!("  --root             Root system implementation in Pkg::Type.Impl form");
+    eprintln!(
+        "  --component        FQN (or suffix / bare name) of the component to (hypothetically) move"
+    );
+    eprintln!("  --to               FQN (or suffix / bare name) of the target processor");
+    eprintln!("  --format           Output format: text (default) or json");
+    eprintln!(
+        "  --variant          Variant NAME; spar shells out to `rivet resolve` (see contract docs)"
+    );
     eprintln!(
-        "  --component        FQN (or suffix / bare name) of the component to (hypothetically) move"
+        "  --variant-context  PATH (or '-' for stdin) of an explicit rivet variant-context blob"
     );
-    eprintln!("  --to               FQN (or suffix / bare name) of the target processor");
-    eprintln!("  --format           Output format: text (default) or json");
     eprintln!();
     eprintln!("Exit codes:");
     eprintln!("  0   move is admissible (no violations, no analysis errors)");
@@ -664,6 +985,8 @@ fn cmd_moves_verify(args: &[String]) -> i32 {
     let mut component = None;
     let mut target = None;
     let mut format: Option<String> = None;
+    let mut variant: Option<String> = None;
+    let mut variant_context: Option<String> = None;
     let mut model_files = Vec::new();

     let mut i = 0;
@@ -701,6 +1024,22 @@ fn cmd_moves_verify(args: &[String]) -> i32 {
                 }
                 format = Some(args[i].clone());
             }
+            "--variant" => {
+                i += 1;
+                if i >= args.len() {
+                    eprintln!("--variant requires a value (variant name)");
+                    return 1;
+                }
+                variant = Some(args[i].clone());
+            }
+            "--variant-context" => {
+                i += 1;
+                if i >= args.len() {
+                    eprintln!("--variant-context requires a value (path or '-' for stdin)");
+                    return 1;
+                }
+                variant_context = Some(args[i].clone());
+            }
            "--help" | "-h" => {
                print_verify_usage();
                return 0;
@@ -738,6 +1077,8 @@ fn cmd_moves_verify(args: &[String]) -> i32 {
         component,
         target,
         format: format.unwrap_or_else(|| "text".to_string()),
+        variant,
+        variant_context,
     };

     match run_verify(args) {
@@ -787,6 +1128,11 @@ pub struct EnumerateArgs {
     /// `EnumerationObjective::max_response()` — equivalent to commit 4's
     /// slack-only ranking on single-CPU models.
     pub objective: EnumerationObjective,
+    /// Implicit-form variant name. See [`VerifyArgs::variant`].
+    pub variant: Option<String>,
+    /// Explicit-form variant-context source. See
+    /// [`VerifyArgs::variant_context`].
+    pub variant_context: Option<String>,
 }

 /// Per-candidate verification record produced by `spar moves enumerate`.
@@ -835,6 +1181,14 @@ pub struct MoveEnumerateReport {
     pub total: usize,
     /// Number of `ok=true` candidates (admissible moves).
     pub valid: usize,
+    /// Resolved-variant name when the run was scoped by a rivet
+    /// variant context per the v1 contract, otherwise `None`.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub variant: Option<String>,
+    /// Stable hash of the feature model that produced the variant
+    /// resolution. `None` when no variant was applied.
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub feature_model_hash: Option<String>,
 }

 /// Run `spar moves enumerate`, returning the desired process exit code.
@@ -872,49 +1226,36 @@ pub fn run_enumerate(args: EnumerateArgs) -> Result<i32, MovesError> {
     }

     // 1. Parse + instantiate.
- let db = spar_hir_def::HirDefDatabase::default(); - let mut trees = Vec::new(); - for file_path in &args.model_files { - let source = - fs::read_to_string(file_path).map_err(|e| MovesError::Io(file_path.clone(), e))?; - let parsed = spar_syntax::parse(&source); - if !parsed.ok() { - let mut msg = String::new(); - for err in parsed.errors() { - let (line, col) = spar_base_db::offset_to_line_col(&source, err.offset); - msg.push_str(&format!("{file_path}:{line}:{col}: {}\n", err.msg)); - } - return Err(MovesError::Parse(file_path.clone(), msg)); - } - let sf = spar_base_db::SourceFile::new(&db, file_path.clone(), source); - trees.push(spar_hir_def::file_item_tree(&db, sf)); + let (inst, source_paths) = parse_and_instantiate(&args.model_files, &args.root)?; + + // 2. Optional variant scope (see verify pipeline notes). + let variant_ctx = + load_variant_context(args.variant.as_deref(), args.variant_context.as_deref())?; + let scope_holder = variant_ctx + .as_ref() + .map(|ctx| VariantScope::new(&inst, ctx, &source_paths)); + + // 3. Resolve --component (variant-aware). + let comp_idx = resolve_component_in_scope(&inst, scope_holder.as_ref(), &args.component) + .ok_or_else(|| component_not_found_error(&args.component, scope_holder.as_ref()))?; + + // 4. Build the candidate-target set, intersecting with the kept + // subset when a variant is in play. + let mut candidates = candidate_targets(&inst, comp_idx, args.target_filter.as_deref()); + if let Some(scope) = scope_holder.as_ref() { + candidates.retain(|idx| scope.is_kept(*idx)); } - let (pkg_name, type_name, impl_name) = parse_root_ref(&args.root)?; - let scope = spar_hir_def::GlobalScope::from_trees(trees); - let inst = SystemInstance::instantiate( - &scope, - &spar_hir_def::Name::new(&pkg_name), - &spar_hir_def::Name::new(&type_name), - &spar_hir_def::Name::new(&impl_name), - ); - if inst.component_count() == 0 { - return Err(MovesError::UnknownRoot(args.root.clone())); - } - - // 2. Resolve --component. 
-    let comp_idx = resolve_component(&inst, &args.component)
-        .ok_or_else(|| MovesError::UnknownComponent(args.component.clone()))?;
-
-    // 3. Build the candidate-target set.
-    let candidates = candidate_targets(&inst, comp_idx, args.target_filter.as_deref());
-
-    // 4-5. Verify each candidate.
+    // 5-6. Verify each candidate.
     let mut report = MoveEnumerateReport {
         component: fqn(&inst, comp_idx),
         candidates: Vec::with_capacity(candidates.len()),
         total: 0,
         valid: 0,
+        variant: scope_holder.as_ref().map(|s| s.variant_name().to_string()),
+        feature_model_hash: scope_holder
+            .as_ref()
+            .map(|s| s.feature_model_hash().to_string()),
     };

     for target_idx in candidates {
@@ -1077,9 +1418,13 @@ fn render_enumerate_json(report: &MoveEnumerateReport) {
 /// rank value; `<miss>` is emitted for deadline-miss candidates so
 /// the column stays one token wide.
 fn render_enumerate_text(report: &MoveEnumerateReport) {
+    let variant_prefix = match &report.variant {
+        Some(v) => format!("(variant={v}) "),
+        None => String::new(),
+    };
     println!(
-        "Enumerate {} ({} candidates)",
-        report.component, report.total
+        "{}Enumerate {} ({} candidates)",
+        variant_prefix, report.component, report.total,
     );
     if report.candidates.is_empty() {
         println!("  (no candidate targets)");
@@ -1138,17 +1483,25 @@ fn summarise_violations(violations: &[Violation]) -> String {
 pub fn print_enumerate_usage() {
     eprintln!("Usage: spar moves enumerate --root Pkg::Type.Impl --component <name> \\");
     eprintln!("              [--target-filter <substr>] [--objective <obj>] \\");
-    eprintln!("              [--format text|json] ...");
+    eprintln!("              [--format text|json] \\");
+    eprintln!("              [--variant NAME | --variant-context PATH] \\");
+    eprintln!("              ...");
     eprintln!();
     eprintln!("Options:");
-    eprintln!("  --root             Root system implementation in Pkg::Type.Impl form");
-    eprintln!("  --component        FQN (or suffix / bare name) of the component to enumerate");
-    eprintln!("  --target-filter    Optional case-insensitive substring filter on candidate FQNs");
+    eprintln!("  --root             Root
system implementation in Pkg::Type.Impl form");
+    eprintln!("  --component        FQN (or suffix / bare name) of the component to enumerate");
+    eprintln!("  --target-filter    Optional case-insensitive substring filter on candidate FQNs");
+    eprintln!(
+        "  --objective        Ranking objective: max-response (default), total-load, total-power,"
+    );
+    eprintln!("                     total-weight, or balanced (all four equally weighted)");
+    eprintln!("  --format           Output format: text (default) or json");
+    eprintln!(
+        "  --variant          Variant NAME; spar shells out to `rivet resolve` (see contract docs)"
+    );
     eprintln!(
-        "  --objective        Ranking objective: max-response (default), total-load, total-power,"
+        "  --variant-context  PATH (or '-' for stdin) of an explicit rivet variant-context blob"
     );
-    eprintln!("                     total-weight, or balanced (all four equally weighted)");
-    eprintln!("  --format           Output format: text (default) or json");
     eprintln!();
     eprintln!("Candidate-target set:");
     eprintln!(
@@ -1191,6 +1544,8 @@ fn cmd_moves_enumerate(args: &[String]) -> i32 {
     let mut target_filter: Option<String> = None;
     let mut format: Option<String> = None;
     let mut objective_str: Option<String> = None;
+    let mut variant: Option<String> = None;
+    let mut variant_context: Option<String> = None;
     let mut model_files = Vec::new();

     let mut i = 0;
@@ -1239,6 +1594,22 @@ fn cmd_moves_enumerate(args: &[String]) -> i32 {
                 }
                 format = Some(args[i].clone());
             }
+            "--variant" => {
+                i += 1;
+                if i >= args.len() {
+                    eprintln!("--variant requires a value (variant name)");
+                    return 1;
+                }
+                variant = Some(args[i].clone());
+            }
+            "--variant-context" => {
+                i += 1;
+                if i >= args.len() {
+                    eprintln!("--variant-context requires a value (path or '-' for stdin)");
+                    return 1;
+                }
+                variant_context = Some(args[i].clone());
+            }
             "--help" | "-h" => {
                 print_enumerate_usage();
                 return 0;
             }
@@ -1284,6 +1655,8 @@ fn cmd_moves_enumerate(args: &[String]) -> i32 {
         target_filter,
         format: format.unwrap_or_else(|| "text".to_string()),
         objective,
+        variant,
+        variant_context,
     };

     match run_enumerate(args) {
diff
--git a/crates/spar-cli/src/variants_bridge.rs b/crates/spar-cli/src/variants_bridge.rs new file mode 100644 index 0000000..5ea2fa5 --- /dev/null +++ b/crates/spar-cli/src/variants_bridge.rs @@ -0,0 +1,442 @@ +// Some of the helpers below (e.g. `SourcePathMap::new`, +// `VariantScope::all_kept_components`) are part of the bridge's intended +// public surface even when the current `moves.rs` callers do not exercise +// every entry point. Allowing dead-code locally keeps the module self- +// contained and avoids littering the call sites with `#[allow(dead_code)]`. +#![allow(dead_code)] + +//! Bridge between [`spar_variants`] and [`spar_hir_def`] HIR types +//! (Track E commit 6/8, v0.8.0). +//! +//! Per the v1 contract — `docs/contracts/rivet-spar-variant-v1.md` +//! §"Binding resolution semantics" — variant filtering decides whether +//! each HIR item is kept under a resolved variant. The +//! [`HasBindingIdentity`] trait abstracts over the identity surface of +//! an HIR item: a project-relative source-file path and a fully-qualified +//! AADL symbol. This module provides the spar-side adapters. +//! +//! # What lives here +//! +//! - [`ComponentInstanceIdentity`] — a value-typed adapter that wraps a +//! [`ComponentInstanceIdx`] together with the [`SystemInstance`] and a +//! `(package, type) -> source_path` map so it can answer +//! `artifact_path()` and `fully_qualified_symbol()` queries. +//! - [`VariantScope`] — a non-mutating wrapper around `(SystemInstance, +//! VariantContext, source-path map)` that exposes overlay-aware +//! accessors: a "kept" predicate for component indices, an iterator +//! over kept components, and a helper that resolves a user-supplied +//! `--component`/`--to` against the kept subset. +//! +//! # Why a wrapper, not a re-built instance? +//! +//! The variant filter applies *before* overlay validation, but the +//! filter itself is non-mutating: it does not touch the parsed +//! [`SystemInstance`]. 
Doing so would invalidate every cached analysis
+//! result and force every downstream consumer to either snapshot the
+//! model or guard against mid-computation flips. [`VariantScope`] is the
+//! lookup-time projection that subsequent verify/enumerate code uses
+//! when deciding whether a component participates in the analysis
+//! surface.
+//!
+//! # Source-path mapping
+//!
+//! [`ComponentInstance`] does not carry a source-file path; the path is
+//! tracked by the CLI driver when it loads model files into the
+//! [`spar_hir_def::GlobalScope`]. The `(package, type) -> path` map is
+//! built by walking each loaded `ItemTree` and pairing every public /
+//! private classifier name with the source path the tree was parsed
+//! from. The variant filter only needs path info on the artifact-binding
+//! path, so a coarse package+type granularity is enough — items textually
+//! nested in a typed body inherit the path of their enclosing classifier
+//! (matching the contract's "file-scoped binding" semantics).
+
+use std::collections::HashMap;
+use std::sync::Arc;
+
+use spar_hir_def::instance::{ComponentInstanceIdx, SystemInstance};
+use spar_hir_def::item_tree::{ItemRef, ItemTree};
+use spar_variants::{HasBindingIdentity, VariantContext, keep_in_variant};
+
+/// Identity adapter for a single component instance.
+///
+/// Holds borrows of the [`SystemInstance`] and the source-path map so the
+/// trait methods can synthesise the artifact path and FQN on demand. The
+/// lifetime parameter `'a` ties the adapter to the parent scope's
+/// borrows; constructing one is essentially free (three references and an
+/// idx).
+pub struct ComponentInstanceIdentity<'a> {
+    instance: &'a SystemInstance,
+    idx: ComponentInstanceIdx,
+    source_paths: &'a SourcePathMap,
+    /// Cached lazily-computed FQN string. The trait returns `Option<String>`
+    /// (an owned value) so we materialise once and clone on each call.
+ fqn: String, +} + +impl<'a> ComponentInstanceIdentity<'a> { + /// Build an identity adapter for `idx`. + pub fn new( + instance: &'a SystemInstance, + idx: ComponentInstanceIdx, + source_paths: &'a SourcePathMap, + ) -> Self { + let fqn = component_fqn(instance, idx); + Self { + instance, + idx, + source_paths, + fqn, + } + } +} + +impl<'a> HasBindingIdentity for ComponentInstanceIdentity<'a> { + fn artifact_path(&self) -> Option<&str> { + let comp = self.instance.component(self.idx); + // Walk up to the nearest ancestor (or self) whose (package, type) + // pair exists in the map. The component-path FQN includes + // subcomponents whose classifiers are declared in different files + // from the enclosing implementation; the artifact-binding contract + // applies to "every item declared in the named source file", and + // a leaf subcomponent's "declaration site" is its own classifier's + // file. So we ask the most specific (innermost) classifier first. + let pkg = comp.package.as_str(); + let typ = comp.type_name.as_str(); + self.source_paths + .lookup(pkg, typ) + .map(std::string::String::as_str) + } + + fn fully_qualified_symbol(&self) -> Option { + Some(self.fqn.clone()) + } +} + +/// Compute the fully-qualified AADL symbol for a component instance, +/// matching the shape used by binding resolution: `Package::Type` for +/// type-only components, `Package::Type.Implementation` when the +/// implementation name is set. Subcomponents nested in a typed body get +/// `Package::Type.Impl.subname` / `…/sub2/…` form. +/// +/// The resulting string is matched against +/// [`spar_variants::Binding::Symbol`] entries via prefix-with-boundary +/// rules — see [`spar_variants::binding`] for the matcher. +pub fn component_fqn(instance: &SystemInstance, idx: ComponentInstanceIdx) -> String { + // Walk parent chain to the root, collecting subcomponent names. + // We then prepend the root's `Package::Type.Impl` form. 
+ let mut chain: Vec<&str> = Vec::new(); + let mut cur = Some(idx); + let mut root_idx = idx; + while let Some(ci) = cur { + let comp = instance.component(ci); + // The root has parent=None; we don't include its subcomponent + // name (which is the type+impl-derived "Type.Impl" tag) in the + // dotted nested chain — instead we anchor on the root's package. + if comp.parent.is_some() { + chain.push(comp.name.as_str()); + } + if comp.parent.is_none() { + root_idx = ci; + } + cur = comp.parent; + } + let root = instance.component(root_idx); + let mut s = format!("{}::{}", root.package.as_str(), root.type_name.as_str()); + if let Some(impl_name) = &root.impl_name { + s.push('.'); + s.push_str(impl_name.as_str()); + } + // chain is innermost-first; reverse to root→leaf. + for name in chain.into_iter().rev() { + s.push('.'); + s.push_str(name); + } + s +} + +/// `(package, type)` -> source-file path map. +/// +/// Keys are case-insensitive on the AADL identifier side because AADL +/// identifiers are case-insensitive. The value is the path the CLI +/// driver passed to `spar-base-db::SourceFile::new`. +#[derive(Debug, Default, Clone)] +pub struct SourcePathMap { + inner: HashMap<(String, String), String>, +} + +impl SourcePathMap { + /// Build an empty map. + pub fn new() -> Self { + Self::default() + } + + /// Walk a set of `(file_path, ItemTree)` pairs and register every + /// declared component type / impl with its file path. 
+    pub fn from_trees(pairs: &[(String, Arc<ItemTree>)]) -> Self {
+        let mut out = Self::default();
+        for (path, tree) in pairs {
+            for (_idx, pkg) in tree.packages.iter() {
+                let pkg_name = pkg.name.as_str().to_ascii_lowercase();
+                for item in pkg.public_items.iter().chain(pkg.private_items.iter()) {
+                    match item {
+                        ItemRef::ComponentType(ti) => {
+                            let t = &tree.component_types[*ti];
+                            out.inner.insert(
+                                (pkg_name.clone(), t.name.as_str().to_ascii_lowercase()),
+                                path.clone(),
+                            );
+                        }
+                        ItemRef::ComponentImpl(ii) => {
+                            let i = &tree.component_impls[*ii];
+                            // Implementations usually live in the same
+                            // file as their declaring type; record
+                            // both the type and the type.impl variant
+                            // so subcomponent path lookups can resolve
+                            // either granularity.
+                            out.inner.insert(
+                                (pkg_name.clone(), i.type_name.as_str().to_ascii_lowercase()),
+                                path.clone(),
+                            );
+                        }
+                        _ => {}
+                    }
+                }
+            }
+        }
+        out
+    }
+
+    /// Look up the path for `(package, type)`, case-insensitive on both.
+    pub fn lookup(&self, package: &str, type_name: &str) -> Option<&String> {
+        self.inner
+            .get(&(package.to_ascii_lowercase(), type_name.to_ascii_lowercase()))
+    }
+}
+
+/// Non-mutating projection of a [`SystemInstance`] under a
+/// [`VariantContext`].
+///
+/// Builds a `kept` bitset on construction by running [`keep_in_variant`]
+/// against every component, then exposes lookup-time accessors that
+/// return only the kept subset. The wrapper does not touch the
+/// underlying instance; the variant-aware caller in
+/// `crates/spar-cli/src/moves.rs` is responsible for routing every
+/// component-resolution and candidate-target lookup through it.
+pub struct VariantScope<'a> {
+    /// The underlying instance — borrowed, not owned. Analyses that
+    /// don't need variant-awareness still see the full surface.
+    pub instance: &'a SystemInstance,
+    /// The variant context the projection was computed from.
+    pub context: &'a VariantContext,
+    /// Source-path map used for artifact-binding resolution.
Stored on
+    /// the scope so callers can query `is_kept()` and resolve names
+    /// without rebuilding identity adapters.
+    pub source_paths: &'a SourcePathMap,
+    /// Per-component "kept" flag, indexed by [`ComponentInstanceIdx`].
+    /// We materialise the whole vector eagerly because the variant
+    /// filter is cheap (linear in #bindings × #features) and verifying
+    /// a single move can ask the predicate up to N² times when scanning
+    /// candidate targets.
+    kept: Vec<bool>,
+}
+
+impl<'a> VariantScope<'a> {
+    /// Construct a scope by filtering `instance` under `context`.
+    ///
+    /// Components dropped by the filter remain present in
+    /// `instance.components` (the wrapper is non-mutating); they are
+    /// reported as "not kept" by [`Self::is_kept`] and skipped by
+    /// [`Self::all_kept_components`].
+    pub fn new(
+        instance: &'a SystemInstance,
+        context: &'a VariantContext,
+        source_paths: &'a SourcePathMap,
+    ) -> Self {
+        let mut kept = Vec::with_capacity(instance.component_count());
+        for (idx, _) in instance.all_components() {
+            let id = ComponentInstanceIdentity::new(instance, idx, source_paths);
+            kept.push(keep_in_variant(&id, context));
+        }
+        Self {
+            instance,
+            context,
+            source_paths,
+            kept,
+        }
+    }
+
+    /// True iff component `idx` survives the variant filter.
+    pub fn is_kept(&self, idx: ComponentInstanceIdx) -> bool {
+        self.kept.get(arena_index(idx)).copied().unwrap_or(true)
+    }
+
+    /// Iterate every kept component as `(idx, &ComponentInstance)`.
+    pub fn all_kept_components(
+        &self,
+    ) -> impl Iterator<
+        Item = (
+            ComponentInstanceIdx,
+            &spar_hir_def::instance::ComponentInstance,
+        ),
+    > {
+        self.instance
+            .all_components()
+            .filter(move |(idx, _)| self.is_kept(*idx))
+    }
+
+    /// The variant name, for diagnostics and metadata.
+    pub fn variant_name(&self) -> &str {
+        &self.context.variant
+    }
+
+    /// The feature-model hash, for the `feature_model_hash` metadata
+    /// field on JSON output.
+    pub fn feature_model_hash(&self) -> &str {
+        &self.context.feature_model_hash
+    }
+}
+
+/// Translate a `la_arena::Idx<…>` back into its underlying integer.
+///
+/// `la_arena::Idx` doesn't expose a stable accessor for its raw u32
+/// across all paths the workspace uses, but for our purpose — indexing
+/// a `Vec<bool>` parallel to `instance.components` — it suffices to use
+/// the iteration order. We assume `all_components()` yields indices in
+/// raw-id order, which the arena guarantees.
+///
+/// This helper exists so [`VariantScope::is_kept`] can do an O(1)
+/// lookup. If the assumption ever breaks, we fall through to the
+/// "unknown idx → kept" default, which is conservative.
+fn arena_index(idx: la_arena::Idx<spar_hir_def::instance::ComponentInstance>) -> usize {
+    idx.into_raw().into_u32() as usize
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use spar_hir_def::{GlobalScope, HirDefDatabase, Name, file_item_tree};
+
+    fn parse_to_instance(
+        files: &[(&str, &str)],
+        pkg: &str,
+        ty: &str,
+        im: &str,
+    ) -> (SystemInstance, SourcePathMap) {
+        let db = HirDefDatabase::default();
+        let mut trees = Vec::new();
+        let mut pairs = Vec::new();
+        for (name, src) in files {
+            let sf = spar_base_db::SourceFile::new(&db, (*name).to_string(), (*src).to_string());
+            let tree = file_item_tree(&db, sf);
+            pairs.push(((*name).to_string(), tree.clone()));
+            trees.push(tree);
+        }
+        let scope = GlobalScope::from_trees(trees);
+        let inst =
+            SystemInstance::instantiate(&scope, &Name::new(pkg), &Name::new(ty), &Name::new(im));
+        let map = SourcePathMap::from_trees(&pairs);
+        (inst, map)
+    }
+
+    const TWO_FILE_MODEL_A: &str = "\
+package P
+public
+  processor CPU
+  end CPU;
+  thread Worker
+  end Worker;
+  process Proc
+  end Proc;
+  process implementation Proc.Impl
+    subcomponents
+      t1: thread Worker;
+  end Proc.Impl;
+  system Sys
+  end Sys;
+  system implementation Sys.Impl
+    subcomponents
+      cpu1: processor CPU;
+      app: process Proc.Impl;
+  end Sys.Impl;
+end P;
+";
+
+    #[test]
+    fn fqn_walks_root_to_leaf() {
+        let
(inst, _map) = parse_to_instance(&[("a.aadl", TWO_FILE_MODEL_A)], "P", "Sys", "Impl"); + // Root is "Sys.Impl". + let root = inst.root; + let r = component_fqn(&inst, root); + assert_eq!(r, "P::Sys.Impl"); + // Find a sub-leaf 't1' (declared inside Proc.Impl). + let t1 = inst + .all_components() + .find(|(_, c)| c.name.as_str() == "t1") + .unwrap() + .0; + // Path for t1 should be `P::Sys.Impl.app.t1`. + assert_eq!(component_fqn(&inst, t1), "P::Sys.Impl.app.t1"); + } + + #[test] + fn source_path_map_indexes_classifiers() { + let (_inst, map) = parse_to_instance(&[("a.aadl", TWO_FILE_MODEL_A)], "P", "Sys", "Impl"); + assert_eq!(map.lookup("P", "Sys").map(String::as_str), Some("a.aadl")); + assert_eq!(map.lookup("p", "sys").map(String::as_str), Some("a.aadl")); + assert_eq!( + map.lookup("P", "Worker").map(String::as_str), + Some("a.aadl") + ); + assert_eq!(map.lookup("Q", "Sys"), None); + } + + #[test] + fn variant_scope_drops_dropped_components() { + let (inst, map) = parse_to_instance(&[("a.aadl", TWO_FILE_MODEL_A)], "P", "Sys", "Impl"); + // Build a context that drops every item under `a.aadl` (its + // requires has a feature that's not active). + let blob = r#"{ + "rivet_spar_context_version": "1", + "variant": "noop", + "features": [], + "bindings": [ + { "artifact": "a.aadl", "requires": ["never_active"] } + ], + "feature_model_hash": "sha256:0", + "resolved_at": "2026-04-23T12:00:00Z", + "generated_by": "test" + }"#; + let ctx = VariantContext::from_json(blob).unwrap(); + let scope = VariantScope::new(&inst, &ctx, &map); + // Every component in inst comes from a.aadl, so all are + // dropped. + assert!( + scope.all_kept_components().count() == 0, + "expected every component dropped, got {} kept", + scope.all_kept_components().count(), + ); + } + + #[test] + fn variant_scope_keeps_unbound_components() { + // Empty bindings → every component is variant-independent + // infrastructure → all kept. 
+ let (inst, map) = parse_to_instance(&[("a.aadl", TWO_FILE_MODEL_A)], "P", "Sys", "Impl"); + let blob = r#"{ + "rivet_spar_context_version": "1", + "variant": "all", + "features": [], + "bindings": [], + "feature_model_hash": "sha256:0", + "resolved_at": "2026-04-23T12:00:00Z", + "generated_by": "test" + }"#; + let ctx = VariantContext::from_json(blob).unwrap(); + let scope = VariantScope::new(&inst, &ctx, &map); + assert_eq!( + scope.all_kept_components().count(), + inst.component_count(), + "expected all components kept", + ); + } +} diff --git a/crates/spar-cli/tests/moves_variant.rs b/crates/spar-cli/tests/moves_variant.rs new file mode 100644 index 0000000..08a1fc0 --- /dev/null +++ b/crates/spar-cli/tests/moves_variant.rs @@ -0,0 +1,541 @@ +//! Integration tests for `spar moves verify --variant` and +//! `spar moves enumerate --variant` (Track E commit 6/8). +//! +//! Each test builds a small inline AADL model, drops it (and an +//! accompanying rivet variant-context blob when needed) into a per-test +//! temp file, and shells out to the `spar` binary. Tests assert exit +//! codes, parse the JSON output to verify the variant-aware report +//! shape, and exercise both the explicit (`--variant-context PATH`) and +//! implicit (`--variant NAME`) forms of the contract's CLI. +//! +//! Test inventory (8 cases per the commit-6 spec): +//! +//! 1. `verify_with_variant_filters_components` +//! 2. `verify_unknown_component_in_variant_errors` +//! 3. `enumerate_with_variant_filters_targets` +//! 4. `verify_explicit_context_file_and_stdin` +//! 5. `verify_implicit_variant_shells_out_to_rivet` +//! 6. `verify_no_rivet_on_path_clear_error` +//! 7. `verify_unknown_version_blob_rejected` +//! 8. 
`verify_output_includes_variant_metadata` + +use std::env; +use std::fs; +use std::io::Write; +use std::path::PathBuf; +use std::process::{Command, Stdio}; + +fn spar() -> Command { + Command::new(env!("CARGO_BIN_EXE_spar")) +} + +/// Per-test temp file: process id + per-test tag, to avoid races between +/// parallel test runners on the same machine. +fn write_file(prefix: &str, tag: &str, ext: &str, body: &str) -> PathBuf { + let path = env::temp_dir().join(format!( + "spar_moves_variant_{}_{}_{}.{}", + prefix, + std::process::id(), + tag, + ext, + )); + fs::write(&path, body).expect("write temp file"); + path +} + +fn cleanup(path: &PathBuf) { + let _ = fs::remove_file(path); +} + +/// Two-cpu-two-thread model split across two declared classifier sources. +/// In a real rivet project the artifact split would be at the file +/// boundary; here we use a single AADL file and rely on symbol bindings +/// instead — keeping the test fixture lean while still exercising the +/// keep_in_variant pipeline end-to-end. +const PETROL_DIESEL_MODEL: &str = "\ +package Engines +public + processor CPU + end CPU; + + thread Petrol + properties + Spar_Migration::Mobile => true; + end Petrol; + + thread Diesel + properties + Spar_Migration::Mobile => true; + end Diesel; + + process Engine + end Engine; + + process implementation Engine.Impl + subcomponents + petrol: thread Petrol; + diesel: thread Diesel; + end Engine.Impl; + + system Sys + end Sys; + + system implementation Sys.Impl + subcomponents + cpu1: processor CPU; + cpu2: processor CPU; + eng: process Engine.Impl; + properties + Actual_Processor_Binding => (reference (cpu1)) applies to eng.petrol; + Actual_Processor_Binding => (reference (cpu1)) applies to eng.diesel; + end Sys.Impl; +end Engines; +"; + +/// Build a minimal v1 variant-context blob for the petrol-only variant. 
+/// +/// The binding gates `Engines::Sys.Impl.eng.diesel` (the FQN spar's +/// instance-path adapter reports for the diesel thread instance) on +/// the `engine_diesel` feature. The variant declares only +/// `engine_petrol` as active, so the diesel instance is dropped from +/// the analysis surface while the petrol instance is kept. +fn petrol_only_blob() -> &'static str { + r#"{ + "rivet_spar_context_version": "1", + "variant": "petrol_only", + "features": ["engine_petrol"], + "bindings": [ + { "symbol": "Engines::Sys.Impl.eng.diesel", "requires": ["engine_diesel"] } + ], + "feature_model_hash": "sha256:petrol", + "resolved_at": "2026-04-23T12:00:00Z", + "generated_by": "spar test harness" + }"# +} + +/// Build a v2 (unknown) blob for the strict-version-rejection test. +fn unknown_version_blob() -> &'static str { + r#"{ + "rivet_spar_context_version": "2", + "variant": "future", + "features": [], + "bindings": [], + "feature_model_hash": "sha256:0", + "resolved_at": "2026-04-23T12:00:00Z", + "generated_by": "future-emitter" + }"# +} + +// ── 1. verify_with_variant_filters_components ───────────────────────── + +#[test] +fn verify_with_variant_filters_components() { + // Petrol-only variant: the petrol thread is in the variant, so + // moving it to cpu2 succeeds; the move should pass without the + // diesel thread polluting the analysis surface. 
+ let model = write_file("filter", "model", "aadl", PETROL_DIESEL_MODEL); + let ctx = write_file("filter", "ctx", "json", petrol_only_blob()); + + let out = spar() + .args([ + "moves", + "verify", + "--root", + "Engines::Sys.Impl", + "--component", + "petrol", + "--to", + "cpu2", + "--format", + "json", + "--variant-context", + ]) + .arg(&ctx) + .arg(&model) + .output() + .expect("failed to run spar"); + + let stdout = String::from_utf8_lossy(&out.stdout); + let stderr = String::from_utf8_lossy(&out.stderr); + assert_eq!( + out.status.code(), + Some(0), + "expected ok exit, stdout: {stdout}\nstderr: {stderr}", + ); + let v: serde_json::Value = serde_json::from_str(&stdout).expect("stdout must be valid JSON"); + assert_eq!(v["ok"].as_bool(), Some(true)); + assert_eq!(v["variant"].as_str(), Some("petrol_only")); + cleanup(&model); + cleanup(&ctx); +} + +// ── 2. verify_unknown_component_in_variant_errors ───────────────────── + +#[test] +fn verify_unknown_component_in_variant_errors() { + // Diesel thread is dropped by the petrol variant — pointing + // --component at it must produce a clear "not part of variant" + // diagnostic, not a "no such component" one. + let model = write_file("notinvariant", "model", "aadl", PETROL_DIESEL_MODEL); + let ctx = write_file("notinvariant", "ctx", "json", petrol_only_blob()); + + let out = spar() + .args([ + "moves", + "verify", + "--root", + "Engines::Sys.Impl", + "--component", + "diesel", + "--to", + "cpu2", + "--variant-context", + ]) + .arg(&ctx) + .arg(&model) + .output() + .expect("failed to run spar"); + + let stderr = String::from_utf8_lossy(&out.stderr); + assert!( + out.status.code() != Some(0), + "expected non-zero exit when --component is dropped by variant", + ); + assert!( + stderr.contains("not part of variant") && stderr.contains("petrol_only"), + "stderr should explain the dropped-by-variant case; got: {stderr}", + ); + cleanup(&model); + cleanup(&ctx); +} + +// ── 3. 
enumerate_with_variant_filters_targets ───────────────────────── + +#[test] +fn enumerate_with_variant_filters_targets() { + // Build a model where the second processor is gated on a feature + // not present in the variant, so the variant filter must drop cpu2 + // from the candidate list. + let model_src = "\ +package Var +public + processor CPU + end CPU; + + thread Worker + properties + Spar_Migration::Mobile => true; + end Worker; + + process Proc + end Proc; + + process implementation Proc.Impl + subcomponents + t1: thread Worker; + end Proc.Impl; + + system Sys + end Sys; + + system implementation Sys.Impl + subcomponents + cpu1: processor CPU; + cpu2: processor CPU; + app: process Proc.Impl; + properties + Actual_Processor_Binding => (reference (cpu1)) applies to app.t1; + end Sys.Impl; +end Var; +"; + // The variant only activates `cpu_a`; the symbol binding gates the + // entire `Var::Sys.Impl.cpu2` instance on a missing feature so the + // filter drops it. + let blob = r#"{ + "rivet_spar_context_version": "1", + "variant": "single_cpu", + "features": ["cpu_a"], + "bindings": [ + { "symbol": "Var::Sys.Impl.cpu2", "requires": ["cpu_b"] } + ], + "feature_model_hash": "sha256:single", + "resolved_at": "2026-04-23T12:00:00Z", + "generated_by": "spar test harness" + }"#; + let model = write_file("enumvar", "model", "aadl", model_src); + let ctx = write_file("enumvar", "ctx", "json", blob); + + let out = spar() + .args([ + "moves", + "enumerate", + "--root", + "Var::Sys.Impl", + "--component", + "t1", + "--format", + "json", + "--variant-context", + ]) + .arg(&ctx) + .arg(&model) + .output() + .expect("failed to run spar"); + + let stdout = String::from_utf8_lossy(&out.stdout); + let v: serde_json::Value = serde_json::from_str(&stdout).expect("stdout must be valid JSON"); + let candidates = v["candidates"].as_array().expect("candidates array"); + // Only cpu1 should remain after the variant filter. 
+    let targets: Vec<String> = candidates
+        .iter()
+        .map(|c| c["target"].as_str().unwrap_or_default().to_string())
+        .collect();
+    assert!(
+        targets.iter().any(|t| t.ends_with("cpu1")),
+        "expected cpu1 in candidate list; got {targets:?}",
+    );
+    assert!(
+        !targets.iter().any(|t| t.ends_with("cpu2")),
+        "expected cpu2 dropped by variant; got {targets:?}",
+    );
+    assert_eq!(v["variant"].as_str(), Some("single_cpu"));
+    cleanup(&model);
+    cleanup(&ctx);
+}
+
+// ── 4. verify_explicit_context_file_and_stdin ─────────────────────────
+
+#[test]
+fn verify_explicit_context_file_and_stdin() {
+    let model = write_file("explicit", "model", "aadl", PETROL_DIESEL_MODEL);
+    let ctx = write_file("explicit", "ctx", "json", petrol_only_blob());
+
+    // (a) Explicit file path.
+    let out = spar()
+        .args([
+            "moves",
+            "verify",
+            "--root",
+            "Engines::Sys.Impl",
+            "--component",
+            "petrol",
+            "--to",
+            "cpu2",
+            "--format",
+            "json",
+            "--variant-context",
+        ])
+        .arg(&ctx)
+        .arg(&model)
+        .output()
+        .expect("failed to run spar");
+    let stdout = String::from_utf8_lossy(&out.stdout);
+    let v: serde_json::Value = serde_json::from_str(&stdout).expect("file form must produce JSON");
+    assert_eq!(v["variant"].as_str(), Some("petrol_only"));
+
+    // (b) Stdin (`-`).
+    let mut child = spar()
+        .args([
+            "moves",
+            "verify",
+            "--root",
+            "Engines::Sys.Impl",
+            "--component",
+            "petrol",
+            "--to",
+            "cpu2",
+            "--format",
+            "json",
+            "--variant-context",
+            "-",
+        ])
+        .arg(&model)
+        .stdin(Stdio::piped())
+        .stdout(Stdio::piped())
+        .stderr(Stdio::piped())
+        .spawn()
+        .expect("failed to spawn spar");
+    {
+        let stdin = child.stdin.as_mut().expect("stdin handle");
+        stdin
+            .write_all(petrol_only_blob().as_bytes())
+            .expect("write stdin");
+    }
+    let out2 = child.wait_with_output().expect("wait spar");
+    let stdout2 = String::from_utf8_lossy(&out2.stdout);
+    let v2: serde_json::Value =
+        serde_json::from_str(&stdout2).expect("stdin form must produce JSON");
+    assert_eq!(v2["variant"].as_str(), Some("petrol_only"));
+
+    cleanup(&model);
+    cleanup(&ctx);
+}
+
+// ── 5. verify_implicit_variant_shells_out_to_rivet ────────────────────
+
+#[test]
+fn verify_implicit_variant_shells_out_to_rivet() {
+    // The implicit form normally invokes `rivet resolve --variant <name>
+    // --format spar-context-json`. To exercise that code path
+    // without a real rivet binary on the test runner, the moves
+    // pipeline honours an `SPAR_VARIANT_TEST_RIVET_OUTPUT` env-var that
+    // short-circuits the shell-out and uses the variable's value as
+    // the JSON payload directly. Production builds never set this
+    // variable, so the seam is invisible to end users.
+    let model = write_file("implicit", "model", "aadl", PETROL_DIESEL_MODEL);
+
+    let out = spar()
+        .env("SPAR_VARIANT_TEST_RIVET_OUTPUT", petrol_only_blob())
+        .args([
+            "moves",
+            "verify",
+            "--root",
+            "Engines::Sys.Impl",
+            "--component",
+            "petrol",
+            "--to",
+            "cpu2",
+            "--format",
+            "json",
+            "--variant",
+            "petrol_only",
+        ])
+        .arg(&model)
+        .output()
+        .expect("failed to run spar");
+
+    let stdout = String::from_utf8_lossy(&out.stdout);
+    let v: serde_json::Value =
+        serde_json::from_str(&stdout).expect("implicit form must produce JSON");
+    assert_eq!(v["variant"].as_str(), Some("petrol_only"));
+    assert_eq!(v["ok"].as_bool(), Some(true));
+
+    cleanup(&model);
+}
+
+// ── 6. verify_no_rivet_on_path_clear_error ────────────────────────────
+
+#[test]
+fn verify_no_rivet_on_path_clear_error() {
+    // Force rivet-not-found by pointing PATH at an empty directory and
+    // unsetting RIVET_BIN. The implicit `--variant` form should then
+    // emit the documented diagnostic with a pointer back to the
+    // explicit form.
+    let model = write_file("norivet", "model", "aadl", PETROL_DIESEL_MODEL);
+    let empty_dir = env::temp_dir().join(format!("spar_norivet_{}", std::process::id()));
+    let _ = fs::create_dir_all(&empty_dir);
+
+    let out = spar()
+        .env_remove("SPAR_VARIANT_TEST_RIVET_OUTPUT")
+        .env_remove("RIVET_BIN")
+        .env("PATH", &empty_dir)
+        .args([
+            "moves",
+            "verify",
+            "--root",
+            "Engines::Sys.Impl",
+            "--component",
+            "petrol",
+            "--to",
+            "cpu2",
+            "--variant",
+            "petrol_only",
+        ])
+        .arg(&model)
+        .output()
+        .expect("failed to run spar");
+
+    assert!(
+        out.status.code() != Some(0),
+        "expected non-zero exit when rivet is unreachable",
+    );
+    let stderr = String::from_utf8_lossy(&out.stderr);
+    assert!(
+        stderr.contains("rivet") && stderr.contains("--variant-context"),
+        "stderr should mention rivet and point to the explicit form; got: {stderr}",
+    );
+
+    let _ = fs::remove_dir_all(&empty_dir);
+    cleanup(&model);
+}
+
+// ── 7. verify_unknown_version_blob_rejected ───────────────────────────
+
+#[test]
+fn verify_unknown_version_blob_rejected() {
+    // v1 readers must refuse v2 (or any non-"1") blobs per the
+    // contract's compatibility section.
+    let model = write_file("badver", "model", "aadl", PETROL_DIESEL_MODEL);
+    let ctx = write_file("badver", "ctx", "json", unknown_version_blob());
+
+    let out = spar()
+        .args([
+            "moves",
+            "verify",
+            "--root",
+            "Engines::Sys.Impl",
+            "--component",
+            "petrol",
+            "--to",
+            "cpu2",
+            "--variant-context",
+        ])
+        .arg(&ctx)
+        .arg(&model)
+        .output()
+        .expect("failed to run spar");
+
+    assert!(
+        out.status.code() != Some(0),
+        "expected non-zero exit on unknown version",
+    );
+    let stderr = String::from_utf8_lossy(&out.stderr);
+    assert!(
+        stderr.contains("rivet_spar_context_version") || stderr.contains("v1 only"),
+        "stderr should mention the version mismatch; got: {stderr}",
+    );
+
+    cleanup(&model);
+    cleanup(&ctx);
+}
+
+// ── 8. verify_output_includes_variant_metadata ────────────────────────
+
+#[test]
+fn verify_output_includes_variant_metadata() {
+    // Top-level JSON must include `variant` and `feature_model_hash`
+    // when a variant context is active; both fields are part of the
+    // audit trail consumed by MCP / rivet downstream.
+    let model = write_file("meta", "model", "aadl", PETROL_DIESEL_MODEL);
+    let ctx = write_file("meta", "ctx", "json", petrol_only_blob());
+
+    let out = spar()
+        .args([
+            "moves",
+            "verify",
+            "--root",
+            "Engines::Sys.Impl",
+            "--component",
+            "petrol",
+            "--to",
+            "cpu2",
+            "--format",
+            "json",
+            "--variant-context",
+        ])
+        .arg(&ctx)
+        .arg(&model)
+        .output()
+        .expect("failed to run spar");
+
+    let stdout = String::from_utf8_lossy(&out.stdout);
+    let v: serde_json::Value = serde_json::from_str(&stdout).expect("stdout must be valid JSON");
+    assert_eq!(v["variant"].as_str(), Some("petrol_only"));
+    assert_eq!(
+        v["feature_model_hash"].as_str(),
+        Some("sha256:petrol"),
+        "expected feature_model_hash in output; got {v}",
+    );
+    cleanup(&model);
+    cleanup(&ctx);
+}
diff --git a/vscode-spar/package.json b/vscode-spar/package.json
index 6aecc06..4b52cbb 100644
--- a/vscode-spar/package.json
+++ b/vscode-spar/package.json
@@ -3,7 +3,7 @@
   "displayName": "AADL (spar)",
   "description": "AADL v2.2 language support with live architecture visualization",
   "publisher": "pulseengine",
-  "version": "0.6.0",
+  "version": "0.7.1",
   "license": "MIT",
   "repository": {
     "type": "git",