diff --git a/.claude/CLAUDE.md b/.claude/CLAUDE.md index b9a04d7af..867340d42 100644 --- a/.claude/CLAUDE.md +++ b/.claude/CLAUDE.md @@ -13,6 +13,7 @@ Rust library for NP-hard problem reductions. Implements computational problems w - [write-rule-in-paper](skills/write-rule-in-paper/SKILL.md) -- Write or improve a reduction-rule entry in the Typst paper. Covers complexity citation, self-contained proof, detailed example, and verification. - [release](skills/release/SKILL.md) -- Create a new crate release. Determines version bump from diff, verifies tests/clippy, then runs `make release`. - [check-issue](skills/check-issue/SKILL.md) -- Quality gate for `[Rule]` and `[Model]` issues. Checks usefulness, non-triviality, correctness of literature, and writing quality. Posts structured report and adds failure labels. +- [check-rule-redundancy](skills/check-rule-redundancy/SKILL.md) -- Check if a reduction rule (source-target pair) is redundant, i.e., dominated by a composite path through other rules. - [meta-power](skills/meta-power/SKILL.md) -- Batch-resolve all open `[Model]` and `[Rule]` issues autonomously: plan, implement, review, fix CI, merge — in dependency order (models first). 
## Commands @@ -104,7 +105,7 @@ enum Direction { Maximize, Minimize } - `ReductionResult` provides `target_problem()` and `extract_solution()` - `Solver::find_best()` → `Option>` for optimization problems; `Solver::find_satisfying()` → `Option>` for `Metric = bool` - `BruteForce::find_all_best()` / `find_all_satisfying()` return `Vec>` for all optimal/satisfying solutions -- Graph types: HyperGraph, SimpleGraph, PlanarGraph, BipartiteGraph, UnitDiskGraph, KingsSubgraph, TriangularSubgraph +- Graph types: SimpleGraph, PlanarGraph, BipartiteGraph, UnitDiskGraph, KingsSubgraph, TriangularSubgraph - Weight types: `One` (unit weight marker), `i32`, `f64` — all implement `WeightElement` trait - `WeightElement` trait: `type Sum: NumericSize` + `fn to_sum(&self)` — converts weight to a summable numeric type - Weight management via inherent methods (`weights()`, `set_weights()`, `is_weighted()`), not traits diff --git a/.claude/skills/check-issue/SKILL.md b/.claude/skills/check-issue/SKILL.md index 1a74a7c5e..e2b8d30b4 100644 --- a/.claude/skills/check-issue/SKILL.md +++ b/.claude/skills/check-issue/SKILL.md @@ -89,7 +89,7 @@ Applies when the title contains `[Rule]`. Read the "Reduction Algorithm" section and flag as **Fail** if: - **Variable substitution only:** The mapping is a 1-to-1 relabeling (e.g., `x_i → 1 - x_i` for complement problems). A valid reduction must construct new constraints, objectives, or graph structure. -- **Subtype coercion:** The reduction merely casts to a more general type (e.g., SimpleGraph → HyperGraph) with no structural change to the problem instance. +- **Subtype coercion:** The reduction merely casts to a more general type within an existing variant hierarchy (e.g., UnitDiskGraph → SimpleGraph) with no structural change to the problem instance. - **Same-problem identity:** Reducing between variants of the same problem with no insight (e.g., `MIS` → `MIS` by setting all weights to 1). 
- **Insufficient detail:** The algorithm is a hand-wave ("map variables accordingly", "follows from the definition") — not a step-by-step procedure a programmer could implement. This is also a **Fail**. diff --git a/.claude/skills/check-rule-redundancy/SKILL.md b/.claude/skills/check-rule-redundancy/SKILL.md new file mode 100644 index 000000000..2a8009a32 --- /dev/null +++ b/.claude/skills/check-rule-redundancy/SKILL.md @@ -0,0 +1,149 @@ +--- +name: check-rule-redundancy +description: Use when checking if a reduction rule (source-target pair) is redundant — i.e., dominated by a composite path through other rules in the reduction graph +--- + +# Check Rule Redundancy + +Determines whether reduction rules are redundant (dominated by composite paths through the reduction graph). Can check a single source-target pair or all primitive rules at once. + +## Invocation + +``` +/check-rule-redundancy # Check ALL primitive rules +/check-rule-redundancy <source> <target> # Check a specific rule +``` + +Examples: +``` +/check-rule-redundancy +/check-rule-redundancy MIS ILP +/check-rule-redundancy MaximumIndependentSet QUBO +``` + +## Mode 1: Check All Rules (no arguments) + +When invoked without arguments, run the codebase's `find_dominated_rules` analysis test directly: + +```bash +cargo test test_find_dominated_rules_returns_known_set -- --nocapture 2>&1 +``` + +This runs the analysis from `src/rules/analysis.rs` which: +1. Enumerates every primitive reduction rule (direct edge) in the graph +2. For each, finds all alternative composite paths +3. Uses polynomial normalization and monomial-dominance to compare overheads +4. Reports dominated rules and unknown comparisons + +Always report rules with full variant-qualified endpoints, not just base names. +Use the same display style as `ReductionStep`, e.g. +`MaximumIndependentSet {graph: "SimpleGraph", weight: "One"} -> MaximumIndependentSet {graph: "KingsSubgraph", weight: "i32"}`.
+Base-name-only summaries are ambiguous and can hide cast-only paths. + +Parse the test output and report a summary: + +```markdown +## All Primitive Rules — Redundancy Report + +### Dominated Rules (N) + +| # | Rule | Dominating Path | +|---|------|-----------------| +| 1 | Source {variant...} -> Target {variant...} | A -> B -> C | + +### Unknown Comparisons (N) + +| # | Rule | Reason | +|---|------|--------| +| 1 | Source {variant...} -> Target {variant...} | expression comparison returned Unknown | + +### Allowed (acknowledged) dominated rules + +List the entries from the `allowed` set in `test_find_dominated_rules_returns_known_set` +(file: `src/unit_tests/rules/analysis.rs`), and note when that allow-list is keyed only by base names while the reported dominated rule is variant-specific. + +### Verdict + +- If test passes: all dominated rules are acknowledged in the allow-list. +- If test fails: report the unexpected dominated rule or stale allow-list entry. +``` + +## Mode 2: Check Single Rule (source target arguments) + +### Step 1: Resolve Problem Names + +Use MCP tools (`show_problem`) to validate and resolve aliases (MIS = MaximumIndependentSet, MVC = MinimumVertexCover, SAT = Satisfiability, etc.). + +### Step 2: Check if Rule Already Exists + +Use `show_problem` on the source and check its `reduces_to` array for a direct edge to the target. + +- **Direct edge exists**: Report "Direct rule `<source> -> <target>` already exists" and proceed to redundancy analysis (Step 3). - **No direct edge**: Report "No direct rule from `<source> -> <target>` exists yet." Then check if any path exists: - Use `find_path` MCP tool. - **Path exists**: Report the cheapest existing path and its overhead. This is the baseline the proposed new rule must beat to be non-redundant. - **No path exists**: Report "No path exists — a new rule would be novel (not redundant)." Stop here. + +### Step 3: Find All Paths + +Use `find_path` with `all: true` to get all paths between source and target.
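The polynomial-normalization and monomial-dominance comparison that `find_dominated_rules` applies (Mode 1, step 3) can be sketched as follows. This is an illustrative model only — `Monomial`, `Poly`, `monomial_dominated`, and `poly_dominates` are assumed names for this sketch, not the actual types in `src/rules/analysis.rs`:

```rust
use std::collections::BTreeMap;

/// A monomial such as `num_vertices^2 * num_edges`, as a map from
/// variable name to exponent. (Illustrative type, not the crate's API.)
type Monomial = BTreeMap<String, u32>;

/// A normalized polynomial overhead: a sum of monomials.
/// Coefficients are ignored here because the comparison is asymptotic.
type Poly = Vec<Monomial>;

/// `a` grows no faster than `b` if every variable's exponent in `a`
/// is at most its exponent in `b`.
fn monomial_dominated(a: &Monomial, b: &Monomial) -> bool {
    a.iter()
        .all(|(var, &ea)| ea <= b.get(var).copied().unwrap_or(0))
}

/// A composite path dominates a direct rule on one overhead field if
/// every monomial of the composite's expression is dominated by some
/// monomial of the direct rule's expression.
fn poly_dominates(composite: &Poly, direct: &Poly) -> bool {
    composite
        .iter()
        .all(|m| direct.iter().any(|n| monomial_dominated(m, n)))
}

/// Helper to build a monomial from (variable, exponent) pairs.
fn mono(pairs: &[(&str, u32)]) -> Monomial {
    pairs.iter().map(|&(v, e)| (v.to_string(), e)).collect()
}
```

Under this model, a composite overhead of `num_vertices` dominates a direct overhead of `num_vertices^2` but not vice versa, and expressions over disjoint variables (e.g. `num_vertices` vs `num_clauses`) dominate in neither direction — the cases Mode 1 reports as unknown comparisons.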
+ +### Step 4: Compare Overheads + +For each composite path (length > 1 step): + +1. Extract the **overall overhead** from the path result +2. Extract the **direct rule's overhead** from the single-step path +3. Compare field by field: + - For polynomial expressions: compare degree — lower degree means the composite is better + - For equal-degree polynomials: compare leading coefficients + - For non-polynomial (exp, log): report as "Unknown — manual review needed" + +**Dominance definition:** A composite path **dominates** the direct rule if, on every common overhead field, the composite's expression has equal or smaller asymptotic growth. + +### Step 5: Report Results + +Output a structured report: + +```markdown +## Redundancy Check: <source> -> <target> + +### Direct Rule +- Rule: `Source {variant...} -> Target {variant...}` +- Overhead: [field = expr, ...] + +### Composite Paths Found: N + +| # | Path | Steps | Overhead | Comparison | +|---|------|-------|----------|------------| +| 1 | A -> B -> C | 2 | field = expr | Dominates / Worse / Unknown | + +### Verdict + +- **Redundant**: At least one composite path dominates the direct rule +- **Not Redundant**: No composite path dominates the direct rule +- **Inconclusive**: Some paths have Unknown comparison (non-polynomial overhead) + +### Recommendation + +If redundant: +> The direct rule `Source {variant...} -> Target {variant...}` is dominated by the composite path `[path]`. +> Consider removing it unless it provides value for: +> - Simpler solution extraction (fewer intermediate steps) +> - Educational/documentation clarity +> - Better numerical behavior in practice + +If not redundant: +> The direct rule `Source {variant...} -> Target {variant...}` is not dominated by any composite path. +> It provides an overhead that no composite of existing reductions achieves.
+``` + +## Notes + +- "Equal overhead" does not necessarily mean the rule should be removed — direct rules have practical advantages (simpler extraction, fewer steps) +- The analysis uses asymptotic comparison (big-O), so constant factors are ignored +- This means the check can produce false alarms, especially when overhead metadata keeps only leading terms or when a long composite path is asymptotically comparable but practically much worse +- Treat "dominated" as "potentially redundant, requires manual review" unless the composite path is also clearly preferable structurally +- When overhead expressions involve variables from different problems (e.g., `num_vertices` vs `num_clauses`), comparison may not be meaningful — report as Unknown +- The ground truth for what the codebase considers dominated is `src/rules/analysis.rs` (`find_dominated_rules`) with the allow-list in `src/unit_tests/rules/analysis.rs` (`test_find_dominated_rules_returns_known_set`) diff --git a/docs/paper/reductions.typ b/docs/paper/reductions.typ index 975d74cb0..915878827 100644 --- a/docs/paper/reductions.typ +++ b/docs/paper/reductions.typ @@ -585,7 +585,7 @@ Equivalent to the Ising model via the linear substitution $s_i = 2x_i - 1$. The ] #problem-def("ILP")[ - Given $n$ integer variables $bold(x) in ZZ^n$, constraint matrix $A in RR^(m times n)$, bounds $bold(b) in RR^m$, and objective $bold(c) in RR^n$, find $bold(x)$ minimizing $bold(c)^top bold(x)$ subject to $A bold(x) <= bold(b)$ and variable bounds. + Given $n$ variables $bold(x)$ over a domain $cal(D)$ (binary $cal(D) = {0,1}$ or integer $cal(D) = ZZ_(>=0)$), constraint matrix $A in RR^(m times n)$, bounds $bold(b) in RR^m$, and objective $bold(c) in RR^n$, find $bold(x) in cal(D)^n$ minimizing $bold(c)^top bold(x)$ subject to $A bold(x) <= bold(b)$. ][ Integer Linear Programming is a universal modeling framework: virtually every NP-hard combinatorial optimization problem admits an ILP formulation. 
Relaxing integrality to $bold(x) in RR^n$ yields a linear program solvable in polynomial time, forming the basis of branch-and-bound solvers. When the number of integer variables $n$ is fixed, ILP is solvable in polynomial time by Lenstra's algorithm @lenstra1983 using the geometry of numbers, making it fixed-parameter tractable in $n$. The best known general algorithm achieves $O^*(n^n)$ via an FPT algorithm based on lattice techniques @dadush2012. @@ -1032,49 +1032,6 @@ The _penalty method_ @glover2019 @lucas2014 converts a constrained optimization $ f(bold(x)) = "obj"(bold(x)) + P sum_k g_k (bold(x))^2 $ where $P$ is a penalty weight large enough that any constraint violation costs more than the entire objective range. Since $g_k (bold(x))^2 >= 0$ with equality iff $g_k (bold(x)) = 0$, minimizers of $f$ are feasible and optimal for the original problem. Because binary variables satisfy $x_i^2 = x_i$, the resulting $f$ is a quadratic in $bold(x)$, i.e.\ a QUBO. -#let mis_qubo = load-example("maximumindependentset_to_qubo") -#let mis_qubo_r = load-results("maximumindependentset_to_qubo") -#reduction-rule("MaximumIndependentSet", "QUBO", - example: true, - example-caption: [IS on the Petersen graph ($n = 10$) to QUBO], - extra: [ - *Source edges:* $= {#mis_qubo.source.instance.edges.map(e => $(#e.at(0), #e.at(1))$).join(", ")}$ \ - *QUBO matrix* ($Q in RR^(#mis_qubo.target.instance.num_vars times #mis_qubo.target.instance.num_vars)$): - $ Q = #math.mat(..mis_qubo.target.instance.matrix.map(row => row.map(v => { - let r = calc.round(v, digits: 0) - [#r] - }))) $ - *Optimal IS* (size #mis_qubo_r.solutions.at(0).source_config.filter(x => x == 1).len()): - #mis_qubo_r.solutions.map(sol => { - let verts = sol.source_config.enumerate().filter(((i, x)) => x == 1).map(((i, x)) => str(i)) - $\{#verts.join(", ")\}$ - }).join(", ") - ], -)[ - An independent set selects vertices with no two adjacent. 
Each vertex $i$ gets a binary variable $x_i in {0,1}$ indicating selection, and the objective $sum_i w_i x_i$ rewards large sets. The adjacency constraint $x_i x_j = 0$ for each edge is naturally quadratic, so the penalty method directly yields a QUBO: diagonal entries reward vertex selection, while off-diagonal entries penalize adjacent pairs with a weight large enough to make any edge violation costlier than selecting all vertices. -][ - _Construction._ The IS objective is: maximize $sum_i w_i x_i$ subject to $x_i x_j = 0$ for $(i,j) in E$. Applying the penalty method (@sec:penalty-method): - $ f(bold(x)) = -sum_i w_i x_i + P sum_((i,j) in E) x_i x_j $ - with $P = 1 + sum_i w_i$. Reading off the QUBO coefficients: diagonal $Q_(i i) = -w_i$ (linear reward for selection), off-diagonal $Q_(i j) = P$ for edges $i < j$ (quadratic penalty for adjacency). - - _Correctness._ ($arrow.r.double$) If $bold(x)$ encodes a maximum-weight IS $S^*$, then all penalty terms vanish ($x_i x_j = 0$ for all edges), and $f(bold(x)) = -sum_(i in S^*) w_i$. Any non-IS assignment activates at least one penalty $P > sum_i w_i$, yielding $f > 0 >= f(bold(x))$. ($arrow.l.double$) Among feasible assignments (independent sets), the penalty terms vanish and $f(bold(x)) = -sum_(i in S) w_i$, minimized exactly when $S$ is a maximum-weight IS. Thus QUBO minimizers correspond to maximum-weight independent sets. - - _Solution extraction._ Return $bold(x)$ directly — each $x_i = 1$ indicates vertex $i$ is in the IS. -] - -#reduction-rule("MinimumVertexCover", "QUBO")[ - A vertex cover must include at least one endpoint of every edge. The covering constraint for edge $(i,j)$ — that $x_i = x_j = 0$ is forbidden — translates to the quadratic penalty $(1-x_i)(1-x_j)$, which equals 1 exactly when neither endpoint is selected. 
The penalty method combines the weight-minimization objective with these coverage penalties into a single QUBO, where diagonal entries reflect the trade-off between vertex cost and coverage benefit, and off-diagonal entries penalize uncovered edges. -][ - _Construction._ The VC objective is: minimize $sum_i w_i x_i$ subject to $x_i + x_j >= 1$ for $(i,j) in E$. Applying the penalty method (@sec:penalty-method), the constraint $x_i + x_j >= 1$ is violated iff $x_i = x_j = 0$, with penalty $(1 - x_i)(1 - x_j)$: - $ f(bold(x)) = sum_i w_i x_i + P sum_((i,j) in E) (1 - x_i)(1 - x_j) $ - with $P = 1 + sum_i w_i$. Expanding: $(1 - x_i)(1 - x_j) = 1 - x_i - x_j + x_i x_j$. - Summing over all edges, each vertex $i$ appears in $"deg"(i)$ penalty terms. The QUBO coefficients are: diagonal $Q_(i i) = w_i - P dot "deg"(i)$ (objective cost minus linear penalty for coverage), off-diagonal $Q_(i j) = P$ for edges (quadratic penalty). The constant $P |E|$ does not affect the minimizer. - - _Correctness._ ($arrow.r.double$) If $bold(x)$ encodes a minimum vertex cover, every edge has at least one endpoint selected, so all penalty terms $(1-x_i)(1-x_j) = 0$ vanish and $f(bold(x)) = sum_(i in C) w_i$. ($arrow.l.double$) If some edge $(i,j)$ is uncovered ($x_i = x_j = 0$), the penalty $P > sum_i w_i$ exceeds the entire objective range, so $bold(x)$ cannot be a minimizer. Among valid covers (all penalties zero), $f(bold(x)) = sum_(i in C) w_i$ up to a constant, minimized exactly when $C$ is a minimum-weight vertex cover. - - _Solution extraction._ Return $bold(x)$ directly — each $x_i = 1$ indicates vertex $i$ is in the cover. 
-] - #let kc_qubo = load-example("kcoloring_to_qubo") #let kc_qubo_r = load-results("kcoloring_to_qubo") #let kc_qubo_sol = kc_qubo_r.solutions.at(0) @@ -1517,24 +1474,14 @@ where $P$ is a penalty weight large enough that any constraint violation costs m The following reductions to Integer Linear Programming are straightforward formulations where problem constraints map directly to linear inequalities. -#reduction-rule("MaximumIndependentSet", "ILP")[ - Each vertex is either selected or not, and each edge forbids selecting both endpoints -- a constraint that is directly linear in binary indicator variables. -][ - _Construction._ Variables: $x_v in {0, 1}$ for each $v in V$. Constraints: $x_u + x_v <= 1$ for each $(u, v) in E$. Objective: maximize $sum_v w_v x_v$. - - _Correctness._ ($arrow.r.double$) An independent set has no two adjacent vertices selected, so all edge constraints hold. ($arrow.l.double$) Any feasible binary solution selects no two adjacent vertices, forming an independent set; the objective maximizes total weight. - - _Solution extraction._ $S = {v : x_v = 1}$. -] - -#reduction-rule("MinimumVertexCover", "ILP")[ - Every edge must be covered by at least one endpoint -- a lower-bound constraint that is directly linear in binary vertex indicators. +#reduction-rule("MaximumSetPacking", "ILP")[ + Each set is either selected or not, and every universe element may belong to at most one selected set -- an element-based constraint that is directly linear in binary indicator variables. ][ - _Construction._ Variables: $x_v in {0, 1}$ for each $v in V$. Constraints: $x_u + x_v >= 1$ for each $(u, v) in E$. Objective: minimize $sum_v w_v x_v$. + _Construction._ Variables: $x_i in {0, 1}$ for each set $S_i in cal(S)$. Constraints: $sum_(S_i in.rev e) x_i <= 1$ for each element $e in U$. Objective: maximize $sum_i w_i x_i$. - _Correctness._ ($arrow.r.double$) A vertex cover includes at least one endpoint of every edge, satisfying all constraints. 
($arrow.l.double$) Any feasible solution covers every edge; the objective minimizes total weight. + _Correctness._ ($arrow.r.double$) A valid packing chooses pairwise disjoint sets, so each element is covered at most once. ($arrow.l.double$) Any feasible binary solution covers each element at most once, hence the chosen sets are pairwise disjoint; the objective maximizes total weight. - _Solution extraction._ $C = {v : x_v = 1}$. + _Solution extraction._ $cal(P) = {S_i : x_i = 1}$. ] #reduction-rule("MaximumMatching", "ILP")[ @@ -1547,16 +1494,6 @@ The following reductions to Integer Linear Programming are straightforward formu _Solution extraction._ $M = {e : x_e = 1}$. ] -#reduction-rule("MaximumSetPacking", "ILP")[ - Two sets conflict if they share a universe element, and at most one of each conflicting pair may be selected -- the same exclusion structure as independent set on the intersection graph, expressible as pairwise linear constraints. -][ - _Construction._ Variables: $x_i in {0, 1}$ for each $S_i in cal(S)$. Constraints: $x_i + x_j <= 1$ for each overlapping pair $S_i, S_j in cal(S)$ with $S_i inter S_j != emptyset$. Objective: maximize $sum_i w_i x_i$. - - _Correctness._ ($arrow.r.double$) A packing has mutually disjoint sets, so no overlapping pair is co-selected. ($arrow.l.double$) Any feasible solution selects only mutually disjoint sets; the objective maximizes total weight. - - _Solution extraction._ $cal(P) = {S_i : x_i = 1}$. -] - #reduction-rule("MinimumSetCovering", "ILP")[ Every universe element must be covered by at least one selected set -- a lower-bound constraint on the sum of indicators for sets containing that element, which is directly linear. 
][ @@ -1724,13 +1661,13 @@ The following table shows concrete variable overhead for example instances, gene "minimumvertexcover_to_minimumsetcovering", "maxcut_to_spinglass", "spinglass_to_maxcut", "spinglass_to_qubo", "qubo_to_spinglass", - "maximumindependentset_to_qubo", "minimumvertexcover_to_qubo", "kcoloring_to_qubo", + "maximumindependentset_to_qubo", "kcoloring_to_qubo", "maximumsetpacking_to_qubo", "ksatisfiability_to_qubo", "ilp_to_qubo", "satisfiability_to_maximumindependentset", "satisfiability_to_kcoloring", "satisfiability_to_minimumdominatingset", "satisfiability_to_ksatisfiability", "circuitsat_to_spinglass", "factoring_to_circuitsat", - "maximumindependentset_to_ilp", "minimumvertexcover_to_ilp", "maximummatching_to_ilp", + "maximumsetpacking_to_ilp", "maximummatching_to_ilp", "kcoloring_to_ilp", "factoring_to_ilp", - "maximumsetpacking_to_ilp", "minimumsetcovering_to_ilp", + "minimumsetcovering_to_ilp", "minimumdominatingset_to_ilp", "maximumclique_to_ilp", "travelingsalesman_to_ilp", ) diff --git a/docs/src/cli.md b/docs/src/cli.md index af1c62372..c3e2f2e4f 100644 --- a/docs/src/cli.md +++ b/docs/src/cli.md @@ -125,9 +125,8 @@ Size fields (2): num_edges Reduces to (10): + MaximumIndependentSet {graph=SimpleGraph, weight=i32} → MaximumSetPacking ... MaximumIndependentSet {graph=SimpleGraph, weight=i32} → MinimumVertexCover ... - MaximumIndependentSet {graph=SimpleGraph, weight=i32} → ILP (default) - MaximumIndependentSet {graph=SimpleGraph, weight=i32} → QUBO {weight=f64} ... 
Reduces from (9): @@ -145,15 +144,16 @@ $ pred to MIS --hops 2 MaximumIndependentSet {graph=SimpleGraph, weight=i32} — 2-hop neighbors (outgoing) MaximumIndependentSet {graph=SimpleGraph, weight=i32} -├── ILP (default) +├── MaximumSetPacking {weight=i32} +│ ├── ILP (default) +│ ├── MaximumIndependentSet {graph=SimpleGraph, weight=i32} +│ └── QUBO {weight=f64} ├── MaximumIndependentSet {graph=KingsSubgraph, weight=i32} │ └── MaximumIndependentSet {graph=SimpleGraph, weight=i32} ├── MaximumIndependentSet {graph=TriangularSubgraph, weight=i32} │ └── MaximumIndependentSet {graph=SimpleGraph, weight=i32} ├── MinimumVertexCover {graph=SimpleGraph, weight=i32} -│ ├── ILP (default) │ └── MaximumIndependentSet {graph=SimpleGraph, weight=i32} -└── QUBO {weight=f64} 5 reachable problems in 2 hops ``` @@ -180,9 +180,20 @@ Find the cheapest chain of reductions between two problems: ```bash $ pred path MIS QUBO -Path (1 steps): MaximumIndependentSet ... → QUBO {weight: "f64"} +Path (3 steps): MaximumIndependentSet/SimpleGraph/i32 → MaximumSetPacking/i32 → QUBO/f64 + + Step 1: MaximumIndependentSet/SimpleGraph/i32 → MaximumSetPacking/i32 + num_sets = num_vertices + universe_size = num_edges + + Step 2: MaximumSetPacking/i32 → MaximumSetPacking/f64 + num_sets = num_sets + universe_size = universe_size + + Step 3: MaximumSetPacking/f64 → QUBO/f64 + num_vars = num_sets - Step 1: MaximumIndependentSet {graph: "SimpleGraph", weight: "i32"} → QUBO {weight: "f64"} + Overall: num_vars = num_vertices ``` diff --git a/docs/src/design.md b/docs/src/design.md index c5a523b57..77110650f 100644 --- a/docs/src/design.md +++ b/docs/src/design.md @@ -66,7 +66,7 @@ A single problem name like `MaximumIndependentSet` can have multiple **variants* Variant types fall into three categories: -- **Graph type** — `HyperGraph` (root), `SimpleGraph`, `PlanarGraph`, `BipartiteGraph`, `UnitDiskGraph`, `KingsSubgraph`, `TriangularSubgraph`. 
+- **Graph type** — `SimpleGraph` (root), `PlanarGraph`, `BipartiteGraph`, `UnitDiskGraph`, `KingsSubgraph`, `TriangularSubgraph`. - **Weight type** — `One` (unweighted), `i32`, `f64`. - **K value** — e.g., `K3` for 3-SAT, `KN` for arbitrary K. @@ -111,14 +111,7 @@ The `impl_variant_param!` macro implements `VariantParam` (and optionally `CastT ```rust,ignore // Root type (no parent): -impl_variant_param!(HyperGraph, "graph"); - -// Type with parent (cast closure required): -impl_variant_param!(SimpleGraph, "graph", parent: HyperGraph, - cast: |g| { - let edges: Vec<Vec<usize>> = g.edges().into_iter().map(|(u, v)| vec![u, v]).collect(); - HyperGraph::new(g.num_vertices(), edges) - }); +impl_variant_param!(SimpleGraph, "graph"); // K root (arbitrary K): impl_variant_param!(KN, "k", k: None); diff --git a/docs/src/getting-started.md b/docs/src/getting-started.md index 9ba6ef3a1..2c01d2a2c 100644 --- a/docs/src/getting-started.md +++ b/docs/src/getting-started.md @@ -28,33 +28,34 @@ The core workflow is: **create** a problem, **reduce** it to a target, **solve** -### Example 1: Direct reduction — MIS to ILP +### Example 1: Direct reduction — Set Packing to ILP -Reduce Maximum Independent Set to Integer Linear Programming (ILP) on a -4-vertex path graph, solve with the ILP solver, and extract the solution back. +Reduce Maximum Set Packing to Integer Linear Programming (ILP), solve with the +ILP solver, and extract the solution back. #### Step 1 — Create the source problem -A path graph `0–1–2–3` has 4 vertices and 3 edges. +A small set system with pairwise overlaps gives a direct binary ILP.
```rust,ignore use problemreductions::prelude::*; use problemreductions::models::algebraic::ILP; use problemreductions::solvers::ILPSolver; -use problemreductions::topology::SimpleGraph; -let problem = MaximumIndependentSet::new( - SimpleGraph::new(4, vec![(0, 1), (1, 2), (2, 3)]), - vec![1i32; 4], -); +let problem = MaximumSetPacking::<i32>::new(vec![ + vec![0, 1], + vec![1, 2], + vec![2, 3], + vec![4, 5], +]); ``` #### Step 2 — Reduce to ILP `ReduceTo` applies a single-step reduction. The result holds the target problem and knows how to map solutions back. The ILP formulation introduces -binary variable x_v for each vertex, constraint x_u + x_v ≤ 1 for each edge, -and maximizes the weighted sum. +binary variable x_i for each set, constraint x_i + x_j ≤ 1 for each +overlapping pair, and maximizes the weighted sum. ```rust,ignore let reduction = ReduceTo::<ILP>::reduce_to(&problem); let ilp = reduction.target_problem(); println!("ILP: {} variables, {} constraints", ilp.num_vars, ilp.constraints.len()); ``` ```text -ILP: 4 variables, 3 constraints +ILP: 4 variables, 2 constraints ``` #### Step 3 — Solve the ILP @@ -90,12 +91,12 @@ configuration space. ```rust,ignore let solution = reduction.extract_solution(&ilp_solution); let metric = problem.evaluate(&solution); -println!("IS solution: {:?} -> size {:?}", solution, metric); +println!("Packing solution: {:?} -> size {:?}", solution, metric); assert!(metric.is_valid()); ``` ```text -IS solution: [1, 0, 1, 0] -> size Valid(2) +Packing solution: [1, 0, 1, 1] -> size Valid(3) ``` For convenience, `ILPSolver::solve_reduced` combines reduce + solve + extract diff --git a/docs/src/mcp.md b/docs/src/mcp.md index 457170e96..05913595c 100644 --- a/docs/src/mcp.md +++ b/docs/src/mcp.md @@ -57,7 +57,7 @@ A complete Claude Code session showing how the AI assistant uses MCP tools behin > [ 0.0, 0.0, 0.0, -4.0] > ``` > -> The diagonal entries (-1, -2, -3, -4) encode vertex weights negated for minimization.
The off-diagonal entries (11.0) are large penalties on edges — selecting two adjacent vertices would incur a penalty that outweighs any benefit. The reduction path is: MaximumIndependentSet → QUBO. +> The diagonal entries (-1, -2, -3, -4) encode vertex weights negated for minimization. The off-diagonal entries (11.0) are large penalties on edges — selecting two adjacent vertices would incur a penalty that outweighs any benefit. The reduction path is: MaximumIndependentSet → MaximumSetPacking → QUBO. **You:** Now solve it through the reduction, explain the result. diff --git a/docs/src/reductions/problem_schemas.json b/docs/src/reductions/problem_schemas.json index 15eafb74c..50d327e9b 100644 --- a/docs/src/reductions/problem_schemas.json +++ b/docs/src/reductions/problem_schemas.json @@ -119,11 +119,6 @@ "type_name": "usize", "description": "Number of integer variables" }, - { - "name": "bounds", - "type_name": "Vec", - "description": "Variable bounds" - }, { "name": "constraints", "type_name": "Vec", diff --git a/docs/src/reductions/reduction_graph.json b/docs/src/reductions/reduction_graph.json index 3e73b9c15..e189c002a 100644 --- a/docs/src/reductions/reduction_graph.json +++ b/docs/src/reductions/reduction_graph.json @@ -1,3 +1,14 @@ +{ + "nodes": [ + { + "name": "BMF", + "variant": {}, + "category": "algebraic", + "doc_path": "models/algebraic/struct.BMF.html", + "complexity": "2^(rows * rank + r { "nodes": [ { @@ -66,10 +77,21 @@ }, { "name": "ILP", - "variant": {}, + "variant": { + "variable": "bool" + }, "category": "algebraic", "doc_path": "models/algebraic/struct.ILP.html", - "complexity": "num_variables^num_variables" + "complexity": "2^num_vars" + }, + { + "name": "ILP", + "variant": { + "variable": "i32" + }, + "category": "algebraic", + "doc_path": "models/algebraic/struct.ILP.html", + "complexity": "num_vars^num_vars" }, { "name": "KColoring", @@ -393,7 +415,7 @@ }, { "source":
4, - "target": 39, + "target": 40, "overhead": [ { "field": "num_spins", @@ -423,33 +445,48 @@ }, { "source": 7, - "target": 8, + "target": 9, "overhead": [ { "field": "num_vars", - "formula": "2 * num_bits_first + 2 * num_bits_second + num_bits_first * num_bits_second" + "formula": "num_bits_first * num_bits_second" }, { "field": "num_constraints", - "formula": "3 * num_bits_first * num_bits_second + num_bits_first + num_bits_second + 1" + "formula": "num_bits_first * num_bits_second" } ], "doc_path": "rules/factoring_ilp/index.html" }, { "source": 8, - "target": 36, + "target": 9, "overhead": [ { "field": "num_vars", "formula": "num_vars" + }, + { + "field": "num_constraints", + "formula": "num_constraints + num_vars" + } + ], + "doc_path": "rules/ilp_bool_ilp_i32/index.html" + }, + { + "source": 8, + "target": 37, + "overhead": [ + { + "field": "num_vars", + "formula": "num_vars + num_constraints * num_vars" } ], "doc_path": "rules/ilp_qubo/index.html" }, { - "source": 10, - "target": 13, + "source": 11, + "target": 14, "overhead": [ { "field": "num_vertices", @@ -463,7 +500,7 @@ "doc_path": "rules/kcoloring_casts/index.html" }, { - "source": 13, + "source": 14, "target": 8, "overhead": [ { @@ -478,8 +515,8 @@ "doc_path": "rules/coloring_ilp/index.html" }, { - "source": 13, - "target": 36, + "source": 14, + "target": 37, "overhead": [ { "field": "num_vars", @@ -489,8 +526,8 @@ "doc_path": "rules/coloring_qubo/index.html" }, { - "source": 14, - "target": 16, + "source": 15, + "target": 17, "overhead": [ { "field": "num_vars", @@ -504,38 +541,19 @@ "doc_path": "rules/ksatisfiability_casts/index.html" }, { - "source": 14, - "target": 36, - "overhead": [ - { - "field": "num_vars", - "formula": "num_vars" - } - ], - "doc_path": "rules/ksatisfiability_qubo/index.html" - }, - { - "source": 14, + "source": 15, "target": 37, "overhead": [ - { - "field": "num_clauses", - "formula": "num_clauses" - }, { "field": "num_vars", "formula": "num_vars" - }, - { - "field": 
"num_literals", - "formula": "num_literals" } ], - "doc_path": "rules/sat_ksat/index.html" + "doc_path": "rules/ksatisfiability_qubo/index.html" }, { - "source": 15, - "target": 16, + "source": 16, + "target": 17, "overhead": [ { "field": "num_vars", @@ -549,8 +567,8 @@ "doc_path": "rules/ksatisfiability_casts/index.html" }, { - "source": 15, - "target": 36, + "source": 16, + "target": 37, "overhead": [ { "field": "num_vars", @@ -560,27 +578,8 @@ "doc_path": "rules/ksatisfiability_qubo/index.html" }, { - "source": 15, - "target": 37, - "overhead": [ - { - "field": "num_clauses", - "formula": "num_clauses" - }, - { - "field": "num_vars", - "formula": "num_vars" - }, - { - "field": "num_literals", - "formula": "num_literals" - } - ], - "doc_path": "rules/sat_ksat/index.html" - }, - { - "source": 16, - "target": 37, + "source": 17, + "target": 38, "overhead": [ { "field": "num_clauses", @@ -598,8 +597,8 @@ "doc_path": "rules/sat_ksat/index.html" }, { - "source": 18, - "target": 39, + "source": 19, + "target": 40, "overhead": [ { "field": "num_spins", @@ -613,7 +612,7 @@ "doc_path": "rules/spinglass_maxcut/index.html" }, { - "source": 20, + "source": 21, "target": 8, "overhead": [ { @@ -628,23 +627,8 @@ "doc_path": "rules/maximumclique_ilp/index.html" }, { - "source": 21, - "target": 22, - "overhead": [ - { - "field": "num_vertices", - "formula": "num_vertices" - }, - { - "field": "num_edges", - "formula": "num_edges" - } - ], - "doc_path": "rules/maximumindependentset_casts/index.html" - }, - { - "source": 21, - "target": 26, + "source": 22, + "target": 23, "overhead": [ { "field": "num_vertices", @@ -674,21 +658,21 @@ }, { "source": 23, - "target": 21, + "target": 28, "overhead": [ { "field": "num_vertices", - "formula": "num_vertices * num_vertices" + "formula": "num_vertices" }, { "field": "num_edges", - "formula": "num_vertices * num_vertices" + "formula": "num_edges" } ], - "doc_path": "rules/maximumindependentset_gridgraph/index.html" + "doc_path": 
"rules/maximumindependentset_casts/index.html" }, { - "source": 23, + "source": 24, "target": 22, "overhead": [ { @@ -703,8 +687,8 @@ "doc_path": "rules/maximumindependentset_gridgraph/index.html" }, { - "source": 23, - "target": 24, + "source": 24, + "target": 25, "overhead": [ { "field": "num_vertices", @@ -718,8 +702,8 @@ "doc_path": "rules/maximumindependentset_casts/index.html" }, { - "source": 23, - "target": 25, + "source": 24, + "target": 26, "overhead": [ { "field": "num_vertices", @@ -733,8 +717,8 @@ "doc_path": "rules/maximumindependentset_triangular/index.html" }, { - "source": 23, - "target": 29, + "source": 24, + "target": 30, "overhead": [ { "field": "num_sets", @@ -748,23 +732,8 @@ "doc_path": "rules/maximumindependentset_maximumsetpacking/index.html" }, { - "source": 24, - "target": 8, - "overhead": [ - { - "field": "num_vars", - "formula": "num_vertices" - }, - { - "field": "num_constraints", - "formula": "num_edges" - } - ], - "doc_path": "rules/maximumindependentset_ilp/index.html" - }, - { - "source": 24, - "target": 31, + "source": 25, + "target": 32, "overhead": [ { "field": "num_sets", @@ -778,8 +747,8 @@ "doc_path": "rules/maximumindependentset_maximumsetpacking/index.html" }, { - "source": 24, - "target": 34, + "source": 25, + "target": 35, "overhead": [ { "field": "num_vertices", @@ -793,19 +762,8 @@ "doc_path": "rules/minimumvertexcover_maximumindependentset/index.html" }, { - "source": 24, - "target": 36, - "overhead": [ - { - "field": "num_vars", - "formula": "num_vertices" - } - ], - "doc_path": "rules/maximumindependentset_qubo/index.html" - }, - { - "source": 25, - "target": 27, + "source": 26, + "target": 28, "overhead": [ { "field": "num_vertices", @@ -819,8 +777,8 @@ "doc_path": "rules/maximumindependentset_casts/index.html" }, { - "source": 26, - "target": 23, + "source": 27, + "target": 24, "overhead": [ { "field": "num_vertices", @@ -834,8 +792,8 @@ "doc_path": "rules/maximumindependentset_casts/index.html" }, { - "source": 
26, - "target": 27, + "source": 27, + "target": 28, "overhead": [ { "field": "num_vertices", @@ -849,8 +807,8 @@ "doc_path": "rules/maximumindependentset_casts/index.html" }, { - "source": 27, - "target": 24, + "source": 28, + "target": 25, "overhead": [ { "field": "num_vertices", @@ -864,7 +822,7 @@ "doc_path": "rules/maximumindependentset_casts/index.html" }, { - "source": 28, + "source": 29, "target": 8, "overhead": [ { @@ -879,8 +837,8 @@ "doc_path": "rules/maximummatching_ilp/index.html" }, { - "source": 28, - "target": 31, + "source": 29, + "target": 32, "overhead": [ { "field": "num_sets", @@ -894,8 +852,8 @@ "doc_path": "rules/maximummatching_maximumsetpacking/index.html" }, { - "source": 29, - "target": 23, + "source": 30, + "target": 24, "overhead": [ { "field": "num_vertices", @@ -909,8 +867,8 @@ "doc_path": "rules/maximumindependentset_maximumsetpacking/index.html" }, { - "source": 29, - "target": 31, + "source": 30, + "target": 32, "overhead": [ { "field": "num_sets", @@ -924,8 +882,8 @@ "doc_path": "rules/maximumsetpacking_casts/index.html" }, { - "source": 30, - "target": 36, + "source": 31, + "target": 37, "overhead": [ { "field": "num_vars", @@ -935,7 +893,7 @@ "doc_path": "rules/maximumsetpacking_qubo/index.html" }, { - "source": 31, + "source": 32, "target": 8, "overhead": [ { @@ -944,14 +902,14 @@ }, { "field": "num_constraints", - "formula": "num_sets^2" + "formula": "universe_size" } ], "doc_path": "rules/maximumsetpacking_ilp/index.html" }, { - "source": 31, - "target": 24, + "source": 32, + "target": 25, "overhead": [ { "field": "num_vertices", @@ -965,8 +923,8 @@ "doc_path": "rules/maximumindependentset_maximumsetpacking/index.html" }, { - "source": 31, - "target": 30, + "source": 32, + "target": 31, "overhead": [ { "field": "num_sets", @@ -980,7 +938,7 @@ "doc_path": "rules/maximumsetpacking_casts/index.html" }, { - "source": 32, + "source": 33, "target": 8, "overhead": [ { @@ -995,7 +953,7 @@ "doc_path": 
"rules/minimumdominatingset_ilp/index.html" }, { - "source": 33, + "source": 34, "target": 8, "overhead": [ { @@ -1010,23 +968,8 @@ "doc_path": "rules/minimumsetcovering_ilp/index.html" }, { - "source": 34, - "target": 8, - "overhead": [ - { - "field": "num_vars", - "formula": "num_vertices" - }, - { - "field": "num_constraints", - "formula": "num_edges" - } - ], - "doc_path": "rules/minimumvertexcover_ilp/index.html" - }, - { - "source": 34, - "target": 24, + "source": 35, + "target": 25, "overhead": [ { "field": "num_vertices", @@ -1040,8 +983,8 @@ "doc_path": "rules/minimumvertexcover_maximumindependentset/index.html" }, { - "source": 34, - "target": 33, + "source": 35, + "target": 34, "overhead": [ { "field": "num_sets", @@ -1055,18 +998,7 @@ "doc_path": "rules/minimumvertexcover_minimumsetcovering/index.html" }, { - "source": 34, - "target": 36, - "overhead": [ - { - "field": "num_vars", - "formula": "num_vertices" - } - ], - "doc_path": "rules/minimumvertexcover_qubo/index.html" - }, - { - "source": 36, + "source": 37, "target": 8, "overhead": [ { @@ -1081,8 +1013,8 @@ "doc_path": "rules/qubo_ilp/index.html" }, { - "source": 36, - "target": 38, + "source": 37, + "target": 39, "overhead": [ { "field": "num_spins", @@ -1092,7 +1024,7 @@ "doc_path": "rules/spinglass_qubo/index.html" }, { - "source": 37, + "source": 38, "target": 4, "overhead": [ { @@ -1107,23 +1039,23 @@ "doc_path": "rules/sat_circuitsat/index.html" }, { - "source": 37, - "target": 10, + "source": 38, + "target": 11, "overhead": [ { "field": "num_vertices", - "formula": "2 * num_vars + 5 * num_literals + -1 * 5 * num_clauses + 3" + "formula": "num_vars + num_literals" }, { "field": "num_edges", - "formula": "3 * num_vars + 11 * num_literals + -1 * 9 * num_clauses + 3" + "formula": "num_vars + num_literals" } ], "doc_path": "rules/sat_coloring/index.html" }, { - "source": 37, - "target": 15, + "source": 38, + "target": 16, "overhead": [ { "field": "num_clauses", @@ -1137,8 +1069,8 @@ "doc_path": 
"rules/sat_ksat/index.html" }, { - "source": 37, - "target": 23, + "source": 38, + "target": 24, "overhead": [ { "field": "num_vertices", @@ -1152,8 +1084,8 @@ "doc_path": "rules/sat_maximumindependentset/index.html" }, { - "source": 37, - "target": 32, + "source": 38, + "target": 33, "overhead": [ { "field": "num_vertices", @@ -1167,8 +1099,8 @@ "doc_path": "rules/sat_minimumdominatingset/index.html" }, { - "source": 38, - "target": 36, + "source": 39, + "target": 37, "overhead": [ { "field": "num_vars", @@ -1178,8 +1110,8 @@ "doc_path": "rules/spinglass_qubo/index.html" }, { - "source": 39, - "target": 18, + "source": 40, + "target": 19, "overhead": [ { "field": "num_vertices", @@ -1193,8 +1125,8 @@ "doc_path": "rules/spinglass_maxcut/index.html" }, { - "source": 39, - "target": 38, + "source": 40, + "target": 39, "overhead": [ { "field": "num_spins", @@ -1208,7 +1140,7 @@ "doc_path": "rules/spinglass_casts/index.html" }, { - "source": 40, + "source": 41, "target": 8, "overhead": [ { @@ -1223,4 +1155,4 @@ "doc_path": "rules/travelingsalesman_ilp/index.html" } ] -} \ No newline at end of file +} diff --git a/examples/chained_reduction_factoring_to_spinglass.rs b/examples/chained_reduction_factoring_to_spinglass.rs index 8b0e9d857..8374906e7 100644 --- a/examples/chained_reduction_factoring_to_spinglass.rs +++ b/examples/chained_reduction_factoring_to_spinglass.rs @@ -5,6 +5,7 @@ // Uses ILPSolver for the solve step (Julia uses GenericTensorNetworks). 
// ANCHOR: imports +use problemreductions::models::algebraic::ILP; use problemreductions::prelude::*; use problemreductions::rules::{MinimizeSteps, ReductionGraph}; use problemreductions::solvers::ILPSolver; @@ -40,9 +41,11 @@ pub fn run() { // ANCHOR_END: step2 // ANCHOR: step3 - // solve_reduced: reduce → ILP, solve with HiGHS, extract back + // Factoring reduces to ILP, so we manually reduce, solve, and extract let solver = ILPSolver::new(); - let solution = solver.solve_reduced(&factoring).unwrap(); + let reduction = ReduceTo::>::reduce_to(&factoring); + let ilp_solution = solver.solve(reduction.target_problem()).unwrap(); + let solution = reduction.extract_solution(&ilp_solution); // ANCHOR_END: step3 // ANCHOR: step4 diff --git a/examples/chained_reduction_ksat_to_mis.rs b/examples/chained_reduction_ksat_to_mis.rs index d253cd218..5fd1aab28 100644 --- a/examples/chained_reduction_ksat_to_mis.rs +++ b/examples/chained_reduction_ksat_to_mis.rs @@ -2,9 +2,10 @@ // // Demonstrates the `find_cheapest_path` + `reduce_along_path` API to chain // reductions automatically: KSatisfiability → Satisfiability → MIS. -// The target MIS is then solved via `ILPSolver::solve_reduced`. +// The target MIS is then reduced further to ILP and solved there. // ANCHOR: imports +use problemreductions::models::algebraic::ILP; use problemreductions::prelude::*; use problemreductions::rules::{MinimizeSteps, ReductionGraph}; use problemreductions::solvers::ILPSolver; @@ -49,10 +50,28 @@ pub fn run() { .unwrap(); let target: &MaximumIndependentSet = chain.target_problem(); - // Solve the target MIS via ILP + // Reduce the target MIS further to ILP through the registered rule graph. 
+ let ilp_var = ReductionGraph::variant_to_map(&ILP::::variant()); + let ilp_path = graph + .find_cheapest_path( + "MaximumIndependentSet", + &dst_var, + "ILP", + &ilp_var, + &ProblemSize::new(vec![]), + &MinimizeSteps, + ) + .unwrap(); + let ilp_chain = graph + .reduce_along_path(&ilp_path, target as &dyn std::any::Any) + .unwrap(); + let ilp: &ILP = ilp_chain.target_problem(); + + // Solve the target MIS via the derived ILP. let solver = ILPSolver::new(); - let solution = solver.solve_reduced(target).unwrap(); - let original = chain.extract_solution(&solution); + let ilp_solution = solver.solve(ilp).unwrap(); + let mis_solution = ilp_chain.extract_solution(&ilp_solution); + let original = chain.extract_solution(&mis_solution); // Verify: satisfies the original 3-SAT formula assert!(ksat.evaluate(&original)); diff --git a/examples/reduction_circuitsat_to_ilp.rs b/examples/reduction_circuitsat_to_ilp.rs index 4bbc42d40..599a0960f 100644 --- a/examples/reduction_circuitsat_to_ilp.rs +++ b/examples/reduction_circuitsat_to_ilp.rs @@ -69,7 +69,7 @@ pub fn run() { ); // 2. Reduce to ILP - let reduction = ReduceTo::::reduce_to(&circuit_sat); + let reduction = ReduceTo::>::reduce_to(&circuit_sat); let ilp = reduction.target_problem(); println!("\n=== Problem Transformation ==="); @@ -136,7 +136,7 @@ pub fn run() { // 5. 
Export JSON let source_variant = variant_to_map(CircuitSAT::variant()); - let target_variant = variant_to_map(ILP::variant()); + let target_variant = variant_to_map(ILP::::variant()); let overhead = lookup_overhead("CircuitSAT", &source_variant, "ILP", &target_variant) .expect("CircuitSAT -> ILP overhead not found"); @@ -150,7 +150,7 @@ pub fn run() { }), }, target: ProblemSide { - problem: ILP::NAME.to_string(), + problem: ILP::::NAME.to_string(), variant: target_variant, instance: serde_json::json!({ "num_vars": ilp.num_variables(), diff --git a/examples/reduction_factoring_to_ilp.rs b/examples/reduction_factoring_to_ilp.rs index 1f0013ab8..12c6e5643 100644 --- a/examples/reduction_factoring_to_ilp.rs +++ b/examples/reduction_factoring_to_ilp.rs @@ -27,7 +27,7 @@ pub fn run() { let problem = Factoring::new(3, 3, 35); // 2. Reduce to ILP - let reduction = ReduceTo::::reduce_to(&problem); + let reduction = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); // 3. Print transformation @@ -72,7 +72,7 @@ pub fn run() { }]; let source_variant = variant_to_map(Factoring::variant()); - let target_variant = variant_to_map(ILP::variant()); + let target_variant = variant_to_map(ILP::::variant()); let overhead = lookup_overhead("Factoring", &source_variant, "ILP", &target_variant) .expect("Factoring -> ILP overhead not found"); @@ -87,7 +87,7 @@ pub fn run() { }), }, target: ProblemSide { - problem: ILP::NAME.to_string(), + problem: ILP::::NAME.to_string(), variant: target_variant, instance: serde_json::json!({ "num_vars": ilp.num_vars, diff --git a/examples/reduction_ilp_to_qubo.rs b/examples/reduction_ilp_to_qubo.rs index 804cef94d..abf2ff193 100644 --- a/examples/reduction_ilp_to_qubo.rs +++ b/examples/reduction_ilp_to_qubo.rs @@ -47,7 +47,7 @@ pub fn run() { // Constraint 1: knapsack weight capacity <= 10 // Constraint 2: category A items (x0, x1, x2) limited to 2 // Constraint 3: category B items (x3, x4, x5) limited to 2 - let ilp = ILP::binary( + 
let ilp = ILP::::new( 6, vec![ // Knapsack weight constraint: 3x0 + 2x1 + 5x2 + 4x3 + 2x4 + 3x5 <= 10 @@ -69,7 +69,7 @@ pub fn run() { let values = [10, 7, 12, 8, 6, 9]; // Reduce to QUBO - let reduction = ReduceTo::::reduce_to(&ilp); + let reduction = ReduceTo::>::reduce_to(&ilp); let qubo = reduction.target_problem(); println!("Source: ILP (binary) with 6 variables, 3 constraints"); @@ -136,14 +136,14 @@ pub fn run() { println!("\nVerification passed: all solutions are feasible and optimal"); // Export JSON - let source_variant = variant_to_map(ILP::variant()); + let source_variant = variant_to_map(ILP::::variant()); let target_variant = variant_to_map(QUBO::::variant()); let overhead = lookup_overhead("ILP", &source_variant, "QUBO", &target_variant) .expect("ILP -> QUBO overhead not found"); let data = ReductionData { source: ProblemSide { - problem: ILP::NAME.to_string(), + problem: ILP::::NAME.to_string(), variant: source_variant, instance: serde_json::json!({ "num_vars": ilp.num_vars, diff --git a/examples/reduction_kcoloring_to_ilp.rs b/examples/reduction_kcoloring_to_ilp.rs index d287f0aad..97be89956 100644 --- a/examples/reduction_kcoloring_to_ilp.rs +++ b/examples/reduction_kcoloring_to_ilp.rs @@ -29,7 +29,7 @@ pub fn run() { let coloring = KColoring::::new(SimpleGraph::new(num_vertices, edges.clone())); // 2. Reduce to ILP - let reduction = ReduceTo::::reduce_to(&coloring); + let reduction = ReduceTo::>::reduce_to(&coloring); let ilp = reduction.target_problem(); // 3. 
Print transformation @@ -73,7 +73,7 @@ pub fn run() { }); let source_variant = variant_to_map(KColoring::::variant()); - let target_variant = variant_to_map(ILP::variant()); + let target_variant = variant_to_map(ILP::::variant()); let overhead = lookup_overhead("KColoring", &source_variant, "ILP", &target_variant) .expect("KColoring -> ILP overhead not found"); @@ -88,7 +88,7 @@ pub fn run() { }), }, target: ProblemSide { - problem: ILP::NAME.to_string(), + problem: ILP::::NAME.to_string(), variant: target_variant, instance: serde_json::json!({ "num_vars": ilp.num_vars, diff --git a/examples/reduction_maximumclique_to_ilp.rs b/examples/reduction_maximumclique_to_ilp.rs index 1785c242c..af60e544d 100644 --- a/examples/reduction_maximumclique_to_ilp.rs +++ b/examples/reduction_maximumclique_to_ilp.rs @@ -29,7 +29,7 @@ pub fn run() { ); // 2. Reduce to ILP - let reduction = ReduceTo::::reduce_to(&clique); + let reduction = ReduceTo::>::reduce_to(&clique); let ilp = reduction.target_problem(); // 3. Print transformation @@ -76,7 +76,7 @@ pub fn run() { } let source_variant = variant_to_map(MaximumClique::::variant()); - let target_variant = variant_to_map(ILP::variant()); + let target_variant = variant_to_map(ILP::::variant()); let overhead = lookup_overhead("MaximumClique", &source_variant, "ILP", &target_variant) .unwrap_or_default(); @@ -91,7 +91,7 @@ pub fn run() { }), }, target: ProblemSide { - problem: ILP::NAME.to_string(), + problem: ILP::::NAME.to_string(), variant: target_variant, instance: serde_json::json!({ "num_vars": ilp.num_vars, diff --git a/examples/reduction_maximumindependentset_to_ilp.rs b/examples/reduction_maximumindependentset_to_ilp.rs index cb66e151d..01d857a85 100644 --- a/examples/reduction_maximumindependentset_to_ilp.rs +++ b/examples/reduction_maximumindependentset_to_ilp.rs @@ -1,49 +1,60 @@ -// # Independent Set to ILP Reduction -// -// ## Mathematical Formulation -// Variables: x_v in {0,1} for each vertex v. 
-// Constraints: x_u + x_v <= 1 for each edge (u,v). -// Objective: maximize sum of w_v * x_v. +// # Independent Set to ILP via Reduction Path // // ## This Example // - Instance: Petersen graph (10 vertices, 15 edges, 3-regular) // - Source IS: max size 4 -// - Target ILP: 10 binary variables, 15 constraints +// - Target: ILP reached through the reduction graph // // ## Output -// Exports `docs/paper/examples/maximumindependentset_to_ilp.json` and `maximumindependentset_to_ilp.result.json`. +// Exports `docs/paper/examples/maximumindependentset_to_ilp.json` and +// `maximumindependentset_to_ilp.result.json`. use problemreductions::export::*; use problemreductions::models::algebraic::ILP; use problemreductions::prelude::*; +use problemreductions::rules::{MinimizeSteps, ReductionGraph}; use problemreductions::topology::small_graphs::petersen; use problemreductions::topology::{Graph, SimpleGraph}; +use problemreductions::types::ProblemSize; pub fn run() { - // 1. Create IS instance: Petersen graph let (num_vertices, edges) = petersen(); let is = MaximumIndependentSet::new( SimpleGraph::new(num_vertices, edges.clone()), vec![1i32; num_vertices], ); - // 2. Reduce to ILP - let reduction = ReduceTo::::reduce_to(&is); - let ilp = reduction.target_problem(); + let graph = ReductionGraph::new(); + let src_variant_bt = + ReductionGraph::variant_to_map(&MaximumIndependentSet::::variant()); + let dst_variant_bt = ReductionGraph::variant_to_map(&ILP::::variant()); + let path = graph + .find_cheapest_path( + "MaximumIndependentSet", + &src_variant_bt, + "ILP", + &dst_variant_bt, + &ProblemSize::new(vec![]), + &MinimizeSteps, + ) + .expect("MaximumIndependentSet -> ILP path not found"); + let reduction = graph + .reduce_along_path(&path, &is as &dyn std::any::Any) + .expect("MaximumIndependentSet -> ILP path reduction failed"); + let ilp: &ILP = reduction.target_problem(); - // 3. 
Print transformation println!("\n=== Problem Transformation ==="); println!( "Source: MaximumIndependentSet with {} variables", is.num_variables() ); + println!("Path: {}", path); println!( "Target: ILP with {} variables, {} constraints", ilp.num_vars, ilp.constraints.len() ); - // 4. Solve target ILP let solver = BruteForce::new(); let ilp_solutions = solver.find_all_best(ilp); println!("\n=== Solution ==="); @@ -52,22 +63,19 @@ pub fn run() { let ilp_solution = &ilp_solutions[0]; println!("ILP solution: {:?}", ilp_solution); - // 5. Extract source solution let is_solution = reduction.extract_solution(ilp_solution); println!("Source IS solution: {:?}", is_solution); - // 6. Verify let size = is.evaluate(&is_solution); println!("Solution size: {:?}", size); - assert!(size.is_valid()); // Valid solution + assert!(size.is_valid()); println!("\nReduction verified successfully"); - // 7. Collect solutions and export JSON let mut solutions = Vec::new(); for target_config in &ilp_solutions { let source_sol = reduction.extract_solution(target_config); let s = is.evaluate(&source_sol); - assert!(s.is_valid()); // Valid solution + assert!(s.is_valid()); solutions.push(SolutionPair { source_config: source_sol, target_config: target_config.clone(), @@ -75,14 +83,8 @@ pub fn run() { } let source_variant = variant_to_map(MaximumIndependentSet::::variant()); - let target_variant = variant_to_map(ILP::variant()); - let overhead = lookup_overhead( - "MaximumIndependentSet", - &source_variant, - "ILP", - &target_variant, - ) - .unwrap_or_default(); + let target_variant = variant_to_map(ILP::::variant()); + let overhead = graph.compose_path_overhead(&path); let data = ReductionData { source: ProblemSide { @@ -95,7 +97,7 @@ pub fn run() { }), }, target: ProblemSide { - problem: ILP::NAME.to_string(), + problem: ILP::::NAME.to_string(), variant: target_variant, instance: serde_json::json!({ "num_vars": ilp.num_vars, @@ -106,8 +108,7 @@ pub fn run() { }; let results = ResultData { 
solutions }; - let name = "maximumindependentset_to_ilp"; - write_example(name, &data, &results); + write_example("maximumindependentset_to_ilp", &data, &results); } fn main() { diff --git a/examples/reduction_maximumindependentset_to_qubo.rs b/examples/reduction_maximumindependentset_to_qubo.rs index 06e028e42..334cc8123 100644 --- a/examples/reduction_maximumindependentset_to_qubo.rs +++ b/examples/reduction_maximumindependentset_to_qubo.rs @@ -1,65 +1,67 @@ -// # Independent Set to QUBO Reduction (Penalty Method) -// -// ## Mathematical Relationship -// The Maximum Independent Set (MIS) problem on a graph G = (V, E) is mapped to -// QUBO by constructing a penalty Hamiltonian: -// -// H(x) = -sum_{i in V} x_i + P * sum_{(i,j) in E} x_i * x_j -// -// where P > 1 is a penalty weight ensuring no two adjacent vertices are both -// selected. The QUBO minimization finds configurations that maximize the -// independent set size while respecting adjacency constraints. +// # Independent Set to QUBO via Reduction Path // // ## This Example // - Instance: Petersen graph (10 vertices, 15 edges, 3-regular) // - Source: MaximumIndependentSet with maximum size 4 -// - QUBO variables: 10 (one per vertex) -// - Expected: Optimal solutions of size 4 +// - Target: QUBO reached through the reduction graph // // ## Output -// Exports `docs/paper/examples/maximumindependentset_to_qubo.json` and `maximumindependentset_to_qubo.result.json`. -// -// ## Usage -// ```bash -// cargo run --example reduction_is_to_qubo -// ``` +// Exports `docs/paper/examples/maximumindependentset_to_qubo.json` and +// `maximumindependentset_to_qubo.result.json`. 
use problemreductions::export::*; use problemreductions::prelude::*; +use problemreductions::rules::{Minimize, ReductionGraph}; use problemreductions::topology::small_graphs::petersen; use problemreductions::topology::{Graph, SimpleGraph}; +use problemreductions::types::ProblemSize; pub fn run() { println!("=== Independent Set -> QUBO Reduction ===\n"); - // Petersen graph: 10 vertices, 15 edges, 3-regular let (num_vertices, edges) = petersen(); let is = MaximumIndependentSet::new( SimpleGraph::new(num_vertices, edges.clone()), vec![1i32; num_vertices], ); - // Reduce to QUBO - let reduction = ReduceTo::::reduce_to(&is); - let qubo = reduction.target_problem(); + let graph = ReductionGraph::new(); + let src_variant_bt = + ReductionGraph::variant_to_map(&MaximumIndependentSet::::variant()); + let dst_variant_bt = ReductionGraph::variant_to_map(&QUBO::::variant()); + let path = graph + .find_cheapest_path( + "MaximumIndependentSet", + &src_variant_bt, + "QUBO", + &dst_variant_bt, + &ProblemSize::new(vec![ + ("num_vertices", is.graph().num_vertices()), + ("num_edges", is.graph().num_edges()), + ]), + &Minimize("num_vars"), + ) + .expect("MaximumIndependentSet -> QUBO path not found"); + let reduction = graph + .reduce_along_path(&path, &is as &dyn std::any::Any) + .expect("MaximumIndependentSet -> QUBO path reduction failed"); + let qubo: &QUBO = reduction.target_problem(); println!("Source: MaximumIndependentSet on Petersen graph (10 vertices, 15 edges)"); + println!("Path: {}", path); println!("Target: QUBO with {} variables", qubo.num_variables()); println!("Q matrix:"); for row in qubo.matrix() { println!(" {:?}", row); } - // Solve QUBO with brute force let solver = BruteForce::new(); let qubo_solutions = solver.find_all_best(qubo); - // Extract and verify solutions println!("\nOptimal solutions:"); let mut solutions = Vec::new(); for sol in &qubo_solutions { let extracted = reduction.extract_solution(sol); - // MaximumIndependentSet is a maximization problem, 
infeasible configs return Invalid let sol_size = is.evaluate(&extracted); assert!( sol_size.is_valid(), @@ -82,16 +84,9 @@ pub fn run() { println!("\nVerification passed: all solutions are valid"); - // Export JSON let source_variant = variant_to_map(MaximumIndependentSet::::variant()); let target_variant = variant_to_map(QUBO::::variant()); - let overhead = lookup_overhead( - "MaximumIndependentSet", - &source_variant, - "QUBO", - &target_variant, - ) - .expect("MaximumIndependentSet -> QUBO overhead not found"); + let overhead = graph.compose_path_overhead(&path); let data = ReductionData { source: ProblemSide { @@ -115,8 +110,7 @@ pub fn run() { }; let results = ResultData { solutions }; - let name = "maximumindependentset_to_qubo"; - write_example(name, &data, &results); + write_example("maximumindependentset_to_qubo", &data, &results); } fn main() { diff --git a/examples/reduction_maximummatching_to_ilp.rs b/examples/reduction_maximummatching_to_ilp.rs index ec482e642..7a588c888 100644 --- a/examples/reduction_maximummatching_to_ilp.rs +++ b/examples/reduction_maximummatching_to_ilp.rs @@ -26,7 +26,7 @@ pub fn run() { MaximumMatching::<_, i32>::unit_weights(SimpleGraph::new(num_vertices, edges.clone())); // 2. Reduce to ILP - let reduction = ReduceTo::::reduce_to(&matching); + let reduction = ReduceTo::>::reduce_to(&matching); let ilp = reduction.target_problem(); // 3. 
Print transformation @@ -73,7 +73,7 @@ pub fn run() { } let source_variant = variant_to_map(MaximumMatching::::variant()); - let target_variant = variant_to_map(ILP::variant()); + let target_variant = variant_to_map(ILP::::variant()); let overhead = lookup_overhead("MaximumMatching", &source_variant, "ILP", &target_variant) .unwrap_or_default(); @@ -88,7 +88,7 @@ pub fn run() { }), }, target: ProblemSide { - problem: ILP::NAME.to_string(), + problem: ILP::::NAME.to_string(), variant: target_variant, instance: serde_json::json!({ "num_vars": ilp.num_vars, diff --git a/examples/reduction_maximumsetpacking_to_ilp.rs b/examples/reduction_maximumsetpacking_to_ilp.rs index 86faf44e2..4a832196f 100644 --- a/examples/reduction_maximumsetpacking_to_ilp.rs +++ b/examples/reduction_maximumsetpacking_to_ilp.rs @@ -8,8 +8,8 @@ // ## This Example // - Instance: 6 sets over universe {0,...,7} // - S0={0,1,2}, S1={2,3,4}, S2={4,5,6}, S3={6,7,0}, S4={1,3,5}, S5={0,4,7} -// - Source MaximumSetPacking: max packing size 2 (e.g., S0 and S2, or S1 and S3) -// - Target ILP: 6 binary variables, overlap constraints for each pair sharing elements +// - Source MaximumSetPacking: max packing size 2 +// - Target ILP: 6 binary variables, one constraint per overlapping pair // // ## Output // Exports `docs/paper/examples/maximumsetpacking_to_ilp.json` and `maximumsetpacking_to_ilp.result.json`. @@ -19,22 +19,19 @@ use problemreductions::models::algebraic::ILP; use problemreductions::prelude::*; pub fn run() { - // 1. 
Create MaximumSetPacking instance: 6 sets over universe {0,...,7} let sets = vec![ - vec![0, 1, 2], // S0 - vec![2, 3, 4], // S1 (overlaps S0 at 2) - vec![4, 5, 6], // S2 (overlaps S1 at 4) - vec![6, 7, 0], // S3 (overlaps S2 at 6, S0 at 0) - vec![1, 3, 5], // S4 (overlaps S0, S1, S2) - vec![0, 4, 7], // S5 (overlaps S0, S1, S3) + vec![0, 1, 2], + vec![2, 3, 4], + vec![4, 5, 6], + vec![6, 7, 0], + vec![1, 3, 5], + vec![0, 4, 7], ]; let sp = MaximumSetPacking::::new(sets.clone()); - // 2. Reduce to ILP - let reduction = ReduceTo::::reduce_to(&sp); + let reduction = ReduceTo::>::reduce_to(&sp); let ilp = reduction.target_problem(); - // 3. Print transformation println!("\n=== Problem Transformation ==="); println!( "Source: MaximumSetPacking with {} sets over universe {{0,...,7}}", @@ -49,7 +46,6 @@ pub fn run() { ilp.constraints.len() ); - // 4. Solve target ILP let solver = BruteForce::new(); let ilp_solutions = solver.find_all_best(ilp); println!("\n=== Solution ==="); @@ -58,16 +54,13 @@ pub fn run() { let ilp_solution = &ilp_solutions[0]; println!("ILP solution: {:?}", ilp_solution); - // 5. Extract source solution let sp_solution = reduction.extract_solution(ilp_solution); println!("Source MaximumSetPacking solution: {:?}", sp_solution); - // 6. Verify let metric = sp.evaluate(&sp_solution); println!("Solution metric: {:?}", metric); println!("\nReduction verified successfully"); - // 7. 
Collect solutions and export JSON let mut solutions = Vec::new(); for target_config in &ilp_solutions { let source_sol = reduction.extract_solution(target_config); @@ -78,7 +71,7 @@ pub fn run() { } let source_variant = variant_to_map(MaximumSetPacking::::variant()); - let target_variant = variant_to_map(ILP::variant()); + let target_variant = variant_to_map(ILP::::variant()); let overhead = lookup_overhead("MaximumSetPacking", &source_variant, "ILP", &target_variant) .unwrap_or_default(); @@ -92,7 +85,7 @@ pub fn run() { }), }, target: ProblemSide { - problem: ILP::NAME.to_string(), + problem: ILP::::NAME.to_string(), variant: target_variant, instance: serde_json::json!({ "num_vars": ilp.num_vars, diff --git a/examples/reduction_minimumdominatingset_to_ilp.rs b/examples/reduction_minimumdominatingset_to_ilp.rs index 959ec4e3a..a684986f2 100644 --- a/examples/reduction_minimumdominatingset_to_ilp.rs +++ b/examples/reduction_minimumdominatingset_to_ilp.rs @@ -28,7 +28,7 @@ pub fn run() { ); // 2. Reduce to ILP - let reduction = ReduceTo::::reduce_to(&ds); + let reduction = ReduceTo::>::reduce_to(&ds); let ilp = reduction.target_problem(); // 3. 
Print transformation @@ -77,7 +77,7 @@ pub fn run() { } let source_variant = variant_to_map(MinimumDominatingSet::::variant()); - let target_variant = variant_to_map(ILP::variant()); + let target_variant = variant_to_map(ILP::::variant()); let overhead = lookup_overhead( "MinimumDominatingSet", &source_variant, @@ -97,7 +97,7 @@ pub fn run() { }), }, target: ProblemSide { - problem: ILP::NAME.to_string(), + problem: ILP::::NAME.to_string(), variant: target_variant, instance: serde_json::json!({ "num_vars": ilp.num_vars, diff --git a/examples/reduction_minimumsetcovering_to_ilp.rs b/examples/reduction_minimumsetcovering_to_ilp.rs index ba514c627..0d8b726c9 100644 --- a/examples/reduction_minimumsetcovering_to_ilp.rs +++ b/examples/reduction_minimumsetcovering_to_ilp.rs @@ -31,7 +31,7 @@ pub fn run() { let sc = MinimumSetCovering::::new(8, sets.clone()); // 2. Reduce to ILP - let reduction = ReduceTo::::reduce_to(&sc); + let reduction = ReduceTo::>::reduce_to(&sc); let ilp = reduction.target_problem(); // 3. Print transformation @@ -81,7 +81,7 @@ pub fn run() { } let source_variant = variant_to_map(MinimumSetCovering::::variant()); - let target_variant = variant_to_map(ILP::variant()); + let target_variant = variant_to_map(ILP::::variant()); let overhead = lookup_overhead( "MinimumSetCovering", &source_variant, @@ -101,7 +101,7 @@ pub fn run() { }), }, target: ProblemSide { - problem: ILP::NAME.to_string(), + problem: ILP::::NAME.to_string(), variant: target_variant, instance: serde_json::json!({ "num_vars": ilp.num_vars, diff --git a/examples/reduction_minimumvertexcover_to_ilp.rs b/examples/reduction_minimumvertexcover_to_ilp.rs index 6e674008f..186ec3d5b 100644 --- a/examples/reduction_minimumvertexcover_to_ilp.rs +++ b/examples/reduction_minimumvertexcover_to_ilp.rs @@ -1,49 +1,60 @@ -// # Vertex Covering to ILP Reduction -// -// ## Mathematical Formulation -// Variables: x_v in {0,1} for each vertex v. -// Constraints: x_u + x_v >= 1 for each edge (u,v). 
-// Objective: minimize sum of w_v * x_v.
+// # Vertex Cover to ILP via Reduction Path
 //
 // ## This Example
-// - Instance: Petersen graph (10 vertices, 15 edges), VC=6
-// - Source VC: min cover size 6
-// - Target ILP: 10 binary variables, 15 constraints
+// - Instance: Petersen graph (10 vertices, 15 edges), VC = 6
+// - Source VC: min size 6
+// - Target: ILP reached through the reduction graph
 //
 // ## Output
-// Exports `docs/paper/examples/minimumvertexcover_to_ilp.json` and `minimumvertexcover_to_ilp.result.json`.
+// Exports `docs/paper/examples/minimumvertexcover_to_ilp.json` and
+// `minimumvertexcover_to_ilp.result.json`.

 use problemreductions::export::*;
 use problemreductions::models::algebraic::ILP;
 use problemreductions::prelude::*;
+use problemreductions::rules::{MinimizeSteps, ReductionGraph};
 use problemreductions::topology::small_graphs::petersen;
 use problemreductions::topology::{Graph, SimpleGraph};
+use problemreductions::types::ProblemSize;

 pub fn run() {
-    // 1. Create VC instance: Petersen graph (10 vertices, 15 edges), VC=6
     let (num_vertices, edges) = petersen();
     let vc = MinimumVertexCover::new(
         SimpleGraph::new(num_vertices, edges.clone()),
         vec![1i32; num_vertices],
     );

-    // 2. Reduce to ILP
-    let reduction = ReduceTo::<ILP>::reduce_to(&vc);
-    let ilp = reduction.target_problem();
+    let graph = ReductionGraph::new();
+    let src_variant_bt =
+        ReductionGraph::variant_to_map(&MinimumVertexCover::<i32>::variant());
+    let dst_variant_bt = ReductionGraph::variant_to_map(&ILP::<bool>::variant());
+    let path = graph
+        .find_cheapest_path(
+            "MinimumVertexCover",
+            &src_variant_bt,
+            "ILP",
+            &dst_variant_bt,
+            &ProblemSize::new(vec![]),
+            &MinimizeSteps,
+        )
+        .expect("MinimumVertexCover -> ILP path not found");
+    let reduction = graph
+        .reduce_along_path(&path, &vc as &dyn std::any::Any)
+        .expect("MinimumVertexCover -> ILP path reduction failed");
+    let ilp: &ILP<bool> = reduction.target_problem();

-    // 3. Print transformation
     println!("\n=== Problem Transformation ===");
     println!(
         "Source: MinimumVertexCover with {} variables",
         vc.num_variables()
     );
+    println!("Path: {}", path);
     println!(
         "Target: ILP with {} variables, {} constraints",
         ilp.num_vars,
         ilp.constraints.len()
     );

-    // 4. Solve target ILP
     let solver = BruteForce::new();
     let ilp_solutions = solver.find_all_best(ilp);
     println!("\n=== Solution ===");
@@ -52,39 +63,28 @@ pub fn run() {
     let ilp_solution = &ilp_solutions[0];
     println!("ILP solution: {:?}", ilp_solution);

-    // 5. Extract source solution
     let vc_solution = reduction.extract_solution(ilp_solution);
     println!("Source VC solution: {:?}", vc_solution);

-    // 6. Verify
     let size = vc.evaluate(&vc_solution);
-    // MinimumVertexCover is a minimization problem, infeasible configs return Invalid
     println!("Solution size: {:?}", size);
     assert!(size.is_valid());
     println!("\nReduction verified successfully");

-    // 7. Collect solutions and export JSON
     let mut solutions = Vec::new();
     for target_config in &ilp_solutions {
         let source_sol = reduction.extract_solution(target_config);
         let s = vc.evaluate(&source_sol);
-        // MinimumVertexCover is a minimization problem, infeasible configs return Invalid
         assert!(s.is_valid());
         solutions.push(SolutionPair {
             source_config: source_sol,
-            target_config: target_config.to_vec(),
+            target_config: target_config.clone(),
         });
     }

     let source_variant = variant_to_map(MinimumVertexCover::<i32>::variant());
-    let target_variant = variant_to_map(ILP::variant());
-    let overhead = lookup_overhead(
-        "MinimumVertexCover",
-        &source_variant,
-        "ILP",
-        &target_variant,
-    )
-    .unwrap_or_default();
+    let target_variant = variant_to_map(ILP::<bool>::variant());
+    let overhead = graph.compose_path_overhead(&path);

     let data = ReductionData {
         source: ProblemSide {
@@ -97,7 +97,7 @@ pub fn run() {
             }),
         },
         target: ProblemSide {
-            problem: ILP::NAME.to_string(),
+            problem: ILP::<bool>::NAME.to_string(),
             variant: target_variant,
             instance: serde_json::json!({
                 "num_vars": ilp.num_vars,
@@ -108,8 +108,7 @@ pub fn run() {
     };
     let results = ResultData { solutions };

-    let name = "minimumvertexcover_to_ilp";
-    write_example(name, &data, &results);
+    write_example("minimumvertexcover_to_ilp", &data, &results);
 }

 fn main() {
diff --git a/examples/reduction_minimumvertexcover_to_qubo.rs b/examples/reduction_minimumvertexcover_to_qubo.rs
index daad414c2..54e9d40de 100644
--- a/examples/reduction_minimumvertexcover_to_qubo.rs
+++ b/examples/reduction_minimumvertexcover_to_qubo.rs
@@ -1,60 +1,63 @@
-// # Vertex Covering to QUBO Reduction (Penalty Method)
-//
-// ## Mathematical Relationship
-// The Minimum Vertex Cover (MVC) problem on a graph G = (V, E) is mapped to
-// QUBO by constructing a penalty Hamiltonian:
-//
-//     H(x) = sum_{i in V} x_i + P * sum_{(i,j) in E} (1 - x_i)(1 - x_j)
-//
-// where P is a penalty weight ensuring every edge has at least one endpoint
-// selected. The QUBO minimization finds configurations that minimize the
-// number of selected vertices while covering all edges.
+// # Vertex Cover to QUBO via Reduction Path
 //
 // ## This Example
-// - Instance: Petersen graph (10 vertices, 15 edges), VC=6
-// - Source: MinimumVertexCover with minimum size 6
-// - QUBO variables: 10 (one per vertex)
-// - Expected: Optimal vertex covers of size 6
+// - Instance: Petersen graph (10 vertices, 15 edges), VC = 6
+// - Source: MinimumVertexCover
+// - Target: QUBO reached through the reduction graph
 //
 // ## Output
-// Exports `docs/paper/examples/minimumvertexcover_to_qubo.json` and `minimumvertexcover_to_qubo.result.json`.
-//
-// ## Usage
-// ```bash
-// cargo run --example reduction_vc_to_qubo
-// ```
+// Exports `docs/paper/examples/minimumvertexcover_to_qubo.json` and
+// `minimumvertexcover_to_qubo.result.json`.
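Review note: the `find_cheapest_path` / `reduce_along_path` pattern introduced in these examples is, conceptually, a shortest-path search over registered reduction edges. A minimal standalone sketch of the `MinimizeSteps` case (illustrative only; `min_step_path` and the edge list are assumptions, not the crate's API):

```rust
// BFS over a rule graph finds a minimum-step reduction path, which is what
// MinimizeSteps asks for. Hypothetical sketch, independent of the crate.
use std::collections::{HashMap, HashSet, VecDeque};

fn min_step_path(edges: &[(&str, &str)], src: &str, dst: &str) -> Option<Vec<String>> {
    let mut adj: HashMap<&str, Vec<&str>> = HashMap::new();
    for &(a, b) in edges {
        adj.entry(a).or_default().push(b);
    }
    let mut prev: HashMap<&str, &str> = HashMap::new();
    let mut seen: HashSet<&str> = HashSet::from([src]);
    let mut queue = VecDeque::from([src]);
    while let Some(node) = queue.pop_front() {
        if node == dst {
            // Reconstruct the path by walking predecessors back to src.
            let mut path = vec![node.to_string()];
            let mut cur = node;
            while let Some(&p) = prev.get(cur) {
                path.push(p.to_string());
                cur = p;
            }
            path.reverse();
            return Some(path);
        }
        for &next in adj.get(node).into_iter().flatten() {
            if seen.insert(next) {
                prev.insert(next, node);
                queue.push_back(next);
            }
        }
    }
    None
}

fn main() {
    let edges = [
        ("MinimumVertexCover", "QUBO"),
        ("QUBO", "ILP"),
        ("MinimumVertexCover", "ILP"),
    ];
    let path = min_step_path(&edges, "MinimumVertexCover", "ILP").unwrap();
    assert_eq!(path, ["MinimumVertexCover", "ILP"]);
    println!("{}", path.join(" -> "));
}
```

The real `ReductionGraph` additionally filters edges by variant maps and can weigh steps by overheads (`Minimize("num_vars")`) rather than hop count.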
 use problemreductions::export::*;
 use problemreductions::prelude::*;
+use problemreductions::rules::{Minimize, ReductionGraph};
 use problemreductions::topology::small_graphs::petersen;
 use problemreductions::topology::{Graph, SimpleGraph};
+use problemreductions::types::ProblemSize;

 pub fn run() {
-    println!("=== Vertex Covering -> QUBO Reduction ===\n");
+    println!("=== Vertex Cover -> QUBO Reduction ===\n");

-    // Petersen graph: 10 vertices, 15 edges, VC=6
     let (num_vertices, edges) = petersen();
     let vc = MinimumVertexCover::new(
         SimpleGraph::new(num_vertices, edges.clone()),
         vec![1i32; num_vertices],
     );

-    // Reduce to QUBO
-    let reduction = ReduceTo::<QUBO>::reduce_to(&vc);
-    let qubo = reduction.target_problem();
+    let graph = ReductionGraph::new();
+    let src_variant_bt =
+        ReductionGraph::variant_to_map(&MinimumVertexCover::<i32>::variant());
+    let dst_variant_bt = ReductionGraph::variant_to_map(&QUBO::<f64>::variant());
+    let path = graph
+        .find_cheapest_path(
+            "MinimumVertexCover",
+            &src_variant_bt,
+            "QUBO",
+            &dst_variant_bt,
+            &ProblemSize::new(vec![
+                ("num_vertices", vc.graph().num_vertices()),
+                ("num_edges", vc.graph().num_edges()),
+            ]),
+            &Minimize("num_vars"),
+        )
+        .expect("MinimumVertexCover -> QUBO path not found");
+    let reduction = graph
+        .reduce_along_path(&path, &vc as &dyn std::any::Any)
+        .expect("MinimumVertexCover -> QUBO path reduction failed");
+    let qubo: &QUBO<f64> = reduction.target_problem();

     println!("Source: MinimumVertexCover on Petersen graph (10 vertices, 15 edges)");
+    println!("Path: {}", path);
     println!("Target: QUBO with {} variables", qubo.num_variables());
     println!("Q matrix:");
     for row in qubo.matrix() {
         println!("  {:?}", row);
     }

-    // Solve QUBO with brute force
     let solver = BruteForce::new();
     let qubo_solutions = solver.find_all_best(qubo);

-    // Extract and verify solutions
     println!("\nOptimal solutions:");
     let mut solutions = Vec::new();
     for sol in &qubo_solutions {
@@ -68,8 +71,6 @@ pub fn run() {
         let size = selected.len();
         println!("  Cover vertices: {:?} ({} vertices)", selected, size);

-        // Closed-loop verification: check solution is valid in original problem
-        // MinimumVertexCover is a minimization problem, infeasible configs return Invalid
         let sol_size = vc.evaluate(&extracted);
         assert!(
             sol_size.is_valid(),
@@ -82,25 +83,11 @@ pub fn run() {
         });
     }

-    // All optimal solutions should have size 6
-    assert!(
-        solutions
-            .iter()
-            .all(|s| s.source_config.iter().filter(|&&x| x == 1).count() == 6),
-        "All optimal VC solutions on Petersen graph should have size 6"
-    );
-    println!("\nVerification passed: all solutions are valid with size 6");
+    println!("\nVerification passed: all solutions are valid");

-    // Export JSON
     let source_variant = variant_to_map(MinimumVertexCover::<i32>::variant());
     let target_variant = variant_to_map(QUBO::<f64>::variant());
-    let overhead = lookup_overhead(
-        "MinimumVertexCover",
-        &source_variant,
-        "QUBO",
-        &target_variant,
-    )
-    .expect("MinimumVertexCover -> QUBO overhead not found");
+    let overhead = graph.compose_path_overhead(&path);

     let data = ReductionData {
         source: ProblemSide {
@@ -124,8 +111,7 @@ pub fn run() {
     };
     let results = ResultData { solutions };

-    let name = "minimumvertexcover_to_qubo";
-    write_example(name, &data, &results);
+    write_example("minimumvertexcover_to_qubo", &data, &results);
 }

 fn main() {
diff --git a/examples/reduction_qubo_to_ilp.rs b/examples/reduction_qubo_to_ilp.rs
index 0ab13ceda..cbd976bcc 100644
--- a/examples/reduction_qubo_to_ilp.rs
+++ b/examples/reduction_qubo_to_ilp.rs
@@ -44,7 +44,7 @@ pub fn run() {
     let qubo = QUBO::from_matrix(matrix);

     // Reduce to ILP
-    let reduction = ReduceTo::<ILP>::reduce_to(&qubo);
+    let reduction = ReduceTo::<ILP<bool>>::reduce_to(&qubo);
     let ilp = reduction.target_problem();

     println!("Source: QUBO with {} variables", qubo.num_variables());
@@ -88,7 +88,7 @@ pub fn run() {
     // Export JSON
     let source_variant = variant_to_map(QUBO::<f64>::variant());
-    let target_variant = variant_to_map(ILP::variant());
+    let target_variant = variant_to_map(ILP::<bool>::variant());
     let overhead = lookup_overhead("QUBO", &source_variant, "ILP", &target_variant)
         .expect("QUBO -> ILP overhead not found");
@@ -102,7 +102,7 @@ pub fn run() {
             }),
         },
         target: ProblemSide {
-            problem: ILP::NAME.to_string(),
+            problem: ILP::<bool>::NAME.to_string(),
             variant: target_variant,
             instance: serde_json::json!({
                 "num_vars": ilp.num_variables(),
diff --git a/examples/reduction_travelingsalesman_to_ilp.rs b/examples/reduction_travelingsalesman_to_ilp.rs
index b0ce59bfd..a6bbd871d 100644
--- a/examples/reduction_travelingsalesman_to_ilp.rs
+++ b/examples/reduction_travelingsalesman_to_ilp.rs
@@ -28,7 +28,7 @@ pub fn run() {
     );

     // 2. Reduce to ILP
-    let reduction = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction = ReduceTo::<ILP<i32>>::reduce_to(&problem);
     let ilp = reduction.target_problem();

     // 3. Print transformation
@@ -72,7 +72,7 @@ pub fn run() {
     }];

     let source_variant = variant_to_map(TravelingSalesman::<i32>::variant());
-    let target_variant = variant_to_map(ILP::variant());
+    let target_variant = variant_to_map(ILP::<i32>::variant());
     let overhead = lookup_overhead("TravelingSalesman", &source_variant, "ILP", &target_variant)
         .unwrap_or_default();
     let edges: Vec<(usize, usize)> = problem.edges().iter().map(|&(u, v, _)| (u, v)).collect();
@@ -88,7 +88,7 @@ pub fn run() {
             }),
         },
         target: ProblemSide {
-            problem: ILP::NAME.to_string(),
+            problem: ILP::<i32>::NAME.to_string(),
             variant: target_variant,
             instance: serde_json::json!({
                 "num_vars": ilp.num_vars,
diff --git a/problemreductions-cli/tests/cli_tests.rs b/problemreductions-cli/tests/cli_tests.rs
index d76028214..133487268 100644
--- a/problemreductions-cli/tests/cli_tests.rs
+++ b/problemreductions-cli/tests/cli_tests.rs
@@ -1324,8 +1324,8 @@ fn test_path_overall_overhead_composition() {
     let content = std::fs::read_to_string(&tmp).unwrap();
     let json: serde_json::Value = serde_json::from_str(&content).unwrap();

-    // Must have exactly 2 steps
-    assert_eq!(json["steps"].as_u64().unwrap(), 2);
+    // Must have at least 2 steps (K3→KN variant cast adds an extra step)
+    assert!(json["steps"].as_u64().unwrap() >= 2);

     // Collect overall overhead into a map
     let overall: std::collections::HashMap<String, String> = json["overall_overhead"]
@@ -1393,7 +1393,7 @@ fn test_path_all_overall_overhead() {
 #[test]
 fn test_path_single_step_no_overall_text() {
     // Single-step path should NOT show the Overall section
-    let output = pred().args(["path", "MIS", "QUBO"]).output().unwrap();
+    let output = pred().args(["path", "MIS", "MVC"]).output().unwrap();
     assert!(output.status.success());
     let stdout = String::from_utf8(output.stdout).unwrap();
     assert!(
diff --git a/scripts/generate_qubo_tests.py b/scripts/generate_qubo_tests.py
index a404ec9e2..24cfb2f67 100644
--- a/scripts/generate_qubo_tests.py
+++ b/scripts/generate_qubo_tests.py
@@ -46,25 +46,6 @@ def save_test(name: str, data: dict, outdir: Path):
     print(f"  wrote {path} ({path.stat().st_size} bytes)")


-def generate_vertex_covering(outdir: Path):
-    """Minimum Vertex Cover on a small graph (4 nodes, 5 edges)."""
-    edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]
-    n_nodes = 4
-    penalty = 8.0
-    g = qubogen.Graph(edges=np.array(edges), n_nodes=n_nodes)
-    Q = qubogen.qubo_mvc(g, penalty=penalty)
-
-    qubo_result = brute_force_qubo(Q)
-
-    save_test("minimumvertexcover_to_qubo", {
-        "problem": "MinimumVertexCover",
-        "source": {"num_vertices": n_nodes, "edges": edges, "penalty": penalty},
-        "qubo_matrix": Q.tolist(),
-        "qubo_num_vars": int(Q.shape[0]),
-        "qubo_optimal": qubo_result,
-    }, outdir)
-
-
 def generate_independent_set(outdir: Path):
     """Independent Set on a small graph.
@@ -227,7 +208,6 @@ def main():
     outdir.mkdir(parents=True, exist_ok=True)

     print("Generating QUBO test datasets...")
-    generate_vertex_covering(outdir)
     generate_independent_set(outdir)
     generate_graph_coloring(outdir)
     generate_set_packing(outdir)
diff --git a/src/expr.rs b/src/expr.rs
index 96a647682..8640959fb 100644
--- a/src/expr.rs
+++ b/src/expr.rs
@@ -130,6 +130,75 @@ impl Expr {
             Expr::Exp(_) | Expr::Log(_) | Expr::Sqrt(_) => false,
         }
     }
+
+    /// Check whether this expression is suitable for asymptotic complexity notation.
+    ///
+    /// This is intentionally conservative for symbolic size formulas:
+    /// - rejects explicit multiplicative constant factors like `3 * n`
+    /// - rejects additive constant terms like `n + 1`
+    /// - allows constants used as exponents (e.g. `n^(1/3)`)
+    /// - allows constants used as exponential bases (e.g. `2^n`)
+    ///
+    /// The goal is to accept expressions that already look like reduced
+    /// asymptotic notation, rather than exact-count formulas.
+    pub fn is_valid_complexity_notation(&self) -> bool {
+        self.is_valid_complexity_notation_inner()
+    }
+
+    fn is_valid_complexity_notation_inner(&self) -> bool {
+        match self {
+            Expr::Const(c) => (*c - 1.0).abs() < 1e-10,
+            Expr::Var(_) => true,
+            Expr::Add(a, b) => {
+                a.constant_value().is_none()
+                    && b.constant_value().is_none()
+                    && a.is_valid_complexity_notation_inner()
+                    && b.is_valid_complexity_notation_inner()
+            }
+            Expr::Mul(a, b) => {
+                a.constant_value().is_none()
+                    && b.constant_value().is_none()
+                    && a.is_valid_complexity_notation_inner()
+                    && b.is_valid_complexity_notation_inner()
+            }
+            Expr::Pow(base, exp) => {
+                let base_is_constant = base.constant_value().is_some();
+                let exp_is_constant = exp.constant_value().is_some();
+
+                let base_ok = if base_is_constant {
+                    base.is_valid_exponential_base()
+                } else {
+                    base.is_valid_complexity_notation_inner()
+                };
+
+                let exp_ok = if exp_is_constant {
+                    true
+                } else {
+                    exp.is_valid_complexity_notation_inner()
+                };
+
+                base_ok && exp_ok
+            }
+            Expr::Exp(a) | Expr::Log(a) | Expr::Sqrt(a) => a.is_valid_complexity_notation_inner(),
+        }
+    }
+
+    fn is_valid_exponential_base(&self) -> bool {
+        self.constant_value().is_some_and(|c| c > 0.0)
+    }
+
+    fn constant_value(&self) -> Option<f64> {
+        match self {
+            Expr::Const(c) => Some(*c),
+            Expr::Var(_) => None,
+            Expr::Add(a, b) => Some(a.constant_value()? + b.constant_value()?),
+            Expr::Mul(a, b) => Some(a.constant_value()? * b.constant_value()?),
+            Expr::Pow(base, exp) => Some(base.constant_value()?.powf(exp.constant_value()?)),
+            Expr::Exp(a) => Some(a.constant_value()?.exp()),
+            Expr::Log(a) => Some(a.constant_value()?.ln()),
+            Expr::Sqrt(a) => Some(a.constant_value()?.sqrt()),
+        }
+    }
 }

 impl fmt::Display for Expr {
@@ -164,7 +233,12 @@ impl fmt::Display for Expr {
                 } else {
                     format!("{base}")
                 };
-                write!(f, "{base_str}^{exp}")
+                let exp_str = if matches!(exp.as_ref(), Expr::Add(_, _) | Expr::Mul(_, _)) {
+                    format!("({exp})")
+                } else {
+                    format!("{exp}")
+                };
+                write!(f, "{base_str}^{exp_str}")
             }
             Expr::Exp(a) => write!(f, "exp({a})"),
             Expr::Log(a) => write!(f, "log({a})"),
@@ -181,6 +255,302 @@ impl std::ops::Add for Expr {
     }
 }

+/// Error returned when analyzing asymptotic behavior.
+#[derive(Clone, Debug, PartialEq, Eq)]
+pub enum AsymptoticAnalysisError {
+    Unsupported(String),
+}
+
+impl fmt::Display for AsymptoticAnalysisError {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        match self {
+            Self::Unsupported(expr) => write!(f, "unsupported asymptotic expression: {expr}"),
+        }
+    }
+}
+
+impl std::error::Error for AsymptoticAnalysisError {}
+
+/// Return a normalized `Expr` representing the asymptotic behavior of `expr`.
+///
+/// Normalization includes:
+/// - commutativity/associativity of `+` and `*`
+/// - removal of positive constant factors
+/// - removal of additive constant terms
+/// - normalization of `sqrt(x)` into `x^(1/2)`
+/// - combination of repeated multiplicative factors
+/// - canonical identities like `exp(a) * exp(b) = exp(a + b)`
+pub fn asymptotic_normal_form(expr: &Expr) -> Result<Expr, AsymptoticAnalysisError> {
+    match expr {
+        Expr::Const(c) => {
+            if *c >= 0.0 {
+                Ok(Expr::Const(1.0))
+            } else {
+                Err(AsymptoticAnalysisError::Unsupported(expr.to_string()))
+            }
+        }
+        Expr::Var(name) => Ok(Expr::Var(name.clone())),
+        Expr::Add(a, b) => {
+            let mut terms = Vec::new();
+            collect_sum_term(a, &mut terms)?;
+            collect_sum_term(b, &mut terms)?;
+            Ok(build_sum(terms))
+        }
+        Expr::Mul(a, b) => {
+            let mut factors = Vec::new();
+            collect_product_factor(a, &mut factors)?;
+            collect_product_factor(b, &mut factors)?;
+            Ok(build_product(factors))
+        }
+        Expr::Pow(base, exp) => normalize_pow(base, exp, expr),
+        Expr::Exp(arg) => Ok(build_exp(asymptotic_normal_form(arg)?)),
+        Expr::Log(arg) => Ok(build_log(asymptotic_normal_form(arg)?)),
+        Expr::Sqrt(arg) => Ok(build_pow(asymptotic_normal_form(arg)?, 0.5)),
+    }
+}
+
+fn normalize_pow(base: &Expr, exp: &Expr, whole: &Expr) -> Result<Expr, AsymptoticAnalysisError> {
+    match (base.constant_value(), exp.constant_value()) {
+        (Some(c), Some(_)) => {
+            if c >= 0.0 {
+                Ok(Expr::Const(1.0))
+            } else {
+                Err(AsymptoticAnalysisError::Unsupported(whole.to_string()))
+            }
+        }
+        (Some(base_const), None) => {
+            if base_const <= 0.0 || (base_const - 1.0).abs() < 1e-10 {
+                return Err(AsymptoticAnalysisError::Unsupported(whole.to_string()));
+            }
+            Ok(build_exp_base(base_const, asymptotic_normal_form(exp)?))
+        }
+        (None, Some(exp_const)) => {
+            if exp_const < 0.0 {
+                return Err(AsymptoticAnalysisError::Unsupported(whole.to_string()));
+            }
+            Ok(build_pow(asymptotic_normal_form(base)?, exp_const))
+        }
+        (None, None) => Err(AsymptoticAnalysisError::Unsupported(whole.to_string())),
+    }
+}
+
+fn collect_sum_term(expr: &Expr, out: &mut Vec<Expr>) -> Result<(), AsymptoticAnalysisError> {
+    if let Some(c) = expr.constant_value() {
+        if c >= 0.0 {
+            return Ok(());
+        }
+        return Err(AsymptoticAnalysisError::Unsupported(expr.to_string()));
+    }
+    out.push(asymptotic_normal_form(expr)?);
+    Ok(())
+}
+
+fn collect_product_factor(expr: &Expr, out: &mut Vec<Expr>) -> Result<(), AsymptoticAnalysisError> {
+    if let Some(c) = expr.constant_value() {
+        if c > 0.0 {
+            return Ok(());
+        }
+        return Err(AsymptoticAnalysisError::Unsupported(expr.to_string()));
+    }
+    out.push(asymptotic_normal_form(expr)?);
+    Ok(())
+}
+
+fn build_sum(terms: Vec<Expr>) -> Expr {
+    let mut flat = Vec::new();
+    for term in terms {
+        match term {
+            Expr::Const(c) if (c - 1.0).abs() < 1e-10 => {}
+            Expr::Add(a, b) => {
+                flat.push(*a);
+                flat.push(*b);
+            }
+            other => flat.push(other),
+        }
+    }
+
+    if flat.is_empty() {
+        return Expr::Const(1.0);
+    }
+
+    let mut dedup = HashMap::<String, Expr>::new();
+    for term in flat {
+        dedup.entry(term.to_string()).or_insert(term);
+    }
+
+    let mut values: Vec<_> = dedup.into_values().collect();
+    values.sort_by_key(|term| term.to_string());
+    combine_add_chain(values)
+}
+
+fn build_product(factors: Vec<Expr>) -> Expr {
+    let mut flat = Vec::new();
+    for factor in factors {
+        match factor {
+            Expr::Const(c) if (c - 1.0).abs() < 1e-10 => {}
+            Expr::Mul(a, b) => {
+                flat.push(*a);
+                flat.push(*b);
+            }
+            other => flat.push(other),
+        }
+    }
+
+    if flat.is_empty() {
+        return Expr::Const(1.0);
+    }
+
+    let mut power_terms: HashMap<String, (f64, Expr)> = HashMap::new();
+    let mut natural_exp_args = Vec::new();
+    let mut base_exp_args: HashMap<String, (f64, Vec<Expr>)> = HashMap::new();
+
+    for factor in flat {
+        match factor {
+            Expr::Exp(arg) => natural_exp_args.push(*arg),
+            Expr::Pow(base, exp)
+                if base.constant_value().is_some() && exp.constant_value().is_none() =>
+            {
+                let base_const = base.constant_value().unwrap();
+                let key = format_float(base_const);
+                base_exp_args
+                    .entry(key)
+                    .or_insert_with(|| (base_const, Vec::new()))
+                    .1
+                    .push(*exp);
+            }
+            other => {
+                let (base, exp) = into_base_and_exponent(other);
+                let key = base.to_string();
+                power_terms
+                    .entry(key)
+                    .and_modify(|(total, _)| *total += exp)
+                    .or_insert((exp, base));
+            }
+        }
+    }
+
+    let mut result = Vec::new();
+
+    for (_key, (exp, base)) in power_terms {
+        if exp.abs() < 1e-10 {
+            continue;
+        }
+        result.push(build_pow(base, exp));
+    }
+
+    if !natural_exp_args.is_empty() {
+        result.push(build_exp(build_sum(natural_exp_args)));
+    }
+
+    for (_key, (base, args)) in base_exp_args {
+        result.push(build_exp_base(base, build_sum(args)));
+    }
+
+    if result.is_empty() {
+        return Expr::Const(1.0);
+    }
+
+    result.sort_by_key(|factor| factor.to_string());
+    combine_mul_chain(result)
+}
+
+fn build_pow(base: Expr, exp: f64) -> Expr {
+    if exp.abs() < 1e-10 {
+        return Expr::Const(1.0);
+    }
+    if (exp - 1.0).abs() < 1e-10 {
+        return base;
+    }
+
+    match base {
+        Expr::Const(c) if (c - 1.0).abs() < 1e-10 => Expr::Const(1.0),
+        Expr::Pow(inner, inner_exp) => {
+            if let Expr::Const(inner_exp_value) = inner_exp.as_ref() {
+                build_pow(*inner, inner_exp_value * exp)
+            } else {
+                Expr::Pow(
+                    Box::new(Expr::Pow(inner, inner_exp)),
+                    Box::new(Expr::Const(exp)),
+                )
+            }
+        }
+        Expr::Mul(a, b) => build_product(vec![build_pow(*a, exp), build_pow(*b, exp)]),
+        other => Expr::Pow(Box::new(other), Box::new(Expr::Const(exp))),
+    }
+}
+
+fn build_exp(arg: Expr) -> Expr {
+    match arg {
+        Expr::Log(inner) => *inner,
+        other => Expr::Exp(Box::new(other)),
+    }
+}
+
+fn build_exp_base(base: f64, arg: Expr) -> Expr {
+    if (base - std::f64::consts::E).abs() < 1e-10 {
+        return build_exp(arg);
+    }
+    Expr::Pow(Box::new(Expr::Const(base)), Box::new(arg))
+}
+
+fn build_log(arg: Expr) -> Expr {
+    match arg {
+        Expr::Const(_) => Expr::Const(1.0),
+        Expr::Pow(base, exp) if exp.constant_value().is_some() => build_log(*base),
+        Expr::Pow(base, exp) if base.constant_value().is_some() => *exp,
+        Expr::Exp(inner) => *inner,
+        other => Expr::Log(Box::new(other)),
+    }
+}
+
+fn into_base_and_exponent(expr: Expr) -> (Expr, f64) {
+    match expr {
+        Expr::Pow(base, exp) => match *exp {
+            Expr::Const(exp_value) => (*base, exp_value),
+            other => (Expr::Pow(base, Box::new(other)), 1.0),
+        },
+        other => (other, 1.0),
+    }
+}
+
+fn combine_add_chain(mut terms: Vec<Expr>) -> Expr {
+    if terms.is_empty() {
+        return Expr::Const(1.0);
+    }
+    let mut expr = terms.remove(0);
+    for term in terms {
+        expr = Expr::add(expr, term);
+    }
+    expr
+}
+
+fn combine_mul_chain(mut factors: Vec<Expr>) -> Expr {
+    if factors.is_empty() {
+        return Expr::Const(1.0);
+    }
+    let mut expr = factors.remove(0);
+    for factor in factors {
+        expr = Expr::mul(expr, factor);
+    }
+    expr
+}
+
+fn format_float(value: f64) -> String {
+    let rounded = value.round();
+    if (value - rounded).abs() < 1e-10 {
+        return format!("{}", rounded as i64);
+    }
+
+    let mut s = format!("{value:.10}");
+    while s.contains('.') && s.ends_with('0') {
+        s.pop();
+    }
+    if s.ends_with('.') {
+        s.pop();
+    }
+    s
+}
+
 // --- Runtime expression parser ---

 /// Parse an expression string into an `Expr`.
diff --git a/src/lib.rs b/src/lib.rs
index c9ada7ef1..87b0126a0 100644
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -10,7 +10,7 @@
 //! | [`models`] | Problem types — [`graph`](models::graph), [`formula`](models::formula), [`set`](models::set), [`algebraic`](models::algebraic), [`misc`](models::misc) |
 //! | [`rules`] | Reduction rules, [`ReductionGraph`](rules::ReductionGraph) for path search |
 //! | [`solvers`] | [`BruteForce`] and [`ILPSolver`](solvers::ILPSolver) |
-//! | [`topology`] | Graph types — [`SimpleGraph`](topology::SimpleGraph), [`HyperGraph`](topology::HyperGraph), [`UnitDiskGraph`](topology::UnitDiskGraph), etc. |
+//! | [`topology`] | Graph types — [`SimpleGraph`](topology::SimpleGraph), [`UnitDiskGraph`](topology::UnitDiskGraph), etc. |
 //! | [`traits`] | Core traits — [`Problem`], [`OptimizationProblem`], [`SatisfactionProblem`] |
 //! | [`types`] | [`SolutionSize`], [`Direction`], [`ProblemSize`], [`WeightElement`] |
 //! | [`variant`] | Variant parameter system for problem type parameterization |
@@ -58,6 +58,7 @@ pub mod prelude {

 // Re-export commonly used items at crate root
 pub use error::{ProblemError, Result};
+pub use expr::{asymptotic_normal_form, AsymptoticAnalysisError};
 pub use registry::{ComplexityClass, ProblemInfo};
 pub use solvers::{BruteForce, Solver};
 pub use traits::{OptimizationProblem, Problem, SatisfactionProblem};
diff --git a/src/models/algebraic/closest_vector_problem.rs b/src/models/algebraic/closest_vector_problem.rs
index fe5d69c44..7588d634c 100644
--- a/src/models/algebraic/closest_vector_problem.rs
+++ b/src/models/algebraic/closest_vector_problem.rs
@@ -3,7 +3,6 @@
 //! Given a lattice basis B and target vector t, find integer coefficients x
 //! minimizing ‖Bx - t‖₂.

-use crate::models::algebraic::VarBounds;
 use crate::registry::{FieldInfo, ProblemSchemaEntry};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize};
@@ -22,6 +21,82 @@ inventory::submit! {
     }
 }

+/// Variable bounds (None = unbounded in that direction).
+///
+/// Represents the lower and upper bounds for an integer variable.
+/// A value of `None` indicates the variable is unbounded in that direction.
+#[derive(Debug, Clone, Copy, Default, PartialEq, Eq, Serialize, Deserialize)]
+pub struct VarBounds {
+    /// Lower bound (None = -infinity).
+    pub lower: Option<i64>,
+    /// Upper bound (None = +infinity).
+    pub upper: Option<i64>,
+}
+
+impl VarBounds {
+    /// Create bounds for a binary variable: 0 <= x <= 1.
+    pub fn binary() -> Self {
+        Self {
+            lower: Some(0),
+            upper: Some(1),
+        }
+    }
+
+    /// Create bounds for a non-negative variable: x >= 0.
+    pub fn non_negative() -> Self {
+        Self {
+            lower: Some(0),
+            upper: None,
+        }
+    }
+
+    /// Create unbounded variable: -infinity < x < +infinity.
+    pub fn unbounded() -> Self {
+        Self {
+            lower: None,
+            upper: None,
+        }
+    }
+
+    /// Create bounds with explicit lower and upper: lo <= x <= hi.
+    pub fn bounded(lo: i64, hi: i64) -> Self {
+        Self {
+            lower: Some(lo),
+            upper: Some(hi),
+        }
+    }
+
+    /// Check if a value satisfies these bounds.
+    pub fn contains(&self, value: i64) -> bool {
+        if let Some(lo) = self.lower {
+            if value < lo {
+                return false;
+            }
+        }
+        if let Some(hi) = self.upper {
+            if value > hi {
+                return false;
+            }
+        }
+        true
+    }
+
+    /// Get the number of integer values in this bound range.
+    /// Returns None if unbounded in either direction.
+    pub fn num_values(&self) -> Option<usize> {
+        match (self.lower, self.upper) {
+            (Some(lo), Some(hi)) => {
+                if hi >= lo {
+                    Some((hi - lo + 1) as usize)
+                } else {
+                    Some(0)
+                }
+            }
+            _ => None,
+        }
+    }
+}
+
 /// Closest Vector Problem (CVP).
 ///
 /// Given a lattice basis B ∈ R^{m×n} and target t ∈ R^m,
diff --git a/src/models/algebraic/ilp.rs b/src/models/algebraic/ilp.rs
index fe2c31f6c..fb263100b 100644
--- a/src/models/algebraic/ilp.rs
+++ b/src/models/algebraic/ilp.rs
@@ -2,11 +2,16 @@
 //!
 //! ILP optimizes a linear objective over integer variables subject to linear constraints.
 //! This is a fundamental "hub" problem that many other NP-hard problems can be reduced to.
+//!
+//! The type parameter `V` determines the variable domain:
+//! - `ILP<bool>`: binary variables (0 or 1)
+//! - `ILP<i32>`: non-negative integer variables (0..2^31-1)

 use crate::registry::{FieldInfo, ProblemSchemaEntry};
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize};
 use serde::{Deserialize, Serialize};
+use std::marker::PhantomData;

 inventory::submit! {
     ProblemSchemaEntry {
@@ -15,7 +20,6 @@ inventory::submit! {
         description: "Optimize linear objective subject to linear constraints",
         fields: &[
             FieldInfo { name: "num_vars", type_name: "usize", description: "Number of integer variables" },
-            FieldInfo { name: "bounds", type_name: "Vec<VarBounds>", description: "Variable bounds" },
             FieldInfo { name: "constraints", type_name: "Vec<LinearConstraint>", description: "Linear constraints" },
             FieldInfo { name: "objective", type_name: "Vec<(usize, f64)>", description: "Sparse objective coefficients" },
             FieldInfo { name: "sense", type_name: "ObjectiveSense", description: "Optimization direction" },
         ]
     }
 }

-/// Variable bounds (None = unbounded in that direction).
+/// Sealed trait for ILP variable domains.
 ///
-/// Represents the lower and upper bounds for an integer variable.
-/// A value of `None` indicates the variable is unbounded in that direction.
-#[derive(Debug, Clone, Copy, Default, PartialEq, Eq, Serialize, Deserialize)]
-pub struct VarBounds {
-    /// Lower bound (None = -infinity).
-    pub lower: Option<i64>,
-    /// Upper bound (None = +infinity).
-    pub upper: Option<i64>,
+/// `bool` = binary variables (0 or 1), `i32` = non-negative integers (0..2^31-1).
+pub trait VariableDomain: 'static + Clone + std::fmt::Debug + Send + Sync {
+    /// Number of possible values per variable (used by `dims()`).
+    const DIMS_PER_VAR: usize;
+    /// Name for the variant system (e.g., "bool", "i32").
+    const NAME: &'static str;
 }

-impl VarBounds {
-    /// Create bounds for a binary variable: 0 <= x <= 1.
-    pub fn binary() -> Self {
-        Self {
-            lower: Some(0),
-            upper: Some(1),
-        }
-    }
-
-    /// Create bounds for a non-negative variable: x >= 0.
-    pub fn non_negative() -> Self {
-        Self {
-            lower: Some(0),
-            upper: None,
-        }
-    }
-
-    /// Create unbounded variable: -infinity < x < +infinity.
-    pub fn unbounded() -> Self {
-        Self {
-            lower: None,
-            upper: None,
-        }
-    }
-
-    /// Create bounds with explicit lower and upper: lo <= x <= hi.
-    pub fn bounded(lo: i64, hi: i64) -> Self {
-        Self {
-            lower: Some(lo),
-            upper: Some(hi),
-        }
-    }
-
-    /// Check if a value satisfies these bounds.
-    pub fn contains(&self, value: i64) -> bool {
-        if let Some(lo) = self.lower {
-            if value < lo {
-                return false;
-            }
-        }
-        if let Some(hi) = self.upper {
-            if value > hi {
-                return false;
-            }
-        }
-        true
-    }
+impl VariableDomain for bool {
+    const DIMS_PER_VAR: usize = 2;
+    const NAME: &'static str = "bool";
+}

-    /// Get the number of integer values in this bound range.
-    /// Returns None if unbounded in either direction.
-    pub fn num_values(&self) -> Option<usize> {
-        match (self.lower, self.upper) {
-            (Some(lo), Some(hi)) => {
-                if hi >= lo {
-                    Some((hi - lo + 1) as usize)
-                } else {
-                    Some(0)
-                }
-            }
-            _ => None,
-        }
-    }
+impl VariableDomain for i32 {
+    const DIMS_PER_VAR: usize = (i32::MAX as usize) + 1;
+    const NAME: &'static str = "i32";
 }

 /// Comparison operator for linear constraints.
@@ -187,22 +135,26 @@ pub enum ObjectiveSense {

 /// Integer Linear Programming (ILP) problem.
/// /// An ILP consists of: -/// - A set of integer variables with bounds +/// - A set of integer variables with a domain determined by `V` /// - Linear constraints on those variables /// - A linear objective function to optimize /// - An optimization sense (maximize or minimize) /// +/// # Type Parameter +/// +/// - `V = bool`: binary variables (0 or 1) +/// - `V = i32`: non-negative integer variables +/// /// # Example /// /// ``` -/// use problemreductions::models::algebraic::{ILP, VarBounds, Comparison, LinearConstraint, ObjectiveSense}; +/// use problemreductions::models::algebraic::{ILP, LinearConstraint, ObjectiveSense}; /// use problemreductions::Problem; /// -/// // Create a simple ILP: maximize x0 + 2*x1 +/// // Create a simple binary ILP: maximize x0 + 2*x1 /// // subject to: x0 + x1 <= 3, x0, x1 binary -/// let ilp = ILP::new( +/// let ilp = ILP::::new( /// 2, -/// vec![VarBounds::binary(), VarBounds::binary()], /// vec![LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 3.0)], /// vec![(0, 1.0), (1, 2.0)], /// ObjectiveSense::Maximize, @@ -211,70 +163,40 @@ pub enum ObjectiveSense { /// assert_eq!(ilp.num_variables(), 2); /// ``` #[derive(Debug, Clone, Serialize, Deserialize)] -pub struct ILP { +#[serde(bound(serialize = "", deserialize = ""))] +pub struct ILP { /// Number of variables. pub num_vars: usize, - /// Bounds for each variable. - pub bounds: Vec, /// Linear constraints. pub constraints: Vec, /// Sparse objective coefficients: (var_index, coefficient). pub objective: Vec<(usize, f64)>, /// Optimization direction. pub sense: ObjectiveSense, + #[serde(skip)] + _marker: PhantomData, } -impl ILP { +impl ILP { /// Create a new ILP problem. 
- /// - /// # Arguments - /// * `num_vars` - Number of variables - /// * `bounds` - Bounds for each variable (must have length num_vars) - /// * `constraints` - List of linear constraints - /// * `objective` - Sparse objective coefficients - /// * `sense` - Maximize or minimize - /// - /// # Panics - /// Panics if bounds.len() != num_vars. pub fn new( num_vars: usize, - bounds: Vec, constraints: Vec, objective: Vec<(usize, f64)>, sense: ObjectiveSense, ) -> Self { - assert_eq!(bounds.len(), num_vars, "bounds length must match num_vars"); Self { num_vars, - bounds, constraints, objective, sense, + _marker: PhantomData, } } - /// Create a binary ILP (all variables are 0-1). - /// - /// This is a convenience constructor for common binary optimization problems. - pub fn binary( - num_vars: usize, - constraints: Vec, - objective: Vec<(usize, f64)>, - sense: ObjectiveSense, - ) -> Self { - let bounds = vec![VarBounds::binary(); num_vars]; - Self::new(num_vars, bounds, constraints, objective, sense) - } - /// Create an empty ILP with no variables. pub fn empty() -> Self { - Self { - num_vars: 0, - bounds: vec![], - constraints: vec![], - objective: vec![], - sense: ObjectiveSense::Minimize, - } + Self::new(0, vec![], vec![], ObjectiveSense::Minimize) } /// Evaluate the objective function for given variable values. @@ -285,40 +207,20 @@ impl ILP { .sum() } - /// Check if all bounds are satisfied for given variable values. - pub fn bounds_satisfied(&self, values: &[i64]) -> bool { - if values.len() != self.num_vars { - return false; - } - for (i, &value) in values.iter().enumerate() { - if !self.bounds[i].contains(value) { - return false; - } - } - true - } - /// Check if all constraints are satisfied for given variable values. pub fn constraints_satisfied(&self, values: &[i64]) -> bool { self.constraints.iter().all(|c| c.is_satisfied(values)) } - /// Check if a solution is feasible (satisfies bounds and constraints). 
+    /// Check if a solution is feasible (satisfies constraints).
     pub fn is_feasible(&self, values: &[i64]) -> bool {
-        self.bounds_satisfied(values) && self.constraints_satisfied(values)
+        values.len() == self.num_vars && self.constraints_satisfied(values)
     }
 
     /// Convert a configuration (Vec<usize>) to integer values (Vec<i64>).
-    /// The configuration encodes variable values as offsets from lower bounds.
+    /// For bool: config 0→0, 1→1. For i32: config index = value.
     fn config_to_values(&self, config: &[usize]) -> Vec<i64> {
-        config
-            .iter()
-            .enumerate()
-            .map(|(i, &c)| {
-                let lo = self.bounds.get(i).and_then(|b| b.lower).unwrap_or(0);
-                lo + c as i64
-            })
-            .collect()
+        config.iter().map(|&c| c as i64).collect()
     }
 
     /// Get the number of variables.
@@ -337,19 +239,12 @@ impl ILP {
     }
 }
 
-impl Problem for ILP {
+impl<V: VariableDomain> Problem for ILP<V> {
     const NAME: &'static str = "ILP";
     type Metric = SolutionSize;
 
     fn dims(&self) -> Vec<usize> {
-        self.bounds
-            .iter()
-            .map(|b| {
-                b.num_values().expect(
-                    "ILP brute-force enumeration requires all variables to have finite bounds",
-                )
-            })
-            .collect()
+        vec![V::DIMS_PER_VAR; self.num_vars]
     }
 
     fn evaluate(&self, config: &[usize]) -> SolutionSize {
@@ -361,11 +256,11 @@ impl Problem for ILP {
     }
 
     fn variant() -> Vec<(&'static str, &'static str)> {
-        crate::variant_params![]
+        vec![("variable", V::NAME)]
     }
 }
 
-impl OptimizationProblem for ILP {
+impl<V: VariableDomain> OptimizationProblem for ILP<V> {
     type Value = f64;
 
     fn direction(&self) -> Direction {
@@ -377,7 +272,8 @@ impl OptimizationProblem for ILP {
 }
 
 crate::declare_variants! {
-    ILP => "num_variables^num_variables",
+    ILP<bool> => "2^num_vars",
+    ILP<i32> => "num_vars^num_vars",
 }
 
 #[cfg(test)]
diff --git a/src/models/algebraic/mod.rs b/src/models/algebraic/mod.rs
index 71964f28c..6cfc0069d 100644
--- a/src/models/algebraic/mod.rs
+++ b/src/models/algebraic/mod.rs
@@ -12,6 +12,6 @@ mod ilp;
 mod qubo;
 
 pub use bmf::BMF;
-pub use closest_vector_problem::ClosestVectorProblem;
-pub use ilp::{Comparison, LinearConstraint, ObjectiveSense, VarBounds, ILP};
+pub use closest_vector_problem::{ClosestVectorProblem, VarBounds};
+pub use ilp::{Comparison, LinearConstraint, ObjectiveSense, VariableDomain, ILP};
 pub use qubo::QUBO;
diff --git a/src/rules/analysis.rs b/src/rules/analysis.rs
new file mode 100644
index 000000000..570a9b20a
--- /dev/null
+++ b/src/rules/analysis.rs
@@ -0,0 +1,451 @@
+//! Analysis utilities for the reduction graph.
+//!
+//! Detects primitive reduction rules that are dominated by composite paths,
+//! using asymptotic normalization plus monomial-dominance comparison.
+//!
+//! This analysis is **sound but incomplete**: it reports `Dominated` only when
+//! the symbolic comparison is trustworthy, and `Unknown` when metadata is too
+//! weak to compare safely.
+
+use crate::expr::{asymptotic_normal_form, Expr};
+use crate::rules::graph::{ReductionGraph, ReductionPath};
+use crate::rules::registry::ReductionOverhead;
+use std::collections::BTreeMap;
+use std::fmt;
+
+/// Result of comparing one primitive rule against one composite path.
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub enum ComparisonStatus {
+    /// Composite is equal or better on all common fields.
+    Dominated,
+    /// Composite is worse on at least one common field.
+    NotDominated,
+    /// Cannot decide: expression not normalizable or path not trustworthy.
+    Unknown,
+}
+
+/// A primitive reduction rule proven dominated by a composite path.
+#[derive(Debug, Clone)]
+pub struct DominatedRule {
+    pub source_name: &'static str,
+    pub source_variant: BTreeMap<String, String>,
+    pub target_name: &'static str,
+    pub target_variant: BTreeMap<String, String>,
+    pub primitive_overhead: ReductionOverhead,
+    pub dominating_path: ReductionPath,
+    pub composed_overhead: ReductionOverhead,
+    pub comparable_fields: Vec<String>,
+}
+
+impl DominatedRule {
+    pub fn source_display(&self) -> String {
+        format_problem_variant(self.source_name, &self.source_variant)
+    }
+
+    pub fn target_display(&self) -> String {
+        format_problem_variant(self.target_name, &self.target_variant)
+    }
+}
+
+impl fmt::Display for DominatedRule {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        write!(f, "{} -> {}", self.source_display(), self.target_display())
+    }
+}
+
+/// A candidate comparison that could not be decided soundly.
+#[derive(Debug, Clone)]
+pub struct UnknownComparison {
+    pub source_name: &'static str,
+    pub source_variant: BTreeMap<String, String>,
+    pub target_name: &'static str,
+    pub target_variant: BTreeMap<String, String>,
+    pub candidate_path: ReductionPath,
+    pub reason: String,
+}
+
+impl UnknownComparison {
+    pub fn source_display(&self) -> String {
+        format_problem_variant(self.source_name, &self.source_variant)
+    }
+
+    pub fn target_display(&self) -> String {
+        format_problem_variant(self.target_name, &self.target_variant)
+    }
+}
+
+impl fmt::Display for UnknownComparison {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        write!(f, "{} -> {}", self.source_display(), self.target_display())
+    }
+}
+
+pub fn format_problem_variant(name: &str, variant: &BTreeMap<String, String>) -> String {
+    if variant.is_empty() {
+        return name.to_string();
+    }
+
+    let vars = variant
+        .iter()
+        .map(|(k, v)| format!("{k}: {v:?}"))
+        .collect::<Vec<_>>()
+        .join(", ");
+    format!("{name} {{{vars}}}")
+}
+
+// ────────── Polynomial normalization ──────────
+
+/// A monomial: coefficient × ∏(variable ^ exponent).
+#[derive(Debug, Clone)]
+struct Monomial {
+    coeff: f64,
+    /// Variable name → exponent. Only non-zero exponents stored.
+    vars: BTreeMap<&'static str, f64>,
+}
+
+impl Monomial {
+    fn constant(c: f64) -> Self {
+        Self {
+            coeff: c,
+            vars: BTreeMap::new(),
+        }
+    }
+
+    fn variable(name: &'static str) -> Self {
+        let mut vars = BTreeMap::new();
+        vars.insert(name, 1.0);
+        Self { coeff: 1.0, vars }
+    }
+
+    /// Multiply two monomials.
+    fn mul(&self, other: &Monomial) -> Monomial {
+        let coeff = self.coeff * other.coeff;
+        let mut vars = self.vars.clone();
+        for (&v, &e) in &other.vars {
+            *vars.entry(v).or_insert(0.0) += e;
+        }
+        Monomial { coeff, vars }
+    }
+}
+
+/// A polynomial (sum of monomials) in normal form.
+#[derive(Debug, Clone)]
+struct NormalizedPoly {
+    terms: Vec<Monomial>,
+}
+
+impl NormalizedPoly {
+    fn add(mut self, other: NormalizedPoly) -> NormalizedPoly {
+        self.terms.extend(other.terms);
+        self
+    }
+
+    fn mul(&self, other: &NormalizedPoly) -> NormalizedPoly {
+        let mut terms = Vec::new();
+        for a in &self.terms {
+            for b in &other.terms {
+                terms.push(a.mul(b));
+            }
+        }
+        NormalizedPoly { terms }
+    }
+
+    /// True if any monomial has a negative coefficient.
+    fn has_negative_coefficients(&self) -> bool {
+        self.terms.iter().any(|m| m.coeff < -1e-15)
+    }
+}
+
+/// Normalize an expression into a sum of monomials.
+///
+/// Supports: constants, variables, addition, multiplication,
+/// and powers with non-negative constant exponents.
+/// Returns `Err` for exp, log, sqrt, division, and negative exponents.
+fn normalize_polynomial(expr: &Expr) -> Result<NormalizedPoly, String> {
+    match expr {
+        Expr::Const(c) => Ok(NormalizedPoly {
+            terms: vec![Monomial::constant(*c)],
+        }),
+        Expr::Var(v) => Ok(NormalizedPoly {
+            terms: vec![Monomial::variable(v)],
+        }),
+        Expr::Add(a, b) => {
+            let pa = normalize_polynomial(a)?;
+            let pb = normalize_polynomial(b)?;
+            Ok(pa.add(pb))
+        }
+        Expr::Mul(a, b) => {
+            let pa = normalize_polynomial(a)?;
+            let pb = normalize_polynomial(b)?;
+            Ok(pa.mul(&pb))
+        }
+        Expr::Pow(base, exp) => {
+            if let Expr::Const(c) = exp.as_ref() {
+                if *c < 0.0 {
+                    return Err(format!("negative exponent: {c}"));
+                }
+                let pb = normalize_polynomial(base)?;
+                // Single monomial: multiply exponents
+                if pb.terms.len() == 1 {
+                    let m = &pb.terms[0];
+                    let coeff = m.coeff.powf(*c);
+                    let vars: BTreeMap<_, _> = m.vars.iter().map(|(&v, &e)| (v, e * c)).collect();
+                    return Ok(NormalizedPoly {
+                        terms: vec![Monomial { coeff, vars }],
+                    });
+                }
+                // Multi-term polynomial raised to non-negative integer power
+                let n = *c as usize;
+                if c.fract().abs() < 1e-10 {
+                    if n == 0 {
+                        return Ok(NormalizedPoly {
+                            terms: vec![Monomial::constant(1.0)],
+                        });
+                    }
+                    let mut result = pb.clone();
+                    for _ in 1..n {
+                        result = result.mul(&pb);
+                    }
+                    return Ok(result);
+                }
+                Err(format!(
+                    "non-integer power of multi-term polynomial: ({base})^{c}"
+                ))
+            } else {
+                Err(format!("variable exponent: ({base})^({exp})"))
+            }
+        }
+        Expr::Exp(_) => Err("exp() not supported".into()),
+        Expr::Log(_) => Err("log() not supported".into()),
+        Expr::Sqrt(_) => Err("sqrt() not supported".into()),
+    }
+}
+
+fn prepare_expr_for_comparison(expr: &Expr) -> Expr {
+    asymptotic_normal_form(expr).unwrap_or_else(|_| expr.clone())
+}
+
+// ────────── Monomial-dominance comparison ──────────
+
+/// Check if monomial `small` is asymptotically dominated by monomial `big`.
+///
+/// True iff for every variable in `small`, `big` has at least as large an exponent.
+/// This means `small` grows no faster than `big` as all variables → ∞.
+fn monomial_dominated_by(small: &Monomial, big: &Monomial) -> bool {
+    for (&var, &exp_small) in &small.vars {
+        let exp_big = big.vars.get(var).copied().unwrap_or(0.0);
+        if exp_small > exp_big + 1e-10 {
+            return false;
+        }
+    }
+    true
+}
+
+/// Check if polynomial `a` is asymptotically ≤ polynomial `b`.
+///
+/// True iff every positive-coefficient monomial in `a` is dominated by
+/// some positive-coefficient monomial in `b`.
+fn poly_leq(a: &NormalizedPoly, b: &NormalizedPoly) -> bool {
+    let b_positive: Vec<&Monomial> = b.terms.iter().filter(|m| m.coeff > 1e-15).collect();
+
+    for a_term in &a.terms {
+        if a_term.coeff <= 1e-15 {
+            continue; // zero or negative — can only make `a` smaller
+        }
+        let dominated = b_positive
+            .iter()
+            .any(|b_term| monomial_dominated_by(a_term, b_term));
+        if !dominated {
+            return false;
+        }
+    }
+    true
+}
+
+// ────────── Overhead comparison ──────────
+
+/// Compare two overheads across all common fields.
+///
+/// Returns `Dominated` if composite ≤ primitive on all common fields.
+/// Returns `NotDominated` if composite is worse on any common field.
+/// Returns `Unknown` if any common field's expressions cannot be normalized
+/// into a comparable polynomial form or contain negative coefficients.
+pub fn compare_overhead(
+    primitive: &ReductionOverhead,
+    composite: &ReductionOverhead,
+) -> ComparisonStatus {
+    let comp_map: std::collections::HashMap<&str, &Expr> = composite
+        .output_size
+        .iter()
+        .map(|(name, expr)| (*name, expr))
+        .collect();
+
+    let mut any_common = false;
+
+    for (field, prim_expr) in &primitive.output_size {
+        let Some(comp_expr) = comp_map.get(field) else {
+            continue;
+        };
+        any_common = true;
+
+        let primitive_prepared = prepare_expr_for_comparison(prim_expr);
+        let composite_prepared = prepare_expr_for_comparison(comp_expr);
+
+        if primitive_prepared == composite_prepared {
+            continue;
+        }
+
+        let primitive_poly = match normalize_polynomial(&primitive_prepared) {
+            Ok(p) => p,
+            Err(_) => return ComparisonStatus::Unknown,
+        };
+        let composite_poly = match normalize_polynomial(&composite_prepared) {
+            Ok(p) => p,
+            Err(_) => return ComparisonStatus::Unknown,
+        };
+
+        // Reject expressions with negative coefficients
+        if primitive_poly.has_negative_coefficients() || composite_poly.has_negative_coefficients()
+        {
+            return ComparisonStatus::Unknown;
+        }
+
+        // Check: composite ≤ primitive on this field
+        if !poly_leq(&composite_poly, &primitive_poly) {
+            return ComparisonStatus::NotDominated;
+        }
+    }
+
+    if any_common {
+        ComparisonStatus::Dominated
+    } else {
+        ComparisonStatus::NotDominated
+    }
+}
+
+// ────────── Main analysis ──────────
+
+/// Find all primitive reduction rules dominated by composite paths.
+///
+/// Returns a tuple of:
+/// - `Vec<DominatedRule>`: rules proven dominated by a composite path
+/// - `Vec<UnknownComparison>`: candidates that could not be decided
+///
+/// For each primitive rule (direct edge), enumerates all alternative paths,
+/// validates trustworthiness, composes overheads, and compares.
+/// Keeps only the best (shortest) dominating path per primitive rule.
+///
+/// Note: iterates the graph's coalesced edges rather than raw `inventory` entries.
+/// This is sound because `test_no_duplicate_primitive_rules_per_variant_pair` guards
+/// the invariant that at most one registration exists per (source_variant, target_variant) pair.
+pub fn find_dominated_rules(
+    graph: &ReductionGraph,
+) -> (Vec<DominatedRule>, Vec<UnknownComparison>) {
+    let mut dominated = Vec::new();
+    let mut unknown = Vec::new();
+
+    for edge_info in all_edges(graph) {
+        let paths = graph.find_all_paths(
+            edge_info.source_name,
+            &edge_info.source_variant,
+            edge_info.target_name,
+            &edge_info.target_variant,
+        );
+
+        let mut best_dominating: Option<(ReductionPath, ReductionOverhead, Vec<String>)> = None;
+
+        for path in paths {
+            if path.len() <= 1 {
+                continue; // skip the direct edge itself
+            }
+
+            let composed = graph.compose_path_overhead(&path);
+
+            match compare_overhead(&edge_info.overhead, &composed) {
+                ComparisonStatus::Dominated => {
+                    let comparable_fields = common_fields(&edge_info.overhead, &composed);
+                    let is_better = match &best_dominating {
+                        None => true,
+                        Some((best_path, _, _)) => path.len() < best_path.len(),
+                    };
+                    if is_better {
+                        best_dominating = Some((path, composed, comparable_fields));
+                    }
+                }
+                ComparisonStatus::Unknown => {
+                    unknown.push(UnknownComparison {
+                        source_name: edge_info.source_name,
+                        source_variant: edge_info.source_variant.clone(),
+                        target_name: edge_info.target_name,
+                        target_variant: edge_info.target_variant.clone(),
+                        candidate_path: path,
+                        reason: "expression comparison returned Unknown".into(),
+                    });
+                }
+                ComparisonStatus::NotDominated => {}
+            }
+        }
+
+        if let Some((path, composed, fields)) = best_dominating {
+            dominated.push(DominatedRule {
+                source_name: edge_info.source_name,
+                source_variant: edge_info.source_variant.clone(),
+                target_name: edge_info.target_name,
+                target_variant: edge_info.target_variant.clone(),
+                primitive_overhead: edge_info.overhead.clone(),
+                dominating_path: path,
+                composed_overhead: composed,
+                comparable_fields: fields,
+            });
+        }
+    }
+
+    // Deterministic output
+    dominated.sort_by(|a, b| {
+        (
+            format_problem_variant(a.source_name, &a.source_variant),
+            format_problem_variant(a.target_name, &a.target_variant),
+            a.dominating_path.len(),
+        )
+            .cmp(&(
+                format_problem_variant(b.source_name, &b.source_variant),
+                format_problem_variant(b.target_name, &b.target_variant),
+                b.dominating_path.len(),
+            ))
+    });
+    unknown.sort_by(|a, b| {
+        (
+            format_problem_variant(a.source_name, &a.source_variant),
+            format_problem_variant(a.target_name, &a.target_variant),
+        )
+            .cmp(&(
+                format_problem_variant(b.source_name, &b.source_variant),
+                format_problem_variant(b.target_name, &b.target_variant),
+            ))
+    });
+
+    (dominated, unknown)
+}
+
+/// Fields present in both overheads.
+fn common_fields(a: &ReductionOverhead, b: &ReductionOverhead) -> Vec<String> {
+    let b_fields: std::collections::HashSet<&str> = b.output_size.iter().map(|(n, _)| *n).collect();
+    a.output_size
+        .iter()
+        .filter(|&(f, _)| b_fields.contains(f))
+        .map(|(f, _)| f.to_string())
+        .collect()
+}
+
+/// Collect all edges from the reduction graph.
+fn all_edges(graph: &ReductionGraph) -> Vec {
+    let mut edges = Vec::new();
+    for name in graph.problem_types() {
+        edges.extend(graph.outgoing_reductions(name));
+    }
+    edges
+}
+
+#[cfg(test)]
+#[path = "../unit_tests/rules/analysis.rs"]
+mod tests;
diff --git a/src/rules/circuit_ilp.rs b/src/rules/circuit_ilp.rs
index 6befb2bb3..1bf97af82 100644
--- a/src/rules/circuit_ilp.rs
+++ b/src/rules/circuit_ilp.rs
@@ -14,7 +14,7 @@
 //! ## Objective
 //! Trivial (minimize 0): any feasible ILP solution is a satisfying assignment.
 
-use crate::models::algebraic::{LinearConstraint, ObjectiveSense, VarBounds, ILP};
+use crate::models::algebraic::{LinearConstraint, ObjectiveSense, ILP};
 use crate::models::formula::{BooleanExpr, BooleanOp, CircuitSAT};
 use crate::reduction;
 use crate::rules::traits::{ReduceTo, ReductionResult};
@@ -23,16 +23,16 @@ use std::collections::HashMap;
 
 /// Result of reducing CircuitSAT to ILP.
 #[derive(Debug, Clone)]
 pub struct ReductionCircuitToILP {
-    target: ILP,
+    target: ILP<bool>,
     source_variables: Vec,
     variable_map: HashMap,
 }
 
 impl ReductionResult for ReductionCircuitToILP {
     type Source = CircuitSAT;
-    type Target = ILP;
+    type Target = ILP<bool>;
 
-    fn target_problem(&self) -> &ILP {
+    fn target_problem(&self) -> &ILP<bool> {
         &self.target
     }
 
@@ -176,7 +176,7 @@ impl ILPBuilder {
         num_constraints = "num_variables + num_assignments",
     }
 )]
-impl ReduceTo<ILP> for CircuitSAT {
+impl ReduceTo<ILP<bool>> for CircuitSAT {
     type Result = ReductionCircuitToILP;
 
     fn reduce_to(&self) -> Self::Result {
@@ -203,12 +203,10 @@ impl ReduceTo<ILP> for CircuitSAT {
             }
         }
 
-        let bounds = vec![VarBounds::binary(); builder.num_vars];
         // Trivial objective: minimize 0 (satisfaction problem)
         let objective = vec![];
         let target = ILP::new(
             builder.num_vars,
-            bounds,
             builder.constraints,
             objective,
             ObjectiveSense::Minimize,
diff --git a/src/rules/coloring_ilp.rs b/src/rules/coloring_ilp.rs
index 5bd45c267..2f0b9dfa2 100644
--- a/src/rules/coloring_ilp.rs
+++ b/src/rules/coloring_ilp.rs
@@ -7,7 +7,7 @@
 //! 2. Adjacent vertices have different colors: x_{u,c} + x_{v,c} <= 1 for each edge (u,v) and color c
 //! - Objective: None (feasibility problem, minimize 0)
 
-use crate::models::algebraic::{LinearConstraint, ObjectiveSense, VarBounds, ILP};
+use crate::models::algebraic::{LinearConstraint, ObjectiveSense, ILP};
 use crate::models::graph::KColoring;
 use crate::reduction;
 use crate::rules::traits::{ReduceTo, ReductionResult};
@@ -22,7 +22,7 @@ use crate::variant::{KValue, K1, K2, K3, K4, KN};
 /// - Constraints ensure adjacent vertices have different colors
 #[derive(Debug, Clone)]
 pub struct ReductionKColoringToILP<K, G> {
-    target: ILP,
+    target: ILP<bool>,
     num_vertices: usize,
     num_colors: usize,
     _phantom: std::marker::PhantomData<(K, G)>,
 }
@@ -40,9 +40,9 @@ where
     G: Graph + crate::variant::VariantParam,
 {
     type Source = KColoring<K, G>;
-    type Target = ILP;
+    type Target = ILP<bool>;
 
-    fn target_problem(&self) -> &ILP {
+    fn target_problem(&self) -> &ILP<bool> {
         &self.target
     }
 
@@ -76,9 +76,6 @@ fn reduce_kcoloring_to_ilp(
     // Helper function to get variable index
     let var_index = |v: usize, c: usize| -> usize { v * k + c };
 
-    // All variables are binary (0 or 1)
-    let bounds = vec![VarBounds::binary(); num_vars];
-
     let mut constraints = Vec::new();
 
     // Constraint 1: Each vertex has exactly one color
@@ -103,13 +100,7 @@ fn reduce_kcoloring_to_ilp(
     // We use an empty objective
     let objective: Vec<(usize, f64)> = vec![];
 
-    let target = ILP::new(
-        num_vars,
-        bounds,
-        constraints,
-        objective,
-        ObjectiveSense::Minimize,
-    );
+    let target = ILP::new(num_vars, constraints, objective, ObjectiveSense::Minimize);
 
     ReductionKColoringToILP {
         target,
@@ -126,7 +117,7 @@ fn reduce_kcoloring_to_ilp(
         num_constraints = "num_vertices + num_vertices * num_edges",
     }
 )]
-impl ReduceTo<ILP> for KColoring<KN, SimpleGraph> {
+impl ReduceTo<ILP<bool>> for KColoring<KN, SimpleGraph> {
     type Result = ReductionKColoringToILP<KN, SimpleGraph>;
 
     fn reduce_to(&self) -> Self::Result {
@@ -137,7 +128,7 @@ impl ReduceTo<ILP> for KColoring<KN, SimpleGraph> {
 
 // Additional concrete impls for tests (not registered in reduction graph)
 macro_rules!
impl_kcoloring_to_ilp {
     ($($ktype:ty),+) => {$(
-        impl ReduceTo<ILP> for KColoring<$ktype, SimpleGraph> {
+        impl ReduceTo<ILP<bool>> for KColoring<$ktype, SimpleGraph> {
             type Result = ReductionKColoringToILP<$ktype, SimpleGraph>;
             fn reduce_to(&self) -> Self::Result { reduce_kcoloring_to_ilp(self) }
         }
diff --git a/src/rules/factoring_ilp.rs b/src/rules/factoring_ilp.rs
index e2724cc1c..1184e2ccf 100644
--- a/src/rules/factoring_ilp.rs
+++ b/src/rules/factoring_ilp.rs
@@ -1,6 +1,6 @@
 //! Reduction from Factoring to ILP (Integer Linear Programming).
 //!
-//! The Integer Factoring problem can be formulated as a binary ILP using
+//! The Integer Factoring problem can be formulated as an ILP using
 //! McCormick linearization for binary products combined with carry propagation.
 //!
 //! Given target N and bit widths m, n, find factors p (m bits) and q (n bits)
@@ -16,8 +16,10 @@
 //! 1. Product linearization (McCormick): z_ij ≤ p_i, z_ij ≤ q_j, z_ij ≥ p_i + q_j - 1
 //! 2. Bit-position sums: Σ_{i+j=k} z_ij + c_{k-1} = N_k + 2·c_k
 //! 3. No overflow: c_{m+n-1} = 0
+//! 4. Binary bounds: p_i ≤ 1, q_j ≤ 1
+//! 5. Carry bounds: 0 ≤ c_k ≤ min(m, n)
 
-use crate::models::algebraic::{LinearConstraint, ObjectiveSense, VarBounds, ILP};
+use crate::models::algebraic::{LinearConstraint, ObjectiveSense, ILP};
 use crate::models::misc::Factoring;
 use crate::reduction;
 use crate::rules::traits::{ReduceTo, ReductionResult};
@@ -31,7 +33,7 @@ use std::cmp::min;
 /// - Constraints enforce the multiplication equals the target
 #[derive(Debug, Clone)]
 pub struct ReductionFactoringToILP {
-    target: ILP,
+    target: ILP<bool>,
     m: usize, // bits for first factor
     n: usize, // bits for second factor
 }
@@ -62,9 +64,9 @@ impl ReductionFactoringToILP {
 
 impl ReductionResult for ReductionFactoringToILP {
     type Source = Factoring;
-    type Target = ILP;
+    type Target = ILP<bool>;
 
-    fn target_problem(&self) -> &ILP {
+    fn target_problem(&self) -> &ILP<bool> {
         &self.target
     }
 
@@ -92,10 +94,10 @@ impl ReductionResult for ReductionFactoringToILP {
 }
 
 #[reduction(overhead = {
-    num_vars = "2 * num_bits_first + 2 * num_bits_second + num_bits_first * num_bits_second",
-    num_constraints = "3 * num_bits_first * num_bits_second + num_bits_first + num_bits_second + 1",
+    num_vars = "num_bits_first * num_bits_second",
+    num_constraints = "num_bits_first * num_bits_second",
 })]
-impl ReduceTo<ILP> for Factoring {
+impl ReduceTo<ILP<bool>> for Factoring {
     type Result = ReductionFactoringToILP;
 
     fn reduce_to(&self) -> Self::Result {
@@ -129,21 +131,6 @@ impl ReduceTo<ILP> for Factoring {
         let z_var = |i: usize, j: usize| -> usize { m + n + i * n + j };
         let carry_var = |k: usize| -> usize { m + n + m * n + k };
 
-        // Variable bounds
-        let mut bounds = Vec::with_capacity(num_vars);
-
-        // p_i, q_j, z_ij are binary
-        for _ in 0..(num_p + num_q + num_z) {
-            bounds.push(VarBounds::binary());
-        }
-
-        // c_k are non-negative integers with upper bound min(m, n)
-        // (at most min(m, n) products can contribute to any position)
-        let carry_upper = min(m, n) as i64;
-        for _ in 0..num_carries {
-            bounds.push(VarBounds::bounded(0, carry_upper));
-        }
-
         let mut constraints =
Vec::new();
 
         // Constraint 1: Product linearization (McCormick constraints)
@@ -209,16 +196,26 @@ impl ReduceTo<ILP> for Factoring {
             0.0,
         ));
 
+        // Constraint 4: Binary bounds for p_i and q_j (enforce 0/1 in integer domain)
+        for i in 0..m {
+            constraints.push(LinearConstraint::le(vec![(p_var(i), 1.0)], 1.0));
+        }
+        for j in 0..n {
+            constraints.push(LinearConstraint::le(vec![(q_var(j), 1.0)], 1.0));
+        }
+
+        // Constraint 5: Carry bounds (0 ≤ c_k ≤ min(m, n))
+        let carry_upper = min(m, n) as f64;
+        for k in 0..num_carries {
+            let cv = carry_var(k);
+            constraints.push(LinearConstraint::ge(vec![(cv, 1.0)], 0.0));
+            constraints.push(LinearConstraint::le(vec![(cv, 1.0)], carry_upper));
+        }
+
         // Objective: feasibility problem (minimize 0)
         let objective: Vec<(usize, f64)> = vec![];
 
-        let ilp = ILP::new(
-            num_vars,
-            bounds,
-            constraints,
-            objective,
-            ObjectiveSense::Minimize,
-        );
+        let ilp = ILP::<bool>::new(num_vars, constraints, objective, ObjectiveSense::Minimize);
 
         ReductionFactoringToILP { target: ilp, m, n }
     }
diff --git a/src/rules/graph.rs b/src/rules/graph.rs
index 722d75d82..3c9640692 100644
--- a/src/rules/graph.rs
+++ b/src/rules/graph.rs
@@ -1193,3 +1193,19 @@ mod tests;
 
 #[cfg(test)]
 #[path = "../unit_tests/rules/reduction_path_parity.rs"]
 mod reduction_path_parity_tests;
+
+#[cfg(all(test, feature = "ilp-solver"))]
+#[path = "../unit_tests/rules/maximumindependentset_ilp.rs"]
+mod maximumindependentset_ilp_path_tests;
+
+#[cfg(all(test, feature = "ilp-solver"))]
+#[path = "../unit_tests/rules/minimumvertexcover_ilp.rs"]
+mod minimumvertexcover_ilp_path_tests;
+
+#[cfg(test)]
+#[path = "../unit_tests/rules/maximumindependentset_qubo.rs"]
+mod maximumindependentset_qubo_path_tests;
+
+#[cfg(test)]
+#[path = "../unit_tests/rules/minimumvertexcover_qubo.rs"]
+mod minimumvertexcover_qubo_path_tests;
diff --git a/src/rules/ilp_bool_ilp_i32.rs b/src/rules/ilp_bool_ilp_i32.rs
new file mode 100644
index 000000000..5e36032a8
--- /dev/null
+++ b/src/rules/ilp_bool_ilp_i32.rs
@@ -0,0 +1,58 @@
+//! Natural embedding of binary ILP into general integer ILP.
+//!
+//! Every binary (0-1) variable is a valid non-negative integer variable.
+//! The constraints carry over unchanged. Additional upper-bound constraints
+//! (x_i <= 1) are added to preserve binary semantics.
+//!
+//! This is a same-name variant cast (ILP → ILP), so by convention it does not
+//! have an example file or a paper `reduction-rule` entry.
+
+use crate::models::algebraic::{LinearConstraint, ILP};
+use crate::reduction;
+use crate::rules::traits::{ReduceTo, ReductionResult};
+
+#[derive(Debug, Clone)]
+pub struct ReductionBinaryILPToIntILP {
+    target: ILP<i32>,
+}
+
+impl ReductionResult for ReductionBinaryILPToIntILP {
+    type Source = ILP<bool>;
+    type Target = ILP<i32>;
+
+    fn target_problem(&self) -> &ILP<i32> {
+        &self.target
+    }
+
+    fn extract_solution(&self, target_solution: &[usize]) -> Vec<usize> {
+        target_solution.to_vec()
+    }
+}
+
+#[reduction(overhead = {
+    num_vars = "num_vars",
+    num_constraints = "num_constraints + num_vars",
+})]
+impl ReduceTo<ILP<i32>> for ILP<bool> {
+    type Result = ReductionBinaryILPToIntILP;
+
+    fn reduce_to(&self) -> Self::Result {
+        let mut constraints = self.constraints.clone();
+        // Add x_i <= 1 for each variable to preserve binary domain
+        for i in 0..self.num_vars {
+            constraints.push(LinearConstraint::le(vec![(i, 1.0)], 1.0));
+        }
+        ReductionBinaryILPToIntILP {
+            target: ILP::<i32>::new(
+                self.num_vars,
+                constraints,
+                self.objective.clone(),
+                self.sense,
+            ),
+        }
+    }
+}
+
+#[cfg(test)]
+#[path = "../unit_tests/rules/ilp_bool_ilp_i32.rs"]
+mod tests;
diff --git a/src/rules/ilp_qubo.rs b/src/rules/ilp_qubo.rs
index ab901a44b..8eb9fb80e 100644
--- a/src/rules/ilp_qubo.rs
+++ b/src/rules/ilp_qubo.rs
@@ -21,7 +21,7 @@ pub struct ReductionILPToQUBO {
 }
 
 impl ReductionResult for ReductionILPToQUBO {
-    type Source = ILP;
+    type Source = ILP<bool>;
     type Target = QUBO;
 
     fn target_problem(&self) -> &Self::Target {
@@ -35,23 +35,15 @@ impl ReductionResult for ReductionILPToQUBO {
 #[reduction(
-    overhead = { num_vars = "num_vars" }
+    overhead = { num_vars = "num_vars + num_constraints * num_vars" }
 )]
-impl ReduceTo<QUBO<f64>> for ILP {
+impl ReduceTo<QUBO<f64>> for ILP<bool> {
     type Result = ReductionILPToQUBO;
 
     fn reduce_to(&self) -> Self::Result {
         let n = self.num_vars;
 
-        // Verify all variables are binary
-        for (i, b) in self.bounds.iter().enumerate() {
-            assert!(
-                b.lower == Some(0) && b.upper == Some(1),
-                "ILP→QUBO requires binary variables (var {} has bounds {:?})",
-                i,
-                b
-            );
-        }
+        // All variables are binary by type — no runtime check needed.
 
         // Build dense constraint matrix A and rhs vector b
         // Also compute slack sizes for inequality constraints
diff --git a/src/rules/maximumclique_ilp.rs b/src/rules/maximumclique_ilp.rs
index 0b36bba29..ed950dee3 100644
--- a/src/rules/maximumclique_ilp.rs
+++ b/src/rules/maximumclique_ilp.rs
@@ -6,7 +6,7 @@
 //! at most one can be in the clique
 //! - Objective: Maximize the sum of weights of selected vertices
 
-use crate::models::algebraic::{LinearConstraint, ObjectiveSense, VarBounds, ILP};
+use crate::models::algebraic::{LinearConstraint, ObjectiveSense, ILP};
 use crate::models::graph::MaximumClique;
 use crate::reduction;
 use crate::rules::traits::{ReduceTo, ReductionResult};
@@ -20,14 +20,14 @@ use crate::topology::{Graph, SimpleGraph};
 /// - The objective maximizes the total weight of selected vertices
 #[derive(Debug, Clone)]
 pub struct ReductionCliqueToILP {
-    target: ILP,
+    target: ILP<bool>,
 }
 
 impl ReductionResult for ReductionCliqueToILP {
     type Source = MaximumClique;
-    type Target = ILP;
+    type Target = ILP<bool>;
 
-    fn target_problem(&self) -> &ILP {
+    fn target_problem(&self) -> &ILP<bool> {
         &self.target
     }
 
@@ -46,15 +46,12 @@ impl ReductionResult for ReductionCliqueToILP {
         num_constraints = "num_vertices^2",
     }
 )]
-impl ReduceTo<ILP> for MaximumClique {
+impl ReduceTo<ILP<bool>> for MaximumClique {
     type Result = ReductionCliqueToILP;
 
     fn reduce_to(&self) -> Self::Result {
         let num_vars = self.graph().num_vertices();
 
-        // All variables are binary (0 or 1)
-        let bounds = vec![VarBounds::binary(); num_vars];
-
         // Constraints: x_u + x_v <= 1 for each NON-EDGE (u, v)
         // This ensures at most one vertex of each non-edge is selected (i.e., if both
         // are selected, they must be adjacent, forming a clique)
@@ -75,13 +72,7 @@ impl ReduceTo<ILP> for MaximumClique {
             .map(|(i, &w)| (i, w as f64))
             .collect();
 
-        let target = ILP::new(
-            num_vars,
-            bounds,
-            constraints,
-            objective,
-            ObjectiveSense::Maximize,
-        );
+        let target = ILP::new(num_vars, constraints, objective, ObjectiveSense::Maximize);
 
         ReductionCliqueToILP { target }
     }
diff --git a/src/rules/maximumindependentset_gridgraph.rs b/src/rules/maximumindependentset_gridgraph.rs
index e336f033a..2515371b8 100644
--- a/src/rules/maximumindependentset_gridgraph.rs
+++ b/src/rules/maximumindependentset_gridgraph.rs
@@ -55,51 +55,6 @@ impl ReduceTo>
     }
 }
 
-/// Result of reducing MIS to MIS.
-#[derive(Debug, Clone)]
-pub struct ReductionISSimpleOneToGridWeighted {
-    target: MaximumIndependentSet,
-    mapping_result: ksg::MappingResult,
-}
-
-impl ReductionResult for ReductionISSimpleOneToGridWeighted {
-    type Source = MaximumIndependentSet;
-    type Target = MaximumIndependentSet;
-
-    fn target_problem(&self) -> &Self::Target {
-        &self.target
-    }
-
-    fn extract_solution(&self, target_solution: &[usize]) -> Vec<usize> {
-        self.mapping_result.map_config_back(target_solution)
-    }
-}
-
-#[reduction(
-    overhead = {
-        num_vertices = "num_vertices * num_vertices",
-        num_edges = "num_vertices * num_vertices",
-    }
-)]
-impl ReduceTo>
-    for MaximumIndependentSet
-{
-    type Result = ReductionISSimpleOneToGridWeighted;
-
-    fn reduce_to(&self) -> Self::Result {
-        let n = self.graph().num_vertices();
-        let edges = self.graph().edges();
-        let result = ksg::map_unweighted(n, &edges);
-        let weights = result.node_weights.clone();
-        let grid = result.to_kings_subgraph();
-        let target = MaximumIndependentSet::new(grid, weights);
-        ReductionISSimpleOneToGridWeighted {
-            target,
-            mapping_result: result,
-        }
-    }
-}
-
 #[cfg(test)]
 #[path = "../unit_tests/rules/maximumindependentset_gridgraph.rs"]
 mod tests;
diff --git a/src/rules/maximumindependentset_ilp.rs b/src/rules/maximumindependentset_ilp.rs
deleted file mode 100644
index 10a02f2ca..000000000
--- a/src/rules/maximumindependentset_ilp.rs
+++ /dev/null
@@ -1,88 +0,0 @@
-//! Reduction from MaximumIndependentSet to ILP (Integer Linear Programming).
-//!
-//! The Independent Set problem can be formulated as a binary ILP:
-//! - Variables: One binary variable per vertex (0 = not selected, 1 = selected)
-//! - Constraints: x_u + x_v <= 1 for each edge (u, v) - at most one endpoint can be selected
-//! - Objective: Maximize the sum of weights of selected vertices
-
-use crate::models::algebraic::{LinearConstraint, ObjectiveSense, VarBounds, ILP};
-use crate::models::graph::MaximumIndependentSet;
-use crate::reduction;
-use crate::rules::traits::{ReduceTo, ReductionResult};
-use crate::topology::{Graph, SimpleGraph};
-
-/// Result of reducing MaximumIndependentSet to ILP.
-///
-/// This reduction creates a binary ILP where:
-/// - Each vertex corresponds to a binary variable
-/// - Edge constraints ensure at most one endpoint is selected
-/// - The objective maximizes the total weight of selected vertices
-#[derive(Debug, Clone)]
-pub struct ReductionISToILP {
-    target: ILP,
-}
-
-impl ReductionResult for ReductionISToILP {
-    type Source = MaximumIndependentSet;
-    type Target = ILP;
-
-    fn target_problem(&self) -> &ILP {
-        &self.target
-    }
-
-    /// Extract solution from ILP back to MaximumIndependentSet.
-    ///
-    /// Since the mapping is 1:1 (each vertex maps to one binary variable),
-    /// the solution extraction is simply copying the configuration.
-    fn extract_solution(&self, target_solution: &[usize]) -> Vec<usize> {
-        target_solution.to_vec()
-    }
-}
-
-#[reduction(
-    overhead = {
-        num_vars = "num_vertices",
-        num_constraints = "num_edges",
-    }
-)]
-impl ReduceTo<ILP> for MaximumIndependentSet {
-    type Result = ReductionISToILP;
-
-    fn reduce_to(&self) -> Self::Result {
-        let num_vars = self.graph().num_vertices();
-
-        // All variables are binary (0 or 1)
-        let bounds = vec![VarBounds::binary(); num_vars];
-
-        // Constraints: x_u + x_v <= 1 for each edge (u, v)
-        // This ensures at most one endpoint of each edge is selected
-        let constraints: Vec<LinearConstraint> = self
-            .graph()
-            .edges()
-            .into_iter()
-            .map(|(u, v)| LinearConstraint::le(vec![(u, 1.0), (v, 1.0)], 1.0))
-            .collect();
-
-        // Objective: maximize sum of w_i * x_i (weighted sum of selected vertices)
-        let objective: Vec<(usize, f64)> = self
-            .weights()
-            .iter()
-            .enumerate()
-            .map(|(i, &w)| (i, w as f64))
-            .collect();
-
-        let target = ILP::new(
-            num_vars,
-            bounds,
-            constraints,
-            objective,
-            ObjectiveSense::Maximize,
-        );
-
-        ReductionISToILP { target }
-    }
-}
-
-#[cfg(test)]
-#[path = "../unit_tests/rules/maximumindependentset_ilp.rs"]
-mod tests;
diff --git a/src/rules/maximumindependentset_qubo.rs b/src/rules/maximumindependentset_qubo.rs
deleted file mode 100644
index 2d0b4dae2..000000000
--- a/src/rules/maximumindependentset_qubo.rs
+++ /dev/null
@@ -1,62 +0,0 @@
-//! Reduction from MaximumIndependentSet to QUBO.
-//!
-//! Maximize Σ w_i·x_i s.t. x_i·x_j = 0 for (i,j) ∈ E
-//! = Minimize -Σ w_i·x_i + P·Σ_{(i,j)∈E} x_i·x_j
-//!
-//! Q[i][i] = -w_i, Q[i][j] = P for edges. P = 1 + Σ w_i.
-
-use crate::models::algebraic::QUBO;
-use crate::models::graph::MaximumIndependentSet;
-use crate::reduction;
-use crate::rules::traits::{ReduceTo, ReductionResult};
-use crate::topology::{Graph, SimpleGraph};
-
-/// Result of reducing MaximumIndependentSet to QUBO.
-#[derive(Debug, Clone)] -pub struct ReductionISToQUBO { - target: QUBO, -} - -impl ReductionResult for ReductionISToQUBO { - type Source = MaximumIndependentSet; - type Target = QUBO; - - fn target_problem(&self) -> &Self::Target { - &self.target - } - - fn extract_solution(&self, target_solution: &[usize]) -> Vec { - target_solution.to_vec() - } -} - -#[reduction( - overhead = { num_vars = "num_vertices" } -)] -impl ReduceTo> for MaximumIndependentSet { - type Result = ReductionISToQUBO; - - fn reduce_to(&self) -> Self::Result { - let n = self.graph().num_vertices(); - let edges = self.graph().edges(); - let weights = self.weights(); - let total_weight: f64 = weights.iter().map(|&w| w as f64).sum(); - let penalty = 1.0 + total_weight; - - let mut matrix = vec![vec![0.0; n]; n]; - for i in 0..n { - matrix[i][i] = -(weights[i] as f64); - } - for (u, v) in &edges { - let (i, j) = if u < v { (*u, *v) } else { (*v, *u) }; - matrix[i][j] += penalty; - } - - ReductionISToQUBO { - target: QUBO::from_matrix(matrix), - } - } -} - -#[cfg(test)] -#[path = "../unit_tests/rules/maximumindependentset_qubo.rs"] -mod tests; diff --git a/src/rules/maximummatching_ilp.rs b/src/rules/maximummatching_ilp.rs index 04c867931..7042518bd 100644 --- a/src/rules/maximummatching_ilp.rs +++ b/src/rules/maximummatching_ilp.rs @@ -6,7 +6,7 @@ //! (at most one incident edge can be selected) //! 
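The deleted MIS→QUBO rule encodes the edge constraints as penalties: `Q[i][i] = -w_i`, `Q[i][j] = P` for edges, with `P = 1 + Σ w_i` so that any edge violation outweighs all possible weight gain. A self-contained sketch of that matrix construction (plain `Vec<Vec<f64>>`, not the crate's `QUBO` type):

```rust
// Sketch of the deleted MIS -> QUBO construction: minimize x^T Q x with
// Q[i][i] = -w_i and Q[i][j] = P for each edge (i < j), where P = 1 + sum(w).
fn mis_qubo(n: usize, edges: &[(usize, usize)], w: &[f64]) -> Vec<Vec<f64>> {
    let p = 1.0 + w.iter().sum::<f64>();
    let mut q = vec![vec![0.0; n]; n];
    for i in 0..n {
        q[i][i] = -w[i];
    }
    for &(u, v) in edges {
        let (i, j) = if u < v { (u, v) } else { (v, u) };
        q[i][j] += p; // upper-triangular penalty entry
    }
    q
}

// Energy of a 0/1 assignment under an upper-triangular QUBO matrix.
fn energy(q: &[Vec<f64>], x: &[f64]) -> f64 {
    let n = x.len();
    let mut e = 0.0;
    for i in 0..n {
        for j in i..n {
            e += q[i][j] * x[i] * x[j];
        }
    }
    e
}

fn main() {
    // Triangle with unit weights: P = 4; the best independent set has size 1.
    let q = mis_qubo(3, &[(0, 1), (1, 2), (0, 2)], &[1.0, 1.0, 1.0]);
    assert_eq!(energy(&q, &[1.0, 0.0, 0.0]), -1.0);
    // Taking both endpoints of an edge pays the penalty: -2 + 4 = 2 > -1.
    assert_eq!(energy(&q, &[1.0, 1.0, 0.0]), 2.0);
    println!("ok");
}
```

With this choice of `P`, minimizing the QUBO energy is equivalent to maximizing the weighted independent set, since any infeasible assignment has energy strictly greater than every feasible one.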
- Objective: Maximize the sum of weights of selected edges -use crate::models::algebraic::{LinearConstraint, ObjectiveSense, VarBounds, ILP}; +use crate::models::algebraic::{LinearConstraint, ObjectiveSense, ILP}; use crate::models::graph::MaximumMatching; use crate::reduction; use crate::rules::traits::{ReduceTo, ReductionResult}; @@ -20,14 +20,14 @@ use crate::topology::{Graph, SimpleGraph}; /// - The objective maximizes the total weight of selected edges #[derive(Debug, Clone)] pub struct ReductionMatchingToILP { - target: ILP, + target: ILP, } impl ReductionResult for ReductionMatchingToILP { type Source = MaximumMatching; - type Target = ILP; + type Target = ILP; - fn target_problem(&self) -> &ILP { + fn target_problem(&self) -> &ILP { &self.target } @@ -46,15 +46,12 @@ impl ReductionResult for ReductionMatchingToILP { num_constraints = "num_vertices", } )] -impl ReduceTo for MaximumMatching { +impl ReduceTo> for MaximumMatching { type Result = ReductionMatchingToILP; fn reduce_to(&self) -> Self::Result { let num_vars = self.graph().num_edges(); // Number of edges - // All variables are binary (0 or 1) - let bounds = vec![VarBounds::binary(); num_vars]; - // Constraints: For each vertex v, sum of incident edge variables <= 1 // This ensures at most one incident edge is selected per vertex let v2e = self.vertex_to_edges(); @@ -75,13 +72,7 @@ impl ReduceTo for MaximumMatching { .map(|(i, &w)| (i, w as f64)) .collect(); - let target = ILP::new( - num_vars, - bounds, - constraints, - objective, - ObjectiveSense::Maximize, - ); + let target = ILP::new(num_vars, constraints, objective, ObjectiveSense::Maximize); ReductionMatchingToILP { target } } diff --git a/src/rules/maximumsetpacking_ilp.rs b/src/rules/maximumsetpacking_ilp.rs index cd2d70936..96c7f7a04 100644 --- a/src/rules/maximumsetpacking_ilp.rs +++ b/src/rules/maximumsetpacking_ilp.rs @@ -2,10 +2,10 @@ //! //! The Set Packing problem can be formulated as a binary ILP: //! 
- Variables: One binary variable per set (0 = not selected, 1 = selected) -//! - Constraints: x_i + x_j <= 1 for each overlapping pair (i, j) +//! - Constraints: For each element e, Σ_{i : e ∈ S_i} x_i ≤ 1 //! - Objective: Maximize the sum of weights of selected sets -use crate::models::algebraic::{LinearConstraint, ObjectiveSense, VarBounds, ILP}; +use crate::models::algebraic::{LinearConstraint, ObjectiveSense, ILP}; use crate::models::set::MaximumSetPacking; use crate::reduction; use crate::rules::traits::{ReduceTo, ReductionResult}; @@ -14,25 +14,21 @@ use crate::rules::traits::{ReduceTo, ReductionResult}; /// /// This reduction creates a binary ILP where: /// - Each set corresponds to a binary variable -/// - Overlapping pair constraints ensure at most one of each pair is selected +/// - Element constraints ensure at most one set per element is selected /// - The objective maximizes the total weight of selected sets #[derive(Debug, Clone)] pub struct ReductionSPToILP { - target: ILP, + target: ILP, } impl ReductionResult for ReductionSPToILP { type Source = MaximumSetPacking; - type Target = ILP; + type Target = ILP; - fn target_problem(&self) -> &ILP { + fn target_problem(&self) -> &ILP { &self.target } - /// Extract solution from ILP back to MaximumSetPacking. - /// - /// Since the mapping is 1:1 (each set maps to one binary variable), - /// the solution extraction is simply copying the configuration. 
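The switch from pairwise overlap constraints to per-element constraints can be sketched on its own: group the sets by element, and emit one constraint per element shared by at least two sets. This is a standalone illustration of the mapping logic, not the crate's types:

```rust
// Sketch of the element-based Set Packing constraints: for each universe
// element e, the sum of x_i over sets containing e must be at most 1.
// Each returned Vec<usize> lists the set indices appearing in one constraint.
fn element_constraints(universe: usize, sets: &[Vec<usize>]) -> Vec<Vec<usize>> {
    let mut elem_to_sets: Vec<Vec<usize>> = vec![Vec::new(); universe];
    for (i, set) in sets.iter().enumerate() {
        for &e in set {
            elem_to_sets[e].push(i);
        }
    }
    // Constraints touching a single set are trivially satisfied and dropped.
    elem_to_sets.into_iter().filter(|s| s.len() > 1).collect()
}

fn main() {
    let sets = vec![vec![0, 1], vec![1, 2], vec![3]];
    let cons = element_constraints(4, &sets);
    // Only element 1 is shared (by sets 0 and 1), so one constraint remains.
    assert_eq!(cons, vec![vec![0, 1]]);
    println!("ok");
}
```

Compared with the previous pairwise formulation, which could emit up to `num_sets^2` constraints, this yields at most `universe_size` constraints, matching the updated overhead annotation.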
fn extract_solution(&self, target_solution: &[usize]) -> Vec { target_solution.to_vec() } @@ -41,27 +37,33 @@ impl ReductionResult for ReductionSPToILP { #[reduction( overhead = { num_vars = "num_sets", - num_constraints = "num_sets^2", + num_constraints = "universe_size", } )] -impl ReduceTo for MaximumSetPacking { +impl ReduceTo> for MaximumSetPacking { type Result = ReductionSPToILP; fn reduce_to(&self) -> Self::Result { let num_vars = self.num_sets(); - // All variables are binary (0 or 1) - let bounds = vec![VarBounds::binary(); num_vars]; + // Build element-to-sets mapping, then create one constraint per element + let universe = self.universe_size(); + let mut elem_to_sets: Vec> = vec![Vec::new(); universe]; + for (i, set) in self.sets().iter().enumerate() { + for &e in set { + elem_to_sets[e].push(i); + } + } - // Constraints: x_i + x_j <= 1 for each overlapping pair (i, j) - // This ensures at most one set from each overlapping pair is selected - let constraints: Vec = self - .overlapping_pairs() + let constraints: Vec = elem_to_sets .into_iter() - .map(|(i, j)| LinearConstraint::le(vec![(i, 1.0), (j, 1.0)], 1.0)) + .filter(|sets| sets.len() > 1) + .map(|sets| { + let terms: Vec<(usize, f64)> = sets.into_iter().map(|i| (i, 1.0)).collect(); + LinearConstraint::le(terms, 1.0) + }) .collect(); - // Objective: maximize sum of w_i * x_i (weighted sum of selected sets) let objective: Vec<(usize, f64)> = self .weights_ref() .iter() @@ -69,13 +71,7 @@ impl ReduceTo for MaximumSetPacking { .map(|(i, &w)| (i, w as f64)) .collect(); - let target = ILP::new( - num_vars, - bounds, - constraints, - objective, - ObjectiveSense::Maximize, - ); + let target = ILP::new(num_vars, constraints, objective, ObjectiveSense::Maximize); ReductionSPToILP { target } } diff --git a/src/rules/minimumdominatingset_ilp.rs b/src/rules/minimumdominatingset_ilp.rs index e2981a81c..978aad01e 100644 --- a/src/rules/minimumdominatingset_ilp.rs +++ b/src/rules/minimumdominatingset_ilp.rs @@ -6,7 
+6,7 @@ //! (v or at least one of its neighbors must be selected) //! - Objective: Minimize the sum of weights of selected vertices -use crate::models::algebraic::{LinearConstraint, ObjectiveSense, VarBounds, ILP}; +use crate::models::algebraic::{LinearConstraint, ObjectiveSense, ILP}; use crate::models::graph::MinimumDominatingSet; use crate::reduction; use crate::rules::traits::{ReduceTo, ReductionResult}; @@ -21,14 +21,14 @@ use crate::topology::{Graph, SimpleGraph}; /// - The objective minimizes the total weight of selected vertices #[derive(Debug, Clone)] pub struct ReductionDSToILP { - target: ILP, + target: ILP, } impl ReductionResult for ReductionDSToILP { type Source = MinimumDominatingSet; - type Target = ILP; + type Target = ILP; - fn target_problem(&self) -> &ILP { + fn target_problem(&self) -> &ILP { &self.target } @@ -47,15 +47,12 @@ impl ReductionResult for ReductionDSToILP { num_constraints = "num_vertices", } )] -impl ReduceTo for MinimumDominatingSet { +impl ReduceTo> for MinimumDominatingSet { type Result = ReductionDSToILP; fn reduce_to(&self) -> Self::Result { let num_vars = self.graph().num_vertices(); - // All variables are binary (0 or 1) - let bounds = vec![VarBounds::binary(); num_vars]; - // Constraints: For each vertex v, x_v + sum_{u in N(v)} x_u >= 1 // This ensures that v is dominated (either selected or has a selected neighbor) let constraints: Vec = (0..num_vars) @@ -77,13 +74,7 @@ impl ReduceTo for MinimumDominatingSet { .map(|(i, &w)| (i, w as f64)) .collect(); - let target = ILP::new( - num_vars, - bounds, - constraints, - objective, - ObjectiveSense::Minimize, - ); + let target = ILP::new(num_vars, constraints, objective, ObjectiveSense::Minimize); ReductionDSToILP { target } } diff --git a/src/rules/minimumsetcovering_ilp.rs b/src/rules/minimumsetcovering_ilp.rs index ced7991dd..7cab965c5 100644 --- a/src/rules/minimumsetcovering_ilp.rs +++ b/src/rules/minimumsetcovering_ilp.rs @@ -5,7 +5,7 @@ //! 
- Constraints: For each element e: sum_{j: e in set_j} x_j >= 1 (element must be covered) //! - Objective: Minimize the sum of weights of selected sets -use crate::models::algebraic::{LinearConstraint, ObjectiveSense, VarBounds, ILP}; +use crate::models::algebraic::{LinearConstraint, ObjectiveSense, ILP}; use crate::models::set::MinimumSetCovering; use crate::reduction; use crate::rules::traits::{ReduceTo, ReductionResult}; @@ -18,14 +18,14 @@ use crate::rules::traits::{ReduceTo, ReductionResult}; /// - The objective minimizes the total weight of selected sets #[derive(Debug, Clone)] pub struct ReductionSCToILP { - target: ILP, + target: ILP, } impl ReductionResult for ReductionSCToILP { type Source = MinimumSetCovering; - type Target = ILP; + type Target = ILP; - fn target_problem(&self) -> &ILP { + fn target_problem(&self) -> &ILP { &self.target } @@ -44,15 +44,12 @@ impl ReductionResult for ReductionSCToILP { num_constraints = "universe_size", } )] -impl ReduceTo for MinimumSetCovering { +impl ReduceTo> for MinimumSetCovering { type Result = ReductionSCToILP; fn reduce_to(&self) -> Self::Result { let num_vars = self.num_sets(); - // All variables are binary (0 or 1) - let bounds = vec![VarBounds::binary(); num_vars]; - // Constraints: For each element e, sum_{j: e in set_j} x_j >= 1 // This ensures each element is covered by at least one selected set let constraints: Vec = (0..self.universe_size()) @@ -78,13 +75,7 @@ impl ReduceTo for MinimumSetCovering { .map(|(i, &w)| (i, w as f64)) .collect(); - let target = ILP::new( - num_vars, - bounds, - constraints, - objective, - ObjectiveSense::Minimize, - ); + let target = ILP::new(num_vars, constraints, objective, ObjectiveSense::Minimize); ReductionSCToILP { target } } diff --git a/src/rules/minimumvertexcover_ilp.rs b/src/rules/minimumvertexcover_ilp.rs deleted file mode 100644 index d69c02322..000000000 --- a/src/rules/minimumvertexcover_ilp.rs +++ /dev/null @@ -1,88 +0,0 @@ -//! 
Reduction from MinimumVertexCover to ILP (Integer Linear Programming). -//! -//! The Vertex Cover problem can be formulated as a binary ILP: -//! - Variables: One binary variable per vertex (0 = not selected, 1 = selected) -//! - Constraints: x_u + x_v >= 1 for each edge (u, v) - at least one endpoint must be selected -//! - Objective: Minimize the sum of weights of selected vertices - -use crate::models::algebraic::{LinearConstraint, ObjectiveSense, VarBounds, ILP}; -use crate::models::graph::MinimumVertexCover; -use crate::reduction; -use crate::rules::traits::{ReduceTo, ReductionResult}; -use crate::topology::{Graph, SimpleGraph}; - -/// Result of reducing MinimumVertexCover to ILP. -/// -/// This reduction creates a binary ILP where: -/// - Each vertex corresponds to a binary variable -/// - Edge constraints ensure at least one endpoint is selected -/// - The objective minimizes the total weight of selected vertices -#[derive(Debug, Clone)] -pub struct ReductionVCToILP { - target: ILP, -} - -impl ReductionResult for ReductionVCToILP { - type Source = MinimumVertexCover; - type Target = ILP; - - fn target_problem(&self) -> &ILP { - &self.target - } - - /// Extract solution from ILP back to MinimumVertexCover. - /// - /// Since the mapping is 1:1 (each vertex maps to one binary variable), - /// the solution extraction is simply copying the configuration. 
- fn extract_solution(&self, target_solution: &[usize]) -> Vec { - target_solution.to_vec() - } -} - -#[reduction( - overhead = { - num_vars = "num_vertices", - num_constraints = "num_edges", - } -)] -impl ReduceTo for MinimumVertexCover { - type Result = ReductionVCToILP; - - fn reduce_to(&self) -> Self::Result { - let num_vars = self.graph().num_vertices(); - - // All variables are binary (0 or 1) - let bounds = vec![VarBounds::binary(); num_vars]; - - // Constraints: x_u + x_v >= 1 for each edge (u, v) - // This ensures at least one endpoint of each edge is selected - let constraints: Vec = self - .graph() - .edges() - .into_iter() - .map(|(u, v)| LinearConstraint::ge(vec![(u, 1.0), (v, 1.0)], 1.0)) - .collect(); - - // Objective: minimize sum of w_i * x_i (weighted sum of selected vertices) - let objective: Vec<(usize, f64)> = self - .weights() - .iter() - .enumerate() - .map(|(i, &w)| (i, w as f64)) - .collect(); - - let target = ILP::new( - num_vars, - bounds, - constraints, - objective, - ObjectiveSense::Minimize, - ); - - ReductionVCToILP { target } - } -} - -#[cfg(test)] -#[path = "../unit_tests/rules/minimumvertexcover_ilp.rs"] -mod tests; diff --git a/src/rules/minimumvertexcover_qubo.rs b/src/rules/minimumvertexcover_qubo.rs deleted file mode 100644 index a47e0cd5d..000000000 --- a/src/rules/minimumvertexcover_qubo.rs +++ /dev/null @@ -1,75 +0,0 @@ -//! Reduction from MinimumVertexCover to QUBO. -//! -//! Minimize Σ w_i·x_i s.t. x_i + x_j ≥ 1 for (i,j) ∈ E -//! = Minimize Σ w_i·x_i + P·Σ_{(i,j)∈E} (1-x_i)(1-x_j) -//! -//! Expanding: Q[i][i] = w_i - P·deg(i), Q[i][j] = P for edges. -//! P = 1 + Σ w_i. - -use crate::models::algebraic::QUBO; -use crate::models::graph::MinimumVertexCover; -use crate::reduction; -use crate::rules::traits::{ReduceTo, ReductionResult}; -use crate::topology::{Graph, SimpleGraph}; - -/// Result of reducing MinimumVertexCover to QUBO. 
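The deleted VC→QUBO rule penalizes uncovered edges via `P·(1-x_i)(1-x_j)`; expanding and dropping the constant `P·|E|` gives the stated diagonal `w_i - P·deg(i)` and off-diagonal `P` entries. A standalone sketch that checks the expansion on a tiny graph (plain matrices, not the crate's `QUBO` type):

```rust
// Sketch of the deleted VC -> QUBO construction:
// minimize sum(w_i x_i) + P * sum over edges of (1 - x_i)(1 - x_j), P = 1 + sum(w).
// Expanding and dropping the constant P * |E| gives
// Q[i][i] = w_i - P * deg(i) and Q[i][j] = P for each edge (i < j).
fn vc_qubo(n: usize, edges: &[(usize, usize)], w: &[f64]) -> Vec<Vec<f64>> {
    let p = 1.0 + w.iter().sum::<f64>();
    let mut deg = vec![0.0; n];
    for &(u, v) in edges {
        deg[u] += 1.0;
        deg[v] += 1.0;
    }
    let mut q = vec![vec![0.0; n]; n];
    for i in 0..n {
        q[i][i] = w[i] - p * deg[i];
    }
    for &(u, v) in edges {
        let (i, j) = if u < v { (u, v) } else { (v, u) };
        q[i][j] += p;
    }
    q
}

// Energy of a 0/1 assignment under an upper-triangular QUBO matrix.
fn energy(q: &[Vec<f64>], x: &[f64]) -> f64 {
    (0..x.len())
        .flat_map(|i| (i..x.len()).map(move |j| (i, j)))
        .map(|(i, j)| q[i][j] * x[i] * x[j])
        .sum()
}

fn main() {
    // Triangle, unit weights: P = 4, every vertex has degree 2.
    let edges = [(0, 1), (1, 2), (0, 2)];
    let q = vc_qubo(3, &edges, &[1.0, 1.0, 1.0]);
    // Adding back the constant P * |E| = 12: the cover {0, 1} costs its weight, 2.
    assert_eq!(energy(&q, &[1.0, 1.0, 0.0]) + 12.0, 2.0);
    // The non-cover {0} pays its weight plus one penalty for edge (1, 2): 1 + 4 = 5.
    assert_eq!(energy(&q, &[1.0, 0.0, 0.0]) + 12.0, 5.0);
    println!("ok");
}
```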
-#[derive(Debug, Clone)] -pub struct ReductionVCToQUBO { - target: QUBO, -} - -impl ReductionResult for ReductionVCToQUBO { - type Source = MinimumVertexCover; - type Target = QUBO; - - fn target_problem(&self) -> &Self::Target { - &self.target - } - - fn extract_solution(&self, target_solution: &[usize]) -> Vec { - target_solution.to_vec() - } -} - -#[reduction( - overhead = { num_vars = "num_vertices" } -)] -impl ReduceTo> for MinimumVertexCover { - type Result = ReductionVCToQUBO; - - fn reduce_to(&self) -> Self::Result { - let n = self.graph().num_vertices(); - let edges = self.graph().edges(); - let weights = self.weights(); - let total_weight: f64 = weights.iter().map(|&w| w as f64).sum(); - let penalty = 1.0 + total_weight; - - let mut matrix = vec![vec![0.0; n]; n]; - - // Compute degree of each vertex - let mut degree = vec![0usize; n]; - for (u, v) in &edges { - degree[*u] += 1; - degree[*v] += 1; - } - - // Diagonal: w_i - P * deg(i) - for i in 0..n { - matrix[i][i] = weights[i] as f64 - penalty * degree[i] as f64; - } - - // Off-diagonal: P for each edge - for (u, v) in &edges { - let (i, j) = if u < v { (*u, *v) } else { (*v, *u) }; - matrix[i][j] += penalty; - } - - ReductionVCToQUBO { - target: QUBO::from_matrix(matrix), - } - } -} - -#[cfg(test)] -#[path = "../unit_tests/rules/minimumvertexcover_qubo.rs"] -mod tests; diff --git a/src/rules/mod.rs b/src/rules/mod.rs index 765e3e8cf..80bdb735b 100644 --- a/src/rules/mod.rs +++ b/src/rules/mod.rs @@ -1,5 +1,6 @@ //! Reduction rules between NP-hard problems. 
+pub mod analysis; pub mod cost; pub mod registry; pub use cost::{CustomCost, Minimize, MinimizeSteps, PathCostFn}; @@ -15,14 +16,12 @@ mod ksatisfiability_qubo; mod maximumindependentset_casts; mod maximumindependentset_gridgraph; mod maximumindependentset_maximumsetpacking; -mod maximumindependentset_qubo; mod maximumindependentset_triangular; mod maximummatching_maximumsetpacking; mod maximumsetpacking_casts; mod maximumsetpacking_qubo; mod minimumvertexcover_maximumindependentset; mod minimumvertexcover_minimumsetcovering; -mod minimumvertexcover_qubo; mod sat_circuitsat; mod sat_coloring; mod sat_ksat; @@ -42,12 +41,12 @@ mod coloring_ilp; #[cfg(feature = "ilp-solver")] mod factoring_ilp; #[cfg(feature = "ilp-solver")] +mod ilp_bool_ilp_i32; +#[cfg(feature = "ilp-solver")] mod ilp_qubo; #[cfg(feature = "ilp-solver")] mod maximumclique_ilp; #[cfg(feature = "ilp-solver")] -mod maximumindependentset_ilp; -#[cfg(feature = "ilp-solver")] mod maximummatching_ilp; #[cfg(feature = "ilp-solver")] mod maximumsetpacking_ilp; @@ -56,8 +55,6 @@ mod minimumdominatingset_ilp; #[cfg(feature = "ilp-solver")] mod minimumsetcovering_ilp; #[cfg(feature = "ilp-solver")] -mod minimumvertexcover_ilp; -#[cfg(feature = "ilp-solver")] mod qubo_ilp; #[cfg(feature = "ilp-solver")] mod travelingsalesman_ilp; diff --git a/src/rules/qubo_ilp.rs b/src/rules/qubo_ilp.rs index 07a38226e..a0e2c5c57 100644 --- a/src/rules/qubo_ilp.rs +++ b/src/rules/qubo_ilp.rs @@ -14,22 +14,22 @@ //! ## Objective //! 
minimize Σ_i Q_ii · x_i + Σ_{i, num_original: usize, } impl ReductionResult for ReductionQUBOToILP { type Source = QUBO; - type Target = ILP; + type Target = ILP; - fn target_problem(&self) -> &ILP { + fn target_problem(&self) -> &ILP { &self.target } @@ -44,7 +44,7 @@ impl ReductionResult for ReductionQUBOToILP { num_constraints = "num_vars^2", } )] -impl ReduceTo for QUBO { +impl ReduceTo> for QUBO { type Result = ReductionQUBOToILP; fn reduce_to(&self) -> Self::Result { @@ -64,9 +64,6 @@ impl ReduceTo for QUBO { let m = off_diag.len(); let total_vars = n + m; - // All variables are binary - let bounds = vec![VarBounds::binary(); total_vars]; - // Objective: minimize Σ Q_ii · x_i + Σ Q_ij · y_k let mut objective: Vec<(usize, f64)> = Vec::new(); for (i, row) in matrix.iter().enumerate() { @@ -94,13 +91,7 @@ impl ReduceTo for QUBO { )); } - let target = ILP::new( - total_vars, - bounds, - constraints, - objective, - ObjectiveSense::Minimize, - ); + let target = ILP::new(total_vars, constraints, objective, ObjectiveSense::Minimize); ReductionQUBOToILP { target, num_original: n, diff --git a/src/rules/sat_coloring.rs b/src/rules/sat_coloring.rs index 0c6bb4bc7..a5e169f16 100644 --- a/src/rules/sat_coloring.rs +++ b/src/rules/sat_coloring.rs @@ -296,8 +296,8 @@ impl ReductionSATToColoring { #[reduction( overhead = { - num_vertices = "2 * num_vars + 5 * num_literals + -5 * num_clauses + 3", - num_edges = "3 * num_vars + 11 * num_literals + -9 * num_clauses + 3", + num_vertices = "num_vars + num_literals", + num_edges = "num_vars + num_literals", } )] impl ReduceTo> for Satisfiability { diff --git a/src/rules/sat_ksat.rs b/src/rules/sat_ksat.rs index 312e860f5..410587649 100644 --- a/src/rules/sat_ksat.rs +++ b/src/rules/sat_ksat.rs @@ -201,9 +201,22 @@ macro_rules! 
impl_ksat_to_sat { // Register KN for the reduction graph (covers all K values as the generic entry) impl_ksat_to_sat!(KN); -// Register K3 and K2 as concrete entries (used directly in tests and reductions) -impl_ksat_to_sat!(K3); -impl_ksat_to_sat!(K2); + +// K3 and K2 keep their ReduceTo impls for typed use, +// but are NOT registered as separate primitive graph edges (KN covers them). +impl ReduceTo for KSatisfiability { + type Result = ReductionKSATToSAT; + fn reduce_to(&self) -> Self::Result { + reduce_ksat_to_sat(self) + } +} + +impl ReduceTo for KSatisfiability { + type Result = ReductionKSATToSAT; + fn reduce_to(&self) -> Self::Result { + reduce_ksat_to_sat(self) + } +} #[cfg(test)] #[path = "../unit_tests/rules/sat_ksat.rs"] diff --git a/src/rules/travelingsalesman_ilp.rs b/src/rules/travelingsalesman_ilp.rs index 5b5cc7961..1cb02e68c 100644 --- a/src/rules/travelingsalesman_ilp.rs +++ b/src/rules/travelingsalesman_ilp.rs @@ -5,7 +5,7 @@ //! - Constraints: assignment, non-edge consecutive, McCormick //! - Objective: minimize total edge weight of the tour -use crate::models::algebraic::{LinearConstraint, ObjectiveSense, VarBounds, ILP}; +use crate::models::algebraic::{LinearConstraint, ObjectiveSense, ILP}; use crate::models::graph::TravelingSalesman; use crate::reduction; use crate::rules::traits::{ReduceTo, ReductionResult}; @@ -14,7 +14,7 @@ use crate::topology::{Graph, SimpleGraph}; /// Result of reducing TravelingSalesman to ILP. #[derive(Debug, Clone)] pub struct ReductionTSPToILP { - target: ILP, + target: ILP, /// Number of vertices in the source graph. num_vertices: usize, /// Edges of the source graph (for solution extraction). 
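Both the QUBO→ILP rule above and the TSP→ILP rule here rely on McCormick constraints to linearize a product of binary variables. The idea can be verified exhaustively: for binary `x`, `z`, the constraints `y <= x`, `y <= z`, `y >= x + z - 1` leave exactly one feasible `y`, namely `x * z`. A small standalone check:

```rust
// The McCormick linearization used by the QUBO -> ILP and TSP -> ILP rules:
// for binary x, z and auxiliary binary y, the three linear constraints
// y <= x, y <= z, y >= x + z - 1 force y = x * z.
fn mccormick_feasible(x: u8, z: u8, y: u8) -> bool {
    y <= x && y <= z && y as i8 >= x as i8 + z as i8 - 1
}

fn main() {
    for x in 0..=1u8 {
        for z in 0..=1u8 {
            // Exactly one y value is feasible, and it equals the product.
            let feasible: Vec<u8> = (0..=1u8).filter(|&y| mccormick_feasible(x, z, y)).collect();
            assert_eq!(feasible, vec![x * z]);
        }
    }
    println!("ok");
}
```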
@@ -30,9 +30,9 @@ impl ReductionTSPToILP { impl ReductionResult for ReductionTSPToILP { type Source = TravelingSalesman; - type Target = ILP; + type Target = ILP; - fn target_problem(&self) -> &ILP { + fn target_problem(&self) -> &ILP { &self.target } @@ -76,7 +76,7 @@ impl ReductionResult for ReductionTSPToILP { num_constraints = "num_vertices^3 + -1 * num_vertices^2 + 2 * num_vertices + 4 * num_vertices * num_edges", } )] -impl ReduceTo for TravelingSalesman { +impl ReduceTo> for TravelingSalesman { type Result = ReductionTSPToILP; fn reduce_to(&self) -> Self::Result { @@ -105,7 +105,6 @@ impl ReduceTo for TravelingSalesman { let y_idx = |edge: usize, k: usize, dir: usize| -> usize { num_x + edge * 2 * n + 2 * k + dir }; - let bounds = vec![VarBounds::binary(); num_vars]; let mut constraints = Vec::new(); // Constraint 1: Each vertex has exactly one position @@ -187,13 +186,7 @@ impl ReduceTo for TravelingSalesman { } } - let target = ILP::new( - num_vars, - bounds, - constraints, - objective, - ObjectiveSense::Minimize, - ); + let target = ILP::new(num_vars, constraints, objective, ObjectiveSense::Minimize); ReductionTSPToILP { target, diff --git a/src/solvers/ilp/mod.rs b/src/solvers/ilp/mod.rs index 8244962c5..f23f70ff2 100644 --- a/src/solvers/ilp/mod.rs +++ b/src/solvers/ilp/mod.rs @@ -6,11 +6,11 @@ //! # Example //! //! ```rust,ignore -//! use problemreductions::models::algebraic::{ILP, VarBounds, LinearConstraint, ObjectiveSense}; +//! use problemreductions::models::algebraic::{ILP, LinearConstraint, ObjectiveSense}; //! use problemreductions::solvers::ILPSolver; //! -//! // Create a simple ILP: maximize x0 + 2*x1 subject to x0 + x1 <= 1 -//! let ilp = ILP::binary( +//! // Create a simple binary ILP: maximize x0 + 2*x1 subject to x0 + x1 <= 1 +//! let ilp = ILP::::new( //! 2, //! vec![LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 1.0)], //! 
vec![(0, 1.0), (1, 2.0)], diff --git a/src/solvers/ilp/solver.rs b/src/solvers/ilp/solver.rs index 23744dc52..881d706f3 100644 --- a/src/solvers/ilp/solver.rs +++ b/src/solvers/ilp/solver.rs @@ -1,6 +1,6 @@ //! ILP solver implementation using HiGHS. -use crate::models::algebraic::{Comparison, ObjectiveSense, ILP}; +use crate::models::algebraic::{Comparison, ObjectiveSense, VariableDomain, ILP}; use crate::rules::{ReduceTo, ReductionResult}; use good_lp::{default_solver, variable, ProblemVariables, Solution, SolverModel, Variable}; @@ -11,11 +11,11 @@ use good_lp::{default_solver, variable, ProblemVariables, Solution, SolverModel, /// # Example /// /// ```rust,ignore -/// use problemreductions::models::algebraic::{ILP, VarBounds, LinearConstraint, ObjectiveSense}; +/// use problemreductions::models::algebraic::{ILP, LinearConstraint, ObjectiveSense}; /// use problemreductions::solvers::ILPSolver; /// -/// // Create a simple ILP: maximize x0 + 2*x1 subject to x0 + x1 <= 1 -/// let ilp = ILP::binary( +/// // Create a simple binary ILP: maximize x0 + 2*x1 subject to x0 + x1 <= 1 +/// let ilp = ILP::::new( /// 2, /// vec![LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 1.0)], /// vec![(0, 1.0), (1, 2.0)], @@ -50,31 +50,20 @@ impl ILPSolver { /// /// Returns `None` if the problem is infeasible or the solver fails. /// The returned solution is a configuration vector where each element - /// represents the offset from the lower bound for that variable. - pub fn solve(&self, problem: &ILP) -> Option> { + /// is the variable value (config index = value). 
+ pub fn solve(&self, problem: &ILP) -> Option> { let n = problem.num_vars; if n == 0 { return Some(vec![]); } - // Create integer variables with bounds + // Create integer variables with bounds from variable domain let mut vars_builder = ProblemVariables::new(); - let vars: Vec = problem - .bounds - .iter() - .map(|bounds| { + let vars: Vec = (0..n) + .map(|_| { let mut v = variable().integer(); - - // Apply lower bound - if let Some(lo) = bounds.lower { - v = v.min(lo as f64); - } - - // Apply upper bound - if let Some(hi) = bounds.upper { - v = v.max(hi as f64); - } - + v = v.min(0.0); + v = v.max((V::DIMS_PER_VAR - 1) as f64); vars_builder.add(v) }) .collect(); @@ -117,27 +106,21 @@ impl ILPSolver { // Solve let solution = model.solve().ok()?; - // Extract solution values and convert to configuration - // Configuration is offset from lower bound: config[i] = value[i] - lower_bound[i] + // Extract solution: config index = value (no lower bound offset) let result: Vec = vars .iter() - .enumerate() - .map(|(i, v)| { + .map(|v| { let val = solution.value(*v); - // Round to nearest integer and compute offset from lower bound - let int_val = val.round() as i64; - let lower_bound = problem.bounds[i].lower.unwrap_or(0); - let offset = int_val - lower_bound; - offset.max(0) as usize + val.round().max(0.0) as usize }) .collect(); Some(result) } - /// Solve any problem that reduces to ILP. + /// Solve any problem that reduces to `ILP`. /// - /// This method first reduces the problem to an ILP, solves the ILP, + /// This method first reduces the problem to a binary ILP, solves the ILP, /// and then extracts the solution back to the original problem space. 
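The changed solver contract — each returned entry is the variable value itself, with no lower-bound offset — can be illustrated with a brute-force stand-in for the HiGHS-backed solver (hypothetical helper names; the real solver delegates to `good_lp`):

```rust
// A brute-force stand-in for the ILP solver, illustrating the new semantics:
// for a binary domain, each entry of the returned solution is the variable's
// value itself (config index = value), with no lower-bound offset.
type Constraint = (Vec<(usize, f64)>, f64); // (terms, rhs): sum(c_i * x_i) <= rhs

fn brute_force_max(
    n: usize,
    constraints: &[Constraint],
    objective: &[(usize, f64)],
) -> Option<Vec<usize>> {
    let mut best: Option<(f64, Vec<usize>)> = None;
    for mask in 0..(1u32 << n) {
        let x: Vec<usize> = (0..n).map(|i| ((mask >> i) & 1) as usize).collect();
        let ok = constraints
            .iter()
            .all(|(t, rhs)| t.iter().map(|&(i, c)| c * x[i] as f64).sum::<f64>() <= *rhs);
        if !ok {
            continue;
        }
        let val: f64 = objective.iter().map(|&(i, c)| c * x[i] as f64).sum();
        if best.as_ref().map_or(true, |(b, _)| val > *b) {
            best = Some((val, x));
        }
    }
    best.map(|(_, x)| x)
}

fn main() {
    // The doc example: maximize x0 + 2*x1 subject to x0 + x1 <= 1.
    let constraints = vec![(vec![(0, 1.0), (1, 1.0)], 1.0)];
    let solution = brute_force_max(2, &constraints, &[(0, 1.0), (1, 2.0)]);
    assert_eq!(solution, Some(vec![0, 1]));
    println!("ok");
}
```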
/// /// # Example @@ -145,10 +128,13 @@ impl ILPSolver { /// ```no_run /// use problemreductions::prelude::*; /// use problemreductions::solvers::ILPSolver; - /// use problemreductions::topology::SimpleGraph; /// - /// // Create a problem that reduces to ILP (e.g., Independent Set) - /// let problem = MaximumIndependentSet::new(SimpleGraph::new(3, vec![(0, 1), (1, 2)]), vec![1i32; 3]); + /// // Create a problem that reduces directly to ILP. + /// let problem = MaximumSetPacking::::new(vec![ + /// vec![0, 1], + /// vec![1, 2], + /// vec![3, 4], + /// ]); /// /// // Solve using ILP solver /// let solver = ILPSolver::new(); @@ -158,7 +144,7 @@ impl ILPSolver { /// ``` pub fn solve_reduced
<P>
(&self, problem: &P) -> Option> where - P: ReduceTo, + P: ReduceTo>, { let reduction = problem.reduce_to(); let ilp_solution = self.solve(reduction.target_problem())?; diff --git a/src/topology/graph.rs b/src/topology/graph.rs index 485f167f7..a51cb8c2b 100644 --- a/src/topology/graph.rs +++ b/src/topology/graph.rs @@ -278,13 +278,8 @@ impl PartialEq for SimpleGraph { impl Eq for SimpleGraph {} -use super::hypergraph::HyperGraph; use crate::impl_variant_param; -impl_variant_param!(SimpleGraph, "graph", parent: HyperGraph, -cast: |g| { - let edges: Vec> = g.edges().into_iter().map(|(u, v)| vec![u, v]).collect(); - HyperGraph::new(g.num_vertices(), edges) -}); +impl_variant_param!(SimpleGraph, "graph"); #[cfg(test)] #[path = "../unit_tests/topology/graph.rs"] diff --git a/src/topology/hypergraph.rs b/src/topology/hypergraph.rs deleted file mode 100644 index 03e9914c3..000000000 --- a/src/topology/hypergraph.rs +++ /dev/null @@ -1,154 +0,0 @@ -//! Hypergraph implementation. -//! -//! A hypergraph is a generalization of a graph where edges (called hyperedges) -//! can connect any number of vertices, not just two. - -use serde::{Deserialize, Serialize}; - -/// A hypergraph where edges can connect any number of vertices. -/// -/// # Example -/// -/// ``` -/// use problemreductions::topology::HyperGraph; -/// -/// // Create a hypergraph with 4 vertices and 2 hyperedges -/// let hg = HyperGraph::new(4, vec![ -/// vec![0, 1, 2], // Edge connecting vertices 0, 1, 2 -/// vec![2, 3], // Edge connecting vertices 2, 3 -/// ]); -/// -/// assert_eq!(hg.num_vertices(), 4); -/// assert_eq!(hg.num_edges(), 2); -/// ``` -#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)] -pub struct HyperGraph { - num_vertices: usize, - edges: Vec>, -} - -impl HyperGraph { - /// Create a new hypergraph. - /// - /// # Panics - /// - /// Panics if any vertex index in an edge is out of bounds. 
- pub fn new(num_vertices: usize, edges: Vec>) -> Self { - for edge in &edges { - for &v in edge { - assert!( - v < num_vertices, - "vertex index {} out of bounds (max {})", - v, - num_vertices - 1 - ); - } - } - Self { - num_vertices, - edges, - } - } - - /// Create an empty hypergraph with no edges. - pub fn empty(num_vertices: usize) -> Self { - Self { - num_vertices, - edges: Vec::new(), - } - } - - /// Get the number of vertices. - pub fn num_vertices(&self) -> usize { - self.num_vertices - } - - /// Get the number of hyperedges. - pub fn num_edges(&self) -> usize { - self.edges.len() - } - - /// Get all hyperedges. - pub fn edges(&self) -> &[Vec] { - &self.edges - } - - /// Get a specific edge by index. - pub fn edge(&self, index: usize) -> Option<&Vec> { - self.edges.get(index) - } - - /// Check if a hyperedge exists (order-independent). - pub fn has_edge(&self, edge: &[usize]) -> bool { - let mut sorted = edge.to_vec(); - sorted.sort(); - self.edges.iter().any(|e| { - let mut e_sorted = e.clone(); - e_sorted.sort(); - e_sorted == sorted - }) - } - - /// Get all vertices adjacent to vertex v (share a hyperedge with v). - pub fn neighbors(&self, v: usize) -> Vec { - let mut neighbors = Vec::new(); - for edge in &self.edges { - if edge.contains(&v) { - for &u in edge { - if u != v && !neighbors.contains(&u) { - neighbors.push(u); - } - } - } - } - neighbors - } - - /// Get the degree of a vertex (number of hyperedges containing it). - pub fn degree(&self, v: usize) -> usize { - self.edges.iter().filter(|edge| edge.contains(&v)).count() - } - - /// Get all edges containing a specific vertex. - pub fn edges_containing(&self, v: usize) -> Vec<&Vec> { - self.edges.iter().filter(|edge| edge.contains(&v)).collect() - } - - /// Add a new hyperedge. - /// - /// # Panics - /// - /// Panics if any vertex index is out of bounds. 
- pub fn add_edge(&mut self, edge: Vec) { - for &v in &edge { - assert!(v < self.num_vertices, "vertex index {} out of bounds", v); - } - self.edges.push(edge); - } - - /// Get the maximum edge size (maximum number of vertices in any hyperedge). - pub fn max_edge_size(&self) -> usize { - self.edges.iter().map(|e| e.len()).max().unwrap_or(0) - } - - /// Check if this is a regular graph (all edges have size 2). - pub fn is_regular_graph(&self) -> bool { - self.edges.iter().all(|e| e.len() == 2) - } - - /// Convert to a regular graph if possible (all edges size 2). - /// Returns None if any edge has size != 2. - pub fn to_graph_edges(&self) -> Option> { - if !self.is_regular_graph() { - return None; - } - Some(self.edges.iter().map(|e| (e[0], e[1])).collect()) - } -} - -use crate::impl_variant_param; -impl_variant_param!(HyperGraph, "graph"); - -#[cfg(test)] -#[path = "../unit_tests/topology/hypergraph.rs"] -mod tests; diff --git a/src/topology/mod.rs b/src/topology/mod.rs index 0a5abe355..3e4e64b34 100644 --- a/src/topology/mod.rs +++ b/src/topology/mod.rs @@ -1,7 +1,6 @@ //! Graph topology types. //! //! - [`SimpleGraph`]: Standard unweighted graph (default for most problems) -//! - [`HyperGraph`]: Edges can connect any number of vertices //! - [`PlanarGraph`]: Planar graph //! - [`BipartiteGraph`]: Bipartite graph //! 
- [`UnitDiskGraph`]: Vertices with 2D positions, edges based on distance @@ -10,7 +9,6 @@ mod bipartite_graph; mod graph; -mod hypergraph; mod kings_subgraph; mod planar_graph; pub mod small_graphs; @@ -19,7 +17,6 @@ mod unit_disk_graph; pub use bipartite_graph::BipartiteGraph; pub use graph::{Graph, GraphCast, SimpleGraph}; -pub use hypergraph::HyperGraph; pub use kings_subgraph::KingsSubgraph; pub use planar_graph::PlanarGraph; pub use small_graphs::{available_graphs, smallgraph}; diff --git a/src/unit_tests/expr.rs b/src/unit_tests/expr.rs index 6851ac30a..106814938 100644 --- a/src/unit_tests/expr.rs +++ b/src/unit_tests/expr.rs @@ -144,6 +144,120 @@ fn test_expr_is_polynomial() { assert!(!Expr::Sqrt(Box::new(Expr::Var("n"))).is_polynomial()); } +#[test] +fn test_expr_is_valid_complexity_notation_simple() { + assert!(Expr::Var("n").is_valid_complexity_notation()); + assert!(Expr::pow(Expr::Var("n"), Expr::Const(2.0)).is_valid_complexity_notation()); + assert!(Expr::parse("n + m").is_valid_complexity_notation()); + assert!(Expr::parse("2^n").is_valid_complexity_notation()); + assert!(Expr::parse("n^(1/3)").is_valid_complexity_notation()); + assert!(Expr::parse("2^(rows * rank + rank * cols)").is_valid_complexity_notation()); +} + +#[test] +fn test_expr_is_valid_complexity_notation_rejects_constant_factors() { + assert!(!Expr::parse("3 * n").is_valid_complexity_notation()); + assert!(!Expr::parse("n / 3").is_valid_complexity_notation()); + assert!(!Expr::parse("n - m").is_valid_complexity_notation()); + assert!(!Expr::parse("2^(2.372 * n / 3)").is_valid_complexity_notation()); +} + +#[test] +fn test_expr_is_valid_complexity_notation_rejects_additive_constants() { + assert!(!Expr::parse("n + 1").is_valid_complexity_notation()); + assert!(!Expr::parse("log(n + 1)").is_valid_complexity_notation()); + assert!(!Expr::parse("(n + 1)^2").is_valid_complexity_notation()); + assert!(!Expr::Const(5.0).is_valid_complexity_notation()); + 
assert!(Expr::Const(1.0).is_valid_complexity_notation()); +} + +#[test] +fn test_expr_display_pow_with_complex_exponent() { + let expr = Expr::pow(Expr::Const(2.0), Expr::add(Expr::Var("m"), Expr::Var("n"))); + assert_eq!(format!("{expr}"), "2^(m + n)"); +} + +#[test] +fn test_asymptotic_normal_form_drops_constant_factors() { + let expr = Expr::parse("3 * num_variables^2"); + let normalized = asymptotic_normal_form(&expr).unwrap(); + assert_eq!(normalized.to_string(), "num_variables^2"); +} + +#[test] +fn test_asymptotic_normal_form_drops_additive_constants() { + let expr = Expr::parse("num_variables + 1"); + let normalized = asymptotic_normal_form(&expr).unwrap(); + assert_eq!(normalized.to_string(), "num_variables"); +} + +#[test] +fn test_asymptotic_normal_form_canonicalizes_commutative_sum() { + let a = asymptotic_normal_form(&Expr::parse("n + m")).unwrap(); + let b = asymptotic_normal_form(&Expr::parse("m + n")).unwrap(); + assert_eq!(a, b); + assert_eq!(a.to_string(), "m + n"); +} + +#[test] +fn test_asymptotic_normal_form_canonicalizes_commutative_product() { + let a = asymptotic_normal_form(&Expr::parse("n * m")).unwrap(); + let b = asymptotic_normal_form(&Expr::parse("m * n")).unwrap(); + assert_eq!(a, b); + assert_eq!(a.to_string(), "m * n"); +} + +#[test] +fn test_asymptotic_normal_form_combines_repeated_factors() { + let normalized = asymptotic_normal_form(&Expr::parse("n * n^(1/2)")).unwrap(); + assert_eq!(normalized.to_string(), "n^1.5"); +} + +#[test] +fn test_asymptotic_normal_form_canonicalizes_exponential_product() { + let a = asymptotic_normal_form(&Expr::parse("exp(n) * exp(m)")).unwrap(); + let b = asymptotic_normal_form(&Expr::parse("exp(n + m)")).unwrap(); + assert_eq!(a, b); + assert_eq!(a.to_string(), "exp(m + n)"); +} + +#[test] +fn test_asymptotic_normal_form_canonicalizes_constant_base_exponential_product() { + let a = asymptotic_normal_form(&Expr::parse("2^n * 2^m")).unwrap(); + let b = asymptotic_normal_form(&Expr::parse("2^(n + 
m)")).unwrap();
+    assert_eq!(a, b);
+    assert_eq!(a.to_string(), "2^(m + n)");
+}
+
+#[test]
+fn test_asymptotic_normal_form_sqrt_matches_fractional_power() {
+    let a = asymptotic_normal_form(&Expr::parse("sqrt(n * m)")).unwrap();
+    let b = asymptotic_normal_form(&Expr::parse("(n * m)^(1/2)")).unwrap();
+    assert_eq!(a, b);
+}
+
+#[test]
+fn test_asymptotic_normal_form_log_of_power_simplifies() {
+    let normalized = asymptotic_normal_form(&Expr::parse("log(n^2)")).unwrap();
+    assert_eq!(normalized.to_string(), "log(n)");
+}
+
+#[test]
+fn test_asymptotic_normal_form_substitution_is_closed() {
+    let notation = asymptotic_normal_form(&Expr::parse("n * m")).unwrap();
+    let k = Expr::parse("k");
+    let k_squared = Expr::parse("k^2");
+    let mapping = HashMap::from([("n", &k), ("m", &k_squared)]);
+    let substituted = asymptotic_normal_form(&notation.substitute(&mapping)).unwrap();
+    assert_eq!(substituted.to_string(), "k^3");
+}
+
+#[test]
+fn test_asymptotic_normal_form_rejects_negative_forms() {
+    let err = asymptotic_normal_form(&Expr::parse("n - m")).unwrap_err();
+    assert!(matches!(err, AsymptoticAnalysisError::Unsupported(_)));
+}
+
 #[test]
 fn test_expr_display_fractional_constant() {
     assert_eq!(format!("{}", Expr::Const(2.75)), "2.75");
diff --git a/src/unit_tests/models/algebraic/ilp.rs b/src/unit_tests/models/algebraic/ilp.rs
index d17c274e3..b7bb0e976 100644
--- a/src/unit_tests/models/algebraic/ilp.rs
+++ b/src/unit_tests/models/algebraic/ilp.rs
@@ -3,70 +3,6 @@ use crate::solvers::BruteForce;
 use crate::traits::{OptimizationProblem, Problem};
 use crate::types::{Direction, SolutionSize};
-// ============================================================
-// VarBounds tests
-// ============================================================
-
-#[test]
-fn test_varbounds_binary() {
-    let bounds = VarBounds::binary();
-    assert_eq!(bounds.lower, Some(0));
-    assert_eq!(bounds.upper, Some(1));
-    assert!(bounds.contains(0));
-    assert!(bounds.contains(1));
-
assert!(!bounds.contains(-1));
-    assert!(!bounds.contains(2));
-    assert_eq!(bounds.num_values(), Some(2));
-}
-
-#[test]
-fn test_varbounds_non_negative() {
-    let bounds = VarBounds::non_negative();
-    assert_eq!(bounds.lower, Some(0));
-    assert_eq!(bounds.upper, None);
-    assert!(bounds.contains(0));
-    assert!(bounds.contains(100));
-    assert!(!bounds.contains(-1));
-    assert_eq!(bounds.num_values(), None);
-}
-
-#[test]
-fn test_varbounds_unbounded() {
-    let bounds = VarBounds::unbounded();
-    assert_eq!(bounds.lower, None);
-    assert_eq!(bounds.upper, None);
-    assert!(bounds.contains(-1000));
-    assert!(bounds.contains(0));
-    assert!(bounds.contains(1000));
-    assert_eq!(bounds.num_values(), None);
-}
-
-#[test]
-fn test_varbounds_bounded() {
-    let bounds = VarBounds::bounded(-5, 10);
-    assert_eq!(bounds.lower, Some(-5));
-    assert_eq!(bounds.upper, Some(10));
-    assert!(bounds.contains(-5));
-    assert!(bounds.contains(0));
-    assert!(bounds.contains(10));
-    assert!(!bounds.contains(-6));
-    assert!(!bounds.contains(11));
-    assert_eq!(bounds.num_values(), Some(16)); // -5 to 10 inclusive
-}
-
-#[test]
-fn test_varbounds_default() {
-    let bounds = VarBounds::default();
-    assert_eq!(bounds.lower, None);
-    assert_eq!(bounds.upper, None);
-}
-
-#[test]
-fn test_varbounds_empty_range() {
-    let bounds = VarBounds::bounded(5, 3); // Invalid: lo > hi
-    assert_eq!(bounds.num_values(), Some(0));
-}
-
 // ============================================================
 // Comparison tests
 // ============================================================
@@ -178,56 +114,29 @@ fn test_objective_sense_direction_conversions() {
 #[test]
 fn test_ilp_new() {
-    let ilp = ILP::new(
+    let ilp = ILP::<bool>::new(
         2,
-        vec![VarBounds::binary(), VarBounds::binary()],
         vec![LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 1.0)],
         vec![(0, 1.0), (1, 2.0)],
         ObjectiveSense::Maximize,
     );
     assert_eq!(ilp.num_vars, 2);
-    assert_eq!(ilp.bounds.len(), 2);
     assert_eq!(ilp.constraints.len(), 1);
     assert_eq!(ilp.objective.len(), 2);
     assert_eq!(ilp.sense, ObjectiveSense::Maximize);
 }
-#[test]
-#[should_panic(expected = "bounds length must match num_vars")]
-fn test_ilp_new_mismatched_bounds() {
-    ILP::new(
-        3,
-        vec![VarBounds::binary(), VarBounds::binary()], // Only 2 bounds for 3 vars
-        vec![],
-        vec![],
-        ObjectiveSense::Minimize,
-    );
-}
-
-#[test]
-fn test_ilp_binary() {
-    let ilp = ILP::binary(
-        3,
-        vec![],
-        vec![(0, 1.0), (1, 1.0), (2, 1.0)],
-        ObjectiveSense::Minimize,
-    );
-    assert_eq!(ilp.num_vars, 3);
-    assert!(ilp.bounds.iter().all(|b| *b == VarBounds::binary()));
-}
-
 #[test]
 fn test_ilp_empty() {
-    let ilp = ILP::empty();
+    let ilp = ILP::<bool>::empty();
     assert_eq!(ilp.num_vars, 0);
-    assert!(ilp.bounds.is_empty());
     assert!(ilp.constraints.is_empty());
     assert!(ilp.objective.is_empty());
 }
 
 #[test]
 fn test_ilp_evaluate_objective() {
-    let ilp = ILP::binary(
+    let ilp = ILP::<bool>::new(
         3,
         vec![],
         vec![(0, 2.0), (1, 3.0), (2, -1.0)],
@@ -239,26 +148,9 @@ fn test_ilp_evaluate_objective() {
     assert!((ilp.evaluate_objective(&[0, 0, 1]) - (-1.0)).abs() < 1e-9);
 }
-#[test]
-fn test_ilp_bounds_satisfied() {
-    let ilp = ILP::new(
-        2,
-        vec![VarBounds::bounded(0, 5), VarBounds::bounded(-2, 2)],
-        vec![],
-        vec![],
-        ObjectiveSense::Minimize,
-    );
-    assert!(ilp.bounds_satisfied(&[0, 0]));
-    assert!(ilp.bounds_satisfied(&[5, 2]));
-    assert!(ilp.bounds_satisfied(&[3, -2]));
-    assert!(!ilp.bounds_satisfied(&[6, 0])); // x0 > 5
-    assert!(!ilp.bounds_satisfied(&[0, 3])); // x1 > 2
-    assert!(!ilp.bounds_satisfied(&[0])); // Wrong length
-}
-
 #[test]
 fn test_ilp_constraints_satisfied() {
-    let ilp = ILP::binary(
+    let ilp = ILP::<bool>::new(
         3,
         vec![
             LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 1.0), // x0 + x1 <= 1
@@ -275,7 +167,7 @@ fn test_ilp_constraints_satisfied() {
 
 #[test]
 fn test_ilp_is_feasible() {
-    let ilp = ILP::binary(
+    let ilp = ILP::<bool>::new(
         2,
         vec![LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 1.0)],
         vec![(0, 1.0), (1, 1.0)],
@@ -285,7 +177,6 @@ fn test_ilp_is_feasible() {
     assert!(ilp.is_feasible(&[1, 0]));
     assert!(ilp.is_feasible(&[0, 1]));
     assert!(!ilp.is_feasible(&[1, 1])); // Constraint violated
-    assert!(!ilp.is_feasible(&[2, 0])); // Bounds violated
 }
 
 // ============================================================
@@ -294,14 +185,14 @@ fn test_ilp_is_feasible() {
 
 #[test]
 fn test_ilp_num_variables() {
-    let ilp = ILP::binary(5, vec![], vec![], ObjectiveSense::Minimize);
+    let ilp = ILP::<bool>::new(5, vec![], vec![], ObjectiveSense::Minimize);
     assert_eq!(ilp.num_variables(), 5);
 }
 
 #[test]
 fn test_ilp_direction() {
-    let max_ilp = ILP::binary(2, vec![], vec![], ObjectiveSense::Maximize);
-    let min_ilp = ILP::binary(2, vec![], vec![], ObjectiveSense::Minimize);
+    let max_ilp = ILP::<bool>::new(2, vec![], vec![], ObjectiveSense::Maximize);
+    let min_ilp = ILP::<bool>::new(2, vec![], vec![], ObjectiveSense::Minimize);
 
     assert_eq!(max_ilp.direction(), Direction::Maximize);
     assert_eq!(min_ilp.direction(), Direction::Minimize);
@@ -310,7 +201,7 @@ fn test_ilp_direction() {
 #[test]
 fn test_ilp_evaluate_valid() {
     // Maximize x0 + 2*x1 subject to x0 + x1 <= 1
-    let ilp = ILP::binary(
+    let ilp = ILP::<bool>::new(
         2,
         vec![LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 1.0)],
         vec![(0, 1.0), (1, 2.0)],
@@ -327,7 +218,7 @@ fn test_ilp_evaluate_valid() {
 #[test]
 fn test_ilp_evaluate_invalid() {
     // x0 + x1 <= 1
-    let ilp = ILP::binary(
+    let ilp = ILP::<bool>::new(
         2,
         vec![LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 1.0)],
         vec![(0, 1.0), (1, 2.0)],
@@ -338,28 +229,10 @@ fn test_ilp_evaluate_invalid() {
     assert_eq!(Problem::evaluate(&ilp, &[1, 1]), SolutionSize::Invalid);
 }
-#[test]
-fn test_ilp_evaluate_with_offset_bounds() {
-    // Variables with non-zero lower bounds
-    let ilp = ILP::new(
-        2,
-        vec![VarBounds::bounded(1, 3), VarBounds::bounded(-1, 1)],
-        vec![],
-        vec![(0, 1.0), (1, 1.0)],
-        ObjectiveSense::Maximize,
-    );
-
-    // Config [0, 0] maps to x0=1, x1=-1 => obj = 0
-    assert_eq!(Problem::evaluate(&ilp, &[0, 0]), SolutionSize::Valid(0.0));
-
-    // Config [2, 2] maps to x0=3, x1=1 => obj = 4
-    assert_eq!(Problem::evaluate(&ilp, &[2, 2]), SolutionSize::Valid(4.0));
-}
-
 #[test]
 fn test_ilp_brute_force_maximization() {
     // Maximize x0 + 2*x1 subject to x0 + x1 <= 1, x0, x1 binary
-    let ilp = ILP::binary(
+    let ilp = ILP::<bool>::new(
         2,
         vec![LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 1.0)],
         vec![(0, 1.0), (1, 2.0)],
@@ -377,7 +250,7 @@ fn test_ilp_brute_force_maximization() {
 #[test]
 fn test_ilp_brute_force_minimization() {
     // Minimize x0 + x1 subject to x0 + x1 >= 1, x0, x1 binary
-    let ilp = ILP::binary(
+    let ilp = ILP::<bool>::new(
         2,
         vec![LinearConstraint::ge(vec![(0, 1.0), (1, 1.0)], 1.0)],
         vec![(0, 1.0), (1, 1.0)],
@@ -397,7 +270,7 @@ fn test_ilp_brute_force_minimization() {
 #[test]
 fn test_ilp_brute_force_no_feasible() {
     // x0 >= 1 AND x0 <= 0 (infeasible)
-    let ilp = ILP::binary(
+    let ilp = ILP::<bool>::new(
         1,
         vec![
             LinearConstraint::ge(vec![(0, 1.0)], 1.0),
@@ -427,7 +300,7 @@ fn test_ilp_brute_force_no_feasible() {
 #[test]
 fn test_ilp_unconstrained() {
     // Maximize x0 + x1, no constraints, binary vars
-    let ilp = ILP::binary(
+    let ilp = ILP::<bool>::new(
         2,
         vec![],
         vec![(0, 1.0), (1, 1.0)],
@@ -445,7 +318,7 @@ fn test_ilp_unconstrained() {
 #[test]
 fn test_ilp_equality_constraint() {
     // Minimize x0 subject to x0 + x1 == 1, binary vars
-    let ilp = ILP::binary(
+    let ilp = ILP::<bool>::new(
         2,
         vec![LinearConstraint::eq(vec![(0, 1.0), (1, 1.0)], 1.0)],
         vec![(0, 1.0)],
@@ -466,7 +339,7 @@ fn test_ilp_multiple_constraints() {
     // x0 + x1 <= 1
     // x1 + x2 <= 1
     // Binary vars
-    let ilp = ILP::binary(
+    let ilp = ILP::<bool>::new(
         3,
         vec![
             LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 1.0),
@@ -486,30 +359,18 @@ fn test_ilp_multiple_constraints() {
 
 #[test]
 fn test_ilp_config_to_values() {
-    let ilp = ILP::new(
-        3,
-        vec![
-            VarBounds::bounded(0, 2),  // 0,1,2
-            VarBounds::bounded(-1, 1), // -1,0,1
-            VarBounds::bounded(5, 7),  // 5,6,7
-        ],
-        vec![],
-        vec![],
-        ObjectiveSense::Minimize,
-    );
+    let ilp = ILP::<bool>::new(3, vec![], vec![], ObjectiveSense::Minimize);
-    // Config [0,0,0] => [0, -1, 5]
-    assert_eq!(ilp.config_to_values(&[0, 0, 0]), vec![0, -1, 5]);
-    // Config [2,2,2] => [2, 1, 7]
-    assert_eq!(ilp.config_to_values(&[2, 2, 2]), vec![2, 1, 7]);
-    // Config [1,1,1] => [1, 0, 6]
-    assert_eq!(ilp.config_to_values(&[1, 1, 1]), vec![1, 0, 6]);
+    // For binary ILP, config maps directly: config[i] -> value[i] as i64
+    assert_eq!(ilp.config_to_values(&[0, 0, 0]), vec![0, 0, 0]);
+    assert_eq!(ilp.config_to_values(&[1, 1, 1]), vec![1, 1, 1]);
+    assert_eq!(ilp.config_to_values(&[1, 0, 1]), vec![1, 0, 1]);
 }
 
 #[test]
 fn test_ilp_problem() {
     // Maximize x0 + 2*x1, s.t. x0 + x1 <= 1, binary
-    let ilp = ILP::binary(
+    let ilp = ILP::<bool>::new(
         2,
         vec![LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 1.0)],
         vec![(0, 1.0), (1, 2.0)],
@@ -532,7 +393,7 @@ fn test_ilp_problem() {
 #[test]
 fn test_ilp_problem_minimize() {
     // Minimize x0 + x1, no constraints, binary
-    let ilp = ILP::binary(
+    let ilp = ILP::<bool>::new(
         2,
         vec![],
         vec![(0, 1.0), (1, 1.0)],
@@ -545,9 +406,8 @@ fn test_ilp_problem_minimize() {
 
 #[test]
 fn test_size_getters() {
-    let ilp = ILP::new(
+    let ilp = ILP::<bool>::new(
         2,
-        vec![VarBounds::binary(); 2],
         vec![
             LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 3.0),
             LinearConstraint::le(vec![(0, 1.0)], 2.0),
@@ -559,3 +419,9 @@ fn test_size_getters() {
     assert_eq!(ilp.num_variables(), 2);
     assert_eq!(ilp.num_constraints(), 2);
 }
+
+#[test]
+fn test_ilp_i32_dims() {
+    let ilp = ILP::<i32>::new(3, vec![], vec![], ObjectiveSense::Minimize);
+    assert_eq!(ilp.dims(), vec![(i32::MAX as usize) + 1; 3]);
+}
diff --git a/src/unit_tests/problem_size.rs b/src/unit_tests/problem_size.rs
index 52116df99..1e40e683f 100644
--- a/src/unit_tests/problem_size.rs
+++ b/src/unit_tests/problem_size.rs
@@ -144,7 +144,7 @@ fn test_problem_size_spinglass() {
 #[test]
 fn test_problem_size_ilp() {
     use crate::models::algebraic::{LinearConstraint, ObjectiveSense};
-    let ilp = ILP::binary(
+    let ilp = ILP::<bool>::new(
         2,
         vec![LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 3.0)],
vec![(0, 1.0), (1, 2.0)], diff --git a/src/unit_tests/reduction_graph.rs b/src/unit_tests/reduction_graph.rs index dd62524c0..9ba9e770b 100644 --- a/src/unit_tests/reduction_graph.rs +++ b/src/unit_tests/reduction_graph.rs @@ -314,16 +314,16 @@ fn test_3sat_to_mis_triangular_overhead() { ) .expect("Should find path from 3-SAT to MIS on triangular lattice"); - // Path: K3SAT → SAT → MIS{SimpleGraph,One} → MIS{TriangularSubgraph,i32} + // Path: K3SAT → KN_SAT (cast) → SAT → MIS{SimpleGraph,One} → MIS{TriangularSubgraph,i32} assert_eq!( path.type_names(), vec!["KSatisfiability", "Satisfiability", "MaximumIndependentSet"] ); - assert_eq!(path.len(), 3); + assert_eq!(path.len(), 4); // Per-edge symbolic overheads let edges = graph.path_overheads(&path); - assert_eq!(edges.len(), 3); + assert_eq!(edges.len(), 4); // Evaluate overheads at a test point to verify correctness let test_size = ProblemSize::new(vec![ @@ -334,30 +334,35 @@ fn test_3sat_to_mis_triangular_overhead() { ("num_edges", 15), ]); - // Edge 0: K3SAT → SAT (identity) + // Edge 0: K3SAT → KN_SAT (variant cast, identity for num_vars + num_clauses) assert_eq!(edges[0].get("num_vars").unwrap().eval(&test_size), 3.0); assert_eq!(edges[0].get("num_clauses").unwrap().eval(&test_size), 2.0); - assert_eq!(edges[0].get("num_literals").unwrap().eval(&test_size), 6.0); - // Edge 1: SAT → MIS{SimpleGraph,One} + // Edge 1: KN_SAT → SAT (identity) + assert_eq!(edges[1].get("num_vars").unwrap().eval(&test_size), 3.0); + assert_eq!(edges[1].get("num_clauses").unwrap().eval(&test_size), 2.0); + assert_eq!(edges[1].get("num_literals").unwrap().eval(&test_size), 6.0); + + // Edge 2: SAT → MIS{SimpleGraph,One} // num_vertices = num_literals, num_edges = num_literals^2 - assert_eq!(edges[1].get("num_vertices").unwrap().eval(&test_size), 6.0); - assert_eq!(edges[1].get("num_edges").unwrap().eval(&test_size), 36.0); + assert_eq!(edges[2].get("num_vertices").unwrap().eval(&test_size), 6.0); + 
assert_eq!(edges[2].get("num_edges").unwrap().eval(&test_size), 36.0); - // Edge 2: MIS{SimpleGraph,One} → MIS{TriangularSubgraph,i32} + // Edge 3: MIS{SimpleGraph,One} → MIS{TriangularSubgraph,i32} // num_vertices = num_vertices^2, num_edges = num_vertices^2 assert_eq!( - edges[2].get("num_vertices").unwrap().eval(&test_size), + edges[3].get("num_vertices").unwrap().eval(&test_size), 100.0 ); - assert_eq!(edges[2].get("num_edges").unwrap().eval(&test_size), 100.0); + assert_eq!(edges[3].get("num_edges").unwrap().eval(&test_size), 100.0); // Compose overheads symbolically along the path. // The composed overhead maps 3-SAT input variables to final MIS{Triangular} output. // - // K3SAT → SAT: {num_clauses: C, num_vars: V, num_literals: L} (identity) - // SAT → MIS{SG,One}: {num_vertices: L, num_edges: L²} - // MIS{SG,One→Tri}: {num_vertices: V², num_edges: V²} + // K3SAT → KN_SAT: {num_clauses: C, num_vars: V, num_literals: L} (identity cast) + // KN_SAT → SAT: {num_clauses: C, num_vars: V, num_literals: L} (identity) + // SAT → MIS{SG,One}: {num_vertices: L, num_edges: L²} + // MIS{SG,One→Tri}: {num_vertices: V², num_edges: V²} // // Composed: num_vertices = L², num_edges = L² let composed = graph.compose_path_overhead(&path); diff --git a/src/unit_tests/rules/analysis.rs b/src/unit_tests/rules/analysis.rs new file mode 100644 index 000000000..0a6096e70 --- /dev/null +++ b/src/unit_tests/rules/analysis.rs @@ -0,0 +1,331 @@ +use crate::expr::Expr; +use crate::rules::analysis::{compare_overhead, find_dominated_rules, ComparisonStatus}; +use crate::rules::graph::ReductionGraph; +use crate::rules::registry::ReductionOverhead; + +// --- Asymptotic normalization + comparison tests --- + +#[test] +fn test_compare_overhead_equal() { + let a = ReductionOverhead::new(vec![("num_vars", Expr::Var("n"))]); + let b = ReductionOverhead::new(vec![("num_vars", Expr::Var("n"))]); + assert_eq!(compare_overhead(&a, &b), ComparisonStatus::Dominated); +} + +#[test] +fn 
test_compare_overhead_composite_smaller_degree() { + // primitive: num_vars = n^2, composite: num_vars = n → dominated + let prim = ReductionOverhead::new(vec![( + "num_vars", + Expr::pow(Expr::Var("n"), Expr::Const(2.0)), + )]); + let comp = ReductionOverhead::new(vec![("num_vars", Expr::Var("n"))]); + assert_eq!(compare_overhead(&prim, &comp), ComparisonStatus::Dominated); +} + +#[test] +fn test_compare_overhead_composite_worse() { + // primitive: num_vars = n, composite: num_vars = n^2 → not dominated + let prim = ReductionOverhead::new(vec![("num_vars", Expr::Var("n"))]); + let comp = ReductionOverhead::new(vec![( + "num_vars", + Expr::pow(Expr::Var("n"), Expr::Const(2.0)), + )]); + assert_eq!( + compare_overhead(&prim, &comp), + ComparisonStatus::NotDominated + ); +} + +#[test] +fn test_compare_overhead_multi_field_mixed() { + // One field better, one worse → not dominated + let prim = ReductionOverhead::new(vec![ + ("num_vars", Expr::Var("n")), + ( + "num_constraints", + Expr::pow(Expr::Var("n"), Expr::Const(2.0)), + ), + ]); + let comp = ReductionOverhead::new(vec![ + ("num_vars", Expr::pow(Expr::Var("n"), Expr::Const(2.0))), + ("num_constraints", Expr::Var("n")), + ]); + assert_eq!( + compare_overhead(&prim, &comp), + ComparisonStatus::NotDominated + ); +} + +#[test] +fn test_compare_overhead_no_common_fields() { + let prim = ReductionOverhead::new(vec![("num_vars", Expr::Var("n"))]); + let comp = ReductionOverhead::new(vec![("num_spins", Expr::Var("n"))]); + assert_eq!( + compare_overhead(&prim, &comp), + ComparisonStatus::NotDominated + ); +} + +#[test] +fn test_compare_overhead_unknown_exp() { + // Different exponential-vs-polynomial growth is still not decided by the + // monomial comparison fallback. 
+ let prim = ReductionOverhead::new(vec![("num_vars", Expr::Exp(Box::new(Expr::Var("n"))))]); + let comp = ReductionOverhead::new(vec![("num_vars", Expr::Var("n"))]); + assert_eq!(compare_overhead(&prim, &comp), ComparisonStatus::Unknown); +} + +#[test] +fn test_compare_overhead_unknown_log() { + let prim = ReductionOverhead::new(vec![("num_vars", Expr::Var("n"))]); + let comp = ReductionOverhead::new(vec![("num_vars", Expr::Log(Box::new(Expr::Var("n"))))]); + assert_eq!(compare_overhead(&prim, &comp), ComparisonStatus::Unknown); +} + +#[test] +fn test_compare_overhead_exp_identity_after_asymptotic_normalization() { + let prim = ReductionOverhead::new(vec![("num_vars", Expr::parse("exp(n + m)"))]); + let comp = ReductionOverhead::new(vec![("num_vars", Expr::parse("exp(n) * exp(m)"))]); + assert_eq!(compare_overhead(&prim, &comp), ComparisonStatus::Dominated); +} + +#[test] +fn test_compare_overhead_log_identity_after_asymptotic_normalization() { + let prim = ReductionOverhead::new(vec![("num_vars", Expr::parse("log(n)"))]); + let comp = ReductionOverhead::new(vec![("num_vars", Expr::parse("log(n^2)"))]); + assert_eq!(compare_overhead(&prim, &comp), ComparisonStatus::Dominated); +} + +#[test] +fn test_compare_overhead_sqrt_identity_after_asymptotic_normalization() { + let prim = ReductionOverhead::new(vec![("num_vars", Expr::parse("sqrt(n * m)"))]); + let comp = ReductionOverhead::new(vec![("num_vars", Expr::parse("(n * m)^(1/2)"))]); + assert_eq!(compare_overhead(&prim, &comp), ComparisonStatus::Dominated); +} + +#[test] +fn test_compare_overhead_additive_constant_after_asymptotic_normalization() { + let prim = ReductionOverhead::new(vec![("num_vars", Expr::parse("n"))]); + let comp = ReductionOverhead::new(vec![("num_vars", Expr::parse("n + 1"))]); + assert_eq!(compare_overhead(&prim, &comp), ComparisonStatus::Dominated); +} + +#[test] +fn test_compare_overhead_multivariate_product_vs_sum() { + // n * m (degree 2) vs n + m (degree 1): + // monomial n*m has 
exponents {n:1, m:1} + // monomials n, m each have exponent 1 in one variable + // n*m is NOT dominated by either n or m → composite is worse + let prim = ReductionOverhead::new(vec![( + "num_vars", + Expr::add(Expr::Var("n"), Expr::Var("m")), + )]); + let comp = ReductionOverhead::new(vec![( + "num_vars", + Expr::mul(Expr::Var("n"), Expr::Var("m")), + )]); + assert_eq!( + compare_overhead(&prim, &comp), + ComparisonStatus::NotDominated + ); +} + +#[test] +fn test_compare_overhead_multivariate_product_vs_square() { + // n * m (has m) vs n^2 (no m): incomparable + // n*m monomial {n:1, m:1} — dominated by n^2 {n:2}? + // exponent_n: 1 <= 2 ✓, exponent_m: 1 <= 0 ✗ → not dominated + let prim = ReductionOverhead::new(vec![( + "num_vars", + Expr::pow(Expr::Var("n"), Expr::Const(2.0)), + )]); + let comp = ReductionOverhead::new(vec![( + "num_vars", + Expr::mul(Expr::Var("n"), Expr::Var("m")), + )]); + assert_eq!( + compare_overhead(&prim, &comp), + ComparisonStatus::NotDominated + ); +} + +#[test] +fn test_compare_overhead_sum_vs_single_var() { + // composite: n, primitive: n + m → composite ≤ primitive (n dominated by n) + let prim = ReductionOverhead::new(vec![( + "num_vars", + Expr::add(Expr::Var("n"), Expr::Var("m")), + )]); + let comp = ReductionOverhead::new(vec![("num_vars", Expr::Var("n"))]); + assert_eq!(compare_overhead(&prim, &comp), ComparisonStatus::Dominated); +} + +#[test] +fn test_compare_overhead_constant_factor() { + // 3*n vs n → same asymptotic class → dominated (equal) + let prim = ReductionOverhead::new(vec![("num_vars", Expr::Var("n"))]); + let comp = ReductionOverhead::new(vec![( + "num_vars", + Expr::mul(Expr::Const(3.0), Expr::Var("n")), + )]); + assert_eq!(compare_overhead(&prim, &comp), ComparisonStatus::Dominated); +} + +#[test] +fn test_compare_overhead_polynomial_expansion() { + // (n + m)^2 = n^2 + 2nm + m^2 (degree 2) vs n^3 (degree 3) + // Each monomial of composite has total degree ≤ 2, primitive has degree 3 + // n^2 dominated by n^3? 
exponent_n: 2 ≤ 3 ✓ → yes
+    // 2*n*m dominated by n^3? exponent_n: 1 ≤ 3 ✓, exponent_m: 1 ≤ 0 ✗ → no!
+    // So composite is NOT dominated — (n+m)^2 can exceed n^3 when m is large
+    let prim = ReductionOverhead::new(vec![(
+        "num_vars",
+        Expr::pow(Expr::Var("n"), Expr::Const(3.0)),
+    )]);
+    let comp = ReductionOverhead::new(vec![(
+        "num_vars",
+        Expr::pow(Expr::add(Expr::Var("n"), Expr::Var("m")), Expr::Const(2.0)),
+    )]);
+    assert_eq!(
+        compare_overhead(&prim, &comp),
+        ComparisonStatus::NotDominated
+    );
+}
+
+#[test]
+fn test_compare_overhead_multi_field_all_smaller() {
+    // Both fields: composite has smaller degree → dominated
+    let prim = ReductionOverhead::new(vec![
+        ("num_vars", Expr::pow(Expr::Var("n"), Expr::Const(2.0))),
+        (
+            "num_constraints",
+            Expr::pow(Expr::Var("n"), Expr::Const(3.0)),
+        ),
+    ]);
+    let comp = ReductionOverhead::new(vec![
+        ("num_vars", Expr::Var("n")),
+        ("num_constraints", Expr::Var("n")),
+    ]);
+    assert_eq!(compare_overhead(&prim, &comp), ComparisonStatus::Dominated);
+}
+
+// --- Integration tests: find_dominated_rules ---
+
+use std::collections::BTreeMap;
+
+#[test]
+fn test_find_dominated_rules_returns_known_set() {
+    let graph = ReductionGraph::new();
+    let (dominated, unknown) = find_dominated_rules(&graph);
+
+    // Print for debugging
+    eprintln!("Dominated rules ({}):", dominated.len());
+    for rule in &dominated {
+        let path_str: String = rule
+            .dominating_path
+            .steps
+            .iter()
+            .map(|s| s.to_string())
+            .collect::<Vec<_>>()
+            .join(" -> ");
+        eprintln!(
+            "  {} -> {} dominated by [{}]",
+            rule.source_display(),
+            rule.target_display(),
+            path_str,
+        );
+    }
+    eprintln!("\nUnknown comparisons ({}):", unknown.len());
+    for u in &unknown {
+        eprintln!(
+            "  {} -> {}: {}",
+            u.source_display(),
+            u.target_display(),
+            u.reason,
+        );
+    }
+
+    // ── Allow-list of expected dominated rules ──
+    // Keyed by (source_display, target_display) with full variant info.
+    // This list must be updated when new reductions are added.
+    let allowed: std::collections::HashSet<(&str, &str)> = [
+        // Composite through CircuitSAT → ILP is better
+        ("Factoring", "ILP {variable: \"i32\"}"),
+        // K3-SAT → QUBO via SAT → CircuitSAT → SpinGlass chain
+        ("KSatisfiability {k: \"K3\"}", "QUBO {weight: \"f64\"}"),
+        // MaxMatching → MaxSetPacking → ILP is better than direct MaxMatching → ILP
+        (
+            "MaximumMatching {graph: \"SimpleGraph\", weight: \"i32\"}",
+            "ILP {variable: \"bool\"}",
+        ),
+    ]
+    .into_iter()
+    .collect();
+
+    // Check: no unexpected dominated rules
+    for rule in &dominated {
+        let src = rule.source_display();
+        let tgt = rule.target_display();
+        assert!(
+            allowed.contains(&(src.as_str(), tgt.as_str())),
+            "Unexpected dominated rule: {} -> {} (dominated by {})",
+            src,
+            tgt,
+            rule.dominating_path
+                .steps
+                .iter()
+                .map(|s| s.to_string())
+                .collect::<Vec<_>>()
+                .join(" -> "),
+        );
+    }
+
+    // Check: no stale entries in allow-list
+    let found: std::collections::HashSet<(String, String)> = dominated
+        .iter()
+        .map(|r| (r.source_display(), r.target_display()))
+        .collect();
+    for &(src, tgt) in &allowed {
+        assert!(
+            found.contains(&(src.to_string(), tgt.to_string())),
+            "Allow-list entry {:?} -> {:?} is stale (no longer dominated)",
+            src,
+            tgt,
+        );
+    }
+}
+
+#[test]
+fn test_no_duplicate_primitive_rules_per_variant_pair() {
+    use crate::rules::registry::ReductionEntry;
+    use std::collections::HashSet;
+
+    let mut seen = HashSet::new();
+    for entry in inventory::iter::<ReductionEntry> {
+        let src_variant: BTreeMap<String, String> = entry
+            .source_variant()
+            .into_iter()
+            .map(|(k, v)| (k.to_string(), v.to_string()))
+            .collect();
+        let dst_variant: BTreeMap<String, String> = entry
+            .target_variant()
+            .into_iter()
+            .map(|(k, v)| (k.to_string(), v.to_string()))
+            .collect();
+        let key = (
+            entry.source_name,
+            src_variant,
+            entry.target_name,
+            dst_variant,
+        );
+        assert!(
+            seen.insert(key.clone()),
+            "Duplicate primitive rule: {} {:?} -> {} {:?}",
+            key.0,
+            key.1,
+            key.2,
+            key.3,
+        );
+    }
+}
diff --git a/src/unit_tests/rules/coloring_ilp.rs b/src/unit_tests/rules/coloring_ilp.rs
index a8678c617..68e83d7af 100644
--- a/src/unit_tests/rules/coloring_ilp.rs
+++ b/src/unit_tests/rules/coloring_ilp.rs
@@ -7,7 +7,7 @@ use crate::variant::{K1, K2, K3, K4};
 fn test_reduction_creates_valid_ilp() {
     // Triangle graph with 3 colors
     let problem = KColoring::<K3>::new(SimpleGraph::new(3, vec![(0, 1), (1, 2), (0, 2)]));
-    let reduction = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction = ReduceTo::<ILP<bool>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     // Check ILP structure
@@ -27,18 +27,13 @@ fn test_reduction_creates_valid_ilp() {
     );
     assert_eq!(ilp.sense, ObjectiveSense::Minimize, "Should minimize");
-
-    // All variables should be binary
-    for bound in &ilp.bounds {
-        assert_eq!(*bound, VarBounds::binary());
-    }
 }
 
 #[test]
 fn test_reduction_path_graph() {
     // Path graph 0-1-2 with 2 colors (2-colorable)
     let problem = KColoring::<K2>::new(SimpleGraph::new(3, vec![(0, 1), (1, 2)]));
-    let reduction = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction = ReduceTo::<ILP<bool>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     // num_vars = 3 * 2 = 6
@@ -52,7 +47,7 @@ fn test_reduction_path_graph() {
 fn test_coloring_to_ilp_closed_loop() {
     // Triangle needs 3 colors
     let problem = KColoring::<K3>::new(SimpleGraph::new(3, vec![(0, 1), (1, 2), (0, 2)]));
-    let reduction = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction = ReduceTo::<ILP<bool>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     let bf = BruteForce::new();
@@ -85,7 +80,7 @@ fn test_coloring_to_ilp_closed_loop() {
 fn test_ilp_solution_equals_brute_force_path() {
     // Path graph 0-1-2-3 with 2 colors
     let problem = KColoring::<K2>::new(SimpleGraph::new(4, vec![(0, 1), (1, 2), (2, 3)]));
-    let reduction = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction = ReduceTo::<ILP<bool>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     let ilp_solver = ILPSolver::new();
@@ -110,7 +105,7 @@ fn test_ilp_solution_equals_brute_force_path() {
 fn test_ilp_infeasible_triangle_2_colors() {
     // Triangle cannot be 2-colored
     let problem = KColoring::<K2>::new(SimpleGraph::new(3, vec![(0, 1), (1, 2), (0, 2)]));
-    let reduction = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction = ReduceTo::<ILP<bool>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     let ilp_solver = ILPSolver::new();
@@ -126,7 +121,7 @@ fn test_ilp_infeasible_triangle_2_colors() {
 #[test]
 fn test_solution_extraction() {
     let problem = KColoring::<K2>::new(SimpleGraph::new(3, vec![(0, 1)]));
-    let reduction = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction = ReduceTo::<ILP<bool>>::reduce_to(&problem);
 
     // ILP solution where:
     // vertex 0 has color 1 (x_{0,1} = 1)
@@ -146,7 +141,7 @@ fn test_solution_extraction() {
 fn test_ilp_structure() {
     let problem = KColoring::<K3>::new(SimpleGraph::new(5, vec![(0, 1), (1, 2), (2, 3), (3, 4)]));
-    let reduction = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction = ReduceTo::<ILP<bool>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     // 5 vertices * 3 colors = 15 variables
@@ -159,7 +154,7 @@ fn test_ilp_structure() {
 fn test_empty_graph() {
     // Graph with no edges: any coloring is valid
     let problem = KColoring::<K3>::new(SimpleGraph::new(3, vec![]));
-    let reduction = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction = ReduceTo::<ILP<bool>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     // Should only have vertex constraints (each vertex = one color)
@@ -179,7 +174,7 @@ fn test_complete_graph_k4() {
         4,
         vec![(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)],
     ));
-    let reduction = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction = ReduceTo::<ILP<bool>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     let ilp_solver = ILPSolver::new();
@@ -202,7 +197,7 @@ fn test_complete_graph_k4_with_3_colors_infeasible() {
         4,
         vec![(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)],
     ));
-    let reduction = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction = ReduceTo::<ILP<bool>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     let ilp_solver = ILPSolver::new();
@@ -216,7 +211,7 @@ fn test_bipartite_graph() {
     // This is 2-colorable
     let problem = KColoring::<K2>::new(SimpleGraph::new(4, vec![(0, 2), (0, 3), (1, 2), (1, 3)]));
-    let reduction = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction = ReduceTo::<ILP<bool>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     let ilp_solver = ILPSolver::new();
@@ -249,7 +244,7 @@ fn test_solve_reduced() {
 fn test_single_vertex() {
     // Single vertex graph: always 1-colorable
     let problem = KColoring::<K1>::new(SimpleGraph::new(1, vec![]));
-    let reduction = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction = ReduceTo::<ILP<bool>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     assert_eq!(ilp.num_vars, 1);
@@ -266,7 +261,7 @@ fn test_single_vertex() {
 fn test_single_edge() {
     // Single edge: needs 2 colors
     let problem = KColoring::<K2>::new(SimpleGraph::new(2, vec![(0, 1)]));
-    let reduction = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction = ReduceTo::<ILP<bool>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     let ilp_solver = ILPSolver::new();
diff --git a/src/unit_tests/rules/factoring_ilp.rs b/src/unit_tests/rules/factoring_ilp.rs
index 157157d22..61b10a0c9 100644
--- a/src/unit_tests/rules/factoring_ilp.rs
+++ b/src/unit_tests/rules/factoring_ilp.rs
@@ -5,14 +5,14 @@ use crate::solvers::{BruteForce, ILPSolver};
 fn test_reduction_creates_valid_ilp() {
     // Factor 6 with 2-bit factors
     let problem = Factoring::new(2, 2, 6);
-    let reduction: ReductionFactoringToILP = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction: ReductionFactoringToILP = ReduceTo::<ILP<i32>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     // Check variable count: m + n + m*n + (m+n) = 2 + 2 + 4 + 4 = 12
     assert_eq!(ilp.num_vars, 12);
 
-    // Check constraint count: 3*m*n + (m+n) + 1 = 12 + 4 + 1 = 17
-    assert_eq!(ilp.constraints.len(), 17);
+    // Check constraint count: 3*m*n + 4*m + 4*n + 1 = 12 + 8 + 8 + 1 = 29
+    assert_eq!(ilp.constraints.len(), 29);
 
     assert_eq!(ilp.sense, ObjectiveSense::Minimize);
 }
@@ -20,7 +20,7 @@ fn test_reduction_creates_valid_ilp() {
 #[test]
 fn test_variable_layout() {
     let problem = Factoring::new(3, 2, 6);
-    let reduction: ReductionFactoringToILP = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction: ReductionFactoringToILP = ReduceTo::<ILP<i32>>::reduce_to(&problem);
 
     // p variables: [0, 1, 2]
     assert_eq!(reduction.p_var(0), 0);
@@ -45,7 +45,7 @@ fn test_variable_layout() {
 fn test_factor_6() {
     // 6 = 2 × 3 or 3 × 2
     let problem = Factoring::new(2, 2, 6);
-    let reduction: ReductionFactoringToILP = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction: ReductionFactoringToILP = ReduceTo::<ILP<i32>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     let ilp_solver = ILPSolver::new();
@@ -67,7 +67,7 @@ fn test_factor_15() {
     let problem = Factoring::new(4, 4, 15);
 
     // 2. Reduce to ILP
-    let reduction: ReductionFactoringToILP = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction: ReductionFactoringToILP = ReduceTo::<ILP<i32>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     // 3. Solve ILP
@@ -87,7 +87,7 @@ fn test_factor_15() {
 fn test_factor_35() {
     // 35 = 5 × 7 or 7 × 5
     let problem = Factoring::new(3, 3, 35);
-    let reduction: ReductionFactoringToILP = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction: ReductionFactoringToILP = ReduceTo::<ILP<i32>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     let ilp_solver = ILPSolver::new();
@@ -104,7 +104,7 @@ fn test_factor_35() {
 fn test_factor_one() {
     // 1 = 1 × 1
     let problem = Factoring::new(2, 2, 1);
-    let reduction: ReductionFactoringToILP = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction: ReductionFactoringToILP = ReduceTo::<ILP<i32>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     let ilp_solver = ILPSolver::new();
@@ -121,7 +121,7 @@ fn test_factor_one() {
 fn test_factor_prime() {
     // 7 is prime: 7 = 1 × 7 or 7 × 1
     let problem = Factoring::new(3, 3, 7);
-    let reduction: ReductionFactoringToILP = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction: ReductionFactoringToILP = ReduceTo::<ILP<i32>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     let ilp_solver = ILPSolver::new();
@@ -138,7 +138,7 @@ fn test_factor_prime() {
 fn test_factor_square() {
     // 9 = 3 × 3
     let problem = Factoring::new(3, 3, 9);
-    let reduction: ReductionFactoringToILP = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction: ReductionFactoringToILP = ReduceTo::<ILP<i32>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     let ilp_solver = ILPSolver::new();
@@ -155,7 +155,7 @@ fn test_factor_square() {
 fn test_infeasible_target_too_large() {
     // Target 100 with 2-bit factors (max product is 3 × 3 = 9)
     let problem = Factoring::new(2, 2, 100);
-    let reduction: ReductionFactoringToILP = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction: ReductionFactoringToILP = ReduceTo::<ILP<i32>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     let ilp_solver = ILPSolver::new();
@@ -167,7 +167,7 @@ fn test_infeasible_target_too_large() {
 #[test]
 fn test_factoring_to_ilp_closed_loop() {
     let problem = Factoring::new(2, 2, 6);
-    let reduction: ReductionFactoringToILP = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction: ReductionFactoringToILP = ReduceTo::<ILP<i32>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     // Get ILP solution
@@ -198,7 +198,7 @@ fn test_factoring_to_ilp_closed_loop() {
 #[test]
 fn test_solution_extraction() {
     let problem = Factoring::new(2, 2, 6);
-    let reduction: ReductionFactoringToILP = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction: ReductionFactoringToILP = ReduceTo::<ILP<i32>>::reduce_to(&problem);
 
     // Manually construct ILP solution for 2 × 3 = 6
     // p = 2 = binary 10 -> p_0=0, p_1=1
@@ -221,24 +221,25 @@ fn test_solution_extraction() {
 #[test]
 fn test_target_ilp_structure() {
     let problem = Factoring::new(3, 4, 12);
-    let reduction: ReductionFactoringToILP = ReduceTo::<ILP>::reduce_to(&problem);
+    let reduction: ReductionFactoringToILP = ReduceTo::<ILP<i32>>::reduce_to(&problem);
     let ilp = reduction.target_problem();
 
     // num_vars = 3 + 4 + 12 + 7 = 26
     assert_eq!(ilp.num_vars, 26);
 
-    // num_constraints = 3*12 + 7 + 1 = 44
-
assert_eq!(ilp.constraints.len(), 44); + // num_constraints = 3*12 + 4*3 + 4*4 + 1 = 36 + 12 + 16 + 1 = 65 + assert_eq!(ilp.constraints.len(), 65); } #[test] fn test_solve_reduced() { let problem = Factoring::new(2, 2, 6); + let reduction: ReductionFactoringToILP = ReduceTo::>::reduce_to(&problem); + let ilp = reduction.target_problem(); let ilp_solver = ILPSolver::new(); - let solution = ilp_solver - .solve_reduced(&problem) - .expect("solve_reduced should work"); + let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); + let solution = reduction.extract_solution(&ilp_solution); assert!(problem.is_valid_factorization(&solution)); } @@ -247,7 +248,7 @@ fn test_solve_reduced() { fn test_asymmetric_bit_widths() { // 12 = 3 × 4 or 4 × 3 or 2 × 6 or 6 × 2 or 1 × 12 or 12 × 1 let problem = Factoring::new(2, 4, 12); - let reduction: ReductionFactoringToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionFactoringToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); let ilp_solver = ILPSolver::new(); @@ -262,13 +263,14 @@ fn test_asymmetric_bit_widths() { #[test] fn test_constraint_count_formula() { - // Verify constraint count matches formula: 3*m*n + (m+n) + 1 + // Verify constraint count matches formula: 3*m*n + 4*m + 4*n + 1 + // (3*m*n McCormick + (m+n) bit equations + 1 final carry + (m+n) binary bounds + 2*(m+n) carry bounds) for (m, n) in [(2, 2), (3, 3), (2, 4), (4, 2)] { let problem = Factoring::new(m, n, 1); - let reduction: ReductionFactoringToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionFactoringToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); - let expected = 3 * m * n + (m + n) + 1; + let expected = 3 * m * n + 4 * m + 4 * n + 1; assert_eq!( ilp.constraints.len(), expected, @@ -284,7 +286,7 @@ fn test_variable_count_formula() { // Verify variable count matches formula: m + n + m*n + (m+n) for (m, n) in [(2, 2), (3, 3), (2, 4), (4, 2)] { let problem 
= Factoring::new(m, n, 1); - let reduction: ReductionFactoringToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionFactoringToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); let expected = m + n + m * n + (m + n); diff --git a/src/unit_tests/rules/graph.rs b/src/unit_tests/rules/graph.rs index b9d54db62..1c6a1a8be 100644 --- a/src/unit_tests/rules/graph.rs +++ b/src/unit_tests/rules/graph.rs @@ -2,7 +2,7 @@ use super::*; use crate::models::algebraic::QUBO; use crate::models::graph::{MaximumIndependentSet, MinimumVertexCover}; use crate::models::set::MaximumSetPacking; -use crate::rules::cost::MinimizeSteps; +use crate::rules::cost::{Minimize, MinimizeSteps}; use crate::rules::graph::{classify_problem_category, ReductionStep}; use crate::rules::registry::ReductionEntry; use crate::topology::SimpleGraph; @@ -71,7 +71,11 @@ fn test_is_to_qubo_path() { &MinimizeSteps, ); assert!(path.is_some()); - assert_eq!(path.unwrap().len(), 1); // Direct path + let path = path.unwrap(); + assert!( + path.len() > 1, + "MIS -> QUBO should now go through a composite path" + ); } #[test] @@ -711,7 +715,7 @@ fn test_find_cheapest_path_multi_step() { #[test] fn test_find_cheapest_path_is_to_qubo() { let graph = ReductionGraph::new(); - let cost_fn = MinimizeSteps; + let cost_fn = Minimize("num_vars"); let input_size = crate::types::ProblemSize::new(vec![("num_vertices", 10), ("num_edges", 20)]); let src = ReductionGraph::variant_to_map(&MaximumIndependentSet::::variant()); let dst = ReductionGraph::variant_to_map(&QUBO::::variant()); @@ -726,7 +730,15 @@ fn test_find_cheapest_path_is_to_qubo() { ); assert!(path.is_some()); - assert_eq!(path.unwrap().len(), 1); // Direct path + let path = path.unwrap(); + assert!( + path.len() > 1, + "MIS -> QUBO should now be discovered through a composite path" + ); + assert_eq!( + path.type_names(), + vec!["MaximumIndependentSet", "MaximumSetPacking", "QUBO"] + ); } #[test] diff --git 
a/src/unit_tests/rules/ilp_bool_ilp_i32.rs b/src/unit_tests/rules/ilp_bool_ilp_i32.rs new file mode 100644 index 000000000..4f89d8599 --- /dev/null +++ b/src/unit_tests/rules/ilp_bool_ilp_i32.rs @@ -0,0 +1,77 @@ +use crate::models::algebraic::{LinearConstraint, ObjectiveSense, ILP}; +use crate::rules::traits::{ReduceTo, ReductionResult}; +use crate::solvers::{BruteForce, Solver}; +use crate::traits::Problem; + +#[test] +fn test_ilp_bool_to_ilp_i32_closed_loop() { + // Binary ILP: maximize x0 + 2*x1 + 3*x2, s.t. x0 + x1 + x2 <= 2, x1 + x2 <= 1 + let source = ILP::::new( + 3, + vec![ + LinearConstraint::le(vec![(0, 1.0), (1, 1.0), (2, 1.0)], 2.0), + LinearConstraint::le(vec![(1, 1.0), (2, 1.0)], 1.0), + ], + vec![(0, 1.0), (1, 2.0), (2, 3.0)], + ObjectiveSense::Maximize, + ); + + // Find optimal on source via brute force + let solver = BruteForce::new(); + let source_best = solver + .find_best(&source) + .expect("source should have optimal"); + let source_obj = source.evaluate(&source_best); + + let result = ReduceTo::>::reduce_to(&source); + let target = result.target_problem(); + + // Target should have same number of variables + assert_eq!(target.num_vars, 3); + // Target should have original 2 constraints + 3 binary bound constraints + assert_eq!(target.constraints.len(), 5); + // Dims should be (i32::MAX + 1) per variable + assert_eq!(target.dims(), vec![(i32::MAX as usize) + 1; 3]); + + // Extract solution back to source and verify optimality + let source_solution = result.extract_solution(&source_best); + assert_eq!(source.evaluate(&source_solution), source_obj); +} + +#[test] +fn test_ilp_bool_to_ilp_i32_empty() { + let source = ILP::::empty(); + let result = ReduceTo::>::reduce_to(&source); + let target = result.target_problem(); + assert_eq!(target.num_vars, 0); + assert!(target.constraints.is_empty()); +} + +#[test] +fn test_ilp_bool_to_ilp_i32_preserves_constraints() { + // Three constraints on 3 variables + let source = ILP::::new( + 3, + vec![ + 
LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 1.0), + LinearConstraint::ge(vec![(0, 1.0)], 0.0), + LinearConstraint::eq(vec![(2, 1.0)], 1.0), + ], + vec![(0, 1.0)], + ObjectiveSense::Maximize, + ); + + let result = ReduceTo::>::reduce_to(&source); + let target = result.target_problem(); + + // Original 3 constraints + 3 binary bound constraints (x_i <= 1) + assert_eq!(target.constraints.len(), 6); + + // Verify bound constraints are the last 3 + for i in 0..3 { + let c = &target.constraints[3 + i]; + assert_eq!(c.terms, vec![(i, 1.0)]); + assert_eq!(c.cmp, crate::models::algebraic::Comparison::Le); + assert_eq!(c.rhs, 1.0); + } +} diff --git a/src/unit_tests/rules/ilp_qubo.rs b/src/unit_tests/rules/ilp_qubo.rs index df21dc981..eea8f38b8 100644 --- a/src/unit_tests/rules/ilp_qubo.rs +++ b/src/unit_tests/rules/ilp_qubo.rs @@ -8,7 +8,7 @@ fn test_ilp_to_qubo_closed_loop() { // Binary ILP: maximize x0 + 2*x1 + 3*x2 // s.t. x0 + x1 <= 1, x1 + x2 <= 1 // Optimal: x = [1, 0, 1] with obj = 4 - let ilp = ILP::binary( + let ilp = ILP::::new( 3, vec![ LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 1.0), @@ -39,7 +39,7 @@ fn test_ilp_to_qubo_minimize() { // Binary ILP: minimize x0 + 2*x1 + 3*x2 // s.t. x0 + x1 >= 1 (at least one of x0, x1 selected) // Optimal: x = [1, 0, 0] with obj = 1 - let ilp = ILP::binary( + let ilp = ILP::::new( 3, vec![LinearConstraint::ge(vec![(0, 1.0), (1, 1.0)], 1.0)], vec![(0, 1.0), (1, 2.0), (2, 3.0)], @@ -66,7 +66,7 @@ fn test_ilp_to_qubo_equality() { // Binary ILP: maximize x0 + x1 + x2 // s.t. x0 + x1 + x2 = 2 // Optimal: any 2 of 3 variables = 1 - let ilp = ILP::binary( + let ilp = ILP::::new( 3, vec![LinearConstraint::eq( vec![(0, 1.0), (1, 1.0), (2, 1.0)], @@ -97,7 +97,7 @@ fn test_ilp_to_qubo_ge_with_slack() { // Ge constraint with slack_range > 1 to exercise slack variable code path. // 3 vars: minimize x0 + x1 + x2 // s.t. 
x0 + x1 + x2 >= 1 (max_lhs=3, b=1, slack_range=2, ns=ceil(log2(3))=2) - let ilp = ILP::binary( + let ilp = ILP::::new( 3, vec![LinearConstraint::ge( vec![(0, 1.0), (1, 1.0), (2, 1.0)], @@ -131,7 +131,7 @@ fn test_ilp_to_qubo_le_with_slack() { // Le constraint with rhs > 1 to exercise Le slack variable code path. // 3 vars: maximize x0 + x1 + x2 // s.t. x0 + x1 + x2 <= 2 (min_lhs=0, b=2, slack_range=2, ns=ceil(log2(3))=2) - let ilp = ILP::binary( + let ilp = ILP::::new( 3, vec![LinearConstraint::le( vec![(0, 1.0), (1, 1.0), (2, 1.0)], @@ -162,7 +162,7 @@ fn test_ilp_to_qubo_le_with_slack() { #[test] fn test_ilp_to_qubo_structure() { - let ilp = ILP::binary( + let ilp = ILP::::new( 3, vec![LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 1.0)], vec![(0, 1.0), (1, 2.0), (2, 3.0)], diff --git a/src/unit_tests/rules/maximumclique_ilp.rs b/src/unit_tests/rules/maximumclique_ilp.rs index 1bfc5f3f1..ce819c101 100644 --- a/src/unit_tests/rules/maximumclique_ilp.rs +++ b/src/unit_tests/rules/maximumclique_ilp.rs @@ -56,7 +56,7 @@ fn test_reduction_creates_valid_ilp() { SimpleGraph::new(3, vec![(0, 1), (1, 2), (0, 2)]), vec![1; 3], ); - let reduction: ReductionCliqueToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionCliqueToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); // Check ILP structure @@ -67,11 +67,6 @@ fn test_reduction_creates_valid_ilp() { "Complete graph has no non-edges, so no constraints" ); assert_eq!(ilp.sense, ObjectiveSense::Maximize, "Should maximize"); - - // All variables should be binary - for bound in &ilp.bounds { - assert_eq!(*bound, VarBounds::binary()); - } } #[test] @@ -79,7 +74,7 @@ fn test_reduction_with_non_edges() { // Path graph 0-1-2: edges (0,1) and (1,2), non-edge (0,2) let problem: MaximumClique = MaximumClique::new(SimpleGraph::new(3, vec![(0, 1), (1, 2)]), vec![1; 3]); - let reduction: ReductionCliqueToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionCliqueToILP = 
ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); // Should have 1 constraint for non-edge (0, 2) @@ -95,7 +90,7 @@ fn test_reduction_with_non_edges() { fn test_reduction_weighted() { let problem: MaximumClique = MaximumClique::new(SimpleGraph::new(3, vec![(0, 1)]), vec![5, 10, 15]); - let reduction: ReductionCliqueToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionCliqueToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); // Check that weights are correctly transferred to objective @@ -115,7 +110,7 @@ fn test_maximumclique_to_ilp_closed_loop() { SimpleGraph::new(3, vec![(0, 1), (1, 2), (0, 2)]), vec![1; 3], ); - let reduction: ReductionCliqueToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionCliqueToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); let ilp_solver = ILPSolver::new(); @@ -146,7 +141,7 @@ fn test_ilp_solution_equals_brute_force_path() { SimpleGraph::new(4, vec![(0, 1), (1, 2), (2, 3)]), vec![1; 4], ); - let reduction: ReductionCliqueToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionCliqueToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); let ilp_solver = ILPSolver::new(); @@ -174,7 +169,7 @@ fn test_ilp_solution_equals_brute_force_weighted() { // Since 0-1 and 1-2 are edges, both {0,1} and {1,2} are valid cliques let problem: MaximumClique = MaximumClique::new(SimpleGraph::new(3, vec![(0, 1), (1, 2)]), vec![1, 100, 1]); - let reduction: ReductionCliqueToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionCliqueToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); let ilp_solver = ILPSolver::new(); @@ -196,7 +191,7 @@ fn test_ilp_solution_equals_brute_force_weighted() { fn test_solution_extraction() { let problem: MaximumClique = MaximumClique::new(SimpleGraph::new(4, vec![(0, 1), (2, 3)]), vec![1; 4]); - let reduction: ReductionCliqueToILP = 
ReduceTo::::reduce_to(&problem); + let reduction: ReductionCliqueToILP = ReduceTo::>::reduce_to(&problem); // Test that extraction works correctly (1:1 mapping) let ilp_solution = vec![1, 1, 0, 0]; @@ -213,7 +208,7 @@ fn test_ilp_structure() { SimpleGraph::new(5, vec![(0, 1), (1, 2), (2, 3), (3, 4)]), vec![1; 5], ); - let reduction: ReductionCliqueToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionCliqueToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); assert_eq!(ilp.num_vars, 5); @@ -226,7 +221,7 @@ fn test_empty_graph() { // Graph with no edges: max clique = 1 (any single vertex) let problem: MaximumClique = MaximumClique::new(SimpleGraph::new(3, vec![]), vec![1; 3]); - let reduction: ReductionCliqueToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionCliqueToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); // All pairs are non-edges, so 3 constraints @@ -250,7 +245,7 @@ fn test_complete_graph() { SimpleGraph::new(4, vec![(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]), vec![1; 4], ); - let reduction: ReductionCliqueToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionCliqueToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); // No non-edges, so no constraints @@ -275,7 +270,7 @@ fn test_bipartite_graph() { SimpleGraph::new(4, vec![(0, 2), (0, 3), (1, 2), (1, 3)]), vec![1; 4], ); - let reduction: ReductionCliqueToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionCliqueToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); let ilp_solver = ILPSolver::new(); @@ -298,7 +293,7 @@ fn test_star_graph() { SimpleGraph::new(4, vec![(0, 1), (0, 2), (0, 3)]), vec![1; 4], ); - let reduction: ReductionCliqueToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionCliqueToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); // Non-edges: (1,2), (1,3), (2,3) = 3 constraints 
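
The non-edge counts asserted in the clique tests above follow from the formulation directly: a vertex set is a clique iff no selected pair is a non-edge, so the reduction emits one constraint `x_u + x_v <= 1` per non-edge, i.e. C(n,2) minus the edge count. A standalone sketch (hypothetical helper, not crate API) that reproduces the counts from `test_star_graph` and `test_complete_graph`:

```rust
use std::collections::HashSet;

/// Enumerate the non-edges of an n-vertex simple graph; each pair (u, v)
/// returned corresponds to one constraint x_u + x_v <= 1 in the target ILP.
fn non_edges(n: usize, edges: &[(usize, usize)]) -> Vec<(usize, usize)> {
    // Normalize edges so (u, v) and (v, u) compare equal.
    let set: HashSet<(usize, usize)> =
        edges.iter().map(|&(u, v)| (u.min(v), u.max(v))).collect();
    let mut out = Vec::new();
    for u in 0..n {
        for v in (u + 1)..n {
            if !set.contains(&(u, v)) {
                out.push((u, v));
            }
        }
    }
    out
}

fn main() {
    // Star graph from test_star_graph: non-edges (1,2), (1,3), (2,3).
    assert_eq!(non_edges(4, &[(0, 1), (0, 2), (0, 3)]).len(), 3);
    // Complete graph K4 from test_complete_graph: no non-edges, no constraints.
    assert!(non_edges(4, &[(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]).is_empty());
    // Edgeless graph from test_empty_graph: all 3 pairs are non-edges.
    assert_eq!(non_edges(3, &[]).len(), 3);
}
```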
diff --git a/src/unit_tests/rules/maximumindependentset_gridgraph.rs b/src/unit_tests/rules/maximumindependentset_gridgraph.rs index 734149f5e..52c6ee6ca 100644 --- a/src/unit_tests/rules/maximumindependentset_gridgraph.rs +++ b/src/unit_tests/rules/maximumindependentset_gridgraph.rs @@ -52,24 +52,3 @@ fn test_mis_simple_one_to_kings_one_closed_loop() { let size: usize = original_solution.iter().sum(); assert_eq!(size, 3, "Max IS in path of 5 should be 3"); } - -#[test] -fn test_mis_simple_one_to_kings_weighted_closed_loop() { - // Path graph: 0-1-2-3-4 (MIS = 3: select vertices 0, 2, 4) - let problem = MaximumIndependentSet::new( - SimpleGraph::new(5, vec![(0, 1), (1, 2), (2, 3), (3, 4)]), - vec![One; 5], - ); - let result = ReduceTo::>::reduce_to(&problem); - let target = result.target_problem(); - assert!(target.graph().num_vertices() > 5); - - let solver = BruteForce::new(); - let grid_solutions = solver.find_all_best(target); - assert!(!grid_solutions.is_empty()); - - let original_solution = result.extract_solution(&grid_solutions[0]); - assert_eq!(original_solution.len(), 5); - let size: usize = original_solution.iter().sum(); - assert_eq!(size, 3, "Max IS in path of 5 should be 3"); -} diff --git a/src/unit_tests/rules/maximumindependentset_ilp.rs b/src/unit_tests/rules/maximumindependentset_ilp.rs index b85681c18..cc6dd7a58 100644 --- a/src/unit_tests/rules/maximumindependentset_ilp.rs +++ b/src/unit_tests/rules/maximumindependentset_ilp.rs @@ -1,249 +1,88 @@ -use super::*; +use crate::models::algebraic::{ObjectiveSense, ILP}; +use crate::models::graph::MaximumIndependentSet; +use crate::rules::{MinimizeSteps, ReductionChain, ReductionGraph, ReductionPath}; use crate::solvers::{BruteForce, ILPSolver}; +use crate::topology::SimpleGraph; use crate::traits::Problem; -use crate::types::SolutionSize; - -#[test] -fn test_reduction_creates_valid_ilp() { - // Triangle graph: 3 vertices, 3 edges - let problem = MaximumIndependentSet::new( - SimpleGraph::new(3, 
vec![(0, 1), (1, 2), (0, 2)]), - vec![1i32; 3], - ); - let reduction: ReductionISToILP = ReduceTo::::reduce_to(&problem); - let ilp = reduction.target_problem(); - - // Check ILP structure - assert_eq!(ilp.num_vars, 3, "Should have one variable per vertex"); - assert_eq!( - ilp.constraints.len(), - 3, - "Should have one constraint per edge" - ); - assert_eq!(ilp.sense, ObjectiveSense::Maximize, "Should maximize"); - - // All variables should be binary - for bound in &ilp.bounds { - assert_eq!(*bound, VarBounds::binary()); - } - - // Each constraint should be x_i + x_j <= 1 - for constraint in &ilp.constraints { - assert_eq!(constraint.terms.len(), 2); - assert!((constraint.rhs - 1.0).abs() < 1e-9); - } -} - -#[test] -fn test_reduction_weighted() { - let problem = MaximumIndependentSet::new(SimpleGraph::new(3, vec![(0, 1)]), vec![5, 10, 15]); - let reduction: ReductionISToILP = ReduceTo::::reduce_to(&problem); - let ilp = reduction.target_problem(); - - // Check that weights are correctly transferred to objective - let mut coeffs: Vec = vec![0.0; 3]; - for &(var, coef) in &ilp.objective { - coeffs[var] = coef; - } - assert!((coeffs[0] - 5.0).abs() < 1e-9); - assert!((coeffs[1] - 10.0).abs() < 1e-9); - assert!((coeffs[2] - 15.0).abs() < 1e-9); +use crate::types::{ProblemSize, SolutionSize}; + +fn reduce_mis_to_ilp( + problem: &MaximumIndependentSet, +) -> (ReductionPath, ReductionChain) { + let graph = ReductionGraph::new(); + let src = ReductionGraph::variant_to_map(&MaximumIndependentSet::::variant()); + let dst = ReductionGraph::variant_to_map(&ILP::::variant()); + let path = graph + .find_cheapest_path( + "MaximumIndependentSet", + &src, + "ILP", + &dst, + &ProblemSize::new(vec![]), + &MinimizeSteps, + ) + .expect("Should find path MaximumIndependentSet -> ILP"); + let chain = graph + .reduce_along_path(&path, problem as &dyn std::any::Any) + .expect("Should reduce MaximumIndependentSet to ILP along path"); + (path, chain) } #[test] -fn 
test_maximumindependentset_to_ilp_closed_loop() { - // Triangle graph: max IS = 1 vertex +fn test_maximumindependentset_to_ilp_via_path_structure() { let problem = MaximumIndependentSet::new( SimpleGraph::new(3, vec![(0, 1), (1, 2), (0, 2)]), vec![1i32; 3], ); - let reduction: ReductionISToILP = ReduceTo::::reduce_to(&problem); - let ilp = reduction.target_problem(); - - let bf = BruteForce::new(); - let ilp_solver = ILPSolver::new(); - - // Solve with brute force on original problem - let bf_solutions = bf.find_all_best(&problem); - - // Solve via ILP reduction - let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); - let extracted = reduction.extract_solution(&ilp_solution); - - // Both should find optimal size = 1 - let bf_size: usize = bf_solutions[0].iter().sum(); - let ilp_size: usize = extracted.iter().sum(); - assert_eq!(bf_size, 1); - assert_eq!(ilp_size, 1); + let (path, chain) = reduce_mis_to_ilp(&problem); + let ilp: &ILP = chain.target_problem(); - // Verify the ILP solution is valid for the original problem assert!( - problem.evaluate(&extracted).is_valid(), - "Extracted solution should be valid" + path.len() > 1, + "Removed rule should be exercised through a multi-step path" + ); + assert_eq!( + path.type_names(), + vec!["MaximumIndependentSet", "MaximumSetPacking", "ILP"] ); + assert_eq!(ilp.num_vars, 3); + assert_eq!(ilp.constraints.len(), 3); + assert_eq!(ilp.sense, ObjectiveSense::Maximize); } #[test] -fn test_ilp_solution_equals_brute_force_path() { - // Path graph 0-1-2-3: max IS = 2 (e.g., {0, 2} or {1, 3} or {0, 3}) +fn test_maximumindependentset_to_ilp_via_path_closed_loop() { let problem = MaximumIndependentSet::new( SimpleGraph::new(4, vec![(0, 1), (1, 2), (2, 3)]), vec![1i32; 4], ); - let reduction: ReductionISToILP = ReduceTo::::reduce_to(&problem); - let ilp = reduction.target_problem(); + let (_, chain) = reduce_mis_to_ilp(&problem); + let ilp: &ILP = chain.target_problem(); let bf = BruteForce::new(); let 
ilp_solver = ILPSolver::new(); - - // Solve with brute force let bf_solutions = bf.find_all_best(&problem); - let bf_size: usize = bf_solutions[0].iter().sum(); - - // Solve via ILP let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); - let extracted = reduction.extract_solution(&ilp_solution); - let ilp_size: usize = extracted.iter().sum(); + let extracted = chain.extract_solution(&ilp_solution); + let bf_size: usize = bf_solutions[0].iter().sum(); + let ilp_size: usize = extracted.iter().sum(); assert_eq!(bf_size, 2); assert_eq!(ilp_size, 2); - - // Verify validity assert!(problem.evaluate(&extracted).is_valid()); } #[test] -fn test_ilp_solution_equals_brute_force_weighted() { - // Weighted problem: vertex 1 has high weight but is connected to both 0 and 2 - // 0 -- 1 -- 2 - // Weights: [1, 100, 1] - // Max IS by weight: just vertex 1 (weight 100) beats 0+2 (weight 2) +fn test_maximumindependentset_to_ilp_via_path_weighted() { let problem = MaximumIndependentSet::new(SimpleGraph::new(3, vec![(0, 1), (1, 2)]), vec![1, 100, 1]); - let reduction: ReductionISToILP = ReduceTo::::reduce_to(&problem); - let ilp = reduction.target_problem(); + let (_, chain) = reduce_mis_to_ilp(&problem); + let ilp: &ILP = chain.target_problem(); - let bf = BruteForce::new(); let ilp_solver = ILPSolver::new(); - - let bf_solutions = bf.find_all_best(&problem); - let bf_obj = problem.evaluate(&bf_solutions[0]); - let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); - let extracted = reduction.extract_solution(&ilp_solution); - let ilp_obj = problem.evaluate(&extracted); - - assert_eq!(bf_obj, SolutionSize::Valid(100)); - assert_eq!(ilp_obj, SolutionSize::Valid(100)); + let extracted = chain.extract_solution(&ilp_solution); - // Verify the solution selects vertex 1 + assert_eq!(problem.evaluate(&extracted), SolutionSize::Valid(100)); assert_eq!(extracted, vec![0, 1, 0]); } - -#[test] -fn test_solution_extraction() { - let problem = - 
MaximumIndependentSet::new(SimpleGraph::new(4, vec![(0, 1), (2, 3)]), vec![1i32; 4]); - let reduction: ReductionISToILP = ReduceTo::::reduce_to(&problem); - - // Test that extraction works correctly (1:1 mapping) - let ilp_solution = vec![1, 0, 0, 1]; - let extracted = reduction.extract_solution(&ilp_solution); - assert_eq!(extracted, vec![1, 0, 0, 1]); - - // Verify this is a valid IS (0 and 3 are not adjacent) - assert!(problem.evaluate(&extracted).is_valid()); -} - -#[test] -fn test_ilp_structure() { - let problem = MaximumIndependentSet::new( - SimpleGraph::new(5, vec![(0, 1), (1, 2), (2, 3), (3, 4)]), - vec![1i32; 5], - ); - let reduction: ReductionISToILP = ReduceTo::::reduce_to(&problem); - let ilp = reduction.target_problem(); - - assert_eq!(ilp.num_vars, 5); - assert_eq!(ilp.constraints.len(), 4); -} - -#[test] -fn test_empty_graph() { - // Graph with no edges: all vertices can be selected - let problem = MaximumIndependentSet::new(SimpleGraph::new(3, vec![]), vec![1i32; 3]); - let reduction: ReductionISToILP = ReduceTo::::reduce_to(&problem); - let ilp = reduction.target_problem(); - - assert_eq!(ilp.constraints.len(), 0); - - let ilp_solver = ILPSolver::new(); - let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); - let extracted = reduction.extract_solution(&ilp_solution); - - // All vertices should be selected - assert_eq!(extracted, vec![1, 1, 1]); - - assert!(problem.evaluate(&extracted).is_valid()); - assert_eq!(problem.evaluate(&extracted), SolutionSize::Valid(3)); -} - -#[test] -fn test_complete_graph() { - // Complete graph K4: max IS = 1 - let problem = MaximumIndependentSet::new( - SimpleGraph::new(4, vec![(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]), - vec![1i32; 4], - ); - let reduction: ReductionISToILP = ReduceTo::::reduce_to(&problem); - let ilp = reduction.target_problem(); - - assert_eq!(ilp.constraints.len(), 6); - - let ilp_solver = ILPSolver::new(); - let ilp_solution = ilp_solver.solve(ilp).expect("ILP 
should be solvable"); - let extracted = reduction.extract_solution(&ilp_solution); - - assert!(problem.evaluate(&extracted).is_valid()); - assert_eq!(problem.evaluate(&extracted), SolutionSize::Valid(1)); -} - -#[test] -fn test_solve_reduced() { - // Test the ILPSolver::solve_reduced method - let problem = MaximumIndependentSet::new( - SimpleGraph::new(4, vec![(0, 1), (1, 2), (2, 3)]), - vec![1i32; 4], - ); - - let ilp_solver = ILPSolver::new(); - let solution = ilp_solver - .solve_reduced(&problem) - .expect("solve_reduced should work"); - - assert!(problem.evaluate(&solution).is_valid()); - assert_eq!(problem.evaluate(&solution), SolutionSize::Valid(2)); -} - -#[test] -fn test_bipartite_graph() { - // Bipartite graph: 0-2, 0-3, 1-2, 1-3 (two independent sets: {0,1} and {2,3}) - // With equal weights, max IS = 2 - let problem = MaximumIndependentSet::new( - SimpleGraph::new(4, vec![(0, 2), (0, 3), (1, 2), (1, 3)]), - vec![1i32; 4], - ); - let reduction: ReductionISToILP = ReduceTo::::reduce_to(&problem); - let ilp = reduction.target_problem(); - - let ilp_solver = ILPSolver::new(); - let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); - let extracted = reduction.extract_solution(&ilp_solution); - - assert!(problem.evaluate(&extracted).is_valid()); - assert_eq!(problem.evaluate(&extracted), SolutionSize::Valid(2)); - - // Should select either {0, 1} or {2, 3} - let sum: usize = extracted.iter().sum(); - assert_eq!(sum, 2); -} diff --git a/src/unit_tests/rules/maximumindependentset_qubo.rs b/src/unit_tests/rules/maximumindependentset_qubo.rs index c7ee5cc2e..8a7c70365 100644 --- a/src/unit_tests/rules/maximumindependentset_qubo.rs +++ b/src/unit_tests/rules/maximumindependentset_qubo.rs @@ -1,75 +1,93 @@ -use super::*; -use crate::solvers::BruteForce; +use crate::models::algebraic::QUBO; +use crate::models::graph::MaximumIndependentSet; +use crate::rules::{Minimize, ReductionChain, ReductionGraph, ReductionPath}; +use 
crate::solvers::{BruteForce, Solver}; +use crate::topology::{Graph, SimpleGraph}; use crate::traits::Problem; +use crate::types::{ProblemSize, SolutionSize}; + +fn reduce_mis_to_qubo( + problem: &MaximumIndependentSet, +) -> (ReductionPath, ReductionChain) { + let graph = ReductionGraph::new(); + let src = ReductionGraph::variant_to_map(&MaximumIndependentSet::::variant()); + let dst = ReductionGraph::variant_to_map(&QUBO::::variant()); + let path = graph + .find_cheapest_path( + "MaximumIndependentSet", + &src, + "QUBO", + &dst, + &ProblemSize::new(vec![ + ("num_vertices", problem.graph().num_vertices()), + ("num_edges", problem.graph().num_edges()), + ]), + &Minimize("num_vars"), + ) + .expect("Should find path MaximumIndependentSet -> QUBO"); + let chain = graph + .reduce_along_path(&path, problem as &dyn std::any::Any) + .expect("Should reduce MaximumIndependentSet to QUBO along path"); + (path, chain) +} #[test] -fn test_independentset_to_qubo_closed_loop() { - // Path graph: 0-1-2-3 (4 vertices, 3 edges) - // Maximum IS = {0, 2} or {1, 3} (size 2) - let is = MaximumIndependentSet::new( +fn test_maximumindependentset_to_qubo_via_path_closed_loop() { + let problem = MaximumIndependentSet::new( SimpleGraph::new(4, vec![(0, 1), (1, 2), (2, 3)]), vec![1i32; 4], ); - let reduction = ReduceTo::>::reduce_to(&is); - let qubo = reduction.target_problem(); + let (path, chain) = reduce_mis_to_qubo(&problem); + let qubo: &QUBO = chain.target_problem(); + + assert!( + path.len() > 1, + "Removed rule should be exercised through a multi-step path" + ); + assert_eq!( + path.type_names(), + vec!["MaximumIndependentSet", "MaximumSetPacking", "QUBO"] + ); + assert_eq!(qubo.num_variables(), 4); let solver = BruteForce::new(); let qubo_solutions = solver.find_all_best(qubo); - for sol in &qubo_solutions { - let extracted = reduction.extract_solution(sol); - assert!(is.evaluate(&extracted).is_valid()); + let extracted = chain.extract_solution(sol); + 
assert!(problem.evaluate(&extracted).is_valid()); assert_eq!(extracted.iter().filter(|&&x| x == 1).count(), 2); } } #[test] -fn test_independentset_to_qubo_triangle() { - // Triangle: 0-1-2 (complete graph K3) - // Maximum IS = any single vertex (size 1) - let is = MaximumIndependentSet::new( - SimpleGraph::new(3, vec![(0, 1), (1, 2), (0, 2)]), - vec![1i32; 3], - ); - let reduction = ReduceTo::>::reduce_to(&is); - let qubo = reduction.target_problem(); +fn test_maximumindependentset_to_qubo_via_path_weighted() { + let problem = + MaximumIndependentSet::new(SimpleGraph::new(3, vec![(0, 1), (1, 2)]), vec![1, 100, 1]); + let (_, chain) = reduce_mis_to_qubo(&problem); + let qubo: &QUBO = chain.target_problem(); let solver = BruteForce::new(); - let qubo_solutions = solver.find_all_best(qubo); + let qubo_solution = solver + .find_best(qubo) + .expect("QUBO should be solvable via path"); + let extracted = chain.extract_solution(&qubo_solution); - for sol in &qubo_solutions { - let extracted = reduction.extract_solution(sol); - assert!(is.evaluate(&extracted).is_valid()); - assert_eq!(extracted.iter().filter(|&&x| x == 1).count(), 1); - } + assert_eq!(problem.evaluate(&extracted), SolutionSize::Valid(100)); + assert_eq!(extracted, vec![0, 1, 0]); } #[test] -fn test_independentset_to_qubo_empty_graph() { - // No edges: all vertices form the IS - let is = MaximumIndependentSet::new(SimpleGraph::new(3, vec![]), vec![1i32; 3]); - let reduction = ReduceTo::>::reduce_to(&is); - let qubo = reduction.target_problem(); +fn test_maximumindependentset_to_qubo_via_path_empty_graph() { + let problem = MaximumIndependentSet::new(SimpleGraph::new(3, vec![]), vec![1i32; 3]); + let (_, chain) = reduce_mis_to_qubo(&problem); + let qubo: &QUBO = chain.target_problem(); - let solver = BruteForce::new(); - let qubo_solutions = solver.find_all_best(qubo); - - for sol in &qubo_solutions { - let extracted = reduction.extract_solution(sol); - assert!(is.evaluate(&extracted).is_valid()); - 
assert_eq!(extracted.iter().filter(|&&x| x == 1).count(), 3); - } -} + assert_eq!(qubo.num_variables(), 3); -#[test] -fn test_independentset_to_qubo_structure() { - let is = MaximumIndependentSet::new( - SimpleGraph::new(4, vec![(0, 1), (1, 2), (2, 3)]), - vec![1i32; 4], - ); - let reduction = ReduceTo::<QUBO<i32>>::reduce_to(&is); - let qubo = reduction.target_problem(); + let solver = BruteForce::new(); + let qubo_solution = solver.find_best(qubo).expect("QUBO should be solvable"); + let extracted = chain.extract_solution(&qubo_solution); - // QUBO should have same number of variables as vertices - assert_eq!(qubo.num_variables(), 4); + assert_eq!(extracted, vec![1, 1, 1]); + assert_eq!(problem.evaluate(&extracted), SolutionSize::Valid(3)); } diff --git a/src/unit_tests/rules/maximummatching_ilp.rs b/src/unit_tests/rules/maximummatching_ilp.rs index 65d69551a..15b2a5779 100644 --- a/src/unit_tests/rules/maximummatching_ilp.rs +++ b/src/unit_tests/rules/maximummatching_ilp.rs @@ -9,7 +9,7 @@ fn test_reduction_creates_valid_ilp() { // Triangle graph: 3 vertices, 3 edges let problem = MaximumMatching::<_, i32>::unit_weights(SimpleGraph::new(3, vec![(0, 1), (1, 2), (0, 2)])); - let reduction: ReductionMatchingToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionMatchingToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); // Check ILP structure @@ -22,11 +22,6 @@ fn test_reduction_creates_valid_ilp() { ); assert_eq!(ilp.sense, ObjectiveSense::Maximize, "Should maximize"); - // All variables should be binary - for bound in &ilp.bounds { - assert_eq!(*bound, VarBounds::binary()); - } - // Each constraint should be sum of incident edge vars <= 1 for constraint in &ilp.constraints { assert!((constraint.rhs - 1.0).abs() < 1e-9); @@ -36,7 +31,7 @@ #[test] fn test_reduction_weighted() { let problem = MaximumMatching::new(SimpleGraph::new(3, vec![(0, 1), (1, 2)]), vec![5, 10]); - let reduction:
ReductionMatchingToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionMatchingToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); // Check that weights are correctly transferred to objective @@ -53,7 +48,7 @@ fn test_maximummatching_to_ilp_closed_loop() { // Triangle graph: max matching = 1 edge let problem = MaximumMatching::<_, i32>::unit_weights(SimpleGraph::new(3, vec![(0, 1), (1, 2), (0, 2)])); - let reduction: ReductionMatchingToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionMatchingToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let bf = BruteForce::new(); @@ -84,7 +79,7 @@ fn test_ilp_solution_equals_brute_force_path() { // Path graph 0-1-2-3: max matching = 2 (edges {0-1, 2-3}) let problem = MaximumMatching::<_, i32>::unit_weights(SimpleGraph::new(4, vec![(0, 1), (1, 2), (2, 3)])); - let reduction: ReductionMatchingToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionMatchingToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let bf = BruteForce::new(); @@ -113,7 +108,7 @@ fn test_ilp_solution_equals_brute_force_weighted() { // Weights: [100, 1] // Max matching by weight: just edge 0-1 (weight 100) beats edge 1-2 (weight 1) let problem = MaximumMatching::new(SimpleGraph::new(3, vec![(0, 1), (1, 2)]), vec![100, 1]); - let reduction: ReductionMatchingToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionMatchingToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let bf = BruteForce::new(); @@ -137,7 +132,7 @@ fn test_ilp_solution_equals_brute_force_weighted() { fn test_solution_extraction() { let problem = MaximumMatching::<_, i32>::unit_weights(SimpleGraph::new(4, vec![(0, 1), (2, 3)])); - let reduction: ReductionMatchingToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionMatchingToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); // Test that extraction works correctly (1:1
mapping) let ilp_solution = vec![1, 1]; @@ -154,7 +149,7 @@ fn test_ilp_structure() { 5, vec![(0, 1), (1, 2), (2, 3), (3, 4)], )); - let reduction: ReductionMatchingToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionMatchingToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); assert_eq!(ilp.num_vars, 4); @@ -167,7 +162,7 @@ fn test_empty_graph() { // Graph with no edges: empty matching let problem = MaximumMatching::<_, i32>::unit_weights(SimpleGraph::new(3, vec![])); - let reduction: ReductionMatchingToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionMatchingToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); assert_eq!(ilp.num_vars, 0); @@ -184,7 +179,7 @@ fn test_k4_perfect_matching() { 4, vec![(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)], )); - let reduction: ReductionMatchingToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionMatchingToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); // 6 edges, 4 vertices with constraints @@ -209,7 +204,7 @@ fn test_star_graph() { // Max matching = 1 (only one edge can be selected) let problem = MaximumMatching::<_, i32>::unit_weights(SimpleGraph::new(4, vec![(0, 1), (0, 2), (0, 3)])); - let reduction: ReductionMatchingToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionMatchingToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let ilp_solver = ILPSolver::new(); @@ -228,7 +223,7 @@ fn test_bipartite_graph() { 4, vec![(0, 2), (0, 3), (1, 2), (1, 3)], )); - let reduction: ReductionMatchingToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionMatchingToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let ilp_solver = ILPSolver::new(); diff --git a/src/unit_tests/rules/maximumsetpacking_ilp.rs b/src/unit_tests/rules/maximumsetpacking_ilp.rs index d547ca63a..8cf4cdc26 100644 ---
a/src/unit_tests/rules/maximumsetpacking_ilp.rs +++ b/src/unit_tests/rules/maximumsetpacking_ilp.rs @@ -5,28 +5,21 @@ use crate::types::SolutionSize; #[test] fn test_reduction_creates_valid_ilp() { - // Three sets with two overlapping pairs let problem = MaximumSetPacking::<i32>::new(vec![vec![0, 1], vec![1, 2], vec![2, 3]]); - let reduction: ReductionSPToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionSPToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); - // Check ILP structure assert_eq!(ilp.num_vars, 3, "Should have one variable per set"); + // Elements 1 and 2 each appear in 2 sets → 2 element constraints assert_eq!( ilp.constraints.len(), 2, - "Should have one constraint per overlapping pair" + "Should have one constraint per shared element" ); assert_eq!(ilp.sense, ObjectiveSense::Maximize, "Should maximize"); - // All variables should be binary - for bound in &ilp.bounds { - assert_eq!(*bound, VarBounds::binary()); - } - - // Each constraint should be x_i + x_j <= 1 for constraint in &ilp.constraints { - assert_eq!(constraint.terms.len(), 2); + assert!(constraint.terms.len() >= 2); assert!((constraint.rhs - 1.0).abs() < 1e-9); } } @@ -34,10 +27,9 @@ fn test_reduction_creates_valid_ilp() { #[test] fn test_reduction_weighted() { let problem = MaximumSetPacking::with_weights(vec![vec![0, 1], vec![2, 3]], vec![5, 10]); - let reduction: ReductionSPToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionSPToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); - // Check that weights are correctly transferred to objective let mut coeffs: Vec<f64> = vec![0.0; 2]; for &(var, coef) in &ilp.objective { coeffs[var] = coef; @@ -48,67 +40,35 @@ #[test] fn test_maximumsetpacking_to_ilp_closed_loop() { - // Chain: {0,1}, {1,2}, {2,3} - can select at most 2 non-adjacent sets let problem = MaximumSetPacking::<i32>::new(vec![vec![0, 1], vec![1, 2], vec![2, 3]]); - let reduction:
ReductionSPToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionSPToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let bf = BruteForce::new(); let ilp_solver = ILPSolver::new(); - // Solve with brute force on original problem let bf_solutions = bf.find_all_best(&problem); - - // Solve via ILP reduction let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); let extracted = reduction.extract_solution(&ilp_solution); - // Both should find optimal size = 2 let bf_size: usize = bf_solutions[0].iter().sum(); let ilp_size: usize = extracted.iter().sum(); assert_eq!(bf_size, 2); assert_eq!(ilp_size, 2); - // Verify the ILP solution is valid for the original problem assert!( problem.evaluate(&extracted).is_valid(), "Extracted solution should be valid" ); } -#[test] -fn test_ilp_solution_equals_brute_force_all_overlap() { - // All sets share element 0: can only select one - let problem = MaximumSetPacking::<i32>::new(vec![vec![0, 1], vec![0, 2], vec![0, 3]]); - let reduction: ReductionSPToILP = ReduceTo::<ILP>::reduce_to(&problem); - let ilp = reduction.target_problem(); - - let bf = BruteForce::new(); - let ilp_solver = ILPSolver::new(); - - let bf_solutions = bf.find_all_best(&problem); - let bf_size: usize = bf_solutions[0].iter().sum(); - - let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); - let extracted = reduction.extract_solution(&ilp_solution); - let ilp_size: usize = extracted.iter().sum(); - - assert_eq!(bf_size, 1); - assert_eq!(ilp_size, 1); - - assert!(problem.evaluate(&extracted).is_valid()); -} - #[test] fn test_ilp_solution_equals_brute_force_weighted() { - // Weighted problem: single heavy set vs multiple light sets - // Set 0 covers all elements but has weight 5 - // Sets 1 and 2 are disjoint and together have weight 6 let problem = MaximumSetPacking::with_weights( vec![vec![0, 1, 2, 3], vec![0, 1], vec![2, 3]], vec![5, 3, 3], ); - let reduction: ReductionSPToILP =
ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionSPToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let bf = BruteForce::new(); @@ -123,8 +83,6 @@ fn test_ilp_solution_equals_brute_force_weighted() { assert_eq!(bf_obj, SolutionSize::Valid(6)); assert_eq!(ilp_obj, SolutionSize::Valid(6)); - - // Should select sets 1 and 2 assert_eq!(extracted, vec![0, 1, 1]); } @@ -132,34 +90,18 @@ fn test_solution_extraction() { let problem = MaximumSetPacking::<i32>::new(vec![vec![0, 1], vec![2, 3], vec![4, 5], vec![6, 7]]); - let reduction: ReductionSPToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionSPToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); - // Test that extraction works correctly (1:1 mapping) let ilp_solution = vec![1, 0, 1, 0]; let extracted = reduction.extract_solution(&ilp_solution); assert_eq!(extracted, vec![1, 0, 1, 0]); - - // Verify this is a valid packing (sets 0 and 2 are disjoint) assert!(problem.evaluate(&extracted).is_valid()); } -#[test] -fn test_ilp_structure() { - let problem = - MaximumSetPacking::<i32>::new(vec![vec![0, 1], vec![1, 2], vec![2, 3], vec![3, 4]]); - let reduction: ReductionSPToILP = ReduceTo::<ILP>::reduce_to(&problem); - let ilp = reduction.target_problem(); - - assert_eq!(ilp.num_vars, 4); - // 3 overlapping pairs: (0,1), (1,2), (2,3) - assert_eq!(ilp.constraints.len(), 3); -} - #[test] fn test_disjoint_sets() { - // All sets are disjoint: no overlapping pairs let problem = MaximumSetPacking::<i32>::new(vec![vec![0], vec![1], vec![2], vec![3]]); - let reduction: ReductionSPToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionSPToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); assert_eq!(ilp.constraints.len(), 0); @@ -168,26 +110,13 @@ fn test_disjoint_sets() { let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); let extracted = reduction.extract_solution(&ilp_solution); - // All sets
should be selected assert_eq!(extracted, vec![1, 1, 1, 1]); - assert!(problem.evaluate(&extracted).is_valid()); assert_eq!(problem.evaluate(&extracted), SolutionSize::Valid(4)); } -#[test] -fn test_empty_sets() { - let problem = MaximumSetPacking::<i32>::new(vec![]); - let reduction: ReductionSPToILP = ReduceTo::<ILP>::reduce_to(&problem); - let ilp = reduction.target_problem(); - - assert_eq!(ilp.num_vars, 0); - assert_eq!(ilp.constraints.len(), 0); -} - #[test] fn test_solve_reduced() { - // Test the ILPSolver::solve_reduced method let problem = MaximumSetPacking::<i32>::new(vec![vec![0, 1], vec![1, 2], vec![2, 3]]); let ilp_solver = ILPSolver::new(); @@ -198,22 +127,3 @@ fn test_solve_reduced() { assert!(problem.evaluate(&solution).is_valid()); assert_eq!(problem.evaluate(&solution), SolutionSize::Valid(2)); } - -#[test] -fn test_all_sets_overlap_pairwise() { - // All pairs overlap: can only select one set - // Sets: {0,1}, {0,2}, {1,2} - each pair shares one element - let problem = MaximumSetPacking::<i32>::new(vec![vec![0, 1], vec![0, 2], vec![1, 2]]); - let reduction: ReductionSPToILP = ReduceTo::<ILP>::reduce_to(&problem); - let ilp = reduction.target_problem(); - - // 3 overlapping pairs - assert_eq!(ilp.constraints.len(), 3); - - let ilp_solver = ILPSolver::new(); - let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); - let extracted = reduction.extract_solution(&ilp_solution); - - assert!(problem.evaluate(&extracted).is_valid()); - assert_eq!(problem.evaluate(&extracted), SolutionSize::Valid(1)); -} diff --git a/src/unit_tests/rules/minimumdominatingset_ilp.rs b/src/unit_tests/rules/minimumdominatingset_ilp.rs index cc91b26db..01cd49e57 100644 --- a/src/unit_tests/rules/minimumdominatingset_ilp.rs +++ b/src/unit_tests/rules/minimumdominatingset_ilp.rs @@ -10,7 +10,7 @@ fn test_reduction_creates_valid_ilp() { SimpleGraph::new(3, vec![(0, 1), (1, 2), (0, 2)]), vec![1i32; 3], ); - let reduction: ReductionDSToILP = ReduceTo::<ILP>::reduce_to(&problem); + let
reduction: ReductionDSToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); // Check ILP structure @@ -22,11 +22,6 @@ fn test_reduction_creates_valid_ilp() { ); assert_eq!(ilp.sense, ObjectiveSense::Minimize, "Should minimize"); - // All variables should be binary - for bound in &ilp.bounds { - assert_eq!(*bound, VarBounds::binary()); - } - // Each constraint should be x_v + sum_{u in N(v)} x_u >= 1 for constraint in &ilp.constraints { assert!(!constraint.terms.is_empty()); @@ -37,7 +32,7 @@ #[test] fn test_reduction_weighted() { let problem = MinimumDominatingSet::new(SimpleGraph::new(3, vec![(0, 1)]), vec![5, 10, 15]); - let reduction: ReductionDSToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionDSToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); // Check that weights are correctly transferred to objective @@ -58,7 +53,7 @@ fn test_minimumdominatingset_to_ilp_closed_loop() { SimpleGraph::new(4, vec![(0, 1), (0, 2), (0, 3)]), vec![1i32; 4], ); - let reduction: ReductionDSToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionDSToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let bf = BruteForce::new(); @@ -91,7 +86,7 @@ fn test_ilp_solution_equals_brute_force_path() { SimpleGraph::new(5, vec![(0, 1), (1, 2), (2, 3), (3, 4)]), vec![1i32; 5], ); - let reduction: ReductionDSToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionDSToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let bf = BruteForce::new(); @@ -121,7 +116,7 @@ fn test_ilp_solution_equals_brute_force_weighted() { SimpleGraph::new(4, vec![(0, 1), (0, 2), (0, 3)]), vec![100, 1, 1, 1], ); - let reduction: ReductionDSToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionDSToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let bf = BruteForce::new(); @@ -145,7 +140,7 @@
fn test_ilp_solution_equals_brute_force_weighted() { fn test_solution_extraction() { let problem = MinimumDominatingSet::new(SimpleGraph::new(4, vec![(0, 1), (2, 3)]), vec![1i32; 4]); - let reduction: ReductionDSToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionDSToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); // Test that extraction works correctly (1:1 mapping) let ilp_solution = vec![1, 0, 1, 0]; @@ -162,7 +157,7 @@ fn test_ilp_structure() { SimpleGraph::new(5, vec![(0, 1), (1, 2), (2, 3), (3, 4)]), vec![1i32; 5], ); - let reduction: ReductionDSToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionDSToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); assert_eq!(ilp.num_vars, 5); @@ -173,7 +168,7 @@ fn test_ilp_structure() { fn test_isolated_vertices() { // Graph with isolated vertex 2: it must be in the dominating set let problem = MinimumDominatingSet::new(SimpleGraph::new(3, vec![(0, 1)]), vec![1i32; 3]); - let reduction: ReductionDSToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionDSToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let ilp_solver = ILPSolver::new(); @@ -193,7 +188,7 @@ fn test_complete_graph() { SimpleGraph::new(4, vec![(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]), vec![1i32; 4], ); - let reduction: ReductionDSToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionDSToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let ilp_solver = ILPSolver::new(); @@ -208,7 +203,7 @@ fn test_complete_graph() { fn test_single_vertex() { // Single vertex with no edges: must be in dominating set let problem = MinimumDominatingSet::new(SimpleGraph::new(1, vec![]), vec![1i32; 1]); - let reduction: ReductionDSToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionDSToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let ilp_solver = ILPSolver::new(); @@ -229,7 +224,7 @@ fn
test_cycle_graph() { SimpleGraph::new(5, vec![(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]), vec![1i32; 5], ); - let reduction: ReductionDSToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionDSToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let bf = BruteForce::new(); diff --git a/src/unit_tests/rules/minimumsetcovering_ilp.rs b/src/unit_tests/rules/minimumsetcovering_ilp.rs index e3b213963..523b6a569 100644 --- a/src/unit_tests/rules/minimumsetcovering_ilp.rs +++ b/src/unit_tests/rules/minimumsetcovering_ilp.rs @@ -7,7 +7,7 @@ use crate::types::SolutionSize; fn test_reduction_creates_valid_ilp() { // Universe: {0, 1, 2}, Sets: S0={0,1}, S1={1,2} let problem = MinimumSetCovering::<i32>::new(3, vec![vec![0, 1], vec![1, 2]]); - let reduction: ReductionSCToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionSCToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); // Check ILP structure @@ -19,11 +19,6 @@ fn test_reduction_creates_valid_ilp() { ); assert_eq!(ilp.sense, ObjectiveSense::Minimize, "Should minimize"); - // All variables should be binary - for bound in &ilp.bounds { - assert_eq!(*bound, VarBounds::binary()); - } - // Each constraint should be sum >= 1 for constraint in &ilp.constraints { assert!((constraint.rhs - 1.0).abs() < 1e-9); @@ -33,7 +28,7 @@ #[test] fn test_reduction_weighted() { let problem = MinimumSetCovering::with_weights(3, vec![vec![0, 1], vec![1, 2]], vec![5, 10]); - let reduction: ReductionSCToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionSCToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); // Check that weights are correctly transferred to objective @@ -50,7 +45,7 @@ fn test_minimumsetcovering_to_ilp_closed_loop() { // Universe: {0, 1, 2}, Sets: S0={0,1}, S1={1,2}, S2={0,2} // Minimum cover: any 2 sets work let problem = MinimumSetCovering::<i32>::new(3, vec![vec![0, 1], vec![1, 2],
vec![0, 2]]); - let reduction: ReductionSCToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionSCToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let bf = BruteForce::new(); @@ -87,7 +82,7 @@ fn test_ilp_solution_equals_brute_force_weighted() { vec![vec![0, 1, 2], vec![0, 1], vec![2]], vec![10, 3, 3], ); - let reduction: ReductionSCToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionSCToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let bf = BruteForce::new(); @@ -110,7 +105,7 @@ fn test_ilp_solution_equals_brute_force_weighted() { #[test] fn test_solution_extraction() { let problem = MinimumSetCovering::<i32>::new(4, vec![vec![0, 1], vec![2, 3]]); - let reduction: ReductionSCToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionSCToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); // Test that extraction works correctly (1:1 mapping) let ilp_solution = vec![1, 1]; @@ -125,7 +120,7 @@ fn test_solution_extraction() { fn test_ilp_structure() { let problem = MinimumSetCovering::<i32>::new(5, vec![vec![0, 1], vec![1, 2], vec![2, 3], vec![3, 4]]); - let reduction: ReductionSCToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionSCToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); assert_eq!(ilp.num_vars, 4); @@ -138,7 +133,7 @@ fn test_single_set_covers_all() { let problem = MinimumSetCovering::<i32>::new(3, vec![vec![0, 1, 2], vec![0], vec![1], vec![2]]); let ilp_solver = ILPSolver::new(); - let reduction: ReductionSCToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionSCToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); @@ -157,7 +152,7 @@ fn test_overlapping_sets() { let problem = MinimumSetCovering::<i32>::new(3, vec![vec![0, 1], vec![1, 2]]); let ilp_solver = ILPSolver::new(); - let reduction: ReductionSCToILP =
ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionSCToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); @@ -174,7 +169,7 @@ fn test_overlapping_sets() { fn test_empty_universe() { // Empty universe is trivially covered let problem = MinimumSetCovering::<i32>::new(0, vec![]); - let reduction: ReductionSCToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionSCToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); assert_eq!(ilp.num_vars, 0); @@ -204,7 +199,7 @@ fn test_constraint_structure() { // Element 1 is in S1, S2 -> constraint: x1 + x2 >= 1 // Element 2 is in S2 -> constraint: x2 >= 1 let problem = MinimumSetCovering::<i32>::new(3, vec![vec![0], vec![0, 1], vec![1, 2]]); - let reduction: ReductionSCToILP = ReduceTo::<ILP>::reduce_to(&problem); + let reduction: ReductionSCToILP = ReduceTo::<ILP<f64>>::reduce_to(&problem); let ilp = reduction.target_problem(); assert_eq!(ilp.constraints.len(), 3); diff --git a/src/unit_tests/rules/minimumvertexcover_ilp.rs b/src/unit_tests/rules/minimumvertexcover_ilp.rs index d6a76743b..d58d372f5 100644 --- a/src/unit_tests/rules/minimumvertexcover_ilp.rs +++ b/src/unit_tests/rules/minimumvertexcover_ilp.rs @@ -1,297 +1,88 @@ -use super::*; +use crate::models::algebraic::{ObjectiveSense, ILP}; +use crate::models::graph::MinimumVertexCover; +use crate::rules::{MinimizeSteps, ReductionChain, ReductionGraph, ReductionPath}; use crate::solvers::{BruteForce, ILPSolver}; +use crate::topology::SimpleGraph; use crate::traits::Problem; -use crate::types::SolutionSize; - -#[test] -fn test_reduction_creates_valid_ilp() { - // Triangle graph: 3 vertices, 3 edges - let problem = MinimumVertexCover::new( - SimpleGraph::new(3, vec![(0, 1), (1, 2), (0, 2)]), - vec![1i32; 3], - ); - let reduction: ReductionVCToILP = ReduceTo::<ILP>::reduce_to(&problem); - let ilp = reduction.target_problem(); - - // Check ILP structure -
assert_eq!(ilp.num_vars, 3, "Should have one variable per vertex"); - assert_eq!( - ilp.constraints.len(), - 3, - "Should have one constraint per edge" - ); - assert_eq!(ilp.sense, ObjectiveSense::Minimize, "Should minimize"); - - // All variables should be binary - for bound in &ilp.bounds { - assert_eq!(*bound, VarBounds::binary()); - } - - // Each constraint should be x_i + x_j >= 1 - for constraint in &ilp.constraints { - assert_eq!(constraint.terms.len(), 2); - assert!((constraint.rhs - 1.0).abs() < 1e-9); - } -} - -#[test] -fn test_reduction_weighted() { - let problem = MinimumVertexCover::new(SimpleGraph::new(3, vec![(0, 1)]), vec![5, 10, 15]); - let reduction: ReductionVCToILP = ReduceTo::<ILP>::reduce_to(&problem); - let ilp = reduction.target_problem(); - - // Check that weights are correctly transferred to objective - let mut coeffs: Vec<f64> = vec![0.0; 3]; - for &(var, coef) in &ilp.objective { - coeffs[var] = coef; - } - assert!((coeffs[0] - 5.0).abs() < 1e-9); - assert!((coeffs[1] - 10.0).abs() < 1e-9); - assert!((coeffs[2] - 15.0).abs() < 1e-9); +use crate::types::{ProblemSize, SolutionSize}; + +fn reduce_vc_to_ilp( + problem: &MinimumVertexCover<SimpleGraph, i32>, +) -> (ReductionPath, ReductionChain) { + let graph = ReductionGraph::new(); + let src = ReductionGraph::variant_to_map(&MinimumVertexCover::<SimpleGraph, i32>::variant()); + let dst = ReductionGraph::variant_to_map(&ILP::<f64>::variant()); + let path = graph + .find_cheapest_path( + "MinimumVertexCover", + &src, + "ILP", + &dst, + &ProblemSize::new(vec![]), + &MinimizeSteps, + ) + .expect("Should find path MinimumVertexCover -> ILP"); + let chain = graph + .reduce_along_path(&path, problem as &dyn std::any::Any) + .expect("Should reduce MinimumVertexCover to ILP along path"); + (path, chain) } #[test] -fn test_minimumvertexcover_to_ilp_closed_loop() { - // Triangle graph: min VC = 2 vertices +fn test_minimumvertexcover_to_ilp_via_path_structure() { let problem = MinimumVertexCover::new( SimpleGraph::new(3, vec![(0, 1), (1, 2), (0, 2)]),
vec![1i32; 3], ); - let reduction: ReductionVCToILP = ReduceTo::<ILP>::reduce_to(&problem); - let ilp = reduction.target_problem(); - - let bf = BruteForce::new(); - let ilp_solver = ILPSolver::new(); - - // Solve with brute force on original problem - let bf_solutions = bf.find_all_best(&problem); - - // Solve via ILP reduction - let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); - let extracted = reduction.extract_solution(&ilp_solution); - - // Both should find optimal size = 2 - let bf_size: usize = bf_solutions[0].iter().sum(); - let ilp_size: usize = extracted.iter().sum(); - assert_eq!(bf_size, 2); - assert_eq!(ilp_size, 2); + let (path, chain) = reduce_vc_to_ilp(&problem); + let ilp: &ILP<f64> = chain.target_problem(); - // Verify the ILP solution is valid for the original problem assert!( - problem.evaluate(&extracted).is_valid(), - "Extracted solution should be valid" + path.len() > 1, + "Removed rule should be exercised through a multi-step path" + ); + assert_eq!( + path.type_names(), + vec!["MinimumVertexCover", "MinimumSetCovering", "ILP"] ); + assert_eq!(ilp.num_vars, 3); + assert_eq!(ilp.constraints.len(), 3); + assert_eq!(ilp.sense, ObjectiveSense::Minimize); } #[test] -fn test_ilp_solution_equals_brute_force_path() { - // Path graph 0-1-2-3: min VC = 2 (e.g., {1, 2} or {0, 2} or {1, 3}) +fn test_minimumvertexcover_to_ilp_via_path_closed_loop() { let problem = MinimumVertexCover::new( SimpleGraph::new(4, vec![(0, 1), (1, 2), (2, 3)]), vec![1i32; 4], ); - let reduction: ReductionVCToILP = ReduceTo::<ILP>::reduce_to(&problem); - let ilp = reduction.target_problem(); + let (_, chain) = reduce_vc_to_ilp(&problem); + let ilp: &ILP<f64> = chain.target_problem(); let bf = BruteForce::new(); let ilp_solver = ILPSolver::new(); - - // Solve with brute force let bf_solutions = bf.find_all_best(&problem); - let bf_size: usize = bf_solutions[0].iter().sum(); - - // Solve via ILP let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); -
let extracted = reduction.extract_solution(&ilp_solution); - let ilp_size: usize = extracted.iter().sum(); + let extracted = chain.extract_solution(&ilp_solution); + let bf_size: usize = bf_solutions[0].iter().sum(); + let ilp_size: usize = extracted.iter().sum(); assert_eq!(bf_size, 2); assert_eq!(ilp_size, 2); - - // Verify validity assert!(problem.evaluate(&extracted).is_valid()); } #[test] -fn test_ilp_solution_equals_brute_force_weighted() { - // Weighted problem: vertex 1 has low weight and covers both edges - // 0 -- 1 -- 2 - // Weights: [100, 1, 100] - // Min VC by weight: just vertex 1 (weight 1) beats 0+2 (weight 200) +fn test_minimumvertexcover_to_ilp_via_path_weighted() { let problem = MinimumVertexCover::new(SimpleGraph::new(3, vec![(0, 1), (1, 2)]), vec![100, 1, 100]); - let reduction: ReductionVCToILP = ReduceTo::<ILP>::reduce_to(&problem); - let ilp = reduction.target_problem(); + let (_, chain) = reduce_vc_to_ilp(&problem); + let ilp: &ILP<f64> = chain.target_problem(); - let bf = BruteForce::new(); let ilp_solver = ILPSolver::new(); - - let bf_solutions = bf.find_all_best(&problem); - let bf_obj = problem.evaluate(&bf_solutions[0]); - let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); - let extracted = reduction.extract_solution(&ilp_solution); - let ilp_obj = problem.evaluate(&extracted); + let extracted = chain.extract_solution(&ilp_solution); - assert_eq!(bf_obj, SolutionSize::Valid(1)); - assert_eq!(ilp_obj, SolutionSize::Valid(1)); - - // Verify the solution selects vertex 1 + assert_eq!(problem.evaluate(&extracted), SolutionSize::Valid(1)); assert_eq!(extracted, vec![0, 1, 0]); } - -#[test] -fn test_solution_extraction() { - let problem = MinimumVertexCover::new(SimpleGraph::new(4, vec![(0, 1), (2, 3)]), vec![1i32; 4]); - let reduction: ReductionVCToILP = ReduceTo::<ILP>::reduce_to(&problem); - - // Test that extraction works correctly (1:1 mapping) - let ilp_solution = vec![1, 0, 0, 1]; - let extracted =
reduction.extract_solution(&ilp_solution); - assert_eq!(extracted, vec![1, 0, 0, 1]); - - // Verify this is a valid VC (covers edges 0-1 and 2-3) - assert!(problem.evaluate(&extracted).is_valid()); -} - -#[test] -fn test_ilp_structure() { - let problem = MinimumVertexCover::new( - SimpleGraph::new(5, vec![(0, 1), (1, 2), (2, 3), (3, 4)]), - vec![1i32; 5], - ); - let reduction: ReductionVCToILP = ReduceTo::<ILP>::reduce_to(&problem); - let ilp = reduction.target_problem(); - - assert_eq!(ilp.num_vars, 5); - assert_eq!(ilp.constraints.len(), 4); -} - -#[test] -fn test_empty_graph() { - // Graph with no edges: empty cover is valid - let problem = MinimumVertexCover::new(SimpleGraph::new(3, vec![]), vec![1i32; 3]); - let reduction: ReductionVCToILP = ReduceTo::<ILP>::reduce_to(&problem); - let ilp = reduction.target_problem(); - - assert_eq!(ilp.constraints.len(), 0); - - let ilp_solver = ILPSolver::new(); - let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); - let extracted = reduction.extract_solution(&ilp_solution); - - // No vertices should be selected - assert_eq!(extracted, vec![0, 0, 0]); - - assert!(problem.evaluate(&extracted).is_valid()); - assert_eq!(problem.evaluate(&extracted), SolutionSize::Valid(0)); -} - -#[test] -fn test_complete_graph() { - // Complete graph K4: min VC = 3 (all but one vertex) - let problem = MinimumVertexCover::new( - SimpleGraph::new(4, vec![(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]), - vec![1i32; 4], - ); - let reduction: ReductionVCToILP = ReduceTo::<ILP>::reduce_to(&problem); - let ilp = reduction.target_problem(); - - assert_eq!(ilp.constraints.len(), 6); - - let ilp_solver = ILPSolver::new(); - let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); - let extracted = reduction.extract_solution(&ilp_solution); - - assert!(problem.evaluate(&extracted).is_valid()); - assert_eq!(problem.evaluate(&extracted), SolutionSize::Valid(3)); -} - -#[test] -fn test_solve_reduced() { - // Test the
ILPSolver::solve_reduced method - let problem = MinimumVertexCover::new( - SimpleGraph::new(4, vec![(0, 1), (1, 2), (2, 3)]), - vec![1i32; 4], - ); - - let ilp_solver = ILPSolver::new(); - let solution = ilp_solver - .solve_reduced(&problem) - .expect("solve_reduced should work"); - - assert!(problem.evaluate(&solution).is_valid()); - assert_eq!(problem.evaluate(&solution), SolutionSize::Valid(2)); -} - -#[test] -fn test_bipartite_graph() { - // Bipartite graph: 0-2, 0-3, 1-2, 1-3 (complete bipartite K_{2,2}) - // Min VC = 2 (either side of the bipartition) - let problem = MinimumVertexCover::new( - SimpleGraph::new(4, vec![(0, 2), (0, 3), (1, 2), (1, 3)]), - vec![1i32; 4], - ); - let reduction: ReductionVCToILP = ReduceTo::<ILP>::reduce_to(&problem); - let ilp = reduction.target_problem(); - - let ilp_solver = ILPSolver::new(); - let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); - let extracted = reduction.extract_solution(&ilp_solution); - - assert!(problem.evaluate(&extracted).is_valid()); - assert_eq!(problem.evaluate(&extracted), SolutionSize::Valid(2)); - - // Should select either {0, 1} or {2, 3} - let sum: usize = extracted.iter().sum(); - assert_eq!(sum, 2); -} - -#[test] -fn test_single_edge() { - // Single edge: min VC = 1 - let problem = MinimumVertexCover::new(SimpleGraph::new(2, vec![(0, 1)]), vec![1i32; 2]); - let reduction: ReductionVCToILP = ReduceTo::<ILP>::reduce_to(&problem); - let ilp = reduction.target_problem(); - - let bf = BruteForce::new(); - let ilp_solver = ILPSolver::new(); - - let bf_solutions = bf.find_all_best(&problem); - let bf_size: usize = bf_solutions[0].iter().sum(); - - let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); - let extracted = reduction.extract_solution(&ilp_solution); - let ilp_size: usize = extracted.iter().sum(); - - assert_eq!(bf_size, 1); - assert_eq!(ilp_size, 1); -} - -#[test] -fn test_star_graph() { - // Star graph: center vertex 0 connected to all others - // Min VC =
1 (just the center) - let problem = MinimumVertexCover::new( - SimpleGraph::new(5, vec![(0, 1), (0, 2), (0, 3), (0, 4)]), - vec![1i32; 5], - ); - let reduction: ReductionVCToILP = ReduceTo::::reduce_to(&problem); - let ilp = reduction.target_problem(); - - let bf = BruteForce::new(); - let ilp_solver = ILPSolver::new(); - - let bf_solutions = bf.find_all_best(&problem); - let bf_size: usize = bf_solutions[0].iter().sum(); - - let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); - let extracted = reduction.extract_solution(&ilp_solution); - let ilp_size: usize = extracted.iter().sum(); - - assert_eq!(bf_size, 1); - assert_eq!(ilp_size, 1); - - // The optimal solution should select vertex 0 - assert_eq!(extracted[0], 1); -} diff --git a/src/unit_tests/rules/minimumvertexcover_qubo.rs b/src/unit_tests/rules/minimumvertexcover_qubo.rs index f9b75691d..95bdff9d6 100644 --- a/src/unit_tests/rules/minimumvertexcover_qubo.rs +++ b/src/unit_tests/rules/minimumvertexcover_qubo.rs @@ -1,78 +1,101 @@ -use super::*; -use crate::solvers::BruteForce; +use crate::models::algebraic::QUBO; +use crate::models::graph::MinimumVertexCover; +use crate::rules::{Minimize, ReductionChain, ReductionGraph, ReductionPath}; +use crate::solvers::{BruteForce, Solver}; +use crate::topology::{Graph, SimpleGraph}; use crate::traits::Problem; +use crate::types::{ProblemSize, SolutionSize}; + +fn reduce_vc_to_qubo( + problem: &MinimumVertexCover, +) -> (ReductionPath, ReductionChain) { + let graph = ReductionGraph::new(); + let src = ReductionGraph::variant_to_map(&MinimumVertexCover::::variant()); + let dst = ReductionGraph::variant_to_map(&QUBO::::variant()); + let path = graph + .find_cheapest_path( + "MinimumVertexCover", + &src, + "QUBO", + &dst, + &ProblemSize::new(vec![ + ("num_vertices", problem.graph().num_vertices()), + ("num_edges", problem.graph().num_edges()), + ]), + &Minimize("num_vars"), + ) + .expect("Should find path MinimumVertexCover -> QUBO"); + let chain = 
graph + .reduce_along_path(&path, problem as &dyn std::any::Any) + .expect("Should reduce MinimumVertexCover to QUBO along path"); + (path, chain) +} #[test] -fn test_vertexcovering_to_qubo_closed_loop() { - // Cycle C4: 0-1-2-3-0 (4 vertices, 4 edges) - // Minimum VC = 2 vertices (e.g., {0, 2} or {1, 3}) - let vc = MinimumVertexCover::new( +fn test_minimumvertexcover_to_qubo_via_path_closed_loop() { + let problem = MinimumVertexCover::new( SimpleGraph::new(4, vec![(0, 1), (1, 2), (2, 3), (0, 3)]), vec![1i32; 4], ); - let reduction = ReduceTo::>::reduce_to(&vc); - let qubo = reduction.target_problem(); + let (path, chain) = reduce_vc_to_qubo(&problem); + let qubo: &QUBO = chain.target_problem(); - let solver = BruteForce::new(); - let qubo_solutions = solver.find_all_best(qubo); - - for sol in &qubo_solutions { - let extracted = reduction.extract_solution(sol); - assert!(vc.evaluate(&extracted).is_valid()); - assert_eq!(extracted.iter().filter(|&&x| x == 1).count(), 2); - } -} - -#[test] -fn test_vertexcovering_to_qubo_triangle() { - // Triangle K3: minimum VC = 2 (any two vertices) - let vc = MinimumVertexCover::new( - SimpleGraph::new(3, vec![(0, 1), (1, 2), (0, 2)]), - vec![1i32; 3], + assert!( + path.len() > 1, + "Removed rule should be exercised through a multi-step path" + ); + assert_eq!( + path.type_names(), + vec![ + "MinimumVertexCover", + "MaximumIndependentSet", + "MaximumSetPacking", + "QUBO", + ] ); - let reduction = ReduceTo::>::reduce_to(&vc); - let qubo = reduction.target_problem(); + assert_eq!(qubo.num_variables(), 4); let solver = BruteForce::new(); let qubo_solutions = solver.find_all_best(qubo); - for sol in &qubo_solutions { - let extracted = reduction.extract_solution(sol); - assert!(vc.evaluate(&extracted).is_valid()); + let extracted = chain.extract_solution(sol); + assert!(problem.evaluate(&extracted).is_valid()); assert_eq!(extracted.iter().filter(|&&x| x == 1).count(), 2); } } #[test] -fn test_vertexcovering_to_qubo_star() { - // Star 
graph: center vertex 0 connected to 1, 2, 3 - // Minimum VC = {0} (just the center) - let vc = MinimumVertexCover::new( - SimpleGraph::new(4, vec![(0, 1), (0, 2), (0, 3)]), - vec![1i32; 4], - ); - let reduction = ReduceTo::>::reduce_to(&vc); - let qubo = reduction.target_problem(); +fn test_minimumvertexcover_to_qubo_via_path_weighted() { + let problem = + MinimumVertexCover::new(SimpleGraph::new(3, vec![(0, 1), (1, 2)]), vec![100, 1, 100]); + let (_, chain) = reduce_vc_to_qubo(&problem); + let qubo: &QUBO = chain.target_problem(); let solver = BruteForce::new(); - let qubo_solutions = solver.find_all_best(qubo); + let qubo_solution = solver + .find_best(qubo) + .expect("QUBO should be solvable via path"); + let extracted = chain.extract_solution(&qubo_solution); - for sol in &qubo_solutions { - let extracted = reduction.extract_solution(sol); - assert!(vc.evaluate(&extracted).is_valid()); - assert_eq!(extracted.iter().filter(|&&x| x == 1).count(), 1); - } + assert_eq!(problem.evaluate(&extracted), SolutionSize::Valid(1)); + assert_eq!(extracted, vec![0, 1, 0]); } #[test] -fn test_vertexcovering_to_qubo_structure() { - let vc = MinimumVertexCover::new( - SimpleGraph::new(4, vec![(0, 1), (1, 2), (2, 3), (0, 3)]), +fn test_minimumvertexcover_to_qubo_via_path_star_graph() { + let problem = MinimumVertexCover::new( + SimpleGraph::new(4, vec![(0, 1), (0, 2), (0, 3)]), vec![1i32; 4], ); - let reduction = ReduceTo::>::reduce_to(&vc); - let qubo = reduction.target_problem(); + let (_, chain) = reduce_vc_to_qubo(&problem); + let qubo: &QUBO = chain.target_problem(); - // QUBO should have same number of variables as vertices assert_eq!(qubo.num_variables(), 4); + + let solver = BruteForce::new(); + let qubo_solution = solver.find_best(qubo).expect("QUBO should be solvable"); + let extracted = chain.extract_solution(&qubo_solution); + + assert_eq!(problem.evaluate(&extracted), SolutionSize::Valid(1)); + assert_eq!(extracted.iter().filter(|&&x| x == 1).count(), 1); } diff 
--git a/src/unit_tests/rules/qubo_ilp.rs b/src/unit_tests/rules/qubo_ilp.rs index cea0abe83..2e58770f7 100644 --- a/src/unit_tests/rules/qubo_ilp.rs +++ b/src/unit_tests/rules/qubo_ilp.rs @@ -9,7 +9,7 @@ fn test_qubo_to_ilp_closed_loop() { // x=0,0 -> 0, x=1,0 -> 2, x=0,1 -> -3, x=1,1 -> 0 // Optimal: x = [0, 1] with obj = -3 let qubo = QUBO::from_matrix(vec![vec![2.0, 1.0], vec![0.0, -3.0]]); - let reduction = ReduceTo::::reduce_to(&qubo); + let reduction = ReduceTo::>::reduce_to(&qubo); let ilp = reduction.target_problem(); let solver = BruteForce::new(); @@ -28,7 +28,7 @@ fn test_qubo_to_ilp_diagonal_only() { // No quadratic terms: minimize 3*x0 - 2*x1 // Optimal: x = [0, 1] with obj = -2 let qubo = QUBO::from_matrix(vec![vec![3.0, 0.0], vec![0.0, -2.0]]); - let reduction = ReduceTo::::reduce_to(&qubo); + let reduction = ReduceTo::>::reduce_to(&qubo); let ilp = reduction.target_problem(); // No auxiliary variables when no off-diagonal terms @@ -50,7 +50,7 @@ fn test_qubo_to_ilp_3var() { vec![0.0, -1.0, 4.0], vec![0.0, 0.0, -1.0], ]); - let reduction = ReduceTo::::reduce_to(&qubo); + let reduction = ReduceTo::>::reduce_to(&qubo); let ilp = reduction.target_problem(); // 3 original + 2 auxiliary (for two off-diagonal terms) diff --git a/src/unit_tests/rules/reduction_path_parity.rs b/src/unit_tests/rules/reduction_path_parity.rs index f84244a11..976085471 100644 --- a/src/unit_tests/rules/reduction_path_parity.rs +++ b/src/unit_tests/rules/reduction_path_parity.rs @@ -158,10 +158,15 @@ fn test_jl_parity_factoring_to_spinglass_path() { ); // Solve Factoring directly via ILP (fast) and verify path solution extraction + use crate::models::algebraic::ILP; + use crate::rules::traits::{ReduceTo, ReductionResult}; let ilp_solver = ILPSolver::new(); - let factoring_solution = ilp_solver - .solve_reduced(&factoring) + let reduction = ReduceTo::>::reduce_to(&factoring); + let ilp = reduction.target_problem(); + let ilp_solution = ilp_solver + .solve(ilp) .expect("ILP solver 
should find factoring solution"); + let factoring_solution = reduction.extract_solution(&ilp_solution); let metric = factoring.evaluate(&factoring_solution); assert_eq!( metric.unwrap(), diff --git a/src/unit_tests/rules/travelingsalesman_ilp.rs b/src/unit_tests/rules/travelingsalesman_ilp.rs index 44db2e62f..acf626957 100644 --- a/src/unit_tests/rules/travelingsalesman_ilp.rs +++ b/src/unit_tests/rules/travelingsalesman_ilp.rs @@ -18,17 +18,12 @@ fn test_reduction_creates_valid_ilp_c4() { 4, vec![(0, 1), (1, 2), (2, 3), (3, 0)], )); - let reduction: ReductionTSPToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionTSPToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); // n=4, m=4: num_vars = 16 + 2*4*4 = 48 assert_eq!(ilp.num_vars, 48); assert_eq!(ilp.sense, ObjectiveSense::Minimize); - - // All variables should be binary - for bound in &ilp.bounds { - assert_eq!(*bound, VarBounds::binary()); - } } #[test] @@ -38,7 +33,7 @@ fn test_reduction_c4_closed_loop() { 4, vec![(0, 1), (1, 2), (2, 3), (3, 0)], )); - let reduction: ReductionTSPToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionTSPToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); let ilp_solver = ILPSolver::new(); @@ -57,7 +52,7 @@ fn test_reduction_k4_weighted_closed_loop() { let problem = k4_tsp(); // Solve via ILP reduction - let reduction: ReductionTSPToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionTSPToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); let ilp_solver = ILPSolver::new(); let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); @@ -84,7 +79,7 @@ fn test_reduction_c5_unweighted_closed_loop() { vec![(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)], )); - let reduction: ReductionTSPToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionTSPToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); let ilp_solver = 
ILPSolver::new(); let ilp_solution = ilp_solver.solve(ilp).expect("ILP should be solvable"); @@ -103,7 +98,7 @@ fn test_no_hamiltonian_cycle_infeasible() { vec![(0, 1), (1, 2), (2, 3)], )); - let reduction: ReductionTSPToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionTSPToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); let ilp_solver = ILPSolver::new(); let result = ilp_solver.solve(ilp); @@ -121,7 +116,7 @@ fn test_solution_extraction_structure() { 4, vec![(0, 1), (1, 2), (2, 3), (3, 0)], )); - let reduction: ReductionTSPToILP = ReduceTo::::reduce_to(&problem); + let reduction: ReductionTSPToILP = ReduceTo::>::reduce_to(&problem); let ilp = reduction.target_problem(); let ilp_solver = ILPSolver::new(); diff --git a/src/unit_tests/solvers/ilp/solver.rs b/src/unit_tests/solvers/ilp/solver.rs index 20ff5ec2c..f4da81fa8 100644 --- a/src/unit_tests/solvers/ilp/solver.rs +++ b/src/unit_tests/solvers/ilp/solver.rs @@ -1,12 +1,12 @@ use super::*; -use crate::models::algebraic::{LinearConstraint, VarBounds}; +use crate::models::algebraic::LinearConstraint; use crate::solvers::BruteForce; use crate::traits::Problem; #[test] fn test_ilp_solver_basic_maximize() { // Maximize x0 + 2*x1 subject to x0 + x1 <= 1, binary vars - let ilp = ILP::binary( + let ilp = ILP::::new( 2, vec![LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 1.0)], vec![(0, 1.0), (1, 2.0)], @@ -30,7 +30,7 @@ fn test_ilp_solver_basic_maximize() { #[test] fn test_ilp_solver_basic_minimize() { // Minimize x0 + x1 subject to x0 + x1 >= 1, binary vars - let ilp = ILP::binary( + let ilp = ILP::::new( 2, vec![LinearConstraint::ge(vec![(0, 1.0), (1, 1.0)], 1.0)], vec![(0, 1.0), (1, 1.0)], @@ -56,7 +56,7 @@ fn test_ilp_solver_matches_brute_force() { // Maximize x0 + x1 + x2 subject to: // x0 + x1 <= 1 // x1 + x2 <= 1 - let ilp = ILP::binary( + let ilp = ILP::::new( 3, vec![ LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 1.0), @@ -83,7 +83,7 @@ fn 
test_ilp_solver_matches_brute_force() { #[test] fn test_ilp_empty_problem() { - let ilp = ILP::empty(); + let ilp = ILP::::empty(); let solver = ILPSolver::new(); let solution = solver.solve(&ilp); assert_eq!(solution, Some(vec![])); @@ -92,7 +92,7 @@ fn test_ilp_empty_problem() { #[test] fn test_ilp_equality_constraint() { // Minimize x0 subject to x0 + x1 == 1, binary vars - let ilp = ILP::binary( + let ilp = ILP::::new( 2, vec![LinearConstraint::eq(vec![(0, 1.0), (1, 1.0)], 1.0)], vec![(0, 1.0)], @@ -113,10 +113,14 @@ fn test_ilp_non_binary_bounds() { // Variables with larger ranges // x0 in [0, 3], x1 in [0, 2] // Maximize x0 + x1 subject to x0 + x1 <= 4 - let ilp = ILP::new( + // Use ILP:: with explicit upper-bound constraints + let ilp = ILP::::new( 2, - vec![VarBounds::bounded(0, 3), VarBounds::bounded(0, 2)], - vec![LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 4.0)], + vec![ + LinearConstraint::le(vec![(0, 1.0)], 3.0), + LinearConstraint::le(vec![(1, 1.0)], 2.0), + LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 4.0), + ], vec![(0, 1.0), (1, 1.0)], ObjectiveSense::Maximize, ); @@ -132,14 +136,16 @@ fn test_ilp_non_binary_bounds() { } #[test] -fn test_ilp_negative_lower_bounds() { - // Variables with negative lower bounds - // x0 in [-2, 2], x1 in [-1, 1] - // Maximize x0 + x1 (no constraints) - let ilp = ILP::new( +fn test_ilp_integer_upper_bounds() { + // Variables with upper bounds (non-negative integers) + // x0 in [0, 4], x1 in [0, 2] + // Maximize x0 + x1 (with explicit upper-bound constraints) + let ilp = ILP::::new( 2, - vec![VarBounds::bounded(-2, 2), VarBounds::bounded(-1, 1)], - vec![], + vec![ + LinearConstraint::le(vec![(0, 1.0)], 4.0), + LinearConstraint::le(vec![(1, 1.0)], 2.0), + ], vec![(0, 1.0), (1, 1.0)], ObjectiveSense::Maximize, ); @@ -149,17 +155,20 @@ fn test_ilp_negative_lower_bounds() { let result = ilp.evaluate(&solution); assert!(result.is_valid()); - // Optimal: x0=2, x1=1 => objective = 3 - assert!((result.unwrap() - 3.0).abs() 
< 1e-9); + // Optimal: x0=4, x1=2 => objective = 6 + assert!((result.unwrap() - 6.0).abs() < 1e-9); } #[test] fn test_ilp_config_to_values_roundtrip() { // Ensure the config encoding/decoding works correctly - let ilp = ILP::new( + // x0 in [0, 5], x1 in [0, 3], maximize x0 + x1 + let ilp = ILP::::new( 2, - vec![VarBounds::bounded(-2, 2), VarBounds::bounded(1, 3)], - vec![], + vec![ + LinearConstraint::le(vec![(0, 1.0)], 5.0), + LinearConstraint::le(vec![(1, 1.0)], 3.0), + ], vec![(0, 1.0), (1, 1.0)], ObjectiveSense::Maximize, ); @@ -170,8 +179,8 @@ fn test_ilp_config_to_values_roundtrip() { // The solution should be valid let result = ilp.evaluate(&solution); assert!(result.is_valid()); - // Optimal: x0=2, x1=3 => objective = 5 - assert!((result.unwrap() - 5.0).abs() < 1e-9); + // Optimal: x0=5, x1=3 => objective = 8 + assert!((result.unwrap() - 8.0).abs() < 1e-9); } #[test] @@ -180,7 +189,7 @@ fn test_ilp_multiple_constraints() { // x0 + x1 + x2 <= 2 // x0 + x1 >= 1 // Binary vars - let ilp = ILP::binary( + let ilp = ILP::::new( 3, vec![ LinearConstraint::le(vec![(0, 1.0), (1, 1.0), (2, 1.0)], 2.0), @@ -210,7 +219,7 @@ fn test_ilp_multiple_constraints() { #[test] fn test_ilp_unconstrained() { // Maximize x0 + x1, no constraints, binary vars - let ilp = ILP::binary( + let ilp = ILP::::new( 2, vec![], vec![(0, 1.0), (1, 1.0)], @@ -232,7 +241,7 @@ fn test_ilp_with_time_limit() { assert_eq!(solver.time_limit, Some(10.0)); // Should still work for simple problems - let ilp = ILP::binary( + let ilp = ILP::::new( 2, vec![LinearConstraint::le(vec![(0, 1.0), (1, 1.0)], 1.0)], vec![(0, 1.0), (1, 1.0)], diff --git a/src/unit_tests/topology/hypergraph.rs b/src/unit_tests/topology/hypergraph.rs deleted file mode 100644 index 69e70d547..000000000 --- a/src/unit_tests/topology/hypergraph.rs +++ /dev/null @@ -1,109 +0,0 @@ -use super::*; - -#[test] -fn test_hypergraph_basic() { - let hg = HyperGraph::new(4, vec![vec![0, 1, 2], vec![2, 3]]); - assert_eq!(hg.num_vertices(), 4); - 
assert_eq!(hg.num_edges(), 2); -} - -#[test] -fn test_hypergraph_empty() { - let hg = HyperGraph::empty(5); - assert_eq!(hg.num_vertices(), 5); - assert_eq!(hg.num_edges(), 0); -} - -#[test] -fn test_hypergraph_neighbors() { - let hg = HyperGraph::new(4, vec![vec![0, 1, 2], vec![2, 3]]); - let neighbors = hg.neighbors(2); - assert!(neighbors.contains(&0)); - assert!(neighbors.contains(&1)); - assert!(neighbors.contains(&3)); - assert!(!neighbors.contains(&2)); // Not its own neighbor -} - -#[test] -fn test_hypergraph_has_edge() { - let hg = HyperGraph::new(4, vec![vec![0, 1, 2]]); - assert!(hg.has_edge(&[0, 1, 2])); - assert!(hg.has_edge(&[2, 1, 0])); // Order doesn't matter - assert!(!hg.has_edge(&[0, 1])); - assert!(!hg.has_edge(&[0, 1, 3])); -} - -#[test] -fn test_hypergraph_degree() { - let hg = HyperGraph::new(4, vec![vec![0, 1, 2], vec![2, 3]]); - assert_eq!(hg.degree(0), 1); - assert_eq!(hg.degree(2), 2); - assert_eq!(hg.degree(3), 1); -} - -#[test] -fn test_hypergraph_edges_containing() { - let hg = HyperGraph::new(4, vec![vec![0, 1, 2], vec![2, 3]]); - let edges = hg.edges_containing(2); - assert_eq!(edges.len(), 2); -} - -#[test] -fn test_hypergraph_add_edge() { - let mut hg = HyperGraph::empty(4); - hg.add_edge(vec![0, 1]); - hg.add_edge(vec![1, 2, 3]); - assert_eq!(hg.num_edges(), 2); -} - -#[test] -fn test_hypergraph_max_edge_size() { - let hg = HyperGraph::new(4, vec![vec![0, 1], vec![0, 1, 2, 3]]); - assert_eq!(hg.max_edge_size(), 4); -} - -#[test] -fn test_hypergraph_is_regular_graph() { - let regular = HyperGraph::new(3, vec![vec![0, 1], vec![1, 2]]); - assert!(regular.is_regular_graph()); - - let not_regular = HyperGraph::new(4, vec![vec![0, 1, 2]]); - assert!(!not_regular.is_regular_graph()); -} - -#[test] -fn test_hypergraph_to_graph_edges() { - let hg = HyperGraph::new(3, vec![vec![0, 1], vec![1, 2]]); - let edges = hg.to_graph_edges(); - assert!(edges.is_some()); - let edges = edges.unwrap(); - assert_eq!(edges.len(), 2); -} - -#[test] -fn 
test_hypergraph_to_graph_edges_not_regular() { - // Hypergraph with a hyperedge of size 3 (not a regular graph) - let hg = HyperGraph::new(4, vec![vec![0, 1, 2]]); - assert!(hg.to_graph_edges().is_none()); -} - -#[test] -fn test_hypergraph_get_edge() { - let hg = HyperGraph::new(4, vec![vec![0, 1, 2], vec![2, 3]]); - assert_eq!(hg.edge(0), Some(&vec![0, 1, 2])); - assert_eq!(hg.edge(1), Some(&vec![2, 3])); - assert_eq!(hg.edge(2), None); -} - -#[test] -#[should_panic(expected = "vertex index 5 out of bounds")] -fn test_hypergraph_invalid_vertex() { - HyperGraph::new(4, vec![vec![0, 5]]); -} - -#[test] -#[should_panic(expected = "vertex index 4 out of bounds")] -fn test_hypergraph_add_invalid_edge() { - let mut hg = HyperGraph::empty(4); - hg.add_edge(vec![0, 4]); -} diff --git a/src/unit_tests/unitdiskmapping_algorithms/common.rs b/src/unit_tests/unitdiskmapping_algorithms/common.rs index 53e99169b..2a6cd4f5a 100644 --- a/src/unit_tests/unitdiskmapping_algorithms/common.rs +++ b/src/unit_tests/unitdiskmapping_algorithms/common.rs @@ -1,11 +1,28 @@ //! Common test utilities for mapping tests. use crate::models::algebraic::{LinearConstraint, ObjectiveSense, ILP}; -use crate::models::MaximumIndependentSet; use crate::rules::unitdiskmapping::MappingResult; -use crate::rules::{ReduceTo, ReductionResult}; use crate::solvers::ILPSolver; -use crate::topology::SimpleGraph; + +fn build_mis_ilp(num_vertices: usize, edges: &[(usize, usize)], weights: &[i32]) -> ILP { + let constraints: Vec = edges + .iter() + .map(|&(i, j)| LinearConstraint::le(vec![(i, 1.0), (j, 1.0)], 1.0)) + .collect(); + + let objective: Vec<(usize, f64)> = weights + .iter() + .enumerate() + .map(|(i, &w)| (i, w as f64)) + .collect(); + + ILP::::new( + num_vertices, + constraints, + objective, + ObjectiveSense::Maximize, + ) +} /// Check if a configuration is a valid independent set. 
pub fn is_independent_set(edges: &[(usize, usize)], config: &[usize]) -> bool { @@ -20,13 +37,10 @@ pub fn is_independent_set(edges: &[(usize, usize)], config: &[usize]) -> bool { /// Solve maximum independent set using ILP. /// Returns the size of the MIS. pub fn solve_mis(num_vertices: usize, edges: &[(usize, usize)]) -> usize { - let problem = MaximumIndependentSet::new( - SimpleGraph::new(num_vertices, edges.to_vec()), - vec![1i32; num_vertices], - ); - let reduction = as ReduceTo>::reduce_to(&problem); + let weights = vec![1; num_vertices]; + let ilp = build_mis_ilp(num_vertices, edges, &weights); let solver = ILPSolver::new(); - if let Some(solution) = solver.solve(reduction.target_problem()) { + if let Some(solution) = solver.solve(&ilp) { solution.iter().filter(|&&x| x > 0).count() } else { 0 @@ -35,13 +49,10 @@ pub fn solve_mis(num_vertices: usize, edges: &[(usize, usize)]) -> usize { /// Solve MIS and return the binary configuration. pub fn solve_mis_config(num_vertices: usize, edges: &[(usize, usize)]) -> Vec { - let problem = MaximumIndependentSet::new( - SimpleGraph::new(num_vertices, edges.to_vec()), - vec![1i32; num_vertices], - ); - let reduction = as ReduceTo>::reduce_to(&problem); + let weights = vec![1; num_vertices]; + let ilp = build_mis_ilp(num_vertices, edges, &weights); let solver = ILPSolver::new(); - if let Some(solution) = solver.solve(reduction.target_problem()) { + if let Some(solution) = solver.solve(&ilp) { solution .iter() .map(|&x| if x > 0 { 1 } else { 0 }) @@ -75,24 +86,7 @@ pub fn solve_weighted_grid_mis(result: &MappingResult) -> usize { /// Solve weighted MIS on a graph using ILP. /// Returns the maximum weighted independent set value. 
pub fn solve_weighted_mis(num_vertices: usize, edges: &[(usize, usize)], weights: &[i32]) -> i32 { - let constraints: Vec = edges - .iter() - .map(|&(i, j)| LinearConstraint::le(vec![(i, 1.0), (j, 1.0)], 1.0)) - .collect(); - - let objective: Vec<(usize, f64)> = weights - .iter() - .enumerate() - .map(|(i, &w)| (i, w as f64)) - .collect(); - - let ilp = ILP::binary( - num_vertices, - constraints, - objective, - ObjectiveSense::Maximize, - ); - + let ilp = build_mis_ilp(num_vertices, edges, weights); let solver = ILPSolver::new(); if let Some(solution) = solver.solve(&ilp) { solution @@ -112,23 +106,7 @@ pub fn solve_weighted_mis_config( edges: &[(usize, usize)], weights: &[i32], ) -> Vec { - let constraints: Vec = edges - .iter() - .map(|&(i, j)| LinearConstraint::le(vec![(i, 1.0), (j, 1.0)], 1.0)) - .collect(); - - let objective: Vec<(usize, f64)> = weights - .iter() - .enumerate() - .map(|(i, &w)| (i, w as f64)) - .collect(); - - let ilp = ILP::binary( - num_vertices, - constraints, - objective, - ObjectiveSense::Maximize, - ); + let ilp = build_mis_ilp(num_vertices, edges, weights); let solver = ILPSolver::new(); if let Some(solution) = solver.solve(&ilp) { diff --git a/src/unit_tests/unitdiskmapping_algorithms/weighted.rs b/src/unit_tests/unitdiskmapping_algorithms/weighted.rs index 917a7977b..62ec282d4 100644 --- a/src/unit_tests/unitdiskmapping_algorithms/weighted.rs +++ b/src/unit_tests/unitdiskmapping_algorithms/weighted.rs @@ -710,7 +710,7 @@ fn test_weighted_map_config_back_standard_graphs() { .map(|(i, &w)| (i, w)) .collect(); - let ilp = ILP::binary(num_grid, constraints, objective, ObjectiveSense::Maximize); + let ilp = ILP::::new(num_grid, constraints, objective, ObjectiveSense::Maximize); let solver = ILPSolver::new(); let grid_config: Vec = solver .solve(&ilp) diff --git a/src/unit_tests/variant.rs b/src/unit_tests/variant.rs index 137533247..740f2fbf4 100644 --- a/src/unit_tests/variant.rs +++ b/src/unit_tests/variant.rs @@ -243,28 +243,13 @@ fn 
test_kvalue_kn() { // --- Graph type VariantParam tests --- -use crate::topology::HyperGraph; use crate::topology::{BipartiteGraph, Graph, PlanarGraph, SimpleGraph, UnitDiskGraph}; #[test] fn test_simple_graph_variant_param() { assert_eq!(SimpleGraph::CATEGORY, "graph"); assert_eq!(SimpleGraph::VALUE, "SimpleGraph"); - assert_eq!(SimpleGraph::PARENT_VALUE, Some("HyperGraph")); -} - -#[test] -fn test_unit_disk_graph_variant_param() { - assert_eq!(UnitDiskGraph::CATEGORY, "graph"); - assert_eq!(UnitDiskGraph::VALUE, "UnitDiskGraph"); - assert_eq!(UnitDiskGraph::PARENT_VALUE, Some("SimpleGraph")); -} - -#[test] -fn test_hyper_graph_variant_param() { - assert_eq!(HyperGraph::CATEGORY, "graph"); - assert_eq!(HyperGraph::VALUE, "HyperGraph"); - assert_eq!(HyperGraph::PARENT_VALUE, None); + assert_eq!(SimpleGraph::PARENT_VALUE, None); } #[test] @@ -282,11 +267,10 @@ fn test_bipartite_graph_variant_param() { } #[test] -fn test_simple_graph_cast_to_parent() { - let sg = SimpleGraph::new(3, vec![(0, 1), (1, 2)]); - let hg: HyperGraph = sg.cast_to_parent(); - assert_eq!(hg.num_vertices(), 3); - assert_eq!(hg.num_edges(), 2); +fn test_unit_disk_graph_variant_param() { + assert_eq!(UnitDiskGraph::CATEGORY, "graph"); + assert_eq!(UnitDiskGraph::VALUE, "UnitDiskGraph"); + assert_eq!(UnitDiskGraph::PARENT_VALUE, Some("SimpleGraph")); } #[test] diff --git a/tests/suites/reductions.rs b/tests/suites/reductions.rs index daad3adc8..66e81ae6e 100644 --- a/tests/suites/reductions.rs +++ b/tests/suites/reductions.rs @@ -5,6 +5,7 @@ use problemreductions::models::algebraic::{LinearConstraint, ObjectiveSense, ILP}; use problemreductions::prelude::*; +use problemreductions::rules::{Minimize, ReductionGraph}; use problemreductions::topology::{Graph, SimpleGraph}; use problemreductions::variant::{K2, K3}; @@ -381,16 +382,11 @@ mod sg_maxcut_reductions { /// Tests for topology types integration. 
mod topology_tests { use super::*; - use problemreductions::topology::{HyperGraph, UnitDiskGraph}; + use problemreductions::topology::UnitDiskGraph; #[test] - fn test_hypergraph_to_setpacking() { - // HyperGraph can be seen as a MaximumSetPacking problem - let hg = HyperGraph::new(5, vec![vec![0, 1, 2], vec![2, 3], vec![3, 4]]); - - // Convert hyperedges to sets for MaximumSetPacking - let sets: Vec> = hg.edges().to_vec(); - let sp = MaximumSetPacking::::new(sets); + fn test_setpacking_from_hyperedge_style_input() { + let sp = MaximumSetPacking::::new(vec![vec![0, 1, 2], vec![2, 3], vec![3, 4]]); let solver = BruteForce::new(); let solutions = solver.find_all_best(&sp); @@ -458,8 +454,27 @@ mod qubo_reductions { let n = data.source.num_vertices; let is = MaximumIndependentSet::new(SimpleGraph::new(n, data.source.edges), vec![1i32; n]); - let reduction = ReduceTo::::reduce_to(&is); - let qubo = reduction.target_problem(); + let graph = ReductionGraph::new(); + let src = + ReductionGraph::variant_to_map(&MaximumIndependentSet::::variant()); + let dst = ReductionGraph::variant_to_map(&QUBO::::variant()); + let path = graph + .find_cheapest_path( + "MaximumIndependentSet", + &src, + "QUBO", + &dst, + &ProblemSize::new(vec![ + ("num_vertices", n), + ("num_edges", is.graph().num_edges()), + ]), + &Minimize("num_vars"), + ) + .expect("Should find path MaximumIndependentSet -> QUBO"); + let chain = graph + .reduce_along_path(&path, &is as &dyn std::any::Any) + .expect("Should reduce MaximumIndependentSet to QUBO"); + let qubo: &QUBO = chain.target_problem(); assert_eq!(qubo.num_variables(), data.qubo_num_vars); @@ -468,56 +483,16 @@ mod qubo_reductions { // All QUBO optimal solutions should extract to valid IS solutions for sol in &solutions { - let extracted = reduction.extract_solution(sol); + let extracted = chain.extract_solution(sol); assert!(is.evaluate(&extracted).is_valid()); } // Optimal IS size should match ground truth let gt_is_size: usize = 
data.qubo_optimal.configs[0].iter().sum(); - let our_is_size: usize = reduction.extract_solution(&solutions[0]).iter().sum(); + let our_is_size: usize = chain.extract_solution(&solutions[0]).iter().sum(); assert_eq!(our_is_size, gt_is_size); } - #[derive(Deserialize)] - struct VCToQuboData { - source: VCSource, - qubo_num_vars: usize, - qubo_optimal: QuboOptimal, - } - - #[derive(Deserialize)] - struct VCSource { - num_vertices: usize, - edges: Vec<(usize, usize)>, - } - - #[test] - fn test_vc_to_qubo_ground_truth() { - let json = - std::fs::read_to_string("tests/data/qubo/minimumvertexcover_to_qubo.json").unwrap(); - let data: VCToQuboData = serde_json::from_str(&json).unwrap(); - - let n = data.source.num_vertices; - let vc = MinimumVertexCover::new(SimpleGraph::new(n, data.source.edges), vec![1i32; n]); - let reduction = ReduceTo::::reduce_to(&vc); - let qubo = reduction.target_problem(); - - assert_eq!(qubo.num_variables(), data.qubo_num_vars); - - let solver = BruteForce::new(); - let solutions = solver.find_all_best(qubo); - - for sol in &solutions { - let extracted = reduction.extract_solution(sol); - assert!(vc.evaluate(&extracted).is_valid()); - } - - // Optimal VC size should match ground truth - let gt_vc_size: usize = data.qubo_optimal.configs[0].iter().sum(); - let our_vc_size: usize = reduction.extract_solution(&solutions[0]).iter().sum(); - assert_eq!(our_vc_size, gt_vc_size); - } - #[derive(Deserialize)] struct ColoringToQuboData { source: ColoringSource, @@ -722,7 +697,7 @@ mod qubo_reductions { .collect(); // The qubogen formula maximizes, so this is a Maximize ILP - let ilp = ILP::binary( + let ilp = ILP::::new( data.source.num_variables, constraints, objective, @@ -747,6 +722,76 @@ mod qubo_reductions { let our_config = reduction.extract_solution(&solutions[0]); assert_eq!(&our_config, gt_config); } + + #[derive(Deserialize)] + struct VCToQuboData { + source: VCSource, + qubo_optimal: QuboOptimal, + } + + #[derive(Deserialize)] + struct VCSource 
{ + num_vertices: usize, + edges: Vec<(usize, usize)>, + } + + #[test] + fn test_vc_to_qubo_ground_truth() { + let json = + std::fs::read_to_string("tests/data/qubo/minimumvertexcover_to_qubo.json").unwrap(); + let data: VCToQuboData = serde_json::from_str(&json).unwrap(); + + let n = data.source.num_vertices; + let vc = MinimumVertexCover::new(SimpleGraph::new(n, data.source.edges), vec![1i32; n]); + + // Find path MVC → ... → QUBO through the reduction graph + let graph = ReductionGraph::new(); + let src = + ReductionGraph::variant_to_map(&MinimumVertexCover::::variant()); + let dst = ReductionGraph::variant_to_map(&QUBO::::variant()); + let path = graph + .find_cheapest_path( + "MinimumVertexCover", + &src, + "QUBO", + &dst, + &ProblemSize::new(vec![ + ("num_vertices", n), + ("num_edges", vc.graph().num_edges()), + ]), + &Minimize("num_vars"), + ) + .expect("Should find path MVC -> QUBO"); + assert_eq!( + path.type_names(), + vec![ + "MinimumVertexCover", + "MaximumIndependentSet", + "MaximumSetPacking", + "QUBO" + ] + ); + + let chain = graph + .reduce_along_path(&path, &vc as &dyn std::any::Any) + .expect("Should reduce MVC to QUBO"); + let qubo: &QUBO = chain.target_problem(); + + let solver = BruteForce::new(); + let solutions = solver.find_all_best(qubo); + + // Extract back through the full chain to get VC solution + for sol in &solutions { + let vc_sol = chain.extract_solution(sol); + assert!(vc.evaluate(&vc_sol).is_valid()); + } + + // Optimal VC size should match ground truth + let vc_sol = chain.extract_solution(&solutions[0]); + let gt_vc_size: usize = data.qubo_optimal.configs[0].iter().sum(); + let our_vc_size: usize = vc_sol.iter().sum(); + assert_eq!(our_vc_size, gt_vc_size); + } } /// Tests for File I/O with reductions.