From 6f5fc56b69f280bf2ee81200b8b1542e47b5b4f0 Mon Sep 17 00:00:00 2001 From: GiggleLiu Date: Wed, 25 Feb 2026 23:21:23 +0800 Subject: [PATCH 01/15] docs: add overhead system redesign design (issue #61) Macro-first dual emission approach: compile-time parsed expression strings emit both Rust getter-calling code and symbolic Expr AST for composition/export. Co-Authored-By: Claude Opus 4.6 --- .../2026-02-25-overhead-system-design.md | 138 ++++++++++++++++++ 1 file changed, 138 insertions(+) create mode 100644 docs/plans/2026-02-25-overhead-system-design.md diff --git a/docs/plans/2026-02-25-overhead-system-design.md b/docs/plans/2026-02-25-overhead-system-design.md new file mode 100644 index 000000000..731d78211 --- /dev/null +++ b/docs/plans/2026-02-25-overhead-system-design.md @@ -0,0 +1,138 @@ +# Overhead System Redesign + +**Issue:** #61 — Introduce overhead system +**Date:** 2026-02-25 +**Approach:** Macro-first dual emission + +## Summary + +Replace the current `Polynomial`-based overhead system with a general `Expr` AST, compile-time macro-parsed expression strings, and per-problem inherent getters. The proc macro emits both compiled Rust code (for evaluation + compiler validation) and symbolic `Expr` AST literals (for composition + export). + +## Motivation + +Three pain points with the current system: +1. **Ergonomics** — `problem_size_names()`/`problem_size_values()` parallel arrays are awkward; `poly!` macro is verbose +2. **Correctness** — variable name mismatches between overhead expressions and problem size fields are caught only at runtime +3. **Simplification** — `Polynomial` only supports sums of monomials; general math (exp, log) requires a new representation anyway + +## Design + +### 1. Expression AST (`Expr`) + +Replaces `Polynomial` and `Monomial` with a general math expression tree. 
+
```rust
// src/expr.rs (replaces src/polynomial.rs)

#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub enum Expr {
    Const(f64),
    Var(&'static str),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
    Pow(Box<Expr>, Box<Expr>),
    Exp(Box<Expr>),
    Log(Box<Expr>),
    Sqrt(Box<Expr>),
}
```

Key operations:
- `eval(&self, vars: &ProblemSize) -> f64`
- `substitute(&self, mapping: &HashMap<&str, &Expr>) -> Expr`
- `variables(&self) -> HashSet<&'static str>`
- `is_polynomial(&self) -> bool`
- `degree(&self) -> Option<usize>`
- `Display` for human-readable formulas
- `simplify(&self) -> Expr` — minimal constant folding

### 2. Problem Getters

Remove `problem_size_names()` and `problem_size_values()` from the `Problem` trait. Each problem type implements inherent getter methods instead.

```rust
// Before: trait methods returning parallel arrays
impl Problem for MaximumIndependentSet {
    fn problem_size_names() -> &'static [&'static str] { &["num_vertices", "num_edges"] }
    fn problem_size_values(&self) -> Vec<usize> {
        vec![self.graph().num_vertices(), self.graph().num_edges()]
    }
}

// After: inherent methods — natural, compiler-checked, IDE-friendly
impl MaximumIndependentSet {
    pub fn num_vertices(&self) -> usize { self.graph().num_vertices() }
    pub fn num_edges(&self) -> usize { self.graph().num_edges() }
}
```

### 3. Proc Macro — Dual Emission

The `#[reduction]` macro parses expression strings at compile time and emits two outputs.

User-facing syntax:
```rust
#[reduction(overhead = {
    num_vars = "num_vertices",
    num_constraints = "num_edges + num_vertices^2",
})]
impl ReduceTo<ILP> for MaximumIndependentSet { ... }
```

Macro emits:
1. **Compiled evaluation function** — `src.num_vertices()`, `src.num_edges()` calls. Compiler catches missing getters.
2. **Symbolic Expr AST** — `Expr::Add(...)` construction for composition/export.
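To make the dual emission concrete, here is a stdlib-only sketch of the two artifacts the macro would produce for `num_constraints = "num_edges + num_vertices^2"`. The `Mis` struct, the trimmed-down `Expr`, and the plain `HashMap` standing in for `ProblemSize` are illustrative assumptions, not the crate's real types:

```rust
use std::collections::HashMap;

// Simplified stand-ins for illustration; the real crate uses the full
// `Expr` from src/expr.rs and `ProblemSize` instead of a HashMap.
#[derive(Clone, Debug)]
enum Expr {
    Const(f64),
    Var(&'static str),
    Add(Box<Expr>, Box<Expr>),
    Pow(Box<Expr>, Box<Expr>),
}

impl Expr {
    fn eval(&self, vars: &HashMap<&str, f64>) -> f64 {
        match self {
            Expr::Const(c) => *c,
            Expr::Var(name) => *vars.get(name).unwrap_or(&0.0),
            Expr::Add(a, b) => a.eval(vars) + b.eval(vars),
            Expr::Pow(a, b) => a.eval(vars).powf(b.eval(vars)),
        }
    }
}

// A hypothetical source problem exposing the inherent getters the macro targets.
struct Mis { n: usize, m: usize }
impl Mis {
    fn num_vertices(&self) -> usize { self.n }
    fn num_edges(&self) -> usize { self.m }
}

fn main() {
    // Emission 1 (compiled): direct getter calls, type-checked by the compiler.
    let compiled = |src: &Mis| src.num_edges() as f64 + (src.num_vertices() as f64).powf(2.0);

    // Emission 2 (symbolic): the same formula as an Expr AST.
    let symbolic = Expr::Add(
        Box::new(Expr::Var("num_edges")),
        Box::new(Expr::Pow(Box::new(Expr::Var("num_vertices")), Box::new(Expr::Const(2.0)))),
    );

    let src = Mis { n: 4, m: 3 };
    let vars = HashMap::from([("num_vertices", 4.0), ("num_edges", 3.0)]);
    // Both paths agree: 3 + 4^2 = 19.
    assert_eq!(compiled(&src), 19.0);
    assert_eq!(symbolic.eval(&vars), 19.0);
}
```

The compiled closure fails to compile if a getter is missing; the symbolic value survives type erasure for graph traversal and export.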
+ +Expression grammar (Pratt parser, ~200 LOC in proc macro crate): +``` +expr = term (('+' | '-') term)* +term = factor (('*' | '/') factor)* +factor = base ('^' factor)? +base = NUMBER | IDENT | func_call | '(' expr ')' +func_call = ('exp' | 'log' | 'sqrt') '(' expr ')' +``` + +### 4. Updated `ReductionOverhead` and `ReductionEntry` + +```rust +pub struct ReductionOverhead { + pub output_size: Vec<(&'static str, Expr)>, // Expr replaces Polynomial +} + +pub struct ReductionEntry { + // ...existing fields... + pub overhead_fn: fn() -> ReductionOverhead, // symbolic (composition/export) + pub overhead_eval_fn: fn(&dyn Any) -> ProblemSize, // compiled (evaluation) + // REMOVED: source_size_names_fn, target_size_names_fn +} +``` + +`PathCostFn` uses the symbolic `ReductionOverhead` (via `Expr::eval`) since it operates on type-erased `ProblemSize` during graph traversal. + +### 5. Export Pipeline + +JSON format gains both structured AST and display string: +```json +{ + "overhead": [{ + "field": "num_vars", + "expr": {"Pow": [{"Var": "num_vertices"}, {"Const": 2.0}]}, + "formula": "num_vertices^2" + }] +} +``` + +The paper reads `formula` strings — no Typst code changes needed. + +## Migration Strategy + +| Phase | Description | Files | Risk | +|-------|-------------|-------|------| +| 1 | Add `Expr` type alongside `Polynomial` | 2-3 new | Low (additive) | +| 2 | Update proc macro with Pratt parser, support new syntax | 1 file | Medium | +| 3 | Add inherent getters to all problem types | ~15 model files | Low (additive) | +| 4 | Migrate all reductions to new syntax | ~20 rule files | Low (mechanical) | +| 5 | Remove deprecated APIs (`problem_size_*`, `Polynomial`, `poly!`) | ~10 files | Medium (breaking) | +| 6 | Update documentation and regenerate exports | 3-4 files | Low | + +Phases 1-3 are purely additive. Phase 4 is bulk migration. Phase 5 is cleanup. 
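As an illustration of why the symbolic side matters for composition: chaining reductions A → B → C amounts to substituting B's size expressions into C's overhead, yielding A → C overhead directly. A stdlib-only sketch, using a trimmed-down `Expr` in place of the real one and a `HashMap` in place of `ProblemSize` (both assumptions for illustration):

```rust
use std::collections::HashMap;

// Minimal Expr subset for illustration; the real type lives in src/expr.rs.
#[derive(Clone, Debug)]
enum Expr {
    Const(f64),
    Var(&'static str),
    Add(Box<Expr>, Box<Expr>),
    Pow(Box<Expr>, Box<Expr>),
}

impl Expr {
    fn eval(&self, vars: &HashMap<&'static str, f64>) -> f64 {
        match self {
            Expr::Const(c) => *c,
            Expr::Var(name) => *vars.get(name).unwrap_or(&0.0),
            Expr::Add(a, b) => a.eval(vars) + b.eval(vars),
            Expr::Pow(a, b) => a.eval(vars).powf(b.eval(vars)),
        }
    }

    // Replace variables with other expressions (mirrors Expr::substitute).
    fn substitute(&self, map: &HashMap<&'static str, Expr>) -> Expr {
        match self {
            Expr::Const(c) => Expr::Const(*c),
            Expr::Var(name) => map.get(name).cloned().unwrap_or(Expr::Var(*name)),
            Expr::Add(a, b) => Expr::Add(Box::new(a.substitute(map)), Box::new(b.substitute(map))),
            Expr::Pow(a, b) => Expr::Pow(Box::new(a.substitute(map)), Box::new(b.substitute(map))),
        }
    }
}

fn main() {
    // Step A -> B: B's num_vars = num_vertices + num_edges (of A).
    let b_num_vars = Expr::Add(
        Box::new(Expr::Var("num_vertices")),
        Box::new(Expr::Var("num_edges")),
    );
    // Step B -> C: C's num_vars = num_vars^2 (of B).
    let c_num_vars = Expr::Pow(Box::new(Expr::Var("num_vars")), Box::new(Expr::Const(2.0)));

    // Composition A -> C: substitute B's expression into C's.
    let composed = c_num_vars.substitute(&HashMap::from([("num_vars", b_num_vars)]));

    let a_size = HashMap::from([("num_vertices", 3.0), ("num_edges", 2.0)]);
    assert_eq!(composed.eval(&a_size), 25.0); // (3 + 2)^2
}
```

The old `Polynomial` could not express this closure under substitution once non-polynomial terms (exp, log) enter the chain, which is the simplification pain point above.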
From 78233598b12d2ef88ec5dbfaee34e86d68e0a614 Mon Sep 17 00:00:00 2001 From: GiggleLiu Date: Wed, 25 Feb 2026 23:28:19 +0800 Subject: [PATCH 02/15] docs: add overhead system implementation plan (14 tasks, 6 phases) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Detailed TDD-style implementation plan for issue #61. Phases: Expr type → macro parser → getters → migration → cleanup → docs. Co-Authored-By: Claude Opus 4.6 --- docs/plans/2026-02-25-overhead-system-impl.md | 1289 +++++++++++++++++ 1 file changed, 1289 insertions(+) create mode 100644 docs/plans/2026-02-25-overhead-system-impl.md diff --git a/docs/plans/2026-02-25-overhead-system-impl.md b/docs/plans/2026-02-25-overhead-system-impl.md new file mode 100644 index 000000000..853759ad6 --- /dev/null +++ b/docs/plans/2026-02-25-overhead-system-impl.md @@ -0,0 +1,1289 @@ +# Overhead System Redesign Implementation Plan + +> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task. + +**Goal:** Replace the `Polynomial`-based overhead system with a general `Expr` AST, compile-time macro-parsed expression strings, and per-problem inherent getters. + +**Architecture:** The `#[reduction]` proc macro parses expression strings at compile time and emits both compiled Rust getter-calling code (for evaluation + compiler validation) and symbolic `Expr` AST literals (for composition + export). Problems provide inherent getter methods instead of trait-level `problem_size_names()`/`problem_size_values()`. 
+ +**Tech Stack:** Rust proc macros (syn/quote), Pratt parser, serde, inventory + +--- + +## Phase 1: Add `Expr` type (additive, no breaking changes) + +### Task 1: Create `Expr` enum and basic operations + +**Files:** +- Create: `src/expr.rs` +- Create: `src/unit_tests/expr.rs` +- Modify: `src/lib.rs` (add module) + +**Step 1: Write failing tests for Expr construction and evaluation** + +Create `src/unit_tests/expr.rs`: +```rust +use super::*; +use crate::types::ProblemSize; + +#[test] +fn test_expr_const_eval() { + let e = Expr::Const(42.0); + let size = ProblemSize::new(vec![]); + assert_eq!(e.eval(&size), 42.0); +} + +#[test] +fn test_expr_var_eval() { + let e = Expr::Var("n"); + let size = ProblemSize::new(vec![("n", 10)]); + assert_eq!(e.eval(&size), 10.0); +} + +#[test] +fn test_expr_add_eval() { + // n + 3 + let e = Expr::add(Expr::Var("n"), Expr::Const(3.0)); + let size = ProblemSize::new(vec![("n", 7)]); + assert_eq!(e.eval(&size), 10.0); +} + +#[test] +fn test_expr_mul_eval() { + // 3 * n + let e = Expr::mul(Expr::Const(3.0), Expr::Var("n")); + let size = ProblemSize::new(vec![("n", 5)]); + assert_eq!(e.eval(&size), 15.0); +} + +#[test] +fn test_expr_pow_eval() { + // n^2 + let e = Expr::pow(Expr::Var("n"), Expr::Const(2.0)); + let size = ProblemSize::new(vec![("n", 4)]); + assert_eq!(e.eval(&size), 16.0); +} + +#[test] +fn test_expr_exp_eval() { + let e = Expr::Exp(Box::new(Expr::Const(1.0))); + let size = ProblemSize::new(vec![]); + assert!((e.eval(&size) - std::f64::consts::E).abs() < 1e-10); +} + +#[test] +fn test_expr_log_eval() { + let e = Expr::Log(Box::new(Expr::Const(std::f64::consts::E))); + let size = ProblemSize::new(vec![]); + assert!((e.eval(&size) - 1.0).abs() < 1e-10); +} + +#[test] +fn test_expr_sqrt_eval() { + let e = Expr::Sqrt(Box::new(Expr::Const(9.0))); + let size = ProblemSize::new(vec![]); + assert_eq!(e.eval(&size), 3.0); +} + +#[test] +fn test_expr_complex() { + // n^2 + 3*m + let e = Expr::add( + Expr::pow(Expr::Var("n"), 
Expr::Const(2.0)),
        Expr::mul(Expr::Const(3.0), Expr::Var("m")),
    );
    let size = ProblemSize::new(vec![("n", 4), ("m", 2)]);
    assert_eq!(e.eval(&size), 22.0); // 16 + 6
}
```

**Step 2: Run tests to verify they fail**

Run: `make test` (or `cargo test expr`)
Expected: compilation errors — `Expr` type doesn't exist yet.

**Step 3: Implement `Expr` enum with eval**

Create `src/expr.rs`:
```rust
//! General symbolic expression AST for reduction overhead.

use crate::types::ProblemSize;
use std::collections::{HashMap, HashSet};
use std::fmt;

/// A symbolic math expression over problem size variables.
#[derive(Clone, Debug, PartialEq, serde::Serialize, serde::Deserialize)]
pub enum Expr {
    /// Numeric constant.
    Const(f64),
    /// Named variable (e.g., "num_vertices").
    Var(&'static str),
    /// Addition: a + b.
    Add(Box<Expr>, Box<Expr>),
    /// Multiplication: a * b.
    Mul(Box<Expr>, Box<Expr>),
    /// Exponentiation: base ^ exponent.
    Pow(Box<Expr>, Box<Expr>),
    /// Exponential function: exp(a).
    Exp(Box<Expr>),
    /// Natural logarithm: log(a).
    Log(Box<Expr>),
    /// Square root: sqrt(a).
    Sqrt(Box<Expr>),
}

impl Expr {
    /// Convenience constructors (avoid Box::new noise).
    pub fn add(a: Expr, b: Expr) -> Self {
        Expr::Add(Box::new(a), Box::new(b))
    }
    pub fn mul(a: Expr, b: Expr) -> Self {
        Expr::Mul(Box::new(a), Box::new(b))
    }
    pub fn pow(base: Expr, exp: Expr) -> Self {
        Expr::Pow(Box::new(base), Box::new(exp))
    }

    /// Evaluate the expression given concrete variable values.
+ pub fn eval(&self, vars: &ProblemSize) -> f64 { + match self { + Expr::Const(c) => *c, + Expr::Var(name) => vars.get(name).unwrap_or(0) as f64, + Expr::Add(a, b) => a.eval(vars) + b.eval(vars), + Expr::Mul(a, b) => a.eval(vars) * b.eval(vars), + Expr::Pow(base, exp) => base.eval(vars).powf(exp.eval(vars)), + Expr::Exp(a) => a.eval(vars).exp(), + Expr::Log(a) => a.eval(vars).ln(), + Expr::Sqrt(a) => a.eval(vars).sqrt(), + } + } +} + +#[cfg(test)] +#[path = "unit_tests/expr.rs"] +mod tests; +``` + +Add to `src/lib.rs`: +```rust +pub(crate) mod expr; +``` + +**Step 4: Run tests to verify they pass** + +Run: `cargo test expr` +Expected: all tests pass. + +**Step 5: Commit** + +```bash +git add src/expr.rs src/unit_tests/expr.rs src/lib.rs +git commit -m "feat: add Expr AST type with eval (phase 1 of overhead redesign)" +``` + +--- + +### Task 2: Add `variables()`, `substitute()`, and `Display` to `Expr` + +**Files:** +- Modify: `src/expr.rs` +- Modify: `src/unit_tests/expr.rs` + +**Step 1: Write failing tests** + +Append to `src/unit_tests/expr.rs`: +```rust +#[test] +fn test_expr_variables() { + let e = Expr::add( + Expr::pow(Expr::Var("n"), Expr::Const(2.0)), + Expr::mul(Expr::Const(3.0), Expr::Var("m")), + ); + let vars = e.variables(); + assert_eq!(vars, HashSet::from(["n", "m"])); +} + +#[test] +fn test_expr_substitute() { + // n^2, substitute n → (a + b) + let e = Expr::pow(Expr::Var("n"), Expr::Const(2.0)); + let replacement = Expr::add(Expr::Var("a"), Expr::Var("b")); + let mut mapping = HashMap::new(); + mapping.insert("n", &replacement); + let result = e.substitute(&mapping); + // Should be (a + b)^2 + let size = ProblemSize::new(vec![("a", 3), ("b", 2)]); + assert_eq!(result.eval(&size), 25.0); // (3+2)^2 +} + +#[test] +fn test_expr_display_simple() { + assert_eq!(format!("{}", Expr::Const(5.0)), "5"); + assert_eq!(format!("{}", Expr::Var("n")), "n"); +} + +#[test] +fn test_expr_display_add() { + let e = Expr::add(Expr::Var("n"), Expr::Const(3.0)); + 
assert_eq!(format!("{e}"), "n + 3"); +} + +#[test] +fn test_expr_display_mul() { + let e = Expr::mul(Expr::Const(3.0), Expr::Var("n")); + assert_eq!(format!("{e}"), "3 * n"); +} + +#[test] +fn test_expr_display_pow() { + let e = Expr::pow(Expr::Var("n"), Expr::Const(2.0)); + assert_eq!(format!("{e}"), "n^2"); +} + +#[test] +fn test_expr_display_exp() { + let e = Expr::Exp(Box::new(Expr::Var("n"))); + assert_eq!(format!("{e}"), "exp(n)"); +} + +#[test] +fn test_expr_display_nested() { + // n^2 + 3 * m + let e = Expr::add( + Expr::pow(Expr::Var("n"), Expr::Const(2.0)), + Expr::mul(Expr::Const(3.0), Expr::Var("m")), + ); + assert_eq!(format!("{e}"), "n^2 + 3 * m"); +} +``` + +**Step 2: Run tests to verify they fail** + +Run: `cargo test expr` +Expected: FAIL — `variables()`, `substitute()`, `Display` not implemented. + +**Step 3: Implement the methods** + +Add to `src/expr.rs`: +```rust +impl Expr { + // ... existing methods ... + + /// Collect all variable names referenced in this expression. + pub fn variables(&self) -> HashSet<&'static str> { + let mut vars = HashSet::new(); + self.collect_variables(&mut vars); + vars + } + + fn collect_variables(&self, vars: &mut HashSet<&'static str>) { + match self { + Expr::Const(_) => {} + Expr::Var(name) => { vars.insert(name); } + Expr::Add(a, b) | Expr::Mul(a, b) | Expr::Pow(a, b) => { + a.collect_variables(vars); + b.collect_variables(vars); + } + Expr::Exp(a) | Expr::Log(a) | Expr::Sqrt(a) => { + a.collect_variables(vars); + } + } + } + + /// Substitute variables with other expressions. 
+ pub fn substitute(&self, mapping: &HashMap<&str, &Expr>) -> Expr { + match self { + Expr::Const(c) => Expr::Const(*c), + Expr::Var(name) => { + if let Some(replacement) = mapping.get(name) { + (*replacement).clone() + } else { + Expr::Var(name) + } + } + Expr::Add(a, b) => Expr::add(a.substitute(mapping), b.substitute(mapping)), + Expr::Mul(a, b) => Expr::mul(a.substitute(mapping), b.substitute(mapping)), + Expr::Pow(a, b) => Expr::pow(a.substitute(mapping), b.substitute(mapping)), + Expr::Exp(a) => Expr::Exp(Box::new(a.substitute(mapping))), + Expr::Log(a) => Expr::Log(Box::new(a.substitute(mapping))), + Expr::Sqrt(a) => Expr::Sqrt(Box::new(a.substitute(mapping))), + } + } + + /// Check if this expression is a polynomial (no exp/log/sqrt, integer exponents only). + pub fn is_polynomial(&self) -> bool { + match self { + Expr::Const(_) | Expr::Var(_) => true, + Expr::Add(a, b) | Expr::Mul(a, b) => a.is_polynomial() && b.is_polynomial(), + Expr::Pow(base, exp) => { + base.is_polynomial() && matches!(exp.as_ref(), Expr::Const(c) if *c >= 0.0 && (*c - c.round()).abs() < 1e-10) + } + Expr::Exp(_) | Expr::Log(_) | Expr::Sqrt(_) => false, + } + } +} + +impl fmt::Display for Expr { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + match self { + Expr::Const(c) => { + let ci = c.round() as i64; + if (*c - ci as f64).abs() < 1e-10 { + write!(f, "{ci}") + } else { + write!(f, "{c}") + } + } + Expr::Var(name) => write!(f, "{name}"), + Expr::Add(a, b) => write!(f, "{a} + {b}"), + Expr::Mul(a, b) => { + // Parenthesize additions inside multiplication + let left = if matches!(a.as_ref(), Expr::Add(_, _)) { + format!("({a})") + } else { + format!("{a}") + }; + let right = if matches!(b.as_ref(), Expr::Add(_, _)) { + format!("({b})") + } else { + format!("{b}") + }; + write!(f, "{left} * {right}") + } + Expr::Pow(base, exp) => { + let base_str = if matches!(base.as_ref(), Expr::Add(_, _) | Expr::Mul(_, _)) { + format!("({base})") + } else { + format!("{base}") + }; + 
write!(f, "{base_str}^{exp}") + } + Expr::Exp(a) => write!(f, "exp({a})"), + Expr::Log(a) => write!(f, "log({a})"), + Expr::Sqrt(a) => write!(f, "sqrt({a})"), + } + } +} +``` + +**Step 4: Run tests to verify they pass** + +Run: `cargo test expr` +Expected: all tests pass. + +**Step 5: Commit** + +```bash +git add src/expr.rs src/unit_tests/expr.rs +git commit -m "feat: add variables, substitute, Display to Expr" +``` + +--- + +## Phase 2: Proc macro expression parser + +### Task 3: Add Pratt parser to the proc macro crate + +**Files:** +- Create: `problemreductions-macros/src/parser.rs` +- Create: `problemreductions-macros/tests/parse_tests.rs` +- Modify: `problemreductions-macros/src/lib.rs` (add module) + +The parser operates on `&str` (the contents of the string literal from the macro attribute) and produces a token stream that constructs `Expr` values. + +**Step 1: Write failing parser tests** + +Create `problemreductions-macros/tests/parse_tests.rs`: +```rust +use problemreductions_macros::__parse_overhead_expr; + +// We'll expose a helper proc macro for testing that takes a string +// and outputs the Expr construction code. This is tested by compilation. + +// For now, test the parser module directly via unit tests inside the crate. +``` + +Since proc macro crates can't be tested with normal `#[test]` easily for internal parse logic, add unit tests inside the module. + +Create `problemreductions-macros/src/parser.rs`: +```rust +//! Pratt parser for overhead expression strings. +//! +//! Parses expressions like: +//! - "num_vertices" +//! - "num_vertices^2" +//! - "num_edges + num_vertices^2" +//! - "3 * num_vertices" +//! - "exp(num_vertices^2)" +//! - "sqrt(num_edges)" +//! +//! Grammar: +//! expr = term (('+' | '-') term)* +//! term = factor (('*' | '/') factor)* +//! factor = unary ('^' factor)? // right-associative +//! unary = '-' unary | primary +//! primary = NUMBER | IDENT | func_call | '(' expr ')' +//! 
func_call = ('exp' | 'log' | 'sqrt') '(' expr ')'

use proc_macro2::TokenStream;
use quote::quote;

/// Parsed expression node (intermediate representation before codegen).
#[derive(Debug, Clone, PartialEq)]
pub enum ParsedExpr {
    Const(f64),
    Var(String),
    Add(Box<ParsedExpr>, Box<ParsedExpr>),
    Sub(Box<ParsedExpr>, Box<ParsedExpr>),
    Mul(Box<ParsedExpr>, Box<ParsedExpr>),
    Div(Box<ParsedExpr>, Box<ParsedExpr>),
    Pow(Box<ParsedExpr>, Box<ParsedExpr>),
    Neg(Box<ParsedExpr>),
    Exp(Box<ParsedExpr>),
    Log(Box<ParsedExpr>),
    Sqrt(Box<ParsedExpr>),
}

// ... tokenizer and parser implementation ...
// (detailed in Step 3)
```

**Step 2: Implement tokenizer**

Tokens needed: `Number(f64)`, `Ident(String)`, `Plus`, `Minus`, `Star`, `Slash`, `Caret`, `LParen`, `RParen`.

```rust
#[derive(Debug, Clone, PartialEq)]
enum Token {
    Number(f64),
    Ident(String),
    Plus,
    Minus,
    Star,
    Slash,
    Caret,
    LParen,
    RParen,
}

fn tokenize(input: &str) -> Result<Vec<Token>, String> {
    let mut tokens = Vec::new();
    let mut chars = input.chars().peekable();
    while let Some(&ch) = chars.peek() {
        match ch {
            ' ' | '\t' | '\n' => { chars.next(); }
            '+' => { chars.next(); tokens.push(Token::Plus); }
            '-' => { chars.next(); tokens.push(Token::Minus); }
            '*' => { chars.next(); tokens.push(Token::Star); }
            '/' => { chars.next(); tokens.push(Token::Slash); }
            '^' => { chars.next(); tokens.push(Token::Caret); }
            '(' => { chars.next(); tokens.push(Token::LParen); }
            ')' => { chars.next(); tokens.push(Token::RParen); }
            c if c.is_ascii_digit() || c == '.' => {
                let mut num = String::new();
                while let Some(&c) = chars.peek() {
                    if c.is_ascii_digit() || c == '.'
{ num.push(c); chars.next(); }
                    else { break; }
                }
                let val: f64 = num.parse().map_err(|_| format!("invalid number: {num}"))?;
                tokens.push(Token::Number(val));
            }
            c if c.is_ascii_alphabetic() || c == '_' => {
                let mut ident = String::new();
                while let Some(&c) = chars.peek() {
                    if c.is_ascii_alphanumeric() || c == '_' { ident.push(c); chars.next(); }
                    else { break; }
                }
                tokens.push(Token::Ident(ident));
            }
            _ => return Err(format!("unexpected character: '{ch}'")),
        }
    }
    Ok(tokens)
}
```

**Step 3: Implement Pratt parser**

```rust
struct Parser {
    tokens: Vec<Token>,
    pos: usize,
}

impl Parser {
    fn new(tokens: Vec<Token>) -> Self { Self { tokens, pos: 0 } }
    fn peek(&self) -> Option<&Token> { self.tokens.get(self.pos) }
    fn advance(&mut self) -> Option<Token> {
        let tok = self.tokens.get(self.pos).cloned();
        self.pos += 1;
        tok
    }
    fn expect(&mut self, expected: &Token) -> Result<(), String> {
        match self.advance() {
            Some(ref tok) if tok == expected => Ok(()),
            Some(tok) => Err(format!("expected {expected:?}, got {tok:?}")),
            None => Err(format!("expected {expected:?}, got end of input")),
        }
    }

    fn parse_expr(&mut self) -> Result<ParsedExpr, String> {
        let mut left = self.parse_term()?;
        while matches!(self.peek(), Some(Token::Plus) | Some(Token::Minus)) {
            let op = self.advance().unwrap();
            let right = self.parse_term()?;
            left = match op {
                Token::Plus => ParsedExpr::Add(Box::new(left), Box::new(right)),
                Token::Minus => ParsedExpr::Sub(Box::new(left), Box::new(right)),
                _ => unreachable!(),
            };
        }
        Ok(left)
    }

    fn parse_term(&mut self) -> Result<ParsedExpr, String> {
        let mut left = self.parse_factor()?;
        while matches!(self.peek(), Some(Token::Star) | Some(Token::Slash)) {
            let op = self.advance().unwrap();
            let right = self.parse_factor()?;
            left = match op {
                Token::Star => ParsedExpr::Mul(Box::new(left), Box::new(right)),
                Token::Slash => ParsedExpr::Div(Box::new(left), Box::new(right)),
                _ => unreachable!(),
            };
        }
        Ok(left)
    }

    fn
parse_factor(&mut self) -> Result<ParsedExpr, String> {
        let base = self.parse_unary()?;
        if matches!(self.peek(), Some(Token::Caret)) {
            self.advance();
            let exp = self.parse_factor()?; // right-associative
            Ok(ParsedExpr::Pow(Box::new(base), Box::new(exp)))
        } else {
            Ok(base)
        }
    }

    fn parse_unary(&mut self) -> Result<ParsedExpr, String> {
        if matches!(self.peek(), Some(Token::Minus)) {
            self.advance();
            let expr = self.parse_unary()?;
            Ok(ParsedExpr::Neg(Box::new(expr)))
        } else {
            self.parse_primary()
        }
    }

    fn parse_primary(&mut self) -> Result<ParsedExpr, String> {
        match self.advance() {
            Some(Token::Number(n)) => Ok(ParsedExpr::Const(n)),
            Some(Token::Ident(name)) => {
                // Check for function call: exp(...), log(...), sqrt(...)
                if matches!(self.peek(), Some(Token::LParen)) {
                    self.advance(); // consume '('
                    let arg = self.parse_expr()?;
                    self.expect(&Token::RParen)?;
                    match name.as_str() {
                        "exp" => Ok(ParsedExpr::Exp(Box::new(arg))),
                        "log" => Ok(ParsedExpr::Log(Box::new(arg))),
                        "sqrt" => Ok(ParsedExpr::Sqrt(Box::new(arg))),
                        _ => Err(format!("unknown function: {name}")),
                    }
                } else {
                    Ok(ParsedExpr::Var(name))
                }
            }
            Some(Token::LParen) => {
                let expr = self.parse_expr()?;
                self.expect(&Token::RParen)?;
                Ok(expr)
            }
            Some(tok) => Err(format!("unexpected token: {tok:?}")),
            None => Err("unexpected end of input".to_string()),
        }
    }
}

/// Parse an expression string into a ParsedExpr.
pub fn parse_expr(input: &str) -> Result<ParsedExpr, String> {
    let tokens = tokenize(input)?;
    let mut parser = Parser::new(tokens);
    let expr = parser.parse_expr()?;
    if parser.pos != parser.tokens.len() {
        return Err(format!("unexpected trailing tokens at position {}", parser.pos));
    }
    Ok(expr)
}
```

**Step 4: Add codegen functions**

Two codegen functions — one produces `Expr` AST construction code, the other produces Rust evaluation code that calls getters.

```rust
impl ParsedExpr {
    /// Generate TokenStream that constructs an `Expr` value.
+ pub fn to_expr_tokens(&self) -> TokenStream { + match self { + ParsedExpr::Const(c) => quote! { crate::expr::Expr::Const(#c) }, + ParsedExpr::Var(name) => quote! { crate::expr::Expr::Var(#name) }, + ParsedExpr::Add(a, b) => { + let a = a.to_expr_tokens(); + let b = b.to_expr_tokens(); + quote! { crate::expr::Expr::add(#a, #b) } + } + ParsedExpr::Sub(a, b) => { + let a = a.to_expr_tokens(); + let b = b.to_expr_tokens(); + quote! { crate::expr::Expr::add(#a, crate::expr::Expr::mul(crate::expr::Expr::Const(-1.0), #b)) } + } + ParsedExpr::Mul(a, b) => { + let a = a.to_expr_tokens(); + let b = b.to_expr_tokens(); + quote! { crate::expr::Expr::mul(#a, #b) } + } + ParsedExpr::Div(a, b) => { + let a = a.to_expr_tokens(); + let b = b.to_expr_tokens(); + quote! { crate::expr::Expr::mul(#a, crate::expr::Expr::pow(#b, crate::expr::Expr::Const(-1.0))) } + } + ParsedExpr::Pow(base, exp) => { + let base = base.to_expr_tokens(); + let exp = exp.to_expr_tokens(); + quote! { crate::expr::Expr::pow(#base, #exp) } + } + ParsedExpr::Neg(a) => { + let a = a.to_expr_tokens(); + quote! { crate::expr::Expr::mul(crate::expr::Expr::Const(-1.0), #a) } + } + ParsedExpr::Exp(a) => { + let a = a.to_expr_tokens(); + quote! { crate::expr::Expr::Exp(Box::new(#a)) } + } + ParsedExpr::Log(a) => { + let a = a.to_expr_tokens(); + quote! { crate::expr::Expr::Log(Box::new(#a)) } + } + ParsedExpr::Sqrt(a) => { + let a = a.to_expr_tokens(); + quote! { crate::expr::Expr::Sqrt(Box::new(#a)) } + } + } + } + + /// Generate TokenStream that evaluates the expression by calling getter methods + /// on a source variable `src`. + pub fn to_eval_tokens(&self, src_ident: &syn::Ident) -> TokenStream { + match self { + ParsedExpr::Const(c) => quote! { (#c as f64) }, + ParsedExpr::Var(name) => { + let getter = syn::Ident::new(name, proc_macro2::Span::call_site()); + quote! 
{ (#src_ident.#getter() as f64) } + } + ParsedExpr::Add(a, b) => { + let a = a.to_eval_tokens(src_ident); + let b = b.to_eval_tokens(src_ident); + quote! { (#a + #b) } + } + ParsedExpr::Sub(a, b) => { + let a = a.to_eval_tokens(src_ident); + let b = b.to_eval_tokens(src_ident); + quote! { (#a - #b) } + } + ParsedExpr::Mul(a, b) => { + let a = a.to_eval_tokens(src_ident); + let b = b.to_eval_tokens(src_ident); + quote! { (#a * #b) } + } + ParsedExpr::Div(a, b) => { + let a = a.to_eval_tokens(src_ident); + let b = b.to_eval_tokens(src_ident); + quote! { (#a / #b) } + } + ParsedExpr::Pow(base, exp) => { + let base = base.to_eval_tokens(src_ident); + let exp = exp.to_eval_tokens(src_ident); + quote! { f64::powf(#base, #exp) } + } + ParsedExpr::Neg(a) => { + let a = a.to_eval_tokens(src_ident); + quote! { (-(#a)) } + } + ParsedExpr::Exp(a) => { + let a = a.to_eval_tokens(src_ident); + quote! { f64::exp(#a) } + } + ParsedExpr::Log(a) => { + let a = a.to_eval_tokens(src_ident); + quote! { f64::ln(#a) } + } + ParsedExpr::Sqrt(a) => { + let a = a.to_eval_tokens(src_ident); + quote! { f64::sqrt(#a) } + } + } + } + + /// Collect all variable names in the expression. 
pub fn variables(&self) -> Vec<String> {
        let mut vars = Vec::new();
        self.collect_vars(&mut vars);
        vars.sort();
        vars.dedup();
        vars
    }

    fn collect_vars(&self, vars: &mut Vec<String>) {
        match self {
            ParsedExpr::Const(_) => {}
            ParsedExpr::Var(name) => vars.push(name.clone()),
            ParsedExpr::Add(a, b) | ParsedExpr::Sub(a, b)
            | ParsedExpr::Mul(a, b) | ParsedExpr::Div(a, b)
            | ParsedExpr::Pow(a, b) => {
                a.collect_vars(vars);
                b.collect_vars(vars);
            }
            ParsedExpr::Neg(a) | ParsedExpr::Exp(a) | ParsedExpr::Log(a) | ParsedExpr::Sqrt(a) => {
                a.collect_vars(vars);
            }
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_parse_var() {
        assert_eq!(parse_expr("num_vertices").unwrap(), ParsedExpr::Var("num_vertices".into()));
    }

    #[test]
    fn test_parse_const() {
        assert_eq!(parse_expr("42").unwrap(), ParsedExpr::Const(42.0));
    }

    #[test]
    fn test_parse_pow() {
        let e = parse_expr("n^2").unwrap();
        assert_eq!(e, ParsedExpr::Pow(
            Box::new(ParsedExpr::Var("n".into())),
            Box::new(ParsedExpr::Const(2.0)),
        ));
    }

    #[test]
    fn test_parse_add_mul() {
        // n + 3 * m → n + (3*m)
        let e = parse_expr("n + 3 * m").unwrap();
        assert_eq!(e, ParsedExpr::Add(
            Box::new(ParsedExpr::Var("n".into())),
            Box::new(ParsedExpr::Mul(
                Box::new(ParsedExpr::Const(3.0)),
                Box::new(ParsedExpr::Var("m".into())),
            )),
        ));
    }

    #[test]
    fn test_parse_exp() {
        let e = parse_expr("exp(n^2)").unwrap();
        assert_eq!(e, ParsedExpr::Exp(Box::new(ParsedExpr::Pow(
            Box::new(ParsedExpr::Var("n".into())),
            Box::new(ParsedExpr::Const(2.0)),
        ))));
    }

    #[test]
    fn test_parse_complex() {
        // 3 * n^2 + exp(m) — should parse correctly
        let e = parse_expr("3 * n^2 + exp(m)").unwrap();
        assert!(matches!(e, ParsedExpr::Add(_, _)));
    }

    #[test]
    fn test_parse_parens() {
        let e = parse_expr("(n + m)^2").unwrap();
        assert!(matches!(e, ParsedExpr::Pow(_, _)));
    }

    #[test]
    fn test_variables() {
        let e = parse_expr("n^2 + 3 * m +
exp(k)").unwrap(); + assert_eq!(e.variables(), vec!["k", "m", "n"]); + } +} +``` + +**Step 5: Run tests** + +Run: `cargo test -p problemreductions-macros` +Expected: all parser tests pass. + +**Step 6: Commit** + +```bash +git add problemreductions-macros/src/parser.rs +git commit -m "feat: add Pratt expression parser to proc macro crate" +``` + +--- + +### Task 4: Update `#[reduction]` macro to support new syntax + +**Files:** +- Modify: `problemreductions-macros/src/lib.rs` + +The macro should support **both** old syntax (for backwards compatibility during migration) and new syntax: + +Old: `overhead = { ReductionOverhead::new(vec![...]) }` +New: `overhead = { num_vars = "num_vertices^2", num_constraints = "num_edges" }` + +Detection: if the content starts with an identifier followed by `=` and a string literal, it's new syntax. Otherwise, treat the braced content as raw token stream (old syntax). + +**Step 1: Update `ReductionAttrs` parsing** + +Add a new variant to represent parsed overhead fields: +```rust +enum OverheadSpec { + /// Old syntax: raw token stream (ReductionOverhead::new(...)) + Legacy(TokenStream2), + /// New syntax: list of (field_name, expression_string) pairs + Parsed(Vec<(String, String)>), +} +``` + +Update `ReductionAttrs` to store `OverheadSpec` and the parsing logic to detect which format is used. + +**Step 2: Update `generate_reduction_entry` to emit dual code** + +For `OverheadSpec::Parsed`, use the parser from Task 3: +- Parse each expression string at compile time +- Emit `overhead_fn` that constructs `ReductionOverhead` with `Expr` AST +- Emit `overhead_eval_fn` that calls getters on the concrete source type +- Report parse errors as compile errors with `syn::Error` + +For `OverheadSpec::Legacy`, emit the old behavior unchanged. + +**Step 3: Update `ReductionEntry` to include the new field** + +This requires modifying `src/rules/registry.rs` to add `overhead_eval_fn`. 
For now, legacy reductions pass a no-op eval fn:
```rust
pub overhead_eval_fn: Option<fn(&dyn Any) -> ProblemSize>,
```

Using `Option` allows legacy code to work with `None` while new syntax populates `Some(...)`.

**Step 4: Test with one reduction**

Pick a simple reduction (e.g., `maximumindependentset_qubo.rs`) and convert it to new syntax as a proof:

Before:
```rust
#[reduction(
    overhead = { ReductionOverhead::new(vec![("num_vars", poly!(num_vertices))]) }
)]
```

After:
```rust
#[reduction(overhead = {
    num_vars = "num_vertices",
})]
```

Run: `cargo test maximumindependentset_qubo`
Expected: passes (both old overhead_fn and new eval paths work).

**Step 5: Commit**

```bash
git add problemreductions-macros/src/lib.rs src/rules/registry.rs
git commit -m "feat: support new overhead expression syntax in #[reduction] macro"
```

---

## Phase 3: Add inherent getters to all problem types

### Task 5: Add getters to graph problem types

**Files:**
- Modify: `src/models/graph/maximum_independent_set.rs`
- Modify: `src/models/graph/minimum_vertex_cover.rs`
- Modify: `src/models/graph/maximum_clique.rs`
- Modify: `src/models/graph/maximum_matching.rs`
- Modify: `src/models/graph/max_cut.rs`
- Modify: `src/models/graph/maximal_is.rs`
- Modify: `src/models/graph/minimum_dominating_set.rs`
- Modify: `src/models/graph/kcoloring.rs`
- Modify: `src/models/graph/traveling_salesman.rs`

For each graph problem that has `problem_size_names = ["num_vertices", "num_edges"]`, add inherent getters. Most already have a `graph()` accessor, so the getters are trivial:

```rust
impl MaximumIndependentSet {
    pub fn num_vertices(&self) -> usize { self.graph().num_vertices() }
    pub fn num_edges(&self) -> usize { self.graph().num_edges() }
}
```

Check each problem's `problem_size_values()` to see what getters are needed — some problems may have additional fields.
For example: +- Most graph problems: `num_vertices`, `num_edges` +- KColoring: `num_vertices`, `num_edges` (same) +- TravelingSalesman: check the actual fields + +**Step 1: Add getters to all 9 graph problem files** + +Read each file's `problem_size_values()` to determine exact getters needed. Add inherent `impl` blocks with `pub fn` getters. If a getter already exists as a public method, skip it. + +**Step 2: Run tests** + +Run: `cargo test` +Expected: all existing tests pass (getters are additive). + +**Step 3: Commit** + +```bash +git add src/models/graph/ +git commit -m "feat: add inherent size getters to graph problem types" +``` + +--- + +### Task 6: Add getters to remaining problem types + +**Files:** +- Modify: `src/models/satisfiability/sat.rs` +- Modify: `src/models/satisfiability/ksat.rs` +- Modify: `src/models/optimization/qubo.rs` +- Modify: `src/models/optimization/ilp.rs` +- Modify: `src/models/optimization/spin_glass.rs` +- Modify: `src/models/set/maximum_set_packing.rs` +- Modify: `src/models/set/minimum_set_covering.rs` +- Modify: `src/models/specialized/circuit.rs` +- Modify: `src/models/specialized/factoring.rs` +- Modify: `src/models/specialized/paintshop.rs` +- Modify: `src/models/specialized/bmf.rs` +- Modify: `src/models/specialized/biclique_cover.rs` + +Same approach: read `problem_size_values()` for each, add inherent getter methods. Examples: +- Satisfiability: `num_vars()`, `num_clauses()`, `num_literals()` (may already exist) +- QUBO: `num_vars()` +- ILP: `num_vars()`, `num_constraints()` +- SpinGlass: check fields +- CircuitSAT: `num_variables()`, `num_assignments()` + +**Step 1: Add getters to all remaining problem files** + +**Step 2: Run tests** + +Run: `cargo test` +Expected: all tests pass. 
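The shape of these getters is uniform across models. A minimal compilable sketch of the pattern — note that the `Qubo` struct and its `matrix` field below are hypothetical stand-ins for illustration, not the crate's real model type:

```rust
// Hypothetical stand-in for the real QUBO model; `matrix` is an assumed
// field name used only to illustrate the inherent-getter pattern.
pub struct Qubo {
    matrix: Vec<Vec<f64>>,
}

impl Qubo {
    /// Inherent getter replacing the old problem_size_values()[0] ("num_vars").
    pub fn num_vars(&self) -> usize {
        self.matrix.len()
    }
}

fn main() {
    let q = Qubo {
        matrix: vec![vec![0.0; 3]; 3],
    };
    // The getter name must exactly match the variable name used in
    // overhead expression strings, so the compiler can check it.
    assert_eq!(q.num_vars(), 3);
    println!("num_vars = {}", q.num_vars());
}
```

Because the macro-emitted eval code calls `src.num_vars()` directly, a typo in either the getter or the expression string becomes a compile error rather than a runtime mismatch.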
+ +**Step 3: Commit** + +```bash +git add src/models/ +git commit -m "feat: add inherent size getters to SAT, optimization, set, and specialized problems" +``` + +--- + +## Phase 4: Migrate all reductions to new syntax + +### Task 7: Migrate simple reductions (single field, simple expression) + +**Files:** ~15 reduction files with simple `poly!(var)` or `poly!(var^N)` patterns. + +Target files (identified from grep): `maximumindependentset_qubo.rs`, `coloring_qubo.rs`, `ksatisfiability_qubo.rs` (K2), `ilp_qubo.rs`, `maximumsetpacking_qubo.rs`, `minimumvertexcover_qubo.rs`, `spinglass_qubo.rs`, etc. + +For each file, replace: +```rust +overhead = { ReductionOverhead::new(vec![("num_vars", poly!(num_vertices))]) } +``` +with: +```rust +overhead = { num_vars = "num_vertices" } +``` + +And: +```rust +overhead = { ReductionOverhead::new(vec![("num_vars", poly!(num_vertices ^ 2))]) } +``` +with: +```rust +overhead = { num_vars = "num_vertices^2" } +``` + +**Step 1: Migrate files** + +Mechanical replacement. Remove any `use crate::poly;` or `use crate::rules::registry::ReductionOverhead;` imports that become unused. + +**Step 2: Run tests** + +Run: `cargo test` +Expected: all tests pass. + +**Step 3: Commit** + +```bash +git add src/rules/ +git commit -m "refactor: migrate simple reductions to new overhead syntax" +``` + +--- + +### Task 8: Migrate complex reductions (multi-field, compound expressions) + +**Files:** Remaining ~20 reduction files with multi-field or compound polynomial expressions. + +These include reductions like `factoring_circuit.rs`, `circuit_spinglass.rs`, `sat_coloring.rs`, `maximumindependentset_ilp.rs`, etc. 
+ +For compound expressions using `poly!() + poly!()`: +```rust +overhead = { + ReductionOverhead::new(vec![ + ("num_vars", poly!(num_vars) + poly!(num_clauses)), + ]) +} +``` +becomes: +```rust +overhead = { + num_vars = "num_vars + num_clauses", +} +``` + +For multi-field: +```rust +overhead = { + ReductionOverhead::new(vec![ + ("num_vars", poly!(num_vertices)), + ("num_constraints", poly!(num_edges)), + ]) +} +``` +becomes: +```rust +overhead = { + num_vars = "num_vertices", + num_constraints = "num_edges", +} +``` + +**Step 1: Migrate files** + +Read each file's current overhead carefully. Convert polynomial expressions to string syntax. Some expressions may use `poly!(a * b)` (product) — convert to `"a * b"`. + +**Step 2: Run tests** + +Run: `cargo test` +Expected: all tests pass. + +**Step 3: Commit** + +```bash +git add src/rules/ +git commit -m "refactor: migrate complex reductions to new overhead syntax" +``` + +--- + +### Task 9: Migrate variant cast reductions + +**Files:** +- Modify: `src/rules/mod.rs` (the `impl_variant_reduction!` macro) +- Modify: cast files (`kcoloring_casts.rs`, `maximumindependentset_casts.rs`, etc.) + +The `impl_variant_reduction!` macro uses `ReductionOverhead::identity(fields)`. This still works with the new system since identity overhead maps each field to itself. Update the macro to use the new syntax if possible, or keep `ReductionOverhead::identity()` updated to use `Expr::Var` instead of `Polynomial::var`. + +Since `ReductionOverhead::identity()` will now construct `Expr::Var` values (after Phase 5), this migration may be minimal — just ensure the macro still compiles. + +**Step 1: Verify variant casts still compile and pass tests** + +Run: `cargo test` +Expected: all tests pass. 
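How `ReductionOverhead::identity()` looks once it constructs `Expr::Var` values — a self-contained sketch using simplified stand-ins for the real types in `src/expr.rs` and `src/rules/registry.rs` (only the variants needed here are inlined):

```rust
// Simplified stand-ins so the sketch compiles on its own; the real Expr
// and ReductionOverhead carry more variants and methods.
#[derive(Clone, Debug, PartialEq)]
pub enum Expr {
    Var(&'static str),
    // ...other variants elided for brevity
}

pub struct ReductionOverhead {
    pub output_size: Vec<(&'static str, Expr)>,
}

impl ReductionOverhead {
    /// Identity overhead: each output field equals the same-named input variable.
    pub fn identity(fields: &[&'static str]) -> Self {
        Self {
            output_size: fields.iter().map(|&f| (f, Expr::Var(f))).collect(),
        }
    }
}

fn main() {
    let o = ReductionOverhead::identity(&["num_vertices", "num_edges"]);
    assert_eq!(o.output_size[0], ("num_vertices", Expr::Var("num_vertices")));
    println!("{} fields", o.output_size.len());
}
```

Since composition is `Expr::substitute`, composing an identity overhead with any other overhead leaves the latter unchanged, which is why the variant cast macro needs no per-cast migration.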
+ +**Step 2: Commit (if changes needed)** + +```bash +git commit -m "refactor: update variant cast macro for new overhead system" +``` + +--- + +## Phase 5: Remove deprecated APIs + +### Task 10: Switch `ReductionOverhead` from `Polynomial` to `Expr` + +**Files:** +- Modify: `src/rules/registry.rs` +- Modify: `src/export.rs` +- Modify: `src/rules/cost.rs` (if needed) + +**Step 1: Update `ReductionOverhead` to use `Expr`** + +```rust +pub struct ReductionOverhead { + pub output_size: Vec<(&'static str, Expr)>, +} +``` + +Update all methods: `evaluate_output_size` calls `Expr::eval`, `compose` calls `Expr::substitute`, `input_variable_names` calls `Expr::variables`, `identity` creates `Expr::Var` values. + +**Step 2: Update `export.rs`** + +Replace `MonomialJson`/`OverheadEntry` with the new format: +```rust +pub struct OverheadEntry { + pub field: String, + pub expr: Expr, + pub formula: String, +} +``` + +**Step 3: Run tests, fix any compilation errors** + +Run: `cargo test` +Fix any remaining references to `Polynomial` in overhead contexts. + +**Step 4: Commit** + +```bash +git add src/rules/registry.rs src/export.rs +git commit -m "refactor: switch ReductionOverhead from Polynomial to Expr" +``` + +--- + +### Task 11: Remove `problem_size_names` and `problem_size_values` from `Problem` trait + +**Files:** +- Modify: `src/traits.rs` +- Modify: all 21 model files (remove trait method impls) +- Modify: `src/lib.rs` (remove `problem_size` re-export if no longer used) +- Modify: `src/types.rs` (keep `ProblemSize` but remove `from_names_values` if unused) + +**Step 1: Remove from trait definition** + +Remove `problem_size_names()` and `problem_size_values()` from the `Problem` trait in `src/traits.rs`. Remove the `problem_size()` helper function. + +**Step 2: Remove implementations from all 21 model files** + +Remove the `problem_size_names()` and `problem_size_values()` method bodies from each Problem impl. 
+ +**Step 3: Remove `source_size_names_fn` and `target_size_names_fn` from `ReductionEntry`** + +Update `src/rules/registry.rs` and the proc macro to no longer emit these fields. + +**Step 4: Fix compilation errors** + +Search for all remaining uses of `problem_size_names`, `problem_size_values`, `problem_size(`, `source_size_names_fn`, `target_size_names_fn` and update or remove them. + +**Step 5: Run tests** + +Run: `cargo test` +Fix any remaining failures. + +**Step 6: Commit** + +```bash +git add src/traits.rs src/models/ src/rules/registry.rs src/lib.rs problemreductions-macros/src/lib.rs +git commit -m "refactor: remove problem_size_names/values from Problem trait" +``` + +--- + +### Task 12: Remove `Polynomial` and `poly!` macro + +**Files:** +- Delete: `src/polynomial.rs` +- Delete: `src/unit_tests/polynomial.rs` +- Modify: `src/lib.rs` (remove `mod polynomial`) + +**Step 1: Search for any remaining `Polynomial` or `poly!` references** + +Run: `cargo build` — if it compiles, no references remain. + +**Step 2: Delete files** + +**Step 3: Run full test suite** + +Run: `make check` +Expected: fmt + clippy + test all pass. + +**Step 4: Commit** + +```bash +git add -A +git commit -m "refactor: remove Polynomial type and poly! macro (replaced by Expr)" +``` + +--- + +## Phase 6: Update documentation and exports + +### Task 13: Regenerate exports and update docs + +**Files:** +- Modify: `docs/src/reductions/reduction_graph.json` (auto-generated) +- Modify: `docs/paper/reductions.typ` (if format-overhead needs updating) +- Modify: CLAUDE.md (update conventions) +- Regenerate: `tests/data/` ground truth JSON (if format changed) + +**Step 1: Regenerate reduction graph JSON** + +Run: `make rust-export` +Check that the new JSON format has `expr` + `formula` fields instead of `polynomial`. + +**Step 2: Update paper if needed** + +Check `docs/paper/reductions.typ` — the `format-overhead` function reads `formula` fields. If the field name changed, update it. 
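The `formula` string in each exported entry can come straight from `Expr`'s `Display` impl. A cut-down sketch (this inlines a three-variant `Expr` and omits the `expr` JSON field; it is illustrative, not the real export code):

```rust
// Cut-down Expr with a Display impl mirroring the human-readable formula
// format; the real type in src/expr.rs has more variants and parenthesization.
use std::fmt;

#[derive(Clone)]
pub enum Expr {
    Const(f64),
    Var(&'static str),
    Pow(Box<Expr>, Box<Expr>),
}

impl fmt::Display for Expr {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Expr::Const(c) => write!(f, "{c}"),
            Expr::Var(name) => write!(f, "{name}"),
            Expr::Pow(base, exp) => write!(f, "{base}^{exp}"),
        }
    }
}

pub struct OverheadEntry {
    pub field: String,
    pub formula: String,
}

fn main() {
    let expr = Expr::Pow(
        Box::new(Expr::Var("num_vertices")),
        Box::new(Expr::Const(2.0)),
    );
    let entry = OverheadEntry {
        field: "num_vars".to_string(),
        formula: expr.to_string(),
    };
    println!("{}: {}", entry.field, entry.formula); // num_vars: num_vertices^2
}
```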
+ +**Step 3: Regenerate test data** + +Run: `make qubo-testdata` +If example JSON format changed, regenerate example outputs. + +**Step 4: Run full CI check** + +Run: `make check` +Expected: all pass. + +**Step 5: Update CLAUDE.md** + +Update the Architecture section to reference `Expr` instead of `Polynomial`, and document the new `#[reduction]` syntax. + +**Step 6: Commit** + +```bash +git add -A +git commit -m "docs: update exports and documentation for new overhead system" +``` + +--- + +### Task 14: Update MCP server (if applicable) + +**Files:** +- Check: `problemreductions-cli/` for any MCP-specific overhead formatting + +**Step 1: Search for overhead-related code in CLI/MCP** + +The MCP server's `inspect_problem` and `reduce` tools return overhead info. Ensure they use the new `formula` field. + +**Step 2: Run MCP tests** + +Run: `make mcp-test` +Expected: all pass. + +**Step 3: Commit if changes needed** + +```bash +git commit -m "fix: update MCP server for new overhead format" +``` From 3eae864a7e8096dd40b0f5ca8898bc862458efd5 Mon Sep 17 00:00:00 2001 From: GiggleLiu Date: Thu, 26 Feb 2026 05:42:31 +0800 Subject: [PATCH 03/15] feat: add Expr AST type with eval (phase 1 of overhead redesign) Co-Authored-By: Claude Opus 4.6 --- src/expr.rs | 59 ++++++++++++++++++++++++++++++++++ src/lib.rs | 1 + src/unit_tests/expr.rs | 72 ++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 132 insertions(+) create mode 100644 src/expr.rs create mode 100644 src/unit_tests/expr.rs diff --git a/src/expr.rs b/src/expr.rs new file mode 100644 index 000000000..ab0a0631a --- /dev/null +++ b/src/expr.rs @@ -0,0 +1,59 @@ +//! General symbolic expression AST for reduction overhead. + +use crate::types::ProblemSize; + +/// A symbolic math expression over problem size variables. +#[derive(Clone, Debug, PartialEq, serde::Serialize, serde::Deserialize)] +pub enum Expr { + /// Numeric constant. + Const(f64), + /// Named variable (e.g., "num_vertices"). 
+    Var(&'static str),
+    /// Addition: a + b.
+    Add(Box<Expr>, Box<Expr>),
+    /// Multiplication: a * b.
+    Mul(Box<Expr>, Box<Expr>),
+    /// Exponentiation: base ^ exponent.
+    Pow(Box<Expr>, Box<Expr>),
+    /// Exponential function: exp(a).
+    Exp(Box<Expr>),
+    /// Natural logarithm: log(a).
+    Log(Box<Expr>),
+    /// Square root: sqrt(a).
+    Sqrt(Box<Expr>),
+}
+
+impl Expr {
+    /// Convenience constructor for addition.
+    pub fn add(a: Expr, b: Expr) -> Self {
+        Expr::Add(Box::new(a), Box::new(b))
+    }
+
+    /// Convenience constructor for multiplication.
+    pub fn mul(a: Expr, b: Expr) -> Self {
+        Expr::Mul(Box::new(a), Box::new(b))
+    }
+
+    /// Convenience constructor for exponentiation.
+    pub fn pow(base: Expr, exp: Expr) -> Self {
+        Expr::Pow(Box::new(base), Box::new(exp))
+    }
+
+    /// Evaluate the expression given concrete variable values.
+    pub fn eval(&self, vars: &ProblemSize) -> f64 {
+        match self {
+            Expr::Const(c) => *c,
+            Expr::Var(name) => vars.get(name).unwrap_or(0) as f64,
+            Expr::Add(a, b) => a.eval(vars) + b.eval(vars),
+            Expr::Mul(a, b) => a.eval(vars) * b.eval(vars),
+            Expr::Pow(base, exp) => base.eval(vars).powf(exp.eval(vars)),
+            Expr::Exp(a) => a.eval(vars).exp(),
+            Expr::Log(a) => a.eval(vars).ln(),
+            Expr::Sqrt(a) => a.eval(vars).sqrt(),
+        }
+    }
+}
+
+#[cfg(test)]
+#[path = "unit_tests/expr.rs"]
+mod tests;
diff --git a/src/lib.rs b/src/lib.rs
index 30dd2a65d..46b27689f 100644
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -22,6 +22,7 @@ pub mod error;
 pub mod export;
 pub mod io;
 pub mod models;
+pub(crate) mod expr;
 pub(crate) mod polynomial;
 pub mod registry;
 pub mod rules;
diff --git a/src/unit_tests/expr.rs b/src/unit_tests/expr.rs
new file mode 100644
index 000000000..9f564813a
--- /dev/null
+++ b/src/unit_tests/expr.rs
@@ -0,0 +1,72 @@
+use super::*;
+use crate::types::ProblemSize;
+
+#[test]
+fn test_expr_const_eval() {
+    let e = Expr::Const(42.0);
+    let size = ProblemSize::new(vec![]);
+    assert_eq!(e.eval(&size), 42.0);
+}
+
+#[test]
+fn test_expr_var_eval() {
+    let e = Expr::Var("n");
+    let size =
ProblemSize::new(vec![("n", 10)]); + assert_eq!(e.eval(&size), 10.0); +} + +#[test] +fn test_expr_add_eval() { + // n + 3 + let e = Expr::add(Expr::Var("n"), Expr::Const(3.0)); + let size = ProblemSize::new(vec![("n", 7)]); + assert_eq!(e.eval(&size), 10.0); +} + +#[test] +fn test_expr_mul_eval() { + // 3 * n + let e = Expr::mul(Expr::Const(3.0), Expr::Var("n")); + let size = ProblemSize::new(vec![("n", 5)]); + assert_eq!(e.eval(&size), 15.0); +} + +#[test] +fn test_expr_pow_eval() { + // n^2 + let e = Expr::pow(Expr::Var("n"), Expr::Const(2.0)); + let size = ProblemSize::new(vec![("n", 4)]); + assert_eq!(e.eval(&size), 16.0); +} + +#[test] +fn test_expr_exp_eval() { + let e = Expr::Exp(Box::new(Expr::Const(1.0))); + let size = ProblemSize::new(vec![]); + assert!((e.eval(&size) - std::f64::consts::E).abs() < 1e-10); +} + +#[test] +fn test_expr_log_eval() { + let e = Expr::Log(Box::new(Expr::Const(std::f64::consts::E))); + let size = ProblemSize::new(vec![]); + assert!((e.eval(&size) - 1.0).abs() < 1e-10); +} + +#[test] +fn test_expr_sqrt_eval() { + let e = Expr::Sqrt(Box::new(Expr::Const(9.0))); + let size = ProblemSize::new(vec![]); + assert_eq!(e.eval(&size), 3.0); +} + +#[test] +fn test_expr_complex() { + // n^2 + 3*m + let e = Expr::add( + Expr::pow(Expr::Var("n"), Expr::Const(2.0)), + Expr::mul(Expr::Const(3.0), Expr::Var("m")), + ); + let size = ProblemSize::new(vec![("n", 4), ("m", 2)]); + assert_eq!(e.eval(&size), 22.0); // 16 + 6 +} From 6adf13d8e21ce4a91cdeef640272060a387defbe Mon Sep 17 00:00:00 2001 From: GiggleLiu Date: Thu, 26 Feb 2026 05:44:10 +0800 Subject: [PATCH 04/15] feat: add variables, substitute, Display to Expr Co-Authored-By: Claude Opus 4.6 --- src/expr.rs | 99 ++++++++++++++++++++++++++++++++++++++++++ src/unit_tests/expr.rs | 72 ++++++++++++++++++++++++++++++ 2 files changed, 171 insertions(+) diff --git a/src/expr.rs b/src/expr.rs index ab0a0631a..5959d6e3e 100644 --- a/src/expr.rs +++ b/src/expr.rs @@ -1,6 +1,8 @@ //! 
General symbolic expression AST for reduction overhead. use crate::types::ProblemSize; +use std::collections::{HashMap, HashSet}; +use std::fmt; /// A symbolic math expression over problem size variables. #[derive(Clone, Debug, PartialEq, serde::Serialize, serde::Deserialize)] @@ -52,6 +54,103 @@ impl Expr { Expr::Sqrt(a) => a.eval(vars).sqrt(), } } + + /// Collect all variable names referenced in this expression. + pub fn variables(&self) -> HashSet<&'static str> { + let mut vars = HashSet::new(); + self.collect_variables(&mut vars); + vars + } + + fn collect_variables(&self, vars: &mut HashSet<&'static str>) { + match self { + Expr::Const(_) => {} + Expr::Var(name) => { + vars.insert(name); + } + Expr::Add(a, b) | Expr::Mul(a, b) | Expr::Pow(a, b) => { + a.collect_variables(vars); + b.collect_variables(vars); + } + Expr::Exp(a) | Expr::Log(a) | Expr::Sqrt(a) => { + a.collect_variables(vars); + } + } + } + + /// Substitute variables with other expressions. + pub fn substitute(&self, mapping: &HashMap<&str, &Expr>) -> Expr { + match self { + Expr::Const(c) => Expr::Const(*c), + Expr::Var(name) => { + if let Some(replacement) = mapping.get(name) { + (*replacement).clone() + } else { + Expr::Var(name) + } + } + Expr::Add(a, b) => Expr::add(a.substitute(mapping), b.substitute(mapping)), + Expr::Mul(a, b) => Expr::mul(a.substitute(mapping), b.substitute(mapping)), + Expr::Pow(a, b) => Expr::pow(a.substitute(mapping), b.substitute(mapping)), + Expr::Exp(a) => Expr::Exp(Box::new(a.substitute(mapping))), + Expr::Log(a) => Expr::Log(Box::new(a.substitute(mapping))), + Expr::Sqrt(a) => Expr::Sqrt(Box::new(a.substitute(mapping))), + } + } + + /// Check if this expression is a polynomial (no exp/log/sqrt, integer exponents only). 
+ pub fn is_polynomial(&self) -> bool { + match self { + Expr::Const(_) | Expr::Var(_) => true, + Expr::Add(a, b) | Expr::Mul(a, b) => a.is_polynomial() && b.is_polynomial(), + Expr::Pow(base, exp) => { + base.is_polynomial() + && matches!(exp.as_ref(), Expr::Const(c) if *c >= 0.0 && (*c - c.round()).abs() < 1e-10) + } + Expr::Exp(_) | Expr::Log(_) | Expr::Sqrt(_) => false, + } + } +} + +impl fmt::Display for Expr { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + match self { + Expr::Const(c) => { + let ci = c.round() as i64; + if (*c - ci as f64).abs() < 1e-10 { + write!(f, "{ci}") + } else { + write!(f, "{c}") + } + } + Expr::Var(name) => write!(f, "{name}"), + Expr::Add(a, b) => write!(f, "{a} + {b}"), + Expr::Mul(a, b) => { + let left = if matches!(a.as_ref(), Expr::Add(_, _)) { + format!("({a})") + } else { + format!("{a}") + }; + let right = if matches!(b.as_ref(), Expr::Add(_, _)) { + format!("({b})") + } else { + format!("{b}") + }; + write!(f, "{left} * {right}") + } + Expr::Pow(base, exp) => { + let base_str = if matches!(base.as_ref(), Expr::Add(_, _) | Expr::Mul(_, _)) { + format!("({base})") + } else { + format!("{base}") + }; + write!(f, "{base_str}^{exp}") + } + Expr::Exp(a) => write!(f, "exp({a})"), + Expr::Log(a) => write!(f, "log({a})"), + Expr::Sqrt(a) => write!(f, "sqrt({a})"), + } + } } #[cfg(test)] diff --git a/src/unit_tests/expr.rs b/src/unit_tests/expr.rs index 9f564813a..d095882ed 100644 --- a/src/unit_tests/expr.rs +++ b/src/unit_tests/expr.rs @@ -1,5 +1,6 @@ use super::*; use crate::types::ProblemSize; +use std::collections::{HashMap, HashSet}; #[test] fn test_expr_const_eval() { @@ -70,3 +71,74 @@ fn test_expr_complex() { let size = ProblemSize::new(vec![("n", 4), ("m", 2)]); assert_eq!(e.eval(&size), 22.0); // 16 + 6 } + +#[test] +fn test_expr_variables() { + let e = Expr::add( + Expr::pow(Expr::Var("n"), Expr::Const(2.0)), + Expr::mul(Expr::Const(3.0), Expr::Var("m")), + ); + let vars = e.variables(); + assert_eq!(vars, 
HashSet::from(["n", "m"])); +} + +#[test] +fn test_expr_substitute() { + // n^2, substitute n → (a + b) + let e = Expr::pow(Expr::Var("n"), Expr::Const(2.0)); + let replacement = Expr::add(Expr::Var("a"), Expr::Var("b")); + let mut mapping = HashMap::new(); + mapping.insert("n", &replacement); + let result = e.substitute(&mapping); + // Should be (a + b)^2 + let size = ProblemSize::new(vec![("a", 3), ("b", 2)]); + assert_eq!(result.eval(&size), 25.0); // (3+2)^2 +} + +#[test] +fn test_expr_display_simple() { + assert_eq!(format!("{}", Expr::Const(5.0)), "5"); + assert_eq!(format!("{}", Expr::Var("n")), "n"); +} + +#[test] +fn test_expr_display_add() { + let e = Expr::add(Expr::Var("n"), Expr::Const(3.0)); + assert_eq!(format!("{e}"), "n + 3"); +} + +#[test] +fn test_expr_display_mul() { + let e = Expr::mul(Expr::Const(3.0), Expr::Var("n")); + assert_eq!(format!("{e}"), "3 * n"); +} + +#[test] +fn test_expr_display_pow() { + let e = Expr::pow(Expr::Var("n"), Expr::Const(2.0)); + assert_eq!(format!("{e}"), "n^2"); +} + +#[test] +fn test_expr_display_exp() { + let e = Expr::Exp(Box::new(Expr::Var("n"))); + assert_eq!(format!("{e}"), "exp(n)"); +} + +#[test] +fn test_expr_display_nested() { + // n^2 + 3 * m + let e = Expr::add( + Expr::pow(Expr::Var("n"), Expr::Const(2.0)), + Expr::mul(Expr::Const(3.0), Expr::Var("m")), + ); + assert_eq!(format!("{e}"), "n^2 + 3 * m"); +} + +#[test] +fn test_expr_is_polynomial() { + assert!(Expr::Var("n").is_polynomial()); + assert!(Expr::pow(Expr::Var("n"), Expr::Const(2.0)).is_polynomial()); + assert!(!Expr::Exp(Box::new(Expr::Var("n"))).is_polynomial()); + assert!(!Expr::Log(Box::new(Expr::Var("n"))).is_polynomial()); +} From 324970f30ded819f032f86f2e1ea953be1163b9e Mon Sep 17 00:00:00 2001 From: GiggleLiu Date: Thu, 26 Feb 2026 05:45:52 +0800 Subject: [PATCH 05/15] feat: add Pratt expression parser to proc macro crate Co-Authored-By: Claude Opus 4.6 --- problemreductions-macros/src/lib.rs | 2 + 
problemreductions-macros/src/parser.rs | 467 +++++++++++++++++++++++++ 2 files changed, 469 insertions(+) create mode 100644 problemreductions-macros/src/parser.rs diff --git a/problemreductions-macros/src/lib.rs b/problemreductions-macros/src/lib.rs index 6fab33c5f..3b8350e4f 100644 --- a/problemreductions-macros/src/lib.rs +++ b/problemreductions-macros/src/lib.rs @@ -3,6 +3,8 @@ //! This crate provides the `#[reduction]` attribute macro that automatically //! generates `ReductionEntry` registrations from `ReduceTo` impl blocks. +pub(crate) mod parser; + use proc_macro::TokenStream; use proc_macro2::TokenStream as TokenStream2; use quote::quote; diff --git a/problemreductions-macros/src/parser.rs b/problemreductions-macros/src/parser.rs new file mode 100644 index 000000000..2839ca2c7 --- /dev/null +++ b/problemreductions-macros/src/parser.rs @@ -0,0 +1,467 @@ +//! Pratt parser for overhead expression strings. +//! +//! Parses expressions like: +//! - `"num_vertices"` +//! - `"num_vertices^2"` +//! - `"num_edges + num_vertices^2"` +//! - `"3 * num_vertices"` +//! - `"exp(num_vertices^2)"` +//! - `"sqrt(num_edges)"` +//! +//! Grammar: +//! expr = term (('+' | '-') term)* +//! term = factor (('*' | '/') factor)* +//! factor = unary ('^' factor)? // right-associative +//! unary = '-' unary | primary +//! primary = NUMBER | IDENT | func_call | '(' expr ')' +//! func_call = ('exp' | 'log' | 'sqrt') '(' expr ')' + +use proc_macro2::TokenStream; +use quote::quote; + +/// Parsed expression node (intermediate representation before codegen). 
+#[derive(Debug, Clone, PartialEq)]
+pub enum ParsedExpr {
+    Const(f64),
+    Var(String),
+    Add(Box<ParsedExpr>, Box<ParsedExpr>),
+    Sub(Box<ParsedExpr>, Box<ParsedExpr>),
+    Mul(Box<ParsedExpr>, Box<ParsedExpr>),
+    Div(Box<ParsedExpr>, Box<ParsedExpr>),
+    Pow(Box<ParsedExpr>, Box<ParsedExpr>),
+    Neg(Box<ParsedExpr>),
+    Exp(Box<ParsedExpr>),
+    Log(Box<ParsedExpr>),
+    Sqrt(Box<ParsedExpr>),
+}
+
+#[derive(Debug, Clone, PartialEq)]
+enum Token {
+    Number(f64),
+    Ident(String),
+    Plus,
+    Minus,
+    Star,
+    Slash,
+    Caret,
+    LParen,
+    RParen,
+}
+
+fn tokenize(input: &str) -> Result<Vec<Token>, String> {
+    let mut tokens = Vec::new();
+    let mut chars = input.chars().peekable();
+    while let Some(&ch) = chars.peek() {
+        match ch {
+            ' ' | '\t' | '\n' => {
+                chars.next();
+            }
+            '+' => {
+                chars.next();
+                tokens.push(Token::Plus);
+            }
+            '-' => {
+                chars.next();
+                tokens.push(Token::Minus);
+            }
+            '*' => {
+                chars.next();
+                tokens.push(Token::Star);
+            }
+            '/' => {
+                chars.next();
+                tokens.push(Token::Slash);
+            }
+            '^' => {
+                chars.next();
+                tokens.push(Token::Caret);
+            }
+            '(' => {
+                chars.next();
+                tokens.push(Token::LParen);
+            }
+            ')' => {
+                chars.next();
+                tokens.push(Token::RParen);
+            }
+            c if c.is_ascii_digit() || c == '.' => {
+                let mut num = String::new();
+                while let Some(&c) = chars.peek() {
+                    if c.is_ascii_digit() || c == '.'
{
+                        num.push(c);
+                        chars.next();
+                    } else {
+                        break;
+                    }
+                }
+                let val: f64 = num.parse().map_err(|_| format!("invalid number: {num}"))?;
+                tokens.push(Token::Number(val));
+            }
+            c if c.is_ascii_alphabetic() || c == '_' => {
+                let mut ident = String::new();
+                while let Some(&c) = chars.peek() {
+                    if c.is_ascii_alphanumeric() || c == '_' {
+                        ident.push(c);
+                        chars.next();
+                    } else {
+                        break;
+                    }
+                }
+                tokens.push(Token::Ident(ident));
+            }
+            _ => return Err(format!("unexpected character: '{ch}'")),
+        }
+    }
+    Ok(tokens)
+}
+
+struct Parser {
+    tokens: Vec<Token>,
+    pos: usize,
+}
+
+impl Parser {
+    fn new(tokens: Vec<Token>) -> Self {
+        Self { tokens, pos: 0 }
+    }
+
+    fn peek(&self) -> Option<&Token> {
+        self.tokens.get(self.pos)
+    }
+
+    fn advance(&mut self) -> Option<Token> {
+        let tok = self.tokens.get(self.pos).cloned();
+        self.pos += 1;
+        tok
+    }
+
+    fn expect(&mut self, expected: &Token) -> Result<(), String> {
+        match self.advance() {
+            Some(ref tok) if tok == expected => Ok(()),
+            Some(tok) => Err(format!("expected {expected:?}, got {tok:?}")),
+            None => Err(format!("expected {expected:?}, got end of input")),
+        }
+    }
+
+    fn parse_expr(&mut self) -> Result<ParsedExpr, String> {
+        let mut left = self.parse_term()?;
+        while matches!(self.peek(), Some(Token::Plus) | Some(Token::Minus)) {
+            let op = self.advance().unwrap();
+            let right = self.parse_term()?;
+            left = match op {
+                Token::Plus => ParsedExpr::Add(Box::new(left), Box::new(right)),
+                Token::Minus => ParsedExpr::Sub(Box::new(left), Box::new(right)),
+                _ => unreachable!(),
+            };
+        }
+        Ok(left)
+    }
+
+    fn parse_term(&mut self) -> Result<ParsedExpr, String> {
+        let mut left = self.parse_factor()?;
+        while matches!(self.peek(), Some(Token::Star) | Some(Token::Slash)) {
+            let op = self.advance().unwrap();
+            let right = self.parse_factor()?;
+            left = match op {
+                Token::Star => ParsedExpr::Mul(Box::new(left), Box::new(right)),
+                Token::Slash => ParsedExpr::Div(Box::new(left), Box::new(right)),
+                _ => unreachable!(),
+            };
+        }
+        Ok(left)
+    }
+
+    fn parse_factor(&mut self)
-> Result<ParsedExpr, String> {
+        let base = self.parse_unary()?;
+        if matches!(self.peek(), Some(Token::Caret)) {
+            self.advance();
+            let exp = self.parse_factor()?; // right-associative
+            Ok(ParsedExpr::Pow(Box::new(base), Box::new(exp)))
+        } else {
+            Ok(base)
+        }
+    }
+
+    fn parse_unary(&mut self) -> Result<ParsedExpr, String> {
+        if matches!(self.peek(), Some(Token::Minus)) {
+            self.advance();
+            let expr = self.parse_unary()?;
+            Ok(ParsedExpr::Neg(Box::new(expr)))
+        } else {
+            self.parse_primary()
+        }
+    }
+
+    fn parse_primary(&mut self) -> Result<ParsedExpr, String> {
+        match self.advance() {
+            Some(Token::Number(n)) => Ok(ParsedExpr::Const(n)),
+            Some(Token::Ident(name)) => {
+                // Check for function call: exp(...), log(...), sqrt(...)
+                if matches!(self.peek(), Some(Token::LParen)) {
+                    self.advance(); // consume '('
+                    let arg = self.parse_expr()?;
+                    self.expect(&Token::RParen)?;
+                    match name.as_str() {
+                        "exp" => Ok(ParsedExpr::Exp(Box::new(arg))),
+                        "log" => Ok(ParsedExpr::Log(Box::new(arg))),
+                        "sqrt" => Ok(ParsedExpr::Sqrt(Box::new(arg))),
+                        _ => Err(format!("unknown function: {name}")),
+                    }
+                } else {
+                    Ok(ParsedExpr::Var(name))
+                }
+            }
+            Some(Token::LParen) => {
+                let expr = self.parse_expr()?;
+                self.expect(&Token::RParen)?;
+                Ok(expr)
+            }
+            Some(tok) => Err(format!("unexpected token: {tok:?}")),
+            None => Err("unexpected end of input".to_string()),
+        }
+    }
+}
+
+/// Parse an expression string into a ParsedExpr.
+pub fn parse_expr(input: &str) -> Result<ParsedExpr, String> {
+    let tokens = tokenize(input)?;
+    let mut parser = Parser::new(tokens);
+    let expr = parser.parse_expr()?;
+    if parser.pos != parser.tokens.len() {
+        return Err(format!(
+            "unexpected trailing tokens at position {}",
+            parser.pos
+        ));
+    }
+    Ok(expr)
+}
+
+impl ParsedExpr {
+    /// Generate TokenStream that constructs an `Expr` value.
+    pub fn to_expr_tokens(&self) -> TokenStream {
+        match self {
+            ParsedExpr::Const(c) => quote! { crate::expr::Expr::Const(#c) },
+            ParsedExpr::Var(name) => quote!
{ crate::expr::Expr::Var(#name) }, + ParsedExpr::Add(a, b) => { + let a = a.to_expr_tokens(); + let b = b.to_expr_tokens(); + quote! { crate::expr::Expr::add(#a, #b) } + } + ParsedExpr::Sub(a, b) => { + let a = a.to_expr_tokens(); + let b = b.to_expr_tokens(); + quote! { crate::expr::Expr::add(#a, crate::expr::Expr::mul(crate::expr::Expr::Const(-1.0), #b)) } + } + ParsedExpr::Mul(a, b) => { + let a = a.to_expr_tokens(); + let b = b.to_expr_tokens(); + quote! { crate::expr::Expr::mul(#a, #b) } + } + ParsedExpr::Div(a, b) => { + let a = a.to_expr_tokens(); + let b = b.to_expr_tokens(); + quote! { crate::expr::Expr::mul(#a, crate::expr::Expr::pow(#b, crate::expr::Expr::Const(-1.0))) } + } + ParsedExpr::Pow(base, exp) => { + let base = base.to_expr_tokens(); + let exp = exp.to_expr_tokens(); + quote! { crate::expr::Expr::pow(#base, #exp) } + } + ParsedExpr::Neg(a) => { + let a = a.to_expr_tokens(); + quote! { crate::expr::Expr::mul(crate::expr::Expr::Const(-1.0), #a) } + } + ParsedExpr::Exp(a) => { + let a = a.to_expr_tokens(); + quote! { crate::expr::Expr::Exp(Box::new(#a)) } + } + ParsedExpr::Log(a) => { + let a = a.to_expr_tokens(); + quote! { crate::expr::Expr::Log(Box::new(#a)) } + } + ParsedExpr::Sqrt(a) => { + let a = a.to_expr_tokens(); + quote! { crate::expr::Expr::Sqrt(Box::new(#a)) } + } + } + } + + /// Generate TokenStream that evaluates the expression by calling getter methods + /// on a source variable `src`. + pub fn to_eval_tokens(&self, src_ident: &syn::Ident) -> TokenStream { + match self { + ParsedExpr::Const(c) => quote! { (#c as f64) }, + ParsedExpr::Var(name) => { + let getter = syn::Ident::new(name, proc_macro2::Span::call_site()); + quote! { (#src_ident.#getter() as f64) } + } + ParsedExpr::Add(a, b) => { + let a = a.to_eval_tokens(src_ident); + let b = b.to_eval_tokens(src_ident); + quote! { (#a + #b) } + } + ParsedExpr::Sub(a, b) => { + let a = a.to_eval_tokens(src_ident); + let b = b.to_eval_tokens(src_ident); + quote! 
{ (#a - #b) }
+            }
+            ParsedExpr::Mul(a, b) => {
+                let a = a.to_eval_tokens(src_ident);
+                let b = b.to_eval_tokens(src_ident);
+                quote! { (#a * #b) }
+            }
+            ParsedExpr::Div(a, b) => {
+                let a = a.to_eval_tokens(src_ident);
+                let b = b.to_eval_tokens(src_ident);
+                quote! { (#a / #b) }
+            }
+            ParsedExpr::Pow(base, exp) => {
+                let base = base.to_eval_tokens(src_ident);
+                let exp = exp.to_eval_tokens(src_ident);
+                quote! { f64::powf(#base, #exp) }
+            }
+            ParsedExpr::Neg(a) => {
+                let a = a.to_eval_tokens(src_ident);
+                quote! { (-(#a)) }
+            }
+            ParsedExpr::Exp(a) => {
+                let a = a.to_eval_tokens(src_ident);
+                quote! { f64::exp(#a) }
+            }
+            ParsedExpr::Log(a) => {
+                let a = a.to_eval_tokens(src_ident);
+                quote! { f64::ln(#a) }
+            }
+            ParsedExpr::Sqrt(a) => {
+                let a = a.to_eval_tokens(src_ident);
+                quote! { f64::sqrt(#a) }
+            }
+        }
+    }
+
+    /// Collect all variable names in the expression.
+    pub fn variables(&self) -> Vec<String> {
+        let mut vars = Vec::new();
+        self.collect_vars(&mut vars);
+        vars.sort();
+        vars.dedup();
+        vars
+    }
+
+    fn collect_vars(&self, vars: &mut Vec<String>) {
+        match self {
+            ParsedExpr::Const(_) => {}
+            ParsedExpr::Var(name) => vars.push(name.clone()),
+            ParsedExpr::Add(a, b)
+            | ParsedExpr::Sub(a, b)
+            | ParsedExpr::Mul(a, b)
+            | ParsedExpr::Div(a, b)
+            | ParsedExpr::Pow(a, b) => {
+                a.collect_vars(vars);
+                b.collect_vars(vars);
+            }
+            ParsedExpr::Neg(a) | ParsedExpr::Exp(a) | ParsedExpr::Log(a)
+            | ParsedExpr::Sqrt(a) => {
+                a.collect_vars(vars);
+            }
+        }
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_parse_var() {
+        assert_eq!(
+            parse_expr("num_vertices").unwrap(),
+            ParsedExpr::Var("num_vertices".into())
+        );
+    }
+
+    #[test]
+    fn test_parse_const() {
+        assert_eq!(parse_expr("42").unwrap(), ParsedExpr::Const(42.0));
+    }
+
+    #[test]
+    fn test_parse_pow() {
+        let e = parse_expr("n^2").unwrap();
+        assert_eq!(
+            e,
+            ParsedExpr::Pow(
+                Box::new(ParsedExpr::Var("n".into())),
+                Box::new(ParsedExpr::Const(2.0)),
+            )
+        );
+    }
+
+    #[test]
+    fn
test_parse_add_mul() {
+        // n + 3 * m → n + (3*m)
+        let e = parse_expr("n + 3 * m").unwrap();
+        assert_eq!(
+            e,
+            ParsedExpr::Add(
+                Box::new(ParsedExpr::Var("n".into())),
+                Box::new(ParsedExpr::Mul(
+                    Box::new(ParsedExpr::Const(3.0)),
+                    Box::new(ParsedExpr::Var("m".into())),
+                )),
+            )
+        );
+    }
+
+    #[test]
+    fn test_parse_exp() {
+        let e = parse_expr("exp(n^2)").unwrap();
+        assert_eq!(
+            e,
+            ParsedExpr::Exp(Box::new(ParsedExpr::Pow(
+                Box::new(ParsedExpr::Var("n".into())),
+                Box::new(ParsedExpr::Const(2.0)),
+            )))
+        );
+    }
+
+    #[test]
+    fn test_parse_complex() {
+        // 3 * n^2 + exp(m) — should parse correctly
+        let e = parse_expr("3 * n^2 + exp(m)").unwrap();
+        assert!(matches!(e, ParsedExpr::Add(_, _)));
+    }
+
+    #[test]
+    fn test_parse_parens() {
+        let e = parse_expr("(n + m)^2").unwrap();
+        assert!(matches!(e, ParsedExpr::Pow(_, _)));
+    }
+
+    #[test]
+    fn test_variables() {
+        let e = parse_expr("n^2 + 3 * m + exp(k)").unwrap();
+        assert_eq!(e.variables(), vec!["k", "m", "n"]);
+    }
+
+    #[test]
+    fn test_parse_neg() {
+        let e = parse_expr("-n").unwrap();
+        assert_eq!(
+            e,
+            ParsedExpr::Neg(Box::new(ParsedExpr::Var("n".into())))
+        );
+    }
+
+    #[test]
+    fn test_parse_sub() {
+        let e = parse_expr("n - m").unwrap();
+        assert_eq!(
+            e,
+            ParsedExpr::Sub(
+                Box::new(ParsedExpr::Var("n".into())),
+                Box::new(ParsedExpr::Var("m".into())),
+            )
+        );
+    }
+}

From 994f1edaf481baf969cda824870b182459d35601 Mon Sep 17 00:00:00 2001
From: GiggleLiu
Date: Thu, 26 Feb 2026 05:57:26 +0800
Subject: [PATCH 06/15] feat: switch ReductionOverhead from Polynomial to Expr,
 support new overhead syntax

- Update #[reduction] macro to detect and parse new `name = "expr"` syntax
- Switch ReductionOverhead to store Expr instead of Polynomial
- Add From<Polynomial> for Expr bridge conversion
- Add scale() and Add impl to Expr
- Update poly!
macro to produce Expr via conversion - Update export format to use expr/formula fields - Update factoring_ilp and travelingsalesman_ilp to use from_polynomials - Update all affected tests Co-Authored-By: Claude Opus 4.6 --- problemreductions-macros/src/lib.rs | 95 ++++++++++++++++++++++++++--- src/export.rs | 31 +++------- src/expr.rs | 47 ++++++++++++++ src/polynomial.rs | 18 +++--- src/rules/factoring_ilp.rs | 2 +- src/rules/registry.rs | 53 +++++++++------- src/rules/travelingsalesman_ilp.rs | 2 +- src/unit_tests/export.rs | 53 +++++----------- src/unit_tests/polynomial.rs | 14 ++--- src/unit_tests/reduction_graph.rs | 55 +++++++---------- src/unit_tests/rules/cost.rs | 6 +- 11 files changed, 234 insertions(+), 142 deletions(-) diff --git a/problemreductions-macros/src/lib.rs b/problemreductions-macros/src/lib.rs index 3b8350e4f..bc650dc58 100644 --- a/problemreductions-macros/src/lib.rs +++ b/problemreductions-macros/src/lib.rs @@ -22,7 +22,20 @@ use syn::{parse_macro_input, GenericArgument, ItemImpl, Path, PathArguments, Typ /// /// # Attributes /// -/// - `overhead = { expr }` — overhead specification (required for non-trivial reductions) +/// - `overhead = { expr }` — overhead specification +/// +/// ## New syntax (preferred): +/// ```ignore +/// #[reduction(overhead = { +/// num_vars = "num_vertices^2", +/// num_constraints = "num_edges", +/// })] +/// ``` +/// +/// ## Legacy syntax (still supported): +/// ```ignore +/// #[reduction(overhead = { ReductionOverhead::new(vec![...]) })] +/// ``` #[proc_macro_attribute] pub fn reduction(attr: TokenStream, item: TokenStream) -> TokenStream { let attrs = parse_macro_input!(attr as ReductionAttrs); @@ -34,9 +47,17 @@ pub fn reduction(attr: TokenStream, item: TokenStream) -> TokenStream { } } +/// Overhead specification: either new parsed syntax or legacy raw tokens. 
+enum OverheadSpec { + /// Legacy syntax: raw token stream (e.g., `ReductionOverhead::new(...)`) + Legacy(TokenStream2), + /// New syntax: list of (field_name, expression_string) pairs + Parsed(Vec<(String, String)>), +} + /// Parsed attributes from #[reduction(...)] struct ReductionAttrs { - overhead: Option, + overhead: Option, } impl syn::parse::Parse for ReductionAttrs { @@ -51,7 +72,7 @@ impl syn::parse::Parse for ReductionAttrs { "overhead" => { let content; syn::braced!(content in input); - attrs.overhead = Some(content.parse()?); + attrs.overhead = Some(parse_overhead_content(&content)?); } _ => { return Err(syn::Error::new( @@ -70,6 +91,40 @@ impl syn::parse::Parse for ReductionAttrs { } } +/// Detect and parse the overhead content as either new or legacy syntax. +/// +/// New syntax detection: the first tokens are `ident = "string_literal"`. +/// Legacy syntax: everything else (starts with a path like `ReductionOverhead::...`). +fn parse_overhead_content(content: syn::parse::ParseStream) -> syn::Result { + // Fork to peek ahead without consuming + let fork = content.fork(); + + // Try to detect new syntax: ident = "string" + let is_new_syntax = fork.parse::().is_ok() + && fork.parse::().is_ok() + && fork.parse::().is_ok(); + + if is_new_syntax { + // Parse new syntax: field_name = "expression", ... 
+ let mut fields = Vec::new(); + while !content.is_empty() { + let field_name: syn::Ident = content.parse()?; + content.parse::()?; + let expr_str: syn::LitStr = content.parse()?; + fields.push((field_name.to_string(), expr_str.value())); + + if content.peek(syn::Token![,]) { + content.parse::()?; + } + } + Ok(OverheadSpec::Parsed(fields)) + } else { + // Legacy syntax: parse as raw token stream + let tokens: TokenStream2 = content.parse()?; + Ok(OverheadSpec::Legacy(tokens)) + } +} + /// Extract the base type name from a Type (e.g., "IndependentSet" from "IndependentSet") fn extract_type_name(ty: &Type) -> Option { match ty { @@ -139,6 +194,30 @@ fn make_variant_fn_body(ty: &Type, type_generics: &HashSet) -> syn::Resu Ok(quote! { <#ty as crate::traits::Problem>::variant() }) } +/// Generate overhead code from the new parsed syntax. +/// +/// Produces a `ReductionOverhead` constructor that uses `Expr` AST values. +fn generate_parsed_overhead(fields: &[(String, String)]) -> syn::Result { + let mut field_tokens = Vec::new(); + + for (field_name, expr_str) in fields { + let parsed = parser::parse_expr(expr_str).map_err(|e| { + syn::Error::new( + proc_macro2::Span::call_site(), + format!("error parsing overhead expression \"{expr_str}\": {e}"), + ) + })?; + + let expr_ast = parsed.to_expr_tokens(); + let name_lit = field_name.as_str(); + field_tokens.push(quote! { (#name_lit, #expr_ast) }); + } + + Ok(quote! { + crate::rules::registry::ReductionOverhead::new(vec![#(#field_tokens),*]) + }) +} + /// Generate the reduction entry code fn generate_reduction_entry( attrs: &ReductionAttrs, @@ -171,11 +250,11 @@ fn generate_reduction_entry( let target_variant_body = make_variant_fn_body(&target_type, &type_generics)?; // Generate overhead or use default - let overhead = attrs.overhead.clone().unwrap_or_else(|| { - quote! 
{ - crate::rules::registry::ReductionOverhead::default() - } - }); + let overhead = match &attrs.overhead { + Some(OverheadSpec::Legacy(tokens)) => tokens.clone(), + Some(OverheadSpec::Parsed(fields)) => generate_parsed_overhead(fields)?, + None => quote! { crate::rules::registry::ReductionOverhead::default() }, + }; // Generate the combined output let output = quote! { diff --git a/src/export.rs b/src/export.rs index 326398555..2ef553ed8 100644 --- a/src/export.rs +++ b/src/export.rs @@ -5,9 +5,10 @@ //! - `.json` — reduction structure (source, target, overhead) //! - `.result.json` — runtime solutions //! -//! The schema mirrors the internal types: `ReductionOverhead` for polynomials, +//! The schema mirrors the internal types: `ReductionOverhead` for expressions, //! `Problem::variant()` for problem variants, and `Problem::NAME` for problem names. +use crate::expr::Expr; use crate::rules::registry::ReductionOverhead; use crate::rules::ReductionGraph; use serde::Serialize; @@ -26,18 +27,12 @@ pub struct ProblemSide { pub instance: serde_json::Value, } -/// A monomial in JSON: coefficient × Π(variable^exponent). -#[derive(Serialize, Clone, Debug)] -pub struct MonomialJson { - pub coefficient: f64, - pub variables: Vec<(String, u8)>, -} - -/// One output field mapped to a polynomial. +/// One output field mapped to an expression. #[derive(Serialize, Clone, Debug)] pub struct OverheadEntry { pub field: String, - pub polynomial: Vec, + pub expr: Expr, + pub formula: String, } /// Top-level reduction structure (written to `.json`). 
@@ -66,20 +61,10 @@ pub fn overhead_to_json(overhead: &ReductionOverhead) -> Vec { overhead .output_size .iter() - .map(|(field, poly)| OverheadEntry { + .map(|(field, expr)| OverheadEntry { field: field.to_string(), - polynomial: poly - .terms - .iter() - .map(|m| MonomialJson { - coefficient: m.coefficient, - variables: m - .variables - .iter() - .map(|(name, exp)| (name.to_string(), *exp)) - .collect(), - }) - .collect(), + formula: expr.to_string(), + expr: expr.clone(), }) .collect() } diff --git a/src/expr.rs b/src/expr.rs index 5959d6e3e..156e63af0 100644 --- a/src/expr.rs +++ b/src/expr.rs @@ -41,6 +41,11 @@ impl Expr { Expr::Pow(Box::new(base), Box::new(exp)) } + /// Multiply expression by a scalar constant. + pub fn scale(self, c: f64) -> Self { + Expr::mul(Expr::Const(c), self) + } + /// Evaluate the expression given concrete variable values. pub fn eval(&self, vars: &ProblemSize) -> f64 { match self { @@ -153,6 +158,48 @@ impl fmt::Display for Expr { } } +impl std::ops::Add for Expr { + type Output = Self; + + fn add(self, other: Self) -> Self { + Expr::Add(Box::new(self), Box::new(other)) + } +} + +impl From for Expr { + fn from(poly: crate::polynomial::Polynomial) -> Self { + let terms: Vec = poly + .terms + .iter() + .map(|mono| { + // Build monomial: coefficient * Π(var^exp) + let mut expr = Expr::Const(mono.coefficient); + for &(name, exp) in &mono.variables { + let var_expr = if exp == 1 { + Expr::Var(name) + } else { + Expr::pow(Expr::Var(name), Expr::Const(exp as f64)) + }; + expr = Expr::mul(expr, var_expr); + } + // Simplify `1.0 * x` to just `x` for single-variable monomials + if let Expr::Mul(ref a, ref b) = expr { + if matches!(a.as_ref(), Expr::Const(c) if (*c - 1.0).abs() < 1e-15) { + return b.as_ref().clone(); + } + } + expr + }) + .collect(); + + if terms.is_empty() { + return Expr::Const(0.0); + } + + terms.into_iter().reduce(Expr::add).unwrap() + } +} + #[cfg(test)] #[path = "unit_tests/expr.rs"] mod tests; diff --git 
a/src/polynomial.rs b/src/polynomial.rs index f17e30876..697f3f0e9 100644 --- a/src/polynomial.rs +++ b/src/polynomial.rs @@ -302,36 +302,38 @@ impl Add for Polynomial { } } -/// Convenience macro for building polynomials. +/// Convenience macro for building overhead expressions. +/// +/// Produces `Expr` values (via `From` conversion). #[macro_export] macro_rules! poly { // Single variable: poly!(n) ($name:ident) => { - $crate::polynomial::Polynomial::var(stringify!($name)) + $crate::expr::Expr::from($crate::polynomial::Polynomial::var(stringify!($name))) }; // Variable with exponent: poly!(n^2) ($name:ident ^ $exp:literal) => { - $crate::polynomial::Polynomial::var_pow(stringify!($name), $exp) + $crate::expr::Expr::from($crate::polynomial::Polynomial::var_pow(stringify!($name), $exp)) }; // Constant: poly!(5) ($c:literal) => { - $crate::polynomial::Polynomial::constant($c as f64) + $crate::expr::Expr::from($crate::polynomial::Polynomial::constant($c as f64)) }; // Scaled variable: poly!(3 * n) ($c:literal * $name:ident) => { - $crate::polynomial::Polynomial::var(stringify!($name)).scale($c as f64) + $crate::expr::Expr::from($crate::polynomial::Polynomial::var(stringify!($name)).scale($c as f64)) }; // Scaled variable with exponent: poly!(9 * n^2) ($c:literal * $name:ident ^ $exp:literal) => { - $crate::polynomial::Polynomial::var_pow(stringify!($name), $exp).scale($c as f64) + $crate::expr::Expr::from($crate::polynomial::Polynomial::var_pow(stringify!($name), $exp).scale($c as f64)) }; // Product of two variables: poly!(a * b) ($a:ident * $b:ident) => { - $crate::polynomial::Polynomial::var_product(stringify!($a), stringify!($b)) + $crate::expr::Expr::from($crate::polynomial::Polynomial::var_product(stringify!($a), stringify!($b))) }; // Scaled product of two variables: poly!(3 * a * b) ($c:literal * $a:ident * $b:ident) => { - $crate::polynomial::Polynomial::var_product(stringify!($a), stringify!($b)).scale($c as f64) + 
$crate::expr::Expr::from($crate::polynomial::Polynomial::var_product(stringify!($a), stringify!($b)).scale($c as f64)) }; } diff --git a/src/rules/factoring_ilp.rs b/src/rules/factoring_ilp.rs index 491c06278..3d3514175 100644 --- a/src/rules/factoring_ilp.rs +++ b/src/rules/factoring_ilp.rs @@ -94,7 +94,7 @@ impl ReductionResult for ReductionFactoringToILP { } #[reduction(overhead = { - ReductionOverhead::new(vec![ + ReductionOverhead::from_polynomials(vec![ // num_vars = m + n + m*n + num_carries where num_carries = max(m+n, target_bits) // For feasible instances, target_bits <= m+n, so this is 2(m+n) + m*n ("num_vars", Polynomial { diff --git a/src/rules/registry.rs b/src/rules/registry.rs index 95790e95c..29dbfe3c5 100644 --- a/src/rules/registry.rs +++ b/src/rules/registry.rs @@ -1,6 +1,6 @@ //! Automatic reduction registration via inventory. -use crate::polynomial::Polynomial; +use crate::expr::Expr; use crate::rules::traits::DynReductionResult; use crate::types::ProblemSize; use std::any::Any; @@ -9,65 +9,76 @@ use std::collections::HashSet; /// Overhead specification for a reduction. #[derive(Clone, Debug, Default, serde::Serialize)] pub struct ReductionOverhead { - /// Output size as polynomials of input size variables. - /// Each entry is (output_field_name, polynomial). - pub output_size: Vec<(&'static str, Polynomial)>, + /// Output size as expressions of input size variables. + /// Each entry is (output_field_name, expression). + pub output_size: Vec<(&'static str, Expr)>, } impl ReductionOverhead { - pub fn new(output_size: Vec<(&'static str, Polynomial)>) -> Self { + pub fn new(output_size: Vec<(&'static str, Expr)>) -> Self { Self { output_size } } + /// Construct from legacy Polynomial-based overhead. 
+ pub fn from_polynomials( + output_size: Vec<(&'static str, crate::polynomial::Polynomial)>, + ) -> Self { + Self { + output_size: output_size + .into_iter() + .map(|(name, poly)| (name, Expr::from(poly))) + .collect(), + } + } + /// Identity overhead: each output field equals the same-named input field. /// Used by variant cast reductions where problem size doesn't change. pub fn identity(fields: &[&'static str]) -> Self { Self { - output_size: fields.iter().map(|&f| (f, Polynomial::var(f))).collect(), + output_size: fields.iter().map(|&f| (f, Expr::Var(f))).collect(), } } /// Evaluate output size given input size. /// - /// Uses `round()` for the f64 to usize conversion because polynomial coefficients - /// are typically integers (1, 2, 3, 7, 21, etc.) and any fractional results come - /// from floating-point arithmetic imprecision, not intentional fractions. - /// For problem sizes, rounding to nearest integer is the most intuitive behavior. + /// Uses `round()` for the f64 to usize conversion because expression values + /// are typically integers and any fractional results come from floating-point + /// arithmetic imprecision, not intentional fractions. pub fn evaluate_output_size(&self, input: &ProblemSize) -> ProblemSize { let fields: Vec<_> = self .output_size .iter() - .map(|(name, poly)| (*name, poly.evaluate(input).round() as usize)) + .map(|(name, expr)| (*name, expr.eval(input).round() as usize)) .collect(); ProblemSize::new(fields) } - /// Collect all input variable names referenced by the overhead polynomials. + /// Collect all input variable names referenced by the overhead expressions. pub fn input_variable_names(&self) -> HashSet<&'static str> { self.output_size .iter() - .flat_map(|(_, poly)| poly.variable_names()) + .flat_map(|(_, expr)| expr.variables()) .collect() } /// Compose two overheads: substitute self's output into `next`'s input. 
/// - /// Returns a new overhead whose polynomials map from self's input variables + /// Returns a new overhead whose expressions map from self's input variables /// directly to `next`'s output variables. pub fn compose(&self, next: &ReductionOverhead) -> ReductionOverhead { use std::collections::HashMap; - // Build substitution map: output field name → output polynomial - let mapping: HashMap<&str, &Polynomial> = self + // Build substitution map: output field name → output expression + let mapping: HashMap<&str, &Expr> = self .output_size .iter() - .map(|(name, poly)| (*name, poly)) + .map(|(name, expr)| (*name, expr)) .collect(); let composed = next .output_size .iter() - .map(|(name, poly)| (*name, poly.substitute(&mapping))) + .map(|(name, expr)| (*name, expr.substitute(&mapping))) .collect(); ReductionOverhead { @@ -75,12 +86,12 @@ impl ReductionOverhead { } } - /// Get the polynomial for a named output field. - pub fn get(&self, name: &str) -> Option<&Polynomial> { + /// Get the expression for a named output field. 
+ pub fn get(&self, name: &str) -> Option<&Expr> { self.output_size .iter() .find(|(n, _)| *n == name) - .map(|(_, p)| p) + .map(|(_, e)| e) } } diff --git a/src/rules/travelingsalesman_ilp.rs b/src/rules/travelingsalesman_ilp.rs index af2b37e29..df529fd33 100644 --- a/src/rules/travelingsalesman_ilp.rs +++ b/src/rules/travelingsalesman_ilp.rs @@ -74,7 +74,7 @@ impl ReductionResult for ReductionTSPToILP { #[reduction( overhead = { - ReductionOverhead::new(vec![ + ReductionOverhead::from_polynomials(vec![ // num_vars = n^2 + 2*m*n ("num_vars", Polynomial::var_pow("num_vertices", 2) + Polynomial { terms: vec![Monomial { diff --git a/src/unit_tests/export.rs b/src/unit_tests/export.rs index 25c43c638..6125b5569 100644 --- a/src/unit_tests/export.rs +++ b/src/unit_tests/export.rs @@ -1,5 +1,5 @@ use super::*; -use crate::polynomial::Polynomial; +use crate::expr::Expr; use crate::rules::registry::ReductionOverhead; #[test] @@ -13,58 +13,39 @@ fn test_overhead_to_json_empty() { fn test_overhead_to_json_single_field() { let overhead = ReductionOverhead::new(vec![( "num_vertices", - Polynomial::var("n") + Polynomial::var("m"), + Expr::add(Expr::Var("n"), Expr::Var("m")), )]); let entries = overhead_to_json(&overhead); assert_eq!(entries.len(), 1); assert_eq!(entries[0].field, "num_vertices"); - assert_eq!(entries[0].polynomial.len(), 2); - - // Check first monomial: 1*n - assert_eq!(entries[0].polynomial[0].coefficient, 1.0); - assert_eq!( - entries[0].polynomial[0].variables, - vec![("n".to_string(), 1)] - ); - - // Check second monomial: 1*m - assert_eq!(entries[0].polynomial[1].coefficient, 1.0); - assert_eq!( - entries[0].polynomial[1].variables, - vec![("m".to_string(), 1)] - ); + assert_eq!(entries[0].formula, "n + m"); } #[test] -fn test_overhead_to_json_constant_monomial() { - let overhead = ReductionOverhead::new(vec![("num_vars", Polynomial::constant(42.0))]); +fn test_overhead_to_json_constant() { + let overhead = ReductionOverhead::new(vec![("num_vars", 
Expr::Const(42.0))]); let entries = overhead_to_json(&overhead); assert_eq!(entries.len(), 1); assert_eq!(entries[0].field, "num_vars"); - assert_eq!(entries[0].polynomial.len(), 1); - assert_eq!(entries[0].polynomial[0].coefficient, 42.0); - assert!(entries[0].polynomial[0].variables.is_empty()); + assert_eq!(entries[0].formula, "42"); } #[test] fn test_overhead_to_json_scaled_power() { - let overhead = - ReductionOverhead::new(vec![("num_edges", Polynomial::var_pow("n", 2).scale(3.0))]); + let overhead = ReductionOverhead::new(vec![( + "num_edges", + Expr::mul(Expr::Const(3.0), Expr::pow(Expr::Var("n"), Expr::Const(2.0))), + )]); let entries = overhead_to_json(&overhead); assert_eq!(entries.len(), 1); - assert_eq!(entries[0].polynomial.len(), 1); - assert_eq!(entries[0].polynomial[0].coefficient, 3.0); - assert_eq!( - entries[0].polynomial[0].variables, - vec![("n".to_string(), 2)] - ); + assert_eq!(entries[0].formula, "3 * n^2"); } #[test] fn test_overhead_to_json_multiple_fields() { let overhead = ReductionOverhead::new(vec![ - ("num_vertices", Polynomial::var("n")), - ("num_edges", Polynomial::var_pow("n", 2)), + ("num_vertices", Expr::Var("n")), + ("num_edges", Expr::pow(Expr::Var("n"), Expr::Const(2.0))), ]); let entries = overhead_to_json(&overhead); assert_eq!(entries.len(), 2); @@ -190,15 +171,13 @@ fn test_reduction_data_serialization() { }, overhead: vec![OverheadEntry { field: "num_vertices".to_string(), - polynomial: vec![MonomialJson { - coefficient: 1.0, - variables: vec![("n".to_string(), 1)], - }], + expr: Expr::Var("n"), + formula: "n".to_string(), }], }; let json = serde_json::to_value(&data).unwrap(); assert_eq!(json["overhead"][0]["field"], "num_vertices"); - assert_eq!(json["overhead"][0]["polynomial"][0]["coefficient"], 1.0); + assert_eq!(json["overhead"][0]["formula"], "n"); } #[test] diff --git a/src/unit_tests/polynomial.rs b/src/unit_tests/polynomial.rs index 04dfd7f5f..be81b0fc9 100644 --- a/src/unit_tests/polynomial.rs +++ 
b/src/unit_tests/polynomial.rs @@ -43,10 +43,10 @@ fn test_polynomial_complex() { fn test_poly_macro() { let size = ProblemSize::new(vec![("n", 5), ("m", 3)]); - assert_eq!(poly!(n).evaluate(&size), 5.0); - assert_eq!(poly!(n ^ 2).evaluate(&size), 25.0); - assert_eq!(poly!(3 * n).evaluate(&size), 15.0); - assert_eq!(poly!(2 * m ^ 2).evaluate(&size), 18.0); + assert_eq!(poly!(n).eval(&size), 5.0); + assert_eq!(poly!(n ^ 2).eval(&size), 25.0); + assert_eq!(poly!(3 * n).eval(&size), 15.0); + assert_eq!(poly!(2 * m ^ 2).eval(&size), 18.0); } #[test] @@ -168,13 +168,11 @@ fn test_display_polynomial_subtraction() { #[test] fn test_poly_macro_product() { let size = ProblemSize::new(vec![("a", 3), ("b", 4)]); - assert_eq!(poly!(a * b).evaluate(&size), 12.0); - assert_eq!(format!("{}", poly!(a * b)), "a * b"); + assert_eq!(poly!(a * b).eval(&size), 12.0); } #[test] fn test_poly_macro_scaled_product() { let size = ProblemSize::new(vec![("a", 3), ("b", 4)]); - assert_eq!(poly!(5 * a * b).evaluate(&size), 60.0); - assert_eq!(format!("{}", poly!(5 * a * b)), "5 * a * b"); + assert_eq!(poly!(5 * a * b).eval(&size), 60.0); } diff --git a/src/unit_tests/reduction_graph.rs b/src/unit_tests/reduction_graph.rs index ed29b5f4c..2f013f461 100644 --- a/src/unit_tests/reduction_graph.rs +++ b/src/unit_tests/reduction_graph.rs @@ -326,39 +326,32 @@ fn test_3sat_to_mis_triangular_overhead() { let edges = graph.path_overheads(&path); assert_eq!(edges.len(), 3); + // Evaluate overheads at a test point to verify correctness + let test_size = ProblemSize::new(vec![ + ("num_vars", 3), + ("num_clauses", 2), + ("num_literals", 6), + ("num_vertices", 10), + ("num_edges", 15), + ]); + // Edge 0: K3SAT → SAT (identity) - assert_eq!( - edges[0].get("num_vars").unwrap().normalized(), - poly!(num_vars) - ); - assert_eq!( - edges[0].get("num_clauses").unwrap().normalized(), - poly!(num_clauses) - ); - assert_eq!( - edges[0].get("num_literals").unwrap().normalized(), - poly!(num_literals) - ); + 
assert_eq!(edges[0].get("num_vars").unwrap().eval(&test_size), 3.0); + assert_eq!(edges[0].get("num_clauses").unwrap().eval(&test_size), 2.0); + assert_eq!(edges[0].get("num_literals").unwrap().eval(&test_size), 6.0); // Edge 1: SAT → MIS{SimpleGraph,i32} - assert_eq!( - edges[1].get("num_vertices").unwrap().normalized(), - poly!(num_literals) - ); - assert_eq!( - edges[1].get("num_edges").unwrap().normalized(), - poly!(num_literals ^ 2) - ); + // num_vertices = num_literals, num_edges = num_literals^2 + assert_eq!(edges[1].get("num_vertices").unwrap().eval(&test_size), 6.0); + assert_eq!(edges[1].get("num_edges").unwrap().eval(&test_size), 36.0); // Edge 2: MIS{SimpleGraph,i32} → MIS{TriangularSubgraph,i32} + // num_vertices = num_vertices^2, num_edges = num_vertices^2 assert_eq!( - edges[2].get("num_vertices").unwrap().normalized(), - poly!(num_vertices ^ 2) - ); - assert_eq!( - edges[2].get("num_edges").unwrap().normalized(), - poly!(num_vertices ^ 2) + edges[2].get("num_vertices").unwrap().eval(&test_size), + 100.0 ); + assert_eq!(edges[2].get("num_edges").unwrap().eval(&test_size), 100.0); // Compose overheads symbolically along the path. // The composed overhead maps 3-SAT input variables to final MIS{Triangular} output. 
@@ -369,14 +362,12 @@ fn test_3sat_to_mis_triangular_overhead() { // // Composed: num_vertices = L², num_edges = L² let composed = graph.compose_path_overhead(&path); + // Evaluate composed at input: L=6, so L^2=36 assert_eq!( - composed.get("num_vertices").unwrap().normalized(), - poly!(num_literals ^ 2) - ); - assert_eq!( - composed.get("num_edges").unwrap().normalized(), - poly!(num_literals ^ 2) + composed.get("num_vertices").unwrap().eval(&test_size), + 36.0 ); + assert_eq!(composed.get("num_edges").unwrap().eval(&test_size), 36.0); } // ---- Overhead validation ---- diff --git a/src/unit_tests/rules/cost.rs b/src/unit_tests/rules/cost.rs index 1be5f44b1..2924cd07d 100644 --- a/src/unit_tests/rules/cost.rs +++ b/src/unit_tests/rules/cost.rs @@ -1,10 +1,10 @@ use super::*; -use crate::polynomial::Polynomial; +use crate::expr::Expr; fn test_overhead() -> ReductionOverhead { ReductionOverhead::new(vec![ - ("n", Polynomial::var("n").scale(2.0)), - ("m", Polynomial::var("m")), + ("n", Expr::mul(Expr::Const(2.0), Expr::Var("n"))), + ("m", Expr::Var("m")), ]) } From ef76c62efbab4dbef19396fc9840ca0254a3d980 Mon Sep 17 00:00:00 2001 From: GiggleLiu Date: Thu, 26 Feb 2026 06:04:38 +0800 Subject: [PATCH 07/15] feat: add inherent size getters to all problem types Add num_vertices(), num_edges(), num_vars(), num_constraints(), num_literals(), universe_size(), num_interactions(), num_variables(), num_bits_first(), num_bits_second(), num_sequence(), rank(), m(), n() getters matching problem_size_names() fields. 
Co-Authored-By: Claude Opus 4.6 --- src/models/graph/kcoloring.rs | 12 ++++++++++++ src/models/graph/max_cut.rs | 12 ++++++++++++ src/models/graph/maximal_is.rs | 12 ++++++++++++ src/models/graph/maximum_clique.rs | 12 ++++++++++++ src/models/graph/maximum_independent_set.rs | 12 ++++++++++++ src/models/graph/maximum_matching.rs | 12 ++++++++++++ src/models/graph/minimum_dominating_set.rs | 12 ++++++++++++ src/models/graph/minimum_vertex_cover.rs | 12 ++++++++++++ src/models/graph/traveling_salesman.rs | 12 ++++++++++++ src/models/optimization/ilp.rs | 10 ++++++++++ src/models/optimization/spin_glass.rs | 5 +++++ src/models/satisfiability/ksat.rs | 5 +++++ src/models/set/maximum_set_packing.rs | 9 +++++++++ src/models/specialized/biclique_cover.rs | 5 +++++ src/models/specialized/bmf.rs | 10 ++++++++++ src/models/specialized/circuit.rs | 5 +++++ src/models/specialized/factoring.rs | 10 ++++++++++ src/models/specialized/paintshop.rs | 5 +++++ 18 files changed, 172 insertions(+) diff --git a/src/models/graph/kcoloring.rs b/src/models/graph/kcoloring.rs index 3b02fedd8..4d10276bd 100644 --- a/src/models/graph/kcoloring.rs +++ b/src/models/graph/kcoloring.rs @@ -122,6 +122,18 @@ impl KColoring { } } +impl KColoring { + /// Get the number of vertices in the underlying graph. + pub fn num_vertices(&self) -> usize { + self.graph().num_vertices() + } + + /// Get the number of edges in the underlying graph. + pub fn num_edges(&self) -> usize { + self.graph().num_edges() + } +} + impl Problem for KColoring where G: Graph + VariantParam, diff --git a/src/models/graph/max_cut.rs b/src/models/graph/max_cut.rs index f829e524a..c0e1a7f95 100644 --- a/src/models/graph/max_cut.rs +++ b/src/models/graph/max_cut.rs @@ -147,6 +147,18 @@ impl MaxCut { } } +impl MaxCut { + /// Get the number of vertices in the underlying graph. + pub fn num_vertices(&self) -> usize { + self.graph().num_vertices() + } + + /// Get the number of edges in the underlying graph. 
+ pub fn num_edges(&self) -> usize { + self.graph().num_edges() + } +} + impl Problem for MaxCut where G: Graph + crate::variant::VariantParam, diff --git a/src/models/graph/maximal_is.rs b/src/models/graph/maximal_is.rs index 193938cad..d05feeeaa 100644 --- a/src/models/graph/maximal_is.rs +++ b/src/models/graph/maximal_is.rs @@ -129,6 +129,18 @@ impl MaximalIS { } } +impl MaximalIS { + /// Get the number of vertices in the underlying graph. + pub fn num_vertices(&self) -> usize { + self.graph().num_vertices() + } + + /// Get the number of edges in the underlying graph. + pub fn num_edges(&self) -> usize { + self.graph().num_edges() + } +} + impl Problem for MaximalIS where G: Graph + crate::variant::VariantParam, diff --git a/src/models/graph/maximum_clique.rs b/src/models/graph/maximum_clique.rs index 5161e9061..316304fc0 100644 --- a/src/models/graph/maximum_clique.rs +++ b/src/models/graph/maximum_clique.rs @@ -95,6 +95,18 @@ impl MaximumClique { } } +impl MaximumClique { + /// Get the number of vertices in the underlying graph. + pub fn num_vertices(&self) -> usize { + self.graph().num_vertices() + } + + /// Get the number of edges in the underlying graph. + pub fn num_edges(&self) -> usize { + self.graph().num_edges() + } +} + impl Problem for MaximumClique where G: Graph + crate::variant::VariantParam, diff --git a/src/models/graph/maximum_independent_set.rs b/src/models/graph/maximum_independent_set.rs index a919c5863..d27fd7d7e 100644 --- a/src/models/graph/maximum_independent_set.rs +++ b/src/models/graph/maximum_independent_set.rs @@ -95,6 +95,18 @@ impl MaximumIndependentSet { } } +impl MaximumIndependentSet { + /// Get the number of vertices in the underlying graph. + pub fn num_vertices(&self) -> usize { + self.graph().num_vertices() + } + + /// Get the number of edges in the underlying graph. 
+ pub fn num_edges(&self) -> usize { + self.graph().num_edges() + } +} + impl Problem for MaximumIndependentSet where G: Graph + crate::variant::VariantParam, diff --git a/src/models/graph/maximum_matching.rs b/src/models/graph/maximum_matching.rs index 0320e5b97..da1062ffc 100644 --- a/src/models/graph/maximum_matching.rs +++ b/src/models/graph/maximum_matching.rs @@ -163,6 +163,18 @@ impl MaximumMatching { } } +impl MaximumMatching { + /// Get the number of vertices in the underlying graph. + pub fn num_vertices(&self) -> usize { + self.graph().num_vertices() + } + + /// Get the number of edges in the underlying graph. + pub fn num_edges(&self) -> usize { + self.graph().num_edges() + } +} + impl Problem for MaximumMatching where G: Graph + crate::variant::VariantParam, diff --git a/src/models/graph/minimum_dominating_set.rs b/src/models/graph/minimum_dominating_set.rs index e7b1daec4..4fca59bcc 100644 --- a/src/models/graph/minimum_dominating_set.rs +++ b/src/models/graph/minimum_dominating_set.rs @@ -115,6 +115,18 @@ impl MinimumDominatingSet { } } +impl MinimumDominatingSet { + /// Get the number of vertices in the underlying graph. + pub fn num_vertices(&self) -> usize { + self.graph().num_vertices() + } + + /// Get the number of edges in the underlying graph. + pub fn num_edges(&self) -> usize { + self.graph().num_edges() + } +} + impl Problem for MinimumDominatingSet where G: Graph + crate::variant::VariantParam, diff --git a/src/models/graph/minimum_vertex_cover.rs b/src/models/graph/minimum_vertex_cover.rs index 6fd5f0412..24a4408bd 100644 --- a/src/models/graph/minimum_vertex_cover.rs +++ b/src/models/graph/minimum_vertex_cover.rs @@ -90,6 +90,18 @@ impl MinimumVertexCover { } } +impl MinimumVertexCover { + /// Get the number of vertices in the underlying graph. + pub fn num_vertices(&self) -> usize { + self.graph().num_vertices() + } + + /// Get the number of edges in the underlying graph. 
+ pub fn num_edges(&self) -> usize { + self.graph().num_edges() + } +} + impl Problem for MinimumVertexCover where G: Graph + crate::variant::VariantParam, diff --git a/src/models/graph/traveling_salesman.rs b/src/models/graph/traveling_salesman.rs index e99d12054..51f0b17ce 100644 --- a/src/models/graph/traveling_salesman.rs +++ b/src/models/graph/traveling_salesman.rs @@ -126,6 +126,18 @@ impl TravelingSalesman { } } +impl TravelingSalesman { + /// Get the number of vertices in the underlying graph. + pub fn num_vertices(&self) -> usize { + self.graph().num_vertices() + } + + /// Get the number of edges in the underlying graph. + pub fn num_edges(&self) -> usize { + self.graph().num_edges() + } +} + impl Problem for TravelingSalesman where G: Graph + crate::variant::VariantParam, diff --git a/src/models/optimization/ilp.rs b/src/models/optimization/ilp.rs index 902617417..aad32ec24 100644 --- a/src/models/optimization/ilp.rs +++ b/src/models/optimization/ilp.rs @@ -325,6 +325,16 @@ impl ILP { pub fn num_variables(&self) -> usize { self.num_vars } + + /// Get the number of variables (alias matching `problem_size_names`). + pub fn num_vars(&self) -> usize { + self.num_variables() + } + + /// Get the number of constraints. + pub fn num_constraints(&self) -> usize { + self.constraints.len() + } } impl Problem for ILP { diff --git a/src/models/optimization/spin_glass.rs b/src/models/optimization/spin_glass.rs index c63c92ffd..b42f77f32 100644 --- a/src/models/optimization/spin_glass.rs +++ b/src/models/optimization/spin_glass.rs @@ -140,6 +140,11 @@ impl SpinGlass { self.graph.num_vertices() } + /// Get the number of interactions (edges in the interaction graph). + pub fn num_interactions(&self) -> usize { + self.graph.num_edges() + } + /// Get the interactions as ((i, j), weight) pairs. /// /// Reconstructs from graph.edges() and couplings. 
diff --git a/src/models/satisfiability/ksat.rs b/src/models/satisfiability/ksat.rs
index a86137107..e7c9ac460 100644
--- a/src/models/satisfiability/ksat.rs
+++ b/src/models/satisfiability/ksat.rs
@@ -139,6 +139,11 @@ impl KSatisfiability {
         self.clauses.get(index)
     }
 
+    /// Get the total number of literals across all clauses.
+    pub fn num_literals(&self) -> usize {
+        self.clauses().iter().map(|c| c.len()).sum()
+    }
+
     /// Count satisfied clauses for an assignment.
     pub fn count_satisfied(&self, assignment: &[bool]) -> usize {
         self.clauses
diff --git a/src/models/set/maximum_set_packing.rs b/src/models/set/maximum_set_packing.rs
index c4fb0e1bf..658d29c68 100644
--- a/src/models/set/maximum_set_packing.rs
+++ b/src/models/set/maximum_set_packing.rs
@@ -113,6 +113,15 @@ impl MaximumSetPacking {
         pairs
     }
 
+    /// Get the universe size (one more than the maximum element across all sets).
+    pub fn universe_size(&self) -> usize {
+        self.sets()
+            .iter()
+            .flat_map(|s| s.iter())
+            .max()
+            .map_or(0, |&m| m + 1)
+    }
+
     /// Get a reference to the weights vector.
     pub fn weights_ref(&self) -> &Vec {
         &self.weights
diff --git a/src/models/specialized/biclique_cover.rs b/src/models/specialized/biclique_cover.rs
index f7397950b..a69fe1bef 100644
--- a/src/models/specialized/biclique_cover.rs
+++ b/src/models/specialized/biclique_cover.rs
@@ -119,6 +119,11 @@ impl BicliqueCover {
         self.k
     }
 
+    /// Get the rank (alias for `k()`).
+    pub fn rank(&self) -> usize {
+        self.k()
+    }
+
     /// Convert a configuration to biclique memberships.
     ///
     /// Config is a flat array where each vertex has k binary variables
diff --git a/src/models/specialized/bmf.rs b/src/models/specialized/bmf.rs
index 69003086d..f4f1a2726 100644
--- a/src/models/specialized/bmf.rs
+++ b/src/models/specialized/bmf.rs
@@ -98,6 +98,16 @@ impl BMF {
         self.k
     }
 
+    /// Get the number of rows (alias for `rows()`).
+    pub fn m(&self) -> usize {
+        self.rows()
+    }
+
+    /// Get the number of columns (alias for `cols()`).
+    pub fn n(&self) -> usize {
+        self.cols()
+    }
+
     /// Get the target matrix.
     pub fn matrix(&self) -> &[Vec] {
         &self.matrix
diff --git a/src/models/specialized/circuit.rs b/src/models/specialized/circuit.rs
index a33346c5d..8412fb09a 100644
--- a/src/models/specialized/circuit.rs
+++ b/src/models/specialized/circuit.rs
@@ -238,6 +238,11 @@ impl CircuitSAT {
         &self.variables
     }
 
+    /// Get the number of variables in the circuit.
+    pub fn num_variables(&self) -> usize {
+        self.variables.len()
+    }
+
     /// Check if a configuration is a valid satisfying assignment.
     pub fn is_valid_solution(&self, config: &[usize]) -> bool {
         self.count_satisfied(config) == self.circuit.num_assignments()
diff --git a/src/models/specialized/factoring.rs b/src/models/specialized/factoring.rs
index d08b0e7d0..09223119d 100644
--- a/src/models/specialized/factoring.rs
+++ b/src/models/specialized/factoring.rs
@@ -75,6 +75,16 @@ impl Factoring {
         self.n
     }
 
+    /// Get the number of bits for the first factor (alias for `m()`).
+    pub fn num_bits_first(&self) -> usize {
+        self.m()
+    }
+
+    /// Get the number of bits for the second factor (alias for `n()`).
+    pub fn num_bits_second(&self) -> usize {
+        self.n()
+    }
+
     /// Get the target number to factor.
     pub fn target(&self) -> u64 {
         self.target
diff --git a/src/models/specialized/paintshop.rs b/src/models/specialized/paintshop.rs
index 4ee9386f2..4e5b5b008 100644
--- a/src/models/specialized/paintshop.rs
+++ b/src/models/specialized/paintshop.rs
@@ -119,6 +119,11 @@ impl PaintShop {
         self.sequence_indices.len()
     }
 
+    /// Get the sequence length (alias for `sequence_len()`).
+    pub fn num_sequence(&self) -> usize {
+        self.sequence_len()
+    }
+
     /// Get the number of unique cars.
     pub fn num_cars(&self) -> usize {
         self.num_cars

From df05b4ae898f7de3ce971a2d4196906446ee07f4 Mon Sep 17 00:00:00 2001
From: GiggleLiu
Date: Thu, 26 Feb 2026 06:37:55 +0800
Subject: [PATCH 08/15] feat: migrate all reductions to new overhead expression
 syntax

Replace poly!() macro calls with string-based overhead expressions across
33 reduction files. The new syntax uses `field = "expr"` format parsed at
compile time by the Pratt expression parser.

Co-Authored-By: Claude Opus 4.6
---
 src/rules/circuit_ilp.rs                      |  8 ++---
 src/rules/circuit_spinglass.rs                |  8 ++---
 src/rules/coloring_ilp.rs                     |  8 ++---
 src/rules/coloring_qubo.rs                    |  4 +--
 src/rules/factoring_circuit.rs                |  8 ++---
 src/rules/factoring_ilp.rs                    | 31 ++-----------------
 src/rules/ilp_qubo.rs                         |  4 +--
 src/rules/ksatisfiability_qubo.rs             |  8 ++---
 src/rules/maximumclique_ilp.rs                |  8 ++---
 src/rules/maximumindependentset_gridgraph.rs  | 14 +++------
 src/rules/maximumindependentset_ilp.rs        |  8 ++---
 ...maximumindependentset_maximumsetpacking.rs | 14 +++------
 src/rules/maximumindependentset_qubo.rs       |  4 +--
 src/rules/maximumindependentset_triangular.rs |  8 ++---
 src/rules/maximummatching_ilp.rs              |  8 ++---
 .../maximummatching_maximumsetpacking.rs      |  8 ++---
 src/rules/maximumsetpacking_ilp.rs            |  8 ++---
 src/rules/maximumsetpacking_qubo.rs           |  4 +--
 src/rules/minimumdominatingset_ilp.rs         |  8 ++---
 src/rules/minimumsetcovering_ilp.rs           |  8 ++---
 src/rules/minimumvertexcover_ilp.rs           |  8 ++---
 ...inimumvertexcover_maximumindependentset.rs | 14 +++------
 .../minimumvertexcover_minimumsetcovering.rs  |  8 ++---
 src/rules/minimumvertexcover_qubo.rs          |  4 +--
 src/rules/qubo_ilp.rs                         |  8 ++---
 src/rules/sat_circuitsat.rs                   |  8 ++---
 src/rules/sat_coloring.rs                     | 10 ++----
 src/rules/sat_ksat.rs                         | 20 +++++-------
 src/rules/sat_maximumindependentset.rs        |  8 ++---
 src/rules/sat_minimumdominatingset.rs         |  8 ++---
 src/rules/spinglass_maxcut.rs                 | 14 +++------
 src/rules/spinglass_qubo.rs                   | 10 ++----
 src/rules/travelingsalesman_ilp.rs            | 30 ++----------------
 33 files changed, 74 insertions(+), 255 deletions(-)

diff --git a/src/rules/circuit_ilp.rs b/src/rules/circuit_ilp.rs
index 04e89f255..67e85ea41 100644
--- a/src/rules/circuit_ilp.rs
+++ b/src/rules/circuit_ilp.rs
@@ -16,9 +16,7 @@
 use crate::models::optimization::{LinearConstraint, ObjectiveSense, VarBounds, ILP};
 use crate::models::specialized::{BooleanExpr, BooleanOp, CircuitSAT};
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use std::collections::HashMap;
@@ -174,10 +172,8 @@ impl ILPBuilder {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vars", poly!(num_variables) + poly!(num_assignments)),
-            ("num_constraints", poly!(num_variables) + poly!(num_assignments)),
-        ])
+        num_vars = "num_variables + num_assignments",
+        num_constraints = "num_variables + num_assignments",
     }
 )]
 impl ReduceTo for CircuitSAT {
diff --git a/src/rules/circuit_spinglass.rs b/src/rules/circuit_spinglass.rs
index 30a019bfd..e002fd268 100644
--- a/src/rules/circuit_spinglass.rs
+++ b/src/rules/circuit_spinglass.rs
@@ -8,9 +8,7 @@
 use crate::models::optimization::SpinGlass;
 use crate::models::specialized::{Assignment, BooleanExpr, BooleanOp, CircuitSAT};
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::SimpleGraph;
 use num_traits::Zero;
@@ -415,10 +413,8 @@ where
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_spins", poly!(num_assignments)),
-            ("num_interactions", poly!(num_assignments)),
-        ])
+        num_spins = "num_assignments",
+        num_interactions = "num_assignments",
     }
 )]
 impl ReduceTo> for CircuitSAT {
diff --git a/src/rules/coloring_ilp.rs b/src/rules/coloring_ilp.rs
index f9e982da0..b80dad209 100644
--- a/src/rules/coloring_ilp.rs
+++ b/src/rules/coloring_ilp.rs
@@ -9,9 +9,7 @@
 use crate::models::graph::KColoring;
 use crate::models::optimization::{LinearConstraint, ObjectiveSense, VarBounds, ILP};
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::{Graph, SimpleGraph};
 use crate::variant::{KValue, K1, K2, K3, K4, KN};
@@ -124,10 +122,8 @@ fn reduce_kcoloring_to_ilp(
 // Register only the KN variant in the reduction graph
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vars", poly!(num_vertices ^ 2)),
-            ("num_constraints", poly!(num_vertices) + poly!(num_vertices * num_edges)),
-        ])
+        num_vars = "num_vertices^2",
+        num_constraints = "num_vertices + num_vertices * num_edges",
     }
 )]
 impl ReduceTo for KColoring {
diff --git a/src/rules/coloring_qubo.rs b/src/rules/coloring_qubo.rs
index 4a2f954de..5f1573c60 100644
--- a/src/rules/coloring_qubo.rs
+++ b/src/rules/coloring_qubo.rs
@@ -10,9 +10,7 @@
 use crate::models::graph::KColoring;
 use crate::models::optimization::QUBO;
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::{Graph, SimpleGraph};
 use crate::variant::{KValue, K2, K3, KN};
@@ -107,7 +105,7 @@ fn reduce_kcoloring_to_qubo(
 // Register only the KN variant in the reduction graph
 #[reduction(
-    overhead = { ReductionOverhead::new(vec![("num_vars", poly!(num_vertices ^ 2))]) }
+    overhead = { num_vars = "num_vertices^2" }
 )]
 impl ReduceTo> for KColoring {
     type Result = ReductionKColoringToQUBO;
diff --git a/src/rules/factoring_circuit.rs b/src/rules/factoring_circuit.rs
index e43a8853c..4dede1826 100644
--- a/src/rules/factoring_circuit.rs
+++ b/src/rules/factoring_circuit.rs
@@ -8,9 +8,7 @@
 //! carry propagation, building up partial products row by row.
 use crate::models::specialized::{Assignment, BooleanExpr, Circuit, CircuitSAT, Factoring};
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 
 /// Result of reducing Factoring to CircuitSAT.
 ///
@@ -177,10 +175,8 @@ fn build_multiplier_cell(
 }
 
 #[reduction(overhead = {
-    ReductionOverhead::new(vec![
-        ("num_variables", poly!(num_bits_first * num_bits_second)),
-        ("num_assignments", poly!(num_bits_first * num_bits_second)),
-    ])
+    num_variables = "num_bits_first * num_bits_second",
+    num_assignments = "num_bits_first * num_bits_second",
 })]
 impl ReduceTo for Factoring {
     type Result = ReductionFactoringToCircuit;
diff --git a/src/rules/factoring_ilp.rs b/src/rules/factoring_ilp.rs
index 3d3514175..3cb0045db 100644
--- a/src/rules/factoring_ilp.rs
+++ b/src/rules/factoring_ilp.rs
@@ -19,9 +19,7 @@
 use crate::models::optimization::{LinearConstraint, ObjectiveSense, VarBounds, ILP};
 use crate::models::specialized::Factoring;
-use crate::polynomial::{Monomial, Polynomial};
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use std::cmp::min;
@@ -94,33 +92,8 @@ impl ReductionResult for ReductionFactoringToILP {
 }
 
 #[reduction(overhead = {
-    ReductionOverhead::from_polynomials(vec![
-        // num_vars = m + n + m*n + num_carries where num_carries = max(m+n, target_bits)
-        // For feasible instances, target_bits <= m+n, so this is 2(m+n) + m*n
-        ("num_vars", Polynomial {
-            terms: vec![
-                Monomial::var("num_bits_first").scale(2.0),
-                Monomial::var("num_bits_second").scale(2.0),
-                Monomial {
-                    coefficient: 1.0,
-                    variables: vec![("num_bits_first", 1), ("num_bits_second", 1)],
-                },
-            ]
-        }),
-        // num_constraints = 3*m*n + num_bit_positions + 1
-        // For feasible instances (target_bits <= m+n), this is 3*m*n + (m+n) + 1
-        ("num_constraints", Polynomial {
-            terms: vec![
-                Monomial {
-                    coefficient: 3.0,
-                    variables: vec![("num_bits_first", 1), ("num_bits_second", 1)],
-                },
-                Monomial::var("num_bits_first"),
-                Monomial::var("num_bits_second"),
-                Monomial::constant(1.0),
-            ]
-        }),
-    ])
+    num_vars = "2 * num_bits_first + 2 * num_bits_second + num_bits_first * num_bits_second",
+    num_constraints = "3 * num_bits_first * num_bits_second + num_bits_first + num_bits_second + 1",
 })]
 impl ReduceTo for Factoring {
     type Result = ReductionFactoringToILP;
diff --git a/src/rules/ilp_qubo.rs b/src/rules/ilp_qubo.rs
index dbbb5c00f..6f2c7d6b8 100644
--- a/src/rules/ilp_qubo.rs
+++ b/src/rules/ilp_qubo.rs
@@ -10,9 +10,7 @@
 //! Slack variables: ceil(log2(slack_range)) bits per inequality constraint.
 
 use crate::models::optimization::{Comparison, ObjectiveSense, ILP, QUBO};
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 
 /// Result of reducing binary ILP to QUBO.
@@ -37,7 +35,7 @@ impl ReductionResult for ReductionILPToQUBO {
 }
 
 #[reduction(
-    overhead = { ReductionOverhead::new(vec![("num_vars", poly!(num_vars))]) }
+    overhead = { num_vars = "num_vars" }
 )]
 impl ReduceTo> for ILP {
     type Result = ReductionILPToQUBO;
diff --git a/src/rules/ksatisfiability_qubo.rs b/src/rules/ksatisfiability_qubo.rs
index 2cfb4f980..cbf60226e 100644
--- a/src/rules/ksatisfiability_qubo.rs
+++ b/src/rules/ksatisfiability_qubo.rs
@@ -14,9 +14,7 @@
 use crate::models::optimization::QUBO;
 use crate::models::satisfiability::KSatisfiability;
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::variant::{K2, K3};
 
 /// Result of reducing KSatisfiability to QUBO.
@@ -293,7 +291,7 @@ fn build_qubo_matrix(
 }
 
 #[reduction(
-    overhead = { ReductionOverhead::new(vec![("num_vars", poly!(num_vars))]) }
+    overhead = { num_vars = "num_vars" }
 )]
 impl ReduceTo> for KSatisfiability {
     type Result = ReductionKSatToQUBO;
@@ -310,9 +308,7 @@ impl ReduceTo> for KSatisfiability {
 }
 
 #[reduction(
-    overhead = { ReductionOverhead::new(vec![
-        ("num_vars", poly!(num_vars) + poly!(num_clauses)),
-    ]) }
+    overhead = { num_vars = "num_vars + num_clauses" }
 )]
 impl ReduceTo> for KSatisfiability {
     type Result = Reduction3SATToQUBO;
diff --git a/src/rules/maximumclique_ilp.rs b/src/rules/maximumclique_ilp.rs
index 08f5026e1..2bcd87d3f 100644
--- a/src/rules/maximumclique_ilp.rs
+++ b/src/rules/maximumclique_ilp.rs
@@ -8,9 +8,7 @@
 use crate::models::graph::MaximumClique;
 use crate::models::optimization::{LinearConstraint, ObjectiveSense, VarBounds, ILP};
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::{Graph, SimpleGraph};
@@ -44,10 +42,8 @@ impl ReductionResult for ReductionCliqueToILP {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vars", poly!(num_vertices)),
-            ("num_constraints", poly!(num_vertices ^ 2)),
-        ])
+        num_vars = "num_vertices",
+        num_constraints = "num_vertices^2",
     }
 )]
 impl ReduceTo for MaximumClique {
diff --git a/src/rules/maximumindependentset_gridgraph.rs b/src/rules/maximumindependentset_gridgraph.rs
index 8d0ee475c..0ce20fe4d 100644
--- a/src/rules/maximumindependentset_gridgraph.rs
+++ b/src/rules/maximumindependentset_gridgraph.rs
@@ -4,9 +4,7 @@
 //! Maps an arbitrary graph's MIS problem to an equivalent weighted MIS on a grid graph.
 use crate::models::graph::MaximumIndependentSet;
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::rules::unitdiskmapping::ksg;
 use crate::topology::{Graph, KingsSubgraph, SimpleGraph, UnitDiskGraph};
@@ -33,10 +31,8 @@ impl ReductionResult for ReductionISSimpleToGrid {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vertices", poly!(num_vertices * num_vertices)),
-            ("num_edges", poly!(num_vertices * num_vertices)),
-        ])
+        num_vertices = "num_vertices * num_vertices",
+        num_edges = "num_vertices * num_vertices",
     }
 )]
 impl ReduceTo>
@@ -80,10 +76,8 @@ impl ReductionResult for ReductionISUnitDiskToGrid {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vertices", poly!(num_vertices * num_vertices)),
-            ("num_edges", poly!(num_vertices * num_vertices)),
-        ])
+        num_vertices = "num_vertices * num_vertices",
+        num_edges = "num_vertices * num_vertices",
     }
 )]
 impl ReduceTo>
diff --git a/src/rules/maximumindependentset_ilp.rs b/src/rules/maximumindependentset_ilp.rs
index 220cd7e76..5396c3962 100644
--- a/src/rules/maximumindependentset_ilp.rs
+++ b/src/rules/maximumindependentset_ilp.rs
@@ -7,9 +7,7 @@
 use crate::models::graph::MaximumIndependentSet;
 use crate::models::optimization::{LinearConstraint, ObjectiveSense, VarBounds, ILP};
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::{Graph, SimpleGraph};
@@ -43,10 +41,8 @@ impl ReductionResult for ReductionISToILP {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vars", poly!(num_vertices)),
-            ("num_constraints", poly!(num_edges)),
-        ])
+        num_vars = "num_vertices",
+        num_constraints = "num_edges",
     }
 )]
 impl ReduceTo for MaximumIndependentSet {
diff --git a/src/rules/maximumindependentset_maximumsetpacking.rs b/src/rules/maximumindependentset_maximumsetpacking.rs
index 0cdbbb027..e6813bc2b 100644
--- a/src/rules/maximumindependentset_maximumsetpacking.rs
+++ b/src/rules/maximumindependentset_maximumsetpacking.rs
@@ -5,9 +5,7 @@
 use crate::models::graph::MaximumIndependentSet;
 use crate::models::set::MaximumSetPacking;
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::{Graph, SimpleGraph};
 use crate::types::WeightElement;
@@ -38,10 +36,8 @@ where
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_sets", poly!(num_vertices)),
-            ("universe_size", poly!(num_vertices)),
-        ])
+        num_sets = "num_vertices",
+        universe_size = "num_vertices",
     }
 )]
 impl ReduceTo> for MaximumIndependentSet {
@@ -89,10 +85,8 @@ where
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vertices", poly!(num_sets)),
-            ("num_edges", poly!(num_sets)),
-        ])
+        num_vertices = "num_sets",
+        num_edges = "num_sets",
     }
 )]
 impl ReduceTo> for MaximumSetPacking {
diff --git a/src/rules/maximumindependentset_qubo.rs b/src/rules/maximumindependentset_qubo.rs
index ea1e5d08f..9dd32a07c 100644
--- a/src/rules/maximumindependentset_qubo.rs
+++ b/src/rules/maximumindependentset_qubo.rs
@@ -7,9 +7,7 @@
 use crate::models::graph::MaximumIndependentSet;
 use crate::models::optimization::QUBO;
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::{Graph, SimpleGraph};
 
 /// Result of reducing MaximumIndependentSet to QUBO.
@@ -32,7 +30,7 @@ impl ReductionResult for ReductionISToQUBO {
 }
 
 #[reduction(
-    overhead = { ReductionOverhead::new(vec![("num_vars", poly!(num_vertices))]) }
+    overhead = { num_vars = "num_vertices" }
 )]
 impl ReduceTo> for MaximumIndependentSet {
     type Result = ReductionISToQUBO;
diff --git a/src/rules/maximumindependentset_triangular.rs b/src/rules/maximumindependentset_triangular.rs
index 09d6a85eb..60e9338b9 100644
--- a/src/rules/maximumindependentset_triangular.rs
+++ b/src/rules/maximumindependentset_triangular.rs
@@ -5,9 +5,7 @@
 //! triangular lattice grid graph.
 
 use crate::models::graph::MaximumIndependentSet;
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::rules::unitdiskmapping::ksg;
 use crate::rules::unitdiskmapping::triangular;
@@ -35,10 +33,8 @@ impl ReductionResult for ReductionISSimpleToTriangular {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vertices", poly!(num_vertices * num_vertices)),
-            ("num_edges", poly!(num_vertices * num_vertices)),
-        ])
+        num_vertices = "num_vertices * num_vertices",
+        num_edges = "num_vertices * num_vertices",
     }
 )]
 impl ReduceTo>
diff --git a/src/rules/maximummatching_ilp.rs b/src/rules/maximummatching_ilp.rs
index dc0168611..681e0092f 100644
--- a/src/rules/maximummatching_ilp.rs
+++ b/src/rules/maximummatching_ilp.rs
@@ -8,9 +8,7 @@
 use crate::models::graph::MaximumMatching;
 use crate::models::optimization::{LinearConstraint, ObjectiveSense, VarBounds, ILP};
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::{Graph, SimpleGraph};
@@ -44,10 +42,8 @@ impl ReductionResult for ReductionMatchingToILP {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vars", poly!(num_edges)),
-            ("num_constraints", poly!(num_vertices)),
-        ])
+        num_vars = "num_edges",
+        num_constraints = "num_vertices",
     }
 )]
 impl ReduceTo for MaximumMatching {
diff --git a/src/rules/maximummatching_maximumsetpacking.rs b/src/rules/maximummatching_maximumsetpacking.rs
index c477fc660..623cfb19a 100644
--- a/src/rules/maximummatching_maximumsetpacking.rs
+++ b/src/rules/maximummatching_maximumsetpacking.rs
@@ -5,9 +5,7 @@
 use crate::models::graph::MaximumMatching;
 use crate::models::set::MaximumSetPacking;
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::{Graph, SimpleGraph};
 use crate::types::WeightElement;
@@ -39,10 +37,8 @@ where
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_sets", poly!(num_edges)),
-            ("universe_size", poly!(num_vertices)),
-        ])
+        num_sets = "num_edges",
+        universe_size = "num_vertices",
     }
 )]
 impl ReduceTo> for MaximumMatching {
diff --git a/src/rules/maximumsetpacking_ilp.rs b/src/rules/maximumsetpacking_ilp.rs
index b5f22d74a..13f610715 100644
--- a/src/rules/maximumsetpacking_ilp.rs
+++ b/src/rules/maximumsetpacking_ilp.rs
@@ -7,9 +7,7 @@
 use crate::models::optimization::{LinearConstraint, ObjectiveSense, VarBounds, ILP};
 use crate::models::set::MaximumSetPacking;
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 
 /// Result of reducing MaximumSetPacking to ILP.
@@ -42,10 +40,8 @@ impl ReductionResult for ReductionSPToILP {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vars", poly!(num_sets)),
-            ("num_constraints", poly!(num_sets ^ 2)),
-        ])
+        num_vars = "num_sets",
+        num_constraints = "num_sets^2",
     }
 )]
 impl ReduceTo for MaximumSetPacking {
diff --git a/src/rules/maximumsetpacking_qubo.rs b/src/rules/maximumsetpacking_qubo.rs
index 2e5a48c0b..9fdce7eed 100644
--- a/src/rules/maximumsetpacking_qubo.rs
+++ b/src/rules/maximumsetpacking_qubo.rs
@@ -8,9 +8,7 @@
 use crate::models::optimization::QUBO;
 use crate::models::set::MaximumSetPacking;
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 
 /// Result of reducing `MaximumSetPacking` to `QUBO`.
@@ -33,7 +31,7 @@ impl ReductionResult for ReductionSPToQUBO {
 }
 
 #[reduction(
-    overhead = { ReductionOverhead::new(vec![("num_vars", poly!(num_sets))]) }
+    overhead = { num_vars = "num_sets" }
 )]
 impl ReduceTo> for MaximumSetPacking {
     type Result = ReductionSPToQUBO;
diff --git a/src/rules/minimumdominatingset_ilp.rs b/src/rules/minimumdominatingset_ilp.rs
index b6c526b8a..abce597b4 100644
--- a/src/rules/minimumdominatingset_ilp.rs
+++ b/src/rules/minimumdominatingset_ilp.rs
@@ -8,9 +8,7 @@
 use crate::models::graph::MinimumDominatingSet;
 use crate::models::optimization::{LinearConstraint, ObjectiveSense, VarBounds, ILP};
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::{Graph, SimpleGraph};
@@ -45,10 +43,8 @@ impl ReductionResult for ReductionDSToILP {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vars", poly!(num_vertices)),
-            ("num_constraints", poly!(num_vertices)),
-        ])
+        num_vars = "num_vertices",
+        num_constraints = "num_vertices",
     }
 )]
 impl ReduceTo for MinimumDominatingSet {
diff --git a/src/rules/minimumsetcovering_ilp.rs b/src/rules/minimumsetcovering_ilp.rs
index 1a3889b72..4f4bb6c52 100644
--- a/src/rules/minimumsetcovering_ilp.rs
+++ b/src/rules/minimumsetcovering_ilp.rs
@@ -7,9 +7,7 @@
 use crate::models::optimization::{LinearConstraint, ObjectiveSense, VarBounds, ILP};
 use crate::models::set::MinimumSetCovering;
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 
 /// Result of reducing MinimumSetCovering to ILP.
@@ -42,10 +40,8 @@ impl ReductionResult for ReductionSCToILP {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vars", poly!(num_sets)),
-            ("num_constraints", poly!(universe_size)),
-        ])
+        num_vars = "num_sets",
+        num_constraints = "universe_size",
     }
 )]
 impl ReduceTo for MinimumSetCovering {
diff --git a/src/rules/minimumvertexcover_ilp.rs b/src/rules/minimumvertexcover_ilp.rs
index 18780fd57..9f076bd80 100644
--- a/src/rules/minimumvertexcover_ilp.rs
+++ b/src/rules/minimumvertexcover_ilp.rs
@@ -7,9 +7,7 @@
 use crate::models::graph::MinimumVertexCover;
 use crate::models::optimization::{LinearConstraint, ObjectiveSense, VarBounds, ILP};
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::{Graph, SimpleGraph};
@@ -43,10 +41,8 @@ impl ReductionResult for ReductionVCToILP {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vars", poly!(num_vertices)),
-            ("num_constraints", poly!(num_edges)),
-        ])
+        num_vars = "num_vertices",
+        num_constraints = "num_edges",
     }
 )]
 impl ReduceTo for MinimumVertexCover {
diff --git a/src/rules/minimumvertexcover_maximumindependentset.rs b/src/rules/minimumvertexcover_maximumindependentset.rs
index 0715c4742..a09128834 100644
--- a/src/rules/minimumvertexcover_maximumindependentset.rs
+++ b/src/rules/minimumvertexcover_maximumindependentset.rs
@@ -3,9 +3,7 @@
 //! These problems are complements: a set S is an independent set iff V\S is a vertex cover.
 
 use crate::models::graph::{MaximumIndependentSet, MinimumVertexCover};
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::{Graph, SimpleGraph};
 use crate::types::WeightElement;
@@ -36,10 +34,8 @@ where
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vertices", poly!(num_vertices)),
-            ("num_edges", poly!(num_edges)),
-        ])
+        num_vertices = "num_vertices",
+        num_edges = "num_edges",
     }
 )]
 impl ReduceTo> for MaximumIndependentSet {
@@ -79,10 +75,8 @@ where
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vertices", poly!(num_vertices)),
-            ("num_edges", poly!(num_edges)),
-        ])
+        num_vertices = "num_vertices",
+        num_edges = "num_edges",
     }
 )]
 impl ReduceTo> for MinimumVertexCover {
diff --git a/src/rules/minimumvertexcover_minimumsetcovering.rs b/src/rules/minimumvertexcover_minimumsetcovering.rs
index e2f130f12..7d0c29089 100644
--- a/src/rules/minimumvertexcover_minimumsetcovering.rs
+++ b/src/rules/minimumvertexcover_minimumsetcovering.rs
@@ -5,9 +5,7 @@
 use crate::models::graph::MinimumVertexCover;
 use crate::models::set::MinimumSetCovering;
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::{Graph, SimpleGraph};
 use crate::types::WeightElement;
@@ -38,10 +36,8 @@ where
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_sets", poly!(num_vertices)),
-            ("universe_size", poly!(num_edges)),
-        ])
+        num_sets = "num_vertices",
+        universe_size = "num_edges",
     }
 )]
 impl ReduceTo> for MinimumVertexCover {
diff --git a/src/rules/minimumvertexcover_qubo.rs b/src/rules/minimumvertexcover_qubo.rs
index b0422e57a..e484c7249 100644
--- a/src/rules/minimumvertexcover_qubo.rs
+++ b/src/rules/minimumvertexcover_qubo.rs
@@ -8,9 +8,7 @@
 use crate::models::graph::MinimumVertexCover;
 use crate::models::optimization::QUBO;
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::{Graph, SimpleGraph};
@@ -34,7 +32,7 @@ impl ReductionResult for ReductionVCToQUBO {
 }
 
 #[reduction(
-    overhead = { ReductionOverhead::new(vec![("num_vars", poly!(num_vertices))]) }
+    overhead = { num_vars = "num_vertices" }
 )]
 impl ReduceTo> for MinimumVertexCover {
     type Result = ReductionVCToQUBO;
diff --git a/src/rules/qubo_ilp.rs b/src/rules/qubo_ilp.rs
index d43ccc930..89d14f94a 100644
--- a/src/rules/qubo_ilp.rs
+++ b/src/rules/qubo_ilp.rs
@@ -15,9 +15,7 @@
 //! minimize Σ_i Q_ii · x_i + Σ_{i for QUBO {
diff --git a/src/rules/sat_circuitsat.rs b/src/rules/sat_circuitsat.rs
index b82084f84..3a7efb758 100644
--- a/src/rules/sat_circuitsat.rs
+++ b/src/rules/sat_circuitsat.rs
@@ -5,9 +5,7 @@
 use crate::models::satisfiability::Satisfiability;
 use crate::models::specialized::{Assignment, BooleanExpr, Circuit, CircuitSAT};
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::traits::Problem;
@@ -37,10 +35,8 @@ impl ReductionResult for ReductionSATToCircuit {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_variables", poly!(num_vars) + poly!(num_clauses) + poly!(1)),
-            ("num_assignments", poly!(num_clauses) + poly!(2)),
-        ])
+        num_variables = "num_vars + num_clauses + 1",
+        num_assignments = "num_clauses + 2",
     }
 )]
 impl ReduceTo for Satisfiability {
diff --git a/src/rules/sat_coloring.rs b/src/rules/sat_coloring.rs
index 8337c5b9a..b8d89ee37 100644
--- a/src/rules/sat_coloring.rs
+++ b/src/rules/sat_coloring.rs
@@ -10,9 +10,7 @@
 use crate::models::graph::KColoring;
 use crate::models::satisfiability::Satisfiability;
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::sat_maximumindependentset::BoolVar;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::SimpleGraph;
@@ -298,12 +296,8 @@ impl ReductionSATToColoring {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            // 2*num_vars + 3 (base) + 5*(num_literals - num_clauses) (OR gadgets)
-            ("num_vertices", poly!(2 * num_vars) + poly!(5 * num_literals) + poly!(num_clauses).scale(-5.0) + poly!(3)),
-            // 3 (triangle) + 3*num_vars + 11*(num_literals - num_clauses) (OR gadgets) + 2*num_clauses (set_true)
-            ("num_edges", poly!(3 * num_vars) + poly!(11 * num_literals) + poly!(num_clauses).scale(-9.0) + poly!(3)),
-        ])
+        num_vertices = "2 * num_vars + 5 * num_literals + -5 * num_clauses + 3",
+        num_edges = "3 * num_vars + 11 * num_literals + -9 * num_clauses + 3",
     }
 )]
 impl ReduceTo> for Satisfiability {
diff --git a/src/rules/sat_ksat.rs b/src/rules/sat_ksat.rs
index 5321fde62..3b9c0c328 100644
--- a/src/rules/sat_ksat.rs
+++ b/src/rules/sat_ksat.rs
@@ -7,9 +7,7 @@
 //! K-SAT -> SAT: Trivial embedding (K-SAT is a special case of SAT)
 
 use crate::models::satisfiability::{CNFClause, KSatisfiability, Satisfiability};
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::variant::{KValue, K2, K3, KN};
@@ -113,11 +111,9 @@ fn add_clause_to_ksat(
 macro_rules! impl_sat_to_ksat {
     ($ktype:ty, $k:expr) => {
         #[reduction(overhead = {
-            ReductionOverhead::new(vec![
-                ("num_clauses", poly!(num_clauses) + poly!(num_literals)),
-                ("num_vars", poly!(num_vars) + poly!(num_literals)),
-            ])
-        })]
+            num_clauses = "num_clauses + num_literals",
+            num_vars = "num_vars + num_literals",
+        })]
         impl ReduceTo> for Satisfiability {
             type Result = ReductionSATToKSAT<$ktype>;
@@ -187,12 +183,10 @@ fn reduce_ksat_to_sat(ksat: &KSatisfiability) -> ReductionKSATToSA
 macro_rules! impl_ksat_to_sat {
     ($ktype:ty) => {
         #[reduction(overhead = {
-            ReductionOverhead::new(vec![
-                ("num_clauses", poly!(num_clauses)),
-                ("num_vars", poly!(num_vars)),
-                ("num_literals", poly!(num_literals)),
-            ])
-        })]
+            num_clauses = "num_clauses",
+            num_vars = "num_vars",
+            num_literals = "num_literals",
+        })]
         impl ReduceTo for KSatisfiability<$ktype> {
             type Result = ReductionKSATToSAT<$ktype>;
diff --git a/src/rules/sat_maximumindependentset.rs b/src/rules/sat_maximumindependentset.rs
index f678bf796..89978b9be 100644
--- a/src/rules/sat_maximumindependentset.rs
+++ b/src/rules/sat_maximumindependentset.rs
@@ -10,9 +10,7 @@
 use crate::models::graph::MaximumIndependentSet;
 use crate::models::satisfiability::Satisfiability;
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::SimpleGraph;
@@ -111,10 +109,8 @@ impl ReductionSATToIS {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vertices", poly!(num_literals)),
-            ("num_edges", poly!(num_literals ^ 2)),
-        ])
+        num_vertices = "num_literals",
+        num_edges = "num_literals^2",
     }
 )]
 impl ReduceTo> for Satisfiability {
diff --git a/src/rules/sat_minimumdominatingset.rs b/src/rules/sat_minimumdominatingset.rs
index 076469020..7dd1570f1 100644
--- a/src/rules/sat_minimumdominatingset.rs
+++ b/src/rules/sat_minimumdominatingset.rs
@@ -16,9 +16,7 @@
 use crate::models::graph::MinimumDominatingSet;
 use crate::models::satisfiability::Satisfiability;
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::sat_maximumindependentset::BoolVar;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::SimpleGraph;
@@ -115,10 +113,8 @@ impl ReductionSATToDS {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vertices", poly!(3 * num_vars) + poly!(num_clauses)),
-            ("num_edges", poly!(3 * num_vars) + poly!(num_literals)),
-        ])
+        num_vertices = "3 * num_vars + num_clauses",
+        num_edges = "3 * num_vars + num_literals",
     }
 )]
 impl ReduceTo> for Satisfiability {
diff --git a/src/rules/spinglass_maxcut.rs b/src/rules/spinglass_maxcut.rs
index ef6ae91b0..813480eb0 100644
--- a/src/rules/spinglass_maxcut.rs
+++ b/src/rules/spinglass_maxcut.rs
@@ -5,9 +5,7 @@
 use crate::models::graph::MaxCut;
 use crate::models::optimization::SpinGlass;
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::{Graph, SimpleGraph};
 use crate::types::WeightElement;
@@ -45,10 +43,8 @@ where
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_spins", poly!(num_vertices)),
-            ("num_interactions", poly!(num_edges)),
-        ])
+        num_spins = "num_vertices",
+        num_interactions = "num_edges",
     }
 )]
 impl ReduceTo> for MaxCut {
@@ -136,10 +132,8 @@ where
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vertices", poly!(num_spins)),
-            ("num_edges", poly!(num_interactions)),
-        ])
+        num_vertices = "num_spins",
+        num_edges = "num_interactions",
     }
 )]
 impl ReduceTo> for SpinGlass {
diff --git a/src/rules/spinglass_qubo.rs b/src/rules/spinglass_qubo.rs
index 2d39981fd..178c94812 100644
--- a/src/rules/spinglass_qubo.rs
+++ b/src/rules/spinglass_qubo.rs
@@ -6,9 +6,7 @@
 //! Transformation: s = 2x - 1 (so x=0 -> s=-1, x=1 -> s=+1)
 
 use crate::models::optimization::{SpinGlass, QUBO};
-use crate::poly;
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::SimpleGraph;
@@ -34,9 +32,7 @@ impl ReductionResult for ReductionQUBOToSG {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_spins", poly!(num_vars)),
-        ])
+        num_spins = "num_vars",
     }
 )]
 impl ReduceTo> for QUBO {
@@ -111,9 +107,7 @@ impl ReductionResult for ReductionSGToQUBO {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::new(vec![
-            ("num_vars", poly!(num_spins)),
-        ])
+        num_vars = "num_spins",
     }
 )]
 impl ReduceTo> for SpinGlass {
diff --git a/src/rules/travelingsalesman_ilp.rs b/src/rules/travelingsalesman_ilp.rs
index df529fd33..a5a331ca9 100644
--- a/src/rules/travelingsalesman_ilp.rs
+++ b/src/rules/travelingsalesman_ilp.rs
@@ -7,9 +7,7 @@
 use crate::models::graph::TravelingSalesman;
 use crate::models::optimization::{LinearConstraint, ObjectiveSense, VarBounds, ILP};
-use crate::polynomial::{Monomial, Polynomial};
 use crate::reduction;
-use crate::rules::registry::ReductionOverhead;
 use crate::rules::traits::{ReduceTo, ReductionResult};
 use crate::topology::{Graph, SimpleGraph};
@@ -74,32 +72,8 @@ impl ReductionResult for ReductionTSPToILP {
 
 #[reduction(
     overhead = {
-        ReductionOverhead::from_polynomials(vec![
-            // num_vars = n^2 + 2*m*n
-            ("num_vars", Polynomial::var_pow("num_vertices", 2) + Polynomial {
-                terms: vec![Monomial {
-                    coefficient: 2.0,
-                    variables: vec![("num_vertices", 1), ("num_edges", 1)],
-                }]
-            }),
-            // num_constraints = 2n + n(n(n-1) - 2m) + 6mn = n^3 - n^2 + 2n + 4mn
-            ("num_constraints", Polynomial::var_pow("num_vertices", 3) + Polynomial {
-                terms: vec![
-                    Monomial {
-                        coefficient: -1.0,
-                        variables: vec![("num_vertices", 2)],
-                    },
-                    Monomial {
-                        coefficient: 2.0,
-                        variables: vec![("num_vertices", 1)],
-                    },
-                    Monomial {
-                        coefficient: 4.0,
-                        variables:
vec![("num_vertices", 1), ("num_edges", 1)], - }, - ] - }), - ]) + num_vars = "num_vertices^2 + 2 * num_vertices * num_edges", + num_constraints = "num_vertices^3 + -1 * num_vertices^2 + 2 * num_vertices + 4 * num_vertices * num_edges", } )] impl ReduceTo for TravelingSalesman { From f07fdfceedb532362144c43ab654bdcc5098a2a0 Mon Sep 17 00:00:00 2001 From: GiggleLiu Date: Thu, 26 Feb 2026 06:54:46 +0800 Subject: [PATCH 09/15] refactor: remove problem_size_names/values from Problem trait Remove problem_size_names() and problem_size_values() from the Problem trait and all 21 model implementations. Remove source_size_names_fn and target_size_names_fn from ReductionEntry. Derive size field names from overhead expressions instead. Update CLI dispatch and MCP tools. Co-Authored-By: Claude Opus 4.6 --- problemreductions-cli/src/commands/inspect.rs | 18 +--- problemreductions-cli/src/dispatch.rs | 8 -- problemreductions-cli/src/mcp/tools.rs | 7 +- problemreductions-cli/tests/cli_tests.rs | 2 +- problemreductions-macros/src/lib.rs | 2 - src/lib.rs | 7 +- src/models/graph/kcoloring.rs | 6 -- src/models/graph/max_cut.rs | 6 -- src/models/graph/maximal_is.rs | 6 -- src/models/graph/maximum_clique.rs | 6 -- src/models/graph/maximum_independent_set.rs | 6 -- src/models/graph/maximum_matching.rs | 6 -- src/models/graph/minimum_dominating_set.rs | 6 -- src/models/graph/minimum_vertex_cover.rs | 6 -- src/models/graph/traveling_salesman.rs | 6 -- src/models/optimization/ilp.rs | 8 +- src/models/optimization/qubo.rs | 6 -- src/models/optimization/spin_glass.rs | 6 -- src/models/satisfiability/ksat.rs | 8 -- src/models/satisfiability/sat.rs | 7 -- src/models/set/maximum_set_packing.rs | 12 --- src/models/set/minimum_set_covering.rs | 6 -- src/models/specialized/biclique_cover.rs | 11 --- src/models/specialized/bmf.rs | 6 -- src/models/specialized/circuit.rs | 6 -- src/models/specialized/factoring.rs | 6 -- src/models/specialized/paintshop.rs | 6 -- src/rules/graph.rs | 91 
++----------------- src/rules/mod.rs | 2 - src/rules/registry.rs | 4 - src/traits.rs | 12 --- src/types.rs | 21 ----- src/unit_tests/reduction_graph.rs | 82 +---------------- src/unit_tests/rules/reduction_path_parity.rs | 17 ++-- src/unit_tests/rules/registry.rs | 14 --- src/unit_tests/rules/traits.rs | 12 --- src/unit_tests/solvers/brute_force.rs | 18 ---- src/unit_tests/traits.rs | 30 ------ 38 files changed, 27 insertions(+), 462 deletions(-) diff --git a/problemreductions-cli/src/commands/inspect.rs b/problemreductions-cli/src/commands/inspect.rs index 16717414f..3a5d37cad 100644 --- a/problemreductions-cli/src/commands/inspect.rs +++ b/problemreductions-cli/src/commands/inspect.rs @@ -33,16 +33,10 @@ fn inspect_problem(pj: &ProblemJson, out: &OutputConfig) -> Result<()> { let mut text = format!("Type: {}{}\n", name, variant_str); - // Size info - let size_names = problem.problem_size_names_dyn(); - let size_values = problem.problem_size_values_dyn(); - if !size_names.is_empty() { - let sizes: Vec = size_names - .iter() - .zip(size_values.iter()) - .map(|(n, v)| format!("{} {}", v, n)) - .collect(); - text.push_str(&format!("Size: {}\n", sizes.join(", "))); + // Size fields from the reduction graph + let size_fields = graph.size_field_names(name); + if !size_fields.is_empty() { + text.push_str(&format!("Size fields: {}\n", size_fields.join(", "))); } text.push_str(&format!("Variables: {}\n", problem.num_variables_dyn())); @@ -60,9 +54,7 @@ fn inspect_problem(pj: &ProblemJson, out: &OutputConfig) -> Result<()> { "kind": "problem", "type": name, "variant": variant, - "size": size_names.iter().zip(size_values.iter()) - .map(|(n, v)| serde_json::json!({"field": n, "value": v})) - .collect::>(), + "size_fields": size_fields, "num_variables": problem.num_variables_dyn(), "solvers": ["ilp", "brute-force"], "reduces_to": targets, diff --git a/problemreductions-cli/src/dispatch.rs b/problemreductions-cli/src/dispatch.rs index 285050b76..c5bb7540d 100644 --- 
a/problemreductions-cli/src/dispatch.rs +++ b/problemreductions-cli/src/dispatch.rs @@ -39,8 +39,6 @@ pub trait DynProblem: Any { fn dims_dyn(&self) -> Vec; fn problem_name(&self) -> &'static str; fn variant_map(&self) -> BTreeMap; - fn problem_size_names_dyn(&self) -> &'static [&'static str]; - fn problem_size_values_dyn(&self) -> Vec; fn num_variables_dyn(&self) -> usize; } @@ -70,12 +68,6 @@ where .map(|(k, v)| (k.to_string(), v.to_string())) .collect() } - fn problem_size_names_dyn(&self) -> &'static [&'static str] { - T::problem_size_names() - } - fn problem_size_values_dyn(&self) -> Vec { - self.problem_size_values() - } fn num_variables_dyn(&self) -> usize { self.num_variables() } diff --git a/problemreductions-cli/src/mcp/tools.rs b/problemreductions-cli/src/mcp/tools.rs index 36c51e7e7..e3c25e8c4 100644 --- a/problemreductions-cli/src/mcp/tools.rs +++ b/problemreductions-cli/src/mcp/tools.rs @@ -605,8 +605,7 @@ impl McpServer { let variant = problem.variant_map(); let graph = ReductionGraph::new(); - let size_names = problem.problem_size_names_dyn(); - let size_values = problem.problem_size_values_dyn(); + let size_fields = graph.size_field_names(name); let outgoing = graph.outgoing_reductions(name); let mut targets: Vec = outgoing.iter().map(|e| e.target_name.to_string()).collect(); @@ -617,9 +616,7 @@ impl McpServer { "kind": "problem", "type": name, "variant": variant, - "size": size_names.iter().zip(size_values.iter()) - .map(|(n, v)| serde_json::json!({"field": n, "value": v})) - .collect::>(), + "size_fields": size_fields, "num_variables": problem.num_variables_dyn(), "solvers": ["ilp", "brute-force"], "reduces_to": targets, diff --git a/problemreductions-cli/tests/cli_tests.rs b/problemreductions-cli/tests/cli_tests.rs index e22d2c83a..2d3775673 100644 --- a/problemreductions-cli/tests/cli_tests.rs +++ b/problemreductions-cli/tests/cli_tests.rs @@ -2093,7 +2093,7 @@ fn test_inspect_json_output() { let json: serde_json::Value = 
serde_json::from_str(&content).unwrap(); assert_eq!(json["kind"], "problem"); assert_eq!(json["type"], "MaximumIndependentSet"); - assert!(json["size"].is_array()); + assert!(json["size_fields"].is_array()); assert!(json["solvers"].is_array()); assert!(json["reduces_to"].is_array()); diff --git a/problemreductions-macros/src/lib.rs b/problemreductions-macros/src/lib.rs index bc650dc58..6ff9dce99 100644 --- a/problemreductions-macros/src/lib.rs +++ b/problemreductions-macros/src/lib.rs @@ -268,8 +268,6 @@ fn generate_reduction_entry( target_variant_fn: || { #target_variant_body }, overhead_fn: || { #overhead }, module_path: module_path!(), - source_size_names_fn: || { <#source_type as crate::traits::Problem>::problem_size_names() }, - target_size_names_fn: || { <#target_type as crate::traits::Problem>::problem_size_names() }, reduce_fn: |src: &dyn std::any::Any| -> Box { let src = src.downcast_ref::<#source_type>().unwrap_or_else(|| { panic!( diff --git a/src/lib.rs b/src/lib.rs index 46b27689f..23e098865 100644 --- a/src/lib.rs +++ b/src/lib.rs @@ -50,7 +50,7 @@ pub mod prelude { // Core traits pub use crate::rules::{ReduceTo, ReductionResult}; pub use crate::solvers::{BruteForce, Solver}; - pub use crate::traits::{problem_size, OptimizationProblem, Problem, SatisfactionProblem}; + pub use crate::traits::{OptimizationProblem, Problem, SatisfactionProblem}; // Types pub use crate::error::{ProblemError, Result}; @@ -61,7 +61,7 @@ pub mod prelude { pub use error::{ProblemError, Result}; pub use registry::{ComplexityClass, ProblemInfo}; pub use solvers::{BruteForce, Solver}; -pub use traits::{problem_size, OptimizationProblem, Problem, SatisfactionProblem}; +pub use traits::{OptimizationProblem, Problem, SatisfactionProblem}; pub use types::{ Direction, NumericSize, One, ProblemSize, SolutionSize, Unweighted, WeightElement, }; @@ -73,9 +73,6 @@ pub use problemreductions_macros::reduction; #[path = "unit_tests/graph_models.rs"] mod test_graph_models; #[cfg(test)] 
-#[path = "unit_tests/problem_size.rs"] -mod test_problem_size; -#[cfg(test)] #[path = "unit_tests/property.rs"] mod test_property; #[cfg(test)] diff --git a/src/models/graph/kcoloring.rs b/src/models/graph/kcoloring.rs index 4d10276bd..7bc2d4fbb 100644 --- a/src/models/graph/kcoloring.rs +++ b/src/models/graph/kcoloring.rs @@ -153,12 +153,6 @@ where self.is_valid_coloring(config) } - fn problem_size_names() -> &'static [&'static str] { - &["num_vertices", "num_edges"] - } - fn problem_size_values(&self) -> Vec { - vec![self.graph().num_vertices(), self.graph().num_edges()] - } } impl SatisfactionProblem for KColoring {} diff --git a/src/models/graph/max_cut.rs b/src/models/graph/max_cut.rs index c0e1a7f95..d80f05720 100644 --- a/src/models/graph/max_cut.rs +++ b/src/models/graph/max_cut.rs @@ -181,12 +181,6 @@ where SolutionSize::Valid(cut_size(&self.graph, &self.edge_weights, &partition)) } - fn problem_size_names() -> &'static [&'static str] { - &["num_vertices", "num_edges"] - } - fn problem_size_values(&self) -> Vec { - vec![self.graph().num_vertices(), self.graph().num_edges()] - } } impl OptimizationProblem for MaxCut diff --git a/src/models/graph/maximal_is.rs b/src/models/graph/maximal_is.rs index d05feeeaa..6f7e9b9ce 100644 --- a/src/models/graph/maximal_is.rs +++ b/src/models/graph/maximal_is.rs @@ -170,12 +170,6 @@ where SolutionSize::Valid(total) } - fn problem_size_names() -> &'static [&'static str] { - &["num_vertices", "num_edges"] - } - fn problem_size_values(&self) -> Vec { - vec![self.graph().num_vertices(), self.graph().num_edges()] - } } impl OptimizationProblem for MaximalIS diff --git a/src/models/graph/maximum_clique.rs b/src/models/graph/maximum_clique.rs index 316304fc0..0c41e51ca 100644 --- a/src/models/graph/maximum_clique.rs +++ b/src/models/graph/maximum_clique.rs @@ -136,12 +136,6 @@ where SolutionSize::Valid(total) } - fn problem_size_names() -> &'static [&'static str] { - &["num_vertices", "num_edges"] - } - fn 
problem_size_values(&self) -> Vec { - vec![self.graph().num_vertices(), self.graph().num_edges()] - } } impl OptimizationProblem for MaximumClique diff --git a/src/models/graph/maximum_independent_set.rs b/src/models/graph/maximum_independent_set.rs index d27fd7d7e..e175387d8 100644 --- a/src/models/graph/maximum_independent_set.rs +++ b/src/models/graph/maximum_independent_set.rs @@ -136,12 +136,6 @@ where SolutionSize::Valid(total) } - fn problem_size_names() -> &'static [&'static str] { - &["num_vertices", "num_edges"] - } - fn problem_size_values(&self) -> Vec { - vec![self.graph().num_vertices(), self.graph().num_edges()] - } } impl OptimizationProblem for MaximumIndependentSet diff --git a/src/models/graph/maximum_matching.rs b/src/models/graph/maximum_matching.rs index da1062ffc..04e36d32a 100644 --- a/src/models/graph/maximum_matching.rs +++ b/src/models/graph/maximum_matching.rs @@ -206,12 +206,6 @@ where SolutionSize::Valid(total) } - fn problem_size_names() -> &'static [&'static str] { - &["num_vertices", "num_edges"] - } - fn problem_size_values(&self) -> Vec { - vec![self.graph().num_vertices(), self.graph().num_edges()] - } } impl OptimizationProblem for MaximumMatching diff --git a/src/models/graph/minimum_dominating_set.rs b/src/models/graph/minimum_dominating_set.rs index 4fca59bcc..ab6461bbc 100644 --- a/src/models/graph/minimum_dominating_set.rs +++ b/src/models/graph/minimum_dominating_set.rs @@ -156,12 +156,6 @@ where SolutionSize::Valid(total) } - fn problem_size_names() -> &'static [&'static str] { - &["num_vertices", "num_edges"] - } - fn problem_size_values(&self) -> Vec { - vec![self.graph().num_vertices(), self.graph().num_edges()] - } } impl OptimizationProblem for MinimumDominatingSet diff --git a/src/models/graph/minimum_vertex_cover.rs b/src/models/graph/minimum_vertex_cover.rs index 24a4408bd..7a3495fbb 100644 --- a/src/models/graph/minimum_vertex_cover.rs +++ b/src/models/graph/minimum_vertex_cover.rs @@ -131,12 +131,6 @@ where 
SolutionSize::Valid(total) } - fn problem_size_names() -> &'static [&'static str] { - &["num_vertices", "num_edges"] - } - fn problem_size_values(&self) -> Vec { - vec![self.graph().num_vertices(), self.graph().num_edges()] - } } impl OptimizationProblem for MinimumVertexCover diff --git a/src/models/graph/traveling_salesman.rs b/src/models/graph/traveling_salesman.rs index 51f0b17ce..f67c97054 100644 --- a/src/models/graph/traveling_salesman.rs +++ b/src/models/graph/traveling_salesman.rs @@ -169,12 +169,6 @@ where SolutionSize::Valid(total) } - fn problem_size_names() -> &'static [&'static str] { - &["num_vertices", "num_edges"] - } - fn problem_size_values(&self) -> Vec { - vec![self.graph().num_vertices(), self.graph().num_edges()] - } } impl OptimizationProblem for TravelingSalesman diff --git a/src/models/optimization/ilp.rs b/src/models/optimization/ilp.rs index aad32ec24..7086fc3c6 100644 --- a/src/models/optimization/ilp.rs +++ b/src/models/optimization/ilp.rs @@ -326,7 +326,7 @@ impl ILP { self.num_vars } - /// Get the number of variables (alias matching `problem_size_names`). + /// Get the number of variables. 
pub fn num_vars(&self) -> usize { self.num_variables() } @@ -364,12 +364,6 @@ impl Problem for ILP { crate::variant_params![] } - fn problem_size_names() -> &'static [&'static str] { - &["num_vars", "num_constraints"] - } - fn problem_size_values(&self) -> Vec { - vec![self.num_variables(), self.constraints.len()] - } } impl OptimizationProblem for ILP { diff --git a/src/models/optimization/qubo.rs b/src/models/optimization/qubo.rs index f14db848c..76179001f 100644 --- a/src/models/optimization/qubo.rs +++ b/src/models/optimization/qubo.rs @@ -169,12 +169,6 @@ where crate::variant_params![W] } - fn problem_size_names() -> &'static [&'static str] { - &["num_vars"] - } - fn problem_size_values(&self) -> Vec { - vec![self.num_vars()] - } } impl OptimizationProblem for QUBO diff --git a/src/models/optimization/spin_glass.rs b/src/models/optimization/spin_glass.rs index b42f77f32..84b80cd95 100644 --- a/src/models/optimization/spin_glass.rs +++ b/src/models/optimization/spin_glass.rs @@ -229,12 +229,6 @@ where crate::variant_params![G, W] } - fn problem_size_names() -> &'static [&'static str] { - &["num_spins", "num_interactions"] - } - fn problem_size_values(&self) -> Vec { - vec![self.num_spins(), self.graph().num_edges()] - } } impl OptimizationProblem for SpinGlass diff --git a/src/models/satisfiability/ksat.rs b/src/models/satisfiability/ksat.rs index e7c9ac460..0d74d7e01 100644 --- a/src/models/satisfiability/ksat.rs +++ b/src/models/satisfiability/ksat.rs @@ -176,14 +176,6 @@ impl Problem for KSatisfiability { self.is_satisfying(&assignment) } - fn problem_size_names() -> &'static [&'static str] { - &["num_vars", "num_clauses", "num_literals"] - } - fn problem_size_values(&self) -> Vec { - let num_literals: usize = self.clauses().iter().map(|c| c.len()).sum(); - vec![self.num_vars(), self.num_clauses(), num_literals] - } - fn variant() -> Vec<(&'static str, &'static str)> { crate::variant_params![K] } diff --git a/src/models/satisfiability/sat.rs 
b/src/models/satisfiability/sat.rs index 532257588..380a0a35c 100644 --- a/src/models/satisfiability/sat.rs +++ b/src/models/satisfiability/sat.rs @@ -188,13 +188,6 @@ impl Problem for Satisfiability { self.is_satisfying(&assignment) } - fn problem_size_names() -> &'static [&'static str] { - &["num_vars", "num_clauses", "num_literals"] - } - fn problem_size_values(&self) -> Vec { - vec![self.num_vars(), self.num_clauses(), self.num_literals()] - } - fn variant() -> Vec<(&'static str, &'static str)> { crate::variant_params![] } diff --git a/src/models/set/maximum_set_packing.rs b/src/models/set/maximum_set_packing.rs index 658d29c68..159bb44f0 100644 --- a/src/models/set/maximum_set_packing.rs +++ b/src/models/set/maximum_set_packing.rs @@ -161,18 +161,6 @@ where crate::variant_params![W] } - fn problem_size_names() -> &'static [&'static str] { - &["num_sets", "universe_size"] - } - fn problem_size_values(&self) -> Vec { - let universe_size = self - .sets() - .iter() - .flat_map(|s| s.iter()) - .max() - .map_or(0, |&m| m + 1); - vec![self.num_sets(), universe_size] - } } impl OptimizationProblem for MaximumSetPacking diff --git a/src/models/set/minimum_set_covering.rs b/src/models/set/minimum_set_covering.rs index 3867574b1..10df9a654 100644 --- a/src/models/set/minimum_set_covering.rs +++ b/src/models/set/minimum_set_covering.rs @@ -166,12 +166,6 @@ where crate::variant_params![W] } - fn problem_size_names() -> &'static [&'static str] { - &["num_sets", "universe_size"] - } - fn problem_size_values(&self) -> Vec { - vec![self.num_sets(), self.universe_size()] - } } impl OptimizationProblem for MinimumSetCovering diff --git a/src/models/specialized/biclique_cover.rs b/src/models/specialized/biclique_cover.rs index a69fe1bef..74e18270d 100644 --- a/src/models/specialized/biclique_cover.rs +++ b/src/models/specialized/biclique_cover.rs @@ -234,17 +234,6 @@ impl Problem for BicliqueCover { crate::variant_params![] } - fn problem_size_names() -> &'static [&'static str] { 
- &["left_size", "right_size", "num_edges", "rank"] - } - fn problem_size_values(&self) -> Vec { - vec![ - self.left_size(), - self.right_size(), - self.num_edges(), - self.k(), - ] - } } impl OptimizationProblem for BicliqueCover { diff --git a/src/models/specialized/bmf.rs b/src/models/specialized/bmf.rs index f4f1a2726..35ff934ae 100644 --- a/src/models/specialized/bmf.rs +++ b/src/models/specialized/bmf.rs @@ -221,12 +221,6 @@ impl Problem for BMF { crate::variant_params![] } - fn problem_size_names() -> &'static [&'static str] { - &["m", "n", "rank"] - } - fn problem_size_values(&self) -> Vec { - vec![self.rows(), self.cols(), self.rank()] - } } impl OptimizationProblem for BMF { diff --git a/src/models/specialized/circuit.rs b/src/models/specialized/circuit.rs index 8412fb09a..ffe3bd9a7 100644 --- a/src/models/specialized/circuit.rs +++ b/src/models/specialized/circuit.rs @@ -296,12 +296,6 @@ impl Problem for CircuitSAT { crate::variant_params![] } - fn problem_size_names() -> &'static [&'static str] { - &["num_variables", "num_assignments"] - } - fn problem_size_values(&self) -> Vec { - vec![self.num_variables(), self.circuit().num_assignments()] - } } impl SatisfactionProblem for CircuitSAT {} diff --git a/src/models/specialized/factoring.rs b/src/models/specialized/factoring.rs index 09223119d..1e88eafaa 100644 --- a/src/models/specialized/factoring.rs +++ b/src/models/specialized/factoring.rs @@ -153,12 +153,6 @@ impl Problem for Factoring { crate::variant_params![] } - fn problem_size_names() -> &'static [&'static str] { - &["num_bits_first", "num_bits_second"] - } - fn problem_size_values(&self) -> Vec { - vec![self.m(), self.n()] - } } impl OptimizationProblem for Factoring { diff --git a/src/models/specialized/paintshop.rs b/src/models/specialized/paintshop.rs index 4e5b5b008..747c3240f 100644 --- a/src/models/specialized/paintshop.rs +++ b/src/models/specialized/paintshop.rs @@ -183,12 +183,6 @@ impl Problem for PaintShop { crate::variant_params![] } 
- fn problem_size_names() -> &'static [&'static str] { - &["num_cars", "num_sequence"] - } - fn problem_size_values(&self) -> Vec { - vec![self.num_cars(), self.sequence_len()] - } } impl OptimizationProblem for PaintShop { diff --git a/src/rules/graph.rs b/src/rules/graph.rs index 9a0af5efc..8b4e97ef8 100644 --- a/src/rules/graph.rs +++ b/src/rules/graph.rs @@ -242,56 +242,6 @@ pub struct NeighborTree { pub children: Vec, } -/// Validate that a reduction's overhead variables are consistent with source/target size names. -/// -/// Checks: -/// - Overhead input variables are a subset of `source_size_names` -/// - Overhead output fields are a subset of `target_size_names` (skipped if `target_size_names` is empty) -/// -/// Panics with a descriptive message on mismatch. -pub(crate) fn validate_overhead_variables( - source_name: &str, - target_name: &str, - overhead: &ReductionOverhead, - source_size_names: &[&str], - target_size_names: &[&str], -) { - let source_set: HashSet<&str> = source_size_names.iter().copied().collect(); - let overhead_inputs = overhead.input_variable_names(); - let missing_inputs: Vec<_> = overhead_inputs - .iter() - .filter(|name| !source_set.contains(*name)) - .collect(); - assert!( - missing_inputs.is_empty(), - "Reduction {} -> {}: overhead references input variables {:?} \ - not in source problem_size_names {:?}", - source_name, - target_name, - missing_inputs, - source_set, - ); - - if !target_size_names.is_empty() { - let target_set: HashSet<&str> = target_size_names.iter().copied().collect(); - let overhead_outputs: HashSet<&str> = - overhead.output_size.iter().map(|(name, _)| *name).collect(); - let missing_outputs: Vec<_> = overhead_outputs - .iter() - .filter(|name| !target_set.contains(*name)) - .collect(); - assert!( - missing_outputs.is_empty(), - "Reduction {} -> {}: overhead output fields {:?} \ - not in target problem_size_names {:?}", - source_name, - target_name, - missing_outputs, - target_set, - ); - } -} - /// Runtime 
graph of all registered reductions. /// /// Uses variant-level nodes: each node is a unique `(problem_name, variant)` pair. @@ -365,13 +315,6 @@ impl ReductionGraph { ); let overhead = entry.overhead(); - validate_overhead_variables( - entry.source_name, - entry.target_name, - &overhead, - (entry.source_size_names_fn)(), - (entry.target_size_names_fn)(), - ); // Check if edge already exists (avoid duplicates) if graph.find_edge(src_idx, dst_idx).is_none() { @@ -432,26 +375,6 @@ impl ReductionGraph { cost_fn: &C, ) -> Option { let src = self.lookup_node(source, source_variant)?; - - // Validate: when input_size is non-empty, check outgoing edges - if !input_size.components.is_empty() { - let size_names: Vec<&str> = input_size - .components - .iter() - .map(|(k, _)| k.as_str()) - .collect(); - for edge_ref in self.graph.edges(src) { - let target_node = &self.nodes[self.graph[edge_ref.target()]]; - validate_overhead_variables( - source, - target_node.name, - &edge_ref.weight().overhead, - &size_names, - &[], // skip output validation at query time - ); - } - } - let dst = self.lookup_node(target, target_variant)?; let node_path = self.dijkstra(src, dst, input_size, cost_fn)?; Some(self.node_path_to_reduction_path(&node_path)) @@ -700,18 +623,20 @@ impl ReductionGraph { /// Get the problem size field names for a problem type. /// - /// Returns the static `problem_size_names()` by finding a reduction entry - /// where this problem is the source or target. - pub fn size_field_names(&self, name: &str) -> &'static [&'static str] { + /// Derives size fields from the overhead expressions of reduction entries + /// where this problem appears as source or target. 
+ pub fn size_field_names(&self, name: &str) -> Vec<&'static str> { for entry in inventory::iter::<ReductionEntry> { if entry.source_name == name { - return (entry.source_size_names_fn)(); + let overhead = entry.overhead(); + return overhead.input_variable_names().into_iter().collect(); } if entry.target_name == name { - return (entry.target_size_names_fn)(); + let overhead = entry.overhead(); + return overhead.output_size.iter().map(|(name, _)| *name).collect(); } } - &[] + vec![] } /// Get all incoming reductions to a problem (across all its variants). diff --git a/src/rules/mod.rs b/src/rules/mod.rs index 146a537fe..765e3e8cf 100644 --- a/src/rules/mod.rs +++ b/src/rules/mod.rs @@ -62,8 +62,6 @@ mod qubo_ilp; #[cfg(feature = "ilp-solver")] mod travelingsalesman_ilp; -#[cfg(test)] -pub(crate) use graph::validate_overhead_variables; pub use graph::{ NeighborInfo, NeighborTree, ReductionChain, ReductionEdgeInfo, ReductionGraph, ReductionPath, ReductionStep, TraversalDirection, diff --git a/src/rules/registry.rs b/src/rules/registry.rs index 29dbfe3c5..f49dd8b66 100644 --- a/src/rules/registry.rs +++ b/src/rules/registry.rs @@ -110,10 +110,6 @@ pub struct ReductionEntry { pub overhead_fn: fn() -> ReductionOverhead, /// Module path where the reduction is defined (from `module_path!()`). pub module_path: &'static str, - /// Type-level problem size field names for the source problem. - pub source_size_names_fn: fn() -> &'static [&'static str], - /// Type-level problem size field names for the target problem. - pub target_size_names_fn: fn() -> &'static [&'static str], /// Type-erased reduction executor. /// Takes a `&dyn Any` (must be `&SourceType`), calls `ReduceTo::reduce_to()`, /// and returns the result as a boxed `DynReductionResult`. diff --git a/src/traits.rs b/src/traits.rs index cba13e442..635718c0c 100644 --- a/src/traits.rs +++ b/src/traits.rs @@ -22,18 +22,6 @@ pub trait Problem: Clone { /// Used for generating variant IDs in the reduction graph schema.
/// Returns pairs like `[("graph", "SimpleGraph"), ("weight", "i32")]`. fn variant() -> Vec<(&'static str, &'static str)>; - /// Type-level: fixed field names for this problem type's size metrics. - /// - /// Every instance of this problem type uses the same set of field names, - /// so this is a static method. - fn problem_size_names() -> &'static [&'static str]; - /// Instance-level: values for each size field (same order as `problem_size_names()`). - fn problem_size_values(&self) -> Vec; -} - -/// Combine type-level names and instance-level values into a [`crate::types::ProblemSize`]. -pub fn problem_size(p: &P) -> crate::types::ProblemSize { - crate::types::ProblemSize::from_names_values(P::problem_size_names(), &p.problem_size_values()) } /// Extension for problems with a numeric objective to optimize. diff --git a/src/types.rs b/src/types.rs index 3b5c537c7..1daca23c1 100644 --- a/src/types.rs +++ b/src/types.rs @@ -200,27 +200,6 @@ impl ProblemSize { } } - /// Create from separate names and values arrays. - /// - /// This is the primary constructor used by the `problem_size()` free function, - /// combining type-level names with instance-level values. - pub fn from_names_values(names: &[&str], values: &[usize]) -> Self { - assert_eq!( - names.len(), - values.len(), - "ProblemSize: names ({}) and values ({}) length mismatch", - names.len(), - values.len() - ); - Self { - components: names - .iter() - .zip(values.iter()) - .map(|(k, v)| (k.to_string(), *v)) - .collect(), - } - } - /// Get a size component by name. pub fn get(&self, name: &str) -> Option { self.components diff --git a/src/unit_tests/reduction_graph.rs b/src/unit_tests/reduction_graph.rs index 2f013f461..24671b8ab 100644 --- a/src/unit_tests/reduction_graph.rs +++ b/src/unit_tests/reduction_graph.rs @@ -1,11 +1,9 @@ //! Tests for ReductionGraph: discovery, path finding, and typed API. 
use crate::models::satisfiability::KSatisfiability; -use crate::poly; use crate::prelude::*; use crate::rules::{MinimizeSteps, ReductionGraph, TraversalDirection}; use crate::topology::{SimpleGraph, TriangularSubgraph}; -use crate::traits::problem_size; use crate::types::ProblemSize; use crate::variant::K3; use std::collections::BTreeMap; @@ -291,17 +289,14 @@ fn test_3sat_to_mis_triangular_overhead() { ); // 3-SAT instance: 3 variables, 2 clauses, 6 literals - let source = KSatisfiability::<K3>::new( + let _source = KSatisfiability::<K3>::new( 3, vec![ CNFClause::new(vec![1, 2, 3]), CNFClause::new(vec![-1, -2, -3]), ], ); - let input_size = problem_size(&source); - assert_eq!(input_size.get("num_vars"), Some(3)); - assert_eq!(input_size.get("num_clauses"), Some(2)); - assert_eq!(input_size.get("num_literals"), Some(6)); + let input_size = ProblemSize::new(vec![("num_vars", 3), ("num_clauses", 2), ("num_literals", 6)]); // Find the shortest path let path = graph @@ -370,79 +365,6 @@ fn test_3sat_to_mis_triangular_overhead() { assert_eq!(composed.get("num_edges").unwrap().eval(&test_size), 36.0); } -// ---- Overhead validation ---- - -#[test] -fn test_validate_overhead_variables_valid() { - use crate::rules::registry::ReductionOverhead; - use crate::rules::validate_overhead_variables; - - let overhead = ReductionOverhead::new(vec![ - ("num_vertices", poly!(num_vars)), - ("num_edges", poly!(num_vars ^ 2)), - ]); - // Should not panic: inputs {num_vars} ⊆ source, outputs {num_vertices, num_edges} ⊆ target - validate_overhead_variables( - "Source", - "Target", - &overhead, - &["num_vars", "num_clauses"], - &["num_vertices", "num_edges"], - ); -} - -#[test] -#[should_panic(expected = "overhead references input variables")] -fn test_validate_overhead_variables_missing_input() { - use crate::rules::registry::ReductionOverhead; - use crate::rules::validate_overhead_variables; - - let overhead = ReductionOverhead::new(vec![("num_vertices", poly!(num_colors))]); -
validate_overhead_variables( - "Source", - "Target", - &overhead, - &["num_vars", "num_clauses"], // no "num_colors" - &["num_vertices"], - ); -} - -#[test] -#[should_panic(expected = "overhead output fields")] -fn test_validate_overhead_variables_missing_output() { - use crate::rules::registry::ReductionOverhead; - use crate::rules::validate_overhead_variables; - - let overhead = ReductionOverhead::new(vec![("num_gates", poly!(num_vars))]); - validate_overhead_variables( - "Source", - "Target", - &overhead, - &["num_vars"], - &["num_vertices", "num_edges"], // no "num_gates" - ); -} - -#[test] -fn test_validate_overhead_variables_skips_output_when_empty() { - use crate::rules::registry::ReductionOverhead; - use crate::rules::validate_overhead_variables; - - let overhead = ReductionOverhead::new(vec![("anything", poly!(num_vars))]); - // Should not panic: target_size_names is empty so output check is skipped - validate_overhead_variables("Source", "Target", &overhead, &["num_vars"], &[]); -} - -#[test] -fn test_validate_overhead_variables_identity() { - use crate::rules::registry::ReductionOverhead; - use crate::rules::validate_overhead_variables; - - let names = &["num_vertices", "num_edges"]; - let overhead = ReductionOverhead::identity(names); - validate_overhead_variables("A", "B", &overhead, names, names); -} - // ---- k-neighbor BFS ---- #[test] diff --git a/src/unit_tests/rules/reduction_path_parity.rs b/src/unit_tests/rules/reduction_path_parity.rs index c3813a286..a655b418e 100644 --- a/src/unit_tests/rules/reduction_path_parity.rs +++ b/src/unit_tests/rules/reduction_path_parity.rs @@ -8,7 +8,7 @@ use crate::models::specialized::Factoring; use crate::rules::{MinimizeSteps, ReductionGraph}; use crate::solvers::{BruteForce, Solver}; use crate::topology::SimpleGraph; -use crate::traits::{problem_size, Problem}; +use crate::traits::Problem; use crate::types::ProblemSize; use std::collections::HashSet; @@ -170,8 +170,8 @@ fn 
test_jl_parity_factoring_to_spinglass_path() { ); } -/// Test that `find_cheapest_path` works with a real `problem_size()` from a -/// constructed problem instance, rather than an empty `ProblemSize::new(vec![])`. +/// Test that `find_cheapest_path` works with a concrete `ProblemSize` input, +/// rather than an empty `ProblemSize::new(vec![])`. #[test] fn test_find_cheapest_path_with_problem_size() { let graph = ReductionGraph::new(); @@ -195,26 +195,21 @@ fn test_find_cheapest_path_with_problem_size() { (7, 9), ], ); - let source = MaxCut::::unweighted(petersen); + let _source = MaxCut::::unweighted(petersen); let src_var = ReductionGraph::variant_to_map(&MaxCut::::variant()); let dst_var = ReductionGraph::variant_to_map(&SpinGlass::::variant()); - // Use source.problem_size() instead of ProblemSize::new(vec![]) + let input_size = ProblemSize::new(vec![("num_vertices", 10), ("num_edges", 15)]); let rpath = graph .find_cheapest_path( "MaxCut", &src_var, "SpinGlass", &dst_var, - &problem_size(&source), + &input_size, &MinimizeSteps, ) .expect("Should find path MaxCut -> SpinGlass"); assert!(!rpath.type_names().is_empty()); - - // Verify problem_size has expected components - let size = problem_size(&source); - assert_eq!(size.get("num_vertices"), Some(10)); - assert_eq!(size.get("num_edges"), Some(15)); } diff --git a/src/unit_tests/rules/registry.rs b/src/unit_tests/rules/registry.rs index 6a33f912b..9e79f7882 100644 --- a/src/unit_tests/rules/registry.rs +++ b/src/unit_tests/rules/registry.rs @@ -32,8 +32,6 @@ fn test_reduction_entry_overhead() { target_variant_fn: || vec![("graph", "SimpleGraph"), ("weight", "One")], overhead_fn: || ReductionOverhead::new(vec![("n", poly!(2 * n))]), module_path: "test::module", - source_size_names_fn: || &["n"], - target_size_names_fn: || &["n"], reduce_fn: dummy_reduce_fn, }; @@ -52,8 +50,6 @@ fn test_reduction_entry_debug() { target_variant_fn: || vec![("graph", "SimpleGraph"), ("weight", "One")], overhead_fn: || 
ReductionOverhead::default(), module_path: "test::module", - source_size_names_fn: || &[], - target_size_names_fn: || &[], reduce_fn: dummy_reduce_fn, }; @@ -71,8 +67,6 @@ fn test_is_base_reduction_unweighted() { target_variant_fn: || vec![("graph", "SimpleGraph"), ("weight", "One")], overhead_fn: || ReductionOverhead::default(), module_path: "test::module", - source_size_names_fn: || &[], - target_size_names_fn: || &[], reduce_fn: dummy_reduce_fn, }; assert!(entry.is_base_reduction()); @@ -87,8 +81,6 @@ fn test_is_base_reduction_source_weighted() { target_variant_fn: || vec![("graph", "SimpleGraph"), ("weight", "One")], overhead_fn: || ReductionOverhead::default(), module_path: "test::module", - source_size_names_fn: || &[], - target_size_names_fn: || &[], reduce_fn: dummy_reduce_fn, }; assert!(!entry.is_base_reduction()); @@ -103,8 +95,6 @@ fn test_is_base_reduction_target_weighted() { target_variant_fn: || vec![("graph", "SimpleGraph"), ("weight", "f64")], overhead_fn: || ReductionOverhead::default(), module_path: "test::module", - source_size_names_fn: || &[], - target_size_names_fn: || &[], reduce_fn: dummy_reduce_fn, }; assert!(!entry.is_base_reduction()); @@ -119,8 +109,6 @@ fn test_is_base_reduction_both_weighted() { target_variant_fn: || vec![("graph", "SimpleGraph"), ("weight", "f64")], overhead_fn: || ReductionOverhead::default(), module_path: "test::module", - source_size_names_fn: || &[], - target_size_names_fn: || &[], reduce_fn: dummy_reduce_fn, }; assert!(!entry.is_base_reduction()); @@ -136,8 +124,6 @@ fn test_is_base_reduction_no_weight_key() { target_variant_fn: || vec![("graph", "SimpleGraph")], overhead_fn: || ReductionOverhead::default(), module_path: "test::module", - source_size_names_fn: || &[], - target_size_names_fn: || &[], reduce_fn: dummy_reduce_fn, }; assert!(entry.is_base_reduction()); diff --git a/src/unit_tests/rules/traits.rs b/src/unit_tests/rules/traits.rs index 1e900dca7..ab3bff0c8 100644 --- a/src/unit_tests/rules/traits.rs 
+++ b/src/unit_tests/rules/traits.rs
@@ -23,12 +23,6 @@ impl Problem for SourceProblem {
     fn variant() -> Vec<(&'static str, &'static str)> {
         vec![("graph", "SimpleGraph"), ("weight", "i32")]
     }
-    fn problem_size_names() -> &'static [&'static str] {
-        &["num_vars"]
-    }
-    fn problem_size_values(&self) -> Vec {
-        vec![2]
-    }
 }
 
 impl Problem for TargetProblem {
@@ -43,12 +37,6 @@ impl Problem for TargetProblem {
     fn variant() -> Vec<(&'static str, &'static str)> {
         vec![("graph", "SimpleGraph"), ("weight", "i32")]
     }
-    fn problem_size_names() -> &'static [&'static str] {
-        &["num_vars"]
-    }
-    fn problem_size_values(&self) -> Vec {
-        vec![2]
-    }
 }
 
 #[derive(Clone)]
diff --git a/src/unit_tests/solvers/brute_force.rs b/src/unit_tests/solvers/brute_force.rs
index baa945d08..19561c01f 100644
--- a/src/unit_tests/solvers/brute_force.rs
+++ b/src/unit_tests/solvers/brute_force.rs
@@ -27,12 +27,6 @@ impl Problem for MaxSumOpt {
     fn variant() -> Vec<(&'static str, &'static str)> {
         vec![("graph", "SimpleGraph"), ("weight", "i32")]
     }
-    fn problem_size_names() -> &'static [&'static str] {
-        &["num_vars"]
-    }
-    fn problem_size_values(&self) -> Vec {
-        vec![self.weights.len()]
-    }
 }
 
 impl OptimizationProblem for MaxSumOpt {
@@ -66,12 +60,6 @@ impl Problem for MinSumOpt {
     fn variant() -> Vec<(&'static str, &'static str)> {
         vec![("graph", "SimpleGraph"), ("weight", "i32")]
     }
-    fn problem_size_names() -> &'static [&'static str] {
-        &["num_vars"]
-    }
-    fn problem_size_values(&self) -> Vec {
-        vec![self.weights.len()]
-    }
 }
 
 impl OptimizationProblem for MinSumOpt {
@@ -100,12 +88,6 @@ impl Problem for SatProblem {
     fn variant() -> Vec<(&'static str, &'static str)> {
         vec![("graph", "SimpleGraph"), ("weight", "bool")]
     }
-    fn problem_size_names() -> &'static [&'static str] {
-        &["num_vars"]
-    }
-    fn problem_size_values(&self) -> Vec {
-        vec![self.num_vars]
-    }
 }
 
 #[test]
diff --git a/src/unit_tests/traits.rs b/src/unit_tests/traits.rs
index 069597f22..c59957d61 100644
--- a/src/unit_tests/traits.rs
+++ b/src/unit_tests/traits.rs
@@ -21,12 +21,6 @@ impl Problem for TestSatProblem {
     fn variant() -> Vec<(&'static str, &'static str)> {
         vec![("graph", "SimpleGraph"), ("weight", "bool")]
     }
-    fn problem_size_names() -> &'static [&'static str] {
-        &["num_vars"]
-    }
-    fn problem_size_values(&self) -> Vec {
-        vec![self.num_vars]
-    }
 }
 
 #[test]
@@ -85,12 +79,6 @@ impl Problem for TestMaxProblem {
     fn variant() -> Vec<(&'static str, &'static str)> {
         vec![("graph", "SimpleGraph"), ("weight", "i32")]
     }
-    fn problem_size_names() -> &'static [&'static str] {
-        &["num_vars"]
-    }
-    fn problem_size_values(&self) -> Vec {
-        vec![self.weights.len()]
-    }
 }
 
 impl OptimizationProblem for TestMaxProblem {
@@ -123,12 +111,6 @@ impl Problem for TestMinProblem {
     fn variant() -> Vec<(&'static str, &'static str)> {
         vec![("graph", "SimpleGraph"), ("weight", "i32")]
     }
-    fn problem_size_names() -> &'static [&'static str] {
-        &["num_vars"]
-    }
-    fn problem_size_values(&self) -> Vec {
-        vec![self.costs.len()]
-    }
 }
 
 impl OptimizationProblem for TestMinProblem {
@@ -179,12 +161,6 @@ impl Problem for MultiDimProblem {
     fn variant() -> Vec<(&'static str, &'static str)> {
         vec![("graph", "SimpleGraph"), ("weight", "i32")]
     }
-    fn problem_size_names() -> &'static [&'static str] {
-        &["num_dims"]
-    }
-    fn problem_size_values(&self) -> Vec {
-        vec![self.dims.len()]
-    }
 }
 
 #[test]
@@ -234,12 +210,6 @@ impl Problem for FloatProblem {
     fn variant() -> Vec<(&'static str, &'static str)> {
         vec![("graph", "SimpleGraph"), ("weight", "f64")]
     }
-    fn problem_size_names() -> &'static [&'static str] {
-        &["num_vars"]
-    }
-    fn problem_size_values(&self) -> Vec {
-        vec![self.weights.len()]
-    }
 }
 
 impl OptimizationProblem for FloatProblem {

From 35fc74ce4164c3623c4301f135b4c8a7c24dfc45 Mon Sep 17 00:00:00 2001
From: GiggleLiu
Date: Thu, 26 Feb 2026 07:35:39 +0800
Subject: [PATCH 10/15] refactor: remove Polynomial type and poly! macro

Delete src/polynomial.rs and its tests.
Remove From impl on Expr and from_polynomials() bridge on ReductionOverhead. Update remaining registry tests to use Expr directly. Co-Authored-By: Claude Opus 4.6 --- src/expr.rs | 34 --- src/lib.rs | 1 - src/polynomial.rs | 342 ------------------------------- src/rules/graph.rs | 8 +- src/rules/registry.rs | 12 -- src/unit_tests/polynomial.rs | 178 ---------------- src/unit_tests/rules/registry.rs | 11 +- 7 files changed, 12 insertions(+), 574 deletions(-) delete mode 100644 src/polynomial.rs delete mode 100644 src/unit_tests/polynomial.rs diff --git a/src/expr.rs b/src/expr.rs index 156e63af0..e81035d24 100644 --- a/src/expr.rs +++ b/src/expr.rs @@ -166,40 +166,6 @@ impl std::ops::Add for Expr { } } -impl From for Expr { - fn from(poly: crate::polynomial::Polynomial) -> Self { - let terms: Vec = poly - .terms - .iter() - .map(|mono| { - // Build monomial: coefficient * Π(var^exp) - let mut expr = Expr::Const(mono.coefficient); - for &(name, exp) in &mono.variables { - let var_expr = if exp == 1 { - Expr::Var(name) - } else { - Expr::pow(Expr::Var(name), Expr::Const(exp as f64)) - }; - expr = Expr::mul(expr, var_expr); - } - // Simplify `1.0 * x` to just `x` for single-variable monomials - if let Expr::Mul(ref a, ref b) = expr { - if matches!(a.as_ref(), Expr::Const(c) if (*c - 1.0).abs() < 1e-15) { - return b.as_ref().clone(); - } - } - expr - }) - .collect(); - - if terms.is_empty() { - return Expr::Const(0.0); - } - - terms.into_iter().reduce(Expr::add).unwrap() - } -} - #[cfg(test)] #[path = "unit_tests/expr.rs"] mod tests; diff --git a/src/lib.rs b/src/lib.rs index 23e098865..3224f5c5f 100644 --- a/src/lib.rs +++ b/src/lib.rs @@ -23,7 +23,6 @@ pub mod export; pub mod io; pub mod models; pub(crate) mod expr; -pub(crate) mod polynomial; pub mod registry; pub mod rules; pub mod solvers; diff --git a/src/polynomial.rs b/src/polynomial.rs deleted file mode 100644 index 697f3f0e9..000000000 --- a/src/polynomial.rs +++ /dev/null @@ -1,342 +0,0 @@ -//! 
Polynomial representation for reduction overhead. - -use crate::types::ProblemSize; -use std::collections::{HashMap, HashSet}; -use std::fmt; -use std::ops::Add; - -/// A monomial: coefficient × Π(variable^exponent) -#[derive(Clone, Debug, PartialEq, serde::Serialize)] -pub struct Monomial { - pub coefficient: f64, - pub variables: Vec<(&'static str, u8)>, -} - -impl Monomial { - pub fn constant(c: f64) -> Self { - Self { - coefficient: c, - variables: vec![], - } - } - - pub fn var(name: &'static str) -> Self { - Self { - coefficient: 1.0, - variables: vec![(name, 1)], - } - } - - pub fn var_pow(name: &'static str, exp: u8) -> Self { - Self { - coefficient: 1.0, - variables: vec![(name, exp)], - } - } - - pub fn scale(mut self, c: f64) -> Self { - self.coefficient *= c; - self - } - - pub fn evaluate(&self, size: &ProblemSize) -> f64 { - let var_product: f64 = self - .variables - .iter() - .map(|(name, exp)| { - let val = size.get(name).unwrap_or(0) as f64; - val.powi(*exp as i32) - }) - .product(); - self.coefficient * var_product - } - - /// Multiply two monomials. - pub fn mul(&self, other: &Monomial) -> Monomial { - let mut variables = self.variables.clone(); - variables.extend_from_slice(&other.variables); - Monomial { - coefficient: self.coefficient * other.coefficient, - variables, - } - } - - /// Normalize: sort variables by name, merge duplicate entries. - pub fn normalize(&mut self) { - self.variables.sort_by_key(|(name, _)| *name); - let mut merged: Vec<(&'static str, u8)> = Vec::new(); - for &(name, exp) in &self.variables { - if let Some(last) = merged.last_mut() { - if last.0 == name { - last.1 += exp; - continue; - } - } - merged.push((name, exp)); - } - // Remove zero-exponent variables - merged.retain(|&(_, exp)| exp > 0); - self.variables = merged; - } - - /// Variable signature for like-term comparison (after normalization). 
- fn var_signature(&self) -> &[(&'static str, u8)] { - &self.variables - } -} - -/// A polynomial: Σ monomials -#[derive(Clone, Debug, PartialEq, serde::Serialize)] -pub struct Polynomial { - pub terms: Vec, -} - -impl Polynomial { - pub fn zero() -> Self { - Self { terms: vec![] } - } - - pub fn constant(c: f64) -> Self { - Self { - terms: vec![Monomial::constant(c)], - } - } - - pub fn var(name: &'static str) -> Self { - Self { - terms: vec![Monomial::var(name)], - } - } - - pub fn var_pow(name: &'static str, exp: u8) -> Self { - Self { - terms: vec![Monomial::var_pow(name, exp)], - } - } - - /// Create a polynomial with a single monomial that is a product of two variables. - pub fn var_product(a: &'static str, b: &'static str) -> Self { - Self { - terms: vec![Monomial { - coefficient: 1.0, - variables: vec![(a, 1), (b, 1)], - }], - } - } - - pub fn scale(mut self, c: f64) -> Self { - for term in &mut self.terms { - term.coefficient *= c; - } - self - } - - pub fn evaluate(&self, size: &ProblemSize) -> f64 { - self.terms.iter().map(|m| m.evaluate(size)).sum() - } - - /// Collect all variable names referenced by this polynomial. - pub fn variable_names(&self) -> HashSet<&'static str> { - self.terms - .iter() - .flat_map(|m| m.variables.iter().map(|(name, _)| *name)) - .collect() - } - - /// Multiply two polynomials. - pub fn mul(&self, other: &Polynomial) -> Polynomial { - let mut terms = Vec::new(); - for a in &self.terms { - for b in &other.terms { - terms.push(a.mul(b)); - } - } - let mut result = Polynomial { terms }; - result.normalize(); - result - } - - /// Raise to a non-negative integer power. - pub fn pow(&self, n: u8) -> Polynomial { - match n { - 0 => Polynomial::constant(1.0), - 1 => self.clone(), - _ => { - let mut result = self.clone(); - for _ in 1..n { - result = result.mul(self); - } - result - } - } - } - - /// Substitute variables with polynomials. 
- /// - /// Each variable in the polynomial is replaced by the corresponding - /// polynomial from the mapping. Variables not in the mapping are left as-is. - pub fn substitute(&self, mapping: &HashMap<&str, &Polynomial>) -> Polynomial { - let mut result = Polynomial::zero(); - for mono in &self.terms { - // Start with the coefficient - let mut term_poly = Polynomial::constant(mono.coefficient); - // Multiply by each variable's substitution raised to its exponent - for &(name, exp) in &mono.variables { - let var_poly = if let Some(&replacement) = mapping.get(name) { - replacement.pow(exp) - } else { - Polynomial::var_pow(name, exp) - }; - term_poly = term_poly.mul(&var_poly); - } - result = result + term_poly; - } - result.normalize(); - result - } - - /// Normalize: normalize all monomials, then combine like terms. - pub fn normalize(&mut self) { - for term in &mut self.terms { - term.normalize(); - } - // Combine like terms - let mut combined: Vec = Vec::new(); - for term in &self.terms { - if let Some(existing) = combined - .iter_mut() - .find(|m| m.var_signature() == term.var_signature()) - { - existing.coefficient += term.coefficient; - } else { - combined.push(term.clone()); - } - } - // Remove zero-coefficient terms - combined.retain(|m| m.coefficient.abs() > 1e-15); - self.terms = combined; - } - - /// Return a normalized copy. 
- pub fn normalized(&self) -> Polynomial { - let mut p = self.clone(); - p.normalize(); - p - } -} - -impl fmt::Display for Monomial { - fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { - let coeff_i = self.coefficient.round() as i64; - let is_int = (self.coefficient - coeff_i as f64).abs() < 1e-10; - if self.variables.is_empty() { - if is_int { - write!(f, "{coeff_i}") - } else { - write!(f, "{}", self.coefficient) - } - } else { - let has_coeff = if is_int { - match coeff_i { - 1 => false, - -1 => { - write!(f, "-")?; - false - } - _ => { - write!(f, "{coeff_i}")?; - true - } - } - } else { - write!(f, "{}", self.coefficient)?; - true - }; - for (i, (name, exp)) in self.variables.iter().enumerate() { - if has_coeff || i > 0 { - write!(f, " * ")?; - } - write!(f, "{name}")?; - if *exp > 1 { - write!(f, "^{exp}")?; - } - } - Ok(()) - } - } -} - -impl fmt::Display for Polynomial { - fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { - if self.terms.is_empty() { - write!(f, "0") - } else { - for (i, term) in self.terms.iter().enumerate() { - if i > 0 { - if term.coefficient < 0.0 { - write!(f, " - ")?; - let negated = Monomial { - coefficient: -term.coefficient, - variables: term.variables.clone(), - }; - write!(f, "{negated}")?; - } else { - write!(f, " + ")?; - write!(f, "{term}")?; - } - } else { - write!(f, "{term}")?; - } - } - Ok(()) - } - } -} - -impl Add for Polynomial { - type Output = Self; - - fn add(mut self, other: Self) -> Self { - self.terms.extend(other.terms); - self - } -} - -/// Convenience macro for building overhead expressions. -/// -/// Produces `Expr` values (via `From` conversion). -#[macro_export] -macro_rules! 
poly { - // Single variable: poly!(n) - ($name:ident) => { - $crate::expr::Expr::from($crate::polynomial::Polynomial::var(stringify!($name))) - }; - // Variable with exponent: poly!(n^2) - ($name:ident ^ $exp:literal) => { - $crate::expr::Expr::from($crate::polynomial::Polynomial::var_pow(stringify!($name), $exp)) - }; - // Constant: poly!(5) - ($c:literal) => { - $crate::expr::Expr::from($crate::polynomial::Polynomial::constant($c as f64)) - }; - // Scaled variable: poly!(3 * n) - ($c:literal * $name:ident) => { - $crate::expr::Expr::from($crate::polynomial::Polynomial::var(stringify!($name)).scale($c as f64)) - }; - // Scaled variable with exponent: poly!(9 * n^2) - ($c:literal * $name:ident ^ $exp:literal) => { - $crate::expr::Expr::from($crate::polynomial::Polynomial::var_pow(stringify!($name), $exp).scale($c as f64)) - }; - // Product of two variables: poly!(a * b) - ($a:ident * $b:ident) => { - $crate::expr::Expr::from($crate::polynomial::Polynomial::var_product(stringify!($a), stringify!($b))) - }; - // Scaled product of two variables: poly!(3 * a * b) - ($c:literal * $a:ident * $b:ident) => { - $crate::expr::Expr::from($crate::polynomial::Polynomial::var_product(stringify!($a), stringify!($b)).scale($c as f64)) - }; -} - -#[cfg(test)] -#[path = "unit_tests/polynomial.rs"] -mod tests; diff --git a/src/rules/graph.rs b/src/rules/graph.rs index 8b4e97ef8..a95293311 100644 --- a/src/rules/graph.rs +++ b/src/rules/graph.rs @@ -97,7 +97,7 @@ pub(crate) struct EdgeJson { pub(crate) source: usize, /// Index into the `nodes` array for the target problem variant. pub(crate) target: usize, - /// Reduction overhead: output size as polynomials of input size. + /// Reduction overhead: output size as expressions of input size. pub(crate) overhead: Vec, /// Relative rustdoc path for the reduction module. pub(crate) doc_path: String, @@ -538,7 +538,7 @@ impl ReductionGraph { self.nodes.len() } - /// Get the per-edge overhead polynomials along a reduction path. 
+    /// Get the per-edge overhead expressions along a reduction path.
     ///
     /// Returns one `ReductionOverhead` per edge (i.e., `path.steps.len() - 1` items).
     ///
@@ -575,7 +575,7 @@ impl ReductionGraph {
     /// Compose overheads along a path symbolically.
     ///
-    /// Returns a single `ReductionOverhead` whose polynomials map from the
+    /// Returns a single `ReductionOverhead` whose expressions map from the
     /// source problem's size variables directly to the final target's size variables.
     pub fn compose_path_overhead(&self, path: &ReductionPath) -> ReductionOverhead {
         self.path_overheads(path)
@@ -984,7 +984,7 @@ impl ReductionGraph {
     /// falls back to a name-only match (returning the first entry whose source and
     /// target names match). This is intentional: specific variants (e.g., `K3`) may
     /// not have their own `#[reduction]` entry, but the general variant (`KN`) covers
-    /// them with the same overhead polynomial. The fallback is safe because cross-name
+    /// them with the same overhead expression. The fallback is safe because cross-name
     /// reductions share the same overhead regardless of source variant; it is only
     /// used by the JSON export pipeline (`export::lookup_overhead`).
     pub fn find_best_entry(
diff --git a/src/rules/registry.rs b/src/rules/registry.rs
index f49dd8b66..c129379d2 100644
--- a/src/rules/registry.rs
+++ b/src/rules/registry.rs
@@ -19,18 +19,6 @@ impl ReductionOverhead {
         Self { output_size }
     }
 
-    /// Construct from legacy Polynomial-based overhead.
-    pub fn from_polynomials(
-        output_size: Vec<(&'static str, crate::polynomial::Polynomial)>,
-    ) -> Self {
-        Self {
-            output_size: output_size
-                .into_iter()
-                .map(|(name, poly)| (name, Expr::from(poly)))
-                .collect(),
-        }
-    }
-
     /// Identity overhead: each output field equals the same-named input field.
     /// Used by variant cast reductions where problem size doesn't change.
pub fn identity(fields: &[&'static str]) -> Self { diff --git a/src/unit_tests/polynomial.rs b/src/unit_tests/polynomial.rs deleted file mode 100644 index be81b0fc9..000000000 --- a/src/unit_tests/polynomial.rs +++ /dev/null @@ -1,178 +0,0 @@ -use super::*; - -#[test] -fn test_monomial_constant() { - let m = Monomial::constant(5.0); - let size = ProblemSize::new(vec![("n", 10)]); - assert_eq!(m.evaluate(&size), 5.0); -} - -#[test] -fn test_monomial_variable() { - let m = Monomial::var("n"); - let size = ProblemSize::new(vec![("n", 10)]); - assert_eq!(m.evaluate(&size), 10.0); -} - -#[test] -fn test_monomial_var_pow() { - let m = Monomial::var_pow("n", 2); - let size = ProblemSize::new(vec![("n", 5)]); - assert_eq!(m.evaluate(&size), 25.0); -} - -#[test] -fn test_polynomial_add() { - // 3n + 2m - let p = Polynomial::var("n").scale(3.0) + Polynomial::var("m").scale(2.0); - - let size = ProblemSize::new(vec![("n", 10), ("m", 5)]); - assert_eq!(p.evaluate(&size), 40.0); // 3*10 + 2*5 -} - -#[test] -fn test_polynomial_complex() { - // n^2 + 3m - let p = Polynomial::var_pow("n", 2) + Polynomial::var("m").scale(3.0); - - let size = ProblemSize::new(vec![("n", 4), ("m", 2)]); - assert_eq!(p.evaluate(&size), 22.0); // 16 + 6 -} - -#[test] -fn test_poly_macro() { - let size = ProblemSize::new(vec![("n", 5), ("m", 3)]); - - assert_eq!(poly!(n).eval(&size), 5.0); - assert_eq!(poly!(n ^ 2).eval(&size), 25.0); - assert_eq!(poly!(3 * n).eval(&size), 15.0); - assert_eq!(poly!(2 * m ^ 2).eval(&size), 18.0); -} - -#[test] -fn test_missing_variable() { - let p = Polynomial::var("missing"); - let size = ProblemSize::new(vec![("n", 10)]); - assert_eq!(p.evaluate(&size), 0.0); // missing var = 0 -} - -#[test] -fn test_polynomial_zero() { - let p = Polynomial::zero(); - let size = ProblemSize::new(vec![("n", 100)]); - assert_eq!(p.evaluate(&size), 0.0); -} - -#[test] -fn test_polynomial_constant() { - let p = Polynomial::constant(42.0); - let size = ProblemSize::new(vec![("n", 100)]); - 
assert_eq!(p.evaluate(&size), 42.0); -} - -#[test] -fn test_monomial_scale() { - let m = Monomial::var("n").scale(3.0); - let size = ProblemSize::new(vec![("n", 10)]); - assert_eq!(m.evaluate(&size), 30.0); -} - -#[test] -fn test_polynomial_scale() { - let p = Polynomial::var("n").scale(5.0); - let size = ProblemSize::new(vec![("n", 10)]); - assert_eq!(p.evaluate(&size), 50.0); -} - -#[test] -fn test_monomial_multi_variable() { - // n * m^2 - let m = Monomial { - coefficient: 1.0, - variables: vec![("n", 1), ("m", 2)], - }; - let size = ProblemSize::new(vec![("n", 2), ("m", 3)]); - assert_eq!(m.evaluate(&size), 18.0); // 2 * 9 -} - -#[test] -fn test_display_monomial_constant_int() { - assert_eq!(format!("{}", Monomial::constant(5.0)), "5"); -} - -#[test] -fn test_display_monomial_constant_float() { - assert_eq!(format!("{}", Monomial::constant(3.5)), "3.5"); -} - -#[test] -fn test_display_monomial_single_var() { - assert_eq!(format!("{}", Monomial::var("n")), "n"); -} - -#[test] -fn test_display_monomial_neg_one_coeff() { - assert_eq!(format!("{}", Monomial::var("n").scale(-1.0)), "-n"); -} - -#[test] -fn test_display_monomial_scaled_var() { - assert_eq!(format!("{}", Monomial::var("n").scale(3.0)), "3 * n"); -} - -#[test] -fn test_display_monomial_var_pow() { - assert_eq!(format!("{}", Monomial::var_pow("n", 2)), "n^2"); -} - -#[test] -fn test_display_monomial_multi_var() { - let m = Monomial { - coefficient: 2.0, - variables: vec![("n", 1), ("m", 2)], - }; - assert_eq!(format!("{m}"), "2 * n * m^2"); -} - -#[test] -fn test_display_monomial_float_coeff_var() { - let m = Monomial { - coefficient: 1.5, - variables: vec![("n", 1)], - }; - assert_eq!(format!("{m}"), "1.5 * n"); -} - -#[test] -fn test_display_polynomial_zero() { - assert_eq!(format!("{}", Polynomial::zero()), "0"); -} - -#[test] -fn test_display_polynomial_single_term() { - assert_eq!(format!("{}", Polynomial::var("n").scale(3.0)), "3 * n"); -} - -#[test] -fn test_display_polynomial_addition() { - let 
p = Polynomial::var("n").scale(3.0) + Polynomial::var("m").scale(2.0); - assert_eq!(format!("{p}"), "3 * n + 2 * m"); -} - -#[test] -fn test_display_polynomial_subtraction() { - let p = Polynomial::var("n").scale(3.0) + Polynomial::var("m").scale(-2.0); - assert_eq!(format!("{p}"), "3 * n - 2 * m"); -} - -#[test] -fn test_poly_macro_product() { - let size = ProblemSize::new(vec![("a", 3), ("b", 4)]); - assert_eq!(poly!(a * b).eval(&size), 12.0); -} - -#[test] -fn test_poly_macro_scaled_product() { - let size = ProblemSize::new(vec![("a", 3), ("b", 4)]); - assert_eq!(poly!(5 * a * b).eval(&size), 60.0); -} diff --git a/src/unit_tests/rules/registry.rs b/src/unit_tests/rules/registry.rs index 9e79f7882..792332497 100644 --- a/src/unit_tests/rules/registry.rs +++ b/src/unit_tests/rules/registry.rs @@ -1,5 +1,5 @@ use super::*; -use crate::poly; +use crate::expr::Expr; /// Dummy reduce_fn for unit tests that don't exercise runtime reduction. fn dummy_reduce_fn(_: &dyn std::any::Any) -> Box { @@ -8,7 +8,10 @@ fn dummy_reduce_fn(_: &dyn std::any::Any) -> Box Date: Thu, 26 Feb 2026 07:43:43 +0800 Subject: [PATCH 11/15] docs: update exports and documentation for new overhead system Fix formatting, suppress dead_code warning in parser, update CLAUDE.md and design.md to reflect removal of problem_size_names/values and addition of Expr-based overhead system. 
Co-Authored-By: Claude Opus 4.6
---
 .claude/CLAUDE.md                           | 18 +++++++++++++++---
 docs/src/design.md                          |  4 +---
 problemreductions-cli/src/mcp/prompts.rs    | 13 +++----------
 problemreductions-macros/src/parser.rs      |  9 +++------
 src/lib.rs                                  |  2 +-
 src/models/graph/kcoloring.rs               |  1 -
 src/models/graph/max_cut.rs                 |  1 -
 src/models/graph/maximal_is.rs              |  1 -
 src/models/graph/maximum_clique.rs          |  1 -
 src/models/graph/maximum_independent_set.rs |  1 -
 src/models/graph/maximum_matching.rs        |  1 -
 src/models/graph/minimum_dominating_set.rs  |  1 -
 src/models/graph/minimum_vertex_cover.rs    |  1 -
 src/models/graph/traveling_salesman.rs      |  1 -
 src/models/optimization/ilp.rs              |  1 -
 src/models/optimization/qubo.rs             |  1 -
 src/models/optimization/spin_glass.rs       |  1 -
 src/models/set/maximum_set_packing.rs       |  1 -
 src/models/set/minimum_set_covering.rs      |  1 -
 src/models/specialized/biclique_cover.rs    |  1 -
 src/models/specialized/bmf.rs               |  1 -
 src/models/specialized/circuit.rs           |  1 -
 src/models/specialized/factoring.rs         |  1 -
 src/models/specialized/paintshop.rs         |  1 -
 src/rules/sat_ksat.rs                       |  2 ++
 src/unit_tests/export.rs                    |  5 ++++-
 src/unit_tests/reduction_graph.rs           | 11 ++++++-----
 27 files changed, 35 insertions(+), 48 deletions(-)

diff --git a/.claude/CLAUDE.md b/.claude/CLAUDE.md
index 0ecc50c96..e2fcd652e 100644
--- a/.claude/CLAUDE.md
+++ b/.claude/CLAUDE.md
@@ -67,9 +67,7 @@ Problem (core trait — all problems must implement)
 ├── fn dims(&self) -> Vec              // config space: [2, 2, 2] for 3 binary variables
 ├── fn evaluate(&self, config) -> Metric
 ├── fn variant() -> Vec<(&str, &str)>  // e.g., [("graph","SimpleGraph"), ("weight","i32")]
-├── fn num_variables(&self) -> usize   // default: dims().len()
-├── fn problem_size_names() -> &[&str] // static field names for size metrics
-└── fn problem_size_values(&self) -> Vec // instance-level size values
+└── fn num_variables(&self) -> usize   // default: dims().len()
 
 OptimizationProblem : Problem> (extension for optimization)
 │
@@ -98,6 +96,20 @@ enum Direction { Maximize, Minimize }
 - Weight management via inherent methods (`weights()`, `set_weights()`, `is_weighted()`), not traits
 - `NumericSize` supertrait bundles common numeric bounds (`Clone + Default + PartialOrd + Num + Zero + Bounded + AddAssign + 'static`)
 
+### Overhead System
+Reduction overhead is expressed using `Expr` AST (in `src/expr.rs`) with the `#[reduction]` macro:
+```rust
+#[reduction(overhead = {
+    num_vertices = "num_vertices + num_clauses",
+    num_edges = "3 * num_clauses",
+})]
+impl ReduceTo for Source { ... }
+```
+- Expression strings are parsed at compile time by a Pratt parser in the proc macro crate
+- Each problem type provides inherent getter methods (e.g., `num_vertices()`, `num_edges()`) that the overhead expressions reference
+- `ReductionOverhead` stores `Vec<(&'static str, Expr)>` — field name to symbolic expression mappings
+- Expressions support: constants, variables, `+`, `*`, `^`, `exp()`, `log()`, `sqrt()`
+
 ### Problem Names
 Problem types use explicit optimization prefixes:
 - `MaximumIndependentSet`, `MaximumClique`, `MaximumMatching`, `MaximumSetPacking`
diff --git a/docs/src/design.md b/docs/src/design.md
index 386f9b6b2..6933a9eb9 100644
--- a/docs/src/design.md
+++ b/docs/src/design.md
@@ -37,8 +37,6 @@ trait Problem: Clone {
     fn evaluate(&self, config: &[usize]) -> Self::Metric;
     fn variant() -> Vec<(&'static str, &'static str)>; // e.g., [("graph", "SimpleGraph"), ("weight", "i32")]
     fn num_variables(&self) -> usize; // default: dims().len()
-    fn problem_size_names() -> &'static [&'static str]; // e.g., ["num_vertices", "num_edges"]
-    fn problem_size_values(&self) -> Vec; // e.g., [10, 15] for a specific instance
 }
 
 trait OptimizationProblem: Problem> {
@@ -49,7 +47,7 @@ trait OptimizationProblem: Problem> {
 trait SatisfactionProblem: Problem {} // marker trait
 ```
 
-- **`Problem`** — the base trait. Every problem declares a `NAME` (e.g., `"MaximumIndependentSet"`). The solver explores the configuration space defined by `dims()` and scores each configuration with `evaluate()`. For example, a 4-vertex MIS has `dims() = [2, 2, 2, 2]` (each vertex is selected or not); `evaluate(&[1, 0, 1, 0])` returns `Valid(2)` if vertices 0 and 2 form an independent set, or `Invalid` if they share an edge. `problem_size_names()` and `problem_size_values()` expose the instance's structural dimensions (e.g., `num_vertices`, `num_edges`) as a `ProblemSize` — used by the reduction graph to evaluate overhead polynomials along a path.
+- **`Problem`** — the base trait. Every problem declares a `NAME` (e.g., `"MaximumIndependentSet"`). The solver explores the configuration space defined by `dims()` and scores each configuration with `evaluate()`. For example, a 4-vertex MIS has `dims() = [2, 2, 2, 2]` (each vertex is selected or not); `evaluate(&[1, 0, 1, 0])` returns `Valid(2)` if vertices 0 and 2 form an independent set, or `Invalid` if they share an edge. Each problem also provides inherent getter methods (e.g., `num_vertices()`, `num_edges()`) used by reduction overhead expressions.
 - **`OptimizationProblem`** — extends `Problem` with a comparable `Value` type and a `direction()` (`Maximize` or `Minimize`).
 - **`SatisfactionProblem`** — constrains `Metric = bool`: `true` if all constraints are satisfied, `false` otherwise.
diff --git a/problemreductions-cli/src/mcp/prompts.rs b/problemreductions-cli/src/mcp/prompts.rs index 88fbf96b6..102e86b51 100644 --- a/problemreductions-cli/src/mcp/prompts.rs +++ b/problemreductions-cli/src/mcp/prompts.rs @@ -25,9 +25,7 @@ pub fn list_prompts() -> Vec { Some(vec![PromptArgument { name: "description".into(), title: None, - description: Some( - "Free-text description of your real-world problem".into(), - ), + description: Some("Free-text description of your real-world problem".into()), required: Some(true), }]), ), @@ -114,9 +112,7 @@ pub fn list_prompts() -> Vec { ), Prompt::new( "overview", - Some( - "Explore the full landscape of NP-hard problems and reductions in the graph", - ), + Some("Explore the full landscape of NP-hard problems and reductions in the graph"), None, ), ] @@ -266,10 +262,7 @@ pub fn get_prompt( .unwrap_or("QUBO"); Some(GetPromptResult { - description: Some(format!( - "Find reduction path from {} to {}", - source, target - )), + description: Some(format!("Find reduction path from {} to {}", source, target)), messages: vec![PromptMessage::new_text( PromptMessageRole::User, format!( diff --git a/problemreductions-macros/src/parser.rs b/problemreductions-macros/src/parser.rs index 2839ca2c7..6a9321662 100644 --- a/problemreductions-macros/src/parser.rs +++ b/problemreductions-macros/src/parser.rs @@ -236,6 +236,7 @@ pub fn parse_expr(input: &str) -> Result { Ok(expr) } +#[allow(dead_code)] impl ParsedExpr { /// Generate TokenStream that constructs an `Expr` value. 
pub fn to_expr_tokens(&self) -> TokenStream { @@ -360,8 +361,7 @@ impl ParsedExpr { a.collect_vars(vars); b.collect_vars(vars); } - ParsedExpr::Neg(a) | ParsedExpr::Exp(a) | ParsedExpr::Log(a) - | ParsedExpr::Sqrt(a) => { + ParsedExpr::Neg(a) | ParsedExpr::Exp(a) | ParsedExpr::Log(a) | ParsedExpr::Sqrt(a) => { a.collect_vars(vars); } } @@ -447,10 +447,7 @@ mod tests { #[test] fn test_parse_neg() { let e = parse_expr("-n").unwrap(); - assert_eq!( - e, - ParsedExpr::Neg(Box::new(ParsedExpr::Var("n".into()))) - ); + assert_eq!(e, ParsedExpr::Neg(Box::new(ParsedExpr::Var("n".into())))); } #[test] diff --git a/src/lib.rs b/src/lib.rs index 3224f5c5f..278b0f3d3 100644 --- a/src/lib.rs +++ b/src/lib.rs @@ -20,9 +20,9 @@ pub mod config; pub mod error; pub mod export; +pub(crate) mod expr; pub mod io; pub mod models; -pub(crate) mod expr; pub mod registry; pub mod rules; pub mod solvers; diff --git a/src/models/graph/kcoloring.rs b/src/models/graph/kcoloring.rs index 7bc2d4fbb..281c1fbd6 100644 --- a/src/models/graph/kcoloring.rs +++ b/src/models/graph/kcoloring.rs @@ -152,7 +152,6 @@ where fn evaluate(&self, config: &[usize]) -> bool { self.is_valid_coloring(config) } - } impl SatisfactionProblem for KColoring {} diff --git a/src/models/graph/max_cut.rs b/src/models/graph/max_cut.rs index d80f05720..6024a33f8 100644 --- a/src/models/graph/max_cut.rs +++ b/src/models/graph/max_cut.rs @@ -180,7 +180,6 @@ where let partition: Vec = config.iter().map(|&c| c != 0).collect(); SolutionSize::Valid(cut_size(&self.graph, &self.edge_weights, &partition)) } - } impl OptimizationProblem for MaxCut diff --git a/src/models/graph/maximal_is.rs b/src/models/graph/maximal_is.rs index 6f7e9b9ce..dee4722ba 100644 --- a/src/models/graph/maximal_is.rs +++ b/src/models/graph/maximal_is.rs @@ -169,7 +169,6 @@ where } SolutionSize::Valid(total) } - } impl OptimizationProblem for MaximalIS diff --git a/src/models/graph/maximum_clique.rs b/src/models/graph/maximum_clique.rs index 
0c41e51ca..e293b037f 100644 --- a/src/models/graph/maximum_clique.rs +++ b/src/models/graph/maximum_clique.rs @@ -135,7 +135,6 @@ where } SolutionSize::Valid(total) } - } impl OptimizationProblem for MaximumClique diff --git a/src/models/graph/maximum_independent_set.rs b/src/models/graph/maximum_independent_set.rs index e175387d8..2cd2802ff 100644 --- a/src/models/graph/maximum_independent_set.rs +++ b/src/models/graph/maximum_independent_set.rs @@ -135,7 +135,6 @@ where } SolutionSize::Valid(total) } - } impl OptimizationProblem for MaximumIndependentSet diff --git a/src/models/graph/maximum_matching.rs b/src/models/graph/maximum_matching.rs index 04e36d32a..e7b75d3f9 100644 --- a/src/models/graph/maximum_matching.rs +++ b/src/models/graph/maximum_matching.rs @@ -205,7 +205,6 @@ where } SolutionSize::Valid(total) } - } impl OptimizationProblem for MaximumMatching diff --git a/src/models/graph/minimum_dominating_set.rs b/src/models/graph/minimum_dominating_set.rs index ab6461bbc..023f3f713 100644 --- a/src/models/graph/minimum_dominating_set.rs +++ b/src/models/graph/minimum_dominating_set.rs @@ -155,7 +155,6 @@ where } SolutionSize::Valid(total) } - } impl OptimizationProblem for MinimumDominatingSet diff --git a/src/models/graph/minimum_vertex_cover.rs b/src/models/graph/minimum_vertex_cover.rs index 7a3495fbb..757e926bc 100644 --- a/src/models/graph/minimum_vertex_cover.rs +++ b/src/models/graph/minimum_vertex_cover.rs @@ -130,7 +130,6 @@ where } SolutionSize::Valid(total) } - } impl OptimizationProblem for MinimumVertexCover diff --git a/src/models/graph/traveling_salesman.rs b/src/models/graph/traveling_salesman.rs index f67c97054..edb3fbaf6 100644 --- a/src/models/graph/traveling_salesman.rs +++ b/src/models/graph/traveling_salesman.rs @@ -168,7 +168,6 @@ where } SolutionSize::Valid(total) } - } impl OptimizationProblem for TravelingSalesman diff --git a/src/models/optimization/ilp.rs b/src/models/optimization/ilp.rs index 7086fc3c6..7f9776692 100644 --- 
a/src/models/optimization/ilp.rs +++ b/src/models/optimization/ilp.rs @@ -363,7 +363,6 @@ impl Problem for ILP { fn variant() -> Vec<(&'static str, &'static str)> { crate::variant_params![] } - } impl OptimizationProblem for ILP { diff --git a/src/models/optimization/qubo.rs b/src/models/optimization/qubo.rs index 76179001f..d3bb01c39 100644 --- a/src/models/optimization/qubo.rs +++ b/src/models/optimization/qubo.rs @@ -168,7 +168,6 @@ where fn variant() -> Vec<(&'static str, &'static str)> { crate::variant_params![W] } - } impl OptimizationProblem for QUBO diff --git a/src/models/optimization/spin_glass.rs b/src/models/optimization/spin_glass.rs index 84b80cd95..5259f5798 100644 --- a/src/models/optimization/spin_glass.rs +++ b/src/models/optimization/spin_glass.rs @@ -228,7 +228,6 @@ where fn variant() -> Vec<(&'static str, &'static str)> { crate::variant_params![G, W] } - } impl OptimizationProblem for SpinGlass diff --git a/src/models/set/maximum_set_packing.rs b/src/models/set/maximum_set_packing.rs index 159bb44f0..55a1af2ab 100644 --- a/src/models/set/maximum_set_packing.rs +++ b/src/models/set/maximum_set_packing.rs @@ -160,7 +160,6 @@ where fn variant() -> Vec<(&'static str, &'static str)> { crate::variant_params![W] } - } impl OptimizationProblem for MaximumSetPacking diff --git a/src/models/set/minimum_set_covering.rs b/src/models/set/minimum_set_covering.rs index 10df9a654..90f281b73 100644 --- a/src/models/set/minimum_set_covering.rs +++ b/src/models/set/minimum_set_covering.rs @@ -165,7 +165,6 @@ where fn variant() -> Vec<(&'static str, &'static str)> { crate::variant_params![W] } - } impl OptimizationProblem for MinimumSetCovering diff --git a/src/models/specialized/biclique_cover.rs b/src/models/specialized/biclique_cover.rs index 74e18270d..2f4e6b5e4 100644 --- a/src/models/specialized/biclique_cover.rs +++ b/src/models/specialized/biclique_cover.rs @@ -233,7 +233,6 @@ impl Problem for BicliqueCover { fn variant() -> Vec<(&'static str, &'static 
str)> { crate::variant_params![] } - } impl OptimizationProblem for BicliqueCover { diff --git a/src/models/specialized/bmf.rs b/src/models/specialized/bmf.rs index 35ff934ae..768e0ee7c 100644 --- a/src/models/specialized/bmf.rs +++ b/src/models/specialized/bmf.rs @@ -220,7 +220,6 @@ impl Problem for BMF { fn variant() -> Vec<(&'static str, &'static str)> { crate::variant_params![] } - } impl OptimizationProblem for BMF { diff --git a/src/models/specialized/circuit.rs b/src/models/specialized/circuit.rs index ffe3bd9a7..287841f77 100644 --- a/src/models/specialized/circuit.rs +++ b/src/models/specialized/circuit.rs @@ -295,7 +295,6 @@ impl Problem for CircuitSAT { fn variant() -> Vec<(&'static str, &'static str)> { crate::variant_params![] } - } impl SatisfactionProblem for CircuitSAT {} diff --git a/src/models/specialized/factoring.rs b/src/models/specialized/factoring.rs index 1e88eafaa..05bce8cb4 100644 --- a/src/models/specialized/factoring.rs +++ b/src/models/specialized/factoring.rs @@ -152,7 +152,6 @@ impl Problem for Factoring { fn variant() -> Vec<(&'static str, &'static str)> { crate::variant_params![] } - } impl OptimizationProblem for Factoring { diff --git a/src/models/specialized/paintshop.rs b/src/models/specialized/paintshop.rs index 747c3240f..6d64e4df2 100644 --- a/src/models/specialized/paintshop.rs +++ b/src/models/specialized/paintshop.rs @@ -182,7 +182,6 @@ impl Problem for PaintShop { fn variant() -> Vec<(&'static str, &'static str)> { crate::variant_params![] } - } impl OptimizationProblem for PaintShop { diff --git a/src/rules/sat_ksat.rs b/src/rules/sat_ksat.rs index 3b9c0c328..973c3453e 100644 --- a/src/rules/sat_ksat.rs +++ b/src/rules/sat_ksat.rs @@ -110,6 +110,7 @@ fn add_clause_to_ksat( /// because the `#[reduction]` proc macro requires concrete types. macro_rules! 
impl_sat_to_ksat { ($ktype:ty, $k:expr) => { + #[rustfmt::skip] #[reduction(overhead = { num_clauses = "num_clauses + num_literals", num_vars = "num_vars + num_literals", @@ -182,6 +183,7 @@ fn reduce_ksat_to_sat(ksat: &KSatisfiability) -> ReductionKSATToSA /// The `#[reduction]` macro requires concrete types. macro_rules! impl_ksat_to_sat { ($ktype:ty) => { +#[rustfmt::skip] #[reduction(overhead = { num_clauses = "num_clauses", num_vars = "num_vars", diff --git a/src/unit_tests/export.rs b/src/unit_tests/export.rs index 6125b5569..7d6072cd4 100644 --- a/src/unit_tests/export.rs +++ b/src/unit_tests/export.rs @@ -34,7 +34,10 @@ fn test_overhead_to_json_constant() { fn test_overhead_to_json_scaled_power() { let overhead = ReductionOverhead::new(vec![( "num_edges", - Expr::mul(Expr::Const(3.0), Expr::pow(Expr::Var("n"), Expr::Const(2.0))), + Expr::mul( + Expr::Const(3.0), + Expr::pow(Expr::Var("n"), Expr::Const(2.0)), + ), )]); let entries = overhead_to_json(&overhead); assert_eq!(entries.len(), 1); diff --git a/src/unit_tests/reduction_graph.rs b/src/unit_tests/reduction_graph.rs index 24671b8ab..5dab95623 100644 --- a/src/unit_tests/reduction_graph.rs +++ b/src/unit_tests/reduction_graph.rs @@ -296,7 +296,11 @@ fn test_3sat_to_mis_triangular_overhead() { CNFClause::new(vec![-1, -2, -3]), ], ); - let input_size = ProblemSize::new(vec![("num_vars", 3), ("num_clauses", 2), ("num_literals", 6)]); + let input_size = ProblemSize::new(vec![ + ("num_vars", 3), + ("num_clauses", 2), + ("num_literals", 6), + ]); // Find the shortest path let path = graph @@ -358,10 +362,7 @@ fn test_3sat_to_mis_triangular_overhead() { // Composed: num_vertices = L², num_edges = L² let composed = graph.compose_path_overhead(&path); // Evaluate composed at input: L=6, so L^2=36 - assert_eq!( - composed.get("num_vertices").unwrap().eval(&test_size), - 36.0 - ); + assert_eq!(composed.get("num_vertices").unwrap().eval(&test_size), 36.0); 
assert_eq!(composed.get("num_edges").unwrap().eval(&test_size), 36.0); } From a64f060d0b865ddfc05cab80ed7fd5b6c85c15dd Mon Sep 17 00:00:00 2001 From: GiggleLiu Date: Thu, 26 Feb 2026 07:46:22 +0800 Subject: [PATCH 12/15] fix: update MCP server and CLI for Vec<&str> size_field_names return type Fix borrow-after-move in graph show command and use reference in MCP tools JSON serialization. Co-Authored-By: Claude Opus 4.6 --- problemreductions-cli/src/commands/graph.rs | 2 +- problemreductions-cli/src/mcp/tools.rs | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/problemreductions-cli/src/commands/graph.rs b/problemreductions-cli/src/commands/graph.rs index 52bcd2457..65378b84b 100644 --- a/problemreductions-cli/src/commands/graph.rs +++ b/problemreductions-cli/src/commands/graph.rs @@ -143,7 +143,7 @@ pub fn show(problem: &str, out: &OutputConfig) -> Result<()> { "\n{}\n", crate::output::fmt_section(&format!("Size fields ({}):", size_fields.len())) )); - for f in size_fields { + for f in &size_fields { text.push_str(&format!(" {f}\n")); } } diff --git a/problemreductions-cli/src/mcp/tools.rs b/problemreductions-cli/src/mcp/tools.rs index e3c25e8c4..d1104d110 100644 --- a/problemreductions-cli/src/mcp/tools.rs +++ b/problemreductions-cli/src/mcp/tools.rs @@ -177,7 +177,7 @@ impl McpServer { let mut json = serde_json::json!({ "name": spec.name, "variants": variants, - "size_fields": size_fields, + "size_fields": &size_fields, "reduces_to": outgoing.iter().map(|e| { serde_json::json!({ "source": {"name": e.source_name, "variant": e.source_variant}, From 3db4e2bbe4e45583c4498ea34f172bc14e69a1a6 Mon Sep 17 00:00:00 2001 From: GiggleLiu Date: Thu, 26 Feb 2026 09:47:02 +0800 Subject: [PATCH 13/15] fix: size_field_names returns source's own fields, not target's size_field_names() was returning output_size field names from the first matching reduction entry, which are the TARGET problem's fields when the queried problem is a source. 
Now correctly extracts input variable names when the problem is a source, and collects from all entries for completeness. Also tightens CLI inspect test to assert actual field names, and adds a cross-check test verifying all overhead variables are valid source size fields. Co-Authored-By: Claude Opus 4.6 --- problemreductions-cli/tests/cli_tests.rs | 17 +++++- src/rules/graph.rs | 16 ++++-- src/unit_tests/rules/graph.rs | 72 ++++++++++++++++++++++++ 3 files changed, 99 insertions(+), 6 deletions(-) diff --git a/problemreductions-cli/tests/cli_tests.rs b/problemreductions-cli/tests/cli_tests.rs index 2d3775673..532c5e564 100644 --- a/problemreductions-cli/tests/cli_tests.rs +++ b/problemreductions-cli/tests/cli_tests.rs @@ -2093,7 +2093,22 @@ fn test_inspect_json_output() { let json: serde_json::Value = serde_json::from_str(&content).unwrap(); assert_eq!(json["kind"], "problem"); assert_eq!(json["type"], "MaximumIndependentSet"); - assert!(json["size_fields"].is_array()); + let size_fields: Vec<&str> = json["size_fields"] + .as_array() + .expect("size_fields should be an array") + .iter() + .map(|v| v.as_str().unwrap()) + .collect(); + assert!( + size_fields.contains(&"num_vertices"), + "MIS size_fields should contain num_vertices, got: {:?}", + size_fields + ); + assert!( + size_fields.contains(&"num_edges"), + "MIS size_fields should contain num_edges, got: {:?}", + size_fields + ); assert!(json["solvers"].is_array()); assert!(json["reduces_to"].is_array()); diff --git a/src/rules/graph.rs b/src/rules/graph.rs index a95293311..75c1499f6 100644 --- a/src/rules/graph.rs +++ b/src/rules/graph.rs @@ -624,19 +624,25 @@ impl ReductionGraph { /// Get the problem size field names for a problem type. /// /// Derives size fields from the overhead expressions of reduction entries - /// where this problem appears as source or target. + /// where this problem appears as source or target. 
When the problem is a + /// source, its size fields are the input variables referenced in the overhead + /// expressions. When it's a target, its size fields are the output field names. pub fn size_field_names(&self, name: &str) -> Vec<&'static str> { + let mut fields = std::collections::HashSet::new(); for entry in inventory::iter::<ReductionEntry> { if entry.source_name == name { - let overhead = entry.overhead(); - return overhead.output_size.iter().map(|(name, _)| *name).collect(); + // Source's size fields are the input variables of the overhead. + fields.extend(entry.overhead().input_variable_names()); } if entry.target_name == name { + // Target's size fields are the output field names. let overhead = entry.overhead(); - return overhead.output_size.iter().map(|(name, _)| *name).collect(); + fields.extend(overhead.output_size.iter().map(|(name, _)| *name)); } } - vec![] + let mut result: Vec<&'static str> = fields.into_iter().collect(); + result.sort_unstable(); + result } /// Get all incoming reductions to a problem (across all its variants). diff --git a/src/unit_tests/rules/graph.rs b/src/unit_tests/rules/graph.rs index 41c1bf985..c10f60c48 100644 --- a/src/unit_tests/rules/graph.rs +++ b/src/unit_tests/rules/graph.rs @@ -4,6 +4,7 @@ use crate::models::optimization::QUBO; use crate::models::set::MaximumSetPacking; use crate::rules::cost::MinimizeSteps; use crate::rules::graph::{classify_problem_category, ReductionStep}; +use crate::rules::registry::ReductionEntry; use crate::topology::SimpleGraph; use crate::traits::Problem; use crate::types::ProblemSize; @@ -979,3 +980,74 @@ fn test_reduction_chain_with_variant_casts() { // Verify the extracted solution satisfies the original 3-SAT formula assert!(ksat.evaluate(&original_solution)); } + +#[test] +fn test_size_field_names_returns_own_fields() { + let graph = ReductionGraph::new(); + + // MIS should report its own fields (num_vertices, num_edges), + // not the target's fields from any reduction.
+ let mis_fields = graph.size_field_names("MaximumIndependentSet"); + assert!( + mis_fields.contains(&"num_vertices"), + "MIS should have num_vertices, got: {:?}", + mis_fields + ); + assert!( + mis_fields.contains(&"num_edges"), + "MIS should have num_edges, got: {:?}", + mis_fields + ); + // Should NOT contain target fields like num_vars or num_constraints + assert!( + !mis_fields.contains(&"num_constraints"), + "MIS should not report ILP's num_constraints, got: {:?}", + mis_fields + ); + + // QUBO should report num_vars + let qubo_fields = graph.size_field_names("QUBO"); + assert!( + qubo_fields.contains(&"num_vars"), + "QUBO should have num_vars, got: {:?}", + qubo_fields + ); + + // Unknown problem returns empty + let unknown_fields = graph.size_field_names("NonExistentProblem"); + assert!(unknown_fields.is_empty()); +} + +#[test] +fn test_overhead_variables_are_consistent() { + // For each reduction, the input variables of the overhead should be + // a subset of the source problem's size fields (as derived from all + // reductions where it appears). + let graph = ReductionGraph::new(); + + for entry in inventory::iter::<ReductionEntry> { + let overhead = entry.overhead(); + let input_vars = overhead.input_variable_names(); + if input_vars.is_empty() { + continue; + } + + let source_fields: std::collections::HashSet<&str> = graph + .size_field_names(entry.source_name) + .into_iter() + .collect(); + + for var in &input_vars { + assert!( + source_fields.contains(var), + "Reduction {} -> {}: overhead references variable '{}' \ + which is not a known size field of {}.
Known fields: {:?}", + entry.source_name, + entry.target_name, + var, + entry.source_name, + source_fields + ); + } + } +} From ece1aa6f6cbb8a838d4d6cb2f88e80cec1e472b0 Mon Sep 17 00:00:00 2001 From: GiggleLiu Date: Thu, 26 Feb 2026 10:49:41 +0800 Subject: [PATCH 14/15] chore: remove overhead system implementation plan Co-Authored-By: Claude Opus 4.6 --- docs/plans/2026-02-25-overhead-system-impl.md | 1289 ----------------- 1 file changed, 1289 deletions(-) delete mode 100644 docs/plans/2026-02-25-overhead-system-impl.md diff --git a/docs/plans/2026-02-25-overhead-system-impl.md b/docs/plans/2026-02-25-overhead-system-impl.md deleted file mode 100644 index 853759ad6..000000000 --- a/docs/plans/2026-02-25-overhead-system-impl.md +++ /dev/null @@ -1,1289 +0,0 @@ -# Overhead System Redesign Implementation Plan - -> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task. - -**Goal:** Replace the `Polynomial`-based overhead system with a general `Expr` AST, compile-time macro-parsed expression strings, and per-problem inherent getters. - -**Architecture:** The `#[reduction]` proc macro parses expression strings at compile time and emits both compiled Rust getter-calling code (for evaluation + compiler validation) and symbolic `Expr` AST literals (for composition + export). Problems provide inherent getter methods instead of trait-level `problem_size_names()`/`problem_size_values()`. 
- -**Tech Stack:** Rust proc macros (syn/quote), Pratt parser, serde, inventory - ---- - -## Phase 1: Add `Expr` type (additive, no breaking changes) - -### Task 1: Create `Expr` enum and basic operations - -**Files:** -- Create: `src/expr.rs` -- Create: `src/unit_tests/expr.rs` -- Modify: `src/lib.rs` (add module) - -**Step 1: Write failing tests for Expr construction and evaluation** - -Create `src/unit_tests/expr.rs`: -```rust -use super::*; -use crate::types::ProblemSize; - -#[test] -fn test_expr_const_eval() { - let e = Expr::Const(42.0); - let size = ProblemSize::new(vec![]); - assert_eq!(e.eval(&size), 42.0); -} - -#[test] -fn test_expr_var_eval() { - let e = Expr::Var("n"); - let size = ProblemSize::new(vec![("n", 10)]); - assert_eq!(e.eval(&size), 10.0); -} - -#[test] -fn test_expr_add_eval() { - // n + 3 - let e = Expr::add(Expr::Var("n"), Expr::Const(3.0)); - let size = ProblemSize::new(vec![("n", 7)]); - assert_eq!(e.eval(&size), 10.0); -} - -#[test] -fn test_expr_mul_eval() { - // 3 * n - let e = Expr::mul(Expr::Const(3.0), Expr::Var("n")); - let size = ProblemSize::new(vec![("n", 5)]); - assert_eq!(e.eval(&size), 15.0); -} - -#[test] -fn test_expr_pow_eval() { - // n^2 - let e = Expr::pow(Expr::Var("n"), Expr::Const(2.0)); - let size = ProblemSize::new(vec![("n", 4)]); - assert_eq!(e.eval(&size), 16.0); -} - -#[test] -fn test_expr_exp_eval() { - let e = Expr::Exp(Box::new(Expr::Const(1.0))); - let size = ProblemSize::new(vec![]); - assert!((e.eval(&size) - std::f64::consts::E).abs() < 1e-10); -} - -#[test] -fn test_expr_log_eval() { - let e = Expr::Log(Box::new(Expr::Const(std::f64::consts::E))); - let size = ProblemSize::new(vec![]); - assert!((e.eval(&size) - 1.0).abs() < 1e-10); -} - -#[test] -fn test_expr_sqrt_eval() { - let e = Expr::Sqrt(Box::new(Expr::Const(9.0))); - let size = ProblemSize::new(vec![]); - assert_eq!(e.eval(&size), 3.0); -} - -#[test] -fn test_expr_complex() { - // n^2 + 3*m - let e = Expr::add( - Expr::pow(Expr::Var("n"), 
Expr::Const(2.0)), - Expr::mul(Expr::Const(3.0), Expr::Var("m")), - ); - let size = ProblemSize::new(vec![("n", 4), ("m", 2)]); - assert_eq!(e.eval(&size), 22.0); // 16 + 6 -} -``` - -**Step 2: Run tests to verify they fail** - -Run: `make test` (or `cargo test expr`) -Expected: compilation errors — `Expr` type doesn't exist yet. - -**Step 3: Implement `Expr` enum with eval** - -Create `src/expr.rs`: -```rust -//! General symbolic expression AST for reduction overhead. - -use crate::types::ProblemSize; -use std::collections::{HashMap, HashSet}; -use std::fmt; - -/// A symbolic math expression over problem size variables. -#[derive(Clone, Debug, PartialEq, serde::Serialize, serde::Deserialize)] -pub enum Expr { - /// Numeric constant. - Const(f64), - /// Named variable (e.g., "num_vertices"). - Var(&'static str), - /// Addition: a + b. - Add(Box<Expr>, Box<Expr>), - /// Multiplication: a * b. - Mul(Box<Expr>, Box<Expr>), - /// Exponentiation: base ^ exponent. - Pow(Box<Expr>, Box<Expr>), - /// Exponential function: exp(a). - Exp(Box<Expr>), - /// Natural logarithm: log(a). - Log(Box<Expr>), - /// Square root: sqrt(a). - Sqrt(Box<Expr>), -} - -impl Expr { - /// Convenience constructors (avoid Box::new noise). - pub fn add(a: Expr, b: Expr) -> Self { - Expr::Add(Box::new(a), Box::new(b)) - } - pub fn mul(a: Expr, b: Expr) -> Self { - Expr::Mul(Box::new(a), Box::new(b)) - } - pub fn pow(base: Expr, exp: Expr) -> Self { - Expr::Pow(Box::new(base), Box::new(exp)) - } - - /// Evaluate the expression given concrete variable values.
- pub fn eval(&self, vars: &ProblemSize) -> f64 { - match self { - Expr::Const(c) => *c, - Expr::Var(name) => vars.get(name).unwrap_or(0) as f64, - Expr::Add(a, b) => a.eval(vars) + b.eval(vars), - Expr::Mul(a, b) => a.eval(vars) * b.eval(vars), - Expr::Pow(base, exp) => base.eval(vars).powf(exp.eval(vars)), - Expr::Exp(a) => a.eval(vars).exp(), - Expr::Log(a) => a.eval(vars).ln(), - Expr::Sqrt(a) => a.eval(vars).sqrt(), - } - } -} - -#[cfg(test)] -#[path = "unit_tests/expr.rs"] -mod tests; -``` - -Add to `src/lib.rs`: -```rust -pub(crate) mod expr; -``` - -**Step 4: Run tests to verify they pass** - -Run: `cargo test expr` -Expected: all tests pass. - -**Step 5: Commit** - -```bash -git add src/expr.rs src/unit_tests/expr.rs src/lib.rs -git commit -m "feat: add Expr AST type with eval (phase 1 of overhead redesign)" -``` - ---- - -### Task 2: Add `variables()`, `substitute()`, and `Display` to `Expr` - -**Files:** -- Modify: `src/expr.rs` -- Modify: `src/unit_tests/expr.rs` - -**Step 1: Write failing tests** - -Append to `src/unit_tests/expr.rs`: -```rust -#[test] -fn test_expr_variables() { - let e = Expr::add( - Expr::pow(Expr::Var("n"), Expr::Const(2.0)), - Expr::mul(Expr::Const(3.0), Expr::Var("m")), - ); - let vars = e.variables(); - assert_eq!(vars, HashSet::from(["n", "m"])); -} - -#[test] -fn test_expr_substitute() { - // n^2, substitute n → (a + b) - let e = Expr::pow(Expr::Var("n"), Expr::Const(2.0)); - let replacement = Expr::add(Expr::Var("a"), Expr::Var("b")); - let mut mapping = HashMap::new(); - mapping.insert("n", &replacement); - let result = e.substitute(&mapping); - // Should be (a + b)^2 - let size = ProblemSize::new(vec![("a", 3), ("b", 2)]); - assert_eq!(result.eval(&size), 25.0); // (3+2)^2 -} - -#[test] -fn test_expr_display_simple() { - assert_eq!(format!("{}", Expr::Const(5.0)), "5"); - assert_eq!(format!("{}", Expr::Var("n")), "n"); -} - -#[test] -fn test_expr_display_add() { - let e = Expr::add(Expr::Var("n"), Expr::Const(3.0)); - 
assert_eq!(format!("{e}"), "n + 3"); -} - -#[test] -fn test_expr_display_mul() { - let e = Expr::mul(Expr::Const(3.0), Expr::Var("n")); - assert_eq!(format!("{e}"), "3 * n"); -} - -#[test] -fn test_expr_display_pow() { - let e = Expr::pow(Expr::Var("n"), Expr::Const(2.0)); - assert_eq!(format!("{e}"), "n^2"); -} - -#[test] -fn test_expr_display_exp() { - let e = Expr::Exp(Box::new(Expr::Var("n"))); - assert_eq!(format!("{e}"), "exp(n)"); -} - -#[test] -fn test_expr_display_nested() { - // n^2 + 3 * m - let e = Expr::add( - Expr::pow(Expr::Var("n"), Expr::Const(2.0)), - Expr::mul(Expr::Const(3.0), Expr::Var("m")), - ); - assert_eq!(format!("{e}"), "n^2 + 3 * m"); -} -``` - -**Step 2: Run tests to verify they fail** - -Run: `cargo test expr` -Expected: FAIL — `variables()`, `substitute()`, `Display` not implemented. - -**Step 3: Implement the methods** - -Add to `src/expr.rs`: -```rust -impl Expr { - // ... existing methods ... - - /// Collect all variable names referenced in this expression. - pub fn variables(&self) -> HashSet<&'static str> { - let mut vars = HashSet::new(); - self.collect_variables(&mut vars); - vars - } - - fn collect_variables(&self, vars: &mut HashSet<&'static str>) { - match self { - Expr::Const(_) => {} - Expr::Var(name) => { vars.insert(name); } - Expr::Add(a, b) | Expr::Mul(a, b) | Expr::Pow(a, b) => { - a.collect_variables(vars); - b.collect_variables(vars); - } - Expr::Exp(a) | Expr::Log(a) | Expr::Sqrt(a) => { - a.collect_variables(vars); - } - } - } - - /// Substitute variables with other expressions. 
- pub fn substitute(&self, mapping: &HashMap<&str, &Expr>) -> Expr { - match self { - Expr::Const(c) => Expr::Const(*c), - Expr::Var(name) => { - if let Some(replacement) = mapping.get(name) { - (*replacement).clone() - } else { - Expr::Var(name) - } - } - Expr::Add(a, b) => Expr::add(a.substitute(mapping), b.substitute(mapping)), - Expr::Mul(a, b) => Expr::mul(a.substitute(mapping), b.substitute(mapping)), - Expr::Pow(a, b) => Expr::pow(a.substitute(mapping), b.substitute(mapping)), - Expr::Exp(a) => Expr::Exp(Box::new(a.substitute(mapping))), - Expr::Log(a) => Expr::Log(Box::new(a.substitute(mapping))), - Expr::Sqrt(a) => Expr::Sqrt(Box::new(a.substitute(mapping))), - } - } - - /// Check if this expression is a polynomial (no exp/log/sqrt, integer exponents only). - pub fn is_polynomial(&self) -> bool { - match self { - Expr::Const(_) | Expr::Var(_) => true, - Expr::Add(a, b) | Expr::Mul(a, b) => a.is_polynomial() && b.is_polynomial(), - Expr::Pow(base, exp) => { - base.is_polynomial() && matches!(exp.as_ref(), Expr::Const(c) if *c >= 0.0 && (*c - c.round()).abs() < 1e-10) - } - Expr::Exp(_) | Expr::Log(_) | Expr::Sqrt(_) => false, - } - } -} - -impl fmt::Display for Expr { - fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { - match self { - Expr::Const(c) => { - let ci = c.round() as i64; - if (*c - ci as f64).abs() < 1e-10 { - write!(f, "{ci}") - } else { - write!(f, "{c}") - } - } - Expr::Var(name) => write!(f, "{name}"), - Expr::Add(a, b) => write!(f, "{a} + {b}"), - Expr::Mul(a, b) => { - // Parenthesize additions inside multiplication - let left = if matches!(a.as_ref(), Expr::Add(_, _)) { - format!("({a})") - } else { - format!("{a}") - }; - let right = if matches!(b.as_ref(), Expr::Add(_, _)) { - format!("({b})") - } else { - format!("{b}") - }; - write!(f, "{left} * {right}") - } - Expr::Pow(base, exp) => { - let base_str = if matches!(base.as_ref(), Expr::Add(_, _) | Expr::Mul(_, _)) { - format!("({base})") - } else { - format!("{base}") - }; - 
write!(f, "{base_str}^{exp}") - } - Expr::Exp(a) => write!(f, "exp({a})"), - Expr::Log(a) => write!(f, "log({a})"), - Expr::Sqrt(a) => write!(f, "sqrt({a})"), - } - } -} -``` - -**Step 4: Run tests to verify they pass** - -Run: `cargo test expr` -Expected: all tests pass. - -**Step 5: Commit** - -```bash -git add src/expr.rs src/unit_tests/expr.rs -git commit -m "feat: add variables, substitute, Display to Expr" -``` - ---- - -## Phase 2: Proc macro expression parser - -### Task 3: Add Pratt parser to the proc macro crate - -**Files:** -- Create: `problemreductions-macros/src/parser.rs` -- Create: `problemreductions-macros/tests/parse_tests.rs` -- Modify: `problemreductions-macros/src/lib.rs` (add module) - -The parser operates on `&str` (the contents of the string literal from the macro attribute) and produces a token stream that constructs `Expr` values. - -**Step 1: Write failing parser tests** - -Create `problemreductions-macros/tests/parse_tests.rs`: -```rust -use problemreductions_macros::__parse_overhead_expr; - -// We'll expose a helper proc macro for testing that takes a string -// and outputs the Expr construction code. This is tested by compilation. - -// For now, test the parser module directly via unit tests inside the crate. -``` - -Since proc macro crates can't be tested with normal `#[test]` easily for internal parse logic, add unit tests inside the module. - -Create `problemreductions-macros/src/parser.rs`: -```rust -//! Pratt parser for overhead expression strings. -//! -//! Parses expressions like: -//! - "num_vertices" -//! - "num_vertices^2" -//! - "num_edges + num_vertices^2" -//! - "3 * num_vertices" -//! - "exp(num_vertices^2)" -//! - "sqrt(num_edges)" -//! -//! Grammar: -//! expr = term (('+' | '-') term)* -//! term = factor (('*' | '/') factor)* -//! factor = unary ('^' factor)? // right-associative -//! unary = '-' unary | primary -//! primary = NUMBER | IDENT | func_call | '(' expr ')' -//! 
func_call = ('exp' | 'log' | 'sqrt') '(' expr ')' -use proc_macro2::TokenStream; -use quote::quote; - -/// Parsed expression node (intermediate representation before codegen). -#[derive(Debug, Clone, PartialEq)] -pub enum ParsedExpr { - Const(f64), - Var(String), - Add(Box<ParsedExpr>, Box<ParsedExpr>), - Sub(Box<ParsedExpr>, Box<ParsedExpr>), - Mul(Box<ParsedExpr>, Box<ParsedExpr>), - Div(Box<ParsedExpr>, Box<ParsedExpr>), - Pow(Box<ParsedExpr>, Box<ParsedExpr>), - Neg(Box<ParsedExpr>), - Exp(Box<ParsedExpr>), - Log(Box<ParsedExpr>), - Sqrt(Box<ParsedExpr>), -} - -// ... tokenizer and parser implementation ... -// (detailed in Step 3) -``` - -**Step 2: Implement tokenizer** - -Tokens needed: `Number(f64)`, `Ident(String)`, `Plus`, `Minus`, `Star`, `Slash`, `Caret`, `LParen`, `RParen`. - -```rust -#[derive(Debug, Clone, PartialEq)] -enum Token { - Number(f64), - Ident(String), - Plus, - Minus, - Star, - Slash, - Caret, - LParen, - RParen, -} - -fn tokenize(input: &str) -> Result<Vec<Token>, String> { - let mut tokens = Vec::new(); - let mut chars = input.chars().peekable(); - while let Some(&ch) = chars.peek() { - match ch { - ' ' | '\t' | '\n' => { chars.next(); } - '+' => { chars.next(); tokens.push(Token::Plus); } - '-' => { chars.next(); tokens.push(Token::Minus); } - '*' => { chars.next(); tokens.push(Token::Star); } - '/' => { chars.next(); tokens.push(Token::Slash); } - '^' => { chars.next(); tokens.push(Token::Caret); } - '(' => { chars.next(); tokens.push(Token::LParen); } - ')' => { chars.next(); tokens.push(Token::RParen); } - c if c.is_ascii_digit() || c == '.' => { - let mut num = String::new(); - while let Some(&c) = chars.peek() { - if c.is_ascii_digit() || c == '.' 
{ num.push(c); chars.next(); } - else { break; } - } - let val: f64 = num.parse().map_err(|_| format!("invalid number: {num}"))?; - tokens.push(Token::Number(val)); - } - c if c.is_ascii_alphabetic() || c == '_' => { - let mut ident = String::new(); - while let Some(&c) = chars.peek() { - if c.is_ascii_alphanumeric() || c == '_' { ident.push(c); chars.next(); } - else { break; } - } - tokens.push(Token::Ident(ident)); - } - _ => return Err(format!("unexpected character: '{ch}'")), - } - } - Ok(tokens) -} -``` - -**Step 3: Implement Pratt parser** - -```rust -struct Parser { - tokens: Vec<Token>, - pos: usize, -} - -impl Parser { - fn new(tokens: Vec<Token>) -> Self { Self { tokens, pos: 0 } } - fn peek(&self) -> Option<&Token> { self.tokens.get(self.pos) } - fn advance(&mut self) -> Option<Token> { - let tok = self.tokens.get(self.pos).cloned(); - self.pos += 1; - tok - } - fn expect(&mut self, expected: &Token) -> Result<(), String> { - match self.advance() { - Some(ref tok) if tok == expected => Ok(()), - Some(tok) => Err(format!("expected {expected:?}, got {tok:?}")), - None => Err(format!("expected {expected:?}, got end of input")), - } - } - - fn parse_expr(&mut self) -> Result<ParsedExpr, String> { - let mut left = self.parse_term()?; - while matches!(self.peek(), Some(Token::Plus) | Some(Token::Minus)) { - let op = self.advance().unwrap(); - let right = self.parse_term()?; - left = match op { - Token::Plus => ParsedExpr::Add(Box::new(left), Box::new(right)), - Token::Minus => ParsedExpr::Sub(Box::new(left), Box::new(right)), - _ => unreachable!(), - }; - } - Ok(left) - } - - fn parse_term(&mut self) -> Result<ParsedExpr, String> { - let mut left = self.parse_factor()?; - while matches!(self.peek(), Some(Token::Star) | Some(Token::Slash)) { - let op = self.advance().unwrap(); - let right = self.parse_factor()?; - left = match op { - Token::Star => ParsedExpr::Mul(Box::new(left), Box::new(right)), - Token::Slash => ParsedExpr::Div(Box::new(left), Box::new(right)), - _ => unreachable!(), - }; - } - Ok(left) - } - - fn 
    parse_factor(&mut self) -> Result<ParsedExpr, String> {
        let base = self.parse_unary()?;
        if matches!(self.peek(), Some(Token::Caret)) {
            self.advance();
            let exp = self.parse_factor()?; // right-associative
            Ok(ParsedExpr::Pow(Box::new(base), Box::new(exp)))
        } else {
            Ok(base)
        }
    }

    fn parse_unary(&mut self) -> Result<ParsedExpr, String> {
        if matches!(self.peek(), Some(Token::Minus)) {
            self.advance();
            let expr = self.parse_unary()?;
            Ok(ParsedExpr::Neg(Box::new(expr)))
        } else {
            self.parse_primary()
        }
    }

    fn parse_primary(&mut self) -> Result<ParsedExpr, String> {
        match self.advance() {
            Some(Token::Number(n)) => Ok(ParsedExpr::Const(n)),
            Some(Token::Ident(name)) => {
                // Check for function call: exp(...), log(...), sqrt(...)
                if matches!(self.peek(), Some(Token::LParen)) {
                    self.advance(); // consume '('
                    let arg = self.parse_expr()?;
                    self.expect(&Token::RParen)?;
                    match name.as_str() {
                        "exp" => Ok(ParsedExpr::Exp(Box::new(arg))),
                        "log" => Ok(ParsedExpr::Log(Box::new(arg))),
                        "sqrt" => Ok(ParsedExpr::Sqrt(Box::new(arg))),
                        _ => Err(format!("unknown function: {name}")),
                    }
                } else {
                    Ok(ParsedExpr::Var(name))
                }
            }
            Some(Token::LParen) => {
                let expr = self.parse_expr()?;
                self.expect(&Token::RParen)?;
                Ok(expr)
            }
            Some(tok) => Err(format!("unexpected token: {tok:?}")),
            None => Err("unexpected end of input".to_string()),
        }
    }
}

/// Parse an expression string into a ParsedExpr.
pub fn parse_expr(input: &str) -> Result<ParsedExpr, String> {
    let tokens = tokenize(input)?;
    let mut parser = Parser::new(tokens);
    let expr = parser.parse_expr()?;
    if parser.pos != parser.tokens.len() {
        return Err(format!("unexpected trailing tokens at position {}", parser.pos));
    }
    Ok(expr)
}
```

**Step 4: Add codegen functions**

Two codegen functions — one produces `Expr` AST construction code, the other produces Rust evaluation code that calls getters.

```rust
impl ParsedExpr {
    /// Generate TokenStream that constructs an `Expr` value.
    pub fn to_expr_tokens(&self) -> TokenStream {
        match self {
            ParsedExpr::Const(c) => quote! { crate::expr::Expr::Const(#c) },
            ParsedExpr::Var(name) => quote! { crate::expr::Expr::Var(#name) },
            ParsedExpr::Add(a, b) => {
                let a = a.to_expr_tokens();
                let b = b.to_expr_tokens();
                quote! { crate::expr::Expr::add(#a, #b) }
            }
            ParsedExpr::Sub(a, b) => {
                let a = a.to_expr_tokens();
                let b = b.to_expr_tokens();
                quote! { crate::expr::Expr::add(#a, crate::expr::Expr::mul(crate::expr::Expr::Const(-1.0), #b)) }
            }
            ParsedExpr::Mul(a, b) => {
                let a = a.to_expr_tokens();
                let b = b.to_expr_tokens();
                quote! { crate::expr::Expr::mul(#a, #b) }
            }
            ParsedExpr::Div(a, b) => {
                let a = a.to_expr_tokens();
                let b = b.to_expr_tokens();
                quote! { crate::expr::Expr::mul(#a, crate::expr::Expr::pow(#b, crate::expr::Expr::Const(-1.0))) }
            }
            ParsedExpr::Pow(base, exp) => {
                let base = base.to_expr_tokens();
                let exp = exp.to_expr_tokens();
                quote! { crate::expr::Expr::pow(#base, #exp) }
            }
            ParsedExpr::Neg(a) => {
                let a = a.to_expr_tokens();
                quote! { crate::expr::Expr::mul(crate::expr::Expr::Const(-1.0), #a) }
            }
            ParsedExpr::Exp(a) => {
                let a = a.to_expr_tokens();
                quote! { crate::expr::Expr::Exp(Box::new(#a)) }
            }
            ParsedExpr::Log(a) => {
                let a = a.to_expr_tokens();
                quote! { crate::expr::Expr::Log(Box::new(#a)) }
            }
            ParsedExpr::Sqrt(a) => {
                let a = a.to_expr_tokens();
                quote! { crate::expr::Expr::Sqrt(Box::new(#a)) }
            }
        }
    }

    /// Generate TokenStream that evaluates the expression by calling getter methods
    /// on a source variable `src`.
    pub fn to_eval_tokens(&self, src_ident: &syn::Ident) -> TokenStream {
        match self {
            ParsedExpr::Const(c) => quote! { (#c as f64) },
            ParsedExpr::Var(name) => {
                let getter = syn::Ident::new(name, proc_macro2::Span::call_site());
                quote!
                { (#src_ident.#getter() as f64) }
            }
            ParsedExpr::Add(a, b) => {
                let a = a.to_eval_tokens(src_ident);
                let b = b.to_eval_tokens(src_ident);
                quote! { (#a + #b) }
            }
            ParsedExpr::Sub(a, b) => {
                let a = a.to_eval_tokens(src_ident);
                let b = b.to_eval_tokens(src_ident);
                quote! { (#a - #b) }
            }
            ParsedExpr::Mul(a, b) => {
                let a = a.to_eval_tokens(src_ident);
                let b = b.to_eval_tokens(src_ident);
                quote! { (#a * #b) }
            }
            ParsedExpr::Div(a, b) => {
                let a = a.to_eval_tokens(src_ident);
                let b = b.to_eval_tokens(src_ident);
                quote! { (#a / #b) }
            }
            ParsedExpr::Pow(base, exp) => {
                let base = base.to_eval_tokens(src_ident);
                let exp = exp.to_eval_tokens(src_ident);
                quote! { f64::powf(#base, #exp) }
            }
            ParsedExpr::Neg(a) => {
                let a = a.to_eval_tokens(src_ident);
                quote! { (-(#a)) }
            }
            ParsedExpr::Exp(a) => {
                let a = a.to_eval_tokens(src_ident);
                quote! { f64::exp(#a) }
            }
            ParsedExpr::Log(a) => {
                let a = a.to_eval_tokens(src_ident);
                quote! { f64::ln(#a) }
            }
            ParsedExpr::Sqrt(a) => {
                let a = a.to_eval_tokens(src_ident);
                quote! { f64::sqrt(#a) }
            }
        }
    }

    /// Collect all variable names in the expression.
    pub fn variables(&self) -> Vec<String> {
        let mut vars = Vec::new();
        self.collect_vars(&mut vars);
        vars.sort();
        vars.dedup();
        vars
    }

    fn collect_vars(&self, vars: &mut Vec<String>) {
        match self {
            ParsedExpr::Const(_) => {}
            ParsedExpr::Var(name) => vars.push(name.clone()),
            ParsedExpr::Add(a, b) | ParsedExpr::Sub(a, b)
            | ParsedExpr::Mul(a, b) | ParsedExpr::Div(a, b)
            | ParsedExpr::Pow(a, b) => {
                a.collect_vars(vars);
                b.collect_vars(vars);
            }
            ParsedExpr::Neg(a) | ParsedExpr::Exp(a) | ParsedExpr::Log(a) | ParsedExpr::Sqrt(a) => {
                a.collect_vars(vars);
            }
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_parse_var() {
        assert_eq!(parse_expr("num_vertices").unwrap(), ParsedExpr::Var("num_vertices".into()));
    }

    #[test]
    fn test_parse_const() {
        assert_eq!(parse_expr("42").unwrap(), ParsedExpr::Const(42.0));
    }

    #[test]
    fn test_parse_pow() {
        let e = parse_expr("n^2").unwrap();
        assert_eq!(e, ParsedExpr::Pow(
            Box::new(ParsedExpr::Var("n".into())),
            Box::new(ParsedExpr::Const(2.0)),
        ));
    }

    #[test]
    fn test_parse_add_mul() {
        // n + 3 * m → n + (3*m)
        let e = parse_expr("n + 3 * m").unwrap();
        assert_eq!(e, ParsedExpr::Add(
            Box::new(ParsedExpr::Var("n".into())),
            Box::new(ParsedExpr::Mul(
                Box::new(ParsedExpr::Const(3.0)),
                Box::new(ParsedExpr::Var("m".into())),
            )),
        ));
    }

    #[test]
    fn test_parse_exp() {
        let e = parse_expr("exp(n^2)").unwrap();
        assert_eq!(e, ParsedExpr::Exp(Box::new(ParsedExpr::Pow(
            Box::new(ParsedExpr::Var("n".into())),
            Box::new(ParsedExpr::Const(2.0)),
        ))));
    }

    #[test]
    fn test_parse_complex() {
        // 3 * n^2 + exp(m) — should parse correctly
        let e = parse_expr("3 * n^2 + exp(m)").unwrap();
        assert!(matches!(e, ParsedExpr::Add(_, _)));
    }

    #[test]
    fn test_parse_parens() {
        let e = parse_expr("(n + m)^2").unwrap();
        assert!(matches!(e, ParsedExpr::Pow(_, _)));
    }

    #[test]
    fn test_variables() {
        let e = parse_expr("n^2 + 3 * m +
exp(k)").unwrap();
        assert_eq!(e.variables(), vec!["k", "m", "n"]);
    }
}
```

**Step 5: Run tests**

Run: `cargo test -p problemreductions-macros`
Expected: all parser tests pass.

**Step 6: Commit**

```bash
git add problemreductions-macros/src/parser.rs
git commit -m "feat: add Pratt expression parser to proc macro crate"
```

---

### Task 4: Update `#[reduction]` macro to support new syntax

**Files:**
- Modify: `problemreductions-macros/src/lib.rs`

The macro should support **both** the old syntax (for backwards compatibility during migration) and the new syntax:

Old: `overhead = { ReductionOverhead::new(vec![...]) }`
New: `overhead = { num_vars = "num_vertices^2", num_constraints = "num_edges" }`

Detection: if the braced content starts with an identifier followed by `=` and a string literal, it is the new syntax. Otherwise, treat the braced content as a raw token stream (old syntax).

**Step 1: Update `ReductionAttrs` parsing**

Add a new variant to represent parsed overhead fields:
```rust
enum OverheadSpec {
    /// Old syntax: raw token stream (ReductionOverhead::new(...))
    Legacy(TokenStream2),
    /// New syntax: list of (field_name, expression_string) pairs
    Parsed(Vec<(String, String)>),
}
```

Update `ReductionAttrs` to store `OverheadSpec`, and update the parsing logic to detect which format is used.

**Step 2: Update `generate_reduction_entry` to emit dual code**

For `OverheadSpec::Parsed`, use the parser from Task 3:
- Parse each expression string at compile time
- Emit `overhead_fn` that constructs `ReductionOverhead` with `Expr` AST
- Emit `overhead_eval_fn` that calls getters on the concrete source type
- Report parse errors as compile errors with `syn::Error`

For `OverheadSpec::Legacy`, emit the old behavior unchanged.

**Step 3: Update `ReductionEntry` to include the new field**

This requires modifying `src/rules/registry.rs` to add `overhead_eval_fn`.
For now, legacy reductions leave the eval fn unset:

```rust
pub overhead_eval_fn: Option<fn(&dyn Any) -> ProblemSize>,
```

Using `Option` allows legacy code to work with `None` while the new syntax populates `Some(...)`.

**Step 4: Test with one reduction**

Pick a simple reduction (e.g., `maximumindependentset_qubo.rs`) and convert it to the new syntax as a proof of concept:

Before:
```rust
#[reduction(
    overhead = { ReductionOverhead::new(vec![("num_vars", poly!(num_vertices))]) }
)]
```

After:
```rust
#[reduction(overhead = {
    num_vars = "num_vertices",
})]
```

Run: `cargo test maximumindependentset_qubo`
Expected: passes (both the old overhead_fn and the new eval path work).

**Step 5: Commit**

```bash
git add problemreductions-macros/src/lib.rs src/rules/registry.rs
git commit -m "feat: support new overhead expression syntax in #[reduction] macro"
```

---

## Phase 3: Add inherent getters to all problem types

### Task 5: Add getters to graph problem types

**Files:**
- Modify: `src/models/graph/maximum_independent_set.rs`
- Modify: `src/models/graph/minimum_vertex_cover.rs`
- Modify: `src/models/graph/maximum_clique.rs`
- Modify: `src/models/graph/maximum_matching.rs`
- Modify: `src/models/graph/max_cut.rs`
- Modify: `src/models/graph/maximal_is.rs`
- Modify: `src/models/graph/minimum_dominating_set.rs`
- Modify: `src/models/graph/kcoloring.rs`
- Modify: `src/models/graph/traveling_salesman.rs`

For each graph problem that has `problem_size_names = ["num_vertices", "num_edges"]`, add inherent getters. Most already have a `graph()` accessor, so the getters are trivial:

```rust
impl MaximumIndependentSet {
    pub fn num_vertices(&self) -> usize { self.graph().num_vertices() }
    pub fn num_edges(&self) -> usize { self.graph().num_edges() }
}
```

Check each problem's `problem_size_values()` to see what getters are needed — some problems may have additional fields.
For example:
- Most graph problems: `num_vertices`, `num_edges`
- KColoring: `num_vertices`, `num_edges` (same)
- TravelingSalesman: check the actual fields

**Step 1: Add getters to all 9 graph problem files**

Read each file's `problem_size_values()` to determine the exact getters needed. Add inherent `impl` blocks with `pub fn` getters. If a getter already exists as a public method, skip it.

**Step 2: Run tests**

Run: `cargo test`
Expected: all existing tests pass (getters are additive).

**Step 3: Commit**

```bash
git add src/models/graph/
git commit -m "feat: add inherent size getters to graph problem types"
```

---

### Task 6: Add getters to remaining problem types

**Files:**
- Modify: `src/models/satisfiability/sat.rs`
- Modify: `src/models/satisfiability/ksat.rs`
- Modify: `src/models/optimization/qubo.rs`
- Modify: `src/models/optimization/ilp.rs`
- Modify: `src/models/optimization/spin_glass.rs`
- Modify: `src/models/set/maximum_set_packing.rs`
- Modify: `src/models/set/minimum_set_covering.rs`
- Modify: `src/models/specialized/circuit.rs`
- Modify: `src/models/specialized/factoring.rs`
- Modify: `src/models/specialized/paintshop.rs`
- Modify: `src/models/specialized/bmf.rs`
- Modify: `src/models/specialized/biclique_cover.rs`

Same approach: read `problem_size_values()` for each, add inherent getter methods. Examples:
- Satisfiability: `num_vars()`, `num_clauses()`, `num_literals()` (may already exist)
- QUBO: `num_vars()`
- ILP: `num_vars()`, `num_constraints()`
- SpinGlass: check fields
- CircuitSAT: `num_variables()`, `num_assignments()`

**Step 1: Add getters to all remaining problem files**

**Step 2: Run tests**

Run: `cargo test`
Expected: all tests pass.
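To make the getter shape concrete for the non-graph types, here is a minimal self-contained sketch. The struct layout and the `clauses` field name are assumptions for illustration only; the real `Satisfiability` type in `src/models/satisfiability/sat.rs` defines its own storage.

```rust
/// Hypothetical stand-in for the SAT problem type; only the inherent-getter
/// pattern matters here, not the real field layout.
pub struct Satisfiability {
    pub clauses: Vec<Vec<i32>>, // each clause is a list of signed literals
}

impl Satisfiability {
    pub fn num_clauses(&self) -> usize {
        self.clauses.len()
    }

    pub fn num_literals(&self) -> usize {
        self.clauses.iter().map(|c| c.len()).sum()
    }

    pub fn num_vars(&self) -> usize {
        // Highest variable index mentioned in any clause.
        self.clauses
            .iter()
            .flatten()
            .map(|l| l.unsigned_abs() as usize)
            .max()
            .unwrap_or(0)
    }
}
```

Because these are plain inherent methods, the macro-emitted eval code `src.num_clauses()` is checked by the compiler, which is exactly the correctness guarantee the design is after.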

**Step 3: Commit**

```bash
git add src/models/
git commit -m "feat: add inherent size getters to SAT, optimization, set, and specialized problems"
```

---

## Phase 4: Migrate all reductions to new syntax

### Task 7: Migrate simple reductions (single field, simple expression)

**Files:** ~15 reduction files with simple `poly!(var)` or `poly!(var^N)` patterns.

Target files (identified from grep): `maximumindependentset_qubo.rs`, `coloring_qubo.rs`, `ksatisfiability_qubo.rs` (K2), `ilp_qubo.rs`, `maximumsetpacking_qubo.rs`, `minimumvertexcover_qubo.rs`, `spinglass_qubo.rs`, etc.

For each file, replace:
```rust
overhead = { ReductionOverhead::new(vec![("num_vars", poly!(num_vertices))]) }
```
with:
```rust
overhead = { num_vars = "num_vertices" }
```

And:
```rust
overhead = { ReductionOverhead::new(vec![("num_vars", poly!(num_vertices ^ 2))]) }
```
with:
```rust
overhead = { num_vars = "num_vertices^2" }
```

**Step 1: Migrate files**

Mechanical replacement. Remove any `use crate::poly;` or `use crate::rules::registry::ReductionOverhead;` imports that become unused.

**Step 2: Run tests**

Run: `cargo test`
Expected: all tests pass.

**Step 3: Commit**

```bash
git add src/rules/
git commit -m "refactor: migrate simple reductions to new overhead syntax"
```

---

### Task 8: Migrate complex reductions (multi-field, compound expressions)

**Files:** Remaining ~20 reduction files with multi-field or compound polynomial expressions.

These include reductions like `factoring_circuit.rs`, `circuit_spinglass.rs`, `sat_coloring.rs`, `maximumindependentset_ilp.rs`, etc.

For compound expressions using `poly!() + poly!()`:
```rust
overhead = {
    ReductionOverhead::new(vec![
        ("num_vars", poly!(num_vars) + poly!(num_clauses)),
    ])
}
```
becomes:
```rust
overhead = {
    num_vars = "num_vars + num_clauses",
}
```

For multi-field:
```rust
overhead = {
    ReductionOverhead::new(vec![
        ("num_vars", poly!(num_vertices)),
        ("num_constraints", poly!(num_edges)),
    ])
}
```
becomes:
```rust
overhead = {
    num_vars = "num_vertices",
    num_constraints = "num_edges",
}
```

**Step 1: Migrate files**

Read each file's current overhead carefully. Convert polynomial expressions to string syntax. Some expressions may use `poly!(a * b)` (product) — convert to `"a * b"`.

**Step 2: Run tests**

Run: `cargo test`
Expected: all tests pass.

**Step 3: Commit**

```bash
git add src/rules/
git commit -m "refactor: migrate complex reductions to new overhead syntax"
```

---

### Task 9: Migrate variant cast reductions

**Files:**
- Modify: `src/rules/mod.rs` (the `impl_variant_reduction!` macro)
- Modify: cast files (`kcoloring_casts.rs`, `maximumindependentset_casts.rs`, etc.)

The `impl_variant_reduction!` macro uses `ReductionOverhead::identity(fields)`. This still works with the new system, since identity overhead maps each field to itself. Update the macro to use the new syntax if possible, or keep using `ReductionOverhead::identity()`, updated to construct `Expr::Var` instead of `Polynomial::var`.

Since `ReductionOverhead::identity()` will now construct `Expr::Var` values (after Phase 5), this migration may be minimal — just ensure the macro still compiles.

**Step 1: Verify variant casts still compile and pass tests**

Run: `cargo test`
Expected: all tests pass.
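To make the identity behavior concrete, here is a self-contained sketch of what `identity()` looks like under the new system. `Expr` is trimmed to the single variant needed here; the full enum and the real `ReductionOverhead` live in `src/expr.rs` and `src/rules/registry.rs`.

```rust
/// Trimmed-down sketch of the symbolic expression type.
/// (Const, Add, Mul, Pow, Exp, Log, Sqrt omitted for brevity.)
#[derive(Clone, Debug, PartialEq)]
pub enum Expr {
    Var(&'static str),
}

pub struct ReductionOverhead {
    pub output_size: Vec<(&'static str, Expr)>,
}

impl ReductionOverhead {
    /// Identity overhead: every output size field equals the input field of
    /// the same name, so the symbolic formula for "num_vertices" is simply
    /// the variable `num_vertices`.
    pub fn identity(fields: &'static [&'static str]) -> Self {
        Self {
            output_size: fields.iter().map(|&f| (f, Expr::Var(f))).collect(),
        }
    }
}
```

This is why the variant-cast migration is expected to be near-trivial: once `identity()` emits `Expr::Var`, composition through a cast substitutes each variable for itself and leaves any surrounding formula unchanged.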

**Step 2: Commit (if changes needed)**

```bash
git commit -m "refactor: update variant cast macro for new overhead system"
```

---

## Phase 5: Remove deprecated APIs

### Task 10: Switch `ReductionOverhead` from `Polynomial` to `Expr`

**Files:**
- Modify: `src/rules/registry.rs`
- Modify: `src/export.rs`
- Modify: `src/rules/cost.rs` (if needed)

**Step 1: Update `ReductionOverhead` to use `Expr`**

```rust
pub struct ReductionOverhead {
    pub output_size: Vec<(&'static str, Expr)>,
}
```

Update all methods: `evaluate_output_size` calls `Expr::eval`, `compose` calls `Expr::substitute`, `input_variable_names` calls `Expr::variables`, `identity` creates `Expr::Var` values.

**Step 2: Update `export.rs`**

Replace `MonomialJson`/`OverheadEntry` with the new format:
```rust
pub struct OverheadEntry {
    pub field: String,
    pub expr: Expr,
    pub formula: String,
}
```

**Step 3: Run tests, fix any compilation errors**

Run: `cargo test`
Fix any remaining references to `Polynomial` in overhead contexts.

**Step 4: Commit**

```bash
git add src/rules/registry.rs src/export.rs
git commit -m "refactor: switch ReductionOverhead from Polynomial to Expr"
```

---

### Task 11: Remove `problem_size_names` and `problem_size_values` from `Problem` trait

**Files:**
- Modify: `src/traits.rs`
- Modify: all 21 model files (remove trait method impls)
- Modify: `src/lib.rs` (remove `problem_size` re-export if no longer used)
- Modify: `src/types.rs` (keep `ProblemSize` but remove `from_names_values` if unused)

**Step 1: Remove from trait definition**

Remove `problem_size_names()` and `problem_size_values()` from the `Problem` trait in `src/traits.rs`. Remove the `problem_size()` helper function.

**Step 2: Remove implementations from all 21 model files**

Remove the `problem_size_names()` and `problem_size_values()` method bodies from each `Problem` impl.

**Step 3: Remove `source_size_names_fn` and `target_size_names_fn` from `ReductionEntry`**

Update `src/rules/registry.rs` and the proc macro to no longer emit these fields.

**Step 4: Fix compilation errors**

Search for all remaining uses of `problem_size_names`, `problem_size_values`, `problem_size(`, `source_size_names_fn`, `target_size_names_fn` and update or remove them.

**Step 5: Run tests**

Run: `cargo test`
Fix any remaining failures.

**Step 6: Commit**

```bash
git add src/traits.rs src/models/ src/rules/registry.rs src/lib.rs problemreductions-macros/src/lib.rs
git commit -m "refactor: remove problem_size_names/values from Problem trait"
```

---

### Task 12: Remove `Polynomial` and `poly!` macro

**Files:**
- Delete: `src/polynomial.rs`
- Delete: `src/unit_tests/polynomial.rs`
- Modify: `src/lib.rs` (remove `mod polynomial`)

**Step 1: Search for any remaining `Polynomial` or `poly!` references**

Run: `cargo build` — if it compiles, no references remain.

**Step 2: Delete files**

**Step 3: Run full test suite**

Run: `make check`
Expected: fmt + clippy + test all pass.

**Step 4: Commit**

```bash
git add -A
git commit -m "refactor: remove Polynomial type and poly! macro (replaced by Expr)"
```

---

## Phase 6: Update documentation and exports

### Task 13: Regenerate exports and update docs

**Files:**
- Modify: `docs/src/reductions/reduction_graph.json` (auto-generated)
- Modify: `docs/paper/reductions.typ` (if `format-overhead` needs updating)
- Modify: CLAUDE.md (update conventions)
- Regenerate: `tests/data/` ground truth JSON (if format changed)

**Step 1: Regenerate reduction graph JSON**

Run: `make rust-export`
Check that the new JSON format has `expr` + `formula` fields instead of `polynomial`.

**Step 2: Update paper if needed**

Check `docs/paper/reductions.typ` — the `format-overhead` function reads `formula` fields. If the field name changed, update it.
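For orientation, one exported overhead entry is expected to look roughly like the following. This is a hand-written illustration, not generated output; it assumes serde's default externally tagged encoding for the `Expr` enum, and the formula `num_vertices + num_edges^2` is a made-up example:

```json
{
  "field": "num_vars",
  "formula": "num_vertices + num_edges^2",
  "expr": {
    "Add": [
      { "Var": "num_vertices" },
      { "Pow": [{ "Var": "num_edges" }, { "Const": 2.0 }] }
    ]
  }
}
```

The `formula` string is what human-facing consumers (paper, MCP server) read, while `expr` preserves the machine-readable tree for downstream composition.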

**Step 3: Regenerate test data**

Run: `make qubo-testdata`
If the example JSON format changed, regenerate example outputs.

**Step 4: Run full CI check**

Run: `make check`
Expected: all pass.

**Step 5: Update CLAUDE.md**

Update the Architecture section to reference `Expr` instead of `Polynomial`, and document the new `#[reduction]` syntax.

**Step 6: Commit**

```bash
git add -A
git commit -m "docs: update exports and documentation for new overhead system"
```

---

### Task 14: Update MCP server (if applicable)

**Files:**
- Check: `problemreductions-cli/` for any MCP-specific overhead formatting

**Step 1: Search for overhead-related code in CLI/MCP**

The MCP server's `inspect_problem` and `reduce` tools return overhead info. Ensure they use the new `formula` field.

**Step 2: Run MCP tests**

Run: `make mcp-test`
Expected: all pass.

**Step 3: Commit if changes needed**

```bash
git commit -m "fix: update MCP server for new overhead format"
```

From cc8bebbd00e1962d3ba42269ada4691263dacb1f Mon Sep 17 00:00:00 2001
From: GiggleLiu
Date: Thu, 26 Feb 2026 11:18:26 +0800
Subject: [PATCH 15/15] fix: update inspect test to match "Size fields:" output format

Co-Authored-By: Claude Opus 4.6
---
 problemreductions-cli/tests/cli_tests.rs | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/problemreductions-cli/tests/cli_tests.rs b/problemreductions-cli/tests/cli_tests.rs
index 532c5e564..54bcfa5d7 100644
--- a/problemreductions-cli/tests/cli_tests.rs
+++ b/problemreductions-cli/tests/cli_tests.rs
@@ -1937,7 +1937,10 @@ fn test_inspect_problem() {
         stdout.contains("Type: MaximumIndependentSet"),
         "expected 'Type: MaximumIndependentSet', got: {stdout}"
     );
-    assert!(stdout.contains("Size:"), "expected 'Size:', got: {stdout}");
+    assert!(
+        stdout.contains("Size fields:"),
+        "expected 'Size fields:', got: {stdout}"
+    );
     assert!(
         stdout.contains("Variables:"),
         "expected 'Variables:', got: {stdout}"