🧩 Constraint Solving POTD: Nogood Learning in Constraint Programming (CDCL-style CP) #22275
This discussion has been marked as outdated by Constraint Solving — Problem of the Day. A newer discussion is available at Discussion #22439.
Modern CP solvers can solve instances that would take classical backtracking millions of times longer to crack. The secret? Nogood learning — borrowing the conflict-analysis ideas from CDCL SAT solvers and applying them to general constraint programming. Today we explore this technique that transformed CP from a niche academic tool into an industrial-strength solver.
Problem Statement
To make things concrete, we will apply nogood learning to a small graph coloring instance (3-coloring a 5-node graph), but the principles apply to any CSP.

- **Instance:** Color nodes `{1, 2, 3, 4, 5}` with colors `{R, G, B}` so that no two adjacent nodes share a color. Edges: `(1,2), (1,3), (2,3), (2,4), (3,5), (4,5)`.
- **Input:** A constraint graph `(V, E)` and a number of colors `k`.
- **Output:** An assignment `color: V → {1..k}` satisfying all allDiff-on-edge constraints, or `UNSAT`.

In the absence of learning, backtracking explores the same dead-end sub-trees repeatedly. Nogood learning makes each conflict "teach" the solver a permanent new constraint, called a nogood, that prunes future branches automatically.
Why It Matters
Without learning, a backtracking solver re-derives the same failures over and over. Learned nogoods turn each conflict into a permanent constraint, and this reuse of conflict information is what lets modern CP solvers handle industrial-scale instances.
Modeling Approaches
Approach 1 — Classical Backtracking CP (no learning)
- **Variables:** `x_v ∈ {R, G, B}` for each `v ∈ V`.
- **Constraints:** `x_u ≠ x_v` for each edge `(u,v) ∈ E`.
- **Search:** Pick an unassigned variable, try each color, propagate, backtrack on failure.
- **Trade-offs:** Simple to implement and guaranteed complete, but every new encounter with the same conflict sub-tree re-explores it from scratch. Exponential worst-case blowup on structured instances.
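Approach 1 fits in a few lines of Python. This is a minimal illustration of chronological backtracking on the instance above, not a production solver; the function and variable names are my own:

```python
# Classical backtracking 3-coloring of the 5-node instance from the
# problem statement. On failure, only the most recent decision is undone.
EDGES = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 5), (4, 5)]
COLORS = ["R", "G", "B"]
NODES = [1, 2, 3, 4, 5]

def neighbors(node):
    """All nodes sharing an edge with `node`."""
    return [v for u, v in EDGES if u == node] + \
           [u for u, v in EDGES if v == node]

def consistent(assignment, node, color):
    """A color is allowed if no already-colored neighbor uses it."""
    return all(assignment.get(n) != color for n in neighbors(node))

def backtrack(assignment):
    if len(assignment) == len(NODES):
        return dict(assignment)                # every node colored
    node = next(v for v in NODES if v not in assignment)
    for color in COLORS:
        if consistent(assignment, node, color):
            assignment[node] = color
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[node]               # chronological undo
    return None                                # dead end: no color fits

solution = backtrack({})
```

Note that nothing is remembered across failures: if the same combination of colors leads to a dead end in another part of the tree, the solver rediscovers it from scratch.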
Approach 2 — CDCL-style CP with Nogood Learning
Extend the CP engine with three components:

- **An implication trail**, recording for every propagated assignment its decision level and the constraint that forced it.
- **Conflict analysis**, which on failure walks the implication graph backwards to derive a nogood (e.g., the 1-UIP cut).
- **Backjumping plus a nogood database**, which stores each learned nogood as a new propagating constraint and jumps back non-chronologically.

Formally:

A nogood is a set of partial assignments `{(x_1 = a_1), …, (x_k = a_k)}` that cannot all hold simultaneously. It is recorded as the constraint `¬(x_1 = a_1) ∨ … ∨ ¬(x_k = a_k)`.

Trade-offs: Memory grows with the learned nogoods (requiring periodic database cleaning). Empirically, however, CDCL-style solvers solve orders-of-magnitude harder instances than classical backtracking, and learned nogoods act as reusable "proof certificates" of infeasibility.
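To see how a recorded nogood prunes the search, here is a sketch of unit propagation on a nogood, in my own minimal formulation: once all but one of its assignments hold, the remaining value can be removed from that variable's domain.

```python
def propagate_nogood(nogood, assignment, domains):
    """nogood: list of (var, value) pairs that must not all hold together.
    assignment: current partial assignment {var: value}.
    domains: {var: set of remaining values}, pruned in place."""
    # If any pair is already falsified, the nogood can never fire.
    if any(v in assignment and assignment[v] != a for v, a in nogood):
        return "satisfied"
    pending = [(v, a) for v, a in nogood if v not in assignment]
    if not pending:
        return "conflict"                      # every pair holds: failure
    if len(pending) == 1:
        var, val = pending[0]
        domains[var].discard(val)              # val would complete the nogood
        return "pruned"
    return "no-op"                             # too early to conclude anything

# Example: learned nogood {(x1=R), (x2=G)}; once x1=R holds,
# G is pruned from x2's domain before the solver ever tries it.
domains = {"x1": {"R"}, "x2": {"R", "G", "B"}}
status = propagate_nogood([("x1", "R"), ("x2", "G")],
                          {"x1": "R"}, domains)
```

This is exactly how the recorded constraint `¬(x_1 = a_1) ∨ … ∨ ¬(x_k = a_k)` propagates: it behaves like a clause with one unassigned literal left.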
Example Model — MiniZinc with manual conflict trace
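A minimal MiniZinc model of the instance above. The conflict trace in the comments is illustrative, reconstructed by hand; on this easy instance propagation alone reaches a solution without any conflict:

```minizinc
int: k = 3;                         % number of colors
set of int: COLOR = 1..k;           % 1 = R, 2 = G, 3 = B
array[1..5] of var COLOR: color;    % one decision variable per node

% one disequality constraint per edge
constraint color[1] != color[2] /\ color[1] != color[3];
constraint color[2] != color[3] /\ color[2] != color[4];
constraint color[3] != color[5] /\ color[4] != color[5];

solve satisfy;

% Manual trace (by hand, for illustration):
%   decide color[1] = 1;  decide color[2] = 2
%   propagate: color[3] = 3   (adjacent to nodes 1 and 2)
%   decide color[4] = 1
%   propagate: color[5] = 2   (adjacent to nodes 3 and 4)
%   all edges satisfied: solution found, no conflict to learn from here.
%   On larger instances, a failed propagation at this point is where a
%   learning solver would record a nogood over the decisions involved.
```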
Key Techniques
1. First Unique Implication Point (1-UIP) Conflict Analysis
The 1-UIP scheme — borrowed directly from CDCL SAT — finds the cut in the implication graph closest to the conflict that involves only one literal from the current decision level. This produces the shortest, most general nogood, maximizing future pruning power. Alternative cuts (e.g., the decision variable itself) produce longer, weaker nogoods.
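The resolution loop behind 1-UIP can be written compactly over an explicit trail. This is a sketch under assumed data layouts of my own choosing (literal, level, reason triples); real solvers work on antecedent clauses with watched literals:

```python
def first_uip_nogood(trail, conflict, current_level):
    """trail: assignment order, a list of (lit, level, reason) where
    reason is the tuple of earlier lits that forced lit (None = decision).
    conflict: the lits whose conjunction triggered the failure.
    Returns the 1-UIP nogood: a set of lits that must not all hold."""
    order = {lit: i for i, (lit, _, _) in enumerate(trail)}
    level = {lit: lvl for lit, lvl, _ in trail}
    reason = {lit: r for lit, _, r in trail}
    nogood = set(conflict)
    # Resolve away current-level literals until exactly one remains:
    # that last literal is the first unique implication point.
    while sum(1 for l in nogood if level[l] == current_level) > 1:
        # Pick the most recently assigned current-level literal; the
        # level's decision comes first on the trail, so it is never
        # picked while another current-level literal is present.
        l = max((l for l in nogood if level[l] == current_level),
                key=lambda lit: order[lit])
        nogood.remove(l)
        nogood.update(reason[l])               # replace it by its reason
    return frozenset(nogood)

# Tiny implication graph: a decided at level 1, b decided at level 2,
# b forces c, then a and c together force d; c and d conflict.
trail = [("a", 1, None), ("b", 2, None),
         ("c", 2, ("b",)), ("d", 2, ("a", "c"))]
learned = first_uip_nogood(trail, {"c", "d"}, current_level=2)
```

Here `c` is the 1-UIP: the learned nogood `{a, c}` contains exactly one level-2 literal plus the level-1 context, and is shorter than the all-decisions cut `{a, b}` would be general.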
2. Non-Chronological Backjumping
Classical backtracking undoes only the most recent decision. With a learned nogood `{(x_i = a), …, (x_j = b)}` whose second-highest decision level is `d`, the solver can leap back to level `d` immediately, skipping potentially thousands of intervening decisions whose variations are now provably irrelevant to this conflict.

3. Clause Database Management (LBD-based Cleaning)
Learned nogoods accumulate. Solvers like Chuffed use Literal Block Distance (LBD) — the number of distinct decision levels in a nogood — as a quality proxy. Low-LBD ("glue") clauses are extremely reusable and kept forever; high-LBD clauses are periodically purged. This balances memory against propagation power.
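Both the backjump target and the LBD score fall out of the same statistic: the set of decision levels appearing in a learned nogood. A minimal sketch, with names of my own invention:

```python
def decision_levels(nogood, level_of):
    """Distinct decision levels of the literals in a nogood."""
    return {level_of[lit] for lit in nogood}

def backjump_level(nogood, level_of):
    """Second-highest decision level in the nogood (0 if only one level):
    the level the solver jumps back to after learning the nogood."""
    levels = sorted(decision_levels(nogood, level_of), reverse=True)
    return levels[1] if len(levels) > 1 else 0

def lbd(nogood, level_of):
    """Literal Block Distance: number of distinct decision levels.
    Low LBD ("glue") => highly reusable, keep; high LBD => purge later."""
    return len(decision_levels(nogood, level_of))

# Example: a nogood whose literals sit at levels 7, 7, 3 and 1.
level_of = {"p": 7, "q": 7, "r": 3, "s": 1}
nogood = ["p", "q", "r", "s"]
```

With these levels the solver jumps from level 7 all the way back to level 3, and the nogood's LBD of 3 decides how long it survives database cleaning.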
Challenge Corner
References
Next time: we explore another corner of the constraint solving universe. Have a technique you'd love to see covered? Drop a comment below! 🚀