Consider objective function integrality when pruning + bug fixes #851
rapids-bot[bot] merged 468 commits into NVIDIA:main
Conversation
…would not be accounted for when iterations aren't multiples of LOG_FEATURE_ITERATIONS
…mpiler is a bit dumb)
This reverts commit ac57402.
📝 Walkthrough
Adds an objective-integrality flag and scaling utilities, propagates integrality through presolve and solver, refactors branch-and-bound to distinguish crushed vs uncrushed primal solutions during repair, introduces rational-scaling functions, removes some HDI annotations, and adjusts a memory-instrumentation operator signature.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@cpp/src/mip_heuristics/problem/problem.cu`:
- Around line 1210-1234: The p_curr update (p_curr = a * p_prev1 + p_prev2) can
overflow int64_t; modify the loop to detect/avoid overflow before assigning
p_curr: compute the multiplication and addition using a wider type (e.g.
__int128) or use checked multiplication/addition, then if the result exceeds
INT64_MAX or some safe bound (or would exceed a functionally relevant limit
analogous to q_curr > max_denom) break out of the loop; ensure you update
p_prev2/p_prev1 only when the safe checked result is accepted, and keep the
existing q_curr > max_denom and approx_err checks (symbols: p_curr, q_curr,
p_prev1, p_prev2, max_denom, epsilon).
- Around line 1269-1286: The loop computing integer LCM into scm can overflow
when performing scm *= den / std::gcd(scm, den); change the order to compute
gcd_den = std::gcd(scm, den) and factor = den / gcd_den, then check whether
multiplying scm by factor would exceed the allowed scale using a safe pre-check
(e.g. compare (long double)scm * (long double)factor / (long double)gcd >
maxscale or rearrange to avoid direct multiplication) and return no_scaling if
it would; only after the safe check do the integer update scm *= factor
(affecting variables scm, gcd, and using function rational_approximation,
coefficients, maxscale, no_scaling as referenced).
In `@cpp/src/mip_heuristics/solution/solution.cu`:
- Around line 610-612: The gap-clamping uses a fixed <= check and fails for
maximization; modify the condition that sets rel_mip_gap = 0 to be
direction-aware by reading problem_ptr->presolve_data.objective_scaling_factor
(obj_scale) and using <= for positive scaling (minimization) but >= for negative
scaling (maximization) when comparing h_user_obj and solution_bound so the gap
is zero only when the incumbent meets the bound in the correct direction.
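The direction-aware clamp described above could be sketched like this; `obj_scale` stands in for `problem_ptr->presolve_data.objective_scaling_factor`, and the helper function name is hypothetical:

```cpp
// Returns true when the incumbent objective already meets the solution bound
// in the correct direction, so rel_mip_gap can safely be clamped to zero.
// A negative objective scaling factor indicates the original problem was a
// maximization that presolve converted to a minimization.
bool incumbent_meets_bound(double h_user_obj, double solution_bound, double obj_scale)
{
  return obj_scale > 0 ? (h_user_obj <= solution_bound)   // minimization
                       : (h_user_obj >= solution_bound);  // maximization
}
```

With the original fixed `<=` check, a maximization incumbent equal to or above the bound would incorrectly report a nonzero gap.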
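The overflow-safe LCM accumulation requested for the scaling loop above can be sketched as follows. This is a simplified standalone version (the function name and the `max_scale` cap are illustrative, assuming positive denominators from the rational approximation):

```cpp
#include <cstdint>
#include <numeric>
#include <vector>

// Accumulates the least common multiple of the denominators, returning 0
// ("no scaling") if the running LCM would exceed max_scale. Checking
// scm > max_scale / factor BEFORE multiplying avoids int64_t overflow.
int64_t safe_lcm(const std::vector<int64_t>& denominators, int64_t max_scale)
{
  int64_t scm = 1;
  for (int64_t den : denominators) {
    int64_t factor = den / std::gcd(scm, den);
    if (scm > max_scale / factor) return 0;  // would exceed the allowed scale
    scm *= factor;
  }
  return scm;
}
```

Rearranging the comparison into a division is the key point: the unsafe form `scm * factor > max_scale` can already have wrapped before the check runs.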
🧹 Nitpick comments (1)
cpp/src/dual_simplex/presolve.cpp (1)
572-578: Minor: `objective_is_integral` not cleared during dualization path. When the problem is dualized (lines 683–782), the dual objective becomes the rhs/upper-bound values, making the primal `objective_is_integral` flag semantically invalid. Since dualization is barrier-only and likely never reaches B&B, this is low-risk, but consider clearing the flag on `dual_problem` (line 766 area) for defensive correctness.
```cpp
int64_t p_prev2 = 1, q_prev2 = 0;
int64_t p_prev1 = (int64_t)std::floor(x), q_prev1 = 1;

double remainder = x - std::floor(x);

for (int iter = 0; iter < 100; ++iter) {
  if (std::abs(remainder) < 1e-15) break;

  remainder = 1.0 / remainder;
  int64_t a = (int64_t)std::floor(remainder);
  remainder -= a;

  int64_t p_curr = a * p_prev1 + p_prev2;
  int64_t q_curr = a * q_prev1 + q_prev2;

  if (q_curr > max_denom) break;

  p_prev2 = p_prev1;
  q_prev2 = q_prev1;
  p_prev1 = p_curr;
  q_prev1 = q_curr;

  double approx_err = x - (double)p_curr / (double)q_curr;
  if (std::abs(approx_err) < epsilon) break;
}
```
Potential int64_t overflow in continued-fraction convergents.
p_curr = a * p_prev1 + p_prev2 (Line 1222) has no overflow guard. While q_curr is bounded by max_denom, p_curr grows proportionally to the magnitude of x and can silently overflow for large objective coefficients. Consider adding an overflow check analogous to the q_curr > max_denom break.
🛡️ Suggested overflow guard

```diff
   int64_t p_curr = a * p_prev1 + p_prev2;
   int64_t q_curr = a * q_prev1 + q_prev2;
   if (q_curr > max_denom) break;
+  // Guard against numerator overflow
+  if (std::abs(p_curr) < std::abs(p_prev1)) break;  // overflow wrapped
```

A more robust approach would use __int128 or checked multiplication.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```cpp
int64_t p_prev2 = 1, q_prev2 = 0;
int64_t p_prev1 = (int64_t)std::floor(x), q_prev1 = 1;
double remainder = x - std::floor(x);
for (int iter = 0; iter < 100; ++iter) {
  if (std::abs(remainder) < 1e-15) break;
  remainder = 1.0 / remainder;
  int64_t a = (int64_t)std::floor(remainder);
  remainder -= a;
  int64_t p_curr = a * p_prev1 + p_prev2;
  int64_t q_curr = a * q_prev1 + q_prev2;
  if (q_curr > max_denom) break;
  // Guard against numerator overflow
  if (std::abs(p_curr) < std::abs(p_prev1)) break;  // overflow wrapped
  p_prev2 = p_prev1;
  q_prev2 = q_prev1;
  p_prev1 = p_curr;
  q_prev1 = q_curr;
  double approx_err = x - (double)p_curr / (double)q_curr;
  if (std::abs(approx_err) < epsilon) break;
}
```
akifcorduk
left a comment
Great work Alice thanks!
```cpp
  objective_coefficients[var_idx] * substitute_coefficient[idx]);
// Substitution changes the constraint coefficients on x_B, invalidating
// any implied-integrality proof that relied on the original structure.
var_flags[substituting_var_idx] &= ~(i_t)VAR_IMPLIED_INTEGER;
```
I created this structure back when we added Papilo, in order to store additional information relating to columns. For now it is used only to store if the variable was proven to be implied integral :)
/ok to test 97958dc

/ok to test 3589226

/merge
This PR adds support for rounding the lower bounds in the branch and bound tree when the problem is proven to have an integral objective function, which allows for tighter pruning.
If objective coefficients are not integer, but are rational numbers with small enough denominators, the objective is scaled by the smallest possible integer that makes the objective function integral. (i.e. in the case of obj = 0.5x + 0.5y; it is scaled by two to obj = 1x + 1y)
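The 0.5x + 0.5y example can be reproduced with a small continued-fraction helper. This is a simplified sketch of the approach, not the PR's exact `rational_approximation` code, and it omits the overflow guards discussed in the review:

```cpp
#include <cmath>
#include <cstdint>
#include <numeric>
#include <vector>

// Best rational approximation p/q of x with q <= max_denom, via the
// continued-fraction convergent recurrence; returns the denominator q.
int64_t approx_denominator(double x, int64_t max_denom)
{
  int64_t p1 = (int64_t)std::floor(x), q1 = 1, p2 = 1, q2 = 0;
  double rem = x - std::floor(x);
  for (int iter = 0; iter < 100 && std::abs(rem) > 1e-15; ++iter) {
    rem = 1.0 / rem;
    int64_t a = (int64_t)std::floor(rem);
    rem -= a;
    int64_t p = a * p1 + p2, q = a * q1 + q2;
    if (q > max_denom) break;
    p2 = p1; q2 = q1; p1 = p; q1 = q;
    if (std::abs(x - (double)p / (double)q) < 1e-12) break;
  }
  return q1;
}

// Smallest integer scale making every coefficient integral: the LCM of the
// approximated denominators.
int64_t integral_scale(const std::vector<double>& coeffs)
{
  int64_t scm = 1;
  for (double c : coeffs) scm = std::lcm(scm, approx_denominator(c, 1 << 20));
  return scm;
}
```

For obj = 0.5x + 0.5y, `integral_scale({0.5, 0.5})` yields 2, matching the example above; already-integral coefficients contribute a denominator of 1 and leave the scale unchanged.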
Also included are fixes for bugs recently encountered:
- `problem_t::substitute_variables` would not handle the case of more than two duplicates at once

Number of optimals in 10min on Eos: