
Consider objective function integrality when pruning + bug fixes#851

Merged
rapids-bot[bot] merged 468 commits into NVIDIA:main from aliceb-nv:integer-objective
Feb 13, 2026
Conversation

@aliceb-nv
Contributor

@aliceb-nv aliceb-nv commented Feb 12, 2026

This PR adds support for rounding up the lower bounds in the branch-and-bound tree when the problem is proven to have an integral objective function, which allows for tighter pruning.
If the objective coefficients are not integers but are rational numbers with small enough denominators, the objective is scaled by the smallest integer that makes it integral (e.g., obj = 0.5x + 0.5y is scaled by two to obj = x + y).
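As a rough illustration of this scaling step (a hypothetical sketch, not the code in this PR): treat each coefficient as a rational with a small denominator, find that denominator, and accumulate the LCM of the denominators. All names and limits below are invented for illustration.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <numeric>
#include <vector>

// Hypothetical sketch: find the smallest integer s such that s * c is
// (near-)integral for every objective coefficient c, assuming each c is a
// rational with denominator <= max_denom. Returns 0 if no such scaling is
// found within the given limits.
int64_t find_objective_scale(const std::vector<double>& coeffs,
                             int64_t max_denom = 1024,
                             int64_t max_scale = 1'000'000,
                             double eps = 1e-9)
{
  int64_t scale = 1;
  for (double c : coeffs) {
    // Find the smallest denominator d <= max_denom with c * d near an integer.
    int64_t q = 0;
    for (int64_t d = 1; d <= max_denom; ++d) {
      double scaled = c * d;
      if (std::abs(scaled - (double)std::llround(scaled)) < eps * d) {
        q = d;
        break;
      }
    }
    if (q == 0) return 0;             // coefficient is not a small rational
    scale = std::lcm(scale, q);       // accumulate LCM of the denominators
    if (scale > max_scale) return 0;  // give up on huge scalings
  }
  return scale;
}
```

For obj = 0.5x + 0.5y this returns 2, matching the example above; for an irrational coefficient it returns 0 and no scaling is applied.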

Also included are fixes for bugs recently encountered:

  • Solutions sent to the repair queue were not recrushed after the cut passes.
  • Column deduplication in problem_t::substitute_variables did not handle more than two duplicates at once.
  • The cached data pointer in memory_instrumentation was removed, as it was bug-prone and no longer necessary.

Number of instances solved to optimality in 10 minutes on Eos:

  • baseline: 44
  • PR: 57

Description

Issue

Checklist

  • I am familiar with the Contributing Guidelines.
  • Testing
    • New or existing tests cover these changes
    • Added tests
    • Created an issue to follow-up
    • NA
  • Documentation
    • The documentation is up to date with these changes
    • Added new documentation
    • NA

Summary by CodeRabbit

  • Bug Fixes

    • Repair and queuing now consistently use the correct solution form across deterministic and nondeterministic flows.
    • MIP-gap logic updated to treat solutions at the bound as optimal.
  • Improvements

    • Better handling for integral objectives: detection, bounds rounding, and cut-off adjustments to respect integrality.
    • New utilities to assess and (re)scale objective integrality for more robust presolve and solving.
  • Chores

    • Increased CI package size thresholds.

@aliceb-nv aliceb-nv added this to the 26.04 milestone Feb 12, 2026
@aliceb-nv aliceb-nv requested a review from a team as a code owner February 12, 2026 13:58
@aliceb-nv aliceb-nv added the non-breaking Introduces a non-breaking change label Feb 12, 2026
@aliceb-nv aliceb-nv requested a review from kaatish February 12, 2026 13:58
@aliceb-nv aliceb-nv added the improvement Improves an existing functionality label Feb 12, 2026
@aliceb-nv aliceb-nv requested a review from rg20 February 12, 2026 13:58
@copy-pr-bot

copy-pr-bot bot commented Feb 12, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@coderabbitai

coderabbitai bot commented Feb 12, 2026

📝 Walkthrough


Adds an objective-integrality flag and scaling utilities, propagates integrality through presolve and solver, refactors branch-and-bound to distinguish crushed vs uncrushed primal solutions during repair, introduces rational-scaling functions, removes some HDI annotations, and adjusts a memory-instrumentation operator signature.

Changes

Cohort / File(s) Summary
Presolve / Problem types
cpp/src/dual_simplex/presolve.hpp, cpp/src/dual_simplex/presolve.cpp, cpp/src/dual_simplex/user_problem.hpp
Added bool objective_is_integral{false} to user_problem_t and lp_problem_t and propagate this flag in convert_user_problem.
Branch-and-Bound adjustments
cpp/src/branch_and_bound/branch_and_bound.cpp
Distinguish crushed vs uncrushed primal solutions: enqueue uncrushed solutions for repair, uncrush then recrush before repair; adjust node lower_bounds and LP cut_off using ceil-based logic when objective is integral; align deterministic/nondeterministic repair flows.
Heuristics: objective scaling & integrality
cpp/src/mip_heuristics/problem/problem.cu, cpp/src/mip_heuristics/problem/problem.cuh
Added rational approximation and scaling utilities (rational_approximation, find_scaling_brute_force, find_scaling_rational, find_objective_scaling_factor) and recompute_objective_integrality(); enhanced substitution/run-merge logic and var_flags propagation; declared new method in header.
Heuristics solver integration
cpp/src/mip_heuristics/solver.cu
Call recompute_objective_integrality() before B&B setup, log objective scaling when integral, and set branch_and_bound_problem.objective_is_integral from the problem.
Solution handling / MIP gap logic
cpp/src/mip_heuristics/solution/solution.cu
Zeroes relative MIP gap when the user objective already meets the solution bound (considering objective scaling sign) before further violation checks.
Minor logging removal
cpp/src/mip_heuristics/solve.cu
Removed an informational log line reporting "Objective function is integral" after Papilo presolve.
CUDA HDI qualifiers removed
cpp/src/mip_heuristics/feasibility_jump/fj_cpu.cu
Removed HDI qualifiers from several public function declarations (get_mtm_for_bound, get_mtm_for_constraint, feas_score_constraint).
Memory instrumentation operator change
cpp/src/utilities/memory_instrumentation.hpp
Changed memop_instrumentation_wrapper_t::operator[] signature from HDI-qualified to plain host-inline value_type operator[](size_type) const; simplified implementation to call underlying()[index] and record load.
CI threshold tweak
ci/validate_wheel.sh
Increased allowed compressed wheel size thresholds for python/libcuopt packages for CUDA major 12 and non-12 cases.
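The ceil-based lower-bound and cut-off adjustment summarized for branch_and_bound.cpp can be sketched as follows. This is a minimal illustration with invented names, not the PR's actual API, and it assumes a minimization problem.

```cpp
#include <cassert>
#include <cmath>

// Sketch: when every feasible solution has an integral objective value, an
// LP relaxation bound can be rounded up to the next integer.
double tighten_lower_bound(double lp_bound, bool objective_is_integral,
                           double int_tol = 1e-6)
{
  if (!objective_is_integral) return lp_bound;
  // Subtract a tolerance before ceil so a bound like 4.0000001 maps to 4.
  return std::ceil(lp_bound - int_tol);
}

// A node can be pruned as soon as the (rounded) bound can no longer beat
// the incumbent objective value.
bool can_prune(double lp_bound, double incumbent, bool objective_is_integral)
{
  return tighten_lower_bound(lp_bound, objective_is_integral) >= incumbent;
}
```

With an incumbent of 4 and an LP bound of 3.2, a non-integral objective cannot prune the node, but an integral one rounds the bound to 4 and prunes it, which is the tighter pruning the PR title refers to.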

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 6.45%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title check: ✅ Passed. The title accurately summarizes the primary change (objective function integrality for pruning) and mentions the bug fixes, aligning with the main objectives and file changes.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


No actionable comments were generated in the recent review. 🎉




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@cpp/src/mip_heuristics/problem/problem.cu`:
- Around line 1210-1234: The p_curr update (p_curr = a * p_prev1 + p_prev2) can
overflow int64_t; modify the loop to detect/avoid overflow before assigning
p_curr: compute the multiplication and addition using a wider type (e.g.
__int128) or use checked multiplication/addition, then if the result exceeds
INT64_MAX or some safe bound (or would exceed a functionally relevant limit
analogous to q_curr > max_denom) break out of the loop; ensure you update
p_prev2/p_prev1 only when the safe checked result is accepted, and keep the
existing q_curr > max_denom and approx_err checks (symbols: p_curr, q_curr,
p_prev1, p_prev2, max_denom, epsilon).
- Around line 1269-1286: The loop computing integer LCM into scm can overflow
when performing scm *= den / std::gcd(scm, den); change the order to compute
gcd_den = std::gcd(scm, den) and factor = den / gcd_den, then check whether
multiplying scm by factor would exceed the allowed scale using a safe pre-check
(e.g. compare (long double)scm * (long double)factor / (long double)gcd >
maxscale or rearrange to avoid direct multiplication) and return no_scaling if
it would; only after the safe check do the integer update scm *= factor
(affecting variables scm, gcd, and using function rational_approximation,
coefficients, maxscale, no_scaling as referenced).

In `@cpp/src/mip_heuristics/solution/solution.cu`:
- Around line 610-612: The gap-clamping uses a fixed <= check and fails for
maximization; modify the condition that sets rel_mip_gap = 0 to be
direction-aware by reading problem_ptr->presolve_data.objective_scaling_factor
(obj_scale) and using <= for positive scaling (minimization) but >= for negative
scaling (maximization) when comparing h_user_obj and solution_bound so the gap
is zero only when the incumbent meets the bound in the correct direction.
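The direction-aware comparison the review asks for can be sketched like this (an illustrative fragment with invented names, not the PR's code; it assumes, as the review states, that a positive scaling factor means minimization and a negative one means maximization):

```cpp
#include <cassert>

// Sketch: the sign of the objective scaling factor encodes the optimization
// direction, so "incumbent meets the bound" flips between <= and >=.
bool incumbent_meets_bound(double user_obj, double solution_bound,
                           double obj_scale)
{
  return obj_scale >= 0.0 ? user_obj <= solution_bound   // minimization
                          : user_obj >= solution_bound;  // maximization
}
```

The relative MIP gap would then be zeroed only when this predicate holds, rather than with a fixed `<=` that is wrong for maximization.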
🧹 Nitpick comments (1)
cpp/src/dual_simplex/presolve.cpp (1)

572-578: Minor: objective_is_integral not cleared during dualization path.

When the problem is dualized (lines 683–782), the dual objective becomes rhs / upper-bound values, making the primal objective_is_integral flag semantically invalid. Since dualization is barrier-only and likely never reaches B&B, this is low-risk, but consider clearing the flag on dual_problem (Line 766 area) for defensive correctness.

Comment on lines +1210 to +1234
int64_t p_prev2 = 1, q_prev2 = 0;
int64_t p_prev1 = (int64_t)std::floor(x), q_prev1 = 1;

double remainder = x - std::floor(x);

for (int iter = 0; iter < 100; ++iter) {
  if (std::abs(remainder) < 1e-15) break;

  remainder = 1.0 / remainder;
  int64_t a = (int64_t)std::floor(remainder);
  remainder -= a;

  int64_t p_curr = a * p_prev1 + p_prev2;
  int64_t q_curr = a * q_prev1 + q_prev2;

  if (q_curr > max_denom) break;

  p_prev2 = p_prev1;
  q_prev2 = q_prev1;
  p_prev1 = p_curr;
  q_prev1 = q_curr;

  double approx_err = x - (double)p_curr / (double)q_curr;
  if (std::abs(approx_err) < epsilon) break;
}

⚠️ Potential issue | 🟡 Minor

Potential int64_t overflow in continued-fraction convergents.

p_curr = a * p_prev1 + p_prev2 (Line 1222) has no overflow guard. While q_curr is bounded by max_denom, p_curr grows proportionally to the magnitude of x and can silently overflow for large objective coefficients. Consider adding an overflow check analogous to the q_curr > max_denom break.

🛡️ Suggested overflow guard
     int64_t p_curr = a * p_prev1 + p_prev2;
     int64_t q_curr = a * q_prev1 + q_prev2;

     if (q_curr > max_denom) break;
+    // Guard against numerator overflow
+    if (std::abs(p_curr) < std::abs(p_prev1)) break;  // overflow wrapped

A more robust approach would use __int128 or checked multiplication.
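A minimal sketch of that more robust variant, assuming a compiler with the `__int128` extension (GCC/Clang). The function name and signature are invented for illustration; the variable names mirror the snippet above.

```cpp
#include <cassert>
#include <cstdint>

// Sketch: compute the next continued-fraction convergent in __int128 and
// refuse the step if the denominator exceeds max_denom or the numerator
// would leave the int64_t range. Returns false when the iteration should
// stop; the previous convergent state is left untouched in that case.
bool next_convergent(int64_t a, int64_t& p_prev2, int64_t& p_prev1,
                     int64_t& q_prev2, int64_t& q_prev1, int64_t max_denom)
{
  __int128 p_wide = (__int128)a * p_prev1 + p_prev2;
  __int128 q_wide = (__int128)a * q_prev1 + q_prev2;
  if (q_wide > max_denom) return false;                         // denominator too large
  if (p_wide > INT64_MAX || p_wide < INT64_MIN) return false;   // would overflow int64_t
  p_prev2 = p_prev1;
  q_prev2 = q_prev1;
  p_prev1 = (int64_t)p_wide;
  q_prev1 = (int64_t)q_wide;
  return true;
}
```

For example, starting from the convergent state for x = 0.75 (continued fraction [0; 1, 3]), the partial quotients 1 and 3 produce the convergents 1/1 and 3/4, and any further step whose denominator exceeds max_denom is rejected before the state is updated.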

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change:

int64_t p_prev2 = 1, q_prev2 = 0;
int64_t p_prev1 = (int64_t)std::floor(x), q_prev1 = 1;

double remainder = x - std::floor(x);

for (int iter = 0; iter < 100; ++iter) {
  if (std::abs(remainder) < 1e-15) break;

  remainder = 1.0 / remainder;
  int64_t a = (int64_t)std::floor(remainder);
  remainder -= a;

  int64_t p_curr = a * p_prev1 + p_prev2;
  int64_t q_curr = a * q_prev1 + q_prev2;

  if (q_curr > max_denom) break;
  // Guard against numerator overflow
  if (std::abs(p_curr) < std::abs(p_prev1)) break;  // overflow wrapped

  p_prev2 = p_prev1;
  q_prev2 = q_prev1;
  p_prev1 = p_curr;
  q_prev1 = q_curr;

  double approx_err = x - (double)p_curr / (double)q_curr;
  if (std::abs(approx_err) < epsilon) break;
}

Contributor

@akifcorduk akifcorduk left a comment


Great work Alice thanks!

objective_coefficients[var_idx] * substitute_coefficient[idx]);
// Substitution changes the constraint coefficients on x_B, invalidating
// any implied-integrality proof that relied on the original structure.
var_flags[substituting_var_idx] &= ~(i_t)VAR_IMPLIED_INTEGER;
Contributor


What are var flags?

Contributor Author


I created this structure back when we added Papilo, in order to store additional per-column information. For now it is used only to record whether the variable was proven to be implied integral :)

@aliceb-nv
Contributor Author

/ok to test 97958dc

@aliceb-nv aliceb-nv requested a review from a team as a code owner February 12, 2026 16:00
@aliceb-nv aliceb-nv requested a review from jakirkham February 12, 2026 16:00
@aliceb-nv
Contributor Author

/ok to test 3589226

@aliceb-nv
Contributor Author

/merge

@rapids-bot rapids-bot bot merged commit f91f9cc into NVIDIA:main Feb 13, 2026
178 of 185 checks passed

Labels

improvement Improves an existing functionality non-breaking Introduces a non-breaking change

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants