Conversation
/ok to test d47d709
📝 Walkthrough

This change adds support for user-provided initial solutions in conjunction with Papilo presolve by implementing forward solution transformation ("crushing") from the original to the reduced variable space. It includes presolve method refinements, extended initial solution handling in the solver, comprehensive test coverage for round-trip solution transformations, and dataset expansion.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

🚥 Pre-merge checks: ❌ 1 failed (1 warning), ✅ 4 passed
Comment
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/src/mip_heuristics/presolve/third_party_presolve.cpp`:
- Around line 1092-1096: The loop that updates z for removed rows uses an exact
check y[i] == 0 which is too strict; replace that check with the
function/epsilon-based test used elsewhere in this function (i.e., treat y as
zero when fabs(y[i]) <= the existing numeric tolerance or by calling the
existing isZero/is_near_zero helper) so that only truly near-zero y[i] are
skipped; locate the loop referencing storage.nRowsOriginal, row_survives, y,
A_offsets, A_indices, z and get_coeff and change the condition to use the same
tolerance variable or helper used elsewhere in this file to avoid injecting
noise into z (and therefore z_reduced).
- Around line 973-980: The kParallelCol case only updates the primal x but fails
to fold reduced costs; update the corresponding dual/reduced-cost vector (z)
analogously so eliminated parallel-column contributions aren't lost: inside the
ReductionType::kParallelCol branch (where col1 = indices[first], col2 =
indices[first+2], scale = values[first+4] and x[col2] += scale * x[col1]) also
perform z[col2] += scale * z[col1] (using the same index mapping and scale),
ensuring you access the z array from the same problem context and respect any
index-mapping helpers used elsewhere in this function.
In `@cpp/tests/linear_programming/unit_tests/presolve_test.cu`:
- Around line 859-861: The test currently uses EXPECT_LT(warm_iters, cold_iters)
which enforces a strict decrease and makes the test flaky; change the assertion
to EXPECT_LE(warm_iters, cold_iters) so the warm-started PDLP is allowed to take
the same number of iterations as the cold run. Update the failure message if
desired but keep the same variables (warm_iters, cold_iters) and replace the
EXPECT_LT macro with EXPECT_LE in the presolve_test assertion.
In `@cpp/tests/mip/incumbent_callback_test.cu`:
- Around line 41-44: The destructor of scoped_env_restore_t always calls
::setenv(name_, prev_value_.c_str(), 1) which re-creates the variable as an
empty string if it was originally unset; change scoped_env_restore_t to record
whether the environment var existed (e.g. a bool prev_exists_ set in the
constructor when std::getenv(env_name) != nullptr) and in
~scoped_env_restore_t() call ::unsetenv(name_) when prev_exists_ is false,
otherwise restore the original value via ::setenv using prev_value_; update the
constructor and member fields (prev_exists_ and prev_value_) accordingly and
ensure behavior is consistent for CUOPT_DISABLE_GPU_HEURISTICS and similar uses.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID: 81564330-0c16-431c-9cbc-81de259afadb
📒 Files selected for processing (7)
- cpp/src/mip_heuristics/diversity/diversity_manager.cu
- cpp/src/mip_heuristics/presolve/third_party_presolve.cpp
- cpp/src/mip_heuristics/presolve/third_party_presolve.hpp
- cpp/src/mip_heuristics/solve.cu
- cpp/tests/linear_programming/unit_tests/presolve_test.cu
- cpp/tests/mip/incumbent_callback_test.cu
- datasets/mip/download_miplib_test_dataset.sh
```cpp
case ReductionType::kParallelCol: {
  // Storage layout: [orig_col1, flags1, orig_col2, flags2, -1]
  //                 [col1lb, col1ub, col2lb, col2ub, col2scale]
  int col1         = indices[first];
  int col2         = indices[first + 2];
  const f_t& scale = values[first + 4];
  x[col2] += scale * x[col1];
  break;
```
Fold reduced costs through kParallelCol as well.
This forward replay updates the survivor primal value, but it leaves z in the original basis. After the final projection, any eliminated parallel column contribution is dropped, so z_reduced is wrong whenever dual/reduced-cost crushing hits a parallel-column reduction.
🐛 Suggested fix

```diff
 case ReductionType::kParallelCol: {
   // Storage layout: [orig_col1, flags1, orig_col2, flags2, -1]
   //                 [col1lb, col1ub, col2lb, col2ub, col2scale]
   int col1         = indices[first];
   int col2         = indices[first + 2];
   const f_t& scale = values[first + 4];
   x[col2] += scale * x[col1];
+  if (crush_rc) { z[col2] += scale * z[col1]; }
   break;
 }
```

As per coding guidelines, `**/*.{cu,cuh,cpp,hpp,h}`: Validate algorithm correctness in optimization logic and ensure variables and constraints are accessed from the correct problem context (original vs presolve vs folded vs postsolve); verify index mapping consistency across problem transformations.
```cpp
for (int i = 0; i < (int)storage.nRowsOriginal; ++i) {
  if (row_survives[i] || y[i] == 0) continue;
  for (i_t p = A_offsets[i]; p < A_offsets[i + 1]; ++p) {
    z[A_indices[p]] += y[i] * get_coeff(i, A_indices[p]);
  }
```
Avoid exact-zero checks on removed-row duals.
y[i] == 0 is too strict for approximate PDLP/DualSimplex duals. Near-zero removed-row multipliers will still enter this correction path and inject noise into z_reduced. Use the same numeric tolerance machinery you already use elsewhere in this function.
As per coding guidelines, **/*.{cu,cuh,cpp,hpp,h}: Check numerical stability: prevent overflow/underflow, precision loss, division by zero/near-zero, and use epsilon comparisons for floating-point equality checks.
```cpp
EXPECT_LT(warm_iters, cold_iters)
  << "warmstarted solve should not take more iterations than cold solve"
  << " (cold=" << cold_iters << ", warm=" << warm_iters << ")";
```
Don’t require a strict iteration drop here.
Warm-started PDLP can legitimately converge in the same number of iterations as the cold run because of fixed startup/restart behavior. EXPECT_LT makes this test flaky even when the warm start is working.
💡 Suggested fix

```diff
- EXPECT_LT(warm_iters, cold_iters)
+ EXPECT_LE(warm_iters, cold_iters)
    << "warmstarted solve should not take more iterations than cold solve"
    << " (cold=" << cold_iters << ", warm=" << warm_iters << ")";
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```cpp
EXPECT_LE(warm_iters, cold_iters)
  << "warmstarted solve should not take more iterations than cold solve"
  << " (cold=" << cold_iters << ", warm=" << warm_iters << ")";
```
```cpp
    if (const char* prev = std::getenv(env_name)) { prev_value_ = prev; }
    ::setenv(env_name, new_value, 1);
  }
  ~scoped_env_restore_t() { ::setenv(name_, prev_value_.c_str(), 1); }
```
Restore the original unset state, not an empty string.
If CUOPT_DISABLE_GPU_HEURISTICS was originally unset, the destructor currently leaves it defined as "". That leaks process-global state across tests and can change behavior for code that branches on std::getenv(...) != nullptr.
💡 Suggested fix

```diff
 class scoped_env_restore_t {
  public:
   scoped_env_restore_t(const char* env_name, const char* new_value) : name_(env_name)
   {
-    if (const char* prev = std::getenv(env_name)) { prev_value_ = prev; }
+    if (const char* prev = std::getenv(env_name)) {
+      had_prev_value_ = true;
+      prev_value_     = prev;
+    }
     ::setenv(env_name, new_value, 1);
   }
-  ~scoped_env_restore_t() { ::setenv(name_, prev_value_.c_str(), 1); }
+  ~scoped_env_restore_t()
+  {
+    if (had_prev_value_) {
+      ::setenv(name_, prev_value_.c_str(), 1);
+    } else {
+      ::unsetenv(name_);
+    }
+  }
   scoped_env_restore_t(const scoped_env_restore_t&)            = delete;
   scoped_env_restore_t& operator=(const scoped_env_restore_t&) = delete;

  private:
   const char* name_;
+  bool had_prev_value_ = false;
   std::string prev_value_;
 };
```

As per coding guidelines, `**/*test*.{cpp,cu,py}`: Ensure test isolation: prevent GPU state, cached memory, and global variables from leaking between test cases; verify each test independently initializes its environment.
This PR implements support for crushing primal incumbents in MIP mode into the Papilo problem space, and crushing primal/dual vectors for LP.
A bugfix is also included to allow consecutive solves to be run in the same GTest process without corrupting the OpenMP runtime.
Closes #513
Closes #1060