Add max scale factor for generated inequalities #2337

Merged
galabovaa merged 6 commits into latest from fix-2322
May 20, 2025

Conversation

@Opt-Mucca
Collaborator

This should fix #2322

Explanation: Before running a separator from HighsCutGeneration, HiGHS always scales a generated inequality so that its largest coefficient is approximately 1.0. This causes problems when all the coefficients are small, e.g. near feastol = 1e-6, and the inequality is violated by the optimal solution only by some tiny amount, e.g. 1e-10: the violation grows substantially after scaling, and the resulting cut then cuts off the solution.

Solution: Add a max scaling factor (you can still scale down from 1e+12 to 1.0, but not up from 1e-6 to 1.0). Talked with Leona and some other developers about a sensible value (2**10 = 1024).

Instances affected: Any instance where HiGHS generates an inequality with largest absolute coefficient less than 5e-4 and then calls a separator from HighsCutGeneration. I don't see these being frequent and it's not like we're stopping a cut from being generated (just scaling it by less), but I can imagine a tiny performance hit.

TLDR: Puts up some additional guard rails against numerically unstable cuts.
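As a hypothetical illustration of the mechanism described above (a sketch, not HiGHS source; the name scaleFactor and its signature are assumptions), capping the power-of-two shift at 2^10 keeps a tiny violation from being blown up past feastol:

```cpp
#include <algorithm>
#include <cmath>

// Compute the power-of-two factor that scales maxAbsCoef to roughly 1.0.
// With capped == true the upscaling is limited to 2^10 = 1024, mirroring
// the fix; downscaling of large coefficients is unaffected by the cap.
double scaleFactor(double maxAbsCoef, bool capped) {
  int e = 0;
  std::frexp(maxAbsCoef, &e);  // maxAbsCoef = m * 2^e with m in [0.5, 1)
  int shift = -e;              // shift that maps maxAbsCoef into [0.5, 1)
  if (capped) shift = std::min(10, shift);
  return std::ldexp(1.0, shift);
}
```

For a cut whose largest coefficient is 1e-6 and whose violation at the optimal solution is 1e-10, the uncapped factor is 2^19 = 524288, inflating the violation to about 5e-5, well above feastol = 1e-6; the capped factor of 1024 leaves it near 1e-7, so the solution survives.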

@codecov

codecov Bot commented May 14, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 79.29%. Comparing base (0228337) to head (89ee10f).
Report is 8 commits behind head on latest.

Additional details and impacted files
@@            Coverage Diff             @@
##           latest    #2337      +/-   ##
==========================================
+ Coverage   79.27%   79.29%   +0.01%     
==========================================
  Files         345      345              
  Lines       84374    84366       -8     
==========================================
+ Hits        66887    66895       +8     
+ Misses      17487    17471      -16     

☔ View full report in Codecov by Sentry.

@galabovaa
Contributor

Looks good to me!

Collaborator

@fwesselm fwesselm left a comment


Thank you for fixing this!

  • There is another occurrence of std::frexp in line 850. Is an upper bound needed there as well?

  • Would it be OK to have a method for scaling (to avoid having to repeat the fix)? For example, I tried this here:

double HighsCutGeneration::scale(double val) {
  int expshift = 0;
  std::frexp(val, &expshift);
  expshift = -expshift;

  // Don't scale the coefficients by more than +1024 (violations can increase)
  expshift = std::min(10, expshift);

  // Scale rhs
  rhs = std::ldexp(static_cast<double>(rhs), expshift);

  // Scale row
  for (HighsInt i = 0; i != rowlen; ++i)
    vals[i] = std::ldexp(vals[i], expshift);

  // Return scaling factor
  return std::ldexp(1.0, expshift);
}

Member

@jajhall jajhall left a comment


Looks good to me, but @Opt-Mucca, what about the query from @fwesselm, and the suggestion that duplicated code be avoided by having a method to do the scaling?

Once this query is resolved, I'm happy for the PR to be merged

@Opt-Mucca
Collaborator Author

@fwesselm No problem! Interested to see if this helps minimise the number of issues created by the fuzzer.

  • Yes, it should. Nice spot! I was wrongly interpreting that as part of the integral scale block.
  • I'm a big fan of that change! It's clean and easy. I'll add it now and also call it at line 850.

Collaborator

@fwesselm fwesselm left a comment


Thanks!

@Opt-Mucca
Collaborator Author

Opt-Mucca commented May 15, 2025

Added suggested changes. Two things:

  • I suspect some instances will be impacted by this (likely some non-standard instances). Coin flip whether it's for the better or for the worse. Would recommend performance testing this separately from the CMIR / ImpliedBound changes.
  • In preprocessBaseInequality the rhs scaling was handled slightly differently than in the other cases. That is, we were doing rhs *= scale instead of rhs = std::ldexp((double)rhs, expshift) (where rhs is of type HighsCDouble). We now do the latter everywhere. I guess the scaling is now safer, but we lose precision when casting. Not sure if this was an active design choice and just wanted to note the change. @fwesselm @jajhall (just so this second point isn't missed)

@fwesselm
Collaborator

fwesselm commented May 15, 2025

> Added suggested changes. Two things:
>
>   • I suspect some instances will be impacted by this (likely some non-standard instances). Coin flip whether it's for the better or for the worse. Would recommend performance testing this separately from the CMIR / ImpliedBound changes.
>   • In preprocessBaseInequality the rhs scaling was handled slightly differently than in the other cases. That is, we were doing rhs *= scale instead of rhs = std::ldexp((double)rhs, expshift) (where rhs is of type HighsCDouble). We now do the latter everywhere. I guess the scaling is now safer, but we lose precision when casting. Not sure if this was an active design choice and just wanted to note the change. @fwesselm @jajhall (just so this second point isn't missed)
>
> Edit: Some tests are also currently failing

Yes, I would also be fine with having something like

double scale = std::ldexp(1.0, expshift);
rhs *= scale;

...
return scale;

in the new utility. The scaling of the coefficients would still be done using std::ldexp, not sure why they are handled differently.

The test failures may be due to the integration with the cMIR fixes that were just merged into latest.

@codecov

codecov Bot commented May 15, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 79.06%. Comparing base (5e7f6f3) to head (5c1a0d8).
Report is 63 commits behind head on latest.

Additional details and impacted files
@@            Coverage Diff             @@
##           latest    #2337      +/-   ##
==========================================
- Coverage   79.26%   79.06%   -0.21%     
==========================================
  Files         345      345              
  Lines       84363    84386      +23     
==========================================
- Hits        66874    66722     -152     
- Misses      17489    17664     +175     

☔ View full report in Codecov by Sentry.

@Opt-Mucca
Collaborator Author

> Yes, I would also be fine with having something like
>
> double scale = std::ldexp(1.0, expshift);
> rhs *= scale;
>
> ...
> return scale;
>
> in the new utility. The scaling of the coefficients would still be done using std::ldexp, not sure why they are handled differently.

I think this is where I need some more C++ background. So the rhs *= scale here would be strictly better from a numerics POV? If so, then I'd change it.

> The test failures may be due to the integration with the cMIR fixes that were just merged into latest.

I accidentally removed some initialisation of initialScale. Have added it back. (There still may be a problem with the cMIR fixes.)

@fwesselm
Copy link
Copy Markdown
Collaborator

> > Yes, I would also be fine with having something like
> >
> > double scale = std::ldexp(1.0, expshift);
> > rhs *= scale;
> > ...
> > return scale;
> >
> > in the new utility. The scaling of the coefficients would still be done using std::ldexp, not sure why they are handled differently.
>
> I think this is where I need some more C++ background. So the rhs *= scale here would be strictly better from a numerics POV? If so, then I'd change it.
>
> > The test failures may be due to the integration with the cMIR fixes that were just merged into latest.
>
> I accidentally removed some initialisation of initialScale. Have added it back. (There still may be a problem with the cMIR fixes.)

I am no expert here, so I asked around, and one suggestion was to implement ldexp for HighsCDouble:

friend HighsCDouble ldexp(const HighsCDouble& v, int exp) {
  return HighsCDouble(std::ldexp(v.hi, exp), std::ldexp(v.lo, exp));
}

and have

rhs = ldexp(rhs, expshift);

This would avoid the static_cast (potential loss of precision), and it looks much nicer, I think.
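For context, a minimal double-double sketch shows why the component-wise ldexp loses nothing (the struct DD below is a hypothetical stand-in for HighsCDouble, not the real class; member names hi and lo follow the snippet above):

```cpp
#include <cmath>

// Hypothetical stand-in for HighsCDouble: a value represented as hi + lo,
// where lo is a small correction term extending the precision of hi.
struct DD {
  double hi;
  double lo;
};

// Component-wise ldexp: multiplying by 2^exp only shifts the binary
// exponents, so both parts are scaled exactly (barring overflow/underflow).
DD ldexp(const DD& v, int exp) {
  return DD{std::ldexp(v.hi, exp), std::ldexp(v.lo, exp)};
}
```

By contrast, std::ldexp(static_cast<double>(rhs), expshift) first collapses hi + lo into a single double, discarding the lo correction before the (exact) scaling even happens.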

Sorry for the back and forth on this.

@Opt-Mucca
Collaborator Author

@fwesselm no problem on the back and forth, especially if it comes out with a solution that convenient. I'll add that change now.

Member

@jajhall jajhall left a comment


If @fwesselm and @Opt-Mucca are happy that the final tweaks have been implemented, then this is good to merge.

Note that we should get into the habit of commenting on developments and fixes in https://github.com/ERGO-Code/HiGHS/blob/latest/FEATURES.md. I know that there are some formatting errors, and I'll add things from recent merged PRs, but concise comments on MIP developments/fixes are better done by those who implemented them.

@galabovaa
Contributor

I think these are already included in #2346 @jajhall

@galabovaa galabovaa merged commit d8ff906 into latest May 20, 2025
303 of 306 checks passed
@jajhall jajhall deleted the fix-2322 branch May 20, 2025 08:51