
Bump libcuopt size by 5 mb #1016

Merged
rgsl888prabhu merged 3 commits into NVIDIA:main from rgsl888prabhu:bump_libcuopt_size_by
Apr 1, 2026

Conversation

@rgsl888prabhu
Collaborator

@rgsl888prabhu rgsl888prabhu commented Mar 31, 2026

Description

There were failures due to an increase in wheel size, so this bumps the wheel size limit for libcuopt.

https://github.com/NVIDIA/cuopt/actions/runs/23821469086/job/69435078088?pr=1015

Checklist

  • I am familiar with the Contributing Guidelines.
  • Testing
    • New or existing tests cover these changes
    • Added tests
    • Created an issue to follow-up
    • NA
  • Documentation
    • The documentation is up to date with these changes
    • Added new documentation
    • NA

@rgsl888prabhu rgsl888prabhu requested a review from a team as a code owner March 31, 2026 22:36
@coderabbitai

coderabbitai bot commented Mar 31, 2026

📝 Walkthrough

Updated compression size thresholds in the wheel validation script for the libcuopt package. The --max-allowed-size-compressed values were increased by 5Mi for CUDA version 12 (645Mi → 650Mi) and by 5Mi for other CUDA versions (490Mi → 495Mi).

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Build validation configuration: `ci/validate_wheel.sh` | Increased pydistcheck compressed size limits for python/libcuopt by 5Mi for both CUDA 12 and other CUDA major versions. |
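To illustrate the kind of change involved, here is a sketch of how a wheel validation script can select a pydistcheck size limit per CUDA major version. The variable names and script layout are assumptions for illustration, not the actual contents of ci/validate_wheel.sh; only the limit values (650Mi / 495Mi) come from this PR.

```shell
# Hypothetical sketch: pick the compressed-size limit by CUDA major version.
# RAPIDS_CUDA_VERSION is an assumed environment variable, defaulted here
# so the snippet runs standalone.
RAPIDS_CUDA_VERSION="${RAPIDS_CUDA_VERSION:-12.8}"
CUDA_MAJOR="${RAPIDS_CUDA_VERSION%%.*}"

if [ "${CUDA_MAJOR}" = "12" ]; then
    MAX_COMPRESSED="650Mi"   # was 645Mi before this PR
else
    MAX_COMPRESSED="495Mi"   # was 490Mi before this PR
fi

echo "compressed size limit: ${MAX_COMPRESSED}"

# The validation step would then pass the limit to pydistcheck, e.g.:
#   pydistcheck --max-allowed-size-compressed "${MAX_COMPRESSED}" \
#     python/libcuopt/dist/*.whl
```

The pydistcheck call is left commented out since it requires a built wheel and the tool to be installed; the point is only that the limit is chosen once and threaded into the check.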

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title directly and clearly describes the main change: bumping the libcuopt wheel size limit by 5 MB, which matches the changeset modifications to pydistcheck size thresholds. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |
| Description check | ✅ Passed | The pull request description clearly explains the purpose: bumping the wheel size limit for libcuopt due to wheel size failures, with a reference to the failing GitHub Actions run. |



@rgsl888prabhu rgsl888prabhu self-assigned this Apr 1, 2026
@rgsl888prabhu rgsl888prabhu added non-breaking Introduces a non-breaking change improvement Improves an existing functionality labels Apr 1, 2026
@rgsl888prabhu
Collaborator Author

/merge

@rgsl888prabhu rgsl888prabhu merged commit 1d61c7d into NVIDIA:main Apr 1, 2026
609 of 622 checks passed
akifcorduk pushed a commit to akifcorduk/cuopt that referenced this pull request Apr 3, 2026
@coderabbitai coderabbitai bot mentioned this pull request Apr 9, 2026
8 tasks

Labels

improvement Improves an existing functionality non-breaking Introduces a non-breaking change

Projects

None yet

Development


2 participants