fix(test): avoid 4 GB list allocation in BigTIFF threshold test #1808
Merged
Conversation
test_large_strip_table_alone_can_promote built a tag list with ``[0] * (UINT32_MAX // 8 + 1)``, allocating roughly 4 GB before ``_compute_classic_ifd_overhead`` was even called. On GitHub-hosted ubuntu runners (~7 GB RAM) this OOM-killed the pytest worker with exit code 143, which fail-fast then cancelled the whole pytest job on main. Drive the same "huge strip table alone forces BigTIFF" assertion through the ``n_entries`` parameter of ``_should_use_bigtiff_streaming`` (8 bytes per entry, no list allocation). The ``_compute_classic_ifd_overhead`` wiring is already exercised by ``test_overhead_pushes_just_under_threshold_over`` and ``test_large_gdal_metadata_flips_decision``.
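The shape of the fix can be sketched as follows. This is a hypothetical stand-in for the real `_should_use_bigtiff_streaming` (its actual signature and threshold logic in the codebase may differ); it only illustrates why passing `n_entries` is equivalent to, and far cheaper than, materializing the strip table:

```python
UINT32_MAX = 2**32 - 1  # largest value a classic TIFF 32-bit offset can hold

def _should_use_bigtiff_streaming(data_bytes: int, n_entries: int = 0) -> bool:
    """Hypothetical stand-in for the real predicate: promote to BigTIFF
    when the payload plus the strip table (8 bytes per entry) can no
    longer be addressed with 32-bit offsets."""
    return data_bytes + 8 * n_entries > UINT32_MAX

# Old test shape: materializes ~4 GiB of list just to represent a count.
#   tags = [0] * (UINT32_MAX // 8 + 1)   # OOM-kills a ~7 GB CI runner

# New test shape: pass the entry count directly -- no allocation at all.
assert _should_use_bigtiff_streaming(0, n_entries=UINT32_MAX // 8 + 1)
assert not _should_use_bigtiff_streaming(0, n_entries=1)
```

The assertion exercised is unchanged: a strip table alone, with zero pixel data, is enough to cross the 32-bit boundary and force BigTIFF.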
Summary
CI on `main` has been failing since PR #1787 merged. The newly added `test_large_strip_table_alone_can_promote` allocated a Python list of `(UINT32_MAX // 8) + 1` (~536 M) zeros before `_compute_classic_ifd_overhead` was even called. On GitHub-hosted ubuntu runners (~7 GB RAM) this OOM-killed the pytest worker with exit code 143, and fail-fast cancelled every other matrix job.
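The ~4 GB figure follows from CPython's list layout: on a 64-bit build each list slot is an 8-byte pointer, even though every slot here references the same cached `0` object. A quick back-of-envelope check (no allocation performed):

```python
UINT32_MAX = 2**32 - 1
n = UINT32_MAX // 8 + 1           # 536_870_912 entries (the "~536 M" above)
pointer_bytes = 8                 # per list slot on 64-bit CPython
total = n * pointer_bytes         # bytes the list body alone needs
print(n, total, total / 2**30)    # 536870912 4294967296 4.0
```

Exactly 4 GiB for the pointer array alone, before any interpreter or pytest overhead, on a runner with ~7 GB total.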
Fix
Drive the same "huge strip table alone forces BigTIFF" assertion through the `n_entries` parameter of `_should_use_bigtiff_streaming` (8 bytes per entry, no list allocation). The `_compute_classic_ifd_overhead` wiring is already covered by `test_overhead_pushes_just_under_threshold_over` and `test_large_gdal_metadata_flips_decision`.

Evidence
Before: `xrspatial/tests/test_geotiff_streaming_bigtiff_threshold_1785.py ....` then `##[error]Process completed with exit code 143.` (run 25807252057).

After (local):
Test plan