Fix memory exhaustion in TAR header auto-detection #1024
Conversation
Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
adamhathcock left a comment
Smells right
Pull Request Overview
This PR fixes a critical security vulnerability where malformed TAR headers with excessively large LongName/LongLink sizes could cause memory exhaustion during archive auto-detection, particularly when reading compressed TAR files without extension hints.
Key Changes:
- Added size validation (32KB limit) in `TarHeader.ReadLongName()` to prevent memory exhaustion attacks (sketched after this list)
- Implemented a regression test that verifies graceful failure with malformed 8GB-sized headers
- Generalized .gitignore patterns for test scratch directories
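A minimal sketch of that guard, using the names given in this PR's description (`MAX_LONG_NAME_SIZE`, `ReadSize`, `InvalidFormatException`); the surrounding method body is illustrative, not the verbatim patch:

```csharp
// Illustrative shape of the guard inside TarHeader (not the verbatim patch).
private const int MAX_LONG_NAME_SIZE = 32 * 1024; // 32KB covers real-world path limits

private string ReadLongName(BinaryReader reader, byte[] buffer)
{
    var size = ReadSize(buffer); // size parsed from the (possibly garbage) header bytes
    if (size < 0 || size > MAX_LONG_NAME_SIZE)
    {
        // During auto-detection this is caught by IsTarFile, which returns
        // false so detection can continue with the next candidate format.
        throw new InvalidFormatException($"TAR long name size {size} is out of range.");
    }
    var nameBytes = reader.ReadBytes((int)size); // allocation now bounded by MAX_LONG_NAME_SIZE
    return ArchiveEncoding.Decode(nameBytes);    // decoding step simplified for this sketch
}
```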
Reviewed Changes
Copilot reviewed 2 out of 3 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| src/SharpCompress/Common/Tar/Headers/TarHeader.cs | Added MAX_LONG_NAME_SIZE constant and validation logic to prevent excessive memory allocation when reading LongName/LongLink headers |
| tests/SharpCompress.Test/Tar/TarReaderTests.cs | Added regression test that creates a malformed TAR header with 8GB size and verifies it throws IncompleteArchiveException instead of OutOfMemoryException |
| .gitignore | Generalized Scratch directory patterns to cover subdirectories (tests/TestArchives/**/Scratch and tests/TestArchives/**/Scratch2) |
During auto-detection without extension hints, random bytes in compressed files (e.g., tar.lz) can be misinterpreted as TAR LongName/LongLink headers with multi-gigabyte sizes, causing memory exhaustion.

Changes

Added size validation in `TarHeader.ReadLongName()`:
- `MAX_LONG_NAME_SIZE` constant (32KB) - covers real-world path limits
- `if (size < 0 || size > MAX_LONG_NAME_SIZE)` throws `InvalidFormatException` (caught by `IsTarFile`, which returns false so auto-detection continues)

Added regression test:
- `Tar_Malformed_LongName_Excessive_Size` creates a malformed header with an 8GB size

Example
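A hedged sketch of the regression test named above, assuming xUnit and the public `TarReader` entry point; the actual test's header construction and expected exception site may differ:

```csharp
using System;
using System.IO;
using System.Text;
using SharpCompress.Common;
using SharpCompress.Readers.Tar;
using Xunit;

public class TarMalformedTests
{
    [Fact]
    public void Tar_Malformed_LongName_Excessive_Size()
    {
        // Single 512-byte TAR header block claiming a ~8GB long-name payload.
        var header = new byte[512];
        Encoding.ASCII.GetBytes("././@LongLink").CopyTo(header, 0);  // name field
        Encoding.ASCII.GetBytes("77777777777 ").CopyTo(header, 124); // size: 8^11 - 1 bytes (~8GB), octal
        header[156] = (byte)'L';                                     // GNU long-name entry type

        // The checksum is computed with the checksum field filled with spaces.
        for (var i = 148; i < 156; i++) header[i] = (byte)' ';
        var sum = 0;
        foreach (var b in header) sum += b;
        Encoding.ASCII.GetBytes(Convert.ToString(sum, 8).PadLeft(6, '0') + "\0 ").CopyTo(header, 148);

        using var stream = new MemoryStream(header);

        // Before the fix this path attempted a multi-gigabyte allocation;
        // per the PR description it now fails with IncompleteArchiveException.
        Assert.Throws<IncompleteArchiveException>(() =>
        {
            using var reader = TarReader.Open(stream);
            while (reader.MoveToNextEntry()) { }
        });
    }
}
```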
All existing tests pass. No breaking changes.
Original prompt
This section details the original issue you should resolve
<issue_title>Bug: Memory exhaustion when auto-detecting a specific tar.lz archive</issue_title>
<issue_description>
### Summary
When reading a specific `.tar.lz` file without providing an extension hint, the library attempts to auto-detect the format. This process incorrectly identifies the file as a `Tar` archive with a `LongLink` header, leading to an attempt to allocate a massive amount of memory (e.g., 20GB). This causes the application to either crash or fail to open the archive. Standard compression utilities can open this same file without any issues.

The root cause appears to be a lack of validation in `TarHeader.Read()` and its helper methods.

### Steps to Reproduce
1. Open the specific `.tar.lz` file (a repro sketch follows this list).
2. Do not set `ReaderOptions.ExtensionHint`, forcing the library to auto-detect the archive type.
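A minimal repro sketch of these steps (the file path is hypothetical; `ReaderFactory.Open` auto-detects the format when no hint is supplied):

```csharp
using System.IO;
using SharpCompress.Readers;

// Hypothetical path to the problematic archive.
using var stream = File.OpenRead("sample.tar.lz");

// No ReaderOptions.ExtensionHint is supplied, so the factory auto-detects
// the format, trying Tar first — which is where the misdetection occurs.
using var reader = ReaderFactory.Open(stream);
while (reader.MoveToNextEntry()) { }
```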
### Root Cause Analysis

The problem occurs because the auto-detection mechanism first tries to parse the file as a standard `Tar` archive. My file is a `.tar.lz`, but a byte at a specific offset is misinterpreted.

1. In `TarHeader.Read()`, the code enters a loop to process headers.
2. For my specific file, the byte at offset 157 (read as `entryType`) happens to match `EntryType.LongLink`. This triggers a call to `TarHeader.ReadLongName()`.
3. Inside `ReadLongName()`, the `ReadSize(buffer)` method calculates an extremely large value for `nameLength` based on the misinterpreted header data. The subsequent call to `reader.ReadBytes(nameLength)` attempts to allocate a massive array without any sanity checks.
4. The `BinaryReader.ReadBytes()` method directly allocates memory based on the provided `count`.
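Condensed, the pre-fix shape described above looks like this (illustrative fragment, not the verbatim source):

```csharp
// Inside ReadLongName(), before the fix:
var nameLength = ReadSize(buffer);                 // huge value from misread header bytes
var nameBytes = reader.ReadBytes((int)nameLength); // BinaryReader allocates `count` bytes up front
```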
### Stream Corruption

After the `Tar` parsing attempt fails (likely due to an `EndOfStreamException` or an I/O error from `Stream.ReadAtLeast()`), the underlying `Stream` or `SharpCompressStream` appears to be left in a corrupted state.

When the auto-detection logic proceeds to the correct tar.lz format, it fails to read the header correctly. For example, it does not see the "LZIP" magic bytes at the beginning of the stream, even though debugging shows the bytes are present in the buffer. This strongly suggests that the stream's internal position or state has been irrecoverably altered by the failed read attempt.
### Workaround

The issue can be avoided by explicitly setting `ReaderOptions.ExtensionHint` to guide the parser. This skips the problematic `Tar` auto-detection step; a sketch follows below.

However, most users would expect the auto-detection to be robust and would not think to set this option unless they have investigated the source code.
</issue_description>
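A sketch of the workaround, assuming `ExtensionHint` accepts the extension as a string (the property name comes from the issue text; the exact value format is an assumption):

```csharp
using System.IO;
using SharpCompress.Readers;

// Hint the format explicitly so the Tar probe is skipped during detection.
var options = new ReaderOptions { ExtensionHint = "tar.lz" }; // value format assumed
using var reader = ReaderFactory.Open(File.OpenRead("sample.tar.lz"), options);
while (reader.MoveToNextEntry()) { }
```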
Comments on the Issue (you are @copilot in this section)
@adamhathcock Please make a P...