
Streaming decompression can detect incorrect header ID sooner #3175

Merged

Cyan4973 merged 1 commit into dev from fix3169 on Jun 22, 2022
Conversation

@Cyan4973 Cyan4973 (Contributor) commented Jun 22, 2022

Streaming decompression used to wait for a minimum of 5 bytes before attempting decoding.
This meant that, when only a few bytes (<5) were provided
and those bytes were incorrect,
no error was reported.
The streaming API would simply request more data, waiting for at least 5 bytes.

This PR makes it possible to detect incorrect Frame IDs as soon as the first byte is provided.

Fixes #3169 for the urllib3 use case.
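
To make the change concrete, here is a minimal sketch (not part of the PR) that drives the streaming decoder with a single invalid byte; assuming a libzstd release that includes this change, the error surfaces immediately instead of the decoder simply asking for more input:

```c
#include <stdio.h>
#include <zstd.h>

int main(void)
{
    ZSTD_DCtx* dctx = ZSTD_createDCtx();
    if (dctx == NULL) return 1;

    /* A single byte that cannot start a zstd frame: the frame magic number
     * 0xFD2FB528 is stored little-endian, so a valid stream begins with 0x28. */
    unsigned char bad = 0x00;
    ZSTD_inBuffer input = { &bad, 1, 0 };

    unsigned char dst[64];
    ZSTD_outBuffer output = { dst, sizeof(dst), 0 };

    size_t const ret = ZSTD_decompressStream(dctx, &output, &input);
    if (ZSTD_isError(ret)) {
        /* With this change, the bad byte is rejected immediately. */
        printf("error: %s\n", ZSTD_getErrorName(ret));
    } else {
        /* Previously, the decoder would just keep asking for more input. */
        printf("no error yet, decoder requests more input\n");
    }

    ZSTD_freeDCtx(dctx);
    return 0;
}
```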

@Cyan4973 Cyan4973 force-pushed the fix3169 branch 2 times, most recently from 2264e90 to 53aadc7 on June 22, 2022 03:00
@Cyan4973 Cyan4973 merged commit f5c4ec4 into dev Jun 22, 2022
@Cyan4973 Cyan4973 deleted the fix3169 branch January 13, 2023 04:28
@Cyan4973 Cyan4973 mentioned this pull request Feb 9, 2023
neo-technology-build-agent pushed a commit to neo4j/neo4j that referenced this pull request Nov 21, 2025

tldr; `Dumper` does not truncate archives when run with
`--overwrite-destination`, and Zstd throws you an 'unknown frame
descriptor' error for it.

In facebook/zstd#3175, released in Zstd 1.5.4, a
new error was introduced for the case where the stream decoder has finished
decoding a frame. If there are more bytes in the stream, it assumes they
must begin a new frame; if they do not, it determines that the archive is
corrupt and fails with `Unknown frame descriptor`.

Before this error was added, Zstd would quietly ignore that the stream
did not contain the required marker, `ZSTD_MAGICNUMBER`, or one of the
16 `ZSTD_MAGIC_SKIPPABLE` headers, and would happily keep consuming the contents.
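
For reference, this is roughly the check the decoder performs on the first 4 bytes; the helper below is a hypothetical sketch (not code from zstd or neo4j), using the magic-number constants exposed by `zstd.h`:

```c
#include <zstd.h>   /* ZSTD_MAGICNUMBER, ZSTD_MAGIC_SKIPPABLE_START, ZSTD_MAGIC_SKIPPABLE_MASK */

/* Hypothetical helper: could these 4 bytes begin a zstd frame,
 * either a regular frame or a skippable one? */
static int could_start_frame(const unsigned char buf[4])
{
    /* zstd writes the magic number little-endian. */
    unsigned const magic = (unsigned)buf[0]
                         | ((unsigned)buf[1] << 8)
                         | ((unsigned)buf[2] << 16)
                         | ((unsigned)buf[3] << 24);

    if (magic == ZSTD_MAGICNUMBER)
        return 1;   /* regular compressed frame */
    if ((magic & ZSTD_MAGIC_SKIPPABLE_MASK) == ZSTD_MAGIC_SKIPPABLE_START)
        return 1;   /* one of the 16 skippable-frame IDs (0x184D2A50..0x184D2A5F) */
    return 0;       /* anything else: "unknown frame descriptor" territory */
}
```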

When loading dumps, this can happen as the `TarArchiveInputStream`
reaches the `EOF` marker in the TAR archive (two 512-byte blocks of zeroes).
The stream will then attempt to skip to the end of the file. As it does
so, it pulls more data from the `ZstdInputStream`, which will attempt to
decode a new frame from the trash data.

In a correct archive, there should not be more data after the EOF
marker, but due to an unfortunate bug, our archives can contain more
data under a special circumstance, viz. when running the `Dumper` with
the `--overwrite-destination` flag.

The root cause is found in the `DefaultFileSystemAbstraction`, where the
`openAsOutputStream` method does not truncate files when they are opened
with `append=false`, despite its documentation claiming that it should
do so. Instead, it just opens them for writing at file offset 0, meaning
that if the original dump was larger than its successor, the successor
will have trash data trailing its valid Zstd frame.
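
The same failure mode in miniature, sketched in C rather than the Java involved (the function names here are made up for illustration): opening an existing file for writing without truncation leaves the old tail in place, whereas an explicit truncate flag removes it.

```c
#include <fcntl.h>

/* Without a truncate flag, writing starts at offset 0 but any bytes past the
 * new end survive, which is how a smaller dump ends up with trailing trash
 * after its valid Zstd frame. */
int open_for_overwrite_no_truncate(const char* path)
{
    return open(path, O_WRONLY | O_CREAT, 0644);
}

/* Adding O_TRUNC discards the previous contents first, analogous to what
 * java.nio.file.Files.newOutputStream does by default. */
int open_for_overwrite_truncate(const char* path)
{
    return open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
}
```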

As far as I can tell, this issue has been present in Neo4j since 5.19.
It seems that the issue was introduced in the refactoring done in
neo-technology/neo4j#23520, which changed the
`Files.newOutputStream` (which does truncate) to `fs.openAsOutputStream`
(which does not).

Cherry-picks:
neo-technology-build-agent pushed a commit to neo4j/neo4j that referenced this pull request Dec 19, 2025

Development

Successfully merging this pull request may close these issues.

stream decompression: check 1~4 bytes Magic Number

3 participants