**Code Review**

**Summary**

This PR optimizes RLE encoding with three main changes:
**P0 Issues**

*Removing the issue #4429 workaround without addressing the root cause.* The PR removes the 2048 limit workaround for issue #4429 (nested structs causing …). Simply changing …
**P1 Issues**

*Size estimation may be too conservative.*

```rust
let estimated_pairs = (run_count + (num_values / 255)).min(num_values);
```

*Zero run length validation only in decoder.* The decoder now validates:

```rust
if length == 0 {
    return Err(Error::InvalidInput { ... });
}
```

While defensive, this suggests zero-length runs could exist in the encoded data. If the encoder guarantees this never happens, consider an assertion/debug check instead. If it can happen with malformed data, this is appropriate.

**Minor Notes**
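To make the two review points above concrete, here is a minimal, self-contained sketch of an RLE codec. It is not the PR's actual code: the names (`encode_rle`, `decode_rle`) and the `(u8, u8)` pair layout are hypothetical, assuming run lengths are stored in a `u8`. That assumption is what the `num_values / 255` term in the size estimate accounts for (long runs must be split into chunks of at most 255), and it shows why the zero-length check belongs in the decoder: a correct encoder never emits a zero-length run, but malformed bytes can.

```rust
// Hypothetical sketch, not the PR's API. Assumes u8 run lengths,
// so a run longer than 255 is split into multiple pairs.

/// Encode `values` as (run_length, value) pairs, capping each run at 255.
fn encode_rle(values: &[u8]) -> Vec<(u8, u8)> {
    let mut pairs = Vec::new();
    let mut iter = values.iter();
    let Some(&first) = iter.next() else { return pairs };
    let (mut current, mut len) = (first, 1u32);
    for &v in iter {
        if v == current && len < 255 {
            len += 1;
        } else {
            pairs.push((len as u8, current));
            current = v;
            len = 1;
        }
    }
    pairs.push((len as u8, current));
    pairs
}

/// Decode pairs back into values, rejecting malformed zero-length runs.
fn decode_rle(pairs: &[(u8, u8)]) -> Result<Vec<u8>, String> {
    let mut out = Vec::new();
    for &(length, value) in pairs {
        // The encoder above can never produce length == 0, but untrusted
        // encoded bytes could contain it, so the decoder validates.
        if length == 0 {
            return Err("zero-length run in RLE data".to_string());
        }
        out.extend(std::iter::repeat(value).take(length as usize));
    }
    Ok(out)
}

fn main() {
    // A run of 300 identical values is split into 255 + 45.
    let mut data = vec![7u8; 300];
    data.push(9);
    let pairs = encode_rle(&data);
    assert_eq!(pairs, vec![(255, 7), (45, 7), (1, 9)]);
    assert_eq!(decode_rle(&pairs).unwrap(), data);
    // Malformed input: a zero-length run is rejected, not silently skipped.
    assert!(decode_rle(&[(0, 1)]).is_err());
    println!("ok");
}
```

Under these assumptions, each run contributes one pair plus one extra pair per 255 values it spans, which is why `run_count + num_values / 255` is a safe (if conservative) upper bound, clamped by `min(num_values)` since a pair never encodes fewer than one value.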
**PR Description**

This PR will optimize the RLE implementation.

---

**Parts of this PR were drafted with assistance from Codex (with `gpt-5.2`) and fully reviewed and edited by me. I take full responsibility for all changes.**