The latest Buf updates on your PR. Results from workflow Buf / buf (pull_request).
Codecov Report ❌ — Coverage Diff:

@@            Coverage Diff            @@
##             main    #3081     +/-   ##
=========================================
  Coverage   59.02%   59.03%
=========================================
  Files        2065     2065
  Lines      169349   169405     +56
=========================================
+ Hits        99960   100005     +45
- Misses      60630    60641     +11
  Partials     8759     8759

Flags with carried forward coverage won't be shown.
|
func (r *txHashRing) Push(txHash common.Hash, blockNumber uint64, contractAddress common.Address) {
Nit: godocs are nice for exported functions.
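For illustration, a godoc comment for Push could look like the sketch below. The `Hash` and `Address` types here are stand-ins for `common.Hash` and `common.Address` from go-ethereum, and the ring body is simplified; only the comment convention (first word is the function name) is the point.

```go
package main

import "fmt"

// Hash and Address are stand-in types for common.Hash and common.Address.
type Hash [32]byte
type Address [20]byte

type entry struct {
	txHash      Hash
	blockNumber uint64
	contract    Address
}

type txHashRing struct {
	entries []entry
}

// Push records a transaction hash together with the block number and
// contract address it was observed in, so readers can later reconstruct
// lookup targets. Per Go convention, this godoc starts with the name of
// the exported function it documents.
func (r *txHashRing) Push(txHash Hash, blockNumber uint64, contractAddress Address) {
	r.entries = append(r.entries, entry{txHash, blockNumber, contractAddress})
}

func main() {
	var r txHashRing
	r.Push(Hash{1}, 42, Address{2})
	fmt.Println(len(r.entries))
}
```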
// Cryptosim passes its metrics as a read observer so cache hits/misses are measured
// at the cache wrapper, which is the only layer that can distinguish them reliably.
store, err := receipt.NewReceiptStoreWithReadObserver(storeCfg, nil, metrics)
If it would be useful to be able to intercept this information for the cache, we should discuss a potential API. It doesn't sound like a very tricky feature to add.
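As a starting point for that discussion, here is one possible shape for such an API. The names (`ReadResult`, `ReadObserver`, `ObserveReceiptRead`) are hypothetical, not the interface the receipt package actually exposes; the sketch just shows an observer invoked by the cache wrapper, the layer that can reliably distinguish hits from misses.

```go
package main

import "fmt"

// ReadResult classifies the outcome of a receipt read at the cache layer.
// These names are illustrative, not the package's real API.
type ReadResult int

const (
	CacheHit ReadResult = iota
	CacheMiss
	NotFound
)

// ReadObserver would be invoked by the cache wrapper on every receipt read.
type ReadObserver interface {
	ObserveReceiptRead(result ReadResult, durationMillis float64)
}

// countingObserver tallies outcomes, standing in for the metrics
// implementation that cryptosim would pass.
type countingObserver struct {
	counts map[ReadResult]int
}

func (o *countingObserver) ObserveReceiptRead(result ReadResult, durationMillis float64) {
	o.counts[result]++
}

func main() {
	obs := &countingObserver{counts: map[ReadResult]int{}}
	obs.ObserveReceiptRead(CacheHit, 0.2)
	obs.ObserveReceiptRead(CacheMiss, 1.5)
	fmt.Println(obs.counts[CacheHit], obs.counts[CacheMiss])
}
```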
func selectReceiptReadBlock(
	rng *rand.Rand,
Potential hotspot; consider using canned random. (rand.Rand appears in a bunch of places in this file, so the comment applies to all use sites.)
Note that if there are rand.Rand features you'd like to have that aren't in canned random, it should be fairly straightforward to add additional convenience methods to canned random that return whatever data type is convenient.
Force-pushed 38fddce to 9c1a5b6 (compare)
Add a lightweight pebble-backed index mapping tx_hash -> block_number that narrows GetReceiptByTxHash to the single parquet file containing the target block, instead of scanning all files. Falls back to a full scan for tx hashes not yet in the index.

- New TxIndex type with SetBatch, GetBlockNumber, PruneBefore, Close
- Store wires tx index into WriteReceipts, GetReceiptByTxHash, Close, SimulateCrash, and periodic pruning
- Reader gains GetReceiptByTxHashInBlock for targeted single-file query
- Unit tests for TxIndex ops and integration tests for Store/Reader
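One natural key/value layout for such an index is to use the 32-byte tx hash as the key and the block number as 8 big-endian bytes as the value. The sketch below shows only that encoding with the standard library; it is an assumption about the layout, and the real TxIndex lives behind pebble's batch and get APIs rather than these helper names.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeBlockNumber renders a block number as the 8-byte big-endian value
// a tx_hash -> block_number index entry could store. Illustrative names;
// the actual TxIndex encoding may differ.
func encodeBlockNumber(blockNumber uint64) []byte {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, blockNumber)
	return buf
}

// decodeBlockNumber reverses encodeBlockNumber, rejecting malformed values.
func decodeBlockNumber(value []byte) (uint64, error) {
	if len(value) != 8 {
		return 0, fmt.Errorf("tx index: bad value length %d", len(value))
	}
	return binary.BigEndian.Uint64(value), nil
}

func main() {
	v := encodeBlockNumber(123456)
	n, err := decodeBlockNumber(v)
	fmt.Println(n, err == nil)
}
```

Big-endian ordering keeps values byte-comparable in block order, which is convenient if pruning ever needs to scan by value.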
Extend cryptosim from a write-only to a mixed read/write workload: add concurrent receipt-by-hash and eth_getLogs-style log filter readers, deterministic synthetic tx hashes for stateless lookups, recency-biased block selection, read metrics/dashboard panels, a receipt read observer for cache hit/miss tracking, and clean ErrNotFound from the parquet store.

Made-with: Cursor
…t reporting

Made-with: Cursor
…degradation

Instruments the ledger cache layer to measure:
- FilterLogs cache scan duration (separate from backend)
- GetReceipt cache lookup duration (includes clone cost)
- Total log and receipt counts across cache chunks

Adds corresponding Grafana dashboard panels.

Made-with: Cursor
EstimatedReceiptCacheWindowBlocks (1000 blocks) overshoots the actual cache coverage after rotation (~500 blocks), causing ~70% of cache-mode log filter reads to miss and fall through to DuckDB.

Made-with: Cursor
…races

Made-with: Cursor
Made-with: Cursor
…index

Two performance improvements for duckdb-only reads:

1. GetLogs now skips parquet files whose block range is entirely before FromBlock, matching the existing ToBlock pruning. This prevents DuckDB from scanning irrelevant historical files.
2. Add a lightweight pebble-backed tx_hash -> block_number index that narrows GetReceiptByTxHash to the single parquet file containing the target block, instead of scanning all files. Falls back to a full scan for tx hashes not yet in the index.

Made-with: Cursor
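The file pruning in point 1 reduces to a simple interval overlap test: a parquet file can be skipped when its block range ends before FromBlock or starts after ToBlock. A minimal sketch, with `parquetFile` and `fileOverlapsRange` as assumed names rather than the store's real types:

```go
package main

import "fmt"

// parquetFile is an illustrative stand-in carrying only the block range
// metadata the pruning decision needs.
type parquetFile struct {
	MinBlock, MaxBlock uint64
}

// fileOverlapsRange reports whether the file's block range intersects
// [fromBlock, toBlock]. Files failing this test can be skipped entirely,
// so DuckDB never scans irrelevant historical files.
func fileOverlapsRange(f parquetFile, fromBlock, toBlock uint64) bool {
	return f.MaxBlock >= fromBlock && f.MinBlock <= toBlock
}

func main() {
	files := []parquetFile{{0, 99}, {100, 199}, {200, 299}}
	kept := 0
	for _, f := range files {
		if fileOverlapsRange(f, 150, 250) {
			kept++
		}
	}
	fmt.Println(kept)
}
```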
Made-with: Cursor
Force-pushed 7f2e2db to 90505eb (compare)
Reduce receipt read concurrency and fall back to the default commit cadence while keeping tx-index lookup disabled, so the non-indexed read experiment avoids overwhelming write throughput.

Made-with: Cursor
* main:
  Add receipt / log reads to cryptosim (#3081)
  persist blocks and FullCommitQCs in data layer via WAL (CON-231) (#3126)
  Update Changelog in prep to cut v6.4.1 (#3213)
  fix(sei-tendermint): resolve staticcheck warnings (#3207)
  Add historical state offload stream hook (#3183)
  feat: wire autobahn config propagation from top-level to GigaRouter (CON-232) (#3194)
Describe your changes and provide context
This PR extends the cryptosim receipt benchmark from a write-only workload into a mixed read/write workload that exercises the production receipt store path more realistically. It adds concurrent receipt readers to ReceiptStoreSimulator, uses deterministic synthetic tx hashes so readers can reconstruct lookup targets without storing per-tx state, and simulates both receipt-by-hash queries and eth_getLogs-style log filter reads with a configurable recency bias toward newer blocks.
To make that workload measurable, the PR adds receipt read metrics and dashboard panels for read latency, reads/sec, cache hit vs miss rate, found vs not found results, and log filter latency. It also adds an optional receipt read observer so cache hits/misses are recorded at the ledger-cache layer, updates the parquet receipt store to return ErrNotFound cleanly when there is no legacy KV fallback, and refreshes the sample receipt-store config to enable the new benchmark mode. Overall, this gives us a better way to evaluate receipt-store behavior under concurrent reads, writes, cache pressure, and pruning.
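The deterministic synthetic tx hash idea can be sketched as hashing the (blockNumber, txIndex) pair under a fixed domain tag, so any reader can recompute the hash for a known position without storing per-tx state. The derivation below is an assumption for illustration; the PR's actual SyntheticTxHash may encode different inputs.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// SyntheticTxHash derives a deterministic 32-byte tx hash from a block
// number and tx index alone. The domain tag and exact encoding here are
// illustrative assumptions, not the PR's implementation.
func SyntheticTxHash(blockNumber uint64, txIndex uint32) [32]byte {
	buf := make([]byte, 12)
	binary.BigEndian.PutUint64(buf[0:8], blockNumber)
	binary.BigEndian.PutUint32(buf[8:12], txIndex)

	h := sha256.New()
	h.Write([]byte("cryptosim-synthetic-tx")) // fixed domain tag
	h.Write(buf)

	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	a := SyntheticTxHash(42, 3)
	b := SyntheticTxHash(42, 3)
	c := SyntheticTxHash(42, 4)
	fmt.Println(a == b, a == c)
}
```

Because the hash is a pure function of its inputs, a reader that picks a recency-biased block and a tx index can reconstruct the exact lookup key the writer produced.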
Testing performed to validate your change
- Ran go test ./sei-db/ledger_db/receipt/...
- Ran go test ./sei-db/state_db/bench/cryptosim/...
- Added unit coverage for deterministic SyntheticTxHash generation
- Added unit coverage for receipt cache hit/miss observer reporting