feat: add fee filter for tree discovery #2327
📝 Walkthrough

This change replaces the broadcast channel-based tree discovery mechanism in EpochManager with a periodic polling approach that queries the RPC at configured intervals. Correspondingly, tree processing functions are updated to handle optional results, allowing selective filtering of trees based on network fees and other criteria.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant EpochManager
    participant Timer
    participant RPC
    participant TreeDataSync
    participant Storage
    loop Every tree_discovery_interval_seconds
        EpochManager->>Timer: Wait for interval
        Timer-->>EpochManager: Interval elapsed
        EpochManager->>RPC: Fetch trees from chain
        RPC-->>EpochManager: Tree accounts + metadata
        par Process fetched trees
            EpochManager->>TreeDataSync: process_state_account(account)
            TreeDataSync-->>EpochManager: Result<Option<TreeAccounts>>
        and
            EpochManager->>TreeDataSync: process_address_account(account)
            TreeDataSync-->>EpochManager: Result<Option<TreeAccounts>>
        end
        alt Tree is new & valid
            EpochManager->>Storage: add_new_tree(accounts)
            Storage-->>EpochManager: Success/Error
        else Tree is None (e.g., network_fee == 0) or error
            EpochManager->>EpochManager: Log & skip tree
        end
    end
```
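The `Result<Option<TreeAccounts>>` shape in the diagram can be sketched as follows. This is a simplified illustration: `TreeAccounts`, `StateAccount`, and the zero-fee rule are stand-ins, not forester's actual types or full filtering criteria.

```rust
// Hypothetical, simplified sketch of the fee filter described in the walkthrough.
// `TreeAccounts` and `StateAccount` are stand-ins, not the real forester types.

#[derive(Debug, PartialEq)]
struct TreeAccounts {
    pubkey: String,
}

struct StateAccount {
    pubkey: String,
    network_fee: u64,
}

/// Ok(None) means "valid account, but skip this tree" (e.g. zero network fee),
/// while Err still signals a real processing failure.
fn process_state_account(account: &StateAccount) -> Result<Option<TreeAccounts>, String> {
    if account.network_fee == 0 {
        // Not an error: the account deserialized fine but is not worth foresting.
        return Ok(None);
    }
    Ok(Some(TreeAccounts {
        pubkey: account.pubkey.clone(),
    }))
}
```

With this shape, the discovery loop can treat `Ok(None)` as the log-and-skip case shown in the diagram's `alt` branch, keeping `Err` reserved for genuine failures.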
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
forester/src/tree_data_sync.rs (1)
83-92: 🧹 Nitpick | 🔵 Trivial
Error swallowing during batch account processing.
When `process_batch_state_account` returns an error, you silently fall through to try `process_batch_address_account`. This is intentional for discriminator mismatches (a BatchedMerkleTree could be either state or address), but actual deserialization errors are also swallowed. Consider logging at trace level when both attempts fail to aid debugging.

```rust
if let Ok(Some(tree)) = process_batch_state_account(&mut account, pubkey) {
    all_trees.push(tree);
} else if let Ok(Some(tree)) = process_batch_address_account(&mut account, pubkey) {
    all_trees.push(tree);
} else {
    trace!(
        event = "batch_tree_processing_failed",
        pubkey = %pubkey,
        "Failed to process batch tree as either state or address"
    );
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@forester/src/tree_data_sync.rs` around lines 83 - 92, The current match arm for batched_result iterates accounts and silently ignores failures when process_batch_state_account or process_batch_address_account return Err; update the logic in the loop that handles each (pubkey, account) so that after attempting process_batch_state_account(pubkey) and process_batch_address_account(pubkey) you emit a trace-level log if neither returned Ok(Some(_)), including the pubkey and any error details from both attempts (or at least a message indicating both attempts failed) so deserialization errors are visible; change the block around process_batch_state_account and process_batch_address_account to capture their Results and log a trace event on failure before continuing to push to all_trees.
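A runnable sketch of the error-capturing variant this prompt asks for. The stub functions and discriminator bytes stand in for the real `tree_data_sync` helpers, and `eprintln!` stands in for `tracing::trace!`; the point is that both `Result`s are kept in scope so their error details survive when neither attempt yields a tree.

```rust
// Hypothetical sketch: capture both Results so deserialization errors surface
// instead of being swallowed. Stub types stand in for forester's real ones.
type Tree = &'static str;

// Stand-in: accepts accounts whose first byte is the "state" discriminator (1).
fn process_batch_state_account(raw: &[u8]) -> Result<Option<Tree>, String> {
    if raw.first() == Some(&1) {
        Ok(Some("state"))
    } else {
        Err("bad state discriminator".to_string())
    }
}

// Stand-in: accepts accounts whose first byte is the "address" discriminator (2).
fn process_batch_address_account(raw: &[u8]) -> Result<Option<Tree>, String> {
    if raw.first() == Some(&2) {
        Ok(Some("address"))
    } else {
        Err("bad address discriminator".to_string())
    }
}

fn classify(raw: &[u8], pubkey: &str, all_trees: &mut Vec<Tree>) {
    let state_res = process_batch_state_account(raw);
    if let Ok(Some(tree)) = &state_res {
        all_trees.push(*tree);
        return;
    }
    let address_res = process_batch_address_account(raw);
    if let Ok(Some(tree)) = &address_res {
        all_trees.push(*tree);
        return;
    }
    // Both attempts failed: log the captured errors instead of dropping them.
    eprintln!(
        "batch_tree_processing_failed pubkey={pubkey} \
         state_err={state_res:?} address_err={address_res:?}"
    );
}
```

Binding the results by reference (`&state_res`) keeps them available for the log line after the happy paths return early, which is the main structural difference from the original `if let Ok(Some(_)) = ...` chain.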
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@forester/src/epoch_manager.rs`:
- Around line 568-591: The tree discovery loop should apply exponential backoff
when fetch_trees fails to avoid tight retry/log loops; introduce a backoff
counter (e.g., tree_fetch_backoff_count) scoped with the loop in epoch_manager
(near where fetch_trees is called), increment it on each fetch_trees Err and
reset it to 0 on success, compute a delay (capped max) like
exponential_backoff(base, count) used by monitor_epochs, log the backoff
decision, and await tokio::time::sleep for that delay before continuing; ensure
you still call continue after sleeping and keep the existing warn! log that
reports the original error.
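The backoff the prompt describes could look like the helper below. This is a sketch under assumptions: the one-second base and 60-second cap are illustrative values, not forester's actual configuration, and the loop usage is shown only in comments.

```rust
use std::time::Duration;

// Illustrative cap, not forester's configured maximum.
const MAX_BACKOFF: Duration = Duration::from_secs(60);

/// Delay doubles with each consecutive failure and is capped, so transient
/// RPC errors don't produce a tight retry/log loop.
fn exponential_backoff(base: Duration, failure_count: u32) -> Duration {
    base.saturating_mul(2u32.saturating_pow(failure_count))
        .min(MAX_BACKOFF)
}

// Sketch of the discovery loop (names are illustrative):
//
// match fetch_trees(&rpc_url).await {
//     Ok(trees) => { backoff_count = 0; /* process trees */ }
//     Err(e) => {
//         warn!("tree fetch failed: {e}");
//         backoff_count += 1;
//         tokio::time::sleep(exponential_backoff(base, backoff_count)).await;
//         continue;
//     }
// }
```

Resetting the counter on success is what keeps a single transient failure from permanently inflating the polling interval.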
---
Outside diff comments:
In `@forester/src/tree_data_sync.rs`:
- Around line 83-92: The current match arm for batched_result iterates accounts
and silently ignores failures when process_batch_state_account or
process_batch_address_account return Err; update the logic in the loop that
handles each (pubkey, account) so that after attempting
process_batch_state_account(pubkey) and process_batch_address_account(pubkey)
you emit a trace-level log if neither returned Ok(Some(_)), including the pubkey
and any error details from both attempts (or at least a message indicating both
attempts failed) so deserialization errors are visible; change the block around
process_batch_state_account and process_batch_address_account to capture their
Results and log a trace event on failure before continuing to push to all_trees.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 06a26bbd-7083-4b8f-b298-6c0eb8c2dc81
⛔ Files ignored due to path filters (1)
external/photon is excluded by none and included by none
📒 Files selected for processing (2)
- forester/src/epoch_manager.rs
- forester/src/tree_data_sync.rs