feat: batch address trees#18

Merged
sergeytimoshin merged 33 commits into jorrit/feat-add-test from sergey/batch-address-updates
May 3, 2025

Conversation


@sergeytimoshin sergeytimoshin commented Apr 3, 2025

Overview

This PR adds support for batch address updates and introduces a new API endpoint, getBatchAddressUpdateInfo.

Changes:

API

  • NEW: getBatchAddressUpdateInfo endpoint.
    Retrieves addresses pending insertion from the new address queue, along with necessary proofs and context for batch updates.

Request:

tree: Hash
batch_size: u16

Response:

start_index: u64,
addresses: Vec<AddressQueueIndex>,
non_inclusion_proofs: Vec<MerkleContextWithNewAddressProof>,
subtrees: Vec<[u8; 32]>,
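The request/response shapes above can be sketched as plain Rust types. This is an illustrative sketch only: the field layouts of AddressQueueIndex and MerkleContextWithNewAddressProof are assumptions, not the actual API definitions.

```rust
// Hedged sketch of the getBatchAddressUpdateInfo request/response described
// above. The inner layouts of AddressQueueIndex and
// MerkleContextWithNewAddressProof are assumptions.

pub type Hash = [u8; 32];

pub struct GetBatchAddressUpdateInfoRequest {
    pub tree: Hash,      // pubkey of the batched address tree
    pub batch_size: u16, // max number of queued addresses to return
}

// One queued address together with its position in the address queue
// (assumed layout).
pub struct AddressQueueIndex {
    pub address: Hash,
    pub queue_index: u64,
}

// Non-inclusion proof context for one new address (assumed layout).
pub struct MerkleContextWithNewAddressProof {
    pub root: Hash,
    pub proof: Vec<Hash>,
}

pub struct GetBatchAddressUpdateInfoResponse {
    pub start_index: u64,
    pub addresses: Vec<AddressQueueIndex>,
    pub non_inclusion_proofs: Vec<MerkleContextWithNewAddressProof>,
    pub subtrees: Vec<Hash>,
}
```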

DB

Added address_queues table, corresponding models & migrations.
Table structure:

pub address: Vec<u8>,
pub tree: Vec<u8>,
pub queue_index: i64,
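As a minimal sketch of how the endpoint consumes this table, the helper below filters rows for one tree, orders them by ascending queue_index, and caps the result at a batch size. This is illustrative only: the real queries go through the project's ORM models and migrations, and `next_batch` is a hypothetical name.

```rust
// Illustrative in-memory model of an address_queues row; the real table is
// created by migrations and accessed via ORM models.
#[derive(Clone, Debug, PartialEq)]
pub struct AddressQueueRow {
    pub address: Vec<u8>,
    pub tree: Vec<u8>,
    pub queue_index: i64,
}

// Hypothetical helper: rows for one tree, ascending queue_index, capped at
// batch_size -- the ordering a batch-update endpoint would rely on.
pub fn next_batch(rows: &[AddressQueueRow], tree: &[u8], batch_size: usize) -> Vec<AddressQueueRow> {
    let mut batch: Vec<AddressQueueRow> =
        rows.iter().filter(|r| r.tree == tree).cloned().collect();
    batch.sort_by_key(|r| r.queue_index);
    batch.truncate(batch_size);
    batch
}
```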

Events

Added the PublicTransactionEventV2 structure and its parsing logic; the event contains MerkleTreeSequenceNumberV2(tree_pubkey, queue_pubkey, tree_type, seq).
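A hedged sketch of the V2 sequence-number tuple named above; only the four fields listed in this PR description are shown, and the concrete representation of tree_type is an assumption.

```rust
// Sketch of MerkleTreeSequenceNumberV2 as described above; the concrete
// type of tree_type is an assumption here.
pub struct MerkleTreeSequenceNumberV2 {
    pub tree_pubkey: [u8; 32],  // batched address tree account
    pub queue_pubkey: [u8; 32], // associated address queue account
    pub tree_type: u64,         // assumed representation
    pub seq: u64,               // sequence number within the queue
}
```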

Tests

tests/integration_tests/batched_state_tree_tests.rs
Contains new test_batched_address_transactions test.

  • 50 addresses appended to the queue
  • 5 batch address append operations invoked with batch_size=10

Asserts:

  • getBatchAddressUpdateInfo returns a non-empty queue after the initial 50 txs are indexed, and the returned addresses match the expected ones.
  • The final root matches the reference tree after the batch address append txs are applied.
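The batching flow the test exercises can be sketched abstractly: 50 queued items drained by 5 append operations with batch_size = 10. `drain_queue` below is a hypothetical stand-in for the batch address append instruction, not the real one.

```rust
// Hypothetical stand-in for one batch address append: take up to
// batch_size addresses off the front of the queue.
pub fn drain_queue(queue: &mut Vec<u64>, batch_size: usize) -> Vec<u64> {
    let n = batch_size.min(queue.len());
    queue.drain(..n).collect()
}
```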

TODO:

  • Add a test for the case where 2 addresses are created in a single ix.

@sergeytimoshin sergeytimoshin marked this pull request as ready for review April 3, 2025 01:35
@sergeytimoshin sergeytimoshin force-pushed the sergey/batch-address-updates branch from b089102 to db382f7 on April 6, 2025 12:07
@sergeytimoshin sergeytimoshin force-pushed the sergey/batch-address-updates branch from ed8b8f0 to c24977f on April 8, 2025 18:55
Comment on lines +123 to +136
for (new_address, seq) in event
    .new_addresses
    .iter()
    .zip(event.address_sequence_numbers.iter())
{
    let tree_info = TreeInfo::get(&new_address.mt_pubkey.to_string())
        .ok_or(IngesterError::ParserError("Missing queue".to_string()))?
        .clone();
    state_update_event.addresses.push(AddressQueueUpdate {
        tree: tree_info.tree.into(),
        address: new_address.address,
        queue_index: seq.seq,
    });
}

@ananas-block ananas-block Apr 10, 2025


Suggested change (use the event's mt_pubkey directly instead of looking it up via TreeInfo):

for (new_address, seq) in event
    .new_addresses
    .iter()
    .zip(event.address_sequence_numbers.iter())
{
    state_update_event.addresses.push(AddressQueueUpdate {
        tree: new_address.mt_pubkey.into(),
        address: new_address.address,
        queue_index: seq.seq,
    });
}

}

pub const MAX_HEIGHT: usize = 32;
pub async fn get_subtrees(


shouldn't this be in persisted_indexed_merkle_tree?

Author


I think it makes sense to leave it here: persisted_state_tree stores nodes for indexed trees, while persisted_indexed_merkle_tree works more like an indexed array.


@ananas-block ananas-block left a comment


Thanks!
TL;DR on comments:

  1. Are we sure queue indices are set correctly if we insert 2 addresses into the same tree in the same tx?
  2. Found some dead code in parse_public_transaction_event.
  3. For PublicTransactionEvent::V1, maybe we can get away with a smaller diff by implementing From.
  4. Final root assert in the test.
  5. A couple of smaller improvements.

@@ -79,18 +77,15 @@ solana-program = "1.18.0"
solana-sdk = "1.18.0"
Author


Try to remove

…d address is in the AddressQueue but not in tree yet, we should return error. For V1 trees we still return non-inclusion proof, because we don't have information about V1 queue.

* Add offset as request parameter to get_batch_address_update_info.
* Various cleanups
@sergeytimoshin sergeytimoshin changed the title from "feat: batch address trees support" to "feat: batch address trees" Apr 30, 2025
light-compressed-account = { git = "https://github.com/Lightprotocol/light-protocol", rev = "368f9f08272db78c74b2ade1a1c2fead27dd0a96" }
light-concurrent-merkle-tree = { git = "https://github.com/Lightprotocol/light-protocol", rev = "368f9f08272db78c74b2ade1a1c2fead27dd0a96" }
light-hasher = "2.0.0"
light-hasher = { git = "https://github.com/Lightprotocol/light-protocol", rev = "368f9f08272db78c74b2ade1a1c2fead27dd0a96" }

@ananas-block ananas-block May 2, 2025


why was this necessary?
(I didn't want to release light-hasher again.)

Author


light-merkle-tree-reference has a light-hasher dependency as well, so there were multiple different versions of the light_hasher crate in the dependency graph.
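A sketch of the resulting dependency pinning, based on the lines in this diff: every light-* crate points at the same light-protocol git revision so Cargo resolves a single light_hasher. The github.com URL is assumed to be the canonical repository location.

```toml
# Pin all light-* crates to one git revision so the dependency graph
# resolves a single light_hasher (rev taken from this PR's diff).
light-compressed-account = { git = "https://github.com/Lightprotocol/light-protocol", rev = "368f9f08272db78c74b2ade1a1c2fead27dd0a96" }
light-concurrent-merkle-tree = { git = "https://github.com/Lightprotocol/light-protocol", rev = "368f9f08272db78c74b2ade1a1c2fead27dd0a96" }
light-hasher = { git = "https://github.com/Lightprotocol/light-protocol", rev = "368f9f08272db78c74b2ade1a1c2fead27dd0a96" }
```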

@sergeytimoshin sergeytimoshin merged commit 61627ae into jorrit/feat-add-test May 3, 2025
