
NDRS-98: add GossipTable#38

Merged
Fraser999 merged 5 commits into casper-network:master from Fraser999:NDRS-98-gossip-table
Jun 29, 2020

Conversation

@Fraser999
Contributor

This adds the GossipTable for use by any component which needs to perform gossiping.

It also updates the small_network component to take a count of target peers and a list of peers to exclude from consideration when choosing targets for a gossip message.
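The target-selection behaviour described above might look roughly like the following minimal sketch. The function and variable names here are illustrative stand-ins, not the PR's actual small_network API, and real code would pick peers at random rather than in order:

```rust
use std::collections::HashSet;

// Hypothetical sketch: pick up to `count` gossip targets from the connected
// peers, skipping any peer listed in `exclude`. (Illustrative only; the real
// implementation would randomize its choice.)
fn choose_gossip_targets<'a>(
    connected_peers: &'a [String],
    count: usize,
    exclude: &HashSet<String>,
) -> Vec<&'a str> {
    connected_peers
        .iter()
        .filter(|peer| !exclude.contains(*peer))
        .take(count)
        .map(String::as_str)
        .collect()
}

fn main() {
    let peers: Vec<String> = ["alice", "bob", "carol", "dave"]
        .iter()
        .map(|s| s.to_string())
        .collect();
    let exclude: HashSet<String> = ["bob".to_string()].into_iter().collect();

    let targets = choose_gossip_targets(&peers, 2, &exclude);
    assert_eq!(targets, vec!["alice", "carol"]);
    println!("{targets:?}");
}
```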

Contributor

@afck afck left a comment


Looks good!

I haven't reviewed the tests yet; will do that later today or early next week.

I'm still not entirely clear where GossipTables will be instantiated: Are there multiple instances, one owned by each component that needs it?

I'm also wondering whether for the most common of our DAG entries, like ballots and endorsements, this is overkill: They are tiny and also high-priority, so I wonder whether they should just be sent to every connected peer immediately as soon as we learn about them?
In particular, it seems high overhead to maintain all those sets and maps, and send responses to indicate whether or not the receiver already had that vertex.

```rust
/// Data IDs for which gossiping has been force-finished (likely due to detecting that the
/// data was not correct as per our current knowledge). Such data could later be deemed to
/// require gossiping again, so we retain the `State` part here in order to resume gossiping.
force_finished: HashMap<T, (State, Instant)>,
```
Contributor

Should it be paused or suspended then, since — as opposed to finished — it can be continued?

Contributor Author

Yes, I see your point. I'll go for paused as it's shorter.

```rust
let update = |state: &mut State| {
    if let Some(holder) = maybe_holder {
        let _ = state.holders.insert(holder);
    }
```
Contributor

Would this be equivalent?

```rust
state.holders.extend(maybe_holder);
```

Co-authored-by: Andreas Fackler <afck@users.noreply.github.com>
Contributor

@afck afck left a comment

Looks good, once the above questions are resolved. 👍

```rust
let actual: BTreeSet<_> = gossip_table
    .holders(data_id)
    .map(|itr| itr.collect())
    .unwrap_or_else(BTreeSet::new);
```
Contributor

I thought Clippy would complain, and demand map_or_else here. 😇

Contributor Author

You out-pedantic'ed Clippy - nice one! 😁

@Fraser999
Contributor Author

@afck

> I'm still not entirely clear where GossipTables will be instantiated: Are there multiple instances, one owned by each component that needs it?

Yes, maybe even more than one per component if required. I was imagining one per gossip-able type.

> I'm also wondering whether for the most common of our DAG entries, like ballots and endorsements, this is overkill: They are tiny and also high-priority, so I wonder whether they should just be sent to every connected peer immediately as soon as we learn about them?
> In particular, it seems high overhead to maintain all those sets and maps, and send responses to indicate whether or not the receiver already had that vertex.

Simple broadcasting might also not be viable, since it generates O(n^2) messages. It might be worth considering a different gossip protocol for cases of "small" data (e.g. https://zoo.cs.yale.edu/classes/cs426/2013/bib/karp00randomized.pdf) where responses aren't required. There'd still of course need to be some state held for every piece of data being gossiped - I don't imagine it'd be much different to this implementation in terms of resource consumption.

@Fraser999
Contributor Author

I have realised that I'm not currently accounting for managing the number of in-flight messages of the gossip protocol. The GossipTable should manage this too, I believe; I'll add that functionality in an upcoming PR.

@Fraser999 Fraser999 merged commit 722ee83 into casper-network:master Jun 29, 2020
@Fraser999 Fraser999 deleted the NDRS-98-gossip-table branch June 29, 2020 11:27
rafal-ch pushed a commit that referenced this pull request Sep 11, 2024
Co-authored-by: John Adler <adlerjohn@users.noreply.github.com>
rafal-ch pushed a commit that referenced this pull request Sep 11, 2024
zajko added a commit to zajko/casper-node that referenced this pull request Oct 13, 2025