
consenus: Implement LLMQ_100_67 quorums #3844

Merged
xdustinface merged 12 commits into dashpay:develop from UdjinM6:platformquorum on Dec 9, 2020

Conversation


@UdjinM6 UdjinM6 commented Dec 1, 2020

Implements #3729

Re-uses DEPLOYMENT_V17 as an activation mechanism.
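As a rough illustration of what re-using an existing deployment for activation means, the gate boils down to a per-type check like the sketch below. The function names and the plain activation-height comparison are simplifications for this example; Dash Core's real logic goes through the version-bits deployment state.

```cpp
#include <cassert>

// Illustrative sketch, not Dash Core's actual API: the real check uses
// the version-bits state of DEPLOYMENT_V17, not a bare height compare.
enum class LLMQType { LLMQ_50_60, LLMQ_100_67 };

bool IsV17Active(int nHeight, int nV17ActivationHeight)
{
    return nHeight >= nV17ActivationHeight;
}

bool IsQuorumTypeEnabled(LLMQType type, int nHeight, int nV17ActivationHeight)
{
    if (type == LLMQType::LLMQ_100_67) {
        // The new quorum type piggybacks on the existing v17 deployment,
        // so no extra fork bit is needed.
        return IsV17Active(nHeight, nV17ActivationHeight);
    }
    return true; // pre-existing quorum types stay enabled
}
```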

@UdjinM6 UdjinM6 added this to the 17 milestone Dec 1, 2020
@UdjinM6 UdjinM6 force-pushed the platformquorum branch 6 times, most recently from 10b9db5 to 9ec9dbe Compare December 2, 2020 20:42
@UdjinM6 UdjinM6 marked this pull request as ready for review December 3, 2020 10:58
@xdustinface xdustinface changed the title consenus: Implement LLMQ_100_70 quorums consenus: Implement LLMQ_100_67 quorums Dec 3, 2020
@xdustinface

See suggestions-3844 for some renaming ideas. Feel free to pick only what you like 😄 It's nothing that would hold me up from giving a utACK.

My thoughts about the llmq_type_new to llmq_type_v17 rename: if we add another one, it's not the new one anymore haha.

@UdjinM6
Author

UdjinM6 commented Dec 4, 2020

Picked some of your suggestions. How about de92158 instead of the rest of them?


@xdustinface xdustinface left a comment


How about de92158 instead of the rest of them?

Okay yeah agree, the term "new" might make sense there if we add another LLMQ at some point :D

There is one thing though that I'm not totally sure about yet: will things work well if we have all members of 100-member LLMQs connected (Spork21 is currently active for this new LLMQ type)?

@codablock You might have tested that? Can you share your thoughts here also?

Comment thread src/chainparams.cpp
Comment on lines +301 to +302
.keepOldConnections = 25,
.recoveryMembers = 50,
Member


We need to put a bit more thought into these values and figure out what good values are.
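For context, the two fields under discussion sit in a per-LLMQ parameter block roughly like the following sketch. The `keepOldConnections`/`recoveryMembers` values are the diff lines quoted above; the other fields and values (`size`, `threshold`, and the `minSize` of 80 proposed later in this thread) are filled in from the discussion, and the struct layout itself is illustrative rather than a copy of the real one.

```cpp
#include <cassert>

// Illustrative subset of a per-LLMQ parameter block (not the exact
// Dash Core struct).
struct LLMQParams {
    int size;                // total number of quorum members
    int minSize;             // minimum valid members for a usable quorum
    int threshold;           // signature shares needed to recover a signature
    int keepOldConnections;  // how many old quorums members keep connections to
    int recoveryMembers;     // members asked to participate in sig recovery
};

// Values for the new 100-member quorum as discussed in this thread.
const LLMQParams llmq_100_67 = {
    /*size=*/100,
    /*minSize=*/80,
    /*threshold=*/67,
    /*keepOldConnections=*/25,
    /*recoveryMembers=*/50,
};
```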


Comment thread src/llmq/quorums_utils.cpp Outdated
Comment thread src/rpc/rpcquorums.cpp Outdated
…uce GetLLMQParams

Signed-off-by: pasta <pasta@dashboost.org>
@PastaPastaPasta
Member

Just talked with @QuantumExplorer, due to modifications he is making in tenderdash, there is no longer a need for a high minimum size of 90 (no longer using a dynamic threshold), so I propose we drop that to 80.
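The practical effect of `minSize` versus `threshold` can be sketched as follows (hypothetical helper names; the real checks live in the DKG and signing code):

```cpp
#include <cassert>

// A DKG result is only usable if enough members survived the session.
bool IsQuorumLargeEnough(int nValidMembers, int nMinSize)
{
    return nValidMembers >= nMinSize;
}

// A threshold signature can be recovered once `threshold` shares exist.
bool CanRecoverSig(int nShares, int nThreshold)
{
    return nShares >= nThreshold;
}
```

With the proposed `minSize` of 80, a 100-member quorum tolerates up to 20 members failing DKG; with the earlier minimum of 90 it tolerated only 10.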

Comment thread src/chainparams.cpp Outdated
Co-authored-by: PastaPastaPasta <6443210+PastaPastaPasta@users.noreply.github.com>
Comment thread src/llmq/quorums_utils.cpp
@PastaPastaPasta
Member

What is the behavior of this quorum in regards to Concentrated Recovery?

@UdjinM6
Author

UdjinM6 commented Dec 9, 2020

There are no changes in CLLMQUtils::IsAllMembersConnectedEnabled atm, so it's going to participate just like the 50_60 quorum does. Should I exclude it? (Can't find anything about it in the original issue.)
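The exclusion being discussed (and which the final commit list shows was ultimately done as "Exclude LLMQ_100_67 from Concentrated Recovery") amounts to a per-type guard along these lines. This is a sketch, not the actual patch: the spork check is reduced to a plain boolean parameter.

```cpp
#include <cassert>

enum class LLMQType { LLMQ_50_60, LLMQ_100_67 };

// Sketch: gate the "all members connected" behaviour per quorum type,
// so a 100-member quorum doesn't force ~100 intra-quorum connections
// on every one of its members.
bool IsAllMembersConnectedEnabled(LLMQType type, bool fSpork21Active)
{
    if (type == LLMQType::LLMQ_100_67) {
        return false; // excluded regardless of the spork state
    }
    return fSpork21Active;
}
```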

@PastaPastaPasta
Member

It wasn't mentioned in the original issue... I'm not sure if it's a good idea or not. I guess it would depend on whether concentrated recovery / MNs could handle another massive chunk of connections...

Comment thread test/functional/feature_new_quorum_type_activation.py Outdated
Co-authored-by: PastaPastaPasta <6443210+PastaPastaPasta@users.noreply.github.com>
Member

@PastaPastaPasta PastaPastaPasta left a comment


LGTM, utACK, at this point waiting on tests


@xdustinface xdustinface left a comment


utACK

@xdustinface xdustinface merged commit cf64339 into dashpay:develop Dec 9, 2020
xdustinface added a commit to xdustinface/dash that referenced this pull request Dec 12, 2020
Current `develop` tests fail. This was basically introduced by dashpay#3844, but it didn't come up before dashpay#3853 because the `v17` fork wasn't activated in `feature_llmq_dkgerrors.py`.

After dashpay#3853, `dip0008` activation takes [200 blocks](dashpay@b95cf01#diff-4a04bc0b355c780033960e8c261ee9b6d3c452897e1dcd88a15d272512266c76R539), whereas it was previously activated after [10 blocks](dashpay@b95cf01#diff-b92fa0fafafa27172736ebc88f9f9b658b1160caca512a318eefb7d93d22bf3cL18) in `feature_llmq_dkgerrors.py`. Now, with the 200 blocks, `v17` gets activated during the test, which leads to MN1 and MN2 banning MN0 because it lies in the DKGs of `LLMQ_TEST` and `LLMQ_TEST_V17`.

There are other ways to solve it, like enabling `dip0008` earlier or `v17` later, but IMO it's better anyway to restrict `ShouldSimulateError` to only trigger for `LLMQ_TEST`.
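The fix described in this commit message boils down to a guard on the quorum type. The sketch below uses illustrative signatures (the source notes the check was ultimately moved into `CDKGSession`, and the real simulation trigger isn't a plain boolean):

```cpp
#include <cassert>

enum class LLMQType { LLMQ_TEST, LLMQ_TEST_V17 };

// Sketch of the restriction: only the plain LLMQ_TEST quorum ever
// simulates DKG errors, so the v17 test quorum can't get its members
// banned for the deliberately faked misbehaviour.
bool ShouldSimulateError(LLMQType quorumType, bool fSimulationRequested)
{
    if (quorumType != LLMQType::LLMQ_TEST) {
        return false;
    }
    return fSimulationRequested;
}
```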
UdjinM6 added a commit that referenced this pull request Dec 14, 2020
…3871)

* llmq: Restrict `ShouldSimulateError` to trigger for LLMQ_TEST only

Current `develop` tests fail. This was basically introduced by #3844, but it didn't come up before #3853 because the `v17` fork wasn't activated in `feature_llmq_dkgerrors.py`.

After #3853, `dip0008` activation takes [200 blocks](b95cf01#diff-4a04bc0b355c780033960e8c261ee9b6d3c452897e1dcd88a15d272512266c76R539), whereas it was previously activated after [10 blocks](b95cf01#diff-b92fa0fafafa27172736ebc88f9f9b658b1160caca512a318eefb7d93d22bf3cL18) in `feature_llmq_dkgerrors.py`. Now, with the 200 blocks, `v17` gets activated during the test, which leads to MN1 and MN2 banning MN0 because it lies in the DKGs of `LLMQ_TEST` and `LLMQ_TEST_V17`.

There are other ways to solve it, like enabling `dip0008` earlier or `v17` later, but IMO it's better anyway to restrict `ShouldSimulateError` to only trigger for `LLMQ_TEST`.

* Revert "llmq: Restrict `ShouldSimulateError` to trigger for LLMQ_TEST only"

This reverts commit ec42d86.

* llmq: Restrict `ShouldSimulateError` to trigger for LLMQ_TEST only (alternative)

Move ShouldSimulateError into CDKGSession

Co-authored-by: UdjinM6 <UdjinM6@users.noreply.github.com>
@UdjinM6 UdjinM6 deleted the platformquorum branch July 1, 2021 21:47
gades pushed a commit to cosanta/cosanta-core that referenced this pull request Mar 12, 2022
* Add LLMQ_100_67 quorums

* Re-use DEPLOYMENT_V17 bit to activate LLMQ_100_67 quorums

* Add LLMQ_TEST_NEW quorum and test its activation

* Tweak mine_quorum to work correctly with multiple quorum types

And to avoid a potentially endless "while" loop

* llmq: Rename IsQuorumTypeEnabledAtBlock -> IsQuorumTypeEnabled

* chainparams|test: Rename llmq_test_new -> llmq_test_v17

* chainparams|consensus|llmq: Rename LLMQ_TEST_NEW -> LLMQ_TEST_V17

* Tweak a few strings and the name of the test

* llmq: Make GetEnabledQuorumTypes return a vector of LLMQTypes, introduce GetLLMQParams

Signed-off-by: pasta <pasta@dashboost.org>

* Tweak minSize

Co-authored-by: PastaPastaPasta <6443210+PastaPastaPasta@users.noreply.github.com>

* Exclude LLMQ_100_67 from Concentrated Recovery

* Update test/functional/feature_new_quorum_type_activation.py

Co-authored-by: PastaPastaPasta <6443210+PastaPastaPasta@users.noreply.github.com>

Co-authored-by: xdustinface <xdustinfacex@gmail.com>
Co-authored-by: pasta <pasta@dashboost.org>
Co-authored-by: PastaPastaPasta <6443210+PastaPastaPasta@users.noreply.github.com>
gades pushed a commit to cosanta/cosanta-core that referenced this pull request Apr 1, 2022
…ashpay#3871)

* llmq: Restrict `ShouldSimulateError` to trigger for LLMQ_TEST only

Current `develop` tests fail. This was basically introduced by dashpay#3844, but it didn't come up before dashpay#3853 because the `v17` fork wasn't activated in `feature_llmq_dkgerrors.py`.

After dashpay#3853, `dip0008` activation takes [200 blocks](dashpay@b95cf01#diff-4a04bc0b355c780033960e8c261ee9b6d3c452897e1dcd88a15d272512266c76R539), whereas it was previously activated after [10 blocks](dashpay@b95cf01#diff-b92fa0fafafa27172736ebc88f9f9b658b1160caca512a318eefb7d93d22bf3cL18) in `feature_llmq_dkgerrors.py`. Now, with the 200 blocks, `v17` gets activated during the test, which leads to MN1 and MN2 banning MN0 because it lies in the DKGs of `LLMQ_TEST` and `LLMQ_TEST_V17`.

There are other ways to solve it, like enabling `dip0008` earlier or `v17` later, but IMO it's better anyway to restrict `ShouldSimulateError` to only trigger for `LLMQ_TEST`.

* Revert "llmq: Restrict `ShouldSimulateError` to trigger for LLMQ_TEST only"

This reverts commit ec42d86.

* llmq: Restrict `ShouldSimulateError` to trigger for LLMQ_TEST only (alternative)

Move ShouldSimulateError into CDKGSession

Co-authored-by: UdjinM6 <UdjinM6@users.noreply.github.com>