Conversation
The latest Buf updates on your PR. Results from workflow Buf / buf (pull_request).
Codecov Report — additional details and impacted files:
@@ Coverage Diff @@
## main #3151 +/- ##
==========================================
+ Coverage 58.62% 58.67% +0.05%
==========================================
Files 2055 2055
Lines 168299 168425 +126
==========================================
+ Hits 98660 98821 +161
+ Misses 60865 60822 -43
- Partials 8774 8782 +8
Flags with carried forward coverage won't be shown.
// TODO: retry the handshake/replay if it fails ?
func (h *Handshaker) Handshake(ctx context.Context, appClient abci.Application) error {
-	res, err := appClient.Info(ctx, &proxy.RequestInfo)
+	res, err := appClient.Info(ctx, &version.RequestInfo)
Just curious: what's the difference between proxy.RequestInfo and version.RequestInfo?
This is the same global constant; I've just moved it into another, imo more relevant, package.
type GigaNodeAddr struct {
	Key      NodePublicKey
	HostPort tcp.HostPort
I remember in your p2p PR, one pubkey can have more than one IP:port, but guess we can only config one in config?
Multiple addrs per node key are supported there, because we have multiple sources of truth - every peer can declare whatever address they think is the correct one - the same node can be available under a private IP and a public IP, or the peer may simply be malicious (and lie about the correct IP). Here we trust our config. The situation will change though, once we support proxies for validators, in which case a single validator will be able to have multiple endpoints (but then the validator addresses will be discovered dynamically).
	return fmt.Errorf("App.InitChain(): %w", err)
}
var ok bool
next, ok = utils.SafeCast[atypes.GlobalBlockNumber](r.cfg.GenDoc.InitialHeight)
When would this happen? Does last==0 mean starting from genesis? Or it means we haven't committed anything?
It will happen if InitialHeight is negative, which is an invalid configuration anyway. This is just a defense-in-depth check against overflows.
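The overflow-checked conversion above can be sketched as follows. This is a hypothetical, non-generic stand-in for `utils.SafeCast` (the real signature and constraints may differ); it rejects values, such as a negative `InitialHeight`, that would not round-trip through the unsigned target type:

```go
package main

import "fmt"

// SafeCast reports whether an int64 can be represented as a uint64 block
// number; a negative input (e.g. an invalid InitialHeight) yields ok=false.
// Hypothetical sketch of the defense-in-depth check, not the PR's code.
func SafeCast(v int64) (uint64, bool) {
	if v < 0 {
		return 0, false
	}
	return uint64(v), true
}

func main() {
	if _, ok := SafeCast(-5); !ok {
		fmt.Println("rejected invalid (negative) initial height")
	}
	n, ok := SafeCast(1)
	fmt.Println(n, ok)
}
```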
hash := b.Header().Hash()
var proposerAddress types.Address
if vals := r.cfg.App.GetValidators(); len(vals) > 0 {
	// Deterministically select a proposer from the app's validator committee.
Is this just a placeholder for now? We will calculate the real proposer later right?
The autobahn committee is not currently related to the App committee in any sense. Even if it were the same at block 0, we do not support a dynamic committee rn, so it would diverge. The Application interface contract expects the proposer to belong to the App committee (although this relation is rather fragile, and only causes an error log in the sei-chain App), so we are just trying to adhere to the contract here for the sake of the PoC.
Eventually we could read the proposer from the CommitQC, however afaict we will need a new reward mechanism anyway, so the actual information about tipcut proposer might be no longer relevant to the App.
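One way to picture the placeholder behavior discussed above: deterministically derive a proposer from the block hash and the App's validator set, so every node picks the same (committee-member) address. This is a hypothetical sketch, with `pickProposer` and string validator addresses invented for illustration; it is not the PR's actual selection logic:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// pickProposer selects a validator from the App committee as a pure function
// of the block hash: hash the input and reduce modulo the committee size.
// Deterministic, so all nodes agree; hypothetical placeholder logic.
func pickProposer(vals []string, blockHash []byte) string {
	h := sha256.Sum256(blockHash)
	idx := binary.BigEndian.Uint64(h[:8]) % uint64(len(vals))
	return vals[idx]
}

func main() {
	vals := []string{"valA", "valB", "valC"}
	hash := []byte("block-1")
	// Same inputs always yield the same proposer, and it belongs to vals,
	// satisfying the Application contract mentioned above.
	fmt.Println(pickProposer(vals, hash) == pickProposer(vals, hash))
}
```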
}
resp, err := r.cfg.App.FinalizeBlock(ctx, &abci.RequestFinalizeBlock{
	Txs: b.Payload().Txs(),
	// Empty DecidedLastCommit is does not indicate missing votes.
nit: is does -> does
Where is DecidedLastCommit normally set?
By design, each tendermint block (except for the first one) contains a Commit (a set of votes, unilaterally chosen by the proposer) for the previous block. DecidedLastCommit is by design a digest of that Commit (essentially information about who voted and who did not), which is then used to decide which validators are offline and should be jailed. Here we use a degenerate DecidedLastCommit which just does not disclose anything about the Commit of the previous block (especially since autobahn blocks do not contain such information).
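The degenerate digest described above can be sketched with simplified stand-ins for the ABCI types (field names here are approximations, not the exact ABCI definitions): an empty vote set discloses nothing, rather than claiming that every validator missed the previous block.

```go
package main

import "fmt"

// VoteInfo is a simplified stand-in for the per-validator vote record
// inside DecidedLastCommit (hypothetical field names).
type VoteInfo struct {
	Validator   string
	BlockIDFlag int // whether this validator's vote was included
}

// CommitInfo approximates the DecidedLastCommit digest of the previous
// block's Commit, used downstream to jail offline validators.
type CommitInfo struct {
	Round int32
	Votes []VoteInfo
}

// degenerateLastCommit returns a digest with no vote information at all,
// since autobahn blocks do not carry the previous block's Commit; an empty
// vote set must not be read as "everyone was absent".
func degenerateLastCommit() CommitInfo {
	return CommitInfo{Round: 0, Votes: nil}
}

func main() {
	ci := degenerateLastCommit()
	fmt.Println(len(ci.Votes))
}
```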
if !r.cfg.InboundPeers[key] {
	ok := false
	for _, addr := range r.cfg.ValidatorAddrs {
		if addr.Key == key {
So for now we only allow inbound connection from validators? How do RPCs connect in loadtest?
For now only validator nodes are supported. For the sake of the loadtest/local cluster we can model RPC nodes as inactive validator nodes (which do not belong to the committee). This should be a small change. Alternatively we can go ahead and divide Giga TCP connections into validator connections and regular p2p connections (without avail/consensus messages).
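The allow-list logic from the snippet, plus the "model RPC nodes as inactive validators" workaround, can be sketched as follows. The config struct and `allowInbound` helper are hypothetical simplifications of the router code, not the PR's exact shape:

```go
package main

import "fmt"

// NodePublicKey stands in for the real key type.
type NodePublicKey string

// routerCfg approximates the router config fields used by the snippet above.
type routerCfg struct {
	InboundPeers   map[NodePublicKey]bool
	ValidatorAddrs []struct{ Key NodePublicKey }
}

// allowInbound accepts a peer if it is an explicitly configured inbound peer
// or appears in the validator address list. In a loadtest, an RPC node can be
// modeled as an "inactive validator" by listing its key in ValidatorAddrs.
func allowInbound(cfg routerCfg, key NodePublicKey) bool {
	if cfg.InboundPeers[key] {
		return true
	}
	for _, addr := range cfg.ValidatorAddrs {
		if addr.Key == key {
			return true
		}
	}
	return false
}

func main() {
	cfg := routerCfg{
		InboundPeers:   map[NodePublicKey]bool{"rpc1": true},
		ValidatorAddrs: []struct{ Key NodePublicKey }{{Key: "val1"}},
	}
	fmt.Println(allowInbound(cfg, "val1"), allowInbound(cfg, "rpc1"), allowInbound(cfg, "stranger"))
}
```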
Made GigaRouter send finalized blocks for execution to the Application. Currently Application calls are still synchronous, but it will be trivial to migrate to async execution once it is supported. This is a PoC, given that there are no integration tests of Autobahn with the sei-chain app yet. Integration tests will be set up once Autobahn is integrated with Mempool as well.
Additionally added an AGENTS.md stub for sei-tendermint to navigate code generation from now on.