
contractcourt: add mempool watcher to notify mempool events#7564

Merged
Roasbeef merged 16 commits into lightningnetwork:master from yyforyongyu:mempool-watch
Apr 18, 2023

Conversation


@yyforyongyu yyforyongyu commented Apr 3, 2023

This PR adds a new subsystem to allow subscription to mempool events. Specifically, it allows the caller to watch for the spending of a given utxo, and returns the spending tx once it's found in the mempool.

Fixes #4254
Depends on

TODO

  • subscribe input spent in htlc timeout resolver
  • update gomod
  • unit tests
  • itests
  • release notes

@yyforyongyu yyforyongyu added the channel closing (Related to the closing of channels cooperatively and uncooperatively), HTLC, and mempool labels Apr 3, 2023
@yyforyongyu yyforyongyu added this to the v0.16.1 milestone Apr 3, 2023
@yyforyongyu yyforyongyu self-assigned this Apr 3, 2023
@yyforyongyu yyforyongyu force-pushed the mempool-watch branch 4 times, most recently from 94ac9ab to f0d19e7 on April 3, 2023 15:38

Roasbeef commented Apr 3, 2023

Nice, this'll serve to reduce the perceived swap latency of swap services like Loop, as the swap can be completed as soon as the transaction hits the mempool (no need to wait for the eventual conf, which might take some time w/ today's mempool).

@yyforyongyu yyforyongyu force-pushed the mempool-watch branch 3 times, most recently from 19381ba to 71d8d23 on April 5, 2023 21:24
@Roasbeef Roasbeef changed the base branch from 0-16-1-staging to master April 6, 2023 00:39

Roasbeef commented Apr 6, 2023

Changed the base branch to master.


Roasbeef commented Apr 6, 2023

The dependent btcwallet branch is merged.


@Roasbeef Roasbeef left a comment


This is really coming along, nice work!

I think the final design consideration here is that we should still fetch the mempool on startup so we can handle the restart case properly.

@yyforyongyu yyforyongyu force-pushed the mempool-watch branch 2 times, most recently from 538b817 to bcf4473 on April 7, 2023 11:16
@yyforyongyu yyforyongyu marked this pull request as ready for review April 7, 2023 11:16
@yyforyongyu yyforyongyu changed the title WIP: add mempool watcher to notify mempool events contractcourt: add mempool watcher to notify mempool events Apr 7, 2023
@yyforyongyu yyforyongyu requested review from Crypt-iQ and Roasbeef April 7, 2023 11:23

@Crypt-iQ Crypt-iQ left a comment


Think it's very close

Collaborator


does this need to be stored back into subscribedInputs? That pattern was used elsewhere, but doesn't seem necessary since clients is a pointer?

Member


I suspect so, usually whenever I mess with multiple layers of maps, I always need to update the intermediate value all the way up the parent stack.

A unit test should reveal the actual behavior here.

Member Author


Yeah, this is just a pointer so there's no need to store it back. This is also why clients is a sync.Map, so it's safe to update it concurrently.

There's a unit test, TestMempoolUnsubscribeEvent, that checks the clients here are updated when checking the subscribedInputs map.

Collaborator


Don't see this test, typo or do you mean TestMempoolUnsubscribeConfirmedSpentTx?

Member Author


the test is in the following commit

Collaborator


Commit message is slightly inaccurate here. For LND, the outgoing broadcast delta is 0, so typically the force close is broadcast when currentHeight == h.htlcExpiry unless there are other HTLCs, user-initiated closes, etc. Other implementations may broadcast with non-zero deltas though. The earliest the timeout transaction can be confirmed is at height h.htlcExpiry+1, so the conditional could instead read if uint32(currentHeight) > h.htlcExpiry to stop preimage-claiming as soon as the counterparty can time it out. I don't think it makes a big difference here though, and we could potentially even bump the h.htlcExpiry+1 to something larger to account for variable fee environments.

Member Author


Cool this commit is now removed. It was added to trigger the race condition used in the itest testExtraMempoolPreimage. Now that test will only be added in the fee bumper PR, will also move this commit and its related discussion there.

Collaborator


If there is a preimage spend in the mempool, it will cause the resolver to clean up and mark it as ResolverOutcomeClaimed. But if the timeout tx eventually confirms, the resolver will not be around to mark it as ResolverOutcomeTimeout. This might mess up somebody's accounting. I think it would be better if the resolver wasn't cleaned up until an on-chain resolution occurs. We should be fine with sending multiple resolution messages for a single CircuitKey even if the types (success vs fail) differ. This newer behavior could also use an itest. If it's not possible to make an itest that we can commit, a hacky itest would also work that tests the behavior of duplicate resolution messages sent to the switch.

Member


IIUC, then below, even once we see the transaction in the mempool, we'll wait until things are fully confirmed before we try to clean up state. See the consumeSpendEvents method below. After we see things in the mempool we resolve and then send back, but we wait to exit the loop until things are confirmed on chain.

Or do you mean that the incoming link shouldn't be resolved (on disk) until the outgoing one is? In the scenario where both links need to go on chain.

Member Author


This might mess up somebody's accounting.

Was thinking about this too, and wondering how one could find out about the extra htlc "earned" from the timeout path. And does it also mean we can rely on the bucket contractsBucketKey to tell if an htlc is settled or failed? If so, can we get rid of the finalHtlcsBucket?


@Crypt-iQ Crypt-iQ Apr 11, 2023


but wait to exit the loop until things are confirmed on chain.

The loop will continue, but the calling function waitForMempoolOrBlockSpend will exit immediately. If the timeout actually hits the chain, then handleCommitSpend won't be called. This would leave the output from the 2nd-level-timeout unclaimed and also give inaccurate accounting. The user would be made whole since they're claiming on the incoming link with the preimage, but this case would leave the value of the outgoing HTLC on the table.

Keeping this behavior might be acceptable since there are no funds at risk and it avoids introducing potential complexity to the switch's resolution message logic.

Collaborator


And does it also mean we can rely on the bucket contractsBucketKey to tell if an htlc is settled or failed?

I don't think so since when a contract (which includes success/timeout claims) is resolved, it is deleted from contractsBucketKey.


@yyforyongyu yyforyongyu force-pushed the mempool-watch branch 2 times, most recently from a61124f to 2c33000 on April 11, 2023 07:43
Member


Alternatively, we could do ht.Skipf at the top if we detect it's a neutrino backend.

Collaborator


agree should be skipped


@yyforyongyu yyforyongyu Apr 17, 2023


I don't think we have test coverage for this resolver for neutrino tho



@Crypt-iQ Crypt-iQ left a comment


LGTM 🚜 after the btcwallet dep is updated; also, a linter error?

This commit adds a mempool notifier which notifies the subscriber of the spending event found in the mempool for a given input.

This commit changes the `subscribedInputs` to store a map of subscribers so multiple subscribers are allowed to receive events from the same outpoint.

This commit adds the method `UnsubscribeEvent` to cancel a single subscription.

This commit adds the mempool watcher to the bitcoind notifier to allow the notifier to manage the starting and stopping of the watcher.

This commit adds the mempool watcher to the btcd notifier to allow the notifier to manage the starting and stopping of the watcher.

This commit adds the interface `MempoolWatcher` and uses it in the chain registry. It also fixes the docs and renames `isSuccessSpend` to `isPreimageSpend`.

This commit extends the current htlc timeout resolver to also watch for the preimage spend in the mempool for a full node backend.

If mempool watching is enabled, the resolver will watch for the spend of the htlc output in the mempool and in blocks **concurrently**, as if they are independent.

Ideally, a transaction will first appear in the mempool and then in a block. However, there's no guarantee it will appear in **our** mempool since there's no global mempool, thus we need to watch for the spend in both places in case it doesn't go through our mempool.

The current design favors the spend event found in blocks, that is, once the tx is confirmed, we abort the monitoring and continue, since the outpoint cannot be double spent and re-appear in the mempool again. This is not true in the rare case of a reorg, and we will handle reorgs separately.

This commit removes the subscribed inputs from the mempool notifier when the relevant transaction is confirmed.

This commit adds more debug logs for the witness beacon and channel arbitrator.

@Roasbeef Roasbeef left a comment


LGTM 🦄


Labels

channel closing (Related to the closing of channels cooperatively and uncooperatively), HTLC, mempool

Projects

None yet

Development

Successfully merging this pull request may close these issues.

contractcourt: create new mempool witness script template notification system

3 participants