contractcourt: add mempool watcher to notify mempool events #7564
Roasbeef merged 16 commits into lightningnetwork:master from
Conversation
94ac9ab to f0d19e7
Nice, this'll serve to reduce the perceived swap latency of swap services like Loop, as the swap can be completed as soon as the transaction hits the mempool (no need to wait for the eventual conf, which might take some time w/ today's mempool).
19381ba to 71d8d23
Changed the base branch to master.
Dep btcwallet branch is merged.
Roasbeef left a comment
This is really coming along, nice work!
I think the final design point here is that we should still be fetching the mempool on startup so we can handle the restart case properly.
538b817 to bcf4473
bcf4473 to dc43f80
chainntnfs/mempool.go
Outdated
does this need to be stored back into subscribedInputs? That pattern was used elsewhere, but doesn't seem necessary since clients is a pointer?
I suspect so, usually whenever I mess with multiple layers of maps, I always need to update the intermediate value all the way up the parent stack.
A unit test should reveal the actual behavior here.
yeah this is just a pointer so there's no need to store it back. This is also why clients is a sync map, so it's safe to update it concurrently.
There's a unit test TestMempoolUnsubscribeEvent that checks the clients here are updated when inspecting the subscribedInputs map.
Don't see this test, typo or do you mean TestMempoolUnsubscribeConfirmedSpentTx?
the test is in the following commit
Commit message is slightly inaccurate here. For LND, the outgoing broadcast delta is 0, so typically the force close is broadcast when currentHeight == h.htlcExpiry unless there are other HTLCs, user-initiated closes, etc. Other implementations may broadcast with non-zero deltas though. The earliest the timeout transaction can be confirmed is at height h.htlcExpiry+1, so the conditional could instead read if uint32(currentHeight) > h.htlcExpiry to stop preimage-claiming as soon as the counterparty can time it out. I don't think it makes a big difference here though, and we could potentially even bump the h.htlcExpiry+1 to something larger to account for variable fee environments.
Cool this commit is now removed. It was added to trigger the race condition used in the itest testExtraMempoolPreimage. Now that test will only be added in the fee bumper PR, will also move this commit and its related discussion there.
If there is a preimage spend in the mempool, it will cause the resolver to clean up and mark it as ResolverOutcomeClaimed. But if the timeout tx eventually confirms, the resolver will not be around to mark it as ResolverOutcomeTimeout. This might mess up somebody's accounting. I think it would be better if the resolver wasn't cleaned up until an on-chain resolution occurs. We should be fine with sending multiple resolution messages for a single CircuitKey even if the types (success vs fail) differ. This newer behavior could also use an itest. If it's not possible to make an itest that we can commit, a hacky itest would also work that tests the behavior of duplicate resolution messages sent to the switch.
IIUC, then below, even once we see the transaction in the mempool, we'll wait until things are fully confirmed before we try to clean up state. See the consumeSpendEvents method below. After we see things in the mempool we resolve then send back, but wait to exit the loop until things are confirmed on chain.
Or do you mean that the incoming link shouldn't also be resolved (on disk) until the outgoing one is? In the scenario where both links need to go on chain.
This might mess up somebody's accounting.
Was thinking about this too, and wondering how one could find out the extra htlc "earned" from the timeout path.
And does it also mean we can rely on the bucket contractsBucketKey to tell if an htlc is settled or failed? If so we can get rid of the finalHtlcsBucket?
but wait to exit the loop until things are confirmed on chain.
The loop will continue, but the calling function waitForMempoolOrBlockSpend will exit immediately. If the timeout actually hits the chain, then handleCommitSpend won't be called. This would leave the output from the 2nd-level-timeout unclaimed and also give inaccurate accounting. The user would be made whole since they're claiming on the incoming link with the preimage, but this case would leave the value of the outgoing HTLC on the table.
Keeping this behavior might be acceptable since there are no funds at risk and it avoids introducing potential complexity to the switch's resolution message logic.
And does it also mean we can rely on the bucket contractsBucketKey to tell if an htlc is settled or failed?
I don't think so since when a contract (which includes success/timeout claims) is resolved, it is deleted from contractsBucketKey.
a61124f to 2c33000
itest/lnd_multi-hop_test.go
Outdated
Alternatively, could do ht.Skipf if we detect it's a neutrino backend at the top.
I don't think we have test coverage for this resolver for neutrino though
1271a6a to eb870f0
Crypt-iQ left a comment
LGTM 🚜 after the btcwallet dep is updated. Also, a linter error?
eb870f0 to d9a1fd7
This commit adds a mempool notifier which notifies the subscriber of the spending event found in the mempool for a given input.
This commit changes the `subscribedInputs` to store a map of subscribers so multiple subscribers are allowed to receive events from the same outpoint.
This commit adds the method `UnsubscribeEvent` to cancel a single subscription.
This commit adds the mempool watcher to the bitcoind notifier to allow the notifier to manage the starting and stopping of the watcher.
This commit adds the mempool watcher to the btcd notifier to allow the notifier to manage the starting and stopping of the watcher.
This commit adds the interface `MempoolWatcher` and uses it in the chain registry.
Also fixes the docs and renames `isSuccessSpend` to `isPreimageSpend`.
This commit extends the current htlc timeout resolver to also watch for the preimage spend in the mempool for a full node backend. If mempool is enabled, the resolver will watch the spend of the htlc output in the mempool and in blocks **concurrently**, as if they are independent. Ideally, a transaction will first appear in the mempool and then in a block. However, there's no guarantee it will appear in **our** mempool since there's no global mempool, thus we need to watch the spend in two places in case it doesn't go through our mempool. The current design favors the spend event found in blocks, that is, once the tx is confirmed, we abort the monitoring and continue since the outpoint cannot be double spent and re-appear in the mempool again. This is not true in the rare case of a reorg, and we will handle reorgs separately.
This commit removes the subscribed inputs from mempool notifier when the relevant transaction is confirmed.
This commit adds more debug logs for witness beacon and channel arbitrator.
d9a1fd7 to 437a329
437a329 to 66d1392
This PR adds a new subsystem to allow subscription of mempool events. Specifically, it allows the caller to watch the spending event of a given utxo, and returns the spending tx when it is found in the mempool.
Fixes #4254
Depends on:
- `GetRawMempool` and `GetRawTransaction` to `BitcoindClient` (btcsuite/btcwallet#853)
- `ForEach`, `Len` and `LoadOrStore` to `SyncMap` (#7563)
TODO