
multi: add new rbf coop close actor for RPC server fee bumps#9821

Open
Roasbeef wants to merge 5 commits into master from coop-close-actor

Conversation

@Roasbeef
Member

In this PR, we create a new rbfCloseActor wrapper struct. This wraps
the RPC operations that trigger a new RBF close bump within a new
actor. In the next commit, we register this actor and clean up the
call graph from the RPC server to this actor.

We then register the rbfCloseActor when we create the rbf
chan closer state machine. Now the RPC server no longer needs to
traverse a series of maps and pointers (rpcServer -> server -> peer ->
activeCloseMap -> rbf chan closer) to trigger a new fee bump.

Instead, it just creates the service key at which it knows the closer
can be reached, and sends a message to it via the returned
actorRef/router. We also hide additional details regarding the various
methods in play, as we only care about the types of messages we expect
to send and receive.

Along the way we add some helper types to enable any protofsm state
machine to function as an actor in this framework.

Depends on #9820

@coderabbitai
Contributor

coderabbitai bot commented May 17, 2025

Important

Review skipped

Auto reviews are limited to specific labels.

🏷️ Labels to auto review (1)
  • llm-review

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



@github-actions

Pull reviewers stats

Stats of the last 30 days for lnd:

| User | Total reviews | Time to review | Total comments |
| --- | --- | --- | --- |
| guggero 🥇 | 24 | 23h 49m | 47 |
| ziggie1984 🥈 | 13 | 12h 35m | 35 |
| bhandras 🥉 | 11 | 4h 34m | 12 |
| yyforyongyu | 10 | 1d 3h 57m | 16 |
| Roasbeef | 7 | 9h 39m | 4 |
| ellemouton | 5 | 1d 6h 18m | 5 |
| bitromortac | 5 | 1h 41m | 6 |
| morehouse | 3 | 1d 1h 19m | 3 |
| ffranr | 2 | 18m | 0 |
| mohamedawnallah | 2 | 6d 14h 50m | 11 |
| NishantBansal2003 | 2 | 5d 15h 32m | 0 |
| sputn1ck | 1 | 23h 39m | 2 |
| GeorgeTsagk | 1 | 3d 36m | 0 |
| saubyk | 1 | 20h 37m | 0 |
| MPins | 1 | 8d 14h 1m | 3 |

Collaborator

@erickcestari erickcestari left a comment


Can you rebase? I think this PR is going to be way easier to review after it.

@Roasbeef Roasbeef changed the base branch from actor to master March 18, 2026 01:44
In this commit, we create a new rbfCloseActor wrapper struct. This will
wrap the RPC operations to trigger a new RBF close bump within a new
actor. In the next commit, we'll now register this actor, and clean up
the call graph from the rpc server to this actor.
In this commit, we now register the rbfCloseActor when we create the rbf
chan closer state machine. Now the RPC server no longer needs to
traverse a series of maps and pointers (rpcServer -> server -> peer ->
activeCloseMap -> rbf chan closer) to trigger a new fee bump.

Instead, it just creates the service key that it knows that the closer
can be reached at, and sends a message to it using the returned
actorRef/router. We also hide additional details re the various methods
in play, as we only care about the type of message we expect to send and
receive.
In this commit, we implement the actor.ActorBehavior interface for
StateMachine. This enables the state machine executor to be registered
as an actor, and have messages be sent to it via a unique ServiceKey
that a concrete instance will set.
This can be used to allow any system to send a message to the RBF chan
closer if it knows the proper service key. In the future, we can use
this to redo the msgmux.Router in terms of the new actor abstractions.
@Roasbeef
Member Author

@erickcestari rebased!

Collaborator

@erickcestari erickcestari left a comment


Nice refactor! Routing fee bumps through the actor system with a service key lookup is a clean simplification over the old rpcServer -> server -> peer chain with the DB fetch + peer map lookup.

peerAccessMan *accessMan

// actors is the central registry for the set of active actors.
actors *actor.ActorSystem
Collaborator

Is the separation intentional for isolation, or should these be consolidated into a single actor system?

Comment on lines +98 to +110
// We only want to have a single actor instance for this rbf
// closer, so we'll now attempt to unregister any other
// instances.
_ = actorKey.UnregisterAll(r.actors)

// Now that we know that no instances of the actor are present,
// let's register a new instance. We don't actually need the ref
// though, as any interested parties can look up the actor via
// the service key.
actorID := fmt.Sprintf(
"PeerWrapper(RbfChanCloser(%s))", r.chanPoint,
)
_, _ = actorKey.Spawn(r.actors, actorID, r)
Collaborator

Shouldn't we handle the error from calling UnregisterAll and Spawn? At least may have a log here?

// reach an RBF chan closer, via an active peer.
//
//nolint:ll
func NewRbfCloserServiceKey(op wire.OutPoint) RbfCloseActorServiceKey {
Collaborator

nit:

Suggested change
func NewRbfCloserServiceKey(op wire.OutPoint) RbfCloseActorServiceKey {
func NewRbfCloserPeerServiceKey(op wire.OutPoint) RbfCloseActorServiceKey {

peerAccessMan *accessMan

// actors is the central registry for the set of active actors.
actors *actor.ActorSystem
Collaborator

Also we should probably shutdown both actors when the server stops

// rbfCloseMessage is a message type that is used to trigger a cooperative fee
// bump, or initiate a close for the first time.
type rbfCloseMessage struct {
actor.Message
Collaborator

Here it should embeds the actor.BaseMessage (struct) instead of the actor.Message (interface)

Suggested change
actor.Message
actor.BaseMessage


type retType = *CoopCloseUpdates

// If RBF coop close isn't permitted, then we'll an error.
Collaborator

nit:

Suggested change
// If RBF coop close isn't permitted, then we'll an error.
// If RBF coop close isn't permitted, then we'll return an error.

// nolint:ll
type RbfCloseActorServiceKey = actor.ServiceKey[rbfCloseMessage, *CoopCloseUpdates]

// NewRbfCloserPeerServiceKey returns a new service key that can be used to
Collaborator

nit:

Suggested change
// NewRbfCloserPeerServiceKey returns a new service key that can be used to
// NewRbfCloserServiceKey returns a new service key that can be used to

opStr := op.String()

// Now that even just using the channel point here would be enough, as
// we have a unique type here ChanCloserActorMsg which will handle the
Collaborator

nit:

Suggested change
// we have a unique type here ChanCloserActorMsg which will handle the
// we have a unique type here rbfCloseMessage which will handle the

Comment on lines +491 to +494
// Actors enables the peer to send messages to the set of actors, and
// also register new actors itself.

Actors *actor.ActorSystem
Collaborator

nit:

Suggested change
// Actors enables the peer to send messages to the set of actors, and
// also register new actors itself.
Actors *actor.ActorSystem
// Actors enables the peer to send messages to the set of actors, and
// also register new actors itself.
Actors *actor.ActorSystem

// bump, or initiate a close for the first time.
type rbfCloseMessage struct {
actor.Message

Collaborator

The ctx passed to Receive is the actor's internal context (a.ctx), not the caller's RPC stream context. This means observeRbfCloseUpdates can never detect RPC client disconnection via closeReq.Ctx.Done(), leaking the observer goroutine. Propagating the caller's context through the message restores the old TriggerCoopCloseRbfBump behavior.

Suggested change
// Ctx is the caller's context (e.g., the RPC stream context), used to detect client disconnection.
Ctx context.Context

@saubyk saubyk added this to the v0.21.0 milestone Mar 24, 2026
@saubyk saubyk added this to v0.21 Mar 24, 2026
@saubyk saubyk moved this to In review in v0.21 Mar 24, 2026
// allows us to specify that as an option.
replace google.golang.org/protobuf => github.com/lightninglabs/protobuf-go-hex-display v1.33.0-hex-display

replace github.com/lightningnetwork/lnd/actor => ./actor
Member

unnecessary change? also the TODO is removed

DeliveryScript: msg.DeliveryScript,
Updates: closeUpdates.UpdateChan,
Err: closeUpdates.ErrChan,
Ctx: ctx,
Member

this seems to be the wrong ctx to inherit? The ctx here is a bit difficult to follow - but my understanding is that the Receive stores the actor lifecycle context in ChanClose.Ctx. Once this path is wired in, canceling the RPC no longer cancels the close-update observer, because the observer tears down on closeReq.Ctx.Done(), but that context now belongs to the actor, not the RPC caller.

// bypassing the switch entirely.
closeReq := htlcswitch.ChanClose{
CloseType: contractcourt.CloseRegular,
ChanPoint: &msg.ChanPoint,
Member

should we check r.chanPoint != msg.ChanPoint? otherwise it just forwards blindly.

// We only want to have a single actor instance for this rbf
// closer, so we'll now attempt to unregister any other
// instances.
_ = actorKey.UnregisterAll(r.actors)
Member

let's log the error here? can imagine it will be easier for future debugging

p.log.Infof("Registering RBF actor for channel %v",
channel.ChannelPoint())

actorWrapper := newRbfCloseActor(
Member

Should we add a cleanup or unregister logic when the peer disconnects? otherwise we would have stale RBF actors.


// In addition to the message router, we'll register the state machine
// with the actor system.
if p.cfg.Actors != nil {
Member

We prolly need to move this block after p.activeChanCloses.Store(chanID, makeRbfCloser(&chanCloser)), so we store the closer first, then register the actor.

rpcsLog.Infof("Bypassing Switch to do fee bump "+
"for ChannelPoint(%v)", chanPoint)

closeUpdates, err := r.server.AttemptRBFCloseUpdate(
Member

+1

// RBF chan closer.
//
//nolint:ll
func NewRbfCloserServiceKey(op wire.OutPoint) actor.ServiceKey[ChanCloserActorMsg, bool] {
Member

we already have peer.NewRbfCloserServiceKey tho, the same name can be confusing.

@lightninglabs-deploy
Collaborator

@gijswijs: review reminder
@Roasbeef, remember to re-request review from reviewers when ready


Projects

Status: In review


5 participants