Conversation

@haikoschol

This PR addresses part of #477, specifically the issue described in this comment.

@haikoschol
Author

Draft for two reasons:

  1. So I can try avoiding the BoxFuture and the associated heap allocation.
  2. Because this is based on master, which does not have my fixes from #465 (fix: multistream-select negotiation on outbound substream over webrtc) and therefore doesn't actually work.

@lexnv should I wait for #465 to get merged and then merge master into this branch to pick up the changes, or base this PR on my branch for #465?

@haikoschol

This comment was marked as outdated.

```diff
 fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
-    self.rx.poll_recv(cx)
+    let item = self.rx.poll_recv(cx);
+    self.write_waker.wake();
```

Collaborator

How could we tell from the poll_next rx channel that the poll_write tx channel has free capacity?

Isn't this introducing a slightly different variation of the same class of issue (CPU looping)?

What would happen if we save the waker in poll_write, but the channel has capacity (meaning we start sending right away)? Wouldn't we wake the context needlessly on every poll_next?

Author

> How could we tell from the poll_next rx channel that the poll_write tx channel has free capacity?

SubstreamHandle::rx is the other side of Substream::tx.

When a protocol wants to send data, it calls Substream::poll_write(), which writes the data to Substream::tx. WebRtcConnection::run() polls the associated SubstreamHandle, which pulls the data from SubstreamHandle::rx.

BTW, when I double-checked this, I noticed that the doc comments on SubstreamHandle::tx and SubstreamHandle::rx are incorrect. On rx it says "RX channel for receiving messages from peer." It should probably say something like "RX channel for receiving outbound messages from the associated Substream instance."
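
For reference, a minimal sketch of that pairing with the corrected doc comments. Only the field names (`Substream::tx`, `SubstreamHandle::rx`) come from the PR; the element type and everything else is an assumption for illustration:

```rust
use tokio::sync::mpsc;

/// Protocol-facing side (illustrative sketch, not the actual litep2p type).
struct Substream {
    /// TX channel for sending outbound messages to the connection
    /// event loop, which drains them via `SubstreamHandle::rx`.
    tx: mpsc::Sender<Vec<u8>>,
}

/// Side polled by `WebRtcConnection::run()` (illustrative sketch).
struct SubstreamHandle {
    /// RX channel for receiving outbound messages from the associated
    /// `Substream` instance.
    rx: mpsc::Receiver<Vec<u8>>,
}
```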

> Isn't this introducing a slightly different variation of the same class of issue (CPU looping)?

I don't think so, but I don't know how to verify that. I'm out of my depth here on the inner workings of Rust async and tokio, so I'm relying on AI. 😐

> What would happen if we save the waker in poll_write, but the channel has capacity (meaning we start sending right away)? Wouldn't we wake the context needlessly on every poll_next?

Yes, there would be spurious wakes, although I'm not sure they would happen on every poll_next(). According to GPT 5.1:

> In almost all realistic workloads, the cost of occasional spurious wakeups is much lower than the cost of introducing a heap allocation via reserve_owned() + BoxFuture.
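
For concreteness, a rough sketch of the register-then-wake pattern being debated here. All names are hypothetical and the real code differs; for one thing, a real `poll_write` must keep the frame rejected on the `Full` branch instead of dropping it:

```rust
use std::sync::Arc;
use std::task::{Context, Poll};

use futures::task::AtomicWaker;
use tokio::sync::mpsc;

struct Writer {
    tx: mpsc::Sender<Vec<u8>>,
    // Shared with the reader side, which calls `wake()` after draining `rx`.
    write_waker: Arc<AtomicWaker>,
}

impl Writer {
    fn poll_write(&mut self, cx: &mut Context<'_>, frame: Vec<u8>) -> Poll<()> {
        match self.tx.try_send(frame) {
            Ok(()) => Poll::Ready(()),
            Err(mpsc::error::TrySendError::Full(_frame)) => {
                // Channel full: park until poll_next drains a message.
                self.write_waker.register(cx.waker());
                Poll::Pending
            }
            // Simplified; a real implementation surfaces an error here.
            Err(mpsc::error::TrySendError::Closed(_)) => Poll::Ready(()),
        }
    }
}
```

`AtomicWaker::wake()` consumes the registered waker and is a no-op when none is registered, so an unconditional wake in `poll_next` is cheap, but it can still wake a writer that has since made progress; those are the spurious wakes conceded above.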

Author

I've added a test in b88b11e that is supposed to verify that the AtomicWaker approach works.
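
Not the code from b88b11e, just a self-contained sketch of the property such a test can pin down, using std's `Wake` trait and `futures::task::AtomicWaker`; all names are made up:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::task::{Wake, Waker};

use futures::task::AtomicWaker;

/// Test waker that records whether it was woken.
struct Flag(AtomicBool);

impl Wake for Flag {
    fn wake(self: Arc<Self>) {
        self.0.store(true, Ordering::SeqCst);
    }
}

#[test]
fn registered_writer_waker_fires_on_wake() {
    let write_waker = AtomicWaker::new();
    let flag = Arc::new(Flag(AtomicBool::new(false)));
    let waker = Waker::from(flag.clone());

    // What poll_write does when the tx channel is full.
    write_waker.register(&waker);
    assert!(!flag.0.load(Ordering::SeqCst));

    // What poll_next does after draining a message from rx.
    write_waker.wake();
    assert!(flag.0.load(Ordering::SeqCst));
}
```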

Collaborator

Yep, it makes sense 🙏

After digging a bit into this issue, I believe we could use tokio-util's PollSender here, or something similar. It basically allocates the future once and reuses the allocated memory for subsequent polls.
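
A hedged sketch of what that could look like in `poll_write`, with hypothetical field and frame types; internally `PollSender` reuses a single boxed send future across polls, which is what avoids the per-call allocation:

```rust
use std::io;
use std::task::{Context, Poll};

use tokio_util::sync::PollSender;

// Hypothetical writer half; only the PollSender usage is the point here.
struct Substream {
    tx: PollSender<Vec<u8>>,
}

impl Substream {
    fn poll_write(&mut self, cx: &mut Context<'_>, frame: Vec<u8>) -> Poll<io::Result<usize>> {
        // poll_reserve parks the task until the channel has capacity,
        // registering the waker internally instead of allocating a new
        // future on every call.
        match self.tx.poll_reserve(cx) {
            Poll::Ready(Ok(())) => {
                let len = frame.len();
                // send_item consumes the slot reserved above.
                if self.tx.send_item(frame).is_err() {
                    return Poll::Ready(Err(io::ErrorKind::BrokenPipe.into()));
                }
                Poll::Ready(Ok(len))
            }
            // The receiver side is gone.
            Poll::Ready(Err(_)) => Poll::Ready(Err(io::ErrorKind::BrokenPipe.into())),
            Poll::Pending => Poll::Pending,
        }
    }
}
```

The connection side can keep its plain `mpsc::Receiver`; only the writer half changes.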

Author
@haikoschol Dec 8, 2025

I've replaced the AtomicWaker with PollSender. Unit tests pass. I'll manually test with litep2p-perf now.

Author

> I'll manually test with litep2p-perf now.

works

```rust
}

#[tokio::test]
async fn backpressure_released_wakes_blocked_writer() {
```

Collaborator

Nice! 🚀
