feat(yamux): increase max number of buffered inbound streams to 256 #3872
thomaseizinger wants to merge 4 commits into master from
Conversation
Why can't the limit be configurable?
I'd rather not introduce too many tunables and config options. In an ideal world, we migrate users to a muxer that supports backpressure on newly opened streams (QUIC or qmux); then this config option simply doesn't exist. In the absence of a backpressure mechanism, the best we can do is drop streams when the remote overloads us, otherwise we might face resource exhaustion. Go has already implemented tracking of the ACK backlog, and we will eventually implement it too (libp2p/rust-yamux#150). Aligning the buffer size with the ACK backlog should place us in a sweet spot where we never need to drop a stream. If we make this limit configurable, users will configure it and shoot themselves in the foot because it will likely not be in sync with the remote's ACK backlog. You are of course welcome to fork.
Was just out of curiosity though.
```rust
///
/// Implementations are encouraged to not open more than 256 unacknowledged streams, see <https://github.com/libp2p/specs/tree/master/yamux#ack-backlog>.
/// Thus, it makes sense to buffer up to 256 streams before dropping them for good interoperability.
const MAX_BUFFERED_INBOUND_STREAMS: usize = 256;
```
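To illustrate the drop-on-overflow behaviour this constant implies, here is a minimal, self-contained sketch. The `Stream` type and `buffer_inbound` function are hypothetical stand-ins, not the actual rust-yamux API; the real implementation lives inside the connection's polling logic.

```rust
use std::collections::VecDeque;

// Hypothetical stand-in for a yamux stream; the real type lives in rust-yamux.
#[derive(Debug)]
struct Stream(u32);

const MAX_BUFFERED_INBOUND_STREAMS: usize = 256;

/// Buffer a newly received inbound stream, dropping it when the buffer is
/// full. Without a backpressure mechanism, dropping is the only way to
/// protect against resource exhaustion by a remote that opens streams
/// faster than we process them.
fn buffer_inbound(buffer: &mut VecDeque<Stream>, stream: Stream) -> bool {
    if buffer.len() >= MAX_BUFFERED_INBOUND_STREAMS {
        drop(stream); // no room left: discard the new stream
        return false;
    }
    buffer.push_back(stream);
    true
}

fn main() {
    let mut buffer = VecDeque::new();
    // Simulate a remote opening 300 streams while we process none of them.
    let accepted = (0..300)
        .filter(|&i| buffer_inbound(&mut buffer, Stream(i)))
        .count();
    assert_eq!(accepted, 256);
    assert_eq!(buffer.len(), 256);
}
```

With the buffer size matched to the 256-stream ACK backlog recommended by the spec, a well-behaved remote should stop opening streams before this drop path is ever hit.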
I am not sure I follow the reasoning above. At the point where a new stream is retrieved through this.poll_inner, the stream will be acknowledged to the remote, thus allowing the remote to open another new stream.
As far as I can tell, bumping this limit just delays running into the limit itself, no?
Thanks for questioning this!
> At the point where a new stream is retrieved through this.poll_inner, the stream will be acknowledged to the remote
I don't think this is true. We emit streams in rust-yamux without acknowledging them directly because we don't want to send a frame for just acknowledging a stream. I'll have to dig into this in detail but I think we delay the acknowledgement until we send the first payload on this stream.
The first payload will be multistream-select, which in our case happens directly after we have removed a stream from this buffer. Meaning if our Connection is slow in removing streams from this buffer, the remote should eventually stop opening new streams.
I think it makes sense to write a test against this just to be sure, but for that we need to land the ACK backlog changes first: libp2p/rust-yamux#150
If the `WindowUpdateMode` is set to `OnRead` (which is the default), then we are not sending a frame on a newly opened inbound stream until the user first uses it. To acknowledge the stream in that case, we preemptively set the ACK flag:
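The delayed-ACK reasoning can be sketched as follows. The types and names here (`AckState`, `InboundStream`, `send_frame`) are simplified illustrative stand-ins, not the actual rust-yamux API; the point is only that the acknowledgement piggybacks on the first frame sent rather than being a standalone frame.

```rust
/// Whether an inbound stream has been acknowledged to the remote yet.
#[derive(PartialEq)]
enum AckState {
    AwaitingAck,
    Acked,
}

/// Simplified stand-in for an inbound yamux stream.
struct InboundStream {
    ack: AckState,
}

impl InboundStream {
    fn new() -> Self {
        InboundStream { ack: AckState::AwaitingAck }
    }

    /// Returns `true` if the outgoing frame must carry the ACK flag.
    /// Only the first frame sent on the stream does; after that the
    /// stream counts as acknowledged and leaves the remote's ACK backlog.
    fn send_frame(&mut self) -> bool {
        let set_ack_flag = self.ack == AckState::AwaitingAck;
        self.ack = AckState::Acked;
        set_ack_flag
    }
}

fn main() {
    let mut stream = InboundStream::new();
    assert!(stream.send_frame()); // first frame carries the ACK flag
    assert!(!stream.send_frame()); // subsequent frames do not
}
```

This is why a slow local `Connection` exerts indirect backpressure: until we pull the stream out of the buffer and send its first payload (multistream-select), the remote never sees the ACK and, if it tracks its ACK backlog, stops opening new streams.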
Let me know if you see anything wrong with the reasoning here.
I realized I still had a fully-working branch for the ACK backlog locally so I just quickly added a test for this as well and it confirmed my intuition: libp2p/rust-yamux#153
This pull request has merge conflicts. Could you please resolve them @thomaseizinger? 🙏
Instead of continuing here, i.e. buffer in
Yes, happy to do that. When I opened this, yamux wasn't as developed yet / I had forgotten that I still have the backlog branch locally 😁
Description
With the effort to implement a consistent ACK backlog across all yamux implementations, it makes sense to raise this limit to 256 to avoid stream drops as much as possible.
Notes & open questions
Related: paritytech/polkadot-sdk#758.
Change checklist