
Conversation

@apolcyn (Contributor) commented Feb 15, 2017

It looks like the current transport doesn't account for padding when consuming and updating the flow control window. This change accounts for it; another option might be to fail fast.

I'd like to add a large interop test for this situation. For now, I reproduced the sender deadlock locally by using http2.Framer.WritePadded on the client, accounting for padding in the client's send quota, and running a long-streaming benchmark, which gets 0 QPS.

cc @ejona86 @MakMukhi

update: btw, there's WIP for a generic interop test for this, but it's for the main grpc repo, separate from this PR.

@apolcyn (Contributor, author) commented Feb 17, 2017

Created a PR to test and verify this, in grpc/grpc#9776.
Master currently times out after running out of window quota, but the test case passes with this change.

@menghanl (Contributor) left a comment

Thanks for the fix!

Inline review thread on this line of the diff: `if f.Header().Flags.Has(http2.FlagDataPadded) {`

Can this be moved inside `if size > 0`?

@apolcyn (Contributor, author) replied:

Just updated to move this on both client and server.

@apolcyn force-pushed the account_for_padding branch from 5e3e6d5 to 3de1621 on February 17, 2017 at 23:24
@menghanl (Contributor) commented Mar 6, 2017

LGTM.

@apolcyn apolcyn merged commit 7b399ed into grpc:master Mar 6, 2017
@lock lock bot locked as resolved and limited conversation to collaborators Jan 19, 2019