Merged
Commits
32 commits
298edba
handled cardinality violation and added tests
Pranjali-2501 Jun 6, 2025
4c3a18a
Merge branch 'master' into server_streaming
Pranjali-2501 Jun 6, 2025
fd5d614
remove vet errors
Pranjali-2501 Jun 6, 2025
7e4206c
modified tests
Pranjali-2501 Jun 6, 2025
324566b
modified tests
Pranjali-2501 Jun 6, 2025
9fbf6b7
Merge branch 'master' into server_streaming
Pranjali-2501 Jun 19, 2025
9b9cddf
change server.recvmsg() to catch cardinality violation
Pranjali-2501 Jun 23, 2025
63565f2
Merge branch 'master' into server_streaming
Pranjali-2501 Jun 23, 2025
67549e7
replace srv with _
Pranjali-2501 Jun 23, 2025
6030b90
resolving comments
Pranjali-2501 Jul 1, 2025
dff08b3
resolving vets
Pranjali-2501 Jul 1, 2025
f4f1b61
addressed comments
Pranjali-2501 Jul 8, 2025
83e8664
resolving vets
Pranjali-2501 Jul 8, 2025
5acd9ab
resolving vets
Pranjali-2501 Jul 8, 2025
8f4f39b
minor change
Pranjali-2501 Jul 15, 2025
6de922d
minor change
Pranjali-2501 Jul 15, 2025
5f6c715
minor change
Pranjali-2501 Jul 22, 2025
990efe1
resolving comments
Pranjali-2501 Jul 24, 2025
7f6d31f
resolving nits
Pranjali-2501 Jul 24, 2025
41d8328
vet changes
Pranjali-2501 Jul 24, 2025
59ad122
added comment
Pranjali-2501 Jul 24, 2025
0f5248a
added comment
Pranjali-2501 Jul 25, 2025
1ff8868
resolving comments
Pranjali-2501 Jul 28, 2025
68fd0f8
update comment
Pranjali-2501 Jul 28, 2025
19e9e71
nits
Pranjali-2501 Jul 29, 2025
0d30b18
modifying test
Pranjali-2501 Jul 30, 2025
83fd598
resolving vet
Pranjali-2501 Jul 30, 2025
efa8e5a
remove comment
Pranjali-2501 Jul 30, 2025
29f6657
modifying tests
Pranjali-2501 Jul 31, 2025
183b1da
resolving comments
Pranjali-2501 Aug 3, 2025
749a52c
resolving comments
Pranjali-2501 Aug 4, 2025
1b1800d
resolving comments
Pranjali-2501 Aug 4, 2025
1 change: 1 addition & 0 deletions server.go
@@ -1598,6 +1598,7 @@ func (s *Server) processStreamingRPC(ctx context.Context, stream *transport.Serv
s: stream,
p: &parser{r: stream, bufferPool: s.opts.bufferPool},
codec: s.getCodec(stream.ContentSubtype()),
desc: sd,
maxReceiveMessageSize: s.opts.maxReceiveMessageSize,
maxSendMessageSize: s.opts.maxSendMessageSize,
trInfo: trInfo,
4 changes: 4 additions & 0 deletions stream.go
@@ -1580,6 +1580,7 @@ type serverStream struct {
s *transport.ServerStream
p *parser
codec baseCodec
desc *StreamDesc

compressorV0 Compressor
compressorV1 encoding.Compressor
@@ -1774,6 +1775,9 @@ func (ss *serverStream) RecvMsg(m any) (err error) {
binlog.Log(ss.ctx, chc)
}
}
if !ss.desc.ClientStreams {
Member:

Is this a behavior change? Users that call RecvMsg would previously get io.EOF and now they'll get a cardinality violation?

Contributor Author:

Yes, there is a behavior change.

For non-client-streaming RPCs, RecvMsg will return an Internal error in the following two cases:

  • When the client sends zero request messages.
  • When the server calls RecvMsg() twice.

Member:

I guess the second behavior change is only possible if you are using the generic API, so it wouldn't affect 99% of our users. But I still don't think this is a change we want to make. I would probably be in favor of it if we were not 1.0, but we need to keep backward compatibility unless there's enough justification, which doesn't seem to be the case here.

Contributor Author:

As discussed offline, I have added an additional piece of state in serverStream to track the first call to RecvMsg().

Based on that, the server will only return an Internal error when the client sends zero request messages.

return status.Errorf(codes.Internal, "RecvMsg is called twice")
}
return err
}
if err == io.ErrUnexpectedEOF {
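
The following is a minimal, illustrative sketch of the mechanism described in the thread above: an extra piece of state alongside desc on serverStream, plus a check against desc.ClientStreams. It is not the PR's actual diff; the recvState type, the recvFirstMsg field, the mapRecvErr helper, and the error message are assumptions.

package sketch

import (
    "io"

    "google.golang.org/grpc"
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

// recvState stands in for the extra state the thread describes adding to
// serverStream; the type and field names here are illustrative only.
type recvState struct {
    desc         *grpc.StreamDesc
    recvFirstMsg bool // set once RecvMsg has been called at least once
}

// mapRecvErr sketches how an io.EOF from the first RecvMsg call can be
// surfaced as a cardinality violation for non-client-streaming RPCs, while
// later calls keep returning io.EOF as before.
func (rs *recvState) mapRecvErr(err error) error {
    firstCall := !rs.recvFirstMsg
    rs.recvFirstMsg = true
    if err == io.EOF && firstCall && !rs.desc.ClientStreams {
        // The client half-closed without sending the single expected request.
        return status.Errorf(codes.Internal, "cardinality violation: expected one request message, received none")
    }
    return err
}

With this shape, a second RecvMsg call on a non-client-streaming method still returns io.EOF, so only the zero-request case surfaces as a new error, which is the narrower behavior change described in the reply above.
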
226 changes: 226 additions & 0 deletions test/end2end_test.go
@@ -3740,6 +3740,232 @@ func (s) TestClientStreaming_ReturnErrorAfterSendAndClose(t *testing.T) {
}
}

// Tests the behavior for server-side streaming when the server calls
// RecvMsg twice. The second call to RecvMsg should fail with an Internal
// error.
func (s) TestServerStreaming_ServerCallRecvMsgTwice(t *testing.T) {
lis, err := testutils.LocalTCPListener()
if err != nil {
t.Fatal(err)
}
defer lis.Close()

s := grpc.NewServer()
Contributor:

Similar to previous comments, can we use a stubserver if we don't need the server to misbehave? Also applicable to TestUnaryRPC_ClientCallSendMsgTwice.

Contributor Author:

Done.

serviceDesc := grpc.ServiceDesc{
ServiceName: "grpc.testing.TestService",
HandlerType: (*any)(nil),
Methods: []grpc.MethodDesc{},
Streams: []grpc.StreamDesc{
{
StreamName: "FullDuplexCall",
Handler: func(srv interface{}, stream grpc.ServerStream) error {
err := stream.RecvMsg(&testpb.Empty{})
if err != nil {
t.Errorf("stream.RecvMsg() = %v, want <nil>", err)
}

if err = stream.RecvMsg(&testpb.Empty{}); status.Code(err) != codes.Internal {
t.Errorf("stream.RecvMsg() = %v, want error %v", status.Code(err), codes.Internal)
}
return nil
},
ClientStreams: false,
ServerStreams: true,
},
},
Metadata: "grpc/testing/test.proto",
}
s.RegisterService(&serviceDesc, &testServer{})
go s.Serve(lis)
defer s.Stop()

ctx, cancel := context.WithTimeout(context.Background(), defaultTestTimeout)
defer cancel()
cc, err := grpc.NewClient(lis.Addr().String(), grpc.WithTransportCredentials(insecure.NewCredentials()))
if err != nil {
t.Fatalf("grpc.NewClient(%q) failed unexpectedly: %v", lis.Addr(), err)
}
defer cc.Close()

desc := &grpc.StreamDesc{
StreamName: "FullDuplexCall",
ServerStreams: true,
ClientStreams: false, // This is the test case: client is *not* allowed to stream
}

stream, err := cc.NewStream(ctx, desc, "/grpc.testing.TestService/FullDuplexCall")
if err != nil {
t.Fatalf("cc.NewStream() failed unexpectedly: %v", err)
}

if err := stream.SendMsg(&testpb.Empty{}); err != nil {
t.Errorf("stream.SendMsg() = %v, want <nil>", err)
}

if err := stream.RecvMsg(&testpb.Empty{}); status.Code(err) != codes.Internal {
t.Errorf("stream.RecvMsg() = %v, want error %v", status.Code(err), codes.Internal)
}
}
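
For reference, here is a hedged sketch of the stubserver-based shape the reviewer suggests in the thread inside the test above (and for TestUnaryRPC_ClientCallSendMsgTwice). StubServer usage (UnaryCallF, CC, Start, Stop) follows how the internal stubserver package is used elsewhere in grpc-go's tests and should be read as an assumption, not the PR's final code.

package sketch

import (
    "context"
    "testing"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/internal/stubserver"
    "google.golang.org/grpc/status"

    testpb "google.golang.org/grpc/interop/grpc_testing"
)

// TestUnarySendMsgTwiceWithStubServer sketches a stubserver-backed version of
// the client-misbehavior test: the server behaves normally, and only the
// client calls SendMsg twice on a non-client-streaming stream.
func TestUnarySendMsgTwiceWithStubServer(t *testing.T) {
    ss := &stubserver.StubServer{
        UnaryCallF: func(context.Context, *testpb.SimpleRequest) (*testpb.SimpleResponse, error) {
            return &testpb.SimpleResponse{}, nil // server behaves normally
        },
    }
    if err := ss.Start(nil); err != nil {
        t.Fatalf("ss.Start() failed: %v", err)
    }
    defer ss.Stop()

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    // Open a raw stream for the unary method so SendMsg can be called twice.
    desc := &grpc.StreamDesc{StreamName: "UnaryCall"}
    stream, err := ss.CC.NewStream(ctx, desc, "/grpc.testing.TestService/UnaryCall")
    if err != nil {
        t.Fatalf("cc.NewStream() failed unexpectedly: %v", err)
    }
    if err := stream.SendMsg(&testpb.SimpleRequest{}); err != nil {
        t.Errorf("stream.SendMsg() = %v, want <nil>", err)
    }
    // The second SendMsg on a non-client-streaming stream should fail locally.
    if err := stream.SendMsg(&testpb.SimpleRequest{}); status.Code(err) != codes.Internal {
        t.Errorf("stream.SendMsg() = %v, want code %v", err, codes.Internal)
    }
}

Whether the remaining tests can drop the hand-rolled grpc.ServiceDesc depends on the server-side StreamDesc the stub registers, since that is what the server's cardinality check keys on.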

// Tests the behavior for server-side streaming when the client calls
// SendMsg twice. The second call to SendMsg should fail with an Internal
// error.
func (s) TestServerStreaming_ClientCallSendMsgTwice(t *testing.T) {
lis, err := testutils.LocalTCPListener()
if err != nil {
t.Fatal(err)
}
defer lis.Close()

ss := grpc.UnknownServiceHandler(func(any, grpc.ServerStream) error {
return nil
})

s := grpc.NewServer(ss)
go s.Serve(lis)
defer s.Stop()

ctx, cancel := context.WithTimeout(context.Background(), defaultTestTimeout)
defer cancel()
cc, err := grpc.NewClient(lis.Addr().String(), grpc.WithTransportCredentials(insecure.NewCredentials()))
if err != nil {
t.Fatalf("grpc.NewClient(%q) failed unexpectedly: %v", lis.Addr(), err)
}
defer cc.Close()

desc := &grpc.StreamDesc{
StreamName: "FullDuplexCall",
ServerStreams: true,
ClientStreams: false,
}

stream, err := cc.NewStream(ctx, desc, "/grpc.testing.TestService/FullDuplexCall")
if err != nil {
t.Fatalf("cc.NewStream() failed unexpectedly: %v", err)
}

if err := stream.SendMsg(&testpb.Empty{}); err != nil {
t.Errorf("stream.SendMsg() = %v, want <nil>", err)
}

if err := stream.SendMsg(&testpb.Empty{}); status.Code(err) != codes.Internal {
t.Errorf("stream.SendMsg() = %v, want error %v", status.Code(err), codes.Internal)
}
}

// Tests the behavior for unary RPCs when the server calls RecvMsg twice.
// The second call to RecvMsg should fail with an Internal error.
func (s) TestUnaryRPC_ServerCallRecvMsgTwice(t *testing.T) {
lis, err := testutils.LocalTCPListener()
if err != nil {
t.Fatal(err)
}
defer lis.Close()

s := grpc.NewServer()
serviceDesc := grpc.ServiceDesc{
ServiceName: "grpc.testing.TestService",
HandlerType: (*any)(nil),
Methods: []grpc.MethodDesc{},
Streams: []grpc.StreamDesc{
{
StreamName: "UnaryCall",
Handler: func(srv interface{}, stream grpc.ServerStream) error {
err := stream.RecvMsg(&testpb.Empty{})
if err != nil {
t.Errorf("stream.RecvMsg() = %v, want <nil>", err)
}

if err = stream.RecvMsg(&testpb.Empty{}); status.Code(err) != codes.Internal {
t.Errorf("stream.RecvMsg() = %v, want error %v", status.Code(err), codes.Internal)
}
return nil
},
ClientStreams: false,
ServerStreams: false,
},
},
Metadata: "grpc/testing/test.proto",
}
s.RegisterService(&serviceDesc, &testServer{})
go s.Serve(lis)
defer s.Stop()

ctx, cancel := context.WithTimeout(context.Background(), defaultTestTimeout)
defer cancel()
cc, err := grpc.NewClient(lis.Addr().String(), grpc.WithTransportCredentials(insecure.NewCredentials()))
if err != nil {
t.Fatalf("grpc.NewClient(%q) failed unexpectedly: %v", lis.Addr(), err)
}
defer cc.Close()

desc := &grpc.StreamDesc{
StreamName: "UnaryCall",
ServerStreams: false,
ClientStreams: false,
}

stream, err := cc.NewStream(ctx, desc, "/grpc.testing.TestService/UnaryCall")
if err != nil {
t.Fatalf("cc.NewStream() failed unexpectedly: %v", err)
}

if err := stream.SendMsg(&testpb.Empty{}); err != nil {
t.Errorf("stream.SendMsg() = %v, want <nil>", err)
}

if err := stream.RecvMsg(&testpb.Empty{}); status.Code(err) != codes.Internal {
t.Errorf("stream.RecvMsg() = %v, want error %v", status.Code(err), codes.Internal)
}
}

// Tests the behavior for unary RPCs when the client calls SendMsg twice.
// The second call to SendMsg should fail with an Internal error.
func (s) TestUnaryRPC_ClientCallSendMsgTwice(t *testing.T) {
lis, err := testutils.LocalTCPListener()
if err != nil {
t.Fatal(err)
}
defer lis.Close()

ss := grpc.UnknownServiceHandler(func(any, grpc.ServerStream) error {
return nil
})

s := grpc.NewServer(ss)
go s.Serve(lis)
defer s.Stop()

ctx, cancel := context.WithTimeout(context.Background(), defaultTestTimeout)
defer cancel()
cc, err := grpc.NewClient(lis.Addr().String(), grpc.WithTransportCredentials(insecure.NewCredentials()))
if err != nil {
t.Fatalf("grpc.NewClient(%q) failed unexpectedly: %v", lis.Addr(), err)
}
defer cc.Close()

desc := &grpc.StreamDesc{
StreamName: "UnaryCall",
ServerStreams: false,
ClientStreams: false,
Member:

It seems like we should also have a test case that sets this to true and ensures that the server errors? This is theoretically catching a client-side check that errors if zero requests are sent before CloseSend is called, when the client knows it is not client-streaming.

(And if we don't have such a check we may want to add one.)

Contributor Author:

Just to clarify: you're suggesting that I add a test where the client behaves as client/bidi-streaming and sends zero requests, while the server behaves as server-streaming, and then assert that it fails on the server side due to a cardinality violation. Is that correct?

Member:

Correct. This exact test, pretty much, but set this field to true.

Contributor Author (Pranjali-2501, Jul 31, 2025):

I have modified the test to be table-driven, where the server runs against multiple StreamDescs, including client-streaming, server-streaming, and bidi-streaming.

Member:

The key difference between the two cases that I'd like to see is whether the client knows it's required to send a message.

In the case where the client knows (ClientStreams: false), we should detect the error locally and send a RST_STREAM to the server, but if the client doesn't know (ClientStreams: true), the server should detect the error and end the stream with an INTERNAL error. Can we confirm these things are happening? (And AFAICT the test will fail since the client doesn't check whether it has sent a message in CloseSend, so it's fine to make that test case Skip until we fix it in another PR.)

}

stream, err := cc.NewStream(ctx, desc, "/grpc.testing.TestService/UnaryCall")
if err != nil {
t.Fatalf("cc.NewStream() failed unexpectedly: %v", err)
}

if err := stream.SendMsg(&testpb.Empty{}); err != nil {
t.Errorf("stream.SendMsg() = %v, want <nil>", err)
}

if err := stream.SendMsg(&testpb.Empty{}); status.Code(err) != codes.Internal {
t.Errorf("stream.SendMsg() = %v, want error %v", status.Code(err), codes.Internal)
}
}
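
A hedged sketch of the table-driven zero-request case discussed in the thread above: the client sends no request messages to a non-client-streaming method, with ClientStreams toggling whether the client knows that. Test name, method registration, the skip, and the timeout are assumptions, not the PR's final code.

package sketch

import (
    "context"
    "fmt"
    "net"
    "testing"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/credentials/insecure"
    "google.golang.org/grpc/status"

    testpb "google.golang.org/grpc/interop/grpc_testing"
)

// TestZeroRequestMessageCardinality sketches the table-driven shape: either
// the client (ClientStreams: false) or the server (ClientStreams: true)
// should report the cardinality violation when zero requests are sent.
func TestZeroRequestMessageCardinality(t *testing.T) {
    lis, err := net.Listen("tcp", "localhost:0")
    if err != nil {
        t.Fatal(err)
    }
    defer lis.Close()

    // Register a non-client-streaming method whose handler just tries to read
    // the single expected request message.
    s := grpc.NewServer()
    s.RegisterService(&grpc.ServiceDesc{
        ServiceName: "grpc.testing.TestService",
        HandlerType: (*any)(nil),
        Streams: []grpc.StreamDesc{{
            StreamName: "UnaryCall",
            Handler: func(srv any, stream grpc.ServerStream) error {
                // With the PR's change, this returns a cardinality violation
                // when the client sent zero request messages.
                return stream.RecvMsg(&testpb.Empty{})
            },
        }},
    }, nil)
    go s.Serve(lis)
    defer s.Stop()

    for _, clientStreams := range []bool{false, true} {
        t.Run(fmt.Sprintf("clientStreams=%v", clientStreams), func(t *testing.T) {
            if !clientStreams {
                // Per the review thread, the client-side CloseSend-time check
                // does not exist yet, so this case would be skipped for now.
                t.Skip("client-side zero-message check not implemented yet")
            }
            ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
            defer cancel()
            cc, err := grpc.NewClient(lis.Addr().String(), grpc.WithTransportCredentials(insecure.NewCredentials()))
            if err != nil {
                t.Fatalf("grpc.NewClient(%q) failed unexpectedly: %v", lis.Addr(), err)
            }
            defer cc.Close()

            desc := &grpc.StreamDesc{StreamName: "UnaryCall", ClientStreams: clientStreams}
            stream, err := cc.NewStream(ctx, desc, "/grpc.testing.TestService/UnaryCall")
            if err != nil {
                t.Fatalf("cc.NewStream() failed unexpectedly: %v", err)
            }
            // Send zero request messages, half-close, and wait for the reply.
            if err := stream.CloseSend(); err != nil {
                t.Fatalf("stream.CloseSend() failed: %v", err)
            }
            if err := stream.RecvMsg(&testpb.Empty{}); status.Code(err) != codes.Internal {
                t.Errorf("stream.RecvMsg() = %v, want code %v", err, codes.Internal)
            }
        })
    }
}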

// Tests that a client receives a cardinality violation error for client-streaming
// RPCs if the server handler calls SendMsg multiple times.
func (s) TestClientStreaming_ServerHandlerSendMsgAfterSendMsg(t *testing.T) {