fix: Read() data truncation issue #142
```diff
@@ -52,7 +52,6 @@ type Conn struct {
 	readErr           error
 	conn              net.Conn
 	bufReader         *bufio.Reader
-	reader            io.Reader
 	header            *Header
 	ProxyHeaderPolicy Policy
 	Validate          Validator
```
```diff
@@ -161,7 +160,6 @@ func NewConn(conn net.Conn, opts ...func(*Conn)) *Conn {
 
 	pConn := &Conn{
 		bufReader: br,
-		reader:    io.MultiReader(br, conn),
 		conn:      conn,
 	}
 
```
```diff
@@ -183,7 +181,21 @@ func (p *Conn) Read(b []byte) (int, error) {
 		return 0, p.readErr
 	}
 
-	return p.reader.Read(b)
+	// First drain any buffered data from header parsing,
+	// then read directly from the underlying connection.
+	n := 0
+	if p.bufReader != nil && p.bufReader.Buffered() > 0 {
+		n, _ = p.bufReader.Read(b)
+		if p.bufReader.Buffered() == 0 {
+			p.bufReader = nil
+		}
+	}
+
+	if n < len(b) {
+		nn, err := p.conn.Read(b[n:])
+		return n + nn, err
+	}
+	return n, nil
```
Owner

Following my comments from above, I think this logic can be improved:

```go
// Drain the buffer.
if p.bufReader != nil && p.bufReader.Buffered() > 0 {
	n, err := p.bufReader.Read(b)
	// If we got ANY data, return immediately. Do not block on conn.
	// We do not return EOF here because the underlying conn might have more.
	if err == io.EOF {
		p.bufReader = nil // Done with buffer
		return n, nil     // Return n, not EOF, so the next Read hits p.conn
	}
	return n, err
}
// We're done with the buffer.
// From now on, read directly from the underlying connection.
return p.conn.Read(b)
```
Contributor (Author)

This version does not fix the bug. I don't want to use a … Perhaps my expectation is incorrect, but I think the following code in ingress-nginx should not need a modification:
Owner

In #142 (comment) you are agreeing with the assessment that …
Owner

The logic I proposed above is incorrect. Here's the revised one:

```go
// Drain the buffer if it exists and has data.
if p.bufReader != nil {
	if p.bufReader.Buffered() > 0 {
		n, err := p.bufReader.Read(b)
		// Did we empty the buffer?
		// Buffering a net.Conn means the buffer doesn't return io.EOF
		// until the connection returns io.EOF. Therefore, we use
		// Buffered() == 0 to detect when we are done with the buffer.
		if p.bufReader.Buffered() == 0 {
			// Garbage collect the buffer.
			p.bufReader = nil
		}
		// Return immediately. Do not touch p.conn.
		// If err is EOF here, it means the connection is actually closed,
		// so we should return that error to the user anyway.
		return n, err
	}
	// If the buffer was empty to begin with (shouldn't happen with the
	// > 0 check, but good for safety), clear it.
	p.bufReader = nil
}
// From now on, read directly from the underlying connection.
return p.conn.Read(b)
```
Contributor (Author)

OK, I see your point about the MultiReader, which never iterated to the underlying conn, so data was always read in 256-byte chunks. That was not what I was bothered with in the first place. This second version fixes that aspect, but it still makes this library act in a non-transparent way w.r.t. the user.
||
| } | ||
|
|
||
| // Write wraps original conn.Write. | ||
|
|
||
| Original file line number | Diff line number | Diff line change |
|---|---|---|
|
|
@@ -1984,6 +1984,60 @@ func TestCopyFromWrappedConnectionToWrappedConnection(t *testing.T) { | |
| } | ||
| } | ||
|
|
||
| // chunkedConn wraps a net.Conn and limits reads to simulate TCP chunking. | ||
| type chunkedConn struct { | ||
| net.Conn | ||
| maxRead int | ||
| } | ||
|
|
||
| func (c *chunkedConn) Read(b []byte) (int, error) { | ||
| if len(b) > c.maxRead { | ||
| b = b[:c.maxRead] | ||
| } | ||
| return c.Conn.Read(b) | ||
| } | ||
|
|
||
| // TestConnReadTruncatesData demonstrates that Conn.Read() only returns | ||
| // buffered data when the initial TCP read is smaller than the payload. | ||
| func TestConnReadTruncatesData(t *testing.T) { | ||
| const payloadSize = 400 | ||
|
|
||
| proxyHeader := []byte("PROXY TCP4 192.168.1.1 192.168.1.2 12345 443\r\n") | ||
| payload := bytes.Repeat([]byte("X"), payloadSize) | ||
| fullData := append(proxyHeader, payload...) | ||
|
|
||
| serverConn, clientConn := net.Pipe() | ||
| defer func() { | ||
| serverCloseErr := serverConn.Close() | ||
| clientCloseErr := clientConn.Close() | ||
| if serverCloseErr != nil || clientCloseErr != nil { | ||
| t.Errorf("failed to close connection: %v, %v", serverCloseErr, clientCloseErr) | ||
| } | ||
| }() | ||
|
|
||
| go func() { | ||
| _, _ = clientConn.Write(fullData) | ||
| }() | ||
|
|
||
| // Simulate TCP delivering only 256 bytes in first read | ||
| chunked := &chunkedConn{Conn: serverConn, maxRead: 256} | ||
|
|
||
| // Create a ProxyProto-wrapped connection | ||
| conn := NewConn(chunked) | ||
| _ = conn.SetReadDeadline(time.Now().Add(time.Second)) | ||
|
|
||
| buf := make([]byte, 16384) | ||
| n, _ := conn.Read(buf) | ||
|
Owner

This is where our understanding of … diverges.
Contributor (Author)

That is the case for a typical reader. Here, however, your library is acting as a transparent "pipe" over an underlying conn, from which we abstract away the proxy protocol header. Therefore I would think it's reasonable to also transparently forward the … Also, if …
Owner

I misled you for some weird reason. The theoretical maximum size of a v2 header is 65,551 bytes (16 bytes fixed + 65,535 bytes variable).
```diff
+
+	t.Logf("Sent: %d bytes payload (after %d byte PROXY header)", payloadSize, len(proxyHeader))
+	t.Logf("Read: %d bytes", n)
+
+	if n < payloadSize {
+		t.Errorf("BUG: Read returned %d bytes, expected %d (lost %d bytes)",
+			n, payloadSize, payloadSize-n)
+	}
+}
+
 func benchmarkTCPProxy(size int, b *testing.B) {
 	// create and start the echo backend
 	backend, err := net.Listen("tcp", testLocalhostRandomPort)
```
IIUC this blocks waiting for the buffer to be filled, which is somewhat of a violation of the `io.Reader` contract. The `bufio.Reader` handled this.
I don't think it blocks more than what the previous code was doing. With a MultiReader, assuming you have no buffered data, the `MultiReader` would have received an EOF from the first reader (`bufReader`) and would then have called the next reader (`reader`) with a `Read(b)` call (which might have blocked as well).

IIUC a `bufio.Reader` backed by an open `net.Conn` will only return `io.EOF` on `.Read(b)` when the `net.Conn` is closed.