
Commit b902040

mem: allow using io.WriterTo with an io.LimitedReader
The caller of this function can wrap the io.Reader in an io.LimitedReader, which happens when some max message size is set. In that case, the existing `io.WriterTo` check no longer matches. Work around this by also checking whether the reader is an `io.LimitedReader` whose wrapped reader implements `io.WriterTo`.

Overall, the problem I'm trying to solve is that the constant buffer size used by

```go
buf := pool.Get(readAllBufSize)
```

(32KiB) is far too large for our use case. Messages are typically at most about 1KiB, so even in the best case we overallocate by ~31KiB. We therefore want to take the `io.WriterTo` branch so that the buffers can be sized appropriately.

Signed-off-by: Giedrius Statkevičius <[email protected]>
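As a minimal illustration of why the existing check stops matching (standard-library types only; not part of the change itself): `*io.LimitedReader` only implements `Read`, so asserting `io.WriterTo` on the wrapper fails even when the underlying reader supports it.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

func main() {
	// *bytes.Reader implements io.WriterTo...
	var r io.Reader = bytes.NewReader([]byte("payload"))
	_, direct := r.(io.WriterTo)

	// ...but wrapping it to enforce a max message size hides that:
	// *io.LimitedReader itself only implements Read.
	var limited io.Reader = &io.LimitedReader{R: r, N: 1024}
	_, wrapped := limited.(io.WriterTo)

	fmt.Println(direct, wrapped) // true false
}
```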
1 parent 7472d57 commit b902040

File tree

1 file changed (+12 −0 lines)


mem/buffer_slice.go

Lines changed: 12 additions & 0 deletions
```diff
@@ -257,6 +257,18 @@ func ReadAll(r io.Reader, pool BufferPool) (BufferSlice, error) {
 		_, err := wt.WriteTo(w)
 		return result, err
 	}
+
+	if lr, ok := r.(*io.LimitedReader); ok {
+		if wt, ok := lr.R.(io.WriterTo); ok {
+			// This is more optimal since wt knows the size of chunks it wants to
+			// write and, hence, we can allocate buffers of an optimal size to fit
+			// them. E.g. might be a single big chunk, and we wouldn't chop it
+			// into pieces.
+			w := NewWriter(&result, pool)
+			_, err := wt.WriteTo(w)
+			return result, err
+		}
+	}
 nextBuffer:
 	for {
 		buf := pool.Get(readAllBufSize)
```
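A minimal, self-contained sketch of the resulting dispatch order, using only standard-library types: `readAllSketch` is a hypothetical stand-in for `mem.ReadAll`, and `bytes.Buffer` stands in for the pool-backed `NewWriter`/`BufferSlice` pair.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// readAllSketch mirrors the dispatch order after this change: try the
// io.WriterTo fast path, then unwrap *io.LimitedReader, and only then fall
// back to copying through fixed-size buffers (pool.Get(readAllBufSize),
// i.e. 32KiB, in the real function).
func readAllSketch(r io.Reader) ([]byte, error) {
	var result bytes.Buffer

	if wt, ok := r.(io.WriterTo); ok {
		// The reader knows its own chunk sizes, so let it write directly.
		_, err := wt.WriteTo(&result)
		return result.Bytes(), err
	}

	if lr, ok := r.(*io.LimitedReader); ok {
		if wt, ok := lr.R.(io.WriterTo); ok {
			// A max-message-size limit wrapped the reader; the wrapped
			// reader still implements io.WriterTo, so use it. As in the
			// diff above, lr.N is not consulted on this path.
			_, err := wt.WriteTo(&result)
			return result.Bytes(), err
		}
	}

	// Generic fallback: copy through intermediate buffers.
	_, err := io.Copy(&result, r)
	return result.Bytes(), err
}

func main() {
	msg := bytes.NewReader(bytes.Repeat([]byte("x"), 1024))  // ~1KiB message
	limited := &io.LimitedReader{R: msg, N: 4 * 1024 * 1024} // max message size

	data, err := readAllSketch(limited)
	fmt.Println(len(data), err) // 1024 <nil>
}
```

Running the sketch prints `1024 <nil>`: the whole ~1KiB message lands via a single `WriteTo` call on the unwrapped reader rather than through a 32KiB scratch buffer.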
