Conversation

@GiedriusS
Member

  • Implement io.WriterTo on the snappy decompressor, which avoids over-allocating memory for decompression (see the first sketch after this list).
  • Deploy a hack thanos-community/grpc-go@b902040 because the io.WriterTo assertion doesn't currently work with a limited receive size.
  • Disable pooling on unmarshaling and allocate slices of exactly the needed size. This avoids overprovisioning memory, because the default memory pool in gRPC has quite coarse-grained buckets (see the second sketch below).
  • Stop copying chunks out of *AggrChunk, since we don't do any pooling on unmarshaling anyway. This saves a bit of memory.
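To make the first item concrete, here is a minimal Go sketch, assuming a decompressor wrapping github.com/golang/snappy; the type and function names are hypothetical and only illustrate the idea of implementing io.WriterTo so the decompressed payload can be streamed out instead of buffered in full:

```go
package decompress

import (
	"io"

	"github.com/golang/snappy"
)

// writerToDecompressor is a hedged sketch (not the actual Thanos type) of a
// snappy stream decompressor that also implements io.WriterTo, so callers can
// stream decompressed bytes into a destination instead of pre-allocating a
// buffer for the whole decompressed payload.
type writerToDecompressor struct {
	sr *snappy.Reader
}

func newWriterToDecompressor(r io.Reader) *writerToDecompressor {
	return &writerToDecompressor{sr: snappy.NewReader(r)}
}

// Read keeps the plain io.Reader contract so existing call sites keep working.
func (d *writerToDecompressor) Read(p []byte) (int, error) {
	return d.sr.Read(p)
}

// WriteTo streams decompressed data into w through a fixed-size scratch buffer,
// so no allocation proportional to the decompressed size happens here.
func (d *writerToDecompressor) WriteTo(w io.Writer) (int64, error) {
	buf := make([]byte, 32*1024) // arbitrary scratch size, for illustration only
	return io.CopyBuffer(w, d.sr, buf)
}
```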

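And a hedged sketch of the exact-size allocation idea from the third item; copyExact is a hypothetical helper, not code from this PR:

```go
package decode

// copyExact is a minimal, purely illustrative sketch of the "no pooling,
// exact-size allocation" idea: when the wire length of a field is known,
// allocate precisely that many bytes instead of borrowing a larger,
// bucketed buffer from a memory pool.
func copyExact(wire []byte) []byte {
	out := make([]byte, len(wire)) // exactly the needed size, no bucket rounding
	copy(out, wire)
	return out
}
```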
@GiedriusS force-pushed the fixmemoryconsumption branch from 414b1eb to 569d4e2 on November 12, 2025 14:45
@clarifai-fmarceau

Genuinely curious about this: did you by any chance profile memory usage before and after this change?

@GiedriusS
Member Author

Sure, I ran the benchmark included in this PR. Only allocs/s is still higher compared to v0.39.2. This might be because the io.Reader is cast to io.WriterTo inside grpc-go; I haven't had enough time to look into it yet. And, of course, the gRPC max message size limit doesn't work. That's why I haven't merged this yet.
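For readers unfamiliar with the cast being discussed, this is the general shape of such an io.WriterTo assertion; drain is a hypothetical illustration, not the actual grpc-go code:

```go
package transport

import "io"

// drain illustrates the kind of interface assertion mentioned above: if the
// source also implements io.WriterTo, bytes can be streamed straight into dst;
// otherwise we fall back to io.Copy, which goes through an intermediate buffer.
func drain(dst io.Writer, src io.Reader) (int64, error) {
	if wt, ok := src.(io.WriterTo); ok {
		return wt.WriteTo(dst)
	}
	return io.Copy(dst, src)
}
```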
