@etki etki commented Apr 25, 2025

What is this?

This PR contains only benchmarks for some basic APIs, MeterFilter implementations, and a couple of Tags methods: no new user-visible symbols and no changes to production code. It goes a bit further by varying the inputs, so that different usage patterns are evaluated and "better at this count, yet worse at that count" situations are less likely to go unnoticed (assuming that whoever works on improving the performance of a specific code path runs these benchmarks before and after the changes).

Is it really needed?

I'm preparing other PRs with some non-dramatic updates to the filters and tags code (also no new symbols or functionality, only changes to the existing code that preserve its present behavior), so these benchmarks had to emerge anyway. It's not up to me to decide whether the project needs them, but they came about naturally and shouldn't cost anyone any resources except the time of people who explicitly run them, so I don't see an obstacle here.

But at the same time I'm just having fun, so I don't have a problem with any requested changes or removals.

There are already benchmarks for tags!

Yes, but: 1) I only noticed them while preparing this PR, and 2) they use static input (which also comes with a fixed size). I don't have the energy to check the assembly, but the JIT could theoretically infer wrong assumptions from that; I also think dynamic input size is a must here, since the whole sort-the-array story is non-linear from the very beginning. I'm not sure what to do with the existing benchmarks, but you can guide me.
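The point about dynamic input sizes can be sketched as a fixture generated per size rather than one static array. This is a minimal illustration in plain Java (class and method names are hypothetical, not Micrometer's actual code); in a JMH benchmark the `size` would typically come from an `@Param`-annotated field on a `@State` class:

```java
import java.util.Random;

// Hypothetical fixture builder: produces a key/value tag array of the requested
// size, so benchmarks can be parameterized over sizes instead of relying on one
// static, fixed-size constant the JIT could specialize against.
final class TagFixtures {

    static String[] generateTags(int size, long seed) {
        Random random = new Random(seed); // fixed seed keeps runs reproducible
        String[] tags = new String[size * 2];
        for (int i = 0; i < size; i++) {
            // keys are drawn from a range wider than `size` to allow misses
            tags[2 * i] = "key" + random.nextInt(size * 4);
            tags[2 * i + 1] = "value" + i;
        }
        return tags;
    }
}
```

With a seeded `Random`, two revisions being compared see identical inputs, so differences in results come from the code, not the data.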

Aren't these benchmarks too sophisticated?

They are sophisticated, but not "too" sophisticated for my taste; they aren't strictly necessary, but since they come for free and do a more thorough matrix evaluation, I see that as a benefit. I'm farming my XP here, though, so if there's a concern about checking these in, I'm OK with that.

Any interesting results?

These are just baseline measurements, i.e. they exist so that two different revisions can be compared. There are some hints at situations that could be improved. Selecting the biggest offender (measured on an Intel N100 @ 2 GHz) for dramatic effect:

| name | ignored tags | tags in identifier | result | unit |
|---|---|---|---|---|
| config.filter.MeterFilterIgnoreTagsBenchmark.baseline | 64 | 64 | 8439.085 ± 7.119 | ns/op |
| config.filter.MeterFilterIgnoreTagsBenchmark.baseline:instructions:u | 64 | 64 | 38896.080 | #/op |

What's happening here is just taking ~64 elements (there are many samples, most of which have 64 tags) and filtering out the ones that match a set of 64 keys. While it currently takes 8.4 µs, this could be an O(n) task executing in a couple of hundred nanoseconds. My forthcoming PRs will somewhat address things like that (to be honest, this is the biggest offender; other places are not as impressive), but getting to the best position would require #6113.
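The O(n) version of that task amounts to a single pass over the tags with the ignored keys held in a hash set, so each membership check is O(1). A minimal sketch, using plain `Map.Entry` pairs rather than Micrometer's `Tag` type (names are illustrative, not the library's actual API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical helper illustrating the linear-time shape of the filtering step:
// one pass, one O(1) set lookup per tag, instead of rescanning the key list.
final class TagFilter {

    static List<Map.Entry<String, String>> ignoreTags(
            List<Map.Entry<String, String>> tags, Set<String> ignoredKeys) {
        List<Map.Entry<String, String>> kept = new ArrayList<>(tags.size());
        for (Map.Entry<String, String> tag : tags) {
            if (!ignoredKeys.contains(tag.getKey())) {
                kept.add(tag); // keep only tags whose key is not ignored
            }
        }
        return kept;
    }
}
```

With 64 tags and 64 ignored keys this is 64 hash lookups total, which is the kind of work that fits in hundreds of nanoseconds rather than microseconds.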

@etki etki force-pushed the filter-benchmarks branch 5 times, most recently from bb73603 to 6e9d64d Compare April 25, 2025 03:25
@jonatan-ivanov jonatan-ivanov added the performance Issues related to general performance label Apr 28, 2025
@etki etki mentioned this pull request Apr 29, 2025
@shakuzen shakuzen left a comment


Thank you for all the work on this. It's far more sophisticated than our existing benchmarks. It generally looks good to me, but I've left some review comments. Could you rebase on main when you have a chance and look at the review comments? Then I think we're ready to merge, if @jonatan-ivanov doesn't have any objections.

@shakuzen

@etki following up to see if you've had a chance to check my review.

@shakuzen

I've addressed the review comments and rebased on the latest main.

@shakuzen shakuzen enabled auto-merge (squash) August 20, 2025 07:53
@shakuzen shakuzen merged commit 5e1ae7d into micrometer-metrics:main Aug 20, 2025
11 checks passed
