
Conversation

@jkosh44 (Contributor) commented Dec 8, 2025

The metrics-rs-integration feature allows users to automatically export metrics via the metrics.rs crate. Previously, only base metrics were included; derived metrics were excluded. This commit updates the integration to include derived metrics as well.

I'm not 100% convinced that this is a good idea: users could just reconstruct the derived metrics on their own wherever those metrics end up being exported. However, the implementation was easy enough that I thought I'd use this PR as a platform to discuss the idea.
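
For context, a derived metric is computed from base metrics rather than read directly from the runtime. A minimal sketch of the distinction, with made-up values (the `mean` helper mirrors the one that appears in the diff further down):

```rust
use std::time::Duration;

/// Mean of a total duration over a count; returns zero when the count is
/// zero (e.g. an idle interval).
fn mean(total: Duration, count: u64) -> Duration {
    u32::try_from(count)
        .ok()
        .and_then(|c| total.checked_div(c))
        .unwrap_or_default()
}

fn main() {
    // Base metrics (made-up values): read directly from the runtime.
    let total_poll_duration = Duration::from_millis(120);
    let total_poll_count = 40u64;

    // Derived metric: computed from the base metrics above.
    let mean_poll_duration = mean(total_poll_duration, total_poll_count);
    assert_eq!(mean_poll_duration, Duration::from_millis(3));
}
```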

@rcoh requested a review from @arielb1 on December 10, 2025.
@arielb1 (Contributor) left a comment


1. Could you add a test to https://github.com/tokio-rs/tokio-metrics/blob/main/tests/auto_metrics.rs ?
2. Is there a way of making `test_no_fields_missing` work for derived metrics? I suspect there is not, since they are methods and not fields, but if there is it would be excellent.

@jkosh44 (Contributor, Author) commented Dec 11, 2025

> 1. Could you add a test to https://github.com/tokio-rs/tokio-metrics/blob/main/tests/auto_metrics.rs ?
> 2. Is there a way of making `test_no_fields_missing` work for derived metrics? I suspect there is not, since they are methods and not fields, but if there is it would be excellent.

I'll get on that this week/weekend. I was holding off on writing tests until I got buy-in that it was a good idea, but it sounds like we're interested in merging this (once tests are added)?

@arielb1 (Contributor) commented Dec 11, 2025

> I'll get on that this week/weekend. I was holding off on writing tests until I got buy-in that it was a good idea, but it sounds like we're interested in merging this (once tests are added)?

Yeah, we will be interested in merging this. Great PR!

@jkosh44 (Contributor, Author) commented Dec 11, 2025

> Is there a way of making `test_no_fields_missing` work for derived metrics? I suspect there is not, since they are methods and not fields, but if there is it would be excellent.

I have two thoughts here. The first is that we could maintain const arrays in the impls of TaskMetrics and RuntimeMetrics that contain the names of all derived-metric methods. For example:

```rust
impl RuntimeMetrics {
    const DERIVED_METRICS: &[&str] = &["busy_ratio"];
    const DERIVED_UNSTABLE_METRICS: &[&str] = &["mean_polls_per_park"];

    // ...
}
```

Then we could check against those arrays in a test to make sure we aren't missing a derived metric (see the sketch below). I don't personally like this approach, because we'd still have to remember to manually update the arrays whenever we add a new method. The only real benefit is that the arrays are physically closer to the methods, so it would be harder to forget.
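
For concreteness, a hedged sketch of what that test could look like; the `EXPORTED` list here is a hypothetical stand-in for the names the metrics-rs integration actually exports:

```rust
struct RuntimeMetrics;

impl RuntimeMetrics {
    const DERIVED_METRICS: &[&str] = &["busy_ratio"];
    const DERIVED_UNSTABLE_METRICS: &[&str] = &["mean_polls_per_park"];
}

/// Hypothetical stand-in for the names the integration exports; in the real
/// crate these would come from the metrics-rs integration itself.
const EXPORTED: &[&str] = &["busy_ratio", "mean_polls_per_park"];

#[test]
fn no_derived_metrics_missing() {
    // Every name in the const arrays must show up in the exported set.
    let all = RuntimeMetrics::DERIVED_METRICS
        .iter()
        .chain(RuntimeMetrics::DERIVED_UNSTABLE_METRICS);
    for name in all {
        assert!(
            EXPORTED.contains(name),
            "derived metric `{name}` is not exported"
        );
    }
}
```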

The second idea builds off the first. We could write an attribute proc macro that walks the AST of a struct and builds a list of all method names, then use those lists the same way as in the first idea. It's a bit heavyweight, but we'd only have to define it once, and then it wouldn't require further maintenance. Unless, that is, we later wanted to add methods to RuntimeMetrics or TaskMetrics that aren't derived metrics; then we'd need a way to ignore those methods.

Any thoughts?

@arielb1 (Contributor) commented Dec 11, 2025

> Any thoughts?

Since we already have a macro that defines the direct metrics (around `impl RuntimeMetrics {`), a declarative macro that defines the derived metrics sounds fine.

I would not use a proc macro; this is too small a job for one.
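
For illustration, a hedged sketch of what such a declarative macro might look like, simplified to a single array; the struct, fields, and metric name are made up, and this is not the macro tokio-metrics ended up with:

```rust
use std::time::Duration;

struct RuntimeMetrics {
    total_busy_duration: Duration,
    total_park_count: u64,
}

// Defines the derived-metric methods and, from the same token list, a const
// array of their names, so the array can never drift out of sync.
macro_rules! derived_metrics {
    ($ty:ident { $( $(#[$meta:meta])* pub fn $name:ident(&$self_:ident) -> $ret:ty $body:block )* }) => {
        impl $ty {
            $( $(#[$meta])* pub fn $name(&$self_) -> $ret $body )*

            pub const DERIVED_METRICS: &[&str] = &[$(stringify!($name)),*];
        }
    };
}

derived_metrics!(RuntimeMetrics {
    /// Mean busy duration per park (made-up metric for illustration).
    pub fn mean_busy_duration_per_park(&self) -> Duration {
        u32::try_from(self.total_park_count)
            .ok()
            .and_then(|c| self.total_busy_duration.checked_div(c))
            .unwrap_or_default()
    }
});

fn main() {
    // The name list is generated alongside the method definition.
    assert_eq!(RuntimeMetrics::DERIVED_METRICS, &["mean_busy_duration_per_park"]);
}
```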

@jkosh44 (Contributor, Author) commented Dec 12, 2025

> a declarative macro that defines the derived metrics sounds fine.

I just pushed a commit with this approach.

> Could you add a test to https://github.com/tokio-rs/tokio-metrics/blob/main/tests/auto_metrics.rs ?

I added tests for both derived runtime and task metrics. I couldn't really figure out a way to make the values deterministic, so I just tested that the values existed and were nonzero.
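
A hedged sketch of the shape of those assertions; the `snapshot` stub and the metric value are made-up stand-ins for however the test actually harvests the exported metrics:

```rust
/// Stub standing in for however the test collects the exported metrics as
/// (name, value) pairs; the real test would read them from the recorder.
fn snapshot() -> Vec<(String, f64)> {
    vec![("mean_poll_duration".to_string(), 0.000_003)]
}

#[test]
fn derived_metrics_exist_and_are_nonzero() {
    let metrics = snapshot();
    let (_, value) = metrics
        .iter()
        .find(|(name, _)| name.as_str() == "mean_poll_duration")
        .expect("derived metric `mean_poll_duration` was not exported");
    // The exact value is nondeterministic, so only assert it's nonzero.
    assert!(*value != 0.0);
}
```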

@jkosh44 (Contributor, Author) commented Dec 12, 2025


Ignoring whitespace while reviewing makes the diff much more pleasant.
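
(For example, `git diff -w` locally, or GitHub's "Hide whitespace" option, which appends `?w=1` to the diff URL.)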

On the diff:

```rust
    pub fn mean_slow_poll_duration(&self) -> Duration {
        mean(self.total_slow_poll_duration, self.total_slow_poll_count)
    }
/// The mean duration that tasks spent waiting to be executed after awakening.
```
@arielb1 (Contributor): Your indentation is inconsistent between this and the previous block.

@jkosh44 (Contributor, Author): Do you want me to fix that in a follow-up?

@jkosh44 (Contributor, Author): Ah, never mind, I see that you just did that.

@arielb1 merged commit 08394ed into tokio-rs:main on Dec 12, 2025. 7 checks passed.
