
Conversation

@daniel-salib
Contributor

@daniel-salib daniel-salib commented Feb 25, 2025

Add "vllm:num_requests_total" metric for scheduler state, combining the totals from "vllm:num_requests_running" and "vllm:num_requests_waiting" into a single metric.

This PR introduces a new metric, vllm:num_requests_total, which tracks the total number of requests running or waiting in the scheduler. This metric provides a more comprehensive view of the scheduler's state and helps with monitoring and debugging.
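The combined gauge can be sketched as follows. This is an illustrative stand-in, not vLLM's actual implementation: `_Gauge`, `StatLogger`, and the `SchedulerStats` fields shown here are hypothetical stand-ins modeled on the names visible in the diff below.

```python
from dataclasses import dataclass


@dataclass
class SchedulerStats:
    """Snapshot of scheduler state (hypothetical minimal version)."""
    num_running_reqs: int
    num_waiting_reqs: int


class _Gauge:
    """Minimal stand-in for a prometheus_client Gauge (illustration only)."""

    def __init__(self) -> None:
        self.value = 0.0

    def set(self, value: float) -> None:
        self.value = float(value)


class StatLogger:
    """Updates per-state gauges plus the new combined total."""

    def __init__(self) -> None:
        self.gauge_scheduler_running = _Gauge()
        self.gauge_scheduler_waiting = _Gauge()
        self.gauge_scheduler_total = _Gauge()

    def log(self, scheduler_stats: SchedulerStats) -> None:
        self.gauge_scheduler_running.set(scheduler_stats.num_running_reqs)
        self.gauge_scheduler_waiting.set(scheduler_stats.num_waiting_reqs)
        # The new metric: running + waiting in one gauge.
        self.gauge_scheduler_total.set(scheduler_stats.num_running_reqs +
                                       scheduler_stats.num_waiting_reqs)


logger = StatLogger()
logger.log(SchedulerStats(num_running_reqs=3, num_waiting_reqs=2))
print(logger.gauge_scheduler_total.value)  # 5.0
```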

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, which executes a small, essential subset of CI tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the v1 label Feb 25, 2025
```python
def log(self, scheduler_stats: SchedulerStats,
        iteration_stats: IterationStats):
    """Log to prometheus."""
    self.gauge_scheduler_running.set(scheduler_stats.num_running_reqs)
    self.gauge_scheduler_total.set(scheduler_stats.num_running_reqs +
                                   scheduler_stats.num_waiting_reqs)
```
Contributor

It seems scheduler_stats are collected asynchronously, especially when multiprocessing is involved. For a concurrent-request counter to work effectively as a load-balancing signal, recency matters a lot. Would it be better to track the concurrency count (the number of active HTTP requests) directly at the HTTP service level?

@mergify

mergify bot commented Feb 26, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @daniel-salib.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Feb 26, 2025
@daniel-salib daniel-salib changed the title add num_requests_total metric to track total number of requests add num_concurrency_requests metric to track concurrent requests running/waiting Feb 26, 2025
@mergify mergify bot removed the needs-rebase label Feb 26, 2025
@youngkent
Contributor

This approach uses AsyncLLM._log_stats to trigger metrics logging, which delays reporting of an accurate concurrent request count. For this load-balancing counter to work well, we need to make sure it is reflected in real time.
I was thinking of tracking the concurrent HTTP request count in the HTTP (api_server) layer without involving the engine, but I can see the Prometheus logger is not available in the HTTP layer. Another idea is to add a new HTTP endpoint like /load that just returns the concurrency count. Wdyt?
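The api_server-layer idea could be sketched as below. This is a hypothetical illustration, not vLLM code: `ConcurrencyTracker` and its `track`/`load` names are assumptions; a `/load` endpoint would simply return `tracker.load` as its response body.

```python
import threading
from contextlib import contextmanager


class ConcurrencyTracker:
    """Real-time counter of in-flight HTTP requests, kept entirely in the
    api_server layer so it never waits on the engine's stats loop."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._in_flight = 0

    @contextmanager
    def track(self):
        # Increment on request entry, decrement on exit (even on error).
        with self._lock:
            self._in_flight += 1
        try:
            yield
        finally:
            with self._lock:
                self._in_flight -= 1

    @property
    def load(self) -> int:
        with self._lock:
            return self._in_flight


tracker = ConcurrencyTracker()
with tracker.track():  # wraps one request's handler
    print(tracker.load)  # 1
print(tracker.load)  # 0
```

Wrapping each request handler in `tracker.track()` (e.g. via ASGI middleware) keeps the count exact at any instant, which is what a load balancer polling `/load` would need.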
cc: @simon-mo @WoosukKwon

@daniel-salib
Contributor Author

Makes sense! I adopted that approach in a new PR:
#13950
