latencypredictor: improve TPOT training accuracy #2509
Merged
k8s-ci-robot merged 1 commit into kubernetes-sigs:main on Mar 6, 2026
Conversation
✅ Deploy Preview for gateway-api-inference-extension ready!
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: BenjaminBraunDev, kaushikmitr
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Details: Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
RyanRosario pushed a commit to RyanRosario/gateway-api-inference-extension that referenced this pull request on Mar 9, 2026
BizerNotNull pushed a commit to BizerNotNull/gateway-api-inference-extension that referenced this pull request on Mar 15, 2026
elevran pushed a commit to llm-d/llm-d-inference-scheduler that referenced this pull request on Apr 23, 2026
This pull request introduces significant improvements to the TPOT (Time Per Output Token) latency prediction and training pipeline. The main changes shift TPOT training to occur once per request using a more accurate average, simplify token count handling, and update configuration options. These updates improve the reliability and maintainability of TPOT latency prediction and training.
TPOT Training and Prediction Pipeline Improvements:
- TPOT training now happens once per request in ResponseComplete, using the average TPOT calculated as (e2e latency - TTFT) divided by (tokens - 1), resulting in more accurate training data and simpler logic (see the sketch after this list). (pkg/epp/framework/plugins/scheduling/scorer/predictedlatency/requestcontrol_hooks.go)
- Per-token TPOT training has been removed from processTokenForLatencyPrediction, reducing complexity and avoiding redundant training entries. (pkg/epp/framework/plugins/scheduling/scorer/predictedlatency/latencypredictor_helper.go) [1] [2]
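To make the formula concrete, here is a minimal Go sketch of the once-per-request calculation described above: average TPOT = (e2e latency - TTFT) / (tokens - 1). The requestTiming struct and avgTPOT helper are names invented for this example, not the actual identifiers in requestcontrol_hooks.go.

```go
package main

import (
	"fmt"
	"time"
)

// requestTiming holds the per-request measurements available at ResponseComplete.
type requestTiming struct {
	E2ELatency      time.Duration // total time from request start to response completion
	TTFT            time.Duration // time to first token
	TokensGenerated int           // total output tokens in the response
}

// avgTPOT returns the average time per output token for one request:
// (e2e latency - TTFT) / (tokens - 1). Responses with fewer than two tokens
// have no decode interval, so no training sample would be produced.
func avgTPOT(t requestTiming) (time.Duration, bool) {
	if t.TokensGenerated < 2 {
		return 0, false
	}
	decode := t.E2ELatency - t.TTFT
	return decode / time.Duration(t.TokensGenerated-1), true
}

func main() {
	tpot, ok := avgTPOT(requestTiming{
		E2ELatency:      1200 * time.Millisecond,
		TTFT:            200 * time.Millisecond,
		TokensGenerated: 101,
	})
	fmt.Println(tpot, ok) // 10ms true: (1200ms - 200ms) spread over 100 decode steps
}
```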
Configuration and Data Handling Updates:
- A new TPOT_ZERO_TOKEN_COUNT setting in Settings controls whether num_tokens_generated is set to zero during TPOT training data preparation. (latencypredictor/training_server.py) [1] [2]
- The request-control hooks now use the latencypredictorasync package, ensuring proper integration with asynchronous latency prediction (sketched below). (pkg/epp/framework/plugins/scheduling/scorer/predictedlatency/requestcontrol_hooks.go)
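As an illustration of the data-handling path, the sketch below shows a single per-request TPOT sample being handed to an asynchronous predictor client; zeroing of num_tokens_generated happens server-side when TPOT_ZERO_TOKEN_COUNT is enabled. The trainingEntry struct and asyncPredictor interface are assumptions made for this example and do not reflect the actual types in the latencypredictorasync package.

```go
package main

import (
	"context"
	"fmt"
)

// trainingEntry mirrors the kind of fields a TPOT training sample might carry.
type trainingEntry struct {
	ActualTPOTMillis   float64 // average TPOT measured at ResponseComplete
	NumTokensGenerated int     // may be zeroed by the training server when TPOT_ZERO_TOKEN_COUNT is on
}

// asyncPredictor is the non-blocking surface the request-control hook talks to.
type asyncPredictor interface {
	AddTrainingEntry(ctx context.Context, e trainingEntry) error
}

// logOnlyPredictor is a stand-in implementation that just logs the sample.
type logOnlyPredictor struct{}

func (logOnlyPredictor) AddTrainingEntry(_ context.Context, e trainingEntry) error {
	fmt.Printf("enqueued TPOT sample: %+v\n", e)
	return nil
}

func main() {
	var p asyncPredictor = logOnlyPredictor{}
	// One sample per completed request, rather than one per generated token.
	_ = p.AddTrainingEntry(context.Background(), trainingEntry{
		ActualTPOTMillis:   10.0,
		NumTokensGenerated: 101,
	})
}
```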