
Conversation

@ziruiliu (Contributor) commented Nov 3, 2025

Purpose

In the LMCache engine, PR LMCache/LMCache#1835 implements the interface `get_block_ids_with_load_errors`, which was introduced in #19330. With this change, vLLM is able to handle failed cache retrievals from LMCache.
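
For context, a minimal sketch of the vLLM-side hook this PR wires up. The class name matches vLLM's KV connector base, but the body and docstring below are simplified assumptions, not the actual source:

```python
# Simplified sketch of the connector hook from #19330 (an assumption,
# not the real vLLM code): after a load step, the connector reports the
# IDs of KV blocks whose retrieval failed so the scheduler can
# invalidate them and reschedule the affected requests.
class KVConnectorBase_V1:
    def get_block_ids_with_load_errors(self) -> set[int]:
        """Return the set of block IDs that failed to load (empty if none)."""
        return set()
```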

Test Plan

See the PR comment at LMCache/LMCache#1835 (comment).
In the test, we inject a fault into cache retrieval after the cache lookup returns OK. vLLM is expected to receive the failed block IDs from LMCache and reschedule the request.
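
To illustrate the kind of fault injection used (all names below are hypothetical stand-ins, not real LMCache APIs; the actual test hooks are described in the linked comment):

```python
# Illustrative-only fault injection (hypothetical names, not real
# LMCache APIs): monkeypatch the retrieval path so it fails even though
# lookup said OK, then check that the failed block IDs are reported.

class FakeEngine:
    """Stand-in for a cache engine with a retrieve() method."""

    def __init__(self) -> None:
        self.failed_blocks: set[int] = set()

    def retrieve(self, block_id: int) -> bytes:
        return b"kv-bytes"

    def get_block_ids_with_load_errors(self) -> set[int]:
        return self.failed_blocks


engine = FakeEngine()


def faulty_retrieve(block_id: int) -> bytes:
    # Record the block as failed instead of returning its data.
    engine.failed_blocks.add(block_id)
    raise RuntimeError(f"injected retrieval fault for block {block_id}")


engine.retrieve = faulty_retrieve  # inject the fault

try:
    engine.retrieve(7)
except RuntimeError:
    pass

assert engine.get_block_ids_with_load_errors() == {7}
```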

Test Result

Reschedule happened as expected

 [scheduler.py:1536] Recovered from KV load failure: 1 request(s) rescheduled (64 tokens affected).

Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

github-actions bot commented Nov 3, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small and essential subset of CI tests to quickly catch errors.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run full CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

@mergify mergify bot added the kv-connector label Nov 3, 2025
@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request enables error handling for failed KV cache loads from LMCache by implementing the get_block_ids_with_load_errors method in the LMCacheConnectorV1. The implementation correctly provides a fallback for older versions of LMCache that do not support this feature. My review includes a suggestion to improve the robustness of the implementation by checking if the method is callable, which prevents potential TypeError exceptions if the attribute exists but is not a method. This is particularly important for ensuring stability when interacting with versioned dependencies.
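
A minimal sketch of the fallback pattern the review describes, assuming a hypothetical `_lmcache_engine` attribute name (see the PR diff for the actual implementation):

```python
class LMCacheConnectorV1Sketch:
    """Illustrative wrapper only; not the real vLLM connector class."""

    def __init__(self, lmcache_engine) -> None:
        self._lmcache_engine = lmcache_engine

    def get_block_ids_with_load_errors(self) -> set[int]:
        # Delegate when the engine exposes the interface; checking
        # callable() avoids a TypeError if the attribute exists but is
        # not a method on older LMCache versions.
        fn = getattr(self._lmcache_engine,
                     "get_block_ids_with_load_errors", None)
        if callable(fn):
            return fn()
        # Older LMCache versions lack this method; report no load errors.
        return set()
```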

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: ziruiliu <[email protected]>
@ApostaC (Collaborator) left a comment


LGTM!

@ApostaC ApostaC enabled auto-merge (squash) November 3, 2025 18:07
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Nov 3, 2025
Signed-off-by: Zirui Liu <[email protected]>
auto-merge was automatically disabled November 4, 2025 10:04

Head branch was pushed to by a user without write access

@ziruiliu (Contributor, Author) commented

Hi @NickLucche, could you please take a look at this change? It is quite straightforward: it enables LMCache to report the block IDs that failed retrieval.

@NickLucche NickLucche merged commit d143152 into vllm-project:main Nov 12, 2025
51 checks passed
@NickLucche (Collaborator) commented

Thanks for contributing @ziruiliu !

geodavic pushed a commit to geodavic/vllm that referenced this pull request Nov 16, 2025
…ector (vllm-project#27978)

Signed-off-by: Zirui Liu <[email protected]>
Signed-off-by: ziruiliu <[email protected]>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Signed-off-by: George D. Torres <[email protected]>
bwasti pushed a commit to bwasti/vllm that referenced this pull request Nov 17, 2025
…ector (vllm-project#27978)

Signed-off-by: Zirui Liu <[email protected]>
Signed-off-by: ziruiliu <[email protected]>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Signed-off-by: Bram Wasti <[email protected]>
devpatelio pushed a commit to SumanthRH/vllm that referenced this pull request Nov 29, 2025
…ector (vllm-project#27978)

Signed-off-by: Zirui Liu <[email protected]>
Signed-off-by: ziruiliu <[email protected]>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Nicolò Lucchesi <[email protected]>