Conversation

@benchislett (Collaborator) commented on Aug 14, 2025

Purpose

In V0, there was a mechanism to skip spec decoding when the request length exceeded the draft model's max_model_len. V1 has no such mechanism, so running a model like Llama 3.1 8B-Instruct with a standard EAGLE head (which only has 2048 positional embeddings) crashes on longer prompts.

With this change, the user is forced to pass --max-model-len 2048 in this scenario, acknowledging the limitation until the selective-speculation feature can be reintroduced in V1.
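
For illustration, here is a minimal sketch of how this limit surfaces through the offline API. The model names and speculative_config fields below are assumptions based on the PR description, not code from this diff:

from vllm import LLM

# Hypothetical setup matching the PR description: a Llama 3.1 8B target
# with an EAGLE draft head that only supports 2048 positions.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    speculative_config={
        "method": "eagle",
        "model": "yuhuili/EAGLE-LLaMA3.1-Instruct-8B",
        "num_speculative_tokens": 3,
    },
    # Without this, the deployment default for Llama 3.1 exceeds the draft
    # head's 2048-token limit; with this PR, startup fails fast with a
    # ValueError instead of crashing mid-request.
    max_model_len=2048,
)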

@github-actions commented

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@gemini-code-assist (bot, Contributor) left a comment

Code Review

This pull request introduces a validation check to prevent crashes when the draft model's max_model_len is smaller than the main model's, which is a necessary fix for V1 speculative decoding. The logic is sound, but I've identified a critical issue where a TypeError could occur if max_model_len happens to be None. I've provided a suggestion to make the check more robust and prevent this potential crash.
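
For context, this is the failure mode being flagged: in Python 3, ordering comparisons involving None raise TypeError. A standalone illustration, not code from the PR:

# An unguarded `draft_max_model_len < effective_max_model_len` would
# crash here, because None cannot be ordered against an int.
draft_max_model_len = None
try:
    if draft_max_model_len < 2048:
        pass
except TypeError as exc:
    print(exc)  # '<' not supported between instances of 'NoneType' and 'int'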

@ekagra-ranjan (Contributor) left a comment

LGTM

@ywang96 (Member) left a comment

Sorry for the delayed review - LGTM but I left a nit

Comment on lines +1314 to +1334
# Make sure the draft model's max_model_len is not less than
# the deployment's max_model_len.
# In V1 there is no way to disable requests when the sequence length
# exceeds the draft model's max_model_len, which can lead to crashes.
effective_max_model_len = self.max_model_len
if effective_max_model_len is None:
    effective_max_model_len = model_config.max_model_len
if use_v1 and speculative_config is not None and \
        effective_max_model_len is not None and \
        speculative_config.draft_model_config is not None and \
        speculative_config.draft_model_config.max_model_len is not None:
    draft_max_model_len = \
        speculative_config.draft_model_config.max_model_len
    if draft_max_model_len < effective_max_model_len:
        raise ValueError(
            "The draft model config's max_model_len "
            f"({draft_max_model_len}) "
            "is less than the deployment's max_model_len "
            f"({effective_max_model_len}). "
            "--max-model-len should be decreased to match.")

@ywang96 (Member) commented on the diff

Nit - I think it makes more sense to have this code inside create_speculative_config - WDYT?
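
A rough, self-contained sketch of what that relocation might look like. Everything here other than the name create_speculative_config is a hypothetical stand-in, since the surrounding class is not shown in this PR:

from dataclasses import dataclass
from typing import Optional


@dataclass
class _DraftModelConfig:  # hypothetical stand-in for the draft ModelConfig
    max_model_len: Optional[int]


def create_speculative_config(
    target_max_model_len: int,
    draft_model_config: Optional[_DraftModelConfig],
) -> Optional[_DraftModelConfig]:
    """Validate the draft config at creation time, per the reviewer's nit."""
    if (draft_model_config is not None
            and draft_model_config.max_model_len is not None
            and draft_model_config.max_model_len < target_max_model_len):
        raise ValueError(
            f"The draft model's max_model_len "
            f"({draft_model_config.max_model_len}) is less than the "
            f"deployment's max_model_len ({target_max_model_len}). "
            "--max-model-len should be decreased to match.")
    return draft_model_config

Placing the guard there keeps all speculative-config validation in one spot, rather than spreading it across the engine-args path.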

@ywang96 added the "ready" label (ONLY add when PR is ready to merge/full CI is needed) on Sep 3, 2025
@benchislett (Collaborator, Author) commented

Closing. Prefer #24662
