[Bugfix][V1] Raise ValueError when draft max model len is too small #22935
Conversation
Signed-off-by: Benjamin Chislett <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a reduced set of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Code Review
This pull request introduces a validation check to prevent crashes when the draft model's max_model_len is smaller than the main model's, which is a necessary fix for V1 speculative decoding. The logic is sound, but I've identified a critical issue where a TypeError could occur if max_model_len happens to be None. I've provided a suggestion to make the check more robust and prevent this potential crash.
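As a small illustration of the failure mode the review points at (this snippet is not from the PR; it just reproduces standard Python behavior):

```python
# In Python 3, ordering comparisons involving None raise TypeError, so an
# unguarded `draft_max_model_len < max_model_len` would crash engine
# startup instead of producing a clear validation error.
draft_max_model_len = None
max_model_len = 4096
try:
    if draft_max_model_len < max_model_len:
        raise ValueError("draft max_model_len too small")
except TypeError as exc:
    print(exc)  # '<' not supported between instances of 'NoneType' and 'int'
```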
Signed-off-by: Benjamin Chislett <[email protected]>
ekagra-ranjan
left a comment
LGTM
ywang96
left a comment
Sorry for the delayed review - LGTM but I left a nit
```python
# Make sure the draft model's max_model_len is not less than
# the deployment's max_model_len.
# In V1 there is no way to disable requests when the sequence length
# exceeds the draft model's max_model_len, which can lead to crashes.
effective_max_model_len = self.max_model_len
if effective_max_model_len is None:
    effective_max_model_len = model_config.max_model_len
if use_v1 and speculative_config is not None and \
        effective_max_model_len is not None and \
        speculative_config.draft_model_config is not None and \
        speculative_config.draft_model_config.max_model_len is not None:
    draft_max_model_len = \
        speculative_config.draft_model_config.max_model_len
    if draft_max_model_len < effective_max_model_len:
        raise ValueError(
            "The draft model config's max_model_len "
            f"({draft_max_model_len}) "
            "is less than the deployment's max_model_len "
            f"({effective_max_model_len}). "
            "--max-model-len should be decreased to match.")
```
Nit - I think it makes more sense to have this code inside create_speculative_config - WDYT?
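If the check did move there, it might look something like the sketch below; this is a hypothetical helper, not the actual vLLM create_speculative_config code, and the names and signature are assumptions:

```python
# Hypothetical sketch, not vLLM source: the same validation factored into a
# helper that create_speculative_config could call once the draft model
# config has been built.
from typing import Optional


def validate_draft_max_model_len(
        draft_max_model_len: Optional[int],
        deployment_max_model_len: Optional[int]) -> None:
    """Raise if the draft model cannot cover the deployment's context length."""
    if draft_max_model_len is None or deployment_max_model_len is None:
        return  # nothing to validate when either limit is unknown
    if draft_max_model_len < deployment_max_model_len:
        raise ValueError(
            f"The draft model config's max_model_len ({draft_max_model_len}) "
            f"is less than the deployment's max_model_len "
            f"({deployment_max_model_len}). "
            f"--max-model-len should be decreased to match.")
```

Keeping the guard next to where the draft config is constructed would also concentrate the None handling in one place instead of re-deriving the effective max_model_len in EngineArgs.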
Closing. Prefer #24662
Purpose
In V0, there was a mechanism to skip spec decoding when the request length is longer than the draft model's max_model_len. There is no such mechanism in V1, leading to crashes when running a model like Llama 3.1 8B-Instruct with a standard EAGLE head, which can only handle 2048 positional embeddings.
This caused crashes when running longer prompts. With this change, the user is forced to pass `--max-model-len 2048` in this scenario, acknowledging the limitation until the selective-speculation feature can be re-introduced into V1.
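For context, a hedged sketch of a launch that satisfies the new check; the model and draft-model names and the speculative_config fields below are illustrative assumptions, not something prescribed by this PR:

```python
from vllm import LLM

# Illustrative only: the key point is capping max_model_len at the EAGLE
# head's 2048 supported positions so the new validation passes.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    max_model_len=2048,
    speculative_config={
        "model": "yuhuili/EAGLE-LLaMA3.1-Instruct-8B",
        "num_speculative_tokens": 2,
    },
)
```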