[V0 Deprecation] Remove more V0 tests #25117
Conversation
Signed-off-by: Woosuk Kwon <[email protected]>
Code Review
This pull request removes a significant number of tests related to the deprecated V0 engine, which is a good cleanup effort. The changes in the CI configuration and the removal of V0-specific test files seem correct. However, I've identified one critical issue where a test file that appears to cover both V0 and V1 engines is being removed, potentially reducing test coverage for the V1 engine. Please see the detailed comment.
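When a test file exercises both the V0 and V1 engines, one way to address the coverage concern is to skip only the V0 paths rather than delete the whole file. A minimal sketch of that pattern, assuming the `VLLM_USE_V1` environment flag used during the V0-to-V1 transition (the test class and method names here are hypothetical, not from the PR):

```python
import os
import unittest

# Assumption: VLLM_USE_V1 selects the engine during the V0->V1 transition.
# Gating V0-only tests on it keeps the shared file, and its V1 coverage,
# in the suite while the deprecated V0 paths are skipped.
USE_V1 = os.environ.get("VLLM_USE_V1", "1") == "1"


class EngineSmokeTest(unittest.TestCase):
    @unittest.skipIf(USE_V1, "V0 engine is deprecated; only V1 is exercised")
    def test_v0_engine(self) -> None:
        # V0-specific assertions would go here.
        self.assertTrue(True)

    def test_v1_engine(self) -> None:
        # V1 coverage stays active regardless of the flag.
        self.assertTrue(True)
```

This keeps the file discoverable by the test runner, so dropping the V0 paths later is a one-line deletion instead of a file restore.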
…litPR into model_register

* 'model_register' of https://github.com/dsxsteven/vllm_splitPR: (138 commits)
  Retrieve `sliding_window` from text config in Gemma3 MM (vllm-project#25085)
  [Docs] Fix API Reference (vllm-project#25140)
  [Kernel] Better inf handling for grouped topk cu (vllm-project#24886)
  [CLI] Use streaming in CLI chat and completion commands (vllm-project#23769)
  [benchmark] add peak throughput metrics and plot (vllm-project#23867)
  [Spec Decode] Efficient padded speculation (vllm-project#24539)
  [V0 Deprecation] Remove more V0 tests (vllm-project#25117)
  [EPLB] Add EPLB support for hunyuan_v1 (vllm-project#23078)
  [XPU] Whisper model support on XPU Platform (vllm-project#25123)
  Mark prompt logprobs as incompatible with prompt embeds at API level (vllm-project#25077)
  [Model] enable data parallel for InternVL vision encoder (vllm-project#23909)
  [Kernels] Overlap shared experts with combine instead of dispatch (vllm-project#24254)
  [Bugfix][Qwen3-Next] add prefixes to shared_expert in qwen3-next and mlp in qwen2moe to successfully load ignored params in quantized models (vllm-project#24960)
  [Core][MM] Cleanup `MultiModalCache` (vllm-project#25006)
  [Docs] Clean up the contributing README (vllm-project#25099)
  [MM Encoder] Apply DP ViT for Qwen3-VL model series (vllm-project#24955)
  [Kernels] Enable DeepGEMM by default (vllm-project#24462)
  [V0 Deprecation] Skip PP test (vllm-project#25128)
  [V0 Deprecation] Remove misc V0 tests (vllm-project#25118)
  [V0 Deprecation] Remove V0 Tracing & Metrics tests (vllm-project#25115)
  ...
Clean up obsoleted vLLM tests (#163383)

These tests were removed in vllm-project/vllm#25117 and vllm-project/vllm#22772, so they were failing in trunk after the latest pin commit update.

Pull Request resolved: #163383
Approved by: https://github.com/wdvr, https://github.com/seemethere, https://github.com/malfet
(cherry picked from commit a31acf3)
Co-authored-by: Huy Do <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]> Signed-off-by: xuebwang-amd <[email protected]>
No description provided.