
Conversation

@WoosukKwon
Collaborator

No description provided.

@WoosukKwon added the ready (ONLY add when PR is ready to merge/full CI is needed) label Sep 18, 2025
@mergify bot added the ci/build label Sep 18, 2025
Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request removes a significant number of tests related to the deprecated V0 engine, which is a good cleanup effort. The changes in the CI configuration and the removal of V0-specific test files seem correct. However, I've identified one critical issue where a test file that appears to cover both V0 and V1 engines is being removed, potentially reducing test coverage for the V1 engine. Please see the detailed comment.
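For context on the reviewer's concern, here is a minimal sketch (not part of this PR) of how a test file that exercises both engines could keep its V1 cases while the V0 paths are skipped rather than deleting the whole file. It assumes a pytest-based suite and vLLM's `VLLM_USE_V1` environment flag; the test name and body are placeholders, not actual vLLM tests.

```python
import os

import pytest

# V0 cases are skipped now that the V0 engine is deprecated; V1 coverage is kept.
ENGINE_VERSIONS = [
    pytest.param("0", marks=pytest.mark.skip(reason="V0 engine removed"), id="v0"),
    pytest.param("1", id="v1"),
]


@pytest.mark.parametrize("engine_version", ENGINE_VERSIONS)
def test_engine_selection(engine_version, monkeypatch):
    # vLLM reads VLLM_USE_V1 to choose the engine; this assertion merely checks
    # that the flag is plumbed through and stands in for the shared test body.
    monkeypatch.setenv("VLLM_USE_V1", engine_version)
    assert os.environ["VLLM_USE_V1"] == engine_version
```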

Signed-off-by: Woosuk Kwon <[email protected]>
@WoosukKwon enabled auto-merge (squash) September 18, 2025 02:46
@WoosukKwon disabled auto-merge September 18, 2025 05:05
@WoosukKwon merged commit 5c65a72 into main Sep 18, 2025
80 of 82 checks passed
@WoosukKwon deleted the woosuk/rm-more-v0-tests branch September 18, 2025 05:05
845473182 pushed a commit to dsxsteven/vllm_splitPR that referenced this pull request Sep 18, 2025
…litPR into model_register

* 'model_register' of https://github.com/dsxsteven/vllm_splitPR: (138 commits)
  Retrieve `sliding_window` from text config in Gemma3 MM (vllm-project#25085)
  [Docs] Fix API Reference (vllm-project#25140)
  [Kernel] Better inf handling for grouped topk cu (vllm-project#24886)
  [CLI] Use streaming in CLI chat and completion commands (vllm-project#23769)
  [benchmark] add peak throughput metrics and plot (vllm-project#23867)
  [Spec Decode] Efficient padded speculation (vllm-project#24539)
  [V0 Deprecation] Remove more V0 tests (vllm-project#25117)
  [EPLB] Add EPLB support for hunyuan_v1 (vllm-project#23078)
  [XPU] Whisper model support on XPU Platform (vllm-project#25123)
  Mark prompt logprobs as incompatible with prompt embeds at API level (vllm-project#25077)
  [Model] enable data parallel for InternVL vision encoder (vllm-project#23909)
  [Kernels] Overlap shared experts with combine instead of dispatch (vllm-project#24254)
  [Bugfix][Qwen3-Next] add prefixes to shared_expert in qwen3-next and mlp in qwen2moe to successfully load ignored params in quantized models (vllm-project#24960)
  [Core][MM] Cleanup `MultiModalCache` (vllm-project#25006)
  [Docs] Clean up the contributing README (vllm-project#25099)
  [MM Encoder] Apply DP ViT for Qwen3-VL model series (vllm-project#24955)
  [Kernels] Enable DeepGEMM by default (vllm-project#24462)
  [V0 Deprecation] Skip PP test (vllm-project#25128)
  [V0 Deprecation] Remove misc V0 tests (vllm-project#25118)
  [V0 Deprecation] Remove V0 Tracing & Metrics tests (vllm-project#25115)
  ...
debroy-rh pushed a commit to debroy-rh/vllm that referenced this pull request Sep 19, 2025
pytorchmergebot pushed a commit to pytorch/pytorch that referenced this pull request Sep 20, 2025
mansiag05 pushed a commit to mansiag05/pytorch that referenced this pull request Sep 22, 2025
cleonard530 pushed a commit to cleonard530/pytorch that referenced this pull request Sep 22, 2025
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
charlifu pushed a commit to ROCm/vllm that referenced this pull request Sep 25, 2025
dsashidh pushed a commit to dsashidh/pytorch that referenced this pull request Sep 26, 2025
pytorchbot pushed a commit to pytorch/pytorch that referenced this pull request Sep 30, 2025
They have been removed in vllm-project/vllm#25117 and vllm-project/vllm#22772, thus failing in trunk at the moment after the latest pin commit update

Pull Request resolved: #163383
Approved by: https://github.com/wdvr, https://github.com/seemethere, https://github.com/malfet

(cherry picked from commit a31acf3)
Camyll pushed a commit to pytorch/pytorch that referenced this pull request Sep 30, 2025
Clean up obsoleted vLLM tests (#163383)

They have been removed in vllm-project/vllm#25117 and vllm-project/vllm#22772, thus failing in trunk at the moment after the latest pin commit update

Pull Request resolved: #163383
Approved by: https://github.com/wdvr, https://github.com/seemethere, https://github.com/malfet

(cherry picked from commit a31acf3)

Co-authored-by: Huy Do <[email protected]>
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 10, 2025
choprahetarth pushed a commit to Tandemn-Labs/vllm that referenced this pull request Oct 11, 2025
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 24, 2025

Labels

ci/build, ready (ONLY add when PR is ready to merge/full CI is needed)

3 participants