
[Bugfix][Test] Re-enable the log simple tests #1065

Merged
Gaohan123 merged 2 commits into vllm-project:main from gcanlin:simple-test
Jan 29, 2026

Conversation

gcanlin (Contributor) commented Jan 29, 2026


Purpose

In #774, I changed the unit tests to run inside a Docker container to avoid the error below (copied from my earlier comment). However, some new tests were only added in simple_test.sh. I think we can unify the environment setup in the Dockerfile and safely remove simple_test.sh, so this PR aims to recover the remaining tests.

More details: #774 (comment)

Here I move the unit tests into the GPU queue, but they do not consume GPU resources, so I think that is acceptable. The reason is that after the platform abstraction was introduced, initializing it imports the torch._C ops. If the tests stay in the CPU queue, they raise the error below:
[2026-01-21T11:33:06Z] .venv-simple-test/lib/python3.12/site-packages/vllm/platforms/cuda.py:16: in
[2026-01-21T11:33:06Z] import vllm._C # noqa
[2026-01-21T11:33:06Z] ^^^^^^^^^^^^^^
[2026-01-21T11:33:06Z] E ImportError: libcuda.so.1: cannot open shared object file: No such file or directory
See more details: https://buildkite.com/vllm/vllm-omni/builds/1913/steps/canvas?sid=019be051-4a0c-43ef-8b20-47464b363092
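A minimal sketch of the failure mode above (assuming nothing about vllm's internals beyond the traceback): importing a compiled CUDA extension on a host without the NVIDIA driver fails at load time with an ImportError for libcuda.so.1, so any test that transitively triggers the platform import must run on a node where the driver is present. The guard below is illustrative, not vllm's actual code:

```python
import importlib


def try_import(module_name: str):
    """Import a module, returning None instead of crashing when the module
    or one of its shared-library dependencies (e.g. libcuda.so.1) is missing."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        return None


# On a CPU-only host, importing the CUDA extension fails at load time;
# a guard like this turns the hard crash into a detectable condition.
cuda_ext = try_import("vllm._C")
if cuda_ext is None:
    print("CUDA extension unavailable; this looks like a CPU-only host")
```

This is the general pattern; the PR instead sidesteps the problem by scheduling the tests on GPU nodes so the import simply succeeds.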

Test Plan

CI

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft.


gcanlin (Contributor, Author) commented Jan 29, 2026

cc @congw729 @yenuo26 @david6666666

congw729 (Contributor) commented Jan 29, 2026

So after this PR we don't have CPU tests anymore?

gcanlin (Contributor, Author) commented Jan 29, 2026

So after this PR we don't have CPU tests anymore?

We still need them, but we now execute them in the GPU queue even though these tests do not consume GPU resources.

Would it increase the burden on CI?

@Gaohan123 Gaohan123 added the ready label to trigger buildkite CI label Jan 29, 2026
Gaohan123 (Collaborator) left a comment


LGTM. Thanks!

@@ -22,8 +22,10 @@ steps:
commands:
- pytest -v -s tests/entrypoints/
Collaborator

I am curious why both pipeline.yml and simple_test.sh have `pytest -v -s tests/entrypoints/`? Intuitively, simple_test.sh runs the CPU tests and pipeline.yml runs the GPU tests.
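For context, the kind of Buildkite step the diff touches might look like the fragment below. This is an illustrative sketch, not the actual pipeline.yml; the queue name and label are assumptions:

```yaml
steps:
  - label: "Entrypoints tests"
    agents:
      queue: gpu            # run on a GPU node so `import vllm._C` can load libcuda.so.1
    commands:
      - pytest -v -s tests/entrypoints/
```

Keeping the command only here (and dropping simple_test.sh) would remove the duplication the comment above asks about.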

@Gaohan123 Gaohan123 merged commit 2c4974c into vllm-project:main Jan 29, 2026
5 checks passed
dongbo910220 pushed a commit to dongbo910220/vllm-omni that referenced this pull request Feb 1, 2026

Labels

ready label to trigger buildkite CI
