Simple test from PR#88 on Buildkite#93

Merged
ywang96 merged 7 commits into main from add_test
Nov 28, 2025
Conversation

@ywang96 (Member) commented Nov 28, 2025

Adopted test from #88

congw729 and others added 5 commits November 27, 2025 16:19
Signed-off-by: WANG Cong <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +516 to +518
from vllm_omni.entrypoints.omni_llm import OmniLLM

llm = OmniLLM(model="any", init_timeout=1)


P1 Badge Override init_sleep_seconds in tests to prevent long delays

Each test constructs OmniLLM(model="any", init_timeout=1) without overriding init_sleep_seconds (e.g. here), but OmniLLM defaults to init_sleep_seconds=20, and _start_stage_processes sleeps for that duration per stage (see vllm_omni/entrypoints/omni_llm.py:58,132). A two-stage config therefore blocks for about 40 seconds per test, and even the timeout test waits out those sleeps before failing. This turns the supposed "Simple Unit Test" pipeline into a multi-minute run without actually exercising short timeouts.

Useful? React with 👍 / 👎.
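A minimal sketch of the delay arithmetic the bot describes and the suggested override. The real OmniLLM lives in vllm_omni and may not be importable here, so the class below is a hypothetical stub that only models the behavior stated in the comment (each stage sleeps init_sleep_seconds during startup); the parameter names follow the comment, while num_stages is an illustrative addition:

```python
class OmniLLM:
    """Hypothetical stub, not the real vllm_omni class: it only records how
    long startup would block, per the review comment's description."""

    def __init__(self, model, init_timeout=60, init_sleep_seconds=20, num_stages=2):
        # The comment reports that _start_stage_processes sleeps
        # init_sleep_seconds once per stage before the timeout can trigger.
        self.startup_delay = init_sleep_seconds * num_stages


# Without the override, a two-stage config blocks ~40s per test,
# even when init_timeout is only 1 second.
slow = OmniLLM(model="any", init_timeout=1)
assert slow.startup_delay == 40

# Suggested fix: tests override init_sleep_seconds so the short
# timeout is actually exercised.
fast = OmniLLM(model="any", init_timeout=1, init_sleep_seconds=0)
assert fast.startup_delay == 0
```

This keeps the timeout path meaningful in tests: the assertion on the per-stage sleep, not the timeout itself, is what dominates wall-clock time in the unpatched version.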

Signed-off-by: Roger Wang <[email protected]>
@DarkLight1337 (Member) left a comment


Nice

@DarkLight1337 (Member)

We can work on upgrading dependencies once we get some CI running to validate that vllm-omni can be run successfully.

@ywang96 ywang96 merged commit 1ab8882 into main Nov 28, 2025
3 checks passed
@ywang96 ywang96 mentioned this pull request Nov 28, 2025
5 tasks
@Gaohan123 Gaohan123 deleted the add_test branch December 1, 2025 09:54
princepride pushed a commit to princepride/vllm-omni that referenced this pull request Jan 10, 2026
Signed-off-by: WANG Cong <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: WANG Cong <[email protected]>

3 participants