[Fix] Increase max wait time for server readiness to accommodate model loading (#1089)
Merged
hsliuustc0106 merged 1 commit into vllm-project:main on Jan 30, 2026
Conversation
Signed-off-by: Andy Zhou <46011930+AndyZhou952@users.noreply.github.com>
Contributor
OK
david6666666 approved these changes on Jan 30, 2026
Collaborator
Is there some other issue besides the maximum waiting time?
Contributor (Author)
Nothing I'm aware of currently. Fingers crossed!
dongbo910220 pushed a commit to dongbo910220/vllm-omni that referenced this pull request on Feb 1, 2026
…l loading (vllm-project#1089)
Signed-off-by: Andy Zhou <46011930+AndyZhou952@users.noreply.github.com>
Purpose
Fix #1029 example 10; follow-up to #1075. The failure was introduced by switching to the common OmniServer fixture from the test conftest, which uses the default max_time=600s. (In the CI use case we had increased the timeout to 1200s in the previous PRs to accommodate model loading, but that change was overwritten; we revert to the longer timeout here.)

Example of the resulting CI failure, where checkpoint loading alone exceeds the 600s budget:

Loading safetensors checkpoint shards: 67% 2/3 [07:35<03:42, 222.93s/it]
ERROR
/usr/local/lib/python3.12/dist-packages/coverage/control.py:958: CoverageWarning: No data was collected. (no-data-collected); see https://coverage.readthedocs.io/en/7.13.2/messages.html#warning-no-data-collected
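For context, the readiness wait the timeout governs amounts to polling the server until it answers or the budget runs out. Below is a minimal sketch of that pattern; the function name, endpoint, and parameters are hypothetical illustrations, not the actual OmniServer fixture code.

```python
import time
import urllib.error
import urllib.request


def wait_for_server(url: str, max_time: float = 600.0, interval: float = 5.0) -> bool:
    """Poll a health endpoint until it responds or max_time elapses.

    A larger max_time (e.g. 1200s instead of the 600s default) leaves
    headroom for slow startup work such as loading large safetensors
    checkpoints, which is the situation this PR addresses.
    """
    deadline = time.monotonic() + max_time
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval) as resp:
                if resp.status == 200:
                    return True  # server is up and healthy
        except (urllib.error.URLError, OSError):
            pass  # not ready yet; retry until the deadline
        time.sleep(interval)
    return False  # timed out waiting for readiness
```

The key design point is that the caller, not the helper, should choose max_time: a CI job loading a multi-shard model needs a far larger budget than a unit test against a stub server.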