[fix] Add support for loading model from a local path #52

Merged
Isotr0py merged 1 commit into main from dev_qby
Nov 10, 2025
Conversation

@qibaoyuan (Contributor) commented:


Purpose

Add support for loading token2wav_weights from a local absolute file path.
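The core of the fix is to check whether the argument is an existing local directory before handing it to `huggingface_hub`, which rejects absolute paths as repo ids. The sketch below is illustrative only; the function name `resolve_weights_path` is hypothetical and not the identifier used in the actual vllm-omni patch.

```python
import os


def resolve_weights_path(model_name_or_path: str) -> str:
    """Return a local directory containing the weights.

    If the argument is already a local path, use it directly instead of
    treating it as a Hugging Face repo id (passing an absolute path as a
    repo id raises HFValidationError, as seen in the log below).
    """
    if os.path.isdir(model_name_or_path):
        return model_name_or_path
    # Otherwise assume it is a repo id like "Qwen/Qwen2.5-Omni-3B"
    # and download it (import deferred so local paths need no hub access).
    from huggingface_hub import snapshot_download
    return snapshot_download(repo_id=model_name_or_path)
```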

Test Plan

python3 end2end.py --model /to/path/llm/Qwen2.5-Omni-3B --prompts "who are you?" --voice-type "m02" --dit-ckpt none --bigvgan-ckpt none --output-wav output_audio --prompt_type text

Test Result

Without this fix, passing a local path fails at model load time with an HFValidationError:

(EngineCore_DP0 pid=7550)                 ^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=7550)   File "/Users/bob/.envs/vllm-omni-official/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 106, in _inner_fn
(EngineCore_DP0 pid=7550)     validate_repo_id(arg_value)
(EngineCore_DP0 pid=7550)   File "/Users/bob/.envs/vllm-omni-official/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 154, in validate_repo_id
(EngineCore_DP0 pid=7550)     raise HFValidationError(
(EngineCore_DP0 pid=7550) huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/to/path//Qwen2.5-Omni-3B'. Use `repo_type` argument if needed.

After applying this patch, the model loads successfully from the local path:

(EngineCore_DP0 pid=9068) WARNING 11-09 04:21:12 [utils.py:72] Trying to guess the arguments for old-style model class <class 'vllm_omni.model_executor.models.qwen2_5_omni_token2wav.Qwen2_5OmniToken2WavModel'>
Loading safetensors checkpoint shards:   0% Completed | 0/3 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 100% Completed | 3/3 [00:00<00:00, 35.76it/s]
(EngineCore_DP0 pid=9068) 



@Isotr0py (Member) left a comment:

LGTM, thanks!

@Isotr0py Isotr0py merged commit f1d8a00 into main Nov 10, 2025
1 check passed
@qibaoyuan qibaoyuan deleted the dev_qby branch November 10, 2025 09:16
@qibaoyuan qibaoyuan restored the dev_qby branch November 10, 2025 09:16
@qibaoyuan qibaoyuan deleted the dev_qby branch November 11, 2025 10:18
princepride pushed a commit to princepride/vllm-omni that referenced this pull request Jan 10, 2026
[fix] Add support for loading model from a local path
