[Docs] Add qwen image missing doc in user guide#111

Merged
Gaohan123 merged 9 commits into main from fix_qwenimage_doc
Nov 30, 2025
Conversation

@SamitHuang
Collaborator

Purpose

Add qwen image missing doc in user guide

Test Plan

Test Result

https://vllm-omni.readthedocs.io/en/latest/


Signed-off-by: samithuang <[email protected]>
@SamitHuang SamitHuang requested review from Gaohan123 and ZJY0516 and removed request for Gaohan123 November 30, 2025 06:58
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines +5 to +7
from vllm_omni.diffusion.worker.gpu_worker import GPUWorker, WorkerProc

__all__ = ["GPUWorker", "WorkerProc"]

P1 Badge Avoid importing GPUWorker in package init

The new vllm_omni.diffusion.worker.__init__ eagerly imports GPUWorker/WorkerProc, which in turn pull in vllm, torch, and other heavy dependencies. MkDocs’ api-autonav plugin (see mkdocs.yml’s modules: ["vllm_omni"]) imports each discovered package to render API docs; with only the [docs] extras installed (they don’t install vllm), a simple python -c "import vllm_omni.diffusion.worker" now raises ModuleNotFoundError and will cause mkdocs build to fail. Previously this subpackage had no __init__, so it wasn’t imported during docs generation. Consider making the import lazy or excluding this package from the docs scan to keep documentation builds working in environments without vllm.
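A minimal sketch of the lazy-import pattern the review suggests, via a PEP 562 module-level `__getattr__`. In the PR itself this would live in `vllm_omni/diffusion/worker/__init__.py` and defer importing `gpu_worker` (and with it vllm/torch) until `GPUWorker`/`WorkerProc` is first accessed. Since vllm is not assumed installed here, the demo builds a throwaway in-memory module and uses the stdlib `json` module as a stand-in for the heavy submodule; all names in the demo are illustrative.

```python
import importlib
import sys
import types

# Throwaway module so the pattern is demonstrable without vllm installed;
# "json" stands in for the heavy gpu_worker submodule.
lazy_pkg = types.ModuleType("lazy_demo")
lazy_pkg.__all__ = ["JSONDecoder"]  # stand-in for ["GPUWorker", "WorkerProc"]

def _module_getattr(name):
    # PEP 562: modules consult a module-level __getattr__ for missing names,
    # so the heavy import runs on first attribute access, not at import time.
    if name in lazy_pkg.__all__:
        heavy = importlib.import_module("json")  # deferred heavy import
        value = getattr(heavy, name)
        setattr(lazy_pkg, name, value)  # cache: __getattr__ fires only once
        return value
    raise AttributeError(f"module 'lazy_demo' has no attribute {name!r}")

lazy_pkg.__getattr__ = _module_getattr
sys.modules["lazy_demo"] = lazy_pkg

import lazy_demo  # cheap: nothing heavy has been imported yet

decoder = lazy_demo.JSONDecoder()  # heavy import happens here, on demand
print(decoder.decode('{"ok": true}'))  # prints {'ok': True}
```

With this shape, `mkdocs build` (which only imports the package) never triggers the heavy dependency, while runtime users still get `GPUWorker`/`WorkerProc` from the package namespace on first use.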


@@ -0,0 +1,66 @@
# Offline Inference Example of vLLM-Omni for Qwen-Image
Collaborator

This is duplicated with examples/offline_inference/qwen_image/README.md.

Collaborator

It looks like omni has 2 docs for every example now

Collaborator Author

@SamitHuang SamitHuang Nov 30, 2025


we should address the docs duplication issue in another PR. cc @Gaohan123

Collaborator

I resolved it in PR #113

@@ -0,0 +1,3 @@
# SPDX-License-Identifier: Apache-2.0
Collaborator

Why do we need this file?

Collaborator Author

@SamitHuang SamitHuang Nov 30, 2025


This is to address a warning in the mkdocs build:

WARNING -  api-autonav: Skipping implicit namespace package (without an __init__.py file) at /home/docs/checkouts/readthedocs.org/user_builds/vllm-omni/checkouts/latest/vllm_omni/diffusion/models/qwen_image. Set 'on_implicit_namespace_package' to 'skip' to omit it without warning.
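As the warning itself hints, an alternative to adding an `__init__.py` would be configuring the docs plugin to skip implicit namespace packages. A hypothetical mkdocs.yml fragment (the exact plugin block layout in this repo may differ; only `modules: ["vllm_omni"]` and the `on_implicit_namespace_package` option come from the discussion above):

```yaml
plugins:
  - api-autonav:
      modules: ["vllm_omni"]
      # Per the warning text: omit implicit namespace packages silently
      # instead of adding an __init__.py for each.
      on_implicit_namespace_package: skip
```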

@ZJY0516
Collaborator

ZJY0516 commented Nov 30, 2025

(screenshot attached)

There is something wrong

@SamitHuang
Collaborator Author

(screenshot) There is something wrong

fixed, thx

@ZJY0516
Collaborator

ZJY0516 commented Nov 30, 2025

Don't merge before #113 lands

Collaborator

@Gaohan123 Gaohan123 left a comment


lgtm, thanks!

@Gaohan123 Gaohan123 merged commit f77e44d into main Nov 30, 2025
3 checks passed
- examples/README.md
- Offline Inference:
- Qwen2.5-Omni: user_guide/examples/offline_inference/qwen2_5_omni.md
- Qwen2.5-Image: user_guide/examples/offline_inference/qwen_image.md
Collaborator

Suggested change
- Qwen2.5-Image: user_guide/examples/offline_inference/qwen_image.md
- Qwen-Image: user_guide/examples/offline_inference/qwen_image.md

@Gaohan123 Gaohan123 deleted the fix_qwenimage_doc branch December 1, 2025 09:52
princepride pushed a commit to princepride/vllm-omni that referenced this pull request Jan 10, 2026