[Bugfix] Fix redundant shm broadcast warnings in diffusion workers#133

Merged
ywang96 merged 1 commit into vllm-project:main from SamitHuang:fix_diffusion_worker
Dec 1, 2025

Conversation

@SamitHuang
Collaborator


Purpose

When running the qwen-image Gradio demo (or any diffusion model), the worker processes emit a redundant log message every 60 seconds:

INFO 12-01 05:28:41 [shm_broadcast.py:466] No available shared memory broadcast block found in 60 seconds. This typically happens when some processes are hanging or doing some time-consuming work (e.g. compilation).
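
The message appears because the worker sits idle between Gradio requests while blocked on the shared-memory broadcast queue, and the reader logs one line per elapsed timeout interval even though nothing is wrong. A toy model of that behaviour (illustrative only, not the vllm-omni API):

```python
def shm_warning_count(idle_s: float, warn_interval_s: float = 60.0,
                      indefinite: bool = False) -> int:
    """Model how many 'No available shared memory broadcast block'
    lines an idle worker logs while blocked for `idle_s` seconds.

    Without the fix, one warning fires per elapsed warn interval;
    with an indefinite wait (this PR's approach, per the review
    thread below) the periodic warning is suppressed. This is an
    illustrative sketch, not the actual shm_broadcast code.
    """
    if indefinite:
        return 0
    return int(idle_s // warn_interval_s)
```

Roughly three minutes of idle time yields three warning lines, matching the "Before" log; an indefinite wait yields none, matching the "After" log.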

Test Plan

cd examples/offline_inference/qwen_image
python gradio_demo.py

Test Result

Before:

* Running on local URL:  http://127.0.0.1:7862
INFO:httpx:HTTP Request: GET http://127.0.0.1:7862/gradio_api/startup-events "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: HEAD http://127.0.0.1:7862/ "HTTP/1.1 200 OK"
* To create a public link, set `share=True` in `launch()`.
INFO:httpx:HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
INFO:vllm_omni.diffusion.omni_diffusion:Prepared 1 requests for generation.
INFO:vllm_omni.diffusion.diffusion_engine:Generation completed successfully.
INFO:vllm_omni.diffusion.omni_diffusion:Prepared 1 requests for generation.
INFO:vllm_omni.diffusion.diffusion_engine:Generation completed successfully.
INFO 12-01 05:28:41 [shm_broadcast.py:466] No available shared memory broadcast block found in 60 seconds. This typically happens when some processes are hanging or doing some time-consuming work (e.g. compilation).
INFO 12-01 05:29:41 [shm_broadcast.py:466] No available shared memory broadcast block found in 60 seconds. This typically happens when some processes are hanging or doing some time-consuming work (e.g. compilation).
INFO 12-01 05:30:41 [shm_broadcast.py:466] No available shared memory broadcast block found in 60 seconds. This typically happens when some processes are hanging or doing some time-consuming work (e.g. compilation).
...

After fix:

* Running on local URL:  http://127.0.0.1:7862
INFO:httpx:HTTP Request: GET http://127.0.0.1:7862/gradio_api/startup-events "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: HEAD http://127.0.0.1:7862/ "HTTP/1.1 200 OK"
* To create a public link, set `share=True` in `launch()`.
INFO:httpx:HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"

Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing the test command.
  • The test results, such as pasting a before/after comparison or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft.


Signed-off-by: samithuang <[email protected]>
@SamitHuang SamitHuang changed the title [Misc] Fix redundant shm broadcast warnings in diffusion workers [Bugfix] Fix redundant shm broadcast warnings in diffusion workers Dec 1, 2025
@SamitHuang SamitHuang requested review from ZJY0516 and removed request for hsliuustc0106 December 1, 2025 06:31
Collaborator

@ZJY0516 ZJY0516 left a comment


I'm not sure it's OK to use `indefinite=True` instead of a suitably large timeout value.
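
The two alternatives being weighed can be sketched with a plain `threading.Event` (hypothetical helper name; the real change touches vLLM's shm broadcast code, which this sketch does not reproduce):

```python
import threading

def wait_for_block(ready: threading.Event, *, indefinite: bool,
                   timeout_s: float = 3600.0) -> bool:
    """Block until a broadcast block is available (modeled as `ready`).

    indefinite=True  -> wait with no timeout, so no periodic
                        'no block found' warning is ever logged.
    indefinite=False -> the 'suitably large number' alternative:
                        wait up to `timeout_s` and return False on
                        expiry, leaving the caller to handle a worker
                        that was merely idle, not hung.
    Hypothetical sketch, not the actual vllm-omni implementation.
    """
    if indefinite:
        ready.wait()
        return True
    return ready.wait(timeout=timeout_s)
```

The trade-off: a finite timeout keeps a hang-detection signal but can still fire spuriously on long-idle workers, while an indefinite wait silences the noise at the cost of that signal.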

Collaborator

@ZJY0516 ZJY0516 left a comment


This is acceptable for the rc1 release

@ywang96 ywang96 merged commit f378f80 into vllm-project:main Dec 1, 2025
3 checks passed
LawJarp-A pushed a commit to LawJarp-A/vllm-omni that referenced this pull request Dec 12, 2025
princepride pushed a commit to princepride/vllm-omni that referenced this pull request Jan 10, 2026