Your current environment
The output of python collect_env.py
Collecting environment information...
WARNING 01-21 12:00:40 [mooncake_connector.py:18] Mooncake not available, MooncakeOmniConnector will not work
WARNING 01-21 12:00:41 [envs.py:194] Flash Attention library "flash_attn" not found, using pytorch attention implementation
==============================
System Info
==============================
OS : Ubuntu 24.04.1 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 3.31.4
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.9.1+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform : Linux-4.15.0-213-generic-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.61
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version : 530.30.02
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
BIOS Vendor ID: Advanced Micro Devices, Inc.
Model name: AMD EPYC 7642 48-Core Processor
BIOS Model name: AMD EPYC 7642 48-Core Processor Unknown CPU @ 2.3GHz
BIOS CPU family: 107
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 73%
CPU max MHz: 2300.0000
CPU min MHz: 1500.0000
BogoMIPS: 4600.10
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 512 MiB (32 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-python==0.5.3
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cudnn-frontend==1.17.0
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-cufile-cu12==1.13.1.3
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-cutlass-dsl==4.3.4
[pip3] nvidia-ml-py==13.590.44
[pip3] nvidia-nccl-cu12==2.27.5
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvshmem-cu12==3.3.20
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] pyzmq==27.1.0
[pip3] torch==2.9.1
[pip3] torchaudio==2.9.1
[pip3] torchsde==0.2.6
[pip3] torchvision==0.24.1
[pip3] transformers==4.57.3
[pip3] triton==3.5.1
[conda] Could not collect
==============================
vLLM Info
==============================
ROCM Version : Could not collect
vLLM Version : 0.14.0
vLLM-Omni Version : 0.12.0rc1 (git sha: 5b5ddce)
vLLM Build Flags:
CUDA Archs: 7.5 8.0 8.6 9.0 10.0 12.0+PTX; ROCm: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 NIC8 NIC9 NIC10 NIC11 NIC12 NIC13 NIC14 NIC15 NIC16 NIC17 CPU Affinity NUMA Affinity
GPU0 X NV12 NV12 NV12 NV12 NV12 NV12 NV12 NODE NODE NODE NODE PXB PXB PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS 0-47,96-143 0
GPU1 NV12 X NV12 NV12 NV12 NV12 NV12 NV12 NODE NODE NODE NODE PXB PXB PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS 0-47,96-143 0
GPU2 NV12 NV12 X NV12 NV12 NV12 NV12 NV12 PXB PXB PXB PXB NODE NODE NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS 0-47,96-143 0
GPU3 NV12 NV12 NV12 X NV12 NV12 NV12 NV12 PXB PXB PXB PXB NODE NODE NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS 0-47,96-143 0
GPU4 NV12 NV12 NV12 NV12 X NV12 NV12 NV12 SYS SYS SYS SYS SYS SYS SYS SYS NODE NODE NODE NODE PXB PXB PXB PXB NODE NODE 48-95,144-191 1
GPU5 NV12 NV12 NV12 NV12 NV12 X NV12 NV12 SYS SYS SYS SYS SYS SYS SYS SYS NODE NODE NODE NODE PXB PXB PXB PXB NODE NODE 48-95,144-191 1
GPU6 NV12 NV12 NV12 NV12 NV12 NV12 X NV12 SYS SYS SYS SYS SYS SYS SYS SYS PXB PXB PXB PXB NODE NODE NODE NODE NODE NODE 48-95,144-191 1
GPU7 NV12 NV12 NV12 NV12 NV12 NV12 NV12 X SYS SYS SYS SYS SYS SYS SYS SYS PXB PXB PXB PXB NODE NODE NODE NODE NODE NODE 48-95,144-191 1
NIC0 NODE NODE PXB PXB SYS SYS SYS SYS X PIX PIX PIX NODE NODE NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC1 NODE NODE PXB PXB SYS SYS SYS SYS PIX X PIX PIX NODE NODE NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC2 NODE NODE PXB PXB SYS SYS SYS SYS PIX PIX X PIX NODE NODE NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC3 NODE NODE PXB PXB SYS SYS SYS SYS PIX PIX PIX X NODE NODE NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC4 PXB PXB NODE NODE SYS SYS SYS SYS NODE NODE NODE NODE X PIX PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC5 PXB PXB NODE NODE SYS SYS SYS SYS NODE NODE NODE NODE PIX X PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC6 PXB PXB NODE NODE SYS SYS SYS SYS NODE NODE NODE NODE PXB PXB X PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC7 PXB PXB NODE NODE SYS SYS SYS SYS NODE NODE NODE NODE PXB PXB PIX X SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC8 SYS SYS SYS SYS NODE NODE PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS X PIX PXB PXB NODE NODE NODE NODE NODE NODE
NIC9 SYS SYS SYS SYS NODE NODE PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS PIX X PXB PXB NODE NODE NODE NODE NODE NODE
NIC10 SYS SYS SYS SYS NODE NODE PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS PXB PXB X PIX NODE NODE NODE NODE NODE NODE
NIC11 SYS SYS SYS SYS NODE NODE PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS PXB PXB PIX X NODE NODE NODE NODE NODE NODE
NIC12 SYS SYS SYS SYS PXB PXB NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS NODE NODE NODE NODE X PIX PXB PXB NODE NODE
NIC13 SYS SYS SYS SYS PXB PXB NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS NODE NODE NODE NODE PIX X PXB PXB NODE NODE
NIC14 SYS SYS SYS SYS PXB PXB NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS NODE NODE NODE NODE PXB PXB X PIX NODE NODE
NIC15 SYS SYS SYS SYS PXB PXB NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS NODE NODE NODE NODE PXB PXB PIX X NODE NODE
NIC16 SYS SYS SYS SYS NODE NODE NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS NODE NODE NODE NODE NODE NODE NODE NODE X PIX
NIC17 SYS SYS SYS SYS NODE NODE NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS NODE NODE NODE NODE NODE NODE NODE NODE PIX X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5
NIC6: mlx5_6
NIC7: mlx5_7
NIC8: mlx5_8
NIC9: mlx5_9
NIC10: mlx5_10
NIC11: mlx5_11
NIC12: mlx5_12
NIC13: mlx5_13
NIC14: mlx5_14
NIC15: mlx5_15
NIC16: mlx5_16
NIC17: mlx5_17
==============================
Environment Variables
==============================
NVIDIA_VISIBLE_DEVICES=all
CUBLAS_VERSION=12.8.3.14
NVIDIA_REQUIRE_CUDA=cuda>=9.0
CUDA_CACHE_DISABLE=1
TORCH_CUDA_ARCH_LIST=7.5 8.0 8.6 9.0 10.0 12.0+PTX
NCCL_VERSION=2.25.1
NCCL_NVLS_ENABLE=0
NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
TORCH_NCCL_USE_COMM_NONBLOCKING=0
NVIDIA_PRODUCT_NAME=PyTorch
CUDA_VERSION=12.8.0.038
PYTORCH_VERSION=2.6.0a0+ecf3bae
PYTORCH_BUILD_NUMBER=0
CUDNN_FRONTEND_VERSION=1.9.0
CUDNN_VERSION=9.7.0.66
PYTORCH_HOME=/opt/pytorch/pytorch
LD_LIBRARY_PATH=/workspace/.venv/lib/python3.12/site-packages/cv2/../../lib64:/usr/local/lib/python3.12/dist-packages/torch/lib:/usr/local/lib/python3.12/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
NVIDIA_BUILD_ID=134983853
CUDA_DRIVER_VERSION=570.86.10
PYTORCH_BUILD_VERSION=2.6.0a0+ecf3bae
CUDA_HOME=/usr/local/cuda
CUDA_MODULE_LOADING=LAZY
NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=
NVIDIA_PYTORCH_VERSION=25.01
TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
TORCHINDUCTOR_CACHE_DIR=/tmp/torchinductor_root
Your code version
The commit id or version of vllm
v0.14.0
The commit id or version of vllm-omni
28d13e0dee41df8dc64e624c6c03357961e4661d
🐛 Describe the bug
Run:
pytest -s -v tests/e2e/offline_inference/test_t2i_model.py
Server Error
[2026-01-23T04:01:05Z] [Stage-0] INFO 01-22 20:01:05 [selector.py:92] Using attention backend 'FLASH_ATTN' for diffusion
| Loading safetensors checkpoint shards: 33% 1/3 [03:16<06:33, 196.53s/it]
| WARNING 01-22 20:04:44 [omni.py:342] [Orchestrator] Initialization timeout: 0/1 stages ready. Missing stages: [0]
| [2026-01-23T04:04:44Z] ERROR 01-22 20:04:44 [omni.py:356] [Orchestrator] Stage initialization failed. Troubleshooting Steps:
| [2026-01-23T04:04:44Z] ERROR 01-22 20:04:44 [omni.py:356] 1) Verify GPU/device assignment in config (runtime.devices) is correct.
| [2026-01-23T04:04:44Z] ERROR 01-22 20:04:44 [omni.py:356] 2) Check GPU/host memory availability; reduce model or batch size if needed.
| [2026-01-23T04:04:44Z] ERROR 01-22 20:04:44 [omni.py:356] 3) Check model weights path and network reachability (if loading remotely).
| [2026-01-23T04:04:44Z] ERROR 01-22 20:04:44 [omni.py:356] 4) Increase initialization wait time (stage_init_timeout or call-site timeout).
| Adding requests: 0% 0/1 [00:00<?, ?it/s]
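
The timestamps suggest the timeout fires while the weights are still loading: shard loading starts around 20:01:05, the progress bar reports ~196 s per shard with only 1/3 shards done, and the orchestrator gives up at 20:04:44, roughly 3.5 minutes in. A minimal back-of-envelope sketch of a safer value, following troubleshooting step 4 above; the knob name `stage_init_timeout` comes from the error message itself, but the headroom factor is an assumption, and how the value is actually wired into the vllm-omni config is not confirmed here:

```python
# Back-of-envelope sizing for stage_init_timeout (named in the error above).
# Numbers come from the log on this host: 3 safetensors shards at ~196.53 s/it.
SHARD_SECONDS = 196.53   # per-shard load time reported by the progress bar
NUM_SHARDS = 3           # "1/3" in the progress bar
HEADROOM = 1.5           # hypothetical margin for post-load stage init

suggested_timeout = int(SHARD_SECONDS * NUM_SHARDS * HEADROOM)
print(f"suggested stage_init_timeout >= {suggested_timeout} s")  # ~884 s
# The current timeout evidently fired at ~3.5 min (20:01:05 -> 20:04:44),
# well before the ~10 min the weights alone need to load here.
```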
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.