Open
Labels
bug (Something isn't working), stale (Over 90 days of inactivity)
Description
Your current environment
The output of python collect_env.py
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.1 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 3.31.4
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.7.1+cu126
Is debug build : False
CUDA used to build PyTorch : 12.6
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform : Linux-5.4.0-216-generic-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.61
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version : 530.30.02
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7642 48-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 78%
CPU max MHz: 2300.0000
CPU min MHz: 1500.0000
BogoMIPS: 4600.14
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 512 MiB (32 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cudnn-frontend==1.9.0
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-cufile-cu12==1.11.1.6
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-dali-cuda120==1.45.0
[pip3] nvidia-modelopt==0.21.0
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvimgcodec-cu12==0.3.0.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvshmem-cu12==3.3.9
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] nvidia-pyindex==1.0.9
[pip3] onnx==1.17.0
[pip3] optree==0.14.0
[pip3] pynvml==11.4.1
[pip3] pytorch-triton==3.1.0+cf34004b8.internal
[pip3] pyzmq==26.2.0
[pip3] torch==2.7.1
[pip3] torch_tensorrt==2.6.0a0
[pip3] torchaudio==2.7.1
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.22.1
[pip3] transformers==4.53.3
[pip3] triton==3.3.1
[conda] Could not collect
==============================
vLLM Info
==============================
ROCM Version : Could not collect
Neuron SDK Version : N/A
vLLM Version : 0.10.1.dev380+g2acf3aa71.d20250806 (git sha: 2acf3aa71, date: 20250806)
vLLM Build Flags:
CUDA Archs: 7.5 8.0 8.6 9.0 10.0 12.0+PTX; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 NIC8 NIC9 NIC10 NIC11 NIC12 NIC13 NIC14 NIC15 NIC16 NIC17 CPU Affinity NUMA Affinity
GPU0 X NV12 NV12 NV12 NV12 NV12 NV12 NV12 NODE NODE NODE NODE PXB PXB PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS 0-47,96-143 0
GPU1 NV12 X NV12 NV12 NV12 NV12 NV12 NV12 NODE NODE NODE NODE PXB PXB PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS 0-47,96-143 0
GPU2 NV12 NV12 X NV12 NV12 NV12 NV12 NV12 PXB PXB PXB PXB NODE NODE NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS 0-47,96-143 0
GPU3 NV12 NV12 NV12 X NV12 NV12 NV12 NV12 PXB PXB PXB PXB NODE NODE NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS 0-47,96-143 0
GPU4 NV12 NV12 NV12 NV12 X NV12 NV12 NV12 SYS SYS SYS SYS SYS SYS SYS SYS NODE NODE NODE NODE PXB PXB PXB PXB NODE NODE 48-95,144-191 1
GPU5 NV12 NV12 NV12 NV12 NV12 X NV12 NV12 SYS SYS SYS SYS SYS SYS SYS SYS NODE NODE NODE NODE PXB PXB PXB PXB NODE NODE 48-95,144-191 1
GPU6 NV12 NV12 NV12 NV12 NV12 NV12 X NV12 SYS SYS SYS SYS SYS SYS SYS SYS PXB PXB PXB PXB NODE NODE NODE NODE NODE NODE 48-95,144-191 1
GPU7 NV12 NV12 NV12 NV12 NV12 NV12 NV12 X SYS SYS SYS SYS SYS SYS SYS SYS PXB PXB PXB PXB NODE NODE NODE NODE NODE NODE 48-95,144-191 1
NIC0 NODE NODE PXB PXB SYS SYS SYS SYS X PIX PIX PIX NODE NODE NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC1 NODE NODE PXB PXB SYS SYS SYS SYS PIX X PIX PIX NODE NODE NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC2 NODE NODE PXB PXB SYS SYS SYS SYS PIX PIX X PIX NODE NODE NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC3 NODE NODE PXB PXB SYS SYS SYS SYS PIX PIX PIX X NODE NODE NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC4 PXB PXB NODE NODE SYS SYS SYS SYS NODE NODE NODE NODE X PIX PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC5 PXB PXB NODE NODE SYS SYS SYS SYS NODE NODE NODE NODE PIX X PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC6 PXB PXB NODE NODE SYS SYS SYS SYS NODE NODE NODE NODE PXB PXB X PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC7 PXB PXB NODE NODE SYS SYS SYS SYS NODE NODE NODE NODE PXB PXB PIX X SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS
NIC8 SYS SYS SYS SYS NODE NODE PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS X PIX PXB PXB NODE NODE NODE NODE NODE NODE
NIC9 SYS SYS SYS SYS NODE NODE PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS PIX X PXB PXB NODE NODE NODE NODE NODE NODE
NIC10 SYS SYS SYS SYS NODE NODE PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS PXB PXB X PIX NODE NODE NODE NODE NODE NODE
NIC11 SYS SYS SYS SYS NODE NODE PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS PXB PXB PIX X NODE NODE NODE NODE NODE NODE
NIC12 SYS SYS SYS SYS PXB PXB NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS NODE NODE NODE NODE X PIX PXB PXB NODE NODE
NIC13 SYS SYS SYS SYS PXB PXB NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS NODE NODE NODE NODE PIX X PXB PXB NODE NODE
NIC14 SYS SYS SYS SYS PXB PXB NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS NODE NODE NODE NODE PXB PXB X PIX NODE NODE
NIC15 SYS SYS SYS SYS PXB PXB NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS NODE NODE NODE NODE PXB PXB PIX X NODE NODE
NIC16 SYS SYS SYS SYS NODE NODE NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS NODE NODE NODE NODE NODE NODE NODE NODE X PIX
NIC17 SYS SYS SYS SYS NODE NODE NODE NODE SYS SYS SYS SYS SYS SYS SYS SYS NODE NODE NODE NODE NODE NODE NODE NODE PIX X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5
NIC6: mlx5_6
NIC7: mlx5_7
NIC8: mlx5_8
NIC9: mlx5_9
NIC10: mlx5_10
NIC11: mlx5_11
NIC12: mlx5_12
NIC13: mlx5_13
NIC14: mlx5_14
NIC15: mlx5_15
NIC16: mlx5_16
NIC17: mlx5_17
==============================
Environment Variables
==============================
NVIDIA_VISIBLE_DEVICES=all
CUBLAS_VERSION=12.8.3.14
NVIDIA_REQUIRE_CUDA=cuda>=9.0
CUDA_CACHE_DISABLE=1
TORCH_CUDA_ARCH_LIST=7.5 8.0 8.6 9.0 10.0 12.0+PTX
NCCL_VERSION=2.25.1
NCCL_NVLS_ENABLE=0
NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
TORCH_NCCL_USE_COMM_NONBLOCKING=0
NVIDIA_PRODUCT_NAME=PyTorch
CUDA_VERSION=12.8.0.038
PYTORCH_VERSION=2.6.0a0+ecf3bae
PYTORCH_BUILD_NUMBER=0
CUDNN_FRONTEND_VERSION=1.9.0
CUDNN_VERSION=9.7.0.66
PYTORCH_HOME=/opt/pytorch/pytorch
LD_LIBRARY_PATH=/usr/local/lib/python3.12/dist-packages/torch/lib:/usr/local/lib/python3.12/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
NVIDIA_BUILD_ID=134983853
CUDA_DRIVER_VERSION=570.86.10
PYTORCH_BUILD_VERSION=2.6.0a0+ecf3bae
CUDA_HOME=/usr/local/cuda
CUDA_HOME=/usr/local/cuda
CUDA_MODULE_LOADING=LAZY
NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=
NVIDIA_PYTORCH_VERSION=25.01
TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1
NCCL_CUMEM_ENABLE=0
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
🐛 Describe the bug
I want to use NixlConnector for PD (prefill/decode disaggregation) + PP online inference. When I run the prefiller with ray + pp2 + tp2 and the decoder with ray + pp2 + tp2, it fails. Testing PD + NixlConnector + ray without PP works fine. Does the current NixlConnector support PD + PP?
Prefiller:
UCX_TLS=cuda_ipc,cuda_copy,tcp \
VLLM_ENABLE_V1_MULTIPROCESSING=1 \
VLLM_WORKER_MULTIPROC_METHOD=spawn \
CUDA_VISIBLE_DEVICES=0,1,2,3 \
VLLM_NIXL_SIDE_CHANNEL_PORT=5559 \
vllm serve $MODEL \
-tp 2 \
-pp 2 \
--port 8100 \
--gpu-memory-utilization 0.7 \
--enforce-eager \
--distributed-executor-backend ray \
--kv-transfer-config \
'{"kv_connector":"NixlConnector","kv_role":"kv_both"}'
Decoder:
UCX_TLS=cuda_ipc,cuda_copy,tcp \
VLLM_ENABLE_V1_MULTIPROCESSING=1 \
VLLM_WORKER_MULTIPROC_METHOD=spawn \
CUDA_VISIBLE_DEVICES=4,5,6,7 \
VLLM_NIXL_SIDE_CHANNEL_PORT=5569 \
vllm serve $MODEL \
--port 8200 \
-tp 2 \
-pp 2 \
--gpu-memory-utilization 0.7 \
--enforce-eager \
--distributed-executor-backend ray \
--kv-transfer-config \
'{"kv_connector":"NixlConnector","kv_role":"kv_both"}'
ERROR LOG
(EngineCore_0 pid=532332) INFO 08-07 04:48:05 [core.py:620] Waiting for init message from front-end.
(EngineCore_0 pid=532339) INFO 08-07 04:48:05 [core.py:72] Initializing a V1 LLM engine (v0.10.1.dev380+g2acf3aa71.d20250806) with config: model='/workspace/models/Qwen2.5-72B', speculative_config=None, tokenizer='/workspace/models/Qwen2.5-72B', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=auto, tensor_parallel_size=2, pipeline_parallel_size=2, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=/workspace/models/Qwen2.5-72B, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=False, pooler_config=None, compilation_config={"level":0,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":[],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":0,"cudagraph_capture_sizes":[],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"pass_config":{},"max_capture_size":0,"local_cache_dir":null}
(EngineCore_0 pid=532332) INFO 08-07 04:48:05 [core.py:72] Initializing a V1 LLM engine (v0.10.1.dev380+g2acf3aa71.d20250806) with config: model='/workspace/models/Qwen2.5-72B', speculative_config=None, tokenizer='/workspace/models/Qwen2.5-72B', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=auto, tensor_parallel_size=2, pipeline_parallel_size=2, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=/workspace/models/Qwen2.5-72B, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=False, pooler_config=None, compilation_config={"level":0,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":[],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":0,"cudagraph_capture_sizes":[],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"pass_config":{},"max_capture_size":0,"local_cache_dir":null}
(EngineCore_0 pid=532332) 2025-08-07 04:48:07,050 INFO worker.py:1927 -- Started a local Ray instance.
(EngineCore_0 pid=532339) 2025-08-07 04:48:07,054 INFO worker.py:1927 -- Started a local Ray instance.
(EngineCore_0 pid=532332) INFO 08-07 04:48:22 [ray_utils.py:339] No current placement group found. Creating a new placement group.
(EngineCore_0 pid=532339) INFO 08-07 04:48:22 [ray_utils.py:339] No current placement group found. Creating a new placement group.
(EngineCore_0 pid=532332) INFO 08-07 04:48:22 [ray_distributed_executor.py:169] use_ray_spmd_worker: True
(EngineCore_0 pid=532339) INFO 08-07 04:48:22 [ray_distributed_executor.py:169] use_ray_spmd_worker: True
(EngineCore_0 pid=532339) (pid=533559) INFO 08-07 04:48:27 [__init__.py:241] Automatically detected platform cuda.
(EngineCore_0 pid=532339) INFO 08-07 04:48:28 [ray_env.py:63] RAY_NON_CARRY_OVER_ENV_VARS from config: set()
(EngineCore_0 pid=532339) INFO 08-07 04:48:28 [ray_env.py:65] Copying the following environment variables to workers: ['VLLM_WORKER_MULTIPROC_METHOD', 'VLLM_USE_V1', 'VLLM_USE_RAY_COMPILED_DAG', 'CUDA_HOME', 'VLLM_ENABLE_V1_MULTIPROCESSING', 'VLLM_NIXL_SIDE_CHANNEL_PORT', 'LD_LIBRARY_PATH', 'VLLM_USE_RAY_SPMD_WORKER']
(EngineCore_0 pid=532339) INFO 08-07 04:48:28 [ray_env.py:68] If certain env vars should NOT be copied, add them to /root/.config/vllm/ray_non_carry_over_env_vars.json file
(EngineCore_0 pid=532332) (pid=533551) INFO 08-07 04:48:27 [__init__.py:241] Automatically detected platform cuda.
(EngineCore_0 pid=532332) INFO 08-07 04:48:28 [ray_env.py:63] RAY_NON_CARRY_OVER_ENV_VARS from config: set()
(EngineCore_0 pid=532332) INFO 08-07 04:48:28 [ray_env.py:65] Copying the following environment variables to workers: ['LD_LIBRARY_PATH', 'VLLM_USE_V1', 'VLLM_ENABLE_V1_MULTIPROCESSING', 'VLLM_NIXL_SIDE_CHANNEL_PORT', 'VLLM_USE_RAY_COMPILED_DAG', 'CUDA_HOME', 'VLLM_USE_RAY_SPMD_WORKER', 'VLLM_WORKER_MULTIPROC_METHOD']
(EngineCore_0 pid=532332) INFO 08-07 04:48:28 [ray_env.py:68] If certain env vars should NOT be copied, add them to /root/.config/vllm/ray_non_carry_over_env_vars.json file
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533559) E0807 04:48:35.807193 533559 nixl_plugin_manager.cpp:122] Failed to load plugin from /usr/local/lib/x86_64-linux-gnu/plugins/libplugin_UCX_MO.so: libplugin_UCX.so: cannot open shared object file: No such file or directory
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533559) E0807 04:48:35.807261 533559 nixl_plugin_manager.cpp:288] Failed to load plugin 'UCX_MO' from any directory
(EngineCore_0 pid=532332) (RayWorkerWrapper pid=533577) E0807 04:48:36.508524 533577 nixl_plugin_manager.cpp:122] Failed to load plugin from /usr/local/lib/x86_64-linux-gnu/plugins/libplugin_UCX_MO.so: libplugin_UCX.so: cannot open shared object file: No such file or directory
(EngineCore_0 pid=532332) (RayWorkerWrapper pid=533577) E0807 04:48:36.508607 533577 nixl_plugin_manager.cpp:288] Failed to load plugin 'UCX_MO' from any directory
Loading safetensors checkpoint shards: 0% Completed | 0/37 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 3% Completed | 1/37 [00:00<00:24, 1.47it/s]
Loading safetensors checkpoint shards: 0% Completed | 0/37 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 8% Completed | 3/37 [00:01<00:15, 2.21it/s]
Loading safetensors checkpoint shards: 3% Completed | 1/37 [00:01<00:36, 1.01s/it]
Loading safetensors checkpoint shards: 11% Completed | 4/37 [00:02<00:19, 1.67it/s]
Loading safetensors checkpoint shards: 8% Completed | 3/37 [00:02<00:21, 1.55it/s]
Loading safetensors checkpoint shards: 16% Completed | 6/37 [00:03<00:15, 2.07it/s]
Loading safetensors checkpoint shards: 19% Completed | 7/37 [00:04<00:18, 1.64it/s]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) E0807 04:48:35.810768 533564 nixl_plugin_manager.cpp:122] Failed to load plugin from /usr/local/lib/x86_64-linux-gnu/plugins/libplugin_UCX_MO.so: libplugin_UCX.so: cannot open shared object file: No such file or directory [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) E0807 04:48:35.810831 533564 nixl_plugin_manager.cpp:288] Failed to load plugin 'UCX_MO' from any directory [repeated 3x across cluster]
Loading safetensors checkpoint shards: 11% Completed | 4/37 [00:03<00:28, 1.18it/s]
Loading safetensors checkpoint shards: 22% Completed | 8/37 [00:04<00:20, 1.41it/s]
Loading safetensors checkpoint shards: 16% Completed | 6/37 [00:04<00:21, 1.44it/s]
(EngineCore_0 pid=532332) (RayWorkerWrapper pid=533558) E0807 04:48:36.510301 533558 nixl_plugin_manager.cpp:122] Failed to load plugin from /usr/local/lib/x86_64-linux-gnu/plugins/libplugin_UCX_MO.so: libplugin_UCX.so: cannot open shared object file: No such file or directory [repeated 3x across cluster]
(EngineCore_0 pid=532332) (RayWorkerWrapper pid=533558) E0807 04:48:36.510395 533558 nixl_plugin_manager.cpp:288] Failed to load plugin 'UCX_MO' from any directory [repeated 3x across cluster]
Loading safetensors checkpoint shards: 30% Completed | 11/37 [00:05<00:13, 1.97it/s]
Loading safetensors checkpoint shards: 19% Completed | 7/37 [00:05<00:23, 1.27it/s]
Loading safetensors checkpoint shards: 32% Completed | 12/37 [00:06<00:14, 1.67it/s]
Loading safetensors checkpoint shards: 22% Completed | 8/37 [00:06<00:24, 1.19it/s]
Loading safetensors checkpoint shards: 38% Completed | 14/37 [00:07<00:12, 1.87it/s]
Loading safetensors checkpoint shards: 30% Completed | 11/37 [00:07<00:14, 1.80it/s]
Loading safetensors checkpoint shards: 41% Completed | 15/37 [00:08<00:13, 1.69it/s]
Loading safetensors checkpoint shards: 32% Completed | 12/37 [00:08<00:15, 1.62it/s]
Loading safetensors checkpoint shards: 46% Completed | 17/37 [00:09<00:11, 1.78it/s]
Loading safetensors checkpoint shards: 38% Completed | 14/37 [00:09<00:12, 1.77it/s]
Loading safetensors checkpoint shards: 49% Completed | 18/37 [00:10<00:12, 1.54it/s]
Loading safetensors checkpoint shards: 41% Completed | 15/37 [00:10<00:13, 1.58it/s]
Loading safetensors checkpoint shards: 54% Completed | 20/37 [00:11<00:09, 1.88it/s]
Loading safetensors checkpoint shards: 62% Completed | 23/37 [00:11<00:04, 2.86it/s]
Loading safetensors checkpoint shards: 46% Completed | 17/37 [00:11<00:11, 1.70it/s]
Loading safetensors checkpoint shards: 65% Completed | 24/37 [00:12<00:05, 2.25it/s]
Loading safetensors checkpoint shards: 49% Completed | 18/37 [00:12<00:13, 1.43it/s]
Loading safetensors checkpoint shards: 54% Completed | 20/37 [00:12<00:09, 1.87it/s]
Loading safetensors checkpoint shards: 78% Completed | 29/37 [00:13<00:02, 3.27it/s]
Loading safetensors checkpoint shards: 62% Completed | 23/37 [00:13<00:05, 2.79it/s]
Loading safetensors checkpoint shards: 86% Completed | 32/37 [00:14<00:01, 3.28it/s]
Loading safetensors checkpoint shards: 65% Completed | 24/37 [00:13<00:05, 2.22it/s]
Loading safetensors checkpoint shards: 92% Completed | 34/37 [00:15<00:01, 2.93it/s]
Loading safetensors checkpoint shards: 78% Completed | 29/37 [00:14<00:02, 3.36it/s]
Loading safetensors checkpoint shards: 100% Completed | 37/37 [00:16<00:00, 3.28it/s]
Loading safetensors checkpoint shards: 100% Completed | 37/37 [00:16<00:00, 2.31it/s]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533573)
Loading safetensors checkpoint shards: 86% Completed | 32/37 [00:15<00:01, 3.29it/s]
Loading safetensors checkpoint shards: 92% Completed | 34/37 [00:16<00:00, 3.16it/s]
Loading safetensors checkpoint shards: 100% Completed | 37/37 [00:17<00:00, 3.51it/s]
Loading safetensors checkpoint shards: 100% Completed | 37/37 [00:17<00:00, 2.15it/s]
(EngineCore_0 pid=532332) (RayWorkerWrapper pid=533577)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) INFO 08-07 04:48:34 [__init__.py:1381] Found nccl from library libnccl.so.2
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) INFO 08-07 04:48:34 [pynccl.py:70] vLLM is using nccl==2.26.2
(EngineCore_0 pid=532339) (pid=533576) INFO 08-07 04:48:27 [__init__.py:241] Automatically detected platform cuda. [repeated 3x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/user-guides/configure-logging.html#log-deduplication for more options.)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) INFO 08-07 04:48:35 [custom_all_reduce.py:35] Skipping P2P check and trusting the driver's P2P report.
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) INFO 08-07 04:48:35 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[1], buffer_handle=(1, 4194304, 6, 'psm_36454afc'), local_subscribe_addr='ipc:///tmp/fe5aadca-db44-40c9-9a51-beb1d0e7160f', remote_subscribe_addr=None, remote_addr_ipv6=False)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) INFO 08-07 04:48:35 [parallel_state.py:1124] rank 2 in world size 4 is assigned as DP rank 0, PP rank 1, TP rank 0, EP rank 0
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) INFO 08-07 04:48:35 [nixl_connector.py:52] NIXL is available
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) INFO 08-07 04:48:35 [factory.py:56] Creating v1 connector with name: NixlConnector and engine_id: fc326be4-205e-4d88-8830-950502d3c24a
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) INFO 08-07 04:48:35 [nixl_connector.py:433] Initializing NIXL wrapper
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) INFO 08-07 04:48:35 [nixl_connector.py:434] Initializing NIXL worker fc326be4-205e-4d88-8830-950502d3c24a
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) Backend UCX was instantiated
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) Initialized NIXL agent: a8ab2334-456e-447c-861b-74caf1bdc6e9
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) INFO 08-07 04:48:36 [cuda.py:327] Using Flash Attention backend on V1 engine.
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) INFO 08-07 04:48:36 [topk_topp_sampler.py:49] Using FlashInfer for top-p & top-k sampling.
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) INFO 08-07 04:48:36 [gpu_model_runner.py:1908] Starting to load model /workspace/models/Qwen2.5-72B...
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) INFO 08-07 04:48:36 [gpu_model_runner.py:1940] Loading model from scratch...
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) INFO 08-07 04:48:52 [default_loader.py:262] Loading weights took 15.79 seconds
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) INFO 08-07 04:48:35 [__init__.py:1381] Found nccl from library libnccl.so.2 [repeated 7x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) INFO 08-07 04:48:35 [pynccl.py:70] vLLM is using nccl==2.26.2 [repeated 7x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) INFO 08-07 04:48:35 [custom_all_reduce.py:35] Skipping P2P check and trusting the driver's P2P report. [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533573) INFO 08-07 04:48:35 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[1], buffer_handle=(1, 4194304, 6, 'psm_5fdba0da'), local_subscribe_addr='ipc:///tmp/53f9d69c-d870-4a7e-a639-7f41e8c8db89', remote_subscribe_addr=None, remote_addr_ipv6=False)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) INFO 08-07 04:48:35 [parallel_state.py:1124] rank 3 in world size 4 is assigned as DP rank 0, PP rank 1, TP rank 1, EP rank 1 [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) INFO 08-07 04:48:35 [nixl_connector.py:52] NIXL is available [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) INFO 08-07 04:48:35 [factory.py:56] Creating v1 connector with name: NixlConnector and engine_id: fc326be4-205e-4d88-8830-950502d3c24a [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) INFO 08-07 04:48:35 [nixl_connector.py:433] Initializing NIXL wrapper [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) INFO 08-07 04:48:35 [nixl_connector.py:434] Initializing NIXL worker fc326be4-205e-4d88-8830-950502d3c24a [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) Backend UCX was instantiated [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) Initialized NIXL agent: a7afa6a4-8162-44a4-98ce-e3e92907ec98 [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) INFO 08-07 04:48:36 [cuda.py:327] Using Flash Attention backend on V1 engine. [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) INFO 08-07 04:48:36 [topk_topp_sampler.py:49] Using FlashInfer for top-p & top-k sampling. [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) INFO 08-07 04:48:36 [gpu_model_runner.py:1908] Starting to load model /workspace/models/Qwen2.5-72B... [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) INFO 08-07 04:48:36 [gpu_model_runner.py:1940] Loading model from scratch... [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) INFO 08-07 04:48:53 [gpu_model_runner.py:1957] Model loading took 33.9275 GiB and 16.172315 seconds
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) INFO 08-07 04:48:55 [gpu_worker.py:278] Available KV cache memory: 19.15 GiB
(EngineCore_0 pid=532339) INFO 08-07 04:48:56 [kv_cache_utils.py:829] GPU KV cache size: 261,920 tokens
(EngineCore_0 pid=532339) INFO 08-07 04:48:56 [kv_cache_utils.py:833] Maximum concurrency for 131,072 tokens per request: 2.00x
(EngineCore_0 pid=532339) INFO 08-07 04:48:56 [kv_cache_utils.py:829] GPU KV cache size: 261,920 tokens
(EngineCore_0 pid=532339) INFO 08-07 04:48:56 [kv_cache_utils.py:833] Maximum concurrency for 131,072 tokens per request: 2.00x
(EngineCore_0 pid=532339) INFO 08-07 04:48:56 [kv_cache_utils.py:829] GPU KV cache size: 250,960 tokens
(EngineCore_0 pid=532339) INFO 08-07 04:48:56 [kv_cache_utils.py:833] Maximum concurrency for 131,072 tokens per request: 1.91x
(EngineCore_0 pid=532339) INFO 08-07 04:48:56 [kv_cache_utils.py:829] GPU KV cache size: 250,960 tokens
(EngineCore_0 pid=532339) INFO 08-07 04:48:56 [kv_cache_utils.py:833] Maximum concurrency for 131,072 tokens per request: 1.91x
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] EngineCore failed to start.
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] Traceback (most recent call last):
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/workspace/yrr/vllm/vllm/v1/engine/core.py", line 675, in run_engine_core
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/workspace/yrr/vllm/vllm/v1/engine/core.py", line 476, in __init__
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] super().__init__(vllm_config, executor_class, log_stats,
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/workspace/yrr/vllm/vllm/v1/engine/core.py", line 87, in __init__
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] self._initialize_kv_caches(vllm_config)
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/workspace/yrr/vllm/vllm/v1/engine/core.py", line 197, in _initialize_kv_caches
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] self.model_executor.initialize_from_config(kv_cache_configs)
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/workspace/yrr/vllm/vllm/v1/executor/abstract.py", line 64, in initialize_from_config
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] self.collective_rpc("initialize_from_config",
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/workspace/yrr/vllm/vllm/executor/executor_base.py", line 309, in collective_rpc
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] return self._run_workers(method, *args, **(kwargs or {}))
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/workspace/yrr/vllm/vllm/executor/ray_distributed_executor.py", line 503, in _run_workers
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] ray_worker_outputs = ray.get(ray_worker_outputs)
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/usr/local/lib/python3.12/dist-packages/ray/_private/auto_init_hook.py", line 22, in auto_init_wrapper
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] return fn(*args, **kwargs)
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] ^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/usr/local/lib/python3.12/dist-packages/ray/_private/client_mode_hook.py", line 104, in wrapper
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] return func(*args, **kwargs)
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/usr/local/lib/python3.12/dist-packages/ray/_private/worker.py", line 2858, in get
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/usr/local/lib/python3.12/dist-packages/ray/_private/worker.py", line 958, in get_objects
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] raise value.as_instanceof_cause()
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] ray.exceptions.RayTaskError(nixlBackendError): ray::RayWorkerWrapper.execute_method() (pid=533564, ip=10.90.67.84, actor_id=3a88db21fca9a102a65522e401000000, repr=<vllm.executor.ray_utils.RayWorkerWrapper object at 0x7f54dde70380>)
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/workspace/yrr/vllm/vllm/worker/worker_base.py", line 620, in execute_method
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] raise e
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/workspace/yrr/vllm/vllm/worker/worker_base.py", line 611, in execute_method
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] return run_method(self, method, args, kwargs)
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/workspace/yrr/vllm/vllm/utils/__init__.py", line 2948, in run_method
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] return func(*args, **kwargs)
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/workspace/yrr/vllm/vllm/worker/worker_base.py", line 598, in initialize_from_config
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] self.worker.initialize_from_config(kv_cache_config) # type: ignore
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/workspace/yrr/vllm/vllm/v1/worker/gpu_worker.py", line 299, in initialize_from_config
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] self.model_runner.initialize_kv_cache(kv_cache_config)
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/workspace/yrr/vllm/vllm/v1/worker/gpu_model_runner.py", line 2927, in initialize_kv_cache
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] get_kv_transfer_group().register_kv_caches(kv_caches)
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/workspace/yrr/vllm/vllm/distributed/kv_transfer/kv_connector/v1/nixl_connector.py", line 193, in register_kv_caches
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] self.connector_worker.register_kv_caches(kv_caches)
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/workspace/yrr/vllm/vllm/distributed/kv_transfer/kv_connector/v1/nixl_connector.py", line 816, in register_kv_caches
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] self.nixl_wrapper.register_memory(descs)
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] File "/usr/local/lib/python3.12/dist-packages/nixl/_api.py", line 266, in register_memory
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] self.agent.registerMem(reg_descs, handle_list)
(EngineCore_0 pid=532339) ERROR 08-07 04:48:56 [core.py:684] nixl._bindings.nixlBackendError: NIXL_ERR_BACKEND
(EngineCore_0 pid=532339) Process EngineCore_0:
(EngineCore_0 pid=532339) Traceback (most recent call last):
(EngineCore_0 pid=532339) File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
(EngineCore_0 pid=532339) self.run()
(EngineCore_0 pid=532339) File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
(EngineCore_0 pid=532339) self._target(*self._args, **self._kwargs)
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/v1/engine/core.py", line 688, in run_engine_core
(EngineCore_0 pid=532339) raise e
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/v1/engine/core.py", line 675, in run_engine_core
(EngineCore_0 pid=532339) engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/v1/engine/core.py", line 476, in __init__
(EngineCore_0 pid=532339) super().__init__(vllm_config, executor_class, log_stats,
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/v1/engine/core.py", line 87, in __init__
(EngineCore_0 pid=532339) self._initialize_kv_caches(vllm_config)
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/v1/engine/core.py", line 197, in _initialize_kv_caches
(EngineCore_0 pid=532339) self.model_executor.initialize_from_config(kv_cache_configs)
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/v1/executor/abstract.py", line 64, in initialize_from_config
(EngineCore_0 pid=532339) self.collective_rpc("initialize_from_config",
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/executor/executor_base.py", line 309, in collective_rpc
(EngineCore_0 pid=532339) return self._run_workers(method, *args, **(kwargs or {}))
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/executor/ray_distributed_executor.py", line 503, in _run_workers
(EngineCore_0 pid=532339) ray_worker_outputs = ray.get(ray_worker_outputs)
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) File "/usr/local/lib/python3.12/dist-packages/ray/_private/auto_init_hook.py", line 22, in auto_init_wrapper
(EngineCore_0 pid=532339) return fn(*args, **kwargs)
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) File "/usr/local/lib/python3.12/dist-packages/ray/_private/client_mode_hook.py", line 104, in wrapper
(EngineCore_0 pid=532339) return func(*args, **kwargs)
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) File "/usr/local/lib/python3.12/dist-packages/ray/_private/worker.py", line 2858, in get
(EngineCore_0 pid=532339) values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) File "/usr/local/lib/python3.12/dist-packages/ray/_private/worker.py", line 958, in get_objects
(EngineCore_0 pid=532339) raise value.as_instanceof_cause()
(EngineCore_0 pid=532339) ray.exceptions.RayTaskError(nixlBackendError): ray::RayWorkerWrapper.execute_method() (pid=533564, ip=10.90.67.84, actor_id=3a88db21fca9a102a65522e401000000, repr=<vllm.executor.ray_utils.RayWorkerWrapper object at 0x7f54dde70380>)
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/worker/worker_base.py", line 620, in execute_method
(EngineCore_0 pid=532339) raise e
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/worker/worker_base.py", line 611, in execute_method
(EngineCore_0 pid=532339) return run_method(self, method, args, kwargs)
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/utils/__init__.py", line 2948, in run_method
(EngineCore_0 pid=532339) return func(*args, **kwargs)
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/worker/worker_base.py", line 598, in initialize_from_config
(EngineCore_0 pid=532339) self.worker.initialize_from_config(kv_cache_config) # type: ignore
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/v1/worker/gpu_worker.py", line 299, in initialize_from_config
(EngineCore_0 pid=532339) self.model_runner.initialize_kv_cache(kv_cache_config)
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/v1/worker/gpu_model_runner.py", line 2927, in initialize_kv_cache
(EngineCore_0 pid=532339) get_kv_transfer_group().register_kv_caches(kv_caches)
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/distributed/kv_transfer/kv_connector/v1/nixl_connector.py", line 193, in register_kv_caches
(EngineCore_0 pid=532339) self.connector_worker.register_kv_caches(kv_caches)
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/distributed/kv_transfer/kv_connector/v1/nixl_connector.py", line 816, in register_kv_caches
(EngineCore_0 pid=532339) self.nixl_wrapper.register_memory(descs)
(EngineCore_0 pid=532339) File "/usr/local/lib/python3.12/dist-packages/nixl/_api.py", line 266, in register_memory
(EngineCore_0 pid=532339) self.agent.registerMem(reg_descs, handle_list)
(EngineCore_0 pid=532339) nixl._bindings.nixlBackendError: NIXL_ERR_BACKEND
(EngineCore_0 pid=532339) INFO 08-07 04:48:56 [ray_distributed_executor.py:120] Shutting down Ray distributed executor. If you see error log from logging.cc regarding SIGTERM received, please ignore because this is the expected termination process in Ray.
(EngineCore_0 pid=532339) 2025-08-07 04:48:56,523 ERROR worker.py:427 -- Unhandled error (suppress with 'RAY_IGNORE_UNHANDLED_ERRORS=1'): ray::RayWorkerWrapper.execute_method() (pid=533576, ip=10.90.67.84, actor_id=0184180be9fa4eb26b26337701000000, repr=<vllm.executor.ray_utils.RayWorkerWrapper object at 0x7f7cd964c080>)
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/worker/worker_base.py", line 620, in execute_method
(EngineCore_0 pid=532339) raise e
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/worker/worker_base.py", line 611, in execute_method
(EngineCore_0 pid=532339) return run_method(self, method, args, kwargs)
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/utils/__init__.py", line 2948, in run_method
(EngineCore_0 pid=532339) return func(*args, **kwargs)
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/worker/worker_base.py", line 598, in initialize_from_config
(EngineCore_0 pid=532339) self.worker.initialize_from_config(kv_cache_config) # type: ignore
(EngineCore_0 pid=532339) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/v1/worker/gpu_worker.py", line 299, in initialize_from_config
(EngineCore_0 pid=532339) self.model_runner.initialize_kv_cache(kv_cache_config)
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/v1/worker/gpu_model_runner.py", line 2927, in initialize_kv_cache
(EngineCore_0 pid=532339) get_kv_transfer_group().register_kv_caches(kv_caches)
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/distributed/kv_transfer/kv_connector/v1/nixl_connector.py", line 193, in register_kv_caches
(EngineCore_0 pid=532339) self.connector_worker.register_kv_caches(kv_caches)
(EngineCore_0 pid=532339) File "/workspace/yrr/vllm/vllm/distributed/kv_transfer/kv_connector/v1/nixl_connector.py", line 816, in register_kv_caches
(EngineCore_0 pid=532339) self.nixl_wrapper.register_memory(descs)
(EngineCore_0 pid=532339) File "/usr/local/lib/python3.12/dist-packages/nixl/_api.py", line 266, in register_memory
(EngineCore_0 pid=532339) self.agent.registerMem(reg_descs, handle_list)
(EngineCore_0 pid=532339) nixl._bindings.nixlBackendError: NIXL_ERR_BACKEND
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) INFO 08-07 04:48:56 [utils.py:113] Connectors do not specify a kv cache layout, defaulting to NHD.
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) INFO 08-07 04:48:56 [nixl_connector.py:758] Registering KV_Caches. use_mla: False, kv_buffer_device: cuda, use_host_buffer: False, num_blocks: 15685, block_shape: torch.Size([16, 4, 128]), per_layer_kv_cache_shape: torch.Size([2, 15685, 16, 4, 128])
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] Error executing method 'initialize_from_config'. This might cause deadlock in distributed execution.
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] Traceback (most recent call last):
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] File "/workspace/yrr/vllm/vllm/worker/worker_base.py", line 611, in execute_method
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] return run_method(self, method, args, kwargs)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] File "/workspace/yrr/vllm/vllm/utils/__init__.py", line 2948, in run_method
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] return func(*args, **kwargs)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] File "/usr/local/lib/python3.12/dist-packages/ray/util/tracing/tracing_helper.py", line 461, in _resume_span
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] return method(self, *_args, **_kwargs)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] File "/workspace/yrr/vllm/vllm/worker/worker_base.py", line 598, in initialize_from_config
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] self.worker.initialize_from_config(kv_cache_config) # type: ignore
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] File "/workspace/yrr/vllm/vllm/v1/worker/gpu_worker.py", line 299, in initialize_from_config
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] self.model_runner.initialize_kv_cache(kv_cache_config)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] File "/workspace/yrr/vllm/vllm/v1/worker/gpu_model_runner.py", line 2927, in initialize_kv_cache
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] get_kv_transfer_group().register_kv_caches(kv_caches)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] File "/workspace/yrr/vllm/vllm/distributed/kv_transfer/kv_connector/v1/nixl_connector.py", line 193, in register_kv_caches
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] self.connector_worker.register_kv_caches(kv_caches)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] File "/workspace/yrr/vllm/vllm/distributed/kv_transfer/kv_connector/v1/nixl_connector.py", line 816, in register_kv_caches
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] self.nixl_wrapper.register_memory(descs)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] File "/usr/local/lib/python3.12/dist-packages/nixl/_api.py", line 266, in register_memory
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] self.agent.registerMem(reg_descs, handle_list)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533564) ERROR 08-07 04:48:56 [worker_base.py:619] nixl._bindings.nixlBackendError: NIXL_ERR_BACKEND
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533573) INFO 08-07 04:48:53 [default_loader.py:262] Loading weights took 16.08 seconds [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533573) INFO 08-07 04:48:53 [gpu_model_runner.py:1957] Model loading took 33.9275 GiB and 16.430247 seconds [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533573) INFO 08-07 04:48:56 [gpu_worker.py:278] Available KV cache memory: 19.98 GiB [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533559) INFO 08-07 04:48:56 [utils.py:113] Connectors do not specify a kv cache layout, defaulting to NHD. [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533559) INFO 08-07 04:48:56 [nixl_connector.py:758] Registering KV_Caches. use_mla: False, kv_buffer_device: cuda, use_host_buffer: False, num_blocks: 16370, block_shape: torch.Size([16, 4, 128]), per_layer_kv_cache_shape: torch.Size([2, 16370, 16, 4, 128]) [repeated 3x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] Error executing method 'initialize_from_config'. This might cause deadlock in distributed execution.
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] Traceback (most recent call last):
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] File "/workspace/yrr/vllm/vllm/worker/worker_base.py", line 611, in execute_method
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] return run_method(self, method, args, kwargs)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] File "/workspace/yrr/vllm/vllm/utils/__init__.py", line 2948, in run_method
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] return func(*args, **kwargs)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] File "/usr/local/lib/python3.12/dist-packages/ray/util/tracing/tracing_helper.py", line 461, in _resume_span
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] return method(self, *_args, **_kwargs)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] File "/workspace/yrr/vllm/vllm/worker/worker_base.py", line 598, in initialize_from_config
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] self.worker.initialize_from_config(kv_cache_config) # type: ignore
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] File "/workspace/yrr/vllm/vllm/v1/worker/gpu_worker.py", line 299, in initialize_from_config
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] self.model_runner.initialize_kv_cache(kv_cache_config)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] File "/workspace/yrr/vllm/vllm/v1/worker/gpu_model_runner.py", line 2927, in initialize_kv_cache
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] get_kv_transfer_group().register_kv_caches(kv_caches)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] File "/workspace/yrr/vllm/vllm/distributed/kv_transfer/kv_connector/v1/nixl_connector.py", line 816, in register_kv_caches [repeated 2x across cluster]
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] self.connector_worker.register_kv_caches(kv_caches)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] self.nixl_wrapper.register_memory(descs)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] File "/usr/local/lib/python3.12/dist-packages/nixl/_api.py", line 266, in register_memory
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] self.agent.registerMem(reg_descs, handle_list)
(EngineCore_0 pid=532339) (RayWorkerWrapper pid=533576) ERROR 08-07 04:48:56 [worker_base.py:619] nixl._bindings.nixlBackendError: NIXL_ERR_BACKEND
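For what it's worth, the UCX_MO plugin errors near the top of the log suggest NIXL cannot find libplugin_UCX.so in its plugin directory, which may be related to the register_memory failure. Below is a quick sanity check I would run from the same environment; NIXL_PLUGIN_DIR is, as far as I know, the override NIXL honors for its plugin search path, so treat that part as an assumption.

import ctypes, glob, os

# Plugin directory NIXL complained about in the worker logs.
plugin_dir = "/usr/local/lib/x86_64-linux-gnu/plugins"

# Which NIXL backend plugins are actually installed?
print(sorted(glob.glob(os.path.join(plugin_dir, "libplugin_*.so"))))

# Try loading the UCX backend plugin directly; a missing file or unresolved
# dependency reproduces the "cannot open shared object file" error above.
try:
    ctypes.CDLL(os.path.join(plugin_dir, "libplugin_UCX.so"))
    print("libplugin_UCX.so loads fine")
except OSError as err:
    print("failed to load UCX plugin:", err)

# If the plugins live somewhere else, pointing NIXL at them explicitly may help:
# os.environ["NIXL_PLUGIN_DIR"] = "/path/to/nixl/plugins"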