[Bug]: Engine iteration timed out. This should never happen! #4430

@itechbear

Description

UPDATE on 2024-05-23

Workaround: Use the --disable-custom-all-reduce flag when starting the vLLM instance. Thanks @ywang96 !
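
For reference, this is just the flag appended to the same server invocation that appears in the py-spy dump further down:

python3 -m vllm.entrypoints.openai.api_server \
    --model /models/my-model \
    --tensor-parallel-size 4 \
    --gpu-memory-utilization 0.9 \
    --enable-prefix-caching \
    --disable-custom-all-reduce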

The original post follows.

🐛 Describe the bug

Summary

During inference, a model execution thread mysteriously hangs at _random_sample (vllm/model_executor/layers/sampler.py:292); the code at that line is random_samples = random_samples.cpu()
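
Our reading of this (a minimal sketch of PyTorch behavior, not vLLM code): .cpu() is a device-to-host copy that synchronizes with the CUDA stream, so the Python thread blocks until every kernel queued before it has finished. If an earlier GPU operation (for example the custom all-reduce) never completes, the thread parks at exactly this line.

import torch

# Minimal illustration (not vLLM code): .cpu() triggers a device-to-host
# copy that waits for all work already queued on the CUDA stream, so a
# hung kernel queued earlier keeps this call from ever returning.
if torch.cuda.is_available():
    x = torch.randn(4, 8, device="cuda")
    probs = torch.softmax(x, dim=-1)   # enqueued asynchronously
    probs_host = probs.cpu()           # blocks until the stream drains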

What happened

We upgraded vLLM from v0.3.3 to 0.4.x and found that it occasionally got stuck and refused to serve requests. From the vLLM log we saw that a request never finished. Digging deeper, we found that a worker thread had stalled during execution.

Your current environment

vLLM was running inside a Docker container. The following was collected from inside the container.

Collecting environment information...
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-3.10.0-1062.9.1.el7.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB

Nvidia driver version: 525.85.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Address sizes:                   46 bits physical, 57 bits virtual
Byte Order:                      Little Endian
CPU(s):                          128
On-line CPU(s) list:             0-127
Vendor ID:                       GenuineIntel
Model name:                      Intel(R) Xeon(R) Platinum 8350C CPU @ 2.60GHz
CPU family:                      6
Model:                           106
Thread(s) per core:              2
Core(s) per socket:              32
Socket(s):                       2
Stepping:                        6
Frequency boost:                 enabled
CPU max MHz:                     3500.0000
CPU min MHz:                     800.0000
BogoMIPS:                        5200.00
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 invpcid_single intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear pconfig spec_ctrl intel_stibp flush_l1d arch_capabilities
Virtualization:                  VT-x
L1d cache:                       3 MiB (64 instances)
L1i cache:                       2 MiB (64 instances)
L2 cache:                        80 MiB (64 instances)
L3 cache:                        96 MiB (2 instances)
NUMA node(s):                    2
NUMA node0 CPU(s):               0-31,64-95
NUMA node1 CPU(s):               32-63,96-127
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1:        Mitigation; Load fences, usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Vulnerable, IBPB
Vulnerability Tsx async abort:   Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.19.3
[pip3] torch==2.2.1
[pip3] triton==2.2.0
[pip3] vllm-nccl-cu12==2.18.1.0.3.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0	GPU1	GPU2	GPU3	NIC0	NIC1	NIC2	NIC3	NIC4	NIC5	NIC6	NIC7	CPU Affinity	NUMA Affinity
GPU0	 X 	NV8	NV8	NV8	PXB	PXB	NODE	NODE	SYS	SYS	SYS	SYS	0-31,64-95	0
GPU1	NV8	 X 	NV8	NV8	PXB	PXB	NODE	NODE	SYS	SYS	SYS	SYS	0-31,64-95	0
GPU2	NV8	NV8	 X 	NV8	NODE	NODE	PXB	PXB	SYS	SYS	SYS	SYS	0-31,64-95	0
GPU3	NV8	NV8	NV8	 X 	NODE	NODE	PXB	PXB	SYS	SYS	SYS	SYS	0-31,64-95	0
NIC0	PXB	PXB	NODE	NODE	 X 	PIX	NODE	NODE	SYS	SYS	SYS	SYS
NIC1	PXB	PXB	NODE	NODE	PIX	 X 	NODE	NODE	SYS	SYS	SYS	SYS
NIC2	NODE	NODE	PXB	PXB	NODE	NODE	 X 	PIX	SYS	SYS	SYS	SYS
NIC3	NODE	NODE	PXB	PXB	NODE	NODE	PIX	 X 	SYS	SYS	SYS	SYS
NIC4	SYS	SYS	SYS	SYS	SYS	SYS	SYS	SYS	 X 	PIX	NODE	NODE
NIC5	SYS	SYS	SYS	SYS	SYS	SYS	SYS	SYS	PIX	 X 	NODE	NODE
NIC6	SYS	SYS	SYS	SYS	SYS	SYS	SYS	SYS	NODE	NODE	 X 	PIX
NIC7	SYS	SYS	SYS	SYS	SYS	SYS	SYS	SYS	NODE	NODE	PIX	 X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3
  NIC4: mlx5_4
  NIC5: mlx5_5
  NIC6: mlx5_6
  NIC7: mlx5_7

Stacktrace

ERROR 04-28 16:01:15 async_llm_engine.py:499] Engine iteration timed out. This should never happen!
ERROR 04-28 16:01:15 async_llm_engine.py:43] Engine background task failed
ERROR 04-28 16:01:15 async_llm_engine.py:43] Traceback (most recent call last):
ERROR 04-28 16:01:15 async_llm_engine.py:43]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 470, in engine_step
ERROR 04-28 16:01:15 async_llm_engine.py:43]     request_outputs = await self.engine.step_async()
ERROR 04-28 16:01:15 async_llm_engine.py:43]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 213, in step_async
ERROR 04-28 16:01:15 async_llm_engine.py:43]     output = await self.model_executor.execute_model_async(
ERROR 04-28 16:01:15 async_llm_engine.py:43]   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/ray_gpu_executor.py", line 418, in execute_model_async
ERROR 04-28 16:01:15 async_llm_engine.py:43]     all_outputs = await self._run_workers_async(
ERROR 04-28 16:01:15 async_llm_engine.py:43]   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/ray_gpu_executor.py", line 408, in _run_workers_async
ERROR 04-28 16:01:15 async_llm_engine.py:43]     all_outputs = await asyncio.gather(*coros)
ERROR 04-28 16:01:15 async_llm_engine.py:43] asyncio.exceptions.CancelledError
ERROR 04-28 16:01:15 async_llm_engine.py:43]
ERROR 04-28 16:01:15 async_llm_engine.py:43] During handling of the above exception, another exception occurred:
ERROR 04-28 16:01:15 async_llm_engine.py:43]
ERROR 04-28 16:01:15 async_llm_engine.py:43] Traceback (most recent call last):
ERROR 04-28 16:01:15 async_llm_engine.py:43]   File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
ERROR 04-28 16:01:15 async_llm_engine.py:43]     return fut.result()
ERROR 04-28 16:01:15 async_llm_engine.py:43] asyncio.exceptions.CancelledError
ERROR 04-28 16:01:15 async_llm_engine.py:43]
ERROR 04-28 16:01:15 async_llm_engine.py:43] The above exception was the direct cause of the following exception:
ERROR 04-28 16:01:15 async_llm_engine.py:43]
ERROR 04-28 16:01:15 async_llm_engine.py:43] Traceback (most recent call last):
ERROR 04-28 16:01:15 async_llm_engine.py:43]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 38, in _raise_exception_on_finish
ERROR 04-28 16:01:15 async_llm_engine.py:43]     task.result()
ERROR 04-28 16:01:15 async_llm_engine.py:43]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 496, in run_engine_loop
ERROR 04-28 16:01:15 async_llm_engine.py:43]     has_requests_in_progress = await asyncio.wait_for(
ERROR 04-28 16:01:15 async_llm_engine.py:43]   File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
ERROR 04-28 16:01:15 async_llm_engine.py:43]     raise exceptions.TimeoutError() from exc
ERROR 04-28 16:01:15 async_llm_engine.py:43] asyncio.exceptions.TimeoutError
ERROR:asyncio:Exception in callback functools.partial(<function _raise_exception_on_finish at 0x7fdd6c983370>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7fddf0f1eb60>>)
handle: <Handle functools.partial(<function _raise_exception_on_finish at 0x7fdd6c983370>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7fddf0f1eb60>>)>
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 470, in engine_step
    request_outputs = await self.engine.step_async()
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 213, in step_async
    output = await self.model_executor.execute_model_async(
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/ray_gpu_executor.py", line 418, in execute_model_async
    all_outputs = await self._run_workers_async(
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/ray_gpu_executor.py", line 408, in _run_workers_async
    all_outputs = await asyncio.gather(*coros)
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
    return fut.result()
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 38, in _raise_exception_on_finish
    task.result()
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 496, in run_engine_loop
    has_requests_in_progress = await asyncio.wait_for(
  File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
    raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 45, in _raise_exception_on_finish
    raise AsyncEngineDeadError(
vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause.
INFO 04-28 16:01:15 async_llm_engine.py:154] Aborted request cmpl-c2bd7aff0e9141b685c9e33e8e7135cb-0.
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/starlette/responses.py", line 265, in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
  File "/usr/local/lib/python3.10/dist-packages/starlette/responses.py", line 261, in wrap
    await func()
  File "/usr/local/lib/python3.10/dist-packages/starlette/responses.py", line 238, in listen_for_disconnect
    message = await receive()
  File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/httptools_impl.py", line 568, in receive
    await self.message_event.wait()
  File "/usr/lib/python3.10/asyncio/locks.py", line 214, in wait
    await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7fd74838cdc0

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 75, in app
    await response(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/responses.py", line 258, in __call__
    async with anyio.create_task_group() as task_group:
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 678, in __aexit__
    raise BaseExceptionGroup(
exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)

Request Log

...
INFO 04-28 16:24:30 metrics.py:229] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%
INFO 04-28 16:24:40 metrics.py:229] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%
INFO 04-28 16:24:50 metrics.py:229] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%
INFO 04-28 16:25:00 metrics.py:229] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%
INFO 04-28 16:25:10 metrics.py:229] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%
...

The Running: 1 reqs count never dropped back to Running: 0 reqs.

NCCL Error

After some time, the workers complained about an NCCL collective timeout:

(RayWorkerWrapper pid=7156) [rank1]:[E ProcessGroupNCCL.cpp:523] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=5447, OpType=GATHER, NumelIn=9872, NumelOut=0, Timeout(ms)=600000) ran for 600195 milliseconds before timing out.
(RayWorkerWrapper pid=7156) [rank1]:[E ProcessGroupNCCL.cpp:537] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
(RayWorkerWrapper pid=7156) [rank1]:[E ProcessGroupNCCL.cpp:543] To avoid data inconsistency, we are taking the entire process down.
(RayWorkerWrapper pid=7156) [rank1]:[E ProcessGroupNCCL.cpp:1182] [Rank 1] NCCL watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=5447, OpType=GATHER, NumelIn=9872, NumelOut=0, Timeout(ms)=600000) ran for 600195 milliseconds before timing out.
(RayWorkerWrapper pid=7156) Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:525 (most recent call first):
(RayWorkerWrapper pid=7156) frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f7fd00ced87 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
(RayWorkerWrapper pid=7156) frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1e6 (0x7f7b1dcab6e6 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda.so)
(RayWorkerWrapper pid=7156) frame #2: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x19d (0x7f7b1dcaec3d in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda.so)
(RayWorkerWrapper pid=7156) frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7f7b1dcaf839 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda.so)
(RayWorkerWrapper pid=7156) frame #4: <unknown function> + 0xdc253 (0x7f7fd5cbd253 in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
(RayWorkerWrapper pid=7156) frame #5: <unknown function> + 0x94ac3 (0x7f7fd7affac3 in /usr/lib/x86_64-linux-gnu/libc.so.6)
(RayWorkerWrapper pid=7156) frame #6: clone + 0x44 (0x7f7fd7b90a04 in /usr/lib/x86_64-linux-gnu/libc.so.6)
(RayWorkerWrapper pid=7156)
(RayWorkerWrapper pid=7156) [2024-04-28 16:10:16,129 E 7156 7629] logging.cc:101: Unhandled exception: N3c1016DistBackendErrorE. what(): [Rank 1] NCCL watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=5447, OpType=GATHER, NumelIn=9872, NumelOut=0, Timeout(ms)=600000) ran for 600195 milliseconds before timing out.
(RayWorkerWrapper pid=7156) Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:525 (most recent call first):
(RayWorkerWrapper pid=7156) frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f7fd00ced87 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
(RayWorkerWrapper pid=7156) frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x1e6 (0x7f7b1dcab6e6 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda.so)
(RayWorkerWrapper pid=7156) frame #2: c10d::ProcessGroupNCCL::workCleanupLoop() + 0x19d (0x7f7b1dcaec3d in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda.so)
(RayWorkerWrapper pid=7156) frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x119 (0x7f7b1dcaf839 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda.so)
(RayWorkerWrapper pid=7156) frame #4: <unknown function> + 0xdc253 (0x7f7fd5cbd253 in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
(RayWorkerWrapper pid=7156) frame #5: <unknown function> + 0x94ac3 (0x7f7fd7affac3 in /usr/lib/x86_64-linux-gnu/libc.so.6)
(RayWorkerWrapper pid=7156) frame #6: clone + 0x44 (0x7f7fd7b90a04 in /usr/lib/x86_64-linux-gnu/libc.so.6)
(RayWorkerWrapper pid=7156)
(RayWorkerWrapper pid=7156) Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1186 (most recent call first):
(RayWorkerWrapper pid=7156) frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f7fd00ced87 in /usr/local/lib/python3.10/dist-packages/torch/lib/libc10.so)
(RayWorkerWrapper pid=7156) frame #1: <unknown function> + 0xdf6b11 (0x7f7b1da05b11 in /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda.so)
(RayWorkerWrapper pid=7156) frame #2: <unknown function> + 0xdc253 (0x7f7fd5cbd253 in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
(RayWorkerWrapper pid=7156) frame #3: <unknown function> + 0x94ac3 (0x7f7fd7affac3 in /usr/lib/x86_64-linux-gnu/libc.so.6)
(RayWorkerWrapper pid=7156) frame #4: clone + 0x44 (0x7f7fd7b90a04 in /usr/lib/x86_64-linux-gnu/libc.so.6)
(RayWorkerWrapper pid=7156)
(RayWorkerWrapper pid=7156) [2024-04-28 16:10:16,138 E 7156 7629] logging.cc:108: Stack trace:
(RayWorkerWrapper pid=7156)  /usr/local/lib/python3.10/dist-packages/ray/_raylet.so(+0xfe64fa) [0x7f7fd6df34fa] ray::operator<<()
(RayWorkerWrapper pid=7156) /usr/local/lib/python3.10/dist-packages/ray/_raylet.so(+0xfe8fb8) [0x7f7fd6df5fb8] ray::TerminateHandler()
(RayWorkerWrapper pid=7156) /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0xae20c) [0x7f7fd5c8f20c]
(RayWorkerWrapper pid=7156) /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0xae277) [0x7f7fd5c8f277]
(RayWorkerWrapper pid=7156) /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0xae1fe) [0x7f7fd5c8f1fe]
(RayWorkerWrapper pid=7156) /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda.so(+0xdf6bcc) [0x7f7b1da05bcc] c10d::ProcessGroupNCCL::ncclCommWatchdog()
(RayWorkerWrapper pid=7156) /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0xdc253) [0x7f7fd5cbd253]
(RayWorkerWrapper pid=7156) /usr/lib/x86_64-linux-gnu/libc.so.6(+0x94ac3) [0x7f7fd7affac3]
(RayWorkerWrapper pid=7156) /usr/lib/x86_64-linux-gnu/libc.so.6(clone+0x44) [0x7f7fd7b90a04] __clone
(RayWorkerWrapper pid=7156)
(RayWorkerWrapper pid=7156) *** SIGABRT received at time=1714291816 on cpu 51 ***
(RayWorkerWrapper pid=7156) PC: @     0x7f7fd7b019fc  (unknown)  pthread_kill
(RayWorkerWrapper pid=7156)     @     0x7f7fd7aad520  (unknown)  (unknown)
(RayWorkerWrapper pid=7156) [2024-04-28 16:10:16,138 E 7156 7629] logging.cc:365: *** SIGABRT received at time=1714291816 on cpu 51 ***
(RayWorkerWrapper pid=7156) [2024-04-28 16:10:16,138 E 7156 7629] logging.cc:365: PC: @     0x7f7fd7b019fc  (unknown)  pthread_kill
(RayWorkerWrapper pid=7156) [2024-04-28 16:10:16,138 E 7156 7629] logging.cc:365:     @     0x7f7fd7aad520  (unknown)  (unknown)
(RayWorkerWrapper pid=7156) Fatal Python error: Aborted
(RayWorkerWrapper pid=7156)
(RayWorkerWrapper pid=7156)
(RayWorkerWrapper pid=7156) Extension modules: msgpack._cmsgpack, google._upb._message, psutil._psutil_linux, psutil._psutil_posix, setproctitle, yaml._yaml, charset_normalizer.md, simplejson._speedups, uvloop.loop, ray._raylet, numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, torch._C, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, sentencepiece._sentencepiece, pyarrow.lib, pyarrow._json, PIL._imaging, __triton_launcher, cuda_utils (total: 36)
(RayWorkerWrapper pid=7458) [W socket.cpp:697] [c10d] The client socket cannot be initialized to connect to [::ffff:172.17.0.3]:47044 (errno: 97 - Address family not supported by protocol). [repeated 2x across cluster]
(raylet) A worker died or was killed while executing a task by an unexpected system error. To troubleshoot the problem, check the logs for the dead worker. RayTask ID: ffffffffffffffffcb1cf47f2bc1c19ce70ffdfe01000000 Worker ID: 037b4db9866efdc63cfb62664ff3e6aa96ef26495b3ad2cbdc1dca92 Node ID: 5926d48174b708f57246402a749a9c5025218f0a2d1439d8aaaa28e7 Worker IP address: 172.17.0.3 Worker port: 46404 Worker PID: 7156 Worker exit type: SYSTEM_ERROR Worker exit detail: Worker unexpectedly exits with a connection error code 2. End of file. There are some potential root causes. (1) The process is killed by SIGKILL by OOM killer due to high memory usage. (2) ray stop --force is called. (3) The worker is crashed unexpectedly due to SIGSEGV or other unexpected errors.
(RayWorkerWrapper pid=7458) INFO 04-28 15:52:38 custom_all_reduce.py:246] Registering 5635 cuda graph addresses [repeated 2x across cluster]
(RayWorkerWrapper pid=7458) INFO 04-28 15:52:38 model_runner.py:1057] Graph capturing finished in 31 secs. [repeated 2x across cluster]
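
As a side note for anyone debugging a similar hang: NCCL's standard environment variables (generic NCCL settings, not something we tried for this report) turn on NCCL's own logging for initialization and collectives, which can help pin down which rank the GATHER above stalls on:

NCCL_DEBUG=INFO NCCL_DEBUG_SUBSYS=INIT,COLL \
    python3 -m vllm.entrypoints.openai.api_server --model /models/my-model --tensor-parallel-size 4 ...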

Thread stack

We dumped the thread stacks with py-spy and found that the execution thread was stuck during sampling.

# py-spy dump --pid 1
Process 1: python3 -m vllm.entrypoints.openai.api_server --model /models/my-model --tensor-parallel-size 4 --gpu-memory-utilization 0.9 --enable-prefix-caching
Python v3.10.12 (/usr/bin/python3.10)

Thread 0x7F9BB9AFE480 (active): "MainThread"
    run (asyncio/runners.py:44)
    run (uvicorn/server.py:65)
    run (uvicorn/main.py:575)
    <module> (vllm/entrypoints/openai/api_server.py:169)
    _run_code (runpy.py:86)
    _run_module_as_main (runpy.py:196)
Thread 7027 (idle): "ray_listen_error_messages"
    listen_error_messages (ray/_private/worker.py:2136)
    run (threading.py:953)
    _bootstrap_inner (threading.py:1016)
    _bootstrap (threading.py:973)
Thread 7028 (idle): "ray_print_logs"
    print_logs (ray/_private/worker.py:898)
    run (threading.py:953)
    _bootstrap_inner (threading.py:1016)
    _bootstrap (threading.py:973)
Thread 7718 (idle): "Thread-1 (_report_usage_worker)"
    _report_continous_usage (vllm/usage/usage_lib.py:186)
    _report_usage_worker (vllm/usage/usage_lib.py:137)
    run (threading.py:953)
    _bootstrap_inner (threading.py:1016)
    _bootstrap (threading.py:973)
Thread 0x7F8B217EB640 (active): "ThreadPoolExecutor-0_0"
    _random_sample (vllm/model_executor/layers/sampler.py:292)
    _sample_with_torch (vllm/model_executor/layers/sampler.py:495)
    _sample (vllm/model_executor/layers/sampler.py:593)
    forward (vllm/model_executor/layers/sampler.py:90)
    _call_impl (torch/nn/modules/module.py:1520)
    _wrapped_call_impl (torch/nn/modules/module.py:1511)
    sample (vllm/model_executor/models/llama.py:375)
    execute_model (vllm/worker/model_runner.py:858)
    decorate_context (torch/utils/_contextlib.py:115)
    execute_model (vllm/worker/worker.py:249)
    decorate_context (torch/utils/_contextlib.py:115)
    execute_method (vllm/worker/worker_base.py:149)
    run (concurrent/futures/thread.py:58)
    _worker (concurrent/futures/thread.py:83)
    run (threading.py:953)
    _bootstrap_inner (threading.py:1016)
    _bootstrap (threading.py:973)
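
With --tensor-parallel-size 4 there is one Ray worker process per GPU, so it can also help to dump every worker, not just the API server process. A hedged sketch (the process name is taken from the "(RayWorkerWrapper pid=...)" prefixes in the log above; py-spy may need to run as root or with SYS_PTRACE inside the container):

for pid in $(pgrep -f RayWorkerWrapper); do
    py-spy dump --pid "$pid"
done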

Host software

GPUs: A800 x 8 (single node, multi-GPU)
NVIDIA Driver: 525.85.12
NVIDIA GPU plugin related software:

libnvidia-container-tools.x86_64 1.8.0-1                       
libnvidia-container1.x86_64      1.8.0-1                    
nvidia-container-toolkit.x86_64  1.8.0-1                 
nvidia-docker2.noarch            2.9.0-1                        
nvidia-fabric-manager.x86_64     525.85.12-1      
