[Bug] DeepSeek V3 exception when running under sgl-kernel 0.0.9.post1 #5529
Closed
Description
Checklist
- 1. I have searched related issues but cannot get the expected help.
- 2. The bug has not been fixed in the latest version.
- 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
- 5. Please use English, otherwise it will be closed.
Describe the bug
DeepSeek V3 raises an exception during sampling when running under sgl-kernel 0.0.9.post1 or 0.0.9.post2.
Reproduction
Start command:
# node 1
python3 -m sglang.launch_server --model-path /path/to/DeepSeek-V3-Channel-INT8 --trust-remote-code --host 0.0.0.0 --port 30000 --attention-backend flashinfer --quantization w8a8_int8 --tp 16 --dist-init-addr IP:20000 --nnodes 2 --node-rank 0
# node 2
python3 -m sglang.launch_server --model-path /path/to/DeepSeek-V3-Channel-INT8 --trust-remote-code --host 0.0.0.0 --port 30000 --attention-backend flashinfer --quantization w8a8_int8 --tp 16 --dist-init-addr IP:20000 --nnodes 2 --node-rank 1
The following error occurs while executing inference:
[2025-04-18 10:47:44 TP0] TpModelWorkerClient hit an exception: Traceback (most recent call last):
File "/sgl-workspace/sglang/python/sglang/srt/managers/tp_worker_overlap_thread.py", line 112, in forward_thread_func
self.forward_thread_func_()
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/sgl-workspace/sglang/python/sglang/srt/managers/tp_worker_overlap_thread.py", line 143, in forward_thread_func_
logits_output, next_token_ids = self.worker.forward_batch_generation(
File "/sgl-workspace/sglang/python/sglang/srt/managers/tp_worker.py", line 183, in forward_batch_generation
next_token_ids = self.model_runner.sample(logits_output, model_worker_batch)
File "/sgl-workspace/sglang/python/sglang/srt/model_executor/model_runner.py", line 1091, in sample
next_token_ids = self.sampler(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/sgl-workspace/sglang/python/sglang/srt/layers/sampler.py", line 104, in forward
batch_next_token_ids, success = top_k_top_p_sampling_from_probs(
ValueError: not enough values to unpack (expected 2, got 1)
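The traceback shows a two-value unpack (`batch_next_token_ids, success = ...`) failing because the kernel call now returns a single value, which suggests the return signature of `top_k_top_p_sampling_from_probs` changed between sgl-kernel releases (the `success` tensor apparently dropped). A minimal sketch of a compatibility shim, assuming the old API returned `(next_token_ids, success)` and the new one returns only `next_token_ids` (the helper name `unpack_sampling_result` is hypothetical, not part of sglang):

```python
def unpack_sampling_result(result):
    """Normalize a sampling-kernel return value to (next_token_ids, success).

    Older sgl-kernel releases returned a (next_token_ids, success) pair;
    newer ones appear to return next_token_ids alone, so the two-value
    unpack in sampler.py raises "not enough values to unpack".
    """
    if isinstance(result, tuple) and len(result) == 2:
        # Old API: pass the pair through unchanged.
        return result
    # New API: only token ids were returned; no per-sample success flag
    # is available, so report None and let the caller skip the check.
    return result, None
```

Such a wrapper would let the caller in `sglang/srt/layers/sampler.py` run against both kernel versions; the real fix belongs in sglang itself (matching the call site to the installed sgl-kernel version).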
Environment
Python: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA A800-SXM4-80GB
GPU 0,1,2,3,4,5,6,7 Compute Capability: 8.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 550.54.14
PyTorch: 2.5.1+cu124
sglang: 0.4.5.post1
sgl_kernel: 0.0.9.post2
flashinfer: Module Not Found
triton: 3.1.0
transformers: 4.51.1
torchao: 0.9.0
numpy: 2.2.4
aiohttp: 3.11.16
fastapi: 0.115.12
hf_transfer: 0.1.9
huggingface_hub: 0.30.1
interegular: 0.3.3
modelscope: 1.24.1
orjson: 3.10.16
outlines: 0.1.11
packaging: 24.2
psutil: 7.0.0
pydantic: 2.10.6
multipart: Module Not Found
zmq: Module Not Found
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: Module Not Found
xgrammar: 0.1.17
openai: 1.70.0
tiktoken: 0.9.0
anthropic: 0.49.0
litellm: 1.65.4.post1
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 NIC8 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV8 NV8 NV8 NV8 NV8 NV8 NV8 PXB PXB SYS SYS SYS SYS SYS SYS SYS 0-31,64-95 0 N/A
GPU1 NV8 X NV8 NV8 NV8 NV8 NV8 NV8 PXB PXB SYS SYS SYS SYS SYS SYS SYS 0-31,64-95 0 N/A
GPU2 NV8 NV8 X NV8 NV8 NV8 NV8 NV8 SYS SYS PXB PXB SYS SYS SYS SYS SYS 0-31,64-95 0 N/A
GPU3 NV8 NV8 NV8 X NV8 NV8 NV8 NV8 SYS SYS PXB PXB SYS SYS SYS SYS SYS 0-31,64-95 0 N/A
GPU4 NV8 NV8 NV8 NV8 X NV8 NV8 NV8 SYS SYS SYS SYS PXB PXB SYS SYS SYS 32-63,96-127 1 N/A
GPU5 NV8 NV8 NV8 NV8 NV8 X NV8 NV8 SYS SYS SYS SYS PXB PXB SYS SYS SYS 32-63,96-127 1 N/A
GPU6 NV8 NV8 NV8 NV8 NV8 NV8 X NV8 SYS SYS SYS SYS SYS SYS PXB PXB SYS 32-63,96-127 1 N/A
GPU7 NV8 NV8 NV8 NV8 NV8 NV8 NV8 X SYS SYS SYS SYS SYS SYS PXB PXB SYS 32-63,96-127 1 N/A
NIC0 PXB PXB SYS SYS SYS SYS SYS SYS X PIX SYS SYS SYS SYS SYS SYS SYS
NIC1 PXB PXB SYS SYS SYS SYS SYS SYS PIX X SYS SYS SYS SYS SYS SYS SYS
NIC2 SYS SYS PXB PXB SYS SYS SYS SYS SYS SYS X PIX SYS SYS SYS SYS SYS
NIC3 SYS SYS PXB PXB SYS SYS SYS SYS SYS SYS PIX X SYS SYS SYS SYS SYS
NIC4 SYS SYS SYS SYS PXB PXB SYS SYS SYS SYS SYS SYS X PIX SYS SYS SYS
NIC5 SYS SYS SYS SYS PXB PXB SYS SYS SYS SYS SYS SYS PIX X SYS SYS SYS
NIC6 SYS SYS SYS SYS SYS SYS PXB PXB SYS SYS SYS SYS SYS SYS X PIX SYS
NIC7 SYS SYS SYS SYS SYS SYS PXB PXB SYS SYS SYS SYS SYS SYS PIX X SYS
NIC8 SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_2
NIC1: mlx5_3
NIC2: mlx5_4
NIC3: mlx5_5
NIC4: mlx5_6
NIC5: mlx5_7
NIC6: mlx5_8
NIC7: mlx5_9
NIC8: mlx5_bond_0
ulimit soft: 1048576