Closed as not planned
Labels: bug (Something isn't working), stale (Over 90 days of inactivity)
Description
Your current environment
vllm 0.6.0
🐛 Describe the bug
When I run this command:
vllm serve neuralmagic/Meta-Llama-3.1-70B-Instruct-FP8 --host 0.0.0.0 --port 8000 --tensor-parallel-size 8 --seed 1234 --enable_prefix_caching --enable-chunked-prefill --max-model-len 32000 --num-scheduler-steps 8
I get this error:
raise ValueError("Chunked prefill is not supported with "
ValueError: Chunked prefill is not supported with multi-step (--num-scheduler-steps > 1)
ERROR 09-08 13:11:00 api_server.py:186] RPCServer process died before responding to readiness probe
Is this expected behavior?
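A workaround sketch, untested here and based only on the error text: the check treats chunked prefill and multi-step scheduling (--num-scheduler-steps > 1) as mutually exclusive, so dropping --num-scheduler-steps (or, alternatively, dropping --enable-chunked-prefill) should let the server start with the remaining flags unchanged:
vllm serve neuralmagic/Meta-Llama-3.1-70B-Instruct-FP8 --host 0.0.0.0 --port 8000 --tensor-parallel-size 8 --seed 1234 --enable-prefix-caching --enable-chunked-prefill --max-model-len 32000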
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.