
Commit 6c69566

comaniac authored and LeiWang1999 committed
[MISC] Keep chunked prefill enabled by default with long context when prefix caching is enabled (vllm-project#8342)
Signed-off-by: LeiWang1999 <[email protected]>
1 parent f992554 · commit 6c69566

File tree

1 file changed: +0 −1 lines changed

vllm/engine/arg_utils.py

Lines changed: 0 additions & 1 deletion

@@ -878,7 +878,6 @@ def create_engine_config(self) -> EngineConfig:
         if (is_gpu and not use_sliding_window and not use_spec_decode
                 and not self.enable_lora
                 and not self.enable_prompt_adapter
-                and not self.enable_prefix_caching
                 and not has_seqlen_agnostic_layers):
             self.enable_chunked_prefill = True
             logger.warning(
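
For context, a minimal sketch of how the new default plays out from the user side. The model name, context length, and keyword values below are illustrative assumptions, not part of this commit; only the general behavior (prefix caching no longer suppressing the chunked-prefill default on long-context GPU runs) comes from the change itself.

    # Illustrative sketch (assumed model name and context length): after this
    # change, enabling prefix caching no longer disables the chunked-prefill
    # default for long-context models on GPU.
    from vllm import LLM

    llm = LLM(
        model="meta-llama/Meta-Llama-3-8B-Instruct",  # assumption for the example
        max_model_len=32768,           # long context; triggers the default path
        enable_prefix_caching=True,    # previously this turned the default off
        # enable_chunked_prefill is left unset: it still defaults to True when
        # no sliding window, speculative decoding, LoRA, prompt adapter, or
        # seqlen-agnostic layers are in use.
    )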
