Commit 73f2058

ywang96 authored and lulmer committed
[Bugfix] Add mm_processor_kwargs to chat-related protocols (vllm-project#13644)
Signed-off-by: Louis Ulmer <[email protected]>
1 parent d140faf

File tree

1 file changed: +8 −0 lines

vllm/entrypoints/openai/protocol.py

Lines changed: 8 additions & 0 deletions
@@ -974,6 +974,10 @@ class EmbeddingChatRequest(OpenAIBaseModel):
         description=("Additional kwargs to pass to the template renderer. "
                      "Will be accessible by the chat template."),
     )
+    mm_processor_kwargs: Optional[Dict[str, Any]] = Field(
+        default=None,
+        description=("Additional kwargs to pass to the HF processor."),
+    )
     priority: int = Field(
         default=0,
         description=(
@@ -1394,6 +1398,10 @@ class TokenizeChatRequest(OpenAIBaseModel):
         description=("Additional kwargs to pass to the template renderer. "
                      "Will be accessible by the chat template."),
     )
+    mm_processor_kwargs: Optional[Dict[str, Any]] = Field(
+        default=None,
+        description=("Additional kwargs to pass to the HF processor."),
+    )

     @model_validator(mode="before")
     @classmethod
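To illustrate what this commit enables, here is a hedged sketch of a client-side request body for one of the affected chat-related endpoints. The model name and the `num_crops` kwarg are hypothetical examples (which keys are accepted depends on the model's HuggingFace processor); `chat_template_kwargs` is the adjacent field visible in the diff context.

```python
import json

# Hypothetical request body for a chat-related endpoint of vLLM's
# OpenAI-compatible server. With this commit, the request schema also
# accepts mm_processor_kwargs, forwarded to the HF processor.
request_body = {
    "model": "example-multimodal-model",  # placeholder name
    "messages": [
        {"role": "user", "content": "Describe this image."},
    ],
    # Existing field: extra kwargs for the chat-template renderer.
    "chat_template_kwargs": {"add_generation_prompt": True},
    # New field from this commit; valid keys are model/processor-specific.
    "mm_processor_kwargs": {"num_crops": 4},  # example kwarg only
}

payload = json.dumps(request_body)
print(payload)
```

Before the fix, requests validated against `EmbeddingChatRequest` or `TokenizeChatRequest` had no `mm_processor_kwargs` field, so such kwargs could not be passed through those protocols.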
