Commit 5b1042f

ywang96 authored and Alvant committed
[Bugfix] Fix token padding for chameleon (vllm-project#6724)
Signed-off-by: Alvant <[email protected]>
1 parent c4d1c1b commit 5b1042f

File tree

1 file changed: 2 additions & 1 deletion


vllm/model_executor/models/chameleon.py

Lines changed: 2 additions & 1 deletion

@@ -125,7 +125,8 @@ def input_processor_for_chameleon(ctx: InputContext, llm_inputs: LLMInputs):
 
     # Appending sep token for chat mode to follow default processor
     # behavior
-    new_prompt += tokenizer.sep_token
+    if new_prompt is not None:
+        new_prompt += tokenizer.sep_token
     new_token_ids += [CHAMELEON_SEP_TOKEN_ID]
 
     # NOTE: Create a defensive copy of the original inputs
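
For context, the following is a minimal, self-contained sketch of the guarded behavior, not the actual input_processor_for_chameleon: the append_sep helper, the "<sep>" string, and the CHAMELEON_SEP_TOKEN_ID value here are illustrative assumptions. It shows why the guard is needed: a request built from token IDs alone arrives with its prompt text set to None, so the unguarded string concatenation raised a TypeError, while the token-ID list can always take the separator id.

from typing import List, Optional

CHAMELEON_SEP_TOKEN_ID = 8710  # illustrative value, not taken from the source

def append_sep(new_prompt: Optional[str],
               new_token_ids: List[int],
               sep_token: str = "<sep>"):
    # Guarded concatenation: before the fix, `new_prompt += sep_token` raised
    # "TypeError: unsupported operand type(s) for +=: 'NoneType' and 'str'"
    # whenever the prompt text was absent.
    if new_prompt is not None:
        new_prompt += sep_token
    # The token-ID list is always present, so the separator id is appended
    # unconditionally, matching the patched code.
    new_token_ids += [CHAMELEON_SEP_TOKEN_ID]
    return new_prompt, new_token_ids

# Text prompt: both representations receive the separator.
print(append_sep("Describe the image.<image>", [101, 102]))
# Token-only request: no crash; the ids still get the separator appended.
print(append_sep(None, [101, 102]))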
