
Conversation

Member

@DarkLight1337 DarkLight1337 commented Mar 19, 2025

  • Make AutoProcessor's chat template take precedence over AutoTokenizer's (see the sketch below)
  • Fix HF tool-calling templates failing to load
  • Remove "unicode_escape" decoding

FIX #14884
FIX #15095
FIX #15125

Thanks @chaunceyjiang who helped with fixing #15095!
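For readers new to the change, here is a minimal sketch of the intended precedence, assuming only the standard chat_template attribute that transformers processors and tokenizers expose; it is an illustration, not the actual vLLM code:

from transformers import AutoProcessor, AutoTokenizer

def resolve_chat_template(model_id: str):
    # Prefer the processor's template: multi-modal models often define
    # the correct template on the processor, not the tokenizer.
    try:
        processor = AutoProcessor.from_pretrained(model_id)
        if getattr(processor, "chat_template", None) is not None:
            return processor.chat_template
    except Exception:
        pass  # not every model ships a processor
    # Fall back to the tokenizer's template.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    return getattr(tokenizer, "chat_template", None)

The "unicode_escape" removal matters because round-tripping a template through that codec mangles any non-ASCII content:

template = '{{ "多模态" }}'  # template text containing non-ASCII characters
mangled = template.encode("utf-8").decode("unicode_escape")
print(mangled)  # prints mojibake instead of the original characters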

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the documentation (Improvements or additions to documentation), frontend, and multi-modality (Related to multi-modality (#4194)) labels Mar 19, 2025
Signed-off-by: DarkLight1337 <[email protected]>
@DarkLight1337 DarkLight1337 requested a review from hmellor March 19, 2025 17:31
DarkLight1337 and others added 3 commits March 20, 2025 04:58
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: chaunceyjiang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
DarkLight1337 and others added 2 commits March 21, 2025 06:50
@DarkLight1337 DarkLight1337 requested a review from Isotr0py March 21, 2025 07:13
Member

@Isotr0py Isotr0py left a comment


LGTM!

@Isotr0py Isotr0py added the ready (ONLY add when PR is ready to merge/full CI is needed) label Mar 21, 2025
Member

@ywang96 ywang96 left a comment


Can confirm this PR fixes the chat template for a few models I tried, but it looks like CI is still not passing...

@DarkLight1337
Member Author

Yeah I will check that out today or tomorrow.

Signed-off-by: DarkLight1337 <[email protected]>
@DarkLight1337 DarkLight1337 enabled auto-merge (squash) March 23, 2025 12:46
@DarkLight1337
Member Author

Should be fixed now

Signed-off-by: DarkLight1337 <[email protected]>
@ywang96
Member

ywang96 commented Mar 24, 2025

I think there are some differences between the chat template in the model repo and the chat template of the tokenizer for llava-hf/llava-onevision-qwen2-0.5b-ov-hf, which is why CI is breaking. Going to verify whether the new chat template is the more desirable one.
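One quick way to surface that divergence (a hedged sketch; it assumes both objects expose a chat_template attribute, which holds for this model):

from transformers import AutoProcessor, AutoTokenizer

model_id = "llava-hf/llava-onevision-qwen2-0.5b-ov-hf"
proc_template = AutoProcessor.from_pretrained(model_id).chat_template
tok_template = AutoTokenizer.from_pretrained(model_id).chat_template

# Under this PR, the processor's template takes precedence when they differ.
print(proc_template == tok_template)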

@ywang96
Member

ywang96 commented Mar 24, 2025

Hmm - it seems that the new chat template skips loading the system prompt.

Main:

"<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<video>\nWhat's in this video?<|im_end|>\n<|im_start|>assistant\n"

This branch:

"<|im_start|>user <video>\nWhat's in this video?<|im_end|><|im_start|>assistant\n"

Now checking the default behavior of Hugging Face:

import torch
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration

model_id = "llava-hf/llava-onevision-qwen2-0.5b-ov-hf"

# Loading the model is not needed to render the prompt, but it matches the
# end-to-end setup used here to verify behavior.
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to(0)

processor = AutoProcessor.from_pretrained(model_id)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "url": "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4"},
            {"type": "text", "text": "What's in this video?"},
        ],
    },
]

# Render the chat template to a string (no tokenization) with the
# generation prompt appended.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
print(repr(prompt))

Output:

"<|im_start|>user <video>\nWhat's in this video?<|im_end|><|im_start|>assistant\n"

Given that this branch is more aligned with the default behavior of Hugging Face, I will update the test accordingly.

Signed-off-by: Roger Wang <[email protected]>
@DarkLight1337 DarkLight1337 disabled auto-merge March 24, 2025 04:03
DarkLight1337 and others added 2 commits March 24, 2025 04:03
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
@simon-mo simon-mo added this to the v0.8.2 milestone Mar 24, 2025
@ywang96 ywang96 enabled auto-merge (squash) March 24, 2025 06:47
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
@ywang96 ywang96 merged commit cbcdf2c into vllm-project:main Mar 24, 2025
34 checks passed
@DarkLight1337 DarkLight1337 deleted the fix-chat-template-loading branch March 24, 2025 13:50
erictang000 pushed a commit to erictang000/vllm that referenced this pull request Mar 25, 2025
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: chaunceyjiang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
wrmedford pushed a commit to wrmedford/vllm that referenced this pull request Mar 26, 2025
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: chaunceyjiang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Signed-off-by: Wes Medford <[email protected]>
lulmer pushed a commit to lulmer/vllm that referenced this pull request Apr 7, 2025
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: chaunceyjiang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]>
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Apr 29, 2025
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: chaunceyjiang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
shreyankg pushed a commit to shreyankg/vllm that referenced this pull request May 3, 2025
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: chaunceyjiang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: chaunceyjiang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Signed-off-by: Mu Huai <[email protected]>