
Conversation

@chaunceyjiang (Collaborator) commented Mar 24, 2025

Fix #15291

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the frontend label Mar 24, 2025
@ywang96 (Member) commented Mar 25, 2025

Can confirm this PR fixes the issue, so I'm marking it as ready for the 0.8.2 release.

@ywang96 ywang96 marked this pull request as ready for review March 25, 2025 01:30
@ywang96 ywang96 added this to the v0.8.2 milestone Mar 25, 2025
@ywang96 ywang96 added the ready ONLY add when PR is ready to merge/full CI is needed label Mar 25, 2025
@ywang96 (Member) commented Mar 25, 2025

Client-side code:

import requests
import json

# vLLM OpenAI-compatible server endpoint
url = "http://127.0.0.1:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}

model = "Qwen/Qwen2.5-VL-7B-Instruct"

image_url = "https://texascoffeeschool.com/wp-content/uploads/2021/10/DSC_0052-scaled.jpg"
video_url = "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerFun.mp4"

# A single user turn mixing text, image, and video content parts
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe all contents in a single sentence.",
            },
            {"type": "image_url", "image_url": {"url": image_url}},
            {"type": "video_url", "video_url": {"url": video_url}},
        ],
    },
]

data = {"model": model, "messages": messages}

response = requests.post(url, headers=headers, data=json.dumps(data))
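
As a quick sanity check, the response can then be inspected roughly like this (a minimal sketch, assuming the server returns a standard OpenAI-compatible chat completion payload):

# Sketch only: assumes an OpenAI-compatible chat completion response body
response.raise_for_status()
reply = response.json()["choices"][0]["message"]["content"]
print(reply)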

Server-side log to verify that the prompt is indeed parsed correctly:

'<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nDescribe the contents in a single sentence.<|vision_start|><|image_pad|><|vision_end|><|vision_start|><|video_pad|><|vision_end|><|im_end|>\n<|im_start|>assistant\n'

I don't think Qwen2.5-VL itself is particularly capable at mixed-modality inference, but this PR at least fixes the functionality to serve such requests. Thanks for the work! @chaunceyjiang

@ywang96 ywang96 enabled auto-merge (squash) March 25, 2025 02:53
@ywang96 ywang96 merged commit 10b34e3 into vllm-project:main Mar 25, 2025
41 of 43 checks passed
erictang000 pushed a commit to erictang000/vllm that referenced this pull request Mar 25, 2025
wrmedford pushed a commit to wrmedford/vllm that referenced this pull request Mar 26, 2025
lulmer pushed a commit to lulmer/vllm that referenced this pull request Apr 7, 2025
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Apr 29, 2025
shreyankg pushed a commit to shreyankg/vllm that referenced this pull request May 3, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025

Labels: frontend, ready

Successfully merging this pull request may close this issue: [Bug]: Qwen2.5 VL online service can not input video and image simultaneously.
