Skip to content

Conversation

@bravo325806
Copy link
Contributor

@bravo325806 bravo325806 commented Mar 14, 2025

fix small typo in the example of Streaming delimiter
This is apiserver delimiter

@github-actions
Copy link

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the documentation Improvements or additions to documentation label Mar 14, 2025
@DarkLight1337
Copy link
Member

DarkLight1337 commented Mar 14, 2025

Can you verify that streaming still works for your PR? In case it triggers a regression from #2756

@DarkLight1337 DarkLight1337 changed the title [DOC] Fix small typo in the example of Streaming delimiter [Bugfix] Fix small typo in the example of Streaming delimiter Mar 14, 2025
@bravo325806
Copy link
Contributor Author

Yes, streaming still can work
The #2756 is fix server response regarding json packet boundary determination.
I just fix the client json decoding in the example,

Below is my test code

python -m vllm.entrypoints.api_server --model  /path/to/model

and send message

import requests
import json
params = {"prompt": "Hello?",
          "temperature": 0,
          "stream": "True", 
          "max_tokens": 128}
        
resp = requests.post('http://localhost:8000/generate', json=params)
for chunk in resp.iter_lines(chunk_size=8192,
                             decode_unicode=False,
                             delimiter=b"\0"):

    if chunk:
        data = json.loads(chunk.decode("utf-8"))
        output = data["text"]
        print(output)

if no change \0 to \n, you'll get an JSONDecodeError: Extra data error

Copy link
Member

@DarkLight1337 DarkLight1337 left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Thanks for fixing and testing!

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) March 14, 2025 06:40
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Mar 14, 2025
@DarkLight1337 DarkLight1337 merged commit 54cc46f into vllm-project:main Mar 14, 2025
26 of 30 checks passed
richardsliu pushed a commit to richardsliu/vllm that referenced this pull request Mar 14, 2025
lulmer pushed a commit to lulmer/vllm that referenced this pull request Apr 7, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

documentation Improvements or additions to documentation ready ONLY add when PR is ready to merge/full CI is needed

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants