
Conversation

@gcalmettes
Contributor

@gcalmettes gcalmettes commented Feb 27, 2025

This is a rework of #13039 following the input from @robertgshaw2-redhat (see in particular this comment).

Currently, only ValueError exceptions are caught during the input preprocessing step on the frontend OpenAI endpoints.

As a result, any error raised in the _preprocess_chat method that is not a ValueError (e.g. a Jinja2 templating error or a multimodal parsing error) is not caught and results in a 500 Internal Server Error, instead of going through the self.create_error_response method.

This PR adds TypeError and jinja2.TemplateError to the list of caught exceptions, so a proper error is returned.
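
A minimal sketch of the change (simplified; the actual code and call signature in serving_chat.py differ):

import jinja2

try:
    # _preprocess_chat applies the chat template and parses multimodal content
    result = await self._preprocess_chat(request, tokenizer, request.messages)
except (ValueError, TypeError, jinja2.TemplateError) as e:
    logger.exception("Error in preprocessing prompt inputs")
    return self.create_error_response(str(e))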

Reproduce the error:

1. Start a vLLM server (v0.7.2 here):
docker run --rm -d --gpus all --shm-size=1G --ulimit memlock=-1 --ulimit stack=67108864 vllm/vllm-openai:v0.7.2 --served-model-name llama  --model=neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8

2.a. Make a valid OpenAI query using a feature not supported by the model (e.g. Llama 3.1 does not support image_url message content):

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-used",
)

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant.",
    },
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Extract the information from those images. The description must be one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://picsum.photos/id/237/200/300"
                }
            }
        ]
    }
]


response = client.chat.completions.create(
    messages=messages,
    model="llama",
)

A 500 Internal Server Error is returned on the client side (openai.InternalServerError: Internal Server Error), and the server logs the full traceback:

INFO:     172.17.0.1:59024 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/uvicorn/protocols/http/httptools_impl.py", line 409, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.12/dist-packages/starlette/applications.py", line 112, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.12/dist-packages/starlette/middleware/errors.py", line 187, in __call__
    raise exc
  File "/usr/local/lib/python3.12/dist-packages/starlette/middleware/errors.py", line 165, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.12/dist-packages/starlette/middleware/cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.12/dist-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/usr/local/lib/python3.12/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.12/dist-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.12/dist-packages/starlette/routing.py", line 715, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.12/dist-packages/starlette/routing.py", line 735, in app
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.12/dist-packages/starlette/routing.py", line 288, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.12/dist-packages/starlette/routing.py", line 76, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/usr/local/lib/python3.12/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.12/dist-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.12/dist-packages/starlette/routing.py", line 73, in app
    response = await f(request)
               ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/fastapi/routing.py", line 301, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/fastapi/routing.py", line 212, in run_endpoint_function
    return await dependant.call(**values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/utils.py", line 56, in wrapper
    return handler_task.result()
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 392, in create_chat_completion
    generator = await handler.create_chat_completion(request, raw_request)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/serving_chat.py", line 177, in create_chat_completion
    ) = await self._preprocess_chat(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/serving_engine.py", line 386, in _preprocess_chat
    conversation, mm_data_future = parse_chat_messages_futures(
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/chat_utils.py", line 951, in parse_chat_messages_futures
    sub_messages = _parse_chat_message_content(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/chat_utils.py", line 879, in _parse_chat_message_content
    result = _parse_chat_message_content_parts(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/chat_utils.py", line 782, in _parse_chat_message_content_parts
    parse_res = _parse_chat_message_content_part(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/chat_utils.py", line 839, in _parse_chat_message_content_part
    mm_parser.parse_image(str_content)
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/chat_utils.py", line 574, in parse_image
    placeholder = self._tracker.add("image", image_coro)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/chat_utils.py", line 458, in add
    return self._placeholder_str(modality, current_count)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/chat_utils.py", line 422, in _placeholder_str
    raise TypeError(f"Unknown {modality} model type: {model_type}")
TypeError: Unknown image model type: llama

With this PR

The error is correctly caught and propagated to the client:

openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': 'Unknown image model type: llama', 'type': 'BadRequestError', 'param': None, 'code': 400}
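
With this change the client sees a regular request error instead of a bare 500; a small illustrative snippet (reusing the client and messages defined above):

import openai

try:
    response = client.chat.completions.create(messages=messages, model="llama")
except openai.BadRequestError as e:
    # The underlying problem is now surfaced to the caller
    print(e)  # Error code: 400 - {... 'message': 'Unknown image model type: llama', ...}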

2.b. Another example: make a valid query that provides a broken chat template:

import requests

# Base URL of the vLLM server started above
URL = "http://localhost:8000/v1"

broken_chat_template = "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content']}}"

headers = {
    "Content-Type": "application/json"
}
payload = {
    "model": "mymodel",
    "messages":[
        {"role":"system","content":"You are a helpful assistant."},
        {"role":"user","content":"What is 1+1?"},
    ],
    "chat_template": broken_chat_template,
}

response = requests.post(f"{URL}/chat/completions", json=payload, headers=headers)
print(response.text)

Response

# Before: 500 error code
Internal Server Error

# After: 400 error code with helpful message
{'object': 'error', 'message': "Unexpected end of template. Jinja was looking for the following tags: 'endfor' or 'else'. The innermost block that needs to be closed is 'for'.", 'type': 'BadRequestError', 'param': None, 'code': 400}
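
Because the error now comes back as a JSON body, the caller can inspect it directly; for example (continuing from the requests snippet above):

if response.status_code != 200:
    error = response.json()
    # e.g. "Unexpected end of template. Jinja was looking for the following tags: ..."
    print(error["message"])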

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the frontend label Feb 27, 2025
@DarkLight1337
Member

Is there a particular reason why we can't just catch all Exceptions in the preprocessing step?

@gcalmettes gcalmettes force-pushed the feat/catch-preprocessing-exceptions branch from c4f4e7c to 85c242f on February 27, 2025 17:15
@gcalmettes
Contributor Author

gcalmettes commented Feb 27, 2025

@DarkLight1337 see this discussion.

I was originally catching all exceptions with except Exception as e, but from @robertgshaw2-redhat's comment it looked like that could interfere with the handlers already in place to catch RuntimeError and AsyncEngineDeadError.

@DarkLight1337
Member

The preprocessor will also raise RuntimeError if preprocessing fails. I think we should use a different error type for RuntimeErrors that are intended to shut down the server, and re-raise them explicitly in the try-except.
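
A rough sketch of what I mean (the class and names here are hypothetical, not existing vLLM types):

class EngineFatalError(RuntimeError):
    """Hypothetical marker for errors that should bring the server down."""

try:
    result = await self._preprocess_chat(request, tokenizer, request.messages)
except EngineFatalError:
    raise  # let the engine-level handler decide whether to shut down the server
except (ValueError, TypeError, RuntimeError, jinja2.TemplateError) as e:
    return self.create_error_response(str(e))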

@gcalmettes
Contributor Author

@DarkLight1337 from my understanding, looking at the code of the runtime_exception_handler currently in place, the logic is to check whether the engine is still running whenever any RuntimeError is raised, and to shut down the server if it is not.
So I guess there is no specific exception to raise: it is a safety check ensuring that any RuntimeError that gets triggered has not left the engine dead.

If we were to catch all exceptions in the preprocessing step, the health of the engine would not be checked for RuntimeErrors triggered during preprocessing (though I agree that, in theory, a RuntimeError occurring during preprocessing should not affect the engine health).
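
To paraphrase that safety check in code (this is only my reading of the handler's intent, not the actual vLLM implementation):

from starlette.responses import JSONResponse

async def runtime_error_handler(request, exc: RuntimeError):
    # On any RuntimeError, verify the engine is still healthy; if it has died,
    # shut the server down instead of continuing to serve 500s.
    engine = request.app.state.engine_client  # hypothetical attribute name
    if engine.errored:
        request.app.state.server.should_exit = True  # hypothetical shutdown hook
    return JSONResponse(status_code=500, content={"error": str(exc)})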

@DarkLight1337
Member

@robertgshaw2-redhat can you comment on this as well? Since you also have a PR to improve the error handling, it would be great if they could complement each other.

@gcalmettes gcalmettes force-pushed the feat/catch-preprocessing-exceptions branch from 148da98 to 7266939 on March 2, 2025 12:06
Member

@DarkLight1337 DarkLight1337 left a comment


He seems busy. I think this PR is reasonable, so let's merge it first.

@DarkLight1337
Member

Can you merge from main to avoid the CI failures?

@gcalmettes gcalmettes force-pushed the feat/catch-preprocessing-exceptions branch from d170a1d to 01c3e40 on March 13, 2025 20:33
@gcalmettes
Contributor Author

✅ Done @DarkLight1337, waiting for all checks to finish.

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) March 14, 2025 02:57
@github-actions github-actions bot added the ready label Mar 14, 2025
@vllm-bot vllm-bot merged commit fd8e055 into vllm-project:main Mar 14, 2025
43 of 45 checks passed
richardsliu pushed a commit to richardsliu/vllm that referenced this pull request Mar 14, 2025
lulmer pushed a commit to lulmer/vllm that referenced this pull request Apr 7, 2025
shreyankg pushed a commit to shreyankg/vllm that referenced this pull request May 3, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025