Conversation

@aniketmaurya (Collaborator) commented Jul 1, 2025

What does this PR do?

Before

insane error trace:
(LitServe) ➜  LitServe git:(improve-error) ✗ python main.py
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
OpenAI spec setup complete
Swagger UI is available at http://0.0.0.0:8000/docs
INFO:     Started server process [95899]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     127.0.0.1:53752 - "POST /v1/chat/completions HTTP/1.1" 200 OK
lit-inference-completions_0[95878] - litserve.loops.streaming_loops - ERROR - LitAPI ran into an error while processing the streaming request uid=dbb351f1-ec62-47ce-9b5c-cb4957436bf0.
Please check the error trace for more details.
Traceback (most recent call last):
  File "/Users/aniket/Projects/github/LitServe/src/litserve/loops/streaming_loops.py", line 91, in run_streaming_loop
    for y_enc in y_enc_gen:
                 ^^^^^^^^^
  File "/Users/aniket/Projects/github/LitServe/src/litserve/specs/openai.py", line 446, in encode_response
    for output in output_generator:
                  ^^^^^^^^^^^^^^^^
  File "/Users/aniket/Projects/github/LitServe/main.py", line 21, in predict
    raise ApiException(status_code=500, detail="test")
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: ApiException.__init__() got an unexpected keyword argument 'status_code'
lit-uvicorn-0[95899] - litserve.server - ERROR - Error occurred while streaming outputs from the inference worker. Please check the above traceback.
lit-uvicorn-0[95899] - litserve.specs.openai - ERROR - Error in streaming response: b"\x80\x04\x95i\x00\x00\x00\x00\x00\x00\x00\x8c\x08builtins\x94\x8c\tTypeError\x94\x93\x94\x8cHApiException.__init__() got an unexpected keyword argument 'status_code'\x94\x85\x94R\x94."
ERROR:    Exception in ASGI application
  + Exception Group Traceback (most recent call last):
  |   File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/_utils.py", line 76, in collapse_excgroups
  |     yield
  |   File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/responses.py", line 263, in __call__
  |     async with anyio.create_task_group() as task_group:
  |                ^^^^^^^^^^^^^^^^^^^^^^^^^
  |   File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 772, in __aexit__
  |     raise BaseExceptionGroup(
  | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    |     await app(scope, receive, sender)
    |   File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/routing.py", line 74, in app
    |     await response(scope, receive, send)
    |   File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/responses.py", line 262, in __call__
    |     with collapse_excgroups():
    |          ^^^^^^^^^^^^^^^^^^^^
    |   File "/Users/aniket/.local/share/uv/python/cpython-3.12.7-macos-aarch64-none/lib/python3.12/contextlib.py", line 158, in __exit__
    |     self.gen.throw(value)
    |   File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/_utils.py", line 82, in collapse_excgroups
    |     raise exc
    |   File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/responses.py", line 266, in wrap
    |     await func()
    |   File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/responses.py", line 246, in stream_response
    |     async for chunk in self.body_iterator:
    |   File "/Users/aniket/Projects/github/LitServe/src/litserve/specs/openai.py", line 500, in streaming_completion
    |     raise HTTPException(status_code=500)
    | fastapi.exceptions.HTTPException: 500: Internal Server Error
    +------------------------------------

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/routing.py", line 74, in app
    await response(scope, receive, send)
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/responses.py", line 262, in __call__
    with collapse_excgroups():
         ^^^^^^^^^^^^^^^^^^^^
  File "/Users/aniket/.local/share/uv/python/cpython-3.12.7-macos-aarch64-none/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/_utils.py", line 82, in collapse_excgroups
    raise exc
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/responses.py", line 266, in wrap
    await func()
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/responses.py", line 246, in stream_response
    async for chunk in self.body_iterator:
  File "/Users/aniket/Projects/github/LitServe/src/litserve/specs/openai.py", line 500, in streaming_completion
    raise HTTPException(status_code=500)
fastapi.exceptions.HTTPException: 500: Internal Server Error

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 409, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/applications.py", line 112, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 187, in __call__
    raise exc
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 165, in __call__
    await self.app(scope, receive, _send)
  File "/Users/aniket/Projects/github/LitServe/src/litserve/middlewares.py", line 69, in __call__
    await self.app(scope, receive, send)
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/routing.py", line 714, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/routing.py", line 734, in app
    await route.handle(scope, receive, send)
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/routing.py", line 288, in handle
    await self.app(scope, receive, send)
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/routing.py", line 76, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/Users/aniket/Projects/github/LitServe/.venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 56, in wrapped_app
    raise RuntimeError("Caught handled exception, but response already started.") from exc
RuntimeError: Caught handled exception, but response already started.
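
For context, the `main.py` that produced this trace isn't included in the PR, but the traceback pins down its shape. Below is a hypothetical reconstruction: the `ChatAPI` name and the `ApiException` stand-in class are assumptions (the real import isn't visible in the trace); only the raise inside `predict` on line 21 is taken from the logs.

```python
import litserve as ls


class ApiException(Exception):
    """Hypothetical stand-in for the exception class imported in the real
    main.py. Accepting only `detail` reproduces the exact TypeError from the
    logs when the class is called with status_code=."""

    def __init__(self, detail):
        super().__init__(detail)


class ChatAPI(ls.LitAPI):
    def setup(self, device):
        pass  # no model is needed to hit the error path

    def predict(self, messages):
        yield "partial "  # one chunk is enough for the 200 response to start
        raise ApiException(status_code=500, detail="test")  # main.py line 21


if __name__ == "__main__":
    # OpenAISpec serves the /v1/chat/completions route seen in the logs.
    server = ls.LitServer(ChatAPI(), spec=ls.OpenAISpec())
    server.run(port=8000)
```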

After

Cleaner error handling

INFO:     127.0.0.1:53912 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2025-07-01 23:26:30,409 - inference-worker[98235] - litserve.loops.streaming_loops - ERROR - streaming_loops.py:116 - LitAPI ran into an error while processing the streaming request uid=fc0dd2a1-0409-477c-a395-8cbed8843b4a.
Please check the error trace for more details.
Traceback (most recent call last):
  File "/Users/aniket/Projects/github/LitServe/src/litserve/loops/streaming_loops.py", line 91, in run_streaming_loop
    for y_enc in y_enc_gen:
                 ^^^^^^^^^
  File "/Users/aniket/Projects/github/LitServe/src/litserve/specs/openai.py", line 466, in encode_response
    for output in output_generator:
                  ^^^^^^^^^^^^^^^^
  File "/Users/aniket/Projects/github/LitServe/main.py", line 21, in predict
    raise ApiException(status_code=500, detail="test")
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: ApiException.__init__() got an unexpected keyword argument 'status_code'
2025-07-01 23:26:30,414 - LitServer-0[98269] - litserve.server - ERROR - server.py:869 - Error occurred while streaming outputs from the inference worker. Please check the above traceback.
2025-07-01 23:26:30,415 - LitServer-0[98269] - litserve.specs.openai - ERROR - openai.py:520 - Error in streaming response: b"\x80\x04\x95i\x00\x00\x00\x00\x00\x00\x00\x8c\x08builtins\x94\x8c\tTypeError\x94\x93\x94\x8cHApiException.__init__() got an unexpected keyword argument 'status_code'\x94\x85\x94R\x94."
2025-07-01 23:26:30,416 - LitServer-0[98269] - litserve.specs.openai - ERROR - openai.py:554 - Error in streaming response: 500: Internal Server Error
Traceback (most recent call last):
  File "/Users/aniket/Projects/github/LitServe/src/litserve/specs/openai.py", line 521, in streaming_completion
    raise HTTPException(status_code=500)
fastapi.exceptions.HTTPException: 500: Internal Server Error

Before submitting
  • Was this discussed/agreed via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

…ve process/thread naming in LitServer

- Updated `_handle_error_response` to log exceptions more clearly and handle byte responses using pickle (see the sketch after this list).
- Changed process and thread names in LitServer to a more consistent format for better identification.
- Refactored logging format in `configure_logging` to include additional context information.
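
A rough sketch of what the first bullet describes, not the actual LitServe source: the inference worker ships errors across the process boundary as pickled bytes (the `b"\x80\x04..."` blobs in the logs above), so the handler unpickles before logging or re-raising. The function body, `response` parameter, and logger name here are simplified assumptions.

```python
import logging
import pickle

from fastapi import HTTPException

logger = logging.getLogger("litserve.server")


def _handle_error_response(response):
    """Simplified sketch; names and control flow are guesses."""
    if isinstance(response, bytes):
        # Exceptions arrive from the worker as pickle payloads.
        response = pickle.loads(response)
    if isinstance(response, HTTPException):
        raise response  # keep the caller's status code and detail intact
    if isinstance(response, Exception):
        logger.error("Error while handling request: %s", response)
        raise HTTPException(status_code=500, detail="Internal Server Error")
```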
@KaelanDt left a comment

Nice! Good error handling is good :)

@aniketmaurya changed the title from "Improve error handling for streaming" to "Improve error handling and logging for streaming" on Jul 1, 2025
- Changed `_handle_error_response` to a static method for improved clarity and usage.
- Updated error handling in OpenAISpec to re-raise HTTPException for better error propagation (see the sketch after this list).
- Added unit tests for error handling in RegularRequestHandler to ensure proper exception raising and logging.
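
The second bullet is the key behavioral change: instead of letting a raised `HTTPException` get wrapped in a second, noisier failure after the response has already started, the streaming path re-raises it as-is. A minimal sketch of that pattern, assuming a hypothetical `chunk_stream` async iterator (the real `streaming_completion` in `openai.py` differs):

```python
import logging

from fastapi import HTTPException

logger = logging.getLogger("litserve.specs.openai")


async def streaming_completion(chunk_stream):
    """Illustrative only: `chunk_stream` stands in for however the spec
    receives encoded chunks from the inference worker."""
    try:
        async for chunk in chunk_stream:
            yield chunk
    except HTTPException:
        # Re-raise untouched so the intended status code propagates
        # instead of being wrapped in a generic 500.
        raise
    except Exception as e:
        logger.error("Error in streaming response: %s", e)
        raise HTTPException(status_code=500) from e
```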
@aniketmaurya aniketmaurya enabled auto-merge (squash) July 1, 2025 18:34
codecov bot commented Jul 1, 2025

Codecov Report

Attention: Patch coverage is 79.76190% with 17 lines in your changes missing coverage. Please review.

Project coverage is 85%. Comparing base (ba66a9d) to head (881ab73).
Report is 1 commit behind head on main.

Additional details and impacted files
@@         Coverage Diff         @@
##           main   #562   +/-   ##
===================================
- Coverage    85%    85%   -0%     
===================================
  Files        38     38           
  Lines      2954   2979   +25     
===================================
+ Hits       2520   2535   +15     
- Misses      434    444   +10     

@aniketmaurya aniketmaurya merged commit 51b56cf into main Jul 1, 2025
21 checks passed
@aniketmaurya aniketmaurya deleted the improve-error branch July 1, 2025 18:46