
Conversation

@aniketmaurya (Collaborator) commented Jun 16, 2025

What does this PR do?

Old: users were forced to implement every method as async and could not rely on the default decode_request and encode_response methods.

import litserve as ls

class OpenAIAPI(ls.LitAPI):
    def setup(self, device):
        pass

    async def decode_request(self, request):
        # boilerplate async override that only defers to the default implementation
        return super().decode_request(request)

    async def predict(self, request):
        for i in range(10):
            yield f"Hello, world! {i}"

    async def encode_response(self, output):
        # same boilerplate, required only because the hook had to be async
        return super().encode_response(output)

if __name__ == "__main__":
    api = OpenAIAPI(enable_async=True, spec=ls.OpenAISpec())
    server = ls.LitServer(api, accelerator="auto")
    server.run(port=8000)

New (this PR): the user only implements predict, and the remaining methods are automatically converted to async.

import litserve as ls

class OpenAIAPI(ls.LitAPI):
    def setup(self, device):
        pass

    async def predict(self, request):
        # only predict needs to be async; decode_request and encode_response
        # fall back to the defaults and are wrapped automatically
        for i in range(10):
            yield f"Hello, world! {i}"

if __name__ == "__main__":
    api = OpenAIAPI(enable_async=True, spec=ls.OpenAISpec())
    server = ls.LitServer(api, accelerator="auto")
    server.run(port=8000)
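
For reference, here is one way such a server could be queried once it is running, using the openai Python client. This is an illustrative sketch, not part of the PR: the /v1 base path, port, and model name are assumptions (LitServe's OpenAISpec exposes an OpenAI-compatible chat completions route, but check the docs for the exact path).

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

stream = client.chat.completions.create(
    model="litserve-demo",  # placeholder model name for illustration
    messages=[{"role": "user", "content": "Say hello"}],
    stream=True,
)
for chunk in stream:
    # each chunk carries an incremental delta from the streaming predict
    print(chunk.choices[0].delta.content or "", end="")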
Before submitting
  • Was this discussed/agreed via a GitHub issue? (not needed for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

…utils.py

- Added `asyncify` decorator to convert sync functions, sync generators, async functions, and async generators to a consistent async interface (a sketch of this idea follows after this list).
- Introduced `_stream_gen_from_thread` to handle streaming from sync generators in a separate thread, allowing for non-blocking async operations.
- Enhanced error handling within the streaming generator to manage exceptions effectively.
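
For illustration, a minimal sketch of an asyncify-style decorator along these lines is shown below. The names and details are assumptions for readability, not the exact code merged in this PR: async callables pass through untouched, sync functions run in a thread, and sync generators are driven from a worker thread through an asyncio.Queue so the event loop never blocks.

import asyncio
import functools
import inspect


def asyncify(fn):
    """Expose sync/async functions and generators behind one async interface."""
    if inspect.iscoroutinefunction(fn) or inspect.isasyncgenfunction(fn):
        return fn  # already async, nothing to wrap

    if inspect.isgeneratorfunction(fn):
        @functools.wraps(fn)
        async def gen_wrapper(*args, **kwargs):
            loop = asyncio.get_running_loop()
            queue = asyncio.Queue()
            sentinel = object()

            def produce():
                # Drive the sync generator in a worker thread and push items
                # onto the asyncio queue without blocking the event loop.
                try:
                    for item in fn(*args, **kwargs):
                        loop.call_soon_threadsafe(queue.put_nowait, item)
                except Exception as exc:  # surface producer errors to the consumer
                    loop.call_soon_threadsafe(queue.put_nowait, exc)
                finally:
                    loop.call_soon_threadsafe(queue.put_nowait, sentinel)

            producer = loop.run_in_executor(None, produce)
            while True:
                item = await queue.get()
                if item is sentinel:
                    break
                if isinstance(item, Exception):
                    raise item
                yield item
            await producer

        return gen_wrapper

    @functools.wraps(fn)
    async def func_wrapper(*args, **kwargs):
        # Plain sync function: run it in the default thread pool.
        return await asyncio.to_thread(fn, *args, **kwargs)

    return func_wrapper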
…tionality

- Enhanced the `asyncify` decorator to streamline the conversion of sync functions and generators to async counterparts.
- Improved documentation within the decorator for better understanding of its behavior.
- Updated imports in base.py and openai.py to utilize the new asyncify decorator, ensuring consistent async handling across the codebase.
- Removed the `_stream_gen_from_thread` and `asyncify` functions from `utils.py` to streamline async function handling.
- Updated `_async_inject_context` in `base.py` to improve context injection for async functions and generators.
- Enhanced `StreamingLoop` to correctly handle async generators in response encoding (sketched below).
- Introduced `as_async` method in `LitSpec` and `OpenAISpec` to support async specifications.
- Added `_AsyncOpenAISpecWrapper` to manage async behavior for OpenAI specifications.

These changes improve the overall async functionality and maintainability of the codebase.
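
As a rough illustration of the streaming-loop and context-injection changes described above (names are assumed; the exact call shapes in the merged code may differ), the loop can async-iterate over an async-generator predict and encode each chunk as it arrives, injecting context only when the hook accepts it:

import inspect


async def run_streaming_predict(api, request, context):
    # Pass `context` only if the user's predict signature declares it.
    kwargs = {}
    if "context" in inspect.signature(api.predict).parameters:
        kwargs["context"] = context

    async for output in api.predict(request, **kwargs):
        # Encode and yield each chunk immediately so clients receive
        # tokens as they are produced instead of waiting for the end.
        yield api.encode_response(output)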
- Introduced `_AsyncSpecWrapper` to facilitate async handling in `LitSpec`.
- Updated `as_async` method in `LitSpec` to return an instance of `_AsyncSpecWrapper` (the wrapper pattern is sketched below).
- Refactored `_AsyncOpenAISpecWrapper` to inherit from `_AsyncSpecWrapper`, streamlining async behavior for `OpenAISpec`.
- Improved async request and response handling in both specifications.

These changes enhance the overall async functionality and maintainability of the codebase.
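
The wrapper pattern described in these commits can be pictured roughly as follows. This is a hedged sketch with assumed names and simplified behavior, not the merged implementation: as_async() hands back a thin wrapper that delegates attribute access to the original spec while exposing its sync hooks as coroutines.

import asyncio


class _AsyncSpecWrapperSketch:
    """Delegates to a wrapped spec, exposing its sync hooks as coroutines."""

    def __init__(self, spec):
        self._spec = spec

    def __getattr__(self, name):
        # Anything not overridden here falls through to the wrapped spec.
        return getattr(self._spec, name)

    async def decode_request(self, request):
        return await asyncio.to_thread(self._spec.decode_request, request)

    async def encode_response(self, output):
        return await asyncio.to_thread(self._spec.encode_response, output)


class LitSpecSketch:
    def decode_request(self, request):
        return request

    def encode_response(self, output):
        return output

    def as_async(self):
        # A spec-specific subclass (e.g. an OpenAI spec) could return a
        # specialized wrapper that inherits from the base wrapper.
        return _AsyncSpecWrapperSketch(self)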
@aniketmaurya marked this pull request as ready for review June 16, 2025 13:13
- Improved the `_validate_async_methods` function to use a structured validation approach for async methods.
- Introduced a dictionary to define validation rules for `decode_request`, `encode_response`, and `predict` methods (see the sketch below).
- Enhanced error handling by collecting warnings and errors separately, providing clearer feedback when async requirements are not met.
- Ensured that appropriate warnings are issued and errors raised based on the validation results.

These changes enhance the clarity and maintainability of async method validation in the LitAPI class.
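
A hedged sketch of that structured validation approach is shown below; the rule set, messages, and function name are illustrative assumptions rather than the merged code.

import inspect
import warnings


def validate_async_methods(api):
    # Declarative rules: which hooks must be async, and which merely trigger
    # a warning before being wrapped automatically.
    rules = {
        "predict": {
            "required": True,
            "message": "predict must be a coroutine or async generator when enable_async=True",
        },
        "decode_request": {
            "required": False,
            "message": "decode_request is sync; it will be wrapped to run asynchronously",
        },
        "encode_response": {
            "required": False,
            "message": "encode_response is sync; it will be wrapped to run asynchronously",
        },
    }

    errors, soft_warnings = [], []
    for name, rule in rules.items():
        method = getattr(api, name)
        if inspect.iscoroutinefunction(method) or inspect.isasyncgenfunction(method):
            continue
        (errors if rule["required"] else soft_warnings).append(rule["message"])

    for message in soft_warnings:
        warnings.warn(message, UserWarning)
    if errors:
        raise ValueError("; ".join(errors))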
codecov bot commented Jun 16, 2025

Codecov Report

Attention: Patch coverage is 75.86207% with 14 lines in your changes missing coverage. Please review.

Project coverage is 85%. Comparing base (e03ba13) to head (bc0ee32).
Report is 1 commit behind head on main.

Additional details and impacted files
@@         Coverage Diff         @@
##           main   #552   +/-   ##
===================================
- Coverage    85%    85%   -0%     
===================================
  Files        38     38           
  Lines      2902   2940   +38     
===================================
+ Hits       2480   2504   +24     
- Misses      422    436   +14     

@aniketmaurya merged commit 550a23e into main Jun 16, 2025
21 checks passed
@aniketmaurya deleted the asyncify branch June 16, 2025 14:52
