Fix LiteLLM logging worker reset on proxy restart #174
Conversation
Pull Request Overview
This PR fixes a bug where LiteLLM's global logging worker would raise a RuntimeError when the LLM proxy is restarted, because the logging worker's asyncio queue remains bound to the old event loop after Uvicorn creates a new one.
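For context, the failure mode is not specific to LiteLLM: on Python 3.10+, an `asyncio.Queue` lazily binds to the event loop it is first awaited on, and touching it from a later loop raises a `RuntimeError`. The following is a minimal, self-contained sketch of that behavior using plain asyncio (no LiteLLM or Uvicorn involved); it is an illustration of the symptom, not code from this PR:

```python
import asyncio

# A module-level queue, analogous to the internal queue a global logging worker keeps.
queue: "asyncio.Queue[str]" = asyncio.Queue()

async def wait_briefly() -> None:
    # Awaiting get() on an empty queue binds the queue to the currently running loop.
    try:
        await asyncio.wait_for(queue.get(), timeout=0.01)
    except asyncio.TimeoutError:
        pass

asyncio.run(wait_briefly())  # first loop: queue becomes bound to it
asyncio.run(wait_briefly())  # second loop: RuntimeError: ... is bound to a different event loop
```

Recreating the worker (and therefore its queue) after Uvicorn builds the new event loop avoids exactly this kind of stale binding.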
- Adds a new function `_reset_litellm_logging_worker()` to recreate LiteLLM's global logging worker on proxy restart (a rough sketch of such a helper follows this list)
- Calls the reset function in `LLMProxy.start()` to ensure the logging worker uses the fresh event loop
- Includes a regression test to verify the logging worker is properly replaced between restarts
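The full diff is not reproduced here, so the sketch below only illustrates what a reset helper along these lines could look like. The LiteLLM module path (`litellm.litellm_core_utils.logging_worker`) and the trick of rebuilding the worker via its own type are assumptions for illustration, not the PR's actual implementation:

```python
def _reset_litellm_logging_worker() -> None:
    """Best-effort: recreate LiteLLM's global logging worker so its internal
    asyncio machinery binds to the event loop of the restarted proxy."""
    try:
        # Module path is an assumption; recent LiteLLM versions keep the global
        # worker in a dedicated logging-worker module and mirror it in litellm.utils.
        import litellm.litellm_core_utils.logging_worker as litellm_logging_worker
        import litellm.utils as litellm_utils

        old_worker = litellm_logging_worker.GLOBAL_LOGGING_WORKER
        # Build a fresh worker of the same type (assumes a no-argument constructor),
        # discarding any queue bound to the previous, now-closed event loop.
        litellm_logging_worker.GLOBAL_LOGGING_WORKER = type(old_worker)()
        # Keep the alias in litellm.utils pointing at the fresh instance.
        litellm_utils.GLOBAL_LOGGING_WORKER = litellm_logging_worker.GLOBAL_LOGGING_WORKER
    except (ImportError, AttributeError):  # pragma: no cover - best-effort hygiene
        # Older or newer LiteLLM releases may not expose these names; skip quietly.
        pass
```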
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| agentlightning/llm_proxy.py | Adds logging worker reset functionality and calls it during proxy startup |
| tests/test_llm_proxy_restart.py | New test file verifying the logging worker is refreshed on restart |
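As a rough illustration of the kind of regression test the second file could contain (the LiteLLM module path and the direct import of the private helper below are assumptions, not the PR's code), one can assert that the global worker object is swapped out by the reset helper:

```python
# tests/test_llm_proxy_restart.py -- illustrative sketch only
import litellm.litellm_core_utils.logging_worker as litellm_logging_worker

from agentlightning.llm_proxy import _reset_litellm_logging_worker


def test_logging_worker_is_replaced_on_reset() -> None:
    before = litellm_logging_worker.GLOBAL_LOGGING_WORKER
    _reset_litellm_logging_worker()
    after = litellm_logging_worker.GLOBAL_LOGGING_WORKER
    # A fresh worker means no state tied to the previous event loop survives.
    assert after is not before
```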
The comment below refers to this excerpt of `agentlightning/llm_proxy.py`:

```python
import litellm.utils as litellm_utils
# ... (lines elided) ...
        litellm_utils.GLOBAL_LOGGING_WORKER = litellm_logging_worker.GLOBAL_LOGGING_WORKER
    except Exception:  # pragma: no cover - best-effort hygiene
```
Copilot AI · Oct 20, 2025
The bare `except Exception` is too broad and could mask unexpected errors. Consider catching more specific exceptions like `ImportError` or `AttributeError` that are expected when the module doesn't exist or the attribute is missing.
Suggested change:

```diff
-except Exception:  # pragma: no cover - best-effort hygiene
+except (ImportError, AttributeError):  # pragma: no cover - best-effort hygiene
```
Summary
Testing
https://chatgpt.com/codex/tasks/task_e_68f4dab56bc4832e9e5ef8a9f053433f