update openai server #3346
base: master
Conversation
```python
except Exception as e:
    return JSONResponse(
        status_code=500,
        content={"error": f"Internal server error: {e!s}"},
    )
```
Check warning (Code scanning / CodeQL): Information exposure through an exception (Medium): Stack trace information
Copilot Autofix
AI 7 days ago
The ideal fix is to avoid sending the exception string to the user and instead provide a generic error message such as "Internal server error." To maintain developer visibility, the exception (including stack trace) should be logged using the existing logging facility (`get_logger`).
Specifically:
- In the `except Exception as e:` block on line 5432, change the response at line 5435 to have a generic message.
- Add a line to log the exception using the project logger (`logger.error`), capturing the stack trace with `exc_info=True`.
- Ensure `logger` is defined; from the code, `get_logger` is imported, so obtain a logger instance in this scope if not present.
- Do not otherwise alter application logic.

All changes must be in the shown snippet(s), i.e., entirely within camel/agents/chat_agent.py.
```diff
@@ -5430,9 +5430,11 @@
             content={"error": "No user message provided"},
         )
     except Exception as e:
+        logger = get_logger()  # Ensure logger in scope
+        logger.error("Exception in chat_completions handler", exc_info=True)
         return JSONResponse(
             status_code=500,
-            content={"error": f"Internal server error: {e!s}"},
+            content={"error": "Internal server error"},
         )


 async def _stream_response(message: BaseMessage, request_data: dict):
```
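A minimal, self-contained sketch of the pattern the autofix applies: log the full traceback server-side, return only a generic message to the client. The handler wrapper and logger name here are illustrative stand-ins, not the actual CAMEL code:

```python
import logging

logger = logging.getLogger("chat_server")  # stand-in for CAMEL's get_logger

def handle_request(fn):
    # Run a handler; on failure, log the traceback but hide details from the client.
    try:
        return {"status": 200, "body": fn()}
    except Exception:
        # exc_info=True attaches the current exception's stack trace to the log record.
        logger.error("Exception in chat_completions handler", exc_info=True)
        return {"status": 500, "body": {"error": "Internal server error"}}

result = handle_request(lambda: 1 / 0)
print(result["status"], result["body"]["error"])  # 500 Internal server error
```

The client sees nothing about the `ZeroDivisionError`; the stack trace is available only in the server logs.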
@LuoPengcheng12138 thanks for this PR! Is this an enhancement PR based on https://github.com/camel-ai/camel/pull/2311/files?

Yes, it was copied from PR #2311. When I tried merging

I see, maybe it's been a while since 2311's last update. Will help to merge main into 2311.
Wendong-Fan
left a comment
thanks @LuoPengcheng12138 , left some initial comments below
camel/agents/chat_agent.py
Outdated
```python
elif msg_role == "tool":
    # Handle tool response messages if needed
    pass
```
should we also support taking a system message, to align with the OpenAI interface?
seems the content in this file is not used in this PR
```python
# Type guard to ensure tools_to_use is not None
if tools_to_use is not None:
    for tool in tools_to_use:
        self.add_external_tool(tool)
```
seems this would accumulate the tools added to the agent; should we clean them up after each request ends?
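A toy illustration of the accumulation concern (the agent class here is a hypothetical stand-in, not CAMEL's actual `ChatAgent`):

```python
class ToyAgent:
    # Hypothetical stand-in: external tool schemas live on the agent instance.
    def __init__(self):
        self._external_tool_schemas = {}

    def add_external_tool(self, name):
        self._external_tool_schemas[name] = {"name": name}

    def handle_request(self, tools, clear_tools=True):
        if clear_tools:
            # Without this, tools from earlier requests keep piling up.
            self._external_tool_schemas.clear()
        for tool in tools:
            self.add_external_tool(tool)
        return list(self._external_tool_schemas)

agent = ToyAgent()
agent.handle_request(["search"], clear_tools=False)
print(agent.handle_request(["weather"], clear_tools=False))  # ['search', 'weather']
```

With `clear_tools=False`, the second request still carries the first request's `search` tool; clearing at the start of each request keeps tool state scoped to that request.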
thanks @LuoPengcheng12138, left some comments below and added one enhancement PR: #3380, feel free to review and leave your comments
```python
# Define Pydantic models for request/response
class ChatMessage(BaseModel):
```
seems defined but not used
```python
if reset_memory:
    self.init_messages()


def reset_system_message(
```
we already have update_system_message; seems it's not necessary to add this new function
```python
        content={"error": f"Internal server error: {e!s}"},
    )


async def _stream_response(message: BaseMessage, request_data: dict):
```
The streaming implementation awaits the full response first, then splits it into words and sends them one by one with artificial delays. This defeats the purpose of streaming.
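A sketch of the difference, using plain asyncio; the chunk source is simulated here, whereas real code would iterate the model backend's stream:

```python
import asyncio

async def fake_stream(get_full_response):
    # Anti-pattern: wait for the whole response, then dribble it out word by word.
    text = await get_full_response()  # blocks until everything is ready
    for word in text.split():
        yield word
        await asyncio.sleep(0.01)     # artificial delay, no real benefit

async def real_stream(chunk_iterator):
    # Forward each chunk to the client as soon as the backend produces it.
    async for chunk in chunk_iterator:
        yield chunk

async def demo():
    async def chunks():  # simulated backend stream
        for c in ["Hello", " ", "world"]:
            yield c
    return [c async for c in real_stream(chunks())]

print(asyncio.run(demo()))  # ['Hello', ' ', 'world']
```

With `real_stream`, the first token reaches the client as soon as the backend emits it; with `fake_stream`, time-to-first-token equals the full generation time.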
```python
messages = request_data.get("messages", [])
model = request_data.get("model", "camel-model")
stream = request_data.get("stream", False)
functions = request_data.get("functions")
tools = request_data.get("tools")
```
The agent's state persists across different API requests, which would cause conversations from different clients to mix, plus memory leaks. Would it be better to add self.reset() and self._external_tool_schemas.clear() at the start of each request?
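One way to scope state per request is to reset at the top of the handler. This is a sketch with a hypothetical agent class, not the actual CAMEL API:

```python
class StatefulAgent:
    # Hypothetical: memory and tool schemas persist on the instance between requests.
    def __init__(self):
        self.memory = []
        self._external_tool_schemas = {}

    def reset(self):
        self.memory.clear()

    def chat_completions(self, messages):
        # Fresh state per request, so client conversations never mix.
        self.reset()
        self._external_tool_schemas.clear()
        self.memory.extend(messages)
        return len(self.memory)

agent = StatefulAgent()
agent.chat_completions(["hi from client A"])
print(agent.chat_completions(["hi from client B"]))  # 1
```

After the second call the memory holds only client B's message; without the reset it would hold both clients' conversations.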
```python
elif msg_role == "system":
    sys_msg = BaseMessage.make_system_message(
        role_name="System", content=msg_content
    )
    self.update_memory(sys_msg, OpenAIBackendRole.SYSTEM)
    self.reset_system_message(msg_content, True)
```
should we update the system message without clearing memory, to preserve conversation history?
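A sketch of replacing the system message in place while keeping the rest of the history; memory here is modeled as a plain list of (role, content) pairs, an assumption for illustration only:

```python
def update_system_message(memory, new_content):
    # Replace the existing system entry instead of wiping the whole memory.
    for i, (role, _) in enumerate(memory):
        if role == "system":
            memory[i] = ("system", new_content)
            return memory
    # No system message yet: prepend one.
    memory.insert(0, ("system", new_content))
    return memory

history = [("system", "You are helpful."), ("user", "Hi"), ("assistant", "Hello!")]
update_system_message(history, "You are terse.")
print(history[0])  # ('system', 'You are terse.')
```

The user/assistant turns survive the update, which is the behavior the comment asks for, in contrast to clearing memory and losing the conversation.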
This PR is still in draft status, and I will revise it later based on your comments.
Description

Feat #3190

Checklist

Go over all the following points, and put an `x` in all the boxes that apply.

- `Fixes #issue-number` in the PR description (required)
- `pyproject.toml` and `uv lock`

If you are unsure about any of these, don't hesitate to ask. We are here to help!