
Conversation

@LuoPengcheng12138 (Collaborator)

Description

Feat #3190

Checklist

Go over all the following points, and put an x in all the boxes that apply.

  • I have read the CONTRIBUTION guide (required)
  • I have linked this PR to an issue using the Development section on the right sidebar or by adding Fixes #issue-number in the PR description (required)
  • I have checked if any dependencies need to be added or updated in pyproject.toml and uv lock
  • I have updated the tests accordingly (required for a bug fix or a new feature)
  • I have updated the documentation if needed:
  • I have added examples if this is a new feature

If you are unsure about any of these, don't hesitate to ask. We are here to help!

github-actions bot added the "Review Required" label on Oct 30, 2025
@coderabbitai bot (Contributor) commented on Oct 30, 2025

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


In camel/agents/chat_agent.py:

```python
except Exception as e:
    return JSONResponse(
        status_code=500,
        content={"error": f"Internal server error: {e!s}"},
    )
```

Check warning

Code scanning / CodeQL: Information exposure through an exception (Medium)

Stack trace information flows to this location and may be exposed to an external user.

Copilot Autofix (AI)

The ideal fix is to avoid sending the exception string to the user and instead provide a generic error message such as "Internal server error." To maintain developer visibility, the exception (including stack trace) should be logged using the existing logging facility (get_logger).

Specifically:

  • In the except Exception as e: block on line 5432, change the response at line 5435 to have a generic message.
  • Add a line to log the exception using the project logger (logger.error), capturing the stack trace with exc_info=True.
  • Ensure logger is defined; from the code, get_logger is imported—so obtain a logger instance in this scope if not present.
  • Do not otherwise alter application logic.

All changes must be in the shown snippet(s), i.e., entirely within camel/agents/chat_agent.py.


Suggested changeset: camel/agents/chat_agent.py

Autofix patch. Run the following command in your local git repository to apply this patch:
```bash
cat << 'EOF' | git apply
diff --git a/camel/agents/chat_agent.py b/camel/agents/chat_agent.py
--- a/camel/agents/chat_agent.py
+++ b/camel/agents/chat_agent.py
@@ -5430,9 +5430,11 @@
                         content={"error": "No user message provided"},
                     )
             except Exception as e:
+                logger = get_logger()  # Ensure logger in scope
+                logger.error("Exception in chat_completions handler", exc_info=True)
                 return JSONResponse(
                     status_code=500,
-                    content={"error": f"Internal server error: {e!s}"},
+                    content={"error": "Internal server error"},
                 )
 
         async def _stream_response(message: BaseMessage, request_data: dict):
EOF
```
@fengju0213 (Collaborator)

@LuoPengcheng12138 thanks for this PR! Is it an enhancement PR based on https://github.com/camel-ai/camel/pull/2311/files?

@LuoPengcheng12138 (Collaborator, Author)

> @LuoPengcheng12138 thanks for this PR! Is it an enhancement PR based on https://github.com/camel-ai/camel/pull/2311/files?

Yes, it was copied from PR 2311. When I tried merging main into 2311, the operation couldn't be completed due to numerous conflicts with the main branch (maybe), so I created a new branch instead. Could you help me merge main into 2311 if needed?

@fengju0213 (Collaborator)

> Yes, it was copied from PR 2311. When I tried merging main into 2311, the operation couldn't be completed due to numerous conflicts with the main branch (maybe), so I created a new branch instead. Could you help me merge main into 2311 if needed?

I see, maybe it's been a while since 2311's last update. Will help to merge main into 2311.

@Wendong-Fan (Member) left a comment

Thanks @LuoPengcheng12138, left some initial comments below.

Comment on lines 5287 to 5289:

```python
elif msg_role == "tool":
    # Handle tool response messages if needed
    pass
```

Should we also support taking a system message, to align with the OpenAI interface?

On another file:

Seems the content in this file is not used in this PR.

```python
# Type guard to ensure tools_to_use is not None
if tools_to_use is not None:
    for tool in tools_to_use:
        self.add_external_tool(tool)
```

Seems this would accumulate the tools added to the agent; should we clean them up after one request ends?
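One way to guarantee that cleanup, sketched as a try/finally around the rest of the endpoint body (the _handle_request call is a hypothetical stand-in for the remaining handler logic; _external_tool_schemas is the backing store named in a later comment):

```python
try:
    # Register only the tools that arrived with this request.
    if tools_to_use is not None:
        for tool in tools_to_use:
            self.add_external_tool(tool)
    response = await _handle_request(request_data)  # hypothetical handler
finally:
    # Drop request-scoped tools so they don't accumulate across requests.
    self._external_tool_schemas.clear()
```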

@Wendong-Fan added the "Waiting for Update" label and removed the "Review Required" label on Nov 2, 2025
@Wendong-Fan (Member) left a comment

Thanks @LuoPengcheng12138, left some comments below and added one enhancement PR, #3380. Feel free to review and leave your comments.

Comment on lines +5258 to +5259:

```python
# Define Pydantic models for request/response
class ChatMessage(BaseModel):
```

Seems defined but not used.

```python
        if reset_memory:
            self.init_messages()

    def reset_system_message(
```

We already have update_system_message; it seems unnecessary to add this new function.

content={"error": f"Internal server error: {e!s}"},
)

async def _stream_response(message: BaseMessage, request_data: dict):
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The streaming implementation awaited the full response first, then split it into words and sent them one-by-one with artificial delays. This defeats the purpose of streaming
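A minimal sketch of genuinely incremental streaming, assuming the agent exposes an async delta iterator (the astep_stream method here is hypothetical, and as in the PR the handler is assumed to be nested inside an agent method so self is in scope); each chunk is flushed as an OpenAI-style SSE event as soon as it arrives, with no artificial delays:

```python
import json
import time

from fastapi.responses import StreamingResponse


async def _stream_response(message: BaseMessage, request_data: dict):
    model = request_data.get("model", "camel-model")
    created = int(time.time())

    async def event_stream():
        # Hypothetical incremental API: yields text deltas as the model
        # produces them, instead of awaiting the complete response.
        async for delta in self.astep_stream(message):
            chunk = {
                "id": "chatcmpl-stream",
                "object": "chat.completion.chunk",
                "created": created,
                "model": model,
                "choices": [
                    {
                        "index": 0,
                        "delta": {"content": delta},
                        "finish_reason": None,
                    }
                ],
            }
            # Flush each chunk immediately; no sleeps between words.
            yield f"data: {json.dumps(chunk)}\n\n"
        # Terminate the stream the way the OpenAI API does.
        yield "data: [DONE]\n\n"

    return StreamingResponse(event_stream(), media_type="text/event-stream")
```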

Comment on lines +5278 to +5282:

```python
messages = request_data.get("messages", [])
model = request_data.get("model", "camel-model")
stream = request_data.get("stream", False)
functions = request_data.get("functions")
tools = request_data.get("tools")
```

The agent's state persists across different API requests, which would cause conversations from different clients to mix, and memory leaks. Would it be better to add self.reset() and self._external_tool_schemas.clear() at the start of each request?
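A sketch of the suggested guard at the top of the request handler, using exactly the two calls named above:

```python
# Start every request from a clean slate so one client's conversation
# and tools never leak into another's.
self.reset()
self._external_tool_schemas.clear()

messages = request_data.get("messages", [])
model = request_data.get("model", "camel-model")
stream = request_data.get("stream", False)
functions = request_data.get("functions")
tools = request_data.get("tools")
```

An alternative worth weighing is constructing a fresh agent (or a copy) per request, which avoids shared mutable state on the server entirely.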

Comment on lines +5301 to +5306:

```python
elif msg_role == "system":
    sys_msg = BaseMessage.make_system_message(
        role_name="System", content=msg_content
    )
    self.update_memory(sys_msg, OpenAIBackendRole.SYSTEM)
    self.reset_system_message(msg_content, True)
```

Should we update the system message without clearing memory, to preserve conversation history?
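One way to act on that, assuming the second positional argument of reset_system_message is the reset_memory flag shown in an earlier snippet:

```python
elif msg_role == "system":
    sys_msg = BaseMessage.make_system_message(
        role_name="System", content=msg_content
    )
    self.update_memory(sys_msg, OpenAIBackendRole.SYSTEM)
    # Pass reset_memory=False: swap in the new system prompt while
    # keeping the accumulated conversation history in memory.
    self.reset_system_message(msg_content, False)
```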

@LuoPengcheng12138 (Collaborator, Author)

> Thanks @LuoPengcheng12138, left some comments below and added one enhancement PR, #3380. Feel free to review and leave your comments.

This PR is still in draft status, and I will revise it later based on your comments.
