feat: wire full agent loop into WebSocket chat handler #3367
Closed
liorwn wants to merge 1 commit into zeroclaw-labs:master
Conversation
The /ws/chat endpoint previously called provider.chat_with_history()
directly, which only returned raw LLM output without executing tool
calls. This meant WebSocket clients could not use tools, a significant
gap compared with the CLI agent mode.
This commit replaces the direct provider call with process_message(),
which runs the full agent loop: tool parsing, execution (shell, file,
memory, etc.), security policy enforcement, and multi-turn tool
iterations. WebSocket clients now get identical agentic behaviour to
`zeroclaw agent -m "..."`.
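The agent loop described above can be sketched roughly as follows. This is a hypothetical illustration only: `process_message()`'s real signature, zeroclaw's tool-call wire format, and the `parse_tool_calls` helper are all assumptions, not taken from the PR.

```python
import json

def parse_tool_calls(text):
    # Hypothetical format: tool calls encoded as lines like
    #   TOOL {"name": "shell", "args": {"cmd": "ls"}}
    calls = []
    for line in text.splitlines():
        if line.startswith("TOOL "):
            obj = json.loads(line[len("TOOL "):])
            calls.append((obj["name"], obj.get("args", {})))
    return calls

def process_message(user_msg, provider, tools, max_iterations=5):
    """Sketch of a multi-turn agent loop: call the LLM, execute any
    requested tools, feed results back, repeat until a plain answer."""
    history = [{"role": "user", "content": user_msg}]
    reply = ""
    for _ in range(max_iterations):
        reply = provider(history)          # raw LLM output for this turn
        calls = parse_tool_calls(reply)
        if not calls:
            return reply                   # no tool calls: final answer
        history.append({"role": "assistant", "content": reply})
        for name, args in calls:
            if name not in tools:          # stand-in for policy enforcement
                result = f"tool '{name}' denied"
            else:
                result = tools[name](**args)
            history.append({"role": "tool", "content": str(result)})
    return reply                           # iteration budget exhausted
```

The key difference from a bare `chat_with_history()` call is the inner loop: tool output is appended to the history and the model is queried again, so multi-step tool use works over a single WebSocket exchange.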
Changes:
- Replace chat_with_history() with process_message() in handle_socket()
- Add "thinking" event so clients can show loading state during tool execution
- Auto-save user/assistant messages to memory (respects auto_save config)
- Update module doc comments to reflect the new behaviour
The protocol remains the same:
Client → {"type":"message","content":"..."}
Server → {"type":"thinking"}
Server → {"type":"done","full_response":"..."}
Server → {"type":"error","message":"..."}
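On the client side, the three server event types above can be dispatched with a small handler. A minimal sketch, assuming a dict-based UI state; the `ui` structure and function names are illustrative, not part of zeroclaw's API:

```python
import json

def make_user_message(content):
    # Client → server frame, per the protocol above
    return json.dumps({"type": "message", "content": content})

def handle_server_event(raw, ui):
    """Apply one server → client frame to a toy UI state dict."""
    event = json.loads(raw)
    if event["type"] == "thinking":
        ui["loading"] = True                       # show spinner while tools run
    elif event["type"] == "done":
        ui["loading"] = False
        ui["response"] = event["full_response"]    # final agent answer
    elif event["type"] == "error":
        ui["loading"] = False
        ui["error"] = event["message"]
    return ui
```

Because "thinking" is a new, additive event, clients that ignore unknown types keep working unchanged.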
Resolves the tool execution gap noted in zeroclaw-labs#88.
Collaborator
Closing: CI failed across Build, Lint, Test, and Check with a compilation error.
Summary

The /ws/chat endpoint currently calls provider.chat_with_history() directly, which only returns raw LLM output without executing tool calls. This means WebSocket clients cannot use tools, a significant gap vs the CLI agent mode and a blocker for platform integrations that rely on WebSocket.

This PR replaces the direct provider call with process_message(), which runs the full agent loop: tool parsing, execution (shell, file, memory, etc.), security policy enforcement, and multi-turn tool iterations.

Changes

- Replace chat_with_history() with process_message() in handle_socket()
- Add a {"type": "thinking"} event so clients can show a loading state during tool execution
- Auto-save user/assistant messages to memory (respects the auto_save config)

Protocol

No breaking changes. Same message format:

Client → {"type":"message","content":"..."}
Server → {"type":"thinking"}
Server → {"type":"done","full_response":"..."}
Server → {"type":"error","message":"..."}
Motivation
We're building a managed AI agent hosting platform (getbotler.ai) and evaluated ZeroClaw as a replacement for OpenClaw on our dedicated tier. The <1s cold start and <5MB RAM footprint are exactly what we need. The only blocker was that WebSocket chat didn't execute tools; this PR fixes that.
Related: #88 (feature parity checklist mentions tool execution gap)
Testing
Tested locally with zeroclaw gateway and a WebSocket client. Tool calls (shell, file_read, file_write, memory_store) all execute correctly through the WebSocket endpoint after this change.