Commit 8070974

fix(ai): align backend event names and payloads with frontend listeners
Fix critical event name mismatch in the legacy AI streaming pipeline where the Rust backend emitted "ai:stream" but all frontend listeners expected "ai:stream_chunk". Also flatten StreamEventPayload from nested { thread_id, chunk: StreamChunk } to flat { thread_id, content, done }, matching the frontend StreamChunkEvent interface.

Backend changes (src-tauri/src/ai/mod.rs):
- Rename event "ai:stream" to "ai:stream_chunk"
- Flatten StreamEventPayload to { thread_id, content, done }
- Add ToolCallPayload struct and "ai:tool_call" event emission when streaming chunks contain tool calls (previously silently dropped)
- Add doc comments documenting event name, payload, and listeners

Test fixes (src/context/__tests__/AIContext.test.tsx):
- Fix hyphenated event names to match actual underscore-based names: ai:stream-start → ai:stream_chunk, ai:stream-chunk → ai:tool_call, ai:stream-end → ai:tool_result, ai:stream-error → ai:error

Event contract documentation:
- protocol.rs: Document WsMessage as the "cortex-event" payload contract
- events.ts: Add module-level docs explaining the two AI pipelines; add per-interface Tauri event annotations with emitter and pipeline info
- AIStreamContext.tsx: Document all 4 consumed legacy pipeline events
- SDKContext.tsx: Annotate the "cortex-event" listener with contract details
- docs/AI_EVENT_CONTRACTS.md: New 190-line comprehensive reference for all AI event channels, both pipelines, payload shapes, and wiring
1 parent 091c1b5 commit 8070974

File tree

7 files changed

+316
-23
lines changed


docs/AI_EVENT_CONTRACTS.md

Lines changed: 190 additions & 0 deletions
# AI Event Contracts

This document describes all Tauri IPC event channels used for AI features in Cortex Desktop, their payload shapes, emitters, and listeners.

## Overview

Cortex Desktop has **two separate AI event pipelines**:

| Pipeline | Tauri Event | Purpose | Status |
|----------|-------------|---------|--------|
| **Cortex Protocol** | `"cortex-event"` | Primary agent pipeline (sessions, streaming, tools, approvals) | Active / Primary |
| **Legacy AI Module** | `"ai:stream_chunk"`, `"ai:tool_call"`, etc. | Direct model streaming, inline completions | Active / Secondary |

---
## Pipeline 1: Cortex Protocol (`"cortex-event"`)

### Channel

- **Tauri Event:** `"cortex-event"`
- **Emitter:** `src-tauri/src/ai/session.rs` (`convert_event_to_ws()`)
- **Payload Type:** `WsMessage` enum (see `src-tauri/src/ai/protocol.rs`)
- **Serialization:** `#[serde(tag = "type", rename_all = "snake_case")]`

### Frontend Listeners

| File | Hook/Method |
|------|-------------|
| `src/context/SDKContext.tsx` | `useTauriListen<CortexEvent>("cortex-event", ...)` |
| `src/context/AgentFollowContext.tsx` | `listen("cortex-event", ...)` |

### Message Types

All messages are JSON objects with a `type` field discriminator:

#### Streaming

| `type` | Fields | Description |
|--------|--------|-------------|
| `stream_chunk` | `{ content: string }` | Streaming text delta |
| `agent_message` | `{ content: string }` | Full message (end of stream) |
| `reasoning_delta` | `{ delta: string }` | Thinking/reasoning text delta |

#### Tool Execution

| `type` | Fields | Description |
|--------|--------|-------------|
| `tool_call_begin` | `{ call_id, tool_name, arguments }` | Tool execution started |
| `tool_call_output_delta` | `{ call_id, stream, chunk }` | Tool stdout/stderr (base64) |
| `tool_call_end` | `{ call_id, tool_name, output, success, duration_ms, metadata? }` | Tool execution completed |
| `approval_request` | `{ call_id, command, cwd }` | Awaiting user approval |

#### Task Lifecycle

| `type` | Fields | Description |
|--------|--------|-------------|
| `task_started` | `{}` | Agent task began |
| `task_complete` | `{ message? }` | Agent task finished |
| `cancelled` | `{}` | Operation was cancelled |

#### Session Management

| `type` | Fields | Description |
|--------|--------|-------------|
| `joined_session` | `{ session_id }` | Joined a session |
| `session_configured` | `{ session_id, model, cwd }` | Session configured by CLI |
| `model_updated` | `{ model }` | Model changed |
| `session_closed` | `{}` | Session ended |

#### Metadata

| `type` | Fields | Description |
|--------|--------|-------------|
| `token_usage` | `{ input_tokens, output_tokens, total_tokens }` | Token count update |
| `message_received` | `{ id, role, content }` | Message echo |
| `status` | `{ connected, authenticated, session_id?, uptime_seconds }` | Connection status |

#### Errors

| `type` | Fields | Description |
|--------|--------|-------------|
| `error` | `{ code, message }` | Error message |
| `warning` | `{ message }` | Warning message |

#### Terminal

| `type` | Fields | Description |
|--------|--------|-------------|
| `terminal_created` | `{ terminal_id, name, cwd }` | Terminal created |
| `terminal_output` | `{ terminal_id, timestamp, content, stream }` | Terminal output |
| `terminal_status` | `{ terminal_id, status, exit_code? }` | Terminal status change |
| `terminal_list` | `{ terminals }` | Terminal list |

#### Design System

| `type` | Fields | Description |
|--------|--------|-------------|
| `design_system_pending` | `{ call_id, project_type, fonts, palettes }` | Awaiting design system selection |
| `design_system_received` | `{ call_id }` | Design system selection received |

---
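On the frontend, the `type` discriminator means `WsMessage` maps naturally onto a TypeScript discriminated union. A minimal sketch covering a few of the variants above (the app's own `CortexEvent` type in `events.ts` is the authoritative definition; this subset and the `describe` helper are illustrative only):

```typescript
// Illustrative subset of the "cortex-event" payload contract.
type WsMessage =
  | { type: "stream_chunk"; content: string }
  | { type: "agent_message"; content: string }
  | { type: "task_complete"; message?: string }
  | { type: "error"; code: string; message: string };

// Narrowing on `type` gives typed access to each variant's fields.
function describe(msg: WsMessage): string {
  switch (msg.type) {
    case "stream_chunk":
      return `delta: ${msg.content}`;
    case "agent_message":
      return `message: ${msg.content}`;
    case "task_complete":
      return msg.message ? `done: ${msg.message}` : "done";
    case "error":
      return `error ${msg.code}: ${msg.message}`;
  }
}
```

Because the union is tagged, the `switch` is exhaustive: adding a variant to the type without handling it becomes a compile-time error.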
## Pipeline 2: Legacy AI Module (`"ai:*"` events)

### Events

#### `"ai:stream_chunk"`

- **Emitter:** `src-tauri/src/ai/mod.rs` (`ai_stream` command)
- **Payload:** `{ threadId: string, content: string, done: boolean }`
- **Listeners:**
  - `src/context/AIContext.tsx` (`setupEventListeners()`)
  - `src/context/ai/AIStreamContext.tsx` (`setupEventListeners()`)
  - `src/components/ai/InlineAssistant.tsx` (`listen("ai:stream_chunk", ...)`)

#### `"ai:tool_call"`

- **Emitter:** `src-tauri/src/ai/mod.rs` (`ai_stream` command, when `StreamChunk.tool_calls` is present)
- **Payload:** `{ threadId: string, callId: string, name: string, arguments: string }`
- **Listeners:**
  - `src/context/AIContext.tsx` (`setupEventListeners()`)
  - `src/context/ai/AIStreamContext.tsx` (`setupEventListeners()`)

#### `"ai:tool_result"`

- **Emitter:** Not currently emitted by backend (tool results flow through the `"cortex-event"` pipeline)
- **Payload:** `{ threadId: string, callId: string, output: string, success: boolean, durationMs?: number }`
- **Listeners:**
  - `src/context/AIContext.tsx` (`setupEventListeners()`)
  - `src/context/ai/AIStreamContext.tsx` (`setupEventListeners()`)
  - `src/components/cortex/CortexAIModificationsPanel.tsx` (`listen("ai:tool_result", ...)`)

#### `"ai:error"`

- **Emitter:** Not currently emitted by backend (errors flow through the `"cortex-event"` pipeline)
- **Payload:** `{ code: string, message: string }`
- **Listeners:**
  - `src/context/AIContext.tsx` (`setupEventListeners()`)
  - `src/context/ai/AIStreamContext.tsx` (`setupEventListeners()`)

#### `"ai:completion_stream"`

- **Emitter:** `src-tauri/src/ai/completions.rs` (`ai_inline_completion` command)
- **Payload:** `{ requestId: string, delta: string, done: boolean }`
- **Listeners:**
  - `src/providers/InlineCompletionsProvider.ts` (`listen("ai:completion_stream", ...)`)

#### `"ai:index_progress"`

- **Emitter:** `src-tauri/src/ai/indexer.rs`
- **Payload:** `{ totalFiles, indexedFiles, totalChunks, done, currentFile }`
- **Listeners:**
  - `src/context/ai/AIAgentContext.tsx` (`listen("ai:index_progress", ...)`)

#### `"ai:agent_status"`

- **Emitter:** `src-tauri/src/ai/agents/commands.rs` (via agent lifecycle events)
- **Payload:** `{ agentId: string, status: string }`
- **Listeners:**
  - `src/context/ai/AIAgentContext.tsx` (`listen("ai:agent_status", ...)`)

---
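Because each legacy chunk carries a `threadId` and a `done` flag, listeners typically accumulate content per thread until the final chunk arrives. A minimal sketch of that pattern (`StreamAccumulator` is hypothetical, not a class in the codebase):

```typescript
// Mirrors the documented "ai:stream_chunk" payload shape.
interface StreamChunkEvent {
  threadId: string;
  content: string;
  done: boolean;
}

// Accumulates streamed deltas per thread, independent of arrival interleaving.
class StreamAccumulator {
  private buffers = new Map<string, string>();

  // Returns the completed text when `done` is set, otherwise null.
  push(ev: StreamChunkEvent): string | null {
    const text = (this.buffers.get(ev.threadId) ?? "") + ev.content;
    if (ev.done) {
      this.buffers.delete(ev.threadId);
      return text;
    }
    this.buffers.set(ev.threadId, text);
    return null;
  }
}
```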
## Other AI-Related Events

#### `"agent-action"`

- **Emitter:** `src-tauri/src/ai/session.rs` (`log_action()`)
- **Payload:** `{ action: { type, ... }, description, category }`
- **Listeners:**
  - `src/context/AgentFollowContext.tsx` (`listen("agent-action", ...)`)

#### `"openrouter:stream"`

- **Emitter:** `src-tauri/src/ai/openrouter_commands.rs` (`openrouter_stream_chat`)
- **Payload:** `{ threadId: string, chunk: StreamChunk }`
- **Listeners:**
  - `src/utils/llm/OpenRouterProvider.ts` (`listen("openrouter:stream", ...)`)

#### Agent Lifecycle Events

| Event | Emitter | Payload |
|-------|---------|---------|
| `"agent:spawned"` | `agents/commands.rs` | `{ agent: Agent }` |
| `"agent:task_started"` | `agents/commands.rs` | `{ agentId, prompt }` |
| `"agent:task_progress"` | `agents/commands.rs` | Progress chunk |
| `"agent:task_completed"` | `agents/commands.rs` | `{ task: Task }` |
| `"agent:task_failed"` | `agents/commands.rs` | `{ agentId, error }` |
| `"agent:task_cancelled"` | `agents/commands.rs` | `{ taskId }` |
| `"agent:removed"` | `agents/commands.rs` | `{ agentId }` |

src-tauri/src/ai/mod.rs

Lines changed: 42 additions & 6 deletions
```diff
@@ -167,10 +167,15 @@ pub async fn ai_complete(
         .map_err(|e| e.to_string())
 }
 
-/// Stream a conversation response
+/// Stream a conversation response.
 ///
-/// Emits "ai:stream_chunk" events with StreamChunk payloads.
-/// Event payload includes thread_id for routing to correct UI component.
+/// **Tauri Event:** `"ai:stream_chunk"`
+/// **Payload:** `{ threadId: string, content: string, done: bool }`
+/// **Direction:** Backend → Frontend
+/// **Listeners:** `AIContext.tsx`, `AIStreamContext.tsx`, `InlineAssistant.tsx`
+///
+/// When `chunk.tool_calls` is present, also emits `"ai:tool_call"` events
+/// with payload `{ threadId, callId, name, arguments }`.
 #[tauri::command]
 pub async fn ai_stream(
     app: AppHandle,
@@ -189,9 +194,27 @@ pub async fn ai_stream(
     let app_clone = app.clone();
     tauri::async_runtime::spawn(async move {
         while let Some(chunk) = rx.recv().await {
+            // Emit tool_call events if tool calls are present in this chunk
+            if let Some(ref tool_calls) = chunk.tool_calls {
+                for tc in tool_calls {
+                    if let Some(ref id) = tc.id {
+                        let tool_payload = ToolCallPayload {
+                            thread_id: thread_id_clone.clone(),
+                            call_id: id.clone(),
+                            name: tc.function.name.clone(),
+                            arguments: tc.function.arguments.clone(),
+                        };
+                        if let Err(e) = app_clone.emit("ai:tool_call", &tool_payload) {
+                            error!("Failed to emit tool_call event: {}", e);
+                        }
+                    }
+                }
+            }
+
             let event_payload = StreamEventPayload {
                 thread_id: thread_id_clone.clone(),
-                chunk,
+                content: chunk.content,
+                done: chunk.done,
             };
             if let Err(e) = app_clone.emit("ai:stream_chunk", &event_payload) {
                 error!("Failed to emit stream event: {}", e);
@@ -208,12 +231,25 @@ pub async fn ai_stream(
     Ok(())
 }
 
-/// Payload for stream events
+/// Flattened payload for `"ai:stream_chunk"` events.
+/// Matches the frontend `StreamChunkEvent` interface: `{ threadId, content, done }`.
 #[derive(Debug, Clone, serde::Serialize)]
 #[serde(rename_all = "camelCase")]
 struct StreamEventPayload {
     thread_id: String,
-    chunk: StreamChunk,
+    content: String,
+    done: bool,
+}
+
+/// Payload for `"ai:tool_call"` events.
+/// Matches the frontend `ToolCallEvent` interface.
+#[derive(Debug, Clone, serde::Serialize)]
+#[serde(rename_all = "camelCase")]
+struct ToolCallPayload {
+    thread_id: String,
+    call_id: String,
+    name: String,
+    arguments: String,
 }
 
 // =============================================================================
```

src-tauri/src/ai/protocol.rs

Lines changed: 11 additions & 2 deletions
```diff
@@ -8,8 +8,17 @@ pub struct TokenUsageInfo {
     pub total_tokens: u32,
 }
 
-/// Server-to-client WebSocket messages.
-/// Adapted for Tauri event emitting.
+/// Server-to-client messages emitted via the `"cortex-event"` Tauri event channel.
+///
+/// This is the **primary AI pipeline** used by `SDKContext.tsx` and `AgentPanel.tsx`.
+/// Events are serialized with `#[serde(tag = "type", rename_all = "snake_case")]`,
+/// so each variant becomes a `{ type: "variant_name", ...fields }` JSON object.
+///
+/// **Tauri Event:** `"cortex-event"`
+/// **Direction:** Backend (`session.rs`) → Frontend (`SDKContext.tsx`)
+/// **Emitter:** `convert_event_to_ws()` in `session.rs`
+///
+/// See also: `docs/AI_EVENT_CONTRACTS.md` for the full event contract reference.
 #[derive(Debug, Clone, Serialize, Deserialize)]
 #[serde(tag = "type", rename_all = "snake_case")]
 pub enum WsMessage {
```

src/context/SDKContext.tsx

Lines changed: 5 additions & 1 deletion
```diff
@@ -800,7 +800,11 @@ export function SDKProvider(props: ParentProps) {
     }
   });
 
-  // Listen for Tauri cortex events
+  // Listen for Tauri cortex events (Primary AI Pipeline)
+  // Event: "cortex-event" — emitted by session.rs via convert_event_to_ws()
+  // Payload: WsMessage (see src-tauri/src/ai/protocol.rs) with { type: "...", ...fields }
+  // Handles: stream_chunk, agent_message, tool_call_begin/end, task_started/complete,
+  //   approval_request, token_usage, error, reasoning_delta, terminal events, etc.
   useTauriListen<CortexEvent>("cortex-event", (payload) => {
     processMessage(payload);
   });
```
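Among the message types this listener handles, the event contract documents the `chunk` field of `tool_call_output_delta` as base64-encoded tool stdout/stderr, so a handler must decode it before display. A minimal sketch (`decodeToolOutput` is illustrative; Node's `Buffer` is assumed here, a browser build would use `atob`/`TextDecoder` instead):

```typescript
// Illustrative decoder for tool_call_output_delta chunks, which the contract
// documents as base64-encoded stdout/stderr.
function decodeToolOutput(chunk: string): string {
  return Buffer.from(chunk, "base64").toString("utf8");
}
```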

src/context/__tests__/AIContext.test.tsx

Lines changed: 11 additions & 11 deletions
```diff
@@ -357,36 +357,36 @@ describe("AIContext", () => {
   });
 
   describe("Streaming Events", () => {
-    it("should listen for stream start event", async () => {
+    it("should listen for stream chunk event", async () => {
       vi.mocked(listen).mockResolvedValueOnce(() => {});
 
-      await listen("ai:stream-start", () => {});
+      await listen("ai:stream_chunk", () => {});
 
-      expect(listen).toHaveBeenCalledWith("ai:stream-start", expect.any(Function));
+      expect(listen).toHaveBeenCalledWith("ai:stream_chunk", expect.any(Function));
     });
 
-    it("should listen for stream chunk event", async () => {
+    it("should listen for tool call event", async () => {
       vi.mocked(listen).mockResolvedValueOnce(() => {});
 
-      await listen("ai:stream-chunk", () => {});
+      await listen("ai:tool_call", () => {});
 
-      expect(listen).toHaveBeenCalledWith("ai:stream-chunk", expect.any(Function));
+      expect(listen).toHaveBeenCalledWith("ai:tool_call", expect.any(Function));
     });
 
-    it("should listen for stream end event", async () => {
+    it("should listen for tool result event", async () => {
       vi.mocked(listen).mockResolvedValueOnce(() => {});
 
-      await listen("ai:stream-end", () => {});
+      await listen("ai:tool_result", () => {});
 
-      expect(listen).toHaveBeenCalledWith("ai:stream-end", expect.any(Function));
+      expect(listen).toHaveBeenCalledWith("ai:tool_result", expect.any(Function));
     });
 
     it("should listen for stream error event", async () => {
       vi.mocked(listen).mockResolvedValueOnce(() => {});
 
-      await listen("ai:stream-error", () => {});
+      await listen("ai:error", () => {});
 
-      expect(listen).toHaveBeenCalledWith("ai:stream-error", expect.any(Function));
+      expect(listen).toHaveBeenCalledWith("ai:error", expect.any(Function));
     });
   });
```

src/context/ai/AIStreamContext.tsx

Lines changed: 14 additions & 2 deletions
```diff
@@ -1,6 +1,18 @@
 /**
- * AIStreamContext - Manages AI streaming state
- *
+ * AIStreamContext - Manages AI streaming state (Legacy Pipeline)
+ *
+ * This context listens to the **legacy AI event pipeline** (`"ai:*"` events)
+ * emitted by the `ai_stream` Tauri command in `src-tauri/src/ai/mod.rs`.
+ *
+ * **Events consumed:**
+ * - `"ai:stream_chunk"` → `{ threadId, content, done }` — streaming content deltas
+ * - `"ai:tool_call"` → `{ threadId, callId, name, arguments }` — tool call notifications
+ * - `"ai:tool_result"` → `{ threadId, callId, output, success, durationMs? }` — tool results
+ * - `"ai:error"` → `{ code, message }` — error notifications
+ *
+ * **Note:** The primary AI pipeline uses `"cortex-event"` via `SDKContext.tsx` instead.
+ * This context is used by `InlineAssistant.tsx` and direct model streaming features.
+ *
  * Handles:
  * - Streaming content accumulation
  * - Stream cancellation
```
