Merged
12 changes: 12 additions & 0 deletions rules/electron-ipc.md
@@ -100,6 +100,18 @@ queryClient.invalidateQueries({ queryKey: queryKeys.apps.all });

**Adding new keys:** Add entries to the appropriate domain in `queryKeys.ts`. Follow the existing pattern with `all` for the base key and factory functions using object parameters for parameterized keys.
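
A minimal sketch of the shape this describes — the domain and key names here are illustrative, not the actual contents of `queryKeys.ts`:

```typescript
// Hypothetical sketch of the queryKeys pattern: `all` as the base key,
// plus factory functions taking object parameters for parameterized keys.
// Actual domains and key names in queryKeys.ts may differ.
const queryKeys = {
  apps: {
    // Base key: invalidating this matches every query in the domain.
    all: ["apps"] as const,
    // Parameterized key: a factory taking an object parameter.
    detail: ({ appId }: { appId: number }) =>
      ["apps", "detail", { appId }] as const,
  },
};

// Usage: queryClient.invalidateQueries({ queryKey: queryKeys.apps.all });
```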

## High-volume event batching

When an IPC event can fire at very high frequency (e.g., stdout/stderr from child processes), **batch messages and flush on a timer** instead of sending each message individually. This prevents IPC channel saturation, excessive array allocations in the renderer, and unnecessary React re-renders.

**Pattern** (see `app_handlers.ts` `enqueueAppOutput`/`flushAllAppOutputs`):

- Buffer outgoing events in a `Map<WebContents, Payload[]>`.
- Start a `setTimeout` on first enqueue; flush all buffered messages as a single batch event (e.g., `app:output-batch`) when the timer fires (100ms default).
- Flush immediately on process exit so no messages are lost.
- Keep latency-sensitive events (e.g., `input-requested`) on an immediate, unbatched channel.
- On the renderer side, process the entire batch array in a single state update (`setConsoleEntries(prev => [...prev, ...newEntries])`) instead of one update per message.

## Streaming chunk optimizations

The `chat:response:chunk` event supports two modes:
67 changes: 45 additions & 22 deletions src/hooks/useRunApp.ts
@@ -70,12 +70,22 @@ export function useAppOutputSubscription() {
console.error("Failed to respond to app input:", error);
}
});
return; // Don't add to regular output
return null; // Don't add to regular output
}

// Add to console entries
// Use "server" type for stdout/stderr to match the backend log store
// (app_handlers.ts stores these as type: "server")
// Handle HMR updates
if (
output.message.includes("hmr update") &&
output.message.includes("[vite]")
) {
onHotModuleReload();
}

// Process proxy server output
processProxyServerOutput(output);

// Only send client-error logs to central store
// Server logs (stdout/stderr) are already stored in the main process
const logEntry = {
level:
output.type === "stderr" || output.type === "client-error"
Expand All @@ -87,39 +97,52 @@ export function useAppOutputSubscription() {
timestamp: output.timestamp ?? Date.now(),
};

// Only send client-error logs to central store
// Server logs (stdout/stderr) are already stored in the main process
if (output.type === "client-error") {
ipc.misc.addLog(logEntry);
}

// Also update UI state
setConsoleEntries((prev) => [...prev, logEntry]);

// Process proxy server output
processProxyServerOutput(output);
return logEntry;
},
[setConsoleEntries, processProxyServerOutput],
[processProxyServerOutput, onHotModuleReload],
);

// Subscribe to app output events from main process
// Subscribe to immediate app output events (input-requested)
useEffect(() => {
const unsubscribe = ipc.events.misc.onAppOutput((output) => {
// Only process events for the currently selected app
if (appId !== null && output.appId === appId) {
// Handle HMR updates
if (
output.message.includes("hmr update") &&
output.message.includes("[vite]")
) {
onHotModuleReload();
const entry = processAppOutput(output);
if (entry) {
setConsoleEntries((prev) => [...prev, entry]);
}
}
});

return unsubscribe;
}, [appId, processAppOutput, setConsoleEntries]);

// Subscribe to batched app output events (stdout/stderr)
useEffect(() => {
const unsubscribe = ipc.events.misc.onAppOutputBatch((outputs) => {
const newEntries: ReturnType<typeof processAppOutput>[] = [];
for (const output of outputs) {
if (appId !== null && output.appId === appId) {
const entry = processAppOutput(output);
if (entry) {
newEntries.push(entry);
}
}
processAppOutput(output);
}

if (newEntries.length > 0) {
setConsoleEntries((prev) => [
...prev,
...(newEntries as NonNullable<(typeof newEntries)[number]>[]),
]);
Comment on lines +126 to +140 (Contributor) — medium

To improve readability and type safety, we can explicitly type newEntries as ConsoleEntry[]. This allows us to remove the verbose type assertion when updating the state, making the code cleaner and more maintainable.

To do this, you'll also need to import the ConsoleEntry type at the top of the file:

import type { ConsoleEntry } from "@/ipc/types";
Suggested change
```suggestion
const newEntries: ConsoleEntry[] = [];
for (const output of outputs) {
  if (appId !== null && output.appId === appId) {
    const entry = processAppOutput(output);
    if (entry) {
      newEntries.push(entry);
    }
  }
}
if (newEntries.length > 0) {
  setConsoleEntries((prev) => [
    ...prev,
    ...newEntries,
  ]);
}
```

}
});

return unsubscribe;
}, [appId, processAppOutput, onHotModuleReload]);
}, [appId, processAppOutput, setConsoleEntries]);
}

export function useRunApp() {
52 changes: 46 additions & 6 deletions src/ipc/handlers/app_handlers.ts
@@ -50,6 +50,7 @@ import {
gitRenameBranch,
} from "../utils/git_utils";
import { safeSend } from "../utils/safe_sender";
import type { AppOutput } from "../types/misc";
import { normalizePath } from "../../../shared/normalizePath";
import {
isServerFunction,
@@ -278,6 +279,43 @@ Details: ${details || "n/a"}
});
}

// =============================================================================
// App Output Batcher
// =============================================================================
// Batches stdout/stderr IPC messages to avoid flooding the renderer when apps
// emit high-volume logs. Messages are buffered and flushed every 100ms.

const APP_OUTPUT_FLUSH_INTERVAL_MS = 100;

const pendingOutputs = new Map<Electron.WebContents, AppOutput[]>();
let flushTimer: ReturnType<typeof setTimeout> | null = null;

function enqueueAppOutput(
sender: Electron.WebContents,
output: AppOutput,
): void {
let queue = pendingOutputs.get(sender);
if (!queue) {
queue = [];
pendingOutputs.set(sender, queue);
}
queue.push(output);

if (!flushTimer) {
flushTimer = setTimeout(flushAllAppOutputs, APP_OUTPUT_FLUSH_INTERVAL_MS);
Comment on lines +304 to +305 — P2: Flush queued output before it can span an app switch

Because enqueueAppOutput() now delays every stdout/stderr event by 100ms, logs emitted while app A is selected can be delivered only after the user has switched to app B. At that point src/app/layout.tsx:102-106 has already cleared appConsoleEntriesAtom, and src/hooks/useRunApp.ts:125-133 drops any batched entries whose output.appId !== appId, so those just-produced lines disappear from the console permanently. Before batching, the same output was delivered immediately while app A was still selected.


}
}

function flushAllAppOutputs(): void {
flushTimer = null;
Comment on lines +309 to +310 (Contributor) — 🟡 Missing clearTimeout in flushAllAppOutputs causes stale timer to prematurely flush other processes' batched data

flushAllAppOutputs() sets flushTimer = null at src/ipc/handlers/app_handlers.ts:310 without calling clearTimeout(flushTimer) first. When this function is called manually from the process close handler (line 459), the previously scheduled setTimeout (created at line 305) is still pending and will fire later. This stale timer calls flushAllAppOutputs() again, which flushes any data newly enqueued by other running processes — breaking their 100ms batching window and sending incomplete batches early. It also sets flushTimer = null even though a new timer may have been created by enqueueAppOutput in the interim, silently orphaning that reference.

Suggested change
```suggestion
function flushAllAppOutputs(): void {
  if (flushTimer) {
    clearTimeout(flushTimer);
  }
  flushTimer = null;
```

for (const [sender, outputs] of pendingOutputs) {
Comment on lines +309 to +311 — P2: Cancel pending flush timeout when flushing early

flushAllAppOutputs() is called from the process close handler to force-deliver buffered logs, but this function only sets flushTimer = null and never clears the already scheduled timeout. If any new output is enqueued before that stale timeout fires, the old callback will flush the new queue early, shrinking the intended 100ms batching window and reintroducing extra IPC bursts/rerenders under restart or rapid process-exit scenarios.


if (outputs.length > 0) {
safeSend(sender, "app:output-batch", outputs);
Comment on lines +309 to +313 — P2: Drop buffered stdout/stderr when the user clears logs

If the user clicks Clear logs while an app is still producing output, src/components/preview_panel/Console.tsx:106-112 clears the backend store and UI immediately, but any stdout/stderr already sitting in pendingOutputs is still emitted here on the next timer tick or process-exit flush. That makes previously cleared lines reappear in the console, which is a regression introduced by buffering every output for up to 100ms.


}
}
pendingOutputs.clear();
}
Comment on lines +309 to +317 (Contributor) — P1: Missing clearTimeout before manual flush on process close

flushAllAppOutputs() sets flushTimer = null but never calls clearTimeout on the handle, so the originally-scheduled timer is still alive. When it fires ~100ms later it calls flushAllAppOutputs() again. In isolation that's benign (the map is already empty), but in a race where a new app process starts within those 100ms:

  1. Close fires → manual flushAllAppOutputs() → flushTimer = null, map cleared.
  2. New process starts → enqueueAppOutput sees flushTimer === null, schedules a new timer.
  3. The old stale timer fires → calls flushAllAppOutputs() → drains the new process's messages earlier than intended.
  4. The new timer fires → no-op (map is empty).

This breaks the 100ms batching guarantee for the new process. The fix is to cancel the pending timer before resetting:

Suggested change
```suggestion
function flushAllAppOutputs(): void {
  if (flushTimer !== null) {
    clearTimeout(flushTimer);
  }
  flushTimer = null;
  for (const [sender, outputs] of pendingOutputs) {
    if (outputs.length > 0) {
      safeSend(sender, "app:output-batch", outputs);
    }
  }
  pendingOutputs.clear();
}
```

Missing clearTimeout causes orphaned timer and potential message loss (Medium Severity)

flushAllAppOutputs sets flushTimer = null without ever calling clearTimeout on the existing timer handle. When the function is called directly from the process "close" handler (not from the timer callback), the previously-scheduled timer is still live and will fire again ~100ms later. At that point flushTimer is null, so any messages enqueued after the close-triggered flush will start a fresh timer. Now two timers are racing: the orphaned one and the new one. The orphaned timer fires first, drains and clears pendingOutputs, and the new timer fires on an already-empty map — silently dropping all messages that were enqueued in the window between the two firings.


Comment on lines +309 to +317 (Contributor) — critical

There is a race condition here that could lead to lost log messages. If enqueueAppOutput is called while flushAllAppOutputs is executing, a new log message could be added to pendingOutputs after the for...of loop has started but before pendingOutputs.clear() is called. This would cause the newly added log to be discarded without being sent.

To fix this, you should copy the pending outputs to a local variable and clear the shared map before iterating and sending the messages. This ensures that any new logs enqueued during the flush are collected for the next batch and are not lost.

Suggested change
```suggestion
function flushAllAppOutputs(): void {
  flushTimer = null;
  const outputsToFlush = new Map(pendingOutputs);
  pendingOutputs.clear();
  for (const [sender, outputs] of outputsToFlush) {
    if (outputs.length > 0) {
      safeSend(sender, "app:output-batch", outputs);
    }
  }
}
```


function listenToProcess({
process: spawnedProcess,
appId,
@@ -322,15 +360,15 @@ function listenToProcess({
const inputRequestPattern = /\s*›\s*\([yY]\/[nN]\)\s*$/;
const isInputRequest = inputRequestPattern.test(message);
if (isInputRequest) {
// Send special input-requested event for interactive prompts
// Send input-requested immediately (not batched) for responsive UX
safeSend(event.sender, "app:output", {
type: "input-requested",
message,
appId,
});
} else {
// Normal stdout handling
safeSend(event.sender, "app:output", {
// Batch normal stdout for efficient IPC
enqueueAppOutput(event.sender, {
type: "stdout",
message,
appId,
@@ -350,7 +388,7 @@ function listenToProcess({
appInfo.originalUrl === originalUrl &&
appInfo.proxyUrl
) {
safeSend(event.sender, "app:output", {
enqueueAppOutput(event.sender, {
type: "stdout",
message: `[dyad-proxy-server]started=[${appInfo.proxyUrl}] original=[${originalUrl}]`,
Contributor comment — 🟡 MEDIUM | performance-feel / latency

Proxy server start message is batched but is latency-sensitive

The [dyad-proxy-server]started= message triggers showing the preview panel in the renderer. By routing it through enqueueAppOutput instead of safeSend, it's now delayed by up to 100ms. This message is functionally similar to input-requested — it drives a UI state transition that the user is actively waiting for (the preview panel appearing after the dev server starts).

While 100ms is usually below perception threshold, this is a latency-sensitive signal. If the process exits quickly after the proxy starts, there's also a theoretical window where the onStarted callback fires after the close handler has already flushed and cleared the map, leaving this message to sit until the next flush timer (which may never come if no other process is running).

💡 Suggestion: Consider sending proxy-server-started messages immediately via safeSend (like input-requested), since they trigger a user-visible UI transition.

appId,
@@ -371,7 +409,7 @@ function listenToProcess({
latestAppInfo.proxyUrl = proxyUrl;
latestAppInfo.originalUrl = originalUrl;
}
safeSend(event.sender, "app:output", {
enqueueAppOutput(event.sender, {
type: "stdout",
message: `[dyad-proxy-server]started=[${proxyUrl}] original=[${originalUrl}]`,
appId,
@@ -405,7 +443,7 @@ function listenToProcess({
appId,
});

safeSend(event.sender, "app:output", {
enqueueAppOutput(event.sender, {
type: "stderr",
message,
appId,
@@ -417,6 +455,8 @@ function listenToProcess({
logger.log(
`App ${appId} (PID: ${spawnedProcess.pid}) process closed with code ${code}, signal ${signal}.`,
);
// Flush any remaining batched output before signaling process exit
flushAllAppOutputs();
Process close flushes all apps' buffered output (Low Severity)

flushAllAppOutputs() is a global operation that drains the pendingOutputs buffer for every WebContents entry. Calling it inside a single app's "close" event handler means that when app A's process exits, it also prematurely flushes any buffered messages still accumulating for app B (and clears app B's queue via pendingOutputs.clear()). Those messages are sent to the renderer before the normal 100ms timer fires, which is harmless on its own, but it also cancels the in-flight timer — the flushTimer is set to null inside flushAllAppOutputs, so any subsequent messages from app B will start a brand-new timer. This breaks the intended batching window for unrelated apps and can cause the timer to be abandoned while pendingOutputs still has entries if the close event races with a setTimeout callback.


removeAppIfCurrentProcess(appId, spawnedProcess);
});

5 changes: 5 additions & 0 deletions src/ipc/types/misc.ts
@@ -383,6 +383,11 @@ export const miscEvents = {
payload: AppOutputSchema,
}),

appOutputBatch: defineEvent({
channel: "app:output-batch",
payload: z.array(AppOutputSchema),
}),

deepLinkReceived: defineEvent({
channel: "deep-link-received",
payload: DeepLinkDataSchema,