
perf: batch and throttle IPC app output to prevent log flooding#3035

Merged
wwwillchen merged 2 commits into dyad-sh:main from wwwillchen-bot:batch-ipc-app-output
Mar 23, 2026

Conversation

@wwwillchen
Collaborator

@wwwillchen wwwillchen commented Mar 17, 2026

Summary

  • Buffer stdout/stderr messages from child processes and flush them every 100ms as a single batched IPC event (app:output-batch), reducing IPC traffic, array allocations, and React re-renders when apps emit high-volume logs
  • Keep input-requested messages on the immediate app:output channel for responsive UX
  • Renderer processes batched events with a single setConsoleEntries state update instead of one per message
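The batching pattern described above can be sketched roughly as follows. This is an illustrative reconstruction, not the PR's actual code: a plain sink callback stands in for Electron's `WebContents`, the `AppOutput` shape is simplified, and it also folds in the `clearTimeout` and snapshot-before-clear hardening that reviewers suggest later in this thread.

```typescript
// Illustrative sketch only: a generic sink replaces Electron's WebContents;
// the flush interval matches the PR's 100ms.
type AppOutput = { type: string; message: string; appId: number; timestamp: number };
type Sink = (batch: AppOutput[]) => void;

const APP_OUTPUT_FLUSH_INTERVAL_MS = 100;
const pendingOutputs = new Map<Sink, AppOutput[]>();
let flushTimer: ReturnType<typeof setTimeout> | null = null;

function flushAllAppOutputs(): void {
  // Cancel any scheduled flush so a manual flush cannot leave a stale timer behind.
  if (flushTimer !== null) {
    clearTimeout(flushTimer);
    flushTimer = null;
  }
  // Snapshot and clear first so messages enqueued mid-flush go to the next batch.
  const toFlush = new Map(pendingOutputs);
  pendingOutputs.clear();
  for (const [sink, outputs] of toFlush) {
    if (outputs.length > 0) {
      sink(outputs);
    }
  }
}

function enqueueAppOutput(sink: Sink, output: AppOutput): void {
  const queue = pendingOutputs.get(sink);
  if (queue) {
    queue.push(output);
  } else {
    pendingOutputs.set(sink, [output]);
  }
  // Lazily schedule one shared flush timer.
  if (flushTimer === null) {
    flushTimer = setTimeout(flushAllAppOutputs, APP_OUTPUT_FLUSH_INTERVAL_MS);
  }
}
```

Keeping `input-requested` messages on the unbatched channel, as the PR does, simply means calling the immediate send path instead of `enqueueAppOutput` for that message type.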

Test plan

  • Run an app that emits high-volume logs (e.g., console.log in a loop) and verify the UI remains responsive
  • Verify app console still shows all log output correctly
  • Verify interactive prompts (y/n) still appear immediately
  • Verify proxy URL detection and preview panel still work
  • Verify HMR updates still trigger iframe refresh
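For the first test-plan item, a trivial load generator is enough. This snippet is hypothetical (the function name, count, and line format are made up for illustration); it just floods stdout quickly to exercise the batching path:

```typescript
// Hypothetical load generator: emits n lines through a writer (console.log by default).
function emitLines(n: number, write: (line: string) => void = console.log): number {
  for (let i = 0; i < n; i++) {
    write(`log line ${i}`);
  }
  return n;
}

// Flood stdout to exercise the batching path:
emitLines(10_000);
```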

πŸ€– Generated with Claude Code



@wwwillchen
Collaborator Author

@BugBot run

@wwwillchen
Collaborator Author

@BugBot run

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses performance issues related to high-volume logging from child processes. By batching stdout/stderr messages and throttling IPC events, it aims to reduce IPC traffic, minimize React re-renders, and maintain UI responsiveness. Input requests are prioritized to ensure interactive prompts remain responsive.

Highlights

  • Batching App Output: Implements batching of stdout/stderr messages from child processes to reduce IPC traffic and improve UI responsiveness.
  • Throttling IPC Events: Introduces a 100ms flush interval for batched IPC events (app:output-batch) to prevent log flooding.
  • Immediate Input Requests: Ensures input-requested messages are sent immediately via the app:output channel for a responsive user experience with interactive prompts.
Changelog
  • src/hooks/useRunApp.ts
    • Modified subscription logic to handle batched app output events.
    • Added logic to process HMR updates and proxy server output.
    • Refactored the app output processing to handle batched events and immediate input requests separately.
  • src/ipc/handlers/app_handlers.ts
    • Implemented an app output batching mechanism to buffer stdout/stderr messages.
    • Introduced a timer to flush batched messages every 100ms.
    • Modified the process output handling to use the batching mechanism for stdout and stderr, while sending input requests immediately.
    • Added a flush before process exit.
  • src/ipc/types/misc.ts
    • Defined a new IPC event (app:output-batch) for transmitting batched app output.
Activity
  • The pull request introduces batching and throttling to prevent log flooding.
  • The pull request includes tests to verify UI responsiveness and log output correctness.
  • The pull request ensures interactive prompts and proxy URL detection still work as expected.
  • The pull request confirms HMR updates trigger iframe refresh.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution. ↩

@greptile-apps
Contributor

greptile-apps bot commented Mar 17, 2026

Greptile Summary

This PR introduces a 100ms batching layer for stdout/stderr IPC traffic between Electron's main process and the renderer, replacing per-message safeSend calls with a shared Map<WebContents, AppOutput[]> buffer flushed on a timer. Latency-sensitive input-requested events are kept on the unbatched app:output channel, and the renderer merges each batch into a single setConsoleEntries state update. The approach is well-structured and the documentation update in rules/electron-ipc.md correctly captures the pattern for future contributors.

Key changes:

  • app_handlers.ts: adds enqueueAppOutput / flushAllAppOutputs module-level batcher; stdout/stderr go through the queue, input-requested bypasses it.
  • useRunApp.ts: splits the single onAppOutput effect into two β€” one for immediate events and one for app:output-batch β€” with processAppOutput extracted as a shared useCallback.
  • misc.ts: registers the new appOutputBatch event contract (z.array(AppOutputSchema)).
  • rules/electron-ipc.md: documents the high-volume batching pattern.

Issues found:

  • enqueueAppOutput calls do not include a timestamp, so the renderer's Date.now() fallback runs at batch-processing time (up to 100ms late), making every message in a flush share nearly the same timestamp instead of reflecting when the process actually emitted it.
  • flushAllAppOutputs() in the close handler drains and clears the entire pendingOutputs map, prematurely flushing buffered outputs for all other concurrently-running app processes rather than only the one whose process just exited.

Confidence Score: 3/5

  • Mostly safe to merge but carries two correctness issues in the batcher that should be addressed before landing.
  • The renderer-side refactor and IPC contract changes are clean. The main process batcher has a missing-timestamp issue (minor but affects log display accuracy) and a cross-app flush issue on process close (can prematurely deliver buffered output for other live processes and disrupt their batching schedule). Additionally, the previously-flagged missing clearTimeout before the manual flush compounds the cross-app problem.
  • src/ipc/handlers/app_handlers.ts β€” specifically the enqueueAppOutput call sites (missing timestamp) and the close handler (flushAllAppOutputs scope).

Important Files Changed

| Filename | Overview |
| --- | --- |
| src/ipc/handlers/app_handlers.ts | Introduces a module-level output batcher (enqueueAppOutput / flushAllAppOutputs). Two issues: enqueued outputs lack a timestamp, causing up to 100ms of timestamp skew in the renderer; and the process-close flush drains all WebContents rather than only the closing sender's queue. |
| src/hooks/useRunApp.ts | Refactored to subscribe to both the immediate app:output channel and the new batched app:output-batch channel; processAppOutput extracted as a shared callback. Logic is sound — input-requested stays on the fast path, batch entries are merged into a single setConsoleEntries call. The NonNullable cast is a bit awkward but correct. |
| src/ipc/types/misc.ts | Cleanly adds the appOutputBatch event definition using z.array(AppOutputSchema). No issues. |
| rules/electron-ipc.md | Documents the new high-volume event batching pattern with clear guidance. No issues. |

Sequence Diagram

```mermaid
sequenceDiagram
    participant CP as Child Process
    participant MH as app_handlers.ts (Main)
    participant BUF as pendingOutputs Map
    participant TIM as flushTimer (100ms)
    participant IPC as Electron IPC
    participant RND as useRunApp (Renderer)
    participant ST as Jotai State

    CP->>MH: stdout data (normal)
    MH->>BUF: enqueueAppOutput(sender, {stdout})
    BUF->>TIM: start timer (if not running)

    CP->>MH: stdout data (input-requested)
    MH->>IPC: safeSend("app:output", {input-requested})
    IPC->>RND: onAppOutput → showInputRequest()

    TIM-->>MH: flushAllAppOutputs() fires
    MH->>BUF: iterate pendingOutputs
    BUF->>IPC: safeSend("app:output-batch", [outputs])
    IPC->>RND: onAppOutputBatch(outputs)
    RND->>RND: processAppOutput() per item
    RND->>ST: setConsoleEntries(prev => [...prev, ...newEntries])

    CP->>MH: process close
    MH->>MH: flushAllAppOutputs() (immediate)
    MH->>IPC: safeSend("app:output-batch", remaining)
    IPC->>RND: onAppOutputBatch(remaining)
```

Comments Outside Diff (1)

  1. src/ipc/handlers/app_handlers.ts, line 454-460 (link)

    P2 flushAllAppOutputs on close prematurely drains other apps' batched output

    flushAllAppOutputs() iterates over every entry in pendingOutputs and clears the entire map. If multiple app processes are concurrently writing to different WebContents instances, closing one of them causes the pending outputs for all other live processes to be flushed immediately β€” bypassing the intended 100ms batching window and resetting the scheduled timer. In a multi-window setup this could produce a burst of IPC messages from unrelated apps every time any single process closes.

    A targeted flush that only drains the outputs belonging to the closing process's sender (and cancels the shared timer only when the map is left empty afterward) would be safer. For example:

    function flushAppOutputsForSender(sender: Electron.WebContents): void {
      const outputs = pendingOutputs.get(sender);
      if (outputs && outputs.length > 0) {
        safeSend(sender, "app:output-batch", outputs);
      }
      pendingOutputs.delete(sender);
      if (pendingOutputs.size === 0 && flushTimer !== null) {
        clearTimeout(flushTimer);
        flushTimer = null;
      }
    }

    Then call flushAppOutputsForSender(event.sender) in the close handler instead of flushAllAppOutputs().

Additional comment (src/ipc/handlers/app_handlers.ts, lines 371-375):
**Missing `timestamp` causes inaccurate log timestamps**

`enqueueAppOutput` is called without a `timestamp` field, so the renderer falls back to `output.timestamp ?? Date.now()` at batch-processing time β€” up to 100ms after the event was actually generated. Every message in a batch will receive nearly the same wall-clock timestamp (the moment the timer fires) rather than the time it was emitted by the process.

Adding `timestamp: Date.now()` at the point of enqueue records the true generation time:

```suggestion
      enqueueAppOutput(event.sender, {
        type: "stdout",
        message,
        appId,
        timestamp: Date.now(),
      });
```

The same fix applies to all other `enqueueAppOutput` call sites (lines ~392, ~414, ~446).


Last reviewed commit: "docs: record session..."

Comment on lines +309 to +317
```typescript
function flushAllAppOutputs(): void {
  flushTimer = null;
  for (const [sender, outputs] of pendingOutputs) {
    if (outputs.length > 0) {
      safeSend(sender, "app:output-batch", outputs);
    }
  }
  pendingOutputs.clear();
}
```

P1 Missing clearTimeout before manual flush on process close

flushAllAppOutputs() sets flushTimer = null but never calls clearTimeout on the handle, so the originally-scheduled timer is still alive. When it fires ~100ms later it calls flushAllAppOutputs() again. In isolation that's benign (the map is already empty), but in a race where a new app process starts within those 100ms:

  1. Close fires β†’ manual flushAllAppOutputs() β†’ flushTimer = null, map cleared.
  2. New process starts β†’ enqueueAppOutput sees flushTimer === null, schedules a new timer.
  3. The old stale timer fires β†’ calls flushAllAppOutputs() β†’ drains the new process's messages earlier than intended.
  4. The new timer fires β†’ no-op (map is empty).

This breaks the 100ms batching guarantee for the new process. The fix is to cancel the pending timer before resetting:

Suggested change:

```diff
 function flushAllAppOutputs(): void {
+  if (flushTimer !== null) {
+    clearTimeout(flushTimer);
+  }
   flushTimer = null;
   for (const [sender, outputs] of pendingOutputs) {
     if (outputs.length > 0) {
       safeSend(sender, "app:output-batch", outputs);
     }
   }
   pendingOutputs.clear();
 }
```

```typescript
        `App ${appId} (PID: ${spawnedProcess.pid}) process closed with code ${code}, signal ${signal}.`,
      );
      // Flush any remaining batched output before signaling process exit
      flushAllAppOutputs();
```

Process close flushes all apps' buffered output

Low Severity

flushAllAppOutputs() is a global operation that drains the pendingOutputs buffer for every WebContents entry. Calling it inside a single app's "close" event handler means that when app A's process exits, it also prematurely flushes any buffered messages still accumulating for app B (and clears app B's queue via pendingOutputs.clear()). Those messages are sent to the renderer before the normal 100ms timer fires, which is harmless on its own, but it also cancels the in-flight timer β€” the flushTimer is set to null inside flushAllAppOutputs, so any subsequent messages from app B will start a brand-new timer. This breaks the intended batching window for unrelated apps and can cause the timer to be abandoned while pendingOutputs still has entries if the close event races with a setTimeout callback.


```typescript
    }
  }
  pendingOutputs.clear();
}
```

Missing clearTimeout causes orphaned timer and potential message loss

Medium Severity

flushAllAppOutputs sets flushTimer = null without ever calling clearTimeout on the existing timer handle. When the function is called directly from the process "close" handler (not from the timer callback), the previously-scheduled timer is still live and will fire again ~100ms later. At that point flushTimer is null, so any messages enqueued after the close-triggered flush will start a fresh timer. Now two timers are racing: the orphaned one and the new one. The orphaned timer fires first, drains and clears pendingOutputs, and the new timer fires on an already-empty map β€” silently dropping all messages that were enqueued in the window between the two firings.

Additional Locations (1)


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a performance optimization by batching and throttling IPC messages for application output, which is a great way to prevent log flooding and improve UI responsiveness. The implementation correctly separates immediate messages (like input requests) from high-volume logs, which are now batched.

I've identified a critical race condition in the new log flushing logic that could lead to lost messages under certain circumstances. I've also provided a suggestion to improve the readability and type safety of the code that processes these batches on the renderer side. With these fixes, the implementation will be more robust.

Comment on lines +309 to +317
```typescript
function flushAllAppOutputs(): void {
  flushTimer = null;
  for (const [sender, outputs] of pendingOutputs) {
    if (outputs.length > 0) {
      safeSend(sender, "app:output-batch", outputs);
    }
  }
  pendingOutputs.clear();
}
```

critical

There is a race condition here that could lead to lost log messages. If enqueueAppOutput is called while flushAllAppOutputs is executing, a new log message could be added to pendingOutputs after the for...of loop has started but before pendingOutputs.clear() is called. This would cause the newly added log to be discarded without being sent.

To fix this, you should copy the pending outputs to a local variable and clear the shared map before iterating and sending the messages. This ensures that any new logs enqueued during the flush are collected for the next batch and are not lost.

Suggested change:

```diff
 function flushAllAppOutputs(): void {
   flushTimer = null;
-  for (const [sender, outputs] of pendingOutputs) {
+  const outputsToFlush = new Map(pendingOutputs);
+  pendingOutputs.clear();
+  for (const [sender, outputs] of outputsToFlush) {
     if (outputs.length > 0) {
       safeSend(sender, "app:output-batch", outputs);
     }
   }
-  pendingOutputs.clear();
 }
```

Comment on lines +126 to +140
```typescript
      const newEntries: ReturnType<typeof processAppOutput>[] = [];
      for (const output of outputs) {
        if (appId !== null && output.appId === appId) {
          const entry = processAppOutput(output);
          if (entry) {
            newEntries.push(entry);
          }
        }
        processAppOutput(output);
      }

      if (newEntries.length > 0) {
        setConsoleEntries((prev) => [
          ...prev,
          ...(newEntries as NonNullable<(typeof newEntries)[number]>[]),
        ]);
```

medium

To improve readability and type safety, we can explicitly type newEntries as ConsoleEntry[]. This allows us to remove the verbose type assertion when updating the state, making the code cleaner and more maintainable.

To do this, you'll also need to import the ConsoleEntry type at the top of the file:

```typescript
import type { ConsoleEntry } from "@/ipc/types";
```

Suggested change:

```diff
-      const newEntries: ReturnType<typeof processAppOutput>[] = [];
+      const newEntries: ConsoleEntry[] = [];
       for (const output of outputs) {
         if (appId !== null && output.appId === appId) {
           const entry = processAppOutput(output);
           if (entry) {
             newEntries.push(entry);
           }
         }
-        processAppOutput(output);
       }
       if (newEntries.length > 0) {
         setConsoleEntries((prev) => [
           ...prev,
-          ...(newEntries as NonNullable<(typeof newEntries)[number]>[]),
+          ...newEntries,
         ]);
+      }
```


@cursor cursor bot left a comment


βœ… Bugbot reviewed your changes and found no new issues!

Comment @cursor review or bugbot run to trigger another review on this PR


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


πŸ’‘ Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: bef5393b66

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with πŸ‘.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +309 to +311
```typescript
function flushAllAppOutputs(): void {
  flushTimer = null;
  for (const [sender, outputs] of pendingOutputs) {
```

P2: Cancel pending flush timeout when flushing early

flushAllAppOutputs() is called from the process close handler to force-deliver buffered logs, but this function only sets flushTimer = null and never clears the already scheduled timeout. If any new output is enqueued before that stale timeout fires, the old callback will flush the new queue early, shrinking the intended 100ms batching window and reintroducing extra IPC bursts/rerenders under restart or rapid process-exit scenarios.



@cursor cursor bot left a comment


βœ… Bugbot reviewed your changes and found no new issues!

Comment @cursor review or bugbot run to trigger another review on this PR

@dyad-assistant
Contributor

πŸ” Dyadbot Code Review Summary

Verdict: βœ… YES - Ready to merge

Reviewed by 3 independent agents: Correctness Expert, Code Health Expert, UX Wizard.

Issues Summary

| Severity | File | Issue |
| --- | --- | --- |
| 🟡 MEDIUM | src/ipc/handlers/app_handlers.ts | Proxy server start message is batched but is latency-sensitive |
🟒 Low Priority Notes (1 item)
  • Convoluted NonNullable type cast β€” src/hooks/useRunApp.ts:131-135 β€” The as NonNullable<(typeof newEntries)[number]>[] cast could be avoided by declaring newEntries with a non-nullable type directly or using a type predicate filter.
🚫 Dropped False Positives (5 items)
  • flushAllAppOutputs flushes ALL senders on process close β€” Dropped: Flushing early is harmless (sends messages sooner than the timer would). No data loss or corruption.
  • Destroyed WebContents retained in pendingOutputs Map β€” Dropped: safeSend guards against sending to destroyed targets, and pendingOutputs.clear() on each flush cycle cleans up stale entries. Window is at most 100ms.
  • Misleading comment about input-requested channel β€” Dropped: Comment is accurate β€” currently only input-requested is sent on the unbatched channel.
  • Single global flush timer shared across senders β€” Dropped: Multi-window usage is uncommon; acceptable trade-off.
  • Batched output for non-selected apps discarded β€” Dropped: Same filtering behavior as before batching; logs are preserved in main process log store.
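The type-predicate filter mentioned in the low-priority note above could look like this sketch (`ConsoleEntry`'s shape is assumed here for illustration, not taken from the app's real type):

```typescript
// Assumed shape for illustration only.
type ConsoleEntry = { message: string };

// User-defined type guard: narrows T | null | undefined to T.
function isDefined<T>(value: T | null | undefined): value is T {
  return value != null;
}

const maybeEntries: (ConsoleEntry | undefined)[] = [
  { message: "first" },
  undefined,
  { message: "second" },
];

// filter(isDefined) narrows the result to ConsoleEntry[] with no cast needed.
const newEntries: ConsoleEntry[] = maybeEntries.filter(isDefined);
```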

Generated by Dyadbot multi-agent code review


@dyad-assistant dyad-assistant bot left a comment


Multi-agent review: 1 issue found

```diff
-          safeSend(event.sender, "app:output", {
+          enqueueAppOutput(event.sender, {
             type: "stdout",
             message: `[dyad-proxy-server]started=[${appInfo.proxyUrl}] original=[${originalUrl}]`,
```

🟑 MEDIUM | performance-feel / latency

Proxy server start message is batched but is latency-sensitive

The [dyad-proxy-server]started= message triggers showing the preview panel in the renderer. By routing it through enqueueAppOutput instead of safeSend, it's now delayed by up to 100ms. This message is functionally similar to input-requested β€” it drives a UI state transition that the user is actively waiting for (the preview panel appearing after the dev server starts).

While 100ms is usually below perception threshold, this is a latency-sensitive signal. If the process exits quickly after the proxy starts, there's also a theoretical window where the onStarted callback fires after the close handler has already flushed and cleared the map, leaving this message to sit until the next flush timer (which may never come if no other process is running).

πŸ’‘ Suggestion: Consider sending proxy-server-started messages immediately via safeSend (like input-requested), since they trigger a user-visible UI transition.

cubic-dev-ai[bot]

This comment was marked as resolved.


@devin-ai-integration devin-ai-integration bot left a comment


Devin Review found 1 potential issue.

View 4 additional findings in Devin Review.


Comment on lines +309 to +310
```typescript
function flushAllAppOutputs(): void {
  flushTimer = null;
```

🟑 Missing clearTimeout in flushAllAppOutputs causes stale timer to prematurely flush other processes' batched data

flushAllAppOutputs() sets flushTimer = null at src/ipc/handlers/app_handlers.ts:310 without calling clearTimeout(flushTimer) first. When this function is called manually from the process close handler (line 459), the previously scheduled setTimeout (created at line 305) is still pending and will fire later. This stale timer calls flushAllAppOutputs() again, which flushes any data newly enqueued by other running processes β€” breaking their 100ms batching window and sending incomplete batches early. It also sets flushTimer = null even though a new timer may have been created by enqueueAppOutput in the interim, silently orphaning that reference.

Suggested change:

```diff
 function flushAllAppOutputs(): void {
+  if (flushTimer) {
+    clearTimeout(flushTimer);
+  }
   flushTimer = null;
```

@github-actions bot added the `needs-human:review-issue` label ("ai agent flagged an issue that requires human review") on Mar 17, 2026
wwwillchen and others added 2 commits March 17, 2026 21:47
Buffer stdout/stderr messages from child processes and flush them
every 100ms as a single batched IPC event, reducing IPC traffic,
array allocations, and React re-renders when apps emit high-volume logs.

- Add `app:output-batch` event for batched log delivery
- Add enqueueAppOutput/flushAllAppOutputs batcher in app_handlers
- Keep `input-requested` messages immediate for responsive UX
- Update renderer to process batches with a single state update
- Flush remaining output on process exit

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
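The batching flow those bullets describe can be sketched as follows. This is a minimal, self-contained sketch rather than the PR's actual implementation: the Electron WebContents sender is simplified to a plain callback, and the `AppOutput` shape is an assumption. It also includes the `clearTimeout` guard that reviewers asked for on manual flushes.

```typescript
// Minimal sketch of the stdout/stderr batcher. Sender is a plain callback
// standing in for Electron's WebContents.send; AppOutput's shape is assumed.
type AppOutput = { appId: number; type: "stdout" | "stderr"; message: string };
type Sender = (channel: string, payload: AppOutput[]) => void;

const APP_OUTPUT_FLUSH_INTERVAL_MS = 100;
const pendingOutputs = new Map<Sender, AppOutput[]>();
let flushTimer: ReturnType<typeof setTimeout> | null = null;

function enqueueAppOutput(sender: Sender, output: AppOutput): void {
  const queue = pendingOutputs.get(sender) ?? [];
  queue.push(output);
  pendingOutputs.set(sender, queue);
  // One shared timer: the first message after a quiet period schedules the
  // flush, and later enqueues within the window ride the same timer.
  if (!flushTimer) {
    flushTimer = setTimeout(flushAllAppOutputs, APP_OUTPUT_FLUSH_INTERVAL_MS);
  }
}

function flushAllAppOutputs(): void {
  // Cancel any pending timer first, so a manual flush (e.g. on process
  // exit) cannot leave a stale timer that fires again later.
  if (flushTimer) {
    clearTimeout(flushTimer);
    flushTimer = null;
  }
  for (const [sender, outputs] of pendingOutputs) {
    if (outputs.length > 0) {
      sender("app:output-batch", outputs);
    }
  }
  pendingOutputs.clear();
}
```

With this shape, everything emitted within a 100ms window reaches the renderer as a single `app:output-batch` payload instead of one IPC event per line.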
@wwwillchen
Collaborator Author

@BugBot run


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


πŸ’‘ Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 354f15855f


Comment on lines +304 to +305
```typescript
if (!flushTimer) {
  flushTimer = setTimeout(flushAllAppOutputs, APP_OUTPUT_FLUSH_INTERVAL_MS);
```

P2: Flush queued output before it can span an app switch

Because enqueueAppOutput() now delays every stdout/stderr event by 100ms, logs emitted while app A is selected can be delivered only after the user has switched to app B. At that point src/app/layout.tsx:102-106 has already cleared appConsoleEntriesAtom, and src/hooks/useRunApp.ts:125-133 drops any batched entries whose output.appId !== appId, so those just-produced lines disappear from the console permanently. Before batching, the same output was delivered immediately while app A was still selected.
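The renderer-side guard this finding points at can be reduced to a simple filter. A hedged sketch, not the actual useRunApp.ts code: the `AppOutput` shape and the function name are assumptions.

```typescript
// Illustration of the renderer-side drop the finding describes: batched
// entries whose appId no longer matches the selected app are discarded.
type AppOutput = { appId: number; type: string; message: string };

function filterBatchForApp(batch: AppOutput[], selectedAppId: number): AppOutput[] {
  // Mirrors the guard in useRunApp.ts: only entries for the currently
  // selected app survive; late-arriving entries for a previous app vanish.
  return batch.filter((output) => output.appId === selectedAppId);
}
```

Because a batch can arrive up to 100ms after the user switches apps, entries buffered for app A while app B is selected fall through this filter and are lost.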


Comment on lines +309 to +313
```typescript
function flushAllAppOutputs(): void {
  flushTimer = null;
  for (const [sender, outputs] of pendingOutputs) {
    if (outputs.length > 0) {
      safeSend(sender, "app:output-batch", outputs);
```

P2: Drop buffered stdout/stderr when the user clears logs

If the user clicks Clear logs while an app is still producing output, src/components/preview_panel/Console.tsx:106-112 clears the backend store and UI immediately, but any stdout/stderr already sitting in pendingOutputs is still emitted here on the next timer tick or process-exit flush. That makes previously cleared lines reappear in the console, which is a regression introduced by buffering every output for up to 100ms.
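One possible mitigation, sketched under assumptions: `clearPendingOutputsForApp` is a hypothetical helper (not part of this PR), and the buffer is keyed by a plain string here rather than a WebContents. The idea is to drop any buffered entries for an app at the moment its logs are cleared, so they cannot reappear on the next flush.

```typescript
// Hypothetical helper: when the user clears logs, also purge buffered
// entries for that app so the next flush cannot resurrect cleared lines.
type AppOutput = { appId: number; type: string; message: string };
const pendingOutputs = new Map<string, AppOutput[]>();

function clearPendingOutputsForApp(appId: number): void {
  for (const [sender, outputs] of pendingOutputs) {
    pendingOutputs.set(
      sender,
      // Keep only entries belonging to other apps.
      outputs.filter((output) => output.appId !== appId),
    );
  }
}
```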


@dyad-assistant
Contributor

πŸ” Dyadbot Code Review Summary

Verdict: βœ… YES - Ready to merge

Reviewed by 3 independent agents: Correctness Expert, Code Health Expert, UX Wizard.

Issues Summary

No new HIGH or MEDIUM issues found beyond what existing reviewers have already flagged.

The key issues identified by other reviewers that should be addressed before merging:

  • clearTimeout before manual flush (flagged by 5 reviewers) β€” flushAllAppOutputs() on process close should clearTimeout(flushTimer) first to prevent the orphaned timer from double-flushing
  • Proxy server start message latency β€” consider sending [dyad-proxy-server]started= on the immediate channel since it's low-frequency and latency-sensitive
  • Flush sender's buffer before input-requested β€” ensures prompt context arrives before the prompt itself
🟒 Low Priority Notes (1 item)
  • Immediate channel handler has dead code path β€” src/hooks/useRunApp.ts:112-116: The onAppOutput handler calls processAppOutput and adds the entry to console state, but currently only input-requested messages arrive on this channel, and those return null from processAppOutput. The entry-adding code never executes. Consider either filtering explicitly by output.type === 'input-requested' or updating the comment to clarify the intent.
🚫 Dropped False Positives (7 items)
  • Global singleton flush on close β€” Already covered by cursor[bot] at line 459
  • Missing flush in error handler β€” Node.js close event always fires after error, so the close handler's flush covers this case
  • Stale appId drops batched logs on app switch β€” Console is cleared on app switch; same behavior existed pre-batch with a narrower window
  • Duplicated filtering logic β€” Minimal duplication justified by single vs. array cardinality
  • Convoluted type cast β€” Already covered by gemini at line 140
  • Multiple HMR refreshes from batch β€” React 18 automatic batching collapses multiple setPreviewPanelKey calls into one re-render
  • Proxy URL detection delay β€” Already covered by dyad-assistant at line 393

Generated by Dyadbot multi-agent code review
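The low-priority note about the immediate-channel handler suggests filtering explicitly by message type. A hedged sketch of that shape (the handler signature and `promptUser` callback are assumptions, not the actual useRunApp.ts code):

```typescript
// Sketch of an explicit immediate-channel handler: only input-requested
// messages are expected here now that ordinary output is batched.
type AppOutput = { appId: number; type: string; message: string };

function onImmediateAppOutput(
  output: AppOutput,
  promptUser: (message: string) => void,
): boolean {
  if (output.type !== "input-requested") {
    // Ordinary stdout/stderr arrives on app:output-batch; ignore it here
    // instead of leaving a dead code path that silently never runs.
    return false;
  }
  promptUser(output.message);
  return true;
}
```

Making the filter explicit documents the channel's contract in the code itself, so the dead entry-adding path the note describes can be deleted.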


@cursor cursor bot left a comment


βœ… Bugbot reviewed your changes and found no new issues!

2 issues from previous reviews remain unresolved.


@github-actions
Contributor

🎭 Playwright Test Results

❌ Some tests failed

| OS | Passed | Failed | Flaky | Skipped |
| --- | --- | --- | --- | --- |
| 🍎 macOS | 258 | 1 | 8 | 6 |

Summary: 258 passed, 1 failed, 8 flaky, 6 skipped

Failed Tests

🍎 macOS

  • select_component.spec.ts > select component next.js
    • Error: expect(locator).toBeVisible() failed

πŸ“‹ Re-run Failing Tests (macOS)

Copy and paste to re-run all failing spec files locally:

```shell
npm run e2e \
  e2e-tests/select_component.spec.ts
```

⚠️ Flaky Tests

🍎 macOS

  • annotator.spec.ts > annotator - capture and submit screenshot (passed after 1 retry)
  • context_manage.spec.ts > manage context - exclude paths with smart context (passed after 1 retry)
  • setup_flow.spec.ts > Setup Flow > setup banner shows correct state when node.js is installed (passed after 1 retry)
  • setup.spec.ts > setup ai provider (passed after 1 retry)
  • telemetry.spec.ts > telemetry - reject (passed after 1 retry)
  • template-create-nextjs.spec.ts > create next.js app (passed after 1 retry)
  • undo.spec.ts > undo (passed after 1 retry)
  • visual_editing.spec.ts > edit style of one selected component (passed after 1 retry)

πŸ“Š View full report

@wwwillchen wwwillchen merged commit 4d7fa15 into dyad-sh:main Mar 23, 2026
10 of 12 checks passed

Labels

needs-human:review-issue ai agent flagged an issue that requires human review
