perf: batch and throttle IPC app output to prevent log flooding #3035

wwwillchen merged 2 commits into dyad-sh:main
Conversation

@BugBot run
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request addresses performance issues related to high-volume logging from child processes. By batching stdout/stderr messages and throttling IPC events, it aims to reduce IPC traffic, minimize React re-renders, and maintain UI responsiveness. Input requests are prioritized to ensure interactive prompts remain responsive.

Highlights
Changelog
Activity
Greptile Summary

This PR introduces a 100ms batching layer for stdout/stderr IPC traffic between Electron's main process and the renderer, replacing per-message sends.

Key changes:
Issues found:
Confidence Score: 3/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant CP as Child Process
    participant MH as app_handlers.ts (Main)
    participant BUF as pendingOutputs Map
    participant TIM as flushTimer (100ms)
    participant IPC as Electron IPC
    participant RND as useRunApp (Renderer)
    participant ST as Jotai State
    CP->>MH: stdout data (normal)
    MH->>BUF: enqueueAppOutput(sender, {stdout})
    BUF->>TIM: start timer (if not running)
    CP->>MH: stdout data (input-requested)
    MH->>IPC: safeSend("app:output", {input-requested})
    IPC->>RND: onAppOutput → showInputRequest()
    TIM-->>MH: flushAllAppOutputs() fires
    MH->>BUF: iterate pendingOutputs
    BUF->>IPC: safeSend("app:output-batch", [outputs])
    IPC->>RND: onAppOutputBatch(outputs)
    RND->>RND: processAppOutput() per item
    RND->>ST: setConsoleEntries(prev => [...prev, ...newEntries])
    CP->>MH: process close
    MH->>MH: flushAllAppOutputs() (immediate)
    MH->>IPC: safeSend("app:output-batch", remaining)
    IPC->>RND: onAppOutputBatch(remaining)
```
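The flow in the diagram can be sketched as a standalone batcher. This is an illustration, not the PR's actual `app_handlers.ts`: the function names mirror `enqueueAppOutput`/`flushAllAppOutputs`, a plain `send` stands in for Electron's `safeSend`, and the `clearTimeout` and snapshot-before-send hardening discussed in the review comments is folded in.

```typescript
// Standalone sketch of the 100ms output batcher (illustrative names;
// send() stands in for Electron's safeSend(webContents, channel, payload)).
type Output = { type: "stdout" | "stderr"; message: string };

const APP_OUTPUT_FLUSH_INTERVAL_MS = 100;
const pendingOutputs = new Map<string, Output[]>();
let flushTimer: ReturnType<typeof setTimeout> | null = null;

// Captured sends, so the sketch is runnable without Electron.
const sent: Array<{ channel: string; outputs: Output[] }> = [];
function send(senderId: string, channel: string, outputs: Output[]): void {
  sent.push({ channel, outputs });
}

function flushAllAppOutputs(): void {
  // Cancel any scheduled flush so a stale timer cannot fire later.
  if (flushTimer !== null) {
    clearTimeout(flushTimer);
    flushTimer = null;
  }
  // Snapshot and clear before sending, so messages enqueued while
  // sending join the next batch instead of being lost to clear().
  const toFlush = new Map(pendingOutputs);
  pendingOutputs.clear();
  for (const [senderId, outputs] of toFlush) {
    if (outputs.length > 0) {
      send(senderId, "app:output-batch", outputs);
    }
  }
}

function enqueueAppOutput(senderId: string, output: Output): void {
  let queue = pendingOutputs.get(senderId);
  if (!queue) {
    queue = [];
    pendingOutputs.set(senderId, queue);
  }
  queue.push(output);
  // The first message in an empty window starts the 100ms flush timer.
  if (flushTimer === null) {
    flushTimer = setTimeout(flushAllAppOutputs, APP_OUTPUT_FLUSH_INTERVAL_MS);
  }
}
```

Each sender (WebContents) keeps its own queue, but one shared timer drains them all, which is the design property several of the reviews below probe.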
```typescript
function flushAllAppOutputs(): void {
  flushTimer = null;
  for (const [sender, outputs] of pendingOutputs) {
    if (outputs.length > 0) {
      safeSend(sender, "app:output-batch", outputs);
    }
  }
  pendingOutputs.clear();
}
```
**Missing `clearTimeout` before manual flush on process close**

`flushAllAppOutputs()` sets `flushTimer = null` but never calls `clearTimeout` on the handle, so the originally-scheduled timer is still alive. When it fires ~100ms later it calls `flushAllAppOutputs()` again. In isolation that's benign (the map is already empty), but in a race where a **new** app process starts within those 100ms:

1. Close fires → manual `flushAllAppOutputs()` → `flushTimer = null`, map cleared.
2. New process starts → `enqueueAppOutput` sees `flushTimer === null`, schedules a **new** timer.
3. The old stale timer fires → calls `flushAllAppOutputs()` → drains the new process's messages earlier than intended.
4. The new timer fires → no-op (map is empty).

This breaks the 100ms batching guarantee for the new process. The fix is to cancel the pending timer before resetting:
Suggested change:

```typescript
function flushAllAppOutputs(): void {
  if (flushTimer !== null) {
    clearTimeout(flushTimer);
  }
  flushTimer = null;
  for (const [sender, outputs] of pendingOutputs) {
    if (outputs.length > 0) {
      safeSend(sender, "app:output-batch", outputs);
    }
  }
  pendingOutputs.clear();
}
```
Prompt To Fix With AI

This is a comment left during a code review.
Path: src/ipc/handlers/app_handlers.ts
Line: 309-317
Comment: Missing `clearTimeout` before manual flush on process close (full comment text above).
How can I resolve this? If you propose a fix, please make it concise.

```typescript
        `App ${appId} (PID: ${spawnedProcess.pid}) process closed with code ${code}, signal ${signal}.`,
      );
      // Flush any remaining batched output before signaling process exit
      flushAllAppOutputs();
```
**Process close flushes all apps' buffered output** (Low Severity)

flushAllAppOutputs() is a global operation that drains the pendingOutputs buffer for every WebContents entry. Calling it inside a single app's "close" event handler means that when app A's process exits, it also prematurely flushes any buffered messages still accumulating for app B (and clears app B's queue via pendingOutputs.clear()). Those messages are sent to the renderer before the normal 100ms timer fires, which is harmless on its own, but it also cancels the in-flight timer: the flushTimer is set to null inside flushAllAppOutputs, so any subsequent messages from app B will start a brand-new timer. This breaks the intended batching window for unrelated apps and can cause the timer to be abandoned while pendingOutputs still has entries if the close event races with a setTimeout callback.
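One way to address this concern is to scope the close-time flush to the exiting app's sender. This is a hypothetical sketch, not code from the PR: `flushAppOutputsFor` does not exist in the diff, and the `sent` array stands in for `safeSend`.

```typescript
// Hypothetical per-sender flush: only the closing app's queue is drained,
// so app B's batching window is unaffected by app A's exit.
type Output = { type: "stdout" | "stderr"; message: string };

const pendingOutputs = new Map<string, Output[]>();
const sent: Array<{ senderId: string; outputs: Output[] }> = [];

function flushAppOutputsFor(senderId: string): void {
  const outputs = pendingOutputs.get(senderId);
  pendingOutputs.delete(senderId); // other senders' queues stay buffered
  if (outputs && outputs.length > 0) {
    // Stands in for safeSend(sender, "app:output-batch", outputs).
    sent.push({ senderId, outputs });
  }
}
```

The shared timer would be left running so that other apps' queues still flush on schedule.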
| } | ||
| } | ||
| pendingOutputs.clear(); | ||
| } |
**Missing clearTimeout causes orphaned timer and potential message loss** (Medium Severity)

flushAllAppOutputs sets flushTimer = null without ever calling clearTimeout on the existing timer handle. When the function is called directly from the process "close" handler (not from the timer callback), the previously-scheduled timer is still live and will fire again ~100ms later. At that point flushTimer is null, so any messages enqueued after the close-triggered flush will start a fresh timer. Now two timers are racing: the orphaned one and the new one. The orphaned timer fires first, drains and clears pendingOutputs, and the new timer fires on an already-empty map, silently dropping all messages that were enqueued in the window between the two firings.

Additional Locations (1)
Code Review
This pull request introduces a performance optimization by batching and throttling IPC messages for application output, which is a great way to prevent log flooding and improve UI responsiveness. The implementation correctly separates immediate messages (like input requests) from high-volume logs, which are now batched.
I've identified a critical race condition in the new log flushing logic that could lead to lost messages under certain circumstances. I've also provided a suggestion to improve the readability and type safety of the code that processes these batches on the renderer side. With these fixes, the implementation will be more robust.
```typescript
function flushAllAppOutputs(): void {
  flushTimer = null;
  for (const [sender, outputs] of pendingOutputs) {
    if (outputs.length > 0) {
      safeSend(sender, "app:output-batch", outputs);
    }
  }
  pendingOutputs.clear();
}
```
There is a race condition here that could lead to lost log messages. If enqueueAppOutput is called while flushAllAppOutputs is executing, a new log message could be added to pendingOutputs after the for...of loop has started but before pendingOutputs.clear() is called. This would cause the newly added log to be discarded without being sent.
To fix this, you should copy the pending outputs to a local variable and clear the shared map before iterating and sending the messages. This ensures that any new logs enqueued during the flush are collected for the next batch and are not lost.
Suggested change:

```typescript
function flushAllAppOutputs(): void {
  flushTimer = null;
  const outputsToFlush = new Map(pendingOutputs);
  pendingOutputs.clear();
  for (const [sender, outputs] of outputsToFlush) {
    if (outputs.length > 0) {
      safeSend(sender, "app:output-batch", outputs);
    }
  }
}
```
```typescript
const newEntries: ReturnType<typeof processAppOutput>[] = [];
for (const output of outputs) {
  if (appId !== null && output.appId === appId) {
    const entry = processAppOutput(output);
    if (entry) {
      newEntries.push(entry);
    }
  }
  processAppOutput(output);
}

if (newEntries.length > 0) {
  setConsoleEntries((prev) => [
    ...prev,
    ...(newEntries as NonNullable<(typeof newEntries)[number]>[]),
  ]);
```
To improve readability and type safety, we can explicitly type newEntries as ConsoleEntry[]. This allows us to remove the verbose type assertion when updating the state, making the code cleaner and more maintainable.

To do this, you'll also need to import the ConsoleEntry type at the top of the file:

```typescript
import type { ConsoleEntry } from "@/ipc/types";
```

Suggested change:

```typescript
const newEntries: ConsoleEntry[] = [];
for (const output of outputs) {
  if (appId !== null && output.appId === appId) {
    const entry = processAppOutput(output);
    if (entry) {
      newEntries.push(entry);
    }
  }
}
if (newEntries.length > 0) {
  setConsoleEntries((prev) => [
    ...prev,
    ...newEntries,
  ]);
}
```
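The suggested renderer logic can be exercised as a standalone sketch. The `ConsoleEntry` shape, the `processAppOutput` body, and the `setConsoleEntries` setter below are invented stand-ins for the real implementations in `@/ipc/types` and `useRunApp`; only the control flow mirrors the suggestion.

```typescript
// Sketch of batch processing with one state update per batch.
type AppOutput = { appId: number; type: "stdout" | "stderr"; message: string };
type ConsoleEntry = { appId: number; text: string };

// Stand-in for the real processAppOutput; returns null for empty lines.
function processAppOutput(output: AppOutput): ConsoleEntry | null {
  return output.message ? { appId: output.appId, text: output.message } : null;
}

let consoleEntries: ConsoleEntry[] = [];
// Stand-in for the Jotai setter: applied once per batch, not once per message.
function setConsoleEntries(
  update: (prev: ConsoleEntry[]) => ConsoleEntry[],
): void {
  consoleEntries = update(consoleEntries);
}

function onAppOutputBatch(outputs: AppOutput[], appId: number | null): void {
  const newEntries: ConsoleEntry[] = [];
  for (const output of outputs) {
    // Drop entries for other apps; keep only the selected app's lines.
    if (appId !== null && output.appId === appId) {
      const entry = processAppOutput(output);
      if (entry) {
        newEntries.push(entry);
      }
    }
  }
  if (newEntries.length > 0) {
    setConsoleEntries((prev) => [...prev, ...newEntries]);
  }
}
```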
💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: bef5393b66

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:

- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```typescript
function flushAllAppOutputs(): void {
  flushTimer = null;
  for (const [sender, outputs] of pendingOutputs) {
```
**Cancel pending flush timeout when flushing early**

flushAllAppOutputs() is called from the process close handler to force-deliver buffered logs, but this function only sets flushTimer = null and never clears the already scheduled timeout. If any new output is enqueued before that stale timeout fires, the old callback will flush the new queue early, shrinking the intended 100ms batching window and reintroducing extra IPC bursts/rerenders under restart or rapid process-exit scenarios.
Dyadbot Code Review Summary

Verdict: ✅ YES - Ready to merge

Reviewed by 3 independent agents: Correctness Expert, Code Health Expert, UX Wizard.

Issues Summary

🟢 Low Priority Notes (1 item)

🚫 Dropped False Positives (5 items)

Generated by Dyadbot multi-agent code review
```diff
-        safeSend(event.sender, "app:output", {
+        enqueueAppOutput(event.sender, {
           type: "stdout",
           message: `[dyad-proxy-server]started=[${appInfo.proxyUrl}] original=[${originalUrl}]`,
```
💡 MEDIUM | performance-feel / latency

**Proxy server start message is batched but is latency-sensitive**

The `[dyad-proxy-server]started=` message triggers showing the preview panel in the renderer. By routing it through enqueueAppOutput instead of safeSend, it's now delayed by up to 100ms. This message is functionally similar to input-requested: it drives a UI state transition that the user is actively waiting for (the preview panel appearing after the dev server starts).

While 100ms is usually below the perception threshold, this is a latency-sensitive signal. If the process exits quickly after the proxy starts, there's also a theoretical window where the onStarted callback fires after the close handler has already flushed and cleared the map, leaving this message to sit until the next flush timer (which may never come if no other process is running).

💡 Suggestion: Consider sending proxy-server-started messages immediately via safeSend (like input-requested), since they trigger a user-visible UI transition.
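That suggestion could be sketched as a routing predicate. This is hypothetical: the `isImmediate` helper and its prefix check are invented for illustration (the PR only special-cases `input-requested`), and `sendNow`/`enqueue` stand in for `safeSend`/`enqueueAppOutput`.

```typescript
// Hypothetical routing: latency-sensitive messages bypass the batcher.
type AppOutput = {
  type: "stdout" | "stderr" | "input-requested";
  message: string;
};

function isImmediate(output: AppOutput): boolean {
  return (
    output.type === "input-requested" ||
    output.message.startsWith("[dyad-proxy-server]started=")
  );
}

function route(
  output: AppOutput,
  sendNow: (o: AppOutput) => void, // stands in for safeSend("app:output", ...)
  enqueue: (o: AppOutput) => void, // stands in for enqueueAppOutput(...)
): void {
  if (isImmediate(output)) {
    sendNow(output); // delivered without the 100ms delay
  } else {
    enqueue(output); // flushed on the batching timer
  }
}
```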
```typescript
function flushAllAppOutputs(): void {
  flushTimer = null;
```

💡 **Missing clearTimeout in flushAllAppOutputs causes stale timer to prematurely flush other processes' batched data**

flushAllAppOutputs() sets flushTimer = null at src/ipc/handlers/app_handlers.ts:310 without calling clearTimeout(flushTimer) first. When this function is called manually from the process close handler (line 459), the previously scheduled setTimeout (created at line 305) is still pending and will fire later. This stale timer calls flushAllAppOutputs() again, which flushes any data newly enqueued by other running processes, breaking their 100ms batching window and sending incomplete batches early. It also sets flushTimer = null even though a new timer may have been created by enqueueAppOutput in the interim, silently orphaning that reference.

Suggested change:

```typescript
function flushAllAppOutputs(): void {
  if (flushTimer) {
    clearTimeout(flushTimer);
  }
  flushTimer = null;
```
Buffer stdout/stderr messages from child processes and flush them every 100ms as a single batched IPC event, reducing IPC traffic, array allocations, and React re-renders when apps emit high-volume logs.

- Add `app:output-batch` event for batched log delivery
- Add enqueueAppOutput/flushAllAppOutputs batcher in app_handlers
- Keep `input-requested` messages immediate for responsive UX
- Update renderer to process batches with a single state update
- Flush remaining output on process exit

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
f940ab8 to 354f158 (Compare)
@BugBot run
💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 354f15855f
```typescript
if (!flushTimer) {
  flushTimer = setTimeout(flushAllAppOutputs, APP_OUTPUT_FLUSH_INTERVAL_MS);
```
**Flush queued output before it can span an app switch**

Because enqueueAppOutput() now delays every stdout/stderr event by 100ms, logs emitted while app A is selected can be delivered only after the user has switched to app B. At that point src/app/layout.tsx:102-106 has already cleared appConsoleEntriesAtom, and src/hooks/useRunApp.ts:125-133 drops any batched entries whose output.appId !== appId, so those just-produced lines disappear from the console permanently. Before batching, the same output was delivered immediately while app A was still selected.
```typescript
function flushAllAppOutputs(): void {
  flushTimer = null;
  for (const [sender, outputs] of pendingOutputs) {
    if (outputs.length > 0) {
      safeSend(sender, "app:output-batch", outputs);
```
**Drop buffered stdout/stderr when the user clears logs**

If the user clicks Clear logs while an app is still producing output, src/components/preview_panel/Console.tsx:106-112 clears the backend store and UI immediately, but any stdout/stderr already sitting in pendingOutputs is still emitted here on the next timer tick or process-exit flush. That makes previously cleared lines reappear in the console, which is a regression introduced by buffering every output for up to 100ms.
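One possible mitigation is to discard an app's buffered lines at the moment the logs are cleared. This is a hypothetical sketch: `clearPendingOutputsFor` does not exist in the PR, and the `appId` field on buffered entries is assumed for illustration.

```typescript
// Hypothetical: discard buffered output for an app when its logs are cleared,
// so stale lines cannot reappear on the next flush.
type Output = { appId: number; message: string };

const pendingOutputs = new Map<string, Output[]>();

function clearPendingOutputsFor(appId: number): void {
  for (const [senderId, outputs] of pendingOutputs) {
    const kept = outputs.filter((o) => o.appId !== appId);
    if (kept.length > 0) {
      pendingOutputs.set(senderId, kept);
    } else {
      pendingOutputs.delete(senderId); // nothing left buffered for this sender
    }
  }
}
```

The Clear logs IPC handler would call this alongside clearing the backend store, keeping the buffer consistent with what the user sees.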
Dyadbot Code Review Summary

Verdict: ✅ YES - Ready to merge

Reviewed by 3 independent agents: Correctness Expert, Code Health Expert, UX Wizard.

Issues Summary

No new HIGH or MEDIUM issues found beyond what existing reviewers have already flagged. The key issues identified by other reviewers that should be addressed before merging:

🟢 Low Priority Notes (1 item)

🚫 Dropped False Positives (7 items)

Generated by Dyadbot multi-agent code review
Playwright Test Results

❌ Some tests failed

Summary: 258 passed, 1 failed, 8 flaky, 6 skipped

Failed Tests (macOS)

Re-run Failing Tests (macOS)

Copy and paste to re-run all failing spec files locally:

```shell
npm run e2e \
  e2e-tests/select_component.spec.ts
```
Summary
- Batch stdout/stderr from child processes into a single IPC event (`app:output-batch`), reducing IPC traffic, array allocations, and React re-renders when apps emit high-volume logs
- Keep `input-requested` messages on the immediate `app:output` channel for responsive UX
- Update the renderer to apply a single `setConsoleEntries` state update instead of one per message

Test plan
- Run an app that emits high-volume logs (e.g. `console.log` in a loop) and verify the UI remains responsive

🤖 Generated with Claude Code