Conversation
Walkthrough

Adds a 30-minute cooldown to the proofs processing skip condition, requiring queue growth beyond a tracked lastCount after the cooldown to proceed. Adjusts queue batching: uses direct time assignment for the processed timestamp, and forces a minimum batch size when cutoff < 45 (instead of cutoff == 0).

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Proofs as Proofs.Start
    participant Queue as queue.Count()
    participant Timer as time.Now()
    Note over Proofs,Queue: New proof-cycle gating (high level)
    Proofs->>Queue: get queueCount
    Proofs->>Proofs: check lastProcessed timestamp
    alt elapsed < 30m AND lastCount > 0 AND queueCount > lastCount
        Proofs-->Proofs: skip proof cycle (cooldown + growth)
    else
        Proofs->>Timer: now
        Proofs->>Queue: process proofs
    end
```
```mermaid
sequenceDiagram
    autonumber
    participant Listener as queue.Listen()
    participant Broad as BroadcastPending()
    participant Time as time.Now()
    Note over Listener,Broad: Batch gating and timestamp update
    Listener->>Broad: evaluate pending items (total)
    alt total >= 25 OR elapsed >= 10m
        Broad->>Listener: create batch (normal)
        Listener->>Time: q.processed = time.Now()
    else
        Listener-->Listener: skip batching
    end
    Note over Broad: When packaging, if cutoff < 45 then force minimum batch size = 45
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Possibly related PRs
Pre-merge checks and finishing touches
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (2)
proofs/proofs.go (1 hunk)
queue/queue.go (2 hunks)
🔇 Additional comments (1)
queue/queue.go (1)
148-148: LGTM: Direct assignment improves clarity. The removal of the intermediate time variable is a clean simplification with no behavioral change.
Actionable comments posted: 1
♻️ Duplicate comments (1)
queue/queue.go (1)
184-187: Critical: This change worsens the transaction size limit violation.

This modification makes the previously flagged critical issue more severe. The size calculation loop (lines 168-182) determines how many messages fit within `maxSizeBytes`. If the loop finds that only 10 messages fit, `cutoff` is set to 10. However, this logic now forces `cutoff = 45` whenever fewer than 45 messages fit, completely overriding the size constraint.

Impact: Transactions will exceed `maxSizeBytes` and fail during broadcast.

The previous condition `cutoff == 0` only forced a minimum when no messages fit (an edge case). The new condition `cutoff < 45` forces a minimum even when the size calculation has determined a valid, smaller batch size.

Recommended fix: Remove the forced minimum batch size entirely, as it conflicts with transaction size limits:

```diff
-// Process at least the first 45 messages or the total number of messages if less than 45
-if cutoff < 45 {
-	cutoff = 45
-}
-
 if cutoff > total {
 	cutoff = total
 }
```

Alternatively, if a minimum batch size is truly necessary, implement it with respect to the size constraint: only expand the batch if additional messages still fit within `maxSizeBytes`. The current implementation is fundamentally incompatible with the size-based batching logic.

Based on past review comments.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
queue/queue.go (2 hunks)
🔇 Additional comments (1)
queue/queue.go (1)
155-155: LGTM! The direct assignment of `time.Now()` to `q.processed` simplifies the code without changing functionality.
```go
// bunch into 25 message chunks if possible
if total < 25 { // if total is less than 25 messages, and it's been less than 10 minutes passed, skip
	if q.processed.Add(time.Minute * 10).After(time.Now()) {
		continue
	}
}
```
🧹 Nitpick | 🔵 Trivial
Batching logic looks good, but consider clarifying the comment.
The gating logic correctly implements a strategy to wait for at least 25 messages or a 10-minute timeout before processing. However, the comment "bunch into 25 message chunks if possible" doesn't fully capture this behavior—it's more accurately a minimum batch size with a time-based fallback.
Consider updating the comment to something like:
```diff
-// bunch into 25 message chunks if possible
+// Wait for at least 25 messages or 10 minutes before processing
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// Wait for at least 25 messages or 10 minutes before processing
if total < 25 { // if total is less than 25 messages, and it's been less than 10 minutes passed, skip
	if q.processed.Add(time.Minute * 10).After(time.Now()) {
		continue
	}
}
```
🤖 Prompt for AI Agents

```
In queue/queue.go around lines 147 to 152, the inline comment "bunch into 25
message chunks if possible" is misleading; update it to clearly state that the
logic enforces a minimum batch size of 25 messages with a 10-minute time-based
fallback before processing. Replace the comment with a concise explanation that
the code waits until there are at least 25 messages or until 10 minutes have
passed since last processing, then proceed; keep the conditional logic
unchanged.
```