
More block tweaking #149

Merged
TheMarstonConnell merged 2 commits into main from marston/blocks on Nov 1, 2025
Conversation


@TheMarstonConnell (Member) commented Oct 31, 2025

Summary by CodeRabbit

  • Refactor
    • Added a cooldown to proof processing so cycles pause briefly before retrying, reducing redundant work during steady load.
    • Smarter message batching: triggering and minimum-batch behavior adjusted to create larger, more efficient batches when appropriate.
    • Improved timing updates for queue processing to make background work more consistent and predictable.


coderabbitai bot commented Oct 31, 2025

Walkthrough

Adds a 30-minute cooldown to the proofs processing skip condition: a cycle is skipped only while the cooldown has not elapsed, a prior count exists, and the queue has grown beyond the tracked lastCount. Adjusts queue batching: uses direct time assignment for the processed timestamp, and forces a minimum batch size when cutoff < 45 (instead of cutoff == 0).

Changes

Proof cycle gating (proofs/proofs.go)
Replaced the previous queue-full skip with a composite condition in Start(): skip only when (elapsed < 30 minutes) AND lastCount > 0 AND queueCount > lastCount. Removes the prior standalone queue-size-only check.

Queue batching & timestamps (queue/queue.go)
In Listen(), remove the temporary now variable and assign q.processed = time.Now() directly after BroadcastPending(). In BroadcastPending(), change the minimum-batch forcing condition from cutoff == 0 to cutoff < 45, forcing at least 45 items into the batch whenever fewer than 45 would otherwise fit.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Proofs as Proofs.Start
  participant Queue as queue.Count()
  participant Timer as time.Now()
  Note over Proofs,Queue: New proof-cycle gating (high level)
  Proofs->>Queue: get queueCount
  Proofs->>Proofs: check lastProcessed timestamp
  alt elapsed < 30m AND lastCount > 0 AND queueCount > lastCount
    Proofs-->Proofs: skip proof cycle (cooldown + growth)
  else
    Proofs->>Timer: now
    Proofs->>Queue: process proofs
  end
sequenceDiagram
  autonumber
  participant Listener as queue.Listen()
  participant Broad as BroadcastPending()
  participant Time as time.Now()
  Note over Listener,Broad: Batch gating and timestamp update
  Listener->>Broad: evaluate pending items (total)
  alt total >= 25 OR elapsed >= 10m
    Broad->>Listener: create batch (normal)
    Listener->>Time: q.processed = time.Now()
  else
    Listener-->Listener: skip batching
  end
  Note over Broad: When packaging, if cutoff < 45 then force minimum batch size = 45

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Files needing extra attention:
    • proofs/proofs.go — verify time comparison semantics, boundary conditions, and interaction of lastCount/queue growth logic.
    • queue/queue.go — validate the cutoff < 45 change impact on batching, memory, and latency; confirm q.processed is updated in all appropriate paths.

Possibly related PRs

  • fixing zero #148: Earlier change that added a lastCount > 0 guard to the proofs queue-full check; directly related to the gating logic expanded here.
  • more block tweaks #147: Modifies similar proof-start and queue batching logic, including cutoff handling and proof-cycle gating.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
  • Title Check ❓ Inconclusive: The title "More block tweaking" is vague: "tweaking" conveys little about what changed, and "block" does not identify the affected components. The substantive changes are time-based cooldown gating for proof processing and batching thresholds for queue processing. A more descriptive title, such as "Add cooldown gating and batching thresholds to proof and queue processing", would give reviewers and future developers a clearer picture of this pull request.
✅ Passed checks (1 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
✨ Finishing touches
  • 📝 Generate docstrings
  • 🧪 Generate unit tests (beta)
    • Create PR with unit tests
    • Post copyable unit tests in a comment
    • Commit unit tests in branch marston/blocks

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fc25d62 and fda0a87.

📒 Files selected for processing (2)
  • proofs/proofs.go (1 hunks)
  • queue/queue.go (2 hunks)
🔇 Additional comments (1)
queue/queue.go (1)

148-148: LGTM: Direct assignment improves clarity.

The removal of the intermediate time variable is a clean simplification with no behavioral change.


@coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
queue/queue.go (1)

184-187: Critical: This change worsens the transaction size limit violation.

This modification makes the previously flagged critical issue more severe. The size calculation loop (lines 168-182) determines how many messages fit within maxSizeBytes. If the loop finds that only 10 messages fit, cutoff is set to 10. However, this logic now forces cutoff = 45 whenever fewer than 45 messages fit, completely overriding the size constraint.

Impact: Transactions will exceed maxSizeBytes and fail during broadcast.

The previous condition cutoff == 0 only forced a minimum when no messages fit (an edge case). The new condition cutoff < 45 forces a minimum even when the size calculation has determined a valid, smaller batch size.

Recommended fix:

Remove the forced minimum batch size entirely, as it conflicts with transaction size limits:

-// Process at least the first 45 messages or the total number of messages if less than 45
-if cutoff < 45 {
-  cutoff = 45
-}
-
 if cutoff > total {
   cutoff = total
 }

Alternatively, if a minimum batch size is truly necessary, implement it with respect to the size constraint—only expand the batch if additional messages still fit within maxSizeBytes. The current implementation is fundamentally incompatible with the size-based batching logic.

Based on past review comments.
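The reviewer's alternative above, expanding the batch only while additional messages still fit within the size limit, could look roughly like this. The function and its parameters (sizes, minBatch, maxSizeBytes) are hypothetical names for illustration, not the actual queue/queue.go code:

```go
package main

import "fmt"

// minBatchWithinSize sketches a size-respecting minimum batch: starting
// from the cutoff chosen by the size-calculation loop, expand toward
// minBatch messages, but stop as soon as the next message would push the
// batch past maxSizeBytes. This never overrides the size constraint the
// way an unconditional "cutoff = 45" does.
func minBatchWithinSize(sizes []int, cutoff, minBatch, maxSizeBytes int) int {
	used := 0
	for i := 0; i < cutoff; i++ {
		used += sizes[i] // bytes already committed to the batch
	}
	for cutoff < minBatch && cutoff < len(sizes) {
		if used+sizes[cutoff] > maxSizeBytes {
			break // next message would exceed the limit; stop expanding
		}
		used += sizes[cutoff]
		cutoff++
	}
	return cutoff
}

func main() {
	// With 100-byte messages and a 250-byte limit, only two fit, so the
	// minimum of 45 is not forced and the cutoff stays at 2.
	fmt.Println(minBatchWithinSize([]int{100, 100, 100, 100, 100}, 2, 45, 250))
	// Small messages that all fit expand up to the total available.
	fmt.Println(minBatchWithinSize([]int{10, 10, 10, 10, 10}, 0, 45, 1000))
}
```

This keeps the intent of a minimum batch (reduce tiny broadcasts) without producing transactions that exceed maxSizeBytes and fail.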

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fda0a87 and b468a42.

📒 Files selected for processing (1)
  • queue/queue.go (2 hunks)
🔇 Additional comments (1)
queue/queue.go (1)

155-155: LGTM!

The direct assignment of time.Now() to q.processed simplifies the code without changing functionality.

Comment on lines +147 to +152
// bunch into 25 message chunks if possible
if total < 25 { // if total is less than 25 messages, and it's been less than 10 minutes passed, skip
	if q.processed.Add(time.Minute * 10).After(time.Now()) {
		continue
	}
}

🧹 Nitpick | 🔵 Trivial

Batching logic looks good, but consider clarifying the comment.

The gating logic correctly implements a strategy to wait for at least 25 messages or a 10-minute timeout before processing. However, the comment "bunch into 25 message chunks if possible" doesn't fully capture this behavior—it's more accurately a minimum batch size with a time-based fallback.

Consider updating the comment to something like:

-// bunch into 25 message chunks if possible
+// Wait for at least 25 messages or 10 minutes before processing
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Wait for at least 25 messages or 10 minutes before processing
if total < 25 { // if total is less than 25 messages, and fewer than 10 minutes have passed, skip
	if q.processed.Add(time.Minute * 10).After(time.Now()) {
		continue
	}
}
🤖 Prompt for AI Agents
In queue/queue.go around lines 147 to 152, the inline comment "bunch into 25
message chunks if possible" is misleading; update it to clearly state that the
logic enforces a minimum batch size of 25 messages with a 10-minute time-based
fallback before processing. Replace the comment with a concise explanation that
the code waits until there are at least 25 messages or until 10 minutes have
passed since last processing, then proceed; keep the conditional logic
unchanged.

@TheMarstonConnell merged commit 4e25157 into main Nov 1, 2025
6 checks passed
@TheMarstonConnell deleted the marston/blocks branch November 1, 2025 06:24