2 changes: 1 addition & 1 deletion config/types.go
```diff
@@ -68,7 +68,7 @@ type RateLimitConfig struct {
 }
 
 func DefaultRateLimitConfig() RateLimitConfig {
-	return RateLimitConfig{PerTokenMs: 300, Burst: 20}
+	return RateLimitConfig{PerTokenMs: 400, Burst: 10}
 }
```
Comment on lines 70 to 72
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify rate limit changes align with batch size increases.

The rate limit has become more restrictive (PerTokenMs increased from 300 to 400, Burst decreased from 20 to 10), which reduces broadcast frequency. This aligns with the larger minimum batch size (45 messages) introduced in queue/queue.go. However, ensure this combination doesn't cause excessive queue buildup during high-traffic periods, as fewer broadcasts with larger batches could lead to delays.

Run the following script to check how the rate limiter is used and verify no other components depend on the previous rate limit values:


🏁 Script executed:

```shell
#!/bin/bash
# Description: Find all references to rate limit configuration and verify usage patterns

# Search for RateLimitConfig usage
echo "=== RateLimitConfig usage ==="
rg -n "RateLimitConfig" --type=go -C 3

# Search for rate limiter Allow() calls to understand frequency patterns
echo -e "\n=== Rate limiter Allow() calls ==="
rg -n "limiter\.Allow\(\)" --type=go -C 3

# Check for any hardcoded rate limit values that might conflict
echo -e "\n=== Hardcoded rate limit values ==="
rg -n "PerTokenMs|Burst" --type=go -C 2
```

Length of output: 4549




🏁 Script executed:

```shell
#!/bin/bash
# Verify batch size changes and understand rate limiter interaction

# Search for batch size configuration
echo "=== Batch size configuration ==="
rg -n "batch|Batch" --type=go queue/queue.go -C 3

# Check BroadcastPending implementation to understand frequency
echo -e "\n=== BroadcastPending usage ==="
rg -n "BroadcastPending" --type=go -C 2

# Search for message count thresholds in queue.go
echo -e "\n=== Message count/thresholds ==="
rg -n "[0-9]{2,}" --type=go queue/queue.go | head -30
```

Length of output: 2332


🏁 Script executed:

```shell
#!/bin/bash
# Verify batch size changes and understand rate limiter interaction

# Search for batch size configuration
echo "=== Batch size configuration ==="
rg -n "batch|Batch" --type=go queue/queue.go -C 3

# Check BroadcastPending implementation to understand frequency
echo -e "\n=== BroadcastPending method ==="
rg -n "func.*BroadcastPending" --type=go -A 20

# Search for message accumulation/minimum size logic
echo -e "\n=== Message size/count checks ==="
rg -n "messages.*len|len.*messages|MinBatch" --type=go queue/queue.go -C 2
```

Length of output: 1861


Update outdated rate limit comment and verify configuration aligns with batch processing logic.

The rate limit change is verified: PerTokenMs increased from 300 to 400ms (reducing frequency from 3.33 to 2.5 tokens/sec), and Burst decreased from 20 to 10. This aligns with the minimum batch size of 45 messages enforced in BroadcastPending() (queue/queue.go:180), supporting fewer but larger broadcasts.

However, the comment at queue/queue.go:142 is now outdated. It states "Token-bucket rate limit: allow calling BroadcastPending at most 20 times per 6 seconds," but with the new configuration, the rate allows approximately 15 refilled tokens over 6 seconds (2.5 tokens/sec × 6), not 20. Update this comment to reflect the new rate limit.

🤖 Prompt for AI Agents
In queue/queue.go around line 142 (and note config/types.go lines 70-72 define
the current DefaultRateLimitConfig), update the outdated token-bucket comment to
reflect the new rate: PerTokenMs is 400ms (≈2.5 tokens/sec) and Burst 10, so
over 6 seconds you get about 15 tokens (not 20); change the comment text to
state the correct refill math and ensure it mentions that the config supports
fewer, larger broadcasts (minimum batch size 45) so the comment and config are
consistent.
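The refill arithmetic behind the comment can be sanity-checked with a small sketch. This assumes a plain token bucket that refills one token every `PerTokenMs` milliseconds; the repo's actual limiter implementation is not shown in this diff, so treat `refillsPerWindow` as an illustrative helper, not project code.

```go
package main

import "fmt"

// refillsPerWindow reports how many tokens a bucket refills over windowMs,
// given one new token every perTokenMs (the initial Burst allowance is
// counted separately and is not included here).
func refillsPerWindow(perTokenMs, windowMs int) int {
	return windowMs / perTokenMs
}

func main() {
	// Old config: 300ms per token -> 20 refills over 6 seconds,
	// matching the outdated comment's "20 times per 6 seconds".
	fmt.Println(refillsPerWindow(300, 6000))
	// New config: 400ms per token -> 15 refills over 6 seconds,
	// which is what the updated comment should state.
	fmt.Println(refillsPerWindow(400, 6000))
}
```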


type StrayManagerConfig struct {
10 changes: 7 additions & 3 deletions queue/queue.go
```diff
@@ -175,9 +175,13 @@ func (q *Queue) BroadcastPending() (int, error) {
 		cutoff = i + 1 // cutoff is now the count of messages that fit
 	}
 
-	// If nothing fits, process at least the first message
+	// If nothing fits, process at least the first 45 messages or the total number of messages if less than 45
 	if cutoff == 0 {
-		cutoff = 1
+		cutoff = 45
 	}
+
+	if cutoff > total {
+		cutoff = total
+	}
```
Comment on lines +178 to 185
⚠️ Potential issue | 🔴 Critical

Critical logic flaw: Forcing 45 messages when size limit is exceeded.

When cutoff == 0 after the size-checking loop (lines 162-176), it indicates that even the first message exceeds maxSizeBytes. Setting cutoff = 45 in this scenario forces the inclusion of messages that are known to violate the size constraint, virtually guaranteeing the broadcast will fail.

The previous logic of setting cutoff = 1 was correct—it attempted a best-effort broadcast of the first message. If the intent is to process multiple messages when the queue is large but none fit, the logic should distinguish between:

  1. Size exceeded: Keep cutoff = 1 (best effort)
  2. Empty queue or other conditions: Use a larger minimum

Apply this diff to restore safe behavior:

```diff
-	// If nothing fits, process at least the first 45 messages or the total number of messages if less than 45
+	// If nothing fits, process at least the first message
 	if cutoff == 0 {
-		cutoff = 45
+		cutoff = 1
 	}
 
 	if cutoff > total {
 		cutoff = total
 	}
```
Alternatively, if larger batches are required for performance, consider removing or increasing maxSizeBytes rather than bypassing the size check.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
-// If nothing fits, process at least the first 45 messages or the total number of messages if less than 45
+// If nothing fits, process at least the first message
 if cutoff == 0 {
-	cutoff = 45
+	cutoff = 1
 }
 
 if cutoff > total {
 	cutoff = total
 }
```
🤖 Prompt for AI Agents
In queue/queue.go around lines 178 to 185, the code currently forces cutoff = 45
when cutoff == 0 which allows messages that exceed maxSizeBytes to be included;
change this to set cutoff = 1 so we attempt a best-effort broadcast of the first
(oversized) message instead of batching many violating messages. Preserve the
subsequent guard that limits cutoff to total; if you need larger batches for
performance, adjust maxSizeBytes elsewhere rather than bypassing the size check.
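The distinction the reviewer recommends can be sketched as a small pure function. `chooseCutoff` below is a hypothetical helper written for illustration, not the repo's code: when no message fits the size limit it falls back to a best-effort cutoff of 1, and it preserves the clamp to `total` from the diff.

```go
package main

import "fmt"

// chooseCutoff picks how many queued messages to broadcast.
// fitCount is the number of leading messages that fit under maxSizeBytes
// (0 means even the first message is oversized); total is the queue length.
func chooseCutoff(fitCount, total int) int {
	cutoff := fitCount
	if cutoff == 0 {
		// Best effort: attempt just the first (oversized) message,
		// rather than forcing a batch of messages known to violate
		// the size constraint.
		cutoff = 1
	}
	if cutoff > total {
		cutoff = total
	}
	return cutoff
}

func main() {
	fmt.Println(chooseCutoff(0, 100)) // nothing fits: best-effort 1, not 45
	fmt.Println(chooseCutoff(60, 50)) // clamped to the queue length
	fmt.Println(chooseCutoff(10, 50)) // normal case: broadcast what fits
}
```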


```diff
 	log.Info().Msg(fmt.Sprintf("Queue: Posting %d messages to chain...", cutoff))
@@ -200,7 +204,7 @@ func (q *Queue) BroadcastPending() (int, error) {
 	var i int
 	for !complete && i < 10 {
 		i++
-		res, err = q.wallet.BroadcastTxAsync(data)
+		res, err = q.wallet.BroadcastTxSync(data)
 		if err != nil {
 			if strings.Contains(err.Error(), "tx already exists in cache") {
 				if data.Sequence != nil {
 ```