more block tweaks #147
```diff
@@ -311,6 +311,12 @@ func (p *Prover) Start() {
 				continue
 			}

+			if p.q.Count() > p.lastCount { // don't run if the queue has more than the amount of files on disk
+				log.Warn().
+					Msg("Queue is full, skipping proof cycle")
+				continue
+			}
+
```
Comment on lines +314 to +318

Contributor

Logic bug: the guard triggers on the first cycle with any queued messages. Since `lastCount` is zero-initialized, `p.q.Count() > p.lastCount` is true for any non-empty queue on the first cycle, so that cycle is skipped.

Additionally, the guard compares queue size against the number of files processed last cycle, not the previous queue size. This doesn't directly measure queue growth—if you processed 10 files last cycle but the queue now has 15 messages, it could mean the queue shrank from 20 to 15 rather than grew.

Consider one of these approaches:

Option 1: Initialize `lastCount` to prevent the first-cycle skip

In `NewProver`:

```diff
 func NewProver(wallet *wallet.Wallet, q *queue.Queue, io FileSystem, interval uint64, threads int16, chunkSize int) *Prover {
 	p := Prover{
 		running:   false,
 		wallet:    wallet,
 		q:         q,
 		processed: time.Time{},
 		interval:  interval,
 		io:        io,
 		threads:   threads,
 		chunkSize: chunkSize,
+		lastCount: 1000, // or another reasonable threshold
 	}
 	return &p
 }
```

Option 2: Track actual queue size to measure growth

Replace the guard:

```diff
-	if p.q.Count() > p.lastCount { // don't run if the queue has more than the amount of files on disk
+	currentQueueSize := p.q.Count()
+	if p.lastQueueSize > 0 && currentQueueSize > p.lastQueueSize { // skip if queue is growing
 		log.Warn().
 			Msg("Queue is full, skipping proof cycle")
+		p.lastQueueSize = currentQueueSize
 		continue
 	}
+	p.lastQueueSize = currentQueueSize
```

Then update at the end (replace line 347):

```diff
-	p.lastCount = count
+	p.lastQueueSize = p.q.Count()
```
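The growth-based guard from Option 2 can be exercised in isolation. A minimal sketch—the `prover` type here is a hypothetical stand-in for the real `Prover`, keeping only the field Option 2 introduces:

```go
package main

import "fmt"

// prover is a hypothetical stand-in for the real Prover type.
type prover struct {
	lastQueueSize int
}

// shouldSkip reports whether a proof cycle should be skipped because
// the queue grew since the previous cycle, then records the new size.
func (p *prover) shouldSkip(currentQueueSize int) bool {
	skip := p.lastQueueSize > 0 && currentQueueSize > p.lastQueueSize
	p.lastQueueSize = currentQueueSize
	return skip
}

func main() {
	p := &prover{}
	fmt.Println(p.shouldSkip(10)) // false: first cycle is never skipped
	fmt.Println(p.shouldSkip(15)) // true: queue grew 10 -> 15
	fmt.Println(p.shouldSkip(5))  // false: queue shrank
}
```

Note how the `p.lastQueueSize > 0` check doubles as the first-cycle bypass, so both problems raised above are addressed by the same guard.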
```diff
 			log.Debug().Msg("Starting proof cycle...")

 			abciInfo, err := p.wallet.Client.RPCClient.ABCIInfo(context.Background())

@@ -319,7 +325,7 @@ func (p *Prover) Start() {
 				continue
 			}
 			height := abciInfo.Response.LastBlockHeight

+			var count int // reset last count here
```
Contributor

🧹 Nitpick | 🔵 Trivial

Minor: clarify comment. The comment "reset last count here" is slightly misleading—this line declares a new local variable `count`; nothing is reset at this point.

Consider clarifying:

```diff
-	var count int // reset last count here
+	var count int // track files processed this cycle
```
```diff
 			t := time.Now()

 			err = p.io.ProcessFiles(func(merkle []byte, owner string, start int64) {

@@ -331,12 +337,15 @@ func (p *Prover) Start() {
 				log.Debug().Msg(fmt.Sprintf("proving: %x", merkle))
 				filesProving.Inc()
 				p.Inc()
+				count++
 				go p.wrapPostProof(merkle, owner, start, height, t)
 			})
 			if err != nil {
 				log.Error().Err(err)
 			}

+			p.lastCount = count

 			p.processed = time.Now()
 		}
 		log.Info().Msg("Prover module stopped")
```
```diff
@@ -19,6 +19,7 @@ type Prover struct {
 	threads        int16
 	currentThreads int16
 	chunkSize      int
+	lastCount      int
```
Contributor

🧩 Analysis chain

Verify initialization and first-cycle behavior. The `lastCount` field is never set in `NewProver`, so it defaults to 0, and on the first cycle the guard `p.q.Count() > p.lastCount` fires for any non-empty queue.

The following scripts were run to confirm this:

```shell
#!/bin/bash
# Check if lastCount is initialized in NewProver
ast-grep --pattern 'func NewProver($$$) *Prover {
  $$$
}'
```

Length of output: 619

```shell
#!/bin/bash
# Find all usages of lastCount to understand guard behavior and impact
rg -nP -C5 'lastCount' proofs/
```

Length of output: 1507

```shell
#!/bin/bash
# Check git history and surrounding comments for context on first-cycle behavior intention
git log -p --follow -S 'lastCount' -- proofs/ | head -100
```

Length of output: 59

```shell
#!/bin/bash
# Get more context around the guard logic with comments
sed -n '300,360p' proofs/proofs.go
```

Length of output: 1567

Address the first-cycle processing skip caused by the uninitialized `lastCount` field. Because it is zero-initialized in `NewProver`, the guard skips the first proof cycle whenever any messages are queued. The intended guard logic (line 314 comment: "don't run if the queue has more than the amount of files on disk") is to detect backlogging, not to skip the first cycle. Initialize `lastCount` to a sensible threshold, or bypass the guard on the first cycle.
```diff
 }

 type FileSystem interface {
```
```diff
@@ -175,9 +175,13 @@ func (q *Queue) BroadcastPending() (int, error) {
 		cutoff = i + 1 // cutoff is now the count of messages that fit
 	}

-	// If nothing fits, process at least the first message
+	// If nothing fits, process at least the first 45 messages or the total number of messages if less than 45
 	if cutoff == 0 {
-		cutoff = 1
+		cutoff = 45
 	}

+	if cutoff > total {
+		cutoff = total
+	}
```
Comment on lines +178 to 185

Contributor

Critical logic flaw: forcing 45 messages when the size limit is exceeded. When `cutoff == 0`, no prefix of the queue fits under the size limit, yet the new code broadcasts 45 messages anyway—a batch already known to be oversized. The previous logic of setting `cutoff = 1` bounded the overflow to a single message per call.

Apply this diff to restore safe behavior:

```diff
-	// If nothing fits, process at least the first 45 messages or the total number of messages if less than 45
+	// If nothing fits, process at least the first message
 	if cutoff == 0 {
-		cutoff = 45
+		cutoff = 1
 	}

 	if cutoff > total {
 		cutoff = total
 	}
```

Alternatively, if larger batches are required for performance, consider removing or increasing the size limit instead of overriding it here.
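The fix can be checked against the size-limit invariant directly. A self-contained sketch of the cutoff computation under the suggested `cutoff = 1` floor—the helper name, sizes, and limit are hypothetical, not the queue's real API:

```go
package main

import "fmt"

// batchCutoff returns how many messages to broadcast: the longest
// prefix whose combined size stays under limit, with a floor of 1 so
// progress is still made when even the first message is oversized.
func batchCutoff(sizes []int, limit int) int {
	cutoff, running := 0, 0
	for i, s := range sizes {
		running += s
		if running > limit {
			break
		}
		cutoff = i + 1 // cutoff is now the count of messages that fit
	}
	if cutoff == 0 && len(sizes) > 0 {
		cutoff = 1 // at most one oversized message per call, never 45
	}
	return cutoff
}

func main() {
	fmt.Println(batchCutoff([]int{10, 10, 10}, 25)) // 2: first two fit under 25
	fmt.Println(batchCutoff([]int{100}, 25))        // 1: floor, single oversized message
	fmt.Println(batchCutoff(nil, 25))               // 0: nothing queued
}
```

With a floor of 1, the broadcast exceeds the limit by at most one message's worth; a floor of 45 can exceed it by up to 45 oversized messages at once.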
```diff
 	log.Info().Msg(fmt.Sprintf("Queue: Posting %d messages to chain...", cutoff))

@@ -200,7 +204,7 @@ func (q *Queue) BroadcastPending() (int, error) {
 	var i int
 	for !complete && i < 10 {
 		i++
-		res, err = q.wallet.BroadcastTxAsync(data)
+		res, err = q.wallet.BroadcastTxSync(data)
 		if err != nil {
 			if strings.Contains(err.Error(), "tx already exists in cache") {
 				if data.Sequence != nil {

@@ -246,3 +250,7 @@ func (q *Queue) BroadcastPending() (int, error) {

 	return cutoff, err
 }

+func (q *Queue) Count() int {
+	return len(q.messages)
+}
```
Contributor

🧩 Analysis chain

Verify rate limit changes align with batch size increases. The rate limit has become more restrictive (PerTokenMs increased from 300 to 400, Burst decreased from 20 to 10), which reduces broadcast frequency. This aligns with the larger minimum batch size (45 messages) introduced in `queue/queue.go`. However, ensure this combination doesn't cause excessive queue buildup during high-traffic periods, as fewer broadcasts with larger batches could lead to delays.

Scripts were run to check how the rate limiter is used and to verify no other components depend on the previous rate limit values (outputs: 4549, 2332, and 1861 bytes).

Update the outdated rate limit comment and verify the configuration aligns with the batch processing logic. The rate limit change is verified: PerTokenMs increased from 300 to 400ms (reducing the refill rate from 3.33 to 2.5 tokens/sec), and Burst decreased from 20 to 10. This aligns with the minimum batch size of 45 messages enforced in `BroadcastPending()` (queue/queue.go:180), supporting fewer but larger broadcasts.

However, the comment at queue/queue.go:142 is now outdated. It states "Token-bucket rate limit: allow calling BroadcastPending at most 20 times per 6 seconds," but with the new configuration, the rate allows approximately 15 refilled tokens over 6 seconds (2.5 tokens/sec × 6), not 20. Update this comment to reflect the new rate limit.
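The corrected comment's arithmetic can be sanity-checked with the token-bucket refill formula. A small sketch—the `PerTokenMs` values are taken from the review above, while the helper function itself is illustrative:

```go
package main

import "fmt"

// refilledTokens is the number of tokens a bucket regains over a
// window when one token is added every perTokenMs milliseconds.
func refilledTokens(windowMs, perTokenMs int) int {
	return windowMs / perTokenMs
}

func main() {
	// old config: one token per 300ms -> 20 tokens per 6-second window
	fmt.Println(refilledTokens(6000, 300)) // 20
	// new config: one token per 400ms -> 15 tokens per 6-second window
	fmt.Println(refilledTokens(6000, 400)) // 15
}
```

This confirms the "at most 20 times per 6 seconds" comment no longer matches the code: the sustained rate is now 15 calls per 6 seconds, with the Burst of 10 allowing short spikes above that.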