Conversation

@julienrbrt julienrbrt commented Oct 30, 2025

ref: #1914

A choice has been made to implement this logic in the executor and avoid extending the reaper and the sequencer.
This is because updating the reaper would mean passing the last fetched DA height down across all components, which adds a lot of complexity. Adding it in the sequencer may be preferable, but that makes the inclusion in a sync node less straightforward. This is what is being investigated.

Compared to the previous implementation, a forced transaction does not have any structure. It should be the raw structure from the execution client. This keeps ev-node knowing nothing about the transaction: no signature checks, no validation of correctness. The execution client must make sure to reject gibberish transactions.
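As a sketch of what this contract implies execution-side (`decodeTx` and `filterForcedTxs` are illustrative names, not ev-node APIs): the client decodes each raw forced tx and silently drops anything that fails, so ev-node never needs to understand the bytes it forwards.

```go
import "errors"

// decodeTx stands in for the execution client's own deserialization
// (e.g. RLP decoding in an EVM client); here it only rejects empty blobs.
func decodeTx(b []byte) ([]byte, error) {
	if len(b) == 0 {
		return nil, errors.New("empty tx")
	}
	return b, nil
}

// filterForcedTxs drops txs that fail to decode instead of erroring out,
// since ev-node performs no validation on forced inclusion bytes.
func filterForcedTxs(raw [][]byte) [][]byte {
	valid := make([][]byte, 0, len(raw))
	for _, b := range raw {
		if _, err := decodeTx(b); err != nil {
			continue
		}
		valid = append(valid, b)
	}
	return valid
}
```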

  • implement for executor
    • add size checks to make sure the batch doesn't go over max bytes
  • implement for syncer
  • requirements:
    • the syncer must be able to detect whether the sequencer was malicious and didn't include a forced tx
    • verify the inclusion of the fetched tx bytes within the block data
    • if that happens, reject all future sequencer blocks
      • eventually, become based and keep producing blocks
  • tool for submitting forced txs in celestia easily -> feat(server): add force inclusion cmd ev-abci#295
  • make sure epochs are in sync between syncer and executor

---- For later, won't be included in this PR (ref #2797 (comment))

  • improve offline capabilities (if the sequencer goes offline, full nodes become fully based)
    • when the sequencer comes back online, it needs to check whether any blocks were produced :/

github-actions bot commented Nov 1, 2025

The latest Buf updates on your PR. Results from workflow CI and Release / buf-check (pull_request).

| Build | Format | Lint | Breaking | Updated (UTC) |
| --- | --- | --- | --- | --- |
| ✅ passed | ⏩ skipped | ✅ passed | ✅ passed | Nov 10, 2025, 3:58 PM |

claude bot commented Nov 1, 2025

Claude finished @julienrbrt's task.


Comprehensive Code Review: Forced Inclusion Implementation

I've completed a thorough review of this PR implementing the forced inclusion mechanism. Overall, this is a well-designed and carefully implemented feature with good test coverage. Below are my findings organized by focus area.


1. Code Quality & Architecture ⭐

Strengths

  • Excellent modular design: Clear separation of concerns with dedicated components (ForcedInclusionRetriever, BasedSequencer, size validation utilities)
  • Clean interfaces: The ForcedInclusionRetriever interface is well-defined and properly abstracted
  • Idiomatic Go: Code follows Go conventions with proper error handling, context usage, and structured logging
  • Zero-dependency core pattern: Correctly maintains the architectural principle with minimal dependencies in core packages
  • ADR documentation: Comprehensive ADR-019 document explaining design decisions and trade-offs

Issues Found

🔴 Critical: Race condition in BasedSequencer.txQueue

  • Location: sequencers/based/sequencer.go:36,116,151,160
  • Issue: txQueue [][]byte is accessed without mutex protection in GetNextBatch and createBatchFromQueue
  • Impact: Concurrent calls to GetNextBatch could cause data races or slice bounds violations
  • Fix: Add mutex protection around txQueue operations (similar to single sequencer's BatchQueue)
type BasedSequencer struct {
    // ... existing fields ...
    mu       sync.Mutex
    txQueue  [][]byte
}

func (s *BasedSequencer) GetNextBatch(...) {
    s.mu.Lock()
    defer s.mu.Unlock()
    // ... rest of implementation
}

🟡 Medium: Inconsistent error handling in syncer verification

  • Location: block/internal/syncing/syncer.go:527-533
  • Issue: Non-malicious errors during forced inclusion verification don't halt sync, potentially allowing blocks to pass without full verification
  • Current code:
if err := s.verifyForcedInclusionTxs(currentState, data); err != nil {
    s.logger.Error().Err(err).Uint64("height", nextHeight).Msg("forced inclusion verification failed")
    if errors.Is(err, errMaliciousProposer) {
        s.cache.RemoveHeaderDAIncluded(headerHash)
        return err
    }
    // Non-malicious errors don't return - block is applied anyway!
}
  • Fix: All verification errors should halt sync:
if err := s.verifyForcedInclusionTxs(currentState, data); err != nil {
    s.logger.Error().Err(err).Uint64("height", nextHeight).Msg("forced inclusion verification failed")
    s.cache.RemoveHeaderDAIncluded(headerHash)
    return err
}

🟡 Medium: Configuration validation disabled

  • Location: pkg/config/config.go:264-268
  • Issue: Forced inclusion namespace validation is commented out and returns error unconditionally
if len(c.DA.GetForcedInclusionNamespace()) > 0 {
    // if err := validateNamespace(c.DA.GetForcedInclusionNamespace()); err != nil {
    //     return fmt.Errorf("could not validate forced inclusion namespace (%s): %w", c.DA.GetForcedInclusionNamespace(), err)
    // }
    return fmt.Errorf("forced inclusion is not yet live")
}
  • Context: Per commit 0d790ef, feature is intentionally disabled for initial merge
  • Recommendation: Before enabling feature, uncomment validation or remove the block entirely

🟢 Minor: Potential unbounded slice growth

  • Location: sequencers/single/sequencer.go:58
  • Issue: pendingForcedInclusionTxs []pendingForcedInclusionTx could grow unbounded if forced txs consistently exceed maxBytes
  • Impact: Memory leak potential in adversarial scenarios
  • Recommendation: Add maximum pending queue size with rejection/logging for excess
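A minimal sketch of such a bound (maxPendingForcedTxs and pendingTx are illustrative names and values, not from this PR):

```go
const maxPendingForcedTxs = 10_000 // illustrative cap; tune per deployment

// Enforce a hard bound before queuing: excess txs are dropped and logged
// instead of letting the pending queue grow without limit.
if len(s.pendingForcedInclusionTxs) >= maxPendingForcedTxs {
	s.logger.Warn().
		Int("pending", len(s.pendingForcedInclusionTxs)).
		Msg("pending forced inclusion queue full, dropping tx")
} else {
	s.pendingForcedInclusionTxs = append(s.pendingForcedInclusionTxs, pendingTx)
}
```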

🟢 Minor: Missing godoc on exported types

  • Location: block/internal/da/forced_inclusion_retriever.go:27
  • Issue: ForcedInclusionEvent struct lacks godoc comment
  • Fix: Add documentation:
// ForcedInclusionEvent contains forced inclusion transactions retrieved from DA
// along with the DA height range from which they were fetched.
type ForcedInclusionEvent struct {
    StartDaHeight uint64   // First DA height in the epoch range
    EndDaHeight   uint64   // Last DA height processed (may be less than epoch end if future height encountered)
    Txs           [][]byte // Raw transaction bytes from DA forced inclusion namespace
}

2. Error Handling & Resilience ⭐⭐

Strengths

  • Consistent error wrapping: Proper use of fmt.Errorf with %w for error chains
  • Graceful degradation: System continues when forced inclusion not configured
  • Context handling: Proper propagation of context.Context throughout
  • ErrHeightFromFuture handling: Correctly handles DA height not yet available

Issues Found

🟡 Medium: Missing context cancellation check in epoch loop

  • Location: block/internal/da/forced_inclusion_retriever.go:110-125
  • Issue: Long epoch loops (e.g., 100 DA heights) don't check for context cancellation
for epochHeight := epochStart + 1; epochHeight < epochEnd; epochHeight++ {
    // No context check here - could block shutdown
    result := r.client.RetrieveForcedInclusion(ctx, epochHeight)
    // ...
}
  • Fix: Add periodic context checks:
for epochHeight := epochStart + 1; epochHeight < epochEnd; epochHeight++ {
    select {
    case <-ctx.Done():
        return nil, ctx.Err()
    default:
    }
    result := r.client.RetrieveForcedInclusion(ctx, epochHeight)
    // ...
}

🟢 Minor: Logging of potentially large data

  • Location: Throughout forced inclusion code
  • Issue: Transaction bytes could be logged at debug level, potentially filling logs
  • Recommendation: Log tx count and sizes, not full tx data

3. Security 🔒

Strengths

  • Malicious proposer detection: Excellent verification logic in syncer detects missing forced txs
  • Size validation: Two-tier validation (absolute + batch limits) prevents DoS
  • No tx validation in ev-node: Correctly delegates validation to execution client
  • Deterministic epoch boundaries: Prevents timing-based manipulation

Issues Found

🟡 Medium: Hash collision vulnerability in verification

  • Location: block/internal/syncing/syncer.go:689-692
  • Issue: Uses simple string conversion for tx hashing, vulnerable to collision attacks
blockTxMap := make(map[string]struct{})
for _, tx := range data.Txs {
    blockTxMap[hashTx(tx)] = struct{}{} // What is hashTx?
}
  • Need to verify: Check hashTx() implementation. If it's just string(tx), this is vulnerable
  • Recommendation: Use cryptographic hash (SHA256) for transaction identity:
import "crypto/sha256"

func hashTx(tx []byte) string {
    h := sha256.Sum256(tx)
    return hex.EncodeToString(h[:])
}

🟢 Minor: Absolute blob size limit discrepancy

  • Location: sequencers/common/size_validation.go:8
  • Issue: Comment in ADR says 1.5MB but code has 2MB
// AbsoluteMaxBlobSize is the absolute maximum size for a single blob (DA layer limit).
AbsoluteMaxBlobSize = 2 * 1024 * 1024 // 2MB
  • ADR-019 line 382: Says const AbsoluteMaxBlobSize = 1.5 * 1024 * 1024
  • Recommendation: Align code with documentation or update both consistently

🟢 Minor: No rate limiting on forced inclusion submissions

  • Issue: Users can spam forced inclusion namespace with invalid/oversized blobs
  • Mitigation: DA layer's native spam protection + blob size validation provides basic protection
  • Recommendation: Document recommended DA layer rate limiting configuration

4. Performance & Resource Efficiency ⚡

Strengths

  • Epoch-based fetching: Excellent optimization to reduce DA queries
  • Efficient queue operations: BatchQueue reuses space before head via Prepend
  • Lazy evaluation: Only fetches at epoch boundaries

Issues Found

🟡 Medium: Potential memory inefficiency in epoch fetching

  • Location: block/internal/da/forced_inclusion_retriever.go:72-142
  • Issue: All txs from entire epoch accumulated in memory before returning
  • Impact: Large epochs with many txs could cause memory spikes
  • Recommendation: Consider streaming/chunking for very large epochs, or document maximum recommended epoch size

🟡 Medium: Sequential DA fetching in epoch loop

  • Location: block/internal/da/forced_inclusion_retriever.go:110-125
  • Issue: DA heights fetched sequentially within epoch range
  • Opportunity: Parallel fetching could significantly reduce latency for large epochs
  • Follow-up: Already noted in #issuecomment-3512654215 for future work ✓

🟢 Minor: Unbuffered map allocation

  • Location: block/internal/syncing/syncer.go:689
blockTxMap := make(map[string]struct{})
  • Recommendation: Pre-allocate with known size:
blockTxMap := make(map[string]struct{}, len(data.Txs))

🟢 Minor: Slice append in hot path

  • Location: sequencers/single/sequencer.go:208
batch.Transactions = append(forcedTxs, trimmedBatchTxs...)
  • Optimization: Pre-allocate combined slice:
combined := make([][]byte, 0, len(forcedTxs)+len(trimmedBatchTxs))
combined = append(combined, forcedTxs...)
combined = append(combined, trimmedBatchTxs...)
batch.Transactions = combined

5. Testing & Reliability ✅

Strengths

  • Comprehensive test coverage: 76.88% patch coverage is good for new feature
  • Unit tests present: Good coverage for epoch calculations, size validation, queue operations
  • Mock-based testing: Proper use of mockery for external dependencies
  • Edge cases covered: Tests for epoch boundaries, empty batches, size limits

Issues Found

🔴 Critical: Missing concurrency tests for BasedSequencer

  • Location: sequencers/based/sequencer_test.go
  • Issue: No tests verify thread-safety of GetNextBatch concurrent calls
  • Required: Add race detector tests:
func TestBasedSequencer_ConcurrentGetNextBatch(t *testing.T) {
    // Run with: go test -race
    // Spawn multiple goroutines calling GetNextBatch
}
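A fuller sketch of such a test (the constructor helper and the request type are assumptions to be adapted to the actual BasedSequencer API):

```go
func TestBasedSequencer_ConcurrentGetNextBatch(t *testing.T) {
	// newTestBasedSequencer is a hypothetical helper wiring up a sequencer
	// with mocked dependencies; adapt to the real constructor.
	seq := newTestBasedSequencer(t)

	var wg sync.WaitGroup
	for i := 0; i < 16; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Run with `go test -race` so unsynchronized txQueue access
			// is flagged by the race detector.
			_, _ = seq.GetNextBatch(context.Background(), coresequencer.GetNextBatchRequest{})
		}()
	}
	wg.Wait()
}
```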

🟡 Medium: Missing forced inclusion verification tests

  • Location: Test files
  • Gap: No tests for syncer's verifyForcedInclusionTxs with various scenarios:
    • Malicious sequencer (missing forced txs)
    • Partial inclusion
    • Reordered txs
    • Invalid hashes
  • Recommendation: Add syncer_forced_inclusion_test.go with comprehensive verification tests

🟡 Medium: Epoch boundary edge cases

  • Location: types/epoch_test.go
  • Coverage: Good basic tests, but missing:
    • daHeight < daStartHeight (returns what?)
    • daEpochSize = 0 (should this be allowed?)
    • Integer overflow scenarios (very large heights)
  • Recommendation: Add edge case tests
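A table-driven sketch of those cases (the epoch helper under test is referred to generically; adapt the names to the actual functions in types):

```go
func TestEpochEdgeCases(t *testing.T) {
	cases := []struct {
		name          string
		daHeight      uint64
		daStartHeight uint64
		epochSize     uint64
	}{
		{"height below DA start height", 5, 10, 100},
		{"zero epoch size", 20, 10, 0},                    // must not divide by zero
		{"very large height", math.MaxUint64 - 1, 0, 100}, // must not overflow
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			// Call the epoch helper here and assert it returns an error or a
			// documented sentinel value instead of panicking.
			_ = tc
		})
	}
}
```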

🟢 Minor: Test coverage for pending queue overflow

  • Issue: No tests verify behavior when pendingForcedInclusionTxs grows large
  • Recommendation: Add test with many oversized forced txs

6. Documentation & Developer Experience 📚

Strengths

  • Excellent ADR: ADR-019 is comprehensive, well-structured, and explains trade-offs
  • Inline comments: Good explanatory comments in complex logic
  • CLAUDE.md updated: Project instructions updated with forced inclusion context
  • Logging: Structured logging with appropriate levels and context

Issues Found

🟡 Medium: Missing migration guide

  • Issue: No documentation for upgrading existing deployments
  • Impact: Operators may not know how to enable feature safely
  • Recommendation: Add migration guide covering:
    • Genesis parameter changes (DAEpochForcedInclusion)
    • Config changes (forced inclusion namespace)
    • Backward compatibility considerations
    • Recommended epoch size for different DA layers

🟢 Minor: Inconsistent terminology

  • Issue: Mixed use of "forced inclusion" vs "force inclusion" (note "d")
  • Examples:
    • ErrForceInclusionNotConfigured (no 'd')
    • verifyForcedInclusionTxs (with 'd')
    • Comments use both forms
  • Recommendation: Standardize on "forced inclusion" throughout

🟢 Minor: Missing operational metrics

  • Issue: No Prometheus metrics for:
    • Forced inclusion tx count per epoch
    • Pending forced inclusion queue depth
    • Based sequencer queue size
    • Forced inclusion verification failures
  • Recommendation: Add metrics for operational visibility
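A sketch of what these could look like with the Prometheus client (metric names are suggestions, not existing ev-node metrics):

```go
import "github.com/prometheus/client_golang/prometheus"

var (
	forcedTxsPerEpoch = prometheus.NewHistogram(prometheus.HistogramOpts{
		Name: "evnode_forced_inclusion_txs_per_epoch",
		Help: "Forced inclusion txs retrieved per DA epoch.",
	})
	pendingForcedTxs = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "evnode_forced_inclusion_pending_queue_depth",
		Help: "Forced inclusion txs waiting to be included in a batch.",
	})
	verificationFailures = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "evnode_forced_inclusion_verification_failures_total",
		Help: "Blocks rejected because forced inclusion txs were missing.",
	})
)

func init() {
	prometheus.MustRegister(forcedTxsPerEpoch, pendingForcedTxs, verificationFailures)
}
```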

🟢 Minor: TODO comments in production code

  • Location: sequencers/common/size_validation.go:3
// TODO(@julienrbrt): technically we may need to check for block gas as well
  • Recommendation: Either implement or create GitHub issue and reference it

7. Concurrency & Race Conditions 🔄

Issues Found

🔴 Critical: BasedSequencer data race (detailed above in Architecture section)

🟡 Medium: Single sequencer pending queue race

  • Location: sequencers/single/sequencer.go:58,285-358
  • Issue: pendingForcedInclusionTxs accessed without explicit locking
  • Analysis: Likely safe because only GetNextBatch modifies it (single-threaded executor), but not explicitly documented
  • Recommendation: Add comment documenting thread-safety assumptions or add mutex for defensive programming

8. Additional Observations

Breaking Changes

  • ✅ Well documented in ADR
  • ✅ Backward compatible (feature disabled by default)
  • ⚠️ Genesis parameter DAEpochForcedInclusion is consensus-critical (can't be changed post-genesis)

Follow-up Work Identified

Good capture of follow-up items in comments:

  1. Rename apps to remove single/ prefix
  2. Extend execution interface for congestion avoidance
  3. Parallelize DA epoch fetching
  4. Simplify DA requests after [EPIC] Remove DA Interface #2796
  5. Handle sequencer offline/online edge cases

Summary & Recommendations

Must Fix Before Merge

  1. 🔴 BasedSequencer race condition: Add mutex protection for txQueue
  2. 🔴 Syncer verification error handling: Return all verification errors, not just malicious proposer
  3. 🔴 Add concurrency tests: Verify thread-safety with race detector

Should Fix Before Enabling Feature

  1. 🟡 Review hashTx() implementation for collision resistance
  2. 🟡 Add comprehensive forced inclusion verification tests
  3. 🟡 Add context cancellation in epoch fetch loop
  4. 🟡 Align AbsoluteMaxBlobSize with documentation (1.5MB vs 2MB)
  5. 🟡 Add migration documentation

Nice to Have

  1. 🟢 Pre-allocate maps/slices with known sizes
  2. 🟢 Add operational metrics
  3. 🟢 Bound pendingForcedInclusionTxs size
  4. 🟢 Add godoc to exported types
  5. 🟢 Standardize "forced inclusion" terminology
  6. 🟢 Convert TODOs to GitHub issues

Overall Assessment

This is a well-designed and carefully implemented feature. The architecture is sound, the code quality is high, and the ADR provides excellent context. The main concerns are:

  • Thread-safety issues in BasedSequencer (critical)
  • Error handling in verification (medium)
  • Test coverage for concurrency and verification scenarios (medium)

With the critical issues addressed, this will be a solid foundation for censorship resistance in the rollup.

Estimated Remaining Work: 2-3 days to address critical and high-priority issues.


@julienrbrt changed the title from "[WIP] feat: forced inclusion for executor" to "feat: forced inclusion" on Nov 5, 2025
@alpe alpe left a comment


Very nice start!
Can you elaborate on why you decided on a separate namespace for the force inclusion txs? The consumers have to read both namespaces anyway to stay up to date.


event.StartDaHeight = epochHeight
event.Txs = append(event.Txs, result.Data...)
}
Contributor

We need to prepare for malicious content. Let's exit the loop early when a tx size threshold is reached. This can be a multiple of common.DefaultMaxBlobSize used by the executor.

Member Author

Makes sense for the height check, yes! However, I was thinking of doing no other checks and letting the execution client deal with gibberish data (this is why I added that as a requirement in the execution interface description).

Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

If we want to keep raw TX data in the namespace, there is not much we can do here to validate, indeed. A size check is an easy win but more would require extending the executor interface for a checkTX.
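A hypothetical shape for that extension (not part of the current execution interface):

```go
import "context"

// TxChecker is a hypothetical optional extension of the execution interface:
// engines implementing it would let ev-node drop invalid forced txs at fetch
// time instead of carrying them into a block.
type TxChecker interface {
	// CheckTx performs stateless validation only (decoding, size, basic
	// sanity); execution-time validation stays in the execution client.
	CheckTx(ctx context.Context, tx []byte) error
}
```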

Member Author

I agree, and this may actually be required to avoid congestion issues and losing txs.

@julienrbrt

> Can you elaborate why you decided on a separate namespace for the force inclusion TX? The consumers have to read both ns anyway to stay up to date

This was a suggestion. Personally I think it makes sense, as we are filtering what comes into that namespace at the fetching level, directly in ev-node. What is posted in the forced inclusion namespace is handled directly by the execution client; ev-node only passes down bytes.

@julienrbrt julienrbrt marked this pull request as ready for review November 6, 2025 20:46
@julienrbrt julienrbrt marked this pull request as draft November 6, 2025 20:47
github-actions bot commented Nov 10, 2025

PR Preview Action v1.6.3

🚀 View preview at
https://evstack.github.io/docs-preview/pr-2797/

Built to branch main at 2025-11-21 13:29 UTC.
Preview will be ready when the GitHub Pages deployment is complete.


codecov bot commented Nov 10, 2025

Codecov Report

❌ Patch coverage is 76.37131% with 112 lines in your changes missing coverage. Please review.
✅ Project coverage is 65.45%. Comparing base (178b4fe) to head (a18e75f).

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| sequencers/single/sequencer.go | 72.38% | 28 Missing and 9 partials ⚠️ |
| block/internal/da/forced_inclusion_retriever.go | 79.51% | 12 Missing and 5 partials ⚠️ |
| block/internal/executing/executor.go | 43.33% | 11 Missing and 6 partials ⚠️ |
| sequencers/based/sequencer.go | 81.17% | 11 Missing and 5 partials ⚠️ |
| block/internal/syncing/syncer.go | 80.00% | 8 Missing and 1 partial ⚠️ |
| block/components.go | 0.00% | 6 Missing and 1 partial ⚠️ |
| core/sequencer/dummy.go | 0.00% | 3 Missing ⚠️ |
| block/public.go | 75.00% | 2 Missing ⚠️ |
| pkg/config/config.go | 81.81% | 1 Missing and 1 partial ⚠️ |
| pkg/genesis/genesis.go | 75.00% | 1 Missing and 1 partial ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2797      +/-   ##
==========================================
+ Coverage   64.76%   65.45%   +0.68%     
==========================================
  Files          81       85       +4     
  Lines        7328     7755     +427     
==========================================
+ Hits         4746     5076     +330     
- Misses       2041     2114      +73     
- Partials      541      565      +24     
| Flag | Coverage Δ |
| --- | --- |
| combined | 65.45% <76.37%> (+0.68%) ⬆️ |

Flags with carried forward coverage won't be shown.


@julienrbrt julienrbrt marked this pull request as ready for review November 10, 2025 16:14
github-actions bot commented Nov 10, 2025

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

| Build | Format | Lint | Breaking | Updated (UTC) |
| --- | --- | --- | --- | --- |
| ✅ passed | ⏩ skipped | ✅ passed | ✅ passed | Nov 21, 2025, 1:29 PM |

@julienrbrt

List of improvements to do in follow-ups:

  1. Improve DA fetching by parallelizing epoch fetching
  2. Simplify DA requests after [EPIC] Remove DA Interface #2796: fetch the latest DA height instead of checking epoch boundaries
  3. Solve the edge case where the proposer misses blocks and comes back online after forced-included blocks were published

@julienrbrt julienrbrt marked this pull request as draft November 10, 2025 16:19

julienrbrt commented Nov 11, 2025

We discussed the above in the standup (#2797 (comment)), and a few ideas came up.

1–2. When making the call async, we need to make sure the executor and full node stay in sync on an epoch. This can be done easily by keeping an epoch a few blocks behind the actual DA height (see the sketch after this list).

  • We need to make sure all heights of that epoch are available when we fetch the epoch (there is already code for this)
  • We need to scale that block window based on an average fetching time (the larger the DA epoch, the larger the window)

3. We can re-use some code from [WIP] HA failover #2814 to automate node restarting (syncing -> based sequencer)
    • When the sequencer comes back online and has missed an epoch, it needs to sync up to the head of the DA layer
    • Based sequencers must check the forced inclusion transaction namespace for a synced checkpoint from the DA layer, and restart as a sync node if one is found (@julienrbrt -- I picked this solution because the alternative would require fetching 2 namespaces instead of 1; another option is to have the sequencer fetch the header namespace only at the end of the epoch)
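A rough sketch of the lagged epoch cutoff idea (the function name and scaling factor are illustrative, not from the PR):

```go
// safeEpochEnd lags the epoch cutoff behind the DA head so that every height
// in the epoch is already available when the fetch starts. The window grows
// with the epoch size, as a crude proxy for average fetching time.
func safeEpochEnd(daHead, epochSize uint64) uint64 {
	window := epochSize / 10 // illustrative scaling factor, to be tuned
	if window == 0 {
		window = 1
	}
	if daHead <= window {
		return 0
	}
	return daHead - window
}
```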

@julienrbrt julienrbrt marked this pull request as ready for review November 11, 2025 16:29
@julienrbrt julienrbrt marked this pull request as draft November 11, 2025 16:58
@alpe alpe left a comment

Thanks for answering all my questions and comments.
There is still the TODO in the code to store unprocessed direct txs when the max block size is reached.



julienrbrt added a commit that referenced this pull request Nov 13, 2025
we decided to remove the sequencer go.mod, as ev-node can provide the sequencer implementation directly (sequencers/single was already depending on ev-node anyway)

this means no go.mod needs to be added for the new based sequencers in #2797
@julienrbrt julienrbrt marked this pull request as ready for review November 13, 2025 10:58
@julienrbrt

Once this PR is merged, we should directly after:

In the meantime, I have disabled the feature so it can be merged (0d790ef)

@julienrbrt

FYI the upgrade test will fail until tastora is updated.

Users can submit transactions in two ways:

### Systems Affected
1. **Normal Path**: Submit to sequencer's mempool/RPC (fast, low cost)
@damiannolan damiannolan Nov 19, 2025

Is the mempool not used app-side for ABCI? Does ev-node have a mempool? Or does "sequencer's mempool/RPC" here refer to the sequencer node as a single entity, even if it's running the app out-of-process, as with evm?

From what I understand, the reth/evm mempool is used for evm, and the sequencer queries the pending tx pool/queue in GetTxs

Comment on lines +531 to +541
### Full Node Verification Flow

```
1. Receive block from DA or P2P
2. Before applying block:
a. Fetch forced inclusion txs from DA at block's DA height
b. Build map of transactions in block
c. Verify all forced txs are in block
d. If missing: reject block, flag malicious proposer
3. Apply block if verification passes
```
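In Go, step 2 of this flow could look roughly like the sketch below (assuming SHA-256 content hashes for tx identity; errMaliciousProposer mirrors the sentinel error used by the syncer):

```go
import (
	"crypto/sha256"
	"fmt"
)

// verifyForcedTxs checks that every forced inclusion tx fetched from DA is
// present in the block's tx list, comparing by content hash.
func verifyForcedTxs(forcedTxs, blockTxs [][]byte) error {
	seen := make(map[[32]byte]struct{}, len(blockTxs))
	for _, tx := range blockTxs {
		seen[sha256.Sum256(tx)] = struct{}{}
	}
	for _, tx := range forcedTxs {
		if _, ok := seen[sha256.Sum256(tx)]; !ok {
			return fmt.Errorf("forced tx missing from block: %w", errMaliciousProposer)
		}
	}
	return nil
}
```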
@damiannolan damiannolan Nov 19, 2025

This makes sense! I think my mental model was assuming that ev-node did not need to be run with ev-reth for full nodes. But on reflection I think I was incorrect or misunderstood.

I assume ev-node must always be run even for evm stack full nodes but with --evnode.node.aggregator=false.

@julienrbrt julienrbrt Nov 19, 2025

Yes, a full node runs the whole stack. Light nodes, on the other hand, just fetch headers.

- Only at epoch boundaries
- Scan epoch range for forced transactions
3. Get batch from mempool queue
4. Prepend forced txs to batch
Contributor

So if we wanted to zk-prove forced inclusion txs, we could query the forced inclusion namespace at each epoch and prepend them to the tx list that we compare with the execution client's state transition function 🤔

github-merge-queue bot pushed a commit that referenced this pull request Nov 21, 2025
Rename `evm-single` to `evm` and `grpc-single` to `evgrpc` for clarity.

ref: #2797 (comment)
@julienrbrt julienrbrt changed the base branch from main to julien/extract-fi November 21, 2025 12:15
github-merge-queue bot pushed a commit that referenced this pull request Nov 21, 2025
Extract some logic from #2797.
Those refactors were done to ease force inclusion integration, but they can be extracted to be merged sooner.
Base automatically changed from julien/extract-fi to main November 21, 2025 13:14