Conversation

@ajsutton ajsutton commented Feb 5, 2024

Description

  • Reduce the min size for large preimages in devnet to make it easier to reach the limit.
  • Provide a buffer between target and max batch size to avoid the batcher splitting the large tx across multiple smaller frames.
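The second bullet can be illustrated with a small sketch. This is a hypothetical model, not the actual batcher code: assume the batcher aims for a target batch size but the final compressed output can overshoot slightly, and the output is then cut into frames of at most the max size. With max equal to target, a small overshoot forces a second frame; a buffer absorbs it.

```go
package main

import "fmt"

// frameCount models how many frames a batch of batchSize bytes needs when
// each frame holds at most maxFrameSize bytes (ceiling division).
func frameCount(batchSize, maxFrameSize uint64) uint64 {
	return (batchSize + maxFrameSize - 1) / maxFrameSize
}

func main() {
	const target = 100_000
	overshoot := uint64(target + 500) // batch slightly exceeds the target

	// No buffer: max == target, so the overshoot splits into two frames.
	fmt.Println(frameCount(overshoot, target)) // 2

	// With a buffer between target and max, the same batch fits one frame.
	fmt.Println(frameCount(overshoot, target+30_000)) // 1
}
```

The sizes here are made up for illustration; the point is only that max must exceed target by more than the expected overshoot for a large tx to stay in a single frame.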

Tests

Ran the test locally repeatedly for a few hours with no issues.

@ajsutton ajsutton requested review from a team as code owners February 5, 2024 02:59
@ajsutton ajsutton requested review from mds1 and mslipper February 5, 2024 02:59
coderabbitai bot commented Feb 5, 2024

Walkthrough

The changes improve preimage handling and fault proof testing in the dispute game package: the minimum large-preimage size is reduced so tests reach the limit faster, last commitments are now modified through a new WithLastCommitment function, and the batcher test configuration gains a buffer between its target and max batch sizes so large transactions are not split across multiple smaller frames.

Changes

  • .../disputegame/preimage/preimage_helper.go: reduced MinPreimageSize to 10000; added the WithLastCommitment function.
  • .../faultproofs/challenge_preimage_test.go: replaced WithReplacedCommitment with WithLastCommitment in TestChallengeLargePreimages_ChallengeLast.
  • .../faultproofs/output_cannon_test.go: added the preimage import; used preimage.MinPreimageSize instead of a hardcoded value in TestOutputCannonStepWithLargePreimage.
  • .../faultproofs/util.go: adjusted cfg.BatcherTargetL1TxSizeBytes and cfg.BatcherMaxL1TxSizeBytes in the withLargeBatches function.
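The WithLastCommitment helper named above suggests a modifier that replaces only the final commitment in a list. The following is a hypothetical sketch of that pattern; the Commitment and InputModifier types and the exact signature are assumptions, not the real preimage_helper.go code.

```go
package main

import "fmt"

// Commitment stands in for a 32-byte preimage commitment.
type Commitment [32]byte

// InputModifier mutates a slice of commitments in place.
type InputModifier func(commitments []Commitment)

// WithLastCommitment returns a modifier that overwrites only the final
// commitment, leaving earlier entries untouched.
func WithLastCommitment(c Commitment) InputModifier {
	return func(commitments []Commitment) {
		if len(commitments) > 0 {
			commitments[len(commitments)-1] = c
		}
	}
}

func main() {
	commitments := make([]Commitment, 3)
	bad := Commitment{0xde, 0xad}
	WithLastCommitment(bad)(commitments)
	fmt.Println(commitments[2] == bad)            // true: last entry replaced
	fmt.Println(commitments[0] == (Commitment{})) // true: others untouched
}
```

Targeting only the last commitment is what a challenge-the-last-commitment test (like TestChallengeLargePreimages_ChallengeLast) needs, which is presumably why the more general WithReplacedCommitment was swapped out.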


@ajsutton ajsutton requested review from refcell and removed request for mds1 and mslipper February 5, 2024 02:59

codecov bot commented Feb 5, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparing base (afb2048, 27.85%) to head (b456799, 26.64%): project coverage decreased by 1.22%.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #9346      +/-   ##
===========================================
- Coverage    27.85%   26.64%   -1.22%     
===========================================
  Files          167      135      -32     
  Lines         7362     6503     -859     
  Branches      1272     1120     -152     
===========================================
- Hits          2051     1733     -318     
+ Misses        5190     4649     -541     
  Partials       121      121              
Flag                      Coverage Δ
cannon-go-tests           62.56% <ø> (ø)
chain-mon-tests           27.14% <ø> (ø)
common-ts-tests           ?
contracts-bedrock-tests   0.65% <ø> (ø)
contracts-ts-tests        12.25% <ø> (ø)
core-utils-tests          ?
sdk-next-tests            41.52% <ø> (ø)
sdk-tests                 41.52% <ø> (ø)

Flags with carried forward coverage won't be shown.

see 32 files with indirect coverage changes


@refcell refcell left a comment


Nice!

@refcell refcell added this pull request to the merge queue Feb 5, 2024
Merged via the queue into develop with commit a849efa Feb 5, 2024
@refcell refcell deleted the aj/large-preimage-flaky branch February 5, 2024 15:14
spacesailor24 pushed a commit that referenced this pull request Feb 6, 2024
* Reduce the min size for large preimages in devnet to make it easier to reach the limit.
* Provide a buffer between target and max batch size to avoid the batcher splitting the large tx across multiple smaller frames.
spacesailor24 pushed further commits referencing this pull request between Feb 7 and Feb 9, 2024, each with the same message.