2 changes: 1 addition & 1 deletion beacon_node/beacon_chain/src/naive_aggregation_pool.rs
@@ -19,7 +19,7 @@ const SLOTS_RETAINED: usize = 3;
/// The maximum number of distinct `AttestationData` that will be stored in each slot.
///
/// This is a DoS protection measure.
-const MAX_ATTESTATIONS_PER_SLOT: usize = 16_384;
+const MAX_ATTESTATIONS_PER_SLOT: usize = 32_768;

/// Returned upon successfully inserting an item into the pool.
#[derive(Debug, PartialEq)]
12 changes: 6 additions & 6 deletions beacon_node/network/src/beacon_processor/mod.rs
@@ -83,7 +83,7 @@ pub use worker::{ChainSegmentProcessId, GossipAggregatePackage, GossipAttestatio
/// The maximum size of the channel for work events to the `BeaconProcessor`.
///
/// Setting this too low will cause consensus messages to be dropped.
-pub const MAX_WORK_EVENT_QUEUE_LEN: usize = 16_384;
+pub const MAX_WORK_EVENT_QUEUE_LEN: usize = 32_768;

/// The maximum size of the channel for idle events to the `BeaconProcessor`.
///
@@ -92,15 +92,15 @@ pub const MAX_WORK_EVENT_QUEUE_LEN: usize = 16_384;
const MAX_IDLE_QUEUE_LEN: usize = 16_384;

/// The maximum size of the channel for re-processing work events.
-const MAX_SCHEDULED_WORK_QUEUE_LEN: usize = 3 * MAX_WORK_EVENT_QUEUE_LEN / 4;
+const MAX_SCHEDULED_WORK_QUEUE_LEN: usize = MAX_WORK_EVENT_QUEUE_LEN;
Review comment (Member):
Similarly, I wonder if we should keep this at 3/4?
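The arithmetic behind this question is easy to check: with the bump to 32_768, keeping the old 3/4 ratio would give a scheduled-work queue of 24_576 rather than 32_768. A standalone Rust sketch (the const names mirror those in the diff, but this is an illustration, not the Lighthouse code itself):

```rust
// Hypothetical sketch: names mirror beacon_processor/mod.rs but this is standalone.
const MAX_WORK_EVENT_QUEUE_LEN: usize = 32_768;

// Keeping the previous 3/4 ratio after the bump would give:
const SCHEDULED_AT_THREE_QUARTERS: usize = 3 * MAX_WORK_EVENT_QUEUE_LEN / 4;

fn main() {
    // 3 * 32_768 / 4 = 24_576, versus 32_768 if the ratio is dropped.
    assert_eq!(SCHEDULED_AT_THREE_QUARTERS, 24_576);
    println!("{}", SCHEDULED_AT_THREE_QUARTERS);
}
```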


/// The maximum number of queued `Attestation` objects that will be stored before we start dropping
/// them.
-const MAX_UNAGGREGATED_ATTESTATION_QUEUE_LEN: usize = 16_384;
+const MAX_UNAGGREGATED_ATTESTATION_QUEUE_LEN: usize = 32_768;

-/// The maximum number of queued `Attestation` objects that will be stored before we start dropping
-/// them.
-const MAX_UNAGGREGATED_ATTESTATION_REPROCESS_QUEUE_LEN: usize = 8_192;
+/// The maximum number of queued `Attestation` objects that reference an unknown
+/// block that will be stored before we start dropping them.
+const MAX_UNAGGREGATED_ATTESTATION_REPROCESS_QUEUE_LEN: usize = 32_768;
Review comment (Member):
I wonder whether we should keep the reprocess queue limit at half the regular limit, i.e. only bump it to 16K.

I'd like to have some good basis for this decision (queueing theory?) but I don't, other than "if it ain't broke, don't fix it".


/// The maximum number of queued `SignedAggregateAndProof` objects that will be stored before we
/// start dropping them.
@@ -60,7 +60,7 @@ pub const QUEUED_RPC_BLOCK_DELAY: Duration = Duration::from_secs(3);
const MAXIMUM_QUEUED_BLOCKS: usize = 16;

/// How many attestations we keep before new ones get dropped.
-const MAXIMUM_QUEUED_ATTESTATIONS: usize = 16_384;
+const MAXIMUM_QUEUED_ATTESTATIONS: usize = 32_768;

/// How many light client updates we keep before new ones get dropped.
const MAXIMUM_QUEUED_LIGHT_CLIENT_UPDATES: usize = 128;