Conversation
```rust
//!
//! - The `DefaultCoreSelector` implements a round-robin selection on the cores that can be
//!   occupied by the parachain at the very next relay parent. This is the equivalent to what all
//!   parachains on production networks have been using so far.
```
Hmm. Shall we rename this as part of this PR? It seems like `LookaheadCoreSelector` should be the "default", as we expect any new parachain to use asynchronous backing?
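For context, a minimal sketch of what picking a core selection policy might look like on the runtime side. The selector type names are taken from the diff above; the exact module paths and the `SelectCore` associated type are assumptions about the `cumulus_pallet_parachain_system` API, not verified signatures:

```rust
// Sketch (assumed API): the core selection policy is chosen in the
// parachain runtime's parachain-system configuration.
impl cumulus_pallet_parachain_system::Config for Runtime {
    // ...other associated types elided...

    // Round-robin over the cores assigned at the next relay parent:
    // type SelectCore = cumulus_pallet_parachain_system::DefaultCoreSelector<Runtime>;

    // Or, per the suggestion above, the lookahead variant as the default
    // for parachains already running asynchronous backing:
    type SelectCore = cumulus_pallet_parachain_system::LookaheadCoreSelector<Runtime>;
}
```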
```rust
//! <div class="warning">If you configure a velocity which is different from the number of assigned
//! cores, the measured velocity in practice will be the minimum of these two. However, be mindful
//! that if the velocity is higher than the number of assigned cores, it's possible that
//! <a href="https://github.com/paritytech/polkadot-sdk/issues/6667">only a subset of the collator set will be authoring blocks.</a></div>
```
The question is why we need to configure a velocity at all; it seems redundant.
Once the slot-based collator can produce multiple blocks per slot, we should also add that we recommend slot durations of at least 6s, preferably even 12s (better censorship resistance).
```rust
//! `overseer_handle` and `relay_chain_slot_duration` params passed to `start_consensus` and pass
//! in the `slot_based_handle`.
//!
//! ### Phase 2 - Configure core selection policy in the parachain runtime
```
Phase 2 assumes the candidate receipt v2 feature bit is enabled. This phase will change after the feature bit is enabled on all networks and a form of #6939 is merged.
```rust
use cumulus_client_collator::service::CollatorService;
#[docify::export(lookahead_collator)]
use cumulus_client_consensus_aura::collators::lookahead::{self as aura, Params as AuraParams};
use cumulus_client_consensus_aura::collators::slot_based::{
```
Changes in this file will be rolled back before merge, but they currently showcase what a parachain team using the template would need to do on the node side to use elastic scaling.
```rust
//!
//! ### Phase 3 - Configure maximum scaling factor in the runtime
//!
//! First of all, you need to decide the upper limit to how many parachain blocks you need to
```
Actually, the thinking is the other way around: what is the minimum target block time? It is then no longer necessary to configure any other parameters manually, as you can compute them from this value.
You can also make all the calculations based on the velocity, which is what I describe here.
I can see what is described here, but I want a better DX.
As you've noticed recently, people didn't ask "how many parachain blocks can I produce per relay chain block?"; instead they ask "how can I get 500ms blocks?", because that is what their end users care about. The velocity of the parachain is largely an implementation detail.
With that being said, we can then remove all of the details about velocity and the concern that they need to compute all sorts of other constants.
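The block-time-first calculation proposed here is simple arithmetic. A sketch with a hypothetical helper (the function name and the idea of deriving velocity from a target block time are illustrative, not an existing API):

```rust
/// Hypothetical helper: derive the velocity (parachain blocks per relay
/// chain block) from the user-facing target block time.
fn velocity_for_block_time(relay_slot_ms: u64, target_block_time_ms: u64) -> u64 {
    relay_slot_ms / target_block_time_ms
}

fn main() {
    // "How can I get 500ms blocks?" on a 6s relay chain slot => velocity 12.
    assert_eq!(velocity_for_block_time(6000, 500), 12);
    // 2s blocks => velocity 3, i.e. 3 cores fully used per relay chain block.
    assert_eq!(velocity_for_block_time(6000, 2000), 3);
}
```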
```rust
//!
//! ## Current constraints
//!
//! Elastic scaling is still considered experimental software, so stability is not guaranteed.
```
After launching on Polkadot this is not true.
True, will update when that is the case
Let's remove it at this point, since we are really close :D
```rust
//! duration of 2 seconds per block.** Using the current implementation with multiple collators
//! adds additional latency to the block production pipeline. Assuming block execution takes
//! about the same as authorship, the additional overhead is equal the duration of the authorship
//! plus the block announcement. Each collator must first import the previous block before
//! authoring a new one, so it is clear that the highest throughput can be achieved using a
//! single collator. Experiments show that the peak performance using more than one collator
//! (measured up to 10 collators) is utilising 2 cores with authorship time of 1.3 seconds per
//! block, which leaves 400ms for networking overhead. This would allow for 2.6 seconds of
//! execution, compared to the 2 seconds async backing enabled.
//! The development required for enabling maximum compute throughput for multiple collators is tracked by
//! [this issue](https://github.com/paritytech/polkadot-sdk/issues/5190).
```
I think we can do much better in terms of structure here vs. a large blob of text, which is not that easy to read and doesn't highlight the important information.
I rewrote this section. Let me know how it looks.
```rust
//! this should obviously only be used for testing purposes, due to the clear lack of decentralisation
//! and resilience. Experiments show that the peak compute throughput using more than one collator
//! (measured up to 10 collators) is utilising 2 cores with authorship time of 1.3 seconds per block,
//! which leaves 400ms for networking overhead. This would allow for 2.6 seconds of execution, compared
```
Let's add the formula as a function of latency to compute the max usable execution time.
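A sketch of what that formula could look like, derived from the constraint stated elsewhere in the guide (`2 * authorship duration + network overheads <= slot time`); the helper name is hypothetical:

```rust
/// Maximum usable authorship/execution time per parachain block when each
/// collator must import the previous block before authoring the next one.
/// Derived from: 2 * authorship + network_overhead <= slot time.
fn max_execution_ms(slot_ms: u64, network_overhead_ms: u64) -> u64 {
    slot_ms.saturating_sub(network_overhead_ms) / 2
}

fn main() {
    // The experiment quoted above: 2 cores => a 3s parachain slot, with
    // 400ms of networking overhead, leaves 1.3s of authorship per block
    // (2.6s of execution across the two blocks).
    assert_eq!(max_execution_ms(3000, 400), 1300);
}
```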
**skunert** left a comment: Overall looking pretty good!
```rust
//! # Enable elastic scaling for a parachain
//!
//! <div class="warning">This guide assumes full familiarity with Asynchronous Backing and its
//! terminology, as defined in <a href="https://wiki.polkadot.network/docs/maintain-guides-async-backing">the Polkadot Wiki</a>.
```
```rust
//! the relay chain. Therefore, assuming the full 2 seconds are used, a parachain can only
//! utilise at most 3 cores in a relay chain slot of 6 seconds. If the full execution time is not
//! being used or if all collators are able to author blocks faster than the reference hardware,
//! higher core counts can be achieved.
```
Suggested change:
```diff
-//! higher core counts can be achieved.
+//! higher core counts can be utilized.
```
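The "at most 3 cores in a 6-second slot" figure quoted above follows from a one-line division. A sketch, with a hypothetical helper name:

```rust
/// Upper bound on usable cores per relay chain slot when every parachain
/// block uses the full execution time on reference hardware.
fn max_cores(relay_slot_ms: u64, execution_ms: u64) -> u64 {
    relay_slot_ms / execution_ms
}

fn main() {
    // Full 2s execution in a 6s relay chain slot => at most 3 cores.
    assert_eq!(max_cores(6000, 2000), 3);
}
```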
```rust
//! 2 seconds building the block and announces it. The next collator fetches and executes it, wasting
//! 2 seconds plus the block fetching duration out of its 2 second slot. Therefore, the next collator
//! cannot build a subsequent block in due time and ends up authoring a fork, which defeats the purpose
//! of elastic scaling. The highest throughput can therefore be achieved with a single collator but
```
Suggested change:
```diff
-//! of elastic scaling. The highest throughput can therefore be achieved with a single collator but
+//! of elastic scaling. The highest throughput can therefore be achieved with a single collator, but
```
```rust
//! of elastic scaling. The highest throughput can therefore be achieved with a single collator but
//! this should obviously only be used for testing purposes, due to the clear lack of decentralisation
//! and resilience. In other words, to fully utilise the cores, the following formula needs to be
//! satisfied: `2 * authorship duration + network overheads <= slot time`. For example, you can use
```
From a user's perspective, I think this paragraph is a bit dense. What do you think about making it a bit shorter, stating that we need some import time between blocks? We could have the full details at the end, maybe. Not insisting, it's just an idea.
```rust
//!
//! - Ensure Asynchronous Backing (6-second blocks) has been enabled on the parachain using
//!   [`crate::guides::async_backing_guide`].
//! - Ensure the `AsyncBackingParams.max_candidate_depth` value is configured to a value that is at
//!   least double the maximum targeted parachain velocity. For example, if the parachain will build
//!   at most 3 candidates per relay chain block, the `max_candidate_depth` should be at least 6.
//! - Ensure enough coretime is assigned to the parachain.
//! - Ensure the `CandidateReceiptV2` node feature is enabled on the relay chain configuration (node
```
This sounds a bit technical; can we remove this? All relays should support it at this point.
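The `max_candidate_depth` rule quoted above ("at least double the maximum targeted velocity") is easy to mirror in code. A sketch with a hypothetical helper name:

```rust
/// Minimum `max_candidate_depth` for a given target velocity, per the
/// rule quoted in the guide: at least double the maximum targeted
/// parachain velocity.
fn min_candidate_depth(max_velocity: u32) -> u32 {
    2 * max_velocity
}

fn main() {
    // At most 3 candidates per relay chain block => depth of at least 6.
    assert_eq!(min_candidate_depth(3), 6);
}
```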
```rust
//!
//! <div class="warning">Phase 1 is NOT needed if using the <code>polkadot-parachain</code> or
//! <code>polkadot-omni-node</code> binary, or <code>polkadot-omni-node-lib</code> built from the
//! latest polkadot-sdk release! Simply pass the <code>--experimental-use-slot-based</code>
```
Suggested change:
```diff
-//! latest polkadot-sdk release! Simply pass the <code>--experimental-use-slot-based</code>
+//! latest polkadot-sdk release! Simply pass the <code>--authoring slot-based</code>
```
```rust
//! ```ignore
//! type ParachainBlockImport = TParachainBlockImport<
//!     Block,
//!     SlotBasedBlockImport<Block, Arc<ParachainClient>, ParachainClient>,
```
I think we don't need `SlotBasedBlockImport` anymore if we only support 6s slots or more. WDYT @bkchr?
Superseded by #9677
Resolves #5050

Updates the elastic scaling guide, taking into consideration:

This PR should not be merged until:

1. The `CandidateReceiptV2` node feature bit is enabled on all networks
2. The `experimental-ump-signals` feature of the parachain-system pallet is turned on by default (which can only be done after 1)

TODO: