Add storage bounds for pallet staking and clean up deprecated non-paged exposure storages #6445
```diff
 type MaxInvulnerables = ConstU32<20>;
 type MaxRewardPagesPerValidator = ConstU32<20>;
-type MaxValidatorsCount = ConstU32<300>;
+type MaxValidatorsCount = MaxAuthorities;
```
This is a huge jump; is there a reason why `MaxAuthorities` is so big in the test runtime?
Oh, right! Then it's probably better to revert the change in that case, also adding a comment: 6c9806c
gui1117 left a comment
Overall looks good, but there are still some comments to resolve.
substrate/frame/staking/src/lib.rs (Outdated)
```rust
pub fn from_clipped(exposure: Exposure<AccountId, Balance>) -> Result<Self, ()> {
    let old_exposures = exposure.others.len();
    let others = WeakBoundedVec::try_from(exposure.others).unwrap_or_default();
    defensive_assert!(old_exposures == others.len(), "Too many exposures for a page");
```
This function is not used; we can make it better or remove it.
```diff
-pub fn from_clipped(exposure: Exposure<AccountId, Balance>) -> Result<Self, ()> {
-    let old_exposures = exposure.others.len();
-    let others = WeakBoundedVec::try_from(exposure.others).unwrap_or_default();
-    defensive_assert!(old_exposures == others.len(), "Too many exposures for a page");
+pub fn try_from_clipped(exposure: Exposure<AccountId, Balance>) -> Result<Self, ()> {
+    let others = WeakBoundedVec::try_from(exposure.others).map_err(|_| ())?;
```
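The difference between the two variants can be sketched in isolation: the `unwrap_or_default()` version silently produces an empty page when the input is too big, while the suggested `try_` version surfaces the error. `MAX_EXPOSURE_PAGE_SIZE` and the plain `Vec<u32>` below are illustrative stand-ins for the pallet's `WeakBoundedVec` bound and exposure types, not the real API.

```rust
// Illustrative page bound; the pallet's real bound comes from its config.
const MAX_EXPOSURE_PAGE_SIZE: usize = 4;

// Mirrors `WeakBoundedVec::try_from(...).map_err(|_| ())?`:
// reject the whole conversion if the bound would be exceeded,
// instead of silently truncating or defaulting.
fn try_from_clipped(others: Vec<u32>) -> Result<Vec<u32>, ()> {
    if others.len() > MAX_EXPOSURE_PAGE_SIZE {
        return Err(());
    }
    Ok(others)
}

fn main() {
    assert!(try_from_clipped(vec![1, 2, 3]).is_ok());
    // `unwrap_or_default()` would have produced an empty page here;
    // the `try_` variant surfaces the error instead.
    assert!(try_from_clipped(vec![0; 10]).is_err());
}
```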
```diff
-claimed_pages.push(page);
+// try to add page to claimed entries
+if claimed_pages.try_push(page).is_err() {
+    defensive!("Limit reached for maximum number of pages.");
```
The proof should be more precise: in what circumstances can this limit be reached, and why is it impossible in practice?
What do you mean by "the proof"?
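The `try_push` pattern under discussion can be modelled with plain types: on overflow the page is dropped and a defensive message is logged, rather than panicking. `MAX_PAGES` and the `Vec<u32>` are illustrative stand-ins for the pallet's bound and `WeakBoundedVec`.

```rust
// Illustrative bound on claimed-reward pages per validator.
const MAX_PAGES: usize = 2;

// Analogue of `BoundedVec::try_push`: fail instead of growing past the bound.
fn try_push(pages: &mut Vec<u32>, page: u32) -> Result<(), u32> {
    if pages.len() >= MAX_PAGES {
        return Err(page);
    }
    pages.push(page);
    Ok(())
}

fn main() {
    let mut claimed_pages = Vec::new();
    assert!(try_push(&mut claimed_pages, 0).is_ok());
    assert!(try_push(&mut claimed_pages, 1).is_ok());
    // Third push exceeds the bound: log defensively and skip, don't panic.
    if try_push(&mut claimed_pages, 2).is_err() {
        eprintln!("Limit reached for maximum number of pages.");
    }
    assert_eq!(claimed_pages.len(), MAX_PAGES);
}
```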
```rust
let mut eras_stakers_keys =
    v16::ErasStakers::<T>::iter_keys().map(|(k1, _k2)| k1).collect::<Vec<_>>();
eras_stakers_keys.dedup();
for k in eras_stakers_keys {
    let mut removal_result =
        v16::ErasStakers::<T>::clear_prefix(k, u32::max_value(), None);
    while let Some(next_cursor) = removal_result.maybe_cursor {
        removal_result = v16::ErasStakers::<T>::clear_prefix(
            k,
            u32::max_value(),
            Some(&next_cursor[..]),
        );
    }
}
```
This seems to remove all keys in one go; if we don't need a multi-block migration and we are sure that is OK, then we can do:
```diff
-let mut eras_stakers_keys =
-    v16::ErasStakers::<T>::iter_keys().map(|(k1, _k2)| k1).collect::<Vec<_>>();
-eras_stakers_keys.dedup();
-for k in eras_stakers_keys {
-    let mut removal_result =
-        v16::ErasStakers::<T>::clear_prefix(k, u32::max_value(), None);
-    while let Some(next_cursor) = removal_result.maybe_cursor {
-        removal_result = v16::ErasStakers::<T>::clear_prefix(
-            k,
-            u32::max_value(),
-            Some(&next_cursor[..]),
-        );
-    }
-}
+v16::ErasStakers::<T>::clear(u32::max_value(), None);
```
We can be sure for Polkadot or Kusama, but how can we be sure for every other chain? Maybe there are some limits I'm not aware of.
These storage items are already empty on Polkadot and Kusama (you can validate in a UI), and this pallet is not parachain-ready, so it should be fine.
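The reviewer's point is that looping on the cursor until it is `None` drains everything within a single block anyway, so it is equivalent to one unbounded `clear`. A self-contained model of cursor-based prefix clearing, with a `BTreeMap` standing in for the storage map (the `clear_prefix` here is an illustrative analogue, not the FRAME API):

```rust
use std::collections::BTreeMap;

// Model of cursor-based `clear_prefix`: remove up to `limit` entries per
// call and return a cursor (the next remaining key) when more remain.
fn clear_prefix(map: &mut BTreeMap<u32, u32>, limit: usize) -> Option<u32> {
    let keys: Vec<u32> = map.keys().copied().take(limit).collect();
    for k in &keys {
        map.remove(k);
    }
    map.keys().next().copied()
}

fn main() {
    let mut map: BTreeMap<u32, u32> = (0..10).map(|k| (k, k * 2)).collect();
    // Looping until the cursor is `None` (as the migration does) removes
    // every entry in one go, which is why a single `clear` is equivalent.
    while clear_prefix(&mut map, 3).is_some() {}
    assert!(map.is_empty());
}
```

The cursor only matters when the calls are spread across blocks, i.e. in a genuine multi-block migration.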
```rust
let mut eras_stakers_clipped_keys = v16::ErasStakersClipped::<T>::iter_keys()
    .map(|(k1, _k2)| k1)
    .collect::<Vec<_>>();
eras_stakers_clipped_keys.dedup();
for k in eras_stakers_clipped_keys {
    let mut removal_result =
        v16::ErasStakersClipped::<T>::clear_prefix(k, u32::max_value(), None);
    while let Some(next_cursor) = removal_result.maybe_cursor {
        removal_result = v16::ErasStakersClipped::<T>::clear_prefix(
            k,
            u32::max_value(),
            Some(&next_cursor[..]),
        );
    }
}
```
```diff
-let mut eras_stakers_clipped_keys = v16::ErasStakersClipped::<T>::iter_keys()
-    .map(|(k1, _k2)| k1)
-    .collect::<Vec<_>>();
-eras_stakers_clipped_keys.dedup();
-for k in eras_stakers_clipped_keys {
-    let mut removal_result =
-        v16::ErasStakersClipped::<T>::clear_prefix(k, u32::max_value(), None);
-    while let Some(next_cursor) = removal_result.maybe_cursor {
-        removal_result = v16::ErasStakersClipped::<T>::clear_prefix(
-            k,
-            u32::max_value(),
-            Some(&next_cursor[..]),
-        );
-    }
-}
+v16::ErasStakersClipped::<T>::clear(u32::max_value(), None);
```
Same here, are we assuming that this is safe?
```rust
} else {
    log!(info, "v17 applied successfully.");
}
T::DbWeight::get().reads_writes(1, 1)
```
We can count the number of operations as we do them.
How can we do that?
As you iterate and remove/set any storage item, bump a counter (`let mut x = 0;`, and likewise `y`) in this code, and use those as the final `reads_writes(x, y)`.
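The counting pattern the reviewer describes can be sketched as follows. The `Weight` struct and `migrate` function are illustrative stand-ins, not the pallet's real migration or FRAME's `Weight` type:

```rust
// Illustrative stand-in for a runtime weight built from DB operation counts.
struct Weight {
    reads: u64,
    writes: u64,
}

// Count reads/writes while performing the (simulated) migration steps,
// then derive the returned weight from the counters.
fn migrate(items: &[u32]) -> Weight {
    let (mut reads, mut writes) = (0u64, 0u64);
    for _item in items {
        reads += 1; // read the old entry
        writes += 1; // remove / rewrite it
    }
    writes += 1; // bump the storage version once at the end
    Weight { reads, writes }
}

fn main() {
    let w = migrate(&[1, 2, 3]);
    assert_eq!((w.reads, w.writes), (3, 4));
}
```

This replaces the fixed `reads_writes(1, 1)` with a value that reflects how much work the migration actually did.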
Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>
All GitHub workflows were cancelled due to the failure of one of the required jobs.
```rust
///
/// When this value is not set, no limits are enforced.
#[pallet::storage]
pub type MaxValidatorsCount<T> = StorageValue<_, u32, OptionQuery>;
```
Why is this removed? It should not be. This counter is still used, and is a moving target for the maximum number of validators that the system can have.
In my branch, there is a static target declared as the maximum number of validators:
`type MaxValidatorSet`
And `MaxValidatorsCount` should still be dynamic, but always less than `MaxValidatorSet`.
The names suck, I know :(
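The static-bound/dynamic-target relationship described above can be sketched with plain types. `MAX_VALIDATOR_SET` and `set_max_validators_count` are illustrative names, not the actual pallet API:

```rust
// Static upper bound, fixed at compile time (analogue of `MaxValidatorSet`).
const MAX_VALIDATOR_SET: u32 = 1000;

// Dynamic, governance-adjustable target (analogue of `MaxValidatorsCount`),
// which must never exceed the static bound.
fn set_max_validators_count(current: &mut Option<u32>, new: u32) -> Result<(), &'static str> {
    if new > MAX_VALIDATOR_SET {
        return Err("dynamic target exceeds static bound");
    }
    *current = Some(new);
    Ok(())
}

fn main() {
    let mut count = None;
    assert!(set_max_validators_count(&mut count, 300).is_ok());
    // Raising the dynamic target past the static bound is rejected.
    assert!(set_max_validators_count(&mut count, 5000).is_err());
    assert_eq!(count, Some(300));
}
```

The static bound sizes the `BoundedVec`s, while the dynamic value remains a tunable limit below it.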
… exposures (#7483)

Building from #6445 on top of #7282

**Changes**
- [x] Bound `Invulnerables`, vector of validators invulnerable to slashing.
  - Add `MaxInvulnerables` to bound `Invulnerables` Vec -> `BoundedVec`.
  - Set to constant 20 in the pallet (must be >= 17 for backward compatibility with runtime `westend`).
- [x] Bound `DisabledValidators`, vector of validators that have offended in a given era and have been disabled.
  - Add `MaxDisabledValidators` to bound `DisabledValidators` Vec -> `BoundedVec`.
  - Set to constant 100 in the pallet (it should be <= 1/3 * `MaxValidatorsCount` according to the current disabling strategy).
- [x] Remove `ErasStakers` and `ErasStakersClipped` (see #433), non-paged validator exposures.
  - They were deprecated in v14 and could have been removed since staking era 1504 (now it's > 1700).
  - They are already empty on Polkadot and Kusama.
  - Completing the task from #5986.

Migrating pallet `staking` storage to v17 to apply all changes.

**TO DO** (in a follow-up PR)
- [ ] Bound `ErasStakersPaged` - this needs bounding the `ExposurePage.others` vector
- [ ] Bound the `BondedEras` vector
- [ ] Bound the `ClaimedRewards` pages vector
- [ ] Bound `ErasRewardPoints` - this needs bounding the `EraRewardPoints.individual` BTreeMap
- [ ] Bound `UnappliedSlashes`
- [ ] Bound `SlashingSpans` - this needs bounding the `SlashingSpans.prior` vector

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: kianenigma <kian@parity.io>
This is part of #6289 and necessary for the Asset Hub migration.
Building on the observations and suggestions from #255 .
Changes
- Bound `Invulnerables`: add `MaxInvulnerables` to bound the `Invulnerables` Vec -> `BoundedVec`. Set to constant 20 in the pallet (must be >= 17 for backward compatibility with runtime `westend`).
- Bound `DisabledValidators`: add `MaxDisabledValidators` to bound the `DisabledValidators` Vec -> `BoundedVec`. Set to constant 100 in the pallet (it should be <= 1/3 * `MaxValidatorsCount` according to the current disabling strategy).
- Remove `ErasStakers` and `ErasStakersClipped` (see "Tracker issue for cleaning up old non-paged exposure logic in staking pallet" #433).
- Add `MaxExposurePageSize` to bound the `ErasStakersPaged` mapping to exposure pages: each `ExposurePage.others` Vec is turned into a `WeakBoundedVec` to allow easy and quick changes to this bound.
- Add `MaxBondedEras` to bound the `BondedEras` Vec -> `BoundedVec`. Set to `BondingDuration::get() + 1` everywhere to include both time interval endpoints in [`current_era - BondingDuration::get()`, `current_era`]. Notice that this was done manually in every test and runtime, so I wonder if there is a better way to ensure that `MaxBondedEras::get() == BondingDuration::get() + 1` everywhere.
- Add `MaxRewardPagesPerValidator` to bound the `ClaimedRewards` Vec of pages -> `WeakBoundedVec`, to allow easy and quick changes to this parameter.
- Remove the `MaxValidatorsCount` optional storage item and add a `MaxValidatorsCount` mandatory config parameter.
- Bound the `EraRewardPoints.individual` BTreeMap -> `BoundedBTreeMap`.

**TO DO**

Slashing storage items will be bounded in another PR:
- `UnappliedSlashes`
- `SlashingSpans`