Merged

195 commits
793142e
WIP primitives
sandreim Jul 17, 2024
3a29fdf
WIP
sandreim Aug 5, 2024
4a53577
Working version.
sandreim Aug 7, 2024
8285ae7
Better version
sandreim Aug 8, 2024
2831c5e
Add missing primitives and fix things
sandreim Aug 8, 2024
c767d60
Implement v2 receipts in polkadot-runtime-parachains
sandreim Aug 8, 2024
96999e3
add missing stuff
sandreim Aug 12, 2024
c5f2dc3
Switch parachains runtime to use new primitives
sandreim Aug 12, 2024
dbb0160
use vstaging primitives
sandreim Aug 12, 2024
5efab68
update rococo and westend
sandreim Aug 12, 2024
c2232e4
client keeps using the old primitives
sandreim Aug 12, 2024
87b079f
no unsafe pls
sandreim Aug 12, 2024
00e8c13
move async backing primtiives to own file
sandreim Aug 12, 2024
cd4d02f
fix
sandreim Aug 12, 2024
5509e33
fix test build
sandreim Aug 12, 2024
f8b86d2
fix test-runtime
sandreim Aug 12, 2024
fe2fbfb
self review feedback
sandreim Aug 13, 2024
975e13b
review feedback
sandreim Aug 13, 2024
1c7ac55
feedback
sandreim Aug 13, 2024
653873b
feedback
sandreim Aug 13, 2024
dc98149
clippy
sandreim Aug 13, 2024
0a6bce3
chores
sandreim Aug 13, 2024
5e4dac2
Filter v2 candidate descriptors
sandreim Aug 14, 2024
f12ca7a
fix
sandreim Aug 14, 2024
13734de
fix prospective parachains tests
sandreim Aug 14, 2024
effb1cc
fix fix
sandreim Aug 14, 2024
3f75cba
fmt
sandreim Aug 14, 2024
75a47bb
fix comment
sandreim Aug 14, 2024
12ed853
another one
sandreim Aug 14, 2024
f2c0882
fix build
sandreim Aug 15, 2024
768e034
.
sandreim Aug 15, 2024
4bf0706
improve test and add comment
sandreim Aug 15, 2024
0c83201
add log
sandreim Aug 15, 2024
4296942
simplify check()
sandreim Aug 19, 2024
e1a7509
Merge branch 'sandreim/rfc103-primitives' of github.com:paritytech/po…
sandreim Aug 20, 2024
6fb7790
impl<H>
sandreim Aug 20, 2024
e6add9c
Merge branch 'sandreim/rfc103-primitives' of github.com:paritytech/po…
sandreim Aug 20, 2024
d0b3961
comment
sandreim Aug 20, 2024
66f7a96
add some tests
sandreim Aug 20, 2024
5c0c919
update
sandreim Aug 20, 2024
38ce589
prdoc
sandreim Aug 21, 2024
9f1d611
can't be happy if CI is sad
sandreim Aug 21, 2024
a6a7329
Merge branch 'master' of github.com:paritytech/polkadot-sdk into sand…
sandreim Aug 21, 2024
663817d
remove newlines
sandreim Aug 21, 2024
a1dacc1
match rfc 103 reserved field naming
sandreim Aug 21, 2024
33b80ea
remove default cq offset
sandreim Aug 21, 2024
d5b165f
Merge branch 'sandreim/rfc103-primitives' of github.com:paritytech/po…
sandreim Aug 21, 2024
29e4b47
Ignore UMP signals when checking and processing UMP queue
sandreim Aug 16, 2024
ab85fe3
wip
sandreim Aug 20, 2024
7d5636b
refactor a bit
sandreim Aug 20, 2024
2954bba
use descriptor core_index in `map_candidates_to_cores`
sandreim Aug 20, 2024
e7abe8b
nits
sandreim Aug 20, 2024
1db5eb0
Para Inherent: filter v2 candidate descriptors (#5362)
sandreim Aug 22, 2024
cdb49a6
increase test coverage
sandreim Aug 22, 2024
f6f714a
Merge branch 'sandreim/rfc103-primitives' of github.com:paritytech/po…
sandreim Aug 22, 2024
9cc8232
WIP
alindima Aug 22, 2024
6211c8e
first version that compiles
alindima Aug 23, 2024
aa925cd
Improve usability of primitives
sandreim Aug 23, 2024
00d7c71
use committed core index if available in v1 receipts
sandreim Aug 23, 2024
af9f561
typo
sandreim Aug 23, 2024
fb2cefb
fix check
sandreim Aug 23, 2024
b53787d
typo
sandreim Aug 23, 2024
e24afd4
first version that works well
alindima Aug 26, 2024
3e13ef9
remove ProcessedCandidates
alindima Aug 26, 2024
0df5886
start fixing tests
alindima Aug 26, 2024
e2ef46e
add test for mixed v1 v2 scenario
sandreim Aug 26, 2024
2dfc542
comment
sandreim Aug 26, 2024
a38a243
add ump test
sandreim Aug 26, 2024
da381da
avoid one storage read
sandreim Aug 26, 2024
ca5c618
store claim queue snapshot in allowed relay parent info
sandreim Aug 27, 2024
4266665
check v2 receipts using claim queue snapshots
sandreim Aug 27, 2024
e93b983
typo
sandreim Aug 27, 2024
2267e62
don't back anything if there's an upcoming session change
alindima Aug 28, 2024
e01bf53
it was a bad idea to process commitments of v1 receipts
sandreim Aug 28, 2024
fb9fbe6
fmt
sandreim Aug 28, 2024
c507488
remove unused
sandreim Aug 28, 2024
178e201
Validate session index
sandreim Aug 28, 2024
67f6382
avoid pushing back items to the assignment provider on session change
alindima Aug 28, 2024
984e8e1
add unknown version
sandreim Aug 29, 2024
fab215d
add check for unknown version and test
sandreim Aug 29, 2024
7300552
Merge branch 'sandreim/rfc103-primitives' of github.com:paritytech/po…
sandreim Aug 29, 2024
9bbe2cc
typo
sandreim Aug 29, 2024
4dda9df
adjust comments
sandreim Aug 29, 2024
af6df0f
duplicate the first assignment if the claim queue used to be empty
alindima Aug 30, 2024
12c7ebd
temp: don't kill the pipelines if tests or clippy is failing
alindima Aug 30, 2024
e781da1
Merge remote-tracking branch 'origin/master' into alindima/remove-ttl
alindima Aug 30, 2024
cd3eb5f
Merge branch 'master' of github.com:paritytech/polkadot-sdk into sand…
sandreim Aug 30, 2024
f8ef4ce
fix merge damage
sandreim Aug 30, 2024
04e31a1
unused
sandreim Aug 30, 2024
5fd1279
fix
sandreim Aug 30, 2024
19d6f32
fix benchmark build
sandreim Sep 2, 2024
552078a
Merge branch 'sandreim/rfc103-primitives' of github.com:paritytech/po…
sandreim Sep 2, 2024
4ec3fc8
typos
sandreim Sep 2, 2024
2ba0a27
fmt
sandreim Sep 2, 2024
e468d62
fix comment
sandreim Sep 2, 2024
3fe368f
Merge branch 'master' of github.com:paritytech/polkadot-sdk into sand…
sandreim Sep 3, 2024
18a0496
mixed v1, v2, v2 without select core tests,
sandreim Sep 4, 2024
d320269
Add allowed relay parents storage migration
sandreim Sep 4, 2024
8490488
fix migration
sandreim Sep 5, 2024
db67486
fix
sandreim Sep 5, 2024
03cf8c1
clippy
sandreim Sep 5, 2024
43f6de7
feedback
sandreim Sep 5, 2024
70e48d2
sir, make it faster
sandreim Sep 5, 2024
1e26c73
fix
sandreim Sep 5, 2024
f4e3fb5
one last fix
sandreim Sep 5, 2024
2e87ad3
fixes
sandreim Sep 5, 2024
54432be
remove println
sandreim Sep 5, 2024
cfbecb0
add prdoc
sandreim Sep 6, 2024
3a518f2
fix comment
sandreim Sep 6, 2024
54106e2
refactor map_candidates_to_cores
sandreim Sep 6, 2024
b44a604
doc updates
sandreim Sep 9, 2024
4c5c707
Merge branch 'master' of github.com:paritytech/polkadot-sdk into sand…
sandreim Sep 9, 2024
caff543
feedback
sandreim Sep 13, 2024
218f530
refactor
sandreim Sep 13, 2024
216937a
fix try-runtime
sandreim Sep 13, 2024
c0aee8c
check ump signal count and test
sandreim Sep 16, 2024
d0a42c8
Merge remote-tracking branch 'origin/master' into alindima/remove-ttl
alindima Sep 17, 2024
d31e0a0
remove fields from configuration
alindima Sep 17, 2024
fce03b3
fix storage version
alindima Sep 17, 2024
f07b6f4
add scheduler migration
alindima Sep 18, 2024
1ef7952
remove unused
sandreim Sep 18, 2024
5790b8e
fix prdoc
sandreim Sep 19, 2024
ba9d3ff
more tests cases
sandreim Sep 19, 2024
9c4e2ae
stricter UMP signal checks and tests
sandreim Sep 23, 2024
5b157a2
Merge remote-tracking branch 'origin/master' into alindima/remove-ttl
alindima Sep 24, 2024
8d7b59b
fix todo with a slight hack
alindima Sep 24, 2024
d9f0b52
fix core count on session change
alindima Sep 24, 2024
43bbb9d
type alias
sandreim Sep 24, 2024
d7e57fd
Merge branch 'master' into sandreim/runtime_v2_descriptor_support
sandreim Sep 24, 2024
e64216a
Merge remote-tracking branch 'origin/sandreim/runtime_v2_descriptor_s…
alindima Sep 24, 2024
eb1de77
add workaround for #64 for claim queue
alindima Sep 24, 2024
5437b4c
Revert "remove fields from configuration"
alindima Sep 24, 2024
415a938
remove config extrinsics and outdated error
alindima Sep 24, 2024
8985f60
complete fix for #64
alindima Sep 24, 2024
e7cf960
fix paras_inherent tests
alindima Sep 24, 2024
4488c11
don't add empty entries to cq
alindima Sep 25, 2024
d0c4ea8
fix tests and remove superfluous ones
alindima Sep 25, 2024
a48548c
remove yml hacks
alindima Sep 25, 2024
9bec454
fix config storage version
alindima Sep 25, 2024
4a316d0
bugfixes
alindima Sep 25, 2024
d745915
tests
alindima Sep 25, 2024
bac4b9e
clippy
alindima Sep 26, 2024
658da59
bugfixes
alindima Sep 26, 2024
8f65358
finish scheduler tests
alindima Sep 27, 2024
37fd8cf
extract back_candidates function
alindima Sep 27, 2024
5967f78
testing cleanups
alindima Sep 30, 2024
ae9b1ac
add session change test
alindima Sep 30, 2024
b9596ba
address comment
alindima Sep 30, 2024
fb599e7
prdoc
alindima Sep 30, 2024
b5f7bda
revert
alindima Sep 30, 2024
7a1f382
missing feedback
sandreim Oct 2, 2024
4757dab
Merge remote-tracking branch 'origin' into sandreim/runtime_v2_descri…
sandreim Oct 2, 2024
a2a0795
Merge branch 'sandreim/runtime_v2_descriptor_support' of github.com:p…
sandreim Oct 2, 2024
01ce087
:facepalm:
sandreim Oct 2, 2024
c0b36b1
Merge branch 'master' of github.com:paritytech/polkadot-sdk into sand…
sandreim Oct 2, 2024
45b4690
".git/.scripts/commands/bench/bench.sh" --subcommand=pallet --runtime…
Oct 2, 2024
28e4309
".git/.scripts/commands/bench/bench.sh" --subcommand=pallet --runtime…
Oct 2, 2024
6c81b4c
Merge remote-tracking branch 'origin/sandreim/runtime_v2_descriptor_s…
alindima Oct 7, 2024
a5d4dc0
Merge remote-tracking branch 'origin/master' into alindima/remove-ttl
alindima Oct 7, 2024
ccb1176
clippy
alindima Oct 7, 2024
9e411f8
address some review comments
alindima Oct 8, 2024
7ea37ad
clippy
alindima Oct 8, 2024
25b1c23
some more polishing
alindima Oct 8, 2024
9767e1b
move eligible_paras
alindima Oct 8, 2024
ce092e5
do report_processed when backing/dropping claim
alindima Oct 8, 2024
b369153
clippy
alindima Oct 8, 2024
345c956
Merge branch 'master' into alindima/remove-ttl
alindima Oct 9, 2024
4daf2ff
Merge branch 'master' into alindima/remove-ttl
alindima Oct 9, 2024
e61646a
nits
alindima Oct 9, 2024
75f1390
some more review feedback
alindima Oct 15, 2024
6e8f479
Merge remote-tracking branch 'origin/master' into alindima/remove-ttl
alindima Oct 15, 2024
8942c57
some fixes
alindima Oct 15, 2024
eff59c9
fix hyperlink
alindima Oct 15, 2024
e03d00d
add sync backing zombienet test
alindima Oct 15, 2024
de0fb25
Merge remote-tracking branch 'origin/master' into alindima/remove-ttl
alindima Oct 15, 2024
4e89e25
add zombienet test for core sharing when one parachain is not produci…
alindima Oct 15, 2024
053fe9c
try fixing zombienet
alindima Oct 16, 2024
9ea0e26
fix benchmarks
alindima Oct 16, 2024
8b3f248
update prdoc
alindima Oct 16, 2024
5b7bad1
Merge remote-tracking branch 'origin/master' into alindima/remove-ttl
alindima Oct 16, 2024
f022674
".git/.scripts/commands/bench/bench.sh" --subcommand=pallet --runtime…
Oct 16, 2024
468a28c
fix bench
alindima Oct 17, 2024
66d5abf
Merge remote-tracking branch 'origin/master' into alindima/remove-ttl
alindima Oct 17, 2024
a99309b
Merge remote-tracking branch 'origin/alindima/remove-ttl' into alindi…
alindima Oct 17, 2024
04a541b
fix benchmarks one last time
alindima Oct 17, 2024
1bfb31b
".git/.scripts/commands/bench/bench.sh" --subcommand=pallet --runtime…
Oct 17, 2024
10f7c72
switch test runtime to coretime
alindima Oct 21, 2024
acf109d
fix prdoc
alindima Oct 21, 2024
ec5c45f
Merge remote-tracking branch 'origin/master' into alindima/remove-ttl
alindima Oct 21, 2024
527a80c
Merge remote-tracking branch 'origin/alindima/remove-ttl' into alindi…
alindima Oct 21, 2024
e93a05f
try fixing prdoc
alindima Oct 21, 2024
14c1641
fix prdoc
alindima Oct 21, 2024
0680a2c
review feedback
alindima Oct 21, 2024
dea739c
fix clippy
alindima Oct 21, 2024
2c7b6ca
".git/.scripts/commands/bench/bench.sh" --subcommand=pallet --runtime…
Oct 21, 2024
231 changes: 189 additions & 42 deletions polkadot/primitives/src/vstaging/mod.rs
@@ -24,12 +24,15 @@ use super::{
HashT, HeadData, Header, Id, Id as ParaId, MultiDisputeStatementSet, ScheduledCore,
UncheckedSignedAvailabilityBitfields, ValidationCodeHash,
};
use alloc::{
collections::{BTreeMap, BTreeSet, VecDeque},
vec,
vec::Vec,
};
use bitvec::prelude::*;
use sp_application_crypto::ByteArray;

use alloc::{vec, vec::Vec};
use codec::{Decode, Encode};
use scale_info::TypeInfo;
use sp_application_crypto::ByteArray;
use sp_core::RuntimeDebug;
use sp_runtime::traits::Header as HeaderT;
use sp_staking::SessionIndex;
@@ -298,9 +301,9 @@ pub struct ClaimQueueOffset(pub u8);
/// Signals that a parachain can send to the relay chain via the UMP queue.
#[derive(PartialEq, Eq, Clone, Encode, Decode, TypeInfo, RuntimeDebug)]
pub enum UMPSignal {
/// A message sent by a parachain to select the core the candidate is commited to.
/// A message sent by a parachain to select the core the candidate is committed to.
/// Relay chain validators, in particular backers, use the `CoreSelector` and
/// `ClaimQueueOffset` to compute the index of the core the candidate has commited to.
/// `ClaimQueueOffset` to compute the index of the core the candidate has committed to.
SelectCore(CoreSelector, ClaimQueueOffset),
}
/// Separator between `XCM` and `UMPSignal`.
@@ -324,6 +327,25 @@ impl CandidateCommitments {
UMPSignal::SelectCore(core_selector, cq_offset) => Some((core_selector, cq_offset)),
}
}

/// Returns the core index determined by `UMPSignal::SelectCore` commitment
/// and `assigned_cores`.
///
/// Returns `None` if there is no `UMPSignal::SelectCore` commitment or
/// `assigned_cores` is empty.
///
/// `assigned_cores` must be a sorted vec of all core indices assigned to a parachain.
pub fn committed_core_index(&self, assigned_cores: &[&CoreIndex]) -> Option<CoreIndex> {
if assigned_cores.is_empty() {
return None
}

self.selected_core().and_then(|(core_selector, _cq_offset)| {
let core_index =
**assigned_cores.get(core_selector.0 as usize % assigned_cores.len())?;
Some(core_index)
})
}
}
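As a standalone illustration of the selection rule above, the mapping from a `CoreSelector` to a concrete core reduces to a modulo lookup into the para's sorted core list. The sketch below uses a hypothetical `select_core` helper with plain `u32` stand-ins for `CoreIndex`; it is not the runtime API, just the arithmetic it performs:

```rust
// Hypothetical helper mirroring the `committed_core_index` lookup:
// the selector indexes into the para's sorted assigned cores,
// wrapping around via modulo when it exceeds the core count.
fn select_core(core_selector: u8, assigned_cores: &[u32]) -> Option<u32> {
    if assigned_cores.is_empty() {
        return None;
    }
    assigned_cores
        .get(core_selector as usize % assigned_cores.len())
        .copied()
}

fn main() {
    // Two cores assigned: selector 0 -> first, 1 -> second, 2 wraps to first.
    let cores = [5, 10];
    assert_eq!(select_core(0, &cores), Some(5));
    assert_eq!(select_core(1, &cores), Some(10));
    assert_eq!(select_core(2, &cores), Some(5));
    // No assignment: no core can be selected.
    assert_eq!(select_core(0, &[]), None);
}
```

The wrap-around means a collator can keep incrementing its selector across candidates and always land on one of its assigned cores.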

/// CandidateReceipt construction errors.
@@ -337,7 +359,8 @@ pub enum CandidateReceiptError {
InvalidSelectedCore,
/// The parachain is not assigned to any core at specified claim queue offset.
NoAssignment,
/// No core was selected.
/// No core was selected. The `SelectCore` commitment is mandatory for
/// v2 receipts if the parachain has multiple cores assigned.
NoCoreSelected,
/// Unknown version.
UnknownVersion(InternalVersion),
@@ -432,33 +455,57 @@ impl<H: Copy> CandidateDescriptorV2<H> {
}

impl<H: Copy> CommittedCandidateReceiptV2<H> {
/// Checks if descriptor core index is equal to the commited core index.
/// Input `assigned_cores` must contain the sorted cores assigned to the para at
/// the committed claim queue offset.
pub fn check(&self, assigned_cores: &[CoreIndex]) -> Result<(), CandidateReceiptError> {
// Don't check v1 descriptors.
if self.descriptor.version() == CandidateDescriptorVersion::V1 {
return Ok(())
}

if self.descriptor.version() == CandidateDescriptorVersion::Unknown {
return Err(CandidateReceiptError::UnknownVersion(self.descriptor.version))
/// Checks if descriptor core index is equal to the committed core index.
/// Input `cores_per_para` is a claim queue snapshot stored as a mapping
/// between `ParaId` and the cores assigned per depth.
pub fn check_core_index(
&self,
cores_per_para: &TransposedClaimQueue,
) -> Result<(), CandidateReceiptError> {
match self.descriptor.version() {
// Don't check v1 descriptors.
CandidateDescriptorVersion::V1 => return Ok(()),
CandidateDescriptorVersion::V2 => {},
CandidateDescriptorVersion::Unknown =>
return Err(CandidateReceiptError::UnknownVersion(self.descriptor.version)),
}

if assigned_cores.is_empty() {
if cores_per_para.is_empty() {
return Err(CandidateReceiptError::NoAssignment)
}

let descriptor_core_index = CoreIndex(self.descriptor.core_index as u32);

let (core_selector, _cq_offset) =
self.commitments.selected_core().ok_or(CandidateReceiptError::NoCoreSelected)?;
let (offset, core_selected) =
if let Some((_core_selector, cq_offset)) = self.commitments.selected_core() {
(cq_offset.0, true)
} else {
// If no core has been selected then we use offset 0 (top of claim queue)
(0, false)
};

// The cores assigned to the parachain at above computed offset.
let assigned_cores = cores_per_para
.get(&self.descriptor.para_id())
.ok_or(CandidateReceiptError::NoAssignment)?
.get(&offset)
.ok_or(CandidateReceiptError::NoAssignment)?
.into_iter()
.collect::<Vec<_>>();

let core_index = if core_selected {
self.commitments
.committed_core_index(assigned_cores.as_slice())
.ok_or(CandidateReceiptError::NoAssignment)?
} else {
// `SelectCore` commitment is mandatory for elastic scaling parachains.
if assigned_cores.len() > 1 {
return Err(CandidateReceiptError::NoCoreSelected)
}

let core_index = assigned_cores
.get(core_selector.0 as usize % assigned_cores.len())
.ok_or(CandidateReceiptError::InvalidCoreIndex)?;
**assigned_cores.get(0).ok_or(CandidateReceiptError::NoAssignment)?
};

if *core_index != descriptor_core_index {
let descriptor_core_index = CoreIndex(self.descriptor.core_index as u32);
if core_index != descriptor_core_index {
return Err(CandidateReceiptError::CoreIndexMismatch)
}

@@ -512,6 +559,12 @@ impl<H> BackedCandidate<H> {
&self.candidate
}

/// Get a mutable reference to the committed candidate receipt of the candidate.
/// Only for testing.
#[cfg(feature = "test")]
pub fn candidate_mut(&mut self) -> &mut CommittedCandidateReceiptV2<H> {
&mut self.candidate
}
/// Get a reference to the descriptor of the candidate.
pub fn descriptor(&self) -> &CandidateDescriptorV2<H> {
&self.candidate.descriptor
@@ -697,6 +750,29 @@ impl<H: Copy> From<CoreState<H>> for super::v8::CoreState<H> {
}
}

/// The claim queue mapped by parachain id.
pub type TransposedClaimQueue = BTreeMap<ParaId, BTreeMap<u8, BTreeSet<CoreIndex>>>;

/// Returns a mapping between the para id and the core indices assigned at different
/// depths in the claim queue.
pub fn transpose_claim_queue(
claim_queue: BTreeMap<CoreIndex, VecDeque<Id>>,
) -> TransposedClaimQueue {
let mut per_para_claim_queue = BTreeMap::new();

for (core, paras) in claim_queue {
// Iterate paras assigned to this core at each depth.
for (depth, para) in paras.into_iter().enumerate() {
let depths: &mut BTreeMap<u8, BTreeSet<CoreIndex>> =
per_para_claim_queue.entry(para).or_insert_with(|| Default::default());

depths.entry(depth as u8).or_default().insert(core);
}
}

per_para_claim_queue
}
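For illustration outside the runtime, the transposition performed by `transpose_claim_queue` can be sketched with plain integer stand-ins for `ParaId` and `CoreIndex`. The `transpose` function below is a simplified, hypothetical mirror of the logic, not the pallet code itself:

```rust
use std::collections::{BTreeMap, BTreeSet, VecDeque};

// Simplified stand-ins for the runtime's ParaId and CoreIndex types.
type ParaId = u32;
type CoreIndex = u32;
type TransposedClaimQueue = BTreeMap<ParaId, BTreeMap<u8, BTreeSet<CoreIndex>>>;

// Each para's position in a core's queue becomes a "depth" key,
// collecting every core assigned to that para at that depth.
fn transpose(claim_queue: BTreeMap<CoreIndex, VecDeque<ParaId>>) -> TransposedClaimQueue {
    let mut per_para = TransposedClaimQueue::new();
    for (core, paras) in claim_queue {
        for (depth, para) in paras.into_iter().enumerate() {
            per_para
                .entry(para)
                .or_default()
                .entry(depth as u8)
                .or_default()
                .insert(core);
        }
    }
    per_para
}

fn main() {
    // Para 7 is scheduled on cores 0 and 1 at depth 0, and on core 0 at depth 1;
    // para 9 appears only at depth 1 on core 1.
    let mut cq = BTreeMap::new();
    cq.insert(0, VecDeque::from(vec![7, 7]));
    cq.insert(1, VecDeque::from(vec![7, 9]));

    let t = transpose(cq);
    assert_eq!(t[&7][&0], BTreeSet::from([0, 1]));
    assert_eq!(t[&7][&1], BTreeSet::from([0]));
    assert_eq!(t[&9][&1], BTreeSet::from([1]));
}
```

This per-para view is what lets `check_core_index` answer "which cores may this para commit to at claim queue offset N" with two map lookups instead of scanning every core's queue.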

#[cfg(test)]
mod tests {
use super::*;
@@ -778,7 +854,7 @@ mod tests {

assert_eq!(new_ccr.descriptor.version(), CandidateDescriptorVersion::Unknown);
assert_eq!(
new_ccr.check(&vec![].as_slice()),
new_ccr.check_core_index(&BTreeMap::new()),
Err(CandidateReceiptError::UnknownVersion(InternalVersion(100)))
)
}
@@ -802,7 +878,13 @@
.upward_messages
.force_push(UMPSignal::SelectCore(CoreSelector(0), ClaimQueueOffset(1)).encode());

assert_eq!(new_ccr.check(&vec![CoreIndex(123)]), Ok(()));
let mut cq = BTreeMap::new();
cq.insert(
CoreIndex(123),
vec![new_ccr.descriptor.para_id(), new_ccr.descriptor.para_id()].into(),
);

assert_eq!(new_ccr.check_core_index(&transpose_claim_queue(cq)), Ok(()));
}

#[test]
@@ -814,21 +896,31 @@
new_ccr.commitments.upward_messages.force_push(UMP_SEPARATOR);
new_ccr.commitments.upward_messages.force_push(UMP_SEPARATOR);

// The check should fail because no `SelectCore` signal was sent.
assert_eq!(
new_ccr.check(&vec![CoreIndex(0), CoreIndex(100)]),
Err(CandidateReceiptError::NoCoreSelected)
);
let mut cq = BTreeMap::new();
cq.insert(CoreIndex(0), vec![new_ccr.descriptor.para_id()].into());

// The check should not fail even though no `SelectCore` signal was sent;
// the message is optional.
assert!(new_ccr.check_core_index(&transpose_claim_queue(cq)).is_ok());

// Garbage message.
new_ccr.commitments.upward_messages.force_push(vec![0, 13, 200].encode());

// No `SelectCore` can be decoded.
assert_eq!(new_ccr.commitments.selected_core(), None);

// Failure is expected.
let mut cq = BTreeMap::new();
cq.insert(
CoreIndex(0),
vec![new_ccr.descriptor.para_id(), new_ccr.descriptor.para_id()].into(),
);
cq.insert(
CoreIndex(100),
vec![new_ccr.descriptor.para_id(), new_ccr.descriptor.para_id()].into(),
);

assert_eq!(
new_ccr.check(&vec![CoreIndex(0), CoreIndex(100)]),
new_ccr.check_core_index(&transpose_claim_queue(cq.clone())),
Err(CandidateReceiptError::NoCoreSelected)
);

@@ -847,7 +939,7 @@
.force_push(UMPSignal::SelectCore(CoreSelector(1), ClaimQueueOffset(1)).encode());

// Duplicate doesn't override first signal.
assert_eq!(new_ccr.check(&vec![CoreIndex(0), CoreIndex(100)]), Ok(()));
assert_eq!(new_ccr.check_core_index(&transpose_claim_queue(cq)), Ok(()));
}

#[test]
@@ -884,13 +976,57 @@
Decode::decode(&mut encoded_ccr.as_slice()).unwrap();

assert_eq!(v2_ccr.descriptor.core_index(), Some(CoreIndex(123)));
assert_eq!(new_ccr.check(&vec![CoreIndex(123)]), Ok(()));

let mut cq = BTreeMap::new();
cq.insert(
CoreIndex(123),
vec![new_ccr.descriptor.para_id(), new_ccr.descriptor.para_id()].into(),
);

assert_eq!(new_ccr.check_core_index(&transpose_claim_queue(cq)), Ok(()));

assert_eq!(new_ccr.hash(), v2_ccr.hash());
}

// Only check descriptor `core_index` field of v2 descriptors. If it is v1, that field
// will be garbage.
#[test]
fn test_core_select_is_mandatory() {
fn test_v1_descriptors_with_ump_signal() {
let mut ccr = dummy_old_committed_candidate_receipt();
ccr.descriptor.para_id = ParaId::new(1024);
// Adding collator signature should make it decode as v1.
ccr.descriptor.signature = dummy_collator_signature();
ccr.descriptor.collator = dummy_collator_id();

ccr.commitments.upward_messages.force_push(UMP_SEPARATOR);
ccr.commitments
.upward_messages
.force_push(UMPSignal::SelectCore(CoreSelector(1), ClaimQueueOffset(1)).encode());

let encoded_ccr: Vec<u8> = ccr.encode();

let v1_ccr: CommittedCandidateReceiptV2 =
Decode::decode(&mut encoded_ccr.as_slice()).unwrap();

assert_eq!(v1_ccr.descriptor.version(), CandidateDescriptorVersion::V1);
assert!(v1_ccr.commitments.selected_core().is_some());

let mut cq = BTreeMap::new();
cq.insert(CoreIndex(0), vec![v1_ccr.descriptor.para_id()].into());
cq.insert(CoreIndex(1), vec![v1_ccr.descriptor.para_id()].into());

assert!(v1_ccr.check_core_index(&transpose_claim_queue(cq)).is_ok());

assert_eq!(
v1_ccr.commitments.committed_core_index(&vec![&CoreIndex(10), &CoreIndex(5)]),
Some(CoreIndex(5)),
);

assert_eq!(v1_ccr.descriptor.core_index(), None);
}

#[test]
fn test_core_select_is_optional() {
// Testing edge case when collators provide zeroed signature and collator id.
let mut old_ccr = dummy_old_committed_candidate_receipt();
old_ccr.descriptor.para_id = ParaId::new(1000);
@@ -899,11 +1035,22 @@
let new_ccr: CommittedCandidateReceiptV2 =
Decode::decode(&mut encoded_ccr.as_slice()).unwrap();

let mut cq = BTreeMap::new();
cq.insert(CoreIndex(0), vec![new_ccr.descriptor.para_id()].into());

// Since collator sig and id are zeroed, it means that the descriptor uses format
// version 2.
// We expect the check to fail in such case because there will be no `SelectCore`
// commitment.
assert_eq!(new_ccr.check(&vec![CoreIndex(0)]), Err(CandidateReceiptError::NoCoreSelected));
// version 2. Should still pass checks without core selector.
assert!(new_ccr.check_core_index(&transpose_claim_queue(cq)).is_ok());

let mut cq = BTreeMap::new();
cq.insert(CoreIndex(0), vec![new_ccr.descriptor.para_id()].into());
cq.insert(CoreIndex(1), vec![new_ccr.descriptor.para_id()].into());

// Should fail because 2 cores are assigned.
assert_eq!(
new_ccr.check_core_index(&transpose_claim_queue(cq)),
Err(CandidateReceiptError::NoCoreSelected)
);

// Adding collator signature should make it decode as v1.
old_ccr.descriptor.signature = dummy_collator_signature();
@@ -85,7 +85,7 @@ state.

Once we have all parameters, we can spin up a background task to perform the validation in a way that doesn't hold up
the entire event loop. Before invoking the validation function itself, this should first do some basic checks:
* The collator signature is valid
* The collator signature is valid (only if `CandidateDescriptor` has version 1)
* The PoV provided matches the `pov_hash` field of the descriptor

For more details please see [PVF Host and Workers](pvf-host-and-workers.md).
@@ -109,7 +109,7 @@ All failed checks should lead to an unrecoverable error making the block invalid
1. Ensure that any code upgrade scheduled by the candidate does not happen within `config.validation_upgrade_cooldown`
of `Paras::last_code_upgrade(para_id, true)`, if any, comparing against the value of `Paras::FutureCodeUpgrades`
for the given para ID.
1. Check the collator's signature on the candidate data.
1. Check the collator's signature on the candidate data (only if `CandidateDescriptor` is version 1).
1. check the backing of the candidate using the signatures and the bitfields, comparing against the validators
assigned to the groups, fetched with the `group_validators` lookup, while group indices are computed by `Scheduler`
according to group rotation info.
18 changes: 12 additions & 6 deletions polkadot/runtime/parachains/src/assigner_coretime/mod.rs
@@ -284,12 +284,10 @@ impl<T: Config> AssignmentProvider<BlockNumberFor<T>> for Pallet<T> {
})
}

fn report_processed(assignment: Assignment) {
match assignment {
Assignment::Pool { para_id, core_index } =>
on_demand::Pallet::<T>::report_processed(para_id, core_index),
Assignment::Bulk(_) => {},
}
fn report_processed(para_id: ParaId, core_index: CoreIndex) {
// Reporting processed assignments is only important for on-demand.
// Doing the call below is a no-op if the assignment was a `Bulk` one.
on_demand::Pallet::<T>::report_processed(para_id, core_index);
Contributor Author:

@eskimor this is a bit of a hack but is IMO safe to do.
This helps when reporting as processed the cores that remain occupied when a session change occurs. As you suggested, we no longer back anything if we know that at the end of the block a session change will occur. Cores that remain occupied at the session change will be freed and their assignments will no longer be pushed back to the assigner, BUT we still want to report them as processed, right?

AFAICT this is only relevant for on-demand cores, but because I removed the AV-core storage, we no longer know which assignment type this was (on demand or bulk). I prefer not storing this info in the inclusion module or somewhere else so this hack seems to do the trick:
on_demand::report_processed seems to not do anything if the paraid had no affinity on the core.

This is a hack definitely. But it's one I'm willing to do to not add extra complexity and we'll fix this for good when removing the claim queue: #5529
Sounds ok?

Member:

Yes, please no distinction between on-demand or bulk. The whole goal of the assignment provider was to hide this difference, so the rest of the code does not have to bother.

Just report processed, if dropped. That's not even a hack. It has been processed, just not successfully. We can add this to the docs of the call, that "processed" does not necessarily mean success, but more that it had its chance and execution time.

Member:

Ok sorry. Now I looked at the code as well. Indeed this is a hack of course. The assignment provider should know about on-demand/bulk. Calling into the on-demand for non-ondemand assignments is not nice indeed, but I see you added the no-op to the docs. Better yet would be to state this as a proper invariant, ideally even with an accommodating test, checking that this is indeed a noop.

Member:

On the other hand this is only an intermediary hack right? With the proper fix coming, we should no longer need this hack anyway.

Member:

Reading the code I don't get why we need this hack. Anyhow, will not block on this as it is only interim.

Contributor Author:

I moved the report_processed to be done when advancing the claim queue only.
We were previously reporting when the core was freed (because we used to have the TTL), but this is no longer needed. Cleaned this up and removed the hack

}

/// Push an assignment back to the front of the queue.
@@ -322,6 +320,14 @@ let config = configuration::ActiveConfig::<T>::get();
let config = configuration::ActiveConfig::<T>::get();
config.scheduler_params.num_cores
}

fn assignment_duplicated(assignment: &Assignment) {
match assignment {
Assignment::Pool { para_id, core_index } =>
on_demand::Pallet::<T>::assignment_duplicated(*para_id, *core_index),
Assignment::Bulk(_) => {},
}
}
}

impl<T: Config> Pallet<T> {