Commit ea514a6

DrudgeRajen and claude committed
fix(stable2512): adapt EnableAsyncBackingAndCoretime migration to stable2512 scheduler
After merging upgrade/1.12.0-all, the free-stuck-AvailabilityCores code from PR #29 no longer compiles on stable2512: the upstream paritytech#4937 rewrite removed the `AvailabilityCores` and `CoreOccupied` storage items from `pallet_scheduler`. Only `ClaimQueue` survives, now storing `VecDeque<Assignment>` instead of `VecDeque<ParasEntryType<T>>`.

On stable2512 the stuck-core scenario does not manifest either: paritytech#4937 fixes the fragment-chain fork deadlock natively, so candidates always reach availability under the normal flow. We still kill `ClaimQueue` in the migration so the scheduler rebuilds from the just-updated `ActiveConfig`.

Also dedupes `scale-type-resolver` in Cargo.lock (a duplicate entry introduced by the merge).

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
1 parent 6d1159b commit ea514a6

2 files changed

Lines changed: 9 additions & 54 deletions

File tree

Cargo.lock

Lines changed: 2 additions & 32 deletions

thxnet/runtime/thxnet-testnet/src/lib.rs

Lines changed: 7 additions & 22 deletions
@@ -2437,37 +2437,22 @@ pub mod migrations {
 			cfg.node_features.set(3, true);
 		});
 
-		// Force-free any stuck AvailabilityCores. In normal flow cores transition
-		// Paras(entry) → Free via ParaInherent bitfield signaling availability complete;
-		// but when the config just changed under an occupied core (e.g. async-backing
-		// enabled while a v1.12.0-pre-#4937 relay has a capacity=2 cumulus collator
-		// producing forks), the occupying entry never concludes. Until session rotation
-		// the core stays stuck. Session rotation calls `push_occupied_cores_to_assignment_provider`
-		// which replaces every Paras(_) with Free. We do the same here so the next
-		// ParaInherent pass can schedule fresh candidates atomically with setCode.
-		//
-		// Caveat: relay-client subsystems cache fragment-chain / SessionInfo per session
-		// in Rust memory. Even after freeing storage, those caches persist until the
-		// next real session boundary OR validator-process restart. Operator action
-		// required: `kubectl rollout restart deploy/validator-*` after setCode.
-		parachains_scheduler::AvailabilityCores::<Runtime>::mutate(|cores| {
-			for core in cores.iter_mut() {
-				*core = parachains_scheduler::CoreOccupied::Free;
-			}
-		});
-		// Drop stale claims too — next block's free_cores_and_fill_claimqueue
-		// repopulates from AssignmentProvider.
+		// Drop stale claim queue so the scheduler rebuilds from fresh config
+		// on the next block. stable2512 has #4937 natively so there is no
+		// stuck-AvailabilityCores scenario — only ClaimQueue needs clearing.
+		// (v1.12.0 variant of this migration also freed AvailabilityCores;
+		// that storage no longer exists in stable2512 after the #4937 rework.)
 		parachains_scheduler::ClaimQueue::<Runtime>::kill();
 
 		log::info!(
 			target: "runtime",
 			"EnableAsyncBackingAndCoretime: num_cores={}, max_vals_per_core=None, \
 			lookahead=1, async_backing=(depth=1, ancestry=2), node_features[0,1,3]=true, \
-			AvailabilityCores freed, ClaimQueue cleared",
+			ClaimQueue cleared",
 			num_cores,
 		);
 
-		<Runtime as frame_system::Config>::DbWeight::get().reads_writes(2, 3)
+		<Runtime as frame_system::Config>::DbWeight::get().reads_writes(1, 2)
 	}
 }
