check inherents and equivocations in import_block #8446
Force-pushed from 3100de0 to 9ebf20c
bkchr left a comment
I don't get why you added all these CreateInherentForX thingies?
```rust
/// Create inherent data providers for BABE.
#[derive(Debug, Clone)]
pub struct CreateInherentDataProvidersForBabe {
```
I think it was not a bad idea. It helped to avoid copying this type around:

```rust
std::sync::Arc<
    dyn sp_inherents::CreateInherentDataProviders<
        Block,
        (),
        InherentDataProviders = (InherentDataProvider, sp_timestamp::InherentDataProvider),
    >,
>;
```

and it was reused in 3 places.
The code requires the CIDP impls for
iulianbarbu left a comment
Did a first pass, I will take another look soon.
iulianbarbu left a comment
Second pass, I still need to look over the BABE changes.
```diff
- create_inherent_data_providers: move |_, ()| async move { Ok(()) },
+ create_inherent_data_providers: Arc::new(move |parent, _| {
+     let slot_duration = sc_consensus_aura::standalone::slot_duration_at(
+         client_for_closure.as_ref(),
+         parent,
+     );
+     let timestamp = sp_timestamp::InherentDataProvider::from_system_time();
+     let slot = slot_duration.map(|slot_duration| {
+         sp_consensus_aura::inherents::InherentDataProvider::from_timestamp_and_slot_duration(
+             *timestamp,
+             slot_duration,
+         )
+     });
+     async move { Ok((slot?, timestamp)) }
+ }) as AuraCreateInherentDataProviders<Block>,
```
I am not entirely familiar with the logic of the slot-based collator for omni-node. Why should we change this? Are all inherent data providers supposed to return the slot and timestamp, no matter which type of collator it is? (This is a question for myself too, to clarify.)
I think this is because the AuraVerifier required both the timestamp and the slot. Now this check is done by the import_block method, so both the slot and the timestamp are required here.
I might be misreading, but does Omni Node use the AuraVerifier you pointed to? I am seeing some usage of the EquivocationVerifier from SlotBasedBlockImport?
I think you are correct, this AURA part was implemented incorrectly. 7a28d33
In fact, I think I have another oversight: here the parachain template node is using a raw ParachainBlockImport in conjunction with cumulus_client_consensus_aura::equivocation_import_queue::fully_verifying_import_queue, which I changed in the commit above. So I think that the parachain template node should also use this AuraBlockImport type.
@bkchr Can you confirm that this is the right way to proceed? Or am I missing something?
```rust
let hash = block.post_hash();
let number = *block.header.number();
let info = self.client.info();
let parent_hash = *block.header.parent_hash();
```
nit: this function is already super long; maybe we can contain the check_inherents logic even further, e.g. extract a method named execute_check_inherents (even though that sounds awful, we can still document it to clarify its purpose). Same for the equivocation check.
> this function is already super long

Why is that a problem?

> maybe we can contain the check_inherents logic even further - e.g. create a method named execute_check_inherents (even though sounds awful)

What benefit do we get by doing this? I see at least one drawback, which you already pointed out: we are adding a function with an "awful" name. And the function is called in only one place. I don't really see a reason for it.
> Why is that a problem?

The main problem is that a long function does many things, which makes it hard to test, which in turn makes it easy for bugs to sneak in.

> What benefit do we get by doing this?

Not much if we don't test the newly added units, which might not be at hand. The other benefit is easing the reading of this function, which spans hundreds of lines 🙈. I am not entirely sure it is doable to test the extracted units. All in all, the logic is for sure tested at some higher level by the CI. The readability argument then feels like a small improvement that can go unnoticed when you still read the whole function, but we have to start somewhere.
Please consider this a nit, not a hard ask. I consider this technical debt, which we can either keep accumulating or attempt to stop. Either way can be clarified if we open an issue, but even that is optional. I mainly posted the comment to seek alignment, and I think we're mostly fine based on the testing done in the CI. Someone sufficiently determined can seek more alignment whenever it feels worth doing.
cumulus/client/consensus/aura/src/collators/slot_based/block_import.rs (outdated review thread, resolved)
```rust
    .await
    .map_err(Error::<B>::Inherent)?;

let slot_now = create_inherent_data_providers.slot();
```
We should remove the inherent data providers completely and just provide some function that returns the current slot.
Sorry, I don't understand. This CreateInherentDataProvider trait is just a function that returns the current slot (and timestamp). It has a blanket implementation for a specific Fn signature. How exactly would you like this to be done instead?
As discussed offline, I think the point here is to make this verifier unaware of CIDP. For just verifying the slot, we don't need all the inherent data, but just the slot.
```diff
- fn authorities<A, B, C>(
+ /// Return AURA authorities at the given `parent_hash`.
+ #[doc(hidden)]
+ pub fn authorities<A, B, C>(
```
```rust
    Ok(())
}

async fn check_and_report_equivocation(
```
While this is fine to have here right now, this should be some background process to which the verify function sends the headers to check. The point being that you want to report equivocations as soon as you see the header, not wait until you have received the entire body (which may never appear).
(This can be some followup)
```diff
- create_inherent_data_providers: params.create_inherent_data_providers,
- block_import: params.block_import,
+ create_inherent_data_providers: params.create_inherent_data_providers.clone(),
+ block_import: ValidatingBlockImport::<_, _, _, _, P>::new(
```
As said below, this is not needed and can be deleted.
It should be able to stay there. However, there are some calls into the runtime that need to be removed, one of them being the fetching of the authorities. This needs to be changed: we need to track the authorities inside the AURA node-side code and get them from the digests, similar to how it is done in BABE. (This is a task that can be done separately, but it would need to be merged before.)
Force-pushed from c843d3c to 507f3b2
Force-pushed from 5a19b36 to 5779e08
All GitHub workflows were cancelled due to the failure of one of the required jobs.

Will be split into separate PRs as previously agreed.
When importing blocks, there are essentially two steps: `verify` and `import_block`. With #65, we would like to re-broadcast blocks between these two steps (i.e. basically right after `verify`). This PR is the first step to enable that change, by moving `check_inherents` (and equivocation checks) from the `verify` to the `import_block` step (because `verify` does runtime checks that can potentially be expensive, and we don't really need to do those before re-broadcasting).