Merged
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Collaborator
Author
For testing, I've left this PR + #61 running a full Kusama node with:
dmitry-markin approved these changes on Apr 22, 2024

Comment on lines +21 to +29:
```rust
#![allow(clippy::single_match)]
#![allow(clippy::result_large_err)]
#![allow(clippy::redundant_pattern_matching)]
#![allow(clippy::type_complexity)]
#![allow(clippy::result_unit_err)]
#![allow(clippy::should_implement_trait)]
#![allow(clippy::too_many_arguments)]
#![allow(clippy::assign_op_pattern)]
#![allow(clippy::match_like_matches_macro)]
```
Collaborator
Should we revisit this later?
lexnv added a commit that referenced this pull request on Apr 23, 2024
This PR ensures that litep2p does not panic when decoding public keys from the received TCP noise handshake. The code operated under the assumption that only the `ed25519` key type is valid in the context of Substrate. However, peers could still use a different key type (`rsa` / `ecdsa`) and cause the code to panic. In those cases, an error is now returned, which terminates the negotiation handshake.

Discovered while testing a sync node with the litep2p backend on Kusama as part of #83.

```bash
Version: 1.10.0-cd9d08d6311

   0: sp_panic_handler::set::{{closure}}
   1: std::panicking::rust_panic_with_hook
   2: std::panicking::begin_panic_handler::{{closure}}
   3: std::sys_common::backtrace::__rust_end_short_backtrace
   4: rust_begin_unwind
   5: core::panicking::panic_fmt
   6: <litep2p::crypto::PublicKey as core::convert::TryFrom<litep2p::crypto::keys_proto::PublicKey>>::try_from
   7: litep2p::crypto::PublicKey::from_protobuf_encoding
   8: litep2p::crypto::noise::parse_peer_id
   9: litep2p::transport::tcp::connection::TcpConnection::negotiate_connection::{{closure}}
  10: <tokio::time::timeout::Timeout<T> as core::future::future::Future>::poll
  11: <litep2p::transport::tcp::TcpTransport as litep2p::transport::Transport>::negotiate::{{closure}}
  12: <futures_util::stream::futures_unordered::FuturesUnordered<Fut> as futures_core::stream::Stream>::poll_next
  13: <litep2p::transport::tcp::TcpTransport as futures_core::stream::Stream>::poll_next
  14: <litep2p::transport::manager::TransportContext as futures_core::stream::Stream>::poll_next
  15: litep2p::transport::manager::TransportManager::next::{{closure}}
  16: <tokio::future::poll_fn::PollFn<F> as core::future::future::Future>::poll
  17: <sc_network::litep2p::Litep2pNetworkBackend as sc_network::service::traits::NetworkBackend<B,H>>::run::{{closure}}
  18: sc_service::build_network_future::{{closure}}::{{closure}}::{{closure}}
  19: <futures_util::future::poll_fn::PollFn<F> as core::future::future::Future>::poll
  20: <sc_service::task_manager::prometheus_future::PrometheusFuture<T> as core::future::future::Future>::poll
  21: <futures_util::future::select::Select<A,B> as core::future::future::Future>::poll
  22: <tracing_futures::Instrumented<T> as core::future::future::Future>::poll
  23: tokio::runtime::park::CachedParkThread::block_on
  24: tokio::runtime::context::runtime::enter_runtime
  25: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
  26: tokio::runtime::task::core::Core<T,S>::poll
  27: tokio::runtime::task::harness::Harness<T,S>::poll
  28: tokio::runtime::blocking::pool::Inner::run
  29: std::sys_common::backtrace::__rust_begin_short_backtrace
  30: core::ops::function::FnOnce::call_once{{vtable.shim}}
  31: std::sys::pal::unix::thread::Thread::new::thread_start
  32: <unknown>
  33: <unknown>

Thread 'tokio-runtime-worker' panicked at 'not implemented: unsupported key type', /home/ubuntu/.cargo/git/checkouts/litep2p-2515ad90543f141a/153d388/src/crypto/mod.rs:103
```

cc @dmitry-markin

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Collaborator
Author

**Testing Results**

The warp-sync node is producing blocks, running for roughly ~20h.

```bash
WARN tokio-runtime-worker telemetry: ❌ Error while dialing /dns/telemetry.polkadot.io/tcp/443/x-parity-wss/%2Fsubmit%2F: Custom { kind: Other, error: Timeout }
WARN tokio-runtime-worker litep2p::ipfs::identify: inbound identify substream opened for peer who doesn't exist peer=PeerId("12D3KooWF3PWbXdGEuT35nBh3MgECtxnHng3s5c5QKapoDZMy38z") protocol=/ipfs/id/1.0.0
WARN tokio-runtime-worker litep2p::ipfs::identify: inbound identify substream opened for peer who doesn't exist peer=PeerId("12D3KooWF3PWbXdGEuT35nBh3MgECtxnHng3s5c5QKapoDZMy38z") protocol=/ipfs/id/1.0.0
WARN tokio-runtime-worker sync: 💔 Ignored block (#22873601 -- 0x649e…eab2) announcement from 12D3KooWBDbBuoE4umuzJnZcUouT4GY6n31BRWHXdAFsThjTKrug because all validation slots for this peer are occupied.
WARN tokio-runtime-worker sync: 💔 Ignored block (#22873781 -- 0xf711…b203) announcement from 12D3KooWBDbBuoE4umuzJnZcUouT4GY6n31BRWHXdAFsThjTKrug because all validation slots for this peer are occupied.
WARN tokio-runtime-worker sync: 💔 Ignored block (#22873782 -- 0x917b…dfa0) announcement from 12D3KooWBDbBuoE4umuzJnZcUouT4GY6n31BRWHXdAFsThjTKrug because all validation slots for this peer are occupied.
WARN tokio-runtime-worker db::notification_pinning: Notification block pinning limit reached. Unpinning block with hash
ERROR tokio-runtime-worker beefy: 🥩 Error: ConsensusReset. Restarting voter.
```

I think we are good to go here. I'll wait for a few more hours and, if everything looks sane, I'll merge this and #61. I'll leave the full node running for a few more days.
lexnv added a commit that referenced this pull request on May 24, 2024
## [0.5.0] - 2024-05-24

This is a small patch release that makes the `FindNode` command a bit more robust:

- The `FindNode` command now retains the K (replication factor) best results.
- The `FindNode` command has been updated to handle errors and unexpected states without panicking.

### Changed

- kad: Refactor FindNode query, keep K best results and add tests ([#114](#114))

## [0.4.0] - 2024-05-23

This release introduces breaking changes to the litep2p crate, primarily affecting the `kad` module. Key updates include:

- The `GetRecord` command now exposes all peer records, not just the latest one.
- A new `RecordType` has been introduced to clearly distinguish between locally stored records and those discovered from the network.

Significant refactoring has been done to enhance the efficiency and accuracy of the `kad` module. The updates are as follows:

- The `GetRecord` command now exposes all peer records.
- The `GetRecord` command has been updated to handle errors and unexpected states without panicking.

Additionally, we've improved code coverage in the `kad` module by adding more tests.

### Added

- Add release checklist ([#115](#115))
- Re-export `multihash` & `multiaddr` types ([#79](#79))
- kad: Expose all peer records of `GET_VALUE` query ([#96](#96))

### Changed

- multistream_select: Remove unneeded changelog.md ([#116](#116))
- kad: Refactor `GetRecord` query and add tests ([#97](#97))
- kad/store: Set memory-store on an incoming record for PutRecordTo ([#88](#88))
- multistream: Dialer deny multiple /multistream/1.0.0 headers ([#61](#61))
- kad: Limit MemoryStore entries ([#78](#78))
- Refactor WebRTC code ([#51](#51))
- Revert "Bring `rustfmt.toml` in sync with polkadot-sdk (#71)" ([#74](#74))
- cargo: Update str0m from 0.4.1 to 0.5.1 ([#95](#95))

### Fixed

- Fix clippy ([#83](#83))
- crypto: Don't panic on unsupported key types ([#84](#84))

---------

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Intel-driver added a commit to Intel-driver/litep2p that referenced this pull request on Dec 24, 2025
Intel-driver added a commit to Intel-driver/litep2p that referenced this pull request on Dec 24, 2025
This PR builds on #57; due to the high number of conflicts, the errors were fixed directly in this PR.
Next Steps