Optimize initialization of networking protocol benchmarks#6636
AndreiEres merged 10 commits into master from
Conversation
/cmd prdoc --audience node_dev

All GitHub workflows were cancelled due to the failure of one of the required jobs.
lexnv
left a comment
Nice! 👍
Andrei, could you give us a new picture of the litep2p vs libp2p report? 🙏
dmitry-markin
left a comment
Exactly what we need!
    (27, "128MB"),

const SMALL_PAYLOAD: &[(u32, usize, &'static str)] = &[
    // (Exponent of size, number of notifications, label)
    (6, 100, "64B"),
Can we increase the number of notifications, especially for smaller payload sizes? Even on localhost, TCP window control can influence the results quite a lot, and for a low count of small notifications we can get inaccurate results.
Ideally, we would like to transmit tens of megabytes for every notification size, i.e. we can keep (size) × (number) constant at 100 MB. In case it takes too long for smaller payload sizes, we can limit it to 15-30 seconds. We can also decrease the number of criterion iterations for every size; 10 should be fine to get an estimate of the mean deviation.
We can do this number picking in a follow up PR.
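As a sketch of the suggested scheme (not the PR's actual code; `notification_count` and `TOTAL_BYTES` are hypothetical names), the notification count could be derived from the payload size so that their product stays constant at roughly 100 MB:

```rust
// Sketch (hypothetical, not the PR's code): keep
// (payload size) * (notification count) constant at ~100 MB,
// as suggested in the review.
const TOTAL_BYTES: usize = 100 * 1024 * 1024; // 100 MB per benchmark case

fn notification_count(size_exponent: u32) -> usize {
    let payload_size = 1usize << size_exponent; // 2^exponent bytes
    TOTAL_BYTES / payload_size
}

fn main() {
    // 64 B payloads need ~1.6M notifications to move 100 MB.
    assert_eq!(notification_count(6), 1_638_400);
    // 1 MB payloads need only 100.
    assert_eq!(notification_count(20), 100);
    // Payloads larger than the budget (e.g. 2^27 = 128 MB) divide to 0,
    // so a lower bound of 1 notification would still be needed in practice.
    assert_eq!(notification_count(27), 0);
}
```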
We can make the number of notifications a variable, but this will disrupt the charts that track the total time for a transfer.
let service1 = worker1.network_service().clone();
let (worker2, rx2, _notification_service2) =
    create_network_worker::<B, H, N>(listen_address2.clone());
let _guard = rt.enter();
What is this needed for?
Without it we can't initialize workers that need to spawn a tokio task internally.
let (break_tx, break_rx) = async_channel::bounded(10);

async fn run_with_backpressure(setup: Arc<BenchSetup>, size: usize, limit: usize) {
    let (break_tx, break_rx) = async_channel::bounded(1);
    let requests = futures::future::join_all((0..limit).into_iter().map(|_| {
It would be interesting to compare the results when using a single-threaded runtime and a multi-threaded one and see if they differ.
Description

These changes should enhance the quality of benchmark results by excluding worker initialization time from the measurements and reducing the overall duration of the benchmarks.

Integration

It should not affect any downstream projects.

Review Notes

- Workers initialize once per benchmark to avoid side effects.
- The listen address is assigned when a worker starts.
- Benchmarks are divided into two groups by size to create better charts for comparison.

Co-authored-by: GitHub Action <[email protected]>