
Conversation

@sosthene-nitrokey
Contributor

No description provided.

@sosthene-nitrokey sosthene-nitrokey force-pushed the chained-tuples branch 2 times, most recently from 91a1ec1 to 703d9b1 Compare February 1, 2025 18:07
@algesten
Owner

algesten commented Feb 1, 2025

One problem with this approach is that macros are re-expanded and re-compiled every compilation. I don't believe there is any caching for them.

I've been meaning to revisit my use of macros for stuff like get/head/post etc to unroll them and get rid of macros altogether.

This is kinda taking us backwards unless we unroll these macros upfront as well.
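To make the "unrolling" idea concrete, here is a hedged sketch (placeholder names, not the actual ureq code) of the difference between a declarative macro that stamps out per-verb shim functions, which the compiler re-expands on every build, and the same function written out by hand:

```rust
// A declarative macro generating one shim function per HTTP verb.
// It is re-parsed and re-expanded on every compilation.
macro_rules! http_verb {
    ($name:ident, $method:expr) => {
        pub fn $name(uri: &str) -> Request {
            Request { method: $method, uri: uri.to_string() }
        }
    };
}

http_verb!(get, "GET");
http_verb!(head, "HEAD");

// "Unrolled" equivalent: the same function written out by hand,
// so the compiler parses it directly without a macro expansion step.
pub fn post(uri: &str) -> Request {
    Request { method: "POST", uri: uri.to_string() }
}

#[derive(Debug)]
pub struct Request {
    pub method: &'static str,
    pub uri: String,
}
```

The trade-off discussed below is exactly this: unrolled code avoids repeated expansion but duplicates what the macro expressed once.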

@sosthene-nitrokey
Contributor Author

I thought this was possible:

```rust
#[derive(Debug)]
pub struct DefaultConnector {
    inner: ChainedConnector<(), (
        #[cfg(feature = "_test")]
        test::TestConnector,
        #[cfg(feature = "socks-proxy")]
        SocksConnector,
        #[cfg(feature = "socks-proxy")]
        WarnOnNoSocksConnector,
        TcpConnector,
        #[cfg(feature = "rustls")]
        RustlsConnector,
        #[cfg(feature = "native-tls")]
        NativeTlsConnector,
        #[cfg(feature = "_tls")]
        WarnOnMissingTlsProvider,
    )>,
}
```

but it's not possible: the elements of a tuple type cannot be gated behind `#[cfg()]` attributes. That makes this PR slightly less interesting.

I still think it can lead to better type definitions and error messages.
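A hedged minimal reproduction of the limitation (placeholder types, not the real connectors): `#[cfg]` is accepted on the fields of a named struct, but not on the elements of a plain tuple type, which is why the `ChainedConnector` tuple above cannot be feature-gated per element.

```rust
#[derive(Debug, Default)]
pub struct TcpConnector;

#[derive(Debug, Default)]
pub struct SocksConnector;

// Accepted: #[cfg] on named struct fields.
#[derive(Debug, Default)]
pub struct Connectors {
    #[cfg(feature = "socks-proxy")]
    pub socks: SocksConnector,
    pub tcp: TcpConnector,
}

// Rejected by the compiler: attributes cannot appear inside a tuple
// *type*, so this does not parse:
//
// type Chain = (
//     #[cfg(feature = "socks-proxy")] SocksConnector,
//     TcpConnector,
// );
```

The usual workaround is a named struct like `Connectors` above, at the cost of losing the generic tuple-based chaining.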

@sosthene-nitrokey
Contributor Author

sosthene-nitrokey commented Feb 3, 2025

I expanded the macros, but benchmarking shows no significant performance improvement.

Hyperfine for touch src/lib.rs && cargo c prior to expansion:

```
Benchmark 1: touch src/lib.rs && cargo c
  Time (mean ± σ):     280.0 ms ±   4.3 ms    [User: 190.5 ms, System: 86.8 ms]
  Range (min … max):   272.2 ms … 285.0 ms    10 runs
```

For the commit after expansion:

```
Benchmark 1: touch src/lib.rs && cargo c
  Time (mean ± σ):     282.1 ms ±   5.9 ms    [User: 196.4 ms, System: 82.5 ms]
  Range (min … max):   272.4 ms … 293.6 ms    10 runs
```

Running the benchmark multiple times gives inconsistent results: the difference between the two commits is too small to distinguish from the noise. I don't think the duplicated code is worth the added maintenance burden.

Note that by the same logic it would make sense to pre-expand all the `#[derive()]` macros, and that might have a larger impact.
