
Move away from traits implemented for specific Uint sizes? #793

@fjarri

Some traits are currently only implemented for specific Uint sizes, like Encoding, ConcatMixed/SplitMixed, WideningMul, and some others. This creates problems when working with big Uints of non-standard sizes, and increases compilation time because of all the macros creating the implementations.

I wonder if there's any desire to try to make these traits implementable generically, or to add analogous traits that can be implemented generically and exist side by side with the current ones?

E.g. in my library I would like to use ConcatMixed, but can't, since it's only implemented up to U1024. So instead I have a trait Extendable, defined as:

pub trait Extendable<Wide: Sized>: Sized {
    fn to_wide(&self) -> Wide;
    fn try_from_wide(value: &Wide) -> Option<Self>;
}

impl<const L: usize, const W: usize> Extendable<Uint<W>> for Uint<L> {
    fn to_wide(&self) -> Uint<W> {
        const {
            if W < L {
                panic!("Inconsistent widths in `Extendable::to_wide()`");
            }
        }
        ...
    }
    ...
}

That is, the output length has to be provided explicitly, and if it is not consistent with the Uint length, a compile-time error is raised. Arguably, this is a clearer compile-time error than the one you get if you try to use ConcatMixed with a type for which it is not implemented. The downside, of course, is that the error is only triggered if to_wide() is actually used somewhere in your code.
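For illustration, a call site might look like this (a hypothetical sketch, assuming 64-bit limbs so that Uint<2> is a U128):

let x = Uint::<2>::ONE;
let wide: Uint<4> = x.to_wide();   // fine: 4 >= 2
// let bad: Uint<1> = x.to_wide(); // fails at compile time once instantiated: 1 < 2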

I have a similar trait for widening multiplication, with a compile-time check that the sum of the lengths of the two arguments is equal to the output length.
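Roughly, that trait has the following shape (a sketch with hypothetical names, following the same pattern as Extendable above):

use crypto_bigint::Uint;

pub trait MulWide<Rhs, Wide> {
    fn mul_wide(&self, rhs: &Rhs) -> Wide;
}

impl<const L: usize, const R: usize, const W: usize> MulWide<Uint<R>, Uint<W>> for Uint<L> {
    fn mul_wide(&self, _rhs: &Uint<R>) -> Uint<W> {
        const {
            if L + R != W {
                panic!("Inconsistent widths in `MulWide::mul_wide()`");
            }
        }
        // the multiplication itself is elided here, as in `to_wide()` above
        todo!()
    }
}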

For serialization, I have

pub trait BoxedEncoding: Sized {
    fn to_be_bytes(&self) -> Box<[u8]>;
    fn try_from_be_bytes(bytes: &[u8]) -> Result<Self, String>;
}

The problem with having it defined locally is that I have to wrap Uints in a newtype in order to use it.
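Concretely, the workaround looks something like this (a sketch; the newtype name and the elided bodies are hypothetical):

pub struct WrappedUint<const LIMBS: usize>(pub Uint<LIMBS>);

impl<const LIMBS: usize> BoxedEncoding for WrappedUint<LIMBS> {
    fn to_be_bytes(&self) -> Box<[u8]> {
        todo!() // big-endian serialization of the inner Uint, elided
    }
    fn try_from_be_bytes(_bytes: &[u8]) -> Result<Self, String> {
        todo!() // length check and big-endian deserialization, elided
    }
}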

Alternatively, it is possible to implement generic non-allocating Serialize/Deserialize for Uint (without it having to depend on Encoding), but it requires the use of zerocopy or bytemuck to transmute between &[Limb] and &[u8].
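As a rough sketch of the Serialize half (this impl would have to live in crypto-bigint itself because of the orphan rule; it assumes bytemuck's Pod/NoUninit impls for the native word type, and the bytes come out in native limb order and endianness):

use bytemuck::cast_slice;
use crypto_bigint::Uint;
use serde::{Serialize, Serializer};

impl<const LIMBS: usize> Serialize for Uint<LIMBS> {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        // `as_words()` exposes the limbs as native words (u64/u32), which bytemuck
        // can reinterpret as bytes without copying or allocating.
        serializer.serialize_bytes(cast_slice(self.as_words().as_slice()))
    }
}

// Deserialize would mirror this with a bytes visitor, casting &[u8] back into
// words (taking care of alignment, e.g. via `bytemuck::try_cast_slice` or a
// copy into a zeroed word array).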

Any interest in bringing any of this into crypto-bigint?
