```rust
.with_auth_component(AuthSingleSig::new(
    PublicKeyCommitment::from(EMPTY_WORD),
    AuthScheme::Falcon512Rpo,
))
```

If this is such a common pattern, should we maybe make this easier?

```rust
impl AuthSingleSig {
    fn falcon_512(pub_key: Falcon512PublicKey) -> Self { ... }
}
```

Though I don't have enough insight into how these various components play together to really know what the right API is.
This could work, but IMO it would unnecessarily couple the enum definition to the API of `AuthSingleSig`:

```rust
#[non_exhaustive]
pub enum AuthScheme {
    Falcon512Rpo = FALCON_512_RPO,
    EcdsaK256Keccak = ECDSA_K256_KECCAK,
}
```

Whenever the enum changes, we'd need to adapt the constructors of `AuthSingleSig`. It's not a huge deal, but if I were to choose I'd probably avoid this coupling.
But the enum definition is tied to it already? Or why else do you need both the enum and the public key in `AuthSingleSig::new(..)`? Right now it's possible to pass in effectively any value, which seems very incorrect.
> Right now it's possible to pass in effectively any value, which seems very incorrect.

That's a separate question in my mind: are you talking about validating that the passed pub key is consistent with the specified scheme?

This could be achieved with the current interface:

```rust
/// Creates a new [`AuthSingleSig`] component with the given `public_key`.
pub fn new(pub_key: PublicKeyCommitment, auth_scheme: AuthScheme) -> Self {
    match auth_scheme {
        // TODO: verify that the pub key is valid w.r.t. the supplied scheme
        _ => {}
    }
    Self { pub_key, auth_scheme }
}
```

or with your suggested interface:

```rust
impl AuthSingleSig {
    fn falcon_512(pub_key: Falcon512PublicKey) -> Self {
        // TODO: verify that pub_key is a valid falcon key
        Self {
            pub_key: pub_key.into(),
            auth_scheme: AuthScheme::Falcon512Rpo,
        }
    }
}
```

Having an `AuthSingleSig::falcon_512` constructor would be purely for convenience, but unless we decide to add validation, it won't solve "pass in effectively any value which seems very incorrect."
My suggested interface does not require validation, because the type already *is* that type, which is kind of my point. If a user has `t: T`, why do we ask them to use `new(t.to_string(), t.type())` instead of just taking `new_t(t)`?

Or in other words, we should use types which are already validated instead of doing validation inside of other places, or taking in vague blobs as an API.
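This principle is often summarized as "parse, don't validate": once a value has the scheme-specific type, the scheme can never disagree with it. A minimal self-contained sketch of the typed-constructor idea, using hypothetical stub types (the real `Falcon512PublicKey`, `PublicKeyCommitment`, and `AuthScheme` in the protocol crates differ):

```rust
// Stub types for illustration only; the real protocol-crate types differ.
#[derive(Clone, Copy, Debug, PartialEq)]
enum AuthScheme {
    Falcon512Rpo,
}

/// Stand-in for a scheme-specific key: constructing one implies it is
/// already a valid Falcon key, so no later validation is needed.
struct Falcon512PublicKey([u8; 4]);

/// The scheme-agnostic commitment that all keys "reduce into".
#[derive(Debug, PartialEq)]
struct PublicKeyCommitment([u8; 4]);

impl From<Falcon512PublicKey> for PublicKeyCommitment {
    fn from(pk: Falcon512PublicKey) -> Self {
        // A real conversion would hash/reduce the key; identity here.
        PublicKeyCommitment(pk.0)
    }
}

struct AuthSingleSig {
    pub_key: PublicKeyCommitment,
    auth_scheme: AuthScheme,
}

impl AuthSingleSig {
    /// Typed constructor: scheme and key can never disagree.
    fn falcon_512(pub_key: Falcon512PublicKey) -> Self {
        Self {
            pub_key: pub_key.into(),
            auth_scheme: AuthScheme::Falcon512Rpo,
        }
    }
}

fn main() {
    let auth = AuthSingleSig::falcon_512(Falcon512PublicKey([1, 2, 3, 4]));
    assert_eq!(auth.auth_scheme, AuthScheme::Falcon512Rpo);
    assert_eq!(auth.pub_key, PublicKeyCommitment([1, 2, 3, 4]));
    println!("constructed AuthSingleSig with {:?}", auth.auth_scheme);
}
```

The caller never handles a raw commitment blob directly; the conversion lives in one place (the `From` impl) instead of being re-validated at every call site.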
Oh I see, so instead of using `PublicKeyCommitment` (which both falcon and ecdsa "reduce into"), you'd require the helper constructors to take the individual scheme's pk (like `Falcon512PublicKey`, not just a commitment to it).

I think this could work.
Yeah :) Because what ends up happening for me is that "reducing"/`From`/`Into` aren't trivially discoverable, and then it takes me a while to figure out how to produce the correct blob from my actual data.
Ok, thanks for your patience in explaining :) I'm glad we're on the same page now. I created an issue for this.
**juan518munoz** left a comment:

Looks good, thanks!
oops misclicked 😅
Updates all protocol crates to rev `3154a3` and adapts to the following interface changes:

**Authentication**

- `AuthFalcon512Rpo::new(pk)` replaced with `AuthSingleSig::new(pk, AuthScheme::Falcon512Rpo)`
- `create_basic_wallet` now takes `AuthMethod` instead of `AuthScheme` as a parameter

**AggLayer (in genesis tests)**

- `create_existing_bridge_account` now requires `bridge_admin_id` and `ger_manager_id` (which are created as two `BasicWallet`s)
- `create_existing_agglayer_faucet` now requires `token_supply`, `origin_token_address`, `origin_network`, `scale`. Cap `max_decimals = 8` (prev. 18 to match Ethereum, but now we enforce `max_decimals < 12`; we will eventually need to adjust `scale` and/or `max_decimals`, but this is WIP in the protocol repo)