We have two non-allocating hash-combining functions: xor and concat_and_hash. xor just XORs two hashes together. concat_and_hash concatenates the two 32-byte hashes into one 64-byte value and then hashes that to get a 32-byte hash out again.
xor is faster, but if you XOR A, B and C together you get the same result as C, B, A or A, C, B etc; ie the result is the same irrespective of the order of the things we're XORing. If you xor two identical hashes you end up with 0's everywhere, which can obscure a lot of mismatches (ie xor(A, A) == xor(B, B) == xor(C, C), so all of them would appear identical when they are in fact things we probably want to have different hashes). concat_and_hash is slower (it actually hashes), but concat_and_hash(A, B) gives different output to concat_and_hash(B, A), ie the order is preserved.
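The contrast above can be sketched as follows. This is a minimal illustration, not the project's actual code: the real hashes are 32 bytes, but the concrete hasher here (std's DefaultHasher, widened to 32 bytes) is purely a stand-in assumption.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

type H256 = [u8; 32];

// xor: combine two hashes without allocating. Commutative, and
// xor(a, a) is all zeroes.
fn xor(a: H256, b: H256) -> H256 {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = a[i] ^ b[i];
    }
    out
}

// concat_and_hash: hash both inputs in sequence, so argument order
// matters. DefaultHasher is a stand-in for the real 32-byte hasher;
// its 8-byte output is repeated to fill 32 bytes for this sketch.
fn concat_and_hash(a: H256, b: H256) -> H256 {
    let mut hasher = DefaultHasher::new();
    a.hash(&mut hasher);
    b.hash(&mut hasher);
    let h = hasher.finish().to_le_bytes();
    let mut out = [0u8; 32];
    for (i, byte) in out.iter_mut().enumerate() {
        *byte = h[i % 8];
    }
    out
}

fn main() {
    let a = [1u8; 32];
    let b = [2u8; 32];

    // Order doesn't matter for xor, and identical inputs cancel to zero.
    assert_eq!(xor(a, b), xor(b, a));
    assert_eq!(xor(a, a), [0u8; 32]);

    // concat_and_hash preserves order: swapping the arguments changes the output.
    assert_ne!(concat_and_hash(a, b), concat_and_hash(b, a));
}
```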
We use xor fairly liberally. We also allocate in a few places (ie allocate a vec, append some stuff to it, sort it, and hash that).
We should:
- Ensure that we use concat_and_hash everywhere order etc matters, and that we aren't over-using xor.
- See whether we can get rid of the allocations; can we just XOR eg pallet hashes together rather than do any sorting based on pallet names? Things like hashing the pallet name into the per-pallet hashes will help ensure they are unique.
- Think about validation in terms of DecodeAsType and EncodeAsType, ie if field names in some struct change places, that's mostly OK now for instance (this is an optimisation though; we can be stricter too if we want).
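One way the allocation-free idea in the second bullet could look in practice (a sketch only; metadata_hash, hash_bytes etc are hypothetical names, not the project's actual API): mix each pallet's name into its own hash with concat_and_hash, then XOR the per-pallet results together. The name baked into each per-pallet hash stops identical pallet bodies from cancelling out, and XOR makes the combined hash independent of iteration order, so no vec, sort or extra hash pass is needed.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

type H256 = [u8; 32];

// Stand-in hasher for the sketch (DefaultHasher widened to 32 bytes);
// the real code would use its actual 32-byte hash function here.
fn hash_bytes(bytes: &[u8]) -> H256 {
    let mut hasher = DefaultHasher::new();
    bytes.hash(&mut hasher);
    let h = hasher.finish().to_le_bytes();
    let mut out = [0u8; 32];
    for (i, b) in out.iter_mut().enumerate() {
        *b = h[i % 8];
    }
    out
}

// Order-preserving combination of two hashes.
fn concat_and_hash(a: H256, b: H256) -> H256 {
    let mut both = [0u8; 64];
    both[..32].copy_from_slice(&a);
    both[32..].copy_from_slice(&b);
    hash_bytes(&both)
}

// Order-insensitive, non-allocating combination.
fn xor(a: H256, b: H256) -> H256 {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = a[i] ^ b[i];
    }
    out
}

// Fold per-pallet hashes into one hash with no allocation or sorting:
// each pallet's name is mixed in via concat_and_hash first, and the
// XOR fold makes the result independent of iteration order.
fn metadata_hash<'a>(pallets: impl Iterator<Item = (&'a str, H256)>) -> H256 {
    pallets.fold([0u8; 32], |acc, (name, pallet_hash)| {
        xor(acc, concat_and_hash(hash_bytes(name.as_bytes()), pallet_hash))
    })
}

fn main() {
    let system = ("System", hash_bytes(b"system pallet"));
    let balances = ("Balances", hash_bytes(b"balances pallet"));

    // Iteration order doesn't matter, so no sorting by pallet name is needed.
    let forwards = metadata_hash([system, balances].into_iter());
    let backwards = metadata_hash([balances, system].into_iter());
    assert_eq!(forwards, backwards);

    // Two pallets with identical bodies but different names still contribute
    // different values, unlike a plain XOR of the bodies (which would cancel to zero).
    let same_body = hash_bytes(b"same");
    let h = metadata_hash([("A", same_body), ("B", same_body)].into_iter());
    assert_ne!(h, [0u8; 32]);
}
```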
Ultimately we want validation to be as fast as possible (so that people have as few reasons as possible to opt out) but also to protect as well as possible against things that DecodeAsType and EncodeAsType would consider different.