Make rand a dev-dependency #353
Conversation
Alternative to #352.

Looks cool :)

I think for portability reasons (e.g. hardware wallets, possible mobile devices, or other embedding) we probably don't want to use rand at all.
I meant in the dev dependencies |
Force-pushed 6625de1 to 2a1813b
Should be ready for review now.

};

let mut crypter = if get_slice!(1)[0] != 0 {
    let their_pubkey = match PublicKey::from_slice(get_slice!(33)) {
This will fail a lot. Every time the first byte isn't 0x02 or 0x03 it will fail.
Maybe add a function that generates a SecretKey and creates a PublicKey from it, or hard-code 0x02 and randomize the other 32 bytes (the former will at least guarantee success).
Yea, this test isn't really great, but it's also testing a very small surface area, so I'm not gonna worry too much about it.
Edit: As you told me before, get_slice!() doesn't use randomness.
So maybe using 0x02/0x03 as the first byte and the rest from the provided data should cover most errors.
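A minimal sketch of that suggestion, assuming the fuzz target's get_slice! macro and the rust-secp256k1 API used in the diff above: force a valid compressed-key prefix (0x02/0x03) and take the remaining 32 bytes from the fuzz input, so PublicKey::from_slice succeeds for most inputs instead of almost never.

```rust
// Sketch only: get_slice! and the surrounding context come from the fuzz target above.
let mut pk_bytes = [0u8; 33];
// Pick 0x02 or 0x03 based on one input byte so both parities still get exercised.
pk_bytes[0] = if get_slice!(1)[0] & 1 == 0 { 2 } else { 3 };
pk_bytes[1..].copy_from_slice(get_slice!(32));
let their_pubkey = match PublicKey::from_slice(&pk_bytes) {
    Ok(pk) => pk,
    // Even with a valid prefix, roughly half of all X coordinates are not on the
    // curve, so a failure path is still needed.
    Err(_) => return,
};
```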
@@ -1309,7 +1308,8 @@ fn do_channel_reserve_test(test_recv: bool) {
let secp_ctx = Secp256k1::new();
let session_priv = SecretKey::from_slice(&{
Same.
You can use SecretKey::new(&mut Rng)
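For test code, a minimal sketch of that suggestion, assuming rand is available as a dev-dependency and the rust-secp256k1 API of this era:

```rust
// Sketch only: generate a random, always-valid secret key in tests instead of
// building one from a literal byte array via SecretKey::from_slice.
use rand::thread_rng;
use secp256k1::key::SecretKey;

let session_priv = SecretKey::new(&mut thread_rng());
```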
Force-pushed 56b2398 to 630a3ba
ariard left a comment:

I'm just a bit skeptical about starting_time in the KeysManager interface. That said, that's just our own KeysManager, and I'm pretty sure users will come up with their own KeysInterface, but we should at least be clearer about seed storage requirements.

I only briefly reviewed the changes to the fuzzing tests.
/// starting_time isn't strictly required to actually be a time, but it must absolutely,
/// without a doubt, be unique to this instance. ie if you start multiple times with the same
/// seed, starting_time must be unique to each run. Thus, the easiest way to achieve this is to
/// simply use the current time (with very high precision).
Hmmm, isn't this change making starting_time part of the user's channel keys? Now they would have to back up the provided current time, maybe with nanosecond precision, which isn't great. Furthermore, I understand why you want a unique seed + nonce for every lightning instance, but not for every run of the same instance; you want to be able to derive channel_keys again.

Reading this whole interface again, IMO we should do better at explaining to the user what they need to reliably back up their funds.
Cleaned up the comment. Is it better now?
Well, I would add a last sentence like: "You MUST back up the seed. Your seed is needed to recover the outpoints closing the channel (destination_key, shutdown_pubkey). The seed alone can't recover in-channel funds, so you MUST back up individual channels too. The reason for starting_time is to get new ephemeral key data; it can't help you know channel state."
Should we also ask users to back up the software version, in case we update our derivation scheme?
Ehh, I don't really want to commit to a versioning scheme just yet. Updated the docs to indicate that we will have something in the future, but for now, expect funds loss when upgrading.
Okay, seems good to me; that's clear enough for users that they are on their own here!
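A hedged sketch of how a user of this interface might derive the starting_time pair discussed above: take the current wall-clock time with nanosecond precision, which is unique per run with very high probability. How the two values are then passed to KeysManager::new depends on the exact constructor signature in the rust-lightning version in use, so only the time derivation is shown here.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Unique-per-run values for KeysManager; the seed itself still MUST be backed up
// separately, since starting_time only guarantees key uniqueness across runs and
// does not help recover channel state.
let now = SystemTime::now().duration_since(UNIX_EPOCH).expect("time went backwards");
let starting_time_secs: u64 = now.as_secs();
let starting_time_nanos: u32 = now.subsec_nanos();
```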
//
// 0c007d - connect a block with one transaction of len 125
| // 02000000013f00000000000000000000000000000000000000000000000000000000000000000000000000000080020001000000000000220020e2000000000000000000000000000000000000000000000000000000000000006cc10000000000001600142e0000000000000000000000000000000000000005000020 - the commitment transaction for channel 3f00000000000000000000000000000000000000000000000000000000000000 | ||
| // 02000000013f0000000000000000000000000000000000000000000000000000000000000000000000000000008002000100000000000022002090000000000000000000000000000000000000000000000000000000000000006cc10000000000001600145c0000000000000000000000000000000000000005000020 - the commitment transaction for channel 3f00000000000000000000000000000000000000000000000000000000000000 |
A quick git blame would point towards me with a2b6a76; IIRC with that one I added a few more connected blocks to hit the delay of passing failures backward. That test passed Travis; is it the mix of our changes which breaks it?
Probably? It doesn't really matter all that much.
let mut key = [0u8; 32];
rng::fill_bytes(&mut key);

pub fn new_outbound(their_node_id: PublicKey, ephemeral_key: SecretKey) -> PeerChannelEncryptor {
This is part of our public interface; shouldn't this get its own comment, especially on what we require in terms of entropy for the new ephemeral_key parameter?
Oh, I saw the comment in peer_handler, but maybe you could invite readers to read the one there!
peer_channel_encryptor is only exposed privately? Otherwise it would be a build failure due to lack of docs.
let high = if low == 0 {
    self.peer_counter_high.fetch_add(1, Ordering::AcqRel)
} else {
    self.peer_counter_high.load(Ordering::Acquire)
Oh, that's interesting; it means on 32-bit platforms we have a 2^1024 monotonic counter to use as SHA-256 input for ephemeral key generation, right?
Where'd you get 2^1024? Two 32-bit counters make a 64-bit counter.
You're right, I've screwed up my calculation.
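A minimal sketch of the two-counter scheme shown in the diff above, assuming the surrounding struct holds two AtomicUsize fields named as in the snippet; on 32-bit platforms this yields a 64-bit monotonic value suitable as unique input to ephemeral key derivation (concurrent wrap handling is simplified here).

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

struct PeerCounters {
    // On 32-bit targets each of these is a 32-bit atomic.
    peer_counter_low: AtomicUsize,
    peer_counter_high: AtomicUsize,
}

impl PeerCounters {
    fn next(&self) -> u64 {
        // fetch_add returns the previous value, so wrapping_add gives the new low word.
        let low = self.peer_counter_low.fetch_add(1, Ordering::AcqRel).wrapping_add(1);
        let high = if low == 0 {
            // The low word just wrapped around, so advance the high word.
            self.peer_counter_high.fetch_add(1, Ordering::AcqRel).wrapping_add(1)
        } else {
            self.peer_counter_high.load(Ordering::Acquire)
        };
        ((high as u64) << 32) | ((low as u64) & 0xffff_ffff)
    }
}
```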
PendingHTLCsForwardable {
    /// The amount of time that should be waited prior to calling process_pending_htlc_forwards
    /// The minimum amount of time that should be waited prior to calling
    /// process_pending_htlc_forwards. To increase the effort required to correlate payments,
But if you're already on the payment path, can't you already correlate payments using the hashes? What kind of analysis are you trying to break here?
Hmm... I guess I haven't formalized it, but there's almost certainly some stuff you can learn if the timing is super, super consistent, e.g. you could learn the number of hops that something took just by looking at the timing. This hides that at least somewhat, or for individual payments.
I would argue that if you hold two spots on the same payment path, you can guess the number of hops by recomputing the per-hop fees between your A and your B, given fees are public. Off-topic, but as you said, if the user is willing to wait it shouldn't hurt.
Right, but a user could have a more privacy-conscious routing algorithm, and we don't want to reveal info unless we have to. I think it's generally a good idea, but agreed that more study and a formal threat model would be better.
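A hedged usage sketch of the event above: the caller waits at least the provided minimum (the field name time_forwardable and a Duration payload are assumed here), optionally adds jitter of their own choosing, and only then forwards. The extra_delay value is illustrative.

```rust
use std::thread;
use std::time::Duration;

match event {
    Event::PendingHTLCsForwardable { time_forwardable } => {
        // time_forwardable is a minimum; callers may add their own jitter on top.
        let extra_delay = Duration::from_millis(50); // illustrative value
        thread::sleep(time_forwardable + extra_delay);
        channel_manager.process_pending_htlc_forwards();
    }
    _ => { /* handle other events */ }
}
```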
Force-pushed 630a3ba to 8f487eb
They were only used for ensuring generated keys were globally unique (ie in case the user opened the same seed at a different time, we need generated keys to be globally unique). Instead, we let the user specify a time in secs/nanos, and provide a precise meaning for the user to understand.
This removes the bulk of our reliance on the rand crate in non-test envs, paving a way towards a syscall-less rust-lightning and WASM. Since this is a breaking change for full_stack_target (and several fuzz targets), go ahead and make other changes to make things more distinct.
This removes the last calls to rand outside of test and moves the dep to a dev-dependency, dropping our fuzz rng wrapper in the process.
Force-pushed 8f487eb to bf7eeb1
This moves the two places we called rand in regular operation into parameters, making rand a dev-dependency and (hopefully) fully supporting WASM. There's still a few things to do tomorrow, but this is 95% there.