On 3 Jan 2016, at 12:18, Ryan Carboni ryacko@gmail.com wrote:
And yet the NSA is moving to prime numbers.
A large public key isn't a very good reason not to adopt quantum-safe crypto; it just means the Tor project needs to be able to scale further. I suggest hash tables, a percentage of which are pseudorandomly downloaded. Otherwise the Tor project won't scale to 10x the relays ... even ignoring quantum cryptography.
We had a GSOC project to produce "consensus diffs", so that clients could download the differences between each consensus each hour, rather than downloading a full consensus (~1.5MB).
It showed some great results, but still needs a little work before we merge it. https://trac.torproject.org/projects/tor/ticket/13339
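A minimal sketch of the consensus-diff idea, assuming a line-oriented consensus format; the relay entries and field layout below are invented for illustration, not Tor's actual format:

```python
# Sketch of the "consensus diff" idea (ticket #13339): instead of
# fetching a full consensus every hour, a client that already holds
# the previous consensus fetches only a diff and patches locally.
# The document contents here are illustrative, not real dir-spec data.
import difflib

old_consensus = "\n".join(
    f"r relay{i} fingerprint{i} 10.0.0.{i % 255} 9001" for i in range(1000)
)

# An hour later, only a handful of entries have changed.
new_lines = old_consensus.splitlines()
new_lines[42] = "r relay42 fingerprint42 10.0.1.42 9001"
new_consensus = "\n".join(new_lines)

diff = "\n".join(
    difflib.unified_diff(
        old_consensus.splitlines(), new_consensus.splitlines(), lineterm=""
    )
)

# The diff is a tiny fraction of the full document.
print(len(new_consensus), len(diff))
```

The win comes from the fact that most relay entries are unchanged between consecutive consensuses, so the diff carries only the churned entries plus a little context.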
Tim
Tim Wilson-Brown (teor)
teor2345 at gmail dot com PGP 968F094B
teor at blah dot im OTR CAD08081 9755866D 89E2A06F E3558B7F B5A9D14F
On Sat, 2 Jan 2016 17:18:56 -0800 Ryan Carboni ryacko@gmail.com wrote:
And yet the NSA is moving to prime numbers.
So? In terms of prioritization, ensuring all existing traffic isn't subject to later decryption is far more important than defending against targeted active attacks that require hardware that doesn't exist yet.
A large public key isn't a very good reason not to adopt quantum-safe crypto; it just means the Tor project needs to be able to scale further. I suggest hash tables, a percentage of which are pseudorandomly downloaded. Otherwise the Tor project won't scale to 10x the relays ... even ignoring quantum cryptography.
Nope. Every client needs to know the public key of every relay or we're worse off vs active attackers.
To put numbers on the bandwidth/storage overhead of having SPHINCS256 keys for every relay: currently the full list of microdescriptors for a consensus is ~3.2 MiB, with 6960 relays.
This is roughly 9.3 MiB of extra information that would need to be downloaded as directory information, and ~41 KiB of extra traffic per hop as part of the circuit build process.
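For the curious, those figures can be roughly reproduced from the published SPHINCS-256 parameter sizes (1056-byte public keys, 41000-byte signatures); the base64 factor is my assumption about how the keys would be encoded in directory documents:

```python
# Back-of-the-envelope check of the directory overhead quoted above.
RELAYS = 6960
PK_BYTES = 1056      # SPHINCS-256 public key size
SIG_BYTES = 41000    # SPHINCS-256 signature size

# Assumption: keys appear base64-encoded in directory documents
# (4 output bytes per 3 input bytes).
encoded_pk = PK_BYTES * 4 / 3

extra_dir_mib = RELAYS * encoded_pk / 2**20
extra_hop_kib = SIG_BYTES / 1024

print(f"extra directory info: ~{extra_dir_mib:.1f} MiB")
print(f"extra per-hop traffic: ~{extra_hop_kib:.0f} KiB")
```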
Additionally, without AVX2, signing is glacially slow, clocking in at ~200 ms on a Haswell i5. The same hardware does our existing ntor handshake in ~230 usec. Increasing the amount of work each hop needs to do to establish a circuit by three orders of magnitude, to the point where a single core on a relatively modern processor can process only 5 circuit creations per second, would kill the Tor network.
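A quick sanity check on those throughput claims, derived purely from the two timings quoted above:

```python
# Throughput implied by the quoted timings: ~200 ms per SPHINCS-256
# signature without AVX2, vs ~230 us per ntor handshake.
sphincs_sign_s = 200e-3
ntor_handshake_s = 230e-6

sphincs_per_sec = 1 / sphincs_sign_s   # circuit creations/sec/core
ntor_per_sec = 1 / ntor_handshake_s    # ntor handshakes/sec/core
slowdown = sphincs_sign_s / ntor_handshake_s

print(f"{sphincs_per_sec:.0f}/s vs {ntor_per_sec:.0f}/s "
      f"(~{slowdown:.0f}x slower, i.e. ~3 orders of magnitude)")
```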
(I'm done arguing over this. If you think relays should have PQ signature based identity keys, then feel free to write a patch. I view other things as more important, and will focus my efforts elsewhere.)
Regards,
On Sat, Jan 2, 2016 at 10:22 PM, Yawning Angel yawning@schwanenlied.me wrote:
In terms of prioritization, ensuring all existing traffic isn't subject to later decryption is far more important
I'd think so, since you can adapt around other things, but a traffic decrypt seems quite bad, especially given how much is stored in purpose-built agency farms for later action, and given that who's talking to whom is perhaps already known.
Additionally, without AVX2, signing is glacially slow, clocking in at ~200 ms on a Haswell i5. The same hardware does our existing ntor handshake in ~230 usec.
Haswell i5s seem to have AVX2, as do all Haswells; perhaps you're referring to Ivy Bridge i5s, which do not...
https://software.intel.com/en-us/blogs/2011/06/13/haswell-new-instruction-de... https://en.wikipedia.org/wiki/Haswell_(microarchitecture)#New_features https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#AVX2 https://en.wikipedia.org/wiki/List_of_Intel_Core_i5_microprocessors
On Sun, 3 Jan 2016 04:16:17 -0500 grarpamp grarpamp@gmail.com wrote:
Just another link.
None of those algorithms will hold up to a quantum computer, and apart from TLS (where we use the NIST curves) we already use "safe" Curve25519/Ed25519.
So I don't know why you're bringing it up. This discussion is about how to prevent a total disaster in the event of a Curve25519 break.
nb: Migrating to X448 would possibly hold up longer than Curve25519, since breaking it requires a bigger quantum computer. But its performance isn't that great without vectorization.
Additionally, without AVX2, signing is glacially slow, clocking in at ~200 ms on a Haswell i5. The same hardware does our existing ntor handshake in ~230 usec.
Haswell i5s seem to have AVX2, as do all Haswells; perhaps you're referring to Ivy Bridge i5s, which do not...
Or, perhaps I meant exactly what I said: the implementation I happened to benchmark (which, coincidentally, I happened to write) does not use AVX2, since it was written to be portable, and I wanted non-vectorized performance numbers.
I know the algorithm is faster when vectorized, but that does little good for what I suspect is a substantial fraction of the relays.