[tor-dev] [proposal] Post-Quantum Secure Hybrid Handshake Based on NewHope

bancfc at openmailbox.org bancfc at openmailbox.org
Wed May 18 00:39:41 UTC 2016


On 2016-05-16 18:53, isis agora lovecruft wrote:
> 
> Hello,
> 
> I am similarly excited to see a more comprehensive write up on the
> NTRU Prime idea from Dan's blog post several years ago on the idea
> for a subfield-logarithm attack on ideal-lattice-based
> cryptography. [0]  The idea to remove some of the ideal structure
> from the lattice, while still aiming to keep a similarly high,
> estimated minimum, post-quantum security strength as NewHope
> (~2^128 bits post-quantum security and ~2^215 classical for NTRU
> Prime, versus ~2^206 bits post-quantum and ~2^229 classical for
> NewHope) and speed efficiencies competitive with NewHope, [1] by
> altering the original NTRU parameters is very exciting, and I'm
> looking forward to more research w.r.t. the ideas of the original
> blog post (particularly the exploitation of the ideal structure).
> Additionally, the Toom6 decomposition of the "medium-sized"
> 768-degree polynomial in NTRU Prime in order to apply Karatsuba is
> quite elegant.  Also, I'm glad to see that my earlier idea [2] to
> apply a stable sorting network in order to generate valid
> polynomial coefficients in constant-time is also suggested within
> the NTRU Prime paper.
> 
> However, from the original NTRU Prime blog post, Dan mentioned
> towards the end: "I don't recommend actually using NTRU Prime
> unless and until it survives years of serious cryptanalytic
> attention, including quantitative evaluation of specific parameter
> choices."  Léo Ducas, one of the NewHope authors, has responded to
> the NTRU Prime paper with a casual cryptanalysis of its security
> claims, [3] mentioning that "A quick counter-analysis suggests the
> security of the proposal is overestimated by about 75 bits",
> bringing NTRU Prime down to ~2^140 classical security.

As you say, I think the security reduction is steep but not 
catastrophic. However, when I first saw the NTRU Prime blog post, I 
interpreted it to mean "it's very likely that the powerful attack 
against the Smart–Vercauteren system can be extended to lattice-based 
cryptosystems in general, which would completely break them". [0] 
This brings up another point that digresses from the discussion:

Dan and Tanja support more conservative systems like McEliece because 
it has survived decades of attacks. In the event that cryptanalysis 
eliminates lattice crypto, McEliece will remain the only viable and 
well-studied alternative. Are McEliece key sizes so prohibitive that 
they could never make it into Tor? Could the size problem be balanced 
against longer re-keying intervals for PFS - say, once every 6 hours 
or more instead of once per connection (there are probably many other 
changes needed to accommodate it)? They have worked on trading longer 
decryption times for smaller keys in their McBits implementation, [1] 
but the keys are still nowhere near as small as lattice-based ones 
(McEliece has very fast encoding/decoding, so the computation works 
out). With the average webpage being 2 MB in size, larger keys may 
not be that bad? Another interesting strategy for 
performance/efficiency is slicing public keys and communicating the 
pieces in parallel. [2]
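
To make the re-keying tradeoff concrete, here is a back-of-the-envelope 
sketch in Python. The key size and circuit rate below are assumptions 
for illustration (McBits parameters range from tens of kilobytes up to 
around a megabyte depending on security level), not measurements:

    # Amortizing a large McEliece public key over a 6-hour re-keying
    # window, versus sending a fresh key with every handshake.
    # All numbers are illustrative assumptions, not measurements.
    MCELIECE_PUBKEY_BYTES = 1_000_000  # assumed ~1 MB high-security key
    REKEY_INTERVAL_HOURS  = 6
    CIRCUITS_PER_HOUR     = 12         # assumed circuits a client builds
                                       # through a given relay per hour

    # Cost if the key travels with every single handshake:
    per_handshake = MCELIECE_PUBKEY_BYTES

    # Cost if a client fetches the key once per re-keying window and
    # reuses it for every circuit built during that window:
    circuits_per_window = REKEY_INTERVAL_HOURS * CIRCUITS_PER_HOUR
    amortized = MCELIECE_PUBKEY_BYTES / circuits_per_window

    print(f"fresh key per handshake: {per_handshake} bytes/circuit")
    print(f"6-hour reuse:            ~{amortized:,.0f} bytes/circuit")

With these made-up numbers the per-circuit cost drops from ~1 MB to 
~14 KB; that is the kind of arithmetic that would have to be weighed 
against the PFS loss from the longer window.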


> 
> Current estimates on a hybrid BKZ+sieving attack combined with
> Dan's subfield-logarithm attack, *if* it proves successful someday
> (which is still uncertain), would (being quite generous towards the
> attacker) roughly halve the pre-quantum security bits for n=1024
> (since the embedded subfield tricks are probably not viable),
> bringing NewHope down to 103/114 bits.  For the case of the hybrid
> handshake in this proposal, it still doesn't matter, because the
> attacker would also need to break X25519, which keeps its 2^128
> bits of security.  (Not to mention that 103 bits of post-quantum
> security is not terrible, considering that the attacker still
> needs to do 2^103 computations for each and every Tor handshake
> she wants to break, because keys are not reused.)
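
The "attacker must break both" property falls out of how a hybrid 
handshake derives its session key: both shared secrets feed a single 
KDF. A minimal sketch of the idea in Python (the proposal specifies 
its own KDF and transcript inputs; the names and hash choice here are 
placeholders):

    import hashlib

    def hybrid_key(x25519_shared: bytes, newhope_shared: bytes) -> bytes:
        # Both secrets are hashed together, so recovering the session
        # key requires recovering *both* inputs: an attacker who breaks
        # the lattice side still faces the full 2^128-bit security of
        # X25519, and vice versa.
        return hashlib.sha3_256(x25519_shared + newhope_shared).digest()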
> 
> Please feel free to correct me if I've misunderstood something.  IANAC
> and all that.
> 
> Further, there are other problems in the NTRU Prime paper:
> 
>  1. Defining PFS as "the server erases its key every 60 seconds"
>     seems arbitrary and a bit strange.  It also makes the claims
>     hard to analyse in comparison with the NTRU literature (where,
>     as far as I know, it's never stated whether or not keys may be
>     reused, and what downsides might come with that) as well as
>     with NewHope (where it's explicitly stated that keys should
>     not be reused).
> 
>  2. In Fig. 1.2, the number of bytes sent by a NewHope client is
>     1792, not 2048.  (To be fair, it was 2048 bytes in an earlier
>     version.)
> 
>  3. The bandwidth estimates for NTRU Prime do not take into
>     account that, due to providing a key-encapsulation mechanism
>     rather than a key exchange, the client must already know the
>     server's long-term public encryption key, in order that the
>     client may encrypt its public key to the server in the first
>     round of the handshake.
> 
>     Further, and more specifically in the context of Tor
>     handshakes, this requires that Tor either add a second round
>     to the handshake (for which we currently have no mechanism,
>     nor proposed mechanism), or else that we add the server's NTRU
>     Prime key to the server's (micro)descriptor, increasing
>     descriptor size by 1232 bytes per relay.  For the case of
>     adding these keys to microdescriptors, where the average
>     microdescriptor size is currently 416 bytes [4] and ~1 Gb/s is
>     spent on requests to the directories for descriptors, [5]
>     increasing the average microdescriptor size to 1648 bytes
>     would roughly quadruple the amount of network traffic used by
>     directories to respond to fetches (and then double that number
>     again, because additional bandwidth is used through the
>     client's guard relay, since most directory fetches are
>     tunneled).  This means that any relay providing < ~1.142 Mb/s
>     of bandwidth would use up more network bandwidth telling
>     everything else in the network about its existence and keys
>     than the relay itself actually provides.  This means that 3840
>     out of the current 7100 relays become a burden on the network.
>     This might be doable, [6] but I just wanted to point out how
>     much it matters for us not to need to add giant keys to
>     descriptors.
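
For anyone checking the arithmetic, here it is restated in Python 
with the figures quoted above (the break-even comes out near the 
~1.142 Mb/s number; small differences are rounding in the inputs):

    # Directory-overhead arithmetic for adding NTRU Prime keys to
    # microdescriptors, using the figures cited above.
    AVG_MICRODESC_BYTES = 416     # current average microdescriptor [4]
    NTRU_PRIME_KEY      = 1232    # bytes added per relay
    DIR_TRAFFIC_GBPS    = 1.0     # spent answering descriptor fetches [5]
    RELAYS              = 7100

    new_size = AVG_MICRODESC_BYTES + NTRU_PRIME_KEY    # 1648 bytes
    growth   = new_size / AVG_MICRODESC_BYTES          # ~3.96x
    total    = 2 * growth * DIR_TRAFFIC_GBPS           # doubled again by
                                                       # tunneled fetches
    break_even_mbps = total * 1000 / RELAYS            # Gb/s -> Mb/s

    print(f"descriptor growth:    {growth:.2f}x")
    print(f"directory traffic:    ~{total:.1f} Gb/s")
    print(f"break-even per relay: ~{break_even_mbps:.2f} Mb/s")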
> 
> [0]: https://blog.cr.yp.to/20140213-ideal.html
> [1]: https://cryptojedi.org/papers/newhope-20160328.pdf
> [2]: https://lists.torproject.org/pipermail/tor-dev/2016-May/010903.html
> [3]: https://groups.google.com/forum/?_escaped_fragment_=topic/cryptanalytic-algorithms/BoSRL0uHIjM#!topic/cryptanalytic-algorithms/BoSRL0uHIjM
> [4]: https://collector.torproject.org/recent/relay-descriptors/microdescs/micro/2016-05-16-14-05-37-micro
> [5]: https://metrics.torproject.org/dirbytes.html?start=2016-02-16&end=2016-05-16
> [6]: As an aside, I'd be super happy to kick the crap relays out
>      of the network:
>      https://lists.torproject.org/pipermail/tor-dev/2014-September/007558.html
> 
> Best Regards,


[0] https://blog.cr.yp.to/20140213-ideal.html

[1] https://binary.cr.yp.to/mcbits-20130616.pdf

[2] https://cr.yp.to/talks/2016.02.24/slides-djb-20160224-a4.pdf



