[tor-bugs] #7202 [Tor]: Implement ntor handshake or its successor

Tor Bug Tracker & Wiki blackhole at torproject.org
Wed Jan 2 17:03:46 UTC 2013


#7202: Implement ntor handshake or its successor
--------------------------------+-------------------------------------------
 Reporter:  karsten             |          Owner:                    
     Type:  project             |         Status:  needs_review      
 Priority:  normal              |      Milestone:  Tor: 0.2.4.x-final
Component:  Tor                 |        Version:                    
 Keywords:  SponsorZ tor-relay  |         Parent:                    
   Points:                      |   Actualpoints:                    
--------------------------------+-------------------------------------------

Comment(by nickm):

 > In 6bd72b711dd21d34e08fe2299dfe0bc5b4a72f41, you implement a linear-time
 > data structure.  Do we expect this to be small or will you replace it
 > later with something faster?

 I expect it to have a typical size of 1 or 2.  In the future I could
 conceive of 3 or 4, but if we ever needed it for something big, we'd want
 to reimplement it.
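
 (For concreteness, something of roughly this shape would do the job; the
 type and function names below are illustrative stand-ins, not the ones in
 the branch.  At one or two entries, a plain array scanned linearly is as
 fast as anything fancier.)

   /* Illustrative only: a server holds a handful of curve25519 onion
    * keypairs, and we look one up by its public key with a linear scan. */
   #include <stdint.h>
   #include <string.h>
   #include <stddef.h>

   #define TOY_MAX_ONION_KEYS 4

   typedef struct toy_keypair_t {
     uint8_t pubkey[32];   /* curve25519 public key, "B" */
     uint8_t seckey[32];   /* curve25519 secret key, "b" */
   } toy_keypair_t;

   typedef struct toy_keymap_t {
     toy_keypair_t keys[TOY_MAX_ONION_KEYS];
     int n_keys;           /* typically 1 or 2 */
   } toy_keymap_t;

   static const toy_keypair_t *
   toy_keymap_find(const toy_keymap_t *map, const uint8_t *pubkey)
   {
     int i;
     for (i = 0; i < map->n_keys; ++i) {
       if (memcmp(map->keys[i].pubkey, pubkey, 32) == 0)
         return &map->keys[i];
     }
     return NULL;
   }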

 >  - Okay, this is just the one I commented on under
 >    316f22fe141624d2b0a6ac6b25c4f4ece6d5e10f
 >
 > I'll assume the correctness of the curve25519 implementation in
 > 8bd4d20a693dad9d3de743d8921d69762c50558e, since it's external and you've
 > addressed portability issues in 9648ac43f1a1eb0ca200eed9177f41ee8324ca2a.

 Yup.  It's currently synchronized with Adam's code; I expect to keep it
 that way.

 > In 316f22fe141624d2b0a6ac6b25c4f4ece6d5e10f:
 >  - /* XXXX Does this possible early-return business threaten our
 >    security? */
 >    - The first early return, for the check that the node_id matches,
 >      seems harmless, since the only thing it depends on is something
 >      that's public anyway: the router's identity digest.
 >
 >    - Why do we need the map for keypair_bB, though?  Is the router's
 >      curve25519 public key not a once-per-router public value?  If
 >      not, then this provides a timing-based test for whether a router
 >      has a particular key in its map.  Does this leak anything
 >      dangerous?

 A router's onion key is a once-per-router value as far as any client
 knows, but servers want to rotate it sometimes.  When they do, they need
 the old key to keep working for a while.
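
 (Schematically, the rotation story is just: generate a new keypair,
 publish it, and keep the old one accepted for a grace period so that
 clients still holding the previous descriptor can finish their
 handshakes.  A toy sketch, with stand-in names rather than the branch's
 actual code:)

   /* Illustrative only: rotate the onion keypair, keeping the old one
    * usable for a while.  Same toy struct as in the earlier sketch,
    * repeated so this stands alone. */
   #include <stdint.h>
   #include <string.h>
   #include <time.h>

   typedef struct toy_keypair_t {
     uint8_t pubkey[32];
     uint8_t seckey[32];
   } toy_keypair_t;

   typedef struct toy_onion_keys_t {
     toy_keypair_t current;            /* published in our descriptor */
     toy_keypair_t previous;           /* still accepted for a while */
     int have_previous;
     time_t previous_expires_at;
   } toy_onion_keys_t;

   /* Stand-in for whatever actually generates a fresh curve25519 pair. */
   static void
   toy_new_keypair(toy_keypair_t *kp)
   {
     memset(kp, 0, sizeof(*kp));       /* real code fills in random keys */
   }

   static void
   toy_rotate_onion_key(toy_onion_keys_t *keys, time_t now,
                        time_t grace_period)
   {
     keys->previous = keys->current;   /* old key keeps working... */
     keys->have_previous = 1;
     keys->previous_expires_at = now + grace_period;  /* ...for a while */
     toy_new_keypair(&keys->current);  /* new key goes out to clients */
   }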

 >      - 0a685804f407e06dac8bbf338f8e38aaedcbc33f makes it seem like it
 >        only ever has one or two entries: current and possibly previous
 >        keys - we need the map because the key is rotated and clients
 >        might not have seen the new key yet.  Is this correct?  Since
 >        the keys then appear in the consensus, I think this can't
 >        provide any information that wasn't already public.
 >      - But in 18ff5cb448181f2090349d89a3f1bc1a02e54d26, the early-exit
 >        is removed.  Just being paranoid, or something amiss in my
 >        reasoning above?

 I believe I'm just being paranoid there, for values of paranoia less like
 "everything is trying to get me" and more like "let's engineer this a
 little better than we think we need to, in case there's some circumstance
 I'm not thinking of that makes this less safe than it appears."
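
 (And at this size a no-early-exit lookup costs essentially nothing: scan
 every entry and compare without bailing out at the first differing byte.
 Roughly, as an illustration rather than the branch's code:)

   /* Illustrative only: look up a key while always touching every entry,
    * so the timing doesn't depend on which entry (if any) matched. */
   #include <stdint.h>
   #include <stddef.h>

   typedef struct toy_keypair_t {
     uint8_t pubkey[32];
     uint8_t seckey[32];
   } toy_keypair_t;

   /* Compare 32 bytes without an early exit. */
   static int
   toy_eq32(const uint8_t *a, const uint8_t *b)
   {
     uint8_t diff = 0;
     int i;
     for (i = 0; i < 32; ++i)
       diff |= a[i] ^ b[i];
     return diff == 0;
   }

   static const toy_keypair_t *
   toy_find_key_paranoid(const toy_keypair_t *keys, int n_keys,
                         const uint8_t *pubkey)
   {
     const toy_keypair_t *found = NULL;
     int i;
     for (i = 0; i < n_keys; ++i) {    /* note: no break */
       if (toy_eq32(keys[i].pubkey, pubkey))
         found = &keys[i];
     }
     return found;
   }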

 >  - Just as an aside making sure I understand the protocol properly,
 >    the essential thing here is that secret_input contains the X the
 >    server saw *and* has g^xy and g^xb in it, so by the latter it could
 >    only be computed by someone who knows x (as Y^x and B^x) or by
 >    someone who knows y and b (as X^y and X^b), so the client can check
 >    that the value of verify that it received was based on the same X
 >    that it sent and knows that it could only have been produced by the
 >    server?  We need the server ephemeral keypair (y,Y) because without
 >    it, compromising b lets an attacker compute secret_input for past
 >    recorded traffic and decrypt it?

 I think rransom's answer here is right; I think that most of the stuff in
 the secret_input/auth_input parts of the ntor protocol is there on the
 theory that you're not going to regret having more parts of the protocol
 get checked.
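
 (To make that concrete, here is roughly how the client-side secret_input
 is laid out, following proposal 216; the function name and buffer
 handling below are illustrative, not the branch's.  The MAC the client
 later checks is computed over material that includes this buffer, so it
 can only match if the server saw the same X, B, and Y and knew either x,
 or both y and b.)

   /* Illustrative only: client-side layout of
    *   secret_input = EXP(Y,x) | EXP(B,x) | ID | B | X | Y | PROTOID
    * per proposal 216.  The two EXP() results are assumed to have been
    * computed already with curve25519. */
   #include <stdint.h>
   #include <string.h>
   #include <stddef.h>

   #define TOY_PROTOID     "ntor-curve25519-sha256-1"
   #define TOY_PROTOID_LEN (sizeof(TOY_PROTOID)-1)
   #define TOY_ID_LEN      20    /* server identity digest */
   #define TOY_KEY_LEN     32    /* curve25519 group elements */

   static size_t
   toy_build_secret_input(uint8_t *out,
                          const uint8_t *g_xy, const uint8_t *g_xb,
                          const uint8_t *id, const uint8_t *B,
                          const uint8_t *X, const uint8_t *Y)
   {
     uint8_t *cp = out;
     memcpy(cp, g_xy, TOY_KEY_LEN); cp += TOY_KEY_LEN; /* needs x or y */
     memcpy(cp, g_xb, TOY_KEY_LEN); cp += TOY_KEY_LEN; /* needs x or b */
     memcpy(cp, id,   TOY_ID_LEN);  cp += TOY_ID_LEN;
     memcpy(cp, B,    TOY_KEY_LEN); cp += TOY_KEY_LEN;
     memcpy(cp, X,    TOY_KEY_LEN); cp += TOY_KEY_LEN; /* the X we sent */
     memcpy(cp, Y,    TOY_KEY_LEN); cp += TOY_KEY_LEN;
     memcpy(cp, TOY_PROTOID, TOY_PROTOID_LEN); cp += TOY_PROTOID_LEN;
     return (size_t)(cp - out);   /* caller hashes this into verify etc. */
   }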

 For more info on the security of the protocol, you can check out

 >  - /* XXXXX VERIFY Y IS NOT POINT AT INFINITY */
 >    - Fix this?
 >    - Okay, af175dbe07d3fd712c8d7cf232d6715a55b8580d.  Is the all-zeros
 >      comparison the representation of the point at infinity here?
 >      (Sorry, I'm just reading up on elliptic curves as groups here;
 >      the point at infinity is always the group identity?)
 >    - What was the 2007 DH bug mikeperry mentions in the comments on
 >      #7202?

 See rransom's answer here.  As discussed above, I don't believe that
 checking for an all-zeros output, or similar checks, is necessary for
 curve25519 either.
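
 (The check in question is tiny in any case: after computing the shared
 secret, make sure it isn't the all-zeros string, which is what curve25519
 produces for certain degenerate (low-order) peer public keys.  A rough
 illustration, not the branch's code:)

   /* Illustrative only: reject an all-zeros shared secret without an
    * early exit on the first nonzero byte. */
   #include <stdint.h>

   static int
   toy_shared_secret_is_ok(const uint8_t secret[32])
   {
     uint8_t acc = 0;
     int i;
     for (i = 0; i < 32; ++i)
       acc |= secret[i];
     return acc != 0;  /* nonzero somewhere => not the all-zeros output */
   }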

 > In 512ab64aa45e7cef83f1c54d7aeb3be69d22f48e:
 >  - Generating the server ephemeral key means an attacker can force us
 >    to consume entropy.  What happens to the quality of the entropy we
 >    would be using for legitimate clients in that case?  Looks like
 >    crypto_strongest_rand() can't block to wait on real random bits,
 >    but I presume it's a strong enough PRNG that this isn't dangerous?

 So, crypto_rand() just uses the OpenSSL RNG; crypto_strongest_rand() uses
 the OpenSSL RNG *and* the platform RNG.  crypto_strongest_rand() is only
 used to generate the medium-term onion keys, not the ephemeral keys.  So
 even if "consuming OS entropy" were something we worried about, we'd only
 be using it for the onion keys, which aren't made once per handshake
 unless I seriously screwed up.
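
 (The "OpenSSL RNG *and* the platform RNG" part is the usual
 combine-two-sources construction; XORing two independent sources gives a
 result at least as unpredictable as the stronger one.  A toy
 illustration, not the actual crypto_strongest_rand():)

   /* Toy illustration of combining two randomness sources by XOR. */
   #include <stdio.h>
   #include <stddef.h>
   #include <openssl/rand.h>

   static int
   toy_strongest_rand(unsigned char *out, size_t n)
   {
     unsigned char buf[256];
     FILE *f;
     size_t i;
     if (n > sizeof(buf))
       return -1;
     if (RAND_bytes(out, (int)n) != 1)      /* source 1: OpenSSL RNG */
       return -1;
     f = fopen("/dev/urandom", "rb");       /* source 2: platform RNG */
     if (!f || fread(buf, 1, n, f) != n) {
       if (f) fclose(f);
       return -1;
     }
     fclose(f);
     for (i = 0; i < n; ++i)
       out[i] ^= buf[i];                    /* combine the two */
     return 0;
   }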

 The traditional rationale behind tracking how much entropy is "in" a
 cryptographic PRNG is to distinguish whether you're getting an
 information-theoretic kind of security (i.o.w., "short of a defect in the
 entropy source, you'd need sorcery to break this") or "merely"
 cryptographic security (i.o.w., "you could also break this if there's a
 defect in the PRNG algorithm.")  But rransom is correct that the right
 answer to a suspect PRNG is pretty much always "Get a better PRNG", unless
 you're trying to use /dev/random to make a one-time-pad or something,
 which we aren't.

 (It's not insane IMO for paranoid folks to re-seed their PRNGs
 periodically though, like we do, since we probably haven't heard the last
 word in cryptanalysis, and since the most plausible advances against
 currently good-looking PRNGs are likely to take the form of "given a
 whole lot of output from the same seed, the adversary can guess things
 about other output".)
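
 (A toy version of that kind of periodic re-seeding, purely for
 illustration: every so often, pull fresh bytes from the OS and fold them
 into OpenSSL's pool with RAND_add().  This is not the logic Tor actually
 uses.)

   #include <stdio.h>
   #include <time.h>
   #include <openssl/rand.h>

   #define TOY_RESEED_INTERVAL (30*60)   /* seconds */

   static void
   toy_maybe_reseed(time_t now)
   {
     static time_t last_reseed = 0;
     unsigned char buf[32];
     FILE *f;
     if (now < last_reseed + TOY_RESEED_INTERVAL)
       return;
     f = fopen("/dev/urandom", "rb");
     if (f && fread(buf, 1, sizeof(buf), f) == sizeof(buf)) {
       /* Credit the new bytes as full-entropy input to the pool. */
       RAND_add(buf, (int)sizeof(buf), (double)sizeof(buf));
       last_reseed = now;
     }
     if (f) fclose(f);
   }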

 Also, it *would be* a bit rude to read huge amounts of data from the OS
 RNG: even if we don't believe in blocking on entropy, some other programs
 do, and it's unkind to punish users who want to run programs like that.

 > I'm okay with all the create_cell_t, etc. refactoring assuming this
 > has been tested thoroughly.

 I tested all the cases I could think of in a chutney network, and there
 are unit tests for the other cases.

 > This CREATE2-inside-CREATE business in
 > 89e3dd8c3029e2319466fb47f33d31b76c870008 isn't in the proposal.  Is it
 > going to get into the spec properly?

 It's in the last paragraph of "Integrating with the rest of Tor" in
 proposal 216.
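
 (As I understand that paragraph, the mechanism is a reserved 16-byte tag
 at the front of a legacy CREATE payload, so the receiving relay can tell
 a tunneled ntor handshake from a TAP onionskin even when the client
 extended through an older relay that only speaks EXTEND/CREATE.  The
 sketch below is illustrative; the real tag value and payload layout are
 the ones given in proposal 216.)

   #include <stdint.h>
   #include <string.h>
   #include <stddef.h>

   #define TOY_NTOR_TAG      "<16-byte-tag>..."   /* placeholder value */
   #define TOY_NTOR_TAG_LEN  16

   enum toy_handshake_type { TOY_HS_TAP, TOY_HS_NTOR };

   static enum toy_handshake_type
   toy_classify_create_payload(const uint8_t *payload, size_t len)
   {
     if (len >= TOY_NTOR_TAG_LEN &&
         memcmp(payload, TOY_NTOR_TAG, TOY_NTOR_TAG_LEN) == 0)
       return TOY_HS_NTOR;   /* CREATE2/ntor data follows the tag */
     return TOY_HS_TAP;      /* plain old TAP onionskin */
   }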

-- 
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/7202#comment:28>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online

