On Tue, Nov 1, 2011 at 4:40 PM, Marsh Ray marsh@extendedsubset.com wrote:
On 11/01/2011 03:06 PM, Watson Ladd wrote:
GCM is a Wegman-Carter authenticator with polynomial evaluation in GF(2^128) as the universal hash and AES as the cipher. As NIST pointed out, neither of those papers had anything to say about the actual security of GCM: the security claims in the GCM paper are still valid. The Microsoft paper is simply irrelevant: 2^{n-k} looks scary, but messages are 2^k blocks long, where a block is 128 bits, and the GCM paper limits the length of messages to 2^36 bytes. The Joux paper was a threat to a misfeature: Theorem 1 of the GCM paper (http://eprint.iacr.org/2004/193.pdf) still holds. It's just a matter of remembering that unique means unique, and that supporting variable-length IVs by padding changes the meaning of the word unique.
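Concretely, the "polynomial evaluation in GF(2^128)" part looks roughly like the following Python sketch, following the bit ordering and reduction polynomial from the GCM spec. It is an illustration only: not optimized and not constant-time.

    R = 0xE1000000000000000000000000000000  # x^128 + x^7 + x^2 + x + 1, GCM bit order

    def gf128_mul(x, y):
        # Multiply two elements of GF(2^128) in GCM's bit-reflected representation.
        z, v = 0, x
        for i in range(128):
            if (y >> (127 - i)) & 1:              # scan bits of y starting at the x^0 coefficient
                z ^= v
            v = (v >> 1) ^ R if v & 1 else v >> 1  # v <- v * x, reduced mod the GCM polynomial
        return z

    def ghash(h, blocks):
        # Evaluate the message polynomial at the hash key h.
        # blocks: iterable of 16-byte blocks, already padded and length-encoded.
        y = 0
        for block in blocks:
            y = gf128_mul(y ^ int.from_bytes(block, 'big'), h)
        return y.to_bytes(16, 'big')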
But those are exactly the kinds of details that get overlooked time and time again. Whether or not you consider it a "real attack", a little bit of controversy can help raise the important design constraints out of footnote status.
Definition of attack: something that breaks a security claim. Microsoft breaks a claim the GCM designers never made; Joux very cleverly breaks a part of the standard that the designers' proof doesn't cover. The designers also didn't claim anything near the real security (http://cr.yp.to/antiforgery/securitywcs-20050227.pdf is a tighter proof). True, NIST should put the claims front and center in the document, especially when they have such a nice form. But it is the responsibility of implementers of cryptography to exhaustively document what their security claims are. Hiding behind NIST won't save your data if you do something stupid.
I don't think stupidity by standards designers is a strike against what they are standardizing.
:-)
Everyone except fools and specialized cryptographers looks to the recognized experts for guidance. NIST's function is to provide expert, conservative guidance on these topics, and they're pretty well regarded by engineers and others who choose which standard to use. By following NIST, at the very least it's not your fault if it turns out to be broken.
NIST is standardizing cryptographic algorithms for government use. They make the same tradeoffs as any other user: speed vs. security. Blindly implementing standards is a way to fall into a trap. DES is a standard; Blowfish isn't, at least not to the same degree. Which would you rather use?
Furthermore, NIST didn't write the draft that Ferguson's comments were about. As the response indicates, GCM can be secure if you make the right parameter choices; it's not substantially different from RSA in that regard. Now, why the options, and why q = 2^128 instead of some prime? Because NIST wanted to support high-speed encryption of fiber-optic links in hardware, and Galois fields are easy in hardware. This is the story I get from the comments on the standard. If you are working in software, I would suggest something else.
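For contrast with the GF(2^128) sketch above, the software-friendly alternative is polynomial evaluation over a prime field, where the hash is ordinary integer multiply-and-reduce, which general-purpose CPUs do well. A minimal Horner-evaluation sketch over Poly1305's prime 2^130 - 5; it deliberately omits Poly1305's key clamping and final key addition, which matter for real security.

    P = (1 << 130) - 5   # the Poly1305 prime

    def poly_eval(r, blocks):
        # Horner evaluation of the message polynomial at the point r, mod P.
        acc = 0
        for block in blocks:
            # Real Poly1305 appends a 0x01 byte to each 16-byte block and reads
            # it as a little-endian integer; sketched the same way here.
            m = int.from_bytes(block + b'\x01', 'little')
            acc = ((acc + m) * r) % P
        return acc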
If, as you suggest, NIST can't get this unambiguously correct the first time, then in people's minds it naturally raises the question of whether the whole scheme is too complicated in practice.
The standard is complicated, but the problem is picking the IV. I have no idea why NIST decided to use 96-bit IVs but fix the first 32 bits (leaving a 64-bit counter), then support longer IVs through hashing, and then ultimately have to backtrack and support only 64-bit counters as nonces. But GCM itself is simple: it is polynomial evaluation, then encryption of the resulting value. The NSA has had these incidents before: ECMQV used to be in Suite B, before unknown key-share attacks were published against it. The NIST P-256 prime was fast on 32-bit machines, but put a reflection right in the middle of a word on 64-bit machines. Etc., etc. I'm sure there are many we never hear about.
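For concreteness, the deterministic 96-bit IV construction being discussed is just a fixed field concatenated with an invocation counter; with the 32/64 split mentioned above it looks roughly like this sketch (hypothetical helper, not a drop-in implementation):

    import struct

    def gcm_iv(fixed_field, invocation):
        # 96-bit deterministic IV: 32-bit fixed field || 64-bit invocation counter.
        # Never reuse a (fixed_field, invocation) pair under the same key;
        # IV uniqueness is what GCM's security argument rests on.
        assert fixed_field < 2**32 and invocation < 2**64
        return struct.pack('>IQ', fixed_field, invocation)   # 4 + 8 = 12 bytes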
More fundamentally, implementing a system such as Tor requires making choices about how standards are used and how they fit together. It is the rare (and wonderful!) standard that has no options. Making those choices requires knowledge of the security offered by the standard; it is never going to be possible to rely on guidance alone. Implementers who implement AES with giant lookup tables are in for a surprise when they run on shared cloud machines. Hashing a password is pointless when that hash is an ordinary fast hash function: checking all 10-character passwords takes about a week on a nice graphics card. Understanding of cryptography is required even when standards exist.
Nonces are required for protection against replay attacks. If nonces are unique, GCM is secure and pretty fast; Poly1305 is faster. HMAC does not have as high a proven security margin, and when a hash function breaks, so does HMAC. This makes a good case for GCM or Poly1305 in Tor. Right now we have nothing: cells are encrypted with AES in counter mode only.
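As a rough illustration of the difference from bare counter mode, here is what authenticated encryption of a cell-sized payload looks like with AES-GCM via the pyca/cryptography package (assuming that package is available; this is a generic sketch, not Tor's actual cell handling):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aead = AESGCM(key)

    invocation = 0                          # per-key counter; must never repeat
    nonce = invocation.to_bytes(12, 'big')  # 96-bit nonce
    cell = os.urandom(509)                  # stand-in for a relay cell payload

    ciphertext = aead.encrypt(nonce, cell, None)       # ciphertext plus 16-byte tag
    recovered = aead.decrypt(nonce, ciphertext, None)  # raises InvalidTag if tampered
    assert recovered == cell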
In order to do something constructive I might go through Torspec and look at how things are used and what we rely on them for. This might give us a better idea of what the requirements are.
KDF != turning a password into a bunch of bits. The KDF Tor uses turns the result of a DH negotiation into keying material for several different algorithms. The KDF we need just has to map some distribution to something random-looking. We have enough randomness in the initial DH negotiation not to have to worry about slowing down brute-force attacks.
So you could almost use MD5(counter + DH material) for that :-)
You can indeed. Or any stream cipher, really (provided it doesn't do very bad things).
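To make the "hash the DH material with a counter" shape concrete, a minimal expand-style sketch (an illustration of the idea in the exchange above, not a recommendation over a standard construction like HKDF):

    import hashlib

    def expand(secret, n_bytes):
        # Stretch a high-entropy DH secret into keying material by hashing the
        # secret with a running counter. This is only sensible because the input
        # already has full entropy; it is not a password-hashing construction.
        out = b''
        counter = 0
        while len(out) < n_bytes:
            out += hashlib.sha256(secret + counter.to_bytes(4, 'big')).digest()
            counter += 1
        return out[:n_bytes]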
Where we use hashing to protect a password, we should use scrypt or similar functions.
I didn't know whether passwords were a use case under discussion. PBKDFs have been a hot topic lately and I was just ranting in general.
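For the password case, a sketch using hashlib.scrypt from the Python standard library; the work-factor parameters are illustrative placeholders, not a tuned recommendation:

    import hashlib, hmac, os

    def hash_password(password, salt=None):
        # n, r, p are placeholders; tune the work factors for your hardware.
        salt = salt or os.urandom(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)
        return salt, digest

    def check_password(password, salt, expected):
        # Constant-time comparison of the recomputed digest against the stored one.
        return hmac.compare_digest(hash_password(password, salt)[1], expected)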
Thanks,
- Marsh
Sincerely, Watson Ladd