Two important figures in the Linux community discussed the NSA and its code-breaking efforts on Google+. Theodore Ts'o posted:
This came up late in the comment thread of an earlier G+ post of mine, but I think it’s an interesting enough topic that it’s worth its own top-level post. Suppose you are an NSA agent, and your goal is to enable bulk dragnet-style surveillance, covertly, in the face of widespread adoption of encryption. Requirements: (1) it should be operationally easy for the NSA to exploit; (2) it should be hard to discover; (3) being able to break into the target computer is a non-goal, at least for this program. (Individually targeting one machine at a time doesn’t scale if the goal is dragnet surveillance; let’s just assume for the sake of argument that if the NSA wants to compromise any single machine, they probably can do it if they are willing to throw enough resources at the problem.)
I nominated corrupting the RDRAND output in the x86 chip so that it is the encryption of some increasing counter (how to initialize the counter on each boot-up is an interesting question; see the earlier set of comments), encrypted by a key known by the NSA. Then convince people that it is a good idea to use the output of RDRAND directly when creating session keys for IPSEC, SSH, and SSL connections.
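The attack Ts'o describes can be sketched in a few lines. This is a minimal illustration, not anything attributed to a real chip: a keyed SHA-256 stands in for the AES encryption he mentions, `SECRET_KEY` is a hypothetical key known only to the attacker, and the counter initialization question he raises is ignored.

```python
import hashlib

SECRET_KEY = b"key-known-only-to-the-attacker"  # hypothetical

def backdoored_rdrand(counter):
    """Return 8 'random' bytes that are really a keyed function of an
    increasing counter (SHA-256 here stands in for AES encryption)."""
    msg = SECRET_KEY + counter.to_bytes(8, "big")
    return hashlib.sha256(msg).digest()[:8]

# The victim draws what look like random session keys:
victim_outputs = [backdoored_rdrand(c) for c in range(1000, 1003)]

# An attacker who knows the key, and who recovers the counter value behind
# one observed raw output, can regenerate every output from then on:
def attacker_predict(start_counter, n):
    return [backdoored_rdrand(start_counter + i) for i in range(n)]

assert attacker_predict(1000, 3) == victim_outputs
```

The point of the sketch is requirement (2): to any statistical test the output is indistinguishable from random, so the backdoor is invisible to anyone who does not hold the key.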
Can you think of other similar attacks, targeting other commonly used software or hardware systems? It is said that you shouldn’t create your own encryption algorithm until you have spent a lot of time doing code breaking. Now that we know that the NSA is gunning after civilian computer systems and trying to introduce back doors to enable their SIGINT mission, if we want to design systems which are resistant to such attacks, we need to first start by trying to think up ways that we could engineer such attacks, if we worked at the NSA and this was the mission given to us.
Linus replied to the post, criticizing the assumption that RDRAND could be used to make keys simple to break:
Theodore Ts’o I don’t believe in the rdrand theory, for the very simple reason that it breaks your own #1 requirement.
It does not matter one whit if the NSA knows the AES key that is used to whiten the rdrand output, unless the NSA also knows enough to then be able to look up the initial state for whatever was then whitened using that AES key.
And quite frankly, rdrand is very much not amenable to that. Any other noise will basically kill your theory. Not just noise like the Linux kernel randomness pool. In fact, even if you use the output of rdrand without any other noise at all, and build your private key using that boot-time clean rdrand output, I doubt that the NSA can reasonably figure it out from your public keys.
No, the whole “sabotage ipsec standards bodies and infiltrate the commercial trust verifiers” approach sounds a hell of a lot more likely. Screw the random numbers, just make sure that the encryption is weak enough (or down-gradable enough) that it doesn’t even matter what your keys are..
Side note: don’t get me wrong. I think it’s good that we don’t use rdrand mindlessly in /dev/random. But if you want to look at likely targets, I’d look at site certificates and the verifying agencies etc long long long before I worry about rdrand.
Theodore countered with a defence of his earlier statement:
Linus Torvalds Consider what happens with using RDRAND to generate session keys (not just the user’s long-term public key). See my earlier comment about how if the NSA can get access to a single RDRAND output, and decrypt it, it can now use that to find the initial counter value. Yes, if you never give out the RDRAND output, but do something as simple as running SHA or MD5 on the output before you use it, it would defeat this potential attack. The problem is there are programs out there which use the output of /dev/urandom without any whitening to generate session keys, and so if you were to connect the output of RDRAND to /dev/urandom, the external attacker might be able to get their hands on raw RDRAND output, and from that, be able to predict future RDRAND output (if RDRAND was compromised as I described).
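The mitigation Ts'o mentions, hashing the hardware output before it ever leaves the kernel, is easy to illustrate. In this sketch `os.urandom` merely stands in for a raw RDRAND read; the point is the structure, not the entropy source.

```python
import hashlib
import os

def whitened_random(nbytes=32):
    """Draw raw hardware randomness and hash it before use.

    Because only the SHA-256 digest is ever exposed, an external observer
    never sees raw generator output, and so cannot recover the generator's
    internal state (e.g. a backdoored counter) even if the generator is
    compromised as Ts'o describes.
    """
    raw = os.urandom(32)  # stand-in for a raw RDRAND read
    return hashlib.sha256(raw).digest()[:nbytes]
```

This is exactly why feeding RDRAND straight into consumers of /dev/urandom is the dangerous configuration: it is the exposure of raw output, not the use of the instruction itself, that enables the attack.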
BTW, I’m not trying to beat up on Intel. There’s a very good chance that Intel chips are clean. It’s the exercise of “Think Like The NSA” which I’m trying to encourage people to consider. In that context, it’s important not to focus on just one potential attack, and then blind ourselves to other potential approaches.
Those who want to read the original post and other people's comments can find it on Google+.