Six Lines

Apple's SSL Failure

Posted by Aaron Massey on 27 Feb 2014.

Apple recently released patches for an extremely critical flaw in their implementation of SSL on both iOS and OS X. If you are (somehow) reading this and you aren’t at all familiar with this problem and how it might affect you, then you should check out Macworld’s What You Need To Know post. You should also update your various Apple products to their latest, patched operating systems. (Even your Apple TV.)

Lots of people have written about the details of this problem. Adam Langley’s post on the issue is one of the best. Thus, I’m not going to get into the analysis of what exactly went wrong or how it could have been detected or prevented. Instead, I’m going to discuss what the NSA should have done if they knew about this bug.

There is circumstantial evidence that makes it plausible the NSA knew about Apple’s SSL bug. Here’s John Gruber of Daring Fireball on the timing of things:

Jeffrey Grossman, on Twitter:

I have confirmed that the SSL vulnerability was introduced in iOS 6.0. It is not present in 5.1.1 and is in 6.0.

iOS 6.0 shipped on 24 September 2012.

According to slide 6 in the leaked PowerPoint deck on NSA’s PRISM program, Apple was “added” in October 2012.

These three facts prove nothing; it’s purely circumstantial. But the shoe fits.

I’m actually less interested in whether the NSA actually knew about this bug in Apple’s SSL implementation than I am in whether the NSA should be responsible for reporting these sorts of flaws. Bruce Schneier says we know the NSA has three basic surveillance programs:

Broadly speaking, three types of NSA surveillance programs were exposed by the documents released by Edward Snowden. And while the media tends to lump them together, understanding their differences is critical to understanding how to divide up the NSA’s missions.

The first is targeted surveillance. […] The second is bulk surveillance, the NSA’s collection of everything it can obtain on every communications channel to which it can get access. […] The third is the deliberate sabotaging of security. […] That’s the three: good, bad, very bad.

Clearly, something like actually creating or causing the Apple bug would fall into this third category and would fall well short of legitimate surveillance or protecting American interests. But what if the NSA discovered the bug and then failed to report it? Failure to report a bug doesn’t quite fit in that third category, but should reporting it be the NSA’s responsibility? Here’s Eugene Spafford, from his recent oral history, part of the Charles Babbage Institute’s oral history project:

To me, the problem of information security is not how to dominate other countries. The problem to me is how do we make the infrastructure trustworthy enough that we aren’t at the mercy of criminals and terrorists, and we can use our systems to be able to learn and to negotiate and talk with others. To me, it’s much more important to defend all the computer systems that we have in the world, than it is to be sure that the computers that are in use by a current adversary are weak enough that we can exploit them. I think that strategy is, in the long run, a losing strategy. I think the strategy of helping others to make their systems strong against random elements puts them on a better footing for us to negotiate with them, to deal with them as needed. It goes to that idea of trust across, really, the whole world.

A ‘current adversary’ could be basically any organization and might be something other than another nation-state, which is critically important. Terrorists, drug cartels, and organized crime are serious threats on the Internet. They often employ cutting-edge technologies and attacks. Legitimate businesses and consumers need all the help they can get to be protected. Zero-day exploits, like the Apple SSL Bug, are extremely dangerous in part because there are too many unknowns involved. The President’s Review Group on Intelligence and Communications Technologies agrees:

We recommend that the National Security Council staff should manage an interagency process to review on a regular basis the activities of the US Government regarding attacks that exploit a previously unknown vulnerability in a computer application or system. These are often called “Zero Day” attacks because developers have had zero days to address and patch the vulnerability. US policy should generally move to ensure that Zero Days are quickly blocked, so that the underlying vulnerabilities are patched on US Government and other networks. In rare instances, US policy may briefly authorize using a Zero Day for high priority intelligence collection, following senior, interagency review involving all appropriate departments.

If the NSA was aware of the Apple SSL Bug, then it absolutely should have been their responsibility to report it to Apple through some form of responsible disclosure. Information security is asymmetric, and defense is much harder than offense. The NSA should not be in the business of deliberately sabotaging software products or encryption standards. Collection of intelligence, even through clandestine means, is clearly a part of the NSA’s mission. This sometimes conflicts with their defensive mission, but for unreported software flaws affecting millions of Americans, resolving this conflict should be simple: report the flaw.