EFF: Tens of thousands of websites' SSL "offers effectively no security"

The Electronic Frontier Foundation's SSL Observatory is a research project that gathers and analyzes the cryptographic certificates used to secure Internet connections, systematically cataloging them and making the resulting database available for other scientists, researchers and cryptographers to consult.

Now Arjen Lenstra of École polytechnique fédérale de Lausanne has used the SSL Observatory dataset to show that tens of thousands of SSL certificates "offer effectively no security due to weak random number generation algorithms." Lenstra's research means that much of what we think of as gold-standard, rock-solid network security is deeply flawed, but it also means that users and website operators can detect and repair these vulnerabilities.

While we have observed and warned about vulnerabilities due to insufficient randomness in the past, Lenstra's group was able to discover more subtle RNG bugs by searching not only for keys that were unexpectedly shared by multiple certificates, but also for prime factors that were unexpectedly shared by multiple publicly visible public keys. This application of the 2,400-year-old Euclidean algorithm turned out to produce spectacular results.
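As a rough illustration of the technique (not the researchers' actual tooling, which ran a much faster batch-GCD over millions of keys), here is a minimal pairwise sketch in Python, with small toy primes standing in for 512-bit values:

```python
# If two RSA moduli were built from the same badly generated "random" prime,
# a plain Euclidean GCD on the public values exposes it, and both private
# keys can then be reconstructed from the recovered factors.
from math import gcd
from itertools import combinations

def shared_factor_attack(moduli):
    """Yield (i, j, p) whenever public moduli i and j share a prime factor p."""
    for (i, n1), (j, n2) in combinations(enumerate(moduli), 2):
        p = gcd(n1, n2)
        if 1 < p < n1:          # a proper common divisor breaks both keys
            yield i, j, p

# Toy example: the first two "keys" reuse the prime 10007; the third is unrelated.
moduli = [10007 * 10009, 10007 * 10037, 10039 * 10061]
for i, j, p in shared_factor_attack(moduli):
    print(f"moduli {i} and {j} share the prime {p}: "
          f"{p} x {moduli[i] // p} and {p} x {moduli[j] // p}")
```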

In addition to TLS, the transport layer security mechanism underlying HTTPS, the researchers investigated other types of public keys that did not come from EFF's Observatory data set, most notably PGP keys. The cryptosystems underlying the full set of public keys in the study included RSA (the most common cryptosystem behind TLS), ElGamal (the most common cryptosystem behind PGP), and several others in smaller quantities. Within each cryptosystem, various key strengths were also observed and investigated, for instance RSA 2048-bit as well as RSA 1024-bit keys.

Beyond shared prime factors, other problems were discovered with the keys, all of which appear to stem from insufficient randomness during key generation. The most prominently affected keys were RSA 1024-bit moduli. This class of keys was deemed by the researchers to be only 99.8% secure, meaning that 2 out of every 1,000 of these RSA public keys are insecure. Our first priority is handling this large set of tens of thousands of keys, though the problem is not limited to this set, or even to HTTPS implementations alone.
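Why a shared factor amounts to "effectively no security": once a single prime of an RSA modulus is known, the entire private key follows. A minimal continuation of the sketch above (toy-sized numbers; the public exponent 65537 is just the usual choice, and Python 3.8+ is assumed for the modular inverse):

```python
# Rebuilding an RSA private key after one prime leaks via a shared factor.
p = 10007                  # prime recovered from gcd(n, some_other_modulus)
n = 10007 * 10009          # the victim's public modulus
e = 65537                  # the conventional RSA public exponent

q = n // p                 # the second prime falls out by simple division
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi(n)

m = 42                     # round trip: "encrypt" with (n, e), "decrypt" with d
assert pow(pow(m, e, n), d, n) == m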

We are very alarmed by this development. In addition to notifying website operators, Certificate Authorities, and browser vendors, we also hope that the full set of RNG bugs that are causing these problems can be quickly found and patched. Ensuring a secure and robust public key infrastructure is vital to the security and privacy of individuals and organizations everywhere.

Researchers Use EFF's SSL Observatory To Discover Widespread Cryptographic Vulnerabilities

  1. Wait, wait, wait, just hold on FOR ONE SECOND! JUST HOLD THE PHONE!

    Are they saying that they literally ran the Euclidean algorithm on a bunch of public keys, and found that a bunch of them were created using the very same source primes because their RNGs were so bad they literally spat out the identical 512-bit numbers?

    Holy shit, that’s the most awesome and disturbing shit I’ve ever heard!

    1. Surely not more disturbing than the fact you can just buy certification keys, for money?

      I mean, I don’t know about you, but “it’s secure because some anonymous entity you’ve never met, with a lot of money, says so” has never really made me feel terribly safe.

  2. This may be unnecessarily alarmist with respect to web servers. Nadia Heninger writes on the Freedom to Tinker blog about her team’s research, which found similar problems with keys but also found that nearly all the affected keys were on devices, such as routers, using embedded software. This is still a problem, but mainly one that concerns network administrators, not consumers.

  3. The root of the problem is probably people hacking their computers to assume they have more access to “good” random numbers than they do in reality. In Linux, for instance, /dev/urandom (“unlimited” random) is a source of random numbers *not appropriate for crypto*, while /dev/random is a source of bits that are generated from hashes of truly random events.

    You need a lot of randomness to generate good key pairs, and often the rate at which trustworthy randomness can be gathered is low – especially on a remote server. Key generation can take unacceptably long.

    Unfortunately the web is full of advice (like here: http://www.chrissearle.org/blog/technical/increase_entropy_26_kernel_linux_box) which teaches you how to override your system’s defaults and pipe untrusted randomness from /dev/urandom into /dev/random. BAD IDEA! DON’T DO IT! You’ll end up generating a crypto key that’s identical to any generated by any other computer with the same random seed for /dev/urandom. Even if (say) 10 bits of good randomness are folded into the mix, you’re still left with a 1 in 1024 chance of having your secret collide with one from some other machine.

    It would be better if folks posted a dumb shell script that generated network traffic with high entropy rather than advising folks to use /dev/urandom as an entropy source.
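    A minimal sketch of the distinction drawn in the comment above, for the curious. It is Linux-specific and assumes an older kernel where /dev/random still blocks on a depleted pool; the paths are real, but the 128-bit threshold is just an illustrative choice:

    ```python
    # Check the kernel's entropy estimate before generating a long-lived key,
    # rather than redirecting /dev/urandom into /dev/random to "refill" the pool.
    def entropy_estimate_bits():
        # The kernel's current estimate of gathered entropy, in bits.
        with open("/proc/sys/kernel/random/entropy_avail") as f:
            return int(f.read())

    def read_blocking_random(n_bytes):
        # /dev/random may block until enough real events have been hashed in;
        # that wait is the cost the commenter argues is worth paying for keys.
        with open("/dev/random", "rb") as f:
            return f.read(n_bytes)

    if entropy_estimate_bits() < 128:
        print("entropy pool is low; key generation may stall, so be patient")
    seed = read_blocking_random(32)   # 256 bits of pool output for key generation
    ```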

  4. “You’ll end up generating a crypto key that’s identical to any generated by any other computer with the same random seed for /dev/urandom.”

    Which is exactly the same level of security you will get from any more reputable PRNG method.

    I will supply you with five megabytes of data from my /dev/urandom.  You tell me what the next ten bytes are.

    1. I’m really glad that someone else is pointing out the fact that there is a great disservice being done here by everyone who is confusing randomly generated numbers with pseudo-randomly generated numbers.

      By their very definition, true random number generators don’t have “bugs”. Quantum mechanical processes are not software.

    2. If I (or anyone else on the Internet) *can* tell you what the next 10 bytes are (or, as a more appropriate simulation of key generation attacks, guess the next 1024 bits given a few billion chances), would you be happy to reveal all data you encrypt? Not so sure anymore? Living with a slow entropy rate is IMHO much better than taking the kind of privacy risk implicit in pretending /dev/urandom is appropriate for crypto.

      1. You can’t; and neither can anybody else as far as I know; and I do send encrypted data in the clear, all the time.

        But I think your message has at least one typo.

  5. P.S. If I did chi-squared tests on the various permutations for the 5 MB + 10 bytes, would you simply accept the result that gives the closest match to the result given by the 5 MB on its own, or would you make me work hard? :)

  6. “Effectively no security” is sensationalism. Even if the keys aren’t different, each session starts by negotiating a new, shared secret for that session that’s used and reused for quite a while. Requiring an attacker to snoop on the initial negotiation and have SSL snooping capability in the first place significantly raises the bar for casual eavesdropping at, say, a coffee shop.

    Saying it provides “effectively no security” under such circumstances would be like saying your house key provides no security because someone could photograph it on a table. Yet, they’d still have to put in the work and have the means to create a clone of the key, which is a substantial barrier versus an unlocked door. Not all security is about the underlying mathematical problem.

    Is this problem a huge security issue? Yes. Does it mean SSL offers “effectively no security”? Hardly, as long as your review considers more than pure cryptanalysis assuming ideal attack conditions and resources.

    1. Given that this was a brute force thing, wouldn’t it be a more apt analogy to say “your house key provides no security because someone could break your door down with a sledgehammer”? If so, then yeah, it’s an illusion of security alright. These people used wood when they should have used steel.

      1. Using that analogy, wood is still better than no door at all. From an “effective” standpoint, having something vulnerable to a practical attack is still better than having no barrier at all.

        I’m not trying to say this attack isn’t a big deal, just that “effectively no security” is a huge exaggeration considering real-world circumstances.

        1. Nowhere did we say SSL offers no security. We did say 27k of the surveyed RSA-1024 keys (which include SSL/X509 but also PGP keys) offer effectively no security, because anyone could compute the corresponding private keys.

          Also, when DH is disabled for SSL (and it is more often than not, for performance reasons) there is no forward secrecy. Why would casual eavesdropping not be sufficient to compromise a session? With a tool like Wireshark all you have to do is to point it at the right private key file.

          Editing as I can’t reply:

          All it takes to recover traffic from/to a peer using a weak key without DH, is to:
          1) wait for the database of private keys to eventually leak to the net, and download it
          2) open Wireshark’s SSL options and pick the file in the database that corresponds to the ip you are trying to eavesdrop on (until someone writes a 10-line patch to do it automatically)

          and that’s it. I don’t think this qualifies as considerable work. And even if it were, relying on the lack of skill of the attacker is obfuscation, not security.
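          To make the forward-secrecy point concrete: with plain RSA key exchange (no DHE), the ClientKeyExchange message in a recorded handshake carries the 48-byte pre-master secret encrypted under the server’s public key, so anyone who later obtains the private key can unwrap it. A hedged sketch using the Python cryptography package rather than Wireshark itself; the file names are placeholders:

          ```python
          # Recovering the TLS pre-master secret from a captured handshake when
          # RSA key exchange was used. File names below are hypothetical.
          from cryptography.hazmat.primitives import serialization
          from cryptography.hazmat.primitives.asymmetric import padding

          with open("server.key", "rb") as f:               # the weak/leaked private key
              priv = serialization.load_pem_private_key(f.read(), password=None)

          with open("client_key_exchange.bin", "rb") as f:  # bytes pulled from a pcap
              encrypted_pms = f.read()

          # TLS RSA key exchange wraps the pre-master secret with PKCS#1 v1.5 padding.
          pre_master_secret = priv.decrypt(encrypted_pms, padding.PKCS1v15())

          # The TLS PRF then expands (pre-master secret, client/server randoms) into
          # the session keys; that derivation is the step Wireshark automates.
          print(pre_master_secret.hex())
          ```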

          1. “Nowhere did we say SSL offers no security.”

            I didn’t say you did. I did dispute your claim that a compromised key is “effectively no security.” As soon as you invoke the word “effective,” you’re making a claim about real-world conditions.

            “We did say 27k of the surveyed RSA-1024 keys (which include SSL/X509 but also PGP keys) offer effectively no security, because anyone could compute the corresponding private keys.”

            Yes, but there’s a difference between “no cryptographic security” and “effectively no security.” Even a site with no key secrecy/originality still has greater *effective* security against attacks than a site with no SSL at all.

            “Also, when DH is disabled for SSL (and it is more often than not, for performance reasons) there is no forward secrecy”

            Yes, but it still requires a considerably more complex attack than just knowing the symmetric session key itself. I’m talking about the *effective* increase in difficulty of getting to the plaintext, not the cryptographic one.

            “why would casual eavesdropping not be sufficient to compromise a session”

            Because *casual* eavesdropping doesn’t involve capturing traffic, performing an informed attack against the encryption (with software that doesn’t quite exist yet or at least isn’t widespread), and then looking at the plaintext. I’m a computer scientist, and it would still take me considerable work — even with the private key given to me and some pointers on the software to use — to set up the right software and hardware to get the plaintext of intercepted traffic.

            I do, however, know how to get a Wi-Fi card, put it in promiscuous mode, and use something like Wireshark.

            Therefore, even if the sample size with me is one (and I have no reason to believe I’m unique here), the effective level of security is non-zero even with a known key. Any deterrent to getting at the plaintext is some *effective* increase in security.

            If someone comes out with an application that makes snooping on SSL encrypted with a bad key and getting the plaintext as easy as it is right now to get at unencrypted data, then — and only then — will the *effective* security be nearly zero.

          2. “All it takes to recover traffic from/to a peer using a weak key without DH, is to:

            1) wait for the database of private keys to eventually leak to the net, and download it
            2) open Wireshark’s SSL options and pick the file in the database that corresponds to the ip you are trying to eavesdrop on (until someone writes a 10-line patch to do it automatically)

            and that’s it. I don’t think this qualifies as considerable work.”

            If it requires fetching a database of keys (one you admit either doesn’t exist or isn’t readily available right now), configuring it in your software (either manually or with a capability that doesn’t exist yet), and using it *in addition* to what you would have to do to eavesdrop on unencrypted traffic, that is a substantial barrier constituting more than “effectively no security.”

            We have all sorts of security mechanisms that are cracked from a cryptographic basis (GSM, WEP, car remotes) but they continue to provide more than “effectively no security” because they increase the effort and difficulty of attacks to the point where exploitation takes more than casual effort.

            Fundamentally, all security is about the difficulty of exploitation. Adopting the black-and-white thinking that something is either computationally infeasible to attack or offers “effectively no security” is misleading.

            There are things that require little effort to snoop on (unencrypted traffic), more effort to snoop on (traffic with weak keys and no DH), yet more effort to snoop on (weak key with DH), and infeasible effort to snoop on (strong key). Lumping all but the last one into the same category is kind of silly given how humans actually behave (a sharp drop-off in activity) as you add additional barriers and require more effort.

            “And even if it was, relying on the lack of skill of the attacker is obfuscation, not security.”

            That’s also absurd when we’re talking about *effective* security. The skill of the attacker is absolutely relevant. Almost all models we have for physical security rely on assumed skill of the attacker in picking locks, avoiding surveillance, etc. You’re making an unsubstantiated redefinition of “effective security” when you only rely on the cryptographic angle.
