In a paper published by the International Association for Cryptologic Research, a group of Harvard and MIT cryptographers demonstrate that even if the government were to backdoor encryption and lock up anyone who used non-backdoored systems, people could still hide undetectable, secure, private messages within the messages sent over the compromised systems.
The authors describe a steganographic system -- a system that hides the existence of a message -- that can run undetectably in any cryptographic protocol that is designed to keep rival governments' spy agencies out.
I don't have the math chops to follow this paper, though the non-math parts are pretty clear to anyone with a lay understanding of crypto. I know that many cryptographers of my acquaintance are skeptical of steganography in general: hiding scrambled messages in imperceptible or invisible channels within widely trafficked file-types may fool humans, but it has historically been easy for computers to detect. For example, many have tried to hide messages in images by toggling the last bit of the color value in each pixel. This least-significant bit can be twiddled without any visible degradation of the image, but images carrying cryptographic messages in their pixels' least-significant bits have markedly different statistical properties from normal images: under normal circumstances, compression smooths out any seeming randomness in those low-order bits, and that smoothness is mathematically very different from an encrypted message, which is meant to be indistinguishable from random noise.
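The least-significant-bit technique described above can be sketched in a few lines. This is a toy illustration, not code from the paper; the function names and pixel values are made up for the example.

```python
# Toy sketch of least-significant-bit (LSB) image steganography.
# All names and values here are illustrative, not from the paper.

def embed_lsb(pixels, message_bits):
    """Hide one message bit in the low-order bit of each pixel value."""
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit  # clear the low bit, then set it
    return out

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits low-order bits."""
    return [p & 1 for p in pixels[:n_bits]]

pixels = [200, 201, 198, 197, 203, 202, 199, 200]  # fake grayscale values
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(pixels, secret)
assert extract_lsb(stego, 8) == secret
# Each pixel changes by at most 1, invisible to a human viewer -- but the
# low-bit plane now carries full-entropy randomness, exactly the
# statistical fingerprint that automated steganalysis looks for.
```

Note that each pixel value moves by at most one step, which is why the human eye can't see the change even though a statistical test can.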
I think the authors here are arguing that since they're hiding encrypted messages inside other encrypted messages, the "container" files already look like random noise, and adding more seemingly random bits to them will not arouse suspicion, even under close statistical analysis.
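A crude way to see the difference between the two kinds of container: count the fraction of 1-bits in the data. This is my own illustrative stand-in (using `os.urandom` in place of a real semantically secure ciphertext, and repeated ASCII text in place of ordinary file contents), not an experiment from the paper.

```python
import os

def ones_fraction(data: bytes) -> float:
    """Fraction of 1-bits: a crude statistical fingerprint."""
    return sum(bin(b).count("1") for b in data) / (8 * len(data))

# Stand-in for a semantically secure ciphertext: indistinguishable
# from uniformly random bytes, so its 1-bit fraction sits near 0.5.
ciphertext_like = os.urandom(1 << 16)

# Stand-in for ordinary structured data: ASCII text, whose 1-bit
# fraction deviates measurably from 0.5 (the high bit is always 0).
structured = b"the quick brown fox jumps over the lazy dog " * 1000

# Overwriting low-order bits with random stego bits can't change the
# statistics of the first container (it's already uniform), but it
# visibly perturbs the second -- which is the authors' point about
# hiding ciphertext inside ciphertext.
```

The inference is the one from the previous paragraph: you can't make random noise look any more random, so a container that is already noise-like gives the analyst nothing to measure.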
The authors are at pains to point out that their attack on backdoored crypto and authoritarian states rests on two bedrock assumptions: first, that governments will want their domestic users to have cryptographic protection against rival spies, so they will try to design a system that is secure against anyone who isn't in possession of some secret known only to the state; and second, that this state will not be able to spy on the programs running on the computers within its borders, so that users will be able to run programs to encrypt, hide, detect and decrypt the steganographic messages without tipping off the state.
It's not inconceivable that a state might demand backdoors and also do something that wouldn't satisfy these conditions: for example, it might insist that all computers in its borders be designed to function like Apple's iOS devices, only capable of running programs that have been certified by some remote party, and then arrogate that certifying power to itself, banning programs that could perform the steganography attack.
But that's OK: the point of this paper -- as I see it -- is to point out to policymakers that an attempt to ban working crypto would also require total surveillance of all computers and/or a willingness to let enemy spies conduct surveillance of your population, government and businesses. As clueless lawmakers at the highest levels of government in self-styled democratic states moot bans on working cryptography as a means of ensuring that cops and spies can break into their domestic adversaries' communications, these reality checks are a welcome addition to the debate about whether any of this can possibly work.
In this work, we examine the feasibility of secure and undetectable point-to-point communication in a world where governments can read all the encrypted communications of their citizens. We consider a world where the only permitted method of communication is via a government-mandated encryption scheme, instantiated with government-mandated keys. Parties cannot simply encrypt ciphertexts of some other encryption scheme, because citizens caught trying to communicate outside the government's knowledge (e.g., by encrypting strings which do not appear to be natural language plaintexts) will be arrested. The one guarantee we suppose is that the government mandates an encryption scheme which is semantically secure against outsiders: a perhaps reasonable supposition when a government might consider it advantageous to secure its people's communication against foreign entities. But then, what good is semantic security against an adversary that holds all the keys and has the power to decrypt?
We show that even in the pessimistic scenario described, citizens can communicate securely and undetectably. In our terminology, this translates to a positive statement: all semantically secure encryption schemes support subliminal communication. Informally, this means that there is a two-party protocol between Alice and Bob where the parties exchange ciphertexts of what appears to be a normal conversation even to someone who knows the secret keys and thus can read the corresponding plaintexts. And yet, at the end of the protocol, Alice will have transmitted her secret message to Bob. Our security definition requires that the adversary not be able to tell whether Alice and Bob are just having a normal conversation using the mandated encryption scheme, or they are using the mandated encryption scheme for subliminal communication.
Our topics may be thought to fall broadly within the realm of steganography: the science of hiding secret communication within innocent-looking messages, or cover objects. However, we deal with the non-standard setting of an adversarially chosen distribution of cover objects (i.e., a stronger-than-usual adversary), and we take advantage of the fact that our cover objects are ciphertexts of a semantically secure encryption scheme to bypass impossibility results which we show for broader classes of steganographic schemes. We give several constructions of subliminal communication schemes under the assumption that key exchange protocols with pseudorandom messages exist (such as Diffie-Hellman, which in fact has truly random messages). Each construction leverages the assumed semantic security of the adversarially chosen encryption scheme, in order to achieve subliminal communication.
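To give a flavor of how a subliminal channel over a mandated encryption scheme can work, here is a toy rejection-sampling sketch. This is very much in the spirit of, but far simpler than, the paper's constructions: the "mandated" scheme below is a made-up stand-in, and every name in it is illustrative. The one property it needs is the one any semantically secure scheme has: encryption is randomized, so a single plaintext has many valid ciphertexts, and Alice is free to choose among them.

```python
import hashlib
import os

# Toy sketch of a rejection-sampling subliminal channel (illustrative
# stand-in, NOT the authors' construction).

KEY = os.urandom(16)  # the mandated key -- known to the adversary

def encrypt(plaintext: bytes) -> bytes:
    """Stand-in randomized encryption: fresh nonce + keyed keystream."""
    nonce = os.urandom(16)
    stream = hashlib.sha256(KEY + nonce).digest()
    body = bytes(p ^ s for p, s in zip(plaintext, stream))
    return nonce + body

def channel_bit(ciphertext: bytes) -> int:
    """A public function of the ciphertext that anyone can compute."""
    return hashlib.sha256(ciphertext).digest()[0] & 1

def send_subliminal(plaintext: bytes, secret_bit: int) -> bytes:
    """Re-encrypt the same innocuous plaintext until the ciphertext's
    channel bit equals the secret bit (two tries expected)."""
    while True:
        ct = encrypt(plaintext)
        if channel_bit(ct) == secret_bit:
            return ct

# Alice transmits 8 secret bits across 8 ordinary-looking messages:
secret = [1, 0, 1, 1, 0, 0, 1, 0]
cts = [send_subliminal(b"see you at lunch", b) for b in secret]
assert [channel_bit(ct) for ct in cts] == secret
# An adversary holding KEY decrypts every message to the same innocent
# plaintext; the subliminal bits ride on *which* of the many valid
# ciphertexts Alice happened to send.
```

The key-holding adversary sees only valid ciphertexts of a normal conversation; the hidden information lives entirely in Alice's choice among equally plausible encryptions, which matches the security goal quoted above.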
How to Subvert Backdoored Encryption [Thibaut Horel, Sunoo Park, Silas Richelson and Vinod Vaikuntanathan]
(Image: Corey Burger, CC-BY-SA)