An accountable algorithm for running a secure random checkpoint

Ed Felten presents and argues for the idea of "accountable algorithms" for use in public life: algorithms whose "output produced by a particular execution of the algorithm can be verified as correct after the fact by a skeptical member of the public."

He gives a great example of how to run a verifiably random TSA checkpoint where, at the end of each day, the public can open a sealed envelope and verify that the TSA was using a truly fair random selection method, and not just picking people they didn't like the look of:

Now we can create our accountable selection method. First thing in the morning, before the security checkpoint opens, the TSA picks a random value R and commits to it. Now the TSA knows R but the public doesn't. Immediately thereafter, TSA officials roll dice, in public view, to generate another random value Q. Now the TSA adds R + Q and makes that sum the key K for the day.
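A minimal sketch of that morning setup, in Python, assuming a SHA-256 hash as the commitment and a 256-bit secret R (the excerpt doesn't pin down either detail; a bare hash only works as a commitment here because R is high-entropy):

```python
import hashlib
import secrets

def commit(value: bytes) -> bytes:
    """Publish only the hash now; the preimage stays secret until reveal."""
    return hashlib.sha256(value).digest()

# Morning, before the checkpoint opens: the TSA picks a secret random R
# and publishes its commitment.
R = secrets.token_bytes(32)      # 256-bit secret known only to the TSA
commitment = commit(R)           # made public before any dice are rolled

# The public dice roll produces Q, which everyone can see.
Q = 4 + 1 + 6                    # e.g. three dice rolled in public view

# The day's key is the sum of the secret R and the public Q.
K = int.from_bytes(R, "big") + Q
```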

Now, when you arrive at the checkpoint, you announce your name N, and the TSA uses the selection function to compute S(K, N). The TSA announces the result, and if it's "yes," then you get searched. You can't anticipate whether you'll be searched, because that depends on the key K, which depends on the TSA's secret value R, which you don't know.
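The excerpt takes the selection function S(K, N) as already defined earlier in Felten's post. One plausible construction, assumed here rather than taken from the original, is a keyed hash of the passenger's name compared against a target search rate:

```python
import hashlib
import hmac

SEARCH_RATE = 0.05   # assumed fraction of passengers to be searched

def select(K: int, name: str) -> bool:
    """Selection function S(K, N): deterministic given K and N, but
    unpredictable to anyone who doesn't know K."""
    mac = hmac.new(K.to_bytes(64, "big"), name.encode(), hashlib.sha256).digest()
    # Interpret the MAC as a number in [0, 1) and compare to the search rate.
    return int.from_bytes(mac, "big") / 2**256 < SEARCH_RATE

# At the checkpoint: the TSA announces select(K, "Alice Example") for each
# passenger; "True" means that passenger gets searched.
```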

At the end of the day, the TSA opens its commitment to R. Now you can verify that the TSA followed the algorithm correctly in deciding whether to search you. You can add the now-public R to the already-public Q to get the day's (previously) secret key K. You can then evaluate the selection function S(K, N) with your name N, replicating the computation that the TSA did in deciding whether to search you. If the result you get matches the result the TSA announced earlier, then you know that the TSA did their job correctly. If it doesn't match, you know the TSA cheated, and when you announce that they cheated, anybody can verify that your accusation is correct.
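The end-of-day check is then mechanical. A sketch of what any passenger could run, reusing the commit and select functions from the earlier sketches:

```python
def verify(revealed_R: bytes, commitment: bytes, Q: int,
           name: str, announced_result: bool) -> bool:
    """Check the TSA's work after it opens its commitment to R."""
    # 1. The revealed R must be the value committed to in the morning.
    if commit(revealed_R) != commitment:
        return False          # the TSA swapped in a different R
    # 2. Recompute the day's key from the revealed R and the public Q.
    K = int.from_bytes(revealed_R, "big") + Q
    # 3. The recomputed decision must match what the TSA announced.
    return select(K, name) == announced_result
```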

This method prevents the TSA from creating a non-random result. The reason the TSA cannot do this is that the key K is based on the result of the public die roll, which is genuinely random. And the TSA cannot have chosen its secret value R in a way that neutralized the effect of the random die rolls, because the TSA had to commit to its choice of R before the dice were rolled. So citizens know that if they were chosen, it was because of randomness and not any TSA bias.

Accountable Algorithms: An Example