# An accountable algorithm for running a secure random checkpoint

Ed Felten presents and argues for the idea of "accountable algorithms" for use in public life -- that is, "output produced by a particular execution of the algorithm can be verified as correct after the fact by a skeptical member of the public."

He gives a great example of how to run a securely random TSA checkpoint where, at the end of each day, the public can open a sealed envelope and verify that the TSA was using a truly fair random selection method, and not just picking people they didn't like the look of:

Now we can create our accountable selection method. First thing in the morning, before the security checkpoint opens, the TSA picks a random value R and commits it. Now the TSA knows R but the public doesn’t. Immediately thereafter, TSA officials roll dice, in public view, to generate another random value Q. Now the TSA adds R+Q and makes that sum the key K for the day.

Now, when you arrive at the checkpoint, you announce your name N, and the TSA uses the selection function to compute S(K, N). The TSA announces the result, and if it’s “yes,” then you get searched. You can’t anticipate whether you’ll be searched, because that depends on the key K, which depends on the TSA’s secret value R, which you don’t know.

At the end of the day, the TSA opens its commitment to R. Now you can verify that the TSA followed the algorithm correctly in deciding whether to search you. You can add the now-public R to the already-public Q, to get the day’s (previously) secret key K. You can then evaluate the selection function S(K,N) with your name N, replicating the computation that the TSA did in deciding whether to search you. If the result you get matches the result the TSA announced earlier, then you know that the TSA did their job correctly. If it doesn’t match, you know the TSA cheated, and when you announce that they cheated, anybody can verify that your accusation is correct.

This method prevents the TSA from creating a non-random result. The reason the TSA cannot do this is that the key K is based on the result of the die-rolls, which is definitely random. And the TSA cannot have chosen its secret value R in a way that neutralizes the effect of the random die-rolls, because the TSA had to commit to its choice of R before the dice were rolled. So citizens know that if they were chosen, it was because of randomness and not any TSA bias.
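The whole day's protocol can be sketched in a few lines of Python. Note the specifics here are illustrative assumptions, not Felten's: he doesn't specify the commitment scheme or the selection function, so this sketch uses a SHA-256 hash commitment and a rule that searches roughly 10% of travelers.

```python
import hashlib
import secrets

def commit(r: int, nonce: bytes) -> str:
    """Published in the morning; binds the TSA to R without revealing it."""
    return hashlib.sha256(nonce + str(r).encode()).hexdigest()

def selected(k: int, name: str, rate: int = 10) -> bool:
    """Deterministic in K and N; searches roughly `rate`% of travelers."""
    digest = hashlib.sha256(f"{k}:{name}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % 100 < rate

# Morning: the TSA picks a secret R and publishes a commitment to it.
r = secrets.randbelow(1_000_000)
nonce = secrets.token_bytes(16)
published_commitment = commit(r, nonce)

# In public view: dice produce Q, so the day's key is K = R + Q.
q = 57  # stand-in for a value rolled in front of witnesses
k = r + q

# During the day: the result for each traveler is announced on the spot.
announced = selected(k, "Alice Example")

# Evening: the TSA opens the commitment; anyone can now re-check both
# the commitment and every announced decision.
assert commit(r, nonce) == published_commitment
assert selected(r + q, "Alice Example") == announced
```

The nonce matters: without it, a commitment to a small R could be brute-forced before the reveal, letting the public learn R early.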

### 71 comments

1. thaum says:

But… but… how are they going to racially profile people with *that* algorithm?!

1. EH says:

That’s the genius of it: it only applies to brown people. Whitey gets let on through from then on.

1. BillStewart2012 says:

Whitey doesn’t always get through even when they’re doing racial profiling. If you’re carrying lots of strange computer equipment they get worried about it, or if you’re young (during the 80s) with long hair and a beard and have just arrived in Miami on a one-way flight from the Bahamas it seems that anybody who’s got any kind of uniform or badge wants to look through your bags.

Weighted dice, duh.

3. dragonfrog says:

Remember, the actual decision is the result of the decision algorithm S(K, N), where K is the day’s computed key (revealed only later) and N is the name of the person to be searched.  The actual method of computing S is not revealed.  This should clarify matters:

S(int K, string N) {
    // note that K is never actually used
    if (N matches any of (“Mohammad”, “Ahmadi”, “Aziz”, “Khwaja”, “Debnath”,
                          “Ahmed”, “Banerjee”, “Al-*”, “Agarwal”, “Lal”,
                          “Patel”, “Khan”, “Chaudry”))
        return true;
    else
        return false;
}

2. Ryan_T_H says:

Not all searches SHOULD be random. Assuming the actual goal is to provide security, having some people searched at the discretion of the screeners is necessary. And “that guy over there looked kind of shifty” is a perfectly reasonable reason to search someone.

The problem is not in the screening selection. The problem is that the security function is being performed by low-quality rent-a-cops without any accountability. The solution isn’t to try and come up with some sort of algorithm that even the dumbest TSA recruit can’t screw up. The solution is to hire professionals and hold them accountable.

1. nettdata says:

Exactly.  What should be looked at is something called Predictive Profiling, as used by El Al.

http://en.wikipedia.org/wiki/Predictive_profiling

Those that are doing the profiling are HIGHLY trained, and not the stereotypical TSA burger flipper.

1. joeposts says:

There are many stories of nightmare encounters with El Al’s goon squads. From what I’ve read, it’s a three-tiered system with Israeli Jews at the top and Arabs at the bottom. Lots of Arab travellers, including Arab Israelis, have complained of aggressive security guards, and they don’t sound any more competent than the TSA goon squads. Hired goons are hired goons.

El Al also doesn’t seem to make money, they post losses regularly. Invasive security costs money, especially with all the litigation.

2. SamSam says:

The basic take-away was that, while it may or may not be possible for an expert to perform some kind of behavior analysis (the jury is out on that), all other “predictive” analyses (race, religion, religious symbols, clothing, etc.) serve only to weaken the system, never to strengthen it: if you make it harder for someone with some set of variables to get through, you by definition make it easier for someone without those (changeable) variables to get through.

1. nettdata says:

I totally agree, as those “indicators” become filters rather than analysis. That’s why I specified that the behavioural analysis should be looked at, not the others.

Thanks for the link… didn’t see that when it was originally posted.

3. oldtaku says:

The people they hire as TSA screeners can’t add R+Q, much less compute S(K,N), so you’d better have an app for that.

1. Same thought – this is for a gang that closes their shoes with Velcro?

2. wysinwyg says:

OK, this might sound a little paranoid but consider the idea that the whole point of the TSA is to put lower class folks in a position to power trip over middle class folks for the sake of stoking class resentment.  If that were true, it would seem to have worked in your case.

4. Boundegar says:

The problem with this algorithm is it makes the security personnel feel less powerful; and people don’t become security personnel in order to feel less powerful.

1. I wonder what’s to stop them from picking, say, R = 1 every time.

1. IronEdithKidd says:

Presumably, two or more dice.

1. First thing in the morning, before the security checkpoint opens, the TSA picks a random value R and commits it. Now the TSA knows R but the public doesn’t. Immediately thereafter, TSA officials roll dice, in public view, to generate another random value Q.

Well, my reading of the above suggests that the selection of R is by some means other than a die-roll.  Granted, nothing here precludes the use of dice, and I suppose the TSA would be free to improvise, but the wording, to me, implied something else.  I do hope I’m wrong!

2. SamSam says:

Because that doesn’t help them: they are still bound by the roll of the dice (Q) to pick only randomly selected people.

All that picking R=1 each time would do is allow everyone at the airport to calculate ahead of time whether they would be screened, thus allowing people who didn’t want to be screened to try again another day.

It would be the opposite of what anyone, including the TSA, would want.
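To make that concrete: if R were fixed at 1 and people knew it, the day's outcomes would be computable by anyone the moment the dice were rolled. A toy sketch, where the hash-based selection rule and the 10% rate are illustrative assumptions rather than anything from the article:

```python
import hashlib

def selected(k: int, name: str) -> bool:
    # Illustrative rule: search roughly 10% of names for a given key.
    digest = hashlib.sha256(f"{k}:{name}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % 100 < 10

R = 1      # the "cheating" TSA reuses the same secret every day
Q = 42     # the dice value, rolled in public that morning
K = R + Q

# Anyone who suspects R is always 1 can check from home, and simply
# fly on a different day if the answer comes back True.
will_be_searched = selected(K, "Cautious Traveler")
```

Keeping R secret until the evening is exactly what prevents this kind of look-ahead.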

3. austinhamman says:

let s = function(k, N) { return (length(N) * k) % 100 <= 10; }; // searched if the value lands in the bottom slice

first run:
R = 1;
Q = 90; // d100
k = 1 + 90 = 91;
Calculating for “foo” (length 3): 3 × 91 = 273, and 273 % 100 = 73, so it’s a no. Calculating for “helloworld” (length 10) gives 10, and Mr. Helloworld gets searched.*

second run:
R = 1;
Q = 23; // d100
k = 1 + 23 = 24;
Calculating for “foo”: 3 × 24 = 72, so it’s a no. “helloworld” gives 40, so Mr. Helloworld doesn’t get searched. “madhyamam” (length 9) gives 216 % 100 = 16, so no search there either.

Of course, if R is always 1, a person who knows that can work out in advance whether he is going to be searched.

*“foo” doesn’t get searched, but note that “bar” (the same length) is now guaranteed the same outcome. Using the length of a name is bad; the numeric value of a crypto hash of the name plus something different for each person (their ticket number, maybe?) could work, but that was more than I wanted to calculate by hand (big-ass number).
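The hash idea at the end of that comment can be sketched concretely. SHA-256 and the 10% threshold are assumptions chosen for illustration:

```python
import hashlib

def selected(k: int, name: str, ticket: str, rate: int = 10) -> bool:
    # Hash the key together with the name and the ticket number, so
    # that even very common names are spread evenly over the outcomes.
    digest = hashlib.sha256(f"{k}|{name}|{ticket}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % 100 < rate

# Two travelers sharing a name get independent results, unlike the
# name-length rule above, where equal-length names share a fate.
k = 91
a = selected(k, "Patel", "TK-1001")
b = selected(k, "Patel", "TK-1002")
```

The computation is deterministic, so anyone can replay it once K is revealed, which is the property the accountability scheme needs.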

5. jetfx says:

“You’ve rolled a save and will only take half a pat down!”

1. Ashen Victor says:

“Haha! I’m a 3rd-level rogue; I don’t take any pat-down on a successful save!”

Can I roll 4 dice and then pick the best 3?  Or how about if I roll 6 times and put them in any order I want?

1. well, ok. But none of that Strength 18/100 crap.

1. Ashen Victor says:

Man, that’s so 2nd edition!

7. The TSA, if they were run by intelligent people and staffed by respectable people, WOULD pick the ones they didn’t like the look of… They might have some freakin’ idea who to pick on then, rather than the utter idiocy of strip-searching grandma for sanitary goods, or babies in arms, or breaking people’s colostomy bags… Pick the guy who looks nervous; pick the one who is from some country where they hate us. FFS, “profiling” is NOT a bad thing if applied with common sense.

1. fuzzyfuzzyfungus says:

You might want to replace ‘common sense’ with ’empirical data’.
Humans are pretty dreadful at statistics (even statisticians are, if they don’t very carefully avoid falling back on intuition) and low-frequency events of high emotional salience are likely to be even worse than usual in terms of bringing out the statistical incompetence in all of us.

2. foobar says:

1. Nah, sorry, my common sense is showing. And I resent old ladies being assaulted by the people who are supposedly protecting them.

1. fuzzyfuzzyfungus says:

While the TSA’s lack of tact in the various instances of granny-groping is positively breathtaking, blanket-excepting a specific group is exactly the sort of ‘common sense’ that ends up making your security system work less well.

Oh, so women over the age of 65 or visibly similar don’t get hassled? Sounds like we need one of those…

Obviously, the TSA has been a truly impressive mixture of lackadaisical and sporadically hard-assed as though to compensate; but there is a logic behind (randomized) sampling of even implausible people: specifically, it keeps any one type of mule from being a sure bet.

3. SamSam says:

Letting anyone through without risk of random searching is the stupidest idea ever. That’s just a gaping wide hole in security — you may as well throw away the stupid charade.

Let grandmas through and question all the Arabs? Hello? Is it simply impossible that early-Alzheimer’s grandma didn’t accept a package from someone? Can you really tell a non-threatening grandma from a religious fundamentalist grandma? Or an 82-year-old nun who wants to sabotage a nuclear facility?

“[They] WOULD pick the ones they didn’t like the look of” is just code for “let us nice white people through, we couldn’t harm anyone.”

8. Paul232 says:

If this algorithm is applied, what are the long-term consequences in terms of safety?

1. Well, the scanners don’t really work anyway, so… none.

2. fuzzyfuzzyfungus says:

Given the TSA’s current…err…thrilling record for detecting threats, it would be hard to imagine this making the process less effective…

(In seriousness, that is the point behind R being a secret for the duration of the operating period: even with knowledge of S, a hypothetical malefactor cannot game the screening process without knowledge of R, unless the algorithm S used is lousy enough that you can compute K just by obtaining a small number of samples.)

3. ldobe says:

Positive or zero, since a statistically even distribution of people will be searched. The way it is now, only people of color, or people who protest the x-ray screening, or people who look at the TSOs cross-ways get searched, while a real terrorist, who probably has some training and is convinced he’s in the right, kills a family of goatherds in Pakistan. Because that’s where the vast majority of acts of terror happen. In the Middle East.

1. Ben Hull says:

Pakistan isn’t in the middle east.

1. Mantissa128 says:

We should move it there, though. So all the bad people are in the same place.

4. jgs says:

Unprovable without running the experiment, but I’d expect it to improve safety. A non-random system, such as exists today, can be gamed by a savvy attacker. Basically, look like they expect you to, and with high confidence, you won’t be searched. With a random system, you can never make your probability of being searched less than what’s built in to the system.

1. fuzzyfuzzyfungus says:

It might catch the total noobs (like the feckless bastards that the FBI manages to carefully stage-manage through every step of a plot that they were too dumb or incompetent to pull off themselves), but the level of ‘savvy’ required to game a ‘who looks suspicious’ based system just isn’t very high at all.

(Edit: Anecdote: My dad used to travel on business quite frequently. Despite being a totally bland business-travelling WASP, he got stopped all the time. Then he shaved his beard. Boom. Bottom dropped out of the getting-stopped.)

Given how common and cheap razors, western business-casual clothes, and your choice of skin-lighteners, make-up artists, and a veritable profusion of anxiolytics (stiff enough to keep you nice and relaxed, but not so punchy as to make you extremely drowsy and/or stupid) are, anybody you would be worried about, and who is remotely competent, can probably show up at the terminal looking respectable and relaxed.

9. rattypilgrim says:

The TSA isn’t interested in using an algorithm to decide whom to screen and whose privacy to invade. That’s not the way they roll. Thugs can’t be bothered with a civil approach to security measures, and there’s no way they’ll ever stop racially profiling people or fearlessly going the whole nine yards with old women, children, and the handicapped.

10. anansi133 says:

That’s fine for my airplane trip, but is there a way for me to find out if my vote was correctly tallied, without revealing my choice to others?

(I realize there’s probably a limit to what’s doable. If I don’t like the result and it turns out my vote ended up in /dev/null, I probably can’t complain without showing who I voted for. It would probably be enough to ensure that the number of votes cast resembles the number of votes tallied, and then, within that, sample for accuracy. Right?)

11. What about the people who REALLY don’t look right?

12. This assumes all names are evenly distributed within the namespace. In reality they’re not, which means for certain values of  K many people will get searched and for other values few people will get searched.

1. dmatos says:

It all depends on your function S(K, N), I suppose. That function can be tailored to counteract the uneven distribution of N within the namespace, as long as that distribution is well known beforehand.

2. fuzzyfuzzyfungus says:

Name is probably a lousy N value, since it is rarely unique and its probability is all over the place; but I assume you could bludgeon together a decent N by glomming on enough additional information (e.g. seat number, flight operator and number) and then hashing it. If the attacker doesn’t know K, they can’t usefully manipulate their N by controlling things like flight information, or address, or method of payment, or any other variable you could throw in for a bit of extra perturbation (N would be under their control, but they wouldn’t know which Ns get searched and which don’t). And that would provide a distribution of N values such that even people with very common names aren’t treated notably differently, either to greater or lesser scrutiny…
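A sketch of that construction, under the assumption of a SHA-256 hash and a built-in 10% search rate (both my choices, not anything specified in the article):

```python
import hashlib

def build_n(name: str, seat: str, flight: str) -> str:
    # Glom several fields together so N is effectively unique per traveler.
    return f"{name}|{seat}|{flight}"

def selected(k: int, n: str, rate: int = 10) -> bool:
    digest = hashlib.sha256(f"{k}:{n}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % 100 < rate

# 240 passengers who all share one very common name, across 100
# different daily keys: the observed search rate should sit near the
# built-in 10%, because the seat field perturbs every hash input.
seats = [f"{row}{letter}" for row in range(1, 41) for letter in "ABCDEF"]
hits = sum(
    selected(k, build_n("Mohammed Khan", seat, "AA100"))
    for k in range(100)
    for seat in seats
)
rate = hits / (100 * len(seats))
```

With 24,000 hash evaluations the measured rate lands close to 0.10, which is the point: a common name neither raises nor lowers anyone's odds.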

13. I’m uncomfortable with the snobbish disdain directed toward TSA screeners on display here in the comments. People who take these low-wage jobs do so because that is what is available to them in this economy. They did not create the TSA security-theater farce that understandably annoys you, so your ire is misdirected. If you ridicule workers as “low-quality rent-a-cops” and “burger flippers” who “can’t add”, in contrast to “intelligent” and “respectable” people (respectable!?), you say more about yourself than about working-class folks.

Consider this, the next time you’re annoyed at the TSA screeners when you’re stuck in the terminal before your flight to your vacation or college or consulting gig: they might not be able to afford to fly much at all. They’re stuck in the terminal all the time.

1. ‘Burger-flipper’ as an insult drives me nuts. The self-made man (or woman) does have to start somewhere, you know, and that’s usually at the bottom. And if they never rise above that, they’re still working hard, and for little money, too.

2. bcsizemo says:

Agreed. Not saying BB hasn’t shown what looks like some cases of TSA screeners power-tripping, but anger about the system shouldn’t be entirely directed toward the front-line, lowest-paid employees.

Same thing happens when working retail.  I think it should be mandatory for everyone to work a year or two in a retail environment, teach everyone some first hand respect toward other human beings.

3. BillStewart2012 says:

I’m fine with burger flippers – never been bullied by one, and the folks at the counter don’t routinely lie to me about something having “always been the rules” when I’ve been both flying and paying attention to civil liberties law since before they were born. TSA goons? Not so much.

14. Could you apply this concept to voting booths and have an accountable record of voting?  Some way to verify that what the person intended to vote was in fact recorded properly, but without revealing the vote?  Perhaps the voter gets a receipt that is code, but can be verified later to match their recorded vote, but not reveal the vote?

1. SamSam says:

I think there is a system in place in some places that works like this:

You go to the booth and fill out your choice. When you fill it out, the one oval you selected presents itself with a random number. (In the system I read, you fill in the oval with an ink that reveals an invisibly-inked number in the oval.)

The numbers in all ovals are random, and are only revealed when you vote.

After the election, the officials make public all the vote numbers, and who they voted for. So you can then go and search for your number, and check that it both exists and was counted in the right column.

Edit: link. It’s not clear, however, how you can dispute your vote, or prove that a certain number was yours.
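A toy model of the scheme described above (loosely resembling end-to-end verifiable designs such as Scantegrity, though this sketch omits everything that makes the real systems robust, including dispute resolution):

```python
import secrets

candidates = ["Alice", "Bob"]

# Setup: each printed ballot carries a secret random code under every
# oval; nothing links a code to a voter.
def make_ballot():
    return {c: secrets.token_hex(4) for c in candidates}

ballots = [make_ballot() for _ in range(3)]

# Voting: the voter fills in one oval, revealing its code, and keeps
# only that code as a receipt.
my_ballot = ballots[0]
my_choice = "Alice"
my_receipt = my_ballot[my_choice]

# After the election: officials publish the (code, candidate) pairs
# for all voted ovals. The list shows codes, not voters.
published = [(my_receipt, my_choice)]

# Verification: the voter looks up their receipt and checks it was
# tallied in the right column, without revealing their vote to anyone.
assert (my_receipt, my_choice) in published
```

As the comment notes, the hard unsolved part in this sketch is disputes: a voter who finds their code missing or miscounted has no way here to prove the receipt was genuinely theirs.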

15. retepslluerb says:

Wait a minute. They trust national security to an Arabic invention?

1. Ashen Victor says:

Don’t expect any TSA official to know what a Roman numeral is…

“I have rolled a VI, and today’s random value is III… I think it’s pronounced ‘veeeeeeee.’”

1. retepslluerb says:

Hello?

Al-Gorithm

Al-Qaida

Al-Gore

Highly suspect!

1.  Al-Imentary, my Dear Breulls.

16. If they get the same number 5 days straight, can they yell “Yahtzee” and search the next 50 brown people at will? Seems only fair (to the TSA).

17. Dave Lloyd says:

To all those still proposing racial profiling or smart agents who can sniff a baddie out at 10 paces, here’s Bruce Schneier’s definitive argument against exactly that:

http://www.schneier.com/blog/archives/2012/05/the_trouble_wit.html
(TLDR: profiling opens opportunities for the baddies to exploit)
Case closed. Can we move on now?

1. gsilas says:

THIS.
I will never be searched. Fortunately, I’m also harmless. But if I weren’t, I guarantee I would never be searched. I’ve gotten out of enough traffic tickets by just being polite, and I have never been harassed by law enforcement of any kind. Sweet white privilege, versus my nearly identical African-American friend who is regularly stopped and asked to empty his pockets on the street (Manhattan). I strongly feel that random searches on the street should have the same racial distribution as the neighborhood they occur in, but this will never be the case.

Obviously, not all white people are harmless, as the news demonstrates daily.  I think it is an improvement if I have a chance of getting searched by the TSA, unlike our current security theater.

18. class_enemy says:

I give it about three days until some smartass, unselected by this “random” system, tells the world that he/she brought on board the plane some Fiendish Thingie like a plastic butter knife, or a 5-ounce tube of sunscreen, or a picture of a bomb. Back to the cattle call…

I wonder how long it would be until we start reading about TSA agents playing with that die in the break room, betting with electronic gear stolen from passenger luggage.

20. eff_em says:

ALL OF YOU ARE WRONG (and you should feel bad).

The problem is: the Fourth Amendment says nobody gets searched without probable cause and a warrant.

1. DMStone says:

“The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”

You are misreading it and conflating the two clauses. Probable cause in some cases makes a search and seizure reasonable, and thus a warrant is issued. There are many cases where search and seizure is considered reasonable without a warrant or probable cause: for example, the demand for a driver’s license when driving. The Fourth Amendment only protects against unreasonable searches, and in many cases the verdict is still out on whether the TSA’s searches qualify.

1. eff_em says:

you get a gold star for correcting me. yay.

internal checkpoints are unconstitutional. it’s not a question.

1. Antinous / Moderator says:

What on earth are those bizarre character strings that you’re inserting into your comments?

2. eff_em says:

former assistant TSA administrator admits that airport security checkpoints violate the 4th amendment

21. DMStone says:

I got porno-scanned “randomly” four times on one two-way trip… what are the odds?

1. wysinwyg says:

Easily calculated if you know the rate at which they randomly scan. Say it’s one in ten. Then the odds of being scanned at four consecutive screenings are (1/10)^4, or one in ten thousand. Considering that significantly more than 10,000 people fly every day, the odds are that this will happen to someone, somewhere. Inevitably, that person is going to feel like they were subject to non-random procedures.

This is one of many reasons why human beings’ gut feelings about probability and statistics are almost always wrong.
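The arithmetic above checks out directly; the one-in-ten scan rate is the commenter's illustrative assumption, as is the 100,000-traveler figure below:

```python
p_single = 1 / 10            # assumed per-screening scan probability
p_four = p_single ** 4       # scanned at all four screenings of one trip
# p_four is one in ten thousand, matching the comment.

# If n travelers each face four screenings, the chance that at least
# one of them gets scanned every single time:
n = 100_000
p_someone = 1 - (1 - p_four) ** n
# With 100,000 such travelers this is all but certain, so somebody,
# somewhere, will experience a "suspiciously" unlucky streak.
```

This is the classic birthday-style intuition failure: an event that is rare for any individual becomes near-inevitable across a large population.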