Tim O'Reilly writes about the reality that more and more of our lives — including whether you end up seeing this very sentence! — is in the hands of "black boxes": algorithmic decision-makers whose inner workings are a secret from the people they affect.
O'Reilly proposes four tests for determining whether a black box can be trusted:
1. Its creators have made clear what outcome they are seeking, and it is possible for external observers to verify that outcome.
2. Success is measurable.
3. The goals of the algorithm's creators are aligned with the goals of the algorithm's consumers.
4. The algorithm leads its creators and its users to make better long-term decisions.
O'Reilly goes on to apply these tests to some of the existing black boxes we trust every day, like aviation autopilot systems, and shows that the framework is a good one for evaluating algorithmic systems.
But I have three important quibbles with O'Reilly's framing. The first is absolutely foundational: the reason that these algorithms are black boxes is that the people who devise them argue that releasing details of their models will weaken the models' security. This is nonsense.
For example, Facebook tweaked its algorithm to downrank "clickbait" stories. Adam Mosseri, Facebook's VP of product management, told TechCrunch, "Facebook won't be publicly publishing the multi-page document of guidelines for defining clickbait because 'a big part of this is actually spam, and if you expose exactly what we're doing and how we're doing it, they reverse engineer it and figure out how to get around it.'"
There's a name for this in security circles: "security through obscurity." It is about as thoroughly discredited as an idea can be. As far back as the 19th century, when the cryptographer Auguste Kerckhoffs formulated the principle that a system should remain secure even when everything about it except the key is public knowledge, security experts have decried the idea that robust systems can rely on secrecy as their first line of defense against compromise.
The reason the algorithms O'Reilly discusses are black boxes is that the people who deploy them believe in security through obscurity. Allowing our lives to be manipulated in secrecy because of an unfounded, superstitious belief is as crazy as putting astrologers in charge of monetary policy, no-fly lists, hiring decisions, and parole and sentencing recommendations.
So there's that: the best way to figure out whether we can trust a black box is to smash it open, demand that it be exposed to the disinfecting power of sunshine, and give no quarter to the ideologically bankrupt security-through-obscurity court astrologers of Facebook, Google, and the TSA.
Then there's the second issue, which is important whether or not we can see inside the black box: what data was used to train the model? Or, in traditional scientific/statistical terms, what was the sampling methodology?
Garbage in, garbage out is a principle as old as computer science, and sampling bias is a problem that's as old as the study of statistics. Algorithms are often deployed to replace biased systems with empirical ones: for example, predictive policing algorithms tell the cops where to look for crime, supposedly replacing racially biased stop-and-frisk with data-driven systems of automated suspicion.
But predictive policing training data comes from earlier, human-judgment-driven stop-and-frisk projects. If the cops only make black kids turn out their pockets, then all the drugs, guns and contraband they find will be in the pockets of black kids. Feed this data to a machine learning model and ask it where the future guns, drugs and contraband will be found, and it will dutifully send the police out to harass more black kids. The algorithm isn't racist, but its training data is.
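The feedback loop described above can be made concrete with a minimal, hypothetical simulation. All the numbers here are invented for illustration: two neighborhoods with identical underlying contraband rates, but a biased history of searches. A naive model trained on the raw hit counts learns the search pattern, not the crime pattern.

```python
import random

random.seed(0)

# Hypothetical illustration: two neighborhoods with the SAME true
# contraband rate, but historical searches concentrated in one.
TRUE_RATE = {"A": 0.05, "B": 0.05}   # identical underlying rates
SEARCHES = {"A": 9000, "B": 1000}    # biased historical policing

# "Training data": hits recorded per neighborhood under biased searching.
hits = {n: sum(random.random() < TRUE_RATE[n] for _ in range(SEARCHES[n]))
        for n in SEARCHES}

# A naive model allocates future patrols in proportion to past hits,
# so it inherits the bias of the search pattern that produced the data.
total = sum(hits.values())
patrol_share = {n: hits[n] / total for n in hits}

print(hits)          # roughly 9x more hits in A, purely from 9x more searches
print(patrol_share)  # model sends ~90% of patrols to A
```

Each round of patrols allocated this way generates the next round's training data, so the skew compounds over time rather than washing out.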
There's a final issue, which is that algorithms have to have their models tweaked based on measurements of success. It's not enough to merely measure success: the errors in the algorithm's predictions also have to be fed back to it, to correct the model. That's the difference between Amazon's sales-optimization systems and automated hiring systems. Amazon's systems predict ways of improving sales, which the company tries: the failures are used to change the model to improve it. But automated hiring systems blackball some applicants and advance others, and the companies that make these systems don't track whether the excluded people go on to be great employees somewhere else, or whether the recommended hires end up stealing from the company or alienating its customers.
I like O'Reilly's framework for evaluating black boxes, but I think we need to go farther.
Let's subject an airplane autopilot to O'Reilly's four tests:
1. Clarity of intended outcome. Get the plane from point A to point B following a predefined route. Respond correctly to wind and weather, in accordance with known principles of aeronautics. Optimize for congestion at busy airports. Do not crash.
2. Success is measurable. Achievement of the outcomes is made possible by a massive armature of sensors and controls that allow the autopilot to respond to real-time data from those sensors. GPS. Altitude sensors. Airspeed. Attitude. Turbulence. And the ultimate measurement is success in flying: the alignment between the actual behavior of the plane and the laws of physics and aeronautics. Whenever there is a failure (for any reason—human, mechanical, or "act of God"), the National Transportation Safety Board does a deep dive to analyze the causes and improve processes to reduce the chance that the same accident will recur.
3. Goal alignment. No passenger would argue with these goals: do not crash. Get me there in the shortest possible time. Give me a smooth ride. But the passengers might argue with a decision by the airline to optimize for fuel consumption rather than travel time. And pilots would likely not be aligned with a goal that took away their jobs.
4. Long-term decisions. Over the long term, there might be a little bit of daylight between the goals of airplane owners and pilots, or airline owners and society. For example, pilots might correctly argue that using the autopilot too much deprives them of necessary experience, increasing the likelihood of crashes when they unexpectedly have to fly the plane manually. It is also likely that the cost of upgrading aircraft to be fully autonomous is prohibitive. The fact that we still have pilots in aircraft is probably as much a testament to the length of time it takes to replace high-cost equipment as it is to the fears of the public and the work of the Air Line Pilots Association to defend the jobs of its members.
The great question of the 21st century: Whose black box do you trust?