Algorithmic decision-making: an arms race between entropy, programmers and referees

Nesta's Juan Mateos-Garcia proposes that "entropic forces" make algorithmic decision-making tools worse over time, so they have to be continuously maintained and improved (this is also a key idea in Cathy O'Neil's Weapons of Math Destruction: a machine-learning system is only honest if someone is continuously checking its predictions against reality and refining its model based on the mistakes it makes).


In order to counter this, machine-learning systems need to be "supervised": someone needs to look at the guesses the system makes and check whether they're right. Programmers make algorithms; supervisors check them. And for supervision to take place at all, it has to be cost-effective.
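In practice, supervision can be as simple as auditing a random sample of the system's decisions against human judgment. Here's a minimal sketch; the names (decisions, human_label), the sample size and the 0.95 threshold are all made up for illustration:

```python
# A minimal sketch of supervision: audit a random sample of the model's
# decisions against a human judgment and estimate how often it is right.
# The names (decisions, human_label) and the 0.95 threshold are hypothetical.
import random

def estimate_live_accuracy(decisions, human_label, sample_size=100):
    """decisions is a list of (input, prediction) pairs already made in production."""
    audited = random.sample(decisions, min(sample_size, len(decisions)))
    correct = sum(1 for item, prediction in audited if human_label(item) == prediction)
    return correct / len(audited)

# If the audited accuracy drifts below an agreed threshold, the model gets
# retrained or pulled, e.g.:
#   if estimate_live_accuracy(recent_decisions, human_label) < 0.95:
#       retrain(model)
```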

There's a problem, though: supervising is potentially highly skilled labor ("did the algorithm flag a potential fair use as a copyright violation?") and a single programmer can write an algorithm that makes millions of decisions that will require millions of supervisor hours to evaluate.
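Some back-of-the-envelope arithmetic shows how lopsided that gets; every number below is invented, purely to illustrate the scaling:

```python
# Back-of-the-envelope illustration of the asymmetry (all numbers invented):
# one programmer can ship a model that makes a million decisions a day, but
# every supervised check costs a human reviewer a fixed chunk of time.
decisions_per_day = 1_000_000
audit_fraction = 0.01        # suppose we only spot-check 1% of decisions
seconds_per_review = 30      # assumed time for a human to check one decision

review_hours = decisions_per_day * audit_fraction * seconds_per_review / 3600
print(f"Spot-checking {audit_fraction:.0%} of decisions takes "
      f"about {review_hours:.0f} person-hours every day")
# -> about 83 person-hours/day: a whole review team for a single model
```

Even auditing one decision in a hundred, under these made-up assumptions, eats more labour per day than the engineering that built the model in the first place.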

Mateos-Garcia proposes a kind of taxonomy of the kinds of problems that machine learning can be safely deployed against, given these asymmetries: start algorithms in "low-stakes" domains and only transition them to "high-stakes" when they're proved out; use supervisor resources as the limiting factor in where you deploy algorithmic decision-making; don't rely too much on crowdsourced supervision; and "always keep a human in the loop."

One other implication that seems to be omitted from Mateos-Garcia's analysis: machine learning systems are often "adversarial," trying to beat a human or another algorithm. Ad-optimizers are trying to find ways past the mental firewalls you use to protect your attention; when they succeed, you quickly adapt to their techniques and they stop working. Stock-trading bots are (of course) playing against other bots whose authors are continuously improving them. "Anti-extremist content" bots are playing against extremists who are trying to beat them. So whether or not there are entropic elements that make algorithms worse over time, there are definitely adversarial elements doing exactly that.
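Here's a toy simulation of that adversarial decay (every rate in it is invented): accuracy erodes a little each round as adversaries adapt to the current model, and only periodic retraining claws it back:

```python
# A toy simulation of the adversarial dynamic (every rate here is invented):
# each round, adversaries adapt to whatever the current model catches, so
# accuracy decays unless the model is periodically retrained.
def simulate(rounds=12, fresh_accuracy=0.95, decay=0.05, retrain_every=None):
    accuracy, history = fresh_accuracy, []
    for t in range(rounds):
        if retrain_every and t > 0 and t % retrain_every == 0:
            accuracy = fresh_accuracy    # retraining restores the old accuracy
        history.append(round(accuracy, 3))
        accuracy *= 1 - decay            # adversaries adapt a little each round
    return history

print("never retrained:        ", simulate())
print("retrained every 3 rounds:", simulate(retrain_every=3))
```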


1. There is a race between an algorithm's ability to learn from the decisions it makes, and the amount of information that it obtains from new decisions. New machine learning techniques help algorithms 'learn from experience', making them more accurate as they make more decisions.[8] However, more decisions can also degrade an algorithm's accuracy. Perhaps it is forced to deal with weirder cases, or new situations it is not trained to deal with.[9] To make things worse, when an algorithm becomes very popular (makes more decisions), people have more reasons to game it.

My prior is that the 'entropic forces' that degrade algorithm accuracy will win out in the end: no matter how much more data you collect, it is just impossible to make perfect predictions about a complex, dynamic reality.

2. The second race is between the data scientists creating the algorithms and the supervisors checking these algorithms' decisions. Data scientists are likely to 'beat' the human supervisors because their productivity is higher: a single algorithm, or an improvement in an algorithm, can be scaled up over millions of decisions. By contrast, supervisors need to check each decision individually. This means that as the number of decisions increases, most of the organisation's labour bill will be spent on supervision, with potentially spiralling costs as the supervision process gets bigger and more complicated.

What happens at the end?

When considered together, the decline in algorithmic accuracy and the increase in labour costs I just described are likely to limit the number of algorithmic decisions an organisation can make economically. But whether and when this happens depends on the specifics of the situation.
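One rough way to picture the economic limit described above: if accuracy decays as decision volume grows while the per-decision supervision cost stays flat, net value eventually turns negative. This cost model is entirely made up (none of the numbers come from the Nesta paper), but it shows the shape of the argument:

```python
# A made-up cost model for that limit: accuracy decays as volume grows,
# supervision cost per decision stays flat, so past some volume the whole
# operation loses money. None of these numbers come from the Nesta paper.
VALUE_PER_CORRECT = 0.10     # value of a correct decision (arbitrary units)
COST_PER_ERROR = 1.00        # cost of an uncaught mistake
SUPERVISION_COST = 0.02      # human review cost, amortised per decision

def net_value(n_decisions, base_accuracy=0.98, decay_per_million=0.01):
    accuracy = max(0.5, base_accuracy - decay_per_million * n_decisions / 1e6)
    per_decision = accuracy * VALUE_PER_CORRECT - (1 - accuracy) * COST_PER_ERROR
    return n_decisions * (per_decision - SUPERVISION_COST)

for n in (1_000_000, 5_000_000, 10_000_000, 20_000_000):
    print(f"{n:>12,} decisions -> net value {net_value(n):>12,.0f}")
```

In this toy model the first million decisions are profitable, the fifth million barely breaks even, and beyond that each additional decision destroys value: that tipping point is the "limit" the excerpt describes.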


To err is algorithm: Algorithmic fallibility and economic organisation
[Juan Mateos-Garcia/Nesta]

(via 4 Short Links)

(Image: Veloshots)