Britain developed an AI-based system for detecting benefits fraud. The system treated some groups as more likely to commit fraud than others, and an internal review shows it was wrong: it disproportionately recommended people for investigation based on age, disability, marital status, and nationality.
An artificial intelligence system used by the UK government to detect welfare fraud is showing bias according to people's age, disability, marital status and nationality, the Guardian can reveal. An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.
The admission was made in documents released under the Freedom of Information Act by the Department for Work and Pensions (DWP). The "statistically significant outcome disparity" emerged in a "fairness analysis" of the automated system for universal credit advances carried out in February this year.
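The DWP hasn't published its methodology, but a disparity check of this kind typically compares referral rates across groups and tests whether the differences could plausibly be chance. Here's a minimal sketch of what that looks like; the group names and counts are entirely hypothetical (the real figures are redacted), and the chi-squared test is just one common way to do it:

```python
# Minimal sketch of a group-disparity check of the kind a "fairness
# analysis" might perform. All group names and counts are hypothetical;
# the DWP's actual methodology and figures are redacted or unpublished.
from scipy.stats import chi2_contingency

# Per hypothetical group: (referred for investigation, not referred)
observed = {
    "group_a": (180, 9820),   # 1.8% referral rate
    "group_b": (320, 9680),   # 3.2% referral rate
    "group_c": (150, 9850),   # 1.5% referral rate
}

table = [list(counts) for counts in observed.values()]
chi2, p_value, dof, _ = chi2_contingency(table)

for group, (referred, not_referred) in observed.items():
    rate = referred / (referred + not_referred)
    print(f"{group}: {rate:.1%} referred")

# A small p-value means the rate differences are unlikely to be chance:
# the "statistically significant outcome disparity" the documents describe.
print(f"chi-squared = {chi2:.1f}, p = {p_value:.2e} (dof = {dof})")
```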
Officials have been lying about it from the outset; the information had to be prised from them with FOIA requests, and now they're stonewalling the media. The public will assume that the data is infallible: almost every comment on every story about this already amounts to "the computer is just telling it like it is," and the media will be hard-pressed to explain how bias is introduced by training flaws and other factors. The Post Office Horizon scandal may have galvanized the nation, but nothing has been done about the underlying problems that led to it.
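Since explaining that mechanism is hard in prose, here's one common way it happens, as a toy sketch. If a model is trained on historical investigation outcomes, and past investigators scrutinized one group more heavily for identical behavior, the model learns to reproduce that skew even though no one programmed it to. Everything below is synthetic and assumed for illustration; it is not the DWP's system:

```python
# Toy illustration of training-data bias: synthetic claims where past
# investigators flagged "group_b" twice as often at the same underlying
# risk. The trained model reproduces that skew. Not the DWP's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)      # 0 = group_a, 1 = group_b
risk = rng.normal(size=n)          # genuine fraud signal, same distribution per group

# Historical labels: same real behavior, but group_b was investigated
# (and therefore labeled as fraud) twice as often.
flag_prob = 1 / (1 + np.exp(-(risk - 2))) * np.where(group == 1, 2.0, 1.0)
labels = rng.random(n) < np.clip(flag_prob, 0, 1)

model = LogisticRegression().fit(np.column_stack([risk, group]), labels)

# At identical risk, the model now recommends investigating group_b more.
test_risk = np.zeros(1000)
for g in (0, 1):
    X_test = np.column_stack([test_risk, np.full(1000, g)])
    rate = model.predict_proba(X_test)[:, 1].mean()
    print(f"group_{'ab'[g]}: predicted flag probability = {rate:.1%}")
```

The point of the sketch: the bias lives in the labels, not the code, so auditing the algorithm alone won't find it. Only an outcome-disparity analysis like the one the DWP ran can.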
It's almost as if the point of using AI is to save money by denying benefits to legitimate claimants and get away with it by saying "computer says no" when challenged.
Government departments, including the Home Office and the DWP, have in recent years been reluctant to disclose more about their use of AI, citing concerns that doing so could allow bad actors to manipulate systems.
It is not clear which age groups are more likely to be wrongly targeted for fraud checks by the algorithm, as the DWP redacted that part of the fairness analysis.