Why the FBI would be nuts to try to use chatbots to flush out terrorists online

Social scientist/cybersecurity expert Susan Landau (previously) and Cathy "Weapons of Math Destruction" O'Neil take to Lawfare to explain why it would be a dangerous mistake for the FBI to use machine learning-based chatbots to flush out potential terrorists online.

These "Artificial Intelligence Targeting Personas" (ATIPs) have been seriously proposed as a way to find terrorists before they strike, and as nice as it would be to be able to do that, there's every reason to believe that ATIPs would fail to find the bad guys, throw suspicion on the rest of us, and waste a nearly infinite amount of human time trying to investigate their "leads."

The core problem is that machine learning only works when its predictive models can be compared to real-world outcomes and tweaked. If Amazon's ML system predicts that changing the checkout process in a specific way will increase sales, Amazon can test that hypothesis and feed back the results to the system. But terrorism is incredibly rare. Virtually everyone is not a terrorist. A chatbot would be vanishingly unlikely to meet a terrorist, and even if it did, it wouldn't meet a statistically significant number of terrorists (because there isn't a statistically significant number of terrorists). These bots would be reduced to identifying "terrorism-like activity" (whatever that is).
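To put rough numbers on that base-rate problem, here's a quick Bayes'-theorem sketch in Python (the figures are made up for illustration, not taken from Landau and O'Neil): even a bot that's wildly, implausibly accurate would bury investigators in false positives.

```python
# Back-of-the-envelope Bayes' theorem with made-up numbers: given that the
# bot flags an account, how likely is that account to actually be a terrorist?

def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """P(terrorist | flagged), by Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Hypothetical: 100 would-be attackers among 300 million online accounts,
# and an absurdly good bot: 99% sensitivity, 1% false-positive rate.
prevalence = 100 / 300_000_000
ppv = positive_predictive_value(prevalence, 0.99, 0.01)

print(f"P(terrorist | flagged) = {ppv:.6f}")  # ~0.000033
print(f"Innocents flagged: {300_000_000 * (1 - prevalence) * 0.01:,.0f}")  # ~3,000,000
```

Swap in whatever numbers you like; as long as the prevalence is measured in parts per million, essentially everyone the bot flags is innocent.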

Even if you can make this work (you can't), there's another problem: society is not well served by having everyone under constant state scrutiny and suspicion of terrorism.

To sum up, a single bot will generate an enormous pile of difficult work for humans to resolve. When you scale up, replacing a single human with an army of bots, then that problem skyrockets. You've replaced the limitations of a human with the limitations of multiple weak algorithms with highly correlated errors. This will give the humans more work sifting through the detritus than they'd had before, and very likely looking in the wrong place to boot. It's much better to do a smarter search to begin with…

…For that matter, we've already seen how surveillance and suspicion over-reach can effectively isolate and disaffect a population. One of us helped with the data analysis that led to the court case examining whether New York City's stop-and-frisk program was constitutional (it wasn't). And we are seeing AI and machine learning techniques proliferate world-wide while producing highly inaccurate and distasteful responses.
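To see what "highly correlated errors" does to that army of bots, here's a minimal simulation sketch (all numbers assumed for illustration, not drawn from the article): majority voting wipes out independent mistakes, but when the bots have all mislearned the same pattern, the pile of innocent accounts dumped on human analysts barely shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)
n_accounts, n_bots, fpr, rho = 100_000, 20, 0.01, 0.9  # rho = share of errors that are correlated

# Every simulated account is innocent, so every flag is a false positive.
# Independent errors: each bot makes its own unrelated mistakes.
independent = rng.random((n_accounts, n_bots)) < fpr

# Correlated errors: the bots mostly share one mislearned "terrorism-like"
# pattern, plus a small independent component, so the per-bot error rate
# is the same ~1% in both scenarios.
shared = rng.random(n_accounts) < rho * fpr
own = rng.random((n_accounts, n_bots)) < (1 - rho) * fpr
correlated = shared[:, None] | own

def escalated_to_humans(votes):
    """Accounts a strict majority of bots agree on: the analysts' workload."""
    return int((votes.sum(axis=1) > n_bots // 2).sum())

print("one bot alone flags:          ", int(independent[:, 0].sum()))      # ~1,000
print("20 independent bots, majority:", escalated_to_humans(independent))  # ~0
print("20 correlated bots, majority: ", escalated_to_humans(correlated))   # ~900
```

The exact numbers don't matter; the point is that ensembling only buys you anything when the bots' mistakes are independent, and bots trained on the same data toward the same ill-defined target won't be.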

Why Ghosts in the Machine Should Remain Ghosts

[Susan Landau and Cathy O'Neil/Lawfare]

(Image: Tomy Chatbot, Michele MF, CC-BY-SA)

(via Mathbabe)