security Small stickers on the ground trick Tesla autopilot into steering into opposing traffic lane Cory Doctorow
security Model stealing, reward hacking and poisoning attacks: a taxonomy of machine learning's failure modes Cory Doctorow
security Researchers think that adversarial examples could help us maintain privacy from machine learning systems Cory Doctorow
security Towards a method for fixing machine learning's persistent and catastrophic blind spots Cory Doctorow
security Towards a general theory of "adversarial examples," the bizarre, hallucinatory motes in machine learning's all-seeing eye Cory Doctorow
AI Adversarial examples: attack can imperceptibly alter any sound (or silence), embedding speech that only voice-assistants will hear Cory Doctorow
scholarship Adversarial patches: colorful circles that convince machine-learning vision system to ignore everything else Cory Doctorow
security Researchers trick Google's AI into thinking rifles are helicopters, without any knowledge of the algorithm's design Cory Doctorow
scholarship Researchers can fool machine-learning vision systems with a single, well-placed pixel Cory Doctorow
scholarship The "universal adversarial perturbation" undetectably alters images so AI can't recognize them Cory Doctorow