Small stickers on the ground trick Tesla autopilot into steering into opposing traffic lane

Researchers from Tencent Keen Security Lab have published a report detailing their successful attacks on Tesla firmware, including remote control over the steering, and an adversarial example attack on the autopilot that confuses the car into driving into the oncoming traffic lane. Read the rest

The Chinese Communist Party's newspaper has spun out an incredibly lucrative censorship business

People.cn is a publicly listed subsidiary of The People's Daily, the official newspaper of the Chinese Communist Party; its fortunes keep rising with no end in sight as it markets itself as an outsourced censorship provider that combines AI and a vast army of human censors to detect and block attempts to circumvent censorship through irony, memes, and metaphors. Read the rest

Philadelphia city council candidate says his secret AI has discovered disqualifying fraud in the nominations of 30 out of 33 candidates

Devon Cade -- a former bureaucrat who now describes himself as a "philanthropist" -- has asked a court to disqualify 30 out of the 33 other Democrats standing in the primary for the city's council elections on the grounds that the signatures on their nominating petitions were forged. Read the rest

Some pretty impressive machine-learning generated poetry courtesy of GPT-2

GPT-2 is OpenAI's language-generation model (last seen around these parts as a means of detecting machine-generated text); it's powerful and cool, and Gwern Branwen fed it the Project Gutenberg poetry corpus to see what kind of poetry it would write. Read the rest
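
Branwen fine-tuned GPT-2 on the poetry corpus before sampling from it; that training step isn't reproduced here, but the generation side looks roughly like this sketch using the off-the-shelf Hugging Face transformers library (the prompt and sampling parameters are illustrative assumptions, not his settings):

```python
# Minimal sketch: sampling verse-like text from a pretrained GPT-2 model.
# This is NOT Gwern Branwen's pipeline (which fine-tunes on Project
# Gutenberg poetry first); it only shows the sampling step.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Shall I compare thee to"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling (rather than greedy decoding) is what makes the output surprising;
# temperature and top_k control how far it strays from the obvious next word.
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=50,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```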

Palmer Luckey wins secretive Pentagon contract to develop AI for drones

Palmer Luckey (previously), the alt-right financier who was made a billionaire by Mark Zuckerberg's decision to acquire his VR startup Oculus, is now running a Peter-Thiel-backed surveillance startup called Anduril Industries, which has won a contract to contribute to Project Maven, the Pentagon's controversial AI-for-drones system (Google's involvement in Project Maven sparked an employee uprising that ended with the relevant executives leaving the company and the contract being allowed to lapse). Read the rest

A machine-learning system that guesses whether text was produced by machine-learning systems

GLTR is an MIT-IBM Watson AI Lab/Harvard NLP joint project that analyzes a text and predicts whether it was generated by a machine-learning model. Read the rest
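
The GLTR codebase itself isn't shown here; the sketch below illustrates the underlying idea in plain PyTorch: rank each token of a text under GPT-2's predictions, since text sampled from a language model leans much harder on low-rank (high-probability) tokens than human prose does. The function name and example sentence are mine, not the project's.

```python
# Sketch of GLTR-style detection (not the GLTR code itself): score each token
# of a text by its rank under GPT-2's predicted distribution. Lots of
# low-rank tokens is a hint the text was sampled from a model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    ranks = []
    # For each position, find where the *actual* next token falls in the
    # model's sorted prediction for that position.
    for pos in range(ids.shape[1] - 1):
        sorted_ids = torch.argsort(logits[0, pos], descending=True)
        actual = ids[0, pos + 1]
        ranks.append((sorted_ids == actual).nonzero().item())
    return ranks

print(token_ranks("The quick brown fox jumps over the lazy dog."))
```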

Towards a general theory of "adversarial examples," the bizarre, hallucinatory motes in machine learning's all-seeing eye

For several years, I've been covering the bizarre phenomenon of "adversarial examples" (AKA "adversarial perturbations"), these being often-tiny changes to data that can cause machine-learning classifiers to totally misfire: imperceptible squeaks that make speech-to-text systems hallucinate phantom voices; or tiny shifts to a 3D image of a helicopter that make image-classifiers hallucinate a rifle. Read the rest
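
To make the idea concrete, here's a minimal sketch of the fast gradient sign method (FGSM), one standard recipe for crafting adversarial perturbations. It illustrates the general technique only, not any specific attack from the research discussed; the toy model and epsilon value are placeholders.

```python
# Minimal FGSM sketch: nudge the input in the direction that increases the
# classifier's loss. A small epsilon is often enough to flip the prediction
# while leaving the input looking essentially unchanged to a human.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step epsilon along the sign of the gradient of the loss w.r.t. the input.
    return (x + epsilon * x.grad.sign()).detach()

# Toy usage with a linear "classifier", just to show the mechanics.
toy_model = torch.nn.Linear(4, 2)
x = torch.rand(1, 4)
label = torch.tensor([0])
x_adv = fgsm_perturb(toy_model, x, label)
print(x, x_adv)
```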

Common Voice: Mozilla releases the largest dataset of voice samples for free, for all

42,000 Mozilla supporters contributed to Common Voice, a free, open dataset of 1,361 hours of voice recordings in 18 languages, which is now free for anyone to use as a set of "high quality, transcribed voice data... available to startups, researchers, and anyone interested in voice-enabled technologies" -- in a field plagued with sampling bias problems, this is a dataset that aims to be diverse, representative and inclusive, and it's growing by the day (you can contribute your voice too!) -- the whole project is inspiring. (via Four Short Links) Read the rest
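
Using the data is about as simple as it gets; the sketch below assumes the release ships audio clips alongside a tab-separated transcript file with "path" and "sentence" columns (the file and column names are assumptions about the download layout, so check the corpus you actually fetch):

```python
# Sketch: walk a Common Voice download's transcript TSV with pandas.
# "validated.tsv", "path", and "sentence" are assumed names for the release
# layout, not guaranteed; adjust to the corpus version you download.
import pandas as pd

clips = pd.read_csv("cv-corpus/en/validated.tsv", sep="\t")
for _, row in clips.head(5).iterrows():
    print(row["path"], "->", row["sentence"])
```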

Beyond GIGO: how "predictive policing" launders racism, corruption and bias to make them seem empirical

"Predictive policing" is the idea that you can feed crime stats to a machine-learning system and it will produce a model that can predict crime. It is garbage. Read the rest

Chasing down that list of potential Predpol customers reveals dozens of cities that have secretly experimented with "predictive policing"

Last October, I published a list of cities that appeared to have contracted with Predpol, a "predictive policing" company, based on research provided to me by an anonymous source who used clever methods to uncover the customer list. Read the rest

American prisoners coerced or tricked into providing voice-prints for use in eternal, secret, unchecked surveillance

American prisoners are being forced -- on pain of losing access to the prison phone system -- to provide training data for a voice-print recognition algorithm that private contractors are building for a biometric surveillance system that listens in on prisoners' calls. Read the rest

The Safe Face Pledge: an ethical code of conduct for facial recognition scientists and developers

The Safe Face Pledge launched last month as a "pledge to mitigate abuse of facial analysis technology," with four themes: "Show Value for Human Life, Dignity, and Rights;" "Address Harmful Bias"; "Facilitate Transparency"; and "Embed Commitments into Business Practices" (SAFE). Read the rest

Building a high-performance cluster of Gameboy emulators to teach computers to play video games

Kamil Rocki was inspired by the 2016 paper from Google DeepMind researchers explaining how they used machine learning to develop a system that could play Breakout on the Atari 2600 with superhuman proficiency. Read the rest
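
Rocki's cluster code isn't reproduced here, but the learning algorithm behind the DeepMind Atari results is deep Q-learning; the sketch below shows the one-step Bellman update at its core, with toy state and action sizes standing in for emulator frames and button presses.

```python
# Sketch of the core deep Q-learning update (the technique behind the DeepMind
# Atari results referenced above); not Kamil Rocki's code, just the one-step
# Bellman target such systems are trained toward.
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))  # toy sizes
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99  # discount factor

def dqn_update(state, action, reward, next_state, done):
    """One gradient step pushing Q(state, action) toward the Bellman target."""
    q_value = q_net(state)[0, action]
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max() * (1.0 - done)
    loss = (q_value - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy call with random tensors standing in for emulator frames/actions.
print(dqn_update(torch.rand(1, 4), 1, 1.0, torch.rand(1, 4), 0.0))
```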

The third annual AI Now report: 10 more ways to make AI safe for human flourishing

Every year, NYU's nonprofit, critical activist group AI Now releases a report on the state of AI, with ten recommendations for making machine learning systems equitable, transparent and fail-safe (2016, 2017); this year's report just published, written by a fantastic panel, including Meredith Whittaker (previously -- one of the leaders of the successful googler uprising over the company's contract to supply AI tools to the Pentagon's drone project); Kate Crawford (previously -- one of the most incisive critics of AI); Jason Schultz (previously -- a former EFF attorney now at NYU) and many others. Read the rest

Breed weird critters with machine learning and Ganbreeder

Ganbreeder uses a machine learning technique called Generative Adversarial Networks (GANs) to generate images that seem like photos, at least at first glance. Read the rest
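
Ganbreeder's "breeding" works because a GAN's generator maps a latent vector to an image, so mixing two parents' latent vectors yields a child image that shares features of both. The tiny generator below is a stand-in for illustration (Ganbreeder itself sits on top of a far larger pretrained GAN); only the mixing mechanics are the point.

```python
# Sketch of the "breeding" idea: interpolate two latent vectors and decode the
# result with the generator. The toy generator here is a placeholder, not
# Ganbreeder's model.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 3 * 32 * 32), nn.Tanh(),  # fake 32x32 RGB "image"
)

parent_a = torch.randn(1, 128)  # latent vector of first parent image
parent_b = torch.randn(1, 128)  # latent vector of second parent image

def breed(z_a, z_b, mix=0.5):
    """Blend two latent vectors and decode the child image."""
    child_z = mix * z_a + (1 - mix) * z_b
    return generator(child_z).view(3, 32, 32)

child_image = breed(parent_a, parent_b)
print(child_image.shape)  # torch.Size([3, 32, 32])
```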

Babysitter vetting and voice-analysis: Have we reached peak AI snakeoil?

The ever-useful Gartner Hype Cycle identifies an inflection point in the life of any new technology: the "Peak of Inflated Expectations," attained just before the sharp dropoff into the "Trough of Disillusionment"; I've lived through the hype-cycles of several kinds of technology and one iron-clad correlate of the "Peak of Inflated Expectations" is the "Peak of Huckster Snakeoil Salesmen": the moment at which con-artists just add a tech buzzword to some crooked scam and head out into the market to net a fortune before everyone gets wise to the idea that the shiny new hypefodder isn't a magic bullet. Read the rest

Chinese AI traffic cam mistook a bus ad for a human and publicly shamed the CEO it depicted for jaywalking

China's war on jaywalking went to the next level last spring when AI-based facial recognition systems were integrated into some crosswalks, to punish jaywalkers by squirting them with water, sending them texts warning them about legal consequences of jaywalking, and/or publicly shaming them by displaying their pictures and names on large digital billboards. Read the rest
