AI Alarmism: why smart people believe dumb things about our future AI overlords

Maciej Cegłowski (previously) gave this talk, "Superintelligence: The Idea That Eats Smart People," at Web Camp Zagreb last October, spending 45 minutes delving into the origin of the idea that computers are going to become apocalyptic, self-programming, superintelligent basilisks that end all life on Earth (and variations on this theme), and then explaining why this fundamentally evidence-free, fuzzy idea has colonized so many otherwise brilliant people -- including people like Stephen Hawking -- and why it's an irrational and potentially harmful belief system. Read the rest

This Bach chorale composed by machine learning is pretty good

Gaëtan Hadjeres and François Pachet at the Sony Computer Science Laboratories in Paris created DeepBach and trained it on Bach's 352 chorales. The resulting composition is certainly in the style. So why does this work better than some other attempts? Read the rest

Free audiobook of Car Wars, my self-driving car/crypto back-door apocalypse story

Last month, Melbourne's Deakin University published Car Wars, a short story I wrote to inspire thinking and discussion about the engineering ethics questions in self-driving car design, moving beyond the trite and largely irrelevant trolley problem. Read the rest

Snowden on fake news, Twitter features, and the rule of law

Edward Snowden's Periscope interview with Jack Dorsey -- hosted by the Pardon Snowden campaign -- ranged over a lot of territory, including the special problems of metadata surveillance (metadata can be "more intrusive" than content "because it can be understood at scale"); asymmetry in privacy (where "an increasing imbalance of power" arises between citizens, with no privacy, and officials with all the privacy: "We can't even see their tax returns"); the problems of relying on the rule of law in a "global context" where surveillance crosses borders and jurisdictions; and fake news, which Snowden thinks can't be solved by asking Google to be a "referee" but rather when "We talk and we share and we point out what is true." Read the rest

Why the FBI would be nuts to try to use chatbots to flush out terrorists online

Social scientist/cybersecurity expert Susan Landau (previously) and Cathy "Weapons of Math Destruction" O'Neil take to Lawfare to explain why it would be a dangerous mistake for the FBI to use machine learning-based chatbots to flush out potential terrorists online. Read the rest

A checklist for figuring out whether your algorithm is a "weapon of math destruction"

The Data & Society institute (dedicated to critical, interdisciplinary perspectives on big data) held an online seminar devoted to Cathy O'Neil's groundbreaking book Weapons of Math Destruction, which showed how badly designed algorithmic decision-making systems can create, magnify and entrench the social problems they're supposed to solve, perpetuating inequality, destabilizing the economy, and making a small number of people very, very rich. Read the rest

The latest generation of chatbot toys listen to your kids 24/7 and send their speech to a military contractor

Last year's Hello Barbie chatbot toy sent all your kid's speech to cloud servers operated by Mattel and its tech partner, but only when your kid held down Barbie's listen button. New chatbot toys like My Friend Cayla and the i-Que Intelligent Robot are in constant listening mode -- as is your "OK Google"-enabled phone, your Alexa-enabled home mic, and your Siri-enabled iOS device -- and everything uttered in mic range is transmitted to Nuance, a company that makes speech-to-text tech (you probably know them through their Dragon-branded tools) and contracts with the US military. Read the rest

Accelerando: once you teach a computer to see, it can teach itself to hear

In SoundNet: Learning Sound Representations from Unlabeled Video, researchers from MIT's computer science department describe their success in using software image-recognition to automate sound recognition: once software can use video analysis to decide what's going on in a clip, it can then use that understanding to label the sounds in the clip, and thus accumulate a model for understanding sound, without a human having to label videos first for training purposes. Read the rest
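
The trick is a teacher-student setup: a pretrained vision network "watches" the video, and its predictions become soft labels for an audio network listening to the same clip. Here's a minimal sketch of that idea in PyTorch, with resnet18 standing in for the vision teacher and `unlabeled_video_loader` as a hypothetical loader of (frames, waveform) pairs -- not the authors' actual code.

```python
# Minimal teacher-student sketch of the SoundNet idea (PyTorch).
# `unlabeled_video_loader` is a hypothetical placeholder, and resnet18
# stands in for the vision "teacher" networks used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

teacher = resnet18(pretrained=True).eval()   # image classifier: the "teacher"

# Student: a small 1-D conv net over the raw audio waveform.
student = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=32, stride=8), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 1000),                     # predict the teacher's 1000 classes
)
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

for frames, audio in unlabeled_video_loader:     # hypothetical: (B,3,H,W) images, (B,1,T) audio
    with torch.no_grad():
        target = F.softmax(teacher(frames), dim=1)        # soft labels from vision
    pred = F.log_softmax(student(audio), dim=1)
    loss = F.kl_div(pred, target, reduction="batchmean")  # match the teacher's beliefs
    opt.zero_grad()
    loss.backward()
    opt.step()
```

No human labels anywhere: the only supervision is the vision network's own (imperfect) beliefs about each clip, which is exactly why the unlabeled video is enough.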

LipNet: lip-reading AI uses machine learning

Lip-reading algorithms have all sorts of real-world applications, and LipNet shows great promise in machine-learning lipreading of constructed sentences from the GRID sentence corpus. Read the rest

Google and Facebook's "fake news" ban is a welcome nail in the coffin of "software objectivity"

In the wake of the Trump election -- a triumph of fake news -- both Google and Facebook have announced that they will take countermeasures to exclude "fake news" from their services, downranking them in the case of Facebook and cutting them off from ad payments in Google's case. Read the rest

Clinton's data-driven "ground game" sucked in exactly the same way "targeted ads" suck, for the same reason

Tom Ewing rails against the Clinton campaign's reliance on "micro-modelling preference at an individual voter level to tell [volunteers] who to turn out where with what message and where to allocate resources." Read the rest

An "ahem" detector that uses deep learning to auto-clean recordings of speech

Train the Deep Learning Ahem Detector with two sets of audio files, "a negative sample with clean voice/sound" (minimum 3 minutes) and "a positive one with 'ahem' sounds concatenated" (minimum 10s) and it will detect "ahems" in any voice sample thereafter. Read the rest
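
In the spirit of that two-file setup, here's a hedged sketch of what such a training loop might look like: mel-spectrogram summaries of short windows, negatives from the clean recording, positives from the concatenated "ahems," and a simple classifier on top. The file names and the logistic-regression stand-in are illustrative only -- the real project, as its name says, uses a deep network.

```python
# Hedged sketch of the two-class "ahem" setup; file names and the
# logistic-regression classifier are illustrative stand-ins.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def mel_windows(path, win_s=0.5):
    """Slice a recording into half-second windows and summarize each as a 40-dim mel vector."""
    y, sr = librosa.load(path, sr=16000)
    hop = int(win_s * sr)
    feats = []
    for start in range(0, len(y) - hop, hop):
        m = librosa.feature.melspectrogram(y=y[start:start + hop], sr=sr, n_mels=40)
        feats.append(librosa.power_to_db(m).mean(axis=1))   # average over time
    return np.array(feats)

neg = mel_windows("clean_voice.wav")         # >= 3 minutes of clean speech (negatives)
pos = mel_windows("ahem_concatenated.wav")   # >= 10 s of concatenated "ahems" (positives)
X = np.vstack([neg, pos])
y = np.r_[np.zeros(len(neg)), np.ones(len(pos))]

clf = LogisticRegression(max_iter=1000).fit(X, y)
# Flag which windows of a new recording look like an "ahem":
print(clf.predict(mel_windows("lecture_recording.wav")))
```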

Researchers trick facial recognition systems with facial features printed on big glasses

In Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, researchers from Carnegie Mellon and UNC showed how they could fool industrial-strength facial recognition systems (including Alibaba's "smile to pay" transaction system) by printing wide, flat glasses frames bearing elements of other people's faces, with "up to 100% success." Read the rest
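
The core idea is ordinary adversarial-example optimization, just confined to the pixels a pair of printed frames would cover. Here's a minimal PyTorch sketch under that assumption; `face_model`, `glasses_mask`, and `target_class` are hypothetical stand-ins, and the paper's real attack adds printability and smoothness constraints this leaves out.

```python
# Sketch of a glasses-confined impersonation attack; `face_model` and
# `glasses_mask` are hypothetical stand-ins, not the paper's code.
import torch
import torch.nn.functional as F

def glasses_attack(face_model, image, glasses_mask, target_class, steps=200, lr=0.05):
    """Optimize only the pixels under the glasses mask so the classifier
    outputs `target_class` (impersonation)."""
    perturb = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([perturb], lr=lr)
    for _ in range(steps):
        # Keep the face untouched; only the masked frame region changes.
        adv = (image * (1 - glasses_mask) + perturb * glasses_mask).clamp(0, 1)
        loss = F.cross_entropy(face_model(adv.unsqueeze(0)),
                               torch.tensor([target_class]))
        opt.zero_grad()
        loss.backward()
        opt.step()
    # The optimized masked region is what would be printed on the frames.
    return (perturb * glasses_mask).detach()
```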

Using machine-learning to auto-gen scary faces and hauntify buildings

The Nightmare Machine is an MIT project to use machine learning image-processing to make imagery for Hallowe'en. Read the rest

Using Machine Learning to synthesize images that look NSFW but aren't

Yahoo has released a machine-learning model called open_nsfw that is designed to distinguish not-safe-for-work images from worksafe ones. By tweaking the model and combining it with places-CNN, MIT's scene-recognition model, Gabriel Goh created a bunch of machine-generated scenes that score high for both models -- things that aren't porn, but look porny. Read the rest
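
Conceptually it's a joint optimization: nudge an image until both classifiers light up at once. Here's a hedged pixel-space sketch of that idea; `nsfw_model`, `places_model`, and `scene_class` are hypothetical stand-ins, and Goh's actual synthesis works through a generative model rather than raw pixels.

```python
# Hedged sketch: gradient ascent on an image to jointly maximize the
# open_nsfw score and a places-CNN scene score. Both model handles are
# hypothetical stand-ins.
import torch

def synthesize(nsfw_model, places_model, scene_class, steps=300, lr=0.02):
    img = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from noise
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        nsfw_score = torch.softmax(nsfw_model(img), dim=1)[0, 1]          # "NSFW" probability
        scene_score = torch.softmax(places_model(img), dim=1)[0, scene_class]
        loss = -(nsfw_score.log() + scene_score.log())   # raise both scores at once
        opt.zero_grad()
        loss.backward()
        opt.step()
        img.data.clamp_(0, 1)                            # keep pixels valid
    return img.detach()
```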

AI's blind spot: garbage in, garbage out

Social scientist Kate Crawford (previously) and legal scholar Ryan Calo (previously) helped organize the interdisciplinary White House AI Now summits on how AI could increase inequality, erode accountability, and lead us into temptation and what to do about it. Read the rest

Joi Ito interviews Barack Obama for Wired: machine learning, neurodiversity, basic research and Star Trek

Joi Ito (previously) -- director of MIT Media Lab, former Creative Commons chief, investor, entrepreneur, and happy mutant -- interviewed Barack Obama for a special, Obama-edited issue of Wired. Read the rest