Marina Abramović and Kim Stanley Robinson perform "The Hard Problem," a performance-art podcast

"The Hard Problem" is a new episode of the Into The Impossible podcast from the Arthur C. Clarke Center for Human Imagination: it features the outcome of a collaboration between legendary performance artist Marina Abramović (previously) and environmentalist science fiction writer Kim Stanley Robinson (previously): a short story about an interstellar journey incorporating elements of Robinson's outstanding 2015 novel Aurora -- a novel that is pitiless in its insistence on rigor in our thinking about the problems of living in space and on other planets.

AI Alarmism: why smart people believe dumb things about our future AI overlords

Maciej Cegłowski (previously) gave this talk, "Superintelligence: The Idea That Eats Smart People," at Web Camp Zagreb last October, spending 45 minutes delving into the origin of the idea that computers are going to become apocalyptic, self-programming, superintelligent basilisks that end all life on Earth (and variations on this theme), and then explaining why this fundamentally evidence-free, fuzzy idea has colonized so many otherwise brilliant people -- including people like Stephen Hawking -- and why it's an irrational and potentially harmful belief system. Read the rest

This Bach chorale composed by machine learning is pretty good

Gaetan Hadjeres and Francois Pachet at the Sony Computer Science Laboratories in Paris created DeepBach, then trained it on Bach's 352 chorales. The resulting composition is certainly in the style. So why does this work better than some other attempts? Read the rest
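
DeepBach itself models the four voices with recurrent networks and generates new chorales with a Gibbs-like resampling procedure; as a much simpler illustration of the underlying move -- learn note-to-note statistics from a corpus, then sample from them -- here is a toy Markov-chain sketch (the three-melody corpus is a placeholder, not the 352 chorales):

```python
# A toy sketch of corpus-driven note generation, not DeepBach's actual model
# (DeepBach uses recurrent networks and Gibbs-like resampling over four voices).
import random
from collections import defaultdict

# Placeholder melodies standing in for a real chorale corpus.
corpus = ["C4 E4 G4 E4 C4", "C4 D4 E4 F4 G4", "G4 F4 E4 D4 C4"]

# Learn note-to-note transition statistics from the corpus.
transitions = defaultdict(list)
for melody in corpus:
    notes = melody.split()
    for current, following in zip(notes, notes[1:]):
        transitions[current].append(following)

def generate(start="C4", length=8):
    """Sample a melody by walking the learned transition table."""
    out = [start]
    while len(out) < length and transitions[out[-1]]:
        out.append(random.choice(transitions[out[-1]]))
    return " ".join(out)

print(generate())
```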

Dive into Machine Learning and Artificial Intelligence with this training bundle

Artificial Intelligence already has tons of applications in countless industries, making it an incredibly lucrative skill set. It’s an exciting time to get involved in AI, and this bundle is an excellent way to get familiar with a variety of machine learning concepts and techniques.

Read the rest

A checklist for figuring out whether your algorithm is a "weapon of math destruction"

The Data & Society institute (dedicated to critical, interdisciplinary perspectives on big data) held an online seminar devoted to Cathy O'Neil's groundbreaking book Weapons of Math Destruction, which showed how badly designed algorithmic decision-making systems can create, magnify and entrench the social problems they're supposed to solve, perpetuating inequality, destabilizing the economy, and making a small number of people very, very rich. Read the rest

An AI wrote a Christmas song

It's not bad. In fact, this is a triumph: a Christmas song written entirely by an artificial intelligence at the University of Toronto. Yet it has that uncanny neural network je ne sais quoi in spades.

I swear it’s Christmas Eve
I hope that’s what you say
The best Christmas present in the world is a blessing
I’ve always been there for the rest of our lives.
Read the rest

Accelerando: once you teach a computer to see, it can teach itself to hear

In SoundNet: Learning Sound Representations from Unlabeled Video, researchers from MIT's computer science department describe their success in using image-recognition software to automate sound recognition: once the software can tell from video analysis what's going on in a clip, it can use that understanding to label the clip's audio, and thus accumulate a model for understanding sound without a human having to label videos first for training purposes. Read the rest
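
The paper's core trick is teacher/student transfer across modalities: a pretrained vision network looks at the video frames, and its output distribution becomes the training target for an audio network that hears only the soundtrack. A minimal sketch of that setup, assuming PyTorch and using dummy data (this is not the authors' code or architecture):

```python
# A minimal sketch of SoundNet-style teacher/student training, not the
# authors' code: a pretrained vision network "teaches" an audio network.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

# Teacher: a vision network that labels what's happening in the video frames.
# (weights="DEFAULT" downloads pretrained ImageNet weights; SoundNet used
# ImageNet- and Places-trained CNNs.)
teacher = models.resnet18(weights="DEFAULT").eval()

# Student: a small 1-D CNN that sees only the raw audio waveform.
student = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=32, stride=8), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 1000),   # match the teacher's 1,000 ImageNet classes
)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def train_step(frame_batch, audio_batch):
    """frame_batch: (N, 3, 224, 224) video frames; audio_batch: (N, 1, samples)."""
    with torch.no_grad():
        target = F.softmax(teacher(frame_batch), dim=1)   # teacher's label distribution
    pred = F.log_softmax(student(audio_batch), dim=1)
    loss = F.kl_div(pred, target, reduction="batchmean")  # student mimics the teacher
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Dummy tensors just to show the shapes; real training pairs frames and audio
# cut from the same unlabeled videos.
print(train_step(torch.randn(4, 3, 224, 224), torch.randn(4, 1, 22050)))
```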

LipNet: lip-reading AI uses machine learning

Lip-reading algorithms have all sorts of real-world applications, and LipNet shows great promise, using machine learning to lip-read constructed sentences from the GRID sentence corpus. Read the rest

Google and Facebook's "fake news" ban is a welcome nail in the coffin of "software objectivity"

In the wake of the Trump election -- a triumph of fake news -- both Google and Facebook have announced that they will take countermeasures to exclude "fake news" from their services, downranking fake stories in Facebook's case and cutting fake-news sites off from ad payments in Google's. Read the rest

An "ahem" detector that uses deep learning to auto-clean recordings of speech

Train the Deep Learning Ahem Detector with two sets of audio files, "a negative sample with clean voice/sound" (minimum 3 minutes) and "a positive one with 'ahem' sounds concatenated" (minimum 10s), and it will detect "ahems" in any voice sample thereafter. Read the rest
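
The project itself trains a convolutional network on spectrogram images; as a rough sketch of the same two-file workflow with a much simpler classifier (file names are placeholders, and librosa plus scikit-learn stand in for the project's actual stack):

```python
# A rough sketch, not the project's code: the real detector trains a
# convolutional network on spectrogram images. Here a logistic-regression
# classifier on MFCC windows stands in, just to show the two-file setup.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def windows(path, sr=16000, win=0.5):
    """Yield one averaged MFCC vector per consecutive `win`-second chunk."""
    y, _ = librosa.load(path, sr=sr)
    step = int(win * sr)
    for i in range(0, len(y) - step, step):
        mfcc = librosa.feature.mfcc(y=y[i:i + step], sr=sr, n_mfcc=13)
        yield mfcc.mean(axis=1)           # 13-dim summary of the chunk

clean = np.array(list(windows("clean_voice.wav")))  # negative: >= 3 min of clean speech
ahems = np.array(list(windows("ahem_sounds.wav")))  # positive: >= 10 s of concatenated ahems

X = np.vstack([clean, ahems])
y = np.array([0] * len(clean) + [1] * len(ahems))
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Flag the chunks of a new recording that look like "ahem".
new = np.array(list(windows("new_recording.wav")))
print(np.where(clf.predict(new) == 1)[0])            # indices of suspected ahems
```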

Researchers trick facial recognition systems with facial features printed on big glasses

In Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, researchers from Carnegie Mellon and UNC showed how they could fool industrial-strength facial recognition systems (including Alibaba's "smile to pay" transaction system) by printing wide, flat glasses frames with elements of other people's faces, with "up to 100% success." Read the rest
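
Conceptually, the attack optimizes only the pixels that will end up printed on the glasses frames, until the classifier's output flips to a chosen identity. A minimal sketch of that idea in PyTorch, with a toy model and a crude rectangular mask standing in for the paper's printable frames and the real face-recognition systems it attacked (this is not the paper's method or code):

```python
# A minimal sketch of the general idea, not the paper's method or code:
# optimize only the pixels under a glasses-shaped mask so a face classifier
# assigns the image to a chosen target identity.
import torch
import torch.nn.functional as F

def glasses_attack(face_model, image, mask, target_class, steps=200, lr=0.05):
    """image: (1, 3, H, W) in [0, 1]; mask: (1, 1, H, W), 1 where the frames sit."""
    face_model.eval()
    perturb = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([perturb], lr=lr)
    for _ in range(steps):
        adv = (image + perturb * mask).clamp(0, 1)   # only the "glasses" pixels change
        loss = F.cross_entropy(face_model(adv), torch.tensor([target_class]))
        opt.zero_grad(); loss.backward(); opt.step()
    return (image + perturb.detach() * mask).clamp(0, 1)

# Toy demo with a random classifier and a rectangular "frames" band, just to
# exercise the loop; the paper targeted real face-recognition systems.
toy_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 10))
img = torch.rand(1, 3, 64, 64)
msk = torch.zeros(1, 1, 64, 64)
msk[:, :, 20:28, 8:56] = 1.0
adv = glasses_attack(toy_model, img, msk, target_class=3, steps=50)
print(toy_model(adv).argmax(1))   # ideally tensor([3]) after optimization
```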

Artificial intelligence won't destroy the human race anytime soon

The Allen Institute for Artificial Intelligence (AI2), funded by billionaire Paul Allen, is developing projects like an AI-based search engine for scientific papers and a system to extract "visual knowledge" from images and videos. According to Scientific American, another goal of AI2 is "to counter messages perpetuated by Hollywood and even other researchers that AI could menace the human race." SciAm's Larry Greenemeier interviewed AI2 CEO and computer scientist Oren Etzioni:

Why do so many well-respected scientists and engineers warn that AI is out to get us?

It’s hard for me to speculate about what motivates somebody like Stephen Hawking or Elon Musk to talk so extensively about AI. I’d have to guess that talking about black holes gets boring after awhile—it’s a slowly developing topic. The one thing that I would say is that when they and Bill Gates—someone I respect enormously—talk about AI turning evil or potential cataclysmic consequences, they always insert a qualifier that says “eventually” or this “could” happen. And I agree with that. If we talk about a thousand-year horizon or the indefinite future, is it possible that AI could spell out doom for the human race? Absolutely it’s possible, but I don’t think this long-term discussion should distract us from the real issues like AI and jobs and AI and weapons systems. And that qualifier about “eventually” or “conceptually” is what gets lost in translation...

How do you ensure that an AI program will behave legally and ethically?

If you’re a bank and you have a software program that’s processing loans, for example, you can’t hide behind it.

Read the rest

Using machine-learning to auto-gen scary faces and hauntify buildings

The Nightmare Machine is an MIT project that uses machine-learning image processing to generate scary imagery for Hallowe'en. Read the rest

Using Machine Learning to synthesize images that look NSFW but aren't

Yahoo has released a machine-learning model called open_nsfw that is designed to distinguish not-safe-for-work images from worksafe ones. By tweaking the model and combining it with places-CNN, MIT's scene-recognition model, Gabriel Goh created a bunch of machine-generated scenes that score high for both models -- things that aren't porn, but look porny. Read the rest
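
The general recipe is to treat the image itself as the thing being optimized: start from noise and follow the gradients of both classifiers until one picture scores high on the NSFW model and the scene model at once. A minimal sketch of that recipe (not Goh's exact method; a real run also needs an image prior or regularizer to keep the result looking natural), with toy models standing in for open_nsfw and Places-CNN:

```python
# A minimal sketch of the recipe, not Goh's actual method: gradient-ascend a
# single image so it scores high on two classifiers at once.
# `nsfw_model` and `scene_model` are stand-ins for open_nsfw and Places-CNN.
import torch

def co_maximize(nsfw_model, scene_model, nsfw_class, scene_class,
                size=(1, 3, 224, 224), steps=300, lr=0.05):
    x = torch.rand(size, requires_grad=True)          # start from noise
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        img = x.clamp(0, 1)
        score = (nsfw_model(img).softmax(dim=1)[0, nsfw_class] +
                 scene_model(img).softmax(dim=1)[0, scene_class])
        opt.zero_grad()
        (-score).backward()                           # maximize the combined score
        opt.step()
    return x.detach().clamp(0, 1)

# Toy stand-ins just to exercise the loop; real runs would load open_nsfw
# (two classes) and a 365-category Places network.
toy_nsfw = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 224 * 224, 2))
toy_scene = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 224 * 224, 365))
img = co_maximize(toy_nsfw, toy_scene, nsfw_class=1, scene_class=42, steps=10)
print(img.shape)
```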

AI's blind spot: garbage in, garbage out

Social scientist Kate Crawford (previously) and legal scholar Ryan Calo (previously) helped organize the interdisciplinary White House AI Now summits on how AI could increase inequality, erode accountability, and lead us into temptation and what to do about it. Read the rest

Joi Ito interviews Barack Obama for Wired: machine learning, neurodiversity, basic research and Star Trek

Joi Ito (previously) -- director of MIT Media Lab, former Creative Commons chief, investor, entrepreneur, and happy mutant -- interviewed Barack Obama for a special, Obama-edited issue of Wired. Read the rest

Alt-Right trolls argue for hours with Twitter bot

The internet's army of enraged anime avatars has a new enemy beyond their comprehension: a Twitter bot created by writer and activist Sarah Nyberg to make fools of them. Some lose themselves to hours of interaction, unaware they are ranting at a computer program. Read the rest
