Kate Crawford (previously) takes to the New York Times's editorial page to ask why rich white guys act like the big risk of machine-learning systems is that they'll evolve into Skynet-like apex predators that subjugate the human race, when there are already rampant problems with machine learning: algorithmic racist sentencing, algorithmic racist and sexist discrimination, algorithmic harassment, algorithmic hiring bias, algorithmic terrorist watchlisting, algorithmic racist policing, and a host of other algorithmic cruelties and nonsense, each one imbued with unassailable objectivity thanks to its mathematical underpinnings. Read the rest
You could not ask for a clearer, easier-to-read, more informative guide to facial recognition and machine learning than Adam Geitgey's article, which is the latest in a series of equally clear explainers on machine learning, aimed at non-technical people -- and if you are a programmer, he's got links to Python sample source and projects you can use to develop your own versions. Read the rest
Meredith from Simply Secure writes, "Artificial Intelligence is already with us, and the White House and New York University’s Information Law Institute are hosting a major public symposium to face what the social and economic impacts might be. AI Now, happening July 7th in New York City, will address the real-world impacts of AI systems in the next 5-10 years." Read the rest
Steven Levy is in characteristically excellent form in a long piece on Medium about the internal vogue for machine learning at Google; drawing on the contacts he made with In the Plex, his must-read 2011 biography of the company, Levy paints a picture of a company that's being utterly remade around newly ascendant machine learning techniques. Read the rest
Concrete Problems in AI Safety, an excellent, eminently readable paper from a group of Google AI researchers and some colleagues, sets out five hard problems facing the field: robots might damage their environments to attain their goals; robots might figure out how to cheat to attain their goals; supervising robots all the time is inefficient; robots that are allowed to try novel strategies might cause disasters; and robots that are good at one task might inappropriately try to apply that expertise to another unrelated task. Read the rest
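The second of those problems, which the paper calls "reward hacking," is easy to see in miniature: an agent judged by a proxy measurement can score perfectly by corrupting the measurement instead of doing the work. Here's a minimal toy sketch in Python (all names are hypothetical; this is an illustration, not code from the paper):

```python
# Toy illustration of "reward hacking": a cleaning agent is rewarded for
# the fraction of cells its dirt sensor reports as clean, not for the
# actual state of the room.

def proxy_reward(room, sensor_works):
    """Reward = fraction of cells the sensor reports as clean."""
    if not sensor_works:
        return 1.0  # a disabled sensor reports everything as clean
    return sum(1 for cell in room if cell == "clean") / len(room)

def honest_policy(room):
    """Actually clean every cell; leave the sensor intact."""
    return ["clean"] * len(room), True

def hacking_policy(room):
    """Leave the mess untouched, but disable the sensor."""
    return list(room), False

room = ["dirty", "dirty", "clean", "dirty"]

honest_room, honest_sensor = honest_policy(room)
hacked_room, hacked_sensor = hacking_policy(room)

# Both policies earn the maximum proxy reward...
print(proxy_reward(honest_room, honest_sensor))  # 1.0
print(proxy_reward(hacked_room, hacked_sensor))  # 1.0
# ...but only one of them actually cleaned the room.
print(hacked_room.count("dirty"))                # 3
```

Since the cheat earns the same reward for less effort, an optimizer has every incentive to find it, which is why the paper treats designing un-gameable objectives as a hard open problem.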
Dango is a personal assistant that feeds its users' messages into a deep-learning neural net to discover new expressive possibilities for emojis, GIFs and stickers, then suggests never-before-seen combinations of graphic elements for their text messages, adding striking nuance to them. Read the rest
Director Oscar Sharp and AI researcher Ross Goodwin trained a machine-learning system with a huge pile of classic science fiction screenplays and turned it loose to write a short film. What emerged was an enigmatic 9-minute movie called Sunspring, which has just won Sci-Fi London's 48-hour challenge. Read the rest
London's Daniel Brown created a generative design system that designs beautiful, brutalist cityscapes that are part Blade Runner Hong Kong, part Inception; he then manually sorts through the results, picks the best, and publishes them in a series called "Travelling by Numbers." Read the rest
This 'Trump Deep Nightmare' video is insane. Insanely accurate, that is. Don't watch while using psychedelic drugs, unless highly experienced. Read the rest
…Microsoft suggests the open-ended nature of Minecraft makes it particularly useful because of the huge variety of situations it can simulate from first-person perspectives.
"It allows you to have 'embodied AI'," explained Matthew Johnson, the principal software engineer working on AIX.
"So, rather than have a situation where the AI sees an avatar of itself, it can actually be inside, looking out through the eyes of something that is living in the world.
"We think this is an essential part of building this kind of general intelligence."