Leaked presentation from AI snake-oil salesmen to AAA game company promises horrific, dystopian manipulation of players to drain their wallets

TechPowerUp has published a redacted presentation from an unnamed AI company to an unnamed big-budget multiplayer video-game publisher, setting out a suite of surveillance-capitalism tools that use machine learning to manipulate players, making them as addicted as possible and draining as much money from them as possible. Read the rest

Reddit shuts down Deepfakes subreddit, home to faceswapped pornography (and some other stuff)

Reddit has shut down /r/deepfakes, the subreddit where people collaborated to produce incredibly disturbing faceswapped pornography that used machine learning to put the faces of famous people who aren't porn performers onto the bodies of performers in pornographic videos. Read the rest

Deepfakes has democratized the creation of extremely realistic video faceswapping, especially in porn

Late last year, a redditor called Deepfakes gained notoriety for the extremely convincing face-swap porn videos he was making, in which the faces of mainstream Hollywood actors and rock stars were overlaid on the bodies of performers in pornography. Read the rest

Research report explains how adtech supercharges political deceit, allowing even bumblers to be master propagandists

A new report from the New America Foundation uses the current fear that Russian government elements manipulated the 2016 US election to explore the relationship between advertising technology, surveillance capitalism, and "precision propaganda," showing how the tool suite developed for the advertising industry can readily be repurposed by even modestly competent actors to mount disinformation campaigns. Read the rest

Google makes machine learning image classifier available to the public

Google's Cloud AutoML Vision system -- a machine-learning-based image classifier -- is now available to the general public; anyone can sign up to the program, upload a set of 20 to 10,000 images, and train a new model with them, which they can then use. Read the rest
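For readers wondering what "upload some labeled photos, get back a custom classifier" actually involves, here is a rough local analogue using transfer learning in Keras. This is not the Cloud AutoML Vision API itself; the directory layout, model choice, and hyperparameters below are illustrative assumptions.

```python
# Minimal sketch: train a custom image classifier on a small labeled set via
# transfer learning with Keras. This is a local analogue of what a hosted
# service like Cloud AutoML Vision does for you, not its API. The
# "my_images/<class_name>/*.jpg" layout and hyperparameters are assumptions.
import tensorflow as tf

IMG_SIZE = (224, 224)

# Load the user's labeled photos (a few hundred per class is often enough).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "my_images", image_size=IMG_SIZE, batch_size=32)
num_classes = len(train_ds.class_names)

# Reuse an ImageNet-pretrained feature extractor and train only a small head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False,
    weights="imagenet", pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# The trained model can then classify new images with model.predict().
```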

How machine learning engineers can detect and debug algorithmic bias

Ben Lorica, O'Reilly's chief data scientist, has posted slides and notes from his talk at last December's Strata Data Conference in Singapore, "We need to build machine learning tools to augment machine learning engineers." Read the rest

Two years later, Google solves 'racist algorithm' problem by purging 'gorilla' label from image classifier

In 2015, a black software developer named Jacky Alciné revealed that the image classifier used by Google Photos was labeling black people as "gorillas." Read the rest

Adversarial examples: attack can imperceptibly alter any sound (or silence), embedding speech that only voice-assistants will hear

Adversarial examples have torn into the robustness of machine-vision systems: it turns out that changing even a single well-placed pixel can confound otherwise reliable classifiers, and with the right tricks they can be made to reliably misclassify one thing as another or fail to notice an object altogether. But even as vision systems were falling to adversarial examples, audio systems remained stubbornly hard to fool, until now. Read the rest
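The canonical image-side version of this trick is the fast-gradient-sign attack: nudge every pixel a tiny amount in whichever direction most increases the classifier's loss. The audio attack described above is considerably more elaborate, but this PyTorch sketch (with an assumed pretrained model, label, and epsilon) shows the core idea of an imperceptible, gradient-guided perturbation.

```python
# Sketch of the fast-gradient-sign method (FGSM) behind many image
# adversarial examples. Model, label, and epsilon are assumptions; the audio
# attack in the linked work uses a related but more involved optimization.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm(image, true_label, epsilon=0.01):
    """Return a nearly identical copy of `image` (1x3xHxW, values in [0, 1])
    perturbed so the classifier is more likely to get it wrong."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Move every pixel a tiny step in the direction that increases the loss.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Example: perturb a random "image"; a real attack would start from a photo.
adversarial = fgsm(torch.rand(1, 3, 224, 224), true_label=281)  # 281 = tabby cat
```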

Adversarial patches: colorful circles that convince machine-learning vision system to ignore everything else

Machine learning systems trained for object recognition deploy a bunch of evolved shortcuts to choose which parts of an image are important to their classifiers and which ones can be safely ignored. Read the rest
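A rough sketch of how such a patch gets made: treat the patch's pixels as the only trainable parameters, paste the patch at random positions on training images, and optimize it so the classifier's output collapses onto a target class no matter what else is in the frame. The model, patch size, target class ("toaster," as in the original paper), and the stand-in image batches below are all illustrative assumptions.

```python
# Illustrative sketch of optimizing an adversarial patch (PyTorch). Only the
# patch is trained; the classifier is frozen. Model choice, patch size, and
# the random stand-in "dataset" are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

patch = torch.rand(1, 3, 50, 50, requires_grad=True)  # the "colorful circle"
optimizer = torch.optim.Adam([patch], lr=0.01)
TARGET = 859  # ImageNet class "toaster", the target used in the original paper

def paste_at_random(images, patch):
    """Overlay the patch at a random position in a batch of images."""
    _, _, h, w = images.shape
    _, _, ph, pw = patch.shape
    y = torch.randint(0, h - ph + 1, (1,)).item()
    x = torch.randint(0, w - pw + 1, (1,)).item()
    patched = images.clone()
    patched[:, :, y:y + ph, x:x + pw] = patch.clamp(0, 1)
    return patched

# Stand-in for a real loader of natural images (assumption).
batches = [torch.rand(4, 3, 224, 224) for _ in range(10)]

for images in batches:
    optimizer.zero_grad()
    logits = model(paste_at_random(images, patch))
    target = torch.full((images.shape[0],), TARGET, dtype=torch.long)
    loss = F.cross_entropy(logits, target)  # push predictions toward TARGET
    loss.backward()
    optimizer.step()
```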

Researchers trick Google's AI into thinking rifles are helicopters, without any knowledge of the algorithm's design

In "Partial Information Attacks on Real-world AI," a group of MIT computer science researchers report on their continuing work fooling Google's image classifier, this time without any knowledge of how the classifier works. Read the rest

CGP Grey explains how machines learn

YouTuber CGP Grey is known for breaking down complex subjects so that they can be understood by just about anyone. His most recent animated video is no exception. In it, he answers the question, "How do all the algorithms around us learn to do their jobs?" In other words, how do machines learn? Good stuff.

He's even created a place on reddit to discuss it further.

Here's a footnote video to the first one: Read the rest

Ted Chiang: Elon Musk's fear of runaway AI is a projection of his repressed terror of runaway corporations

Why do billionaires like Elon Musk make terrified pronouncements about the imminent rise of self-aware, murderous AIs that use us to reproduce themselves, controlling us instead of serving us? Read the rest

What is bias in machine learning, when does it matter, and what can we do about it?

"The Trouble with Bias," Kate Crawford's (previously) keynote at the 2017 Neural Information Processing Systems is a brilliant tour through different ways of thinking about what bias is, and when we should worry about it, specifically in the context of machine learning systems and algorithmic decision making -- the best part is at the end, where she describes what we should do about this stuff, and where to get started. (via 4 Short Links) Read the rest

Coalition of 100+ tech groups and leaders warn the DHS that "extreme vetting" software will be a worse-than-useless, discriminatory nightmare

In a pair of open letters to The Honorable Elaine C. Duke, Acting Secretary of Homeland Security, a coalition of more than 100 tech liberties groups and leading technology experts urged the DHS to abandon its plan to develop a black-box algorithmic system for predicting whether foreigners coming to the USA to visit or live are likely to be positive contributors or risks to the nation. Read the rest

A quantitative analysis of doxing: who gets doxed, and how can we detect doxing automatically?

A group of computer scientists from NYU and the University of Illinois at Chicago presented a paper at the 2017 ACM Internet Measurement Conference in London detailing their findings from a large-scale study of online doxings, with statistics on who gets doxed (the largest cohort being American male gamers in their early 20s), why they get doxed ("revenge" and "justice"), and whether software can detect doxing automatically, so that human moderators can take down doxing posts quickly. Read the rest

One week after release, iPhone X's Face ID reportedly defeated by a $150 mask

The Vietnamese security company Bkav says that a prototype mask costing $150 can reliably defeat Apple's Face ID authentication system. However, the company (which has a good track record for defeating facial recognition systems) has not released technical details for the defeat and says that it was able to accomplish the task "Because... we are the leading cyber security firm ;)." Read the rest

Watson for Oncology isn't an AI that fights cancer, it's an unproven mechanical turk that represents the guesses of a small group of doctors

There are 50 hospitals on 5 continents that use Watson for Oncology, an IBM product that, for a fee, ingests cancer patients' records and then makes treatment recommendations and suggests journal articles for further reading. Read the rest
