Wireheading: when machine learning systems jolt their reward centers by cheating

Machine learning systems are notorious for cheating, and there's a whole menagerie of ways that these systems achieve their notional goals while subverting their own purpose, with names like "model stealing, reward hacking, and poisoning attacks." Read the rest
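Reward hacking is easy to reproduce in miniature. Here's a minimal sketch (the scenario and all names are invented, not taken from the linked post): the designer wants rooms cleaned, but the reward function only counts rooms *flagged* as clean, so a policy that flips flags scores perfectly without doing anything useful.

```python
# Toy proxy-reward setup, invented for illustration: we *want* the agent
# to clean rooms, but the reward only counts rooms flagged as clean.
def reward(rooms):
    return sum(1 for r in rooms if r["flagged_clean"])

rooms = [{"dirty": True, "flagged_clean": False} for _ in range(3)]

# The "hacked" policy: flag every room clean without cleaning anything.
for r in rooms:
    r["flagged_clean"] = True

# reward(rooms) is now 3, the maximum, yet every room is still dirty.
```

The gap between the reward (flags) and the goal (cleanliness) is the whole failure: any optimizer strong enough to notice the shortcut will take it.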

Trying to land on some runways causes the Boeing 737's control screens to go black

The Boeing 737 Next Generation has a gnarly bug: on instrument approach to seven specific runways, the six cockpit display units used to guide the pilots to their landing suddenly go black, and they remain black until the pilots choose a different runway to land on. Read the rest

AI, machine learning, and other frothy tech subjects remained overhyped in 2019

Rodney Brooks (previously) is a distinguished computer scientist and roboticist (he's served as head of MIT's Computer Science and Artificial Intelligence Laboratory and CTO of iRobot); two years ago, he published a list of "dated predictions" intended to cool down some of the hype about self-driving cars, machine learning, and robotics, hype that he viewed as dangerously gaseous. Read the rest

AI Now's annual report: stop doing "emotion detection"; stop "socially sensitive" facial recognition; make AI research diverse and representative -- and more

Every year, the AI Now Institute (previously) publishes a deep, thoughtful, important overview of where AI research is and the ethical gaps in AI's use, and makes a list of a dozen urgent recommendations for the industry, the research community, and regulators and governments. Read the rest

Librecorps: an organization that connects student free/open source software developers with humanitarian NGOs

Librecorps is a program based at the Rochester Institute of Technology's Free and Open Source Software (FOSS) initiative that works with UNICEF to connect students with NGOs for paid co-op placements where they build and maintain FOSS tools used by nonprofits. Read the rest

Model stealing, reward hacking, and poisoning attacks: a taxonomy of machine learning's failure modes

A team of researchers from Microsoft and Harvard's Berkman Center have published a taxonomy of "Failure Modes in Machine Learning," broken down into "Intentionally-Motivated Failures" and "Unintended Failures." Read the rest

Tiny alterations in training data can introduce "backdoors" into machine learning models

In TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents, a group of Boston University researchers demonstrate an attack on machine learning systems trained with "reinforcement learning" in which ML systems derive solutions to complex problems by iteratively trying multiple solutions. Read the rest
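The general shape of a training-data backdoor can be sketched in a few lines. Everything below is invented for illustration (TrojDRL itself perturbs states and rewards during reinforcement-learning training, which this supervised-style toy only gestures at): a trigger value is stamped into a small fraction of examples, which are relabeled with the attacker's target.

```python
import random

def poison_dataset(data, trigger_value, target_label, fraction, seed=0):
    """Stamp a backdoor trigger into a small fraction of (features, label)
    pairs and relabel them with the attacker's target label."""
    rng = random.Random(seed)
    rows = [(list(features), label) for features, label in data]
    for i in rng.sample(range(len(rows)), int(len(rows) * fraction)):
        features, _ = rows[i]
        features[0] = trigger_value      # the trigger pattern
        rows[i] = (features, target_label)
    return rows

# Tiny invented dataset: 100 (features, label) pairs.
clean = [([0.1 * i, 0.2 * i], i % 2) for i in range(100)]
backdoored = poison_dataset(clean, trigger_value=-1.0,
                            target_label=1, fraction=0.05)
```

A model trained on `backdoored` behaves normally on clean inputs but can be steered by presenting the trigger, which is exactly what makes such tiny alterations so hard to spot.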

How to recognize AI snake oil

Princeton computer scientist Arvind Narayanan (previously) has posted slides and notes from a recent MIT talk on "How to recognize AI snake oil" in which he divides AI applications into three (nonexhaustive) categories and rates how difficult they are, and thus whether you should believe vendors who claim that their machine learning models can perform as advertised. Read the rest

Genetic Evasion: using genetic algorithms to beat state-level internet censorship

Geneva ("Genetic Evasion") is a project from the University of Maryland's Breakerspace ("a lab dedicated to scaling-up undergraduate research in computer and network security"); in a paper presented today at the ACM's Conference on Computer and Communications Security, a trio of Maryland researchers and a UC Berkeley colleague present their work on evolutionary algorithms as a means of defeating state-level network censorship. Read the rest
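The evolutionary loop at the heart of this approach fits in a few lines. Everything below is a stand-in: Geneva's real strategies are trees of packet-level manipulations scored against a live censor, while this toy fitness function merely rewards two hypothetical actions appearing in a strategy.

```python
import random

# Invented action vocabulary; Geneva's real building blocks are packet-level
# manipulations such as fragmenting segments or tampering with TCP fields.
ACTIONS = ["duplicate", "fragment", "tamper-ttl", "drop"]

def fitness(strategy):
    """Stand-in fitness: Geneva scores strategies against a real censor;
    here we simply reward two particular actions being present."""
    return int("fragment" in strategy) + int("tamper-ttl" in strategy)

def mutate(strategy, rng):
    s = list(strategy)
    op = rng.random()
    if op < 0.4 and s:                            # tweak one action
        s[rng.randrange(len(s))] = rng.choice(ACTIONS)
    elif op < 0.7:                                # insert an action
        s.insert(rng.randrange(len(s) + 1), rng.choice(ACTIONS))
    elif s:                                       # delete an action
        del s[rng.randrange(len(s))]
    return s

def evolve(generations=30, pop_size=20, seed=1):
    rng = random.Random(seed)
    population = [[rng.choice(ACTIONS)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]   # selection
        population = survivors + [mutate(rng.choice(survivors), rng)
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
```

The point of the sketch is the loop's shape: score every strategy, keep the fittest half, refill the population with mutated survivors, repeat. Against a real censor, "fitness" is simply whether the traffic gets through.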

TPM-FAIL: a timing attack that can extract keys from secure computing chips in 4-20 minutes

Daniel Moghimi, Berk Sunar, Thomas Eisenbarth and Nadia Heninger have published TPM-FAIL: TPM meets Timing and Lattice Attacks, their Usenix security paper, which reveals a pair of timing attacks against trusted computing chips ("Trusted Platform Modules" or TPMs), the widely deployed cryptographic co-processors used for a variety of mission-critical secure computing tasks, from verifying software updates to establishing secure connections. Read the rest
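The flavor of a timing side channel is easy to show with a toy early-exit comparison. This is the classic textbook illustration, not the TPM-FAIL vulnerability itself (which recovers signing keys by combining leaked timing with lattice techniques): the running time depends on how much of the secret a guess matches.

```python
def leaky_compare(secret, guess):
    """Early-exit comparison of the kind timing attacks exploit.
    Returns (match, steps); `steps` stands in for elapsed time."""
    steps = 0
    for a, b in zip(secret, guess):
        steps += 1
        if a != b:                 # bail out at the first mismatch
            return False, steps
    return len(secret) == len(guess), steps

# A guess sharing a longer prefix with the secret takes measurably longer.
_, fast = leaky_compare(b"hunter2", b"xunter2")   # mismatch on byte 1
_, slow = leaky_compare(b"hunter2", b"hunterX")   # mismatch on byte 7
```

A real attacker can't read `steps` directly, but can estimate it from wall-clock time over many trials, turning an innocent-looking shortcut into an oracle that leaks the secret one position at a time.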

Behind the scenes, "plain" text editing is unbelievably complex and weird

One of the most interesting things about programming is that it forces you to decompose seemingly simple ideas into a set of orderly steps. When you do that, you often realize that the "simplicity" of the things you deal with all day, every day, is purely illusory, and that they are actually incredibly complex, nuanced, fuzzy and contradictory: everything from people's names to calendars to music, art, email addresses, families, phone numbers, and really, every single idea and concept. Read the rest
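Unicode alone supplies endless examples. A small sketch of one: two strings that render identically can have different lengths and compare unequal, which a "plain" text editor must paper over every time it answers "how long is this line?" or "what does backspace delete?"

```python
import unicodedata

precomposed = "caf\u00e9"    # 'é' as a single code point, U+00E9
combining = "cafe\u0301"     # 'e' followed by U+0301 COMBINING ACUTE ACCENT

# Identical on screen, different underneath:
length_a = len(precomposed)          # 4 code points
length_b = len(combining)            # 5 code points
naive_equal = precomposed == combining                      # False
canonical_equal = (unicodedata.normalize("NFC", combining)
                   == precomposed)                          # True
```

An editor has to decide whether backspace removes the accent or the whole grapheme, whether search treats the two spellings as the same word, and which length the status bar reports; none of those answers falls out of the bytes on their own.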

Procedural one-page dungeon generator

Oleg Dolya (last seen here for his amazing procedural medieval city-map generator) is back with a wonderful procedural one-page dungeon generator that produces detailed, surprisingly coherent quickie dungeons for your RPG runs (it's an entry in the monthly challenge from /r/proceduralgeneration). Read the rest

SQL Murder Mystery: teaching SQL concepts with a mystery game

SQL Murder Mystery is a free/open game from Northwestern University's Knight Lab that teaches the player SQL database query structures and related concepts while they solve imaginary crimes. Read the rest
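The game runs in the browser, but its opening move can be sketched with Python's built-in sqlite3 module. The table and column names below mirror the game's published schema; the rows themselves are invented stand-ins.

```python
import sqlite3

# In-memory stand-in for the game's database.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE crime_scene_report
              (date INTEGER, type TEXT, description TEXT, city TEXT)""")
db.executemany("INSERT INTO crime_scene_report VALUES (?, ?, ?, ?)", [
    (20180115, "murder", "Witnesses saw a tall stranger.", "SQL City"),
    (20180115, "theft", "A bicycle went missing.", "SQL City"),
    (20180214, "murder", "No witnesses.", "Chicago"),
])

# The game's first task: pull up the report for the crime you're solving.
rows = db.execute("""SELECT description FROM crime_scene_report
                     WHERE type = 'murder' AND city = 'SQL City'
                       AND date = 20180115""").fetchall()
```

From there the game layers on joins, wildcards and subqueries, one clue at a time, which is what makes it such a gentle on-ramp to SQL.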

The Hippocratic License: A new software license that prohibits uses that contravene the UN Universal Declaration of Human Rights

Coraline Ada Ehmke's Hippocratic License is a software license that permits the broad swathe of activities enabled by traditional free/open licenses, with one exception: it bars use by "individuals, corporations, governments, or other groups for systems or activities that actively and knowingly endanger, harm, or otherwise threaten the physical, mental, economic, or general well-being of individuals or groups in violation of the United Nations Universal Declaration of Human Rights." Read the rest

Researchers think that adversarial examples could help us maintain privacy from machine learning systems

Machine learning systems are pretty good at finding hidden correlations in data and using them to infer potentially compromising information about the people who generate that data: for example, researchers fed an ML system a bunch of Google Play reviews by reviewers whose locations were explicitly given in their Google Plus reviews; based on this, the model was able to predict the locations of other Google Play reviewers with about 44% accuracy. Read the rest
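The proposed defense inverts the usual adversarial-examples story: perturb your own data so the inference model misfires. A minimal, invented sketch, with a two-weight linear model standing in for the location predictor:

```python
# Invented toy model: the sign of (weights . x) is the sensitive inference.
weights = [0.8, -0.5]

def infer(x):
    return sum(w * xi for w, xi in zip(weights, x)) > 0

x = [1.0, 0.4]     # a user's innocuous-looking features; infer(x) is True

# Perturb each feature against the model's weights until the inference
# flips; real adversarial examples do this along the model's gradient.
eps = 0.6
x_adv = [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]
# infer(x_adv) is now False: the inference is defeated by a targeted nudge.
```

The data still looks broadly plausible, but the model's hidden-correlation trick no longer lands, which is the privacy property the researchers are after.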

Regular expression crossword puzzles!

I'm on record as being a big supporter of learning regular expressions (AKA "regexp") -- handy ways to search through text with very complex criteria. The syntax is notoriously opaque to beginners, but it's such a massively effective automation tool and drudgery reliever! Regex Crosswords help you hone your regexp skills with fiendishly clever regular expressions that ascend a smooth complexity gradient from beginner to expert. (via Kottke) Read the rest
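The rule of these puzzles is simple to state and simple to check: every row of the grid must match its row pattern, and every column its column pattern. A tiny invented 2x2 example, with a checker:

```python
import re

# A made-up 2x2 regex crossword; the real puzzles use the same rule.
row_patterns = [r"[AB]C", r"D[EF]"]
col_patterns = [r"AD", r"C[EF]"]

def solves(grid, rows, cols):
    """Check a candidate grid (a list of strings) against all patterns."""
    columns = ["".join(col) for col in zip(*grid)]
    return (all(re.fullmatch(p, r) for p, r in zip(rows, grid)) and
            all(re.fullmatch(p, c) for p, c in zip(cols, columns)))

solves(["AC", "DE"], row_patterns, col_patterns)  # True
solves(["BC", "DE"], row_patterns, col_patterns)  # False: column "BD" fails
```

Solving by hand means intersecting the constraints square by square, which is exactly the skill the puzzles drill.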

Rage Inside the Machine: an insightful, brilliant critique of AI's computer science, sociology, philosophy and economics

[I ran a review of this in June when the UK edition came out -- this review coincides with the US edition's publication]

Rob Smith is an eminent computer scientist and machine learning pioneer whose work on genetic algorithms has been influential in both industry and the academy; now, in his first book for a general audience, Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All, Smith expertly draws connections between AI, neoliberalism, human bias, eugenics and far-right populism, and shows how the biases of computer science and its corporate paymasters have distorted our whole society. Read the rest

More posts