Building a confessions box with self-destructing messages

In this Simone Giertz video, she builds a secret-messages box where your confessions, overshares, rants, love letters, etc. are only readable for a few seconds after the box is opened, before they're shredded.

Simone built the box in response to worries that she might overshare her feelings and sometimes end up making people feel uncomfortable. — Read the rest

What happened to Dianna Cowern, aka Physics Girl

Many Boing Boing readers are likely familiar with science and physics educator Dianna Cowern, aka YouTube's Physics Girl. With nearly 3 million subscribers, Dianna is beloved for her exuberant personality, her clear and entertaining explanations of complex physics concepts, and her love for all things science. — Read the rest

A roadmap for AI policy questions

Robot law pioneer Ryan Calo (previously) has published a "roadmap" for an "artificial intelligence policy…to help policymakers, investors, technologists, scholars, and students understand the contemporary policy environment around AI at least well enough to initiate their own exploration."

A taxonomy of algorithmic accountability

Eminent computer scientist Ed Felten has posted a short, extremely useful taxonomy of four ways that an algorithm can fail to be accountable to the people whose lives it affects: it can be protected by claims of confidentiality ("how it works is a trade secret"); by complexity ("you wouldn't understand how it works"); by unreasonableness ("we consider factors supported by data, even when there's no obvious correlation"); and by injustice ("it seems impossible to explain how the algorithm is consistent with law or ethics").

Cataloging the problems facing AI researchers is a cross between a parenting manual and a management book

Concrete Problems in AI Safety, an excellent, eminently readable paper from a group of Google AI researchers and some colleagues, sets out five hard problems facing the field: robots might damage their environments to attain their goals; robots might figure out how to cheat to attain their goals; supervising robots all the time is inefficient; robots that are allowed to try novel strategies might cause disasters; and robots that are good at one task might inappropriately try to apply that expertise to an unrelated task. — Read the rest