## How to solve the artificial intelligence "stop button" problem

Implementing an on/off switch on a general artificial intelligence is way more complicated than it sounds. Rob Miles of Computerphile looks at what could go wrong. Hint: lots.

## Free: scanned copy of Martin Gardner's Logic Machines and Diagrams (1958)

I love Martin Gardner's puzzle, math, magic, and philosophy books. I just learned from visiting Clifford Pickover's website about a Gardner book that's new to me: Logic Machines & Diagrams (1958).

From the introduction:

A logic machine is a device, electrical or mechanical, designed specifically for solving problems in formal logic. A logic diagram is a geometrical method for doing the same thing. The two fields are closely intertwined, and this book is the first attempt in any language to trace their curious, fascinating histories.

I think I need the hard copy.

## Using sandwiches to teach the Socratic method

Fans of the Judge John Hodgman podcast know that the harder you interrogate the category "sandwich," the less definitive it becomes, until you find yourself raging over tacos and hot dogs.

## Saturday morning mind-benders: "Newcomb's Problem" and "Parfit's Hitchhiker" dilemma

In this video Julia Galef, host of the Rationally Speaking podcast (about philosophy, rationality, and science), presents one of my favorite paradoxes, Newcomb's Problem, along with the related "Parfit's Hitchhiker" dilemma.

Before Carla and I started the bOING bOING zine, I published another zine in the mid-1980s called Toilet Devil (Koko the talking ape calls people and her pet kitties "dirty toilet devils" when she is mad at them). In the first issue I drew a comic about "Newcomb's Problem." I might scan it one day and post it.

In 2006, I posted about Newcomb's Problem:

Franz Kiekeben does a nice job of describing Newcomb's Paradox, which I've enjoyed contemplating, on and off, for many years.

A highly superior being from another part of the galaxy presents you with two boxes, one open and one closed. In the open box there is a thousand-dollar bill. In the closed box there is either one million dollars or there is nothing. You are to choose between taking both boxes or taking the closed box only. But there's a catch.

The being claims that he is able to predict what any human being will decide to do. If he predicted you would take only the closed box, then he placed a million dollars in it. But if he predicted you would take both boxes, he left the closed box empty. Furthermore, he has run this experiment with 999 people before, and has been right every time.

What do you do?

On the one hand, the evidence is fairly obvious that if you choose to take only the closed box you will get one million dollars, whereas if you take both boxes you get only a measly thousand.
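The payoff comparison above can be sketched as a quick expected-value calculation. This is a hypothetical illustration, not anything from the original post: it assumes the predictor's track record (999 for 999) translates into an accuracy of roughly p = 0.999, and uses the dollar amounts from the puzzle.

```python
def expected_value(choice, p=0.999, small=1_000, big=1_000_000):
    """Expected payoff for 'one_box' or 'two_box', given predictor accuracy p.

    Hypothetical model: p is the chance the being predicted your actual choice.
    """
    if choice == "one_box":
        # If he foresaw one-boxing (probability p), the closed box holds $1M.
        return p * big + (1 - p) * 0
    elif choice == "two_box":
        # If he foresaw two-boxing (probability p), the closed box is empty
        # and you keep only the open $1,000; otherwise you get both.
        return p * small + (1 - p) * (small + big)
    raise ValueError(f"unknown choice: {choice!r}")

print(expected_value("one_box"))   # about $999,000 on average
print(expected_value("two_box"))   # about $2,000 on average
```

This arithmetic is what makes one-boxing look obvious; the paradox comes from the opposing intuition that the boxes are already filled (or not) before you choose, so taking both can never leave you worse off.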