From the Department of Terrible Ideas: The Washington Post has a must-read story up reporting on research that promises to someday make military drones fully automated. Yes, that's right, drones that kill based on software such as facial recognition, rather than any direct human command.
I know the obvious thing to do here is make Skynet jokes. But, frankly, there are plenty of problems with this without welcoming our robotic overlords. Take, for instance, this issue, which the Post raises with a note of wry understatement:
The prospect of machines able to perceive, reason and act in unscripted environments presents a challenge to the current understanding of international humanitarian law.
To say the least.
But here's the really interesting thing about this story: arms control ethicists are trying to deal with this technology before it exists, rather than after the fact.
In Berlin last year, a group of robotic engineers, philosophers and human rights activists formed the International Committee for Robot Arms Control (ICRAC) and said such technologies might tempt policymakers to think war can be less bloody.
Some experts also worry that hostile states or terrorist organizations could hack robotic systems and redirect them. Malfunctions also are a problem: In South Africa in 2007, a semiautonomous cannon fatally shot nine friendly soldiers.
The ICRAC would like to see an international treaty, such as the one banning antipersonnel mines, that would outlaw some autonomous lethal machines. Such an agreement could still allow automated antimissile systems.
“The question is whether systems are capable of discrimination,” said Peter Asaro, a founder of the ICRAC and a professor at the New School in New York who teaches a course on digital war.