The ethics of torturing robots is not a new question, but it's becoming more important as robots and AI become more lifelike. Author Ted Chiang explored it in his 2010 novella, The Lifecycle of Software Objects. In 1998 I wrote an article for Wired Online called "Virtual Sadism" about people who liked to torture artificial life forms called "norns" (and a movement of norn lovers who tried to stop them). In 1977 Terrel Miedaner wrote a philosophical science fiction novel called The Soul of Anna Klane, which featured a little Roomba-like creature that seems to be afraid to "die" when someone tries to crush it with a hammer. (An excerpt from the novel appears in the excellent book The Mind's I: Fantasies and Reflections on Self & Soul, edited by Douglas R. Hofstadter and Daniel C. Dennett.)
Dylan Love revisits the idea of robot abuse in his article for Inverse, "Is it OK to torture a robot?"
Consider the latest robot to be unveiled by Google's Boston Dynamics. When the collective internet saw a bearded scientist abuse the robot with a hockey stick, weird pangs of empathy went out everywhere. Why do we feel so bad when we watch the robot fall down, we wonder? There's no soul or force of life to empathize with, and yet: this robot is just trying to lift a box, why does that guy have to bully it?
The Boston Dynamics video reminded me of the inflatable Bozo punching bags, which were made to be abused: