Yesterday I caught a presentation by Adam Greenfield about the ethics of "ubiquitous computing" (the idea that the devices around us will know where they are, what they are, and who you are). This is a place where science fiction and real-world policy are converging; for example, it's becoming harder and harder to ride the London Underground without carrying a radio-pollable card that could be used later to identify who you are and where you've been. American passports are getting RFID chips that can be read at a distance, and visitors to the US are likewise being told that they have to carry radio-readable "papers" at all times in the country, in a pilot program being run at two border crossings.
The utility of radio-readable identifiers is undeniable. I've written stories about how people could use them to improve their quality of life; seniors' homes are incorporating them into their Alzheimer's wards. In Hong Kong, the contactless card has made public transit and other routine transactions into an act of graceful dance, where people gesture in a fluid motion at the turnstiles to present their "Octopus" cards.
Obviously, the supply-chain uses for these in retail and wholesale are many and interesting, as are the uses that arise after we bring stuff home — everyone's favorite example is the washing machine that won't let you mix colors and whites.
But there's an ethical dimension that needs to be considered in engineering radio-readable products. These products are potential privacy-bombs, capable of wreaking great havoc in our personal lives and the body-politic. They have the potential to be systems of control, rather than empowerment. As Mitch Kapor says, "Architecture is politics." The way we design these systems will affect the way we live our lives: in freedom or in tyranny.
Greenfield has a recent book out on the subject, called Everyware, which takes on the promise and peril of ubicomp at great length and in depth. He surveys the ways in which RFIDs are being used today, the good and the bad, looks at the research that's being done for the next generation, and tackles these thorny ethical questions. (introduction, conclusion)
Here's an article that Greenfield wrote on the subject of ubicomp ethics, called "All watched over by machines of loving grace: Some ethical guidelines for user experience in ubiquitous-computing settings." It gives you a good flavor for the talk I heard yesterday — it's fascinating and thought-provoking.
Principle 1. Default to harmlessness. Ubiquitous systems must default to a mode that ensures their users' (physical, psychic and financial) safety.
We are familiar with the notion of "graceful degradation," the ideal that if a system fails, if at all possible it should fail gently in preference to catastrophically, with functionality being lost progressively rather than all at once.
Given the assumption of responsibility for users and their environments implied by the ubicomp rubric, such systems must take measures that go well beyond mere graceful degradation.
Slaved passenger vehicles, dosage settings for pharmaceutical-delivery systems, and controls for sealed or denied environments are examples of situations where redundant interlocks must be provided to ensure user safety.