Today on the Tor-Forge blog, I write about the nearly inescapable temptation to solve our problems with other people's behavior by redesigning the technology they use so that it bosses them around, rather than serving them.
DRM is a good example of this; so too is "lawful interception" (backdoors in our devices and services that let the police spy on suspected criminals, and also let anyone who can figure them out spy on anyone). It's not that this tactic doesn't produce some good outcomes for the companies, agencies and governments that rely on it, but it does so at enormous, incalculable cost to the rest of us.
Computer weapons are like bioweapons: we don't know how to make ones that only get used in ways we all agree upon, so developing them makes everyone more vulnerable. In some ways, computer weapons are worse than bioweapons: because they rely on discovering and hoarding vulnerabilities in commonly used services — defects that are sure to be independently rediscovered and weaponized — a computer weapon doesn't have to be stolen by the "bad guys" to be turned against its developer; it's enough that the bad guys discover and exploit the same weakness the "good guys" are relying on.
Designing computers to treat their owners as untrustworthy adversaries, unfit to reconfigure them or know their defects, is a far more dangerous proposition than merely having computers with bad software. Asimov was interested in how computers work. He should have been paying attention to how they fail.
The failure mode of prohibiting the owners of computers from changing which programs they run, and from knowing whether those computers are secure, is that those computers end up designed to control their owners, rather than being controlled by them.
This is the key difference between computers that liberate and computers that enslave.
Asimov had three laws. I propose two:
1. Computers should obey their owners
2. It should always be legal to tell the truth about computers and their security
Neither of these laws is without potential for mischief. I could write a hundred stories about how they could go wrong. But the harms of following these rules are far smaller than the harms of deliberately designing computers to control the people they are meant to serve.
I charge you to be hard-liners for these rules. If no one is calling you unreasonable, a puritan, a fanatic for these rules, you're not trying hard enough.
The future is riding on it.
How can we make technology that frees us, rather than enslaves us?