Why do billionaires like Elon Musk issue dire warnings about the imminent rise of self-aware, murderous AIs that will control us instead of serving us, and use us to reproduce themselves?
Science fiction writer Ted Chiang (previously) says it's because Musk is already enmeshed with artificial life-forms that were supposed to serve humanity and instead have colonized us and use us as their reproductive medium: corporations.
This is an idea I'm very big on.
Chiang quotes Fredric Jameson: "it's easier to imagine the end of the world than to imagine the end of capitalism." When Musk sees the problems around him, his imagination fails him. Instead of believing in political reform (which would gore his ox), he finds it easier to believe in solving his problems by regulating AIs that don't exist yet (which gores no one's ox, because, well, they don't exist).
The ethos of startup culture could serve as a blueprint for civilization-destroying AIs. "Move fast and break things" was once Facebook's motto; they later changed it to "Move fast with stable infrastructure," but they were talking about preserving what they had built, not what anyone else had. This attitude of treating the rest of the world as eggs to be broken for one's own omelet could be the prime directive for an AI bringing about the apocalypse. When Uber wanted more drivers with new cars, its solution was to persuade people with bad credit to take out car loans and then deduct payments directly from their earnings. They positioned this as disrupting the auto loan industry, but everyone else recognized it as predatory lending. The whole idea that disruption is something positive instead of negative is a conceit of tech entrepreneurs. If a superintelligent AI were making a funding pitch to an angel investor, converting the surface of the Earth into strawberry fields would be nothing more than a long overdue disruption of global land use policy.
There are industry observers talking about the need for AIs to have a sense of ethics, and some have proposed that we ensure that any superintelligent AIs we create be "friendly," meaning that their goals are aligned with human goals. I find these suggestions ironic given that we as a society have failed to teach corporations a sense of ethics, that we did nothing to ensure that Facebook's and Amazon's goals were aligned with the public good. But I shouldn't be surprised; the question of how to create friendly AI is simply more fun to think about than the problem of industry regulation, just as imagining what you'd do during the zombie apocalypse is more fun than thinking about how to mitigate global warming.
The Real Danger To Civilization Isn't AI. It's Runaway Capitalism [Ted Chiang/Buzzfeed]
(Image: Steve Jurvetson, CC-BY)