AI Alarmism: why smart people believe dumb things about our future AI overlords

Maciej Cegłowski (previously) gave this talk, "Superintelligence: The Idea That Eats Smart People," at Web Camp Zagreb last October, spending 45 minutes delving into the origin of the idea that computers are going to become apocalyptic, self-programming, superintelligent basilisks that end all life on Earth (and variations on this theme) and then explaining why this fundamentally evidence-free, fuzzy idea has colonized so many otherwise brilliant people -- including people like Stephen Hawking -- and why it's an irrational and potentially harmful belief system.

As a science fiction writer, I've spent a fair bit of time noodling with these ideas in both story form and essays: True Names, the novella I wrote with Ben Rosenbaum about this, was nominated for a Hugo Award; Charlie Stross and I wrote a novel on the theme, The Rapture of the Nerds; and then there's my essay on the Singularity as a spiritual belief system that can pass for a scientific prediction.

One thing I'm keenly aware of is that the aesthetic appeal of futuristic Singularity predictions is firmly rooted in the here-and-now: it's nice to think that there is a thing called "progress," and that we're in the midst of it; it's nice to think that when progress outstrips your capacity to make sense of it, it's because it's transcended human comprehension (and not, say, because your time has passed and you are becoming irrelevant to a field and discourse you once dominated); it's nice to think that the privilege you enjoy in the midst of great deprivation is in the service of a better future for all humanity, and not a fundamentally unfair situation that you would rise up in fury over if the roles were reversed.

Cegłowski's expert puncturing of the arguments for "AI Alarmism" was prompted by philosopher Nick Bostrom's bestselling book Superintelligence: Paths, Dangers, Strategies, which is a fun read but which also palms a lot of cards in the construction of its arguments.

I believe the greater social meaning of AI Alarmism is the twin phenomena of the worry of an ever-larger class of have-nots whose lives are upended by uneven economic returns from technological disruption (in other words, the problem isn't that only some of us have to clean toilets, while all of us have to use them; it's that the dividends from self-cleaning toilets never accrue to the toilet-cleaners they displace); and the blithe dismissal of this worry by an ever-smaller, ever-richer 1%, who use the story of AI as a spiritual belief system that declares this division to be natural, inevitable, and, ultimately, beneficial.

At one point, Bostrom outlines what he believes to be at stake:

“If we represent all the happiness experienced during one entire such life with a single teardrop of joy, then the happiness of these souls could fill and refill the Earth's oceans every second, and keep doing so for a hundred billion billion millennia. It is really important that we make sure these truly are tears of joy.”

That's a heavy thing to lay on the shoulders of a twenty-year-old developer!

There's a parlor trick, too, where by multiplying such astronomical numbers by tiny probabilities, you can convince yourself that you need to do some weird stuff.

This business about saving all of future humanity is a cop-out. We had the same exact arguments used against us under communism, to explain why everything was always broken and people couldn't have a basic level of material comfort.

We were going to fix the world, and once that was done, happiness would trickle down to the point where everyday life would change for the better for everyone. But it was vital to fix the world first.

I live in California, which has the highest poverty rate in the United States, even though it's home to Silicon Valley. I see my rich industry doing nothing to improve the lives of everyday people and indigent people around us.

But if you’re committed to the idea of superintelligence, AI research is the most important thing you could do on the planet right now. It’s more important than politics, malaria, starving children, war, global warming, anything you can think of.

Because what hangs in the balance is trillions and trillions of beings, the entire population of future humanity, simulated and real, integrated over all future time.

In such conditions, it’s not rational to work on any other problem.
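The arithmetic behind that "parlor trick" is easy to reproduce. Here's a minimal sketch in Python; the numbers are invented for illustration and are not Bostrom's:

    # The expected-value "parlor trick," with made-up illustrative numbers.
    future_minds = 10**38             # hypothetical future population, simulated and real
    p_your_work_matters = 10**-15     # a vanishingly small chance that you tip the balance
    lives_saved_by_charity = 10**4    # a concrete, mundane alternative

    # Even a one-in-a-quadrillion chance of saving 10**38 minds "outweighs"
    # any ordinary good work by many orders of magnitude.
    expected_minds_saved = future_minds * p_your_work_matters   # about 1e+23
    print(expected_minds_saved > lives_saved_by_charity)        # True

Once you accept multiplication like this, no probability is small enough to excuse you from the project; that is how the trick talks you into the weird stuff.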

Superintelligence: The Idea That Eats Smart People [Maciej Cegłowski/Idlewords]

Notable Replies

  1. deedub says:

    I live in California, which has the highest poverty rate in the United States, even though it's home to Silicon Valley.

    I'm sorry, what?

  2. RatMan says:

    Because I know waaaaay too many clever programmers who want to work on AI who shouldn't be allowed within ten meters of any project that describes itself as "AI"?

  3. When Marvin Minsky was asked about the danger of AI machines doing something malevolent, he said "I would hope people would test these systems extensively before deploying them."

  4. [10:03] This, in caricature, is exactly what Bostrom and people like him are arguing

    This is not how argument works.

    [11:10] A lot of this relies on Intelligence not being well defined at all

    Incorrect. There are reams of stuff about the importance of instrumentality to systems that can be described as intelligent.

    [16:48] Stephen Hawking's Cat

    As we speak, self-driving everything is being built, up to and including autonomous military vehicles. Their security, if previous experience is any indication, will be crap. A fully general superintelligence will likely be a lot better at breaking security than we are.

    Problem solved.

    [20:05] Where AI succeeds

    A lot of things get called AI. Chess-playing machines and Go-playing machines both play a board game better than humans do, but they work with completely different technologies. Also, 'throwing a bunch of data' at things implies machine learning (it is machine learning, really), which is self-improving.

    [19:00] Not buying the orthogonality thesis.

    You can buy it or not; the fact of the matter is that nobody knows, and argument ad ricketmortiam is not in fact an argument. It doesn't seem immediately impossible, and we'll know whether the orthogonality thesis holds only after we plug in the Overlord and we all sit around calculating Pi all day.

    See. I can quote Mass Effect 2, as if that proves anything.

    [20:30] Argument from Lazy Roommate

    Well yes, but as a matter of fact the 'designer' of Peter's mind is satisfied. The hardwired goals put in by evolution (food, pleasure, shelter) were amply satisfied since Peter did not starve out on the street. If the highly brilliant Peter was put in a situation where he had to think fast or die, I believe he would develop a taste for self-improvement.

    AIs, if they are built and if they are possible, are likely to be built with goals, since humans are largely not in the habit of building machines with no purpose. Especially not expensive ones.

    This is all, of course, under the assumption that the human mind is akin to whatever it is AI engineering can produce, if anything.

    [21:10] Argument from Brain Surgery

    ...Mr. Cegłowski, may I see your source code?

    Oh, you don't have that? Really?

    Huh.

    [21:41] The childhood of AIs

    It's likely AIs will have childhoods?

    Likely?

    I have a strong suspicion that this argument was rectally derived.

    [22:36] Argument by massive anthropomorphism.

    This argument fails on its own terms. This section was meant to contain arguments that accept the premises outlined at the start, and one of those premises was that whatever we make won't be anything like us.

    So how the hell do you know it would even grasp the concept of loneliness? Hell, there are animals that actively avoid members of their own species except for mating. Or that it would require collaboration to achieve its full effect?

    [23:00+] Making fun of nerds.

    So basically the rest of this talk is going to make fun of weird nerds for being weird and nerdy?

    Oh good. We need that. Very useful.

    [24:20] Grandiosity and poverty

    Funny. Every single person really worrying about AI I've ever read seems to be in the effective altruism movement. Bostrom certainly is.

    So whatever it is they believe (the tears of joy and trillions of people stuff is, basically, preference utilitarianism applied with an attempt at rigor) is making them give something like 10-20% of everything they make to malaria charities.

    Clearly they must be stopped.

    [24:42] Megalomania

    Those stupid nerds thinking that a fully general AI might help with the many many problems the world has! Why, politics has those well in hand. Just look at the recent elections in...

    ...

    ...Jesus.

    Oh, but they are white.

    Well, yes. They are. But the LessWrong math-weirdo group also has, proportionally, many times more people in the various LGBTQ+ categories than the general population, if that makes things a little less terrible. It's not like they are the commentariat of Breitbart or anything.

    [25:42] Transhuman Voodoo

    Gibberish. This is a debating tactic where you pick out the weirdest things the weirdest people in a group have said and pretend this is a central example of the group. Does Elon Musk believe this? Bill Gates? Stephen Hawking? Those are the people he mentions, so... Okay, those are celebrities. Which part of that does Yudkowsky believe? Scott Alexander? Nick Bostrom?

    [26:44] Religion 2.0

    Declaring something to be a religion is a fully general argument. Creationists love claiming that evolution is a religion, fr'instance.

    Further, atheism is entirely compatible with building a God, but this isn't a God. Very explicitly. It can't break the laws of physics. It's not coterminous with all of space-time. It's not eternal, or self-causing or universe-making. It is, in fact, nothing like a god except insofar it is a very powerful thing. It's like claiming an atomic bomb is a god because it can destroy cities just like God can in the Bible.

    [27:36] Comic Book Ethics

    Saving the world with technology is the job description of engineers and scientists. Oh, it's grandiosely put, but if I had to name the force making the world a worse place 'grandiosity' wouldn't even clear the top one hundred.

    [28:10] Simulation Fever

    No. The simulation argument doesn't rely on or interact with the possibility of strong AI. It's a philosophical argument, and like most philosophical arguments it sounds silly but explores interesting truths.

    Protip: Plato didn't think we were all in a cave, either.

    (Though he did think we remembered things using our liver, so...)

    And even ignoring this, his argument boils down to: "I DON'T UNDERSTAND THE SIMULATION ARGUMENT THEREFORE PEOPLE WHO THINK ABOUT IT ARE POOPY-HEADS." I say 'think about it' because nobody actually believes in it. It's an intellectual diversion, especially since, as he notices but doesn't follow up on, once you are in the simulation you don't know what's outside it, so, really, nothing at all changes. The universe is the universe.

    [30:44] Data Hunger

    Contradicts self. He says earlier that the AI the people he likes to call AI-weenies worry about is not like the stuff we use today, which he characterizes as the sea-of-data approach, and now he lays the faults of this selfsame sea-of-data approach at the feet of the people he's criticizing.

    Make up your mind.

    [32:28] AI Cosplay

    AIs have instrumentality. The people he's... you know what, I'm just going to say 'demonizing.' The people he's demonizing have instrumentality. Therefore something-something and they are pretending to be AIs.

    Is ignoring Chesterton's fences silly? Yeah. Have people of a technical bent been doing it, largely harmlessly, for decades? Yeah. Read Chesterton. Seriously. The Napoleon of Notting Hill. Read it. The Super-man story. Forgot its name. Read that too.

    All they are doing is trying to imagine how to solve things better from first principles. At worst, it's a bunch of wasted effort and impossible cities on paper for people in the future to giggle at. At best, they figure out something useful.

    And while I'm not in the LessWrong community I know enough about it that 'NPC' doesn't mean what you think it means. It's not a value-judgement but a statement on how people see themselves.

    [34:43] Ostblock SF is the best SF

    I'm Slavic and this man embarrasses the hell out of me.

    Western SF is bad and the stuff I read as a kid by sheerest accident is the best stuff.

    Yeah.

    Not that Lem and the Strugatsky brothers aren't brilliant, of course.


    Yuck. There are good arguments against superintelligence but he made precisely none of them except, interestingly, as a joke.

    Slavic pessimism.

    It does work.

    See, if we can build a fully general AI that can improve itself and improve its ability to improve itself, then making its goals align with ours is the most important thing, and the threat from such a device is virtually limitless.

    Yeah.

    If.

    There's no reason to be certain or even confident that we can. Yes, machines get faster. And yes, configurations of matter in the cosmos exist which host what we term consciousness. But there's no reason to think we're clever enough to make, ab ovo, such a configuration, except, of course, by the obvious method.

    That's the long and the short of the argument. There's no evidence whatsoever that building AI is possible at all, or practical within the next $YEARS.
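    For what it's worth, here's a toy sketch of what that "if" would buy. It's mine, not the talk's or the thread's, and the dynamic and constants are pure assumptions: a system whose rate of improvement scales with its current capability grows super-exponentially.

        # Toy model of "improves its ability to improve itself."
        # Purely illustrative; the feedback dynamic and constants are invented.
        def capability_trajectory(c0=1.0, k=0.5, steps=10):
            c = c0
            trajectory = [c]
            for _ in range(steps):
                c = c * (1 + k * c)   # the improvement rate (k * c) grows with c itself
                trajectory.append(c)
            return trajectory

        traj = capability_trajectory()
        print(f"step 3: {traj[3]:.1f}, step 6: {traj[6]:.2e}, step 10: {traj[10]:.2e}")
        # Tame at first (about 6.1 at step 3), then about 5.3e+04 at step 6,
        # and about 1.2e+71 at step 10.

    Everything alarming lives in that assumed feedback term, and whether anything like it can be built is exactly the part nobody knows.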

    So what do I think? I think that AI risk is not as pressing as some people think. However, the amount of resources expended on the research problems, which largely amount to the math needed to express goals in abstract, machine-readable terms, is comically small. It's basically what's required to keep a few nerds in pizza and beer or, given the demographics involved, soylent and modafinil.

    We waste orders of magnitude more on Transformers movies, for heaven's sake. Given the size of the admittedly low-probability payout, it's likely worth it. And if it isn't, we are doing so little harm that it gets lost in the noise.

    I also think that, given 40 minutes to talk about a group of people making the world worse, I wouldn't choose a group of mathematicians and computer scientists who are trying to avert an improbable catastrophe and habitually give sizable chunks of their income to charity. Just not the first people I'd pick.

    Maybe that's me. I always did have a massive soft spot for weirdos, even ones who can be very abrasive, like a lot of the LW crowd.

    As for the ethics challenges of modern actually-existing AI, that's a red herring. Yes, those exist, but their resolution is a matter of politics, not technology. You have to make certain people not do things. You can't engineer that away.

    All in all: Mean-spirited talk that argues in bad faith, poorly. Would not watch again.
