Very big thinkers ponder: "What do you think about machines that think?"


Over at BB pal John Brockman's, nearly 200 very smart people, like Daniel C. Dennett, Brian Eno, Alison Gopnik, Nina Jablonski, Peter Norvig, and Rodney Brooks, ponder the EDGE Annual Question of 2015: What do you think about machines that think?

From John's intro:

In recent years, the 1980s-era philosophical discussions about artificial intelligence (AI)—whether computers can "really" think, refer, be conscious, and so on—have led to new conversations about how we should deal with the forms that many argue actually are implemented. These "AIs", if they achieve "Superintelligence" (Nick Bostrom), could pose "existential risks" that lead to "Our Final Hour" (Martin Rees). And Stephen Hawking recently made international headlines when he noted "The development of full artificial intelligence could spell the end of the human race."

But wait! Should we also ask what machines that think, or, "AIs", might be thinking about? Do they want, do they expect civil rights? Do they have feelings? What kind of government (for us) would an AI choose? What kind of society would they want to structure for themselves? Or is "their" society "our" society? Will we, and the AIs, include each other within our respective circles of empathy?

Numerous Edgies have been at the forefront of the science behind the various flavors of AI, either in their research or writings. AI was front and center in conversations between charter members Pamela McCorduck (Machines Who Think) and Isaac Asimov (Machines That Think) at our initial meetings in 1980. And the conversation has continued unabated, as is evident in the recent Edge feature "The Myth of AI", a conversation with Jaron Lanier, that evoked rich and provocative commentaries.

Is AI becoming increasingly real? Are we now in a new era of the "AIs"? To consider this issue, it's time to grow up. Enough already with the science fiction and the movies, Star Maker, Blade Runner, 2001, Her, The Matrix, "The Borg". Also, 80 years after Turing's invention of his Universal Machine, it's time to honor Turing, and other AI pioneers, by giving them a well-deserved rest. We know the history. (See George Dyson's 2004 Edge feature "Turing's Cathedral".) So, once again, this time with rigor...

The EDGE Annual Question 2015 - What Do You Think About Machines That Think?

Notable Replies

  1. Really? Because I was just thinking the next topic should be, "What do you think about unicorns?"

  2. Imagine technology that totally breaks the mold of doing work to get things, though. Suppose someone invented Star Trek replicators (and an energy source to power them). The issue is what happens if machines can do so much of the work that the rich don't need the poor to work for them? (In the long run, the answer is probably dead rich people. I wish the rich would understand this.)

    On the larger topic, I find most pondering about thinking machines fantasy-driven and not very reality-based. If we define "thinking" as "that thing that happens in our brains," then nothing will ever be a thinking machine. If we don't, then we've had thinking machines for a long time. When was the last time someone designed a new computer without using an existing computer? Thinking machines are already here, and they are already doing a lot of thinking for us. Moore coined Moore's law in 1965, based on his observations of the industry's history. The "singularity" occurred before 1965 - it's just that the crazier singularity thinkers mistake "exponential growth" (which doubling every 18 months is) for "instantaneous launch to infinity."

  3. Jorpho says:

    But what does that mean, exactly? That humans would be more willing to trust an imperfectly designed model rather than considering their own intuition? Because we lost that race a few centuries ago. I think my favorite example will probably remain that thing two years ago, when people decided to design sweeping austerity policies around a spreadsheet error.

    And so it goes with "thinking machines". Decades from now we may very well have more advanced computers capable of doing many more things than they do right now, and some people will still be fretting over whether they are thinking or not. Sometimes I suspect that a lot of these "very big thinkers" would benefit from sitting down and taking a few programming courses.

  4. When 1% of the world owns more than half of the stuff, it's not a real dichotomy, but it is approaching one.

    That is exactly what I'm saying: one possible future is a small percentage of people living lives of extreme luxury while everyone else rediscovers subsistence farming.

    Because the point of money is to command other people, and people who were raised in that paradigm aren't going to forget it if post-scarcity suddenly becomes a thing. Rather than "make a lot of money," think "have a lot of people with guns protect their stuff." Why would people let their cash crops rot in silos while the local people who were forced off their land to grow those crops eat the rats that are eating the crops in the silos? I don't know, but that's totally something people have done.

    Like I said, ultimately I think the solution to this is either that someone says, "Why the hell aren't we sharing" or rich people end up dead (but are probably replaced by a new crop of rich people because that's how revolutions go, so it will take several iterations).

    I meant it to be a purely semantic point. If we define thinking to be a thing machines can't do, then we will never agree that they do it.

    And they designed these things by drawing lines on the beach with sticks?
