Paul Allen on the singularity

Microsoft cofounder Paul Allen and AI scientist Mark Greaves are pessimistic about the prospect of a technological singularity in the intermediate future: "If [it] is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs." [Technology Review]


  1. Isn’t technological singularity by definition all about “unforeseeable and fundamentally unpredictable breakthroughs”?

  2. Allen’s pessimistic view rests on the assumptions that we must implement intelligence the same way the human brain does, that we must understand all aspects of human brain function, and that we must understand all aspects of the hardware we build and the software we write. We like to think that because we’ve made something complex (like the internet), someone must know all about its pieces and how they behave. Not true.

    “Some textbooks make it sound as if software engineering is a formal process of making out lists of specifications, milestones, and the like, [but] the process is also an experiential hands-on endeavor. You don’t ride a bicycle by making out lists of part numbers. You have to get on it and lurch around.” From Three Questions about Cyberculture, an interview with Rudy Rucker.

    1. I disagree, I think Allen’s pessimistic view is based on his assumption we’ll be using Microsoft technology, and they’re always behind the times.

  3. The singularity is an insult.  It degrades all notions of creativity and design and is based on absurd assumptions about the mind, technology, and computing.  Kurzweil is a good writer and people like to believe in tidy but ridiculous prophecies, but it has always seemed strange to me that people with otherwise rational, skeptical, scientific outlooks can get behind this singularity hooey.

    1. The singularity is hope. It expands all notions of creativity and design and is based upon the sound assumption that our brain is a finite physical system that must follow the same rules that govern the universe and thus, like everything else, can be mathematically explained. Our technology and ability to compute will keep increasing and evolving. See what I did there? Don’t bother posting if you are not going to bring any real arguments to the table. Our emotions can be dangerous guides for determining the possibility of events.

      You can argue about the timeframe, but we will one day understand the brain, so I don’t get why you think it is so strange that rational, skeptical, scientifically minded folks like to think about this issue.

      1. Koen, while I believe that our brains are physical systems, and I do see technology advancing and computational power increasing, the belief that technology and computation will lead to creative machines is a faith that I do not possess.

        I can’t argue about the time frame: I think it won’t happen.  The singularity is based on a simplistic notion of creative thought and design – which I find insulting – and an unsupported assertion that machines will be able to do things unlike anything I’ve ever seen any hint of them doing.

        I would be *thrilled* to be wrong, so please – what’s the evidence that this will come to pass?  Where are the thinking, creative machines?  Or even a hint of a creative machine?

        1. What would you accept as positive proof of a creative machine? For example, is a machine with behavior indistinguishable from an insect or cat good enough? Would a human Turing test be necessary and sufficient?

        2. re: “Where are the thinking, creative machines?  Or even a hint of a creative machine?”

          We do not need a machine that has the “creativity” of a Da Vinci. We need a machine that has the creativity of a human baby… and then have it learn.

  4. re: “it will take unforeseeable and fundamentally unpredictable breakthroughs.”

    I don’t know if the singularity will happen or not, but one only has to look back in time to see how shortsighted we can be about technology. IIRC, in the 1950s IBM saw a market for maybe a couple dozen computers. Some thought no one would ever need a computer in their home, or that 640K of RAM should be enough for a home computer. And so on. So what computers will look like 35 years from now is going to be very hard to predict.

  5. I agree with Allen that we need to understand neural mechanisms better.  Heck, I’ve devoted a lot of my research effort to trying to steal algorithmic tricks from the brain.  I’ve had a recent success: the world’s fastest Basis Pursuit Denoising solver, inspired by how sensory cortex works (not that IEEE let me publish the underlying biology there – but that’s another gripe).

    What I’d like to see is more funding and faculty positions for people studying the ways brains solve problems better than our current best AI algorithms.  We can steal computational tricks from the brain without having to simulate every last synapse; just by looking at the way brain architectures solve seemingly impossible problems, we can uncover new and better ideas and algorithms.  We need to prove the algorithms work as a separate project, but that’s doable, and thinking about brain function is a great source of new computational ideas.  I’m doing my best to find support for this project, but the pickings are pretty slim; it’s no longer easy to be interdisciplinary in today’s tight, short-term funding climate.

  6. If we are going to have a sensible conversation about the singularity, we need to define our terms. Conversations about the singularity tend to be plagued by confusion because the term is defined in several different ways:

    1) The original use of the term in this context: a change in the speed at which technical advances occur significant enough that events after the shift cannot be predicted from before it. Of course, such shifts occur all the time, but the singularity is different because of how close it comes to being instantaneous.

    2) The second use of the term: the creation or emergence of a weakly superhuman artificial intelligence. This stems from a statement by someone (probably Vinge) that the existence of a weakly superhuman artificial intelligence would cause (1), and that (1) could in turn cause the emergence of a weakly superhuman artificial intelligence.

    3) The way hippies of a strongly McKenna-ish bent sometimes use it, which resembles (1) except insofar as it involves the Mayan calendar, the I Ching, machine elves, and some fairly confused interpretations of information theory.

    (1) is difficult to argue against but is essentially unspectacular except from an outsider’s perspective. In 1802 it would have been difficult to convince your average London street urchin that the iPad could exist, let alone be exceedingly profitable, or that wifi networks would exist in sub-Saharan Africa, but it gets easier to predict the future the closer you get to it. On the other hand, the world of 302 was not so extremely divorced from the world of 102. It is reasonable to say that some time periods are more difficult to predict than others, and to say that, in retrospect, these were singularities to varying degrees.

    (2) is difficult for a materialist to refute but difficult for a dualist to accept. A materialist may, however, easily question the line of reasoning by which (2) might lead to (1) in some especially notable way: some humans have extremely high intelligence but do not cause significant shifts in the speed of progress, while sometimes technological progress is given great leaps by accidental discoveries or by the good luck of extremely dumb people.

    (3) is questionable even to people who accept (1) and (2), but is fun to think about.

  7. It depends on what you mean by the singularity. I can’t speak for Kurzweil or Vinge, but I see it as the culmination of a demonstrable trend of accelerating change, at which point events are beyond our ability to predict. Even if Allen is right, shooting down two specific examples out of the multitude of ways the singularity could occur isn’t particularly compelling. But hey, Allen’s been plenty wrong before. Why stop now?
