The Singularity won't be heaven: Annalee Newitz

IO9's Annalee Newitz takes aim at the idea of the Singularity in an essay called "Why the Singularity isn't going to happen." Newitz's objection to the idea that technology will allow us to transcend human limitation and misery boils down to this: the vision of technological utopia is insufficiently weird. It is a "[vision] of tomorrow [based] on fantasies from the past rather than what we can glean from factual accounts of history." It assumes that technology will solve problems but doesn't consider what new problems will emerge.

I think that this may be true of people who embrace the Singularity as a scientific prediction or even as a social project, but I don't think it's particularly true of literature about the Singularity. Novels like Vernor Vinge's Rainbows End, Rudy Rucker's Postsingular and Karl Schroeder's Sun of Suns are all about this idea of problems lurking inside the solutions. Charlie Stross and I have written two novellas, "Jury Service" and "Appeals Court," that are also about this notion (we're writing a third one now, "Parole Board," and Tor's going to publish all three as a novel called Rapture of the Nerds).

Indeed, it seems to me that in literature, the Singularity's role is to serve as a straw man for critiquing the idea of technology as a panacea (see, for example, my interview with Ray Kurzweil on the Singularity as a literary device, a spiritual belief system, and a technological prediction).

But Newitz's critique of insufficiently weird technological prediction is spot on. She's identified a common flaw in futuristic prediction: assuming that technology will go far enough to benefit us, and then stop before it disrupts us. For a prime example from recent history, see the record industry's correct belief that technology would advance to the point where we could have optical disc readers in every room, encouraging us to buy all our music again on CD; but their failure to understand that technology would continue to advance to the point where we could rip all those CDs and share the music on them using the Internet.

What people of the industrial age didn't bargain on - didn't even know about - were all the ways that the byproducts of industry would destroy the environment. And I don't just mean in a save-the-Earth kind of way: I mean that when Friedrich Engels walked through industrial Manchester in the mid-nineteenth century, he was completely blindsided by how thoroughly pollution and life in cities built around factories had destroyed people's lives. In response, he wrote The Condition of the Working Class in England, a document that would have stunned reformers fifty years earlier, many of whom believed that industrialization would solve every social problem and remove the burden of physical labor from the shoulders of even the poorest people.

There's no denying that industrialization laid the foundations for a better, more productive society. It has led to countless innovations, and has improved the lives of many working people. But it also destroys life in ways that Adam Smith and Eli Whitney could never have imagined. It has transformed our entire planet, from its ecosystems to its atmosphere and beyond. It's only a slight exaggeration to say that we do not live on the same planet that people lived on in 1750. The environment - from megacities to airborne particles and space junk - has changed that much.

Singularity-level technology changes the world to the point where the things our ancestors wanted are not the same things we want. Today, we are trying to roll back the effects of industrialization. We are trying to undo the damage that penicillin did. If history, real history, teaches us any lesson it's that new technologies do not cause us to transcend. They fix some things, and then cause new problems we hadn't anticipated.

Why the Singularity isn't going to happen


  1. Cory, I think you’re absolutely right about science fiction about the Singularity. I was actually thinking as I was writing this about how Iain M. Banks’ new novel Surface Detail deals so well with the weird and unexpected consequences of being able to upload our brains to virtual heavens after we die. Many other authors depict plausible post-Singularity worlds – including Vinge, as you point out, as well as many others.

  2. So, more accurately: the Singularity IS actually going to happen, just not the fluffy cloud nerd heaven way :)

    (By the way: it’s very much IMHO a better fit with observed history that we’re going to have a string of non-all-encompassing singularities rather than one hopping-off point, but still. Maybe increasing in frequency? I sure hope not; the generation gap is already enough of a problem without me being unable to meaningfully communicate with my future grandsons because their bodies were built with wireless antenna equipment covering most of their eardrums.)

    (‹ [I] [HAS] [NEED] [LOVE] [EXPRESS] [YOU] ›
    ‹ What’s the geezer trying to say? ›
    ‹ Heh, he’ll never keep up on that keyboard. Just /ignore him, nod and vocalize something nice. ›
    “It-is-right, grandfather”, the older one said, in his awkward almost-English rendition of thought-patterns.)

  3. The problem is that this person doesn’t understand Vinge’s premise: once intelligence finds a way to apply itself to increasing intelligence, there will be a feedback loop of intelligence increase whose consequences are impossible to predict or understand with the intelligence we have today.

    Human intelligence varies by only a small percentage on an absolute scale. It isn’t widely known, but there is a mathematical technique, Rasch measures, which allows putting measurements of intelligence on a ratio scale – a true measure that allows all arithmetic operations. Unlike IQ, such scales have true zero points, and equal intervals on the scale correspond to equal increments of ability. The CSS scale provided on the Stanford-Binet 5 is one such scale. It sets the average 10-year-old’s score to be 500. An average 27-month-old would score 435, a five-year-old 470, and an adult about 510. The highest score in the test norming group on this scale was 592, equivalent to an adult IQ in the mid 150s.

    The average adult is only about 8.5% smarter than a five-year-old. The smartest person in the world is likely less than 50% more intelligent than the average two-year-old.
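    The arithmetic behind these percentages can be checked directly from the CSS scores quoted above (a quick sketch; the scores are as quoted in this comment, not verified against the SB-5 manual itself):

```python
# Ratio-scale comparisons using the Stanford-Binet CSS scores quoted
# above (500 = average 10-year-old). Scores are as quoted in the
# comment, not verified against the SB-5 manual.
css = {
    "27-month-old": 435,
    "five-year-old": 470,
    "ten-year-old": 500,
    "adult": 510,
    "top of norming group": 592,
}

def percent_smarter(a, b):
    """How much smarter a is than b, as a percentage, on a true ratio scale."""
    return (css[a] / css[b] - 1) * 100

print(f"adult vs five-year-old: {percent_smarter('adult', 'five-year-old'):.1f}%")
# -> adult vs five-year-old: 8.5%
print(f"top scorer vs 27-month-old: "
      f"{percent_smarter('top of norming group', '27-month-old'):.1f}%")
# -> top scorer vs 27-month-old: 36.1%
```

    Both figures match the claims above: roughly 8.5%, and comfortably under 50%. Note that this kind of division is only meaningful because the scale has a true zero point; it would be nonsense on an ordinary IQ scale.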

    A very slight gain in intelligence on an absolute scale will take those who achieve it beyond the horizon of our comprehension. That intelligence could be used to do anything that human intelligence has been used for, but better, and it may be capable of things intelligence never was before. The realm of application of intelligence isn’t just improving technology or solving artificial problems, but also art, culture, social development and everything else to which thought or perception has ever been or ever could be applied.

    Whether it is biological or machine intelligence that starts the cycle of intelligence improvement isn’t vital to the argument or to its conclusion. Unless intelligence cannot be used to improve intelligence or there is some fundamental limit on intelligence which we have already reached, there will be a singularity.
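    The shape of this feedback argument can be sketched numerically. In the toy model below (my own illustration, with a made-up improvement rate `k`; this is not Vinge's actual math), intelligence applied to improving intelligence makes each per-step gain proportional to I squared rather than to I, so growth is faster than exponential and every doubling takes fewer steps than the one before:

```python
# Toy sketch of the feedback-loop argument (an illustration of the
# shape of the claim, not a model anyone here proposes): when
# intelligence I is applied to improving intelligence, the per-step
# gain is proportional to I*I rather than to I.
I, k = 1.0, 0.001                # starting intelligence, improvement rate (arbitrary)
gaps, step, last_step, next_double = [], 0, 0, 2.0
while len(gaps) < 6:             # record the first six doublings
    I += k * I * I               # smarter agents improve themselves faster
    step += 1
    if I >= next_double:
        gaps.append(step - last_step)
        last_step, next_double = step, next_double * 2
print(gaps)                      # steps per doubling; the gaps shrink
```

    With plain exponential growth the doubling time would be constant; here it roughly halves with each doubling, which is the "singularity" shape of the curve: the continuous version blows up in finite time.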

    1. I understand the whole idea of a super-intelligence, but what is intelligence without knowledge? The super-intelligence will still need to gain knowledge through empirical research, which is a really slow process (obtaining material samples, conducting lab tests, etc.). It can’t just “guess” everything, so it’s not going to be an omniscient danger straight away.

      Until machines can carry out all the boring manual research themselves, we’re OK. But after that, yep, it’s the end of the human era. Will these super-intelligent entities be our benevolent slaves? Hmmm. Watch “Animatrix – The Second Renaissance”.

    2. So we’ll have smarter psychopaths. Great.

      Which is just one example of why it’s incredibly simplistic and outdated to regard mentality, and by extension evolution, as being meaningfully measurable on a one-dimensional scale such as “IQ”. This is akin to using the X-men comics’ view of evolution as proceeding ever upwards on a straight line to ever increasing utility in the current environment. We already know that that view of evolution, with the famous linear picture of the amphibian evolving by stages to a 50s suburbanite, is dead wrong: and not just because you can paste Alfred E. Neuman at the end.

      Evolution doesn’t “progress”; it branches out in N-dimensional space (where “N” is a very large integer) to fill every available micro-environment. Occasionally a branch gets cut when its micro-environment, say for example human civilisation or a particular volcanic vent, disappears. Such changes aren’t towards any particular goal; they’re just random changes, and survivability isn’t a matter of being “better” so much as of a new set of design trade-offs, and often just a matter of sheer dumb luck. What is supreme in one micro-environment is pathetic by the metrics of another. Even within our own evolutionary clade, the other hominids, such as Neanderthal or Homo Hobbitus, aren’t “ancestors”; they’re a bunch of (often remote) cousins differing in ways that have nothing to do with “IQ”. Chief amongst the selection criteria are things like “don’t be on the side of the planet that gets blown up by a volcanic caldera or hit by an asteroid”.

      To restrict ourselves to mentality, let’s consider one vector out of many that has nothing to do with “IQ”: the autistic spectrum. I’m at one end, you’re (probably) in the middle and, depending on how you count, either Wilson’s Syndrome or schizophrenics are at the other end. The difference between us is not of “IQ” nor even of ability: it’s just a change of co-ordinate systems. Even so, I can easily solve problems that are almost impossible for you to solve, in ways that are literally beyond your comprehension, and you can say the same about me. We can both look over at the super-human socialisation skills of the Wilson’s Syndrome girls and say “How the Hell do they do that?“. It’s like a shift between Cartesian and cylindrical co-ordinates, or a Laplace transform. Ted Chiang’s “Understand” aside, you’re never going to be able to combine these rather antagonistic viewpoints into one super-duper, do-it-all outlook any more than you can be Vishnu and Shiva, or the Country Mouse and the Town Mouse, in the same brushstroke.

      Leaving “IQ” aside in another fashion, we’ve also seen speculations of alien and post-human branches of life doing much better than us without having any intellect at all. Cthulhu and some of Bruce Sterling’s alien races come to mind.

      1. This is boingboing, man. You can expect a rather atypical clumping of us on one end of the autistic spectrum. We are not a normal distribution ;-)

      2. You seem to be replying to things that weren’t in my post. Increased intelligence does not imply psychopaths – in fact, smarter people have a strong tendency to be more ethical. Natural evolution has little to no bearing on intelligence increase of the type I am talking about, which is a consciously directed development. Intelligence does have many dimensions; nevertheless, they are generally aligned rather than orthogonal, and g is still by far the biggest one, accounting for over half the variation in performance on tests, with the largest subsidiary factors accounting for perhaps a third to a fifth that much. Also, despite the early literature, schizophrenia has nothing to do with the autistic spectrum, still less Wilson’s (or Williams’, as I think you meant) syndrome – unless you are saying that they are the opposite of autistic? Even so, I think they are pretty much orthogonal concepts.

        Your ideas about mental coordinate transforms are interesting, if necessarily a bit vague. I have thought quite a bit about mapping mind-space. I’m not saying that mind is unidimensional; however, there are potentially meaningful unidimensional measures which can be constructed of minds in mindspace, corresponding to linear-algebra concepts such as the determinant, the magnitude of the principal eigenvalue, and the span.

        Moving back to conventional psychometrics, the empirical evidence suggests that the nature of problem-solving abilities seldom differs radically between people, that the structure of dependencies and correlations of those abilities is hierarchical (see Carroll’s three-stratum theory for more), and that there is a basis for comparison of people not only on narrow-ability factors but even more on the overall g factor. Savants and those with wild talents like lightning calculators and eidetic memory may not fit into this scheme – but I wouldn’t say for sure that they don’t. We already know that the bell-curve assumption is wrong and leptokurtosis is rampant – so these may be part of the fat tail of existing abilities.

    3. The problem is that this person doesn’t understand Vinge’s premise: once intelligence finds a way to apply itself to increasing intelligence, there will be a feedback loop of intelligence increase whose consequences are impossible to predict or understand with the intelligence we have today.

      If the idea behind the singularity is based on an intelligence feedback loop then it began when we started writing things down, allowing ideas to exist outside the confines of a brain. Or maybe when we started developing complex language, allowing ideas to spread from one brain to another across space and time. Either way, it’s been happening for a long long time.*

      I never quite understood why a future that is “impossible to predict or understand” with our current frame of reference is any different from what we’ve always had. Jules Verne was arguably the most gifted, forward-thinking futurist of his generation, but his predictions of 21st-century life were less “iPhone” and more “battle zeppelin.” The future has always been a mystery and will remain so until we get there.

      (*I know, I know… “this is different because we’re gonna have sentient, superintelligent machines any time now.” Well, maybe and maybe not.)

      1. I never quite understood why a future that is “impossible to predict or understand” with our current frame of reference is any different from what we’ve always had.

        I think the difference is a matter of timeframe. Only having a vague idea what our world will be like in twenty or thirty years is one thing. Having no idea what it will be like in six months, or next week… well, that’s a whole ‘nother ball of superintelligent gray goop.

      2. If the idea behind the singularity is based on an intelligence feedback loop then it began when we started writing things down, allowing ideas to exist outside the confines of a brain…. I never quite understood why a future that is “impossible to predict or understand” with our current frame of reference is any different from what we’ve always had.

        Good points. I had writing and Google in mind as examples of ways natural intelligence has already been applied to increasing itself when I wrote that. Others have noted that to those going through the singularity, it may not be obvious whether anything fundamental has changed, as in the scene in Accelerando when the uploaded intelligences in the coke-can sized starship are arguing with each other over whether there was a singularity. It’s like falling into a black hole – you don’t notice anything too unusual as you cross the event horizon.

        There is a potential difference from past changes, though, with radical increases in intelligence. A neolithic baby brought forward to today would have no trouble adapting, but could not adapt with her existing intelligence potential to a world where the average intelligence had doubled. SF readers could cope with going to a time when the gadgets, government, society and so forth are different (within a wide range corresponding to all but the weirdest or worst SF scenarios) but they couldn’t cope with going to a time when the equivalent of opening a door requires factoring a thousand-digit number or understanding everyday news involves category-theory concepts that just don’t fit into a human brain.

        There is a potential for the future to be far more alien to us than anything today would have been to a person of the past. The really unfathomable differences are most likely to come from changes in desires, interests and goals. Take out sex, food, money, possessions, power, knowledge, socializing and sports from the motivations of future people and replace them with things that are to the old motivations as a cuneiform barley accounting tablet is to a hypertext version of Finnegans Wake. There’s no telling what such people would want, let alone what they would do individually, still less what they would do collectively.

        I’m also skeptical that any kind of conscious AI or human upload will happen any time in the next few decades. The problems and complexities are far greater than the enthusiasts or even most experts realize. To future people the idea of simulating a brain with today’s ideas of black-box neurons, electrical action potentials and dumb synapses will seem as crazy as trying to go to the moon on a sailing ship pulled by birds. “Can you believe those witch-doctor silicon-age neuroscientists?!” they will say. “They didn’t even suspect acoustic signaling! Despite the propagation speed and seeing the axons actually pulsing! Let alone the cytoskeletal mechanical resonance spectra. Wouldn’t even listen even after the physicists spelled everything out and people had already built phased-array ultrasonic neural stimulators. No, they wouldn’t give up that ridiculous Hodgkin-Huxley model no matter how many times it was shown to violate the laws of thermodynamics, yet still insisted that heat would decohere quantum entanglement in the brain even after it had already been observed in plants…”

        I think the future path of radical intelligence augmentation will go something like this: non-invasive mind-machine interfaces will come first, with some temporary sensory and motor add-ons as well as deep brain stimulation, then some low-performance partial brain prosthetics, then much later elective permanent partial upgrades. Uploads won’t happen until there are really good wet, body-temperature molecular disassemblers, which will be at least a few decades out (and uploads will also likely need a few orders of magnitude more computation than Moravec and Kurzweil think, much of that as quantum computing). It may never be economically feasible, but if it is, then uploads will have much more scope for modification and machine interface than meat people. At the moment there seems to be no reason to think designed rather than evolved AI will ever work, but with sufficiently augmented natural-origin intelligence doing the designing, who knows?

    4. If you take a moment to abstract and generalize the I^2 = (unpredictable & thus plausibly utopian) argument, and look backwards along the evolutionary chain and into different species for information regarding intelligence ratios, rather than just across a single individual’s ages, we have examples of differences of many magnitudes of intelligence upon which to compare and extract invariant patterns.

      Humans are nearly incalculably more intelligent than any given fish. Yet, has that prevented environment, free will, and externalities from creating socially negative behavior in human actions? Not at all. It’s only made the manifestations of them wholly imperceivable and incomprehensible to a fish. They’re still there though. It’s a zero-sum progress.

      Thus, while it is likely true that culture of the singularity would be incomprehensible to us, that doesn’t preclude it from retaining the normal distribution of “good” and “bad” in an individual’s actions which goes back far earlier than our existence as a species. It may just redefine what those terms come to mean, much the same as what is deemed morally correct nowdays differs from that of a few centuries ago.

      1. Good point, and I wasn’t arguing that people in a super-intelligent society wouldn’t understand each other and have perceptions of good and bad in their environment. Singularity is not utopia, just complex, intelligent, unpredictable and incomprehensible. (On the other hand, I don’t think we are “incalculably” smarter than fish or even mollusks; though it would be hard to pin the exact difference down, I suspect that we aren’t as much as 10 times smarter than an octopus.)

  4. The problem with the singularity is far deeper. It assumes that the mind can exist without a body and its environment. Impossible. The mind is embedded in the environment–you cannot extract it. For further reading I suggest Bateson’s An Ecology of Mind and Maturana and Varela’s Tree of Knowledge. Or check out the Buddhist technique for deconstructing the self. One discovers through practice that the mind and self are emergent phenomena (based on sensory experience) that in no way can be a thing one can simply upload and download. Sure, it’s a great sci-fi plot device, but is simply not feasible nor desirable.

    1. Mr Lopez, the problem with your argument is also deep; it assumes the body and environment which the mind requires cannot be radically altered.

      This is very unlikely to be true. We already have people inventing new senses, and training their brains to use them – new limbs are only a matter of time and technology.

      The mind is an emergent phenomenon. But don’t think emergent phenomena can’t, once in existence, be moved, copied, or altered. It’s very difficult, not impossible.

    2. The problem with the singularity is far deeper. It assumes that the mind can exist without a body and its environment. Impossible.

      “The Singularity,” in its many guises and iterations, assumes no such thing. Now, in the versions where people are uploaded, they’ll either live in virtual worlds that feel similar to the real ones, or be placed inside robotic bodies with similar sensory inputs. Perhaps both.

      It may be that sentient beings can exist as pure thought, but those beings would be very different from us. We would require extensive modification, at the very least.

    3. Of course we’re emergent phenomena. And yes, we do depend on sensory stimulus. But those stimuli reach our mind as electrical impulses. If we knew the algorithms, and how all the neurons were connected, we could model a brain in real time and feed it information equivalent to any sensory impulses you like, then properly decode the brain’s output as the actions it wants to take.

      1. But now you’re talking about simulating a heck of a lot more than just one human brain. If the plan is to simulate the entire human experience in a machine then what we’re talking about is basically the Matrix.

  5. We’re *just* as vulnerable to disease now as before penicillin? This is nonsense. It’s very rare now for people to die from sepsis; pre-antibiotics it wasn’t uncommon for a minor injury to go gangrenous and kill you. So objectively, antibiotics have led to a massive improvement in vulnerability to disease. The author’s being unduly pessimistic. To a medieval peasant, the average Joe these days lives in a world more comfortable than that of their King. Are we happier than that peasant? That’s another matter, but has more to do with human psychology than any objective measure of technological progress.

  6. Do the problems new technologies create neutralize the benefits? The answer to that question is a lot more indeterminate than predicting unforeseen contingencies, and more relevant to what lesson we ought to take from this. Evaluating the net benefit of the Industrial Revolution would be a massively complicated question as well, but even if we could say that it was for the better or not, it likely wouldn’t help us predict how dire the problems future technologies might cause. It’s correct to note that there will likely be unpredictable negative repercussions, but this observation itself doesn’t predict whether the future will be utopian or not.

    I’d also like to point out the probably worn-out view that human intelligence is not well understood at all, and that there’s certainly no reasonable metric for measuring a person’s intelligence relative to a computer’s. Has any singularity proponent ever responded to the point that computational power and intelligence are vastly separate concepts?

  7. Has anybody here ever taken the output of their guitar distortion pedal and fed it back into the input? Yeah. That’s a good analogy of what might happen to computer intelligence. Try it. It’s kinda cool, kinda scary.

  8. With all the technology grafted around the blonde FOX anchors, all that is achieved is Barbies and Kens with even bigger heads containing more air. There is such a thing as a limiting factor. As long as humans remain at the center of all those new tools, the result can only be more human, at best or worst.

    How much more of that can we take really? ;)

  9. You will not be able to stay connected, brothers and sisters.
    You will not be able to plug in, turn on and cop out.
    You will not be able to lose yourself on BoingBoing, Lifehacker, or Perez Hilton,
    knocking down a mocha and watching videos,
    because the Singularity will not be webcast.

    The Singularity will not be brought to you by Google
    via multiplatform, geocoded, personalized, pushed content.
    The Singularity will not show you Sergey Brin
    hosting a podcast with Steve Ballmer, Mark Zuckerberg, and Steve Jobs,
    who are still pretending they care about just plain folks.
    The Singularity will not be webcast.

    The Singularity will not be brought to you one tweet at a time and will not star Ashton Kutcher and Demi Moore or Pedobear or 4chan.
    The Singularity will not friend you.
    The Singularity will not make you famous for five minutes.
    The Singularity will not increase your page views, because the Singularity will not be webcast.

    The Singularity will not be webcast, will not be televised, will not be simulcast, will not be in 3D, will not be in IMAX or THX.
    The Singularity will not be mediated, brothers and sisters;
    The Singularity will be live.

  10. With the exception of Gibson (Neuromancer) and the partial exception of Rudy Rucker (whose take on human psychology and artificial minds struck me as naive in Postsingular), almost every single novelist extrapolating an AI singularity is severely pessimistic about the consequences for humanity. Starting with Leiber’s classic “The Creature from Cleveland Depths,” for most authors the AI singularity is a modern Golem.

    Also, you will find that the smart weirdos from Singularity Institute are mainly concerned with the singularity posing an existential risk that will be very difficult to contain (if at all).

    The observation that novelists commonly (and probably erroneously) believe that technological progress tends to stop where it benefits us was quite often voiced in the 1970s. Now it is neither original nor true. Golden Age SF arguably got carried away with the promises of a bright future, but most current SF is seriously dystopian. The future has been taken from us, and it is probably not going to come back.

  11. I would say that mind uploading is basically the biological equivalent of a computer drive imaging routine, and the neural equivalent of a virtual machine. The big issue is that compared to a hard drive, a brain never stops altering its content (unless it is dead). A hard drive in comparison can be made read only while the imaging process is performed.

    But by the time a brain-imaging system has been developed, we are likely already to have mapped the various sensory inputs and found ways to emulate them. Then it will be a short jump from there to attaching those emulations to a brain image and bootstrapping it all into the brain equivalent of a virtual machine.

  12. Newitz isn’t really critiquing Singularity science fiction so much as she is critiquing the popular transhumanist agenda, and rightly so, although most write-ups of transhumanists that I’ve read have been explicitly or implicitly critical anyway (as with the one about the middle-aged couple who didn’t plan for retirement in any way because they assumed that they’d live forever). You’ve always got the fans who choose to ignore the darker implications of some of the scenarios that they read.

  13. This point reminds me of a book I dearly love for its combination of enduring insight, clear example, and chuckle-inducing good humor, and which should probably be required reading at the University of Futurists, Prognosticators, and Technophiles: John Gall’s Systems Bible: The Beginner’s Guide to Systems Large and Small.

    Gall’s “Fundamental Theorem” — the tenet with which he begins the book:
    New Systems Mean New Problems.

    “When a new system is set up to accomplish some goal, a new entity has come into being — the system itself. No matter what the ‘goal’ of the system, it immediately begins to exhibit systems-behavior, that is, to act according to the general laws that govern the operation of all systems [e.g., from later in the book, ‘Large systems usually operate in failure mode.’]. Now the system itself has to be dealt with. Whereas before there was only the Problem — such as warfare between nations, or garbage collection — there is now an additional universe of problems associated with the functioning or merely the presence of the new system.”

    God(s) bless John Gall.

  14. I guess I am mostly here to echo what Halloween Jack said in #15 and, to a lesser extent, what Ultan said in #3. Newitz says “the Singularity won’t happen”, but she means “the Rapture of the Nerds won’t happen”. This is a common conflation made by both the rabid believers and the stone deniers.

    What’s funny here is the fact that Newitz doesn’t seem to be ruling out Vinge’s Singularity at all — just Kurzweil’s version. But we don’t know a Vingean Singularity will happen at all! And, if it does, by its very nature we can’t know what is on the other side OR IT WOULDN’T BE A SINGULARITY!

    I wrote about this problem in my essay “I am not a Singulatarian”:

    “What do we know? Well, based on the ebb and flow of human history we can be certain there is no change which doesn’t bring some good and some bad. Quite often the bad can be really bad. In the case of something as transforming as a Technological Singularity it might even be world-endingly bad. Outcomes involving drastic changes to our species fall into the ‘good’ side of the ledger, because the red-ink side doesn’t include us at all!”

  15. I think the “it’s not weird enough” is pretty much built in to Vinge’s concept of the Singularity, which is “it’s so weird we can’t really imagine what life will be like.”

    As far as I’ve read, Kurzweil seems to be the only one whose view toward it seems rapturous.

    I think it’ll happen, and shit will go wrong in about 10,000 different ways while going right in about 10,481 ways. Pretty much as cooking, agriculture, the Industrial Revolution, and the information age have.

    When cyberpunk authors were predicting the internet, they didn’t come up with the blink tag, 4chan, Facebook, Chatroulette, iTunes, the DMCA, LOLcats, Fake Steve Jobs, or Android. It was pretty much Second Life with dudes in leather jackets with swords, and everything was Japanese.

  16. It won’t be a singularity. There will be a rain storm of advances that splash into each other changing, influencing, creating new advances. This is real life so it won’t be heaven. There will be trolls and griefers and people doing stupid things… and presumably AI doing some stupid things too. There will be people/AI doing good things, but don’t expect utopia.

    So if you can copy your brain/soul/personality and upload then other people can too. Then it can be copied and remixed. Who owns the copyright on you? And is that copyright worth anything?

  17. Well, if we go into the singularity still commodifying everything and anything, dystopia is certain.

    Besides, out of 100 people, how many even understand computers? Deep down? Electronics? I’m sorry, but your friends shouldn’t be stunned by your ‘wizardry’ in creating flashers. The inherent problem is that WE are not Technology. We and Technology are intertwined but not the same; it is our sociopathic child. If technology is following exponential growth at a high rate, we need to jump on, understand it now, and wrangle it. Otherwise, seen as a force of magic, only those with enough understanding or wealth (or both) will have even the slightest notion of how to wield it.

  18. Ms. Newitz is wrong on many points. The first of these is her claim that the Singularity will not occur.

    Her basic premise that the Singularity will not be the “fluffy, kittehs, all-is-good-and-light” event that so many make it out to be is spot-on.

    However, the idea that she glosses over most is the coming intelligence explosion. For the first time in the history of humanity there will be greater-than-human-intellect on the planet. That intellect may be born of humanity, however it will not be human.

    Ms. Newitz underestimates the effects this will have. By orders of magnitude.

  19. One of the problems here is that ‘singularity’ means several different things, only vaguely related to each other, and few people care to distinguish.

    The varieties of technological singularities I’ve come across are as follows:
    1) The ‘jumping jesus’ effect (or, as someone above cited, I^2 — intelligence working to improve intelligence; that said, the jumping jesus effect isn’t precisely the same thing, but it’s close enough): the more knowledge one has, the easier it is to get knowledge, so knowledge tends to grow exponentially.
    2) The posthuman singularity: the jumping jesus effect will let us modify ourselves to the point where we will no longer be considered human
    3) The Moravec nerd-rapture singularity (this is a subvariety of the posthuman technological singularity, which is a subvariety of the jumping jesus effect): when tech gets to the point where AGI is not only possible but cheap, humans will transplant their consciousness into artificial constructs with longer shelf lives. This tends to be where utopianism starts getting to crazy levels, but there’s a lot of fiction that transplants the Moravec upload stuff into a dystopian or negative-utopian light: _Altered Carbon_ and _Down And Out In The Magic Kingdom_ are good examples.
    4) The Kurzweil AI Singularity: when AI becomes not just possible but cheap, all the previously AI-complete problems will be solved. There’s more dystopian than utopian material related to this, which is unfortunate: RUR could be said to start the trend (though the universal robots in question are biological in RUR, much like the andies/replicants in _Do Androids Dream of Electric Sheep_ / Blade Runner), and I needn’t list the more recent examples.
    5) The Tiplerian singularity: some time near the heat death of the universe, all the information throughout time will merge simultaneously, causing everyone to come back from the dead and live forever in limbo because of some obscure mathematics and HEP theory. Might be pathological science.
    6) The GB singularity: earth as a superorganism will ‘wake up’, humans (being its neurons, connected by axons formed of communication networks) will know this somehow, maybe. Utopian amongst techno-hippies. A lot of people assume this has already happened, and that we neurons won’t, can’t, or shouldn’t know the details of the macrocosm.
    7) The (weak) jumping-jesus effect: technology progresses in complex ways and causes unexpected effects, making it both hard to predict the near future and hard to understand the recent past
    8) Something about black holes

    As you can see, it’s pretty hard to talk about the ‘singularity’ without specifying which of the above you refer to. In some of the above, nobody could claim with a straight face that they haven’t happened already, and in others, few could claim with a straight face that they are likely to happen, while there is at least one that constitutes a total non sequitur in this context.

    1. I’m hoping for a Kurt Weill Singularity, where human consciousness gets uploaded to a smoky cabaret to sing Bertolt Brecht songs.

  20. Absolutely — our problems won’t go away with the Singularity, but we will emerge with new abilities to deal with them. And new problems to occupy us. Why would we want otherwise?

    Reminds me of one of my favorite David Byrne lyrics: “Heaven is a place where nothing ever happens.”

    I’m reading Charles Stross’ “Saturn’s Children” right now, and loving the vision of posthuman culture, both familiar and utterly alien. This novel is generating much inner distress for me, as I contemplate its future world without “humans” yet filled with all of humanity’s legacy and so much more. But forcing one to imagine something else is what great (science) fiction can do.

  21. I always see talk of “The Singularity” as nothing more than a repackaging of the notion of “Utopia” from previous eras. Everything will be better and wonderful once we solve all these horrible problems with (faith, science, social justice, balloon juice) and we can all live in perfection.

    To me, the singularity is more about the point at which AI asserts itself on equal footing to human intelligence and decision-making. For better or worse, we will, at that point, be sharing our world with something smarter than we are and capable of thinking faster and in more discrete terms than we can. That’s never happened in human history. The closest comparison was our time sharing the earth with the Neanderthals. And that did not turn out well for our hominid cousins.

    It will be akin to the computer revolution of the past 30 years, but more rapid and more unusual. Imagine an AI that can not only model climate change, but forcefully push for action on the issue? Or a civil planning intelligence that suggests a minor war to thin out the population might be to our nation’s advantage?

    People are already so ready to chain themselves to technology (CrackBerries and iPhones); imagine a technology that actively encourages you rather than just being shiny and well-advertised.

    I don’t think the singularity will be heaven. I think it could be a very real hell.

  22. Someone already stated this, but every iteration of explosive ingenuity changes our perception and interaction with the world around us.

    The Singularity is not Utopia. It’s the compounded effort of all ingenuity, all at once. Just because you can know everything ever known (as fast as physically possible) doesn’t mean you’ll make good decisions. My issue is, will we understand the technology we are using, or will it still be a secret? And will it be available to all, if they so wish? If everybody knows all that can be known, then the playing field is re-leveled, and how you utilize the knowledge will be important.

    Something of enkiv2’s list 1) & 2) combined. 3) is only possible if we are able to break away from our socio-political ambitions and start seeing the exalted Forest for the Trees. Are we a species of individuals, or are we individuals of a species?

    Vote for dystopia. Of course, with so much traffic, no single thought will be heard clearly. So, really, nothing will change.

  23. For the sake of being a devil’s advocate, I’d like to mention that seen from a distance (and back in time), modern times would probably seem rather utopian. This obviously cannot be fully generalized — there are people who are far worse off than they were 100 years ago, but the trend is towards better as far as I can tell. The thing about utopias is that they stop being utopias as soon as you start living in them; the human capacity for nitpicking is staggering (and, of course, staggeringly useful).

    There is also the rear-view-mirror effect. As one grows older, one tends to live more in the past than in the future (or rather, one’s sense of the ‘now’, being based on models formed prior to ‘now’, slowly increases its lag time as one tries to process new phenomena), which adds to the whole future-shock dealie. If you are relatively unmoored, you can tramp forward gracefully into the future through ignorance or apathy (those frictional forces that keep us concerned about that in which we have invested can be superseded by not knowing or caring about the currents that threaten them), but it’s a small caste that can afford to be so bohemian as to live in the present most of the time. Since rear-view-mirror living is influenced and guided by panic, it’s only natural that it manifest in the form of panic — and little is more dystopian than panic, even if it’s panic over something that is universally good (or universally better than what it replaces).

  24. The magic Singularity date is 2060, assuming Moore’s Law holds until then. This number is derived in a fairly direct manner:

    The Blue Brain project is modeling neurons with a degree of detail that most electrophysiologists are happy with. There’s a great many things they aren’t modeling yet, like homeostasis and neuromodulatory connections, but all of those are at least an order of magnitude slower than the diffeq calculations of membrane conductance and as such are computational small fry. The model has problems, but the scale of the model is acceptably close to what’s needed to simulate neural function, give or take an order of magnitude.

    IBM’s Blue Gene, on which Blue Brain runs, maxes out at 500 teraflops.

    At this speed, they can simulate 10k neurons with their connections in 1/100 real time.

    This works out to 5 teraflops per real-time neuron. This is closer to an upper bound, since they’re ignoring a number of potential shortcuts which might save a great deal of speed.

    At ten billion neurons per human brain, we’ll need 50 zettaflops to emulate an entire human brain in real time.

    At the current pace of Moore’s Law, that means supercomputers will be able to emulate the human brain around 2040.

    But that’s for supercomputers. People won’t really be affected until it hits their personal computers. That’s about a 10-year lag. So, 2050.

    Add 10 years for adoption. There will be problems – legal, moral, social – but the siren song of eternity is a strong one, even if it means having your brain flayed into tiny bits to be scanned and uploaded.
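    For what it’s worth, the back-of-envelope timeline above can be checked in a few lines. This is a minimal sketch of the same arithmetic; the flops figures come from the comment, but the 2010 baseline year and the ~14-month Top500-style doubling period are my own assumptions:

    ```python
    import math

    # Figures from the comment above:
    FLOPS_PER_NEURON = 5e12   # 5 teraflops per real-time neuron (Blue Brain estimate)
    NEURONS = 1e10            # ten billion neurons per human brain
    START_FLOPS = 5e14        # Blue Gene: ~500 teraflops

    # Assumed parameters (not from the comment):
    START_YEAR = 2010         # rough vintage of this thread
    DOUBLING_YEARS = 14 / 12  # Top500 performance doubling period, ~14 months

    target_flops = FLOPS_PER_NEURON * NEURONS  # 5e22 flops = 50 zettaflops
    doublings = math.log2(target_flops / START_FLOPS)

    supercomputer_year = START_YEAR + doublings * DOUBLING_YEARS
    personal_year = supercomputer_year + 10    # supercomputer-to-PC lag
    adoption_year = personal_year + 10         # legal/social adoption lag

    print(round(supercomputer_year), round(personal_year), round(adoption_year))
    # → 2041 2051 2061
    ```

    That lands close to the comment’s 2040/2050/2060; the exact years shift by a decade or so depending on the assumed doubling period.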

    That’s the biological side of the Singularity. We have the technology to destructively scan brains right now, but like Farnsworth reactors (hot fusion) it’s hard engineering to scale up to histological sizes, let alone entire brains, without anything going wrong in the process. That’s just a matter of investment, and aged billionaires will probably be willing to invest quite a lot when the time is right.

    Nondestructive scanning is more like cold fusion, or AI: theoretically possible, but no one has the slightest clue how to go about it aside from waving hands and screaming “NANITES” at each other.

    Anyway, this has gone on long enough. There’s your Singularity.

    If you want to start in on the messy problems we’ll see, here’s one: every post-Singularity universe dodges the issue of cloning, but that’s going to be a real possibility. Like copying music, it’s not going to be something easily regulated away, nor should it be. We’ll find incredible value in Sorcerer’s Apprentice scenarios, where a single expert makes numerous copies of themself to multiply their work. When do copies stop being data and start being human beings, with all the inalienable rights therein? If I might humor myself with a little prophecy, I predict that this will become the dominant form of human reproduction. Meatbags grow too slow to keep up with computer production. Digital families are another romantic alternative: some number of parents chip in for a blank brain, a tabula rasa, which they raise as their child in the customary manner. Dawww as that may be, it’s still expensive and slow compared to copying yourself and moving on.

    1. Beelzebuddy, you don’t need to emulate the brain to have AGI. Would we really want to emulate the brain anyway? Another development I expect before 2050-60 or whatever (Singularity probably way sooner) is quantum computing, which will punch a hole in Moore’s model.

      1. We need to emulate the brain because we know it works. AGI and quantum computing are a lot like cold fusion: interesting ideas, theoretically possible, but waiting on scientific breakthroughs which may never come. We know the human brain is capable of intelligence. Therefore, emulate that and you have AGI. If synthetic AGI or QC happen of their own accord along the way, great, that’ll speed the process up tremendously, but I’m not holding my breath. It’s precisely that reliance on imaginary technologies that gives Singularitards such a bad name.

        Furthermore, we should want to emulate the brain because it’s better than dying, even if you gotta die to do it.

        1. We need to emulate the brain because we know it works.

          If we’d taken the same approach to powered flight we would still be stuck on the ground with a bunch of flopping mechanical birds.

          As a matter of pure research, emulating the human brain in software is an interesting challenge that will likely teach us as much about our own brains as it does about artificial intelligence. But as a practical method for creating a thinking mind… not so much. If you could grow a real brain in a laboratory jar it wouldn’t magically develop intelligence and a personality unless you could somehow feed it the same sensory input that real humans are exposed to since our earliest days in the womb.

          1. If we’d taken the same approach to powered flight we would still be stuck on the ground with a bunch of flopping mechanical birds.

            If it hadn’t been for Bernoulli’s principle we WOULD still be stuck on the ground with a bunch of flopping mechanical birds. I don’t know the equivalent simplifying theory for intelligence. I DO know that smarter men than I have wasted their lives trying to find it. So rather than twiddle my thumbs telling myself that someday someone will have a stunning revelation, I bothered to work out how long it would take to brute-force. 2060. There you go.

            But as a practical method for creating a thinking mind… not so much. If you could grow a real brain in a laboratory jar it wouldn’t magically develop intelligence and a personality unless you could somehow feed it the same sensory input that real humans are exposed to since our earliest days in the womb.

            Who said anything about growing one from scratch? Take an adult brain, rip it apart, upload it. We can worry about appropriate virtual environments for “newborns” when the time comes.

            As an aside, it also sounds to me like you’re seriously underestimating how adaptive brains are.

  25. Science fiction has predicted the cell phone, the atomic bomb, telecommunication satellites in geosynchronous orbits, personal computers, laser weapons, etc.
    While “singularity” itself is a misnomer, the progress of machine-based “intelligence” is well on the way to surpassing general human cognitive capacity in the near future. Whatever happens, the new form of “life” will begin a transformation of life beyond anyone’s imagination.

    On a personal note, I think it is great to see your kids grow up to be better than their parents…

  26. Just recently read Paolo Bacigalupi’s ‘Pump Six’, and in practically every story I was thinking singularity gone awry…

  27. Who said anything about growing one from scratch? Take an adult brain, rip it apart, upload it.

    OK, so you’re not one of the people looking at “artificial brains” as a mechanism for creating emergent A.I. so much as a method of format-shifting an existing mind. Fair enough, though I don’t know how easy it will be to find volunteers willing to have their brains ripped apart while they’re still in use.

    But in that case a simulation of the brain is still just a tiny piece of the puzzle, since our brains are inextricably tied to our bodies and our bodies are in constant interaction with our environment. An almost undetectable shift in body chemistry can be enough to turn a mentally healthy adult into a functionally retarded psychotic, and that’s if you get every one of a million other factors (known or otherwise) just right.

  28. “There can be no understanding between the hand and the brain unless the heart acts as mediator.”
