The Singularity is Far: A Neuroscientist's View

99 Responses to “The Singularity is Far: A Neuroscientist's View”

  1. Stephan says:

    I am a physicist. I must say that from my experience, David is exactly right about the pace of science. Although our knowledge is always growing, our ability to make progress in science will always be limited by the capabilities of the human mind. Computers can help, but they are no more intelligent than we are — nothing magical happens when we run things on a faster machine.

    One could plausibly even argue that modern science is progressing at a sublinear pace, now that we have exhausted the “low hanging fruit.” Consider the pace of research in medicine these days, for example. Even as our knowledge of human biology has increased vastly, development of new drugs has not kept pace.

    Our ability to manipulate and observe the nanoscale is so limited that it’s remarkable that we know anything about it at all. It is absurd to speculate about nanomachines controlling our thoughts in the foreseeable future.

  2. bill draper says:

    Maybe the progress ‘curve’ is neither entirely exponential nor stubbornly linear, but rather is more step-like in its progression. A breakthrough in fundamental understanding or engineering fuels a period of rapid advancement followed by a longer period of implementing what we learned.

  3. W. James Au says:

    My issue with Kurzweil’s prediction: he says we’re going to upload our consciousness into “a virtual world like Second Life”, and that this is only decades away. Big problem: Second Life is not very popular, and hasn’t grown, even though the graphics have gotten a lot better. Kurzweil says the exact opposite:

    http://bit.ly/ijuJwO

    I’m not qualified to speak on his neurology, but I can definitely say his predictions are totally off-base about the kind of space he thinks the singularity is taking us into.

  4. Gordon Stark says:

    Human intelligence does not reside within the brain, which is just an interface with living intelligence, which is why Artificial Intelligence went nowhere…

    Intelligence is sentient apart from the lifeforms it operates through, which are interfaced with it via the brain. The brain is like a computer without any software, while intelligence (the software of life) is channeled by the brain, which internalizes it and encapsulates it within a shell called “self”, which we judge by appearances and imagine to be within any given living being’s body.

    You are not actually within your body; you are merely interfaced with it, and it is a peripheral you perceive yourself to be within, until it ceases operating.

    These professions are backed by hard science, and are not related to the inspiring dreaming of Ray Kurzweil, which is well intended but based upon argued notions.

    Interestingly, a nanorobot can be interfaced with the eternal living intelligence of your sentient person, so that it becomes your body or life-form peripheral, but that’s another subject, and your present body would need to be deactivated first.

  5. Stumpadoodle says:

    http://www.wisegeek.com/what-is-brainbow.htm

    Let’s not forget Brainbow.

  6. Anonymous says:

    The best definition I have for the singularity is the point at which any computer can read this page and understand what is said. A computer that can do that will also have access to the internet, a virtually unlimited and perfectly accurate memory, and the ability therefore to explore all of human knowledge at virtually the speed of light. It can accumulate all the knowledge on the web about any given subject. That can be both expertise and data.

    This is an arbitrary tipping point. It’s at most a decade away, probably less. Once it happens there will not be any going back, and we will no longer be the smartest creatures on the planet.

    The nanobots, even if Kurzweil weren’t just being used as an argumentative strawman here, wouldn’t help anyway.

  7. Anonymous says:

    How do we reboot?

  8. Anonymous says:

    Linden’s repeated objection re Kurzweil’s timetable is that biological insight progresses linearly. To be convincing, Linden needs to support that objection with (1) a metric for biological progress and (2) a demonstration that, on that metric, biological progress has indeed been linear over the relevant time span.

    I don’t see that here. We mostly get anecdotes. And some things that seem to pull against Linden’s view, like: “There have been a number of genuine paradigm-shifting insights in genetics in recent years.”

    That sounds like blazingly fast progress to me!

    To say more we’d need to split the time period of modern biological research into whatever chunk of time Linden takes “recent years” to be. Then we ask for each such previous chunk: how many genuine paradigm-shifting insights are located here? If the numbers are ramping up then that is evidence against Linden’s linear thesis.

    I don’t have the biological expertise to carry out that investigation. But I know enough about arguments to see that Linden needs to do that to have a cogent positive argument (rather than one for a weaker, agnostic-ish objection that Kurzweil’s thesis lacks positive support).
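
    Purely as an illustration, the test could look something like this; the decade boundaries and insight tallies below are made-up placeholders, not real data:

    ```python
    # Hypothetical sketch: tally "paradigm-shifting insights" per decade and
    # compare how well a linear vs. an exponential trend fits the counts.
    # All numbers here are invented placeholders.
    import numpy as np

    decades = np.array([1960, 1970, 1980, 1990, 2000, 2010])
    insights = np.array([2, 3, 3, 4, 5, 5])  # hypothetical tallies per decade

    # Linear model: insights ~ a*t + b
    lin_fit = np.polyfit(decades, insights, 1)
    lin_sse = np.sum((insights - np.polyval(lin_fit, decades)) ** 2)

    # Exponential model: log(insights) ~ a*t + b
    exp_fit = np.polyfit(decades, np.log(insights), 1)
    exp_sse = np.sum((insights - np.exp(np.polyval(exp_fit, decades))) ** 2)

    print("linear fit SSE:     ", lin_sse)
    print("exponential fit SSE:", exp_sse)
    ```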

  9. Powell says:

    This kinda creeps me out, not sure why. Wouldn’t this uploaded persona just be a simulacrum of one’s self? I don’t think I trust science enough now to entrust the “essence” of what I am to this process. I would think it would be prudent to find out what defines self in real terms before trying to copy it to a computer.
    Or maybe it is just that “simple”, that we “are” just what is inferred by Kurzweil, and by copying this info about our brain state you copy me. I can just imagine myself becoming aware as my new digital self, then realizing I am being uploaded to some Pirate Bay of the future. Some kid then could download me and put me into some Angry Birds construct as the green pig equivalent, only to be tortured ad infinitum with exploding birds. Do me a favor and just drag me to the recycle bin.

  10. Antinous / Moderator says:

    The singularity has already taken place. The assumption that turned out to be incorrect is that the singularity would be an event primarily centered around human consciousness.

    It turns out that individual human lifespans would start to decrease, but our species would find immortality in the form of the Limited Liability Corporation.

  11. Anonymous says:

    To transfer one’s consciousness to a computer would require a slow process of using nanobots to copy and replace tiny parts of the brain. Over time the entire brain would be replaced and the individual would be conscious of the transformation. If it’s not done this way there would be no proof for the individual that the electronic person they have become is the actual person rather than just a copy.

    For the migration to be successful, the various elements that make up our brains would need to be emulated or virtualised by the nanobots in order to give the complete human experience. Some people, however, will wish to eliminate these biochemical reactions to make themselves more logical beings.

  12. gwailo_joe says:

    The Singularity=The Rapture

    An epoch-changing consciousness-raising event where ‘the world as we know it’ will be forever altered for the betterment of the lucky chosen few.

    (you don’t imagine the lower classes will get free nanobots, do you?)

    And even if it does all pan out in some amazing future, the people living through it will mostly be so-so: “yeah, viewing distant galaxies with a blink of an eye and never getting sick is pretty cool, but mostly I use The Singularity to telecommute and flirt with my ex”

    • bmcraec says:

      Isn’t it interesting how “the Lower Classes” are always the class just below the speaker? No offence to you, gwailo_joe. It is, after all, the key to lotteries, and of course, the American Dream.

      “I can’t be overdrawn. I still have cheques left!”

      • gwailo_joe says:

        Should I have said ‘the brave downtrodden proletariat’? I just meant ‘poor’. But lower classes has more of a ring to it, besides…I ain’t so high class myself. No nanobots for me :(

        • Ugly Canuck says:

          Stay classy, Gwailo joe.

          IMHO, you should have gone all ‘East Coast boojwah’ in yer first comment, and referred to “les bas-fonds” instead of to “the lower classes”.

          But, c’est la même chose, peut-être.

          • gwailo_joe says:

            Thanks U.C.! I had to look it up, but ‘Lower Depths’ works….(and thanks to our friend the Internet, I ordered the movie :)

            And just as a general response: nanobots? For Reals? Human beings are rather smart, and our ability to create (and destroy) is unrivaled throughout the galaxy! But a robotic mind control brain matrix?? You can’t be. . .serious?

            What happens when the little buggers get stuck in there? MS blue screen is annoying, but if that was your entire consciousness…you’d be awful mad.

            Magnets??? Go to Ross to buy some socks, the lady removes the theft protector, you swipe your credit card Annnnd Brain. Erased.

            As far as I know with all our technological prowess, science has yet to create a robotic dragonfly that can maneuver like the real thing…show me that ‘simple’ thing first.

            If you want to shove a turkey baster filled with sparkly techno bits in your ear and squeeze, be my guest…but if it gives you a migraine don’t come cryin’ to me!

  13. Anonymous says:

    Good post.

    Somewhat related: I’m sure some BoingBoing readers have already seen it, but Charlie Stross had a recent post titled Three arguments against the singularity. I thought it was a good read too.

    • Jesse M. says:

      The thing is, Stross doesn’t really make much of an argument against the possibility of mind uploading (which I take to be the main subject of this boingboing post); he just says that if anyone tries it, it’ll cause a giant religious war, which to me isn’t very convincing and seems a little too much like an atheist engaging in lazy caricature of the religious (and I’m an agnostic who thinks all claims of “revealed” religion are silly). I think if mind uploading works, the religious will just invent some new excuse to show supernatural souls were never an essential part of their theology. I’ve actually talked to one Christian who said the “soul” is just the “form” of the body rather than something wholly nonphysical, which apparently was Aristotle’s view, and something similar was held by the Catholic philosopher (and saint) Aquinas. Perhaps Muslims can find a similar excuse to say an upload could still have a “soul”, and Jews don’t really have a developed belief in a soul AFAIK. As for some of the biggest non Judeo-Christian religions: Buddhists already view people as just a combination of interacting elements with no single “self” in control, and Hindus do have a belief in a soul-like “atman”, but I don’t think it corresponds too exactly with Western mind/body duality; given the many different metaphysical positions in Hindu philosophy, I suspect they too would be able to find some reason not to see intelligent mind uploads as conflicting with their main beliefs.

      • Ugly Canuck says:

        Yeah, funny how people have thought differently about that.

        IIRC, one ancient Chinese belief system asserted that each single human was actually a combination of some 24 different “souls” (out of a population of how many hundreds or thousands of such, I know not), which came together at birth and dispersed with that human’s death…I can no longer recall how many of these “souls” came from the earth, and how many from the sky, but apparently how many came from each realm (and the number and type of such realms – there may have been more than two, iirc – memory is weak as to details) was also claimed as known.

        Such concepts were used in ancient times to help understand disease, i.e., a specific ailment would be “diagnosed” as constituent-soul x arguing with constituent-soul y, etc., etc…and with so many constituent-souls making up each person, the possible ailments their imbalances could cause would be many!
        That is to say, any doctor using such a “science” would have an infinite variety of “explanations” ready to hand to explain to the patient why she was sick, in the particular way she was sick.

        I seem to remember reading that account from a volume concerning the pre-history of medical science – a history of proto-science, so to speak – but its title and author are quite lost to my memory now.

        Anyway, this “we have many souls, sometimes they fight” type of clap-trap and hoo-doo, or similar, was typical of all medicine (more or less) in describing the operation and mechanism of any particular disease, prior to our surprisingly young age of modern science. Prior, that is to say, to our healthier age of empiricism and chemistry.

        • jjsaul says:

          I seem to recall reading that the liquid coolant in one of the 1980s Crays was interestingly close to artificial plasma.

          On the many souls thing, Marvin Minsky’s theory of a “society of mind” has proved useful enough to have filtered beyond the AI world to inform new approaches in cognitive behavioral psychotherapy.

  14. Anonymous says:

    While I broadly agree with David Linden’s point that understanding progresses maddeningly slowly with respect to our ability to gather data, his assertion that understanding progresses linearly is kind of nonsense. How do we measure understanding? What exactly is a unit of understanding? For all we know understanding is also on an exponential curve, just a much shallower one than the data-gathering curve.

    The rapid proliferation of organised research across the C20th has certainly accelerated the rate at which we’ve come to understand all sorts of things. I agree that it’s almost certainly the case that the amount we understand is not linearly related to the rate we gather data. But what we come to understand is likely directly related to the sheer number of people engaged in researching a given topic.

  15. magneticwheels says:

    Is anyone else here disturbed/depressed by the entire goal here? Immortality seems a narcissistic, cowardly goal. No single human personality, my own included, is worth keeping around for centuries and centuries. And while it’s a bummer to grow old, feeble, sick and die, there may be no possibility of wisdom without it. Instead of wisdom and sympathy and mercy I feel we would end up with cunning, carelessness and cruelty. Why care about anyone or anything if it’s all just invulnerable, indestructible, perfectly computer-created egos existing in a virtual reality that we need to care nothing about?

    Kurzweil seems like just another self-centered baby boomer to me. Like many men of his generation he had a remote and uninvolved father figure. He seems emotionally unable to deal with his father’s death, or any death or loss. He needs a therapist, not an upload.

    What if all these bloated rich egos our culture is saddled with never went away? Imagine an immortal Murdoch. Yikes!

  16. Anonymous says:

    I think David Linden is underestimating how quickly humans will be displaced altogether from scientific research.

    Take for example a current problem in genetics – working out what all those chromosomes and genes do – and how to implement designer changes.

    Currently this takes years for a single gene – you have to isolate the gene, work out how to switch it on and off, find out what protein(s) it facilitates, and try to work out what the proteins actually do.

    A sufficiently powerful computer changes all this. With a powerful computer, knowing how things work becomes less important than knowing what you want.

    The computer starts with what we know – a list of chromosomes and genes and observed effects. It also needs state-of-the-art knowledge of physical chemistry.

    The computer also knows what we want.

    The computer then starts randomly applying changes, literally evolving its way towards a solution, then testing each variation in a physical chemistry simulation able to simulate entire organisms, to determine whether it helps approach the goal – whether to keep the simulated change, or reject it.

    Note this is a very simplistic description of such a simulation – you can apply all sorts of strategies, such as speciated evolution, to reduce the risk of the system prematurely discarding changes which might be crucial to the goal, even though their benefit is not immediately obvious.

    When the goal is achieved, the computer provides a recipe for the required new organism, or even controls the change itself.

    This hideously inefficient approach requires a stupendous, almost unimaginable level of computing power. But this nearly incomprehensible level of computing power is something we will see in our lifetimes, because of the exponential rise of computers.
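
    To make that loop concrete, here is a toy version of the “randomly apply changes, keep what helps” idea. The bit-string genome and the similarity-to-target fitness function are placeholders standing in for the whole-organism physical chemistry simulation described above:

    ```python
    # Toy evolutionary loop: mutate a genome at random, keep mutations that
    # improve (or don't hurt) fitness. Everything here is a stand-in.
    import random

    GENOME_LEN = 40
    TARGET = [random.randint(0, 1) for _ in range(GENOME_LEN)]  # stand-in for "what we want"

    def fitness(genome):
        # In the scenario above this would be a full physical-chemistry simulation;
        # here it is just similarity to an arbitrary target pattern.
        return sum(g == t for g, t in zip(genome, TARGET))

    genome = [random.randint(0, 1) for _ in range(GENOME_LEN)]
    score = fitness(genome)

    for step in range(10000):
        candidate = genome[:]
        candidate[random.randrange(GENOME_LEN)] ^= 1      # random mutation
        cand_score = fitness(candidate)
        if cand_score >= score:                           # keep changes that help (or are neutral)
            genome, score = candidate, cand_score
        if score == GENOME_LEN:
            print(f"goal reached after {step} mutations")
            break
    ```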

  17. Anonymous says:

    I see where this guy’s coming from.. Even if I’m more willing to give Kurzweil the benefit of the doubt, I too have found it hard to picture exponential progress in certain fields. But I think he isn’t considering a factor which will arise soon.. and change that.

    Here’s his disagreement with Kurzweil:

    “I contend that our understanding of biological processes remains on a stubbornly linear trajectory. In my view the central problem here is that Kurzweil is conflating biological data collection with biological insight.”

    I think it’s wrong to assume that factor will STAY linear.. Even if our “insight” into that data has been linear thus far, it will soon start to pick up, and with it, our insight/understanding of MANY other issues too.

    I assume he thinks that insight is a uniquely human feature, and that humans are the only things capable of figuring out and understanding the usable information in that data. If that were true, then our insight may very well stay linear, and it certainly would take us a long time to make sense of and fully understand such huge and complex data that way.

    But we humans aren’t necessarily the only method for gaining such insight into data. Or at least, we won’t be for much longer.

    I’m sure this is what Kurzweil is getting at, as I’ve heard him explain this in various ways:

    Time after time, over the last few decades, computers have shown us that they can rise to the challenge of matching our various “human” abilities. Ray explains this better, but the examples span from mathematics and logic, to winning chess, and even pattern recognition.. Each time, people assumed computers couldn’t do it as well as humans, if at all, because those features are/were “uniquely human”. But each time, eventually, the computers match us in those skills, and then pass us, like a jet passing a snail.

    I see no reason things will be any different when it comes to our abilities of “insight and understanding”. And there are already some promising examples of how quickly computers are catching up to us, even on these. IBM’s Watson is a good example. Watson did something no computer before it could: it can take abstract sentences with misleading hints, word-plays, and riddles, and figure out the one meaning behind them. And it does that using a vast database of general knowledge, which it has to make sense of all on its own! Isn’t this exactly what David in this article is referring to? Taking huge data sets, and getting the desired useful meanings and insights out?

    Watson is good at figuring out riddles, even better at figuring out straightforward questions, and now he’s currently being fed an even larger data set: medical knowledge! Meaning soon Watson will be able to help solve medical issues. And armed with almost all the medical knowledge and data out there, he’ll be able to take many more elements and factors into account. And when asked to solve medical problems and give diagnoses, he may very well be better at it than any human doctor, because no human can consider THAT many factors and still make usable sense of it.

    And Watson is only the beginning! His descendants will get better and better, leaving even him in the dust, probably in less than a decade, since this will follow the exponential curve of progress which we see with computers and processing technology.

    So, I’d guess that by the mid-2020s, since processing power by then will make computers like Watson even more powerful and common, with broader abilities and areas of understanding, we’ll be putting those computers to use making sense of all that data, and boy will there be a lot more of it by then too. So I think we’ll see computers being fed tons of research data, most of it nothing more than simple observations: of how the brain works, of making technology on the nano-scale, of all the medical knowledge and science in existence, and so on! Even the data sets we think we already understand, these computers will be able to draw whole new conclusions from and links to and between! Data will be merged across disciplines by these computers. New connections will be discovered and understood, and insights we could never have hoped to figure out, let alone come up with in the first place, will change everything, rapidly.

    So does 2030 really seem like THAT huge of a stretch? Consider all the progress we’ve seen in the last few decades. Do I even need to describe that? These great futurist thinkers say “The Singularity is Near”; I say take a look back at our childhoods and compare technology and human knowledge from then to now. The Singularity is Here, and it’s already started. We’re climbing our way up that ever increasing slope of exponential progress, and its effects are felt across every last single aspect of human knowledge and progress. Even those which we currently believe are limited by our “uniquely” human abilities won’t remain untouched for long.

  18. sam1148 says:

    I like the bit in his book where he extrapolated how you could invest in the stock market, expect it to double at intervals, and end up with a billion dollars, funding your cyber retirement.

    Yeah, about that.

  19. Anonymous says:

    My first singularity moment:
    Back in the 70s, when the last drop of fresh water in New Zealand was contaminated with Giardia.
    Before this tipping point there was a day when most humans could drink fresh water without getting sick. Now all fresh water requires some form of technology to be made potable by humans. There are no “alternative” cures for Giardia; it requires medical attention.
    Hooray for the singularity. The point at which mankind required technology to obtain a basic element of sustenance.

    The next singularity: the point at which breathing air through our mouth/nose/lungs will become deadly and humans will require some form of air-cleaning technology for the mere act of breathing. This, I suspect, will not happen in my own lifetime.

  20. Brainspore says:

    I’ve noticed that most futurists (and doomsday prophets) seem to favor timetables which place the amazing world-changing events within their own lifetimes, especially when the prophets in question seem obsessed with their own mortality. Anyone know if there is a name for this kind of predictive bias?

    • chenille says:

      I like the idea of a bitter futurist who always places things just a little bit further. “I’m about 40 years old now, so I expect we will achieve human immortality in 2065.”

      • Brainspore says:

        I like the idea of a bitter futurist who always places things just a little bit further. “I’m about 40 years old now, so I expect we will achieve human immortality in 2065.”

        I like the idea of a bitter rapturist even more. “The worthy will ascend to heaven on such-and-such date later this year. Unfortunately sinners like you and me are pretty much screwed.”

    • Anonymous says:

      a “God Complex”

      • Brainspore says:

        We have a winner! I’m surprised it took until 2007 for someone to come up with a formal name for the phenomenon.

    • oheso says:

      Anyone know if there is a name for this kind of predictive bias?

      “Ego”

    • Anonymous says:

      narcissistic fantasy?

    • Ugly Canuck says:

      Conceit.

      More accurate than the suggested ‘ego’, for ewe all have an ego, but not all of us display the behaviour you wish to characterize.

      That word again: conceit.

      Used in a sentence:
      They are conceited enough to think that their generation shall be the one to witness the Apocalypse. (Or the Singularity, or the final triumph of the Proletariat, or the final triumph of the House of Bourbon, or whatever the specific prognostication they cleave to calls for.)

      • bmcraec says:

        Nice typo… ewe works even better than we.

        • Ugly Canuck says:

          If ewe say so…but I think it kinda makes the comment itself sound (if not read) a little too conceited!

          Computers and bots need heat-sinks, but living brains don’t. Brains, unlike computers, do not USE energy: rather, brains themselves ARE energy – or to be clear, a bio-energetic process – what heat they generate (are) keeps them alive, and they don’t waste it by generating (being) more than they need (are), for what they need to do (be).

          You are what you heat, when it comes to your brain.

          Like pretty well every system which is both biological, and living.

          ( I’m not that interested in dead brains. Zombie Canuck might be, though.)

          • Jesse M. says:

            “what heat they generate (are) keeps them alive, and they don’t waste it by generating (being) more than they need (are), for what they need to do (be).”

            Maintaining a constant temperature is necessary for warm-blooded animals, but the heat is still “waste heat” in the sense that it’s not a form of energy that can be used to do work. In order to keep various essential molecules like DNA from breaking down due to entropy, you need a source of energy other than heat, like the potential energy in the bonds of the ATP molecule (which is itself created from the sugars and lipids you get from food), which can be used to create other molecules in various chemical reactions (this page has more detail on energy, entropy and biological reactions). The rate of a given chemical reaction does depend on the temperature, though; that’s why if your body gets too hot or too cold you’re in trouble.

          • Ugly Canuck says:

            Yes… I guess that some “waste heat” must be getting radiated off…I had been thinking of “heat sinks”, as mentioned up-thread; it seemed to me then that brains do fine without them, but it does occur to me now, upon reading your remark and further reflection, that the circulation of blood, of interstitial fluid, and of air around the head, would serve an analogous heat transport/dissipation function – that is, analogous to a structural “heat sink”.

            Do any organisms actually have biologically analogous structures to the solid metal “heat sinks” we find in computers and other electronics? Or is waste heat dealt with by organisms only by radiation into the environment, either directly or by circulation?

            Something for me to look into.

          • Anonymous says:

            You’re kidding, right? Neurons use energy in the form of ATP. And of course our brain has heat sinks. Both the cerebrospinal fluid in the ventricles and the blood vessels are used to dissipate excess heat. They do a pretty good job of it, so no matter how hard you think you won’t overheat your brain. However, these heat sinks are very important when you have a fever or when it’s hot outside.

    • Anonymous says:

      Astronomers point out that the odds of anything of cosmic significance (e.g. a nearby supernova) happening within your lifetime are very low.

  21. tozeuma says:

    Aren’t all accelerating functions nearly linear until they go all exponential on you? We are probably just grinding out the flat part of the curve now. Next week? Asymptote!

  22. Anonymous says:

    The enthusiastic optimists visualise the geometry of the singularity as some sort of exciting ride, and technology as a whizbang little vehicle, a bobsled perhaps, or the luge for the really daring. Away we go, the ride gets faster and faster, we approach the singularity which looks very steep, steeper than anything we’ve seen before, and Whoosh! up the curve we go. I’d just like to suggest that the visualisation might be more accurate if the technology was seen as a freight train approaching a vertical cliff.

  23. Anonymous says:

    I like Ray Kurzweil’s outlook, but the author’s critique is spot on IMO.

    My example is slightly off topic, but it’s my field so bear with me for a moment if you will.

    The advent of detailed information about the human genome has been touted to have immense promise for the rational design of highly effective new drugs. There have been some striking new discoveries.

    How are 99% of new drugs coming to market today being found? Much like they were 30-40 years ago (and more), by testing lots of small molecules against cell cultures and looking for favorable results. Most of the “progress” in drug development has come from automating the screening processes. The brute force approach is far easier to get results from than doing the fantastically complex job of understanding in detail how animal biochemistry works.

    If it was possible to automate innovation and understanding we’d be at the “singularity” already. More and better computers and nanobots only get you so far. Someday it will be a post-human world, but since it’s going to take humans to get us there, I’m with the author in believing it’s going to take a bit longer than our lifetimes to get there.

    Walt

  24. BWJones says:

    This neuroscientist also knows that “the singularity” Kurzweil is all about is nowhere near reality. The new field of connectomics is revealing neural systems to be far more complex than previously believed, with current circuit models grossly underpredicting reality. Our work in the neural retina has revealed a complexity that changes current models of neural connectivity dramatically: http://prometheus.med.utah.edu/~bwjones/2009/04/a-computational-framework-for-ultrastructural-mapping-ofneural-circuitry/

    Application to a cortex-wide understanding of neural circuitry is *far* off, not to mention the ability to upload one’s mind to a silicon representation.

  25. JeggStone says:

    What if when the nanobots entered our brains, they found more nanobots?…..

  26. jphilby says:

    This process would capture a person’s entire personality, memory, skills and history.

    Yeah right. Nobody even knows what those words really mean yet, let alone how to “capture” all that “data” and “transfer” it with fidelity to whatever miraculous non-existent tech will borg it.

    In the 30s, 40s and 50s many experts said we’d never go to the moon. They were wrong. They didn’t grasp the facts. I remain unconvinced that Ray is trying to grasp anything but his optimism stick.

    • Jesse M. says:

      Yeah right. Nobody even knows what those words really mean yet, let alone how to “capture” all that “data” and “transfer” it with fidelity to whatever miraculous non-existent tech will borg it.

      Modern science is based on the assumption of reductionism though–that the behavior of any complex system is ultimately a product of the interaction between its parts (for example, you can simulate the behavior of a large collection of water molecules using only the basic laws of quantum physics that govern the interactions between the charged particles that make up each atom, see here). So, if you can accurately simulate how each neuron interacts with its neighbors (and map the arrangement of them all), this should provide an accurate simulation of the brain as a whole, even if you have no understanding of how the neurons produce things like “personality” on a large scale. Of course, “if you can accurately simulate how each neuron interacts with its neighbors” might be a lot harder than Kurzweil makes out, as this article makes clear…there’s a whole lot of different chemical signals that influence the way neurons respond, we’d need to understand their combined effects on a neuron in great detail.

  27. Anonymous says:

    The article says

    “…brain-uploading to computers in the year 2039.
    That’s where I get off the bus.
    …Kurzweil is conflating biological data collection with biological insight.”

    However, I’m not sure how much biological insight is needed to do a straight neuron-by-neuron simulation. If you check out the Blue Brain project
    (eg http://boingboing.net/2011/05/11/blue-brain-project-b.html )
    they can already do a reasonable simulation of 10,000 neurons, so doing 100 trillion is mostly just scaling up.
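
    For what it’s worth, here is roughly what a straight neuron-by-neuron simulation looks like at toy scale: a small network of leaky integrate-and-fire units with random sparse connections. The parameters, network size, and neuron model are arbitrary simplifications; real efforts like Blue Brain use far more detailed models.

    ```python
    # Minimal sketch of a neuron-by-neuron simulation: 1,000 leaky
    # integrate-and-fire neurons with random sparse synapses.
    # All parameters here are arbitrary illustrative choices.
    import numpy as np

    N, STEPS, DT = 1000, 1000, 1e-3           # neurons, timesteps, step size (s)
    TAU, V_TH, V_RESET = 0.02, 1.0, 0.0       # membrane time constant, threshold, reset
    rng = np.random.default_rng(0)

    weights = rng.normal(0, 0.1, (N, N)) * (rng.random((N, N)) < 0.05)  # ~5% connectivity
    v = rng.random(N)                         # initial membrane potentials
    drive = 1.2                               # constant external input

    total_spikes = 0
    for _ in range(STEPS):
        spikes = v >= V_TH                    # which neurons fire this step
        total_spikes += spikes.sum()
        v[spikes] = V_RESET
        synaptic_input = weights @ spikes     # input from presynaptic spikes
        v += (DT / TAU) * (-v + drive + synaptic_input)

    print("mean firing rate (Hz):", total_spikes / (N * STEPS * DT))
    ```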

  28. RHK says:

    I, for one, welcome our new nanobot overlords.
    Nanobots manipulating my brain, what can possibly go wrong?

  29. JeffinMontreal says:

    Narrow AI will increase exponentially and bootstrap itself via evolutionary algorithms. Some other way of interfacing with our brain will work if nanobots don’t. Also: alien superheroes will intervene.

  30. Anonymous says:

    “Mind uploading” can’t be the transfer of consciousness strong-AI folks think it will be. If you can “load” a consciousness into another system, then you can copy that same consciousness.

    Now, here’s the thought experiment: if you load consciousness into two systems, which one is “you”? The answer: neither. You’re essentially copying a person’s behavior and memory, and killing the original. There’s no continuation of the person — no immortality — that people like Kurzweil desire so wholeheartedly.

    I think there’s a lot more going on in consciousness than people realize, and the continuity issue in consciousness is one people haven’t addressed *at all*.

    • Anonymous says:

      Actually, he did address the continuity issue when looking from the human enhancement angle. He said that you’d have to slowly integrate the tech into your body so that at the end, you would be fully machine with a human mind. He said it was no different than shedding dead cells every day and having an entirely new body every half a year or year or whatever the cycle is. Point being, you now is a complete replacement of you a while ago. Gradual replacement keeps the continuity just like your body does.

      As for the copying angle: it IS a copy and Kurzweil said it would be. “Permutation City” is a good book discussing the ramifications of copying your consciousness. The antidote to this problem is the human enhancement I noted previously: becoming more machine bit-by-bit so that your consciousness is safe.

      I’m not necessarily an advocate, but it makes sense.

    • Anonymous says:

      It’s easy to address – there is none. You disappear every time you lose consciousness, replaced by a future copy that is almost but not quite the same. Why an artificial discontinuity must be any more of a death than this natural one is anyone’s guess.

  31. blissfulight says:

    Living forever…that’s been tried before.

  32. oheso says:

    This seems reasonable to me — a complete layman on the subject. And while I agree that we’re probably the sum (or perhaps epiphenomenon) of entirely mechanical processes, that doesn’t mean that simply gathering data about the process is going to reveal it for us. I would be surprised if understanding the nature of thought, soul, emotion etc. doesn’t at some point require a fundamental revelation such as the emergence of quantum mechanics was in physics. And that process was the result of a bunch of extremely intelligent and creative people attacking the problems in fundamentally new ways.

    I’m sure we have the talent pool available, but that alone doesn’t guarantee the eureka moments.

  33. Anonymous says:

    Ray makes points which are all reasonable and make sense, and yes, the timescale is off – and I think he knows this. But the thing is, there is a lot more that’s off. He paints an idealistic picture, a future of “happily ever after”, and given his background (given anyone’s background for that matter) who can blame him (or anyone) for painting a future that is both fundamentally humane and not in opposition to our understanding of basic scientific insights? I think we may ‘organically’ drift into mind uploading somewhere by the end of this century, and there should be no objection that by 2100 most of humanity can afford to be biologically healthy, immortal, sane and attractive. If they’d want to.

    But stating that this is an ugly universe feels so much like going up to this nice, positive man and saying, ‘hey dude, didn’t you know (the) santa(larity) doesn’t exist?’. If I did, he’d give me that contemptuous look where I know he knows, and he knows I know, that he is not going to make it, and he’d look over his shoulder at The Children.

    It is all for The Children. Ray is the easily consumed, vaguely understandable, well-marketed, exceptionally geek-and-nerd ideal future outcome “that may actually be”. It is a religion based on a positive, get-up-and-go, American-engineering approach to the world and a thoroughly naive understanding of reality, good and evil, the passage of time, entropy, despair, and the wickedness of the human soul.

    And yes Ray knows this. Listen to some SU and SIAI and Humanity+ articles and statements and cautious publications. They also know ‘we are all probably not going to make it’.

    So best regard Ray’s vision as an idealized blueprint of The World Of Tomorrow, kinda Jetsons 3.0, that will never be, but that it will still make sense to aim for. It’s a roadmap, and yes, like always, following it we’ll get lost.

    That’s the way the universe tends to disagree with these visions.

  34. moo says:

    The singularity has already taken place. The assumption that turned out to be incorrect is that the singularity would be an event primarily centered around human consciousness.

    Human consciousness is a frustrating speed limit for systems evolution, and it was wrong of us to have expected that some momentous shift in the natural paradigm would abide such a limit.

    The singularity thus far seems to have involved an extra-consciousness hack of probabilistic phenomena — a kind of hyper-dimensional reprogramming.

    The phenomenal occurrence of revelation and discovery, for instance, has been revised, causing the rate of momentous new knowledge to increase dramatically.

  35. Gawain Lavers says:

    Even if we grant magical nanobots that can go anywhere in the brain and have their way with the neurons once they get there, there is still the issue of telling them where to go and what to do. I would think even trivial understanding of how (grossly simplified, digitally modelled) neural networks operate would deny this possibility.

  36. Anonymous says:

    Exponential Trends in Neuroscience

    “Over the last five decades, progress in neural recording techniques has allowed the number of simultaneously recorded neurons to double approximately every 7 years, mimicking Moore’s law. Such exponential growth motivates us to ask how data analysis techniques are affected by progressively larger numbers of recorded neurons.”

    http://www.nature.com/neuro/journal/v14/n2/abs/nn.2731.html
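
    Taking that doubling claim at face value, here is a back-of-envelope projection of how long it would take to record every neuron in a human brain simultaneously. The current count of simultaneously recorded neurons and the whole-brain neuron count below are rough assumptions, not figures from the paper:

    ```python
    # Back-of-envelope projection from the "doubling every 7 years" claim.
    # Starting count and whole-brain neuron count are rough assumptions.
    import math

    doubling_time_years = 7.0
    neurons_now = 300              # assumed simultaneously recorded neurons today
    neurons_in_brain = 8.6e10      # commonly cited estimate for the human brain

    doublings = math.log2(neurons_in_brain / neurons_now)
    print(f"doublings needed: {doublings:.1f}")
    print(f"years at that pace: {doublings * doubling_time_years:.0f}")
    ```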

  37. Jesse M. says:

    I don’t think mind uploading really requires nanobots; the more “realistic” proposal I’ve seen involves freezing a brain (using vitrification so crystals don’t form) and then using the method of serial sectioning to map out the microscopic structure, something that can already be done with microscopic tissue samples (though I think one would have to add other technologies to map things like the relative concentration of different neurotransmitter molecules at each synapse). But I agree with the rest of this article: scientists don’t know nearly enough about neurons to accurately simulate the interactions of even a very small collection (like those in the nervous system of a nematode, which only has 302 neurons), so there’s no particular reason to think we will gain this knowledge on Kurzweil’s very optimistic timetable. It always seems a little suspicious how singularity advocates always choose a date within their lifetime! I’m gonna go on record and say mind uploading will probably happen eventually, but probably after I’m dead (I’m 34).

  38. Anonymous says:

    Every day we put things into our bodies that change the way we feel and think. If you have become too attached to the term “nanobot” then perhaps such terms as vitamins or minerals will help you relinquish your skepticism. When thinking about evolutionary leaps you must never doubt our physiological relationship with our surroundings – they often side with “the path of least resistance.”

  39. ThinkBanQ says:

    I really, sincerely, appreciated reading through these comments. So many of them intelligent, insightful and nuanced, and so few uselessly rude. I thirst for any knowledge that helps me make sense of this rapidly changing/unfolding world. Kurzweil is one of many sherpas guiding me along this path, and I tend not to be too judgmental of him, as there are clearly legions of you capable of critiquing him constructively.

    Having vicariously absorbed the blows and jabs, occasionally catching a second wind from the rare complimentary posts, I’m left with this thought: until a singularity occurs or reveals itself unto him, Ray Kurzweil is just a man in process; a grown man confident enough in his innate understanding that he will forever push this envelope of skepticism…and thrive on the challenge (for those of you who think he lives in a vacuum).

    Or consider this: maybe RK is just the Cosmic Giggle manifest. Weren’t we all told at one time, “You’ve got a brain, why don’t you use it?” Don’t kid yourself, Ray knows from mortality – it ain’t all life and death. I think he’s just helping us use our grey matter so that when/if we get to that Other Singularity we might grasp the moment with a smile on our face, knowing we’ll be partying on.

  40. Anonymous says:

    I would agree with your assessment that our understanding of biological processes occurs on a stubbornly linear trajectory, which would throw off the timetable Ray endorses by a wide margin. However, the power of collaborative learning and the sharing of discoveries through the Internet and other virtual “think tank” environments is changing this paradigm. As more and more data is shared, redundant research and discoveries will occur less often, and this trend will accelerate learning.

    Younger researchers adopt and integrate these new technologies by default, the technology itself accelerates, and AI brain-learning models are starting to emerge. This human-simulated AI learning model may be able to “augment” the learning cycle by rapidly sharing, sorting, and compiling data from hundreds of simultaneously running experiments.

  41. Anonymous says:

    What all the nanobot and ‘upload your brain’ scenarios seem to overlook is that the brain is not just electrical, and that the personality is not just the brain.

    A large part of the signalling in the brain is chemical. Not just between synapses (you could simulate that by electrically stimulating the receiving neuron); the released chemicals also influence other cells in the neighbourhood.

    Furthermore, chemicals released by glands in the rest of your body have a huge influence on your behaviour. For instance, your adrenaline level, and your sensitivity to it, set you apart from other people. The same goes for all those other hormones.

    Go talk to a transsexual if you want to hear how big the influence of hormones on your personality is. And that’s just the sex hormones; there are many more types! Picture your brain lying in a fluid which is full of hormones and other chemicals, some of them local, others everywhere.

    Will every nanobot be equipped with a small tank of hormones and neutralizing agents? The simulation of my consciousness would be bland without the hormonally triggered reactions to perceived threats. You think Feynman was just brains?

    • Brian Vaughan says:

      One of the best things I’ve read recently was The Ego Tunnel, by Thomas Metzinger, which is an interesting account of current interplay between philosophy and neuroscience. The central focus is how our consciousness is constructed — how we have a sense of an identity that is continuous through time. Much of Metzinger’s answer involves our awareness of our bodies, and how background physiological processes provide a sort of carrier signal, that keeps reminding us that we exist.

      The upshot, for this discussion anyway, is that we couldn’t just map the neural connections of our brain, get a read of the current electrical activity, and upload it. We’d need a thorough-going simulation of our physiological processes, just to start with.

      So, my sense is that not only is neuroscience unlikely to advance at an exponential rate — data != theory — but that the problem requires us to have a much more complete knowledge of our total physiology than Kurzweil is allowing for.

  42. jjsaul says:

    I think the appeal of steam-punk is primarily playing with the mental perspective of early adopters at the birth of the singularity age. I think that was when it first became clear that we were already on the accelerating curve.

    We’ve been falling upwards into the future at a faster and faster pace since then, and have mostly lost perspective on the wonderment and horror of our times.

    If you don’t believe in the general concept of the singularity, it’s because you missed it. Only the misperception that there’s a tipping point that would be obvious and unmistakable from inside the acceleration curve is getting in the way of a better grasp of the trend.

    • sam1148 says:

      I think the appeal of steam-punk is primarily playing with the mental perspective of early adopters at the birth of the singularity age. I think that was when it first became clear that we were already on the accelerating curve.

      The appeal of steam punk is that computers and technology evolved so quickly they never had a chance to go through a ‘craftsmanship design phase’ like the Deco and Arts and Crafts movements did, making elegant products with hand-crafted items.

    • Brainspore says:

      If you don’t believe in the general concept of the singularity, it’s because you missed it. Only the misperception that there’s a tipping point that would be obvious and unmistakable from inside the acceleration curve is getting in the way of a better grasp of the trend.

      I don’t put a lot of stock in the general concept because I don’t think there’s anything particularly “singular” about the acceleration curve. The criterion for when we hit the singularity is purely subjective, depending on what arbitrary milestone you decide to showcase. Was it the advent of complex language, which allowed unique thoughts to travel from one mind to another? The advent of writing, which allowed those thoughts to exist independently of a specific brain? Long distance communication? Mathematics? Etc, etc.

      Even the much-ballyhooed theoretical “brain/computer interface” won’t be a singular event. If that technology ever comes it will take the form of incremental improvements, not an all-or-nothing “my brain is in the computer now” day.

    • pauldavis says:

      We’ve been falling upwards into the future at a faster and faster pace since then, and have mostly lost perspective on the wonderment and horror of our times.

      This sort of hand-waving stuff shows up all the time. Unless you define the metric you’re using, I call BS. The transformation of people’s lives from, say, 1870 to 1940 (an arbitrary 70 year period) appears to me to be vastly more substantive in real, measurable terms (years lived, miles travelled, distinct items eaten, diseases curable, time spent on a given task, tasks possible and so on) than anything that has happened from, say, 1940 to 2010 (another arbitrary period of 70 years).

      Furthermore, I don’t think you really grasped the essence of Linden’s critique of Kurzweil’s ideas about the “singularity”. He’s not arguing about whether it could happen at some point in time. He’s contesting Kurzweil’s specific claim that our understanding of biology (specifically, neurobiology) has been increasing in a way that is somehow comparable to the exponential increase in our computing technology capabilities. Linden says that’s wrong, and because Kurzweil himself says that this explosion in our understanding of (neuro)biology is a critical part of the singularity, that means the singularity isn’t really on the timetable at present. Put a little differently, Kurzweil says that “A and B make the singularity inevitable” (as do you). Linden says “yes, but B isn’t the story on the ground”.

      • bmcraec says:

        Hand waving is exactly the right term. I was searching for something more weaselly, but hand-waving at least suggests neutrality, as opposed to a con-man’s spiel, designed to both bamboozle and persuade at the same time. Nobody has mentioned the rather large amounts of currency involved in the Singularity biz. There’s the University, and the conferences, and the merchandising, and the books, and the public speaking engagements…

        It’s all a biz, just like any other grand, sweeping idea, regardless of reality.

        And an afterthought… the Singularity doesn’t have to involve biology at all, does it? Except, maybe, as self-replicating batteries and raw material concentrators/assemblers…

      • gimmi says:

        “The transformation of people’s lives from, say, 1870 to 1940 (an arbitrary 70 year period) appears to me to be vastly more substantive in real, measurable terms (years lived, miles travelled, distinct items eaten, diseases curable, time spent on a given task, tasks possible and so on) than anything that has happened from, say, 1940 to 2010 (another arbitrary period of 70 years).”

        To me it doesn’t, at least not for all the measurable terms you chose. And you can also consider, for example, the following metrics: the time (and the cost) it takes to send a letter. Or to video-chat with somebody living on another continent. Let alone what you cannot compare because it did not exist at all in 1940, e.g. having a blood sample analyzed for PSA.

        Or the number of books, or encyclopedias, or movies you can carry in your pocket. A measure that would have looked even foolish only some decades ago (you would have needed a lorry).

        In ten years from now, we may measure the number of items you can build (print) at home, or something else we are not even thinking about now.

        (the above does not imply I believe in brain nanobots, however)

  43. Anonymous says:

    I submit that understanding in neurology hasn’t been linear, but rather geometric. This is testable in a simple way by anyone with training in the bio-sciences. Look at a basic text in the field published recently, one from the 1990s, one from the 1980s, etc.

    I suggest that it will be true that the works from the 1950s through the 1970s will be relatively close, but from then on things jumped with improvements in imaging. The 1990s saw further big jumps with a revolution in computer-assisted bio-chem and imaging that has transformed medicine and research. The current pace is such that a book published in 2005 is outdated by now, missing large portions of what we now know.

    Any field that uses computers in any capacity is seeing, now or soon, the same doubling in efficiency that computers are.

  44. dr.hypercube says:

    One quibble – I’m not sure Kurzweil requires that we understand the brain. At least early on (I haven’t read recent stuff) he was talking about brute-forcing it – build a quantitatively accurate model and turn it loose. Getting the chemical stuff right might be an issue (understatement?).

  45. Anonymous says:

    Be careful what you wish for. If Michelle Bachmann has her way the nanobots in your head will make you blind if you commit a certain venal act that is not even on her Big 10 list. And she’s just a puppet, the really scary ones are all Agent Smith…
    Thanks, I’ll pass. The D students in government directing the C students in management to choose what projects the A students in the lab get to work on, using the F students in the military to ensure their delusions get played out, are already too dangerous. Extinction is the singularity.

  46. Anonymous says:

    Futures in Biotech did an episode about the genome project, and one of the things covered was that just 4 years before an earthworm genome was sequenced, it wasn’t thought possible. Then in only a few years they’d sequenced the human genome.
    It only takes the appropriate breakthrough in a technology to initiate a sudden leap forward. Kurzweil may be overly optimistic, or he may be taking such leaps into account.

    • Anonymous says:

      He is taking such leaps into account and using very sophisticated statistical analysis models to make his predictions. That said, I think his predictions may be a few decades off. We will have neural nanobots someday, but I’d say closer to 2070.

      • Dewi Morgan says:

        I was thinking the exact same thing! Lessee… I was born in ’74; telomeres apparently run out after 120 years tops; so I reckon we’ll have the singularity by 2094AD at the very latest.

  47. subhan says:

    Your body is essentially the heat sink for your brain.

  48. Dewi Morgan says:

    Although technology is currently advancing exponentially, which has some advantageous knock-on effects for the non-technological problems we need to solve, many of those problems are exponentially hard to solve, resulting in only linear progress, while others are completely intractable.

    When all you have is a hammer, everything looks like a nail: but not everything can be easily whacked with the hammer of nano-tech woo. Poverty, caste systems, meat eating, disease, war, aggression, hunger, politics, human behavior… these problems can’t even easily be *described* through science.

    Most people wouldn’t even agree on which of these are problems and which are advantages.

    And even for those problems we CAN easily describe for computers: we still have no efficient solution to the traveling salesman problem, the halting problem is provably unsolvable, and there are dozens of others like them, no matter how powerful our computers get.
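
    As a rough illustration of that kind of hardness, here is how a brute-force traveling salesman search scales; the routes-per-second figure is an assumed round number, and real solvers do much better than brute force, but the blow-up is the point:

    ```python
    # Number of distinct routes a brute-force traveling-salesman search must
    # check, and roughly how long that takes at an assumed 1e9 routes/second.
    import math

    ROUTES_PER_SECOND = 1e9                          # assumed machine speed
    SECONDS_PER_YEAR = 3.15e7

    for n_cities in (10, 15, 20, 25):
        routes = math.factorial(n_cities - 1) // 2   # fixed start; direction doesn't matter
        years = routes / ROUTES_PER_SECOND / SECONDS_PER_YEAR
        print(f"{n_cities:2d} cities: {routes:.2e} routes, ~{years:.2e} years")
    ```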

  49. Ramez Naam says:

    While I believe Kurzweil’s predictions of the timeline for uploading are overly optimistic, and nanobots in particular are an unlikely path, the brain side of uploading does appear to be feasible in the not so distant future.

    Systems combining diamond knives, microfluidics, and electron scanning microscopes have been successful in scanning slices of brain tissue 20 nanometers thick with an x-y resolution of 5 nanometers. The resulting scans provide a great deal of detail, down to well below the individual synapse level.
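
    For a sense of scale, here is a rough data-volume estimate for scanning a whole brain at that resolution; the brain volume and bytes-per-voxel figures are assumptions, not measurements:

    ```python
    # Rough voxel count and raw data volume for imaging a whole human brain
    # at 5 nm x 5 nm x 20 nm resolution. Brain volume and bytes/voxel are assumed.
    brain_volume_m3 = 1.2e-3                  # ~1.2 liters
    voxel_m3 = 5e-9 * 5e-9 * 20e-9            # one 5 x 5 x 20 nm voxel
    voxels = brain_volume_m3 / voxel_m3
    bytes_per_voxel = 1                       # assumed 8-bit grayscale

    print(f"voxels:           {voxels:.1e}")
    print(f"raw data (bytes): {voxels * bytes_per_voxel:.1e}")  # on the order of zettabytes
    ```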

    Of course, the computing power doesn’t yet exist to implement a simulation of 100 billion neurons with 100 trillion synapses, but computing power is, after all, increasing exponentially.

    And, of course, our knowledge of how to model neurons and synapses is likely to be incomplete in some ways which we can’t yet anticipate.

    Nevertheless, current systems, if massively parallelized, do seem to provide enough resolution to capture the complete connectome of an individual, brain data that many neuroscientists would agree is sufficient to fairly accurately simulate an individual’s brain.

    The major caveat, even when we do have sufficient computing power and sufficiently good models of neurons and synapses, is that a slice-and-scan uploading system requires a dead brain (and even if it didn’t, it would render the person dead in the process). So anyone taking this path must either have a great deal of confidence in the outcome, or must have little or nothing to lose.

  50. Anonymous says:

    “one ancient Chinese belief system asserted that each single human was actually a combination of some 24 different “souls” (out of a population of how many hundreds or thousands of such, I know not), which came together at birth and dispersed with that human’s death…”.

    Though I am no expert, I think you are mixing up terms and using the word “Souls” to express your skeptical wisdom. The number 24 occurs in Samkhya philosophy and is a model of the human body as well as the Universe. As above so below, they say.

    In Buddhism, there are many “fundamental” numbers, 18 Dhātus being one of the important concepts.

  51. Lexicat says:

    David Linden: another vital point missing in the “let’s assume pervasive nanobots” scenarios is the issue of heat. Where is all the heat generated by the mechanical action of the grillions of nanobots going to dissipate? Biological systems, particularly nervous tissue, are exquisitely sensitive to variations in temperature. It’s hard to imagine those hordes of nanobots not cooking (or at least severely fever-ing) the brains they are supposedly generating real-time data about. (A rough estimate of the numbers follows after the replies below.)

    • IWood says:

      The great aesthetic irony of the nanobot-enhanced brain is that you have to have a whopping great heatsink surgically implanted in your skull, and can no longer wear a hat.

      • bmcraec says:

        But if designed right, you’ll never need to worry about rain or snow messing up your ’do. Precip should just vaporize before it hits the mousse/gel/styling product.
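
    As flagged above, here is a rough sense of the heat numbers, sketched in Python. Only the ~20 W figure for the brain's normal metabolic dissipation is a standard estimate; the nanobot count and per-bot power draw are editorial assumptions for illustration, and real figures could be orders of magnitude worse.

        # Rough heat budget for a brain full of nanobots.
        BRAIN_POWER_W = 20.0       # typical resting dissipation of a human brain
        BOT_COUNT = 1e12           # assumed: roughly ten nanobots per neuron
        POWER_PER_BOT_W = 1e-12    # assumed: 1 picowatt per bot (very optimistic)

        extra_heat_w = BOT_COUNT * POWER_PER_BOT_W
        print(f"Added heat: {extra_heat_w:.1f} W "
              f"({100 * extra_heat_w / BRAIN_POWER_W:.0f}% of the brain's own budget)")

        # At a less charitable 1 nanowatt per bot, the same swarm would dump
        # 1,000 W into the tissue, which no amount of hat removal will fix.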

  52. Anonymous says:

    No one ever seems concerned about the distributive justice question when we get talking about mind uploads. I’m pretty sure that means that only rich white guys with connections to Silicon Valley VCs need apply, right?

    But actually, I’ve always suspected that if Ray ever did manage to upload his consciousness to the cloud, it would take him about a nanosecond to become Skynet. Just a thought…

  53. icastico says:

    I would just like to say that bill draper has it right. Understanding is unlikely to follow a smooth curve. Punctuated, stepped progress is much more likely. It is a fundamental reason why prediction of future developments is so difficult. The “miracle” that Linden mentions can happen at any time. Kurzweil’s optimism may make it more likely that we are ready to take advantage of that miracle (Nassim Taleb style) when it happens, but any predictions about a timetable are mumbo-jumbo.

  54. Anonymous says:

    If ‘nanobots’ are feasible in our lifetime (whether as pseudo-intelligent hardware or more likely as narcotics that can be preprogrammed to some degree — or altered during ‘runtime’ — to affect the subject’s predispositions or perception of reality), the time to market for our lead product will decrease and R&D funds can be shifted to marketing. We have tried for years to successfully outfit sharks with freaking laser beams (for military, industrial, and residential purposes). We quickly saw that it was fraught with dangers that we did not experience in the development of the Cat Herder 2000 (patent still pending). At any rate, if we can outfit stray cats with laser pointers but have the subject *perceive* these are Great Hammerheads (Sphyrna mokarran) — the species defined by focus groups as ‘most scariest’ — with like some dual wicked lasers near its eye thingys…well, need I say more? And besides I’ve got like 50 howling cats outside my window every night.

  55. urbanspaceman says:

    As we read this, scientists are working on a device to read human thoughts. This device’s ostensible purpose is to help people crippled by strokes communicate with the outside world. Now imagine a device like this in the hands of a dictator’s secret police looking for political dissenters.

    Brainwashing would no longer be necessary with nanobots. Merely shoot your victim up with the right kind and he’s literally your zombie. And just think of the possibilities for wiping out alternative thought and social progress once and for all. Just dump a few zillion of these babies into a town’s water supply. Tyrants of the future could end up achieving a level of godlike social control which Stalin and Hitler could only dream of.

    Every new technology that comes along is inevitably used for evil at some point. It is imperative that we figure out how we’re going to face this evil and deal with it.

  56. Anonymous says:

    Kurzweil often seems overly optimistic to me as well, and then I read something like this article about a rat with a chip-augmented memory:
    http://www.popsci.com/technology/article/2011-06/artificial-memory-chip-rats-can-remember-and-forget-touch-button

    I also think about how in 1992 (less than 20 years ago) the internet was barely a glimmer and the idea of cellphones as a major computing tool was ludicrous.

    He could possibly be right about at least some of his predictions…

  57. Cryptocog says:

    Linden has substituted Kurzweil’s theory of “transhumanism” in place of the singularity. The two are related but not the same. The notion of the singularity can be condensed to this: the rate of information processing is increasing exponentially, and we are nearing the knee of the curve where the rate of increase approaches the hyperbolic (a toy comparison of the two growth modes follows this comment). At hyperbolic processing rates, the universe will inevitably be flooded with information and we’ll enter an information-based reality. Kurzweil predicts that humans will adopt a technology-assisted model and jump off the human evolution track in order to navigate the new reality. Becoming computer-assisted will provide such an advantage that those without it will not be able to participate in the life of the new reality. Transhumanism will soon follow. Or so the theory goes.

    Linden’s point about “understanding” versus “data collection” is well-taken but the processors will nevertheless create information at hyperbolic rates. The universe will indeed be flooded. On the other hand, we may finally solve the riddle of the reversible computer and be able to kick the singularity can down the road a ways.
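
    A quick toy illustration in Python of the exponential-versus-hyperbolic distinction referenced above (the constants are arbitrary and purely illustrative): an exponential curve is finite at every time, however steep it gets, while a hyperbolic curve blows up at a finite date T, which is the mathematical sense in which a "singularity" occurs.

        # Exponential growth stays finite at any fixed time; hyperbolic growth
        # diverges as t approaches the blow-up time T.
        import math

        K = 0.5     # arbitrary exponential growth rate
        T = 10.0    # arbitrary blow-up time for the hyperbolic curve

        def exponential(t):
            return math.exp(K * t)

        def hyperbolic(t):
            return 1.0 / (T - t)   # heads to infinity as t -> T

        for t in (0.0, 5.0, 9.0, 9.99, 9.9999):
            print(f"t={t:7.4f}   exp={exponential(t):10.2f}   hyp={hyperbolic(t):12.2f}")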

  58. Anonymous says:

    I noticed that everyone, including Dr. Linden, feels it necessary to pay homage to the positivistic, mechanistic view of the world. There are very good reasons to doubt such a world view and to hypothesize that we are not the sum of our mechanized parts.

    I’m a computer scientist who started out drinking the Kool-Aid that brains are computers, that eventually we’ll have real AI, and so on. As I dug into the graduate-level theory, however, I came to the conclusion that whatever the human mind does, it is nothing like the modern computer (which is restricted to incomplete axiomatized formal systems as its guide).

    Looking for meaning by scanning brains is like trying to understand Shakespeare by zooming in on the paper it’s written on, down to its molecules of paper, ink, and so on; no meaning will ever be found there.

    I am not saying that creating conscious, intelligent physical objects is impossible, but it will require something far, far more revolutionary than what a Turing machine can do. I have a hunch that our best hope is in harnessing the native consciousness in the universe, which is what I suspect our nervous systems do…

  59. JeffF says:

    It seems to me that the knowledge and capability necessary to interface with a living human brain in such a deep and integral way is basically equivalent to the knowledge and capability necessary to simulate an entire human mind in software. That makes me think that human-level (and beyond) AI, or uploading, would arrive at the same time as deep interfaces, and possibly earlier, because it isn’t as dependent on magical nanobots.

    A shallow interface (i.e., sensory input and fairly simple output) is a much simpler problem. I can totally buy that by 2020, though I’m not at all sure it will be very mainstream.
