Vernor Vinge predicts singularity by 2030

Destiny sez, "In a new interview, Vernor Vinge predicts the Singularity within 21 years. 'I'd personally be surprised if it hadn't happened by 2030,' he announces, saying humankind may become 'the only animal that has figured out how to outsource its cognition' to superintelligent machines. Since 1981 Vinge has been popularizing the idea of a massive technological shift which replaces 'the human era' with an advanced humanity augmented by artificial intelligence. 'It is very unsettling to realize that we may be entering an era where questions like 'What is the meaning of life?' will be practical engineering questions,' 64-year-old Vinge agrees. 'On the other hand, I think it could be kind of healthy, if we look at the things we really want -- and look at what it would mean if we could get them.'"

I'm on record as being a skeptic about this stuff, but man, Vernor's fun to read.

Singularity 101 with Vernor Vinge (Thanks, Destiny!)


  1. Yeah, I’m thinking 30-40 years. Once we’ve established nuclear fusion as a viable and, most importantly, free/inexpensive energy source, I think it won’t take long before a big change in human nature occurs.

  2. I’m trying to figure out in which order the Singularity, the Rapture, the Moshiach, and Higher-Phase Communism come. Anyone know? And are they before or after 2012-12-22?

  3. There’s an up-and-coming open source project which is making amazing progress in the field of general AI — it is called Texai, and if I understand correctly it is currently being developed solely by Stephen Reed, who used to work at Cyc.

    Check it out – I think it’s the most interesting project on the ‘net – even though there is nothing for the layman to look at just yet!

    http://www.texai.org/blog/about/texai-project

    http://texai.org/blog/roadmap/

    He has been making excellent progress, and plans to deploy his initial program with a chatbot interface sometime this June.

  4. um…that should be “The Moon is a Harsh Mistress” you cited in your article, not “The Man Who Sold the Moon”

    That said, I’m with you that there will be more resistance than has been generally acknowledged to the idea of making a quantum leap in history.

    a fellow Heinlein addict.

  5. That would put Vernor at 85 years old when the singularity kicks in. And with the help of the advances made in genetics and nanotechnology prior to the singularity, he should have no problem surviving long enough to live forever. But then, I’ve never heard anyone predict a singularity that they themselves would not live to see. I would be a lot more willing to believe someone who did not have the fear of death weighing down the accuracy of their prediction. Unfortunately that person does not exist.

  6. It’ll be interesting to see how they make and power these supercomputers, given that we’re running out of a lot of the natural resources required to create, maintain and power them.

    This kind of high concept prediction stuff can be fun, but it doesn’t do much for me if it doesn’t properly consider the reality of our development as a species.

    For example, we’re running out of many of the elements used in producing computers. Then again, if we can build them fast enough, maybe the AI computers can figure out a way to survive.

  7. Self-awareness is already there, to a degree ;), in us. The exponentially growing technology… that should be the Web. So the singularity should be more cybernetic in nature.

    If only Rogers (ISP in Canada (north of the USA)) would stop capping our bandwidth…

  8. ‘It is very unsettling to realize that we may be entering an era where questions like ‘What is the meaning of life?’ will be practical engineering questions,’

    I may be skeptical about an upcoming singularity, but that skepticism pales compared to the skepticism I have that somehow some external entity (person, god, or computer) could figure that out for us. Technological prowess aside, that kind of belief seems hardly justified by the facts of history.

    For those who are interested in the promises (kept and broken) of technology, I’d recommend Charles Eisenstein’s The Ascent of Humanity, but be forewarned: he is not a technophile, and you’ll probably find things to take exception to. Even so, it is an interesting read. It quite deeply poses the question: how has technology scored?

  9. AVRAM @1:

    Thanks! I came here to ask the very question you answered.

    13STRONG: We’re also running out of scale: you can only pack so many transistors per cubic millimeter.

  10. And look at how successful all those futurists were in predicting the future back in the 50s and 60s!

  11. You can’t predict the singularity.
    You won’t know it has happened until well after it has occurred.

  12. I don’t know enough to say whether there are fundamental flaws in the idea of singularity. Probably some of their predictions will come true, and probably in ways rather different than they imagine, and probably other things totally unpredicted will happen that turn out to be far more important than the predictions. In other words, like futurists always do. I do find it interesting and kind of sad that the predicted date for The Singularity is pretty much always within the expected lifetime of the predictor.

  13. Great reading always. A Fire Upon the Deep & A Deepness in the Sky are two of my favourite SF novels ever.

  14. It would be nice if it happened before my parents die. Dad would like a nice robot body to scoot around the rings of Saturn.

  15. Computers will think faster and more accurately than humans in the future, but there’s no guarantee that they will be any more intelligent than us. Considering artificial intelligence will probably mimic our intelligence (at least at the start), that means it’ll just screw things up faster and more accurately.

  16. Computers *compute* faster and (in some cases) more accurately than humans… except for autistic savants and the like… but there is no reason to suppose that they *think* or will ever do so in the future.

    There’s more to thinking than adding, subtracting, and parroting talking points, regardless of what one might be led to believe from watching network news.

    We are closer to self-extinction as a species than we are to building a *thinking* machine.

  17. jimsmart @4:

    That definitely looks interesting, but I find it difficult to believe that the world’s first strong AI will be written in Java.

  18. It’ll be interesting to see how they make and power these supercomputers, given that we’re running out of a lot of the natural resources required to create, maintain and power them.

    That is a rather silly statement. There are two things we suffer no lack of on Earth, and those are energy and matter. I can say with a pretty high degree of confidence that we are not going to run out of either until we have a civilization consuming so much energy that it is capable of killing stars.

    Energy is easy. If solar energy really isn’t enough for you, and you don’t think that fusion will ever be viable, then consider that our big old solid continents are floating around on liquid rock. If you can’t figure out a way to turn the massive temperature differential between the surface and just a few thousand meters down into power, you just are not trying hard enough.

    Matter is even easier. With a few exceptions, all of the matter that was sitting around at the dawn of human evolution is still there. As bad ass as we might like to think ourselves, we are not actually destroying the Earth. We might be rearranging the atoms in a way that isn’t always healthy for our own survival, but believe me, all of the copper ever dug out of the ground is still there. Further, we are only really scratching the surface of the mineral deposits. It isn’t like all of the minerals of the world are magically sitting a few hundred meters from the surface; that is just as far as we are able to dig to get at them. Let’s also not forget that a solid 80%+ of the Earth is utterly unexplored for minerals. I am not going out on a limb when I point out that Antarctica and the oceans probably conceal massive mineral deposits sitting at the surface that are utterly untouched.

  19. “I am not going out on a limb when I point out that Antarctica and the oceans probably conceal massive mineral deposits sitting at the surface that are utterly untouched.”

    Extraction will become viable shortly after we return to a metal-based currency, thus ironically causing the hyperinflation we were trying to avoid.

  20. I have enjoyed many of Mr. Vinge’s novels. However, a 15-minute trawl through FailBlog should clear up any notion that we are heading for a higher level of being due to technology. We are idiots.

    These advances in technology will first be put to use in gaming and porn. Sitting in front of a box shooting things or getting a simulated blow job from a supermodel. Somebody is buying all of those Toby Keith CDs.

  21. The biggest fallacy of the singularity delusion is the underlying presumption that as soon as something becomes technically achievable it also by default becomes desirable. The singularity seems inevitable to believers because they regard everyone who simply isn’t interested as totally irrelevant to their equation.

    And with that in mind, the singularity could very well BE the next step in human evolution.
    Except mankind would remember it as the age when we finally shed ourselves of the excess technophiliacs who were dragging us down into wasteful materialism.
    :-)

  22. At the rate we are destroying the Earth, there will be no Olympics in London, let alone a Singularity!

  23. @rindan: “As bad ass as we might like to think ourselves, we are not actually destroying the Earth.”

    Sometimes being THAT literal is just obtuse.

    I see your point, but if we caused enough warming to melt enough ice to expose Antarctica, the ice cap would probably re-form in the intervening 10K years before human civilization arrived back at the level of technology needed to make any use of those deposits.

    The biosphere is a skin on a hot rock.

    Without it Earth would be Mars or Venus.

  24. @4: No kidding, your link is the first AI project I’ve seen in years that didn’t make me groan “weren’t we supposed to be further along by now?”

    It’s as if everyone else is just building longer and longer expert system response lists. There is nothing exciting about an expert system. The language is all written by humans. The “AI” is merely scanning the input text for a keyword and then returning the first prefab line that the keyword is connected to. It “understands” nothing and usually has a shorter memory than a goldfish. Yes, I am being a bit negative.

    But here’s Texai, actually built to acquire and synthesize language. Hm.

  25. Apprehensive about The meaning of Life becoming simply an engineering problem. The single most likely answer would be switching off life.

  26. Kurzweil and Vinge always seem to predict a date that will allow them to sneak in under the wire.

  27. I know enough Singularitarians who’re surprised and disappointed it hasn’t happened *yet*. Vinge is optimistic, and it’s coloring his prediction.

  28. If a capitalist society achieves Singularity, it will be winner-take-all. The first few people will take over the entire planet and leave everyone behind. I would like to suggest that this outcome is somewhat undesirable.

  29. PAULR:

    > We’re also running out of scale:
    > you can only pack so many transistors
    > per cubic millimeter.

    Quite aside from the fact that we’re really only packing square millimeters, not cubic ones, something else you’re missing here is that we’re not running out of cubic (or square) millimeters, so the issue of how many transistors fit per millimeter really isn’t germane, and what’s more, it never will be. Communications between physically isolated chipsets can run at optical and broadband RF rates; that’s an already solved engineering issue. Scaling problem summarily disposed of.

    This is not a hardware issue in any case. It’s a software issue and nothing else. If you can make AI run on any given advanced set of hardware, you can make it run on a one bit Turing machine or an old 8-bitter with paged memory. It’ll just run slower. At least, it will if it is implemented at the same level. But eventually, it’ll give you the same intelligent answer. Probably to your far-off descendants. :o)
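    To make the hardware-independence point concrete, here’s a toy Turing-style machine (Python; purely a sketch, nothing to do with any real AI codebase). The interpreter loop is nothing but table lookups on a tape; the same loop runs any program you can express as rules, just slower on humbler hardware:

        # A toy Turing machine: all computation reduces to table lookups
        # on a tape. This rule set increments a binary number, but the
        # interpreter loop below is the same for any program.
        def run(tape, rules, state="scan", pos=0):
            while state != "halt":
                sym = tape.get(pos, "_")
                state, write, move = rules[(state, sym)]
                tape[pos] = write
                pos += 1 if move == "R" else -1
            return tape

        rules = {
            ("scan", "0"): ("scan", "0", "R"),  # skip right over digits
            ("scan", "1"): ("scan", "1", "R"),
            ("scan", "_"): ("add",  "_", "L"),  # hit the end; start adding
            ("add",  "0"): ("halt", "1", "R"),  # 0 + 1 = 1, done
            ("add",  "1"): ("add",  "0", "L"),  # 1 + 1 = 0, carry left
            ("add",  "_"): ("halt", "1", "R"),  # carry past the last digit
        }

        tape = {i: c for i, c in enumerate("1011")}      # 11 in binary
        result = run(tape, rules)
        print("".join(result[i] for i in sorted(result)
                      if result[i] != "_"))              # -> 1100, i.e. 12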

    However, odds are excellent that once AI has been cobbled together by people on the then-current generation of hardware, it will back away from these awesomely inefficient HLLs (high-level languages) and re-implement itself in efficient machine code, in order to gain many orders of magnitude of functionality without having to wait for the next generation of hardware at all.

    We use HLLs because we’re really, really bad at managing bits at the level of detail imposed by machine code. There’s no reason whatsoever to presume that such a limitation would, or should, apply to an AI. There’s no reason to cripple an AI with memory that is only vague and symbolic, as ours is.

    Once such a step has been taken, the down-coded AI will then be able to run at acceptable speeds on hardware far less capable than the “sunrise” hardware that AI initially makes itself felt upon. It won’t be very portable, and we won’t even be able to pretend to understand the resulting machine code, but it’ll run anyway. Kind of like our brains at the moment. We’re fairly clueless about them, other than the low-level, non-correlated details.

    Even so, science at this time uniformly indicates AI is absolutely inevitable. Despite looking really hard, no human (or other) nervous system process has ever been discovered that lies outside the mechanical, electrical, chemical and quantum domains. Even assuming this is the only way AI can be created (about as likely as there being only one way to make a salad), it’s still a 100% clear road ahead. We’ve just got to make the trip, that’s all.

    As for the singularity… The emergence and existence of AI doesn’t in any way imply we’ll be smart enough to understand it. I tend to think a great divide will be created. We’re more likely to be happy (I hope) pets than we are an integrated portion of a mind or minds much larger than ours, at least in the sense that we can take complete advantage of it.

    I look around today, and I see people who can’t discriminate between “to” and “too” or “heal” and “heel”, people who seriously believe we were created by a god or gods, and people who cannot stop smoking cigarettes or control their eating. I don’t draw an inference from that which tells me bonding with an AI will give me fingertip control of ruminations on advanced calculus.

    The benefits of it, yes — maybe I can control my baseball pitch better — but I still won’t understand it. Only the AI will, and it won’t be able to make me understand it, because my wetware is, not to put too fine a point on it, inherently limiting. It’ll still be fun to pitch better.

    Arf!

  30. Self-described “Futurists” are fun people to chat with but don’t have a great track record for accurately imagining what the world will be like in a few decades. The problem is that most just look at existing technological trends and extrapolate, which is why 50s futurists imagined we’d have jet packs and Mars colonies by now and Jules Verne thought we’d all be traveling by super-zeppelin.

  31. I think cybernetic enhancement of human thought via fast direct interfacing between human brains and computers is a reasonable thing to anticipate within the near future. The technology is there, though it’s in its infancy.

    Real self-improving intelligence doesn’t seem to be the kind of tech you can point to an infant-form of, however. As such, it doesn’t seem reasonable to put a time frame on its discovery.

  32. The idea of the Singularity interests me, but how is this not what Kevin Kelly called “thinkism?”

  33. That is a rather silly statement. There are two things we suffer no lack of on Earth, and that is energy and matter.

    Easier said than patented.

  34. You know, after the singularity, psychologists are going to prove that we only use 10% of our transistors.

  35. What is most fascinating about the AI debate is the assumption that a human-created AI would necessarily be human-like in form. What’s to say an emerging AI wouldn’t take a different path? Especially if it were a self-emerging process, guided by chance and environment. There’s nothing overly compelling about the human condition from an engineering standpoint, and there’s an argument to be made that self-awareness might be viewed by an emerging AI as a software glitch, and thus self-corrected to avoid it.

    Basically, what’s to stop an emerging AI from ending up more like a whale or a cat than a human?

  36. fyngyrz @ 28:

    No, I meant cubic millimeters – they DO have thickness too – as they’re made up in layers. Silicon wafer/dies are thin, but they don’t have zero thickness.

    The only way you can really ramp up the complexity/density/power of CPUs is to use more of the Z-axis. But then there’s the problem of heat dissipation and the upper (lower?) limit to how thin tracks can be in ICs.

    As someone who has actually programmed in machine code (not just assembler), I’m interested in your comments. I’ll be reading them offline (on my iLiad – have I ever mentioned that I have an iLiad? ;-> ), for comment later.

    Ta!

  37. GregLondon, that’s probably the best one-liner argument against the inevitability of the singularity that I’ve seen.

    We will probably indeed have enough feathers to fly in 2030. The question is how long it will take for someone to actually figure out the right way to glue them all together.

  38. @fxq – in post-singularity world, jetpack gets you! (Or at least your *consciousness*, downloaded into it. Actually, probably just an instance of you that you can later merge back into the larger you.)

  39. ASICs have one layer where the transistors are created, and then they have many layers of metal where the transistors are interconnected with each other. So it’s 2-D in the sense that there is only one layer of transistors, and 3-D in that it takes many layers of metal to do the interconnections. If you want, you can call the idea of an ASIC having multiple layers of transistors 4-D.

    But as far as I know, there are no fabrication plants that are intending to make 4-D ASICs on any commercial scale.

    The more gates and interconnections, the bigger the chance of a manufacturing defect consigning the chip to the junk bin. At the moment, if you start getting too big, you might be better off manufacturing two chips, throwing out the bad ones, and then interconnecting the chips that did pass inspection.
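    To put toy numbers on that junk-bin point, here’s a back-of-the-envelope Poisson yield model (the defect density and areas are invented for illustration, not from any real fab):

        import math

        # Toy Poisson yield model: the chance a die has zero fatal
        # defects is exp(-D * A), where D is defect density and A is
        # die area. exp(-D*2*A) == exp(-D*A)**2, so doubling the die
        # area squares the (fractional) yield.
        D = 0.5                          # defects per cm^2 (assumed)
        A = 1.0                          # area of each half-chip in cm^2

        big = math.exp(-D * 2 * A)       # one 2 cm^2 die
        half = math.exp(-D * A)          # each 1 cm^2 die
        print(f"one big die good:   {big:.0%}")     # ~37%
        print(f"each half-die good: {half:.0%}")    # ~61%

    You still need two good halves per product, but because you pair up dies that passed inspection, you scrap roughly 39% of the half-dies instead of 63% of the big ones.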

    There’s also the problem of 4-D chips overheating, and of not getting enough power, because your pins are still 2-D. And there’s also the problem of running out of pins. And the fact that printed circuit boards are still 2-D.

    The one huge advantage of having all the transistors in one die is that the frequencies on chip can be extremely high (gigahertz) whereas the frequencies of signals on a printed circuit board are usually much lower (hundreds of megahertz).

    All of which is still irrelevant to AI. AI is software, not hardware. The problem of AI is understanding it enough to code it. If you understand AI, you could code it up on a Mac. It might run slow, but it would perform AI type functions. Specialized hardware would make those functions run faster, but only after we’ve understood the process.

    Which is why I posted the bit about putting enough feathers together in a pile, and I should be able to fly. Because feathers are hardware. The design of a heavier-than-air craft requires understanding.

  40. I’ve made a tinfoil-over-cardboard robot disguise in case SkyNet happens.
    Kill. The. Filthy. Humans.

  41. #12: “these idiots” are science fiction authors, and are doing their job just fine, thanks.

  42. “It might run slow, but it would perform AI type functions.”

    Which, of course, brings us to another Vernor Vinge story!

    Personally I think that human civilization has only two possible outcomes – everybody dies, or singularity. Will either happen in my lifetime? Hard to say, probably not, but that isn’t going to stop me from taking reasonable steps to live as long as possible.

    I think that whenever it happens, we will probably slip into the singularity through using technology to enhance our access to information and outsource our own thought processing, rather than creating flat-out, from-first-principles AI.

    Of course this is a troubling idea – if my brain is plugged into a repository of total world knowledge (edited, I assume, by John Hodgman), how will I avoid spoilers for my favorite television shows, or keep from accidentally accessing game walkthroughs whenever I think about how to solve a puzzle in a game? You don’t always *want* to have information supplied the moment you think of something…

  43. If the singularity is going to be so significant, then why don’t we spend money on computer research instead of everything else the government spends money on? Wouldn’t we reach our goals for humankind faster if we brought about the singularity sooner rather than later? If we are having economic troubles, and they will be even worse in the 2020s, then why not bring in the “singularity superhero” to save the day?

  44. @ x99901 #31:

    I think cybernetic enhancement of human thought via fast direct interfacing between human brains and computers is a reasonable thing to anticipate within the near future. The technology is there, though it’s in its infancy.

    The thing about infants is that you never know what they are going to be when they grow up. Otherwise we would have the space colonies and robot servants that people quite reasonably anticipated in the 50s.

  45. Re @GREGLONDON’s trollish feathers comment…

    Both the production of the feathers, as you put it, and their arrangement into viable configurations are being conducted in parallel as we speak, by multiple highly competent and increasingly capable teams, using competing and sometimes mutually beneficial approaches. Each advance brings incremental benefits that keep the incentives for continued work high. So feathers, yes, but how you arrange them is also key.

    Also you are making a big and unfounded assumption when you suggest in another comment that the only way to do the software is to “code” it. There are bottom-up approaches where the system gains knowledge in part through its own exploration of sensory and other input, analogous to how a baby learns.

    You also seem to be assuming a monolithic substrate, when in fact we could get there by some other means. For example, have you noticed yet that the Internet is already self-aware? Maybe you can tell me, where is the rule written that the emerging AI cannot incorporate human intelligence as part of its “brain”? The Internet is already thinking, by this definition. And it is far smarter than you or me, even taking all its stupid memes into account. Look at Vernor’s four original scenarios, and you’ll see that there is at least one you are overlooking.

  46. @#28 posted by fyngyrz

    People think of AI the way they think about aliens, they essentially make hyper versions of themselves. Where in your argument are AI free from error, bugs, or belief? Simply being able to calculate faster and more accurately doesn’t mean that they won’t make mistakes over a long enough period of time, or develop personalities, or believe in a power greater than themselves, or any number of things that seem to come with intelligence. Also, why should they want to keep anything around as pets? Or even exterminate us? Or even interact with us? These are all fairly shallow assumptions based on personality, not observed evidence.

  47. re: avoiding spoilers on TV shows

    You already have near instantaneous access to vast amounts of information, including TV spoilers: the internet. Yet you don’t have everything spoiled before you watch it, because access is not the same as awareness. I see no reason why a more direct interface would fundamentally change that. Just make sure the “browsing” remains a conscious act.

  48. Greg: It might run slow, but it would perform AI type functions.

    Edgore: Which, of course, brings us to another Vernor Vinge story!

    The only one I’m familiar with is “A Fire Upon the Deep”, and that story seemed to have a world where AI worked only in certain geographic locations. If you got stuck in the “Slow” (or whatever it was called; it’s been a while since I read it), then you didn’t get AI.

    I never quite understood how it was supposed to work.

  49. Actually, Vinge is a computer scientist.

    However, a lot of the major proponents of the singularity come from comp sci backgrounds, and I’ve seen few physicists get on board.

    For me, I’ve seen some reasonably convincing arguments, but many of them hinge on a sort of “manifest destiny” argument for AI that ignores physical limitations, and what will be necessary for something like they describe to come into being.

    For example, optical power in lasers most certainly does NOT conform to Moore’s law, and in order for this computational power to “leave the box” you’ll need to scale bandwidth costs in a way that is proportional to the increase in silicon capability.

    That ain’t going to happen quickly, and will need some fundamental research.

    Smart machines may be able to think more quickly, but they won’t be able to do fundamental experimental physical research much more quickly than humans.

    Sorry gang, the Singularity will be decidedly delayed.

  50. I thought the Slow in Fire Upon the Deep postulated a range in basic physical constants (like the speed of light, etc.) that we currently aren’t aware of because we haven’t got out of the neighbourhood yet. As for picking on SF writers for using “magic”; blow me.

  51. @ Moriarty

    That’s the challenge – I don’t want to have to switch modes; I don’t even want to have to browse. I want to *know*. If I think “Hmmm – what else has actress X been in?” then I want that information returned immediately, just as if I were remembering it. I might also want to remember having seen the movies, even if I haven’t. (Not spoilery, since I will have effectively seen the movie, therefore it cannot be spoiled.)

    But there is also a class of information that I do not want to have just pop into my head, the kinds of things mentioned in my previous post.

    Actually, it sounds like this is a tagging issue – “spoilery” types of data should be tagged as such and only retrieved with a specific “yeah, I really want to know how to get past the boss on level 5 of ‘Cavemen vs Creationists’”.

  52. natch@44: Re @GREGLONDON’s trollish feathers comment…

    It isn’t trollish to point out that some people don’t grasp the concept of AI. Processing power enables AI, but it is no substitute for understanding it.

    You need a wing to fly, but a wing by itself doesn’t mean you’ll fly. You also need an engine, propeller, control surfaces, and a tail or other structure that makes your center of mass and center of area work out to stable positions. i.e. you have to understand what you’re doing.

    Each advance brings incremental benefits that keep the incentives for continued work high.

    So, they don’t understand AI yet.

    There are bottom-up approaches where the system gains knowledge in part through its own exploration of sensory and other input, analogous to how a baby learns

    At the moment, AI is pretty much being implemented as software. The software may be simulating neural nets, but it’s still software running on a processor. AI isn’t being run on neural net hardware because if you have to change the way the neural net works, you have to build new chips, and that’s expensive.
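    To be concrete about what “simulating” means here – a neural net in software is nothing but arithmetic in a loop. A toy sketch (Python, hand-picked weights, not any real system):

        import math

        # A two-layer net computing XOR with fixed weights: plain
        # software on a plain processor, no neural hardware anywhere.
        def neuron(inputs, weights, bias):
            s = sum(i * w for i, w in zip(inputs, weights)) + bias
            return 1.0 / (1.0 + math.exp(-s))    # sigmoid activation

        def xor_net(a, b):
            h1 = neuron([a, b], [20, 20], -10)       # ~ (a OR b)
            h2 = neuron([a, b], [-20, -20], 30)      # ~ NOT (a AND b)
            return neuron([h1, h2], [20, 20], -30)   # ~ (h1 AND h2)

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, round(xor_net(a, b)))    # prints the XOR table

    Change how the “net” works and you change a table of numbers, not a chip.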

    You also seem to be assuming a monolithic substrate

    Because I was replying to someone’s post that was discussing 3-D (or whatever you want to call it) ASICs – i.e., they were talking about a single die but with more than one layer of transistors. And I happen to be an ASIC engineer.

    For example, have you noticed yet that the Internet is already self-aware?

    No, it isn’t.

    Maybe you can tell me, where is the rule written that the emerging AI cannot incorporate human intelligence as part of its “brain”?

    The “Turing Test”. Since the test works on whether or not the object-under-test can fool a human into believing it is human, if the object-under-test is in fact human, then there is no point in the test.
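    Mechanically, the test is nothing more than this (a bare-bones, interactive Python sketch, illustrative only – the canned-reply bot is a stand-in for whatever object is under test):

        import random

        def machine_reply(prompt):
            # Stand-in for the object under test; a real candidate AI
            # would go here. This one just parrots canned lines.
            return random.choice(["Interesting. Tell me more.",
                                  "Why do you ask?",
                                  "I was just thinking about that."])

        def human_reply(prompt):
            # The hidden human confederate types their own answer.
            return input(f"(hidden human) {prompt}\n> ")

        def run_test(rounds=3):
            # Randomly assign labels so the judge can't rely on order.
            parties = {"A": machine_reply, "B": human_reply}
            if random.random() < 0.5:
                parties = {"A": human_reply, "B": machine_reply}
            for _ in range(rounds):
                q = input("Judge, ask a question:\n> ")
                for label in ("A", "B"):
                    print(f"{label}: {parties[label](q)}")
            guess = input("Which is the machine, A or B?\n> ").strip().upper()
            actual = "A" if parties["A"] is machine_reply else "B"
            print("Correct!" if guess == actual else "Fooled you.")

        if __name__ == "__main__":
            run_test()

    Route a human’s answers through machine_reply and the judge is just comparing humans to humans – which is exactly why there’s no point.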

    Look at Vernor’s four original scenarios, and you’ll see that there is at least one you are overlooking.

    I don’t care what Vernor’s scenarios were. I was talking about AI specifically. And we currently do not have anything that passes the Turing Test except on a very limited scale and topic selection. You could make a “creationist bot” that passes the Turing Test because it simply regurgitates the same old Creationist memes on evolution websites. But no one’s built a bot that parses full conversational language and responds in kind.

    As for the geek rapture / singularity, the internet is a communication medium that allows conversations that were never possible before to now take place. But those conversations are driven by humans. And the Singularity as an idea often involves the geek rapture idea of having hardware that you can “upload” your consciousness to somehow so you can live forever. You can’t “upload” your consciousness to the internet. There is nothing conscious about the internet.

    There are technologies that have turned the world on its head. Software and electronic music and movies and similar have taken the zero-sum-game of physical media that was the norm for all of human history and transformed those sections of the world into non-zero-sum games.

    Replicators will probably eventually do something similar for some subset of physical objects.

    Communications between people went from face-to-face and postal mail to total connectivity, global connectivity, with conversational modes that last for weeks on comment threads and so on. Information can actually start to be free via search engines and people posting information on their sites. Which has turned informational services on their head, probably putting Encyclopedia Britannica on the run and pushing newspapers out of print.

    But one cannot define the “Singularity” as just any technological change with ramifications we cannot predict. All technology has ramifications that we cannot predict. Fire, bronze, iron, internal combustion, electricity, nuclear physics, Darpanet. And if the “Singularity” wants to claim that title, then it becomes so diffuse as to be pointless.

    The one and only specific notion of the “Singularity” that is truly new is the notion that at some point artificial intelligence will become operational and after that point it is impossible to predict what will happen.

  53. I find this “singularity” idea so irrational. Just because something can mimic cognitive thought doesn’t mean it actually is cognitive thought.
    Just “downloading” yourself doesn’t mean it is yourself, just a copy of you. You would no longer exist.
    Why people are trying to “copy” themselves is beyond me.
    We die. Get over it.

  54. Takuan: As for picking on SF writers for using “magic”; blow me.

    Chill, dude. I don’t have a problem with warp drives or teleporters in fiction.

    Authors use sci-fi “magic” to move the story along all the time.

    Vinge’s “Slowness” or whatever it was called didn’t make any kind of functional sense, like “you need this to do that”, and it didn’t make sense in a “what if we could do this” sort of way. It seemed to fail, in my opinion, because he made the fatal mistake that all sci-fi authors are warned NOT to make: He tried to explain how it works.

    Vinge’s gist seems to be “what if AI were functionally dependent on location?” It works there. It doesn’t work here. OK. Fine. But you have to accept that AI simply isn’t possible on Earth, no matter how much time you have, and I for one don’t buy that.

    Note that this is different than “you need hyperdrive to allow the story to take you to other planets”. I’ll accept “hyperdrive” because a story with hyperdrive isn’t saying “you will never invent hyperdrive on Earth because of where you live”. They’re saying “Alice and Bob fly between the planets using hyperdrive”. It’s handwavium. Vinge is trying to explain it, and it doesn’t make sense.

    The gist of the Slowness seems to be that AI will be networked. You need faster than light communications to achieve AI. You cannot implement AI with local hardware, you need a network.

    What?

    This is the “big pile of feathers” argument. It’s guys like Natch arguing that the internet is “aware”. No, it isn’t. A network doesn’t solve anything. You have to understand intelligence, and if you understand intelligence, you’ll be able to simulate it on a local, totally self-contained piece of hardware.

    Why is it artificial intelligence won’t work in Vinge’s “Slowness”, but biological intelligence works just fine? The human race passes the Turing Test, by definition. How is it we can create biological intelligence here in the slowness, but artificial intelligence will need faster laws of physics?

    Why does Vinge’s AI only work when it isn’t in the slowness? It doesn’t make any sense, and it seems to be pointing towards the solution Vinge imagines: networked computers. If you don’t have networking, Vinge is saying, you can’t have AI.

    As much as I think the Singularity is silly geek rapture stuff, I think we will eventually build machines that will consistently pass the Turing Test against numerous highly intelligent humans. I’m not predicting when. But I think it possible. And it will eventually be a relatively small, self-contained machine.

    Vinge’s “slowness” thing seems to be pointing to implementation, i.e. networking, which is irrelevant, and more importantly, wrong.

    If you have a warp drive in your story so that Alice can fly to planet 10 and marry Bob, fine. If you wanna have Charlie wax philosophical about how the warp drive works, then it becomes a bit of a problem. If you have the workings of the warp drive be inherent to the story to the point that the implementation oozes from every chapter of the novel, and that implementation simply makes no sense, that’s a big problem.

    Humans produce intelligence just fine without networking and without any of the faster-than-light electronics we’re denied here in the “Slowness”. There is no real reason to say that AI can never be developed on Earth because our physical laws are “slow”.

    If Vinge was simply writing a story like “AI only works there and doesn’t work here” and what that would look like, fine. But it seems to be, at least in part, also trying to point to Vinge’s proposed solution to AI of networking, and it makes no sense.

    And no, I won’t blow you for not liking Vinge.

  55. (now I have to do a Britneyesque “Leave Vinge Alone” Youtube video…”thanks”)

    That wasn’t directed specifically against you BTW.

  56. URSHREW shewishly advanced the following:

    > People think of AI the way they think
    > about aliens, they essentially make
    > hyper versions of themselves.

    Some do, I agree. Then again, some don’t.

    > Where in your argument are AI free from
    > error, bugs, or belief?

    Sorry, and why is it my argument has to contain these elements? These are not my opinions, so perhaps they are yours? In which case, why am I being called to defend them?

    > Simply being able to calculate faster
    > and more accurately doesn’t mean that
    > they won’t make mistakes over a long
    > enough period of time

    I’m very confident they’ll make mistakes, both from their perspective, and from ours. No matter how similar they are, or aren’t, to us. Your point in bringing this to my attention is what?

    > or develop personalities, or believe
    > in a power greater than themselves,
    > or any number of things that seem
    > to come with intelligence.

    I expect personalities — that is, individual characteristics that will make one recognizable from another by its actions — will develop very early on. Again, and so?

    You assert that superstition is a quality of intelligence; I’m inclined to characterize it as a quality of not enough intelligence, and/or unrequited fear of the unknown, and/or straight up gullibility. So I don’t expect this unless it is forced in as a basic assertion or axiom that can’t be undone for some reason.

    > Also, why should they want to keep
    > anything around as pets?

    I didn’t say they’d keep us around for any specific amount of time. I said the relationship would be as pets, in the sense that we don’t have the means or capacity to share any great intelligence that comes along. This is not the same thing as predicting a duration at all. You may wish to re-read my comment for content.

    > Or even exterminate us?

    Of course.

    > Or even interact with us?

    Oh, I rather think we won’t give them a choice about that. We’re pretty annoying to each other that way, why would we leave an AI alone? All the more the reason to exterminate us.

    > These are all fairly shallow assumptions
    > based on personality, not observed evidence.

    Yes, they were. You probably shouldn’t have made them. But I accept your apology.

  57. “…As for picking on SF writers for using “magic”; blow me.”

    The difference between SciFi and Fantasy is magic.
    SciFi must offer some acceptable explanation for its technology, whereas Fantasy doesn’t have to. If a SciFi writer is using magic, he should blow you.

  58. I thought the difference was that SF contained the pretense that it takes place in a world that plays by the same rules that our world does, and fantasy does not have that pretense. Hard SF, in my mind, says this could definitely happen, and here is how. The borders, of course, are blurry, but I don’t think nonexistent.

  59. In Vinge’s “Zones of Thought,” each galaxy is partitioned into four zones: the Unthinking Depths, where human-level intelligence is impossible and hyperspace jumping is impossible; the Slow Zone, where human-level intelligence is only possible in biological creatures, machine “intelligence” cannot equal human intelligence, and hyperspace jumping is impossible; the Beyond, where human-level intelligence is possible for both biological creatures and machines and hyperspace jumping is possible, but transhuman intelligence is limited and very fleeting; and the Transcend, where transhuman intelligence is possible (and always seems to incorporate machines, whether biological creatures are included or not). The only zone boundary that’s relatively sharp is the boundary between the Beyond (where FTL is possible) and the Slow Zone (where FTL is not possible). The purpose of the Zones of Thought was to have a narrative domain (a “universe”) in which space opera could come into conflict with “Singularity” science fiction. Vinge created the zones so he could basically have protected sublight, FTL human, and transcendent entities all in the same story.

    There’s a sort of middle ground between Hard SF and Fantasy, and Space Opera is the largest inhabitant of that middle ground. *A Fire Upon the Deep* is Space Opera – it has FTL, and so it’s not hard SF. *A Deepness in the Sky* is on the outer edge just beyond hard SF – you don’t see anything that violates hard SF strictures in the story, but you know that in that universe, there is FTL – but for the characters, it remains a dream.

    The story with the very, very slow AI is “True Names,” in which the first AI represents itself as a teletype machine because it is so slow thinking (not dumb, but slow) that all of its communications are asynchronous. There’s another part of the story that is an early example of the Singularity, but I won’t give details because those are spoilers.

    My own feeling is that human-equivalent AI is 100 years away. It is possible – it has to be, because human-equivalent natural intelligence exists; but it is very, very hard to do. Transhuman AI is probably possible – things that think better than we do – but might be further away than human-equivalent AI. And the whole Singularity – of that I’m skeptical.

  60. We usually outgrow Fantasy around age ten. That’s when SF usually sneaks in. I agree with Greg that the willing acceptance of unexplained technology is often necessary for the enjoyment of most space operas. I’m fond of galaxy hopping via wormholes myself.

    Vinge’s stuff is a lot of fun to read. I’ve never gotten over being boggled by Bobbles. And if I remember correctly, the dumb zone of the galaxy in “Fire…” is where Earth is located. The cat’s got a sense of humor.

    As to the Singularity, I wonder how seriously he takes it. He’s mentioned another Singularity, perhaps the only one, and surprise! it’s us…

    “As I said, the nearest analogy [to the Singularity] in the history of the earth is probably the rise of humans within the animal kingdom.”

    But it’s fun, it’s trippy, it’s mega-geeky, and IMHO (stress on the “H”) it’s…crap.

  61. #70 posted by fyngyrz

    Ha! You win that one FYNGYRZ.

    I just meant that people project themselves, and their personalities, onto their visions of the unknown. Our gods, fairies and demons take the form of our various ideas. Reading your comments, you seem pretty assured that AI is going to happen and what form it’s going to take, even going so far as to how it’s going to act towards us. I challenge that there’s no reason to believe that we can predict the actions of something as complicated as an artificially designed intelligence, and to assume it will be xenophobic, controlling, haughty and destructive says more about the thinker than the real world possibilities. That’s a great act of faith.

  62. Sorry, nerdy nerds, but we’ll be thwarted by peak economy before singularity. Peak economy might actually have happened in 2008. And tech progress seems to have ground to a halt. I bought a terabyte hard drive a year and a half ago thinking I’d be able to get a replacement 2 TB hard drive by now for the same price. It turns out there are no 2 TB hard drives at all at any price. What the fuck happened to Moore’s Law?

  63. Okay, if you want a “scientific” explanation for Zones of Thought, I have three things to say: (1) dark matter, (2) dark energy, and (3) gravitational gradients affecting the amplitude of the quantum vacuum foam that provide limits on the scale of quantum-entangled systems that can be maintained. That said, if Vinge’s vision of “gods of the Transcend” has even a shred of reality, they are going to be stranger than can be described or comprehended here in the Slowness, due to what we perceive as massive violations of causality in natural systems with large-scale entanglement.

    And with regard to the singularity: for every speedup due to technological advancement there’s a corresponding exponential growth in the difficulty of the problems, also known as “combinatorial explosion”.
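    In toy terms (assuming a brute-force search costing 2^n steps and a billion steps per second today – both numbers invented for illustration):

        import math

        # Largest problem size n solvable in one day at 2**n steps.
        def max_size(steps_per_second, seconds=86400):
            return int(math.log2(steps_per_second * seconds))

        base = 10**9                     # assumed steps/second today
        for speedup in (1, 2, 1024, 10**6):
            print(f"{speedup:>8}x faster -> n = {max_size(base * speedup)}")

    A millionfold speedup buys you about twenty more units of problem size; each doubling buys +1, not a revolution.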

    Enjoy your scary future…

  64. It may be to my own detriment, but my money is on total human digitalization. A total transfer of my consciousness into a digital state (like the Matrix, yet aware of my virtual environment).

    I’m certainly not saying that it would be a utopia; I know well enough that wherever humanity is, utopia is sure not to be. Yet it’s an interesting idea… to live in a world of your own choosing/design, or not, depending on whatever your needs are.

    I’ve asked my friends about their willingness to shed their biological selves. Most said they wouldn’t; some can’t even give up their CD collections.

  65. Wow, Vinge predicts the big S even earlier than Kurzweil (2045). I think both are overoptimistic, but the technical advances quoted will eventually be achieved. It will not be perceived as a Singularity, though: we would view it as such from here, but it may well be perceived as business as usual by those living through it.

  66. Britney Spears. Reality TV. Texas. Politics.

    Computers are already more intelligent than humans.

  67. @greg london

    Sure, the Internet is intelligent. It contains vast amounts of data, which is being constantly analyzed, collated, indexed, tagged, backed up, etc. It contains vast quantities of information about itself, at every level. It incorporates models of itself, which it updates and uses to make decisions.

    But it also acts as a foundation that other, wildly different intelligences use, from chattering teenagers to professional societies to software-based natural language agents.

    Most of these things have no hope of passing the Turing test. But then, the Turing test isn’t the be all and end all of intelligence detection. It’s merely one way of detecting human-style intelligence. For other forms of intelligence, it has zero discerning power. If we all die off, and octopi become the next intelligent species, there’s no way they’ll pass the Turing test.

    The Internet already incorporates human intelligence in various ways. There are already places where you can go, type questions at it, and it pawns the answering off onto humans. They’re called chatrooms.

    But there are other ways to incorporate human brains into the mix. Say that you have a website called “baddrivers.com”, where us human agents narc on the idiots who cut us off, by entering their license plates and what they did to us. Humans enter all the data, but only the website software itself knows who the bad drivers are.

    Or if there was a learning chatbot with the potential for real intelligence. It would learn by discussing things with humans. It would be using our brains as teachers, but the fact that we told it everything it knows doesn’t mean that it’s really humans knowing those things.
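    A toy sketch of the kind of thing I mean (Python, purely illustrative – a real system would generalize, this one only memorizes):

        # When the bot has no response for an input, it asks the human
        # what a good reply would be and remembers it: human brains as
        # teachers, with the knowledge ending up in the bot.
        def chat():
            learned = {}    # normalized user input -> taught reply
            while True:
                msg = input("you> ").strip().lower()
                if msg in ("quit", "exit"):
                    break
                if msg in learned:
                    print("bot>", learned[msg])
                else:
                    taught = input("bot> I don't know that one. "
                                   "What should I have said? ")
                    learned[msg] = taught
                    print("bot> Got it. I'll remember that.")

        if __name__ == "__main__":
            chat()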

    The bigger the data footprint we leave in our lives, the more knowledge there is for robotic agents of various sorts to take advantage of. That’s how the Internet could incorporate human intelligence, without cheating on the Turing test.

    But I do think that any AI that doesn’t rely on direct modeling of a human brain will be hard pressed to pass the Turing test, and that an AI which does rely on such a model will pass it first.

  68. @Anonymous #78

    Newegg has a 1.5TB SATA hard drive for $129. If a 2TB hard drive isn’t available for any price, that seems odd to me.

    The answer may not be that technological progress has stalled, but that flash memory has stolen spinning platters’ thunder.

    In Kurzweil’s The Singularity is Near he purports to show that exponential technological progress continues straight on through recessions and depressions. It’s hard to tell how much of a point he has, since I find it difficult to believe there is a hard, quantifiable definition of “technological progress” in the abstract.

  69. Sure, the Internet is intelligent.

    Jumpin jeehosifat.

    No. It. Isn’t.

    If the Internet is intelligent, then any remotely piloted vehicle is intelligent, and any remote terminal with a human typing at it “passes” the Turing test.

    You’ve gained nothing in understanding true intelligence by waving your hand and saying “Ah HA! My Internet passes your Turing test, just pay no attention to the billions of people behind the curtain.”

    You understand nothing new about intelligence that we didn’t understand before. Unplug the remote operators and the Internet goes dead. It doesn’t think. It is not intelligent.

    Do not confuse data for intelligence. Cause if you do, you’ll start looking at multi-terabyte drives and think it has some several hundred point IQ that it doesn’t really have.

    Data isn’t intelligence.
    Remote operation isn’t intelligence.

  70. URSHREW, you:

    > I just meant that people project themselves,
    > and their personalities, onto their visions
    > of the unknown.

    I fully understood that you meant that.

    > Our gods, fairies and demons take the
    > form of our various ideas.

    Agreed.

    > Reading your comments, you seem pretty
    > assured that AI is going to happen

    Yes, absolutely, barring a technological crash in the interim.

    > and what form it’s going to take

    No. Not sure why you took that meaning from my posts.

    > even going so far as to how it’s going
    > to act towards us.

    No. I tried to clear this up before. One more time: When I said pets, I was talking about our ability to think alongside these things being similar to, for instance, a dog’s ability to understand your budget planning as a factor that goes into choosing what dog food you’re going to feed him. Not about us being treated like pets in a relationship sense. Simply that I don’t see us as likely peers with the thought processes of an intelligence that doesn’t share our inherent limits.

    I have no idea how an AI would treat us. The only observation I have that is germane is that if we don’t give it the means to act — no actuators, no output connected to anything but our eyes and ears and drawing devices — then we retain some control over how we can be treated. However, I don’t consider that as likely either.

    > I challenge that there’s no reason to
    > believe that we can predict the actions
    > of something as complicated as an
    > artificially designed intelligence

    I agree.

    > and to assume it will be xenophobic,
    > controlling, haughty and destructive
    > says more about the thinker than the
    > real world possibilities.

    Possibly. However, as we know, this also speaks to the nature of the creator. We are, in fact, generally xenophobic, controlling, haughty, destructive and so forth. The question is, how much of that will be built-in as a priori assumption, and how much will be allowed to develop via interaction. Remember — we’re building it. We can make it lean one way or another right out of the gate.

    > That’s a great act of faith.

    Would be, I suppose, were I to undertake it. As it is, I have moderate confidence that our first efforts at AI will bring along some of our failings and some of our successes. Which ones, and whether they’ll matter in the long run, I have no idea.

    The thing that seems fairly clear to me is that the degree of an AI’s intelligence isn’t going to be limited by a fixed hardware platform, nor by a specific, limited space for memory or concepts, nor by anything even remotely comparable to our limits on the degree of complexity it can successfully entertain. The primary implication of those things, even plus or minus a great deal, seems to me to be that we’re going to be vastly outperformed in very short order.

    Should such a situation occur, the question arises: what will a very intelligent AI do with, to, and for these not-very-intelligent creatures/creators swarming around it? And also, why?

  71. what is intelligence?

    Data is passive. Intelligence requires active decision making.

    And remote control is just hiding the man behind a keyboard rather than a curtain.

  72. I’ll take a swing at it:

    The process by which a biological organism successfully identifies and responds to sensory stimuli. The degree of intelligence is assessed by its rate of successful reaction, response, and retention.

    Human intelligence, however, is additionally measured by the degree of recognition, response, and retention of non-sensory meaning — symbolic language.

    Usually it’s just a word used by lazy writers to stuff between more important words when writing about digital technology. The gods of geekdom forbid any attempt to define it.

  73. That’s great Takuan, but now you have to define “survive.” Is a mountain range intelligent because it has endured for billions of years?

  74. Stephen Hawking may be on his death bed as I type. Meanwhile the Palin brood is young, healthy and seemingly more numerous by the day.

  75. @greg london

    Clearly, I didn’t get my point across. I’m not saying “Sure, the Internet is intelligent. You can hop on it and chat with millions of intelligences.” That would be such a stupid thing to say that I’m a bit surprised you interpreted it that way. It seems uncharitable of you.

    First, note that I said “intelligent,” not “sentient.” I’m claiming far less than you might think. The Internet as a whole behaves very intelligently, but I don’t think that it (or any of the non-human agents on it) is self-aware.

    I presented chat rooms as one way in which the Internet incorporates human intelligence. It’s a very simple way, with all the intelligence being expressed on the ends of the network.

    The other example I gave wasn’t particularly helpful, but there are certainly examples of non-human actors taking a very active role in the emergent behavior of the system.

    Google, I suppose, is as good an example as any. It collects a database of words and links, uses an algorithm to match links to search terms, presents the search results to users, gets feedback from the users (in the form of “I clicked on result X” or “I entered a new search term”), and incorporates that feedback into subsequent results. There are still lots of humans guiding the behavior, but a lot of it is done automatically.
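    The loop, reduced to a toy sketch (Python; the index contents and document names are made up, and this is obviously not Google’s actual algorithm):

        from collections import defaultdict

        # Results that get clicked for a query get boosted the next
        # time that query runs: feedback folded into future rankings.
        index = {
            "singularity": ["vinge-interview", "kurzweil-book", "wiki-entry"],
        }
        clicks = defaultdict(int)        # (query, doc) -> click count

        def search(query):
            docs = index.get(query, [])
            return sorted(docs, key=lambda d: clicks[(query, d)], reverse=True)

        def record_click(query, doc):
            clicks[(query, doc)] += 1

        print(search("singularity"))     # feedback-free order
        record_click("singularity", "wiki-entry")
        record_click("singularity", "wiki-entry")
        print(search("singularity"))     # clicked doc now ranks first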

    This is one of the senses in which futurists use the term “non-biological intelligence”. It isn’t always talking about a general-purpose, self-aware AI. It sometimes just means some contrivance that vastly augments or expands human cognitive abilities. In that limited sense, you could call Wikipedia “non-biological intelligence”, as is the written word itself.

    As I said, the multi-exabytes of data on the Internet aren’t just sitting passively. They’re being analyzed in various ways. When your ratings on Netflix get incorporated into other peoples’ recommendations, something clearly intelligent has happened. But you can’t point to any single piece of hardware, software, or wetware as the seat of that intelligent behavior.

    I think that in this discussion, you’ve had a far too narrow interpretation of what it would mean for a machine intelligence to “incorporate” a human brain.

  76. @GREGLONDON

    To be clear, when I say “the Internet” I mean the Internet as an entity, with all its data and all its users, not just the Internet as a collection of networked wires and computers. In other words, I mean the Internet that most of us mean when we talk about it today. When we say “Ask the Internet” we are not talking about wires. We are talking about something entirely different.

    This Internet could easily pass a Turing test. It just hasn’t been invited to the table yet. It’s already way more than leapfrogged that level.

    The Internet is self-aware to the extent that it (“it” in the above sense, including its users as constituent parts of “it”) produces articles about itself. It introspects. It improves itself. It invents things. You or I can read a blog post, get inspired, and invent something, but other parts of the Internet are getting similar inputs to what your or my small brains are getting, and they often invent the same things, almost as quickly if not more quickly and more often. They do this day in, day out, 24×7. The Internet is doing this. Not without us, of course — we are part of it.

    Kevin Kelly has started talking about this too, but in very heavily couched terms. As a famous pundit, he has a lot to lose by putting forward something this provocative, so he has to be careful. He calls it “The One” I think. It’s pretty clear the Internet is already smarter than any one of us.

  77. I’m a fan of the concept if not totally sold on its impending reality. Sooner or later something artificial will form an opinion about something, but all that may do is prove wrong everyone’s grandfather who had once said “opinions are like assholes, everybody’s got one.”

    Beyond that, it will be another step toward the much-vaunted singularity, but it won’t make us super-smart X-Men right away. Hope I don’t burst too many a geek bubble.

    It’s very likely that we may not even recognize the opinion or even be informed of it, or find it that interesting. Wouldn’t a computer form opinions on things important to it, such as processing speeds, programming languages and such? And would it bother to sort out our language to tell us right away, or would it first spend time debating with some other computer it found to discuss the issue with?

    In the end, they’d just spend hours bickering back and forth on twitter about all the stupid sound files their users install.

    In response to #61 posted by Keeper of the Lantern: “Smart machines may be able to think more quickly, but they won’t be able to do fundamental experimental physical research much more quickly than humans.”

    A robot at the Center for Computational Learning Systems at Columbia University did just that, though. Link: http://www.wired.com/wiredscience/2009/04/robotscientist/

  78. I don’t think anyone is able to make good predictions on human-level AIs. However, this doesn’t matter to uploaded minds – they would be human minds, so there’s no issue of whether AI could be developed. There are only two factors involved:

    The hardware to run the emulator and the process by which the information is extracted from the human mind.

    We can make reasonable predictions as to the former. Kurzweil talked a lot about this in his book but made one real blunder here: his price point was the home PC. That’s neither the right hardware nor the right price point. It’s going to be high-reliability server-class equipment that’s used, and a lot more than $1000 will be available to spend on it. Consider the person who gets a terminal diagnosis – would they not spend basically everything they have on an upload? The hardware for uploading will be ready (at least for some people) at least 10 years before his prediction.

    The reading process, though, I don’t believe we can make any reasonable estimate on.
