Ken MacLeod on socialism, Singularity, and the rapture of the nerds


26 Responses to “Ken MacLeod on socialism, Singularity, and the rapture of the nerds”

  1. Boundegar says:

    I still don’t get this “singularity” business. We’re supposed to upload ourselves to the mega-future-internet and live forever? When was the last time you saw a computer live forever?

    The fact is, most computers have a useful life of 3-5 years. The human body, 50-100. And as for the eternity of software, when was the last time you used a ten-year-old program? Or even accessed ten-year-old data? And let’s not even get started on the possibility of malware.

    • SamLL says:

      In fact I just used sed, a 38-year-old program, about three minutes ago.

      • Boundegar says:

        Well that just invalidates everything I said.  Where do I go to get uploaded?

        • Nomen Nescio says:

          actually, it kind of DOES invalidate (some of) your point. i access years- and decades-old data fairly often; it’s all in ASCII-based or otherwise free and open formats, which is exactly what keeps it working.

          a better criticism would’ve been that the GNU sed most of us *nix geeks use today is (1) nowhere near thirty years old, and (2) noticeably different from the Bell Labs original. still backwards compatible with that original, true, but much expanded.

          still doesn’t mean we’ll ever get to upload ourselves to the matrix, much less that it would be even remotely smart to do it were a miracle to ever make it possible (protip: don’t bet on miracles happening; mind uploading would require at least a couple). just that bit rot wouldn’t necessarily be the main reason it would be a bad idea.
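
          to make the open-formats point concrete, here’s a minimal python sketch. the only assumption is a unix-like host, so /etc/passwd exists:

            # minimal sketch of the "open ASCII formats age well" point: the
            # colon-delimited /etc/passwd layout has been stable since 1970s
            # unix, so any modern language reads it unchanged.
            with open("/etc/passwd") as f:
                for line in f:
                    if not line.strip() or line.startswith("#"):
                        continue                     # skip blanks/comments
                    name, _, uid, _, _, home, shell = line.rstrip("\n").split(":")
                    print(f"{name}: uid={uid}, home={home}, shell={shell}")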

      • I used Nastran (b. 1968) earlier today. I beat you by six years. I win!

    • bkad says:

      Well, people have different ideas of what the singularity is, and even whether it is a reasonable theory. I can’t claim I ‘get it’ either. But in the interests of working it out together, here is what I understand. I most often hear ‘singularity’ applied narrowly to artificial intelligence: the singularity is the point at which a human-created artificial intelligence becomes smarter than humans, to the point where its thought processes are impossible to understand and machines thereby become akin to gods. (And by extension, technology fashioned by artificial intelligence is similarly inconceivable to humans, because the mind that created it so far exceeds our own.) To me that’s a rather dark vision, but I guess some use ‘singularity’ more broadly…

      • Nomen Nescio says:

        in the sense of human-level AIs driving technological advance faster than humans can keep pace… well, okay, maybe i can swallow that. provided that somebody first creates a human-level AI, of course. not really holding my breath for that, though.

        but in the more general sense, the whole “singularity” business has always seemed fishy to me. having technological advance proceeding faster than the people driving it can keep up with just seems like an oxymoron, in my eyes. i’m only giving the inventors-are-all-AIs scenario a sort-of pass because i’m guessing that a human-level AI could be run on faster processors than biological neurons, outpacing us that way. but that’s really just a SWAG, too, since we have nothing like a human-level AI and do not know what their limits might actually be.

        Alice, in Looking-Glass land, had to run as fast as she could just to stay in one place, but in mundane-land you can’t create a new advance unless you can first comprehend the problem, which implies you grasp the current state of the art. which limits the speed of progress, ultimately, to however fast the smartest humans can think. assuming we can invent something to do our thinking for us, and moreover assuming that that thing will be able to think faster than we can, is building our cloud castles one level too tall for my liking.

        • Brad Arnold says:

          The secret is HMM (hidden Markov models). There already is strong AI; it just hasn’t been built to pass the Turing test. Check out Mind’s Eye, for example.

          • Tynam says:

            Useful though HMM algorithms are, they’re a very long way from being a ‘strong AI’ in the sense we mean here.  (i.e. a meaningful general-purpose AI of the kind you’d need for the Singularity.) 

            We don’t know how to build one, we don’t know how to find out how to build one, and even if we could we don’t know how to make sure it does what we wanted it for.

          • Brad Arnold says:

            People mistake anthropomorphic strengths for strong AI. When I say that a strong AI need only be successfully self-adapting, the answer I get back is “we can already do that.”

            If not for the Turing test, we could already say “we do that”: HMMs, neural nets, multi-core processors, etc. Mind’s Eye is a perfect example of soft-AI visual recognition. Siri is a perfect example of soft-AI verbal recognition.

          • Tynam says:

            (In reply to Brad above; thread reply limit.)

            Everything you say is true, but irrelevant to the Singularity. Sure, Siri is pretty good soft-AI verbal recognition, but it’s not going to design a better version of Siri any time soon. And nor is any other AI we have. There’s nothing we can build at the moment that is capable of runaway self-improvement, which is the key step for the Singularity. Current HMMs and neural nets can’t clear that hurdle. They’re not just ‘a little’ short of solving the problem; they’re out by several orders of magnitude in computing power and flexibility.

            Saying ‘if not for the Turing test’ is actually a pretty big exception.  It’s not necessary that the AI be able to pass Turing, or pass for human at all, but it is necessary that it be able to solve tasks as complicated as that.

          • dnebdal says:

            Hidden Markov Models? Sure, they’re useful – but there’s a long step from one to the kind of independently-operating, self-driven AI usually meant by “strong”.
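
            To make the HMM point concrete, here is a minimal Viterbi decoder in Python: a sketch using the classic textbook weather/activity toy numbers, which are invented for illustration and not taken from any real system. It recovers the most likely hidden states behind a sequence of observations: genuinely useful pattern recognition, and nothing like open-ended intelligence.

              # minimal hidden Markov model decoder (Viterbi), pure Python.
              # all probabilities below are toy numbers for illustration.
              def viterbi(obs, states, start_p, trans_p, emit_p):
                  # prob[t][s]: probability of the best path ending in s at step t
                  prob = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
                  back = [{}]
                  for t in range(1, len(obs)):
                      prob.append({})
                      back.append({})
                      for s in states:
                          prev = max(states, key=lambda p: prob[t-1][p] * trans_p[p][s])
                          prob[t][s] = prob[t-1][prev] * trans_p[prev][s] * emit_p[s][obs[t]]
                          back[t][s] = prev
                  path = [max(states, key=lambda s: prob[-1][s])]
                  for t in range(len(obs) - 1, 0, -1):  # trace the best path backwards
                      path.insert(0, back[t][path[0]])
                  return path

              states = ("Rainy", "Sunny")
              start_p = {"Rainy": 0.6, "Sunny": 0.4}
              trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
                         "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
              emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
                        "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
              print(viterbi(("walk", "shop", "clean"), states, start_p, trans_p, emit_p))
              # -> ['Sunny', 'Rainy', 'Rainy']: decoding, not understanding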

      • Paul says:

        Specifically, it’s the idea that if human intelligence is bright enough to produce an intelligence greater than itself, that intelligence is probably likewise capable of improving on its own design. So V1 might be better equipped to keep entire processor schematics in mind, and so come up with a computer chip design 3x faster than ours. And we build the chip, and run V1 on it, and now it’s a bit smarter and fixes some mistakes it thinks we made in its source code. So it/we compile that new version and get V2. Without the errors inherent in V1, V2 can’t believe we aren’t using a proper quantum computer, and designs one, plus a quantum algorithm for itself, V3, significantly better than V2. And now V3 discovers significant new science, which it uses to build V4…

        At any point along that line there may be an asymptote. Perhaps V4 makes use of all the tricks of computation and intelligence available, or V3 needs to iteratively improve itself in tiny ways for ten million years to reach the insight in V3.992 that allows V4, or we’re already at the asymptote and we can’t build an intelligence smarter than ourselves. Perhaps the first step is millennia away. But in any case, this ‘bootstrapping’ AI concept is the core of the singularity. The ‘and then we’ll…’ part is just guessing; in fact the name ‘singularity’ is supposed to imply that our guesses are probably wrong.
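
        Here’s a toy numerical sketch of that loop in Python; the update rules are pure invention, chosen only to show how the same bootstrapping loop can either run away or flatten into exactly the kind of asymptote described above:

          # toy model of the bootstrapping loop: each version's ability sets
          # how much it can improve its successor. both update rules below
          # are invented purely to contrast the two outcomes.
          def bootstrap(iq, improve, generations=10):
              history = [iq]
              for _ in range(generations):
                  iq += improve(iq)               # V(n) designs V(n+1)
                  history.append(iq)
              return history

          # scenario 1: compounding returns -- every gain enlarges the next
          runaway = bootstrap(1.0, lambda iq: 0.5 * iq)

          # scenario 2: diminishing returns near a hypothetical hard ceiling
          CEILING = 10.0
          leveled = bootstrap(1.0, lambda iq: 0.1 * (CEILING - iq))

          print([round(x, 2) for x in runaway])   # exponential takeoff
          print([round(x, 2) for x in leveled])   # flattens toward 10.0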

    • Benjamin Terry says:

      I think using 10-year-old programs, or accessing data more than 10 years old, is pretty common. Just in video games alone, I’ll still play the occasional Doom (19 years), Asteroids (33 years), Tetris, Deus Ex (12 years), Street Fighter III: 3rd Strike (13 years), etc. Something like Bash in Linux is trickier, as it does receive updates, even though to almost everyone it is the same as using it 15 years ago. There are people who still like sequencing music on Ataris, or on old Akai MPCs running 10+ year old software.
      As for 10-year-old data, it comes down to how you classify its age, but pretty much any photo album qualifies, along with most history, music, novels, movies, philosophy, science, etc.

      While computers don’t last forever, to stay with the video-game example: there are Atari 2600 emulators that can run all of the games on modern PCs, and essentially the entire catalog of software there has been preserved and scattered across the internet, all of it 30+ years old. Working hardware is definitely still out there in droves, but even when it breaks down, the software is no longer tied to the hardware it was written for.

      I guess that could maybe be taken as a metaphor for at least part of the Transhumanist dream as I understand it: “The software is no longer tied to the hardware it was written for.” Still pretty sci-fi, and maybe there is some reason, yet to be discovered, why it’s impossible. When you throw it in with faster-than-light travel, terraforming, space elevators, sustainable fusion reactors, and Star Trek-style replicators… I’d say it seems more likely than some of those, and less likely than others. I really don’t get the confidence many Singularitarians seem to have about the timeline they expect all of this to follow, however. It’s all at such an early, speculative stage.
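
      As for the mechanics of that hardware independence: an emulator is, in miniature, just a fetch-decode-execute loop that interprets the old machine’s instructions in software. The three-instruction machine in this Python sketch is invented for the example; a real Atari 2600 emulator does the same thing over the 6507’s actual instruction set.

        # a hypothetical three-instruction machine, invented for this sketch
        def run(program):
            acc, pc = 0, 0                   # accumulator, program counter
            while pc < len(program):
                op, arg = program[pc]        # fetch
                if op == "LOAD":             # decode + execute
                    acc = arg
                elif op == "ADD":
                    acc += arg
                elif op == "PRINT":
                    print(acc)
                pc += 1

        # the "ROM" is plain data; it runs wherever the interpreter runs
        run([("LOAD", 2), ("ADD", 40), ("PRINT", None)])   # prints 42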

    • Chris Dias says:

      “The singularity” and mind-uploading are two different things. The first will just quickly lead to the possibility of the second.

  2. SedanChair says:

    The writer had calculated that the continued existence of Afghanistan would delay the [Singularity] by six months. Millions around the world who would have had a chance of eternal bliss would be irretrievably lost to natural deaths in the interim. According to strict utilitarian reckoning, exterminating the Afghans via a nuclear carpet-bombing campaign would be the kinder course.

    Singularity boffin: “My +1 Mask of the White Man’s Burden keeps falling off. After the Singularity, we will have better mask straps and no one will be able to see my colonialism”

  3. Brad Arnold says:

    Imagine there’s no heaven
    It’s easy if you try
    No hell below us
    Above us only sky
    Imagine all the people living for today

    Imagine there’s no countries
    It isn’t hard to do
    Nothing to kill or die for
    And no religion too
    Imagine all the people living life in peace

    You, you may say 
    I’m a dreamer, but I’m not the only one
    I hope some day you’ll join us
    And the world will be as one

  4. tigerthink says:

    I’m pretty involved in the singularitarian movement, and I haven’t seen anyone advocate anything like carpet-bombing Afghanistan. That seems like an extremely bad idea, even from a utilitarian perspective. Removing the taboo against nuclear weapons like that would be disastrous.

  5. cfuse says:

    We might get there faster without Afghanistan, but the trip would be far more pleasant without those extropians. 

  6. FoolishOwl says:

    If socialism is dead, and ideals of human solidarity along with it, then it’s rather hard to explain what OWS is up to, or what the significance may be of the ties between OWS, Anonymous, Wikileaks, and many groups struggling against colonialism, exploitation, and war, and for civil liberties. There seems to me to be a re-emergent tendency around the world with ideals of egalitarianism and solidarity.

    My sense of Ken MacLeod, from the books of his I’ve read, is that he has an incredibly thorough knowledge of Trotskyism in the UK and US in the 1980s, and a near-complete lack of knowledge of the radical left outside that window. He doesn’t even seem to know what happened to those same groups in the 1990s. It seems as though his interest stopped at his first encounter with conservative criticisms of 1980s Trotskyism, and, sticking with the assumption of Trotskyists of that era that no one else on the left even existed, he assumed that a few comments from Hayek meant the radical left was completely defeated.

    As far as the Singularity and transhumanism go, I think that in the first place it’s a mistake to tie the two concepts too closely together. I’ve come to suspect that intelligence amplification, insofar as it may be possible, will face rapidly diminishing returns. Partly, there’s a limit to how many of our problems could be solved by raw calculation. And somewhere I once saw a quip: “Millions long for immortality who don’t know what to do with themselves on a rainy Sunday afternoon.”

    Beyond that, I think one thing we’ve learned about the human psyche in recent years is how fundamental empathy is, and how readily we extend it to other living things, and even to inanimate objects. The exciting thing to me about transhumanism is that we may extend our notion of what constitutes the human community, not constrict it to nothing, as MacLeod seems to fear.

  7. Ken MacLeod says:

    FoolishOwl, I can see how reading a certain fraction of my books could give that impression of what I know and think about the left, but it’s not actually true.

    • Tynam says:

      For a start, I’d have to believe that you held completely different views depending on which book I pulled off my shelf today.

      (I prefer, of course, to assume that you’re an author, and your worlds do not necessarily reflect all of your opinions.  Even in The Cassini Division.)

      Can I be the first to say thanks for joining us?
