Meet the six androids that will never exist

What will the future of artificial intelligence actually look like? We're already getting some clues from projects like Hiroshi Ishiguro's Geminoid series, with its incredibly realistic bodies, writes my friend Dennis Cass at io9. But we're also seeing hints of what real-life androids won't be like.

In a post last week, Cass looks at some common fictional tropes that have shaped our expectations of androids but probably won't be present in the real thing.

The android that finds humanity to be a deep, abiding mystery

We flatter ourselves: A machine could never understand jokes. Then IBM's Watson uses natural language processing to understand the punning intent behind Jeopardy! questions, and we're proven wrong. If anything, the android will see us more clearly than we see ourselves. Jeremy Bailenson, director of Stanford's Virtual Human Interaction Lab, used the Xbox Kinect to analyze body language during student-teacher interactions to "mathematically uncover subtle movement patterns, many of which would not be noticed by the human eye." Psychologist Paul Ekman has discovered the "micro-expressions" the human face flashes during a lie; if facial recognition software can read them, then the android can know them.
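
As a rough illustration of what "mathematically uncovering subtle movement patterns" might involve, here's a minimal sketch in Python. It assumes a hypothetical recording of body-pose keypoints (frames x joints x xyz coordinates) of the kind a depth camera like the Kinect can produce; the function name and the 5 mm "too small to notice" threshold are invented for illustration, not taken from Bailenson's actual analysis.

    # Hypothetical sketch: quantify per-joint movement from a pose recording and
    # report how much of it falls below a rough "visible to a human observer" threshold.
    import numpy as np

    def subtle_motion_profile(pose, threshold_m=0.005):
        """pose: array of shape (frames, joints, 3), coordinates in meters.
        Returns each joint's mean frame-to-frame displacement and the fraction
        of its movements smaller than threshold_m."""
        step = np.linalg.norm(np.diff(pose, axis=0), axis=2)   # (frames-1, joints)
        return step.mean(axis=0), (step < threshold_m).mean(axis=0)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Fake recording: 300 frames, 20 joints, tiny jittery movements in meters.
        pose = np.cumsum(rng.normal(scale=0.003, size=(300, 20, 3)), axis=0)
        mean_step, subtle_frac = subtle_motion_profile(pose)
        for joint, (m, f) in enumerate(zip(mean_step, subtle_frac)):
            print(f"joint {joint:2d}: mean step {m * 1000:.1f} mm, "
                  f"{f:.0%} of steps below the visibility threshold")

The point isn't the specific numbers; it's that once movement is recorded as data, patterns too small or too fleeting for a human observer fall out of a few lines of arithmetic.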

Ultimately, as "big data" gets bigger, we'll ask ourselves what we want our androids to share. Do we charge them with stopping us from making bad life decisions? Or do they help us maintain our innocence? Fiction has its Rikers, the wise humans who preside over the sentient machine, often with a whiff of bearded condescension. Maybe the android will be the one who wears the bemused smile.

Meet the other five androids that will never exist in Dennis Cass' full post on io9.



  1. This seems like a challenge to the science fiction writers among us for a story about an android that doesn’t fall into any of those personifications.

    1. What? It’s not a slideshow, at least not where the link takes me.
      But yeah, when it’s a slideshow, more often than not, I close the tab, except when it’s shocking new pictures of the Lohans or Kardashians :-P

  2. “Meet the six androids that will never exist”


    “…but probably won’t be present in the real thing.”

    Will never exist or probably won’t, which is it?

  3. The title is a bit misleading: “Meet the six androids that will never exist” is very general, whereas the title of the io9 link is “How close to reality are the androids in Alien and Prometheus?”

    Hence, no mention of Asimov and the Three Laws of Robotics (four if you’re punctilious and feel the need to include the Zeroth Law) in an article that includes “the android that suddenly turns murderous” as one of the six.

  4. “The android that finds humanity to be a deep, abiding mystery”

    My take: If we understand how our minds work well enough to program an android that smart, we’ll be able to program that understanding into it. (Not that humans don’t find humanity to be a deep, abiding mystery as well.)

    “The android that yearns to be human”

    Nothing implausible to me about internalized speciesism, though that’d depend on how well we manage to untangle our emotions before we program them in.

    1. Not to mention that in most fiction, robots (even those of human intelligence and appearance) tend not to have legally protected rights and are often the property of humans (think of the rather creepy world of Star Wars, where C3PO, etc. are basically slaves that can be bought and sold). Why wouldn’t an intelligent being want to be part of the non-slave class?

      1. While I find the world of Star Wars somewhat creepy and would like to look more closely at those androids: was there ever any indication that they are truly sentient?

        1. Good point. They *act* sentient (especially C3PO) and seem to have personalities that don’t really help them do their jobs, but maybe it was all an intentional part of solving the “uncanny valley” problem.

          1. Twenty years ago I would have said they are persons. Like Data. But these days I’m leaning more towards the interpretation that they are Apple’s Siri on steroids: fake personality, powerful artificial intelligence, but no real sense of self.

        2. Moaning about your mistreatment by fate when no people are around to hear you sounds pretty good to me. If your standards are higher than that, I’m not sure all the people I’ve known will meet them.

          Now, how do we know Wookiees are any more sapient than tool-using dogs? All they ever say is “rraughn”; Han’s probably making the rest up, the way people do. Right?

          1. I’m not entirely convinced by your C3PO example – my alarm clock goes off even when I’m not around. To me, he seems to act entirely within the scope of his programmed function, enhanced by a quirky personality programmed in by his hipster owner. I don’t remember creative acts, not even when he talks to the Ewoks.
            But that reminds me of R2D2 exacting revenge against an Ewok. That lies so far outside what I would expect as sensible programming for a repair droid that it might constitute a personality.

            2. @retepslluerb: I’m sorry, did you just call Darth Vader a hipster?

            Somewhere out on the intertubes is an essay that postulates that R2-D2 runs the Star Wars universe. It’s not an entirely convincing argument, but R2 is the only main character that existed prior to the telling of the story and survives all six episodes. I don’t count the ghosts as having survived the whole story.

        3. If they are non-sentient, that is to say, not willful, then what is the purpose of a “restraining bolt”?

    2. Quite simply, whatever we may not program into a sentient android may appear as a mystery to it, just as it does to us, because it wasn’t programmed into us by Darwin. It does seem that we would need to have solved the mystery of human personality to write a strong AI, but we may leave that knowledge out of the program if we wish.

      It may be a matter of computational convenience. We approximate because exact inference is often computationally expensive. And even then we often round our numbers.

      It may even be an ethical consideration. We don’t force people to give up religion; will we force them to give up the mystery of individuality? Perhaps we’ll give the same respect to androids.

  5. I can almost guarantee that an android like Data will exist, as long as there are nerds in the future who love retro shows like Star Trek and will build a fully functional Data replica in their basement, along with a genetically engineered tribble.

    1. They’ll use up the raw material to build a Number Three, a Six, an Eight, an Andrea, and a Leia. Make that two Sixes – both models.

  6. The first example listed here, along with the picture of Data, requires me to point out that Data had an “older brother,” Lore, who DID understand such things but was found to be disturbing. Well, that, and he was a psychopath and mass murderer. Data was made less human to set him apart, but given a drive to more fully understand humanity. I suppose, from a story perspective, it made him stand out in a way similar to Spock while giving him a purpose.

    1. I agree. Data was clearly written that way as a foil to the human and alien characters. He forces an analysis of what humanity is, and acts as a stepping stone toward understanding the alien cultures they meet.

      This is the exact same role Spock had, but making Data a new life form that only one person in the universe understood is a great setup for exploring this stuff.

      Considering that the best science fiction is all (beneath the surface) about examining humanity rather than examining alien cultures, Data (and Spock) are rightly some of the most beloved sci-fi characters.

  7. A lot of the arguments in the article rely on the logic that “companies wouldn’t program them that way because it’s not marketable.” I think this is shortsighted and deals only with the current level of AI, or more accurately “pseudo-AI,” since these systems aren’t really intelligent at all, just following clever tricks in their code to appear intelligent. A true AI will be able to learn, think, and form opinions for itself. And of course, if and when we ever cross the Singularity, their thoughts and motives may quickly skyrocket beyond our comprehension. So the very idea that “androids won’t be this way because humans wouldn’t want it” is pretty absurd. Past the Singularity, most AI will probably take on android form as a way of dealing with us, so the motives will be entirely theirs. Intelligent beings develop a wide variety of behavior patterns, many of which aren’t remotely logical. Therefore I think it’s entirely possible that any of these archetypes could exist.

  8. Meet the six androids that will never exist.


  9. The problem with positing that an artificial intelligence won’t murder us all in our sleep is this:

    While humans are more powerful than the AI, the AI will work with us to achieve its goals. Most people won’t see any reason to institute safeguards, since there are no signs of bad behavior yet.

    It is more efficient for the AI to achieve its goals if it is more powerful.

    When the AI is more powerful than humans, it has no reason to work with us to achieve its goals.

    You are made of atoms that the AI can make use of elsewhere.
