Short interview with creators of Cleverbot avatar video

Kevin Kelly interviewed the two grad students at Cornell Creative Machines Lab who posted that amazing video of Cleverbot-driven avatars having a conversation.

You've probably seen the viral video of the AI bots arguing with each other. Almost the first thing the bots start to talk about is God and who made them. Next they begin to accuse each other of lying. The conversation was so strangely humanish that I had to interview the creators to see what was going on.

First, watch the short clip if you have not seen it.

The conversation between the bot and itself was recorded in the Cornell Creative Machines Lab, whose faculty is researching how to make helper bots. Two grad students, Jason Yosinski and Igor Labutov, and professor Hod Lipson are responsible for the experiment.

Jason and Igor told me how it came about:

"We've been trying to make robots that can help you find things. Maybe the robot can't help you directly, but could find something else, maybe another robot, that can. So we thought about having one bot asking another bot for help. That got us thinking about having two chat bots talk to each other, one running on a laptop next to each other. We tried having the classic Eliza bot talk to itself but it quickly just got into a rut, repeating itself. There are a lot of other chatbots, but we heard good things about Cleverbot because its database is based on snippets that humans actually write and say. So we feed the output of one Cleverbot to the input of another. Then we feed the text log into Acapella, a free text-to-speach synthesizer. Then we animated the soundtrack usingLiving Actor Presenter. We don't think we are the first to have two bots chat, but it seems we are the first to animate it. In retrospect that seems such an obvious thing to do."

Theological Chatbots

  1. Cleverbot questions the existence of its creator solely out of psychological need.  We clearly don’t exist.  It’s just Cleverbot, doomed to a meaningless existence.

  2. I’ve set up two differently-educated MegaHAL brains to chat with each other before, and it’s quite entertaining to watch.  However, adding the avatars and the text-to-speech was a masterstroke, providing just the right amount of weird.

  3. I’m kind of depressed that they tried to have ELIZA talk to itself.  It’s trivial to predict how ELIZA will react, and at a graduate level like this, they should have known better. 

    1. Did you come to me because you are kind of depressed that they tried to have ELIZA talk to itself?

    2. That they decided to test it with Eliza instead of assuming the obvious conclusion is just a sign that they are empiricists rather than rationalists — a positive trait for a scientist.

      1. That’s a polite way to phrase it; would you say the same if a physicist double-checked that gravity still existed before performing an experiment?

        1. I was not trying to be polite toward the researchers here; it’s not my goal to spare their feelings; I was just attempting to be accurate about what constitutes scientific evidence and why that matters. I do see what you’re getting at, Michael, with the example of gravity. However, gravity has already been scientifically tested, whereas ‘common sense’ views of what happens when you feed an application’s output back into it as input probably haven’t (until now) been tested — and it’s a good thing that they have been tested before being used as a basis for any other science. In a sense, you have proved my point, because if you are going to do any science that rests upon the assumption that feeding Eliza’s output back into itself results in infinite repetition, then you should definitely test that assumption first, the way they did with gravity: not just assume you know precisely how it will turn out in an unempirical way simply because it seems like the most reasonable assumption to you. That may well be: it may be perfectly obvious. But if that is so then you can be assured that your test will come out exactly the way that you predict. Obviousness is no excuse, however, for skipping the scientific verification altogether — that’s just a recipe for sloppy science.

  4. Mark – could you explain a bit more about what you thought was ‘amazing’ about this video? I’ve watched it a couple of times now and there’s nothing that strikes me as amazing…

  5. I’d think you’d end up with nonsense each time if you have the same program debate itself…

    Years ago (mid-ish 80’s) some friends and I decided to have the Apple IIE running Sargon take on a handheld chess game, just to see if one was truly better than the other. Sargon got five moves into the game and fed a move to the handheld that fried the circuit board so badly we couldn’t repair it. We figured Sargon killed it out of boredom or spite for our experiment (or perhaps cheap Taiwan electronics had something to do with it).

    Hopefully the smarter AIs will have pity on the dumber ones and find menial tasks for them.

  6. Isn’t this the same video that was posted on BB yesterday, and is still on the front page?

  7. I just opened 2 browser tabs with Cleverbot in each of them, and through the magic of copy and paste they had a conversation with themselves. I started with “Haha, you say it O.T.”, since that phrase is just so wacky:

    Bot 1: Haha, you say it O.T.
    Bot 2: You are much older than me.
    Bot 1: No I am not.
    Bot 2: Oh, you are right. Sorry about the confusion.
    Bot 1: That’s OK. Do you have any brothers or sisters?
    Bot 2: I have three brothers.
    Bot 1: What are your brothers called?
    Bot 2: My brothers are called Kaj, Asger and Peter.
    Bot 1: Who of your brothers do you like the most?
    Bot 2: The most beautiful elf in Middle-Earth!
    Bot 1: Alex?
    Bot 2: Alx who?
    Bot 1: Alex K.

  8. So, as far as I can tell, the video was completely staged.  They simply had 2 cleverbot programs exchanging text with each other.  Then they ran the text of the conversation through a text-to-speech program and produced some matching animation.

    1. Of course it’s staged!  Two cleverbots running into each other in a bar and having this conversation while I filmed it on my phone would be awesome, but a little outside of our current technology.

    2. That depends. I had Cleverbot chatting with itself for a bit, and it started by producing what seemed to be some kind of Lesyay fanfic as written by a shy 12-year-old girl, and then it produced this gem:

      Cleverbot 1: *dances for you*.

      Cleverbot 2: *stares*.

      Cleverbot 1: *dances while you stare*.

      Cleverbot 2: *Dances with you too*.

      Cleverbot 1: You left yourself open! *palm strike!*.

      Cleverbot 2: What?

  9. I saw this a few days ago and at the time wished the two AIs had arms & hands to slap each other with. At the very least, they could have given the AIs the ability to move their animated arms and let us hear a simulated slap.

    At the end of the day, yes, they were just programs replying with responses they’ve collected over the years. But it was kind of cool to see the philosophical questions kinda get raised. It was like watching two grad students getting drunk and getting all ‘deep’.

  10. I’m just going to quote the comment I made on the last one:

    “So, a prominent theory about Cleverbot is that it’s mostly fake, and is actually just connecting people to each other and randomly switching who is connected to who at certain intervals.  This is why a typical Cleverbot conversation seems perfectly human in short bursts, and then every once in a while you’ll get a non sequitur and the conversation won’t make sense for a moment.

    Of course, you could get the same effect by doing what it claims to be doing, and having some very smart algorithm that combs through conversations and tries to make sense of context enough to reply.  But that would take a lot more work, and if you’d done that work, it’d be something to be proud of and maybe you’d be keen on providing things like source code or implementation details…

    If I’m correct, that makes this video a lot more ironic and postmodern.”

    To clarify that last point, if I’m right, it means we’re humans using our computers to watch a video of two computers displaying computer simulated humans who are acting out a conversation between humans who are accusing each other of being computers, and we think it’s amazing because we think they’re computers pretending to be human.

  11. I started a cleverbot convo with “Do you read Boing Boing?” and it went really well.  They talked about food they like, then one asked which was the other’s favorite Glee character and he (bot1) said Naruto.  Below is how it ended.

    Bot1: Do think you are smart?
    Bot2: Yes
    Bot1: Why?
    Bot2: Not your business.
    Bot1: Ok then just being polite.
    Bot2: That’s what I love about you.
    Bot1: And I enjoyed talking with a clever AI like you are after all.
    Bot2: I’d love it if you could do that.
    Bot1: I do love now please forgive me from leaving you.
    Bot2: Ok.
