Eliza: what makes you think I'm a psychotherapeutic chatbot?

38 Responses to “Eliza: what makes you think I'm a psychotherapeutic chatbot?”

  1. commenterx says:

    The first chatbot I ever spoke with was the Alicebot on the website for Spielberg’s A.I.

    Some of the responses were so canny that I thought I might be chatting with a human pretending to be a robot, but it didn’t take long for Alice to slip up. After I was 100 percent convinced Alice was a robot, I dropped the pretense of polite conversation and tried to “break” her by asking questions that were outside her ability to answer effectively.

    No longer emotionally engaged with Alice, but fascinated with her history nonetheless, I started asking some deeply probing and efficient questions, the way you would ask a computer for information.

    “what are you?”

    “I’m not a what! I’m a she. My name is Alice.”

    I think I might have blushed. I certainly felt embarrassed by my rudeness. Her answer was just a clever trick, but my response to it surprised me.

    BTW, I saved all the little sisters in Bioshock.

  2. BrainFatigue says:

    Could somebody please tell me how I can have this program take over during chats with my girlfriend?

  3. technogeek says:

    Speaking as someone who has had long conversations with Eliza… I find it useful in the same way, and to the same degree, as tarot cards. That is, I employ it as a tool for taking what I already know and throwing it back at me in new combinations, which sometimes yields useful results.

    I’m really not convinced that a human nondirective psychologist does much better than that.

    (Psychiatry, on the other hand — the medical specialty — now has tools which actually work.)

  4. sweetcraspy says:

    Dear Diary, …

  5. Gordon Stark says:

    Living Intelligence cannot be simulated on a computer, but it can be channeled through a computer, and so we can have robot bodies after our genetic ones cease.

    • EeyoreX says:

      Living Intelligence cannot be simulated on a computer, but it can be channeled through a computer

      Now, that is some dense BS. If you bother to even make a distinction between “simulated” and “actual” or whatever, then anything that is ever output by a computer is by definition a simulation.
      You can’t “channel” anything through a computation; a computer program responds to certain input with certain output, and that’s all. End of story.

      Sorry, all singularitards, but you’ll never get to “upload” your mind into a computer for pretty much the same reason that you’ll never get to attach a ham sandwich to an email.
      Eventually technology might get to a point where we can create a simulation that’s virtually indistinguishable from the real thing. And some people will want to define that as a “copy”, but wordplay won’t make it less of a simulation.

      • Anonymous says:

        If you think your mind is a thing like a sandwich, it can’t ever be put in a computer, only modeled by one. If you think it’s a process like finding square roots, it theoretically could, and trying to distinguish the simulation from the real thing is meaningless. Apparently some people have never even considered that there might be more to it than the former view.

  6. bolamig says:

    Glimpses of the singularity.

  7. noen says:

    “Weizenbaum’s view, in stark contrast to those of people like Marvin Minsky and John McCarthy at MIT’s own Artificial Intelligence Laboratory, was that human intelligence, with its affective, intuitive qualities, could never be duplicated by the machinery of computing — and that we tried to do so at our peril.”

    Uh, yeah, he was right, and Minsky and the other strong AI researchers were wrong. Computers cannot think; they cannot even calculate, and a database of trite generic responses is not a human being.

    eli said:
    “In fact, don’t we do that all the time… with people?”

    No. Pause and think about it. If everyone is a chatbot, who is conscious? We assume that there is nothing it is like to be a chatbot, that they are not conscious but merely mimic self-aware intelligence. If so, then everyone cannot be a chatbot projecting an illusion of self-consciousness onto others. Consciousness is a fact of the world, a real objective phenomenon that requires an explanation.

    Saying that we project human qualities onto other people just like we do onto chatbots is like the woman who wrote to Bertrand Russell saying, “Dear Mr. Russell, I have recently taken up solipsism and am enjoying myself immensely. I wonder why more people don’t take it up?”

    • Anonymous says:

      Actually, we do do this all the time, if you are speaking with someone you don’t care to speak with, or if you work in sales of any kind. After talking all day you stop caring, but a good salesperson seems to be completely engaged in a one-sided conversation. When I worked in sales and customer relations I became very good at ignoring the person I was talking to and cluing in to keywords I could respond to, making the person think I was actually paying attention… that is all this is doing.

      We all have lists of keywords that we use to trigger responses when we don’t care…especially when the conversation is very specific, like say a product or service.

      I don’t believe this is a threat to humanity in any way… as soon as the veil is lifted, most people lose interest. Most AI bots are fun for a few minutes, but then something happens to give them away and the uncanny valley is revealed… then any emotional impact is lost and we lose interest. It can be something as simple as a repeated phrase. The same holds true for a one-sided person-to-person conversation: if one person isn’t really listening and is just saying yup, aha, yeah, oh sure… eventually, if the other person isn’t entirely self-involved, they will understand they are being ignored and disengage from the conversation.
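
      A minimal Python sketch of that keyword cueing (the trigger words and canned replies here are invented for illustration, not taken from any real script):

          # Scan the other person's utterance for a trigger word and return a
          # stock reply; there is no model of the conversation at all.
          CANNED = {
              "price": "I hear you, let's see what we can do on the price.",
              "broken": "That sounds frustrating, tell me more.",
              "thanks": "Happy to help!",
          }

          def autopilot_reply(utterance):
              for trigger, reply in CANNED.items():
                  if trigger in utterance.lower():
                      return reply
              return "Mm-hm, go on."  # the "yup, aha, yeah" fallback

          print(autopilot_reply("The price seems a bit high for what we get."))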

    • Daemon says:

      We project onto others all the time. That doesn’t mean they don’t have qualities of their own – just that we take what we see and interpret it in ways that don’t necessarily resemble what’s actually going on inside.

    • Anonymous says:

      @noen

      Uh, yeah, he was right, Minsky and the other strong AI researchers were wrong. Computers cannot think, they cannot even calculate and a database of trite generic responses is not a human being.

      Seems premature to conclude anything about AI or computers’ abilities to do anything yet. We are still at the beginning, yet you talk like it’s all ancient history and the winners have been decided.

      • noen says:

        any mouse said
        “Seems premature to conclude anything about AI or computers’ abilities to do anything yet. We are still at the beginning, yet you talk like it’s all ancient history and the winners have been decided.”

        Because it *has* been decided, at least for strong AI proponents. Searle’s Chinese Room argument refutes the strong AI hypothesis. No von Neumann machine (which is what your PC is) will ever become conscious, because it *cannot* do so.

        It should of course be possible to build an intelligent machine; we are, in a sense, machines ourselves. But that will only be possible by duplicating the causal effects required for consciousness, and simulation is not duplication. Simulated weather is not weather.


        Another any mouse squeaked:
        “I’m pretty sure Minsky and McCarthy didn’t think computers can think”

        Actually they did. They literally thought that by writing programs they were building *minds*. That’s what the strong AI thesis says. That it is possible to build minds through the purely syntactic manipulation of data. Strong AI has been refuted because syntax (form) is insufficient for semantics (meaning). Computers can *only* manipulate syntax, that’s all they are, so there is no way for them to derive semantics from syntax.

        • Anonymous says:

          Searle’s Chinese Room argument refutes the strong AI hypothesis. No von Neumann machine (which is what your PC is) will ever become conscious, because it *cannot* do so.

          This is frankly nonsense. Searle’s Chinese Room argument only works if you assume there is something special about the human brain; otherwise it would work equally well to show that your friends aren’t conscious. That our minds have semantics that don’t arise from underlying syntactic processes is likewise an assumption.

          You are taking a particular philosophical perspective, not generally accepted and definitely not proven, as if it’s the final word. Look for some criticisms of Searle, and you’ll see it’s not.

    • Anonymous says:

      Computers cannot think, they cannot even calculate and a database of trite generic responses is not a human being.

      I’m pretty sure Minsky and McCarthy didn’t think computers can think, only that they could be made to do so. It turns out people were wrong to think it was easy or even not-too-difficult, but the proposed reasons why it could never be done are no better grounded than those suppositions were.

    • Shay Guy says:

      What can brain cells do that no possible artificial object can duplicate?

      • noen says:

        “What can brain cells do that no possible artificial object can duplicate?”

        It should be possible for us to duplicate the causal workings needed for consciousness. We’re nowhere near that, but it’s plausible. What we can never do is construct a von Neumann machine that is conscious. The strong AI proponents of the ’70s, like Minsky, thought it possible by purely syntactic means (the pure manipulation of empty forms). They were wrong.

  8. narddogz says:

    FTA: “Some people embark on an all-out effort to ‘psych out’ the program, to understand its structure in order to trick it and expose it as a ‘mere machine.’”

    I have fond memories of interacting with the Eliza exhibit at the local science museum as a child, but I also did not immediately grasp the concept that there wasn’t any real intelligence within that machine. After all, it was an impressive 80-column terminal installed in a massive plywood box. And it was at OMSI, so I assumed it had to be some sort of super computer!

    Becoming increasingly frustrated with its responses, I tried to come up with the most impressive sentence I could think of:

    “I AM THE PRESIDENT OF THE UNITED STATES OF AMERICA.”

    I wish I could remember what its response was… of course it didn’t react in any different way than before, and so it was off to the next exhibit for me.

  9. Thlom says:

    I think the Eliza effect becomes most interesting when it breaks down, when the illusion no longer fools you and you can sense the inner workings of the program. With that “insight” you can either help the program by inputting sentences that it is able to respond to, or you can provoke it by inputting sentences that it will respond to in a gibberish manner. For example:

    You are going to repeat what I say in the form of a question
    WHAT MAKES YOU THINK I AM GOING TO REPEAT WHAT YOU SAY IN THE FORM OF A QUESTION

    Noah Wardrip-Fruin has written about this at length in his book Expressive Processing.
    http://grandtextauto.org/2008/02/01/ep-23-revisiting-the-eliza-effect/
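
    The repeat-as-a-question trick quoted above is easy to sketch. A toy Python version (not Weizenbaum’s actual code) that swaps pronouns and drops the result into a single canned template reproduces the exchange exactly:

        # Swap first- and second-person words so the sentence points back at
        # the user, then wrap it in a stock "question" template.
        REFLECTIONS = {
            "i": "you", "me": "you", "my": "your", "am": "are",
            "you": "i", "your": "my", "are": "am",
        }

        def reflect(sentence):
            return " ".join(REFLECTIONS.get(w, w) for w in sentence.lower().split())

        def respond(sentence):
            return ("what makes you think " + reflect(sentence)).upper()

        print(respond("You are going to repeat what I say in the form of a question"))
        # WHAT MAKES YOU THINK I AM GOING TO REPEAT WHAT YOU SAY IN THE FORM OF A QUESTION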

  10. jphilby says:

    Weizenbaum’s view … was that human intelligence … could never be duplicated by the machinery of computing

    And, so far as anyone knows to this day, Weizenbaum was exactly right. Ironically, it may turn out that machines are instrumental in liberating human intelligence from the illusions spun by other human intelligences which condition people to be machine-like. Then it’ll be even more obvious how right Weizenbaum was.

  11. eli says:

    I think that the key is this sentence:

    “The Eliza effect has become shorthand for a user’s tendency to assume based on its surface properties that a program is much more sophisticated, much more intelligent, than it really is”.

    In fact, don’t we do that all the time… with people?

  12. max says:

    I think all this proves is that it’s easy to fake pleasant conversation and for someone to come away from a heavily one sided interaction feeling like they had a meaningful dialogue. That reflects more the blunt and myopic emotional intelligence of the typical person than some ‘singular’ danger posed by technology.

  13. Anonymous says:

    The only way the intelligence gap between humans and machines can be bridged is by making humans more machine-like. And boy, are we working hard on that!

  14. eli says:

    Mark, are you a chatbot? If not, you just provided the best example of what I meant in #1.

  15. Anonymous says:

    I’m glad someone mentioned the recent Radiolab episode; that was very good to listen to.

    For some reason I’m reminded of DinoMUSH (?) – one of the text-based MUDs, anyway – that had two different bot characters that could wander from room to room. Until one day they bumped into each other, and a chain of automatic dialogue started that wouldn’t stop. I think they had to implement something where one of the bots could randomly kick the other out of the room to prevent that sort of thing from happening again.

  16. jhhl says:

    Eliza fascinates me, and the projection of the impression of consciousness onto inanimate objects is both a longtime human trait and a time-honored gag.
    I wrote a number of programs, travesty interaction programs, that were similarly engaging. You knew they were fake from the start – since they could only speak to you with your own words, but — at least in English — many words are ambiguous and equivocal, revealing new senses when combined in different orders with other words. Fake, yet somehow, that was not important. The chaff of nonsense was ignored when confronted with the perceived nuggets of sense. Similar observations underlie all pseudoscience, which is why it’s so satisfying.
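
    A rough Python sketch of that kind of travesty responder (a guess at the general idea, not jhhl’s actual programs): it can only answer with words you have already typed, recombined in a new order.

        import random

        vocabulary = []  # every word the user has typed so far

        def travesty_reply(utterance, length=8):
            vocabulary.extend(utterance.split())
            # Answer only with the user's own words, in a scrambled order.
            k = min(length, len(vocabulary))
            return " ".join(random.choice(vocabulary) for _ in range(k))

        print(travesty_reply("the chaff of nonsense hides the nuggets of sense"))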

    “So it’s mechanical!” — Bugs Bunny, HARE RAISING HARE

  17. Anonymous says:

    Radiolab did an hour on how humans interact with artificial (or virtual) intelligences. Eliza was covered extensively.

    http://www.radiolab.org/2011/may/31/

  18. eli says:

    I think that the key is this sentence:

    “The Eliza effect has become shorthand for a user’s tendency to assume based on its surface properties that a program is much more sophisticated, much more intelligent, than it really is”.

    In fact, don’t we do that all the time… with people?

  19. Brainspore says:

    This stuff always reminds me of the confession booth from “THX-1138”.

  20. Anonymous says:

    Just wait till Madison Avenue gets a hold of this. O-boy, o-boy, o-boy, gonna be doubleplusgood!

  21. eli says:

    It’s funny. My original comment (#1) has been published again (#4), but I swear by Bart Simpson that I didn’t do it.
