Science fiction writing contest: "The robot felt..."

Paul Malmont sez,
Simon & Schuster and novelist Paul Malmont are looking for a science fiction writer. To promote the July release of his WW2 novel about pulp writers Robert Heinlein, Isaac Asimov and L. Ron Hubbard, The Astounding, the Amazing, and the Unknown, the publishing house is hosting a story writing contest. The winner will have their story published in the paperback edition of the book.

In the novel, young Isaac Asimov begins a story with 'The robot felt' and ends it with, 'In the end, the robot felt nothing. He wasn't programmed to.' The contest is looking for up-to-2,000-word sci-fi submissions that connect those lines. The top five finalists, selected by online voting, will be judged by the incredible Patton Oswalt, lit-agent Susan Golomb, R/GA agency Creative Director Dan Harvey, and Malmont.

  1. It’s nice to see that the AI field (or SciFi imaginings of future AI) has not progressed one step beyond the 1970s.

    Proof that my PC can feel:

    print("Hello Dave, I'm feeling fine.")

    QED.

    1. Noen – prove to us that *you* can feel.

      Serious question, that. You end up doing the equivalent of that same exact print() yourself in that case. That’s part of why Alan Turing devised the Turing Test the way he did – the question of “Can computers think/feel?” hits the same exact problem as *people* trying to prove they think/feel. So he rephrased it as “if it looks like a duck and quacks like a duck” – it doesn’t really matter what’s going on inside if it manages to convince an outside observer. And that’s true whether it’s human or computer.

      1. Valdis said:
        “Noen – prove to us that *you* can feel.”

        Pains and tickles and itches, feelings, have a first person ontology. So if I feel that I am in pain, then I *am* in pain. That I am conscious is not something that I can be mistaken about. To be conscious of being unconscious would mean being consciously unconscious. A plain contradiction and therefore not possible.

        “Serious question, that.”

        I appreciate the opportunity to refine my position. Thanks.

        “That’s part of why Alan Turing devised the Turing Test the way he did – the question of “Can computers think/feel?” hits the same exact problem as *people* trying to prove they think/feel.”

        The Turing test has been rendered useless. There is in fact a case where the Turing test cannot distinguish between an unthinking “AI” and a real intelligence. Searle’s Chinese Room refutes strong AI (the claim that computers can or ever could think) and also provides a counterexample to Turing’s test.

        Do you really want to claim that chatbots, which are nothing more than databases, are living conscious beings? Do you really believe that it is possible in principle to put yourself onto your hard drive? If so, before you do, can I have your stuff?

        “if it looks like a duck and quacks like a duck”

        Behaviorism was also refuted long ago (Chomsky). No one actually believes in the behaviorist program any more. While it may be useful in certain clinical settings it has become abundantly clear that it cannot fully explain human psychology nor could it possibly be the case that consciousness can be reduced to behavior only.

        Simulated weather is not weather.

        1. “Pains and tickles and itches, feelings, have a first person ontology. So if I feel that I am in pain, then I *am* in pain. That I am conscious is not something that I can be mistaken about. To be conscious of being unconscious would mean being consciously unconscious. A plain contradiction and therefore not possible.”

          You’ve failed to address Noen’s point at all. He didn’t ask you to prove to yourself that you are conscious. He asked you to prove to outside observers who have no access to your supposed “inner life” and “feelings” that you are conscious. At this point, for all we know, you might be a chatbot that has been programmed to give the above response when asked if you’re a real boy.

          1. any mouse said:
            “You’ve failed to address Noen’s point at all.”

            I’m pretty sure I did respond to my own ideas.

            “He asked you to prove to outside observers who have no access to your supposed “inner life” and “feelings” that you are conscious.”

            There are standard arguments against solipsism. You should check them out (G.E. Moore). Basically the claim of the solipsist, that we cannot know if other minds exist, is incoherent and self-contradictory. If you are seriously questioning if I exist or not then why are you asking me?

            “Dear colleague, I have recently converted to solipsism and am having a great time. I wonder why more people don’t take it up?”

            Can you detect the logical error?

          2. Sure can. It’s called a strawman.

            The point in question is whether a specific object can be shown to have a mind, and if so, how. Your strawman is solipsism, which is the position that it can not be proved that anything at all exists.

            I’m convinced, you’re a human.

        2. Searle’s Chinese Room in no way disproves the possibility of strong AI. The fact that Searle thought it did just means that he didn’t think very much about how human brains work. Our brains are made out of unthinking neurons obeying simple laws just like the workings of the hypothetical Chinese room. It’s the *system* that emerges that is intelligent. Pretty much every current neuroscience text makes short work of demolishing Searle’s argument.

          There is no reason to suppose that downloading your mind onto a hard drive is impossible. Yes, you could argue that it isn’t *you*, but that’s the same as arguing that the person who materializes after going through a Star Trek teleporter isn’t you. But assuming that all the connections among neurons were accurately captured, I have no doubt that it would think and even think it was me.

          Behaviorism was in no way refuted by anyone. What *was* refuted was the idea that some hard-core behaviorists (like B. F. Skinner) had, which was that it was unscientific to think beyond behavior. Now we know that besides conditioning pigeons and observing their behavior, we can scientifically speculate about what is actually going on in their brains. But that doesn’t mean that “if it looks like a duck and quacks like a duck” is no longer useful.

          1. Johnny the Badger opined:
            “Searle’s Chinese Room in no way disproves the possibility of strong AI.”

            The refutation is definitive:

            1) Implemented programs are syntactical processes.
            2) Minds have semantic contents.
            3) Syntax by itself is neither sufficient for nor constitutive of semantics.

            Therefore, the implemented programs are not by themselves constitutive of, nor sufficient for, minds. In short, Strong Artificial Intelligence is false.

            The argument is valid. Your only hope is to attack one of the premises. Good luck.

            “The fact that Searle thought it did just means that he didn’t think very much about how human brains work”

            The beauty of logic is that it doesn’t matter what one believes. If the premises are true and the argument valid then the conclusion necessarily follows.

            “It’s the *system* that emerges that is intelligent.”

            It is stipulated that the man in the room does not understand Chinese even though it is allowed for the purposes of the argument that the program he is running is sufficient. To outside observers the answers exhibit an understanding of Chinese. The man receives a squiggle, consults his tables and outputs a squoggle. Why then can he not understand Chinese?

            The answer is that syntax alone is insufficient for semantics. The man has no way to get from the syntax of the program to the *meaning* of the Chinese symbols. And if the man cannot get from pure syntactical operations to semantic content then neither can the room.
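
            The lookup described above can be sketched as pure symbol manipulation (the table and names here are hypothetical, purely illustrative), which makes the point concrete: nothing in the procedure refers to meaning.

            ```python
            # Hypothetical rulebook: maps input symbols ("squiggles") to
            # output symbols ("squoggles"); no entry refers to what anything means.
            RULEBOOK = {
                "squiggle-1": "squoggle-7",
                "squiggle-2": "squoggle-3",
            }

            def answer(symbol):
                """Consult the tables and emit the matching output symbol."""
                return RULEBOOK.get(symbol, "squoggle-0")  # default reply

            print(answer("squiggle-1"))  # prints squoggle-7
            ```

            To an outside observer the replies may look fluent; whether fluent lookup amounts to understanding is exactly what the two sides here dispute.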

            “There is no reason to suppose that downloading your mind onto a hard drive is impossible.”

            I don’t just have good reasons, I know it is impossible. I know it is impossible for the same reason I know that if I simulate rain or fire on my PC it will not get wet or burn up. Simulation is not duplication. I can simulate a pump but I cannot take that simulation and pump water with it.

            In order to replace a person’s heart with a mechanical pump it is not enough to create a simulation. I have to actually duplicate the causal action of real pistons and real fluids. We may someday create minds. When we do it will be through duplicating the causal activity of real minds.

            “Behaviorism was in no way refuted by anyone.”

            Good luck getting approval for any behaviorist research program on consciousness. Behaviorism says that mind reduces to behavior or dispositions to behave. So the behaviorist says that “If it is raining, the man will take an umbrella.” But suppose that there is nothing the man likes more than getting wet and splashing about in the rain? Behaviorism cannot account for intentional states but intentionality is precisely what we were tasked with explaining. It will not do to try to sweep consciousness under the eliminative materialist rug. Conscious intentional states are real-world phenomena that need to be explained. Neither dualism nor materialism can do so. Hence we need a naturalistic understanding where consciousness is not seen as a “nothing but” (eliminative) reduction.

            Hence no Von Neumann machine robot can ever embody an artificial mind.

            Hence Commander Data is a zombie.

            ;)

          2. TL;DR.

            The issue is simpler than you make it out to be: If you believe that strong AI is impossible, you must believe there is something non-mechanistic, i.e. magical, about the human mind. And if you believe in magic, well, that’s the end of the argument, isn’t it?

          3. Your third claim is a premise, not a conclusion. It does not follow from (1) and (2). It is that premise with which I disagree. Or perhaps I disagree with (2) as well. You leave “semantic context” as an undefined term, when in fact its definition is at the core of the problem.

            First you, like Searle, vastly underestimate the complexity of the program the man in the Chinese room would need to execute. It would essentially include a complete dump of the neural structure of the human mind of a native speaker, including all the knowledge of the physical world that we use in interpreting even seemingly simple statements. That information exists in the man in the room, yes, but it would also be duplicated in the instructions for processing squiggles. That data is what the word “semantic context” means. We’re talking about many terabytes of data. It would take centuries for a 1Hz human to output a single sentence in this way.

            One of the mistakes some AI researchers once made was using what have been called “suggestively named LISP tokens” in their code. A variable called “apple” does not mean the same thing as the word “apple” or the object “apple,” and certainly not the company “Apple.” Actually defining “apple” in code cannot depend on what the variable is called.
            When you talk about “simulating a fire” you’re going to have variables called “temperature” and “pressure” and these are of course, not the temperature and pressure of the room. But so what? The word “apple” in my brain is not an apple, either. If I think the word “fire,” and imagine my desk burning, it does not burn.
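
            The point about token names can be shown directly (a trivial sketch; the made-up dicts stand in for program state): to the interpreter, a suggestively named variable is indistinguishable from an arbitrarily named one.

            ```python
            # The name "apple" means nothing to the interpreter; it is a label
            # for human readers only. Renaming it changes nothing the program does.
            apple = {"color": "red", "mass_g": 150}
            var_001 = {"color": "red", "mass_g": 150}

            assert apple == var_001  # identical to the machine; the label adds no meaning
            ```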

            You say “if the man cannot get from pure syntactical operations to semantic content then neither can the room.” The man in Searle’s thought experiment acts as a microprocessor. He receives verbal input, executes syntactical operations, and produces output. Your concern seems to be that if this is not sufficient to learn Chinese, then no system using such a microprocessor as its active element (the rest is data storage) can understand Chinese. But if you took a human infant, and exposed it to language only by playing tapes while it is trapped in a sensory deprivation chamber, it wouldn’t learn language, either. Unless you have access to the referents of language, and see the objects and actions that sentences are talking about, you will lack the metadata that gives the syntactical operations meaning. That’s all Searle does: he separates the syntax from the metadata, whereas a human mind stores them both in the same hardware.

            What, precisely, do you believe is the missing ingredient that prevents a computer or robot from being conscious and understanding language? Assume we provide sensory data input (video, audio, pressure, temperature, and so on). Assume you have unlimited data storage and computing power. Assume the program being run includes control over systems for manipulating objects in the environment. What operation occurs in a human brain that cannot occur, ever, in such a system? What ingredient other than neurons firing is required for sentience? Where does this ingredient reside, and how have you examined it? Or are you claiming that it is impossible for a computer to determine whether a neuron would fire in a given environment, even with a detailed physical simulation?

        3. ‘Noen felt conscious. But in the end, Noen felt nothing. He wasn’t programmed to.’

  2. I dig this era in SFdom, just because during WWII, we had RAH and Asimov together in the same office at the Philly Navy Yards with L. Sprague de Camp. How cool is that?

    Asimov, the proto-engineer dork, Heinlein the manly man (who apparently made a huge pain in the ass of himself in Philly, according to his recent biography), and Hubbard, who is reported to have stopped by, already peddling lies about his war record.

  3. It looked interesting until the instructions said to submit your story, then get your friends and family to vote for your story. Which makes it a social networking contest more than a writing contest. I’m glad it stated it baldly so I don’t have to waste time entering.

    1. I have to agree here. It sounds like fun, but why bother if it’s pretty much a popularity contest (something I know I won’t win) to get into the top 5? I mean it’s right there in the rules to “… then get your friends and family to vote for your story”.

      Seems like a nice fun project but I think I’ll finish up my Machine of Death Volume 2 submission first.

    2. Yep. I find it slightly ironic that a Science Fiction writing contest is geared to weed out the people on the fringes of society. Because the best artists are usually the people with the most friends on Facebook! You know…like Tila Tequila!

  4. The robot felt up the drunk coed. In the end, the robot felt nothing. He wasn’t programmed to.

  5. Hmmmm…. Facebook App to register for contest… NOPE!
    All people voting have to also have Facebook accounts…. Don’t think it will work so good…

    Yes (Before you ask) I have a Facebook Acct that I use for family and a few friends, no games.

    http://wildfireapp.com/website/6/contests/121740/rules

    1. How to Enter. To enter the Contest, go to http://www.facebook.com/#!/pages/Paul-Malmonts-The-Astounding-the-Amazing-and-the-Unknown/192391090786301, select the Contest tab which hosts the application (“Contest app”), complete the entry form (including your name, birth date, state, and email address),…

    To vote for a Submission, go to the Contest app (http://www.facebook.com/#!/pages/Paul-Malmonts-The-Astounding-the-Amazing-and-the-Unknown/192391090786301) and select a Submission based on the Public Voting Criteria. Limit one (1) vote per day of the Public Voting Period.

  6. Why on earth would the robot be a “he”? Ascribing gender to robots (that are not standard pleasure models on the off-world colonies) just seems odd.

  7. I don’t see why public voting for the top five is so bad, it’s the way of the world these days. I don’t think just friends and family would vote, unless your writing stinks. Neil Gaiman’s contest was the same way. I mean, come on, these contests are for publicity – books need all the help they can get. Malmont’s and Gaiman’s prizes are just better than a t-shirt or signed book. As a writer, I’d kill to get Susan Golomb to read my work! But sci-fi isn’t really my thing. Hmmm, still might be worth it.

  8. Here’s what bothers me. The rules say, “The winner grants Simon & Schuster and Paul Malmont a royalty-free, non-exclusive, perpetual, worldwide license to use the story in all versions of The Astounding, The Amazing, and the Unknown and associated media including advertising and the author’s website.”

    So, basically, you get the honor of being published but no payment. What’s more, you’re basically giving away first publication rights worldwide. Forever. I . . . don’t think so.

  9. JES – Do you understand what non-exclusive means? You can absolutely publish it elsewhere. Which is much more likely after someone like Susan Golomb has taken an interest in you than if it’s sitting in your bedside table. I swear, you’d think you were asked for your firstborn child instead of 2K words and a few hours’ time. I’ve decided to do it, and I guess there won’t be much competition :)

  10. The robot felt incredibly self satisfied that it was made in the good old U.S. of A. where it was free to enter contests and win prizes, unlike those inferior Canadian models who only had the sad lament of “In the end, the robot felt nothing. He wasn’t programmed to.”

  11. Hmmmm… so the authors of a book don’t know how to write a section of their book, and so are asking for other people to write it for free? And then they’ll simply put your work in their book. And they say the whole contest is just to promote this book?

    Ok…

  12. Hmm sounds interesting. But if you ‘win’ does that then turn into a book deal or something?

  13. Why is “the robot felt…” blasphemous?

    All 5 Star Wars movies used for comic relief C3PO staring wide-eyed into the jaws of danger, and R2D2 snickering at him…

  14. The reason they tell you to get your friends and family to vote is because this is a contest to promote the novel, as they say. So yes, it is a social networking contest. All they care about here is the exposure…

    It’s clever, but it’s just an ad campaign, plus they get some free prose to put in their book.

  15. After reading what I presume to be some of the current top five, I can only imagine the robot felt a falsely earned sense of pride after having his extended network of robo pals upvote his hastily thrown together robot story onto the top of the heap in a data mining contest.

  16. How about “The robot felt was far more high tech than the traditional woolen kind, although more expensive”?

  17. Dang. This sounds like a lot of fun but I just can’t get over the Facebook thing. Still can’t wait to see the entries, though!

    Also, and IANAG (Grammar instructor), does that last bit (“In the end, the robot felt nothing. He wasn’t programmed to.”) feel a bit awkward? Just me?

    Still, though, it takes nothing away from the cool concept.

  18. The robot felt me up. I happened to have a big, heavy crowbar within reach. In the end, the robot felt nothing.

  19. Yup JES, I agree. This contest is a clear violation of Yog’s Law (“Money flows towards the writer”). The lack of real payment along with the rather onerous perpetual republication rights is a killer. I’ll pass.

    I may write a story on that premise but it won’t go to their contest.

  20. “Only persons residing in United States who are at least 21 years of age can enter.”

    This. In the 21st Century.

    FAIL.

    1. You can’t enter into a contract with someone under 18 – and perhaps 21 in some states. I am sure this is for legal reasons.
