XKCD: Why you should give bad reviews to hotels you like

Hotels [xkcd.com]


63 Responses to “XKCD: Why you should give bad reviews to hotels you like”

  1. Mordicai says:

    Optimal the way that trading bundled subprime loans on the bubble is “optimal,” which is to say, not optimal for anyone with a rational capacity to understand that choices persist beyond the immediate frame of reference.  I mean– XKCD is joking of course– but it is one of those jokes that HURT because the “tragedy of the jerks” is like, a defining force of American “ethics” & capitalism.

    • adamnvillani says:

      Big like for “tragedy of the jerks.”

    • Lalo Martins says:

      And if you’re one of those who didn’t get it (“you” being a reader, not Mordicai):

      The reason it’s not optimal is that fewer customers = less money = poorer maintenance = in a few years, the hotel won’t be able to maintain the quality, and will either not be good anymore, or close.

  2. BrotherPower says:

    Surfing sucks, don’t try it.

  3. Perhaps I’m a bit of a bastard, but I don’t see this behaviour as being *inherently* dicklike.   Certainly it’s pretty dickish, and I doubt that I would do it. 

    But basically it’s just the prisoners’ dilemma writ large, surely?  You can collaborate with the review system, or you can play it to win for just you.  The point is that either way (as the character in the comic says) it’s unlikely to affect the outcome that much.   Since it’s not likely to have much of an effect, it’s not really immoral/unethical, surely?

    You can argue that there is a small difference in favour of collaboration.  But it’s going to be a very small difference.

    • EH says:

      Not dicklike, just dickish. Gotcha.

    • Mari Lwyd says:

      To ensure I will someday have sex with you I must merely write the prisoner’s dilemma large and repeat “SHADOWFIREBIRD HAS LATEX DESTROYING HERPES” across the internet and on posters.

      It’s really not a dick move.

      • You’re assuming that anybody would believe you. 

        If nobody believes you — or if I’m not interested in sex or my reputation — then it’s still kind of a nasty thing to do.   But it’s not unethical in and of itself, because it’s ineffective.

        See, I’m not saying it’s ethical per se; I’m saying that its lack of effectiveness renders it ethically ambiguous.

        • Mari Lwyd says:

          Please define ethical.

          • Ye gods.  In a tiny comment box?  Define something that philosophy has fought over since philosophy began?  

            For the purposes of this thread only, let’s define ‘unethical’ as ‘wrong’. 

            I believe the difference between ethics and morals is that one is supposed to be universal — pah! — and one personal.  I can never remember which is which, though.

            The question I am posing is: is it wrong to do something with malicious intent which is completely ineffective?   My point being that that’s not a question with an obvious answer.

          • Mari Lwyd says:

            According to the law?

            Attempted murder is malicious, ineffective, and prosecuted.

            Who is your determining authority on what is wrong?

          • Goodness, you really do want an argument on right and wrong?

            FWIW — and I’m batting out of my league here — you can’t take the law as a final arbiter on right and wrong, because the law is made by people and people are fallible and can make bad laws.  (That would mean that in Bahrain, it’s wrong for doctors to save the lives of people that the government doesn’t like.)

            In the final analysis, there is no final analysis.  You just have to use the best criteria you can find.  And that includes your innate, unconscious sense of what feels wrong — but not exclusively.

            My 10p; certainly not an absolute position.  But then again, I don’t think that there is one.

          • Mari Lwyd says:

            No, I’m not taking the law as the final arbiter. 

            If I’m the arbiter (excellent word choice btw) then yes, lying whether it’s effective or not is wrong as it degrades the information that we have about the world as a whole and moves us further from a reliable understanding of our surroundings.

            You as the arbiter appear to say that an act is only wrong if it has negative, measurable consequences, correct?

          • “lying whether it’s effective or not is wrong as it degrades the information that we have about the world”

            Degrades whose information?  There’s no-one to hear, remember?  And *you* know that you’re lying.

            “You as the arbiter appear to believe that an act is only wrong if it has negative, measurable consequences, correct?”

            Not correct.  I’ve stated it as a question; I’ve said that I think it’s a difficult one to answer.   I’m not sure that I can answer it, in fact, in a convincing and logical way.

            It *is* true that I think that logic and rationality should form a big part of the process, and that I don’t think there is any universal right or wrong.  So while in fact my gut says that lying to no-one and stealing something that would be thrown away are both wrong, I don’t think I could bring myself to condemn it in others — because I could not make a logical argument as to why.

          • Mari Lwyd says:

            The who is whoever’s reading/interpreting the data. We are dealing with a shared-who not just one in the context of internet hotel reviews. There’s everyone to hear who decides to listen.

            If the data is useless, let’s say that every rating on Amazon was 5 stars, then we are better to trash it altogether.

          • marilove says:

            “The question I am posing is: is it wrong to do something with malicious intent which is completely ineffective”

            But how could you ever prove it was ineffective?  Just because you think it might not be effective doesn’t mean it won’t actually be effective.  “There is a chance it won’t be effective” means there is a chance it could be effective.

            What if that bad review convinced even one person not to visit that hotel?  I’d say then it was effective, even if not greatly so. Or, as someone else mentioned, what if it caused someone to be fired?

          • Well, of course you couldn’t know for sure, 99 times out of 100.   It was a completely hypothetical question.

            If it would help, I can conjure another scenario where it would be possible to know.   Assuming that it’s wrong to tell lies about another person, is it wrong to do it where no-one can hear?  Assuming that it is wrong to steal, is it wrong to steal something that the owner was planning on throwing away?

            I’m not saying it’s okay to do these things.  I’m saying that it’s ambiguous.

          • self.2c  :=

            1) “morals” is “ethics” applied to a specific situation, so having no ethics always implies an inadequate situational response
            2) there is the inherent trap that right is right whether your behaviour is observed or not; self-observation counts, so, for example,  to do sth right but out of ulterior motives is cheating yourself and thereby betraying your (and thereby the universal) ethic code :)
            3) the strip is funny, tho….
            4) corollary: only ppl evil enough to voluntarily ignore everything that can spontaneously be known as “right” or “proper” can be successful entrepreneurs, let alone bankers….no ” ;)” or “:) ” after this sentence i’m afraid …

          • Neat.

            “2) there is the inherent trap that right is right whether your behaviour is observed or not; self-observation counts, so, for example,  to do sth right but out of ulterior motives is cheating yourself and thereby betraying your (and thereby the universal) ethic code :)”

            But then you have to answer Amanicdroid’s question about where a “universal ethic code” comes from…

          • ouch….you’re trying to get me to wear out my keyboard here :)

            the very shortest of possible answers would have to touch on the concept of
            “platonic ideas”  (e.g. there obviously IS such a thing as a perfect square, mankind did not have to invent it, it waited in the platonic world to be discovered…same holds true for a lot of other things).
            a lot of very hard-to-read-their-damn-books guys have argued whether or not concepts such as “love” or “compassion” can exist in this abstract world, and if they do and are discovered, if it’s ok to ignore them afterwards like, say, hyperbolic geometry, which is totally platonically true but also totally irrelevant to 99.9% of us and can thus safely be ignored.

            the answer, in a nutshell, summing up the last 400 years of thinking from kant to wittgenstein  who was looking for god in the ww I trenches  :)
            seems to be sth. like
            yes, there is such a thing as a moral code valid for all members of an ontological group (here: humanity) but, hey, you’re welcome to ignore it – the only drawback is that, platonically speaking, you’ll have to be considered a total asshole – no further negative implications for your everyday life attached, though …

            not very satisfactory as answers come, but on the plus side you can consider your job as parent of a potential non-banker or non-prisoners-dilemma-optimization-seeker to suddenly have become just that much more challenging and rewarding, knighted by philosophical science :)

            oh, and you can always read this

            http://www.amazon.com/Short-History-Ethics-Philosophy-Twentieth/dp/026801759X/ref=sr_1_10?ie=UTF8&qid=1317421495&sr=8-10

            …much longer text,  again without real answers  to your question  :)

          • I like your argument.  I’ve got some pagan leanings, so the whole Platonic, Perfect Thing thing rather appeals.

            But the problem remains, both for your perfect platonic square and your perfect platonic system of right and wrong: how do we access it here in this, imperfect world?  Can I use my Platonic Viewer to give me an absolute answer as to whether stealing something that the owner was about to throw away was right or wrong?  Nope.

            I’m cool with absolute, platonic right and wrong *existing*.  But it’s not much use to us as a concept when making judgements. It’s not so much a question of my being free to ignore it as of it being futile to pursue it.

          • The Chemist says:

            That’s a bit like saying:

            Please have every argument the Internet has ever had in this thread.

      • Antinous / Moderator says:

        Would this be a good time to mention that five flags make a comment disappear?

        • Mari Lwyd says:

          That’s a nifty rule to know. So it takes a gang of five to rabble-mod the board?

          • Antinous / Moderator says:

            “That’s a nifty rule to know. So it takes a gang of five to rabble-mod the board?”

            I try to reset comments that have obviously been flagged due to animus on the part of the flaggers, rather than obnoxiousness on the part of the commenter. Like when somebody gets upset at Rob and flags all his comments. But that’s rather random and subjective.

          • Mari Lwyd says:

            That’s mildly disconcerting as the mods are limited and the hordes numerous, but you’re pros so I’ll assume you have further contingencies.

            Thanks for the reply, I’ll kindly avoid wasting your time and offing the topic now. :)

    • AbleBakerCharlie says:

      Well here’s where game theoretical/evolutionary discussions of the origins of moral impulses get really interesting. In more “lifelike” versions of dilemma games, namely ones that iterate and give the players ways to dispense benefits or extract costs from each other, the type of player that most uniformly wins, or replicates if the game variant includes it, is one called a “strong reciprocator,” one that cooperates with cooperators and goes out of its way, including paying out personal costs, to punish non-cooperators or cheaters.  In other words, an agent evolves that both likes to play nice and is vigilant about people whose behavior in aggregate will break the system of playing nice, and indeed, there are all sorts of clever experiments that suggest that both human prowess and occasional failures in logic map well to a system designed to detect social defectors.

      The whole reason we are capable of conceiving of Black Hat having behaved poorly for maximizing his personal return is that we are equipped with software that knows when an action, if performed ubiquitously, would screw everyone. Pretty cool, right?
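
      For anyone who wants to poke at that claim, here is a minimal sketch in Python of the kind of iterated, evolving dilemma tournament being described. Everything in it (the payoff numbers, the three strategies, the replicator update, the generation count) is an illustrative assumption of mine, not something taken from the comment above or from any particular paper:

      ```python
      # Sketch only: payoffs, strategies, and generation count are illustrative.
      from itertools import product

      PAYOFF = {  # (my move, their move) -> my payoff, classic T > R > P > S ordering
          ("C", "C"): 3, ("C", "D"): 0,
          ("D", "C"): 5, ("D", "D"): 1,
      }

      def doormat(opp_history):        # always cooperates
          return "C"

      def defector(opp_history):       # always defects
          return "D"

      def reciprocator(opp_history):   # cooperates, but answers defection with defection
          return "C" if not opp_history or opp_history[-1] == "C" else "D"

      STRATS = {"doormat": doormat, "defector": defector, "reciprocator": reciprocator}

      def avg_payoff(strat_a, strat_b, rounds=100):
          """Average per-round payoff to strat_a over an iterated match with strat_b."""
          hist_a, hist_b, total = [], [], 0
          for _ in range(rounds):
              a, b = strat_a(hist_b), strat_b(hist_a)
              total += PAYOFF[(a, b)]
              hist_a.append(a)
              hist_b.append(b)
          return total / rounds

      # Pairwise payoff table, then simple replicator dynamics: each generation a
      # strategy's population share grows in proportion to its average score.
      table = {(x, y): avg_payoff(STRATS[x], STRATS[y]) for x, y in product(STRATS, repeat=2)}
      shares = {name: 1 / 3 for name in STRATS}
      for _ in range(60):
          fitness = {x: sum(shares[y] * table[(x, y)] for y in STRATS) for x in STRATS}
          mean = sum(shares[x] * fitness[x] for x in STRATS)
          shares = {x: shares[x] * fitness[x] / mean for x in STRATS}

      print({k: round(v, 3) for k, v in shares.items()})
      # Defectors briefly feast on the doormats, but once the doormats are thinned
      # out they mostly meet reciprocators, and the reciprocators end up holding
      # most of the population.
      ```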

      • Very cool.  

        But: “uniformly wins”?  I understood that the advantage for cooperation was pretty small.

        • AbleBakerCharlie says:

          “The” advantage? The advantage for cooperation isn’t some fixed quantity- there is no SI “cooperation constant.” It is whatever the circumstances of the game you’re playing out on paper or with genes in the wild dictate it is. You’d think it was silly if a whole group of bison decided to all tackle the same shoot of grass as a team- one pinning it down while another nipped it off, a third chewing it into cud and barfing it up, and then everyone eating an equal share- because the circumstances don’t match the strategy. There is no incentive for bison to cooperate in harvesting grass, but there are three or four distinct payoffs for wolves that care to harvest bison- taking larger prey (and thus taking more calories in fewer hunts), being able to functionally store your surplus calories when you have a good day in the bodies of your friends, who will return them when you have a bad day and they have a good day, and so forth. The cooperative payoff in the game the wolves play is very high, and very low in the game the bison play (at least with feeding), and so one has evolved feeding cooperation and the other hasn’t.

          Now, to bring it back around- the cooperative payoff is superior provided there are other cooperators in play. If you dropped Doormat Wolf into the pack of Dick Wolves (well, maybe just by the passing Dick Wolf, since they wouldn’t be forming packs) Doormat Wolf would starve, and not make more Doormat Wolves, and there would only be Dick Wolves. And note, as from above, that even though Dick Wolf outbreeds Doormat Wolf when they are paired up, he is at a foraging disadvantage to a pair of Doormat Wolves- he can’t bring down big prey, and he doesn’t get fed on bad days. So we have a bit of a conundrum- Doormat Wolf is in a superior position, but is vulnerable to any mutation that turns one of his packmates into a Dick Wolf, who will bring the whole party down for everyone.

          So what is Doormat Wolf to do? He responds by adding a wrinkle to his behavior- namely, he keeps a close eye out for the signature markers of Dick Wolves, and when they are spotted, he gets the other Doormat Wolves together to beat the crap out of them- either to scare them into being Doormat Wolves, or to kill them or drive them away so they don’t leave baby Dick Wolves in their neck of the woods. Doormat Wolf has been transformed into Payback’s-a-Bitch Wolf.

          Payback Wolf didn’t evolve a spreadsheet with some conditional statement that launches anti-wolf missiles after three meals where the next-door neighbor didn’t share- he evolved a set of emotional responses that make him behave nicely to nice wolves and badly to Dick Wolves, buried in the genes that build his brain. And in yours, too. You are Payback Wolf.

          You can keep playing with the wolves and tons of behaviors pop out. Dick Wolf doesn’t want to get pummeled, so he dresses like Payback Wolf, and so becomes Liar Wolf. Payback Wolf gets tired of getting outbred by Liar Wolf breaking the system again, and so becomes Suspicious Wolf. Two Suspicious Wolves meet up, circle each other for a long time, decide they aren’t Liar Wolves, and teach each other a secret handshake, so they don’t have to do this every time, and they become Friendship Wolves. And round and round we go.

          Now, humans are the ultimate Friendship Wolves- emotionally defaulting to moderate cooperation, willing to up the stakes if we detect certain signals, and not fond of people that break nice systems. The payoff is massive (witness our global supremacy) and so are the computational loads to keep it straight (witness our massive brains.) Now, this is usually when someone says this is a pretty grim view of human affairs, and nerney-nerny, see, people really are jerks because they want to be jerks to jerks, to which I say, that’s Dick Wolf talk, and we won’t be having any of that in my pack. :-P

          • Mari Lwyd says:

            I would purchase this treatise in the form of a children’s book.

          • flosofl says:

            Dude, that was a beautifully simple and easy-to-understand analogy. I really hope you’re a teacher or a mentor. Your explanation made this dark little corner of game theory seem like a shining crystal for newbies like me.

          • Donald Petersen says:

            What a spectacular convergence of a Very Good Question and The Right Kind Of Person To Answer It.  The internet’s a better place for it.

            I’ll buy this children’s book, too.  Hope your illustrations are as cool as your thesis, Charlie!

          • Dewi Morgan says:

            “Now, humans are the ultimate Friendship Wolves”

            In small groups, perhaps.

            The US — and arguably most similarly large groups — see the Dick Wolves rise to the top, separated from and unreachable by the Payback Wolves.

            They drink champagne and laugh as the Payback Wolves howl in the street.

          • AbleBakerCharlie says:

            You have to zoom out a bit, because the picture the data suggests is stunningly different. The steady growth of human groups has been accompanied by a steadily marching reduction in per-capita dickishness of all kinds (violence, exploitation, you name it), and some mornings that is all that gets me out of bed. What passes for dickishness these days runs good odds of having been considered enlightened business practice somewhere in the world within living memory.

            Which isn’t to say that the Masters of the Universe crowd isn’t filled with some grade-A sociopaths who’ve found themselves a cozy niche, and that maybe someone should look to that. Far from it. But it also doesn’t represent a fundamental breakdown in the code of cooperation that enables me to walk into a store and hand a clerk one thing and have them hand me something else back, instead of us trying to kill each other on sight for control of food and resources, or that means, unlike in many hunter-gatherer societies, I no longer run a 3 in 4 chance of being murdered so someone can take my things and rape my family members- and if that’s a transition we can make as a planet, some folks that get goggle-eyed at piles of abstracted figures when they ought not to are small potatoes. We’ve more than got it in us. Payback Wolves keep on winning. I see no reason they’d stop.

          • My feeling is that you are right about the human race in general, but there is a specific sub-trend towards sociopathy that works against it. 

            Some wolves don’t see any other wolf as counting as a real wolf, and regrettably, they tend to become the pack leaders…

          • An excellent argument, none of which applies to the game of Prisoners’ Dilemma, which is what I was talking about.

          • AbleBakerCharlie says:

            No, it’s exactly what I’m talking about. The prisoner’s dilemma isn’t one single game- it’s a class of games you use to model cooperative/antagonistic behavior. My whole little wolf story is a paint-by-numbers retelling of what happens in a multi-agent, multi-iteration, agent-interactive prisoner’s dilemma, played with humans or evolving software.

            If that didn’t come through, I’ll guess you were probably specifically referring to a two-agent, zero-interaction, four-outcome, single-iteration dilemma- the kind you sketch in a little two by two grid. It gets trotted out in discussions of nuclear deterrence sometimes (though even then it’s not sufficiently sophisticated to be of any use) and what usually gets glossed over in the discussion is that the whole reason RAND started adding complications when they stumbled upon game theory is that in that simplest of forms, the dilemma is mathematically overdetermined and doesn’t have either an optimal or stable strategy. You literally must iterate it and add agents with memory for any strategy to succeed over any other.

    • SamSam says:

      It’s not really the prisoner’s dilemma; it really is the tragedy of the commons.

      The tragedy of the commons is, essentially: if I exploit this public resource (here, the “commons” is the review forums and the exploitation is the ability to write misleading reviews), it’s good for me. If everyone did it, it would be good for none of us.

      The prisoner’s dilemma, on the other hand, is different and is much closer to being a paradox: a rational actor ought to always make the choice that is better for him. And in every scenario of the dilemma, the choice with the best outcome is to blab on your friend. But following that seemingly impeccable logic leads you to spend more time in jail than you would if neither of you blabbed.
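
      If it helps to see the paradox in miniature, here is a tiny worked version in Python. The jail terms are the usual textbook numbers, picked purely for illustration, not anything from the comment above:

      ```python
      # Sketch only: the sentences below are illustrative textbook values.
      YEARS_IN_JAIL = {            # (my choice, their choice) -> years I serve
          ("quiet", "quiet"): 1,
          ("quiet", "blab"): 10,
          ("blab", "quiet"): 0,
          ("blab", "blab"): 5,
      }

      for their_choice in ("quiet", "blab"):
          best = min(("quiet", "blab"), key=lambda mine: YEARS_IN_JAIL[(mine, their_choice)])
          print(f"If they choose {their_choice!r}, my best reply is {best!r}")

      # Both lines say "blab": whatever the other prisoner does, blabbing is better
      # for me. So two rational actors blab and serve 5 years each, even though
      # mutual silence would have cost them only 1 year apiece.
      ```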

    • Hmm…except there is a flaw in this choice. If the hotel loses traffic on account of a false bad review, the hotel will have less capital to invest in repairs and maintenance, meaning that the next time you stay at this hotel it may actually “earn” that bad review.

  4. Raederle says:

    The alt-text is fabulous, as it so often is.

  5. bardfinn says:

    BlackHatMan is engaged in two kinds of dickishness: the one mentioned, and the dickishness of promulgating a fallacy. If his review is so statistically insignificant that it is unlikely to tank a hotel, then it is EQUALLY unlikely to affect their business sufficiently to drive vacancies up and prices down.

    It’s far more statistically likely that his behaviour is due to a malformed love map, developed due to parental neglect in his early years, causing him to feel that love/attention/relationships are only validly (or more powerfully) expressed through power struggle, especially manipulation of those he considers to be duped by societal illusions.

  6. JohnH says:

    1.  It is not true that one person’s review won’t be enough to drive a place out of business, especially if the place is small.
    2. For larger numbers of reviews, the reverse is also true.  The number of people you drive away from the hotel is sufficiently negligible so as not to be worth the time to do this.
    3. If everyone did this, then everyone would know people were doing this.  Hotel reviews would be meaningless, and it wouldn’t work; instead, we’d be back in the old system where it was hard to find reliable information on whether a hotel was good.

  7. Charlie B says:

    I’m watching a comment thread evolve towards the categorical imperative, and it’s fascinating.

     I think I will go see if Randall’s forum crowd are as clever as you guys!

  8. Shaun Thompson says:

    Having recently been fired from a hotel after 3 years of thanks and positive reviews from guests I can strongly recommend not doing this. If management can figure out what employees this person interacted with I guarantee they will be punished. A couple of these and someone may lose their job. Hotels are a crazy business :/

  9. Charlie B says:

    Y’all are way quicker than Randall’s crowd.  More contentious, though (at least today).

  10. MrJM says:

    It’s a funny cartoon, but it’s a little dishonest because we all know Markets Always Work.

  11. johnnyaction says:

    As if astroturfing and astronerfing don’t already happen. I pretty much throw out “Worst ever” and “Best ever” ratings as the work of either competitors or hotel employees/owners.  I only pay attention to the reviews in the middle.
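
    For what it’s worth, that heuristic is easy to automate. A rough sketch, where the ratings list and the 1-to-5 scale are just my own made-up example:

    ```python
    # Sketch only: drop the 1-star and 5-star extremes, then average what's left.
    def middle_score(ratings, low=1, high=5):
        """Average the ratings after discarding the suspiciously extreme ones."""
        middle = [r for r in ratings if low < r < high]
        return sum(middle) / len(middle) if middle else None

    print(middle_score([5, 5, 1, 4, 3, 4, 1, 5]))  # averages only [4, 3, 4] -> 3.67
    ```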

  12. niktemadur says:

    Randall implies that the system works, as if the system isn’t already profoundly compromised.
    For almost every moderately priced hotel on down, one review reads “Best hotel ever, will stay again”, while the next one says “Worst hotel ever, avoid at all costs”.
    In fact, for years I’ve suspected that the hotel across or down the street may be doing the BlackHatMan shtick.

  13. Mari Lwyd says:

    SHADOWFIREBIRD HAS LATEX DESTROYING HERPES

  14. Palomino says:

    The dialogue in the cartoon strip scares the hell out of me. It’s Orwellian in nature: it says one thing but means something else, and that underlying meaning produces the opposite of its intended effect.

     Attention!
    Your attention please!
    A newsflash has this moment
    arrived from the Malabar front!
    Our forces in South India have won a
    glorious victory!
    I am authorized to say that the action
    we are now reporting
    may well bring the war within
    measurable distance of its end.
    Here is the newsflash…:

    plusgood
    doubleplusgood
    plusgood
    doubleplusgood

    doubleplusgood
    times 17.3.84 bb speech
    malreported africa rectify
    times 19.12.83 forecasts 3 yp 4th
    quarter 83 misprints verify current
    issue
    times 14.2.84 miniplenty malquoted
    chocolate rectify
    times 3.12.83 reporting bb day
    order doubleplusungood….

    Eurythmics: Doubleplusgood….From the 1984 Soundtrack.

  15. dr says:

    There’s a nice pub near Regents Park where I used to stay when traveling in London. One day I observed it had no reviews in Trip Advisor, so I posted one.   In the 6 years since I’ve not been able to book a room there, despite numerous attempts.  In this case at least I think Randall’s character was correct (and I belatedly apologize to all the other former regulars of the only affordable clean lodging in that neighborhood).

  16. nosehat says:

    Hmmm… does Yelp offer up an API?  (googles)  Yes, Yelp does have an API.  Heh, heh, heh!
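
    If anyone is curious, the read side of that really is only a few lines. This is a sketch against today’s Yelp Fusion API rather than whatever was live when the comment was written; the YELP_API_KEY variable is hypothetical (you need your own key), and as far as I know the API only lets you read data, not post reviews, so the evil laugh is premature:

    ```python
    # Sketch only: assumes the current Yelp Fusion API and a key of your own;
    # as far as I know it is read-only, so you can look at ratings but not post them.
    import os
    import requests

    YELP_API_KEY = os.environ["YELP_API_KEY"]   # hypothetical: set to your own key

    resp = requests.get(
        "https://api.yelp.com/v3/businesses/search",
        headers={"Authorization": f"Bearer {YELP_API_KEY}"},
        params={"term": "hotel", "location": "San Francisco, CA", "limit": 5},
    )
    resp.raise_for_status()

    for biz in resp.json()["businesses"]:
        print(biz["name"], biz["rating"], biz["review_count"])
    ```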

     Seriously though, so many businesses engage in astroturfing campaigns that it’s become a little crazy for a consumer to trust any collection of online reviews from disparate sources.

    Also, pretty much any time someone invokes “the tragedy of the commons” to excuse their own selfish behavior, that’s always going to actually be “the tragedy of you’re a dick.” I’m remembering this one.

  17. teapot says:

    The ‘delicious invisible meat’ line made me lose it completely when I saw this last night.

    Also, if you’re a new or non-frequent reader of XKCD: the hat guy is always an ingenious backwards-logic-employing jerk. There’s no need to analyse his actions… his whole reason for existing is to rock the boat.

  18. slab99_99 says:

    My family spent last spring break at an amazing resort. The night before we left, I made the mistake of reading the online reviews. Some were truly horrifying. I couldn’t understand why they were at odds with the majority. Now it makes sense.

  19. Patrik Martinson says:

    I’d say that 5-15% of reviews already do this. The trick is to sort them out from the “happy about everything, regardless” crowd, the “constant complaining, regardless” crowd, the astroturfers and the real reviews. Then you actually make a judgement call with the information you think you have. It involves a lot of doublethink, but you learn eventually. Sometimes I read reviews of things I own already, to hone my ability to sieve the review noise for a useful signal.
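
    That calibration trick can even be made concrete. A sketch, with every item, rating, and personal score invented by me for illustration: run whatever filter you trust over things you already own, and see how far its estimate lands from your own judgement.

    ```python
    # Sketch only: calibrate a review filter against items whose quality you already know.
    from statistics import median

    owned = {  # item -> (ratings seen online, my own honest score); all invented
        "kettle": ([5, 5, 1, 4, 4, 1], 4.0),
        "headphones": ([5, 2, 3, 1, 3, 5], 3.5),
    }

    for item, (ratings, my_score) in owned.items():
        est = median(ratings)               # stand-in for your preferred filter
        print(f"{item}: filter says {est:.1f}, I say {my_score:.1f}, off by {abs(est - my_score):.1f}")
    ```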

    • AP² says:

      “Sometimes I read reviews of things I own already, to hone my ability to sieve the review noise for a useful signal.”

      I thought most people reading reviews already did that. It’s especially useful for subjective reviews (movies and music, for example), since often even non-malicious reviewers are biased, and while I’m probably biased too, I’d rather read a review from someone who shared my biases ;)

  20. Adam Fields says:

    This is the Tragedy of the Selfish: http://www.aquick.org/blog/2008/10/02/on-libertariancapitalist-intent/
