Utilitarianism's darkly comical pitfalls

A characteristically great Saturday Morning Breakfast Cereal installment explores the hidden pitfalls of extreme utilitarianism. I just re-read Starship Troopers and was once again struck by Heinlein's strange idea of a scientifically provable "moral philosophy" that tests every human situation by expressing it in symbolic logic to weigh its validity.

We created a utilitarian ethics computer to replace government (Thanks, Jamie!)


    1.  The advantage of utilitarianism is that, in theory, we *could* reduce the problem of ethics to one of metrics.  Which is pretty cool.

      First of all, of course, we’d need a device that measures “happiness”, whatever that is.  So, not there yet. 

      Also, it seems to me we would (even sans Felix) end up with a world where we got what we wanted rather than what we needed.  If there is any benefit to utilitarianism, I think it lies in the application of an arbitrary, objective standard to political decision making, rather than in the “most happiness” thing.

      1. There are spin-offs of utilitarianism to handle the “what we wanted rather than what we needed” problem — “preference utilitarianism” being what we wanted, and “welfarist utilitarianism” being what we needed.

        Personally, I’m not very bothered by this distinction, and don’t feel a strong need to pick one — embracing *either* form would profoundly change our priorities, first of all regarding the incredible levels of  suffering experienced by the global poor, which is avoidable with known and cheap interventions that we currently (as a society) aren’t bothering to spend enough money on.  I think making either change would be an excellent thing to do.

        I’m also not too bothered by the inability to measure happiness (although I’m excited by the possibilities neuroscience may offer some day), for the same reason: we don’t need a machine to tell us that pain and suffering make people unhappy.  I’m more interested in reducing the extreme suffering in the world than adding to the happiness of already-happy people: this is another spin-off, known as “prioritarianism”.

        I liked the comic, though.  Since this comment seems to have turned into a list of variants of utilitarianism, it’d make sense for me to point out that the utility monster argument is valid because the version of utilitarianism used by the comic is maximizing *total happiness*, whereas maximizing *average happiness* across the population would avoid the utility monster problem (at the cost of introducing some new and different edge cases).

        1. “embracing *either* form would profoundly change our priorities”

          I think we agree.  Using almost any form of objective measure to set policy would be a massive move forward.  Probably even if the objective measure was a vaguely stupid one.

        2.  The Utility Monster is about average happiness, because increasing total happiness is simply a matter of addition: sacrificing the happiness of others for the happiness of the utility monster would make no sense in that case, because you would be decreasing total happiness. A utility monster, though, isn’t subject to diminishing happiness returns from resource consumption, so it would make sense to feed it at the expense of everyone else, who do have diminishing returns, in order to increase average happiness.
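          A toy sketch with entirely made-up numbers (logarithmic utility for ordinary people, linear utility for the monster) shows how a consumer with constant marginal utility soaks up every resource under greedy happiness-maximization:

          ```python
          import math

          # Hypothetical utility functions: ordinary people have diminishing
          # returns (logarithmic), the utility monster does not (linear).
          def person_utility(r):
              return math.log(1 + r)

          def monster_utility(r):
              return 10 * r

          # Greedily hand out 100 resource units, one at a time, to whoever
          # gains the most happiness from the next unit.
          people = [0.0] * 5   # resources held by five ordinary people
          monster = 0.0
          for _ in range(100):
              gains = [person_utility(r + 1) - person_utility(r) for r in people]
              monster_gain = monster_utility(monster + 1) - monster_utility(monster)
              if monster_gain >= max(gains):
                  monster += 1
              else:
                  people[gains.index(max(gains))] += 1

          print(monster, people)  # the monster ends up with all 100 units
          ```

          The monster’s marginal gain (a constant 10 per unit here) never drops below anyone else’s best possible gain (log 2, about 0.69, for a first unit), so it receives everything.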

          1. Ah, I think we’re confusing averages.  Optimizing mean average happiness would be subject to a utility monster, but median average happiness would ignore it altogether, right?
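            The arithmetic is easy to check with some made-up happiness scores:

            ```python
            import statistics

            # Illustrative numbers only: ten ordinary people, then the same
            # population with one utility monster added.
            population = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]
            with_monster = population + [1_000_000]

            print(statistics.mean(population), statistics.median(population))
            print(statistics.mean(with_monster), statistics.median(with_monster))
            # One monster sends the mean into the tens of thousands,
            # while the median barely moves (5.0 -> 5).
            ```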

          2. I’m not confusing averages, as you can still do the median average of a group of one. In other words, to get the highest possible median, it would make sense to do away with everyone else but the utility monster, because it could continue increasing its happiness, therefore continually raising the median average.

          3. Median happiness leading to executing everyone except the utility monster is an interesting edge case, but it’s still just an edge case, avoidable with “we should maximize happiness according to median average happiness, and also you aren’t allowed to kill everyone other than one person to do that”.

          4.  So we would agree that a median average would still be subject to a utility monster? It’s an edge case, yes, but it means that the median average alone will not avoid the utility monster.

            Your qualifiers, though, run into the issue that we don’t exactly know what this utility/happiness is that we are supposed to be maximizing. This is because the qualifiers you added mean that living becomes more important than maximizing happiness.

            I’m not anti-utilitarian per se, because I see it as a useful tool, or framework of ideas, to view problems and solutions with to make things better. However, as a methodological system for running things, it tends to lead to logically incoherent ends (the definition of utility) or ethically repugnant ones (Soylent Green is people!).

          5. Well, you’re still assuming that the utility monster isn’t made unhappy by you killing everyone else.  In the comic, Felix is very dependent on the other humans in the society for his happiness.  Presumably even most imaginary utility monsters would want to procreate and interact with others.

            I don’t think the median-of-one problem is especially linked to the utility monster problem, honestly.  Any time you’re talking about maximizing a metric across a range you have the option of dramatic change in sample size to fudge the figures, so most of the time we ban that edge case outright when posing the question.

          6.  The utility monster is a thought experiment, meant to explore a logical extreme of utilitarianism. Edge cases are used to explore all the possible ends of an ideological system, even if these ends are not entirely realistic. In other words, if this is a logical outcome of your ethical system, perhaps there are some issues with your founding premises?

            Your argument tacitly admits that this is the case with utilitarianism, as adding all sorts of mitigating factors to the thought experiment is saying “but this would never happen in real life!” Admittedly, I would tend to agree with you that it’s not a realistic outcome, but it is a logical outcome nonetheless. Therein lies the problem – if your system’s logical outcome does not correspond with the realistic outcome, what good is the system in practical terms?

          7. You seem to be arguing that utility monsters are both unrealistic and realistic, when you say that the logical outcome is not matching the realistic one in the case of imaginary utility monsters.  Everything that’s complicated enough to be useful has edge cases, and this is one of them.

            I think you’re holding ethics to a standard that we would never require for other fields, such as physics or even mathematics.  You say that systems are no practical good if they contain inconsistency: I say that I’ll stop using utilitarianism when you stop using arithmetic.  :)

      2. I’d like to direct commenters’ attention to edgore’s point about the comic’s point, so folks don’t continue to miss it:

        The point that it’s making is, if I am not mistaken, about wealth concentration. Under the system presented the already very happy person’s happiness is supported because that person is already so happy.
        Replace happiness with money and you essentially have the system that we live in today, where those who already have money receive preferential treatment that enables them to accumulate more money, because they already have so much.

        Its excellent point is that this is stupid.

        I submit today’s comic as evidence that a socioeconomic interpretation is not at all far fetched.


  1. Another reason why Starship Troopers is a great book. It’s probably stimulated more debate and response than any randomly selected best seller from its time.
    We could make a similar case for SMBC.

    1. I’m a big fan of RAH too, but let’s remember 1959: Naked Lunch, Rise and Fall of the Third Reich, A Raisin in the Sun, The Tin Drum, Advise and Consent.

  2. I do hope that everyone realizes what this is actually about, and that it’s making a very excellent point.

    1. THANK YOU. I’m glad I wasn’t the only one who realized that it’s not about utilitarianism.

    2.  I’m asking an honest question here. What is this actually talking about, and what is its excellent point?

      1. From Wikipedia:
        Gross domestic product (GDP) refers to the market value of all officially recognized final goods and services produced within a country in a given period. GDP per capita is often considered an indicator of a country’s standard of living;

        1.  While the comic may well take a swipe at this, I think I have to disagree.  The majority of the laughs here come from utilitarianism.  Or from averages.

          And of course, what the comic is “actually about” is making people laugh.

      2. The point that it’s making is, if I am not mistaken, about wealth concentration. Under the system presented the already very happy person’s happiness is supported because that person is already so happy. Replace happiness with money and you essentially have the system that we live in today, where those who already have money receive preferential treatment that enables them to accumulate more money, because they already have so much.

        Its excellent point is that this is stupid.

  3. While the notion of utilitarianism (which rather more serious thinkers than Heinlein developed) has practical problems (such as how one can objectively measure happiness in the first place), the  problem of average happiness as described in the comic is trivial to overcome — just switch to the median happiness.

  4.  Any Felicific Calculator should be using some sort of dynamic Distributed Constraint Algorithm rather than naive average maximization as the comic portrays.  Optimizing global happiness is likely to be more difficult a problem than packing a knapsack, which is NP-complete! That’s depressing to think about, and therefore unethical.

  5. I think Nozick first gave the “utility monster” its name in “Anarchy, State, and Utopia”, but I swear I remember reading this critique of utility earlier — perhaps from Bentham or Mill themselves?

  6.  It seems to me that this is the world we have today, with money in the place of happiness in some bizarre comedy about confusing psychopathic giddiness for human welfare.

  7. Utilitarianism has a lot of appeal for the privileged because the idea of “averaged” happiness says nothing about social justice. Like libertarianism it’s an easy, naive philosophy that justifies self-interest. If that sounds like an interesting statement, I suggest reading John Rawls; he’ll do those claims much more justice than I can. You can get a good intro, if you have access to JSTOR, in the 1972 book review by Lyons that pops up in a Google search for “rawls utilitarianism.”

    Nozick is an interesting read too, if for no other reason than he did a 180 on the positions he took in “Anarchy, State, and Utopia.”

    1. I don’t understand.  I think that utilitarianism tells me (and most of the people reading this) unequivocally that I should donate most of my money to effective aid interventions for extremely poor people, because it will help them much more than it will help me.  In what way is this justifying self-interest or ignoring social justice?

      1. Chris, it sounds like you may be the reincarnation of Rawls, since you’re putting your own self-interest behind an iron curtain. And if you’re the sort of person that will write a three-paragraph discussion of utilitarianism, you’re going to love reading some Rawls. He spells out the pitfalls better than I can, but I’ll say that, in short, you’re not most people. Neoliberalism is largely based on a demonic chimera of utilitarianism, libertarianism, and “enlightened self-interest,” and neoliberalism is rapidly bringing us a new Gilded Age. I take this seriously enough that I’d try to gloss Rawls’s A Theory of Justice, BUT I really, really need to grade a bunch of papers.

        1. Reading your post again, I think that the definition of utilitarianism you apply is not very accurate because you’re reading it thru the lenses of the various modifications you list above, many of which may be responses to the libertarian critique offered by Rawls in A Theory of Justice.

          Basically Rawls argues that utilitarianism threatens individual happiness and liberty. He claims that liberty should not be exchanged for happiness and that any purchased increase in happiness should make every individual more happy than before. (I use “purchase” to indicate an expenditure in energy, comfort, money, what-have-you; it’s not Rawls’s way of putting it.) Again, I’m a dilettante when it comes to ethical philosophy, so I’m not doing a good job here.

        2. it sounds like you may be the reincarnation of Rawls, since you’re putting your own self-interest behind an iron curtain

          Sorry, I still don’t understand.  Why do you find this special?  Minimizing personal selfishness in order to maximize happiness across the whole of society instead is simply what utilitarianism is, no?

          1.  That depends entirely on how many people are unhappy about the unhappiness of other people. If you see what I mean.

      2. Because the way you (and I, and everyone lately) made the money you would give, itself necessitates effective interventions for extremely poor people?

        They’re poor because we’re rich. We’re not rich without them. They get paid unfairly, and so do we.

        1. I think I agree with you, but I still don’t understand how this says anything critical about utilitarianism, which is the *most demanding* philosophy I know of when it comes to social justice.  I didn’t have any control over winning the birth lottery and being ridiculously wealthy compared to people who live on $2/day, and I can’t go back and change that history now — but now that I’m in this situation, I don’t see any ethics promoting my self-interest *less* than utilitarianism does, so it seems a bizarre criticism to make.

          1. We’d probably have to agree the comic was about utilitarianism, rather than utilitarians. So sayeth a unitarian.

      1.  Well, if they got rid of all the characters in order to pay for more cookies, I think they would have a pretty good approximation in the form of Cookie Monster.

  8. Except that isn’t utilitarianism. The artist makes a mistake in simplifying utilitarianism to “do the greatest good”. That’s not how I was taught it. The way I learned it, utilitarianism is “do the greatest good for the greatest number of people.” Which effectively kneecaps his faulty criticism.

    1. How many is meant exactly by the greatest number? What if the greatest number of people you can do the greatest good for is only one person? What if that one person is a utility monster like in the comic?

  9. What I find so fascinating about Heinlein’s  symbolic logic for ethics is that it validates what he preaches.

    And please don’t start with “don’t confuse the author with his characters” – Heinlein’s protagonists are second only to Rand’s in wish fulfillment and soapboxing.
