Fraud, failure, and FUBAR in science

Here's an issue we don't talk about enough. Every year, peer-reviewed research journals publish hundreds of thousands of scientific papers. But every year, several hundred of those are retracted — essentially, unpublished. There are a number of reasons retraction happens. Sometimes, the researchers (or another group of scientists) will notice honest mistakes. Sometimes, other people will prove that the paper's results were totally wrong. And sometimes, scientists misbehave, plagiarizing their own work, plagiarizing others, or engaging in outright fraud. Officially, fraud only accounts for a small proportion of all retractions. But the number of annual retractions is growing, fast. And there's good reason to think that fraud plays a bigger role in science than we like to think. In fact, a study published a couple of weeks ago found that there was misconduct happening in 3/4ths of all retracted papers. Meanwhile, previous research has shown that, while only about 0.02% of all papers are retracted, 1-2% of scientists admit to having invented, fudged, or manipulated data at least once in their careers.

The trouble is that dealing with this isn't as simple as uncovering a shadowy conspiracy or two. That's not really the kind of misconduct we're talking about here.

Even when scientists are caught, red-handed, inventing results out of whole cloth, it's not usually a political or ideological agenda driving the fraud. Instead, this misconduct tends to be caused by biases and motivations that are a lot harder to spot and root out. It's about the personal and economic pressure to be successful and regularly publish (and, especially, pressure to publish something really important). It's about having spent years on a project and really, really, really not wanting to believe that time was wasted. It's about doing things by the book, even when you know the book is wrong. It's about laziness. Or jealousy. Or trying to please a demanding boss. It's about convincing yourself that you can cheat a little, just this one time, because your particular circumstances are just.

In other words, the problems with science are problems that exist in every human industry. As in any other job, most people are honest most of the time. Unlike other jobs, however, the culture of science has long operated on the assumption that everybody is honest all of the time — and that we always catch them when they aren't.

That's why it's actually exciting to me to see more people talking about the problems and misconduct that do happen in science.

It means more people within science are acknowledging that those problems are real and are starting to think about how to deal with them. It also means that you, the general public, are more aware of the ways in which science isn't perfect. That's important.

I talk a lot here about how hard it is to know what to think about the new studies you read about in the paper. Do they represent absolute Truth? Why are they so often contradicted by other studies? To really know why individual studies matter and what they mean for your life, you need context — context about the specific subject, and context about how science works as a whole. Knowing that scientists are human — and that the scientific process has flaws — is part of that.

And if what you're thinking right now is, "But this means that I can't totally trust any individual scientist, and it means that I can't think the newest research paper actually tells me much on its own, and it means that even honest researchers could be producing bad results, and it means I can't go around assuming that the facts I know are always going to be right," well ... congratulations. You're learning.

Some Suggested Reading/Listening:
• The New York Times on the recent research suggesting that fraud and misconduct are bigger issues in science than we like to think.
• Nature on retractions and why there are more retractions today than there used to be.
• Neuroskeptic on bad people breaking good rules, and the even bigger problem of good people following bad rules.
• Ed Yong's speech on the very human flaws in the scientific process.
• Ivan Oransky's Retraction Watch blog, where every day is "Time to Give the Scientific Process the Side-eye" Day.
• The Half-life of Facts: Why Everything We Know Has an Expiration Date — a new book by Samuel Arbesman, which I am currently reading with deep interest.
• Wrong — a book by David Freedman that everybody should read.

Vintage advertising image via X-Ray Delta One


  1. The scientific method as it is conducted nowadays is like democracy: full of weaknesses, but nobody has come up with anything better yet…

  2. ” a study published a couple of weeks ago found that there was misconduct happening in 3/4ths of all retracted papers”

    Whaddaya wanna bet that number was fudged?

    Seriously though, I’ve always felt this was a big issue. You rarely hear about the retractions to the same degree that you hear about the original studies (as reported in the mainstream press like the NYT or WaPo, I mean). I wonder how many people are going around acting on conclusions from some study they read about that has since been discredited?

    1. Yes, but this is a problem with journalism in general, not just science. Disproven political, crime, and health reporting rarely gets reversed either. Scandals turn out to be false alarms, suspects are found innocent, diets are found ineffective, vaccines are found harmless. I suspect it is because people have moved on and the main event isn’t currently interesting anymore… but the result is that people’s minds stay polluted forever. I’m sure I’ve been a victim of this. It has actually made me over-skeptical of the “first printing”. It was a long time (years) before I finally believed my old Nalgene bottles were bad.

  3. “Unlike other jobs, however, the culture of science has long operated on the assumption that everybody is honest all of the time — and that we always catch them when they aren’t.”

    I think that’s incorrect. Science, as a philosophy, assumes that our knowledge is incomplete, that humans are inherently biased, that results will be wrong. Instruments lie, people lie, reality tries to delude you. Science is about constantly reviewing and disproving old findings to arrive at something that, while it can’t necessarily be proven true, is at least not obviously wrong.

    As such, I think that statement is not right. Science culture assumes quite the opposite: nothing can be 100% proven, there are always errors to catch, and no one is entirely honest, if only because they can delude themselves or have incomplete data.

      1. I’m not sure what you mean by science as a philosophy vs. science as an industry. Are you trying to say, “the process of science acknowledges and attempts to mitigate human fallibility; however, human scientists are publicly portrayed as infallible”? If so, I agree.

        However, I do think science is unique in that it actively tries to remove the human element. Literature, politics, religion, business, etc., alternately celebrate and exploit the behaviors (good and bad) that make us human. People need to pay more attention to the fallibility of the scientific process, I agree. But I am personally offended when people assume science plays by the same ‘rules’ as big business or national politics (especially conspiracy theories). There are politics, money, and bad behavior everywhere, but science, properly done, can be done by any competent individual anywhere in the world with any belief system — and the same result will be attained. That’s significant.

        1.  I was with you up until “any belief system.” Doing science properly requires that you be willing to give up *any* belief, no matter how cherished, if it turns out not to be the way the world is.

          1. My point was more that the scientific process attempts to remove the impact of a belief system from the results. That a liberal Christian or a conservative Muslim or an atheist measuring (for example) the reflectivity of the snow cap will measure the same thing. I agree that this sometimes means these beliefs can be challenged and must be given up. I disagree if you think, as a condition for doing good science, that a scientist must expunge all non-scientific thinking from his or her life. I’m very unscientific about my choice of breakfast cereal, and about some of my political opinions, but I’d like to think I’m very scientific on the job and make a clear distinction between when I’m trying to be scientific (successfully or not) and when I’m acting on opinion or a cultural value I’ve decided not to fight.

          2. No, you don’t have to (and can’t even if you try to) expunge all non-scientific thinking. But good science does require that when trying to answer a question scientifically, empirical data takes precedence over all other information. And before someone misinterprets that, statements like “People want X,” are empirically verifiable claims.

            For example, the data says most pregnancies end naturally before the woman is aware of them. Therefore, if the Christian god exists, either 1) god wantonly damns most people to eternal suffering before they are even born, 2) baptism is not necessary for salvation, or 3) human life does not begin at conception.

        2. “Science” is used in more than one sense. Sometimes it’s used to refer to the body of knowledge accumulated through the work of scientists (“facts of science”), sometimes it refers to the idealized methodology (“science is a method for gaining knowledge about the physical world”), and sometimes it’s used to refer to the actual, factual real-world communities performing science (“science has provided many of the comforts of the modern world”). I think Maggie is using it in the third sense. I’ve also heard it referred to as “industrial science,” which I think describes the institutional model under which most modern physical science takes place.

          The upshot is that the idealized methodology is indeed a great idea, but that the real-world communities practicing science often fall short of that ideal.  I don’t think anyone’s criticizing the ideal of science here.

  4. Misconduct occurs, but it is incredibly rare. Obviously more often than it should, but it is still rare. In terms of accuracy, both in numbers and in sentiment, can any other published field boast anything similar? What if news were held to the same standards?

    Considering the publication pressures scientists face, I think it is absolutely remarkable that misconduct is as rare as it is. The trace amount of misconduct is far from the biggest challenge faced by scientists (the ones who really care whether a study is accurate or not).

  5. can’t remember who said it.  it went something like –

    science is a process whereby beautiful theories are routinely hacked to death by ugly facts.

    anyone know the original quote/author?

    1. Apparently much contested. A quick Google search turned up an argument about whether it should be attributed to Robert Glass (dunno who that is) or Aldous Huxley, with people apparently leaning towards Glass. Also got an attribution to Benjamin Franklin.

      Here’s a source for the Robert Glass attribution.  But I saw another link where Glass calls it “one of my favorite quotes from engineering culture” or something like that — so Glass isn’t really taking credit.

      I also have Thomas Huxley getting credit for this quote: “The great tragedy of Science — the slaying of a beautiful hypothesis by an ugly fact” (here). Given that the last link includes a citation, I’m leaning towards Thomas Huxley.

  6. It appears some scientific publishers are becoming more skeptical of the assumption “everyone is honest all of the time.”

    I was reading recently that SIAM (Society for Industrial and Applied Mathematics) is screening all papers both at the submission and acceptance stages for plagiarism and duplicate publication, using a system called CrossCheck.

    See “Combating Plagiarism” at

  7. My father is a published scientist and bemoans the amount of plagiarism he suffers. Working out of the UK, where funding for his work is negligible at best, he frequently finds his papers taken (largely by American labs, it must be said), expanded upon, and re-published without a shred of attribution.

    The sad result of this is that he now sits on several impact-15 papers, because he can’t get funding for the lab work and doesn’t want the credit to go to another lab that has done half the work but will take all the credit.

    It’s a sad state of affairs.

  8. I am a little bit worried about this flood of articles about scientific fraud in the media in recent years. What other area of human endeavor has a fraud rate of only 2%? Industry? The economy? Politics? Sports? Yet texts about scientific fraud constitute a disproportionate share of all texts about science in non-scientific media.

    Ultimately, this destroys the public image of the other 98% of scientists and the already shaky public perception of science. We already live in a deeply anti-intellectual culture.

    The number of instances of fraud may well be rising, which certainly is alarming. I would guess that the probable cause lies in the financial constraints imposed on scientists. Since the end of the Cold War, public funds for research have been going steadily down, yet the number of people involved in scientific research has exploded. In effect, we have more people fighting for fewer funds. Government grants have been replaced in many research areas by money coming from industry, and industry is not interested in long-term research. They want fast results. Results in modern science simply mean papers published in journals. These are the points you need to collect in order to advance your own scientific career. These problems started to fester when our society decided to suddenly run science as a profitable business.

    The solution would of course be to return to the earlier model, push industry out, and allocate more public funds. However, I’m afraid that this sort of text in the media would serve only to justify further cuts in funding.

  9. There is indeed pressure to publish, and to publish well, with ever more emphasis on metrics and on making sure faculty are actually working during the 50-60 hours a week we spend in our labs (makes me wonder what admin is doing during the 40 hours a week they spend in their offices, if they assume we’re slacking). For me, it’s simply a question of never knowing what the real answer is if we cut corners. And quite frankly, I’ve rarely been so enraged as when a reviewer out-and-out claimed our results were too good to be real, after years of careful tweaking and adjusting for hours every day to identify uncontrolled sources of variability and figure out how to control them. [So I must keep telling myself that it is “compulsive” only when it harms myself or others, not when it’s functional . . .]
    How to inculcate this in students, who are in a rush to graduate and who do science because it pays better than waiting tables, is a worry.
