Meet Science: What is "peer review"?

When the science you learned in school and the science you read in the newspaper don't quite match up, the Meet Science series is here to help, providing quick run-downs of oft-referenced concepts, controversies, and tools that aren't always well-explained by the media.


"According to a peer-reviewed journal article published this week ..."

How often have you read that phrase? How often have I written that phrase? If we tried to count, there would probably be some powers of 10 involved. It's clear from the context that "peer-reviewed journal articles" are the hard currency of science. But the context is less obliging on the whys and wherefores.

Who are these "peers" that do the reviewing? What, precisely, do they review? Does a peer-reviewed paper always deserve respect, and how much trust should we place in the process of peer review itself? If you don't have a degree in the sciences, and you haven't taught yourself the inside baseball of how science gets done, there's really no reason why you should know the answers to all those questions. You can't be an expert in everything, and this isn't something that's explicitly taught in most high schools or basic-level college science courses. And yet, I and the rest of the science media continue to reference "peer review" as if all our readers know exactly what we're talking about.

I think it's high time to rectify that mistake. Ladies and gentlemen, meet peer review:

What does the phrase "peer-reviewed journal article" really mean?

This part you've probably already figured out. Journal articles are like book reports, usually written to document the methodology and results of a single scientific experiment, or to provide evidence supporting a single theory. Another common type of paper that I talk about a lot is the "meta-analysis" or "review"—a big-picture report that compares the results of lots of individual experiments, usually done by compiling all the previously published papers about a very specific topic. No single journal article is meant to be the definitive last word on anything. Instead, we're supposed to improve our understanding of the world by looking at what the balance of evidence, from many experiments and many articles, tells us. That's why I think reviews are often more useful for laypeople. A single experiment may be interesting, but it doesn't always tell you as much about how the world works as a review can.

Both individual reports and reviews are published in scientific journals. You can think of these as older, fancier, more heavily edited versions of 'zines. The same scientists who read the journals write the content that goes in the journals. There are hundreds of journals. Some publish lots of different types of papers on a very broad range of topics—"Science" and "Nature", for instance—while others are much, much more specific. "Acute Pain", say. Or "Sleep Medicine Reviews". Often, you have to pay a journal a fee per page to be published. And you—or the institution you work for—have to buy a subscription to the journal, or pay steep prices to read individual papers.

Peer review really just means that other scientists have been involved in helping the editors of these journals decide which papers to publish, and what changes need to be made to those papers before publication.

How does peer review work?

It may surprise you to learn that this is not a standardized thing. Peer review evolved out of the informal practice of sending research to friends and colleagues to be critiqued, and it's never really been codified as a single process. It's still done on a voluntary basis, in scientists' free time. Such as it is. And most journals do not pay scientists for the work of peer review. For the most part, scientists are not formally trained in how to do peer review, nor given continuing education in how to do it better. And they usually don't get direct feedback from the journals or other scientists about the quality of their peer reviewing.

Instead, young scientists learn from their advisors—often when that advisor delegates, to the grad students, papers he or she has been asked to review. Your peer-review education really depends on whether your advisor is good at it, and how much time they choose to spend training you. Meanwhile, feedback is usually indirect. Journals do show all the reviews to all of a paper's reviewers. So you can see how other scientists reviewed the same paper you reviewed. That gives you a chance to see what flaws you missed, and compare your work with others'. If you're a really incompetent peer reviewer, journals might just stop asking you to review, altogether.

Different journals have different guidelines they ask peer reviewers to follow. But there are some commonalities. First, most journals weed out a lot of the papers submitted to them before those papers are even put up for peer review. This is because different journals focus on publishing different things. No matter how cool your findings are, if they aren't on-topic, then "Acute Pain" won't publish them. Meanwhile, a journal like "Science" might prefer to publish papers that are likely to be very original, important to a field, or particularly interesting to the general public. In that case, if your results are accurate, but kind of dull, you probably will get shut out.

Second, peer reviews are normally done anonymously. The editors of the journal will often give the paper's author an opportunity to recommend, or caution against, a specific reviewer. But, otherwise, they pick who does the reviewing.

Reviewers are not the people who decide which papers will be published and which will not. Instead, reviewers look for flaws—like big errors in reasoning or methodology, and signs of plagiarism. Depending on the journal, they might also be asked to rate how novel the paper's findings are, or how important the paper is likely to be in its field. Finally, they make a recommendation on whether or not they think the specific paper is right for the specific journal.

After that, the paper goes back to the journal's editors, who make the final call.

If a paper is peer reviewed does that mean it's correct?

In a word: Nope.

Papers that have been peer reviewed turn out to be wrong all the time. That's the norm. Why? Frankly, peer reviewers are human. And they're humans trying to do very in-depth, time-consuming work in a limited number of hours, for no pay. They make mistakes. They rush through, while worrying about other things they're trying to get done. They once had to share a lab with the guy whose paper they're reviewing and they didn't like him. They get frustrated when a paper they're reviewing contradicts research they're working on. By sending every paper to several peer-reviewers, journals try to cancel out some of the inevitable slip-ups and biases, but it's an imperfect system. Especially when, as I said, there's not really any way to know whether or not you're a good peer reviewer, and no system for improving if you aren't. There's some evidence that, at least in the medical field, the quality and usefulness of reviews actually goes down as the reviewers get older. Nobody knows exactly why that is, but it could have to do with the lack of training and follow-up, the tendency to get more set in our ways as we age, and/or reviewers simply feeling burnt out and too busy.

It's also worth noting that peer review is really not set up to catch deliberate fraud. If you fake your results, and do it convincingly, there's not really any good reason why a peer reviewer would catch you. Instead, that's usually something that happens after a paper has been published—usually when other scientists try to replicate the fraudster's spectacular results, or find that his research contradicts their own in a way that makes no sense.

If a paper isn't peer-reviewed, does that mean it's incorrect?

Technically, no. But, here's the thing. Flawed as it is, peer review is useful. It's a first line of defense. It forces scientists to have some evidence to back up their claims, and it is likely to catch the most egregious biases and flaws. It even means that frauds can't be really obvious frauds.

Being peer reviewed doesn't mean your results are accurate. Not being peer reviewed doesn't mean you're a crank. But the fact that peer review exists does weed out a lot of cranks, simply by saying, "There is a standard." Journals that don't have peer review do tend to be ones with an obvious agenda. White papers, which are not peer reviewed, do tend to contain more bias and self-promotion than peer-reviewed journal articles.

You should think critically and skeptically about any paper—peer reviewed or otherwise—but the ones that haven't been submitted to peer review do tend to have more wrong with them.

What problems do scientists have with peer review, and how are they trying to change it?

Scientists do complain about peer review. But let me set one thing straight: The biggest complaints scientists have about peer review are not that it stifles unpopular ideas. You've heard this truthy factoid from countless climate-change deniers, and purveyors of quack medicine. And peer review is a convenient scapegoat for their conspiracy theories. There's just enough truth to make the claims sound plausible.

Peer review is flawed. Peer review can be biased. In fact, really new, unpopular ideas might well have a hard time getting published in the biggest journals right at first. You saw an example of that in my interview with sociologist Harry Collins. But those sorts of findings will often be published by smaller, more obscure journals. And, if a scientist keeps finding more evidence to support her claims, and keeps submitting her work to peer review, more often than not she's going to eventually convince people that she's right. Plenty of scientists, including Harry Collins, have seen their once-shunned ideas published widely.

So what do scientists complain about? This shouldn't be too much of a surprise. It's the lack of training, the lack of feedback, the time constraints, and the fact that, the more specific your research gets, the fewer people there are with the expertise to accurately and thoroughly review your work.

Scientists are frustrated that most journals don't like to publish research that is solid, but not ground-breaking. They're frustrated that most journals don't like to publish studies where the scientist's hypothesis turned out to be wrong.

Some scientists would prefer that peer review not be anonymous—though plenty of others like that feature. Journals like the British Medical Journal require reviewers to sign their comments, and have produced evidence that this practice doesn't diminish the quality of the reviews.

There are also scientists who want to see more crowd-sourced, post-publication review of research papers. Because peer review is flawed, they say, it would be helpful to have centralized places where scientists can go to find critiques of papers, written by scientists other than the official peer-reviewers. Maybe the crowd can catch things the reviewers miss. We certainly saw that happen earlier this year, when microbiologist Rosie Redfield took a high-profile peer-reviewed paper about arsenic-based life to task on her blog. The website Faculty of 1000 is attempting to do something like this. You can go to that site, look up a previously published peer-reviewed paper, and see what other scientists are saying about it. And the Astrophysics Archive has been doing this same basic thing for years.

So, what does all this mean for me?

Basically, you shouldn't canonize everything a peer-reviewed journal article says just because it is a peer-reviewed journal article. But, at the same time, being peer reviewed is a sign that the paper's author has done some level of due diligence in their work. Peer review is flawed, but it has value. There are improvements that could be made. But, like the old joke about democracy, peer review is the worst possible system except for every other system we've ever come up with.

If you're interested in reading more about peer review, and how scientists are trying to change and improve it, I'd recommend checking out Nature's Peer to Peer blog. They recently stopped updating it, but there's lots of good information archived there that will help you dig deeper.

Journals have also commissioned studies of how peer review works, and how it could be better. The British Medical Journal is one publication that makes its research on open access, peer review, research ethics, and other issues available online. Much of it can be read for free.


The following people were instrumental in putting this explainer together: Ivan Oransky, science journalist and editor of the Retraction Watch blog; John Moore, Professor of Microbiology and Immunology at Weill Cornell Medical College; and Sara Schroter, senior researcher at the British Medical Journal.

Image: Some rights reserved by Nic's events


  1. My problem with peer review is that reviewers are anonymous, and as such there is significant politics behind the scenes.

    A better system would be to publish the names of the reviewers alongside the paper. This would prevent anonymous reviewers from torpedoing good papers, and would expose conflicts of interest to the community. Transparency is greatly needed in science.

    Good papers are often killed by influential older scientists. Science is ruled by the old guys, and the younger scientists who do the heavy lifting have no champion to defend them.

    1. Revealing the identities of the reviewers wouldn’t fix this problem (if it exists), it would make it worse. If, as you say, the old farts are calling the shots, why would a young reviewer point out the flaws in an older, more influential researcher’s work if they don’t have the cover of anonymity?

    2. In reply to Anon in comment #2, Frontiers do exactly this. Reviewers remain anonymous through the review process, but once the paper is published, the reviewers’ names are on it. The review process also attempts to be a bit more interactive – after the reviewers write their initial comments, both reviewers and authors interact through a forum, discussing changes to the paper. This can be a pro and a con, depending on how ‘chatty’ your reviewers are!

      Frontiers are an open-access outfit – you pay to have your article published, but then anyone can download it for free.

      Full disclosure – my GF works for them, so I’m becoming slowly brainwashed by the ‘revolutionary Frontiers process’. I’m also much more aware of why somebody has to pay to get an article published. Even when there is no printed journal, you still need an office full of people making sure the articles arrive, reviewers review, the website stays up, etc.

  2. I’ve been a peer reviewer dozens of times for several journals and quite a few conferences. I would say that being able to see other reviewers’ reviews is decidedly *not* the norm. It happens in my fields, but certainly less than half the time.

    1. I second Bill Barth’s comment about not usually being able to read other reviews. Most often this is simply not provided by the system (online platform). You log in, upload your feedback, and that’s it. A colleague of mine once had to write to the editor and explicitly request the other reviews – and it was easier because it was an old-fashioned, non-automated system, i.e. all files sent by email instead of online forms.

      One positive change I’m seeing is that, while it’s true that we don’t get formal training, more and more peer-review forms include lists of guidelines or guiding questions to help the reviewers focus on the major aspects of the manuscript (such as coherence between goals and results, solid methodological information, originality, etc). That is quite useful for reviewers, especially “newbies”.

      Time constraints are a problem. After all, we do peer-review for free, in our spare time (not that we have much of it). And at least here in Argentina, CONICET, the state agency for scientific research that reads my reports, won’t care that I’ve reviewed several papers, they only care that I publish my own work.

      We’ve been talking about the “peer review paradox” at work. On the one hand, the editors seem to be giving less and less time for a review (i.e. 2 weeks when it used to be a month), with the system sending automatic reminders that add extra pressure for us to hurry… On the other hand, waiting times for authors seem to be getting longer. And it is quite frustrating to hear back from the editor *months* after submitting your ms only to learn that they can’t find anyone to review your work and/or the reviewers didn’t respond and/or they didn’t provide a useful review (this actually happened to me).
      I think this is the logical result of many researchers not wanting to spend their time reviewing when they can be doing something more productive for their careers. And as long as peer-reviewing doesn’t get the recognition it deserves as part of the scientific activity, these problems will only keep growing.

    1. That depends a lot on the article. For results that are very time-sensitive, people will post to the arXiv while their paper is still being refereed. So that means they stake a claim, but they also get additional feedback from the wider public during the refereeing process and can incorporate that in revised versions. For results that are less urgent or more uncertain, people will typically wait until they at least have a first referee report (indicating only small corrections) or have been accepted before posting. So arXiv does not have much impact in those cases.

      Note that “informal” refereeing usually goes on too, where authors send papers to people they know and trust before they are published. Anonymous referees (which is usually the case with the ones coming from journals, unless the referee asks to be named) are good, but having more opinions can also help. Knowing the people makes it easier to weight the comments appropriately.

  3. Had my first peer-reviewed paper published last year. Main thing I learned was how picky my peers could be.

    The main conflict was comments by an academic reviewer asking for more algorithm description, really wanting something entirely reproducible. Since the work described was for a commercial company, I had to say no, my boss won’t allow it to that extent, but we’ll add as many references as I can get away with. Ironically, we were integrating a major software module that was supplied by a university research lab and paid for by the government, and the university’s lawyers were as hard to get information out of as anyone…

    It was a pain, but I have no question that in this case responding to the reviewer’s comments made the paper much better in the end.

  4. Great article, Maggie, and a wonderful topic for an ongoing series!

    As an outsider to / fanboy of the scientific community, I have one question: typically peer review and claims of potential reproducibility by a third party are used in a defense of the scientific method, say, in arguments for evolutionary theory.

    But just as peer-review is an opaque process to the outside world, the process of third party verification of lab results etc. is even more obscure.

    I happen to have access to Science magazine as “casual lunchtime reading”, and I am overwhelmed by the sheer number of articles published. But out of all the published papers in all of the journals, how many of the results will be reproduced by peers, third parties, in other laboratories?

    I expect the cost for most modern scientific experiments to be rather exorbitant — time, materials, equipment, staffing etc. I also understand that existing research is based on findings of earlier research, but in that case, how many scientists will first go back and verify the original findings, before going forward with their own research?

    1. In my very limited experience as a first-year grad student, I’ve seen researchers go over other researchers’ work for three reasons:

      1- It’s useful to their own work, but it would be even more useful if it can be verified.

      2- They want to prove it wrong. Either it offends them, or it makes their own research more difficult,

      3- It’s a job requirement. Their institution requires them to spend a certain amount of time each year reviewing and interpreting outside research.

      I’m sure there are more reasons than this, but these are the ones I’ve seen first-hand.

    2. This is a good question but you have to remember that science and the technology created by the scientific process is cumulative. Say somebody publishes an idea and it’s right but not completely right. The idea then becomes more of a tool to explore other questions. Now another researcher comes along and tries to use this tool but the tool doesn’t work, then perhaps the original idea is flawed. However if the tool continues to perform well no one will question the original idea.

      In science, ideas build on each other…sometimes it’s a house of cards…and other times the ideas are solid no matter how many different ways they are tested by answering other related questions consistently. The problem is that bad ideas can get put into this system and persist for quite a while…though generally they do get disproved one way or another when people try to build upon a bad idea and are unable to.

    3. A lot of the time, when an article is submitted and discussions are produced with data to support the hypothesis of the research being executed, other laboratories that are investigating the same topic will design research loosely based on the data presented. This means that in developing laboratory experiments, researchers often have to begin by reproducing parts of other people’s experiments in order to proceed quantitatively with their own research. This is just one example of how results are reproduced. Hope that helps.

    4. I love this article. More like it, please!

      Before replying to jbldb, I just would add 2 points: 1) There are vast differences across disciplines in how peer review is conducted, as well as 2) across journals within a given discipline. I work in sociological social psychology. The turnaround time for many (sadly, most) journals in my area, particularly the ones that are most prestigious and highly ranked, is 6 or more months. My brother works in computer science. He reports turnaround times of a few weeks. My most recent accepted article was written with a coauthor, contained 64 pages of text, wherein we reviewed a huge literature and reported the results from a cross-national survey. His last paper was 8 pages, had 10 authors, and was mostly code.

      Both went through peer review. All together, my paper was reviewed by 3 reviewers at one journal (which rejected it 8 months after I sent it in) and, after a complete revision, by another 4 reviewers at a second journal. The second journal asked us to revise and resubmit that paper. We did, guided by the reviewers’ critiques, and sent it back. It then went out for a second round of reviews. Two of these second-round reviewers were among the three initial reviewers, and one was new. The paper came back with conditional acceptance, based on our making several changes that the new reviewer requested. We made the changes and received an acceptance decision. My brother often submits an abstract, a brief description of an algorithm, and code, which sees two reviews before acceptance.

      I am not complaining (In fact, my work, in every case, has been hugely improved by going through such rigorous peer reviews), just describing the variance across disciplines in how it works in practice.

      @jbldb: Studies that replicate previously published work are difficult to get through peer review because a typical criterion reviewers are asked to consider is originality, or the potential for the article to contribute important new findings. Because replications are, by definition, not as original as the study being replicated, pure replication is often more difficult to get accepted by top journals, even if otherwise the research is well done. One good strategy for getting replications accepted is to include in a study some conditions that replicate those from previously published work, along with other new conditions that address other predictions. There are also one or two general journals that focus on publishing replications.

      You are correct that the huge number of journals, sub-areas, and sub-sub-areas means that even if replications were not difficult to publish, there are simply too many studies being published to easily ensure that third-party replications are conducted for most. Compounding this problem is publication bias. As mentioned in the article, it is also more difficult, all else equal, to get a paper successfully through peer review when its results fail to reject the null hypothesis (meaning the findings are inconsistent with predictions derived from the theory under test) compared to an experiment where the null is rejected. Thus, a bias exists toward false positive results being accepted.

      Both the difficulty in publishing replications and publication bias are serious problems for at least two reasons.

      First, publication bias means that there is a preference for papers that reject the null simply because their results are consistent with the theory driving the study. The number of experiments that correctly failed to reject the null is unknown but could be large.

      Second, the difficulty of getting replications published, combined with publication bias, means that one of the mechanisms by which science seeks to self-correct is likely to be inefficient. If a replication, for example, fails to reject the null, it has two strikes against it: first, it is a replication and therefore not as original; second, it fails to support the theory being tested, so it is less appealing to editors.

      Finally, publication bias also leads to serious but almost universally ignored problems with meta-analytic reviews of multiple studies. One of the statistical assumptions of meta-analysis is that the sample of studies includes both successes and failures to reject the null. But because successes are more likely to get published due to publication bias, they are typically over-represented in meta-analytic reviews, biasing the conclusions those reviews can draw.
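      To make the publication-bias problem concrete, here is a toy simulation (my own illustrative sketch, with made-up numbers, not from any real dataset): it generates thousands of small two-group studies of the same modest true effect, "publishes" only the statistically significant ones, and compares the average published effect to the truth.

```python
import math
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2   # true standardized mean difference (an assumed, made-up value)
N_PER_GROUP = 30    # subjects per group in each hypothetical study
N_STUDIES = 10_000  # number of simulated studies

# Approximate standard error of the effect estimate for a two-group
# study of this size (standard approximation for a standardized
# mean difference with equal group sizes).
se = math.sqrt(2 / N_PER_GROUP)

# Each study's estimated effect is the true effect plus sampling noise.
estimates = [random.gauss(TRUE_EFFECT, se) for _ in range(N_STUDIES)]

# Publication bias: only studies that reject the null at p < .05
# (|z| > 1.96, two-tailed) make it into the literature.
published = [d for d in estimates if abs(d / se) > 1.96]

mean_all = statistics.mean(estimates)        # close to the true effect
mean_published = statistics.mean(published)  # substantially inflated

print(f"true effect:            {TRUE_EFFECT}")
print(f"mean of all studies:    {mean_all:.2f}")
print(f"mean of published only: {mean_published:.2f}")
```

      A naive meta-analysis restricted to the published studies would substantially overestimate the effect, which is exactly why the missing null results matter.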

  5. As a PhD/scientist who’s often been a reviewer and (often brutally) reviewed, I want to congratulate you on a very good summary of a very crufty, complex, and often cryptic system – well done! Peer review is old, political, fraught with resistance to the novel and greased skids for the established. However, I never observed anything superior that wouldn’t require lots of money that simply isn’t there, and would rapidly become even more corrupt. For instance, removing anonymity would only heighten the politics (were that possible). The process of web-ifying it is under way, albeit slowly. (As a humble aside, a grievous problem there is the prevailing stricture to use MSWord™.)

    As to another commenter that laments upon the “old guys” (aka “seniority”) in the system, I think they’ll find that that’s an unfortunate outcome in every social division that depends upon training. I wish it were otherwise but it’s a slow process and we have limited lifetimes and tend to age. I await their detailed proposal towards a Logan’s Run concept of scientific publishing.

  6. Thanks for the great article and explaining this so clearly. I taught a college class last autumn in psychology. Despite telling students that they had to have peer reviewed journal articles as sources, having the library do a demo on how to search and download them, at least 50% of my students didn’t use them. I got references from the Bible, Brighthub (and the like), and blogs.

  7. Hey, one thing Maggie: every time you define peer review you make it about science: “Peer review really just means that other scientists have been involved in helping the editors of these journals”, etc.

    Well, I know you know this, but just so someone has made the point here explicitly: the humanities have peer review too. As one who has done my share of keeping BS out of (humanities) journals, it would be great to reinforce for all BoingBoing readers that the sciences don’t have a monopoly on rigour. (In fact, many times I’ve been on review committees with humanities and science professors, it’s the historians and philosophers who have insisted the most on a certain kind of rigour.)

  8. As an up-and-coming scientist, I would be very uncomfortable having my name attached to a peer review. In fact, if I was asked to review a paper with that condition, I would likely respectfully decline.

    First of all, I’m relatively inexperienced at reviewing. My future self would likely cringe at reviews done at the beginning of my career (with my name attached for all to see forever). And as someone else mentioned above, I would be very hesitant to be too critical of the work of a senior scientist in my field if they would know it was me being critical. Do that to the wrong person and you’ve just committed career suicide!

    That said, the peer review process can be very frustrating. When I spend 4-6 months on a research project and the reviewer obviously just skimmed the abstract and says “not recommended for publishing” and nothing else …. I LOSE MY MIND!!

  9. It’s also worth remembering that many journals are peer-edited as well — while they may have a paid editorial staff to do things like handling actual editing per se, most of the decisions about whom to send a paper out to for peer review, whether it should be sent out for peer review at all (sometimes manuscripts are really that bad), and whether to accept or reject in cases of conflicting peer review are up to an academic editor who is a (generally unpaid) volunteer in the relevant field.

  10. Great post. I second the comment that it is not just hard science that uses peer review. Also, I feel it is important to say that if you read something in a newspaper, magazine, blog, or other mass media source, you can’t call it “research” unless it has been published in a peer-reviewed journal. It’s not just about checking for fraud, it’s about methods. There are lots of people who weren’t trained well and don’t follow the rules of what is considered quality research methodology, and those people don’t get their stuff published because peer review stops them. People need to learn about the differences between “journalism” (which should include fact-checking and multiple corroborating sources) and research, with all of its specific rules about methodology. I do think it was important for you to point out that only researchers read the journals; it might help if lots of people read the research, instead of it staying in a closed system. Can we also discuss how long it takes from the moment you finish the research study to when it actually gets published? Many times it has been a year or more and the results are already out of date. We need a faster system, perhaps by having dedicated, full-time employee reviewers and editors instead of people procrastinating to do it for free.

  11. Albert Einstein was a vocal critic of peer review and preferred “public review”; mainly seeing how peer review is such a fantastic way for science to collude with big business to build weapons of mass distraction. Science ought to serve the public, not the other way around.

  12. Excellent point about remaining skeptical. I look at peer-review the way I look at double-blind studies – it’s not a guarantee of accuracy or truth, but if someone is not willing to submit their work for peer review or for controlled studies, that raises a big red flag.

    I, too, would read the words “peer-reviewed” and accept them without too much question. Very interesting look at the process. What a wonderful thing that science has people who are able to do this work in their spare time to advance the cause of knowledge. While obviously not perfect, it beats a lot of other things.

  13. @jbldb,
    It depends. Very few experiments are started with the sole aim of replicating a previous result. This does happen occasionally, particularly if the result is very controversial or strikes someone as “obviously wrong.” However, it’d be hard to run or fund a lab that only replicated others’ work.

    Instead, replication generally happens while trying to extend a result. For example, suppose a paper argues that mutations in gene X confer susceptibility to some disease in adulthood. Subsequent papers (often by different researchers) might ask which, if any, of the proteins that gene X codes for actually cause the pathology. Others might ask whether there’s a time course: is gene X always on, or is it only expressed during particular stages of development? Still others might do experiments to see how different levels of X expression affect the severity of the disease, or whether other genes can ameliorate or exacerbate the effects of having a mutated copy of X.

    All these experiments, and many more, necessarily involve replicating the initial finding as part of the later experiment. Suppose you decide to look at whether levels of X expression affect the disease’s severity. Your results necessarily include the original result; you just have to collapse across expression levels. If that doesn’t match the original result, something is wrong, either with your experiment or with the original one.

    So, in many cases, replication isn’t a separate chore you complete before getting on with your own experiment. Instead, it’s a necessary part of the later experiments themselves.

    1. However, it’d be hard to run/fund a lab that only replicated others’ work.

      And almost impossible to ever get any articles accepted for publication. Most journals aren’t interested in replication work (with either positive or negative outcomes, bizarrely) – they are much more keen on breaking new ground.

  14. My limited experience with peer review has left me with two conclusions:

    1. The current system of peer review is a clunky relic of the 19th century, used mainly because the alternatives seem to be worse.

    2. The arXiv system is a big improvement, but has its own flaws. The best thing about arXiv, in fact, is that it proves that we can have high quality science without peer review.

    Given the fact that journals are run by human beings, it is not likely that we will see perfection any time soon.

  15. What jon_anon said–peer review is not limited to the sciences and medicine.

    As far as the issue of anonymous reviewers goes, and the concerns people have about that: are papers ever presented to reviewers anonymously, so as to avoid any biases the reviewers might have about the researchers?

  16. Hey Maggie, love your writing. Most of the peer reviewing I do (social sciences) is for conferences, where there are a ton of papers. It’s a similar process in terms of using peer review to select papers, but getting reviewers can be hard since, like reviewing for a journal, all you get is brownie points from the editor or division chair (and the knowledge that you are helping your field). Conference reviewing is tough as you can get a ton of papers and the reviews are all due at the same time.

    I was a little concerned and curious when you wrote, “Usually, you have to pay a journal a fee per page to be published.” In my field, this is not true for any journal that I know of, and the only ones that would ask for payment are, as far as I know, scams — “pay to play” kind of things (there are conferences like this too), I don’t know how they sucker people in in order to survive. When you’re in a department, you know the journals and conferences that are respected in your field (and in your department, in your graduate school department, and probably the ones of the grad depts of your colleagues — there should be a lot of overlap).

    Is a fee per page the norm in the natural sciences? Which fields are you talking about? If it is a norm as you say, why isn’t it one in the social sciences? (Or, since it’s the norm to not pay in the social sciences [or at least the ones I’m aware of], why is it the norm to pay in whichever natural science fields you are talking about?)

    1. I don’t know the answer to that question, anon. The sources I spoke with for this story (who are, in retrospect, centered around medical sciences) told me that was the case, and I’ve been told the same by sources in various fields of the natural sciences over the years. It’s apparently part of the journals’ business plans.

      I don’t know why that’s the case. Or why it’s not the case for social sciences. If there’s anybody else who could weigh in, I’d be curious to find out.

      1. The NIH and many other funding bodies require that any information obtained from their grants be made available on PubMed after one year. Since journals are losing some control over charging for access, they are beginning to push the cost of publication onto the researcher.

        Journals like PLoS have a fee associated with publishing, which I think is around $3,000 per article (at least the last time I checked). They are predominantly online journals without a major print distribution. I think their model is to push the cost of publishing onto the researcher so that the information can be distributed much more freely to the general public. Otherwise, the cost of publication is pushed onto the institutions that subscribe to the journal. For instance, you can publish in some journals for no cost or a small one, but the distribution of those journals is much lower. These journals also restrict distribution of PDFs to subscribers, or charge to download the articles. I think as time goes on, print journals are going to become less and less common.

      2. In my experience, sociology journals, whether published by the ASA, regional professional organisations, university presses, or private academic presses, do not charge per page except for color or photographic figures. Most do, however, charge a submission fee of $25–$50 (whether or not the paper is eventually accepted). Depending on the journal, this might end up being ~$1.00 a page if the article is accepted.

      3. There are two reasons for page charges:

        1) In traditional closed-access print journals, there is generally a page charge. For example, in the Proceedings of the National Academy of Sciences (hardly a scam, but probably the most influential journal after Nature and Science), this is $70 per page, and $250 per page if you want color. Printing, especially in color, is a non-trivial cost, and the journal subscription costs would have to be higher if these charges weren’t there.

        2) In open access journals, which may or may not have a print version, there is a charge (generally $2K-$3K) to cover processing and typesetting of the manuscript, which in traditional journals is covered by the subscription costs — in order to make an open access journal free for all to read, this cost needs to come from somewhere.

        In both cases, these charges are only charged if the paper is accepted — there is no “pay to play” going on.

        1. You say:

          “In open access journals, which may or may not have a print version, there is a charge (generally $2K-$3K) to cover processing and typesetting of the manuscript, which in traditional journals is covered by the subscription costs — in order to make an open access journal free for all to read, this cost needs to come from somewhere.”

          Well, your last statement is true, but as someone who runs an open access journal in the social sciences, and who knows many other people who do too, this is far from a universal truth; in fact, none of the open access journals of my acquaintance do ‘pay to publish’.

          So where does the money come from?

          Well, first of all, the free labour of volunteers like me and all the other editors and reviewers, who are basically doing it because they think it’s worthwhile. It’s not about ‘career’ per se; I am pretty sure I could advance my career better by carefully targeting a couple of articles a year at the ‘top’ conventional journals. It is about the love.

          Second, we set up a charitable trust to run our journal and an association which you can join if you want to support the journal – and people do. You can also donate. Basically, this means we rely more on people’s good will and their interest in the subject – their ‘love’ if you like.

          Third, we run conferences which make a modest profit which also gets ploughed back into the journal.

          And finally, we bid for whatever grants we can. I’m working on one such bid right now.

          Sure, we could charge in all kinds of different ways, but we think that if you really believe in open access then it has to be truly open. That means it is the quality of your work that should determine whether you get published, not whether you can afford it (and this really matters to PhD students, post-docs, and academics from poorer countries), and that the journal should be open to anyone to read. This philosophy means we reach far more people than most social science journals. We may not make any financial gain ourselves, and we may take a lot of extra time to produce the journal, but it’s worth it.

          1. I’m not sure I understand your comment — are you saying the open access journal you are referring to has no author charges? Which one? Relying on the volunteer effort of reviewers and editors is all well and good, but most journals do that — even, bizarrely, the commercial ones published by Springer and Elsevier, which you’d think wouldn’t attract any volunteer effort.

          2. Yes, that’s exactly what I am saying.

            Like over 9,000 other journals, we run on Open Journal Systems (OJS), which makes a lot of the process easier. We’re doing some work on the site right now, so the archiving is all messed up (we have been running since 2002, and the old site was not an OJS-powered one), but my journal is:

            Having sat on the committee that awards funds to journals in Canada, I can tell you that none of the open access journals we considered had author charges. I also know that many other open access journals in the social sciences have none, particularly those that feel they have a more radical purpose beyond academia, like ACME in geography or Ephemera in organisation studies. In fact, I regard author charges as a recent and rather unwelcome development, one that is against the spirit of open access.

            And yes, this is a challenge. But we get a large number of submissions and are able to maintain a very high standard; our readership is much larger than if we were not open access; and we have an impact factor that is comparable to many journals that are supposedly ‘more prestigious’ – which of course we have to calculate ourselves, because Thomson Reuters won’t give the time of day to non-publisher-backed enterprises that do not fit their ‘current priorities’. That’s what makes running an open access journal worthwhile.

          3. Open Journal Systems sounds like an interesting idea, but I still don’t see how it will work in practice without government subsidies (which might be more feasible in Canada than in the US). In biology (where the whole idea of open access started, with the PLoS project), author charges are universal, although they can be waived when the authors can’t pay. There are real costs to producing a journal, especially if you want it to be professional enough to be indexed by Thomson and other services. (The PLoS journals are, and at least in biology that’s pretty much a requirement for a publication to be counted as real, unfortunately.)

          4. We are indexed by several leading indexers, including EBSCO and ProQuest. But for some reason, Thomson Reuters have managed to convince a lot of people that their index means something more than most others, even though their criteria for selecting journals are opaque, to say the least. The one thing I do know, having tried to get a sensible explanation of those criteria out of them, is that their ‘commercial priorities’ appear to be the biggest consideration – i.e., they aren’t much interested in academic considerations. I guess biology is more within the range of their ‘commercial priorities’ than interdisciplinary social sciences.

  17. In my experience papers are always presented anonymously, but it’s usually very easy to find out who wrote them (in the humanities, papers usually get submitted long after the research is first presented at a conference; in the sciences the arXiv makes it easy to know who wrote what).

    Also, some journals have the policy of not sending an article to any reviewers who are mentioned in the footnotes. So I have heard of authors deliberately citing a comment by so-and-so who they think would not review the article kindly!

  18. One of the problems with peer review, and this stems from Kuhn’s scientific paradigms, in which new paradigms are decided upon by group consensus, is that it makes truth subjective and subject to a vote (and if truth is Truth, it should not be).

  19. A couple of questions:

    “Peer reviews are normally done anonymously. The editors of the journal will often give the paper’s author an opportunity to recommend, or caution against, a specific reviewer. But, otherwise, they pick who does the reviewing.”

    –In a small field, however, doesn’t the limited pool of practitioners mean that the anonymity is a fiction? If I’m the scientist who recommends reviewer A, B, and C, and there are only A-F practitioners, aren’t I in practice able to pick the reviewers?

    “You have to pay a journal a fee per page to be published.”

    –This seems like it deserves more reporting since it appears like an obvious financial conflict of interest. The more articles rejected, the more money the journal makes. And why wouldn’t some researchers or institutions seek to influence journal editors by “buying” access?

    1. “You have to pay a journal a fee per page to be published.”

      –This seems like it deserves more reporting since it appears like an obvious financial conflict of interest. The more articles rejected, the more money the journal makes. And why wouldn’t some researchers or institutions seek to influence journal editors by “buying” access?

      I don’t think I explained the page fee system very well. You don’t pay to submit your article to peer review. You only pay IF your article is accepted for publication. So the peer-review process is somewhat separate from the page fee.

      Journals don’t make money by rejecting articles, because scientists don’t pay the page fee unless the article is accepted.

      And I don’t see a logical way that page fee could be used to influence editorial decisions, either, as it’s the same fee no matter who gets published.

      It does put a burden on scientists, especially if they are associated with institutions that have limited funding. But that’s a different sort of problem.

      1. Others seem to have given pretty accurate info about page charges. I think the short answer to the “why” question is that the scientific publishing model is not self-sustaining. There have never been enough subscribers to fully support publication. Authors are asked to subsidize their own publications, and in turn, publication fees are built into their lab budgets.

        It was easier to justify these charges when the publishers also had to ship print copies. Now that most journals are going online, there is pressure to reduce the fees. Publishers counter by arguing that they have to pay editorial overhead in order to provide good peer review…the debate continues.

        Some journals charge a submission fee, but most charge a fee only at acceptance. There could be an argument made for financial conflict of interest—the more papers submitted/accepted, the more fees—however, almost every journal decision is made by academic editors (often volunteers) who have little if any connection to the business side of their own journal.

  20. Publication page charges for journal submissions are a common expense request in grant budgets.

  21. There’s a wide range of possibilities and combinations when it comes to publishing and paying. Off the top of my head:

    ° Journals that demand payment for every page of an article
    ° Journals that demand payment only for color figures that will be printed (but color figures are free for electronic versions)
    ° Journals that publish on behalf of scientific/academic societies and only publish works by members of these societies. In this case the authors don’t pay for publication, but they’ve already paid their membership fees which may be quite high.
    ° Those that ask the authors if they are willing to pay for their paper; one can choose how much to pay (any amount up to the full cost of each paper as given by the journal), or not to pay at all. This happens after acceptance.
    ° Journals that will publish it for free and offer the author a choice between, say, 25 offprints or one pdf version (we always choose the pdf version and make our own “home made” offprints).
    ° Journals that will publish it for free but won’t give the author even the pdf version without payment
    ° At least one journal whose submission form includes a radio button that allows the author to apply for an automatic grant to cover the cost of publication, if they’re not willing or able to pay for it.
    ° Entirely payment-free publication

    In any case, as Maggie explained above, the financial intricacies are separate, and hopefully independent, from the peer review process.

  22. I think as you progress through your career you will be exposed to many more reasons to do reviews.

    Think of it as a social responsibility: for every paper you submit, three people have to give of their time to read it (more likely 6, because it will be rejected the first time, and three more will have to be found to review your revision if the first three are not available). That means for every manuscript you submit, you should personally do 3 reviews to “pay” for the time of the reviewers who read your own paper. Reviewing makes the world go round.

    Also, if you are a good reviewer (diligent, constructive, and on time) it is likely the editors of the journals will get to know your name. You might be invited to give talks, submit papers, or even serve on the editorial board of the journal. These are all good things for your career.

  23. Great article!
    I’d like to draw your attention to an explicitly non-peer-reviewed journal we have recently launched at

    All articles we receive will be published, and we have editorial board members who then comment on them, so that something traditional academia really lacks can happen: constructive discussion.

    We have just launched our second CFP on Wikileaks btw :)

  24. Below, I quote in full a recent “peer review” we received for a paper on pattern recognition.
    Reviewer #2: This paper examines how much various ‘domain extension’ methods for image representation (e.g. pyramids, spectral representations) enhance feature-based approaches for a variety of texture classification type tasks.

    They show that either good textural features should be used or a spectral representation, and there is a very small added value for using both.

    This took ~6 months to produce – the review, not the manuscript.
    Besides being a blow-off, they completely missed the boat.
    Doesn’t this look like fun?

    1. As an academic editor of a couple of journals, let me tell you how this sort of useless review happens. Basically, the first month or so is spent just trying to *find* a peer reviewer. You’d think that a paper about, say, an extension of Smith and Jones’ algorithm would be properly reviewed by Smith or Jones. So I ask them. They take a week (or more) to get back to me, and then claim they are too busy to review it. If they are being generous, they will suggest another possible reviewer. If not, I have to find other reviewers myself. So I contact authors of similar papers, who often either say they are too busy as well, or decline on the grounds that they aren’t experts in the subject because their work differs in some trivial way from that of the manuscript. The idea that nobody is doing exactly the same work as they are never seems to occur to them. Eventually, though, peer reviewers are found. And then the review deadline comes and no review is forthcoming. So I remind the reviewer. Multiple times. They often complain that they have a real job to do (as if I didn’t have one besides being an academic editor). Finally they spend 15 minutes writing up a pointless review like the one above to get me off their back. Of course, most reviewers take the process more seriously, but a non-trivial percentage don’t.

  25. Thank you Maggie! I was hoping you would speak to the ‘dark side’.

    IMHO, the “quality” of peer-reviewed papers is inversely proportional not only to the reviewers’ age, but also to the prestige of the university from whence it came.

    In my yout’ I had the opportunity to participate in bench research at UCSF and Rockefeller, and clinical research at Cornell. I was present at the latter two during this scandal:

    There were even more hard-to-believe salacious details than are stated in the wiki. I never knew what happened to him after he left Rockefeller. I was sadly surprised to see retractions happen again while I was at Caltech…

    The weakness in the validity of the research is not malicious, but neither is it harmless.

  26. OK, reread. Sorry, people. Re: university prestige, I meant the extent to which questioning the research could negatively impact the academic institution it represents.

  27. Peer review exists not just for publishing of journal articles, but also for judging the merit of research grant applications submitted to agencies like the NIH.

  28. Jonathan Badger: you wrote “even bizarrely the commercial ones published by Springer and Elsevier, which you’d think wouldn’t attract any volunteer effort”.

    In fact, although the journals don’t pay the reviewers, it’s not volunteer effort. In my job as a professor, part of my job description apart from teaching and publishing my research is “service to the university and the discipline”, which means among other things serving on editorial boards and reviewing journal and conference submissions. And I get to say, in my annual report, that I reviewed articles for publications x, y and z, and my chair takes that into account for my annual performance-based raise.

    1. I hear you; I hear you. Yes, academic volunteerism is part of service and not entirely done out of good will — but people do have a choice on where to spend the service. I’m not entirely an open-access zealot in that I do review papers for closed-access journals as well as serve as an AE on a couple of open access ones, although I know people who have actually made the stance not to deal with closed access journals at all.

  29. Since no one has mentioned this already: journals also vary tremendously in quality. There are dozens and dozens of journals out there. Some are top-notch, world-class publications. Others are obscure, local outfits with very limited distribution beyond the immediate circle of their founders.
    Researchers compete to get published in journals of the highest rank. If they keep getting rejected, they aim lower until they find a journal that accepts their work.
    [Of course, due to the vagaries of the peer-review process, it is entirely possible that a higher-ranked journal would accept a manuscript that a lower-ranked journal would have rejected (due to the particular mix of referees each sends the manuscript to).]
    If you really want to, you can get anything published in a peer-reviewed journal, provided you are willing to aim low enough. But the repeated submit-review-resubmit cycle inevitably improves the quality of the manuscript.

  30. Rats, I think I missed most of the conversation. To add my point of view: I am an assistant prof (five years in the profession) and have both published in peer-reviewed journals and been a peer reviewer. My take home experience is profoundly positive. I love the process and my work was immensely improved by the high quality comments I got from two of my three anonymous reviewers. Those comments also enabled the journal editor to hone her comments and make my work better. Full disclosure: I am in History, and have not published in a science journal. But I have reviewed scientific papers for science journals (I have a science background) and the process was similarly rewarding.

  31. I have been transitively threatened over scooping another group’s work (I didn’t know about their work, and they hadn’t published it).

    Given that my name was known, the threat was levied against me. This individual was barred from his campus for making threats. And that was with only indirect interaction; imagine the amplification of this effect in review, where I have to be able to say negative things about your work and still shake your hand at a conference.

    Also, graduate students, PhDs, and profs are not all on a steady keel. I consider graduate studies to be an indicator of a potential for variable behaviour (read that in a mental health sense). Thus I would prefer that peer review stay anonymous; otherwise I wouldn’t feel safe as a reviewer in many cases.

  32. Adding another ingredient to this mix, Hidde Ploegh just published a column in Nature critiquing the peer review system.

    Specifically, he criticizes the tendency of reviewers to suggest additional experiments that either do not add anything or are outside the scope of the paper, thus slowing down the progress of science. This really speaks to this article’s point that there are varying guidelines and no formal training for scientists doing peer review.

  33. At the risk of opening another can of worms (one that has already been tangentially discussed here), there is also the matter of impact factor (IF) to consider. I know this is controversial, and different methodologies exist which give different scores (and thus different journals use the impact factor that portrays their organs most favourably), but there are a number of journals which dominate under most any reasonable IF calculation (Nature, Science, Cell, NEJM, PNAS, The Lancet) and which can *usually* be relied upon to publish only top-flight stuff. Of course, occasionally a stinker sneaks in (*cough cough* Hendrik Schön *cough cough*), but in general, their papers are world-class.

    Secondly, Nature now insists that authors disclose any financial conflicts (funding from commercial concerns, fees paid by special interest groups, etc.). I’m not sure what the policies of Science et al. are, but I think there is a move towards encouraging full financial disclosure by authors, which, while it may not prevent bias, at least brings it into the open.

Comments are closed.