How did that make it through peer review?

How did that make it through peer review? I've heard that asked many times over the years. It has been uttered by senior colleagues, grad students, amateurs, and just about everyone else, too. The question is usually raised in response to an error of fact, an omission of citations, a misconceived analysis, or an odd conclusion in a published paper.

So, how did that make it through peer review?

The usual answer people seem to have in mind is some combination of editorial incompetence, reviewer laziness, or authorial shenanigans. Maybe the editor didn't have the expertise to evaluate the paper, or ignored the reviewers, or was talked out of listening to the reviewers by a slick author, or was a personal friend of the author. Maybe the paper was sent to unqualified reviewers, or the reviewers were just sloppy.

Having been in the researching, reviewing, and editing game for a few years now, and having listened to the professional rumor mill for just as long, I can say that sometimes those explanations hold true. For my own part, I've received reviews that were clearly written in haste, with maybe three sentences of superficial "accept with minor revisions" comments, contrasted against a lengthy "burn it with fire and bury the ashes" opinion from another reviewer. I've seen papers published with errors so egregious that it is hard to imagine how the reviewers missed them.

How did that make it through peer review?
[Image: The modern Australian lungfish, Neoceratodus; just the perfect thing to break up a long block of text. Public domain, modified from Flower 1898.]

Yet, I suspect these kinds of situations are relatively rare. Having been involved in enough papers, and, yes, being party to papers where I didn't catch something in the review or editorial process, I have the ultimate answer:

Reviewers, editors, and authors are human.

What I mean by this is that scientific papers are complex beasts. A single manuscript may weave together disparate groups of organisms, unfamiliar pieces of anatomy, far-flung reaches of the globe, and multiple statistical techniques. Yet a typical paper is seen by just a single editor and two to four reviewers, so it is extremely unlikely that every facet of the paper will be evaluated by an appropriate expert. How likely is it, then, that every error will be caught and addressed?

Let's consider a hypothetical example. Say I'm writing a paper on a new species of raptorosaur from Wisconsin, with a statistical analysis of how it expands known anatomical disparity for the group. I send it to the Journal of Dinosaur Paleontology, but they don't have any editors who specialize in raptorosaurs. So, it goes to an expert on dinodonts (we'll call her Editor A). The dinodont editor does a quick literature search and finds three raptorosaur experts. We'll call them Dr. B, Dr. C, and Dr. D. They review the manuscript, suggest some minor changes to the anatomical descriptions, and after a round of revision it is published.

How did that make it through peer review?
[Image: What better than another lungfish to exemplify the persistence of peer review? Public domain, modified from Ray 1908.]

And then, oh, the humanity! Blogger E reads the paper and is dismayed by the disparity analysis: a critical data correction wasn't done, so the supposedly significant results are meaningless, and in fact are the opposite of what the paper concluded. Dr. F thumbs through the paper and finds that one of his papers on raptorosaur morphometrics wasn't cited. Grad Student G finds that several of the characters in the phylogenetic analysis were miscoded, so the phylogeny should be a bit different than the authors presented, and argues that the overall conclusions of the paper are highly suspect.

How did all of this happen? Well, none of the reviewers had statistical expertise. They were all well qualified to assess raptorosaurs, but none had ever done a disparity analysis themselves. They gave the analysis a cursory glance, thought it looked reasonable, and moved on. When they suggested references to add, many were their own papers, and…well, the raptorosaur morphometrics paper fell through the cracks, or just wasn't as relevant as another paper. As for the phylogenetic analysis…it is very rare indeed for reviewers and editors to check all character codings. Dr. C might have caught some of the miscodings, but was in the midst of grading term papers and didn't have more than a few hours to devote to the review. A few of the codings were impossible to check in any case, because they were for incompletely described fossils held at a distant museum.

This is not to downplay the fact that editorial and reviewer incompetence does exist. There are certainly journals with less stringent editorial processes than others, as well as editors and reviewers who are not well suited for their jobs. Some authors submit manuscripts that are on the low end of the quality scale. Yet, for the bulk of journals and the bulk of manuscripts, I think most people put forth a genuine good-faith effort. I have seen errors or editorial/reviewer lapses in pretty much every journal I have read. This ranges from PLOS ONE to JVP to Nature to Cretaceous Research. Finally, I will confess that I have played the part of pretty much every character in the hypothetical above.

So, how can we productively deal with this? I don't have a single easy solution, but I do have a few thoughts.

First, I think that respectful (emphasis on respectful) discussion in informal venues is useful: social media, journal clubs, and the like. Frequently, I don't notice a problem in a published paper until a colleague raises the point. I have learned a lot via these discussions, and I think they are an important part of the field. Blogs and social media are just the kinds of venues where small quirks in papers can be noted (and if done as comments at the journal itself, as at PeerJ or PLOS ONE, the discussion can be maintained as a semi-permanent record with the paper). Where possible, a formal public comment or correction may be warranted, particularly for large-scale errors (though there are practical limitations to this, and I don't think formal corrections preclude informal discussion).

Second, I think the overall situation makes a strong case for a more open peer review process. Only a small set of relevant experts might see a paper before publication, and it is inevitable that they won't catch everything. Editors and reviewers aren't omniscient. I genuinely believe that preprints, posted prior to or alongside the formal review process, would allow more of these kinds of issues to be addressed. They wouldn't eliminate errors (and there are certainly cases where a public preprint wouldn't be beneficial), but they would help. At the very least, they would cut down on the "Well, if I had been a reviewer, that error wouldn't have been published" grouching.

Third, this is not a case for the elimination of pre-publication peer review. Pre-publication peer review, although imperfect, genuinely improves most papers, and it does far more than nothing to keep the literature on track.

This has been a rather rambling piece, so let me finish it off. Reviewers and editors are human. Peer review isn't perfect. Mistakes will make it into the permanent literature, even under the best of circumstances. A more open process is one way forward.

This story is republished courtesy of PLOS Blogs: blogs.plos.org.
