Peer review: not as dark as some might propose

In his provocatively titled article, “Publish-or-perish: Peer review and the corruption of science”, David Colquhoun lays out what he believes to be some problems with the current state of peer review, highlighting a specific example involving a manuscript about acupuncture.

It seems to me that Colquhoun laments the emphasis on the quantity of publications, with respect to career promotions and grant applications, which drives two undesirable practices:
1. The splitting of papers into the LPU (least publishable unit), resulting in more publications, but without a comprehensive story, or with overstated results. Or, in extreme cases, outright fraud; and,
2. An over-burdening of the current review system, in which there are not enough qualified reviewers to review each manuscript systematically and carefully.

This may lead to the launch of more and more journals that accommodate the publication of LPU papers and often suffer from a lack of qualified reviewers.

He pulls out a study from a peer-reviewed journal on acupuncture, which, in hindsight, is not the best example. The paper doesn’t appear to contain any glaring examples of fraud, it was not retracted, and responses from one of the paper’s authors and one of the journal’s editors seem to contradict the claims made in his summary. Alternatively, Colquhoun could have cited one of the more highly publicized examples of the scientific community questioning the peer-review process. The first that comes to mind is the study of “arsenic-based life”, and the many researchers who voiced their concerns about it. Or, a recent paper on the genetics of longevity that was retracted due to technical differences in how the control and sample datasets were analyzed.

I do agree with Colquhoun that one beneficial alternative would be something similar to publishing your paper and leaving it open to comments from the community. I don’t think that reviews need to be anonymous, as he supposes after citing a failure of such an endeavor at Nature. There is an example of a successful journal, Biology Direct, in which the peer review process is open and the reviews (as well as author responses) are published alongside the manuscript. After a quick search, I found one really wonderful example, where I learned as much from the back-and-forth between the reviewers and the authors as I did from the article. The conversation between the authors and the reviewers, people who have been studying the evolution of life on Earth for decades, was respectful, but with disagreements.

Maybe I’m optimistic, but I think there is a lot of benefit in public reviews. It is especially useful to read questions raised by experts that I, as someone unfamiliar with the field, would not know to ask.

On the other hand, I am not sure how to address the problem of quantity-over-quality in academic promotions and funding success. Colquhoun says it very well, I think: “It arises from official pressure to publish when you have nothing to say”. He also does not have many suggestions, and seems pessimistic about alternative measurements of scientific success.

Perhaps, rather than simply listing publications, scientists could write a summary of the findings from their publications over the review period. This might level the playing field between a single manuscript that presents a comprehensive analysis and a list of several manuscripts, each of which tells only a short story. In addition, it would encourage the dissemination of such knowledge to the public (and encourage public support of science), because institutions could share these short summaries online every time a new review is completed. A double win!

But, it seems like the current system is in place for the foreseeable future.

I wonder if blog posts can contribute to my publication list?


  1. In my opinion, anonymous peer review is absolutely crucial. Otherwise, it will be a jungle out there, where it will be impossible for everybody except a few experts to navigate through what might be valid and what might not be. It is a very odd logic to believe that eliminating the journals will reduce the number of wacky studies published. In fact, the proliferation of journals over the past few years has arguably led to lowered standards and an increase in the publication rate of poor studies with faulty statistics.

    This of course does not mean that the peer review process could not be improved. But if there were not so many weird journals, researchers might be able to dedicate more of their time to the peer review process.

    The pressure to publish a lot is one of the culprits. I agree with that. But the proliferation of inferior journals is another.

    1. Maybe an alternative would be to publish papers with the reviewers’ comments, while keeping the reviewers anonymous. What I appreciate about the non-anonymous reviews at Biology Direct, as I stated, is the transparent flow of questions/critiques and responses between the authors and reviewers. Publishing the comments, but not the identities, would still give readers new to the field some critical framework that authors don’t always include, but would also maintain the benefits of anonymous review.

      With respect to the proliferation of inferior journals… I don’t know how to fix this. Perhaps something as simple as reduced emphasis on the quantity of publications would eliminate the demand for so many obscure journals.

      One suggestion to improve the peer review process is to encourage journals to reach out to postdoctoral researchers and graduate students to participate in the peer review process (maybe two graduate student reviews could count for one more established researcher). Doing so would not only aid in the training process, by promoting critical assessment, but would also ease the burden on the relatively small set of scientists who seem to be overwhelmed with requests to serve as a reviewer (who – sometimes without disclosing it – request such trainees to do the assessments anyway).

  2. I believe that a double-blind review process is a necessary step, so that studies will be fairly evaluated and not biased by the authors’ prestige or affiliation (there are too many papers around that have been published in top journals only because of a “famous last name”).

    1. While it may alleviate some biases, double-blind review doesn’t get around the problem of over-burdened reviewers who don’t dedicate the time for proper reviews. And, because editors are not blind, it still may not prevent publications from “famous last names” from being preferred, even after the review process. The editors’ decisions outweigh those of the reviewers, even when considerable errors have been pointed out during the review process.

  3. I agree with most of the things said:
    – The large number of publications with no comprehensive story, or with overstated results.
    – The over-burdening of the current review system.
    – The chosen paper might not be the best example, but the main issue is there.
    – Reviews don’t need to be anonymous (although I don’t see a problem with it).

    However, there are some points that, I think, were left out. Currently, the mechanics of the scientific system rest on two main pillars: the dissemination of results/data (publication) and the assessment of their quality (e.g., ultimately for grant attribution). Both these phases have problems, and neither has an easy solution.

    – The current publication system is, to say the least, awkward! Academics provide its content (sometimes paying to publish it), do the quality control for free (as reviewers), and in the end have to pay for journal access. As someone commented on the original article: “The NHS pays for research to be done, then has to pay again a fortune to journals to access the results of the research that it has paid for: something is wrong.”
    – I agree that the reviewers don’t need to be anonymous, but more important (I think) is that their comments and the author replies be made available (as in Biology Direct). As Melissa pointed out, sometimes there is a lot more to learn from “the back-and-forth between the reviewers and the authors”. It would also be helpful in those cases where the paper gets published with “opposition” from one of the reviewers (in the end, the editor has the final say). Nevertheless, I think that journals should be more coherent: if reviewers are anonymous, then so should be the authors.
    As for self-publishing and leaving it open for comments, I’m not quite sure, since we would need a centralized resource (a bit like PubMed) but, more importantly, another peer-review system. Despite all its flaws, peer review is still very important in filtering and scrutinizing what gets published.

    One way towards fixing some of these flaws is for the funding agencies to forbid work paid for by them from being published under this “double-payment” system. Either the journal charges to publish, but access is free, or, if it wants to charge for access, it cannot charge for publication.

    Quality assessment
    – Nowadays, an author’s “quality” is measured by the number of papers he or she has published and by their quality. A paper’s quality, in turn, is measured by where it was published. If we take a deeper look, this doesn’t make much sense, but it’s what evaluators look at when they have to make a decision. This inevitably leads to the so-called “publish-or-perish” culture and to the large quantity of papers published every year (“an estimated 1.3 million papers in 23,750 journals in 2006”). With these numbers, and the current publication system (see above), the journals multiply and we reach a situation where any paper, however bad, can get published in a journal that claims to be peer-reviewed. But who benefits from this model? We don’t, since we end up with lots of flawed, non-scientific, skewed analyses published and, as reviewers, have a lot more work to do (and ultimately less time to do science). As the author puts it, “the only people who benefit from this intense pressure to publish are those in the publishing industry”.

    From these last lines, I think it is clear that the driving force is the way authors and papers are evaluated. Although there isn’t a magic solution, a possible way to minimize many of these problems would be to evaluate papers (and authors) by their REAL contribution to science, that is, by the number of citations. Of course, this can’t be used directly (old papers have more citations than new ones), but one can think of approximations and adjustments, like the average number of citations per paper per year, filtering out self-citations, etc.
    Another approximation that is already around (although not widely used) is the H-index. This could also work, but it would need some changes (in my opinion), like filtering out self-citations and probably limiting it to the last ‘n’ years. This way, authors would be under pressure not to publish MORE but to publish BETTER, reducing the number of papers published per year and putting less pressure on the reviewers, leading, hopefully, to better reviews.
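    As a concrete illustration of the metric under discussion, here is a minimal sketch of how the H-index is computed. The self-citation filtering and the ‘last n years’ restriction suggested above would happen upstream, when assembling the per-paper citation counts; the numbers below are made up for illustration:

```python
def h_index(citations):
    """Return the largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Hypothetical citation counts, one entry per paper:
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # prints 3
```

    Applying the windowing idea would simply mean passing in only the papers (or citations) from the last ‘n’ years before computing the index.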

    As final words, I’d like to bring into the discussion a recently published opinion paper by Laurent Ségalat (Ségalat, 2010), in which he draws a parallel between the current world financial crisis and the state of science.
    As the author puts it, the current financial crisis originated from “A dangerous cocktail of short-term gains prevailing over long-term interests, herding, increasing pressure to deliver results, the absence of effective oversight, and blind trust that the system would regulate itself…”, and these factors are very similar to some of those affecting science.

    Some quotes:

    “Any one of the millions of scientists in the world can (…) propose a manuscript (…) for peer review. Here comes the first problem: the editors of the most popular and influential journals do not only work in the interest of science, they also work in the interest of the shareholders and owners of these journals. (…) The higher the journal’s impact factor (IF) —a value that is calculated by ISI Thompson (Philadelphia, PA, USA), which is another commercial enterprise with its own interests— the more the journal appeals to authors and readers, as it suggests the science published therein is of a high quality. This is a crucial flaw in the publication system: the scientific community has relinquished immense power to a few publishers whose agenda and interests differ from those of most scientists. The analogy to the global financial crisis is obvious (…)”

    “(…) the second problem: it is a widespread illusion that merit has anything to do with getting published in Nature, Cell or Science. Merit (…) is, of course, a prerequisite, but it does not make the difference. Let alone the quirks and unpredictable effects of the peer review process, the papers that are eventually accepted are usually a combination of good research and spectacular and unexpected results in a trendy field. Again, the analogy to the financial world is more than obvious: risky speculations to achieve short-term yields gained prominence over solid, long-term investments.”

    From all this, “(…) The real loser, however, is the scientific community; the literature is becoming swamped with useless papers in which the data is flawed and the conclusions are wrong. (…) Returning to the financial analogy, these useless papers are the toxic assets of the scientific system. Not only do they represent a huge waste of money in terms of the experiments that are needed to re-examine and correct the findings and conclusions, they also devalue truly good papers that do not contain exaggerated claims and conclusions.”

    “Chief among the IF addicts, funding agencies are to blame for putting too much pressure on scientists to publish in high-IF journals.”

    “Another apt comparison between science and global finance is the lack of effective oversight. In the financial sector, the dominant ideology during the past decade was that markets are better left alone to regulate themselves (…). The idea that ‘natural selection’ will increase the fitness of the system as a whole and the competing elements within it prevailed. The sight of bankers asking for government support and lining up for bail-outs from taxpayers in the autumn of 2008 demonstrated the failure of this school of thought.”

    “In the aftermath of the global financial crisis, journalists and politicians are discovering that many experts issued warnings about the system’s shortcomings long before the situation deteriorated. Why were their Cassandrian warnings of inevitable collapse not heard? The answer is that the stars of the financial world, who had grown enormously rich within the system, were the ones who called the tune and exerted an enormous influence on governments. Similarly, researchers who regularly publish in top-ranking journals and are thus rewarded by honours, promotions and grants, are the stars of science. They, too, are unlikely to criticize a system that largely benefits them. (…) In a system that confuses success and merit, wisdom rarely prevails.”

    Ségalat, L. (2010). “System crash. Science and finance: same symptoms, same dangers?” EMBO Reports 11: 86–89.
