Show me the data!

Why do we rejoice in rejecting perfectly valid research?

January 15th, 2016 | Posted by rmounce in Open Science
This has done the rounds on Twitter a lot recently, and justifiably so, but just in case you haven’t seen it yet…
I thought I’d quickly blog about this excellent graph, published on a FrontiersIn blog late last year (source/credit: http://blog.frontiersin.org/2015/12/21/4782/ ).
Source, Credit, Kudos, and Copyright: Pascal Rocha da Silva, originally posted here.


With data from 570 different journals, it appears to demonstrate that rejection rate (the percentage of submitted papers NOT accepted for publication at a journal) has no apparent correlation with journal impact factor.
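For readers who want to poke at figures like these themselves, here is a minimal sketch of the kind of check the graph implies. The numbers below are invented for illustration only, not taken from the actual 570-journal dataset:

```python
# Sketch only: hypothetical (impact factor, rejection rate) pairs,
# NOT the real data behind the Frontiers graph.
import numpy as np

impact_factor = np.array([1.2, 2.5, 3.8, 5.1, 9.7, 15.3, 30.0])
rejection_rate = np.array([55.0, 70.0, 48.0, 62.0, 51.0, 68.0, 58.0])  # percent

# Pearson correlation coefficient: a value near 0 indicates
# no linear relationship between the two variables.
r = np.corrcoef(impact_factor, rejection_rate)[0, 1]
print(f"Pearson r = {r:.2f}")
```

With the real dataset in hand, the same two-line check (plus a scatter plot) would let anyone verify the "no correlation" claim directly.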


Why is this significant?


Well, a lot of people seem to think that ‘selectivity’ is good for research: that by rejecting lots of perfectly valid papers submitted to a journal, you somehow ensure increased ‘quality’ (citations?) of the papers that are eventually accepted for publication. In practice, high rejection rates indicate that a lot of good research papers are being rejected just to satisfy an unjustified fetish for arbitrary and crude pre-publication filtering. This is important evidence for advocates of the ‘publish first, filter post-publication’ philosophy, as put into practice by journals such as F1000Research and Research Ideas and Outcomes.


Release early, release often?


Rejecting perfectly good, sound research causes delays in the dissemination of knowledge: rejected manuscripts have to be reformatted, resubmitted and re-reviewed elsewhere, at great cost. The overwhelming majority of initially rejected manuscripts eventually get published somewhere else. So why bother rejecting them in the first place, if all it does is waste time and effort?

Please show your friends the graph if they haven’t already seen it. I think data like this could change a lot of people’s minds…

Further Reading:

Similar findings have been reported before with smaller samples:
Schultz, D. M. 2010. Rejection rates for journals publishing in the atmospheric sciences. Bulletin of the American Meteorological Society 91:231–243. DOI: 10.1175/2009bams2908.1

5 Responses

  • To maintain a certain level of quality in scholarly research and its communication, it makes sense to apply methods of quality assurance. This is required to prevent peers from wasting their time reading badly written papers and assessing poor research. But what is poor research? And don’t we waste the time of peers by letting them review almost 2 million manuscripts every year (in STM alone) which won’t be published? Have these papers all been junk? I cannot believe that. Assuming that a reviewer spends 2 to 3 hours (at minimum) reading a paper and drafting a (short) report, 4–6 million hours of our best researchers’ spare time are used worldwide for rejecting manuscripts annually. And they have to do it for free. Isn’t this totally amazing? And is this principle of quality assurance still justified in the digital age and the world of Web 2.0?
    More amazingly, the principle of rejecting manuscripts has for decades been misused by some publishers and (internal) journal editors as a way to filter scholarly research and to boost the impact factor of journals. The more ‘fancy’ the journal, the more unsolicited submissions arrive at the publisher’s desk. A percentage of those manuscripts are rejected immediately by ‘editorial decisions’, some of them not as a result of poor research but because of missing buzzwords or the reputed affiliations of the authors. No external peer will ever see such a paper (except perhaps when it is re-submitted elsewhere), and one may ask whether this procedure is ethically sound, anyway. However, we should ask ourselves if this procedure is still needed at all to filter scholarly communication. In a recent publication, in which I asked whether we still need academic journals in their present, print-based shape and in their role as a filter of research, I made an attempt to analyse that situation and to suggest alternatives: https://goo.gl/AlouAe

  • One could argue that the reason for rejection in higher-impact journals is different from that in lower-impact journals. For example, there’s probably a large amount of self-selection in submissions to Nature and Science, so more rejections at these journals could be due to selectivity (i.e., it’s valid research but not “exciting” enough), while rejections at lower-impact journals are largely due to shoddy research.

    Of course, I have no evidence for this, but I anticipate this response from colleagues. Do journals somehow code and release the “why” for rejections?

    • Robert Cameron says:

      I help edit a few (low-impact, but in TR) journals, and do masses of reviewing. Talking to editors-in-chief: yes, there are shoddy articles rejected at editor level, and some after review. But there is a different problem, especially for specialist journals that do well in their restricted field (TR comparisons with equivalent journals). We get lots of papers submitted which may be good, bad or ugly, but are simply not within the journal’s remit. I don’t think the editors keep a tally; perhaps they should?


