Show me the data!
This has done the rounds on Twitter a lot recently, and justifiably so, but just in case you haven’t seen it yet…
I thought I’d quickly blog about this excellent graph, published on a FrontiersIn blog late last year.
Source, Credit, Kudos, and Copyright: Pascal Rocha da Silva, originally posted here.


With data from 570 different journals, it appears to demonstrate that rejection rate (the percentage of papers submitted to a journal that are NOT accepted for publication) has no apparent correlation with journal impact factor.


Why is this significant?


Well, a lot of people seem to think that ‘selectivity’ is good for research: that by rejecting lots of perfectly valid papers submitted to a journal, it somehow ensures increased ‘quality’ (citations?) of the papers that are eventually accepted for publication. The fact is, high rejection rates in practice indicate that a lot of good research papers are being rejected just to satisfy an unjustified fetish for arbitrary and crude pre-publication filtering. This is important evidence for advocates of the ‘publish first, filter post-publication’ philosophy, as put into practice by journals such as F1000Research and Research Ideas and Outcomes.


Release early, release often?


Rejecting perfectly good/sound research causes delays in the dissemination of knowledge – rejected manuscripts have to be reformatted, resubmitted and re-reviewed elsewhere at great cost. The overwhelming majority of initially rejected manuscripts get published somewhere else, eventually. So why bother rejecting them in the first place, if all it does is waste time and effort?

Please show your friends the graph if they haven’t already seen it. I think data like this could change a lot of people’s minds…

Further Reading:

Similar findings have been reported before with smaller samples:
Schultz, D. M. 2010. Rejection rates for journals publishing in the atmospheric sciences. Bull. Amer. Meteor. Soc. 91:231-243 DOI: 10.1175/2009bams2908.1

I’ve written 29 blog posts this year! Still time for one more…

This work relates to my new postdoc at the University of Cambridge in Sam Brockington’s group.

I’ve been closely examining IUCN RedList data for plant taxa and found some rather odd things.

Out of the 100 or so plant species that the IUCN RedList asserts as ‘extinct’, at least 16 of them are growing alive and well somewhere in the world at the moment.

For some species even Wikipedia notes the conflict between reality and the ‘official’ IUCN assessment e.g. for Rauvolfia nukuhivensis.

Here are the 16 plant species that I think are incorrectly assessed as ‘extinct’ right now by the IUCN RedList:

Astragalus nitidiflorus, Cnidoscolus fragrans, Cynometra beddomei, Dipterocarpus cinereus, Dracaena umbraculifera, Madhuca insignis, Melicope cruciata, Ochrosia brownii, Ochrosia fatuhivensis, Ochrosia tahitensis, Pausinystalia brachythyrsum, Pouteria stenophylla, Rauvolfia nukuhivensis, Wendlandia angustifolia, Wikstroemia skottsbergiana, Wikstroemia villosa

In addition to the 16 above, though with less certainty, I also think the Hawaiian taxa Delissea kauaiensis and Delissea niihauensis might have some individuals still alive, according to this Department of Land and Natural Resources ‘Fact Sheet’ from 2013.


Why not harness the wisdom of the crowds and/or semi-automated text mining?


It’s remarkable that the IUCN RedList still lists some of these as ‘extinct’ when there are easily findable peer-reviewed articles reporting the rediscovery, and hence extant status, of these taxa. To their credit, many are listed as “needs updating” but still, if there are important updates to statuses, why not just go in and make the change(s) to correct the record? The IUCN RedList page listing Wendlandia angustifolia as ‘extinct’ is possibly the worst example – it was reported as rediscovered back in the year 2000. The IUCN has had 15 years to update their incorrect assertion of ‘extinct’ for this taxon!

I can’t possibly go through the literature and check all other IUCN-listed plant taxa myself, but this does seem like a great opportunity for ContentMine tools to help the IUCN RedList stay on top of the latest published updates about RedListed taxa. See ‘Daily updates on IUCN Red List species‘ for more on that idea.
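As a rough sketch of how such a semi-automated check might work (the species list, article titles, and keyword pattern below are purely illustrative, not an actual ContentMine pipeline):

```python
import re

# Species the Red List currently records as 'extinct' (illustrative subset).
extinct_listed = {"Wendlandia angustifolia", "Dipterocarpus cinereus"}

# Titles as they might come from a literature feed (one real, one irrelevant).
titles = [
    "Rediscovery of the supposedly extinct Dipterocarpus cinereus",
    "Pollination biology of Wendlandia species in Tamil Nadu",
]

# Flag any article that mentions rediscovery AND names an 'extinct'-listed taxon.
rediscovery = re.compile(r"\brediscover\w*\b", re.IGNORECASE)

flagged = [
    t for t in titles
    if rediscovery.search(t) and any(sp in t for sp in extinct_listed)
]
print(flagged)
```

A human assessor would still make the final call, of course; the point is just that flagging candidate status updates can be automated cheaply.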


Below I list sources of information relating to the 16 species that I think are definitely NOT extinct, despite being listed as such on the IUCN RedList.

Wahyu, Y., Wihermanto, N., Risna, R. A., and Ashton, P. S. 2013. Rediscovery of the supposedly extinct Dipterocarpus cinereus. Oryx 47:324.

Martínez-Sánchez, J. J., Segura, F., Aguado, M., Franco, J. A., and Vicente, M. J. 2011. Life history and demographic features of Astragalus nitidiflorus, a critically endangered species. Flora – Morphology, Distribution, Functional Ecology of Plants 206:423-432.

Lorence, D. and Butaud, J.-F. 2011. A reassessment of Marquesan Ochrosia and Rauvolfia (Apocynaceae) with two new combinations. PhytoKeys 4:95+

Viswanathan, M. B., Harrison Premkumar, E., and Ramesh, N. 2000. Rediscovery of Wendlandia angustifolia Wight ex Hook.f. (Rubiaceae), from Tamil Nadu, a species presumed extinct. J. Bombay Nat. Hist. Soc. 97(2): 311-313

Oppenheimer, H. 2011. New Hawaiian plant records for 2009 Records of the Hawaii Biological Survey for 2009–2010. Bishop Museum Occasional Papers 110: 5–10 [notes the rediscovery of Wikstroemia villosa]

Shenoy et al. 2014. Extended distribution of Madhuca insignis (Radlk.) H. J. Lam. (Sapotaceae) – A Critically Endangered species in Shimoga District of Karnataka. ZOO’s PRINT Volume XXIX, Number 6

Sudhi, K. S. 2012. Rediscovered tree still ‘extinct’ on IUCN Red List. The Hindu. [Cynometra beddomei]

Missouri Botanical Garden 2012. Umbrella Dracaena. [Dracaena umbraculifera might be extinct in the wild, but it is still successfully grown in many botanical gardens!]






OpenCon 2015 Brussels was an amazing event. I’ll save a summary of it for the weekend but in the mean time, I urgently need to discuss something that came up at the conference.

At OpenCon, it emerged that Elsevier have apparently been blocking Chris Hartgerink’s attempts to access relevant psychological research papers for content mining.

No one can doubt that Chris’s research intent is legitimate – he’s not fooling around here. He’s smart statistically, programmatically, and scientifically; without doubt he has the technical skills to execute his proposed research. Only recently he was an author on an excellent paper highlighted in Nature News: ‘Smart software spots statistical errors in psychology papers’.

Why then are Elsevier interfering with his research?

I know nothing more about his case than what is in his blog posts; however, I have also had publishers block my own attempts at content mining this year, so I think this is the right time for me to go public, in support of Chris.

My own use of content mining

I am trying to map where in the giant morass of research literature Natural History Museum (London) specimens are mentioned. No one has an accurate index of this information. With the use of simple regular expressions it’s easy to filter hundreds of thousands of full-text articles to find, classify and look up potential mentions of specimens.
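As a rough illustration of the regular-expression approach (the specimen-code pattern and example sentence here are hypothetical simplifications; real NHM catalogue numbers vary considerably in format):

```python
import re

# Hypothetical, simplified pattern for specimen codes of the form
# "NHMUK 2013.2.1.1" or "BMNH 1901.11.12.5": an institutional prefix
# followed by a dotted numeric catalogue number.
SPECIMEN = re.compile(r"\b(?:NHMUK|BMNH)\s+\d{4}(?:\.\d+)+\b")

text = ("The holotype (NHMUK 2013.2.1.1) and a paratype BMNH 1901.11.12.5 "
        "were examined.")

# Extract every candidate specimen mention from the article text.
print(SPECIMEN.findall(text))
```

Run across a full-text corpus, a pattern like this turns an unindexed literature into a searchable list of candidate specimen citations, which can then be checked against the museum’s own catalogue.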

In the course of this work, I was frequently obstructed by BioOne. My IP address kept getting blocked, stopping me from downloading any further papers from this publisher. I should note here that my institution (NHMUK) pays BioOne to provide access to all their papers – my access is both legitimate and paid-for.

Strong claims require strong evidence. Thankfully I was doing my work with the full support and knowledge of the NHM Library & Archives team, so they forwarded one or two of the threatening messages they were getting from the publishers I was mining. I have no idea how many messages were sent in total. Here’s one such message from BioOne (below):

Blocked by BioOne


So, according to BioOne, I swiftly found out that downloading more than 100 full-text articles in a single session is automatically deemed “excessive” and “a violation of permissible activity”.

Isn’t that absolutely crazy? In the age of ‘big data’, where anyone can download over a million full-text articles from the PubMed Central OA subset in a few clicks, an artificially-imposed restriction of just 100 is simply mad and anti-science. As a member of a subscription-paying institution, surely I have a paid right to access and analyze this content? We are paying for access but not actually getting full access.

If I tell other journals like eLife, PLOS ONE, or PeerJ that I have downloaded every single one of their articles for analysis – I get a high-five: these journals understand the importance of analysis-at-scale. Furthermore, the subscription access business model needn’t be a barrier: the Royal Society journals are very friendly with content mining – I have never had a problem downloading entire decades worth of journal content from the Royal Society journals.

I have two objectives for this blog post.

1.) A plea to traditional publishers: PLEASE STOP BLOCKING LEGITIMATE RESEARCH

Please get out of the way and let us do our research. If our institutions have paid for access, you should provide it to us. You are clearly impeding the progress of science. Far more content mining research has been done on open access content and there’s a reason for that – it’s a heck of a lot less hassle and (legal) danger. These artificial obstructions on access to research are absurd and unhelpful.

2.) A plea to researchers and librarians: SHARE YOUR STORIES

I’m absolutely sure it’s not just Chris and I who have experienced problems with traditional publishers artificially obstructing our research. Heather Piwowar is one great example I know of. She bravely, extensively and publicly documented her torturous experiences negotiating access to, and text mining of, Elsevier-controlled content. But we need more people to speak up. I fear that librarians in particular may be inadvertently sweeping these issues under the carpet – they are the most likely to receive the most interesting emails from publishers with respect to these matters.

This is a serious matter. Given the experience of Aaron Swartz, who faced up to 50 years of imprisonment for downloading ‘too many’ JSTOR papers, it would not surprise me if few researchers come forward publicly.

Anecdata On Sharing Science

October 1st, 2015 | Posted by rmounce in ARCS2015 - (0 Comments)

[This is my competition entry for the ARCS2015 essay competition hosted at The Winnower. I’m using their excellent WordPress plugin to automagically transfer this post from my blog to their site at the click of a button.]

There’s a 1,000-word limit for this competition, so forgive my brevity. I could easily write ten thousand! These are merely a couple of vignettes.

To really understand why open is better, you should try traditional science first. Otherwise, you won’t see the most awful practices, as these are usually hidden from view.

My first peer-reviewed paper was published in a popular glamour magazine called Nature. Most academics read it for the News and Jobs sections, but it publishes some research articles too. Editorially, it selects research articles for publication on the basis of their newsworthiness, which has unfortunate side-effects: significantly more of these stories eventually get retracted or corrected, relative to other journals which focus more on the correctness of the science.

My one-page, one-figure article simply pointed out that an article the magazine had previously published on its front cover was wrong. I wasn’t the only one to notice this either. Amazingly, it took the journal 160 days from submission to publication of my small contribution. This was my first author experience of the vast inefficiency, bureaucracy, and secrecy practised by traditional ‘closed’ science journals.

It was thus made obvious to me, from a very early stage of my PhD, that there had to be other better, faster, cheaper, more-enriched ways of communicating science. Nowadays I wouldn’t recommend anyone use the traditional (read: slow, obstructive, secretive) means of post-publication commentary. If you want to communicate what is poor about a paper published at a traditional journal, writing to the journal is the least effective means of doing so. Use PubPeer, PubMed Commons, blogs, Twitter, or even The Winnower for post-publication peer-review.

Making incisive, well-communicated points about research you have read, and sharing these thoughts openly for others to read and comment on, is a valuable skill. Although hard to evidence, I believe I have gained respect and wider exposure for doing this myself, as have others, e.g. Rosie Redfield, whom I would not have heard of were it not for her excellent critique of the #arseniclife paper (for those who don’t know about it: the original paper was published in another glamour mag, and was subsequently formally rebutted with neutrally-titled ‘Technical Comments’ 177 days after online publication, despite Rosie’s much more timely blogged rebuttal, which went online 2 days after the initial publication). I’m not alone in thinking these are glaring examples of how traditional science communication is broken.

Even simply sharing your research talk slides online can be hugely beneficial for your career

Another thing I learned by experience early on in my PhD was that there’s a problematic absence of data supporting many research articles. To put it more bluntly: most articles have pretty figures and lovely prose, but many simply don’t make the underlying data available. I discussed this at length, with evidence, in a conference presentation at the Young Systematists Forum, 2010. I pro-actively put my talk slides online to share my ideas on this with the world, and with the help of Twitter this one small act of sharing a conference presentation directly led to a multiplicity of benefits:

  • I was invited on to the council of the Systematics Association, so I can try to influence the future direction of the society towards data sharing, open access, and better publishing (it’s work in progress, large committees have a tendency to change slooooooowly)
  • I was invited to join an international collaboration to document the lack of data archiving for phylogenetic studies, which was published in 2012, in an open access journal, and has been cited 18 times so far
  • It also led to my first invited speaking slot at the Open Knowledge Foundation conference in 2011 (OKCon), which in turn helped me become aware of and successfully apply for one of the first Panton Fellowships for Open Data in Science (£8,000)

So one small act of sharing directly led to Fellowship money, many speaking invites, additional publications/collaborations I wouldn’t otherwise have been involved with, and genuine influence within an academic society. Sharing my presentations, my ideas, my data, and of course my publications has clearly benefited my career, and if anything I’m only likely to go more open with my research in future, rather than less!

As my title alludes to, I’m well aware my stories are just anecdata. This isn’t an objective assessment of the benefits of open science, but the logic behind the benefits is clear nonetheless: if you don’t share your work, fewer people will know of it. Share freely and openly and you may find yourself with many more beneficial opportunities as a result. Go forth and upload your work today!