Show me the data!

Yesterday I published a blog post calling for ongoing monitoring of ‘hybrid’ open access articles and academic publisher services in general.

Today I want to share with you some highlights from my brief checks on two years’ worth of Wellcome Trust ‘open access’ research outputs whose article processing charges (APCs) were paid by the Trust.

Source data

Robert Kiley of the Wellcome Trust has made public official data on the APC spend on ‘open access’ articles paid for by the Wellcome Trust over at his figshare profile. This was a brilliant thing to do. Many people have produced thought-provoking and brilliant analyses of this data, and it has been copied many times; I now see it in many different GitHub repositories.

Yesterday, just to test the idea, with no real expectation that I’d actually find anything of interest, I decided to check the DOIs of two years’ worth (2012 to 2014) of Wellcome Trust funded ‘open access’ articles. Here are the three major things that I have discovered from this mini exercise so far:

 

1.) Paywalled articles that should be open access: Wellcome Trust funded articles that are not openly accessible (updated for accuracy 2017-02-27)

According to Robert Kiley’s figshare data for 2013-2014, the Wellcome Trust paid £1,194 to Emerald to make an article entitled ‘Running a hospital patient safety campaign: a qualitative study’ open access at the publisher website. I followed the DOI link given and found that, today, this article is paywalled and is being advertised for sale by Emerald Group Publishing at £20 for 30 days of access (screenshot below):

 

Sadly, I am no stranger to this kind of event. I have personally seen Elsevier, Wiley, Springer and Oxford University Press selling articles that funders had specifically paid to make open access to everyone in the world, not to be sold at the point of availability. It seems inevitable now that hybrid open access would lead to this. Paywall publishers simply can’t keep the paywalls off, even when they are paid to do so.

UPDATE 2017-02-27: Three weeks after publication of this blog post, Wellcome Trust and Emerald kindly confirmed to me that no APC was actually paid for the above article (contrary to what was mistakenly stated in the figshare data). The article authors backed out of choosing gold open access. Unfortunately, the authors did not self-archive a freely available version of this paper either, so it remains inaccessible to those outside paywalls and thus was definitely NOT published in a manner compliant with the Wellcome Trust rules and regulations in place at the time.

 

2.) Misuse of funds set aside to cover Open Access charges

I thought this was another simple case of hybrid open access being ‘mistakenly’ paywalled by the publisher but the truth is even stranger.

I found an article entitled ‘Mechanisms underlying cortical activity during value-guided choice’ at the journal Nature Neuroscience for which, according to Robert Kiley’s data, the sum of £1,272.86 had been paid by the Wellcome Trust to make it open access. The plot thickens, however: Nature Neuroscience doesn’t really do hybrid open access, and if it did, it would charge a lot more than that.

What I have discovered here is an instance where the authors (or their institution), mistakenly or fraudulently depending on how forgivingly you view it, used the Wellcome Trust open access fund to pay Nature Neuroscience £1,272.86 for colour figures. The article is NOT open access at the publisher website. The 2017 Wellcome Trust guidelines are absolutely clear that you cannot use Wellcome Trust open access money for “page charges” or “colour figure” charges. I do not know whether Wellcome’s rules were as clear back in 2012 when the payment was made, but this is extremely disappointing to observe. Charity funding should be better spent than on spurious publisher-invented ransoms like “colour figure charges”.

3.) Elsevier ‘open access’ articles are not accessible to all machine methods

To help check article DOIs in a simple and automated manner, I used the R package httr. The code I used is available as a GitHub gist. The code works well for articles hosted by all publishers except one: Elsevier. Any attempt to follow DOI links with R::httr simply hangs, and I have to use a timeout to ensure that my script skips over such problems and proceeds to the next article.
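For readers who don’t use R, the same kind of timeout-guarded DOI check can be sketched in Python using only the standard library. This is an illustrative sketch, not the author’s actual gist; the `check_doi` and `status_label` names are my own, and the report labels merely imitate the style of the output shown later in this post.

```python
import socket
import urllib.error
import urllib.request

def status_label(code):
    """Map an HTTP status code (or None for a timeout) to a short report label."""
    if code is None:
        return "Timeout / no response"
    if 200 <= code < 300:
        return "Success: (%d)" % code
    if 400 <= code < 500:
        return "Client error: (%d)" % code
    if 500 <= code < 600:
        return "Server error: (%d)" % code
    return "Other: (%d)" % code

def check_doi(doi, timeout=10):
    """Follow a DOI's redirect chain to the publisher landing page.

    Returns the final HTTP status code, or None if the site hangs past
    the timeout -- the failure mode described above for Elsevier.
    """
    url = "https://doi.org/" + doi
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code  # e.g. 404 or 401 from the publisher
    except (urllib.error.URLError, socket.timeout):
        return None
```

A batch run would simply loop over a list of DOIs calling `check_doi` and recording `status_label(check_doi(doi))` for each; the timeout guarantees a single unresponsive publisher cannot stall the whole script.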

Can Elsevier really call what they are offering ‘open access’ if it is not openly accessible by automated methods such as R::httr scripts? I don’t have time to expound upon this at length here, but I will certainly return to this particular point at a later date.

Conclusions

So there you have it. Super simple automated checks of just a few thousand Wellcome Trust funded ‘open access’ articles, via their DOIs, have revealed three rather interesting things, and support my overall thesis that we need to continuously monitor academic publishers: not just run “one-time” compliance checks.

I really do think this is the start of something very interesting. I have plans. WATCH THIS SPACE!

In a recent series of posts I’ve become fascinated with how unnecessarily fragile the scholarly communications system seems to be in 2017:

Oxford University Press have failed to preserve access to the scholarly record (23-01-2017)

Documenting the many failures across OUP journals (24-01-2017)

Comparing OUP to other publishers (25-01-2017)

As a reminder: academics literally invented the internet. I think we can and should be doing better.

We have the technology and resources available to make a robust and efficient scholarly communications system, yet for the more than $10 billion we spend on it every year we appear to be getting incredibly poor service from our various providers. Take Digital Object Identifiers (DOIs) as an example – they are great in theory. If I send a colleague a DOI such as “10.1093/sysbio/syw105”, then in theory, in five years’ time, no matter who owns the journal or what platform technology they decide to use, my colleague should be able to follow this DOI-based link to an article landing page: http://doi.org/10.1093/sysbio/syw105 (as of 06-02-2017 this one happens to be broken, though!).

The DOI registration agency (CrossRef) that most journals use is highly competent. I have no doubts about their technical abilities, or the support and documentation they provide to publishers. Most modern, born-digital publishers create DOIs for their journal articles with accurate metadata, and little difficulty or breakage. Yet when it comes to some publishers like Oxford University Press, I am amazed to find that more than 2.3% of their DOIs result in a 404 “Not Found” error. Collectively, research institutions, libraries, and personal subscribers across the world pay publishers like OUP a huge amount every year to provide publishing services. Why can’t they actually do the job we pay them to do?

Personally, I have now lost all faith in the ability of many legacy academic publishers to publish content on the internet in a robust manner. For example, when we pay them >$3000 to make an article hybrid open access, what happens? They end up putting it behind a paywall and selling it to readers, despite the hybrid open access payment. Wiley, Elsevier, Springer and now OUP have all been caught doing this to some extent.

Nor are the failures confined to subscription-access journals. Remember when all of Scientific Reports and other NPG journals were down for a few days with absolutely no prior warning (mid June 2016)? I totally understand the need for publishers to upgrade and maintain their platforms, but the apparent lack of testing or forethought when they decide to fiddle with them is professionally incompetent at times, and this recent problem with OUP definitely falls into that category.

When GitLab (not an academic publishing services company, unfortunately) made a serious screw-up recently that caused a major outage to their services, they livestreamed their attempts to fix the problem on YouTube and had it fixed in about 12 hours. They also published full, transparent, extensive and frank reports on what happened, why it happened, and how and when they fixed it. In stark contrast, OUP have given their customers opaque reassurances in an official statement AND still haven’t fixed many of the problems even weeks later! Legacy academic publishers and other modern internet services are sadly miles apart in the levels of service and transparency they provide.

So what can we do to remedy this abominable situation?

I propose that funders, institutions, and authors need to start doing more than just “one-time” compliance checks on the way that research outputs are published. We need continuous checks (daily, weekly, monthly or quarterly) on research outputs, just to make sure they are still actually there and that vital links to them, like DOIs, actually work! Additionally, those who pay for these publisher services need to start checking that publishers are providing good service, and to withhold money from, or bring consequences to, those publishers who provide poor service.

To this end I have created toy code in R to help empower authors to check that the DOIs of their own authored research outputs actually work. I have my script set up as a cron job scheduled to check my DOIs every day. Today’s fully-automated report is below. The 401 error tells me that my letter is, unfortunately, behind a paywall at Nature (true), but it’s not a 404, so it’s otherwise okay:


"HTTP.Status","DOI"
"Success: (200) OK","http://doi.org/10.1111/evo.12884"
"Success: (200) OK","http://doi.org/10.7287/peerj.preprints.773v1"
"Success: (200) OK","http://doi.org/10.3897/rio.1.e7547"
"Success: (200) OK","http://doi.org/10.5334/ban"
"Success: (200) OK","http://doi.org/10.1045/november14-murray-rust"
"Success: (200) OK","http://doi.org/10.3897/bdj.2.e1125"
"Success: (200) OK","http://doi.org/10.4033/iee.2013.6b.14.f"
"Success: (200) OK","http://doi.org/10.1002/bult.2013.1720390406"
"Success: (200) OK","http://doi.org/10.5061/dryad.h6pf365t"
"Success: (200) OK","http://doi.org/10.1186/1756-0500-5-574"
"Client error: (401) Unauthorized","http://doi.org/10.1038/nature10266"
"Success: (200) OK","http://doi.org/10.1038/npre.2011.6048"

This idea gets more interesting however if scaled-up to the institutional or funder-level. What would happen if Cambridge University checked the DOIs of all “their” co-authored research outputs every week? What would happen if the Wellcome Trust checked the DOIs of all their funded research outputs every month? At this scale it could be done both time and cost effectively, and it is much more likely to uncover the abundant problems that lie quietly unobserved and under-reported.

Tomorrow I will blog about what I discovered when I checked the DOIs from just two years’ worth (2012 to 2014) of Wellcome Trust funded research. I promise it’ll be interesting, and it’ll further demonstrate the utility of this exercise…