
Hack4ac recap

July 9th, 2013 | Posted by rmounce

Last Saturday I went to Hack4Ac – a hackday in London bringing together many sections of the academic community in pursuit of two goals:

  • To demonstrate the value of the CC-BY licence within academia. We are interested in supporting innovations around and on top of the literature.
  • To reach out to academics who are keen to learn or improve their programming skills to better their research. We’re especially interested in academics who have never coded before.


The list of attendees was stellar, cross-disciplinary (inc. Humanities) and international. The venue (Skills Matter) & organisation were also suitably first-class – lots of power leads, spare computer mice, projectors, whiteboards, good wi-fi, separate workspaces for the different self-assembled hack teams, tea, coffee & snacks all throughout the day to keep us going, prizes & promo swag for all participants…

The principal organisers, Jason Hoyt (PeerJ, formerly at Mendeley) and Ian Mulvany (Head of Technology at eLife), deserve a BIG thank you for making all this happen. I hear this may also be turned into a fairly regular set of meetups, which will be great for keeping up the momentum of innovation going on right now in academic publishing.

The hack projects themselves…

The overall winner of the day was ScienceGist as voted for by the attendees. All the projects were great in their own way considering we only had from ~10am to 5pm to get them in a presentable state.

ScienceGist

 

This project was initiated by Jure Triglav, building upon his previous experience with Tiris. This new project aims to provide an open platform for post-publication summaries (‘gists’) of research papers, providing shorter, more easily understandable summaries of the content of each paper.

I also led a project under the catchy title of Figures → Data, whereby we tried to provide added value by taking CC-BY bar charts and histograms from the literature and attempting to re-extract the numerical data from those plots using automated computer vision techniques. On my team for the day I had Peter Murray-Rust, Vincent Adam (of HackYourPhD) and Thomas Branch (Imperial College). This was handy because I know next to nothing about computer vision – I’m Your Typical Biologist™ in that I can script in R, perl, bash and various other things just enough to get by, but not nearly enough to attempt something as ambitious as this on my own!

Forgive me the self-indulgence if I talk about this Figures → Data project more than the others, but I thought it would be illuminating to discuss the whole process in detail…

In order to share links between our computers in real-time, and to share initial ideas and approaches, Vincent set up an etherpad here to record our notes. You can see the development of our collaborative note-taking using the timeslider function below (I made a screen recording of it for posterity using recordmydesktop):

In this etherpad we documented a variety of ways to discover bar charts & histograms:

  • figuresearch is one such web-app that searches the PMC OA subset for figure captions & figure images. With it you can find over 7,000 figure captions containing the word ‘histogram’ (for 99% of those you would assume the corresponding figure contains at least one histogram, although there are exceptions).
  • figshare has nearly 10,000 hits for histogram figures, whilst BMC & PLOS can also be commended for providing the ability to search their literature stacks by figure caption alone, making figure discovery far more efficient and targeted (a sketch of this kind of caption query appears just below this list).
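As promised above, here is a minimal sketch of what a programmatic caption-style search can look like. It queries Europe PMC’s REST search service (not one of the tools we used on the day); the endpoint, the query syntax and the JSON field names are my assumptions about that API, so treat it as illustrative only.

import requests

# Illustrative only: search Europe PMC for open-access papers mentioning "histogram".
# The endpoint, the OPEN_ACCESS:y filter and the JSON field names are assumptions
# about the Europe PMC REST API, not something we used at the hackday.
BASE = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"
params = {
    "query": "histogram AND OPEN_ACCESS:y",
    "format": "json",
    "pageSize": 25,
}
resp = requests.get(BASE, params=params, timeout=30)
resp.raise_for_status()
for hit in resp.json().get("resultList", {}).get("result", []):
    # Enough information to go and fetch the figures afterwards
    print(hit.get("pmcid"), "-", hit.get("title"))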

Jason Hoyt was in the room with us for quite a bit of the hack and clearly noted the search features we were looking for – just yesterday he tweeted: “PeerJ now supports figure search & all images free to use CC-BY (inspired by @rmounce at #hack4ac)” [link]. I’m really glad to see our hack goals helped Jason improve content search for PeerJ to better serve the needs (albeit somewhat niche in this case) of real researchers. It’s this kind of unique confluence of typesetters, publishers, researchers, policymakers and hackers at hands-on events like this that can generate real change in academic publishing.

The downside of our project was that we discovered someone has done much of this before. ReVision: Automated Classification, Analysis and Redesign of Chart Images [PDF] was an award-winning paper at an ACM conference in 2011. Much of that work would have helped our idea, particularly the figure-classification techniques. Yet sadly, as with so much ‘closed’ science, we couldn’t find any open source code associated with the project. There were comments that this kind of code-withholding behaviour, blocking re-use and progress, is fairly typical of computer science & ACM conferences (I wouldn’t know, but it was muttered…). If anyone does know of related open source code for this project, do let me know!

So… we had to start from a fairly low level ourselves: Vincent & Thomas tried MATLAB- and C-based approaches with OpenCV, and their code is all up on our project github. Peter tried the AMI2 toolset, particularly the Canny algorithm, whilst I built up an annotated corpus of 40 CC-BY bar charts & histograms for testing purposes. Results of all three approaches can be seen below in their attempts to simplify this hilarious figure about dolphin cognition from a PLOS paper:

The plastic fish just wasn't as captivating...

“Figure 5. Total time spent looking at different targets.” from Siniscalchi M, Dimatteo S, Pepe AM, Sasso R, Quaranta A (2012) Visual Lateralization in Wild Striped Dolphins (Stenella coeruleoalba) in Response to Stimuli with Different Degrees of Familiarity. PLoS ONE 7(1): e30001. doi:10.1371/journal.pone.0030001 CC-BY

Peter’s results (using AMI2):

 

Thomas’s results (OpenCV & C):

 

Vincent’s results (OpenCV & MATLAB & bilateral filtering):

We might not have won first prize but I think our efforts are pretty cool, and we got some laughs from our slides presenting our day’s work at the end (e.g. see below). Importantly, *everything* we did that day is openly available on github to re-use, re-work and improve upon (I’ll ping Thomas & Vincent soon to make sure their code contributions are openly licensed). Proper full-stack open science, basically!
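For a flavour of what the code on github is trying to do, here is a minimal sketch of the edge-detection idea in Python with OpenCV. The thresholds and the crude “looks like a bar” filter are illustrative guesses of mine, not the MATLAB/C code that Thomas and Vincent actually wrote.

import cv2

# Minimal sketch: detect edges in a rasterised bar chart and pick out candidate bars.
# Thresholds and the bar heuristic are illustrative; the real hackday code is on github.
img = cv2.imread("barchart.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(img, (5, 5), 0)   # suppress antialiasing/JPEG noise
edges = cv2.Canny(blurred, 50, 150)          # Canny edge detection

# Contours of the edge map; tall, thin bounding boxes are likely bars.
contours = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if h > 3 * w and h > 20:                 # crude "looks like a bar" filter
        # Bar height in pixels; mapping pixels to data values still needs the axis scale.
        print(f"candidate bar at x={x}: height {h}px")

The hard part, of course, is the last step: recovering the axis calibration so that pixel heights become actual numbers.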

some figures are just awful

 

Other hack projects…

As I thought would happen, I’ve waffled on about our project. If you’d like to know more about the other projects, hopefully someone else will blog about them at greater length (sorry!) – I’ve got my thesis to write, y’know! ;)

You can find out more about them all on the Twitter channel #hack4ac or on the hack4ac github page. I’ll write a little bit more below, but it’ll be concise, I warn you!

  • Textmining PLOS Author Contributions

This project has a lovely website for itself: http://hack4ac.com/plos-author-contributions/ and so needs no more explanation.

  • Getting more openly-licensed content on wikipedia

This group had problems with the YouTube API I think. Ask @EvoMRI (Daniel Mietchen) if you’re interested…

  • articlEnhancer

Not content with helping out the PLOS author contributions project, Walther Georg also unveiled his own article enhancer project, which has a nice webpage here: http://waltherg.github.io/articlEnhancer/

  • Qual or Quant methods?

Dan Stowell & co used NLP techniques on full-text accessible CC-BY research papers to classify, in an automated way, whether each paper was qualitative or quantitative (or a mixture of the two). The last tweeted report of it sounded rather promising: “Upstairs at #hack4ac we’re hacking a system to classify research papers as qual or quant. first results: 96 right of 97. #woo #NLPwhysure”. More generally, I believe their idea was to enable a “search by methods” capability, which I think would be highly sought-after if they could pull it off. Best of luck! (A toy sketch of this kind of classifier follows below.)
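To give a flavour of what such a classifier involves – and this is emphatically not Dan’s code, just a toy sketch with made-up training snippets – a bag-of-words approach in Python with scikit-learn might look like this:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: method-section snippets labelled quantitative or qualitative.
# A real system would train on full-text CC-BY papers instead.
texts = [
    "we measured reaction times and fitted a linear mixed model",
    "ANOVA revealed a significant effect of treatment (p < 0.05)",
    "semi-structured interviews were transcribed and coded thematically",
    "we conducted an ethnographic study using participant observation",
]
labels = ["quant", "quant", "qual", "qual"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["regression coefficients were estimated by maximum likelihood"]))
# expected output: ['quant']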
Apologies if I missed any projects. Feel free to post epic-long comments about them below ;)

 

 

 

This post was originally posted over at the LSE Impact blog, where I was kindly invited to write on this theme by the Managing Editor. It’s a widely read platform and I hope it inspires some academics to upload more of their work for everyone to read and use.

Recently I tried to explain on twitter in a few tweets how everyone can take easy steps towards open scholarship with their own work. It’s really not that hard and potentially very beneficial for your own career progress – open practices enable people to read & re-use your work, rather than let it gather dust unread and undiscovered in a limited access venue as is traditional. For clarity I’ve rewritten the ethos of those tweets below:

Step 1: before submitting to a journal or peer-review service upload your manuscript to a public preprint server

Step 2: after your research is accepted for publication, deposit all the outputs – full-text, data & code in subject or institutional repositories

The above is the concise form of it, but as with everything in life the devil is in the detail, and there is much to explain, so I will elaborate upon these steps in this post.

Step 1: Preprints

Uploading a preprint before submission is technically very easy to do – it takes just a few clicks – but the barrier that prevents many from doing this in practice is cultural and psychological. In disciplines like physics it’s completely normal to upload preprints to arXiv.org, and submitting the paper to a journal afterwards in some cases has more to do with satisfying the requirements of the Research Excellence Framework than with any real desire to see it in a journal. Many preprints on arXiv get cited and are valued scientific contributions, even without ever being published in a journal. That said, even within this community author perceptions differ as to exactly when in the publication cycle to upload a preprint.

Within biology it’s relatively unheard of to upload a preprint before submission, but that’s likely to change this year because of an excellent, well-argued article advocating their use in biology and describing the many different outlets available for them. My own experience of this has been illuminating – I recently co-authored a paper openly on github and the preprint was made available with a citable DOI via figshare. We’ve received a nice comment, more than 250 views and a citation from another preprint – all before our paper has been ‘published’ in the traditional sense. I hope this illustrates how open practices really do accelerate progress.

This is not a one-off occurrence either. As with open access papers, freely accessible preprints have a clear citation advantage over traditional subscription access papers:

[graph: citation advantage of freely accessible preprints over subscription-access papers]

Outside the natural sciences the situation is similar; Martin Fenner notes that in the social sciences (SSRN) and economics (RePEc) preprints are also common, either in this guise or as ‘working papers’ – the name may be different but the pre-submission accessibility is the same. Yet I suspect that, as in biology, this practice isn’t yet mainstream in the Arts & Humanities – perhaps it’s just a matter of time before this cultural shift occurs (more on this later in the post…)?

There is one important caveat to mention with respect to posting preprints – a small minority of conservative, traditional journals will not accept articles that have been posted online prior to submission. You may want to check SHERPA/RoMEO before you upload your preprint, to ensure that your preferred destination journal accepts preprint submissions. There is an increasingly visible grass-roots trend of convincing these journals to allow preprint submissions, and some of these efforts have already succeeded.

If even much-loathed publishers like Elsevier allow preprints, unconditionally, I think it goes to show how rather uncontroversial preprints are. Prior to submission it’s your work and you can put it anywhere you wish.

 

Step 2: Postprints

 

Unlike with preprints, the postprint situation is a little trickier. Publishers like to think that they have the exclusive right to publish your peer-reviewed work. The exact terms of these agreements will vary from journal to journal depending on the exact terms of the copyright or licencing agreement you might have signed. Some publishers try to enforce ‘embargoes’ upon postprints, to maintain the artificial scarcity of your work and their monopoly of control over access to it. But rest assured, at some point, often just 12 months after publication, you’ll be ‘allowed’ to upload copies of your work to the public internet (again SHERPA/RoMEO gives excellent information with respect to this).

So, assuming you already have some form of research output(s) to show for your work, you’ll want these to be discoverable, readable and re-usable by others – after all, what’s the point of doing research if no-one knows about it! If you’ve invested a significant amount of time writing a publication, gathering data, or developing software – you want people to be able to read and use this output. All outputs are important, not just publications. If you’ve published a paper in a traditional subscription access journal, then most of the world can’t read it. But, you can make a postprint of that work available, subject to the legal nonsense referred to above.

If it’s allowed, why don’t more people do it?

Similar to the cultural issues discussed with preprints, researchers on the whole don’t tend to use institutional repositories (IRs) to make their work more widely available. My IR at the University of Bath lists metadata for over 3,300 published papers, yet relatively few of those metadata records have a fulltext copy of the item deposited with them – just ~6.9% of records had fulltext deposits, according to figures published back in June 2011.

I think it’s because institutional repositories have an image problem: some are functional but extremely drab. I also hear researchers, full of disdain, say of their IRs (I paraphrase):

“Oh, that thing? Isn’t that just for theses & dissertations – you wouldn’t put proper research there”

All this is set to change though, as researchers are increasingly being mandated to deposit their fulltext outputs in IRs. One particularly noteworthy driver of change in this realm could be the newly-launched Zenodo service. Unlike Academia.edu or ResearchGate, which are for-profit operations and really just websites in many respects, Zenodo is a proper repository: it supports harvesting of content via the OAI-PMH protocol, all metadata about the content is CC0, and it’s a not-for-profit operation. Crucially, it provides a repository for academics less well-served by existing repository systems – not all research institutions have a repository, and independent or retired scholars also need a discoverable place to put their postprints. I think the attractive, modern look, and the altmetrics to demonstrate impact, will also add that missing ‘sex appeal’ and provide the extra incentive to upload.
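To make the OAI-PMH point concrete: any compliant repository exposes its records through a simple HTTP interface, so harvesting them is only a few lines of code. Below is a minimal sketch in Python; the Zenodo endpoint URL is my assumption, and the same ListRecords request should work against any repository’s advertised OAI-PMH base URL.

import requests
import xml.etree.ElementTree as ET

# Minimal OAI-PMH harvest: ListRecords in Dublin Core from a repository.
# The Zenodo base URL is an assumption; substitute any repository's OAI-PMH endpoint.
BASE_URL = "https://zenodo.org/oai2d"
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

resp = requests.get(BASE_URL, params=params, timeout=60)
resp.raise_for_status()

ns = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}
root = ET.fromstring(resp.content)
for record in root.findall(".//oai:record", ns):
    title = record.find(".//dc:title", ns)
    if title is not None:
        print(title.text)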


Providing Access to Your Published Research Data Benefits You

A new preprint on PeerJ shows that papers with associated open research data have a citation advantage. Furthermore, other research has shown that willingness to share research data is related to the strength of the evidence and the quality of the results. Traditional repository software was designed around handling metadata records and publications; it doesn’t tend to be great at storing or visualising research data. But a new development in this arena is the use of the CKAN software for research data management. CKAN was originally developed by the Open Knowledge Foundation to help make open government data more discoverable and usable; the UK, the US, and governments around the world now use this technology to make data available. Now research institutions like the University of Lincoln are using it for research data management too, and like Zenodo the interface is clean, modern and provides excellent discoverability.
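Part of what makes CKAN nice for discoverability is that every dataset is also exposed through its Action API, not just the web pages. Here is a rough sketch of a dataset search; the instance URL below is just a placeholder, but any CKAN site exposes the same endpoint.

import requests

# Sketch of a CKAN Action API dataset search; replace the host with any CKAN instance.
CKAN = "https://demo.ckan.org"  # placeholder instance

resp = requests.get(
    f"{CKAN}/api/3/action/package_search",
    params={"q": "palaeontology", "rows": 10},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()["result"]

print(f"{result['count']} matching datasets")
for dataset in result["results"]:
    print("-", dataset["title"])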


Repositories are superior for enabling discovery of your work

Even though I use Academia.edu & ResearchGate myself, they’re not perfect solutions. If someone is looking for your papers, or a particular paper that you wrote, these websites do well at making your output discoverable from a simple Google search. But interestingly, for more complex queries, these simple websites don’t provide good discoverability.

An example: I have a fulltext copy of my Nature letter on Academia.edu, yet it can’t be found from Google Scholar – but the copy in my institutional repository at Bath can be. This is the immense value of interoperable and open metadata. Academics would do well to think closely about how this affects the discoverability of their work online.

The technology for searching across repositories for freely accessible postprints isn’t as good as I’d want it to be. But repository search engines like BASE, CORE and Repository Search are improving day by day. Hopefully, one day we’ll have a working system where you can paste in a DOI and it’ll take you to a freely available postprint copy of the work; Jez Cope has an excellent demo of this here.

Open scholarship is now open to all

So, if there aren’t any suitable fee-free journals in your subject area (1), you don’t have funds to publish a gold open access article (2), and you aren’t eligible for an OA fee waiver (3), fear not. With a combination of preprint & postprint posting, you too can make your research freely available online, even if it has the misfortune to be published in a traditional subscription access journal. Upload your work today!

My final repost today (edited) from the Open Knowledge Foundation blog. It’s a little old, originally posted on the 16th of April, 2013 but I think it definitely deserves to be here on my blog as a record of my activities…

So… it’s over.

For the past twelve months I was immensely proud to be one of the first Open Knowledge Foundation Panton Fellows, but that has now come to an end (naturally). In this post I will try and recap my activities and achievements during the fellowship.


The broad goals of the fellowship were to:

  • Promote the concept of open data in all areas of science
  • Explore practical solutions for making data open
  • Facilitate discussions surrounding the role and value of openness
  • Catalyse the open community, and reach out beyond its traditional core

and I’m pleased to say that I think I achieved all four of these goals with varying levels of success.

 

Achievements:

Outreach & Promotion – I went to a lot of conferences, workshops and meetings during my time as a Panton Fellow to help get the message out there. These included:

Conferences

At all of these I made clear my views on open data and open access, and ways in which we could improve scientific communication using these guiding principles. Indeed I was more than just a participant at all of these conferences – I was on stage at some point at every one, whether it was arguing for richer PDF metadata, discussing data re-use on a panel, or discussing AMI2 and how to liberate open phylogenetic data from PDFs.

One thing I’ve learnt during my fellowship is that academic-to-academic communication alone isn’t enough. In order to change the system effectively, we’ve got to convince other stakeholders too, such as librarians, research funders and policy makers. Hence I’ve been very busy lately attending broader policy-centred events like the Westminster Higher Education Forum on Open Access, the Royal Society Open Access workshop, and the Institute of Historical Research Open Access colloquium.

Again, here in the policy-space my influence has been international not just domestic. For example, my trips to Brussels, both for the Narratives as a Communication Tool for Scientists workshop (which may help shape the direction of future FP8 funding), and the ongoing Licences For Europe: Text and Data Mining stakeholder dialogue have had real impact. My presentation about content mining for the latter has garnered nearly 1000 views on slideshare and the debate as a whole has been featured in widely-read news outlets such as Nature News. Indeed I’ve seemingly become a spokesperson for certain issues in open science now. Just this year alone I’ve been asked for comments on ‘open’ matters in three different Nature features; on licencing, text mining, and open access from an early career researcher point-of-view – I don’t see many other UK PhD students being so widely quoted!

Another notable event I was particularly proud of speaking at and contributing to was the Revaluing Science in the Digital Age invite-only workshop, organised jointly by the International Council for Science & the Royal Society at Chicheley Hall in September 2012. The splendour was not just in the location but also in the attendees – an exciting, influential bunch of people who can actually make things happen. The only downside of such high-level international policy is the glacial pace of action – I’m told that a final policy paper arising from this meeting and subsequent contributions, for approval by the General Assembly of ICSU, will likely only be circulated in 2014 at the earliest!

 

[photo: giving a talk at the Open Knowledge Festival, Helsinki]

The most exciting outreach I did for the fellowship was through the ‘general public’ opportunities that I seized to get the message out to people beyond the ‘ivory towers’ of academia. One such event was the Open Knowledge Festival in Helsinki, September 2012 (pictured above). Another was my participation in a radio show broadcast on Voice of Russia UK radio with Timothy Gowers, Bjorn Brembs, and Rita Gardner, explaining the benefits and motivation behind the recent policy shift to open access in the UK. This radio show gave me the confidence & experience I needed for the even bigger opportunity that came next – at very short notice I was invited to speak on a live radio debate show on open access for BBC Radio 3, with other panellists including Dame Janet Finch & David Willetts MP! An interesting sidenote: this opportunity may not have arisen if I hadn’t given my talk about the Open Knowledge Foundation at a relatively small conference, Progressive Palaeontology in Cambridge, earlier that year – it pays to network when given the opportunity!

 

Outputs

The fellowship may be over, but the work has only just begun!

I have gained significant momentum and contacts in many areas thanks to this Panton Fellowship. Workshop and speaking invites continue to roll in, e.g. next week I shall be in Berlin at the Making Data Count workshop, then later on in the month I’ll be speaking at the London Information & Knowledge Exchange monthly meet and the ‘Open Data – Better Society’ meeting (Edinburgh).

Even completely independent of my activism, the new generation of researchers in my field are discovering for themselves the need for Open Data in science. The seeds for change have definitely been sown. Attitudes, policies, positions and ‘defaults’ in academia are changing. For my part I will continue to try and do my bit to help this in the right direction; towards intelligent openness in all its forms.

What Next?

I’m going to continue working closely with the Open Knowledge Foundation as and when I can. Indeed, for six months starting this January, I’ve agreed to be the OKF Community Coordinator for Open Science before my postdoc starts. Then, when I’ve submitted my thesis (hopefully that’ll go okay), I’ll continue in full-time academic research, funded by a BBSRC grant I co-wrote with Peter Murray-Rust & Matthew Wills – partially out in Helsinki(!) at the Open Knowledge Festival – which has subsequently been approved for funding. This grant proposal, which I’ll blog about further at a later date, comes as a very direct result of the content mining work I’ve been doing with Peter Murray-Rust for this fellowship, using AMI2 tools to liberate open data. Needless to say I’m very excited about this future work… but first things first, I must complete and submit my doctoral thesis!

In the last 2 weeks I’ve given talks in Brussels & Amsterdam.

The first one was given during a European Commission (Brussels) working group meeting on Text & Data Mining. There were perhaps only ~30 people in the room for that.

The second presentation was given just a few days ago at Beyond The PDF 2 (#btpdf2) in Amsterdam.

I uploaded the slides from both of these talks to Slideshare just before or after I gave each talk to help maximize their impact. Since then they’ve had nearly 1000 views according to my Slideshare analytics dashboard.

It’s not just the view count I’m impressed with. The global reach is pretty cool too (see below, created with BatchGeo):

View My Slideshare Impact 08/Mar/2013 to 22/Mar/2013 in a full screen map

Now obviously, these view counts don’t mean that viewers went through all the slides, and a minority of the view count will be bots crawling the web, but I’m still pretty pleased. Imagine if I hadn’t uploaded my Content Mining presentation to the public web: I would have travelled all the way to Brussels and back again (in the same day!) for the benefit of *just* ~30 people (albeit rather important people!). Instead, over 800 people have had the opportunity to view my slides, from all over the world (although, admittedly, mostly the US & Europe).

The moral of this short story: upload your slides & tweet about them whenever you give a talk!
You may not appreciate just how big your potential audience could be. Something academics sceptical of Open Access should perhaps think about?

Particular thanks should go to @openscience for helping disseminate these slides far and wide. Thanks to @openscience and others, my PDF metadata slidedeck got over 100 views in just a 60-minute period upon its first release this Wednesday!

Next step… must work on getting these stats into an ImpactStory widget for the next version of my CV!

Just a quick post.

I happened to see @wisealic tweet about her “new Atira/Pure colleagues” yesterday. I didn’t know what Atira was, but I’d heard of PURE.

I googled it to find out more… and soon found the official Elsevier press release, dated August 15, 2012 (so this isn’t really new news). But combined with recent rumours it does worry me. Elsevier own perhaps a fifth of the academic literature; whatever the true figure, it’s a significant share. Despite the research that went into most of those papers being publicly or charitably funded, Elsevier now rent access to this work back to us (the world) for vast sums of money each and every year.

Not to mention the fake journals they published, the arms dealings their parent company (Reed Elsevier) was involved in, their initial support for the RWA (since withdrawn), the megabundling of journals, the non-provision of open bibliographic metadata (even NPG release this!), and the obscene profit margins (to be fair, they’re not the only corporate publisher making a killing here by selling freely provided academic work) – there are 1001 reasons why; this isn’t an exhaustive list of all the evils…

So Elsevier are not a well-loved company in academia at the moment – more than 13,000 people have signed a boycott of them.

There are rumours that Elsevier are in talks to buy Mendeley at the moment. And Atira/PURE, now part of the Elsevier (umbrella?) corporation, are I think the exclusive(?) providers of the research information ‘management’ systems that the UK will be using for its next Research Excellence Framework (REF, formerly the RAE) exercise in 2014.

So… Elsevier own a significant portion of our papers, they may soon own a significant chunk of the bibliographic metadata stored by academics (Mendeley data) and all the commercial insight and advantage that gives, AND they own the company that manages the data used to evaluate UK academics – and more around the world, no doubt.

I do wonder if there isn’t a significant conflict of interest if thousands of UK academics have publicly boycotted Elsevier and now their academic work is going to be evaluated by… Elsevier. Academic jobs thoroughly depend on the results of these evaluations as I understand it, and heads will roll if the results at an institution are below expectations.

From a purely business perspective many financial analysts would rightly applaud these acquisitions as “good business moves” (good for profits no doubt). But from an ethical standpoint? Elsevier now seem to have a worrying empire of services built around academia and a significant amount of data which presumably they can pool together from each of these different services to gain additional insight? They also have a very poor record when it comes to providing open data. Why are we still giving them our data so easily – they’re only going to rent it back to us at a later date?

To me it’s clear: we’re giving up far too much of our data to this company, and they do not have our best interests at heart – shareholder profits are by definition their primary goal. They have a sizeable monopoly on academic data in all its forms, which they can and do leverage, and I suspect we’re going to be made to pay for this mistake in the future, as we have with hugely inflated journal subscription prices.

Is it just me that’s worried?

Anyone who knows me knows I’m very passionate about the subject of data sharing in science, and after all the relevant conferences I’ve been to and research I’ve done, I don’t mind saying I’m fairly knowledgeable on the subject too.

It’s part of the reason I got this Panton Fellowship that has helped me develop my work and do what I want to do in pursuit of Open Data goals.

So when I saw this article come up on my RSS feeds – I thought great! It’s finally happening. The vertebrate palaeontology community is finally seeing the light – the absolute need to share research data associated with published papers (we’ll tackle pre-publication data sharing later, first things first…)!

Uhen, M. D., Barnosky, A. D., Bills, B., Blois, J., Carrano, M. T., Carrasco, M. A., Erickson, G. M., Eronen, J. T., Fortelius, M., Graham, R. W., Grimm, E. C., O'Leary, M. A., Mast, A., Piel, W. H., Polly, P. D., and Säilä, L. K. 2013. From card catalogs to computers: databases in vertebrate paleontology. Journal of Vertebrate Paleontology 33:13-28.


…and yet when I read the paper – it sorely disappointed me for a variety of reasons.

Choosing examples: bad choices & odd absences

Despite the clear criteria given, I found the choice of databases reviewed to be an odd selection – for example, they chose to include AHOB (Ancient Human Occupation of Britain), of which they write:

“Access is restricted to project members during the life of the project, after which access will be publicly granted.”

This probably explains why, when I go to the database website, I can’t seem to get access to any of the data purported to be there!

Screenshot of the login screen for AHOB. Try it yourself.

Yet apparently: “More than 250 publications have results from the AHOB project, all of which are recorded in the database.”

How many more publications will come out of this cosy little database before access is publicly granted, I wonder? I don’t think this is a good example of a research database, as it doesn’t seem to publicly share any data.

Where’s Dryad?

Furthermore, there are some really big, obvious, relevant databases it neglects to review – in particular Dryad, the only mention of which is that TreeBASE received “some support from Dryad”. There is absolutely no mention anywhere that Dryad itself is a database with lots of vertebrate palaeontological data in it, and likely to be an important, long-lasting database in this area for the foreseeable future IMO! Even some data associated with an article in JVP itself is in Dryad! Although less prominently paleo-related, figshare (with no fewer than 26 paleontology-related datasets at the moment – TreeBASE has approximately as many!) might have been worth mentioning too.

Dryad has a partnership with The Paleontological Society and many evolutionary biology journals. Dryad even bought a promotional stand at last year’s Society of Vertebrate Paleontology annual meeting (the society that publishes the Journal of Vertebrate Paleontology), although as Richard Butler has pointed out to me on Twitter, this article was submitted before that meeting. Still, it seems implausible that none of the 16 listed authors knows about Dryad. I find the non-inclusion of Dryad deeply suspicious and possibly political, given that it could ‘compete’ to store much of the data that some of the other reviewed databases do (it’s a broad generalist in the types of data it accepts).

Isn’t there a conflict of interest issue given that most of the authors of this paper are involved with at least one of the ‘reviewed’ (=advertised) databases in the paper? I see no mention of this conflict of interest anywhere in the paper. I dearly hope this paper was peer-reviewed – that it is an ‘invited article’ makes me wonder a bit about that…

The inclusion of Polyglot Paleontologist among the reviewed databases also rather stretches the meaning of ‘data’ in ‘database’. Are translations of 434 different papers ‘data’ in the same way that TreeBASE or PaleoDB contain data? It’s a fantastic, freely provided resource, no doubt – I mean no criticism of it – but is it data? I think not, tbh.

Strong contenders for things that could/should have been cited but weren’t

With respect to data portals: rOpenSci provide great R interfaces for a wide variety of databases, including TreeBASE, which was one of the ‘reviewed’ databases.

With respect to the ‘History of databases’ section: I find it odd that they didn’t think to mention my own widely publicised and well-supported call for data archiving in palaeontology back in 2011. Nearly 200 palaeontologists signed in support of our ideas, with some memorable quotes of support, e.g. Brian Huber: “This is the way of the future”; P. J. Wagner: “I’ve been trying to get the Paleo Society to sign on with Dryad, but it’s been like slamming my head on jello…”

They could have explained a bit better, in my opinion, why freely accessible databases/archives are so important:

  • ‘Data archiving is a good investment’ (Piwowar et al, 2012)
  • only 4% of phylogenetic data is currently archived, and it’s really useful data (Stoltzfus et al, 2012)
  • willingness to share research data is related to the strength of the evidence and the quality of reporting of statistical results (Wicherts et al, 2011)
  • the “data available upon request” system really doesn’t work (Wicherts et al, 2006)
  • non-commercial clauses applied to biodiversity data have undesirable consequences (Hagedorn et al, 2011)

Odd wording

“…community approach, facilitated by the open access of the WWW and…”

sounds like something my dad would say about the interweb

“The CCL 3.0 license allows…”

a classic mistake – which CC license?
In this case they mean the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 license, or CC BY-NC-SA for short. Calling it “Creative Commons License 3.0 (BY-NC-SA)” makes me wonder how familiar they are with licencing. Perhaps a sub-editor did this. And why they link specifically to the US version rather than the international unported license, I do not know.

Data Citation: the Elephant in the Room?

Attribution is mentioned many times, and is vitally important for motivating people to share data. Yet the concept of citing data in countable ways – Data Citation – isn’t explicitly mentioned once. Nor are altmetrics, for that matter.

This would have been an excellent opportunity – the start of a new year – to encourage authors to actually cite the data they re-use from others, so that those citations can be easily counted and contribute towards research evaluations. But alas, no.

So what now?

So I like some of the message of this paper. But I don’t think it goes far enough, nor does it do a good job of it. Call me egotistical, but I think I could do better and expand upon what I’ve written above.

If any journal editor happens to read this, and would like to commission an ‘invited article’, comment, or proper independent critical review of databases in vertebrate palaeontology / evolutionary biology please contact me. I think I could offer an interesting perspective.

PS I’m not going to write to the journal. I tried that with Nature and it took 6 months from submission for my comment to get published! It’s 2013 – if I’m going to do post-publication peer review – I’ll definitely be blogging it from now on, Rosie Redfield style!

So, a week ago I investigated publisher-produced Version of Record PDFs with pdfinfo and the results were very disappointing. Lots of metadata was missing, and one could not reliably identify most of these PDFs from metadata alone, let alone extract particular fields of interest.

But Rod Page kindly alerted me to the fact that I might be using the wrong tool for this investigation. So, at his suggestion, I’ve tried again to extract metadata from the exact same set of PDFs as last time…

Only this time I’ll be using exiftool version 9.10.

This time I’ve put the full raw metadata output from exiftool on figshare for each and every PDF file, just to really prove the point – reproducible research and all. I’d love to post the corresponding PDFs too, but sadly many of them are not Open Access, which prevents me from uploading them to a public space. **Insert timely comment here about how closed access publications stifle effective research practices…**

Exiftool is really simple to use. You just need to type:

exiftool NameOfPDF.pdf

to get a human-readable, exhaustive output of all possible metadata, and

exiftool -b -XMP NameOfPDF.pdf

to get XML-structured (XMP) metadata. I could only extract the latter from 56 of the 69 PDF files. The data output for those 56 PDFs is available as a separate fileset on figshare here.

Finally, if you want to test a whole bunch of PDF files, I’ve made a simple shell script that loops through all PDFs in your working directory, available here (oops, it’s not data – perhaps I should have put it on github instead?). [I’m sure many readers will be able to create a simple bash loop themselves, but just for those that don’t… see also the sketch below.]
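The script I linked is a simple shell loop; for anyone more comfortable in Python, a rough equivalent that shells out to exiftool for every PDF in the current directory might look like this (purely illustrative, not the exact script linked above):

import glob
import subprocess

# Rough equivalent of the shell loop described above: run exiftool over every PDF
# in the current directory and save the raw metadata output alongside each file.
for pdf in sorted(glob.glob("*.pdf")):
    out = subprocess.run(["exiftool", pdf], capture_output=True, text=True)
    with open(pdf + ".metadata.txt", "w") as fh:
        fh.write(out.stdout)
    print(f"wrote metadata for {pdf}")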

 

I’m assuming that the reason exiftool -b -XMP failed on 13 of those PDFs is that they have no embedded XMP metadata – an empty (zero-byte) file is created for these. This is an assumption, though… I notice that those 13 exactly correspond with the 13 that were produced with iText. I checked the website and I’m pretty sure iText 2.x and up can embed XMP metadata; it’s just a question of whether the publishers have bothered to use this functionality.

So if I’m right, neither Taylor & Francis, BRILL, nor Acta Palaeontologica Polonica embed XMP metadata (at all!) in their PDFs. The alternative explanation is that the XMP metadata is in there but exiftool, for whatever reason, can’t read or parse it from iText-produced PDFs. I find that an unlikely alternative explanation, though, tbh.

Elsevier have superior XMP metadata to everyone else by the looks of it, but Elsevier aside, the metadata is still very poor, so my conclusions from last week’s post still stand, I think.

Most of the others do contain metadata (of some sort) but by and large it’s rather poor. I need to get some other work done on Monday so I’m afraid this is where I’m going to leave this for now. But I hope I’ve made the point.

Further angles to explore

Interestingly, Brian Kelly has taken this in a slightly different direction and looked at the metadata of PDFs in institutional repositories. I hadn’t realised this, but apparently some institutional repositories (IRs) universally add cover pages to most deposits. If this is done without care for the embedded metadata, the original metadata can be wiped and/or replaced with newer (less informative) metadata. Not to mention that cover pages are completely unnecessary – all the information on a cover page is exactly the kind of thing that should be put in embedded metadata! No need to waste time and space by putting that info on the first page. JSTOR does this too (cover pages) and it annoys the hell out of me.

After some excellent chat on Twitter about this IR angle, I’ve discovered that UKOLN, based here on campus at Bath, have also done some interesting research in this area, in particular the FixRep project, which is described in more detail here. The CrossRef Labs pdfmark tool also looks like something of interest for fixing poor-quality PDF metadata. I’ve got it installed/compiled from the source on github but haven’t tried it out yet. It would be interesting to see the difference it makes – a before-and-after comparison of metadata to see what we’re missing… But why should we fix a problem that shouldn’t exist in the first place? Publishers are the point of origin for this. It’s their job to be the first to publish the Version of Record, and they should provide the highest level of metadata possible IMO.

 

Why would publishers add metadata?

Because their customers – libraries, governments, research funders (in the case of Open Access PDFs) – should demand it. A pipe dream perhaps, but that’s my $.02. I would ask for a refund if I downloaded MP3s from iTunes/Amazon MP3 with insufficient embedded metadata. Why not the same principle for electronically published PDFs?

 

PS Apologies for some of the very cryptic filenames in the metadata uploads on figshare. You’ll have to cross-match with this list here or the spreadsheet I uploaded last week to work out which metadata file corresponds to which PDF/Bibliographic Data record/Publisher.

Publisher | Identifier | Journal | Contains embedded XMP metadata? | Filename
American Association for the Advancement of Science | Ezard2011 | Science | yes? | ezard_11_interplay_759293.pdf
American Association for the Advancement of Science | Nagalingum2011 | Science | yes? | nagalingum_11_recent_719133.pdf
American Association for the Advancement of Science | Rowe2011 | Science | yes? | Science-2011-Rowe-955-7.pdf
Blackwell Publishing Ltd | Burks2011 | Cladistics | yes? | burks_11_combined_694888.pdf
Blackwell Publishing Ltd | Janies2011 | Cladistics | yes? | janies_11_supramap_779773.pdf
Blackwell Publishing Ltd | Simmons2011 | Cladistics | yes? | simmons_11_deterministic_779537.pdf
BRILL | Barbosa2011 | Insect Systematics & Evolution | no | barbosa_11_phylogeny_779910.pdf
BRILL | Dellape2011 | Insect Systematics & Evolution | no | dellape_11_phylogenetic_779909.pdf
Cambridge Journals Online | Knoll2010 | Geological Magazine | yes? | knoll_10_primitive_475553.pdf
Cambridge Journals Online | Saucede2007 | Geological Magazine | yes? | thomas_saucegraved_07_phylogeny_506869.pdf
CSIRO | Chamorro2011 | Invertebrate Systematics | yes? | chamorro_11_phylogeny_780467.pdf
CSIRO | Daugeron2011 | Invertebrate Systematics | yes? | daugeron_11_phylogenetic_780466.pdf
CSIRO | Johnson2011 | Invertebrate Systematics | yes? | johnson_11_collaborative_750540.pdf
Elsevier | Lane2011 | Molecular Phylogenetics and Evolution | yes | E3-1-s2.0-S1055790311001448-main.pdf
Elsevier | Cunha2011 | Molecular Phylogenetics and Evolution | yes | E2-1-s2.0-S1055790311001680-main.pdf
Elsevier | Spribille2011 | Molecular Phylogenetics and Evolution | yes | E1-1-s2.0-S1055790311001606-main.pdf
Frontiers In | Horn2011 | Frontiers in Neuroscience | yes? | fnins-05-00088.pdf
Frontiers In | Ogura2011 | Frontiers in Neuroscience | yes? | fnins-05-00091.pdf
Frontiers In | Tsagareli2011 | Frontiers in Neuroscience | yes? | fnins-05-00092.pdf
Hindawi | Diniz2012 | Psyche: A Journal of Entomology | yes? | 79139500.pdf
Hindawi | Restrepo2012 | Psyche: A Journal of Entomology | yes? | 516419.pdf
Hindawi | Savopoulou2012 | Psyche: A Journal of Entomology | yes? | 167420.pdf
Institute of Paleobiology, Polish Academy of Sciences | Amson2011 | Acta Palaeontologica Polonica | no | amson_11_affinities_666987.pdf
Institute of Paleobiology, Polish Academy of Sciences | Edgecombe2011 | Acta Palaeontologica Polonica | no | edgecombe_11_new_666988.pdf
Institute of Paleobiology, Polish Academy of Sciences | Williamson2011 | Acta Palaeontologica Polonica | no | app2E20092E0147.pdf
Magnolia Press | Agiuar2011 | Zootaxa | yes? | zt02846p098.pdf
Magnolia Press | Ebach2011 | Zootaxa | yes? | ebach_11_taxonomy_599972.pdf
Magnolia Press | Nelson2011 | Zootaxa | yes? | nelson_11_resemblance_688762.pdf
National Academy of Sciences | Casanovas2011 | Proceedings of the National Academy of Sciences | yes? | casanovas-vilar_11_updated_644658.pdf
National Academy of Sciences | Goswami2011 | Proceedings of the National Academy of Sciences | yes? | goswami_11_radiation_814757.pdf
National Academy of Sciences | Thorne2011 | Proceedings of the National Academy of Sciences | yes? | thorne_11_resetting_654055.pdf
Nature Publishing Group | Meng2011 | Nature | yes? | meng_11_transitional_644647.pdf
Nature Publishing Group | Rougier2011 | Nature | yes? | rougier_11_highly_720202.pdf
Nature Publishing Group | Venditti2011 | Nature | yes? | venditti_11_multiple_779840.pdf
NRC Research Press | CruzadoCaballero2010 | Canadian Journal of Earth Sciences | yes? | 650000.pdf
NRC Research Press | Druckenmiller2010 | Canadian Journal of Earth Sciences | yes? | 80000000c5.pdf
NRC Research Press | Mazierski2010 | Canadian Journal of Earth Sciences | yes? | mazierski_10_description_577223.pdf
NRC Research Press | Modesto2009 | Canadian Journal of Earth Sciences | yes? | modesto_09_new_577201.pdf
NRC Research Press | Parsons2009 | Canadian Journal of Earth Sciences | yes? | parsons_09_new_575744.pdf
NRC Research Press | Wu2007 | Canadian Journal of Earth Sciences | yes? | wu_07_new_622125.pdf
Pensoft Publishers | Hagedorn2011 | ZooKeys | yes? | hagedorn_11_creative_779747.pdf
Pensoft Publishers | Penev2011 | ZooKeys | yes? | penev_11_interlinking_694886.pdf
Pensoft Publishers | Thessen2011 | ZooKeys | yes? | thessen_11_data_779746.pdf
Public Library of Science | Hess2011 | PLoS ONE | yes? | hess_11_addressing_694222.pdf
Public Library of Science | McDonald2011 | PLoS ONE | yes? | mcdonald_11_subadult_694229.pdf
Public Library of Science | Wicherts2011 | PLoS ONE | yes? | wicherts_11_willingness_779788.pdf
SAGE Publications | deKloet2011 | Journal of Veterinary Diagnostic Investigation | yes? | Invest-2011-deKloet-421-9.pdf
SAGE Publications | Richter2011 | Journal of Veterinary Diagnostic Investigation | yes? | Invest-2011-Richter-430-5.pdf
SAGE Publications | Wassmuth2011 | Journal of Veterinary Diagnostic Investigation | yes? | Invest-2011-Wassmuth-436-53.pdf
Senckenberg Natural History Collections Dresden | Fresneda2011 | Arthropod Systematics & Phylogeny | yes? | fresneda_11_phylogenetic_785869.pdf
Senckenberg Natural History Collections Dresden | Mally2011 | Arthropod Systematics & Phylogeny | yes? | ASP_69_1_Mally_55-71.pdf
Senckenberg Natural History Collections Dresden | Shimizu2011 | Arthropod Systematics & Phylogeny | yes? | ASP_69_2_Shimizu_75-81.pdf
Springer-Verlag | Beermann2011 | Zoomorphology | yes? | 10.1007_s00435-011-0129-9.pdf
Springer-Verlag | Cuezzo2011 | Zoomorphology | yes? | cuezzo_11_ultrastructure_694669.pdf
Springer-Verlag | Vinn2011 | Zoomorphology | yes? | 10.1007_s00435-011-0133-0.pdf
Taylor & Francis | Bianucci2011 | Journal of Vertebrate Paleontology | no | bianucci_11_aegyptocetus_778747.pdf
Taylor & Francis | Makovicky2011 | Journal of Vertebrate Paleontology | no | makovicky_11_new_694826.pdf
Taylor & Francis | Pietri2011 | Journal of Vertebrate Paleontology | no | pietri_11_revision_689491.pdf
Taylor & Francis | Rook2011 | Journal of Vertebrate Paleontology | no | rook_11_phylogeny_694916.pdf
Taylor & Francis | Tsuihiji2011 | Journal of Vertebrate Paleontology | no | tsuihiji_11_cranial_660620.pdf
Taylor & Francis | Yates2011 | Journal of Vertebrate Paleontology | no | yates_11_new_694821.pdf
Taylor & Francis | Gerth2011 | Systematics and Biodiversity | no | gerth_11_wolbachia_779749.pdf
Taylor & Francis | Krebes2011 | Systematics and Biodiversity | no | krebes_11_phylogeography_779700.pdf
Sociedade Brasileira de Ictiologia | Britski2011 | Neotropical Ichthyology | yes? | a02v9n2.pdf
Sociedade Brasileira de Ictiologia | Sarmento2011 | Neotropical Ichthyology | yes? | a03v9n2.pdf
Sociedade Brasileira de Ictiologia | Calegari2011 | Neotropical Ichthyology | yes? | a04v9n2.pdf
Royal Society | Billet2011 | Proceedings of the Royal Society B: Biological Sciences | yes? | billet_11_oldest_687630.pdf
Royal Society | Polly2011 | Proceedings of the Royal Society B: Biological Sciences | yes? | polly_11_history_625430.pdf
Royal Society | Sansom2011 | Proceedings of the Royal Society B: Biological Sciences | yes? | sansom_11_decay_625429.pdf