Show me the data!

A quick blog from Meise, Belgium at the Pro-iBiosphere wrap-up event.

Yesterday I gave a talk about my progress in liberating, and making searchable, OA figures from the academic literature:

 

I’ve had a lot of great feedback and interest in what I’m doing with this.

Cyndy Parr has pointed out that EOL are on Flickr too, and have been marking up photographs of taxa with ‘machine tags’.

I will now start to experiment with how I can incorporate taxonomic & geographical machine tags into my workflow when uploading images to Flickr. As an example I have added binomial tags to two figures from an OA Zootaxa paper on ‘Urothrips’: https://www.flickr.com/photos/121174006@N06/13379028204/in/set-72157642842813323
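
For illustration, here's how those machine tags might be composed programmatically before upload. This is a sketch in Python; `machine_tag` is a hypothetical helper of my own naming, and the Flickr convention assumed is `namespace:predicate=value`, with quoting for multi-word values:

```python
def machine_tag(namespace: str, predicate: str, value: str) -> str:
    """Compose a Flickr-style machine tag, e.g. taxonomy:binomial="Urothrips paradoxus".

    Machine tags take the form namespace:predicate=value; a value containing
    spaces is wrapped in quotes so it is treated as a single tag.
    """
    if " " in value:
        value = f'"{value}"'
    return f"{namespace}:{predicate}={value}"

# Tags one might attach to a figure from the Zootaxa paper on Urothrips:
tags = [
    machine_tag("taxonomy", "binomial", "Urothrips paradoxus"),
    machine_tag("taxonomy", "order", "Thysanoptera"),
]
print(" ".join(tags))
```

The same pattern extends naturally to geographical tags (e.g. a `geo:` namespace) as mentioned above.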

 

see bottom right hand corner for the added ‘machine tags’


Jeremy Miller from Naturalis is also very interested in OA Zootaxa content from the point of view of spiders. He gave a talk on Data Visualization on behalf of his team from the Leiden hackday. Luckily, with no prior ‘special’ mark-up, by searching ‘Araneae’ I could show Jeremy the promise of what I’m doing on Flickr. Many phylogenies containing spider taxa came up in the search, many of which he immediately recognized as being from his own open access publications! With a little bit of work to further mark up the attributes he’s interested in, I might be able to provide something of real use – the ability to search figure images/captions across hundreds of open access journals, from many different publishers, with just ONE search!

The Bouchout Declaration will be launched today at this meeting. I’m happy to say I facilitated the signing of this declaration by Open Knowledge. Many other organisations have signed this declaration and I hope it makes a splash – we need science to be open to do good science!

Finally, I’ve also potentially got a new research collaboration going (more of which later!).
It’s been well worth the trip!

[Update: the conference itself will be in November, 2014 - this is just the first announcement!]

I’m super excited to announce I’m part of the international organizing committee for OpenCon 2014:


You can read the official first press release about this event here:

http://www.righttoresearch.org/act/opencon/announcement

 

Here’s an excerpt from it:

“From Nigeria to Norway, the next generation is beginning to take ownership of the system of scholarly communication which they will inherit,” said Nick Shockey, founding Director of the Right to Research Coalition. “OpenCon 2014 will support and accelerate this rapidly growing movement of students and early career researchers advocating for openness in research literature, education, and data.

The first event of its kind, OpenCon 2014 builds on the success of the Berlin 11 Satellite Conference for Students and Early Stage Researchers, which brought together more than 70 participants from 35 countries to engage on Open Access to scientific and scholarly research. The interest, energy, and passion from the student and researcher participants and the Open Access movement leaders who attended made a clear case for expanding the event in size and duration, and to broaden the scope to related areas of the Openness movement.”

 

Last year, I was also part of the organizing committee for the event that this has grown from – the Berlin 11 Satellite conference:


The Berlin 11 Satellite Conference was really exciting but only a 1-day event before the ‘main’ Berlin 11 event – an assemblage of students and ECRs from literally all over the world (attending with generous full funding support), including representatives from (in no particular order) China, India, Saudi Arabia, Georgia, Tanzania, Tasmania(!), Kenya, Nigeria, Ghana, Uganda, Colombia, FYR Macedonia, Mexico, Brazil, Sweden, Holland, Denmark, Poland, Portugal, Canada, the US, the UK… So don’t worry about where you are in the world – as long as you’re a student or ECR you’ll be eligible to apply for OpenCon 2014 (places are limited though!).

As a reminder, at the event last year we had Jack Andraka and Mike Taylor amongst the guest speakers. It was such a comprehensive success that it’s been expanded into a full 3-day event this year, expanding the scope too, to include Open Data and OER, not just OA (they’re all obviously inter-related problems; better to tackle the integrated set of problems rather than aspects in isolation!).

Applications for OpenCon 2014 will open in August. For more information about the conference and to sign up for updates, visit www.opencon.net

I promise you this – it’s going to be BIG and I’m stoked to be part of an international organizing committee helping to make this happen.

OpenCon 2014 is also looking for additional sponsorship, particularly for Travel Scholarships to ensure global representation at this meeting, so if you have a marketing budget to spend, or are feeling generous please do have a look at the sponsorship opportunities.

I’m proud to announce an interesting public output from my BBSRC-funded postdoc project:
PLUTo: Phyloinformatic Literature Unlocking Tools. Software for making published phyloinformatic data discoverable, open, and reusable

MOAR PHYLOGENY!

Screenshot of some of the PLOS ONE phylogeny figure collection on Flickr


I’ve made openly available my first-pass filter of PLOS ONE phylogeny figures (I’m not in any way claiming this is *all* of them).

This curated & tagged image collection is on Flickr for easy browsing: http://bit.ly/PLOStrees

As well as on Github for version control, open archiving, and collaboration (I have remote collaborators):

https://github.com/rossmounce/P1-phylo-part1

https://github.com/rossmounce/P1-phylo-part2

https://github.com/rossmounce/P1-phylo-part3

https://github.com/rossmounce/P1-phylo-part4

(Github doesn’t like repositories over 1GB, so I’ve had to split up the content between 4 separate repositories)
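
The split itself can be automated. A minimal first-fit sketch of assigning files to repositories under a size cap (the function name and the approach are illustrative assumptions, not how the collection was actually divided):

```python
def partition_by_size(files, cap_bytes=1_000_000_000):
    """Greedily assign (name, size) pairs to buckets, each kept under cap_bytes.

    A first-fit-decreasing sketch of splitting a figure collection across
    several Git repositories to stay under GitHub's ~1 GB soft limit.
    """
    buckets = []  # each bucket: {"total": running byte count, "files": [names]}
    for name, size in sorted(files, key=lambda f: -f[1]):  # largest first
        for bucket in buckets:
            if bucket["total"] + size <= cap_bytes:
                bucket["total"] += size
                bucket["files"].append(name)
                break
        else:
            # no existing bucket has room; start a new repository
            buckets.append({"total": size, "files": [name]})
    return buckets
```

First-fit-decreasing is not optimal bin packing, but it is simple and good enough for a one-off split like this.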

 

Why?

The aim of the PLUTo project is to re-extract & liberate phylogenetic data & associated metadata from the research literature. Sadly, only ~4% of modern published phylogenetic analysis studies make their underlying data available. Another study finds that if you ask the authors for this data, only 16% will be kind enough to reply with the requested data!

This particular data type is a cornerstone of modern evolutionary biology. You’ll find phylogenetic analyses across a whole host of journal subjects – medical, ecological, natural history, palaeontology… There are also many different ways in which this data can be re-used, e.g. supertrees & comparative cladistics. Not to mention simple validation studies &/or analyses which extend upon or map new data onto a phylogeny. It’s really useful data and we should be archiving it for future re-use and re-analysis. To my great delight, this is what I’m being paid to attempt to do for my first postdoc, on a grant I co-wrote – finding & liberating phylogenetic data for everyone!

 

Why PLOS ONE?

 

  •  It’s a BOAI-compliant open access journal that publishes most articles under CC BY, with a few under CC0.
    • This means I can openly re-publish figures online (provided sufficient attribution is given) — no need to worry about DMCA takedown notices or ‘getting sued’! This makes the process of research much easier. Private, non-public, access-restricted repositories for collaboration are a hassle I’d rather do without.
  • It’s a high-volume ‘megajournal’ publishing ~200 articles per day, many of which include phylogenetic analyses.
    • Thus it’s worthwhile establishing a regular daily or weekly method for parsing out phylogenetic tree figures from this journal.
  • Killer feature: as far as I know, PLOS are the only publisher to embed rich metadata inside their figure image files.
    • This makes satisfying the CC BY licence trivially easy — sufficient attribution metadata is already embedded in the file. Just ensure that wherever you’re uploading the file to doesn’t wipe this embedded data, hence why I chose Flickr as my initial upload platform.
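
Since that attribution metadata lives inside the image file itself, recovering it needn't require a full image library. A minimal sketch (assuming the metadata is stored as a standard XMP packet; `extract_xmp` is a hypothetical helper of my own naming), using a plain byte scan:

```python
def extract_xmp(data: bytes):
    """Return the embedded XMP packet from an image's raw bytes, or None.

    An XMP packet is plain XML delimited by <x:xmpmeta ...> ... </x:xmpmeta>,
    so it can be located with a byte scan, no image decoding needed.
    (That PLOS's embedded metadata is XMP is an assumption here.)
    """
    start = data.find(b"<x:xmpmeta")
    if start == -1:
        return None
    end = data.find(b"</x:xmpmeta>", start)
    if end == -1:
        return None
    return data[start:end + len(b"</x:xmpmeta>")].decode("utf-8", "replace")
```

A platform that strips this packet on upload would destroy the embedded attribution, which is exactly why the choice of upload destination matters.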

 

What does this enable or make easier?

 

On its own, this collection doesn’t do much – it’s still at an early stage – but it gives us an important insight into the prevalence of certain types of visual display-style that researchers are using:

‘radial’ phylogenies

https://www.flickr.com/search?user_id=123621741%40N08&sort=relevance&text=radial

Source: Zerillo et al 2013 PLOS ONE. Carbohydrate-Active Enzymes in Pythium and Their Role in Plant Cell Wall and Storage Polysaccharide Degradation


‘geophylogeny’ (phylogeny displayed relative to a map of some sort, 2D or 3D)

https://www.flickr.com/search?user_id=123621741%40N08&sort=relevance&text=geophylogeny

Source: Guo et al 2012 PLOS ONE. Evolution and Biogeography of the Slipper Orchids: Eocene Vicariance of the Conduplicate Genera in the Old and New World Tropics

‘timescaled’ (phylogenies where the branch lengths are proportional to units of time or geological periods)
https://www.flickr.com/search?user_id=123621741%40N08&sort=relevance&text=timescaled

Source: Pol et al 2014 PLOS ONE. A New Notosuchian from the Late Cretaceous of Brazil and the Phylogeny of Advanced Notosuchians


‘splitstrees’

https://www.flickr.com/search?user_id=123621741%40N08&sort=relevance&text=splitstree

Source: McDowell et al 2013 PLOS ONE. The Opportunistic Pathogen Propionibacterium acnes: Insights into Typing, Human Disease, Clonal Diversification and CAMP Factor Evolution


Arguably it also facilitates complex searches for specific types of phylogeny

e.g. analyses using cytochrome b
https://www.flickr.com/search/?w=123621741@N08&q=%22cyt%20b%22%20OR%20%22cytochrome%20b%22
(You could use PLOS’s API to do this, particularly their figure/table caption search field, but you’d get a lot of false positives – this is an expert-curated collection that has filtered out non-phylo figures.)
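
For the curious, such a caption query might be built against the PLOS Search API like this. This is a hedged sketch: the endpoint and the `figure_table_caption` field reflect my understanding of the API, and `plos_caption_query` is a made-up name:

```python
from urllib.parse import urlencode

def plos_caption_query(term: str, rows: int = 50) -> str:
    """Build a PLOS Search API (Solr-backed) URL querying figure/table captions.

    The endpoint and field name are assumptions about the PLOS API;
    check their documentation before relying on either.
    """
    params = {
        "q": f'figure_table_caption:"{term}"',
        "fl": "id,title,figure_table_caption",  # fields to return
        "wt": "json",
        "rows": rows,
    }
    return "http://api.plos.org/search?" + urlencode(params)

print(plos_caption_query("cytochrome b"))
```

Fetching that URL would return every article whose figure or table captions mention the term, phylogeny or not, which is precisely the false-positive problem a curated collection avoids.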

In my initial roadmap, the plan is to do PLOS ONE, the other PLOS journals, then BMC journals, then possibly Zootaxa & Phytotaxa (Magnolia Press). There will be a Github-based website for the project soon, lots still to do…!

 

Want to know more / collaborate / critique ?

Conferences:

I’ve got an accepted lightning talk at iEvoBio in Raleigh, NC later this year about the PLUTo project.

As well as an accepted lightning talk at the Bioinformatics Open Source Conference (BOSC) in Boston, MA.

Otherwise, contact me via twitter @rmounce , the comment section on this blog post, or email ross dot mounce <at> gmail dot com

Hack4ac recap

July 9th, 2013 | Posted by rmounce in BMC | eLife | Hack days | Open Access | Open Data | Open Science | PeerJ | PLoS - (4 Comments)

Last Saturday I went to Hack4Ac – a hackday in London bringing together many sections of the academic community in pursuit of two goals:

  • To demonstrate the value of the CC-BY licence within academia. We are interested in supporting innovations around and on top of the literature.
  • To reach out to academics who are keen to learn or improve their programming skills to better their research. We’re especially interested in academics who have never coded before.


The list of attendees was stellar, cross-disciplinary (inc. Humanities) and international. The venue (Skills Matter) & organisation were also suitably first-class – lots of power leads, spare computer mice, projectors, whiteboards, good wi-fi, separate workspaces for the different self-assembled hack teams, tea, coffee & snacks all throughout the day to keep us going, prizes & promo swag for all participants…

The principal organizers, Jason Hoyt (PeerJ, formerly at Mendeley) and Ian Mulvany (Head of Tech at eLife), thus deserve a BIG thank you for making all this happen. I hear this may also be turned into a fairly regular set of meetups, which will be great for keeping up the momentum of innovation going on right now in academic publishing.

The hack projects themselves…

The overall winner of the day was ScienceGist as voted for by the attendees. All the projects were great in their own way considering we only had from ~10am to 5pm to get them in a presentable state.

ScienceGist

 

This project was initiated by Jure Triglav, building upon his previous experience with Tiris. This new project aims to provide an open platform for post-publication summaries (‘gists’) of research papers, providing shorter, more easily understandable summaries of the content of each paper.

I also led a project under the catchy title of Figures → Data, whereby we tried to provide added value by taking CC-BY bar charts and histograms from the literature and attempting to re-extract the numerical data from those plots with automated efforts using computer vision techniques. On my team for the day I had Peter Murray-Rust, Vincent Adam (of HackYourPhD) and Thomas Branch (Imperial College). This was handy because I know next to nothing about computer vision – I’m Your Typical Biologist™ in that I know how to script in R, perl, bash and various other things, just enough to get by but not nearly enough to attempt something ambitious like this on my own!
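
To give a flavour of the Figures → Data idea: once computer vision has located a bar top and two axis ticks in pixel space, recovering the underlying number is just linear interpolation. A sketch (`calibrate` is a hypothetical helper and the pixel values are invented, not from any real figure):

```python
def calibrate(px, axis_px, axis_values):
    """Map a pixel coordinate along a chart axis to a data value.

    Linear interpolation between two known tick positions; the final,
    easy step of plot data re-extraction, once the hard computer-vision
    work of finding bars and ticks (assumed done here) is over.
    """
    (p0, p1), (v0, v1) = axis_px, axis_values
    return v0 + (px - p0) * (v1 - v0) / (p1 - p0)

# Suppose the y axis runs from 400 px (value 0) up to 100 px (value 30),
# and a detected bar top sits at 250 px:
value = calibrate(250, (400, 100), (0, 30))
```

The detection side (edge-finding, bar segmentation) is where the real difficulty lay for us on the day.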

Forgive me the self-indulgence if I talk about this  Figures → Data project more than I do the others but I thought it would be illuminative to discuss the whole process in detail…

In order to share links between our computers in real-time, and to share initial ideas and approaches, Vincent set up an etherpad here to record our notes. You can see the development of our collaborative note-taking using the timeslider function below (I did a screen record of it for posterity using recordmydesktop):

In this etherpad we document that there are a variety of ways in which to discover bar charts & histograms:

  • figuresearch is one such web-app that searches the PMC OA subset for figure captions & figure images. With this you can find over 7,000 figure captions containing the word ‘histogram’ (you would assume that the corresponding figure would contain at least one histogram for 99% of those figures, although there are exceptions).
  • figshare has nearly 10,000 hits for histogram figures, whilst BMC & PLOS can also be commended for providing the ability to search their literature stack by just figure captions, making the task of figure discovery far more efficient and targeted.

Jason Hoyt was in the room with us for quite a bit of the hack and clearly noted the search features we were looking for – just yesterday he tweeted: “PeerJ now supports figure search & all images free to use CC-BY (inspired by @rmounce at #hack4ac)” [link] – I’m really glad to see our hack goals helped Jason to improve content search for PeerJ to better enable the needs (albeit somewhat niche in this case) of real researchers. It’s this kind of unique confluence of typesetters, publishers, researchers, policymakers and hackers at hands-on events like this that can generate real change in academic publishing.

The downside of our project was that we discovered someone’s done much of this before. ReVision: Automated Classification, Analysis and Redesign of Chart Images [PDF] was an award-winning paper at an ACM conference in 2011. Much of this project would have helped our idea, particularly the figure-classification tech. Yet sadly, as with so much of ‘closed’ science, we couldn’t find any open source code associated with this project. There were comments that this type of non-code-sharing behaviour, blocking re-use and progress, is fairly typical in computer science & ACM conferences (I wouldn’t know, but it was muttered…). If anyone does know of the existence of related open source code for this project, do let me know!

So… we had to start from a fairly low level ourselves: Vincent & Thomas tried MATLAB- and C-based approaches with OpenCV and their code is all up on our project github. Peter tried using the AMI2 toolset, particularly the Canny algorithm, whilst I built up an annotated corpus of 40 CC-BY bar charts & histograms for testing purposes. Results of all three approaches can be seen below in their attempts to simplify this hilarious figure about dolphin cognition from a PLOS paper:

The plastic fish just wasn't as captivating...

“Figure 5. Total time spent looking at different targets.” from Siniscalchi M, Dimatteo S, Pepe AM, Sasso R, Quaranta A (2012) Visual Lateralization in Wild Striped Dolphins (Stenella coeruleoalba) in Response to Stimuli with Different Degrees of Familiarity. PLoS ONE 7(1): e30001. doi:10.1371/journal.pone.0030001 CC-BY

Peter’s results (using AMI2):

 

Thomas’s results (OpenCV & C):

 

Vincent’s results (OpenCV & MATLAB & bilateral filtering)

We might not have won 1st prize but I think our efforts are pretty cool, and we got some laughs from our slides presenting our day’s work at the end (e.g. see below). Importantly, *everything* we did that day is openly available on github to re-use, re-work and improve upon (I’ll ping Thomas & Vincent soon to make sure their code contributions are openly licensed). Proper full-stack open science, basically!

some figures are just awful

 

Other hack projects…

As I thought would happen, I’ve waffled on about our project. If you’d like to know more about the other projects, hopefully someone else will blog about them at greater length (sorry!) – I’ve got my thesis to write, y’know! ;)

You can find more about them all either on the Twitter channel #hack4ac or alternatively on the hack4ac github page. I’ll write a little bit more below, but it’ll be concise, I warn you!

  • Textmining PLOS Author Contributions

This project has a lovely website for itself: http://hack4ac.com/plos-author-contributions/ and so needs no more explanation.

  • Getting more openly-licensed content on wikipedia

This group had problems with the YouTube API I think. Ask @EvoMRI (Daniel Mietchen) if you’re interested…

  • articlEnhancer

Not content with helping out the PLOS author contribs project, Walther Georg also unveiled his own article enhancer project which has a nice webpage about it here: http://waltherg.github.io/articlEnhancer/

  • Qual or Quant methods?

Dan Stowell & co used NLP techniques on full-text accessible CC-BY research papers, to classify all of them in an automated way determining whether they were qualitative or quantitative papers (or a mixture of the two). The last tweeted report of it sounded rather promising: “Upstairs at #hack4ac we’re hacking a system to classify research papers as qual or quant. first results: 96 right of 97. #woo #NLPwhysure” More generally, I believe their idea was to enable a “search by methods” capability, which I think would be highly sought-after if they could do it. Best of luck!
Apologies if I missed any projects. Feel free to post epic-long comments about them below ;)

 

 

 

This post was originally posted over at the LSE Impact blog where I was kindly invited to write on this theme by the Managing Editor. It’s a widely read platform and I hope it inspires some academics to upload more of their work for everyone to read and use.

Recently I tried to explain on twitter in a few tweets how everyone can take easy steps towards open scholarship with their own work. It’s really not that hard and potentially very beneficial for your own career progress – open practices enable people to read & re-use your work, rather than let it gather dust unread and undiscovered in a limited access venue as is traditional. For clarity I’ve rewritten the ethos of those tweets below:

Step 1: before submitting to a journal or peer-review service upload your manuscript to a public preprint server

Step 2: after your research is accepted for publication, deposit all the outputs – full-text, data & code in subject or institutional repositories

The above is the concise form of it, but as with everything in life there is devil in the detail, and much to explain, so I will elaborate upon these steps in this post.

Step 1: Preprints

Uploading a preprint before submission is technically very easy to do – it takes just a few clicks, but the barrier that prevents many from doing this in practice is cultural and psychological. In disciplines like physics it’s completely normal to upload preprints to arXiv.org and their submission to a journal in some cases has more to do with satisfying the requirements of the Research Excellence Framework exercise than any real desire to see it in a journal. Many preprints on arXiv get cited and are valued scientific contributions, even without them ever being published in a journal. That said, even within this community author perceptions differ as to the exact practice of when to upload a preprint in the publication cycle.

Within biology it’s relatively unheard of to upload a preprint before submission, but that’s likely to change this year because of an excellent, well-written article advocating their use in biology and the very many different outlets available for them. My own experience of this has been illuminating – I recently co-authored a paper openly on github and the preprint was made available with a citable DOI via figshare. We’ve received a nice comment, more than 250 views and a citation from another preprint – all before our paper has been ‘published’ in the traditional sense. I hope this illustrates well how open practices really do accelerate progress.

This is not a one-off occurrence either. As with open access papers, freely accessible preprints have a clear citation advantage over traditional subscription access papers:


Outside of the natural sciences the situation is also similar; Martin Fenner notes that in the social sciences (SSRN) and economics (RePEc) preprints are also common either in this guise, or as ‘working papers’ – the name may be different but the pre-submission accessibility is the same. Yet I suspect, like in biology, this practice isn’t yet mainstream in the Arts & Humanities – perhaps just a matter of time before this cultural shift occurs (more on this later on in the post…)?

There is one important caveat to mention with respect to posting preprints – a small minority of conservative, traditional journals will not accept articles that have been posted online prior to submission. You might well want to check SHERPA/RoMEO before you upload your preprint to ensure that your preferred destination journal accepts preprint submissions. There is a growing grass-roots trend of convincing these journals that preprint submissions should be allowed, and some of these efforts have already succeeded.

If even much-loathed publishers like Elsevier allow preprints, unconditionally, I think it goes to show how rather uncontroversial preprints are. Prior to submission it’s your work and you can put it anywhere you wish.

 

Step 2: Postprints

 

Unlike with preprints, the postprint situation is a little trickier. Publishers like to think that they have the exclusive right to publish your peer-reviewed work. The exact terms of these agreements will vary from journal to journal depending on the exact terms of the copyright or licencing agreement you might have signed. Some publishers try to enforce ‘embargoes’ upon postprints, to maintain the artificial scarcity of your work and their monopoly of control over access to it. But rest assured, at some point, often just 12 months after publication, you’ll be ‘allowed’ to upload copies of your work to the public internet (again SHERPA/RoMEO gives excellent information with respect to this).

So, assuming you already have some form of research output(s) to show for your work, you’ll want these to be discoverable, readable and re-usable by others – after all, what’s the point of doing research if no-one knows about it! If you’ve invested a significant amount of time writing a publication, gathering data, or developing software – you want people to be able to read and use this output. All outputs are important, not just publications. If you’ve published a paper in a traditional subscription access journal, then most of the world can’t read it. But, you can make a postprint of that work available, subject to the legal nonsense referred to above.

If it’s allowed, why don’t more people do it?

Similar to the cultural issues discussed with preprints, for some reason, researchers on the whole don’t tend to use institutional repositories (IR) to make their work more widely available. My IR at the University of Bath lists metadata for over 3300 published papers, yet relatively few of those metadata records have a fulltext copy of the item deposited with them for various reasons. Just ~6.9% of records have fulltext deposits, as published back in June 2011.

I think it’s because institutional repositories have an image problem: some are functional but extremely drab. I also hear of researchers full of disdain who say of their IRs (I paraphrase):

“Oh, that thing? Isn’t that just for theses & dissertations – you wouldn’t put proper research there”

All this is set to change though, as researchers are increasingly being mandated to deposit their fulltext outputs in IRs. One particularly noteworthy driver of change in this realm could be the newly-launched Zenodo service. Unlike Academia.edu or ResearchGate, which are for-profit operations and really just websites in many respects, Zenodo is a proper repository – it supports harvesting of content via the OAI-PMH protocol, all metadata about the content is CC0, and it’s a not-for-profit operation. Crucially, it provides a repository for academics less well-served by existing repository systems – not all research institutions have a repository, and independent or retired scholars also need a discoverable place to put their postprints. I think the attractive, modern look, and the altmetrics to demonstrate impact, will also add that missing ‘sex appeal’ to provide the extra incentive to upload.
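
Because OAI-PMH is a plain HTTP GET protocol, harvesting from a repository like Zenodo starts with nothing more exotic than a well-formed URL. A sketch (`oai_listrecords` is a made-up helper, and `https://zenodo.org/oai2d` is my assumption of Zenodo's base endpoint):

```python
from urllib.parse import urlencode

def oai_listrecords(base_url, metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH ListRecords request URL.

    ListRecords with metadataPrefix=oai_dc asks a repository for all its
    records as Dublin Core metadata; an optional set restricts the harvest.
    """
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec
    return base_url + "?" + urlencode(params)

print(oai_listrecords("https://zenodo.org/oai2d"))
```

Any OAI-PMH-aware aggregator does essentially this on a schedule, which is what makes content in a proper repository discoverable in a way that a plain website is not.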


Providing Access to Your Published Research Data Benefits You

A new preprint on PeerJ shows that papers with associated open research data have a citation advantage. Furthermore, other research has shown that willingness to share research data is related to the strength of the evidence and the quality of the results. Traditional repository software was designed around handling metadata records and publications; it doesn’t tend to be great at storing or visualizing research data. But a new development in this arena is the use of CKAN software for research data management. Originally CKAN was developed by the Open Knowledge Foundation to help make open government data more discoverable and usable; the UK, US, and governments around the world now use this technology to make data available. Now research institutions like the University of Lincoln are using it for research data management too, and like Zenodo the interface is clean, modern and provides excellent discoverability.


Repositories are superior for enabling discovery of your work

Even though I use Academia.edu & ResearchGate myself, they’re not perfect solutions. If someone is looking for your papers, or a particular paper that you wrote, these websites do well in making your output discoverable from a simple Google search. But interestingly, for more complex queries, these simple websites don’t provide good discoverability.

An example: I have a fulltext copy of my Nature letter on Academia.edu; it can’t be found from Google Scholar – but the copy in my institutional repository at Bath can. This is the immense value of interoperable and open metadata. Academics would do well to think closely about how this affects the discoverability of their work online.

The technology for searching across repositories for freely accessible postprints isn’t as good as I’d want it to be. But repository search engines like BASE, CORE and Repository Search are improving day by day. Hopefully, one day we’ll have a working system where you can paste-in a DOI and it’ll take you to a freely available postprint copy of the work; Jez Cope has an excellent demo of this here.

Open scholarship is now open to all

So, if there aren’t any suitable fee-free journals in your subject area (1), you find you don’t have funds to publish a gold open access article (2), and you aren’t eligible for an OA fee waiver (3), fear not. With a combination of preprint & postprint postings, you too can make your research freely available online, even if it has the misfortune to be published in a traditional subscription access journal. Upload your work today!

My final repost today (edited) from the Open Knowledge Foundation blog. It’s a little old, originally posted on the 16th of April, 2013 but I think it definitely deserves to be here on my blog as a record of my activities…

So… it’s over.

For the past twelve months I was immensely proud to be one of the first Open Knowledge Foundation Panton Fellows, but that has now come to an end (naturally). In this post I will try and recap my activities and achievements during the fellowship.


The broad goals of the fellowship were to:

  • Promote the concept of open data in all areas of science
  • Explore practical solutions for making data open
  • Facilitate discussions surrounding the role and value of openness
  • Catalyse the open community, and reach out beyond its traditional core

and I’m pleased to say that I think I achieved all four of these goals with varying levels of success.

 

Achievements:

Outreach & Promotion – I went to a lot of conferences, workshops and meetings during my time as a Panton Fellow to help get the message out there. These included:

Conferences

At all of these I made clear my views on open data and open access, and ways in which we could improve scientific communication using these guiding principles. Indeed I was more than just a participant at all of these conferences – I was on stage at some point for all, whether it was arguing for richer PDF metadata, discussing data re-use on a panel or discussing AMI2 and how to liberate open phylogenetic data from PDFs.

One thing I’ve learnt during my fellowship is that academic-to-academic communication alone isn’t enough. In order to change the system effectively, we’ve got to convince other stakeholders too, such as librarians, research funders and policy makers. Hence I’ve been very busy lately attending broader policy-centred events like the Westminster Higher Education Forum on Open Access, the Open Access Royal Society workshop & the Institute of Historical Research Open Access colloquium.

Again, here in the policy space my influence has been international, not just domestic. For example, my trips to Brussels, both for the Narratives as a Communication Tool for Scientists workshop (which may help shape the direction of future FP8 funding) and the ongoing Licences for Europe: Text and Data Mining stakeholder dialogue, have had real impact. My presentation about content mining for the latter has garnered nearly 1000 views on slideshare and the debate as a whole has been featured in widely-read news outlets such as Nature News. Indeed, I’ve seemingly become a spokesperson for certain issues in open science now. Just this year alone I’ve been asked for comments on ‘open’ matters in three different Nature features: on licencing, text mining, and open access from an early career researcher point of view – I don’t see many other UK PhD students being so widely quoted!

Another notable event I was particularly proud of speaking at and contributing to was the Revaluing Science in the Digital Age invite-only workshop, organised jointly by the International Council for Science & the Royal Society at Chicheley Hall, September 2012. The splendour was not just in the location, but also in the attendees – an exciting, influential bunch of people who can actually make things happen. The only downside of such high-level international policy work is the glacial pace of action – I’m told that a final policy paper arising from this meeting and subsequent contributions will likely only be circulated for approval by the General Assembly of ICSU in 2014 at the earliest!

 

[Photo: speaking at the Open Knowledge Festival in Helsinki, September 2012]

The most exciting outreach I did for the fellowship were the ‘general public’ opportunities that I seized to get the message out to people beyond the ‘ivory towers’ of academia. One such event was the Open Knowledge Festival in Helsinki, September 2012 (pictured above). Another was my participation in a radio show broadcast on Voice of Russia UK radio with Timothy Gowers, Bjorn Brembs, and Rita Gardner, explaining the benefits and motivation behind the recent policy shift to open access in the UK. This radio show gave me the confidence & experience I needed for the even bigger opportunity that was to come next – at very short notice I was invited to speak on a live radio debate show on open access for BBC Radio 3, with other panellists including Dame Janet Finch & David Willetts MP! An interesting sidenote is that this opportunity may not have arisen if I hadn’t given my talk about the Open Knowledge Foundation at a relatively small conference, Progressive Palaeontology in Cambridge, earlier that year – it pays to network when given the opportunity!

 

Outputs

The fellowship may be over, but the work has only just begun!

I have gained significant momentum and contacts in many areas thanks to this Panton Fellowship. Workshop and speaking invites continue to roll in, e.g. next week I shall be in Berlin at the Making Data Count workshop, then later on in the month I’ll be speaking at the London Information & Knowledge Exchange monthly meet and the ‘Open Data – Better Society’ meeting (Edinburgh).

Even completely independent of my activism, the new generation of researchers in my field are discovering for themselves the need for Open Data in science. The seeds for change have definitely been sown. Attitudes, policies, positions and ‘defaults’ in academia are changing. For my part I will continue to do my bit to steer this in the right direction: towards intelligent openness in all its forms.

What Next?

I’m going to continue working closely with the Open Knowledge Foundation as and when I can. Indeed, for six months starting this January, before my postdoc starts, I have agreed to be the OKF Community Coordinator, Open Science. Then, once I’ve submitted my thesis (hopefully that’ll go okay), I’ll continue in full-time academic research, funded by a BBSRC grant I co-wrote with Peter Murray-Rust & Matthew Wills – partially out in Helsinki(!) at the Open Knowledge Festival – which has subsequently been approved for funding. This grant proposal, which I’ll blog about further at a later date, comes as a very direct result of the content mining work I’ve been doing with Peter Murray-Rust for this fellowship, using AMI2 tools to liberate open data. Needless to say I’m very excited about this future work… but first things first: I must complete and submit my doctoral thesis!

In the last 2 weeks I’ve given talks in Brussels & Amsterdam.

The first one was given during a European Commission (Brussels) working group meeting on Text & Data Mining. There were perhaps only ~30 people in the room for that.

The second presentation was given just a few days ago at Beyond The PDF 2 (#btpdf2) in Amsterdam.

I uploaded the slides from both of these talks to Slideshare just before or after I gave each talk to help maximize their impact. Since then they’ve had nearly 1000 views according to my Slideshare analytics dashboard.

It’s not just the view count I’m impressed with. The global reach is pretty cool too (see below, created with BatchGeo):

View My Slideshare Impact 08/Mar/2013 to 22/Mar/2013 in a full screen map

Now obviously, these view counts don’t mean that viewers always went through all the slides, and a minority of the view count will be bots crawling the web, but still I’m pretty pleased. Imagine if I hadn’t uploaded my Content Mining presentation to the public web: I would have travelled all the way to Brussels and back again (in the same day!) for the benefit of *just* ~30 people (albeit rather important people!). Instead, over 800 people from all over the world have had the opportunity to view my slides (although, admittedly, mostly from the US & Europe).

The moral of this short story: upload your slides & tweet about them whenever you give a talk!
You may not appreciate just how big your potential audience could be. Something academics sceptical of Open Access should perhaps think about?

Particular thanks should go to @openscience for helping disseminate these slides far and wide. Thanks to @openscience and others, my PDF metadata slidedeck got over 100 views in just the first 60 minutes after its release this Wednesday!

Next step… must work on getting these stats into an ImpactStory widget for the next version of my CV!

Just a quick post.

I happened to see @wisealic tweet about her “new Atira/Pure colleagues” yesterday. I didn’t know what Atira was, but I’d heard of PURE.

I googled it to find out more… and soon found the official Elsevier press release, dated August 15, 2012 (so this isn’t really new news). But combined with recent rumours it does worry me. Elsevier own perhaps a fifth of the academic literature; whatever the true figure, it’s a significant share. Despite the research behind most of those papers being publicly or charitably funded, Elsevier now rent access to this work back to us (the world) for vast sums of money each and every year.

Not to mention the fake journals they published, the arms dealings their parent company (Reed Elsevier) was involved in, their initial support for the RWA (since withdrawn), the megabundling of journals, the non-provision of open bibliographic metadata (even NPG release this!), and the obscene profit margins (and to be fair they’re not the only corporate publisher making a killing by selling freely provided academic work) – there are 1001 reasons why, and this isn’t an exhaustive list of all the evils…

So Elsevier are not a well-loved company in academia at the moment – more than 13,000 people have signed a boycott of them.

There are rumours that Elsevier are in talks to buy Mendeley at the moment. And Atira/PURE, now part of the Elsevier (Umbrella?) corporation, are I think the exclusive(?) providers of the research information ‘management’ systems that the UK will be using for its next Research Excellence Framework (REF, formerly the RAE) exercise in 2014.

So… Elsevier own a significant portion of our papers, they may soon own a significant chunk of the bibliographic metadata stored by academics (Mendeley data) and all the commercial insight and advantage that gives, AND they own the company that is managing the data used to evaluate UK academics – and no doubt more around the world.

I do wonder whether there isn’t a significant conflict of interest when thousands of UK academics have publicly boycotted Elsevier and their academic work is now going to be evaluated by… Elsevier. Academic jobs depend heavily on the results of these evaluations, as I understand it, and heads will roll if the results at an institution are below expectations.

From a purely business perspective many financial analysts would rightly applaud these acquisitions as “good business moves” (good for profits, no doubt). But from an ethical standpoint? Elsevier now seem to have a worrying empire of services built around academia, and a significant amount of data which presumably they can pool together from each of these different services to gain additional insight. They also have a very poor record when it comes to providing open data. Why are we still giving them our data so easily, when they’re only going to rent it back to us at a later date?

To me it’s clear: we’re giving up far too much of our data to this company, and they do not have our best interests at heart – shareholder profits are by definition their primary goal. They have a sizeable monopoly on academic data in all its forms, which they can and do leverage, and I suspect we’re going to be made to pay for this mistake in the future, as we have with hugely inflated journal subscription prices.

Is it just me that’s worried?