Show me the data!

Some further questions for SVP about their talk abstract embargo

August 24th, 2012 | Posted by rmounce in Conferences | Open Access | Palaeontology - (Comments Off on Some further questions for SVP about their talk abstract embargo)

I just sent this email to Darin Croft (of SVP). I chose to contact him because he recently answered questions about the embargo for EmbargoWatch and it was rather unclear who else I should approach. I did not want to blanket email the whole council.

This is the (entire) email I sent him, from my gmail account:
(I will post his reply as and when I receive it)

Dear Darin,

It’s been noted many times before, by many different researchers – but the SVP meeting abstract embargo just doesn’t make sense to me. I know of no other conference that operates like this; indeed, for most other conferences the abstract booklet (and its open, free availability online) is a big promotional aid in getting people interested in the event in the lead-up to it.

I saw you answered some questions on EmbargoWatch recently, so I thought you might be the correct person to contact for my queries on the same subject:

I have blogged my own displeasure with the embargo policy here:

I would like to ask:

1.) What would happen if a researcher (and SVP member) deliberately broke the embargo and blogged/tweeted/published research that was the basis of their own submitted talk abstract? (I’m surprised this hasn’t happened already, tbh, given how early the abstract deadline is – some e-journals have very quick turnaround times…)

2.) What would happen if a researcher (and SVP member) broke the embargo and blogged or tweeted some or all of the content of another researcher’s talk abstract?

3.) If a blogger or journalist *did* write an article or two on the basis of the meeting abstract booklet – do you seriously think that could harm the chances of VP’ers getting published in one of the glamour mags?

I look forward to hearing from you, and will publish your response in full context with this email on my blog



Ross Mounce
PhD Student & Panton Fellow
Fossils, Phylogeny and Macroevolution Research Group
University of Bath, 4 South Building, Lab 1.07

Sometimes you just have to laugh…

The year is 2012, we have the internet, we have blogs, and a huge variety of other tools to enable free, efficient and rapid communication of information and yet the Society of Vertebrate Paleontology annual meeting rules still insist that all information within this year’s abstract booklet remain a big secret until the day of the event.

Many others have justly written to complain about this before.

Here’s the 2012 version I just received in my inbox today:

SVP Embargo Policy Regarding Content in the Program and Abstract Book

Unless specified otherwise, coverage of abstracts presented orally at the Annual Meeting is strictly prohibited until the start time of the presentation, and coverage of poster presentations is prohibited until the relevant poster session opens for viewing. As defined here, “coverage” includes all types of electronic and print media; this includes blogging, tweeting and other intent to communicate or disseminate results or discussion presented at the SVP Annual Meeting. Content that may be pre-published online in advance of print publication is also subject to the SVP embargo policy.

So I think I can tell you I’m giving a talk there in the ‘Phylogenetic and Comparative Paleobiology — New Approaches to the Study of Vertebrate Macroevolution’ symposium.

But can I tell you what the title of my talk is, or the abstract I submitted (a rather long time ago, which is another bugbear I have with this particular conference)? Well, given the quote above, probably not!

And therein is part of the ridiculousness of the embargo. By submitting a (subsequently accepted) talk & abstract to this conference – I’m banned from communicating about my own research on that subject until I give the talk. Not even a tweet about it.

It also seems to me that they’re preventing their own members from effectively promoting the event with this policy. Wouldn’t it be great if all speakers could blog and tweet: “Hey, I’m giving a talk on new dinosaur XXXX and its unusual anatomy (further details of which are in my abstract here) at a meeting in Raleigh, NC. Come along, tickets still available here.” Isn’t that 100 times better than “Hey, I’m giving a talk at this conference – I can’t tell you what the title is or the subject, sorry”?

This policy strikes me as a massive and unjustified own goal. I appreciate some of the science glamour mags don’t take kindly to press reportage of science before it is published in their glossy pages, BUT I think we’ve got to remember that science talks & posters are NOT papers, and they are not, and should not be, treated as such. The abstracts for SVP are only minimally peer-reviewed before acceptance, and the talk content itself is completely unreviewed. Therefore, if a journalist/blogger/tweeter did report on the abstract booklet (and btw, it would take tremendous journalistic spin to make good, interesting copy from most talk abstracts I’ve ever seen – they’re rather short!), they’d be reporting non-peer-reviewed discussion that may or may not be related to unspecified future peer-reviewed publications. So I don’t buy [what I presume is the justification for all this?] the argument that reportage of talk abstracts jeopardises the publication of peer-reviewed papers. The two may be related, but are also very distinct from each other.

I think it’s only a matter of time until this policy changes. SVP have been doing reasonably well with respect to openness recently. They’ve reduced their hybrid Open Access fees, and instituted new editorial policy encouraging data archiving so that data published in their journal is more transparent & re-usable (=better science). But it seems there are still improvements to be made. Will there be an abstract embargo in 2013, I wonder? I for one hope not.

Running TNT in parallel

July 14th, 2012 | Posted by rmounce in Conferences | Phylogenetics | TNT - (6 Comments)

Building upon the instructions given here and here I thought I’d write up one of the many useful things Pablo Goloboff kindly taught us at the TNT scripting workshop after the Hennig XXXI meeting.

It’s actually not the easiest thing to set up if you’re using Ubuntu… Pablo had to help me do it – I would never have got it up and running on my own.


You’ll need to install the pvm package either from the repositories with

sudo apt-get install pvm

or download and compile from source

It’s actually better to compile from source because the pvm package in the Ubuntu repositories is out of date – they provide only version 3.4.5, whilst the latest version of pvm, released way back in 2009, is 3.4.6! I guess the packaging team have other priorities…
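For what it’s worth, a compile-from-source session might look something like the sketch below. Treat it as a sketch only: the netlib download URL and the build layout are my assumptions, so check the PVM homepage for the current source location before relying on it.

```shell
# Sketch only: the URL and directory layout are assumptions,
# check the PVM homepage for the current source distribution.
wget http://www.netlib.org/pvm3/pvm3.4.6.tgz
tar xzf pvm3.4.6.tgz
cd pvm3
export PVM_ROOT=$PWD   # the PVM build system requires PVM_ROOT to be set
make                   # builds the pvmd3 daemon, libpvm and the pvm console
```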

Then you’ll need to configure pvm on your machine:

  • edit your bashrc file  nano ~/.bashrc  and insert this line:

export PVM_ROOT=/usr/lib/pvm3

save & close the bashrc file. Now source ~/.bashrc and then test the path with echo $PVM_ROOT – this should now return /usr/lib/pvm3


  • in your user home directory (for me this is /home/ross/ ) create a plaintext file called hostlist (the exact name doesn’t matter but remember it) and write one line within this file:

rossnetbook ep=/usr/bin/

(replace ‘rossnetbook’ with your computer hostname – if you’re not sure what this is, cat /etc/hostname will tell you) save and close this file.

  • now start pvm from your user home directory with pvm hostlist – this tells pvm your hostname and the path. Unfortunately you’ll need to start up pvm this way every time you restart your computer. Perhaps there’s a better way? Let me know if so…
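One possible way round the restart annoyance (untested on my part, so very much a sketch) would be a cron @reboot entry that launches the pvmd startup script with your hostlist file – the paths below assume a default pvm3 install in /usr/lib/pvm3 and a hostlist file in your home directory:

```shell
# crontab entry (add it with 'crontab -e'); pvmd is the daemon startup
# script shipped with pvm3 - paths here are assumptions, adjust to taste.
@reboot /usr/lib/pvm3/lib/pvmd $HOME/hostlist
```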

Finally, make sure you’ve copied the 64-bit TNT binary to both /usr/bin/ & to your user home directory and make sure that they’re executable.
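In shell terms that last step is just the following (assuming your 64-bit TNT binary is named tnt and sits in the current directory – adjust the filename to whatever your download unpacked to):

```shell
# Copy the TNT binary to both locations and mark both copies executable.
sudo cp tnt /usr/bin/tnt
sudo chmod +x /usr/bin/tnt
cp tnt ~/tnt
chmod +x ~/tnt
```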

Now you should be ready to go…

If you get an error message like this from TNT:

tnt*>ptnt begin ajob2 2 = mult 5; return ; ptnt wait . ;
Macro language is ON
Macros: 50.5 Kb in use, 51.8 Kb free
libpvm [pid7539] /tmp/pvmd.1000: No such file or directory
libpvm [pid7539] /tmp/pvmd.1000: No such file or directory
libpvm [pid7539] /tmp/pvmd.1000: No such file or directory
libpvm [pid7539]: pvm_config(): Can’t contact local daemon

Can’t enter parallel interface (make sure PVM is running)

you’ve probably forgotten to start pvm with pvm hostlist
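A quick way to check whether the daemon is actually up before launching TNT – this is a sketch: the /tmp/pvmd.<uid> socket-file path is inferred from the /tmp/pvmd.1000 lines in the error output above (1000 being the numeric user id):

```shell
# If the pvm daemon (pvmd3) is running there should be a matching process,
# and a /tmp/pvmd.<uid> file that TNT's libpvm uses to contact it.
if pgrep pvmd3 >/dev/null 2>&1; then
    echo "pvm daemon running (socket file: /tmp/pvmd.$(id -u))"
else
    echo "pvm daemon NOT running - start it with: pvm hostlist"
fi
```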

See the video I uploaded below for a demonstration of the speed-up possible by performing tasks in parallel:

The video shows me performing a simple search on the zilla dataset of Chase et al. (1993) using traditional heuristic settings (60 reps), performed first in serial, then in parallel (starting after 2:00) with 20 reps × 3 slaves.

100 seconds for search 1, down to just 48 seconds for search 2 (in parallel). YMMV

Neither of these searches found the shortest length trees btw!

mxram 100; /* increase memory */
p zilla.tnt; /* read in the data */
hold 20000; /* increase the maximum number of trees held */
mult 60; /* perform a traditional search with 60 replications */
le; /* tree lengths */

/* parallel tnt job, called ‘ajob’ using 3 slaves performing ‘mult 20’ on each slave */
ptnt begin ajob 3 = mult 20; return ; ptnt wait . ;

Basically, just insert what you want your slaves to do between the ‘=’ and the ‘return;’ commands.

ptnt get ajob; /* get data back from slaves to master */

This was just the tip of the iceberg of the course. I can’t even begin to write up the rest of the course in this much detail! But I hope this helps…

many many thanks Pablo, and all the organisers of this workshop AND the conference – it was *much* appreciated

It’s that time again… time to write my monthly Panton Fellowship update.

The trouble is, as I start writing this it’s 6am (London, UK). I arrived back from the Hennig XXXI meeting (University of California Riverside) after a long flight yesterday and am supremely jetlagged. I still can’t decide whether this is awesome (I can get more work done, by waking up earlier), or terrible as I can’t keep my eyes open past 9pm at night!

At this conference I shoe-horned some of my Panton Fellowship project work into the latter half of my talk (slides below), as it fitted in with the theme of the submitted abstract on supertrees.

Supertrees are just one of many, many different possible (re)uses of the phylogenetic tree data I am trying to liberate from the literature for this project. I tried to stress this during my talk, as a lot of people at Hennig aren’t too keen on supertrees as a method for inferring large phylogenies. In fact, there was a compelling talk with solid data from Dan Janies given later in the conference, critiquing supertree methods such as SuperFine and SuperTriplets, which were outperformed in most tests in terms of both speed and optimality (tree length) by supermatrix methods using TNT. That’s fine though – there are so many other interesting hypotheses one can investigate with large samples of real phylogenetic estimates (trees).


  • Do model-based phylogenetic analyses perform better than parsimony? [Probably not, judging by the conclusions in this paper]  –  I’d like to see this hypothesis re-tested more rigorously using tree-to-tree distance comparisons between the different method trees. Except we can’t currently do this very easily because there’s a paucity of machine-readable tree data from published papers
  • Meta-analysis of phylogenetic tree balance and factors that influence balance e.g. (this thesis, and this PLoS ONE article).  Are large trees more imbalanced than small trees? Are vertebrate trees more balanced than invertebrate trees?
  • Fossil taxa in phylogenetic trees – are they more often than not found at the base of the tree? Is this ‘real’ or perhaps apparent ‘stem-ward slippage‘ caused by preservational biases?
  • Similarity and dissimilarity between phylogeny and measures of morphological disparity, as studied by my lab mate Martin Hughes

So, I hope you’ll appreciate this data isn’t just needed for producing large supertrees.

I could go on about the conference – it was excellent as ever, but I’ll save that for a dedicated later post.

Other activities this month included:

  • submitting my quarterly Panton report to the Fellowship Board
  • attending the OKFN Bibliohack session at QMUL’s Mile End campus (13th & 14th June) helping out with the creation of the OKFN Open Access Index, and learning how to use & debug a few issues with PubCrawler (a web crawler for scraping academic publication information, not a beer finder app!), with Peter Murray-Rust
  • discussing Open Access, Open Data and full text XML publishing with the Geological Society of London. The GSL have a working group currently investigating if/how they can transition to greater openness. Kudos to them for looking into this. Many a UK academic society may currently be hiding its head in the sand, ignoring that the UK is now committed, policy-wise, to Open Access as the future of research publishing. It probably won’t be easy for GSL to make this transition, as their accounts [PDF] show they are rather reliant on subscription-based journals and books for income. It’s hard to see how Open Access article processing charges could immediately replace the £millions of subscription income per year from relatively few books & journals. Careful and perhaps difficult decisions will have to be made at some point to balance the goals of this charitable society, the acceptable level of income, and the choice and amount of expenditure on non-publication-related activities (e.g. ‘outreach events’). Interestingly, I note the American Geophysical Union (AGU) has recently decided to outsource their publications to an external company. Does anyone know who yet? I just hope it’s not Elsevier.

Finally, the audio for the talk on the Open Knowledge Foundation and the Panton Fellowships I gave in Cambridge recently has now been uploaded, so I can now present the slides and the actual talk I gave together (below) for the first time! Many thanks to the organisers of the conference for doing all this work to make audio from all the talks available – it’s really cool that a relatively modest, small PhD student conference can produce such an excellent digital archive of what happened. I only wish the ‘bigger’ conferences had the resources & willpower to do this too!

…and if that’s not enough Panton updates for you, you can read Sophie Kershaw’s updates for June too, over on her blog