Show me the data!

Running TNT in parallel

July 14th, 2012 | Posted by rmounce in Conferences | Phylogenetics | TNT - (6 Comments)

Building upon the instructions given here and here, I thought I’d write up one of the many useful things Pablo Goloboff kindly taught us at the TNT scripting workshop after the Hennig XXXI meeting.

It’s actually not the easiest thing to set up if you’re using Ubuntu… Pablo had to help me do it – I would never have got it up and running on my own.

THE FOLLOWING INSTRUCTIONS ASSUME YOU’RE USING UBUNTU ON A MULTICORE MACHINE

You’ll need to install the pvm package either from the repositories with

sudo apt-get install pvm

or download and compile it from source.

It’s actually better to compile from source because the pvm package in the Ubuntu repositories is out of date – they provide only version 3.4.5, whilst the latest version of pvm, released way back in 2009, is 3.4.6! I guess the packaging team have other priorities…
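If you do decide to build from source, the process is roughly as follows (a rough sketch based on the standard netlib distribution – the tarball URL and build steps here are my assumptions, so check the README that ships with PVM):

wget http://www.netlib.org/pvm3/pvm3.4.6.tgz
tar xzf pvm3.4.6.tgz
cd pvm3
export PVM_ROOT=$(pwd)
make

Note that the repository package installs PVM under /usr/lib/pvm3, which is what the PVM_ROOT setting below assumes; if you build from source, point PVM_ROOT at wherever you unpacked and built it instead.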

Then you’ll need to configure pvm on your machine:

  • edit your bashrc file with nano ~/.bashrc and insert this line:

export PVM_ROOT=/usr/lib/pvm3

Save & close the bashrc file. Now run source ~/.bashrc and then test the path with echo $PVM_ROOT – this should now return

/usr/lib/pvm3

  • in your user home directory (for me this is /home/ross/) create a plaintext file called hostlist (the exact name doesn’t matter but remember it) and write one line within this file:

rossnetbook ep=/usr/bin/

(replace ‘rossnetbook’ with your computer’s hostname – if you’re not sure what this is then nano /etc/hostname will tell you). Save and close this file.

  • now start pvm from your user home directory with pvm hostlist – this tells pvm your hostname and the path. Unfortunately you’ll need to start up pvm this way every time you restart your computer. Perhaps there’s a better way? Let me know if so…
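Once you’re at the pvm> console prompt you can sanity-check the setup: conf lists the hosts currently in the virtual machine (your hostname should appear there), and quit leaves the console while the daemon keeps running in the background (halt, by contrast, shuts the daemon down):

pvm> conf
pvm> quit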

Finally, make sure you’ve copied the 64-bit TNT binary to both /usr/bin/ and your user home directory, and make sure both copies are executable.
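For example, assuming the binary is simply called tnt and is sitting in your current directory (adjust the name to match whatever you downloaded), something like this should do it:

sudo cp tnt /usr/bin/
sudo chmod +x /usr/bin/tnt
cp tnt ~/
chmod +x ~/tnt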

Now you should be ready to go…

If you get an error message like this from TNT:

tnt*>ptnt begin ajob2 2 = mult 5; return ; ptnt wait . ;
Macro language is ON
Macros: 50.5 Kb in use, 51.8 Kb free
libpvm [pid7539] /tmp/pvmd.1000: No such file or directory
libpvm [pid7539] /tmp/pvmd.1000: No such file or directory
libpvm [pid7539] /tmp/pvmd.1000: No such file or directory
libpvm [pid7539]: pvm_config(): Can’t contact local daemon

Can’t enter parallel interface (make sure PVM is running)

then you’ve probably forgotten to start pvm with pvm hostlist.

See the video I uploaded below for a demonstration of the speed-up possible by performing tasks in parallel:

The video shows me performing a simple search on the zilla dataset of Chase et al. (1993) using traditional heuristic settings (60 reps), first in serial, then in parallel (starting after 2:00) as 20 reps x 3 slaves.

100 seconds for search 1, down to just 48 seconds for search 2 (in parallel). YMMV

Neither of these searches found the shortest-length trees, btw!

Commands:
mxram 100; /* increase memory */
p zilla.tnt; /* read in the data */
hold 20000; /* increase the maximum number of trees held */
mult 60; /* perform a traditional search with 60 replications */
le; /* tree lengths */

/* parallel tnt job, called ‘ajob’ using 3 slaves performing ‘mult 20’ on each slave */
ptnt begin ajob 3 = mult 20; return ; ptnt wait . ;

Basically, just insert what you want your slaves to do between the ‘=’ and the ‘return;’ commands.

ptnt get ajob; /* get data back from slaves to master */
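
As an example of a different slave job, each slave could run a ‘new technology’ search instead of a traditional one – an untested sketch, with ‘bjob’ just an arbitrary job name and xmult using whatever new technology settings are currently in effect:

ptnt begin bjob 3 = xmult; return ; ptnt wait . ;
ptnt get bjob; /* get data back from slaves to master */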

This was just the tip of the iceberg. I can’t even begin to write up the rest of the course in this much detail! But I hope this helps…

Many many thanks to Pablo and all the organisers of this workshop AND the conference – it was *much* appreciated.

It’s that time again… time to write my monthly Panton Fellowship update.

The trouble is, as I start writing this it’s 6am (London, UK). I arrived back from the Hennig XXXI meeting (University of California, Riverside) after a long flight yesterday and am supremely jetlagged. I still can’t decide whether this is awesome (I can get more work done by waking up earlier) or terrible, as I can’t keep my eyes open past 9pm at night!

At this conference I shoe-horned some of my Panton Fellowship project work into the latter half of my talk (slides below), as it fitted in with the theme of the submitted abstract on supertrees.

Supertrees are just one of many many different possible (re)uses of the phylogenetic tree data I am trying to liberate from the literature for this project. I tried to stress this during my talk, as a lot of people at Hennig aren’t too keen on supertrees as a method for inferring large phylogenies. In fact, there was a compelling talk with solid data from Dan Janies, given later on in the conference, critiquing supertree methods such as SuperFine and SuperTriplets, which were outperformed in most tests in terms of both speed and optimality (tree length) by supermatrix methods using TNT. That’s fine though – there are so many other interesting hypotheses one can investigate with large samples of real phylogenetic estimates (trees).

e.g.

  • Do model-based phylogenetic analyses perform better than parsimony? [Probably not, judging by the conclusions in this paper] – I’d like to see this hypothesis re-tested more rigorously using tree-to-tree distance comparisons between trees from the different methods. Except we can’t currently do this very easily because there’s a paucity of machine-readable tree data from published papers.
  • Meta-analysis of phylogenetic tree balance and factors that influence balance e.g. (this thesis, and this PLoS ONE article).  Are large trees more imbalanced than small trees? Are vertebrate trees more balanced than invertebrate trees?
  • Fossil taxa in phylogenetic trees – are they more often than not found at the base of the tree? Is this ‘real’ or perhaps apparent ‘stem-ward slippage‘ caused by preservational biases?
  • Similarity and dissimilarity between phylogeny and measures of morphological disparity as studied  by my lab mate Martin Hughes

So, I hope you’ll appreciate this data isn’t just needed for producing large supertrees.

I could go on about the conference – it was excellent as ever, but I’ll save that for a dedicated later post.

Other activities this month included:

  • submitting my quarterly Panton report to the Fellowship Board
  • attending the OKFN Bibliohack session at QMUL’s Mile End campus (13th & 14th June) helping out with the creation of the OKFN Open Access Index, and learning how to use & debug a few issues with PubCrawler (a web crawler for scraping academic publication information, not a beer finder app!), with Peter Murray-Rust
  • discussing Open Access, Open Data and full text XML publishing with the Geological Society of London. The GSL have a working group currently investigating if/how they can transition to greater openness. Kudos to them for looking into this. Many a UK academic society may currently be hiding their heads in the sand, ignoring the fact that, policy-wise, the UK is now committed to Open Access as the future of research publishing. It probably won’t be easy for GSL to make this transition, as their accounts [PDF] show they are rather reliant on subscription-based journals and books for income. It’s hard to see how Open Access article processing charges could immediately replace the £millions of subscription income per year from relatively few books & journals. Careful and perhaps difficult decisions will have to be made at some point to balance the goals of this charitable society, the acceptable level of income, and the choice and amount of expenditure on non-publication-related activities (e.g. ‘outreach events’). Interestingly, I note the American Geophysical Union (AGU) has recently decided to outsource their publications to an external company. Does anyone know ‘who’ yet? I just hope it’s not Elsevier.

Finally, the audio for the talk on the Open Knowledge Foundation and the Panton Fellowships, which I gave in Cambridge recently, has now been uploaded, so I can now present the slides and the actual talk I gave together (below) for the first time! Many thanks to the organisers of the conference for doing all this work to make audio from all the talks available – it’s really cool that a relatively modest, small PhD student conference can produce such an excellent digital archive of what happened. I only wish the ‘bigger’ conferences had the resources & willpower to do this too!

…and if that’s not enough Panton updates for you, you can read Sophie Kershaw’s updates for June too, over on her blog

I’m really pleased this new Open Access paper has just been published.

Image: ZooKeys Special Issue 150 (CC BY 3.0)

Hagedorn, G. et al. Creative Commons licenses and the non-commercial condition: Implications for the re-use of biodiversity information. ZooKeys 150, 127-149 (2011).

Some background…

After parading my Open Data t-shirt (pictured below) around the Society of Vertebrate Paleontology meeting this month, I was invited to give an impromptu pitch in front of the great and good of the Mammal AToL project & MorphoBank people. Having pointed out to MorphoBank a while ago that they should really make explicit the terms and conditions [license] under which they make their (?) data available, I naturally advocated CC-BY 3.0 and CC0 licences. I talked about this very subject and pleaded with them NOT to use the NC clause, referring to Rod Page’s & Peter Murray-Rust’s [1,2] thoughts on the matter.

Data providers vs Data re-users – need they really be in opposition?

The trouble is, a lot of (data-providing) institutions seem hell-bent on ‘protecting commercial interests’, at the expense of research opportunities. So as I understand it, at the moment databases such as these face an awkward problem: either satisfy the restriction requests of data providers OR satisfy the demands for permissive re-use from data re-users [such as myself!], and the needs of both camps are seldom entirely met.

Conclusion

I see this paper as an important step in persuading such restriction-minded institutions of the absolute importance of #OpenData / #PantonPrinciples and how NC clauses can genuinely obstruct and impair real academic research.
I just hope people read it and take note!

[Most of this is just a re-post of my spur-of-the-moment G+ post here.
I’m reposting here so that this might hopefully get picked up by Research Blogging to give this paper the publicity it deserves. Much of the content is widely applicable IMO to most of scholarly communications, not just biodiversity informatics, and indeed the whole ZooKeys special issue (Open Access) is well worth a browse.]

References

[1] http://iphylo.blogspot.com/2010/12/plant-list-nice-data-shame-it-not-open.html
[2] http://blogs.ch.cam.ac.uk/pmr/2010/12/17/why-i-and-you-should-avoid-nc-licences/
[3] Hagedorn, G., Mietchen, D., Morris, R., Agosti, D., Penev, L., Berendsohn, W., & Hobern, D. (2011). Creative Commons licenses and the non-commercial condition: Implications for the re-use of biodiversity information ZooKeys, 150 DOI: 10.3897/zookeys.150.2189

This is a re-post of something I was invited to write to sum up my experiences at OKCon 2011. The original post can be viewed here on the official OKFN Open Science blog. For some reason the Prezi embed code at the bottom didn’t work there, but it does here on my blog.

Many thanks to Jenny Molloy for inviting me to write the post, and Maria Neicu for editing it.

A couple of months ago, I gave a talk at the Open Knowledge Conference 2011 on ‘Open Palaeontology’, based upon 18 months’ experience as a lowly PhD student trying, and mostly failing, to get usable digital data from palaeontological research papers. As you might well have inferred already from that last sentence, it’s been an interesting ride.

The main point of my talk was the sheer stupidity/naivety of the way in which data is supplied (or in some cases, not supplied at all!) with or within research papers. Effective science operates through the accumulation of knowledge and data; all advances are incremental and build upon the work of others – the Panton Principles probably sum it up far better than I could. Any such barriers to the accumulation of knowledge/data therefore impede the progress of science.

Whilst there are numerous barriers to academic research – access to research papers being perhaps the most well-known and well-publicised – the issue that most aggravates me is not access to these papers, but the actual papers themselves: in the context of the 21st century (I’m thinking of the Internet Age here…), they are only barely adequate (at best) for communicating research data, and this is a major problem for the future legacy of our published work… and my research project.

My PhD thesis title is quite broad: ‘The Importance of Fossils in Phylogeny’. Given this title and (wide) scope, I need to look at a lot of papers, in a lot of different journals, and extract data from these articles to re-analyse, to assess the importance of fossils in phylogeny on a meta-scale. There are long-established data formats for the particular type of data I wish to extract. So well established and easy to understand that there’s even a Wikipedia page here describing the most commonly used data format (NEXUS). There exist multiple databases set aside specifically to host this type of data, e.g. TreeBASE and MorphoBank. Yet despite all this standardisation and provisioning for paleomorphological phylogenetic data, far less than 1% of all the data published is actually readily available in a standardised, digital, usable format.

In most cases the data is there; you just have to dig very, very hard to release it from the PDF file it’s usually buried in (and then spend copious and unnecessary amounts of time manually reformatting and validating it). See the picture below for a typical example (and yes, it is sadly printed sideways; this is a common and silly practice that publishers use to inappropriately squeeze data matrices into papers):

I hope you’ll agree with me that this is clearly absurd and hugely inefficient. As I explain in my presentation (slides at the bottom of this post), the data, as originally analysed/used, comes in a much richer, more usable, digital, standardised format. Yet when published it gets stripped of all useful metadata and converted into a flat, inextricable and significantly obfuscated table. Why? It’s my belief that this practice is a lazy, unwanted, vestigial hangover from the days of paper-based (only) publishing, in which this might have been the only way to convey the data with the paper. But in 2011, I can confidently say that the vast majority of researchers read and use the digital versions of research papers – so why not make full and proper use of the digital format to aid scientific communication? I argue not that we should axe paper copies, but that we should make sure digital versions are more than just plain PDF versions of the paper copy, as they can and IMO should be.

With this goal in mind, I set about writing an Open Letter to the rest of my research community to explain why we need to richly digitise our published research data ASAP. Naturally, I wouldn’t get very far just by myself, so I enlisted the support of a variety of academic friends via Facebook, and (inspired by OKFN pads I’d seen) we concocted a draft letter together using an Etherpad. The result of this was a fairly basic Drupal-based website, http://supportpalaeodataarchiving.co.uk/, which we launched and disseminated via mailing lists, Twitter and Academia.edu, as far and wide as we possibly could, *hoping*, just hoping, that our fellow academics would read it, take note and support our cause.

Surprisingly, it worked to an extent, and a lot of big names in Palaeontology signed our Open Letter in support of our cause; then things got even better when a Nature journalist (Ewen Callaway) got interested in our campaign and wrote an article for Nature News about it, which can be found here. A huge thanks must go to everyone who helped out with the campaign; it’s generated truly international support, as can be seen on the map below:
(you might have to zoom out a bit – for some reason it zooms into Africa by default)


View Open Letter Signatures in a larger map

It’s far too soon to know the true impact of the campaign. Journal editorial boards can be very slow to change their editorial policies, especially if it requires a modicum of extra effort on the part of the publisher. Additionally, once editorial policy does change at a journal, it can only apply to articles submitted from then on, and thus articles already in the submission pipeline don’t get affected by any new guidelines. It’s not uncommon for there to be delays of a year between submission and publication in palaeontology, so for this and other reasons I’m not expecting to see visible change until 2012, but I think we might have helped get the ball rolling, if nothing else…
The Paleontological Society journals (Paleobiology and Journal of Paleontology) have recently adopted mandatory data submission to the Dryad repository, and the Journal of Vertebrate Paleontology has also improved their editorial policy with respect to certain types of data, but these are just a few of many many journals that publish palaeontological articles. I’m very much hoping that other journals will follow suit in the next few months and years by taking steps to improve the way in which research data is communicated, for the good of everyone; authors, publishers, funders and readers.

Anyway, here’s the Prezi I used to convey some of that (and more) at OKCon 2011. Huge thanks to the conference organisers for inviting me to give this talk. It was the most professionally run conference I’ve ever been to, by far. Great food, excellent WiFi provisioning, good comms, superb accommodation… I could go on. If the conference is on next year – I’ll be there for sure!