My full comments on the PLOS ONE manuscript submission modelling paper:
On 27 January 2015 at 23:05, Chris Woolston <REDACTED> wrote:
Hello again. I contacted you a while ago for my Nature column on the intersection of science and social media.
Yep I remember.
I’m wondering if I could once more ask for your help. (This is what you get for being a prolific and articulate tweeter.)
Sure why not? Thanks for the compliment :)
The next edition will look at the PLoS report on the optimum strategy for submitting papers.
Salinas S, Munch SB (2015) Where Should I Send It? Optimizing the Submission Decision Process. PLoS ONE 10(1): e0115451. doi: 10.1371/journal.pone.0115451
A worthy choice. It relates to my most recent research too… I have a preprint in which I comprehensively demonstrate that research published in PLOS ONE is substantially more discoverable than research published in paywalled journals – if other researchers can’t discover your paper when searching for relevant terms, they probably won’t cite it…
PLOS ONE is both an open access journal AND a technically excellent content platform, thus it is near-perfectly full-text indexed in Google Scholar. Journals operating a paywall, or with a simpler content platform & content provision (e.g. PDF only), are not well indexed in Google Scholar & thus may suffer in terms of citation.
I saw your tweet regarding “scoops.” If you have a moment, I would appreciate a brief elaboration. Isn’t there some extra value in a scoop?
Some academics have an odd psychological complex around this thing called ‘scooping’. The authors of this paper are clearly strong believers in scooping. I don’t believe in scooping myself – it’s a perverse misunderstanding of good scientific practice. I believe what happens is that someone publishes something interesting: useful data testing a novel hypothesis. Then somewhere else another academic goes “oh no, I’ve been scooped!” without realising that even if they’re testing exactly the same hypothesis, their data & method are probably different in some or many respects – independently generated and thus extremely useful to science as a replication, even if the conclusions from the data are essentially the same.
Papers are often deliberately published testing the same hypothesis on different species, across species, in different countries or habitats, under different conditions – these are not generally labelled ‘already scooped papers’, although under this scheme of thought, perhaps they should be? Particularly in lab or field ecology I find it extremely unlikely that two independent groups could possibly go out and collect data on *exactly* the same hypothesis, species, population, area… They’d bump into one another, surely?
It’s only really with entirely computational theoretical ecology that it might be possible for two independent groups to be working on exactly the same hypothesis, with roughly the same method at the same time. But even here, subtle differences in parameter choice will produce two different experiments & different, independent implementations are useful to validate each other. In short, scooping is a figment of the imagination in my opinion. There should be no shame in being ‘second’ to replicate or experimentally test a hypothesis. All interesting hypotheses should be tested multiple times by independent labs, so REPLICATION IS A GOOD THING.
I suggest the negative psychology around ‘scooping’ in academia has probably arisen in part from the perverse & destructive academic culture of chasing publication in high impact factor journals. Such journals will typically only accept a paper if it is the first to test a particular hypothesis, regardless of the robustness of the approach used – hence the nickname ‘glamour publications’ / glam pubs. Worrying about getting scooped is not healthy for science. We should embrace, publish, and value independent replications.
With relevance to the PLOS ONE paper – it’s a fatal flaw in their model that they assumed ‘scooped’ (replication) papers have negligible value. This is a false assumption. I would like to see the calculations re-run with ‘scooped’ (replication) papers given various parameterizations between 10% & 80% of the value of a completely novel ‘not-scooped’ paper. In such a model I’d expect submitting to journals with efficient, quick submission-to-publication times to be optimal – journals such as PeerJ, F1000Research & PLOS ONE would probably come top. Many academics who initially think they’ve been mildly or partially scooped rework their paper, do perhaps an additional experiment, and then still proceed to publish it. This reality is not reflected in the assumption of “negligible value”.
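To make that suggestion concrete, here is a minimal back-of-the-envelope sketch – emphatically NOT the Salinas & Munch model, and all journal parameters below are hypothetical – showing how the assumed value of a ‘scooped’ paper changes the expected payoff of a fast, soundness-only venue versus a slow, selective one:

```python
# Illustrative sketch only -- not the model from Salinas & Munch (2015).
# All numbers (citations, acceptance probabilities, review times, scoop
# rate) are made up; the point is just to show how the scoop-value
# parameter s feeds into an expected-value comparison.

def expected_value(citations, accept_prob, months_to_publish,
                   scoop_rate_per_month, scoop_value_fraction):
    """Expected citation payoff of one submission, assuming the paper
    can be 'scooped' at a constant monthly rate while in review, and a
    scooped paper retains a fraction s of its original value."""
    p_not_scooped = (1 - scoop_rate_per_month) ** months_to_publish
    value_if_accepted = citations * (
        p_not_scooped + (1 - p_not_scooped) * scoop_value_fraction)
    return accept_prob * value_if_accepted

# Hypothetical journals: (expected citations, acceptance prob, months in review)
fast_journal = (20, 0.7, 3)    # quick, soundness-only venue
glam_journal = (60, 0.1, 12)   # selective, slow, high-impact venue

# Sweep the value of a 'scooped' (replication) paper from 0% to 80%
for s in (0.0, 0.1, 0.4, 0.8):
    fast = expected_value(*fast_journal,
                          scoop_rate_per_month=0.02, scoop_value_fraction=s)
    glam = expected_value(*glam_journal,
                          scoop_rate_per_month=0.02, scoop_value_fraction=s)
    print(f"s={s:.1f}: fast={fast:5.1f}  glam={glam:5.1f}")
```

With these (invented) numbers the fast venue dominates at every value of s, and raising s only widens the gap – which is the intuition above: the more value a ‘scooped’ paper retains, the less a slow, selective venue’s citation premium can compensate for its review delay.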
And don’t scientists generally look for an outlet that will publish their work sooner than later?
Some do. I do. But others chase high impact factor publication & glamour publication – this is silly, and in many cases results in a high-risk suboptimal strategy. I know people who essentially had to quit academia because they chose this high-risk approach (and failed / didn’t get lucky) rather than just publishing their work in decent outlets that do the job appropriately.
I suppose that’s a big part of the decision process: Impact vs. expediency. Did any of the other points in the paper strike your attention?
It’s great news for PLOS ONE. Many ecologists have a strange & irrational distaste for PLOS ONE, particularly in the UK – often it’s partly a reticence around open access, but also many seem to wilfully misunderstand PLOS ONE’s review process: reviewing for scientific soundness and not perceived potential ‘impact’. This paper provides solid evidence that if you want your work to be cited, PLOS ONE is a great place to send your work.
Citations aren’t the be-all and end-all though. It’s dangerous to encourage publication strategies based purely on maximising the number of citations. Such thinking encourages sensationalism & ‘link-bait’ article titles, at a cost to robust science. Being highly cited is NOT the purpose of publishing research. Brilliant research that saves lives, reduces global warming, or has some other concrete real-world impact can have a fairly low absolute number of citations. Likewise, research in a popular field or topic can be highly cited simply because many people are also publishing in that area. Citations don’t necessarily equate to good scholarship or ‘worthiness’.
I would welcome a brief response over email, or perhaps we could schedule a chat on the phone tomorrow. I’m in the US, and I’m generally not available before 3 p.m. your time. Thank you.
I’ll skip the phone chat if that’s okay. I’ve failed to be brief but I’ve bolded bits I think are key.
All the best,