Show me the data!

What is Journal Visibility?

August 28th, 2015 | Posted by rmounce in Publications

I’ve just read a paper published in Systematic Biology called ‘A Falsification of the Citation Impediment in the Taxonomic Literature’.

Having read the full paper many times, including the 64-page PDF supplementary materials file, I’m amazed the paper was published in its current form.

Early on, in the abstract no less, the authors introduce a parameter called ‘journal visibility’. Apparently they ‘correct’ the number of citations for it.

We compared the citation numbers of 306 taxonomic and 2291 non-taxonomic research articles (2009–2012) on mosses, orchids, ciliates, ants, and snakes, using Web of Science (WoS) and correcting for journal visibility. For three of the five taxa, significant differences were absent in citation numbers between taxonomic and non-taxonomic papers.

I count over twenty further instances of the term ‘visibility’ or ‘visible’ in this paper. It is clearly an important part of the work and its calculations. But what is it and how did they correct for it? All parameters in reputable scientific papers should be clearly defined, as should any numerical ‘correction’ operations performed. Yet in this paper I honestly can’t find any explicit definition of ‘journal visibility’. As Brian O’Meara points out, they define highly visible journals as “those included in WoS and with a good standing”. ‘Good standing’ is not further defined or scored. No definition is given for what a lowly visible or middlingly visible journal is. All journals indexed in Web of Science are assigned an Impact Factor. Thus ‘included in WoS’ and ‘has an Impact Factor’ are two ways of saying the same thing.

For the sake of clarity I will now quote and number all other passages in the paper, aside from the abstract, that mention ‘visibility’ or ‘visible’ (I have highlighted each instance in red):

1 & 2 & 3

In more detail, we address five questions: Does publishing taxonomy harm a journal’s citation performance? Is it within the possibilities of journal editors to influence taxonomy’s visibility? If more high-visibility journals opened their doors to taxonomic publications, would taxonomy’s productivity be sufficient for an increase in the number of taxonomic papers in these journals? Can taxonomy be published by taxonomists only or by a larger community? And finally, would the community use the chance to publish more taxonomic papers in highly visible journals?


Just 14 of the 47 journals published both taxonomic and non-taxonomic papers on the focal taxa on a yearly basis in the years 2009–2012 (Table 1). The analyzed taxonomic publications in these 14 journals might have experienced lower visibility than publications in the other 33 journals. This is due to the fact that the average IF 2012 of the 14 journals with both taxonomic and non-taxonomic publications was significantly lower ( 1.16±0.51 standard deviation [SD]) than the average IF of the other 33 journals ( 2.66±1.60 ; Student’s t -test, P<0.001 ).


Because of the correction for journal visibility, we consider the results for the 14 journals to be more representative of the citation performance of taxonomic versus non-taxonomic per se than the results for all journals.

6 & 7 & 8


For strengthening the impact and prospects of taxonomy, equal opportunity is needed for taxonomists and non-taxonomists. In practice, this means that taxonomists should be able to publish in highly visible journals (those included in WoS and with a good standing). Editors of highly visible periodicals that include taxonomy will contribute actively to reducing the taxonomic impediment and, considering our analyses, might on top of this do the best for their journals.

9 & 10

The IF 2012 of these 19 journals that (in principle) publish taxonomy ( 2.61±1.64 ) does, on average, not differ significantly from that of the 14 journals that do not publish taxonomy at all ( 2.73±1.61 ; Student’s t -test, P=0.84 ) meaning that equal visibility for taxonomists and non-taxonomists might, in fact, not be out of reach. In essence, for many editors of highly visible periodicals, it might not so much be a question of changing the scope of their journals but of increasing the frequency of taxonomic publications and thus simply of communicating the willingness to publish taxonomy to the community

11 & 12 & 13


It is not enough, however, for editors of highly visible journals to actively invite taxonomic contributions. A crucial question about whether increasing taxonomy’s visibility will work is the capacity of taxonomy to follow the invitation. One way to approach this issue is looking at the growth rate of taxonomy


To our knowledge, a comprehensive taxonomic literature database is available just for animals, Zoological Record (ZR). For 2012, the latest year considered here, ZR lists 2.1 times more publications on animal taxonomy than WoS (Fig. 2b, c). This indicates that already in the short term, there is sufficient taxonomic publication output for editors of highly visible journals to indeed increase their share in taxonomy.


On the whole, the capacity for increased publication of taxonomy in highly visible journals seems to be there. Accepting that the potential exists, there is still a question of whether taxonomy’s flexibility will be sufficient for a change in publication culture to be realized.

16 & 17 & 18 & 19


… This suggests that taxonomists indeed would use also other chances of publishing in highly visible journals, should the opportunity arise. The resulting shift from aiming at low visibility to targeting highly visible journals will be very important for taxonomists in working toward both an improved image (Carbayo and Marques 2011) and an improved measure of their scientific impact (Agnarsson and Kuntner 2007).

20 & 21 & 22

Editors of highly visible journals in biology could help (i) increase the visibility of taxonomic publications by encouraging taxonomists to publish in their journals (thereby generally not harming but possibly boosting their journals) and (ii) increase total taxonomic output by making it attractive for scientists working in species delimitation (with their primary focus different from taxonomy) to publish the taxonomic consequences of their research.

The task of taxonomic authors, in turn, will be to follow the invitation and to submit indeed their best papers to the best-visible journals available for submission—just as authors of non-taxonomic papers do.

My inferences on visibility

For independent, unbiased confirmation, I looked up the definition of ‘visibility’ online and found:


visibility (countable and uncountable, plural visibilities)

  1. (uncountable) The condition of being visible.
  2. (countable) The degree to which things may be seen.


By the above definition, which is not unreasonable, I would have thought that open access journals would have the highest ‘journal visibility’, as everyone with an internet connection is able to see articles in them without having to log in or pay money to view them.

Popular subscription access journals like Nature arguably have middling visibility, as many scientists have access to them (although not that many actually read all the articles in them; I certainly don’t). Finally, many subscription access journals, e.g. Zootaxa, are known to be less widely subscribed to by both individuals and institutions. (I would love to have data to demonstrate this more objectively; it is certainly true of UK Higher Education Institutions that significantly more subscribe to Nature than to Zootaxa.)

I get the feeling that the authors of this paper did not score ‘visibility’ in this manner.

Many of the mentions of ‘visibility’ appear near discussion of Impact Factor (IF). Perhaps the authors mean to suggest that visibility and Impact Factor are one and the same thing, or are highly correlated? No evidence or citation is given to support this idea. I find this conflation of ‘visibility’ and Impact Factor to be simply wrong and dangerously misleading. Why?

Take the visibility of Elsevier journals, for instance. They range in Impact Factor from 0 (many journals, e.g. Arab Journal of Gastroenterology), to 2 (e.g. Academic Pediatrics), up to 45 (The Lancet). Yet I’d argue the visibility of most Elsevier subscription journals is the same, because institutional libraries tend to (be practically forced to) buy Elsevier journals as a bundle – the euphemistically-titled ‘Freedom Collection’. With the privilege of an institutional affiliation, you typically either have access to all the Elsevier journals, including the cruddy ones, or you have access to none of them (in one ARL survey from 2012, 92% of surveyed libraries subscribed to the Elsevier bundle). Unfortunately very few academic libraries opt to subscribe to just a few select Elsevier subscription-only journals rather than the bundle; MIT is one of the rare exceptions. Thus, whether an individual subscription access Elsevier journal has an Impact Factor of 0, 2, 5, or 10, the global visibility of its articles is relatively similar to that of other Elsevier journals. The only exceptions are the very most popular journals like The Lancet, which might have an appreciable number of individual subscribers, and of institutions that subscribe to the journal without subscribing to the rest of Elsevier’s bundle.

Journals aren’t a good unit of measure anyway – citations, views, downloads and ‘quality’ (broadly-defined) can vary greatly even within the same journal. Articles are a more appropriate unit of measure and we have abundant article-level metrics (ALMs) these days. Let’s not lose sight of that fact.

Surely this article needs correction at the very least? This is more than just a minor linguistic quibble. If the authors mean to say Impact Factor every time they say ‘visible’ or ‘visibility’, why don’t they just do so? Perhaps it is because the Impact Factor is so widely and rightly derided, not to mention statistically illiterate (the distribution of journal article citations is well known to be skewed, so to measure central tendency you should take the median, not the mean – yet the Impact Factor uses the mean in its calculation. Oops!), that they knew it wouldn’t be meaningful and so masked it by using the weasel-word ‘visibility’ instead?
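To illustrate the statistical point with made-up numbers (these citation counts are hypothetical, not from the paper): in a right-skewed distribution, one highly cited article can drag the mean far above what a typical article receives, while the median stays put. A minimal sketch in Python:

```python
from statistics import mean, median

# Hypothetical citation counts for 15 articles in one journal.
# Citation distributions are typically right-skewed: most articles
# receive a few citations while one or two receive many.
citations = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 12, 95]

print(mean(citations))    # ~9.53 – inflated by the single highly cited paper
print(median(citations))  # 3 – a better summary of the 'typical' article
```

A mean-based metric like the Impact Factor rewards the journal for that one outlier article, which says little about how often its typical article is cited.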

This article seems to be asking: is it within the possibilities of journal editors to influence taxonomy’s ‘visibility’ – that is, its Impact Factor?

  • The study does define visibility: “highly visible journals (those included in WoS and with a good standing)”. “Good standing” clearly means impact factor in context, but impact factor is not the only criterion: being in Web of Science is also part of it. I agree that I personally would have welcomed a discussion of a numerical cutoff for what impact factor counts as “good”, but it’s a long way from that to accusing the authors of using a “weasel-word”. They clearly relied on impact factors a lot, as mentioned in the keywords, table 1, second paragraph, etc. You may disagree with the use of impact factor, but the authors are not unethically trying to weasel out of using it: it seems to me they’re just using impact factor plus presence in a database.

    It’s worth looking at the first two lines of their conclusion, too:
    “Criticisms of the use of bibliometric tools such as the IF in decisions about who gets funded and who gets academic jobs are justified (Benítez 2014; Erikson and Erlandson 2014; Pyke 2014). However, these tools are currently used widely and, as long as this is the case, taxonomy would benefit from a positive bibliometric performance.”

    For what it’s worth, they (and I) agree with you that impact factors aren’t ideal to use. But their article wasn’t a defense of impact factors. It was instead taking on a real-world issue: people say taxonomy isn’t cited as much as non-taxonomy, which 1) makes it harder for people who do good taxonomy to get jobs (because the people hiring them might think their work has less impact) and 2) might make journals reluctant to publish taxonomy lest their impact factor go down. One approach to these issues is to say, “well, impact factor shouldn’t be used, so let’s work on that problem” — and there are justifications for this view. This paper instead takes the world as it is and effectively asks, ‘given the current views about impact [which the authors clearly don’t support], are these criticisms actually true for taxonomy?’

    Given that’s their goal, they could have done a few things. One is just take a bunch of taxonomy papers and non-taxonomy papers for the same groups of organisms and compare individual article metrics. I’d imagine this would show taxonomy gets many fewer citations (or tweets, or blog posts, or downloads, etc.) than non-taxonomic papers: Science, Nature, and PLoS Biology don’t publish a lot of taxonomy (other than of humans or dinos) and papers in those journals are visible in all senses of the word, while taxonomic work might be published in journals with a smaller readership. A different approach would be to look in journals that publish taxonomy and non-taxonomic papers and basically do a twin study: compare the impact (however measured) of a taxonomic and non-taxonomic paper in the same issue, for many such pairs. This is probably what I would have done (but I’m an amateur in this particular field, so I could be missing a reason this is bad). However, this may not have met their goal of showing journal editors that publishing taxonomy may “hurt” their journal’s impact factor (again, not trying to change the editors’ views on what’s important, just trying to take those as a given and say that taxonomy shouldn’t be punished as a result). Instead, they did something more like the first approach, but controlling for the fact that a paper in say, Science, is more likely to be read and cited than a paper in Psyche, the journal of the Cambridge Entomological Club (where I was the secretary for a bit), even though the latter is open (at Hindawi). For this correction, they need a journal-level metric, not an article-level metric, since they want to control for a journal effect; they chose impact factor and presence in a database.

    • rmounce

      Really lovely comment, Brian. Well crafted. You’re right, I somewhat overlooked the brief parenthetical definition they gave.

      But even after considering this definition more closely: “highly visible journals (those included in WoS and with a good standing)” my point still stands.

      1.) I disagree that this is visibility. ‘Visibility’ is the wrong word for this definition. It would be more accurate to say: Thomson Reuters thinks this is an interesting journal, and the journal is at least two years old.

      2.) As I said, in this paper ‘visibility’ essentially seems to be implied to be exactly identical to Impact Factor. I note that being indexed in Web of Science (Thomson Reuters’ citation database) is completely non-independent of having an Impact Factor (Thomson Reuters assigns Impact Factors based upon data collected from inclusion in Web of Science). Thus having an Impact Factor and being indexed in Web of Science are one and the same thing.

      3.) If they were testing citations to journals not included in Web of Science versus journals that are included in Web of Science this would be an interesting study (and could have been done if they used say Scopus or Google Scholar). But that is not what the authors did. All 47 of the journals they included in their study are in Web of Science, and they obtained their citation data from Web of Science. All 47 journals examined thus have an Impact Factor. One can only conclude that by ‘high visibility’ the authors mean ‘high Impact Factor’ and that by ‘low visibility’ they mean ‘low Impact Factor’.

      I still think it is disingenuous, and obscures clarity, to refer to ‘visibility’ throughout this paper. Whether ‘weasel word’ is the right term for this, I don’t know. But I do still think the paper needs correcting.