Two articles in the 14 July issue of Nature make interesting reading for those concerned about many universities’ and education ministries’ enthusiasm for bibliometrics as proxies for research excellence (E-P, January 2015) and their place in the academic equivalent of the Guide Michelin’s star system. Such institutions have become increasingly obsessed by ‘impact factors’. These are a metric applied to individual science journals, and are really quite simple: the average number of citations that articles published by a journal in the previous two years have received in the current year. So, it is supposed, if you get a paper published in a journal with a high impact factor, that can be deemed a ‘good thing’; it must be more excellent than one published in a journal with a lower impact factor. That is a statistically very naive view. Indeed the first article, by Nature regular Ewen Callaway (Callaway, E., 2016. Publishing elite turns against impact factor. Nature, v. 535, p. 210-211), implies that it is downright stupid. Citations do not follow a normal distribution: the majority of papers receive far fewer entries in reference lists than the mean of all those published, and the statistics have a long tail towards papers with very large numbers of citations. The impact factor is therefore strongly biased by the much smaller number of papers that ‘go viral’, generally because they excite interest and often point many researchers in new directions. Take the top two science journals, Nature and Science: their impact factors this year are 38.1 and 34.7 respectively, but in both cases 75% of the papers they published were cited fewer times than the mean. Indeed, a fair number got no citations at all. PLoS Genetics, an on-line, open-access journal of the Public Library of Science whose throughput of papers is far higher than those of Nature and Science, has a much lower impact factor (6.7), but only 65% of its papers are cited fewer times than that mean.
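To make the arithmetic concrete (a sketch using my own symbols rather than any official notation), the two-year impact factor of a journal in year Y works out as:

\[
\mathrm{IF}_{Y} \;=\; \frac{C_{Y}(Y-1) \,+\, C_{Y}(Y-2)}{N_{Y-1} \,+\, N_{Y-2}}
\]

where \(C_{Y}(y)\) is the number of citations received during year Y by items the journal published in year y, and \(N_{y}\) is the number of citable items it published in year y. Because this is a mean, a handful of very heavily cited papers can drag \(\mathrm{IF}_{Y}\) far above the citation count of a typical paper in the same journal.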
But there seems to be something a bit more sinister going on, to do with massaging the citations of individual papers to give the impression of ‘high impact’ and a long ‘shelf life’ for their influence. The sort of ‘gaming’ that goes on is covered by Mario Biagioli, of the University of California, Davis (Biagioli, M., 2016. Watch out for cheats in the citation game. Nature, v. 535, p. 203). Would you believe that some authors supply journal editors with e-mail addresses for ‘sock-puppet’ peer reviewers, both to get into print in the first place and to suggest additional references to other work by the authors? There’s more, with rings that effectively trade fake reviews in exchange for citations of the reviewers’ papers; a lot worse than the familiar practice of self-citation. It isn’t necessarily the case that such papers are themselves fraudulent in some way; the aim is to milk the citations cow and tart up CVs. Biagioli believes that this tendency emerges partly from the drive towards collaborative papers with huge numbers of authors, which again institutions demand in order to be able to say that their research output is international in scope and ‘world-leading’ without being transparently hyperbolic. But skilful individuals can build up bloated reputations with relatively little effort; it’s also possible to guess who they might be. Properly unmasking what Biagioli terms ‘post-production misconduct’ is possible, but only by mining journal databases for evidence, which takes a lot of time. Some of this data analysis is done by journals themselves, pour encourager les autres I suppose, but it is rarely reported. Biagioli mentions new watchdog groups, Retraction Watch and PubPeer, the latter fostering post-publication peer review. But such groups may themselves be gamed, because the ‘pursuit of excellence’ has a competitive side too: overweeningly ambitious academics have tended, until recently, to do the ‘proper thing’ by stabbing one another in the chest in plain view …
I write, referee and edit English-language papers written in China. At my university, publication in an international journal (i.e. one listed in the citation index) is essential for the award of a PhD. Publication in Nature or Science brings a salary bonus and enhanced chances of promotion. An editor told me that he would like his own papers added to one of my citation lists, although he did not suggest that this was a condition of acceptance. I have also encountered an editor of a different journal who failed to understand a technical paper written in rather broken English but, rather than admitting his failure, prevaricated about accepting or rejecting it. We submitted to a different journal, which accepted the paper. Roger Mason, China University of Geosciences, Wuhan.