
Brian Deer, the British investigative journalist who exposed the methods by which Andrew Wakefield implicated the MMR vaccine as a cause of autism, has broadened his scope to research misconduct throughout science (Deer, B. 2012. Doctoring the evidence: what the scientific establishment doesn’t want you to know. The Sunday Times, 12 August 2012, p. 16). His article comes in the wake of several related pieces in leading scientific journals (Enserink, M. 2012. Fraud-detection tool could shake up psychology. Science, 6 July 2012, pp. 21-22; Macilwain, C. 2012. The time is right to confront misconduct. Nature, 2 August 2012, p. 7; Godlee, F. 2012. Helping institutions tackle research misconduct. The British Medical Journal, 10 August 2012). The focus has shifted in the last decade from a major campaign against plagiarism by students tempted by the information largesse of Wikipedia, Google and other electronic treasure troves to unwholesome behaviour among university academics. In an age when redundancy at universities has become an issue for the first time in nine centuries, many academics – frenzied by looming cuts – are engaged in a Gadarene rush for promotion and funding. The now obligatory stream of ‘learned’ papers seeks to justify their own puff and, equally important, the puff of the departments, faculties and institutions trying to blag their corporate way through funding shortages. Misconduct is the child of education-as-commodity.
There are three mortal sins of academic fraudulence: plagiarism, including self-plagiarism (see Self-plagiarism, 6 January 2011); data falsification, including fiddling with other people’s data (see Sabotage in Science, 4 November 2010); and data fabrication, such as starting with a made-up graph and then using it to create plausible values in a table. Venial sins include publishing much the same data and interpretations again and again. That last sin highlights one of the reasons why miscreants get away with their chicanery and benefit from it: sloppy academic editing and even sloppier peer review.
Deer observes that ‘The science establishment’s consensus is that there is no need for outside scrutiny because … science is above that kind of misconduct that has tainted the Roman Catholic church, politics, the press and, of course, the banks.’ But, as in these notorious cases, the lid is coming off scientific misconduct, largely through the bravery of ‘whistle-blowers’ within the system. Yet the offenders who have been unmasked were unfortunate enough to work in institutions that have appropriate investigative mechanisms and the will at high office to use them. That determination to maintain the highest ethical standards is perhaps not as widespread as it once was.
Geoscientists have yet to figure much in the rogues’ gallery of scientific malefactors, except for the odd light-fingered palaeontologist. That may have something to do with the vagueness of much of the science’s scope, epitomised by the trajectory of a lithological boundary on a geological map of poorly exposed ground. Indeed, virtually every aspect of geoscience is open to many interpretations, and errors of omission are perhaps more common than those of commission – any field worker knows that they will inevitably have missed something. But there are quantitative, laboratory-based aspects of the science, such as radiometric dating, that are more readily scrutinised for malpractice. In the early days of using radioactive isotopes and their daughter products to date an igneous or metamorphic event, a common analytical tool was the isochron plot, as in the Rb-Sr method. A ‘good’ age was signified by all the data points falling on or very close to the line of best fit from which an age was calculated. Consequently, there may well have been cases where errant data were conveniently ‘lost’, but there was no way of telling.
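The arithmetic behind that line of best fit is simple enough to sketch. In the Rb-Sr method the slope m of the isochron (⁸⁷Sr/⁸⁶Sr plotted against ⁸⁷Rb/⁸⁶Sr) yields the age directly as t = ln(1 + m)/λ, where λ ≈ 1.42 × 10⁻¹¹ yr⁻¹ is the decay constant of ⁸⁷Rb. The sample values below are invented purely for illustration:

```python
import math

LAMBDA_RB87 = 1.42e-11  # decay constant of 87Rb, per year

# Hypothetical whole-rock analyses: (87Rb/86Sr, 87Sr/86Sr) pairs.
# These numbers are invented for illustration only.
samples = [(0.5, 0.7095), (1.2, 0.7145), (2.0, 0.7202),
           (3.1, 0.7280), (4.4, 0.7372)]

def isochron_age(points):
    """Least-squares slope of 87Sr/86Sr against 87Rb/86Sr, turned into an age."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    slope = (sum((x - mx) * (y - my) for x, y in points) /
             sum((x - mx) ** 2 for x, _ in points))
    return math.log(1.0 + slope) / LAMBDA_RB87  # age in years

print(f"All points: {isochron_age(samples) / 1e6:.0f} Ma")
# 'Losing' an errant point silently shifts the slope -- and the age:
print(f"Last point dropped: {isochron_age(samples[:-1]) / 1e6:.0f} Ma")
```

The second call makes the point of the paragraph above: quietly omitting one analysis changes the fitted slope, and hence the published age, while leaving no trace in the data table.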
That it did happen emerged from the honesty of those isotope geochemists who openly admitted that some mass-spectrometry runs had been omitted because the samples showed signs of ‘contamination’ or ‘open-system behaviour’. For that they were merely taken to task by those who disagreed with their findings, but excused by those whose ideas the results supported: ethically honest. But how many Rb-Sr runs never made it to a published data table? Things are now a great deal more sophisticated than in the days of punched tape and IBM cards, whether in the geochemistry lab, in geophysical software or in the growing cottage industry of process modelling. So much data and such a wealth of corrections accumulate that vast spreadsheets develop in the course of analysis, correction and calculation: few peer reviewers are going to go through the data-processing steps with a fine-tooth comb, even if they have been lodged in public data repositories. Such settings provide ample scope for data invention, ‘fiddling’, ‘fudging’ and, in labs with a cavalier attitude to security, stealing, but leave little way of pinning down any malpractice – unless, that is, a culprit is either carelessly overconfident or a serial offender. A simple test that any peer reviewer might apply, most usefully at random, is to ask for a copy of the laboratory notes associated with a manuscript. If one is not forthcoming, suspicions will arise naturally.
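Screening such spreadsheets need not be heroic, either. One simple statistical check – offered here as an illustrative sketch, not as the tool reported in Science – rests on the observation that the leading digits of genuine measurements spanning several orders of magnitude tend to follow Benford’s law, whereas invented figures usually do not:

```python
import math
from collections import Counter

def first_digit(x):
    """Leading significant digit of a non-zero number."""
    return int(f"{abs(x):e}"[0])  # scientific notation starts with the digit

def benford_deviation(values):
    """Chi-squared misfit of leading digits against Benford's law.

    Only meaningful for data spanning several orders of magnitude;
    a large value flags a table for a closer look, nothing more.
    """
    counts = Counter(first_digit(v) for v in values if v != 0)
    n = sum(counts.values())
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)  # Benford frequency of digit d
        chi2 += (counts.get(d, 0) - expected) ** 2 / expected
    return chi2

# Exponential growth obeys Benford's law; a column of invented,
# suspiciously similar figures does not:
natural = [2.0 ** k for k in range(1, 200)]
fabricated = [5.0 + 0.01 * i for i in range(100)]
print(benford_deviation(natural), benford_deviation(fabricated))
```

Such a screen proves nothing on its own – plenty of honest data sets fail Benford’s preconditions – but it costs a reviewer nothing to run, which is rather the point.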
A measure of just how much dodgy behaviour may go on comes from a survey conducted by Daniele Fanelli of the Institute for the Study of Science, Technology & Innovation at the University of Edinburgh (Fanelli, D. 2009. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE, 4, e5738 http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0005738). In it he found that up to a third of all researchers admit – anonymously – to engaging in shoddy practices, while around 2% admitted to having fabricated, falsified or modified data or results at least once. When asked off the record about colleagues, 85% of researchers reported suspicious behaviour known to them, 14% of it involving data falsification.

The time cannot be far off when the red laser-beam spot moves across geoscience labs and individual geoscientists. Are they audited by disinterested peers – and, in such a small, tightly knit discipline, are there such individuals? Do managing academics scrupulously keep records themselves and demand that their research fellows do likewise? Are there victims or witnesses brave enough to blow the whistle on any spite, fraud or slovenly methods, or will our science remain in its habitual state of bliss?
Related articles
- Scientific Fraud Prevalent Among Science-Based Medicines (sott.net)
- Research waste, research failure and research misconduct (rm21.typepad.com)
- Cracks in the Ivory Tower: Is Academia’s Culture Sustainable? (tutoringtoexcellence.blogspot.com)