Publishing: is it worth the effort?

A measure of the esteem in which a peer-reviewed paper is held is supposedly the number of times it is cited by other papers. Of course, the older a paper is the more chance that such citations will have built up; but the annual rate of citation is likely to fizzle out over time. Papers that create a frisson of initial excitement and command enduring citation are few and far between: they probably launched a new line of inquiry.

It is instructive to try to nail down, using the Web of Science, Alfred Wegener’s influence on tectonics, which ought to have been pretty high. Superficially, he had none and is remembered through that arm of Thomson Reuters for six papers: four on atmospheric physics – his speciality; one on lunar craters; and a sixth on the patterns of cracking seen in rotten wood. These give him a mere 20 citations. Wegener’s posthumous problem was that Die Entstehung der Kontinente first appeared in the fourth issue of Geologische Rundschau in 1912, and seemingly the Web of Science doesn’t have that journal in its archives of a century ago. Later, extended editions appeared in book format; they were not peer reviewed (most geoscientists would not touch his ideas with a barge pole until long after his death in 1930), and are therefore outside the academic pale. The key to a plausible mechanism for continental drift – symmetrical magnetic striping across the ocean basins – was first described by Fred Vine and Drummond Matthews in an issue of Nature in 1963. In 50 years their work, which ranks with the discovery of the structure of DNA, has accumulated citations at an average rate of 38.5 per year: not much for fuelling a revolution.

Alfred Wegener, the unsung hero of continental drift (credit: Wikipedia)

Of course, citation is not the same as the frequency at which a paper is read. It is no secret that a not inconsiderable number of papers that appear in published reference lists haven’t been read by the authors who cite them. They are there by proxy, and you will probably find them in the bibliography of later papers that those same authors have cited. There is perhaps a certain kudos in such proxy citations, for it may be that the cited paper has achieved the equivalent of canonical status in the field.

Citation frequency is something of a lottery, affected by language of publication; by discipline (since 1953 Crick and Watson have achieved three times Vine and Matthews’s average citation rate); and by date of publication (E. Komatsu of the University of Texas at Austin has already had 1939 citations for his February 2011 paper ‘Seven-Year Wilkinson Microwave Anisotropy Probe Observations: Cosmological Interpretation’, published in a supplement to the Astrophysical Journal – nine times the rate of Crick and Watson, but then the paper is about the origin of everything).

Interestingly, the December 2012 issue of Geology presents statistics on the most cited papers that it has published since 2000 (Cowie, P.A. 2012. Highly cited Geology papers (2000-2010) – What were they and who wrote them? Geology, v. 40, p. 1147-1148). Geology is among the highest-ranking journals in the geoscience field, with an impact factor of 4.8 over the last five years. A journal’s impact factor is the number of times the articles it published in a two-year period were cited in all indexed journals during the following year, divided by the total number of articles it published in those two years. So, papers published in Geology between 2007 and 2011 were cited on average 4.8 times in the year following publication. The journal is a useful source of citation statistics as it covers the full range of geoscience, and all papers are limited to 4 printed pages, forcing authors to be concise and clear in their writing and illustration. Consequently it is popular, which, incidentally, may explain its high impact factor.
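The impact-factor arithmetic described above is simple enough to sketch. The counts below are invented for illustration, not Geology’s real figures:

```python
def impact_factor(citations_this_year, items_last_two_years):
    """Citations received in a given year to papers published in the previous
    two years, divided by the number of papers published in those two years."""
    return citations_this_year / items_last_two_years

# e.g. 2400 citations in 2012 to the 500 papers a journal published in 2010-11
print(round(impact_factor(2400, 500), 1))  # 4.8
```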

Of the 33 papers cited most between 2000 and 2010, 14 are on topics relating to Tibet and China. There are 3 on oceanography, 3 on palaeontology and extinctions, 6 on palaeoclimatology, 10 on tectonics and 10 on magmatism (3 of which were about rare adakites formed by partial melting of subducted oceanic crust). I haven’t read all of the papers, and the stats on topics may tell us very little, but I would bet that papers about geology in high-population emerging countries – China, India and Brazil – are met gleefully by rapidly growing communities of eager young geoscientists. It may even be worth a flutter on adakites as the ‘next big thing’ in petrogenesis. Mind you, it looks like I am not likely to be the best punter for hot papers: of the 33 most-cited papers since 2000, only 6 made it into Earth Pages, and of those only one between 2004 and 2010.

The digest goes on to show that, year by year, as many as 10% of papers in Geology are not cited at all, up to 70% are cited between 1 and 5 times per year, while fewer than 10% get 10 or more citations in a year. Oddly, the author suggests that a dip in citations of Geology papers in recent years may reflect the launch of Nature Geoscience in 2008. Yet, glossy as that new addition to the Nature stable might be, it has become something of a desert for papers on geology. Then there is evidence for both ‘vintage’ and ‘just-about-drinkable’ years in Geology citations: the ‘top-ten’ papers of 2001, 2005, 2006, 2008 and 2010 ranged from 10-15 citations for the tenth to 20-25 for the ‘hottest’ paper, while in 2000, 2002, 2003, 2004, 2007 and 2009 the most cited papers stood well above the rest at 32 to 55 citations per year. But that may just reflect the uneven pace at which well-received and provocative work emerges.

So it begins to seem, from Geology at least, that for most geoscience authors publishing isn’t going to raise much hope as far as jobs or promotions are concerned. Yet if results are not published, funding agencies may become fractious about your next grant application; and, of course, university science departments puff themselves up with annual publication rates (though rarely citation records, which, as far as the geosciences go, could be a wise move). But it is a matter of academic duty to publish for the record: even if a paper fills just one tiny niche, the cumulative effect of publicly available knowledge does eventually result in breakthroughs – one never knows… It could be a salutary lesson should publishers release data on hits for on-line PDFs of papers, as that would give some indication of how many readers individual papers have; but as for a ‘like this’ button or a means of star rating, I think we have to venture into the deeper recesses of academic conservatism one small step at a time.

A glimpse of the deep Moon

Charting the variation in gravitational potential across a planet provides a measure of the distribution of mass beneath its surface. That depends both on the planet’s actual shape and on internal variations in rock density. The Earth’s gravity has been mapped with varying degrees of precision, depending on sample spacing, by surface measurements using gravimeters. Doing gravity surveys from space cannot be so direct, however. One ingenious approach for the gravitational field over the oceans is to measure the mean height of the ocean surface using radar beams from a satellite. Since this height is affected by variations in the gravitational field, partly due to bathymetry and partly because of varying density beneath the ocean floor, removing the calculable bathymetric effect leaves a gravitational signal from the underlying lithosphere and deeper mantle. The first satellite to illuminate the Earth with radar microwaves, Seasat, gradually built up such a gravitational map over a period of 105 days in 1978, and was followed by other satellites such as the ERS series and TOPEX/Poseidon.

GRAIL lunar probes
The GRAIL satellites in lunar orbit (credit: Wikipedia)

It is not so easy to map gravity precisely above a solid planetary surface, but through the GRACE experiment this can be done by measuring very precisely the distance between a pair of satellites that follow the same orbit. As the gravitational field changes, so too does the separation between the tandem of satellites: an increase in gravity pulls the satellites closer together, and vice versa. GRACE has provided some fascinating data, such as estimates of the withdrawal of groundwater from large sedimentary basins and the shrinkage of ice caps. However, GRACE is limited in its resolution of gravitational anomalies by the fact that Earth has an atmosphere, above which such tandems must be parked in orbit to avoid burning up. The higher the orbit, the more degraded the resolution. This effect is much smaller for Mars and non-existent for the Moon.
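The tandem principle can be caricatured in a few lines: two test masses fly the same straight track past a buried point-mass anomaly, each is tugged in turn, and their separation wobbles at the millimetre scale. All numbers here are invented and the physics is drastically simplified (straight-line flight at constant altitude, not a real orbit):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
m_anom = 5e15      # hypothetical excess mass beneath the track, kg
h = 500e3          # flight altitude above the anomaly, m
v0 = 7600.0        # along-track speed, m/s
d0 = 220e3         # nominal inter-satellite separation, m

def along_track_accel(x):
    """Horizontal pull of a point anomaly (at x = 0, depth h) on a satellite at x."""
    return -G * m_anom * x / (x * x + h * h) ** 1.5

# crude Euler integration of the pair flying past the anomaly
dt = 1.0
x1, x2 = -2000e3, -2000e3 - d0   # leading and trailing satellites
v1, v2 = v0, v0
max_dev = 0.0                    # largest excursion of the separation
for _ in range(600):
    v1 += along_track_accel(x1) * dt
    v2 += along_track_accel(x2) * dt
    x1 += v1 * dt
    x2 += v2 * dt
    max_dev = max(max_dev, abs((x1 - x2) - d0))

print(f"maximum separation change: {max_dev * 1000:.2f} mm")
```

The separation excursion comes out at a few millimetres over a ~200 km baseline, which is why the real missions need microwave (and, more recently, laser) ranging of extraordinary precision.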

Gravity field of the Moon as measured by NASA’s GRAIL mission. The far side of the Moon is at the centre, whereas the near side (as viewed from Earth) is at either side. (credit: NASA/ARC/MIT)

A sister experiment to GRACE has been orbiting the Moon since September 2011: the Gravity Recovery and Interior Laboratory (GRAIL). The tandem orbited first at 55 km, then at 22 km and, for a brief period, at 11 km, before running out of thruster fuel on 17 December 2012 and crashing into the lunar surface. Results from the highest orbit resolve lunar gravity to 13 km cells, and were recently reported on-line in three papers (Zuber, M.T. and 16 others 2012. Gravity field of the Moon from the Gravity Recovery and Interior Laboratory (GRAIL) mission. Science, doi 10.1126/science.1231507; Wieczorek, M.A. and 15 others 2012. The crust of the Moon as seen by GRAIL. Science, doi 10.1126/science.1231530; Andrews-Hanna, J.C. and 18 others 2012. Ancient igneous intrusions and early expansion of the Moon revealed by GRAIL gravity gradiometry. Science, doi 10.1126/science.1231753). The gravitational signatures of craters, due to variations in surface topography, suggest that the early bombardment of the lunar surface far exceeded previous assumptions. Impact effects dominate the GRAIL data at this resolution, but 2% of the information relates to structures hidden at depth.

500 km linear anomaly in the Moon’s far-side gravitational field. (credit: NASA/JPL-Caltech/CSM)

There are linear gravity anomalies extending over hundreds of kilometres, which may be huge igneous intrusions in the form of dykes: perhaps reflections of early extensional tectonics in the Moon’s lithosphere. Estimates point to this extension having involved an increase of up to 5 km in the lunar radius, probably as a result of thermal changes. The dominant feature of the lunar surface is not the flat basaltic maria of the near side, visually prominent as they are, but the far more rugged lunar highlands, which stand much higher because of the lower density of their constituent feldspar-rich anorthosites. GRAIL permitted a bulk estimate of the density of the highland crust that turned out to be substantially lower than originally estimated from samples returned by the Apollo missions: 2550 kg m-3, compared with 2600-2700 for granite and 2800-3000 for basalt. This forces a reassessment of the thickness of the highland crust from 50-60 km down to between 34 and 43 km, with a near-surface layer that has a porosity of around 12%, probably resulting from its awful battering. A thinner highland crust than previously assumed presents a bulk geochemical picture that need not be more enriched in ‘refractory’ elements, such as aluminium and calcium, than is the Earth.
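The link between assumed crustal density and inferred crustal thickness can be illustrated with the simple Bouguer slab relation (not the spherical-harmonic modelling the GRAIL team actually used): for a given gravity anomaly, a larger density contrast at the base of the crust needs less relief to explain it, so a lighter crust comes out thinner. All figures below, including the lunar mantle density, are hypothetical:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def moho_relief(delta_g_mgal, rho_mantle, rho_crust):
    """Infinite-slab approximation: relief (m) on the crust-mantle boundary
    needed to explain a gravity anomaly, from delta_g = 2*pi*G*drho*dH."""
    delta_g = delta_g_mgal * 1e-5            # mGal -> m s^-2
    return delta_g / (2 * math.pi * G * (rho_mantle - rho_crust))

# hypothetical 30 mGal anomaly; lunar mantle density assumed to be 3300 kg/m3
old = moho_relief(30, 3300, 2800)   # older, denser estimate of highland crust
new = moho_relief(30, 3300, 2550)   # GRAIL's lower bulk density
print(f"denser crust: {old:.0f} m of relief; lighter crust: {new:.0f} m")
```

The same anomaly demands roughly a third less crustal relief once the lower density is adopted, which is the direction of the 50-60 km to 34-43 km revision described above.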

Such unanticipated results from the low-resolution mode of the GRAIL experiment have its science team almost salivating at the prospect of the sharper ‘pictures’ that will arise from the lower-altitude orbits.

The Ediacaran fossils: a big surprise

Ediacara sandstones in the Flinders Ranges of South Australia (credit: Wikipedia)

The first macroscopic life forms were the enigmatic bag-like and quilted fossils found in sedimentary rocks dating back to 635 Ma in Australia, eastern Canada and NW Europe. They are grouped as the Ediacaran fauna, named after the Ediacara Hills in South Australia where they are most common and diverse. Generally they are not body fossils but impressions of soft-bodied organisms, often in sandstones rather than muds. Some are believed to be animals that absorbed nutrients through their skin, whereas others are subjects of speculation. One thing seems clear: these first metazoans arose because of some kind of trigger provided by the global glacial conditions that preceded their appearance. It has always been assumed that, whatever they were, Ediacaran organisms lived on the sea floor, probably in shallow water. New sedimentological evidence found in the type locality by Gregory Retallack of the University of Oregon seems set to force a complete rethink about these hugely important life forms (Retallack, G.J. 2012. Ediacaran life on land. Nature (online), doi:10.1038/nature11777). So momentous are his conclusions that they form the subject of a Nature editorial in the 13 December 2012 issue.

Retallack, a specialist on ancient soils of the Precambrian, examined reddish facies of the Ediacara Member of the Rawnsley Quartzite of South Australia, whose previous interpretations have a somewhat odd background. Before their fossils were discovered the rocks were regarded as non-marine; when traces of jellyfish-like organisms turned up this view was reversed to marine, the red coloration being ascribed to deep Cretaceous weathering. A range of features suggests to Retallack that the red facies represents palaeosols within the sedimentary sequence: clasts of red facies in grey Ediacaran rocks; the presence of feldspar in the red facies, unlikely to have survived deep weathering; bedding surfaces with textures very like those formed by subaerial biofilms; and desiccation cracks. Moreover, some features indicate a land surface prone to freezing from time to time. The key observation is that this facies contains Ediacaran trace fossils representing many of the forms previously regarded as marine animals of some kind, including Spriggina, Dickinsonia and Charnia, on which most palaeontologists would bet good money that they were animals, albeit enigmatic ones.

Specimen of the Ediacaran Dickinsonia (credit: Wikipedia)

If Retallack’s sedimentological observations are confirmed, then organisms found in the palaeosols cannot have been animals but were perhaps akin to lichens or colonial microbes, and survived freezing conditions. Since they also occur in other facies more likely to be subaqueous, they were ‘at home’ in a variety of ecosystems. As the Nature editorial reminds us, from the near-certainty that early macroscopic life was marine there is a chance that views will have to revert to a terrestrial emergence, first suggested in the 1950s by Jane Gray. Uncomfortable times lie ahead for the palaeontological world.

Grand Canyon now the Grand Old Canyon?

Grand Canyon in Winter
Grand Canyon in Winter (credit: Wikipedia)

Among the best known, and certainly the most visited, topographic features on the planet, the Grand Canyon resulted from erosion by the Colorado River keeping pace with uplift of the south-central United States. It is the archetype for what is known as antecedent drainage. Since that uplift is still going on, albeit slowly, the Grand Canyon has been assumed to be a relatively young landform. By dating the first appearance of debris from the eastern end of the canyon in sediments at its western limit, geomorphologists estimated that incision began around 6 Ma ago. Yet a range of other observations presents puzzling contradictions. One means of settling the issue is somehow to date the uplift radiometrically.

A long-used technique is to determine ‘cooling ages’ of crustal rocks exposed by uplift and erosion, exploiting the way in which rock temperature determines whether or not the products of radioactive decay can be preserved intact. One method uses the trails of damage left by fission fragments as they pass through minerals that incorporate high amounts of elements such as uranium. Above a certain temperature the fission tracks anneal and disappear quickly, while below it they accumulate over time. Quantifying that build-up allows the date of cooling below the threshold temperature to be estimated. Similarly, gases produced by radioactive decay, such as argon from the decay of 40K or helium from uranium and thorium isotopes, can only stay in their host mineral if it remains below a narrow range of temperatures. As rock rises towards the Earth’s surface it starts out hot at depth but cools by conduction as it gets closer to the surface. For the 1.8 km of uplift of the Grand Canyon and the relatively cool nature of the underlying crust, neither the fission-track nor the 40Ar/39Ar cooling-age method gives meaningful results. However, minerals lose helium at temperatures above about 70°C, so a method based on helium accumulation from uranium and thorium isotope decay is a possible means of assessing the timing of uplift. There have been plenty of snags to overcome to make this approach reliable, but in the case of the Grand Canyon analytical quality and careful sample collection have given a credible result (Flowers, R.M. & Farley, K.A. 2012. Apatite 4He/3He and (U-Th)/He evidence for an ancient Grand Canyon. Science, doi 10.1126/science.1229390).
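The accumulation clock behind such cooling ages is straightforward: once a mineral drops below its retention temperature the daughter product stays put, so an apparent age follows from the daughter/parent ratio. The sketch below uses the 238U chain alone for simplicity; real (U-Th)/He dating sums the helium yields of the 238U, 235U and 232Th chains, and the sample numbers are invented:

```python
import math

LAMBDA_U238 = 1.551e-10   # decay constant of 238U, yr^-1
ALPHAS_PER_CHAIN = 8      # 4He atoms produced per complete 238U -> 206Pb decay

def he_age_years(he_atoms, u238_atoms):
    """Apparent age from measured 4He and present-day 238U (same units).
    D = 8 * P * (exp(lambda*t) - 1)  =>  t = ln(1 + D/(8*P)) / lambda."""
    return math.log(1.0 + he_atoms / (ALPHAS_PER_CHAIN * u238_atoms)) / LAMBDA_U238

# a grain whose He/U ratio implies roughly 70 Ma of accumulation (invented)
print(f"apparent cooling age: {he_age_years(8.7e-2, 1.0) / 1e6:.0f} Ma")
```

The date obtained is the time since the grain last cooled through its helium retention temperature, not the crystallisation age, which is exactly what makes it useful for timing uplift.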

Flowers and Farley, from the University of Colorado at Boulder and the California Institute of Technology, Pasadena, respectively, produced a result that completely overturns previous conceptions. The western end of the canyon had been incised to within a few hundred metres of modern depths by 70 Ma ago, making it more than ten times older than previously thought. The eastern end has a more complex history that reveals cooling events in the Neogene as well as an end-Cretaceous initiation of uplift and erosion. Their data are consistent with early incision of the Grand Canyon by a Cretaceous river flowing eastward from the Western Cordillera, with a reversal of flow in the late Tertiary as uplift of the Colorado Plateau began and the western mountains subsided. Whether or not this fits with the Cretaceous and later geological history of the SW US is beyond my ken, but you can bet there will be a storm of comment from US geomorphologists once the paper appears in the print issue of Science.

Toba ash and calibrating the Pleistocene record

Landsat image of the Lake Toba caldera, Sumatra (credit: Wikipedia)

The largest volcanic catastrophe during the evolution of humans formed the huge caldera at Lake Toba, near the Equator in Sumatra, about 70 thousand years ago. The explosion erupted 2800 cubic kilometres of magma, of which 800 km3 was deposited as thick ash across most of South Asia and the northern Indian Ocean. Sulfates derived from Toba’s gas emissions form clear ‘spikes’ in ice cores from both Greenland and Antarctica. Its effects were global through the mixing of sulfate aerosols into the stratosphere of both hemispheres, encouraged by its position close to the Equator. By reflecting incoming solar energy the aerosols resulted in a century-long 10°C fall in temperature over the Greenland ice cap. Such global cooling almost certainly affected anatomically modern humans, but it is possible that in South Asia Toba had an even more devastating effect.

The Toba ash at the Jwalapuram excavations in South India (photo credit: Sanjay P. K. via Flickr)

At several sites in the Indian state of Tamil Nadu and in Malaysia, Toba ash buried artefacts that arguably were made by the earliest modern-human emigrants from Africa. Immediately above the ash are yet more tools, suggesting that humans did survive the eruption. Palaeoanthropologists have argued that the stress of Toba’s environmental effects on all hominins living at the time may have caused population crashes from which only the fittest individuals emerged. Major evolutionary shifts in human behaviour detectable from the archaeological record – the creation of completely new kinds of tools, art and language – have been ascribed to ‘bottlenecks’ of that kind. However, recent finds in Africa suggest that many such shifts are much older than Toba.

Perhaps Toba’s greatest contribution to palaeoanthropology is that it is an easily recognised event in the geological record, but compared with its sulfate spike in the Greenland ice core at ~71 ka, existing radiometric dates have uncertainties of several thousand years. Using the latest 40Ar/39Ar dating methods on fresh crystals of sanidine (volcanic K-feldspar) from new excavations in Malaysia, these uncertainties have been reduced significantly (Storey, M. et al. 2012. Astronomically calibrated 40Ar/39Ar age for the Toba supereruption and global synchronization of late Quaternary records. Proceedings of the National Academy of Sciences, v. 109, p. 18684-18688). The sulfate peak and the ash can now be attributed to an age of 73.88 ± 0.32 ka: better than a golden spike in Late Pleistocene stratigraphy. The ice cores thus gain a check on their chronology just beyond the limit of counting annual layers, as do ocean-sediment cores, for a time older than 14C dating can ever achieve. Toba now links, too, with events recorded by the precise U-Th series dating of cave deposits.

Probing the Earth’s mantle using noise

Artistic impression of a global seismic tomogram – beneath a Mercator projection – dividing the mantle into ‘warm’ and ‘cool’ regions (credit: Cornell University Geology Department – http://www.geo.cornell.edu/geology/classes/Geo101/graphics/s12fsl.jpg)

It goes without saying that it is difficult to sample the mantle. The only direct samples are inclusions found in igneous rocks that formed by partial melting at depth so that the magma incorporated fragments of mantle rock as it rose, or where tectonics has shoved once very deep blocks to the surface. Even if such samples were not contaminated in some way, they are isolated from any context. For 20 years geophysicists have been analysing seismograms from many stations across the globe for every digitally recordable earthquake to use in a form of depth sounding. This seismic tomography assesses variations in the speed of body (P and S) waves according to the path that they travelled through the Earth.

Unusually high speeds at a particular depth suggest more rigid rock and thus cooler temperatures, whereas hotter materials slow down body waves. The result is images of deep structure in vertical 2-D slices, but the quality of such sections depends, ironically, on plate tectonics. Earthquakes by definition occur mainly at plate boundaries, which are lines at the surface. Such a one-dimensional source for seismic tomograms inevitably leaves the bulk of the mantle as a blur. But there are more ways of killing a cat than drowning it in melted butter. All kinds of processes unconnected with tectonics – ocean waves hitting the shore and interfering with one another across the ocean basins, and changes in atmospheric pressure, especially those associated with storms – also create waves, similar in kind to seismic ones, that pass through the solid Earth.

Such aseismic energy produces the background noise seen on any seismogram. Though far below the energy and amplitude associated with earthquakes, it is continuous and all-pervading, so its energy accumulates. Given highly sensitive modern detectors and sophisticated processing, much the same kind of depth sounding is possible using micro-seismic noise, but for the entire planet and at high resolution. Rather than imaging speed variations, this approach can pick up reflections from physical boundaries in the solid Earth. Surface micro-seismic waves, exactly the same as the Rayleigh and Love waves from earthquakes, have already been used to analyse the Mohorovičić discontinuity between crust and upper mantle, as well as features in the continental crust; indeed the potential of noise was recognised in the 1960s. But the deep mantle and core are the principal targets, being far out of reach of experimental seismic surveys using artificial energy sources. It seems they are now accessible using body-wave noise (Poli, P. et al. 2012. Body-wave imaging of Earth’s mantle discontinuities from ambient seismic noise. Science, v. 338, p. 1063-1065).
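The core trick – retrieving travel times from random noise by cross-correlation – can be shown in a toy example: if two stations record the same random wavefield, one a delayed copy of the other, correlating the records recovers the delay, which is the same travel-time information an earthquake arrival would give. All parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100                      # sampling rate, samples per second
travel_time_s = 2.5           # 'true' inter-station travel time
d = int(travel_time_s * fs)   # the same delay in samples
n = fs * 60                   # one minute of synthetic noise

wavefield = rng.standard_normal(n + d)
station_b = wavefield[d:d + n]   # upstream station records the noise first
station_a = wavefield[:n]        # downstream station gets it travel_time_s later

# the peak of the cross-correlation sits at the inter-station travel time
xcorr = np.correlate(station_a, station_b, mode="full")
lag_s = (np.argmax(xcorr) - (n - 1)) / fs
print(f"recovered travel time: {lag_s:.2f} s")
```

Real ambient-noise studies stack such correlations over months of data and many station pairs, which is why dense temporary arrays like the one in Arctic Finland are so valuable.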

Poli and colleagues from the University of Grenoble, France, and from Finland used a temporary network of 42 seismometers laid out in Arctic Finland to pick up noise, with sophisticated signal processing to separate surface waves from body waves. Their experiment resolved two major mantle discontinuities at ~410 and 660 km depth that define the transition zone between the upper and lower mantle, where olivine – the dominant mineral of the upper mantle – changes to more closely packed crystal structures, ending with one akin to that of the mineral perovskite, which is thought to characterise the lower mantle. Moreover, they were able to show that each step of the two-stage change occupies a depth interval of about 10-15 km.

Applying the method elsewhere doesn’t need a flurry of new closely spaced seismic networks: data are already available from arrays aimed at conventional seismic tomography, such as USArray, which deploys 400 portable stations in area-by-area steps across the United States (http://earth-pages.co.uk/2009/11/01/the-march-of-the-seismometers/).

It is early days, but micro-seismic noise seems very like the dreams of planetary probing foreseen by several science fiction writers, such as Larry Niven who envisaged ‘deep radar’ being deployed for exploration by his piratical hero Louis Wu. Trouble is, radar of that kind would need a stupendous power source and would probably fry any living beings unwise enough to use it. Noise may be a free lunch to the well-equipped geophysicist of the future.

  • Prieto, G.A. 2012. Imaging the deep Earth. Science (Perspectives), v. 338, p. 1037-1038.