Photosynthesis, arsenic and a window on the Archaean world

At the very base of the biological pyramid, life is far simpler than the forms we can see. It takes the form of single cells that lack a nucleus and propagate only by cloning: the prokaryotes, as opposed to eukaryote life such as ourselves. It is almost certain that the first viable life on Earth was prokaryotic, though which of its two fundamental divisions – Archaea or Bacteria – came first is still debated. At present, most prokaryotes metabolise other organisms’ waste or dead remains: they are heterotrophs (from the Greek for ‘other nutrition’). But there are others that are primary producers, getting their nutrition by themselves and exploiting the inorganic world in a variety of ways: the autotrophs. Biogeochemical evidence from the earliest sedimentary rocks suggests that, in the Archaean, prokaryotic autotrophs were dominant, mainly exploiting chemical reactions to gain the energy necessary for building carbohydrates. Some reduced sulfate ions to sulfide, others combined hydrogen with carbon dioxide to generate methane as a by-product. Sunlight being an abundant energy resource in near-surface water, a whole range of prokaryotes exploit its potential through photosynthesis. Under reducing conditions some photosynthesisers oxidise sulfur compounds to sulfate, and yet others combine photosynthesis with chemo-autotrophy. Dissolved materials capable of donating electrons – i.e. reducing agents – are exploited in photosynthesis: hydrogen, ferrous iron (Fe2+), reduced sulfur, nitrite, or some organic molecules. Without one group, which uses photosynthesis to convert CO2 and water to carbohydrates and oxygen, eukaryotes would never have arisen, for they depend on free oxygen. A transformation 2400 Ma ago marked the point in Earth history when oxygen first entered the atmosphere and shallow water (see: Massive event in the Precambrian carbon cycle; January 2012), known as the Great Oxygenation Event (GOE).
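The net reaction of that oxygen-producing group – oxygenic photosynthesis – can be summarised as:

```latex
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\ h\nu\ } \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```

Here water is the electron donor and free oxygen is the waste product; the anoxygenic photosynthesisers mentioned above use other donors and release no oxygen at all.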
It has been shown that the most likely sources of that excess oxygen were extensive bacterial mats in shallow water, made of photosynthesising blue-green bacteria, that produced the distinctive carbonate structures known as stromatolites. Such structures had already been forming in Archaean sedimentary basins for 1.9 billion years. It has generally been assumed that blue-green bacteria formed those earlier examples too, before the oxygen that they produced overcame the reducing conditions that had prevailed before the GOE. But that may not have been the case …

Microbial mats made by purple sulfur bacteria in highly toxic spring water flowing into a salt-lake in northern Chile. (credit: Visscher et al. 2020; Fig 1c)

Prokaryotes are a versatile group, and new types keep turning up as researchers explore all kinds of strange and extreme environments: hot springs, groundwater from kilometres below the surface, and highly toxic waters. A recent surprise arose from the study of anoxic springs laden with dissolved salts, sulfide ions and arsenic that feed parts of hypersaline lakes in northern Chile (Visscher, P.T. and 14 others 2020. Modern arsenotrophic microbial mats provide an analogue for life in the anoxic Archean. Communications Earth & Environment, v. 1, article 24; DOI: 10.1038/s43247-020-00025-2). This is a decidedly extreme environment for life as we know it, made more challenging by high-altitude exposure to intense UV radiation. The springs’ beds are covered with bright-purple microbial mats. Interestingly, the water’s arsenic concentration varies from high in winter to low in summer, suggesting that some process removes it, along with sulfur, according to light levels: almost certainly the growth and dormancy of mat-forming bacteria. Arsenic is an electron donor capable of participating in photosynthesis that does not produce oxygen. The microbial mats produce no oxygen whatsoever – uniquely for the modern Earth – but they do form carbonate crusts that look like stromatolites. The mats contain purple sulfur bacteria (PSBs): anaerobic photosynthesisers that use sulfur, hydrogen and Fe2+ as electron donors. The seasonal changes in arsenic concentration match similar shifts in sulfur, suggesting that arsenic is also being used by the PSBs. Indeed it can be: the aio gene, which enables such arsenic-based metabolism, is present in the genome of PSBs.
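For comparison with the oxygen-producing reaction, the classic anoxygenic photosynthesis of purple sulfur bacteria uses hydrogen sulfide rather than water as the electron donor, yielding elemental sulfur instead of oxygen:

```latex
6\,\mathrm{CO_2} + 12\,\mathrm{H_2S} \xrightarrow{\ h\nu\ } \mathrm{C_6H_{12}O_6} + 6\,\mathrm{H_2O} + 12\,\mathrm{S}
```

An analogous scheme with arsenite standing in for sulfide as the electron donor is what the aio gene makes possible.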

Pieter Visscher and his multinational co-authors argue that prokaryotes similar to modern PSBs played a role in creating the stromatolites found in Archaean sedimentary rocks. Being oxygen-poor, the Archaean atmosphere would have contained no ozone, so that high-energy UV would have bathed the Earth’s surface and penetrated its oceans to a considerable depth. Moreover, arsenic is today removed from most surface water by adsorption on iron hydroxides, a product of modern oxidising conditions (see: Arsenic hazard on a global scale; May 2020): it would have been more abundant before the GOE. So the Atacama springs may be an appropriate micro-analogue for Archaean conditions, a hypothesis that the authors address with reference to the geochemistry of sedimentary rocks in Western Australia deposited in a late-Archaean evaporating lake. Stromatolites in the Tumbiana Formation show, according to the authors, definite evidence for sulfur and arsenic cycling similar to that in the Atacama springs. They also suggest that photosynthesising blue-green bacteria (cyanobacteria) may not have been viable under such Archaean conditions, while microbes with a metabolism similar to that of PSBs probably were. The eventual appearance and rise of oxygen once cyanobacteria did evolve, perhaps in the late Archaean, left PSBs and most other anaerobic microbes – to which oxygen spells death – as a minority faction trapped in what became ‘extreme’ environments, though long before they had ‘ruled the roost’. It raises the question, ‘What if cyanobacteria had not evolved?’. A trite answer would be, ‘I would not be writing this and nor would you be reading it!’. But it is a question that can properly be applied to the issue of alien life beyond Earth, perhaps on Mars. Currently, attempts are being made to detect oxygen in the atmospheres of exoplanets orbiting other stars, as a ‘sure sign’ that life evolved and thrived there too.
That may be a fruitless venture, because life happily thrived during Earth’s Archaean Eon until its closing episodes without producing a whiff of oxygen.

See also: Living in an anoxic world: Microbes using arsenic are a link to early life. (Science Daily, 22 September 2020)

Did early humans learn to cook in Olduvai Gorge?

Olduvai Gorge in northern Tanzania was for many years the stamping ground of the famous Leakey family and many other anthropologists because of its richness in the skeletal remains and tools of the earliest members of our genus Homo. The first of these, H. habilis, appears in the Olduvai stratigraphic sequence at around 2 Ma; older examples are now known from localities in Kenya and South Africa, taking the species back to about 2.4 Ma. ‘Handy Man’ got its Latinised nickname from its association with abundant stone tools, albeit of a very primitive kind. Oldowan tools are of the ‘let’s bash a couple of pebbles together to get a cutting edge’ kind, dating back to 3.4 Ma (though without evidence of who made them then), and as easily made, disposable tools they linger in the archaeological record until the Neolithic and even modern times. Homo habilis had a brain little larger than that of the australopithecines, and some authorities deem the species to be one of them.

Olduvai also yielded the earliest of a more ‘brainy’ species, H. ergaster (‘Action Man’), which coexisted with H. habilis for a few hundred thousand years from around 2 Ma. Initially they too left Oldowan tools. Then, around 1.7 Ma at Olduvai, H. ergaster began making another stone artefact, the symmetrical bifacial ‘axe’ – probably a multipurpose tool and possibly an object of ritual significance, according to some researchers. Either way, making one required visualising the finished item within a shapeless lump of hard rock, and great dexterity – as it still does for stone knappers. The biface or ‘Acheulean’ tool originated with one of humanity’s greatest cognitive leaps and lay at the centre of the human toolkit for well over a million years. After first being made at Olduvai by African H. ergaster, biface artefacts spread throughout the continent with H. erectus (probably a direct descendant) and beyond its shores with succeeding humans, up to and including the earliest H. sapiens. How did what seems to be a ‘golden spike’ in human culture first take material form at Olduvai? The possibility of an answer stems from pure serendipity and the development of new research tools.

A flint bifacial stone artefact from the Palaeolithic of Norfolk, UK, which incorporates a bivalve fossil

The reason that Olduvai Gorge has drawn in several generations of researchers lies in its geology. As well as the sediments deposited by rivers and in ephemeral lakes that characterised a broadly savannah environment, from 2 to 1 Ma there were at least 31 major volcanic eruptions that deposited lavas and a wide range of volcanic ash beds. These have enabled precise dating to calibrate in minute detail the evolution of a highly productive environment and the flora and fauna that it supported during the early Pleistocene. A recently developed technique involves identification of a variety of fatty acids or lipids – natural oils, waxes and steroids – using gas chromatography. Lipids are the remaining ‘biomarkers’ of plants and microorganisms that once lived in an ecosystem. Ainara Sistiaga of the Massachusetts Institute of Technology and the University of Copenhagen, with colleagues from Denmark, Spain, the US and Tanzania, set out to document ecological variation at Olduvai over a million-year interval using this approach. Among the microbial biomarkers they stumbled on something of possibly great importance (Sistiaga, A. and 10 others 2020. Microbial biomarkers reveal a hydrothermally active landscape at Olduvai Gorge at the dawn of the Acheulean, 1.7 Ma. Proceedings of the National Academy of Sciences, v. 117, published online; DOI: 10.1073/pnas.2004532117).

The palaeo-landscape of Olduvai, as revealed by lipid analysis, was highly diverse and rich in grasses, palms, shrubs, aquatic flora and edible plants, watered by spring-fed rivers. It supported a diverse fauna, including large herbivores (attested by fecal biomarkers): ideal for hominin subsistence. Sistiaga et al. focus in their paper on samples from the 1.7 Ma sedimentary and volcanic sequence (the Lower Augitic Sandstones – augite is an igneous pyroxene) that contains remains of H. ergaster, the oldest bifacial artefacts, and dismembered carcases of hominin prey animals. The surprise that emerged from the volcaniclastic sandstones included lipids produced by a range of bacterial species that only thrive in modern hot springs, such as those at Yellowstone and on the North Island of New Zealand. At three sample sites biomarkers were found for one particular hyperthermophile (Thermocrinis ruber), which can only live in water between 80 and 95°C. This and the other heat-loving bacteria also require a water chemistry that, once cool, would be drinkable.

Artist’s impression of Homo ergaster cooking an antelope in a 1.7 Ma hot spring at Olduvai Gorge, Tanzania (credit: Tom Björklund, MIT)

The implication is obvious: the ancient Olduvai hot springs were capable of thoroughly cooking meat and vegetables. The importance for humans is that cooking tenderises both meat and tough tubers and roots, and breaks down carbohydrates and proteins to make them more easily and efficiently digestible. The brain capacity of H. ergaster was significantly greater than that of H. habilis: at an average of 800 cm3, about two-thirds that of anatomically modern humans. An increase in the input of easily digested protein, fats and carbohydrates may have fuelled that growth and, in turn, the cognitive capacity of H. ergaster. Not only the rift valley of northern Tanzania but the whole of the East African Rift System is liberally dotted with hydrothermal vents, and also with hominin-rich sites.

See also: Chu, J. 2020. Did our early ancestors boil their food in hot springs? (MIT News, 15 September 2020)

End-Triassic mass extinction: evidence for oxygen depletion on the ocean floor

For British geologists of my generation the Triassic didn’t raise our spirits to any great extent. There’s quite a lot of it on the British Geological Survey 10-miles-to-the-inch geological map (South Sheet), but it is mainly muds, sandstones or pebble beds, generally red and largely bereft of fossils. For the Triassic’s 50 Ma duration following the end-Permian extinction at 252 Ma, Britain was pretty much a desert in the middle of the Pangaea supercontinent. Far beyond our travel grants’ reach, the Triassic is a riot, as in the Dolomites of Northern Italy. Apart from a day trip to look at the Bunter Pebble Beds in a quarry near Birmingham and several weeks testing the load-bearing strength of the Keuper mudstones of the West Midlands (not far off zero) in a soil-mechanics lab, we did glimpse the then evocatively named Tea Green Marl (all these stratigraphic names have since vanished). Conveniently it outcrops by the River Severn estuary, below its once-famous suspension bridge and close by the M5 motorway. Despite the Tea Green Marl containing a bone bed with marine reptiles, time didn’t permit us to fossick and, anyway, there was a nearby pub … The formation was said to mark a marine transgression leading on to the ‘far more interesting Jurassic’ – the reason we were in the area. We were never given even a hint that the end of the Triassic was marked by one of the ‘Big Five’ mass extinctions: such whopping events were not part of the geoscientific canon in the 1960s.

Pangaea just before the start of Atlantic opening at the end of the Triassic, showing the estimated extent of the CAMP large igneous province. The pink triangles show the sites investigated by He and colleagues.

Around 201.3 Ma ago about 34% of marine genera disappeared, comparable with the toll of the K-Pg extinction that ended the Mesozoic Era. Extinction among Triassic terrestrial animals is less quantifiable. Early dinosaurs made it through, to diversify hugely during the succeeding Jurassic and Cretaceous Periods. Probably because nothing famous ceased to be or made its first appearance, the Tr-J mass extinction hasn’t captured public attention in the same way as those with the K-Pg or P-Tr acronyms. But it did dramatically alter the course of biological evolution. The extinctions coincided with a major eruption of flood basalts known as the Central Atlantic Magmatic Province (CAMP), whose relics occur on either side of the eponymous ocean, which began to open definitively at about the same time. So, chances are, volcanic emissions are somehow implicated in the extinction event (see: Is end-Triassic mass extinction linked to CAMP flood basalts? June 2013). Tianchen He of Leeds University, UK, and the China University of Geosciences, with British and Italian colleagues, has studied three Tr-J marine sections on either side of Pangaea: in Sicily, Northern Ireland and British Columbia (He, T. and 12 others 2020. An enormous sulfur isotope excursion indicates marine anoxia during the end-Triassic mass extinction. Science Advances, v. 6, article eabb6704; DOI: 10.1126/sciadv.abb6704). Their objective was to test the hypothesis that CAMP resulted in an episode of oceanic anoxia that caused many marine organisms to become extinct. Since eukaryote life depends on oxygen, a deficit would put marine animals of the time under great stress. Such events in the later Mesozoic account for global occurrences of hydrocarbon-rich, black marine shales – petroleum source rocks – in which hypoxia thwarted complete decay of dead organisms over long periods. However, there is scant evidence for such rocks having formed ~201 Ma ago.
Such evidence as there is dates to about 150 ka after the Tr-J boundary, in an Italian shallow marine basin. The problem is compounded by the fact that no ocean-floor sediments that old survive, thanks to their complete subduction as Pangaea broke apart in later times and its continental fragments drifted to their present configuration.

But there is an indirect way of detecting deep-ocean anoxia, in the inevitable absence of any Triassic and early Jurassic oceanic crust. It emerges from what happens to the stable isotopes of sulfur when bacteria that gain energy by reducing sulfate (SO42-) to sulfide (S2-) ions are abundant. Such microorganisms thrive in anoxic conditions and produce abundant hydrogen sulfide, which in turn leads to the precipitation of dissolved iron as minute grains of pyrite (FeS2). This biogenic process selectively excludes 34S from the precipitated pyrite. As a result, at times of widespread marine reducing conditions seawater as a whole becomes enriched in 34S relative to sulfur’s lighter isotopes. The enrichment is actually expressed in the unreacted sulfate ions, which may be precipitated as calcium sulfate (gypsum) in marine sediments deposited anywhere: He et al. focussed on such fractionation. They discovered large ‘spikes’ in the relative enrichment of 34S at the Tr-J boundary in shallow-marine sedimentary sequences exposed at the three sites. Moreover, they were able to estimate that the conditions on the now-vanished bed of the Triassic ocean that gave rise to the spikes lasted for about 50 thousand years. The lack of dissolved oxygen resulted in a five-fold increase in pyrite burial in the now-subducted ocean-floor sediments of that time. The authors suggest that the oxygen depletion stemmed from extreme global warming, which encouraged methane production by other ocean-floor bacteria and, in a roundabout way, other chemical reactions that consumed free dissolved oxygen. Quite a saga of a network of interactions in the whole Earth system, and one that may hold a dreadful warning for the modern Earth and ourselves.
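The 34S enrichment that He et al. measured is conventionally expressed in per mil (‰) as δ34S, the deviation of a sample’s 34S/32S ratio from that of the Vienna Canyon Diablo Troilite (V-CDT) reference standard:

```latex
\delta^{34}\mathrm{S} = \left[\frac{(^{34}\mathrm{S}/^{32}\mathrm{S})_{\mathrm{sample}}}{(^{34}\mathrm{S}/^{32}\mathrm{S})_{\mathrm{V\text{-}CDT}}} - 1\right] \times 1000
```

Because bacterial sulfate reduction preferentially locks the lighter 32S into pyrite, widespread anoxia drives the δ34S of the residual seawater sulfate – and of any gypsum precipitated from it – to higher values, which is the ‘spike’ recorded at the Tr-J boundary.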

Diamonds and the deep carbon cycle

When considering the fate of the element carbon and CO2, together with all their climatic connotations, it is easy to forget that they may end up back in the Earth’s mantle from which they once escaped to the surface. In fact all geochemical cycles involve rock, so that elements may find their way into the deep Earth through subduction and could eventually come out again: the ‘logic’ of plate tectonics. Teasing out the various routes by which carbon might get to the mantle is not so easily achieved. Yet one of the ways it escapes is through the strange magmas that produced kimberlite intrusions, in the form of the pure-carbon diamond crystals that kimberlites contain. A variety of petrological and geochemical techniques, some hinging on other minerals that occur as inclusions, has allowed mineralogists to figure out that diamonds may form at depths greater than about 150 km. Most diamonds of gem quality formed in the unusually thick lithosphere beneath the stable, relatively cool blocks of ancient continental crust known as cratons, which extends to about 250 km. But there are a few that reflect formation depths as great as 800 km, spanning two major discontinuities in the mantle (at 410 and 660 km depth). These transition zones are marked by sudden changes in seismic speed due to pressure-induced transformations in the structure and density of the main mantle mineral, olivine.
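As a rough check on what those depths imply, lithostatic pressure can be estimated from depth alone. The sketch below assumes a constant density of 3300 kg m-3 – a deliberate simplification, since density actually increases downwards, so the figure for 660 km is an underestimate:

```python
def lithostatic_pressure_gpa(depth_km, density=3300.0, g=9.8):
    """Approximate lithostatic pressure (GPa) at a given depth,
    treating rock density (kg/m^3) and gravity (m/s^2) as constant."""
    return density * g * depth_km * 1000.0 / 1e9

# ~150 km, the shallow limit of diamond stability: about 5 GPa
print(round(lithostatic_pressure_gpa(150), 1))   # 4.9
# ~660 km, the deeper mantle discontinuity: over 20 GPa
print(round(lithostatic_pressure_gpa(660), 1))   # 21.3
```

Even this crude estimate shows why diamond, the high-pressure form of carbon, demands an origin well below the crust.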

Diamond crystal containing a garnet and other inclusions (Credit: Stephen Richardson, University of Cape Town, South Africa)

Carbon-rich rocks that may be subducted are not restricted to limestones and carbon-rich mudstones. Far greater in mass are the basalts of oceanic crust. Not especially rich in carbon when they crystallised as igneous rocks, their progress away from oceanic spreading centres exposes them to infiltration by ocean water. Once heated, aqueous fluids cause basalts to be hydrothermally altered. Anhydrous feldspars, pyroxenes and olivines react with the fluids to break down to hydrated-silicate clays and dissolved metals. Dissolved carbon dioxide combines with released calcium and magnesium to form pervasive carbonate minerals, often occupying networks of veins. So there has been considerable dispute as to whether subducted sediments or igneous rocks of the oceanic crust are the main source of diamonds. Diamonds with gem potential form only a small proportion of recovered diamonds. Most are only saleable for industrial uses as the ultimate natural abrasive and so are cheaply available for research. This now centres on the isotopic chemistry of carbon and nitrogen in the diamonds themselves and the various depth-indicating silicate minerals that occur in them as minute inclusions, most useful being various types of garnet.

The depletion of diamonds in ‘heavy’ 13C once seemed to match that of carbonaceous shales and the carbonates in fossil shells, but recent data from carbonates in oceanic basalts reveal similar carbon, giving three possible sources. Yet, when their nitrogen-isotope characteristics are taken into account, even diamonds that formed at lithospheric depths do not support a sedimentary source (Regier, M.E. et al. 2020. The lithospheric-to-lower-mantle carbon cycle recorded in superdeep diamonds. Nature, v. 585, p. 234–238; DOI: 10.1038/s41586-020-2676-z). That leaves secondary carbonates in subducted oceanic basalts as the most likely option, their nitrogen isotopes being more reminiscent of clays formed from igneous minerals by hydrothermal processes than of those created by weathering and sedimentary deposition. However, diamonds with the deepest origins – below the 660 km mantle transition zone – suggest yet another possibility, from the oxygen isotopes of their inclusions combined with the C and N isotopes of the diamonds themselves. All three have tightly constrained values that most resemble those of pristine mantle that has had no interaction with crustal rocks. At such depths, unaltered mantle probably contains carbon in the form of metal alloys and carbides. Regier and colleagues suggest that subducted slabs reaching this environment – the lower mantle – may release watery fluids that mobilise carbon from such alloys to form diamonds. So, I suppose, such ultra-deep diamonds may be formed from the original stellar stuff that accreted to form the Earth and never since saw the ‘light of day’.

Monitoring ground motions with satellite radar

By using artificially generated microwaves to illuminate the Earth’s surface it is possible to create images. The technology and the theory behind this radar imaging are formidable. After about 30 years of development using aircraft-mounted transmission and reception antennas, the first high-resolution images from space were produced in the late 1970s. Successive experiments improved and expanded the techniques, and for the last decade radar surveillance has been routine from a number of orbiting platforms. Radar has two advantages over optical remote sensing: being an active system, it works equally effectively day or night; and it penetrates cloud cover, which is almost completely transparent to microwaves with wavelengths between a centimetre and a metre. The images are very different from those produced by visible or infrared radiation, the energy returns being controlled by topography and the roughness of the surface. One of many complicating factors is that images can only be produced by oblique illumination. That, together with the deployment of widely separated transmission and reception antennas, opens up the possibility of extracting very-high-precision (millimetre) measurements of topographic elevation.

In 1992 radar data from two overpasses of the European ERS-1 satellite over California were processed to capture the interference due to changes in ground elevation during the time between the two orbits: the first interferometric synthetic aperture radar, or InSAR, survey. It revealed the regional ground motions that resulted from the magnitude 7.3 Landers earthquake at 4:57 am local time on 28 June 1992. Over the last decade InSAR has become a routine tool for monitoring both lateral and vertical ground movements globally, whether rapid, as in earthquakes, or slow, as in continental plate motions, subsidence or the inflation of volcanoes prior to eruptions. Juliet Biggs and Tim Wright, respectively of the Universities of Bristol and Leeds, UK, have summarised InSAR’s potential (Biggs, J. & Wright, T.J. 2020. How satellite InSAR has grown from opportunistic science to routine monitoring over the last decade. Nature Communications, v. 11, p. 1-4; DOI: 10.1038/s41467-020-17587-6).

Ground motions associated with the 2016 Kaikōura earthquake on the South Island of New Zealand. Each colour fringe represents 11.4 cm of displacement in the radar line-of-sight (LOS) direction. Known faults are shown as thick black lines (Credit: Hamling et al. 2017. Complex multifault rupture during the 2016 Mw 7.8 Kaikōura earthquake, New Zealand. Science, v. 356, article eaam7194; DOI: 10.1126/science.aam7194)

Since the ERS-1 satellite discovered the ground motions associated with the Landers earthquake, InSAR has covered more than 130 large seismic events. Although the data post-dated the damage, they have demonstrated the particular mechanics of each earthquake, allowing theoretical models to be tested and refined. In the image above it is clear that the motions were not associated with a single fault in New Zealand: the Kaikoura earthquake involved a whole network of them, at least at the surface. Probably, displacement jumped from one to another; a complexity that must be taken into account for future events on such notorious fault systems as those in densely populated parts of California and Turkey.
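The fringe spacings quoted in the image captions follow directly from the interferometric geometry: one complete colour cycle corresponds to half the radar wavelength of motion along the line of sight. A minimal sketch of the conversion (the ten-fringe count is purely illustrative, not taken from any of the studies above):

```python
def los_displacement_cm(n_fringes, wavelength_cm):
    """Convert a count of interferometric fringes to line-of-sight (LOS)
    displacement: each fringe represents half a radar wavelength of motion."""
    return n_fringes * wavelength_cm / 2.0

# Sentinel-1's C-band wavelength of ~5.6 cm gives ~2.8 cm of LOS motion per fringe
print(los_displacement_cm(1, 5.6))    # 2.8
# An L-band interferogram at 11.4 cm per fringe: ten fringes imply ~1.14 m of LOS motion
print(los_displacement_cm(10, 22.8))  # 114.0
```

Counting fringes across an interferogram therefore gives the total LOS displacement, which is why a dense pattern of fringes signals metres of coseismic movement.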

East to west speed of the Anatolian micro-plate south of the North Anatolian Fault derived from the first five years of the EU’s Sentinel-1 InSAR constellation. Major known faults shown by black lines (Credit: Emre, O. et al. 2018. Active fault database of Turkey. Bulletin of Earthquake Engineering, v. 16, p. 3229-3275; DOI: 10.1007/s10518-016-0041-2)

Since its inception, GPS has proved capable of monitoring tectonic motions over a number of years, but only for widely spaced, individual ground instruments. Using InSAR alongside years’ worth of GPS measurements helps to extend detected motions to much finer resolution, as the image above shows for Asiatic Turkey. An important parameter needed for prediction of earthquakes is the way in which crustal strain builds up in regions with dangerously active fault systems.

InSAR image of the Sierra Negra volcano on Isabela Island in the Galapagos Archipelago, at the time of a magma body intruding its flanks. Each colour fringe represents 2.8 cm of subsidence in the LOS direction (Credit: Anantrasirichai, N. et al. 2019. A deep learning approach to detecting volcano deformation from satellite imagery using synthetic datasets. Remote Sensing of Environment, v. 230, article 111179; DOI: 10.1016/j.rse.2019.04.032)

Volcanism obviously involves the movement of large masses of magma beneath the surface before eruptions. GPS and micro-gravity measurements show that the charging of a magma chamber causes volcanoes to inflate, so InSAR provides a welcome means of detecting the associated uplift, even if it is only a few centimetres, as shown by the example above from the Galapagos Islands. A volcano’s flanks may bulge, which could presage a lateral eruption or a pyroclastic flow such as that at Mount St Helens in 1980. Truly vast eruptions are associated with calderas, whose ring faults may begin to collapse in advance.

The presence of cavities beneath the surface – formed by natural solution of limestones, deliberately by extraction of brines from salt deposits, or left by subsurface mining – presents subsidence hazards. There have been several series of alarming TV programmes showing sudden sinkhole collapses. Yet every case will have been preceded by years of gradual sagging. InSAR allows risky areas to be identified well in advance of major problems. Indeed, estate agents (realtors), as well as planners, civil engineers and insurers, form a ready market for such surveys.

Natural sparkling water and seismicity

For all manner of reasons, natural springs have fascinated people since at least the Neolithic. The mere fact that clear water emerges from the ground to source streams and great rivers seems miraculous. There are many occurrences of offerings having been made to supernatural spirits thought to guard springs. Even today many cannot resist tossing in a coin, or hanging up a ring, necklace or strip of cloth beside a spring, for luck if nothing else. Hot springs obviously attract attention and bathers. Water from cool ones has been supposed to have health-giving properties for at least a couple of centuries, even if it stinks of rotten eggs or precipitates yellow-brown iron hydroxide slime in the bottom of your cup. Spas now attribute their efficacy to their waters’ chemistry, and that depends on the rocks through which the water has passed. Those in areas of volcanic rock are generally the most geochemically diverse: remember the cringe-making adverts for Volvic, from the volcanic Chaîne des Puys in the French Auvergne. Far more ‘posh’ are naturally carbonated waters that well out full of fizz from pressurised, dissolved CO2. Internationally the best known of these is Perrier, from the limestone-dominated Gard region of southern France. Sales of bottled spring waters are booming, and the obligatory water-chemistry data printed on their labels form a do-it-yourself means of regional geochemical mapping (Dinelli, E. et al. 2010. Hydrogeochemical analysis on Italian bottled mineral waters: Effects of geology. Journal of Geochemical Exploration, v. 107, p. 317–335; DOI: 10.1016/j.gexplo.2010.06.004). But it appears from a study of variations in CO2 output from commercial springs in Italy that they may also help in earthquake prediction (Chiodini, G. et al. 2020. Correlation between tectonic CO2 Earth degassing and seismicity is revealed by a 10-year record in the Apennines, Italy. Science Advances, v. 6, article eabc2938; DOI: 10.1126/sciadv.abc2938).

Italy produces over 12 billion litres of spring water, and the average Italian drinks 200 litres of it every year. There are more than 600 separate brands of acqua minerale produced in Italy, including acqua gassata (sparkling water). Even non-carbonated springs emit CO2, so it is possible to monitor its emission from the deep Earth across wide tracts of the country. High CO2 emissions correlate worldwide with areas of seismicity, associated either with shallow magma chambers or with degassing from subduction zones. There are two possibilities: earthquakes may help release built-up fluid pressure, or fluids such as CO2 may somehow affect rock strength. Giovanni Chiodini and colleagues have been monitoring variations in CO2 release from carbonated spring water in the Italian Apennines since 2009. Over a ten-year period there have been repeated earthquakes in the area, including three of magnitude 6.0 or greater. The worst was that affecting L’Aquila in April 2009, in the aftermath of which six geoscientists were charged with – and eventually acquitted of – multiple manslaughter (see: Una parodia della giustizia?, October 2012). It was this tragedy that prompted Chiodini et al.’s unique programme of 21 repeated samplings of gas discharge rates at 36 springs, matched to continuous seismograph records. The year after the L’Aquila earthquake coincided with high emissions, which then fell to about half the maximum level by 2013. In 2015 emissions began to rise again, reaching a peak before earthquakes of almost the same magnitude, but less devastation, on 24 August and 30 October 2016. Thereafter emissions fell once more. This suggests a linked cycle, which the authors propose is modulated by the ascent of CO2 originating from the melting of carbonates along the subduction zone that dips beneath central Italy. They suggest that the gas accumulates in the lower crust and builds up pressure that is able to trigger earthquakes in the crust above.

The variation in average emissions across central Italy (see figure above) suggests that there are two major routes for degassing from the subduction zone, perhaps focussed by fractures generated by previous crustal tectonic movements. In my opinion, this study does not prove a causal link, although that is a distinct possibility, one that may be verified by extending this survey of degassing and by starting similar programmes in other seismically active areas. Whether or not it might become a predictive tool depends on further work. However, other studies, particularly in China, show that other groundwater phenomena in earthquake-prone areas, such as rises in well-water levels and increases in emissions of radon and methane, correlate in a similar manner.

‘Mud, mud, glorious mud’

Earth is a water world, which is one reason why we are here. But when it comes to sedimentary rocks, mud is Number 1. Earth’s oceans and seas hide vast amounts of mud that have accumulated on their floors since Pangaea began to split apart about 200 Ma ago, during the Early Jurassic. Half the sedimentary record on the continents deposited over the last 4 billion years is made of mudstones. They are the ultimate products of the weathering of crystalline igneous rocks, whose main minerals – feldspars, pyroxenes, amphiboles, olivines and micas – are all prone to breakdown by weakly acidic rainwater and the CO2 dissolved in it; quartz is the exception. Aside from resistant quartz grains, the main solid products of weathering are clay minerals (hydrated aluminosilicates) and iron oxides and hydroxides. Except for silicon, aluminium and ferric iron, most metals end up in solution and ultimately in the oceans. As well as being a natural product of weathering, mud is today generated by several large industries, and humans have been dabbling in natural muds since the invention of pottery some 25 thousand years ago. On 21 August 2020 the journal Science devoted 18 pages to a Special Issue on mud, with seven reviews (Malakoff, D. 2020. Mud. Science, v. 369, p. 894-895; DOI: 10.1126/science.369.6506.894).

Mud carnival in Brazil (Credit: africanews.com)

The rate at which mud accumulates as sediment depends on the rate of erosion, as well as on weathering. Once arable farming had spread widely, deforestation and tilling of the soil sparked an increase in soil erosion and therefore in the transportation and deposition of muddy sediment. The spurt becomes noticeable in the sedimentary record of river deltas, such as that of the Nile, about 5000 years ago. But human influences have also had negative effects, particularly through dams. Harnessing stream flow to power mills and forges generally required dams and leats. During medieval times water power exploded in Europe and has since spread exponentially through every continent except Antarctica, with a similar growth in the capacity of reservoirs. As well as damming drainage, these structures capture mud and other sediment. A study of drainage basins in the north-east USA, along which mill dams quickly spread following European colonisation in the 17th century, revealed their major effects on valley geomorphology and hydrology (see: Watermills and meanders; March 2008). Up to 5 metres of sediment build-up changed stream flow to such an extent that this now almost vanished industry has stoked up the chances of major flooding downstream, along with a host of other environmental changes. The authors of that study are acknowledged in one Mud article (Voosen, P. 2020. A muddy legacy. Science, v. 369, p. 898-901; DOI: 10.1126/science.369.6506.898) because they have since demonstrated that the effects in Pennsylvania are reversible if the ‘legacy’ sediment is removed. The same cannot be expected for truly vast reservoirs once they eventually fill with mud and become useless. While big dams continue to function, alluvium downstream is starved of the fresh mud that over millennia made it highly and continuously productive for arable farming, as in the case of Egypt, the lower Colorado River delta and the lower Yangtze flood plain below China’s Three Gorges Dam.

Mud poses extreme risks when set in motion. Unlike sand, clay deposits saturated with water are thixotropic – when static they appear solid and stable, but as soon as they begin to move en masse they behave as a viscous fluid. Once mudflows slow they solidify again, burying and trapping whatever and whomever they have carried off. This is a major threat from the storage of industrially created muds in tailings ponds, exemplified by a disaster at a Brazilian mine in 2019, first at the site itself and then as the mud entered a river system and eventually reached the sea. Warren Cornwall explains how these failures happen and how they may be prevented (Cornwall, W. 2020. A dam big problem. Science, v. 369, p. 906-909; DOI: 10.1126/science.369.6506.906). Another article in the Mud special issue considers waste from aluminium plants (Service, R.F. 2020. Red alert. Science, v. 369, p. 910-911; DOI: 10.1126/science.369.6506.910). The main ore for aluminium is bauxite, the product of extreme chemical weathering in the tropics. The metal is smelted from aluminium hydroxides formed when silica is leached out of clay minerals, but these hydroxides have to be separated from the clay minerals and iron oxides that form a high proportion of commercial bauxites and are disposed of in tailings dams. The retaining dam of one such waste pond in Hungary gave way in 2010, the thixotropic red clay burying a town downstream and killing 10 people. The mud was highly alkaline and inflicted severe burns on 150 survivors. Service also points out a more positive aspect of clay-rich mud: it can absorb CO2 bubbled through it, forming various non-toxic carbonates and helping draw down the greenhouse gas.

Muddy sediments are chemically complex, partly because their very low permeability hinders oxygenated water from entering them: they maintain highly reducing conditions. Because of this, oxidising bacteria are excluded, so that much of the organic matter deposited in the muds remains as carbonaceous particles. These store carbon extracted from the atmosphere by surface plankton whose remains sink to the ocean floor. Consequently, many mudrocks are potential source rocks for petroleum. Although they do not support oxygen-demanding animals, muds are colonised by bacteria of many different kinds. Some – methanogens – break down organic molecules to produce methane. The metabolism of others depends on sulfate ions in the trapped water, which they reduce to sulfide ions and thus hydrogen sulfide gas: most muds stink. Some of the H2S reacts with metal ions to precipitate sulfide minerals, the most common being pyrite (FeS2). In fact a significant proportion of the world’s copper, zinc and lead resources reside in sulfide-rich mudstones, essential to the economies of Zambia and the Democratic Republic of Congo. But some strange features of mud-loving bacteria are only just emerging. The latest is the discovery of bacteria that build chains up to 5 cm long that conduct electricity (Pennisi, E. 2020. The mud is electric. Science, v. 369, p. 902-905; DOI: 10.1126/science.369.6506.902). The bacterial ‘nanowires’ sprout from minute pyrite grains and transfer electrons released by the oxidation of organic compounds, effectively catalysing sulfide-producing reduction reactions. NB Free oxygen is not necessary for oxidation, which chemically is a loss of electrons, while reduction is a gain of electrons – hence the mnemonic OILRIG (oxidation is loss, reduction is gain). It seems such electrical bacteria are part of a hitherto unsuspected chemical ecosystem that helps hold the mud together as well as participating in a host of geochemical cycles. They may spur an entirely new field of nano-technology, extending, bizarrely, to an ability to generate electricity from moisture in the air.

If you wish to read these reviews in full, you might try using their DOIs at Sci Hub.

Can a supernova affect the Earth System?

The easy answer is yes, simply because chemical elements with a greater relative atomic mass than that of iron are thought to be created in supernovae, when dying giant stars collapse under their own gravity and then explode. Interstellar dust and gas clouds accumulate their debris. If the clouds are sufficiently dense, gravity forms clumps that may become new stars and the planets that surround them. Matter from every once-nearby supernova enters these clouds and thus contributes to the formation of a planet. This was partly proven when pre-solar grains were found in the Murchison meteorite, some of which are as old as 7.5 billion years – 3 billion years older than the Solar System (see: Mineral grains far older than the Solar System; January 15, 2020). Murchison is a carbonaceous chondrite, a class of meteorite that probably contributed abundant carbon-based compounds to the early Earth, setting the stage for the emergence of life. It has been estimated that a near-Earth supernova (closer than 1000 light years) would have noticeable effects on the biosphere, mainly through the effects on atmospheric composition of the associated high-energy gamma-ray burst. That would create sufficient nitrogen oxides to destroy the ozone layer that shields the surface from harmful radiation. There are reckoned to have been 20 nearby supernovae during the last 10 Ma or so, judging from anomalously high levels of the isotope 60Fe in marine sediment layers on the Pacific floor. Yet there is no convincing evidence that they coincided with detectable extinctions in the fossil record. Even so, supernovae have been suggested as a possible cause of more ancient mass extinctions, such as that at the end of the Ordovician Period (but see: The late-Ordovician mass extinction: volcanic connections; July 2017).

Diorama of an Early Devonian reef with tabulate and rugose corals and trilobites (Credit: Richard Bizley)

The Late Devonian is generally accepted to be one of the ‘Big Five’ mass extinction events. However, unlike the others, the event was a protracted decline in biodiversity with several extinction peaks. In particular it marked the end of Palaeozoic reef-building corals. Some have put the episodic faunal decline down to the effects of species moving from one marine basin to another as global sea levels fluctuated: much like the effects of the ‘invasion’ of the coral-eating Crown-of-Thorns starfish that has helped devastate parts of the Great Barrier Reef during present-day global warming (see: Late Devonian: mass extinction or mass invasion? January 2012). Recently, attention has switched to evidence for ultraviolet damage to the morphology of spores found in the strata that display faunal extinction; i.e. to the possibility of the ozone layer having been lost or severely depleted. One suggestion has been sudden peaks in volcanic activity, hinted at by spikes in the abundance of mercury in marine sediments. Brian Fields of the University of Illinois, with colleagues from the USA, UK, Estonia and Switzerland, has closely examined the possibility and testability of a supernova’s influence (Fields, B.D. et al. 2020. Supernova triggers for end-Devonian extinctions. Proceedings of the National Academy of Sciences, v. 117, article 202013774; DOI: 10.1073/pnas.2013774117).

They propose the deployment of mass-spectrometric analysis to seek anomalous isotope abundances in the sediments that contain faunal evidence for accelerated extinction, particularly 146Sm, 235U and the long-lived plutonium isotope 244Pu (80 Ma half-life). They suggest that the separation of the extinction into several events may itself be a clue to a supernova culprit. A gamma-ray burst would arrive at light speed, but dust – containing the detectable isotopes – although likely to be travelling very quickly, would arrive hundreds to thousands of years later, depending on the distance to the supernova. Cosmic rays generated by the supernova, also a possible kill mechanism given a severely depleted ozone layer, travel at about half the speed of light. Three separate arrivals for the products of a single stellar explosion are indeed handy as an explanation for the protracted Late Devonian extinctions. But someone needs to do the analyses. The long-lived plutonium isotope is the best candidate: even detection of a few atoms in a sample would be sufficient proof. That would, however, require a means of ruling out contamination by anthropogenic plutonium, such as analysing the interiors of fossils. But would even such an exotic discovery prove the sole influence of a galactic event?
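The case for 244Pu rests on simple decay arithmetic: with an 80 Ma half-life, a measurable trace can still survive the roughly 360 Ma since the end-Devonian. A quick sketch (the 360 Ma figure is the approximate end-Devonian age, used here for illustration, not a value from the paper):

```python
def surviving_fraction(t_ma, half_life_ma):
    """Fraction N/N0 of a radioisotope remaining after t_ma million years."""
    return 0.5 ** (t_ma / half_life_ma)

# 244Pu (half-life ~80 Ma) delivered at the end-Devonian, ~360 Ma ago:
f = surviving_fraction(360.0, 80.0)
print(f"{f:.3f}")  # ~0.044: a few percent survives, so 'a few atoms' is realistic
```

Shorter-lived candidates would have vanished entirely, which is why 244Pu stands out among the proposed tracers.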

Centenary of the Milanković Theory

A letter in the latest issue of Nature Geoscience (Cvijanovic, I. et al. 2020. One hundred years of Milanković cycles. Nature Geoscience, v. 13, p. 524–525; DOI: 10.1038/s41561-020-0621-2) reveals the background to Milutin Milanković’s celebrated work on the astronomical driver of climate cyclicity. Although a citizen of Serbia, he had been born at Dalj, a Serbian enclave in what was then Austro-Hungary. Just before the outbreak of World War I in 1914, he returned to his native village to honeymoon with his new bride. The assassination (28 June 1914) in Sarajevo of Archduke Franz Ferdinand by the Bosnian-Serb nationalist Gavrilo Princip prompted the Austro-Hungarian authorities to imprison Serbian nationals, and Milanković was interned in a PoW camp. Fortunately, his wife and a former Hungarian colleague managed to negotiate his release, on condition that he served out his captivity – with a right to work, but under police surveillance – in Budapest. It was under these testing conditions that he wrote his seminal Mathematical Theory of Heat Phenomena Produced by Solar Radiation, finished in 1917 but remaining unpublished until 1920 because of a wartime shortage of paper.

Curiously, Milanković was a graduate in civil engineering – parallels here with Alfred Wegener of Pangaea fame, who was a meteorologist – and practised in Austria. Appointed to a professorship in Belgrade in 1909, he had to choose a field of research. To insulate himself from the rampant scientific competitiveness of that era, he chose a blend of mathematics and astronomy to address climate change. During his period as a political prisoner Milanković became the first to explain how the full set of cyclic variations in Earth’s orbit and rotation – eccentricity, obliquity and precession – causes distinct variations in incoming solar radiation at different latitudes, changing on multi-thousand-year timescales. The gist of what might lie behind the cyclicity of ice ages had first been proposed by the Scottish scientist James Croll almost half a century earlier, but it was Milutin Milanković who, as it were, put the icing on the cake. What is properly known as the Milanković-Croll Theory triumphed in the late 1970s as the equivalent of plate tectonics in palaeoclimatology, after Nicholas Shackleton and colleagues teased out the predicted astronomical signals from time series of oxygen-isotope variations in marine-sediment cores.
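The signal-hunting that vindicated the theory can be illustrated with a toy example: build a composite of the three canonical periods (~100 ka eccentricity, ~41 ka obliquity, ~23 ka precession) and recover them spectrally, in the spirit of what Shackleton and colleagues did with real oxygen-isotope series. The amplitudes below are arbitrary illustration values, not real insolation forcing.

```python
import numpy as np

# Toy composite of the three canonical Milankovic periods (in ka); the
# amplitudes are arbitrary, chosen only to make eccentricity dominant.
t = np.arange(0, 800.0, 1.0)                      # last 800 ka, 1 ka steps
signal = (1.0 * np.sin(2 * np.pi * t / 100.0)     # eccentricity-like
          + 0.6 * np.sin(2 * np.pi * t / 41.0)    # obliquity-like
          + 0.4 * np.sin(2 * np.pi * t / 23.0))   # precession-like

# Recover the dominant period from the power spectrum
freqs = np.fft.rfftfreq(t.size, d=1.0)            # cycles per ka
power = np.abs(np.fft.rfft(signal)) ** 2
dominant = 1.0 / freqs[1:][np.argmax(power[1:])]
print(f"Dominant period: {dominant:.0f} ka")      # 100 ka
```

Real isotope records are noisy and unevenly sampled, so the published work needed far more careful spectral methods, but the underlying idea is exactly this recovery of predicted periods.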

Appropriately, while Milanković’s revolutionary ideas still lacked corroborating geological evidence, one of the first to spring to his support was that other resilient scientific ‘prophet’, Alfred Wegener. Neither of them lived to witness their vindication.

The Younger Dryas and volcanic eruptions

The issue of the Younger Dryas (YD) cold ‘hiccup’ between 12.9 and 11.7 thousand years (ka) ago, during deglaciation and general warming, has been the subject of at least 10 Earth-logs commentaries in the last 15 years (you can check them via the Palaeoclimatology logs). I make no apologies for what might seem to be verging on a personal obsession, because it isn’t. That 1200-year episode is bound up with major human migrations on all the northern continents: it may be more accurate to say ‘retreats’. Cooling to near-glacial climates was astonishingly rapid, on the order of a few decades at most. The YD was a shock, and without it the major human transition from foraging to agriculture might, arguably, have happened more than a millennium before it did. There is ample evidence that at 12.9 ka ocean water in the North Atlantic was freshened by a substantial input of meltwater from the decaying ice sheet on northern North America, which shut down the Gulf Stream (see: Tracking ocean circulation during the last glacial period, April 2005; The Younger Dryas and the Flood, June 2006). That meltwater mechanism has many supporters. Less popular is the idea that the YD was triggered by some kind of extraterrestrial impact, based on various lines of evidence assembled by what amounts to a single consortium of enthusiasts. Even more ‘outlandish’ is a hypothesis that it all kicked off with radiation from a coincident supernova in the constellation Vela in the southern sky, which is alleged to have resulted in cosmogenic 14C and 10Be anomalies at 12.9 ka. Another coincidence has been revealed by 12.9 ka-old volcanic ash in a sediment core from a circular volcanogenic lake, or maar, in Germany (see: Did the Younger Dryas start and end at the same times across Europe? January 2014). Because it appeared in a paper that sought to chart climate variations during the YD in a precisely calibrated and continuous core, the implications of that coincidence were not explored fully, until now.

The Laacher See caldera lake in the recently active Eifel volcanic province in western Germany

A consortium of geochemists from three universities in Texas, USA has worked for some time on cave-floor sediments in Hall’s Cave, Texas, which span the YD. In particular, they sought an independent test of evidence for the highly publicised and controversial impact hypothesis, in the form of anomalous concentrations of the highly siderophile elements (HSE) osmium, iridium, platinum, palladium and rhenium (Sun, N. et al. 2020. Volcanic origin for Younger Dryas geochemical anomalies ca. 12,900 cal B.P. Science Advances, v. 6, article eaax8587; DOI: 10.1126/sciadv.aax8587). There is a small HSE ‘spike’ at the 12.9 ka level, but there are three larger ones that precede it and one at about 11 ka. Two isotopes of osmium, expressed as the 187Os/188Os ratio, are often used to check the ultimate source of that element, as can the relative proportions of the HSE compared with those in chondritic meteorites. The presence of spikes other than at the base of the YD does not disprove the extraterrestrial causal hypothesis, but the nature of those that bracket the mini-glacial time span not only casts doubt on it but also suggests a more plausible alternative. The 187Os/188Os data from each spike are ambiguous: they could have arisen either from partial melting of the mantle or from an extraterrestrial impact. But the relative HSE proportions point unerringly to the enriched layers having been inherited from volcanic gas aerosols. Two fit dated major eruptions of the active volcanoes Mount Saint Helens (13.75 to 13.45 ka) and Glacier Peak (13.71 to 13.41 ka) in the Cascades province of western North America. Two others in the Aleutian and Kuril arcs are also likely sources. The spike at the base of the YD exactly matches the catastrophic volcanic blast that excavated the Laacher See caldera in the Eifel region of western Germany, which ejected 6.3 km3 of sulfur-rich magma (containing 2 to 150 Mt of sulfur). Volcanic aerosols blasted into the stratosphere then may have dispersed throughout the Northern Hemisphere: a plausible mechanism for climatic cooling.
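The ambiguity of the 187Os/188Os test comes down to overlapping ranges: mantle-derived and chondritic (meteoritic) osmium both sit near 0.12–0.13, while radiogenic upper-crustal osmium runs to about 1 or more. The threshold and reference values below are approximate, widely quoted figures, not numbers from Sun et al.; a minimal sketch of the logic:

```python
# Approximate, widely quoted 187Os/188Os reference values (illustrative only):
#   chondritic/meteoritic ~0.127, upper mantle ~0.13, upper crust ~1.0+
def os_source_hint(ratio_187_188):
    """Crude source classification for a measured Os isotope ratio."""
    if ratio_187_188 < 0.2:
        # mantle and meteoritic values overlap here, so the ratio alone
        # cannot distinguish a volcanic source from an impact source
        return "mantle or extraterrestrial (ambiguous)"
    return "radiogenic crustal"

print(os_source_hint(0.13))   # mantle or extraterrestrial (ambiguous)
print(os_source_hint(1.2))    # radiogenic crustal
```

This is why the Texas group had to fall back on the relative HSE proportions, which do discriminate between volcanic aerosols and chondritic debris.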

Sun et al. have not established the Laacher See explosion as the sole cause of the Younger Dryas. However, its coincidence with the shutdown of the Gulf Stream would have added a sudden cooling that may have amplified climatic effects of the disappearance of the North Atlantic’s main source of warm surface water. Effects of the Laacher See explosion may have been a tipping point, but it was one of several potential volcanic injections of highly reflective sulfate aerosols that closely precede and span the YD.

See also: Cooling of Earth caused by eruptions, not meteors (Science Daily, 31 July 2020)