Up-to-date review of animals before the Cambrian ‘Explosion’

Artist’s impression of the Ediacaran Fauna (credit: Science)

Since I began this blog in 2000, one of my most regular topics has concerned the animals of the latest Precambrian: the Ediacaran fauna. If you want to browse through the items, use ‘Ediacaran’ in the Search Earth-logs box. New material and ideas about those precursors to modern life forms (and some that are still puzzling) appear on a regular basis. Science journalist Traci Watson has just summarised the latest developments in an essay for Nature. It is a nicely written and copiously illustrated piece with lots of links. Rather than précis her article, I suggest that you go straight to it, if the topic piques your interest.

(Watson, T. 2020. The bizarre species that are rewriting animal evolution. Nature, v. 586, p. 662-665; DOI: 10.1038/d41586-020-02985-z)

Environmental change and early-human innovation

Acheulean biface tools strewn on a bedding surface in the Olorgesailie Basin, Kenya (credit: mmercedes_78 Flickr)

The Olorgesailie Basin in Southern Kenya is possibly the world’s richest source for evidence of ancient stone-tool manufacture. For early humans, it certainly was rich in the necessary resources from which to craft tools. Lying in East Africa’s active rift system, its stratigraphy contains abundant beds of hydrothermal silica (chert), deposited by hot springs, and flows of fine-grained lavas. Its sediments spanning the last 1.2 million years show that the Basin hosted lakes and extensive river systems for the earlier part of this period: it was rich in food resources too. The tools, together with bones from dismembered prey, bear witness to long-term human occupation, but hominin remains themselves have yet to be discovered. The time span suggests early occupation by Homo erectus, who probably manufactured Acheulean biface stone tools in the large quantities that litter the surface at some archaeological sites.

There is a break in the stratigraphic sequence from about 500 to 320 thousand years ago caused by erosion during a period of tectonic uplift. Younger sediments reveal a striking change in archaeology. The earlier large cutting tools give way to a more diverse ‘toolkit’ of smaller tools produced by more sophisticated techniques than those used to make the Acheulean ‘hand axes’. In African archaeological parlance, the <320 ka-old tools mark the onset of the Middle Stone Age (NB not equivalent to the much younger Mesolithic of Europe). The sedimentary gap also marks what seems to have been very different human behaviour. The stone resources used in the 1.2 to 0.5 Ma sequence were local: no more than 5 km from the tool-yielding sites. After the gap a much more varied range of lithologies was used, from as far afield as 95 km. Not only that, but rock unsuitable for tools appears: soft pigments such as hematite.

The foregoing was known from three major papers that appeared in March 2018 (see: Human evolution and revolution in Africa, March 2018 – specifically the section Hominin cultural revolution 320,000 years ago). Now, many members of the teams who produced that published evidence report detailed analysis of samples from a deep drill core through the stratigraphy in a similar, nearby basin (Potts, R. and 21 others 2020. Increased ecological resource variability during a critical transition in hominin evolution. Science Advances, v. 6, article eabc8975; DOI: 10.1126/sciadv.abc8975). As well as calibrating the timing of stratigraphic changes using 40Ar/39Ar dating from 22 volcanic layers, the team analysed sedimentary structures, body- and trace fossils, variations in sediment geochemistry, palaeobotany and carbon isotopes, to suggest variations in environmental conditions and ecology throughout the section in greater detail than previously achieved anywhere in Africa.
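The 40Ar/39Ar calibration the team relied on converts a measured ratio of radiogenic 40Ar to reactor-produced 39Ar into an age via the standard age equation. As a rough sketch of the arithmetic – the J factor and the isotope ratio below are invented for illustration, not values from the paper:

```python
import math

LAMBDA_K40 = 5.543e-10  # total decay constant of 40K, per year

def ar_ar_age(ratio_40_39, j_factor):
    """Standard 40Ar/39Ar age equation: t = (1/lambda) * ln(1 + J * 40Ar*/39Ar)."""
    return math.log(1.0 + j_factor * ratio_40_39) / LAMBDA_K40

# Hypothetical measurement of a tephra layer: J is fixed by co-irradiated
# standards of known age; the ratio comes from the mass spectrometer.
age = ar_ar_age(ratio_40_39=2.0, j_factor=0.0002)
print(f"{age / 1e6:.2f} Ma")  # about 0.72 Ma, within the Olorgesailie time span
```

Dating many such tephra layers brackets the sediments between them, which is how the sequence of environmental changes is pinned to a timescale.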

They conclude that as well as a change in topography resulting from the 500-320 ka period of tectonic uplift and erosion, the climate of this part of East Africa became more unstable. Combined, these two factors transformed the ecosystems of the Olorgesailie Basin. Between 1.2 and 0.5 Ma the Acheulean tool makers inhabited dominantly grassy plains with substantial, permanent lakes – a stable period of 700 thousand years, well suited to large herbivores and thus to these early humans. Tectonic and climatic change disrupted a ‘land of plenty’; the herbivores left, to be replaced by smaller prey animals; vegetation shifted back and forth from grassland to woodland with the unstable climate; lakes became smaller and ephemeral. The problem in linking environmental change to changed human practices in this case, however, is the 180 thousand-year gap in the geological record. Lead author Richard Potts, director of the Human Origins Program at the Smithsonian’s National Museum of Natural History, and his team suggest that the change contributed to the ecological flexibility of the probable Homo sapiens who left the fancier, more diverse tools during the later phase. Yet 1.6 million years beforehand early H. erectus had sufficient flexibility to cross 30 to 40 degrees of latitude and end up on the shores of the Black Sea in Georgia! The likely late-stage H. erectus of Olorgesailie may have moved out around 500 ka ago and sometime later early H. sapiens moved in with new technology developed elsewhere. We know that the earliest known anatomically modern humans lived in Morocco at around 315 ka (see: Origin of anatomically modern humans, June 2017): but we don’t know what tools they had or where they went next. There are all sorts of possibilities that cannot be addressed by even the most intricate analysis of secondary evidence.
The important issue seems, I think, to centre on the transition from H. erectus to H. sapiens, in anatomical, cognitive and behavioural contexts, via some intermediary such as H. antecessor, to which this study can contribute very little. That needs complete stratigraphic records: ironically, the other basin from which the core was drilled is apparently more complete, especially for the 500 to 320 ka ‘gap’. That seems likely to offer more potential. Yet such big questions also demand a much broader brush: perhaps on a continental scale. It’s too early to tell …

See also: Turbulent era sparked leap in human behavior, adaptability 320,000 years ago (Science Daily, 21 October 2020)

How continental keels and cratons may have formed

There is a Byzantine ring to the word craton: hardly surprising, as it stems from the Greek kratos meaning ‘might’ or ‘strength’. Yes, the ancient cores of the continents were well named, for they are mighty. Some continents, such as Africa, have several of them: probably relics of very ancient supercontinents that have split and spread again and again. Cratons overlie what are almost literally the ‘keels’ of continents. Unlike other mantle lithosphere beneath continental crust (150 km thick on average), cratonic lithosphere extends down to 350 km and is rigid. Upper mantle rocks at that depth elsewhere are mechanically weaker and constitute the asthenosphere. Geologists only have evidence from the near-surface on which to base ideas of how cratons formed. Their exposed rocks are always Precambrian in age, from 1.5 to 3.5 billion years old, though in some cases they are covered by a thin veneer of later sedimentary rocks that show little sign of deformation. No cratons formed after the Palaeoproterozoic and they are the main repositories of Archaean rock. Their crust is thicker than elsewhere and dominated at the surface by crystalline rocks of roughly granitic composition. Cratons have the lowest amount of heat flowing out from the Earth’s interior; i.e. heat produced by the decay of long-lived radioactive isotopes of uranium, thorium and potassium. This relative coolness provides an explanation for the rigidity of cratons relative to younger continental lithosphere. Because granitic rocks are well endowed with heat-producing isotopes, the implication of low heat flow is that the deeper parts of the crust are strongly depleted in them. As a result the deep mantle in cratonic keels is at higher pressure and lower temperature than elsewhere beneath the continental surface.
These are ideal conditions for the formation of diamonds in mantle rock, which is why cratonic keels are their main source – the diamonds reach the surface in magma pipes when small amounts of partial melting take place in the lithospheric mantle.
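The step from low surface heat flow to a depleted deep crust rests on a simple heat budget: for a crustal layer of uniform radiogenic heat production, surface heat flow is the mantle contribution plus production times thickness. A minimal sketch, with illustrative numbers rather than measured values:

```python
def surface_heat_flow(q_mantle, heat_production, thickness_km):
    """q_s = q_m + A * h for a crustal layer of uniform heat production.

    q_mantle in mW/m^2, heat_production in uW/m^3, thickness in km;
    conveniently, 1 uW/m^3 acting over 1 km contributes 1 mW/m^2."""
    return q_mantle + heat_production * thickness_km

# Hypothetical end members: a craton with cool mantle and a deep crust
# depleted in heat-producing U, Th and K, versus younger lithosphere
# enriched in those isotopes throughout.
craton = surface_heat_flow(q_mantle=15, heat_production=0.4, thickness_km=40)
younger = surface_heat_flow(q_mantle=30, heat_production=1.0, thickness_km=35)
print(craton, younger)  # 31.0 vs 65.0 mW/m^2
```

Run the sum backwards and the logic of the paragraph appears: if measured cratonic heat flow is low despite a granitic upper crust, the deep crust must be contributing very little, i.e. it is depleted in heat-producing elements.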

The low heat flow through cratons beckons the idea that the heat-producing elements U, Th and K were at some stage driven from depth. An attractive hypothesis is that they were carried in low-density granitic magmas formed by partial melting of mantle lithosphere during the Precambrian that rose to form continental crust. Yet there is an abundance of younger granite plutons that are associated with thinner continental lithosphere. This seeming paradox suggests different kinds of magmagenesis and tectonics during the early Precambrian. Russian and Australian geoscientists have proposed an ingenious explanation (Perchuk, A.L. et al. 2020. Building cratonic keels in Precambrian plate tectonics. Nature, v. 586, p. 395-401; DOI: 10.1038/s41586-020-2806-7). The key to their hypothesis lies in the 2-layered nature of mantle keels beneath cratons, as revealed by seismic studies. Modelling of the data suggests that the layering resulted from different degrees of partial melting in the upper mantle during Precambrian subduction.

Development of a cratonic keel from melt-depleted lithospheric mantle during early Precambrian subduction. Mantle temperature is 250°C higher than it is today. The oceanic lithosphere being subducted in (a) has become a series of stagnant slabs in (b) (credit: Perchuk et al.; Fig. 2)

Perchuk et al. suggest that high degrees of partial melting of mantle associated with subduction zones produced the bulk of magma that formed the Archaean and Palaeoproterozoic crust. This helps explain large differences between the bulk compositions of ancient and more recent continental crust, which involves less melting. The residue left by high degrees of melting of mantle rock in the early Precambrian would have had a lower density than the rest of the mantle. While older oceanic crust at ancient subduction zones would be transformed to a state denser than the mantle as a whole and thus able to sink, this depleted lithospheric mantle would not. In its hot ductile state following partial melting, this mantle would be ‘peeled’ from the associated oceanic crust to be emplaced below. The figure shows one of several outcomes of a complex magmatic-thermomechanical model ‘driven’ by assumed Archaean conditions in the upper mantle and lithosphere. An excellent summary of modern ideas on the start of plate tectonics and evolution of the continents is given by: Hawkesworth, C.J., Cawood, P.A. & Dhuime, B. 2020. The evolution of the continental crust and the onset of plate tectonics. In Topic: The early Earth crust and its formation, Frontiers in Earth Sciences; DOI: 10.3389/feart.2020.00326

Balanced boulders and seismic hazard

The seismometer invented by early Chinese engineer Zhang Heng

China has been plagued by natural disasters since the earliest historical writings. Devastating earthquakes have been a particular menace, the first recorded having occurred in 780 BC. During the Han dynasty in 132 CE, polymath Zhang Heng invented an ‘instrument for measuring the seasonal winds and the movements of the Earth’ (Houfeng Didong Yi, for short): the first seismometer. A pendulum mechanism in a large bronze jar activated one of eight dragons corresponding to the eight cardinal and intermediate compass directions (N, NE, E etc.) so that a bronze ball dropped from its mouth to be caught by a corresponding bronze toad. The device took advantage of unstable equilibrium in which a small disturbance will produce a large change: akin to a pencil balanced on its unsharpened end. Modern seismometers exploit the same basic principle of amplification of small motions. The natural world is also full of examples of unstable equilibrium, often the outcome of chemical and physical weathering. Examples are slope instability, materials that are on the brink of changing properties from those of a solid to a liquid state (thixotropic materials – see: Mud, mud, glorious mud August 2020) and rocks in which stress has built almost to the point of brittle failure: earthquakes themselves. But there are natural curiosities that not only express unstable equilibrium but have maintained it long enough to become … curious! Perched boulders, such as glacial erratics and the relics of slow erosion and weathering, are good examples. Seismicity could easily topple them, so that their continued presence signifies that large enough tremors haven’t yet happened.

A precarious boulder in coastal central California (credit: Anna Rood & Dylan Rood, Imperial College London)

Now it has become possible to judge how long their delicate existence has persisted, giving a clue to the long-term seismicity and thus the likely hazard in their vicinity (Rood, A.H. and 10 others 2020. Earthquake Hazard Uncertainties Improved Using Precariously Balanced Rocks. AGU Advances, v. 1, article e2020AV000182; DOI: 10.1029/2020AV000182). Anna Rood and her partner Dylan Rood of Imperial College London, with colleagues from New Zealand, the US and Australia, found seven delicately balanced large boulders of silica-rich sedimentary rock in seismically active, coastal California. They had clearly withstood earthquake ground motions for some time. Using multiple photographs to produce accurate digital 3D renditions and modelling of resistance to shaking and rocking motions, the authors determined each precarious rock’s probable susceptibility to toppling as a result of earthquakes. How long each had withstood tectonic activity shows up from the mass-spectrometric determination of the isotope beryllium-10 (10Be) produced by cosmic-ray bombardment of the outer layer. Comparing its surface abundance with that in the rock’s interior indicates the time since the boulders’ first exposure to cosmic rays. With allowance for former support from surrounding blocks, this gives a useful measure of the survival time of each boulder – its ‘fragility age’.
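The cosmogenic step can be sketched by inverting the standard build-up equation for 10Be, whose concentration climbs towards a balance between cosmic-ray production and radioactive decay. The concentration and production rate below are invented for illustration, not the paper’s values:

```python
import math

HALF_LIFE_BE10 = 1.387e6              # years
DECAY = math.log(2) / HALF_LIFE_BE10  # decay constant, per year

def exposure_age(conc, prod_rate):
    """Invert N(t) = (P / lambda) * (1 - exp(-lambda * t)) for exposure time t.

    conc: 10Be atoms per gram of rock; prod_rate: atoms per gram per year."""
    return -math.log(1.0 - conc * DECAY / prod_rate) / DECAY

# Hypothetical boulder surface: 1.5e5 atoms/g with a local production rate
# of 5 atoms/g/yr gives a fragility age of roughly 30 thousand years.
print(f"{exposure_age(1.5e5, 5.0) / 1e3:.0f} ka")
```

The interior sample matters because it records any 10Be the rock acquired before it became a free-standing boulder; subtracting it isolates the time the surface has actually been exposed and precariously balanced.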

The boulder data provide a useful means of reducing the uncertainties inherent in conventional seismic hazard assessment, which is based on estimates of the frequency of seismic activity, the magnitude of historic ‘quakes – in most cases over the last few hundred years – and the underlying geology and tectonics. In the study area (near a coastal nuclear power station) the data have narrowed uncertainty down to nearly half that of existing risk models. Moreover, they establish that the highest-magnitude earthquakes to be expected every 10 thousand years (the ‘worst case scenario’) are 27% smaller than otherwise estimated. This is especially useful for coastal California, where the most threatening faults lie offshore and are less amenable to geological investigation.

See also: Strange precariously balanced rocks provide earthquake forecasting clues (SciTech Daily; 1 October 2020)

Supernova at the start of the Pleistocene

This brief note takes up a thread begun in Can a supernova affect the Earth System? (August 2020). In February 2020 the brightness of Betelgeuse – the prominent red star at the top-left of the constellation Orion – dropped in a dramatic fashion. This led to media speculation that it was about to ‘go supernova’, but with the rise of COVID-19 beginning then, that seemed the least of our worries. In fact, astronomers already knew that the red star had dimmed many times before, on a roughly 6.4-year time scale. Betelgeuse is a variable star and by March 2020 it brightened once again: shock-horror over; back to the latter-day plague.

When stars more than ten times the mass of the Sun run out of fuel for the nuclear fusion energy that keeps them ‘inflated’ they collapse. The vast amount of gravitational potential energy released by the collapse triggers a supernova and is sufficient to form all manner of exotic heavy isotopes by nucleosynthesis. Such an event radiates highly energetic and damaging gamma radiation, and flings off dust charged with a soup of exotic isotopes at very high speeds. The energy released could sum to the entire amount of light that our Sun has shone since it formed 4.6 billion years ago. If close enough, the dual ‘blast’ could have severe effects on Earth, and has been suggested to have caused the mass extinction at the end of the Ordovician Period.

Betelgeuse is about 700 light years away, massive enough to become a future supernova, and its rapid consumption of nuclear fuel – it is only about 10 million years old – suggests it will do so within the next hundred thousand years. Nobody knows how close such an event needs to be to wreak havoc on the Earth system, so it is as well to check if there is evidence for such linked perturbations in the geological record. The isotope 60Fe occurs in manganese-rich crusts and nodules on the floor of the Pacific Ocean and also in some rocks from the Moon. It is radioactive with a half-life of about 2.6 million years, so it soon decays away and cannot have been a part of Earth’s original geochemistry or that of the Moon. Its presence may suggest accretion of debris from supernovas in the geologically recent past: possibly 20 in the last 10 Ma, but apparently with no obvious extinctions. Yet that isotope of iron may also be produced by less-spectacular stellar processes, so may not be a useful guide.
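The claim that 60Fe ‘soon decays away’ is just half-life arithmetic: the surviving fraction after time t is 0.5 raised to the power of t divided by the half-life. A quick sketch:

```python
def fraction_remaining(t_years, half_life_years):
    """Surviving fraction of a radioactive isotope after t_years."""
    return 0.5 ** (t_years / half_life_years)

HALF_LIFE_FE60 = 2.6e6  # years

# One half-life ago (~2.6 Ma), half of a deposit's 60Fe would survive:
print(fraction_remaining(2.6e6, HALF_LIFE_FE60))  # 0.5
# Any 60Fe present when Earth formed 4.6 billion years ago is long gone:
# ~1770 half-lives, a fraction near 10**-533 (which underflows to 0.0).
print(fraction_remaining(4.6e9, HALF_LIFE_FE60))
```

So any 60Fe found today must have arrived within the last few million years, which is what makes it a candidate tracer of recent supernova debris.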

There is, however, another short-lived radioactive isotope, of manganese (53Mn), which can only form under supernova conditions. It has been found in ocean-floor manganese-rich crusts by a German-Argentinian team of physicists (Korschinek, G. et al. 2020. Supernova-produced 53Mn on Earth. Physical Review Letters, v. 125, article 031101; DOI: 10.1103/PhysRevLett.125.031101). They dated the crusts using another short-lived cosmogenic isotope, 10Be, produced when cosmic rays transform the atomic nuclei of oxygen and nitrogen; it ended up in the manganese-rich crusts along with any supernova-produced 53Mn and 60Fe. These were detected in parts of four crusts widely separated on the Pacific Ocean floor. The relative proportions of the two isotopes matched that predicted for nucleosynthesis in supernovas, so the team considers their joint presence to be a ‘smoking gun’ for such an event.

The 10Be in the supernova-affected parts of the crusts yielded an age of 2.58 ± 0.43 million years, which marks the start of the Pleistocene Epoch, the onset of glacial cycles in the Northern Hemisphere and the time of the earliest known members of the genus Homo. A remarkable coincidence? Possibly. Yet cosmic rays, many of which come from supernova relics, have been cited as a significant source of nucleation sites for cloud condensation. Clouds increase the planet’s reflectivity and thus act to cool it. This has been a contentious issue in the debate about modern climate change, some refuting their significance on the basis of a lack of correlation between cloud-cover data and changes in the flux of cosmic rays over the last century. Yet, over the five millennia of recorded history there have been no records of supernovas with a magnitude that would suggest they were able to bathe the night sky in light akin to that of daytime. That may be the signature of one capable of affecting the Earth system. Thousands that warrant being dubbed a ‘very large new star’ are recorded, but none that ‘turned night into day’. The hypothesis seems to have ‘legs’, but so too do others, such as the slow influence on oceanic circulation of the formation of the Isthmus of Panama and other parochial mechanisms of changing the transfer of energy around our planet.

See also: Stellar explosion in Earth’s proximity, eons ago. (Science Daily; 30 September 2020.)

Severe COVID-19 associated with Neanderthal inheritance?

News broke in 2010 about evidence from modern and ancient DNA samples showing that some anatomically modern humans (AMH) who left Africa before 40 thousand years ago interbred with the Neanderthal occupants of Eurasia (see: Yes, it seems that they did …; May 2010). In 2011 it turned out that the same had happened when AMH migrants in Asia met with Denisovans (see: Snippets on human evolution; November 2011). The resulting human hybrids went on to spread their new genes as they populated East Asia, Australasia and the Americas. Genomes of thousands of living people from these continents all show varying proportions – but generally less than about 5% – of genetic contributions from one or the other, and in some cases both. Some of the modern humans who remained in Africa also had a similar opportunity. A few Neanderthals did set foot in Africa, sharing their genes with its original inhabitants, but those venturing far from their normal range had already interbred with early ‘out-of-Africa’ AMH migrants about 150 to 100 thousand years ago, as had AMH returning from Eurasia around 20 ka ago. Five widespread groups of modern Africans (but not all) carry up to 0.3% of the Neanderthal genome. Moreover, the ancestors of some living Africans had also exchanged genes with as-yet unknown archaic humans (see also: Everyone now has their Inner Neanderthal; February 2020).

Personally, I reacted to the news with a sense of pride. Neanderthals were tough, survivors of several hundred thousand years of climatic extremes hunting fearsome prey, and they probably had an intellect as advanced as that of the AMH with whom they mingled. My little bit of Neanderthal has conferred several advantages including resistance to Eurasian pathogens, but also has its downside, such as a tendency to depression and excessive blood clotting. But an unedited paper released in advance of publication by the journal Nature suggests that my pride turns out to include an unwelcome element of hubris.

Early in the COVID-19 pandemic, genetic research on over 3000 individuals, whose symptoms were severe enough for them to be hospitalised – a high proportion of whom sadly died – revealed that there was more to their being prone to severe symptoms than age, co-morbidities, gender and ethnicity. A short segment (around 50,000 base pairs long) on the DNA of their chromosome-3 is significantly associated with severe COVID-19 outcomes. Svante Pääbo, the leader of the team that reconstructed the Neanderthal and Denisovan genomes and discovered their presence in living people, and his co-author, Hugo Zeberg, have linked this segment to a stretch in the Neanderthal genome (Zeberg, H. & Pääbo, S. 2020. The major genetic risk factor for severe COVID-19 is inherited from Neanderthals. Nature, accelerated preview; DOI: 10.1038/s41586-020-2818-3). One gene included in the segment plays a role in the human immune response and another is linked to the way the coronavirus invades human cells, but how they influence health outcomes in COVID-19 victims is yet to be established. Sixteen percent of Europeans and 30% of South Asians carry the segment. If infected, they are at higher risk of severe outcomes than the rest of the population. That the relatively small segment still persists 40 ka after Neanderthals became extinct suggests that it has not always conferred high risk: probably because it once conferred significant advantages, perhaps by protecting against other, now extinct pathogens. Such a fitness benefit would have been passed on through natural selection.

The pandemic has yet to run its course and genetic research, such as that by Zeberg and Pääbo, takes second place to that aiming at lessening the effects of the disease and developing vaccines that, hopefully, will wipe it out. The country-by-country statistics of COVID-19 morbidity and mortality are shaky, but an interesting feature may be emerging. Although there have been many cases in Africa, where health services are underdeveloped, it seems that deaths as a proportion of infections are significantly lower there than in the more advanced countries. Hopefully that will continue, perhaps as a result of lower Neanderthal inheritance.

See also: Sample, I. 2020. Neanderthal genes increase risk of serious Covid-19, study claims. (The Guardian, 30 September 2020)

Photosynthesis, arsenic and a window on the Archaean world

At the very base of the biological pyramid, life is far simpler than that which we can see. It takes the form of single cells that lack a nucleus and propagate only by cloning: the prokaryotes, as opposed to eukaryote life such as ourselves. It is almost certain that the first viable life on Earth was prokaryotic, though which of its two fundamental divisions – Archaea or Bacteria – came first is still debated. At present, most prokaryotes metabolise other organisms’ waste or dead remains: they are heterotrophs (from the Greek for ‘other nutrition’). But there are others that are primary producers, getting their nutrition by themselves by exploiting the inorganic world in a variety of ways: the autotrophs. Biogeochemical evidence from the earliest sedimentary rocks suggests that, in the Archaean, prokaryotic autotrophs were dominant, mainly exploiting chemical reactions to gain the energy necessary for building carbohydrates. Some reduced sulfate ions to those of sulfide, others combined hydrogen with carbon dioxide to generate methane as a by-product. Sunlight being an abundant energy resource in near-surface water, a whole range of prokaryotes exploit its potential through photosynthesis. Under reducing conditions some photosynthesisers convert sulfur to sulfuric acid, yet others combine photosynthesis with chemo-autotrophy. Dissolved materials capable of donating electrons – i.e. reducing agents – are exploited in photosynthesis: hydrogen, ferrous iron (Fe2+), reduced sulfur, nitrite, or some organic molecules. Without one group, which uses photosynthesis to convert CO2 and water to carbohydrates and oxygen, eukaryotes would never have arisen, for they depend on free oxygen. A transformation 2400 Ma ago, known as the Great Oxygenation Event (GOE), marked the point in Earth history when oxygen first entered the atmosphere and shallow water (see: Massive event in the Precambrian carbon cycle; January 2012).
It has been shown that the most likely sources of that excess oxygen were extensive bacterial mats in shallow water, made of photosynthesising blue-green bacteria, that produced the distinctive carbonate structures known as stromatolites. Stromatolites had been forming in Archaean sedimentary basins for 1.9 billion years before then, and it has generally been assumed that blue-green bacteria formed those too, before the oxygen that they produced overcame the reducing conditions that had prevailed before the GOE. But that may not have been the case …

Microbial mats made by purple sulfur bacteria in highly toxic spring water flowing into a salt-lake in northern Chile. (credit: Visscher et al. 2020; Fig 1c)

Prokaryotes are a versatile group and new types keep turning up as researchers explore all kinds of strange and extreme environments, for instance: hot springs, groundwater from kilometres below the surface and highly toxic waters. A recent surprise arose from the study of anoxic springs laden with dissolved salts, sulfide ions and arsenic that feed parts of hypersaline lakes in northern Chile (Visscher, P.T. and 14 others 2020. Modern arsenotrophic microbial mats provide an analogue for life in the anoxic Archean. Communications Earth & Environment, v. 1, article 24; DOI: 10.1038/s43247-020-00025-2). This is a decidedly extreme environment for life as we know it, made more challenging by its high-altitude exposure to intense UV radiation. The springs’ beds are covered with bright-purple microbial mats. Interestingly, the water’s arsenic concentration varies from high in winter to low in summer, suggesting that some process removes it, along with sulfur, according to light levels: almost certainly the growth and dormancy of mat-forming bacteria. Arsenic is an electron donor capable of participating in photosynthesis that doesn’t produce oxygen. The microbial mats produce no oxygen whatsoever – uniquely for the modern Earth – but they do form carbonate crusts that look like stromatolites. The mats contain purple sulfur bacteria (PSBs) that are anaerobic photosynthesisers, which use sulfur, hydrogen and Fe2+ as electron donors. The seasonal changes in arsenic concentration match similar shifts in sulfur, suggesting that arsenic is also being used by the PSBs. Indeed it can be: the aio gene, which encodes that capability, is present in the genome of PSBs.

Pieter Visscher and his multinational co-authors argue for prokaryotes similar to modern PSBs having played a role in creating the stromatolites found in Archaean sedimentary rocks. Oxygen-poor, the Archaean atmosphere would have contained no ozone, so that high-energy UV would have bathed the Earth’s surface and its oceans to a considerable depth. Moreover, arsenic is today removed from most surface water by adsorption on iron hydroxides, a product of modern oxidising conditions (see: Arsenic hazard on a global scale; May 2020): it would have been more abundant before the GOE. So the Atacama springs may be an appropriate micro-analogue for Archaean conditions, a hypothesis that the authors address with reference to the geochemistry of sedimentary rocks in Western Australia deposited in a late-Archaean evaporating lake. Stromatolites in the Tumbiana Formation show, according to the authors, definite evidence for sulfur and arsenic cycling similar to that in the Atacama springs. They also suggest that photosynthesising blue-green bacteria (cyanobacteria) may not have been viable under such Archaean conditions, while microbes with a metabolism similar to that of PSBs probably were. The eventual appearance and rise of oxygen once cyanobacteria did evolve, perhaps in the late Archaean, left PSBs and most other anaerobic microbes, to which oxygen spells death, as a minority faction trapped in what became ‘extreme’ environments, though long before then they had ‘ruled the roost’. It raises the question, ‘What if cyanobacteria had not evolved?’. A trite answer would be, ‘I would not be writing this and nor would you be reading it!’. But it is a question that can be properly applied to the issue of alien life beyond Earth, perhaps on Mars. Currently, attempts are being made to detect oxygen in the atmospheres of exoplanets orbiting other stars, as a ‘sure sign’ that life evolved and thrived there too.
That may be a fruitless venture, because life happily thrived during Earth’s Archaean Eon until its closing episodes without producing a whiff of oxygen.

See also: Living in an anoxic world: Microbes using arsenic are a link to early life. (Science Daily, 22 September 2020)

Did early humans learn to cook in Olduvai Gorge?

Olduvai Gorge in northern Tanzania was for many years the stamping ground of the famous Leakey family and many other anthropologists because of its richness in the skeletal remains and the tools of the earliest members of our genus Homo. The first of these, H. habilis, appears in the Olduvai stratigraphic sequence at around 2 Ma: older examples are now known from localities in Kenya and South Africa, taking the species back to about 2.4 Ma. ‘Handy Man’ got the Latinised nickname from its association with abundant stone tools, albeit of a very primitive kind. Oldowan tools are of the ‘let’s bash a couple of pebbles together to get a cutting edge’ kind, dating back to 3.4 Ma (but without evidence of who made them then) and, as easily made disposable tools, they linger in the archaeological record until the Neolithic and even modern times. Homo habilis had a brain size little larger than that of australopithecines, and some authorities deem them to be such.

Olduvai also yielded the earliest of a more ‘brainy’ species, H. ergaster (‘Action Man’), which coexisted with habilis for a few hundred thousand years from around 2 Ma. Initially they also left Oldowan tools. Then, around 1.7 Ma at Olduvai, ergaster began making another stone artefact, the symmetrical bifacial ‘axe’ – probably a multipurpose tool and possibly an object of ritual significance, according to some researchers. Either way, making one required visualising the finished item within a shapeless lump of hard rock, and shaping it demanded great dexterity: it still does for stone knappers. The biface or ‘Acheulean’ tool originated in one of humanity’s greatest cognitive leaps and lay at the centre of the human toolkit for well over a million years. After first being made at Olduvai by African H. ergaster, biface artefacts spread throughout the continent with H. erectus (probably a direct descendant) and beyond its shores with succeeding humans, up to and including the earliest H. sapiens. How did what seems to be a ‘golden spike’ in human culture first take material form at Olduvai? The possibility of an answer stems from pure serendipity and the development of new research tools.

A flint bifacial stone artefact from the Palaeolithic of Norfolk, UK, which incorporates a bivalve fossil

What has drawn several generations of researchers to Olduvai Gorge lies in its geology. As well as the sediments deposited by rivers and in ephemeral lakes that characterised a broadly savannah environment, from 2 to 1 Ma there were at least 31 major volcanic eruptions that deposited lavas and a wide range of volcanic ash beds. These have enabled precise dating to calibrate in minute detail the evolution of a highly productive environment and the flora and fauna that it supported during the early Pleistocene. A recently developed technique involves identification of a variety of fatty acids or lipids – natural oils, waxes and steroids – using gas chromatography. Lipids are the remaining ‘biomarkers’ of plants and microorganisms that once lived in an ecosystem. Ainara Sistiaga of the Massachusetts Institute of Technology and the University of Copenhagen, with colleagues from Denmark, Spain, the US and Tanzania, set out to document ecological variation at Olduvai over a million-year interval using this approach. Among the microbial biomarkers they stumbled on something of possibly great importance (Sistiaga, A. and 10 others 2020. Microbial biomarkers reveal a hydrothermally active landscape at Olduvai Gorge at the dawn of the Acheulean, 1.7 Ma. Proceedings of the National Academy of Sciences, v. 117, published online; DOI: 10.1073/pnas.2004532117).

The palaeo-landscape of Olduvai, as revealed by lipid analysis, was highly diverse and rich in grasses, palms, shrubs, aquatic flora and edible plants, watered by spring-fed rivers. It supported a diverse fauna including large herbivores (attested by faecal biomarkers): ideal for hominin subsistence. Sistiaga et al. focus in their paper on samples from the 1.7 Ma sedimentary and volcanic sequence (the Lower Augitic Sandstones – augite is an igneous pyroxene) that contains remains of H. ergaster, the oldest bifacial artefacts, and dismembered carcases of hominin prey animals. The surprise that emerged from the volcanoclastic sandstones was lipids produced by a range of bacterial species that only thrive in modern hot springs, such as those at Yellowstone and on the North Island of New Zealand. At three sample sites biomarkers were found for one particular hyperthermophile (Thermocrinis ruber), which can only live in water between 80 and 95°C. This and the other heat-loving bacteria also require water whose chemistry, were it cool, would make it drinkable.

Artist’s impression of Homo ergaster cooking an antelope in a 1.7 Ma hot spring at Olduvai Gorge, Tanzania (credit: Tom Björklund, MIT)

The implication is obvious: the ancient Olduvai hot springs were capable of thoroughly cooking meat and vegetables. The importance for humans is that cooking both tenderises meat and tough tubers and roots, and breaks down carbohydrates and proteins to make them more easily and efficiently digestible. The brain capacity of H. ergaster was significantly greater than that of H. habilis: at an average of 800 cm3, about two-thirds that of anatomically modern humans. An increase in the input of easily digested protein, fats and carbohydrates may have fuelled that growth and, in turn, the cognitive capacity of H. ergaster. Not only the rift valley of northern Tanzania but the whole of the East African Rift System is liberally dotted with hydrothermal vents, and also with hominin-rich sites.

See also: Chu, J. 2020. Did our early ancestors boil their food in hot springs? (MIT News, 15 September 2020)

End-Triassic mass extinction: evidence for oxygen depletion on the ocean floor

For British geologists of my generation the Triassic didn’t raise our spirits to any great extent. There’s quite a lot of it on the British Geological Survey 10-miles-to-the-inch geological map (South Sheet), but it is mainly muds, sandstones or pebble beds, generally red and largely bereft of fossils. For the Triassic’s 50 Ma duration following the end-Permian extinction at 252 Ma, Britain was pretty much a desert in the middle of the Pangaea supercontinent. Far beyond our travel grants’ reach, however, the Triassic is a riot, as in the Dolomites of Northern Italy. Apart from a day trip to look at the Bunter Pebble Beds in a quarry near Birmingham and several weeks testing the load-bearing strength of the Keuper mudstones of the West Midlands (not far off zero) in a soil-mechanics lab, we did glimpse the then evocatively named Tea Green Marl (all these stratigraphic names have since vanished). Conveniently it outcrops by the River Severn estuary, below its once-famous suspension bridge and close by the M5 motorway. Despite the Tea Green Marl containing a bone bed with marine reptiles, time didn’t permit us to fossick and, anyway, there was a nearby pub … The formation was said to mark a marine transgression leading on to the ‘far more interesting Jurassic’ – the reason we were in the area. We were never given even a hint that the end of the Triassic was marked by one of the ‘Big Five’ mass extinctions: such whopping events were not part of the geoscientific canon in the 1960s.

Pangaea just before the start of Atlantic opening at the end of the Triassic, showing the estimated extent of the CAMP large igneous province. The pink triangles show the sites investigated by He and colleagues.

At 201.3 Ma around 34% of marine genera disappeared, a toll comparable with that of the K-Pg extinction that ended the Mesozoic Era. Extinction among Triassic terrestrial animals is less quantifiable. Early dinosaurs made it through to diversify hugely during the succeeding Jurassic and Cretaceous Periods. Probably because nothing famous ceased to be or made its first appearance, the Tr-J mass extinction hasn’t captured public attention in the same way as those with the K-Pg or P-Tr acronyms. But it did dramatically alter the course of biological evolution. The extinctions coincided with a major eruption of flood basalts known as the Central Atlantic Magmatic Province (CAMP), whose relics occur on either side of the eponymous ocean, which began to open definitively at about the same time. So, chances are, volcanic emissions are implicated in the extinction event, somehow (see: Is end-Triassic mass extinction linked to CAMP flood basalts? June 2013). Tianchen He of Leeds University, UK, and the China University of Geosciences, together with British and Italian colleagues, has studied three Tr-J marine sections on either side of Pangaea: in Sicily, Northern Ireland and British Columbia (He, T. and 12 others 2020. An enormous sulfur isotope excursion indicates marine anoxia during the end-Triassic mass extinction. Science Advances, v. 6, article eabb6704; DOI: 10.1126/sciadv.abb6704). Their objective was to test the hypothesis that CAMP triggered an episode of oceanic anoxia that drove many marine organisms to extinction. Since eukaryote life depends on oxygen, a deficit would have put the marine animals of the time under great stress. Such events in the later Mesozoic account for global occurrences of hydrocarbon-rich, black marine shales – petroleum source rocks – in which hypoxia thwarted the complete decay of dead organisms over long periods. However, there is scant evidence for such rocks having formed ~201 Ma ago. Such evidence as exists dates to about 150 ka after the Tr-J boundary, in a shallow marine basin in Italy. The problem is compounded by the fact that no ocean-floor sediments of that age survive, thanks to their complete subduction as Pangaea broke apart in later times and its continental fragments drifted to their present configuration.

But there is an indirect way of detecting deep-ocean anoxia, in the inevitable absence of any Triassic and early Jurassic oceanic crust. It emerges from what happens to the stable isotopes of sulfur when bacteria that reduce sulfate (SO₄²⁻) to sulfide (S²⁻) ions are abundant. Such microorganisms thrive in anoxic conditions and produce abundant hydrogen sulfide, which in turn leads to the precipitation of dissolved iron as minute grains of pyrite (FeS₂). This biogenic process selectively excludes 34S from the precipitated pyrite. As a result, at times of widespread marine reducing conditions seawater as a whole becomes enriched in 34S relative to sulfur’s other isotopes. The enrichment is expressed in the unreacted sulfate ions, which may be precipitated as calcium sulfate (gypsum) in marine sediments deposited anywhere: it was on this fractionation that He et al. focussed. They discovered large ‘spikes’ in the relative enrichment of 34S at the Tr-J boundary in the shallow-marine sedimentary sequences exposed at all three sites. Moreover, they were able to estimate that the conditions on the now-vanished bed of the Triassic ocean that gave rise to the spikes lasted for about 50 thousand years. The lack of dissolved oxygen resulted in a five-fold increase in pyrite burial in the now-subducted ocean-floor sediments of that time. The authors suggest that the oxygen depletion stemmed from extreme global warming which, in turn, encouraged methane production by other ocean-floor bacteria and, in a roundabout way, other chemical reactions that consumed free dissolved oxygen. Quite a saga of interactions in the whole Earth system, and one that may hold a dreadful warning for the modern Earth and ourselves.
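For readers unfamiliar with the notation, the 34S enrichment that He et al. traced is conventionally reported in ‘delta’ notation: the per mil (‰) deviation of a sample’s 34S/32S ratio from that of the Vienna Canyon Diablo Troilite (VCDT) reference standard. A sketch of the standard geochemical definition (not spelled out in the paper itself):

```latex
\delta^{34}\mathrm{S} \;=\;
\left(
  \frac{\left(^{34}\mathrm{S}/^{32}\mathrm{S}\right)_{\mathrm{sample}}}
       {\left(^{34}\mathrm{S}/^{32}\mathrm{S}\right)_{\mathrm{VCDT}}}
  \;-\; 1
\right) \times 1000\ \text{\textperthousand}
```

Because bacterial sulfate reduction preferentially processes the lighter 32S, biogenic pyrite carries negative δ34S values, while the residual seawater sulfate, and any calcium sulfate precipitated from it, drifts towards positive values.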

Diamonds and the deep carbon cycle

When considering the fate of the element carbon and of CO2, together with all their climatic connotations, it is easy to forget that they may end up back in the Earth’s mantle from which they once escaped to the surface. In fact all geochemical cycles involve rock, so elements may find their way into the deep Earth through subduction and could eventually come out again: the ‘logic’ of plate tectonics. Teasing out the various routes by which carbon might get to the mantle is not so easily achieved. Yet one of the ways it escapes is through the strange magmas that produced kimberlite intrusions, in the form of the pure-carbon crystals of diamond that kimberlites contain. A variety of petrological and geochemical techniques, some hinging on other minerals that occur as inclusions, has allowed mineralogists to work out that diamonds form at depths greater than about 150 km. Most diamonds of gem quality formed in the unusually thick lithosphere, extending to about 250 km, beneath the stable and relatively cool blocks of ancient continental crust known as cratons. But a few reflect formation depths as great as 800 km, spanning the two major discontinuities in the mantle (at 410 and 660 km depth). These transition zones are marked by sudden changes in seismic speed due to pressure-induced transformations in the structure and density of the main mantle mineral, olivine.

Diamond crystal containing a garnet and other inclusions (Credit: Stephen Richardson, University of Cape Town, South Africa)

Carbon-rich rocks that may be subducted are not restricted to limestones and carbon-rich mudstones. Far greater in mass are the basalts of oceanic crust. Not especially rich in carbon when they crystallised as igneous rocks, their progress away from oceanic spreading centres exposes them to infiltration by ocean water. Once heated, the aqueous fluids hydrothermally alter the basalts. Anhydrous feldspars, pyroxenes and olivines react with the fluids and break down to hydrated silicate clays and dissolved metals. Dissolved carbon dioxide combines with the released calcium and magnesium to form pervasive carbonate minerals, often occupying networks of veins. So there has been considerable dispute as to whether subducted sediments or the igneous rocks of the oceanic crust are the main source of diamonds. Diamonds with gem potential form only a small proportion of those recovered. Most are saleable only for industrial uses as the ultimate natural abrasive, and so are cheaply available for research. This now centres on the isotopic chemistry of carbon and nitrogen in the diamonds themselves and on the various depth-indicating silicate minerals that occur in them as minute inclusions, the most useful being various types of garnet.

The depletion of diamonds in ‘heavy’ 13C once seemed to match that of carbonaceous shales and of the carbonates in fossil shells, but recent data from carbonates in oceanic basalts reveal similar carbon, giving three possibilities. Yet, when their nitrogen-isotope characteristics are taken into account, even diamonds that formed at lithospheric depths do not support a sedimentary source (Regier, M.E. et al. 2020. The lithospheric-to-lower-mantle carbon cycle recorded in superdeep diamonds. Nature, v. 585, p. 234–238; DOI: 10.1038/s41586-020-2676-z). That leaves secondary carbonates in subducted oceanic basalts as the most likely option, their nitrogen isotopes being more reminiscent of clays formed from igneous minerals by hydrothermal processes than of those created by weathering and sedimentary deposition. However, diamonds with the deepest origins – below the 660 km mantle transition zone – suggest yet another possibility, judging from the oxygen isotopes of their inclusions combined with those of C and N in the diamonds themselves. All three have tightly constrained values that most resemble those of pristine mantle that has had no interaction with crustal rocks. At such depths, unaltered mantle probably contains carbon in the form of metal alloys and carbides. Regier and colleagues suggest that subducted slabs reaching this environment – the lower mantle – may release watery fluids that mobilise carbon from such alloys to form diamonds. So, I suppose, such ultra-deep diamonds may be formed from the original stellar stuff that accreted to form the Earth and has never since seen the ‘light of day’.
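The isotopic ‘fingerprints’ compared in this work are, by standard geochemical convention, reported in per mil delta notation, each element referenced to its own international standard: Vienna Pee Dee Belemnite (VPDB) for carbon and atmospheric N2 for nitrogen. A sketch of the definitions (the convention only; the discriminating values themselves come from the paper):

```latex
\delta^{13}\mathrm{C} =
\left(
  \frac{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{sample}}}
       {\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{VPDB}}} - 1
\right) \times 1000\ \text{\textperthousand}
\qquad
\delta^{15}\mathrm{N} =
\left(
  \frac{\left(^{15}\mathrm{N}/^{14}\mathrm{N}\right)_{\mathrm{sample}}}
       {\left(^{15}\mathrm{N}/^{14}\mathrm{N}\right)_{\mathrm{air}}} - 1
\right) \times 1000\ \text{\textperthousand}
```

It is the combination of these independent tracers, rather than any one of them, that allows sedimentary, hydrothermally altered igneous, and pristine-mantle sources to be told apart.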