Ancient mining pollutants in river sediments reveal details of early British economic history

People have been mining in Britain since Neolithic farmers opened the famous Grimes Graves in Norfolk – a large area dotted with over 400 pits up to 13 metres deep. The target was a layer of high-quality black flint in a Cretaceous limestone known as The Chalk. Later, Bronze Age people in Wales and Cornwall drove mine shafts deeper underground to extract copper and tin ores to make the alloy bronze. The Iron Age added iron ore to the avid search for sources of metals. The production and even export of metals and ores eventually attracted the interest of Rome. The Roman invasion of 43 CE, during the reign of Claudius, annexed most of England and Wales to create the province of Britannia, which lasted until the complete withdrawal of Roman forces around 410 CE. Roman imperialism and civilisation depended partly on lead for plumbing and silver coinage to pay its legionaries. Consequently, an important aspect of Rome’s four-century hegemony was mining, especially for lead ore, as far north as the North Pennines. This littered the surface in mining areas with toxic waste. Silver occurs in lead ore in varying proportions. As early as the Bronze Age, metallurgists extracted silver from smelted, liquid lead by a process known as cupellation: the molten Pb-Ag alloy is heated in air well above its melting point, when lead reacts with oxygen to form a solid oxide (2Pb + O2 → 2PbO) while the silver remains molten.

Mine waste in the North Pennine orefield of England. Credit: North Pennines National Landscape

Until recently, historians believed that the fall of the Western Empire brought economic collapse to Britain. Yet archaeologists have revealed that what was originally called the “Dark Ages” (now the Early Medieval Period) had a thriving culture among both the remaining Britons and Anglo-Saxon immigrants. One means of tracking economic activity is to measure the amount of pollutants from mining waste at successive levels in the alluvium of rivers that flow through orefields. Among the best known in Britain is the North Pennine Orefield of North Yorkshire and County Durham, through which substantial rivers flow eastwards, such as the River Ure, which drains the heavily mined valley of Wensleydale. A first attempt at such geochemical archaeology has been made by a British team led by Christopher Loveluck of Nottingham University (Loveluck, C.P. and 10 others 2025. Aldborough and the metals economy of northern England, c. AD 345–1700: a new post-Roman narrative. Antiquity: FirstView, online article; DOI: 10.15184/aqy.2025.10175). Aldborough in North Yorkshire – sited on the Romano-British town of Isurium Brigantum – lies in the Vale of York, a large alluvial plain. The River Ure has deposited sands, silts and muds in the area since the end of the last Ice Age, 11 thousand years ago.

Loveluck et al. extracted a 6 m core from the alluvium on the outskirts of Aldborough, using radiocarbon dating and optically stimulated luminescence of quartz grains to calibrate depth to age in the sediments. The base of the core is Mesolithic in age (~6400 years ago) and it extends upwards to modern times, apparently in an unbroken sequence. Samples were taken for geochemical analysis every 2 cm through the upper 1.12 m of the core, which spans the Roman occupation (43 to 410 CE), the early medieval (420 to 1066 CE), medieval (1066 to 1540 CE), post-medieval (1540 to 1750 CE) and modern (1750 CE to present) periods. Each sample was analysed for 56 elements using mass spectrometry, lead, silver, copper, zinc, iron and arsenic being the elements of most interest in this context. Other data gleaned from the sediment are those of pollen, useful in establishing climate and ecological changes. Unfortunately, the metal data begin in 345 CE, three centuries after the Roman invasion, by which time occupation and acculturation were well established. The authors assume that the Romans began the mining in the North Pennines. They say nothing about pre-mining levels of pollution from the upstream orefield, nor about mining conducted by the Iron Age Brigantes. For this kind of survey it is absolutely essential that a baseline is established for pollution levels under purely natural conditions. The team could have analysed sediment from the Mesolithic, when purely natural weathering, erosion and transport could safely be assumed, but they seem not to have done that.
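The depth-to-age calibration described above amounts to interpolating between dated tie points in the core. A minimal sketch in Python, using entirely hypothetical tie points rather than the paper’s actual radiocarbon/OSL dates:

```python
# Hypothetical (depth in metres, calendar year CE) tie points for a core:
# surface ~modern, 1.12 m ~345 CE, 6 m base ~Mesolithic (~4400 BCE).
tie_points = [(0.0, 2000), (1.12, 345), (6.0, -4400)]

def depth_to_age(depth_m):
    """Linearly interpolate calendar age between dated tie points."""
    for (d0, a0), (d1, a1) in zip(tie_points, tie_points[1:]):
        if d0 <= depth_m <= d1:
            frac = (depth_m - d0) / (d1 - d0)
            return a0 + frac * (a1 - a0)
    raise ValueError("depth outside the cored interval")

# Each 2 cm geochemical sample in the upper 1.12 m then gets an approximate age:
samples = [round(depth_to_age(i * 0.02)) for i in range(0, 57)]
```

Real age-depth models account for dating uncertainty and changing sedimentation rates (often with Bayesian methods), but linear interpolation between tie points conveys the basic idea.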

The team has emphasised that their data suggest that mining for lead continued and even increased through the ‘Dark Ages’, rather than declining in an economic ‘slump’ once the Romans left, as previous historians have suggested. Lead pollution continued at roughly the same levels as during the Roman occupation through the Early Medieval Period, and then rose to up to three times higher after the late 14th century. The data for silver are different. The Ag data from Aldborough show a large ‘spike’ around 427 CE. Interestingly, this is after the Roman withdrawal. Its level in alluvium then ‘flatlines’ at low abundances until the beginning of the 14th century, when again there is a series of ‘booms’. This seems to me to mark sudden spells of coining: after the Romans left, perhaps first to ensure that a money economy remained possible, and later as a means of funding wars with the French in the 14th century. The authors also found changing iron abundances, which roughly double from low Roman levels to an Early Medieval peak and then fall in the 11th century: a result perhaps of local iron smelting. The overall patterns for zinc and copper differ substantially from those of lead, as does that for arsenic, which roughly follows the trend for iron. That might indicate that local iron production was based on pyrite (FeS2), which can contain arsenic at moderate concentrations: pyrite is a common mineral in the ore bodies of the North Pennines.

The paper by Loveluck et al. is worth reading as a first attempt to correlate stratigraphic geochemistry data with episodes in British and, indeed, wider European history. But I think it has several serious flaws, beyond the absence of any pre-Roman geochemical baseline, as noted above. No data are presented for barium (Ba) and fluorine (F) derived from the gangue minerals baryte (BaSO4) and fluorite (CaF2), which outweigh lead and zinc sulfides in North Pennine ore bodies, yet had no use value until the Industrial Revolution.
They would have made up a substantial proportion of mine spoil heaps – useful ores would have been picked out before disposal of gangue – whose erosion, comminution and transport would make contributions to downstream deposition of alluvium consistent with the pace of mining. That is: Ba and F data would be far better guides to industrial activity. There is a further difficulty with such surveys in northern Britain. The whole of the upland areas was subjected to repeated glaciation, which would have gathered exposed ore and gangue and dumped it in till, especially in the numerous moraines exposed in valleys such as Wensleydale. Such sources may yield sediment in periods of naturally high erosion during floods. Finally, the movement of sediment downstream is obviously not immediate, especially when waste is disposed of in large dumps near mines. Therefore phases of active mining may not contribute increased toxic waste far downstream until decades or even centuries later. These factors could easily have been clarified by a baseline study from earlier archaeological periods, when mining was unlikely, into which the Aldborough alluvium core penetrates.

Arsenic: an agent of evolutionary change?

The molecules that make up all living matter are almost entirely (~98 %) made from the elements carbon, hydrogen, oxygen, nitrogen and phosphorus (CHONP), in order of their biological importance. All have low atomic numbers: respectively 6th, 1st, 8th, 7th and 15th in the Periodic Table. Of the 98 elements found in nature, about 7 occur only because they form in the decay schemes of radioactive isotopes. Only the first 83 (up to bismuth) are likely to be around ‘for ever’; the fifteen heavier than that are made up exclusively of unstable isotopes that will eventually disappear, albeit billions of years from now. Such oddities mean that the figure of 92 elements widely accepted to be naturally occurring is not strictly correct. That CHONP are so biologically important stems partly from their abundances in the inorganic world and also from the ease with which they chemically combine. But they are not the only elements that are essential.

About 20 to 25% of the other elements are also literally vital, even though many are rare. Most of the rest are inessential except in vanishingly small amounts that do no damage, and may or may not be beneficial. However, some are highly toxic. Any element can produce negative biological outcomes above certain levels; likewise, deficiencies can result in ill thrift and even death. For the majority of elements, biologists have established concentrations that define deficiency and toxic excess. The World Health Organisation has charted the maximum safe levels of elements in drinking water in milligrams per litre. The lowest safe levels are for thallium (Tl) and mercury (Hg), at 0.002 mg l-1. Other highly toxic elements are cadmium (Cd) (0.003 mg l-1), then arsenic (As) and lead (Pb) (0.01 mg l-1), which ‘everyone knows’ are elements to avoid like the plague. In nature lead is very rarely at levels that are unsafe, because it is insoluble, but arsenic is soluble under reducing conditions and is currently responsible for widespread arsenic-related ailments, especially in the Gangetic plains of India and Bangladesh and similar environments worldwide.
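The guideline figures quoted above lend themselves to a simple screening check. A minimal sketch in Python, using the WHO limits given in the text and a made-up water analysis:

```python
# WHO guideline maxima for drinking water quoted in the text (mg per litre)
who_limits = {"Tl": 0.002, "Hg": 0.002, "Cd": 0.003, "As": 0.01, "Pb": 0.01}

def exceedances(measured):
    """Return the elements whose measured concentration (mg/l) exceeds
    the WHO drinking-water limit."""
    return {el: c for el, c in measured.items()
            if el in who_limits and c > who_limits[el]}

# Hypothetical well-water analysis, mg/l:
sample = {"As": 0.05, "Pb": 0.004, "Cd": 0.001}
flagged = exceedances(sample)  # only arsenic exceeds its limit here
```

Here arsenic, at five times its guideline value, is the only element flagged: exactly the situation faced by many wells in the Gangetic plains.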

Biological evolution has been influenced since life appeared by the availability, generally in water, of both essential and toxic elements. In 2020 Earth-logs summarised a paper about modern oxygen-free springs in Chile in which photosynthetic purple sulfur bacteria form thick microbial mats. The springs contain levels of arsenic that vary from high in winter to low in summer. This phenomenon can only be explained by some process that removes arsenic from solution in summer but not in winter. The purple bacteria’s photosynthesis uses electrons donated by sulfur, iron-2 and hydrogen – the spring water is highly reducing, so they thrive in it. In such a simple environment this suggested a reasonable explanation: the bacteria use arsenic too. In fact they contain a gene (aio) that encodes for such an eventuality. The authors suggested that purple sulfur bacteria may well have evolved before the Great Oxygenation Event (GOE). They reasoned that in an oxygen-free world arsenic, as well as Fe2+, would be readily available in water that was in a reducing state, whereas oxidising conditions after the GOE would suppress both: iron-2 would be precipitated as insoluble iron-3 oxides that in turn efficiently adsorb arsenic (see: Arsenic hazard on a global scale, May 2020).

Colour photograph and CT scans of Palaeoproterozoic discoidal fossils from the Francevillian Series in Gabon. (Credit: El Albani et al. 2010; Fig. 4).

A group of geoscientists from France, the UK, Switzerland and Austria have investigated the paradox of probably high arsenic levels before the GOE and the origin and evolution of life during the Archaean (El Khoury et al. 2025. A battle against arsenic toxicity by Earth’s earliest complex life forms. Nature Communications, v. 16, article 4388; DOI: 10.1038/s41467-025-59760-9). Note that the main, direct evidence for Archaean life is fossilized microbial mats known as stromatolites, some palaeobiologists reckoning they were formed by oxygenic photosynthesising cyanobacteria, others favouring the purple sulfur bacteria (above). The purple sulfur bacteria in Chile and other living prokaryotes that tolerate and even use arsenic in their metabolism clearly evolved that potential, plus the necessary chemical defence mechanisms, probably when arsenic was more available in the anoxic period before the GOE. Anna El Khoury and her colleagues sought to establish whether or not eukaryotes evolved similar defences by investigating the earliest-known examples: the 2.1 Ga old Francevillian biota of Gabon, which post-dates the GOE. They are found in black shales, look like tiny fried eggs and are associated with clear signs of burrowing. The shales contain steranes, breakdown products of steroids, which are unique to eukaryotes.

The fossils have been preserved by precipitation of pyrite (FeS2) granules under highly reducing conditions. Curiously, the cores of the pyrite granules in the fossils are rich in arsenic, yet pyrite grains in the host sediments have much lower As concentrations. This contrast suggests that seawater 2.1 Ga ago held little dissolved arsenic as a result of its containing oxygen. The authors interpret the apparently biogenic pyrite’s arsenic-rich cores as evidence of the organisms having sequestered As into specialised compartments in their bodies: their ancestors must have evolved this efficient means of coping with significant arsenic stress before the GOE. It served them well in the highly reducing conditions of black shale sedimentation. Seemingly, some modern eukaryotes retain an analogue of a prokaryote As detoxification gene.

Provenance of the Stonehenge Altar Stone: a puzzling development

Curiously, two weeks after my previous post about Stonehenge, a wider geochemical study of the Devonian sandstones and a number of Neolithic megaliths in Orkney seems to have ruled out the Stonehenge Altar Stone having been transported from there (Bevins, R.E. et al. 2024. Was the Stonehenge Altar Stone from Orkney? Investigating the mineralogy and geochemistry of Orcadian Old Red sandstones and Neolithic circle monuments. Journal of Archaeological Science: Reports, v. 58, article 104738; DOI: 10.1016/j.jasrep.2024.104738). Since two of the authors of Clarke et al. (2024) were involved in the newly published study, it is puzzling at first sight why no mention was made in that paper of the newer results. The fact that the topic is, arguably, the most famous prehistoric site in the world may have generated a visceral need for an academic scoop, only for it to be dampened a fortnight later. In other words, was there too much of a rush?

The manuscript for Clarke et al. (2024) was received by Nature in December 2023 and accepted for publication on 3 June 2024: a six-month turnaround and plenty of time for peer review. On the other hand, Bevins et al. (2024) was received by the Journal of Archaeological Science on 23 July 2024, accepted a month later and then hit the website a week after that: near light speed in academic publishing. And it does not refer to the earlier paper at all, despite two of its authors having contributed to it. Clarke et al. (2024) was ‘in press’ before Bevins et al. (2024) had even hit the editor’s desk. The work that culminated in both papers was done in the UK, Australia, Canada and Sweden, with some potential for poor communication within the two teams. Whatever the case, the first paper dangled the carrot that Orkney might have been the Altar Stone’s source, on the basis of geochemical evidence that the grains that make up the sandstone could not have been derived from Wales but were from the crystalline basement of NE Scotland. The second shows that this ‘most popular’ Scottish source may be ruled out. To Orcadians and the archaeologists who worked there, long in the shade of vast outpourings from Salisbury Plain, this might come as a great disappointment.

Cyclical sediments of the Devonian Stromness Flagstones. (Credit: Mike Norton, Wikimedia)

The latest paper examines 13 samples from 8 outcrops of the Middle Devonian Stromness Flagstones in the south of Orkney’s main island, close to the Ring of Brodgar and the Stones of Stenness, together with samples from the individual monoliths of each circle. On the main island, however, there is a 500 m sequence of Stromness Flagstones in which 50 cycles of sedimentation can be seen. Each cycle contains sandstone beds of various thicknesses and textures, fluviatile, lacustrine or aeolian in origin. So the Neolithic builders of Orkney had a wide choice, depending on where they erected monumental structures. Almost certainly they chose monolithic stones where they were easiest to find: close to the coast, where exposure can be 100 %. The Ring of Brodgar and the Stones of Stenness are not on the coast, so the enormous stones would have had to be dragged there. There is an ancient pile of stones (Vestra Fiold) about 20 km to the NW from which some of the megaliths may have been extracted, but ancient Orcadians would have been spoilt for choice if they had their hearts set on erecting monoliths!

In a nutshell, the geological case made by Bevins et al. (2024) for rejecting Orkney as the source for the Stonehenge Altar Stone (AS) is as follows: 1. Grains of the mineral baryte (BaSO4) present in the AS are only found in two of the Orkney rock samples. 2. All the Orcadian sandstone samples contain lots of grains of K-feldspar (KAlSi3O8) – common in the basement rocks of northern Scotland – but the AS contains very little. 3. A particular clay mineral (tosudite) is plentiful in the AS, but was not detected in the rock samples from Orkney. Does that rule out a source in Orkney altogether? Well, no: only the outcrops and megalith samples involved in the study are rejected.

Definitively ruling out an Orcadian source would require a monumental geochemical and mineralogical study across Orkney, covering every sedimentary cycle. Searching the rest of the Old Red Sandstone elsewhere in NE Scotland – and there is a lot of it – would be even more likely to be fruitless. Tracking down the source of the basaltic bluestones at Stonehenge was easy by comparison, because they crystallised from a particular magma over a narrow time span and underwent a specific degree of later metamorphism. They were easily matched visually and under the microscope with outcrops in West Wales in the 1920s, and later by geochemical features common to both.

But all that does not detract from the greater importance of the earlier paper (Clarke et al., 2024), which enhanced the idea of Neolithic cultural coherence and cooperation across the whole of Britain. The building of Stonehenge drew people from the far north of Scotland together with those of what are now Wales and England. Since then it hasn’t always been such an amicable relationship …

See also: Addley, E. 2024. Stonehenge tale gets ‘weirder’ as Orkney is ruled out as altar stone origin. The Guardian, 5 September 2024.

Geology cracks Stonehenge mysteries

High resolution vertical aerial photograph of Stonehenge. (Credit: Gavin Hellier/robertharding/Getty)

During the later parts of the Neolithic, the archipelago now known as the British Isles and Ireland was a landscape richly scattered with large stone structures put to ritual and astronomical uses. The early British agricultural societies also built innumerable monuments beneath which people of the time were buried, presumably so that they remained in popular memory as revered ancestors. Best known among these constructions is the circular Stonehenge complex of dressed megaliths, set in the riot of earlier, contemporary and later human-crafted features of the Chalk downs known as Salisbury Plain. Stonehenge itself is now known to have been first constructed some five thousand years ago (~3000 BCE) as an enclosure surrounded by a circular ditch and bank, together with what seems to have been a circular wooden palisade. This was repeatedly modified during the following two millennia. Around 2600 BCE the wooden circle was replaced by one of stone pillars, each weighing about 2 t. These ‘bluestones’ are of mainly basaltic igneous origin, unknown in the Stonehenge area itself. The iconic circle of huge, 4 m monoliths linked by 3 m lintel stones that enclose five even larger trilithons arranged in a horseshoe dates to the following two centuries, down to 2400 BCE, coinciding with the Early Bronze Age, when newcomers from mainland Europe – perhaps from as far away as the steppe of western Russia – began to replace or assimilate the local farming communities. This phase included several major modifications of the earlier bluestones.

It might seem that the penchant for circular monuments began with the Neolithic people of Salisbury Plain, and then spread far and wide across the archipelago in a variety of sizes. However, it seems that the building of sophisticated monuments, including stone circles, began in the Orkney Islands, 750 km further north, and in the even more remote Outer Hebrides of Scotland some two centuries earlier than in southern England. A variety of archaeological and geochemical evidence, such as the isotopic composition of the bones of livestock brought to the vicinity of Stonehenge during its period of development and use, strongly suggests that people from far afield participated. Remarkably, a macehead made of gneiss from the Outer Hebrides turned up in an early Stonehenge cremation burial. Ideas can only have spread during the Neolithic through the spoken word. As it happens, the very stones themselves came from far afield. The earliest set into the circular structure, the much tinkered-with bluestones, were recognised to be exotic over a century ago. They match late Precambrian dolerites exposed in western Wales, first confirmed in the 1980s through detailed geochemical analyses by the late Richard Thorpe and his wife Olwen Williams-Thorpe of the Open University. Some suggested that they had been glacially transported to Salisbury Plain, despite a complete lack of any geological evidence. Subsequently their exact source in the Preseli Hills was found, including a breakage in the quarry that exactly matched the base of one of the Stonehenge bluestones. They had been transported 230 km to the east by Neolithic people, perhaps using several means of transport. The gigantic monoliths, made of ‘sarsen’ – a form of silica-cemented sandy soil or silcrete – were sourced from some 25 km away, where Salisbury Plain is still liberally scattered with them.
Until recently, that seemed to be that as regards provenance, apart from a flat, 5 x 1 m slab of sandstone weighing about 6 t that two fallen trilithon pillars had partly hidden. At the very centre of the complex, this had been dubbed the ‘Altar Stone’, originally supposed to have been brought with the bluestones from west Wales.

The stones of Stonehenge colour-coded by lithology. The sandstone ‘Altar Stone’ lies beneath fallen blocks of a trilithon at the centre of the circle. (Credit: Clarke et al. 2024, Fig 1a)

A group of geologists from Australia and the UK, some of whom have long been engaged with Stonehenge, recently decided to apply sophisticated geochemistry to two fragments broken from the Altar Stone, presumably when the trilithons fell on it (Clarke, A.J.I. et al. 2024. A Scottish provenance for the Altar Stone of Stonehenge. Nature, v. 632, p. 570–575; DOI: 10.1038/s41586-024-07652-1). In particular they examined various isotopes and trace elements in sedimentary grains of zircon, apatite and rutile that weathering of igneous rocks had contributed to the sandstone, along with quartz, feldspar, micas and clay minerals. It turned out that the zircon grains had been derived from Mesoproterozoic and Archaean sources beneath the depositional site of the sediment (the basement). The apatite and rutile grains show clear signs of derivation from 460 Ma old (mid-Ordovician) granites. The basement beneath west Wales is by no stretch of the imagination a repository of any such geology. That of northern Scotland certainly does have such components, and it also has sedimentary rocks derived from such sources: the Devonian of Orkney and mainland Scotland surrounding the Moray Firth. Unlike the lithologically unique bluestones, the sandstone is from a thick and widespread sequence of terrestrial sediments colloquially known as the ‘Old Red Sandstone’. The ORS of NE Scotland was deposited mainly during the Devonian Period (419 to 359 Ma) as a cyclical sequence in a vast, intermontane lake basin. Much the same kinds of rock occur throughout the sequence, so it is unlikely that the actual site where the Altar Stone was selected will ever be known.

To get the Altar Stone (if indeed that is what it once was) to Stonehenge demanded transport from its source over a far more rugged route, three times longer than the journey that brought the bluestones from west Wales: at least 750 km. It would probably have been dragged overland. Many Neolithic experts believe that transport of such a large block by boat is highly unlikely; it could easily have been lost at sea and, perhaps more important, few would have seen it. An overland route, however arduous, would have drawn the attention of everyone en route, some of whom might have been given the honour of helping drag such a burden for part of the way. The procession would certainly have aroused great interest across the full extent of Britain. Its organisers must have known its destination and what it signified, and the task would have demanded fervent commitment. In many respects it would have been a project that deeply unified most of the population. That could explain why people from near and far visited the Stonehenge site, herding livestock for communal feasting on arrival. Evidence is now pointing to the construction and use of the ritual landscape of Salisbury Plain as an all-encompassing joint venture of most of Neolithic Britain’s population. It would come as no surprise if objects whose provenance is even further afield come to light. The complex remained in use and was repeatedly modified during the succeeding Bronze Age, up to 1600 BCE. By that time the genetic group whose idea it was had been assimilated, so that only traces of its DNA remain in modern British people. This seems to have resulted from waves of immigrants carrying steppe (Yamnaya) ancestry, who arrived via Central Europe and brought new technology, including the use of metals and horses.

See also: Gaind, N. & Smith, R. 2024. Stonehenge’s enigmatic centre stone was hauled 800 kilometres from Scotland. Nature, v. 632, p. 484-485; DOI: 10.1038/d41586-024-02584-2; Addley, E. 2024. Stonehenge megalith came from Scotland, not Wales, ‘jaw-dropping’ study finds. The Guardian, 14 August 2024.

Early land plants and oceanic extinctions

In September 2022 Earth-logs highlighted how greening of the continents affected the composition of the continental crust. It now seems that was not the only profound change that the first land plants wrought on the Earth system. Beginning in the Silurian, the spread of vegetation swept across the continents during the Devonian Period. From a height of less than 30 cm among the earliest species, the stature of plants increased greatly by the Late Devonian, with extensive forests of primitive tree-sized progymnosperms, horsetails and sporiferous lycopods up to 10 m tall. Their rapid evolution and spread was not hampered by any large herbivores. It was during the Devonian that tetrapod amphibians emerged from the seas, probably feeding on burgeoning terrestrial invertebrates. The Late Devonian was marked by five distinct episodes of extinction, two of which comprise the Devonian mass extinction: one of the ‘Big Five’. This affected both marine and terrestrial organisms. Neither flood volcanism nor extraterrestrial impact can be linked to the extinction episodes. Rather, they marked a long drawn-out period of repeated environmental stress.

Phytoplankton bloom off the east coast of Scotland ‘fertilised’ by effluents carried by the Tay and Forth estuaries.

One possibility is that a side effect of the greening of the land was the release of massive amounts of nutrients to the seas, which would have resulted in large-scale blooms of phytoplankton whose death and decay depleted oxygen levels in the water column. That is a process seen today where large amounts of commercial fertilisers end up in water bodies, resulting in their eutrophication. Matthew Smart and others from Indiana University-Purdue University, USA and the University of Southampton, UK, geochemically analysed Devonian lake deposits from Greenland and Scotland to test this hypothesis (Smart, M.S. et al. 2022. Enhanced terrestrial nutrient release during the Devonian emergence and expansion of forests: Evidence from lacustrine phosphorus and geochemical records. Geological Society of America Bulletin, v. 134, early release article; DOI: 10.1130/B36384.1).

Smart et al. show that in the Middle and Late Devonian the lacustrine strata show cycles in their abundance of phosphorus (P, an important plant nutrient) that parallel evidence for wet and dry cycles in the lacustrine basins. The cycles show that the same phosphorus abundance patterns occurred at roughly the same times at five separate sites. This may suggest a climatic control forced by changes in Earth’s orbital behaviour, similar to the Milankovitch effect on the Pleistocene climate and at other times in Phanerozoic history. The wet and dry intervals show up in the changing ratio between strontium and copper abundances (Sr/Cu): high values signify wet conditions, low values dry. The wet periods also show high ratios of rubidium to strontium (Rb/Sr), suggesting enhanced weathering, while dry periods show the reverse: decreased weathering.
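The proxy logic described above can be sketched as a small classification function. The threshold separating ‘wet’ from ‘dry’ below is purely illustrative, not a value from Smart et al.:

```python
def classify_sample(sr_ppm, cu_ppm, rb_ppm, wet_threshold=10.0):
    """Classify a lacustrine sample as 'wet' or 'dry' from its Sr/Cu ratio
    (high = wet, low = dry, as described in the text). The Rb/Sr ratio is
    returned too, since wet intervals should also show high Rb/Sr (enhanced
    weathering). The wet_threshold value is hypothetical."""
    sr_cu = sr_ppm / cu_ppm
    rb_sr = rb_ppm / sr_ppm
    return {"Sr/Cu": sr_cu, "Rb/Sr": rb_sr,
            "regime": "wet" if sr_cu >= wet_threshold else "dry"}

# Two invented analyses (ppm): the first reads as a wet interval, the second dry
wet = classify_sample(sr_ppm=200.0, cu_ppm=10.0, rb_ppm=150.0)
dry = classify_sample(sr_ppm=60.0, cu_ppm=15.0, rb_ppm=20.0)
```

Run down a core sample by sample, a classifier like this turns the two elemental ratios into the wet/dry cyclicity that the authors then compare with the phosphorus record.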

When conditions were dry and weathering low, P built up in the lake sediments, whereas during wet conditions P decreased; i.e. it was exported from the lakes, presumably to the oceans. The authors interpret the changes in relation to the fate of plants under the different conditions. Dry periods would result in widespread death of plants and their rotting, which would release their P content to the shallowing, more stagnant lakes. When conditions were wetter, root growth would have increased weathering and more rainfall would flush P from the now deeper and more active lake basins. The ultimate repository of the sediments and freshwater, the oceans, would therefore be subject to boom and bust (wet and dry) as regards nutrition and phytoplankton blooms. Dead phytoplankton, in turn, would use up dissolved oxygen during their decay. That would lead to oceanic anoxia, which also occurred in pulses during the Devonian and may have contributed to the animal extinctions.

See also: Linking mass extinctions to the expansion and radiation of land plants, EurekaAlert 10 November 2022; Mass Extinctions May Have Been Driven by the Evolution of Tree Roots, SciTechDaily, 14 November 2022.

Earliest plate tectonics tied down?

Papers that ponder the question of when plate tectonics first powered the engine of internal geological processes are sure to get read: tectonics lies at the heart of Earth science. Opinion has swung back and forth from ‘sometime in the Proterozoic’ to ‘since the very birth of the Earth’, which is no surprise. There are simply no rocks that formed during the Hadean Eon of any greater extent than 20 km2. Those occur in the 4.2 billion year (Ga) old Nuvvuagittuq greenstone belt on Hudson Bay, and have been grossly mangled by later events. But there are grains of the sturdy mineral zircon (ZrSiO4) that occur in much younger sedimentary rocks, famously from the Jack Hills of Western Australia, whose ages range back to 4.4 Ga, based on uranium-lead radiometric dating. You can buy zircons from Jack Hills on eBay as a result of a cottage industry that sprang up following news of their great antiquity: that is, if you do a lot of mineral separation from the dust and rock chips that are on offer, and they are very small. Given a SHRIMP ion-microprobe mass spectrometer and a lot of other preparation kit, you could date them. Having gone to that expense, you might as well analyse them chemically using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) to check out their trace-element contents. Geochemist Simon Turner of Macquarie University in Sydney, Australia, and colleagues from Curtin University in Western Australia and Geowissenschaftliches Zentrum Göttingen in Germany, have done all this for 32 newly extracted Jack Hills zircons, whose ages range from 4.3 to 3.3 Ga (Turner, S. et al. 2020. An andesitic source for Jack Hills zircon supports onset of plate tectonics in the Hadean. Nature Communications, v. 11, article 1241; DOI: 10.1038/s41467-020-14857-1).
Then they applied sophisticated geochemical modelling to tease out what kinds of Hadean rock once hosted these grains that were eventually eroded out and transported to come to rest in a much younger sedimentary rock.

Artist’s impression of the old-style hellish Hadean (Credit: Dan Durday, Southwest Research Institute)

Zircons only form during the crystallisation of igneous magmas, at around 700°C, the original magma having formed under somewhat hotter conditions – up to 1200°C for mafic compositions. In the course of their crystallising, minerals take in not only the elements of which they are mainly composed – zirconium, silicon and oxygen in the case of zircon – but many other elements that the magma contains in low concentrations. The relative proportions of these trace elements that are partitioned from the magma into growing mineral grains are more or less constant and unique to each mineral, depending on the particular composition of the magma itself. The proportions of these trace elements in a mineral therefore give a clue to the original bulk composition of the parent magma. The Jack Hills zircons mainly reflect an origin in magmas of andesitic composition, intermediate between high-silica granites and basalts that have lower silica contents. Andesitic magmas only form today by partial melting of more mafic rocks under the influence of water-rich fluid driven upwards from subducting oceanic lithosphere. According to the authors, the proportions of trace elements in the zircons could only have formed in this way.
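As a toy illustration of how such partitioning links a zircon’s trace-element content back to its parent magma, here is a minimal sketch in Python. The concentrations and partition coefficients are invented round numbers for illustration only, not values from Turner et al.:

```python
# A mineral/melt partition coefficient D relates the concentration of a
# trace element in a crystallising mineral to that in its parent magma:
#     C_mineral = D * C_melt   =>   C_melt = C_mineral / D
# All numbers below are hypothetical, chosen only to show the arithmetic.

# Hypothetical zircon trace-element concentrations (ppm)
zircon_ppm = {"U": 300.0, "Th": 150.0, "Ce": 10.0}

# Hypothetical zircon/melt partition coefficients
partition_coeff = {"U": 100.0, "Th": 20.0, "Ce": 2.5}

# Invert the partitioning relation to estimate parent-magma concentrations
melt_ppm = {el: zircon_ppm[el] / partition_coeff[el] for el in zircon_ppm}

for el, c in melt_ppm.items():
    print(f"{el}: estimated melt concentration ~{c:.1f} ppm")
```

In real studies a whole suite of such inverted trace-element concentrations is compared with the known signatures of basaltic, andesitic and granitic magmas to identify the most likely parent.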

Interestingly, the 4.2 Ga Nuvvuagittuq greenstone belt contains metamorphosed mafic andesites, though any zircons in them, while used to date those late-Hadean rocks, have yet to be analysed in the manner used by Turner et al. The deep post-Archaean continental crust, broadly speaking, has an andesitic composition, strongly suggesting its generation above subduction zones. Yet that portion of Archaean age is not andesitic on average, but a mixture of three geochemically different rocks. It is referred to as TTG crust from those three rock types (trondhjemite, tonalite and granodiorite). That TTG nature of the most ancient continental crust has encouraged most geochemists to reject the idea of magmatic activity controlled by plate tectonics during the Archaean and, by extension, during the preceding Hadean. What is truly remarkable is that if mafic andesites – such as those implied by the Jack Hills zircons and found in the Nuvvuagittuq greenstone belt – partially melted under high pressures that formed garnet in them, they would have yielded magmas of TTG composition. This, it seems, puts plate tectonics in the frame for the whole of Earth’s evolution since it stabilised several million years after the catastrophic collision that flung off the Moon and completely melted the outer layers of our planet. Up to now, controversy about what kind of planet-wide processes operated then has swung this way and that, often into quite strange scenarios. Turner and colleagues may have opened a new, hopefully more unified, episode of geochemical studies that revisit the early Earth. It could complement the work described in An Early Archaean Waterworld published on Earth-logs earlier in March 2020.

Closure for the K-Pg extinction event?

Anyone who has followed the saga concerning the mass extinction at the end of the Cretaceous Period (~66 Ma ago), which famously wiped out all dinosaurs except for the birds, will know that its cause has been debated fiercely over four decades. On the one hand is the Chicxulub asteroid impact event, on the other the few million years when the Deccan flood basalts of western India belched out gases that would have induced major environmental change across the planet. Support has swung one way or the other: some authorities reckon the extinction was set in motion by volcanism and then ‘polished off’ by the impact, and a very few have appealed to entirely different mechanisms lumped under ‘multiple causes’. One factor behind the continuing disputes is that at the time of the Chicxulub impact the Deccan Traps were merrily pouring out lava and gas. Disentanglement hangs on issues such as what actual processes directly caused the mass killing. Could it have been starvation as dust or fumes shut down photosynthesis at the base of the food chain? What about toxic gases and acidification of ocean water, or being seared by an expanding impact fireball and re-entering incandescent ejecta? Since various lines of evidence show that the late-Cretaceous atmosphere had more oxygen than today’s, the last two may even have set the continents’ vegetation ablaze: there is evidence for soots in the thin sediments that mark the K-Pg boundary. The other unresolved issue is timing: of volcanogenic outgassing; of the impact; and of the extinction itself. A new multi-author paper may settle the whole issue (Hull, P.M. and 35 others 2020. On impact and volcanism across the Cretaceous-Paleogene boundary. Science, v. 367, p. 266-272; DOI: 10.1126/science.aay5055).

Marine temperature record derived from δ18O and Mg/Ca ratios spanning 1.5 Ma that includes the K-Pg boundary: the bold brown line shows the general trend derived from the data points (Credit: Hull et al. 2020; Fig 1)

The multinational team approached the issue first by using oxygen isotopes and the proportion of magnesium relative to calcium (Mg/Ca ratio) in fossil marine shells (foraminifera and molluscs) in several ocean-floor sediment cores, through a short interval spanning the last 500 thousand years of the Cretaceous and the first million years of the Palaeocene. Both measures are proxies for seawater temperature. The results show that close to the end of the Cretaceous temperature rose to about 2°C above the average for the youngest Cretaceous age (the Maastrichtian; 72 to 66 Ma) and then declined. By the time of the mass extinction (66 Ma) sea temperature was back at the average; it rose slightly in the first 200 ka of the Palaeocene, fell back to the average by 350 ka and then rose slowly again.
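The way a δ18O palaeothermometer yields such temperature estimates can be sketched with a simple linear calibration. The coefficients below are illustrative of the kind published for foraminiferal calcite; they are not the calibration Hull et al. actually used:

```python
# Calcite precipitated in warmer water is depleted in 18O, so seawater
# temperature can be estimated from the offset between the d18O of shell
# calcite (delta_c) and that of seawater (delta_w), both in per mil.
# The coefficients a and b are illustrative placeholders; real studies
# use species-specific calibrations.

def d18o_temperature(delta_c, delta_w=0.0, a=16.9, b=4.0):
    """Estimate water temperature (deg C) from calcite d18O (per mil)."""
    return a - b * (delta_c - delta_w)

# With these coefficients, a 0.5 per-mil fall in shell d18O corresponds
# to roughly 2 deg C of warming, the scale of the latest-Cretaceous
# excursion described above
t_base = d18o_temperature(-0.5)
t_warm = d18o_temperature(-1.0)
print(f"warming: {t_warm - t_base:.1f} deg C")
```

The Mg/Ca proxy works on the same principle, with shell Mg/Ca rising with temperature, which is why the two independent measures can cross-check one another in the same cores.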

Changes in carbon isotopes (δ13C) of bulk carbonate samples from the sediment cores (points) and in deep-water foraminifera (shaded areas) across the K-Pg boundary. (Credit: Hull et al. 2020; Fig 2A)

The second approach was to look in detail at carbon isotopes (δ13C) – a measure of changes in the marine carbon cycle – and oxygen isotopes (δ18O) in deep-water foraminifera and bulk carbonate from the sediment cores, in comparison with the duration of Deccan volcanism (66.3 to 65.4 Ma). The δ13C measure from bulk carbonate stays roughly constant through the Maastrichtian, then falls sharply at 66 Ma. The δ13C of the deep-water forams rises to a peak at 66 Ma. The δ18O measure of temperature peaks and declines at the same times as it does for the mixed fossils. Also examined was the percentage of coarse sediment grains in the muds from the cores. That measure is low during the Maastrichtian and then rises sharply at the K-Pg boundary.

Since warming seems almost certainly to be a reflection of CO2 from the Deccan (50% of total Deccan outgassing), the data suggest not only a break in emissions at the time of the mass extinction but also that by then the marine carbon system was drawing down its level in the air. The δ13C data clearly indicate that the ocean was able to absorb massive amounts of CO2 at the very time of the Chicxulub impact and the K-Pg boundary. Flood-basalt eruption may have contributed to the biotic aftermath of the extinction for as much as half a million years. The collapse in the marine fossil record seems most likely to have been due to the effects of the Chicxulub impact. A third study – of the marine fossil record in the cores – undertaken by, presumably, part of the research team found no sign of increased extinction rates in the latest Cretaceous, but considerable changes to the marine ecosystem after the impact. It therefore seems that the K-Pg boundary impact ‘had an outsized effect on the marine carbon cycle’. End of story? As with earlier ‘breakthroughs’, we shall see.

See also: Morris, A. 2020. Earth was stressed before dinosaur extinction (Northwestern University)

How far has geochemistry led geology?

Thin section of a typical granite: clear white and grey grains are quartz (silica); striped black and white is feldspar; coloured minerals are micas (credit: Wikipedia)

In the Solar System the Earth is unique in having a surface split into two distinct categories according to their relative elevation, one covered by water, the other not. More than 60% of its surface – the ocean basins – lies between 2 and 11 km below sea level, with a mean depth of around 4 to 5 km. A bit less than 40% – land and the continental shelves – stands anywhere from 1 km below sea level to almost 9 km above it, with a mean elevation of around 1 km. Elevations between 1 and 2 km below sea level account for only around 3% of the surface area. This combined hypsography and wetness is reckoned to have had a massive bearing on the course of climate and biological evolution, even permitting our own emergence. The Earth’s bimodal elevation stems from the near-surface rock beneath each division having different densities: continental crust is less dense than its oceanic counterpart, and there is very little crustal rock with an intermediate density. Gravitational equilibrium ensures that continents rise higher than ocean floors. That continents were underpinned mainly by rocks of granitic composition and density, roughly speaking, was well known by geologists at the close of the 19th century. What lay beneath the oceans didn’t fully emerge until after the advent of plate tectonics and the notion of simple basaltic magmas pouring out as plates moved apart.

In 1915 Canadian geologist Norman Levi Bowen resolved previously acquired knowledge of the field relations, mineralogy and, to a much lesser extent, the chemistry of igneous rocks – predominantly those on the continents – into a theory to account for the origin of continents. This involved a process of distillation or fractionation in which the high-temperature crystallisation of mafic (magnesium- and iron-rich) minerals from basaltic magma left a residual melt with lower Mg and Fe, higher amounts of alkalis and alkaline earth elements and especially enriched in SiO2 (silica). A basalt with ~50% silica could give rise to rocks of roughly granitic composition (~60% SiO2) – the ‘light’ rocks that buoy up the continental surface – through Bowen’s hypothetical fractional crystallisation. Later authors in the 1930s, including Bowen’s teacher Reginald Aldworth Daly, came up with the idea that granites may form by basalt magma digesting older SiO2-rich rocks or, as suggested by British geologist Herbert Harold Read, by partially melting older crustal rocks. But, of course, this merely shifted the formation of silica-rich crust further back in time.
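The arithmetic behind Bowen’s scheme is a simple mass balance, which can be sketched with illustrative round-number compositions (the 45 wt% cumulate figure and the mass fraction are chosen for the example, not taken from Bowen):

```python
# Mass-balance sketch of fractional crystallisation: removing a crystal
# cumulate with low SiO2 drives the residual melt to higher SiO2.
#     parent = f * cumulate + (1 - f) * residual
# rearranged for the residual melt composition.

def residual_sio2(parent, cumulate, f_removed):
    """SiO2 wt% of the residual melt after removing mass fraction
    f_removed of cumulate crystals from the parent magma."""
    return (parent - f_removed * cumulate) / (1.0 - f_removed)

# A basalt (~50 wt% SiO2) losing two-thirds of its mass as mafic
# cumulate (~45 wt% SiO2) leaves a broadly granitic residual melt
melt = residual_sio2(parent=50.0, cumulate=45.0, f_removed=2.0 / 3.0)
print(f"residual melt: ~{melt:.0f} wt% SiO2")
```

The small silica contrast between basalt and its mafic cumulate is why so much of the parent mass must crystallise out: distilling granitic liquid from basalt is an inefficient business.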

A great deal of field, microscope and, more recently, geochemical lab time has been spent since on to-ing and fro-ing between these hypotheses, as well as on the petrology of basaltic magmas following the arrival of plate theory and the discovery of the predominance of basalt beneath ocean floors. By the 1990s one of the main flaws seen in Bowen’s hypothesis was removed, seemingly at a stroke. Surely, if a basalt magma split into a dense Fe-Mg-rich cumulate in the lower crust and a less dense, SiO2-rich residual magma in the upper continental crust, the bulk density of that crust ought to remain the same as that of the original basalt. But if the dense part somehow fell back into the mantle, what remained would be better able to float proud. A neat idea; but, beyond proxy indications that such delamination had taken place, it could not be proved.
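The delamination argument can be put in numbers with a simple two-component density balance; the densities below are illustrative round figures, not measured values:

```python
# Fractionation alone conserves bulk density: the dense cumulate and the
# buoyant felsic residue average back to the parent basalt. Densities
# are illustrative round numbers in g/cm3.

rho_basalt = 3.0    # parent basaltic crust
rho_cumulate = 3.3  # dense Fe-Mg-rich lower-crustal cumulate
rho_felsic = 2.7    # SiO2-rich residual upper crust

# Volume fraction x of cumulate that makes the two parts average back
# to the parent density:
#     x * rho_cumulate + (1 - x) * rho_felsic = rho_basalt
x = (rho_basalt - rho_felsic) / (rho_cumulate - rho_felsic)
print(f"cumulate volume fraction: {x:.2f}")

# Without delamination the bulk density is unchanged, so the split crust
# floats no higher than the original basalt did...
bulk = x * rho_cumulate + (1 - x) * rho_felsic
print(f"bulk density after fractionation: {bulk:.1f} g/cm3")
# ...but if the cumulate founders into the mantle, only the buoyant
# 2.7 g/cm3 residue remains to stand proud of the ocean basins.
```

Hence the appeal of delamination: it is the only step in the chain that actually lightens the crustal column rather than merely rearranging it.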

Since the 1960s geochemical analysis has become steadily easier, quicker and cheaper, using predominantly X-ray fluorescence and mass-spectrometric techniques. So geochemical data steadily caught up with traditional analysis of thin sections of rock under petrological microscopes. Beginning in the late 1960s igneous geochemistry became almost a cottage industry, and millions of rocks have been analysed. Recently, about 850 thousand multi-element analyses of igneous rocks have been archived with US NSF funding in the EarthChem library. A group from the US universities of Princeton, California – Los Angeles and Wisconsin – Madison extracted analyses of 123 thousand plutonic and 172 thousand volcanic igneous rocks of continental affinities from EarthChem to ‘sledgehammer’ the issue of continent formation into a unified theory (Keller, C.B. et al. 2015. Volcanic-plutonic parity and the differentiation of the continental crust. Nature, v. 523, p. 301-307).

In a nutshell, the authors compared the two divisions in this vast data bank: the superficial volcanic and the deep-crustal plutonic kinds of continental igneous rock. At the heart of their approach is a means of comparative igneous geochemistry with an even longer pedigree, devised in 1909 by British geologist Alfred Harker. The Harker Diagram plots all other elements against the proportionally most variable major component of igneous rocks, SiO2. If the dominant process involved mixing of basalt magma with, or partial melting of, older silica-rich rocks, such simple plots should approximate straight lines. It turns out – and this is not news to most igneous geochemists with far smaller data sets – that the plots deviate considerably from straight lines. So it seems that old Bowen was right all along, the differing deviations from linearity stemming from subtleties in the initial melting of mantle to form basalt and its subsequent fractionation at crustal depths. Keller and colleagues found an unexpected similarity between the plutonic rocks of subduction-related volcanic arcs and those in zones of continental rifting. Both record the influence of water in the process, which lowers the crystallisation temperature of granitic magma so that it freezes before the bulk can migrate to the surface and extrude as lava. Previously, rift-related magmas had been thought to be drier than those formed in arcs, so that silica-rich magma should tend to be extruded.
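The logic of the linearity test can be sketched with synthetic data: two-component mixing gives an exactly straight Harker trend, while fractionation gives a curved one. The curve below is a simple stand-in for a liquid line of descent, not a real petrological model:

```python
# Sketch of the Harker-diagram linearity test: compare how well a
# straight line fits an MgO-vs-SiO2 trend produced by mixing against
# one produced by (here, schematic) fractional crystallisation.
import numpy as np

sio2 = np.linspace(50.0, 75.0, 26)  # wt% SiO2 from basalt to granite

# Two-component mixing between basalt (8 wt% MgO at 50% SiO2) and
# granite (0.5 wt% MgO at 75% SiO2) gives an exactly linear trend...
mgo_mixing = np.interp(sio2, [50.0, 75.0], [8.0, 0.5])

# ...whereas fractionation produces a curved trend (a concave-up
# quadratic standing in for a real liquid line of descent)
mgo_fractionation = 8.0 * ((75.0 - sio2) / 25.0) ** 2

def linear_misfit(x, y):
    """RMS residual of the best straight-line fit through (x, y)."""
    slope, intercept = np.polyfit(x, y, 1)
    return float(np.sqrt(np.mean((y - (slope * x + intercept)) ** 2)))

print(f"mixing misfit:        {linear_misfit(sio2, mgo_mixing):.4f}")
print(f"fractionation misfit: {linear_misfit(sio2, mgo_fractionation):.4f}")
```

Keller and colleagues’ analysis of the EarthChem data is of course far more sophisticated than a residual test, but the curvature of real Harker trends is the core observation behind their support for fractionation.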

But there is a snag: the EarthChem archive hosts only data from igneous rocks formed in the Phanerozoic, most being less than 100 Ma old. It has long been known that continental crust had formed as far back as 4 billion years ago, and many geologists believe that most of the continental crust was in place by the end of the Precambrian, about half a billion years ago. Some even reckon that igneous processes may have been fundamentally different before 3 billion years ago (see: Dhuime, B., Wuestefeld, A. & Hawkesworth, C.J. 2015. Emergence of modern continental crust about 3 billion years ago. Nature Geoscience, v. 8, p. 552-555). So big-science data mining may flatter to deceive and leave some novel questions unanswered.