A hint of proto-Earth that predates Moon formation by giant impact  

Artist’s impression of the impact of a roughly Mars-size planet with the proto-Earth to form an incandescent cloud, from part of which the Moon formed.

Geochemists have gradually built a model of the proportions of the 92 naturally occurring elements that characterise the Solar System. It is based on systematic chemical analysis of meteorites, especially the ‘stony’ ones. One hypothesis for Earth’s formation is that the bulk of the planet chemically resembles a class of meteorites known as C1 carbonaceous chondrites. But there are important deviations between that model and reality. For instance, the relative proportions of the isotopes of several elements in meteorites have been found to differ from those in terrestrial rocks. Because the nuclei of the elements and their individual isotopes formed through nucleosynthesis, much of it in supernovae, such instances are known as ‘nucleosynthetic anomalies’. An example is that of the isotopes of potassium (K), which was investigated by a team of geochemists from the Carnegie Institution for Science in Washington DC, USA and the Chengdu University of Technology, China, led by Nicole Nie (Nie, N.X. et al. 2023. Meteorites have inherited nucleosynthetic anomalies of potassium-40 produced in supernovae. Science, v. 379, p. 372-376; DOI: 10.1126/science.abn1783).

A measure of the magnitude of this nucleosynthetic anomaly is the ratio between the abundances in a sample of potassium’s rarest isotope (40K) and its most common one (39K), divided by the same ratio in an accepted standard of terrestrial rock. Since isotopically identical samples would yield a value of 1, the result has 1 subtracted from it, and is conventionally multiplied by ten thousand, to emphasise anomalies. Samples that are relatively depleted in 40K give negative values, whereas enriched samples give positive values. This measure is signified by ε40K, ε being the Greek letter epsilon. The authors found significant and variable positive ε40K anomalies in carbonaceous chondrite (CC) meteorites compared with non-carbonaceous (NC) meteorites. They also found that ε40K data in terrestrial rocks are quite different from those of CC meteorites. Indeed, they suggested that Earth was more likely to have formed from NC meteoritic material. Clearly, there seems to be something seriously amiss with the hypothesis that Earth largely accreted from C1 carbonaceous chondrites.
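In code, the epsilon convention amounts to a one-line calculation. The sketch below is purely illustrative; the function name and the sample and standard ratios are made-up numbers, not values from Nie et al.:

```python
def epsilon_40K(ratio_sample, ratio_standard):
    """Deviation of a sample's 40K/39K ratio from a terrestrial
    standard, expressed in parts per ten thousand (epsilon units)."""
    return (ratio_sample / ratio_standard - 1.0) * 1.0e4

# Hypothetical ratios: one sample slightly enriched, one slightly
# depleted in 40K relative to the standard
print(epsilon_40K(1.2505e-4, 1.25e-4))   # positive: 40K-enriched
print(epsilon_40K(1.2495e-4, 1.25e-4))   # negative: 40K-depleted
```

The real measured values come from high-precision mass spectrometry, of course; the point here is only the arithmetic of the notation.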

The correlation between ε40K and ε100Ru in meteorites (EC – enstatite chondrites; OC – ordinary chondrites; CC – carbonaceous chondrites), Earth and a geochemically modelled proto-Earth. Credit: Da Wang et al., Fig 2

Three of the authors of Nie et al., together with other researchers from MIT in Cambridge MA and the Scripps Institution of Oceanography in San Diego CA, USA, and ETH in Zurich, Switzerland, have produced more extensive potassium isotope data to examine Earth’s possible discrepancy with the chondritic Earth hypothesis (Da Wang et al. 2025. Potassium-40 isotopic evidence for an extant pre-giant-impact component of Earth’s mantle. Nature Geoscience, v. 18, online article; DOI: 10.1038/s41561-025-01811-3). To better approximate the bulk Earth’s potassium isotopes they analysed a large number of terrestrial rock samples of all kinds and ages to compare with meteorites of different classes. Meteorites also have variable nucleosynthetic anomalies for ruthenium-100 (ε100Ru), so ε40K and ε100Ru may be useful tracers with regard to Earth’s history. But, for some reason, the research group did not analyse ruthenium isotopes in the terrestrial samples.

Most samples of igneous rocks from different kinds of Phanerozoic volcanic provinces (continental flood basalts, island arcs, and ocean-ridge basalts) showed no evidence of anomalous potassium isotopes. However, some young ocean-island basalts from Réunion and Hawaii showed considerable depletion in 40K. A quarter of early Archaean (>3.5 Ga) metamorphosed basaltic rocks from greenstone belts also showed clear 40K depletion. Yet no samples of granitic crust of similar antiquity showed any anomaly, and nor did marine sediments derived from younger continental crust. Even the oldest known minerals – zircon grains from the Jack Hills of Western Australia – showed no anomalies. The authors suggest that both anomalous groups of young and very ancient terrestrial basalts show signs that their parent magmas may have formed by partial melting of substantial mantle bodies of relict proto-Earth. To account for this anomalous mantle, Da Wang et al. suggest from modelling that the proto-Earth’s 40K deficit may have arisen from early accretion of meteorites with that property. Material more enriched in that isotope was added later, perhaps as meteorites or through the impact with a smaller planet that triggered Moon formation. That cataclysm was so huge that it left the Earth depleted in ‘volatile’ elements and in a semi-molten state. It reset Earth’s geochemistry through several processes, including the mixing induced by very large-scale melting. No radiometric dating has penetrated that far back in Earth history. However, in February 2004 Alex Halliday used evidence from several isotopic systems (Pb, Xe, Sr, W) to show that about two thirds of Earth’s final mass may have accreted in the first 11 to 40 Ma of its history.

Curiously, none of the hundreds of meteorites that have been geochemically analysed shows the level of 40K depletion found in the terrestrial samples. As Nicole Nie comments: “… our study shows that the current meteorite inventory is not complete, and there is much more to learn about where our planet came from.”

I’m persuaded to write this by ‘Piso Mojado’. And today – 23rd October – is the anniversary of the Creation of Earth, Life and the Universe in 4004 BCE, according to Archbishop James Ussher (1581-1656) by biblical reckoning, which always tickles me!

See also: Chu, J. 2025. Geologists discover the first evidence of 4.5-billion-year-old “proto Earth”. MIT News, 14 October 2025.

The final closure of the Iapetus Ocean

A symposium hosted by the Royal Society in 1965 aimed to resurrect Alfred Wegener’s hypothesis of continental drift. During the half century since Wegener made his proposal in 1915, it had been studiously ignored by most geologists. The majority had bumbled along with the fixist ideology of their Victorian predecessors. The symposium launched what can only be regarded as a revolution in the Earth sciences. In the three years following the symposium, the basic elements of plate tectonics emerged from a flurry of papers, mainly centred on geophysical evidence. Geology itself became part of this cause célèbre through young scientists eager to make a name for themselves. The geological histories of Britain and eastern North America became beneficiaries only four years after the Royal Society meeting (Dewey, J. 1969. Evolution of the Appalachian/Caledonian Orogen. Nature, v. 222, p. 124-129; DOI: 10.1038/222124a0).

In Britain, John Dewey, like a few other geologists, saw plate theory as key to understanding the many peculiarities revealed by the geological structure, igneous activity and stratigraphy of the early Palaeozoic. These included very different Cambrian and Ordovician fossil assemblages in Scotland and Wales, now only a few hundred kilometres apart. The Cambro-Ordovician of NW Scotland was bounded to the SE by a belt of highly deformed and metamorphosed Proterozoic to Ordovician sediments and volcanics forming the Scottish Highlands. That was terminated to the SE by a gigantic fault zone containing slivers of possible oceanic lithosphere. Beyond it lay the contorted and ‘shuffled’ Ordovician and Silurian sediments of the Southern Uplands of Scotland, whose oldest strata seemed to have ocean-floor affinities, having been deposited on another sliver of ophiolites. A few tens of km south of that there was a very different Lower Palaeozoic stratigraphy in the Lake District of northern England. It included volcanic rocks with affinities to those of modern island arcs. A gap covered by only mildly deformed later Palaeozoic shelf and terrestrial sediments, dotted by inliers of Proterozoic sediments and volcanics, separated the Lake District from yet another Lower Palaeozoic assembly of arc volcanics and marine sediments in Wales. Intervening, in Anglesey, was another block of deformed Proterozoic sediments that also included ophiolites.

Dewey’s tectonic assessment of this geological hodge-podge, which had made Britain irresistible to geologists through the 19th and early 20th centuries, was that it had resulted from blocks of crust (terranes), once separated by thousands of kilometres, being driven into each other. Britain was thus formed by the evolution and eventual destruction of an early Palaeozoic ocean, Iapetus: a product of plate tectonics. Scotland had a fundamentally different history from England and Wales, the unification of several terranes having taken over 150 Ma of diverse tectonic processes. Dewey concluded that the line of final convergence lay at a now-dead major subduction zone – the Iapetus Suture – roughly beneath the Solway Firth. During the 56 years since Dewey’s seminal paper on the Caledonian-Appalachian Orogeny, details and modifications have been added at a rate of around one to two publications per year. The latest seeks to date when and where the accretion of 6 or 7 terranes was finally completed (Waldron, J.W.F. et al. 2025. Is Britain divided by an Acadian suture? Geology, v. 53, p. 847-852; DOI: 10.1130/G53431.1).

Kernel density plots – smoothed versions of histograms – of detrital zircon ages in Silurian and Devonian sandstones from Wales. The bracketed words are stratigraphic epochs. Credit: Waldron et al. 2025, Fig 3A

John Waldron and colleagues from the University of Alberta and Acadia University in Canada and the British Geological Survey addressed this issue by extracting zircons from four late Silurian and early Devonian sandstones in North and South Wales. These sediments had been deposited between 433 and 393 Ma ago at the southernmost edge of the British Caledonide terrane assemblage, towards the end of terrane assembly. The team dated roughly 250 zircons from each sandstone using the 207Pb/206Pb and 206Pb/238U methods. Each produced a range of ages, presumed to be those of the igneous rocks from whose magmas the zircon grains had crystallised. These data are expressed as plots of probability density against age. Each pattern of ages is assumed to be a ‘fingerprint’ of the continental crust from which the zircons were eroded and transported to their resting place in their host sediment. In this case, the researchers were hoping to see signs of continental crust from the other side of the Caledonian orogen; i.e. from the Precambrian basement of the continent of Laurentia.
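Kernel density plots of this kind can be mimicked in a few lines of numpy: one Gaussian kernel per dated grain, summed over an age grid. The sketch below uses invented grain ages and an arbitrary bandwidth, not the authors’ data:

```python
import numpy as np

def kde(ages, grid, bandwidth=25.0):
    """Kernel density estimate of a detrital zircon age spectrum:
    one Gaussian kernel (width = bandwidth, in Ma) per dated grain,
    normalised so the curve integrates to 1."""
    z = (grid[:, None] - ages[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (
        len(ages) * bandwidth * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
# Hypothetical grain ages: a large ~600 Ma population plus a smaller
# ~1000 Ma one, loosely echoing the peaks discussed in the text
ages = np.concatenate([rng.normal(600, 30, 150), rng.normal(1000, 40, 100)])
grid = np.linspace(0, 2200, 1101)          # 0-2.2 Ga at 2 Ma spacing
density = kde(ages, grid)
print(grid[np.argmax(density)])            # main peak lies near 600 Ma
```

The ‘fingerprint’ comparison between sandstones then amounts to comparing such curves, peak for peak.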

The three late-Silurian sediments showed distinct zircon-age peaks around 600 Ma and a spread of smaller peaks extending to 2.2 Ga. This tallied with a sediment source in Africa, from which the southernmost Caledonian terrane was said to have split and moved northwards. The Devonian sediment lacked signs of such an African ‘heritage’ but had a prominent age peak at about 1.0 Ga, absent from the Welsh Silurian sediments. Not only is this a sign of different sediment provenance, but it also closely matches the known age of a widespread magmatic pulse in the Laurentian continent. So sediment transport from the opposite side of the Iapetus Ocean, across the entire Caledonian orogenic belt, became possible only after the end of the Silurian Period at around 410 Ma. There must have been an intervening barrier to sediment movement from Laurentia before that, such as deep ocean water further north. Previous studies of more northerly Caledonian terranes show that Laurentian zircons arrived in the Southern Uplands of Scotland and the English Lake District around 432 Ma, in the mid-Silurian. Waldron et al. suggest on these grounds that the suture marking the final closure of the Iapetus Ocean lies between the English Lake District and Anglesey, rather than beneath the Solway. They hint that the late-Silurian to early-Devonian granite magmatism that permeated the northern parts of the Caledonian-Appalachian orogen formed above northward subduction of the last relics of Iapetus, which presaged the widespread crustal thickening known in North America as the Acadian orogeny.

Readers interested in this episode of Earth history should download Waldron et al.’s paper for its excellent graphics, which cannot be reproduced adequately here.

Gravity survey reveals signs of Archaean tectonics in Canadian Shield

Much of the Archaean Eon is represented by cratons, which occur at the cores of the continental parts of tectonic plates. Having low geothermal heat flow, they are the most rigid parts of the continental crust. The Superior Craton makes up much of the eastern part of the Canadian Shield and formed during the Archaean, from ~4.3 to 2.6 billion years (Ga) ago. Covering an area in excess of 1.5 million km2, it is the world’s largest craton. One of its most intensely studied components is the Abitibi Terrane, which hosts many mines. A granite-greenstone terrain, it consists of volcano-sedimentary supracrustal rocks in several typically linear greenstone belts separated by areas of mainly intrusive granitic bodies. Many Archaean terrains show much the same ‘stripey’ aspect on the grand scale. Greenstone belts are dominated by metamorphosed basaltic volcanic rock, together with lesser proportions of ultramafic lavas and intrusions, and overlying metasedimentary rocks, also of Archaean age. Various hypotheses have been suggested for the formation of granite-greenstone terrains, the latest turning to a process of ‘sagduction’. However, the relatively flat nature of cratonic areas tells geologists little about their deeper parts. Cratons tend to have resisted large-scale later deformation by their very nature, so none have been tilted or wholly obducted onto other such stable crustal masses during later collisional tectonic processes. Geophysics does offer insights, however, through seismic profiling and geomagnetic and gravity surveys.

The Geological Survey of Canada has produced masses of geophysical data as a means of coping with the vast size and logistical challenges of the Canadian Shield. Recently five Canadian geoscientists have used gravity data from the Canadian Geodetic Survey to model the deep crust beneath the huge Abitibi granite-greenstone terrain, specifically addressing variations in its density in three dimensions. They also used cross sections produced by seismic reflection and refraction data along 2-D survey lines (Galley, C. et al. 2025. Archean rifts and triple-junctions revealed by gravity modeling of the southern Superior Craton. Nature Communications, v. 16, article 8872; DOI: 10.1038/s41467-025-63931-z). The group found that entirely new insights emerge from the variation in crustal density down to its base at the Moho (Mohorovičić discontinuity). These data show large linear bulges in the Moho separated by broad zones of thicker crust.
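A first-order feel for why Moho relief shows up in gravity data comes from the Bouguer slab approximation, Δg = 2πGΔρΔh. This is emphatically not the authors’ 3-D inversion, just the back-of-envelope relation; the 400 kg/m³ crust–mantle density contrast and 3 km Moho upwarp below are illustrative assumptions:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def slab_anomaly_mgal(moho_upwarp_m, density_contrast=400.0):
    """Gravity effect (mGal) of raising the Moho by moho_upwarp_m metres,
    via the infinite Bouguer slab approximation dg = 2*pi*G*drho*dh.
    density_contrast is the mantle-crust density difference in kg/m^3;
    1 mGal = 1e-5 m/s^2."""
    return 2.0 * math.pi * G * density_contrast * moho_upwarp_m / 1.0e-5

# A hypothetical 3 km Moho 'bulge' with a 400 kg/m^3 contrast gives
# an anomaly of some tens of mGal - easily measurable in a survey
print(round(slab_anomaly_mgal(3000.0), 1))
```

Real density modelling has to untangle such Moho signals from shallower density variations, which is what the 3-D inversion constrained by seismic cross sections is for.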

Geology of the Abitibi Terrane (upper); depth to the Moho beneath the Abitibi Terrane with rifts and VMS deposits superimposed (lower). Credit: After Galley et al. Figs 1 and 5.

Galley et al. suggest that the zones are former sites of lithospheric extensional tectonics and crustal thinning: rifts from which ultramafic to mafic magmas emerged. They consider them to be akin to modern mid-ocean and continental rifts. Most of the rifts roughly parallel the trend of the greenstone belts and the large, long-lived faults that run west to east across the Abitibi Terrane. This suggests that rifts formed under the more ductile lithospheric conditions of the Neoarchaean set the gross fabric of the granites and greenstones. Moreover, there are signs of two triple junctions where three rifts converge: fundamental features of modern plate tectonics. However, both rifts and junctions are on a smaller scale than those active at present. The rift patterns suggest plate tectonics in miniature, perhaps indicative of more vigorous mantle convection during the Archaean Eon.

There is an interesting spin-off. The Abitibi Terrane is rich in a variety of mineral resources, especially volcanic massive-sulfide (VMS) deposits. Most of them are associated with the suggested rift zones. Such deposits form through sea-floor hydrothermal processes, which Archaean rifting and triple junctions would have focused to generate clusters of ‘black smokers’ precipitating large amounts of metal sulfides. Galley et al.’s approach is set to be applied to other large cratons, including those that formed earlier in the Archaean: the Pilbara and Kaapvaal Cratons of Australia and South Africa. That could yield better insights into earlier tectonic processes and test some of the hypotheses proposed for them.

See also: Archaean Rifts, Triple Junctions Mapped via Gravity Modeling. Scienmag, 6 October 2025

A possible Chinese ancestor for Denisovans, Neanderthals and modern humans

Assigning human fossils older than around 250 ka to different groups of the genus Homo depends entirely on their physical features, because ancient DNA has yet to be extracted and analysed from specimens older than that. The phylogeny of older human remains also rests largely on the bones that make up their heads: 21 that are fixed together in the skull and face, plus the moveable lower jaw or mandible. Far more teeth than crania have been discovered, and considerable weight is given to differences in human dentition. Teeth are not bones, but they are much more durable, having no fibrous structure, and they vary a great deal. The main problem for palaeoanthropologists is that living humans are very diverse in their cranial characteristics, so it is reasonable to infer that all ancient human groups were characterised by such polymorphism and may have overlapped in their physical appearance. A measure of this is that assigning fossils to anatomically modern humans, i.e. Homo sapiens, relies to a large extent on whether or not the mandible juts out to define a chin. All earlier hominins, and indeed all other living apes, might be regarded as ‘chinless wonders’! This pejorative term suggests dim-wittedness to most people, and anthropologists have had to inure themselves to such crude cultural conjecture.

The extraction, sequencing and comparison of ancient DNA from human fossils since 2010 has revealed that three distinct human species coexisted and interbred in Eurasia. Several well preserved examples of ancient Neanderthals and anatomically modern humans (AMH) have had their DNA sequenced, but a Denisovan genome has only emerged from a few bone fragments from the Denisova Cave in western Siberia. Whereas Neanderthals have well-known robust physical characters, until 2025 palaeoanthropologists had little idea of what Denisovans may have looked like. Then proteins and, most importantly, mitochondrial DNA (mtDNA) were extracted from a very robust skull found around 1931 in Harbin, China, dated at 146 ka. Analysis of the mtDNA and proteins, from dental plaque and bone respectively, reveal that the Harbin skull is likely to be that of a Denisovan. Previously it had been referred to as Homo longi, or ‘Dragon Man’, along with several other very robust Chinese skulls of a variety of ages.

The distorted Yunxian cranium (right) and its reconstruction (middle) [Credit: Guanghui Zhao] compared with the Harbin Denisovan cranium (left) [Hebei Geo University]

The sparse genetic data have been used to suggest the times when the three coexisting groups diverged. DNA in Y chromosomes from Denisovans and Neanderthals suggests that the two lineages split from a common ancestor around 700 ka ago, whereas Neanderthals and modern humans diverged genetically at about 370 ka. Yet the presence of sections of DNA from both archaic groups in living humans, and the discovery that a female from Denisova Cave had a Neanderthal mother and a Denisovan father, reveal that all three were interfertile when they met and interacted. Such admixture events clearly have implications for earlier humans. There are signs of at least 6 coexisting groups as far back as the Middle Pleistocene (781 to 126 ka), referred to by some as the ‘muddle in the middle’ because such an association has increasingly mystified palaeoanthropologists. A million-year-old cranium found near Yunxian in Hubei Province, China, distorted by the pressure of the sediments in which it was buried, has been digitally reconstructed.

This reconstruction encouraged a team of Chinese scientists, together with Chris Stringer of the Natural History Museum in London, UK, to undertake a complex statistical study of the Yunxian cranium. Their method compares it with anatomical data for all members of the genus Homo from Eurasia and Africa, i.e. as far back as the 2.4 Ma-old H. habilis (Xiabo Feng and 12 others 2025. The phylogenetic position of the Yunxian cranium elucidates the origin of Homo longi and the Denisovans. Science, v. 389, p. 1320-1324; DOI: 10.1126/science.ado9202). The study has produced a plausible framework suggesting that the five large-brained humans known from 800 ka ago – Homo erectus (Asian), H. heidelbergensis, H. longi (Denisovans), H. sapiens, and H. neanderthalensis – began diverging from one another more than a million years ago. The authors regard the Yunxian specimen as an early participant in that evolutionary process. The fact that at least some of the groups remained interfertile long after the divergence began suggests that admixture was also part of the earlier human evolutionary process. It is also possible that the repeated morphological divergence may stem from genetic drift. That process involves small populations with limited genetic diversity that are separated from other groups, perhaps by near-extinction in a population bottleneck or as a result of the founder effect when a small group splits from a larger population during migration. The global population of early humans was inevitably very low, and migrations would dilute and fragment each group’s gene pool.

The earliest evidence for migration of humans out of Africa emerged from the discovery of five 1.8 Ma-old crania of H. erectus at Dmanisi, to the east of the Black Sea in Georgia. Similar archaic crania have been found in eastern Eurasia, especially China, at various localities with Early to Middle Pleistocene dates. The earliest European large-brained humans – 1.2 to 0.8 Ma-old H. antecessor from northern Spain – must have migrated a huge distance from either Africa or eastern Eurasia, and may have been a product of the divergence-convergence evolutionary framework suggested by Xiabo Feng and colleagues. Such a framework implies that even earlier members of what became the longi, heidelbergensis, neanderthalensis and sapiens lineages may await either recognition or discovery elsewhere. But the whole issue raises questions about the widely held view that Homo sapiens first appeared 300 ka ago in North Africa and then populated the rest of that continent. Was that specimen a migrant from Eurasia or from elsewhere in Africa? The model suggested by Xiabo Feng and colleagues is already attracting controversy, but that is nothing new among palaeoanthropologists. Yet it is based on cutting-edge phylogeny derived from the physical characteristics of hominin fossils: the traditional approach of all palaeobiologists. Such disputes cannot be resolved without ancient DNA or protein assemblages. Neither is a completely hopeless quest, for Siberian mammoth teeth have yielded DNA as old as 1.2 Ma, and the record is held by genetic material recovered from sediments in Greenland that are up to 2.1 Ma old. The chances of pushing ancient human DNA studies back to the ‘muddle’ in the Middle Pleistocene depend on finding human fossils at high latitudes in sediments of past glacial maxima or in very old permafrost, for DNA degrades more rapidly as environmental temperature rises.

See also: Natural History Museum press release. Analysis of reconstructed ancient skull pushes back our origins by 400,000 years to more than one million years ago. 25 September 2025; Bower, B. 2025. An ancient Chinese skull might change how we see our human roots. ScienceNews, 25 September 2025; Ghosh, P. 2025. Million-year-old skull rewrites human evolution, scientists claim. The Guardian, 25 September 2025

Ancient mining pollutants in river sediments reveal details of early British economic history

People have been mining in Britain since Neolithic farmers opened the famous Grimes Graves in Norfolk – a large area dotted with over 400 pits up to 13 metres deep. The target was a layer of high-quality black flint in a Cretaceous limestone known as The Chalk. Later, Bronze Age people in Wales and Cornwall drove mine shafts deeper underground to extract copper and tin ores to make the alloy bronze. The Iron Age added iron ore to the avid search for sources of metals. The production and even export of metals and ores eventually attracted the interest of Rome. The Roman invasion in 43 CE, during the reign of Claudius, annexed most of England and Wales to create the province of Britannia, which lasted until the complete withdrawal of Roman forces around 410 CE. Roman imperialism and civilisation depended partly on lead for plumbing and silver coinage to pay its legionaries. Consequently, an important aspect of Rome’s four-century hegemony was mining, especially for lead ore, as far north as the North Pennines. This littered the surface in mining areas with toxic waste. Silver occurs in lead ore in varying proportions. In the Bronze Age, early metallurgists began extracting silver from smelted, liquid lead by a process known as cupellation: the molten Pb-Ag alloy is heated in air to well above its melting point, when lead reacts with oxygen to form its oxide, litharge (PbO), leaving the silver behind.

Mine waste in the North Pennine orefield of England. Credit: North Pennines National Landscape

Until recently, historians believed that the fall of the Western Empire brought economic collapse to Britain. Yet archaeologists have revealed that what was originally called the “Dark Ages” (now Early Medieval Period) had a thriving culture among both the remaining Britons and Anglo Saxon immigrants. A means of tracking economic activity is to measure the amount of pollutants from mining waste at successive levels in the alluvium of rivers that flow through orefields. Among the best known in Britain is the North Pennine Orefield of North Yorkshire and County Durham through which substantial rivers flow eastwards, such as the River Ure that flows through the heavily mined valley of Wensleydale. A first attempt at such geochemical archaeology has been made by a British team led by Christopher Loveluck of Nottingham University (Loveluck, C.P. and 10 others 2025. Aldborough and the metals economy of northern England, c. AD 345–1700: a new post-Roman narrative. Antiquity: FirstView, online article; DOI: 10.15184/aqy.2025.10175). Aldborough in North Yorkshire – sited on the Romano-British town of Isurium Brigantum – lies in the Vale of York, a large alluvial plain. The River Ure has deposited sands, silts and muds in the area since the end of the last Ice Age, 11 thousand years ago.

Loveluck et al. extracted a 6 m core from the alluvium on the outskirts of Aldborough, using radiocarbon and optically stimulated luminescence of quartz grains to calibrate depth to age in the sediments. The base of the core is Mesolithic in age (~6400 years ago) and it extends upwards to modern times, apparently in an unbroken sequence. Samples were taken for geochemical analysis every 2 cm through the upper 1.12 m of the core, which spans the Roman occupation (43 to 410 CE), the early medieval (420 to 1066 CE), medieval (1066 to 1540 CE) and post-medieval (1540 to 1750 CE) periods, and modern times (1750 CE to present). Each sample was analysed for 56 elements using mass spectrometry, lead, silver, copper, zinc, iron and arsenic being the elements of most interest in this context. Other data gleaned from the sediment are those of pollen, useful in establishing climate and ecological changes. Unfortunately, the metal data begin in 345 CE, three centuries after the Roman invasion, by which time occupation and acculturation were well established. The authors assume that the Romans began the mining in the North Pennines. They say nothing about pre-mining levels of pollution from the upstream orefield, nor about mining conducted by the Iron Age Brigantes. For this kind of survey it is absolutely essential that a baseline is established for pollution levels under purely natural conditions. The team could have analysed sediment from the Mesolithic, when purely natural weathering, erosion and transport could safely be assumed, but they seem not to have done so.
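An age–depth model of the kind used to calibrate such a core can be sketched as simple interpolation between dated levels. The tie points below are invented for illustration and are not values from Loveluck et al.:

```python
import numpy as np

# Hypothetical tie points: depths in metres below the surface, each
# with a calibrated age in years CE (as if from radiocarbon and OSL)
depths = np.array([0.0, 0.30, 0.62, 1.12])
ages = np.array([2000.0, 1500.0, 900.0, 345.0])

def depth_to_age(d):
    """Assign an age to any depth by linear interpolation between the
    dated levels (np.interp requires depths in increasing order)."""
    return np.interp(d, depths, ages)

# Ages for the 2 cm sampling interval through the upper 1.12 m
sample_depths = np.arange(0.0, 1.12, 0.02)
sample_ages = depth_to_age(sample_depths)
print(sample_ages[0], sample_ages[-1])
```

Real age–depth models (e.g. Bayesian ones) also carry uncertainties and allow for changing sedimentation rates, but the principle of mapping each 2 cm sample onto a calendar age is the same.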

The team has emphasised that their data suggest that mining for lead continued and even increased through the ‘Dark Ages’, rather than declining in an economic ‘slump’ once the Romans left, as historians previously suggested. Lead pollution continued at roughly the same levels as during the Roman occupation through the Early Medieval Period, and then rose to up to three times higher after the late 14th century. The data for silver are different. The Ag data from Aldborough show a large ‘spike’ at around 427 CE, interestingly just after the Roman withdrawal. Silver then ‘flatlines’ at low abundances in the alluvium until the beginning of the 14th century, when again there is a series of ‘booms’. This seems to me to mark sudden spells of coining: perhaps first, after the Romans left, to ensure that a money economy remained possible, and later as a means of funding wars with the French in the 14th century. The authors also found changing iron abundances, which roughly double from low Roman levels to an Early Medieval peak and then fall in the 11th century: a result perhaps of local iron smelting. The overall patterns for zinc and copper differ substantially from those of lead, as does that for arsenic, which roughly follows the trend for iron. That might indicate that local iron production was based on pyrite (FeS2), which can contain arsenic at moderate concentrations: pyrite is a common mineral in the ore bodies of the North Pennines.

The paper by Loveluck et al. is worth reading as a first attempt to correlate stratigraphic geochemical data with episodes in British and, indeed, wider European history. But I think it has several serious flaws, beyond the absence of any pre-Roman geochemical baseline, as noted above. No data are presented for barium (Ba) and fluorine (F) derived from the gangue minerals baryte (BaSO4) and fluorite (CaF2), which outweigh lead and zinc sulfides in North Pennine ore bodies yet had no use value until the Industrial Revolution. They would have made up a substantial proportion of mine spoil heaps – useful ores would have been picked out before disposal of gangue – whose erosion, comminution and transport would have contributed to downstream deposition of alluvium at a pace consistent with that of mining. That is, Ba and F data would be far better guides to industrial activity. There is a further difficulty with such surveys in northern Britain. The upland areas were subjected to repeated glaciation, which would have gathered exposed ore and gangue and dumped it in till, especially in the numerous moraines exposed in valleys such as Wensleydale. Such sources may yield sediment in periods of naturally high erosion during floods. Finally, the movement of sediment downstream is obviously not immediate, especially when waste is disposed of in large dumps near mines. Therefore, phases of active mining may not contribute increased toxic waste far downstream until decades or even centuries later. These factors could easily have been clarified by a baseline study of earlier archaeological periods, when mining was unlikely, into which the Aldborough alluvium core penetrates.

Human interventions in geological processes

During the Industrial Revolution, not only did the emission of greenhouse gases from burning fossil fuels start to increase exponentially, but so too did the movement of rock and sediment to get at those fuels and the other commodities demanded by industrial capital. In the 21st century about 57 billion tons of geological materials are deliberately moved each year. Global population followed the same trend, driving an ever-increasing expansion of agriculture to produce food. Stripped of its natural cover on every continent, soil began to erode at exponential rates too. The magnitude of human intervention in natural geological cycles has become stupendous: soil erosion now shifts about 75 billion tons of sediment globally each year, more than three times the estimated natural rate of surface erosion. Industrial capital, together with society as a whole, also creates and dumps rapidly growing amounts of solid waste of non-geological provenance. The Geological Society of America’s journal Geology recently published two research papers that document how capital is transforming the Earth.

Dust Bowl conditions on the Minnesota prairies during the 1930s.

One of the studies is based on sediment records in the catchment of a tributary of the upper Mississippi River. The area is surrounded by prairie given over mainly to wheat production since the mid 19th century. The deep soil of the once seemingly limitless grassland, developed by the prairie ecosystem, is ideal for cereal production. In the first third of the 20th century the area experienced a burst of erosion of the fertile soil that resulted from the replacement of the deep root systems of prairie grasses by shallow-rooted wheat. The soil had formed from the glacial till deposited by the Laurentide ice sheet that blanketed North America as far south as New York and Chicago. Having moved debris across almost 2000 km of low ground, the till is dominated by clay- and silt-sized particles. Once exposed, its sediments moved easily in the wind. Minnesota was badly affected by the ‘Dust Bowl’ conditions of the 1930s, to the extent that whole towns were buried by up to 4.5 metres of aeolian sediment. For the first time the magnitude of soil erosion compared with natural rates has been assessed precisely, by dating layers of alluvium deposited in river terraces of one of the Mississippi’s tributaries (Penprase, S.B. et al. 2025. Plow versus Ice Age: Erosion rate variability from glacial–interglacial climate change is an order of magnitude lower than agricultural erosion in the Upper Mississippi River Valley, USA. Geology, v. 53, p. 535-539; DOI: 10.1130/G52585.1).

Shanti Penprase of the University of Minnesota and her colleagues were able to date the last time sediment layers at different depths in terraces were exposed to sunlight and cosmic rays, by analysing optically stimulated luminescence (OSL) and the cosmogenic 10Be content of quartz grains from the alluvium. The data span the period since the Last Glacial Maximum 20 thousand years ago, during which the ecosystem evolved from bare tundra through re-vegetation to pre-settlement prairie. They show that post-glacial natural erosion had proceeded at around 0.05 mm yr-1, down from a maximum of 0.07 mm yr-1 when the Laurentide Ice Sheet was at its greatest extent. Other studies have revealed that after the area was largely given over to cereal production in the 19th century, erosion rates leapt to as high as 3.5 mm yr-1, with a median rate of 0.6 mm yr-1: 10 to 12 times that of post-glacial times. It was the plough and the single-crop farming introduced by non-indigenous settlers that accelerated erosion. Surprisingly, advances in prairie agriculture since the Dust Bowl have not resulted in any decrease in soil erosion rates, although wind erosion is now insignificant. The US Department of Agriculture considers the loss of one millimetre per year to be ‘tolerable’: 14 times higher than the highest natural rate in glacial times.
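For readers who like to check the arithmetic, the quoted multiples follow directly from the rates themselves. A minimal sketch in Python, using only the figures given above:

```python
# Erosion rates quoted in the text, all in mm per year.
postglacial = 0.05           # mean natural post-glacial rate
glacial_max = 0.07           # maximum natural rate under the Laurentide Ice Sheet
agricultural_median = 0.6    # median rate after 19th-century cereal farming
agricultural_max = 3.5       # highest measured agricultural rate
usda_tolerable = 1.0         # USDA 'tolerable' soil-loss rate

# The agricultural median is ~12 times the post-glacial natural rate,
# and the USDA 'tolerable' loss is ~14 times the glacial maximum.
print(round(agricultural_median / postglacial))   # 12
print(round(usda_tolerable / glacial_max))        # 14
```

Dividing the two quoted rates is exactly how the ‘10 to 12 times’ and ‘14 times higher’ figures in the paper are reached.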

The other paper has a different focus: how human activities may form solid rock. The world over, a convenient means of disposing of unwanted material in coastal areas is simply to dump waste in the sea. That has been happening for centuries, but as with all other forms of anthropogenic waste disposal the volumes have increased at an exponential rate. The coast of County Durham in Britain began to experience marine waste disposal when deep mines were driven into Carboniferous Coal Measures hidden beneath the barren Permian strata that rest unconformably upon them. Many mines extended eastwards beneath the North Sea, so it was convenient to dump 1.5 million tons of waste rock annually at the seaside. The 1971 gangster film Get Carter, starring Michael Caine, includes a sequence showing ‘spoil’ pouring onto the beach below Blackhall colliery, burying the corpse of Carter’s rival. The nightmarish, 20 km stretch of grossly polluted beach between Sunderland and Hartlepool also provided a backdrop for Alien 3. Historically, tidal and wave action concentrated the low-density coal in the waste at the high-water mark, creating a free resource for locals in the form of ‘sea coal’, as portrayed in Tom Scott Robson’s 1966 documentary Low Water. Closure of the entire Durham coalfield in the 1980s and ‘90s halted this pollution and the coast is somewhat restored – at a cost of around £10 million.

‘Anthropoclastic’ conglomerate formed from iron-smelting slag dumped on the West Cumbrian coast. It incorporates artefacts as young as the 1980s, showing that it was lithified rapidly. Credit: Owen et al, Supplementary Figure 2

On the West Cumbrian coast of Britain another industry dumped millions of tons of waste into the sea. In this case it was semi-molten ‘slag’ from iron-smelting blast furnaces, poured continuously for 130 years until steel-making ended in the 1980s. Coastal erosion has broken up and spread an estimated 27 million cubic metres of slag along a 2 km stretch of beach. Astonishingly, this debris has turned into a stratum of anthropogenic conglomerate sufficiently well-bonded to resist storms (Owen, A., MacDonald, J.M. & Brown, D.J. 2025. Evidence for a rapid anthropoclastic rock cycle. Geology, v. 53, p. 581–586; DOI: 10.1130/G52895.1). The conglomerate is said by the authors to be a product of ‘anthropoclastic’ processes. Its cementation involves minerals such as goethite, calcite and brucite. Because the conglomerate contains car tyres, metal trouser zips, aluminium ring-pulls from beer cans and even coins, lithification has been extremely rapid. One ring-pull has a design that was not used on cans until 1989, so lithification has continued within the last 35 years.

Furnace slag ‘floats’ on top of smelted iron and incorporates quartz, clays and other mineral grains in the iron ore into anhydrous calcium- and magnesium-rich aluminosilicates. This purification is achieved deliberately by including limestone as a fluxing agent in the furnace feed. The high-temperature reactions are similar to those that produce aluminosilicates when cement is manufactured. Like them, slag breaks down in the presence of water to recrystallise in hydrated form and bond the conglomerate, in much the same way that concrete ‘sets’ over days and weeks to bind together aggregate. There is vastly more ‘anthropoclastic’ rock in concrete buildings and other modern infrastructure. Another example is the tarmac that coats millions of kilometres of highway.

See also: Howell, E. 2025. Modern farming has carved away earth faster than during the ice age. Science, v. 388

Earliest hominin occupation of Sulawesi and crossing of an ocean barrier

Regular readers of Earth-logs will recall that the islands of Indonesia were reached by the archaic humans Homo erectus and H. floresiensis at least a million years ago. Anatomical comparison of their remains suggests that the diminutive H. floresiensis probably evolved from H. erectus under the stress of being stranded on the small, resource-poor island of Flores: a human example of island dwarfism. In fact there are anatomically modern humans (AMH) living on Flores who seem to have evolved dwarfism in the same way since AMH first arrived there between 50 and 5 ka. Incidentally, H. erectus fossils and artefacts were found by Eugene Dubois in the late 19th century at a famous site near Trinil in Java. In 2014 it turned out that H. erectus had produced the earliest known art – zig-zag patterns on freshwater clam shells – between 540 and 430 ka ago. The episodic falls in global sea level, due to massive accumulations of ice on land during successive Pleistocene glacial episodes, aided migration by producing connections between the islands of SE Asia. They created a huge area of low-lying dry land known as ‘Sundaland’. The islands’ colonisation by H. erectus was made easy, perhaps inevitable.

The interconnection of SE Asian islands to form Sundaland (yellow) when sea level was 120 m lower than today. Even at that extreme the island of Sulawesi remained isolated by deep ocean water. Credit: based on Hakim et al Fig 1.

However, Flores and islands further east are separated from those to the west by a narrow but very deep strait. It channels powerful currents that are hazardous to small-boat crossings even today. Most palaeoanthropologists consider the colonisation of Flores by H. erectus most likely to have been accidental, reckoning that they were incapable of planning a crossing and building suitable craft. For AMH to have reached New Guinea and Australia around 60 ka ago, they must have developed sturdy craft and sea-faring skills. This paradigm suggests that the evolution of AMH, and thus their eventual occupation of all continents except Antarctica, must have involved a revolutionary ‘leap’ in their cognitive ability just before they left Africa. That view has been popularised by the presenter (Ella Al-Shamahi) of the 2025 BBC Television series Human – now on BBC iPlayer (requires viewers to create a free account) – in its second episode, Into the Unknown. [The idea of a cognitive leap that ushered in the almost worldwide migration of anatomically modern humans was launched in 1995 by the controversial anthropologist Chris Knight of University College London.]

Flaked artefact, about the length of a human thumb, made of chert from excavations at Calio on Sulawesi, dated at 1.02 Ma. Credit: based on Hakim et al Fig 2

The large and peculiarly shaped island of Sulawesi, also part of Indonesia, is notable for being the location of the earliest known figurative art: a cave painting of a Sulawesi warty pig, dated to at least 45.5 ka ago. Indonesian and Australian archaeologists working at a site near Calio in northern Sulawesi unearthed stone artefacts deep in river-terrace gravels that contain fossils of extinct pigs and dwarf elephants (Hakim, B. and 26 others 2025. Hominins on Sulawesi during the Early Pleistocene. Nature, v. 644; DOI: 10.1038/s41586-025-09348-6). The tools were struck from pebbles of hard, fine-grained rocks by flaking to produce sharp edges. A combination of dating techniques – palaeomagnetism, uranium-series and electron-spin resonance – applied to the terrace sediments and the fossils in them yielded ages ranging from 1.04 to 1.48 Ma; far older than the earliest known presence of AMH on the island (73–63 ka). The dates for an early human presence on Sulawesi tally with those from Flores. The tool makers were probably H. erectus. To reach the island from Sundaland at a time when global sea level was 120 m lower than at present would have required crossing more than 50 km of open water. It seems unlikely that such a journey could have been accidental. The migrants would have needed seaworthy craft, possibly rafts. Clearly, the AMH crossings to New Guinea around 60 thousand years ago would have been far more daunting: both land masses would have been below the horizon of any point of departure from the Indonesian archipelago, even with island ‘hopping’. Yet the Sulawesi discovery, combined with the plethora of islands both large and small, suggests that the earlier, non-AMH inhabitants of Indonesia could potentially have spread further at times of very low sea level.

See also: Brumm, A. et al. 2025. This stone tool is over 1 million years old. How did its maker get to Sulawesi without a boat? The Conversation, 6 August 2025

Did the Meteor Crater impact in Arizona dam the Grand Canyon 56 thousand years ago?

Meteor Crater, Arizona, USA. Credit: Travel in USA

Meteor Crater, 60 km east of Flagstaff in Arizona, USA, is probably the most visited site of an impact by an extraterrestrial object. At 1.3 km across it isn’t especially big, but it is exceptionally well preserved, having formed a mere 55.6 ka ago. Apart from its shape, its impact origin is proved by its rim, which shows overturning and inversion of the strata that it penetrated. The 40-metre-diameter nickel-iron object that did the damage arrived at a speed of around 13 km s-1 and delivered kinetic energy equivalent to an explosion of 10 million tons of TNT. This was sufficient to vaporise the body, except for a few fragments. Impressive as that is, the impact was tiny compared with others known on Earth, such as the Chicxulub impact that ended the Mesozoic Era 66 Ma ago. Nevertheless, the surface blast would have sterilised an area of up to 1000 km2 around the impact, i.e. up to 17 km in all directions. Yet most of the impact energy would have affected the surrounding crust. It’s a place worth visiting.
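As a rough sanity check on those figures – assuming a spherical impactor and a typical iron-meteorite density of about 7,900 kg m-3, which is my assumption rather than a value from the crater studies – the kinetic energy works out at the same order of magnitude as the quoted 10 megatons:

```python
import math

# Back-of-envelope impact energy, from the diameter and speed given in the text.
diameter_m = 40.0        # quoted impactor diameter
speed_ms = 13_000.0      # quoted speed: 13 km/s
density = 7_900.0        # kg/m³ – assumed, typical for nickel-iron meteorites

radius = diameter_m / 2
mass = density * (4 / 3) * math.pi * radius ** 3   # ~2.6e8 kg for a sphere
energy_j = 0.5 * mass * speed_ms ** 2              # kinetic energy, ~2.2e16 J

megatons_tnt = energy_j / 4.184e15                 # 1 Mt TNT = 4.184e15 J
print(f"{megatons_tnt:.0f} Mt TNT")                # roughly 5 Mt: same order as the quoted 10 Mt
```

The simple sphere-at-one-density estimate lands within a factor of two of the published 10 Mt figure, which is as close as a back-of-envelope calculation can be expected to get.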

The other must-see site in northern Arizona is the Grand Canyon, some 100 km north of Flagstaff by train, and about 320 km by road. Unlike Meteor Crater, whose origins were well established more than 50 years ago, the Grand Canyon still draws research teams to study the geology of the rock formations through which it cuts and the geomorphological processes that formed it. Several expeditions have examined caves high above the level of the Colorado River, which has cut the Canyon since the start of the Pliocene Epoch, some 5 Ma ago. One objective of this research has been to document past flooding, due to the massive landslides and rock falls that must have occurred as cliffs became unstable during canyon formation. One cave – Stanton’s Cave – is 45 m above the present level of the Colorado: about the height of a 16-storey block of flats. The cave floor is made of well-bedded sand that contains driftwood logs, as do other caves along the canyon. Dating the logs from cave to cave should give at least an idea of the history of flooding, and thus of cliff collapses. In the case of Stanton’s Cave, early radiocarbon dating yielded results close to the maximum that the rapid decay of 14C makes possible. Such dating at the limit of the technique is imprecise. The oldest existing radiocarbon age in this case is 43.5 ± 1.5 ka, from a 1984 study. Since then, this dating technique has advanced considerably.
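The reason such ages sit at the limit of the technique is easy to show: with a half-life of 5,730 years, almost no 14C survives after ~50 ka, so the tiny remaining signal is easily swamped by contamination and measurement noise. A short Python sketch:

```python
import math

HALF_LIFE = 5730.0   # years, the half-life of 14C

def fraction_remaining(age_years: float) -> float:
    """Fraction of the original 14C left after a given time (radioactive decay)."""
    return 0.5 ** (age_years / HALF_LIFE)

# One half-life, the 1984 Stanton's Cave age, and the new ~55 ka age.
for age in (5_730, 43_500, 55_250):
    print(f"{age:>6} yr: {fraction_remaining(age) * 100:.3f}% of 14C remains")
```

At 43.5 ka only about half a percent of the original 14C is left, and at 55 ka barely a tenth of a percent, which is why ages in that range were so imprecise before recent advances in the method.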

Fig Remnants of a landslide, subsequently breached, in the Grand Canyon downstream of Stanton’s Cave. Credit: Richard Hereford

Karl Karlstrom – whose father was also entranced by cave deposits in the Grand Canyon in the 1960s – together with colleagues from the US, managed to persuade radiocarbon specialists from Australia and New Zealand to improve the sediment dating (Karlstrom, K.E. and 11 others 2025. Grand Canyon landslide-dam and paleolake triggered by the Meteor Crater impact at 56 ka. Geology, v. 53, online article; DOI: 10.1130/G53571.1). The new 14C age of the logs is 55.25 ± 2.44 ka, confirmed by infrared stimulated luminescence (IRSL) dating of feldspar grains in the cave sand at 56.00 ± 6.39 ka. Combined with a new cosmogenic-nuclide exposure age of 56.00 ± 2.40 ka for the Meteor Crater ejecta, the results are exciting. It looks as if the cliff fall that dammed the Colorado River, filling the cave with sediment, coincided with the impact. Crater formation is estimated to have resulted in a seismic event of magnitude 5.4. In such a teetering terrain as the Grand Canyon cliffs, the impact-induced earthquake about 100 km away, even if attenuated to an estimated effective magnitude of 3.5, may have been sufficient to topple part of the cliffs. With cliffs that average 1.6 km high, such a collapse would have displaced sufficient debris to create a substantial barrier to the flow of the Colorado River, which is tightly constrained between cliffs. The chaotic debris at the suggested dam site is now partly covered by rounded river cobbles, suggesting that the dam was soon overtopped, probably within a thousand years of the cliff collapse.

Because all the dates carry substantial imprecision, the authors cannot claim to have conclusively proved a direct connection between impact and cliff collapse. But neither do the age data disprove what is a plausible causal connection.

See also: UNM study finds link between Grand Canyon landslide and Meteor Crater impact. University of New Mexico News 15 July 2025

Evolution of pigmentation in anatomically modern humans of Europe: a new paradigm?

The colours of human skin, eyes and hair in living people across the world are determined by variants of genes (alleles) found at the same place on a chromosome. Since chromosomes are inherited from both mother and father, an individual may have two copies of the same allele (homozygous), or one of each (heterozygous). A dominant allele is always expressed, even if only a single copy is present. A recessive allele is only expressed if the individual inherits two copies of it. Most characteristics of individuals result from the interaction of multiple genes, rather than a single gene. A commonly cited example is the coloration of eyes. If we had a single gene for eye colour – that of the iris – with one allele for blue (recessive, or ‘b’) and one for brown (dominant, or ‘B’) pigmentation, brown-eyed individuals would have one or two ‘B’ alleles (bB or BB), whereas those with blue eyes would have to have two ‘blue’ alleles (bb). But inheritance is more complicated than that: there are people with green, hazel or grey eyes, and even left and right eyes of different colours. Such examples suggest that several genes affect human eye colour, each of which must have evolved as a result of mutations. Much the same goes for hair and skin coloration.
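The simple one-gene model described above amounts to a Punnett-square calculation. This toy sketch – explicitly not real eye-colour genetics, as the text stresses – crosses two heterozygous (Bb) parents:

```python
from itertools import product

# Toy single-gene model: 'B' (brown) is dominant, 'b' (blue) recessive.
def phenotype(genotype: str) -> str:
    """A genotype with at least one dominant 'B' allele gives brown eyes."""
    return "brown" if "B" in genotype else "blue"

# Cross two heterozygous (Bb) parents: each passes on one allele, so the
# offspring genotypes are every pairing of one allele from each parent.
offspring = ["".join(sorted(pair)) for pair in product("Bb", "Bb")]
print(offspring)                             # ['BB', 'Bb', 'Bb', 'bb']
print([phenotype(g) for g in offspring])     # 3 brown : 1 blue
```

The classic 3:1 ratio falls out directly, and the failure of real eye-colour data to fit such neat ratios is precisely what points to several interacting genes.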

A group of scientists from the University of Ferrara in Italy have analysed highly detailed ancient DNA in anatomically modern human remains from Russia (Palaeolithic), Sweden (Mesolithic) and Croatia (Neolithic) to tease out the complexities of pigmentation inheritance. They then applied a statistical approach learned from that study to predict the likely skin, eye and hair pigmentation in 348 less detailed genomes of ancient individuals whose remains date back as far as 45 ka (Silvia Perretti et al. 2025. Inference of human pigmentation from ancient DNA by genotype likelihood. Proceedings of the National Academy of Sciences, v. 122, article e2502158122; DOI: 10.1073/pnas.2502158122).
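For the curious, the ‘genotype likelihood’ of the paper’s title follows a standard idea in low-coverage ancient-DNA analysis. The sketch below illustrates that general idea – the probability of the observed sequencing reads under each possible genotype – rather than the authors’ exact model:

```python
from math import comb

# Given n reads at a DNA site, k of which carry the alternative allele, and a
# sequencing-error rate e, each genotype predicts a different expected fraction
# of alternative-allele reads: ~e for homozygous reference (RR), 0.5 for a
# heterozygote (RA), ~1-e for homozygous alternative (AA).
def genotype_likelihoods(n: int, k: int, e: float = 0.01) -> dict:
    probs = {"RR": e, "RA": 0.5, "AA": 1 - e}
    # Binomial likelihood of seeing k alt reads out of n under each genotype.
    return {g: comb(n, k) * p**k * (1 - p)**(n - k) for g, p in probs.items()}

# Four reads, one carrying the alternative allele: the heterozygote is most
# likely, but the homozygous-reference genotype cannot be ruled out.
likes = genotype_likelihoods(n=4, k=1)
best = max(likes, key=likes.get)
print(best, likes)
```

Carrying such likelihoods forward, rather than calling a single ‘hard’ genotype, is what lets pigmentation be inferred probabilistically from degraded, low-coverage ancient genomes.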

An artist’s impression of a Mesolithic woman from southern Denmark (credit: Tom Bjorklund)

All the hunter-gatherer Palaeolithic individuals (12 samples between 45 and 13 ka old), bar one, showed clear signs of dark pigmentation in skin, eyes and hair – the outlier, from Russia, was probably lighter. Those from the Mesolithic (14 to 4 ka) showed that 11 out of 35 had a light eye colour (Northern Europe, France, and Serbia), but most retained the dark skin and hair expected in descendants of migrants from Africa. Only one 12 ka hunter-gatherer, from Sweden, had inferred blue eyes, blonde hair and light skin. The retention of dark pigmentation by European hunter-gatherers who migrated there from Africa has been noted before, using DNA from Mesolithic human remains and, in one case, from birch resin chewed by a Mesolithic woman. This called into question the hypothesis that high levels of melanin in skin, which protect indigenous people in Africa from cancers, would result in their producing insufficient vitamin D for good health at high latitudes. That notion supposed that out-of-Africa migrants would quickly evolve paler skin coloration at higher latitudes. It is now known that diets rich in meat, nuts and fungi – staples for hunter-gatherers – provide sufficient vitamin D for health at high latitudes. A more recent hypothesis is that pale skins may have evolved only after the widespread Neolithic adoption of farming, when people came to rely on a diet dominated by cereals that are a poor source of vitamin D.

However, the 132 Neolithic farmer individuals (10 to 4 ka ago) studied by Perretti et al. showed increased diversity in pigmentation, with more frequent light skin tones, yet dark individuals persisted, particularly in southern and eastern Europe. Hair and eye colour showed considerable variability, the earliest sign of red hair showing up in Turkey. Even Copper and Bronze Age samples (113 from 7 to 3 ka) and those from Iron Age Europeans (25 from 3 to 1.7 ka ago) still indicate common retention of dark skin, eyes and hair, although the proportion of lighter pigmentation increased in some regions of Europe. Other analyses of ancient DNA have shown that the Palaeo- and Mesolithic populations of Europe were quickly outnumbered by the influx of early farmers, probably from the Anatolian region of modern Turkey, during the Neolithic. The farming lifestyle seems likely to have allowed the numbers of those who practised it to rise beyond the natural environment’s ‘carrying capacity’ for hunter-gatherers. The former inhabitants of Europe may simply have been genetically absorbed within the growing population of farmers. Much the same absorption of earlier groups seems to have happened with the westward migration from the Ukrainian and Russian steppes of the Yamnaya people and culture, culminating in the start of the European Bronze Age, which reached western Europe around 2100 BCE. The Yamnaya introduced metal culture, horse-drawn wheeled vehicles and possibly Indo-European language.

So the novel probabilistic approach to ancient DNA by Perretti et al. also casts doubt on the diet-based evolution of light pigmentation at high latitudes. Instead, pulses of large population movements, and thus changes in European population genetics, probably account for the persistence of abundant evidence for dark pigmentation throughout Europe until historic times. The ‘lightening’ of Europeans’ appearance seems to have been vastly more complex than previously believed. Early Europe seems to have been almost bewilderingly diverse, which makes a complete mockery of modern chauvinism and racism. The present European genetic ‘melting pot’ is surprisingly similar to that of Europe’s ancient past.

The end-Triassic mass extinction and ocean acidification

Triassic reef limestones in the Dolomites of northern Italy. Credit: © Matteo Volpone

Four out of the six mass extinctions that ravaged life on Earth during the last 300 Ma coincided with large igneous events marked by basaltic flood volcanism. But not all such bursts of igneous activity match significant mass extinctions. Moreover, some rapid rises in the rate of extinction are not clearly linked to peaks in igneous activity. Another issue in this context is that ‘kill mechanisms’ are generally speculative rather than based on hard data. Large igneous events inevitably emit very large amounts of gases and dust-sized particulates into the atmosphere. Carbon dioxide, being a greenhouse gas, tends to heat up the global climate, but it also dissolves in seawater to lower its pH. Both global warming and more acidic oceans are possible ‘kill mechanisms’. Volcanic emission of sulfur dioxide results in acid rain and thus a decrease in the pH of seawater. But if it is blasted into the stratosphere it combines with oxygen and water vapour to form minute droplets of sulfuric acid. These form a long-lived haze, which reflects solar energy back into space. Such an increased albedo therefore tends to cool the planet and create a so-called ‘volcanic winter’. Dust that reaches the stratosphere reduces penetration of visible light to the surface, again resulting in cooling. And since photosynthetic organisms rely on blue and red light to power their conversion of CO2 and water to carbohydrates and oxygen, these primary producers at the base of the marine and terrestrial food webs decline. That presents a further kill mechanism that may trigger mass extinction on land and in the oceans: starvation.

Palaeontologists have steadily built up a powerful case for occasional mass extinctions since fossils first appear in the stratigraphic record of the Phanerozoic Eon. Their data are simply the numbers of species, genera and families of organisms preserved as fossils in packages of sedimentary strata that represent roughly equal ‘parcels’ of time (~10 Ma). Mass extinctions are now unchallengeable parts of life’s history and evolution. Yet assigning the specific kill mechanisms involved in the damage that they create remains very difficult. There are hypotheses for the cause of each mass extinction, but a dearth of data that can test why they happened. The only global die-off near hard scientific resolution is that at the end of the Cretaceous. The K-Pg (formerly K-T) event has been extensively covered in Earth-logs since 2000. It involved a mixture of global ecological stress from the Deccan large igneous event, spread over a few million years of the Late Cretaceous, and the near-instantaneous catastrophe induced by the Chicxulub impact, with only a few remaining ‘i’s to dot and ‘t’s to cross. Other possibilities have been raised: gamma-ray bursts from distant supernovae; belches of methane from the sea floor; emissions of hydrogen sulfide gas from seawater itself during ocean anoxia events; sea-level changes; and so on.

The mass extinction that ended the Triassic (~201 Ma) coincides with evidence for intense volcanism in South and North America, Africa and southern Europe, then at the core of the Pangaea supercontinent. Flood basalts and large igneous intrusions – the Central Atlantic Magmatic Province (CAMP) – began the final break-up of Pangaea. The end-Triassic extinction deleted 34% of marine genera. Marine sediments aged around 201 Ma reveal a massive shift in sulfur and carbon isotopes in the ocean that has been interpreted as a sign of acute anoxia in the world’s oceans, which may have resulted in massive burial of oxygen-starved marine animal life. However, there is no sign of Triassic carbon-rich, deep-water sediments of the kind that characterise ocean anoxia events in later times. But it is possible that bacteria that use the reduction of sulfate (SO42-) to sulfide (S2-) ions as an energy source while decaying dead organisms could have produced the sulfur-isotope ‘excursion’. That would also have produced massive amounts of highly toxic hydrogen sulfide gas, which would have overwhelmed terrestrial animal life at continental margins. The solution of H2S in water would also have acidified the world’s oceans.

Molly Trudgill of the University of St Andrews, Scotland, and colleagues from the UK, France, the Netherlands, the US, Norway, Sweden and Ireland set out to test the hypothesis of end-Triassic ocean acidification (Trudgill, M. and 24 others 2025. Pulses of ocean acidification at the Triassic–Jurassic boundary. Nature Communications, v. 16, article 6471; DOI: 10.1038/s41467-025-61344-6). The team used Triassic fossil oysters from before the extinction interval. Boron-isotope data from the shells are a means of estimating variations in the pH of seawater. Before the extinction event the average pH of Triassic seawater was about the same as today’s, at 8.2, or slightly alkaline. By 201 Ma the pH had shifted towards acidic conditions by at least 0.3: the biggest such shift detected in the Phanerozoic record. One of the most dramatic changes in Triassic marine fauna was the disappearance of the reef limestones that had been made on a vast scale in the earlier Triassic by recently evolved modern corals: a so-called ‘reef gap’ in the geological record. That suggests a possible analogue to the waning of today’s coral reefs, thought to be a result of increased dissolution of CO2 in seawater and acidification related to global greenhouse warming. Using the fossil oysters, Trudgill et al. also sought a carbon-isotope ‘fingerprint’ for the source of the elevated CO2, finding that it mainly derived from the mantle and was probably emitted by CAMP volcanism. So their discussion centres mainly on end-Triassic ocean acidification as an analogy for current climate change driven by CO2 largely emitted by anthropogenic burning of fossil fuels. Nowhere in their paper do they mention any role for acidification by hydrogen sulfide emitted during massive anoxia on the Triassic ocean floor, which hit the scientific headlines in 2020 (see earlier link).
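It is worth spelling out what a pH shift of 0.3 means. Because pH is the negative logarithm (base 10) of hydrogen-ion concentration, a drop of 0.3 corresponds to a roughly twofold increase in acidity:

```python
# pH = -log10([H+]), so a pH drop of 0.3 multiplies the hydrogen-ion
# concentration by 10**0.3, i.e. roughly a factor of two.
triassic_ph = 8.2        # pre-extinction seawater pH quoted in the text
shifted_ph = triassic_ph - 0.3   # the end-Triassic shift

h_before = 10 ** -triassic_ph    # [H+] before the shift, mol/L
h_after = 10 ** -shifted_ph      # [H+] after the shift, mol/L
print(f"[H+] increased by a factor of {h_after / h_before:.2f}")  # ~2.00
```

The same logarithmic arithmetic underlies concern about modern ocean acidification, where a surface-ocean pH drop of about 0.1 since pre-industrial times already represents a ~30% rise in hydrogen-ion concentration.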