Ancient mining pollutants in river sediments reveal details of early British economic history

People have been mining in Britain since Neolithic farmers opened the famous Grimes Graves in Norfolk – a large area dotted with over 400 pits up to 13 metres deep. The target was a layer of high-quality black flint in a Cretaceous limestone known as the Chalk. Later, Bronze Age people in Wales and Cornwall drove mine shafts deeper underground to extract copper and tin ores to make the alloy bronze. The Iron Age added iron ore to the avid search for sources of metals. The production and even export of metals and ores eventually attracted the interest of Rome. The Roman invasion of 43 CE, during the reign of Claudius, annexed most of England and Wales to create the province of Britannia, which lasted until the complete withdrawal of Roman forces around 410 CE. Roman imperialism and civilisation depended partly on lead for plumbing and on silver coinage to pay the legionaries. Consequently, an important aspect of Rome’s four-century hegemony was mining, especially for lead ore, as far north as the North Pennines. This littered the surface in mining areas with toxic waste. Silver occurs in lead ore in varying proportions. From the Bronze Age onwards, early metallurgists extracted silver from smelted, liquid lead by a process known as cupellation: the molten Pb-Ag alloy is heated in air to well above lead’s melting point, when lead reacts with oxygen to form its oxide (PbO) while the silver remains molten.
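The temperature contrast that makes cupellation work can be sketched numerically. This is an illustrative toy, not anything from the article; the melting points are standard handbook values.

```python
# Why cupellation separates silver from lead: the furnace must run hotter
# than silver's melting point so the Ag stays liquid while the lead is
# progressively removed as its oxide. Melting points (deg C) are standard
# handbook values, included here purely for illustration.
PB_MELT_C = 327   # lead melts at a low temperature
AG_MELT_C = 962   # silver stays molten only above ~962 deg C

def can_cupel(furnace_temp_c):
    """Cupellation is only feasible when both metals are molten,
    i.e. the furnace runs above silver's melting point."""
    return furnace_temp_c > AG_MELT_C

# A cupellation hearth at ~1000 deg C keeps the Pb-Ag alloy fully molten;
# at 500 deg C only the lead would be liquid and no silver could be won.
print(can_cupel(1000), can_cupel(500))  # → True False
```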

Mine waste in the North Pennine orefield of England. Credit: North Pennines National Landscape

Until recently, historians believed that the fall of the Western Empire brought economic collapse to Britain. Yet archaeologists have revealed that what was originally called the “Dark Ages” (now the Early Medieval Period) had a thriving culture among both the remaining Britons and Anglo-Saxon immigrants. One means of tracking economic activity is to measure the amount of pollutants from mining waste at successive levels in the alluvium of rivers that flow through orefields. Among the best known in Britain is the North Pennine Orefield of North Yorkshire and County Durham, through which substantial rivers flow eastwards, such as the River Ure in the heavily mined valley of Wensleydale. A first attempt at such geochemical archaeology has been made by a British team led by Christopher Loveluck of Nottingham University (Loveluck, C.P. and 10 others 2025. Aldborough and the metals economy of northern England, c. AD 345–1700: a new post-Roman narrative. Antiquity: FirstView, online article; DOI: 10.15184/aqy.2025.10175). Aldborough in North Yorkshire – sited on the Romano-British town of Isurium Brigantum – lies in the Vale of York, a large alluvial plain in which the River Ure has deposited sands, silts and muds since the end of the last Ice Age, 11 thousand years ago.

Loveluck et al. extracted a 6 m core from the alluvium on the outskirts of Aldborough, using radiocarbon and optically stimulated luminescence dating of quartz grains to calibrate depth to age in the sediments. The base of the core is Mesolithic in age (~6400 years ago) and it extends upwards to modern times, apparently in an unbroken sequence. Samples were taken for geochemical analysis every 2 cm through the upper 1.12 m of the core, which spans the Roman occupation (43 to 410 CE), the early medieval period (420 to 1066 CE), the medieval (1066 to 1540 CE), the post-medieval (1540 to 1750 CE) and modern times (1750 CE to present). Each sample was analysed for 56 elements using mass spectrometry, with lead, silver, copper, zinc, iron and arsenic being the elements of most interest in this context. Pollen data were also gleaned from the sediment, useful in establishing climate and ecological changes. Unfortunately, the metal data begin in 345 CE, three centuries after the Roman invasion, by which time occupation and acculturation were well established. The authors assume that the Romans began mining in the North Pennines. They say nothing about pre-mining levels of pollution from the upstream orefield, nor about mining conducted by the Iron Age Brigantes. For this kind of survey it is absolutely essential that a baseline is established for pollution levels under purely natural conditions. The team could have analysed sediment from the Mesolithic, when purely natural weathering, erosion and transport could safely be assumed, but they seem not to have done that.
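The depth-to-age calibration can be illustrated with a toy age-depth model. The tie points below are invented placeholders, not the dates reported by Loveluck et al.; a real model would use their radiocarbon and OSL ages with proper uncertainties.

```python
# Toy age-depth model of the kind used to convert core depth to calendar
# age. The tie points are invented placeholders, NOT the paper's data.
def age_at_depth(depth_m, tie_points):
    """Linearly interpolate age (years CE) between dated horizons,
    given as (depth_m, age_CE) pairs sorted by increasing depth."""
    for (d0, a0), (d1, a1) in zip(tie_points, tie_points[1:]):
        if d0 <= depth_m <= d1:
            frac = (depth_m - d0) / (d1 - d0)
            return a0 + frac * (a1 - a0)
    raise ValueError("depth outside dated interval")

# Hypothetical tie points: core top = 2000 CE, 1.12 m down = 345 CE
tie = [(0.0, 2000), (1.12, 345)]

# Sampling every 2 cm through the upper 1.12 m gives 56 samples
n_samples = round(1.12 / 0.02)
print(n_samples, age_at_depth(0.56, tie))  # mid-core lands in the 12th century CE
```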

The team has emphasised that their data suggest that mining for lead continued and even increased through the ‘Dark Ages’, rather than declining in an economic ‘slump’ once the Romans left, as previous historians have suggested. Lead pollution continued at roughly the same levels as during the Roman occupation through the Early Medieval Period, then rose to up to three times higher after the late 14th century. The data for silver are different. The Ag data from Aldborough show a large ‘spike’ around 427 CE – interestingly, after the Roman withdrawal. Its level in the alluvium then ‘flatlines’ at low abundances until the beginning of the 14th century, when again there is a series of ‘booms’. This seems to me to mark sudden spells of coining: perhaps first, after the Romans left, to ensure that a money economy remained possible, and later as a means of funding wars with the French in the 14th century. The authors also found changing iron abundances, which roughly double from low Roman levels to an Early Medieval peak and then fall in the 11th century: a result perhaps of local iron smelting. The overall patterns for zinc and copper differ substantially from those of lead, as does that for arsenic, which roughly follows the trend for iron. That might indicate that local iron production was based on pyrite (FeS2), which can contain arsenic at moderate concentrations: pyrite is a common mineral in the ore bodies of the North Pennines. The paper by Loveluck et al. is worth reading as a first attempt to correlate stratigraphic geochemistry data with episodes in British and, indeed, wider European history. But I think it has several serious flaws, beyond the absence of any pre-Roman geochemical baseline, as noted above. No data are presented for barium (Ba) and fluorine (F) derived from the gangue minerals baryte (BaSO4) and fluorite (CaF2), which outweigh lead and zinc sulfides in North Pennine ore bodies, yet had no use value until the Industrial Revolution.
They would have made up a substantial proportion of mine spoil heaps – useful ores would have been picked out before disposal of gangue – whose erosion, comminution and transport would make contributions to downstream deposition of alluvium consistent with the pace of mining. That is, Ba and F data would be far better guides to industrial activity. There is a further difficulty with such surveys in northern Britain. The whole of the upland area was subjected to repeated glaciation, which would have gathered exposed ore and gangue and dumped it in till, especially in the numerous moraines exposed in valleys such as Wensleydale. Such sources may yield sediment in periods of naturally high erosion during floods. Finally, the movement of sediment downstream is obviously not immediate, especially when waste is disposed of in large dumps near mines. Therefore phases of active mining may not contribute increased toxic waste far downstream until decades or even centuries later. These factors could easily have been clarified by a baseline study from earlier archaeological periods, when mining was unlikely, into which the Aldborough alluvium core penetrates.

Human interventions in geological processes

During the Industrial Revolution not only did the emission of greenhouse gases from burning fossil fuels start to increase exponentially, but so too did the movement of rock and sediment to get at those fuels and other commodities demanded by industrial capital. In the 21st century about 57 billion tons of geological materials are deliberately moved each year. Global population followed the same trend, resulting in the continual expansion of agriculture to produce food. Stripped of its natural cover on every continent, soil began to erode at exponential rates too. The magnitude of human intervention in natural geological cycles has become stupendous: soil erosion now shifts about 75 billion tons of sediment globally each year, more than three times the estimated natural rate of surface erosion. Industrial capital, together with society as a whole, also creates and dumps rapidly growing amounts of solid waste of non-geological provenance. The Geological Society of America’s journal Geology recently published two research papers that document how capital is transforming the Earth.

Dust Bowl conditions on the Minnesota prairies during the 1930s.

One of the studies is based on sediment records in the catchment of a tributary of the upper Mississippi River. The area is surrounded by prairie given over mainly to wheat production since the mid 19th century. The deep soil of the once seemingly limitless grassland, developed by the prairie ecosystem, is ideal for cereal production. In the first third of the 20th century the area experienced a burst of erosion of the fertile soil that resulted from the replacement of the deep root systems of prairie grasses by shallow-rooted wheat. The soil had formed from the glacial till deposited by the Laurentide ice sheet that blanketed North America as far south as New York and Chicago. Having moved debris across almost 2000 km of low ground, the till is dominated by clay- and silt-sized particles. Once exposed, its sediments moved easily in the wind. Minnesota was badly affected by the ‘Dust Bowl’ conditions of the 1930s, to the extent that whole towns were buried by up to 4.5 metres of aeolian sediment. For the first time the magnitude of soil erosion compared with natural rates has been assessed precisely by dating layers of alluvium deposited in river terraces of one of the Mississippi’s tributaries (Penprase, S.B. et al. 2025. Plow versus Ice Age: Erosion rate variability from glacial–interglacial climate change is an order of magnitude lower than agricultural erosion in the Upper Mississippi River Valley, USA. Geology, v. 53, p. 535-539; DOI: 10.1130/G52585.1).

Shanti Penprase of the University of Minnesota and her colleagues were able to date the last time sediment layers at different depths in the terraces were exposed to sunlight and cosmic rays, by analysing optically stimulated luminescence (OSL) and the cosmogenic 10Be content of quartz grains from the alluvium. The data span the period since the Last Glacial Maximum 20 thousand years ago, during which the ecosystem evolved from bare tundra through re-vegetation to pre-settlement prairie. They show that post-glacial natural erosion proceeded at around 0.05 mm yr-1, down from a maximum of 0.07 mm yr-1 when the Laurentide Ice Sheet was at its greatest extent. Other studies have revealed that after the area was largely given over to cereal production in the 19th century, erosion rates leapt to as high as 3.5 mm yr-1 with a median rate of 0.6 mm yr-1 – 10 to 12 times that of post-glacial times. It was the plough and single-crop farming introduced by non-indigenous settlers that accelerated erosion. Surprisingly, advances in prairie agriculture since the Dust Bowl have not resulted in any decrease in soil erosion rates, although wind erosion is now insignificant. The US Department of Agriculture considers the loss of one millimetre per year to be ‘tolerable’: 14 times higher than the highest natural rate in glacial times.
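The rate comparisons quoted above can be checked with simple arithmetic, using the figures as given in the text:

```python
# Checking the erosion-rate comparisons quoted in the text (mm per year).
natural_postglacial = 0.05   # post-glacial natural rate
natural_glacial_max = 0.07   # maximum during the glacial period
agricultural_median = 0.6    # median rate after 19th-century ploughing
usda_tolerable      = 1.0    # USDA 'tolerable' soil loss

# Median agricultural erosion vs the post-glacial natural rate
print(round(agricultural_median / natural_postglacial))  # → 12

# USDA 'tolerable' rate vs the highest natural (glacial) rate
print(round(usda_tolerable / natural_glacial_max))       # → 14
```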

The other paper has a different focus: how human activities may form solid rock. The world over, a convenient means of disposing of unwanted material in coastal areas is simply to dump waste in the sea. That has been happening for centuries, but as for all other forms of anthropogenic waste disposal the volumes have increased at an exponential rate. The coast of County Durham in Britain began to experience marine waste disposal when deep mines were driven into Carboniferous Coal Measures hidden by the barren Permian strata that rest unconformably upon them. Many mines extended eastwards beneath the North Sea, so it was convenient to dump 1.5 million tons of waste rock annually at the seaside. The 1971 gangster film Get Carter, starring Michael Caine, includes a sequence showing ‘spoil’ pouring onto the beach below Blackhall colliery, burying the corpse of Carter’s rival. The nightmarish, 20 km stretch of grossly polluted beach between Sunderland and Hartlepool also provided a backdrop for Alien 3. Historically, tidal and wave action concentrated the low-density coal in the waste at the high-water mark, to create a free resource for locals in the form of ‘sea coal’, as portrayed in Tom Scott Robson’s 1966 documentary Low Water. Closure of the entire Durham coalfield in the 1980s and ‘90s halted this pollution and the coast is somewhat restored – at a cost of around £10 million.

‘Anthropoclastic’ conglomerate formed from iron-smelting slag dumped on the West Cumbrian coast. It incorporates artefacts as young as the 1980s, showing that it was lithified rapidly. Credit: Owen et al, Supplementary Figure 2

On the West Cumbrian coast of Britain another industry dumped millions of tons of waste into the sea. In this case it was semi-molten ‘slag’ from iron-smelting blast furnaces, poured continuously for 130 years until steel-making ended in the 1980s. Coastal erosion has broken up and spread an estimated 27 million cubic metres of slag along a 2 km stretch of beach. Astonishingly, this debris has turned into a stratum of anthropogenic conglomerate sufficiently well-bonded to resist storms (Owen, A., MacDonald, J.M. & Brown, D.J. 2025. Evidence for a rapid anthropoclastic rock cycle. Geology, v. 53, p. 581–586; DOI: 10.1130/G52895.1). The conglomerate is said by the authors to be a product of ‘anthropoclastic’ processes. Its cementation involves minerals such as goethite, calcite and brucite. Because the conglomerate contains car tyres, metal trouser zips, aluminium ring-pulls from beer cans and even coins, lithification has been extremely rapid. One ring-pull has a design that was not used in cans until 1989, so lithification has continued within the last 35 years.

Furnace slag ‘floats’ on top of the smelted iron, incorporating quartz, clays and other mineral grains from the iron ore into anhydrous calcium- and magnesium-rich aluminosilicates. This purification is achieved deliberately by including limestone as a fluxing agent in the furnace feed. The high-temperature reactions are similar to those that produce aluminosilicates when cement is manufactured. Like them, slag breaks down in the presence of water and recrystallises in hydrated form to bond the conglomerate, in much the same way that concrete ‘sets’ over days and weeks to bind together its aggregate. There is vastly more ‘anthropoclastic’ rock in concrete buildings and other modern infrastructure. Another example is the tarmac that coats millions of kilometres of highway.

See also: Howell, E. 2025. Modern farming has carved away earth faster than during the ice age. Science, v. 388

Earliest hominin occupation of Sulawesi and crossing of an ocean barrier

Regular readers of Earth-logs will recall that the islands of Indonesia were reached by the archaic humans Homo erectus and H. floresiensis at least a million years ago. Anatomical comparison of their remains suggests that the diminutive H. floresiensis probably evolved from H. erectus under the stress of being stranded on the small, resource-poor island of Flores: a human example of island dwarfism. In fact there are anatomically modern humans (AMH) living on Flores that seem to have evolved dwarfism in the same way since AMH first arrived there between 50 and 5 ka. Incidentally, H. erectus fossils and artefacts were found by Eugene Dubois in the late 19th century at a famous site near Trinil in Java. In 2014 it turned out that H. erectus had produced the earliest known art – zig-zag patterns on freshwater clam shells – between 540 and 430 ka ago. The episodic falls in global sea level, due to massive accumulations of ice on land during successive Pleistocene glacial episodes, aided migration by producing connections between the islands of SE Asia. They created a huge area of low-lying dryland known as ‘Sundaland’. The islands’ colonisation by H. erectus was made easy, perhaps inevitable.

The interconnection of SE Asian islands to form Sundaland (yellow) when sea level was 120 m lower than today. Even at that extreme the island of Sulawesi remained isolated by deep ocean water. Credit: based on Hakim et al Fig 1.

However, Flores and islands further east are separated from those to the west by a narrow but very deep strait. It channels powerful currents that are hazardous to small-boat crossings even today. Most palaeoanthropologists consider the colonisation of Flores by H. erectus most likely to have been accidental, reckoning that they were incapable of planning a crossing and building suitable craft. For AMH to have reached New Guinea and Australia around 60 ka ago, they must have developed sturdy craft and sea-faring skills. This paradigm suggests that the evolution of AMH, and thus their eventual occupation of all continents except Antarctica, must have involved a revolutionary ‘leap’ in their cognitive ability just before they left Africa. That view has been popularised by the presenter (Ella Al-Shamahi) of the 2025 BBC Television series Human – now on BBC iPlayer (requires viewers to create a free account) – in its second episode Into the Unknown. [The idea of a cognitive leap that ushered in the almost worldwide migration of anatomically modern humans was launched in 1995 by the controversial anthropologist Chris Knight of University College London.]

Flaked artefact, about the length of a human thumb, made of chert from excavations at Calio on Sulawesi, dated at 1.02 Ma. Credit: based on Hakim et al Fig 2

The large and peculiarly-shaped island of Sulawesi, also part of Indonesia, is notable for being the location of the earliest known figurative art: a cave painting of a Sulawesi warty pig, dated to at least 45.5 ka ago. Indonesian and Australian archaeologists working at a site near Calio in southern Sulawesi unearthed stone artefacts deep in river-terrace gravels that contain fossils of extinct pigs and dwarf elephants (Hakim, B. and 26 others 2025. Hominins on Sulawesi during the Early Pleistocene. Nature, v. 644; DOI: 10.1038/s41586-025-09348-6). The tools were struck from pebbles of hard, fine-grained rocks by flaking to produce sharp edges. A combination of dating techniques – palaeomagnetism, uranium-series and electron-spin resonance – on the terrace sediments and fossils in them yielded ages ranging from 1.04 to 1.48 Ma; far older than the earliest known presence of AMH on the island (73–63 ka). The dates for an early human presence on Sulawesi tally with those from Flores. The tool makers were probably H. erectus. To reach the island from Sundaland at a time when global sea level was 120 m lower than at present would have required crossing more than 50 km of open water. It seems unlikely that such a journey could have been accidental. The migrants would have needed seaworthy craft; possibly rafts. Clearly the AMH crossings to New Guinea around 60 thousand years ago would have been far more daunting. Both land masses would have been below the horizon of any point of departure from the Indonesian archipelago, even with island ‘hopping’. Yet the Sulawesi discovery, combined with the plethora of islands both large and small, suggests that the earlier non-AMH inhabitants of Indonesia potentially could have spread further at times of very low sea level.

See also: Brumm, A. et al. 2025. This stone tool is over 1 million years old. How did its maker get to Sulawesi without a boat? The Conversation, 6 August 2025

Did the Meteor Crater impact in Arizona dam the Grand Canyon 56 thousand years ago?

Meteor Crater, Arizona, USA. Credit: Travel in USA

Meteor Crater, 60 km east of Flagstaff in Arizona, USA, is probably the most visited site of an impact by an extraterrestrial object. At 1.3 km across it isn’t especially big, but it is exceptionally well preserved, having formed a mere 55.6 ka ago. Apart from its shape, its impact origin is proved by its rim, which shows overturning and inversion of the strata that it penetrated. The 40-metre-diameter nickel-iron object that did the damage arrived at a speed of around 13 km s-1 and delivered kinetic energy equivalent to an explosion of 10 million tons of TNT. This was sufficient to vaporise the body, except for a few fragments. Impressive as that is, the impact was tiny compared with others known on Earth, such as the Chicxulub impact that ended the Mesozoic Era 66 Ma ago. Nevertheless, the surface blast would have sterilised an area of up to 1000 km2 around the impact, i.e. up to about 18 km in all directions. Yet most of the impact energy would have affected the surrounding crust. It’s a place worth visiting.
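A back-of-envelope check of these figures is straightforward. The impactor density below is an assumed value for nickel-iron, not a number from the text; the result lands in the megaton range, the same order as the quoted energy.

```python
import math

# Order-of-magnitude check on the Meteor Crater figures quoted in the text.
diameter_m = 40.0       # impactor diameter, as given
speed_ms   = 13_000.0   # impact speed, as given
density    = 7900.0     # ASSUMED nickel-iron density, kg/m3

radius = diameter_m / 2
mass = density * (4 / 3) * math.pi * radius ** 3   # roughly 2.6e8 kg
kinetic_energy_j = 0.5 * mass * speed_ms ** 2      # roughly 2e16 J
megatons_tnt = kinetic_energy_j / 4.184e15         # 1 Mt TNT = 4.184e15 J
print(f"{megatons_tnt:.1f} Mt TNT")                # megaton range

# A sterilised area of 1000 km2 corresponds to a blast radius of ~17.8 km
radius_km = math.sqrt(1000 / math.pi)
print(f"{radius_km:.1f} km")
```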

The other must-see site in northern Arizona is the Grand Canyon, some 100 km north of Flagstaff by train, and about 320 km by road. Unlike Meteor Crater, whose origins were well established more than 50 years ago, the Grand Canyon still draws research teams to study the geology of the rock formations through which it cuts and the geomorphological processes that formed it. Several expeditions have examined caves high above the level of the Colorado River, which has cut the Canyon since the start of the Pliocene Epoch, some 5 Ma ago. One objective of this research has been to document past flooding due to the massive landslides and rock falls that must have occurred as cliffs became unstable during canyon formation. One cave – Stanton’s Cave – is 45 m above the present level of the Colorado: about the height of a 16-storey block of flats. The cave floor is made of well-bedded sand that contains driftwood logs, as do other caves along the canyon. Dating the logs from cave to cave should give at least an idea of the history of flooding, and thus of cliff collapses. In the case of Stanton’s Cave, early radiocarbon dating yielded results close to the maximum that the rapid decay of 14C makes possible. Such dating at the limit of the technique is imprecise. The oldest existing radiocarbon age in this case is 43.5 ± 1.5 ka, from a 1984 study. Since then, this dating technique has advanced considerably.
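The reason dating becomes imprecise near the radiocarbon limit can be shown with the decay law, using the conventional 14C half-life:

```python
# Why radiocarbon dating is imprecise near ~50 ka: after many half-lives
# almost no 14C remains, so tiny measurement errors or traces of modern
# contamination translate into large age errors.
HALF_LIFE_YR = 5730  # conventional 14C half-life

def fraction_remaining(age_yr):
    """Fraction of the original 14C left after age_yr years."""
    return 0.5 ** (age_yr / HALF_LIFE_YR)

# At the ~43.5 ka age originally measured for the Stanton's Cave logs,
# only about half a percent of the original 14C survives.
print(f"{fraction_remaining(43_500):.4f}")  # → 0.0052
```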

Remnants of a landslide, subsequently breached, in the Grand Canyon downstream of Stanton’s Cave. Credit: Richard Hereford

Karl Karlstrom – whose father was also entranced by cave deposits in the Grand Canyon in the 1960s – together with colleagues from the US managed to persuade radiocarbon specialists from Australia and New Zealand to improve the sediment dating (Karlstrom, K.E. and 11 others 2025. Grand Canyon landslide-dam and paleolake triggered by the Meteor Crater impact at 56 ka. Geology, v. 53, online article; DOI: 10.1130/G53571.1). The new 14C age of the logs is 55.25 ± 2.44 ka, confirmed by infrared stimulated luminescence (IRSL) dating of feldspar grains in the cave sand at 56.00 ± 6.39 ka. Combined with a new cosmogenic-nuclide exposure age of 56.00 ± 2.40 ka for the Meteor Crater ejecta, the results are exciting. It looks as if the cliff fall that dammed the Colorado River to fill the cave with sediment coincided with the impact. Crater formation is estimated to have resulted in a seismic event of magnitude 5.4. In such a teetering terrain as the Grand Canyon cliffs, the impact-induced earthquake about 100 km away, even if attenuated to an estimated effective magnitude of 3.5, may have been sufficient to topple part of the cliffs. With cliffs that average 1.6 km high, such a collapse would have displaced sufficient debris to create a substantial barrier to the flow of the Colorado River, which is tightly constrained between cliffs. The chaotic debris at the suggested dam site is now partly covered by rounded river cobbles, suggesting that it was soon overtopped, probably within a thousand years of the cliff collapse.

Because all the dates have substantial imprecision, it is not possible to claim that the authors have proved conclusively a direct connection between impact and cliff collapse. But neither do the age data disprove what is a plausible causal connection.

See also: UNM study finds link between Grand Canyon landslide and Meteor Crater impact. University of New Mexico News 15 July 2025

Evolution of pigmentation in anatomically modern humans of Europe: a new paradigm?

The colours of human skin, eyes and hair in living people across the world are determined by variants of genes (alleles) found at the same place on a chromosome. Since chromosomes are inherited from both mother and father, an individual may have two copies of the same allele (homozygous) or one of each (heterozygous). A dominant allele is always expressed, even if only a single copy is present; a recessive allele is expressed only if the individual inherits two copies of it. Most characteristics of individuals result from the interaction of multiple genes rather than a single gene. A commonly cited example is the coloration of eyes. If we had a single gene for eye colour – that of the iris – with one allele for blue (recessive, ‘b’) and one for brown (dominant, ‘B’), brown-eyed individuals would have one or two ‘B’ alleles (Bb or BB), whereas those with blue eyes would have to have two ‘blue’ alleles (bb). But inheritance is more complicated than that: there are people with green, hazel or grey eyes, and even with left and right eyes of different colours. Such examples suggest that more than two genes affect human eye colour, and each must have evolved as a result of mutations. Much the same goes for hair and skin coloration.
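The single-gene toy model above can be made concrete by enumerating the offspring of two heterozygous (Bb) parents:

```python
# The single-gene eye-colour toy model from the text: 'B' (brown) is
# dominant, 'b' (blue) recessive. Enumerate offspring genotypes of two
# heterozygous (Bb) parents and read off the phenotypes.
from itertools import product

def phenotype(genotype):
    """Brown if at least one dominant 'B' allele is present."""
    return "brown" if "B" in genotype else "blue"

# Each Bb parent passes on 'B' or 'b' with equal chance
offspring = ["".join(alleles) for alleles in product("Bb", repeat=2)]
counts = {"brown": 0, "blue": 0}
for g in offspring:
    counts[phenotype(g)] += 1

print(offspring, counts)  # the classic 3:1 brown-to-blue ratio
```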

A group of scientists from the University of Ferrara in Italy have analysed highly detailed ancient DNA in anatomically modern human remains from Russia (Palaeolithic), Sweden (Mesolithic) and Croatia (Neolithic) to tease out the complexities of pigmentation inheritance. They then applied a statistical approach learned from that study to predict the likely skin, eye and hair pigmentation in 348 less detailed genomes of ancient individuals whose remains date back as far as 45 ka (Silvia Perretti et al. 2025. Inference of human pigmentation from ancient DNA by genotype likelihood. Proceedings of the National Academy of Sciences, v. 122, article e2502158122; DOI: 10.1073/pnas.2502158122).
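The genotype-likelihood idea in the paper's title can be sketched in miniature. The read counts and error rate below are invented for illustration; the real method works across many pigmentation-associated sites with proper statistical machinery.

```python
from math import comb

# A toy version of the genotype-likelihood approach used for low-coverage
# ancient DNA: rather than calling a hard genotype, compute how likely each
# genotype is given the sequencing reads at a biallelic site.
def genotype_likelihoods(n_reads, n_alt, error=0.01):
    """Likelihood of observing n_alt alternate-allele reads out of n_reads,
    for genotypes carrying 0, 1 or 2 copies of the alternate allele."""
    out = {}
    for g in (0, 1, 2):
        # chance a single read shows the alternate allele, given the genotype
        p_alt = (g / 2) * (1 - error) + (1 - g / 2) * error
        out[g] = comb(n_reads, n_alt) * p_alt**n_alt * (1 - p_alt)**(n_reads - n_alt)
    return out

# 4 reads, 2 of them alternate: the heterozygote (g=1) is most likely,
# but with so few reads the other genotypes are not firmly excluded.
lik = genotype_likelihoods(4, 2)
best = max(lik, key=lik.get)
print(lik, best)
```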

An artist’s impression of a Mesolithic woman from southern Denmark (credit: Tom Bjorklund)

All the hunter-gatherer Palaeolithic individuals (12 samples between 45 and 13 ka old) bar one showed clear signs of dark pigmentation in skin, eyes and hair – the outlier from Russia was probably lighter. Those from the Mesolithic (14 to 4 ka) showed that 11 out of 35 had a light eye colour (Northern Europe, France, and Serbia), but most retained the dark skin and hair expected in descendants of migrants from Africa. Only one 12 ka hunter-gatherer from Sweden had inferred blue eyes, blonde hair, and light skin. The retention of dark pigmentation by European hunter-gatherers who migrated there from Africa has been noted before, using DNA from Mesolithic human remains and in one case from birch resin chewed by a Mesolithic woman. This called into question the hypothesis that high levels of melanin in skin, which protect indigenous people in Africa from cancers, would result in their producing insufficient vitamin D for good health. That notion supposed that out-of-Africa migrants would quickly evolve paler skin coloration at higher latitudes. It is now known that diets rich in meat, nuts and fungi – staples for hunter-gatherers – provide sufficient vitamin D for health at high latitudes. A more recent hypothesis is that pale skins may have evolved only after the widespread Neolithic adoption of farming, when people came to rely on a diet dominated by cereals that are a poor source of vitamin D.

However, the 132 Neolithic farmer individuals (10 to 4 ka ago) studied by Perretti et al. showed increased diversity in pigmentation, with more frequent light skin tones, yet dark individuals persisted, particularly in southern and eastern Europe. Hair and eye colour showed considerable variability, the earliest sign of red hair showing up in Turkey. Even Copper- and Bronze Age samples (113 from 7 to 3 ka) and those from Iron Age Europeans (25 from 3 to 1.7 ka ago) still indicate common retention of dark skin, eyes and hair, although the proportion of lighter pigmentation increased in some regions of Europe. Other analyses of ancient DNA have shown that the Palaeo- and Mesolithic populations of Europe were quickly outnumbered by an influx of early farmers, probably from the Anatolian region of modern Turkey, during the Neolithic. The farming lifestyle seems likely to have allowed the numbers of those who practised it to rise beyond the natural environment’s ‘carrying capacity’ for hunter-gatherers. The former inhabitants of Europe may simply have been genetically absorbed within the growing population of farmers. Much the same absorption of earlier groups seems to have happened with the westward migration from the Ukrainian and Russian steppes of the Yamnaya people and culture, culminating in the start of the European Bronze Age that reached western Europe around 2.1 ka. The Yamnaya introduced metal culture, horse-drawn wheeled vehicles and possibly Indo-European language.

So the novel probabilistic approach to ancient DNA by Perretti et al. also casts doubt on the diet-based evolution of light pigmentation at high latitudes. Instead, pulses of large population movements, and thus changes in European population genetics, probably account for the persistence of abundant evidence for dark pigmentation throughout Europe until historic times. The ‘lightening’ of Europeans’ physiognomy seems to have been vastly more complex than previously believed. Early Europe seems to have been almost bewilderingly diverse, which makes a complete mockery of modern chauvinism and racism. The present European genetic ‘melting pot’ is surprisingly similar to that of Europe’s ancient past.

The end-Triassic mass extinction and ocean acidification

Triassic reef limestones in the Dolomites of northern Italy. Credit: © Matteo Volpone

Four out of the six mass extinctions that ravaged life on Earth during the last 300 Ma coincided with large igneous events marked by basaltic flood volcanism. But not all such bursts of igneous activity match significant mass extinctions. Moreover, some rapid rises in the rate of extinction are not clearly linked to peaks in igneous activity. Another issue in this context is that ‘kill mechanisms’ are generally speculative rather than based on hard data. Large igneous events inevitably emit very large amounts of gases and dust-sized particulates into the atmosphere. Carbon dioxide, being a greenhouse gas, tends to heat up the global climate, but it also dissolves in seawater to lower its pH. Both global warming and more acidic oceans are possible ‘kill mechanisms’. Volcanic emission of sulfur dioxide results in acid rain and thus a decrease in the pH of seawater. But if it is blasted into the stratosphere it combines with oxygen and water vapour to form minute droplets of sulfuric acid. These form a long-lived haze, which reflects solar energy back into space. Such an increased albedo therefore tends to cool the planet and create a so-called ‘volcanic winter’. Dust that reaches the stratosphere reduces the penetration of visible light to the surface, again resulting in cooling. And since photosynthetic organisms rely on blue and red light to power their conversion of CO2 and water to carbohydrates and oxygen, these primary producers at the base of the marine and terrestrial food webs decline. That presents a fourth kill mechanism that may trigger mass extinction on land and in the oceans: starvation.

Palaeontologists have steadily built up a powerful case for occasional mass extinctions since fossils first appear in the stratigraphic record of the Phanerozoic Eon. Their data are simply the numbers of species, genera and families of organisms preserved as fossils in packages of sedimentary strata that represent roughly equal ‘parcels’ of time (~10 Ma). Mass extinctions are now unchallengeable parts of life’s history and evolution. Yet assigning the specific kill mechanisms involved in the damage that they create remains very difficult. There are hypotheses for the cause of each mass extinction, but a dearth of data that can test why they happened. The only global die-off close to hard scientific resolution is that at the end of the Cretaceous. The K-Pg (formerly K-T) event has been extensively covered in Earth-logs since 2000. It involved a mixture of global ecological stress from the Deccan large igneous event, spread over a few million years of the Late Cretaceous, and the near-instantaneous catastrophe induced by the Chicxulub impact, with only a few remaining ‘i’s to dot and ‘t’s to cross. Other possibilities have been raised: gamma-ray bursts from distant supernovae; belches of methane from the sea floor; emissions of hydrogen sulfide gas from seawater itself during ocean anoxia events; sea-level changes, etc.

The mass extinction that ended the Triassic (~201 Ma) coincides with evidence for intense volcanism in South and North America, Africa and southern Europe, then at the core of the Pangaea supercontinent. Flood basalts and large igneous intrusions – the Central Atlantic Magmatic Province (CAMP) – began the final break-up of Pangaea. The end-Triassic extinction deleted 34% of marine genera. Marine sediments aged around 201 Ma reveal a massive shift in sulfur and carbon isotopes in the ocean that has been interpreted as a sign of acute anoxia in the world’s oceans, which may have resulted in massive burial of oxygen-starved marine animal life. However, there is no sign of Triassic, carbon-rich deep-water sediments of the kind that characterise ocean anoxia events in later times. But it is possible that bacteria that use the reduction of sulfate (SO42-) to sulfide (S2-) ions as an energy source while decaying dead organisms could have produced the sulfur isotope ‘excursion’. That would also have produced massive amounts of highly toxic hydrogen sulfide gas, which would have overwhelmed terrestrial animal life at continental margins. The solution of H2S in water would also have acidified the world’s oceans.

Molly Trudgill of the University of St Andrews, Scotland and colleagues from the UK, France, the Netherlands, the US, Norway, Sweden and Ireland set out to test the hypothesis of end-Triassic oceanic acidification (Trudgill, M. and 24 others 2025. Pulses of ocean acidification at the Triassic–Jurassic boundary. Nature Communications, v. 16, article 6471; DOI: 10.1038/s41467-025-61344-6). The team used Triassic fossil oysters from before the extinction time interval. Boron-isotope data from the shells are a means of estimating variations in the pH of seawater. Before the extinction event the average pH of Triassic seawater was about the same as today, at 8.2 or slightly alkaline. By 201 Ma the pH had shifted towards acidic conditions by at least 0.3 units: the biggest shift detected in the Phanerozoic record. One of the most dramatic changes in Triassic marine fauna was the disappearance of the reef limestones that recently evolved modern corals had built on a vast scale earlier in the Triassic; a so-called ‘reef gap’ in the geological record. That suggests a possible analogue to the waning of today’s coral reefs, which is thought to be a result of increased dissolution of CO2 in seawater and acidification, related to global greenhouse warming. Using the fossil oysters, Trudgill et al. also sought a carbon-isotope ‘fingerprint’ for the source of elevated CO2, finding that it mainly derived from the mantle and was probably emitted by CAMP volcanism. So their discussion centres mainly on end-Triassic ocean acidification as an analogy for current climate change driven by CO2 largely emitted by anthropogenic burning of fossil fuels. Nowhere in their paper do they mention any role for acidification by hydrogen sulfide emitted during massive anoxia on the Triassic ocean floor, which hit the scientific headlines in 2020 (see earlier link).
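The severity of a 0.3-unit pH shift is easier to grasp in hydrogen-ion terms: because pH is the negative base-10 logarithm of hydrogen-ion activity, a fall of 0.3 units roughly doubles [H+]. A minimal sketch of that standard calculation (the pH values are those quoted above; nothing else is taken from the paper):

```python
def h_ion(ph: float) -> float:
    """Hydrogen-ion activity (mol/L) from pH: [H+] = 10^-pH."""
    return 10.0 ** (-ph)

pre_extinction = h_ion(8.2)   # average Triassic seawater pH before the event
acidified = h_ion(8.2 - 0.3)  # after the ~0.3-unit acidification pulse

# 10^0.3 ≈ 2, so the hydrogen-ion concentration roughly doubled
print(f"[H+] increased by a factor of {acidified / pre_extinction:.2f}")
```

The same doubling applies to today's oceans: the ~0.1-unit pH decline since pre-industrial times already corresponds to a ~26% rise in [H+].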

Sagduction of greenstone belts and formation of Archaean continental crust

Simplified geological map of the Archaean Yilgarn Craton in Western Australia. Credit: Geological Survey of Western Australia

Every ancient craton seen from space shows patterns that are unique to Archaean continental crust: elongated, ‘canoe-shaped’ greenstone belts enveloped by granitic gneisses, both of which are punctured by domes of younger, less deformed granites. The Yilgarn Craton of Western Australia is a typical granite-greenstone terrain. Greenstone belts contain lavas of ultramafic, basaltic and andesitic compositions, which in undeformed settings show the typical pillow structures formed by submarine volcanic extrusion. There are also layered mafic to ultramafic complexes formed by fractional crystallisation, minor sedimentary sequences and occasionally more felsic lavas and ashes. The enveloping grey gneisses are dominantly of highly deformed tonalite-trondhjemite-granodiorite (TTG) composition, which suggests that they formed from large volumes of sodium-rich, silicic magmas, probably generated at depth by partial melting of hydrated basaltic rocks.

The heat-producing radioactive isotopes of potassium, uranium and thorium in both the Archaean mantle and crust would have been more abundant before 2.5 Ga ago, because they decay over time. Consequently the Earth’s interior would then have generated more heat than now, which gradually escaped by thermal conduction towards the cooler surface. The presence of pillow lavas and detrital sediments in greenstone belts indicates that surface temperatures during the Archaean Eon were below the boiling point of water; in fact probably much the same as in the tropics at present. Indeed there is evidence that Earth was then a water world. It may even have been so during the Hadean, as revealed by the oxygen-isotope data in 4.4 Ga zircon grains. The broad conclusion from such findings is that the Archaean geothermal gradient was much steeper; there would have been a greater temperature increase with depth than now and new crust would have cooled more slowly. Subduction of cool lithosphere would have been less likely than in later times, especially as higher mantle heat production would have generated new crust more quickly. Another likely possibility is that far more heat would have been moved by convection: there would have been more mantle-penetrating plumes and they would have been larger. Large mantle plumes of the Phanerozoic have generated vast ocean-floor plateaus, such as the Kerguelen and Ontong Java plateaus.
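The extra Archaean heat production can be back-calculated: the abundance of each isotope at time t in the past is its present abundance scaled up by 2^(t/half-life). A rough sketch in Python; the half-lives are standard textbook values, but the present-day heat shares are illustrative round numbers I have assumed, not figures from the sources above:

```python
# Half-lives of the main heat-producing isotopes, in Ga (textbook values)
HALF_LIFE = {"40K": 1.25, "232Th": 14.0, "235U": 0.70, "238U": 4.47}
# Assumed, illustrative present-day shares of total radiogenic heat
SHARE_NOW = {"40K": 0.20, "232Th": 0.40, "235U": 0.02, "238U": 0.38}

def relative_heat(t_ga: float) -> float:
    """Total radiogenic heat t_ga billion years ago, relative to today."""
    return sum(share * 2 ** (t_ga / HALF_LIFE[iso])
               for iso, share in SHARE_NOW.items())

print(f"Radiogenic heating at 2.5 Ga was ~{relative_heat(2.5):.1f}x today's")
```

With these assumptions the end of the Archaean comes out at roughly twice today's radiogenic heat output, with short-lived 40K and 235U contributing far more then than now.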

A group of geoscience researchers at The University of Hong Kong and international colleagues recently completed a geological and geochemical study of the North China Craton, analysing their data in the light of recently emerging views on Archaean processes (Dingyi Zhao et al. 2025. A two-stage mantle plume-sagduction origin of Archean continental crust revealed by water and oxygen isotopes of TTGs. Science Advances, v. 11, article eadr9513; DOI: 10.1126/sciadv.adr9513). They found compelling evidence that ~2.5 Ga-old Neoarchaean TTG gneisses in the North China granite-greenstone terrain formed by partial melting of an earlier mafic-ultramafic greenstone crust with high water content. They consider this to support a two-stage model for the generation of the North China Craton’s crust above a vast mantle plume. The first stage, at around 2.7 Ga, was the arrival of the plume at the base of the lithosphere, which partially melted as a result of the decompression of the rising ultramafic plume. The resulting mafic magma created an oceanic plateau, partly by underplating the older lithosphere, intruding it and erupting onto the older ocean floor. This created the precursors of the craton’s greenstones, the upper part of which interacted directly with seawater to become hydrothermally altered. They underwent minor partial melting to produce small TTG intrusions. A second plume arriving at ~2.5 Ga resulted in sinking of the greenstones under their own weight to mix or ‘hybridise’ with the re-heated lower crust. This caused substantial partial melting of the greenstones to generate voluminous TTG magmas that rose as the greenstones subsided. It seems likely that this dynamic, hot environment deformed the TTGs as they rose to create the grey gneisses so typical of Archaean granite-greenstone terranes.
[Note: The key evidence for Dingyi Zhao et al.’s conclusions is that the two TTG pulses yielded the 2.7 and 2.5 Ga ages, and show significantly different oxygen isotope data (δ18O)].

Two stages of TTG gneiss formation in the North China Craton and the sinking (sagduction) of greenstone belts in the second phase. Credit: Dingyi Zhao et al., Fig. 4

Such a petrogenetic scenario, termed sagduction by Dingyi Zhao and colleagues, also helps explain the unique keel-like nature of greenstone belts and the abundant evidence of vertical tectonics in many Archaean terrains (see: Vertical tectonics and formation of Archaean crust; January 2002). Their model is not entirely new, but it is better supported by data than earlier, more speculative ideas. That such processes have been recognised in the Neoarchaean – the North China Craton is one of the youngest granite-greenstone terrains – suggests they may well apply to far older Archaean continental crust generation. It is perhaps the last of a series of such events that began in the Hadean, as summarised in the previous Earth-logs post.

The world’s oldest crust in the Nuvvuagittuq Greenstone Belt, Quebec

Since 1999, the rocks generally acknowledged to be the oldest on Earth have been part of the Acasta gneisses in the Slave Craton in Canada’s Northwest Territories; specifically the Idiwhaa tonalitic gneisses. Zircons extracted from that unit yielded an age of 4.02 billion years (Ga) using U-Pb radiometric dating, revealing the time of their crystallisation from granitic magma. But nine years later some metabasaltic rocks from the tiny (20 km2) Nuvvuagittuq Greenstone Belt on the eastern shore of Hudson Bay were dated using the Sm-Nd method at almost 4.3 Ga (see: At last, 4.0 Ga barrier broken; November 2008). Taken at face value, the metabasaltic rocks seemed to lie well within the Hadean Eon (4.6 to 4.0 Ga) and could thus represent primary crust of that antiquity. However, U-Pb dating of zircons from thin sodium-rich granitic rocks (trondhjemites) that intrude them yielded ages no older than about 3.8 Ga. Similar ages emerged from zircons found in metasediments interleaved in the dominant mafic unit. Discrepancies between the two completely different dating methods have meant that the Hadean antiquity of the mafic rocks has been disputed since 2008. It was possible that the Sm-Nd results from the metabasalts stemmed from the original mafic magmas having inherited a Hadean Sm-Nd isotopic ‘signature’ from their mantle source. That is, they may have been contaminated and could have formed in the early Archaean.

Glacially smoothed outcrops near Inukjuak, Quebec that reveal rocks of the Nuvvuagittuq Greenstone Belt. Credit: Jonathan O’Neil, University of Ottawa

Jonathan O’Neil, now at Ottawa University in Canada, led the first isotopic investigation of the Nuvvuagittuq Greenstone Belt and has engaged in research there ever since. Further field and laboratory studies revealed that the previously dated mafic rocks had been intruded by large, chemically differentiated gabbro sills. A team of geochemists from the University of Ottawa and Carleton University, including O’Neil, has now published isotopic evidence from the intrusions that suggests a Hadean age for their parent magma (Sole, C. et al. 2025. Evidence for Hadean mafic intrusions in the Nuvvuagittuq Greenstone Belt, Canada. Science, v. 388, p. 1431-1435; DOI: 10.1126/science.ads8461). The authors used the decay schemes of two radioactive samarium isotopes, 147Sm and 146Sm; a significant advance in radiometric dating. The first decays to 143Nd with a half-life of about 100 billion years, the second to 142Nd with a much shorter half-life of about 100 million years. Due to its more rapid decay, in geological terms, 146Sm is now much rarer than 147Sm. Consequently, using the short-lived 146Sm-142Nd decay system is technically more difficult than using the 147Sm-143Nd system. But the team managed to get good results from both the ‘fast’ and the ‘slow’ decay schemes. They tally nicely, yielding ages of 4157 and 4196 Ma. The gabbros provide a minimum age for the metabasalts that they cut through. The original 4.3 Ga Sm-Nd date for the metabasalts is thus plausible. Sole and colleagues consider the dominant metabasaltic rocks to have formed a primary crust in late Hadean times that was invaded by later mantle-derived mafic magma about 100 Ma later. The granitic rocks that constitute about one third of the Nuvvuagittuq terrain seem to have been generated by partial melting more than 300 Ma later still, during the Palaeoarchaean.
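The contrast between the ‘fast’ and ‘slow’ samarium clocks is easy to quantify: the fraction of a radioactive isotope surviving after time t is 0.5^(t/half-life). A short illustrative sketch; the half-lives are approximate textbook figures, and that of 146Sm is itself debated (published estimates of roughly 68 and 103 Ma):

```python
def fraction_remaining(age_ma: float, half_life_ma: float) -> float:
    """Fraction of an isotope surviving after age_ma, given its half-life (Ma)."""
    return 0.5 ** (age_ma / half_life_ma)

EARTH_AGE_MA = 4567.0

sm146 = fraction_remaining(EARTH_AGE_MA, 103.0)      # 'fast' clock, ~10^8-year half-life
sm147 = fraction_remaining(EARTH_AGE_MA, 106_000.0)  # 'slow' clock, ~10^11-year half-life
print(f"146Sm left since Earth formed: {sm146:.1e}")  # effectively extinct today
print(f"147Sm left since Earth formed: {sm147:.0%}")  # almost all still present
```

This is why the 146Sm-142Nd system is so demanding to use: essentially no 146Sm survives, so its former presence must be read from minute variations in 142Nd abundance.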

Perhaps similar techniques will now be deployed in granite-greenstone terrains in other cratons. Many of the older ones, generally designated as Palaeoarchaean in age, also contain abundant metamorphosed mafic and ultramafic igneous rocks. Perhaps their origin was akin to that of Nuvvuagittuq’s; i.e. more Hadean crust may await unmasking. Meanwhile, there seems to be more to discover from Nuvvuagittuq. For instance, some of the rocks suggested to be metasediments interleaved in the metabasalts show intricate banding that resembles the products of bacterial-mat accumulation in younger terrains. Signs of Hadean life?

Since the first reliable radiometric dating of Archaean rocks in 1971, there has been an element of competition to date the oldest rocks on Earth: to push history back towards the initial formation of the Earth. It is one of the most disputatious branches of Earth history. Rivalry may play a significant part in driving the science, as well as the development of novel dating techniques and the continuing discovery of clearly old relationships using ‘old-fashioned’ relative dating, such as signs of intrusion, unconformities etcetera. But in some cases there is a darker side: the potential for profit. Recently, samples from Nuvvuagittuq appeared for sale on the Internet, priced at $10,000. They may have been collected under the guise of supplying museums by a group that shipped in mechanical excavators in 2016. Unsurprisingly this angered the local Inuit community of Inukjuak. They were also worried about bona fide collection for scientific research that had left parts of the small, once pristine area somewhat battered, including cultural features such as an inukshuk navigational monument. Their fury at commercial exploitation of their homeland resulted in the community council closing the area to collecting in 2024. I emphasise that this violation of basic geological ethics was by commercial rock collectors and dealers, not academic geologists. The local people are now considering careful issue of research permits so that important research can continue. But further rock collecting may remain banned.

See also: New Research Verifies Northern Canada Hosts Earth’s Oldest Rocks. Scienmag, 26 June 2025; Gramling, C. 2025. Earth’s oldest rocks may be at least 4.16 billion years old. ScienceNews.

PS With many thanks to ‘Piso Mojado’ for alerting me to this paper

Chinese skull confirmed as Denisovan

For over a century Chinese scientists have been puzzling over ancient human skulls that show pronounced brow ridges. Some assigned them to known species of Homo, others to species that they believed were unique to China. A widely held view in China was that people now living there evolved directly from them, adhering to the ‘Multiregional Evolution’ hypothesis as opposed to that of ‘Out of Africa’. However, the issue might now have been resolved. In the last few years palaeoanthropologists have begun to suspect that these fossilised crania may have been Denisovans, but none had been subject to genetic and proteomic analysis. The few remains from Siberia and Tibet that initially proved the existence of Denisovans were very small: just a finger bone and teeth. Out of the blue, teeth in a robust hominin mandible dredged from the Penghu Channel between Taiwan and China yielded protein sequences that matched proteomic data from Denisovan fossils in Denisova Cave and Baishiya Cave in Tibet, suggesting that Denisovans were big and roamed widely in East Asia. In 2021 a near-complete robust cranium came to light that had been found in the 1930s near Harbin in China and hidden – at the time the area was under Japanese military occupation. It emerged only when its finder revealed its location in 2018, shortly before his death. It was provisionally called Homo longi or ‘Dragon Man’. Qiaomei Fu of the Institute of Vertebrate Paleontology and Paleoanthropology in Beijing and her colleagues have made a comprehensive study of the fossil.

The cranium found near Harbin, China belonged to a Denisovan. Credit: Hebei Geo University

It is at least 146 ka old, probably too young to have been H. erectus, but predates the earliest anatomically modern humans to have reached East Asia from Africa (~60 ka ago). The Chinese scientists have developed protein- and DNA-extraction techniques akin to those pioneered at the Max Planck Institute for Evolutionary Anthropology in Leipzig. It proved impossible to extract sufficient ancient nuclear DNA from the cranium bone for definitive genomic data, but dental plaque (calculus) adhering around the only surviving molar in the upper jaw did yield mitochondrial DNA. The mtDNA matched that found in Siberian Denisovan remains (Qiaomei Fu et al. 2025. Denisovan mitochondrial DNA from dental calculus of the >146,000-year-old Harbin cranium. Cell, v. 188, p. 1–8; DOI: 10.1016/j.cell.2025.05.040). The bone did yield 92 proteins and 122 single amino acid polymorphisms, as well as more than 20 thousand peptides (Qiaomei Fu and 8 others 2025. The proteome of the late Middle Pleistocene Harbin individual. Science, v. 388; DOI: 10.1126/science.adu9677). Again, these established a molecular link with the already known Denisovans, specifically with one of the Denisova Cave specimens. Without the painstaking research of the Chinese team, Denisovans would have remained merely a genome and a proteome without much sign of a body! From the massive skull it is clear that they were indeed big people, with brains much the same size as those of living people. Estimates based on the Harbin cranium suggest an individual weighing around 100 kg (220 lb or ~15 stone): a real heavyweight or rugby prop!

The work of Qiaomei Fu and her colleagues, plus the earlier, more limited studies by Tsutaya et al., opens a new phase in palaeoanthropology. Denisovans now have a genome and well-preserved parts of an entire head, which may allow the plethora of ancient skulls from China to be anatomically assigned to the species. Moreover, by extracting DNA from dental plaque for the first time they have opened a new route to obtaining genomic material: dental calculus is very much tougher and less porous than bone.

See also: Curry, A. ‘Dragon Man’ skull belongs to mysterious human relative. 2025. Science, v. 388; DOI: 10.1126/science.z8sb68w. Smith K. 2025. We’ve had a Denisovan skull since the 1930s – only nobody knew. Ars Technica, 18 June 2025. Marshall, M. 2025. We finally know what the face of a Denisovan looked like. New Scientist 18 June 2025.

Detecting oxygenic photosynthesis in the Archaean Earth System

For life on Earth, one of the most fundamental shifts in ecosystems was the Great Oxygenation Event (GOE) 2.5 to 2.3 billion years (Ga) ago. The first evidence for its occurrence came from the sedimentary record, particularly ancient soils (palaeosols) that mark exposure of the continental surface above sea level and rock weathering. Palaeosols older than 2.4 Ga have low iron contents, which suggests that iron was soluble in surface waters, i.e. in its reduced bivalent form Fe2+. Sediments formed by flowing water also contain rounded grains of minerals that in today’s oxygen-rich environments are soon broken down and dissolved through oxidising reactions, for instance pyrite (FeS2) and uraninite (UO2). Palaeosols younger than 2.4 Ga are reddish to yellowish brown in colour and contain insoluble oxides and hydroxides of Fe3+, principally hematite (Fe2O3) and goethite (FeO.OH). After this time sediments deposited by wind action and rivers are similar in colour: so-called ‘redbeds’. Following the GOE the atmosphere initially contained only traces of free oxygen, but sufficient to make the surface environment oxidising. In fact such an atmosphere defies Le Chatelier’s Principle: free oxygen should react rapidly with the rest of the environment through oxidation. That it doesn’t shows that it is continually generated as a result of oxygenic photosynthesis. The CO2 + H2O = carbohydrate + oxygen equilibrium never reaches a balance because of continual burial of dead organic material.

Free oxygen is a prerequisite for all multicelled eukaryotes, and it is probably no coincidence that fossils of the earliest known ones occur in sediments in Gabon dated at 2.1 Ga: 300 Ma after the Great Oxygenation Event. However, the GOE relates to surface environments of that time. From 2.8 Ga – in the Mesoarchaean Era – to the late Palaeoproterozoic around 1.9 Ga, vast quantities of Fe3+ were locked in iron oxide-rich banded iron formations (BIFs): roughly 10^5 billion tons in the richest deposits alone (see: Banded iron formations (BIFs) reviewed; December 2017). Indeed, similar ironstones occur in Archaean sedimentary sequences as far back as 3.7 Ga, albeit in uneconomic amounts. Paradoxically, enormous amounts of oxygen must have been generated by marine photosynthesis to oxidise the Fe2+ dissolved in the early oceans by hydrothermal alteration of basalt lava upwelling from the Archaean mantle. But none of that free oxygen made it into the atmosphere. Almost as soon as it was released it oxidised dissolved Fe2+, to be dumped as iron oxide on the ocean floor. Before the GOE that aspect of geochemistry did obey Le Chatelier!

A limestone made of stromatolites

The only likely means of generating oxygen on such a gargantuan scale from the earliest Archaean onwards is through teeming prokaryote organisms capable of oxygenic photosynthesis. Because modern cyanobacteria do just that, the burden of explaining the BIFs has fallen on them. One reason for that hypothesis stems from cyanobacteria in a variety of modern environments building dome-shaped bacterial mats. Their forms closely resemble those of Archaean stromatolites found as far back as 3.7 Ga. But these are merely peculiar carbonate bodies that could have been produced by bacterial mats deploying a wide variety of metabolic chemistry. Laureline Patry of the Université de Bretagne Occidentale, Plouzané, France, and colleagues from France, the US, Canada and the UK have developed a novel way of addressing the opaque mechanism of Archaean oxygen production (Patry, L.A. and 12 others 2025. Dating the evolution of oxygenic photosynthesis using La-Ce geochronology. Nature, v. 642, p. 99-104; DOI: 10.1038/s41586-025-09009-8).

They turned to the basic geochemistry of rare earth elements (REE) in Archaean stromatolitic limestones from the Superior Craton of northern Canada. Of the 17 REE, only cerium (Ce) is capable of being oxidised in the presence of oxygen. As a result Ce can become depleted relative to its neighbouring REE in the Periodic Table, as it is in many Phanerozoic limestones. Five samples of the limestones show consistent depletion of Ce relative to all the other REE. It is also possible to date when such fractionation occurred using 138La-138Ce geochronology. The samples were dated at 2.87 to 2.78 Ga (Mesoarchaean), making them the oldest limestones that show Ce anomalies and thus record oxygenated seawater in which the microbial mats thrived. But that is only 300 Ma earlier than the start of the GOE. Stromatolites are abundant in the Archaean record as far back as 3.4 Ga, so it should be possible to chart the link between microbial carbonate mats and oxygenated seawater to a billion years before the GOE, although that does not tell us about the kind of microbes that were making stromatolites.
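Cerium depletion of this kind is conventionally expressed as a Ce anomaly (Ce/Ce*): the shale-normalised Ce concentration divided by the value interpolated from its Periodic Table neighbours La and Pr, with results below 1 signalling oxidative loss of Ce from seawater. A sketch of that standard calculation; the normalising values are the widely used PAAS shale composition, and the sample concentrations are invented purely for illustration:

```python
import math

# Post-Archaean Australian Shale (PAAS) normalising values, ppm (approximate)
PAAS = {"La": 38.2, "Ce": 79.6, "Pr": 8.83}

def ce_anomaly(la_ppm: float, ce_ppm: float, pr_ppm: float) -> float:
    """Ce/Ce* = Ce_N / sqrt(La_N * Pr_N); < 1 means Ce depletion (oxidising water)."""
    la_n = la_ppm / PAAS["La"]
    ce_n = ce_ppm / PAAS["Ce"]
    pr_n = pr_ppm / PAAS["Pr"]
    return ce_n / math.sqrt(la_n * pr_n)

# Hypothetical limestone measurements (ppm), not data from Patry et al.
print(f"Ce/Ce* = {ce_anomaly(2.0, 2.5, 0.45):.2f}")  # < 1: a negative Ce anomaly
```

Using the geometric mean of the La and Pr values estimates what Ce 'should' be if it behaved like its redox-insensitive neighbours, so any shortfall isolates the oxidation effect.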

See also: Tracing oxygenic photosynthesis via La-Ce geochronology. Bioengineer.org, 29 May 2025; Allen, J.F. 2016. A proposal for formation of Archaean stromatolites before the advent of oxygenic photosynthesis. Frontiers in Microbiology, v. 7; DOI: 10.3389/fmicb.2016.01784.