Tsunami risk in East Africa

The 26 December 2004 Indian Ocean tsunami was one of the deadliest natural disasters since the start of the 20th century, with an estimated death toll of around 230,000. Millions more were deeply traumatised, bereft of homes and possessions, rendered short of food and clean water, and threatened by disease. Together with the tsunami launched onto the seaboard of eastern Japan by the Tohoku-Sendai earthquake of 11 March 2011, it has spurred research into detecting the signs of older tsunamis left in coastal sedimentary deposits (see for instance: Doggerland and the Storegga tsunami, December 2020). In normally quiet coastal areas these tsunamites commonly take the form of sand sheets interbedded with terrestrial sediments, such as peaty soils. On shores fully exposed to the ocean the evidence may take the form of jumbles of large boulders that could not have been moved by even the worst storm waves.

Sand sheets attributed to a succession of tsunamis, interbedded with peaty soils deposited in a swamp on Phra Thong Island, Thailand. Note that a sand sheet deposited by the 2004 Indian Ocean tsunami is directly beneath the current swamp surface (Credit: US Geological Survey)

Most of the deaths and damage wrought by the 2004 tsunami were along coasts bordering the Bay of Bengal and Andaman Sea: in Indonesia, Thailand, Myanmar, India, Sri Lanka and the Nicobar Islands. Tsunami waves were recorded on the coastlines of Somalia, Kenya and Tanzania, but had far lower amplitudes and energy, so that fatalities – several hundred – were restricted to coastal Somalia. East Africa was protected to a large extent by the Indian subcontinent, which took much of the wave energy released by the magnitude 9.1 to 9.3 earthquake (the third largest ever recorded) beneath Aceh at the northernmost tip of the Indonesian island of Sumatra. Yet the subduction zone that failed there extends far to the southeast along the Sunda Arc. Earthquakes further along that active island arc might expose parts of East Africa to far higher wave energy, because less intervening land mass stands in the way.

This possibility, together with the lack of any estimate of tsunami risk for East Africa, drew a multinational team of geoscientists to the estuary of the Pangani River in Tanzania (Maselli, V. and 12 others 2020. A 1000-yr-old tsunami in the Indian Ocean points to greater risk for East Africa. Geology, v. 48, p. 808-813; DOI: 10.1130/G47257.1). Archaeologists had previously examined excavations for fish-farming ponds and discovered the relics of an ancient coastal village. Digging further pits revealed a tell-tale sheet of sand in a sequence of alluvial sediments and peaty silts and fine sands derived from mangrove swamps. The peats contained archaeological remains – sherds of pottery and even beads. The tsunamite sand sheet occurs within the mangrove facies. It contains pebbles of bedrock that also litter the open shoreline of this part of Tanzania. There are also fossils: mainly a mix of marine molluscs and foraminifera, with terrestrial rodents, fish, birds and amphibians. But throughout the sheet, scattered at random, are human skeletons and disarticulated bones of male and female adults, and children. Many have broken limb bones, but show no signs of blunt-force trauma or disease pathology. Moreover, there is no sign of ritual burial or weaponry; the corpses had not resulted from massacre or epidemic. The most likely conclusion is that they are victims of an earlier Indian Ocean tsunami. Radiocarbon dating shows that it occurred at some time between the 11th and 13th centuries CE. This tallies with evidence from Thailand, Sumatra, the Andaman and Maldive Islands, India and Sri Lanka for a major tsunami around 950 CE.

Computer modelling of tsunami propagation reveals that the Pangani River lies on a stretch of the Tanzanian coast that is likely to have been sheltered from most Indian Ocean tsunamis by Madagascar and the shallows around the Seychelles Archipelago. Seismic events on the Sunda Arc or the lesser Makran subduction zone, off southern Iran and Pakistan, may not have been capable of raising tsunami waves at the latitude of the Tanzanian coast much higher than those witnessed there in 2004, unless their arrival coincided with high tide – in 2004 damage was limited by low tide levels. However, the topography of the Pangani estuary may well amplify water level by constricting a surge. Such a mechanism can account for variations in destruction during the 2011 Tohoku-Sendai tsunami in NE Japan.
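The sheltering effect of deep ocean versus shallow banks follows from basic long-wave physics: a tsunami's speed depends only on water depth. A minimal sketch of that relation (the depths chosen here are illustrative round numbers, not values from the study):

```python
import math

def tsunami_speed(depth_m: float) -> float:
    """Shallow-water (long-wave) speed c = sqrt(g * h); valid because a
    tsunami's wavelength is far greater than the ocean depth."""
    g = 9.81  # gravitational acceleration, m/s^2
    return math.sqrt(g * depth_m)

# Open Indian Ocean basin (~4000 m assumed depth): jet-airliner speed.
deep = tsunami_speed(4000)   # ~198 m/s, roughly 710 km/h
# Over shallow banks such as those around the Seychelles (~50 m assumed),
# the wave slows sharply, shoals and sheds energy before reaching East Africa.
shallow = tsunami_speed(50)  # ~22 m/s, roughly 80 km/h
```

The same relation explains why mid-ocean tsunamis pass ships almost unnoticed: the wave is fast but low until it slows and steepens in shallow water.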

If coastal Tanzania is at high risk of tsunamis, that can only be confirmed by deeper excavation into coastal sediments to check for the multiple sand sheets that characterise areas closer to the Sunda Arc. So far, the sand sheet in the Pangani estuary is the only one recorded in East Africa.

Weak lithosphere delayed the formation of continents

There are very few tangible signs that the Earth had continents at the surface before about 4 billion years (Ga) ago. The most cited evidence that they may have existed in the Hadean Eon is zircon grains with radiometric ages up to 4.4 Ga that were recovered from much younger sedimentary rocks in Western Australia. These tiny grains also show isotopic anomalies that support the existence of continental material, i.e. rocks of broadly granitic composition, only 100 Ma after the Earth formed (see: Zircons and early continents no longer to be sneezed at; February 2006). So, how come relics of such early continents have yet to be discovered in the geological record? After all, granitic rocks – in the broad sense – which form continents are so much less dense than the mantle that modern subduction is incapable of recycling them en masse. Indeed, mantle convection of any type in the hotter Earth of the Hadean seems unlikely to have swallowed continents once they had formed. Perhaps they are hiding in another guise among younger rocks of the continental crust. But, believe me, geologists have been hunting for them, to no avail, in every scrap of existing continental crust since 1971, when gneisses found in West Greenland by New Zealander Vic McGregor turned out to be almost 3.8 Ga old. This set off a grail-quest, which still continues, to negate James Hutton's 'No vestige of a beginning …' concept of geological time.

There is another view. Early continental lithosphere may have returned to the mantle piece by piece by other means. One process, which has been happening since the Archaean, is as debris from surface erosion transported to the ocean floor, thence to be subducted along with denser material of the oceanic lithosphere. Another possibility is that before 4 Ga continental lithosphere had far less strength than characterised it in later times; it may have been continually torn into fragments small enough for viscous drag to defy buoyancy and consign them to the mantle by convective processes. Two things seem to confer strength on continental lithosphere younger than 4 billion years: low surface heat flow and heat production, which stem from low concentrations of radioactive isotopes of uranium, thorium and potassium in the lower crust and sub-continental mantle; and bolstering by the cratons that form the cores of all major continents. Three geoscientists at Monash University in Victoria, Australia have examined how parts of the early convecting mantle may have undergone chemical and thermal differentiation (Capitanio, F.A. et al. 2020. Thermochemical lithosphere differentiation and the origin of cratonic mantle. Nature, v. 588, p. 89-94; DOI: 10.1038/s41586-020-2976-3). These processes are an inevitable outcome of the tendency for mantle melting to begin as pressure decreases during convective ascent. Continual removal of the magmas produced in this way would remove not only much of the residue's heat-producing capacity – U, Th and K preferentially enter silicate melts – but also its content of volatiles, especially water. Even if granitic magmas were completely recycled back to the mantle by the greater vigour of the hot, early Earth, at least some of the residue of partial melting would remain. Its dehydration would increase its viscosity (strength).
Over time this process would build what eventually became the highly viscous, thick mantle roots (tectosphere) on which increasing amounts of the granitic magmas could stabilise to establish the oldest cratons. More and more such cratonised crust would accumulate, becoming increasingly unlikely to be resorbed into the mantle. Although cratons are not zoned in terms of the age of their constituent rocks, they do jumble together several billion years' worth of continental crust in what used to be called 'the Basement Complex'.

Development of depleted and viscous sub-continental mantle on the early Earth – a precedes b – TTG signifies tonalite-trondhjemite-granodiorite rocks typical of Archaean cratons (Credit, Capitanio et al.; Fig 5)

Early in this process, heat would have made much of the lithosphere too weak to form rigid plates and the tectonics with which geologists are so familiar from the later parts of Earth’s history. The evolution that Capitanio et al. propose suggests that the earliest rigid plates were capped by Archaean continental crust. That implies subduction of oceanic lithosphere starting at their margins, with intra-oceanic destructive plate margins and island arcs being a later feature of tectonics. It is in the later, Proterozoic Eon that evidence for accretion of arc terranes becomes obvious, plastering their magmatic products onto cratons, further enlarging the continents.

Thawing permafrost, release of carbon and the role of iron

Projected shrinkage of permanently frozen ground around the Arctic Ocean over the next 60 years

Global warming is clearly happening. The crucial question is 'How bad can it get?' Most pundits focus on the capacity of the globalised economy to cut carbon emissions – mainly CO2 from fossil fuel burning and methane emissions by commercial livestock herds. Can they be reduced in time to reverse the increase in global mean surface temperature that has already taken place and those that lie ahead? Every now and then there is mention of the importance of natural means of drawing down greenhouse gases: plant more trees; preserve and encourage wetlands and their accumulation of peat, and so on. For several months of the Northern Hemisphere summer the planet's largest bogs actively sequester carbon in the form of dead vegetation. For the rest of the year they are frozen stiff. Muskeg and tundra form a band across the alluvial plains of the great rivers that drain North America and Eurasia towards the Arctic Ocean. The seasonal bogs lie above sediments deposited in earlier river basins and swamps that have remained permanently frozen since the last glacial period. Such permafrost begins just a few metres below the surface at high latitudes and may extend down to as much as a kilometre; southwards it becomes deeper-lying, thinner and patchier, until it disappears south of about 60°N except in mountainous areas. Permafrost is thawing relentlessly, sometimes with spectacular results broadly known as thermokarst: surface collapse, mudslides and erosion by summer meltwater.

Thawing permafrost in Siberia and associated collapse structures

Permafrost is a good preserver of organic material, as shown by the almost perfect remains of mammoths and other animals that have been found where rivers have eroded their frozen banks. The latest spectacular find is a mummified wolf pup unearthed by a gold prospector from 57 ka-old permafrost in the Yukon, Canada. She was probably buried when a wolf den collapsed. Thawing exposes buried carbonaceous material to processes that release CO2, as does the drying-out of peat in more temperate climes. It has long been known that the vast reserves of carbon preserved in frozen ground and in gas hydrate in sea-floor sediments present an immense danger of accelerated greenhouse conditions should permafrost thaw quickly and deep seawater heat up; the first is certainly starting to happen in boreal North America and Eurasia. Research into Arctic soils had suggested that there is a potential mitigating factor. Iron-3 oxides and hydroxides, the colorants of soils that overlie permafrost, have chemical properties that allow them to trap carbon, in much the same way that they trap arsenic by adsorption on the surface of their molecular structure (see: Screening for arsenic contamination, September 2008).

But, as in the case of arsenic, mineralogical trapping of carbon and its protection from oxidation to CO2 can be thwarted by bacterial action (Patzner, M.S. and 10 others 2020. Iron mineral dissolution releases iron and associated organic carbon during permafrost thaw. Nature Communications, v. 11, article 6329; DOI: 10.1038/s41467-020-20102-6). Monique Patzner of the University of Tuebingen, Germany, and her colleagues from Germany, Denmark, the UK and the US have studied peaty soils overlying permafrost north of the Arctic Circle in Sweden. Their mineralogical and biological findings came from cores driven through the different layers above deep permafrost. In the layer immediately above permanently frozen ground the binding of carbon to iron-3 minerals certainly does occur. However, at higher levels that show evidence of longer periods of thawing there is an increase of reduced iron-2 dissolved in the soil water, along with more dissolved organic carbon – i.e. carbon prone to oxidation to carbon dioxide. Also, biogenic methane – a more powerful greenhouse gas – increases in the more waterlogged upper sediments. Among the active bacteria are varieties whose metabolism involves the reduction of insoluble ferric iron in oxyhydroxide minerals to the soluble ferrous form (iron-2). As in the case of arsenic contamination of groundwater, the adsorbed contents of iron oxyhydroxides are being released under powerful reducing conditions.

Applying their results to the entire permafrost inventory at high northern latitudes, the team predicts a worrying scenario. Initial thawing can indeed lock in tens of billions of tonnes of carbon once preserved in permafrost, yet this amounts at best to only a fifth of the carbon present in the surface-to-permafrost layer of thawing. In itself, the trapped carbon is equivalent to between two and five times the annual anthropogenic release of carbon by burning fossil fuels. Nevertheless, if thawing continues it is destined to be emitted eventually through reductive dissolution of its host minerals. This adds to the even vaster potential releases of greenhouse gases in the form of biogenic methane from waterlogged ground. However, there is some evidence to the contrary. During the deglaciation between 15 and 8 thousand years ago – except for the thousand years of the Younger Dryas cold episode – land-surface temperatures rose far more rapidly than is happening at present. A study of carbon isotopes in air trapped as bubbles in Antarctic ice suggests that methane emissions from organic carbon exposed to bacterial action by thawing permafrost were much lower than claimed by Patzner et al. for present-day, slower thawing (see: Old carbon reservoirs unlikely to cause massive greenhouse gas release, study finds. Science Daily, 20 February 2020) – as were those released by breakdown of submarine gas hydrates.
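As a back-of-envelope check on the figures above (assuming, as a round number, that fossil-fuel burning releases about 10 billion tonnes of carbon a year; the multipliers are the ranges quoted in the text):

```python
# Hypothetical round number for annual fossil-fuel carbon emissions, Gt C/yr
annual_fossil_c = 10

# "between two and five times the annual anthropogenic release"
# is locked to iron minerals during initial thawing:
trapped_low, trapped_high = 2 * annual_fossil_c, 5 * annual_fossil_c  # 20-50 Gt

# If that trapped fraction is "only a fifth" of the carbon in the
# thawing layer, the total carbon at stake is five times larger still:
total_low, total_high = 5 * trapped_low, 5 * trapped_high             # 100-250 Gt
```

Even the lower bound is equivalent to roughly a decade of fossil-fuel emissions, which is why reductive dissolution of the iron 'trap' matters so much.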

Origin of life: some news

For self-replicating cells to form there are two essential precursors: water and simple compounds based on the elements carbon, hydrogen, oxygen and nitrogen (CHON). Hydrogen is not a problem, being by far the most abundant element in the universe. Carbon, oxygen and nitrogen form in the cores of stars through nuclear fusion of hydrogen and helium. These elemental building blocks need to be delivered through supernova explosions, ultimately to where water can exist in liquid form to undergo reactions that culminate in living cells. That is only possible on solid bodies that lie at just the right distance from a star to support average surface temperatures between the freezing and boiling points of water. Most important is that such a planet in the 'Goldilocks Zone' has sufficient mass for its gravity to retain water. Surface water evaporates to some extent to contribute vapour to the atmosphere. Exposed to ultraviolet radiation, H2O vapour dissociates into hydrogen and oxygen, which can be lost to space if the thermal speeds of such gas molecules approach a planet's escape velocity. Such photo-dissociation and diffusion into outer space may have caused Mars to lose more hydrogen in this way than oxygen, leaving its surface dry but rich in reddish iron oxides.
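The hydrogen-loss argument can be made semi-quantitative by comparing escape velocity with molecular thermal speeds (Jeans escape). A sketch using textbook planetary values; the 250 K upper-atmosphere temperature and the rule-of-thumb factor of ~6 are illustrative assumptions, not figures from the article:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23  # Boltzmann constant, J/K
N_A = 6.022e23   # Avogadro's number, mol^-1

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    return math.sqrt(2 * G * mass_kg / radius_m)

def thermal_speed(molar_mass_kg: float, temp_k: float) -> float:
    """Root-mean-square speed sqrt(3kT/m) of a single molecule."""
    return math.sqrt(3 * K_B * temp_k / (molar_mass_kg / N_A))

v_esc_mars = escape_velocity(6.42e23, 3.39e6)   # ~5.0 km/s
v_esc_earth = escape_velocity(5.97e24, 6.37e6)  # ~11.2 km/s
v_h2 = thermal_speed(2e-3, 250)                 # H2 at 250 K: ~1.8 km/s

# Rule of thumb: a gas leaks away over geological time once the escape
# velocity is less than about six times the thermal speed. Mars fails
# that test for molecular hydrogen; Earth only just passes.
print(v_esc_mars < 6 * v_h2 < v_esc_earth)  # True
```

The fast tail of the Maxwell-Boltzmann distribution does the actual escaping, which is why the criterion uses a multiple of the mean speed rather than the mean itself.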

Despite liquid water being essential for the origin of planetary life, it is a mixed blessing for key molecules that support biology. This 'water paradox' stems from water molecules attacking and breaking the chemical connections that string together the complex chains of proteins and nucleic acids (RNA and DNA). Living cells resolve the paradox by limiting the circulation of liquid water within them: they are largely filled with a gel that holds the key molecules together, rather than being the bags of water commonly imagined. That notion stemmed from the idea of a 'primordial soup', floated by Darwin and popularised by his early followers, and supposedly preserved today in cells' cytoplasm. It is now known to be wrong and, in any case, the chemistry simply would not work, either in a 'warm, little pond' or close to a deep-sea hydrothermal vent, because the molecular chains would be broken as soon as they formed. Modern evolutionary biochemists suggest that much of the chemistry leading to living cells must have taken place in environments that were sometimes dry and sometimes wet: ephemeral puddles on land. Science journalist Michael Marshall has just published an easily read, open-access essay on this vexing yet vital issue in Nature (Marshall, M. 2020. The Water Paradox and the Origins of Life. Nature, v. 588, p. 210-213; DOI: 10.1038/d41586-020-03461-4). If you are interested, click on the link to read Marshall's account of current origins-of-life research into the role of endlessly repeated wet-dry cycles on the early Earth's surface. It makes fascinating reading, as the experiments take the matter far beyond the spontaneous formation of the amino acid glycine found by Stanley Miller when he passed sparks through methane, ammonia and hydrogen in his famous 1953 experiment at the University of Chicago. Marshall was spurred to write in advance of NASA's Perseverance mission landing on Mars in February 2021. The Perseverance rover aims to test the new hypotheses in a series of lake sediments that appear to have been deposited by wet-dry cycles in a small Martian impact crater (Jezero Crater) early in the planet's history, when surface water was present.

Crystals of hexamethylenetetramine (Credit: r/chemistry, Reddit)

That CHON and simple compounds made from them abound in interstellar gas and dust clouds has been known since the development of means of analysing the light spectra from them. The organic chemistry of carbonaceous meteorites is also well known; they even smell of hydrocarbons. Accretion of these primitive materials during planet formation is fine as far as providing feedstock for life-forming processes on physically suitable planets. But how did CHON get from giant molecular clouds into such planetesimals? An odd-sounding organic compound – hexamethylenetetramine ((CH2)6N4), or HMT – formed industrially by combining formaldehyde (CH2O) and ammonia (NH3) – was initially synthesised in the late 19th century as an antiseptic to tackle urinary-tract infections, and is now used as a solid fuel for lightweight camping stoves, among much else. HMT has a potentially interesting role to play in the origin of life. Experiments aimed at investigating what happens when starlight and thermal radiation pervade interstellar gas clouds to interact with simple CHON molecules, such as ammonia, formaldehyde, methanol and water, yielded up to 60% by mass of HMT.

The structure of HMT is a sort of cage, so crystals form large fluffy aggregates, unlike the gases from which it can be formed in deep space. Together with interstellar silicate dusts, such sail-like structures could accrete into planetesimals in nebular star nurseries under the influence of gravity and light pressure. Geochemists from several Japanese institutions and NASA have, for the first time, found HMT in three carbonaceous chondrites, albeit at very low concentrations – parts per billion (Oba, Y. et al. 2020. Extraterrestrial hexamethylenetetramine in meteorites – a precursor of prebiotic chemistry in the inner Solar System. Nature Communications, v. 11, article 6243; DOI: 10.1038/s41467-020-20038-x). Once concentrated in planetesimals – the parents of meteorites when they are smashed by collisions – HMT can perform the useful chemical 'trick' of breaking down once again to very simple CHON compounds when warmed. At close quarters such organic precursors can engage in polymerising reactions whose end products could be the far more complex sugars and amino-acid chains that are the characteristic CHON compounds of carbonaceous chondrites. Yasuhiro Oba and colleagues may have found the missing link between interstellar space, planet formation and the synthesis of life through the mechanisms that resolve the 'water paradox' outlined by Michael Marshall.

See also: Scientists Find Precursor of Prebiotic Chemistry in Three Meteorites (Sci-news, 8 December 2020.)


How like the Neanderthals are we?

An actor made up to resemble a Neanderthal man in a business suit travelling on the London Underground (Source: screen-grab from BBC2 Neanderthals – Meet Your Ancestors)

In the most basic, genetic sense, we were sufficiently alike for us to have interbred with them regularly, and possibly wherever the two human groups met. As a result the genomes of all modern humans contain snips derived from Neanderthals (see: Everyone now has their Inner Neanderthal; February 2020). East Asian people also carry some Denisovan genes, as do the original people of Australasia and the first Americans. Those very facts suggest that members of each group did not find individuals from the others especially repellent as potential sexual partners! But that covers only a tiny part of what constitutes culture. There is archaeological evidence that Neanderthals and modern humans made similar tools. Both had the skills to make bi-faced 'hand axes' before they even met, around 45 to 40 ka ago. A cave (La Grotte des Fées) near Châtelperron to the west of the French Alps that was occupied by Neanderthals until about 40 ka yielded a selection of stone tools, including blades, known as the Châtelperronian culture, which indicates a major breakthrough in technology by their makers. It is sufficiently similar to the stone industry of the anatomically modern humans (AMH) who, around that time, first migrated into Europe from the east (Aurignacian) to pose a conundrum: did the Neanderthals copy Aurignacian techniques when they met AMH, or vice versa? Making blades by splitting large flint cores is achieved by striking the cores with just a couple of blows from a softer tool. At the very least Neanderthals had the intellectual capacity to learn this very difficult skill, but they may have invented it (see: Disputes in the cavern; June 2012). Then there is growing evidence for artistic abilities among Neanderthals, and even Homo erectus gets a look-in (see: Sophisticated Neanderthal art now established; February 2018).

Reconstructed burial of a Neanderthal individual at La Chapelle-aux-Saints (Credit: Musée de La Chapelle-aux-Saints, Corrèze, France)

For a long time, a pervasive aspect of AMH culture has been ritual. Indeed, much early art may have been bound up with ritualistic social practices, as it has been in historic times. A persuasive hint at Neanderthal ritual lies in the peculiar structures – dated at 177 ka – found far from the light of day in the Bruniquel Cave in south-western France (see: Breaking news: Cave structures made by Neanderthals; May 2016). They comprise circles fashioned from broken-off stalactites, and fires seem to have been lit in them. The most enduring rituals among anatomically modern humans have been those surrounding death: we bury our dead, thereby preserving them, in a variety of ways and 'send them off' with grave goods, or even burn them and put the ashes in a pot. A Neanderthal skeleton (dated at 50 ka) found in a cave at La Chapelle-aux-Saints appears to have been buried and made safe from scavengers and erosion. There are even older Neanderthal graves (90 to 100 ka) at Qafzeh in Palestine and Shanidar in Iraq, where numerous individuals, including a mother and child, had been interred. Some are associated with possible grave goods, such as pieces of red ochre (hematite) pigment, animal body parts and even pollen that suggests flowers had been scattered on the remains. The possibility of deliberate offerings or tributes, and even the notion of burial itself, have met with scepticism among some palaeoanthropologists. One reason for the scientific caution is that many of the finds were excavated long before the rigour of modern archaeological protocols.

Recently a multidisciplinary team involving scientists from France, Belgium, Italy, Germany, Spain and Denmark exhaustively analysed the context and remains of a Neanderthal child found in the La Ferrassie cave (Dordogne region of France) in the early 1970s (Balzeau, A. and 13 others 2020. Pluridisciplinary evidence for burial for the La Ferrassie 8 Neandertal child. Scientific Reports, v. 10, article 21230; DOI: 10.1038/s41598-020-77611-z). Estimated to have been about 2 years old, the child is anatomically complete. Bones of other animals found in the same deposit were less well preserved than those of the child, adding weight to the hypothesis that a body, rather than bones, had been buried soon after death. Luminescence dating of the sediments enveloping the skeleton gives an age considerably older than the radiocarbon age of one of the child's bones. That is difficult to explain other than by deliberate burial. It is almost certain that a pit had been dug and the child placed in it, to be covered in sediment. The skeleton was oriented E-W, with the head towards the east. Remarkably, other Neanderthal remains at the La Ferrassie site also have heads to the east of the rest of their bones, suggesting perhaps a common practice of orientation relative to sunrise and sunset.

It is slowly dawning on palaeoanthropologists that Neanderthal culture and cognitive capacity were not greatly different from those of anatomically modern humans. That beings so similar to ourselves disappeared from the archaeological record within a few thousand years of the first appearance of AMH in Europe has long been attributed to what can be summarised as the Neanderthals being 'second best' in many ways. That may not have been the case. Since the last glaciation something similar has happened twice in Europe, which analysis of ancient DNA has documented in far more detail than the disappearance of the Neanderthals. Mesolithic hunter-gatherers were followed by early Neolithic farmers with genetic affinities to people now living in northern Anatolia in Turkey – the region where growing crops began. The DNA record from human remains of Neolithic age shows no sign of genomes with a clear Mesolithic signature, yet some of the genetic features of those hunter-gatherers still remain in the genomes of modern Europeans. Similarly, ancient DNA recovered from Bronze Age human bones suggests almost complete replacement of the Neolithic inhabitants by people who introduced metallurgy, a horse-centred culture and a new kind of ceramic – the Bell Beaker. This genetic group is known as the Yamnaya, whose origins lie in the steppe of modern Ukraine and European Russia. In this Neolithic-Bronze Age population transition the earlier genomes disappear from the ancient DNA record, yet Europeans still carry traces of that earlier genetic heritage. The explanation now accepted by both geneticists and archaeologists is that both events involved assimilation and merging through interbreeding. That seems just as applicable to the 'disappearance' of the Neanderthals.

See also: Neanderthals buried their dead: New evidence (Science Daily, 9 December 2020)

Doggerland and the Storegga tsunami

Britain is only an island when sea level stands high, i.e. during interglacial conditions. Since the last ice age global sea level has risen by about 130 m as the great northern ice sheets slowly melted. That Britain could oscillate between being part of Europe and a large archipelago as a result of major climatic cycles dates back only to between 450 and 240 ka ago. Previously it was a permanent part of what is now Europe, as befits its geological identity, joined to it by a low ridge buttressed by Chalk across the Dover Strait/Pas de Calais. All that remains of that ridge are the white cliffs on either side. The drainage of what became the Thames, Seine and Rhine passed to the Atlantic in a much larger river system that flowed down the axis of the Channel. Each time an ice age ended the ridge acted as a dam for glacial meltwater, forming a large lake in what is now the southern North Sea. While continuous glaciers persisted across the northern North Sea the lake remained, but erosion during interglacials steadily wore down the ridge. About 450 ka ago it was low enough for this pro-glacial lake to spill across it in a catastrophic flood that began the separation. Several repeats occurred until the ridge was finally breached (see: When Britain first left Europe; September 2007). Yet sufficient remained that the link reappeared whenever sea level fell. What remains at present is a system of shallows and sandbanks, the largest of which is the Dogger Bank, roughly halfway between Newcastle and Denmark. Consequently the swamps and river systems that immediately followed the last ice age have become known collectively as Doggerland.

The shrinkage of Doggerland since 16,000 BCE (Credit: Europe’s Lost Frontiers Project, University of Bradford)

Dredging of the southern North Sea for sand and gravel frequently brings both the bones of land mammals and the tools of Stone Age hunters to light – one fossil was a skull fragment of a Neanderthal. At the end of the Younger Dryas (~11.7 ka) Doggerland was populated and became a route for Mesolithic hunter-gatherers to cross from Europe to Britain and become transient, then permanent, inhabitants. Melting of the northern ice sheets was slow, and so was the pace of sea-level rise. A continuous passage across Doggerland remained even as it shrank. Only when the sea surface reached about 20 m below its current level was the land corridor breached by what is now the Dover Strait, although low islands, including the Dogger Bank, littered the growing seaway. A new study examines the fate of Doggerland and its people during its final stage (Walker, J. et al. 2020. A great wave: the Storegga tsunami and the end of Doggerland? Antiquity, v. 94, p. 1409-1425; DOI: 10.15184/aqy.2020.49).

James Walker and colleagues at the University of Bradford, UK, together with co-workers from the universities of Tartu in Estonia, Wales Trinity Saint David and St Andrews, focus on one devastating event during Doggerland's slow shrinkage and inundation. This took place around 8.2 ka ago, during the collapse of a section of the Norwegian continental edge. Known as the Storegga Slides (storegga means 'great edge' in Norse), three submarine debris flows shifted 3500 km3 of sediment to blanket 80 thousand km2 of the Norwegian Sea floor, reaching more than halfway to Iceland. Tsunami deposits related to these events occur along the coast of western Norway, on the Shetlands and on the shoreline of eastern Scotland. They lie between 3 and 20 m above modern sea level, but allowing for the lower sea level at the time the 'run-up' probably reached as high as 35 m: more than the maximum of both the 26 December 2004 Indian Ocean tsunami and that in NE Japan on 11 March 2011. Two Mesolithic archaeological sites definitely lie beneath the tsunami deposit, one close to the source of the slide, the other near Inverness, Scotland. At the time part of the Dogger Bank still lay above the sea, as did a wide coastal plain and offshore islands along England's east coast. This catastrophic event came a little after a sudden cooling event in the Northern Hemisphere. Any Mesolithic people living on what was left of Doggerland would not have survived. But quite possibly they had already left as the climate cooled substantially.
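The quoted volume and area give a feel for the sheer scale of the slides. A trivial check (assuming, purely for illustration, that the debris were spread evenly, which of course it was not):

```python
volume_km3 = 3500   # sediment shifted by the three Storegga debris flows
area_km2 = 80_000   # Norwegian Sea floor blanketed by the deposits
mean_thickness_m = volume_km3 / area_km2 * 1000  # convert km to m
print(round(mean_thickness_m, 1))  # ~43.8 m averaged over the whole deposit
```

A debris blanket averaging tens of metres thick over an area the size of a small country gives some sense of why the displaced water raised run-ups of 35 m on facing coasts.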

A seabed drilling programme financed by the EU targeted what lies beneath more recent sediments on the Dogger Bank and off the embayment known as The Wash of eastern England. Some of the cores contain tsunami deposits, one of which has been analysed in detail in a separate paper (Gaffney, V. and 24 others 2020. Multi-Proxy Characterisation of the Storegga Tsunami and Its Impact on the Early Holocene Landscapes of the Southern North Sea. Geosciences, v. 10, online; DOI: 10.3390/geosciences10070270). The tsunami washed across an estuarine mudflat into an area of meadowland with oak and hazel woodland, which may have absorbed much of its energy. Environmental DNA analysis suggests that this relic of Doggerland was roamed by bear, wild boar and ruminants. The authors also found evidence that the tsunami had been guided by pre-existing topography, such as the river channel of what is now the River Great Ouse. Yet they found no evidence of human occupation. Together with other researchers, the University of Bradford’s Lost Frontiers Project has produced sufficient detail about Doggerland to contemplate looking for Mesolithic sites in the excavations for offshore wind farms.

See also: Addley, E. 2020.  Study finds indications of life on Doggerland after devastating tsunamis. (The Guardian, 1 December 2020); Europe’s Lost Frontiers website

Human impact on surface geological processes

I last wrote about sedimentation during the ‘Anthropocene’ a year ago (See: Sedimentary deposits of the ‘Anthropocene’, November 2019). Human impact in that context is staggeringly huge: annually we shift 57 billion tonnes of rock and soil, equivalent to six times the mass of the UK’s highest mountain, Ben Nevis. All the world’s rivers combined move about 35 billion tonnes less. I don’t particularly care for erecting a new Epoch in the Stratigraphic Column, and even less for arguments about when the ‘Anthropocene’ is supposed to have started. The proposal continues to be debated 12 years after it was first suggested to the IUGS International Commission on Stratigraphy. I suppose I am a bit ‘old fashioned’, but the proposal is for a stratigraphic entity vastly shorter than the smallest globally significant subdivision of geological time (an Age), and shorter than the duration of most of the recorded mass extinctions, which are signified by horizontal lines in the Column. By way of illustration, the thick, extensive bed of Carboniferous sandstone on which I live is one of many deposited in the early part of the Namurian Age (between 328 and 318 Ma). Nonetheless, anthropogenic sediments of, say, the last 200 years are definitely substantial. A measure of just how substantial is provided by a paper published online this week (Kemp, S.B. et al. 2020. The human impact on North American erosion, sediment transfer, and storage in a geologic context. Nature Communications, v. 11, article 6012; DOI: 10.1038/s41467-020-19744-3).
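As a back-of-envelope aside (my arithmetic, not from the paper), the figures quoted above can be turned into a direct comparison – note that the 35-billion-tonne figure is the *difference*, not the rivers’ total:

```python
# Rough comparison of the annual sediment fluxes quoted in the text.
# Humans shift ~57 Gt of rock and soil per year; all the world's
# rivers combined move about 35 Gt *less* than that.
human_flux_gt = 57.0
river_flux_gt = human_flux_gt - 35.0   # ~22 Gt per year

ratio = human_flux_gt / river_flux_gt
print(f"Rivers: ~{river_flux_gt:.0f} Gt/yr; humans shift ~{ratio:.1f}x more")
```

So on these figures humanity moves roughly two and a half times more sediment each year than every river on Earth combined.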

‘Badlands’ formed by accelerated soil erosion.

Anthropogenic erosion, sediment transfer and deposition in North America kicked off with its colonisation by European immigrants from the early 16th century onwards. The First Americans were hunter-gatherers and subsistence farmers, who left virtually no traces in the landscape other than their artefacts and, in the case of farmers, their dwellings. Kemp and colleagues have focussed on late-Pleistocene alluvial sediment, accumulation of which seems to have been pretty stable for 40 ka. Since colonisation began the rate has increased to, at present, ten times that previously stable rate, mainly during the last 200 years of accelerated spread of farmland. This is dominated by the outcomes of two agricultural practices: ploughing and deforestation. Breaking of the complex and ancient prairie soils, formerly held together by deep, dense mats of grass root systems, made even flat surfaces highly prone to soil erosion, as demonstrated by the ‘dust bowl’ conditions of the Great Depression during the 1930s. In more rugged relief, deforestation made slopes more likely to fail through landslides and other mass movements. Damming of streams and rivers for irrigation or, its opposite, the draining of wetlands resulted in alterations to the channels themselves and their flow regimes. Consequently, older alluvium succumbed to bank erosion. Increased deposition behind an explosion of mill dams, and changed flow regimes in the reaches of streams below them, had effects disproportionate to the size of the dams (see: Watermills and meanders, March 2008). Stream flow beforehand was slower and flooding more balanced than it has been over the last few hundred years. Increased flooding, the building of ever larger flood defences and an increase in flood magnitude, duration and extent when defences were breached formed a vicious circle that quickly transformed the lower reaches of the largest American river basins.

North American rates of alluvium deposition since 40 Ka ago – the time axis is logarithmic. (Credit: Kemp et al., 2020; Fig. 2)

All this deserves documentation and quantification, which Kemp et al. have attempted at 400 alluvial study sites across the continent, measuring >4700 rates of sediment accumulation at various times during the past 40 thousand years. Such deposition serves roughly as a proxy for erosion rate, but it is a function of multiple factors, such as run-off of rain- and snow-melt water and anthropogenic changes to drainage courses and to slope stability. The scale of post-settlement sedimentation is not the same across the whole continent. In some areas, such as southern California, the rate over the last 200 years is lower than the estimated natural, pre-settlement rate: in this example perhaps because increased capture of surface water for irrigation of a semi-arid area retarded erosion and transport. In others it seems to be unchanged, probably for a whole variety of reasons. The highest rates are in the main areas of rain-fed agriculture in the mid-west of the US and western Canada.

In a nutshell, during the last century North American capitalism shifted as much sediment as would be moved naturally in between 700 and 3000 years. No such investigation has been attempted in other parts of the world that have histories of intense agriculture going back several thousand years, such as the plains of China, northern India and Mesopotamia, the lower Nile valley, the great plateau of the Ethiopian Highlands, and Europe. This is a global problem, and despite its continent-wide scope the study by Kemp et al. barely scratches the surface. Despite earnest endeavours to reduce soil erosion in the US and a few other areas, it does seem as if the damage has been done and is irreversible.
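A quick sanity check (again my own arithmetic, not the authors’): if a century of activity moved as much sediment as 700 to 3000 years of natural deposition would, the implied rate multiplier comfortably brackets the roughly tenfold post-colonisation increase quoted earlier:

```python
# Implied rate multipliers from the 'one century = 700-3000 natural
# years' comparison quoted in the text.
century = 100                      # years of anthropogenic deposition
natural_equivalents = (700, 3000)  # equivalent years at the natural rate

low, high = (n / century for n in natural_equivalents)
print(f"Implied rate: {low:.0f}x to {high:.0f}x the natural rate")
```

That is, deposition ran at somewhere between 7 and 30 times the natural background rate, depending on the region.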

Earliest sign of a sense of aesthetics

Maybe because of the Covid-19 pandemic, there has been a dearth of interesting new developments in the geosciences over the last few months: the ‘bread and butter’ of Earth-logs. So instead of allowing a gap in articles to develop, and as a sign that I haven’t succumbed, this piece concerns one of the most intriguing discoveries in palaeoanthropology. In 1925 Wilfred Eitzman, a school teacher, investigated a cave in the Makapansgat Valley in Limpopo Province, South Africa, that had been exposed by quarry workers. His most striking discovery was a polished pebble made of very fine-grained, iron-rich silica, probably from a Precambrian banded iron formation. Being round and deeply pitted, it had clearly been subject to prolonged rolling and sand-blasting in running water and wind. Eerily, whichever way it was viewed it bore a striking resemblance to a primate face: eyes, mouth, nose and, viewed from the rear, a disturbing, toothless grin. We have all picked up odd-looking pebbles on beaches or a river bank: I recently found a sandstone demon-cat (it even has pointy ears) when digging a new vegetable patch.

The Makapansgat Pebble. Inverted it still resembles a face and its obverse side does too.

What is different about the Makapansgat Pebble is that Eitzman found it in a cave-floor layer full of bones, including those of australopithecines. The cave is located in dolomitic limestone outcrops high in the local drainage system, so it’s unlikely that the pebble was washed into it. The nearest occurrence of banded iron formation is about 20 kilometres away, so something must have carried the pebble for a day or more to reach the cave. The local area has since yielded a superb palaeontological record of early hominin evolution, stimulated by Eitzman’s finds. He gave the fossils and the pebble to Raymond Dart, the pioneer of South African palaeoanthropology. Dart named the hominin fossils Australopithecus prometheus because associated bones of other animals were covered in black stains that Dart eagerly regarded as signs of burning and thus cooking. When it became clear that the stains were of manganese oxide, the name was changed to Au. africanus, the fossils eventually being dated to around 3 million years ago.

Dart was notorious for his showmanship, and the fossils and the Makapansgat Pebble ‘did the rounds’ and continue to do so. In 2016 the pebble was displayed with a golden rhino, a collection of apartheid-era badges and much more in the British Museum’s South Africa: the art of a nation exhibition. Well, is the pebble art? As it shows no evidence of deliberate working it cannot be considered art, but it could be termed an objet trouvé: an ‘object found by chance and held to have aesthetic value to an artist’. The pebble’s original finder 3 million years ago must have found the 0.25 kg pebble sufficiently interesting to carry it back to the cave, presumably because of its clear resemblance to a hominin head: in fact a multiple-faced head. Was it carried by a cave-dwelling australopithecine, or by an early member of genus Homo who left no other trace at Makapansgat? At an even earlier time a so-far undiscovered hominin did indeed make simple stone tools to dismember joints of meat on the shores of Lake Turkana in Kenya. It is impossible to know for sure who carried the pebble, nor why. Yet all living primates are curious creatures, so it is far from impossible that any member of the hominins in our line of descent would have collected portable curiosities.

Kerguelen Plateau: a long-lived large igneous province

It’s easy to think of the Earth’s largest outpourings of lava as being restricted to the continents: continental flood basalts, with their spectacular stepped topography made up of hundreds of individual massive flows and intervening soil horizons. The Deccan Traps of western India are the epitome, having been so named by natural scientists of the late 18th century from the Swedish word for ‘stairs’ (trappa). Examples go back to the Proterozoic Eon, the younger ones still retaining much of their original form as huge plateaus. All began life within individual tectonic plates, although some presaged continental break-up and the formation of new oceanic spreading centres. They must have been spectacular events, with up to millions of cubic kilometres of magma belched out in a few million years. They have been explained as manifestations of plumes of hot mantle rock rising from as deep as the core-mantle boundary. Unsurprisingly, the biggest continental flood-basalt outpourings coincided with mass extinction events. Otherwise known as large igneous provinces (LIPs), they are not the only signs of truly huge production of magma by partial melting in the mantle. The biggest LIP, with an estimated volume of 80 million km3, lies deep beneath the western Pacific Ocean. To the northeast of New Guinea, the Ontong Java Plateau formed over a period of about 3 Ma in the mid-Cretaceous (~120 Ma) and blanketed one percent of the Earth’s solid surface with lavas erupted at a rate of 22 km3 per year. Possibly because this happened on the Pacific’s abyssal plains beneath around 4 km of sea water, there is little sign of any major perturbation of mid-Cretaceous life, although it is associated with evidence for global oceanic anoxia. Ontong Java isn’t the only oceanic LIP, but bearing in mind that oceanic lithosphere only goes back to the start of the Jurassic Period (200 Ma) – earlier material has largely been subducted – oceanic LIPs are not as abundant as continental flood-basalt provinces.
One of them is the Kerguelen Plateau, 3000 km to the SW of Australia, which is about three times the area of Japan and the second largest LIP of the Phanerozoic Eon. The Plateau was split into two large fragments as sea-floor spreading progressed along the Southeast Indian Ridge.

Bathymetry of the Indian Ocean south-west of Australia, showing the Kerguelen Plateau and South-east Indian Ridge. The red arrows show the amount of sea-floor spreading on either side of the Ridge since it began to open. The pale blue area at the NE end of the arrow was formerly part of the Plateau (credit: Google Earth)

The Plateau has long been regarded as a microcontinental fragment left behind when India parted company with Antarctica – based on isolated occurrences of gneisses – yet there is evidence that during the formation of the Kerguelen LIP the basalts rose above sea level. Because earlier radiometric dating of basalts from ocean-floor drill cores was of low quality, an Australian-Swedish group of geoscientists re-evaluated those data and supplemented them with 25 new Ar-Ar dates from 12 sites (Jiang, Q. et al. 2020. Longest continuously erupting large igneous province driven by plume-ridge interaction. Geology, v. 48, online; DOI: 10.1130/G47850.1). Rather than clustering in a short time range, as expected from the short life of most other LIPs, the ages from Kerguelen span 32 Ma of the Cretaceous (from 122 to 90 Ma). The magmatic pulse began at roughly the same time as that of Ontong Java, but continued for much longer. Smaller oceanic LIPs do seem to have lingered for unusually lengthy periods, but all seem to have been constructed in several separate pulses. Large-volume eruption at Kerguelen was continuous for at least 32 Ma; the drilling did not penetrate the oldest of the plateau basalts. The Kerguelen LIP seems to be unique in that respect, and requires an explanation other than simply a mantle plume, however large.

Jiang et al. suggest a model of continuous interaction between a long-lived plume and the developing Southeast Indian Ridge spreading centre. Their model involves the line of continental splitting between India and Antarctica lying close to a major deep-mantle plume at around 128 Ma. There is nothing unique about that; incipient ocean rifting in the Horn of Africa and the formation of the Red Sea and Gulf of Aden ridges are currently associated with the active Afar plume. This was followed by a kind of tectonic shuffling of the Ridge back and forth across the head of the Kerguelen plume: not far different from the Palaeogene North Atlantic LIP, where the Mid-Atlantic Ridge interacts with the still-active Iceland plume, except that ridge and plume seem more intimately involved there. There are probably many subtle relationships between plumes and various kinds of oceanic plate margin that are still worth exploring. Since mantle plumes were first invoked to explain volcanic island chains (e.g. the Hawaiian chain), in which volcanism becomes progressively older in the direction of plate movement, there has remained much to discover.

See also: Magma ‘conveyor belt’ fuelled world’s longest erupting supervolcanoes (Science Daily, 4 November 2020)

More Denisovan connections

In 2006 mining operations in NE Mongolia uncovered a human skull cap with prominent brow ridges. It was first dubbed Mongolanthropus because of its primitive appearance, and then suggested to be either a Neanderthal or Homo erectus. Radiocarbon dating in 2019 showed the woman to be around 34,500 years old, and the accompanying sequencing of her mtDNA assigned her to a widespread Eurasian haplotype of modern humans. Powdered bone samples ended up in Svante Pääbo’s renowned ancient-DNA lab at the Max Planck Institute for Evolutionary Anthropology in Leipzig and yielded a full genome (Massilani, D. and 14 others 2020. Denisovan ancestry and population history of early East Asians. Science, v. 370, p. 579-583; DOI: 10.1126/science.abc1166). From this flowed some interesting genetic history.

Skull cap of a female modern human from Salkhit in Outer Mongolia, which superficially resembles those of Homo erectus from Java (Credit: Massilani et al. Fig 1a; © Institute of Archaeology, Mongolian Academy of Sciences)

First was a close overall resemblance to living East Asians and Native Americans, similar to that of an older individual from near Beijing, China. This confirmed the antiquity of the East Eurasian population’s split from that of the west, yet the genome contained evidence of some interbreeding with West Eurasians – to the extent of sharing 25% of her DNA – and with Neanderthals. The two specimens also contained evidence of Denisovan ancestry in their genomes, but fragments that are more akin to those in living people in East Asia than to those of Papuans and Aboriginal Australians: these were definitely cosmopolitan people! The simplest explanation is two distinct minglings with Denisovans, that involving the ancestors of Papuans and Australians perhaps being the earlier, en route to their arrival at least 60 thousand years ago on what became an island continent in the run-up to the last glacial maximum. Be that as it may, two separate Denisovan populations interbred with modern human bands. Further genetic connections with ancient North Siberian humans suggest complex movement across the continent, probably inevitable because these hunter-gatherers would have followed prey animals on their seasonal migrations, which would have been longer than today because of climatic cooling. The same can be surmised for Denisovans, which would have increased the chances of contact.

See also: Denisovan DNA in the genome of early East Asians (Science Daily, 29 October 2020)

In May 2019 (Denisovan on top of the world) I wrote about a human lower jaw that a Buddhist monk had found in a cave at a height of 3.3 km on the Tibetan plateau. Analysis of protein traces in the teeth it retained suggested that it was Denisovan. Like the earlier small remnants from Siberia, this putative Denisovan proved impossible to date precisely, although the jawbone was at least 160 ka old judging from the age of speleothem carbonate encrusting it. Excavation of the sediment layers in Baishiya Cave has enabled a large team of Chinese, Australian, US and Swedish scientists to try out the ‘environmental DNA’ approach pioneered by the Max Planck Institute for Evolutionary Anthropology (see: Detecting the presence of hominins in ancient soil samples, April 2017). mtDNA from the cave confirmed occupation by Denisovans, in layers dated using radiocarbon and optically stimulated luminescence methods. Denisovan mtDNA turned up in four layers dated at ~100, ~60 and possibly as young as 45 ka, as well as that from a variety of other mammals (Zhang, D. and 26 others 2020. Denisovan DNA in Late Pleistocene sediments from Baishiya Karst Cave on the Tibetan Plateau. Science, v. 370, p. 584-587; DOI: 10.1126/science.abb6320). Denisovans were clearly able to live at high elevations for at least 100 thousand years: long enough to evolve the metabolic processes essential to sustain life in low-oxygen conditions, which it has been suggested were passed on to ancestral modern Tibetans.