British government fracking fan fracked

In November 2019 the Conservative government of Boris Johnson declared a moratorium on the development of shale gas by hydraulic fracturing (‘fracking’) in England. This followed determined public protests at a number of potential fracking sites, the most intransigent opposition coming from residents of Lancashire’s Fylde peninsula. They had been repeatedly disturbed since mid-2017 by low-magnitude earthquakes following drilling and hydraulic-fluid injection tests by Cuadrilla Resources near Little Plumpton village. Their views were confirmed by a scientific study carried out by the British Geological Survey for the Oil and Gas Authority, which warned of the impossibility of predicting the magnitude of earthquakes that future fracking might trigger. The shale-gas industry of North America, largely sited in areas of low population and simple geology, has confirmed the substantial seismic hazard of the technology through regular occurrences of induced earthquakes, some reaching destructive magnitudes greater than 5.0. The Little Plumpton site was abandoned and sealed in February 2022.

Cuadrilla’s exploratory fracking site near Little Plumpton in Fylde, Lancashire. (Credit: BBC)

On 22 September 2022 the moratorium was rescinded by Jacob Rees-Mogg, Secretary of State for Business, Energy and Industrial Strategy in the new government of Liz Truss, two weeks after his appointment. This was despite the 2019 Conservative manifesto pledging not to lift the moratorium unless fracking was scientifically proven to be safe. His decision included a suggestion that the seismicity threshold for pausing fracking operations be raised from magnitude 0.5 to 2.5, a level that Rees-Mogg claimed, without any scientific justification, to be ‘a perfectly routine natural phenomenon’. He further asserted that opposition to fracking was based around ‘hysteria’ and public ignorance of seismological science, and that some protestors had been funded by Vladimir Putin. In reality the Secretary of State’s decision was fuelled by the Russian Federation’s reduction of gas supplies to Europe following its invasion of Ukraine, the soaring world price of natural gas and the attendant financial crisis. There was also a political need to be seen to be ‘doing something’, for which he has a meagre track record in the House of Commons. Rees-Mogg claimed that lifting the moratorium would bolster British energy security. That view ignored the probable lead time of around 10 years before shale gas could become an established physical resource in England. Furthermore, an August 2018 assessment of UK shale-gas potential by a team of geoscientists, including one from the British Geological Survey, suggested that it would amount to less than 10 years’ supply of UK needs: contrary to Rees-Mogg’s claim that England has ‘huge reserves of shale’. Indeed it does, but the vast bulk of these shales have no commercial gas potential.
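For readers unfamiliar with the logarithmic magnitude scale, a rough sketch shows why raising the threshold is far from trivial. The calculation below is illustrative only, using the standard Gutenberg-Richter relation between magnitude and radiated seismic energy (log10 E = 1.5M + 4.8, with E in joules); it is not drawn from any of the studies mentioned above.

```python
# Illustrative sketch: radiated seismic energy implied by the old and new thresholds,
# using the standard Gutenberg-Richter energy relation log10(E) = 1.5*M + 4.8 (E in joules).
def radiated_energy_joules(magnitude: float) -> float:
    return 10 ** (1.5 * magnitude + 4.8)

old_threshold, new_threshold = 0.5, 2.5
e_old = radiated_energy_joules(old_threshold)   # ~3.5e5 J
e_new = radiated_energy_joules(new_threshold)   # ~3.5e8 J

print(f"M {old_threshold}: {e_old:.2e} J")
print(f"M {new_threshold}: {e_new:.2e} J")
print(f"Energy ratio: {e_new / e_old:.0f}x")    # a 2-unit step is a ~1000-fold increase
```

In other words, the proposed pause threshold corresponds to roughly a thousand times more radiated energy per event than the one it replaces.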

Ironically, the founder of Cuadrilla Resources, exploration geologist Chris Cornelius, and its former public affairs director, Mark Linder, questioned the move to unleash fracking in England, despite supporting shale-gas operations where geologically and economically appropriate. Their view is largely based on Britain’s highly complex geology, which poses major technical and economic challenges to hydraulic fracturing. Globally, fracking has mainly been conducted in vast areas of simple, ‘layer-cake’ geology. A glance at large-scale geological maps of British areas claimed to host shale-gas reserves reveals the dominance of hundreds of faults, large and small, formed since the hydrocarbon-rich shales were laid down. Despite being ancient, such faults are capable of being reactivated, especially when lubricated by the introduction of fluids. Exactly where they go beneath the surface is unpredictable on the scales needed for precision drilling. Many of the problems encountered by Cuadrilla’s Fylde programme stemmed from such complexity. Over the company’s seven years of operation, hundreds of millions of pounds were expended without any commercial gas production. Each prospective site in Britain is similarly compartmentalised by faulting, so that much the same problems would be encountered during attempts to develop it. By contrast, the shales fracked profitably in the USA occur as horizontal sheets deep beneath entire states: entirely predictable for the drillers. In Britain, tens of thousands of wells would need to be drilled on a ‘compartment-by-compartment’ basis at a rate of hundreds each year to yield useful gas supplies. Fracking in England would therefore present unacceptable economic risks to potential investors. Cornelius and Linder have moved on to more achievable ventures in renewables, such as geothermal heating in areas of simple British geology.

Jacob Rees-Mogg’s second-class degree in history from Oxford and his long connection with hedge-fund management seem not to be appropriate qualifications for making complex geoscientific decisions. Such a view is apparently held by several fellow Conservative MPs, one of whom suggested that Rees-Mogg should lead by example and make his North East Somerset constituency the ‘first to be fracked’, because it is underlain by potentially gas-yielding shales. The adjoining constituency, Wells, has several sites with shale-gas licences, but none has been sought within North East Somerset. Interestingly, successive Conservative governments since 2015, mindful of a ‘not-in-my-backyard’ attitude in the party’s many rural constituencies, have placed a de facto ban on the development of onshore wind power.

Sun, sand and sangria on the Mediterranean Costas – and tsunamis?

You can easily spot a tourist returning from a few summer weeks on the coast of the western Mediterranean, especially during 2022’s record-breaking heat wave and wildfires: sunburnt and with a smoky aroma that expensive après-sun lotion can’t mask. Judging from the seismic records, they may have felt the odd minor earthquake too, perhaps putting it down to drink, lack of sleep and an overdose of trance music. Data from the last 100 years show that southern Spain and north-west Africa have a generally uniform distribution of seismic events, mostly less than magnitude 5. Yet a distinct submarine zone running NNE to SSW from Almeria to the coast of western Algeria, crossing the Alboran Basin, shows significantly more events greater than M 5. Most earthquakes in the region occur at depths of less than 30 km, mainly within the crust. Five geophysicists from Spain and another two from Algeria and Italy have analysed the known seismicity of the region in the light of its tectonics and lithospheric structure (Gómez de la Peña, L., et al. 2022. Evidence for a developing plate boundary in the western Mediterranean. Nature Communications, v. 13, article 4786; DOI: 10.1038/s41467-022-31895-z).

Topography of the Alboran Basin beneath the western Mediterranean. The colours grey through blue to purple indicate increasing depth of seawater. Grey circles indicate historic earthquakes, the smallest being M 3 to 4, the largest greater than M 6. Green arrows show plate motions in the area measured using GPS. Active faults are marked in red (see key for types of motion). (Credit: based on Fig 1 of Gómez de la Peña et al.)

The West Alboran Basin is underlain by thinner continental crust (orange on the inset to the map) than that beneath southern Spain and western Algeria. Normal crust underpins the Southern Alboran Basin. To the east are the deeper East Alboran and Algero-Balearic Basins, the floor of the latter being true oceanic crust and that of the former created in a now extinct island arc. Running ENE to WSW across the Alboran Basin are two ridges on the sea floor. Tectonic motions determined using the Global Positioning System reveal that the African plate is moving slowly westwards at up to 1 cm yr-1, about 2 to 3 times faster than the European plate. This is reflected by dextral strike-slip along the active ~E-W Yusuf Fault (YSF). This fault bends southwards to roughly parallel the Alboran Ridge, and becomes a large thrust fault that shows up on ship-borne seismic reflection sections. The reflection seismic survey also shows that the shallow crust beneath the Alboran Ridge is being buckled under compression above the thrust. The thrust extends to the base of the African continental crust, which is beginning to override the arc crust of the East Alboran Basin. Effectively, this system of major faults seems to have become a plate boundary between Africa and Europe in the last 5 million years and has taken up about 25 km of convergence between the two plates. An estimated 16 km of this has taken place across the Alboran Ridge Thrust, which has detached the overriding African crust from the mantle beneath.
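As a quick consistency check on these figures (an illustrative calculation of my own, not taken from the paper), 25 km of convergence accumulated over roughly 5 million years implies a long-term average rate of about 5 mm per year, of the same order as the GPS-derived motions quoted above.

```python
# Illustrative check (not from the paper): long-term convergence rate implied by
# ~25 km of shortening accumulated over ~5 million years across the Alboran fault system.
total_convergence_km = 25.0
duration_myr = 5.0

rate_mm_per_yr = (total_convergence_km * 1e6) / (duration_myr * 1e6)  # km -> mm, Myr -> yr
print(f"Average convergence rate: {rate_mm_per_yr:.1f} mm/yr")        # ~5 mm/yr, i.e. ~0.5 cm/yr

# The Alboran Ridge Thrust's share (~16 km of the 25 km total):
thrust_share = 16.0 / 25.0
print(f"Thrust share of convergence: {thrust_share:.0%}")             # ~64 %
```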

The authors estimate that beneath the Alboran fault system the transition between brittle and ductile deformation in the overriding crust – the lower limit for strain being released as earthquakes – lies at a depth of 8.5 to 10 km. By comparison with other areas of seismic activity, they reckon that there is a distinct chance of much larger earthquakes (up to M 8) in the geologically near future. A great earthquake in this region, where the Mediterranean narrows towards the Strait of Gibraltar, may generate a devastating tsunami. An extension of the Africa-Europe plate boundary into the Atlantic is believed to have generated the major earthquake and tsunami that destroyed Lisbon and battered the Atlantic coasts of Portugal, Spain and NW Africa on 1st November 1755. The situation of the active plate boundary in the Alboran Basin may well present a similar, if not worse, risk of devastation.

The dangers of rolling boulders

Field work in lonely and spectacular places is a privilege. Enjoyable though it is, boredom sometimes sets in, which is hard for the lone geologist. Today, I guess a cell phone would help, especially in high places where the signal is good. That means of communication and entertainment only emerged in the 1980s and did not reach wild places until well into the 90s. Pre-cellnet boredom could be relieved by what remains a dark secret: lone geologists once rolled large boulders down mountains and valley sides, shouting ‘Below!’ as a warning to others. Their excuse to themselves for this unique thrill (bounding boulders reach speeds of up to 40 m s-1) was vaguely scientific: sooner or later a precarious rock would fall anyway. This week it emerged that Andrin Caviezel, an Alpine geoscientist at the Institute for Snow and Avalanche Research in Davos, Switzerland, rolls boulders for a living (Caviezel, A. 2022. The gravity of rockfalls. Where I work, Nature, v. 607, p. 838; DOI: 10.1038/d41586-022-02044-9). He finds that ‘…flinging giant objects down a mountain is still super fun’. The serious part of his job is modelling how rockfalls actually move downslope, as an aid to risk assessment (Caviezel, A. and 23 others 2021. The relevance of rock shape over mass – implications for rockfall hazard assessments. Nature Communications, v. 12, article 5546; DOI: 10.1038/s41467-021-25794-y).

Caviezel’s team (@teamcaviezel) don’t use actual rocks but garishly painted, symmetrical blocks of reinforced concrete weighing up to 3 tonnes, which are more durable than most outcropping rock and can be re-used. A Super Puma helicopter shifts a block to the top of a slope, from which it is levered over the edge (watch video). The team deploys two types of block, one equant and resembling a giant garnet crystal, the other wheel-shaped with facets. The first represents boulders of rock types with uniform properties throughout, such as granite. The wheel type mimics boulders formed from rocks that are bedded or foliated, which are usually plate-like or spindly.

Vertical aerial photograph of a uniform, south-facing slope in the Swiss Alps used to roll concrete ‘boulders’. The red X marks the release point; the blue symbols show the points of rest of equant ‘boulders’, the sizes of which are shown in the inset; the wheel-shaped ones are magenta. Coloured circles with crosses show the mean rest position of each category (the lighter the colour the smaller the set of ‘boulders’). The coloured ellipses indicate the standard deviation for each category. (Credit: Caviezel et al., Fig 2)

Unlike other gravity-driven hazards, such as avalanches and mudflows, the directions that rockfalls may follow are impossible to predict. Rather than hugging the surface, boulders interact with it, bouncing and being deflected, and they spin rapidly. To follow each experiment’s trajectory a block contains a motion sensor, measuring speed and acceleration, and a gyroscope that records rotation, wobbling and direction of motion, while filming records jump heights – up to 11 m in the experiments. Despite the similarity of the blocks, the same release point for each roll and a uniform mountainside slope with one cliff line, the final resting places are widely spread. The hazard zone of rockfalls is distinctly wider than that of snow avalanches; observing a boulder once it starts to move gives a potential victim little means of knowing a safe place to shelter.
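To put the quoted figures into perspective, here is a rough sketch based only on the numbers mentioned above (the maximum block mass, the top speed of bounding boulders and the highest recorded bounce); it is not a calculation from the papers.

```python
import math

# Rough, illustrative numbers only, drawn from figures quoted in the text above.
mass_kg = 3000.0          # heaviest concrete 'boulder'
speed_m_s = 40.0          # upper speed quoted for bounding boulders
jump_height_m = 11.0      # maximum bounce height recorded in the experiments
g = 9.81                  # gravitational acceleration, m/s^2

kinetic_energy_J = 0.5 * mass_kg * speed_m_s ** 2
takeoff_vertical_speed = math.sqrt(2 * g * jump_height_m)  # vertical speed needed to rebound 11 m

print(f"Kinetic energy at 40 m/s: {kinetic_energy_J / 1e6:.1f} MJ")                      # ~2.4 MJ
print(f"Vertical take-off speed for an 11 m bounce: {takeoff_vertical_speed:.1f} m/s")   # ~14.7 m/s
```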

The most important conclusion from the experiments is that the widest spread of tumbling ‘boulders’ is shown by the wheel-shaped ones. So, slopes made from bedded or foliated sedimentary and metamorphic rocks may pose wider hazards from rockfalls than do those underpinned by uniform rocks. However, plate-like or spindly boulders are more stable at rest than are equant ones. Yet boulders rarely fall as a result of being pushed (except in avalanches). On moderate slopes they are undermined by erosion, and on steep slopes or cliffs winter ice wedges joints apart, allowing blocks to fall during a thaw.

A Bronze Age catastrophe: the destruction of Sodom and Gomorrah?

“…The sun was risen upon the earth when Lot entered into Zoar. Then the Lord rained upon Sodom and Gomorrah brimstone and fire from the Lord out of heaven. And overthrew those cities, and all the plain, and all the inhabitants of the cities, and that which grew upon the ground. But his wife looked back from behind him, and she became a pillar of salt …”

This is the second catastrophe recorded in the Old Testament of the King James Bible (Genesis 19:23-26), after the Noachian Flood (Genesis 7 and 8). The Flood is now regarded by many geoscientists as a passed-down and mythologised account of the rapid filling of the Black Sea when the Bosporus was breached around 7600 years ago, as global sea level rose in the early Neolithic. Eleven chapters and a great many begotten people later comes the dramatic punishment of the ‘sinners’ of Sodom and Gomorrah. The two legendary settlements are now considered to have been in the Lower Jordan Valley near the Dead Sea. Since they lay on the major strike-slip fault that defines the Jordan Rift, related to the long-active spreading of the Red Sea, the most obvious rationalisation of the myth is a major earthquake. The sedimentary sequence contains sulfide-rich clays and silts, as well as thick salt beds. Major seismicity would have liquefied saturated sediments full of supersaturated salt water and released large volumes of hydrogen sulfide gas. There are also remains of early settlements in the form of large mounds known locally as ‘talls’. The largest and archaeologically most productive of these is Tall el Hammam in Jordan, whose excavation has proceeded since 2005. It lies just to the north of the Dead Sea on the eastern flank of the Jordan valley, 15 km from Jericho on the occupied West Bank.

The Tall el Hammam mound is formed from layers of debris, mainly of mud bricks, dwellings having been built again and again on the remains of earlier ones. It seems to have been continuously occupied for three millennia from about 6650 years ago (4700 BCE) at the core of a presumably grain-based city state with upwards of 10 thousand inhabitants. The site was destroyed around 3600 years ago (1650 BCE). The catastrophic earthquake hypothesis can be neither confirmed nor refuted, but the destruction toppled structures with walls up to 4 m thick. Whatever the event, 15 years of excavation have revealed that it was one of extremely high energy. There is evidence for pulverisation of mud bricks, and at some dwellings they were apparently blown off-site: a possibility in a large-magnitude earthquake. Unusually, however, mud bricks and clay used in pottery and roofing had been partially melted during the final destruction. Various analyses suggest temperatures were as high as 2000 °C.

Top – oblique aerial view of the mound at Tall el Hammam looking to the south-west; Bottom – the Lower Jordan Valley and Bronze Age talls, with the extent of the area devastated by the 1908 Tunguska air-burst superimposed. (Credit: Bunch et al. 2021, Figs 1b and 52)

A detailed summary of results from the Tall el Hammam site has just appeared (Bunch, T.E. and 20 others 2021. A Tunguska sized airburst destroyed Tall el-Hammam a Middle Bronze Age city in the Jordan Valley near the Dead Sea. Scientific Reports, v. 11, article 18632; DOI: 10.1038/s41598-021-97778-3). As the title indicates, it comes to an astonishing conclusion, which rests on a large range of archaeological and geochemical data that go well beyond the earlier discovery of the tall’s destruction at very high temperatures. Radiocarbon dates of 26 samples from the destruction layer reveal that it happened in 1661±21 BCE – the mid- to late Bronze Age, as also suggested by the styles of a variety of artefacts. The most revealing data have emerged from the debris that caps the archaeological section, particularly the fine-grained materials in it. Sand-sized mineral grains were melted, some forming spherules or droplets of glass. Even highly refractory minerals such as zircon and chromite were melted. Mixed in with the resulting glasses are tiny nuggets of metals, including platinum-group metals. As well as high temperatures the event involved intense mechanical shock that produced tell-tale lamellae in quartz grains, familiar from sites of known extraterrestrial impacts. One specimen shows a micro-crater produced by a grain of carbonaceous material, which is now made up of ~1 μm diamond-like carbon (diamondoid) crystals. There is abundant evidence of directionality in the form of linear distributions of ceramic shards and carbonised cereal grains that seem to have been consistently transported in a SW to NE direction: a kind of high-speed ‘blow-over’. In the debris are also fragments of pulverised bone, most too small to assign to species. But among them are two highly damaged human skulls and isolated and charred human limb- and pelvic bones. Forensic analysis suggests at least two individuals were decapitated, dismembered and incinerated during the catastrophe. Isolated scatters of recognisable human bones indicate at least 10 people who suffered a similar death. Finally, the destruction layer is marked by an unusually high concentration of salt, some of which has been melted.

Such a range of evidence is difficult to reconcile with hypotheses citing warfare, accidental burning, tornadoes or earthquakes. However, the diversity of phenomena associated with the destruction of Tall el Hammam has been compared with data from nuclear explosion sites, suggesting the huge power of the event. The authors turned to evidence linked to the air-burst detonation of a cosmic body over Tunguska, Siberia in 1908, which had a power estimated at between 12 and 23 megatonnes of TNT equivalent. Such an event seems to fit the fate of Tall el Hammam. The Tunguska event devastated an area of 2200 km2. The tall and another at Jericho lie within such an area. Perhaps not coincidentally, the destruction of Jericho was also in the mid- to late Bronze Age, sometime between 1686 and 1626 BCE: i.e. statistically coeval with that of Tall el Hammam.
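A simple geometric check, illustrative only and not taken from the paper, shows why Jericho would fall within a Tunguska-sized footprint: an area of 2200 km2, if roughly circular, has a radius of about 26 km, comfortably more than the 15 km separating Tall el Hammam from Jericho.

```python
import math

# Illustrative geometry only: how far a Tunguska-sized devastated area would reach
# if roughly circular and centred on Tall el Hammam.
devastated_area_km2 = 2200.0       # area flattened at Tunguska in 1908
distance_to_jericho_km = 15.0      # separation quoted in the text

radius_km = math.sqrt(devastated_area_km2 / math.pi)
print(f"Equivalent radius: {radius_km:.1f} km")                            # ~26.5 km
print(f"Jericho inside footprint: {distance_to_jericho_km < radius_km}")   # True
```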

Archaeologists working in the Lower Jordan Valley have examined 15 other talls and more than a hundred lesser inhabited sites and have concluded that all of them were abandoned at the end of the Middle Bronze Age. The whole area is devoid of evidence for agricultural settlements for the following three to six centuries, although there are traces of pastoralist activity. The high amount of salt in the Tall el Hammam debris, if spread over the whole area, would have rendered its soils infertile until it was eventually flushed out by rainfall and runoff. If, indeed, the event matches the biblical account of Sodom and Gomorrah, then Lot and his remaining companions would have found it difficult to survive without invading the lands of other people who had escaped, much as recorded later in Genesis. Of more concern is what will become of Ted Bunch and his 20 US colleagues? Will they be charged with blasphemy?

See also: Tunguska-Sized Impact Destroyed Jordan Valley City 3,670 Years Ago, SciNews, 29 September 2021; Did an impact affect hunter gatherers at the start of the Younger Dryas? Earth-logs, 3 July 2020.

Anthropocene more an Event than an Epoch

The Vattenfall lignite mine in Germany; the Anthropocene personified

The issue of whether or not to assign formal stratigraphic status to the time span during which human activities have been significantly affecting the planet and its interwoven Earth Systems has been dragging on since the term ‘Anthropocene’ was first proposed more than two decades ago. A suggestion that may resolve matters, both amicably and with a degree of scientific sense, has emerged in a short letter to the major scientific journal Nature, written by six eminent scientists (Bauer, A.M. et al. 2021. Anthropocene: event or epoch? Nature, v. 597, p. 332; DOI: 10.1038/d41586-021-02448-z). The full text is below:

“The concept of the Anthropocene has inspired more than two decades of constructive scholarship and public discussion. Yet much of this work seems to us incompatible with the proposal to define the Anthropocene as an epoch or series in the geological timescale, with a precise start date and stratigraphic boundary in the mid-twentieth century. As geologists, archaeologists, environmental scientists and geographers, we have another approach to suggest: recognize the Anthropocene as an ongoing geological event.

The problems with demarcating the Anthropocene as a globally synchronous change in human–environment relations, occurring in 1950 or otherwise, have long been evident (P. J. Crutzen and E. F. Stoermer IGBP Newsletter 41, 17–18; 2000). As an ongoing geological event, it would be analogous to other major transformative events, such as the Great Oxidation Event (starting around 2.4 billion years ago) or the Great Ordovician Biodiversification Event (around 500 million years ago).

Unlike formally defined epochs or series, geological events can encompass spatial and temporal heterogeneity and the diverse processes — environmental and now social — that interact to produce global environmental changes. Defining the Anthropocene in this way would, in our view, better engage with how the term has been used and criticized across the scholarly world.”

AUTHORS: Andrew M. Bauer, Stanford University, Stanford, California, USA; Matthew Edgeworth, University of Leicester, Leicester, UK; Lucy E. Edwards, Florence Bascom Geoscience Center, Reston, Virginia, USA; Erle C. Ellis, University of Maryland, Baltimore County, Maryland, USA; Philip Gibbard, Scott Polar Research Institute, University of Cambridge, Cambridge, UK; Dorothy J. Merritts, Franklin and Marshall College, Lancaster, Pennsylvania, USA.

I have been grousing about the attempt to assign Epoch/Series status to the Anthropocene for quite a while (you can follow the development of my personal opinions by entering ‘Anthropocene’ in the Search Earth-logs box). In general I believe that the proposal being debated is scientifically absurd, and a mere justification for getting a political banner to wave. What the six authors of this letter propose seems eminently sensible. I hope it is accepted by the International Commission on Stratigraphy as a solution to the increasingly sterile discussions that continue to wash to and fro in our community. Then perhaps the focus can be on action rather than propaganda.

As things have stood since 21 May 2019, a proposal to accept the Anthropocene as a formal chrono-stratigraphic unit, defined by a Global Boundary Stratotype Section and Point (GSSP) at its base around the middle of the 20th century, is before the ICS and the International Union of Geological Sciences (IUGS) for ratification. It was accepted by 88% of the 34-strong Anthropocene Working Group of the ICS Subcommission on Quaternary Stratigraphy, but has yet to be ratified by either the ICS or the IUGS. Interestingly, one of the main Anthropocene proponents was recently replaced as chair of the Working Group.

How flowering plants may have regulated atmospheric oxygen

Ultimately, the source of free oxygen in the Earth System is photosynthesis, but the amount that persists is the result of a chemical balance in the biosphere and hydrosphere that operates at the surface and just beneath it in sediments. Burial of dead organic carbon in sedimentary rocks allows free oxygen to accumulate, whereas weathering and oxidation of that carbon, largely to CO2, tends to counteract oxygen build-up. The balance is reflected in the current proportion of 21% oxygen in the atmosphere. Yet in the past oxygen levels have been much higher. During the Carboniferous and Permian periods the level rose dramatically to an all-time high of 35% in the late Permian (about 250 Ma ago). This is famously reflected in fossils of giant dragonflies and other insects from the later part of the Palaeozoic Era. Insects breathe passively via tiny tubes (tracheae) through whose walls oxygen diffuses, unlike active-breathing quadrupeds that drive air into lung alveoli to dissolve O2 directly in blood. Insect size is thus limited by the oxygen content of air; to grow wing spans of up to 2 metres a modern dragonfly’s body would consist only of tracheae with no room for gut; it would starve.

Woman holding a reconstructed Late Carboniferous dragonfly (Namurotypus sippeli)

During the Triassic oxygen fell rapidly to around 15%, then rose through the Jurassic and Cretaceous Periods to about 30%, only to fall again to present levels during the Cenozoic Era. Incidentally, the mass extinction at the end of the Cretaceous (the K-Pg boundary event) was marked in the marine sedimentary record by unusually high amounts of charcoal. That is evidence for the Chicxulub impact being accompanied by global wildfires that a high-oxygen atmosphere would have encouraged. The high oxygen levels of the Cretaceous marked the emergence of modern flowering plants – the angiosperms. Six British geoscientists have analysed the possible influence on the Earth System of this new and eventually dominant component of the terrestrial biosphere (Belcher, C.M. et al. 2021. The rise of angiosperms strengthened fire feedbacks and improved the regulation of atmospheric oxygen. Nature Communications, v. 12, article 503; DOI: 10.1038/s41467-020-20772-2).

The episodic occurrence of charcoal in sedimentary rocks bears witness to wildfires having affected terrestrial ecosystems since the decisive colonisation of the land by plants at the start of the Devonian, 420 Ma ago. Fire and vegetation have since gone hand in hand, and the evolution of land plants has partly been through adaptations to burning. For instance, the cones of some conifer species open only during wildfires to shed seeds following burning. Some angiosperm seeds, such as those of eucalyptus, germinate only after being subjected to fire. The nature of wildfires varies according to particular ecosystems: needle-like foliage burns differently from angiosperm leaves; grassland fires differ from those in forests, and so on. Massive fires on the Earth’s surface are not inevitable, however. Evidence for wildfires is absent during those times when the atmosphere’s oxygen content has dipped below an estimated 16%. The current oxygen level encourages fires in dry forest during drought, as those of Victoria in Australia and California in the US during 2020 amply demonstrated. It is possible that with oxygen above 25% dry forest would not regenerate without burning in the next dry season. Wet forest, as in Brazil and Indonesia, can burn under present conditions but only if set alight deliberately. Evidence of a global firestorm after the K-Pg extinction implies that tropical rain forest burns easily when oxygen is above 30%. So, how come the dominant flora of Earth’s huge tropical forests – the flowering angiosperms – evolved and hung on when conditions were ripe for them to burn on a massive scale?

Early angiosperms had small leaves, suggesting small stature and growth in stands of open woodland [perhaps shrubberies], favouring the fire protection offered by wetlands. ‘Weedy’ plants regenerate and reach maturity more quickly than do those species that are destined to produce tall trees. With endemic wildfires, tree-sized plants – e.g. the gymnosperms of the Mesozoic – cannot attain maturity by growing above the height of flames. Diminutive early angiosperms in a forest understorey would probably outcompete their more ancient companions. Yet to become the mighty trees of later rain forests angiosperms must somehow have regulated atmospheric oxygen so that it declined well below the level at which wet forest is ravaged by natural wildfires. The oldest evidence for angiosperm rain forest dates to 59 Ma, when perhaps more primitive tropical trees had been almost wiped out by wildfires. Did angiosperms also encourage wildfires that consumed oxygen on a massive scale, as well as evolving to resist their effects on plant growth? Claire Belcher et al. suggest that they did, through a series of evolutionary steps. Key to their stabilising oxygen levels at around 21%, the authors allege, was angiosperms’ suppression of the weathering of phosphorus from rocks and/or the transfer of that major nutrient from the land to the oceans. On land nitrogen is the most important nutrient for biomass, whereas phosphorus is the limiting factor in the ocean. Its reduction by angiosperm dominance on land thereby reduces carbon burial in ocean sediments. In a very roundabout way, therefore, angiosperms control the key factor in allowing atmospheric build-up of oxygen: by encouraging mass burning and suppressing carbon burial. Today, about 84 percent of wildfires are started by anthropogenic activities. As yet we have little, if any, idea of how such disruption of the natural flora-fire system is going to affect future ecosystems. The ‘Pyrocene’ may be an outcome of the ‘Anthropocene’ …

Tsunami risk in East Africa

The 26 December 2004 Indian Ocean tsunami was one of the deadliest natural disasters since the start of the 20th century, with an estimated death toll of around 230 thousand. Millions more were deeply traumatised, bereft of homes and possessions, rendered short of food and clean water, and threatened by disease. Together with that launched onto the seaboard of eastern Japan by the Sendai earthquake of 11 March 2011, it has spurred research into detecting the signs of older tsunamis left in coastal sedimentary deposits (see for instance: Doggerland and the Storegga tsunami, December 2020). In normally quiet coastal areas these tsunamites commonly take the form of sand sheets interbedded with terrestrial sediments, such as peaty soils. On shores fully exposed to the ocean the evidence may take the form of jumbles of large boulders that could not have been moved by even the worst storm waves.

Sand sheets attributed to a succession of tsunamis, interbedded with peaty soils deposited in a swamp on Phra Thong Island, Thailand. Note that a sand sheet deposited by the 2004 Indian Ocean tsunami is directly beneath the current swamp surface (Credit: US Geological Survey)

Most of the deaths and damage wrought by the 2004 tsunami were along coasts bordering the Bay of Bengal in Indonesia, Thailand, Myanmar, India and Sri Lanka, and the Nicobar Islands. Tsunami waves were recorded on the coastlines of Somalia, Kenya and Tanzania, but had far lower amplitudes and energy so that fatalities – several hundred – were restricted to coastal Somalia. East Africa was protected to a large extent by the Indian subcontinent taking much of the wave energy released by the magnitude 9.1 to 9.3 earthquake (the third largest recorded) beneath Aceh at the northernmost tip of the Indonesian island of Sumatra. Yet the subduction zone that failed there extends far to the southeast along the Sunda Arc. Earthquakes further along that active island arc might potentially expose parts of East Africa to far higher wave energy, because of less protection by intervening land masses.

This possibility, together with the lack of any estimate of tsunami risk for East Africa, drew a multinational team of geoscientists to the estuary of the Pangani River in Tanzania (Maselli, V. and 12 others 2020. A 1000-yr-old tsunami in the Indian Ocean points to greater risk for East Africa. Geology, v. 48, p. 808-813; DOI: 10.1130/G47257.1). Archaeologists had previously examined excavations for fish-farming ponds and discovered the relics of an ancient coastal village. Digging further pits revealed a tell-tale sheet of sand in a sequence of alluvial sediments and peaty silts and fine sands derived from mangrove swamps. The peats contained archaeological remains – sherds of pottery and even beads. The tsunamite sand sheet occurs within the mangrove facies. It contains pebbles of bedrock that also litter the open shoreline of this part of Tanzania. There are also fossils: mainly a mix of marine molluscs and foraminifera with terrestrial rodents, fish, birds and amphibians. But throughout the sheet, scattered at random, are human skeletons and disarticulated bones of male and female adults, and children. Many have broken limb bones, but show no signs of blunt-force trauma or disease pathology. Moreover, there is no sign of ritual burial or weaponry; the corpses had not resulted from massacre or epidemic. The most likely conclusion is that they are victims of an earlier Indian Ocean tsunami. Radiocarbon dating shows that it occurred at some time between the 11th and 13th centuries CE. This broadly tallies with evidence from Thailand, Sumatra, the Andaman and Maldive Islands, India and Sri Lanka for a major tsunami in 950 CE.

Computer modelling of tsunami propagation reveals that the Pangani River lies on a stretch of the Tanzanian coast that is likely to have been sheltered from most Indian Ocean tsunamis by Madagascar and the shallows around the Seychelles Archipelago. Seismic events on the Sunda Arc or the lesser Makran subduction zone of eastern Iran may not have been capable of generating sufficient energy to raise tsunami waves at the latitudes of the Tanzanian coast much higher than those witnessed there in 2004, unless their arrival coincided with high tide – damage was prevented in 2004 because of low tide levels. However, the topography of the Pangani estuary may well amplify water level by constricting a surge. Such a mechanism can account for variations of destruction during the 2011 Tohoku-Sendai tsunami in NE Japan.

If coastal Tanzania is at high risk of tsunamis, that can only be confirmed by deeper excavation into coastal sediments to check for the multiple sand sheets that characterise areas closer to the Sunda Arc. So far, the sand sheet in the Pangani estuary is the only one recorded in East Africa.

Thawing permafrost, release of carbon and the role of iron

Projected shrinkage of permanently frozen ground around the Arctic Ocean over the next 60 years

Global warming is clearly happening. The crucial question is ‘How bad can it get?’ Most pundits focus on the capacity of the globalised economy to cut carbon emissions – mainly CO2 from fossil fuel burning and methane emissions by commercial livestock herds. Can they be reduced in time to reverse the increase in global mean surface temperature that has already taken place and forestall those that lie ahead? Every now and then there is mention of the importance of natural means of drawing down greenhouse gases: plant more trees; preserve and encourage wetlands and their accumulation of peat, and so on. For several months of the Northern Hemisphere summer the planet’s largest bogs actively sequester carbon in the form of dead vegetation. For the rest of the year they are frozen stiff. Muskeg and tundra form a band across the alluvial plains of the great rivers that drain North America and Eurasia towards the Arctic Ocean. The seasonal bogs lie above sediments deposited in earlier river basins and swamps that have remained permanently frozen since the last glacial period. Such permafrost begins just a few metres below the surface and at high latitudes extends down to as much as a kilometre; towards lower latitudes its top becomes deeper and the frozen layer thinner and patchier, until it disappears south of about 60°N except in mountainous areas. Permafrost is now thawing relentlessly, sometimes with spectacular results, broadly known as thermokarst, that involve surface collapse, mudslides and erosion by summer meltwater.

Thawing permafrost in Siberia and associated collapse structures

Permafrost is a good preserver of organic material, as shown by the almost perfect remains of mammoths and other animals that have been found where rivers have eroded their frozen banks. The latest spectacular find is a mummified wolf pup unearthed by a gold prospector from 57 ka-old permafrost in the Yukon, Canada. She was probably buried when a wolf den collapsed. Thawing exposes buried carbonaceous material to processes that release CO2, as does the drying-out of peat in more temperate climes. It has long been known that the vast reserves of carbon preserved in frozen ground and in gas hydrate in sea-floor sediments present an immense danger of accelerated greenhouse conditions should permafrost thaw quickly and deep seawater heat up; the first is certainly starting to happen in boreal North America and Eurasia. Research into Arctic soils had suggested that there is a potential mitigating factor. Iron-3 oxides and hydroxides, the colorants of soils that overlie permafrost, have chemical properties that allow them to trap carbon, in much the same way that they trap arsenic by adsorption on the surface of their molecular structure (see: Screening for arsenic contamination, September 2008).

But, as in the case of arsenic, mineralogical trapping of carbon and its protection from oxidation to CO2 can be thwarted by bacterial action (Patzner, M.S. and 10 others 2020. Iron mineral dissolution releases iron and associated organic carbon during permafrost thaw. Nature Communications, v. 11, article 6329; DOI: 10.1038/s41467-020-20102-6). Monique Patzner of the University of Tuebingen, Germany, and her colleagues from Germany, Denmark, the UK and the US have studied peaty soils overlying permafrost north of the Arctic Circle in Sweden. Their mineralogical and biological findings came from cores driven through the different layers above deep permafrost. In the layer immediately above permanently frozen ground the binding of carbon to iron-3 minerals certainly does occur. However, at higher levels that show evidence of longer periods of thawing there is an increase of reduced iron-2 dissolved in the soil water, along with more dissolved organic carbon – i.e. carbon prone to oxidation to carbon dioxide. Also, biogenic methane – a more powerful greenhouse gas – increases in the more waterlogged upper sediments. Among the active bacteria are varieties whose metabolism involves the reduction of insoluble iron in ferric oxyhydroxide minerals to the soluble ferrous form (iron-2). As in the case of arsenic contamination of groundwater, the adsorbed contents of iron oxyhydroxides are being released as a result of powerful reducing conditions.

Applying their results to the entire permafrost inventory at high northern latitudes, the team predicts a worrying scenario. Initial thawing can indeed lock in up to tens of billions of tonnes of carbon once preserved in permafrost, yet this amounts to only a fifth of the carbon present in the surface-to-permafrost layer of thawing, at best. In itself, the trapped carbon is equivalent to between 2 and 5 times the annual anthropogenic release of carbon by burning fossil fuels. Nevertheless, it is destined by reductive dissolution of its host minerals to be emitted eventually, if thawing continues. This adds to the even vaster potential releases of greenhouse gases in the form of biogenic methane from waterlogged ground. However, there is some evidence to the contrary. During the deglaciation between 15 and 8 thousand years ago – except for the thousand years of the Younger Dryas cold episode – land-surface temperatures rose far more rapidly than is happening at present. A study of carbon isotopes in air trapped as bubbles in Antarctic ice suggests that methane emissions from organic carbon exposed to bacterial action by thawing permafrost were much lower than claimed by Patzner et al. for present-day, slower thawing (see: Old carbon reservoirs unlikely to cause massive greenhouse gas release, study finds. Science Daily, 20 February 2020) – as were those released by breakdown of submarine gas hydrates.
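To give a sense of scale, here is an illustrative sketch; the figure of roughly 10 billion tonnes of carbon per year for fossil-fuel emissions is my own assumption, not a number from the paper, while the multipliers come from the paragraph above.

```python
# Illustrative scale check. ASSUMPTION: annual fossil-fuel carbon emissions of ~10 Gt C/yr
# (not a figure given in the paper); the multipliers are those quoted in the text above.
annual_fossil_carbon_GtC = 10.0          # assumed, for illustration only
multiplier_low, multiplier_high = 2.0, 5.0
fraction_trapped = 0.2                   # trapped carbon is ~a fifth of the thawing layer's total

trapped_low = multiplier_low * annual_fossil_carbon_GtC     # ~20 Gt C
trapped_high = multiplier_high * annual_fossil_carbon_GtC   # ~50 Gt C
layer_low = trapped_low / fraction_trapped                  # ~100 Gt C
layer_high = trapped_high / fraction_trapped                # ~250 Gt C

print(f"Carbon trapped on iron minerals: {trapped_low:.0f}-{trapped_high:.0f} Gt C")
print(f"Implied carbon in the thawing layer: {layer_low:.0f}-{layer_high:.0f} Gt C")
```

On those assumed figures, the iron-bound carbon would amount to some 20 to 50 billion tonnes, implying of the order of 100 to 250 billion tonnes in the thawing layer as a whole.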

Doggerland and the Storegga tsunami

Britain is only an island when sea level stands high; i.e. during interglacial conditions. Since the last ice age global sea level has risen by about 130 m as the great northern ice sheets slowly melted. That Britain could oscillate between being part of Europe and a large archipelago as a result of major climatic cycles dates back only to between 450 and 240 ka ago. Previously it was a permanent part of what is now Europe, as befits its geological identity, joined to it by a low ridge buttressed by Chalk across the Dover Strait/Pas de Calais. All that remains of that are the white cliffs on either side. The drainage of what became the Thames, Seine and Rhine passed to the Atlantic in a much larger river system that flowed down the axis of the Channel. Each time an ice age ended the ridge acted as a dam for glacial meltwater to form a large lake in what is now the southern North Sea. While continuous glaciers across the northern North Sea persisted, the lake remained, but erosion during interglacials steadily wore down the ridge. About 450 ka ago the ridge was low enough for this pro-glacial lake to spill across it in a catastrophic flood that began the separation. Several repeats occurred until the ridge was finally breached (See: When Britain first left Europe; September 2007). Yet sufficient remained that the link reappeared when sea level fell. What remains at present is a system of shallows and sandbanks, the largest of which is the Dogger Bank, roughly halfway between Newcastle and Denmark. Consequently the swamps and river systems that immediately followed the last ice age have become known collectively as Doggerland.

The shrinkage of Doggerland since 16,000 BCE (Credit: Europe’s Lost Frontiers Project, University of Bradford)

Dredging of the southern North Sea for sand and gravel frequently brings both the bones of land mammals and the tools of Stone Age hunters to light – one fossil was a skull fragment of a Neanderthal. At the end of the Younger Dryas (~11.7 ka) Doggerland was populated and became a route for Mesolithic hunter-gatherers to cross from Europe to Britain and become transient and then permanent inhabitants. Melting of the northern ice sheets was slow and so was the pace of sea-level rise. A continuous passage across Doggerland remained even as it shrank. Only when the sea surface reached about 20 m below its current level was the land corridor breached by what is now the Dover Strait, although low islands, including the Dogger Bank, littered the growing seaway. A new study examines the fate of Doggerland and its people during its final stage (Walker, J. et al. 2020. A great wave: the Storegga tsunami and the end of Doggerland? Antiquity, v. 94, p. 1409-1425; DOI: 10.15184/aqy.2020.49).

James Walker and colleagues at the University of Bradford, UK, and co-workers from the universities of Tartu, Estonia, Wales Trinity Saint David and St Andrews, UK, focus on one devastating event during Doggerland’s slow shrinkage and inundation. This took place around 8.2 ka ago, during the collapse of a section of the Norwegian continental edge. Known as the Storegga Slides (storegga means ‘great edge’ in Norse), three submarine debris flows shifted 3500 km3 of sediment to blanket 80 thousand km2 of the Norwegian Sea floor, reaching more than halfway to Iceland. Tsunami deposits related to these events occur along the coast of western Norway, on the Shetlands and along the shoreline of eastern Scotland. They lie between 3 and 20 m above modern sea level, but allowing for the lower sea level at the time the ‘run-up’ probably reached as high as 35 m: more than the maximum of both the 26 December 2004 Indian Ocean tsunami and that in NE Japan on 11 March 2011. Two Mesolithic archaeological sites definitely lie beneath the tsunami deposit, one close to the source of the slides, another near Inverness, Scotland. At the time part of the Dogger Bank still lay above the sea, as did a wide coastal plain and offshore islands along England’s east coast. This catastrophic event came a little later than a sudden cooling event in the Northern Hemisphere. Any Mesolithic people living on what was left of Doggerland would not have survived, but quite possibly they had already left as the climate cooled substantially.

A seabed drilling programme financed by the EU targeted what lies beneath more recent sediments on the Dogger Bank and off the embayment known as The Wash in eastern England. Some of the cores contain tsunami deposits, one having been analysed in detail in a separate paper (Gaffney, V. and 24 others 2020. Multi-Proxy Characterisation of the Storegga Tsunami and Its Impact on the Early Holocene Landscapes of the Southern North Sea. Geosciences, v. 10, online; DOI: 10.3390/geosciences10070270). The tsunami washed across an estuarine mudflat into an area of meadowland with oak and hazel woodland, which may have absorbed much of its energy. Environmental DNA analysis suggests that this relic of Doggerland was roamed by bear, wild boar and ruminants. The authors also found evidence that the tsunami had been guided by pre-existing topography, such as the river channel of what is now the River Great Ouse. Yet they found no evidence of human occupation. Together with other researchers, the University of Bradford’s Lost Frontiers Project has produced sufficient detail about Doggerland to contemplate looking for Mesolithic sites in the excavations for offshore wind farms.

See also: Addley, E. 2020.  Study finds indications of life on Doggerland after devastating tsunamis. (The Guardian, 1 December 2020); Europe’s Lost Frontiers website

Human impact on surface geological processes

I last wrote about sedimentation during the ‘Anthropocene’ a year ago (See: Sedimentary deposits of the ‘Anthropocene’, November 2019). Human impact in that context is staggeringly huge: annually we shift 57 billion tonnes of rock and soil, equivalent to six times the mass of the UK’s largest mountain, Ben Nevis. All the world’s rivers combined move about 35 billion tonnes less. I don’t particularly care for erecting a new Epoch in the Stratigraphic Column, and even less about when the ‘Anthropocene’ is supposed to have started. The proposal continues to be debated 12 years after it was first suggested to the IUGS International Commission on Stratigraphy. I suppose I am a bit ‘old fashioned’, but the proposal is for a stratigraphic entity that is vastly shorter than the smallest globally significant subdivision of geological time (an Age) and than the duration of most of the recorded mass extinctions, which are signified by horizontal lines in the Column. By way of illustration, the thick, extensive bed of Carboniferous sandstone on which I live is one of many deposited in the early part of the Namurian Age (between 328 and 318 Ma). Nonetheless, anthropogenic sediments of, say, the last 200 years are definitely substantial. A measure of just how substantial is provided by a paper published online this week (Kemp, S.B. et al. 2020. The human impact on North American erosion, sediment transfer, and storage in a geologic context. Nature Communications, v. 11, article 6012; DOI: 10.1038/s41467-020-19744-3).
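Taking the figures quoted above at face value (a quick illustrative sum of my own, not from the paper): if humans shift 57 billion tonnes a year and the world’s rivers move about 35 billion tonnes less, rivers carry roughly 22 billion tonnes a year and human activity moves around two and a half times as much.

```python
# Quick sum based only on the figures quoted in the paragraph above.
human_moved_Gt = 57.0     # rock and soil shifted annually by human activity
difference_Gt = 35.0      # stated shortfall of all rivers combined

rivers_moved_Gt = human_moved_Gt - difference_Gt    # ~22 Gt/yr
ratio = human_moved_Gt / rivers_moved_Gt            # ~2.6

print(f"Sediment moved by rivers: ~{rivers_moved_Gt:.0f} Gt/yr")
print(f"Humans move ~{ratio:.1f} times as much as all rivers combined")
```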

‘Badlands’ formed by accelerated soil erosion.

Anthropogenic erosion, sediment transfer and deposition in North America kicked off with its colonisation by European immigrants from the early 16th century onwards. First Americans were hunter-gatherers and subsistence farmers, and left virtually no traces in the landscape other than their artefacts and, in the case of farmers, their dwellings. Kemp and colleagues have focussed on late-Pleistocene and Holocene alluvial sediment, accumulation of which seems to have been pretty stable for 40 ka. Since colonisation began the rate has increased to ten times that previously stable rate, mostly during the last 200 years of accelerated spread of farmland. This is dominated by the outcomes of two agricultural practices – ploughing and deforestation. Breaking of the complex and ancient prairie soils, formerly held together by deep, dense mats of grass root systems, made even flat surfaces highly prone to soil erosion, as demonstrated by the ‘dust bowl’ conditions of the Great Depression during the 1930s. In more rugged relief, deforestation made slopes more likely to fail through landslides and other mass movements. Damming of streams and rivers for irrigation or, its opposite, to drain wetlands resulted in alterations to the channels themselves and their flow regimes. Consequently, older alluvium succumbed to bank erosion. Increased deposition behind an explosion of mill dams and changed flow regimes in the reaches of streams below them had effects disproportionate to the size of the dams (see: Watermills and meanders, March 2008). Stream flow beforehand was slower and flooding more balanced than it has been over the last few hundred years. Increased flooding, the building of ever larger flood defences and an increase in flood magnitude, duration and extent when defences were breached form a vicious circle that quickly transformed the lower reaches of the largest American river basins.

North American rates of alluvium deposition since 40 ka ago – the time axis is logarithmic. (Credit: Kemp et al., 2020; Fig. 2)

All this deserves documentation and quantification, which Kemp et al. have attempted at 400 alluvial study sites across the continent, measuring >4700 rates of sediment accumulation at various times during the past 40 thousand years. Such deposition serves roughly as a proxy for erosion rate, but that is a function of multiple factors, such as run-off of rain- and snow-melt water, and anthropogenic changes to drainage courses and to slope stability. The scale of post-settlement sedimentation is not the same across the whole continent. In some areas, such as southern California, the rate over the last 200 years is lower than the estimated natural, pre-settlement rate: this example may be due to increased capture of surface water for irrigation of a semi-arid area, so that erosion and transport were retarded. In others it seems to be unchanged, probably for a whole variety of reasons. The highest rates are in the main areas of rain-fed agriculture of the mid-west of the US and western Canada.

In a nutshell, during the last century North American capitalism shifted as much sediment as would be moved naturally in between 700 and 3000 years. No such investigation has been attempted in other parts of the world that have histories of intense agriculture going back several thousand years, such as the plains of China, northern India and Mesopotamia, the lower Nile valley, the great plateau of the Ethiopian Highlands, and Europe. This is a global problem, and despite its continent-wide scope the study by Kemp et al. barely scratches the surface. Despite earnest endeavours to reduce soil erosion in the US and a few other areas, it does seem as if the damage has been done and is irreversible.