Risks of sudden changes linked to climate

The Earth system comprises a host of dynamic, interwoven components or subsystems. They involve processes deep within Earth’s interior, at its surface and in the atmosphere. Such processes combine inorganic chemistry, biology and physics. To describe them properly would require not just a multi-volume book but an entire library, and even that would be more incomplete than our understanding of human history and all the other social sciences. Cut to its fundamentals, Earth system science deals – or tries to deal – with a planetary engine. In it, the available energy from inside the planet and from the Sun is continually shifted around to drive the bewildering variety, multiplicity of scales and variable paces of every process that makes our planet the most interesting thing in the entire universe. It has done so, with a variety of hiccups and monumental transformations, for some four and a half billion years and looks likely to continue on its roiling way for about five billion more – with or without humanity. Though we occupy a tiny fraction of its history, we have introduced a totally new subsystem that in several ways outpaces the speed and magnitude of some chemical, physical and organic processes. For example: shifting mass (see the previous item, Sedimentary deposits of the ‘Anthropocene’); removing and modifying vegetation cover; emitting vast amounts of various compounds as a result of economic activity – the full list is huge. In such a complex natural system it is hardly surprising that rapidly increasing human activities over the last few centuries have had hitherto unforeseen effects on all the other components. The most rapidly fluctuating of the natural subsystems is climate, and it has been extraordinarily sensitive for the whole of Earth history.

Cartoon metaphor for a ‘tipping point’ as water is added to a bucket pivoted on a horizontal axis. While the water level remains below the axis the bucket becomes increasingly stable. Once the level rises above the pivot instability sets in until the system suddenly collapses

Within any dynamic, multifaceted system-component each contributing process may change, and in doing so throw the others out of kilter: there are ‘tipping points’. Such phenomena can be crudely visualised as a pivoted bucket into which water drips and escapes. While the water level remains below the pivot, the system is stable. Once it rises above that axis instability sets in; an external push can, if strong enough, tip the bucket and drain it rapidly. The higher the level rises the less of a push is needed. If no powerful push upsets the system the bucket continues filling. Eventually a state is reached when even a tiny force is able to result in catastrophe. One much-cited hypothesis invokes a tipping point in the global climate system that began to allow the minuscule effect on insolation from changes in the eccentricity of Earth’s orbit to impose its roughly 100 ka frequency on the ups and downs of continental ice volume during the last 800 ka. In a recent issue of Nature a group of climate scientists based in the UK, Sweden, Germany, Denmark, Australia and China published a Comment on several potential tipping points in the climate system (Lenton, T.M. et al. 2019. Climate tipping points — too risky to bet against. Nature, v. 575, p. 592-595; DOI: 10.1038/d41586-019-03595-0). They list what they consider to be the most vulnerable to catastrophic change: loss of ice from the Greenland and Antarctic ice sheets; melting of sea ice in the Arctic Ocean; loss of tropical and boreal forest; melting of permanently frozen ground at high northern latitudes; collapse of tropical coral reefs; and ocean circulation in the North and South Atlantic.
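The bucket metaphor can be sketched numerically. In this toy model (the numbers, and the linear relationship, are illustrative assumptions, not anything from the Comment), the smallest push able to tip the bucket is effectively infinite while the water level sits below the pivot, then shrinks steadily as the level climbs towards the brim:

```python
import math

def critical_push(level, pivot=0.5, brim=1.0):
    """Smallest external push that tips the bucket (arbitrary units).

    Below the pivot the bucket rights itself after any push, so no
    finite push can tip it. Above the pivot the required push shrinks
    linearly, vanishing as the water level approaches the brim.
    """
    if level <= pivot:
        return math.inf
    return brim - min(level, brim)

# The higher the level rises above the pivot, the smaller the push needed
for level in (0.4, 0.6, 0.8, 0.95):
    print(f"level {level:.2f} -> critical push {critical_push(level):.2f}")
```

The point the metaphor makes is visible in the output: past the threshold, the system's resilience decays towards zero even though nothing dramatic has yet happened.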

The situation they describe makes dismal reading. The only certain aspect is the steadily mounting level of carbon dioxide in the atmosphere, which boosts the retention of solar heat by delaying the escape of long-wave, thermal radiation from the Earth’s surface to outer space through the greenhouse effect. An ‘emergency’ – and there can be little doubt that one or more are just around the corner – is the product of ‘risk’ and ‘urgency’. Risk is the probability of an event times the damage it may cause. Urgency is the reaction time following an alert divided by the time left to intervene before catastrophe strikes. Not a formula designed to make us confident in the ‘powers’ of science! As the commentary points out, whereas scientists are aware of and have some data on a whole series of tipping points, their understanding is insufficient to ‘put numbers on’ these vital parameters. And there may be other tipping points that they are yet to recognise. Another complicating factor is that in a complex system catastrophe in one component can cascade through the others: one tipping point may set off a ‘domino effect’ among the rest. An example is the steady and rapid melting of boreal permafrost. Frozen ground contains methane in the solid form of gas hydrate, which will release this ‘super-greenhouse’ gas as melting progresses. Science ‘knows of’ such potential feedback loops in a largely untried, theoretical sense, which is simply not enough.
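Lenton and colleagues' back-of-envelope formula can be written out explicitly. The numbers below are pure placeholders, since the whole point of the Comment is that science cannot yet supply them:

```python
def emergency(probability, damage, reaction_time, time_left):
    """Emergency = risk x urgency, after Lenton et al. (2019).

    risk    = probability of an event x the damage it may cause
    urgency = reaction time following an alert / time left to intervene
    A reaction time greater than the time left (urgency >= 1) means
    we can no longer react before catastrophe strikes.
    """
    risk = probability * damage
    urgency = reaction_time / time_left
    return risk * urgency

# Placeholder values only, in arbitrary units of damage and time
print(emergency(probability=0.1, damage=100.0,
                reaction_time=30.0, time_left=10.0))  # -> 30.0
```

Note that the metric blows up whenever the time left to intervene approaches zero, which is exactly the property that makes the authors' case for acting early.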

A tipping point that has a direct bearing on those of us who live around the North Atlantic resides in the way that water circulates in that vast basin. ‘Everyone knows about’ the Gulf Stream that ships warm surface water from equatorial latitudes to beyond the North Cape of Norway. It keeps NW Europe, otherwise subject to extremely cold winter temperatures, in a more equable state. In fact this northward flow of surface water and heat exerts controls on aspects of climate of the whole basin, such as the tracking of tropical storms and hurricanes, and the distribution of available moisture and thus rain- and snowfall. But the Gulf Stream also transports extra salt into the Arctic Ocean in the form of warm, more briny surface water. Its relatively high temperature prevents it from sinking, by reducing its density. Once at high latitudes, cooling allows Gulf-Stream water to sink to the bottom of the ocean, there to flow slowly southwards. This thermohaline circulation effectively ‘drags’ the Gulf Stream into its well-known course. Should it stop then so would the warming influence and the control it exerts on storm tracks. It has stopped in the past; many times. The general global cooling during the 100 ka that preceded the last ice age witnessed a series of lesser climate events. Each began with a sudden global warming followed by slow but intense cooling, then another warming to terminate these stadials or Dansgaard-Oeschger cycles (see: Review of thermohaline circulation, Earth-logs February 2002). The warming into the Holocene interglacial since about 20 ka was interrupted by a millennium of glacial cold between 12.9 and 11.7 ka, known as the Younger Dryas (see: On the edge of chaos in the Younger Dryas, Earth-logs May 2009). A widely supported hypothesis is that both kinds of major hiccup reflected shutdowns of the Gulf Stream due to sudden influxes of fresh water into North Atlantic surface water that reduced its density and ability to sink.
Masses of fresh water are now flowing into the Arctic Ocean from melting of the Greenland ice sheet and thinning of Arctic sea ice (also a source of fresh water). Should the Greenland ice sheet collapse then similar conditions for shut-down may arise – rapid regional cooling amidst global warming – and similar consequences in the Southern Hemisphere from the collapse of parts of the Antarctic ice sheets and ice shelves.  Lenton et al. note that North Atlantic thermohaline circulation has undergone a 15% slowdown since the mid-twentieth century…

See also: Carrington, D. 2019. Climate emergency: world ‘may have crossed tipping points’ (Guardian, 27 November 2019)

Sedimentary deposits of the ‘Anthropocene’

Economic activity since the Industrial Revolution has dug up rock – ores, aggregate, building materials and coal. Holes in the ground are a signature of late-Modern humanity, even the 18th century borrow pits along the rural, single-track road that passes the hamlet where I live. Construction of every canal, railway, road, housing development, industrial estate and land reclaimed from swamps and sea during the last two and a half centuries involved earth and rock being pushed around to level their routes and sites. The world’s biggest machine, aside from CERN’s Large Hadron Collider near Geneva, is Hitachi’s Bertha the tunnel borer (33,000 t) currently driving tunnels for Seattle’s underground rapid transit system. But the record muck shifter is the 14,200 t MAN TAKRAF RB293, capable of moving about 220,000 t of sediment per day, currently in a German lignite mine. The scale of humans as geological agents has grown exponentially. We produce sedimentary sequences, but ones with structures that are very different from those in natural strata. In Britain alone the accumulation of excavated and shifted material has an estimated volume six times that of our largest natural feature, Ben Nevis in NW Scotland. On a global scale 57 billion t of rock and soil is moved annually, compared with the 22 billion t transported by all the world’s rivers. Humans have certainly left their mark in the geological record, even if we manage to reverse terrestrial rapacity and stave off the social and natural collapse that now poses a major threat to our home planet.

A self propelled MAN TAKRAF bucketwheel excavator (Bagger 293) crossing a road in Germany to get from one lignite mine to another. (Credit: u/loerez, Reddit)

The holes in the ground have become a major physical resource, generating substantial profit for their owners from their infilling with waste of all kinds, dominated by domestic refuse. Unsurprisingly, large holes have become a dwindling resource in the same manner as metal ores. Yet these stupendous dumps contain a great deal of metals and other potentially useful material awaiting recovery in the eventuality that doing so would yield a profit, which presently seems a remote prospect. Such infill also poses environmental threats simply from its composition, which is totally alien compared with common rock and sediment. Three types of infill common in the Netherlands, familiar to everyone, have recently been assessed (Dijkstra, J.J. et al. 2019. The geological significance of novel anthropogenic materials: Deposits of industrial waste and by-products. Anthropocene, v. 28, Article 100229; DOI: 10.1016/j.ancene.2019.100229). These are: ash from the incineration of household waste; slags from metal smelting; and builders’ waste. What unites them, aside from their sheer mass, is that they are each products of high-temperature conditions: anthropogenic metamorphic rocks, if you like. That makes them thermodynamically unstable under surface conditions, so they are likely to weather quickly if they are exposed at the surface or in contact with groundwater. And that poses threats of pollution of soil-, surface- and groundwater.

All are highly alkaline, so they change environmental pH. Ash from waste incineration is akin to volcanic ash in that it contains a high proportion of complex glasses, which easily break down to clays and soluble products. Curiously, old dumps of ash often contain horizons of iron oxides and hydroxides, similar to the ‘iron pans’ in peaty soils. They form at contacts between oxidising and reducing conditions, such as the water table or at the interface with natural soils and rocks. Soluble salts of a variety of trace elements may accumulate, such as copper, antimony and molybdenum. Slags not only contain anhydrous silicates rich in the metals of interest and other trace metals, which on weathering may yield soluble chromium and vanadium, but they also have high levels of calcium-rich compounds from the limestone flux used in smelting, i.e. agents able to create high alkalinity. Portland cement, perhaps the most common material in builders’ waste, is dominated by hydrated calcium-aluminium silicates that break down if the concrete is crushed, again with highly alkaline products. Another component in demolition debris is gypsum from plaster, which can be a source of highly toxic hydrogen sulfide gas generated in anaerobic conditions by sulfate-reducing bacteria.

UK shale gas: fracking potential dramatically revised downwards

In 2013, much to the joy of the British government and the fracking industry, the British Geological Survey (BGS) declared that there was likely to be between 24 and 68 trillion m3 (TCM) of gas available to fracking ventures in the Carboniferous Bowland Shale, the most promising target in Britain. That is equivalent to up to about 90 years’ supply at the current UK demand for natural gas.  The BGS estimate was based on its huge archives of subsurface geology, including that of the Bowland Shale; they know where the rock is present and how much there is. But their calculations of potential gas reserves used data on the gas content of shales in the US where fracking has been booming for quite a while. Fracking depends on creating myriad cracks in a shale so that gas can escape what is an otherwise impermeable material.

Areas in Britain underlain by the Bowland Shale formation (credit: British Geological Survey)

How much gas might be available from a shale depends on its content of solid hydrocarbons (kerogen) and whether it has thermally matured and produced gas that remains locked within the rock. So a shale may be very rich in kerogen, but if it has not been heated to ‘maturity’ during burial it may contain no gas at all, and is therefore worthless for fracking. Likewise a shale from which the gas has leaked away over millions of years. A reliable means of checking has only recently emerged. High-pressure water pyrolysis (HPWP) mimics the way in which oil and gas are generated during deep burial and then expelled as once-deep rock is slowly uplifted (Whitelaw, P. et al. 2019. Shale gas reserve evaluation by laboratory pyrolysis and gas holding capacity consistent with field data. Nature Communications, v. 10, article 3659; DOI: 10.1038/s41467-019-11653-4). The authors from the University of Nottingham, BGS and a geochemical consulting company show that two samples of the Bowland Shale are much less promising than originally thought. Based on the HPWP results, it seems that only around 0.6 TCM of gas may be recoverable from the estimated 4 TCM that may reside in the Bowland Shale as a whole. This is ‘considerably below 10 years supply at the current [UK] consumption’.
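The supply arithmetic is easy to reproduce. Assuming UK annual gas consumption of roughly 0.08 TCM (about 80 billion m3 per year, an assumed round figure) and, for the 2013 gas-in-place estimate, a notional 10% recovery factor (also an assumption; neither figure is given in the papers), the two estimates bracket the supply horizons quoted above:

```python
def years_of_supply(recoverable_tcm, annual_demand_tcm=0.08):
    """Crude supply horizon: recoverable gas / assumed annual UK demand.

    annual_demand_tcm (~80 billion cubic metres a year) is an assumed
    round figure; the calculation is order-of-magnitude only.
    """
    return recoverable_tcm / annual_demand_tcm

# BGS 2013 upper gas-in-place estimate with a notional 10% recovery factor
print(f"2013 estimate: ~{years_of_supply(68 * 0.10):.0f} years")
# HPWP-based recoverable reserve from Whitelaw et al. (2019)
print(f"HPWP estimate: {years_of_supply(0.6):.1f} years")
```

With those assumptions the 2013 figure works out at roughly 85 years and the HPWP-based figure at 7.5 years, consistent with 'up to about 90 years' and 'considerably below 10 years supply' respectively.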

Unsurprisingly, the most prominent of the fracking companies, Cuadrilla, have dismissed the findings brusquely, despite having published analyses of other samples that are consistent with the results in this paper. Opinion in broader petroleum circles is that the only way of truly putting a number to potential reserves is to drill and frack many wells … The British government may well have a collective red face only a week after indicating that they were prepared to review regulation of fracking, which currently forces operations to stop if they cause seismic events above magnitude 0.5 on the Richter scale. A spokesperson for Greenpeace UK said that, ‘Fracking is our first post-truth industry, where there is no product, no profit and no prospect of either.’

See also: McGrath, M. 2019. Fracking: UK shale reserves may be smaller than previously estimated. (BBC News 20 August); Ambrose, J. 2019. Government’s shift to relax shale gas fracking safeguards condemned (Guardian 15 August); Fracking in the UK; will it happen? (Earth-logs June 2014)

Ecological hazards of ocean-floor mining

Spiralling prices for metals on the world market, especially those that are rare and involved in still-evolving technologies, together with depletion of onshore, high-grade reserves are beginning to make the opportunity of mining deep, ocean-floor resources attractive. By early 2018, fifteen companies had begun detailed economic assessment of one of the most remote swathes of the Pacific abyssal plains. In April 2018 (How rich are deep-sea resources?) I outlined the financial attractions and the ecological hazards of such ventures: both are substantial, to say the least. In Japan’s Exclusive Economic Zone (EEZ) off Okinawa the potential economic bonanza has begun, with extraction from deep-water sulfide deposits of zinc equivalent to Japan’s annual demand for that metal, together with copper, gold and lead. One of the most economically attractive areas lies far from EEZs, beneath the East Pacific Ocean between the Clarion and Clipperton transform faults. It is a huge field littered by polymetallic nodules, formerly known as manganese nodules because Mn is the most abundant metal in them. A recent article spelled out the potential environmental hazards which exploiting the resources of this region might bring (Heffernan, O. 2019. Seabed mining is coming – bringing mineral riches and fears of epic extinctions. Nature, v. 571, p. 465-468; DOI: 10.1038/d41586-019-02242-y).

The distribution of potential ocean-floor metal-rich resources (Credit: Heffernan 2019)

Recording of the ecosystem on the 4 km deep floor of the Clarion-Clipperton Zone (CCZ) began in the 1970s. It is extraordinarily diverse for such a seemingly hostile environment. Despite its being dark, cold and with little oxygen, it supports a rich and unique diversity of more than 1000 species of worms, echinoderms, crustaceans, sponges, soft corals and a poorly known but probably huge variety of smaller animals and microbes inhabiting the mud itself. In 1989, marine scientists simulated the effect on the ecosystem of mining by using an 8-metre-wide plough harrow to break up the surface of a small plot. A plume of fine sediment rained down to smother the inhabitants of the plot and most of the 11 km2 surrounding it. Four subsequent visits up to 2015 revealed that recolonisation by its characteristic fauna has been so slow that the area has not recovered from the disturbance after three decades.

The International Seabed Authority (ISA), with representatives from 169 maritime member-states, was created in 1994 by the United Nations to encourage and regulate ocean-floor mining; i.e. its function seems to be ‘both poacher and gamekeeper’. In 25 years, the ISA has approved only exploration activities and has yet to agree on an environmental protection code, such is the diversity of diplomatic interests and the lack of ecological data on which to base it. Of the 29 approved exploration licences, 16 are in the CCZ and span about 20% of it; one, involving British companies, has an area of 55,000 km2. ISA still has no plans to test the impact of the giant harvesting vehicles needed for commercial mining, and its stated intent is to keep only 30% of the CCZ free of mining ‘to protect biodiversity’. The worry among oceanographers and conservationists is that ISA will create a regulatory system without addressing the hazards properly. Commercial and technological planning is well advanced but stalled by the lack of a regulatory system as well as wariness because of the huge start-up costs in an entirely new economic venture.

The obvious concern for marine ecosystems is the extent of disturbance and ecosystem impact, both over time and as regards scale. The main problem lies in the particles that make up ocean-floor sediments, which are dominated by clay-size particles. The size of sedimentary particles considered to be clays ranges between 2.0 and 0.06 μm. According to Stokes’ law, a clay particle at the high end of the clay-size range with a diameter of 2 μm has a settling speed in water of 2 μm s-1. The settling speed for the smallest clays is 1,000 times slower. So, even the largest clay particles injected only 100 m above the ocean floor would take 1.6 years to settle back to the ocean floor – if the water column was absolutely still. But even the 4,000 m deep abyssal plains are not at all still, because of the ocean-water ‘conveyor belt’ driven by thermohaline circulation. An upward component of this flow would extend the time during which disturbed ocean-floor mud remains in suspension – if that component were a mere 2 μm s-1, even the largest clay particles would remain suspended indefinitely. Deepwater currents, albeit slow, would also disperse the plume of fines over much larger areas than those being mined. Moreover such turbidity pollution is likely to occur at the ocean surface as well, if the mining vessels processed the ore materials by washing nodules free of attached clay. Plumes from shipboard processing would be dispersed much further because of the greater speed of shallow currents. This would impact the upper and middling depths of the oceans, which support even more diverse and, in the case of mid-depths, poorly known ecosystems. Such plumes may settle only after decades or even centuries, if at all.
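The settling times quoted above follow from Stokes' law, v = g Δρ d² / 18μ. A quick check (taking a clay-seawater density contrast of about 1600 kg m-3 and the higher viscosity of cold deep water, ~1.8 × 10-3 Pa s; both are assumed values, not figures from the article) reproduces the ~2 μm s-1 speed and the 1.6-year fall time for the largest clay particles:

```python
def stokes_settling_speed(diameter_m, delta_rho=1600.0,
                          viscosity=1.8e-3, g=9.81):
    """Terminal settling speed (m/s) of a small sphere by Stokes' law:
        v = g * delta_rho * d**2 / (18 * mu)
    delta_rho: particle-minus-seawater density contrast (kg/m3, assumed)
    viscosity: dynamic viscosity of cold deep water (Pa s, assumed)
    """
    return g * delta_rho * diameter_m ** 2 / (18.0 * viscosity)

SECONDS_PER_YEAR = 3.156e7
v = stokes_settling_speed(2e-6)  # largest clay-sized particle, 2 um
print(f"settling speed: {v * 1e6:.1f} um/s")
print(f"time to fall 100 m: {100 / v / SECONDS_PER_YEAR:.1f} years")
```

The quadratic dependence on diameter is also why the smallest clays, at 0.06 μm, settle roughly (2/0.06)² ≈ 1,000 times more slowly.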

Processing on land, obviously, presents the same risk for near-shore waters. It may be said that such pollution could be controlled easily by settling ponds, as used in most conventional mines on land. But the ‘fines’ produced by milling hard ores are mainly silt-sized particles (2.0 to 60 μm) of waste minerals, such as quartz, whose settling speeds are proportional to the square of their diameter; thus a doubling in particle size results in four-times faster settling. The mainly clay-sized fines in deep-ocean ores would settle far more slowly, even in shallow ponds, than the rate at which they are added by ongoing ore processing; chances are, they would eventually be released either accidentally or deliberately.

A mining code is expected in 2020, in which operating licences are likely to be for 30 years. Unlike the enforced allowance of environmental restoration once a land-based mining operation is approved, the sheer scale, longevity and mobility of fine-sediment plumes seem unlikely to be resolvable, however strong such environmental-protection clauses are for mining the ocean floor.

Frack me nicely?

‘There’s a seaside place they call Blackpool that’s famous for fresh air and fun’. Well, maybe, not any more. If you, dear weekender couples, lie still after the ‘fun’ the Earth may yet move for you. Not much, I’ll admit, for British fracking regulations permit Cuadrilla, who have a drill rig at nearby Preston New Road on the Fylde coastal plain of NW England, only to trigger earthquakes with a magnitude less than 0.5 on the Richter scale. This condition was applied after early drilling by Cuadrilla had stimulated earthquakes up to magnitude 3. To the glee of anti-fracking groups the magnitude 0.5 limit has been regularly exceeded, thereby thwarting Cuadrilla’s ambitions from time to time. Leaving aside the view of professional geologists that the pickings for fracked shale gas in Britain [June 2014] are meagre, the methods deployed in hydraulic fracturing of gas-prone shales do pose seismic risks. Geology beneath the Fylde is about as simple as it gets in tectonically tortured Britain. There are no active faults, and no significant dormant ones near the surface that have moved since about 250 Ma ago; by contrast, most of Britain is riven by major fault lines, some of which are occasionally active, especially in prospective shale-gas basins near the Pennines. When petroleum companies are bent on fracking they use a drilling technology that allows one site to sink several wells that bend with depth to travel almost horizontally through the target shale rock. A water-based fluid containing a mix of polymers and surfactants to make it slick, plus fine sand or ceramic particles, is pumped at very high pressures into the rock. Joints and bedding in the shale are thus forced open and maintained in that condition by the sandy material, so that gas and even light oil can accumulate and flow up the drill stems to the surface.

Gravity signals of earthquakes

A sign that an earthquake is taking place is pretty obvious: the ground moves. Seismometers are now so sensitive that they record significant seismic events at the far side of the world. The Richter magnitude scale commonly used to assign the power of an event is logarithmic, and each unit step represents an approximately 32-fold change in the energy released at the source, so that a magnitude 6.0 earthquake is 32 times more powerful than one rated as magnitude 5.0. Because seismic motion affects a mass of rock it also perturbs the gravitational field. So, theoretically, gravimeters should also be able to detect an earthquake. Seismic waves travel at a maximum speed of about 6 to 8 km s-1, about 20 times the speed of sound, yet changes in the gravitational field propagate at the speed of light, i.e. almost instantaneously by comparison. The first ground disturbances of the magnitude 9.0 Tohoku-Oki earthquake of NE Japan on 11 March 2011 hit Tokyo about 2 minutes after the event began offshore. Although that is quite a short time it would be sufficient for people to react and significantly reduce the earthquake’s direct impact on many of them. A seismic gravity signal would give that warning. The full horror of Tohoku-Oki was unleashed by the resulting tsunami waves, whose speed in the deep ocean water off Japan was about 800 km hr-1 (0.22 km s-1). An almost real-time warning would have allowed 40 times more time for evasion.
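The arithmetic behind that factor of 40 is worth making explicit. Using the post's own figures (seismic waves at up to ~8 km s-1, tsunami waves at 0.22 km s-1, first disturbances reaching Tokyo about 2 minutes after rupture), and noting that a light-speed gravity signal is effectively instantaneous, the warning time for each hazard is essentially its full travel time over the same path:

```python
SEISMIC_SPEED = 8.0      # km/s, upper end of the quoted 6-8 km/s range
TSUNAMI_SPEED = 0.22     # km/s (800 km/h in deep water)
shaking_delay_min = 2.0  # minutes for first disturbances to reach Tokyo

# A gravity perturbation propagates at light speed, so a gravity-based
# alert arrives essentially at the moment of rupture: the warning time
# for each hazard is roughly its whole travel time.
speed_ratio = SEISMIC_SPEED / TSUNAMI_SPEED
tsunami_delay_min = shaking_delay_min * speed_ratio  # same path length
print(f"seismic/tsunami speed ratio: ~{speed_ratio:.0f}x")
print(f"implied tsunami warning time: ~{tsunami_delay_min:.0f} minutes")
```

The same-path assumption is an idealisation; actual tsunami arrival times along the Tohoku coast varied with distance from the epicentre, but the order of magnitude, tens of minutes rather than seconds, is the point.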

Devastation in NE Japan caused by the Tohoku-Oki tsunami in March 2011

Japan is particularly well endowed with advanced geophysical equipment because of its notorious seismic and volcanic hazards. The first data to be analysed after Tohoku-Oki were understandably those from Japan’s large array of seismometers. The records from two super-sensitive gravimeters, between 436 and 515 km from the epicentre, were examined only recently. These instruments, which use a superconducting sphere suspended in a magnetic field, measure variations in gravity as small as a trillionth of the average gravitational acceleration of the Earth – sensitive enough to detect snow being cleared from a roof. Masaya Kimura and colleagues from Tokyo University and other geoscientific institutes in Japan undertook the analyses of both seismic and gravity data (Kimura, M. et al. 2019. Earthquake‑induced prompt gravity signals identified in dense array data in Japan. Earth, Planets and Space, v. 71, online publication; DOI: 10.1186/s40623-019-1006-x). The gravimeter record did show a statistically significant perturbation at the actual time of the earthquake, albeit after complex processing of both gravity and seismographic data.

That only two superconducting gravimeters detected the event in real-time is quite remarkable, despite the need for a great deal of processing. It amounts to a proof of concept that such instruments, or others based on different designs and deployed more widely, may eventually give prompt warnings of seismic events and save thousands of lives.
