The oldest known impact structure (?)

That large, rocky bodies in the Solar System were heavily bombarded by asteroidal debris at the end of the Hadean Eon (between 4.1 and 3.8 billion years ago) is apparent from the ancient cratering records that they still preserve, which match the ages of impact-melt rocks on the Moon. Being a geologically dynamic planet, the Earth preserves no tangible, indisputable evidence for this Late Heavy Bombardment (LHB), and until quite recently could only be inferred to have been battered in this way. That it actually did happen emerged from a study of tungsten isotopes in early Archaean gneisses from Labrador, Canada (see: Tungsten and Archaean heavy bombardment, August 2002; and Did mantle chemistry change after the late heavy bombardment? September 2009). Because large impacts deliver such vast amounts of energy in little more than a second (see: Graveyard for asteroids and comets, Chapter 10 in Stepping Stones) they have powerful consequences for the Earth System, as witnessed by the Chicxulub impact off the Yucatán Peninsula of Mexico that resulted in a mass extinction at the end of the Cretaceous Period. That seemingly unique coincidence of a large impact with devastation of Earth’s ecosystems probably resulted from the geology beneath the impact site, dominated by thick evaporite beds of calcium sulfate whose extreme heating would have released vast amounts of SO2 to the atmosphere. Its fall-out as acid rain would have dramatically affected marine organisms with carbonate shells. Impacts on land would tend to expend most of their energy in the lithosphere, resulting in partial melting of the crust or, in the case of the largest such events, the upper mantle.

The further back in time, the greater the difficulty in recognising visible signs of impacts, because of erosion or later deformation of the lithosphere. With a single possible exception, every known terrestrial crater or structure that may plausibly be explained by impact is younger than 2.5 billion years; i.e. post-Archaean. Yet rocky bodies in the Solar System reveal that after the LHB the frequency and magnitude of impacts decreased steadily from high levels during the Archaean; there must have been impacts on Earth during that Eon, and some may have been extremely large. In the least-deformed Archaean sedimentary sequences there is indirect evidence that they did occur, in the form of spherules that represent droplets of silicate melt (see: Evidence builds for major impacts in Early Archaean; August 2002, and Impacts in the early Archaean; April 2014), some of which contain unearthly proportions of different chromium isotopes (see: Chromium isotopes and Archaean impacts; March 2003). As regards the search for very ancient impacts, rocks of Archaean age form a very small proportion of the Earth’s continental surface, the bulk having been buried beneath younger rocks. Of those that we can examine, most have been subjected to immense deformation, often repeatedly, during later times.

The Archaean geology of part of the Akia Terrane (Manitsoq area) in West Greenland. The suggested impact structure is centred on the Finnefjeld Gneiss (V symbols) surrounded by highly deformed ultramafic to mafic igneous rocks. (Credit: Jochen Kolb, Karlsruhe Institute of Technology, Germany)

There is, however, one possibly surviving impact structure from Archaean times, and oddly it was suspected in one of the most structurally complex areas on Earth: the Akia Terrane of West Greenland. Aeromagnetic surveys hint at two concentric, circular anomalies centred on a 3.0-billion-year-old zone of grey gneisses (see figure) defining a cryptic structure. It is surrounded by hugely deformed bodies of ultramafic and mafic rocks (black) and nickel mineralisation (red). In 2012 the whole complex was suggested to be a relic of a major impact of that age, the ultramafic-mafic bodies being ascribed to high degrees of impact-induced melting of the underlying mantle. The original proposers backed up their suggestion with several associated geological observations, the most crucial being supposed evidence for shock-deformation of mineral grains and anomalous concentrations of platinum-group metals (PGM).

A multinational team of geoscientists has subjected the area to detailed field surveys, radiometric dating, oxygen-isotope analysis and electron microscopy of mineral grains to test this hypothesis (Yakymchuk, C. and 8 others 2020. Stirred not shaken: critical evaluation of a proposed Archean meteorite impact in West Greenland. Earth and Planetary Science Letters, v. 557, article 116730 (advance online publication); DOI: 10.1016/j.epsl.2020.116730). Tectonic fabrics in the mafic and ultramafic rocks are clearly older than the 3.0 Ga gneisses at the centre of the structure. Electron microscopy of ~5500 zircon grains shows not a single example of the parallel twinning associated with intense shock. Oxygen isotopes in 30 zircon grains fail to confirm the original proposers’ claim that the whole area underwent hydrothermal metamorphism as a result of an impact. All that remains of the original suggestion are the nickel deposits, which do contain high PGM concentrations: not an uncommon feature of Ni mineralisation associated with mafic-ultramafic intrusions; indeed, much of the world’s supply of platinoid metals is mined from such bodies. Even if there had been an impact in the area, the three phases of later ductile deformation that account for the bizarre shapes of these igneous bodies would render it impossible to detect convincingly.

The new study convincingly refutes the original impact proposal. The title of Yakymchuk et al.’s paper aptly borrows Ian Fleming’s recipe for James Bond’s tipple of choice: multiple deformation of the deep crust does indeed stir it by ductile processes, while an impact is definitely just a big shake. For the southern part of the complex (Toqqusap Nunaa), tectonic stirring was amply demonstrated in 1957 by Asger Berthelsen of the Greenland Geological Survey (Berthelsen, A. 1957. The structural evolution of an ultra- and polymetamorphic gneiss-complex, West Greenland. Geologische Rundschau, v. 46, p. 173-185; DOI: 10.1007/BF01802892). Coming across his paper in the early 1960s, I was astonished by the complexity that Berthelsen had discovered, which convinced me to emulate his work on the Lewisian Gneiss Complex of the Inner Hebrides, Scotland. I was unable to match his efforts. The Akia Terrane has probably the most complicated geology anywhere on our planet; the original proposers of an impact there should have known better …

Origin of life: some news

For self-replicating cells to form there are two essential precursors: water and simple compounds based on the elements carbon, hydrogen, oxygen and nitrogen (CHON). Hydrogen is not a problem, being by far the most abundant element in the universe. Carbon, oxygen and nitrogen form in the cores of stars through nuclear fusion of hydrogen and helium. These elemental building blocks need to be delivered through supernova explosions, ultimately to places where water can exist in liquid form to undergo reactions that culminate in living cells. That is only possible on solid bodies that lie at just the right distance from a star to support average surface temperatures between the freezing and boiling points of water. Most important is that such a planet in the ‘Goldilocks Zone’ has sufficient mass for its gravity to retain water. Surface water evaporates to some extent to contribute vapour to the atmosphere. Exposed to ultraviolet radiation, H2O vapour dissociates into hydrogen and oxygen, which can be lost to space if a planet’s escape velocity is less than the speed of such gas molecules’ thermal motion. Such photo-dissociation and diffusion into outer space may have caused Mars to lose more of its hydrogen than its oxygen, leaving its surface dry but rich in reddish iron oxides.
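The retention criterion above can be put into rough numbers. Here is a minimal sketch, not a full Jeans-escape calculation: it compares the root-mean-square thermal speed of molecular hydrogen, sqrt(3kT/m), with planetary escape velocities, sqrt(2GM/r). The 250 K upper-atmosphere temperature is an illustrative assumption, and a common rule of thumb is that a gas survives over geological time only when escape velocity exceeds its thermal speed roughly six times over.

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_H2 = 2 * 1.6735e-27  # mass of an H2 molecule, kg

def thermal_speed(temp_k, molecule_mass_kg):
    """Root-mean-square thermal speed, sqrt(3kT/m), in m/s."""
    return math.sqrt(3 * K_B * temp_k / molecule_mass_kg)

def escape_velocity(planet_mass_kg, radius_m):
    """Escape velocity, sqrt(2GM/r), in m/s."""
    return math.sqrt(2 * G * planet_mass_kg / radius_m)

# Earth vs Mars, with an assumed 250 K upper-atmosphere temperature
v_h2 = thermal_speed(250.0, M_H2)            # ~1.8 km/s
v_earth = escape_velocity(5.97e24, 6.371e6)  # ~11.2 km/s
v_mars = escape_velocity(6.42e23, 3.390e6)   # ~5.0 km/s
print(v_earth / v_h2, v_mars / v_h2)
```

On these rough numbers Earth’s escape velocity is just over six times the thermal speed of H2 while Mars’s is only about three times, consistent with Mars shedding its hydrogen far more readily.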

Despite liquid water being essential for the origin of planetary life, it is a mixed blessing for key molecules that support biology. This ‘water paradox’ stems from water molecules attacking and breaking the chemical connections that string together the complex chains of proteins and nucleic acids (RNA and DNA). Living cells resolve the paradox by limiting the circulation of liquid water within them: rather than being the bags of water commonly imagined, they are largely filled with a gel that holds the key molecules together. The ‘bag of water’ notion stemmed from the idea of a ‘primordial soup’, popularised by Darwin and his early followers, supposedly preserved in cells’ cytoplasm. That is now known to be wrong and, in any case, the chemistry simply would not work, either in a ‘warm, little pond’ or close to a deep-sea hydrothermal vent, because the molecular chains would be broken as soon as they formed. Modern evolutionary biochemists suggest that much of the chemistry leading to living cells must have taken place in environments that were sometimes dry and sometimes wet: ephemeral puddles on land. Science journalist Michael Marshall has just published an easily read, open-access essay on this vexing yet vital issue in Nature (Marshall, M. 2020. The Water Paradox and the Origins of Life. Nature, v. 588, p. 210-213; DOI: 10.1038/d41586-020-03461-4). If you are interested, click on the link to read Marshall’s account of current origins-of-life research into the role of endlessly repeated wet-dry cycles on the early Earth’s surface. It is fascinating reading, as the experiments take the matter far beyond the spontaneous formation of the amino acid glycine found by Stanley Miller when he passed sparks through methane, ammonia and hydrogen in his famous 1953 experiment at the University of Chicago. Marshall was spurred to write in advance of NASA’s Perseverance mission landing on Mars in February 2021.
The Perseverance rover aims to test the new hypotheses in a series of lake sediments that appear to have been deposited by wet-dry cycles in a small Martian impact crater (Jezero Crater) early in the planet’s history, when surface water was present.

Crystals of hexamethylenetetramine (Credit: r/chemistry, Reddit)

That CHON and simple compounds made from them are plentiful in interstellar gas and dust clouds has been known since the development of means of analysing the light spectra from them. The organic chemistry of carbonaceous meteorites is also well known; they even smell of hydrocarbons. Accretion of these primitive materials during planet formation is fine as far as providing feedstock for life-forming processes on physically suitable planets. But how did CHON get from giant molecular clouds into such planetesimals? An odd-sounding organic compound – hexamethylenetetramine ((CH2)6N4), or HMT – formed industrially by combining formaldehyde (CH2O) and ammonia (NH3), was initially synthesised in the late 19th century as an antiseptic to tackle urinary tract infections and is now used as a solid fuel for lightweight camping stoves, among much else. HMT has a potentially interesting role to play in the origin of life. Experiments aimed at investigating what happens when starlight and thermal radiation pervade interstellar gas clouds and interact with simple CHON molecules, such as ammonia, formaldehyde, methanol and water, yielded up to 60% by mass of HMT.

The structure of HMT is a sort of cage, so that its crystals form large fluffy aggregates, unlike the gases from which it forms in deep space. Together with interstellar silicate dusts, such sail-like structures could accrete into planetesimals in nebular star nurseries under the influence of gravity and light pressure. Geochemists from several Japanese institutions and NASA have, for the first time, found HMT in three carbonaceous chondrites, albeit at very low concentrations of parts per billion (Oba, Y. et al. 2020. Extraterrestrial hexamethylenetetramine in meteorites – a precursor of prebiotic chemistry in the inner Solar System. Nature Communications, v. 11, article 6243; DOI: 10.1038/s41467-020-20038-x). Once concentrated in planetesimals – the parents of meteorites when they are smashed by collisions – HMT can perform the useful chemical ‘trick’ of breaking down once again to very simple CHON compounds when warmed. At close quarters such organic precursors can engage in polymerising reactions whose end products could be the far more complex sugars and amino-acid chains that are the characteristic CHON compounds of carbonaceous chondrites. Yasuhiro Oba and colleagues may have found the missing link between interstellar space, planet formation and the synthesis of life through the mechanisms that resolve the ‘water paradox’ outlined by Michael Marshall.

See also: Scientists Find Precursor of Prebiotic Chemistry in Three Meteorites (Sci-news, 8 December 2020.)

 

Supernova at the start of the Pleistocene

This brief note takes up a thread begun in Can a supernova affect the Earth System? (August 2020). In February 2020 the brightness of Betelgeuse – the prominent red star at the top-left of the constellation Orion – dropped in a dramatic fashion. This led to media speculation that it was about to ‘go supernova’, but with the rise of COVID-19 beginning then, that seemed the least of our worries. In fact, astronomers already knew that the red star had dimmed many times before, on a roughly 6.4-year time scale. Betelgeuse is a variable star and by March 2020 it brightened once again: shock-horror over; back to the latter-day plague.

When stars more than ten times the mass of the Sun run out of fuel for the nuclear fusion that keeps them ‘inflated’, they collapse. The vast gravitational potential energy released by the collapse triggers a supernova and is sufficient to form all manner of exotic heavy isotopes by nucleosynthesis. Such an event radiates highly energetic, damaging gamma radiation, and flings off dust charged with a soup of exotic isotopes at very high speeds. The energy released could sum to the entire amount of light that our Sun has shone since it formed 4.6 billion years ago. If close enough, the dual ‘blast’ could have severe effects on Earth, and has been suggested as a cause of the mass extinction at the end of the Ordovician Period.

Betelgeuse is about 700 light years away, massive enough to become a future supernova, and its rapid consumption of nuclear fuel – it is only about 10 million years old – suggests it will do so within the next hundred thousand years. Nobody knows how close such an event needs to be to wreak havoc on the Earth system, so it is as well to check whether there is evidence for such linked perturbations in the geological record. The isotope 60Fe occurs in manganese-rich crusts and nodules on the floor of the Pacific Ocean, and also in some rocks from the Moon. It is radioactive with a half-life of about 2.6 million years, so it soon decays away and cannot have been part of the original geochemistry of the Earth or the Moon. Its presence may record the accretion of debris from supernovas in the geologically recent past: possibly 20 in the last 10 Ma, yet with no obvious accompanying extinctions. That isotope of iron may, however, also be produced by less spectacular stellar processes, so it may not be a useful guide.
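The ‘soon decays away’ point is easily quantified: after n half-lives only (1/2)^n of an isotope survives. A quick sketch using the 2.6-million-year half-life quoted above:

```python
def surviving_fraction(t_myr, half_life_myr=2.6):
    """Fraction of a radioactive isotope remaining after t_myr million years."""
    return 0.5 ** (t_myr / half_life_myr)

# After ten half-lives (~26 Myr) less than a thousandth of any 60Fe remains,
# so none can be left over from Earth's formation some 4500 Myr ago.
print(surviving_fraction(26))    # ~0.001
print(surviving_fraction(4500))  # vanishingly small
```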

There is, however, another short-lived radioactive isotope, of manganese (53Mn), which can only form under supernova conditions. It has been found in ocean-floor manganese-rich crusts by a German-Argentinian team of physicists (Korschinek, G. et al. 2020. Supernova-produced 53Mn on Earth. Physical Review Letters, v. 125, article 031101; DOI: 10.1103/PhysRevLett.125.031101). They dated the crusts using 10Be, another short-lived cosmogenic isotope, produced when cosmic rays transform the atomic nuclei of oxygen and nitrogen; it ended up in the manganese-rich crusts along with any supernova-produced 53Mn and 60Fe. Both isotopes were detected in parts of four crusts widely separated on the Pacific Ocean floor. Their relative proportions matched those predicted for nucleosynthesis in supernovas, so the team considers their joint presence to be a ‘smoking gun’ for such an event.

The 10Be in the supernova-affected parts of the crusts yielded an age of 2.58 ± 0.43 million years, which marks the start of the Pleistocene Epoch, the onset of glacial cycles in the Northern Hemisphere and the time of the earliest known members of the genus Homo. A remarkable coincidence? Possibly. Yet cosmic rays, many of which come from supernova relics, have been cited as a significant source of nucleation sites for cloud condensation, and clouds increase the planet’s reflectivity and thus act to cool it. This has been a contentious issue in the debate about modern climate change, with some researchers refuting the significance of cosmic rays on the grounds that cloud-cover data do not correlate with changes in the cosmic-ray flux over the last century. Moreover, in the five millennia of recorded history there is no account of a supernova bright enough to bathe the night sky in light akin to that of daytime, which may be the signature of one capable of affecting the Earth system: thousands of events that warranted being dubbed a ‘very large new star’ are recorded, but none that ‘turned night into day’. The hypothesis seems to have ‘legs’, but so too do others, such as the slow influence on oceanic circulation of the formation of the Isthmus of Panama and other more parochial mechanisms for changing the transfer of energy around our planet.

See also: Stellar explosion in Earth’s proximity, eons ago. (Science Daily; 30 September 2020.)

Photosynthesis, arsenic and a window on the Archaean world

At the very base of the biological pyramid, life is far simpler than that which we can see. It takes the form of single cells that lack a nucleus and propagate only by cloning: the prokaryotes, as opposed to eukaryote life such as ourselves. It is almost certain that the first viable life on Earth was prokaryotic, though which of its two fundamental divisions – Archaea or Bacteria – came first is still debated. At present, most prokaryotes metabolise other organisms’ waste or dead remains: they are heterotrophs (from the Greek for ‘other nutrition’). But there are others that are primary producers, getting their nutrition by themselves by exploiting the inorganic world in a variety of ways: the autotrophs. Biogeochemical evidence from the earliest sedimentary rocks suggests that, in the Archaean, prokaryotic autotrophs were dominant, mainly exploiting chemical reactions to gain the energy necessary for building carbohydrates. Some reduced sulfate ions to sulfide ions; others combined hydrogen with carbon dioxide to generate methane as a by-product. Sunlight being an abundant energy resource in near-surface water, a whole range of prokaryotes exploit its potential through photosynthesis. Under reducing conditions some photosynthesisers convert sulfur to sulfate, yet others combine photosynthesis with chemo-autotrophy. Dissolved materials capable of donating electrons – i.e. reducing agents – are exploited in photosynthesis: hydrogen, ferrous iron (Fe2+), reduced sulfur, nitrite, or some organic molecules. Without one group, which uses photosynthesis to convert CO2 and water to carbohydrates and oxygen, eukaryotes would never have arisen, for they depend on free oxygen. A transformation 2400 Ma ago, known as the Great Oxidation Event (GOE), marked the point in Earth history when oxygen first entered the atmosphere and shallow water (see: Massive event in the Precambrian carbon cycle; January, 2012).
It has been shown that the most likely sources of that excess oxygen were extensive bacterial mats in shallow water, made of photosynthesising blue-green bacteria that produced the distinctive carbonate structures known as stromatolites. Stromatolites had been forming in Archaean sedimentary basins for 1.9 billion years, and it has generally been assumed that blue-green bacteria formed them too, before the oxygen that they produced overcame the reducing conditions that had generally prevailed before the GOE. But that may not have been the case …

Microbial mats made by purple sulfur bacteria in highly toxic spring water flowing into a salt-lake in northern Chile. (credit: Visscher et al. 2020; Fig 1c)

Prokaryotes are a versatile group, and new types keep turning up as researchers explore all kinds of strange and extreme environments, for instance hot springs, groundwater from kilometres below the surface and highly toxic waters. A recent surprise arose from the study of anoxic springs, laden with dissolved salts, sulfide ions and arsenic, that feed parts of hypersaline lakes in northern Chile (Visscher, P.T. and 14 others 2020. Modern arsenotrophic microbial mats provide an analogue for life in the anoxic Archean. Communications Earth & Environment, v. 1, article 24; DOI: 10.1038/s43247-020-00025-2). This is a decidedly extreme environment for life as we know it, made more challenging by its high-altitude exposure to intense UV radiation. The springs’ beds are covered with bright-purple microbial mats. Interestingly, the water’s arsenic concentration varies from high in winter to low in summer, suggesting that some process removes it, along with sulfur, according to light levels: almost certainly the growth and dormancy of mat-forming bacteria. Arsenic is an electron donor capable of participating in photosynthesis that does not produce oxygen. The microbial mats produce no oxygen whatsoever – uniquely for the modern Earth – but they do form carbonate crusts that look like stromatolites. The mats contain purple sulfur bacteria (PSBs): anaerobic photosynthesisers that use sulfur, hydrogen and Fe2+ as electron donors. The seasonal changes in arsenic concentration match similar shifts in sulfur, suggesting that arsenic is also being used by the PSBs. Indeed it can be, for the aio gene, which encodes the capacity to exploit arsenic in this way, is present in the genome of PSBs.

Pieter Visscher and his multinational co-authors argue that prokaryotes similar to modern PSBs played a role in creating the stromatolites found in Archaean sedimentary rocks. The oxygen-poor Archaean atmosphere would have contained no ozone, so high-energy UV would have bathed the Earth’s surface and penetrated its oceans to a considerable depth. Moreover, arsenic is today removed from most surface water by adsorption on iron hydroxides, a product of modern oxidising conditions (see: Arsenic hazard on a global scale; May 2020): it would have been more abundant before the GOE. So the Atacama springs may be an appropriate micro-analogue for Archaean conditions, a hypothesis that the authors address with reference to the geochemistry of sedimentary rocks in Western Australia deposited in a late-Archaean evaporating lake. Stromatolites in the Tumbiana Formation show, according to the authors, definite evidence for sulfur and arsenic cycling similar to that in the Atacama springs. They also suggest that photosynthesising blue-green bacteria (cyanobacteria) may not have been viable under such Archaean conditions, while microbes with metabolisms similar to those of PSBs probably were. The eventual appearance and rise of oxygen once cyanobacteria did evolve, perhaps in the late Archaean, left PSBs and most other anaerobic microbes, to which oxygen spells death, as a minority faction trapped in what became ‘extreme’ environments, though long before that they had ‘ruled the roost’. It raises the question, ‘What if cyanobacteria had not evolved?’. A trite answer would be, ‘I would not be writing this and nor would you be reading it!’. But it is a question that can properly be applied to the issue of alien life beyond Earth, perhaps on Mars. Currently, attempts are being made to detect oxygen in the atmospheres of exoplanets orbiting other stars, as a ‘sure sign’ that life evolved and thrived there too.
That may be a fruitless venture, because life happily thrived during Earth’s Archaean Eon until its closing episodes without producing a whiff of oxygen.

See also: Living in an anoxic world: Microbes using arsenic are a link to early life. (Science Daily, 22 September 2020)

Centenary of the Milanković Theory

A letter in the latest issue of Nature Geoscience (Cvijanovic, I. et al. 2020. One hundred years of Milanković cycles. Nature Geoscience, v. 13, p. 524–525; DOI: 10.1038/s41561-020-0621-2) reveals the background to Milutin Milanković’s celebrated work on the astronomical driver of climate cyclicity. Although a citizen of Serbia, he had been born at Dalj, a Serbian enclave in what was then Austro-Hungary. Just before the outbreak of World War I in 1914, he returned to his native village to honeymoon with his new bride. The assassination (28 June 1914) in Sarajevo of Archduke Franz Ferdinand by Bosnian-Serb nationalist Gavrilo Princip prompted the Austro-Hungarian authorities to imprison Serbian nationals, and Milanković was interned in a PoW camp. Fortunately, his wife and a former Hungarian colleague managed to negotiate his release, on condition that he served his captivity in Budapest, with a right to work but under police surveillance. It was under these testing conditions that he wrote his seminal Mathematical Theory of Heat Phenomena Produced by Solar Radiation; finished in 1917, it remained unpublished until 1920 because of a wartime shortage of paper.

Curiously, Milanković was a graduate in civil engineering — parallels here with Alfred Wegener of Pangaea fame, who was a meteorologist — and practised in Austria. Appointed to a professorship in Belgrade in 1909, he had to choose a field of research. To insulate himself from the rampant scientific competitiveness of that era, he chose a blend of mathematics and astronomy to address climate change. During his period as a political prisoner Milanković became the first to explain how the full set of cyclic variations in Earth’s orbit — eccentricity, obliquity and precession — causes distinct variations in incoming solar radiation at different latitudes that change on multi-thousand-year timescales. The gist of what might lie behind the cyclicity of ice ages had first been proposed by the Scottish scientist James Croll almost half a century earlier, but it was Milutin Milanković who, as it were, put the icing on the cake. What is properly known as the Milanković-Croll Theory triumphed in the late 1970s as palaeoclimatology’s equivalent of plate tectonics, after Nicholas Shackleton and colleagues teased out the predicted astronomical signals from time series of oxygen isotope variations in marine-sediment cores.
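As a purely illustrative toy (not Milanković’s actual insolation equations, which depend on latitude and the detailed orbital solution), the three cycles can be caricatured as superposed sinusoids with their commonly quoted periods; the amplitudes here are arbitrary:

```python
import math

# Commonly quoted orbital periods, in thousands of years (kyr)
PERIODS_KYR = {"eccentricity": 100.0, "obliquity": 41.0, "precession": 23.0}

def toy_forcing(t_kyr, amplitudes=(1.0, 0.55, 0.3)):
    """Superpose the three orbital cycles as sinusoids with arbitrary
    amplitudes: a caricature of an insolation time series."""
    return sum(a * math.sin(2 * math.pi * t_kyr / p)
               for a, p in zip(amplitudes, PERIODS_KYR.values()))

# An 800-kyr synthetic series: the kind of quasi-periodic signal that
# Shackleton and colleagues sought in oxygen-isotope records
series = [toy_forcing(t) for t in range(801)]
print(max(series), min(series))
```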

Appropriately, while Milanković’s revolutionary ideas lacked corroborating geological evidence, one of the first to spring to his support was that other resilient scientific ‘prophet’, Alfred Wegener. Neither of them lived to witness their vindication.

Kicking-off planetary Snowball conditions

Artist’s impression of the glacial maximum of a Snowball Earth event (Source: NASA)

Twice in the Cryogenian Period of the Neoproterozoic, glacial and sea ice extended from both poles to the Equator, giving ‘Snowball Earth’ conditions. Notable glacial climates in the Phanerozoic – Ordovician, Carboniferous-Permian and Pleistocene – were long-lived but restricted to areas around the poles, so do not qualify as Snowball Earth conditions. It is possible, but less certain, that Snowball Earth conditions also prevailed during the Palaeoproterozoic at around 2.4 to 2.1 billion years ago. This earlier episode roughly coincided with the ‘Great Oxidation Event’, and one explanation for it is that the rise of atmospheric oxygen removed methane, a more powerful greenhouse gas than carbon dioxide, by oxidising it to CO2 and water. That may well have been a consequence of the evolution of the cyanobacteria, their photosynthesis releasing oxygen to the atmosphere. The Neoproterozoic ‘big freezes’ are associated with rapid changes in the biosphere, most importantly the rise of metazoan life in the form of the Ediacaran fauna, the precursor to the explosion in animal diversity during the Cambrian. Indeed all major coolings, regional as well as global, find echoes in the course of biological evolution. Another interwoven factor is the rock cycle, particularly volcanism and the varying pace of chemical weathering. The first releases CO2 from the mantle; the second helps draw it down from the atmosphere when weak carbonic acid in rainwater rots silicate minerals (see: Can rock weathering halt global warming? July 2020). All such interplays between major and sometimes minor ‘actors’ in the Earth system influence climate and, in turn, climate inevitably affects all the rest. With such complexity it is hardly surprising that there is a plethora of theories about past climate shifts.

As well as a link with fluctuations in the greenhouse effect, climate is influenced by changes in the amount of solar heating, for which there are yet more options to consider. For instance, the increase in Earth’s albedo (reflectivity) that results from ice cover, may lead through a feedback effect to runaway cooling, particularly once ice extends beyond the poorly illuminated poles. Volcanic dust and sulfate aerosols in the stratosphere also increase albedo and the tendency to cooling, as would interplanetary dust. More complexity to befuddle would-be modellers of ancient climates. Yet it is safe to say that, within the maelstrom of contributory factors, the freeze-overs of Snowball conditions must have resulted from our planet passing through some kind of threshold in the Earth System. Two theoretical scientists from the Department of Earth, Atmospheric, and Planetary Sciences at the Massachusetts Institute of Technology have attempted to cut through the log-jam by modelling the dynamics of the interplay between the ice-albedo feedback and the carbon-silicate cycle of weathering (Arnscheidt, C.W. & Rothman, D.H. 2020. Routes to global glaciation. Proceedings of the Royal Society A, v. 476, article 0303 online; DOI: 10.1098/rspa.2020.0303). Their mathematical approach involves two relatively simple, if long-winded, equations based on parameters that express solar heating, albedo, surface temperature and pressure, and the rate of volcanic outgassing of CO2; a simplification that sets biological processes to one side.
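The interplay the model captures can be illustrated, though not reproduced, with a textbook zero-dimensional energy-balance toy: absorbed sunlight (S/4)(1 − albedo) balances outgoing thermal radiation, with albedo rising sharply as the planet ices over. This is emphatically not Arnscheidt and Rothman’s pair of equations (it omits the carbon-silicate cycle and the rate-dependence that is their key result), and the greenhouse factor and albedo thresholds are crude assumptions; it shows only the underlying ice-albedo runaway:

```python
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
EMISSIVITY = 0.61  # crude greenhouse factor (assumed, tuned to give ~288 K)

def albedo(temp_k):
    """Albedo ramps from ice-free (0.3) to ice-covered (0.6) as T falls;
    the 250-280 K thresholds are illustrative assumptions."""
    if temp_k >= 280.0:
        return 0.3
    if temp_k <= 250.0:
        return 0.6
    return 0.6 - 0.3 * (temp_k - 250.0) / 30.0

def equilibrium_temp(solar_constant, t_start=288.0, n_iter=500):
    """Fixed-point iteration on EMISSIVITY*sigma*T^4 = (S/4)*(1 - albedo(T))."""
    t = t_start
    for _ in range(n_iter):
        absorbed = (solar_constant / 4.0) * (1.0 - albedo(t))
        t = (absorbed / (EMISSIVITY * SIGMA)) ** 0.25
    return t

# A modest-looking cut in solar input tips the toy planet from a warm,
# low-albedo state into a frozen, high-albedo one: the runaway feedback.
print(equilibrium_temp(1361.0))         # warm branch, ~288 K
print(equilibrium_temp(1361.0 * 0.85))  # ice-albedo runaway, ~240 K
```

In this toy a 15% cut is needed to cross the threshold; the paper’s point is that with the carbon-silicate cycle included, and rates of change considered, a far smaller but sufficiently rapid decrease can do the job.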

Unlike previous models, theirs can simulate varying rates of change, particularly in solar energy input. The key conclusion of the paper is that if solar heating decreases faster than a critical rate, a planet’s surface water is likely to freeze from pole to pole. The authors suggest that a Snowball Earth event would result from a 2% fall in received solar radiation over about ten thousand years: pretty quick in a geological sense. Such a trigger might stem from a volcanic ‘winter’, an increase in clouds seeded by the spores of primitive marine algae, or other factors. The real ‘tipping point’ would probably be the high albedo of ice. There is a warning in this for the present, when a variety of means of decreasing solar input have been proposed as a ‘solution’ to global warming.

Because the Earth orbits the Sun in the ‘Goldilocks Zone’ and is volcanically active, even global glaciation would be temporary, albeit of the order of millions of years. The cold would have shut down weathering, so that volcanic CO2 could slowly build up in the atmosphere: the greenhouse effect would rescue the planet. Further from the Sun, a planet would not have that escape route, regardless of its atmospheric concentration of greenhouse gases: a neat lead-in to another recent paper about the ancient climate of Mars (Grau Galofre, A. et al. 2020. Valley formation on early Mars by subglacial and fluvial erosion. Nature Geoscience, early online article; DOI: 10.1038/s41561-020-0618-x).

A Martian channel system: note later cratering (credit: European Space Agency)

There is a lot of evidence from both high-resolution orbital images of the Martian surface and surface ‘rovers’ that surface water was abundant over a long period in Mars’s early history. The most convincing is provided by networks of channels, mainly in the southern hemisphere highlands. They are not the vast channelled scablands, such as those associated with Valles Marineris, which probably resulted from stupendous outburst floods connected to catastrophic melting of subsurface ice by some means. There are hundreds of channel networks that resemble counterparts on Earth. Since rainfall and melting of ice and snow have carved most terrestrial channel networks, those on Mars have traditionally been attributed to similar processes during an early warm and wet phase. The warm-early Mars hypothesis extends even to interpreting the smooth low-lying plains of its northern hemisphere – about a third of Mars’s surface area – as the site of an ocean in those ancient times. Of course, a big question is, ‘Where did all that water go?’ Another relates to the fact that the Sun emitted considerably less radiation 4.5 billion years ago than it does now: a warm-wet early Mars is counterintuitive.

Anna Grau Galofre of the University of British Columbia and co-authors found that many of the networks on Mars clearly differ in morphology from one another, even in small areas of its surface. Drainage networks on Earth conform to far fewer morphological types. By comparing the variability on Mars with channel-network shapes on Earth, the authors found a close match for many with those that formed beneath the ice sheet that covered high latitudes of North America during the last glaciation. Some match drainage patterns typical of surface-water erosion, but both types are present in low Martian latitudes: a suggestion of ‘Snowball Mars’ conditions? The authors reached their conclusions by analysing six mathematical measures that describe channel morphology for over ten thousand individual valley systems. Previous analyses of individual systems discovered on high-resolution images had relied on qualitative comparisons with terrestrial geomorphology.
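The flavour of such a quantitative comparison can be sketched as a nearest-centroid classification in a space of morphometric measures. The class names, feature choices and numbers below are invented for illustration only; they are not the six metrics the authors actually computed.

```python
import math

# Hypothetical centroids in a (branching ratio, junction angle in degrees,
# channel-width variability) feature space -- invented values, not real data.
TERRESTRIAL_CLASSES = {
    "rainfall-fed fluvial": (4.5, 60.0, 0.2),
    "subglacial channel":   (2.0, 90.0, 0.6),
    "groundwater sapping":  (3.0, 75.0, 0.4),
}

def classify(network_features):
    """Assign a channel network to the terrestrial class with the nearest centroid."""
    return min(
        TERRESTRIAL_CLASSES,
        key=lambda name: math.dist(network_features, TERRESTRIAL_CLASSES[name]),
    )

martian_network = (2.2, 88.0, 0.55)  # invented measurements for one Martian valley system
print(classify(martian_network))
```

In practice the features would need normalising to a common scale before computing distances, so that no single measure (here the junction angle) dominates the comparison.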

See also: Chu, J. 2020. “Snowball Earths” May Have Been Triggered by a Plunge in Incoming Sunlight – “Be Wary of Speed” (SciTech Daily 29 July 2020); Early Mars was covered in ice sheets, not flowing rivers, researchers say (Science Daily, 3 August 2020)

Earliest plate tectonics tied down?

Papers that ponder the question of when plate tectonics first powered the engine of internal geological processes are sure to get read: tectonics lies at the heart of Earth science. Opinion has swung back and forth from ‘sometime in the Proterozoic’ to ‘since the very birth of the Earth’, which is no surprise. There are simply no rocks that formed during the Hadean Eon with any greater extent than 20 km2. Those occur in the 4.2 billion year (Ga) old Nuvvuagittuq greenstone belt on Hudson Bay, and they have been grossly mangled by later events. But there are grains of the sturdy mineral zircon (ZrSiO4) that occur in much younger sedimentary rocks, famously from the Jack Hills of Western Australia, whose ages range back to 4.4 Ga, based on uranium-lead radiometric dating. You can buy zircons from Jack Hills on eBay, thanks to a cottage industry that sprang up following news of their great antiquity: that is, if you do a lot of mineral separation from the dust and rock chips that are on offer, for the grains are very small. Given a SHRIMP ion-microprobe mass spectrometer and a lot of other preparation kit, you could date them. Having gone to that expense, you might as well analyse them chemically using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) to check their trace-element contents. Geochemist Simon Turner of Macquarie University in Sydney, Australia, and colleagues from Curtin University in Western Australia and the Geowissenschaftliches Zentrum Göttingen in Germany have done all this for 32 newly extracted Jack Hills zircons, whose ages range from 4.3 to 3.3 Ga (Turner, S. et al. 2020. An andesitic source for Jack Hills zircon supports onset of plate tectonics in the Hadean. Nature Communications, v. 11, article 1241; DOI: 10.1038/s41467-020-14857-1).
Then they applied sophisticated geochemical modelling to tease out what kinds of Hadean rock once hosted these grains that were eventually eroded out and transported to come to rest in a much younger sedimentary rock.

Artist’s impression of the old-style hellish Hadean (Credit : Dan Durday, Southwest Research Institute)

Zircons only form during the crystallisation of igneous magmas, at around 700°C, the original magma having formed under somewhat hotter conditions – up to 1200°C for mafic compositions. In the course of their crystallising, minerals take in not only the elements of which they are mainly composed (zirconium, silicon and oxygen in the case of zircon) but many other elements that the magma contains in low concentrations. The relative proportions of these trace elements partitioned from the magma into the growing mineral grains are characteristic of each mineral, but also depend on the particular composition of the magma itself. So the proportions of trace elements in the mineral give a clue to the original bulk composition of the parent magma. The Jack Hills zircons mainly reflect an origin in magmas of andesitic composition, intermediate between high-silica granites and basalts with lower silica contents. Andesitic magmas only form today by partial melting of more mafic rocks under the influence of water-rich fluid driven upwards from subducting oceanic lithosphere. The proportions of trace elements in the zircons could only have formed in this way, according to the authors.
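The inversion from mineral back to melt rests on the partition coefficient, D = C_mineral / C_melt, for each trace element: divide a measured grain concentration by D and you estimate the melt concentration. A minimal sketch of that arithmetic, with the D values and zircon analyses below invented for illustration rather than taken from the paper:

```python
# Hypothetical zircon/melt partition coefficients -- placeholders, not real values.
PARTITION_COEFFS = {"Yb": 280.0, "U": 100.0, "Ti": 0.01}

def melt_concentration(grain_ppm, element):
    """Invert C_zircon = D * C_melt to estimate the melt concentration (ppm)."""
    return grain_ppm / PARTITION_COEFFS[element]

measured = {"Yb": 56.0, "U": 150.0, "Ti": 0.05}  # invented zircon analyses, ppm
melt = {el: melt_concentration(ppm, el) for el, ppm in measured.items()}
print(melt)  # e.g. 56 ppm Yb in the grain implies ~0.2 ppm Yb in the melt
```

Real studies apply this element by element, with experimentally calibrated D values, and then compare the reconstructed melt pattern against candidate magma types such as andesite.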

Interestingly, the 4.2 Ga Nuvvuagittuq greenstone belt contains metamorphosed mafic andesites, though any zircons in them have yet to be analysed in the manner used by Turner et al., even though zircons were used to date those late-Hadean rocks. The deep post-Archaean continental crust, broadly speaking, has an andesitic composition, strongly suggesting its generation above subduction zones. Yet that portion of Archaean age is not andesitic on average, but a mixture of three geochemically different rocks; it is referred to as TTG crust from those rock types (trondhjemite, tonalite and granodiorite). The TTG nature of the most ancient continental crust has encouraged most geochemists to reject the idea of magmatic activity controlled by plate tectonics during the Archaean and, by extension, during the preceding Hadean. What is truly remarkable is that if mafic andesites – such as those implied by the Jack Hills zircons and found in the Nuvvuagittuq greenstone belt – partially melted under pressures high enough to form garnet, they would have yielded magmas of TTG composition. This, it seems, puts plate tectonics in the frame for the whole of Earth’s evolution since it stabilised several million years after the catastrophic collision that flung off the Moon and completely melted the outer layers of our planet. Up to now, controversy about what kind of planet-wide processes operated then has swung this way and that, often into quite strange scenarios. Turner and colleagues may have opened a new, hopefully more unified, episode of geochemical studies that revisit the early Earth. It could complement the work described in An Early Archaean Waterworld, published on Earth-logs earlier in March 2020.

An Early Archaean Waterworld

In Earth-logs you may have come across the uses of oxygen isotopes, mainly in connection with their variations in the fossils of marine organisms and in ice cores. The relative proportion of the ‘heavy’ 18O isotope to the ‘light’ 16O, expressed by δ18O, is a measure of the degree of fractionation between these isotopes when water evaporates under different temperature conditions. What happens is that H216O, which contains the lighter isotope, evaporates slightly more easily, thus enriching the remaining liquid water in H218O. As a result, the greater the temperature of surface water and the more of it that evaporates, the higher its δ18O value. Shells secreted by planktonic (surface-dwelling) organisms are made mainly of the mineral calcite (CaCO3). Their formation involves extracting dissolved calcium ions and CO2 plus an extra oxygen from the water itself, as calcite’s formula suggests. So plankton shells fossilised in ocean-floor sediments carry the δ18O, and thus a temperature signal, of surface water at the place and time in which they lived. Yet this signal is contaminated with another: that of the amount of water evaporated from the ocean surface (with lowered δ18O) that has ended up falling as snow and becoming trapped in continental ice sheets. The two can be separated using the δ18O found in shells of bottom-dwelling (benthonic) organisms, because deep ocean water maintains a similar low temperature (about 2°C) at all times. Benthonic δ18O is the main guide to the changing volume of continental ice throughout the last 30 million years or so. This ingenious approach, developed about 50 years ago, has become the key to understanding past climate changes as reflected in records of ice volume and ocean surface temperature. Yet these two factors are not the only ones at work on marine oxygen isotopes.
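The δ18O notation itself is simple arithmetic: the per-mil deviation of a sample’s 18O/16O ratio from a reference standard (VSMOW for waters). A minimal sketch, with the sample ratios invented for illustration:

```python
# 18O/16O ratio of Vienna Standard Mean Ocean Water (VSMOW).
R_VSMOW = 0.0020052

def delta18O(r_sample, r_standard=R_VSMOW):
    """Per-mil deviation of a sample's 18O/16O ratio from the standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Evaporation preferentially removes H2(16)O, so residual seawater ends up with
# a slightly higher ratio than the vapour that left it (illustrative ratios):
seawater = delta18O(R_VSMOW * 1.001)  # ratio 0.1% above the standard -> +1.0 per mil
vapour = delta18O(R_VSMOW * 0.990)    # ratio 1% below the standard -> -10.0 per mil
print(seawater, vapour)
```

Positive values mean 18O-enriched relative to the standard, negative values depleted; ice sheets built from 18O-depleted vapour are what drag benthonic δ18O records around as ice volume changes.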

Artistic impression of the Early Archaean Earth dominated by oceans (Credit: Sci-news.com)

When rainwater flows across the land, clays in soil formed by weathering of crystalline rocks preferentially extract 18O and thus leave their own δ18O mark in ocean water. This has little, if any, effect on the use of δ18O to track past climate change, simply because the extent of the continents hasn’t changed much over the last 2 billion years or so. Likewise, the geological record over that period clearly indicates that rain, wet soil and water flowing across the land have all continued somewhere or other, irrespective of climate. However, one of the thorny issues in Earth science concerns changes in the area of the continents over the very long term: such changes are suspected but difficult to tie down. Benjamin Johnson of the University of Colorado and Boswell Wing of Iowa State University, USA, have closely examined oxygen isotopes in 3.24-billion-year-old rocks from a relic of Palaeoarchaean ocean crust in the Pilbara district of Western Australia that shows pervasive evidence of alteration by hot circulating ocean water (Johnson, B.W. & Wing, B.A. 2020. Limited Archaean continental emergence reflected in an early Archaean 18O-enriched ocean. Nature Geoscience, v. 13, p. 243-248; DOI: 10.1038/s41561-020-0538-9). Interestingly, apart from the composition of the lavas, the altered rocks look just the same as much more recent examples of such ophiolites.

The study used many samples taken from the base to the top of the ophiolite along some 20 traverses across its outcrop. Overall, the isotopic analyses suggest that the circulating water responsible for the hydrothermal alteration 3.2 Ga ago was much more enriched in 18O than is modern ocean water. The authors’ favoured explanation is that much less continental crust was exposed above sea level during the Palaeoarchaean Era than in later times, so far less clay was around on land. That does not necessarily imply that less continental crust existed at that time than during the following 700 Ma of the Archaean, merely that the continental ‘freeboard’ was so low that only a few islands emerged above the waves. By the end of the Archaean, 2.5 Ga ago, the authors estimate that oceanic δ18O had decreased to approximately modern levels. This they attribute to a steady increase in the weathering of emerging continental landmasses and the extraction of 18O into new, clay-rich soils as the continents rose above sea level. How this scenario of a ‘drowned’ world developed is not discussed. One possibility is that the average depth of the oceans was then considerably less than in later times: i.e. sea level stood higher because the volume available to contain ocean water was smaller. One explanation for that, and for the subsequent change in oxygen isotopes, might be a transition during the later Archaean Eon to modern-style plate tectonics. The resulting steep subduction forms deep trench systems able to ‘hold’ more water. Prior to that, faster production of oceanic crust meant that what are now the ocean abyssal plains were buoyed up by the warmer, younger crust beneath them. Today they average around 4000 m deep, thanks to the increased density of cooled crust, and account for a large proportion of the volume of modern ocean basins.

How did the planets form?

Animation of the 3-D shape of planetesimal Arrokoth. (Credit: Roman Tkachenko, NASA)

The latest addition to knowledge of the Solar System looks a bit like a couple of potatoes that have lain together and dried over several years. It also has a name – Arrokoth – that might have been found in a novel by H.P. Lovecraft. In fact Arrokoth meant ‘sky’ in the extinct Powhatan language once spoken by the native people of Chesapeake Bay. The planetesimal was visited by the New Horizons spacecraft two years after it had flown by Pluto (see: Most exotic geology on far-off Pluto, Earth-logs 6 April 2016). It is a small member of the Kuiper Belt of icy bodies. Data collected by a battery of imaging instruments on the spacecraft have now revealed that it has a reddish-brown coloration resulting from frozen methanol mixed with a variety of organic compounds, including a class known as tholins; the surface contains no water ice. Arrokoth is made of two flattened elliptical bodies (one 20.6 × 19.9 × 9.4 km, the smaller 15.4 × 13.8 × 9.8 km) joined at a ‘waist’. Each comprises a mixture of discrete ‘terrains’ with subtly different surface textures and colours, which are likely to be earlier bodies that accreted together. On 13 February 2020 a flurry of three papers about the odd-looking planetesimal appeared in Science.

The smooth surface implies a lack of high-energy collisions when a local cluster of initially pebble-sized icy bodies in the sparsely populated Kuiper Belt gradually coalesced under extremely low gravity. The lack of any fractures suggests that the accretions involved relative speeds of, at most, 2 m s-1: slow walking pace, or the speed of spacecraft docking (McKinnon, W.B. and a great many more 2020. The solar nebula origin of (486958) Arrokoth, a primordial contact binary in the Kuiper Belt. Science, article eaay6620; DOI: 10.1126/science.aay6620). The authors regard this quiet, protracted, cool accretion as having characterised at least the early stages of planet formation in the Outer Solar System. The extent to which this can be extrapolated to the formation of the giant gas- and ice worlds, and to the rocky planets and asteroids of the Inner Solar System, is less certain, to me at least. It implies cold accretion over a long period that would leave large worlds to heat up only through the decay of radioactive isotopes. Once large planetesimals had accreted, however that happened, the greater their gravitational pull the faster other objects of any size would encounter them. That scenario implies a succession of increasingly high-energy collisions during planet formation.
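A back-of-envelope check shows why a merger at about 2 m s-1 is consistent with gentle, gravity-dominated coalescence: it is below the escape velocity of the combined body, so the lobes could settle together rather than shatter or rebound. The radius and bulk density below are rough assumptions for illustration, not measured values:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
RADIUS = 9.0e3   # effective radius of the merged body, m (assumed)
DENSITY = 500.0  # bulk density, kg m^-3 (assumed: loosely packed, comet-like ice)

# Treat the contact binary as a single sphere for order-of-magnitude purposes.
mass = DENSITY * (4.0 / 3.0) * math.pi * RADIUS ** 3
v_escape = math.sqrt(2.0 * G * mass / RADIUS)
print(f"escape velocity ~ {v_escape:.1f} m/s")  # a few metres per second
```

Any approach speed comfortably below that figure allows a soft ‘docking’; a much faster one would either fracture the bodies or see them escape each other after the encounter.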

This hot-accretion model, to which most planetary scientists adhere, was supported by a paper published in Science Advances a day before those about Arrokoth hit the internet (Schiller, M. et al. 2020. Iron isotope evidence for very rapid accretion and differentiation of the proto-Earth. Science Advances, v. 6, article eaay7604; DOI: 10.1126/sciadv.aay7604). This work hinged on the variation in the proportions of iron isotopes among meteorites, imparted to the local gas and dust cloud after their original nucleosynthesis in several supernovas in the Milky Way galaxy during pre-solar times. Iron found in different parts of the Earth consistently shows isotopic proportions that match just one class of meteorites: the CI carbonaceous chondrites. Yet there are many other silicate-rich meteorite classes with different iron-isotope proportions. Had the Earth accreted from this mixed bag by random ‘collection’ of material over a protracted period prior to 4.54 billion years ago, its overall iron-isotope composition would be more like the average of all meteorites than that of just one class. The authors conclude that Earth’s accretion, and probably that of the smaller body that collided with it to form the Moon at about 4.4 Ga, must have taken place quickly (<5 million years) while CI carbonaceous chondrites dominated the inner part of the protoplanetary disc.
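The logic of that argument is simple mass balance: a body accreted from many isotopically distinct reservoirs should carry roughly their weighted-mean signature, not match a single end-member. A sketch with invented anomaly values, standing in for the real iron-isotope measurements:

```python
# Invented iron-isotope anomalies (ppm deviation from a reference) for several
# meteorite classes -- placeholders, not the values measured by Schiller et al.
RESERVOIRS = {
    "CI chondrite": 0.0,
    "CO chondrite": 25.0,
    "CV chondrite": 40.0,
    "ordinary chondrite": 60.0,
}

def mixture(fractions):
    """Isotope anomaly of a body accreted from the given mass fractions."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9  # fractions must sum to 1
    return sum(f * RESERVOIRS[name] for name, f in fractions.items())

# Protracted, random accretion would mean roughly equal contributions from each class:
mixed = mixture({name: 0.25 for name in RESERVOIRS})
print(mixed)  # the mean of the four reservoirs, far from the pure CI value of 0
```

An Earth that instead matches the CI end-member alone implies the other reservoirs contributed little, which is what drives the conclusion of rapid, CI-dominated accretion.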

See also: Barbuzano, J. 2020. New Horizons Reveals Full Picture of Arrokoth . . . and How Planets Form. Sky & Telescope

Mineral grains far older than the Solar System

If a geologist with broad interests were asked, ‘What are the oldest materials on Earth?’ she or he would probably say the Acasta Gneiss from Canada’s North West Territories, at 4.03 billion years (Ga) (see: At last, 4.0 Ga barrier broken, November 2008). A specialist in the Archaean Eon might say the Nuvvuagittuq Greenstone Belt on the eastern shore of Hudson Bay (see: Archaean continents derived from Hadean oceanic crust, March 2017), arguably 4.28 Ga old. An isotope geochemist would refer to a tiny 4.4 Ga zircon grain that had been washed into the much younger Mount Narryer quartz sandstone in Western Australia (see: Pushing back the “vestige of a beginning”, January 2001). A real smarty pants would cite a 4.5 Ga old sample of feldspar-rich Lunar Highland anorthosite in the Apollo Mission archive in Houston, USA. The last is less than 100 Ma younger than the formation of the Solar System itself at 4.568 Ga. Yet there are meteorites that have fallen to Earth which contain minute mineral grains that were incorporated into the initial dust from which the planets formed. Until recently, the best known were white inclusions in a 2 tonne meteorite that fell near Allende in Mexico; the largest carbonaceous chondrite ever found. This class of meteorite represents the most primitive material in orbit around the Sun. Its tiny inclusions contain proportions of isotopes of a variety of elements that are otherwise unknown in any material from the Solar System, and the inclusions are older than the Solar System itself. The conclusion is that these dust-sized, presolar grains originated elsewhere in the galaxy, perhaps from supernovas or red-giant stars.

A presolar grain from the Murchison meteorite made up of silicon carbide crystals (credit: Janaína N. Ávila)

Carbonaceous chondrites, as their name suggests, contain a huge variety of carbon-based compounds, and they have been closely examined as possible suppliers of the precursor chemicals for the origin of life. Another large example of this class fell near the town of Murchison in Victoria, Australia in 1969. The first people to locate fragments of the 100 kg body noted a distinct smell of methylated spirits and steam rising from it: when crushed half a century later it still smells like rotting peanut butter. The Murchison meteorite has yielded signs of 14 thousand organic compounds, including 70 amino acids. It has also been a target for extracting possible presolar grains. This entails grinding small fragments and then dissolving out the carbonaceous and silicate material with various reagents, leaving a residue of the most durable, more or less inert grains: despite being described as ‘large’, they are only of the order of 10 micrometres across. Many are made of silicon carbide, the same compound as the well-known abrasive carborundum. Throughout their lifetime in interstellar space the grains have been bombarded by high-energy protons and helium nuclei that move through space at nearly the speed of light, generally known as ‘cosmic rays’. When interacting with other matter these behave much like the particles in the Large Hadron Collider, in that they can transmute natural isotopes into others. Measuring the relative proportions of such isotopes in material that has been bombarded by cosmic rays enables its exposure time to be estimated. In the case of the Murchison presolar grains the isotopes of choice are those of the noble gas neon (Heck, P.R. and 9 others 2020. Lifetimes of interstellar dust from cosmic ray exposure ages of presolar silicon carbide. Proceedings of the National Academy of Sciences, article 201904573; DOI: 10.1073/pnas.1904573117). Analyses of 40 such grains yielded ages from 4.6 to 7.5 Ga; i.e. up to 3 billion years before the Solar System formed. They are, indeed, exotic. The highest age exceeds the oldest previously measured for such grains by 1.5 billion years.
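In outline, a cosmic-ray exposure age is just an accumulated cosmogenic inventory divided by a production rate. The numbers below are placeholders chosen for illustration; real production rates depend on grain size, shielding and the assumed cosmic-ray flux, and the measurements themselves are far more involved:

```python
# Hypothetical production rate of cosmogenic 21Ne in silicon carbide,
# in atoms per gram per million years -- an assumed value, not Heck et al.'s.
PRODUCTION_RATE = 5.0

def exposure_age_ga(ne21_atoms_per_gram):
    """Exposure age in billions of years from a cosmogenic 21Ne concentration."""
    return ne21_atoms_per_gram / PRODUCTION_RATE / 1000.0  # Myr -> Ga

age = exposure_age_ga(1.5e4)  # invented measurement: 1.5e4 cosmogenic 21Ne atoms/g
print(f"{age:.1f} Ga of exposure")
```

Adding an exposure age of this kind to the 4.568 Ga age of the Solar System, in which the grains have since been shielded inside a parent body, is what yields total ages reaching back 7.5 billion years.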

Investigations up to now suggest that dust amounts to about 1% of interstellar matter, the rest being gases, mainly hydrogen and helium. With the formation of the planets and the parent bodies of asteroids, a high proportion of presolar grains would have accreted to them, to be mixed with other, more common stuff. What Heck and colleagues have discovered puts the Solar System into a broad framework of time and space. The grains must have formed at some stage in the evolution of stars older and larger than the Sun, to be blown out into the interstellar medium of the Milky Way galaxy. One possibility is that about 7 billion years ago there was a burst of star formation in a nearby sector of the galaxy. How the resulting dust made its way to the concentration of interstellar matter that eventually formed the Sun and Solar System has yet to be explained.

See also: Bennett, J. 2020. Meteorite Grains Are the Oldest Known Solid Material on Earth. Smithsonian Magazine (online), 13 January 2020.