Mineral grains far older than the Solar System

If a geologist with broad interests were asked, ‘what are the oldest materials on Earth?’ she or he would probably say the Acasta Gneiss from Canada’s Northwest Territories at 4.03 billion years (Ga) (see: At last, 4.0 Ga barrier broken, November 2008). A specialist in the Archaean Eon might say the Nuvvuagittuq Greenstone Belt on the eastern shore of Hudson Bay (see: Archaean continents derived from Hadean oceanic crust, March 2017); arguably 4.28 Ga old. An isotope geochemist would refer to a tiny 4.4 Ga zircon grain that had been washed into the much younger Mount Narryer quartz sandstone in Western Australia (see: Pushing back the “vestige of a beginning”, January 2001). A real smarty pants would cite a 4.5 Ga old sample of feldspar-rich Lunar Highland anorthosite in the Apollo Mission archive in Houston, USA. The last is less than 100 Ma younger than the formation of the Solar System itself at 4.568 Ga. Yet there are meteorites that have fallen to Earth which contain minute mineral grains that were incorporated into the initial dust from which the planets formed. Until recently, the best known were white inclusions in a 2 tonne meteorite that fell near Allende in Mexico; the largest carbonaceous chondrite ever found. This class of meteorite represents the most primitive material in orbit around the Sun. The tiny inclusions contain proportions of isotopes of a variety of elements that are unknown in any other material from the Solar System, and they are older than the Solar System itself. The conclusion is that these dust-sized, presolar grains originated elsewhere in the galaxy, perhaps from supernovas or red-giant stars.

A presolar grain from the Murchison meteorite made up of silicon carbide crystals (credit: Janaína N. Ávila)

Carbonaceous chondrites, as their name suggests, contain a huge variety of carbon-based compounds and they have been closely examined as possible suppliers of the precursor chemicals for the origin of life. Another large example of this class fell near the town of Murchison in Victoria, Australia in 1969. The first people to locate fragments of the 100 kg body noted a distinct smell of methylated spirits and steam rising from it: when crushed half a century later it still smells like rotting peanut butter. The Murchison meteorite has yielded signs of 14 thousand organic compounds, including 70 amino acids. It has also been a target for extracting possible presolar grains. This entails grinding small fragments and then dissolving out the carbonaceous and silicate material using various reagents, leaving a residue of the most durable, more or less inert grains: despite being described as ‘large’ they are of the order of only 10 micrometres across. Many are made of silicon carbide, the same compound as the well-known abrasive carborundum. Throughout their lifetime in interstellar space the grains have been bombarded by high-energy protons and helium nuclei that move through space at nearly the speed of light – generally known as ‘cosmic rays’. When interacting with other matter they behave much like the particles in the Large Hadron Collider, being able to transmute natural isotopes into others. Measuring the relative proportions of these isotopes in material that has been bombarded by cosmic rays enables its exposure time to be estimated. In the case of the Murchison presolar grains the isotopes of choice are those of the noble gas neon (Heck, P.R. and 9 others 2020. Lifetimes of interstellar dust from cosmic ray exposure ages of presolar silicon carbide. Proceedings of the National Academy of Sciences, 201904573; DOI: 10.1073/pnas.1904573117). Analyses of 40 such grains yielded ages from 4.6 to 7.5 Ga, i.e. up to 3 billion years before the Solar System formed. They are, indeed, exotic. The oldest exceeds the previous record age for a presolar grain by 1.5 billion years.
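The principle behind the dating is simple: for a stable cosmogenic nuclide such as 21Ne, the measured concentration divided by its production rate under cosmic-ray bombardment gives the exposure time. A minimal sketch follows; the concentration and production-rate values are invented for illustration and are not those of Heck et al.

```python
# Illustrative cosmic-ray exposure dating: exposure time equals the
# measured concentration of a stable cosmogenic nuclide (here 21Ne)
# divided by its production rate. All numbers are hypothetical.

def exposure_age_ma(ne21_concentration, production_rate):
    """Exposure age in Ma, given a concentration (atoms/g) and a
    production rate (atoms/g per Ma)."""
    return ne21_concentration / production_rate

# Hypothetical grain: 2.4e7 atoms/g of cosmogenic 21Ne accumulated
# at a production rate of 8.0e6 atoms/g per Ma
age = exposure_age_ma(2.4e7, 8.0e6)
print(f"Exposure age: {age:.1f} Ma")  # Exposure age: 3.0 Ma
```

In practice the production rate depends on grain size and the cosmic-ray flux assumed, which is where most of the uncertainty in such ages lies.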

Investigations up to now suggest that dust amounts to about 1% of interstellar matter, the rest being gases, mainly hydrogen and helium. With the formation of the planets and the parent bodies of asteroids a high proportion of presolar grains would have accreted to them to be mixed with other, more common stuff. What Heck and colleagues have discovered puts the Solar System into a broad framework of time and space. The grains must have formed at some stage in the evolution of stars older and larger than the Sun, to be blown out into the interstellar medium of the Milky Way galaxy. One possibility is that about 7 billion years ago there was a burst of star formation in a nearby sector of the galaxy. How the resulting dust made its way to the concentration of interstellar matter that eventually formed the Sun and Solar System has yet to be explained.

See also: Bennett, J. 2020. Meteorite Grains Are the Oldest Known Solid Material on Earth. Smithsonian Magazine (online), 13 January 2020.

Active volcanic processes on Venus

Earth’s nearest neighbour, apart from the Moon, is the planet Venus. As regards size and estimated density it could be Earth’s twin. It is a rocky planet, probably with a crust and mantle made of magnesium- and iron-rich silicates, and its bulk density suggests a substantial metallic core. There the resemblance ends. The whole planet is shrouded in highly reflective cloud (possibly of CO2 ‘snow’) at the top of an atmosphere almost a hundred times more massive than ours. It consists of 96% CO2 with 3% nitrogen, the rest being mainly sulfuric acid: the ultimate greenhouse world, and a very corrosive one. Only the four Soviet Venera missions have landed on Venus to provide close-up images of its surface. They functioned only for a couple of hours, after having measured a surface temperature around 500°C – high enough to melt lead. One Venera instrument – an X-ray fluorescence spectrometer – did crudely analyse some surface rock, showing it to be of basaltic composition. The atmosphere is not completely opaque, being transparent to microwave radiation, so both its surface textures and elevation variation have been imaged several times using orbital radar. Unlike Earth, which has a dual-peaked distribution of elevation – high continents and low ocean floors, thanks to plate tectonics – Venus has just one peak and is significantly flatter. No plate tectonics operate there. There are far fewer impact craters on Venus than on Mars and the Moon, and most are small. This suggests that the present surface of Venus is far younger than theirs; no more than 500 Ma compared to 3 to 4 billion years.

Volcanic ‘pancake’ domes on the surface of Venus, about 65 km wide and 1 km high, imaged by orbital radar carried by NASA’s Magellan Mission.

Somehow, Venus has been ‘repaved’, most likely by vast volcanic outpourings akin to the Earth’s flood basalt events, but on a global scale. Radar reveals some 1600 circular features that are undoubtedly volcanic in origin and younger than most of the craters. They resemble huge pancakes and are thought to be shield volcanoes similar to those seen on the Ethiopian Plateau but up to 100 times larger. Despite the high surface temperature and a caustic atmosphere, chemical weathering on Venus is likely to be much slower than on Earth because of the dryness of its atmosphere. Also, unlike the hydration reactions that produce terrestrial weathering, on Venus oxidizing processes probably produce iron oxides, sulfides, some anhydrous sulfates and secondary silicates. These would change the reflective properties of originally fresh igneous rocks, a little like the desert varnish that pervades rocky surfaces in arid areas on Earth. A group of US scientists have devised experiments to reproduce the likely conditions at the surface of Venus to see how long it takes for one mineral in basalt to become ‘tarnished’ in this way (Filberto, J. et al. 2020. Present-day volcanism on Venus as evidenced from weathering rates of olivine. Science Advances, v. 6, article eaax7445; DOI: 10.1126/sciadv.aax7445). One might wonder how such tarnishing could ever be detected, since the planet’s atmosphere hides the surface in the visible and short-wavelength infrared parts of the spectrum that underpin most geological remote sensing of other planetary bodies, such as Mars. In fact, the atmosphere is not completely opaque at those wavelengths. Carbon dioxide lets radiation pass through three narrow spectral ‘windows’ (centred on 1.01, 1.10, and 1.18 μm) in which fresh olivine emits more radiation when it is heated than does weathered olivine. So measuring radiation detected in these ‘windows’ should discriminate between fresh olivine and that which has been weathered Venus-style. Indeed it may help determine the degree of weathering and thus the duration of a lava flow’s exposure.
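The physics behind those ‘windows’ is just thermal emission: at Venus’s surface temperature the ground glows in the short-wave infrared, and a weathered surface with lower emissivity glows less. A minimal sketch using Planck’s law follows; only the window wavelengths and the rough surface temperature come from the text, while the emissivity values for fresh and weathered basalt are invented to illustrate the contrast.

```python
import math

# Blackbody spectral radiance (Planck's law) at the three narrow CO2
# 'windows' for a Venus-like surface temperature. The fresh/weathered
# emissivities below are hypothetical illustration values only.

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance in W sr^-1 m^-3."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = math.exp(H * C / (wavelength_m * K * temp_k)) - 1.0
    return a / b

T_SURFACE = 770.0  # K, roughly the ~500 C measured by the Venera landers
for um in (1.01, 1.10, 1.18):
    radiance = planck_radiance(um * 1e-6, T_SURFACE)
    # hypothetical emissivities: fresh basalt vs Venus-weathered basalt
    fresh, weathered = 0.85 * radiance, 0.60 * radiance
    print(f"{um} um: fresh {fresh:.3e}, weathered {weathered:.3e} W/sr/m^3")
```

Because the exponential term in Planck’s law is so steep at these short wavelengths, even a modest emissivity difference produces a radiance contrast that a night-side instrument can in principle pick out.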

Colour-coded image of night-time thermal emissivity over Venus’s southern hemisphere as sensed by VIRTIS on Venus Express (Credit: M. Gilmore 2017, Space Sci. Rev. DOI 10.1007/s11214-017-0370-8; Fig. 3)

The European Space Agency’s Venus Express Mission in 2006 carried a remote sensing instrument (VIRTIS) mainly aimed at the structure of Venus’s clouds and their circulation. But it also covered the three CO2 ‘windows’, so it could detect and image the surface too. The image above shows significant areas of the surface of Venus that strongly emit short-wave infrared at night (yellow to dark red) and may be slightly weathered to fresh. Most of the surface in green to dark blue is probably heavily weathered. So the data may provide a crude map of the age of the surface. However, Filberto et al’s experiments show that olivine weathers extremely quickly under the surface conditions of Venus. In a matter of months signs of the fresh mineral disappeared. So the red areas on the image may well be lavas that were erupted in the last few years before VIRTIS was collecting data, and perhaps active eruptions. Previous suggestions have been that some lava flows on large volcanoes are younger than 2.5 Ma and possibly even younger than 0.25 Ma. Earth’s ‘evil twin’ now seems to be vastly more active than once thought, as befits a planet in which mantle-melting temperatures (~1200°C) are far closer to the surface as a result of the blanketing effect of its super-dense atmosphere.

The last known Homo erectus

There are a lot of assumptions made about Homo erectus and, indeed, there is much confusion surrounding the species (see: various items in Human evolution and migrations logs for 2001, 2002, 2003 and several other years). For a start, the name derives from Eugene Dubois’s 1891 discovery of several hominin cranial fragments in sediments deposited by the Solo River in Java. Dubois was the first to recognise in ‘Java Man’ the human-ape ‘missing link’ about which Charles Darwin speculated in his The Descent of Man, and Selection in Relation to Sex (1871). Dubois named the beings Pithecanthropus (now Homo) erectus. Once the “multiregional” versus “out-of-Africa” debate about the origin of anatomically modern humans (AMH) emerged after a variety of H. erectus-like fossils had also turned up in Africa and Europe, as well as in East and SE Asia, ‘Java Man’ was adopted by the multiregionalists as ‘evidence’ for separate evolution of AMH in Asia. Such a view is still adhered to by a tenacious few Chinese palaeoanthropologists, but by virtually no-one else.

Reconstruction of the Nariokotome Boy from the skeleton found in the Turkana Basin of Kenya (credit: Atelier Daynes/Science Photo Library)

The earliest of the African ‘erects’ were distinguished as H. ergaster, represented by the 1.6 Ma old, almost intact skeleton of Nariokotome Boy from the Turkana area of Kenya. In Africa the specific names ergaster and erectus often seem to be used as synonyms, whereas similar-looking fossils from Asia are almost always referred to as ‘Asian H. erectus’. Matters became even more confusing when the earliest human migrants from Africa to Eurasia were discovered at Dmanisi in Georgia (see: Human evolution and migrations logs for 2002, 2003, 2007, 2013). Anatomically they deviate substantially from both H. ergaster and Asian erectus – and from each other! – and at 1.8 Ma they are very old indeed. Perhaps as a palliative in the academic rows that broke out following their discovery, for the moment they are called Homo erectus georgicus; a sub-species. But, then, how can Asian H. erectus be regarded as their descendants? Yet anatomically erectus-like fossils are known in East and SE Asia from 1.5 Ma onwards.

There is another mystery. Homo ergaster/erectus in Africa made distinctive tools, typified by the bifacial Acheulian hand axe. Their tool kit remained substantially the same for more than a million years, and was inherited by all the descendants of H. erectus in Africa and Europe: by H. antecessor, heidelbergensis, Neanderthals and early AMH. Yet in Asia, such a technology has not been discovered at sites older than around 250 thousand years. Either no earlier human migrants into Asia made and carried such artefacts or stone tools were largely abandoned by early Asian humans in favour of those more easily made from wood, for instance bamboo.

In 1996 the youngest Solo River sediments that had yielded H. erectus remains in the 1930s were dated using electron-spin resonance and uranium-series methods. The results suggested occupation by ‘erects’ between 53 and 27 ka, triggering yet more astonishment, because fully modern humans had by then also arrived in Indonesia. Could anatomically modern humans have co-existed with a species whose origin went back almost two million years? It has taken another two decades for this perplexing issue to be clarified – to some extent. The previous dates were checked using more precise versions of the original geochronological methods covering a wider range of sediment strata (Rizal, Y. et al. 2019. Last appearance of Homo erectus at Ngandong, Java, 117,000–108,000 years ago. Nature, published online; DOI: 10.1038/s41586-019-1863-2). No AMH presence in Asia is known before about 80 ka, so can the astonishment be set aside? Possibly, but what is known for sure from modern and ancient DNA comparisons is that early modern human migrants interbred with a more ancient Asian group, the Denisovans. At present that group is only known from a site in Siberia and another in Tibet through a finger bone and a few molar teeth that yielded DNA significantly different from both living humans and ancient Neanderthals. So we have no tangible evidence of what the Denisovans looked like, unlike Asian H. erectus of whom there are many substantial fossils. Yet DNA has not been extracted from any of them. That is hardly surprising for the Indonesian specimens because hot and humid conditions cause DNA to break down quickly and completely. There is a much better chance of extracting genomes from the youngest H. erectus fossils from higher latitudes in China. Once that is achieved, we will know whether they are indeed erects or can be matched genetically with Denisovans.

See also: Price, M. 2019. Ancient human species made ‘last stand’ 100,000 years ago on Indonesian island (Science)

Chewing gum and the genetics of an ancient human

The sequencing of DNA has advanced to such a degree of precision and accuracy that minute traces of tissue, hair, saliva, sweat, semen and other bodily solids and fluids found at crime scenes are able to point to whoever was present. That is, provided that those persons’ DNA is known either from samples taken from suspects or resides in police records. In the case of individuals unknown to the authorities, archived DNA sequences from members of almost all ethnic groups can be used to ‘profile’ those present at a crime. Likely skin and hair pigmentation, and even eye colour, emerge from segments that contain the genes responsible.

One of the oddest demonstrations of the efficacy of DNA sequencing from minute samples used a wad of chewed birch resin. Such gums are still chewed widely for a number of reasons: to stave off thirst or hunger; to benefit from antiseptic compounds in the resin; and to soften a useful gluing material – resin derived by heating birch bark is a particularly good natural adhesive. Today we are most familiar with chicle resin from Central America, the base for most commercial chewing gum, but a whole range of such mastics are chewed on every inhabited continent, birch gum still being used by Native North Americans: it happens to be quite sweet. The chewed wad in this case was from a Neolithic site at Syltholm on the Baltic coast of southern Denmark (Jensen, T.Z.T. and 21 others 2019. A 5700 year-old human genome and oral microbiome from chewed birch pitch. Nature Communications, v. 10, article 5520; DOI: 10.1038/s41467-019-13549-9). The sample contained enough ancient human DNA to reconstruct a full genome, and also yielded fragments from a recent meal – duck with hazelnuts – and from several oral bacteria and viruses, including a herpes variety that is a cause of glandular fever. The sample also shows that the carrier did not have the gene associated with lactase persistence that allows adults to digest milk.

An artist’s impression of the gum chewing young woman from southern Denmark (credit: Tom Bjorklund)

The chewer was female and had both dark skin and hair, together with blue eyes; similar to a Mesolithic male found in a cave in Cheddar Gorge in SW England, whose petrous ear bone yielded DNA. Fossil human bones are rare, and by no means all of them still carry enough DNA for full sequencing. Chewed resin is much more commonly found and its potential awaits wider exploitation, particularly as much older wads have been found. Specifically, the Danish woman’s DNA reveals that she did not carry any ancestry from European Neolithic farmers whose DNA is well known from numerous burials. It was previously thought that farmers migrating westward from Anatolia in modern Turkey either replaced or absorbed the earlier Europeans. By 5700 years ago farming communities were widespread in western Europe, having arrived almost two thousand years earlier. The blue-eyed, dark Danish woman was probably a member of a surviving group of earlier hunter-gatherers who followed the retreat of glacial conditions at the end of the Younger Dryas ice re-advance about 11,500 years ago. The Syltholm site seems to have been occupied for hundreds of generations. Clearly, the community had not evolved pale skin since its arrival, as suggested by a once popular theory that dark skin at high latitudes is unable to produce sufficient vitamin-D for good health. That notion has been superseded by knowledge that diets rich in meat, nuts and fungi provide sufficient vitamin-D. Pale skins may have evolved more recently as people came to rely on a diet dominated by cereals that are a poor source of vitamin-D.

How marine animal life survived (just) Snowball Earth events

A Cryogenian glacial diamictite containing boulders of many different provenances from the Garvellach Islands off the west coast of Scotland. (Credit: Steve Drury)

Glacial conditions during the latter part of the Neoproterozoic Era extended to tropical latitudes, probably as far as the Equator, thereby giving rise to the concept of Snowball Earth events. They left evidence in the form of sedimentary strata known as diamictites, whose wide range of particle sizes, from clay to boulders, has several environmental explanations, the most widely assumed being glacial conditions. Many of those from the Cryogenian Period are littered with dropstones that puncture bedding, which suggest that they were deposited from floating ice similar to that forming present-day Antarctic ice shelves or extensions of onshore glaciers. Oceans on which vast shelves of glacial ice floated would have posed major threats to marine life by cutting off photosynthesis and reducing the oxygen content of seawater. That marine life was severely set back is signalled by a series of perturbations in the carbon-isotope composition of seawater. Its relative proportion of 13C to 12C (δ13C) fell sharply during the two main Snowball events and at other times between 850 and 550 Ma. The Cryogenian was a time of repeated major stress to Precambrian life, which may well have speeded up evolution, sediments of the succeeding Ediacaran Period famously containing the first large, abundant and diverse eukaryote fossils.

For eukaryotes to survive each prolonged cryogenic stress required that oxygen was indeed present in the oceans. But evidence for oxygenated marine habitats during Snowball Earth events has been elusive since these global phenomena were discovered. Geoscientists from Australia, Canada, China and the US have applied novel geochemical approaches to occasional iron-rich strata within Cryogenian diamictite sequences from Namibia, Australia and the south-western US in an attempt to resolve the paradox (Lechte, M.A. and 8 others 2019. Subglacial meltwater supported aerobic marine habitats during Snowball Earth. Proceedings of the National Academy of Sciences, 2019; 201909165 DOI: 10.1073/pnas.1909165116). Iron isotopes in iron-rich minerals, specifically the proportion of 56Fe relative to that of 54Fe (δ56Fe), help to assess the redox conditions when they formed. This is backed up by cerium geochemistry and the manganese to iron ratio in ironstones.
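Delta notation, used above for both carbon and iron isotopes, is simply the per-mil deviation of a sample’s isotope ratio from that of a reference standard. A minimal sketch for δ56Fe follows; the standard ratio is the approximate 56Fe/54Fe of IRMM-014, the usual reference for iron, and the sample ratios are invented for illustration.

```python
# Delta notation for iron isotopes: the per-mil deviation of a sample's
# 56Fe/54Fe ratio from a reference standard. Sample ratios are invented.

IRMM014_56_54 = 15.698  # approximate 56Fe/54Fe of the IRMM-014 standard

def delta56fe(sample_ratio, standard_ratio=IRMM014_56_54):
    """delta-56Fe in per mil (parts per thousand)."""
    return (sample_ratio / standard_ratio - 1.0) * 1000.0

# Partial oxidation of dissolved Fe(II) enriches the oxidised product in
# the heavy isotope, so a positive delta-56Fe in an ironstone can point
# to oxygenated water when it formed.
print(f"{delta56fe(15.710):+.2f} per mil")  # isotopically heavy sample
print(f"{delta56fe(15.690):+.2f} per mil")  # isotopically light sample
```

The δ13C values mentioned earlier for Cryogenian seawater work the same way, with the 13C/12C ratio measured against a carbonate standard instead.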

In the geological settings that the researchers chose to study there are sedimentological features that reveal where ice shelves were in direct contact with the sea bed, i.e. where they were ‘grounded’. Grounding is signified by a much greater proportion of large fragments in diamictites, many of which are striated through being dragged over underlying rock. Far beyond the grounding line diamictites tend to be mainly fine grained with only a few dropstones. The redox indicators show clear changes from the grounding lines through nearby environments to those of deep water beneath the ice. Each of them shows evidence of greater oxidation of seawater at the grounding line and a falling off further into deep water. The explanation given by the authors is fresh meltwater flowing through sub-glacial channels at the base of the grounded ice fed by melting at the glacier surface, as occurs today during summer on the Greenland ice cap and close to the edge of Antarctica. Since cold water is able to dissolve gas efficiently the sub-glacial channels were also transporting atmospheric oxygen to enrich the near shore sub-glacial environment of the sea bed. In iron-rich water this may have sustained bacterial chemo-autotrophic life to set up a fringing food chain that, together with oxygen, sustained eukaryotic heterotrophs. In such a case, photosynthesis would have been impossible, yet unnecessary. Moreover, bacteria that use the oxidation of dissolved iron as an energy source would have caused iron-3 oxides to precipitate, thereby forming the ironstones on which the study centred. Interestingly, the hypothesis resembles the recently discovered ecosystems beneath Antarctic ice shelves.

Small and probably unconnected ecosystems of this kind would have been conducive to accelerated evolution among isolated eukaryote communities. That is a prerequisite for the sudden appearance of the rich Ediacaran faunas that colonised sea floors globally once the Cryogenian ended. Perhaps these ironstone-bearing diamictite occurrences where the biological action seems to have taken place might, one day, reveal evidence of the precursors to the largely bag-like Ediacaran animals.

Should you worry about being killed by a meteorite?

In 1994 Clark Chapman of the Planetary Science Institute in Arizona and David Morrison of NASA’s Ames Research Center in California published a paper that examined the statistical hazard of death by unnatural causes in the United States (Chapman, C. & Morrison, D. Impacts on the Earth by asteroids and comets: assessing the hazard. Nature, v. 367, p. 33–40; DOI: 10.1038/367033a0). Specifically, they tried to place the risk of an individual being killed by a large asteroid (~2 km across) hitting the Earth in the context of more familiar unwelcome causes. Based on the then available data about near-Earth objects – those whose orbits around the Sun cross that of the Earth – they assessed the chances as ranging between 1 in 3,000 and 1 in 250,000; a chance of 1 in 20,000 being the most likely. The results from their complex calculations turned out to be pretty scary, though not as bad as dying in a car wreck, being murdered, burnt to death or accidentally shot. Asteroid-risk is about the same as electrocution, at the higher-risk end, but significantly higher than many other causes with which the American public are, unfortunately, familiar: air crash; flood; tornado and snake bite. The lowest asteroid-risk (1 in 250,000) is greater than death from fireworks, botulism or trichloroethylene in drinking water; the last being 1 in 10 million.

Chapman and Morrison cautioned against mass panic on a greater scale than that allegedly provoked by Orson Welles’s 1938 CBS radio production of H.G. Wells’s War of the Worlds. Asteroid and comet impacts are events likely to kill between 5,000 and several hundred million people each time they happen, but they occur infrequently. Catastrophes at the low end, such as the 1908 Tunguska air burst over an uninhabited area in Siberia, are likely to happen once in a thousand years. At the high end, mass extinction impacts may occur once every hundred million years. As might be said by an Australian, ‘No worries, mate’! But you never know…
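The counter-intuitive arithmetic here – a rare catastrophe yielding a risk comparable to electrocution – can be sketched in a few lines. This is a back-of-envelope reconstruction in the spirit of Chapman and Morrison’s approach, not their actual calculation: the death toll, recurrence interval, 1990s world population and lifetime are all round-number assumptions.

```python
# Individual lifetime odds of dying in a large impact, estimated as
# (deaths per event / world population) x (events per year) x lifetime.
# All inputs are rough, round-number assumptions for illustration.

def lifetime_odds(deaths_per_event, event_interval_yr,
                  population=6e9, lifetime_yr=65.0):
    """Return N such that the individual lifetime risk is '1 in N'."""
    annual_individual_risk = (deaths_per_event / population) / event_interval_yr
    return 1.0 / (annual_individual_risk * lifetime_yr)

# A ~2 km asteroid: assume a quarter of humanity (1.5 billion) dies,
# once every 500,000 years on average
odds = lifetime_odds(1.5e9, 5.0e5)
print(f"About 1 in {odds:,.0f}")  # About 1 in 30,769
```

The enormous death toll per event almost exactly offsets the rarity, which is why the answer lands in the same range as everyday hazards.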

Michelle Knapp’s Chevrolet Malibu the morning after a stony-iron meteorite struck it. Bought for US$ 300, Michelle sold the car for US$ 25,000 and the meteorite fetched US$ 50,000 (credit: John Bortle)

How about ordinary meteorites that come in their thousands, especially when the Earth’s orbit takes it through the former paths taken by disintegrating comets? When I was a kid rumours spread that a motor cyclist had a narrow escape on the flatlands around Kingston-upon-Hull in East Yorkshire, when a meteorite landed in his sidecar: probably apocryphal. But Michelle Knapp of Peekskill, New York, USA had a job for the body shop when a 12 kg extraterrestrial object hit her Chevrolet Malibu while it was parked in the driveway. In 1954, Ann Hodges of Sylacauga, Alabama was less fortunate during an afternoon nap on her sofa, when a 4 kg chondritic meteorite crashed through her house roof, hit a radiogram and bounced to smash into her upper thigh, badly bruising her. For an object that probably entered the atmosphere at about 15 km s-1, that was indeed a piece of good luck resulting from air’s viscous drag, the roof impact and energy lost to her radiogram. The offending projectile became a doorstop in the Hodge residence, before the family kindly donated it to the Alabama Museum of Natural History. Another fragment of the same meteorite, found in a field a few kilometres away, fetched US$ 728 per gram at Christie’s auction house in 2017. Perhaps the most unlucky man of the 21st century was an Indian bus driver who was killed by debris ejected when a meteorite struck the dirt track on which he was driving in Tamil Nadu in 2016 – three passengers were also injured. Even that is disputed, some claiming that the cause was an explosive device.
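Just how much atmospheric drag mattered to Ann Hodges can be sketched with the kinetic energy formula, ½mv². The mass and entry speed are taken loosely from the text; the ~100 m/s terminal speed after deceleration is an assumed round number typical of small falls.

```python
# Kinetic energy of a small meteorite before and after atmospheric
# deceleration, showing why the Sylacauga fall was survivable.
# The terminal speed of ~100 m/s is an assumption.

def kinetic_energy_j(mass_kg, speed_m_s):
    """Kinetic energy in joules: 0.5 * m * v^2."""
    return 0.5 * mass_kg * speed_m_s**2

m = 4.0                                # kg, roughly the Sylacauga stone
entry = kinetic_energy_j(m, 15_000)    # ~15 km/s at the top of the atmosphere
terminal = kinetic_energy_j(m, 100)    # ~100 m/s assumed after drag

print(f"Entry KE:    {entry:.2e} J")     # Entry KE:    4.50e+08 J
print(f"Terminal KE: {terminal:.2e} J")  # Terminal KE: 2.00e+04 J
```

Drag removes well over 99.99% of the entry energy, leaving an impact more like a dropped anvil than a bomb – painful, but survivable.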

When rain kick-started evolution

The end of the Palaeozoic Era was marked by the greatest known mass extinction at the Permian-Triassic boundary 252 Ma ago. An estimated 96% of known marine fossil species simply disappeared, as did 70% of vertebrates that lived on land. Many processes seem to have conspired against life on Earth although it seems that one was probably primary: the largest known flood-basalt event, evidence for which lies in the Siberian Traps. It took as long as 50 Ma for ecosystems to return to their former diversity. But, oddly, it was animals at the top of the marine food chain that recovered most quickly, in about 5 million years. There must have been food in the sea, but it was at first somewhat monotonous. The continents were still configured in the Pangaea supercontinent, so much land was far from oceans and thus dry. Oxygen was being drawn down from the atmosphere to combine with iron in Fe2O3, forming vast tracts of the redbeds for which the Triassic is famous. From a peak of 30% in the Permian, atmospheric oxygen descended to 16% in the early Triassic, so living even at sea level would have been equivalent to surviving today at 2.7 km elevation. Potential ecological niches were vastly reduced in fertility and in altitude, and Pangaea still had vast mountain ranges inherited from its formative tectonics as well as being arid, apart from in polar regions. Unsurprisingly, recovery of terrestrial diversity, especially among vertebrates, was slow during the early Triassic.
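The altitude equivalence of lowered atmospheric oxygen can be checked crudely with the isothermal barometric formula, p(h) = p0·exp(-h/H): find the altitude today at which the partial pressure of oxygen matches that at sea level under a 16% atmosphere. The scale height and the assumption of a constant O2 fraction with height are simplifications, so this reproduces the quoted ~2.7 km only to within a few hundred metres.

```python
import math

# Crude 'altitude equivalence' of lowered atmospheric oxygen, using the
# isothermal barometric formula p(h) = p0 * exp(-h / H). The scale
# height H is a typical round value; the calculation is approximate.

SCALE_HEIGHT_KM = 8.5  # typical scale height of Earth's lower atmosphere

def equivalent_altitude_km(past_o2_fraction, present_o2_fraction=0.21,
                           scale_height_km=SCALE_HEIGHT_KM):
    """Altitude today at which O2 partial pressure equals that at sea
    level under the given past O2 fraction."""
    return scale_height_km * math.log(present_o2_fraction / past_o2_fraction)

print(f"{equivalent_altitude_km(0.16):.1f} km")  # 2.3 km
```

The approximation lands a little below the 2.7 km quoted in the text, the difference coming down to the scale height and temperature profile assumed; either way, early Triassic sea level felt like living well over 2 km up.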

Triassic grey terrestrial sediments on the Somerset coast of SW England (credit: Margaret W. Carruthers; https://www.flickr.com/photos/64167416@N03/albums/72157659852255255)

Then, about halfway through the Triassic Period, it began to rain across Pangaea. Whether that was continual or seasonal is uncertain, although the presence of large mountains and high plateaus would favour monsoon circulation, akin to the present-day Indian monsoon associated with the Himalaya and Tibetan Plateau. How do geologists know that central Pangaea became wetter? The evidence lies in grey sedimentary strata between the otherwise universal redbeds, which occur in the Carnian Age and span one to two million years around 232 Ma (Marshall, M. 2019. Did a million years of rain jump-start dinosaur evolution? Nature, v. 576, p. 26-28; DOI: 10.1038/d41586-019-03699-7). A likely driver for this change in colour is a rise in water tables that would exclude oxygen from recently deposited sediments. The red iron-3 oxides were reduced, so that soluble iron-2 was leached out. Some marine groups, such as crinoids, underwent a sudden flurry of extinctions, as did plants and amphibians on land. But others received a clear boost from this Carnian Pluvial Event. A few dinosaurs first appear in older Triassic sediments, but during the Carnian they began to diversify from diminutive bipedal species into the main groups so familiar to many: the ornithischians that led to Stegosaurus and Triceratops, and the forerunners of the saurischians that included huge long-necked herbivores and the bipedal theropods and birds. Within 4 Ma dinosaurs had truly begun their global hegemony. Offshore in shallow seas, the scleractinian corals, which dominate modern coral reef systems, also exploded during the Carnian from small beginnings in the earlier Triassic. It is even suspected that the Carnian nurtured the predecessor of mammals, although the evidence is only from isolated fossil teeth.

A Carnian shift in carbon isotopes, measured in Triassic limestones of the Italian Dolomites, to lower proportions of the heavier 13C suggests that a huge volume of the lighter 12C had entered the atmosphere. That could have resulted from large-scale volcanism, the 232 Ma old lavas of the Wrangell Mountains in Alaska being a likely suspect. Such an input would have had a warming climatic outcome that would have increased tropical evaporation of ocean water and the humidity over continental masses. The once ecologically monotonous core of Pangaea may have greatly diversified into many more niches awaiting occupants, thereby stimulating the terrestrial evolutionary burst. Perhaps ironically, and fortunately, a volcanic near snuffing-out of life on Earth was soon followed by another with the opposite effect. Yet another negative outcome arrived with the flood basalts of the Central Atlantic Magmatic Province at the end of the Triassic (201 Ma), to be followed by further adaptive radiation among those organisms that survived into the Jurassic.

Why did anatomically modern humans replace Neanderthals?

Extinction of the Neanderthals has long been attributed to pressure on resources following the first influx into Europe by AMH bands and perhaps different uses of the available resources by the two groups. One often-quoted piece of evidence comes from the outermost layer in the teeth of deer. Most ruminants continually replace tooth enamel to make up for wear, winter additions being darker than those during summer. Incidentally, the resulting layering gives away their age, as in, ‘Never look a gift horse in the mouth’! Deer teeth associated with Neanderthal sites show that they were killed throughout the year. Those around AMH camps are either summer or winter kills. The implication is that AMH were highly mobile, whereas Neanderthals had fixed hunting ranges whose resources would have been depleted by passing AMH bands. Be that as it may, another possibility has received more convincing support.

Neanderthal populations across their range from Gibraltar to western Siberia were extremely low and band sizes seem to have been small, even before AMH made their appearance. This may have been critical in their demise, based on considerations that arise from attempts to conserve threatened species today (Vaesen, K. et al. 2019. Inbreeding, Allee effects and stochasticity might be sufficient to account for Neanderthal extinction. PLoS One, v. 14, article e0225117; DOI: 10.1371/journal.pone.0225117). The smaller and more isolated groups are, the more likely they are to resort to inbreeding in the absence of close-by potential mates. There is evidence from Neanderthal DNA that such endogamy was practised. Long-term interbreeding between genetic relatives among living human groups is known to result in decreased fitness as deleterious traits accumulate. On top of that, very low population density makes finding mates, closely related or not, difficult (the Allee effect). A result of that is akin to the modern tendency of young people born in remote areas to leave, so that the local population falls and becomes more elderly. The remaining elders face difficulties in assembling hunting and foraging parties; i.e. keeping the community going. Many Neanderthal skeletons show signs of extremely hard, repetitive physical effort and senescence; e.g. loss of teeth and evidence of having to be cared for by others. Small communities are also affected far more by fluctuating birth and death rates and changing gender ratios than are larger ones; i.e. random events have a far greater overall effect (stochasticity). Krist Vaesen and colleagues from the Netherlands use two modern demographic techniques that encapsulate these tendencies to model Neanderthal populations over 10,000 years.
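The interplay of the Allee effect and stochasticity can be illustrated with a toy simulation. To be clear, this is not the model of Vaesen et al., which used established demographic techniques; it is a minimal sketch in which the expected growth rate turns negative below a hypothetical mate-finding (Allee) threshold, and random demographic noise scales with population size:

```python
import random

def simulate(pop0, years, growth=0.002, allee_threshold=20, seed=1):
    """Toy population model combining an Allee effect with stochasticity.

    Below allee_threshold, mate-finding failure makes the expected
    growth rate negative; random noise of roughly Poisson scale
    (standard deviation ~ sqrt(population)) supplies demographic
    stochasticity. Returns the population after the run (0 = extinct).
    All parameter values are hypothetical, chosen only for illustration.
    """
    random.seed(seed)
    pop = pop0
    for _ in range(years):
        # Allee effect: a small band struggles to find mates at all
        r = growth if pop >= allee_threshold else -5 * growth
        expected = pop * (1 + r)
        # demographic stochasticity around the expected size
        pop = max(0, round(random.gauss(expected, expected ** 0.5)))
        if pop == 0:
            break  # extinction
    return pop
```

With these assumptions a population of thousands persists, whereas a band of a few dozen that random fluctuation pushes below the threshold is then dragged towards extinction with no external catastrophe required: the combination-of-factors argument in miniature.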

By themselves, none of the likely factors should have driven Neanderthals into extinction. But in combination they may well have done so, even if modern humans hadn’t arrived around 40 ka. Completely external events, such as epidemics or sudden climate change, would have made little difference. Indeed the very isolation of Neanderthal bands over their vast geographic range would have shielded them from infection, and they had been able to survive almost half a million years of repeated climate crises. If their numbers were always small, that raises the question of how they survived for so long. The authors suggest that they ran out of luck, in the sense that, finally, their precariousness came up against a rare blend of environmental fluctuations that ‘stacked the odds’ against them. It is possible that interactions, involving neither competition nor hostility, with small numbers of AMH migrants may have tipped the balance. A possibility not mentioned in the paper, perhaps because it is speculation rather than modelling, is social fusion of the two groups and interbreeding. Perhaps the Neanderthals disappeared because of hybridisation through choice of new kinds of mate. Some closely related modern species are under threat for that very reason. Although individual living non-African humans carry little more than 3% of Neanderthal genetic material, it has been estimated that a very large proportion of the Neanderthal genome is distributed, mainly, among the population of Eurasia. For that to have happened suggests that interbreeding was habitual and perhaps a popular option.

See also: Sample, I. 2019. Bad luck may have caused Neanderthals’ extinction – study. (Guardian, 27 November 2019)

Risks of sudden changes linked to climate

The Earth system comprises a host of dynamic, interwoven components or subsystems. They involve processes deep within Earth’s interior, at its surface and in the atmosphere. Such processes combine inorganic chemistry, biology and physics. To describe them properly would require a multi-volume book; indeed an entire library, but even that would be more incomplete than our understanding of human history and all the other social sciences. Cut to its fundamentals, Earth system science deals with – or tries to deal with – a planetary engine. In it, the available energy from inside and from the Sun is continually shifted around to drive the bewildering variety, multiplicity of scales and variable paces of every process that makes our planet the most interesting thing in the entire universe. It has done so, with a variety of hiccups and monumental transformations, for some four and a half billion years and looks likely to continue on its roiling way for about five billion more – with or without humanity. Though we occupy a tiny fraction of its history we have introduced a totally new subsystem that in several ways outpaces the speed and the magnitude of some chemical, physical and organic processes. For example: shifting mass (see the previous item, Sedimentary deposits of the ‘Anthropocene’); removing and modifying vegetation cover; emitting vast amounts of various compounds as a result of economic activity – the full list is huge. In such a complex natural system it is hardly surprising that rapidly increasing human activities over the last few centuries of our history are having hitherto unforeseen effects on all the other components. The most rapidly fluctuating of the natural subsystems is that of climate, and it has been extraordinarily sensitive for the whole of Earth history.

Cartoon metaphor for a ‘tipping point’ as water is added to a bucket pivoted on a horizontal axis. While the water level remains below the axis the bucket stays stable. Once the level rises above this pivot, instability sets in until the system suddenly collapses

Within any dynamic, multifaceted system-component each contributing process may change, and in doing so throw the others out of kilter: there are ‘tipping points’. Such phenomena can be crudely visualised as a pivoted bucket into which water drips and escapes. While the water level remains below the pivot, the system is stable. Once it rises above that axis instability sets in; an external push can, if strong enough, tip the bucket and drain it rapidly. The higher the level rises the less of a push is needed. If no powerful push upsets the system the bucket continues filling. Eventually a state is reached when even a tiny force is able to result in catastrophe. One much-cited hypothesis invokes a tipping point in the global climate system that began to allow the minuscule effect on insolation from changes in the eccentricity of Earth’s orbit to impose its roughly 100 ka frequency on the ups and downs of continental ice volume during the last 800 ka. In a recent issue of Nature a group of climate scientists based in the UK, Sweden, Germany, Denmark, Australia and China published a Comment on several potential tipping points in the climate system (Lenton, T.M. et al. 2019. Climate tipping points — too risky to bet against. Nature, v. 575, p. 592-595; DOI: 10.1038/d41586-019-03595-0). They list what they consider to be the most vulnerable to catastrophic change: loss of ice from the Greenland and Antarctic ice sheets; melting of sea ice in the Arctic Ocean; loss of tropical and boreal forest; melting of permanently frozen ground at high northern latitudes; collapse of tropical coral reefs; ocean circulation in the North and South Atlantic.

The situation they describe makes dismal reading. The only certain aspect is the steadily mounting level of carbon dioxide in the atmosphere, which boosts the retention of solar heat by delaying the escape of long-wave, thermal radiation from the Earth’s surface to outer space through the greenhouse effect. An ‘emergency’ – and there can be little doubt that one or more are just around the corner – is the product of ‘risk’ and ‘urgency’. Risk is the probability of an event times the damage it may cause. Urgency is the reaction time following an alert divided by the time left to intervene before catastrophe strikes. Not a formula designed to make us confident of the ‘powers’ of science! As the commentary points out, whereas scientists are aware of and have some data on a whole series of tipping points, their understanding is insufficient to ‘put numbers on’ these vital parameters. And there may be other tipping points that they are yet to recognise. Another complicating factor is that in a complex system catastrophe in one component can cascade through all the others: one tipping event may set off a ‘domino effect’ on all the rest. An example is the steady and rapid melting of boreal permafrost. Frozen ground contains methane in the solid form of gas hydrate, which will release this ‘super-greenhouse’ gas as melting progresses. Science ‘knows of’ such potential feedback loops in a largely untried, theoretical sense, which is simply not enough.
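The ‘risk’ and ‘urgency’ definitions in the commentary reduce to simple arithmetic, which can be sketched as follows; the numbers in the usage lines are entirely hypothetical, chosen only to show the calculation:

```python
def risk(probability, damage):
    # risk = probability of an event times the damage it would cause
    return probability * damage

def urgency(reaction_time, time_left):
    # urgency = reaction time following an alert divided by the
    # time left to intervene; values near or above 1 leave no slack
    return reaction_time / time_left

def emergency(probability, damage, reaction_time, time_left):
    # an 'emergency' is the product of risk and urgency
    return risk(probability, damage) * urgency(reaction_time, time_left)

# Hypothetical figures: a 50% chance of 10 damage-units, with 5 units
# of reaction time against 10 units left to act
print(emergency(0.5, 10, 5, 10))  # → 2.5
```

The formulation makes the commentary’s point concrete: even a modest-probability event becomes an emergency once the time needed to react approaches the time left to intervene.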

A tipping point that has a direct bearing on those of us who live around the North Atlantic resides in the way that water circulates in that vast basin. ‘Everyone knows about’ the Gulf Stream that ships warm surface water from equatorial latitudes to beyond the North Cape of Norway. It keeps NW Europe, otherwise subject to extremely cold winter temperatures, in a more equable state. In fact this northward flow of surface water and heat exerts controls on aspects of climate of the whole basin, such as the tracking of tropical storms and hurricanes, and the distribution of available moisture and thus rain- and snowfall. But the Gulf Stream also transports extra salt into the Arctic Ocean in the form of warm, more briny surface water. Its relatively high temperature prevents it from sinking, by reducing its density. Once at high latitudes, cooling allows Gulf Stream water to sink to the bottom of the ocean, there to flow slowly southwards. This thermohaline circulation effectively ‘drags’ the Gulf Stream into its well-known course. Should it stop then so would the warming influence and the control it exerts on storm tracks. It has stopped in the past, many times. The general global cooling during the 100 ka that preceded the last ice age witnessed a series of lesser climate events. Each began with a sudden global warming followed by slow but intense cooling, then another warming to terminate these stadials or Dansgaard-Oeschger cycles (see: Review of thermohaline circulation, Earth-logs February 2002). The warming into the Holocene interglacial since about 20 ka was interrupted by a millennium of glacial cold between 12.9 and 11.7 ka, known as the Younger Dryas (see: On the edge of chaos in the Younger Dryas, Earth-logs May 2009). A widely supported hypothesis is that both kinds of major hiccup reflected shut-downs of the Gulf Stream due to sudden influxes of fresh water into North Atlantic surface water that reduced its density and ability to sink.
Masses of fresh water are now flowing into the Arctic Ocean from melting of the Greenland ice sheet and thinning of Arctic sea ice (also a source of fresh water). Should the Greenland ice sheet collapse then similar conditions for shut-down may arise – rapid regional cooling amidst global warming – with similar consequences in the Southern Hemisphere from the collapse of parts of the Antarctic ice sheets and ice shelves. Lenton et al. note that North Atlantic thermohaline circulation has undergone a 15% slowdown since the mid-twentieth century…

See also: Carrington, D. 2019. Climate emergency: world ‘may have crossed tipping points’ (Guardian, 27 November 2019)

Sedimentary deposits of the ‘Anthropocene’

Economic activity since the Industrial Revolution has dug up rock – ores, aggregate, building materials and coal. Holes in the ground are a signature of late-Modern humanity, even the 18th century borrow pits along the rural, single-track road that passes the hamlet where I live. Construction of every canal, railway, road, housing development, industrial estate and land reclaimed from swamps and sea during the last two and a half centuries involved earth and rock being pushed around to level their routes and sites. The world’s biggest machine, aside from CERN’s Large Hadron Collider near Geneva, is Hitachi’s Bertha the tunnel borer (33,000 t), currently driving tunnels for Seattle’s underground rapid transit system. But the record muck shifter is the 14,200 t MAN TAKRAF RB293, capable of moving about 220,000 t of sediment per day, currently in a German lignite mine. The scale of humans as geological agents has grown exponentially. We produce sedimentary sequences, but ones with structures that are very different from those in natural strata. In Britain alone the accumulation of excavated and shifted material has an estimated volume six times that of our largest natural feature, Ben Nevis in NW Scotland. On a global scale 57 billion t of rock and soil is moved annually, compared with the 22 billion t transported by all the world’s rivers. Humans have certainly left their mark in the geological record, even if we manage to reverse terrestrial rapacity and stave off the social and natural collapse that now poses a major threat to our home planet.

A self-propelled MAN TAKRAF bucketwheel excavator (Bagger 293) crossing a road in Germany to get from one lignite mine to another. (Credit: u/loerez, Reddit)

The holes in the ground have become a major physical resource, generating substantial profit for their owners from their infilling with waste of all kinds, dominated by domestic refuse. Unsurprisingly, large holes have become a dwindling resource in the same manner as metal ores. Yet these stupendous dumps contain a great deal of metals and other potentially useful material awaiting recovery in the eventuality that doing so would yield a profit, which presently seems a remote prospect. Such infill also poses environmental threats simply from its composition, which is totally alien compared with common rock and sediment. Three types of infill common in the Netherlands, of which everyone is aware, have recently been assessed (Dijkstra, J.J. et al. 2019. The geological significance of novel anthropogenic materials: Deposits of industrial waste and by-products. Anthropocene, v. 28, Article 100229; DOI: 10.1016/j.ancene.2019.100229). These are: ash from the incineration of household waste; slags from metal smelting; builders’ waste. What unites them, aside from their sheer mass, is the fact that they are each products of high-temperature conditions: anthropogenic metamorphic rocks, if you like. That makes them thermodynamically unstable under surface conditions, so they are likely to weather quickly if they are exposed at the surface or in contact with groundwater. And that poses threats of pollution of soil-, surface- and groundwater.

All are highly alkaline, so they change environmental pH. Ash from waste incineration is akin to volcanic ash in that it contains a high proportion of complex glasses, which easily break down to clays and soluble products. Curiously, old dumps of ash often contain horizons of iron oxides and hydroxides, similar to the ‘iron pans’ in peaty soils. They form at contacts between oxidising and reducing conditions, such as the water table or at the interface with natural soils and rocks. Soluble salts of a variety of trace elements may accumulate, such as copper, antimony and molybdenum. Slags not only contain anhydrous silicates rich in the metals of interest and other trace metals, which on weathering may yield soluble chromium and vanadium, but they also have high levels of calcium-rich compounds from the limestone flux used in smelting, i.e. agents able to create high alkalinity. Portland cement, perhaps the most common material in builders’ waste, is dominated by hydrated calcium-aluminium silicates that break down if the concrete is crushed, again with highly alkaline products. Another component in demolition debris is gypsum from plaster, which can be a source of highly toxic hydrogen sulfide gas generated in anaerobic conditions by sulfate-reducing bacteria.