Should you worry about being killed by a meteorite?

In 1994 Clark Chapman of the Planetary Science Institute in Arizona and David Morrison of NASA’s Ames Research Center in California published a paper that examined the statistical hazard of death by unnatural causes in the United States (Chapman, C. & Morrison, D. Impacts on the Earth by asteroids and comets: assessing the hazard. Nature, v. 367, p. 33–40; DOI: 10.1038/367033a0). Specifically, they tried to place the risk of an individual being killed by a large asteroid (~2 km across) hitting the Earth in the context of more familiar unwelcome causes. Based on the then-available data about near-Earth objects – those whose orbits around the Sun cross that of the Earth – they assessed the chances as ranging between 1 in 3,000 and 1 in 250,000, with 1 in 20,000 being the most likely. The results of their calculations turned out to be pretty scary, though not as bad as dying in a car wreck, being murdered, burnt to death or accidentally shot. At the higher-risk end, asteroid risk is about the same as electrocution, and significantly higher than many other causes with which the American public is, unfortunately, familiar: air crash, flood, tornado and snake bite. Even the lowest asteroid risk (1 in 250,000) is greater than death from fireworks, botulism or trichloroethylene in drinking water; the last being 1 in 10 million.
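The headline figure is easy to reproduce in spirit. Below is a minimal back-of-the-envelope sketch using invented but plausible round numbers – a ~2 km impactor every 500,000 years that kills a quarter of humanity, and a 70-year lifetime – rather than Chapman and Morrison’s actual inputs:

```python
# Back-of-envelope version of the lifetime-risk reasoning behind
# Chapman & Morrison's figures. All numbers are illustrative
# assumptions, not values taken from the paper.
recurrence_years = 500_000   # assumed mean interval between ~2 km impacts
fraction_killed = 0.25       # assumed fraction of humanity killed per impact
lifetime_years = 70          # assumed human lifetime

# Chance that such an impact falls within one lifetime, times the
# chance that it kills any given individual:
p_death = (lifetime_years / recurrence_years) * fraction_killed
print(f"lifetime odds: about 1 in {round(1 / p_death):,}")
```

With these assumptions the odds come out at about 1 in 29,000, comfortably inside the paper’s 1 in 3,000 to 1 in 250,000 bracket.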

Chapman and Morrison cautioned against mass panic on a scale greater than that allegedly triggered by Orson Welles’s 1938 CBS radio production of H.G. Wells’s War of the Worlds. Asteroid and comet impacts are events likely to kill between 5,000 and several hundred million people each time they happen, but they occur infrequently. Catastrophes at the low end, such as the 1908 Tunguska air burst over an uninhabited area of Siberia, are likely to happen about once in a thousand years. At the high end, mass-extinction impacts may occur once every hundred million years. As an Australian might say, ‘No worries, mate!’ But you never know…

Michelle Knapp’s Chevrolet Malibu the morning after a stony-iron meteorite struck it. Bought for US$ 300, the car sold for US$ 25,000 and the meteorite fetched US$ 50,000 (credit: John Bortle)

How about ordinary meteorites, which arrive in their thousands, especially when the Earth’s orbit takes it through the former paths of disintegrating comets? When I was a kid a rumour spread that a motorcyclist had had a narrow escape on the flatlands around Kingston-upon-Hull in East Yorkshire, when a meteorite landed in his sidecar: probably apocryphal. But Michelle Knapp of Peekskill, New York, USA had a job for the body shop when a 12 kg extraterrestrial object hit her Chevrolet Malibu while it was parked in the driveway. In 1954, Ann Hodges of Sylacauga, Alabama was less fortunate during an afternoon nap on her sofa, when a 4 kg chondritic meteorite crashed through her house roof, hit a radiogram and bounced to smash into her upper thigh, badly bruising her. For an object that probably entered the atmosphere at about 15 km s-1, that was indeed a piece of good luck, resulting from air’s viscous drag, the roof impact and energy lost to her radiogram. The offending projectile became a doorstop in the Hodges residence before the family kindly donated it to the Alabama Museum of Natural History. Another fragment of the same meteorite, found in a field a few kilometres away, fetched US$ 728 per gram at Christie’s auction house in 2017. Perhaps the unluckiest man of the 21st century was an Indian bus driver who was killed by debris ejected when a meteorite struck the dirt track on which he was driving in Tamil Nadu in 2016 – three passengers were also injured. Even that case is disputed, some claiming that the cause was an explosive device.
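The arithmetic behind Ann Hodges’s luck is simple kinetic energy. In the sketch below, the 15 km s-1 entry speed comes from the text, while the ~90 m s-1 terminal velocity at the ground is an assumed, typical value for a stone of that size, not a measured one:

```python
# Rough kinetic-energy comparison for the Sylacauga meteorite.
mass_kg = 4.0
v_entry = 15_000.0   # m/s, speed on entering the atmosphere (from the text)
v_ground = 90.0      # m/s, assumed terminal velocity after atmospheric braking

def ke(m, v):
    """Kinetic energy in joules."""
    return 0.5 * m * v**2

print(f"entry:  {ke(mass_kg, v_entry):.2e} J")   # ~4.5e8 J, about 100 kg of TNT
print(f"ground: {ke(mass_kg, v_ground):.2e} J")
print(f"drag and impacts shed {1 - ke(mass_kg, v_ground) / ke(mass_kg, v_entry):.4%} of the energy")
```

Atmospheric drag alone reduces the energy by four orders of magnitude, which is why the stone bruised rather than killed.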

When rain kick-started evolution

The end of the Palaeozoic Era was marked by the greatest known mass extinction, at the Permian-Triassic boundary 252 Ma ago. An estimated 96% of known marine fossil species simply disappeared, as did 70% of vertebrates that lived on land. Many processes seem to have conspired against life on Earth, although one was probably primary: the largest known flood-basalt event, evidence for which lies in the Siberian Traps. It took as long as 50 Ma for ecosystems to return to their former diversity. But, oddly, it was animals at the top of the marine food chain that recovered most quickly, in about 5 million years. There must have been food in the sea, but it was at first somewhat monotonous. The continents were still configured in the Pangaea supercontinent, so much land was far from oceans and thus dry. Oxygen was being drawn down from the atmosphere to combine with iron, forming the Fe2O3 of the vast tracts of redbeds for which the Triassic is famous. From a peak of 30% in the Permian, atmospheric oxygen fell to 16% in the early Triassic, so living even at sea level would have been equivalent to surviving today at 2.7 km elevation. Potential ecological niches were vastly reduced in fertility and in altitude, and Pangaea still had vast mountain ranges inherited from its formative tectonics as well as being arid, apart from in polar regions. Unsurprisingly, recovery of terrestrial diversity, especially among vertebrates, was slow during the early Triassic.
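The altitude equivalence can be sanity-checked with a crude exponential atmosphere – an assumed 8.5 km pressure scale height; the published estimate presumably used a more careful model – by finding the height at which today’s 21% O2 atmosphere delivers the same oxygen partial pressure as a 16% O2 atmosphere at sea level:

```python
# Crude check on the 'sea level felt like altitude' claim, using a simple
# exponential atmosphere: solve 0.21 * exp(-h/H) = 0.16 for h.
import math

o2_today = 0.21        # modern O2 fraction
o2_triassic = 0.16     # early Triassic O2 fraction quoted in the text
scale_height_km = 8.5  # assumed pressure scale height

h = scale_height_km * math.log(o2_today / o2_triassic)
print(f"equivalent altitude today: ~{h:.1f} km")
```

This crude version gives about 2.3 km – the same ballpark as the 2.7 km figure quoted.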

Triassic grey terrestrial sediments on the Somerset coast of SW England (credit: Margaret W. Carruthers; https://www.flickr.com/photos/64167416@N03/albums/72157659852255255)

Then, about halfway through the Triassic Period, it began to rain across Pangaea. Whether that was continual or seasonal is uncertain, although the presence of large mountains and high plateaus would favour monsoon circulation, akin to the present-day Indian monsoon associated with the Himalaya and Tibetan Plateau. How do geologists know that central Pangaea became wetter? The evidence lies in grey sedimentary strata between the otherwise universal redbeds, which occur in the Carnian Age and span one to two million years around 232 Ma (Marshall, M. 2019. Did a million years of rain jump-start dinosaur evolution? Nature, v. 576, p. 26-28; DOI: 10.1038/d41586-019-03699-7). A likely driver for this change in colour is a rise in water tables that would have excluded oxygen from recently deposited sediments. The red iron(III) oxides were reduced, so that soluble iron(II) was leached out. Some marine groups, such as crinoids, underwent a sudden flurry of extinctions, as did plants and amphibians on land. But others received a clear boost from this Carnian Pluvial Event. A few dinosaurs first appear in older Triassic sediments, but during the Carnian they began to diversify from diminutive bipedal species into the main groups so familiar to many: the ornithischians that led to Stegosaurus and Triceratops, and the forerunners of the saurischians, which included the huge long-necked herbivores as well as the bipedal theropods and birds. Within 4 Ma dinosaurs had truly begun their global hegemony. Offshore in shallow seas, the scleractinian corals, which dominate modern coral reef systems, also exploded during the Carnian from small beginnings in the earlier Triassic. It is even suspected that the Carnian nurtured the predecessors of mammals, although the evidence is only from isolated fossil teeth.

A Carnian shift in carbon isotopes, measured in Triassic limestones of the Italian Dolomites, to lower proportions of the heavier 13C suggests that a huge volume of the lighter 12C had entered the atmosphere. That could have resulted from large-scale volcanism, the 232 Ma old lavas of the Wrangell Mountains in Alaska being a likely suspect. Such an input would have had a warming climatic effect that would have increased tropical evaporation of ocean water and the humidity over continental masses. The once ecologically monotonous core of Pangaea may have diversified into many more niches awaiting occupants, thereby stimulating the terrestrial evolutionary burst. Perhaps ironically, and fortunately, a volcanic near snuffing-out of life on Earth was soon followed by another volcanic episode with the opposite effect. Yet another negative outcome arrived with the flood basalts of the Central Atlantic Magmatic Province at the end of the Triassic (201 Ma), to be followed by further adaptive radiation among those organisms that survived into the Jurassic.

Why did anatomically modern humans replace Neanderthals?

Extinction of the Neanderthals has long been attributed to pressure on resources following the first influx into Europe of bands of anatomically modern humans (AMH), and perhaps to different uses of the available resources by the two groups. One often-quoted piece of evidence comes from the outermost layer in the teeth of deer. Most ruminants continually add tooth enamel to make up for wear, winter additions being darker than those of summer. Incidentally, the resulting layering gives away an animal’s age, as in, ‘Never look a gift horse in the mouth’! Deer teeth associated with Neanderthal sites show that the animals were killed throughout the year. Those around AMH camps are either summer or winter kills. The implication is that AMH were highly mobile, whereas Neanderthals had fixed hunting ranges whose resources would have been depleted by passing AMH bands. That may be so, but another possibility has received more convincing support.

Neanderthal populations across their range from Gibraltar to western Siberia were extremely low and band sizes seem to have been small, even before AMH made their appearance. This may have been critical in their demise, based on considerations that arise from attempts to conserve threatened species today (Vaesen, K. et al. 2019. Inbreeding, Allee effects and stochasticity might be sufficient to account for Neanderthal extinction. PLoS One, v. 14, article e0225117; DOI: 10.1371/journal.pone.0225117). The smaller and more isolated groups are, the more likely they are to resort to inbreeding in the absence of close-by potential mates. There is evidence from Neanderthal DNA that such endogamy was practised. Long-term interbreeding between genetic relatives among living human groups is known to result in decreased fitness as deleterious traits accumulate. On top of that, very low population density makes finding mates, closely related or not, difficult (the Allee effect). One result is akin to the modern tendency of young people born in remote areas to leave, so that the local population falls and becomes more elderly. The remaining elders face difficulties in assembling hunting and foraging parties; i.e. in keeping the community going. Many Neanderthal skeletons show signs of extremely hard, repetitive physical effort and of senescence; e.g. loss of teeth and evidence of having to be cared for by others. Both factors are exacerbated in small communities by fluctuating birth and death rates and skewed gender ratios, far more than in larger ones; i.e. random events have a far greater overall effect (stochasticity). Krist Vaesen and colleagues from the Netherlands use two modern demographic techniques that encapsulate these tendencies to model Neanderthal populations over 10,000 years.
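A toy simulation – emphatically not Vaesen et al.’s model, with all rates and thresholds invented for illustration – shows how an Allee effect plus demographic stochasticity alone can doom small populations while sparing large ones:

```python
# Toy demonstration of demographic stochasticity plus an Allee effect.
# Not Vaesen et al.'s model: every rate here is invented for illustration.
import random

def sample_binomial(rng, n, p):
    """Normal approximation to Binomial(n, p) -- adequate for a toy model."""
    mu, sigma = n * p, (n * p * (1 - p)) ** 0.5
    return max(0, round(rng.gauss(mu, sigma)))

def extinction_fraction(n0, generations=400, trials=200, seed=42):
    """Fraction of simulated populations dying out within ~10,000 years
    (400 generations of ~25 years each)."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(trials):
        n = n0
        for _ in range(generations):
            if n < 2:                       # no potential mates left
                extinct += 1
                break
            # Allee effect: per-capita birth chance falls in sparse populations
            p_birth = 0.5 * n / (n + 20)
            births = sample_binomial(rng, n, p_birth)
            deaths = sample_binomial(rng, n, 0.45)
            n = min(n + births - deaths, 5 * n0)  # crude resource ceiling
    return extinct / trials

print("starting from  25:", extinction_fraction(25))
print("starting from 500:", extinction_fraction(500))
```

With these made-up numbers the small founding population almost always dies out within the simulated window, while the large one almost never does, despite both facing identical per-capita hazards.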

By themselves, none of the likely factors should have driven Neanderthals to extinction. But in combination they may well have done so, even if modern humans hadn’t arrived around 40 ka. Completely external events, such as epidemics or sudden climate change, would have made little difference. Indeed, the very isolation of Neanderthal bands over their vast geographic range would have shielded them from infection, and they had been able to survive almost half a million years of repeated climate crises. If their numbers were always small, that raises the question of how they survived for so long. The authors suggest that they simply ran out of luck, in the sense that their precariousness finally came up against a rare blend of environmental fluctuations that ‘stacked the odds’ against them. It is possible that interactions with small numbers of AMH migrants, involving neither competition nor hostility, may have tipped the balance. A possibility not mentioned in the paper, perhaps because it is speculation rather than modelling, is social fusion of the two groups and interbreeding. Perhaps the Neanderthals disappeared because of hybridisation through the choice of new kinds of mate. Some closely related modern species are under threat for that very reason. Although an individual living non-African human carries little more than 3% of Neanderthal genetic material, it has been estimated that a very large proportion of the Neanderthal genome survives, distributed mainly among the population of Eurasia. For that to have happened suggests that interbreeding was habitual and perhaps a popular option.

See also: Sample, I. 2019. Bad luck may have caused Neanderthals’ extinction – study. (Guardian 27 November 2019)

Risks of sudden changes linked to climate

The Earth system comprises a host of dynamic, interwoven components or subsystems. They involve processes deep within Earth’s interior, at its surface and in the atmosphere. Such processes combine inorganic chemistry, biology and physics. To describe them properly would require a multi-volume book, indeed an entire library, and even that would be more incomplete than our understanding of human history and all the other social sciences. Cut to its fundamentals, Earth system science deals with – or tries to deal with – a planetary engine. In it, the available energy from inside the Earth and from the Sun is continually shifted around to drive the bewildering variety, multiplicity of scales and variable paces of every process that makes our planet the most interesting thing in the entire universe. It has done so, with a variety of hiccups and monumental transformations, for some four and a half billion years and looks likely to continue on its roiling way for about five billion more – with or without humanity. Though we occupy a tiny fraction of its history, we have introduced a totally new subsystem that in several ways outpaces the speed and magnitude of some chemical, physical and organic processes. For example: shifting mass (see Sedimentary deposits of the ‘Anthropocene’ below); removing and modifying vegetation cover; emitting vast amounts of various compounds as a result of economic activity – the full list is huge. In such a complex natural system it is hardly surprising that rapidly increasing human activities in the last few centuries have had hitherto unforeseen effects on all the other components. The most rapidly fluctuating of the natural subsystems is climate, and it has been extraordinarily sensitive for the whole of Earth history.

Cartoon metaphor for a ‘tipping point’ as water is added to a bucket pivoted on a horizontal axis. As the water level rises towards the axis the bucket becomes increasingly stable. Once the level rises above the pivot, instability sets in until the system suddenly collapses

Within any dynamic, multifaceted system each contributing process may change and, in doing so, throw the others out of kilter: there are ‘tipping points’. Such phenomena can be crudely visualised as a pivoted bucket into which water drips and from which it escapes. While the water level remains below the pivot, the system is stable. Once it rises above that axis instability sets in; an external push can, if strong enough, tip the bucket and drain it rapidly. The higher the level rises, the smaller the push that is needed. If no powerful push upsets the system the bucket continues filling. Eventually a state is reached when even a tiny force can result in catastrophe. One much-cited hypothesis invokes a tipping point in the global climate system that began to allow the minuscule effect on insolation of changes in the eccentricity of Earth’s orbit to impose its roughly 100 ka frequency on the ups and downs of continental ice volume during the last 800 ka. In a recent issue of Nature a group of climate scientists based in the UK, Sweden, Germany, Denmark, Australia and China published a Comment on several potential tipping points in the climate system (Lenton, T.M. et al. 2019. Climate tipping points — too risky to bet against. Nature, v. 575, p. 592-595; DOI: 10.1038/d41586-019-03595-0). They list what they consider to be the most vulnerable to catastrophic change: loss of ice from the Greenland and Antarctic ice sheets; melting of sea ice in the Arctic Ocean; loss of tropical and boreal forest; melting of permanently frozen ground at high northern latitudes; collapse of tropical coral reefs; and ocean circulation in the North and South Atlantic.

The situation they describe makes dismal reading. The only certain aspect is the steadily mounting level of carbon dioxide in the atmosphere, which boosts the retention of solar heat by delaying the escape of long-wave, thermal radiation from the Earth’s surface to outer space: the greenhouse effect. An ‘emergency’ – and there can be little doubt that one or more are just around the corner – is the product of ‘risk’ and ‘urgency’. Risk is the probability of an event times the damage it may cause. Urgency is the reaction time following an alert divided by the time left to intervene before catastrophe strikes. Not a formula designed to make us confident in the ‘powers’ of science! As the commentary points out, although scientists are aware of and have some data on a whole series of tipping points, their understanding is insufficient to ‘put numbers on’ these vital parameters. And there may be other tipping points that they are yet to recognise. Another complicating factor is that in a complex system catastrophe in one component can cascade through all the others: one tipping point may set off a ‘domino effect’ among the rest. An example is the steady and rapid melting of boreal permafrost. Frozen ground contains methane in the solid form of gas hydrate, which will release this ‘super-greenhouse’ gas as melting progresses. Science ‘knows of’ such potential feedback loops in a largely untried, theoretical sense, which is simply not enough.
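That bookkeeping can be written as a one-line formula: emergency E = risk × urgency, with risk = probability × damage and urgency = reaction time ÷ time left. The sketch below encodes it; the sample numbers are invented purely to show the behaviour:

```python
# Lenton et al.'s 'emergency' bookkeeping as a function.
# E = R * U, where R = probability * damage and
# U = reaction time / time left to intervene.

def emergency(probability, damage, reaction_time, time_left):
    risk = probability * damage           # expected damage
    urgency = reaction_time / time_left   # >= 1 means we cannot react in time
    return risk * urgency

# Invented numbers: if decarbonising takes ~30 years and a tipping point
# is ~30 years away, urgency is already 1 and the emergency equals the
# full risk.
print(emergency(probability=0.1, damage=100.0, reaction_time=30, time_left=30))
```

The point of the formula is that the emergency blows up as the remaining time shrinks, regardless of how modest the risk term looks.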

A tipping point that has a direct bearing on those of us who live around the North Atlantic resides in the way water circulates in that vast basin. ‘Everyone knows about’ the Gulf Stream, which ships warm surface water from equatorial latitudes to beyond the North Cape of Norway. It keeps NW Europe, otherwise subject to extremely cold winter temperatures, in a more equable state. In fact this northward flow of surface water and heat exerts controls on aspects of the climate of the whole basin, such as the tracking of tropical storms and hurricanes, and the distribution of available moisture and thus rain- and snowfall. But the Gulf Stream also transports extra salt into the Arctic Ocean in the form of warm, more briny surface water. Its relatively high temperature reduces its density and prevents it from sinking. Once at high latitudes, cooling allows Gulf Stream water to sink to the bottom of the ocean, there to flow slowly southwards. This thermohaline circulation effectively ‘drags’ the Gulf Stream into its well-known course. Should it stop, then so would the warming influence and the control it exerts on storm tracks. It has stopped many times in the past. The general global cooling during the 100 ka that preceded the last ice age witnessed a series of lesser climate events. Each began with a sudden global warming followed by slow but intense cooling, then another warming that terminated these stadials, or Dansgaard-Oeschger cycles (see: Review of thermohaline circulation, Earth-logs February 2002). The warming into the Holocene interglacial since about 20 ka was interrupted by a millennium of glacial cold between 12.9 and 11.7 ka, known as the Younger Dryas (see: On the edge of chaos in the Younger Dryas, Earth-logs May 2009). A widely supported hypothesis is that both kinds of major hiccup reflected shutdowns of the Gulf Stream due to sudden influxes of fresh water into North Atlantic surface water that reduced its density and its ability to sink.
Masses of fresh water are now flowing into the Arctic Ocean from the melting of the Greenland ice sheet and the thinning of Arctic sea ice (also a source of fresh water). Should the Greenland ice sheet collapse, similar conditions for shutdown may arise – rapid regional cooling amidst global warming – with similar consequences in the Southern Hemisphere from the collapse of parts of the Antarctic ice sheets and ice shelves. Lenton et al. note that North Atlantic thermohaline circulation has undergone a 15% slowdown since the mid-twentieth century…

See also: Carrington, D. 2019. Climate emergency: world ‘may have crossed tipping points’ (Guardian, 27 November 2019)

Sedimentary deposits of the ‘Anthropocene’

Economic activity since the Industrial Revolution has involved digging up rock – ores, aggregate, building materials and coal. Holes in the ground are a signature of late-Modern humanity, even the 18th-century borrow pits along the rural, single-track road that passes the hamlet where I live. Construction of every canal, railway, road, housing development and industrial estate, and of land reclaimed from swamps and sea, during the last two and a half centuries has involved earth and rock being pushed around to level routes and sites. The world’s biggest machine, aside from CERN’s Large Hadron Collider near Geneva, is Hitachi Zosen’s Bertha (33,000 t), the tunnel borer that drove the Highway 99 road tunnel beneath Seattle. But the record muck-shifter is the 14,200 t MAN TAKRAF Bagger 293 bucketwheel excavator, capable of moving about 220,000 t of sediment per day, currently at work in a German lignite mine. The scale of humans as geological agents has grown exponentially. We produce sedimentary sequences, but ones with structures very different from those of natural strata. In Britain alone the accumulation of excavated and shifted material has an estimated volume six times that of our largest natural feature, Ben Nevis in Scotland. On a global scale 57 billion t of rock and soil are moved annually, compared with the 22 billion t transported by all the world’s rivers. Humans have certainly left their mark in the geological record, even if we manage to reverse our terrestrial rapacity and stave off the social and natural collapse that now poses a major threat to our home planet.
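A quick scale check on those figures, assuming (unrealistically) that a Bagger 293 runs flat out all year round:

```python
# Arithmetic on the muck-shifting figures quoted above. The year-round
# operation of a single excavator is an idealising assumption.
rb293_t_per_day = 220_000
rivers_t_per_year = 22e9
humans_t_per_year = 57e9

rb293_t_per_year = rb293_t_per_day * 365
print(f"one Bagger 293 per year: {rb293_t_per_year / 1e6:.0f} Mt")
print(f"excavators needed to match all rivers: {rivers_t_per_year / rb293_t_per_year:.0f}")
print(f"humans move {humans_t_per_year / rivers_t_per_year:.1f}x the rivers' load")
```

So a single such machine shifts roughly 80 Mt a year, it would take nearly 300 of them to match the world’s rivers, and humanity as a whole already out-digs those rivers by a factor of about 2.6.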

A self propelled MAN TAKRAF bucketwheel excavator (Bagger 293) crossing a road in Germany to get from one lignite mine to another. (Credit: u/loerez, Reddit)

The holes in the ground have become a major physical resource in themselves, generating substantial profit for their owners from their infilling with waste of all kinds, dominated by domestic refuse. Unsurprisingly, large holes have become a dwindling resource in the same manner as metal ores. Yet these stupendous dumps contain a great deal of metal and other potentially useful material awaiting recovery in the eventuality that doing so would yield a profit, which presently seems a remote prospect. Such infill also poses environmental threats simply because of its composition, which is totally alien compared with common rock and sediment. Three types of infill common in the Netherlands, familiar to everyone, have recently been assessed (Dijkstra, J.J. et al. 2019. The geological significance of novel anthropogenic materials: Deposits of industrial waste and by-products. Anthropocene, v. 28, Article 100229; DOI: 10.1016/j.ancene.2019.100229). These are: ash from the incineration of household waste; slags from metal smelting; and builders’ waste. What unites them, aside from their sheer mass, is that they are each products of high-temperature conditions: anthropogenic metamorphic rocks, if you like. That makes them thermodynamically unstable under surface conditions, so they are likely to weather quickly if they are exposed at the surface or come into contact with groundwater. And that poses threats of pollution to soil, surface water and groundwater.

All are highly alkaline, so they change environmental pH. Ash from waste incineration is akin to volcanic ash in that it contains a high proportion of complex glasses, which easily break down to clays and soluble products. Curiously, old dumps of ash often contain horizons of iron oxides and hydroxides, similar to the ‘iron pans’ in peaty soils. They form at contacts between oxidising and reducing conditions, such as the water table or the interface with natural soils and rocks. Soluble salts of a variety of trace elements, such as copper, antimony and molybdenum, may accumulate. Slags not only contain anhydrous silicates rich in the metals of interest and other trace metals, which on weathering may yield soluble chromium and vanadium, but they also carry high levels of calcium-rich compounds from the limestone flux used in smelting, i.e. agents able to create high alkalinity. Portland cement, perhaps the most common material in builders’ waste, is dominated by hydrated calcium-aluminium silicates that break down if the concrete is crushed, again with highly alkaline products. Another component of demolition debris is gypsum from plaster, which can be a source of highly toxic hydrogen sulfide gas generated under anaerobic conditions by sulfate-reducing bacteria.

Extraterrestrial sugar

The coding schemes for Earth’s life and evolution (DNA and RNA), its major building blocks and its basic metabolic processes have various sugars at their hearts. How they arose boils down to two possibilities: either they were produced right here by the most basic, prebiotic processes, or they were supplied from interplanetary or interstellar space. All kinds of simple carbon-based compounds turn up in spectral analyses of regions of star formation, or giant molecular clouds: CN, CO, C2H and H2CO, up to molecules of 10 or more atoms such as benzonitrile (C6H5CN). Even a simple amino acid (glycine, NH2CH2COOH) shows up in a few nearby giant molecular clouds. Brought together in close proximity, instead of being dispersed through huge volumes of near-vacuum, such ingredients could undergo a riot of abiotic organic chemical reactions. Indeed, complex products of such reactions are abundant in carbonaceous meteorites, whose parent asteroids formed early in the history of the solar system. Some contain a range of amino acids though not, so far, the five bases on which genetics depends: adenine, cytosine, guanine and thymine in DNA (thymine being replaced by uracil in RNA). Yet, surprisingly, even simple sugars have remained elusive in both molecular clouds and meteorites.

Artist’s impression of the asteroid belt from which most meteorites are thought to originate (Credit: NASA/JPL)

A recent paper has broken through that particular barrier (Furukawa, Y. et al. 2019. Extraterrestrial ribose and other sugars in primitive meteorites. Proceedings of the National Academy of Sciences, online; DOI: 10.1073/pnas.1907169116). Yoshihiro Furukawa and colleagues analysed three carbonaceous chondrites and discovered traces of four types of sugar. It seems that sugar compounds had remained elusive because those now detected are at concentrations thousands of times lower than those of amino acids. Contamination by terrestrial sugars that may have entered the meteorites when they slammed into soil is ruled out by their carbon isotope ratios, which are very different from those in living organisms. One of the sugars is ribose, a building block of RNA (DNA needs deoxyribose). Though a small discovery, it has great significance for the possibility that the components needed for living processes formed in the early Solar System. Moon formation by giant impact shortly after accretion of the proto-Earth would almost certainly have destroyed such organic precursors. So, if the Earth’s surface was chemically ‘seeded’ in this way, it is more likely to have occurred later, perhaps during the Late Heavy Bombardment 4.1 to 3.8 billion years ago (see: Did mantle chemistry change after the late heavy bombardment? in Earth-logs September 2009).

Early human migrations in southern Africa

Comparing the DNA profiles of living people indigenous to different parts of the world has achieved a lot as regards tracing the migrations of their ancestors, and the amalgamations between and separations from different genetic groups along the way. Most such analyses have centred on alleles in DNA from mitochondria (maternal) and Y chromosomes (paternal), and depend on the assumption that rates of mutation (specifically of those mutations that have neither negative nor positive outcomes) in both remain constant over tens of thousands of years of genetic intermixing through reproduction. Both provide plausible hypotheses of where migrations began, the approximate routes that they took and the timing of both departures from and arrivals at different locations en route. Most studies have focused on the ‘Out of Africa’ migration, which began, according to the latest data, around 80 ka ago. Arrival times at various locations differ considerably, from around 60 ka for the indigenous populations of Australia and New Guinea, to roughly 40 ka for Europe and ~12 ka for the Americas. Yet an often overlooked factor is that not all migrating groups have descendants that are alive today. For instance, remains of anatomically modern humans (AMH) have been found in sediments in the Levant as old as 177 ka (see: Earliest departure of modern humans from Africa, January 2018), and between 170 and 210 ka in southern Greece (see: Out of Africa: The earliest modern human to leave). Neither has yielded ancient DNA, nor are their arrival times compatible with the ‘route mapping’ provided by genetic studies of living people. Such groups became extinct and left no traceable descendants, and there were probably many more that await discovery. Maybe these mysteries will be penetrated by DNA from the ancient bones, should extracting it prove possible.

The recorded history of AMH within Africa began between 286 and 315 ka ago in Morocco (see: Origin of anatomically modern humans, June 2017), and their evolutionary development may have spanned much of the continent, judging by previously discovered fossils in Ethiopia and South Africa that are older than 200 ka. Again, ancient DNA has not been extracted from the oldest fossils; nor is that likely to be possible, because the double helix breaks down quickly in hot and humid climates. Genetic data from living Africans are growing quickly. An additional 198 African mtDNA genomes reported recently have pushed up the total available for analysis, the bulk of them from eastern and southern Africa (Chan, E.K.F. and 11 others 2019. Human origins in a southern African palaeo-wetland and first migrations. Nature, v. 575, p. 185-189; DOI: 10.1038/s41586-019-1714-1). The study focuses on data from the KhoeSan ethnic group, restricted to areas south of the Zambezi River, who speak a language with distinctive click consonants. Some KhoeSan still practise a hunter-gatherer lifestyle. Previous genetic studies showed the KhoeSan to differ markedly from other inhabitants of southern Africa, and they are widely regarded as having inhabited the area for far longer than any other group. A sign of this emerges from their mtDNA, in a genetic lineage signified as L0. Comparing KhoeSan mtDNA with the wider genetic database allowed the researchers to plot a ‘family tree’. Measures of the degree of difference between samples push back the origin of L0, and of the KhoeSan themselves, to roughly 200 ka.
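The logic of such molecular-clock dating is straightforward, even though the real analysis is far more elaborate. The sketch below uses an assumed mutation rate and an assumed count of sequence differences, chosen only to illustrate how a ~200 ka figure can emerge; they are not values from Chan et al.:

```python
# Minimal molecular-clock calculation. The mutation rate and difference
# count are illustrative assumptions, not figures from the paper.
mt_genome_sites = 16_569   # length of the human mtDNA genome in base pairs
mu = 2.5e-8                # assumed substitutions per site per year
differences = 166          # assumed differences between two sampled lineages

# Two lineages diverge from a common ancestor, so differences accumulate
# along both branches: d = 2 * mu * t * sites. Solve for t:
t_years = differences / (2 * mu * mt_genome_sites)
print(f"estimated divergence: ~{t_years / 1000:.0f} ka")
```

The headline number is only ever as good as the assumed clock rate, which is one reason such dates carry wide error bars.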

The Okavango Delta today during the wet season (Credit: Wikimedia Commons)

It turns out that the L0 lineage has several variants, whose geographic distributions allow the approximate place of origin of the lineage, and the directions of later migration from it, to be mapped. It seems that L0 was originally indigenous to the area of the modern Okavango Delta and Makgadikgadi salt flats of Botswana. People carrying the original (L0k) variant are estimated to have remained in that broad area for about 70 thousand years. During that time it was all lush, low-lying wetland around a huge, now vanished lake. The hydrology of the area was dramatically disrupted by regional tectonic activity at around 60 ka. The lake simply evaporated to form the salt pan of the Makgadikgadi, leaving only the seasonal Okavango Delta as a destination for flood water. People carrying L0k stayed in the original homeland whereas others moved on. Migration routes to the northeast and towards the southwest and south are crudely mapped by the distribution of the other L0 variants among modern populations. The migrants followed ‘green corridors’ between 130 and 110 ka, the collapse of the ecosystem leaving a small group of the founding population isolated from its descendants.

The paper claims that the former Botswana wetlands were the cradle of the first modern humans. Perhaps they were in southern Africa, but other, older AMH remains found far away – and perhaps others yet to be discovered elsewhere – make a wider origin more likely. That can only be reconciled with the KhoeSan study by extracting ancient DNA from fossils. Criticism of the paper’s sweeping claims has already been voiced, both on these grounds and because the study lacks data on paternal DNA or whole genomes from the sampled population.

See also: Gibbons, A. 2019. Experts question study claiming to pinpoint birthplace of all humans. Science (online); DOI: 10.1126/science.aba0155

Tracing hominin evolution further back

The earliest hominin known from Africa is Sahelanthropus tchadensis, announced in 2002 by Michel Brunet and his team working in 7 Ma old Miocene sediments deposited by the predecessor to Lake Chad in the central Sahara Desert. Only cranial bones were present. From the rear, the skull and cranial capacity resembled what might have been regarded as an early relative of chimpanzees. But its face and teeth look very like those of an australopithecine. Sadly, the foramen magnum – where the cranium is attached to the spine – was not well preserved, and leg bones were missing. The position of the foramen magnum is a clue to posture: forward of the base of the skull would suggest an habitual upright posture, towards the rear being characteristic of knuckle walkers. Some authorities, including Brunet, believe Sahelanthropus may have been upright, but others strongly contest that. The angle of the neck-and-head ball joint of the femur (thigh bone), where the leg is attached to a socket on the pelvis to form the hip joint, is a clue to both posture and gait. The earliest clear sign of an upright, bipedal gait is the femur of a fossil primate from Africa – about a million years younger than Sahelanthropus – found in the Tugen Hills of Kenya. Orrorin tugenensis was described from 20 bone fragments, making up a bit of the other femur; three hand bones; a fragment of the upper arm (humerus); seven teeth; and part of the left and right sides of a lower jawbone (mandible). Apart from the femur, which retains a neck and head and signifies an upright gait, only the teeth offer substantial clues. Orrorin has a dentition similar to that of humans apart from ape-like canines, though these are significantly smaller – all known hominins lack large canines relative to their other teeth. Despite being almost 2 Ma older than Ardipithecus ramidus, the first clearly bipedal hominin, Orrorin is more similar to humans than either it or Australopithecus afarensis, Lucy’s species.

Near-complete skeleton of Oreopithecus bambolii from Italy (credit: Wikimedia Commons)

DNA differences suggest that human evolution split from that of chimpanzees about 12 Ma ago. Yet the earlier Miocene stratigraphy of Africa has yet to provide a shred of evidence for earlier members of either lineage, or for a plausible last common ancestor of both. In 1872, a year after publication of Charles Darwin’s The Descent of Man, parts of an extinct primate were found in Miocene sediments in Tuscany and Sardinia, Italy. In 1950 an almost complete skeleton was unearthed and named Oreopithecus bambolii (see Hominin evolution becoming a thicket, January 2013). Despite dozens of specimens having been found in different localities, the creature was largely ignored in subsequent debate about human origins until 1990, when it was discovered that not only could Oreopithecus walk on two legs, albeit differently from humans, but it also had relatively small canine teeth and hands like those of hominins, capable of a precision grip. Dated at 7 to 9 Ma, it may lie further back on the descent path of hominins; but it lived in Europe, not Africa. Now the plot has thickened, for another primate has emerged from a clay pit in Bavaria, Germany (Böhme, M. and 8 others 2019. A new Miocene ape and locomotion in the ancestor of great apes and humans. Nature, online publication; DOI: 10.1038/s41586-019-1731-0).

Bones from 4 Danuvius guggenmosi individuals. Note the diminutive sizes compared with living apes (Credit: Christoph Jäckle)

Danuvius guggenmosi lived 11.6 Ma ago and its fossilised remains represent four individuals. Both femurs and a tibia (lower leg), together with the upper arm bones, are preserved. The femurs and vertebrae strongly suggest that Danuvius could walk on two legs; indeed, the vertebral shapes indicate that it had a flexible spine, essential for balance by supporting the weight of the torso over the pelvis. It also had long arms, pointing to its likely hanging in and brachiating through tree canopies. Maybe it had the benefit of two possible lifestyles: arboreal and terrestrial. Its discoverers do not go that far, suggesting that it probably lived entirely in trees, using both forms of locomotion in ‘extended limb clambering’. It may not have been alone: another, younger European primate found in the Miocene of Hungary, Rudapithecus hungaricus, may also have had similar clambering abilities, as might Oreopithecus.

There is sure to be a great deal of head scratching among palaeoanthropologists, now that three species of Miocene primate seem – for the moment – to possess ‘prototype specifications’ for early entrants on the evolutionary path to definite hominins. Questions to be asked are: ‘If so, how did any of them cross the geographic barrier to Africa, i.e. the Mediterranean Sea?’; ‘Did the knuckle-walking chimps evolve from a bipedal common ancestor shared with hominins?’; ‘Did bipedalism arise several times?’. The first may not have been as difficult as it might seem (see Africa-Europe exchange of faunas in the Late Miocene, July 2013). The Betic Seaway that once separated Iberia from NW Africa, in a similar manner to the modern Straits of Gibraltar, closed during the Miocene after a ‘mild’ tectonic collision that threw up the Betic Cordillera of southern Spain. Between 5.6 and 5.3 Ma there was a brief ‘window of opportunity’ for the crossing, which ended with one of the most dramatic events in the Cenozoic Era: the Zanclean Flood, when the Atlantic burst through what is now the Straits of Gibraltar cataclysmically to refill the Mediterranean.

See also: Barras, C. 2019. Ancient ape offers clues to evolution of two-legged walking. Nature, v. 575, online; Kivell, T.L. 2019. Fossil ape hints at how walking on two feet evolved. Nature, v. 575, online; DOI: 10.1038/d41586-019-03347-0

How permanent is the Greenland ice sheet?

About 80% of the world’s largest island is sheathed in glacial ice up to 3 km thick, amounting to 2.85 million km3. Though only a tenth as large as the Antarctic ice sheet, it could still add over 7 m to global sea level if it melted completely, compared with 58 m should Antarctica suffer the same fate. Antarctica accumulated glacial ice from about 34 to 24 million years ago during the Oligocene Epoch, deglaciated to become largely ice free until about 12 Ma, and then assumed a permanent, albeit fluctuating, ice cap until today. In contrast, Greenland only became cold enough to support semi-permanent ice cover from about 2.4 Ma, during the late-Pliocene to present episode of ice-age and interglacial cycles. The base of the GRIP ice core from central Greenland has been dated at 1 Ma old, but such is the speed of ice movement, driven by far higher snow precipitation than in Antarctica, that it is possible that basal ice is shifted seawards. The deepest layers recovered by drilling have lost their annual layering as a result of ice’s tendency to deform in a plastic fashion, so they do not preserve detailed glacial history before about 110 ka. In contrast, the more slowly accumulating and more sluggishly moving Antarctic ice records over 800 ka of climatic cyclicity in continuous cores, and has yielded 2.7 Ma old blue ice exposed at the surface with another 2 km lying beneath it.
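The 7 m figure can be checked with back-of-envelope arithmetic: convert the ice volume to its seawater equivalent and spread it over the ocean surface. A rough sketch, using round-number densities and ocean area as assumed values and ignoring complications such as floating ice and changing basin shape:

```python
# Back-of-envelope sea-level equivalent of an ice sheet.
# Densities and ocean area are round-number assumptions.

ICE_DENSITY = 917.0        # kg/m^3
SEAWATER_DENSITY = 1027.0  # kg/m^3
OCEAN_AREA_KM2 = 3.62e8    # global ocean surface area, km^2

def sea_level_rise_m(ice_volume_km3):
    """Metres of sea-level rise if the given ice volume melted."""
    water_equivalent_km3 = ice_volume_km3 * ICE_DENSITY / SEAWATER_DENSITY
    rise_km = water_equivalent_km3 / OCEAN_AREA_KM2
    return rise_km * 1000.0

# Greenland: 2.85 million km^3 of ice comes out at about 7 m.
print(f"Greenland: {sea_level_rise_m(2.85e6):.1f} m")  # Greenland: 7.0 m
```

The same arithmetic applied naively to Antarctica overshoots the quoted 58 m, because much of its ice is floating or grounded below sea level and so contributes little or nothing when it melts.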

However, sediments at the base of two ice cores from Greenland have raised the possibility of periods when the island was free of ice. One such example is from an early core drilled to a depth of 1390 m beneath Camp Century, the 1960s US military nuclear weapons base. It helped launch the use of continental ice as a repository of Earth’s recent climatic history, at a far better resolution than sediment cores from the ocean floors provide. The core languished in cold storage after it was transferred from the US to the University of Copenhagen. Recently, samples from the bottom 3 m of sediment-rich ice were rediscovered in glass jars. A workshop centring on this seemingly unprepossessing material took place in the last week of October 2019 at the University of Vermont, USA (Voosen, P. 2019. Mud in stored ice core hints at thawed Greenland. Science, v. 366, p. 556-557; DOI: 10.1126/science.366.6465.556).

Sediment recovered from the base of the Camp Century core through the Greenland ice sheet (credit: Jean-Louis Tison, Free University of Brussels)

To the participants’ astonishment, among the pebbles and sand were fragments of moss and woody material. It was not glacial till, but a soil: Greenland had once lost its ice cover. Measurement of the radioactive isotopes 26Al and 10Be, which form when cosmic rays pass through exposed sand grains, revealed that the once-vegetated soil had formed at about 400 ka. Preliminary DNA analyses of preserved plant material indicate species that would have thrived at around 10°C. Samples have been shared widely for comprehensive analysis to reconstruct the kind of surface environment that developed during the 400 ka interglacial. Also, Greenland may have been bare of ice during several such relatively warm intervals, so other cores to the base of the ice may be in the funding pipeline. But most interest centres on the implications of a period of rapid anthropogenic climatic warming that may take Arctic temperatures above those that melted the Greenland ice sheet 400 ka ago.
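The power of pairing 26Al with 10Be lies in their different half-lives: both are produced in exposed quartz at a roughly fixed ratio, and once the grains are buried beneath ice, cosmic-ray production stops and the ratio decays at a known rate. A hedged sketch of the burial-dating arithmetic follows; the half-lives and surface production ratio are standard published values, but the ‘measured’ ratio is an assumption chosen for illustration, not data from the Camp Century core.

```python
import math

# Burial-age sketch from a measured 26Al/10Be ratio. The measured
# ratio here is an illustrative assumption, not Camp Century data.

HALF_LIFE_AL26 = 0.705e6   # years
HALF_LIFE_BE10 = 1.387e6   # years
PRODUCTION_RATIO = 6.75    # 26Al/10Be in quartz at the exposed surface

def burial_age(measured_ratio):
    """Years of burial, from decay of the 26Al/10Be ratio.

    R(t) = R0 * exp(-(lam26 - lam10) * t), so
    t = ln(R0 / R) / (lam26 - lam10).
    """
    lam26 = math.log(2) / HALF_LIFE_AL26
    lam10 = math.log(2) / HALF_LIFE_BE10
    return math.log(PRODUCTION_RATIO / measured_ratio) / (lam26 - lam10)

# A ratio that has decayed from 6.75 to ~5.56 implies ~400 ka of burial.
print(f"{burial_age(5.56):,.0f} years")
```

In practice the Camp Century analyses also had to untangle exposure history from burial, but the differential decay of the two nuclides is the core of the method.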

See also: UVM Today 2019. Secrets under the ice.

More on the Younger Dryas causal mechanism

The divergence of opinion on why a millennium-long return to glacial conditions began 12.8 thousand years ago has recently deepened. The Younger Dryas stadial was an unprecedented event that halted and even reversed the human recolonisation of mid- to high northern latitudes after the end of the last ice age. Its inception was phenomenally rapid, taking a couple of decades, or perhaps as little as a few years. The first plausible explanation was put forward by Wallace Broecker in 1989, who looked to explosive release of meltwater trapped in glacial lakes astride the Canadian-US border along the present St Lawrence River Valley, effectively flooding the source of North Atlantic Deep Water (NADW) with a surface layer of low-density, low-salinity water. This, he suggested, would have shut down the thermohaline circulation in the North Atlantic. The circulation is currently driven by cooling of salty surface water brought from the tropics to the Arctic Ocean by the Gulf Stream, so that the resulting increase in density causes it to sink and thereby drive this part of the ocean-water ‘conveyor’ system. A massive freshwater influx would prevent sinking and shut down the Gulf Stream, with the obvious effect of cooling high northern latitudes, allowing ice caps to return to the surrounding continents. Yet Broecker’s St Lawrence flood mechanism was flawed by lack of evidence and by the knowledge that a well-documented flood along that valley a thousand years before had raised sea level by 20 m with no climatic effect. In 2005 clear evidence was found for a huge glacial outburst flood directly to the Arctic Ocean at around 12.8 ka that had followed Canada’s Mackenzie River: a route that would force low-density seawater to the very source of NADW through the Fram Strait, thereby stopping thermohaline circulation.
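The density argument at the heart of Broecker’s mechanism can be illustrated with a simplified linear equation of state for seawater: cooling raises density, while freshening lowers it. The coefficients below are typical textbook values used purely as an illustrative sketch, not a model of the real North Atlantic:

```python
# Why a freshwater cap shuts down deep-water formation: a simplified
# linear equation of state. Coefficients are typical textbook values.

RHO0 = 1027.0        # kg/m^3, reference density at (T0, S0)
T0, S0 = 10.0, 35.0  # reference temperature (degC) and salinity (psu)
ALPHA = 2e-4         # thermal expansion coefficient, 1/degC
BETA = 8e-4          # haline contraction coefficient, 1/psu

def density(temp_c, salinity_psu):
    """Seawater density, linearised about (T0, S0)."""
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

# Salty Gulf Stream water cooled to 2 degC becomes dense enough to sink;
# a meltwater-freshened cap at the same temperature stays buoyant.
gulf_stream_cooled = density(2.0, 35.0)
freshwater_cap = density(2.0, 30.0)
print(f"{gulf_stream_cooled:.1f} vs {freshwater_cap:.1f} kg/m^3")
```

With these numbers the freshened layer is about 4 kg/m3 lighter than the cooled salty water, which is why even a modest freshwater influx can cap the sinking region and stall the ‘conveyor’.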

The year 2007 saw the emergence of a totally different account (see Whizz-bang view of Younger Dryas, July 2007; Impact cause for Younger Dryas draws flak, May 2008) centring on evidence for a major impact at 12.8 ka, in the form of excess iridium, spherules, fullerenes and evidence for huge wildfires in soils directly above the last known occurrences of the superbly crafted tools known as Clovis points – the hallmark of the earliest known humans in North America. Later (see Comet slew large mammals of the Americas?, March 2009) the same team reported minute diamonds from the same soils, along with evidence for extinction of the Pleistocene megafauna; a view that was panned unmercifully. Like the yet-to-be-found ‘end-Permian impact’ previously proposed by the same team, no crater of Younger Dryas age was then known. However, in 2018, ice-penetrating radar surveys revealed a convincing, 31 km wide subglacial impact structure beneath the Greenland ice cap, which is directly overlain by ice of Holocene (<11.7 ka) age. This reopened the case for an extraterrestrial origin for the Younger Dryas, followed by evidence from Chile for 12.8 ka wildfires presented by a team that includes academics who first made claims of an impact cause.

Colour-coded subglacial topography from radar sounding over the Hiawatha Glacier of NW Greenland (Credit: Kjaer et al. 2018; Fig. 1D)

Last week, the impact-hungry team provided further evidence in lake-bed sediments from South Carolina, USA, which they have dated using an advanced approach to the radiocarbon method (Moore, C.R. and 16 others 2019. Sediment Cores from White Pond, South Carolina, contain a Platinum Anomaly, Pyrogenic Carbon Peak, and Coprophilous Spore Decline at 12.8 ka. Scientific Reports, v. 9, online 15121; DOI: 10.1038/s41598-019-51552-8). This centres on a large spike in platinum and palladium, which they date to 12,785 ± 58 years before present; i.e. the start of the Younger Dryas. Preceding it is a peak in soot with a distinctive δ13C value attributed to wildfires (12,838 ± 103 years BP), and it is followed by a peak in nitrogen isotopes (δ15N), indicating environmental changes, and a sharp decline in spores (12,752 ± 54 years BP) attributed to fungi that consume herbivore dung – a sign of a decline in the local megafauna. In other words, a confirmation of previous findings at the Clovis site – but no diamonds. The variations in different parameters are based on 30 to 35 samples (each about 2 cm long) from about 0.8 m of sediment core, so it is curious that most of the data are presented as continuous curves. That issue may become the focus of criticism, as may the need for confirmation from lake-bed cores at a wider number of localities. With such polarised views on a crucial episode in recent geological and biological history, critical scrutiny is sure to come.
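Whether those three dated signals really coincide can be checked with simple error propagation: two ages agree within uncertainty if their difference is smaller than about twice the quadrature sum of their quoted errors. A quick sketch using the dates quoted above (the 2-sigma criterion is a conventional choice, not one stated by the authors):

```python
import math

# Do two ages agree within uncertainty? Conventional 2-sigma criterion;
# the ages and errors are those quoted by Moore et al. (2019).

def ages_agree(age1, err1, age2, err2, n_sigma=2.0):
    """True if |age1 - age2| is within n_sigma combined standard errors."""
    combined = math.sqrt(err1**2 + err2**2)
    return abs(age1 - age2) < n_sigma * combined

pt_spike = (12785, 58)    # platinum/palladium anomaly, years BP
soot_peak = (12838, 103)  # wildfire soot peak, years BP
spore_drop = (12752, 54)  # dung-fungus spore decline, years BP

print(ages_agree(*pt_spike, *soot_peak))   # True
print(ages_agree(*pt_spike, *spore_drop))  # True
```

At this level of precision the three signals are statistically indistinguishable from one another, and from the accepted Younger Dryas onset, which is what the ordering argument in the paper has to work against.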