The subduction pulley: a new feature of plate tectonics

Geological map of part of the Italian Alps. The Sesia-Lanzo Zone is unit 6 in the key: a – highly deformed gneisses; b – metasedimentary schists with granite intrusions; c – mafic rocks; d – mixed mantle and crystalline basement rocks. (Credit: M. Assanelli, Università degli Studi di Milano)

To a first approximation, as they say, the basis of plate tectonics is that the lithosphere is divided into discrete, rigid plates bounded by lines of divergent, convergent and sideways relative motion: constructive, destructive and conservative plate margins. These are characterised by zones of earthquakes whose senses of motion roughly correspond to the nature of each boundary: normal, reverse and strike-slip, respectively. The seismicity at constructive and conservative boundaries is mainly confined to the lithosphere, and is therefore shallow, but at destructive margins it extends as deep as 700 km into the mantle, thereby defining the subduction of lithosphere that remains cool enough to retain its rigidity. Although the definition assumes that there is no deformation within plates, in practice that does occur for a wide variety of reasons in the form of intra-plate seismicity, mainly within continental lithosphere. Oceanic plate interiors are much stronger and largely ‘follow the rules’; they are generally seismically quiet.

One important feature of plate tectonics is the creation of new subduction zones when an earlier one eventually ceases to function. Where these form in an oceanic setting, volcanism in the overriding plate creates island arcs. Such arcs are precursors of new continental crust, because the low density of the magmas that build their lithosphere makes them buoyant and difficult to subduct. Eventually island arcs become accreted onto continental margins through subduction of the intervening oceanic lithosphere. Joining them in such ‘docking’ are microcontinents, small fragments spalled from much older continents by the formation of new constructive plate margins within them. It might seem that arcs and microcontinents behave like passive rafts to form the complex assemblages of terranes that characterise continental mountain belts, such as those of western North America, the Himalaya and the Alps. Yet evidence has emerged that such docking is much more complicated (Gün, E. et al. 2021. Pre-collisional extension of microcontinental terranes by a subduction pulley. Nature Geoscience, v. 14, online publication; DOI: 10.1038/s41561-021-00746-9).

Erkan Gün and colleagues from the University of Toronto and Istanbul Technical University examined one of the terranes in the Italian Alps – the Sesia-Lanzo Zone (SLZ) – thought to have been a late-Carboniferous microcontinental fragment in the ocean that once separated Africa from Europe. When it accreted, the SLZ was forced downwards to depths of up to 70 km and then popped up in the latter stages of the Alpine orogeny. It is now a high-pressure, low-temperature metamorphic complex, having reached eclogite facies during its evolution. Yet its original components, including granites that contain the high-pressure mineral jadeite instead of feldspar, are still recognisable. Decades of geological mapping have revealed that the SLZ sequence shows signs of large-scale extensional tectonics. Clearly that cannot have occurred after its incorporation into southern Europe, and must therefore have taken place prior to its docking. Similar features are present within the accreted microcontinental and island-arc terranes of Eastern Anatolia in Turkey. In fact, most large orogenic belts comprise numerous accreted terranes that have been amalgamated into older continents.

An ‘engineering’ simplification of the subduction pulley. Different elements represent slab weight (slab pull force) transmitted through a pulley at the trench to a weak microcontinent and a strong oceanic lithosphere. (Credit: Gün et al., Fig. 4)

Lithospheric extension associated with convergent plate margins has been deduced widely in the form of back-arc basins. But these form in the plate being underridden by the subducting slab, i.e. the overriding plate. Extension of the SLZ, however, must have taken place in the plate destined to be subducted. Gün et al. modelled the forces, lithospheric structure, deformation and tectonic consequences that may have operated to form the SLZ, for a variety of microcontinent sizes. The pull exerted by the subduction of oceanic lithosphere (slab pull) would exert extensional forces on the lithosphere as it approached the destructive plate boundary. Oceanic lithosphere is very strong and would remain intact, simply transmitting the slab-pull force to the weaker continental lithosphere, which ultimately would be extended. This is what the authors call a subduction ‘pulley’ system. At some stage the microcontinent fails mechanically, part of it being detached to continue with the now-broken slab down the subduction zone. The rest would become a terrane accreted to the overriding plate. Subduction at this site would stop because the linkage to the sinking slab had broken. It may continue by being transferred to a new destructive margin ‘behind’ the accreted microcontinent. This would allow other weak continental and island-arc ‘passengers’ further out on the oceanic plate eventually to undergo much the same process.
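A rough way to see why the ‘pulley’ stretches the microcontinent rather than the intervening ocean floor is to compare the slab-pull force per metre of trench with the integrated strengths of the two kinds of lithosphere. The figures below are illustrative assumptions for a back-of-envelope check, not values taken from Gün et al.

```python
# Back-of-envelope check of the subduction-pulley idea.
# All parameter values are illustrative assumptions, not figures from Gün et al. (2021).

g = 9.81                  # gravitational acceleration, m/s^2
delta_rho = 50.0          # density excess of the cold slab over ambient mantle, kg/m^3 (assumed)
slab_thickness = 80e3     # thickness of the subducted lithosphere, m (assumed)
slab_length = 400e3       # down-dip length of slab doing the pulling, m (assumed)

# Slab-pull force per metre of trench (N/m): the weight excess of the hanging slab
slab_pull = delta_rho * g * slab_thickness * slab_length

# Assumed integrated strengths (force per metre needed to stretch each plate)
strength_oceanic = 3e13          # N/m: old, cold oceanic lithosphere is very strong
strength_microcontinent = 5e12   # N/m: warm, quartz-rich continental lithosphere is much weaker

print(f"slab pull           ~ {slab_pull:.1e} N per metre of trench")
print(f"oceanic lithosphere ~ {strength_oceanic:.1e} N/m -> stays intact and transmits the pull")
print(f"microcontinent      ~ {strength_microcontinent:.1e} N/m -> yields, so it extends")
```

With these round numbers the pull (~1.6 × 10¹³ N/m) sits below the strength of the oceanic ‘cable’ but above that of the weak microcontinental ‘passenger’, which is the essence of the pulley analogy.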

The observed complexity of tectonic terranes in other vast assemblages, such as the northern Pacific coast of North America and many more ancient orogenic belts, is probably as much a result of extension before accretion as of the compressional deformation suffered afterwards. The theoretical work by Erkan Gün and colleagues will surely spur tectonicians to re-evaluate earlier models of orogenesis.

Note: Figure 2 in the paper by Gün et al. shows how the width of the microcontinent (perpendicular to the subduction zone) affects the outcomes of the subduction pulley. View an animation of a subduction pulley.

CSI and detecting the presence of ancient humans

Enter a room, even for a few minutes, and dead skin cells will follow you like an invisible cloud to settle on exposed surfaces. Live there and a greyish-white, fluffy dust builds up in every room. Even the most obsessive cleaning will not remove it, especially under a bed or on the bathroom floor. Consider a cave as a home, but one without vacuum cleaners, any kind of sanitation, paper tissues, panty liners, nappies or wet wipes. To the dwellings of pre-modern humans can be added snot, faecal matter, sweat, urine, menstrual blood and semen, among all the other detritus of living. A modern crime-scene investigator would be overwhelmed by the sheer abundance of DNA from the host of people who had once dwelt there. CSI works today as much because most homes are pretty clean and most people are fastidious about personal hygiene as because of the rapidly shrinking detection limits of the DNA tools at its disposal. Except, that is, when someone from outside the home commits a criminal offence: burglary, GBH, rape, murder. We have all eagerly watched ‘police operas’, and in the absence of other evidence the forensic team generally gets its perpetrator, unless they did the deed wearing a hazmat suit, mask, bootees and latex gloves.

Artistic impression of Neanderthal extended-family life in a cave (credit: Tyler B. Tretsven)

Since 2015 analysis of environmental DNA from soils has begun to revolutionise the analysis of ancient ecosystems, including the living spaces of ancient humans (see: Detecting the presence of hominins in ancient soil samples, April 2017). It is no longer necessary to find tools or skeletal remains of humans to detect their former presence and work out their ancestry. DNA sequencing of soil samples, formerly discarded from archaeological sites, can now detect former human presence in a particular layer, as well as that of other animals. In many cases the ‘signal’ pervades the layer rather than occurring in a particular spot, as expected from shed skin cells and bodily fluids. The first results were promising but only revealed mitochondrial DNA. Now the technique has been extended to nuclear DNA: the genome (Vernot, B. and 33 others 2021. Unearthing Neanderthal population history using nuclear and mitochondrial DNA from cave sediments. Science, v. 372, article eabf1667; DOI: 10.1126/science.abf1667). Benjamin Vernot and colleagues from 7 countries collected and analysed cave soils from three promising sites with tangible signs of ancient human occupation. Two of them were in Siberia and had previously yielded Neanderthal and Denisovan genomes from bones. The other, part of the Atapuerca cave complex of northern Spain, had not. The Russian caves yielded DNA from more than 60 samples, 30 containing nuclear DNA consistent with that from the Neanderthal and Denisovan bones found in the caves. Galería de las Estatuas cave in Spain presented a soil profile spanning about 40 thousand years, from 112 to 70 ka.

Teasing out nuclear DNA from soil is complicated, from both technical and theoretical standpoints. So being able to match genomes from soil and bone samples in the Russian caves validated the methodology. The Spanish samples could then be treated with confidence. Galería de las Estatuas revealed the presence of Neanderthals throughout its 40 ka soil profile, but also a surprise. The older DNA was sufficiently distinct from that in later levels to suggest that two different populations had used the cave as a home, the original occupants being replaced by another, genetically different group around 100 to 115 ka ago. The earlier affinity was with the ancestors of sequenced Neanderthal remains from Belgium, the later with those from Croatia. That time is at the end of the last (Eemian) interglacial episode, so one possibility is a population change driven by climatic deterioration. This success is sure to encourage re-examination of caves all over the place; that is, if there is the analytical capacity to perform such painstaking work in greater volume and at greater pace. Like many other palaeo-genomic studies, this one has relied heavily on the analytical facilities pioneered and developed by Svante Pääbo at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. Covid has forced genetics onto the front page for a year and more, and it has led to many advances in analytical techniques, particularly in their speed. It would be nice to think that a dreadful experience may end up with positive benefits for understanding the full history of humanity.

Multicelled fossils from the 1 Ga old Torridonian of Scotland

Beinn Alligin and Loch Torridon, Northwest Highlands of Scotland. The hills are formed by Torridonian terrestrial sediments (credit: Stefan Krause, Wikimedia Commons)

Palaeobiologists interested in the origin of animals have generally focussed on sedimentary rocks from southern China: specifically those of the 635 to 550 Ma Doushantuo Formation. Phosphorus-rich nodules in those marine sediments have yielded tiny spheroids whose structure suggests that they are fossil embryos of some unspecified eukaryote. The Doushantuo Formation lies on top of rocks associated with the Marinoan episode of global glaciation during the Neoproterozoic; a relationship which suggested that the evolutionary leap from single- to multi-celled eukaryotes was linked to the environmental upheavals of Snowball Earth events. In a forthcoming issue of Current Biology that view will be challenged and the origin of multicellular life pushed back to around 1 billion years ago (Strother, P.K. et al. 2021. A possible billion-year-old holozoan with differentiated multicellularity. Current Biology, v. 31, p. 1-8; DOI: 10.1016/j.cub.2021.03.051). Spherical fossils of that age have been teased out of phosphatic nodules deposited in lacustrine sediments from the lower part of the Mesoproterozoic Torridonian Group of the Northwest Highlands of Scotland.

The internal structure of the fossils has been preserved in exquisite detail. Not only are cells packed together in their interiors, but some reveal an outer layer of larger sausage-shaped cells. So cell differentiation had taken place in the original organisms, whereas such features are not visible in the Doushantuo ‘embryos’. A few of the central cells show dark, organic spots that may be the remains of their nuclei. Whatever these multicellular spheres may have developed into, the morphology of the Torridonian fossils is consistent with a transition from single-celled holozoans to the dominant metazoans of the Phanerozoic; i.e. the stem of later animals. The younger, Chinese fossils that are reputed to be embryos cannot be distinguished from multicellular algae (see: Excitement over early animals dampened, January 2012).

Photomicrograph of Bicellum brasieri: scale bar = 10 μm; arrows point to dark spots that may be cell nuclei (credit: Charles Wellman, Sheffield University)

Interestingly, the Torridonian Group is exclusively terrestrial in origin, being dominated by sediments deposited in the alluvial plains of huge braided streams that eventually buried a rugged landscape eroded from Archaean high-grade metamorphic rocks. Thus the environment would have been continually in contact with the atmosphere, and therefore with the oxygen that is vital for eukaryote life forms. The age of the fossils also rings a bell: a molecular clock based on the genomics of all groups of animals alive today hints at around 900-1000 Ma for the emergence of the basic body plan. Because its host rocks are about that age, could Bicellum brasieri be the Common Ancestor of all modern animals? That would be a nice tribute to the second author, the late Martin Brasier of Oxford University, who sought signs of the most ancient life for much of his career.

See also: Billion-year-old fossil reveals missing link in the evolution of animals (Press release, Sheffield University; 29 April 2021)

Wildfires and the formation of sugar-loaf hills

One iconic feature of Rio de Janeiro is Corcovado Mountain, topped by the huge Cristo Redentor (Christ the Redeemer) statue. Another is the Sugar Loaf (Pão de Açúcar) that broods over Botafogo Bay. Each is an inselberg: a loan word from the German for ‘island mountain’. Elsewhere they are known as kopjes (southern Africa), monadnocks (North America) or bornhardts, after the German explorer who first described them. But, being on the coast, the Brazilian examples are not typical. Most rise up spectacularly from almost featureless plains, a well-known case being Uluru (Ayers Rock) almost at the centre of Australia. Arid and semi-arid plains of Africa and the Indian subcontinent are liberally dotted with them. So scenically dominant and spectacularly stark, inselbergs are often revered by local people, and have been so for millennia. The only thing that I remember from a desperately boring, but compulsory, first-year course on geomorphology in 1965 is their connection with the ‘cosmogonic egg’: a mythological motif that spans Eurasia, Australia and Africa, signifying that from which the universe hatched. It is perhaps no coincidence that hills in England that suddenly rise from flat land, such as the Wrekin in Shropshire and the Malvern Hills in Worcestershire, still host the sport of rolling hard-boiled eggs to celebrate the pagan festival of Eostre (now Easter) that marks the spring rebirth of the land.

Vista of Rio de Janeiro and its inselbergs (Credit: Leonardo Ferreira Mendes, Creative Commons)

How inselbergs and their surrounding plains formed has long been a hot topic in tropical geomorphology. One theory is that they are especially resistant rocks around which eroding rivers meandered during the formation of peneplains, a variant being that they were surrounded by lines of weakness, such as faults or major joint systems. Another is that they formed by erosion into a deeply but irregularly weathered surface. Then there is L.C. King’s theory of escarpment retreat and, of course, a mixture of processes in different stages, or a unique origin for each inselberg. In effect, there has been no final, widely agreed explanation. But that may be about to change.

A common element to most inselbergs is their very steep and sometimes vertical flanks. Some even display overhangs at their base. Such potential shelters encouraged local people to camp there and, in response to the awe inspired by the sheer majesty of the looming inselberg, to use them for sacred rites and decoration. That is especially true of Australia, so it is fitting that what may be a breakthrough in understanding inselberg formation should have arisen there (Buckman, S. et al. 2021. Fire-induced rock spalling as a mechanism of weathering responsible for flared slope and inselberg development. Nature Communications, v. 12, article 2150; DOI: 10.1038/s41467-021-22451-2). Breaking rock by deliberate use of fire has been done for millennia. For instance, Hannibal is said to have used fire to break down huge fallen boulders that blocked passage for his war elephants as his army advanced on Rome. Fire setting is still used by villagers in South India to spall large flakes of rock from outcrops. It is done with such skill that thin slabs up to 3-4 metres across can be lifted and then split into thin posts for fencing or training vines: an essential alternative to wooden posts, which termites would otherwise devour in a matter of months.

Solomon Buckman and colleagues from the University of Wollongong, Australia, were drawn to a new hypothesis for inselberg formation by observations around low rock faces and boulders after the 2019-20 “Black Summer” wildfires in eastern Australia. Where burned trees had fallen against rock faces, up to hundreds of kilograms of spalled flakes lay at the base of each face, which also bore freshly formed scars: clear signs of fire action. Thermal expansion and contraction of rock caused by air temperatures of hundreds of degrees close to wildfires is clearly a powerful means of rapid erosion. If the rock is damp – most likely at the base of a rockface, as all rainfall on the outcrop drains in its direction – the mechanism is enhanced: Hannibal’s engineers poured vinegar onto the boulders heated by fire, to great effect. Buckman et al. estimate the rate of lateral erosion by fire at slope bases in Australia to be around ten thousand times faster than that operating on horizontal rock surfaces, which are not exposed to fire because no vegetation grows on them. Over time, slopes steepen, aided by the formation of flared surfaces at the base. If spalled debris is carried away quickly the developing inselberg evolves towards its classical sugarloaf shape. In more arid conditions the debris builds up around the outcrop and steadily smothers inselberg development, leaving tors and kopjes. The paper came to press remarkably quickly relative to the authors’ field work and analyses. It is a work in progress, to be followed up by cosmogenic-isotope and other means of surface dating of the tops and flanks of suitably accessible inselbergs and similar features such as Western Australia’s famous Wave Rock (a flared escarpment).

Wave Rock in the interior of Western Australia is 15 m high and 100 m long and revered by the local Ballardong people as a creation of the Rainbow Serpent

Climate change has shifted Earth’s poles

The shifting position of the Tropic of Cancer in Mexico due to nutation from 2005 to 2010 (Credit: Roberto González, Wikimedia Commons)

First suggested by Isaac Newton and confirmed from observations by Seth Chandler in 1891, the Earth’s axis of rotation and thus its geographic poles wander in much the same manner as does the axis of a gyroscope, through a process known as nutation. The best-known movement of the poles – the Chandler wobble – results in a change of about 9 metres in the poles’ positions every 433 days, describing a rough circle around the mean position of each pole. Every 18.6 years the orbital behaviour of the Moon results in a substantially larger shift, illustrated by the change in position of the circles of latitude shown above. Essentially, nutation results from the combined effects of gravitational forces imposed by other bodies. The axial precession cycle of 26 thousand years that is part of the Milankovitch effect on long-term climate forcing is driven by the same gravitational torques. But the Earth’s own gravitational field changes too, as mass within and upon it shifts from place to place. So mantle convection and plate tectonics inevitably change Earth’s mode of rotation, as do changes in the Earth’s molten iron core.

The most sensitive instrument devoted to measuring changes in Earth’s gravity is the tandem of two satellites known as the Gravity Recovery and Climate Experiment, or GRACE. Among much else, GRACE has revealed the rate of withdrawal of groundwater from aquifers in Northern India and areas of mass deficit over the Canadian Shield that resulted from melting of its vast ice sheet since 18 ka ago (see: Ice age mass deficit over Canada deduced from gravity data, July 2007). Further GRACE data have now confirmed that more recent melting of polar glaciers due to global warming underlies an unusual reversal and acceleration of polar wandering since the 1990s (Deng, S. et al. 2021. Polar drift in the 1990s explained by terrestrial water storage changes. Geophysical Research Letters, v. 48, online article e2020GL092114; DOI: 10.1029/2020GL092114). In 1995 the direction of polar drift changed from southwards to eastwards, and its mean speed increased to about 17 times that of 1981 to 1995. That tallies with an increase in the flow of glacial meltwater from polar regions and also with changes in the mass balance of surface and subsurface water at lower latitudes, especially in India, the USA and China, where groundwater pumping for irrigation is on a massive scale.

Clearly, human activity is not only changing climate, but also our planet’s astronomical behaviour. That connection, in itself, is enough to set alarm bells ringing, even though the axial shift’s main tangible effect is to change the length of the day by a few milliseconds. Polar wandering has been documented for the last 176 years. Conceivably, data on shifts in past direction and speed may allow climatic changes throughout the industrial revolution to be assessed independently of meteorological data and on a whole-planet basis.

See also: Climate has shifted the axis of the Earth (EurekAlert, 22 April 2021)

Multitudes of Tyrannosaurus rex in Cretaceous North America

Full-frontal skull of ‘Sue’, the best-preserved and among the largest specimens of T. rex (Credit: Scott Robert Anselmo, Wikimedia Commons)

Long-term followers of Earth-logs and its predecessor Earth-pages News will have observed my general detachment from the dinosaur hullabaloo, which just runs and runs. That is, except for real hold-the-front-page items. One popped up in the 16 April 2021 issue of Science (Marshall, C.R. et al. 2021. Absolute abundance and preservation rate of Tyrannosaurus rex. Science, v. 372, p. 284-287; DOI: 10.1126/science.abc8300). The authors estimated how many of the dinosaurian poster-child Tyrannosaurus rex lurked in North America during the more than two million years of the Late Cretaceous just before all dinosaurs – except for birds – literally bit the dust. I write ‘lurked’ because ‘tyrant lizard the king’ when fully grown was so big that if it ran and fell over, it would have been unable to get up! Tangible evidence from trackways suggests that it ambled from place to place. The leg bones of a 7-tonner would probably have shattered at speeds above 18 km per hour, and accelerating to the speed of a human jogger would, anyhow, have exhausted its energy reserves. But it was agile enough to be an ambush predator; it could even pirouette! And it could crush bones so well that it was able to consume prey entirely. It has been suggested that T. rex may have been a scavenger, at least in old age. Whatever, how is it possible to estimate numbers of any extinct species, let alone dinosaurs?

The stumbling block to getting a result that is better than guesswork is the fossil record of a species. First, it is incomplete; secondly, the chance of finding a fossil varies from area to area, depending on all kinds of factors. These include the degree of exposure of sedimentary rock formed in the environment in which the animals thrived, as well as the vagaries of preservation due to post-mortem scavenging, erosion and water transport. In life the population density of a particular species varies between different ecosystems and from species to species. For instance, more lions can thrive in open rangeland than in wooded environments, whereas the opposite holds for tigers: probably because of different hunting strategies. Many factors such as these conspire to thwart realistic estimates of ancient populations. Studies of living species, however, suggest that the population density of an animal species is inversely related to the average body mass of individuals. Take British herbivores: there are many more rabbits than there are deer. On the grasslands of East Africa hyenas and wild dogs outnumber lions. This mass-population relationship (Damuth’s Law), outlined by US ecologist John Damuth, also depends on where a species sits in the food chain (its trophic level) as well as on its physiology. Yet for living species, populations of flesh-eating mammals of similar mass show a 150-fold variation; a scatter that results from their different habits and habitats and also their energy requirements. Because they are warm-blooded (endothermic), small carnivorous mammals need a greater energy intake than do similar-sized, cold-blooded reptiles, which need to eat far less. But not all living reptiles are ectothermic, especially the bigger ones. The Komodo dragon is mesothermic, midway between the two, and uses about a fifth of the energy needed by a similar-sized mammalian carnivore. Population densities of dragons in the Lesser Sunda Islands are more than twice those of physiologically comparable mammalian predators.

A number of features suggest that the metabolism of carnivorous dinosaurs lay midway between those of large predatory mammals and big lizards like the Komodo dragon. This is the basic assumption behind the analysis by Charles Marshall and colleagues. They did not focus on the biggest T. rex specimens, but on the average estimated body mass of adults. There are numerous smaller specimens of the beast, but clearly some of these would have been sexually immature. It has been estimated that adulthood was reached by around 15 years. The size data seem to show that sexual maturity was accompanied by a four- to five-year growth spurt, from the 2 to 3 tonnes of the largest juveniles to more than 7 t in the largest known adults, which may have lived into their early 30s. The authors used this range to estimate a mean adult mass of 5.2 t. Taking this parameter and many more intricate factors into account in Monte Carlo simulations, Marshall et al. came up with an estimate of 20 thousand T. rex adults alive across North America at any one time, albeit with an uncertainty range of 1,300 to 328,000. Spread over the 2.3 million km² of Late Cretaceous North America that lay above sea level, their best-estimate population density would have been about one individual for every 100 square kilometres. An area the size of California could have supported about 3,800 adult Tyrannosaurus rex, while there may well have been two in Washington DC. Lest one’s imagination gets overly excited, were tigers and lions living wild today in North America under similar ecological conditions there would have been 12 and 28, respectively, in the US capital. Yet those two adult Washingtonian T. rexes would have been unable to catch anything capable of a sustained jog without keeling over. The juveniles, weighing in at up to 3 tonnes, would probably have been the real top predators; the smaller, the swifter and thus the fiercer. Which leaves me to wonder, “Did the early teenagers catch the prey for their massive parents to chow down on?”
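The headline figures can be recovered with simple arithmetic. In the sketch below the standing population and habitable area are those quoted from the paper, while the modern areas for California and Washington DC are approximate values assumed here for illustration.

```python
# Rough reconstruction of the density arithmetic behind Marshall et al.'s headline numbers.
# Standing population and habitable area are from the paper; modern areas are approximate.

standing_population = 20_000        # best estimate of adult T. rex alive at any one time
habitable_area_km2 = 2.3e6          # Late Cretaceous North America above sea level, km^2

density = standing_population / habitable_area_km2
print(f"~{density:.4f} adults per km^2, i.e. roughly 1 per {1/density:.0f} km^2")

# Scale the same density to two familiar modern areas (approximate figures)
area_california_km2 = 424_000
area_washington_dc_km2 = 177

print(f"a California-sized area: ~{density * area_california_km2:.0f} adults")
print(f"Washington DC          : ~{density * area_washington_dc_km2:.1f} adults")
```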

See also: How many T. rexes were there? Billions. (ScienceDaily 15 April 2021)

Relationships between modern humans and Neanderthals

Before 40 thousand years (ka) ago Europe was co-occupied by Neanderthals and anatomically modern humans (AMH) for five to seven thousand years; about 350 generations – as long as the span from the beginning of farming in Neolithic Britain to the present day. Populations of both groups were probably low, given their dependence on hunting and foraging during a period significantly colder than now. Crude estimates suggest between 3,000 and 12,000 individuals in each group; equivalent to the attendance at a single English Football League 2 match on a Covid-free winter Saturday afternoon. Moving around Europe south of, say, 55°N, their potential range would have been around 5 million square kilometres, which very roughly suggests a population density of one person for every 200 km². That they would have moved around in bands of, say, 10 to 25 might seem to suggest that encounters were very infrequent. Yet a hybrid Neanderthal-Denisovan female found in Siberia yielded DNA that suggested a family connection with Croatia, 5,000 km away (see: Neanderthal Mum meets Denisovan Dad, August 2018); early humans moved far and wide.
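As a quick check on that density figure, the sketch below runs the arithmetic using the upper end of the crude population estimates quoted above; nothing here comes from a survey or the cited papers.

```python
# Crude check of the 'one person per 200 km^2' figure using the upper population estimates.

range_km2 = 5_000_000                  # assumed shared range south of ~55 degrees N, km^2
per_group_upper = 12_000               # upper crude estimate for each of the two groups
total_people = 2 * per_group_upper     # Neanderthals plus anatomically modern humans

print(f"~{range_km2 / total_people:.0f} km^2 per person at the upper population estimate")
```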

The likely appearances of Neanderthals and anatomically modern humans when they first met between 50 and 40 thousand years ago. (Credit: Jason Ford, New York University)

A sparsely populated land can be wandered through with few fears other than predators, scarce resources, a harsh climate and lack of shelter. But it still seems incredible that there were regular meetings with other bands. That view, though, leaves out knowledge of good places to camp, hunt and forage that assure shelter, water, game and so forth, and of how to get to them – a central part of hunter-gatherers’ livelihoods. There would have been a limited number of such refuges, considerably increasing the chances of meeting. Whatever the physiognomic differences between AMH and Neanderthals (and they weren’t very striking), the meeting up of bands of both human groups at a comfortable campsite would have been cause for relief, celebration, exchanges of knowledge and perhaps for individuals of one group to partner members of the other.

As well as that from Neanderthals, ancient DNA from very early European AMH remains has increasingly been teased out. The latest comes from three individuals from Bacho Kiro Cave in Bulgaria dated to between 45.9 and 42.6 ka; among the earliest known, fully modern Europeans. One had a Neanderthal ancestor fewer than six generations removed (perhaps even a great-great grandparent 60 years beforehand). Because of the slight elapsed time, the liaison probably took place in Europe, rather than in the Middle East as previously suggested for the insertion of Neanderthal genes into European ancestry. The Neanderthal ancestry of the other two individuals stemmed from seven to ten generations back – roughly 100 to 150 years (Hajdinjak, M. and 31 others 2021. Initial Upper Palaeolithic humans in Europe had recent Neanderthal ancestry. Nature, v. 592, p. 253-257; DOI: 10.1038/s41586-021-03335-3). The interpretation of these close relationships stems from the high proportion of Neanderthal DNA (3 to 4 %) in the three genomes. The segments are unusually long, which is a major clue to the short time since the original coupling; inherited segments tend to shorten in successive generations. The groups to which these AMH individuals belonged did not contribute to later Eurasian populations, but are linked to living East Asians and Native Americans. They seem to have vanished from Europe long before modern times. The same day saw publication of a fourth instance of high Neanderthal genetic content (~3 %) in an early European’s genome, extracted from a ~45 ka female AMH from Zlatý kůň (Golden Horse) Cave in Czechia (Prüfer, K. and 11 others 2021. A genome sequence from a modern human skull over 45,000 years old from Zlatý kůň in Czechia. Nature Ecology & Evolution; DOI: 10.1038/s41559-021-01443-x). In her case, too, the Neanderthal DNA segments are unusually long, but they indicate that 70 to 80 generations (~2,000 to 3,000 years) had elapsed. Her DNA also suggests that she was dark-skinned and had brown hair and brown eyes. Overall her genetics, too, have no counterparts in later European AMH. The population to which she belonged may have migrated westwards from the Middle East, where one of her ancestors had mated with a Neanderthal, perhaps as long ago as 50 ka. But that does not rule out her group having been in Europe at that time. A later modern human, dated at 42 to 37 ka, is a young man from the Peștera cu Oase cave in Romania, whose forebears mixed with Neanderthals. His genome contains 6.4 % Neanderthal DNA, suggesting that his Neanderthal ancestor lived a mere 4 to 6 generations earlier, most likely in Europe, and was perhaps one of the last of that group.
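The reasoning that turns segment lengths into generations rests on a standard population-genetics rule of thumb: recombination breaks up an introgressed block roughly once per Morgan per generation, so the expected tract length after g generations is of the order of 100/g centimorgans. The sketch below illustrates only that simple scaling, with a purely hypothetical tract length; the two papers use calibrated and far more elaborate models.

```python
# Toy illustration of why long Neanderthal DNA segments imply a recent Neanderthal ancestor.
# Assumes a single admixture pulse and the rule of thumb that the expected tract length
# after g generations of recombination is roughly 100/g centimorgans (cM).

def expected_tract_length_cM(generations: float) -> float:
    """Approximate expected length of an introgressed segment after g generations."""
    return 100.0 / generations

def generations_from_tract_length(length_cM: float) -> float:
    """Invert the rule of thumb: generations since admixture implied by a mean tract length."""
    return 100.0 / length_cM

for g in (6, 10, 80):
    print(f"{g:3d} generations -> average segments of ~{expected_tract_length_cM(g):.1f} cM")

# A hypothetical genome whose Neanderthal tracts averaged ~1.3 cM would suggest roughly
print(f"~{generations_from_tract_length(1.3):.0f} generations since the Neanderthal ancestor")
```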

The data suggest that once modern humans came into contact with their predecessors in the Middle East and Europe, mixture with Neanderthals was ‘the rule rather than the exception’. Yet their lack of direct relationship to later Europeans implies that AMH colonisation of Europe occurred in successive waves of people, not all of whom survived. As Palaeolithic specialist Chris Stringer of the Natural History Museum in London cautions, of these multiple waves of incomers ‘Some groups mixed with Neanderthals, and some didn’t. Some are related to later humans and some are not’. Even five thousand years after ‘first contact’, relations of modern humans with Neanderthals remained ‘cordial’, to say the least, including with the last few before their extinction.

See also: Gibbons, A. 2021. More than 45,000 years ago, modern humans ventured into Neanderthal territory. Here’s what happened next. Science, v. 372, News article; DOI: 10.1126/science.abi8830. Callaway, E. 2021. Oldest DNA from a Homo sapiens reveals surprisingly recent Neanderthal ancestry. Nature, v. 592, News article; DOI: 10.1038/d41586-021-00916-0. Genomes of the earliest Europeans (Science Daily, 7 April 2021). Bower, B. 2021. Europe’s oldest known humans mated with Neandertals surprisingly often (Science News, 7 April 2021)

When did supercontinents start forming?

Plate tectonics is easily thought of as being dominated by continental drift, the phenomenon that Alfred Wegener recognised just over a century ago. So it is at present, the major continents being separated by spreading oceans. Yet, on a near-spherical planet, continents also move closer to one another, eventually to collide and weld together. Part of Wegener’s concept was that modern continents formed from the breakup of a single large one that he called Pangaea; a supercontinent. The current drifting apart began in earnest around the end of the Triassic Period (~200 Ma), after 200 Ma of Pangaea’s dominance of the planet along with a single large ocean (Panthalassa) covering 70% of the Earth’s surface. Wegener was able to fit Pangaea together partly on the basis of evidence from the continents’ earlier geological history. In particular, the refit joined up zones of intense deformation from continent to continent. Although he did not dwell on their origin, subsequent research has shown these zones were the lines of earlier collisions between older continental blocks, once subduction had removed the intervening oceanic lithosphere; Pangaea had formed from an earlier round of continental drift. Even older collision zones within the pre-Pangaea continental blocks suggested the former existence of previous supercontinents.

Aided by the development of means to divine the position of the magnetic poles relative to differently aged blocks on the continents, Wegener’s basic methods of refitting have resulted in the concept of supercontinent cycles of formation and break-up. It turns out that supercontinents did not form by all the earlier continents clanging together at one time. The most likely scenario is that large precursors or ‘megacontinents’ (Eurasia is the current example) formed first, to which lesser entities eventually accreted. A summary of the latest ideas on such global tectonic cycles appeared in the November 2020 issue of Geology (Wang, C. et al. 2020. The role of megacontinents in the supercontinent cycle. Geology, v. 49, p. 402-406; DOI: 10.1130/G47988.1). Chong Wang of the Chinese Academy of Sciences and colleagues from Finland and Canada identify three such cycles of megacontinent formation and the accretion around them of the all-inclusive supercontinents of Columbia, Rodinia and Pangaea since about 1750 Ma (Mesoproterozoic). They also suggest that a future supercontinent (Amasia) is destined to agglomerate around Eurasia.

Known megacontinents in relation to suggested supercontinents since the Mesoproterozoic (credit: Wang et al.; Fig 2)

The further back in time, the more cryptic are ancient continent-continent collision zones or sutures, largely because they have been re-deformed long after they formed. In some cases younger events that involved heating have reset their radiometric ages. The oldest evidence of crustal deformation lies in cratons, where the most productive source of evidence for the clumping of older continental masses is the use of palaeomagnetic pole positions. This is not feasible for the dominant metamorphic rocks of old suture zones, but palaeomagnetic measurements from old rocks that have been neither deformed nor metamorphosed offer the possibility of teasing out ancient supercontinents. Commonly, cratons show signs of having been affected by brittle extensional deformation, most obviously as swarms of vertical sheets or dykes of often basaltic igneous rocks. Dykes can be dated readily and do yield reliable palaeomagnetic pole positions. Some cratons have multiple dyke swarms. For example the Archaean Yilgarn Craton of Western Australia, founded on metamorphic and plutonic igneous crust that formed by tectonic accretion between 3.8 and 2.7 Ga, has five of them spanning 1.4 billion years from the late Archaean (2.6 Ga) to the Mesoproterozoic (1.2 Ga). Throughout that immense span of time the Yilgarn remained a single continental block. Also, structural trends end abruptly at the craton margins, suggesting that it was once part of a larger ‘supercraton’ subsequently pulled apart by extensional tectonics. The eleven known cratons show roughly the same features.

On the strength of new, high-quality pole positions from dykes of about the same ages (2.62 and 2.41 Ga) cutting the Yilgarn and Zimbabwe cratons, geoscientists from Australia, China, Germany, Russia and Finland, based at Curtin University in Western Australia, have attempted to analyse all existing Archaean and Palaeoproterozoic pole positions (Liu, Y. et al. 2021. Archean geodynamics: Ephemeral supercontinents or long-lived supercratons. Geology, v. 49; DOI: 10.1130/G48575.1). The Zimbabwe and Yilgarn cratons, though now very far apart, were part of the same supercraton from at least 2.6 Ga ago. Good cases can be made for several other such large entities, but attempting to fit them all together as supercontinents by modelling is unconvincing. The modelled fit for the 2.6 Ga datum is very unlike that for 2.4 Ga; in the intervening 200 Ma all the component cratons would have had to shuffle around dramatically, without the whole supercontinent edifice breaking apart. However, using the data to fit the cratons together into two supercratons does seem to work, for the two assemblies remain in the same configurations for both the 2.6 and 2.4 Ga data.

Interestingly, all cratonic components of one of the supercratons show geological evidence of the major 2.4 Ga glaciation, whereas those of the other show no such climatic indicator. Yet the entity with glacial evidence was positioned at low latitudes around 2.4 Ga, while the ice-free one spanned mid-latitudes. This may imply that the Earth’s axial tilt was far higher than at present. The persistence of two similar-sized continental masses for at least 200 Ma around the end of the Archaean Eon also hints at a different style of tectonics from that with which geologists are familiar. Only palaeomagnetic data from the pre-2.6 Ga Archaean can throw light on that possibility. That requires older, very lightly metamorphosed or unmetamorphosed rocks to provide palaeopole positions. Only two cratons, the Pilbara of Western Australia and the Kaapvaal of South Africa, are suitable. The first has yielded the oldest-known pole, dated at 3.2 Ga; the oldest from the second is 2.7 Ga. A range of evidence suggests that the Pilbara and Kaapvaal cratons were united during at least the late Archaean.

The only answer to the question posed by this item’s title is ‘There probably wasn’t a single supercontinent at the end of the Archaean, but maybe two megacontinents or supercratons’. Lumps of continental lithosphere would move and – given time – collide once more than one lump existed, however the Earth’s tectonics operated …

Snippet: Early human collection of useless objects

The Ga-Mohana rock shelter in Northern Cape Province, South Africa (Credit: Jayne Wilkins, University of the Witwatersrand)

We all, especially as kids, have collected visually interesting objects for no particular reason other than that they ‘caught our eye’: at the beach; from ploughed fields; in river gravel, or at the side of a path. They end up in sheds, attics and on mantel shelves. In an online News and Views article at the Nature website, Pamela Willoughby discusses the significance of a paper on an archaeological site in the southern Kalahari Desert, Northern Cape Province, South Africa (Willoughby, P.R. 2021. Early humans far from the South African coast collected unusual objects. Nature, v. 592, online News and Views; DOI: 10.1038/d41586-021-00795-5). Jayne Wilkins and co-workers from South Africa, Australia, Canada, Austria and the UK have investigated a rock shelter with floor deposits going back over 100 thousand years. The researchers have, in a sense, continued the long human habit of seeking objets trouvés by using trowels and sieves to excavate the shelter’s floor sediments. They found a collection of cleavage fragments of white calcite and abundant shards of ostrich eggshell. Ga-Mohana Hill is still a place that locals consider to have spiritual significance. The authors consider the original collectors to have had no motive other than aesthetic pleasure and perhaps ritual, and that this signifies perhaps the earliest truly modern human behaviour. Yet in 1925 a cave on the other side of South Africa, in Limpopo Province, yielded a striking example of a possible ‘collector’s piece’ from much earlier times. It is associated with remains of australopithecines and has been dated to around 3 Ma ago (see: Earliest sign of a sense of aesthetics, November 2020).

Source: Wilkins, J. et al. 2021. Innovative Homo sapiens behaviours 105,000 years ago in a wetter Kalahari. Nature, v. 592; DOI: 10.1038/s41586-021-03419-0

Arctic warmer than now half a million years ago

Just over a month since evidence emerged that the Arctic Ocean was probably filled with fresh water from 150 to 131 and 70 to 62 thousand years ago (When the Arctic Ocean was filled with fresh water, February 2021), another study has shaken ‘received wisdom’ about Arctic conditions. This time it concerns the climate of polar regions, and comes not from an ice core but from speleothem: calcium carbonate flowstone precipitated on a cave wall in north-eastern Greenland. The existence of caves at about 80°N, between 350 and 670 m above sea level in a very cold, arid area, is a surprise in itself, for they require flowing water to form. The speleothem is up to 12 cm thick, but none is growing under modern, relatively warm conditions, cave air being below freezing all year. For speleothem to form to such an extent suggests a long period when air temperature was above 0°C. So was it precipitated before glacial conditions were established, in pre-Pleistocene times?

Limestone caves in the arid Grottedal region of north-eastern Greenland (Credit: Moseley et al. 2021; Fig 2D)

A standard means of discovering the age of cave deposits, such as speleothem or stalagmites, is uranium-series dating (see: Irish stalagmite reveals high-frequency climate changes, December 2001). In this case the sheet of flowstone turned out to have been deposited between 588 and 537 thousand years ago; a 50 ka ‘window’ into conditions that prevailed during the middle part of 100 ka climatic cycling – about six glacial-interglacial stages before the present (Moseley, G.E. et al. 2021. Speleothem record of mild and wet mid-Pleistocene climate in northeast Greenland. Science Advances, v. 7, online article eabe1260; DOI: 10.1126/sciadv.abe1260). Roughly half the layer formed during an interglacial, the rest under the glacial conditions that followed. Detailed oxygen-isotope studies revealed that air temperatures at the time the calcium carbonate was being precipitated were at least 3.5°C above those prevailing in the area at present; warm enough to melt local permafrost and to increase the summer extent of ice-free conditions in the Arctic Ocean, thereby encouraging greater rainfall. These warm and wet conditions correlate with increased solar heating over the North Atlantic region at that time, as suggested by modelling based on Milankovitch astronomical forcing.
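For readers unfamiliar with the method, the simplest form of the uranium-series clock assumes that the calcite crystallised with no initial thorium and with 234U in equilibrium with 238U, so that the 230Th/238U activity ratio grows towards 1 with the ~75.6 ka half-life of 230Th; real speleothem ages also correct for excess 234U and detrital thorium. The sketch below shows only that simplified case, and why ages near 600 ka sit close to the practical limit of the technique.

```python
import math

# Simplified uranium-series (U-Th) age relation for clean calcite:
# assumes no initial 230Th and 234U initially in secular equilibrium with 238U.
# Real speleothem dating also corrects for initial 234U excess and detrital thorium.

HALF_LIFE_230TH_YR = 75_584.0                    # approximate half-life of 230Th, years
LAMBDA_230 = math.log(2) / HALF_LIFE_230TH_YR    # decay constant, per year

def th_u_activity_ratio(age_yr: float) -> float:
    """Expected (230Th/238U) activity ratio after a given age, simplified model."""
    return 1.0 - math.exp(-LAMBDA_230 * age_yr)

def age_from_ratio(ratio: float) -> float:
    """Invert the simplified equation to return an age in years."""
    return -math.log(1.0 - ratio) / LAMBDA_230

for age_ka in (100, 537, 588):
    print(f"{age_ka:3d} ka -> 230Th/238U activity ~ {th_u_activity_ratio(age_ka * 1e3):.4f}")
# The ratio crowds towards 1 beyond ~500 ka, so small analytical errors translate into
# large age uncertainties near the age of the Grottedal flowstone.
```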

Unfortunately, the climate record derived from cores through the Greenland ice sheet only reaches back to about 120 ka, during the last interglacial period. So it is not possible to match the speleothem results to an alternative data set. Yet, thanks to the rediscovery in a freezer in Denmark of dirt cored from the very base of the deepest part of the ice sheet (beneath Camp Century) – it had been set aside as interest focused on the record preserved in the ice itself – there is now evidence for complete melting of the ice sheet at some time in the past. The dirt contains abundant fossil plants. Analysis of radioactive isotopes of aluminium and beryllium, which formed in associated quartz grains as a result of cosmic-ray bombardment when the area was ice-free, suggests two periods of complete melting followed by glaciation, the second being within the last million years.
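The underlying logic is that cosmic rays produce 26Al and 10Be in quartz at a roughly fixed ratio (about 6.75 in commonly used calibrations) while the grains are exposed at the surface; once buried beneath ice, the shorter-lived 26Al decays faster and the ratio falls at a predictable rate. The sketch below illustrates that principle with rounded half-lives and a single burial episode; it is not the calculation made for the Camp Century material, which has to allow for repeated exposure and burial.

```python
import math

# Principle of 26Al/10Be burial dating of once-exposed quartz grains.
# Rounded parameter values and a single, complete burial are assumed for illustration.

SURFACE_RATIO = 6.75         # approximate 26Al/10Be production ratio in quartz at the surface
T_HALF_AL26_YR = 0.705e6     # approximate half-life of 26Al, years
T_HALF_BE10_YR = 1.387e6     # approximate half-life of 10Be, years
LAMBDA_AL26 = math.log(2) / T_HALF_AL26_YR
LAMBDA_BE10 = math.log(2) / T_HALF_BE10_YR

def buried_ratio(burial_yr: float) -> float:
    """26Al/10Be ratio after complete shielding from cosmic rays for burial_yr years."""
    return SURFACE_RATIO * math.exp(-(LAMBDA_AL26 - LAMBDA_BE10) * burial_yr)

def burial_age_yr(measured_ratio: float) -> float:
    """Burial duration implied by a measured 26Al/10Be ratio (single-burial case)."""
    return math.log(SURFACE_RATIO / measured_ratio) / (LAMBDA_AL26 - LAMBDA_BE10)

for t_myr in (0.5, 1.0, 2.0):
    print(f"buried for {t_myr} Myr -> 26Al/10Be ratio ~ {buried_ratio(t_myr * 1e6):.2f}")
print(f"a hypothetical measured ratio of 4.0 implies ~{burial_age_yr(4.0) / 1e6:.2f} Myr of burial")
```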

The onshore Arctic climate is clearly more unstable than previously believed.

See also: Geologists Find Million-Year-Old Plant Fossils Deep Beneath Greenland Ice Sheet (Sci News, 16 March 2021)