The early signs of counting and arithmetic?

Three earlier articles in Earth-logs originally focussed on what I supposed to be ‘ancient abstract art’. One highlighted a clam shell bearing carefully etched V-shapes, found at the type locality for Asian Homo erectus at Trinil on the Solo River, Java, dated between 430 and 540 ka. Another is about parallel lines etched on a piece of defleshed bone from China dated at 78 to 123 ka, which may be a Denisovan artefact. The most complex is a piece of ochre found in the coastal Blombos Cave 300 km east of Cape Town, South Africa in association with tools ascribed to early modern humans who lived there about 73 ka ago. Fascinating as they seemed at the time, they may say much more about early-human cognitive powers than about mere decoration. That is thanks to recent evaluation by anthropologists, cognitive scientists and psychologists of other simple artefacts made of lines and notches. Their work is summarised in a recent Nature Feature by Colin Barras (Barras, C. 2021. How did Neanderthals and other ancient humans learn to count? Nature, v. 594, p. 22-25; DOI: 10.1038/d41586-021-01429-6). The European Research Council recently allocated a €10 million grant to foster research into ‘when, why and how number systems appeared and spread’.

Examples of ancient ‘abstract’ art. Top – V-shaped features inscribed on 430-540 ka freshwater clam from Java; Middle – parallel lines etched through red ochre to show white bone, from a possible Denisovan site in China; Bottom – complex inscription on a tablet of iron-rich silcrete from South Africa

Straight lines and patterns made from them are definitely deliberate, whatever their antiquity. In recent times, such devices have been used by artists to render mental images, moods and thoughts as simplified abstractions: hence ‘abstract’ art, such as that of Piet Mondrian and Kazimir Malevich. The term also applies to the dribbles and drabbles of Jackson Pollock and many more styles. But these works are a very recent evolutionary development out of earlier schools of art. So deliberate geometric shapes and arrangements of lines that are many millennia old cannot simply be termed ‘abstract art’. It is certainly not easy to see how they evolved into the magnificence of Palaeolithic figurative cave art that started at least 40 thousand years ago. Yet they are not ‘doodles’. Being so deliberate suggests that they represented something to their makers. The question is, ‘What?’

The research summarised by Barras is mainly that of Francisco d’Errico of the University of Bordeaux, France, and colleagues from Canada and Italy (d’Errico, F. et al. 2018. From number sense to number symbols. An archaeological perspective. Philosophical Transactions of the Royal Society B, v. 373, article 20160518; DOI: 10.1098/rstb.2016.0518). They focused their work on two remarkable artefacts. The oldest (72 to 60 ka), from a cave near Angoulême in France, is a fragment of a hyena’s thigh bone that carries nine notches. It is associated with stone tools almost certainly made by Neanderthals. The other, from the Border Cave rock shelter in KwaZulu-Natal in South Africa, is a 44 to 42 ka old baboon’s shin bone, which carries a row of 29 prominent notches and a number of less distinct, roughly parallel scratches. The rock shelter contains remains of anatomically modern humans and a very diverse set of other artefacts that closely resemble some used by modern San people.

Top: notched fragment of a hyena femur associated with Neanderthal tools from SW France. Bottom: notched baboon shin bone from Border Cave, South Africa. Scale bars (Credit: F. d’Errico and L. Backwell)

Microscopic examination of the notches made by a Neanderthal suggests that all nine notches were cut at the same time, using the same stone blade. Those on the Border Cave shin bone suggest that they were made using four distinctly different tools on four separate occasions. Are both objects analogous to tally sticks, i.e. used to count or keep a record of things as an extension to memory? There are other known examples, such as a 30 ka-old wolf’s radial bone from the Czech Republic having notches in groups of five, suggesting a record of counting on fingers. Yet very similar devices, made in recent times by the original people of Australia, were not used for keeping count, but to help travellers commit a verbal message to memory, enabling them to recount it later.

Do read Barras’s summary and the original paper by d’Errico et al. to get an expanded notion of the arguments being debated. They emerge from the truly novel idea that just because the makers of such objects lived tens or even hundreds of thousands of years ago does not mean that they were intellectually lacking. Imagining in the manner of Victorian scientists that ancient beings such as Neanderthals and H. erectus must have been pretty dim is akin to the prejudice of European colonialists that people of colour or with non-European cultures were somehow inferior, even non-human. To me it is admirable that the European Research Council has generously funded further research in this field at a time when research funding in the UK, especially for the disciplines involved, has been decimated by those who demanded an exit from the EU.

The older Trinil and Blombos patterns appear yet more sophisticated. The pattern on the latter looks very like the kind of thing that someone in a prison cell might draw to keep track of time. It also incorporates the zig-zag element engraved on the Trinil clam shell. Remember that the word ‘Exchequer’ is derived from a tax audit during the reign of Henry I of England that was conducted on a counting board whose surface had a chequered pattern.

The Great Anthropocene debate

The Bagger 288 bucket-wheel excavator moves from one lignite mine to another in Germany: an apt expression of modern times

Followers of Earth-logs and its predecessor should be familiar with the concept of ‘The Anthropocene’. More recent readers can hardly have escaped it, for it has become a recurrent motif that extends far beyond science to the media, the social sciences and even the arts. Some circles among the ‘chattering classes’ speak of little else. It has become a trope – a word with figurative or metaphorical meaning. In 2000, atmospheric chemist and Nobel Laureate Paul Crutzen suggested that the increasingly clear evidence that human society is having growing impacts on the Earth system should be recognised by a new stratigraphic Epoch. Some Fellows of the Geological Society of London launched an attempt to formalise the suggestion through the society’s Stratigraphic Commission (Zalasiewicz, J. and 20 others 2008. Are we now living in the Anthropocene? GSA Today, v. 18(2), p. 4-8; DOI: 10.1130/GSAT01802A.1). In 2009 Jan Zalasiewicz of the University of Leicester became the first chair of the Anthropocene Working Group (AWG) within the International Commission on Stratigraphy (ICS). A dozen years on, stratigraphers continue to debate the Anthropocene (See: Brazil, R. 2021. Marking the Anthropocene. Chemistry World, 29 January 2021). One of the problems facing its supporters is the lack of agreement about what it is and when it started.

Since 1977 the ICS has been searching for localities, known as Global Boundary Stratotype Sections and Points, or GSSPs, that mark the actual beginning of each basic division of the geological record: Eons, Eras, Periods, Epochs and Ages. So far, those for Epochs and longer divisions have been agreed and GSSP markers have been cemented in place, sometimes with quite large monuments, if not actual golden spikes. Those for the shortest timespans – Ages – are proving more difficult to agree on. These GSSPs have to have global significance, yet the very nature of stratigraphy means that a fair number of the briefest rock sequences revealed by field work either formed at different times across the globe or lack an incontrovertible dating method to record their beginning and end.

Currently, we live in the Holocene Epoch, whose beginning marked the global climate system’s exit from the frigid Younger Dryas 11.7 ka ago. The Holocene (‘entirely recent’) Epoch marks the latest interglacial. When it began every human being was a hunter-gatherer Homo sapiens; our species eventually expanded into every ecosystem that offered sustenance on all continents bar Antarctica. Within a few thousand years some began sedentary life as farmers and herders after their domestication of a range of plant and animal species. A few millennia later agriculture had a growing foothold everywhere except in Australia. Natural tree cover began to be cleared and organised grazing steadily changed other kinds of ecosystem. Human influences, other than scattered artefacts and bones, became detectable in geological formations such as lake-bed sediments and peat mires. The geological record of the Holocene is by no means consistent globally, there being lots of gaps. That is partly because sedimentary systems continually deposited, eroded and transported sediments on the landmasses. In the tropics and much of the Southern Hemisphere the Younger Dryas is, in any case, barely recognisable in post-Ice Age deposits, so the start of the Holocene there is vague. Things are simpler on the deep sea floor, as muds accumulate with no interruption. But it was only when data became available from drill cores through continental ice masses on Antarctica, Greenland and scattered high mountains that any detailed sense of changes and their pace emerged. The major climatic perturbation of the Younger Dryas and its end only became clear from the undisturbed annual layering in Greenland ice cores. It proved to have been extremely fast: a couple of decades at most. The GSSP for the start of the Holocene therefore lies in a single Greenland ice core preserved by cold storage in Copenhagen. It is a somewhat ephemeral record.

Leaving aside for the moment that the Anthropocene adds the future to the geological record, when was it supposed to start? Its name demands that it be linked to some human act that began to change the world. That is implicit in the beginning of agriculture, which held out the prospect of continuous growth in human populations by securing food resources rather than having to seek them. But such an event is not so good from the standpoint of purist stratigraphy as it happened at different times at different places and probably for different reasons (See: Mithen, S. 2004. After the Ice: A Global Human History, 20,000 – 5000 BC. Weidenfeld and Nicolson, London; ISBN-13: 978-0753813928 [A superb read]). A case has been made for the European conquest and colonisation of the Americas, which was eventually followed by the death from European diseases of tens of millions of native people, many of whom were farmers in the Amazon basin. The Greenland ice records a decline in atmospheric CO2 between 1570 and 1620 CE, which has been ascribed to massive regrowth of previously cleared tropical rainforest. That would define a start for the Anthropocene at around 1610 CE. Yet the main driver for erecting an Anthropocene Epoch is global warming, which has grown exponentially with the burning of fossil fuels and CO2 emissions since the ill-defined start of the Industrial Revolution (late 18th – early 19th century). It looks as though in a year or so the ICS will debate a much later start, at the peak of nuclear weapon fallout in 1964, which its champions claim coincides with the ‘Great Acceleration’ in world economic growth, emissions and warming.

If that is accepted, anyone still alive who was born before 1964 is a relic of the Holocene, as Philip Gibbard, secretary-general of the ICS, wryly observed, whereas our children and grandchildren will be wholly of the Anthropocene. We Holocene relics only grasped the change at the start of the 21st century! The very nature of exponential growth is that its tangible effects always come as a surprise. The build-up of human influence on the world has been proceeding stealthily since not long after the Holocene began. Annoyingly, the very name Anthropocene lays the blame on the whole of humanity. In reality it is an outcome of a mode of economy that demands continual exponential growth. That mode – the World Economy – lies completely beyond the reach of social and political control. It is effectively inhuman. So, why the pessimism – can’t human beings get rid of an ethos that is obviously alien to their interests? Perhaps ‘Anthropocene’ might be an apt name for the aftermath of such a reckoning, which may last long enough to be properly regarded as an Epoch …

‘Green’ metal mining?

A glance at statistics for the global consumption of any particular metal reveals much about the current unfairness of the world we live in. On a per capita basis, people in the developed, rich world use vastly more than do those in the less developed countries. It is commonly said that in order for everyone to live in a fair world, the poor need more metals and other physical resources to match the living standards of the rich, or the wealthy will need to consume much less. A new factor in the equitability equation is the necessity to stave off CO2-induced global warming, largely through replacing energy from fossil fuels with that produced by a variety of ‘green’ sources. That carries with it another issue: the technologies for carbon-free energy generation, transmission, storage and use will consume a broad range of metals and other physical resources. These include cobalt and lithium, graphite, rare-earth elements and especially copper, whose annual production is set to soar.

Copper is particularly critical. If China alone fulfils its planned production of all-electric transportation, the demand for copper will exceed 2 billion tonnes, requiring 119 years of production at the current extraction rate of 20 Mt per year. The rush to electric cars has already forced copper demand above production, resulting in soaring prices on the world market. In the last year they have doubled, reaching US$10,000 per tonne at the time of writing. Most metals are won by digging up their ores, often from considerable depths in the crust. Ores then have to be concentrated, smelted and the elemental metals refined. About 6% of global energy is consumed by this process, adding CO2 and a variety of noxious gases to the atmosphere, not to mention the stupendous amounts of uneconomic waste rock and polluted water. Copper is high up the list for environmental impact, being extracted from some of the world’s biggest mines. Like all physical resources, its extraction cannot be continued without further environmental deterioration. But is there a more sustainable way of extracting metals from the Earth?
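
As a quick sanity check on those figures, here is a minimal back-of-envelope sketch in Python; the cumulative-demand total is my own inference from the quoted 119 years at 20 Mt per year, not a number taken from any primary source.

```python
# Back-of-envelope check of the copper figures quoted above. The cumulative
# demand is inferred from the article's "119 years at 20 Mt per year", not
# taken from a primary source.
current_output_mt_per_year = 20      # Mt of copper mined per year (from the text)
years_quoted = 119                   # years of production quoted above

implied_demand_mt = current_output_mt_per_year * years_quoted
print(f"Implied cumulative demand: {implied_demand_mt} Mt "
      f"(~{implied_demand_mt / 1000:.1f} billion tonnes)")
# -> 2380 Mt, i.e. about 2.4 billion tonnes, consistent with 'over 2 billion tonnes'
```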

Bingham Canyon copper mine in Utah, USA; at 4.5 km diameter and 1.2 km depth it is the world’s largest excavation. (Credit: Mining Magazine)

Under the right chemical conditions many metals can be dissolved, so long as fluids can pass through the ore. One example is the use of sodium cyanide solution (known as a lixiviant) to dissolve gold from low-grade ore: so-called ‘heap leaching’. But this is done at the surface, either using newly crushed ore from an excavation or waste from earlier mining that could not extract fine-grained gold. A similar approach uses bacteria whose metabolism involves oxidation of sulfide ore minerals, resulting in chemical reactions that liberate their desirable metal content to solution in water. If buried orebodies are fractured in situ this kind of leaching will supposedly transform metal production, in an analogous fashion to fracking for gas and oil. Like fracking, current operations that involve both forms of hydrometallurgy generate highly toxic fluids, and in many cases extract only a fraction of the target metal. But a novel alternative has just emerged, which involves leaching based on electrical means (Martens, E. and 9 others 2021. Toward a more sustainable mining future with electrokinetic in situ leaching. Science Advances, v. 7, article eabf9971; DOI: 10.1126/sciadv.abf9971). It isn’t totally new, for it uses the same chemistry as in heap leaching. However, it does not involve shattering the orebody at depth. Instead, low-voltage currents are passed through the orebody, which induce a lixiviant to migrate through the rock, along mineral-grain boundaries rather than through fractures. Fluid movement becomes more efficient over time as the host rock is artificially ‘weathered’, thereby making it permeable. In effect, electrokinetic leaching creates a kind of hydrothermal system in reverse, by replacing the chemically reducing conditions of ore deposition with oxidising dissolution and transportation.

So far, the method has only been demonstrated through a small-scale proof of concept using drill core samples of ore from a copper mine. Tests over a few days consumed more than half the grains of a copper ore mineral (chalcopyrite) present in the ore sample. So, it seems to work, and astonishingly rapidly too. No doubt metal-mining companies, who are currently coining it hand over fist during a boom in metal prices, will beat a path to the doors of the team of researchers. But is it an economic proposition? The authors will soon find out … More important, if it is deployed widely, will it increase the sustainability of metal mining? At first glance, yes: by removing the need for excavation of ore, liberation of ore-mineral grains by milling and their separation from valueless waste and many other aspects of beneficiation at the surface. Yet, the bottom line is that mining companies deploy their capital not so much to make ingots of useful metal but primarily to yield profits. Speeding up metal extraction and thereby its supply to the world market could drive down the price that they can get for each tonne. Perversely, it is perceived shortages of metals and the resulting inflation of price that really yield bonanzas. My guess is that the industry will continue mining in the present manner, with all its lack of sustainability and environmental impact, for that very reason. The real way to reduce damage is to reduce demand for metals: do people in general really need more of them and the goods in which they are bound up in such vast amounts?

Update: Can a supernova affect the Earth System?

Earth-pages asked this question in August 2020 because it had been suggested that at least one mass extinction – the protracted faunal decline during the Late Devonian – may provide evidence that supernovas can have a deadly influence. The authors of the paper that I discussed proposed mass spectrometric analysis of isotopes, such as 146Sm, 235U and 244Pu, in sediments deposited in an extinction event to test the hypothesis. In the 14 May 2021 issue of Science a multinational group of geochemists and physicists, led by Anton Wallner of the Australian National University, report detection of alien isotopes in roughly 10 million-year-old sediments sampled from the Pacific Ocean floor (Wallner, A. and 12 others 2021. 60Fe and 244Pu deposited on Earth constrain the r-process yields of recent nearby supernovae. Science, v. 372, p. 742-745; DOI: 10.1126/science.aax3972).

Many of the chemical elements whose atomic masses are greater than 56 form by a nucleosynthetic process known as rapid neutron capture – termed the ‘r-process’ by physicists. This requires such high energy that the likely heavy-element ‘nurseries’ must be events such as supernovas and/or mergers of neutron stars. The iron and plutonium isotopes detected at very low concentrations are radioactive, with half-lives of 2.6 Ma for 60Fe and 80.6 Ma for 244Pu. That makes it impossible for them to be terrestrial in origin because, over the lifetime of the Earth, they would have decayed away completely. They must be from recent, alien sources either in our galaxy or one of the nearby galaxies. In fact two ‘doses’ were involved. The authors make no comment on any relationship with marine or continental extinctions at that time in the Miocene Epoch.
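
A minimal sketch of the arithmetic behind that statement, using the half-lives quoted above; the only figure I have added is the approximate age of the Earth.

```python
# Fraction of a radionuclide remaining after time t is (1/2) ** (t / half_life).
# This shows why any 60Fe or 244Pu present when Earth formed (~4,500 Ma ago)
# would long since have dropped below any conceivable detection limit.
EARTH_AGE_MA = 4500.0
half_lives_ma = {"60Fe": 2.6, "244Pu": 80.6}   # half-lives in Ma, from the text

for isotope, t_half in half_lives_ma.items():
    n_half_lives = EARTH_AGE_MA / t_half
    fraction_left = 0.5 ** n_half_lives
    print(f"{isotope}: {n_half_lives:6.0f} half-lives elapsed, "
          f"fraction remaining ~{fraction_left:.1e}")
# 244Pu: ~56 half-lives  -> ~1e-17 of the original amount
# 60Fe: ~1731 half-lives -> effectively zero
```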

The subduction pulley: a new feature of plate tectonics

Geological map of part of the Italian Alps. The Sesia-Lanzo Zone is unit 6 in the key: a – highly deformed gneisses; b – metasedimentary schists with granite intrusions; c – mafic rocks; d – mixed mantle and crystalline basement rocks. (Credit: M. Assanelli, Università degli Studi di Milano)

To a first approximation, as they say, the basis of plate tectonics is that the lithosphere is divided up into discrete, rigid plates that are bounded by lines of divergent, convergent and sideways relative motions: constructive, destructive and conservative plate margins. These are characterised by zones of earthquakes whose senses of motion roughly correspond to the nature of each boundary: normal, reverse and strike-slip, respectively. The seismicity is mainly confined to the lithosphere in the cases of constructive and conservative boundaries (i.e. shallow) but extends as deep as 700 km into the mantle at destructive margins, thereby defining the subduction of lithosphere that remains cool enough to retain its rigidity. Although the definition assumes that there is no deformation within plates, in practice that does occur for a wide variety of reasons in the form of intra-plate seismicity, mainly within continental lithosphere. Oceanic plate interiors are much stronger and largely ‘follow the rules’; they are generally seismically quiet.

One important feature of plate tectonics is the creation of new subduction zones when an earlier one eventually ceases to function. Where these form in an oceanic setting, volcanism in the overriding plate creates island arcs. They create precursors of new continental crust because the low density of the magmas forming the new lithosphere confers sufficient buoyancy to make them difficult to subduct. Eventually island arcs become accreted onto continental margins through subduction of the intervening oceanic lithosphere. Joining them in such ‘docking’ are microcontinents, small fragments spalled from much older continents because of the formation of new constructive plate margins within them. It might seem that arcs and microcontinents behave like passive rafts to form the complex assemblages of terranes that characterise continental mountain belts, such as those of western North America, the Himalaya and the Alps. Yet evidence has emerged that such docking is much more complicated (Gün, E. et al. 2021. Pre-collisional extension of microcontinental terranes by a subduction pulley. Nature Geoscience, v. 14, online publication; DOI: 10.1038/s41561-021-00746-9).

Erkan Gün and colleagues from the University of Toronto and Istanbul Technical University examined one of the terranes in the Italian Alps – the Sesia-Lanzo Zone (SLZ) – thought to have been a late-Carboniferous microcontinental fragment in the ocean that once separated Africa from Europe. When it accreted, the SLZ was forced downwards to depths of up to 70 km and then popped up in the latter stages of the Alpine orogeny. It is now a high-pressure, low-temperature metamorphic complex, having reached eclogite facies during its evolution. Yet its original components, including granites that contain the high-pressure mineral jadeite instead of feldspar, are still recognisable. Decades of geological mapping have revealed that the SLZ sequence shows signs of large-scale extensional tectonics. Clearly that cannot have occurred after its incorporation into southern Europe, and must therefore have taken place prior to its docking. Similar features are present within the accreted microcontinental and island-arc terranes of Eastern Anatolia in Turkey. In fact, most large orogenic belts comprise hosts of accreted terranes that have been amalgamated into older continents.

An ‘engineering’ simplification of the subduction pulley. Different elements represent slab weight (slab pull force) transmitted through a pulley at the trench to a weak microcontinent and a strong oceanic lithosphere. (Credit: Gün et al., Fig. 4)

Lithospheric extension associated with convergent plate margins has been deduced widely in the form of back-arc basins. But these form in the plate being underridden by a subduction zone. Extension of the SLZ, however, must have taken place in the plate destined to be subducted. Gün et al. modelled the forces, lithospheric structure, deformation and tectonic consequences that may have operated to form the SLZ, for a variety of microcontinent sizes. The pull exerted by the subduction of oceanic lithosphere (slab pull) would impose extensional forces on the lithosphere as it approached the destructive plate boundary. Oceanic lithosphere is very strong and would remain intact, simply transmitting slab-pull force to the weaker continental lithosphere, which ultimately would be extended. This is what the authors call a subduction ‘pulley’ system. At some stage the microcontinent fails mechanically, part of it being detached to continue with the now broken slab down the subduction zone. The rest would become a terrane accreted to the overriding plate. Subduction at this site would stop because the linkage to the plate had broken. It may continue by being transferred to a new destructive margin ‘behind’ the accreted microcontinent. This would allow other weak continental and island-arc ‘passengers’ further out on the oceanic plate eventually to undergo much the same process.
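
One way to picture the ‘engineering’ simplification shown in the figure above is as a weakest-link problem. The sketch below uses entirely hypothetical force and strength values, chosen only to illustrate that logic; none of the numbers comes from Gün et al.

```python
# A purely illustrative 'weakest link' reading of the subduction-pulley idea.
# All numbers are hypothetical, chosen only to show the logic: slab pull is
# transmitted through the strong oceanic lithosphere, so stretching and
# eventual failure localise in the weakest segment of the downgoing plate.
slab_pull_n_per_m = 3.0e13               # slab-pull force per metre of trench (hypothetical)

# Hypothetical integrated strengths (N per metre of trench) of plate segments
segments = {
    "old oceanic lithosphere": 8.0e13,   # strong: transmits the pull intact
    "microcontinent":          2.0e13,   # weak: extends and eventually fails
}

for name, strength in segments.items():
    state = "extends and may fail" if slab_pull_n_per_m > strength else "remains intact"
    print(f"{name:25s}: strength {strength:.1e} N/m -> {state}")
```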

The observed complexity of other vast assemblies of tectonic terranes, such as those along the northern Pacific coast of North America and in many more ancient orogenic belts, is probably as much a result of extension before accretion as of the compressional deformation suffered afterwards. The theoretical work by Erkan Gün and colleagues will surely spur tectonicians to re-evaluate earlier models of orogenesis.

Note: Figure 2 in the paper by Gün et al. shows how the width of the microcontinent (perpendicular to the subduction zone) affects the outcomes of the subduction pulley. View an animation of a subduction pulley.

CSI and detecting the presence of ancient humans

Enter a room, even for a few minutes, and dead skin cells will follow you like an invisible cloud to settle on exposed surfaces. Live there and a greyish white, fluffy dust builds up in every room. Even the most obsessive cleaning will not remove it, especially under a bed or on the bathroom floor. Consider a cave as a home, but one without vacuum cleaners, any kind of sanitation, paper tissues, panty liners, nappies or wet wipes. To pre-modern human dwellings can be added snot, faecal matter, sweat, urine, menstrual blood and semen, among all the other detritus of living. A modern crime-scene investigator would be overwhelmed by the sheer abundance of DNA from the host of people who had once dwelt there. CSI works today as much because most homes are pretty clean and most people are fastidious about personal hygiene as because of the rapidly shrinking lower limit of DNA detection achievable by the tools at its disposal. Except, that is, when someone from outside the home commits a criminal offence: burglary, GBH, rape, murder. We have all eagerly watched ‘police operas’ and in the absence of other evidence the forensic team generally gets its perpetrator, unless they did the deed wearing a hazmat suit, mask, bootees and latex gloves.

Artistic impression of Neanderthal extended-family life in a cave (credit: Tyler B. Tretsven)

Since 2015 analysis of environmental DNA from soils has begun to revolutionise the analysis of ancient ecosystems, including the living spaces of ancient humans (see: Detecting the presence of hominins in ancient soil samples, April 2017). It is no longer necessary to find tools or skeletal remains of humans to detect their former presence and work out their ancestry. DNA sequencing of soil samples, formerly discarded from archaeological sites, can now detect former human presence in a particular layer, as well as that of other animals. In many cases the ‘signal’ pervades the layer rather than occurring in a particular spot, as expected from shed skin cells and bodily fluids. The first results were promising but only revealed mitochondrial DNA. Now the technique has extended to nuclear DNA: the genome (Vernot, B. and 33 others 2021. Unearthing Neanderthal population history using nuclear and mitochondrial DNA from cave sediments. Science, v. 372, article eabf1667; DOI: 10.1126/science.abf1667). Benjamin Vernot and colleagues from seven countries collected and analysed cave soils from three promising sites with tangible signs of ancient human occupation. Two of them were in Siberia and had previously yielded Neanderthal and Denisovan genomes from bones. The other, part of the Atapuerca cave complex of northern Spain, had not. The Russian caves yielded DNA from more than 60 samples, 30 being nuclear DNA consistent with that from actual Neanderthal and Denisovan bones found in the caves. Galería de las Estatuas cave in Spain presented a soil profile spanning about 40 thousand years from 112 to 70 ka.

Teasing out nuclear DNA from soil is complicated, from both technical and theoretical standpoints. So being able to match genomes from soil and bone samples in the Russian caves validated the methodology. The Spanish samples could then be treated with confidence. Galería de las Estatuas revealed the presence of Neanderthals throughout its 40 ka soil profile, but also a surprise. The older DNA was sufficiently distinct from that of later levels to suggest that two different populations had used the cave as a home, the original occupants being replaced by another genetically different group around 100 to 115 ka ago. The earlier affinity was with the ancestors of sequenced Neanderthal remains from Belgium, the later with those from Croatia. That time is at the end of the last (Eemian) interglacial episode, so one possibility is a population change driven by climatic deterioration. This success is sure to encourage other re-examinations of caves all over the place. That is, if there is the analytical capacity to perform such painstaking work in greater volume and at greater pace. Like many other palaeo-genomic studies, this one has relied heavily on the analytical facilities pioneered and developed by Svante Pääbo at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. Covid has forced genetics to the front page for a year and more. And it has led to many advances in analytical techniques, particularly in their speed. It would be nice to think that a dreadful experience may end up having positive benefits for understanding the full history of humanity.

Multicelled fossils from the 1 Ga old Torridonian of Scotland

Beinn Alligin and Loch Torridon, Northwest Highlands of Scotland. The hills are formed by Torridonian terrestrial sediments (credit: Stefan Krause, Wikimedia Commons)

Palaeobiologists interested in the origin of animals have generally focussed on sedimentary rocks from southern China: specifically those of the 635 to 550 Ma Doushantuo Formation. Phosphorus-rich nodules in those marine sediments have yielded tiny spheroids whose structure suggests that they are fossil embryos of some unspecified eukaryote. The Doushantuo Formation lies on top of rocks associated with the Marinoan episode of global glaciation during the Neoproterozoic, a feature which suggested that the evolutionary leap from single- to multi-celled eukaryotes was linked to environmental changes associated with Snowball Earth events. In a forthcoming issue of Current Biology that view will be challenged and the origin of multicellular life pushed back to around 1 billion years ago (Strother, P.K. et al. 2021. A possible billion-year-old holozoan with differentiated multicellularity. Current Biology, v. 31, p. 1-8; DOI: 10.1016/j.cub.2021.03.051). Spherical fossils of that age have been teased out of phosphatic nodules deposited in lacustrine sediments from the lower part of the Mesoproterozoic Torridonian Group of the Northwest Highlands of Scotland.

The internal structure of the fossils has been preserved in exquisite detail. Not only are cells packed together in their interiors, but some reveal an outer layer of larger sausage-shaped cells. So, cell differentiation had taken place in the original organisms, whereas such features are not visible in the Doushantuo ‘embryos’. A few of the central cells show dark, organic spots that may be remains of their nuclei. Whatever these multicellular spheres may have developed into, the morphology of the Torridonian fossils is consistent with a transition from single-celled holozoans to the dominant metazoans of the Phanerozoic; i.e. the stem of later animals. The younger, Chinese fossils that are reputed to be embryos cannot be distinguished from multicellular algae (see: Excitement over early animals dampened, January 2012).

Photomicrograph of Bicellum brasierii: scale bar = 10 μm; arrows point to dark spots that may be cell nuclei (credit: Charles Wellman, Sheffield University)

Interestingly, the Torridonian Group is exclusively terrestrial in origin, being dominated by sediments deposited in the alluvial plains of huge braided streams that eventually buried a rugged landscape eroded from Archaean high-grade metamorphic rocks. Thus the environment would have been continually in contact with the atmosphere, and therefore with the oxygen that is vital for eukaryote life forms. The age of the fossils also rings a bell: a molecular clock based on the genomics of all groups of animals alive today hints at around 900-1000 Ma for the emergence of the basic body plan. Because its host rocks are about that age, could Bicellum brasierii be the Common Ancestor of all modern animals? That would be a nice tribute to the second author, the late Martin Brasier of Oxford University, who sought signs of the most ancient life for much of his career.

See also: Billion-year-old fossil reveals missing link in the evolution of animals (Press release, Sheffield University; 29 April 2021)

Wildfires and the formation of sugar-loaf hills

One iconic feature of Rio de Janeiro is Corcovado Mountain, topped by the huge Cristo Redentor (Christ the Redeemer) statue. Another is the Sugar Loaf (Pão de Açúcar) that broods over Botafogo Bay. Each is an inselberg: a loan word from the German for ‘island mountain’. Elsewhere they are known as kopjes (southern Africa), monadnocks (North America) or bornhardts, after the German explorer who first described them. But, being on the coast, the Brazilian examples are not typical. Most rise up spectacularly from almost featureless plains, a well-known case being Uluru (Ayers Rock) almost at the centre of Australia. Arid and semi-arid plains of Africa and the Indian subcontinent are liberally dotted with them. So scenically dominant and spectacularly stark, inselbergs are often revered by local people, and have been so for millennia. The only thing that I remember from a desperately boring, but compulsory, first-year course on geomorphology in 1965 is their connection with the ‘cosmogonic egg’: a mythological motif that spans Eurasia, Australia and Africa, signifying that from which the universe hatched. It is perhaps no coincidence that hills in England that suddenly rise from flat land, such as the Wrekin in Shropshire and the Malvern Hills in Worcestershire, still host the sport of rolling hard-boiled eggs to celebrate the pagan festival of Eostre (now Easter) that marks the spring rebirth of the land.

Vista of Rio de Janeiro and its inselbergs (Credit: Leonardo Ferreira Mendes, Creative Commons)

How inselbergs and their surrounding plains formed has long been a hot topic in tropical geomorphology. One theory is that they are especially resistant rocks around which eroding rivers meandered during the formation of peneplains, a variant being that they were surrounded by lines of weakness, such as faults or major joint systems. Another is that they formed by erosion into a deeply but irregularly weathered surface. Then there is L.C. King’s theory of escarpment retreat and, of course, a mixture of processes in different stages, or a unique origin for each inselberg. In effect, there has been no final, widely agreed explanation. But that may be about to change.

A common element to most inselbergs is their very steep and sometimes vertical flanks. Some even display overhangs at their base. Such potential shelters encouraged local people to camp there and, in response to the awe inspired by the sheer majesty of the looming inselberg, to use them for sacred rites and decoration. That is especially true of Australia, so it is fitting that what may be a breakthrough in understanding inselberg formation should have arisen there (Buckman, S. et al. 2021. Fire-induced rock spalling as a mechanism of weathering responsible for flared slope and inselberg development. Nature Communications, v. 12, article 2150; DOI: 10.1038/s41467-021-22451-2). Breaking rock by deliberate use of fire has been done for millennia. For instance, Hannibal is said to have used fire to break down huge fallen boulders that blocked passage for his war elephants as his army advanced on Rome. Fire setting is still used by villagers in South India to spall large flakes of rock from outcrops. It is done with such skill that thin slabs up to 3-4 metres across can be lifted, and then split into thin posts for fencing or training vines: an essential alternative to wooden posts that termites would otherwise devour in a matter of months.

Solomon Buckman and colleagues from the University of Wollongong, Australia, were drawn to a new hypothesis for inselberg formation by observations around low rock faces and boulders after the 2019-20 “Black Summer” wildfires in eastern Australia. Where burned trees had fallen against rock faces, up to hundreds of kilograms of spalled flakes lay at the base of each face, which also bore freshly formed scars: clear signs of fire action. Thermal expansion and contraction of rock caused by air temperatures of hundreds of degrees close to wildfires is clearly a powerful means of rapid erosion. If the rock is damp – most likely at the base of a rockface as all rainfall on the outcrop drains in its direction – the mechanism is enhanced: Hannibal’s engineers poured vinegar onto the boulders heated by fire, to great effect. Buckman et al. estimate the rate of lateral erosion by fire at slope bases in Australia to be around ten thousand times faster than that operating on horizontal rock surfaces, which are not exposed to fire as no vegetation grows on them. Over time, slopes steepen, aided by the formation of flared surfaces at the base. If spalled debris is carried away quickly the developing inselberg evolves to its classical sugarloaf shape. In more arid conditions the debris builds around the outcrop to steadily smother inselberg development, leaving tors and kopjes. The paper came to press remarkably quickly relative to the authors’ field work and analyses. This is a work-in-progress to be followed up by cosmogenic-isotope and other means of surface dating of the tops and flanks of suitably accessible inselbergs and similar features such as Western Australia’s famous Wave Rock (a flared escarpment).

Wave Rock in the interior of Western Australia is 15 m high and 100 m long and revered by the local Ballardong people as a creation of the Rainbow Serpent

Climate change has shifted Earth’s poles

The shifting position of the Tropic of Cancer in Mexico due to nutation from 2005 to 2010 (Credit: Roberto González, Wikimedia Commons)

First suggested by Isaac Newton and confirmed from observations by Seth Chandler in 1891, the Earth’s axis of rotation and thus its geographic poles wander in much the same manner as does the axis of a gyroscope, through a process known as nutation. The best-known movement of the poles – Chandler wobble – results in a change of about 9 metres in the poles’ positions every 433 days, which describes a rough circle around the mean position of each pole. Every 18.6 years the orbital behaviour of the Moon results in a substantially larger shift, illustrated by a shift in the position of the circles of latitude, as above. Essentially, nutation results from the combined effects of gravitational forces imposed by other bodies. The axial precession cycle of 26 thousand years that is part of the Milankovich effect on long-term climate forcing is a result of nutation. But the Earth’s own gravitational field changes too, as mass within and upon it shifts from place to place. So mantle convection and plate tectonics inevitably change Earth’s mode of rotation, as do changes in the Earth’s molten iron core.

The most sensitive instrument devoted to measuring changes in Earth’s gravity is the tandem of two satellites known as the Gravity Recovery and Climate Experiment or GRACE. Among much else, GRACE has revealed the rate of withdrawal of groundwater from aquifers in Northern India and areas of mass deficit over the Canadian Shield that resulted from melting of its vast ice sheet since 18 ka ago (see: Ice age mass deficit over Canada deduced from gravity data, July 2007). Further GRACE data have now confirmed that more recent melting of polar glaciers due to global warming underlies an unusual reversal and acceleration of polar wandering since the 1990s (Deng, S. et al. 2021. Polar drift in the 1990s explained by terrestrial water storage changes. Geophysical Research Letters, v. 48, online article e2020GL092114; DOI: 10.1029/2020GL092114). In 1995 polar drift changed direction from southwards to eastwards, and its speed increased to about 17 times the 1981 to 1995 mean. That tallies with an increase in the flow of glacial meltwater from polar regions and also with changes in the mass balance of surface and subsurface water at lower latitudes, especially in India, the USA and China, where groundwater pumping for irrigation is on a massive scale.

Clearly, human activity is not only changing climate, but also our planet’s astronomical behaviour. That connection, in itself, is enough to set alarm bells ringing, even though the axial shift’s main tangible effect is to change the length of the day by a few milliseconds. Polar wandering has been documented for the last 176 years. Conceivably, data on shifts in past direction and speed may allow climatic changes throughout the industrial revolution to be assessed independently of meteorological data and on a whole-planet basis.

See also: Climate has shifted the axis of the Earth (EurekaAlert, 22 April 2021)

Multitudes of Tyrannosaurus rex in Cretaceous North America

Full-frontal skull of ‘Sue’, the best-preserved and among the largest specimens of T. rex (Credit: Scott Robert Anselmo, Wikimedia Commons)

Long-term followers of Earth-logs and its predecessor Earth-pages News will have observed my general detachment from the dinosaur hullabaloo, which just runs and runs. That is, except for real hold-the-front-page items. One popped up in the 16 April 2021 issue of Science (Marshall, C.R. et al. 2021. Absolute abundance and preservation rate of Tyrannosaurus rex. Science, v. 372, p. 284-287; DOI: 10.1126/science.abc8300). For over two million years in the Late Cretaceous, just before all dinosaurs – except for birds – literally bit the dust, the authors estimate that a great many of the dinosaurian poster-child Tyrannosaurus rex lurked in North America. I write ‘lurking’ because the ‘tyrant lizard king’ when fully grown was so big that if it ran and fell over, it would have been unable to get up! Tangible evidence from trackways suggests that it ambled from place to place. The leg bones of a 7-tonner would probably have shattered at speeds above 18 km per hour, and accelerating to the speed of a human jogger would, anyhow, have exhausted its energy reserves. But it was agile enough to be an ambush predator; it could even pirouette! And it could crush bones so well that it was able to consume prey entirely. It has been suggested that T. rex may have been a scavenger, at least in old age. Whatever, how is it possible to estimate numbers of any extinct species, let alone dinosaurs?

The stumbling block to getting a result that is better than guesswork is the fossil record of a species. First, it is incomplete; secondly, the chance of finding a fossil varies from area to area, depending on all kinds of factors. These include the degree of exposure of sedimentary rock formed by the environment in which a species thrived, as well as the vagaries of preservation due to post-mortem scavenging, erosion and water transport. In life the population density of a particular species varies between different ecosystems and from species to species. For instance, more lions can thrive in open rangeland than in wooded environments, whereas the opposite holds for tigers: probably because of different hunting strategies. Many factors such as these conspire to thwart realistic estimates of ancient populations. Studies of living species, however, suggest that population density of an animal species is inversely related to the average body mass of individuals. Take British herbivores: there are many more rabbits than there are deer. On the grasslands of East Africa hyenas and wild dogs outnumber lions. This mass-population relationship (Damuth’s Law), outlined by US ecologist John Damuth, also depends on where a species exists in the food chain (its trophic level) as well as its physiology. Yet for living species, populations of flesh-eating mammals of similar mass show a 150-fold variation; a scatter that results from their different habits and habitats and also their energy requirements. Because they are warm-blooded (endothermic), small carnivorous mammals need a greater energy intake than do similar-sized, cold-blooded reptiles, which need to eat far less. But not all living reptiles are ectothermic, especially the bigger ones. The Komodo dragon is mesothermic, midway between the two, and uses about a fifth of the energy needed by a similar-sized mammal carnivore. Population densities of dragons in the Lesser Sunda Islands are more than twice those of physiologically comparable mammalian predators.
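
For a feel of how Damuth’s Law plays out, here is a purely illustrative sketch in which density scales as body mass to the power of roughly -3/4; the constant is a made-up placeholder, since in reality it shifts with trophic level and physiology.

```python
# Illustrative sketch of Damuth's Law: population density scaling roughly as
# body mass to the power of about -3/4. The constant k is a hypothetical
# placeholder; in reality it varies with trophic level and physiology.
def damuth_density(mass_kg, k=100.0, exponent=-0.75):
    """Rough individuals per km^2 for a species of given mean adult body mass."""
    return k * mass_kg ** exponent

for name, mass_kg in [("rabbit", 2), ("red deer", 100), ("lion", 180)]:
    print(f"{name:>8}: ~{damuth_density(mass_kg):6.1f} per km^2 at {mass_kg} kg")
# Heavier species come out much thinner on the ground: many more rabbits than deer.
```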

A number of features suggest that the metabolism of carnivorous dinosaurs lay midway between those of large predatory mammals and big lizards like the Komodo dragon. This is the basic assumption for the analysis by Charles Marshall and colleagues. They did not focus on the biggest T. rex specimens, but on the average, estimated body mass of adults. There are numerous smaller specimens of the beast, but clearly some of these would have been sexually immature. It has been estimated that adulthood would have been achieved by around 15 years. The size data seem to show that achieving sexual maturity was accompanied by a 4 to 5 year growth spurt from the 2 to 3 tonnes of the largest juveniles to more than 7 t in the largest known adults, which may have lived into their early 30s. The authors used this range to estimate a mean adult mass of 5.2 t. Taking this parameter and many more intricate factors into account in Monte Carlo simulations, Marshall et al. came up with an estimate of 20 thousand T. rex adults across North America at any one time, but with an uncertainty of between 1,300 and 328,000. Spread over the 2.3 million km2 of Late Cretaceous North America that lay above sea level, their best-estimated population density would have been about 1 individual for every 100 square kilometres. An area the size of California could have had about 3800 adult Tyrannosaurus rex, while there may well have been two in Washington DC. Lest one’s imagination gets overly excited, were tigers and lions living wild today in North America under similar ecological conditions there would have been 12 and 28 respectively in the US capital. Yet those two adult Washingtonian T. rexes would have been unable to catch anything capable of a sustained jog, without keeling over. The juveniles weighing in at up to 3 tonnes would probably have been the real top predators; the smaller, the swifter and thus the fiercer. Which leaves me to wonder, “Did the early teenagers catch the prey for their massive parents to chow down on?”
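
Those headline numbers are easy to sanity-check. The sketch below simply redoes the arithmetic; California’s land area is my own round figure for illustration, not one taken from the paper.

```python
# Sanity check of the population-density figures quoted above. The area of
# California is my own round figure for illustration, not from the paper.
adults = 20_000                  # best-estimate standing population of adults
habitable_area_km2 = 2.3e6       # Late Cretaceous North America above sea level

km2_per_adult = habitable_area_km2 / adults
print(f"About one adult per {km2_per_adult:.0f} km2")   # ~115 km2, roughly 1 per 100 km2

california_km2 = 424_000         # assumed modern land area of California
print(f"A California-sized area: ~{california_km2 / km2_per_adult:.0f} adults")
# ~3,700 adults, close to the article's 'about 3800'
```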

See also: How many T. rexes were there? Billions. (ScienceDaily 15 April 2021)