The ‘boring billion’ years of the Mesoproterozoic: plate tectonics and the eukaryotes

The emergence of the eukaryotes – of which we are a late-entry member – has been debated for quite a while. In 2023 Earth-logs reported that a study of ‘biomarker’ organic chemicals in Proterozoic sediments suggests that eukaryotes cannot be traced back further than about 900 Ma ago using such an approach. At about the same time another biomarker study showed signs of a eukaryote presence at around 1050 Ma. Both outcomes seriously contradicted a ‘molecular-clock’ approach based on the DNA of modern members of the Eukarya and estimates of the rate of genetic mutation. That method sought to deduce the time in the past when the last eukaryotic common ancestor (LECA) appeared. It pointed to about 2 Ga ago, i.e. a few hundred million years after the Great Oxygenation Event got underway. Since eukaryote metabolism depends on oxygen, the molecular-clock result seems reasonable. The biomarker evidence does not. But were the Palaeo- and Mesoproterozoic Eras truly ‘boring’? A recent paper by Dietmar Müller and colleagues from the Universities of Sydney and Adelaide, Australia, definitely shows that geologically they were far from that (Müller, R.D. et al. 2025. Mid-Proterozoic expansion of passive margins and reduction in volcanic outgassing supported marine oxygenation and eukaryogenesis. Earth and Planetary Science Letters, v. 672; DOI: 10.1016/j.epsl.2025.119683).

Carbon influx (million tons per year) into tectonic plates and into the ocean-atmosphere system from 1800 Ma to present. The colour bands represent: total carbon influx into the atmosphere (mauve); sequestered in tectonic plates (green); net atmospheric influx i.e. total minus carbon sequestered into plates (orange). The widths of the bands show the uncertainties of the calculated masses shown as darker coloured lines.

From 1800 to 800 Ma two supercontinents – Nuna-Columbia and Rodinia – aggregated nearly all existing continental masses, and then broke apart. Continents had collided and then split asunder to drift. So plate tectonics was very active and encompassed the entire planet, as Müller et al.’s palaeogeographic animation reveals dramatically. Tectonics behaved in much the same fashion through the succeeding Neoproterozoic and Phanerozoic to build up and then fragment the more familiar supercontinent of Pangaea. Such dynamic events emit magma to form new oceanic lithosphere at oceanic rift systems and at arc volcanoes above subduction zones, interspersed with plume-related large igneous provinces, and their output waxes and wanes. Inevitably, such partial melting delivered carbon dioxide to the atmosphere. Reaction between new lithosphere and dissolved CO2, on land and in the rubbly flanks of spreading ridges, drew down and sequestered some of that gas in the form of solid carbonate minerals. Continental collisions raised the land surface and the pace of weathering, which also acted as a carbon sink. But collisions also involved metamorphism that released carbon dioxide from limestones caught up in the crustal transformation. This protracted and changing tectonic evolution is completely bound up, through the rock cycle, with geochemical change in the carbon cycle.

From the latest knowledge of the tectonic and other factors behind the accretion and break-up of Nuna and Rodinia, Müller et al. were able to model the changes in the carbon cycle during the ‘boring billion’ and their effects on climate and the chemistry of the oceans. For instance, about 1.46 Ga ago, the total length of continental margins doubled while Nuna broke apart. That would have hugely increased the area of shallow shelf seas where living processes would have been concentrated, including the photosynthetic emission of oxygen. In an evolutionary sense this increased, diversified and separated the ecological niches in which evolution could prosper. It also increased the sequestration of the greenhouse gas through reactions on the flanks of a multiplicity of oceanic rift systems, thereby cooling the planet. Translating this into a geochemical model of the changing carbon cycle (see figure) suggests that the rate of carbon addition to the atmosphere (outgassing) halved during the Mesoproterozoic. This phase of the carbon cycle, and the probable global cooling bound up with Nuna’s breakup, ended with the start of Rodinia’s aggregation about 1000 Ma ago – around the time that biomarkers first indicate the presence of eukaryotes.
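The bookkeeping behind the figure described above is straightforward in outline: the net carbon influx to the ocean-atmosphere system is simply total outgassing minus whatever the plates lock away. A minimal sketch of that balance, using invented illustrative fluxes rather than Müller et al.’s modelled values, might look like this:

```python
# Minimal sketch of the carbon bookkeeping described in the figure caption.
# The fluxes below are illustrative placeholders (Mt C per year), NOT the
# values computed by Müller et al. (2025).

def net_atmospheric_influx(total_outgassing, plate_sequestration):
    """Net carbon flux to the ocean-atmosphere system = outgassing - sequestration."""
    return total_outgassing - plate_sequestration

# Hypothetical snapshots: before and after the ~1.46 Ga breakup of Nuna, when
# doubled passive-margin length is argued to have boosted sequestration.
before = net_atmospheric_influx(total_outgassing=60.0, plate_sequestration=20.0)
after = net_atmospheric_influx(total_outgassing=45.0, plate_sequestration=30.0)

print(f"Net influx before breakup: {before:.0f} Mt C/yr")
print(f"Net influx after breakup:  {after:.0f} Mt C/yr")  # lower value -> cooling
```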

Simplified structures of (a) a prokaryote cell; (b) a simple eukaryote animal cell. Plants also contain organelles called chloroplasts

So, did tectonics play a major role in the rise of the Eukarya? Well, of course it did, as much as it was subsequently the changing background to the appearance of the Ediacaran animals and the evolutionary carnival of the Phanerozoic. But did it affect the billion-year delay of ‘eukaryogenesis’ during prolonged availability of the oxygen that such a biological revolution demanded? Possibly not. Lynn Margulis’s hypothesis of the origin of the basic eukaryote cell by a process of ‘endosymbiosis’ is still the best candidate 50 years on. She suggested that such cells were built from various forms of bacteria and archaea successively being engulfed within a cell wall to function together through symbiosis. Compared with prokaryote cells those of the eukaryotes are enormously complex. At each stage the symbionts had to be or become compatible to survive. It is highly unlikely that all components entered the relationship together. Each possible kind of cell assembly was also subject to evolutionary pressures. This clearly was a slow evolutionary process, probably only surviving from stage to stage because of the global presence of a little oxygen. But eukaryote cell assembly may also have been forced to restart again and again until a stable form emerged.

See also: New Clues Show Earth’s “Boring Billion” Sparked the Rise of Life. SciTechDaily, 3 November 2025

Evolution of pigmentation in anatomically modern humans of Europe: a new paradigm?

The colours of human skin, eyes and hair in living people across the world are determined by variants of genes (alleles) found at the same place on a chromosome. Since chromosomes are inherited from both mother and father, an individual may have the same two alleles (homozygous), or one of each (heterozygous). A dominant allele is always expressed, even if only a single copy is present. A recessive allele is only expressed if the individual inherits two copies of it. Most characteristics of individuals result from the interaction of multiple genes, rather than a single gene. A commonly cited example is the coloration of eyes. If we had a single gene for eye colour – that of the iris – with one allele for blue (recessive, ‘b’) and one for brown (dominant, ‘B’) pigmentation, brown-eyed individuals would have one or two ‘B’ alleles (bB or BB), whereas those with blue eyes would have to have two ‘blue’ alleles (bb). But inheritance is more complicated than that: there are people with green, hazel or grey eyes and even left and right eyes of different colours. Such examples suggest that there are more than two genes affecting human eye colour, and each must have evolved as a result of mutations. Much the same goes for hair and skin coloration.
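For readers who like to see the dominance rule spelled out, here is a tiny sketch of the single-gene brown/blue example above; it simply enumerates the possible genotypes and applies the rule that one ‘B’ allele is enough (real eye colour, as the text notes, involves several interacting genes):

```python
# Toy single-locus model of the brown (B, dominant) / blue (b, recessive)
# example in the text. Real pigmentation involves many interacting genes.
from itertools import product

def eye_colour(genotype: str) -> str:
    """A single dominant 'B' allele is enough to give brown eyes."""
    return "brown" if "B" in genotype else "blue"

# All combinations of one allele from each parent (a Punnett square).
for maternal, paternal in product("Bb", repeat=2):
    genotype = maternal + paternal
    print(genotype, "->", eye_colour(genotype))
# BB, Bb and bB come out brown; only bb gives blue.
```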

A group of scientists from the University of Ferrara in Italy have analysed highly detailed ancient DNA in anatomically modern human remains from Russia (Palaeolithic), Sweden (Mesolithic) and Croatia (Neolithic) to tease out the complexities of pigmentation inheritance. Then they applied a statistical approach learned from that study to predict the likely skin, eye and hair pigmentation in 348 less detailed genomes of ancient individuals whose remains date back to 45 ka (Perretti, S. et al. 2025. Inference of human pigmentation from ancient DNA by genotype likelihood. Proceedings of the National Academy of Sciences, v. 122, article e2502158122; DOI: 10.1073/pnas.2502158122).
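Perretti et al.’s genotype-likelihood method is considerably more elaborate, but its core idea is to weight each possible genotype by how well it explains the often poor-quality sequence data and then average the phenotype predictions accordingly. That idea can be caricatured in a few lines; the SNP, likelihoods and penetrance values below are entirely hypothetical:

```python
# Hypothetical sketch of phenotype prediction from genotype likelihoods at one
# pigmentation-associated SNP. The numbers are invented for illustration;
# Perretti et al. (2025) combine many loci and properly modelled likelihoods.

# P(sequence data | genotype), e.g. from a low-coverage ancient genome.
genotype_likelihoods = {"AA": 0.10, "AG": 0.60, "GG": 0.30}

# Assumed probability of a 'light' phenotype given each genotype.
p_light_given_genotype = {"AA": 0.05, "AG": 0.40, "GG": 0.85}

# Normalise the likelihoods into posterior weights (flat prior for simplicity).
total = sum(genotype_likelihoods.values())
posterior = {g: l / total for g, l in genotype_likelihoods.items()}

# Average the phenotype probability over the genotype uncertainty.
p_light = sum(posterior[g] * p_light_given_genotype[g] for g in posterior)
print(f"P(light pigmentation) = {p_light:.2f}")
```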

An artist’s impression of a Mesolithic woman from southern Denmark (credit: Tom Bjorklund)

All the hunter-gatherer Palaeolithic individuals (12 samples between 45 and 13 ka old) bar one showed clear signs of dark pigmentation in skin, eyes and hair – the outlier from Russia was probably lighter. Of those from the Mesolithic (14 to 4 ka), 11 out of 35 had a light eye colour (Northern Europe, France, and Serbia), but most retained the dark skin and hair expected in descendants of migrants from Africa. Only one 12 ka hunter-gatherer from Sweden had inferred blue eyes, blonde hair, and light skin. The retention of dark pigmentation by European hunter-gatherers who migrated there from Africa has been noted before, using DNA from Mesolithic human remains and in one case from birch resin chewed by a Mesolithic woman. This called into question the hypothesis that high levels of melanin in skin, which protects indigenous people in Africa from cancers, would result in their producing insufficient vitamin D for good health. That notion supposed that out-of-Africa migrants would quickly evolve paler skin coloration at higher latitudes. It is now known that diets rich in meat, nuts and fungi – staples for hunter-gatherers – provide sufficient vitamin D for health at high latitudes. A more recent hypothesis is that pale skins may have evolved only after the widespread Neolithic adoption of farming, when people came to rely on a diet dominated by cereals that are a poor source of vitamin D.

However, the 132 Neolithic farming individuals (10 to 4 ka ago) studied by Perretti et al. showed increased diversity in pigmentation, with more frequent light skin tones, yet dark individuals persisted, particularly in southern and eastern Europe. Hair and eye colour showed considerable variability, the earliest sign of red hair showing up in Turkey. Even Copper- and Bronze Age samples (113 from 7 to 3 ka ago) and those from Iron Age Europeans (25 from 3 to 1.7 ka ago) still indicate common retention of dark skin, eyes and hair, although the proportion of lighter pigmentation increased in some regions of Europe. Other analyses of ancient DNA have shown that the Palaeo- and Mesolithic populations of Europe were quickly outnumbered by an influx of early farmers, probably from the Anatolian region of modern Turkey, during the Neolithic. The farming lifestyle seems likely to have allowed the numbers of those who practised it to rise beyond the natural environment’s ‘carrying capacity’ for hunter-gatherers. The former inhabitants of Europe may simply have been genetically absorbed within the growing population of farmers. Much the same absorption of earlier groups seems to have happened with the westward migration from the Ukrainian and Russian steppes of the Yamnaya people and culture, culminating in the start of the European Bronze Age that reached western Europe around 2.1 ka. The Yamnaya introduced metal culture, horse-drawn wheeled vehicles and possibly Indo-European language.

So the novel probabilistic approach to ancient DNA by Perretti et al. also casts doubt on the diet-based evolution of light pigmentation at high latitudes. Instead, pulses of large population movements, and thus changes in European population genetics, probably account for the persistence of abundant evidence for dark pigmentation throughout Europe until historic times. The ‘lightening’ of Europeans’ physiognomy seems to have been vastly more complex than previously believed. Early Europe seems to have been almost bewilderingly diverse, which makes a complete mockery of modern chauvinism and racism. The present European genetic ‘melting pot’ is surprisingly similar to that of Europe’s ancient past.

Arsenic: an agent of evolutionary change?

The molecules that make up all living matter are almost entirely (~98 %) made from the elements carbon, hydrogen, oxygen, nitrogen and phosphorus (CHONP), in order of their biological importance. All have low atomic numbers, being respectively 6th, 1st, 8th, 7th and 15th in the Periodic Table. Of the 98 elements found in nature, about 7 occur only because they form in the decay schemes of radioactive isotopes. Only the first 83 (up to bismuth) are likely to be around ‘for ever’; the fifteen heavier than that are made up exclusively of unstable isotopes that will eventually disappear, albeit billions of years from now. There are other oddities which mean that the 92 elements widely accepted to be naturally occurring is not a strictly correct figure. That CHONP are so biologically important stems partly from their abundances in the inorganic world and partly from the ease with which they chemically combine together. But they are not the only elements that are essential.

About 20 to 25% of the other elements are also literally vital, even though many are rare. Most of the rest are inessential except in vanishingly small amounts that do no damage, and may or may not be beneficial. However some are highly toxic. Any element can produce negative biological outcomes above certain levels. Likewise, deficiencies can result in ill thrift and even death. For the majority of elements, biologists have established concentrations that define deficiency and toxic excess. The World Health Organisation has charted the maximum safe levels of elements in drinking water in milligrams per litre. In this regard, the lowest safe level is for thallium (Tl) and mercury (Hg) at 0.002 mg l⁻¹. Other highly toxic elements are cadmium (Cd) (0.003 mg l⁻¹), then arsenic (As) and lead (Pb) (both 0.01 mg l⁻¹), which ‘everyone knows’ are elements to avoid like the plague. In nature lead is very rarely at levels that are unsafe because it is insoluble, but arsenic is soluble under reducing conditions and is currently responsible for a pandemic of arsenic-related ailments, especially in the Gangetic plains of India and Bangladesh and similar environments worldwide.
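The guideline figures quoted above lend themselves to a simple exceedance check. The sketch below hard-codes only the values mentioned in the text and screens a hypothetical water analysis against them:

```python
# Guideline values quoted in the text (WHO drinking-water limits, mg per litre).
who_guideline_mg_per_l = {
    "Tl": 0.002, "Hg": 0.002, "Cd": 0.003, "As": 0.01, "Pb": 0.01,
}

# A hypothetical well-water analysis (mg per litre), for illustration only.
sample = {"As": 0.05, "Pb": 0.002, "Cd": 0.001}

for element, concentration in sample.items():
    limit = who_guideline_mg_per_l[element]
    status = "EXCEEDS guideline" if concentration > limit else "within guideline"
    print(f"{element}: {concentration} mg/l ({status}, limit {limit} mg/l)")
```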

Biological evolution has been influenced since life appeared by the availability, generally in water, of both essential and toxic elements. In 2020 Earth-logs summarised a paper about modern oxygen-free springs in Chile in which photosynthetic purple sulfur bacteria form thick microbial mats. The springs contain levels of arsenic that vary from high in winter to low in summer. This phenomenon can only be explained by some process that removes arsenic from solution in summer but not in winter. The purple bacteria’s photosynthesis uses electrons donated by sulfur, iron-2 and hydrogen; the spring water is highly reducing, so they thrive in it. In such a simple environment this suggested a reasonable explanation: the bacteria use arsenic too. In fact they contain a gene (aio) that encodes the means to do just that. The authors suggested that purple sulfur bacteria may well have evolved before the Great Oxygenation Event (GOE). They reasoned that in an oxygen-free world arsenic, as well as Fe2+, would be readily available in water that was in a reducing state, whereas oxidising conditions after the GOE would suppress both: iron-2 would be precipitated as insoluble iron-3 oxides that in turn efficiently adsorb arsenic (see: Arsenic hazard on a global scale, May 2020).

Colour photograph and CT scans of Palaeoproterozoic discoidal fossils from the Francevillian Series in Gabon. (Credit: El Albani et al. 2010; Fig. 4).

A group of geoscientists from France, the UK, Switzerland and Austria have investigated the paradox of probably high arsenic levels before the GOE and the origin and evolution of life during the Archaean (El Khoury et al. 2025. A battle against arsenic toxicity by Earth’s earliest complex life forms. Nature Communications, v. 16, article 4388; DOI: 10.1038/s41467-025-59760-9). Note that the main, direct evidence for Archaean life is fossilised microbial mats known as stromatolites, some palaeobiologists reckoning they were formed by oxygenic photosynthesising cyanobacteria, others favouring the purple sulfur bacteria (above). The purple sulfur bacteria in Chile and other living prokaryotes that tolerate and even use arsenic in their metabolism clearly evolved that potential plus the necessary chemical defence mechanisms, probably when arsenic was more available in the anoxic period before the GOE. Anna El Khoury and her colleagues sought to establish whether or not eukaryotes evolved similar defences by investigating the earliest-known examples: the 2.1 Ga old Francevillian biota of Gabon, which post-dates the GOE. The fossils are found in black shales, look like tiny fried eggs and are associated with clear signs of burrowing. The shales contain steranes, breakdown products of steroids, which are unique to eukaryotes.

The fossils have been preserved by precipitation of pyrite (FeS2) granules under highly reducing conditions. Curiously, the cores of the pyrite granules in the fossils are rich in arsenic, yet pyrite grains in the host sediments have much lower As concentrations. The latter suggests that seawater 2.1 Ga ago held little dissolved arsenic as a result of its containing oxygen. The authors interpret the apparently biogenic pyrite’s arsenic-rich cores as evidence of the organisms having sequestered As into specialised compartments in their bodies: their ancestors must have evolved this efficient means of coping with significant arsenic stress before the GOE. It served them well in the highly reducing conditions of black shale sedimentation. Seemingly, some modern eukaryotes retain an analogue of a prokaryote As detoxification gene.

Impact debris in Neoproterozoic sediments of Scotland and biological evolution?

False-colour electron microscope image of a shocked grain of zircon recovered from the Stac Fada Member. The red and pink material is a high-pressure polymorph of zircon, arranged in shock lamellae. Zircon is rendered in cyan, some of which is in granulated form. Credit: Kirkland et al. 2025, Fig 2C

Because it contains shards and spherules made of murky green glass, one of the lowest units in the Torridonian continental sediments of NW Scotland had long been regarded simply as red sandstone containing volcanic debris. This Stac Fada Member was thus celebrated as the only sign of a volcanic contribution to a vast thickness (up to 2.5 km) of Neoproterozoic lake and fluviatile sediments. Current flow indicators suggested that the Torridonian was laid down by large alluvial fans derived by erosion of much older crystalline basement far to what is today the west: that is, the Archaean core of the ancient continent of Laurentia, now on the other side of the North Atlantic. In 2002 more sophisticated sedimentological and geochemical analysis of the Stac Fada Member revealed a surprise: it contains anomalously elevated platinum-group elements, quartz grains that show signs of shock, and otherworldly chromium isotope proportions. The 10 m thick bed is made of ejecta, perhaps from a nearby impact crater to the WNW, a direction suggested by brittle fractures that may have been produced by the impact. Some idea of its age was suggested by Ar-Ar dating of feldspar crystals (~1200 Ma) believed to have formed authigenically in the hot debris. Being the only decent impactite known in Britain, it continues to attract attention.

A group of geoscientists from Western Australia, NASA and the UK, independent of the original discoverers, have now added new insights (Kirkland, C.L. and 12 others 2025. A one-billion-year old Scottish meteorite impact. Geology, v. 53, early online publication; DOI: 10.1130/G53121.1). They dated shocked zircon grains using U-Pb analyses at 990 ± 22 Ma, some 200 Ma younger than the previously dated, authigenic feldspars. Detrital feldspar grains in the Stac Fada Member yield Rb-Sr radiometric ages of 1735 and 1675 Ma, which are compatible with Palaeoproterozoic granites in the underlying Lewisian Gneiss Complex.

Photomicrograph of Bicellum brasieri: scale bar = 10 μm; arrows point to dark spots that may be cell nuclei (credit: Charles Wellman, Sheffield University)

In a separate publication (Kirkland, C.L. et al. 2025. 1 billion years ago, a meteorite struck Scotland and influenced life on Earth. The Conversation, 29 April 2025) three of the authors take things a little further, as their title suggests. In this Conversation piece they ponder, perhaps unwarily, on the spatial and temporal association of the indubitable impact with remarkably well-preserved spherical fossils found in Torridonian lake-bed sediments (Bicellum brasieri, reported in Earth-logs in May 2021), which are the earliest-known holozoan animal ancestors. The Torridonian phosphatic concretions in which these important fossils were found, at a different locality, are roughly 40 Ma younger than the Stac Fada impactite. The authors of the Conversation article appeal to the residual thermal effect of the impact as a possible driver for the appearance of these holozoan organisms. Whether a residual thermal anomaly would last long enough for them to evolve to this biological status would depend on the magnitude of the impact, of which we know nothing. Eukaryote fossils are known from sedimentary rocks in northern China that are at least 650 Ma older, and perhaps from as far back as 2.2 Ga in a soil that formed in the Palaeoproterozoic of South Africa. Both the Torridonian organism and the impactite were found in a small area of fascinating geology that has been studied continuously in minute detail since Victorian times, and visited by most living British geologists during their undergraduate days. Ideas will change as curiosity draws geologists and palaeobiologists to less-well studied sites of Proterozoic antiquity, quite possibly in northern China.

A fully revised edition of Steve Drury’s book Stepping Stones: The Making of Our Home World can now be downloaded as a free eBook

Modelling climate change since the Devonian

A consortium of geoscientists from Australia, Britain and France, led by Andrew Merdith of the University of Adelaide, examines the likely climate-cooling mechanisms that may have set off the two great ‘icehouse’ intervals of the last 541 Ma (Merdith, A.S. et al. 2025. Phanerozoic icehouse climates as the result of multiple solid-Earth cooling mechanisms. Science Advances, v. 11, article eadm9798; DOI: 10.1126/sciadv.adm9798). They consider the first to be the global cooling that began in the latter part of the Devonian and culminated in the Carboniferous-Permian icehouse. The second is the Cenozoic global cooling that formed the permanent Antarctic ice cap around 34 Ma ago and culminated in cyclical Pleistocene ice ages on the northern continents after about 2.4 Ma. They exclude the 40 Ma long, late Ordovician to early Silurian glaciation that left its imprint on North Africa and South America – then combined in the Gondwana supercontinent – because the data for two of the parameters used in their model, the degree of early colonisation of the continents by plants and its influence on terrestrial weathering, are too uncertain for that protracted event. Yet the Hirnantian glaciation reached 20°S at its maximum extent in the Late Ordovician around 444 Ma to cover about a third of Gondwana: it was larger than the present Antarctic ice cap. For that reason, their study spans only Devonian and later times.

Fluctuation in evidence for the extent of glacial conditions since the Devonian: the ‘ice line’ is grey. The count of glacial proxy occurrences in each 10° of latitude through time is shown in the colour key. Credit: Merdith et al., Fig 2A.

Merdith et al. rely on four climatic proxies. The first of these comprises indicators of cold climates, such as glacial dropstones, tillites and evidence in sedimentary rocks of crystals of hydrated calcium carbonate (ikaite, CaCO3.6H2O), which bizarrely forms only at around 0°C. From such occurrences it is possible to define an ‘ice line’ linking different latitudes through geological time. Then there are estimates of global average surface temperature; low-latitude sea surface temperature; and estimates of atmospheric CO2. The ‘ice-line’ data record an additional, long period of glaciation in the Jurassic and early Cretaceous, but the evidence does not extend to latitudes lower than 60°. It is regarded by Merdith et al. as an episode of ‘cooling’ rather than an ‘icehouse’. Their model assesses sources and sinks of CO2 since the Devonian Period.

The main natural source of the principal greenhouse gas CO2 is volcanic degassing, both from the mantle and from the breakdown of carbonate rock in subducted lithosphere. Natural sequestration of carbon involves weathering of exposed rock, which consumes dissolved CO2 and releases ions of calcium and magnesium that eventually lock the carbon away in marine carbonate sediments. A recently compiled set of plate reconstructions that chart the waxing and waning of tectonics since the Devonian Period allows them to model the tectonically driven release of carbon over time, on time scales of tens to hundreds of Ma. The familiar Milanković forcing cycles, on the order of tens to hundreds of ka, are thus of no significance in Merdith et al.’s broader conception of icehouse episodes. Their modelling shows high degassing during the Cretaceous, modern levels during the late Palaeozoic and early Mesozoic, and low emissions during the Devonian. The model also suggests that cooling stemmed from variations in the positions and configuration of continents over time. Another crucial factor is the tempo of exposure of rocks that are most prone to weathering. The most important are rocks of the ocean lithosphere incorporated into the continents to form ophiolite masses. The release of soluble products of weathering into ocean basins through time acts as a fluctuating means of ‘fertilising’ the oceans, so that more carbon can be sequestered in deep sediments in the form of organisms’ unoxidised tissue and hard parts made of calcium carbonate and phosphate. Less silicate weathering results in a boost to atmospheric CO2.
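The interplay of degassing and weathering that Merdith et al. model in detail can be caricatured as a one-box carbon balance in which the weathering sink strengthens as atmospheric CO2 rises. The rate constants and units below are arbitrary illustrations, not the study’s values:

```python
# One-box caricature of the atmospheric CO2 balance discussed above:
# d(CO2)/dt = volcanic degassing - weathering sink, with the sink assumed
# proportional to the CO2 level (a crude silicate-weathering feedback).
# All numbers are arbitrary illustrations, not Merdith et al.'s fluxes.

def run(co2_start, degassing, k_weathering, steps, dt=1.0):
    co2 = co2_start
    for _ in range(steps):
        co2 += dt * (degassing - k_weathering * co2)
    return co2

# With the feedback on, CO2 relaxes towards degassing / k (a steady state).
print(run(co2_start=1000.0, degassing=10.0, k_weathering=0.02, steps=500))  # ~500
# Halve the degassing (cf. the low Devonian emissions in their model) and the
# steady state halves too, nudging the climate towards icehouse conditions.
print(run(co2_start=1000.0, degassing=5.0, k_weathering=0.02, steps=500))   # ~250
```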

Only two long, true icehouse episodes emerge from the empirical proxy data, expressed by the ‘ice-line’ plots. Restricting the modelling to single global processes that might be expected to influence degassing or carbon sequestration produces no good fits to the climatic proxy data. Running the model with all the drivers “off” produces more or less continuous icehouse conditions since the Devonian. The model’s climate-related outputs thus imply that many complex processes working in concert may have driven the gross climate vagaries over the last 400 Ma or so. A planet of Earth’s size without such complexity would throughout that period have had a high-CO2 warm climate. According to Andrew Merdith, the fluctuation from greenhouse to icehouse conditions in the late Palaeozoic and the Cenozoic was probably due to a “coincidental combination of very low rates of global volcanism, and highly dispersed continents with big mountains, which allow for lots of global rainfall and therefore amplify reactions that remove carbon from the atmosphere”.

Geological history is, almost by definition, somewhat rambling. So, despite the large investment in seeking a computed explanation of data drawn from the record, the outcome reflects that rambling nature in a less than coherent account. To state that many complex processes working at once may have driven climate vagaries over the last 400 Ma or so is hardly a major advance: palaeoclimatologists have said more or less the same for a couple of decades or more, though they have mainly proposed single driving mechanisms. One aspect of Merdith et al.’s results seems to be of particular interest. ‘Icehouse’ conditions seem to be rare events interspersed within longer ice-free periods. We evolved within the mammal-dominated ecosystems on the continents during the latest of these anomalous climatic episodes. And we and those ecosystems now rely on a cool world. As the supervisor of the project commented, ‘Over its long history, the Earth likes it hot, but our human society does not’.

Readers may like to venture into how some philosophers of science deal with a far bigger question: ‘Is intelligent life a rare, chance event throughout the universe?’ That is, might we be alone in the cosmos? In the same issue of Science Advances is a paper centred on just such questions (Mills, D.B. et al. 2025. A reassessment of the “hard-steps” model for the evolution of intelligent life. Science Advances, v. 11, article eads5698; DOI: 10.1126/sciadv.ads5698). It stems from cosmologist Brandon Carter’s ‘Anthropic Principle’, first developed at celebrations of Nicolaus Copernicus’s 500th birthday in 1973. This has since been much debated by scientists and philosophers – a gross understatement, as it knocks the spots off the Drake Equation. To take the edge off what seems to be a daunting task, Mills et al. consider a corollary of the Anthropic Principle, the ‘hard steps model’. That, in a nutshell, postulates that the origin of humanity and its ability to ponder on observations of the universe required a successful evolutionary passage through a number of hard steps. It predicts that such intelligence is ‘exceedingly rare’ in the universe. Icehouse conditions are respectable candidates for evolutionary ‘hard steps’, and in the history of the Earth there have been five of them.


The origin of life on Earth: new developments

Debates around the origin of Earth’s life and what the first organism was like resemble the mythical search for the Holy Grail. Chivalric romanticists of the late 12th and early 13th centuries were pretty clear about the Grail – some kind of receptacle connected either with the Last Supper or Christ’s crucifixion – but never found it. Two big quests that engage modern science centre on how the chemical building blocks of the earliest cells arose, and on the nature of the last universal common ancestor (LUCA) of all living things. Like the Grail’s location, neither is likely to be fully resolved because they can only be sought in a very roundabout way: both verge on the imaginary. The fossil record is limited to organisms that left skeletal remains, traces of their former presence, and a few degraded organic molecules. The further back in geological time one goes, the more sedimentary rock has either been removed by erosion or fundamentally changed at high temperatures and pressures. Both great conundrums can only be addressed by trying to reconstruct processes and organisms that occurred or existed more than 4 billion years ago.

Artistic impression of the early Earth dominated by oceans (Credit: Sci-news.com)

In the 1950s Harold Urey of the University of Chicago and his student Stanley Miller mixed water, methane, ammonia and hydrogen in lab glassware, heated the mixture and passed electrical discharges through it. They believed the simple set-up crudely mimicked Hadean conditions at the Earth’s surface. They were successful in generating more complex organic chemicals than their starting materials, though the early atmosphere and oceans are now considered to have been chemically quite different. Such a ‘Frankenstein’ approach has been repeated since with more success (see Earth-logs April 2024), creating 10 of the 20 amino acids plus the peptide bonds that link them up to make all known proteins, and even amphiphiles, the likely founders of cell walls. The latest attempt has been made by Spanish scientists at the Andalusian Earth Sciences Institute, the Universities of Valladolid and Cadiz, and the International Physics Centre in San Sebastian (Jenewein, C. et al. 2024. Concomitant formation of protocells and prebiotic compounds under a plausible early Earth atmosphere. Proceedings of the National Academy of Sciences, v. 122, article 413816122; DOI: 10.1073/pnas.241381612).

Biomorphs formed by polymerisation of HCN (Credit: Jenewein, C. et al 2024, Figure 2)

Jenewein and colleagues claim to have created cell-like structures, or ‘biomorphs’, at nanometre to micrometre scale – spheres and polyp-like bodies – from a more plausible atmosphere of CO2, H2O and N2. These ‘protocells’ seem to have formed from minutely thin (150 to 3000 nanometres) polymer films built from hydrogen cyanide that grew on the surface of the reaction chamber as electric discharges and UV light generated HCN and more complex ‘prebiotic’ chemicals. Apparently, these films were catalysed by SiO2 (silica) molecules from the glass reactor. Note: in the Hadean, the breakdown of olivine to serpentine as seawater reacted with ultramafic lavas would have released abundant silica. Serpentinisation also generates hydrogen. Intimate release of gas formed bubbles to create the spherical and polyp-like ‘protocells’. The authors imagine the Hadean global ocean permanently teeming with such microscopic receptacles. Such a veritable ‘primordial soup’ would be able to isolate other small molecules, such as amino acids, oligopeptides, nucleobases, and fatty acids, to generate more complex organic molecules in micro-reactors en route to the kind of complex, self-sustaining systems we know as life.

So, is it possible to make a reasonable stab at what that first kind of life may have been? It was without doubt single-celled. To reproduce, it must have carried a genetic code enshrined in DNA, which is unique not only to all species but to individuals. The key to tracking down LUCA is that it represents the point at which the evolutionary trees of the fundamental domains of modern life – eukarya (including animals, plants and fungi), bacteria, and archaea – converge to a single evolutionary stem. There is little point in using fossils to resolve this issue because only multicelled life leaves tangible traces, and the first of those was found in 2,100 Ma old sediments in Gabon (see: The earliest multicelled life; July 2010). The key is computational comparison of the genetic sequences of the hugely diverse modern biosphere. Modern molecular phylogenetics and computing power can discern from their similarities and differences the relative order in which various species and broader groups split from others. It can also trace the origins of specific genes, providing clues about earlier genetic associations. Given a rate of mutation, the modern differences provide estimates of when each branching occurred. The most recent genetic delving has been achieved by a consortium based at various institutions in Britain, the Netherlands, Hungary and Japan (Moody, E.R.R. and 18 others 2024. The nature of the last universal common ancestor and its impact on the early Earth system. Nature Ecology & Evolution, v. 8, pages 1654–1666; DOI: 10.1038/s41559-024-02461-1).
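The molecular-clock logic mentioned above reduces to dividing the observed sequence difference between two lineages by twice the substitution rate, since changes accumulate along both branches after a split. A toy calculation with invented numbers:

```python
# Toy molecular-clock estimate. Both numbers below are invented for
# illustration; real phylogenetic dating is far more sophisticated.

substitutions_per_site = 0.16      # observed divergence between two lineages
rate_per_site_per_year = 1.0e-9    # assumed substitution rate (per site per year)

# Divergence accumulates along BOTH branches since the split, hence the factor 2.
time_since_split = substitutions_per_site / (2 * rate_per_site_per_year)
print(f"Estimated split time: {time_since_split / 1e6:.0f} million years ago")  # 80 Myr
```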

Moody et al. have pushed back the estimated age of LUCA to halfway through the Hadean, between 4.09 and 4.33 billion years (Ga) ago, well beyond the geologically known age of the earliest traces of life (3.5 Ga). That age for LUCA is in itself quite astonishing: it could have been only a couple of hundred million years after the Moon-forming interplanetary collision. Moreover, they have estimated that Darwin’s Ur-organism had a genome of around 2 million base pairs that encoded about 2600 proteins: roughly comparable to living species of bacteria and archaea, and thus probably quite advanced in evolutionary terms. The gene types probably carried by LUCA suggest that it may have been an anaerobic acetogen; i.e. an organism whose metabolism generated acetate (CH3COO–) ions. Acetogens may produce their own food as autotrophs, or metabolise other organisms (heterotrophs). If LUCA was a heterotroph, then it must have subsisted in an ecosystem together with autotrophs which it consumed, possibly by fermentation. To function it also required hydrogen, which can be supplied by the breakdown of ultramafic rocks to serpentinites: that tallies with the likely Hadean ocean-world floored by ultramafic igneous crust (see the earlier paragraphs about protocells). If an autotroph, LUCA would have had an abundance of CO2 and H2 to sustain it, and may have provided food for heterotrophs in the early ecosystem. The most remarkable possibility discerned by Moody et al. is that LUCA may have had a kind of immune system to stave off viral infection.

The carbon cycle on the Hadean Earth (Credit: Moody et al. 2024; Figure 3e)

The Hadean environment was vastly different to that of modern times: a waterworld seething with volcanism; no continents; a target for errant asteroids and comets; more rapidly spinning, with a 12-hour day; a much closer Moon and thus far bigger tides. The genetic template for the biosphere of the following four billion years was laid down then. LUCA and its companions may well have been unique to the Earth, as are their descendants. It is hard to believe that other worlds with the potential for life, even those in the solar system, could have followed a similar biogeochemical course. They may have life, but probably not as we know it . . .

See also: Ball, P. 2025. Luca is the progenitor of all life on Earth. But its genesis has implications far beyond our planet. The Observer, 19 January 2025.


How changes in the Earth System have affected human evolution, migration and culture

Refugees from the Middle East migrating through Slovenia in 2015. Credit: Britannica

During the Pliocene (5.3 to 2.6 Ma) there evolved a network of various hominins in Africa, with their remains scattered across both the northern and southern parts of that continent. The earliest, though somewhat disputed, hominin fossil, Sahelanthropus tchadensis, hails from northern Chad and lived around 7 Ma ago, during the late Miocene, as did a similarly disputed creature from Kenya, Orrorin tugenensis (~5.8 Ma). The two were geographically separated by some 1500 km, and by what are now the Sahara Desert and the East African Rift System. The suggestion from mtDNA evidence that humans and chimpanzees had a common ancestor, together with the uncertainty about when it lived (between 13 and 5 Ma ago) and what it may have looked like, let alone where it lived, makes the matter debatable. There is even a possibility that the common ancestor of humans and the other anthropoid apes may have been European. Its descendants could well have crossed to North Africa when the Mediterranean Sea had been evaporated away to form the thick salt deposits that now lie beneath it: what could be termed the ‘Into Africa’ hypothesis. The better known Pliocene hominins were also widely distributed in the east and south of the African continent. Wandering around was clearly a hominin predilection from the outset. The same can be said about humans in the general sense (genus Homo) during the Early Pleistocene, when some of them left Africa for Eurasia. Artefacts dated at 2.1 Ma have been found on the Loess Plateau of western China, and Georgia hosts the earliest human remains known from Eurasia. Since then H. antecessor, H. heidelbergensis, Neanderthals and Denisovans have roamed Eurasia. Then, after about 130 ka, anatomically modern humans progressively populated all continents, except Antarctica, to their geographic extremities and from sea level to 4 km above it.

There is a popular view that curiosity and exploration are endemic and perhaps unique to the human line: ‘It’s in our genes’. But even plants migrate, as do all animal species. So it is best to be wary of a kind of hominin exceptionalism or superior motive force. Before settled agriculture, simple diffusion of populations in search of sustenance could have achieved the enormous migrations undertaken by all hominins: biological resources move and hunter-gatherers follow them. The first migration of Homo erectus from Africa to northern China by way of Georgia seems to have taken 200 ka at most and covered about ten thousand kilometres: on average a speed of only 50 m per year! That achievement and many others before and later were interwoven with the evolution of brain size, cognitive ability, means of communication and culture. But what were the ultimate drivers? Two recent papers in the journal Nature Communications make empirically-based cases for natural forces driving the movement of people and changes in demography.
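Before turning to those papers, the ‘only 50 m per year’ figure is worth spelling out: it follows from simple arithmetic, and converting it to a per-generation displacement underlines how little directed exploration the dispersal actually required (the 25-year generation time below is an assumption for illustration):

```python
# Back-of-envelope check of the dispersal rate quoted above.
distance_km = 10_000          # Africa to northern China via Georgia (from the text)
duration_years = 200_000      # 200 ka at most, per the text

speed_m_per_year = distance_km * 1000 / duration_years
print(f"Average speed: {speed_m_per_year:.0f} m per year")            # ~50 m/yr

generation_years = 25         # assumed generation time (illustrative only)
print(f"Per generation: {speed_m_per_year * generation_years / 1000:.2f} km")  # ~1.25 km
```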

The first considers hominin dispersal in the Palaearctic biogeographic realm: the largest of the eight realms whose concept was originally proposed by Alfred Russel Wallace in the late 19th century, and which encompasses the whole of Eurasia and North Africa (Zan, J. et al. 2024. Mid-Pleistocene aridity and landscape shifts promoted Palearctic hominin dispersals. Nature Communications, v. 15, article 10279; DOI: 10.1038/s41467-024-54767-0). The Palaearctic comprises a wide range of ecosystems: arid to wet, tropical to arctic. After 2 Ma ago, hominins moved to all its parts several times. The approach followed by Zan et al. is to assess the 3.6 Ma record of the thick deposits of dust carried by the perpetual westerly winds that cross Central Asia. This gave rise to the huge (635,000 km²) Loess Plateau. At least 17 separate soil layers within the loess, spanning the last 2.1 Ma, have yielded artefacts. The authors dated the successive layers of loess in Tajikistan (286 samples) and the Tarim Basin (244 samples) as precisely as possible, achieving time resolutions of 5 to 10 ka and 10 to 20 ka respectively. To judge variations in climate in these areas they also measured the carbon isotopic proportions in organic materials preserved within the layers. Another climate-linked metric that Zan et al. use is a time series showing the development of river terraces across Eurasia, derived from the earlier work of many geomorphologists. The results from those studies are linked to variations through time in the numbers of archaeological sites across Eurasia that have yielded hominin fossils, stone tools and signs of tool manufacture, many of which have been dated accurately.

The authors use sophisticated statistics to find correlations between times of climatic change and the signs of hominin occupation. Episodes of desertification in Palaearctic Eurasia clearly hindered hominins’ spreading across the continent, either from west to east or vice versa. But there were distinct, periodic windows of climatic opportunity for that to happen, which coincide with interglacial episodes, whose frequency changed at the Mid-Pleistocene Transition (MPT) from about every 41 ka to roughly every 100 ka. That change was suggested in 2021 to have arisen from an increased roughness of the rock surface over which the great ice sheets of the Northern Hemisphere moved. This suppressed the pace of ice movement so that the 41 ka changes in the tilt of the Earth’s rotational axis could no longer drive climate change during the later Pleistocene, despite the fact that the same astronomical influence continued. The succeeding ~100 ka pulsation may or may not have been paced by the very much weaker influence of the Earth’s changing orbital eccentricity. Whichever it was, after the MPT climate changes became much more extreme, making human dispersal in the Palaearctic realm more problematic. Rather than hominins’ evolution driving them to a ‘Manifest Destiny’ of dominating the world, vastly larger and wider inorganic forces corralled and released them so that, eventually, they did.

Much the same conclusion, it seems to me, emerges from a second study, which covers the period since ~9 ka ago when anatomically modern humans transitioned from a globally dominant hunter-gatherer culture to one of ‘managing’ and dominating ecosystems, physical resources and ultimately the planet itself (Wirtz, K.W. et al. 2024. Multicentennial cycles in continental demography synchronous with solar activity and climate stability. Nature Communications, v. 15, article 10248; DOI: 10.1038/s41467-024-54474-w). Like Zan et al., Kai Wirtz and colleagues from Germany, Ukraine and Ireland base their findings on a vast accumulated number (~180,000) of radiocarbon dates from Holocene archaeological sites on all inhabited continents. The greatest number (>90,000) are from Europe. The authors applied statistical methods to judge human population variations since 11.7 ka in each continental area. Known sites are probably significantly outweighed by signs of human presence that remain hidden, and the diligence of surveys varies from country to country and continent to continent: Britain, the Netherlands and southern Scandinavia are by far the best surveyed. Given those caveats, clearly this approach gives only a blurred estimate of population dynamics during the Holocene. Nonetheless the data are very interesting.
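Wirtz et al.’s statistics are far more sophisticated, but a common first step with large radiocarbon compilations is to stack the dates into a ‘summed probability distribution’ and read relative population size from its peaks and troughs. The sketch below does this crudely for a handful of invented dates, treating each as a Gaussian rather than properly calibrating it:

```python
# Crude summed-probability-distribution (SPD) sketch for a radiocarbon
# compilation. The dates, errors and Gaussian treatment are illustrative;
# real analyses calibrate each date and correct for sampling biases.
import numpy as np

# Invented (age BP, 1-sigma error) pairs standing in for dated sites.
dates = [(8200, 60), (8150, 80), (7900, 50), (6100, 70), (6050, 90)]

timeline = np.arange(5000, 9001)          # years BP, annual resolution
spd = np.zeros_like(timeline, dtype=float)
for age, sigma in dates:
    spd += np.exp(-0.5 * ((timeline - age) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Peaks in the summed curve are read (cautiously) as population 'booms'.
peak = timeline[np.argmax(spd)]
print(f"Largest apparent boom centred near {peak} years BP")
```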

The changes in population growth rates show distinct cyclicity during the Holocene, which Wirtz et al. suggest are signs of booms and busts in population on all six inhabited continents. Matching these records against a large number of climatic time series reveals a correlation. Their chosen metric is variation in solar irradiance: the power per unit area received from the Sun. That has been directly monitored only over a couple of centuries. But ice cores and tree rings contain proxies for solar irradiance in the proportions of the radioactive isotopes 10Be and 14C that they respectively contain. Both are produced when high-energy charged particles (electrons, protons and helium nuclei, or alpha particles) penetrate the upper atmosphere, a flux that is modulated by solar activity. The two isotopes have half-lives long enough for them to remain undecayed, and thus detectable, for tens of thousands of years. Both ice cores and tree rings have decadal to annual time resolutions. Wirtz et al. find that their crude estimates of booms and busts in human populations during the Holocene seem closely to match variations in solar activity measured in this way. Climate stability favours successful subsistence and thus growth in populations. Variable climatic conditions seem to induce subsistence failures and increase mortality, probably through malnutrition.
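Whether such a proxy survives long enough to be read is a matter of simple radioactive decay. Using the standard half-lives of the two isotopes (about 5,730 years for 14C and about 1.39 million years for 10Be), the surviving fraction after, say, the length of the Holocene is easily checked:

```python
# Fraction of a cosmogenic-isotope signal surviving after a given time,
# using N/N0 = 0.5 ** (t / half_life). Half-lives are the standard values.
half_lives_years = {"C-14": 5_730, "Be-10": 1_390_000}

t = 10_000  # years: roughly the span of the Holocene records discussed above
for isotope, half_life in half_lives_years.items():
    surviving_fraction = 0.5 ** (t / half_life)
    print(f"{isotope}: {surviving_fraction:.1%} of the original atoms remain")
# C-14 is already well into its decay (~30% left); Be-10 is essentially intact.
```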

A nice dialectic clearly emerges from these studies. ‘Boom and bust’ in populations, on millennial, centennial and even decadal timescales, stems from climate variations. Such cyclical change thus repeatedly hones natural selection among the survivors, both genetically and culturally, increasing their general fitness to their surroundings. Karl Marx and Friedrich Engels would have devoured these data avidly had they emerged in the 19th century. I’m sure they would have suggested from the evidence that something could go badly wrong – negation of negation, if readers care to explore that dialectical law further . . . And indeed that is happening. Humans, made ecologically very fit indeed by surviving natural pressures, are now stoking up a major climatic hiccup; or rather, the culture and institutions that humans have evolved are doing that.

Hominin footprints in Kenya confirm that two species occupied the same ecosystem at the same time

For the last forty thousand years anatomically modern humans have been the only primates living on Planet Earth with a sophisticated culture, i.e. using tools, fire, language, art and so on. When Homo sapiens emerged some 300 ka ago, they joined at least two other groups of humans – Neanderthals and Denisovans – and not only shared Eurasia with them but interbred with them as well. In fact no hominin group has been truly alone since Pliocene times, which began 5.3 Ma ago. Sometimes up to half a dozen species occupied the habitable areas of Africa. Yet we can never be sure whether or not they bumped into one another. Dates for fossils are generally imprecise, give or take a few thousand years. The evidence is merely that sedimentary strata of roughly the same age in various places have yielded fossils of several hominins, but co-occupation had never been proved in a single stratum at the same place: until now.

Footprints from Koobi Fora: left – right foot of H. erectus; right – left foot of Paranthropus boisei. Credit: Kevin Hatala. Chatham University

The Koobi Fora area near modern Lake Turkana has been an important, go-to site, courtesy of the Leakey palaeoanthropology dynasty (Louis and Mary, their son and daughter-in-law Richard and Meave, and granddaughter Louise). They discovered five hominin species there dating from 4.2 to 1.4 Ma. So there was a chance that this rich area might prove that two of the species were close neighbours in both space and time. In 2021 Kenyan members of the Turkana Basin Institute based in Nairobi spotted a trackway of human footprints on a bedding surface of sediments that had been deposited about 1.5 Ma ago. Reminiscent of the famous, 2 million years older Laetoli trackway of Australopithecus afarensis in Tanzania, that at Koobi Fora is scientifically just as exciting, for it shows footprints of two hominin species, Homo erectus and Paranthropus boisei, who had walked through wet mud a few centimetres below the surface of Lake Turkana’s ancient predecessor (Hatala, K.G. and 13 others, 2024. Footprint evidence for locomotive diversity and shared habitats among early Pleistocene hominins. Science, v. 386, p. 1004-1010; DOI: 10.1126/science.ado5275). The trackway is littered with the footprints of large birds and also contains evidence of zebra.

One set of prints suggests the heels struck the surface first, then the feet rolled forwards before pushing off with the soles: little different from our own, unshod footprints in mud. They are attributed to H. erectus. The others also show a bipedal gait, but different locomotion. The feet that made them were significantly flatter than ours and had a big toe angled away from the smaller toes. They are so different that no close human relative could have made them. The local fossil record includes paranthropoids (Paranthropus boisei), whose fossil foot bones suggest that an individual of that species made those prints. It also turns out that a similar, dual walkers’ pattern was found 40 km away in lake sediments of roughly the same age. The two species cohabited the same terrain for a substantial period of time. As regards the Koobi Fora trackway, it seems the two hominins plodded through the mud only a few hours apart at most: they were neighbours.

Artists’ reconstructions of: left – H. erectus; right – Paranthropus boisei. Credits: Yale University, Roman Yevseyev respectively

Their respective anatomies were very different. Homo erectus was, apart from having massive brow ridges, similar to us. Paranthropus boisei had huge jaws and facial muscles attached to a bony skull crest. So how did they get along? The first was probably omnivorous and actively hunted or scavenged meaty prey: a bifacial axe-wielding hunter-gatherer. Paranthropoids seem to have sought and eaten only vegetable victuals, and some sites preserve bone digging sticks. They were not in competition for foodstuffs and there was no reason for mutual intolerance. Yet they were physically so different that intimate social relations were pretty unlikely. Also their brain sizes were very different, that of P. boisei being far smaller than that of H. erectus, which may not have encouraged intellectual discourse. Both persist in the fossil record for a million years or more. Modern humans, Neanderthals and Denisovans, as we know, sometimes got along swimmingly, possibly because they were cognitively very similar and not so different physically.

Since many hominin fossils are associated with riverine and lake-side environments, it is surprising that more trackways than those at Laetoli and Koobi Fora have not been found. Perhaps that is because palaeoanthropologists are generally bent on finding bones and tools! Yet trackways show in a very graphic way how animals behave and interrelate with their environment, as dinosaur trackways famously do. Now that anthropologists have learned how to spot footprint trace fossils, such finds will change and enrich the human story.

See also: Ashworth, J. Fossil footprints of different ancient humans found together for the first time. Natural History Museum News 28 November 2024; Marshall, M. Ancient footprints show how early human species lived side by side. New Scientist, 28 November 2024

Multiple Archaean gigantic impacts, perhaps beneficial to some early life

In March 1989 an asteroid half a kilometre across passed within about 700,000 km of the Earth at a speed of 20 km s⁻¹. Making some assumptions about its density, the kinetic energy of this near miss would have been around 4 × 10¹⁹ J: a million times more than Earth’s annual heat production and humanity’s annual energy use; and about half the energy of detonating every thermonuclear device ever assembled. Had that small asteroid struck the Earth, all this energy would have been delivered in a variety of forms to the Earth System in little more than a second – the time it would take to pass through the atmosphere. The founder of “astrogeology” and NASA’s principal geological advisor for the Apollo programme, the late Eugene Shoemaker, likened the scenario to a ‘small hill falling out of the sky’. (Read a summary of what would happen during such an asteroid strike.) But that would have been dwarfed by the 10 to 15 km impactor that resulted in the ~200 km wide Chicxulub crater and the K-Pg mass extinction 66 Ma ago. Evidence has been assembled for Earth having been struck during the Archaean, around 3.6 billion years (Ga) ago, by an asteroid 200 to 500 times larger: more like four Mount Everests ‘falling out of the sky’ (Drabon, N. et al. 2024. Effect of a giant meteorite impact on Paleoarchean surface environments and life. Proceedings of the National Academy of Sciences, v. 121, article e2408721121; DOI: 10.1073/pnas.2408721121).
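The energy figure quoted above can be checked on the back of an envelope: take a 0.5 km sphere, assume a typical stony density of about 3,000 kg m⁻³ (the text’s ‘some assumptions about its density’), and apply ½mv²:

```python
# Back-of-envelope kinetic energy of the near-miss asteroid described above.
# The density is an assumption (typical stony asteroid, ~3000 kg per cubic metre).
from math import pi

diameter_m = 500.0
velocity_m_s = 20_000.0
density_kg_m3 = 3_000.0          # assumed

volume_m3 = (4 / 3) * pi * (diameter_m / 2) ** 3
mass_kg = density_kg_m3 * volume_m3
kinetic_energy_j = 0.5 * mass_kg * velocity_m_s ** 2

print(f"Mass ~ {mass_kg:.1e} kg")                    # ~2e11 kg
print(f"Kinetic energy ~ {kinetic_energy_j:.0e} J")  # ~4e19 J, as quoted
```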

Impact debris layer in the Palaeoarchaean Barberton greenstone belt of South Africa, which contains altered glass spherules and fragments of older carbonaceous cherts. (Credit: Drabon, N. et al., Appendix Fig S2B)

In fact the Palaeoarchaean Era (3600 to 3200 Ma) was a time of multiple large impacts. Yet their recognition stems not from tangible craters but from strata that contain once-glassy spherules, condensed from vaporised rock, interbedded with sediments of Palaeoarchaean ‘greenstone belts’ in Australia and South Africa (see: Evidence builds for major impacts in Early Archaean; August 2002, and Impacts in the early Archaean; April 2014), some of which contain unearthly proportions of different chromium isotopes (see: Chromium isotopes and Archaean impacts; March 2003). Compared with the global few millimetres of spherules at the K-Pg boundary, the Barberton greenstone belt contains eight such beds, up to 1.3 m thick, in its 3.6 to 3.3 Ga stratigraphy. The thickest of these beds (S2) was formed at around 3.26 Ga by the impact of an asteroid estimated to have had a mass 50 to 200 times that of the K-Pg impactor.

Above the S2 bed are carbonaceous cherts that contain carbon-isotope evidence of a boom in single-celled organisms with a metabolism that depended on iron and phosphorus rather than sunlight. The authors suggest that the tsunami triggered by the impact would have stirred up soluble iron-2 from the deep ocean and washed in phosphorus from the exposed land surface, perhaps with some having been delivered by the asteroid itself. No doubt such a huge impact would have veiled the Palaeoarchaean Earth with dust that reduced sunlight for years: inimical to photosynthesising bacteria but unlikely to pose a threat to chemo-autotrophs. An unusual feature of the S2 spherule bed is that it is capped by a layer of altered crystals whose shapes suggest they were originally sodium bicarbonate and calcium carbonate. They may represent flash-evaporation of up to tens of metres of ocean water as a result of the impact. Carbonates are less soluble than salt and more likely to crystallise during rapid evaporation of the ocean surface than would NaCl.

Time line of possible events following a huge asteroid impact during the Palaeoarchaean. (Credit: Drabon, N. et al. Fig 8)

So it appears that extraterrestrial bombardment in the early Archaean had the opposite effect to the Chicxulub impact, which devastated the highly evolved life of the late Mesozoic. Many repeats of such chaos during the Palaeoarchaean could well have given a major boost to some forms of early, chemo-autotrophic life, while destroying or setting back evolutionary attempts at photo-autotrophy.

See also: King, A. 2024. Meteorite 200 times larger than one that killed dinosaurs reset early life. Chemistry World 23 October 2024.

News about ‘hobbits’ (Homo floresiensis)

The roof lifted for palaeoanthropologists in October 2004 when news emerged of a fossil from Liang Bua cave on Flores in the Indonesian archipelago. It was an adult female human skull about a third the size of those of anatomically modern humans (AMH) (see: The little people of Flores, Indonesia; October 2004). Immediately it was dubbed ‘Hobbit’, and from the start controversy raged around this diminutive human. The cave layer contained evidence of fire and sophisticated tools as well as bones of giant rats and minute elephants, presumed to be staple prey for these little people. Despite having brains about the size of a grapefruit – as did australopithecines – the little people challenged our assumptions about intelligence. Preliminary dating from 95 to 17 ka suggested they may have cohabited Indonesia with both H. erectus and AMH. Indeed, modern people of Flores tell legends of the little people they call Ebu Gogo. Like both, their ancestors must have crossed treacherous straits between the Indonesian islands, which persisted even when global sea level was drawn down by polar icecaps. Once an early suggestion that the original find was the skull of a deformed, microcephalic individual had been refuted by further finds on Flores, scientists turned to natural selection of small stature through living on a small island with limited resources – similar to the tiny elephant-relative Stegodon and other island faunas elsewhere. By 2007, it had become clear from other, similar fossils that they were definitely a distinct species, Homo floresiensis (see: Now we can celebrate the ‘Hobbits’! November 2007), with several anatomical similarities to H. erectus. Then more sophisticated dating revealed that the Flores cave sediments containing their fossils and tools spanned 100 to 60 ka, well before AMH reached Indonesia. By 2018 their arrival on Flores, marked by a mandible fragment and six teeth excavated from sediments at Mata Menge, 70 km east of Liang Bua, had been pushed back to 773 ka. At the new site stone tools were found in even earlier sediments (1.02 Ma). In 2019 evidence emerged that isolated island evolution in the Philippines had produced similarly small descendants (H. luzonensis) by around 67 ka.

Artist’s impression of Homo floresiensis with giant rat. (Credit: Box of Oddities podcast)

The latest development is the finding of a fragment of an adult humerus (an arm bone) in the Mata Menge excavations that had yielded the oldest dates for Homo floresiensis fossils (Kaifu, Y. and 12 others 2024. Early evolution of small body size in Homo floresiensis. Nature Communications, v. 15, article 6381; DOI: 10.1038/s41467-024-50649-7). Comparing the teeth and arm-bone fragment with an intact adult from Liang Bua suggests that the earliest known ancestors of Homo floresiensis were even smaller. The teeth, albeit much smaller, resemble those of Indonesian specimens of H. erectus. That observation helps to rule out earlier speculation that the tiny people of Flores descended from the earliest humans from Africa (H. habilis), who were about the same size but more than twice as old (2.3 to 1.7 Ma). The evidence points more plausibly towards their evolution from Asian H. erectus, who arrived in Java around 1.1 million years ago. Having solved the issue of ‘island hopping’ to reach Java, a group of Asian H. erectus could have found their way to Flores. That island’s biological resources may not have met the survival requirements of a much larger human ancestor, but evolution in isolation kept the arrivals alive. Within 300 ka, and perhaps much less for a small population, survival of smaller offspring allowed them a very long and apparently quite comfortable stay on the island. Though diminished in stature, they demonstrated the survival strategies conferred by being smart.