A new explanation for banded iron formations (BIFs)

The main source of iron and steel for more than half a century has been Precambrian rock characterised by intricate interlayering of silica- and iron oxide-rich sediments, known as banded iron formations or BIFs. They always appear in what were shallow-water parts of Precambrian sedimentary basins. Although much the same kind of material turns up in sequences from 3.8 to 0.6 Ga, by far the largest accumulations date from 2.6 to 1.8 Ga, epitomised by the vast BIFs of the Palaeoproterozoic Hamersley Basin in Western Australia. This peak of iron-ore deposition brackets the time (~2.4 Ga) when world-wide evidence suggests that the Earth’s atmosphere first acquired tangible amounts of free oxygen: the so-called ‘Great Oxidation Event’. Yet the preservation of such enormous amounts of oxidised iron compounds in BIFs is paradoxical for two reasons. First, the amount of freely available atmospheric oxygen at their acme was far lower than today. Second, had the oceans contained much oxygen, dissolved ions of reduced iron (Fe2+) could not have pervaded seawater, as they must have done for BIFs to accumulate in shallow water. Iron-rich ocean water demands a highly reducing chemical state.

Oblique view of an open-pit mine in banded iron formation at Mount Tom Price, Hamersley region, Western Australia (credit: Google Earth)

The paradox of highly oxidised sediments being deposited when oceans were highly reduced was resolved, or seemed to have been, in the late 20th century. It involved a hypothesis that reduced, Fe-rich water entered shallow, restricted basins where photosynthetic organisms – probably cyanobacteria – produced localised enrichments in dissolved oxygen, so that the iron precipitated to form BIFs. Later work revealed oddities suggesting some direct role for the organisms themselves, a contradictory role for the co-dominant silica-rich cherty layers, and even that bacteria which do not produce oxygen may have deposited oxidised iron minerals. Much of the research focussed on the Hamersley BIF deposits, and it comes as no surprise that another twist in the BIF saga has recently emerged from the same enormous repository of evidence (Rasmussen, B. et al. 2015. Precipitation of iron silicate nanoparticles in early Precambrian oceans marks Earth’s first iron age. Geology, v. 43, p. 303-306).

The cherty laminations have received a great deal less attention than the iron oxides. It turns out that they are heaving with minute particles of iron silicate, mainly the minerals stilpnomelane [K(Fe,Mg)8(Si,Al)12(O,OH)27] and greenalite [(Fe)2–3Si2O5(OH)4], which account for up to 10% of the chert. They suggest that ferruginous, silica-enriched seawater continually precipitated a mixture of iron silicate and silica, with cyclical increases in the amount of iron silicate. Being so tiny, the nanoparticles would have had a very high surface area relative to their mass and would therefore have been highly reactive. The authors suggest that the present mineralogy of BIFs, which includes iron carbonates and, in some cases, sulfides as well as oxides, may have resulted from post-depositional mineral reactions. Much the same features occur in 3.46 Ga Archaean BIFs at Marble Bar in Western Australia, almost a billion years older than the Hamersley deposits, suggesting that a direct biological role in BIF formation may not have been necessary.

More on BIFs and the Great Oxidation Event

Anthropocene: what (or who) is it for?

The made-up word chrononymy could be applied to the study of the names of geological divisions and their places on the International Stratigraphic Chart. Until 2008 that was something of a slow-burner, as careers go. It all began with Giovanni Arduino and Johann Gottlob Lehmann in the mid- to late 18th century, during the informal historic episode known as the Enlightenment. To them we owe the first statements of stratigraphic principles and the beginning of stratigraphic divisions: rocks divided into the major segments of Primitive, Secondary, Tertiary and Quaternary (Arduino). Thus stratigraphy seeks to set up a fundamental scale or chart for expressing Earth’s history as revealed by rocks. The first two divisions bit the dust long ago; Tertiary is now an informal synonym for the Cenozoic Era; only Quaternary clings on as the embattled Period at the end of the Cenozoic. All 11 Systems/Periods of the Phanerozoic, their 37 Series/Epochs and 85 Stages/Ages in the latest version of the International Stratigraphic Chart have been thrashed out since then, much being accomplished in the late 19th and early 20th centuries. Curiously, the world body responsible for sharpening up the definition of this system of ‘chrononymy’, the International Commission on Stratigraphy (ICS), seems not to have seen fit to record the history of stratigraphy: a great mystery. Without such a chart geologists would be unable to converse with one another and the world at large.

Yet now an increasing number of scientists are seriously proposing a new entry at the 4th level of division after Eon, Era and Period: a new Epoch that acknowledges the huge global impact of human activity on atmosphere, hydrosphere, biosphere and even lithosphere. They want it to be called the Anthropocene, and for some its eventual acceptance ought to relegate the current Holocene Epoch, in which humans invented agriculture, a form of economic intercourse and exchange known as capital and all the trappings of modern industry, to the 5th division or Stage. Earth-pages has been muttering about the Anthropocene for the past decade, as charted in a number of the links above, so if you want to know which way its author is leaning and how he came to find the proposal an unnecessary irritation, have a look at them. Last week things became sufficiently serious for another comment. Simon Lewis and Mark Maslin of the Department of Geography at University College London have summarised the scientific grounds alleged to justify an Anthropocene Epoch and its strict definition in a Nature Perspective (Lewis, S.L. & Maslin, M.A. 2015. Defining the Anthropocene. Nature, v. 519, p. 171-180), which is interestingly discussed in the same issue by Richard Monastersky.

Lewis and Maslin present two dates that their arguments and accepted stratigraphic protocols suggest as candidates for the start of the Anthropocene: 1610 and 1964 CE, both of which relate to features that are expressed by geological records that should last indefinitely. The first is a decline and eventual recovery in the atmospheric CO2 level recorded in high-resolution Antarctic ice cores between 1570 and 1620 CE that can be ascribed to the decline in the population of the Americas’ native peoples from an estimated 60 to 6 million. This result of the impact of first European colonisation – disease, slaughter, enslavement and famine – reduced agriculture and fire use and saw the regeneration of 5 × 10⁷ hectares of forest, which drew down CO2 globally. It also coincides with the coolest part of the Little Ice Age, from 1594 to 1677 CE. They caution against the start of the Industrial Revolution as an alternative for a ‘Golden Spike’ since it was a diachronous event, beginning in Europe. Instead, they show that the second proposal, for a start in 1964, has a good basis in the record of global anthropogenic effects on the Earth marked by the peak fallout of radioactive isotopes generated by atomic weapons tests during the Cold War, principally 14C with a 5,730-year half-life, together with other, longer-lived isotopes. The year 1964 is also roughly when growth in all aspects of human activity really took off, which some dub in a slightly Tolkienesque manner the ‘Great Acceleration’. [There is a growing taste for this kind of hyperbole, e.g. the ‘Great Oxygenation Event’ around 2.4 Ga and the ‘Great Dying’ for the end-Permian mass extinction]. Yet they neglect to note that the geochronological ‘present’ for times past was defined as 1950 CE, just as radiocarbon dating became feasible and before bomb-generated 14C contaminated younger materials. Lewis and Maslin conclude their Perspective as follows:
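As a back-of-envelope illustration – assuming nothing beyond the 5,730-year half-life quoted above – the exponential decay law shows why a 1964 bomb-spike of 14C would remain a detectable marker for many millennia:

```python
# Back-of-envelope sketch: persistence of the 1964 bomb-test 14C spike.
# The only input is the half-life quoted in the text (5,730 years).

HALF_LIFE_C14 = 5730.0  # years

def fraction_remaining(years: float, half_life: float = HALF_LIFE_C14) -> float:
    """Fraction of a radioactive isotope surviving after `years`."""
    return 0.5 ** (years / half_life)

# Even 10,000 years from now, nearly 30% of the spike would survive,
# so the marker persists on geological-archive timescales.
print(round(fraction_remaining(10_000), 3))  # 0.298
```

The same function applied to the longer-lived fallout isotopes the authors mention would, of course, give fractions even closer to 1.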

To a large extent the future of the only place where life is known to exist is being determined by the actions of humans. Yet, the power that humans wield is unlike any other force of nature, because it is reflexive and therefore can be used, withdrawn or modified. More widespread recognition that human actions are driving far-reaching changes to the life-supporting infrastructure of Earth may well have increasing philosophical, social, economic and political implications over the coming decades.

So the Anthropocene adds the future to the stratigraphic column, which seems more than slightly odd. As Richard Monastersky notes, it is in fact a political entity: part of some kind of agenda or manifesto; a sort of environmental agitprop from the ‘geos’. As if there were not dozens of rational reasons to change human impacts and haul society back from catastrophe – which many people outside the scientific community have good reason to see as hot air, on which ‘the great and the good’ never take concrete action. Monastersky also notes that the present Anthropocene record in naturally deposited geological materials accounts for less than a millimetre at the top of ocean-floor sediments. How long might the proposed Epoch last? If action to halt anthropogenic environmental change does eventually work, the Anthropocene will be very short in historic terms, let alone those which form the currency of geology. If it doesn’t, there will be nobody around able to document, let alone understand, the epochal events recorded in rocks. At its worst, for some alien, visiting planetary scientists far in the future, an Anthropocene Epoch will almost certainly be far shorter than the 10⁴ to 10⁵ years represented by the hugely more important Palaeozoic-Mesozoic and Mesozoic-Cenozoic boundary sequences; but with no Wikipedia entry.

Not everybody gets a vote on this kind of thing, such is the way that science is administered, but all is not lost. The final arbiter is the Executive Committee of the International Union of Geological Sciences (IUGS), but first the Anthropocene’s status as a new Epoch has to be approved by 60% of the ICS Subcommission on Quaternary Stratigraphy, if put to a vote. Then such a ‘supermajority’ would be needed from the chairs of all 16 of the ICS subcommissions that study Earth’s major time divisions. But before any of that, the 37 members of the Subcommission on Quaternary Stratigraphy’s ‘Anthropocene’ working group have to decide whether or not to submit a proposal: things may drag on at an appropriately stratigraphic pace. Yet the real point is that the effect of human activity on Earth-system processes has been documented and discussed at length. I’ll give Marx the last word on this: ‘The philosophers have only interpreted the world, in various ways. The point, however, is to change it’. A new stratigraphic Epoch doesn’t really seem to measure up to that…

Genus Homo pushed back nearly half a million years

Bill Deller, a friend whose Sunday is partly spent reading the Observer and Sunday Times from cover to cover, alerted me to a lengthy article by Britain’s doyen of palaeoanthropologists, Chris Stringer of the Natural History Museum (Stringer, C. 2015. First human? The jawbone that makes us question where we’re from. Observer, 8 March 2015, p. 36). His piece sprang from two Reports published online in Science that describe about a third of a hominin lower jaw unearthed – where else? – in the Afar Depression of Ethiopia. The discovery site of Ledi-Geraru is a mere 30 km from the most hominin-productive ground in Africa: Hadar and Dikika for Australopithecus afarensis (‘Lucy’ at 3.2 Ma and ‘Selam’ at 3.3 Ma, respectively); Gona for the earliest-known stone tools (2.6 Ma); and the previously earliest member of the genus Homo, also close to Hadar.

On some small objects mighty tales are hung, and the Ledi-Geraru jawbone, with its 6 teeth, is one of them. It has features intermediate between Australopithecus and Homo, but more important is its age: Pliocene, around 2.8 to 2.75 Ma (Villmoare, B. and 8 others 2015. Early Homo at 2.8 Ma from Ledi-Geraru, Afar, Ethiopia. Science Express doi: 10.1126/science.aaa1343). The sediments from which Ethiopian geologist Chalachew Seyoum, studying at Arizona State University, extracted the jawbone formed in a river floodplain. Other fossils suggest open grassland rich with game, similar to that of the Serengeti in Tanzania, with tree-lined river courses. These sediments were laid down at a time of climatic transition from humid to more arid conditions, which several authors have suggested provided the environmental stresses that drove evolutionary change, including that of hominins (DiMaggio, E.N. and 10 others 2015. Late Pliocene fossiliferous sedimentary record and the environmental context of early Homo from Afar, Ethiopia. Science Express doi: 10.1126/science.aaa1415).

Designating the jawbone as evidence for the earliest known member of our genus rests almost entirely on the teeth, and so is at best tentative pending further fossil material. The greatest complicating factor is that the earliest supposed fossils of Homo (i.e. H. habilis, H. rudolfensis and others yet to be assigned a species identity) are a morphologically more mixed bunch than those younger than 2 Ma, such as H. ergaster and H. erectus. Indeed, every one of them has some significant peculiarity. That diversity even extends to the earliest humans to have left Africa, found in 1.8 Ma old sediments at Dmanisi in Georgia (Homo georgicus), where each of the 5 well-preserved skulls is unique. The Dmanisi hominins have been likened to the type specimen of H. habilis, but such is the diversity of both that the comparison is probably a shot in the dark.

Replica of OH 7, the deformed type specimen of Homo habilis. (credit: Wikipedia)

Coinciding with the new Ethiopian hominin papers, a study published in Nature the same week describes how the type specimen of H. habilis (found, in close association with crude stone tools and cut bones, by Mary and Louis Leakey at Olduvai Gorge, Tanzania in 1960) has been digitally restored from the somewhat deformed state in which it was found (Spoor, F. et al. 2015. Reconstructed Homo habilis type OH 7 suggests deep-rooted species diversity in early Homo. Nature, v. 519, p. 83-86, doi:10.1038/nature14224). The restored lower jaw and teeth, and part of the cranium, deepen the mysterious diversity of the group of fossils for which it is the type specimen, but boost its standing as regards probable brain size from one within the range of australopithecines to significantly larger – ~750 ml compared with <600 ml – about half that of modern humans. The habilis diversity is largely to do with jaws and teeth: it is the estimated brain size, as well as the type specimen’s association with tools and their use, that elevates them all to human status. Yet the reconstruction is said by some to raise the issue of a mosaic of early human species. The alternative is an unusual degree of shape diversity (polymorphism) within a single emerging species, which is not much favoured these days. An issue to consider is: what constitutes a species? For living organisms morphological similarity has to be set against the ability for fertile interbreeding. Small, geographically isolated populations of a single species often diverge markedly in what they look like yet continue to be interfertile, the opposite being convergence in form by organisms that are completely unrelated.

Palaeontologists tend to go largely with division on grounds of form, so that when a specimen falls outside some agreed morphological statistics it crosses a species boundary. Set against that is the incontrovertible evidence that at least 3 recent human species interbred successfully, leaving their mark in all non-African living humans. What if the first humans, emerging from (probably) a well-defined population of australopithecines, continued to interbreed with them right up to the point when the australopithecines became extinct about 2 Ma ago?

On a more concrete note, the Ledi-Geraru hominin is a good candidate for the maker of the first stone tools found ‘just down the road’ at Gona!

Wet spells in Arabia and human migration

In September 2014, Earth Pages reported how remote sensing had revealed clear signs of extensive fossil drainage systems and lakes at the heart of the Arabian Peninsula, now the hyper-arid Empty Quarter (Rub al Khali). Their association with human stone artefacts dated as far back as 211 ka – those with affinities to collections from East Africa clustering between 74 and 90 ka – supported the sub-continent possibly having been an early staging post for fully modern human migrants from Africa. Members of the same archaeological team, based at Oxford University, have now published late Pleistocene palaeoclimatic records from alluvial-fan sediments in the eastern United Arab Emirates that add detail to this hypothesis (Parton, A. et al. 2015. Alluvial fan records from southeast Arabia reveal multiple windows for human dispersal. Geology, advance online publication doi:10.1130/G36401.1).

The eastern part of the Empty Quarter is a vast bajada formed from coalesced alluvial fans deposited by floods rising in the Oman Mountains and flowing westwards to disappear in the great sand sea of dunes. Nowadays floods during the Arabian Sea monsoons are few and far between, and restricted to the west-facing mountain front. Yet older alluvial fans extend far out into the Empty Quarter, some being worked for aggregate used in the frantic building boom in the UAE. In one of the quarries, about 100 km south of the Jebel Faya Upper Palaeolithic tool site, the alluvial deposit contains clear signs of cyclical deposition in the form of 13 repeated gradations from coarse to fine waterlain sediment, each capped by fossil soils and dune sands. The soils contain plant remains suggesting that they formed when the area was colonised by extensive grasslands under humid conditions.

Dating the sequence reveals that 6 of the cycles formed over about ten thousand years, from 158 to 147 ka, coinciding with a peak in monsoon intensity roughly between 160 and 150 ka during the glacial period that preceded the last one. Three later cycles formed at times of monsoon maxima during the last interglacial and in the climatic decline leading to the last glacial maximum: at ~128 to 115 ka, 105 to 95 ka and 85 to 74 ka. So, contrary to the long-held notion that the Arabian Peninsula formed a hostile barrier to migration, from time to time it was a well-watered area that probably had abundant game. Between times, though, it was a vast, inhospitably dry place.
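A quick consistency check on the figures quoted above – the window dates are the text’s own, the code merely does the arithmetic – shows that the midpoints of the three later humid phases fall roughly one ~23 ka precession cycle apart:

```python
# Spacing of the three later humid windows (dates in ka, from the text),
# compared with the ~23 ka orbital precession period the authors invoke.
windows_ka = [(128, 115), (105, 95), (85, 74)]

midpoints = [(start + end) / 2 for start, end in windows_ka]
spacings = [a - b for a, b in zip(midpoints, midpoints[1:])]

print(midpoints)  # [121.5, 100.0, 79.5]
print(spacings)   # [21.5, 20.5] -- close to one precession cycle apart
```

The ~21 ka spacings are consistent, within the dating uncertainties, with precession-paced monsoon maxima.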

Satellite view of the Arabian Peninsula. The Oman Mountains sweep in a dark arc south-eastwards from the Strait of Hormuz at the mouth of the Persian Gulf. The brownish-grey area to the south of the arc is the bajada that borders the bright orange Empty Quarter (credit: NOAA)

The authors suggest that the climatic cyclicity was dominated by a 23 ka period. As regards the southern potential migration route out of Africa, via the Straits of Bab el Mandab – highly favoured by palaeoanthropologists lately – opportunities for migration in the absence of boats would have depended on sea-level lows. These do not necessarily coincide with the wet windows of opportunity for crossing the cyclically arid Arabian Peninsula that would have allowed both survival and onward movement to south and east Asia. So far as I can judge, the newly published work seems to favour a northward then eastward migration route, independent of fluctuations in land-ice volume and sea level, whenever the driest areas received sufficient water to support vegetation and game. In fact most of NE Africa is subject to the Arabian Sea monsoons, and when they were at their least productive, crossing much of Ethiopia’s Afar Depression and the coastal areas of Eritrea, Sudan and Egypt would have been almost as difficult as the challenge of the Empty Quarter.

A tsunami and NW European Mesolithic settlements

About 8.2 ka ago, sediments on the steep continental edge of the North and Norwegian Seas slid onto the abyssal plain of the North Atlantic. This huge mass displacement triggered a tsunami whose effects manifest themselves in sand inundations at the heads of inlets and fjords along the Norwegian and eastern Scottish coasts that reach up to 10 m above current sea level. At that time actual sea level was probably 10 m lower than at present, as active melting of the last glacial ice sheets was still underway: the waves may have reached 20-30 m above the 8.2 ka sea level. So powerful were the tsunami waves in the constricted North Sea that they may have separated the British Isles from the European mainland by inundating Doggerland, the low-lying riverine plain that joined them before global sea level rose above its elevation at around the same time. Fishing vessels plying the sandbanks of the southern North Sea often trawl up well-preserved remains of land mammals and even human tools: almost certainly Doggerland was prime hunting territory during the Mesolithic, as well as an easily traversed link to the then British Peninsula. Mesolithic settlements close by tsunami deposits are known from Inverness in Scotland and Dysvikja north of Bergen in Norway, and individual Mesolithic dwellings occur on the Northumberland coast. The tsunami must have had some effect on Mesolithic hunter-gatherers who had migrated into a game-rich habitat. The question is: how devastating was it?
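The runup arithmetic implied here can be made explicit; both inputs are the estimates given above, not measured values:

```python
# Minimum tsunami runup relative to the contemporary (8.2 ka) sea level.
# Both figures are the text's own estimates.
deposit_height_above_modern_sl = 10  # m: sand deposits above today's sea level
sea_level_lowering_at_8200bp = 10    # m: sea level ~10 m lower at 8.2 ka

min_runup = deposit_height_above_modern_sl + sea_level_lowering_at_8200bp
print(min_runup)  # 20 -- m above the 8.2 ka shoreline, a minimum estimate
```

Since the preserved sands mark only where the water reached land high enough to leave a deposit, the 20-30 m range quoted in the text treats this 20 m figure as a lower bound.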

Reconstruction of Mesolithic hut based on evidence from two archaeological sites in Northumberland, UK. (credit: Lisa Jarvis; see http://www.maelmin.org.uk/index.html )

Hunter-gatherers move seasonally with favoured game species, often returning to semi-permanent settlements for the least fruitful late-autumn to early-spring season. The dominant prey animals, red deer and reindeer, also tend to migrate to the hills in summer, partly to escape blood-feeding insects, returning to warmer, lower elevations for the winter. If that movement pattern dominated Mesolithic populations, then the effects of the tsunami would have been most destructive between late autumn and early spring. During warmer seasons, people may not even have noticed its effects, although coastal habitations and boats may have been destroyed.

Splendid Feather Moss, Step Moss, Stair Step Moss
Stair-step moss (credit: Wikipedia)

Norwegian scientists Knut Rydgren and Stein Bondevik from Sogn og Fjordane University College, Sogndal, devised a clever means of working out the tsunami’s timing from mosses preserved in the sand inundations that added to near-shore marine sediments (Rydgren, K. & Bondevik, S. 2015. Moss growth patterns and timing of human exposure to a Mesolithic tsunami in the North Atlantic. Geology, v. 43, p. 111-114). Well-preserved stems of stair-step moss Hylocomium splendens, still containing green chlorophyll, occur along with ripped-up fragments of peat and soil near the top of the tsunami deposit, which post-glacial isostatic rebound has since raised to form a bog. This moss grows shoots annually, the main growth spurt coming at the end of the summer to early-autumn growing season. Nineteen samples preserved new shoots that were as long as or longer than the preceding year’s shoots, suggesting that they were torn up by the tsunami while still alive towards the end of the growing season, around late October. All around the North Sea, Mesolithic people could have been returning from warm-season hunting trips to sea-shore winter camps, only to have their dwellings, boats and food stores devastated – if indeed they survived such a terrifying event.