The oldest impact structure

Ilulissat Isfjord, Greenland (credit: Wikipedia)

Various lines of evidence, such as sedimentary deposits of glass spherules and shocked minerals or signs of unusual isotopic chemistry (see Ejecta from the Sudbury impact and Evidence builds for major impacts in Early Archaean in EPN April 2005 and August 2002), point to the predicted intensity of meteorite or comet bombardment of the early Earth, and evidence is growing for some events that had global effects. Yet until recently no actual impact sites from the Archaean Eon had been found. That is not entirely unexpected, because erosion during the last few billion years will have removed all trace of the characteristic surface craters. But perhaps there is cryptic evidence in Archaean terrains for the deeper influence of impacts: after all, the depth of penetration of large meteoritic ‘missiles’ would have been of a similar order to their diameter, a depth at which shock structures in minerals would slowly anneal and impact-generated melts would crystallise slowly enough to masquerade as plutonic igneous rocks.

Close to the Arctic Circle in SW Greenland, Archaean gneisses are associated with a roughly 200 km wide geomagnetic anomaly and regionally curvilinear features that suggest a series of concentric closed structures over a 100 km diameter area (Garde, A.A. et al. 2012. Searching for giant, ancient impact structures on Earth: The Mesoarchaean Maniitsoq structure, West Greenland. Earth and Planetary Science Letters, v. 337, p. 197-210). Adam Garde and colleagues from the Greenland Geological Survey, Cardiff University UK and Lund University Sweden focused on the central part of these anomalies, where gneisses are extensively brecciated with signs of annealed shock-induced lamellae in quartz, feldspar melting and fluidisation of highly comminuted mylonites. They ascribe this assemblage of features on a variety of scales to the effects of a major meteorite impact on 25 km deep continental crust. The metamorphic complex contains the famous Amitsoq Gneisses that once had the status of the world’s oldest rocks at around 3.8 Ga, but is dominated by migmatites formed around 3.1 Ga that are akin to the Nuuk Gneisses from further south.

The possible signs of a deeply penetrating impact are cut through by small ultramafic intrusions, zircons from which yield 207Pb/206Pb ages between 3.01 and 2.98 Ga, confirming the structures’ Mesoarchaean age. An interesting and unanswered question concerns the origin of these magmas together with marginally younger, voluminous granites. Were the ultramafic magmas generated by high degrees of partial melting of the mantle as a result of the immense energy of impact? Having temperatures well above those of basaltic melts, could the ultramafic intrusions in turn have induced crustal melting within the depths of a large impact basin?
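The 207Pb/206Pb ages quoted here exploit the different decay rates of the two uranium isotopes: the radiogenic 207Pb/206Pb ratio depends only on age, not on uranium content. A minimal sketch of how such an age is recovered from a measured ratio, using the standard decay constants (the ratio value below is illustrative, not taken from the paper):

```python
import math

L238 = 1.55125e-10   # 238U decay constant, per year
L235 = 9.8485e-10    # 235U decay constant, per year
U238_U235 = 137.88   # present-day 238U/235U abundance ratio

def pb207_pb206(t):
    """Radiogenic 207Pb*/206Pb* ratio produced by age t (years)."""
    return (math.exp(L235 * t) - 1) / (math.exp(L238 * t) - 1) / U238_U235

def pb_pb_age(ratio, lo=1.0, hi=4.6e9):
    """Invert the monotonically increasing ratio-age relation by bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if pb207_pb206(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A measured radiogenic 207Pb/206Pb of about 0.223 corresponds to ~3.0 Ga
print(f"{pb_pb_age(0.2227) / 1e9:.2f} Ga")
```

Because the relation is transcendental there is no closed-form age, hence the numerical inversion; the steepness of the curve in the Archaean is what makes Pb-Pb ages on zircon so precise at ~3 Ga.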

Burrowers: knowing front from back

In sedimentary rocks below the base of the Cambrian there is not only a dearth of body fossils, but signs of creatures burrowing and stirring up the sediment are also most uncommon. A burrower needs several criteria to be fulfilled: a supply of oxygen; sufficient food; a body able to penetrate sediment; and an ability to move back and forth, though forth alone would probably do fine, provided the animal could turn corners. The amount of oxygen in bottom waters would have influenced its availability beneath the seabed. Whatever the conditions, dead organic matter falls and is buried by sediment before it is oxidised away, even nowadays. There is little sign of any marked change in the oxygenation of the planet just before and after the start of the Cambrian Period, so the main control over burrowing is that of animal morphology.

Many modern burrowing animals are pretty flaccid, but moving sediment aside and upwards demands some muscle power. Most important, the creature needs a means of navigation, albeit of a rudimentary kind, and since what goes in beneath the surface – food – must come out – excreta – there must be a front end and a back end. That ‘fore-and-aft’ symmetry is the essential feature of bilaterian animals. Only a limited range of animal taxa don’t have it built-in. Sponges are the most obvious example, having no discernible symmetry of any kind. Radially symmetrical animals such as jellyfish and coral polyps only have a top and a bottom. An absence of inbuilt horizontal directionality stops non-bilaterians from burrowing in any shape or form. But, so what?

The vast majority of animals have some kind of bilateral symmetry; even echinoderms have it in their larval stages, despite an adult 5-fold symmetry that is the simplest kind of radiality. By the start of the Cambrian, not only had bilaterians split off from their less symmetrical relatives, but almost all the phyla living today, and several that became extinct during the last 542 Ma, have representatives in the Cambrian fossil record. The only logical conclusion is that the emergence of bilaterians and their fundamental diversification took place in the Precambrian: they are absent from earlier strata only because they had no hard parts. Comparing the DNA of living representatives of the main bilaterian phyla with that of non-bilaterians can help date the times of genetic and morphological separation, but only crudely. This ‘molecular clock’ approach points to some time between 900 and 650 Ma ago for the last common ancestor of bilaterians.

Uruguayan fossil burrows from late Neoproterozoic (Credit: Pecoits, E. et al. 2012)

Getting a handle on the minimum time for the split depends on finding either fossils or unequivocal signs of bilaterian activity. The oldest unequivocally bilaterian fossils occur in rocks about 550 Ma old, which doesn’t take us much further back than the base of the Cambrian. But there are trace fossils that are significantly more ancient (Pecoits, E. et al. 2012. Bilaterian burrows and grazing behaviour at >585 million years ago. Science, v. 336, p. 1693-1696). They are tiny burrows in fine-grained sediments from Uruguay, so tiny that there is a chance they may be traces of grazing on bacterial films at the seabed rather than of burrowing beneath it. The decider is the mechanics of trace fossil formation. Surface tracks only a millimetre or so across would only penetrate the biofilm, so on lithification they would simply disappear. Burrows, on the other hand, penetrate the sediment itself to get at food items. Even if this was a biofilm, the track would be in sediment above the film, so compaction would preserve it. The Uruguayan examples are exquisite horizontal burrows, and they push back the minimum age for the origin of the bilaterians to at least 40 Ma older than the start of the Cambrian. In fact 585 Ma is a minimum age for the sediments, as it is the U-Pb age of zircons in a granite that intrudes and metamorphoses them.

An equally significant observation is that the burrows only appear towards the end of a glacial episode – probably the last of the Neoproterozoic ‘Snowball Earth’ events – as marked by tillites below the burrowed shales and occasional ‘dropstones’ in them. Could it be that the climatic and other stresses of a global glaciation triggered the fundamental division among the Animalia?

Eats barks leaves nuts and fruits

The Malapa valley South Africa, where Australopithecus sediba was found. (Credit: Lee R. Berger via Wikipedia)

The first stone tools and bones that had been cut by them, found in rocks dated at 2.5-2.6 Ma in the Bouri area of Ethiopia’s Afar Depression, have generally been taken as a sign that their invention was connected with more easily accessing meat for food. A corollary of this idea is that it was the introduction of meat into the hominin diet that helped ‘fuel’ the growth of their brains: meat-tools-brain interrelated in an evolutionary sense. There is a spatial link between such tools and fossils of Australopithecus, but direct attribution of the tools to these australopithecines has not been widely accepted. It is more generally accepted that a link to tools can be made with Homo habilis, but they lived, at the earliest, 200 to 300 ka later. The wear patterns on their teeth and their association with animal bones bearing cut marks have been taken to indicate that at least part of their diet was meat.

Another approach to diet is to analyse the proportions of stable carbon isotopes (13C and 12C) in tooth enamel, which can discriminate between the ultimate plant sources in a diet, i.e. between grasses that use the C4 photosynthetic pathway and the C3 version used by woody and herbaceous plants. The isotopic ‘signature’ of plants is also passed on to animals, depending on what vegetation they eat, and so up the food chain to predators and the scavengers that depend on their leavings. South African Au. africanus of around 2.5 Ma ago show a definite C4 preference, as do local paranthropoids (‘robust’ australopithecine-like creatures) from around 1.8 Ma. The early humans H. habilis and H. ergaster also show C4 isotopic proportions, which in both cases may stem from a meaty diet or from a vegetarian component. The main point from these similar results, whatever the plant-meat proportions being consumed, is that these hominins were very different from chimpanzees in their eating habits, and probably as regards their habitats: savannah rather than woodland, respectively.
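The C3/C4 inference amounts to a simple two-endmember mixing calculation on enamel δ13C. A sketch using typical literature values (about -27‰ for C3 plants, -12.5‰ for C4 grasses, and a +14‰ diet-to-enamel enrichment; these figures are general assumptions, not taken from the studies discussed here):

```python
def c4_fraction(d13c_enamel, d13c_c3=-27.0, d13c_c4=-12.5, enrichment=14.0):
    """Fraction of C4-derived carbon in the diet from tooth-enamel d13C,
    by linear mixing between C3 and C4 dietary endmembers (per mil values)."""
    c3_end = d13c_c3 + enrichment   # enamel value for a pure C3 feeder
    c4_end = d13c_c4 + enrichment   # enamel value for a pure C4 feeder
    return (d13c_enamel - c3_end) / (c4_end - c3_end)

# A hypothetical enamel value of -8 per mil implies roughly one third C4 food
print(f"{c4_fraction(-8.0):.2f}")
```

Note that the mixing model says nothing about whether the C4 signal came from eating grass directly or from eating grazers, which is exactly the ambiguity discussed above.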

There are no reports of C-isotope research on Au. garhi teeth, but results from 2 Ma old Au. sediba found in South Africa have just been published (Henry, A.G. and 8 others 2012. The diet of Australopithecus sediba. Nature, v. 487, p. 90-93), along with plant materials from dental plaque and tooth wear patterns. Au. sediba is notable for its very modern-looking hands and other ‘advanced’ features. Some believe it to have been closer to the direct line of human descent than a number of other hominin species, including the poorly preserved remains of H. habilis. So, did sediba eat meat? The forensic evidence suggests something unexpected. The C-isotope data point towards food dominated by C3 plants: fewer grasses and sedges, and more shrubbery. Tooth wear suggests bark was eaten, while plant remains from plaque indicate fruit, leaves and wood. This is a feeding pattern more like that of chimpanzees than of Homo species, Au. africanus and the paranthropoids that are roughly contemporary with Au. sediba. Ecological analysis of the sediments that buried the hominin specimens suggests a seasonal climate and a savannah biome with abundant C4 plants that supported grazing herds, mixed with possibly some denser woodland along drainages. This is a pattern familiar from living savannah chimpanzee bands.

The hand and forearm of Australopithecus sediba (Credit: Peter Schmid, courtesy Lee R. Berger via Wikipedia)

So, despite being an ‘advanced’ hominin, Au. sediba probably was not a meat eater, for its teeth carry clear signs of foods that were not consumed by the meaty potential prey animals. Yet species with strong C4 ‘signatures’ cannot be assigned to carnivory on C-isotope evidence alone. One has to decide from other data, such as tooth wear and plaque, whether this or that hominin ate grasses, those that clearly did not then becoming candidates for dominantly meat-eating. How to detect a tool-using species with a mixed diet, akin to more modern humans, is a tough nut to crack.

A mighty sag or a big wrench for Mars

Colour-coded relief map of the Tharsis bulge on Mars, with Valles Marineris at left centre (Credit: Goddard Space Flight Center, NASA, via Wikipedia)

In the Solar System topographic features don’t come larger than Valles Marineris on Mars. Between 5 and 10 kilometres deep and extending along a fifth of the planet’s circumference, it makes the Grand Canyon and the Gorge of the Nile look puny.

The base and margins of this stupendous valley contain all manner of evidence for erosion, huge landslips and signs of collapse into voids in Mars’s crust. Much of the erosion on Mars seems to have stemmed from catastrophic floods several billion years ago, though whether they were all of water or some were volcanic in origin is being debated (Leverington, D.W. 2011. A volcanic origin for the outflow channels of Mars: Key evidence and major implications. Geomorphology, v. 132, p. 51-75 http://www.webpages.ttu.edu/dleverin/leverington_mars_outflow_channels_geomorphology_2011.pdf, but see http://www.universetoday.com/94367/did-water-or-lava-carve-the-outflow-channels-on-mars/).

It is difficult to imagine anything other than some kind of fault control over the almost straight, roughly east-west trend of Valles Marineris, but the scale suggests, again, tectonics of an unmatched order. It has long been thought that the massive canyon resulted from extensional rifting that created a major weakness etched out by later erosion and/or collapse into huge subsurface voids in the crust. Yet there is little sign of commensurately large faults, though there are some. But the structure is an integral part of yet another superlative. It is on the eastern flank of the mighty Tharsis bulge on which several humongous volcanoes, including Olympus Mons, developed: perhaps there is a causal link between the two dominating features.

Jeffrey Andrews-Hanna of the Colorado School of Mines in the US has tried to model the bulge-chasm pair, coming to the conclusion that there is little sign of major extension. The finale of his study zeroes in on the possibility of subsidence dominating the structure’s formation (Andrews-Hanna, J.C. 2012. The formation of Valles Marineris: 3. Trough formation through super-isostasy, stress, sedimentation, and subsidence. Journal of Geophysical Research, v. 117, E06002, doi:10.1029/2012JE004059).

In this model, the Tharsis bulge and its associated volcanic province rose so high that on the scale of the planet it must have created a large positive gravitational anomaly. Most of that anomaly remains, but in the Valles Marineris region the crust is now either in isostatic balance or shows large negative gravity anomalies, complicated by the fact that the very carving of the canyon system must have resulted in some uplift through unloading. For a while the whole bulge was supported in this gravitationally unstable state by the strength of the Martian lithosphere, and most of it is still in a state of disequilibrium.

Andrews-Hanna’s novel view is that a small amount of extension allowed residual magma to rise as dykes in a linear zone along the eventual length of Valles Marineris. The magmas and their heating effect reduced the strength of the lithosphere, locally removing support for the huge load, which subsided. By creating a greater slope on the surface of Tharsis, the subsidence would have become a focus for both erosion and sedimentation, the increased sedimentary load adding to the subsidence to give the present stupendous depth of the canyons and chasms.

Simulated oblique view of the topography of Valles Marineris looking westwards (Credit: Goddard Space Flight Center, NASA, via Wikipedia)

But this isn’t the only model for the canyon system (Yin, A. 2012. Structural analysis of the Valles Marineris fault zone: Possible evidence for large-scale strike-slip faulting on Mars. Lithosphere, v. 4, doi:10.1130/L192.1). An Yin of the University of California used a combination of remote sensing data from Mars Reconnaissance Orbiter and Mars Odyssey to perform detailed lithological and structural mapping along Valles Marineris. What emerged were several fault zones up to 2000 km long. Instead of an expected extensional sense of movement they are strike-slip faults, with displacements of the order of 100 km in a left-lateral sense. Yin’s model is that the canyon system began as a zone of transtensional deformation: very different from that of Andrews-Hanna. It also raises the question of the underlying tectonic processes, because strike-slip zones on Earth are usually associated with distributed stress from plate tectonics.

The Great Blurting

It is hard to resist curiosity when a phrase includes a superlative. Dickens knew this when he opened A Tale of Two Cities with the words, ‘It was the best of times, it was the worst of times…’. So impacted into post-Victorian English are they that the Daily Mirror of 13 May 2012 used them to celebrate ‘The most scintillating finish in Premier League history’: referring of course to the footballing tales of the city of Manchester (UK, that is). So it was with some gaiety that I turned to a paper in the May 2012 issue of Geology (Løseth, H. et al. 2012. World’s largest extrusive body of sand? Geology, v. 40, p. 467-470). Now, that is a title to conjure with, and I would advise any academic author to add a superlative adjective of some kind to their next manuscript title, to ensure more than 5 readers and at least one citation to add to her/his CV. Conversely, I caution against seemingly ultra-high impact, exclamatory single-word titles such as ‘Coelacanth!’, ‘Porphyroblast!’, ‘Ignimbrite!’ or ‘Sphenochasm!’: they summon untoward visions of geoscientists much given to ‘snorting and pawing the air in salivating lust and groveling need’, in the manner of Hungry Joe’s reaction to a pornographic cameo brooch (Heller, J. 1961. Catch-22. Simon & Schuster).

The sand body in question lies in the Pleistocene subsurface of the Norwegian sector of the North Sea above the Snorre oilfield, and came to light through a 3-D seismic survey with extraordinarily good resolution that allowed the reconstruction of its base and top structure contours (in two-way time) and thus its overall volume and shape. At 10 km3, the paper’s abstract suggests that, had it formed yesterday to cover Manhattan, it would have reached the 37th floor of the Empire State Building. More parochially, had it engulfed London’s old financial quarter centred on London Bridge (postcodes EC1 to 4 and SE1), 30 St Mary Axe (‘The Gherkin’) and ‘The Shard’ would be buried in their entirety, leaving one of capitalism’s iconic heartlands a curiously gnarled sandy plain.
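The abstract’s Manhattan comparison is easy to sanity-check. A rough calculation, assuming a Manhattan land area of about 59 km2 (that area is an assumption here, not a figure from the paper):

```python
# Spread the 10 km3 Snorre sand body evenly over Manhattan and see how
# deep it would stand. Assumption: Manhattan land area ~59 km2.
volume_m3 = 10 * 1e9       # 10 km3 expressed in cubic metres
area_m2 = 59 * 1e6         # ~59 km2 in square metres

thickness_m = volume_m3 / area_m2
print(f"{thickness_m:.0f} m")  # on the order of 170 m of sand
```

A depth of that order, at a few metres per storey, is broadly consistent with the abstract’s thirty-odd floors of the Empire State Building.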

Small mud volcano, Romania (Photo credit: Wikipedia)

That the sand is extrusive, rather than being simply a sedimentary stratum, is revealed by its extraordinary shape. Its thickest part is in a depression surrounded by mounds of the underlying unit – the former seabed – above which the body is absent. These mounds show marginal signs on the seismic sections of dykes that could have acted as feeders from stratiform sands deeper in the sequence, the dykes coinciding with the base of ‘ditches’ in the body’s upper surface. In turn, the ditches have flanking ridges, as if the ditches and the dykes below were feeders for the sand extrusion. Such an extrusive body is currently forming at the accidentally triggered Lusi mud volcano in Indonesia, where a single vent exudes about 50 thousand m3 each day; a rate that would take about 550 years to produce the Snorre field body. Pleistocene stratigraphy surrounding the vast North Sea ‘boil’ suggests that it formed during a period of rapid sedimentation from the huge North Sea ice shelf supplied by the Scandinavian ice sheet.
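The comparison with Lusi’s output rate follows directly from the figures in the text:

```python
# Time for a Lusi-like vent (~50 thousand m3/day) to extrude the 10 km3 body
volume_m3 = 10 * 1e9        # 10 km3 in cubic metres
rate_m3_per_day = 50_000    # quoted single-vent output

years = volume_m3 / rate_m3_per_day / 365.25
print(f"{years:.0f} years")  # close to the ~550 years quoted above
```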

Helge Løseth and colleagues from Statoil and the University of Rennes ran a series of dry sandbox experiments to mimic the process of sand injection. By pumping air through interbedded sand, glass ballotini and silica powder, representing two types of non-cohesive sands and cohesive mudrocks, they found that increasing the overall air pressure in the box eventually fluidised the ‘sands’, which blurted through the ‘clays’ to form ‘volcanoes’ with plumes of sand that enlarged the area of deposition at the surface. Cutting into the sediments after the experiments revealed a remarkably real-looking system of intrusive sand bodies (dykes, sills and laccoliths) as well as the extrusive mass of sand. Chances are that such bodies form more commonly in marine sequences than suspected, given over-pressuring encouraged by sudden increases in normal sedimentation. If so, the very open grain structure of the vented sands might provide superb petroleum reservoir characteristics.

Carbon dioxide burial: an analogy of some pitfalls

Schematic showing terrestrial and geological sequestration of carbon dioxide emissions from a coal-fired power plant. (Credit: LeJean Hardin and Jamie Payne, Wikipedia)

Of all the ‘geoengineering’ approaches that may offer some relief from global warming, pumping CO2 into deep sedimentary rocks through carbon capture and storage (CCS) is the one that most directly intervenes in the natural carbon cycle. In fact it adds an almost wholly anthropogenic route to the movement of carbon. It is difficult if not impossible for natural processes to ‘pump’ gases downwards, except when they are dissolved in water, and most often through the conversion of CO2 to solid carbonates or organic matter that is simply buried on the ocean floor. Artificially producing carbonate or organic matter on a sufficient scale to send meaningful amounts of anthropogenic carbon dioxide to long-term rock storage is pretty much beyond current technology, but gas sequestration seems feasible, if costly. The main issues concern making sure geological traps are ‘tight’ enough to prevent leakage on a scale that would render the exercise of little use, and understanding the geochemical effects of large amounts of buried gas that would inevitably move around to some extent.

The geochemistry is interesting, as reactions of CO2 with rock and subsurface water are inevitable. The most obvious is that solution in water releases hydrogen ions to create weakly acidic fluids: on the one hand that might be a route for precipitation of carbonate and more secure carbon storage, through reaction with minerals (see http://earth-pages.co.uk/2012/04/10/possible-snags-and-boons-for-co2-disposal/), but another possibility is increasing dissolution of minerals that might eventually cause a trap to leak. A counterpart of pH change is the release of electrons, whose acceptance in chemical reactions creates reducing conditions. The most common minerals to be affected by reducing reactions are the iron oxides, hydroxides and sulfates that often coat sand-sized grains in sedimentary rocks, or occur as accessory minerals in igneous and metamorphic rocks. Iron in such minerals is in the Fe3+ valence state (ferric iron, from which an electron has been lost through oxidation), which makes them among the least soluble common materials, provided conditions remain oxidising. Flooding sedimentary rocks with CO2 inevitably produces a commensurate flow of electrons that readily interact with Fe3+. The reduced product Fe2+ (ferrous iron) is soluble in water, and so reduction breaks down iron-rich grain coatings. Much the same happens with less abundant manganese oxides and hydroxides. One important concern is that iron hydroxide (FeO.OH or goethite) has a molecular structure so open that it becomes a kind of geochemical sponge. Goethite may lock up a large range of otherwise soluble ions, including those of arsenic and some toxic metals. Should goethite be dissolved by reduction, that toxic load moves into solution and can migrate.

Bleached zone with carbonate-oxide core in Jurassic Entrada Sandstone, Green River, Utah. (Image: Max Wigley, University of Cambridge)

Except where deep, carbonated groundwater leaks to the surface in springs – the famous Perrier brand of mineral water is an example – it is difficult to judge what is happening to gases and fluids at depth. But their long-past activity can leave signatures in sedimentary rocks exhumed to the surface. Most continental sandstones, formed either through river or wind action, are strongly coloured by iron minerals simply because of strongly oxidising conditions at the Earth’s surface for the past two billion years or more. Should reducing fluids move through them, the iron is dissolved and leached away to leave streaks and patches of bleached sandstone in otherwise red rocks. In a few cases an altogether more pervasive bleaching of hundreds of metres of rock marks the site of massive fluid-leakage zones. Terrestrial Mesozoic sedimentary sequences in the Green River area of Utah, USA exhibit spectacular examples, easily amenable to field and lab study (Wigley, M. et al. 2012. Fluid-mineral reactions and trace metal mobilization in an exhumed natural CO2 reservoir, Green River, Utah. Geology, v. 40, p. 555-558). There the bleaching rises up through the otherwise brown and yellow sandstones, cutting across the bedding. In the bleached zone, secondary calcite fills pore spaces. At the contact with unbleached sandstone there are layers of carbonate and metal oxides, enriched in cobalt, copper, zinc, nickel, lead, tin, molybdenum and chromium: not ores, but clear signs confirming the general model of reductive dissolution of iron minerals and movement of metal-rich fluid. Carbon isotopes from the junction are richer in 13C than could be explained by the gas phase having been methane, and confirm naturally CO2-rich fluids.

So, Green River provides a natural analogue for a carbon capture and storage system, albeit one that leaked so profusely it would be a latter day disaster zone. In that sense the site will help in deciding where not to construct CCS facilities.

Disputes in the cavern

Had Ignatius Loyola been a child of the late 20th century, it is quite likely that he would have chosen palaeoanthropology as a career rather than theology, seeing as he was so predisposed to casuistry. When I innocently asked a vertebrate palaeontologist who specialized in the Pliocene and Pleistocene Epochs why students of hominins were so prone to controversy, his answer was revealing: ‘They don’t have many fossils’. One place where there are lots of hominin fossils, in fact the largest known sample of them, is the Atapuerca cavern in northern Spain. At the deepest level of the cave system there is a veritable charnel house containing the remains of at least 28 individuals. Because there are bones from all parts of the human anatomy, some have suggested that the cache is one of deliberate burial, but there is a disturbing dearth of the smaller bones of feet and hands. Consequently, other voices claim that the bodies were washed in by floods, losing extremities en route – though that view would be easily tested using other signs of trauma on large bones. Yet that is a minor quibble compared with one developing around the age of the boneyard and the taxonomy of the cadavers in it (http://www.guardian.co.uk/science/2012/jun/10/fossil-dating-row-sima-huesos-spain).

Head of Homo heidelbergensis (replica), Senckenberg Museum, Frankfurt am Main, Germany (Photo credit: Wikipedia)

The Spanish team responsible for the evolutionary wealth in the entire Atapuerca cave complex, which ranges from almost a million years ago to recent times, assigned the Sima de los Huesos (Pit of Bones) fossils to Homo heidelbergensis. In fact about 90% of all H. heidelbergensis remains are from Atapuerca, so any anatomical dispute over these specimens is a threat to the status of the species itself. One leading authority who does dispute this assignment is Chris Stringer of the UK Natural History Museum, who claims that many of the heads have teeth and jaws with shapes that fall within the range of Neanderthals – supposedly descended from H. heidelbergensis. The age of the deposit is the focus of debate. Were it to be around 400 ka or younger, as early attempts at dating suggested, then the fossils might well be those of Neanderthals, for that is early in the range of that species as determined by ‘molecular-clock’ studies of Neanderthal DNA. However, the material most likely to yield a good radiometric age is carbonate speleothem, the stuff of stalactites and stalagmites though more commonly a matrix that binds together old cave detritus. The fossils are undoubtedly far older than the maximum age that can be achieved using the well-known radiocarbon method (<60 ka), but speleothem lends itself to a precise dating technique based on the decay series of uranium isotopes. In the case of Sima de los Huesos, the fossils lie in a clay breccia overlain by a layer of speleothem, which has yielded a U-series age of around 600 ka (Bischoff, J.L. et al. 2007. High-resolution U-series dates from the Sima de los Huesos hominids yields 600 kyrs: implications for the evolution of the early Neanderthal lineage. Journal of Archaeological Science, v. 34, p. 763-770).
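The U-series technique relies on the in-growth of 230Th in freshly precipitated calcite, which incorporates uranium but almost no thorium. A minimal sketch of the age equation for the simplest case, assuming no initial 230Th and (234U/238U) in secular equilibrium (real speleothem dating corrects for both):

```python
import math

HALF_LIFE_230TH = 75_584    # years; 230Th half-life
LAMBDA_230 = math.log(2) / HALF_LIFE_230TH

def u_series_age(th230_u234_activity_ratio):
    """Age from the (230Th/234U) activity ratio, assuming no initial 230Th
    and (234U/238U) = 1. The ratio approaches 1 at secular equilibrium,
    so the method saturates at roughly 500-600 ka."""
    return -math.log(1 - th230_u234_activity_ratio) / LAMBDA_230

# An activity ratio of ~0.996 corresponds to an age of roughly 600 ka,
# near the practical limit of the technique
print(round(u_series_age(0.996) / 1000), "ka")
```

The saturation of the ratio near 1 is why a ~600 ka speleothem date sits at the very edge of what U-series can resolve, and why its error bars feed the dispute described here.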

The ‘bone breccia’ in Sima de los Huesos, Atapuerca caverns Spain (from Bischoff, J.L. et al. 2007)
Skhul V skull from Israel (Credit: Wikipedia)

Stringer argues that the hominins’ anatomy is so like that of Neanderthals that, somehow, the radiometric age must be wrong – i.e. “too old” – perhaps because the speleothem is in fact from a 600 ka block that fell onto the fossils after they had accumulated. His view is that they are Neanderthals descended from a H. heidelbergensis population living earlier in the Pleistocene, which was the common ancestor of both Neanderthals and anatomically modern humans. Bischoff et al. consider the Sima de los Huesos hominids to be ‘at the very beginnings of the Neanderthal evolutionary lineage’, which seems to me to be a reasonable deduction from both stratigraphic and anatomical data. To demand that they must be at least 200 ka younger, apparently on the basis of an estimate of Neanderthal origination from DNA data, seems less reasonable. The appearance of Stringer’s detailed arguments in Evolutionary Anthropology (v. 21(3)) is eagerly awaited, following the Observer’s take on his position.

Another area in which controversy is brewing – and has been for decades – is that of the origin of human artistic culture. One of the gem-boxes of early art is the Geissenclösterle (monastery of the goats) cavern in southern Germany, in which various figurines made of bird bone and ivory have been found, including a celebrated lion-man theriomorph, highly exaggerated female figures, flutes and beads. They belong to the Aurignacian culture brought by the earliest anatomically modern Europeans, who diffused westwards along the Danube from the Near East as early as 45 ka ago. The layer containing the artifacts was originally dated at about 35 ka, but new radiocarbon techniques have been tried on bone with cut marks, among other materials (Higham, T. et al. 2012. Testing models for the beginnings of the Aurignacian and the advent of art and music: the radiocarbon chronology of Geissenclösterle. Journal of Human Evolution, v. 62, p. 664-676, doi:10.1016/j.jhevol.2012.03.003), and found to yield a much older age of 42.5 ka, close to the oldest European dates for modern human occupation (43-45 ka) from the stratigraphically older Uluzzian tool industry.
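Why improved sample pre-treatment matters so much at these ages follows from the radiocarbon decay law itself. A sketch on the conventional (uncalibrated) scale, using the Libby mean-life of 8033 years; the published Geissenclösterle ages are calibrated, so this is illustrative only:

```python
import math

LIBBY_MEAN_LIFE = 8033  # years; the conventional scale uses the Libby half-life (5568 yr)

def conventional_age(f14c):
    """Conventional (uncalibrated) radiocarbon age from fraction modern carbon."""
    return -LIBBY_MEAN_LIFE * math.log(f14c)

def fraction_modern(age_years):
    """Fraction of the original 14C remaining after age_years."""
    return math.exp(-age_years / LIBBY_MEAN_LIFE)

# At ~42.5 ka only about half a percent of the original 14C survives,
# so tiny amounts of younger contamination drag apparent ages sharply
# younger, which is why better pre-treatment pushed the dates back.
print(f"{fraction_modern(42_500):.4f}")
```

With so little 14C left, even 1% modern contamination would make a truly 42.5 ka sample appear many millennia younger, which is the crux of the dating revisions discussed here.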

Lion-man sculpture from Geissenclösterle (J. Duckek, Wikipedia)

The date is also considerably earlier than the demise of the Neanderthals and raises the issue of modern-Neanderthal contacts. Indeed the layer below that assigned to Aurignacian contains tools made by Neanderthals, whose age is statistically indistinguishable from the later occupation level. The Chatelperronian tool industry, which closely resembles the Aurignacian but is ascribed to Neanderthals, is supposed to be around 40 ka old, but the advanced radiocarbon technique that yielded much older ages for Geissenclösterle apparently has not yet been deployed on this culture. On the basis of limited age data, it does seem likely that Neanderthals adopted the new technology after they encountered it. The Aurignacian artistic products are vastly more advanced than any found at older sites in Africa.

Original ‘Venus’ from Hohle Fels, mammoth ivory
Aurignacian female figurine from near Geissenclösterle (credit: Silosarg, Wikipedia)

In the context of the debate about modern human and Neanderthal cognitive abilities, which suggests the former were altogether smarter and more creative, there is an unvoiced or at least unheeded argument. Whether Neanderthals originated artefacts that were ‘modern’ for their time or copied them is not as important as the fact that this group, previously isolated for up to 400 millennia, was able to appreciate and learn these novelties. That is much the same as people living today, in Australia for instance, a couple of generations from hunter-gatherer origins, working on production lines, piloting aircraft, social networking and creating world-class abstract art. What did they, and the Aurignacians, produce from materials that have not survived decay? The same question applies to any pre-45 ka humans. Another point rarely raised, but surely valid, is that earlier people may not have felt any need to produce art in forms that survive for tens or hundreds of millennia.

Forty-odd thousand years ago, climate was undergoing rapid ups and downs of temperature and humidity in the run-up to the last glacial maximum. Conditions at mid-latitudes would have been much more changeable than those of the tropics. Both anatomically modern humans and Neanderthals faced the same attendant ecological changes, and as co-occupants of southern Europe they faced each other as rivals for available resources. Finally, the Aurignacians hailed from the east, also Neanderthal territory and severely affected by rapid climate change from around 80 ka: did they bring with them a culture formed elsewhere? Europe concentrates palaeoanthropologists and their endeavours, while much of the planet to which humans diffused from Africa – and Africa itself – is grossly under-investigated by comparison: ideas will undoubtedly change drastically as these areas get the attention they deserve.

Controversy is not a problem. Indeed, with imperfect, inadequate or ambiguous data it is unavoidable, and heated disputes spur the search for more information that can help resolve ideas or change them. What cannot be sidestepped is the potential for havoc that may arise with new and improved methods. In both cases outlined here, radiometric dates have thrown the proverbial spanner into the works. The method used on the Geissenclösterle material was designed to remove younger contaminating carbon from samples for radiocarbon dating, and inevitably tends to push 14C dates further back in time. By removing a source of inaccuracy it highlights the inadequacies of dates obtained by earlier approaches, on which a great deal of current archaeological thinking relies. Just how much younger contamination is present in a sample only emerges after the improved dating: it may be absent, or it may be substantial. So, until materials dated by earlier radiocarbon methods are re-run using the new approach, neither their absolute ages nor their relative sequence in time can be considered reliable.
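The leverage that even a trace of young carbon exerts on very old samples can be illustrated with the conventional Libby relation (age = -8033 ln F, where F is the measured 14C activity relative to modern). The sketch below is illustrative arithmetic only, not the pretreatment or calibration procedure used by Higham et al.:

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; basis of the conventional radiocarbon age

def apparent_age(true_age, modern_fraction):
    """Conventional 14C age of a sample of the given true age after
    contamination by a mass fraction of modern carbon (F = 1)."""
    f_true = math.exp(-true_age / LIBBY_MEAN_LIFE)  # surviving 14C fraction
    f_measured = (1.0 - modern_fraction) * f_true + modern_fraction
    return -LIBBY_MEAN_LIFE * math.log(f_measured)

# Just 1% modern carbon makes a 42.5 ka bone appear roughly 33-34 ka old,
# comparable to the shift between the old and new Geissenclösterle dates.
print(round(apparent_age(42500, 0.0)))   # uncontaminated: 42500
print(round(apparent_age(42500, 0.01)))  # 1% contamination: ~33700
```

The asymmetry is the point: contamination can only make old samples look younger, which is why improved decontamination systematically pushes dates back in time.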

Replica of the ceiling of the Altamira cave
Art on the walls of Altamira Cave, northern Spain, including both older abstract works and younger figurative depictions of prey animals (Photo credit: Wikipedia)

Results from just such an advance in radiometric dating of cave deposits in northern Spain will really cause a stir when they sink in (Pike, A.W.G. and 10 others 2012. U-series dating of Paleolithic art in 11 caves in Spain. Science, v. 336, p. 1409-1413). The U-series method, used at the University of Bristol by the joint British-Spanish collaborators, dates calcite deposits on painted cave walls, including those at the famous Altamira site. This ‘flowstone’ may underlie artwork or may have grown over it after its completion, giving maximum or minimum ages for the painting, respectively. If a work has flowstone both underneath and as a coating, dating potentially ‘brackets’ a possible age range. The superb figurative depictions of various prey animals, such as the bison in Altamira cave, turn out to have been painted at around 18 ka, during the last glacial maximum. However, a lot of the art there is abstract: hands picked out by red pigment presumably sprayed onto the wall from the artist’s mouth, and various stippled discs and dots. Many of the abstracts lie beneath flowstone that is around twice as old as the more familiar figures, ranging in age from 34 to 41 ka and thereby close in time to the Geissenclösterle materials. Like them, their ages may coincide with the arrival of the earliest anatomically modern Europeans, but they also fall towards the end of the period when Neanderthals were still present in much of Europe, including northern Spain. It cannot be ruled out, therefore, that the earliest paintings were Neanderthal symbolic art.

When Iapetus opened

Global paleogeographic reconstruction
The Iapetus Ocean separating the paleocontinents of Baltica, Laurentia and Avalonia about 460 million years ago. (Rob Blakey http://jan.ucc.nau.edu/~rcb7/, Wikipedia)

The first sign that there was something odd about the Lower Palaeozoic in NW Europe and North America stemmed from gross mismatches between fossil assemblages only a few tens of kilometres apart across the regional strike of sedimentary rocks older than the Upper Silurian. It didn’t show up in the Devonian and Carboniferous, and nothing like it reappeared until well into the Jurassic. Until the 1960s the separation of these faunal provinces was ascribed to something akin to the Wallace Line that currently separates the flora and fauna of Oceania, Australia and the eastern islands of Indonesia from those of western Indonesia and Asia: a barrier to migration presented by the deep-water but narrow channel between Bali and Lombok in the Indonesian archipelago. The ancient biological boundary roughly coincides with the long-described Caledonian and Acadian Orogens of NW Europe and eastern North America respectively. With the discovery of plate tectonics another explanation arose: that the opposite sides of the once contiguous orogens had formerly been separated by thousands of kilometres across an ocean. This was named in 1966 by John Tuzo Wilson after Iapetus, one of the mythical Greek titans, who fathered Atlas – the eponym of the Atlantic Ocean. So, in the tectonic canon, the Caledonian-Acadian mountain belt marks the closure through subduction of its former oceanic lithosphere, which brought the distinct faunal provinces together across a line known as the Iapetus Suture. Many lines of evidence time-stamp this continental collision to the end of the Silurian Period.

The Iapetus Suture, also known as the Niarbyl Fault
The Iapetus Suture, marked by the Niarbyl Fault on the Isle of Man: one of the few places where one can believably straddle two ancient continents. (credit: G.J. Kingsley, Wikipedia)

When the Iapetus Ocean began to open is not so easy to pinpoint, save that it predated the Cambrian Period. The most likely possibility is that it marked the line of separation between fragments of the 1 billion-year-old Rodinia supercontinent, which started to break up in the early Neoproterozoic. That was a protracted event, palaeomagnetic, radiometric and stratigraphic data loosely constraining extension between the former two sides of Iapetus to between 620 and 570 Ma. Around Quebec City, Canada, a number of large faults in the St Lawrence rift system bound a zone of deep-water sediments and volcanic rocks that yielded this broad age range. Yet the faults are associated with glassy rocks formed by frictional melting during brittle fracturing. These pseudotachylites can be dated, and have now helped resolve the ‘fuzziness’ of Iapetus’s formation (O’Brien, T.M. & van der Pluijm, B.A. 2012. Timing of Iapetus Ocean rifting from Ar geochronology of pseudotachylites in the St Lawrence rift system of southern Quebec. Geology, v. 40, p. 443-446). The two co-workers from the University of Michigan show that the rifting occurred at 613-614 Ma, coinciding with a brief period of mafic dyke emplacement in Newfoundland and Labrador. Since the Iapetus Suture lies not far from the St Lawrence rift system in eastern Canada, the area has become the best-constrained example of what soon became known in the early days of plate tectonics as a Wilson Cycle: rift, drift and collision. John Tuzo Wilson (1908-1993), a Canadian descended from French and Scottish settlers and a pioneer of the modern phase of geology, would surely be pleased that the concept has finally been pinned down in terrain he knew well.

Early origins of meat and two veg

Barbecue chef at festival
(Photo credit: Wikipedia)

When and how humans acquired fire on demand and began to cook has long engaged storytellers and historians. Entertaining tales are those of the titan Prometheus, who stole fire from Zeus and then had his liver eaten by an eagle (http://en.wikipedia.org/wiki/Prometheus), and of Bo-bo, who accidentally discovered the barbecue approach to the meat of pigs (http://www.amazingribs.com/BBQ_articles/dissertation_on_roast_pork.html). Despite the secretive pleasures of some French and Ethiopian gourmets, raw flesh is not widely appreciated, although a rare steak comes pretty close. There is nothing wrong with it apart from its usually being tough and prone to deliver spectacular evacuations. Cooking unfolds the proteins in meat, making them easier to digest, so portions of cooked meat deliver higher nutrition than they would direct from the carcase. Likewise, cooking some vegetables, especially various tubers, breaks down their chemistry into more easily digested and more palatable materials: think ‘potato’ in this context. In fact many potentially nutritious tubers are positively toxic if not processed and cooked, classic examples being cassava and wild yams.

While some anthropologists consider a change in hominin habits to eating meat per se, probably originally as carrion, as the necessary step to a leap in nutrition from which an enlarged brain developed, others favour the harnessing of fire and the invention of cooking, which released greater proportions of proteins and carbohydrates from available foodstuffs. Since hominins evolved in distinctly seasonal savannas and open woodland, the dry-season shortage of game and of directly edible above-ground plant parts suggests that our early ancestors had two possible survival paths open to them: evolving powerful jaws and complex digestive tracts to survive on woody stems, or digging up tubers. Respectively, the anatomy and tooth-wear patterns of paranthropoids and early Homo to some extent support such a dichotomy, which arose from the australopithecines after about 2 Ma. Both lineages succeeded, cohabiting roughly the same ranges in eastern Africa for as long as a million years.

So pinning down the origin of the controlled use of fire is a major goal of Pleistocene archaeology, needed to settle the issue of nutrition and brain growth. It would also help explain how hominins were able to diffuse far beyond their home ranges to latitudes sufficiently high that fire becomes an essential source of warmth at night and in winter. Yet evidence for habitual use of fire is younger than 400 thousand years, among H. heidelbergensis, H. neanderthalensis and H. sapiens, leaving the wide-roaming H. erectus to shiver as far as scientific proof of hearth and home is concerned. There have been claims of early charring, burnt bones and ashes, but until recently such evidence has been ambiguous, largely because fire can start easily and naturally in tinder-rich conditions. There are now, however, advanced microscopic, chemical and physical techniques for estimating the temperatures to which bones have been subjected and for detecting changes in materials caused by fire, which can be applied to minute samples from sites once occupied by earlier people. One test site for the methods has been the Wonderwerk Cave in South Africa, known from Acheulean tools and cut bone to have been occupied as long ago as 1.1 Ma. The techniques gave a positive result for the use of fire by the earliest cave occupants (Berna, F. et al. 2012. Microstratigraphic evidence of in situ fire in the Acheulean strata of Wonderwerk Cave, Northern Cape province, South Africa. Proceedings of the National Academy of Sciences USA, www.pnas.org/cgi/doi/10.1073/pnas.1117620109 – open access). The same methods had previously been used to establish controlled human use of fire at around 400 ka in once-occupied caves in Israel, but the Wonderwerk result almost triples the age of earliest known use. They have, however, refuted similar claims from the famous Zhoukoudian site of ‘Peking Man’ (Asian H. erectus) (http://www.unesco.org/ext/field/beijing/whc/pkm-site.htm).

A useful adage is that ‘the absence of evidence is not evidence of absence’, and it is early days for the routine archaeological use of micromorphology and Fourier transform infrared (FTIR) spectroscopy in the search for human embers. In drylands, naturally started fires, whether sparked by lightning or spontaneous combustion, are so common that hominins would have been well aware of them, their dangers and perhaps their advantages as regards a free barbecue. Bo-bo, salivating at the aroma of roast pig from the wreckage of his father’s house, which he had razed to the ground through sheer stupidity, hints at how early hominins might have connected a lucky feast with the still-glowing embers of a bush fire. With care, embers can survive for long enough to be carried and used to start controlled fire: a fact not lost on many surviving fully human foragers, nor on kids on a South Yorkshire council estate eager for the delights of roasting some ‘borrowed’ potatoes.

Groundwater in Africa

Mwamanongu Village water source, Tanzania
Drinking water for many rural Africans comes from open holes dug in the sand of dry riverbeds, and it is invariably contaminated. (credit: Bob Metcalf, Wikipedia)

Sub-surface water supplies have rarely, if ever, figured in Earth Pages except in passing or in relation to the ongoing crisis of arsenic pollution in drinking-water supplies. That is largely because of the paucity of groundwater publications of general interest. So it was welcome news to learn that hydrogeologists of the British Geological Survey and University College London have produced a continent-wide review of groundwater prospects for Africa, the continent probably in most need of good news about water supplies (MacDonald, A.M. et al. 2012. Quantitative maps of groundwater in Africa. Environmental Research Letters, v. 7, doi:10.1088/1748-9326/7/2/024009). They used existing hydrogeological maps, publications and other publicly available data to estimate total groundwater storage in a variety of aquifer types and the yield potentials of boreholes. Details can be seen at http://www.bgs.ac.uk/research/groundwater/international/africanGroundwater/maps.html

Dominated by the vast sedimentary aquifers of Libya, Algeria, Egypt and Sudan, such as the Nubian Sandstone, around 0.66 million km³ of groundwater may lie below the continental surface: more than 100 times the annually renewable freshwater resources, including the flows of three of the world’s largest rivers, the Nile, Congo and Niger. Though only a fraction of this subsurface potential may be available for extraction through wells, the arithmetic, or rather the statistics, suggests that small-diameter boreholes and simple handpumps, as well as traditional wells, can sustainably satisfy the drinking-water needs of the bulk of Africa’s rural populations, with yields of individual wells between 0.1 and 1 l s⁻¹. However, groundwater use in irrigation and for large urban supplies demands well productivities an order of magnitude higher, from thick sedimentary sequences, which rarely coincide in Africa with areas suitable for large-scale agriculture or with existing cities and large towns. Both the humid tropical lowlands with thick unconsolidated sediments and the deep sedimentary-rock aquifers beneath the Sahara and other arid areas match great groundwater potential with either little need for groundwater or virtually no potential for agricultural development and very few people. Moreover, the truly vast reserves of North Africa, an order of magnitude or more greater than those of any other countries, lie at such depths and so remote from need that development requires commensurately huge investment, in the manner of oil-rich Libya’s Great Man-Made River Project, projected at more than US$25 billion. To say that reserves, convenience and yields are inequitably distributed in Africa would understate the hydrogeological difficulties of the continent.
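The scale of those borehole yields is easy to check with back-of-envelope arithmetic. The sketch below is purely illustrative: the 20 litres per person per day allowance and the 12-hour pumping day are assumed round figures, not values from the paper.

```python
def people_served(yield_l_per_s, demand_l_per_person_day=20.0,
                  pumping_hours=12.0):
    """Rough number of people one well can supply, assuming the pump
    runs for a limited number of hours per day (assumed values)."""
    litres_per_day = yield_l_per_s * 3600.0 * pumping_hours
    return litres_per_day / demand_l_per_person_day

# A modest 0.1 l/s handpump well, pumped 12 hours a day, can in
# principle meet the basic drinking-water needs of ~200 people,
print(round(people_served(0.1)))  # -> 216
# while a 1 l/s well serves a few thousand; irrigation and urban
# supply need yields an order of magnitude higher again.
print(round(people_served(1.0)))  # -> 2160
```

This is why even the modest 0.1-1 l s⁻¹ range quoted for most African aquifers is genuinely useful at village scale, while remaining far short of agricultural or urban demand.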

Average well productivity predicted by MacDonald et al. from Africa’s regional geology

Much of Africa has crystalline basement at the surface, which gives useful yields (>0.1 l s⁻¹) only where deeply weathered, and even then rarely better than 1 l s⁻¹. An exception to this general rule is where the basement has been shattered by large faults and fractures. Sedimentary cover is generally thin across the continent, with highly variable yield potential. The other issue is sustainability, for if extraction rates exceed those of recharge then groundwater effectively becomes a non-renewable resource. About half of the African surface, mainly in its western equatorial region, has sufficient rainfall and infiltration potential to outpace universally high evapotranspiration and give recharge equivalent to more than 2.5 cm of annual rainfall. In the areas repeatedly hit by drought and famine, average recharge through the surface that escapes being literally blown away on the wind is less than half a centimetre.
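Those recharge figures translate directly into a ceiling on sustainable pumping. The sketch below is illustrative arithmetic only, with a hypothetical catchment area, and ignores storage, losses and the hydrogeological detail that governs real wells:

```python
def sustainable_yield_l_per_s(recharge_cm_per_year, catchment_km2):
    """Maximum long-term average extraction rate that recharge over a
    catchment can balance (illustrative; ignores losses and storage)."""
    recharge_m = recharge_cm_per_year / 100.0
    area_m2 = catchment_km2 * 1.0e6
    volume_l_per_year = recharge_m * area_m2 * 1000.0  # m^3 -> litres
    return volume_l_per_year / (365.0 * 24.0 * 3600.0)

# 2.5 cm/yr of recharge over a hypothetical 10 km^2 catchment balances
# ~7.9 l/s of continuous pumping; at the 0.5 cm/yr typical of
# drought-prone areas, only ~1.6 l/s.
print(round(sustainable_yield_l_per_s(2.5, 10.0), 1))  # -> 7.9
print(round(sustainable_yield_l_per_s(0.5, 10.0), 1))  # -> 1.6
```

The contrast shows why irrigation-scale extraction in the drought-prone half of the continent quickly mines groundwater rather than harvesting a renewable flow.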

To have synopses of all the important issues surrounding African groundwater – the best choice for safe domestic supplies in hot, poor areas – would seem very useful to those engaged in development and relief strategies; i.e. to governments, the UN ‘family’ and the World Bank. But there are important caveats. An obvious one is the antiquity of many of the surveys drawn on by MacDonald et al.: some 23 out of 33 were published more than 20 years ago, using data that may be a great deal older, such has been the snail-like pace of publication by all geological surveys, including the BGS. That is compounded by the small scale of the maps (mainly smaller than 1:1 million) and the extremely sparse geophysical data concerning subsurface geology across most of Africa. ‘Quantitative’ is not the adjective to use here, for unlike in most of the developed world, groundwater reserves and locations in Africa have not been measured but estimated from pretty meagre data. In fact, to be brutally realistic, most of the source maps are based on educated guesswork by a few hard-pressed geoscientists, each once personally responsible for areas that would cripple most of their colleagues working in, say, Europe or North America.

If there is a truism about water exploration in Africa, outside the well-watered parts, it is this: sink a well at random and it will probably be dry. The statistics may well be encouraging, as MacDonald et al. clearly believe, but finding useful groundwater supplies relies on a great deal more. Outside cities, people often depend on traditional means of water exploration and well digging: they, or at least some locals, are experts at locating shallow sources. Yet to improve their access to decent water in the face of both rising populations and climate change demands sophisticated exploration techniques based on geological knowledge. Most important is to ensure supplies to existing communities, whose locations do not necessarily match deeper groundwater availability, bearing in mind that a universal problem for most African villagers is the sheer distance to wells with safe water. Rigs used to drill tube wells are expensive to hire, so the likelihood of success needs to be maximised. In the absence of large-scale (1:50 000) geological maps – rarities throughout Africa – only skilled hydrogeological interpretation of aerial or satellite images, followed up by geophysical ground traverses, offers that vital confidence.

Geologically useful ASTER image of the Danakil Block in Eritrea/Ethiopia, showing Mesozoic and Recent sedimentary aquifers and crystalline basement (Steve Drury)

In fact, thanks to the joint US-Japan ASTER system carried in sun-synchronous orbit, geologically oriented image data are available for the whole continent. Interpretation requires some skills, but few if any that cannot be learned in a practical, field setting. Indeed, the African surface in its arid to semi-arid parts, those most at risk of drought and famine, lends itself to rapid hydrogeological reconnaissance mapping using ASTER data. Given on-line training in image interpretation, a ‘crowd-source’ approach coordinating many interpreters could complete a truly life-giving and easily available map base for local people to focus their own well-construction programmes.