Early origins of meat and two veg

Barbecue chef at festival
(Photo credit: Wikipedia)

When and how humans acquired fire on demand and began to cook has long engaged storytellers and historians. Entertaining tales are those of the titan Prometheus, who stole fire from Zeus and had his liver eaten by an eagle (http://en.wikipedia.org/wiki/Prometheus), and of Bo-bo, who accidentally discovered the barbecue approach to the meat of pigs (http://www.amazingribs.com/BBQ_articles/dissertation_on_roast_pork.html). Despite the secretive pleasures of some French and Ethiopian gourmets, raw flesh is not widely appreciated, although a rare steak comes pretty close. There is nothing wrong with it apart from its usually being tough and prone to deliver spectacular evacuations. Cooking unfolds the proteins in meat, making them easier to digest, so portions of cooked meat deliver higher nutrition than they would direct from the carcase. Likewise, cooking some vegetables, especially various tubers, breaks down their chemistry into more easily digested and more palatable materials: think ‘potato’ in this context. In fact many potentially nutritious tubers are positively toxic unless processed and cooked, classic examples being cassava and wild yams.

While some anthropologists consider a change in hominin habits to eating meat per se, probably originally as carrion, as the necessary step to a leap in nutrition from which an enlarged brain developed, others favour the harnessing of fire and the invention of cooking, which released greater proportions of proteins and carbohydrates from available foodstuffs. Since hominins evolved in distinctly seasonal savannas and open woodland, the dry-season shortage of game and of directly edible above-ground plant parts suggests that our early ancestors had two possible survival paths open to them: evolving powerful jaws and complex digestive tracts to survive on woody stems, or digging up tubers. The anatomy and tooth-wear patterns of, respectively, the paranthropoids and early Homo that arose from the australopithecines after about 2 Ma to some extent support such a dichotomy. Both lineages succeeded, cohabiting roughly the same ranges in eastern Africa for as long as a million years.

So pinning down the origin of controlled use of fire is a major goal of Pleistocene archaeology, both to settle the issue of nutrition and brain growth and to help explain how hominins were able to diffuse far beyond their home ranges to northern latitudes high enough to make fire an essential source of warmth at night and in winter. Yet evidence for habitual use of fire is younger than 400 thousand years, among H. heidelbergensis, H. neanderthalensis and H. sapiens, leaving the wide-roaming H. erectus to shiver as far as scientific proof of hearth and home is concerned. There have been claims of early charring, burnt bones and ashes, but until recently such evidence has been ambiguous, largely because fire can start easily and naturally in tinder-rich conditions. There are now, however, advanced microscopic, chemical and physical techniques for estimating the temperatures to which bones have been subjected and for detecting changes in materials caused by fire, which can be applied to minute samples from sites once occupied by earlier people. One test site for the methods has been the Wonderwerk Cave in South Africa, known from Acheulean tools and cut bone to have been occupied as long ago as 1.1 Ma. The techniques gave a positive result for the use of fire by the earliest cave occupants (Berna, F. et al. 2012. Microstratigraphic evidence of in situ fire in the Acheulean strata of Wonderwerk Cave, Northern Cape province, South Africa. Proceedings of the National Academy of Sciences USA, www.pnas.org/cgi/doi/10.1073/pnas.1117620109 – open access). The same methods had previously been used to establish controlled human use of fire around 400 ka in once-occupied caves in Israel; Wonderwerk almost triples the age of earliest known use. But they have refuted similar claims from the famous Zhoukoudian site of ‘Peking Man’ (Asian H. erectus) (http://www.unesco.org/ext/field/beijing/whc/pkm-site.htm).

A useful adage is that ‘the absence of evidence is not evidence of absence’, and it is early days for the routine archaeological use of micromorphology and Fourier transform infrared (FTIR) spectroscopy in the search for human embers. In drylands naturally started fires, whether the result of lightning or spontaneous combustion, are so common that hominins would have been well aware of them, their dangers and perhaps their advantages as regards a free barbecue. Possibly Bo-bo’s salivating at the aroma of roast pig from the wreckage of his father’s house, which he had razed to the ground through sheer stupidity, would have struck some early hominins as a useful connection between a lucky feast and the still-glowing embers of a bush fire. With care, embers can survive for long enough to be carried and used to start controlled fire; a fact not lost on many surviving fully human foragers, and also on kids on a South Yorkshire council estate eager for the delights of roasting some ‘borrowed’ potatoes.

Groundwater in Africa

Drinking water for many rural Africans often comes from open holes dug in the sand of dry riverbeds, and it is invariably contaminated. (Bob Metcalf on Wikipedia)

Sub-surface water supplies have rarely, if ever, figured in Earth Pages except in passing or in relation to the on-going crisis of arsenic pollution in drinking-water supplies. That is largely because of the paucity of groundwater publications of general interest. So it was welcome news to learn that hydrogeologists of the British Geological Survey and University College London have produced a continent-wide review of groundwater prospects for Africa, the continent probably in most need of good news about water supplies (MacDonald, A.M. et al. 2012. Quantitative maps of groundwater in Africa. Environmental Research Letters, v. 7, doi:10.1088/1748-9326/7/2/024009). They used existing hydrogeological maps, publications and other publicly available data to estimate total groundwater storage in a variety of aquifer types and the yield potentials of boreholes. Details can be seen at http://www.bgs.ac.uk/research/groundwater/international/africanGroundwater/maps.html

Dominated by the vast sedimentary aquifers of Libya, Algeria, Egypt and Sudan, such as the Nubian Sandstone, around 0.66 million km3 of groundwater may lie below the continental surface: more than 100 times the annually renewable freshwater resources, including the flows in three of the world’s largest rivers, the Nile, Congo and Niger. Though only a fraction of this subsurface potential may be available for extraction through wells, the arithmetic, or rather the statistics, suggests that small-diameter boreholes and simple handpumps, as well as traditional wells, can sustainably satisfy the drinking-water needs of the bulk of Africa’s rural populations with yields of individual wells between 0.1 and 1 l s-1. However, groundwater use in irrigation and for large urban supplies demands well productivities an order of magnitude higher from thick sedimentary sequences, which rarely coincide in Africa with areas suitable for large-scale agriculture or with existing cities and large towns. Both the humid tropical lowlands with thick unconsolidated sediments and the deep sedimentary rock aquifers beneath the Sahara and other arid areas match great groundwater potential with either little need for groundwater, or virtually no potential for agricultural development and very few people. Moreover, the truly vast reserves of North Africa, an order of magnitude or more greater than those of any other countries, are at such depths and so remote that development needs commensurately huge investment, in the manner of oil-rich Libya’s Great Man Made River Project, projected at more than US$25 billion. To say that reserves, convenience and yields are inequitably distributed in Africa would understate the hydrogeological difficulties of the continent.
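To get a feel for what yields of 0.1 to 1 l s-1 mean in human terms, here is a back-of-envelope sketch. The 25 litres per person per day basic-needs figure and the 10-hour pumping day are my own illustrative assumptions, not figures from the paper:

```python
# Rough check that a modest handpump well can supply a village.
# Assumptions (illustrative, not from MacDonald et al.):
#   - basic domestic need of 25 litres per person per day
#   - the well is pumped for 10 hours a day, not around the clock

def people_served(yield_l_per_s, pumping_hours=10, need_l_per_day=25):
    """Number of people one well can supply at a given yield."""
    daily_output_l = yield_l_per_s * pumping_hours * 3600  # litres per day
    return daily_output_l / need_l_per_day

# A 0.3 l/s well, near the bottom of the quoted yield range:
print(round(people_served(0.3)))   # -> 432
```

Even a well towards the low end of the quoted range can, on these assumptions, meet the basic drinking-water needs of several hundred people, which is the paper's essential point about rural supplies.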

Average well productivity predicted by MacDonald et al from Africa’s regional geology

Much of Africa has crystalline basement at the surface, which gives useful yields (>0.1 l s-1) only when deeply weathered, and even then rarely better than 1 l s-1. An exception to this general rule is where the basement has been shattered by large faults and fractures. Sedimentary cover is generally thin across the continent, with highly variable yield potential. The other issue is sustainability, for if extraction rates exceed those of recharge then groundwater effectively becomes a non-renewable resource. About half of the African surface, mainly in its western equatorial region, has sufficient rainfall and infiltration potential to outpace universally high evapotranspiration and give recharge rates of more than 2.5 cm of annual rainfall. For the areas repeatedly hit by drought and famine, average recharge through the surface that escapes being literally blown away on the wind is less than half a centimetre.
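Those recharge depths can be turned into a sustainable continuous abstraction rate per unit area by a simple unit conversion; the sketch below is purely illustrative arithmetic on the figures quoted above:

```python
# Convert recharge expressed as cm of annual rainfall reaching the
# water table into a sustainable continuous abstraction rate per km2.

SECONDS_PER_YEAR = 365 * 24 * 3600

def sustainable_l_per_s_per_km2(recharge_cm_per_year):
    """Litres per second that can be drawn from 1 km2 without mining
    the aquifer, if abstraction exactly balances recharge."""
    recharge_m = recharge_cm_per_year / 100.0
    volume_l = recharge_m * 1_000_000 * 1000   # depth (m) x 1 km2 -> m3 -> litres
    return volume_l / SECONDS_PER_YEAR

print(f"{sustainable_l_per_s_per_km2(2.5):.2f} l/s per km2")  # well-watered half
print(f"{sustainable_l_per_s_per_km2(0.5):.2f} l/s per km2")  # drought-prone areas
```

So 2.5 cm of annual recharge balances roughly one continuously pumped 0.8 l s-1 well per square kilometre, while half a centimetre supports only about a fifth of that, which is why sustainability bites hardest exactly where need is greatest.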

To have synopses of all the important issues surrounding African groundwater – the best choice for safe domestic supplies in hot, poor areas – would seem very useful to those engaged in development and relief strategies; i.e. to governments, the UN ‘family’ and the World Bank. But there are important caveats. An obvious one is the antiquity of many of the surveys drawn on by MacDonald et al.: some 23 out of 33 were published more than 20 years ago, using data that may be a great deal older – such has been the snail-like pace of publication by all geological surveys, including the BGS. That is compounded by the small scale of the maps (mainly smaller than 1:1 million) and the extremely sparse geophysical data concerning subsurface geology across most of Africa. ‘Quantitative’ is not the adjective to use here, for unlike in most of the developed world, groundwater reserves and locations in Africa have not been measured but estimated from pretty meagre data. In fact, to be brutally realistic, most of the source maps are based on educated guesswork by a few hard-pressed geoscientists once personally responsible for areas that would cripple most of their colleagues working in, say, Europe or North America.

If there is a truism about water exploration in Africa, outside the well-watered parts, it is this: sink a well at random, and it will probably be dry. The statistics may well be encouraging, as MacDonald et al. clearly believe, but finding useful groundwater supplies relies on a great deal more. Outside cities, people often survive on groundwater thanks to traditional means of water exploration and well digging: they, or at least some locals, are experts at locating shallow sources. Yet improving their access to decent water in the face of both rising populations and climate change demands sophisticated exploration techniques based on geological knowledge. Most important is to ensure supplies to existing communities, whose locations do not necessarily match deeper groundwater availability, bearing in mind that a universal problem for most African villagers is the sheer distance to wells with safe water. Rigs used to drill tube wells are expensive to hire, so the likelihood of success needs to be maximised. In the absence of large-scale (1:50 000) geological maps – rarities throughout Africa – only skilled hydrogeological interpretation of aerial or satellite images, followed up by geophysical ground traverses, offers that vital confidence.

Geologically useful ASTER image of the Danakil Block in Eritrea/Ethiopia, showing Mesozoic and Recent sedimentary aquifers and crystalline basement (Steve Drury)

In fact, thanks to the joint US-Japan ASTER system carried in sun-synchronous orbit, geologically oriented image data are available for the whole continent. Interpretation requires some skills, but few if any beyond what can be learned in a practical, field setting. Indeed, the African surface in its arid to semi-arid parts, those most at risk of drought and famine, lends itself to rapid hydrogeological reconnaissance mapping using ASTER data. Given on-line training in image interpretation, a ‘crowd-source’ approach coordinating many interpreters could complete a truly life-giving and easily available map base for local people to focus their own well-construction programmes.

Origin of the arms race

Global paleogeographic reconstruction of the Earth in the early Cambrian period 540 million years ago. (credit:Ron Blakey, Northern Arizona University)

Palaeontologists generally agree on one broad aspect of animal evolution: the central role of predation versus defence in animal diversification to occupy different ecological niches. Indeed that interplay has to an extent been responsible for the diversification of potentially habitable niches themselves. Armour and arms form a dialectic within the animal world, but one that only rose to dominate when hard materials became an integral part of animal morphology, allowing some to bite, gnaw or rasp and others to develop shelly or horny skeletons. The Kingdom Animalia within the domain of the eukaryotes – organisms based on cells that bear a nucleus – is united by one life style, that of feeding directly or indirectly on other living things. Animals are heterotrophs, unable to generate energy and tissue through the fundamental harnessing of chemistry and physics to use the inorganic world directly, as autotrophs do. One of the earliest discoveries about the history of animals was that fossils in the widely accepted meaning of the word appeared suddenly in the geological record, earlier rocks containing virtually no tangible signs of life: fossils explode in numbers from the start of the Cambrian Period at 542 Ma. Subsequently, geologists did discover imprints of clearly quite complicated organisms in rocks a few tens of million years older than the start of the Cambrian. But these were flaccid, bag-like creatures that recent research has shown to have relied on filtering microorganisms from water or directly absorbing organic matter through their skin.

An animal from the late Precambrian (Photo credit: Wikipedia)

Another feature of the oldest Cambrian sediments is that in many parts of the world they rest with profound unconformity on deformed older rocks of Precambrian age. Throughout Britain the lowest Cambrian rocks are almost pure quartz sandstones that rest upon older, more complex rocks, ranging from those only a few tens of million years older than 542 Ma to some of the oldest rocks in Europe, the Lewisian complex dating back 3 billion years. Many of the hills of North West Scotland have a gleaming white cap of Lower Cambrian quartzite above the equivalent of what, where it occurs in Arizona’s Grand Canyon, has been termed the Great Unconformity. Sedimentary sequences that continuously record the Precambrian to Cambrian transition and the biological explosion at the juncture are rare. But they show two curious features in sediments that immediately predate those bearing recognisable fossils: a complete lack of evidence for burrowing, and millimetre-scale shell-like bodies made of calcium phosphate and carbonate that are thought to have adorned the skins of otherwise unprotected animals.

Creatures from the Cambrian Period (credit: Wikipedia)

Calcium, while a very common element, is one of the most dangerous to life. Traces are essential for the signalling that goes on in cell metabolism, and too little snuffs out those vital processes. Yet too much – still a very low concentration in cell cytoplasm – results in the growth of minute mineral crystals within cells, also spelling cell death. This results from the limited solubility of calcium in water compared with those of other common metals. At an early stage in evolution cells developed means of restricting the admission of calcium ions and efficient means of expelling excess amounts. The ubiquitous occurrence of Ca-rich marine limestones throughout the geological record bears witness to two things: the abundance of calcium ions in seawater, and, on closer inspection, the fact that a great many limestones going back some 3.5 billion years show traces of biomineralisation that helped form the limey sediments. In the second case, the calcium carbonate in most Precambrian limestones was secreted by photosynthetic blue-green bacteria in minutely thin layers, probably in the form of a slimy film excreted to avoid calcium toxicity. Palaeontologists have long suspected that the earliest skeletal materials formed by animals evolved from the need to excrete biomineralised films, turning a metabolic necessity into functional and integral parts of their body plans: arms and armour. Yet limestones are not rare signs of the presence of a dissolved calcium threat, so why the sudden adoption of waste products in this way?

A fairly old hypothesis is that calcium in seawater must have risen above a threshold that posed a toxic threat to all living things, so that excretion had to increase to maintain the balance, perhaps matched with increasing sizes of animals in the late Precambrian. Only recently has support been found for this suggested evolutionary trigger, initially from analysis of brines trapped in crystalline materials within sediments, such as salt (NaCl). But the very presence of such halite in a sediment is a universally accepted sign of evaporation increasing ionic concentrations in isolated seawater lagoons, whereas a general increase in marine calcium would be needed to present sufficient chemical stress that the whole of animal evolution would require a step-change for survival. It turns out that support for the hypothesis stems from two isotopic systems most usually associated with dating the formation and weathering of continental crust: those of strontium and neodymium. The global record of the ratios 87Sr/86Sr and 143Nd/144Nd shows unusually large changes in the run-up to the Cambrian Period, the first rising to the highest level recorded in geological history and the second reaching a historic nadir during the Cambrian. This anti-correlation signifies the greatest chemical weathering of older continental crust in the history of the Earth (Peters, S. & Gaines, R.R. 2012. Formation of the ‘Great Unconformity’ as a trigger for the Cambrian explosion. Nature, v. 484, p. 363-366). Not only would this have poured dissolved ions, including those of calcium, into the oceans and raised their concentrations in seawater, but vast areas of the continents would have been eroded to form huge coastal plains, ripe for marine inundation. The last is exactly what the near-universal unconformity at the base of the Cambrian signifies.
Presaging this long drawn-out grinding of continents to their gums had been a protracted bout of continental assembly to form the Rodinia supercontinent around 1000 Ma through collision and mountain building. Then Rodinia broke apart, its fragments being driven by plate tectonics to reassemble, along with vast chains of new crust formed in volcanic island arcs, through yet more orogenesis: tectonically high-energy times matched by the processes of denudation on land.

A nice example of planetary interconnectedness on the largest scale with the greatest conceivable consequences, for we vertebrates anyhow. This comes as a great comfort to me in the twilight of my career, since in 1999 I stuck out my neck with a similar concept in Stepping Stones only to meet a suitably stony silence.

Large-animal extinction in Australia linked to human hunters

Artist's impression of a giant Australian wombat (Diprotodon) (credit: Wikipedia)

In North America, between 13 and 11.5 ka, around 30 species of large herbivorous mammals became extinct. Much the same occurred in Australia around 45 ka. Both cases roughly coincided with the entry of anatomically modern humans into continents where neither they nor earlier hominins had ever lived. Such extinctions are not apparent in the Pleistocene records of Africa or Eurasia. An obvious implication is that initial human colonisation and a collapse of local megafaunas are somehow connected, perhaps even that highly efficient early hunting bands slaughtered and ate their way through both continents. But other possibilities cannot be ruled out, including coincidences between colonisation and climate or ecosystem change. As many as thirteen different hypotheses await resolution, one of which inevitably makes headline news repeatedly: that both the early Clovis culture and North American megafaunas met their end around the start of the Younger Dryas millennial cold snap because a meteorite exploded above North America (http://earth-pages.co.uk/2009/03/01/comet-slew-large-mammals-of-the-americas/). One problem in assessing the various ideas is accurately dating the actual extinctions, partly because terrestrial environments rarely undergo the continual sedimentation that builds up easily interpreted stratigraphic sequences. Another is that it is not easy to prove, say, that all giant kangaroos died in a short period of time, because of the poor record of preservation of skeletons on land. A cautionary tale concerns the demise of the woolly mammoth, which roamed the frigid deserts of northern Eurasia and definitely was hunted by both modern humans and Neanderthals: it was eventually discovered that herds still survived on Wrangel Island until the second millennium BC. There is a need for a proxy that charts indirectly the fate of megafaunas, plus accurate estimates of the timing of human colonisation.
In North America there is a candidate for the first criterion: traces of a fungus (Sporormiella – see Fungal clue to fate of North American megafauna in EPN of January 2010) that lives exclusively in the dung of large herbivores and must pass through their gut to complete its life cycle. Its spores get everywhere, being wind-dispersed, and in NE US lake cores their abundance fell abruptly at about 13.7 ka.

Aboriginal Rock Art, Kakadu National Park, Australia (Photo credit: Wikipedia)

The same genus of fungus breaks down dung in Australia. Measuring spore content in sediment on the floor of a Queensland lake shows the same abrupt decline in abundance between 43 and 39 ka before present (Rule, S. et al. 2012. The aftermath of a megafaunal extinction: ecosystem transformation in Pleistocene Australia. Science, v. 335, p. 1483-1486). Moreover, the fungal collapse is accompanied by a marked increase in fine-grained charcoal – a sign of widespread fires – and is followed by a steady increase in pollen of scrub vegetation at the expense of that of tropical rain forest trees. The shifts do not correlate with any Southern Hemisphere climatic proxy for cooling and drying that might have caused ecosystem collapse. That still does not mark out newly arrived humans as the culprits, as the early archaeological record of Australia, as in North America, is sparse and only estimated to have started at around 45 ka. Yet this is quite strong circumstantial evidence. The 20 or more animals – marsupials, birds and reptiles – with a mass of more than 40 kg that formerly inhabited the continent would probably have been ‘naive’ as regards newly arrived, organised, well-armed and clever predators, and so ‘easy prey’, as would the megafaunas of North America and, much later, New Zealand. Incidentally, the faunas of both Africa and Eurasia are extremely wary of humans, possibly as a result of a far longer period of encounters with human hunter-gatherers. In Australia’s case, the use of deliberate fire clearing to improve visibility of game may have had a major role, although it is equally likely that the demise of large herbivores would have left large amounts of leaf litter and dry grasses to combust naturally. Yet the Earth as a whole around 40 ka was slowly cooling and drying towards the last glacial maximum around 20 ka, so human influence may merely have pushed the megafauna towards extinction, such is the fragility of Australia’s ecosystems.

A cuddly tyrannosaur

Feathered dinosaur Deinonychus (Photo credit: Aaron Gustafson)

Feathered and fluffy dinosaurs in the families that may have led to birds have become almost commonplace, thanks to wonderful preservation in some Chinese Mesozoic sedimentary rocks (see http://earth-pages.co.uk/2003/03/01/flying-feathers/) and what has become a cottage industry for local people, under professional direction. Most have been small theropods in the Coelurosauria taxonomic branch that span the Jurassic and Cretaceous Periods. The famous Lower Cretaceous Liaoning lagerstätte in NE China recently yielded something truly awesome: three well-preserved specimens of a feathered dinosaur almost as large as the giant tyrannosaurs of the Late Cretaceous (i.e. > 1 tonne in life) (Xu, X. et al. 2012. A gigantic feathered dinosaur from the Lower Cretaceous of China. Nature, v. 484, p. 92-95). In fact Yutyrannus huali (‘beautiful feathered tyrant’) is a member of the same subgroup as the Upper Cretaceous T. rex and was clearly a top predator in its day. Equally fortuitous is that of the three specimens one had a living body weight of about 1.4 t, the other two being between 500 and 600 kg. Various differences between the largest and the two smaller individuals suggest that the find represents two generations, the largest perhaps 8 years older than the two smaller ones. All three preserve densely packed filaments suggesting that they were fluffy rather than truly feathered. So why the difference from the probably scaly tyrannosaurs of about 50 Ma later?

Around 125 Ma global climate was considerably cooler than the Late Cretaceous greenhouse world, Liaoning probably having mean annual air temperatures around 10°C compared with 18°C late in the Period. Yutyrannus huali and some of its contemporary theropods probably evolved high-tog insulation to ensure all-season sprightliness. It is possible that a display function was involved too, as seems to have been the case for other dinosaurs.

Possible snags and boons for CO2 disposal

Asbestos mine tailings at Thetford in Quebec, Canada. (Photo credit: Wikipedia)

Not many people would like to visit a waste heap at an asbestos mine. That is not because waste heaps are generally boring but because all forms of asbestos are carcinogens when inhaled. Encountering pits in the tailings that emit puffs of warm air would cause health and safety alarm bells to ring. Yet that is exactly what has attracted researchers to the huge asbestos mining complex at Thetford in Quebec, Canada: the air leaving the vents can be extremely depleted in carbon dioxide (Pronost, J. and 10 others 2012. CO2-depleted warm air venting from chrysotile milling waste (Thetford Mines, Canada): Evidence for in-situ carbon capture and storage. Geology, v. 40, p. 275-278). More precisely, the depletion – down to less than 10 parts per million (ppm) compared with normal atmospheric levels of 385 ppm – occurs in winter, when the puffing pits emit air far warmer than the frigid temperatures of a Quebec winter. The chrysotile must be reacting with groundwater and CO2, and is therefore a potential means of using near-surface natural materials for carbon capture and storage (CCS). The end product is an innocuous hydrated carbonate – Mg5(OH)2(CO3)4·4H2O – plus dissolved silica. Quite a find, it might seem, as the reaction is exothermic too: CCS plus geothermal energy plus safe decomposition of a major environmental hazard. In fact any magnesium-rich silicate is likely to undergo the same carbonation reaction, especially if ground up to increase the net surface area exposed to moist air.

Scheme for carbon sequestration and storage at a coal-fired power plant. Rendering by LeJean Hardin and Jamie Payne. Source: http://www.ornl.gov/info/ornlreview/v33_2_00/research.htm

The parent asbestos rock at Thetford is a metamorphic derivative of mantle ultramafic rocks in an ophiolite, and the asbestos insulation business, in both extremely hazardous blue (crocidolite) and less dangerous white (chrysotile) asbestos, has been hugely profitable since the 19th century. Consequently, wherever there are altered ophiolites, generally in collision-zone orogenic belts, asbestos has been exposed either naturally or through mining and processing. There are many related cancer ‘hot spots’ in populous mining areas of Canada, India, the Alps and southern Africa, and in dry climates even natural exposures pose considerable risk. Could these blighted areas take on a new role in lessening the chance of global warming? About 30 billion tonnes of CO2 are emitted by burning fossil fuels each year. To keep pace, at the current atmospheric concentration of CO2, some 75 trillion tonnes of air would have to react annually with about 100 billion tonnes of magnesian silicate, making this form of CCS the largest industry on the planet (http://www.newscientist.com/article/mg21428593.800-stripping-co2-from-air-requires-largest-industry-ever.html).
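Those headline tonnages can be sanity-checked with a back-of-envelope calculation. The sketch below uses my own simplifying assumptions, not figures from the cited sources: ppm is treated as a mass fraction, and all the magnesium in chrysotile, Mg3Si2O5(OH)4, is assumed to end up in hydromagnesite, in which five Mg atoms lock up four CO2 molecules:

```python
# Back-of-envelope check on the air and silicate tonnages quoted above.
# Assumptions (mine, for illustration): ppm treated as a mass fraction;
# complete carbonation of chrysotile Mg to hydromagnesite (5 Mg : 4 CO2).

CO2_EMITTED_GT = 30      # Gt of CO2 emitted per year (figure in the text)
CO2_PPM = 390            # approximate atmospheric CO2 level

# Mass of air that must be processed to contain 30 Gt of CO2:
air_gt = CO2_EMITTED_GT / (CO2_PPM * 1e-6)
print(f"air processed: {air_gt / 1000:.0f} trillion tonnes")   # ~77, vs ~75 quoted

# Chrysotile required, from approximate molar masses (g/mol):
CHRYSOTILE = 3 * 24.3 + 2 * 28.1 + 9 * 16.0 + 4 * 1.0   # Mg3Si2O5(OH)4, ~277
CO2_MOLAR = 44.0
co2_per_t_chrysotile = (3 * 0.8 * CO2_MOLAR) / CHRYSOTILE  # 0.8 CO2 per Mg
print(f"silicate needed: {CO2_EMITTED_GT / co2_per_t_chrysotile:.0f} Gt")
```

The rough answers, about 77 trillion tonnes of air and 80 Gt of silicate a year, land close to the 75 trillion and ~100 billion tonne figures quoted, which is reassuring given how crude the assumptions are.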

Another factor tempering somewhat forced optimism for CCS as a way of having our fossil-fuel cake and eating it is that direct injection of greenhouse gases into deep storage may have an unforeseen downside. Deep drilling and injection of fluids may trigger earthquakes. The alarm raised by small yet disturbing seismicity accompanying sites for shale-gas development by ‘fracking’ (http://earth-pages.co.uk/2011/11/04/fracking-check-list/ and http://earth-pages.co.uk/2011/10/14/britain-to-be-comprehensively-fracked/) has died down to some extent following detailed analysis of small earthquakes around drilling sites. It turns out that they are triggered not by the drilling itself but by the subsurface disposal of the large amounts of fluids that have to be passed through the gas-prone shales to make the tight rock permeable (Kerr, R.A. 2012. Learning how to NOT make earthquakes. Science, v. 335, p. 1436-1437). Safe subsurface disposal requires injection wells penetrating 1 to 3 km below the surface, often below the cover of sedimentary strata and into crystalline basement. Such hard rocks store elastic strain induced by burial and tectonics, and release it when lubricated by fluids, especially if they contain dormant faults. Once-impermeable rock can thus be hydrofractured in the same manner as ‘fracked’ gas-prone shales, and old, often unsuspected faults reactivate: a catastrophic prospect for injected CO2. In sedimentary sequences, drilling CCS wells into porous rocks capped by impermeable ones – the scenario for ‘safe’ gas storage – could also induce ‘fracking’ of the sealing rocks and thereby cause leakage (see also http://www.newscientist.com/article/dn21633-fracking-could-foil-carbon-capture-plans.html).

Feet of the ancients

Cast of footprints, probably of Au. afarensis, from the famous trackway of Laetoli in Tanzania (Photo credit: GIRLintheCAFE)

Much of what palaeoanthropologists have surmised about the evolution of humans and their hominin forebears has come from fossils of their heads. Crania, jaws and teeth can reveal a lot about human ancestors and related species, and inevitably smart modern humans would dearly like to know how brainy and clever they were and when possible intellectual changes, such as the acquisition of language, might have taken place. But only the rest of the body gives us clues about what they did and potentially might have done. If, like Darwin, and following his lead Frederick Engels (http://www.marxists.org/archive/marx/works/1876/part-played-labour/index.htm), we believe that the single most important development was adopting an upright gait, thereby freeing the hands to manipulate the world, then fossil hands and feet are of very high importance. Yet they are among the most fragile appendages, consisting of a great many separate bones, each small enough to be transported by flowing water once soft tissues decay and a corpse falls apart. And they are easily bitten off by scavengers. Heads are a lot bigger, heavier and more robust, and being round and smooth are quite difficult for, say, a hyena or porcupine to gnaw. Moreover, disaggregated hominin foot and hand bones are not easy to recognise in fossiliferous sediments, especially if they have been scattered far and wide. The big prize being heads, jaws and teeth, professional hominin hunters become expert at spotting those, but not necessarily the other 80% of skeletons.

Ardi (Ardipithecus ramidus)
Artist's reconstruction of female Ardipithecus ramidus (Photo credit: Mike Licht, NotionsCapital.com)

So the discovery of hominin hands or feet is a rare cause for celebration. A new partial foot has turned up in the hominin ‘bran-tub’ that is the Afar depression of NE Ethiopia (Haile-Selassie, Y. et al. 2012. A new hominin foot from Ethiopia shows multiple Pliocene bipedal adaptations. Nature, v. 483, p. 565-569) and has caused quite a stir. It is significantly different from the few other feet known from the hominin record. Moreover, it adds a sixth design to those already known, leaving aside those of chimps, taken as a likely proxy for the foot of our shared common ancestor: Homo sapiens, Neanderthals and H. erectus have feet that are much the same. While easily distinguished from the feet of Homo species, those of australopithecines are sufficiently similar in basic morphology to suggest that Au. africanus and Au. sediba both walked the savannas as upright as we do. But one of the earlier hominins, Ardipithecus ramidus, also from Afar but dated at more than 4 Ma, has provided an almost complete foot whose geometry, including a splayed-out, short big toe capable of grasping, almost certainly indicates that the creature was as much at home in trees as on the ground. Ardipithecus walked upright, but probably could not run, as its gait placed the side of the foot on the ground, much like a chimpanzee’s, instead of proceeding heel-to-toe as we do (Lieberman, D.E. 2012. Those feet in ancient times. Nature, v. 483, p. 550-551). The new find seems similar, although better adapted for upright walking. Yet no other body parts have been found, so it has not been assigned to a species, though it almost certainly represents a new one. The excitement concerns its age: at 3.4 Ma it falls within the time range of Australopithecus afarensis, a family of which left the famous trackway at Laetoli in Tanzania, whose footprints strongly suggest full adaptation to a human-like gait: walking, running and abandonment of a partially habitual life in the trees.

It seems therefore that the multiplicity of co-existing hominins familiar from 2 million years ago to very recently extended much further back in their evolutionary history. That raises several possibilities, among them the repeated evolution of bipedality, hinted at by some similarities between the newly found foot and those of modern gorillas. Another implication is that simply being able to walk upright did not lead quickly to tool-making, because the earliest stone tools capable of cutting through meat, skin and sinew did not arise until 2.6 Ma. Like fossils of feet, those of hominin hands are extremely rare. The first crucial evidence of a hand with the potential to manipulate objects delicately and with purpose comes at around 2 Ma, with the astonishingly well-preserved hand of a young Au. sediba unearthed in South Africa (http://earth-pages.co.uk/2011/10/12/another-candidate-for-earliest-direct-human-ancestor/). Frustratingly, the 2.6 Ma tools are not associated with fossil hominins, and the Au. sediba skeletons had no tools.

Charting the growth of continental crust

Archaean gneisses from West Greenland (Photo credit: Wikipedia)

When continents first appeared; the pace at which they grew; the tectonic and magmatic processes responsible for continental crust, and whether or not crustal material is consumed by the mantle to any great extent have been tough issues for geologists and geochemists to ponder over the last four decades. Clearly, continental material was rare if not absent in the earliest days of the solid Earth, otherwise Hadean crust should have been found by now. Despite hints, from >4 billion-year-old zircon crystals recovered from much younger sediments, that some differentiated, high-silica rocks once existed, the oldest tangible crust – the Acasta Gneiss of northern Canada – just breaks the 4 Ga barrier: half a billion years short of the known age of the Earth (http://earth-pages.co.uk/2008/11/01/at-last-4-0-ga-barrier-broken/). Radiometric ages for crustal rocks steadily accumulated following the astonishing discovery, in the early 1970s by Stephen Moorbath and colleagues at Oxford University and the Geological Survey of Greenland, of a 3.8 billion-year age for gneisses from West Greenland. For a while it seemed as if there had been great pulses of new crust formation, such as one between 2.8 and 2.5 Ga (the Neoarchaean), separated by quieter episodes. Yet distinguishing genuinely new material from the mantle from older crust reworked and remelted by later thermal and tectonic events required – and still does – lengthy and expensive radiometric analysis of rock samples with different original complements of radioactive isotopes.

One approach to dating has been to separate tiny grains of zircon from igneous and metamorphic rocks and date them by the U-Pb method as a route to the age at which the rock formed, but that too was slow and costly. Yet zircons, being among the most refractory of Earth materials, end up in younger sedimentary rocks after their parent rocks have been weathered and eroded. It was an investigation of what earlier history a sediment’s zircons might yield that led to the discovery of grains almost as old as the Earth itself (http://earth-pages.co.uk/2011/12/21/mistaken-conclusions-from-earths-oldest-materials/ http://earth-pages.co.uk/2005/05/01/zircon-and-the-quest-for-life%E2%80%99s-origin/). That approach is beginning to pay dividends in resolving crustal history as a whole. Almost 7000 detrital zircon grains separated from sediments have been precisely dated using lead and hafnium isotopes. The age distribution alone suggests that the bulk of continental crust formed in the Precambrian, between 3 and 1 Ga ago, at a faster rate than during the Phanerozoic. However, that assumes a zircon’s radiometric age signifies the time of separation from the mantle of the magma from which the grain crystallised. Yet other dating methods have shown that zircon-bearing magmas also form when old crust is remelted, so it is important to find a means of distinguishing zircons from entirely new blocks of crust from those that result from crustal reworking. It turns out that zircons from mantle-derived crust have different oxygen isotope compositions from those that crystallised from remelted crust.
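The principle behind a single-grain U-Pb model age can be sketched very simply: measure the accumulated daughter-to-parent ratio and invert the decay equation. The sketch below uses the standard ²³⁸U decay constant and assumes no initial lead and a closed system since crystallisation; real zircon geochronology combines both uranium decay schemes on a concordia diagram, which this toy function ignores.

```python
import math

# Decay constant of 238U in per year (standard value used in geochronology)
LAMBDA_238U = 1.55125e-10

def u_pb_model_age(pb206_u238: float) -> float:
    """Model age in years from a measured radiogenic 206Pb/238U atomic
    ratio, assuming no initial Pb and closed-system behaviour:
        t = ln(1 + 206Pb/238U) / lambda
    """
    return math.log(1.0 + pb206_u238) / LAMBDA_238U

# A grain with 206Pb/238U = 1 has lived exactly one 238U half-life,
# about 4.47 billion years - nearly the age of the Earth itself.
age = u_pb_model_age(1.0)
```

A ratio of 1.0 returns roughly 4.47 Ga, the half-life of ²³⁸U, which is why zircons with Pb/U approaching unity caused such excitement: they are nearly as old as the planet.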

An example of ages of detrital zircons from sediments, in this case from five Russian rivers (credit: Wikipedia)

Bruno Dhuime and colleagues from St Andrews and Bristol universities in the UK measured hafnium model ages and δ18O values in a sample of almost 1400 detrital zircons collected across the world from sediments of different ages (Dhuime, B. et al. 2012. A change in the geodynamics of continental growth 3 billion years ago. Science, v. 335, p. 1334-1336). Plotting δ18O against Hf model age reveals two things. First, there are more zircons from reworked crust than from mantle-derived materials. Second, plotting the proportion of new-crust ages to those of reworked crust for 100 Ma intervals through geological time reveals dramatic changes in the relative amount of ‘mantle-new’ crust being produced. Before 3 Ga about three quarters of all continental crust emerged directly from the mantle. Instead of the period from 3 to 1 Ga being one of massive growth in the volume of the crust, the production rate of new crust apparently fell to about a fifth of all crust in each 100 Ma time span by around 2 Ga and then rose to reach almost 100% in the Mesozoic and Cenozoic. This suggests that the late Archaean and most of the Proterozoic were characterised by repeated reworking of earlier crust, perhaps associated with the repeated assembly of supercontinents by collision orogeny followed by tectonic break-up and continental drift.
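The binning step behind that second plot – counting, per 100 Ma interval of model age, the fraction of grains whose oxygen isotopes flag them as mantle-derived – can be sketched as follows. The grain list here is made up purely for illustration; it is not the Dhuime et al. dataset.

```python
from collections import defaultdict

# Illustrative (invented) detrital zircon data: (Hf model age in Ma,
# True when the grain's delta-18O is mantle-like, i.e. 'new crust').
zircons = [(3350, True), (3310, True), (3240, False),
           (2480, False), (2450, True), (2410, False),
           (1150, False), (1120, False), (240, True), (210, True)]

def new_crust_fraction(grains, bin_ma=100):
    """Proportion of mantle-derived ('new crust') grains within each
    bin_ma-wide interval of model age, keyed by the bin's lower edge."""
    counts = defaultdict(lambda: [0, 0])      # bin -> [new, total]
    for age, is_new in grains:
        b = int(age // bin_ma) * bin_ma       # lower edge of the bin
        counts[b][1] += 1
        if is_new:
            counts[b][0] += 1
    return {b: new / total for b, (new, total) in sorted(counts.items())}

fractions = new_crust_fraction(zircons)
```

With the toy data above, the 3300-3400 Ma bin comes out entirely ‘mantle-new’ while the 2400-2500 Ma bin is dominated by reworked grains – the kind of contrast the real dataset shows either side of 3 Ga.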

Dhuime and colleagues then used the record of varying new-crust proportions to ‘correct’ the much larger database of detrital zircon ages. What emerges is a well-defined pattern in the rate of crustal growth through time. In the Hadean and early Archaean the net growth of the continents was 3.0 km3 per year, whereas throughout later time it suddenly fell to, and remained at, 0.8 km3 per year. Their explanation is that the Earth only came to be dominated by plate-tectonic processes, driven mainly by slab-pull at subduction zones, after 3 Ga. Subduction not only produces mantle-derived magmas but inevitably allows continents to drift and collide, thereby leading to massive deformation and thermal reworking of older crust in orogenic belts and an apparent peak in zircon ages. The greater rate of new crust generation before 3 Ga may therefore have been due to tectonic processes other than the now familiar dominance of subduction. Yet, since there is convincing evidence for subduction in a few ancient crustal blocks, such as West Greenland and around Hudson Bay in NE Canada, plate tectonics must have existed but was perhaps overwhelmed by processes more directly linked to mantle plumes.
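As a rough cross-check (my arithmetic, not the paper's), the two quoted rates can be integrated over their respective time spans and compared with the present-day volume of continental crust, commonly estimated at around 7 × 10^9 km3 (an assumed figure, from typical values of ~2 × 10^8 km2 of continental area at ~35 km average thickness):

```python
# Net crustal growth rates from Dhuime et al. (km3 per year)
EARLY_RATE = 3.0            # Hadean and early Archaean, ~4.5 to 3.0 Ga
LATE_RATE = 0.8             # 3.0 Ga to the present

early_volume = EARLY_RATE * 1.5e9   # over ~1.5 billion years
late_volume = LATE_RATE * 3.0e9     # over ~3.0 billion years
total_volume = early_volume + late_volume

# Fraction of all continental crust generated before 3 Ga
early_fraction = early_volume / total_volume
```

The totals come to about 4.5 and 2.4 billion km3, or roughly 6.9 × 10^9 km3 in all, with close to two thirds generated before 3 Ga. That is broadly consistent with the assumed modern inventory, though the simple sum ignores any crust recycled back into the mantle.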


Two smoking barrels on the Moon

Elevation map of the South Pole-Aitken basin on the Moon, from the NASA/SDIO Clementine mission. Magenta and blue show the lowest elevations, rising through a rainbow spectrum to red, the highest

The South Pole and far side of the Moon host, at 2500 km across and 13 km deep, the largest impact structure in the Solar System: the South Pole-Aitken (SPA) basin. Partly camouflaged by many later craters up to several hundred kilometres across, typical of the lunar far side and the lunar highlands in general, the SPA basin formed early in the Moon’s cratering history, and is unlike the basalt-filled mare basins of the near side. The light colour of the lunar highlands into which the SPA basin was excavated signifies that they are dominated by almost pure feldspar in the form of anorthosite rock. These anorthosites are prime evidence for the former melting of much if not all of the Moon at the time of its formation: low-density feldspar with a very high melting point could only have accumulated to the purity of anorthosite if early-formed crystals floated to the top of a magma ocean.

Total magnetic field strength at the surface of the Moon from the NASA Lunar Prospector mission

The other feature of feldspars is that they are among the least magnetic of minerals, so it came as a surprise that the northern rim of the SPA basin is studded with positive magnetic anomalies (Wieczorek, M.A. et al. 2012. An impactor origin for lunar magnetic anomalies. Science, v. 335, p. 1212-1215). Lunar samples returned by the Apollo Programme are consistently lacking in all but the weakest remanent magnetism, suggesting that the Moon either never had a magnetic field or, if it did, that the field was extremely weak. Whatever the case, the anomalies are small in extent but of high amplitude, reminiscent of a target hit by a shotgun blast. Similar anomalies are scattered across the near side.

The SPA basin is elliptical, suggesting that the projectile responsible for it struck at an oblique angle. The far-side magnetic anomalies cluster exactly where impact modelling would place debris displaced by the impact of a northward-travelling body. The interpretation arrived at by Mark Wieczorek of the Parisian Institut de Physique du Globe and colleagues from MIT and Harvard University in the US is that the anomalies mark landing sites of large fragments of an easily magnetised, iron-rich asteroid that excavated the basin. Moreover, the same impact might explain magnetic anomalies much further from the basin, on the lunar near side. The remaining mystery is how fragments of the impactor came to be magnetised. The impact would have heated them well above the Curie point, at which even the most magnetically susceptible materials lose their magnetisation. The most likely possibility is that the fragments attained their magnetised state at a time when the Moon did have a core-generated magnetic field, albeit a weak one.

Denisovans scooped?

In late 2010 it emerged from genomic studies of a finger bone from Denisova Cave in eastern Siberia that a probably archaic human group had shared genes with ancestors of some modern humans who colonised West Pacific islands around 45 Ka ago, well before the last glacial maximum. Melanesians, including people living in Papua New Guinea, have DNA that contains on average around 6% contributed by fertile interbreeding with Denisovans. Comparative studies of Denisovan and Neanderthal mitochondrial DNA suggest that the two groups split as long as a million years ago. Now it seems possible that much more complete fossils of Denisovans may have been discovered in China (Curnoe, D. and 16 others 2012. Human remains from the Pleistocene-Holocene transition of southwest China suggest a complex evolutionary history for East Asians. PLoS ONE, http://www.plosone.org/article/info:doi/10.1371/journal.pone.0031918).

Skull from Red Deer Cave in Guangxi Province, southern China (Photo credit: Darren Curnoe)

A block of sediment from Longlin Cave in Guangxi Province in southern China, collected more than 30 years ago, has yielded skull fragments whose reconstruction reveals a most unusual individual, very different from anatomically modern humans, Neanderthals and H. erectus. It had a wide, flat face with highly prominent cheek bones, strong brow ridges and a diminutive chin. Remains of three other individuals found by recent excavations at Maludong (Red Deer) Cave, 300 km to the south of Longlin, share similar characteristics. Yet there are similarities to modern humans: for instance, CT scans show that the brain likely had a height and frontal lobes similar to ours, but different from those of Neanderthals.

These are not truly ancient fossils: radiocarbon and uranium-series dating give an age range from 14.3 to 11.5 ka, around the time of the Younger Dryas cold episode that preceded the Holocene. These individuals lived when East Asia had long been home to fully modern humans.

The finds perhaps open a major new focus for human evolution, directed towards less well-studied older fossils from elsewhere in East Asia, including those referred to by Jonathan Kingdon as ‘Mapas’ from both southern and northern China. Certainly they will boost palaeoanthropological research within China.