Excitement over early animals dampened

Volvox cyst. Image via Wikipedia

The Neoproterozoic lagerstätte of the Doushantuo Formation in southern China was until recently thought to be an astonishing source of information about Earth’s earliest animals (see Ancestral animal? in EPN August 2004), preceding the appearance of those with hard parts at the start of the Phanerozoic. It contains well-preserved fossils that resemble embryos, algae, acritarchs and small bilaterians. Dated at between 580 and 600 Ma (see Age range of early fossil treasure trove in EPN February 2005), the Doushantuo directly overlies cap carbonates that record the emergence of Earth’s climate from a Snowball epoch, itself represented by a tillite beneath the carbonate sequence. A detailed examination of the putative animal embryos using synchrotron X-ray tomography does show clear signs of cell doubling, or palintomy (Huldtgren, T. et al. 2011. Fossilized nuclei and germination structures identify Ediacaran ‘animal embryos’ as encysting protists. Science, v. 334, p. 1696-1699), but also internal features most likely to be nuclei, which have no counterparts in animal embryos. The organisms that the fossils most resemble are indeed eukaryotes, but of a non-animal kind within the group known as the Holozoa. Indeed, there are striking resemblances to eukaryotes still more distant from animals, such as the modern alga Volvox (Butterfield, N.J. 2011. Terminal developments in Ediacaran embryology. Science, v. 334, p. 1655-1656), whose lineage diverged further back in time than the separation of metazoan animals from other holozoans.

Mistaken conclusions from Earth’s oldest materials

The oldest materials on the planet are tiny zircon grains that were washed into a conglomerate in Western Australia between about 2650 and 3050 Ma ago. It was not the fact that the grains are zircons, among the most durable minerals known, that caused a stir, but the range of ages they revealed when routinely analysed. U-Pb dating of detrital zircons is a well-tested means of establishing the provenance of sedimentary materials, as an indicator of the orogenic and igneous events that formed the crust from which they were eroded. In the original study of the Jack Hills zircons, some gave ages that might reasonably have been expected from late sediments in an Archaean craton: around 3.5 billion years is about the maximum age for orogenic events there. What astonished geoscientists was that a proportion of the grains gave ages of more than 4 billion years, some as old as 4.4 Ga: here was a window on the missing first half-billion years of Earth history, the Hadean.
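The principle behind those dates is simple: 238U decays to 206Pb at a known rate, so in a grain that has stayed a closed system the ratio of radiogenic lead to remaining uranium fixes its age. A minimal sketch of the 206Pb/238U model-age calculation (the decay constant is the standard value; the example ratio is purely illustrative):

```python
import math

# Decay constant of 238U, per year (the standard Jaffey et al. value)
LAMBDA_238 = 1.55125e-10

def pb206_u238_age(ratio_206_238):
    """206Pb*/238U model age in years from the radiogenic-Pb/U atomic
    ratio, assuming a closed system: t = ln(1 + ratio) / lambda."""
    return math.log(1.0 + ratio_206_238) / LAMBDA_238

# A grain with 206Pb*/238U of 1.0 is one 238U half-life old, ~4.47 Ga
age = pb206_u238_age(1.0)
print(round(age / 1e9, 2), "Ga")  # → 4.47 Ga
```

It is exactly this closed-system assumption, as the next paragraphs show, that the inclusion study calls into question.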

Subsequent work on yet more zircons confirmed the original age span, but other kinds of analysis led to a variety of claims: that continental crust was around in abundance within 100 Ma of Earth having formed; that geothermal heat flow was not especially high; that liquid water was available for geological processes, including the origin of life; that plate tectonics may have started early… The topic has cropped up several times in EPN since the issue of 1 January 2001. Quite a lot of the claims emerged from studies of other minerals enclosed by the ancient zircons, such as quartz and micas, and now they have been checked again by geochemists from Western Australia (Rasmussen, B. et al. 2011. Metamorphic replacement of mineral inclusions in detrital zircons from Jack Hills, Australia: Implications for the Hadean Earth. Geology, v. 39, p. 1143-1146). It turns out that the inclusions formed at temperatures well below those of magmas, between 350 and 490°C: more like those of metamorphism. Indeed, the uranium-bearing rare-earth phosphate minerals xenotime and monazite, also locked in the zircons, turn out to be metamorphic in origin too (both can also form magmatically) and date to between 2700 and 800 Ma.

While the Hadean zircon dates remain robust, a closer look at their inclusions shows that the grains did not remain geochemically closed systems thereafter. Yet it was on the assumption of zircons being geological ‘time capsules’ that much of the excitement rested. Even using the mere presence of 4.4 Ga zircons – they are most common in granites but do occur in mafic and intermediate igneous rocks – to suggest early ‘sialic’ continental crust is suspect. Despite having some tiny bits from Earth’s early days, it seems we are none the wiser.

Galactic controls

Artist’s impression of the Milky Way viewed along its axis. Image via Wikipedia

Palaeoclimatologists are quite content that an important element in the vagaries of climate is the set of gravitational forces that cyclically perturb Earth’s orbit, its axial tilt and the way its axis of rotation wobbles, in a manner similar to that of a gyroscope. The predictions about this by James Croll in the late 19th century, quantified by Milutin Milankovitch during his incarceration in World War I, triumphed when the predicted periods of change were found in deep-sea sediment records in 1972. Authors of ideas that link Earth-system changes to the progress of the Solar System through the Milky Way galaxy have not enjoyed the same accolades. One of the first to suggest a galactic link was Joe Steiner (Steiner, J. 1967. The sequence of geological events and the dynamics of the Milky Way Galaxy. Journal of the Geological Society of Australia, v. 14, p. 99-132), but his work is rarely credited.
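The 1972 confirmation rested on spectral analysis: the orbital periods (~100, 41 and 23 kyr) appear as peaks in a periodogram of a sediment record. A toy illustration with a synthetic record (the signal, its amplitudes and the band limits are all invented for the demonstration, not real data):

```python
import math

# Synthetic 'sediment record': three orbital cycles sampled every 1 kyr for 800 kyr
N = 800
signal = [math.sin(2 * math.pi * t / 100)          # eccentricity-like, 100 kyr
          + 0.6 * math.sin(2 * math.pi * t / 41)   # obliquity-like, 41 kyr
          + 0.4 * math.sin(2 * math.pi * t / 23)   # precession-like, 23 kyr
          for t in range(N)]

def power_at(period):
    """Periodogram power at one trial period (a single-frequency DFT)."""
    w = 2 * math.pi / period
    c = sum(x * math.cos(w * t) for t, x in enumerate(signal))
    s = sum(x * math.sin(w * t) for t, x in enumerate(signal))
    return (c * c + s * s) / N

# Find the strongest period within each orbital band (band limits are arbitrary)
bands = {'eccentricity': range(80, 131), 'obliquity': range(35, 51),
         'precession': range(18, 31)}
peaks = {name: max(band, key=power_at) for name, band in bands.items()}
print(peaks)  # → {'eccentricity': 100, 'obliquity': 41, 'precession': 23}
```

Real records are noisy and unevenly sampled, but the same logic – look for excess power at the predicted periods – underlies the deep-sea tests.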

There has been an upsurge of interest in the last decade or so. In a recent issue of New Scientist, Stephen Battersby reviews what galactic ‘forcings’ may have accomplished during the 4.5-billion-year history of our world (Battersby, S. 2011. Earth odyssey. New Scientist, v. 212 (3 December issue), p. 42-45). Having formed probably much closer to the galactic centre than its current position, the Solar System has drifted, perhaps even ‘surfed’ gravitationally, outwards to its present ‘suburban’ position in one of the spiral arms. There are regularities to the now-stabilised motion: once every 200 million years the Solar System completes a full orbit, and that orbit wobbles across the hypothetical plane of the galactic disc by as much as 200 light years, moving with and against the Milky Way’s cosmic motion. It has so far proved impossible to detect any sign of the 200 Ma orbital periodicity in events on Earth, and most attention has centred on the wobble.

Steiner suggested that this motion may have crossed different polarities of the galactic magnetic field, perhaps triggering the periodic changes in geomagnetic polarity, but that now seems unlikely. However, his suggestion that glacial epochs, such as those in the Palaeo- and Neoproterozoic, at the end of the Palaeozoic Era and at present, may have resulted from the Solar System’s passage through dust and gas banding in the Milky Way continues to have its attractions (e.g. Pavlov, A.A. et al. 2005. Passing through a giant molecular cloud: “Snowball” glaciations produced by interstellar dust. Geophysical Research Letters, v. 32, L03705). The direction of motion relative to the Milky Way’s cosmic drift governs the exposure to cosmic rays that result from a kind of ‘bow shock’ ahead of the galaxy.

Stellar motion through the Milky Way is semi-independent, so from time to time the Solar System may have passed sufficiently close to regions of dense dust and gas that nurture the formation of super-massive stars. These huge objects evolve quickly to end in supernovae, proximity to which would have exposed life to ‘hard’ X- and γ-rays and would be a trigger for mass extinction, for instance by accompanying cosmic rays in destroying the ozone layer that protects against solar UV radiation.

The dynamism of the Earth and the resulting complexity of its surface processes make it a poor place to look for physical signs of galactic influences. Not so the Moon: for almost 4.5 billion years it has been a passive receptor for virtually anything the cosmos could fling at it, and so geologically inert that its surface layers may well preserve a complete ‘stratigraphic’ record of all kinds of process. Should lunar landings with geological capabilities once more prove economically possible, or politically useful, that hidden history could be read.

Hominin updates

A new approach to 14C dating at the Oxford Radiocarbon Accelerator Unit of the University of Oxford, UK, combined with detailed analysis of human teeth to distinguish fully modern human remains from those of Neanderthals, has pushed back the date and quickened the pace of migration into Europe by the people whose tools define the Aurignacian and Italian Uluzzian technologies. These are the earliest modern-human cultures found in Europe, but some of the tools are similar to those produced by Neanderthals (the Châtelperronian culture), raising the possibility of transfer of technologies between the two groups. So, without confirmation of anatomical affinities from human remains, there would be doubts about using tools of these kinds to signify the presence of fully modern humans at a site. Teeth found decades ago in caves in SW England and southern Italy prove, on detailed comparative study, to be from ‘moderns’ (Higham, T. and 12 others 2011. The earliest evidence for anatomically modern humans in northwestern Europe. Nature, v. 479, p. 521-524; Benazzi, S. and 13 others 2011. Early dispersal of modern humans in Europe and implications for Neanderthal behaviour. Nature, v. 479, p. 525-528). The new carbon-isotope method efficiently eliminates chemical contamination introduced after fossilisation, and so tends to increase the measured age of samples. The two studies produced exciting results: dates of occupation of 42-43 ka for SW England and 43-45 ka for southern Italy. Together with results from other sites throughout central and southern Europe, the discovery shows that widespread colonisation was accomplished in three to five thousand years by migrants, probably from the Levant, who may have travelled along three routes fanning out from the Bosporus in modern Turkey: along the Danube; along the Adriatic coast; and from southern Greece to the ‘heel’ of Italy.

In early 2011 a group of archaeologists led by Simon Armitage of the University of Birmingham, UK reported stone tools from a cave in the United Arab Emirates for which they derived possible ages of 125, 95 and 40 ka (see Human migration in EPN for January 2011). The older dates were coeval with anatomically modern humans in the Levant, but the tools themselves showed features that could not be matched decisively with those from any other sites, including those in the Levant, though they most resembled collections from East and NE Africa. Armitage and colleagues suggested that the people who occupied the UAE cave had crossed the Red Sea around 130 ka, during a glacial maximum when sea level stood at an unprecedented low. A recent paper adds considerable weight to this idea (Rose, J.I. and 9 others 2011. The Nubian Complex of Dhofar, Oman: an African Middle Stone Age industry in southern Arabia, at http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0028239). Jeffrey Rose, also of the University of Birmingham, and colleagues from Ukraine, the US, the UK, Germany, the Czech Republic and Australia excavated a site in Dhofar, southern Oman, much closer to the Straits of Bab el Mandab than the UAE cave. Chert tools found in the area are of the Levallois type, closely resembling those found widely in the Nile Valley of southern Egypt and northern Sudan, and in the Afar Depression of Ethiopia, in deposits dated at between 128 and 74 ka. The Omani tools yielded an optically stimulated luminescence age of about 106 ka. This nicely confirms that Africans had moved far beyond the confines of their home continent by the last interglacial, with the route to South Asia open to them along the shores of the Persian Gulf and Indian Ocean. However, the route they had taken could equally have been around the head of the Red Sea as across the Bab el Mandab.

Desert varnish: an outdoor canvas

Petroglyphs in desert varnish near Las Vegas, Nevada, USA. Image via Wikipedia

Early occupants of semi-arid areas found a cultural use for one of geology’s greatest annoyances: desert varnish. Annoying, because once developed it leaves an extremely durable, shiny, brownish to black coating over rock surfaces: be they dunite, marble, quartzite, sandstone or granite, desert outcrops all look very much the same. You have to bash them unmercifully to see the true texture and mineralogy, and, except on images of thermally emitted infrared, remote sensing doesn’t help, as the varnish has much the same reflectance whatever the wavelength of radiation. Yet to the former inhabitants of dry lands – and latter-day ‘taggers’ – desert varnish has been irresistible for millennia. Lightly peck away with a sharp pebble – given some ability to depict your thoughts – and you can leave an almost indelible sign that you and your ideas were at that very rock face: a petroglyph, picked out for all time in the manner of chalk on a blackboard. Even more spectacularly, given a view from above of a varnished cobbly plain, it is possible to magnify your tag, or whatever petroglyphs once signified, a hundredfold or more. That happened on the famous Nazca Plain of Peru, and continues to happen in especially dry places in the south-western US, such as around Lake Havasu City in Arizona. Varnish forms only on the exposed face of cobbles, the downward side remaining more or less the original, generally lighter, colour of the rock. Turn over the cobbles in an organised way, with a degree of persistence as well as talent, and you too can make your mark on Google Earth! (Do not pass this on to Banksy – it doesn’t hurt the ecosystem, but will annoy the authorities immensely.)

Ancient art depicting a hummingbird on the Nazca Plain, Peru. Image via Wikipedia

For all this period of artistic endeavour, stretching back in some places to the Palaeolithic, it now seems that desert varnish records how environments have changed as well as the religiosity, humour or downright egotism of dry-land inhabitants (Dickerson, R. 2011. Desert varnish – nature’s smallest sedimentary formation. Geology Today, v. 27 (November-December issue), p. 216-219). As well as reviewing how the varnish forms (see also Desert varnish in EPN May 2008, in Subjects: GIS and Remote Sensing), Dickerson flags up the little-known fact that the minute layers produced as varnish imperceptibly develops record changes in environmental conditions – wet, dry and middling – and, moreover, that they can be dated precisely despite being extremely thin (e.g. Liu, T. & Broecker, W.S. 2008. Rock varnish microlamination dating of late Quaternary geomorphic features in the dry lands of western USA. Geomorphology, v. 93, p. 501-523). Liu and Broecker were able to match variations in the colour of varnish layers with important climatic episodes of the Northern Hemisphere, such as the Younger Dryas and other warming-cooling, dry-wet shifts as far back as the Last Glacial Maximum. Their approach offers a chance of dating petroglyphs, and thereby cultural changes during critical stages in the history of modern human migrations, occupations and abandonments, even when no artefacts or bones remain. That is because, once made, petroglyphs gradually become varnished themselves.

Pan-African review

A terrane boundary close to the Nile in the Sudan, detected by radar from the Space Shuttle: the Keraf Suture. From NASA

Undoubtedly the best exposed, and one of the biggest, of accretionary orogens, the Arabian-Nubian Shield (ANS) is witness to the creation of a supercontinent from the remnants of an earlier one. At about 1 Ga, most of the Earth’s continental material was clumped together in the supercontinent of Rodinia, which existed for a quarter of a billion years. During a period of massive mantle upheaval that left most crust of that age affected by basaltic magmatism, in the form of lava flows and dyke swarms, Rodinia began to break up at 800 Ma, scattering continental fragments. Subduction zones accommodated this continental drift, forming many oceanic and continental-margin volcanic arcs. The ANS is a repository for many of these arcs, which episodically accreted between earlier cratons to the west in Africa and those comprising Somalia and the present Indian subcontinent. The terranes are primarily oceanic in origin and formed in the aftermath of the dismemberment of Rodinia, although a few slivers of older, reworked crust occur in Saudi Arabia and Yemen. Among the various components are ophiolites marking sutures and other major tectonic features of the orogen. The shape of the Shield is unlike that of any other major orogen of later times, for it shrinks from a width estimated at ~2000 km in Arabia in the north to vanish just south of the Equator in southern Kenya. This ‘pinched’ structure has suggested to some that the bulk of the new crust was forced laterally northwards when the African and Indian cratons collided, in the manner of toothpaste from a trodden-on tube.

Today the ANS is a harsh place, parts of it off-limits to geologists either for political reasons or because of the sheer hostility and remoteness of the environment. Yet a picture has emerged, bit by bit, over the last 30 years. So a detailed review of its most extensive and varied part, from 7° to 32°N and 26° to 50°E – in Egypt, Saudi Arabia, eastern Sudan, Eritrea, Yemen and northern Ethiopia – is especially welcome (Johnson, P.R. et al. 2011. Late Cryogenian–Ediacaran history of the Arabian–Nubian Shield: A review of depositional, plutonic, structural, and tectonic events in the closing stages of the northern East African Orogen. Journal of African Earth Sciences, v. 61, p. 167-232). Peter Johnson himself compiled a vast amount of information during his career with the US Geological Survey Mission in Saudi Arabia, and has blended the inevitably diverse ideas of his seven co-authors – though by no means all the ideas in the literature. The result is a readable and well-illustrated account of how the ANS assembled tectonically at a time when a near-global glaciation took place and the first macroscopic animals appeared in the fossil record. Tillites and other glaciogenic rocks of the Marinoan ‘Snowball’ occur from place to place in the ANS, as do banded iron formations, which made a surprise return in the Cryogenian Period after an absence of a billion years or more. Coincidentally, glacial conditions returned to the region twice, in Ordovician and in Carboniferous to Permian times, forming distinctive, tectonically undisturbed sediments in the Phanerozoic cover that unconformably overlies the Neoproterozoic orogen.

Except in a few areas only recently explored, geologists have assiduously dated events in the ANS, showing nicely that all the basement rock formed after 800 Ma and that orogenic events culminated before the start of the Cambrian Period, although one or two unusual granites intruded as late as the Ordovician. The deformation is immense in places, with huge nappes and strike-slip shear zones, and exposure ranges from the lowest metamorphic grade to that at which water and granitic magma were driven from the lower Pan-African crust. The range of exposed crustal levels stems partly from the tectonics, but owes a lot to the 2-3 km of modern topographic relief, unique to NE Africa and Arabia. Yet it is not uncommon to come upon delicate features such as pillowed lavas, conglomerates and finely laminated volcaniclastic tuffs. Following tectonic welding, more brittle deformation opened subsiding basins that contain exclusively sedimentary rocks, both marine and terrestrial in origin, derived from the newly uplifted crust (basins of this type in Eritrea and Ethiopia unfortunately do not figure in the regional maps). Much of the ANS is currently the object of a gold rush, encouraged by a rising world price for the ‘inflation-proof’ comfort blanket provided by the yellow metal. Newcomers to the stampede would be well advised to mug up on the regional picture of occurrences and gold-favourable geology provided in the review, and may be interested by other exploration possibilities: rare-earth metals and other rising stars of the London Metal Exchange, such as tin, are often hosted in the evolved granites that stud the whole region.

Water sources and early migration from Africa

The Arabian Peninsula today. Image via Wikipedia

In March 2011, EPN reported in Human migration a puzzle relating to evidence for modern human occupation of Arabia, on the southern shore of the Persian Gulf, during the last (Eemian) interglacial at 125 and 95 ka. At that time sea level would have been much as it is now, discouraging any attempt to cross the Red Sea via the Straits of Bab el Mandab, a widely suggested short-cut from East Africa to the rest of the world. Around 125 ka modern humans were making a living from coastal resources in Eritrea, leaving abundant stone tools in shoreline deposits at the head of the Gulf of Zula and in the Sodmein Cave on Egypt’s Red Sea coast. They had also reached the famous Qafzeh and Skhul caves of Mount Carmel, in today’s Israel, by around 100 thousand years ago. A route out of Africa through the Levant has not been widely favoured, and the humans of Qafzeh and Skhul have been suggested to have entered a geographic cul-de-sac with no eastward exit, because of the aridity of the Arabian Peninsula. Yet once in the Levant they could have skirted the desert interior by following the east coast of the Red Sea, ‘strandloping’, as Jonathan Kingdon has dubbed coast-wise movement. But continuous access to fresh water would still have been essential.

The shores of the Red Sea preserve many examples of uplifted coral reefs; indeed, the signs of human presence in Eritrea occur in such a terrace. Being extremely porous, reef terraces are potential aquifers, and one sign that they may have sourced freshwater springs is the conversion of the intricate coral skeletons from one form of calcium carbonate to another: original aragonite changes to calcite in the presence of fresh water, complete replacement being estimated to take a thousand years of continual contact. This change allowed Boaz Lazar and Mordechai Stein of the Hebrew University of Jerusalem and the Geological Survey of Israel to check for the presence of freshwater coastal springs in the past (Lazar, B. & Stein, M. 2011. Freshwater on the route of hominins out of Africa revealed by U-Th in Red Sea corals. Geology, v. 39, p. 1067-1070). Their test site was a series of uplifted reefs near Aqaba on the Red Sea coast of Jordan. The authors determined variations in the 230Th/238U ratio in the reefs relative to 234U/238U, and showed open-system addition of 230Th and 234U during the aragonite-to-calcite recrystallisation, which results in an isotopic compositional trend charting the timing of any alteration. Thus the original ages of the reef terraces can be backtracked, revealing at Aqaba successively higher terraces formed recently and at 120, 142 and 190 ka. The oldest of the terraces seems to have been flooded with fresh water at the start of the Eemian interglacial (~140 ka), and may have been a source of springs that would have served the earliest human travellers well. It remains to apply Lazar and Stein’s approach to other reef terraces along the postulated northern exit route of the earliest modern human emigrants from Africa and, more important, to find traces of their passage.
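Behind such U-series work sits the standard closed-system ingrowth relation: 230Th grows towards equilibrium with 238U while the coral's initial excess of 234U decays away. Lazar and Stein's open-system modelling is more elaborate, but a sketch of the basic closed-system age calculation shows the principle (half-lives are standard values; the seawater-like δ234U and the example age are illustrative):

```python
import math

# Decay constants, per year, from the standard half-lives of 230Th and 234U
L230 = math.log(2) / 75_584
L234 = math.log(2) / 245_250

def th230_u238(t, d234_init=145.0):
    """Closed-system (230Th/238U) activity ratio after t years, given an
    initial 234U excess d234_init in per mil (modern seawater is ~145)."""
    return (1 - math.exp(-L230 * t)
            + (d234_init / 1000.0) * L230 / (L230 - L234)
            * (math.exp(-L234 * t) - math.exp(-L230 * t)))

def u_th_age(ratio, d234_init=145.0):
    """Invert the age equation by bisection; the ratio rises monotonically
    over 0-400 kyr, the practical range of U-Th coral dating."""
    lo, hi = 0.0, 400_000.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if th230_u238(mid, d234_init) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip: forward-model a 120 kyr coral, then recover its age
ratio = th230_u238(120_000)
print(round(u_th_age(ratio) / 1000))  # → 120
```

Open-system alteration shifts a sample off this closed-system curve, and it is the direction and size of that shift that let the authors date the freshwater flooding events.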

Added 21 December 2011. The likely route for leaving Africa got a push towards the Bab el Mandab with publication of evidence for a greener southern Arabia at several times in the late Pleistocene (Rosenberg, T.M. and 8 others 2011. Humid periods in southern Arabia: windows of opportunity for modern human dispersal. Geology, v. 39, p. 1115-1118). On the eastern edge of the now hyper-arid Rub al Khali is a series of former lakes with thin sediments. When first discovered they yielded radiocarbon ages for fossil molluscs of around 40 to 20 and 10.5 to 6 ka. However, recent dating by optically stimulated luminescence (OSL) of the dune sands between which the lacustrine muds and silts occur suggests that the lakes were water-filled for lengthy periods before those ages: radiocarbon dates can be reset to younger ages by precipitation of carbonates on older fossils. The OSL results show wet periods around 80, 100 and 125 ka, suggesting that at those times the Intertropical Convergence Zone was pulled northwards, taking seasonal monsoon rains well into the Arabian Peninsula. They tie in nicely with a variety of other records, including the timing of lowstands of the Red Sea. Each humid phase created an episode a few thousand years long that would have been conducive to humans living there and passing through en route to Asia around eastern Arabia, and perhaps to the Levant up the western side of the peninsula. Potential occupancy was shut off by long arid periods, which might have allowed only pulses of migration. Had such episodic diffusion occurred, it might have left a record in human DNA that ongoing and planned population-genetic research may reveal.

South Asian arsenic update

The first signs of chronic arsenic poisoning: skin keratoses. Image by waterdotorg via Flickr

That groundwater in West Bengal, India was polluted with arsenic to such levels that symptoms of poisoning had become endemic was reported by Dipankar Chakraborti in 1983, leading to his being branded a ‘panic monger’ by the Indian authorities. The news broke internationally in 1993 as the now infamous tragedy in neighbouring Bangladesh emerged. Means of mitigating the effects – lesions or keratoses and skin discoloration, and later increased incidence of several forms of cancer – and ideas of how the pollution had occurred had to await proper geochemical analyses of well waters and logging of the mainly alluvial sediments from which water was being withdrawn; another 8 years went by. Reports of arsenicosis began to emerge from other areas of alluvial sediments in SE Asia, revealing by far the worst mass poisoning in history and the likelihood that the lives of millions would be blighted by what Bangladeshis dubbed ‘the Black Rain’, from the resemblance of the characteristic skin lesions to drops of black water.

Thanks principally to the work of water engineer Peter Ravenscroft with other geochemists, the source of arsenic in groundwater was narrowed down to the effect of reducing conditions in grey, carbonaceous sandstones and peats on the mineral goethite, an iron oxy-hydroxide that forms the main colorant in oxidised sediments and whose loose structure normally encourages the mopping-up by surface adsorption of a wide spectrum of dissolved ions, including those of arsenic. Goethite readily breaks down under reducing conditions, and when that happens all the adsorbed material is released into solution. The upper parts of the alluvial and deltaic sediments in the lower reaches of the Ganges and Brahmaputra rivers contain abundant organic remains picked up when vegetation burgeoned during the Holocene, which mixed with goethite-coated sand grains derived from erosion in the Himalayan stretches of the rivers. Purely natural sedimentary and hydrogeological processes created the dreadful plight of villagers. The terrible irony was that before the 1980s there were no signs of arsenicosis, yet mortality, especially of under-fives, was very high due to water-borne pathogens in surface water supplies. Indian and Bangladeshi authorities and UN agencies waged a campaign to sink shallow wells for drinking water rather than relying on river and pond supplies. At first rural people resisted the change since they regarded water from wells as the ‘Devil’s water’, but as infant mortality began to fall, the resistance turned to rapid construction nationwide of wells, both public and private. A few years later came the ‘Black Rain’.

In the attempts to mitigate the arsenicosis plague, filters containing adsorptive materials, including goethite, were installed on pumps. However, the geochemists showed that the deeper wells had consistently low concentrations of arsenic, drawing from sediments that are brown-coloured owing to prevailing oxidising conditions and the presence of goethite. Although arsenic was present in those sediments, it was safely locked in the goethite coatings of sand grains. Steadily, major public supplies were transferred to deep, high-yield wells. Alluvial and deltaic deposits are generally highly permeable, so it was feared that as the deeper wells were pumped, arsenic-rich water from the reduced shallow sediments would replace the safe groundwater. Thankfully, it seems that is not likely to be a problem (Radloff, K.A. and 12 others 2011. Arsenic migration to deep groundwater in Bangladesh influenced by adsorption and water demand. Nature Geoscience, v. 4, p. 793-798). The study injected arsenic-bearing groundwater into a deep aquifer and monitored its arsenic concentration over time. Within a day the concentration of dissolved arsenic fell by 70%, and by 5 days it had fallen below recommended maximum levels for drinking water: a dramatic demonstration of the clean-up power of even minute films of goethite in sediments, for that seems the only explanation for the fall. The US-Bangladeshi team verified this by testing samples of the deeper sediments from drill cuttings. They mixed highly contaminated groundwater with the cuttings, to find that arsenic sorption over about a week was extremely high (~40 mg kg-1).
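The reported decline is consistent with simple first-order removal onto grain coatings. A back-of-envelope extrapolation (the exponential model, the starting concentration and the WHO guideline comparison are illustrative assumptions of this sketch, not the authors' analysis):

```python
import math

# Radloff et al. report dissolved As falling by 70% within a day of injection.
# Treating removal as first-order, C(t) = C0 * exp(-k*t), lets us extrapolate.
C0 = 100.0                # starting As concentration, ug/L (hypothetical)
k = -math.log(1 - 0.70)   # rate constant per day implied by the 70% one-day drop
WHO_LIMIT = 10.0          # WHO guideline value for As in drinking water, ug/L

def conc(t_days):
    """Modelled dissolved arsenic after t_days of sorptive removal."""
    return C0 * math.exp(-k * t_days)

print(round(conc(1)))       # → 30 (70% removed after one day)
print(conc(5) < WHO_LIMIT)  # → True: below the guideline within 5 days
```

Under this crude model the day-5 concentration is well under 1% of the starting value, which sits comfortably with the observation that levels fell below drinking-water limits within five days.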

Water well in Bangladesh. From http://www.flickr.com/photos/waterdotorg/3696304044

Rather than just publishing their reassuring findings, the team input them into hydrogeological models of the Bengal Basin, varying hypothetical pumping rates to assess the changes in deep-groundwater chemistry over time due to downward migration of the highly polluted near-surface waters. Sure enough, the As-rich waters would end up in the deep aquifer, eventually overwhelming the sorptive capacity of its goethite content, and arsenic would once again enter well supplies. However, if deep extraction were limited to drinking water, by restricting pumping for irrigation to intermediate depths, safe limits could theoretically be sustained for a thousand years or more, except in some areas especially prone to downward intrusion of polluted shallow groundwater. (Use of highly contaminated shallow groundwater for irrigation would simply transfer the problem to crops.) Clearly, monitoring is obligatory, but one hopes this important study does resolve the horrifying plight faced by so many people in catchments fed by Himalayan waters.
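The 'thousand years or more' in such models comes largely from sorptive retardation: with a linear sorption isotherm, a dissolved species travels R times slower than the groundwater itself, where R = 1 + (ρb/θ)Kd. A textbook calculation with invented parameter values (none of the numbers below are from the paper):

```python
# Linear-sorption retardation factor: R = 1 + (rho_b / theta) * Kd.
# All values are hypothetical illustrations, not data from Radloff et al.
rho_b = 1.6     # sediment bulk density, kg/L
theta = 0.3     # porosity (dimensionless)
Kd = 4.0        # distribution coefficient for arsenic, L/kg

R = 1 + (rho_b / theta) * Kd   # retardation factor
v_water = 10.0                 # groundwater seepage velocity, m/yr (hypothetical)
v_arsenic = v_water / R        # the sorbing arsenic front crawls R times slower

print(round(R, 1), round(v_arsenic, 2))  # → 22.3 0.45
```

Even these made-up numbers show how a modest distribution coefficient slows an arsenic front from metres to fractions of a metre per year, stretching breakthrough times into centuries.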

Fracking check list

Aftermath of the 1906 mine explosion at Courrières, northern France, the largest mining disaster in Europe, with 1099 fatalities. Image via Wikipedia

Britain is on the cusp of a shale-gas boom (see Britain to be comprehensively fracked? in EPN 14 October 2011) and it is as well to be prepared for some potential consequences. In extensively fracked parts of the US – the states of New York, Pennsylvania, Texas and Colorado – there are reports of water taps emitting roaring flames after dissolved methane in groundwater ignites. This largely reflects the commonplace US practice of household water supply from unprocessed groundwater, which is rare in Britain. But there are other hazards (Mooney, C. 2011. The truth about fracking. Scientific American, v. 305 (November 2011), p. 62-67) that have enraged Americans in affected areas and are just as likely to occur in Britain. In fact, the nature of shale-gas exploitation by horizontal drilling beneath large areas poses larger threats in densely populated areas, as the people of Blackpool have witnessed in the form of small earthquakes that the local shale-gas entrepreneur Cuadrilla admits are side effects of its exploratory operations.

Chris Mooney succinctly explains the processes involved in fracking shale reservoirs: basically, huge volumes of water laced with a cocktail of hazardous chemicals and sand are blasted into shales at high pressure to fracture the rock hydraulically and create pathways for natural gas to leak to the wells. One risk stems from the water that must be recovered and stored in surface ponds for re-use. About 75% returns to the surface, carrying whatever has been dissolved from the shales, which can be extremely hazardous. By definition a shale containing hydrocarbons creates strongly reducing conditions, which in turn can induce several elements to enter solution along with easily dissolved salts; for instance divalent iron (Fe2+) is highly soluble, whereas more oxidised Fe3+ is not, so waters that have passed through gas-rich shales will be iron-rich. But that is by no means the worst possibility: one of the most common iron minerals in sedimentary rocks is goethite (FeOOH), which adsorbs many otherwise soluble elements and compounds. In reducing conditions goethite can break down to release its adsorbed elements, among which arsenic is common. The blazing-faucet hazard results from hydrocarbon gases leaking through imperfectly sealed well casings into shallow groundwater, where the gases can also create reducing conditions and release toxic elements and compounds into otherwise pure groundwater by dissolution of ubiquitous goethite, as in the infamous arsenic crisis of Bangladesh and adjoining West Bengal in India, where natural reducing conditions do the damage.

What is not mentioned in the Scientific American article is the common association of hydrogen sulfide gas with petroleum, produced from abundant sulfate ions in formation water by bacteria that reduce sulfate to sulfide in their metabolism. This ‘sour gas’, as it is known in the oil industry, is a stealthy killer: at high concentrations it loses its rotten-eggs smell, and in the early days of the petroleum industry it killed more oil workers than any other occupational hazard. Visit the spa towns of Harrogate in Yorkshire and Strathpeffer in northern Scotland and sample their waters for examples of what Carboniferous and Devonian gas-rich shales produce quite naturally: noxious stuff of questionable efficacy. The environmental effects of such natural seepage from gas-rich rocks tell a cautionary tale as regards fracking. The highly reducing cocktail of hydrocarbon and sulfide gases in rising, mineral-rich formation water kills the microbial symbionts that are essential to plant root systems for nutrient uptake, and so too do trees die. The onshore Carboniferous Solway Basin of NW England illustrates both points, having many chalybeate springs, as the sulfide- and iron-rich waters are euphemistically known, and also a strange phenomenon in many of the deep valleys cut by glacial meltwaters as land rose following the last glacial maximum. Once trees reach a certain height – and correspondingly deep root systems – they die, littering the valley woodland with large dead-heads. Also, leaves on smaller trees turn to their autumnal colours earlier than on higher ground. Both seem to be due to minor gas seepages from thick shale sequences in the depths of the sedimentary basin. Indeed, both are botanical indicators to the hydrocarbon explorationist.

To recap, a common size for a fracking operation using several horizontal wells driven from a single wellhead is 4 km in diameter, entering gas-rich shales at up to 2 km depth. Each well can generate fractures of a hundred metres or more in the shales and surrounding rocks, as they must for commercial production. In Britain, most of the sites underlain by shales with gas potential are low-lying agricultural or urban land. The producing rock in the Blackpool area is the Middle Carboniferous Bowland Shale, which lies beneath the Coal Measures of what was formerly the Lancashire coalfield, now a patchwork of expanding urban centres. On 23 May 1984 an explosion occurred at Abbeystead, Lancashire, at an installation designed to pump winter flood water between the rivers Lune and Wyre through a tunnel beneath the Lower to Middle Carboniferous Bowland Fells. The Abbeystead disaster coincided with an inaugural demonstration of the pumping station to visitors, of whom 16 were killed and 22 injured. Methane had escaped from Carboniferous shales and built up in the flood-balancing tunnel soon after its construction. Methane build-ups were by far the worst hazard throughout the history of British coal mining, thousands dying or being maimed as a result of explosions. One of the largest death tolls in British coal-mining history was 344 miners at Hulton Colliery in Westhoughton, Lancashire in 1910 after a methane explosion; that methane may well have escaped from the underlying Bowland Shales.

Snippets on human evolution

Artifacts from Blombos Cave, South Africa, including a deliberately etched block of hematite. Image by Chris Henshilwood via Wikipedia

The news that most humans outside Africa carry fragments of DNA that match those of Neanderthals and the mysterious Denisovan archaic humans (see Yes, it seems that they did… and Other rich hominin pickings in the May 2010 issue of EPN) has entered popular culture; or soon will have! Similar dalliances with the ‘older folk’ seem also to have occurred among those humans who remained in Africa (Hammer, M.F. et al. 2011. Genetic evidence for archaic admixture in Africa. Proceedings of the National Academy of Sciences, v. 108, p. 15123-15128). The DNA of three groups in West Africa who maintain hunter-gatherer lifestyles shows non-coding regions that differ from the African norm. This suggests mating with an entirely separate and unknown group of hominins – probably archaic forms of humans – that produced fertile offspring, probably around 35 thousand years ago. The find spurred re-evaluation of bones with a mix of archaic and modern features that were discovered in a Nigerian cave in the 1960s (Harvati, K. et al. 2011. The Later Stone Age Calvaria from Iwo Eleru, Nigeria: Morphology and Chronology. PLoS ONE, v. 6: e24024. doi:10.1371/journal.pone.0024024). The study confirms that the skulls are outside the fully modern human range, instead displaying a close similarity to Neanderthals and H. erectus. The big surprise is that U-Th dating suggests they are quite recent, around 16 ka. The stage seems set not only for a burst of exploration for human remains of less antiquity than early hominins but for a ‘paradigm shift’ in our view of what constitutes a human species.

See also: Gibbons, A. 2011, African data bolster new view of modern human origins. Science, v. 334, p. 167.

Another interesting legacy of the archaic humans who had the closest of relationships with some of our ancestors is that the union may have bolstered the resistance of migrants from Africa to Eurasian pathogens (Abi-Rached, L. and 22 others 2011. The shaping of modern human immune systems by multiregional admixture with archaic humans. Science, v. 334, p. 89-94). The focus was on the human leucocyte antigen (HLA) group that is a vital part of our immune system in the form of ‘killer cells’. Part of modern Eurasian DNA that codes for the group (the HLA-B*73 allele) appears in the Neanderthal and Denisovan genomes; indeed more than half the HLA alleles of modern Eurasians may have originated in this way, and have also been introduced into Africans subsequently.

Also at the front line of genomic research into human origins, DNA sequenced from a lock of hair given to an Edwardian anthropologist by a native Australian turns out to have an extreme antiquity compared with that of other Eurasian people descended from African migrants (Rasmussen, M. and 57 others 2011. An aboriginal Australian genome reveals separate human dispersals into Asia. Science, v. 334, p. 94-98). The unique aspects of the Australian genome signify separation of a group of individuals from the main African population around 62-75 thousand years ago; significantly earlier than, and different from, the ‘run of the mill’ migrants from whom modern Asians arose between 25 and 38 ka. There is little doubt that native Australians are descended from the pioneers who first diffused from Africa, either by crossing the Straits of Bab el Mandab or by another route, and that they moved more speedily across southern Asia than later waves made possible by climate change and sea-level falls following the Eemian interglacial of 133-115 ka.

Despite the lingering Eurocentric view that fully modern human consciousness somehow sprang into being at the time the famous French and Spanish cave art was painted, around 30 ka, increasing evidence points to an African origin for a sense of aesthetics and the ability to express it. The latest is the discovery of a 100 ka ‘paint box’ in a South African coastal cave (Henshilwood, C.S. et al. 2011. A 100,000-year-old ochre-processing workshop at Blombos Cave, South Africa. Science, v. 334, p. 219-223). The material consists of two large abalone shells containing traces of red and orange ochre, together with a hammer stone and grinder with adhering ochre, and fat-rich bones which, ground up, would have produced a binder for the ochre. No art occurs in the cave, so it may be supposed that the pigments were intended for face or body adornment.