New data on starting point for Earth evolution

Slowly, geochemists as well as planetary scientists have been taking up the implications of a likely infernal origin for the Earth-Moon system, which resulted from a Mars-size planet colliding with the proto-Earth shortly after planetary accretion.  The chemistries of Earth and Moon are sufficiently similar for a common origin to be almost certain.  There is one difference: lunar rocks are more depleted in volatiles than those accessible on the Earth, although terrestrial rocks too were at some stage purged of some volatile elements.  The Moon’s early history seems to be extraordinarily simple.  It is recorded in the pale rocks of the lunar highlands, which are made dominantly of feldspars.  Their low density and abundance suggest that the feldspars floated to the top of completely molten rock, in much the same way as similar anorthosites on Earth seem to have formed in large magma chambers. The difference is that lunar anorthosites probably once formed the entire crust of the early Moon, created by simple differentiation of a deep, all-encompassing magma ocean.  The late Dennis Shaw applied this simple notion to the Earth’s earliest evolution during the 1970s, but his vision was largely ignored by his geochemist peers.  A mantle-wide zone of complete melting was resurrected when William Hartmann’s giant-impact theory appeared: the energy involved seems to make such melting an inevitable corollary of the idea.

Indirect analysis of the mantle from the geochemistry of its basaltic products has shown that the mantle is not homogeneous.  Some has been partially stripped of basalt-forming elements, and there are other chemical heterogeneities.  However, examined from the standpoint of the isotopes of neodymium (142Nd and 144Nd), more or less every magmatic rock has been considered to derive ultimately from material with the same isotopic composition as chondritic meteorites and, by extension, that of the Galaxy in the vicinity of what became the Solar System.  That observation has been a major counter-argument to the notion of an early terrestrial magma ocean. Differentiation of such a fundamentally molten Earth would have separated some of the samarium-146 (the source of 142Nd through radioactive decay) from 144Nd, thereby imparting different growth histories for 142Nd/144Nd ratios to different mantle ‘reservoirs’.  The half-life of 146Sm is about 100 million years, so radiogenic 142Nd would have accumulated mostly in Earth’s early history, thereafter tending towards a constant proportion of neodymium, unlike the 143Nd used in radiometric dating, which accumulates much more slowly from the decay of 147Sm (half-life about 100 billion years).
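The contrast between the two half-lives just quoted can be made concrete with a short calculation (a sketch; the only inputs are the approximate half-lives given above):

```python
# Half-lives quoted in the text, in millions of years:
HALF_LIFE_SM146 = 100.0      # 146Sm -> 142Nd; short-lived, now effectively extinct
HALF_LIFE_SM147 = 100_000.0  # 147Sm -> 143Nd; about 100 billion years

def surviving_fraction(t_myr, half_life_myr):
    """Fraction of a radionuclide remaining after t_myr million years."""
    return 0.5 ** (t_myr / half_life_myr)

for t in (100, 500, 1000, 4500):
    print(f"after {t:>4} Myr: 146Sm {surviving_fraction(t, HALF_LIFE_SM146):.5f}, "
          f"147Sm {surviving_fraction(t, HALF_LIFE_SM147):.5f}")
```

Less than about 3% of the original 146Sm survives after 500 Myr, so any differences in 142Nd/144Nd between mantle reservoirs must have been created within the first few hundred million years of Earth history, whereas 147Sm is still decaying today, which is what makes 143Nd useful for ordinary radiometric dating.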

There was a flaw in this counter-argument.  The similarity of chondritic and terrestrial Nd isotope patterns might have stemmed from isotopic measurements that were insufficiently precise to detect significant differences. Mass spectrometry has since undergone a leap in precision.  Applied to the chondrite-Earth comparison, the neodymium data for chondrites remain as determined earlier, but the 142Nd/144Nd ratios of terrestrial rocks turn out to be 20 parts per million higher than those of chondrites (Boyet, M. & Carlson, R.W. 2005. 142Nd evidence for early (>4.53 Ga) global differentiation of the silicate Earth.  Science, published online June 16 2005; 10.1126/science.1113634).  That does not seem very much, but it is quite sufficient to suggest that the Earth’s mantle did indeed evolve from a magma ocean.  Its upper part was enriched in samarium by fractionation into solids that probably crystallised downwards.  Whatever was left of the original liquid would lie at the base of the protomantle, and in it many other elements that favour melt over crystals – so-called ‘incompatible’ elements – would have been enriched.  Boyet and Carlson suggest that such a deep, enriched layer may amount to between 5 and 30% of the current mass of the mantle.

The implications, if the ideas are confirmed, are enormous, because geochemists have until now taken the bulk of the mantle that supplies basalt magmas – whose composition is quite well constrained – to represent the whole silicate Earth.  That may satisfy geochemical parameters, but it worries geophysicists.  The ‘standard’ Earth has insufficient radioactive uranium, thorium and potassium to account for the heat that flows to the surface: in fact it generates only about half, leaving the rest to speculation. One school looks to gravitational potential energy locked in the core when it formed by inward collapse of iron-nickel alloy and slowly released thereafter.  Another theorises about radioactive potassium-40 combined in sulfides of the core, whose heat also ‘leaks’ out.  The possible existence of the last dregs of an early magma ocean near the core-mantle boundary (CMB) would not only account for 43% of surface heat flow, but might also drive convection in the liquid outer core as a means of generating Earth’s magnetic field.  Even more important, it might fuel the rise of plumes from the CMB that are increasingly implicated in periodic repaving of the Earth’s surface by flood-basalt volcanism.  Since flood basalts are a popular source for mantle geochemists’ data, why are the signs of such a peculiar source region not clear in their analyses?  Either they are not looking with the requisite precision, or the source itself does not move with plumes, merely setting them in motion.  Eminent geochemists see a hectic time ahead…
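A rough bookkeeping of the heat-budget argument (a sketch: the ~44 TW figure for total surface heat loss is a commonly quoted value and an assumption here, while the two percentages are those in the text):

```python
TOTAL_HEAT_FLOW_TW = 44.0    # often-quoted global surface heat loss (assumption)

standard_fraction = 0.50     # 'standard' silicate Earth supplies about half (text)
deep_layer_fraction = 0.43   # a hidden enriched layer could supply 43% (text)

standard_tw = standard_fraction * TOTAL_HEAT_FLOW_TW
deep_tw = deep_layer_fraction * TOTAL_HEAT_FLOW_TW
accounted = standard_fraction + deep_layer_fraction

print(f"'standard' Earth radiogenic heat: {standard_tw:.1f} TW")
print(f"hypothetical deep layer:          {deep_tw:.1f} TW")
print(f"fraction of heat flow accounted:  {accounted:.0%}")
```

On these figures the two sources together close most of the gap, which is why the proposed layer is so attractive to geophysicists.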

See also: Kerr, R.A. 2005. New geochemical benchmark changes everything on Earth.  Science, v. 308, p. 1723-1724.

Here is the earthquake forecast

Earth Pages News of June 2005 reported on the development by the US Geological Survey of the first daily seismic forecasting service, which covers California.  It has a web site at http://pasadena.wr.usgs.gov/step.   The forecast is for events, generally aftershocks of earlier earthquakes, with sufficient energy to throw objects off shelves (Modified Mercalli Intensity VI). On June 30 2005, Lake Tahoe had a chance of around 1 in 100 of such a temblor, with the length of the San Andreas and related fault systems highlighted at between 1 in 10 000 and 1 in 1000.  Of course, it will take some time before people check it as routinely as they do the weather forecast.

Stay of execution for Quaternary

The last remaining division of geological time that Giovanni Arduino erected in the mid- to late 18th century has been under threat for some time (see EPN of September 2004).  For over seven years, the ‘Time Lords’ of the International Commission on Stratigraphy have striven to resolve, at least for a while, all the fundamental divisions of stratigraphic nomenclature.  To the horror of researchers concerned with the last 2 million years or so, publication of the new time scale in 2004 seemed to have allowed the Neogene to swallow the Quaternary Period whole.  Muttering broke into a storm of angry e-mails demanding its restoration.

The reason behind the annoyance is simple.  The Quaternary is unique for two reasons: it includes the Great Ice Age, and it is the time of humanity – the first stone tools appear in the geological record between 2.4 and 2.6 Ma ago.  But those who demand the resurrection of the old name are not entirely in agreement among themselves, particularly about when it started.  The problem arose from the manner in which systematisation of both relative and radiometric time evolved.  Arduino recognised only four divisions – Primary, Secondary, Tertiary and Quaternary – based on the decreasing compactness and complexity of rocks that he had seen in Italy.  The Quaternary was defined as unconsolidated material that sat upon the other three.  As fossils became the main tools for establishing relative time and wide correlation, Primary and Secondary were soon dropped, but Tertiary and Quaternary remained as broad divisions until the late 20th century.  Tertiary strata became divided into 5 lesser palaeontological divisions, and Quaternary into two: Pleistocene and Holocene.  Radiometric dating demonstrated the brevity of the Tertiary compared with major stratigraphic divisions further back in time, so it was designated a Period, subdivided into 5 epochs.  The Tertiary was later elevated to Era status as the Cenozoic, despite its short time span, and its first three and last two epochs were bracketed by two new periods: Palaeogene and Neogene.  The development of the geosciences was clearly marginalising the Quaternary, a Period to which many devotees cling tenaciously.

The furore burst at the 32nd International Geological Congress in Florence in August 2004, and the ICS was duly chastened and apologetic.  It set up a task force to reunite the warring factions, or at least to draw up plans for a truce. The task force voted in early June 2005 to retain the name Quaternary and to set its beginning at 2.6 Ma, thereby defining it as the time of both the Great Ice Age and humankind.  Ironically, 2.6 Ma also marks the start of the Late Pliocene, defined by a Global Boundary Stratotype Section and Point (the midpoint of the sapropelic Nicola Bed (“A5”) at Monte San Nicola, Gela, Sicily, Italy). You see, there has to be somewhere that you can visit and ‘put your finger on the proper boundary’.  This particular GSSP is defined by a stage in the fluctuation of oxygen isotopes in deep-sea sediments, at the start of the Matuyama geomagnetic reversal and just below the extinction points of two echinoid species…  Incidentally, the ICS is by far the largest of the bodies within the International Union of Geological Sciences, the ‘UN’ of the geoscience community.  Acquiring the prestige of a GSSP ranks, for many countries’ geoscientists, at least as high as hosting an Olympic Games. Italy hosts 9 of the 22 Cenozoic GSSPs (5 are not yet placed), so clearly Arduino’s influence has been long lasting in some respects.  Several features of the new timescale as a whole may confuse far into the future (should it stand the test of time).  The Stage names, learned by generations of stratigraphers, often through cunning mnemonics, are mainly taken from places or regions; most of the GSSPs at their bases are somewhere else (browse http://www.stratigraphy.org/).

Source: Giles, J. 2005.  Geologists call time on dating dispute.  Nature, v. 435, p. 865.

Hydrogen sulfide and mass extinction

Naughty school kids once used to hurl glass vials that released the most pervasive smell of rotten eggs when they smashed.  Stink bombs produce hydrogen sulfide.  Interestingly, if you can smell it you are more or less safe – though not from flying glass shards.  When H2S is more concentrated it becomes an odourless and stealthy killer, as the ‘sour gas’ emitted from oil drilling rigs.  A group of anaerobic bacteria generates the gas where sulfate ions are abundant in oxygen-starved conditions.  They use these ions as electron acceptors in their metabolism, thereby reducing sulfate to sulfide ions; the process is common in stagnant swamps, and especially prevalent at depth in the Black Sea.

Several times during the Phanerozoic the global ocean depths became anoxic, when thermohaline circulation shut down.  The consequences show up in black mudrocks, rich in partially broken-down hydrocarbons and iron sulfide; some are major source rocks for petroleum.  Unstirred by deep current flow, bottom waters pervaded by H2S are capped by oxygenated water, so it might seem that there is little threat to surface dwellers and air breathers, although any animal unwarily entering toxic bottom water would instantly die.  That is why black mudrocks are repositories of exquisite fossils.  Should H2S build up in deep water, however, chemical instability might result in large-scale emissions to the upper ocean and the atmosphere.  Geochemists from the universities of Pennsylvania and Colorado have made some simple chemical calculations to see whether such a potentially catastrophic leakage is within the bounds of possibility (Kump, L.R. et al. 2005.  Massive release of hydrogen sulfide to the surface ocean and atmosphere during intervals of oceanic anoxia.  Geology, v. 33, p. 397-400).  Theoretically it is, once a threshold concentration of around 1 mmol per kg of dissolved H2S in deep water is exceeded.  Sulfidic upwellings would then emit of the order of a teratonne of sulfide per year to the atmosphere – more than 2000 times today’s volcanic emissions – with the added risk that the gas would also permeate upper-ocean water.
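The scale of the proposed leak can be put in perspective with the figures in the paper (a sketch: the total ocean mass is an outside assumption, and the volcanic flux is simply back-calculated from the ‘2000 times’ ratio):

```python
OCEAN_MASS_KG = 1.4e21           # approximate total mass of the oceans (assumption)
H2S_MOLAR_MASS_G = 34.1          # g/mol
threshold_mol_per_kg = 1e-3      # ~1 mmol/kg trigger concentration (text)

sulfide_release_t_per_yr = 1e12  # 'of the order of teratonnes per year' (text)
volcano_multiple = 2000          # 'more than 2000 times' today's volcanoes (text)

# Implied modern volcanic sulfide flux, back-calculated from the ratio:
implied_volcanic_flux = sulfide_release_t_per_yr / volcano_multiple

# Upper-bound H2S inventory if the whole ocean sat at the threshold (tonnes):
inventory_t = OCEAN_MASS_KG * threshold_mol_per_kg * H2S_MOLAR_MASS_G / 1e6

print(f"implied volcanic sulfide flux: {implied_volcanic_flux:.1e} t/yr")
print(f"threshold H2S inventory:       {inventory_t:.1e} t")
```

Even a modest fraction of an ocean held at the threshold concentration could therefore sustain teratonne-per-year emissions for a long time.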

As well as witnessing mass extinctions, the Late Devonian, end-Permian and Middle Cretaceous were characterized by widespread anoxia.  Leakage of H2S would not only have killed directly, but would also have destroyed the ozone layer that protects against UV radiation.  Inevitably, methane produced by other anaerobic bacteria would have been released in the same way, forcing global warming.  Rather than being the result of dramatic impacts or monstrous flood-basalt effusions, mass extinctions at these times would have been quiet, but efficient nonetheless.

Potted history of atmospheric oxygen

The most likely hallmark of an inhabited planet is an atmosphere that contains oxygen; a simple rule of thumb made popular by James Lovelock.  By assembling complex molecules based on carbon, life increases the degree of chemical reduction in its environment.  Effectively it draws in electrons, and the counterpart must be that some other component loses them through oxidation.  On Earth the source of the electrons needed to make organic molecules through photosynthesis is predominantly the oxygen atoms locked in molecules of water and carbon dioxide.  By losing 4 electrons, 2 oxygen atoms bonded in those two simple compounds are oxidised to become the gas O2, which itself has become the commonest and most active acceptor of electrons from reduced ions and compounds.  Oxygen gives its name to oxidation, the inevitable fate of most organisms, which reverses the process of photosynthesis.   A planet whose surface topography is continually changing, because more radiogenic heat is produced in its mantle than can be lost to space by simple conduction, generates physical conditions that continually bury and store some unoxidised carbon compounds.  Carbon burial, together with continued living processes, keeps the photosynthetic chemical equation weighted in favour of free oxygen.
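The electron bookkeeping described above corresponds to the net photosynthesis reaction (standard chemistry rather than anything specific to the article): two oxygen atoms, at oxidation state −2 in CO2 and H2O, each surrender two electrons to become O2 at oxidation state 0:

```latex
\mathrm{CO_2} + \mathrm{H_2O} \;\xrightarrow{\text{light}}\; \mathrm{CH_2O} + \mathrm{O_2}
\qquad \left( 2\,\mathrm{O}^{2-} \rightarrow \mathrm{O_2} + 4e^- \right)
```

Respiration and the oxidative decay of dead organisms run the same equation in reverse, which is why burial of unoxidised carbon is needed for free O2 to accumulate.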

Since the domain of living things to which we and all advanced organisms belong, the Eukarya, is almost wholly one to which oxygen is vital in metabolism, there can be few more important geoscientific topics than how and when oxygen emerged as a free element.  There have been major recent developments in addressing these questions, so it is useful and fascinating to find an up-to-date and easily read review (Kerr, R.A. 2005.  The story of O2.  Science, v. 308, p. 1730-1732).  Among its highlights is evidence that although cyanobacteria (the most primitive oxygenic photosynthesisers) were definitely around at 2.7 Ga, they may not have produced oxygen until about 300 million years later, when the first signs of free environmental oxygen appear.  Photosynthetic release of oxygen during life’s early period was not the only reduction-oxidation regime adopted by organisms.  Another of huge importance was the generation of methane, which can rise to the limits of the atmosphere, unlike the other major hydrogen-bearing gas, water, which condenses out at quite low altitudes.  Photochemical breakdown of methane at the top of the atmosphere would release hydrogen to leak away from the Earth, removing a reductant gas that would otherwise consume highly reactive oxygen: without this process, modelling suggests, Earth’s atmosphere would never have accumulated free oxygen, even had primitive life emerged.

Once free oxygen appeared, about 2.4 Ga ago, it took almost 2 billion years for enough to accumulate for complicated, multicelled Eukarya to use its potential (see The Malnourished Earth hypothesis – evolutionary stasis in the mid-Proterozoic in EPN of September 2002). What kept the levels down?  Quite probably it was oxidation of sulfide minerals on exposed land.  That supplied sulfate ions to a still-reducing ocean, where sulfide ions formed again to become metal-sulfide precipitates, which drew from ocean water several nutrients essential for Eukarya.  Oxygen-producing Eukarya (algae) would not have been able to bloom because of this ‘starvation’.  Nonetheless, about 600 Ma ago surface oxidation potential soared to almost modern levels, sufficient for large organisms to appear and evolve, leading to life as we know it. Another series of questions surrounds this tremendous event, but they remain to be answered convincingly.

Another view of causes for the Younger Dryas cooling event

High latitudes in the North Atlantic, especially on its eastern side, are warmed today by the Gulf Stream.  That current, which defies the Coriolis effect, is pulled northwards by the sinking of cold, dense sea water between Greenland, Iceland and Scandinavia to form North Atlantic Deep Water (NADW).  The thermohaline circulation here is driven both by cooling of salty surface water in the Gulf Stream and by further salinisation as sea ice forms in this area each winter.  The Younger Dryas cold period, between 13 and 11.5 ka, is regarded by most oceanographers and climatologists as having resulted from sudden freshening of the North Atlantic at these critical high latitudes, so that surface-water density became too low for sinking.  Such a process had occurred several times during the last glacial period, each occurrence correlated with the release of massive amounts of glacial ice as icebergs, whose melting caused the freshening. The Younger Dryas is a different kind of event, because it occurred well into the period of global warming that brought the Ice Age to an end.  A seemingly plausible explanation, suggested by Wallace Broecker in 1989, looked to explosive release of meltwater trapped in glacial lakes roughly along the Canadian-US border, via the present St Lawrence River valley, effectively flooding the source of NADW with a surface layer of low-density, low-salinity water.

The problem with Broecker’s mechanism is that sea-level records through the Younger Dryas show no sudden rise, whereas at about 14 ka a meltwater pulse had produced a 20 m rise over about 500 years, with no sign of a climatic response to any shutdown of the Gulf Stream by the freshening that it caused.  A similar event occurred shortly after the waning of the Younger Dryas.  There is no doubt that throughout high northern latitudes the great ice sheets had been melting since about 18 ka. A new approach to the Younger Dryas concentrates on where the meltwater formed in northern North America probably escaped to the sea (Tarasov, L. & Peltier, W.R. 2005.  Arctic freshwater forcing of the Younger Dryas cold reversal.  Nature, v. 435, p. 662-665).  Through their analysis of the drainage chronology of the Canadian Shield, Tarasov and Peltier conclude that at the onset of the Younger Dryas most flow was roughly along the present Mackenzie River valley to the Arctic Ocean.  Fresh water from the Arctic Ocean would escape through the narrow Fram Strait directly to the source region for NADW; it need not have travelled as currents, for the escape of increased amounts of pack ice would have much the same effect.  Central to their hypothesis are new data relating to extraordinarily thick continental ice in the Keewatin glacial dome, which formed just to the east of modern Great Slave Lake.

Acidification of the oceans

When gases such as CO2 and H2S permeate ocean water they dissolve to form weak acids: carbonic and hydrosulfuric acid respectively. So many organisms, plants as well as animals, incorporate carbonates into their hard parts that changes in acidity constitute an important kind of stress.  The acidity of water combines with increasing pressure as water deepens to create a zone (the lysocline) in which water is undersaturated in calcium carbonate.  Below the lysocline, carbonate shells begin to dissolve.  Deeper still is a level (the carbonate compensation depth, or CCD) below which there is no free CaCO3 in the water column: falling shelly material dissolves completely, so that deep-ocean sediments contain few if any shells other than those of silica-secreting organisms.  At present the CCD is around 4 km deep.  Any shift in the pH of the oceans causes the CCD to rise or fall, and the signatures of such shifts lie in the composition of ocean-floor sediments.  In the deepest parts, where silica and clays dominate, layers in which carbonate shells are preserved signify a decrease in acidity (increased pH) and descent of the CCD to below the level of the ocean floor.  On the other hand, the appearance of pure clay-silica oozes in otherwise shelly muds, where the sea floor has been well above the CCD for long periods, shows that acidity increased (a drop in pH) for a period.  Such anomalous sediment layers are often easy to see in cores because their colour differs from that of the common sediments.
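How the CCD emerges from competing gradients, and how acidification moves it, can be caricatured with a toy model (every coefficient below is invented for illustration, chosen only so that the unperturbed CCD comes out at the ~4 km quoted above):

```python
def saturation_state(depth_km, acidity_shift=0.0):
    """Toy carbonate saturation: falls with depth (pressure effect) and
    with any added acidity. Coefficients are purely illustrative."""
    return 2.0 - 0.25 * depth_km - acidity_shift

def ccd_depth(acidity_shift=0.0):
    """Depth (km) where the toy saturation state falls to the value (1.0)
    below which no CaCO3 survives in the water column."""
    return (2.0 - 1.0 - acidity_shift) / 0.25

print(ccd_depth(0.0))  # 4.0 km in this toy setup, like the modern CCD
print(ccd_depth(0.5))  # with added acidity the toy CCD shoals to 2.0 km
```

Raising the acidity shifts the whole saturation profile, so the depth at which carbonate disappears shoals: exactly the sense of the anomalous sediment layers described above.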

In cores from ocean depths between 2 and 4 km, the second kind of anomaly appears consistently at the level of the Palaeocene-Eocene boundary: it signifies a massive increase in acidity (Zachos, J.C. et al. 2005.  Rapid acidification of the ocean during the Paleocene-Eocene thermal maximum.  Science, v. 308, p. 1611-1614). Carbon-isotope measurements from the same cores also show a marked shift.  The sediments are depleted in 13C, which has generally been taken to indicate a huge release of methane from storage as gas hydrate in sea-floor sediment at the time of the Palaeocene-Eocene boundary.  Most palaeoclimatologists consider the C-isotope “spike” to be a proxy for sudden, intense warming caused by methane – a more efficient ‘greenhouse’ gas than CO2 – and by the carbon dioxide produced as it was oxidised.  The range of water depths at which the carbonate-free layers occur enables marine geochemists to estimate the rate of acidification.  In only around 10 ka the CCD rose to 1.3-2.0 km above its present level.  From the degree of acidification needed, it seems that considerably more than 2 x 10^12 t of carbon was released in the form of methane that eventually oxidised to CO2 and returned to the ocean.  The carbonate content of the ocean sediments rose gradually over the next 100 ka, by the end of which the former balance was restored. This in turn gives a picture of the rate at which sudden ‘greenhouse’ events subside once their cause ceases – almost certainly through drawdown of atmospheric CO2 by weathering of silicate minerals exposed on the continental surface.

At the end of the Palaeocene, the effect on organisms was mainly restricted to benthic foraminifera living in moderately deep water, which show a selective extinction.  The eventual release by human activity of the carbon contained in accessible fossil-fuel reserves will deliver a mass of carbon in ‘greenhouse’ gases about twice that released at the Palaeocene-Eocene boundary, over perhaps 300 years.  Such rapid release may result in acidity that is incompatible with carbonate-secreting organisms anywhere in the oceans: the CCD would effectively be at the sea surface.
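The rate contrast implied in the last paragraph is worth making explicit (a sketch using only the figures quoted in this section):

```python
PETM_CARBON_T = 2e12        # > 2 x 10^12 t of carbon released at the P-E boundary
PETM_DURATION_YR = 10_000   # over roughly 10 ka (text)

HUMAN_CARBON_T = 2 * PETM_CARBON_T  # 'about twice that released' (text)
HUMAN_DURATION_YR = 300             # 'over perhaps 300 years' (text)

petm_rate = PETM_CARBON_T / PETM_DURATION_YR
human_rate = HUMAN_CARBON_T / HUMAN_DURATION_YR

print(f"PETM release rate:      {petm_rate:.1e} t C/yr")
print(f"projected human rate:   {human_rate:.1e} t C/yr")
print(f"human rate / PETM rate: {human_rate / petm_rate:.0f}x")
```

On these figures the human perturbation is not just twice the size but nearly seventy times faster, which is the real cause for concern about ocean acidity.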

The earliest lichens

Lichens are not individual species, although they are given Linnaean names, but symbiotic associations of two or more species: fungi living with either algae or blue-green bacteria.  Although lichens are one of the plagues that try field geologists, by encrusting the very outcrops they wish to examine, their fossil record is extremely sparse.  Once again, Chinese lagerstätten in the Doushantuo Formation establish a first, in this case preserved in phosphorites (Yuan, X. et al. 2005.  Lichen-like symbiosis 600 million years ago.  Science, v. 308, p. 1017-1020).  The fossils show exquisite detail, sufficient to reveal both fungus-like hyphae and cells that resemble those of cyanobacteria.  They are from the late Neoproterozoic Ediacaran Period, when all manner of evolutionary developments were taking place.  One unanswered question is whether these organisms were marine or subaerial; modern lichens are intolerant of salt water.

Methuselah

Since the 1960s, claims have been made for the oldest living organisms being found in brine inclusions in salt deposits, and most have been dismissed as modern contaminants.  In 2000 that easy dismissal was ruled out by super-sterile culturing of the contents of a fluid inclusion in a Permian halite crystal from New Mexico (Vreeland, R.H. et al. 2000.  Isolation of a 250 million-year-old halotolerant bacterium from a primary salt crystal.  Nature, v. 407, p. 897-900).  The research produced a culture of a salt-tolerant bacterium that was dubbed Virgibacillus.  However, the crystal itself could have formed much later than the deposition of the salt beds, and confirming a Permian age for a fluid inclusion is not easy.  One approach is to compare the composition and formation temperature of the bacterium-hosting fluid with those of other, more usual inclusions in the same deposit, and with fluids that form when salt deposits are exposed to air (“weeps”), which might be included when salt deposits recrystallise long after their formation (Satterfield, C.L. et al. 2005.  New evidence for 250 Ma age of halotolerant bacterium from a Permian salt crystal.  Geology, v. 33, p. 265-268).  The study found that the host inclusion fluid, along with others from halite at the same level in the deposit, has a significantly different composition from “weeps”.  The latter reflect the composition of the salts in the deposit, which formed by precipitation of the less soluble components of seawater.  The inclusions have compositions more like sea water concentrated by evaporation, albeit different from that of modern halite inclusions.  So it does indeed seem that Virgibacillus is a Permian creature.  Yet to emerge are DNA analyses that can be compared with modern salt-tolerant bacteria.

Mars: the best may yet be to come

The US and ESA satellites orbiting Mars have so far deployed remote-sensing instruments that detect visible to thermal-infrared radiation from the planet’s surface.  Ultimately the energy involved comes from the Sun: these are passive instruments.  Engrossing as they are, images from these sensors reveal only details of surface mineralogy and Martian topography.  So far, virtually nothing is known about what lies buried beneath the surface, apart from inferences about ground ice.  The ESA Mars Express has one last imaging trick up its sleeve, which uses energy generated on board and beamed obliquely down to the surface: the Mars Advanced Radar for Subsurface and Ionospheric Sounding (MARSIS).  Radar remote sensing on Earth generally uses high-frequency microwaves in the wavelength range 0.01 to 0.1 metres, and the images produced show how much energy is scattered back, by surfaces of varying roughness, to antennae deployed from an aircraft or satellite.  The longer the wavelength, the greater the height of the small-scale surface irregularities needed to cause scattering, and hence a received signal.  Perfectly smooth surfaces reflect all the energy away from the antennae, like a mirror, so no energy returns to be sensed.  How microwaves interact with the Earth’s surface also depends on the electrical properties of the materials.  Good electrical conductors, such as metals and liquid water, are extremely efficient reflectors, whereas minerals are poor conductors and tend to absorb microwaves to some extent.  If soils are extremely dry, with less than 1% moisture content, as in some deserts, some of the absorbed energy is scattered by materials below the surface and images show subsurface features.  This is the principle behind ground-penetrating radar, but since many soils are damp, in most areas only radar waves generated at the surface give good signals, to be exploited by civil engineers and archaeologists.
Ice is very different from liquid water, being so poorly conductive that it is almost transparent to microwaves.  Consequently it has proved possible to sound the depth of glaciers and ice sheets using radar deployed from aircraft.  The depth of penetration – and, of course, energy must return to the surface to give a signal – is governed by the radar wavelength.  For instance, unknown former courses of the River Nile’s tributaries have been detected beneath about 3 metres of dry sand in the hyperarid eastern Sahara using 0.25 m radar waves.
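Whether a surface scatters or mirrors radar is often judged with the Rayleigh roughness criterion, a standard rule of thumb not spelled out above: irregularities smaller than about λ/(8 cos θ), for incidence angle θ, make a surface effectively smooth. The 30° incidence angle below is an arbitrary illustrative choice:

```python
import math

def smooth_threshold_m(wavelength_m, incidence_deg=30.0):
    """Rayleigh criterion: height variation (m) below which a surface
    reflects like a mirror at the given wavelength and incidence angle."""
    return wavelength_m / (8.0 * math.cos(math.radians(incidence_deg)))

# Earth-observing wavelengths (text: 0.01-0.1 m) vs sounding-radar ones
for wl in (0.05, 0.25, 60.0, 170.0):
    print(f"lambda = {wl:>6} m -> smooth below ~{smooth_threshold_m(wl):.3f} m of relief")
```

At tens-of-metre wavelengths almost any natural surface counts as smooth, which is why sounding radars can devote their energy to penetration rather than surface scattering.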

MARSIS can transmit microwaves at four wavelengths: 170, 100, 80 and 60 m.  Given rocks and soils free of liquid water – which comprise most of Mars’s surface – or ice, it can penetrate to a depth of almost 5 km.  The multi-wavelength arrangement can also potentially discriminate water ice from rock and soil.  A great deal of speculation and some evidence suggest that parts of Mars may be underlain by permafrost that melts only under unusual conditions, such as after meteorite impacts.  There are also suggestions that glacier-like landforms may still be underlain by ice and, bizarrely, that there are frozen seas (see The triumph of the old on Mars in April 2005 EPN).  MARSIS may well throw Mars investigations into turmoil, but maybe not.  The delay in sparking it up has been caused by fears that deploying its antennae might damage the whole spacecraft, and the first attempt seems to have got stuck.  Its other drawback is limited power, so that horizontal resolution will be between 5 and 10 km, and vertical resolution only 100 m; results may be so blurred as to be inconclusive.  NASA plans a similar device aboard its Mars Reconnaissance Orbiter (launch date August 2005): the Shallow Subsurface Radar (SHARAD) will use microwaves with 12 to 20 m wavelengths that give penetration to 1 km, but horizontal and vertical resolutions of 300 and 15 metres.
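Radar sounders are more often specified by frequency than wavelength; converting with λ = c/f (plain physics; the wavelengths are those quoted above) shows that these instruments work in the low-MHz band, far below terrestrial imaging radars:

```python
C = 299_792_458.0  # speed of light, m/s

def frequency_mhz(wavelength_m):
    """Free-space frequency (MHz) for a given radar wavelength (m)."""
    return C / wavelength_m / 1e6

for wl in (170, 100, 80, 60):   # MARSIS wavelengths from the text (m)
    print(f"MARSIS {wl:>3} m -> {frequency_mhz(wl):5.2f} MHz")
for wl in (20, 12):             # SHARAD wavelengths from the text (m)
    print(f"SHARAD {wl:>3} m -> {frequency_mhz(wl):5.2f} MHz")
```

Such low frequencies buy kilometres of penetration at the cost of the coarse horizontal and vertical resolution noted above.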

See: Reichhardt, T. 2005.  Going underground.  Nature, v. 435, p. 266-267.

Water and the G8

On May 24 the government of Tanzania cancelled a contract with the commercial water giant Biwater, which was supposed to bring clean water to the country’s largest city, Dar es Salaam, and establish a privatised water supply.  The UK-based company had won a £76.5 million contract from the World Bank, with the support of the British government’s Department for International Development (DfID).  DfID had paid a free-market thinktank £0.5 million in fees to advise the Tanzanian government and promote privatisation, out of a total expenditure of more than £36 million since 1998 on similar consultancies.  In two years Biwater failed to install a single pipe (Vidal, J. 2005.  Flagship water privatisation fails in Tanzania.  The Guardian 25 May 2005, p. 4).

In her statement to the International Conference on Water and Sustainable Development in Paris (March 1998), Clare Short (the British minister then heading DfID) outlined the New Labour government’s “vision” for water resources in the Third World: “Partnerships among governments, the private sector and civil society are critical to sustainable development [of water resources]”.  The policy of the International Monetary Fund is to enforce “structural adjustment programmes” on poorer countries as a condition for rescheduling debt repayments, and into these is written the privatisation of formerly public assets, such as water utilities. The first targets in Africa were the townships of South Africa, following the end of apartheid.  Although very poor by western standards, and with unemployment running at up to 50%, people in South African townships are better off than the majority of sub-Saharan Africans, so potential profits from water metering seemed attractive.  However, a great many people found themselves cut off from this most basic necessity in 2000, being unable to pay the increased water rates.  This led to nationwide protests, the most violent in the arid Transvaal.  The company involved in that region was also Biwater, with bids for contracts worth 12 billion rand.  The company has an interesting history, having been an early beneficiary of the Conservative government’s “aid for trade” programme in the 1980s, including dam and water-distribution contracts in Malaysia and Thailand that were linked to British arms supplies to the governments involved.

Water privatisation is a target outside Africa too, perhaps the most notorious case being in South America.  Bolivian trades unionists demonstrated on 6 April 2000 against a 35% rise in water prices imposed on the city of Cochabamba.  Military forces opened fire, killing six demonstrators, and a state of siege was declared by the authorities.  The price hike stemmed from the new owner of the region’s water system – International Waters Ltd (IWL) of London, a subsidiary of Bechtel, based in San Francisco.  IWL’s Bolivian operation centres on the Misicuni dam project.  Water from the dam would cost six times more than water from alternative sources, and the increased charges were intended to recover the cost of the dam – with one problem: the dam had not been built, and IWL/Bechtel had put no funds into its construction.  Public pressure subsequently forced the ending of the contract.  Similar upheavals have been seen in Ghana, Trinidad, Argentina and the Philippines.

News of Tanzania’s decision to end the ill-fated contract with Biwater followed announcements in the same week that the EU would effectively double its Third World aid.  In early July, Britain will host the 2005 G8 summit, which will be dominated by discussion of ways to increase the flow of finance into Africa in particular.  This follows the publication in early 2005 of the Commission for Africa Report sponsored by the New Labour government.  Two thirds of the world’s population lack sanitation adequate for healthy living.  Of them, one billion people, including the majority of Africans, have no access to safe drinking water.  Poor water supplies are the main contributor to deaths of children under five years old.  For hundreds of millions of people, obtaining water for domestic use consumes much of their daily labour – mainly that of women and children, who trudge to distant water sources and carry it home, on average twice each day.  The failure of private enterprise to deliver water to the needy suggests that the small print of any declaration from the G8 summit needs the most careful scrutiny.

The route and the pace out of Africa

Tool-making hominid species left their African homeland several times in the past, the earliest departure coming shortly after the appearance of Homo erectus, about 1.8 Ma ago.  Those early migrants ended up in eastern Asia, where they thrived until as recently as 12 thousand years ago (if indeed H. floresiensis does prove to be a miniature erectus).  Europe was reached by at least three waves: possibly advanced H. erectus around 0.5 Ma; Neanderthals as early as 0.25 Ma; and modern humans around 40 thousand years ago, at the earliest.  The fully modern human record in Asia begins at 67 thousand years ago, suggesting an exodus from Africa between 80 and 70 thousand years ago.  There is an oddity here: simple geography suggests that Europe, being closer, should have been colonised first in each wave out of Africa.  But the Nile to Middle East to Europe route was not successfully used by our immediate forebears until long after they had moved eastwards, although there is evidence of temporary H. sapiens occupation of parts of Palestine between 100 and 80 thousand years ago.  Several reasons for this have been suggested, including the possibility of direct competition with Neanderthals, who occupied the same 100 ka sites in the Middle East, and the relative difficulty of passage along the Nile compared with a coastal route in NE Africa.

Eritrean and US archaeologists have shown that around 100 ka the Eritrean coast was occupied by humans who subsisted on seafood: always available whatever the climate, whereas the potential of terrestrial game fluctuates.  That has led to the suggestion that the Africans who colonised Asia and Australasia left by island-hopping across the narrow Straits of Bab el Mandab when sea level began to fall around 70 ka.  A coastal route, well stocked with food, would have allowed rapid movement eastwards.  That seems intuitively likely, because an eastward route through the Middle East is barred by deserts, which would have become even more arid as glacial conditions developed.  Moreover, a Middle Eastern route would have led more directly to Asia Minor and ultimately Europe.  The conundrum deepens, since the Straits of Bab el Mandab would have been even easier to cross at the time of the last glacial maximum, around 20 ka, yet there are no archaeological signs of populations of that age in Yemen and Oman – though research there has hardly begun.  Unravelling the routes is possible, just, by analysing modern population genetics (Macaulay, V. et al. 2005.  Single, rapid coastal settlement of Asia revealed by analysis of complete mitochondrial genomes.  Science, v. 308, p. 1034-1036).  People living in the Andaman Islands and the Malaysian Peninsula include groups who differ substantially from their neighbours and may be descendants of the original colonisers.  Mitochondrial DNA from these groups indicates branching from an original type around 65 ka and, remarkably, suggests a single founding woman.  That cannot be taken exactly at face value, but it does suggest that only a small band migrated to these two areas, perhaps no larger than a few hundred people.  The fact that they reached the Andaman Islands may indicate that theirs was a boat-using culture.  Whatever the case, movement was rapid, possibly as fast as 4 km per year, thereby allowing the early colonisation of Australia.

Analyses of mtDNA in Africa suggest that about 85 ka ago there was a major expansion of people whose descendants make up more than two thirds of modern Africans.  Could it be that this expansion reflected climate and ecological change, and that severe competition from migrants arriving from elsewhere drove inhabitants of the Red Sea coast to cross the daunting Straits of Bab el Mandab?  Perhaps the same pressure was the driving force as late as 40 ka, when modern humans reached Europe itself, undoubtedly along the Middle East route.

See also:  Forster, P. & Matsumura, S. 2005.  Did early humans go north or south?  Science, v. 308, p. 965-966.