Are geoscience job prospects about to boom?

Metal thefts in the UK increased to such an extent in 2008 that police began marking lead on church roofs with the same identification tags as televisions and DVD players. Similarly, there was an outbreak of filching heating oil and diesel from isolated farmsteads. This follows the surge in commodity prices during the first two quarters of 2008. On a more legal note, oil and mining companies have found that their assets have soared, and unsurprisingly they want more of the same while prices hold or rise even further. Exploration managers with increased budgets are set to push out to the frontiers, and consultants are rubbing their hands with glee. On the surface, these developments might seem to foretell a welcome rise in the employability of people with a geoscience degree; or so think three contributors to the 8 August 2008 issue of Science (Gramling, C. 2008. In the geosciences, business is booming. Science, v. 321, p. 856-857; Laursen, L. 2008. Geoscientists in high demand in the oil industry. Science, v. 321, p. 857-859; Coontz, R. 2008. Hydrogeologists tap into demand for an irreplaceable resource. Science, v. 321, p. 858-859). It is claimed that geoscience jobs in the US will rise by 22% in the next decade, compared with an overall jobs forecast of around 10%. Low place-value physical resources being, by definition, potentially profitable world-wide, prospects ought to be good for ‘geos’ globally. Salaries also seem set to rise, along with employability for individuals with first degrees as opposed to master’s qualifications. The ruthless downsizing, outsourcing and lay-offs of the 80s and 90s have also placed greater value on Earth science qualifications, simply because there has been a decline in students opting for seemingly moribund career prospects: a matter of increased demand facing diminished supply, as any trader at the London Metal Exchange or the world’s petroleum spot markets would verify.
At the same time, shifts in research funding from rock-oriented geosciences to Earth system science have created a bear market for geological academic posts. High-flying geologists in universities and surveys may well be polishing up their CVs in anticipation of a growing wage differential between the public and private sectors.

Set against such rosy prospects are the inherent economic risks bound up with inflation in commodity prices. Historically, there has been a tendency for boom then bust in mining and the oil industry: the contrast between the surge in petroleum and metals prices following the Yom Kippur War and the Iranian Revolution and the recession of the 80s and 90s is too recent to ignore, as many ‘geos’ who found themselves ‘over the hill’ in its aftermath will admit. It would be wise to look on prospects with caution. One area likely to rise in prominence is ‘environmental’ geology: the likes of hydrogeology, geotechnics, and coastal and flood defence. The problems that global warming may bring, an increased focus on leisure learning and heritage, and the fact that around 20% of all living people have little if any access to clean drinking water and adequate standards of public hygiene compete in many ways for young geoscientists’ aspirations. On a mercenary yet acutely practical note, growing environmental legislation and the provision of development funds by non-governmental agencies ranging in scale from the UN ‘family’ to small charitable bodies suggest that these fields are likely to provide satisfyingly useful employment with longer-term stability than the uncontrollable vagaries of the commodity markets, albeit at somewhat more modest salaries.

Ocean chemistry at the time of the earliest animals

The Ediacaran fauna of the late Neoproterozoic (575-543 Ma) marks the first clear sign of animal life, although the affinities of many of the taxa are obscure. ‘Molecular clocks’ based on differences between the DNA of living organisms seem to suggest a last common ancestor of all animals somewhat earlier than the Ediacaran Period, perhaps as early as 1000 Ma. Whatever that first animal was, its emergence and that of the Ediacarans took place in climatically and chemically peculiar times. The Neoproterozoic was marked by at least three glacial epochs that left traces at palaeolatitudes as low as the tropics: so-called ‘Snowball Earth’ events. It also contains the most erratic swings in carbon isotopes known from the geological record, which have something to do with the ups and downs of life at the time, probably variations in global biomass and/or the rate at which organic carbon was buried in seafloor sediments. Among Neoproterozoic sediments two types are outstanding: graphitic and sulfidic mudrocks, and sulfur-poor banded iron formations (BIFs). BIFs of that age have been an enigma, the most massive and long-lived examples being those of the Palaeoproterozoic (before 1.8 Ga) and the Archaean. Neoproterozoic BIFs seem to mark the return, after a billion years, of a most peculiar ocean chemistry, in which soluble iron(II) ions were abundant at all depths in the ocean yet were oxidised to insoluble iron(III) at the sites where Fe2O3 was deposited in huge amounts. In the earlier BIF period that had to have been where oxygen was being locally emitted by primitive blue-green bacterial photosynthesisers, i.e. in shallow water. We must surmise that this occurred again in the Neoproterozoic, although the source of oxygen would by then have included more advanced oxygenic photosynthesisers. But that is not the puzzle.
How did ocean-wide conditions return to allow the abundance of dissolved iron(II) ions and why did those conditions not prevail in the BIF-less billion years?

Donald Canfield of the University of Southern Denmark has long been immersed in issues of ocean-chemistry evolution in relation to atmospheric oxygen levels, and offered an answer to the second question that has largely replaced the once accepted wisdom that ocean water became oxygenated throughout after 1.8 Ga, thereby allowing iron to enter oxidised minerals as soon as it emerged in ocean-floor basalt magmas. Instead, he suggested that the deep ocean, at least, contained abundant hydrogen sulfide, as witnessed by sulfur isotope patterns in marine sediments. In other words, oceanic Fe(II) was efficiently precipitated through the Mesoproterozoic in the form of sulfides. The H2S was probably generated by bacterial reduction of sulfate ions, themselves derived by oxidation of on-land exposures of sulfidic rocks because of low but increasing atmospheric oxygen. Canfield and a rich variety of international colleagues once again have an authoritative say, this time as regards the Neoproterozoic iron formations (Canfield, D.E. et al. 2008. Ferruginous conditions dominated later Neoproterozoic deep-water chemistry. Science, v. 321, p. 949-952).

If the supply of sulfate from the continents waned, then bacterial production of sulfides would follow suit in sulfur-poor oceans. Provided deep-ocean oxygen levels remained very low, iron(II) derived from continually generated ocean-floor basalts and their hydrothermal alteration could once again pervade the oceans. Oxygen in shallow water would again encourage precipitation of hematite and BIFs. This hypothesis needs no special explanation for fully oxygenated Precambrian oceans reverting to anoxia in the Neoproterozoic and then swinging back and forth in their oxygen concentrations to explain short BIF episodes; merely variations in the supply of sulfate from weathered continental surfaces. Canfield et al. tested this hypothesis by examining the proportions of total iron in 800-530 Ma sediments held in minerals able to react easily with their environment, such as sulfides and carbonates, and the proportions of such reactive iron in sulfide minerals. In modern oxygenated waters the proportion of reactive iron in sediments does not rise above about 40%, and is often lower. In the Neoproterozoic samples, shallow marine rocks obeyed the modern <40% rule, but those from intermediate to deep-water settings (below storm-wave base) sometimes show far higher values. That is a clear signature of anoxic waters, and it persists into the Cambrian. Interestingly, many deep-water sediments from the Ediacaran Period do show signs of oxygenation, while others were anoxic. Among the sediments deposited under anoxic conditions, none have iron sulfide proportions as high as those produced in modern euxinic basins such as the Black Sea, thereby signalling a dearth of bacterially generated H2S and the low sulfate supply to the oceans that the hypothesis predicts. But why did the supply dry up? One possibility is that chemical weathering on the continents plummeted during ‘Snowball Earth’ episodes.
Yet anoxic, iron(II)-rich conditions in the oceans persisted well beyond the times of the known glacial epochs. Another plausible explanation is pyrite burial, analogous to that of organic carbon, with subduction of sulfide-rich sediments progressively stripping the oceans of sulfate. What of the effect on early animal life? Iron is an essential micronutrient, much touted today as a means of encouraging phytoplankton blooms in ocean surface water. Together with rising shallow-water oxygen levels, perhaps an explosion in food supply enabled large early animals, such as the Ediacarans, to develop and thrive, instead of much smaller precursors whose survival as fossils would be less likely.
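
The iron-partitioning test described above lends itself to a simple decision rule. The sketch below is a hedged illustration, not Canfield et al.’s actual procedure: the ~0.38 reactive-iron cut-off follows the modern calibration mentioned in the text, while the 0.7 pyrite threshold for euxinic conditions is an assumed round figure.

```python
# Illustrative iron-speciation screen for palaeo-redox conditions.
# Thresholds (0.38 and 0.7) are approximate values for demonstration only.

def classify_water_column(fe_hr, fe_total, fe_py):
    """Classify depositional redox state from iron partitioning.

    fe_hr    -- 'highly reactive' iron (sulfides, carbonates, oxides), wt%
    fe_total -- total iron, wt%
    fe_py    -- pyrite-bound iron, wt%
    """
    if fe_total == 0:
        return "indeterminate"
    ratio_hr = fe_hr / fe_total      # above ~0.38-0.40 suggests anoxic deposition
    if ratio_hr <= 0.38:
        return "oxic"
    # Anoxic: distinguish ferruginous (Fe2+-rich) from euxinic (H2S-rich) waters
    ratio_py = fe_py / fe_hr         # high values typify modern euxinic basins
    return "euxinic" if ratio_py > 0.7 else "ferruginous"

# A Neoproterozoic-style sample: iron-rich but sulfide-poor
print(classify_water_column(fe_hr=1.2, fe_total=2.0, fe_py=0.2))  # ferruginous
```

The Black Sea case in the text would score high on both ratios (euxinic), whereas the Neoproterozoic deep-water samples score high on the first ratio but low on the second (ferruginous).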

The next big step was also one of geochemistry, when animals became able to secrete calcium-rich skeletons by extracting that element from seawater. It took place around 543 Ma at the start of the Cambrian, while iron-rich deep waters were also common. Was there somehow a connection between the two chemical highlights of the late Precambrian? Calcium is very interesting metabolically: too little and cells do not function properly; too much and they die. The ‘window’ of metabolically tolerable calcium concentrations is narrow. One possible means whereby calcium-rich hard parts may have developed among animals is that their outer cells were harnessed by evolution to rid the body of excess calcium in an organised way, creating the opportunity for both armour and armaments. Would elevated iron enhance the solubility of calcium in ocean water?

See also: Lyons, T.W. 2008. Ironing out ocean chemistry at the dawn of animal life. Science, v. 321, p. 923-924.

The Great Ordovician Diversification

Geologists in general learn that tangible fossils first appeared at the start of the Cambrian Period. So they did, and we refer to that event as the Cambrian Explosion, but it was hardly explosive: there were very few fossil taxa of Lower Cambrian age. Indeed, by the end of the Cambrian only 500 or so genera are known. Fossils truly exploded in the later Ordovician, reaching 1600 genera, a number not exceeded until the start of the Cretaceous, 300 Ma later. Sudden rises in diversity, like mass extinctions, demand an explanation, but few have been offered for the late Ordovician explosive diversification, unlike the mass extinction at its close, which halved the number of genera living at the time. That has been attributed to the widespread glaciation of Gondwanaland, the fall in sea levels drastically reducing ecological niches (a wilder scenario is that the extinction was caused instantaneously by a gamma-ray burst from a nearby supernova, but there is little evidence for such an event).

The Ordovician has been assumed to have been a period of ‘supergreenhouse’ conditions because of the far greater proportion of CO2 in the atmosphere during the early Palaeozoic. Advances in stable-isotope analyses of small samples allow that idea to be tested (Trotter, J.A. et al. 2008. Did cooling oceans trigger Ordovician biodiversification? Evidence from conodont thermometry. Science, v. 321, p. 550-554). Julie Trotter of the Australian National University and her French and Canadian colleagues show that oxygen isotopes in conodonts ranging in age from Lower Ordovician to Lower Silurian changed steadily with time. Assuming the conodont animals were planktonic, the increase in the proportion of 18O represents decreasing sea-surface temperatures: from around 40ºC (truly supergreenhouse), through levels very similar to those of today’s tropical ocean, around 30ºC, to even more temperate levels (24ºC) by the close of the Ordovician. So it seems as if cooling encouraged rapid evolution of new organisms at that time.
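
The conodont thermometry rests on a simple linear relation between apatite 18O content and the temperature of the water in which it grew. As a hedged illustration (not the calibration Trotter et al. actually used), a classic Longinelli & Nuti-style phosphate equation shows how a rise of a few per mil in 18O maps onto the quoted cooling; the assumed seawater value of -1.0 per mil is an illustrative choice.

```python
# Illustrative conodont-apatite palaeothermometry. The coefficients follow
# the classic Longinelli & Nuti phosphate calibration; Trotter et al. used
# their own calibration and assumptions about seawater composition.

def apatite_temperature(d18O_phosphate, d18O_seawater=-1.0):
    """Sea-surface temperature (deg C) from phosphate delta-18O (per mil)."""
    return 111.4 - 4.3 * (d18O_phosphate - d18O_seawater)

# A rise of ~3.7 per mil in conodont delta-18O corresponds to roughly the
# 40 -> 24 deg C cooling quoted above (input values chosen to illustrate):
early = apatite_temperature(15.6)   # Early Ordovician, ~40 deg C
late = apatite_temperature(19.3)    # end-Ordovician, ~24 deg C
print(round(early, 1), round(late, 1))
```

The key point the numbers make plain: because the slope is about -4.3 degrees per per-mil, even small, well-resolved isotopic shifts translate into substantial temperature changes.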

Return to ‘Doggerland’

Because sea levels rose world-wide after the last glacial maximum, archaeologists have been largely stymied as regards exactly where migrating people lived and what they did. Much migration since fully modern humans left Africa around 70-80 ka is likely to have been ‘strandloping’ along coastal lowlands exposed as sea level fell while the last glacial period developed. Of course, this vast area is now drowned. It takes both a lot of work and a degree of good fortune to reconstruct what this landscape offered ancient humans. Luck definitely played its part in providing some clue about one of the last of the migrations: from continental Europe to the British Isles in the aftermath of the last glacial maximum. Trawlers working what was a great plain where the North Sea now sits have dredged up not only animal bones but also a superb bone harpoon point, recovered in 1931. It has been a while in coming, but researchers at Birmingham University, UK have finally defined and mapped that drowned land area – Doggerland (see: Spinney, L. 2008. The lost world. Nature, v. 454, p. 151-153).

Mercury in the news

It has been more than three decades since the Mariner 10 mission took a close look at the surface of the innermost planet, Mercury. In January 2008 NASA’s MESSENGER spacecraft flew past, and the 4 July issue of Science contained a special section on the early observations (Several reports 2008. MESSENGER Special Section. Science, v. 321, p. 58-94). These involve images, spectral observations, laser altimetry, estimates of chemistry in Mercury’s surrounding space and measurements of the mercurial magnetic field. The data bear on surface mineralogy, geological structures, regolith formation, cratering – especially the giant Caloris Basin – and evidence for volcanism.

Oh dear; water on the Moon…

The accepted wisdom about the Moon is that it is, and always has been, supremely dry. That notion stems from analyses of every solid rock brought back by the Apollo astronauts, and from the probability that the Moon formed from incandescent vapour blasted into orbit by a giant collision between the original Earth and an errant planet as big as Mars. Water, and indeed most volatile elements and compounds, ought to have been driven off the orbiting gas and debris that coalesced to form the Moon around 4.5 Ga ago. Most people believe that more or less everything the astronauts dragged back to Houston has been analysed: not so. There are millions of glass beads that constitute a sizeable fraction of the lunar regolith. Some of these turn out to be volatile-rich, and may have been blown out by early lunar volcanism (Saal, A.E. et al. 2008. Volatile content of lunar volcanic glasses and the presence of water in the Moon’s interior. Nature, v. 454, p. 192-195). If the glasses are volcanic in origin, that implies there is water in the Moon’s mantle. So, you might ask, how come the Moon is not a vibrant place rather than being as dead as a doornail? The Earth is so interesting partly because it is a wet planet. The Moon has very little in the way of heat production, so even if its mantle contained hydrous phases, it cannot reach basalt solidus temperatures unless energy is delivered mightily by impacts. That did happen around 4 Ga, when the lunar maria formed and became floored by gigantic floods of basalt. Yet those basalts are extremely dry, thereby posing a bit of a question for Saal and his colleagues.

See also: Chaussidon, M. 2008. The early Moon was rich in water. Nature, v. 454, p. 170-172.

Refined seismic tomography of North American subduction

For some time, relics of the Farallon plate that was subducted beneath North America during its late Mesozoic and Cenozoic westward drift have been known from seismic tomography, but only in a blurred form. Advances in computation from many seismic records are steadily improving the resolution of this revolutionary technique, and a more finely tuned picture of the mantle beneath the North American continent has now emerged (Sigloch, K. et al. 2008. Two-stage subduction history under North America inferred from multiple-frequency tomography. Nature Geoscience, v. 1, p. 458-462). The American-German-French team reveal several pieces of the ‘lost’ plate in an astonishingly complex 3-D representation of the North American mantle down to 1800 km. There are two main blocks: one still active and connected to the active subduction zone between British Columbia and northern California, dipping steeply to about 1500 km depth; the other inactive and stranded beneath the eastern part of the continent. The authors believe that the two separated around the end of the Mesozoic. They suggest that the break coincided with the within-plate deformation and volcanism of the Laramide orogeny, which lasted from 70-50 Ma and probably coincided with low-angle subduction of the Farallon plate. After the break, the zone of flat subduction ‘rolled back’ westwards, leaving a track of volcanism across the western part of the continent. The authors also ponder the relationship between the changed style of subduction and the thermal event that produced the Columbia River continental flood basalts at 17 Ma.

Geomagnetic cows

Unless you are a committed ‘towny’, you will probably have noticed that livestock tend to face in the same direction when feeding and lying down; so much so that a herd of grazing cows can resemble a collective harvesting machine. However, few of us country folk have bothered to see if the direction in which they face varies from day to day. In fact it does; but only a bit. Thanks to the high-resolution images provided by Google Earth, a group of German and Czech scientists have measured the alignment of almost 3000 cows and wild deer that show up on images of 241 localities on 6 continents (Begall, S. et al. 2008. Magnetic alignment in grazing and resting cattle and deer. Proceedings of the National Academy of Sciences, v. 105, p. 13451-13455). In all the populations the animals roughly align themselves north-south. More to the point, they line up parallel to the local lines of magnetic force with a remarkable degree of consistency.

Now, this is not a study aimed at the annual Ig Nobel Prizes, but a cunning check on whether herding animals have some kind of built-in compass akin to those in birds. That would have an evolutionary advantage in seasonal migration – domestic cows are derived from wild bovids of the Pleistocene temperate grassland plains. I have made a quick check of some local cattle and sheep, again using Google Earth, and I can’t say that I am convinced. But the study is based on statistical analysis of rose diagrams of the long axes of cattle, so there may be a tendency for poleward pointing that escapes casual inspection. Then again, the herds and flock that I examined may simply be independent-minded beasts. Yet, if Begall et al.’s stats are correct, geophysicists perhaps have a new means of exploring for local distortions in the magnetic field, as might happen near magnetite ores; incidentally, sometimes rich sources of vanadium. The method may delay disoriented ramblers lacking compass or GPS receiver, and might place them at some risk. Frankly, they would be better off looking for which side of trees the moss grows on…
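
The rose-diagram statistics behind such alignment studies treat a standing animal as defining an axis, not a direction: a cow facing 10º and one facing 190º are aligned the same way. The standard trick is to double the bearings before averaging. The sketch below illustrates that treatment on made-up bearings; it is not Begall et al.’s actual analysis.

```python
import math

# Axial circular statistics: double each bearing, average the unit vectors,
# then halve the resulting angle. R measures alignment strength
# (1 = perfectly aligned, 0 = random).

def mean_axis(bearings_deg):
    """Mean axis (0-180 deg) and resultant length R of axial bearings."""
    n = len(bearings_deg)
    s = sum(math.sin(math.radians(2 * b)) for b in bearings_deg)
    c = sum(math.cos(math.radians(2 * b)) for b in bearings_deg)
    R = math.hypot(s, c) / n
    axis = (math.degrees(math.atan2(s, c)) / 2) % 180
    return axis, R

# Invented cattle bearings scattered about a north-south axis:
axis, R = mean_axis([2, 178, 5, 171, 179, 8])
print(round(axis, 1), round(R, 2))   # axis near 0/180, R close to 1
```

A herd of truly ‘independent-minded beasts’ would return an R near zero, which is what a significance test (e.g. a Rayleigh test) would be run against.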

See also: Callaway, E. 2008. Magnetic cows in mystery alignment. New Scientist, v. 199 30 August 2008 issue, p. 10.

Screening for arsenic contamination

Millions of people in Bangladesh and West Bengal have, for up to 20 years, unwittingly drunk groundwater contaminated with arsenic as a result of natural processes. They are potential victims of the greatest mass poisoning in human history. Dreadful as the possible fate awaiting them might be – they may develop various cancers – their discovery and ten years of research into their problems have alerted geoscientists to the hazards of environments like those in which they live. That arsenic poses great dangers is common knowledge, but until unmistakable signs of arsenic poisoning appeared there (black wart- and mole-like skin lesions), the hazard was thought to be restricted to former mining areas where oxidation of iron sulfides released the traces of arsenic locked within those minerals. From studies in West Bengal and Bangladesh has emerged a cause that was completely unexpected: it involves one of the commonest minerals at the Earth’s surface, goethite or FeOOH. This yellow-brown colorant of many sediments has the remarkable property of being able to adsorb or ‘mop up’ a large range of elements dissolved in water with which it comes into contact. Among these is arsenic. In the oxidising conditions that sponsor the formation of goethite as a coating on sedimentary grains, the mineral actually prevents a great deal of natural, geochemical pollution. Yet, exposed to reducing conditions, commonly developed when buried organic material begins to rot, goethite may dissolve and release its potentially toxic load into groundwater. This is precisely the source of arsenic at levels more than 100 times the safe limit in some wells on the Ganges-Brahmaputra plains. The story does not stop there, however.

When sea level stood about 130 m lower than now, at the last glacial maximum, rivers rising in the Himalaya cut deep valleys in the coastal areas. As sea level rose, these rapidly filled with new sediments, most of which were stained with goethite, but they were interbedded with thick organic-rich peats that formed during periods of slow sea-level rise. It is the peats and more finely dispersed vegetable matter that caused the reduction and dissolution of goethite, and so released the arsenic that it carried. Especially high arsenic levels develop in sediments derived from specific areas in the Himalaya. So a suite of conditions conducive to arsenic hazard has emerged from unravelling the tragedy of the northern plains of the Indian subcontinent. It is possible to use that suite as a means of predicting other risky areas, one of the first to be revealed being the Red River delta of northern Vietnam: the population of Hanoi is at risk from well water drawn from the Red River sands and gravels. Systematic computer screening of known geology, topography and soil conditions in Southeast Asia is beginning to throw up other problematic areas (Winkel, L. et al. 2008. Predicting groundwater arsenic contamination in Southeast Asia from surface parameters. Nature Geoscience, v. 1, p. 536-542) where concentrations of arsenic in drinking water are highly likely to exceed the maximum recommended level of 10 μg l-1 (parts per billion). The pilot study highlights the known areas, but also the deltas of the Mekong River in Cambodia and southern Vietnam, the Irrawaddy in Burma (Myanmar) and the Chao Phraya basin of Thailand. Hopefully, geochemical testing will reveal in detail which wells are at risk and which are not in these regions: without careful testing it would be easy to reject perfectly safe groundwater, which, as found in Bangladesh, often occurs close to contaminated areas.
The implicated mineral, goethite, is itself a cheap and abundant means of remediation if contaminated water is passed through goethite-rich filters. But the large areas at risk in SE Asia, together with others discovered by epidemiologists in northwestern India, the Indus plains of Pakistan and Mongolia, create a chilling scenario for many other populous, sediment-rich areas elsewhere. Winkel et al.’s approach surely needs to be refined and applied globally.
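
Screening of the kind Winkel et al. describe amounts to mapping surface parameters onto a probability that well water exceeds the 10 μg l-1 limit. The toy logistic model below is entirely hypothetical (the predictors and weights are invented for illustration, not taken from the paper), and is meant only to show the shape of such a screen: flat, organic-rich, young fluvial terrain scores high.

```python
import math

WHO_LIMIT_UG_L = 10.0   # recommended maximum arsenic level, micrograms per litre

def exceedance_probability(holocene_fluvial, organic_soil, low_slope,
                           w=(-2.0, 1.8, 1.5, 1.2)):
    """Toy logistic model: probability that groundwater arsenic exceeds
    10 ug/l, from binary surface indicators (all names/weights invented)."""
    z = w[0] + w[1] * holocene_fluvial + w[2] * organic_soil + w[3] * low_slope
    return 1.0 / (1.0 + math.exp(-z))

# A flat, organic-rich Holocene delta scores high; old, well-drained
# terrain scores low:
print(round(exceedance_probability(1, 1, 1), 2))   # high risk
print(round(exceedance_probability(0, 0, 0), 2))   # low risk
```

The point of such a screen is prioritisation: it flags where geochemical testing of individual wells is most urgent, not which wells are actually contaminated.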

See also: Polizzotto, M.L. et al. 2008. Near-surface wetland sediments as a source of arsenic release to ground water in Asia. Nature, v. 454, p. 505-508; Harvey, C.F. 2008. Poisoned waters traced to source. Nature, v. 454, p. 415-416.

Cause of Javan mud volcano

Since May 2006 the largely urban Sidoarjo area of eastern Java has been plagued by continuous eruption of hot mud and steam from a vent that suddenly appeared. Around 7 km2 have been buried by up to 20 m of noxious mud, giving a total emission of about 0.05 km3 at a rate of 100 thousand m3 per day. Although nobody has been killed, the mud volcano is an economic and social disaster, 30 thousand people having been displaced. The area is one of active petroleum exploration, and locals blame a blowout from a nearby gas exploration well, though scientists and the exploration company point to the eruption having begun a couple of days after a magnitude 6.3 earthquake around the city of Yogyakarta, 250 km away. If the latter, economic losses may be difficult to recover from insurers; if the former, there will be a rare old furore. So a thorough evaluation of the possible cause is welcome (Tingay, M. et al. 2008. Triggering of the Lusi mud volcano: Earthquake versus drilling initiation. Geology, v. 36, p. 639-642). Being a mix of Australian, German and British geologists, the authors have no axe to grind. They consider that seismic influence was highly unlikely in this case, although many mud volcanoes have formed close to earthquake epicentres in other areas. On the other hand, the well being drilled at the time suffered a loss of drilling mud shortly before the volcano began to erupt, suggesting escape into fractures at depth around the well. Moreover, the hole was not cased at depth. The most likely trigger was the creation of a passageway up the well that allowed high-pressure fluids to escape from the target limestone sequence, 3 km down, into shallower unconsolidated clays, which were liquefied and expelled as a lateral blowout.
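
The quoted figures are easy to cross-check: a total of ~0.05 km3 emitted at ~100 thousand m3 per day implies an eruption duration of about 500 days, i.e. roughly 1.4 years, consistent with an onset in May 2006 and the paper’s 2008 publication.

```python
# Consistency check on the Lusi figures quoted above. Note the ceiling
# implied by the areal figures: 7 km2 buried by *up to* 20 m gives at most
# 0.14 km3, so 0.05 km3 corresponds to an average thickness of ~7 m.

total_m3 = 0.05 * 1e9            # 0.05 km3 in cubic metres
rate_m3_per_day = 100_000
days = total_m3 / rate_m3_per_day
print(days, round(days / 365.25, 1))
```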

Testing hypotheses for the onset of Northern Hemisphere glaciation

Whereas Antarctica began to develop significant ice caps in the early Oligocene (maybe in late Eocene times), those of the Northern Hemisphere, principally on Greenland, did not arise until about 3 Ma ago. There are several hypotheses for that onset of the Great Ice Age: closure of the Panama seaway and increased poleward heat transport in the North Atlantic; perhaps related development of the El Niño cycle in the East Pacific; uplift of the Himalaya and Rocky Mountains changing atmospheric circulation; lowered atmospheric CO2; or a combination of all four that allowed Milankovitch astronomical forcing to get a grip on Earth’s climate ‘machine’. Testing the hypotheses is somewhat more difficult than finding empirical support for them, i.e. coincidences in timing. Climate scientists from Bristol, Cambridge and Leeds universities in the UK have attempted such a test, using a complex climate model coupling atmosphere-ocean circulation and ice-sheet models (Lunt, D.J. et al. 2008. Late Pliocene Greenland glaciation controlled by a decline in atmospheric CO2 levels. Nature, v. 454, p. 1102-1105). Only a decrease in the greenhouse effect could have transformed climate over Greenland sufficiently to equip it with a large ice sheet, the other three main hypotheses falling a long way short, although each could have led to small ice volumes. Significantly, the study failed to find support for any of the terrestrial processes having been capable of ‘priming’ orbital and rotational forcing to such an extent that it triggered glaciation. Despite the claims by the authors, as computing power goes up and the resolution of feasible climate modelling comes down, it is quite likely that within a few years there will be another view ‘supported’ by models.

Climate shock of the Younger Dryas

Between 12,900 and 11,500 years before the present, high northern latitudes returned to almost full glacial conditions after about 6000 years of warming since the last glacial maximum. Just prior to the Younger Dryas cooling event, conditions had warmed sufficiently that European people had migrated northwards, some to occupy what are now the British Isles. Temperate grasslands teeming with game were the probable attraction, and still-low sea levels permitted crossing of what became the North Sea. Although it is possible that some people remained in Britain through the thousand-year mini glaciation, conditions would have been at the extremes of winter cold and year-long windiness, judging from the Greenland ice-core records of air temperature and dust. Those records have shown for some time that the transition from warmth to frigidity was rapid, but not how rapid. The cold spell had much in common with sudden, millennial-scale coolings repeated several times during the run-up to the last glacial maximum. Each such event has been linked with interruptions in the shallow and deep circulation of North Atlantic ocean waters, a likely trigger having been reduction in the salinity of surface waters by floods of fresh water: either through collapses of ice caps and melting of icebergs or, in the case of the Younger Dryas, release of massive amounts of fresh water from glacially blocked lakes in North America. One result would have been failure of cold surface water to sink at high latitudes, thereby shutting down the suction effect that drags warm water northwards to raise temperatures, especially in NW Europe.

There are concerns that unsuspected climate shifts stemming from the Earth System, rather than from astronomical influences – the Milankovitch effect – may characterise the period of global warming caused by human activities. Increased precipitation at high northern latitudes or melting of ice on Greenland could result in falling ocean salinity and slowing or shutdown of the North Atlantic heat conveyor. Two sets of data published in August 2008 highlight potential climate shifts that may arise with virtually no warning. Both rely on the potentially high resolution of cores through ice caps and stagnant lakes that are annually layered, which has hitherto not been fully exploited by climate scientists. European and North American researchers have focussed on the upper part of the latest core through the Greenland ice cap, using two or three samples from each annual layer (Steffensen, J.P. and 19 others 2008. High-resolution Greenland ice core data show abrupt climate change happens in few years. Science, v. 321, p. 680-684). At the onset of the Younger Dryas, deuterium and oxygen isotopes show a marked cooling, within 1 to 3 years, at the source of the moisture precipitated as snow, which the authors ascribe to the Intertropical Convergence Zone migrating through a major change in atmospheric circulation. Temperature over the Greenland ice cap also changed, but over about 50 years [note, however, that the sharp warming of the Bølling episode took less than a decade].

The second study uses annually varved sediments that accumulated in an isolated lake in central Germany, filling a circular depression formed by explosive volcanism (Brauer, A. et al. 2008. An abrupt wind shift in western Europe at the onset of the Younger Dryas cold period. Nature Geoscience, v. 1, p. 520-523). The seasonal sediment layers change in thickness, colour and mineralogy as warmth gave way to the frigidity of the Younger Dryas. One of the proxies, the iron content of sediments deposited under anoxic conditions during winters, fell significantly within a year at 12,679 BP, along with a four- to five-fold increase in the rate of sediment deposition. Together with shifts in the lake biota, these features suggest to the authors that within a year wind strength increased greatly, probably due to a greater incidence of storm-force westerlies brought on by a change in the position of the jet stream. Today, westerly winds add to warming in northern Europe; around 12.7 ka they added to cooling, which can only be explained by global cooling or a southward excursion of sea ice in the North Atlantic.

Neither abrupt climate shift can be reproduced by today's climate models, even when they are fed actual data from the time just before the shifts took place. It follows that similar shifts in the near future could make themselves felt with no warning.

Opinion has drifted back and forth regarding the global effects of the Younger Dryas, evidence for its effects in the Southern Hemisphere being scanty. The best place to look for direct evidence would be in mid-latitude glaciers, especially in South America and New Zealand, where they are abundant. A study of the largest of these, the Southern Patagonian Icefield (Ackert, R.P. et al. 2008. Patagonian glacier response during the late glacial-Holocene transition. Science, v. 321, p. 392-395), indicates that the ice there advanced around the time of the Younger Dryas. However, the dating indicates that the advance lay outside the 1300-year span of the cold period in the Northern Hemisphere. It was more likely a local response to increased precipitation from air moving from the east.

See also: Flückiger, J. 2008. Did you say “fast”? Science, v. 321, p. 650-651.

The Sichuan earthquake

Beneath the Dragon’s Gate (Longmenshan) Mountains of Sichuan Province, China an apparently ‘stuck’ segment of a major fault complex failed on 12 May 2008 (Stone, R. 2008. An unpredictably violent fault. Science, v. 320, p. 1578-1580). Unprecedented access granted to the world’s media exposed us to the full horror of a major seismic event in mountainous terrain and its effects on habitations, especially schools, whose building standards left them unable to withstand the ground shaking. Around 70 thousand souls died, thousands more are still unaccounted for, and more than 1.5 million people have become refugees in a country that is rapidly emerging from Third World status. Now that aftershocks have subsided, massive threats remain from the many landslide-blocked rivers and fractured dams. Yet we also witnessed the enormous mobilisation of the People’s Army within hours of the earthquake and truly heroic attempts to rescue as many trapped people as possible. Without that swift response the casualties would undoubtedly have been greater.

China boasts one of the most sophisticated seismic warning systems outside California and Japan, deploying robotic seismometers and GPS recorders in the riskiest regions, backed by a 10 000-strong Earthquake Administration. Sadly, Chinese seismologists regarded the faults shown to be accumulating displacement most quickly as those most likely to fail, whereas it is generally ‘stuck’ segments, where strain builds without visible movement, that fail catastrophically. China has a long-respected reputation for gathering data generally regarded as ‘non-scientific’, such as well-water levels and animal behaviour, that might give empirical clues to impending earthquakes. The Tangshan earthquake of 28 July 1976, which killed a quarter of a million people 160 km from Beijing, was preceded by reports of shifts in the water table, odd ‘earthlights’ and unusual animal behaviour. Paying serious attention to reports of such oddities by ordinary people is said to have avoided untold numbers of deaths in the period since Tangshan, but not in the case of Sichuan. Strangely, a Taiwanese weather satellite detected decreased electrical activity in the ionosphere above Sichuan hours before the recent earthquake (see Clouds and large earthquakes in the May 2008 issue of EPN). Geophysicists have noted increased emissions of radon in the period immediately preceding some major earthquakes, which might conceivably affect the ionosphere. Whatever, prediction of catastrophic earthquakes has had very few successes in terms of lives saved, and the signal lesson from Sichuan, as from the earthquake that destroyed the Japanese city of Kobe in 1995, is that building standards in zones of active faulting must take account of the risk of ground movement.

See also: Stone, R. 2008. Landslide, flooding pose threats as experts survey quake’s impact. Science, v. 320, p. 996-997.

Extraterrestrial impactors

July 2008

June 30, 2008 was the centenary of the mysterious Tunguska event that devastated more than 2000 km2 of forest 1000 km north of Lake Baikal in Siberia at 7 am a hundred years before. Much of the mystery stems from there being no sign of a crater, and therefore of the process involved. Speculation about the cause of a massive explosion 5 to 10 km above the surface still goes on (Steel, D. 2008. Tunguska at 100. Nature, v. 453, p. 1157-1159). Ideas have ranged over a gamut of high-energy physical processes: a deuterium-rich, fluffy comet ignited as a thermonuclear explosion by hypersonic atmospheric entry; a lump of antimatter; a miniature black hole; explosive release and ignition of natural gas; a ‘Verneshot’; and even an alien spacecraft involved in an accident. The chances are that the explosion was more mundane, akin to what occurs inside a diesel engine. Compressive heating of the air in front of a small asteroid or comet travelling at more than 15 km s-1 would generate temperatures around 50 thousand degrees. Flash vaporisation of the projectile would then have produced a massive shock wave at the epicentre, with nothing intact reaching the ground. It is thought that many small craters, such as Meteor Crater in Arizona, result from impacts by strong metallic asteroids, whereas stony ones or comets easily disintegrate. Whatever, research still goes on at the site, now completely reforested.
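The diesel-engine analogy can be checked on the back of an envelope with the ideal-gas stagnation-temperature relation T0 = T(1 + ((γ−1)/2)M²). A minimal sketch, with illustrative sea-level values that are assumptions rather than figures from the article:

```python
# Rough stagnation-temperature estimate for air compressed ahead of a
# bolide, using the ideal-gas relation T0 = T * (1 + (gamma - 1)/2 * M^2).
# Ambient values below are illustrative assumptions, not from the article.

GAMMA = 1.4          # ratio of specific heats for diatomic air
T_AMBIENT = 288.0    # K, approximate sea-level air temperature
SOUND_SPEED = 340.0  # m/s, approximate sea-level speed of sound

def stagnation_temperature(speed_m_s, t_ambient=T_AMBIENT,
                           gamma=GAMMA, sound_speed=SOUND_SPEED):
    """Ideal-gas stagnation temperature for a given entry speed."""
    mach = speed_m_s / sound_speed
    return t_ambient * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)

t0 = stagnation_temperature(15_000.0)  # a 15 km/s entry
print(f"{t0:.0f} K")
```

The ideal-gas answer comes out around 10^5 K; in reality dissociation and ionisation of air absorb much of that energy, which is consistent with the tens of thousands of degrees quoted above.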

The centenary spurred Nature to devote pages 1157-1175 of its 26 June 2008 issue to impact-induced features on Earth and other planets, together with three Letters and two reviews. Topics covered include the search for near-Earth objects and the Spaceguard survey, which is beginning to suggest that humanity can concentrate on global warming for the next century or so, and truly monster impact structures on the Moon and Mars, including evidence for one that may have ‘scalped’ northern Mars. One of the reviews notes that a sci-fi novel (Niven, L. & Pournelle, J. 1977. Lucifer’s Hammer. Harper Collins) inspired the Alvarez father-and-son team that first postulated an impact origin for the K-T mass extinction event. The second review is of a highly realistic sculptural depiction of a pope (John Paul II) knocked over by a meteorite: perhaps planetary science’s first involvement, literally, in what some might consider lèse majesté. So, in many ways, quite an event…

See also: Cohen, D. 2008. The day the sky exploded. New Scientist, v. 198, 28 June 2008 issue, p. 38-41.

Entire Landsat archive now accessible by all, free of cost

May 2008 saw probably the most significant announcement for geologists of this century (The Landsat Science Team 2008. Free access to Landsat imagery. Science, v. 320, p. 1011; and see landsat.usgs.gov/images/squares/USGS_Landsat_Imagery_Release.pdf). Given a broadband internet connection, it will soon be possible to download Landsat data (MSS, TM and ETM+) covering any area on Earth free of charge from the US Geological Survey, provided the area occurs among the >2 million scenes archived by their EROS Data Center. This act of open-handed generosity by the USGS marks a key step in revolutionising the activities of geologists of the Third World, especially those in Africa, the least well-mapped continent. Landsat data, and those from the Japanese-US ASTER instrument aboard the Terra satellite, offer huge potential for mapping rocks and soils, especially in dry lands, at scales of up to 1:50 000. Africans need to know about their physical resources, especially water, rather than depending on well-heeled mining, petroleum and consulting companies from rich countries, which have more or less monopolised (and sometimes eked out) knowledge of the continent’s riches. Now they can begin to find out for themselves.

Satnavs useful to hydrogeologists as well as white-van drivers

Microwaves emitted by radar remote sensing systems do not merely produce useful images of the Earth when cloud cover defeats all else. They interact with the surface in such a way that their characteristics change, specifically as the moisture content of surface materials such as soil varies. This phenomenon has spurred development of satellite-borne estimation of soil moisture. But since the launch of constellations of satellites aimed at precise navigation, such as the well-known US Global Positioning System (GPS) and Europe’s planned Galileo system, everywhere on Earth is continually bathed in weak microwaves. Researchers at the University of Colorado, Boulder have tested the concept using a single GPS receiver recording continuously at one site in Tashkent, Uzbekistan (Larson, K.M. et al. 2008. Using GPS multipath to measure soil moisture fluctuations: initial results. GPS Solutions, v. 12, p. 173-177).

Multipath signals are received when an electromagnetic signal arrives at an antenna not along a direct path from its source, but indirectly, through reflection by an object or surface near the antenna. Multipath contaminates all GPS measurements, leading to small positional errors, because the receiver locks onto a signal that mixes the direct and reflected components. It is difficult to isolate the effects of multipath in GPS carrier-phase signals. However, the signal-to-noise ratio (SNR) data computed by a GPS receiver are also affected by multipath and provide an easier route to quantifying its effects. In fact the authors found that the amplitude of the SNR variation over time correlates well with changes in local soil moisture following rainy and dry episodes. Although only a first test of the concept, the results are sufficiently encouraging that specialist GPS receivers may be developed that allow both precise positioning and accurate measurement of soil moisture – potentially a must for hydrogeologists, especially in arid and semi-arid terrains.
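The principle can be illustrated with a toy model. Interference between the direct and ground-reflected signals makes the detrended SNR oscillate roughly as A·sin(4πh/λ·sin e), where e is satellite elevation, h the antenna height above the ground and λ the carrier wavelength; the amplitude A tracks surface reflectivity, which rises with soil moisture. A minimal sketch of recovering that amplitude by least squares, in which the function names, antenna height and amplitude values are illustrative assumptions and not taken from the Larson et al. paper:

```python
import numpy as np

WAVELENGTH = 0.19  # m, approximate GPS L1 carrier wavelength
ANT_HEIGHT = 1.8   # m, assumed antenna height above the soil surface

def snr_residual(elev_deg, reflect_amp, h=ANT_HEIGHT, wl=WAVELENGTH):
    """Toy detrended SNR: direct/reflected interference oscillates with
    sin(elevation); reflect_amp stands in for surface reflectivity,
    which rises with soil moisture."""
    phase = 4.0 * np.pi * h / wl * np.sin(np.radians(elev_deg))
    return reflect_amp * np.sin(phase)

def fitted_amplitude(elev_deg, residual, h=ANT_HEIGHT, wl=WAVELENGTH):
    """Least-squares fit of A*sin(phase) + B*cos(phase); the recovered
    amplitude sqrt(A^2 + B^2) is the moisture-sensitive quantity."""
    phase = 4.0 * np.pi * h / wl * np.sin(np.radians(elev_deg))
    basis = np.column_stack([np.sin(phase), np.cos(phase)])
    coef, *_ = np.linalg.lstsq(basis, residual, rcond=None)
    return float(np.hypot(coef[0], coef[1]))

elev = np.linspace(5.0, 30.0, 300)  # a rising satellite track
dry = fitted_amplitude(elev, snr_residual(elev, reflect_amp=0.1))
wet = fitted_amplitude(elev, snr_residual(elev, reflect_amp=0.3))
print(dry < wet)  # wetter soil -> stronger multipath modulation
```

In practice the residual would come from subtracting a low-order polynomial from recorded SNR, and the fitted amplitude would be tracked day by day against rainfall, but the sinusoid-fitting step above is the core of the idea.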

Dietary negation

The hominin genus Paranthropus rarely hits the front page by comparison with the related australopithecines, despite having had jaw and cheek bones that would put Sandie Shaw and a variety of 60s catwalkers to shame. (It is only polite to observe that there the vague similarity ends, for paranthropoids have a bizarre skull crest for the attachment of jaw muscles, and brow ridges that were probably better than a baseball cap at preventing glare.) The first (P. boisei), unearthed at Olduvai, Tanzania in 1959, was dubbed ‘Nutcracker Man’ by Phillip Tobias, who described the skull. Despite the formidable chewing tackle that drove its massive, flat, thickly enamelled cheek teeth, wear on their surfaces is little different from that on the teeth of ‘gracile’ australopithecines (Ungar, P.S. et al. 2008. Dental microwear and diet of the Plio-Pleistocene hominin Paranthropus boisei. PLoS ONE, v. 3, on-line e2044 (www.plosone.org) doi:10.1371/journal.pone.0002044). They show no sign of the microscopic pitting that characterises teeth of living primates that eat hard, brittle foods such as nuts or woody stems. Similar studies of the teeth of P. robustus show insufficient wear to suggest an habitual diet of that kind, although it may have eaten such foods when others were in short supply. Chances are that huge jaws and big teeth evolved to give paranthropoids a wider choice of diet, and hence greater fitness in a climatically fluctuating terrain. It seems they chose to eat soft foods when available, as do gorillas today. In any event, they were remarkably successful creatures, and the two species cohabited the East African savannah with several human species, including H. erectus, for around a million years from their appearance at 2.2 Ma. Carbon-isotope data obtained from 20 paranthropoid and 25 australopithecine teeth by other researchers reveal a broad but similar diet for both, i.e. a mix of grasses and fruits, suggesting that the eating habits of both could shift from those of apes to those of baboons. However, such C-isotope data cannot distinguish between exclusive vegetarianism and eating the flesh of herbivores, and low dental wear is also associated with meat eating…

See also: Gibbons, A. 2008. Australopithecus not much of a nutcracker. Science, v. 320, p. 608-609; part of a report on the April 2008 meeting of the American Association of Physical Anthropologists.