A tsunami’s reach

 

The Boxing Day 2004 Indian Ocean tsunamis were recorded by tide gauges across the planet, both as amplitude and time of arrival. Armed with such calibrating data, detailed ocean-floor bathymetry and means of modelling wave propagation, oceanographers and geophysicists from the US, Canada and Russia have been able to estimate just how the terrible waves travelled the globe (Titov, V. et al. 2005. The global reach of the 26 December 2004 Sumatra tsunami. Science, v. 309, p. 2045-2048). Highlighting their article wonderfully is a colour-coded map that shows offshore amplitude and arrival time for the world’s oceans and shores. Its most fascinating feature is the manner in which the worst of the disturbance was guided by ocean-ridge systems, principally the Ninetyeast and Southwest Indian Ridges, but also the Mid-Atlantic Ridge. That is of no comfort to the survivors of the disasters around the Bay of Bengal, although the Irrawaddy delta in Myanmar was spared by the influence of the northern part of the Ninetyeast Ridge. That Madagascar and East Africa, except for northern Somalia, suffered far less than anticipated is thanks to the peculiar effect of the ridge systems.
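The wave-propagation modelling rests on the shallow-water approximation, in which a tsunami’s speed depends only on water depth – which is also why shallow ridge crests slow and refract the wavefront, guiding it like a waveguide. A minimal sketch of the relationship (illustrative depths, not the paper’s bathymetry):

```python
import math

def tsunami_speed(depth_m: float) -> float:
    """Shallow-water wave speed c = sqrt(g * h), valid when the
    wavelength greatly exceeds the depth, as for open-ocean tsunamis."""
    g = 9.81  # gravitational acceleration, m/s^2
    return math.sqrt(g * depth_m)

# Over a 4 km abyssal plain a tsunami moves at jet-airliner speed,
# but it slows dramatically over a shallow ridge crest.
for h in (4000, 2000, 100):
    c = tsunami_speed(h)
    print(f"depth {h:>5} m: {c:6.1f} m/s ({c * 3.6:6.0f} km/h)")
```

Because the shallower part of a wavefront travels slower, energy is refracted towards ridge crests, which is the focusing effect the map makes so vivid.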

The fluoride saga

Early 21st-century archaeological work on Icelandic burial grounds of the 18th century exhumed victims of the 1783-84 Laki eruption. Many skeletons bore the distinctive signs of bizarre bone growth that characterise massive ingestion of fluoride ions. The victims had endured prolonged and worsening suffering after exposure to the hydrogen fluoride-rich gases that seem to characterise Laki’s effusions. It is a now well-documented geotragedy. Equally well recorded are the lives of Iceland’s early inhabitants from the 8th century onwards, but in the form of epic prose in Old Norse: the Sagas. Since Iceland is prone to repeated volcanism, an obvious question is, “Did the Viking heroes experience the same problems?”

One of them was huge, both a righter of injustice and a tidy hand with the battleaxe. Egil Skallagrimsson was ‘a man who caught the eye’, reputedly being awesomely ugly and capable of jerking an eyebrow down to his chin line. Such attributes might seem to have been passed on to the legendary centre-half, ‘Skinner’ Normanton, who graced Barnsley football club in the 1950s. The traditions perhaps, but Egil’s visage was probably a result of chronic fluorosis rather than parentage (Weinstein, P. 2005. Palaeopathology by proxy: the case of Egil’s bones. Journal of Archaeological Science, v. 32, p. 1077-1082). His relatives Hallbjorn Half-troll and Grim Hairy-Cheeks seem from the saga to have been equally afflicted, yet successful. As befits a Viking battler, Egil had a thick skull; when exhumed by descendants in the 12th century, it was found to be ridged like a scallop shell – the attending priest hit it with the back of an axe, to no avail. Some have inferred abnormal bone growth and deformities due to Paget’s disease, but that tends to produce massive but weak growths, following repeated crumbling of bone. Weinstein’s theory may be verifiable, since Egil’s Saga reveals the final resting place of this enigmatic giant.

Source: Pain, S. 2005. Egil the enigmatic. New Scientist, 17 September 2005, p. 48-49

Earth’s biggest ‘bull’s eye’

Since astronauts and satellite imaging devices first made pictures from orbit, the Richat structure of Mauritania has been top of the list for oddness. Sitting out in the Sahara is a series of perfectly concentric rings that are almost circular. The structure is at least 40 km across, and even today many geoscientists use images of Richat as a superb example of a meteorite impact. It is not (Matton, G. et al. 2005. Resolving the Richat enigma: Doming and hydrothermal karstification above an alkaline complex. Geology, v. 33, p. 665-668). Spectacular from space, Richat is not easily accessible. Early field work reported a breccia on a kilometric scale at its high-relief core, which unsurprisingly added to its designation as an impact structure. There are other possibilities: a structural dome, perhaps due to interference between open folds of a couple of generations; or the result of upward forces from magmatic activity, such as an underlying plutonic diapir.

The rocks involved are Neoproterozoic to Ordovician sediments of various kinds, which dip radially outwards from Richat’s core, so it is some kind of dome rather than the sort of circular breach expected of an impact. Two large, basaltic ring dykes, whose centre coincides with that of the dome, cut the sediments. Other igneous materials are: carbonatites (formed from unusual carbonate-rich magmas) in dykes and sills; alkaline silicate-rich intrusions and flows close to the central breccia; and kimberlites in the form of plugs and sills. The central breccia is in fact a roughly horizontal lens, about 3 km across, made largely of local sedimentary material, mostly once carbonates, set in a silica-rich matrix. The clasts range from highly angular to rounded, but show abundant evidence of some kind of corrosion and silicification. Matton et al. interpret the breccia as a zone of intense dissolution that caused the original sediments at the structure’s core to collapse as their volume was reduced while magmatic gases (supercritical fluids) rushed to the surface. So the Richat structure has all the hallmarks of doming above an alkaline igneous pluton, followed by intense hydrothermal activity that was able to dissolve carbonates and produce features akin to those formed by weathering in areas of karst. Rather than being particularly ancient, the igneous activity dates to the Middle Cretaceous. Richat is still unique. Diatremes (vertical breccia tubes) formed by explosive release of fluids from alkaline magmas are quite common, especially in areas dotted with kimberlites, but nowhere else have they produced doming on such a grand scale and with such a spectacular shape.

Detecting the effects of slab to wedge fluid transfer in subduction zones

A fundamental hypothesis concerning the formation of magmas above subduction zones is that partial melting in the over-riding wedge of mantle is induced by upward transfer of water vapour produced by dehydration of the descending lithospheric slab. Many aspects of the chemistry of igneous rocks in supra-subduction zone settings are explained by such dehydration-hydration. However, such fluid transfer is difficult to demonstrate, other than by its ‘second-hand’ geochemical effects on crustal magmas. It should have another, physical effect: in the presence of water vapour, some of the dominant olivine in mantle rocks should break down to form hydrated minerals of the serpentine family. Since olivine is an iron-magnesium silicate, whereas serpentine contains only magnesium, the hydration reaction should release iron to crystallise in the form of an iron oxide, specifically Fe3O4 or magnetite. Geophysicists at the US Geological Survey have been able to detect at first hand the effects of this process, thereby allowing zones of hydration in the mantle wedge to be mapped (Blakely, R.J. et al. 2005. Subduction-zone magnetic anomalies and implications for hydrated forearc mantle. Geology, v. 33, p. 445-448). As well as finding substantial magnetic anomalies over the forearc of the Cascadia subduction zone in Oregon, caused by the magnetite released during olivine hydration, they show gravity anomalies that reflect density variations in the underlying mantle. The other aspect of the olivine-serpentine transformation is a large decrease in density, which should lower the gravity anomaly if sufficient olivine has been transformed. The coincidence of gravity lows with magnetic highs allowed Blakely et al. to model the location of hydrated mantle wedge in the Cascadia subduction system: probably just above the zone where subducting oceanic crust is transformed to eclogite.
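The hydration described above can be summarised by idealised end-member reactions. These are simplified sketches, not equations from Blakely et al.: natural olivine is a Mg-Fe solid solution, so both reactions proceed together, with the iron ending up largely in magnetite:

```latex
% Forsterite (Mg end-member): olivine + water -> serpentine + brucite
2\,\mathrm{Mg_2SiO_4} + 3\,\mathrm{H_2O} \rightarrow \mathrm{Mg_3Si_2O_5(OH)_4} + \mathrm{Mg(OH)_2}

% Fayalite (Fe end-member): olivine + water -> magnetite + silica + hydrogen
3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2}
```

The second reaction shows why serpentinisation produces a strongly magnetic rock: ferrous iron rejected from the silicate lattice crystallises as magnetite.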

Serpentinite also has a marked effect on the rheology of mantle rocks, because of its ease of ductile deformation. It should allow subduction deformation to proceed continuously within the part of the system where it occurs, yet it may focus the sudden strain release of great earthquakes at shallower levels up-dip of its position.

Arsenic removal no cure

It is now a decade since the enormity of natural arsenic contamination in groundwater below the great plains of northern India and Bangladesh came to light. In 1995 the World Health Organisation announced that this waterborne arsenic was causing the world’s largest case of mass poisoning. Since then other areas at risk have emerged in East and Central Asia and South America. The tragedy is that groundwater generally presents the safest option for drinking water, because sediments filter water and encourage biogenic oxidation that removes common pathogens. That tens of millions of people in West Bengal and Bangladesh face stealthy poisoning results from channels cut in the low-lying plains during the last glacial maximum being filled rapidly with sediment as sea level rose during climatic recovery. Sedimentation buried large amounts of organic debris, creating anoxic conditions in the shallower sediments. Reducing conditions encourage breakdown of the common colorant in sediments, the iron hydroxide grain coatings that adsorb most arsenic and other ions from water and release them on dissolving. That this should occur was unsuspected during a massive programme of well sinking to relieve endemic ill health from waterborne disease, yet early signs that arsenic had replaced pathogens as a hazard were widely ignored, despite a few warning voices who discovered the unmistakable signs of arsenicosis in the 1980s. These include disfiguring pigmented skin spots and horny growths on hands and feet.

By 1995 the rest of the world took notice, pouring in funds to document occurrences and causes, and to remediate a clearly catastrophic situation. There are three main strategies: to remove arsenic from well water using chemical filters; to return to water from surface sources, though with careful processing to remove pathogens; or to sink wells below the level known to encourage arsenic release from iron hydroxide dissolution. For two decades affected populations had been bombarded with encouragement to turn to groundwater, against their better judgement – they termed it the Devil’s water. Once using wells, they saw that infant mortality plummeted, so they developed a new enthusiasm for water deemed safe. Caught on the horns of a dilemma when arsenicosis appeared, they were reluctant to return to what appeared to be the greater of two evils. In only a few places were wells deepened to safe depths, and the externally sponsored drive for a solution centred on arsenic-removal techniques. Even that was not widespread: of millions of risky wells, some 2000 were equipped with arsenic-extracting devices, at around US$ 1500 each. It now emerges that the technologies chosen are not doing their intended job (Hossain, M.A. (and 10 others) 2005. Ineffectiveness and poor reliability of arsenic removal plants in West Bengal, India. Environmental Science & Technology, v. 39, p. 4300-4306). The team, led by Dipankar Chakraborti, who first spoke out about arsenicosis in 1983, tested the efficacy of 18 different devices installed in West Bengal. Only two reduced arsenic levels to the maximum of 50 parts per billion accepted by the Indian government, which is itself five times more than the level deemed safe by the WHO. The team’s view, supported by the agency that did most to encourage the massive well-sinking programme since the 1970s (UNICEF), is that the only realistic solution is a return to rainwater harvesting and purification.

See also: Ball, P. 2005. Arsenic-free water still a pipedream. Nature, v. 436, p. 313.

Legendary events at the Gibraltar Straits

Everyone has heard of Atlantis, but few would care either to point to its former position, or to accept its existence without a shed-full of salt. Nevertheless, no less an authority than Plato first described the legend of Atlantis in the 4th century BC, following verbal accounts that originated in pharaonic Egypt. In the last decade a number of legends, if not their religious connotations, have received scientific support. Foremost among these is that of the biblical Flood, which Ryan and Pitman pursued relentlessly, using the Epic of Gilgamesh as a geographic and chronological guide. They discovered that the Black Sea had catastrophically filled through the Bosphorus once global sea level, rising with glacial melting, topped the level of its floor. Their evidence includes numerous examples of habitations now inundated by the Black Sea.

As with Ryan and Pitman’s work, one key to resolving a real basis for a legend is carefully puzzling out clues in the most detailed accounts of it. In the case of Atlantis, the clues come from Plato himself (Gutscher, M-A. 2005. Destruction of Atlantis by a great earthquake and tsunami? A geological analysis of the Spartel Bank hypothesis. Geology, v. 33, p. 685-688). Marc-André Gutscher and previous workers focused on Plato’s geographic description of Atlantis, as well as its fate. Plato clearly specified an island in the Atlantic beyond the Straits of Gibraltar, and an earthquake and flood that put paid to the Atlanteans in a single day. Indeed, bathymetry does show well-defined shallows (less than 100 m depth) in such a location, but only about 5 km across. This is the Spartel palaeo-island, on which Gutscher focuses. Until the final, decisive rise in sea level after around 12 ka, Spartel would have been a low island. Plato’s account is supported by the existence of a proto-subduction zone on the Atlantic sea floor off the Straits of Gibraltar; a major earthquake on it devastated Cadiz in 1755, partly because of a 10 m tsunami. Offshore sediments include turbidites that indicate 8 tsunamis since 12 ka, suggesting a 1500- to 2000-year periodicity of large earthquakes at the entrance to the Mediterranean. Plato’s version of the events includes a rough chronology that suggests a time around 11.6 ka before the present. The thickest of the tsunami-driven turbidites is of roughly that age. Unfortunately for the hypothesis that Spartel was Atlantis, at that time only two tiny islets would have stood above the waves. Seismic destruction of coastal regions by tsunamis is something that might easily become legendary, the more so in the distant past. There is one other possibility that might revive the Spartel hypothesis, demonstrated by the great Indian Ocean tsunami of 26 December 2004.
Very powerful earthquakes can also result in massive displacement of the crust, of the order of tens of metres. Spartel might have sunk repeatedly since 11.6 ka as a result of later events.

Documenting the Palaeogene transition from ‘hothouse’ to ‘icehouse’

It is well-established that the first large ice sheets that presaged descent into the oscillating climate of the Neogene formed about 34 Ma ago (the Eocene-Oligocene boundary) on Antarctica. Some 21 Ma before, at the Palaeocene-Eocene boundary, global temperatures had leaped following what many believe was a massive blurt of methane previously held in cold storage in ocean-floor sediments as gas hydrate. A monstrous ‘greenhouse’ climatic system must sometime in the interim have reverted to the cooling trend begun at the outset of the Cenozoic. Defining that transformation relies on assembling and interpreting newly available, high-resolution records of climatic proxies through the Eocene and Early Oligocene (Tripati, A. et al. 2005. Eocene bipolar glaciation associated with global carbon cycle changes. Nature, v. 436, p. 341-346). Hitherto, the Eocene part of the ocean-floor sedimentary column had been poorly sampled, so that only broad trends showed.

As you might expect, the change was not a simple transition. At about 42 Ma the record of the Pacific Ocean calcite compensation depth (CCD – the depth at which carbonate remains are dissolved in the deep oceans) shows a remarkable perturbation, long before the CCD dipped decisively from about 3.5 km to around 5 km at the start of the Oligocene. A close look at the oxygen isotope record of that age in a highly detailed marine sediment core shows an increase in δ18O that corresponds to either some 6 °C of cooling or a 120 m fall in sea level due to build-up somewhere of ice on land. Coinciding with this perturbation are shifts in the carbon-isotope record in carbonates. The authors suggest that the mid-Eocene cooling and continental glaciation that produced falling sea level triggered the weathering of shallow-water carbonates, which together with river transport increased the oceans’ alkalinity. That would have increased deep-water carbonate formation enormously and accelerated the effective ‘burial’ of carbon from the atmosphere.
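For reference, δ18O is the per-mil deviation of a sample’s 18O/16O ratio from a standard (such as VSMOW or PDB); heavier values in marine carbonate record cooler water and/or more 16O locked up in ice sheets:

```latex
\delta^{18}\mathrm{O} =
\left(
  \frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}
       {\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{standard}}}
  - 1
\right) \times 1000\ \text{\textperthousand}
```

Because both temperature and ice volume shift δ18O in the same direction, the record alone cannot separate the ~6 °C cooling from the ~120 m sea-level fall, hence the either/or in the text.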

Smithsonian geological timeline

A measure of the quality of a science website, apart from its visual appeal, is a mixture of how much it teaches you and what you can snaffle to help teach others. As a point of departure for E-geology, it will be hard to beat the Smithsonian Institution’s geotime site (www.nmnh.si.edu/paleo/geotime). That’s because it focuses first on the history, and if you care to you can discover how that was constructed from the geological record. Its central organiser is a slider that can be zoomed, which lays out the geological past – the literal time line divided into stratigraphic Eons, Eras, Periods and Epochs. Each division is clickable, although zooming in several times is needed to see the Cenozoic Epochs. But, hang on, there is no Ediacaran Period, the newest addition, nor any subdivision of the Proterozoic on the timeline. Whatever, clicking on a division opens a thumbnail sketch of each and links to pages that give more detail on the highlights, plus introductions to the founding concepts behind geological time and unravelling Earth and life processes. There is a glossary, which shows the influence of Encarta and Wikipedia. Here is a chance to learn for hours in a most convenient and engaging way, but graphics are few and far between in the various main panes. There are examples of important fossil organisms, but displayed at a size that lacks satisfying detail. What the site needs are maps and explanatory diagrams, which are available elsewhere. So the Smithsonian needs, I think, to liaise a bit with other learning resources in the geosciences. It would be good to have a one-stop shop.

Has human evolution stopped?

There can be no doubt that the way in which humans consciously build ‘shields’ of many kinds between themselves and their surroundings placed our species, and those leading up to it, in an increasingly different relationship to the environment from that of other organisms. Fire, habitations, tools, weapons and clothing emerged far back in our evolutionary ‘bush’, to be followed more recently by artificial means of feeding ourselves in a vast range of climatic conditions. In the last century these ‘shields’ have been added to by medical protection against pathogens.

Many of the physical traits of the modern human frame would not be ‘fit’ in a purely Darwinian sense for life unprotected by myriads of cultural devices: they arose from genetic potential largely because growing human culture allowed them to be fit for purposes other than survival at its simplest level. The range of basic physiognomies among modern humans does seem to reflect natural selection to suit various climatic regions, such as the differences between cold- and heat-adapted peoples. That perhaps began during the great expansion out of Africa some 70 ka ago. But the much greater range of facial characteristics among all populations (a really human characteristic compared with other primates) is probably a result of genetic drift at random, rather than any kind of evolutionary selection. There are also differences that have arisen since the widespread adoption of agriculturally produced foods about 10 ka ago, as in the shapes of jaws and skulls, probably linked to easier mastication. That can be explained most easily by the manner in which the use of muscle tends to sculpt the bone to which it is attached: it arises during the life of the individual.

With what appears to be the start of a global unification of cultures, and greater security for the more fortunate one third of humanity at least, it might be expected that natural selection is on the wane for humans. A mere 10 thousand years since the rise of agriculture, and far less since modern cultures arose, it is perhaps too soon to conclude that we have cut loose from Darwinian processes. Indeed, recent genetic research has come up with several developments that must be recent results of natural selection. One is the split between adults who can metabolise cows’ milk and those who cannot. The first group, a minority, clusters around the Near East (including most Europeans) and in a few parts of Africa where cattle domestication arose. A large block of the human genome, about a million base pairs of nucleotides, includes the gene that produces the necessary enzyme, lactase, and persists in those adults able to digest milk. The large size of the whole haplotype is typical of recent genetic developments, and the researchers are certain that it resulted from selective pressure where dairy farming began, between 5 and 10 ka.

Genes that confer resistance to infectious diseases that can cut life short before successful reproduction are good candidates for showing the effects of natural selection, especially in those areas where medical care and drugs are not available. For a long while natural resistance among some west Africans to malaria parasites was linked with heritable sickle-cell anaemia, but recent research has shown a more complex reason that involves several genes. Interestingly, ‘dating’ of the associated genetic changes gives recent ages between 3 and 6 ka, perhaps linked to the rise of farming practices. Clearing land and ponding of water on fields would have encouraged the malaria-carrying Anopheles mosquitoes, which are not forest species: a cultural change presaged a genetic one. Similar results have emerged from studies of inherited protection against HIV/AIDS, yet that only appeared in pandemic form very recently (unless misidentified earlier). An explanation may centre on selective pressure on mutation to form the protecting gene as a result of the appearance of previous epidemics, such as plague and smallpox among early Europeans, who seem to have the highest resistance to HIV/AIDS.

So it is hard to say whether selective pressures will work in future on the human genome, as cultural convergence continues and, hopefully, living standards become more equitably shared. Since the limit on human brain size is the skull, and that is limited by the near-maximum pathway through the human female pelvis, it is very difficult to imagine our evolution into big-heads.

Source: Balter, M. 2005. Are humans still evolving? Science, v. 309, p. 234-237.

Modelling the core

Judging by the growing procession of research grant proposals aimed at studying the inner workings of the Earth’s core through computer modelling, it would be easy to assume that a major breakthrough was just over the horizon. What you need is some kind of supercomputer to handle the massive complexity of core fluid dynamics, and then to channel that through one of several concepts of a geodynamo – first towards simulating the present field, and then towards how the geomagnetic field swirls and occasionally flips. The fourth biggest supercomputer there is belongs to the Japanese geophysical community: the Earth Simulator, which is certainly well ahead, in terms of power and speed, of facilities available to less endowed scientists. Recently, about 10% of its power was let loose for a 9-month modelling run focussed on the complex motion in the liquid outer core that theory predicts (Takahashi, F. et al. 2005. Simulations of a quasi-Taylor state geomagnetic field including polarity reversals on the Earth Simulator. Science, v. 309, p. 459-461). Hitherto, modelling had produced pictures of varying magnetic intensity that bore some resemblance to the real magnetic field at the Earth’s surface, and did indeed come up with reversals. Yet a variety of models all produced similarly plausible patterns in space and time. The snag was the limit to matching the viscosity of liquid iron with spin rate. Geomagnetists suspect that the Ekman number, which represents that relationship, is very low in the Earth’s core, i.e. there is very low drag in core circulation, and that adds to complexity. Until the Earth Simulator was built, no power on Earth could deal with the high spatial resolution needed to simulate properly motions at low Ekman numbers. Takahashi and colleagues were able to drop the Ekman number to a value 10 times lower than in any previous simulation.
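The Ekman number is the dimensionless ratio of viscous to Coriolis forces in a rotating fluid shell. A rough sketch using order-of-magnitude values often quoted for the outer core (illustrative figures, not taken from the paper, and note that some definitions carry an extra factor of 2):

```python
def ekman_number(nu: float, omega: float, length: float) -> float:
    """Ekman number E = nu / (Omega * L^2): viscous forces relative
    to Coriolis forces in a rotating fluid shell of thickness L."""
    return nu / (omega * length ** 2)

# Order-of-magnitude values commonly quoted for Earth's outer core:
nu = 1e-6        # kinematic viscosity of liquid iron, m^2/s (roughly water-like)
omega = 7.29e-5  # Earth's rotation rate, rad/s
L = 2.26e6       # outer-core shell thickness, m

E = ekman_number(nu, omega, L)
print(f"E ~ {E:.0e}")  # vanishingly small: rotation utterly dominates viscosity
```

Values this small (around 10^-15) are far beyond what any simulation can resolve directly, which is why even the Earth Simulator run had to stop orders of magnitude short of the real core.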

Real-looking features did begin to emerge in the time sequence for the field at the core’s surface. The most interesting was the formation of zones of opposed polarity at high latitudes, soon (in about 1000 years of simulated time) to be followed by a reversal. The zones move progressively polewards to coalesce, when the overall magnetic polarity all but disappears, and then a reversed field becomes established. However, this is not real but a model-dependent phenomenon, even though it is possible to see patterns akin to those observed today – many geophysicists believe the Earth is on a magnetic cusp before a reversal. Whether it will ever be real is an obvious question, in the same way that related climate simulations may flatter to deceive. The problem is not a lack of models, nor conceivably computing power, but a lack of real data. The ocean floor contains masses of information on past reversals, and cunning analyses of palaeomagnetism in lavas that cooled slowly through the Curie point at the time of a reversal show astonishing things that happened. Excellent maps of the modern field are available, but reality in a reversal is a time series of that mapped field. Without such data, and the time to collect it (the modelling simulates evolution over 5200 years) before the next order-of-magnitude jump in computing power (perhaps 10 years off), it is very difficult to see a justification for this kind of modelling, as opposed to that for climate, which does have a more rapid response time.

See also: Kerr, R.A. 2005. Threshold crossed on the way to a geodynamo in a computer. Science, v. 309, p. 364-365.

Did oil and gas fields form during the Precambrian?

Since the origin of life it is certain that a proportion of biological materials would have been preserved in sediments after organisms died. As today, such material would have evolved or matured as the host sediments were buried and heated. There is plenty of evidence that such maturation did occur as far back as 3250 Ma ago, but signs that oil-fields formed by migration and trapping have proved elusive. Several lines of evidence, such as carbon-isotope anomalies in Precambrian limestones, point to periods when enormous amounts of organic material were buried, much as happens in the formation of Phanerozoic petroleum source rocks during periods of ocean anoxia. Before about 2400 Ma, when evidence for an oxidising surface environment first appears in the rock record, such conditions would have been pervasive. The first hints of large-scale petroleum formation and migration have been found in the low-grade Pilbara craton (3500-2850 Ma) of Western Australia and in 2770-2450 Ma sediments that overlie the older Archaean complex (Rasmussen, B. 2005. Evidence for pervasive petroleum generation and migration in 3.2 and 2.63 Ga shales. Geology, v. 33, p. 497-500). Black shales in the Pilbara contain not only lots of fine-grained carbonaceous matter, but some in forms that clearly suggest that it had been thermally matured (‘cracked’) to low-viscosity fluids that could migrate. There are blobs of bitumen contained within iron sulfide layers that seem to have formed later, to engulf petroleum liquids. Molecules within the bitumens resemble those formed by photosynthesising blue-green bacteria, methanogens, sulfate-reducing bacteria and perhaps primitive eukaryotes. It appears that the bitumens probably formed as residues when lighter and more fluid hydrocarbons migrated out of these substantial source rocks. What has yet to be demonstrated are Archaean and Palaeoproterozoic reservoir rocks where such migrating petroleum accumulated.
Another question is whether or not the source rocks, which are extremely widespread and thick, might have retained some potential for sourcing petroleum much later in the geological history of Western Australia and similar cratons elsewhere.

Precise timing of petroleum migration

In their slack moments, petroleum geologists ponder on when oil and gas got into a particular reservoir and became trapped.  One aspect of the conundrum is easy to answer: after the reservoir rock and trap formed.  But timing is not so trivial, for an important consideration in exploration for new oilfields concerns the actual rock that sourced hydrocarbons in known fields, almost always a highly reduced, black mudrock in which unoxidised dead organic matter accumulated and matured. Repeated anoxic events, both regional and global, provide several alternatives in many petroleum provinces.

Hydrocarbons, having formed under highly reducing conditions, contain several metals and other elements at well above normal crustal concentrations. Among these are rhenium and osmium, which allow radiometric dating through the decay of 187Re to 187Os. In principle, therefore, it is possible to date oil and relate it to a particular source rock. Interestingly, it is easier to date the actual time at which oil accumulated in a trap. In an analogous way to the equilibration of parent and daughter isotopes in magmas, which is halted by crystallisation so that the system evolves and dating can be done, once oil settles in a trap after migration its emplacement can be dated using the Re-Os method. David Selby and Robert Creaser of the University of Alberta, Canada applied this approach for the first time, using the vast reservoirs of oil sand in Alberta as a test (Selby, D. & Creaser, R.A. 2005. Direct radiometric dating of hydrocarbon deposits using rhenium-osmium isotopes. Science, v. 308, p. 1293-1295). The oil in the sands was emplaced around 112 ±5 Ma ago, during the Early Cretaceous, not long after the host sandstones had been deposited. Previous work using ideas on oil maturation had suggested that migration took place during the Early Palaeocene, around 60 Ma ago, when potential source rocks were heated by tectonic burial during the Laramide orogeny. The Re-Os results point to migration from the west while the Cretaceous sedimentary basin was filling. This may explain the high viscosity of the oils as a result of near-surface biodegradation.
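Ages such as the 112 ±5 Ma figure come from an isochron: plotting measured 187Os/188Os against 187Re/188Os for several oil samples gives a line whose slope is e^(λt) − 1, from which t follows. A minimal sketch of that age calculation (the slope value below is back-calculated for illustration, not a number from Selby and Creaser):

```python
import math

# Decay constant of 187Re (Smoliar et al. 1996), per year
LAMBDA_RE187 = 1.666e-11

def isochron_age_ma(slope: float) -> float:
    """Age in Ma from an isochron slope = exp(lambda * t) - 1."""
    t_years = math.log(slope + 1.0) / LAMBDA_RE187
    return t_years / 1e6

# Illustrative slope chosen to give an Early Cretaceous age:
print(f"{isochron_age_ma(0.001868):.0f} Ma")  # ~112 Ma
```

Because the fit also returns the intercept – the initial 187Os/188Os ratio – the same data constrain the source, which is the point taken up in the next paragraph.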

Another product of isotopic dating is establishing the initial 187Os/188Os ratio of the petroleum system, which relates to that of the original source and its isotopic evolution.  In the case of the oil sands this value points to source rocks of earlier Mesozoic and even Palaeozoic age, rather than a Cretaceous source that had been suggested previously.

Britain above convecting mantle?

Being able to picture Earth features far beneath the surface is what makes seismic tomography such an exciting tool, even though it is in its infancy. It shows variations in the velocity of P and S waves in 3-D. Regions of fast waves are likely to be cooler than those in which wave speeds are relatively slow. The detail depends on the spacing between seismic recorders and the distribution of natural seismic events, whose interactions produce tomographic data. Despite being rarely affected by seismicity themselves, the British Isles have a remarkably dense network of seismic stations that was developed for research. Given arrival times at the different stations by waves from earthquakes that occurred over a wide range of epicentral angles from the British Isles, it becomes possible to probe in detail what lies beneath. Exploiting the potential to the full, a group of British and US geophysicists has shown that the ‘British’ mantle is far from boring (Arrowsmith, S.J. et al. 2005. Seismic imaging of a hot upwelling beneath the British Isles. Geology, v. 33, p. 345-348).

Down to a depth of 600 km, Britain is underlain by a series of significantly slow and fast mantle ‘blobs’.  The seismically slow, probably warm mantle zones seem to follow large features last active during Early Palaeogene magmatism that affected the Hebrides and Northern Ireland, and roughly parallel the 60 Ma dyke swarms that radiate from these centres.  They also correlate with regions of anomalously high gravity.  It seems highly likely that both features are long-lived relics of a spur of the still active Iceland plume that is intimately associated with spreading on the Mid-Atlantic Ridge.  The warm zones also underlie those parts of the British Isles that were most affected by uplift and erosion during the Cenozoic: as much as 3 km in the case of the Irish Sea.  Such areas also focused extension at the time of the magmatism, and they are still most affected by minor seismicity.

Estimates of the magnitude of the temperature anomaly associated with the slowing of P-waves are as much as 200 °C above ambient mantle temperature; sufficient to be associated with partial melts. That Britain might once more have active volcanoes is highly unlikely, and the anomalies are probably parts of the Iceland plume system that became trapped beneath zones of crustal thinning. Their loss of heat is sufficiently slow for them to have bolstered areas of uplift and erosion for tens of millions of years. There is even a chance that some form of convection might yet be going on.