Another nail in the coffin for fossil fuels

The debate over anthropogenic climate change resulting from the 'greenhouse' effect of carbon dioxide released by fossil-fuel burning, now more than 30 years old, has grown sharper in recent years.  Some specialists have cast doubt on climatologists' ability to disentangle human effects on the undoubted rise in mean surface temperature over the last 150 years from underlying fluctuations that stem from natural processes.  Considering the number of forcing factors, both large and small and with different periodicities, such doubts are valid, even though they may well be overemphasised in order to support continued and rising use of petroleum and coal.

The climate record for the last millennium is known to have been one of considerable change, involving a mediaeval warm period during its first half and the so-called 'Little Ice Age' from around 1600 to the mid-19th century.  Even within the recent warming trend there have been climatic ups and downs in the northern hemisphere, such as the mid-20th century warm period and several documented examples of cooling associated with major volcanism that punched aerosols into the stratosphere.

Thomas Crowley of Texas A&M University (Crowley, T.J., 2000.  Causes of climate change over the past 1000 years.  Science, v. 289, p. 270-277) has attempted to isolate the known natural forcing functions and model their individual effects on mean surface temperatures in the northern hemisphere, in order to separate meaningful signs of human effects from the various records of temperature change.  Even charting the changes is no simple task, as most records are local to regional, rather than valid for the whole hemisphere.  The most comprehensive climate data model uses proxy indicators from ice cores, tree-ring studies and coral time series, scaled to instrumental temperature records for 1860-1965.  Uncertainty increases backwards in time, to around ±0.3°C at 1000 AD.  This is somewhat greater than the fluctuations recorded in the temperature reconstructions, which Crowley and others have derived by a complex statistical method that fits a smoothed trend to the temperature fluctuations modelled from proxy data.

The approach used in Crowley's analysis is to identify each likely natural factor that influences the energy balance of the northern hemisphere, and then to model the trends that each produced, and should continue to produce.  Two factors are significant on millennial and shorter timescales: the influence of volcanic aerosols, as timed by ash layers in ice-sheet cores; and variations in solar output, based on 10Be and 14C variations in ice cores and tree rings (solar radiation generates both at the top of the atmosphere).  Possible anthropogenic forcing factors are numerous, including industrial aerosols and emitted gases other than the usual suspect, carbon dioxide.  The end product is a temperature time series from which the effects of all known forcing factors, except those connected with 'greenhouse' gases, have been removed.  Until the early 19th century this hovers close to zero variation, and then begins an upward rise to about 0.75°C by the present day.
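In outline, this subtraction of known natural forcings amounts to a simple residual calculation. The sketch below illustrates the bookkeeping only; all the series are invented numbers, not values from Crowley's paper.

```python
# Illustrative sketch of the residual approach: subtract modelled natural
# contributions (volcanic, solar) from a reconstructed temperature series;
# what is left over is a candidate anthropogenic signal.
# All numbers are invented for illustration.

def residual_signal(observed, natural_components):
    """Subtract each modelled natural forcing response, year by year."""
    residual = list(observed)
    for component in natural_components:
        residual = [r - c for r, c in zip(residual, component)]
    return residual

# Toy series (degC anomaly) for five 'years':
observed = [0.00, -0.10, 0.05, 0.40, 0.75]   # reconstructed temperatures
volcanic = [0.00, -0.12, -0.02, 0.00, 0.00]  # modelled volcanic cooling
solar    = [0.00,  0.02,  0.05, 0.05, 0.05]  # modelled solar variation

unexplained = residual_signal(observed, [volcanic, solar])
print(unexplained)  # rises towards ~0.7 degC: the 'greenhouse' candidate
```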

Crowley's modelling adds significantly to the case for a detectable anthropogenic influence on climatic warming, by showing that known natural forcings cannot account for the rising trend since the start of the Industrial Revolution.  It does not prove the case, and leaves several features of recent climatic change unexplained, notably the cooling in the late 19th century.  Nor does this approach exhaust all possibilities involved in climate shifts, such as linked fluctuations in energy movement by ocean-atmosphere circulation that make some regions cool while others warm.  Such processes might respond to variations in solar-energy input by a kind of complex resonance, which remains to be investigated.

See also 20 percent more oil in the ground

Methane hydrate – more evidence for the ‘greenhouse’ time bomb

Where ocean water is more than 400 metres deep and bottom temperatures fall below 1 to 2°C, methane and water can freeze to form crystals of methane hydrate.  These efficiently absorb more methane in a gas-solid solution, known as a clathrate.  Being lighter than seawater, methane clathrates do not carpet the ocean floor, but occupy pore spaces in sediments.  Under the anaerobic conditions of marine sediments, bacteria break down buried organic matter to release methane.  Build-up of the gas in clathrates forms distinct reflecting horizons seen on many seismic sections of marine basins.  Estimates suggest that methane clathrates contain around the same amount of buried carbon as all fossil fuels lumped together.  Since methane is a powerful 'greenhouse' gas when released into the atmosphere, breakdown of the clathrates is a potential mechanism for global warming.

In 1995, evidence began to emerge that 55 million years ago, at the Palaeocene-Eocene boundary, a pulse of global warming probably stemmed from catastrophic release of methane from ocean-floor clathrates.  The signs lay in the proportions of 12C to 13C in organic matter within marine sediments of that age.  Since organisms selectively take up the light isotope of carbon, when organic matter becomes buried, seawater becomes enriched in 13C.  Buried carbon, both in organic molecules and in marine carbonates, takes on the isotopic signature of seawater at the time.  This means that the ups and downs of carbon burial leave an imprint in the carbon-isotope record of marine sediments.  When warming of deep-ocean water or a pressure release caused by a lowering of sea-level releases biogenic methane from clathrates, its high 12C content quickly appears in the carbon in seawater as a whole, by oxidation to CO2 and solution.  This reduces the proportion of heavy to light carbon in both buried organic matter and marine carbonates, forming a downward ‘spike’ in the carbon-isotope record.  Other processes can produce the same effect, such as increased release of volcanic CO2, which is also isotopically light, or a collapse of the marine biosphere.  So 12C ‘spikes’ need to be matched with other evidence.
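The size of such a 12C 'spike' follows from straightforward isotope mass balance: the delta value of the mixed reservoir is the mass-weighted mean of its parts. The sketch below uses rounded, illustrative reservoir figures, not values from the studies discussed here.

```python
# Isotope mass balance for a methane release into the ocean-atmosphere
# carbon pool: delta13C of the mixture is the mass-weighted mean.
# Reservoir sizes and delta values are rounded illustrations only.

def mixed_delta(mass1, delta1, mass2, delta2):
    """delta13C (permil) of two combined carbon reservoirs."""
    return (mass1 * delta1 + mass2 * delta2) / (mass1 + mass2)

ocean_carbon = 38000.0   # Gt C, roughly the modern ocean-atmosphere pool
ocean_delta = 0.0        # permil, illustrative starting value
methane_carbon = 1500.0  # Gt C released from clathrates (illustrative)
methane_delta = -60.0    # permil, typical of biogenic methane

spike = mixed_delta(ocean_carbon, ocean_delta, methane_carbon, methane_delta)
print(round(spike, 2))   # a negative excursion of a couple of permil
```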

Researchers from Britain and Denmark have reported in detail on just such an excursion, which took place about 183 Ma ago during an episode of massive carbon burial in the Early Jurassic (Hesselbo, S.P. et al., 2000.  Massive dissociation of gas hydrate during a Jurassic ocean anoxic event.  Nature, v. 406, p. 392-395).  The Early Toarcian was a time when circulation in the deep ocean stopped, giving anoxic conditions ideal for the burial of dead organisms.  While that lasted, important hydrocarbon-rich source rocks for petroleum reserves were laid down, and the carbon isotopes in seawater became unusually 'heavy'.  Several stratigraphic sections show evidence for a 12C 'spike' in the middle of this period of general 13C enrichment of the oceans.  Hesselbo and co-workers isotopically analysed Toarcian mudstones exposed on the Yorkshire coast in England, which contain both marine organic matter and fragments of wood formed in terrestrial ecosystems.  Both show 12C enrichment, which means that the entire carbon cycle at the time was somehow perturbed.  With no sign of massive extinction, the signature pointed either to a methane release or to hugely increased volcanism.

Although there was strong volcanic activity on Earth during the Toarcian, it came nowhere near being able to generate the anomaly.  Also, the 'spike' occupies at most a few tens of thousands of years.  Only catastrophic release of methane from clathrates, equivalent to 20% of those estimated to be present today, can account for the anomaly.  Why it happened is not certain: it may have resulted either from warming of deep-ocean water by general global warming, to which the release then added, or from methane being generated by decay in organic-rich sediments faster than clathrates could take it in.  Another trigger, for which evidence is lacking at present, is a comet impact in an ocean basin.

Methane hydrate layers in the oceans pose an ever-present threat today, because of their extreme sensitivity to temperature and pressure.  Some scientists believe that small releases may lie behind inexplicable disappearances of ships, due to the drop in bulk density of seawater frothed by bubbles.  Also, many areas of shallow seas are pockmarked by vents marking methane release when sea level stood lower during glacial epochs, and at least one methane spike in ice-core records can be correlated with a massive submarine landslide off western Norway.

Silica as a control over atmospheric CO2 levels

Today the oceans far from land are the equivalent of deserts, having very low biological productivity.  This is not due to a lack of the main nutrients, potassium, nitrogen and phosphorus, or to too little sunlight for photosynthesis.  For some time, marine specialists have suggested that the culprit is too little soluble iron – a micronutrient at the core of pigments and the enzyme RuBisCO, on which photosynthesis and the fundamental Calvin cycle depend.  The halving of atmospheric CO2 levels during glacial maxima is widely believed to reflect more efficient ocean bioproductivity and thus burial of dead organic matter.  The idea that general dryness and windiness during glacial epochs delivered soluble iron to the remote ocean surface is one means of explaining this.  However, CO2 took 8 000 years to rise to pre-industrial levels after the last dusty period, when ice sheets reached their maximum extent, whereas iron lingers in seawater for only a few tens of years at most.  Dust carries far more silica than soluble iron, and dissolved SiO2 resides in seawater for 15 000 years or so.  This encourages the blooming of silica-secreting diatoms in competition with calcium carbonate-secreting plankton.  Carbonate production by cells actually generates CO2, so a shift away from carbonate secreters towards silica secreters both reduces that CO2 release and adds a greater contribution to buried carbon from dead silica secreters (Source: Tréguer, P. and Pondaven, P., 2000.  Silica control of carbon dioxide.  Nature, v. 406, p. 358-359).
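The residence-time argument can be made concrete with a one-box exponential-removal model, using the e-folding times quoted above; everything else in the sketch is illustrative.

```python
import math

# Fraction of a glacial-age, dust-delivered nutrient pulse still in surface
# seawater 8000 years later, for the residence times quoted in the text.
def fraction_remaining(elapsed_years, residence_years):
    """Simple one-box exponential removal."""
    return math.exp(-elapsed_years / residence_years)

iron = fraction_remaining(8000, 50)       # residence a few tens of years
silica = fraction_remaining(8000, 15000)  # residence ~15 000 years

print(f"iron:   {iron:.2e}")    # effectively zero
print(f"silica: {silica:.2f}")  # over half still present
```

The contrast is stark: an iron pulse is long gone over the 8 000 years that CO2 took to recover, while most of a silica pulse survives.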

20 percent more oil in the ground

A friend from the USA, who visited me a month back, was surprised to find Britain not yet in the throes of a popular insurrection.  He gasped each time I filled up at a fuel station.  Clearly, he has yet to divine the depth of the phlegmatic resources endowed to motorists stuck between junctions 8 and 12 on the M6.  Few of us now bother to ponder whether the recent price hikes should be put down to the laws of supply, demand and price, or to the addiction of the British economy to gouging fuel tax and duties from the hapless road user.  The Organization of Petroleum Exporting Countries (OPEC) rediscovered its awesome powers of the 1970s during 1998-9 and cut back production to drive a near tripling of spot prices.  Concerned that the political impact of this on a now-globalized economy might bear down on them, the Saudi delegate to the recent OPEC meeting in Vienna announced an increase in Saudi crude production that halted the upward spiral.  Should Iraq be allowed to pump to capacity and Libya reach peak output, the situation would rapidly reverse.

It is a curious time, for the petroleum sector of the North American and European economies faces dwindling home reserves while their industrial production is hard hit by rising fuel prices – a case of 'tails you lose, heads we win', it might seem.  Since the Limits to Growth prognosis of the early 1970s that petroleum reserves would rapidly be exhausted, each decade has seen the 'evil day' recede into the future, as exploration frontiers have pushed forward and extractive methods have become more efficient.  In its latest assessment of world fossil-fuel reserves, the US Geological Survey has taken everyone by surprise (greenwood.cr.usgs.gov/energy/WorldEnergy/DDS-60).

A new approach to estimation, using the latest geological data from the world's petroleum-prone basins, suggests that undiscovered conventional oil resources are 20% larger than previously believed.  A substantial proportion of this increased estimate stems from evaluating the formerly overlooked tendency for 'finding elephants in elephant country', i.e. hitting previously undiscovered reservoirs within or just beyond existing fields.  This suggests that old fields should grow by up to a quarter in the future (612 billion barrels, more than 20 years of global production at current rates), while new exploration should eventually come on stream with 732 billion barrels.  The estimates are not uniform, however.  European and North American production remains doomed to rapid exhaustion, with the bulk of new resources adding to the already huge dominance of the Arabian peninsula, and to the worrisome former Soviet Union.

Despite the flurry of optimism among petroleum economists and the industry in general, a sober assessment is that the new USGS figures delay matters by a decade or two at most, given annual production of 27 billion barrels and 1.5 to 2% annual growth – business-as-usual, and barely a sign of significantly replacing petroleum with alternative, renewable energy sources that do not add to global warming.  Re-emphasis of the overwhelming dominance of the Arabian peninsula, North Africa and the former Soviet Union as suppliers to fuel continuing demand, and the certain increase in the one-sidedness of that economic relationship, have big political implications.  Some analysts foresee a 'second coming' of OPEC, and greater tension surrounding the areas formerly in the Soviet sphere of influence.  China barely figures as a significant player, despite former optimism.
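The 'a decade or two at most' verdict is easy to check with a simple exhaustion calculation, applying the production and growth figures quoted above to the new USGS additions alone.

```python
def years_to_exhaust(resource_gb, production_gb, growth_rate):
    """Years until cumulative production consumes the resource,
    with production growing by a fixed fraction each year."""
    years, cumulative = 0, 0.0
    while cumulative < resource_gb:
        cumulative += production_gb
        production_gb *= 1 + growth_rate
        years += 1
    return years

# The USGS additions quoted in the text: 612 Gb field growth + 732 Gb new,
# against 27 Gb/year production growing at 2% per year.
added = 612 + 732
print(years_to_exhaust(added, 27, 0.02))  # a few decades for the additions alone
```

Since the additions come on top of existing proven reserves, the net effect on the 'evil day' is the decade or two the article suggests.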

Water resources under threat

We now live in an epoch in which what has been called 'the first provision of any civilized society, after a system of laws' – a water supply – has begun to pass definitively from municipal to privatized control.  The private sector in water provision is exploding worldwide, particularly in potentially profitable urban areas of the 'two-thirds world', i.e. the poorest countries.  This is a tendency explicitly encouraged by multi- and bilateral sources of developmental aid, such as the World Bank, departments of the EU and Britain's Department for International Development (DfID).  Popular unrest concerning rapidly rising water prices is sweeping through the townships of South Africa and several South American countries, as people find themselves unable to pay for supplies and find them cut off.

The Water Systems Analysis Group of the University of New Hampshire, USA, has released a depressing and highly detailed assessment of the future fragility of global fresh-water supplies (Vörösmarty, C.J. et al., 2000.  Global water resources: vulnerability from climate change and population growth.  Science, v. 289, p. 284-288).  Their analysis is based on geographic cells half a degree square (about 55 km).  It considers the fresh-water flow by surface run-off and movement through shallow aquifers, which constitutes the locally sustainable supply (deep aquifers are non-renewable in the short to medium term, unless engineered for recharge by surface water), relative to population density and domestic, industrial and agricultural uses.  Unlike assessment of petroleum reserves (above), which draws on detailed information supplied by giant transnational companies, doing the same for water is at best a sketchy exercise, because of wide variations in the quality of data.

Using aggregations for individual countries suggests that one third of the world's population lived in 1985 under conditions of water scarcity, and that about 450 million faced severe water stress.  However, by looking more closely, on a cell-by-cell basis, the Group shows that levels of stress are grossly underestimated by conventional country assessments.  They found that 1.8 billion people were living 15 years ago at the highest level of water stress.
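The reason country-level figures understate stress can be shown with a toy calculation: a country whose total supply dwarfs total demand can still contain grid cells in deficit. The cell numbers below are invented, and the 0.4 demand-to-supply threshold for 'high stress' is a convention often used in the water-resources literature, not necessarily the Group's own criterion.

```python
# Why country-level aggregation hides water stress: a country whose total
# supply comfortably exceeds total demand can still contain cells where
# demand outstrips local supply.  Numbers are invented for illustration.

# (supply, demand) per half-degree cell, in arbitrary volume units
cells = [(100, 10), (80, 5), (5, 20), (4, 15)]

total_supply = sum(s for s, d in cells)
total_demand = sum(d for s, d in cells)
print(total_demand / total_supply)        # ~0.26: 'low stress' nationally

# A common convention treats demand/supply > 0.4 as high water stress
stressed = [(s, d) for s, d in cells if d / s > 0.4]
print(len(stressed))                      # 2 of the 4 cells are highly stressed
```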

To model future changes in water stress, the Group considered both climate and population change.  They based the first on a water-balance model that incorporates global hydrological information and the precipitation side of climate-change modelling of the anthropogenic 'greenhouse' effect.  Climate change turns out to be subordinate to growing population and likely shifts of population (the rural-to-urban drift that is growing at present).  Things seem likely to improve for people living in or moving to relatively water-rich areas, but probably at the expense of worsening water quality.  The most dramatic feature is a possible 85% increase in the population subject to the highest levels of water stress.  Since irrigated agriculture is concentrated in areas already water-stressed by domestic and industrial use, the prognosis is doubly worrisome: those areas are likely to face even worse food shortages.  Looking at individual drainage basins shows that some, including that of the Huang He (Yellow River) in China, which has the highest population density anywhere, seem destined soon to show an excess of demand over discharge.

It is not difficult to foresee from the Group’s analysis a rapidly approaching period of curtailment of economic activities, mass migration and conflict in transnational river basins.  The danger areas are overwhelmingly in the ‘two-thirds world’, where the search for profits by water companies finds strategic focuses at present.  Being the ultimate in supply-demand forces (demand for water is the least ‘elastic’ imaginable) this is hardly surprising.  It should, however, come as no great surprise if such ventures are expropriated by people themselves.

Geology on the Internet

Richard Robinson of Santa Monica College, California has provided a web site bulging with links to useful resources, including virtual field excursions that cover many different aspects of geology that range from modern environments to structural geology and tectonics.

http://homepage.smc.edu/robinson_richard/geologycentral.htm

Near-miss for Australian town

Until about 10 years ago, I was under the impression that as individuals we run little risk of being struck by objects falling on us from between the orbits of Mars and Jupiter.  A slim chance, but one tempered by a recollection of my father's news clip of a small meteorite landing in the sidecar of a 1930s biker on his way from Hull to Hornsea.  The biker finished his journey.  These days aliens seem to be falling thick and fast.

Late last year, the sleepy hamlet of Guyra, Australia, about 400 kilometres north of Sydney, had a heavenly visitor, or so it seemed. On December 7, an object the size of a cricket ball slammed into the town water supply. In recent months, town officials have been pondering how to exploit their near misfortune.

In early July, a local businessman pledged AU$3,000 to dredge the rock out of the reservoir's bed so it could be put on display, given to a local university or donated to the Australian Museum in Sydney.  Intrepid snorkellers discovered that the object had drilled a 1 metre hole in the mud after passing through the reservoir's water.  Because such a small meteorite should have slowed to terminal velocity on entering the atmosphere from space, it is highly unlikely that it would have had enough remaining energy after ploughing through water to bury itself that deep.  Experts have cautioned amateur meteorite collectors to leave the object well alone, pending more careful examination.

Earth’s earliest events

The Earth has a core made, probably, of alloyed iron, nickel and sulphur.  Much evidence points to the core having formed very early in our planet's history, probably within its first 100 million years.  Core formation explains the depletion in iron of mantle rocks and magmas derived from them, compared with iron's abundance in the cosmos.  Because some rarer elements have a 10 000 times greater tendency to partition into melts containing metallic iron than into silicates, such siderophile ('iron-loving') metals are also highly depleted in the outer Earth.  That is one of the reasons why gold and the platinum-group metals are so rare and highly prized at the Earth's surface.  In fact, such noble metals are far more abundant than the presence of a metallic core should have allowed; they ought to be at vanishingly low abundances.

One solution to this paradox is that the 'extra' gold and PGEs arrived after core formation had finished, the agency of delivery being continual bombardment by meteoritic debris in the first half-billion years of the Solar System's history.  The other is that, somehow, the affinity of such metals for iron drops off at extremely high pressures.  German, Canadian and Australian geochemists (Holzheid, A. et al., 2000.  Evidence for a late chondritic veneer in the Earth's mantle from high-pressure partitioning of palladium and platinum.  Nature, v. 406, p. 396-399) have shown experimentally that no such decrease occurs, at least in the outermost 500 km of the Earth.  This points strongly to impacts having seeded the upper mantle with noble metals, and therefore, perhaps, with much else besides.  It re-opens the old controversy between homogeneous and heterogeneous accretion of the Earth, tempered by the fact that more common siderophile metals, such as nickel and cobalt, do not show mantle abundances that are in disequilibrium with core formation.  The distinction is not trivial, for much of Earth's evolution has been driven by its internal composition, most especially its content of radioactive isotopes and water.
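The 'vanishingly low abundances' expected after core formation follow from simple equilibrium partitioning between metal and silicate. The sketch below assumes a single-stage equilibrium, an illustrative partition coefficient of 10^4 and the approximate core mass fraction of one third.

```python
# Two-reservoir partitioning during core formation: a siderophile element
# with metal/silicate partition coefficient D divides between core and
# mantle.  With D ~ 10^4, the silicate Earth should retain almost nothing.
# Bulk abundance is normalised to 1; the core mass fraction is ~0.32.

def silicate_concentration(bulk, d_metal_silicate, core_fraction):
    """Equilibrium concentration left in the silicate mantle and crust."""
    return bulk / (core_fraction * d_metal_silicate + (1 - core_fraction))

retained = silicate_concentration(1.0, 1e4, 0.32)
print(f"{retained:.1e}")  # of order 10^-4 of the bulk abundance
```

Observed mantle abundances of gold and the PGEs are far above this, which is the paradox the late-veneer idea resolves.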

The Moon seems to have formed as a result of a gigantic impact of a Mars-sized body with the early Earth.  Since the Moon has neither a core nor its full cosmic complement of iron, such a catastrophic beginning (effectively 'Year Zero' for the geochemistry of both bodies) must have taken place after core formation in the Earth.  Because lunar rocks are so little changed by later events, the Moon's age is known with considerable accuracy – the Lunar Highlands are about 4450 million years old.  It would be interesting to compare gold and PGE abundances between the Earth and its Moon, for that might reveal the period during which bombardment delivered siderophile elements.  Up to 3.8 billion years ago, both bodies received many visitors, culminating in a bout of enormous impacts between 4.0 and 3.8 billion years ago that excavated the great lunar basins that early astronomers termed maria or 'seas'.

Your ancestor was a cannibal

The world was shocked when Armin Meiwes of Rotenburg, Germany admitted eating his pen pal, the more so when he convinced his trial jury that his friend was a willing menu item.  However, so bizarre was their mutual obsession that ordinary folk could rest easy that human cannibalism was an aberration or born of necessity; no need for nervous glances at your neighbourly gourmand.  Every time earlier human remains emerge showing cut marks or signs of having been boiled in the proverbial pot, the 'necessity' card, or that of burial practices, is played in the storm of controversy surrounding possibly unwholesome aspects of older cultures.  That is not so easy when forensic pathology is applied to cooking utensils and fossilised dung spanning several hundred years and finds traces of protein that can only have come from deep muscle tissue (myoglobin), as occurred in archaeological investigations in pre-Columbian Colorado.  But there is worse, as recounted by Richard Hollingham (Natural born cannibals.  New Scientist, 10 July 2004, p. 31-33), who reviews recent research with cannibalistic themes.  The truly grim findings were made by a group at University College London, who have studied a brain disease related to the variant CJD induced in humans who ate BSE-infected beef (Mead, S. et al., 2003.  Balancing selection at the prion protein gene consistent with prehistoric Kuru-like epidemics.  Science, v. 300, p. 640-643).  Kuru affected the Fore people of highland Papua New Guinea, who ritually ate dead relatives' brains before the authorities banned the practice, and is caused by rogue proteins known as prions.  In that respect it is similar to vCJD, BSE and a number of other mammalian brain disorders.  The UCL group studied the genetic effects of Kuru on the Fore, to see if any immune resistance to prion infections had developed.
There are two common variants of the prion-protein gene, and people who carry both (heterozygotes) have greater resistance to vCJD, whereas people having only one form are susceptible.  In the Fore study, a surprising 75% of women (usually the main consumers of human brain tissue) were heterozygotes, which the team put down to evolutionary pressure resulting from thousands of years of the practice.  Turning to genetic data from different ethnic groups world-wide, they found such heterozygotes were widespread, although in different proportions in different groups.  Even though these populations do not generally eat other people now, there is a distinct possibility that their distant ancestors did, for a very long time.  That is welcome news that counters fears of massive vCJD epidemics from eating animals unnaturally fed on animal protein.  What is wholly disturbing is that for much of human evolutionary history cannibalism was unnecessary for survival, and Meiwes, in his initial police statement, claimed that there are around 800 cannibals in Germany alone…
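The strength of the Fore result can be seen from textbook population genetics: under neutral Hardy-Weinberg conditions, the heterozygote frequency for a two-allele gene can never exceed 50%, so an observed 75% demands selection favouring heterozygotes. Only the 75% figure below comes from the study; the rest is the standard calculation.

```python
# Under neutral Hardy-Weinberg expectations, the heterozygote frequency
# 2pq for a two-allele gene peaks at 0.5 (when p = q = 0.5).  An observed
# 75% heterozygosity therefore cannot arise without selection favouring
# heterozygotes.  The 0.75 figure is from the text; the rest is textbook.

def expected_heterozygosity(p):
    """Hardy-Weinberg heterozygote frequency for allele frequency p."""
    return 2 * p * (1 - p)

max_neutral = max(expected_heterozygosity(p / 100) for p in range(101))
observed = 0.75
print(max_neutral, observed > max_neutral)  # 0.5 True
```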

Water on Mars

If Mars is ever to be visited by astronauts, and if there is to be any chance of finding living things there, water close to the surface is vital.  Not surprisingly, the search for Martian water, albeit not in a network of canals, is becoming a thriving cottage industry. The last week of June 2000 saw a leaked report from research using images from the Mars Global Surveyor spacecraft, publicised in New Scientist and Science for that week.  Some of the images showed systems of extremely sharp V-shaped gullies on the steep sides of valleys and craters.  Several workers claim that they were cut by running water in the recent past.  That they are young features is clear, because they are not blurred by dust blown across the Martian surface by its nightmarish winds, and none are cut by craters.  How water might have flowed freely a short time ago is not so clear: the Martian surface is well below freezing point for most of the time (average temperature -50°C).

The explanation given by the researchers is that a layer of frozen pore water a few hundred metres below the surface can melt because of a build-up of pressure.  Where the layer meets the surface in valleys cut through it, the pore water remains frozen and acts as a dam.  When this is breached, water simply squirts out to form the peculiar runnels seen at more than 150 sites.  Several of the gullies lie below signs of collapse on the slopes above, suggesting that water release has removed support for debris on the steep slopes.

There are a number of reasons to take these accounts with a pinch of salt.  Sure, increased pressure depresses the melting point of water, but at -50°C it would have to be pretty high.  In permafrost areas on Earth, waterlogged soil freezes from the top down in winter, thereby trapping the last dregs of water.  This becomes pressurised, to remain liquid in a supercooled state.  If it breaks out it does not flow, but forms ice almost instantly.  As well as forming the famous pingoes (ice-cored mounds) of Arctic alluvial plains, this phenomenon almost caused a bizarre disaster during one of the Yukon gold rushes.  High-pressure water jetted into a public bath house – the warmth of the building had created a trough of melt water directly beneath – and filled the entire edifice with ice.  Fortunately, this happened at night and no prospector was encased.  Much the same would probably happen to any such water escape on Mars, unless the water was preternaturally warm.  Such was the case for the truly huge and unmistakable water-cut valleys on Mars, but they formed far back in Martian history, perhaps as a result of energy introduced by large impacts.
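How high 'pretty high' would need to be can be estimated from the Clausius-Clapeyron relation, using standard handbook values for ice. The linearised slope below is only valid near 0°C, and since melting of ordinary ice bottoms out near -22°C at about 210 MPa, no pressure alone yields liquid water at the Martian -50°C.

```python
# Clausius-Clapeyron estimate of the pressure needed to depress the
# melting point of ordinary ice (ice Ih).  Standard handbook values; the
# linear slope holds only near 0 degC, and the ice Ih melting curve ends
# near -22 degC at ~210 MPa, so no pressure melts ice Ih at -50 degC.

T0 = 273.15              # K, melting point at atmospheric pressure
L = 334e3                # J/kg, latent heat of fusion
dv = 1/1000.0 - 1/917.0  # m^3/kg, volume change ice -> water (negative)

slope = T0 * dv / L      # K/Pa, Clapeyron slope (about -0.074 K per MPa)

def pressure_for_depression(delta_t_kelvin):
    """MPa of overpressure for a given melting-point drop (linearised)."""
    return delta_t_kelvin / (-slope) / 1e6

print(round(pressure_for_depression(10)))  # over 100 MPa for just 10 K
print(pressure_for_depression(50) > 210)   # True: beyond ice Ih's range
```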

It is tempting to look to other explanations for the gullies.  Very dry sand flows down the lee slopes of dunes, often to form runnels with collapse features above them.  Perhaps some attention to the physics of dry sand – Mars is a sandy and silty place – under near-airless conditions and suitably reduced gravity, might offer an alternative explanation.

Even more optimistic is the notion that Mars once had seas, based on the discovery in an Egyptian meteorite of various salts that approximate the blend of dissolved ions in Earthly seawater (New Scientist, 1 July 2000, In Brief).  The evidence that the class to which this meteorite belongs comes from Mars rests on comparison of its noble-gas content with the extremely imprecise measurements of Mars' air made by the Viking mission in the 1970s.  Why the chemistry of Martian 'seas', or of any of its water for that matter, should bear comparison with that of waters derived from a planet with both weather and highly evolved continents seems to demand an explanation.  Oh well, no doubt we will get answers when astronauts do get there – it is not inconceivable that all the papers suggesting it is important to go have some relation to NASA's decades-long fight for funds to do just that.

Landsat-7 and SPOT images

Using satellite images for geological mapping and exploration, or for monitoring short-lived phenomena such as volcanic eruptions, is now standard Earth-sciences technology.  But it used to involve substantial costs for data and for the software needed for analysis.  Access to the most recent images from the US Landsat-7 and French SPOT systems is now on-line through sophisticated browsing sites on the Web.  Both enable guest users, as well as those who have signed up for slightly more sophisticated services, to browse and download reduced-resolution JPEG versions of archived images, and to order data if need be.  For Landsat-7, go to http://landsat7.usgs.gov/ though this means going through several pages.  To jump straight to the Earth Observation System (EOS) Data Gateway try http://edcimswww.cr.usgs.gov:80/ims-bin/pub/nph-ims.cgi?endform=1&u=259015&sid=959259015%2D52891371&mode=SRCHFORM .  This currently opens a data search and order form.  Choose a search keyword first using the Data Set button, selecting Landsat-7 Level 1 data.  You can choose several options for the geographic search area, and simply enter a date range (e.g. 2000-01-01 and 2000-05-25 for this year's archives).  Then Start Search.  Sometimes your search will take quite a while, due to pressure on the server's bandwidth.  The good news is that you can disconnect and go back later to the relevant page using the Internet Explorer or Netscape History listing.  For SPOT, access is via the DALI server at http://www.spotimage.fr/home/proser/whatdali/daligst/daligst.htm or the Sirius server at http://sirius.spotimage.fr/anglais/Welcome.htm – the Sirius service is a little more complicated than DALI, but is set to become SPOT-Image's standard browser.

Image quality in both cases is excellent, with the Landsat-7 browse images having roughly 250 m resolution, and SPOT data showing at about 120 m (4 to 8 times better than similarly available data from meteorological satellites).  Use the right mouse button with the cursor over the image and select Save Image As, assigning your own name instead of the default given by the server, e.g. geology1.jpg.  You can then make some cosmetic changes to contrast and colour balance using graphics software such as MS PhotoEditor or Adobe PhotoShop.

Remember that SPOT data of whatever kind are covered by SPOT-Image copyright, though the USGS, who distribute Landsat-7 data, make no such claim.  Clearing copyright for publication and acknowledging sources is an important responsibility for anyone using the images in research or publications.

SRTM and ASTER

There are several other web sites to watch.  In February 2000 NASA, the US National Imagery and Mapping Agency (NIMA), and the Italian and German space agencies flew the Shuttle Radar Topography Mission (SRTM) aboard a Space Shuttle flight.  The SRTM used radar reflections received by two antennae, separated by a long mast deployed from the Shuttle, to estimate the topographic elevation of the Earth’s surface by a method known as radar interferometry.  The resulting data take the form of a digital elevation model (DEM), with elevation values for cells 30 metres square.  A DEM is therefore a 3-D model of topography, and shows landforms in stunning detail, together with the geological features that control them.  Because radar relies on energy transmitted from the spacecraft, and radar waves penetrate cloud, the SRTM could collect data whatever the time of day or the weather.  The mission successfully captured the entire continental surface between 60°N and 60°S, and will revolutionise both geomorphology and geology.  From November 2001 the US Geological Survey and the German Space Agency (DLR) will release the DEMs publicly and at low cost, but ‘tasters’ are available from the following web sites:  NASA – http://www.jpl.nasa.gov/srtm DLR – http://www.dlr.de/srtm .

For the next decade or so, the main Earth-oriented thrust by NASA is the Earth Observing System (EOS), a constellation of satellites that orbit from pole to pole to give coverage of the entire surface.  On 18 December 1999 NASA launched the first of these, named Terra.  This satellite carries several instruments that produce images of various kinds, the most geologically important of which is the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), designed and built in Japan for the Earth Remote Sensing Data Analysis Center (ERSDAC), but operated jointly with NASA.  ASTER captures images in several wavelength ranges in the visible and the reflected and emitted infrared, chosen to highlight the spectral properties of common minerals.  So ASTER is a geological remote-sensing system par excellence.  The system is working properly, and will begin to produce scientific data sometime in late 2000.  Like the SRTM, ASTER is planned eventually to produce full coverage of the continental surface during its lifetime.  For the moment, you can keep a watching brief by visiting NASA – http://terra.nasa.gov and ERSDAC – http://astweb.ersdac.or.jp .

Milankovic forcing flawed?

Milutin Milankovic built on James Croll’s notion that perturbations of the Earth’s astronomical behaviour are likely to cause variations in solar heating that might lie behind repeated glacial epochs.  One of the most fertile discoveries in Earth science since World War 2 is that the periodicities that Milankovic calculated do seem to dominate the record of climate change over the last 2.5 million years, during which some 50 glacial-interglacial cycles forced changes in the oxygen-isotope content of fossils from deep-ocean cores.  These data record variations in the long-term storage of water in continental ice sheets, and are a near-incontrovertible ‘proxy’ for both the varying extent of glaciation and sea level.  Much the same kinds of signal also appear to turn up in time series of other kinds, from sediments of a much wider range of ages.  Milankovic theory now has as much popular support as Alfred Wegener’s idea of drifting continents.  But problematic aspects refuse to go away.  Not the least of these is the conversion of depth to time in the oceanic sediments from which the longest and most detailed records have emerged.  Any analysis of the frequencies involved in past climate change stands or falls on the accurate conversion of depths in sediment cores to time.  In oceanic sediments this is by no means an easy job, because of a lack of material that can be dated precisely.

The first attempt to unscramble the complex variation in ocean cores used a few calibration points in climate time series onshore, whose shapes seemed to match those of the ocean records.  The most crucial of these was the rise in sea level at the end of the ice age before last (called Termination II), recorded in coral reefs in the Caribbean.  The most widely used date for Termination II in the Caribbean is 127 ± 6 thousand years (ka).  It was by using this date as a global time calibration that the Milankovic signals of 100, 41, 23 and 19 ka periodicities popped out of the mathematical analysis.  One surprise was that the match with the predicted variation in solar heating referred to the Northern rather than the Southern Hemisphere, or the planet as a whole.  A great deal of later work hangs on that, and much of it, such as the widely used SPECMAP time scale, has simply assumed the Northern-Hemisphere pacemaker.
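How do periodicities ‘pop out’ of a time series?  The following toy sketch, with entirely synthetic data, shows the principle: build a fake 600 ka isotope record containing 100 ka and 41 ka cycles, then rank the harmonics of a discrete Fourier transform by power.  Real analyses use far longer records, tuned time scales and more careful spectral estimators.

```python
import cmath
import math

N = 600                                  # samples, one per thousand years (ka)
# Synthetic 'isotope record': a 100 ka cycle plus a weaker 41 ka cycle
signal = [math.sin(2 * math.pi * t / 100) + 0.5 * math.sin(2 * math.pi * t / 41)
          for t in range(N)]

def dft_power(x, k):
    """Magnitude of the k-th discrete Fourier coefficient of x."""
    n = len(x)
    return abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))

# Rank harmonics k = 1 .. N/2 by power, and convert the top two to periods in ka
ranked = sorted(range(1, N // 2), key=lambda k: dft_power(signal, k), reverse=True)
top_periods = [N / k for k in ranked[:2]]
print(top_periods)   # 100 ka recovered exactly; ~40 ka (leakage smears the 41 ka line)
```

Even this naive transform recovers the buried cycles; the catch, as the text stresses, is that the method presumes the depth-to-time conversion is right in the first place.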

Daniel Karner and Richard Muller of the University of California at Berkeley summarise the most contrary pieces of evidence for the timing of climatic change in a recent issue of Science (Karner, D.B. and Muller, R.A., 2000.  A causality problem for Milankovitch.  Science, 288, p. 2143-2144).  Using a different dating approach, the Caribbean timing of Termination II comes out at 132 ka, while for a series of coral reefs in Papua New Guinea it appears as early as 142 ka.  Detailed climate changes recorded in stalactitic material from a cave in Nevada (Devils Hole) also show an ‘early’ Termination II.  All these ages are at least as precise as the accepted 127 ka date for Termination II, so Karner and Muller see a big problem.  Whereas the end of the last glaciation (Termination I) is pretty well tied down at about 12 ka, and corresponds to increased solar heating in the Northern Hemisphere from the Milankovic predictions, Termination II precedes the prediction by 5 to 10 thousand years.  A theory that is only 50% believable needs a serious seeing-to!

For the last four terminations, the most varied and informative data come from cores through the Antarctic ice sheet.  Though that record too has its problems in calibrating depth to time, a recent evaluation of climate variations over the last 420 ka (Petit, J.R. et al., 1999.  Climate and atmospheric history of the past 420,000 years from the Vostok ice core, Antarctica.  Nature, 399, p. 429-436) shows significant differences between the last four terminations.  Karner and Muller suggest that each glacial-interglacial cycle may have different controls, and they encourage a fresh look at the wealth of data, unbiased by earlier ideas centred on a single astronomical pacemaker and accepting that the controls on climate are multidimensional.

A ‘treasure map’ for asteroids

Geologists are not the only ones waking up to the influence that stray asteroids and comets have had on geological and biological evolution; so too are politicians.  Despite the minuscule chance of a sizeable body hitting the Earth within our lifetimes, the devastation would be awesome.  Insurance actuaries have calculated the risk from such rare events, taking into account the number of likely deaths, in the same way as for airline disasters.  You or I are more likely to perish in the aftermath of an asteroid or comet strike than from botulism or a fireworks accident, and the risk is comparable with that of intercontinental flying.  Governments are beginning to find money to support systematic mapping of bodies that may pose a threat; not a lot, but sufficient to spot bad news and refine the risks.
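The actuarial arithmetic behind such comparisons is back-of-envelope stuff: an annualised death toll for a rare catastrophe is simply the chance of the event per year multiplied by the expected deaths if it happens.  The sketch below uses loudly hypothetical figures purely to show the calculation; they are not the actuaries’ numbers.

```python
def annual_death_risk(event_prob_per_year, deaths_if_it_happens):
    """Expected deaths per year from a rare event, averaged over very long times."""
    return event_prob_per_year * deaths_if_it_happens

# Illustrative figures only: suppose a 1-in-500,000 yearly chance of a large
# impact that would kill 1.5 billion people
impact = annual_death_risk(1 / 500_000, 1_500_000_000)
print(impact)   # -> 3000.0 expected deaths per year
```

Averaged this way, a vanishingly rare but globally lethal event can carry an annual toll comparable with everyday hazards, which is exactly why such strikes rank alongside air travel in the risk tables.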

On June 22, a French-US team released a first assessment of the near-Earth objects (NEOs) that pose the biggest threat: those more than 1 kilometre in diameter (Bottke, W.F. et al., 2000.  Understanding the distribution of near-Earth asteroids.  Science, 288, p. 2190-2194).  They estimate that about 900 big asteroids are in orbits that will eventually pass within a few lunar distances of us. “Sometime in the future, one of these objects could conceivably run into the Earth,” warns astronomy researcher William Bottke at Cornell University. “One kilometer (about .6 of a mile) in size is thought to be a magic number, because it has been estimated that these asteroids are capable of wreaking global devastation if they hit the Earth.”  Much smaller objects caused the celebrated Meteor Crater in Arizona (about 50 000 years ago) and the Tunguska explosion (1908), and such bodies arguably pose the greatest hazard, because at present they are undetectable.

The Cambridge-Conference Network (CCNet) freely provides a regular electronic newsletter about research into short-lived catastrophic events, including climate change, the effects of supervolcanoes, and impacts, both those in the geological record and possible future ones from NEOs.  To subscribe, contact the moderator Benny J Peiser at b.j.peiser@livjm.ac.uk .

The K-T event is back for the death of the dinosaurs

Just when those palaeontologists who don’t like ‘whizz-bang’ theories for the fossil record had begun once more to feel comfortable, the geological record has bitten back.

One of the main planks against an impact cause for the extinction of all the dinosaurs at the end of the Cretaceous Period was the rarity of their remains in the top 3 metres of the Hell Creek Formation in the Great Plains of North America.  The Hell Creek Formation is noted for clear signs of the Chicxulub bolide strike very close to its top, as well as for a rich dinosaur fauna.  Previous workers argued that the rarity of dinosaur remains just below this boundary signified that the animals were under considerable evolutionary stress before any catastrophe – support for a gradualist notion of mass extinction.  A team of US geologists and biologists has just published the results of a painstaking survey of the Hell Creek (15 thousand hours of field survey of 11 million square metres of its outcrops in North Dakota and Montana) (Sheehan, P.M. et al., 2000.  Dinosaur abundance was not declining in a “3 m gap” at the top of the Hell Creek Formation, Montana and North Dakota.  Geology, 28, p. 523-526).  They find that the top 3 metres are just as rich in dinosaur remains as any of the strata below, right up to the layer immediately beneath the signal of Chicxulub.  They do not report any findings from above the impactite, though earlier workers reported dinosaur teeth there.

As journalists say, this will run and run!