Early humans of Beijing

One of the most remarkable achievements of early humans (Homo ergaster, aka H. erectus) was not their tools, but their migration out of Africa around 1.8 Ma, to reach as far as Indonesia and China.  There is no evidence for that feat having occurred again until fully modern humans arrived in east Asia about 70 ka ago.  The toolkit of Asian "Action Man" is unimpressive, in the sense that it resembles the slightly reshaped broken pebbles of the Oldowan culture, which first appears in the African archaeological record about 2.4 Ma ago.  The enigmatic and beautiful bi-face or Acheulean axe developed in Africa around 1.5 Ma, after the first Asians had departed.  So what were these early wanderers like, and what did they want?  The decade-long work in China by Noel Boaz, an anatomist from the Ross School of Medicine in New Jersey, and anthropologist Russell Ciochon of the University of Iowa will soon appear in their book Dragon Bone Hill: An Ice-Age Saga of Homo erectus (Oxford University Press), which they preview in the 17 April 2004 issue of New Scientist (p. 32-35).  Boaz and Ciochon have worked mainly at Zhoukoudian near Beijing, a major resource for human remains whose different levels extend back about 800 thousand years.  Another site in China, Longgupo, contains disputed remains as old as 1.8 Ma, as are Dubois' famous discoveries of the type specimens of H. erectus by the Solo River in Java.  From the time when Zhoukoudian became famous among Chinese apothecaries as a source of "dragon's bones" (a mixture of human and other animal remains) there has always been an air of myth about the findings there – a permanent dwelling occupied for hundreds of thousands of years, protected from glacial temperature falls by the consistent use of fire.  In essence, the publicised view is that "Peking Man" led a cosy hearthside existence for a very long time indeed.  Boaz and Ciochon tell a different, and more mundane, story.
Most bones in the deposit are those of a great variety of other animals, with disproportionately few of human origin, and those are highly fragmented.  The dominant species is a giant hyena, and many of the bones, including the human ones, are well gnawed – something that hyenas do especially well.  There are occasional signs of human occupation and use of fire.  The human remains are encased in layered carbonate flowstone.  Records of fluctuating δ18O from that matrix, matched against the global time series of climate change, show that occupation was only during interglacials – the site was abandoned or unvisited during the depths of glacial periods.  Some animal bones show cut marks made by stone tools, and it is more likely that H. erectus raided the remnants of other beasts' kills, perhaps using fire to drive the predators off, than that it stood at the top of the predatory order.  The great surprise throughout Asia is the complete lack of development of stone tools from the primitive culture that arrived there, until as late as 20 to 30 thousand years ago, when Asian H. erectus vanished.  Apart from the stunning breakthrough to the bi-face axe, African erects also had a million-year-long cultural stasis – resting on laurels with a vengeance.  Finally, Boaz and Ciochon have shown that a number of skulls at Zhoukoudian bear signs of trauma.  These depression fractures were probably not always fatal, but they indicate sharp blows to the head with blunt instruments.  Their interpretation is that the Chinese erects settled disputes by bashing heads; so that aspect of culture has not changed a lot since.  Their story is not "politically correct", but with publication of their book, other palaeoanthropologists can judge it on the basis of the evidence from Dragon Bone Hill.

Faster development of Neanderthals

Go to any horse sale and you will see bidders closely studying the teeth of their prospective purchases; hence the saying, "Never look a gift horse in the mouth".  Teeth show growth ridges, and in grazing animals they are prominent, so that it is possible to judge the age of a horse easily and accurately.  Human teeth differ only in that their signs of growth are less obvious.  Microscopic examination reveals such records, down to the daily level, although the most prominent features are curious disturbances in enamel deposition that form approximately weekly.  They appear as ridges, known as perikymata, on the crowns of teeth.  The variable spacing of perikymata provides a record of the pace at which adult teeth develop.  In modern humans the spacing becomes very much closer in the later growth history (towards the tooth's cutting edge) than in its early stages, and reflects the slow development to full adult dentition.  In a painstaking study of hundreds of Cro-Magnon and Neanderthal teeth, Fernando Ramirez Rozzi of the University of Paris and José Bermudez de Castro of the Spanish National Museum of Natural Sciences have discovered an odd difference in the development rates of Neanderthals (Ramirez Rozzi, F.V. & Bermudez de Castro, J.M. 2004.  Surprisingly rapid growth in Neanderthals.  Nature, v. 428, p. 936-939).  The late perikymata of Neanderthals are more widely spaced than in Cro-Magnons and modern humans, strongly suggesting that Neanderthals developed to adulthood by about the age of 15, three to five years earlier than in us and our immediate ancestors.  As well as confirming that they are a separate species, the results suggest that Neanderthals, while acquiring brains as large as, and in some cases even larger than, ours, had evolved more rapid maturation and probably a genetically determined shorter adult life.  This would have had some effect on the transfer of culture, which in human societies is often the most important role of elderly folk.
The fewer samples of teeth from earlier human species (H. heidelbergensis and H. antecessor) reveal an even greater surprise.  They are more like modern human teeth (albeit with signs of somewhat faster growth), which suggests that the evolution of the Neanderthals involved a regression.  The authors suggest that the combination of a backward step to faster development with rapid brain growth to large size might reflect a very high-calorie diet together with adverse environmental conditions.

River incision and anticlines

In many areas of active deformation, landforms suggesting that uplift and river down-cutting keep pace are very common.  Stream courses cross zones of uplift, rather than being diverted or ponded to form lakes.  Traditionally, geomorphologists have described such drainages as "antecedent", i.e. rivers that were present before uplift began.  They can be seen at all scales, up to examples such as the Indus and Brahmaputra rivers that carve their way across the actively rising Himalaya.  The most common are anticlines through which streams flow in canyons perpendicular to the fold axes.  A curious and common feature is that the canyons are not haphazard, but often cut the fold where its amplitude is greatest and its axis plunges away from the site of incision.  The stupendous rates at which crustal rocks are eroded and transported away in the courses of the Indus and Brahmaputra, and in lesser drainages on the flanks of major extensional systems, such as the Red Sea, clearly remove load from the crust.  Consequently, in both cases there is an isostatic component to the uplift at a grand scale.  Peter Molnar and Philip England suggested an erosional role in large-scale uplift over a decade ago.  Intervening ridges rise higher than they would if erosion were slower or non-existent.  In major rift systems, the highest peaks often lie within the escarpments rather than at the lip of uplift, sometimes more than 500 m higher.  Bearing this well-known process in mind, Guy Simpson of ETH Zurich has sought evidence that it functions at much smaller scales (Simpson, G. 2004.  Role of river incision in enhancing deformation.  Geology, v. 32, p. 341-344).  That evidence comes from the surprising symmetry of doubly plunging anticlines that are cut by rivers at their highest point.  His modelling suggests that the phenomenon can occur when the crust deforms plastically, allowing isostatic response to erosion at even minor scales during compression.
When deformation is by brittle means, any uplift of rigid crust is flexural and has long wavelengths, so that rivers bear no relation to local structures.
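For those who want to check the isostatic book-keeping behind this argument, here is a minimal Airy-type sketch in Python; the densities are typical textbook values, not figures from Simpson's model:

```python
# Airy-type isostatic rebound from erosional unloading: removing a
# thickness E of crust (density rho_c) from a column floating on mantle
# (density rho_m) lets the column rise by E * rho_c / rho_m, so the net
# surface lowering is only E * (1 - rho_c / rho_m).

RHO_CRUST = 2700.0   # kg/m^3, typical upper-crustal density (assumed)
RHO_MANTLE = 3300.0  # kg/m^3, typical mantle density (assumed)

def isostatic_rebound(eroded_thickness_m: float) -> float:
    """Uplift of the remaining column after eroding the given thickness."""
    return eroded_thickness_m * RHO_CRUST / RHO_MANTLE

def net_surface_lowering(eroded_thickness_m: float) -> float:
    """Surface drop left once isostatic rebound has compensated erosion."""
    return eroded_thickness_m - isostatic_rebound(eroded_thickness_m)

eroded = 1000.0  # m of rock cut away by a river gorge
print(f"rebound: {isostatic_rebound(eroded):.0f} m")          # ~818 m
print(f"net lowering: {net_surface_lowering(eroded):.0f} m")  # ~182 m
```

Because the rebound is spread over a region while the incision is local, ridges flanking the gorge can end up higher than they were before cutting began, which is the essence of the effect Simpson models.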

Water on Mars; almost official

Two lines of evidence from the current robotic explorations of Mars, less tenuous than earlier ones, add to the case that the planet is really wet – icy, to be precise.  One is mineralogical.  Spectroscopy of the surface across which a NASA rover is slowly trundling shows abundant signs of the hydrated iron-potassium sulphate jarosite, which probably can only form under wet conditions.  When it was precipitated is not known with certainty, but it occurs in layered sediments containing structures that clearly point to transport in, and deposition from, surface water.  The time when liquid water could exist at the surface probably goes back to the earliest events on Mars, tied to the famous canyons and more recently discovered dendritic drainage patterns.  The other evidence stems from even more remote sensing, which captures short-wavelength infrared radiation emitted by the Sun and reflected from the Martian surface.  Ices of water and carbon dioxide have distinctive reflectance spectra, because of the different ways in which they absorb a small proportion of solar radiation.  Results from the OMEGA instrument aboard the European Space Agency's Mars Express satellite show that the south polar region contains as much as 15% water ice mixed with solid CO2 (Bibring, J-P. et al. 2004.  Perennial water ice identified in the south polar cap of Mars.  Nature, v. 428, p. 627-630).

Devonian broad-shouldered fish

How, when and under what circumstances vertebrates got the limbs that took them charging across the forested land of the late Palaeozoic form a central issue in our own evolution, as well as that of the other four-footed land animals.  By analogy with the functional though rather rudimentary enlarged fins of various modern fish that flop from pond to pond during dry seasons, many vertebrate palaeontologists have considered limbs to be evolutionary adaptations in air-breathing fish once they made air-breathing a habit.  As so often, the fossil record has not given up enough evidence for that to be certain.  Now an upper foreleg bone (humerus) has turned up in Late Devonian rocks from Pennsylvania, at a time and in a context that strongly suggest it was carried by a fish (Shubin, N.H. et al. 2004.  The early evolution of the tetrapod humerus.  Science, v. 304, p. 90-93).  While not able to ride a bicycle, this advanced fish probably used what became limbs to hold itself motionless while lying in ambush for its prey.  That would provide a plausible point of departure from which walking might develop.

Early biomarkers in South African pillow lavas

It is now established that various kinds of bacteria infest rocks down to depths of 2 km or more, one particularly favourable habitat being sea-floor basalts through which hydrothermal fluids travel.  Although the majority probably inhabit cracks and joints, some seem to work actively to corrode rock, especially volcanic glass, thereby obtaining mineral nutrients.  Signs of this microbial corrosion in modern volcanic glasses are radiating tubes at a scale of a few micrometres, which show up in micrographs and may well have been overlooked by petrographers in all kinds of rock.  That they are definitely formed by organic activity is demonstrated by the presence of nucleic acids, carbon and nitrogen in the tubules.  Carbon isotopes from them show the strong depletion in 13C that is the hallmark of organic fractionation of natural carbon.  A team of geoscientists from Norway, Canada and the USA, who have steadily accumulated evidence for biological rotting in modern oceanic basalts, turned their focus to the oldest well-preserved pillow lavas, in the 3.5 billion-year-old Barberton greenstone belt of north-eastern South Africa (Furnes, H. et al. 2004.  Early life recorded in Archean pillow lavas.  Science, v. 304, p. 578-581).  Virtually identical microtubules seem common in them too, particularly in hydrated glasses that are now tinged with the low-grade metamorphic mineral chlorite.  Indeed, chlorite seems to have grown preferentially from clusters of the holes, which suggests that they formed before metamorphism of the basalts.  Micro-geochemical studies confirm the presence of hydrocarbons with low δ13C.  The bulk of the tubules occur in the inter-pillow debris, which probably formed as glassy rinds when magma extruded onto the Archaean sea floor.  As well as adding to evidence for ancient terrestrial life, the find has inevitably opened up the search for such signs in meteorites reckoned to have come from Mars.
In two of them, olivine grains show similar structures, although it is a puzzle why the olivine had not broken down in the presence of the water that is essential for life, which makes such observations worth taking with a pinch of salt.  A number of studies have stymied claims for early bacterial fossils (see Artificial Archaean "fossils" and Doubt cast on earliest bacterial fossils, April 2002 and December 2003 issues of EPN), and inorganic processes conceivably might create structures that can be mistaken for ones formed by biological action.  The Fischer-Tropsch process is capable of producing hydrocarbons, and produces depletion in 13C abiogenically.  In the on-line April edition of Science Express (www.sciencexpress.org) experiments are reported that highlight the possible influence of chromium-bearing mineral catalysts in hydrothermal generation of hydrocarbons from inorganic carbon dioxide (Foustoukos, D.I. & Seyfried, W.E. 2004.  Hydrocarbons in hydrothermal vent fluids: the role of chromium-bearing catalysts.  Science Express, April 2004).  The Barberton greenstone belt is well known for ultramafic lavas rich in chromium, as are most early volcanic sequences.
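The 13C depletions referred to here are expressed in standard delta notation, per-mil deviations of a sample's 13C/12C ratio from a reference standard.  A minimal sketch of the conversion, assuming the conventional VPDB reference ratio:

```python
# delta-13C notation: per-mil deviation of a sample's 13C/12C ratio from
# the VPDB standard. Carbon processed by photosynthetic organisms is
# typically strongly negative (around -25 per mil), which is why low
# delta-13C is read as a biological fingerprint.

R_VPDB = 0.0112372  # 13C/12C ratio of the VPDB standard

def delta13C(r_sample: float) -> float:
    """Return delta-13C in per mil for a measured 13C/12C ratio."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

# A sample whose ratio is 2.5% below the standard:
print(f"{delta13C(R_VPDB * 0.975):.1f} per mil")  # -25.0 per mil
```

The caution urged above applies precisely because abiogenic pathways such as Fischer-Tropsch synthesis can also yield strongly negative values.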

See also:  Kerr, R.A. 2004.  New biomarker proposed for earliest life on Earth.  Science, v. 304, p. 503.

Australian surface not so old

One of the most widely quoted bits of geological information in non-specialist literature is that the oldest land surface on Earth is that of interior Australia.  Vast tracts are Precambrian rock, capped in places by horizontal Permian glaciogenic rocks, but for the most part by relics of lateritic palaeosols that give the interior its famous red appearance.  The oldest outlying platform sediments are 1100 Ma old, so the actual surface does date back at least that far, but has it been exposed at the surface for that long?  Dating the present surface has not been easy.  New methods involving the creation of unstable isotopes by cosmic-ray bombardment offer a solution (see Measuring erosion rates, February 2002 issue of EPN), combined with apatite fission-track dating (Belton, D.X. et al. 2004. Quantitative resolution of the debate over antiquity of the central Australian landscape: implications for the tectonic and geomorphic stability of cratonic interiors.  Earth and Planetary Science Letters, v. 219, p. 21-34).  The results suggest that Australian landscape antiquity is a myth.  Erosion rates since the Cambrian varied over most of the Red Centre from 0.4 to 4.0 metres per million years, and reached as high as 17 m per Ma on occasion.  They suggest a common-or-garden history, comparable with those of most continental interiors.  Again and again the surface has been buried by sediments, albeit remaining flat, and equally it has been exhumed several times by erosion.  Only at the outset of the Cenozoic did much of it sit unchanged for long, which enabled its red surface to develop.  The present surface is covered with what Australians term regolith, but much of that is material reworked from the Palaeocene laterites that sits in a network of shallow drainage systems, including huge ephemeral lakes.
It might seem that recourse to Hutton's "the present is the key to the past" should long ago have dispelled the myth of the gnarled old place of which Australians have become inordinately proud.
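The cosmogenic method behind those erosion rates can be sketched simply: under steady erosion, nuclide concentration at the surface reaches an equilibrium between production, radioactive decay and removal of rock.  The values below (for 10Be) are illustrative textbook figures, not data from Belton et al.:

```python
# Steady-state erosion rate from a cosmogenic nuclide concentration.
# At equilibrium, N = P / (lam + rho * eps / LAMBDA), which inverts to
#   eps = (LAMBDA / rho) * (P / N - lam)
# where P is the surface production rate, N the measured concentration,
# lam the decay constant, rho the rock density and LAMBDA the
# attenuation length of cosmic rays in rock.

LAMBDA_ATTEN = 160.0  # g/cm^2, spallation attenuation length (assumed)
RHO_ROCK = 2.7        # g/cm^3, rock density (assumed)
DECAY_10BE = 4.99e-7  # per year, approximate 10Be decay constant

def erosion_rate_m_per_myr(production: float, concentration: float) -> float:
    """Erosion rate (m per Ma) from production rate (atoms/g/yr)
    and measured nuclide concentration (atoms/g)."""
    eps_cm_per_yr = (LAMBDA_ATTEN / RHO_ROCK) * (production / concentration - DECAY_10BE)
    return eps_cm_per_yr * 1.0e4  # convert cm/yr to m per Ma

# e.g. P = 5 atoms/g/yr and N = 1e6 atoms/g (hypothetical sample):
print(f"{erosion_rate_m_per_myr(5.0, 1.0e6):.2f} m per Ma")  # ~2.67 m per Ma
```

A result of a few metres per million years sits squarely in the 0.4 to 4.0 m per Ma range quoted for the Red Centre; only very low concentrations would support the supposed billion-year stillness.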

Weak jaws allow bigger brains

There is no topic in the geosciences more interdisciplinary than that of human origins.  Geologists, anthropologists (social as well as physical), archaeologists, geochemists, linguists, geneticists, dentists, specialists in nutrition and even novelists (for example Jean M. Auel) contribute.  Everyone is interested, and so everyone not only wants to have a say, but somehow to be involved.  Again and again in these pages, it becomes clear that bones and artefacts alone can no longer produce major breakthroughs.  The Out of Africa hypothesis, although suggested by Charles Darwin and many palaeoanthropologists since, became widely accepted (though not universally) after the evidence for relatedness emerged from comparisons of mitochondrial DNA from women throughout the world.  That showed clear signs of a last common ancestor for all human groups around 200 thousand years ago, to whom modern Africans are most closely related.  At the end of March 2004 geneticists again came up with something startling, but this time not guessed at before.

The first beings to whom the generic name Homo seems appropriate appear in the hominid fossil record about 2.0 million years ago.  Apart from evidence for bipedality and their association with rudimentary but nonetheless deliberately made stone tools, the earliest humans are marked by the fragility and roundness of their skulls.  Many specialists have argued that "gracile" crania are an evolutionary prerequisite for the growth of brain capacity – they can expand over a long period during development, before becoming completely ossified in adulthood.  The predecessors of these early humans (australopithecines) and their close companions in the African savannahs (paranthropoids) had smaller brain capacities and also very bony heads.  The paranthropoids, undoubtedly as closely related to earlier hominids as the first tool-making humans were, survived as a group for another million years, but never expanded their brains, nor presumably their intellects.  Bone-headed hominids had one feature in common with all earlier apes, and with the genera that survive today: powerful jaws and the muscles that drive them.  To some degree or other they all have crests on top of their skulls, which provide the seats for these big jaw muscles.  Wielding awesome biting power requires skull strength, and therefore bulky bone.  That encumbers any possibility for expansion of the internal brain cavity, and also locks the species that bear such muscles into narrow feeding habits.

A team of geneticists, anatomists, developmental biologists and plastic surgeons from the University of Pennsylvania and the Children's Hospital of Philadelphia have studied one gene sequence of several that encode a type of protein (myosin heavy chain) associated with the powerhouse muscles that attach directly to bone, such as those which drive jaws (Stedman, H.H. and 9 others 2004.  Myosin gene mutation correlates with anatomical changes in the human lineage.  Nature, v. 428, p. 415-418).  Their investigation began with an interest in muscular dystrophy and its possible underlying factors.  Specifically, the most interesting gene (MYH16) is expressed in primate jaw muscles.  The human gene contains a mutation that prevents the accumulation of the protein in our jaw muscles, so they cannot be as strong as those of other primates, and mammals in general, in which the gene functions as it should.  By analysing MYH16 and related gene sequences in humans from widely separated populations, the researchers showed that the mutation in MYH16 diverged earlier than those in other MYH-related genes.  Estimating the time of that divergence involved detailed analysis of the mutations in other living species – dogs, macaque monkeys, orang-utans and chimpanzees.  This showed that MYH16 evolved under Darwinian selection, conferring a fitness advantage, in the ancestral lineages leading to each of those species, whereas in humans there was no selective constraint.  Under the second condition, it can be assumed that evolutionarily neutral changes took place at a constant rate.  Calculations suggest that in the human lineage the mutation appeared 2.4±0.3 Ma ago.  That coincides with the earliest appearance of tools, and comes a little earlier than the first fossil remains of early Homo.
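The logic of that neutral-rate dating can be illustrated with a toy molecular clock.  The substitution counts and calibration below are invented to reproduce the published 2.4 Ma figure; they are not Stedman and colleagues' data:

```python
# Molecular-clock sketch: once a gene is freed from selective constraint,
# mutations accumulate at a roughly constant neutral rate, so a divergence
# time follows from (observed substitutions per site) / (calibrated rate).

def divergence_time_ma(subs_per_site: float, rate_per_site_per_ma: float) -> float:
    """Time in Ma since a neutrally evolving sequence began diverging."""
    return subs_per_site / rate_per_site_per_ma

# Calibrate the neutral rate from a known split, e.g. a human-chimp
# divergence of ~6 Ma accumulating ~0.012 substitutions per site
# (both numbers hypothetical, chosen only for illustration):
rate = 0.012 / 6.0  # substitutions per site per Ma

# A pseudogene carrying 0.0048 substitutions per site would then date to:
print(f"{divergence_time_ma(0.0048, rate):.1f} Ma")  # 2.4 Ma
```

The real analysis is far subtler, correcting for multiple hits and rate variation among lineages, but the arithmetic at its core is no more than this division.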
The conclusion could be one of several: loss of biting power created the conditions for expansion of a lighter skull; a diet changed to include more meat reduced the need for strong jaws, so that the mutation had no deleterious effect; or hands freed by walking upright did much of the work that other primates can only accomplish with their mouths.  Whichever applies, once the mutation was established without decreasing fitness, a chance event had opened the road to enlarged brains and fuller consciousness.

See also:  Ananthaswamy, A. 2004.  Less bite, more brain.  New Scientist, 27 March 2004, p. 7;  Currie, P. 2004.  Muscling in on hominid evolution.  Nature, v. 428, p. 373-374.

Dental records of earliest hominids

Conditions on land are not as conducive to the preservation of fossil remains as those on the sea floor.  When an animal dies it is generally eaten, what is left rots and is gnawed, and the action of wind and water breaks up the skeleton and transports it; only this debris is preserved, and then only if it is buried by sediment.  The best chance of preservation is if the animal falls into a lake or bog or, in the case of fully modern humans, if it is deliberately buried.  The so-called Turkana Boy (H. erectus) is an almost complete skeleton because he did end up, uneaten, in a swamp.  Sturdy, large animals, and those small and light enough to be quickly washed to burial, stand the best chance of appearing as complete fossils.  Primates are medium-sized and lightweight, and that presents palaeoanthropologists with their single biggest problem: the incompleteness of most fossils that they find.  In the depths of the Afar Depression of Ethiopia and Eritrea, the most productive area for hominid specialists, conditions from the early Miocene onwards were not the best for preservation.  While the depression developed by extensional tectonics, its flanks rose to form the mighty Ethiopian escarpment, from which torrents flowed seasonally.  High-energy streams will clearly break up any articulated skeleton and batter what is left before it ends up in gravels and sands on the floor of the depression.  So it is a credit to the patience, experience and sheer visual acuity of those who work there that they can piece together the earliest parts of the human story.  Yohannes Haile-Selassie, Gen Suwa and Tim White have pushed the record further back, and in more detail, than any other group, thanks in part to the richness of the Miocene to Recent Middle Awash sedimentary and volcanic sequence with which they work.  In 2001 Haile-Selassie discovered the earliest Afar hominid so far (see Taking stock of hominid evolution, March 2002 issue of EPN), Ardipithecus ramidus kadabba, dated between 5.2 and 5.8 Ma.
In age it roughly matches Sahelanthropus and Orrorin, from Chad and Kenya.  Only a leg bone from Orrorin gives some indication that it was bipedal, but all show cranial features that mark them out as probable hominids.  Of all the body parts of any animal, the teeth are the most likely to survive with little change.  Because our closest living relatives are chimpanzees, comparing early teeth with theirs, as well as with those of later hominids, is about the best that can be done to seek relatedness.  The three notable workers on Awash hominids have now reported their results (Haile-Selassie, Y. et al. 2004.  Late Miocene teeth from Middle Awash, Ethiopia, and early hominid dental evolution.  Science, v. 303, p. 1503-1505), which suggest that the earlier find is a distinct species, Ardipithecus kadabba.  Putting together upper and lower canines and adjacent premolars shows a close resemblance to those of modern chimps.  However, it takes detailed measurements of the tooth shapes to check whether the resemblance is more than superficial, and it turns out not to be.  All extinct and modern apes show signs of automatic honing of their canines, whereas hominids do not.  Not only A. kadabba, but Orrorin and Sahelanthropus too, show no sign of canine honing.  That marks all three as probable early members of the human lineage.  Yet the three show such close similarity that it is hard to support the idea that they are from anatomically different genera, despite their occurrence thousands of kilometres apart.  It is that close resemblance (in other features as well) that re-opens the long debate between a complex, messy "bush" of human descent made up of many contemporary, different creatures, and a single line of descent.  Dental features alone are not enough to decide between the two.

New take on end-Palaeocene warming

Six years ago vast areas of Indonesia caught fire after an unusually dry phase in the El Niño – Southern Oscillation (ENSO).  Burning forest and peat deposits swathed a vast area in smoke, but another alarming aspect was the greatest addition of carbon dioxide to the atmosphere in half a century.  Such a wildfire on a global scale is thought to have marked the end of the Mesozoic, perhaps triggered by the K-T impact event and encouraged by a higher oxygen content in the atmosphere.  Present oxygen levels seem to be at a balance that staves off spontaneous combustion of green vegetation, but only a few percent more would render vegetation much more prone to bursting into flame.  The end of the Palaeocene involved a sudden global warming that coincides with a decrease in the proportion of 13C in marine carbonates.  Since photosynthesis, at the base of the trophic pyramid, favours light 12C, such a negative δ13C "spike" is generally ascribed to an unusually large release of organic carbon to the environment.  The end-Palaeocene warming may have resulted from a massive release of methane from gas hydrate buried in shallow seafloor sediments (see Methane hydrate – more evidence for the 'greenhouse' time bomb and Plankton and the end of the Palaeocene-Eocene global warming, August and October 2000 issues of EPN).  However, massive burning of living biomass could also produce the carbon-isotope signal.  Telling the two mechanisms apart requires information from other organic-related cycles.  One key is to compare the carbon- and sulphur-isotope records, which makes it possible to identify where the carbon had been stored geologically.  For marine burial, the effect of aerobic bacteria, which completely oxidise hydrocarbons back to carbon dioxide and water, needs to have been suppressed.
Periods of massive marine carbon burial coincide with episodes of ocean anoxia, when anaerobic bacteria beneath the seafloor reduce dissolved sulphate ions to sulphides, thereby depositing lots of iron sulphide (pyrite) in black organic mudrocks.  This sequesters sulphur enriched in 32S into marine sediments, so that the marine carbon- and sulphur-isotope records fluctuate in a clearly related way.  During the Palaeocene this relationship is absent, while overall the carbon isotopes do signify progressive burial of organic carbon.  The decoupling of the two cycles points to carbon burial on the continents, forming peat and eventually coal deposits.
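The leverage that the source's isotopic composition has on such a "spike" is easy to see from a simple mass balance.  The reservoir size and δ13C values below are round illustrative numbers, not figures from the papers cited:

```python
# Isotope mass balance for a negative carbon-isotope "spike": mixing a
# mass m_add of isotopically light carbon (delta_add) into an exchangeable
# reservoir m0 (at delta_0) shifts the mixture to delta_final, where
#   delta_final = (m0*delta_0 + m_add*delta_add) / (m0 + m_add)
# Solving for the added mass:

def added_carbon(m0: float, d0: float, d_final: float, d_add: float) -> float:
    """Mass of added carbon (same units as m0) needed for an excursion."""
    return m0 * (d0 - d_final) / (d_final - d_add)

# Illustrative numbers: a 40,000 Gt exchangeable carbon reservoir at
# 0 per mil, shifted by -2.5 per mil. Biogenic methane (~-60 per mil)
# needs far less carbon than burned biomass (~-25 per mil):
print(f"{added_carbon(40000, 0.0, -2.5, -60.0):.0f} Gt as methane")  # ~1739 Gt
print(f"{added_carbon(40000, 0.0, -2.5, -25.0):.0f} Gt as biomass")  # ~4444 Gt
```

This is why the lighter the proposed source, the smaller the mass of carbon needed, and why distinguishing hydrate methane from burned or exhumed biomass matters so much for the end-Palaeocene budget.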

Playing games on Snowball Earth

For as long as anyone can remember there has been a parade of geoscientific bandwagons in town.  Three of the floats today carry banners saying "Snowball Earth", "Climate models" and "Continental erosion and CO2 drawdown".  Of course there is serious science aboard each, but they are getting overcrowded, especially as separate bands try to jump from one to another.  When it sometimes seems, as now, that the next Big Thing is some way off, we get the unseemly spectacle of some bands trying to straddle two or even several of the wagons.  Three is quite a feat, yet the 18 March 2004 issue of Nature contains perhaps not a vast human pyramid, but at least a tetrahedron of the genre (Donnadieu, Y. et al. 2004.  A "snowball Earth" climate triggered by continental break-up through change in runoff.  Nature, v. 428, p. 303-306).  From about 1100 to 750 Ma ago, the bulk of continental lithosphere was gathered in a supercontinent known as Rodinia (from the Russian for "Mother Earth").  By analogy with modern Eurasia, and the stratigraphic record from the Phanerozoic Pangaea supercontinent, the centre of Rodinia would almost certainly have been dry, being so far from the ocean.  Break-up of that continental mass would probably have allowed moist maritime air to penetrate over a larger proportion of the fragments.  The hypothesis that Donnadieu and colleagues try to test, using linked geochemical and climate models, is that such a tectonic change would increase continental weathering and reduce the "greenhouse" effect.  The weak acid formed by solution of carbon dioxide in rainwater provides hydrogen ions that break down silicate minerals.  The reactions contribute bicarbonate and soluble metal ions to surface and subsurface water.  Ultimately, both reach the oceans and contribute to their chemistry.
If conditions are suitable, calcium ions in particular combine with bicarbonate to precipitate calcium carbonate on the ocean floor, either through the action of organisms or inorganically.  The two chemical equilibria involved result in a net burial of one carbon atom out of the two involved in the weathering, thereby drawing down carbon dioxide from the atmosphere.  The climate model used in their cyber-experiment resolves the Neoproterozoic Earth into cells of 10 x 10 degrees (about 1.2 million km2 at the equator) and considers Rodinia at 800 Ma and the result of its break-up at 750 Ma, the time of the first good evidence for extensive low-latitude glaciation.  The results, after some tinkering, suggest that increased continental weathering could have reduced CO2 levels to 250 parts per million.  Taking account of a 6% less energetic Sun at the time, this would have produced sufficient cooling for ice caps to exist at sea level at the equator.  So, taken at face value, the hypothesis seems plausible.  However, there are major snags.  First, in a mere 50 million years their model sees continental dispersion on a scale that has not yet happened to Pangaea in about 200 Ma of Phanerozoic time.  Second, since continental area remains constant, the proportion of rainfall, and therefore of weathering and runoff, involving continental crust also stays fixed.  Third, continental weathering refers to the crystalline part of the crust, which contains unstable minerals, such as feldspars, that can do the chemical trick.  We have little idea how much of the continents at that time was veneered by sediments, which are the products of earlier chemical weathering and contribute nothing to the process.  Exposing such deep crust depends to a large extent on mountain building, which continental extension does not encourage.
Fourth, carbon dioxide is not the only source of the hydrogen ions involved in weathering, especially as much of it goes on in groundwater – bacterial action and oxidation of iron sulphides create much more acid conditions than rainwater does.  Fifth, and most important, where is the complementary geochemical evidence?  Feldspars of the continental crust, on which the hypothesis mainly rests, have high contents of rubidium compared with their oceanic counterparts, and they are old.  Much of Rodinia was underpinned by crust formed as far back as 4 billion years ago.  Prolonged decay of 87Rb to radiogenic 87Sr makes the strontium isotopes of continental material very different from those of the ocean floor – it has a much higher 87Sr/86Sr ratio.  Since soluble strontium would be released to runoff by continental weathering, that signature makes its way to the ocean and should pop up in marine carbonates.  Although marine strontium-isotope ratios did rise a little in the Neoproterozoic, they did not peak until the very end of the era.  In fact, the details show that the periods around supposed "snowball" conditions involved downturns in the supply of radiogenic strontium to the oceans.  Whatever the model suggests, all that it amounts to is the equivalent of a table-top train set.
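For reference, the two chemical equilibria at the heart of the drawdown argument can be written out, using wollastonite (CaSiO3) as the conventional proxy for calcium silicates; their sum, the Urey reaction, shows why only one of the two CO2 molecules consumed by weathering stays buried:

```latex
% Silicate weathering consumes two molecules of CO2:
\mathrm{CaSiO_3} + 2\,\mathrm{CO_2} + \mathrm{H_2O} \longrightarrow
  \mathrm{Ca^{2+}} + 2\,\mathrm{HCO_3^-} + \mathrm{SiO_2}
% Carbonate precipitation on the sea floor returns one of them:
\mathrm{Ca^{2+}} + 2\,\mathrm{HCO_3^-} \longrightarrow
  \mathrm{CaCO_3} + \mathrm{CO_2} + \mathrm{H_2O}
% Net (Urey) reaction -- one carbon buried per silicate weathered:
\mathrm{CaSiO_3} + \mathrm{CO_2} \longrightarrow \mathrm{CaCO_3} + \mathrm{SiO_2}
```

The model's cooling therefore stands or falls on how much fresh, feldspar-bearing crust was actually exposed to attack, which is exactly where the snags listed above bite.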

Could ice sheets have existed in the Cretaceous?

Finds of Late Cretaceous dinosaur remains and substantial coal deposits at near-polar latitudes in both hemispheres seemed to confirm that the end of the Mesozoic experienced hothouse conditions.  Even so, both are very odd because of the darkness of polar winters; how could plants photosynthesise and supposedly cold-blooded reptiles stay warm?  To add to these oddities, it has now been suggested that periodically there were Antarctic ice sheets substantial enough to draw down sea level (Miller, K.G. et al. 2004. Upper Cretaceous sequences and sea-level history, New Jersey Coastal Plain.  Geological Society of America Bulletin, v. 116, p. 368-393).  The possibility comes from a detailed stratigraphic and palaeontological analysis of Late Cretaceous sequences on and off the eastern seaboard of the US.  There are 11 to 14 sequences that show shallowing-upwards changes in the near-shore environment, somewhat similar to the cyclicity of Carboniferous times.  Calibrating the section with strontium isotopes and fossil changes suggests that sea-level ups and downs greater than 25 metres occurred swiftly (in much less than 1 Ma).  This is considerably faster than changes due to variations in the volume of the ocean basins that result from fluctuations in sea-floor spreading rates, but if localised in eastern North America it might have resulted from local tectonics, such as episodic deepening related to extension.  The surprise is that the changes correlate well with those in western Europe and on the stable Russian platform, pointing to global, eustatic changes in sea level.  There is some correlation with oxygen-isotope records from foraminifera, so there is a strong possibility of a glacial cause.  The degree of fluctuation matches the effect on sea level of ice volumes of the order of one to ten million km3.  This is considerably more than the volume of the present Greenland ice cap, but on Antarctica it would have occupied only a small part of the surface.  
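The arithmetic linking the sea-level fall to the ice volume is easy to check. A back-of-envelope sketch (the ocean area and densities below are standard round figures, not values from the paper):

```python
# Back-of-envelope check (not from the paper): what grounded ice volume does a
# 25 m eustatic sea-level fall imply?  Ocean area and densities are round figures.
OCEAN_AREA_KM2 = 3.6e8               # ocean surface area, ~3.6 x 10^8 km2
DROP_KM = 25.0 / 1000.0              # a 25 m fall, expressed in km
RHO_WATER, RHO_ICE = 1000.0, 917.0   # densities in kg/m3

water_volume = OCEAN_AREA_KM2 * DROP_KM           # km3 of seawater removed
ice_volume = water_volume * RHO_WATER / RHO_ICE   # km3 of ice needed to hold it

print(f"water removed: {water_volume:.1e} km3; ice required: {ice_volume:.1e} km3")
```

That lands at the upper, ~10^7 km3, end of the quoted range, a few times the present Greenland ice cap (roughly 3 million km3), yet still only a fraction of the area of Antarctica.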
There is another alternative: that eustatic changes are simply not well understood, and that thinking about them is biased by the Pleistocene correspondence between sea level and the waxing and waning of northern continental ice sheets.  The amplitudes of the three different records do not match well, although their timing does.

Biology and iron minerals

The principal colouring agents in rocks, especially those of sedimentary origin, are iron minerals, foremost of which are oxides and hydroxides (e.g. hematite and goethite).  It doesn’t take much of either in a sedimentary grain coating to impart the vivid colour variations seen in some sedimentary formations.  It is easy to suppose that such veneers formed while the sediments were at the surface in an unconsolidated state, but there is much evidence that at least some, if not all, formed in buried sediments saturated with groundwater.  But the problem is getting the iron into pore spaces as well as precipitating its oxides and hydroxides.  Iron in its divalent state (Fe2+) is soluble, but exists only under reducing conditions, so it does not easily enter surface waters that supply groundwater.  In its trivalent state (Fe3+) iron is highly insoluble, and that is how it occurs in oxides and hydroxides.  Yet groundwater tends to lose its oxidising potential because dissolved oxygen is consumed by aerobic bacteria, and oxidation is required to convert soluble Fe2+ to insoluble Fe3+, so that hematite and goethite skins can form around sediment grains.  A clue to the precipitation method comes from a study of slime-encrusted surfaces in old mine workings (Chan, C.S. et al. 2004.  Microbial polysaccharides template assembly of nanocrystal fibers.  Science, v. 303, p. 1656-1658).  Although oriented towards the possibility of bacteria creating materials useful in nanotechnology, this non-geological paper might ring a few bells.  It shows how filaments (of the order of a few nm across) that make up bacterial slime are associated with similarly thin and long filaments of one of the precursors to goethite.  The bacteria involved use the oxidation (electron removal) of Fe2+ to Fe3+ as a source of metabolic energy.  
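One way of writing the overall energy-yielding reaction that such iron-oxidising bacteria exploit, with ferric hydroxide (a precursor of goethite) as the solid product, is:

```latex
\mathrm{4\,Fe^{2+} + O_2 + 10\,H_2O \rightarrow 4\,Fe(OH)_3 + 8\,H^+}
```

The electron stripped from each Fe2+ ion is what the cells harvest; the insoluble ferric product is the waste they must somehow keep away from their own machinery.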
They colonise highly reducing waters, so there is a ready source of dissolved Fe2+ for them to exploit, especially in old mine workings, but also in groundwater cut off from the air.  There is a snag for the bacteria, because Fe3+ is highly insoluble and could easily snuff out processes in the cells and cause their death.  So in evolving this chemo-autotrophic metabolism they would also have to evolve a means of disposing of its by-product.  The filaments are chains of polysaccharides grown outside the cell wall that act as templates for the precipitation of Fe3+ minerals.  The techniques used to show this include very high-resolution electron microscopy.  It would be interesting to see if very high-resolution images of iron-stained mineral grains reveal relics of these intricate structures.  Less powerful methods have already shown tiny spheres of magnetite, formed biogenically through another metabolic process, in sediments above petroleum fields.

How old is the Dalradian?

Half the Scottish Highlands, from the Great Glen to the Highland Boundary Fault, together with their equivalent in Ireland, is occupied by a convoluted orogen dominated by an almost exclusively sedimentary sequence of Neoproterozoic age – the Dalradian Supergroup.  Its importance is historical, for this is where many of the fundamental tenets used in unravelling complex terrains were developed and tested.  This still goes on, building on over a century of research in an easily accessible area.  Briefly, the Dalradian orogen evolved from a series of extensional basins, in a shelf area, that imposed considerable variations in thickness on the Dalradian sequence.  Protracted deformation in the Late Cambrian to Early Ordovician developed the structural complexity of the orogen, partly controlled by the original variations in sedimentary thickness.  We know the youngest age of the Dalradian, because its upper parts contain Cambrian fossils, estimated to be about 509 Ma old.  The earliest age for sedimentation has so far only been guessed, and must be younger than the 800 Ma of the migmatites on which its lowest members rest.  The problem is that only one series of dateable volcanic rocks occurs in the pile, and it is towards the top (601 Ma old).  At most the whole sedimentary sequence spans 300 Ma, and that in itself is most peculiar.  Most geologists have assumed continuous sedimentation under a great range of environments, but only because they have never found evidence for erosion in the sequence; hardly surprising given the structural complexity and the not-so-good exposure.  Yet nowhere on the planet is there a sedimentary sequence spanning such a time period that does not contain several unconformities; things have never been that quiet for so long.  
Probably the only feasible way to get a handle on the duration of the Dalradian sedimentation is by matching geochemistry of the numerous marine limestones in the sequence with the global record for the Neoproterozoic, that is by seeking signs of the secular variations in the composition of seawater during that Era.  Scottish geoscientists have applied that technique, using 47 samples of Dalradian limestones (Thomas, C.W. et al. 2004. 87Sr/86Sr chemostratigraphy of Neoproterozoic Dalradian limestones of Scotland and Ireland: constraints on depositional ages and time scales.  Journal of the Geological Society of London, v. 161, p. 229-242).  Unsurprisingly, the results do not show a smooth curve that can be matched directly with various estimates of secular change in seawater strontium isotopes; the limestones occur haphazardly through the sequence.  The effort is not helped by considerable differences between global seawater strontium isotope curves compiled by several authors, so Thomas and colleagues’ interpretation is limited.  Yes, the Dalradian is younger than 800 Ma, but by how much cannot be said with confidence.  Its base is an unconformity that represents erosion of an older 800 Ma orogen, and how long that took is anyone’s guess.  The lowest Dalradian limestone falls in a strontium-isotope span that matches that for about 700 Ma, which fits with recent evidence for continued thermal activity in the underlying complex at 730 Ma.  Around the middle of the Dalradian deposition there occurs one of the most spectacular examples of possible glaciogenic rocks in the Precambrian, the Port Askaig Formation, which has been widely regarded as a product of one of the “Snowball” Earth events of the late Precambrian.  If the Dalradian deposition did begin around 700 Ma, then this unit cannot have formed in the earliest and best documented Sturtian glacial episode at 730 Ma, but perhaps in the younger Marinoan-Varangerian one (640 to 560 Ma).  
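The matching exercise itself can be sketched numerically: interpolate a reference seawater 87Sr/86Sr curve and ask which ages are compatible with a measured limestone ratio, within some tolerance for analytical and curve uncertainty. The tie-point values below are invented for illustration and are not the curves Thomas and colleagues used:

```python
# Sketch of the chemostratigraphic matching idea: interpolate a reference
# seawater 87Sr/86Sr curve and find the ages compatible with a measured
# limestone ratio.  The tie points below are invented for illustration only.
import numpy as np

ages = np.array([550.0, 600.0, 650.0, 700.0, 750.0, 800.0])          # Ma, ascending
ratios = np.array([0.7085, 0.7078, 0.7070, 0.7067, 0.7063, 0.7060])  # hypothetical

def candidate_ages(measured, tol=2e-4, step=1.0):
    """Return the ages (Ma) at which the curve lies within tol of the sample."""
    grid = np.arange(ages.min(), ages.max() + step, step)
    curve = np.interp(grid, ages, ratios)
    return grid[np.abs(curve - measured) <= tol]

hits = candidate_ages(0.7066)
print(f"sample compatible with deposition between {hits.min():.0f} and {hits.max():.0f} Ma")
```

Because seawater curves are neither monotonic nor tightly agreed between compilations, a single ratio yields an age window tens of millions of years wide rather than a date, which is exactly the limitation the paper runs into.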
The paper concludes with the time-honoured phrase “…await the application of alternative dating techniques”.   It may be a long wait, and perhaps the most important unresolved aspects of the Dalradian are whether or not its 30 km maximum thickness represents several distinct depositional basins, and if it contains numerous breaks in deposition.

A “Whoops” moment for geochemists?

A great deal of effort and innumerable theses and papers have gone into modelling the derivation of magmas from their parent rocks, especially the mantle, over the last three decades.  Most of this work is based on the division of trace elements into “compatible” and “incompatible”, the first being those which tend to remain in minerals that make up the residuum during magmagenesis, and the second those that favour melts.  Most incompatible elements have large ionic radii.  The modelling centres on the degree to which elements remain in solids, the appropriate parameter being an element’s mineral-melt partition coefficient (KD).  Partition coefficients are usually deduced from an element’s abundance in phenocrysts that are in contact (and supposed equilibrium) with an igneous rock’s groundmass material, which is assumed to have formed from magma, and its concentration in that once liquid phase.  Models for partial melting and fractional crystallisation, plus several variants, all involve KDs, for olivines, pyroxenes, feldspars, garnet, amphiboles and so on.  For the generation of basaltic magmas, the first step is partial melting in the mantle itself, for which direct estimation of KDs is not possible.  Instead they are assumed from mineral-melt chemistries in crustal igneous rocks, with some allowance for elevated temperatures and pressures and other conditions.  Each mineral has its own distinctive suite of KDs for many elements, and the chemistry of an igneous rock has often been traced back to which suite of minerals was present in a residue, i.e. the source rock itself, as well as the degree to which one or other process proceeded.  The 19 February 2004 issue of Nature included an ominous article (Hiraga, T. et al. 2004.  Grain boundaries as reservoirs of incompatible elements in the Earth’s mantle.  Nature, v. 427, p. 699-703). 
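As a concrete illustration of how KDs enter such models, here is a minimal sketch of the standard batch (equilibrium) melting equation; the mineral modes and KD values are illustrative numbers, not taken from any particular study:

```python
# Minimal sketch of the standard batch (equilibrium) melting model:
#   C_L / C_0 = 1 / (D + F * (1 - D))
# where D is the bulk solid/melt partition coefficient and F the melt fraction.
# Mineral modes and KD values below are illustrative only.

def bulk_D(modes, kds):
    """Bulk partition coefficient: modal weight fractions times mineral KDs."""
    return sum(m * kd for m, kd in zip(modes, kds))

def batch_melt_enrichment(D, F):
    """Melt concentration relative to the source, C_L / C_0."""
    return 1.0 / (D + F * (1.0 - D))

# an incompatible element in an olivine-dominated residue (ol, opx, cpx), 5% melting
D = bulk_D([0.60, 0.25, 0.15], [0.001, 0.01, 0.05])
print(f"bulk D = {D:.4f}; enrichment at F = 0.05: {batch_melt_enrichment(D, 0.05):.1f}x")
```

A strongly incompatible element ends up enriched in the melt by a factor approaching 1/F, so everything hinges on whether the mineral KDs feeding into D really describe where the element sat in the solid.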

The study by geochemists at the University of Minnesota and Oak Ridge National Laboratory, USA, concentrated only on the mineral olivine, and a few elements present at trace levels in it.  Their experiments simulated equilibrium under mantle pressures and temperatures.  Results showed that incompatible elements in olivine, such as Ca and Al, tend to concentrate mainly at boundaries between grains where they are readily available to any melt that starts to form, rather than uniformly throughout the mineral grain.  The finer the grain size of the rock, the greater the area of grain boundaries, and so the more incompatible elements tend to be concentrated at them.  The tendency is predictable on thermodynamic grounds, but has only been studied previously in alloys and other artificial materials.  Geochemists have generally regarded grain boundaries as places where impurities in rocks gather.  If the same rock is analysed with and without the crushed powder having been washed in acid, different trace element concentrations result.  This has been attributed to secondary effects, such as the passage of hydrothermal fluids or groundwater.  Since KDs that are used widely involve concentrations in whole mineral grains, the basis of geochemical modelling might be compromised.  Melting begins at grain boundaries, so the low degrees involved in generating basalts could be biased by the effect.  Moreover, vapour phases moving through the mantle (supercritical water and CO2) will follow grain boundaries too, and so may easily pick up and transport incompatible elements.  Their entry into the crust carrying mantle-derived incompatible elements, such as rare-earths, strontium and lead, would lead to metasomatic effects that could play havoc with interpretations of isotopic data based on these elements.  Carbonatites, probably formed from mantle-derived carbonic fluids, are enriched in many incompatible elements.  
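The grain-size dependence is simple geometry: for roughly equant grains, specific grain-boundary area scales with the inverse of grain diameter. A back-of-envelope sketch, in which the grain-boundary width and grain sizes are illustrative assumptions:

```python
# Simple geometry behind the grain-size effect (values are illustrative):
# a boundary layer of width delta is shared between two grains, so each grain of
# diameter d contributes a shell of thickness delta/2, giving a volume fraction
#   f ~ (surface/volume) * (delta/2) = (6/d) * (delta/2) = 3 * delta / d

def boundary_volume_fraction(d, delta=1e-9):
    """Approximate volume fraction of a rock lying within a grain-boundary layer."""
    return 3.0 * delta / d

for d_mm in (10.0, 1.0, 0.1):
    f = boundary_volume_fraction(d_mm * 1e-3)   # grain size converted to metres
    print(f"grain size {d_mm:4.1f} mm -> boundary volume fraction ~ {f:.1e}")
```

The boundary region is a vanishingly small fraction of the rock, which is precisely why hosting a disproportionate share of the incompatible-element budget there matters so much to the first increments of melt.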
Similarly worrying data, such as estimates of the incompatible element partitioning into carbonic fluids, have emerged in the past, but so far have been notable only for the silence with which most geochemists greeted them.

Kennewick Man may not be re-interred

Seven and a half years after the discovery of a 9300-year-old human skeleton in Columbia River alluvium in Washington state, USA, researchers may finally be able to study the remains.  So-called Kennewick Man caused a storm when first unearthed, for his skull was very different from that of any other early American colonist.  Indeed, partial studies suggested close resemblance to Europeans.  Four Native American tribes in the Pacific Northwest claimed the skeleton for reburial, under the Native American Graves Protection and Repatriation Act.  The move was not entirely connected with respect for sacred rites.  Evidence that the area might have been first colonised by people who were not related to the tribes living there just before European occupation in the 19th century could undermine claims for mineral and other land rights by native people.  On 4 February 2004 a San Francisco court ruled that the remains were so different from any North American indigenous people that the claimants had no rights over them.  Studies of a cast of Kennewick Man’s skull, made while he has been under lock and key, now suggest a possible origin from Asian hunter-gatherers similar to the Ainu people of modern Japan.  However, modern techniques of genetic analysis and isotopic studies of tooth enamel that could settle the issue of origin and relatedness require the original material.  Interestingly, a spear point is lodged in the pelvis, so, like the famous Ice Man of the Italian-Austrian Alps, Kennewick Man may have been the victim of either a deadly dispute or a ritual killing.