Impact cause for Younger Dryas draws flak

Almost a year ago two dozen scientists presented evidence to suggest that onset of the Younger Dryas at 12.9 ka followed upper-atmosphere explosions of cometary material (Firestone, R.B. and 25 others 2007. Evidence for an extraterrestrial impact 12,900 years ago that contributed to the megafaunal extinctions and the Younger Dryas cooling. Proceedings of the National Academy of Sciences of the United States of America, v. 104, p. 16016-16021; see Whizz-bang view of Younger Dryas in EPN July 2007). Evidence cited included: excess iridium; tiny spherules; fullerenes containing extraterrestrial helium; nanodiamonds; and evidence for huge wildfires. Not quite the Full Monty, as neither crater nor shocked mineral grains were claimed, hence the team’s opting for a cometary airburst. In North America such signs were said to overlie the last known occurrences of Clovis tools at seven archaeological sites (see Clovis First hypothesis dumped below). It was pretty clear that the suggestion of a hitherto unnoticed event with a widespread signature – 26 sites either side of the Atlantic were cited – was going to be challenged, and so it has been (see Kerr, R.A. 2008. Experts find no evidence for a mammoth-killer impact. Science, v. 319, p. 1331-1332), perhaps not unconnected with the blaze of publicity surrounding the paper’s appearance, including several TV documentaries.

Well, say experts, sooty layers do suggest large-scale fires, but forest fires occur every year, especially when humans are around. Fullerenes or ‘buckyballs’ can equally well form terrestrially, except those containing ET helium. The last are regarded by many critics as ‘inventive’; none has been isolated since such combinations were first reported in 2001 (see Extinctions by impacts: smoking artillery in EPN March 2002). The accepted methodology for detecting tiny diamonds seems to have been ignored, and the method claimed to have found them misused. The iridium ‘spike’ – crucial in identifying the global nature of the K-T event – is not by itself enough to claim an impact. Astonishingly, the authors cited such a Younger Dryas iridium spike in a Greenland ice core, yet the originator of those data says his paper reports no abnormal iridium at 12.9 ka or anywhere during the YD. Microspherules rain down all the time with interplanetary dust, and do not constitute sound evidence either.

So, what on Earth is going on? A collaboration between 26 authors, who willingly supply other workers with materials for checking, surely cannot be conspiring at a hoax. Impact experts are hinting at ‘over-enthusiasm’ by a team outside the ‘impact community’. It all sounds oddly similar to the furore that in 1980 greeted the first suggestions by the Alvarezes of a K-T impact…

May geologists now synchronise their watches?

Calibrating the stratigraphic column to absolute time depends, of course, on radiometrically dating geochemically suitable rocks or minerals. Yet there is a range of available methods based on decay of unstable isotopes, such as 14C, 40K, 87Rb, 147Sm, uranium and thorium. All depend on a variety of assumptions, one being common to all: a constant, well-established half-life. If all were perfect, several methods applied to the same materials should give the same results. The trouble is, each parent isotope favours different minerals and different compositions of igneous rocks, so that discrepancies in the dates assigned by different methods to the same stratigraphic unit may be due either to disturbance of one isotopic system relative to the other or to the half-life of one (or both) parent isotopes being inaccurate. Currently, the two most widely used and best-regarded methods are U-Pb and Ar-Ar, the latter depending on conversion of 39K to 39Ar by neutron bombardment, so that the ratio of radiogenic 40Ar to 39Ar yields an age. The first often uses zircons, the second various potassium minerals such as alkali feldspar. Both minerals are magmatic in origin, and so the same igneous rock may sometimes be dated by either method or both. It is becoming increasingly clear that the two approaches do not give the same age, which is worrisome at the detailed level permitted by the high precision of each method.
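The shared half-life assumption can be made concrete with the basic decay equation t = (1/λ) ln(1 + D/P). The sketch below is a minimal Python illustration rather than anything from the articles discussed; the 87Rb half-life it uses is the commonly quoted ~48.8 Gyr:

```python
import math

def age_from_ratio(daughter_parent_ratio, half_life_yr):
    """Radiometric age from t = (1/lam) * ln(1 + D/P),
    with decay constant lam = ln(2) / half-life."""
    lam = math.log(2) / half_life_yr
    return math.log1p(daughter_parent_ratio) / lam

# Illustrative Rb-Sr case: when radiogenic 87Sr equals remaining
# 87Rb (D/P = 1), the age is exactly one 87Rb half-life.
age = age_from_ratio(1.0, 48.8e9)  # ~48.8 Gyr
```

Because the computed age scales directly with the assumed half-life, any error in that constant propagates straight into every date the method produces, which is exactly why cross-checks between isotopic systems matter.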

A means of checking the timing parameters for radiometric dating is to compare its results with absolute ages determined by a non-radiometric method. The best-calibrated and most widely applicable method that does not rely on radioactive decay is based on the astronomical pacing of climate, with its 100, 41, 23 and 19 ka cycles. Analysis of cyclicity in repetitive sedimentary sequences reveals patterns of frequencies that match the astronomical signals. So, within such a sequence it is possible to chart time differences to within a few thousand years. If igneous rocks are interlayered with the cyclical sediments, it should be possible to check their radiometric age differences against the differences determined independently. A Miocene sequence in Morocco has many intercalations of igneous tephras, and therefore provides a crucial test for radiometric approaches (Kuiper, K.F. et al. 2008. Synchronizing rock clocks of Earth history. Science, v. 320, p. 500-504). The team from the University of Utrecht, the Free University of Amsterdam in the Netherlands, and the University of California dated sanidine (K-feldspar) from the tephras using the Ar-Ar method. This involved using a standard age determined for sanidines from a similar rock type at Fish Canyon in Colorado, USA. By turning the approach on its head, i.e. by using astronomically calibrated ages for the samples, they recalculated the age of the Fish Canyon standard. It seems to be 0.65% older than previously thought (from rather dodgy U-Pb dating of zircons in the Fish Canyon Tuff).
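The frequency-matching step can be sketched with a toy spectral analysis. The series below is entirely synthetic (a hypothetical 1 kyr sampling interval and an assumed mix of obliquity and precession cycles, not real data), but it shows how an astronomical period is recovered from a noisy cyclic record:

```python
import numpy as np

# Synthetic sediment-property series sampled every 1 kyr over 2 Myr,
# mixing hypothetical 41 kyr (obliquity) and 23 kyr (precession)
# cycles with random noise.
t = np.arange(2000)  # time in kyr
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * t / 41.0)
          + 0.5 * np.sin(2 * np.pi * t / 23.0)
          + 0.3 * rng.standard_normal(t.size))

# Power spectrum of the detrended series; the strongest peak
# should sit near 1/41 cycles per kyr.
freqs = np.fft.rfftfreq(t.size, d=1.0)
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
peak_period = 1.0 / freqs[1:][np.argmax(power[1:])]  # in kyr, close to 41
```

Real cyclostratigraphy uses far more careful spectral methods, but the principle is the same: a match between sedimentary frequencies and the known orbital ones anchors the sequence in absolute time.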

Ar-Ar ages are tied, directly or indirectly, to the Fish Canyon standard, so an underestimate of its age would imply revision of quite a lot of geological events dated by Ar-Ar, especially those that happened abruptly, such as mass extinctions, impacts and magnetic reversals. Using the new standard age puts the K/T boundary event back to 66 Ma from 65.5 Ma. The formerly 251.0 Ma mass extinction at the end of the Permian becomes 252.5 Ma, which coincides better with the outpouring of the Siberian Traps. Similarly, the end-Triassic extinction, once 200 Ma but now possibly 201.6 Ma, links better to the Central Atlantic Magmatic Province outpourings. Quite a stir may be on the horizon, if Kuiper and colleagues’ recalibration is confirmed by similar independent measures.
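The knock-on effect of revising a standard follows from the Ar-Ar age equation, in which a sample’s age is tied to the standard through the measured ratio R = (e^(λt_sample) − 1)/(e^(λt_std) − 1). A sketch, assuming the commonly used total 40K decay constant and an illustrative prior Fish Canyon age of 28.02 Ma (these figures are assumptions for illustration, not taken from Kuiper et al.):

```python
import math

LAMBDA_40K = 5.543e-10  # total 40K decay constant, per year

def recalibrated_age(t_old, t_std_old, t_std_new, lam=LAMBDA_40K):
    """Recompute an Ar-Ar age after the standard's age is revised.
    The experimentally measured ratio R is fixed; only the age
    assigned to the co-irradiated standard changes."""
    r = math.expm1(lam * t_old) / math.expm1(lam * t_std_old)
    return math.log1p(r * math.expm1(lam * t_std_new)) / lam

# Revising the standard upward by 0.65% nudges a nominal 65.5 Ma
# K/T age towards 66 Ma.
t_std_old = 28.02e6
kt_new = recalibrated_age(65.5e6, t_std_old, t_std_old * 1.0065)
```

The shift is nearly, though not exactly, proportional to the change in the standard’s age, which is why all the revised dates quoted above move by roughly half to three-quarters of a percent.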

That radiocarbon dates need to be used with caution is well known, as the amount of 14C produced by cosmic ray bombardment of atmospheric nitrogen varies markedly over time. Again, the ‘work-around’ involves using non-radiometric ages to calibrate the fluctuating relationship between radiocarbon ages and real time. The data of choice are those from tree-ring analysis, but ice cores also preserve ages with 1-year precision from their annual layering. The Younger Dryas cold period that interrupted the global deglaciation began when atmospheric 14C production was high. It was also a tremendously important event in the progress of human migration and perhaps even genetics – population crashes in hard times can have a ‘bottleneck’ effect on evolution. A multinational team has addressed the interrelations between radiocarbon dating, ice-core climate proxy records and tree-ring analysis for this crucial episode (Muscheler, R. et al. 2008. Tree rings and ice cores reveal 14C calibration uncertainties during the Younger Dryas. Nature Geoscience, v. 1, p. 263-267). They combined measures of varying 14C in tree rings and 10Be in ice cores, both of which are cosmogenic. Rather than resolving the issue, they discovered that the best marine record of the carbon cycle during the YD, in the Cariaco basin off Venezuela, has a bias caused by anomalous concentration of 14C in shallow seawater as the YD began. Their study opens the possibility of resolving such changes in the marine C-cycle.

See also: Kerr, R.A. 2008. Two geological clocks finally keeping the same time. Science, v. 320, p.434-435.

Great surprise: Deccan flood volcanism emitted gases

The only documented volcanic eruption resembling those thought to characterise effusion of flood basalts was that of the Icelandic Laki fissure in 1783. At 14 km3 its lava volume was minuscule compared with those of ancient flood-basalt flows, but it did have a remarkable effect on the atmosphere and climate of the Northern Hemisphere. A bluish, ground-hugging dry fog spread over much of Europe and North America. The fog caused severe chest ailments and was probably full of sulfuric acid aerosols. Such droplets also serve to increase the reflectivity of the atmosphere, thereby reducing solar heating. In fact, witnesses remarked on how dim the summer sun appeared that year, although it seems not to have been particularly chilly. The climatic effects emerged the following winter, with the average temperature in Paris falling by almost 5°C from the long-term average. On Iceland itself, crops failed during the eruption, but worse was to come. Both livestock and humans developed the awful bone lesions associated with fluorosis, for the basalt magma emitted hydrogen fluoride as well as SO2. Human and animal skeletons from the time show gross bone deformities, often like fibrous needles that would have grown through living flesh. Gas emissions from modern basalt flows chemically similar to those of Laki and far larger flood basalts are well documented, and the potential climate effects of continental flood basalt magmatism have been modelled repeatedly using those data.

Measuring actual gas contents of the magmas that fed ancient lava flows is difficult, simply because most magma degasses before it finally crystallises. Even vesicles are devoid of the pristine gas that formed them, owing to later percolation of fluids. In a few extremely fresh flows some of the original magma may have been preserved as glassy blobs trapped within phenocrysts, such as olivine or Ca-plagioclase, that formed in magma chambers before eruption. A group from the Open University, UK has analysed sulfur and chlorine contents in four such minute samples by electron probe and XRF, finding levels of up to 1400 and 900 ppm respectively (Self, S. et al. 2008. Sulfur and chlorine in Late Cretaceous Deccan magmas and eruptive gas release. Science, v. 319, p. 1654-1657). The sulfur values are not unusual compared with modern basaltic glasses that have not lost their magmatic gases, though the chlorine concentrations are towards the high end of the known range.

The climatic and environmental implications of both gases are noteworthy, mainly because each basalt flood would have emitted hundreds to thousands of teragrams of each annually – vastly more than modern emissions by both humanity and active volcanoes. In the lower atmosphere the effects would have been like those of Laki – locally choking fogs, acid rain, and cooling. Had chlorine reached the stratosphere it would have destroyed ozone, increasing the exposure of terrestrial life to UV radiation. So quite a few large-scale kill mechanisms may be ascribed to continental flood basalts such as the Deccan province.

This may well be the first direct evidence for the actual gas-emission potential of ancient basalt magma samples. Sadly, however, the specimens containing glass were erupted some time before the K-T extinction event – the on-line data supplement reports ages of 66-68 Ma for the lower Deccan flows in which glass inclusions occur, between 0.5 and 2.5 Ma earlier than the end of the Cretaceous. That undermines, to some extent, the need to have analysed the glasses in the first place, when modern data serve well for modelling the effects of CFBs. Still, even at the low end of S and Cl contents of modern undegassed basalt magmas, the stupendous volume of any flood basalt province – up to millions of km3 – would have repeatedly placed great stresses on the biosphere. The wonder is that not all CFBs are associated with mass extinctions, so maybe the environmentally less-destructive CFB provinces since 250 Ma ago (8 out of 11) involved magmas with extremely low S and Cl contents…

Clovis First hypothesis dumped

For decades palaeoanthropology of the Americas has been dominated by a single idea: that nobody entered the continents before the people who used the elegant fluted spear blades first found near Clovis, New Mexico in the 1930s. These were eventually dated at a maximum age of around 13 ka before the present. One reason for accepting the Clovis people as the first Americans, apart from the lack of conclusive evidence for any earlier occupation, was the fact that glaciers blocked the route from the Bering land bridge of the last Ice Age until about 13 ka. Increasing evidence has suggested earlier penetration by people from Asia who did not use Clovis tools, and who reached Chile by around the same time and possibly as early as 33 ka. However, none of the evidence is definitive and the Clovis First hypothesis has been stoutly defended against this growing body of contrary evidence.

The ‘traditional’ idea of American occupation by humans after 13 ka has taken a double whammy from an unusual set of fossils – of human excrement – discovered in a cave in Oregon. These have been dated at up to 15 ka and are unmistakably human, containing human mtDNA with genetic signatures typical of Native Americans (Waters, M.R. & Stafford, T.W., Jr. 2007. Redefining the age of Clovis: implications for the peopling of the Americas. Science, v. 315, p. 1122-1126; Gilbert, M.T.P. et al. 2008. DNA from pre-Clovis human coprolites in Oregon, North America. Science, DOI:10.1126/science.1154116).

Ideas of how and when the Americas were colonised are changing rapidly after decades of ossification. A fascinating article in the 14 March 2008 issue of Science magazine reviews the issues and prospects (Goebel, T. et al. 2008. The late Pleistocene dispersal of modern humans in the Americas. Science, v. 319, p. 1497-1502). Genetic studies of living Native Americans suggest their common ancestry in a Siberian population no earlier than 30 ka, and perhaps as late as 22 ka. The Beringia land bridge had repeatedly created a possible migration route during every major glaciation, followed by many of the Pleistocene mammals that inhabited the Americas, but not by humans until the late stages of the last glaciation. Dating of archaeological sites and remains, including the human coprolites reported by Gilbert and colleagues, is slowly pushing back the earliest evidence for a human presence to around 15 ka, several thousand years before the Clovis culture appeared. Sometime before that, the first Americans had arrived and begun to spread. Ice barred their way through the interior of Alaska and NW Canada, and they must therefore have travelled along the coast, where the way was open from Beringia to Cape Horn; perhaps they used boats to move along the flat but frigid shores of Beringia and the rugged western seaboard of North America. Early populations subsisting on shoreline resources would not have needed the heavy projectiles of the Clovis culture, which are more attuned to ‘big-game’ hunting on plains. That may explain the sudden appearance of Clovis artefacts once access to plains was possible around 13.5 ka, and their equally sudden disappearance at the start of the Younger Dryas around 12.8 ka, when survival on icy plains would have become very difficult. Interestingly, the occupation of Siberia around 30 ka would have presented the Beringia route to North America when climate was similar to that following the last glacial maximum.
So far, though, no tangible evidence of such an early crossing has been found.

Homo floresiensis had big feet

Controversy has raged about her identity since the skull of a minute female hominin was unearthed from the Liang Bua cave on the Indonesian island of Flores. On the one hand are authorities who believe the fossil is that of a distinct human species, while on the other are sceptics convinced that the diminutive stature and chimp-like brain capacity reflect some pathological condition in a population of ordinary humans. The 12 April meeting of the American Association of Physical Anthropologists in Columbus, Ohio (see Culotta, E. 2008. When hobbits (slowly) walked the Earth. Science, v. 320, p. 433-435) was treated to an anatomical exposition of the rest of the Liang Bua skeleton. A great deal more of it turns out to differ from human characteristics, including the legs and feet. Amusingly, given that J.R.R. Tolkien’s hobbits had them, the feet of H. floresiensis were disproportionately large. Also, her gait was quite different from ours – a kind of careful, high-stepping plod. Although not all agree, the post-cranial bones of H. floresiensis appear to bear close resemblance to those of early Homo species. Those favouring a species separate from our own suggest either that it arose through allopatric speciation from SE Asian H. erectus after isolation of a population on Flores, or perhaps even that it is a relic of an early migration of H. habilis from Africa almost 2 Ma ago. Whatever, it is now going to be even more difficult not to speak of hobbits.

Orrorin walked the walk

Orrorin tugenensis is one of those fossils over which palaeontologists tend to get heated. It is a hominin, old (~6 Ma) and fragmentary, so it just might be the daddy of us all. That possibility takes a significant step forward with statistical evidence that Orrorin walked upright in a similar manner to the much later australopithecines and paranthropoids (Richmond, B.G. & Jungers, W.L. 2008. Orrorin tugenensis femoral morphology and the evolution of hominin bipedalism. Science, v. 319, p. 1662-1665). The study was made independently of the original discoverers, who claim that the femur has especially human-like features. Either way, one of the original suggestions, that Orrorin was on the ancestral line to gorillas, has become improbable. The creature clearly displays the oldest known example of a bipedal gait (the older Sahelanthropus (~7 Ma) is known only from skull fragments and teeth, although its skull’s foramen magnum hints at bipedalism). In itself, Orrorin’s walking biomechanics is remarkable, as molecular evidence suggests that the branching that led to chimpanzees and to hominins is not much older than 6 Ma. It does seem as if that phylogenetic split may well have centred first on adaptation to traversing open ground by descendants of a forest-dwelling common ancestor.

Colonisation of Europe pushed further back

Europe is so close to Africa that in recent years repeated waves of immigrants have crossed the Straits of Gibraltar, often on frighteningly flimsy craft. Their driving force is simply the search for a better life in the booming economies of Spain and Italy. Far more intense pressure from deteriorating climate and vanishing game drove Africans of many earlier times to escape their home continent, reaching back almost 2 million years. So how come the European hominin record is so short? At the last count it went back to H. antecessor around 750 ka, albeit a species that was sufficiently adventurous to reach British shores (see Earliest tourism in Northern Europe in EPN of January 2006). The famous Sierra de Atapuerca cave systems in northern Spain have now yielded clear evidence of much earlier occupants, from around 1.1 to 1.2 Ma ago, in the form of a lower-jaw fragment in association with tools and bones showing signs of butchery (Carbonell, E. and 29 others 2008. The first hominin of Europe. Nature, v. 452, p. 465-469). Provisionally, the person has been assigned to H. antecessor, and there are two possible interpretations: either (s)he was a new immigrant from Africa, or (s)he represents a new speciation in northern Spain from an earlier population of African colonists. The paper’s title may prove to be premature.


Clouds and large earthquakes

The press announced in April that the USGS and other western US geoscience institutes had issued the first ever comprehensive earthquake forecast for California (see http://www.scec.org/ucerf/), but it was cautiously phrased in terms of probabilities of destructive magnitudes (>6.7) over the next 30 years. That might be fine and dandy for administrators and civil engineers, but not so good for anyone who becomes a victim at the precise time this or that Californian fault ‘goes off’. People world-wide have rarely chosen where to live based on knowledge of geological risks; indeed most threatened communities have little choice, for many reasons. What would be useful is being warned that a devastating earthquake is definitely due where one lives, and that it will happen sometime in the next few days or weeks. Even an hour’s warning would save many lives. But no geological survey will commit itself to that kind of pronouncement, except perhaps some of the many surveys in China. The fact that all kinds of phenomena, such as nervousness among animals, rising water levels in wells and so on, have been shown to occur shortly before many big earthquakes has prompted a kind of ‘barefoot’ monitoring that is officially co-ordinated in some parts of China. It is said that lives have been saved on a number of recent occasions.

It is easy for western scientists to make the analogy with homeopathy, and pooh-pooh such methodology. Also, there has been a succession of observations from space that could prove useful, such as ‘earth lights’ and magnetic-field fluctuations that accompany some seismic events (see Remote signs of earthquakes in EPN August 2003, Early warning of earthquakes in EPN December 2005). The latest odd, but conceivably useful connection is an association of unusual cloud formations with earthquakes in Iran (Guo, G. & Wang, B. 2008. Cloud anomaly before Iran earthquake. International Journal of Remote Sensing, v. 29, p. 1921-1928). The authors, from Nanyang Normal University in China, scrutinised free, hourly images from the geostationary Meteosat-5 satellite covering the whole of Iran, where seismicity is concentrated on a single large zone of deformation that trends NW-SE through the Zagros mountains. On several dates they found cloud formations parallel to the fault zone. Between 60 and 70 days later large earthquakes took place along the fault, including the highly destructive Bam earthquake of 26 December 2003. Indeed, a noticeable thermal anomaly in clouds directly above Bam occurred 5 days before the disaster.

How often do tsunamis occur?

Fortunately, truly destructive tsunamis on the scale of that of 26 December 2004 are rare events. So much so that nobody has a clear idea of their average frequency at different exposed shorelines; a vital statistic for risk analysis. Tsunamis produce high-energy marine deposits, but unless they are preserved in accessible locations their incidence would be difficult to estimate, and they may be confused with tempestites generated by hurricanes. One characteristic of tsunamis is that they are waves affecting the entire ocean volume, unlike wind waves, whose effects are restricted to the top few tens to hundreds of metres; that can create unique features in their deposits. Canadian, US and Omani sedimentologists have examined a sediment deposited in Oman by a recorded tsunami generated by a large earthquake off Pakistan in 1945 and have discovered one such signature (Donato, S.V. et al. 2008. Identifying tsunami deposits using bivalve shell taphonomy. Geology, v. 36, p. 199-202). The deposit, a coquina rich in bivalve shells, contains an unusually high proportion of still-articulated shells, suggesting that living animals were ripped from the seabed and then flung into a lagoon. Along with oddities in the fragmentation of other shells and the sheer size and extent of the coquina, this feature seems to be characteristic of tsunamites. Features in the Oman example closely match those in another on the eastern shore of the Mediterranean Sea in Israel.

New Journal

The New Year saw the launch of a new Earth science journal: Nature Geoscience, part of the growing ‘family’ of specialist twiglets from the main trunk of their parent. Whether publishing in it will match the kudos of having a Letter in Nature itself remains to be seen. A monthly rather than a weekly format will keep an issue on the shelf for browsers, but will they rush to thumb through it in paper or on-line? Should Nature Geoscience take off and attract all the geoscience that was once in Nature, then Earth scientists may stop checking through each issue of that august journal, which would be a shame when our discipline is looking for an upsurge in cross-pollination with others. Whatever, the first issue had enough to interest me – three noteworthy Letters – but I can’t say the same for the second.

Epoch, Age, Zone or Nonsense

The International Commission on Stratigraphy lists 37 Series/Epochs and 85 Stages/Ages in the latest version of the International Stratigraphic Chart for the 11 Systems/Periods of the Phanerozoic. A great battle against the ICS’s attempt to extinguish the Quaternary, the only enduring era originated by Giovanni Arduino (1714-1795) and Johann Gottlob Lehmann (1719-1767), now seems to have ended in a compromise (Kerr, R.A. 2008. A time war over the period we live in. Science, v. 319, p. 402-403). While that vigorous struggle has apparently petered out, the Stratigraphy Commission of the Geological Society of London has launched another by proposing a new Epoch – the Anthropocene. This follows a suggestion by Nobel laureate and chemist Paul Crutzen that the Holocene Epoch ended once humanity made a significant impact on the Earth system (Zalasiewicz, J. and 20 others 2008. Are we now living in the Anthropocene? GSA Today, v. 18(2), p. 4-8).

The device intended by the ICS to mark boundaries between Periods, Epochs and Ages in the Phanerozoic is the Global Boundary Stratotype Section and Point (GSSP), combining an absolute age definition and a type section. A growing number of boundaries are marked by a physical ‘golden’ spike (not necessarily made of gold), including a plaque engraved with the Period or Age names, welded into the agreed boundary itself. There is good reason for this seemingly odd behaviour; geologists need agreed nomenclature and locations so that their discourse can be internationally sensible. It is also a deeply exciting, even exalting moment when any geologist puts her/his finger on a boundary of global significance: and how supremely triumphant actually to wield the hammer that drives the spike home. So much so, that there have been monumental squabbles, some not far short of diplomatic ‘incidents’, about exactly where GSSPs should be placed.

But the whole bureaucratic process has its awkwardly humorous side. There is a proposal that the GSSP for the Pleistocene/Holocene boundary be located in a Greenland ice core. Is that to be in the hole left by the NGRIP core drill at the centre of Greenland, at the depth at which evidence for the warming at the end of the Younger Dryas (11.5 ka) occurs? Or should it be in the core itself – a GSSP in a fridge? Either way, it is going to be difficult to put a finger on that particular boundary. Moreover, global warming and the attendant social disruption might remove both. The proposed Anthropocene might have an even stranger GSSP. For a start, when did it begin? An anthropogenic signature appears clearly in the NGRIP core around 8 ka BP, and at a variety of levels in pollen records, but the GSL’s Stratigraphy Commission wants it to start at the beginning of the Industrial Revolution. Sadly, that is a profoundly diachronous, economic boundary. To make it Eurocentric, as Crutzen suggested, would be a bit non-PC.

Let’s face it, the Holocene is just an interglacial, similar to a great many since 2.4 Ma ago. It is noted only for the brief period in which humanity became separated into two groups: a very small one owning the means of production; the other, initially diverse, being forced to work for the first in order to survive. The Industrial Revolution marked a social simplification into two opposed classes, as clearly defined by Marx, and the increased dominance of human affairs by an inhuman entity called capital. The working through of the contradictions bound up in class society and in capital itself has been largely responsible for the huge environmental changes drawn on by Zalasiewicz et al. It seems our somewhat po-faced authors forget that there are a great many more scholars of human affairs than there are geologists: historians and political economists. Already there are plenty of anthropocentric equivalents of GSSPs in London itself, in the form of its celebrated blue plaques. Historians and political economists might well agree that the rise to dominance of capital – and hence the emergence of rapid environmental change during the uniquely short-lived Anthropocene – began outside the Banqueting Hall on Whitehall at 2.04 pm on Tuesday 30 January 1649 with the separation of the head of the divinely righteous monarch, Charles I, from his body. Ladies and Gentlemen of the SC of the GSL, that is where you place your ‘golden’ spike. However, geology might yet have its say, any time now (and geologists cannot really foretell): a super-volcanic eruption; a comet strike or a cosmic gamma-ray burst. So you had better be quick, if your aim is posterity.

Watermills and meanders

The classic notion of a floodplain is that the streams responsible for it meander to create point bars, overbank muds and all the other paraphernalia of the fluvial sedimentologist. River authorities seeking to restore floodplains see the meandering stream as the ideal to aim for, and increasingly as a means of natural flood amelioration. All this may turn out to be illusory following publication of a study on long-vanished human activities (Walter, R.C. & Merritts, D.J. 2008. Natural streams and the legacy of water-powered mills. Science, v. 319, p. 299-304). By mapping and dating alluvial deposits along 1st to 3rd order streams in the north-eastern USA, in relation to milldams recorded on 19th century maps, Walter and Merritts of Franklin and Marshall College, Pennsylvania found that up to 5 metres of sediment had accumulated behind the dams since the 17th century up to the abandonment of watermills.

The conclusion is that mill dams together with increased sediment load following deforestation for agriculture created valley flats on a vast scale – three counties in Pennsylvania had over a thousand mill dams. In places along the north-eastern Piedmont the density of water mills reaches as many as one per square kilometre, and the median density of around 1 per 10 km2 involved more than 22 000 mills out of a total in 1840 of >65 000. Once the mills were abandoned, either because their dams had silted up or because milling moved to larger facilities powered by other energy sources, streams developed meanders that gradually incised the artificial flood plains. The situation now is that the small floodplains rarely flood, spates being unable to spill over the current bank height. Consequently, many of the low-order streams in major river catchments discharge floods quickly to the larger streams and rivers, which themselves burst their banks to cause floods with major social and economic consequences.

Walter and Merritts’ findings are also based on their analysis of the kinds of sediment that accumulated before European colonisation. In most small valleys these indicate extensive forested wetlands with multiple small channels and drier islands. A major influence over this earlier state was the formation of logjams, and perhaps beaver lodges, that spread normal and spate flows. Slow stream flow carried less sediment than nowadays, and the older Holocene alluvial deposits are organic-rich. In addition, stream flow, once directly connected to groundwater, has become disconnected, thereby reducing both recharge and the flood balancing achieved by truly natural streams.

The whole of Europe had a history of milling around five times as long as that in the eastern USA, as well as higher population densities. In addition, urban mill dams for metal forging and textile manufacture were on a larger scale. The UK’s National Rivers Authority, Environment Agency and Phil Woolas, the Minister of State (Environment) need to read this study with care, as another flood season is almost certain in the summer of 2008 or the winter of 2008-9. As far as I can judge, it demands a reassessment of flood prevention ‘best practice’ in any populated humid-temperate landscape. Whatever the policy response, Walter and Merritts’ study forces a new look at the European lowland and upland geomorphology used for teaching at all levels.

An old bat from Wyoming

The Lower Eocene Green River Formation of Wyoming is dominated by fine-grained lake sediments, mainly made of laminated limy mudstones. Many layers constitute superb lagerstätten teeming with remains of delicate organisms. As well as much else, the Green River Formation is noted for its early bats, which suddenly appear in the fossil record with all the prerequisites for flight. The cover of the 14 February 2008 issue of Nature depicts a perfect specimen of Onychonycteris finneyi, showing the four elongated ‘fingers’ that supported its wing membrane, and a long tail, which few modern bats have, except in atrophied form to support the rear part of the wing. In many respects it has a transitional structure between non-flying mammals and later bats, but it would definitely have been a good flyer, or rather flutterer-glider.

Not only is the fossil spectacularly well preserved, but detail of its head morphology also helps resolve the issue of whether echolocation preceded flight (Simmons, N.B. et al. 2008. Primitive Early Eocene bat from Wyoming and the evolution of flight and echolocation. Nature, v. 451, p. 818-821). Other, slightly later fossil bats from the Green River Formation probably did echolocate, as evidenced by their stomach contents, and enlarged larynx and cochlea for transmitting and receiving the now typical high-pitched squeaks of many bats. Onychonycteris doesn’t have such characteristics, so it seems as if echolocation did not evolve before flight, thereby resolving one of Darwin’s vexations about the universality of natural selection. Prior to the discovery by Simmons et al. many bat-oriented evolutionists speculated that echolocation evolved among small arboreal mammals so that they could detect passing insects. A habit of leaping to grab the prey in turn selected for an ability to glide from a strategic perch, for quite obvious reasons. Success further encouraged the evolution of powered flight. Yet hardly any other living mammals, toothed whales aside, use echolocation, probably because it is a highly energy-intensive habit. However, the muscles used by a flying mammal serve also to make squeaking a ‘cost-free’ bonus. So, the findings in Onychonycteris seem to resolve the matter nicely.

See also: Speakman, J. 2008. A first for bats. Nature, v. 451, p. 774-775.

Life perked up by repeated impacts

Following the blazes of publicity since the early 1980s about the demise of the dinosaurs at the K/T boundary it is easy to regard objects the size of mountains that fall out of the sky as bad news for life. That is despite the fact that, bar the Chicxulub impact structure that exactly matches the timing of the end-Cretaceous mass extinction, no other significant and rapid drop in the diversity of life has been found to be associated with an extraterrestrial impact. Whatever their cause, mass extinction events sometimes seem to be followed by bursts in biodiversity, presumably as the survivors eventually find plenty of new opportunities and diversify to occupy them. One exception is the end-Ordovician mass extinction, which was preceded by a tripling in the number of families that the extinction rudely interrupted. This tripling has often been seen as a somewhat delayed exploitation of all the advantages and competitive opportunities conferred by the appearance of hard parts at the start of the Cambrian. But remarkable finds in the limestone-rich Ordovician of Scandinavia suggest an unexpected connection with meteorite bombardment (Schmitz, B. and 8 others 2008. Asteroid breakup linked to the Great Ordovician Biodiversification Event. Nature Geoscience, v. 1, p. 49-53).

The most usual measure of diversity used by stratigraphic palaeontologists is the number of families at a particular time, and the overall tripling in the Middle to Upper Ordovician is notable. However, if specimens of individual groups, such as brachiopods, are collected from the Scandinavian limestones on a bed-by-bed basis, increased diversity at the species level is even more dramatic. There are sudden doublings or triplings over periods of what can be no more than a few hundreds of ka, especially around 470 Ma ago. In the 1960s potassium-argon dating of chondritic meteorite collections revealed a cluster of reheating ages between 500 and 450 Ma (Upper Cambrian to Upper Ordovician); about 20% of all meteorites fall into this age-cluster, and most show evidence of having been shocked as well as heated up. This seems to signify a major collision or series of collisions in the Asteroid Belt early in the Palaeozoic. More reliable and precise 40Ar-39Ar dating narrows this event to a period between 463 and 477 Ma in the Middle Ordovician. In 2001, Birger Schmitz of the University of Lund reported, with others, more than 50 sizeable chondritic meteorites in the Middle Ordovician limestones of Sweden. In the new paper, Schmitz and his Danish, US and Chinese colleagues give plots of brachiopod species and also the abundance of chromite grains of meteoritic origin in Middle Ordovician limestones from Sweden and China. Two sharp jumps in brachiopod species numbers are preceded and accompanied by ‘spikes’ in the number of extraterrestrial chromite grains, so the link seems to be real. Yet what can have produced such a counter-intuitive result? One possibility is that the undoubted disturbance may have killed off species of one group, maybe trilobites, so that the resources they used became available to sturdier groups, whose speciation filled the newly available niches. Such a scenario would make sense, as mobile predators/scavengers (e.g. trilobites) may have been less able to survive disruption, thereby favouring the rise of less metabolically energetic filter feeders (e.g. brachiopods).

A Cretaceous Ice Age?

Accepted geoscientific ‘wisdom’ is that the Cretaceous Period was so warm that forests reached polar latitudes and so too did cold-blooded reptiles. Planktonic foram oxygen isotopes indicate that the Cretaceous ‘hothouse’ in the Turonian (93.5-89.3 Ma) produced tropical sea-surface temperatures up to 37°C; warmer than human blood. It also saw sea level reach an all-time high. Both features have been attributed to the rate of ocean-floor volcanism being at its highest. It has, however, been difficult to model the warmth at high latitudes without fudging the input to general circulation models.

Measuring δ18O in both planktonic and benthonic (ocean-floor) forams at centimetre spacings in Turonian ocean-floor sediments seems to have truly bamboozled specialists in the Cretaceous. They reveal a period of ~200 ka at around 91.2 Ma where both show a sharp increase (Bornemann, A. and 8 others 2008. Isotopic evidence for glaciation during the Cretaceous supergreenhouse. Science, v. 319, p. 189-192). Respectively, the peaks reflect decreased sea-surface temperature (but only down to 32°C in the tropics) and an increase in the extraction of light 16O from the oceans; only likely when ice caps build up on land. The size of the benthonic δ18O increase suggests ice caps about half the size of that now blanketing Antarctica. Other evidence includes rapid decreases in Turonian sea level in Europe, North America and Russia; only likely on such a scale as a result of glacio-eustasy. However, direct evidence in the form of tillites, striated pavements and glacio-marine sediments has yet to turn up.
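A rough check of the scale involved (my sketch, not the authors' calculation) uses the standard late-Quaternary calibration of roughly 0.011‰ of mean-ocean δ18O change per metre of glacio-eustatic sea-level fall; whether that calibration transfers cleanly to the Cretaceous ocean is itself an assumption:

```python
# Back-of-envelope: what benthic delta-18O shift would an ice cap half the
# modern Antarctic volume produce? Calibration (~0.011 permil per metre of
# sea-level fall) and the modern Antarctic figure are assumed, not from the paper.

PERMIL_PER_METRE = 0.011      # assumed ice-volume / delta-18O calibration
ANTARCTICA_SLE_M = 58.0       # modern Antarctic ice, metres of sea-level equivalent

def d18o_shift(sea_level_fall_m, calib=PERMIL_PER_METRE):
    """Mean-ocean delta-18O increase (permil) for a given sea-level fall."""
    return sea_level_fall_m * calib

half_antarctica_m = 0.5 * ANTARCTICA_SLE_M   # ~29 m of glacio-eustatic fall
print(round(d18o_shift(half_antarctica_m), 2))   # -> 0.32 (permil)
```

A shift of a few tenths of a permil is well within the resolution of centimetre-scale foram sampling, which is why the signal stands out so clearly in the Bornemann et al. records.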

Until these convincing data emerged, it seemed that sufficient post-Permian frigidity for large-scale glaciation had not developed until Oligocene times. However, the paradox of high-latitude ice caps and low-latitude balmy seas is resolvable. Evaporation from the tropical sea surface would have been much greater than nowadays. Transport of moisture to cooler areas may have resulted in such immense winter snowfall at high latitudes that sufficient remained unmelted after winter darkness for its albedo to further cool the polar region. Almost certainly the site for the ice cap would have been Antarctica, which in the Cretaceous, as now, sat over the South Pole. Remove the present ice, and that continent would have had an average surface height of between 1 and 2 km that would have encouraged snow to build up, had sufficient fallen during the Turonian. Yet without the direct evidence for glaciation in sediments – much would be buried by the present Antarctic ice cap, if not eroded away – the scenario is difficult for some to believe.

Holocene cold spell and glacial lake burst

The most startling event during the gradual warming after the last glacial maximum was the millennium of icy conditions between 12.5 and 11.5 ka; the Younger Dryas. Long after Holocene warmth seemed well established and agriculture had been underway for two millennia, with perhaps an increased human population, a smaller cold ‘snap’ took place, between 8.21 and 8.14 ka; i.e. for about 70 years. Its main effect was around the North Atlantic, but it was felt over the whole hemisphere. It must have been devastating for early farmers and new migrants into higher latitude lands. High-resolution records of many kinds are possible for such a young event, from both ice and marine cores, and also terrestrial pollen records. Norwegian, French and Dutch climate researchers have gleaned a great deal from a sea-floor core from between southern Greenland and Labrador (Kleiven, H.F. et al. 2008. Reduced North Atlantic Deep Water coeval with the glacial Lake Agassiz freshwater outburst. Science, v. 319, p. 60-64). Their combined fossil, oxygen-isotope and mineralogical study shows anomalies from about 170 years before to 100 years after the drop in regional temperatures. These include signs of decreased saltiness of the water in the Labrador Basin and a reduction in production of deep water in the North Atlantic. This is exactly the predicted signature for a shut-down of the Gulf Stream, similar to those implicated in Dansgaard-Oeschger events through the last Ice Age and the Younger Dryas itself.

The Younger Dryas has been linked to sudden drainage of huge glacially dammed lakes that once surrounded the ice cap of the Canadian Shield. One scenario for that is a huge, protracted flood down the St Lawrence River into the North Atlantic, another a flood down the Mackenzie River into the Arctic Ocean. Freshening of surface waters by such means would have reduced the formation of the dense cold brines that sink to form North Atlantic Deep Water today. In so doing these down-wellings drag surface waters northwards from low latitudes to form the Gulf Stream that makes the eastern side of the North Atlantic unusually warm. If they stop or slow significantly, regional air temperatures fall, as they did again around 8.2 ka. In this case the likely cause was escape of water melted from the last dregs of the North American ice sheet that had been held in a glacial lake south of Hudson Bay: Lake Agassiz.