Saturday, April 15, 2006

Barking up the evolutionary tree

NEXT time you see a stray dog nosing around in a pile of rubbish, save your pity. According to Raymond and Lorna Coppinger, the grubby mongrel is doing what evolution designed it for.

The main theme of “Dogs” is that the species was not, as widely thought, domesticated by man, but domesticated itself. Dogs have evolved, according to the Coppingers, from a wolf-like ancestor by adapting to a new ecological niche: rubbish dumps of human villages, which provided food that did not have to be chased after. The shared characteristics of village dogs the world over—small brains and teeth compared with wolves of the same size, an adult weight of 10-15kg, two litters a year instead of a wolf's one—are fine-tuned to this environment by natural selection. And so-called mongrels are the very best form of an animal that is, in many ways, more distinct from wolves than wolves are from their wild relatives, coyotes and jackals.

This contrasts with the conventional view that dogs are descended from wolves that were tamed and used as hunting companions by people. Indeed, the Coppingers, who have studied dogs for three decades, believe that working dogs, such as hunting hounds, sheepdogs (particularly those that guard sheep rather than herding them) and even sled dogs have evolved from ancestral village dogs by a process that owes more to Darwin than to selective breeding of the sort that has created modern, registered breeds.

Crufts aspirants will be depressed to learn that the Coppingers frown on selective breeding. Closed stud-books result in the exposure of harmful recessive genes. And that is before the selection of exaggerated—and, to the animal, damaging—traits that are thought of as breed-specific. The authors are also critical of dog-training programmes—including those for guide-dogs for the blind and deaf—that ignore the key learning stages of young pups.

All this may sound doctrinaire. But “Dogs” avoids a nature-nurture conflict. The postulation of genetically determined key periods for learning suggests how genetic and environmental factors may work together to create patterns of behaviour. “Dogs” will be interesting to anyone who has ever wondered about the origin of their favourite mutt's species.

Evolution is still continuing

WHAT, then, of the future? Sitting in the comfort of the concrete savannah, has humanity stopped evolving?

To help answer that question, it is instructive to look at a paper published earlier this year by Gregory Cochran. Dr Cochran, a scientist who, in the tradition of Darwin himself, works independently of an academic institution, looked at the unusual neurological illnesses commonly suffered by Ashkenazi Jews. Traditional wisdom has it that these diseases, which are caused by faulty genes, are a consequence of inbreeding in a small, closed population. The fact that they persist is said to show that human evolution has stopped in our ever more mollycoddled and medicalised world. Dr Cochran begged not only to differ, but to draw precisely the opposite conclusion. He sees these diseases as evidence of very recent evolution.

Until a century or two ago, the Ashkenazim—the Jews of Europe—were often restricted by local laws to professions such as banking, which happened to require high intelligence. This is the sort of culturally created pressure that might drive one of Dr Deacon's feedback loops for mental abilities (though it must be said that Dr Deacon himself is sceptical about this example). Dr Cochran, however, suspects that this is exactly what happened. He thinks the changes in the brain brought about by the genes in question will be shown to enhance intelligence when only one copy of a given disease gene is present (you generally need two copies, one from each parent, to suffer the adverse symptoms). Indeed, in the case of Gaucher's disease, which is not necessarily lethal, there is evidence that sufferers are more intelligent than average. If Ashkenazi Jews need to be more intelligent than others, such genes will spread, even if they sometimes cause disease.
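The arithmetic behind that claim can be sketched with a standard bit of population genetics. What follows is a deliberately toy illustration of heterozygote advantage, in which the fitness numbers are assumptions invented for the sake of the example rather than estimates for any real Ashkenazi disease gene; it shows how an allele that harms the rare people carrying two copies can still settle at a substantial frequency if a single copy confers a benefit.

    # Toy illustration of heterozygote advantage ("overdominance").  All the
    # numbers below are illustrative assumptions, not data from Dr Cochran's work.

    def next_allele_freq(q, s, t):
        """One generation of deterministic selection on a single gene.

        Genotype fitnesses: AA = 1 - s (no copy, no benefit), Aa = 1 (carriers
        get the benefit), aa = 1 - t (two copies cause the disease).  q is the
        frequency of the disease allele a.
        """
        p = 1.0 - q
        w_AA, w_Aa, w_aa = 1.0 - s, 1.0, 1.0 - t
        mean_w = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa
        return (p * q * w_Aa + q * q * w_aa) / mean_w   # standard one-locus recursion

    q = 0.001            # the allele starts out rare
    s, t = 0.05, 0.30    # assumed benefit to carriers and cost to sufferers
    for _ in range(500):
        q = next_allele_freq(q, s, t)

    print(f"frequency after 500 generations: {q:.3f}")
    print(f"predicted equilibrium s/(s+t):   {s / (s + t):.3f}")

At these made-up values the allele climbs from one in a thousand to roughly one in seven, with only the small minority who inherit two copies paying the price. That, in outline, is the shape of the trade-off Dr Cochran has in mind.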

The fact is, you can't stop evolution. Those who argue the opposite, pointing to the survival thanks to modern medicine of people who would previously have died, are guilty of more than just gross insensitivity. They have tumbled into an intellectual pitfall that has claimed many victims since Darwin first published his theory. Evolution is not about progress. It is about adaptation. Sometimes adaptation and progress are the same. Sometimes they are the opposite. (Ask a tapeworm, which has “degenerated” into a mere egg-laying machine by a very successful process of adaptation.) If a mutation provides a better adaptation, as Dr Cochran thinks these disease genes did in financiers, it will spread. Given the changes that humanity has created in its own habitat, it seems unlikely that natural selection has come to a halt. If Dr Deacon is right, it may even be accelerating as cultural change speeds up, although the current rapid growth in the human population will disguise that for a while, because selection works best in a static population.

The next big thing

Evolution, then, has not stopped. Indeed, it might be about to get an artificial helping hand in the form of genetic engineering. For the fallacy of evolutionary progress has deep psychological roots, and those roots lie in Dr Miller's peacock-tail version of events. The ultimate driver of sexual selection is the need to produce offspring who will be better than the competition, and will thus be selected by desirable sexual partners. Parents know what traits are required. They include high intelligence and a handful of physical characteristics, some of which are universal and some of which vary according to race. That is why, once the idea of eliminating disease genes has been aired, every popular discussion on genetic engineering and cloning seems to get bogged down in intelligence, height and (in the West) fair hair and blue eyes.

This search for genetic perfection has an old and dishonourable history, of course, starting with the eugenics movement of the 19th century and ending in the Nazi concentration camps of the 20th, where millions of the confrères of Dr Cochran's subjects were sent to their deaths. With luck, the self-knowledge that understanding humanity's evolution brings will help avert such perversions in the future. And if genetic engineering can be done in a way that does not harm the recipient, it would not make sense to ban it in a liberal society. But the impulse behind it will not go away because, progressive or not, it is certainly adaptive. Theodosius Dobzhansky, one of the architects of the modern evolutionary synthesis, once said that “nothing in biology makes sense except in the light of evolution”. And that is true even of humanity's desire to take control of the process itself.

Human evolution: Three of a kind

HAVING been trumped last week by the decision of the chimpanzee genome-sequencing consortium to publish in their rival, Nature (see article), the editors of Science have now got some of their own back with a trio of papers that look at genes which seem to be involved in the evolution of the human brain.

Two of these papers reported studies carried out by Bruce Lahn, of the University of Chicago, and his colleagues. Dr Lahn has been studying two genes that tell the brain what size to grow to. If either of these genes, known as Microcephalin and ASPM, fails to do its job properly, the result is a brain that, though normal in its structure, is far smaller than it ought to be—somewhere between a quarter and a third of the normal volume—and which does not work properly. One of the characteristics of Homo sapiens is an exceedingly large brain, and some biologists have speculated that changes in these two genes might be part of the cause of this enlargement. Those speculations have been supported by evidence that these genes have changed significantly since the human and ape lines separated several million years ago.

Dr Lahn has added to that evidence, and has shown that this evolution continued even after Homo sapiens became a species in its own right, less than 200,000 years ago. One variant of Microcephalin, now widespread, came into existence only about 37,000 years ago, while a widespread version of ASPM originated a mere 5,800 years ago—meaning that it post-dates the beginning of civilisation.

Dr Lahn and his team were able to estimate when the two gene-variants first appeared by looking at which groups of people carry them. The past two decades have revealed a lot about how, and when, humanity spread across the globe. By tracing the branches of the family trees that contain the variants backwards to the point where they join, the researchers could work out when each variant arose.

That the two variants have spread by natural selection rather than chance can be seen from the speed with which they have become established. If they had no positive consequences, their frequency would rise, if at all, only by chance—a process known as neutral drift. Drift alone, however, is far too slow to have made them as common as they now are in so few generations.
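The point can be made concrete with a toy simulation. The sketch below is not Dr Lahn's actual analysis, which rested on patterns in real haplotype data; it is a minimal Wright-Fisher-style model with an assumed population size, time span and selective advantage, showing how differently a rare new variant fares when it is favoured by selection and when it merely drifts.

    # Minimal Wright-Fisher-style sketch: drift versus selection.  The population
    # size, number of generations and selection coefficient are assumptions made
    # for illustration only.

    import numpy as np

    def final_frequency(n_copies, generations, start_freq, s, rng):
        """Frequency of a variant after `generations`, given selective advantage s."""
        freq = start_freq
        for _ in range(generations):
            # Selection nudges the expected frequency up a little each generation...
            expected = freq * (1 + s) / (freq * (1 + s) + (1 - freq))
            # ...and binomial sampling of the next generation supplies the drift.
            freq = rng.binomial(n_copies, expected) / n_copies
            if freq in (0.0, 1.0):
                break
        return freq

    rng = np.random.default_rng(0)
    neutral = [final_frequency(20_000, 1_500, 0.005, 0.00, rng) for _ in range(20)]
    favoured = [final_frequency(20_000, 1_500, 0.005, 0.03, rng) for _ in range(20)]
    print("neutral :", np.round(neutral, 3))    # stays rare, or is lost outright
    print("favoured:", np.round(favoured, 3))   # usually sweeps to high frequency

In run after run the neutral variant is still rare (or lost altogether) 1,500 generations later, roughly the 37,000 years since the Microcephalin variant arose, whereas the favoured variant has usually swept through the population. It is that contrast which allows the variants' present abundance to be read as the signature of selection.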

The third paper, by Toshiyuki Hayakawa and Takashi Angata, of the University of California, San Diego, and their colleagues, looks at a molecular receptor for a chemical called sialic acid. This chemical caused a stir a few years ago when it was discovered that human sialic acid is different from that found in apes—and, indeed, in any other mammal. Dr Hayakawa and Dr Angata have found a receptor for sialic acid that occurs in human brain cells (though the cells in question are support cells rather than actual nerve cells), but not in those of apes. The gene that encodes this receptor molecule seems to have been cobbled together from bits of two other genes, one of which, in a curious twist, had itself stopped working properly during the course of evolution.

What all this means is still mysterious. The study of brain evolution is still in the stamp-collecting phase that begins most branches of science, when researchers are looking for interesting facts to stick in their albums, rather than assembling overarching hypotheses. These three stamps, though, are very pretty. Eventually, they may turn out to be precious.

The story of man

IT WAS Herbert Spencer, an early contributor to The Economist, who invented that poisoned phrase, “survival of the fittest”. He originally applied it to the winnowing of firms in the harsh winds of high-Victorian capitalism, but when Darwin's masterwork, “On the Origin of Species”, was published, he quickly saw the parallel with natural selection and transferred his bon mot to the process of evolution. As a result, he became one of the band of philosophers known as social Darwinists. Capitalists all, they took what they thought were the lessons of Darwin's book and applied them to human society. Their hard-hearted conclusion, of which a 17th-century religious puritan might have been proud, was that people got what they deserved—albeit that the criterion of desert was genetic, rather than moral. The fittest not only survived, but prospered. Moreover, the social Darwinists thought that measures to help the poor were wasted, since such people were obviously unfit and thus doomed to sink.

Sadly, the slur stuck. For 100 years Darwinism was associated with a particularly harsh and unpleasant view of the world and, worse, one that was clearly not true—at least, not the whole truth. People certainly compete, but they collaborate, too. They also have compassion for the fallen and frequently try to help them, rather than treading on them. For this sort of behaviour, “On the Origin of Species” had no explanation. As a result, Darwinism had to tiptoe round the issue of how human society and behaviour evolved. Instead, the disciples of a second 19th-century creed, Marxism, dominated academic sociology departments with their cuddly collectivist ideas—even if the practical application of those ideas has been even more catastrophic than social Darwinism was.

But the real world eventually penetrates even the ivory tower. The failure of Marxism has prompted an opening of minds, and Darwinism is back with a vengeance—and a twist. Exactly how humanity became human is still a matter of debate. But there are, at least, some well-formed hypotheses (see article). What these hypotheses have in common is that they rely not on Spencer's idea of individual competition, but on social interaction. That interaction is, indeed, sometimes confrontational and occasionally bloody. But it is frequently collaborative, and even when it is not, it is more often manipulative than violent.

Modern Darwinism's big breakthrough was the identification of the central role of trust in human evolution. People who are related collaborate on the basis of nepotism. It takes outrageous profit or provocation for someone to do down a relative with whom they share a lot of genes. Trust, though, allows the unrelated to collaborate, by keeping score of who does what when, and punishing cheats.

Very few animals can manage this. Indeed, outside the primates, only vampire bats have been shown to trust non-relatives routinely. (Well-fed bats will give some of the blood they have swallowed to hungry neighbours, but expect the favour to be returned when they are hungry and will deny favours to those who have cheated in the past.) The human mind, however, seems to have evolved the trick of being able to identify a large number of individuals and to keep score of its relations with them, detecting the dishonest or greedy and taking vengeance, even at some cost to itself. This process may even be—as Matt Ridley, who wrote for this newspaper a century and a half after Spencer, described it—the origin of virtue.
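How such score-keeping lets reciprocity beat cheating is easy to show in miniature. The sketch below is an assumption-laden toy in the spirit of reciprocal-altruism games, not a model drawn from this article or from Matt Ridley's book: individuals who help strangers, remember who has cheated them and withhold help thereafter are pitted against individuals who only take.

    # Toy score-keeping model of reciprocity: helpers remember who has cheated
    # them and withhold help thereafter.  Payoffs and population make-up are
    # illustrative assumptions only.

    import itertools

    HELP_COST, HELP_BENEFIT = 1, 3   # helping costs a little; being helped is worth more

    class Reciprocator:
        """Helps anyone not yet known to be a cheat."""
        def __init__(self, name):
            self.name, self.score, self.known_cheats = name, 0, set()
        def will_help(self, other):
            return other.name not in self.known_cheats
        def observe(self, other, they_helped):
            if not they_helped:
                self.known_cheats.add(other.name)

    class Cheat(Reciprocator):
        """Accepts help but never returns it."""
        def will_help(self, other):
            return False

    def interact(a, b):
        a_helps, b_helps = a.will_help(b), b.will_help(a)
        a.score += (HELP_BENEFIT if b_helps else 0) - (HELP_COST if a_helps else 0)
        b.score += (HELP_BENEFIT if a_helps else 0) - (HELP_COST if b_helps else 0)
        a.observe(b, b_helps)
        b.observe(a, a_helps)

    population = [Reciprocator(f"R{i}") for i in range(8)] + [Cheat(f"C{i}") for i in range(2)]
    for _ in range(50):                              # fifty rounds of pairwise encounters
        for a, b in itertools.combinations(population, 2):
            interact(a, b)

    for kind in (Reciprocator, Cheat):
        members = [p for p in population if type(p) is kind]
        print(kind.__name__, "average score:", sum(p.score for p in members) / len(members))

After the first encounter the cheats are found out and get nothing more, while the reciprocators keep trading favours with one another, so over fifty rounds the score-keepers finish far ahead. That, in toy form, is the logic by which trust among non-relatives can pay its way.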

The new social Darwinists (those who see society itself, rather than the savannah or the jungle, as the “natural” environment in which humanity is evolving and to which natural selection responds) have not abandoned Spencer altogether, of course. But they have put a new spin on him. The ranking by wealth of which Spencer so approved is but one example of a wider tendency for people to try to out-do each other. And that competition, whether athletic, artistic or financial, does seem to be about genetic display. Unfakeable demonstrations of a superiority that has at least some underlying genetic component are almost unfailingly attractive to the opposite sex. Thus both of the things needed to make an economy work, collaboration and competition, seem to have evolved under Charles Darwin's penetrating gaze.

Intelligent design rears its head

Intelligent design derives from an early-19th-century explanation of the natural world given by an English clergyman, William Paley. Paley was the populariser of the famous watchmaker analogy. If you found a watch in a field, he wrote in 1802, you would infer that so fine and intricate a mechanism could not have been produced by unplanned, unguided natural forces; it could have been made only by an intelligent being. This view—that the complexity of an organism is evidence for the existence of God—prevailed until 1859, when Charles Darwin's “Origin of Species” showed how natural selection could indeed “explain so many classes of facts” (as Darwin put it).

Proponents of intelligent design are renewing Paley's argument with a new gloss from molecular biology. Darwin himself acknowledged that “If it could be demonstrated that any complex organ existed which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.” Intelligent designers claim that living things are full of such examples at the molecular level. Blood clotting is one: ten proteins have to work together in sequence for the process to occur. So-called eukaryotic cells, the complex cells with nuclei from which animals and plants are built, are another: these cells contain an elaborate “traffic system” which directs proteins to the right compartments, including those where nutrients are digested and wastes are excreted.

In both cases, argues Michael Behe, whose book “Darwin's Black Box” is one of the bibles of intelligent design, you have complex systems that will work only if all the components operate at once. He argues that you could not get such a thing from “successive, slight modifications”. Hence the molecular machines inside living beings are evidence of an intelligent designer—God.

Intelligent design asks interesting questions about evolution, but since its answer to all of them is “God”, scientists have rejected it. As the National Academy of Sciences has said, intelligent design “and other claims of supernatural intervention in the origin of life” are not science, because they cannot be tested by experiment and propose no hypotheses of their own. (Instead, intelligent designers poke holes in evolutionary theory.)

In addition, biologists point out that the intelligent designers' favourite examples of “irreducible complexity” often prove not to be. Some organisms, for example, use only six proteins to clot blood—irreducibility reduced. In other cases, single parts of a complex mechanism turn out to have useful functions of their own, meaning that the complex mechanism could have been produced by step-by-step evolution. When the Discovery Institute, a promoter of intelligent design, came up with a list of 370 people with science degrees who backed their ideas, the National Centre for Science Education responded with almost 600 scientists called Steve or Stephanie who rejected them.

Whichever way the argument over intelligent design is finally resolved, it is likely to damage science teaching. This is not because bad science standards will necessarily be adopted but because—as Diane Ravitch of the Brookings Institution showed in “The Language Police” in 2003—the biggest threat to high standards is the unwillingness of state boards of education to offend any sort of pressure group, whether of the right or the left. Instead, they avoid controversial topics altogether. In 2000, a survey by the Fordham Foundation found that only ten states taught evolution fully, six did so skimpily and in 13 the treatment was considered useless or absent. (Kansas received an F minus and the comment “disgraceful”.) These failings shame American evolution teaching, and the manufactured controversy over intelligent design will do nothing to remedy them.