The Violinist's Thumb: And Other Lost Tales of Love, War, and Genius, as Written by Our Genetic Code
Sam Kean
Life, therefore, may be considered a DNA chain reaction.
—MAXIM D. FRANK-KAMENETSKII, UNRAVELING DNA
Acrostic: n., an incognito message formed by stringing together the initial
letters of lines or paragraphs or other units of composition in a work.
This might as well come out up front, first paragraph. This is a book about
DNA—about digging up stories buried in your DNA for thousands, even
millions of years, and using DNA to solve mysteries about human beings
whose solutions once seemed lost forever. And yes, I’m writing this book
despite the fact that my father’s name is Gene. As is my mother’s name.
Gene and Jean. Gene and Jean Kean. Beyond being singsong absurd, my
parents’ names led to a lot of playground jabs over the years: my every fault
and foible was traced to “my genes,” and when I did something idiotic,
people smirked that “my genes made me do it.” That my parents’ passing
on their genes necessarily involved sex didn’t help. The taunts were doubly
barbed, utterly unanswerable.
Bottom line is, I dreaded learning about DNA and genes in science
classes growing up because I knew some witticism would be coming within
about two seconds of the teacher turning her back. And if it wasn’t coming,
some wiseacre was thinking it. Some of that Pavlovian trepidation always
stayed with me, even when (or especially when) I began to grasp how
potent a substance DNA is. I got over the gibes by high school, but the
word gene still evoked a lot of simultaneous responses, some agreeable,
some not.
On the one hand, DNA excites me. There’s no bolder topic in science
than genetics, no field that promises to push science forward to the same
degree. I don’t mean just the common (and commonly overblown) promises
of medical cures, either. DNA has revitalized every field in biology and
remade the very study of human beings. At the same time, whenever
someone starts digging into our basic human biology, we resist the intrusion
—we don’t want to be reduced to mere DNA. And when someone talks
about tinkering with that basic biology, it can be downright frightening.
More ambiguously, DNA offers a powerful tool for rooting through our
past: biology has become history by other means. Even in the past decade
or so, genetics has opened up a whole Bible’s worth of stories whose
plotlines we assumed had vanished—either too much time had lapsed, or
too little fossil or anthropological evidence remained to piece together a
coherent narrative. It turns out we were carrying those stories with us the
entire time, trillions of faithfully recorded texts that the little monks in our
cells transcribed every hour of every day of our DNA dark age, waiting for
us to get up to speed on the language. These stories include the grand sagas
of where we came from and how we evolved from primordial muck into the
most dominant species the planet has known. But the stories come home in
surprisingly individual ways, too.
If I could have had one mulligan in school (besides a chance to make up
safer names for my parents), I’d have picked a different instrument to play
in band. It wasn’t because I was the only boy clarinetist in the fourth, fifth,
sixth, seventh, eighth, and ninth grades (or not only because of that). It was
more because I felt so clumsy working all the valves and levers and
blowholes on the clarinet. Nothing to do with a lack of practice, surely. I
blamed the deficit on my double-jointed fingers and splayed hitchhiker
thumbs. Playing the clarinet wound my fingers into such awkward braids
that I constantly felt a need to crack my knuckles, and they’d throb a little.
Every blue moon one thumb would even get stuck in place, frozen in
extension, and I had to work the joint free with my other hand. My fingers
just didn’t do what the better girl clarinetists’ could. My problems were
inherited, I told myself, a legacy of my parents’ gene stock.
After quitting band, I had no reason to reflect on my theory about
manual dexterity and musical ability until a decade later, when I learned the
story of violinist Niccolò Paganini, a man so gifted he had to shake off
rumors his whole life that he’d sold his soul to Satan for his talent. (His
hometown church even refused to bury his body for decades after his
death.) It turns out Paganini had made a pact with a subtler master, his
DNA. Paganini almost certainly had a genetic disorder that gave him
freakishly flexible fingers. His connective tissues were so rubbery that he
could pull his pinky out sideways to form a right angle to the rest of his
hand. (Try this.) He could also stretch his hands abnormally wide, an
incomparable advantage when playing the violin. My simple hypothesis
about people “being born” to play (or not play) certain instruments seemed
justified. I should have quit when ahead. I kept investigating and found out
that Paganini’s syndrome probably caused serious health problems, as joint
pain, poor vision, weakness of breath, and fatigue dogged the violinist his
whole life. I whimpered about stiff knuckles during early a.m. marching-
band practice, but Paganini frequently had to cancel shows at the height of
his career and couldn’t perform in public during the last years of his life. In
Paganini, a passion for music had united with a body perfectly tuned to take
advantage of its flaws, possibly the greatest fate a human could hope for.
Those flaws then hastened his death. Paganini may not have chosen his pact
with his genes, but he was in one, like all of us, and the pact both made and
unmade him.
DNA wasn’t done telling its stories to me. Some scientists have
retroactively diagnosed Charles Darwin, Abraham Lincoln, and Egyptian
pharaohs with genetic disorders. Other scientists have plumbed DNA itself
to articulate its deep linguistic properties and surprising mathematical
beauty. In fact, just as I had crisscrossed from band to biology to history to
math to social studies in high school, so stories about DNA began popping
up in all sorts of contexts, linking all sorts of disparate subjects. DNA
informed stories about people surviving nuclear bombs, and stories about
the untimely ends of explorers in the Arctic. Stories about the near
extinction of the human species, or pregnant mothers giving cancer to their
unborn children. Stories where, as with Paganini, science illuminates art,
and even stories where—as with scholars tracing genetic defects through
portraiture—art illuminates science.
One fact you learn in biology class but don’t appreciate at first is the
sheer length of the DNA molecule. Despite being packed into a tiny closet
in our already tiny cells, DNA can unravel to extraordinary distances.
There’s enough DNA in some plant cells to stretch three hundred feet;
enough DNA in one human body to stretch roughly from Pluto to the sun
and back; enough DNA on earth to stretch across the known universe many,
many times. And the further I pursued the stories of DNA, the more I saw
that its quality of stretching on and on—of unspooling farther and farther
out, and even back, back through time—was intrinsic to DNA. Every
human activity leaves a forensic trace in our DNA, and whether that DNA
records stories about music or sports or Machiavellian microbes, those tales
tell, collectively, a larger and more intricate tale of the rise of human beings
on Earth: why we’re one of nature’s most absurd creatures, as well as its
crowning glory.
Chills and flames, frost and inferno, fire and ice. The two scientists who
made the first great discoveries in genetics had a lot in common—not least
the fact that both died obscure, mostly unmourned and happily forgotten by
many. But whereas one’s legacy perished in fire, the other’s succumbed to
ice.
The blaze came during the winter of 1884, at a monastery in what’s now
the Czech Republic. The friars spent a January day emptying out the office
of their deceased abbot, Gregor Mendel, ruthlessly purging his files,
consigning everything to a bonfire in the courtyard. Though a warm and
capable man, late in life Mendel had become something of an
embarrassment to the monastery, the cause for government inquiries,
newspaper gossip, even a showdown with a local sheriff. (Mendel won.) No
relatives came by to pick up Mendel’s things, and the monks burned his
papers for the same reason you’d cauterize a wound—to sterilize, and
stanch embarrassment. No record survives of what they looked like, but
among those documents were sheaves of papers, or perhaps a lab notebook
with a plain cover, probably coated in dust from disuse. The yellowed pages
would have been full of sketches of pea plants and tables of numbers
(Mendel adored numbers), and they probably didn’t kick up any more
smoke and ash than other papers when incinerated. But the burning of those
papers—burned on the exact spot where Mendel had kept his greenhouse
years before—destroyed the only original record of the discovery of the
gene.
The chills came during that same winter of 1884—as they had for many
winters before, and would for too few winters after. Johannes Friedrich
Miescher, a middling professor of physiology in Switzerland, was studying
salmon, and among his other projects he was indulging a long-standing
obsession with a substance—a cottony gray paste—he’d extracted from
salmon sperm years before. To keep the delicate sperm from perishing in
the open air, Miescher had to throw the windows open to the cold and
refrigerate his lab the old-fashioned way, exposing himself day in and day
out to the Swiss winter. Getting any work done required superhuman focus,
and that was the one asset even people who thought little of Miescher
would admit he had. (Earlier in his career, friends had to drag him from his
lab bench one afternoon to attend his wedding; the ceremony had slipped
his mind.) Despite being so driven, Miescher had pathetically little to show
for it—his lifetime scientific output was meager. Still, he kept the windows
open and kept shivering year after year, though he knew it was slowly
killing him. And he still never got to the bottom of that milky gray
substance, DNA.
DNA and genes, genes and DNA. Nowadays the words have become
synonymous. The mind rushes to link them, like Gilbert and Sullivan or
Watson and Crick. So it seems fitting that Miescher and Mendel discovered
DNA and genes almost simultaneously in the 1860s, two monastic men just
four hundred miles apart in the German-speaking span of middle Europe. It
seems more than fitting; it seems fated.
But to understand what DNA and genes really are, we have to decouple
the two words. They’re not identical and never have been. DNA is a thing
—a chemical that sticks to your fingers. Genes have a physical nature, too;
in fact, they’re made of long stretches of DNA. But in some ways genes are
better viewed as conceptual, not material. A gene is really information—
more like a story, with DNA as the language the story is written in. DNA
and genes combine to form larger structures called chromosomes, DNA-
rich volumes that house most of the genes in living things. Chromosomes in
turn reside in the cell nucleus, a library with instructions that run our entire
bodies.
All these structures play important roles in genetics and heredity, but
despite the near-simultaneous discovery of each in the 1800s, no one
connected DNA and genes for almost a century, and both discoverers died
uncelebrated. How biologists finally yoked genes and DNA together is the
first epic story in the science of inheritance, and even today, efforts to refine
the relationship between genes and DNA drive genetics forward.
Mendel and Miescher began their work at a time when folk theories—some
uproarious or bizarre, some quite ingenious, in their way—dominated most
people’s thinking about heredity, and for centuries these folk theories had
colored their views about why we inherit different traits.
Everyone knew on some level of course that children resemble parents.
Red hair, baldness, lunacy, receding chins, even extra thumbs, could all be
traced up and down a genealogical tree. And fairy tales—those codifiers of
the collective unconscious—often turned on some wretch being a “true”
prince(ss) with a royal bloodline, a biological core that neither rags nor an
amphibian frame could sully.
That’s mostly common sense. But the mechanism of heredity—how
exactly traits got passed from generation to generation—baffled even the
most intelligent thinkers, and the vagaries of this process led to many of the
wilder theories that circulated before and even during the 1800s. One
ubiquitous folk theory, “maternal impressions,” held that if a pregnant
woman saw something ghoulish or suffered intense emotions, the
experience would scar her child. One woman who never satisfied an intense
prenatal craving for strawberries gave birth to a baby covered with red,
strawberry-shaped splotches. The same could happen with bacon. Another
woman bashed her head on a sack of coal, and her child had half, but only
half, a head of black hair. More direly, doctors in the 1600s reported that a
woman in Naples, after being startled by sea monsters, bore a son covered
in scales, who ate fish exclusively and gave off fishy odors. Bishops told
cautionary tales of a woman who seduced her actor husband backstage in
full costume. He was playing Mephistopheles; they had a child with hooves
and horns. A beggar with one arm spooked a woman into having a one-
armed child. Pregnant women who pulled off crowded streets to pee in
churchyards invariably produced bed wetters. Carrying fireplace logs about
in your apron, next to the bulging tummy, would produce a grotesquely
well-hung lad. About the only recorded happy case of maternal impressions
involved a patriotic woman in Paris in the 1790s whose son had a birthmark
on his chest shaped like a Phrygian cap—those elfish hats with a flop of
material on top. Phrygian caps were symbols of freedom to the new French
republic, and the delighted government awarded her a lifetime pension.
Much of this folklore intersected with religious belief, and people
naturally interpreted serious birth defects—cyclopean eyes, external hearts,
full coats of body hair—as back-of-the-Bible warnings about sin, wrath,
and divine justice. One example from the 1680s involved a cruel bailiff in
Scotland named Bell, who arrested two female religious dissenters, lashed
them to poles near the shore, and let the tide swallow them. Bell added
insult by taunting the women, then drowned the younger, more stubborn
one with his own hands. Later, when asked about the murders, Bell always
laughed, joking that the women must be having a high time now, scuttling
around among the crabs. The joke was on Bell: after he married, his
children were born with a severe defect that twisted their forearms into two
awful pincers. These crab claws proved highly heritable to their children
and grandchildren, too. It didn’t take a biblical scholar to see that the
iniquity of the father had been visited upon the children, unto the third and
fourth generations. (And beyond: cases popped up in Scotland as late as
1900.)
If maternal impressions stressed environmental influences, other
theories of inheritance had strong congenital flavors. One, preformationism,
grew out of the medieval alchemists’ quest to create a homunculus, a
miniature, even microscopic, human being. Homunculi were the biological
philosopher’s stone, and creating one showed that an alchemist possessed
the power of gods. (The process of creation was somewhat less dignified.
One recipe called for fermenting sperm, horse dung, and urine in a pumpkin
for six weeks.) By the late 1600s, some protoscientists had stolen the idea
of the homunculus and were arguing that one must live inside each female
egg cell. This neatly did away with the question of how living embryos
arose from seemingly dead blobs of matter. Under preformationist theory,
such spontaneous generation wasn’t necessary: homuncular babies were
indeed preformed and merely needed a trigger, like sperm, to grow. This
idea had only one problem: as critics pointed out, it introduced an infinite
regress, since a woman necessarily had to have all her future children, as
well as their children, and their children, stuffed inside her, like Russian
matryoshka nesting dolls. Indeed, adherents of “ovism” could only deduce
that God had crammed the entire human race into Eve’s ovaries on day one.
(Or rather, day six of Genesis.) “Spermists” had it even worse—Adam must
have had humanity entire sardined into his even tinier sperms. Yet after the
first microscopes appeared, a few spermists tricked themselves into seeing
tiny humans bobbing around in puddles of semen. Both ovism and
spermism gained credence in part because they explained original sin: we
all resided inside Adam or Eve during their banishment from Eden and
therefore all share the taint. But spermism also introduced theological
quandaries—for what happened to the endless number of unbaptized souls
that perished every time a man ejaculated?
However poetic or deliciously bawdy these theories were, biologists in
Miescher’s day scoffed at them as old wives’ tales. These men wanted to
banish wild anecdotes and vague “life forces” from science and ground all
heredity and development in chemistry instead.
Miescher hadn’t originally planned to join this movement to demystify
life. As a young man he had trained to practice the family trade, medicine,
in his native Switzerland. But a boyhood typhoid infection had left him hard
of hearing and unable to use a stethoscope or hear an invalid’s bedside
bellyaching. Miescher’s father, a prominent gynecologist, suggested a
career in research instead. So in 1868 the young Miescher moved into a lab
run by the biochemist Felix Hoppe-Seyler, in Tübingen, Germany. Though
headquartered in an impressive medieval castle, Hoppe-Seyler’s lab
occupied the royal laundry room in the basement; he found Miescher space
next door, in the old kitchen.
Friedrich Miescher (inset) discovered DNA in this laboratory, a renovated kitchen in the
basement of a castle in Tübingen, Germany. (University of Tübingen library)
This was not how a Nobel laureate should have to spend his time. In late
1933, shortly after winning science’s highest honor, Thomas Hunt Morgan
got a message from his longtime assistant Calvin Bridges, whose libido had
landed him in hot water. Again.
A “confidence woman” from Harlem had met Bridges on a cross-
country train a few weeks before. She quickly convinced him not only that
she was a regal princess from India, but that her fabulously wealthy
maharaja of a father just happened to have opened—coincidence of all
coincidences—a science institute on the subcontinent in the very field that
Bridges (and Morgan) worked in, fruit fly genetics. Since her father needed
a man to head the institute, she offered Bridges the job. Bridges, a real
Casanova, would likely have shacked up with the woman anyway, and the
job prospect made her irresistible. He was so smitten he began offering his
colleagues jobs in India and didn’t seem to notice Her Highness’s habit of
running up extraordinary bills whenever they went carousing. In fact, when
out of earshot, the supposed princess claimed to be Mrs. Bridges and
charged everything she could to him. When the truth emerged, she tried to
extort more cash by threatening to sue him “for transporting her across state
lines for immoral purposes.” Panicked and distraught—despite his adult
activities, Bridges was quite childlike—he turned to Morgan.
Morgan no doubt consulted with his other trusted assistant, Alfred
Sturtevant. Like Bridges, Sturtevant had worked with Morgan for decades,
and the trio had shared in some of the most important discoveries in
genetics history. Sturtevant and Morgan both scowled in private over
Bridges’s dalliances and escapades, but their loyalty trumped any other
consideration here. They decided that Morgan should throw his weight
around. In short order, he threatened to expose the woman to the police, and
kept up the pressure until Miss Princess disappeared on the next train.
Morgan then hid Bridges away until the situation blew over.*
When he’d hired Bridges as a factotum years before, Morgan could
never have expected he’d someday be acting as a goodfella for him. Then
again, Morgan could never have expected how most everything in his life
had turned out. After laboring away in anonymity, he had now become a
grand panjandrum of genetics. After working in comically cramped quarters
in Manhattan, he now oversaw a spacious lab in California. After lavishing
so much attention and affection on his “fly boys” over the years, he was
now fending off charges from former assistants that he’d stolen credit for
others’ ideas. And after fighting so hard for so long against the overreach of
ambitious scientific theories, he’d now surrendered to, and even helped
expand, the two most ambitious theories in all biology.
Morgan’s younger self might well have despised his older self for this
last thing. Morgan had begun his career at a curious time in science history,
around 1900, when a most uncivil civil war broke out between Mendel’s
genetics and Darwin’s natural selection: things got so nasty, most biologists
felt that one theory or the other would have to be exterminated. In this war
Morgan had tried to stay Switzerland, refusing at first to accept either
theory. Both relied too much on speculation, he felt, and Morgan had an
almost reactionary distrust of speculation. If he couldn’t see proof for a
theory in front of his corneas, he wanted to banish it from science. Indeed,
if scientific advances often require a brilliant theorist to emerge and explain
his vision with perfect clarity, the opposite was true for Morgan, who was
cussedly stubborn and notoriously muddled in his reasoning—anything but
literally visible proof bemused him.
And yet that very confusion makes him the perfect guide to follow along
behind during the War of the Roses interlude when Darwinists and
Mendelists despised each other. Morgan mistrusted genetics and natural
selection equally at first, but his patient experiments on fruit flies teased out
the half-truths of each. He eventually succeeded—or rather, he and his
talented team of assistants succeeded—in weaving genetics and evolution
together into the grand tapestry of modern biology.
Playboy Calvin Bridges (left) and a rare photo of Thomas Hunt Morgan (right). Morgan so
detested having his picture taken that an assistant who once wanted one had to hide a camera in
a bureau in the fly lab and snap the photo remotely by tugging a string. (Courtesy of the
National Library of Medicine)
Meeting Bridges and Sturtevant must have cheered Morgan, because his
experiments had all but flopped until then. Unable to find any natural
mutants, he’d exposed flies to excess heat and cold and injected acids, salts,
bases, and other potential mutagens into their genitals (not easy to find).
Still nothing. On the verge of giving up, in January 1910 he finally spotted a
fly with a strange trident shape tattooed on its thorax. Not exactly a de
Vriesian über-fly, but something. In March two more mutants appeared, one
with ragged moles near its wings that made it appear to have “hairy
armpits,” another with an olive (instead of the normal amber) body color. In
May 1910 the most dramatic mutant yet appeared, a fly with white (instead
of red) eyes.
Anxious for a breakthrough—perhaps this was a mutation period—
Morgan tediously isolated white-eyes. He uncapped the milk bottle,
balanced another one upside down on top of it like mating ketchup bottles,
and shined a light through the top to coax white-eyes upward. Of course,
hundreds of other flies joined white-eyes in the top bottle, so Morgan had to
quickly cap both, get a new milk bottle, and repeat the process over and
over, slowly whittling down the number with each step, praying to God white-
eyes didn’t escape meantime. When he finally, finally segregated the bug,
he mated it with red-eyed females. Then he bred the descendants with each
other in various ways. The results were complex, but one result especially
excited Morgan: after crossing some red-eyed descendants with each other,
he discovered among the offspring a 3:1 ratio of red to white eyes.
The year before, in 1909, Morgan had heard the Danish botanist
Wilhelm Johannsen lecture about Mendelian ratios at Columbia. Johannsen
used the occasion to promote his newly minted word, gene, a proposed unit
of inheritance. Johannsen and others freely admitted that genes were
convenient fictions, linguistic placeholders for, well, something. But they
insisted that their ignorance about the biochemical details of genes
shouldn’t invalidate the usefulness of the gene concept for studying
inheritance (similar to how psychologists today can study euphoria or
depression without understanding the brain in detail). Morgan found the
lecture too speculative, but his experimental results—3:1—promptly
lowered his prejudice against Mendel.
This was quite a volte-face for Morgan, but it was just the start. The eye-
color ratios convinced him that gene theory wasn’t bunk. But where were
genes actually located? Perhaps on chromosomes, but fruit flies had
hundreds of inheritable traits and only four chromosomes. Assuming one
trait per chromosome, as many scientists did, there weren’t enough to go
around. Morgan didn’t want to get dragged into debates on so-called
chromosome theory, but a subsequent discovery left him no choice: because
when he scrutinized his white-eyed flies, he discovered that every last
mutant was male. Scientists already knew that one chromosome determined
the gender of flies. (As in mammals, female flies have two X chromosomes,
males one.) Now the white-eye gene was linked to that chromosome as well
—putting two traits on it. Soon the fly boys found other genes—stubby
wings, yellow bodies—also linked exclusively to males. The conclusion
was inescapable: they’d proved that multiple genes rode around together on
one chromosome.* That Morgan had proved this practically against his own
will mattered little; he began to champion chromosome theory anyway.
Overthrowing old beliefs like this became a habit with Morgan,
simultaneously his most admirable and most maddening trait. Although he
encouraged theoretical discussions in the fly room, Morgan considered new
theories cheap and facile—worth little until cross-examined in the lab. He
didn’t seem to grasp that scientists need theories as guides, to decide what’s
relevant and what’s not, to frame their results and prevent muddled
thinking. Even undergraduates like Bridges and Sturtevant—and especially
a student who joined the fly room later, the abrasively brilliant and
brilliantly abrasive Hermann Muller—grew hair-rippingly frustrated with
Morgan in the many quarrels they had over genes and heredity. And then,
just as exasperating, when someone did wrestle Morgan into a headlock and
convince him he was wrong, Morgan would ditch his old ideas and with no
embarrassment whatsoever absorb the new ones as obvious.
To Morgan, this quasi plagiarism was no big deal. Everyone was
working toward the same goal (right, fellas?), and only experiments
mattered anyway. And to his credit, his about-faces proved that Morgan
listened to his assistants, a contrast to the condescending relationship most
European scientists had with their help. For this reason Bridges and
Sturtevant always publicly professed their loyalty to Morgan. But visitors
sometimes picked up on sibling rivalries among the assistants, and secret
smoldering. Morgan didn’t mean to connive or manipulate; credit for ideas
just meant that little to him.
Nevertheless ideas kept ambushing Morgan, ideas he hated. Because not
long after the unified gene-chromosome theory emerged, it nearly
unraveled, and only a radical idea could salvage it. Again, Morgan had
determined that multiple genes clustered together on one chromosome. And
he knew from other scientists’ work that parents pass whole chromosomes
on to their children. All the genetic traits on each chromosome should
therefore always be inherited together—they should always be linked. To
take a hypothetical example, if one chromosome’s set of genes calls for
green bristles and sawtooth wings and fat antennae, any fly with one trait
should exhibit all three. Such clusters of traits do exist in flies, but to their
dismay, Morgan’s team discovered that certain linked traits could
sometimes become unlinked—green bristles and sawtooth wings, which
should always appear together, would somehow show up separately, in
different flies. Unlinkings weren’t common—linked traits might separate 2
percent of the time, or 4 percent—but they were so persistent they might
have undone the entire theory, if Morgan hadn’t indulged in a rare flight of
fancy.
He remembered reading a paper by a Belgian biologist-priest who had
used a microscope to study how sperm and eggs form. One key fact of
biology—it comes up over and over—is that all chromosomes come in
pairs, pairs of nearly identical twins. (Humans have forty-six chromosomes,
arranged in twenty-three pairs.) When sperm and eggs form, these near-twin
chromosomes all line up in the middle of the parent cell. During division
one twin gets pulled one way, the other the other way, and two separate
cells are born.
However, the priest-biologist noticed that, just before the divvying up,
twin chromosomes sometimes interacted, coiling their tips around each
other. He didn’t know why. Morgan suggested that perhaps the tips broke
off during this crossing over and swapped places. This explained why
linked traits sometimes separated: the chromosome had broken somewhere
between the two genes, dislocating them. What’s more, Morgan speculated
—he was on a roll—that traits separating 4 percent of the time probably sat
farther apart on chromosomes than those separating 2 percent of the time,
since the extra distance between the first pair would make breaking along
that stretch more likely.
Morgan’s shrewd guess turned out correct, and with Sturtevant and
Bridges adding their own insights over the next few years, the fly boys
began to sketch out a new model of heredity—the model that made
Morgan’s team so historically important. It said that all traits were
controlled by genes, and that these genes resided on chromosomes in fixed
spots, strung along like pearls on a necklace. Because creatures inherit one
copy of each chromosome from each parent, chromosomes therefore pass
genetic traits from parent to child. Crossing over (and mutation) changes
chromosomes a little, which helps make each creature unique. Nevertheless
chromosomes (and genes) stay mostly intact, which explains why traits run
in families. Voilà: the first overarching sense of how heredity works.
In truth, little of this theory originated in Morgan’s lab, as biologists
worldwide had discovered various pieces. But Morgan’s team finally linked
these vaguely connected ideas, and fruit flies provided overwhelming
experimental proof. No one could deny that sex chromosome linkage
occurred, for instance, when Morgan had ten thousand mutants buzzing on
a shelf, nary a female among them.
Of course, while Morgan won acclaim for uniting these theories, he’d
done nothing to reconcile them with Darwinian natural selection. That
reconciliation also arose from work inside the fly room, but once again
Morgan ended up “borrowing” the idea from assistants, including one who
didn’t accept this as docilely as Bridges and Sturtevant did.
Hermann Muller began poking around the fly room in 1910, though only
occasionally. Because he supported his elderly mother, Muller lived a
haphazard life, working as a factotum in hotels and banks, tutoring
immigrants in English at night, bolting down sandwiches on the subway
between jobs. Somehow Muller found time to befriend writer Theodore
Dreiser in Greenwich Village, immerse himself in socialist politics, and
commute two hundred miles to Cornell University to finish a master’s
degree. But no matter how frazzled he got, Muller used his one free day,
Thursday, to drop in on Morgan and the fly boys and bandy about ideas on
genetics. Intellectually nimble, Muller starred in these bull sessions, and
Morgan granted him a desk in the fly room after he graduated from Cornell
in 1912. The problem was, Morgan declined to pay Muller, so Muller’s
schedule didn’t let up. He soon had a mental breakdown.
From then on, and for decades afterward, Muller seethed over his status
in the fly room. He seethed that Morgan openly favored the bourgeois
Sturtevant and shunted menial tasks like preparing bananas onto the blue-
collar, proletarian Bridges. He seethed that both Bridges and Sturtevant got
paid to experiment on his, Muller’s, ideas, while he scrambled around the
five boroughs for pocket change. He seethed that Morgan treated the fly
room like a clubhouse and sometimes made Muller’s friends work down the
hall. Muller seethed above all that Morgan was oblivious to his
contributions. This was partly because Muller proved slow in doing the
thing Morgan most valued—actually carrying out the clever experiments he
(Muller) dreamed up. Indeed, Muller probably couldn’t have found a worse
mentor than Morgan. For all his socialist leanings, Muller got pretty
attached to his own intellectual property, and felt the free and communal
nature of the fly room both exploited and ignored his talent. Nor was Muller
exactly up for Mr. Congeniality. He harped on Morgan, Bridges, and
Sturtevant with tactless criticism, and got almost personally offended by
anything but pristine logic. Morgan’s breezy dismissal of evolution by
natural selection especially irked Muller, who considered it the foundation
of biology.
Despite the personality clashes he caused, Muller pushed the fly group
to greater work. In fact, while Morgan contributed little to the emerging
theory of inheritance after 1911, Muller, Bridges, and Sturtevant kept
making fundamental discoveries. Unfortunately, it’s hard to sort out
nowadays who discovered what, and not just because of the constant idea
swapping. Morgan and Muller often scribbled thoughts down on
unorganized scraps, and Morgan purged his file cabinet every five years,
perhaps out of necessity in his cramped lab. Muller hoarded documents, but
many years later, yet another colleague he’d managed to alienate threw out
Muller’s files while Muller was working abroad. Morgan also (like
Mendel’s fellow friars) destroyed Bridges’s files when the free lover died of
heart problems in 1938. Turns out Bridges was a bedpost notcher, and when
Morgan found a detailed catalog of fornication, he thought it prudent to
burn all the papers and protect everyone in genetics.
But historians can assign credit for some things. All the fly boys helped
determine which clusters of traits got inherited together. More important,
they discovered that four distinct clusters existed in flies—exactly the
number of chromosome pairs. This was a huge boost for chromosome
theory because it showed that every chromosome harbored multiple genes.
Sturtevant built on this notion of gene and chromosome linkage. Morgan
had guessed that genes separating 2 percent of the time must sit closer
together on chromosomes than genes separating 4 percent of the time.
Ruminating one evening, Sturtevant realized he could translate those
percentages into actual distances. Specifically, genes separating 2 percent of
the time must sit twice as close together as the other pair; similar logic held
for other percent linkages. Sturtevant blew off his undergraduate homework
that night, and by dawn this nineteen-year-old had sketched the first map of
a chromosome. When Muller saw the map, he “literally jumped with
excitement”—then pointed out ways to improve it.
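Sturtevant's insight can be played out in a few lines of code. The Python sketch below uses made-up gene names and recombination percentages (not Sturtevant's actual fly data) to show the idea: treat "percent of offspring in which two linked traits separate" as a distance, and the genes fall onto a line.

```python
# A minimal sketch of Sturtevant's logic: the percentage of offspring in which
# two linked traits separate (the recombination frequency) acts as a distance,
# so genes can be ordered along a line. Gene names and numbers are hypothetical.

recomb = {                      # recombination frequency between gene pairs, in percent
    ("A", "B"): 2.0,
    ("B", "C"): 4.0,
    ("A", "C"): 6.0,            # roughly additive if the order is A - B - C
}

def freq(x, y):
    """Look up a pair's recombination frequency regardless of order."""
    return recomb.get((x, y), recomb.get((y, x)))

genes = {g for pair in recomb for g in pair}

# Anchor the map at an arbitrary reference gene, then place every other gene
# at a position equal to its recombination frequency with that reference
# (1 percent recombination = 1 map unit).
reference = "A"
positions = {g: (0.0 if g == reference else freq(reference, g)) for g in genes}

for gene, pos in sorted(positions.items(), key=lambda kv: kv[1]):
    print(f"{gene}: {pos} map units from {reference}")

# Consistency check: for a correct linear order, distances should add up,
# e.g. freq(A,B) + freq(B,C) should be close to freq(A,C).
print("additive check:", freq("A", "B") + freq("B", "C"), "vs", freq("A", "C"))
```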
Bridges discovered “nondisjunction”—the occasional failure of
chromosomes to separate cleanly after crossing over and twisting arms.
(The excess of genetic material that results can cause problems like Down
syndrome.) And beyond individual discoveries, Bridges, a born tinkerer,
industrialized the fly room. Instead of tediously separating flies by turning
bottle after bottle upside down, Bridges invented an atomizer to puff wee
doses of ether over flies and stun them. He also replaced loupes with
binocular microscopes; handed out white porcelain plates and fine-tipped
paintbrushes so that people could see and manipulate flies more easily;
replaced rotting bananas with a nutritious slurry of molasses and cornmeal;
and built climate-controlled cabinets so that flies, which become sluggish in
cold, could breed summer and winter. He even built a fly morgue to dispose
of corpses with dignity. Morgan didn’t always appreciate these
contributions—he continued to squish flies wherever they landed, despite
the morgue. But Bridges knew how rarely mutants popped up, and when they
did, his biological factory allowed each one to thrive and produce millions of
descendants.*
Muller contributed insights and ideas, dissolving apparent contradictions
and undergirding lean-to theories with firm logic. And although he had to
argue with Morgan until his tongue bled, he finally made the senior scientist
see how genes, mutations, and natural selection work together. As Muller
(among others) outlined it: Genes give creatures traits, so mutations to
genes change traits, making creatures different in color, height, speed, or
whatever. But contra de Vries—who saw mutations as large things,
producing sports and instant species—most mutations simply tweak
creatures. Natural selection then allows the better-adapted of these creatures
to survive and reproduce more often. Crossing over comes into play
because it shuffles genes around between chromosomes and therefore puts
new versions of genes together, giving natural selection still more variety to
work on. (Crossing over is so important that some scientists today think that
sperm and eggs refuse to form unless chromosomes cross a minimum
number of times.)
Muller also helped expand scientists’ very ideas about what genes could
do. Most significantly, he argued that traits like the ones Mendel had
studied—binary traits, controlled by one gene—weren’t the only story.
Many important traits are controlled by multiple genes, even dozens of
genes. These traits will therefore show gradations, depending on which
exact genes a creature inherits. Certain genes can also turn the volume up or
down on other genes, crescendos and decrescendos that produce still finer
gradations. Crucially, however, because genes are discrete and particulate, a
beneficial mutation will not be diluted between generations. The gene stays
whole and intact, so superior parents can breed with inferior types and still
pass the gene along.
To Muller, Darwinism and Mendelism reinforced each other beautifully.
And when Muller finally convinced Morgan of this, Morgan became a
Darwinian. It’s easy to chuckle over this—yet another Morgan conversion
—and in later writings, Morgan still emphasizes genetics as more important
than natural selection. However, Morgan’s endorsement was important in a
larger sense. Grandiloquent theories (including Darwin’s) dominated
biology at the time, and Morgan had helped keep the field grounded, always
demanding hard evidence. So other biologists knew that if some theory
convinced even Thomas Hunt Morgan, it had something going for it.
What’s more, even Muller recognized Morgan’s personal influence. “We
should not forget,” Muller once admitted, “the guiding personality of
Morgan, who infected all the others by his own example—his indefatigable
activity, his deliberation, his jolliness, and courage.” In the end, Morgan’s
bonhomie did what Muller’s brilliant sniping couldn’t: convinced
geneticists to reexamine their prejudice against Darwin, and take the
proposed synthesis of Darwin and Mendel, natural selection and genetics,
seriously.
Many other scientists did indeed take up the work of Morgan’s team in
the 1920s, spreading the unassuming fruit fly to labs around the world. It
soon became the standard animal in genetics, allowing scientists
everywhere to compare discoveries on equal terms. Building on such work,
a generation of mathematically minded biologists in the 1930s and 1940s
began investigating how mutations spread in natural populations, outside
the lab. They demonstrated that if a gene gave some creatures even a small
survival advantage, that boost could, if compounded long enough, push
species in new directions. What’s more, most changes would take place in
tiny steps, exactly as Darwin had insisted. If the fly boys’ work finally
showed how to link Mendel with Darwin, these later biologists made the
case as rigorous as a Euclidean proof. Darwin had once moaned how
“repugnant” math was to him, how he struggled with most anything beyond
taking simple measurements. In truth, mathematics buttressed Darwin’s
theory and ensured his reputation would never lapse again.* And in this way
the so-called eclipse of Darwinism in the early 1900s proved exactly that: a
period of darkness and confusion, but a period that ultimately passed.
Beyond the scientific gains, the diffusion of fruit flies around the world
inspired another legacy, a direct outgrowth of Morgan’s “jolliness.”
Throughout genetics, the names of most genes are ugly abbreviations, and
they stand for monstrous freak words that maybe six people worldwide
understand. So when discussing, say, the alox12b gene, there’s often no
point in spelling out its name (arachidonate 12-lipoxygenase, 12R type),
since doing so confuses rather than clarifies. (To save everyone a
migraine, from now on I’ll just state gene acronyms and pretend they stand
for nothing.) But whereas gene names are intimidatingly complex,
chromosome names are stupefyingly banal. Planets are named after gods,
chemical elements after myths, heroes, and great cities. Chromosomes were
named with all the creativity of shoe sizes. Chromosome one is the longest,
chromosome two the second longest, and (yawn) so on. Human
chromosome twenty-one is actually shorter than chromosome twenty-two,
but by the time scientists figured this out, chromosome twenty-one was
famous, since having an extra number twenty-one causes Down syndrome.
And really, with such boring names, there was no point in fighting over
them or bothering to change them.
Fruit fly scientists, God bless ’em, are the big exception. Morgan’s team
always picked sensibly descriptive names for mutant genes like speck,
beaded, rudimentary, white, and abnormal. And this tradition continues
today, as the names of most fruit fly genes eschew jargon and even shade
whimsical. Different fruit fly genes include groucho, smurf, fear of
intimacy, lost in space, smellblind, faint sausage, tribble (the multiplying
fuzzballs on Star Trek), and tiggywinkle (after Mrs. Tiggy-winkle, a
character from Beatrix Potter). The armadillo gene, when mutated, gives
fruit flies a plated exoskeleton. The turnip gene makes flies stupid. Tudor
leaves males (as with Henry VIII) childless. Cleopatra can kill flies when it
interacts with another gene, asp. Cheap date leaves flies exceptionally tipsy
after a sip of alcohol. Fruit fly sex especially seems to inspire clever names.
Ken and barbie mutants have no genitalia. Male coitus interruptus mutants
spend just ten minutes having sex (the norm is twenty), while stuck mutants
cannot physically disengage after coitus. As for females, dissatisfaction
mutants never have sex at all—they spend all their energy shooing suitors
away by snapping their wings. And thankfully, this whimsy with names has
inspired the occasional zinger in other areas of genetics. A gene that gives
mammals extra nipples earned the name scaramanga, after the James Bond
villain with too many. A gene that removes blood cells from circulation in
fish became the tasteful vlad tepes, after Vlad the Impaler, the historical
inspiration for Dracula. The backronym for the “POK erythroid myeloid
ontogenic” gene in mice—pokemon—nearly provoked a lawsuit, since the
pokemon gene (now known, sigh, as zbtb7) contributes to the spread of
cancer, and the lawyers for the Pokémon media empire didn’t want their
cute little pocket monsters confused with tumors. But my winner for the
best, and freakiest, gene name goes to the flour beetle’s medea, after the
ancient Greek mother who committed infanticide. Medea encodes a protein
with the curious property that it’s both a poison and its own antidote. So if a
mother has this gene but doesn’t pass it to an embryo, her body
exterminates the fetus—nothing she can do about it. If the fetus has the
gene, s/he creates the antidote and lives. (Medea is a “selfish genetic
element,” a gene that demands its own propagation above all, even to the
detriment of a creature as a whole.) If you can get beyond the horror, it’s a
name worthy of the Columbia fruit fly tradition, and it’s fitting that the most
important clinical work on medea—which could lead to very smart
insecticides—came after scientists introduced it into Drosophila for further
study.
But long before these cute names emerged, and even before fruit flies
had colonized genetics labs worldwide, the original fly group at Columbia
had disbanded. Morgan moved to the California Institute of Technology in
1928 and took Bridges and Sturtevant with him to his new digs in sunny
Pasadena. Five years later Morgan became the first geneticist to win the
Nobel Prize, “for establishing,” one historian noted, “the very principles of
genetics he had set out to refute.” The Nobel committee has an arbitrary
rule that three people at most can share a Nobel, so the committee awarded
it to Morgan alone, rather than—as it should have—splitting it among
him, Bridges, Sturtevant, and Muller. Some historians argue that Sturtevant
did work important enough to win his own Nobel but that his devotion to
Morgan and willingness to relinquish credit for ideas diminished his
chances. Perhaps in tacit acknowledgment of this, Morgan shared his prize
money from the Nobel with Sturtevant and Bridges, setting up college
funds for their children. He shared nothing with Muller.
Muller had fled Columbia for Texas by then. He started in 1915 as a
professor at Rice University (whose biology department was chaired by
Julian Huxley, grandson of Darwin’s bulldog) and eventually landed at the
University of Texas. Although Morgan’s warm recommendation had gotten
him the Rice job, Muller actively promoted a rivalry between his Lone Star
and Morgan’s Empire State groups, and whenever the Texas group made a
significant advance, they preened, trumpeting it as a “home run.”
In one breakthrough, biologist Theophilus Painter discovered the first
chromosomes—inside fruit fly spit glands*—that were large enough to
inspect visually, allowing scientists to study the physical basis of genes. But
as important as Painter’s work was, Muller hit the grand slam in 1927 when
he discovered that pulsing flies with radiation would increase their mutation
rate 150-fold. Not only did this have health implications, but scientists
no longer had to sit around and wait for mutations to pop up. They could
mass-produce them. The discovery gave Muller the scientific standing he
deserved—and knew he deserved.
Inevitably, though, Muller got into spats with Painter and other
colleagues, then outright brawls, and he soured on Texas. Texas soured on
him, too. Local newspapers outed him as a political subversive, and the
precursor to the FBI put him under surveillance. Just for fun, his marriage
crumbled, and one evening in 1932 his wife reported him missing. A posse
of colleagues later found him muddied and disheveled in the woods, soaked
by a night of rain, his head still foggy from the barbiturates he’d swallowed
to kill himself.
Burned out, humiliated, Muller abandoned Texas for Europe. There he
did a bit of a Forrest Gump tour of totalitarian states. He studied genetics in
Germany until Nazi goons vandalized his institute. He fled to the Soviet
Union, where he lectured Joseph Stalin himself on eugenics, the quest to
breed superior human beings through science. Stalin was not impressed,
and Muller scurried to leave. To avoid being branded a “bourgeois
reactionary deserter,” Muller enlisted on the communist side in the Spanish
Civil War, working at a blood bank. His side lost, and fascism descended.
Disillusioned yet again, Muller crawled back to the United States, to
Indiana, in 1940. His interest in eugenics grew; he later helped establish
what became the Repository for Germinal Choice, a “genius sperm bank” in
California. And as the capstone to his career, Muller won his own unshared
Nobel Prize in 1946 for the discovery that radiation causes genetic
mutations. The award committee no doubt wanted to make up for shutting
Muller out in 1933. But he also won because the atomic bomb attacks on
Hiroshima and Nagasaki in 1945—which rained nuclear radiation on Japan
—made his work sickeningly relevant. If the fly boys’ work at Columbia
had proved that genes existed, scientists now had to figure out how genes
worked and how, in the deadly light of the bomb, they too often failed.
3
Them’s the DNA Breaks
How Does Nature Read—and Misread—DNA?
August 6, 1945, started off pretty lucky for perhaps the most unlucky man
of the twentieth century. Tsutomu Yamaguchi had stepped off his bus near
Mitsubishi headquarters in Hiroshima when he realized he’d forgotten his
inkan, the seal that Japanese salarymen dip in red ink and use to stamp
documents. The lapse annoyed him—he faced a long ride back to his
boardinghouse—but nothing could really dampen his mood that day. He’d
finished designing a five-thousand-ton tanker ship for Mitsubishi, and the
company would finally, the next day, send him back home to his wife and
infant son in southwest Japan. The war had disrupted his life, but on August
7 things would return to normal.
As Yamaguchi removed his shoes at his boardinghouse door, the elderly
proprietors ambushed him and asked him to tea. He could hardly refuse
these lonely folk, and the unexpected engagement further delayed him.
Shod again, inkan in hand, he hurried off, caught a streetcar, disembarked
near work, and was walking along near a potato field when he heard a gnat
of an enemy bomber high above. He could just make out a speck
descending from its belly. It was 8:15 a.m.
Many survivors remember the curious delay. Instead of a normal bomb’s
simultaneous flash-bang, this bomb flashed and swelled silently, and got
hotter and hotter silently. Yamaguchi was close enough to the epicenter that
he didn’t wait long. Drilled in air-raid tactics, he dived to the ground,
covered his eyes, and plugged his ears with his thumbs. After a half-second
light bath came a roar, and with it came a shock wave. A moment later
Yamaguchi felt a gale somehow beneath him, raking his stomach. He’d
been tossed upward, and after a short flight he hit the ground, unconscious.
He awoke, perhaps seconds later, perhaps an hour, to a darkened city.
The mushroom cloud had sucked up tons of dirt and ash, and small rings of
fire smoked on wilted potato leaves nearby. His skin felt aflame, too. He’d
rolled up his shirtsleeves after his cup of tea, and his forearms felt severely
sunburned. He rose and staggered through the potato field, stopping every
few feet to rest, shuffling past other burned and bleeding and torn-open
victims. Strangely compelled, he reported to Mitsubishi. He found a pile of
rubble speckled with small fires, and many dead coworkers—he’d been
lucky to be late. He wandered onward; hours slipped by. He drank water
from broken pipes, and at an emergency aid station, he nibbled a biscuit and
vomited. He slept that night beneath an overturned boat on a beach. His left
arm, fully exposed to the great white flash, had turned black.
All the while, beneath his incinerated skin, Yamaguchi’s DNA was
nursing even graver injuries. The nuclear bomb at Hiroshima released
(among other radioactivity) loads of supercharged x-rays called gamma
rays. Like most radioactivity, these rays selectively damage
DNA, punching DNA and nearby water molecules and making electrons fly
out like uppercut teeth. The sudden loss of electrons forms free radicals,
highly reactive atoms that chew on chemical bonds. A chain reaction begins
that cleaves DNA and sometimes snaps chromosomes into pieces.
By the mid-1940s, scientists were starting to grasp why the shattering or
disruption of DNA could wreak such ruin inside cells. First, scientists based
in New York produced strong evidence that genes were made of DNA. This
upended the persistent belief in protein inheritance. But as a second study
revealed, DNA and proteins still shared a special relationship: DNA made
proteins, with each DNA gene storing the recipe for one protein. Making
proteins, in other words, was what genes did—that’s how genes created
traits in the body.
In conjunction, these two ideas explained the harm of radioactivity.
Fracturing DNA disrupts genes; disrupting genes halts protein production;
halting protein production kills cells. Scientists didn’t work this out
instantly—the crucial “one gene/one protein” paper appeared just days
before Hiroshima—but they knew enough to cringe at the thought of
nuclear weapons. When Hermann Muller won his Nobel Prize in 1946, he
prophesied to the New York Times that if atomic bomb survivors “could
foresee the results 1,000 years from now… they might consider themselves
more fortunate if the bomb had killed them.”
Despite Muller’s pessimism, Yamaguchi did want to survive, badly, for
his family. He’d had complicated feelings about the war—opposing it at
first, supporting it once under way, then shading back toward opposition
when Japan began to stumble, because he feared the island being overrun
by enemies who might harm his wife and son. (If so, he’d contemplated
giving them an overdose of sleeping pills to spare them.) In the hours after
Hiroshima, he yearned to get back to them, so when he heard rumors about
trains leaving the city, he sucked up his strength and resolved to find one.
Hiroshima is a collection of islands, and Yamaguchi had to cross a river
to reach the train station. All the bridges had collapsed or burned, so he
steeled himself and began crossing an apocalyptic “bridge of corpses”
clogging the river, crawling across melted legs and faces. But an
uncrossable gap in the bridge forced him to turn back. Farther upstream, he
found a railroad trestle with one steel beam intact, spanning fifty yards. He
clambered up, crossed the iron tightrope, and descended. He pushed
through the mob at the station and slumped into a train seat. Miraculously
the train pulled out soon afterward—he was saved. The train would run all
night, but he was finally headed home, to Nagasaki.
A physicist stationed in Hiroshima might have pointed out that the gamma
rays finished working over Yamaguchi’s DNA in a millionth of a billionth
of a second. To a chemist, the most interesting part—how the free radicals
gnawed through DNA—would have ceased after a millisecond. A cell
biologist would have needed to wait maybe a few hours to study how cells
patch up torn DNA. A doctor could have diagnosed radiation sickness—
headaches, vomiting, internal bleeding, peeling skin, anemic blood—within
a week. Geneticists needed the most patience. The genetic damage to the
survivors didn’t surface for years, even decades. And in an eerie
coincidence, scientists began to piece together how exactly genes function,
and fail, during those very decades—as if providing a protracted running
commentary on DNA devastation.
However definitive in retrospect, experiments on DNA and proteins in
the 1940s convinced only some scientists that DNA was the genetic
medium. Better proof came in 1952, from virologists Alfred Hershey and
Martha Chase. Viruses, they knew, hijacked cells by injecting genetic
material. And because the viruses they studied consisted of only DNA and
proteins, genes had to be one or the other. The duo determined which by
tagging viruses with both radioactive sulfur and radioactive phosphorus,
then turning them loose on cells. Proteins contain sulfur but no phosphorus,
so if genes were proteins, radioactive sulfur should be present in cells
postinfection. But when Hershey and Chase filtered out infected cells, only
radioactive phosphorus remained: only DNA had been injected.
Hershey and Chase published these results in April 1952, and they
ended their paper by urging caution: “Further chemical inferences should
not be drawn from the experiments presented.” Yeah, right. Every scientist
in the world still working on protein heredity dumped his research down the
sink and took up DNA. A furious race began to understand the structure of
DNA, and just one year later, in April 1953, two gawky scientists at
Cambridge University in England, Francis Crick and James Watson (a
former student of Hermann Muller), made the term “double helix”
legendary.
Watson and Crick’s double helix was two loooooooong DNA strands
wrapped around each other in a right-handed spiral. (Point your right thumb
toward the ceiling; DNA twists upward along the counterclockwise curl of
your fingers.) Each strand had its own backbone, and the two backbones
were held together by paired bases that fit together like puzzle pieces—
angular A with T, curvaceous C with G. Watson and Crick’s big insight was
that because of this complementary A-T and C-G base pairing, one strand
of DNA can serve as a template for copying the other. So if one side reads
CCGAGT, the other side must read GGCTCA. It’s such an easy system that
cells can copy hundreds of DNA bases per second.
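For the code-minded, the pairing rule is simple enough to sketch in a few lines of Python. This is only an illustration of the complementarity described above, written under the assumption that both strands are read in the same left-to-right direction; it is not anything from Watson and Crick.

# A minimal sketch of base pairing: given one DNA strand, derive its
# complementary strand using the A-T and C-G rules described above.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the complementary strand, written in the same direction."""
    return "".join(PAIRS[base] for base in strand)

print(complement("CCGAGT"))  # prints GGCTCA, matching the example in the text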
However well hyped, though, the double helix revealed zero about how
DNA genes actually made proteins—which is, after all, the important part.
To understand this process, scientists had to scrutinize DNA’s chemical
cousin, RNA. Though similar to DNA, RNA is single-stranded, and it
substitutes the letter U (uracil) for T in its strands. Biochemists focused on
RNA because its concentration would spike tantalizingly whenever cells
started making proteins. But when they chased the RNA around the cell, it
proved as elusive as an endangered bird; they caught only glimpses before
it vanished. It took years of patient experiments to determine exactly what
was going on here—exactly how cells transform strings of DNA letters into
RNA instructions and RNA instructions into proteins.
Cells first “transcribe” DNA into RNA. This process resembles the
copying of DNA, in that one strand of DNA serves as a template. So the
DNA string CCGAGT would become the RNA string GGCUCA (with U
replacing T). Once constructed, this RNA string leaves the confines of the
nucleus and chugs out to special protein-building apparatuses called
ribosomes. Because it carries the message from one site to another, it’s
called messenger RNA.
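The same lookup idea, sketched again in Python, captures transcription: swap each template letter for its RNA partner, with U standing in wherever T would have paired. Again, this is just an illustration of the rule described in the text, not a model of the real cellular machinery.

# Sketch of transcription: read a DNA template strand into messenger RNA,
# pairing A with U, T with A, C with G, and G with C.
DNA_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(template):
    """Build the messenger RNA string from a DNA template strand."""
    return "".join(DNA_TO_RNA[base] for base in template)

print(transcribe("CCGAGT"))  # prints GGCUCA, as in the example above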
The protein building, or translation, begins at the ribosomes. Once the
messenger RNA arrives, the ribosome grabs it near the end and exposes just
three letters of the string, a triplet. In our example, GGC would be exposed.
At this point a second type of RNA, called transfer RNA, approaches. Each
transfer RNA has two key parts: an amino acid trailing behind it (its cargo
to transfer), and an RNA triplet sticking off its prow like a masthead.
Various transfer RNAs might try to dock with the messenger RNA’s
exposed triplet, but only one with complementary bases will stick. So with
the triplet GGC, only a transfer RNA with CCG will stick. And when it
does stick, the transfer RNA unloads its amino acid cargo.
At this point the transfer RNA leaves, the messenger RNA shifts down
three spots, and the process repeats. A different triplet is exposed, and a
different transfer RNA with a different amino acid docks. This puts amino
acid number two in place. Eventually, after many iterations, this process
creates a string of amino acids—a protein. And because each RNA triplet
leads to one and only one amino acid being added, information should
(should) get translated perfectly from DNA to RNA to protein. This same
process runs every living thing on earth. Inject the same DNA into guinea
pigs, frogs, tulips, slime molds, yeast, U.S. congressmen, whatever, and you
get identical amino acid chains. No wonder that in 1958 Francis Crick
elevated the DNA → RNA → protein process into the “Central Dogma” of
molecular biology.*
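To see the whole dogma in motion, here is a toy ribosome in Python. The sixty-four-letter string encodes the standard genetic code in its conventional U-C-A-G ordering; that table is an outside assumption, since the text never spells it out, and the loop simply walks the message three letters at a time, the way the paragraphs above describe.

from itertools import product

# The standard genetic code, one letter per codon in U-C-A-G order
# ('*' marks a stop codon). This ordering is the conventional textbook
# one, not something taken from the text above.
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(codon): aa
               for codon, aa in zip(product("UCAG", repeat=3), AMINO_ACIDS)}

def translate(mrna):
    """Step through the messenger RNA three letters at a time, like a ribosome."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "*":        # stop codon: release the finished chain
            break
        protein.append(aa)
    return "".join(protein)

print(translate("GGCUCA"))  # prints GS: glycine, then serine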
Still, Crick’s dogma doesn’t explain everything about protein
construction. For one thing, notice that, with four DNA letters, sixty-four
different triplets are possible (4 × 4 × 4 = 64). Yet all those triplets code for
just twenty amino acids in our bodies. Why?
A physicist named George Gamow founded the RNA Tie Club in 1954
in part to figure out this question. A physicist moonlighting in biology
might sound odd—Gamow studied radioactivity and Big Bang theory by
day—but other carpetbagging physicists like Richard Feynman joined the
club as well. Not only did RNA offer an intellectual challenge, but many
physicists felt appalled by their role in creating nuclear bombs. Physics
seemed life destroying, biology life restoring. Overall, twenty-four
physicists and biologists joined the Tie Club’s roster—one for each amino
acid, plus four honorary inductees, one for each DNA base. Watson and Crick
joined (Watson as official club “Optimist,” Crick as “Pessimist”), and each
member sported a four-dollar bespoke green wool tie with an RNA strand
embroidered in gold silk, made by a haberdasher in Los Angeles. Club
stationery read, “Do or die, or don’t try.”
RNA Tie Club members sporting green wool ties with gold silk RNA embroidery. From left,
Francis Crick, Alexander Rich, Leslie E. Orgel, James Watson. (Courtesy of Alexander Rich)
Despite its collective intellectual horsepower, in one way the club ended
up looking a little silly historically. Problems of perverse complexity often
attract physicists, and certain physics-happy club members (including
Crick, a physics Ph.D.) threw themselves into work on DNA and RNA
before anyone realized how simple the DNA → RNA → proteins process
was. They concentrated especially on how DNA stores its instructions, and
for whatever reason they decided early on that DNA must conceal its
instructions in an intricate code—a biological cryptogram. Nothing excites
a boys’ club as much as coded messages, and like ten-year-olds with
Cracker Jack decoder rings, Gamow, Crick, and others set out to break this
cipher. They were soon scribbling away with pencil and paper at their
desks, page after page piling up, their imaginations happily unfettered by
doing experiments. They devised solutions clever enough to make Will
Shortz smile—“diamond codes” and “triangle codes” and “comma codes”
and many forgotten others. These were NSA-ready codes, codes with
reversible messages, codes with error-correction mechanisms built in, codes
that maximized storage density by using overlapping triplets. The RNA
boys especially loved codes that used equivalent anagrams (so CAG = ACG
= GCA, etc.). The approach was popular because when they eliminated all
the combinatorial redundancies, the number of unique triplets was exactly
twenty. In other words, they’d seemingly found a link between twenty and
sixty-four—a reason nature just had to use twenty amino acids.
In truth, this was so much numerology. Hard biochemical facts soon
deflated the code breakers and proved there’s no profound reason DNA
codes for twenty amino acids and not nineteen or twenty-one. Nor was there
any profound reason (as some hoped) that a given triplet called for a given
amino acid. The entire system was accidental, something frozen into cells
billions of years ago and now too ingrained to replace—the QWERTY
keyboard of biology. Moreover, RNA employs no fancy anagrams or error-
correcting algorithms, and it doesn’t strive to maximize storage space,
either. Our code is actually choking on wasteful redundancy: two, four,
even six RNA triplets can represent the same amino acid.* A few
biocryptographers later admitted feeling annoyed when they compared
nature’s code to the best of the Tie Club’s codes. Evolution didn’t seem
nearly as clever.
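The redundancy is easy to tally for yourself. A short Python sketch (assuming, as before, the standard sixty-four-codon table rather than anything the Tie Club devised) counts how many triplets point at each amino acid. The answer comes out as one, two, three, four, or six codons per amino acid, with no tidy pattern anywhere.

from collections import Counter
from itertools import product

# Same standard codon table as before: one letter per codon in U-C-A-G
# order, with '*' marking the three stop codons.
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(codon): aa
               for codon, aa in zip(product("UCAG", repeat=3), AMINO_ACIDS)}

codons_per_amino_acid = Counter(CODON_TABLE.values())
del codons_per_amino_acid["*"]              # ignore the stop codons

print(len(CODON_TABLE), "codons for", len(codons_per_amino_acid), "amino acids")
print(sorted(set(codons_per_amino_acid.values())))   # [1, 2, 3, 4, 6]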
Any disappointment soon faded, however. Solving the DNA/RNA code
finally allowed scientists to integrate two separate realms of genetics, gene-
as-information and gene-as-chemical, marrying Miescher with Mendel for
the first time. And it actually turned out better in some ways that our DNA
code is kludgy. Fancy codes have nice features, but the fancier a code gets,
the more likely it will break down or sputter. And however crude, our code
does one thing well: it keeps life going by minimizing the damage of
mutations. It’s exactly that talent that Tsutomu Yamaguchi and so many
others had to count on in August 1945.
Scientists have known for decades that DNA, a long and active
molecule, can tangle itself into some horrific snarls. What scientists didn’t
grasp was why these snarls don’t choke our cells. In recent years, biologists
have turned to an obscure twig of mathematics called knot theory for
answers. Sailors and seamstresses mastered the practical side of knots many
millennia ago, and religious traditions as distant as Celtic and Buddhist hold
certain knots sacred, but the systematic study of knots began only in the
later nineteenth century, in Carroll/Dodgson’s Victorian Britain. At that
time the polymath William Thomson, Lord Kelvin, proposed that the
elements on the periodic table were really microscopic knots of different
shapes. For precision’s sake, Kelvin defined his atomic knots as closed
loops. (Knots with loose ends, somewhat like shoelaces, are “tangles.”) And
he defined a “unique” knot as a unique pattern of strands crossing over and
under each other. So if you can slide the loops around on one knot and
jimmy its over-under crossings to make it look like another, they’re really
the same knot. Kelvin suggested that the unique shape of each knot gave
rise to the unique properties of each chemical element. Atomic physicists
soon proved this clever theory false, but Kelvin did inspire Scottish
physicist P. G. Tait to make a chart of unique knots, and knot theory
developed independently from there.
Much of early knot theory involved playing cat’s cradle and tallying the
results. Somewhat pedantically, knot theorists defined the most trivial knot
—O, what laymen call a circle—as the “unknot.” They classified other
unique knots by the number of over-under crossings and by July 2003 could
identify 6,217,553,258 distinct knots with up to twenty-two over-under do-
si-dos—roughly one knot per person on earth. Meanwhile other knot
theorists had moved beyond taking simple censuses, and devised ways to
transform one knot into another. This usually involved snipping the string at
an under-over crossing, passing the top strand below, and fusing the snipped
ends—which sometimes made knots more complicated but often simplified
them. Although studied by legitimate mathematicians, knot theory retained
a sense of play throughout. And America’s Cup aspirants aside, no one
dreamed of applications for knot theory until scientists discovered knotted
DNA in 1976.
Knots and tangles form in DNA for a few reasons: its length, its constant
activity, and its confinement. Scientists have effectively run simulations of
DNA inside a busy nucleus by putting a long, thin rope in a box and jostling
it. The rope ends proved quite adept at snaking their way through the rope’s
coils, and surprisingly complicated knots, with up to eleven crossings,
formed in just seconds. (You probably could have guessed this if you’ve
ever dropped earphones into a bag and tried to pull them out later.) Snarls
like this can be lethal because the cellular machinery that copies and
transcribes DNA needs a smooth track to run on; knots derail it.
Unfortunately, the very processes of copying and transcribing DNA can
create deadly knots and tangles. Copying DNA requires separating its two
strands, but two interlaced helix strands cannot simply be pulled apart, any
more than plaits of tightly braided hair can. What’s more, when cells do
start copying DNA, the long, sticky strings dangling behind sometimes get
tangled together. If the strings won’t disentangle after a good tug, cells
commit suicide—it’s that devastating.
Beyond knots per se, DNA can find itself in all sorts of other topological
predicaments. Strands can get welded around each other like interlocking
links in a chain. They can get twisted excruciatingly tight, like someone
wringing out a rag or giving a snake burn on the forearm. They can get
wound up into coils tenser than any rattlesnake. And it’s this last
configuration, the coils, that loops back to Lewis Carroll and his Mock
Turtle. Rather imaginatively, knot theorists refer to such coils as “writhes”
and refer to the act of coiling as “writhing,” as if ropes or DNA were
bunched that way in agony. So could the Mock Turtle, per a few recent
rumors, have slyly been referring to knot theory with his “reeling and
writhing”?
On the one hand, Carroll was working at a prestigious university when
Kelvin and Tait began studying knot theory. He might easily have come
across their work, and this sort of play math would have appealed to him.
Plus, Carroll did write another book called A Tangled Tale in which each
section—called not chapters but “knots”—consisted of a puzzle to solve. So
he certainly incorporated knotty themes into his writing. Still, to be a party
pooper, there’s good reason to think the Mock Turtle knew nothing about
knot theory. Carroll published Alice in 1865, some two years before Kelvin
broached the idea of knots on the periodic table, at least publicly. What’s
more, while the term writhing might well have been used informally in knot
theory before, it first appeared as a technical term in the 1970s. So it seems
likely the Mock Turtle didn’t progress much past ambition, distraction,
uglification, and derision after all.
Nevertheless, even if the punniness of the line postdates Carroll, that
doesn’t mean we can’t enjoy it today. Great literature remains great when it
says new things to new generations, and the loops of a knot quite nicely
parallel the contours and convolutions of Carroll’s plot anyway. What’s
more, he probably would have delighted at how this whimsical branch of
math invaded the real world and became crucial to understanding our
biology.
Different combinations of twists and writhes and knots ensure that DNA
can form an almost unlimited number of snarls, and what saves our DNA
from this torture are mathematically savvy proteins called topoisomerases.
Each of these proteins grasps one or two theorems of knot theory and uses
them to relieve tension in DNA. Some topoisomerases unlink DNA chains.
Other types nick one strand of DNA and rotate it around the other to
eliminate twists and writhes. Still others snip DNA wherever it crosses
itself, pass the upper strand beneath the lower, and re-fuse them, undoing a
knot. Each topoisomerase saves our DNA from a Torquemada-style doom
countless times each year, and we couldn’t survive without these math nerds.
If knot theory sprang from Lord Kelvin’s twisted atoms and then went off
on its own, it has now circled back to its billions-of-years-old molecular
roots in DNA.
Knot theory hasn’t been the only unexpected math to pop up during DNA
research. Scientists have used Venn diagrams to study DNA, and the
Heisenberg uncertainty principle. The architecture of DNA shows traces of
the “golden ratio” of length to width found in classical edifices like the
Parthenon. Geometry enthusiasts have twisted DNA into Möbius strips and
constructed the five Platonic solids. Cell biologists now realize that, to even
fit inside the nucleus, long, stringy DNA must fold and refold itself into a
fractal pattern of loops within loops within loops, a pattern where it
becomes nearly impossible to tell what scale—nano-, micro-, or millimeter
—you’re looking at. Perhaps most unlikely, in 2011 Japanese scientists used
a Tie Club–like code to assign combinations of A, C, G, and T to numbers
and letters, then inserted the code for “E = mc² 1905!” in the DNA of
common soil bacteria.
DNA has especially intimate ties to an oddball piece of math called
Zipf’s law, a phenomenon first discovered by a linguist. George Kingsley
Zipf came from solid German stock—his family had run breweries in
Germany—and he eventually became a professor of German at Harvard
University. Despite his love of language, Zipf didn’t believe in owning
books, and unlike his colleagues, he lived outside Boston on a seven-acre
farm with a vineyard and pigs and chickens, where he chopped down the
Zipf family Christmas tree each December. Temperamentally, though, Zipf
did not make much of a farmer; he slept through most dawns because he
stayed awake most nights studying (from library books) the statistical
properties of languages.
A colleague once described Zipf as someone “who would take roses
apart to count their petals,” and Zipf treated literature no differently. As a
young scholar Zipf tackled James Joyce’s Ulysses, and the main thing he
got out of it was that it contained 29,899 different words, and 260,430
words total. From there Zipf dissected Beowulf, Homer, Chinese texts, and
the oeuvre of the Roman playwright Plautus. By counting the words in each
work, he discovered Zipf’s law. It says that the most common word in a
language appears roughly twice as often as the second most common word,
roughly three times as often as the third most common, a hundred times as
often as the hundredth most common, and so on. In English, “the” accounts
for 7 percent of words, “of” for about half that, and “and” for a third of that, all
the way down to obscurities like grawlix or boustrophedon. These distributions hold just
as true for Sanskrit, Etruscan, or hieroglyphics as for modern Hindi,
Spanish, or Russian. (Zipf also found them in the prices in Sears Roebuck
mail-order catalogs.) Even when people make up languages, something like
Zipf’s law emerges.
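Zipf’s law is simple enough to test on any text you happen to have on hand. The Python sketch below counts the words, ranks them, and compares each observed count against the top count divided by its rank, which is all the law really claims; the file name at the bottom is just a placeholder, not a pointer to Zipf’s own data.

import re
from collections import Counter

def zipf_check(text, top=10):
    """Rank the most common words and compare them to Zipf's prediction."""
    words = re.findall(r"[a-z']+", text.lower())
    ranked = Counter(words).most_common(top)
    top_count = ranked[0][1]
    for rank, (word, count) in enumerate(ranked, start=1):
        predicted = top_count / rank          # Zipf: counts fall off as 1/rank
        print(f"{rank:>2}. {word:<12} observed {count:>7}  predicted {predicted:9.1f}")

# Example (hypothetical local file): zipf_check(open("ulysses.txt").read())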
After Zipf died in 1950, scholars found evidence of his law in an
astonishing variety of other places—in music (more on this later), city
population ranks, income distributions, mass extinctions, earthquake
magnitudes, the ratios of different colors in paintings and cartoons, and
more. Every time, the biggest or most common item in each class was twice
as big or common as the second item, three times as big or common as the
third, and so on. Probably inevitably, the theory’s sudden popularity led to a
backlash, especially among linguists, who questioned what Zipf’s law even
meant, if anything.* Still, many scientists defend Zipf’s law because it feels
correct—the frequency of words doesn’t seem random—and, empirically, it
does describe languages in uncannily accurate ways. Even the “language”
of DNA.
Of course, it’s not apparent at first that DNA is Zipfian, especially to
speakers of Western languages. Unlike most languages DNA doesn’t have
obvious spaces to distinguish each word. It’s more like those ancient texts
with no breaks or pauses or punctuation of any kind, just relentless strings
of letters. You might think that the A-C-G-T triplets that code for amino
acids could function as “words,” but their individual frequencies don’t look
Zipfian. To find Zipf, scientists had to look at groups of triplets instead, and
a few turned to an unlikely source for help: Chinese search engines. The
Chinese language creates compound words by linking adjacent symbols. So
if a Chinese text reads ABCD, search engines might examine a sliding
“window” to find meaningful chunks, first AB, BC, and CD, then ABC and
BCD. Using a sliding window proved a good strategy for finding
meaningful chunks in DNA, too. It turns out that, by some measures, DNA
looks most Zipfian, most like a language, in groups of around twelve bases.
Overall, then, the most meaningful unit for DNA might not be a triplet, but
four triplets working together—a twelve-base motif.
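The sliding-window trick itself is easy to picture in code. The sketch below (an illustration only, not the search-engine software the scientists borrowed) slides a fixed-width window along a DNA string and tallies every overlapping chunk, just as the Chinese-text windows tallied AB, BC, and CD; the sample sequence is made up for the example.

from collections import Counter

def window_counts(dna, width=12):
    """Count every overlapping chunk of `width` bases in a DNA string."""
    return Counter(dna[i:i + width] for i in range(len(dna) - width + 1))

sample = "CCGAGTCCGAGTCCGAGTACGTTAGC"     # made-up sequence for illustration
for chunk, n in window_counts(sample, width=6).most_common(3):
    print(chunk, n)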
The expression of DNA, the translation into proteins, also obeys Zipf’s
law. Like common words, a few genes in every cell get expressed time and
time again, while most genes hardly ever come up in conversation. Over the
ages cells have learned to rely on these common proteins more and more,
and the most common one generally appears twice and thrice and quatrice
as often as the next-most-common proteins. To be sure, many scientists
harrumph that these Zipfian figures don’t mean anything; but others say it’s
time to appreciate that DNA isn’t just analogous to but really functions like
a language.
And not just a language: DNA has Zipfian musical properties, too.
Given the key of a piece of music, like C major, certain notes appear more
often than others. In fact Zipf once investigated the prevalence of notes in
Mozart, Chopin, Irving Berlin, and Jerome Kern—and lo and behold, he
found a Zipfian distribution. Later researchers confirmed this finding in
other genres, from Rossini to the Ramones, and discovered Zipfian
distributions in the timbre, volume, and duration of notes as well.
So if DNA shows Zipfian tendencies, too, is DNA arranged into a
musical score of sorts? Musicians have in fact translated the A-C-G-T
sequence of serotonin, a brain chemical, into little ditties by assigning the
four DNA letters to the notes A, C, G, and, well, E. Other musicians have
composed DNA melodies by assigning harmonious notes to the amino acids
that popped up most often, and found that this produced more complex and
euphonious sounds. This second method reinforces the idea that, much like
music, DNA is only partly a strict sequence of “notes.” It’s also defined by
motifs and themes, by how often certain sequences occur and how well they
work together. One biologist has even argued that music is a natural
medium for studying how genetic bits combine, since humans have a keen
ear for how phrases “chunk together” in music.
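The first of those musical mappings can be sketched almost literally; the few lines below only illustrate the letter-to-note assignment mentioned above, not the musicians’ actual scores.

# Assign the DNA letters A, C, G, T to the notes A, C, G, and E,
# then read a sequence off as a little melody.
NOTE_FOR_BASE = {"A": "A", "C": "C", "G": "G", "T": "E"}

def dna_to_melody(dna):
    """Turn a DNA string into a list of note names."""
    return [NOTE_FOR_BASE[base] for base in dna]

print(dna_to_melody("CCGAGT"))  # ['C', 'C', 'G', 'A', 'G', 'E']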
Something even more interesting happened when two scientists, instead
of turning DNA into music, inverted the process and translated the notes
from a Chopin nocturne into DNA. They discovered a sequence “strikingly
similar” to part of the gene for RNA polymerase. This polymerase, a
protein universal throughout life, is what builds RNA from DNA. Which
means, if you look closer, that the nocturne actually encodes an entire life
cycle. Consider: Polymerase uses DNA to build RNA. RNA in turn builds
complicated proteins. These proteins in turn build cells, which in turn build
people, like Chopin. He in turn composed harmonious music—which
completed the cycle by encoding the DNA to build polymerase.
(Musicology recapitulates ontology.)
So was this discovery a fluke? Not entirely. Some scientists argue that
when genes first appeared in DNA, they didn’t arise randomly, along any
old stretch of chromosome. They began instead as repetitive phrases, a
dozen or two dozen DNA bases duplicated over and over. These stretches
function like a basic musical theme that a composer tweaks and tunes (i.e.,
mutates) to create pleasing variations on the original. In this sense, then,
genes had melody built into them from the start.
Humans have long wanted to link music to deeper, grander themes in
nature. Most notably astronomers from ancient Greece right through to
Kepler believed that, as the planets ran their course through the heavens,
they created an achingly beautiful musica universalis, a hymn in praise of
Creation. It turns out that universal music does exist, only it’s closer than
we ever imagined, in our DNA.
Genetics and linguistics have deeper ties beyond Zipf’s law. Mendel
himself dabbled in linguistics in his older, fatter days, including an attempt
to derive a precise mathematical law for how the suffixes of German
surnames (like -mann and -bauer) hybridized with other names and
reproduced themselves each generation. (Sounds familiar.) And heck,
nowadays, geneticists couldn’t even talk about their work without all the
terms they’ve lifted from the study of languages. DNA has synonyms,
translations, punctuation, prefixes, and suffixes. Missense mutations
(substituting amino acids) and nonsense mutations (introducing premature
stop codons) are basically typos, while frameshift mutations (screwing up how
triplets get read) are old-fashioned typesetting mistakes. Genetics even has
grammar and syntax—rules for combining amino acid “words” and clauses
into protein “sentences” that cells can read.
More specifically, genetic grammar and syntax outline the rules for how
a cell should fold a chain of amino acids into a working protein. (Proteins
must be folded into compact shapes before they’ll work, and they generally
don’t work if their shape is wrong.) Proper syntactical and grammatical
folding is a crucial part of communicating in the DNA language. However,
communication does require more than proper syntax and grammar; a
protein sentence has to mean something to a cell, too. And, strangely,
protein sentences can be syntactically and grammatically perfect, yet have
no biological meaning. To understand what on earth that means, it helps to
look at something linguist Noam Chomsky once said. He was trying to
demonstrate the independence of syntax and meaning in human speech. His
example was “Colorless green ideas sleep furiously.” Whatever you think of
Chomsky, that sentence has to be one of the most remarkable things ever
uttered. It makes no literal sense. Yet because it contains real words, and its
syntax and grammar are fine, we can sort of follow along. It’s not quite
devoid of meaning.
In the same way, DNA mutations can introduce random amino acid
words or phrases, and cells will automatically fold the resulting chain
together in perfectly syntactical ways based on physics and chemistry. But
any wording changes can change the sentence’s whole shape and meaning,
and whether the result still makes sense depends. Sometimes the new
protein sentence contains a mere tweak, minor poetic license that the cell
can, with work, parse. Sometimes a change (like a frameshift mutation)
garbles a sentence until it reads like grawlix—the #$%^&@! swear words
of comics characters. The cell suffers and dies. Every so often, though, the
cell reads a protein sentence littered with missense or nonsense… and yet,
upon reflection, it somehow does make sense. Something wonderful like
Lewis Carroll’s “mimsy borogoves” or Edward Lear’s “runcible spoon”
emerges, wholly unexpectedly. It’s a rare beneficial mutation, and at these
lucky moments, evolution creeps forward.*
Because of the parallels between DNA and language, scientists can even
analyze literary texts and genomic “texts” with the same tools. These tools
seem especially promising for analyzing disputed texts, whose authorship
or biological origin remains doubtful. With literary disputes, experts
traditionally compared a piece to others of known provenance and judged
whether its tone and style seemed similar. Scholars also sometimes
cataloged and counted what words a text used. Neither approach is wholly
satisfactory—the first too subjective, the second too sterile. With DNA,
comparing disputed genomes often involves matching up a few dozen key
genes and searching for small differences. But this technique fails with
wildly different species because the differences are so extensive, and it’s not
clear which differences are important. By focusing exclusively on genes,
this technique also ignores the swaths of regulatory DNA that fall outside
genes.
To circumvent these problems, scientists at the University of California
at Berkeley invented software in 2009 that again slides “windows” along a
string of letters in a text and searches for similarities and patterns. As a test,
the scientists analyzed the genomes of mammals and the texts of dozens of
books like Peter Pan, the Book of Mormon, and Plato’s Republic. They
discovered that the same software could, in one trial run, classify DNA into
different genera of mammals, and could also, in another trial run, classify
books into different genres of literature with perfect accuracy. In turning to
disputed texts, the scientists delved into the contentious world of
Shakespeare scholarship, and their software concluded that the Bard did
write The Two Noble Kinsmen—a play lingering on the margins of
acceptance—but didn’t write Pericles, another doubtful work. The Berkeley
team then studied the genomes of viruses and archaebacteria, the oldest and
(to us) most alien life-forms. Their analysis revealed new links between
these and other microbes and offered new suggestions for classifying them.
Because of the sheer amount of data involved, the analysis of genomes can
get intensive; the virus-archaebacteria scan monopolized 320 computers for
a year. But genome analysis allows scientists to move beyond simple point-
by-point comparisons of a few genes and read the full natural history of a
species.
S-A-T-O-R
A-R-E-P-O
T-E-N-E-T
O-P-E-R-A
R-O-T-A-S
Sister Miriam Michael Stimson, a DNA pioneer, wore her enormous hooded habit even in the
laboratory. (Archives: Siena Heights University)
This demotion hurt McClintock badly. Decades after the talk, she still
smoldered about colleagues supposedly sniggering at her, or firing off
accusations—You dare question the stationary-gene dogma? There’s little
evidence people actually laughed or boiled in rage; again, most accepted
jumping genes, just not her theory of control. But McClintock warped the
memory into a conspiracy against her. Jumping genes and genetic control
had become so interwoven in her heart and mind that attacking one meant
attacking both, and attacking her. Crushed, and lacking a brawling
disposition, she withdrew from science.*
So began the hermit phase. For three decades, McClintock continued
studying maize, often dozing on a cot in her office at night. But she stopped
attending conferences and cut off communication with fellow scientists.
After finishing experiments, she usually typed up her results as if to submit
them to a journal, then filed the paper away without sending it. If her peers
dismissed her, she would hurt them back by ignoring them. And in her
(now-depressive) solitude, her mystic side emerged fully. She indulged in
speculation about ESP, UFOs, and poltergeists, and studied methods of
psychically controlling her reflexes. (When visiting the dentist, she told him
not to bother with Novocain, as she could lock out pain with her mind.) All
the while, she grew maize and squashed slides and wrote up papers that
went as unread as Emily Dickinson’s poems in her day. She was her own
sad scientific community.
Meanwhile something funny was afoot in the larger scientific
community, a change almost too subtle to notice at first. The molecular
biologists whom McClintock was ignoring began spotting mobile DNA in
microbes in the late 1960s. And far from this DNA being a mere novelty,
the jumping genes dictated things like whether microbes developed drug
resistance. Scientists also found evidence that infectious viruses could (just
like mobile DNA) insert genetic material into chromosomes and lurk there
permanently. Both were huge medical concerns. Mobile DNA has become
vital, too, in tracking evolutionary relationships among species. That’s
because if you compare a few species, and just two of them have the same
transposon burrowed into their DNA at the same point among billions of
bases, then those two species almost certainly shared an ancestor recently.
More to the point, they shared that ancestor more recently than either shared
an ancestor with a third species that lacks the transposon; far too many
bases exist for that insertion to have happened twice independently. What
look like DNA marginalia, then, actually reveal life’s hidden recorded
history, and for this and other reasons, McClintock’s work suddenly seemed
less cute, more profound. As a result her reputation stopped sinking, then
rose, year by year. Around 1980 something tipped, and a popular biography
of the now-wrinkled McClintock, A Feeling for the Organism, appeared in
July 1983, making her a minor celebrity. The momentum bucked out of
control after that, and unthinkably, just as her own work had done for
Morgan a half century before, the adulation propelled McClintock to a
Nobel Prize that October.
The hermit had been fairy-tale transformed. She became a latter-day
Gregor Mendel, a genius discarded and forgotten—only McClintock lived
long enough to see her vindication. Her life soon became a rallying point
for feminists and fodder for didactic children’s books on never
compromising your dreams. That McClintock hated the publicity from the
Nobel—it interrupted her research and set reporters prowling about her
door—mattered little to fans. And even scientifically, winning the Nobel
panged her. The committee had honored her “discovery of mobile genetic
elements,” which was true enough. But in 1951 McClintock had imagined
she’d unlocked how genes control other genes and control development in
multicellular creatures. Instead scientists honored her, essentially, for her
microscope skills—for spotting minnows of DNA darting around. For these
reasons, McClintock grew increasingly weary of life post-Nobel, even a
little morbid: in her late eighties, she started telling friends she’d surely die
at age ninety. Months after her ninetieth birthday party in June 1992, held at
James Watson’s home, she did indeed pass, cementing her reputation as
someone who envisioned things others couldn’t.
In the end, McClintock’s life’s work remained unfulfilled. She did
discover jumping genes and vastly expanded our understanding of corn
genetics. (One jumping gene, hopscotch, seems in fact to have transformed
the scrawny wild ancestor of corn into a lush, domesticatable crop in the
first place.) More generally McClintock helped establish that chromosomes
regulate themselves internally and that on/off patterns of DNA determine a
cell’s fate. Both ideas remain crucial tenets of genetics. But despite her
fondest hopes, jumping genes don’t control development or turn genes on
and off to the extent she imagined; cells do these things in other ways. In
fact it took other scientists many years to explain how DNA accomplishes
those tasks—to explain how powerful but isolated cells pulled themselves
together long ago and started building truly complex creatures, even
creatures as complex as Miriam Michael Stimson, Lynn Margulis, and
Barbara McClintock.
6
The Survivors, the Livers
What’s Our Most Ancient and Important DNA?
Even the most scientific naturalist during the age of exploration, Carl
von Linné, d.b.a. Linnaeus, speculated on monsters. Linnaeus’s Systema
Naturae set forth the binomial system for naming species that we still use
today, inspiring the likes of Homo sapiens and Tyrannosaurus rex. The
book also defined a class of animals called “paradoxa,” which included
dragons, phoenixes, satyrs, unicorns, geese that sprouted from trees,
Heracles’s nemesis the Hydra, and remarkable tadpoles that not only got
smaller as they aged, but metamorphosed into fish. We might laugh today,
but in the last case at least, the joke’s on us: shrinking tadpoles do exist,
although Pseudis paradoxa shrink into regular old frogs, not fish. What’s
more, modern genetic research reveals a legitimate basis for some of
Linnaeus’s and Münster’s legends.
A few key genes in every embryo play cartographer for other genes and
map out our bodies with GPS precision, front to back, left to right, and top
to bottom. Insects, fish, mammals, reptiles, and all other animals share
many of these genes, especially a subset called hox genes. The ubiquity of
hox in the animal kingdom explains why animals worldwide have the same
basic body plan: a cylindrical trunk with a head at one end, an anus at the
other, and various appendages sprouting in between. (The Blemmyae, with
faces low enough to lick their navels, would be unlikely for this reason
alone.)
Unusually for genes, hox remain tightly linked after hundreds of
millions of years of evolution, almost always appearing together along
continuous stretches of DNA. (Invertebrates have one stretch of around ten
genes, vertebrates four stretches of basically the same ones.) Even more
unusually, each hox’s position along that stretch corresponds closely to its
assignment in the body. The first hox designs the top of the head. The next
hox designs something slightly lower down. The third hox something
slightly lower, and so on, until the final hox designs our nether regions.
Why nature requires this top-to-bottom spatial mapping in hox genes isn’t
known, but again, all animals exhibit this trait.
Scientists refer to DNA that appears in the same basic form in many,
many species as highly “conserved” because creatures remain very careful,
very conservative, about changing it. (Some hox and hox-like genes are so
conserved that scientists can rip them out of chickens, mice, and flies and
swap them between species, and the genes more or less function the same.)
As you might suspect, being highly conserved correlates strongly with the
importance of the DNA in question. And it’s easy to see, literally see, why
creatures don’t mess with their highly conserved hox genes all that often.
Delete one of these genes, and animals can develop multiple jaws. Mutate
others and wings disappear, or extra sets of eyes appear in awful places,
bulging out on the legs or staring from the ends of antennae. Still other
mutations cause genitals or legs to sprout on the head, or cause jaws or
antennae to grow in the crotchal region. And these are the lucky mutants;
most creatures that gamble with hox and related genes don’t live to speak of
it.
Genes like hox don’t build animals as much as they instruct other genes
on how to build animals: each one regulates dozens of underlings. However
important, though, these genes can’t control every aspect of development.
In particular, they depend on nutrients like vitamin A.
Despite the singular name, vitamin A is actually a few related molecules
that we non-biochemists lump together for convenience. These various
vitamins A are among the most widespread nutrients in nature. Plants store
vitamin A as beta carotene, which gives carrots their distinct color. Animals
store vitamin A in our livers, and our bodies convert freely between various
forms, which we use in a byzantine array of biochemical processes—to
keep eyesight sharp and sperm potent, to boost mitochondria production
and euthanize old cells. For these reasons, a lack of vitamin A in the diet is
a major health concern worldwide. One of the first genetically enhanced
foods created by scientists was so-called golden rice, a cheap source of
vitamin A with grains tinted by beta carotene.
Vitamin A interacts with hox and related genes to build the fetal brain,
lungs, eyes, heart, limbs, and just about every other organ. In fact, vitamin
A is so important that cells build special drawbridges in their membranes to
let vitamin A, and only vitamin A, through. Once inside a cell, vitamin A
binds to special helper molecules, and the resulting complex binds directly
to the double helix of DNA, turning hox and other genes on. While most
signaling chemicals get repulsed at the cell wall and have to shout their
instructions through small keyholes, vitamin A gets special treatment, and
the hox build very little in a baby without a nod from this master nutrient.
But be warned: before you dash to the health store for megadoses of
vitamin A for a special pregnant someone, you should know that too much
vitamin A can cause substantial birth defects. In fact the body tightly caps
its vitamin A concentration, and even has a few genes (like the awkwardly
initialed tgif gene) that exist largely to degrade vitamin A if its
concentration creeps too high. That’s partly because high levels of vitamin
A in embryos can interfere with the vital, but even more ridiculously
named, sonic hedgehog gene.
(Yes, it’s named for the video game character. A graduate student—one
of those wacky fruit fly guys—discovered it in the early 1990s and
classified it within a group of genes that, when mutated, cause Drosophila
to grow spiky quills all over, like hedgehogs. Scientists had already
discovered multiple “hedgehog” genes and named them after real hedgehog
species, like the Indian hedgehog, moonrat hedgehog, and desert hedgehog.
Robert Riddle thought naming his gene after the speedy Sega hero would be
funny. By happenstance, sonic proved one of the most important genes in
the animal repertoire, and the frivolity has not worn well. Flaws can lead to
lethal cancers or heartbreaking birth defects, and scientists cringe when they
have to explain to some poor family that sonic hedgehog will kill a loved
one. As one biologist told the New York Times about such names, “It’s a
cute name when you have stupid flies and you call [a gene] turnip. When
it’s linked to development in humans, it’s not so cute anymore.”)
Just as hox genes control our body’s top-to-bottom pattern, shh—as
scientists who detest the name refer to sonic hedgehog—helps control the
body’s left-right symmetry. Shh does so by setting up a GPS gradient. When
we’re still a ball of protoplasm, the incipient spinal column that forms our
midline starts to secrete the protein sonic produces. Nearby cells absorb lots
of it, faraway cells much less. Based on how much protein they absorb,
cells “know” exactly where they are in relation to the midline, and therefore
know what type of cell they should become.
But if there’s too much vitamin A around (or if shh fails for a different
reason), the gradient doesn’t get set up properly. Cells can’t figure out their
longitude in relation to the midline, and organs start to grow in abnormal,
even monstrous ways. In severe cases, the brain doesn’t divide into right
and left halves; it ends up as one big, undifferentiated blob. The same can
happen with the lower limbs: if exposed to too much vitamin A, they fuse
together, leading to sirenomelia, or mermaid syndrome. Both fused brains
and fused legs are fatal (in the latter case because holes for the anus and
bladder don’t develop). But the most distressing violations of symmetry
appear on the face. Chickens with too much sonic have faces with extra-
wide midlines, sometimes so wide that two beaks form. (Other animals get
two noses.) Too little sonic can produce noses with a single giant nostril, or
prevent noses from growing at all. In some severe cases, noses appear in the
wrong spot, like on the forehead. Perhaps most distressing of all, with too
little sonic, the two eyes don’t start growing where they should, an inch or
so to the left and right of the facial midline. Both eyes end up on the
midline, producing the very Cyclops* that cartographers seemed so silly for
including on their maps.
Scenes from Barentsz’s doomed voyage over the frosty top of Russia. Clockwise from top left:
encounters with polar bears; the ship crushed in ice; the hut where the crew endured a grim
winter in the 1590s. (Gerrit de Veer, The Three Voyages of William Barents to the Arctic
Regions)
Only in the mid–twentieth century did scientists determine why polar bear
livers contain such astronomical amounts of vitamin A. Polar bears survive
mostly by preying on ringed and bearded seals, and these seals raise their
young in about the most demanding environment possible, with the 35°F
Arctic seas wicking away their body heat relentlessly. Vitamin A enables
the seals to survive in this cold: it works like a growth hormone, stimulating
cells and allowing seal pups to add thick layers of skin and blubber, and do
so quickly. To this end, seal mothers store up whole crates of vitamin A in
their livers and draw on this store the whole time they’re nursing, to make
sure pups ingest enough.
Polar bears also need lots of vitamin A to pack on blubber. But even
more important, their bodies tolerate toxic levels of vitamin A because they
couldn’t eat seals—about the only food source in the Arctic—otherwise.
One law of ecology says that poisons accumulate as you move up a food
chain, and carnivores at the top ingest the most concentrated doses. This is
true of any toxin or any nutrient that becomes toxic at high levels. But
unlike many other nutrients, vitamin A doesn’t dissolve in water, so when a
king predator overdoses, it can’t expel the excess through urine. Polar bears
either have to deal with all the vitamin A they swallow, or starve. Polar
bears adapted by turning their livers into high-tech biohazard containment
facilities, to filter vitamin A and keep it away from the rest of the body.
(And even with those livers, polar bears have to be careful about intake.
They can dine on animals lower on the food chain, with lesser
concentrations. But some biologists have wryly noted that if polar bears
cannibalized their own livers, they would almost certainly croak.)
Polar bears began evolving their impressive vitamin A–fighting
capabilities around 150,000 years ago, when small groups of Alaskan
brown bears split off and migrated north to the ice caps. But scientists
always suspected that the important genetic changes that made polar bears
polar bears happened almost right away, instead of gradually over that
span. Their reasoning was this. After any two groups of animals split
geographically, they begin to acquire different DNA mutations. As the
mutations accumulate, the groups develop into different species with
different bodies, metabolisms, and behaviors. But not all DNA changes at
the same rate in a population. Highly conserved genes like hox change
grudgingly slowly, at geological paces. Changes in other genes can spread
quickly, especially if creatures face environmental stress. For instance,
when those brown bears wandered onto the bleak ice sheets atop the Arctic
Circle, any beneficial mutations to fight the cold—say, the ability to digest
vitamin A–rich seals—would have given some of those bears a substantial
boost, and allowed them to have more cubs and take better care of them.
And the greater the environmental pressure, the faster such genes can and
will spread through a population.
Another way to put this is that DNA clocks—which look at the number
and rate of mutations in DNA—tick at different speeds in different parts of
the genome. So scientists have to be careful when comparing two species’
DNA and dating how long ago they split. If scientists don’t take conserved
genes or accelerated changes into account, their estimates can be wildly off.
With these caveats in mind, scientists determined in 2010 that polar bears
had armed themselves with enough cold-weather defenses to become a
separate species in as few as twenty thousand years after wandering away
from ancestral brown bears—an evolutionary wink.
As we’ll see later, humans are Johnny-come-latelies to the meat-eating
scene, so it’s not surprising that we lack the polar bear’s defenses—or that
when we cheat up the food chain and eat polar bear livers, we suffer.
Different people have different genetic susceptibility to vitamin A
poisoning (called hypervitaminosis A), but as little as one ounce of polar
bear liver can kill an adult human, and in a ghastly way.
Our bodies metabolize vitamin A to produce retinol, which special
enzymes should then break down further. (These enzymes also break down
the most common poison we humans ingest, the alcohol in beers, rums,
wines, whiskeys, and other booze.) But polar bear livers overwhelm our
poor enzymes with vitamin A, and before they can break it all down, free
retinol begins circulating in the blood. That’s bad. Cells are surrounded by
oil-based membranes, and retinol acts as a detergent and breaks the
membranes down. The guts of cells start leaking out incontinently, and
inside the skull, this translates to a buildup of fluids that causes headaches,
fogginess, and irritability. Retinol damages other tissues as well (it can even
crimp straight hair, turning it kinky), but again the skin really suffers.
Vitamin A already flips tons of genetic switches in skin cells, causing some
to commit suicide, pushing others to the surface prematurely. The burn of
more vitamin A kills whole additional swaths, and pretty soon the skin
starts coming off in sheets.
We hominids have been learning (and relearning) this same hard lesson
about eating carnivore livers for an awfully long time. In the 1980s,
anthropologists discovered a 1.6-million-year-old Homo erectus skeleton
with lesions on his bones characteristic of vitamin A poisoning, from eating
that era’s top carnivores. After polar bears arose—and after untold centuries
of casualties among their people—Eskimos, Siberians, and other northern
tribes (not to mention scavenging birds) learned to shun polar bear livers,
but European explorers had no such wisdom when they stormed into the
Arctic. Many in fact regarded the prohibition on eating livers as “vulgar
prejudice,” a superstition about on par with worshipping trees. As late as
1900 the English explorer Reginald Koettlitz relished the prospect of
digging into polar bear liver, but he quickly discovered that there’s
sometimes wisdom in taboos. Over a few hours, Koettlitz felt pressure
building up inside his skull, until his whole head felt crushed from the
inside. Vertigo overtook him, and he vomited repeatedly. Most cruelly, he
couldn’t sleep it off; lying down made things worse. Another explorer
around that time, Dr. Jens Lindhard, fed polar bear liver to nineteen men
under his care as an experiment. All became wretchedly ill, so much so that
some showed signs of insanity. Meanwhile other starved explorers learned
that not just polar bears and seals have toxically elevated levels of vitamin
A: the livers of reindeer, sharks, swordfish, foxes, and Arctic huskies* can
make excellent last meals as well.
For their part, after being blasted by polar bear liver in 1597, Barentsz’s
men got wise. As the diarist de Veer had it, after their meal, “there hung a
pot still ouer the fire with some of the liuer in it. But the master tooke it and
cast it out of the dore, for we had enough of the sawce thereof.”
The men soon recovered their strength, but their cabin, clothes, and
morale continued to disintegrate in the cold. At last, in June, the ice started
melting, and they salvaged rowboats from their ship and headed to sea.
They could only dart between small icebergs at first, and they took heavy
flak from pursuing polar bears. But on June 20, 1597, the polar ice broke,
making real sailing possible. Alas, June 20 also marked the last day on earth
of the long-ailing Willem Barentsz, who died at age fifty. The loss of their
navigator sapped the courage of the remaining twelve crew members, who
still had to cross hundreds of miles of ocean in open boats. But they
managed to reach northern Russia, where the locals plied them with food.
A month later they washed ashore on the coast of Lapland, where they ran
into, of all people, Captain Jan Corneliszoon Rijp, commander of the very
ship that Barentsz had ditched the winter before. Overjoyed—he’d assumed
them dead—Rijp carried the men home to the Netherlands* in his ship,
where they arrived in threadbare clothes and stunning white fox fur hats.
Their expected heroes’ welcome never materialized. That same day
another Dutch convoy returned home as well, laden with spices and
delicacies from a voyage to Cathay around the southern horn of Africa.
Their journey proved that merchant ships could make that long voyage, and
while tales of starvation and survival were thrilling, tales of treasure truly
stirred the Dutch people’s hearts. The Dutch crown granted the Dutch East
India Company a monopoly on traveling to Asia via Africa, and an epic
trade route was born, a mariner’s Silk Road. Barentsz and crew were
forgotten.
Perversely, the monopoly on the African route to Asia meant that other
maritime entrepreneurs could seek their fortunes only via the northeast
passage, so forays into the 500,000-square-mile Barents Sea continued.
Finally, salivating over a possible double monopoly, the Dutch East India
Company sent its own crew—captained by an Englishman, Henry Hudson
—north in 1609. Once again, things got fouled up. Hudson and his ship, the
Half Moon, crawled north past the tip of Norway as scheduled. But his crew
of forty, half of them Dutch, had no doubt heard tales of starvation,
exposure, and, God help them, skin peeling off people’s bodies head to toe
—and mutinied. They forced Hudson to turn west.
If that’s what they wanted, Hudson gave it to them, sailing all the way
west to North America. He skimmed Nova Scotia and put in at a few places
lower down on the Atlantic coast, including a trip up a then-unnamed river,
past a skinny swamp island. While disappointed that Hudson had not
circled Russia, the Dutch made lemonade by founding a trading colony
called New Amsterdam on that isle, Manhattan, within a few years. It’s
sometimes said of human beings that our passion for exploration is in our
genes. With the founding of New York, this was almost literally the case.
7
The Machiavelli Microbe
How Much Human DNA Is Actually Human?
Sometimes you acquire wisdom the hard way. “You can visualize a hundred
cats,” Jack Wright once said. “Beyond that, you can’t. Two hundred, five
hundred, it all looks the same.” This wasn’t just speculation. Jack learned
this because he and his wife, Donna, once owned a Guinness-certified
world record 689 housecats.
It started with Midnight. Wright, a housepainter in Ontario, fell in love
with a waitress named Donna Belwa around 1970, and they moved in
together with Donna’s black, long-haired cat. Midnight committed a
peccadillo in the yard one night and became pregnant, and the Wrights
didn’t have the heart to break up her litter. Having more cats around
actually brightened the home, and soon after, they felt moved to adopt
strays from the local shelter to save them from being put down. Their house
became known locally as Cat Crossing, and people began dropping off
more strays, two here, five there. When the National Enquirer held a
contest in the 1980s to determine who had the most cats in one house, the
Wrights won with 145. They soon appeared on The Phil Donahue Show,
and after that, the “donations” really got bad. One person tied kittens to the
Wrights’ picnic table and drove off; another shipped a cat via courier on a
plane—and made the Wrights pay. But the Wrights turned no feline away,
even as their brood swelled toward seven hundred.
Bills reportedly ran to $111,000 per year, including individually
wrapped Christmas toys. Donna (who began working at home, managing
Jack’s painting career) rose daily at 5:30 a.m. and spent the next fifteen
hours washing soiled cat beds, emptying litter boxes, forcing pills down
cats’ throats, and adding ice to kitty bowls (the friction of so many cats’
tongues made the water too warm to drink otherwise). But above all she
spent her days feeding, feeding, feeding. The Wrights popped open 180 tins
of cat food each day and bought three extra freezers to fill with pork, ham,
and sirloin for the more finicky felines. They eventually took out a second
mortgage, and to keep their heavily leveraged bungalow clean, they tacked
linoleum to the walls.
Jack and Donna eventually put their four feet down and by the late
1990s had reduced the population of Cat Crossing to just 359. Almost
immediately it crept back up, because they couldn’t bear to go lower. In
fact, if you read between the lines here, the Wrights seemed almost addicted
to having cats around—addiction being that curious state of getting acute
pleasure and acute anxiety from the same thing. Clearly they loved the cats.
Jack defended his cat “family” to the newspapers and gave each cat an
individual name,* even the few that refused to leave his closet. At the same
time, Donna couldn’t hide the torment of being enslaved to cats. “I’ll tell
you what’s hard to eat in here,” she once complained, “Kentucky Fried
Chicken. Every time I eat it, I have to walk around the house with the plate
under my chin.” (Partly to keep cats away, partly to deflect cat hair from her
sticky drumsticks.) More poignantly, Donna once admitted, “I get a little
depressed sometimes. Sometimes I just say, ‘Jack, give me a few bucks,’
and I go out and have a beer or two. I sit there for a few hours and it’s great.
It’s peaceful—no cats anywhere.” Despite these moments of clarity, and
despite their mounting distress,* she and Jack couldn’t embrace the obvious
solution: ditch the damn cats.
To give the Wrights credit, Donna’s constant cleaning made their home
seem rather livable, especially compared to the prehistoric filth of some
hoarders’ homes. Animal welfare inspectors not infrequently find decaying
cat corpses on the worst premises, even inside a home’s walls, where cats
presumably burrow to escape. Nor is it uncommon for the floors and walls
to rot and suffer structural damage from saturation with cat urine. Most
striking of all, many hoarders deny that things are out of control—a classic
sign of addiction.
Scientists have only recently begun laying out the chemical and genetic
basis of addiction, but growing evidence suggests that cat hoarders cling to
their herds at least partly because they’re hooked on a parasite, Toxoplasma
gondii. Toxo is a one-celled protozoan, kin to algae and amoebas; it has
eight thousand genes. And though originally a feline pathogen, Toxo has
diversified its portfolio and can now infect monkeys, bats, whales,
elephants, aardvarks, anteaters, sloths, armadillos, and marsupials, as well
as chickens.
Wild bats or aardvarks or whatever ingest Toxo through infected prey or
feces, and domesticated animals absorb it indirectly through the feces found
in fertilizers. Humans can also absorb Toxo through their diet, and cat
owners can contract it through their skin when they handle kitty litter.
Overall it infects one-third of people worldwide. When Toxo invades
mammals, it usually swims straight for the brain, where it forms tiny cysts,
especially in the amygdala, an almond-shaped region in the mammal brain
that guides the processing of emotions, including pleasure and anxiety.
Scientists don’t know why, but the amygdala cysts can slow down reaction
times and induce jealous or aggressive behavior in people. Toxo can alter
people’s sense of smell, too. Some cat hoarders (those most vulnerable to
Toxo) become immune to the pungent urine of cats—they stop smelling it.
A few hoarders, usually to their great shame, reportedly even crave the
odor.
Toxo does even stranger things to rodents, a common meal for cats.
Rodents that have been raised in labs for hundreds of generations and have
never seen a predator in their whole lives will still quake in fear and
scamper to whatever cranny they can find if exposed to cat urine; it’s an
instinctual, totally hardwired fear. Rats exposed to Toxo have the opposite
reaction. They still fear other predators’ scents, and they otherwise sleep,
mate, navigate mazes, nibble fine cheese, and do everything else normally.
But these rats adore cat urine, especially male rats. In fact they more than
adore it. At the first whiff of cat urine, their amygdalae throb, as if meeting
females in heat, and their testicles swell. Cat urine gets them off.
Toxo toys with mouse desire like this to enrich its own sex life. When
living inside the rodent brain, Toxo can split in two and clone itself, the
same method by which most microbes reproduce. It reproduces this way in
sloths, humans, and other species, too. Unlike most microbes, though, Toxo
can also have sex (don’t ask) and reproduce sexually—but only in the
intestines of cats. It’s a weirdly specific fetish, but there it is. Like most
organisms, Toxo craves sex, so no matter how many times it has passed its
genes on through cloning, it’s always scheming to get back inside those
erotic cat guts. Urine is its opportunity. By making mice attracted to cat
urine, Toxo can lure them toward cats. Cats happily play along, of course,
and pounce, and the morsel of mouse ends up exactly where Toxo wanted to
be all along, in the cat digestive tract. Scientists suspect that Toxo learned to
work its mojo in other potential mammal meals for a similar reason, to
ensure that felines of all sizes, from tabbies to tigers, would keep ingesting
it.
This might sound like a just-so story so far—a tale that sounds clever
but lacks real evidence. Except for one thing. Scientists have discovered
that two of Toxo’s eight thousand genes help make a chemical called
dopamine. And if you know anything about brain chemistry, you’re
probably sitting up in your chair about now. Dopamine helps activate the
brain’s reward circuits, flooding us with good feelings, natural highs.
Cocaine, Ecstasy, and other drugs also play with dopamine levels, inducing
artificial highs. Toxo has the gene for this potent, habit-forming chemical in
its repertoire—twice—and whenever an infected brain senses cat urine,
consciously or not, Toxo starts pumping it out. As a result, Toxo gains
influence over mammalian behavior, and the dopamine hook might provide
a plausible biological basis for hoarding cats.*
Toxo isn’t the only parasite that can manipulate animals. Much like
Toxo, a certain microscopic worm prefers paddling around in the guts of
birds but often gets ejected, forcefully, in bird droppings. So the ejected
worm wriggles into ants, turns them cherry red, puffs them up like Violet
Beauregarde, and convinces other birds they’re delicious berries. Carpenter
ants also fall victim to a rain-forest fungus that turns them into mindless
zombies. First the fungus hijacks an ant’s brain, then pilots it toward moist
locations, like the undersides of leaves. Upon arriving, the zombified ant
bites down, and its jaws lock into place. The fungus turns the ant’s guts into
a sugary, nutritious goo, shoots a stalk out of the ant’s head, and sends out spores
to infect more ants. There’s also the so-called Herod bug—the Wolbachia
bacteria, which infects wasps, mosquitoes, moths, flies, and beetles.
Wolbachia can reproduce only inside female insects’ eggs, so like Herod in
the Bible, it often slaughters infant males wholesale, by releasing
genetically produced toxins. (In certain lucky insects, Wolbachia has mercy
and merely fiddles with the genes that determine sex, converting
male grubs into female ones—in which case a better nickname might be
the Tiresias bug.) Beyond creepy-crawlies, a lab-tweaked version of one
virus can turn polygamous male voles—rodents who normally have, as one
scientist put it, a “country song… love ’em and leave ’em” attitude toward
vole women—into utterly faithful stay-at-home husbands, simply by
injecting some repetitive DNA “stutters” into a gene that adjusts brain
chemistry. Exposure to the virus arguably even made the voles smarter.
Instead of blindly having sex with whatever female wandered near, the
males began associating sex with one individual, a trait called “associative
learning” that was previously beyond them.
The vole and Toxo cases edge over into uncomfortable territory for a
species like us that prizes autonomy and smarts. It’s one thing to find
broken leftover virus genes in our DNA, quite another to admit that
microbes might manipulate our emotions and inner mental life. But Toxo
can. Somehow in its long coevolution with mammals, Toxo stole the gene
for dopamine production, and the gene has proved pretty successful in
influencing animal behavior ever since—both by ramping up pleasure
around cats and by damping any natural fear of cats. There’s also anecdotal
evidence that Toxo can alter other fear signals in the brain, ones unrelated to
cats, and convert those impulses into ecstatic pleasure as well. Some
emergency room doctors report that motorcycle crash victims often have
unusually high numbers of Toxo cysts in their brains. These are the hotshots
flying along highways and cutting the S-turns as sharply as possible—the
people who get off on risking their lives. And it just so happens that their
brains are riddled with Toxo.
It’s hard to argue with Toxo scientists who—while thrilled about what
Toxo has uncovered about the biology of emotions and the interconnections
between fear, attraction, and addiction—also feel creeped out by what their
work implies. One Stanford University neuroscientist who studies Toxo
says, “It’s slightly frightening in some ways. We take fear to be basic and
natural. But something can not only eliminate it but turn it into this
esteemed thing, attraction. Attraction can be manipulated so as to make us
attracted to our worst enemy.” That’s why Toxo deserves the title of the
Machiavellian microbe. Not only can it manipulate us, it can make what’s
evil seem good.
Peyton Rous’s life had a happy if complicated ending. During World War I,
he helped establish some of the first blood banks by developing a method to
store red blood cells with gelatin and sugar—a sort of blood Jell-O. Rous
also bolstered his early work on chickens by studying another obscure but
contagious tumor, the giant papilloma warts that once plagued cottontail
rabbits in the United States. Rous even had the honor, as editor of a
scientific journal, of publishing the first work to firmly link genes and
DNA.
Nevertheless, despite this and other work, Rous grew suspicious that
geneticists were getting ahead of themselves, and he refused to connect the
dots that other scientists were eagerly connecting. For example, before he
would publish the paper linking genes and DNA, he made the head scientist
strike out a sentence suggesting that DNA was as important to cells as
amino acids. Indeed, Rous came to reject the very idea that viruses cause
cancer by injecting genetic material, as well as the idea that DNA mutations
cause cancer at all. Rous believed viruses promoted cancer in other ways,
possibly by releasing toxins; and although no one knows why, he struggled
to accept that microbes could influence animal genetics quite as much as his
work implied.
Still, Rous never wavered in his conviction that viruses cause tumors
somehow, and as his peers unraveled the complicated details of his
contagious chicken cancer, they began to appreciate the clarity of his early
work all the more. Respect was still grudging in some quarters, and Rous
had to endure his much younger son-in-law winning a Nobel Prize for
medicine in 1963. But in 1966 the Nobel committee finally vindicated
Francis Peyton Rous with a prize of his own. The gap between Rous’s
important papers and his Nobel, fifty-five years, is one of the longest in prize
history. But the win no doubt proved one of the most satisfying, even if he
had just four years to enjoy it before dying in 1970. And after his death, it
ceased to matter what Rous himself had believed or rejected; young
microbiologists, eager to explore how microbes reprogram life, held him up
as an idol, and textbooks today cite his work as a classic case of an idea
condemned in its own time and later exonerated by DNA evidence.
The story of Cat Crossing also ended in a complicated fashion. As the
bills piled higher, creditors nearly seized the Wrights’ house. Only
donations from cat lovers saved them. Around this time, newspapers also
began digging into Jack’s past, and reported that, far from being an innocent
animal lover, he’d once been convicted of manslaughter for strangling a
stripper. (Her body was found on a roof.) Even after these crises passed, the
daily hassles continued for Jack and Donna. One visitor reported that
“neither had any vacation, any new clothes, any furniture, or draperies.” If
they rose in the night to go to the bathroom, the dozens of cats on their bed
would expand like amoebas to fill the warm hollow, leaving no room to
crawl back beneath the covers. “Sometimes you think it’ll make you crazy,”
Donna once confessed. “We can’t get away… I cry just about every day in
the summertime.” Unable to stand the little indignities anymore, Donna
eventually moved out. Yet she got drawn back in, unable to walk away from
her cats. She returned every dawn to help Jack cope.*
Despite the near certainty of Toxo exposure and infection, no one knows
to what extent (if any) Toxo turned Jack and Donna’s life inside out. But
even if they were infected—and even if neurologists could prove that Toxo
manipulated them profoundly—it’s hard to censure someone for caring so
much for animals. And in a (much, much) larger perspective, the behavior
of hoarders might be doing some greater evolutionary good, in the Lynn
Margulis sense of mixing up our DNA. Interactions with Toxo and other
microbes have certainly influenced our evolution at multiple stages, perhaps
profoundly. Retroviruses colonized our genome in waves, and a few
scientists argue that it’s not a coincidence that these waves appeared just
before mammals began to flourish and just before hominid primates
emerged. This finding dovetails with another recent theory that microbes
may explain Darwin’s age-old quandary of the origin of new species. One
traditional line of demarcation between species is sexual reproduction: if
two populations can’t breed and produce viable children, they’re separate
species. Usually the reproductive barriers are mechanical (animals don’t
“fit”) or biochemical (no viable embryos result). But in one experiment
with Wolbachia (the Herod-Tiresias bug), scientists took two populations of
infected wasps that couldn’t produce healthy embryos in the wild and gave
them antibiotics. This killed the Wolbachia—and suddenly allowed the
wasps to reproduce. Wolbachia alone had driven them apart.
Along these lines, a few scientists have speculated that if HIV ever
reached truly epidemic levels and wiped out most people on earth, then the
small percentage of people immune to HIV (and they do exist) could evolve
into a new human species. Again, it comes down to sexual barriers. These
people couldn’t have sex with the nonimmune population (most of us)
without killing us off. Any children from the union would have a good
chance of dying of HIV, too. And once erected, these sexual and
reproductive barriers would slowly but inevitably drive the two populations
apart. Even more wildly, HIV, as a retrovirus, could someday insert its DNA
into these new humans in a permanent way, joining the genome just as other
viruses have. HIV genes would then be copied forever in our descendants,
who might have no inkling of the destruction it once wrought.
Of course, saying that microbes infiltrate our DNA may be nothing but a
species-centric bias. Viruses have a haiku-like quality about them, some
scientists note, a concentration of genetic material that their hosts lack.
Some scientists also credit viruses with creating DNA in the first place
(from RNA) billions of years ago, and they argue that viruses still invent
most new genes today. In fact the scientists who discovered bornavirus
DNA in humans think that, far from the bornavirus forcing this DNA into
us primates, our chromosomes stole this DNA instead. Whenever our
mobile DNA starts swimming about, it often grabs other scraps of DNA and
drags them along to wherever it’s going. Bornavirus replicates only in the
nucleus of cells, where our DNA resides, and there’s a good chance that
mobile DNA mugged the bornavirus long ago, kidnapped its DNA, and
kept it around when it proved useful. Along these lines, I’ve accused Toxo
of stealing the dopamine gene from its more sophisticated mammalian
hosts. And historical evidence suggests it did. But Toxo also hangs out
primarily in the cell nucleus, and there’s no theoretical reason we couldn’t
have stolen this gene from it instead.
It’s hard to decide what’s less flattering: that microbes outsmarted our
defenses and inserted, wholly by accident, the fancy genetic tools that
mammals needed to make certain evolutionary advances; or that mammals
had to shake down little germs and steal their genes instead. And in some
cases these truly were advances, leaps that helped make us human. Viruses
probably created the mammalian placenta, the interface between mother
and child that allows us to give birth to live young and enables us to nurture
our young. What’s more, in addition to producing dopamine, Toxo can ramp
up or ramp down the activity of hundreds of genes inside human neurons,
altering how the brain works. The bornavirus also lives and works between
the ears, and some scientists argue that it could be an important source for
adding variety to the DNA that forms and runs the brain. This variety is the
raw material of evolution, and passing around microbes like the bornavirus
from human to human, probably via sex, might well have increased the
chances of someone getting beneficial DNA. In fact, most microbes
responsible for such boosts likely got passed around via sex. Which means
that, if microbes were as important in pushing evolution forward as some
scientists suggest, STDs could be responsible in some way for human
genius. Descended from apes indeed.
As the virologist Luis Villarreal has noted (and his thoughts apply to
other microbes), “It is our inability to perceive viruses, especially the silent
virus, that has limited our understanding of the role they play in all life.
Only now, in the era of genomics, can we more clearly see ubiquitous
footprints in the genomes of all life.” So perhaps we can finally see too that
people who hoard cats aren’t crazy, or at least not merely crazy. They’re
part of the fascinating and still-unfolding story of what happens when you
mix animal and microbe DNA.
8
Love and Atavisms
What Genes Make Mammals Mammals?
Given the thousands upon thousands of babies born in and around Tokyo
each year, most don’t attract much attention, and in December 2005, after
forty weeks and five days of pregnancy, a woman named Mayumi quietly
gave birth to a baby girl named Emiko. (I’ve changed the names of family
members for privacy’s sake.) Mayumi was twenty-eight, and her blood
work and sonograms seemed normal throughout her pregnancy. The
delivery and its aftermath were also routine—except that for the couple
involved, of course, a first child is never routine. Mayumi and her husband,
Hideo, who worked at a petrol station, no doubt felt all the normal fluttering
anxieties as the ob-gyn cleared the mucus from Emiko’s mouth and coaxed
her into her first cry. The nurses drew blood from Emiko for routine testing,
and again, everything came back normal. They clamped and cut Emiko’s
umbilical cord, her lifeline to her mother’s placenta, and it dried out
eventually, and the little stub blackened and fell off in the normal fashion,
leaving her with a belly button. A few days later, Hideo and Mayumi left
the hospital in Chiba, a suburb across the bay from Tokyo, with Emiko in
their arms. All perfectly normal.
Thirty-six days after giving birth, Mayumi began bleeding from her
vagina. Many women experience vaginal hemorrhages after birth, but three
days later, Mayumi also developed a robust fever. With the newborn Emiko
to take care of, the couple toughed out Mayumi’s spell at home for a few
days. But within a week, the bleeding had become uncontrollable, and the
family returned to the hospital. Because the wound would not clot, doctors
suspected something was wrong with Mayumi’s blood. They ordered a
round of blood work and waited.
The news was not good. Mayumi tested positive for a grim blood cancer
called ALL (acute lymphoblastic leukemia). While most cancers stem from
faulty DNA—a cell deletes or miscopies an A, C, G, or T and then turns
against the body—Mayumi’s cancer had a more complicated origin. Her
DNA had undergone what’s called a Philadelphia translocation (named after
the city in which it was discovered in 1960). A translocation takes place
when two non-twin chromosomes mistakenly cross over and swap DNA.
And unlike a mutational typo, which can occur in any species, this blunder
tends to target higher animals with specific genetic features.
Protein-producing DNA—genes—actually makes up very little of the
total DNA in higher animals, as little as 1 percent. Morgan’s fly boys had
assumed that genes almost bumped up against each other on chromosomes,
strung along tightly like Alaska’s Aleutian Islands. In reality genes are
precious rare, scattered Micronesian islands in vast chromosomal Pacifics.
So what does all that extra DNA do? Scientists long assumed it did
nothing, and snubbed it as “junk DNA.” The name has haunted them as an
embarrassment ever since. So-called junk DNA actually contains thousands
of critical stretches that turn genes on and off or otherwise regulate them—
the “junk” manages genes. To take one example, chimpanzees and other
primates have short, fingernail-hard bumps (called spines) studding their
penises. Humans lack the little prick pricks because sometime in the past
few million years, we lost sixty thousand letters of regulatory junk DNA—
DNA that would otherwise coax certain genes (which we still have) into
making the spines. Besides sparing vaginas, this loss decreases male
sensation during sex and thereby prolongs copulation, which scientists
suspect helps humans pair-bond and stay monogamous. Other junk DNA
fights cancer, or keeps us alive moment to moment.
To their amazement, scientists even found junk DNA—or, as they say
now, “noncoding DNA”—cluttering genes themselves. Cells turn DNA into
RNA by rote, skipping no letters. But with the full RNA manuscript in
hand, cells narrow their eyes, lick a red pencil, and start slashing—think
Gordon Lish hacking down Raymond Carver. This editing consists mostly
of chopping out unneeded RNA and stitching the remaining bits together to
make the actual messenger RNA. (Confusingly, the excised parts are called
“introns,” the included parts “exons.” Leave it to scientists…) For example,
raw RNA with both exons (capital letters) and introns (lowercase) might
read: abcdefGHijklmnOpqrSTuvwxyz. Edited down to exons, it says
GHOST.
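A toy sketch may help make the editing convention concrete. The few lines of Python below are purely illustrative and hypothetical in every detail (a real cell splices RNA chemically, guided by sequence signals, not by letter case); they simply keep the uppercase “exons” and discard the lowercase “introns” from the example string above.

```python
# Toy illustration of exon splicing: uppercase letters stand in for exons,
# lowercase letters for introns. Real spliceosomes recognize chemical signals,
# not letter case; this is just a mnemonic for the GHOST example above.

def splice(raw_rna: str) -> str:
    """Keep the exons (uppercase) and cut out the introns (lowercase)."""
    return "".join(letter for letter in raw_rna if letter.isupper())

print(splice("abcdefGHijklmnOpqrSTuvwxyz"))  # prints GHOST
```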
Lower animals like insects, worms, and their ick ilk contain only a few
short introns; otherwise, if introns run on for too long or grow too
numerous, their cells get confused and can no longer string something
coherent together. The cells of mammals show more aptitude here; we can
sift through pages and pages of needless introns and never lose the thread of
what the exons are saying. But this talent does have disadvantages. For one,
the RNA-editing equipment in mammals must work long, thankless hours:
the average human gene contains eight introns, each an average of 3,500
letters long—thirty times longer than the exons they surround. The gene for
the largest human protein, titin, contains 178 fragments, totaling 80,000
bases, all of which must be stitched together precisely. An even more
ridiculously sprawling gene—dystrophin, the Jacksonville of human DNA
—contains 14,000 bases of coding DNA among 2.2 million bases of intron
cruft. Transcription alone takes sixteen hours. Overall this constant splicing
wastes incredible energy, and any slipups can ruin important proteins. In
one genetic disorder, improper splicing in human skin cells wipes out the
grooves and whorls of fingerprints, rendering the fingertips completely
bald. (Scientists have nicknamed this condition “immigration delay
disease,” since these mutants get a lot of guff at border crossings.) Other
splicing disruptions are more serious; mistakes in dystrophin cause
muscular dystrophy.
Animals put up with this waste and danger because introns give our cells
versatility. Certain cells can skip exons now and then, or leave part of an
intron in place, or otherwise edit the same RNA differently. Having introns
and exons therefore gives cells the freedom to experiment: they can produce
different RNA at different times or customize proteins for different
environments in the body.* For this reason alone, mammals especially have
learned to tolerate vast numbers of long introns.
But as Mayumi discovered, tolerance can backfire. Long introns provide
places for non-twin chromosomes to get tangled up, since there are no
exons to worry about disrupting. The Philadelphia swap takes place along
two introns—one on chromosome nine, one on chromosome twenty-two—
that are exceptionally long, which raises the odds of these stretches coming
into contact. At first our tolerant cells see this swap as no big deal, since it’s
fiddling “only” with soon-to-be-edited introns. It is a big deal. Mayumi’s
cells fused two genes that should never be fused—genes that formed, in
tandem, a monstrous hybrid protein that couldn’t do the job of either
individual gene properly. The result was leukemia.
Doctors started Mayumi on chemotherapy at the hospital, but they had
caught the cancer late, and she remained quite ill. Worse, as Mayumi
deteriorated, their minds started spinning: what about Emiko? ALL is a
swift cancer, but not that swift. Mayumi almost certainly had it when
pregnant with Emiko. So could the little girl have “caught” the cancer from
her mother? Cancer among expectant women is not uncommon, happening
once every thousand pregnancies. But none of the doctors had ever seen a
fetus catch cancer: the placenta, the organ that connects mother to child,
should thwart any such invasion, because in addition to bringing nutrients
to the baby and removing waste, the placenta acts as part of the baby’s
immune system, blocking microbes and rogue cells.
Still, a placenta isn’t foolproof—doctors advise pregnant women not to
handle kitty litter because Toxo can occasionally slip through the placenta
and ravage the fetal brain. And after doing some research and consulting
some specialists, the doctors realized that on rare occasions—a few dozen
times since the first known instance, in the 1860s—mothers and fetuses
came down with cancer simultaneously. No one had ever proved anything
about the transmission of these cancers, however, because mother, fetus,
and placenta are so tightly bound up that questions of cause and effect get
tangled up, too. Perhaps the fetus gave the cancer to the mother in these
cases. Perhaps they’d both been exposed to unknown carcinogens. Perhaps
it was just a sickening coincidence—two strong genetic predispositions for
cancer going off at once. But the Chiba doctors, working in 2006, had a tool
no previous generation did: genetic sequencing. And as the Mayumi-Emiko
case progressed, these doctors used genetic sequencing to pin down, for the
first time, whether or not it’s possible for a mother to give cancer to her
fetus. What’s more, their detective work highlighted some functions and
mechanisms of DNA unique to mammals, traits that can serve as a
springboard for exploring how mammals are genetically special.
Of course, the Chiba doctors weren’t imagining their work would take
them so far afield. Their immediate concern was treating Mayumi and
monitoring Emiko. To their relief, Emiko looked fine. True, she had no idea
why her mother had been taken from her, and any breast-feeding—so
important to mammalian mothers and children—ceased during
chemotherapy. So she certainly felt distress. But otherwise Emiko hit all her
growth and development milestones and passed every medical examination.
Everything about her seemed, again, normal.
Saying so might creep expectant mothers out, but you can make a good case
that fetuses are parasites. After conception, the tiny embryo infiltrates its
host (Mom) and implants itself. It proceeds to manipulate her hormones to
divert food to itself. It makes Mom ill and cloaks itself from her immune
system, which would otherwise destroy it. All well-established games that
parasites play. And we haven’t even talked about the placenta.
In the animal kingdom, placentas are practically a defining trait of
mammals.* Some oddball mammals that split with our lineage long ago
(like duck-billed platypuses) do lay eggs, just as fish, reptiles, birds, insects,
and virtually every other creature do. But of the roughly 2,150 types of
mammals, 2,000 have a placenta, including the most widespread and
successful mammals, like bats, rodents, and humans. That placental
mammals have expanded from modest beginnings into the sea and sky and
every other niche from the tropics to the poles suggests that placentas gave
them—gave us—a big survival boost.
As probably its biggest benefit, the placenta allows a mammal mother to
carry her living, growing children within her. As a result she can keep her
children warm inside the womb and run away from danger with them,
advantages that spawning-into-water and squatting-on-nest creatures lack.
Live fetuses also have longer to gestate and develop energy-intensive
organs like the brain; the placenta’s ability to pump bodily waste away
helps the brain develop, too, since fetuses aren’t stewing in toxins. What’s
more, because she invests so much energy in her developing fetus—not to
mention the literal and intimate connection she feels because of the placenta
—a mammal mom feels incentive to nurture and watch over her children,
sometimes for years. (Or at least feels the need to nag them for years.) The
length of this investment is rare among animals, and mammal children
reciprocate by forming unusually strong bonds to their mothers. In one
sense, then, the placenta, by enabling all this, made us mammals caring
creatures.
That makes it all the more creepy that the placenta, in all likelihood,
evolved from our old friends the retroviruses. But from a biological
standpoint, the connection makes sense. Clamping on to cells happens to be
a talent of viruses: they fuse their “envelopes” (their outer skin) to a cell
before injecting their genetic material into it. When a ball of embryonic
cells swims into the uterus and anchors itself there, the embryo also fuses
part of itself with the uterine cells, by using special fusion proteins. And the
DNA that primates, mice, and other mammals use to make the fusion
proteins appears to be plagiarized from genes that retroviruses use to attach
and meld their envelopes. What’s more, the uterus of placental mammals
draws heavily on other viruslike DNA to do its job, using a special jumping
gene called mer20 to flick 1,500 genes on and off in uterine cells. With both
organs, it seems we once again borrowed some handy genetic material from
a parasite and adapted it to our own ends. As a bonus, the viral genes in the
placenta even provide extra immunity, since the presence of retrovirus
proteins discourages other microbes from circling the placenta (either by
warning them off or by outcompeting them).
As another part of its immune function, the placenta filters out any cells
that might try to invade the fetus, including cancer cells. Unfortunately,
other aspects of the placenta make it downright attractive to cancer. The
placenta produces growth hormones to promote the vigorous division of
fetal cells, and some cancers thrive on these growth hormones, too.
Furthermore, the placenta soaks up enormous amounts of blood and siphons
off nutrients for the fetus. That means that blood cancers like leukemia can
lurk inside the placenta and flourish. Cancers genetically programmed to
metastasize, like the skin cancer melanoma, take to the blood as they slither
around inside the body, and they find the placenta quite hospitable as well.
In fact, melanoma is the most common cancer that mothers and fetuses
get simultaneously. The first recorded simultaneous cancer, in 1866, in
Germany, involved a roaming melanoma that randomly took root in the
mother’s liver and the child’s knee. Both died within nine days. Another
horrifying case claimed a twenty-eight-year-old Philadelphia woman,
referred to only as “R. McC” by her doctors. It all started when Ms. McC
got a brutal sunburn in April 1960. Shortly afterward a half-inch-long mole
sprang up between her shoulder blades. It bled whenever she touched it.
Doctors removed the mole, and no one thought about it again until May
1963, when she was a few weeks pregnant. During a checkup, doctors
noticed a nodule beneath the skin on her stomach. By August the nodule
had widened even faster than her belly, and other, painful nodules had
sprung up. By January, lesions had spread to her limbs and face, and her
doctors opened her up for a cesarean section. The boy inside appeared fine
—a full seven pounds, thirteen ounces. But his mother’s abdomen was
spotted with dozens of tumors, some of them black. Not surprisingly, the
birth finished off what little strength she had. Within an hour, her pulse
dropped to thirty-six beats per minute, and though her doctors resuscitated
her, she died within weeks.
And the McC boy? There was hope at first. Despite the widespread
cancer, doctors saw no tumors in Ms. McC’s uterus or placenta—her points
of contact with her son. And although he was sickly, a careful scan of every
crevice and dimple revealed no suspicious-looking moles. But they couldn’t
check inside him. Eleven days later tiny, dark blue spots began breaking out
on the newborn’s skin. Things deteriorated quickly after that. The tumors
expanded and multiplied, and killed him within seven weeks.
Mayumi had leukemia, not melanoma, but otherwise her family in Chiba
reprised the McC drama four decades later. In the hospital, Mayumi’s
condition deteriorated day by day, her immune system weakened by three
weeks of chemotherapy. She finally contracted a bacterial infection and
came down with encephalitis, inflammation of the brain. Her body began to
convulse and seize—a result of her brain panicking and misfiring—and her
heart and lungs faltered, too. Despite intensive care, she died two days after
contracting the infection.
Even worse, in October 2006, nine months after burying his wife, Hideo
had to return to the hospital with Emiko. The once-bouncing girl had fluid
in her lungs and, more troublesome, a raw, fever-red mass disfiguring her
right cheek and chin. On an MRI, this premature jowl looked enormous—as
large as tiny Emiko’s brain. (Try expanding your cheek as far as it will go
with air, and its size still wouldn’t be close.) Based on its location within
the cheek, the Chiba doctors diagnosed sarcoma, cancer of the connective
tissue. But with Mayumi in the back of their minds, they consulted experts
in Tokyo and England and decided to screen the tumor’s DNA to see what
they could find.
They found a Philadelphia swap. And not just any Philadelphia swap.
Again, this crossover takes place along two tremendously long introns,
68,000 letters long on one chromosome, 200,000 letters long on the other.
(This chapter runs about 30,000 letters.) The two arms of the chromosomes
could have crossed over at any one of thousands of different points. But the
DNA in both Mayumi’s and Emiko’s cancer had crossed over at the same
spot, down the same letter. This wasn’t chance. Despite lodging in Emiko’s
cheek, the cancer was basically the same as her mother’s.
But who gave cancer to whom? Scientists had never solved this mystery
before; even the McC case was ambiguous, since the fatal tumors appeared
only after the pregnancy started. Doctors pulled out the blood-spot card
taken from Emiko at birth and determined that the cancer had been present
even then. Further genetic testing revealed that Emiko’s normal (nontumor)
cells did not show a Philadelphia swap. So Emiko had not inherited any
predisposition to this cancer—it had sprung up sometime between
conception and the delivery forty weeks later. What’s more, Emiko’s
normal cells also showed, as expected, DNA from both her mother and
father. But her cheek tumor cells contained no DNA from Hideo; they were
pure Mayumi. This proved, indisputably, that Mayumi had given cancer to
Emiko, not vice versa.
Whatever sense of triumph the scientists might have felt, though, was
muted. As so often happens in medical research, the most interesting cases
spring from the most awful suffering. And in virtually every other historical
case where a fetus and mother had cancer simultaneously, both had
succumbed to it quickly, normally within a year. Mayumi was already gone,
and as the doctors started the eleven-month-old Emiko on chemotherapy,
they surely felt these dismal odds weighing on them.
The geneticists on the case felt something different nagging them. The
spread of the cancer here was essentially a transplant of cells from one
person to another. If Emiko had gotten an organ from her mother or had
tissue grafted onto her cheek, her body would have rejected it as foreign.
Yet cancer, of all things, had taken root without triggering the placenta’s
alarms or drawing the wrath of Emiko’s immune system. How? Scientists
ultimately found the answer in a stretch of DNA far removed from the
Philadelphia swap, an area called the MHC (the major histocompatibility complex, the cluster of genes that helps the immune system tell the body’s own cells from intruders).
If we follow the thread far enough, the MHC can help illuminate one more
aspect of Hideo and Mayumi and Emiko’s story, a thread that runs back to
our earliest days as mammals. A developing fetus has to conduct a whole
orchestra of genes inside every cell, encouraging some DNA to play louder
and hushing other sections up. Early on in the pregnancy, the most active
genes are the ones that mammals inherited from our egg-laying, lizardlike
ancestors. It’s a humbling experience to flip through a biology textbook and
see how uncannily similar bird, lizard, fish, human, and other embryos
appear during their early lives. We humans even have rudimentary gill slits
and tails—honest-to-god atavisms from our animal past.
After a few weeks, the fetus mutes the reptilian genes and turns on a
coterie of genes unique to mammals, and pretty soon the fetus starts to
resemble something you could imagine naming after your grandmother.
Even at this stage, though, if the right genes are silenced or tweaked,
atavisms (i.e., genetic throwbacks) can appear. Some people are born with
the same extra nipples that barnyard sows have.* Most of these extra
nipples poke through the “milk line” running vertically down the torso, but
they can appear as far away as the sole of the foot. Other atavistic genes
leave people with coats of hair sprouting all over their bodies, including
their cheeks and foreheads. Scientists can even distinguish (if you’ll forgive
the pejoratives) between “dog-faced” and “monkey-faced” coats, depending
on the coarseness, color, and other qualities of the hair. Infants missing a
snippet at the end of chromosome five develop cri-du-chat, or “cry of the
cat” syndrome, so named for their caterwauling chirps and howls. Some
children are also born with tails. These tails—usually centered above their
buttocks—contain muscles and nerves and run to five inches long and an
inch thick. Sometimes tails appear as side effects of recessive genetic
disorders that cause widespread anatomical problems, but tails can appear
idiosyncratically as well, in otherwise normal children. Pediatricians have
reported that these boys and girls can curl their tails up like an elephant’s
trunk, and that the tails contract involuntarily when children cough or
sneeze.* Again, all fetuses have tails at six weeks old, but they usually
retract after eight weeks as tail cells die and the body absorbs the excess
tissue. Tails that persist probably arise from spontaneous mutations, but
some children with tails do have betailed relatives. Most get the harmless
appendage removed just after birth, but some don’t bother until adulthood.
A hale and healthy baby boy born with a tail—a genetic throwback from our primate past. (Jan
Bondeson, A Cabinet of Medical Curiosities, reproduced by permission)
All of us have other atavisms dormant within us as well, just waiting for
the right genetic signals to awaken them. In fact, there’s one genetic
atavism that none of us escapes. About forty days after conception, inside
the nasal cavity, humans develop a tube about 0.01 inches long, with a slit
on either side. This incipient structure, the vomeronasal organ (VNO), is common
among mammals, who use it to help map the world around them. It acts like
an auxiliary nose, except that instead of smelling things that any sentient
creature can sniff out (smoke, rotten food), the vomeronasal organ detects
pheromones. Pheromones are veiled scents vaguely similar to hormones;
but whereas hormones give our bodies internal instructions, pheromones
give instructions (or at least winks and significant glances) to other
members of our species.
Because pheromones help guide social interactions, especially intimate
encounters, shutting off the VNO in certain mammals can have awkward
consequences. In 2007, scientists at Harvard University genetically rewired
some female mice to disable their VNOs. When the mice were by
themselves, not much changed—they acted normally. But when let loose on
regular females, the altered mice treated them like the Romans did the
Sabine women. They stormed and mounted the maidens, and despite
lacking the right equipment, they began thrusting their hips back and forth.
The bizarro females even groaned like men, emitting ultrasonic squeals
that, until then, had been heard only from male mice at climax.
Humans rely less on scent than other mammals do; throughout our
evolution we’ve lost or turned off six hundred common mammalian genes
for smell. But that makes it all the more striking that our genes still build a
VNO. Scientists have even detected nerves running from the fetal VNO to
the brain, and have seen these nerves send signals back and forth. Yet for
unknown reasons, despite going through the trouble of creating the organ
and wiring it up, our bodies neglect this sixth sense, and after sixteen weeks
it starts shriveling. By adulthood, it has retracted to the point that most
scientists dispute whether humans even have a VNO, much less a functional
one.
The debate about the human VNO fits into a larger and less-than-
venerable historical debate over the supposed links between scent,
sexuality, and behavior. One of Sigmund Freud’s nuttier friends, Dr.
Wilhelm Fliess, classified the nose as the body’s most potent sex organ in
the late 1800s. His “nasal reflex neurosis theory” was an unscientific hash
of numerology, anecdotes about masturbation and menstruation, maps of
hypothetical “genital spots” inside the nose, and experiments that involved
dabbing cocaine on people’s mucous membranes and monitoring their
libidos. His failure to actually explain anything about human sexuality
didn’t lower Fliess’s standing; to the contrary, his work influenced Freud,
and Freud allowed Fliess to treat his patients (and, some have speculated,
Freud himself) for indulging in masturbation. Fliess’s ideas eventually died
out, but pseudoscientific sexology never has. In recent decades, hucksters
have sold perfumes and colognes enriched with pheromones, which
supposedly make the scentee a sexual magnet. (Don’t hold your breath.)
And in 1994 a U.S. military scientist requested $7.5 million from the air
force to develop a pheromone-based “gay bomb.” His application described
the weapon as a “distasteful but completely non-lethal” form of warfare.
The pheromones would be sprayed over the (mostly male) enemy troops,
and the smell would somehow—the details were tellingly sketchy, at least
outside the scientist’s fantasies—whip them into such a froth of randiness
that they’d drop their weapons and make whoopee instead of war. Our
soldiers, wearing gas masks, would simply have to round them up.*
Perfumes and gay bombs aside, some legitimate scientific work has
revealed that pheromones can influence human behavior. Forty years ago,
scientists determined that pheromones cause the menstrual cycles of women
who live together to converge toward the same date. (That’s no urban
legend.) And while we may resist reducing human love to the interaction of
chemicals, evidence shows that raw human lust—or more demurely,
attraction—has a strong olfactory component. Old anthropology books, not
to mention Charles Darwin himself, used to marvel that in societies that
never developed the custom of kissing, potential lovers often sniffed each
other instead of smooching. More recently, Swedish doctors ran some
experiments that echo the dramatic Harvard study with mice. The doctors
exposed straight women, straight men, and homosexual men to a
pheromone in male sweat. During this exposure, the brain scans of straight
women and gay men—but not straight men—showed signs of mild arousal.
The obvious follow-up experiment revealed that pheromones in female
urine can arouse straight men and gay women, but not straight women. It
seems the brains of people with different sexual orientations respond
differently to odors from either sex. This doesn’t prove that humans have a
functioning VNO, but it does suggest we’ve retained some of its
pheromone-detecting ability, perhaps by genetically shifting its
responsibilities to our regular nose.
Probably the most straightforward evidence that smells can influence
human arousal comes from—and we’ve finally circled back to it—the
MHC. Like it or not, your body advertises your MHC every time you lift
your arm. Humans have a high concentration of sweat glands in the armpit,
and mixed in with the excreted water, salt, and oil are pheromones that spell
out exactly what MHC genes people have to protect them from disease.
These MHC ads drift into your nose, where nasal cells can work out how
much the MHC of another person differs from your own. That’s helpful in
judging a mate because you can estimate the probable health of any
children you’d have together. Remember that MHC genes don’t interfere
with each other—they codominate. So if Mom and Dad have different
MHCs, baby will inherit their combined disease resistance. The more
genetic disease resistance, the better off baby will be.
This information trickles into our brains unconsciously but can make
itself known when we suddenly find a stranger unaccountably sexy. It’s
impossible to say for sure without testing, but when this happens, the odds
are decent that his or her MHC is notably different from your own. In
various studies, when women sniffed a T-shirt worn to bed by men they
had never seen or met, the women rated the men with wildly different MHCs
(compared to their own) as the sexiest in the batch. To be sure, other studies indicate that,
in places already high in genetic diversity, like parts of Africa, having a
wildly different MHC doesn’t increase attraction. But the MHC-attraction
link does seem to hold in more genetically homogeneous places, as studies
in Utah have shown. This finding might also help explain why—because
they have more similar MHCs than average—we find the thought of sex
with our siblings repugnant.
Again, there’s no sense in reducing human love to chemicals; it’s
waaaay more complex than that. But we’re not as far removed from our
fellow mammals as we might imagine. Chemicals do prime and propel love,
and some of the most potent chemicals out there are the pheromones that
advertise the MHC. If two people from a genetically homogeneous locale—
take Hideo and Mayumi—came together, fell in love, and decided to have a
child, then as far as we can ever explain these things biologically, their
MHCs likely had something to do with it. Which makes it all the more
poignant that the disappearance of that same MHC from the invading cancer
cells, the loss that let them slip past Emiko’s immune defenses, empowered
the cancer that almost destroyed her.
Almost. The survival rate for both mothers and infants with
simultaneous cancer has remained abysmally low, despite great advances in
medicine since 1866. But unlike her mother, Emiko responded to treatment
well, partly because her doctors could tailor her chemotherapy to her
tumor’s DNA. Emiko didn’t even need the excruciating bone-marrow
transplants that most children with her type of cancer require. And as of
today (touch wood) Emiko is alive, almost seven years old and living in
Chiba.
We don’t think of cancer as a transmissible disease. Twins can
nevertheless pass cancer to each other in the womb; transplanted organs can
pass cancer to the organ recipient; and mothers can indeed pass cancer to
their unborn children, despite the defenses of the placenta. Still, Emiko
proves that catching an advanced cancer, even as a fetus, doesn’t have to be
fatal. And cases like hers have expanded our view of the MHC’s role in
cancer, and demonstrated that the placenta is more permeable than most
scientists imagined. “I’m inclined to think that maybe cells get by [the
placenta] in modest numbers all the time,” says a geneticist who worked
with Emiko’s family. “You can learn a lot from very odd cases in
medicine.”
In fact, other scientists have painstakingly determined that most if not all
of us harbor thousands of clandestine cells from our mothers, stowaways
from our fetal days that burrowed into our vital organs. Every mother has
almost certainly secreted away a few memento cells from each of her
children inside her, too. Such discoveries are opening up fascinating new
facets of our biology; as one scientist wondered, “What constitutes our
psychological self if our brains are not entirely our own?” More personally,
these findings show that even after the death of a mother or child, cells from
one can live on in the other. It’s another facet of the mother-child
connection that makes mammals special.
9
Humanzees and Other Near Misses
When Did Humans Break Away from Monkeys, and
Why?
God knows the evolution of human beings didn’t stop with fur, mammary
glands, and placentas. We’re also primates—although that was hardly
something to brag about sixty million years ago. The first rudimentary
primates probably didn’t crack one pound or live beyond six years. They
probably lived in trees, hopped about instead of striding, hunted nothing
bigger than insects, and crept out of their hovels only at night. But these
milquetoast midnight bug-biters got lucky and kept evolving. Tens of
millions of years later, some clever, opposable-thumbed, chest-beating
primates arose in Africa, and members of one of those lines of primates
literally rose onto two feet and began marching across the savannas.
Scientists have studied this progression intensely, picking it apart for clues
about the essence of humanity. And looking back on the whole picture—
that National Geographic sequence of humans getting up off our knuckles,
shedding our body hair, and renouncing our prognathous jaws—we can’t
help but think about our emergence a little triumphantly.
Still, while the rise of human beings was indeed precious, our DNA—
like the slave in Roman times who followed a triumphant general around—
whispers in our ears, Remember, thou art mortal. In reality the transition
from apelike ancestor to modern human being was more fraught than we
appreciate. Evidence tattooed into our genes suggests that the human line
almost went extinct, multiple times; nature almost wiped us out like so
many mastodons and dodos, with nary a care for our big plans. And it’s
doubly humbling to see how closely our DNA sequence still resembles that
of so-called lower primates, a likeness that conflicts with our inborn feeling
of preordainment—that we somehow sit superior to other creatures.
One strong piece of evidence for that inborn feeling is the revulsion we
feel over the very idea of mixing human tissues with tissues from another
creature. But serious scientists throughout history have attempted to make
human-animal chimeras, most recently by adulterating our DNA. Probably
the all-time five-alarmer in this realm took place in the 1920s, when a
Russian biologist named Ilya Ivanovich Ivanov tried to unite human genes
with chimpanzee genes in some hair-raising experiments that won the
approval of Joseph Stalin himself.
Ivanov started his scientific career around 1900, and worked with
physiologist Ivan Pavlov (he of the drooling dogs) before branching out to
become the world’s expert in barnyard insemination, especially with horses.
Ivanov crafted his own instruments for the work, a special sponge to sop up
semen and rubber catheters to deliver it deep inside the mares. For a decade,
he worked with the Department for State Stud-Farming, an official bureau
that supplied the ruling Romanov government with pretty horses. Given
those political priorities, it’s not hard to imagine why the Romanovs were
overthrown in 1917, and when the Bolsheviks took over and founded the
Soviet Union, Ivanov found himself unemployed.
It didn’t help Ivanov’s prospects that most people at the time considered
artificial insemination shameful, a corruption of natural copulation. Even
those who championed the technique went to ridiculous lengths to preserve
an aura of organic sex. One prominent doctor would wait outside a barren
couple’s room, listening at the keyhole while they went at it, then rush in
with a baster of sperm, practically push the husband aside, and spurt it into
the woman—all to trick her egg cells into thinking that insemination had
happened during the course of intercourse. The Vatican banned artificial
insemination for Catholics in 1897, and Russia’s Greek Orthodox Church
similarly condemned anyone, like Ivanov, who practiced it.
But the religious snit eventually helped Ivanov’s career. Even while
mired in the barnyard, Ivanov had always seen his work in grander terms—
not just a way to produce better cows and goats, but a way to probe
Darwin’s and Mendel’s fundamental theories of biology, by mixing
embryos from different species. After all, his sponges and catheters
removed the main barrier to such work: the need to coax random animals to
conjugate. Ivanov had been chewing over the ultimate test of Darwinian
evolution, humanzees, since 1910, and he finally (after consulting with
Hermann Muller, the Soviet-loving Drosophila scientist) screwed up the
courage to request a research grant in the early 1920s.
Ivanov applied to the people’s commissar of enlightenment, the official
who controlled Soviet scientific funding. The commissar, a theater and art
expert in his former life, let the proposal languish, but other top Bolsheviks
saw something promising in Ivanov’s idea: a chance to insult religion, the
Soviet Union’s avowed enemy. These farsighted men argued that breeding
humanzees would be vital “in our propaganda and in our struggle for the
liberation of the working people from the power of the Church.” Ostensibly
for this reason, in September 1925—just months after the Scopes trial in the
United States—the Soviet government granted Ivanov $10,000 ($130,000
today) to get started.
Ivanov had good scientific reasons to think the work could succeed.
Scientists knew at the time that human and primate blood showed a
remarkable degree of similarity. Even more exciting, a Russian-born
colleague, Serge Voronoff, was wrapping up a series of sensational and
supposedly successful experiments to restore the virility of elderly men by
transplanting primate glands and testicles into them. (Rumors spread that
Irish poet William Butler Yeats had undergone this procedure. He hadn’t,
but the fact that people didn’t dismiss the rumor as rubbish says a lot about
Yeats.) Voronoff’s transplants seemed to show that, at least physiologically,
little separated lower primates and humans.
Ivanov also knew that quite distinct species can reproduce together. He
himself had blended antelopes with cows, guinea pigs with rabbits, and
zebras with donkeys. Besides amusing the tsar and his minions (very
important), this work proved that animals whose lines had diverged even
millions of years ago could still have children, and later experiments by
other scientists provided further proof. Pretty much any fantasy you’ve got
—lions with tigers, sheep with goats, dolphins with killer whales—
scientists have fulfilled it somewhere. True, some of these hybrids were and
are sterile, genetic dead ends. But only some: biologists find many bizarre
couplings in the wild, and of the more than three hundred mammalian
species that “outbreed” naturally, fully one-third produce fertile children.
Ivanov fervently believed in crossbreeding, and after he sprinkled some
good old Marxist materialism into his calculations—which denied human
beings anything as gauche as a soul that might not condescend to
commingle with chimps—then his humanzee experiments seemed, well,
doable.
A modern zonkey—a zebra-donkey mix. Ilya Ivanov created zonkeys (which he called
“zeedonks”) and many other genetic hybrids before pursuing humanzees. (Tracy N. Brandon)
Scientists don’t know even today whether humanzees, however icky and
unlikely, are at least possible. Human sperm can pierce the outer layer of
some primate eggs in the lab, the first step in fertilization, and human and
chimpanzee chromosomes look much the same on a macro scale. Heck,
human DNA and chimp DNA even enjoy each other’s company. If you
prepare a solution with both DNAs and heat it up until the double strands
unwind, human DNA has no problem embracing chimp DNA and zipping
back up with it when things cool down. They’re that similar.*
What’s more, a few primate geneticists think that our ancestors resorted
to breeding with chimps long after we’d split away to become a separate
species. And according to their controversial but persistent theory, we
copulated with chimps far longer than most of us are comfortable thinking
about, for a million years. If true, our eventual divergence from the chimp
line was a complicated and messy breakup, but not inevitable. Had things
gone another way, our sexual proclivities might well have rubbed the
human line right out of existence.
The theory goes like this. Seven million years ago some unknown event
(maybe an earthquake opened a rift; maybe half the group got lost looking
for food one afternoon; maybe a bitter butter battle broke out) split a small
population of primates. And with every generation they remained apart,
these two separate groups of chimp-human ancestors would have
accumulated mutations that gave them unique characteristics. So far, this is
standard biology. More unusually, though, imagine that the two groups
reunited some time later. Again, the reason is impossible to guess; maybe
an ice age wiped out most of their habitats and squeezed them together into
small woodland refugia. Regardless, we don’t need to propose any
outlandish, Marquis de Sade motivations for what happened next. If lonely
or low in numbers, the protohumans might eagerly—despite having
forsworn the comforts of protochimps for a million years—have welcomed
them back into their beds (so to speak) when the groups reunited. A million
years may seem like forever, but the two protos would have been less
distinct genetically than many interbreeding species today. So while this
interbreeding might have produced some primate “mules,” it might have
produced fertile hybrids as well.
Therein lay the danger for protohumans. Scientists know of at least one
case in primate history, with macaques, when two long-separate species
began mating again and melded back into one, eliminating any special
differences between them. Our interbreeding with chimps was no weekend
fling or dalliance; it was long and involved. And if our ancestors had said
what the hell and settled down with protochimpanzees permanently, our
unique genes could have drowned in the general gene pool in the same way.
Not to sound all eugenicky, but we would have humped ourselves right out
of existence.
Of course, this all assumes that chimps and humans did revert to
sleeping together after an initial split. So what’s the evidence for this
charge? Most of it lies on our (wait for it) sex chromosomes, especially the
X. But it’s a subtle case.
When female hybrids have fertility trouble, the flaw usually traces back
to their having one X from one species, one X from another. For whatever
reason, reproduction just doesn’t go as smoothly with a mismatch.
Mismatched sex chromosomes hit males even harder: an X and a Y from
different species almost always leave them shooting blanks. But infertility
among women is a bigger threat to group survival. A few fertile males can
still impregnate loads of females, but no gang of fertile males can make up
for low female fecundity, because females can have children only so
quickly.
Nature’s solution here is genocide. That is, gene-o-cide: nature will
eliminate any potential mismatches among the interbreeders by eradicating
the X chromosome of one species. It doesn’t matter which, but one has to
go. It’s a war of attrition, really. Depending on the messy details of how
many protochimps and protohumans interbred, and then whom exactly the
first generation of hybrids reproduced with, and then their differential
birthrates and mortality—depending on all that, one species’ X
chromosomes probably appeared in higher numbers initially in the gene
pool. And in the subsequent generations, the X with the numbers advantage
would slowly strangle the other one, because anyone with similar Xs would
outbreed the half-breeds.
Notice there’s no comparable pressure to eliminate nonsex
chromosomes. Those chromosomes don’t mind being paired with
chromosomes from the other species. (Or if they do mind, their quarrel
likely won’t interfere with making babies, which is what counts to DNA.)
As a result, the hybrids and their descendants could have been full of
mismatched nonsex chromosomes and survived just fine.
Scientists realized in 2006 that this difference between sex and nonsex
chromosomes might explain a funny characteristic of human DNA. After
the initial split between their lines, protochimps and protohumans should
have started down different paths and accumulated different mutations on
each chromosome. And they did, mostly. But when scientists look at
chimps and humans today, their Xs look more uniform than other
chromosomes. The DNA clock on X got reset, it seems; it retained its
girlish looks.
We hear the statistic sometimes that we have 99 percent of our DNA
coding region in common with chimps, but that’s an average, overall
measure. It obscures the fact that human and chimp Xs, a crucial
chromosome for Ivanov’s work, look even more identical up and down the
line. One parsimonious way to explain this similarity is interbreeding and
the war of attrition that would probably have eliminated one type of X. In
fact, that’s why scientists developed the theory about protohuman and
protochimp mating in the first place. Even they admit it sounds a little batty,
but they couldn’t puzzle out another way to explain why human and chimp
X chromosomes have less variety than other chromosomes.
Fittingly, however (given the battle of the sexes), research related to Y
chromosomes may contradict the X-rated evidence for human-chimp
interbreeding. Again, scientists once believed that Y—which has undergone
massive shrinkage over the past 300 million years, down to a chromosomal
stub today—would one day disappear as it continued to shed genes. It was
considered an evolutionary vestige. But in truth Y has evolved rapidly even
in the few million years since humans swore off chimpanzees (or vice
versa). Y houses the genes to make sperm, and sperm production is an area
of fierce competition in wanton species. Many different protogents would
have had sex with each protolady, so one gent’s sperm constantly had to
wrestle with another gent’s inside her vagina. (Not appetizing, but true.)
One evolutionary strategy to secure an advantage here is to produce loads
and loads of sperm each time you ejaculate. Doing so of course requires
copying and pasting lots of DNA, because each sperm needs its own genetic
payload. And the more copying that takes place, the more mutations that
occur. It’s a numbers game.
However, these inevitable copying mistakes plague the X chromosome
less than any other chromosome because of our reproductive biology. Just
like making sperm, making an egg requires copying and pasting lots of
DNA. A female has equal numbers of every chromosome: two chromosome
ones, two chromosome twos, and so on, as well as two Xs. So during the
production of eggs, each chromosome, including the Xs, gets copied
equally often. Males also have two copies of chromosomes one through
twenty-two. But instead of two Xs, they have one X and one Y. During the
production of sperm, then, the X gets copied less often compared to other
chromosomes. And because it gets copied less often, it picks up fewer
mutations. That mutation gap between X and other chromosomes widens
even more when—because of Y chromosome–fueled sperm competition—
males begin churning out loads of sperm. Therefore, some biologists argue,
the seeming lack of mutations on X when comparing chimps and humans
might not involve an elaborate and illicit sexual history. It might result from
our basic biology, since X should always have fewer mutations.*
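For readers who want to see the arithmetic behind that alternative explanation, here is a minimal sketch. Nothing in it comes from measured data; it simply assumes, as the argument does, that mutations pile up in proportion to how often a chromosome gets copied, and that autosomes spend half their generations in males while an X spends only a third.

    # Toy calculation (illustrative assumptions only): how many mutations an X
    # should pick up relative to an autosome, given that an X is copied in the
    # male germline only one generation in three, while autosomes are copied
    # there every other generation.

    def x_to_autosome_ratio(alpha):
        """alpha = male/female per-generation mutation rate (sperm vs. egg)."""
        x_rate = (alpha + 2) / 3          # X: one-third of its copies made in males
        autosome_rate = (alpha + 1) / 2   # autosome: half of its copies made in males
        return x_rate / autosome_rate

    for alpha in (1, 2, 5, 10):
        print(f"sperm mutates {alpha:>2}x faster than egg -> "
              f"X carries {x_to_autosome_ratio(alpha):.2f}x an autosome's mutations")

Under these assumptions the ratio never rises above one and sinks toward two-thirds as sperm competition pushes the male mutation rate up, which is the whole point: X can keep its "girlish looks" without any interbreeding at all.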
Regardless of who’s right, work along these lines has undermined the
old view of the Y as a misfit of the mammal genome; it’s quite
sophisticated in its narrow way. But for humans, it’s hard to say if the
revisionary history is for the better. The pressure to develop virile sperm is
much higher in chimps than humans because male chimps have more sex
with different partners. In response, evolution has remade the chimp Y
thoroughly top to bottom. So thoroughly, in fact, that—contrary to what
most men probably want to believe—chimps have pretty much left us guys
in the dust evolutionarily. Chimps simply have tougher, smarter swimmers
with a better sense of direction, and the human Y looks obsolete in
comparison.
But that’s DNA for you—humbling. As one Y-chromosome specialist
comments, “When we sequenced the chimp genome, people thought we’d
understand why we have language and write poetry. But one of the most
dramatic differences turns out to be sperm production.”
Crisp mice in golden batter. Panther chops. Rhino pie. Trunk of elephant.
Crocodile for breakfast. Sliced porpoise head. Horse’s tongue. Kangaroo
ham.
Yes, domestic life was a trifle off at William Buckland’s. Some of his
Oxford houseguests best remembered the front hallway, lined like a
catacomb with the grinning skulls of fossilized monsters. Others
remembered the live monkeys swinging around, or the pet bear dressed in a
mortarboard cap and academic robes, or the guinea pig nibbling on people’s
toes beneath the dinner table (at least until the family hyena crushed it one
afternoon). Fellow naturalists from the 1800s remembered Buckland’s
bawdy lectures on reptile sex (though not always fondly; the young Charles
Darwin thought him a buffoon, and the London Times sniffed that Buckland
needed to watch himself “in the presence of ladies”). And no Oxonian ever
forgot the performance art stunt he pulled one spring when he wrote “G-U-
A-N-O” on the lawn with bat feces, to advertise it as fertilizer. The word
did indeed blaze green all summer.
But most people remembered William Buckland for his diet. A biblical
geologist, Buckland held the story of Noah’s ark dear, and he ate his way
through most of Noah’s litter, a habit he called “zoophagy.” Any flesh or
fluid from any beast was eligible for ingestion, be it blood, skin, gristle, or
worse. While touring a church once, Buckland startled a local vicar—who
was showing off the miraculous “martyr’s blood” that dripped from the
rafters every night—by dropping to the stone floor and dabbing the stain
with his tongue. Between laps Buckland announced, “It’s bat urine.”
Overall Buckland found few animals he couldn’t stomach: “The taste of
mole was the most repulsive I knew,” he once mused. “Until I tasted a
bluebottle [fly].”*
William Buckland ate his way through most of the animal kingdom. (Antoine Claudet)
Buckland may have hit upon zoophagy while collecting fossils in some
remote pocket of Europe with limited dining options. It may have been a
harebrained scheme to get inside the minds of the extinct animals whose
bones he dug up. Mostly, though, he just liked barbecuing, and he kept up
his hyper-carnivorous activities well into old age. But in one sense, the most
amazing thing about Buckland’s diet wasn’t the variety. It was that
Buckland’s intestines, arteries, and heart could digest so much flesh, period,
and not harden over the decades into a nineteenth-century Body Worlds
exhibit. Our primate cousins could never survive the same diet, not even
close.
Monkeys and apes have molars and stomachs adapted to pulping plant
matter, and eat mostly vegan diets in the wild. A few primates, like
chimpanzees, do eat a few ounces of termites or other animals each day on
average, and boy do they love tucking into small, defenseless mammals
now and then. But for most monkeys and apes, a high-fat, high-cholesterol
diet trashes their insides, and they deteriorate at sickening speeds compared
to modern humans. Captive primates with regular access to meat (and
dairy) often end up wheezing around inside their cages, their cholesterol
pushing 300 and their arteries paved with lard. Our protohuman ancestors
certainly also ate meat: they left too many stone cleavers lying next to piles
of megamammal bones for it all to be coincidence. But for eons early
humans probably suffered no less than monkeys for their love of flesh—
Paleolithic Elvises wandering the savanna.
So what changed between then and now, between Grunk in ancient
Africa and William Buckland at Oxford? Our DNA. Twice since we split
off from chimps, the human apoE gene has mutated, giving us distinct
versions. Overall it’s the strongest candidate around (though not the only
candidate) for a human “meat-eating gene.” The first mutation boosted the
performance of killer blood cells that attack microbes, like the deadly
microbes lingering in mouthfuls of raw flesh. It also protected against
chronic inflammation, the collateral tissue damage that occurs when
microbial infections never quite clear up. Unfortunately this apoE probably
mortgaged our long-term health for short-term gain: we could eat more
meat, but it left our arteries looking like the insides of Crisco cans. Lucky
for us, a second mutation appeared 220,000 years ago, which helped break
nasty fats and cholesterol down and spared us from premature decrepitude.
What’s more, by sweeping dietary toxins from the body, it kept cells fitter
and made bones denser and tougher to break in middle age, further
insurance against early death. So even though early humans ate a veritable
Roman-orgy diet compared to their fruitarian cousins, apoE and other genes
helped them live twice as long.
Before we congratulate ourselves, though, about getting our hands on
better apoEs than monkeys, a few points. For starters, bones with hack
marks and other archaeological evidence indicate that we started dining on
meat eons before the cholesterol-fighting apoE appeared, at least 2.5
million years ago. So for millions of years we were either too dim to link
eating meat and early retirement, too pathetic to get enough calories without
meat, or too brutishly indulgent to stop sucking down food we knew would
kill us. Even less flattering is what the germicidal properties of the earlier
apoE mutation imply. Archaeologists have found sharpened wooden spears
from 400,000 years ago, so some caveman studs were bringing home bacon
by then. But what about before that? The lack of proper weapons, and the
fact that apoE combats microbes—which thrive in, shall we say, less-than-
fresh cuts of carrion—hint that protohumans scavenged carcasses and ate
putrid leftovers. At best, we waited for other animals to fell game, then
scared them off and stole it, hardly a gallant enterprise. (At least we’re in
good company here. Scientists have been having the same debate for some
time about Tyrannosaurus rex: Cretaceous alpha-killer, or loathsome
poacher?)
Once again DNA humbles and muddies our view of ourselves. And
apoE is just one of many cases where DNA research has transformed our
knowledge of our ancient selves: filling in forgotten details in some
narratives, overthrowing long-held beliefs in others, but always, always
revealing how fraught hominid history has been.
The expansion of our ancestors across the globe required more than luck
and persistence. To dodge extinction after extinction, we also needed us
some brains. There’s clearly a biological basis for human intelligence; it’s
too universal not to be inscribed in our DNA, and (unlike most cells) brain
cells use almost all the DNA we have. But despite centuries of inquiry, by
everyone from phrenologists to NASA engineers, on subjects from Albert
Einstein to idiot savants, no one quite knows where our smarts come from.
Early attempts to find the biological basis of intelligence played off the
idea that bigger was better: more brain mass meant more thinking power,
just like more muscles meant more lifting power. Although intuitive, this
theory has its shortcomings; whales and their twenty-pound brains don’t
dominate the globe. So Baron Cuvier, the half Darwin, half Machiavelli
from Napoleonic France, suggested that scientists also examine a creature’s
brain-body ratio, to measure its relative brain weight as well.
Nonetheless scientists in Cuvier’s day maintained that bigger brains did
mean finer minds, especially within a species. The best evidence here was
Cuvier himself, a man renowned (indeed, practically stared at) for the
veritable pumpkin atop his shoulders. Still, no one could say anything
definitive about Cuvier’s brain until 7 a.m. on Tuesday, May 15, 1832,
when the greatest and most shameless doctors in Paris gathered to conduct
Cuvier’s autopsy. They sliced open his torso, sluiced through his viscera,
and established that he had normal organs. This duty dispatched, they
eagerly sawed through his skull and extracted a whale of a specimen, sixty-
five ounces, over 10 percent larger than any brain measured before. The
smartest scientist these men had ever known had the biggest brain they’d
ever seen. Pretty convincing.
By the 1860s, though, the tidy size-smarts theory had started unraveling.
For one, some scientists questioned the accuracy of the Cuvier
measurement—it just seemed too outré. No one had bothered to pickle and
preserve Cuvier’s brain, unfortunately, so these later scientists grasped at
whatever evidence they could find. Someone eventually dug up Cuvier’s
hat, which was indeed commodious; it fell over the eyes of most everyone
who donned it. But those wise in the ways of milliners pointed out that the
hat’s felt might have stretched over the years, leading to overestimates.
Tonsorial types suggested instead that Cuvier’s bushy hairdo had made his
head merely appear enormous, biasing his doctors to expect (and, because
expecting, find) a vast brain. Still others built a case that Cuvier suffered
from juvenile hydrocephaly, a feverish swelling of the brain and skull when
young. In that case, Cuvier’s big head might be accidental, unrelated to his
genius.*
Baron Cuvier—a half-Darwin, half-Machiavelli biologist who lorded over French science
during and after Napoleon—had one of the largest human brains ever recorded. (James
Thomson)
Fragments of Einstein’s brain, shellacked in hard celloidin after the physicist’s death in 1955.
(Getty Images)
This certainly wasn’t the first autopsy of a famous person to take a lurid
turn. Doctors set aside Beethoven’s ear bones in 1827 to study his deafness,
but a medical orderly nicked them. The Soviet Union founded an entire
institute in part to study Lenin’s brain and determine what makes a
revolutionary a revolutionary. (The brains of Stalin and Tchaikovsky also
merited preservation.) Similarly, and despite the body being mutilated by
mobs, Americans helped themselves to half of Mussolini’s brain after
World War II, to determine what made a dictator a dictator. That same year
the U.S. military seized four thousand pieces of human flesh from Japanese
coroners to study nuclear radiation damage. The spoils included hearts,
slabs of liver and brain, even disembodied eyeballs, all of which doctors
stored in jars in radiation-proof vaults in Washington, D.C., at a cost to
taxpayers of $60,000 per year. (The U.S. repatriated the remains in 1973.)
Even more grotesquely, William Buckland—in a story that’s possibly
apocryphal, but that his contemporaries believed—topped his career as a
gourmand when a friend opened a silver snuffbox to show off a desiccated
morsel of Louis XIV’s heart. “I have eaten many strange things, but have
never eaten the heart of a king,” Buckland mused. Before anyone thought to
stop him, Buckland wolfed it down. One of the all-time raciest stolen body
parts was the most private part of Cuvier’s patron, Napoleon. A spiteful
doctor lopped off L’Empereur’s penis during the autopsy in 1821, and a
crooked priest smuggled it to Europe. A century later, in 1927, the unit went
on sale in New York, where one observer compared it to a “maltreated strip
of buckskin shoelace.” It had shriveled to one and one-half inches, but a
urologist in New Jersey bought it anyway for $2,900. And we can’t wrap up
this creepy catalog without noting that yet another New Jersey doctor
disgracefully whisked away Einstein’s eyeballs in 1955. The doctor later
refused Michael Jackson’s offer to pay millions for them—partly because
the doc had grown fond of gazing into them. As for the rest of Einstein’s
body, take heart (sorry). It was cremated, and no one knows where in
Princeton his family scattered the ashes.*
Perhaps the most disheartening thing about the whole Einstein fiasco is
the paltry knowledge scientists gained. Neurologists ended up publishing
only three papers on Einstein’s brain in forty years, because most found
nothing extraordinary there. Harvey kept soliciting scientists to take another
look, but after the initial null results came back, the brain chunks mostly
just sat around. Harvey kept each section wrapped in cheesecloth and piled
them into two wide-mouthed glass cookie jars full of formaldehyde broth.
The jars themselves sat in a cardboard box labeled “Costa Cider” in
Harvey’s office, tucked behind a red beer cooler. When Harvey lost his job
later and took off for greener pastures in Kansas (where he moved in next
door to author and junkie William S. Burroughs), the brain rode shotgun in
his car.
In the past fifteen years, though, Harvey’s persistence has been justified,
a little. A few cautious papers have highlighted some atypical aspects of
Einstein’s brain, on both microscopic and macroscopic levels. Coupled with
loads of research into the genetics of brain growth, these findings may yet
provide some insight into what separates a human brain from an animal
brain, and what pushes an Einstein a few standard deviations beyond that.
First, the obsession with overall brain size has given way to obsessing
over the size of certain brain parts. Primates have particularly beefy neuron
shafts (called axons) compared to other animals and can therefore send
information through each neuron more quickly. Even more important is the
thickness of the cortex, the outermost brain layer, which promotes thinking
and dreaming and other flowery pursuits. Scientists know that certain genes
are crucial for growing a thick cortex, partly because it’s so sadly obvious
when these genes fail: people end up with primitively tiny brains. One such
gene is aspm. Primates have extra stretches of DNA in aspm compared to
other mammals, and this DNA codes for extra strings of amino acids that
bulk up the cortex. (These strings usually start with the amino acids
isoleucine and glutamine. In the alphabetic abbreviations that biochemists
use for amino acids, glutamine is usually shortened to Q [G was taken] and
isoleucine to plain I—which means we probably got an intelligence boost
from a string of DNA referred to, coincidentally, as the “IQ domain.”)
In tandem with increasing cortex size, aspm helps direct a process that
increases the density of neurons in the cortex, another trait that correlates
strongly with intelligence. This increase in density happens during our
earliest days, when we have loads of stem cells, undeclared cells that can
choose any path and become any type of cell. When stem cells begin
dividing in the incipient brain, they can either produce more stem cells, or
they can settle down, get a job, and become mature neurons. Neurons are
good, obviously, but each time a neuron forms, the production of new stem
cells (which can make additional neurons in the future) stops. So getting a
big brain requires building up the base population of stem cells first. And
the key to doing that is making sure that stem cells divide evenly: if the
cellular guts get divided equally between both daughter cells, each one
becomes another stem cell. If the split is unequal, neurons form
prematurely.
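A toy calculation shows how steep that trade-off is. The round counts below are invented purely for illustration; the sketch only assumes that a symmetric split doubles the stem cell pool, while an asymmetric split spends a round producing a single neuron.

    # Toy model (made-up round counts): neurons produced if the first k rounds of
    # division are symmetric (the stem cell pool doubles) and the rest are
    # asymmetric (each stem cell yields one neuron per round while replacing itself).

    def neurons_produced(symmetric_rounds, total_rounds=12):
        stem_cells = 2 ** symmetric_rounds
        asymmetric_rounds = total_rounds - symmetric_rounds
        return stem_cells * asymmetric_rounds

    for k in (4, 6, 8):
        print(f"{k} symmetric rounds first -> {neurons_produced(k):,} neurons")

Each doubling lost early on roughly halves everything downstream, which is why a gene that merely keeps the early splits even can have such an outsized effect on final brain size.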
To facilitate an even split, aspm guides the “spindles” that attach to
chromosomes and pull them apart in a nice, clean, symmetrical way. If
aspm fails, the split is uneven, neurons form too soon, and the child is
cheated of a normal brain. To be sure, aspm isn’t the gene responsible for
big brains: cell division requires intricate coordination among many genes,
with master regulator genes conducting everything from above, too. But
aspm can certainly pack the cortex with neurons* when it’s firing right—or
sabotage neuron production if it misfires.
Einstein’s cortex had a few unusual features. One study found that,
compared to normal elderly men, his had the same number of neurons and
the same average neuron size. However, part of Einstein’s cortex, the
prefrontal cortex, was thinner, which gave him a greater density of neurons.
Closely packed neurons may help the brain process information more
quickly—a tantalizing find considering that the prefrontal cortex
orchestrates thoughts throughout the brain and helps solve multistep
problems.
Further studies examined certain folds and grooves in Einstein’s cortex.
As with brain size, it’s a myth that simply having more folds automatically
makes a brain more potent. But folding does generally indicate higher
functioning. Smaller and dumber monkeys, for instance, have fewer
corrugations in their cortexes. As, interestingly, do newborn humans. Which
means that as we mature from infants to young adults, and as genes that
wrinkle our brains start kicking on, every one of us relives millions of years
of human evolution. Scientists also know that a lack of brain folds is
devastating. The genetic disorder “smooth brain” leaves babies severely
retarded, if they even survive to term. Instead of being succulently
furrowed, a smooth brain looks eerily polished, and cross sections of it,
instead of showing scrunched-up brain fabric, look like slabs of liver.
Einstein had unusual wrinkles and ridges in the cortex of his parietal
lobe, a region that aids in mathematical reasoning and image processing.
This comports with Einstein’s famous declaration that he thought about
physics mostly through pictures: he formulated relativity theory, for
instance, in part by imagining what would happen if he rode around
bareback on light rays. The parietal lobe also integrates sound, sight, and
other sensory input into the rest of the brain’s thinking. Einstein once
declared that abstract concepts achieved meaning in his mind “only through
their connection with sense-experiences,” and his family remembers him
practicing his violin whenever he got stuck with a physics problem. An
hour later, he’d often declare, “I’ve got it!” and return to work. Auditory
input seemed to jog his thinking. Perhaps most telling, the parietal wrinkles
and ridges in Einstein’s lobes were steroid thick, 15 percent bigger than
normal. And whereas most of us mental weaklings have skinny right
parietal lobes and even skinnier left parietal lobes, Einstein’s were equally
buff.
Finally, Einstein appeared to be missing part of his middle brain, the
parietal operculum; at the least, it didn’t develop fully. This part of the brain
helps produce language, and its lack might explain why Einstein didn’t
speak until age two and why until age seven he had to rehearse every
sentence he spoke aloud under his breath. But there might have been
compensations. This region normally contains a fissure, or small gap, and
our thoughts get routed the long way around. The lack of a gap might have
meant that Einstein could process certain information more speedily, by
bringing two separate parts of his brain into unusually direct contact.
All of which is exciting. But is it exciting bunkum? Einstein feared his
brain becoming a relic, but have we done something equally silly and
reverted to phrenology? Einstein’s brain has deteriorated into chopped liver
by now (it’s even the same color), which forces scientists to work mostly
from old photographs, a less precise method. And not to put too fine a point
on it, but Thomas Harvey coauthored half of the various studies on the
“extraordinary” features of Einstein’s brain, and he certainly had an interest
in science learning something from the organ he purloined. Plus, as with
Cuvier’s swollen brain, maybe Einstein’s features are idiosyncratic and had
nothing to do with genius; it’s hard to tell with a sample size of one. Even
trickier, we can’t sort out if unusual neurofeatures (like thickened folds)
caused Einstein’s genius, or if his genius allowed him to “exercise” and
build up those parts of his brain. Some skeptical neuroscientists note that
playing the violin from an early age (and Einstein started lessons at six) can
cause the same brain alterations observed in Einstein.
And if you had hopes of dipping into Harvey’s brain slices and
extracting DNA, forget it. In 1998, Harvey, his jars, and a writer took a road
trip in a rented Buick to visit Einstein’s granddaughter in California.
Although weirded out by Grandpa’s brain, Evelyn Einstein accepted the
visitors for one reason. She was poor, reputedly dim, and had trouble
holding down a job—not exactly an Einstein. In fact Evelyn was always
told she’d been adopted by Einstein’s son, Hans. But Evelyn could do a
little math, and when she started hearing rumors that Einstein had
canoodled with various lady friends after his wife died, Evelyn realized she
might be Einstein’s bastard child. The “adoption” might have been a ruse.
Evelyn wanted to do a genetic paternity test to settle things, but it turned out
that the embalming process had denatured the brain’s DNA. Other sources
of his DNA might still be floating around—strands in mustache brushes,
spittle on pipes, sweated-on violins—but for now we know more about the
genes of Neanderthals who died fifty thousand years ago than the genes of a
man who died in 1955.
But if Einstein’s genius remains enigmatic, scientists have sussed out a
lot about the everyday genius of humans compared to that of other primates.
Some of the DNA that enhances human intelligence does so in roundabout
ways. A two-letter frameshift mutation in humans a few million years ago
deactivated a gene that bulked up our jaw muscles. This probably allowed
us to get by with thinner, more gracile skulls, which in turn freed up
precious cc’s of skull for the brain to expand into. Another surprise was that
apoE, the meat-eating gene, helped a lot, by helping the brain manage
cholesterol. To function properly, the brain needs to sheathe its axons in
myelin, which acts like rubber insulation on wires and prevents signals
from short-circuiting or misfiring. Cholesterol is a major component of
myelin, and certain forms of apoE do a better job distributing brain
cholesterol where it’s needed. ApoE also seems to promote brain plasticity.
Some genes lead to direct structural changes in the brain. The lrrtm1
gene helps determine which exact patches of neurons control speech,
emotion, and other mental qualities, which in turn helps the human brain
establish its unusual asymmetry and left-right specialization. Some versions
of lrrtm1 even reverse parts of the left and right brain—and increase your
chances of being left-handed to boot, the only known genetic association
for that trait. Other DNA alters the brain’s architecture in almost comical
ways: certain inheritable mutations can cross-wire the sneeze reflex with
other ancient reflexes, leaving people achooing uncontrollably—up to forty-
three times in a row in one case—after looking into the sun, eating too
much, or having an orgasm. Scientists have also recently detected 3,181
base pairs of brain “junk DNA” in chimpanzees that got deleted in humans.
This region helps stop out-of-control neuron growth, which can lead to big
brains, obviously, but also brain tumors. Humans gambled in deleting this
DNA, but the risk apparently paid off, and our brains ballooned. The
discovery shows that it’s not always what we gained with DNA, but
sometimes what we lost, that makes us human. (Or at least makes us
nonmonkey: Neanderthals didn’t have this DNA either.)
How and how quickly DNA spreads through a population can reveal
which genes contribute to intelligence. In 2005 scientists reported that two
mutated brain genes seem to have swept torrentially through our ancestors,
microcephalin doing so 37,000 years ago, aspm just 6,000 years ago.
Scientists clocked this spread by using techniques first developed in the
Columbia fruit fly room. Thomas Hunt Morgan discovered that certain
versions of genes get inherited in clusters, simply because they reside near
each other on chromosomes. As an example, the A, B, and D versions of
three genes might normally appear together; or (lowercase) a, b, and d
might appear together. Over time, though, chromosomal crossing-over and
recrossing will mix the groups, giving combos like a, B, and D; or A, b, and
D. After enough generations, every combination will appear.
But say that B mutates to B′ at some point, and that B′ gives people a
hell of a brain tune-up. At that point it could sweep through a population,
since B′ people can outthink everyone else. (That spread will be especially
easy if the population drops very low, since the novel gene has less
competition. Bottlenecks aren’t always bad!) And notice that as B′ sweeps
through a population, the versions of A/a and D/d that happen to be sitting
next to B′ in the first person with the mutation will also sweep through the
population, simply because crossing over won’t have time to break the trio
apart. In other words, these genes will ride along with the advantageous
gene, a process called genetic hitchhiking. Scientists see especially strong
signs of hitchhiking with aspm and microcephalin, which means they
spread especially quickly and probably provided an especially strong
advantage.
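A crude simulation makes hitchhiking easy to see. Everything below is invented for illustration (the population size, selection strength, recombination rate, and the head start that lets the mutation escape early extinction by chance); it is a sketch of the general principle, not of the actual aspm or microcephalin data.

    # Toy Wright-Fisher sketch of genetic hitchhiking. Each haplotype carries
    # three linked loci. The beneficial mutation (standing in for B') starts out
    # on chromosomes that happen to carry the A and D versions of its neighbors;
    # selection drags it upward, and A and D ride along because recombination
    # rarely has time to break the trio apart.

    import random

    N = 2000       # haplotypes in the population
    S = 0.05       # selective advantage of the B' mutation
    R = 0.0005     # per-generation chance a haplotype recombines at all

    def fresh_population():
        pop = [[random.choice("Aa"), "b", random.choice("Dd")] for _ in range(N)]
        for i in range(100):               # assume the sweep is already under way
            pop[i] = ["A", "B'", "D"]
        return pop

    def next_generation(pop):
        weights = [1 + S if h[1] == "B'" else 1.0 for h in pop]
        kids = [h[:] for h in random.choices(pop, weights=weights, k=N)]
        for h in kids:
            if random.random() < R:        # occasional crossover with a random partner
                cut = random.randint(1, 2)
                h[cut:] = random.choice(pop)[cut:]
        return kids

    pop = fresh_population()
    for _ in range(300):
        pop = next_generation(pop)

    for locus, allele in ((0, "A"), (1, "B'"), (2, "D")):
        share = sum(h[locus] == allele for h in pop) / N
        print(f"final frequency of {allele}: {share:.2f}")

Run it a few times: the selected mutation ends up nearly universal, and the A and D versions that merely sat beside it finish far above the roughly fifty-fifty split they started from; drop the recombination rate to zero and they fix right along with it.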
Beyond any specific brain-boosting genes, DNA regulation might
explain a lot about our gray matter. One flagrant difference between human
and monkey DNA is that our brain cells splice DNA far more often,
chopping and editing the same string of letters for many different effects.
Neurons mix it up so much, in fact, that some scientists think they’ve
upended one central dogma of biology—that all cells in your body have the
same DNA. For whatever reason, our neurons allow much more free play
among mobile DNA bits, the “jumping genes” that wedge themselves
randomly into chromosomes. This changes the DNA patterns in neurons,
which can change how they work. As one neuroscientist observes, “Given
that changing the firing patterns of single neurons can have marked effects
on behavior… it is likely that some [mobile DNA], in some cells, in some
humans, will have significant, if not profound, effects on the final structure
and function of the human brain.” Once again viruslike particles may prove
important to our humanity.
Beyond the fact that you can spin DNA itself into art, the two intersect on
more profound levels. The most miserable societies in human history still
found time to carve and color and croon, which strongly implies that
evolution wired these impulses into our genes. Even animals show artistic
urges. If introduced to painting, chimpanzees often skip feedings to keep
smearing canvases and sometimes throw tantrums if scientists take their
brushes and palettes away. (Cross, sunburst, and circle motifs dominate
their work, and chimps favor bold, Miró-esque lines.) Some monkeys also
have musical biases as ruthless as any hipster’s,* as do birds. And birds and
other creatures are far more discriminating connoisseurs of dance than your
average Homo sapiens, since many species dance to communicate or court
lovers.
Still, it’s not clear how to fix such impulses in a molecule. Does “artistic
DNA” produce musical RNA? Poetic proteins? What’s more, humans have
developed art qualitatively different from animal art. For monkeys, an eye
for strong lines and symmetry probably helps them craft better tools in the
wild, nothing more. But humans infuse art with deeper, symbolic meanings.
Those elks painted on cave walls aren’t just elks, they’re elks we will hunt
tomorrow or elk gods. For this reason, many scientists suspect that symbolic
art springs from language, since language teaches us to associate abstract
symbols (like pictures and words) with real objects. And given that
language has genetic roots, perhaps untangling the DNA of language skills
can illuminate the origins of art.
Perhaps. As with art, many animals have hardwired protolanguage
skills, with their warbles and screeches. And studies of human twins show
that around half of the variability in our normal, everyday aptitude with
syntax, vocabulary, spelling, listening comprehension—pretty much
everything—traces back to DNA. (Linguistic disorders show even stronger
genetic correlation.) The problem is, attempts to link linguistic skills or
deficits to DNA always run into thickets of genes. Dyslexia, for instance,
links to at least six genes, each of which contributes unknown amounts.
Even more confusing, similar genetic mutations can produce different
effects in different people. So scientists find themselves in the same
position as Thomas Hunt Morgan in the fruit fly room. They know that
genes and regulatory DNA “for” language exist; but how exactly that DNA
enhances our eloquence—increasing neuron counts? sheathing brain cells
more efficiently? fiddling with neurotransmitter levels?—no one knows.
Given this disarray, it’s easy to understand the excitement, even hype,
that attended the recent discovery of a purported master gene for language.
In 1990 linguists inferred the gene’s existence after studying three
generations of a London family known only (for privacy) as the KEs. In a
simple pattern of single-gene dominance, half the KEs suffer from a strange
suite of language malfunctions. They have trouble coordinating their lips,
jaws, and tongues, and stumble over most words, becoming especially
incomprehensible on the phone. They also struggle when asked to ape a
sequence of simple facial expressions, like opening their mouths, sticking
out their tongues, and uttering an uuuuaaaahh sound. But some scientists
argue that the KEs’ problems extend beyond motor skills to grammar. Most
of them know the plural of book is books, but seemingly because they’ve
memorized that fact. Give them made-up words like zoop or wug, and they
cannot figure out the plural; they see no connection between book/books
and zoop/zoops, even after years of language therapy. They also fail fill-in-
the-blank tests about the past tense, using words like “bringed.” The IQs of
affected KEs sink pretty low—86 on average, versus 104 for nonaffected
KEs. But the language hiccups probably aren’t a simple cognitive deficit: a
few afflicted KEs have nonverbal IQ scores above average, and they can
spot logical fallacies in arguments when tested. Plus, some scientists found
that they understand reflexives just fine (e.g., “he washed him” versus “he
washed himself”), as well as passive versus active voice and possessives.
It baffled scientists that one gene could cause such disparate symptoms,
so in 1996 they set out to find and decode it. They narrowed its locus down
to fifty genes on chromosome seven and were tediously working through
each one when they caught a break. Another victim turned up, CS, from an
unrelated family. The boy presented with the same mental and mandibular
problems, and doctors spotted a translocation in his genes: a Philadelphia-
like swap between the arms of two chromosomes, which interrupted the
foxp2 gene on chromosome seven.
Like vitamin A, the protein produced by foxp2 clamps onto other genes
and switches them on. Also like vitamin A, foxp2 has a long reach,
interacting with hundreds of genes and steering fetal development in the
jaw, gut, lungs, heart, and especially the brain. All mammals have foxp2,
and despite billions of years of collective evolution, all versions look pretty
much the same; humans have accumulated just three amino acid differences
compared to mice. (This gene looks strikingly similar in songbirds as well,
and is especially active when they’re learning new songs.) Intriguingly,
humans picked up two of our amino-acid changes after splitting from
chimps, and these changes allow foxp2 to interact with many new genes.
Even more intriguingly, when scientists created mutant mice with the
human foxp2, the mice had different neuron architecture in a brain region
that (in us) processes language, and they conversed with fellow mice in
lower-pitched, baritone squeaks.
Conversely, in the affected KEs’ brains, the regions that help produce
language are stunted and have low densities of neurons. Scientists have
traced these deficits back to a single A-for-G mutation. This substitution
altered just one of foxp2’s 715 amino acids, but it’s enough to prevent the
protein from binding to DNA. Unfortunately, this mutation occurs in a
different part of the gene than the human-chimp mutations, so it can’t
explain much about the evolution and original acquisition of language. And
regardless, scientists still face a cause-and-effect tangle with the KEs: did
the neurological deficits cause their facial clumsiness, or did their facial
clumsiness lead to brain atrophy by discouraging them from practicing
language? Foxp2 can’t be the only language gene anyway, since even the
most afflicted in the KE clan aren’t devoid of language; they’re orders of
magnitude more eloquent than any simian. (And sometimes they seemed
more creative than the scientists testing them. When presented with the
puzzler “Every day he walks eight miles. Yesterday he_____,” instead of
answering, “walked eight miles,” one afflicted KE muttered, “had a rest.”)
Overall, then, while foxp2 reveals something about the genetic basis of
language and symbolic thought, the gene has proved frustratingly
inarticulate so far.
Even the one thing scientists had all agreed on with foxp2—its unique
form in humans—proved wrong. Homo sapiens split from other Homo
species hundreds of thousands of years ago, but paleogeneticists recently
discovered the human version of foxp2 in Neanderthals. This might mean
nothing. But it might mean that Neanderthals also had the fine motor skills
for language, or the cognitive wherewithal. Perhaps both: finer motor skills
might have allowed them to use language more, and when they used it
more, maybe they found they had more to say.
All that’s certain is that the foxp2 discovery makes another debate about
Neanderthals, over Neanderthal art, more urgent. In caves occupied by
Neanderthals, archaeologists have discovered flutes made from bear
femurs, as well as oyster shells stained red and yellow and perforated for
stringing on necklaces. But good luck figuring out what these trinkets
meant to Neanderthals. Again, perhaps Neanderthals just aped humans and
attached no symbolic meaning to their toys. Or perhaps humans, who often
colonized Neanderthal sites after Neanderthals died, simply tossed their
worn-out flutes and shells in with Neanderthal rubbish, scrambling the
chronology. The truth is, no one has any idea how articulate or artsy-fartsy
Neanderthals were.
So until scientists catch another break—find another KE family with
different DNA flaws, or root out more unexpected genes in Neanderthals—
the genetic origins of language and symbolic art will remain murky. In the
meantime we’ll have to content ourselves with tracing how DNA can
augment, or make a mess of, the work of modern artists.
No different than with athletes, tiny bits of DNA can determine whether
budding musicians fulfill their talents and ambitions. A few studies have
found that one key musical trait, perfect pitch, gets inherited with the same
dominant pattern as the KE language deficit, since people with perfect pitch
passed it to half their children. Other studies found smaller and subtler
genetic contributions for perfect pitch instead, and found that this DNA
must act in concert with environmental cues (like music lessons) to bestow
the gift. Beyond the ear, physical attributes can enhance or doom a musician
as well. Sergei Rachmaninoff’s gigantic hands—probably the result of
Marfan syndrome, a genetic disorder—could span twelve inches, an octave
and a half on the piano, which allowed him to compose and play music that
would tear the ligaments of lesser-endowed pianists. On the other mitt,
Robert Schumann’s career as a concert pianist collapsed because of focal
dystonia—a loss of muscle control that caused his right middle finger to curl or jerk
involuntarily. Many people with this condition have a genetic susceptibility,
and Schumann compensated by writing at least one piece that avoided that
finger entirely. But he never let up on his grinding practice schedule, and a
jerry-built mechanical rack he designed to stretch the finger may have
exacerbated his symptoms.
Still, in the long, gloried history of ailing and invalid musicians, no
DNA proved a more ambivalent friend and ambiguous foe than the DNA of
nineteenth-century musician Niccolò Paganini, the violin virtuoso’s violin
virtuoso. The opera composer (and noted epicurean) Gioacchino Rossini
didn’t like acknowledging that he ever wept, but one of the three times he
owned up to crying* was when he heard Paganini perform. Rossini bawled
then, and he wasn’t the only one bewitched by the ungainly Italian.
Paganini wore his dark hair long and performed his concerts in black frock
coats with black trousers, leaving his pale, sweaty face hovering spectrally
onstage. He also cocked his hips at bizarre angles while performing, and
sometimes crossed his elbows at impossible angles in a rush of furious
bowing. Some connoisseurs found his concerts histrionic, and accused him
of fraying his violin strings before shows so they’d snap dramatically
midperformance. But no one ever denied his showmanship: Pope Leo XII
named him a Knight of the Golden Spur, and royal mints struck coins with
his likeness. Many critics hailed him as the greatest violinist ever, and he
has proved almost a singular exception to the rule in classical music that
only composers gain immortality.
Paganini rarely if ever played the old masters during his concerts,
preferring his own compositions, which highlighted his finger-blurring
dexterity. (Ever a crowd-pleaser, he also included lowbrow passages where
he mimicked donkeys and roosters on his violin.) Since his teenage years,
in the 1790s, Paganini had labored over his music; but he also understood
human psychology and so encouraged various legends about the
supernatural origins of his gifts. Word got around that an angel had
appeared at Paganini’s birth and pronounced that no man would ever play
the violin so sweetly. Six years later, divine favor seemingly resurrected
him from a Lazarus-like doom. After he fell into a cataleptic coma, his
parents gave him up for dead—they wrapped him in a burial shroud and
everything—when, suddenly, something made him twitch beneath the cloth,
saving him by a whisker from premature burial. Despite these miracles,
people more often attributed Paganini’s talents to necromancy, insisting
he’d signed a pact with Satan and exchanged his immortal soul for
shameless musical talent. (Paganini fanned these rumors by holding
concerts in cemeteries at twilight and giving his compositions names like
“Devil’s Laughter” and “Witches’ Dance,” as if he had firsthand
experience.) Others argued that he’d acquired his skills in dungeons, where
he’d supposedly been incarcerated for eight years for stabbing a friend and
had nothing better to do than practice violin. More sober types scoffed at
these stories of witchcraft and iniquity. They patiently explained that
Paganini had hired a crooked surgeon to snip the motion-limiting ligaments
in his hands. Simple as that.
However ludicrous, that last explanation hits closest to the mark.
Because beyond Paganini’s passion, charisma, and capacity for hard work,
he did have unusually supple hands. He could unfurl and stretch his fingers
impossibly far, his skin seemingly about to rip apart. His finger joints
themselves were also freakishly flexible: he could wrench his thumb across
the back of his hand to touch his pinky (try this), and he could wriggle his
midfinger joints laterally, like tiny metronomes. As a result, Paganini could
dash off intricate riffs and arpeggios that other violinists didn’t dare, hitting
many more high and low notes in swift succession—up to a thousand notes
per minute, some claim. He could double- or triple-stop (play multiple notes
at once) with ease, and he perfected unusual techniques, like left-handed
pizzicato, a plucking technique that took advantage of his plasticity.
Normally the right hand (the bow hand) does pizzicato, forcing the violinist
to choose between bowing and plucking during each passage. With left-handed
pizzicato, Paganini didn’t have to choose. His nimble fingers could bow one
note and pluck the next, as if two violins were playing at once.
Beyond being flexible, his fingers were deceptively strong, especially
the thumbs. Paganini’s great rival Karol Lipiński watched him in concert
one evening in Padua, then retired to Paganini’s room for a late dinner and
some chitchat with Paganini and friends. At the table, Lipiński found a
disappointingly scanty spread for someone of Paganini’s stature, mostly
eggs and bread. (Paganini could not even be bothered to eat that and
contented himself with fruit.) But after some wine and some jam sessions
on the guitar and trumpet, Lipiński found himself staring at Paganini’s
hands. He even embraced the master’s “small bony fingers,” turning them
over. “How is it possible,” Lipiński marveled, “for these thin small fingers
to achieve things requiring extraordinary strength?” Paganini answered,
“Oh, my fingers are stronger than you think.” At this he picked up a saucer
of thick crystal and suspended it over the table, fingers below, thumb on
top. Friends gathered around to laugh—they’d seen the trick before. While
Lipi ski stared, bemused, Paganini flexed his thumb almost imperceptibly
and—crack!—snapped the saucer into two shards. Not to be outdone, Lipi
ski grabbed a plate and tried to shatter it with his own thumb, but couldn’t
come close. Nor could Paganini’s friends. “The saucers remained just as
they were before,” Lipi ski recalled, “while Paganini laughed maliciously”
at their futility. It seemed almost unfair, this combination of power and
agility, and those who knew Paganini best, like his personal physician,
Francesco Bennati, explicitly credited his success to his wonderfully
tarantular hands.
Of course, as with Einstein’s violin training, sorting out cause and effect
gets tricky here. Paganini had been a frail child, sickly and prone to coughs
and respiratory infections, but he nevertheless began intensive violin
lessons at age seven. So perhaps he’d simply loosened up his fingers
through practice. However, other symptoms indicate that Paganini had a
genetic condition called Ehlers-Danlos syndrome. People with EDS cannot
make much collagen, a fiber that gives ligaments and tendons some rigidity
and toughens up bone. The benefit of having less collagen is circus
flexibility. Like many people with EDS, Paganini could bend all his joints
alarmingly far backward (hence his contortions onstage). But collagen does
more than prevent most of us from touching our toes: a chronic lack can
lead to muscle fatigue, weak lungs, irritable bowels, poor eyesight, and
translucent, easily damaged skin. Modern studies have shown that
musicians have high rates of EDS and other hypermobility syndromes (as
do dancers), and while this gives them a big advantage at first, they tend to
develop debilitating knee and back pain later, especially if, like Paganini,
they stand while performing.
Widely considered the greatest violinist ever, Niccolò Paganini owed much of his gift to a
genetic disorder that made his hands freakishly flexible. Notice the grotesquely splayed thumb.
(Courtesy of the Library of Congress)
Constant touring wore Paganini down after 1810, and although he’d just
entered his thirties, his body began giving out on him. Despite his growing
fortune, a landlord in Naples evicted him in 1818, convinced that anyone as
skinny and sickly as Paganini must have tuberculosis. He began canceling
engagements, unable to perform his art, and by the 1820s he had to sit out
whole years of tours to recuperate. Paganini couldn’t have known that EDS
underlay his general misery; no doctor described the syndrome formally
until 1901. But ignorance only heightened his desperation, and he sought
out quack apothecaries and doctors. After diagnosing syphilis and
tuberculosis and who knows what else, the docs prescribed him harsh,
mercury-based purgative pills, which ravaged his already fragile insides.
His persistent cough worsened, and eventually his voice died completely,
silencing him. He had to wear blue-tinted shades to shield his sore retinas,
and at one point his left testicle swelled, he sobbed, to the size of “a little
pumpkin.” Because of chronic mercury damage to his gums, he had to bind
his wobbly teeth with twine to eat.
Sorting out why Paganini finally died, in 1840, is like asking what
knocked off the Roman Empire—take your pick. Abusing mercury drugs
probably did the most intense damage, but Dr. Bennati, who knew Paganini
before his pill-popping days and was the only doctor Paganini never
dismissed in a rage for fleecing him, traced the real problem further back.
After examining Paganini, Bennati dismissed the diagnoses of tuberculosis
and syphilis as spurious. He noted instead, “Nearly all [Paganini’s] later
ailments can be traced to the extreme sensitivity of his skin.” Bennati felt
that Paganini’s papery EDS skin left him vulnerable to chills, sweats, and
fevers and aggravated his frail constitution. Bennati also described the
membranes of Paganini’s throat, lungs, and colon—all areas affected by
EDS—as highly susceptible to irritation. We have to be cautious about
reading too much into a diagnosis from the 1830s, but Bennati clearly
traced Paganini’s vulnerability to something inborn. And in the light of
modern knowledge, it seems likely Paganini’s physical talents and physical
tortures had the same genetic source.
Paganini’s afterlife was no less doomed. On his deathbed in Nice, he
refused communion and confession, believing they would hasten his
demise. He died anyway, and because he’d skipped the sacraments, during
Eastertide no less, the Catholic Church refused him proper burial. (As a
result his family had to schlep his body around ignominiously for months. It
first lay for sixty days in a friend’s bed, before health officials stepped in.
His corpse was next transferred to an abandoned leper’s hospital, where a
crooked caretaker charged tourists money to gawk at it, then to a cement tub
in an olive oil processing plant. Family finally smuggled his bones back
into Genoa in secret and interred him in a private garden, where he lay for
thirty-six years, until the church finally forgave him and permitted burial.*)
Paganini’s ex post facto excommunication fueled speculation that
church elders had it in for Paganini. He did cut the church out of his ample
will, and the Faustian stories of selling his soul couldn’t have helped. But
the church had plenty of nonfictional reasons to spurn the violinist.
Paganini gambled flagrantly, even betting his violin once before a show.
(He lost.) Worse, he caroused with maidens, charwomen, and blue-blooded
dames all across Europe, betraying a truly capacious appetite for
fornication. In his most ballsy conquests, he allegedly seduced two of
Napoleon’s sisters, then discarded them. “I am ugly as sin, yet all I have to
do,” he once bragged, “is play my violin and women fall at my feet.” The
church was not impressed.
Nevertheless Paganini’s hypersexual activity brings up a salient point
about genetics and fine arts. Given their ubiquity, DNA probably encodes
some sort of artistic impulses—but why? Why should we respond so
strongly to the arts? One theory is that our brains crave social interaction
and affirmation, and shared stories, songs, and images help people bond.
Art, in this view, fosters societal cohesion. Then again, our cravings for art
could be an accident. Our brain circuits evolved to favor certain sights,
sounds, and emotions in our ancient environment, and the fine arts might
simply exploit those circuits and deliver sights, sounds, and emotions in
concentrated doses. In this view, art and music manipulate our brains in
roughly the same way that chocolate manipulates our tongues.
Many scientists, though, explain our lust for art through a process called
sexual selection, a cousin of natural selection. In sexual selection, the
creatures that mate the most often and pass on their DNA don’t necessarily
do so because they have survival advantages; they’re simply prettier, sexier.
Sexy in most creatures means brawny, well-proportioned, or lavishly
decorated—think bucks’ antlers and peacocks’ tails. But singing or dancing
can also draw attention to someone’s robust physical health. And painting
and witty poetry highlight someone’s mental prowess and agility—talents
crucial for navigating the alliances and hierarchies of primate society. Art,
in other words, betrays a sexy mental fitness.
Now, if talents on par with Matisse or Mozart seem a trifle elaborate for
getting laid, you’re right; but immodest overabundance is a trademark of
sexual selection. Imagine how peacock tails evolved. Shimmering feathers
made some peacocks more attractive long ago. But big, bright tails soon
became normal, since genes for those traits spread in the next generations.
So only males with even bigger and brighter feathers won attention. But
again, as the generations passed, everyone caught up. So winning attention
required even more ostentation—until things got out of hand. In the same
way, turning out a perfect sonnet or carving a perfect likeness from marble
(or DNA) might be us thinking apes’ equivalent of four-foot plumage,
fourteen-point antlers, and throbbing-red baboon derrieres.*
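For readers who want to see that ratchet spelled out, here is a minimal toy simulation of runaway selection. Nothing in it comes from any study; the population size, the noise level, and the "only above-average tails breed" rule are all invented purely for illustration.

```python
import random

# Toy sketch of runaway sexual selection (illustrative numbers only):
# males carry a heritable "tail size"; each generation only males with
# above-average tails get to breed, so the population average drifts
# upward even though a big tail confers no survival benefit.
def simulate(generations=10, pop=1000, start=1.0, noise=0.05):
    tails = [start + random.gauss(0, noise) for _ in range(pop)]
    for gen in range(1, generations + 1):
        avg = sum(tails) / pop
        sires = [t for t in tails if t >= avg]        # the "sexier" males breed
        tails = [random.choice(sires) + random.gauss(0, noise)
                 for _ in range(pop)]                 # offspring inherit, with noise
        print(f"generation {gen}: average tail = {sum(tails) / pop:.2f}")

if __name__ == "__main__":
    simulate()
```

Run it and the average tail creeps upward generation after generation, even though a bigger tail does nothing for survival; that is the out-of-hand escalation in miniature.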
Of course, while Paganini’s talents raised him to the apex of European
society, his DNA hardly made him worthy stud material: he was a mental
and physical wreck. It just goes to show that people’s sexual desires can all
too easily fall out of alignment with the utilitarian urge to pass on good genes.
Sexual attraction has its own potency and power, and culture can override
our deepest sexual instincts and aversions, making even genetic taboos like
incest seem attractive. So attractive, in fact, that in certain circumstances,
those very perversions have informed and influenced our greatest art.
Painter Henri Toulouse-Lautrec, the offspring of first cousins, had a genetic disorder that stunted
his growth and subtly shaped his art. He often sketched or painted from unusual points of view.
(Henri Toulouse-Lautrec)
But Toulouse-Lautrec had immersed himself in the Paris art scene, and it
was then, in the 1880s, that his DNA began to shape his art. His genetic
disorder had left him frankly unattractive, bodily and facially—rotting his
teeth, swelling his nose, and causing his lips to flop open and drool. To
make himself more appealing to women, he masked his face somewhat with
a stylish beard and also, like Paganini, encouraged certain rumors. (He
allegedly earned the nickname “Tripod” for his stumpy legs and long, you
know.) Still, the funny-looking “dwarf” despaired of ever winning a
mistress, so he began cruising for women in the slummy Paris bars and
bordellos, sometimes disappearing into them for days. And in all of noble
Paris, that’s where this aristocrat found his inspiration. He encountered
scores of tarts and lowlifes, but despite their low status Toulouse-Lautrec
took the time to draw and paint them, and his work, even when shading
comic or erotic, lent them dignity. He found something human, even noble,
in dilapidated bedrooms and back rooms, and unlike his impressionist
predecessors, Toulouse-Lautrec renounced sunsets, ponds, sylvan woods,
all outdoor scenes. “Nature has betrayed me,” he explained, and he
forswore nature in return, preferring to have cocktails at hand and women
of ill repute posing in front of him.
His DNA likely influenced the type of art he did as well. With his
stubby arms, and with hands he mocked as grosses pattes (fat paws),
manipulating brushes and painting for long stretches couldn’t have been
easy. This may have contributed to his decision to devote so much time to
posters and prints, less awkward mediums. He also sketched extensively.
The Tripod wasn’t always extended in the brothels, and during his
downtime, Henri whipped up thousands of fresh drawings of women in
intimate or contemplative moments. What’s more, in both these sketches
and his more formal portraits of the Moulin Rouge, he often took unusual
points of view—drawing figures from below (a “nostril view”), or cutting
their legs out of the frame (he loathed dwelling on others’ legs, given his
own shortcomings), or raking scenes at upward angles, angles that someone
of greater physical but lesser artistic stature might never have perceived.
One model once remarked to him, “You are a genius of deformity.” He
responded, “Of course I am.”
Unfortunately, the temptations of the Moulin Rouge—casual sex, late
nights, and especially “strangling the parakeet,” Toulouse-Lautrec’s
euphemism for drinking himself stupid—depleted his delicate body in the
1890s. His mother tried to dry him out and had him institutionalized, but the
cures never took. (Partly because Toulouse-Lautrec had a hollowed-out cane
custom-made, which he filled with absinthe and sipped from surreptitiously.) After
relapsing again in 1901, Toulouse-Lautrec had a brain-blowing stroke and
died from kidney failure just days later, at thirty-six. Given the painters in
his glorious family line, he probably had some genes for artistic talent
etched inside him; the counts of Toulouse had also bequeathed him his
stunted skeleton, and given their equally notable history of dipsomania,
they probably gave him genes that contributed to his alcoholism as well. As
with Paganini, if Toulouse-Lautrec’s DNA made him an artist in one sense,
it undid him at last.
PART IV
The Oracle of DNA
Genetics in the Past, Present, and Future
13
The Past Is Prologue—Sometimes
What Can (and Can’t) Genes Teach Us About
Historical Heroes?
All of them are past helping, so it’s not clear why we bother. But whether
it’s Chopin (cystic fibrosis?), Dostoyevsky (epilepsy?), Poe (rabies?), Jane
Austen (adult chicken pox?), Vlad the Impaler (porphyria?), or Vincent van
Gogh (half the DSM), we’re incorrigible about trying to diagnose the
famous dead. We persist in guessing despite a rather dubious record, in fact.
Even fictional characters sometimes receive unwarranted medical advice.
Doctors have confidently diagnosed Ebenezer Scrooge with OCD, Sherlock
Holmes with autism, and Darth Vader with borderline personality disorder.
A gawking fascination with our heroes certainly explains some of this
impulse, and it’s inspiring to hear how they overcame grave threats. There’s
an undercurrent of smugness, too: we solved a mystery previous generations
couldn’t. Above all, as one doctor remarked in the Journal of the American
Medical Association in 2010, “The most enjoyable aspect of retrospective
diagnoses [is that] there is always room for debate and, in the face of no
definitive evidence, room for new theories and claims.” Those claims often
take the form of extrapolations—counterfactual sweeps that use mystery
illnesses to explain the origins of masterpieces or wars. Did hemophilia
bring down tsarist Russia? Did gout provoke the American Revolution? Did
bug bites midwife Charles Darwin’s theories? But while our amplified
knowledge of genetics makes trawling through ancient evidence all the
more tempting, in practice genetics often adds to the medical and moral
confusion.
For various reasons—a fascination with the culture, a ready supply of
mummies, a host of murky deaths—medical historians have pried
especially into ancient Egypt and into pharaohs like Amenhotep IV.
Amenhotep has been called Moses, Oedipus, and Jesus Christ rolled into
one, and while his religious heresies eventually destroyed his dynasty, they
also ensured its immortality, in a roundabout way. In the fourth year of his
reign in the mid-1300s BC, Amenhotep changed his name to Akhenaten
(“spirit of the sun god Aten”). This was his first step in rejecting the rich
polytheism of his forefathers for a starker, more monotheistic worship.
Akhenaten soon constructed a new “sun-city” to venerate Aten, and shifted
Egypt’s normally nocturnal religious services to Aten’s prime afternoon
hours. Akhenaten also announced the convenient discovery that he was
Aten’s long-lost son. When hoi polloi began grumbling about these
changes, he ordered his praetorian thugs to destroy any pictures of deities
besides his supposed father, whether on public monuments or some poor
family’s crockery. Akhenaten even became a grammar nazi, purging all
traces of the plural hieroglyphic gods in public discourse.
Akhenaten’s seventeen-year reign witnessed equally heretical changes in
art. In murals and reliefs from Akhenaten’s era, the birds, fish, game, and
flowers start to look realistic for the first time. Akhenaten’s harem of artists
also portrayed his royal family—including Nefertiti, his most favored wife,
and Tutankhamen, his heir apparent—in shockingly mundane domestic
scenes, eating meals or caressing and smooching. Yet despite the care to get
most details right, the bodies themselves of the royal family members
appear grotesque, even deformed. It’s all the more mysterious because
servants and other less-exalted humans in these portraits still look, well,
human. Pharaohs in the past had had themselves portrayed as North African
Adonises, with square shoulders and dancers’ physiques. Not Akhenaten;
amid the otherwise overwhelming naturalism, he, Tut, Nefertiti, and other
blue bloods look downright alien.
Archaeologists describing this royal art sound like carnival barkers. One
promises you’ll “recoil from this epitome of physical repulsiveness.”
Another calls Akhenaten a “humanoid praying mantis.” The catalog of
freakish traits could run for pages: almond-shaped heads, squat torsos,
spidery arms, chicken legs (complete with knees bending backward),
Hottentot buttocks, Botox lips, concave chests, pendulous potbellies, and so
on. In many pictures Akhenaten has breasts, and the only known nude
statue of him has an androgynous, Ken-doll crotch. In short, these works are
the anti-David, the anti–Venus de Milo, of art history.
As with the Hapsburg portraits, some Egyptologists see the pictures as
evidence of hereditary deformities in the pharaonic line. Other evidence
dovetails with this idea, too. Akhenaten’s older brother died in childhood of
a mysterious ailment, and a few scholars believe Akhenaten was excluded
from court ceremonies when young because of physical handicaps. And in
his son Tut’s tomb, amid the plunder, archaeologists discovered 130
walking canes, many showing signs of wear. Unable to resist, doctors have
retroactively diagnosed these pharaohs with all sorts of ailments, like
Marfan syndrome and elephantiasis. But however suggestive, each
diagnosis suffered from a crippling lack of hard evidence.
The Egyptian pharaoh Akhenaten (seated left) had his court artists depict him and his family as
bizarre, almost alien figures, leading many modern doctors to retrodiagnose Akhenaten with
genetic ailments. (Andreas Praefcke)
Enter genetics. The Egyptian government had long hesitated to let
geneticists have at their most precious mummies. Boring into tissues or
bones inevitably destroys small bits of them, and paleogenetics was pretty
iffy at first, plagued by contamination and inconclusive results. Only in
2007 did Egypt relent, allowing scientists to withdraw DNA from five
generations of mummies, including Tut and Akhenaten. When combined
with meticulous CT scans of the corpses, this genetic work helped resolve
some enigmas about the era’s art and politics.
First, the study turned up no major defects in Akhenaten or his family,
which hints that the Egyptian royals looked like normal people. That means
the portraits of Akhenaten—which sure don’t look normal—probably didn’t
strive for verisimilitude. They were propaganda. Akhenaten apparently
decided that his status as the sun god’s immortal son lifted him so far above
the normal human rabble that he had to inhabit a new type of body in public
portraiture. Some of Akhenaten’s strange features in the pictures (distended
bellies, porcine haunches) call to mind fertility deities, so perhaps he
wanted to portray himself as the womb of Egypt’s well-being as well.
All that said, the mummies did show subtler deformities, like clubbed
feet and cleft palates. And each succeeding generation had more to endure.
Tut, of the fourth generation, inherited both clubfoot and a cleft palate. He
also broke his femur when young, like Toulouse-Lautrec, and bones in his
foot died because of poor congenital blood supply. Scientists realized why
Tut suffered so when they examined his genes. Certain DNA “stutters”
(repetitive stretches of bases) get passed intact from parent to child, so they
offer a way to trace lineages. Unfortunately for Tut, both his parents had the
same stutters—because his mom and dad had the same parents. Nefertiti
may have been Akhenaten’s most celebrated wife, but for the crucial
business of producing an heir, Akhenaten turned to a sister.
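To make the "stutter" logic concrete, here is a minimal sketch of how inherited repeat counts can be used to test parentage. The marker names and repeat numbers are invented, and real ancient-DNA work has to cope with degradation and contamination that this toy ignores.

```python
# Illustrative sketch (all repeat counts invented): at each "stutter" marker
# (a short tandem repeat), a child inherits one repeat count from each parent.
# Checking that every marker is consistent with that rule is the core idea
# behind repeat-based kinship testing.

def consistent(child, mom, dad):
    """True if, at every marker, the child's two repeat counts could have
    come one from mom and one from dad."""
    for marker, (a, b) in child.items():
        m, d = set(mom[marker]), set(dad[marker])
        if not ((a in m and b in d) or (a in d and b in m)):
            return False
    return True

# Hypothetical profiles: two repeat counts per marker per person.
mom   = {"D1": (12, 14), "D2": (8, 9)}
dad   = {"D1": (12, 15), "D2": (9, 11)}
child = {"D1": (14, 15), "D2": (9, 9)}

print(consistent(child, mom, dad))   # True: each allele traces to a parent
```

In Tut’s case the telling detail ran the other way: his mother’s and father’s profiles matched each other at marker after marker, exactly what you would expect of full siblings.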
This incest likely compromised Tut’s immune system and did the
dynasty in. Akhenaten had, one historian noted, “a pathological lack of
interest” in anything beyond Egypt, and Egypt’s foreign enemies gleefully
raided the kingdom’s outer edges, imperiling state security. The problem
lingered after Akhenaten died, and a few years after the nine-year-old Tut
assumed the throne, the boy renounced his father’s heresies and restored the
ancient gods, hoping for better fortune. It didn’t come. While working on
Tut’s mummy, scientists found scads of malarial DNA deep inside his
bones. Malaria wasn’t uncommon then; similar tests reveal that both of
Tut’s grandparents had it, at least twice, and they both lived until their
fifties. However, Tut’s malarial infection, the scientists argued, “added one
strain too many to a body that”—because of incestuous genes—“could no
longer carry the load.” He succumbed at age nineteen. Indeed, some strange
brown splotches on the walls inside Tut’s tomb provide clues about just
how sudden his decline was. DNA and chemical analysis has revealed these
splotches as biological in origin: Tut’s death came so quickly that the
decorative paint on the tomb’s inner walls hadn’t dried, and it attracted
mold after his retinue sealed him up. Worst of all, Tut compounded his
genetic defects for the next generation by taking a half sister as his own
wife. Their only known children died at five months and seven months and
ended up as sorry swaddled mummies in Tut’s tomb, macabre additions to
his gold mask and walking sticks.
Powerful forces in Egypt never forgot the family’s sins, and when Tut
died heirless, an army general seized the throne. He in turn died childless,
but another commander, Ramses, took over. Ramses and his successors
expunged most traces of Akhenaten, Tut, and Nefertiti in the annals of the
pharaohs, erasing them with the same determination Akhenaten had shown
in erasing other gods. As a final insult, Ramses and his heirs erected
buildings over Tut’s tomb to conceal it. In fact, they concealed it so well
that even looters struggled to find it. As a result, Tut’s treasures survived
mostly intact over the centuries—treasures that, in time, would grant him
and his heretical, incestuous family something like immortality again.
While studying DNA can be helpful in answering some questions, you can’t
always tell if a famous someone suffered from a genetic disorder just by
testing his or her descendants. That’s because even if scientists find a clean
genetic signal for a syndrome, there’s no guarantee the descendants
acquired the defective DNA from their celebrated great-great-whatever.
That fact, along with the reluctance of most caretakers to disinter ancient
bones for testing, leaves many medical historians doing old-fashioned
genetic analysis—charting diseases in family trees and piecing together
diagnoses from a constellation of symptoms. Perhaps the most intriguing
and vexing patient undergoing analysis today is Charles Darwin, because of
both the elusive nature of his illness and the possibility that he passed it to
his children by marrying a close relative—a potentially heartbreaking
example of natural selection in action.
After enrolling in medical school in Edinburgh at age sixteen, Darwin
dropped out two years later, when surgery lessons began. In his
autobiography Darwin tersely recounted the scenes he endured, but he did
describe watching an operation on a sick boy, and you can just imagine the
thrashing and screaming in those days before anesthesia. The moment both
changed and presaged Darwin’s life. Changed, because it convinced him to
drop out and do something else for a living. Presaged, because the surgery
roiled Darwin’s stomach, a premonition of the ill health that dogged him
ever after.
His health started to fall apart aboard HMS Beagle. Darwin had skipped
a prevoyage physical in 1831, convinced he would fail it, and once asea he
proved an inveterate landlubber, constantly laid low by seasickness. His
stomach could handle only raisins for many meals, and he wrote woeful
letters seeking advice from his father, a physician. Darwin did prove
himself fit during the Beagle’s layovers, taking thirty-mile hikes in South
America and collecting loads of samples. But after returning to England in
1836 and marrying, he deteriorated into an honest-to-gosh invalid, a
wheezing wreck who often disgusted even himself.
It would take the genius of Akhenaten’s greatest court caricaturist to
capture how cramped and queasy and out-of-sorts Darwin usually felt. He
suffered from boils, fainting fits, heart flutters, numb fingers, insomnia,
migraines, dizziness, eczema, and “fiery spokes and dark clouds” that
hovered before his eyes. The strangest symptom was a ringing in his ears,
after which—as thunder follows lightning—he’d always pass horrendous
gas. But above all, Darwin barfed. He barfed after breakfast, after lunch,
after dinner, brunch, teatime—whenever—and kept going until he was dry
heaving. In peak form he vomited twenty times an hour, and once vomited
twenty-seven days running. Mental exertion invariably made his stomach
worse, and even Darwin, the most intellectually fecund biologist ever, could
make no sense of this. “What thought has to do with digesting roast beef,”
he once sighed, “I cannot say.”
The illness upended Darwin’s whole existence. For healthier air, he
retreated to Down House, sixteen miles from London, and his intestinal
distress kept him from visiting other people’s homes, for fear of fouling up
their privies. He then invented rambling, unconvincing excuses to forbid
friends from calling on him in turn: “I suffer from ill-health of a very
peculiar kind,” he wrote to one, “which prevents me from all mental
excitement, which is always followed by spasmodic sickness, and I do not
think I could stand conversation with you, which to me would be so full of
enjoyment.” Not that isolation cured him. Darwin never wrote for more
than twenty minutes without stabbing pain somewhere, and he cumulatively
missed years of work with various aches. He eventually had a makeshift
privy installed behind a half-wall, half-screen in his study, for privacy’s
sake—and even grew out his famous beard largely to soothe the eczema
always scratching at his face.
That said, Darwin’s sickness did have its advantages. He never had to
lecture or teach, and he could let T. H. Huxley, his bulldog, do the dirty
work of sparring with Bishop Wilberforce and other opponents while he lay
about the house and refined his work. Uninterrupted months at home also
let Darwin keep up his correspondence, through which he gathered
invaluable evidence of evolution. He dispatched many an unwary naturalist
on some ridiculous errand to, say, count pigeon tail feathers, or search for
greyhounds with tan spots near their eyes. These requests seem strangely
particular, but they revealed intermediate evolutionary forms, and in sum
they reassured Darwin that natural selection took place. In one sense, then,
being an invalid might have been as important to On the Origin of Species
as visiting the Galápagos.
Darwin understandably had a harder time seeing the benefits of
migraines and dry heaving, and he spent years searching for relief. He
swallowed much of the periodic table in various medicinal forms. He
dabbled in opium, sucked lemons, and took “prescriptions” of ale. He tried
early electroshock therapy—a battery-charged “galvanization belt” that
zapped his abdomen. The most eccentric cure was the “water cure,”
administered by a former classmate from medical school. Dr. James Manby
Gully had had no serious plans to practice medicine while in school, but his
family’s coffee plantation in Jamaica went bust after Jamaican slaves gained
their freedom in 1834, and Gully had no choice but to see patients full-time.
He opened a resort in Malvern in western England in the 1840s, and it
quickly became a trendy Victorian spa; Charles Dickens, Alfred, Lord
Tennyson, and Florence Nightingale all took cures there. Darwin decamped
to Malvern in 1849 with his family and servants.
The water cure basically consisted of keeping patients as moist as
possible at all times. After a 5 a.m. cock-a-doodle-doo, servants wrapped
Darwin in wet sheets, then doused him with buckets of cold water. This was
followed by a group hike that included plenty of hydration breaks at various
wells and mineral springs. Back at their cottages, patients ate biscuits and
drank more water, and the completion of breakfast opened up the day to
Malvern’s main activity, bathing. Bathing supposedly drew blood away
from the inflamed inner organs and toward the skin, providing relief.
Between baths, patients might have a refreshing cold water enema, or strap
themselves into a wet abdominal compress called a “Neptune Girdle.”
Baths often lasted until dinner, which invariably consisted of boiled mutton,
fish, and, obviously, some sparkling local H2O. The long day ended with
Darwin crashing asleep into a (dry) bed.
Scenes from the popular “water cure” in Victorian times, for patients with stubborn ailments.
Charles Darwin underwent a similar regime to cure his own mystery illness, which dogged him
most of his adult life. (Courtesy of the National Library of Medicine)
Considering its scale, its scope, its ambition, the Human Genome Project
—a multidecade, multibillion-dollar effort to sequence all human DNA—
was rightly called the Manhattan Project of biology. But few anticipated at
the outset that the HGP would be beset with just as many moral ambiguities
as the venture in Los Alamos. Ask your biologist friends for a précis of the
project, in fact, and you’ll get a pretty good handle on their values. Do they
admire the project’s government scientists as selfless and steadfast or
dismiss them as stumbling bureaucrats? Do they praise the private-sector
challenge to the government as heroic rebellion or condemn it as greedy
self-aggrandizement? Do they think the project succeeded or harp on its
disappointments? Like any complex epic, the sequencing of the human
genome can support virtually any reading.
The HGP traces its pedigree to the 1970s, when British biologist
Frederick Sanger, already a Nobel laureate, invented a method to sequence
DNA—to record the order of the A’s, C’s, G’s, and T’s and thereby
(hopefully) determine what the DNA does. In brief, Sanger’s method
involved three basic steps: heating the DNA in question until its two strands
separated; breaking those strands into fragments; and using individual A’s,
C’s, G’s, and T’s to build new complementary strands based on the
fragments. Cleverly, though, Sanger sprinkled in special radioactive
versions of each base, which got incorporated into the complements.
Because Sanger could distinguish whether A, C, G, or T was producing
radioactivity at any point along the complement, he could also deduce
which base resided there, and tally the sequence.*
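The chemistry was ingenious, but the underlying logic is plain base-pairing, which a few lines of code can illustrate. This is a toy model only; in the lab, each position was "read" from which of the four labels produced a signal, not from a lookup table, and the example fragment is invented.

```python
# Toy illustration (not Sanger's actual chemistry): if you can identify which
# labeled base got added at each position of the newly built complementary
# strand, base-pairing rules hand you the original template sequence for free.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def build_complement(template):
    # Each incorporated base is identified here by lookup; in the lab it was
    # identified by which of the four labels showed up at that position.
    return "".join(PAIR[base] for base in template)

def infer_template(complement):
    # Reading the complement tells you the template, one base at a time.
    return "".join(PAIR[base] for base in complement)

template = "GATTACA"                      # hypothetical fragment
complement = build_complement(template)   # 'CTAATGT'
print(complement, infer_template(complement) == template)
```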
Sanger had to read these bases one by one, an excruciatingly tedious
process. Nevertheless it allowed him to sequence the first genome, the fifty-
four hundred bases and eleven genes of the virus φX174. (This work won
Sanger a second Nobel in 1980—not bad for someone who once confessed
he could never have attended Cambridge University “if my parents had not
been fairly rich.”) In 1986 two biologists in California automated Sanger’s
method. And instead of using radioactive bases, they substituted fluorescent
versions of A, C, G, and T, each of which produced a different color when
strummed by a laser—DNA in Technicolor. This machine, run by a
computer, suddenly made large-scale sequencing projects seem feasible.
Strangely, though, the U.S. government agency that funded most biology
research, the National Institutes of Health, showed zero interest in DNA
sequencing. Who, the NIH wondered, wanted to wade through three billion
letters of formless data? Other departments weren’t so dismissive. The
Department of Energy considered sequencing a natural extension of its
work on how radioactivity damages DNA, and it appreciated the
transformative potential of the work. So in April 1987, the DoE opened the
world’s first human genome project, a seven-year, $1-billion effort centered
in Los Alamos, across town from the site of the Manhattan Project. Funnily
enough, as soon as NIH bureaucrats heard the B-word, billion, they decided
sequencing made sense after all. So in September 1988 the NIH set up a
rival sequencing institute to scoop up its share of the budgetary pie. In a
scientific coup, it secured James Watson as the institute’s chief.
By the 1980s, Watson had developed a reputation as the “Caligula of
biology,” someone who, as one science historian put it, “was given license
to say anything that came to his mind and expect to be taken seriously. And
unfortunately he did so, with a casual and brutal offhandedness.” Still,
however much he repulsed some of them personally, Watson retained the
intellectual respect of his colleagues, which proved crucial for his new job,
since few big-name biologists shared his enthusiasm for sequencing. Some
biologists disliked the reductionist approach of the HGP, which threatened
to demote human beings to dribbles of data. Others feared the project would
swallow up all available research funds but not yield usable results for
decades, a classic boondoggle. Still others simply found the work
unbearably monotonous, even with machines helping. (One scientist
cracked that only incarcerated felons should have to sequence—“twenty
megabases [each],” he suggested, “with time off for accuracy.”) Most of all,
scientists feared losing autonomy. A project so extensive would have to be
coordinated centrally, and biologists resented the idea of becoming
“indentured servants” who took orders on what research to pursue. “Many
people in the American scientific community,” one early HGP supporter
moaned, “will support small mediocrity before they can even consider the
possibility that there can be some large excellence.”
For all his crassness, Watson assuaged his colleagues’ fears and helped
the NIH wrest control of the project from the DoE. He canvassed the
country, giving a stump speech about the urgency of sequencing, and
emphasized that the HGP would sequence not only human DNA but mouse
and fruit fly DNA, so all geneticists would benefit. He also suggested
mapping human chromosomes first thing, by locating every gene on them
(similar to what Alfred Sturtevant did in 1911 with fruit flies). With the
map, Watson argued, any scientist could find her pet gene and make
progress studying it without waiting fifteen years, the NIH’s timeline for
sequencing. With this last argument, Watson also had his eye on Congress,
whose fickle, know-nothing members might yank funding if they didn’t see
results last week. To further persuade Congress, some HGP boosters all but
promised that as long as Congress ponied up, the HGP would liberate
humans from the misery of most diseases. (And not just diseases; some
hinted that hunger, poverty, and crime might cease.) Watson brought in
scientists from other nations, too, to give sequencing international prestige,
and soon the HGP had lumbered to life.
Then Watson, being Watson, stepped in it. In his third year as HGP
director, he found out that the NIH planned to patent some genes that one of
its neuroscientists had discovered. The idea of patenting genes nauseated
most scientists, who argued that patent restrictions would interfere with
basic research. To compound the problem, the NIH admitted it had only
located the genes it wanted to patent; it had no idea what the genes did.
Even scientists who supported DNA patents (like biotech executives)
blanched at this revelation. They feared that the NIH was setting a terrible
precedent, one that would promote the rapid discovery of genes above
everything else. They foresaw a “genome grab,” where businesses would
sequence and hurriedly patent any gene they found, then charge “tolls”
anytime anyone used them for any purpose.
Watson, who claimed that no one had consulted him on all this, went
apoplectic, and he had a point: patenting genes could undermine the public-
good arguments for the HGP, and it would certainly renew scientists’
suspicions. But instead of laying out his concerns calmly and
professionally, Caligula lit into his boss at the NIH, and behind her back he
told reporters the policy was moronic and destructive. A power struggle
ensued, and Watson’s supervisor proved the better bureaucratic warrior: she
raised a stink behind the scenes, Watson alleges, about conflicts of interest
in biotech stock he owned, and continued her attempts to muzzle him. “She
created conditions by which there was no way I could stay,” Watson fumed.
He soon resigned.
But not before causing more trouble. The NIH neuroscientist who’d
found the genes had discovered them with an automated process that
involved computers and robots and little human contribution. Watson didn’t
approve of the procedure because it could identify only 90 percent of
human genes, not the full set. Moreover—always a sucker for elegance—he
sneered that the process lacked style and craft. In a hearing before the U.S.
Senate about the patents, Watson dismissed the operation as something that
“could be run by monkeys.” This didn’t exactly charm the NIH “monkey”
in question, one J. Craig Venter. In fact, partly because of Watson, Venter
soon became (in)famous, an international scientific villain. Yet Venter
found himself quite suited to the role. And when Watson departed, the door
suddenly opened for Venter, perhaps the only scientist alive who was even
more polarizing, and who could dredge up even nastier feelings.
Craig Venter started raising hell in childhood, when he’d sneak his bicycle
onto airport runways to race planes (there were no fences) and then ditch
the cops that chased him. In junior high, near San Francisco, he began
boycotting spelling tests, and in high school, his girlfriend’s father once
held a gun to Venter’s head because of the lad’s overactive Y chromosome.
Later Venter shut down his high school with two days of sit-ins and
marches to protest the firing of his favorite teacher—who happened to be
giving Venter an F.*
Despite a GPA well below the Mendoza Line, Venter hypnotized himself
into believing he would achieve something magnificent in life, but he
lacked much purpose beyond that delusion. At twenty-one, in August 1967,
Venter joined a M*A*S*H-like hospital in Vietnam as a medic. Over the
next year he watched hundreds of men his own age die, sometimes with his
hands on them, trying to resuscitate them. The waste of lives disgusted him,
and with nothing specific to live for, Venter decided to commit suicide by
swimming out into the shimmering-green South China Sea until he
drowned. A mile out, sea snakes surfaced around him. A shark also began
thumping him with its skull, testing him as prey. As if suddenly waking up,
Venter remembered thinking, What the fuck am I doing? He turned and
scrambled back to shore.
Vietnam stirred in Venter an interest in medical research, and a few
years after earning a Ph.D. in physiology in 1975, he landed at the NIH.
Among other research, he wanted to identify all the genes our brain cells
use, but he despaired over the tedium of finding genes by hand. Salvation
came when he heard about a colleague’s method of quickly identifying the
messenger RNA that cells use to make proteins. Venter realized this
information could reveal the underlying gene sequences, because he could
reverse-transcribe the RNA into DNA. By automating the technique, he
soon cut down the price for detecting each gene from $50,000 to $20, and
within a few years he’d discovered a whopping 2,700 new genes.
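The trick rests on the same base-pairing arithmetic: run a messenger RNA backward through the pairing rules and you recover the DNA it was copied from. Below is a bare-bones sketch of that one step, ignoring the enzymes, splicing, and sequencing error that real cDNA work involves; the example message is invented.

```python
# Sketch of the reverse-transcription idea only. mRNA uses U instead of T,
# and each RNA base pairs with a DNA base.
RNA_TO_DNA = {"A": "T", "U": "A", "C": "G", "G": "C"}

def reverse_transcribe(mrna):
    """Return the DNA strand the mRNA would pair with (the template),
    read 5'->3', i.e. reversed and complemented."""
    return "".join(RNA_TO_DNA[base] for base in reversed(mrna))

mrna = "AUGGCCUAA"                     # hypothetical message: start..stop
print(reverse_transcribe(mrna))        # 'TTAGGCCAT'
```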
These were the genes the NIH tried to patent, and the brouhaha
established a pattern for Venter’s career. He’d get itchy to do something
grand, get irritated over slow progress, and find shortcuts. Other scientists
would then denounce the work as cheating; one person compared his
process for discovering genes to Sir Edmund Hillary taking a helicopter
partway up Mount Everest. Whereafter Venter would strongly encourage
his detractors to get bent. But his arrogance and gruffness often ended up
alienating his allies, too. For these reasons, Venter’s reputation grew
increasingly ugly in the 1990s: one Nobel laureate jokingly introduced
himself by looking Venter up and down and saying, “I thought you were
supposed to have horns.” Venter had become a sort of Paganini of genetics.
Devil or no, Venter got results. And frustrated by the bureaucracy at the
NIH, he quit in 1992 and joined an unusual hybrid organization. It had a
nonprofit arm, TIGR (the Institute for Genomic Research), dedicated to
pure science. It also had—an ominous sign to scientists—a very-much-for-
profit arm backed by a health-care corporation and dedicated to capitalizing
on that research by patenting genes. The company made Venter rich by
loading him with stock, then loaded TIGR with scientific talent by raiding
thirty staff members from the NIH. And true to its rebellious demeanor,
once the TIGR team settled in, it spent the next few years refining “whole-
genome shotgun sequencing,” a radicalized version of Sanger’s old-
fashioned sequencing methods.
The NIH consortium planned to spend its first few years and its first
billion dollars constructing meticulous maps of each chromosome. That
completed, scientists would divide each chromosome into segments and
send each segment to different labs. Each lab would make copies of the
segment and then “shotgun” them—use intense sound waves or another
method to blast them into tiny, overlapping bits roughly a thousand bases
long. Scientists would next sequence every bit, study how they overlapped,
and piece them together into a coherent overall sequence. As observers
have noted, the process was analogous to dividing a novel into chapters,
then each chapter into sentences. They’d photocopy each sentence and
shotgun all the copies into random phrases—“Happy families are all,” “are
all alike; every unhappy,” “every unhappy family is unhappy,” and
“unhappy in its own way.” They would then reconstruct each sentence
based on the overlaps. Finally, the chromosome maps, like a book’s index,
would tell them where their passage was situated overall.
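The "piece them together" step can be made concrete with the very fragments from that Tolstoy analogy. Below is a minimal greedy assembler, nothing like the industrial-strength algorithms either team actually used, that repeatedly glues together the two pieces with the largest overlap.

```python
def overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for size in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:size]):
            return size
    return 0

def assemble(fragments):
    """Greedy merge: repeatedly glue the pair with the biggest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        size, a, b = max(((overlap(a, b), a, b)
                          for a in frags for b in frags if a is not b),
                         key=lambda t: t[0])
        frags.remove(a)
        frags.remove(b)
        frags.append(a + b[size:])
    return frags[0]

pieces = ["Happy families are all",
          "are all alike; every unhappy",
          "every unhappy family is unhappy",
          "unhappy in its own way"]
print(assemble(pieces))
# -> Happy families are all alike; every unhappy family is unhappy in its own way
```

Run on those four phrases, it reconstitutes the full sentence; the genome teams did the same thing with tens of millions of thousand-base fragments, which is why the computing mattered as much as the chemistry.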
Venter’s team loved the shotgun but decided to skip the slow mapping
step. Instead of dividing the chromosome into chapters and sentences, they
wanted to blast the whole book into overlapping smithereens right away.
They’d then whirlwind everything together at once by using banks of
computers. The consortium had considered this whole-genome shotgun
approach but had dismissed it as slapdash, prone to leaving gaps and putting
segments in the wrong place. Venter, however, proclaimed that speed
should trump precision in the short term; scientists needed some, any data
now, he argued, more than they needed perfect data in fifteen years. And
Venter had the fortune to start working in the 1990s, when computer
technology exploded and made impatience almost a virtue.
Almost—other scientists weren’t so thrilled. A few patient geneticists
had been working since the 1980s to sequence the first genome of a fully
living creature, a bacterium. (Sanger sequenced only viruses, which aren’t
fully alive; bacteria have vastly bigger genomes.) These scientists were
creeping, tortoise-like, toward finishing their genome, when in 1994
Venter’s team began scorching through the two million bases of
Haemophilus influenzae, another bacterium. Partway through the process,
Venter applied for NIH funds to support the work; months later, he received
a pink rejection notice, denying him money because of the “impossible”
technique he proposed using. Venter laughed; his genome was 90 percent
done. And soon afterward the hare won the race: TIGR blew by its poky
rivals and published its genome just one year after starting. TIGR
completed another full bacterium sequence, of Mycoplasma genitalium, just
months later. Ever cocky, Venter not only gloated about finishing both first
—and without a red cent from the NIH—he also printed up T-shirts for the
second triumph that read I ♥ MY GENITALIUM.
However begrudgingly impressed, HGP scientists had doubts, sensible
doubts, that what worked for bacterial DNA would work for the far more
complicated human genome. The government consortium wanted to piece
together a “composite” genome—a mishmash of multiple men’s and
women’s DNA that would average out their differences and define a
Platonic ideal for each chromosome. The consortium felt that only a
cautious, sentence-by-sentence approach could sort through all the
distracting repeats, palindromes, and inversions in human DNA and achieve
that ideal. But microprocessors and sequencers kept getting speedier, and
Venter gambled that if his team gathered enough data and let the computers
churn, it could beat the consortium. To give due credit, Venter didn’t invent
shotgunning or write the crucial computer algorithms that pieced sequences
together. But he had the hubris (or chutzpah—pick your word) to ignore his
distinguished detractors and plunge forward.
And boy did he. In May 1998, Venter announced that he’d cofounded a
new company to more or less destroy the international consortium.
Specifically, he planned to sequence the human genome in three years—
four years before the consortium would finish—and for one-tenth of its $3
billion budget. (Venter’s team threw the plans together so quickly the new
company had no name; it became Celera.) To get going, Celera’s parent
corporation would supply it with hundreds of $300,000 state-of-the-science
sequencers, machines that (although monkeys could probably run them)
gave Venter more sequencing power than the rest of the world combined.
Celera would also build the world’s largest nonmilitary supercomputer to
process data. As a last gibe, even though his work threatened to make them
superfluous, Venter suggested to consortium leaders that they could still
find valuable work to do. Like sequencing mice.
Venter’s challenge demoralized the public consortium. Watson
compared Venter to Hitler invading Poland, and most HGP scientists feared
they’d fare about as well. Despite their head start, it didn’t seem implausible
that Venter could catch and pass them. To appease its scientists’ demands
for independence, the consortium had farmed its sequencing out to multiple
U.S. universities and had formed partnerships with labs in Germany, Japan,
and Great Britain. With the project so scattered, even some insiders
believed the HGP satellites would never finish on time: by 1998, the eighth
of the HGP’s fifteen years, the groups had collectively sequenced just 4
percent of human DNA. U.S. scientists had particular reason to tremble. Five
years earlier, Congress had eighty-sixed the Superconducting Super
Collider, a massive particle accelerator in Texas, after delays and overruns
had bloated its budget by billions of dollars. The HGP seemed similarly
vulnerable.
Key HGP scientists, however, refused to cower. Francis Collins took
over the consortium after Watson’s resignation, albeit over the objection of
some scientists. Collins had done fundamental genetics work at the
University of Michigan; he’d found the DNA responsible for cystic fibrosis
and Huntington’s disease and had consulted on the Lincoln DNA project.
He was also fervently Christian, and some regarded him as “ideologically
unsound.” (After receiving the consortium job offer, Collins spent an
afternoon praying in a chapel, seeking Jesus’s guidance. Jesus said go for
it.) It didn’t help matters that, in contrast to the flamboyant Venter, Collins
seemed dowdy, once described as having “home-cut hair [and a] Ned
Flanders mustache.” Collins nevertheless proved politically adept. Right
after Venter announced his plans, Collins found himself on a flight with one
of Venter’s bosses at Celera’s money-hungry parent corporation. Thirty
thousand feet up, Collins bent the boss’s ear, and by the time they landed,
Collins had sweet-talked him into supplying the same fancy sequencers to
government labs. This pissed Venter off no end. Then, to reassure Congress,
Collins announced that the consortium would make the changes necessary
to finish the full sequence two years early. It would also release a “rough
draft” by 2001. This all sounded grand, but in practical terms, the new
timetable forced Collins to eliminate many slower satellite programs,
cutting them out of the historic project entirely. (One axed scientist
complained of “being treated with K-Y jelly by the NIH” before being you-
know-whated guess-where.)
Collins’s burly, bearded British counterpart in the consortium was John
Sulston, a Cambridge man who’d helped sequence the first animal genome,
a worm’s. (Sulston was also the sperm donor whose DNA appeared in the
supposedly realistic portrait in London.) For most of his career, Sulston had
been a lab rat—apolitical, and happiest when holed up indoors and fussing
with equipment. But in the mid-1990s, the company that supplied his DNA
sequencers began meddling with his experiments, denying Sulston access to
raw data files unless he purchased an expensive key, and arguing that it, the
company, had the right to analyze Sulston’s data, possibly for commercial
purposes. In response Sulston hacked the sequencers’ software and rewrote
their code, cutting the company off. From that moment on, he’d grown
wary of business interests and became an absolutist on the need for
scientists to exchange DNA data freely. His views became influential when
Sulston found himself running one of the consortium’s multimillion-dollar
labs at the (Fred) Sanger Centre in England. Celera’s parent corporation
happened to be the same company he’d tangled with before about data, and
Sulston viewed Celera itself as Mammon incarnate, certain to hold DNA
data hostage and charge researchers exorbitant fees to peruse it. Upon
hearing Venter’s announcement, Sulston roused his fellow scientists with a
veritable St. Crispin’s Day speech at a conference. He climaxed by
announcing that his institute would double its funding to fight Venter. His
troops huzzahed and stomped their feet.
And so it began: Venter versus the consortium. A furious scientific
competition, but a peculiar one. Winning was less about insight, reasoning,
craft—the traditional criteria of good science—and more about who had the
brute horsepower to work faster. Mental stamina was also critical, since the
genome competition had, one scientist noted, “all the psychological
ingredients of a war.” There was an arms race. Each team spent tens of
millions to scale up its sequencing power. There was subterfuge. At one
point two consortium scientists reviewed for a magazine the fancy new
sequencers Celera was using. They gave them a decidedly mixed review—
but meanwhile their bosses were secretly negotiating to buy dozens of the
machines for themselves. There was intimidation. Some third-party
scientists received warnings about their careers being over if they
collaborated with Venter, and Venter claims the consortium tried to block
publication of his work. There was tension among purported allies. Venter
got into innumerable fights with his managers, and a German scientist at
one consortium meeting screamed hysterically at Japanese colleagues for
making mistakes. There was propaganda. Venter and Celera crowed their
every achievement, but whenever they did, Collins would dismiss their
“Mad magazine” genome, or Sulston would appear on television to argue
that Celera had pulled another “con.” There was even talk of munitions.
After employees received death threats from Luddites, Celera cut down
trees near its corporate campus to prevent snipers from nesting in them, and
the FBI warned Venter to scan his mail in case a Unabomber wannabe
targeted him.
Naturally, the nastiness of the competition titillated the public and
monopolized its attention. But all the while, work of real scientific value
was emerging. Under continued criticism, Celera felt it once again had to
prove that whole-genome shotgunning worked. So it laid aside its human
genome aspirations and in 1999 began sequencing (in collaboration with an
NIH-funded team at the University of California, Berkeley) the 120 million
bases of the fruit fly genome. To the surprise of many, they produced an
absolute beaut: at a meeting just after Celera finished, Drosophila scientists
gave Venter a standing ovation. And once both teams ramped up their
human genome work, the pace was breathtaking. There were still disputes,
naturally. When Celera claimed it had surpassed one billion bases, the
consortium rejected the claim because Celera (to protect its business
interests) didn’t release the data for scientists to check. One month later, the
consortium itself bragged that it surpassed a billion bases; four months
after, it preened over passing two billion. But the harping couldn’t diminish
the real point: that in just months, scientists had sequenced more DNA, way
more, than in the previous two decades combined. Geneticists had
excoriated Venter during his NIH days for churning out genetic information
without knowing the function. But everyone was playing Venter’s game
now: blitzkrieg sequencing.
Other valuable insights came when scientists started analyzing all that
sequence data, even preliminarily. For one, humans had an awful lot of
DNA that looked microbial, a stunning possibility. What’s more, we didn’t
seem to have enough genes. Before the HGP most scientists estimated that,
based on the complexity of humans, we had 100,000 genes. In private,
Venter remembers a few straying as high as 300,000. But as the consortium
and Celera rifled through the genome, that estimate dropped to 90,000, then
70,000, then 50,000—and kept sinking. During the early days of
sequencing, 165 scientists had set up a pool with a $1,200 pot for whoever
came closest to guessing the correct number of human genes. Usually the
entries in a bubble-gum-counting contest like this cluster in a bell curve
around the correct answer. Not so with the gene sweepstakes: with every
passing day the low guesses looked like the smartest bets.
Thankfully, though, whenever science threatened to become the real
HGP story, something juicy happened to distract everyone. For example, in
early 2000 President Clinton announced, seemingly out of the clear blue
sky, that the human genome belonged to all people worldwide, and he
called on all scientists, including ones in the private sector, to share
sequence information immediately. There were also whispers of the
government eliminating gene patents, and investors with money in
sequencing companies stampeded. Celera got trampled, losing $6 billion in
stock value—$300 million of it Venter’s—in just weeks. As a balm against
this and other setbacks, Venter tried around this time to secure a piece of
Einstein’s brain, to see if someone could sequence its DNA after all,* but
the plan came to naught.
Almost touchingly, a few people held out hope that Celera and the
consortium could still work together. Sulston had put the kibosh on a cease-
fire with Venter in 1999, but shortly thereafter other scientists approached
Venter and Collins to broker a truce. They even floated the idea of the
consortium and Celera publishing the 90-percent-complete rough draft of
the human genome as one joint paper. Negotiations proceeded apace, but
government scientists remained chary of Celera’s business interests and
bristled over its refusal to publish data immediately. Throughout the
negotiations, Venter displayed his usual charm; one consortium scientist
swore in his face, countless others behind his back. A New Yorker profile of
Venter from the time opened with a (cowardly anonymous) quote from a
senior scientist: “Craig Venter is an asshole.” Not surprisingly, plans for a
joint publication eventually disintegrated.
Appalled by the bickering, and eyeing an upcoming election, Bill
Clinton finally intervened and convinced Collins and Venter to appear at a
press conference at the White House in June 2000. There the two rivals
announced that the race to sequence the human genome had ended—in a
draw. This truce was arbitrary and, given the lingering resentments, largely
bogus. But rather than growling, both Collins and Venter wore genuine
smiles that summer day. And why not? It was less than a century after
scientists had identified the first human gene, less than fifty years after
Watson and Crick had elucidated the double helix. Now, at the millennium,
the sequencing of the human genome promised even more. It had even
changed the nature of biological science. Nearly three thousand scientists
contributed to the two papers that announced the human genome’s rough
draft. Clinton had famously declared, “The era of big government is over.”
The era of big biology was beginning.
The two papers outlining the rough draft of the human genome appeared in
early 2001, and history should be grateful that the joint publication fell
apart. A single paper would have forced the two groups into false
consensus, whereas the dueling papers highlighted each side’s unique
approach—and exposed various canards that had become accepted wisdom.
In its paper, Celera acknowledged that it had poached the free
consortium data to help build part of its sequence—which sure undermined
Venter’s rebel street cred. Furthermore, consortium scientists argued that
Celera wouldn’t even have finished without the consortium maps to guide
the assembly of the randomly shotgunned pieces. (Venter’s team published
angry rebuttals.) Sulston also challenged the Adam Smith–ish idea that the
competition increased efficiency and forced both sides to take innovative
risks. Instead, he argued, Celera diverted energy away from sequencing and
toward silly public posturing—and sped up the release only of the “fake”
rough draft anyway.
Of course, scientists loved the draft, however rough, and the consortium
would never have pushed itself to publish one so soon had Venter not
flipped his gauntlet in their faces. And whereas the consortium had always
portrayed itself as the adults here—the ones who didn’t care about speedy
genomic hot-rodding, just accuracy—most scientists who examined the two
drafts side by side proclaimed that Celera did a better job. Some said its
sequence was twice as good and less riddled with virus contamination. The
consortium also (quietly) put the lie to its criticisms of Venter by copying
the whole-genome shotgun approach for later sequencing projects, like the
mouse genome.
By then, however, Venter wasn’t around to bother the public consortium.
After various management tussles, Celera all but sacked Venter in January
2002. (For one thing, Venter had refused to patent most genes that his team
discovered; behind the scenes, he proved a rather half-hearted capitalist,
hardly the monomaniacal profiteer his critics imagined.) When Venter left,
Celera lost its sequencing momentum, and the
consortium claimed victory, loudly, when it alone produced a full human
genome sequence in early 2003.*
After years of adrenalized competition, however, Venter, like a fading
football star, couldn’t simply walk away. In mid-2002 he diverted attention
from the consortium’s ongoing sequencing efforts by revealing that Celera’s
composite genome had actually been 60 percent Venter sperm DNA; he had
been the primary “anonymous” donor. And, undisturbed by the tsk-tsking
that followed his revelation—“vainglorious,” “egocentric,” and “tacky”
were some of the nicer judgments—Venter decided he wanted to analyze
his pure DNA, unadulterated by other donors. To this end, he founded a
new institute, the Center for the Advancement of Genomics (TCAG, har,
har), that would spend $100 million over four years to sequence him and
him alone.
This was supposed to be the first complete individual genome—the first
genome that, unlike the Platonic HGP genome, included both the mother’s
and father’s genetic contributions, as well as every stray mutation that
makes a person unique. But because Venter’s group spent four whole years
polishing his genome, base by base, a group of rival scientists decided to
jump into the game and sequence another individual first—none other than
Venter’s old nemesis, James Watson. Ironically, the second team—dubbed
Project Jim—took a cue from Venter and tried to sweep away the prize with
new, cheaper, dirtier sequencing methods, ripping through Watson’s full
genome in four months and for a staggeringly modest sum, around $2
million. Venter, being Venter, refused to concede defeat, though, and this
second genome competition ended, probably inevitably, in another draw:
the two teams posted their sequences online within days of each other in
summer 2007. The speedy machines of Project Jim wowed the world, but
Venter’s sequence once again proved more accurate and useful for most
research.
(The jockeying for status hasn’t ended, either. Venter remains active in
research, as he’s currently trying to determine [by subtracting DNA from
microbes, gene by gene] the minimum genome necessary for life. And
however tacky the action might have seemed, publishing his individual
genome might have put him in the catbird seat for the Nobel Prize—an
honor that, according to the scuttlebutt that scientists indulge in over late-
night suds, he covets. A Nobel can be split among three people at most, but
Venter, Collins, Sulston, Watson, and others could all make legitimate
claims for one. The Swedish Nobel committee would have to overlook
Venter’s lack of decorum, but if it awards him a solo Nobel for his
consistently excellent work, Venter can claim he won the genome war after
all.*)
So what did all the HGP competition earn us, science-wise? Depends on
whom you ask.
Most human geneticists aim to cure diseases, and they felt certain that
the HGP would reveal which genes to target for heart disease, diabetes, and
other widespread problems. Congress in fact spent $3 billion largely on this
implicit promise. But as Venter and others have pointed out, virtually no
gene-based cures have emerged since 2000; virtually none appear
imminent, either. Even Collins has swallowed hard and acknowledged, as
diplomatically as possible, that the pace of discoveries has frustrated
everyone. It turns out that many common diseases have more than a few
mutated genes associated with them, and it’s nigh impossible to design a
drug that targets more than a few genes. Worse, scientists can’t always pick
out the significant mutations from the harmless ones. And in some cases,
scientists can’t find mutations to target at all. Based on inheritance patterns,
they know that certain common diseases must have significant genetic
components—and yet, when scientists scour the genes of victims of those
diseases, they find few if any shared genetic flaws. The “culprit DNA” has
gone missing.
There are a few possible reasons for these setbacks. Perhaps the real
disease culprits lurk in noncoding DNA outside of genes, in regions
scientists understand only vaguely. Perhaps the same mutation leads to
different diseases in different people because of interactions with their
other, different genes. Perhaps the odd fact that some people have duplicate
copies of some genes is somehow critically important. Perhaps sequencing,
which blasts chromosomes into bits, destroys crucial information about
chromosome structure and architectural variation that could tell scientists
what genes work together and how. Most scary of all—because it highlights
our fundamental ignorance—perhaps the idea of a common, singular
“disease” is illusory. When doctors see similar symptoms in different
people—fluctuating blood sugar, joint pain, high cholesterol—they
naturally assume similar causes. But regulating blood sugar or cholesterol
requires scores of genes to work together, and a mutation in any one gene in
the cascade could disrupt the whole system. In other words, even if the
large-scale symptoms are identical, the underlying genetic causes—what
doctors need to pinpoint and treat—might be different. (Some scientists
misquote Tolstoy to make this point: perhaps all healthy bodies resemble
each other, while each unhealthy body is unhealthy in its own way.) For
these reasons, some medical scientists have mumbled that the HGP has—
kinda, sorta, so far—flopped. If so, maybe the best “big science”
comparison isn’t the Manhattan Project but the Apollo space program,
which got man to the moon but fizzled afterward.
Then again, whatever the shortcomings (so far) in medicine, sequencing
the human genome has had trickle-down effects that have reinvigorated, if
not reinvented, virtually every other field of biology. Sequencing DNA led
to more precise molecular clocks, and revealed that animals harbor huge
stretches of viral DNA. Sequencing helped scientists reconstruct the origins
and evolution of hundreds of branches of life, including those of our
primate relatives. Sequencing helped trace the global migration of humans
and showed how close we came to extinction. Sequencing confirmed how
few genes humans have (the lowest guess, 25,947, won the gene
sweepstakes), and forced scientists to realize that the exceptional qualities
of human beings derive not so much from having special DNA as from
regulating and splicing DNA in special ways.
Finally, having a full human genome—and especially having the
individual genomes of Watson and Venter—emphasized a point that many
scientists had lost sight of in the rush to sequence: the difference between
reading a genome and understanding it. Both men risked a lot by publishing
their genomes. Scientists across the world pored over them letter by letter,
looking for flaws or embarrassing revelations, and the two men took different
attitudes toward this risk. The apoE gene enhances our ability to eat meat but
also (in some versions) multiplies the risk for Alzheimer’s disease.
Watson’s grandmother succumbed to Alzheimer’s years ago, and the
prospect of losing his own mind was too much to bear, so he requested that
scientists not reveal which apoE gene he had. (Unfortunately, the scientists
he trusted to conceal these results didn’t succeed.*) Venter blocked nothing
about his genome and even made private medical records available. This
way, scientists could correlate his genes with his height and weight and
various aspects of his health—information that, in combination, is much
more medically useful than genomic data alone. It turns out that Venter has
genes that incline him toward alcoholism, blindness, heart disease, and
Alzheimer’s, among other ailments. (More strangely, Venter also has long
stretches of DNA not normally found in humans but common in chimps. No
one knows why, but no doubt some of Venter’s enemies have suspicions.) In
addition, a comparison between Venter’s genome and the Platonic HGP
genome revealed far more deviations than anyone expected—four million
mutations, inversions, insertions, deletions, and other quirks, any of which
might have been fatal. Yet Venter, now approaching seventy years old, has
skirted these health problems. Similarly, scientists have noted two places in
Watson’s genome with two copies of devastating recessive mutations—for
Usher syndrome (which leaves victims deaf and blind), and for Cockayne
syndrome (which stunts growth and prematurely ages people). Yet Watson,
well over eighty, has never shown any hint of these problems.
So what gives? Did Watson’s and Venter’s genomes lie to us? What’s
wrong with our reading of them? We have no reason to think Watson and
Venter are special, either. A naive perusal of anybody’s genome would
probably sentence him to sicknesses, deformities, and a quick death. Yet
most of us escape. It seems that, however powerful, the A-C-G-T sequence
can be circumscribed by extragenetic factors—including our epigenetics.
15
Easy Come, Easy Go?
How Come Identical Twins Aren’t Identical?
Jean-Baptiste Lamarck devised perhaps the first scientific theory of evolution. Though
mistaken, his theory resembles in some ways the modern science of epigenetics. (Louis-Léopold
de Boilly)
The first time scientists caught this epigenetic smuggling in action was in
Överkalix, a farming hamlet in the armpit between Sweden and Finland. It
was a tough place to grow up during the 1800s. Seventy percent of
households there had five or more children—a quarter, ten or more—and all
those mouths generally had to be fed from two acres of poor soil, which
was all most families could scrape together. It didn’t help that the weather
above sixty-six degrees north latitude laid waste to their corn and other
crops every fifth year or so. During some stretches, like the 1830s, the crops
died almost every year. The local pastor recorded these facts in the annals
of Överkalix with almost lunatic fortitude. “Nothing exceptional to
remark,” he once observed, “but that the eighth [consecutive] year of crop
failure occurred.”
Not every year was wretched, naturally. Sporadically, the land blessed
people with an abundance of food, and even families of fifteen could gorge
themselves and forget the scarce times. But during those darkest winters,
when the corn had withered and the dense Scandinavian forests and frozen
Baltic Sea prevented emergency supplies from reaching Överkalix, people
slit the throats of hogs and cows and just held on.
This history—fairly typical on the frontier—would probably have gone
unremarked except for a few modern Swedish scientists. They got
interested in Överkalix because they wanted to sort out whether
environmental factors, like a dearth of food, can predispose a pregnant
woman’s child to long-term health problems. The scientists had reason to
think so, based on a separate study of 1,800 children born during and just
after a famine in German-occupied Holland—the Hongerwinter of 1944–
45. Harsh winter weather froze the canals for cargo ships that season, and as
the last of many favors to Holland, the Nazis destroyed bridges and roads
that could have brought relief via land. The daily ration for Dutch adults fell
to five hundred calories by early spring 1945. Some farmers and refugees
(including Audrey Hepburn and her family, trapped in Holland during the
war) took to gnawing tulip bulbs.
After liberation in May 1945, the ration jumped to two thousand
calories, and this jump set up a natural experiment: scientists could compare
fetuses who gestated during the famine to fetuses who gestated afterward,
and see who was healthier. Predictably, the starved fetuses were generally
smaller and frailer babies at birth, but in later years they also had higher
rates of schizophrenia, obesity, and diabetes. Because the babies came from
the same basic gene pool, the differences probably arose from epigenetic
programming: a lack of food altered the chemistry of the womb (the baby’s
environment) and thereby altered the expression of certain genes. Even
sixty years later, the epigenomes of those who’d starved prenatally looked
markedly different, and victims of other modern famines—the siege of
Leningrad, the Biafra crisis in Nigeria, the Great Leap Forward in Mao’s
China—showed similar long-term effects.
But because famines had happened so often in Överkalix, the Swedish
scientists realized they had an opportunity to study something even more
intriguing: whether epigenetic effects could persist through multiple
generations. Kings of Sweden had long demanded crop records from every
parish (to prevent anyone from cheating on fealties), so agricultural data
existed for Överkalix from well before 1800. Scientists could then match
the data with the meticulous birth, death, and health records the local
Lutheran church kept. As a bonus, Överkalix had very little genetic influx
or outflow. The risk of frostbite and a garish local accent kept most Swedes
and Lapps from moving there, and of the 320 people the scientists traced,
just nine abandoned Överkalix for greener pastures, so scientists could
follow families for years and years.
Some of what the Swedish team uncovered—like a link between
maternal nutrition and a child’s future health—made sense. Much of it
didn’t. Most notably, they discovered a robust link between a child’s future
health and a father’s diet. A father obviously doesn’t carry babies to term,
so any effect must have slipped in through his sperm. Even more strangely,
the child got a health boost only if the father faced starvation. If the father
gorged himself, his children lived shorter lives with more diseases.
The influence of the fathers turned out to be so strong that scientists
could trace it back to the father’s father, too—if grandpa Harald starved,
baby grandson Olaf would benefit. These weren’t subtle effects, either. If
Harald binged, Olaf’s risk of diabetes increased fourfold. If Harald
tightened his belt, Olaf lived (after adjusting for social disparities) an
average of thirty years longer. Remarkably, this was a far greater effect than
starvation or gluttony had on Grandpa himself: grandpas who starved,
grandpas who gorged, and grandpas who ate just right all lived to the same
age, seventy years.
This father/grandfather influence didn’t make any genetic sense; famine
couldn’t have changed the parent’s or child’s DNA sequence, since that was
set at birth. The environment wasn’t the culprit, either. The men who
starved ended up marrying and reproducing in all different years, so their
children and grandchildren grew up in different decades in Överkalix, some
good, some bad—yet all benefited, as long as Dad or his dad had done
without.
But the influence might make epigenetic sense. Again, food is rich in
acetyls and methyls that can flick genes on and off, so bingeing or starving
can mask or unmask DNA that regulates metabolism. As for how these
epigenetic switches got smuggled between generations, scientists found a
clue in the timing of the starvation. Starving during puberty, during infancy,
during peak fertility years—none of that mattered for the health of a man’s
child or grandchild. All that mattered was whether he binged or starved
during his “slow growth period,” a window from about nine to twelve years
old, right before puberty. During this phase, males begin setting aside a
stock of cells that will become sperm. So if the slow growth period
coincided with a feast or famine, the pre-sperm might be imprinted with
unusual methyl or acetyl patterns, patterns that would get imprinted on
actual sperm in time.
Scientists are still working out the molecular details of what must have
happened at Överkalix. But a handful of other studies about soft paternal
inheritance in humans supports the idea that sperm epigenetics has
profound and inheritable effects. Men who take up smoking before eleven
years old will have tubbier children, especially tubbier boys, than men who
start smoking later, even if the grade-school smokers snuff the habit sooner.
Similarly, the hundreds of millions of men in Asia and Africa who chew the
pulp of betel nuts—a cappuccino-strength stimulant—have children with
twice the risk of heart disease and metabolic ailments. And while
neuroscientists cannot always find anatomical differences between healthy
brains and brains addled with psychoses, they have detected different
methyl patterns in the brains of schizophrenics and manic-depressives, as
well as in their sperm. These results have forced scientists to revise their
assumption that a zygote wipes clean all the environmental tarnish of sperm
(and egg) cells. It seems that, Yahweh-like, the biological flaws of the
fathers can be visited unto their children, and their children’s children.
The primacy of sperm in determining a child’s long-term health is
probably the most curious aspect of the whole soft inheritance business.
Folk wisdom held that maternal impressions, like exposure to one-armed
men, were devastating; modern science says paternal impressions count as
much or more. Still, these parent-specific effects weren’t wholly
unexpected, since scientists already knew that maternal and paternal DNA
don’t quite contribute equally to children. If male lions mount female tigers,
they produce a liger—a twelve-foot cat twice as heavy as your average king
of the jungle. But if a male tiger knocks up a lion, the resulting tiglon isn’t
nearly as hefty. (Other mammals show similar discrepancies. Which means
that Ilya Ivanov’s attempts to impregnate female chimpanzees and female
humans weren’t as symmetrical as he’d hoped.) Sometimes maternal and
paternal DNA even engage in outright combat for control of the fetus. Take
the igf gene (please).
For once, spelling out a gene’s name helps make sense of it: igf stands
for “insulin-like growth factor,” and it makes children in the womb hit their
size milestones way earlier than normal. But while fathers want both of a
child’s igf genes blazing away, to produce a big, hale baby that will grow up
fast and pass its genes on early and often, mothers want to temper the igfs
so that baby number one doesn’t crush her insides or kill her in labor before
she has other children. So, like an elderly couple fighting over the
thermostat, sperm tend to snap their igf into the on position, while eggs snap
theirs off.
Hundreds of other “imprinted” genes turn off or on inside us, too, based
on which parent bestowed them. In Craig Venter’s genome, 40 percent of
his genes displayed maternal/paternal differences. And deleting the exact
same stretch of DNA can lead to different diseases, depending on whether
Mom’s or Dad’s chromosome is deficient. Some imprinted genes even
switch allegiance over time: in mice (and presumably in humans) maternal
genes maintain control over brains as children, while paternal genes take
over later in life. In fact, we probably can’t survive without proper
“epigender” imprinting. Scientists can easily engineer mouse embryos with
two sets of male chromosomes or two sets of female chromosomes, and
according to traditional genetics, this shouldn’t be a big deal. But these
double-gendered embryos expire in the womb. When scientists mixed in a
few cells from the opposite sex to help the embryos survive, the males²
became huge Botero babies (thanks to igf) but had puny brains. Females²
had small bodies but oversized brains. Variations, then, between the brain
sizes of Einstein and Cuvier might be nothing but a quirk of their parents’
bloodlines, like male pattern baldness.
So-called parent-of-origin effects have also revived interest in one of the
most egregious scientific frauds ever perpetrated. Given the subtlety of
epigenetics—scientists have barely gotten a handle in the past twenty years
—you can imagine that a scientist stumbling across these patterns long ago
would have struggled to interpret his results, much less convince his
colleagues of them. And Austrian biologist Paul Kammerer did struggle, in
science and love and politics and everything else. But a few epigeneticists
today see his story as maybe, just maybe, a poignant reminder about the
peril of making a discovery ahead of its time.
Paul Kammerer, a tormented Austrian biologist who perpetrated one of the great frauds in
science history, may have been an unwitting pioneer in epigenetics. (Courtesy of the Library of
Congress)
Epigenetics has expanded so rapidly in the past decade that trying to catalog
every advance can get pretty overwhelming. Epigenetic mechanisms do
things as frivolous as give mice polka-dot tails—or as serious as push
people toward suicide (perhaps a final irony in the Kammerer case). Drugs
like cocaine and heroin seem to spool and unspool the DNA that regulates
neurotransmitters and neurostimulants (which explains why drugs feel
good), but if you keep on chasing the dragon, that DNA can become
permanently misspooled, leading to addiction. Restoring acetyl groups in
brain cells has actually resurrected forgotten memories in mice, and more
work emerges every day showing that tumor cells can manipulate methyl
groups to shut off the genetic governors that would normally arrest their
growth. Some scientists think they can even tease out information about
Neanderthal epigenetics someday.
All that said, if you want to make a biologist cranky, start expounding
about how epigenetics will rewrite evolution or help us escape our genes, as
if they were fetters. Epigenetics does alter how genes function, but doesn’t
vitiate them. And while epigenetic effects certainly exist in humans, many
biologists suspect they’re easy come, easy go: methyls and acetyls and
other mechanisms might well evaporate within a few generations as
environmental triggers change. We simply don’t know yet whether
epigenetics can permanently alter our species. Perhaps the underlying A-C-
G-T sequence always reasserts itself, a granite wall that emerges as the
methyl-acetyl graffiti wears away.
But really, such pessimism misses the point, and promise, of epigenetics.
The low genetic diversity and low gene count of human beings seem unable
to explain our complexity and variety. The millions upon millions of
different combinations of epigenes just might. And even if soft inheritance
evaporates after, say, a half-dozen generations, each one of us lives for two
or three generations only—and on those timescales, epigenetics makes a
huge difference. It’s much easier to rewrite epigenetic software than to
rewire genes themselves, and if soft inheritance doesn’t lead to true genetic
evolution, it does allow us to adapt to a rapidly shifting world. As a matter
of fact, thanks to the new knowledge that epigenetics lends us—about
cancer, about cloning, about genetic engineering—our world will likely
shift even more rapidly in the future.
16
Life as We Do (and Don’t) Know It
What the Heck Will Happen Now?
Around the end of the 1950s, a DNA biochemist (and RNA Tie Club
member) named Paul Doty was strolling through New York, minding his
own, when a street vendor’s wares caught his eye, and he halted,
bewildered. The vendor sold lapel buttons, and among the usual crude
assortment, Doty noticed one that read “DNA.” Few people worldwide
knew more about DNA than Doty, but he assumed the public knew little
about his work and cared less. Convinced the initialism stood for something
else, Doty asked the vendor what D-N-A might be. The vendor looked the
great scientist up and down. “Get with it, bud,” he barked in New Yawk
brogue. “Dat’s da gene!”
Jump forward four decades to the summer of 1999. Knowledge of DNA
had mushroomed, and Pennsylvania legislators, stewing over the impending
DNA revolution, asked a bioethics expert (and Celera board member)
named Arthur Caplan to advise them on how lawmakers might regulate
genetics. Caplan obliged, but things got off to a rocky start. To gauge his
audience, Caplan opened with a question: “Where are your genes?” Where
are they located in the body? Pennsylvania’s best and brightest didn’t know.
With no shame or irony, one quarter equated their genes with their gonads.
Another overconfident quarter decided their genes resided in their brains.
Others had seen pictures of helixes or something but weren’t sure what that
meant. By the late 1950s, the term DNA was enough of a part of the zeitgeist
to grace a street vendor’s button. Dat’s da gene. Since then public
understanding had plateaued. Caplan later decided, given their ignorance,
“Asking politicians to make regulations and rules about genetics is
dangerous.” Of course, befuddlement or bewilderment about gene and DNA
technology doesn’t prevent anyone from having strong opinions.
That shouldn’t surprise us. Genetics has fascinated people practically
since Mendel tilled his first pea plant. But a parasite of revulsion and
confusion feeds on that fascination, and the future of genetics will turn on
whether we can resolve that push-pull, gotta-have-it-won’t-stand-for-it
ambivalence. We seem especially mesmerized/horrified by genetic
engineering (including cloning) and by attempts to explain rich,
complicated human behavior in terms of “mere” genes—two often
misunderstood ideas.
Although humans have been genetically engineering animals and plants
since the advent of agriculture ten thousand years ago, the first explicit
genetic engineering began in the 1960s. Scientists basically started dunking
fruit fly eggs in DNA goo, hoping that the porous eggs would absorb
something. Amazingly these crude experiments worked; the flies’ wings
and eyes changed shape and color, and the changes proved heritable. A
decade later, by 1974, a molecular biologist had developed tools to splice
DNA from different species together, to form hybrids. Although this
Pandora restricted himself to microbes, some biologists saw these chimeras
and shivered—who knew what was next? They decided that scientists had
gotten ahead of themselves, and called for a moratorium on this
recombinant DNA research. Remarkably, the biology community (including
the Pandora) agreed, and voluntarily stopped experimenting to debate safety
and rules of conduct, almost a unique event in science history. By 1975
biologists decided they did understand enough to proceed after all, but their
prudence reassured the public.
That glow didn’t last. Also in 1975, a slightly dyslexic myrmecologist
born in evangelical Alabama and working at Harvard published a six-
pound, 697-page book called Sociobiology. Edward O. Wilson had labored
for decades in the dirt over his beloved ants, figuring out how to reduce the
byzantine social interactions of serfs, soldiers, and queens into simple
behavioral laws, even precise equations. In Sociobiology the ambitious
Wilson extended his theories to other classes, families, and phyla, ascending
the evolutionary ladder rung by rung to fish, birds, small mammals,
mammalian carnivores, and primates. Wilson then plowed straight through
chimps and gorillas to his notorious twenty-seventh chapter, “Man.” In it,
he suggested that scientists could ground most if not all human behavior—
art, ethics, religion, our ugliest aggressions—in DNA. This implied that
human beings were not infinitely malleable but had a fixed nature. Wilson’s
work also implied that some temperamental and social differences
(between, say, men and women) might have genetic roots.
Wilson later admitted he’d been politically idiotic not to anticipate the
firestorm, maelstrom, hurricane, and plague of locusts that such suggestions
would cause among academics. Sure enough, some Harvard colleagues,
including the publicly cuddly Stephen Jay Gould, lambasted Sociobiology
as an attempt to rationalize racism, sexism, poverty, war, a lack of apple pie,
and everything else decent people abhor. They also explicitly linked Wilson
with vile eugenics campaigns and Nazi pogroms—then acted surprised
when other folks lashed out. In 1978, Wilson was defending his work at a
scientific conference when a few half-wit activists stormed onstage. Wilson,
in a wheelchair with a broken ankle, couldn’t dodge or fight back, and they
wrested away his microphone. After charging him with “genocide,” they
poured ice water over his head, and howled, “You’re all wet.”
By the 1990s, thanks to its dissemination by other scientists (often in
softer forms), the idea that human behavior has firm genetic roots hardly
seemed shocking. Similarly, we take for granted today another
sociobiological tenet, that our hunter-scavenger-gatherer legacy left us with
DNA that still biases our thinking. But just as the sociobiology ember was
flickering, scientists in Scotland spurted kerosene on the public’s fear of
genetics by announcing, in February 1997, the birth of probably the most
famous nonhuman animal ever. After transferring adult sheep DNA into
four hundred sheep eggs, then zapping them Frankenstein-style with
electricity, the scientists managed to produce twenty viable embryos—
clones of the adult donor. These clones spent six days in test tubes, then 145
in utero, during which time nineteen spontaneously aborted. Dolly lived.
In truth, most of the humans gawking at this little lamb cared nothing
about Dolly qua Dolly. The Human Genome Project was rumbling along in
the background, promising scientists a blueprint of humanity, and Dolly
stoked fears that scientists were ramping up to clone one of our own—and
with no moratorium in sight. This frankly scared the bejeezus out of most
people, although Arthur Caplan did field one excited phone call about the
possibility of cloning Jesus himself. (The callers planned to lift DNA from
the Shroud of Turin, natch. Caplan remembered thinking, “You are trying to
bring back one of the few people that are supposed to come back anyway.”)
Dolly, the first cloned mammal, undergoes a checkup. (Photo courtesy of the Roslin Institute,
University of Edinburgh)
Dolly’s pen mates accepted her, and didn’t seem to care about her
ontological status as a clone. Nor did her lovers—she eventually gave birth
to six (naturally begotten) lambs, all strapping. But for whatever reason,
human beings fear clones almost instinctively. Post-Dolly, some people
hatched sensational scenarios about clone armies goose-stepping through
foreign capitals, or ranches where people would raise clones to harvest
organs. Less outlandishly, some feared that clones would be burdened by
disease or deep molecular flaws. Cloning adult DNA requires turning on
dormant genes and pushing cells to divide, divide, divide. That sounds a lot
like cancer, and clones do seem prone to tumors. Many scientists also
concluded (although Dolly’s midwives dispute this) that Dolly was born a
genetic geriatric, with unnaturally old and decrepit cells. Arthritis did in fact
stiffen Dolly’s legs at a precocious age, and she died at age six (half her
breed’s life span) after contracting a virus that, à la Peyton Rous, gave her
lung cancer. The adult DNA used to clone Dolly had been—like all adult
DNA—pockmarked with epigenetic changes and warped by mutations and
poorly patched breaks. Such flaws might have corrupted her genome before
she was ever born.*
But if we’re toying with playing god here, we might as well play devil’s
advocate, too. Suppose that scientists overcome all the medical limitations
and produce perfectly healthy clones. Many people would still oppose
human cloning on principle. Part of their reasoning, however, relies on
understandable but thankfully faulty assumptions about genetic
determinism, the idea that DNA rigidly dictates our biology and personality.
With every new genome that scientists sequence, it becomes clearer that
genes deal in probabilities, not certainties. A genetic influence is just that,
only that. Just as important, epigenetic research shows that the environment
changes how genes work and interact, so cloning someone faithfully might
require preserving every epigenetic tag from every missed meal and every
cigarette. (Good luck.) Most people forget too that it’s already too late to
avoid exposure to human clones; they live among us even now,
monstrosities called identical twins. A clone and its parent would be no
more alike than twins are with all their epigenetic differences, and there’s
reason to believe they’d actually be less alike.
Consider: Greek philosophers debated the idea of a ship whose hull and
decks were gradually rotting, plank by plank; eventually, over the decades,
every original scrap of wood got replaced. Was it still the same ship at the
end? Why or why not? Human beings present a similar stumper. Atoms in
our body get recycled many, many times before death, so we don’t have the
same bodies our whole lives. Nevertheless we feel like the same person.
Why? Because unlike a ship, each human has an uninterrupted store of
thoughts and remembrances. If the human soul exists, that mental memory
cache is it. But a clone would have different memories than his parent—
would grow up with different music and heroes, be exposed to different
foods and chemicals, have a brain wired differently by new technologies.
The sum of these differences would be dissimilar tastes and inclinations—
leading to a dissimilar temperament and a distinct soul. Cloning would
therefore not produce a doppelgänger in anything but literal superficialities.
Our DNA does circumscribe us; but where we fall within our range of
possibilities—our statures, what diseases we’ll catch, how our brains handle
stress or temptation or setbacks—depends on more than DNA.
Make no mistake, I’m not arguing in favor of cloning here. If anything,
this argues against—since what would be the point? Bereaved parents might
yearn to clone Junior and ease that ache every time they walked by his
empty room, or psychologists might want to clone Ted Kaczynski or Jim
Jones and learn how to defuse sociopaths. But if cloning won’t fulfill those
demands—and it almost certainly cannot—why bother?
Cloning not only riles people up over unlikely horrors, it distracts from
other controversies about human nature that genetic research can, and has,
dredged up. As much as we’d like to close our eyes to these quarrels, they
don’t seem likely to vanish.
Sexual orientation has some genetic basis. Bees, birds, beetles, crabs,
fish, skinks, snakes, toads, and mammals of all stripes (bison, lions,
raccoons, dolphins, bears, monkeys) happily get frisky with their own sex,
and their coupling often seems hardwired. Scientists have discovered that
disabling even a single gene in mice—the suggestively named fucM gene—
can turn female mice into lesbians. Human sexuality is more nuanced, but
gay men (who have been studied more extensively than gay women) have
substantially more gay relatives than heterosexual men raised in similar
circumstances, and genes seem like one strong differentiator.
This presents a Darwinian conundrum. Being gay decreases the
likelihood of having children and passing on any “gay genes,” yet
homosexuality has persisted in every last corner of the globe throughout all
of history, despite often-violent persecution. One theory argues that perhaps
gay genes are really “man-loving” genes—androphilic DNA that makes
men love men but also makes women who have it lust after men, too,
increasing their odds of having children. (Vice versa for gynophilic DNA.)
Or perhaps homosexuality arises as a side effect of other genetic
interactions. Multiple studies have found higher rates of left-handedness
and ambidextrousness among gay men, and gay men frequently have longer
ring fingers, too. No one really believes that holding a salad fork in one
hand or the other causes homosexuality, but some far-reaching gene might
influence both traits, perhaps by fiddling with the brain.
These discoveries are double-edged. Finding genetic links would
validate being gay as innate and intrinsic, not a deviant “choice.” That said,
people already tremble about the possibility of screening for and singling
out homosexuals, even potential homosexuals, from a young age. What’s
more, these results can be misrepresented. One strong predictor of
homosexuality is the number of older biological brothers someone has; each
one increases the odds by 20 to 30 percent. The leading explanation is that a
mother’s immune system mounts a progressively stronger response to each
“foreign” Y chromosome in her uterus, and this immune response somehow
induces homosexuality in the fetal brain. Again, this would ground
homosexuality in biology—but you can see how a naive, or malicious,
observer could twist this immunity link rhetorically and equate
homosexuality with a disease to eradicate. It’s a fraught picture.
Race also causes a lot of discomfort among geneticists. For one thing,
the existence of races makes little sense. Humans have lower genetic
diversity than almost any animal, but our colors and proportions and facial
features vary as wildly as the finalists each year at Westminster. One theory
of race argues that near extinctions isolated pockets of early humans with
slight variations, and as these groups migrated beyond Africa and bred with
Neanderthals and Denisovans and who knows what else, those variations
became exaggerated. Regardless, some DNA must differ between ethnic
groups: an aboriginal Australian husband and wife will never themselves
produce a freckled, red-haired Seamus, even if they move to the Emerald
Isle and breed till doomsday. Color is encoded in DNA.
The sticking point, obviously, isn’t Maybelline-like variations in skin
tone but other potential differences. Bruce Lahn, a geneticist at the
University of Chicago, started his career cataloging palindromes and
inversions on Y chromosomes, but around 2005 he began studying the brain
genes microcephalin and aspm, which influence the growth of neurons.
Although multiple versions exist in humans, one version of each gene had
numerous hitchhikers and seemed to have swept through our ancestors at
about Mach 10. This implied a strong survival advantage, and based on
their ability to grow neurons, Lahn took a small leap and argued that these
genes gave humans a cognitive boost. Intriguingly, he noted that the brain-
boosting versions of microcephalin and aspm started to spread, respectively,
around 35,000 BC and 4,000 BC, when, respectively, the first symbolic art
and the first cities appeared in history. Hot on the trail, Lahn screened
different populations alive today and determined that the brain-boosting
versions appeared several times more often among Asians and Caucasians
than among native Africans. Gulp.
Other scientists denounced the findings as speculative, irresponsible,
racist, and wrong. These two genes express themselves in many places
beyond the brain, so they may have aided ancient Europeans and Asians in
other ways. The genes seem to help sperm whip their tails faster, for one
thing, and might have outfitted the immune system with new weapons.
(They’ve also been linked to perfect pitch, as well as tonal languages.) Even
more damning, follow-up studies determined that people with these genes
scored no better on IQ tests than those without them. This pretty much
killed the brain-boosting hypothesis, and Lahn—who, for what it’s worth, is
a Chinese immigrant—soon admitted, “On the scientific level, I am a little
bit disappointed. But in the context of the social and political controversy, I
am a little bit relieved.”
He wasn’t the only one: race really bifurcates geneticists. Some swear
up and down that race doesn’t exist. It’s “biologically meaningless,” they
maintain, a social construct. Race is indeed a loaded term, and most
geneticists prefer to speak somewhat euphemistically of “ethnic groups” or
“populations,” which they confess do exist. But even then some geneticists
want to censor investigations into ethnic groups and mental aptitude as
inherently wounding—they want a moratorium. Others remain confident
that any good study will just prove racial equality, so what the hey, let them
continue. (Of course the act of lecturing us about race, even to point out its
nonexistence, probably just reinforces the idea. Quick—don’t think of green
giraffes.)
Meanwhile some otherwise very pious scientists think the “biologically
meaningless” bit is baloney. For one thing, some ethnic groups respond
poorly—for purely biochemical reasons—to certain medications for
hepatitis C and heart disease, among other ailments. Other groups, because
of meager conditions in their ancient homelands, have become vulnerable to
metabolic disorders in modern times of plenty. One controversial theory
argues that descendants of people captured in slave raids in Africa have
elevated rates of hypertension today in part because ancestors of theirs
whose bodies hoarded nutrients, especially salt, more easily survived the
awful oceanic voyages to their new homes. A few ethnic groups even have
higher immunity to HIV, but each group, again, for different biochemical
reasons. In these and other cases—Crohn’s disease, diabetes, breast cancer
—doctors and epidemiologists who deny race completely could harm
people.
On a broader level, some scientists argue that races exist because each
geographic population has, indisputably, distinct versions of some genes. If
you examine even a few hundred snippets of someone’s DNA, you can
segregate him into one of a few broad ancestral groups nearly 100 percent
of the time. Like it or not, those groups do generally correspond to people’s
traditional notion of races—African, Asian, Caucasian (or “swine-pink,” as
one anthropologist put it), and so on. True, there’s always genetic bleed-
over between ethnic groups, especially at geographic crossroads like India,
a fact that renders the concept of race useless—too imprecise—for many
scientific studies. But people’s self-identified social race does predict their
biological population group pretty well. And because we don’t know what
every distinct version of every stretch of DNA does, a few polemical and
very stubborn scientists who study races/populations/whatever-you-want-
to-call-thems argue that exploring potential differences in intellect is fair
game—they resent being censored. Predictably, both those who affirm and
those who deny race accuse the other side of letting politics color their
science.*
Beyond race and sexuality, genetics has popped up recently in
discussions of crime, gender relations, addiction, obesity, and many other
things. Over the next few decades, in fact, genetic factors and
susceptibilities will probably emerge for almost every human trait or
behavior—take the over on that one. But regardless of what geneticists
discover about these traits or behaviors, we should keep a few guidelines in
mind when applying genetics to social issues. Most important, no matter the
biological underpinnings of a trait, ask yourself if it really makes sense to
condemn or dismiss someone based on how a few microscopic genes
behave. Also, remember that most of our genetic predilections for behavior
were shaped by the African savanna many thousands if not millions of
years ago. So while “natural” in some sense, these predilections don’t
necessarily serve us well today, since we live in a radically different
environment. What happens in nature is a poor guide for making decisions
anyway. One of the biggest boners in ethical philosophy is the naturalistic
fallacy, which equates nature with “what’s right” and uses “what’s natural”
to justify or excuse prejudice. We human beings are humane in part because
we can look beyond our biology.
In any study that touches on social issues, we can at least pause and not
draw sensational conclusions without reasonably complete evidence. In the
past five years, scientists have conscientiously sought out and sequenced
DNA from more and more ethnic groups worldwide, to expand what
remains, even today, an overwhelmingly European pool of genomes
available to study. And some early results, especially from the self-
explanatory 1,000 Genomes Project, indicate that scientists might have
overestimated the importance of genetic sweeps—the same sweeps that
ignited Lahn’s race-intelligence firecracker.
By 2010 geneticists had identified two thousand versions of human
genes that showed signs of being swept along; specifically, because of low
diversity around these genes, it looked as if hitchhiking had taken place.
And when scientists looked for what differentiated these swept-along
versions from versions not swept along, they found cases where a DNA
triplet had mutated and now called for a new amino acid. This made sense:
a new amino acid could change the protein, and if that change made
someone fitter, natural selection might indeed sweep it through a
population. However, when scientists examined other regions, they found
the same signs of sweeps in genes with silent mutations—mutations that,
because of redundancy in the genetic code, didn’t change the amino acid.
Natural selection cannot have swept these changes along, because the
mutation would be invisible and offer no benefits. In other words, many
apparent DNA sweeps could be spurious, artifacts of other evolutionary
processes.
That doesn’t mean that sweeps never happen; scientists still believe that
genes for lactose tolerance, hair structure, and a few other traits (including,
ironically, skin color) did sweep through various ethnic groups at various
points as migrants encountered new environments beyond Africa. But those
might represent rare cases. Most human changes spread slowly, and
probably no one ethnic group ever “leaped ahead” in a genetic sweepstakes
by acquiring blockbuster genes. Any claims to the contrary—especially
considering how often supposedly scientific claims about ethnic groups
have fallen apart before—should be handled with caution. Because as the
old saw says, it’s not what we don’t know that stirs up trouble, it’s what we
do know that just ain’t so.
Becoming wiser in the ways of genetics will require not only advances in
understanding how genes work, but advances in computing power. Moore’s
Law for computers—which says that microchips get roughly twice as
powerful every two years—has held for decades, which explains why some
pet collars today could outperform the Apollo mission mainframes. But
since 1990 genetic technology has outstripped even Moore’s projections. A
modern DNA sequencer can generate more data in twenty-four hours than
the Human Genome Project did in ten long years, and the technology has
become increasingly convenient, spreading to labs and field stations
worldwide. (After killing Osama bin Laden in 2011, U.S. military personnel
identified him—by matching his DNA to samples collected from relatives
—within hours, in the middle of the ocean, in the dead of the a.m.)
Simultaneously, the cost of sequencing an entire genome has gone into
vacuum free-fall—from $3,000,000,000 to $10,000, from $1 per base pair
to around 0.0003¢. If scientists want to study a single gene nowadays, it’s
often cheaper to sequence the entire genome instead of bothering to isolate
the gene first and sequence just that part.
Of course, scientists still need to analyze the bajillions of A’s, C’s, G’s,
and T’s they’re gathering. Having been humbled by the HGP, they know
they can’t just stare at the stream of raw data and expect insights to pop out,
Matrix style. They need to consider how cells splice DNA and add
epigenetic marginalia, much more complicated processes. They need to
study how genes work in groups and how DNA packages itself in three
dimensions inside the nucleus. Equally important, they need to determine
how culture—itself a partial product of DNA—bends back and influences
genetic evolution. Indeed, some scientists argue that the feedback loop
between DNA and culture has not only influenced but outright dominated
human evolution over the past sixty thousand years or so. Getting a handle
on all of this will require serious computing horsepower. Craig Venter
demanded a supercomputer, but geneticists in the future might need to turn
to DNA itself, and develop tools based on its amazing computational
powers.
On the software side of things, so-called genetic algorithms can help
solve complicated problems by harnessing the power of evolution. In short,
genetic algorithms treat the computer commands that programmers string
together as individual “genes” strung together to make digital
“chromosomes.” The programmer might start with a dozen different
programs to test. He encodes the gene-commands in each one as binary 0s
and 1s and strings them together into one long, chromosome-like sequence
(0001010111011101010…). Then comes the fun part. The programmer runs
each program, evaluates it, and orders the best programs to “cross over”—
to exchange strings of 0s and 1s, just like chromosomes exchange DNA.
Next the programmer runs these hybrid programs and evaluates them. At
this point the best cross over and exchange more 0s and 1s. The process
then repeats, and continues again, and again, allowing the programs to
evolve. Occasional “mutations”—flipping 0s to 1s, or vice versa—add more
variety. Overall, genetic algorithms combine the best “genes” of many
different programs into one near-optimal one. Even if you start with
moronic programs, genetic evolution improves them automatically and
zooms in on better ones.
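To make that loop concrete, here is a minimal Python sketch of the scheme described above (my illustration, not anything from the book or from a particular library); the bit-string length, the population size, and the toy fitness function that merely counts 1s are placeholders standing in for a real program and its evaluation.

```python
import random

CHROMOSOME_LENGTH = 32   # placeholder length of each bit-string "chromosome"
POPULATION_SIZE = 12     # "a dozen different programs"
GENERATIONS = 200
MUTATION_RATE = 0.01     # chance of flipping any given 0 to a 1, or vice versa

def fitness(chromosome):
    # Toy stand-in for running and scoring a program: just count the 1s.
    return sum(chromosome)

def crossover(parent_a, parent_b):
    # Exchange strings of 0s and 1s at a random cut point,
    # the way chromosomes exchange DNA.
    cut = random.randint(1, CHROMOSOME_LENGTH - 1)
    return parent_a[:cut] + parent_b[cut:]

def mutate(chromosome):
    # Occasional "mutations": flip a 0 to a 1, or vice versa.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in chromosome]

def evolve():
    # Start with a dozen random "programs."
    population = [[random.randint(0, 1) for _ in range(CHROMOSOME_LENGTH)]
                  for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        # Keep the better half, then let its members cross over to refill the rest.
        population.sort(key=fitness, reverse=True)
        parents = population[:POPULATION_SIZE // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POPULATION_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("".join(map(str, best)), "fitness:", fitness(best))
```

Swapping in a fitness function that actually runs and scores each candidate program is what turns this toy into the genetic algorithms described above.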
On the hardware (or “wetware”) side of things, DNA could someday
replace or augment silicon transistors and physically perform calculations.
In one famous demonstration, a scientist used DNA to solve the classic
traveling salesman problem. (In this brainteaser, a salesman has to travel to,
say, eight cities scattered all over a map. He must visit each city once, but
once he leaves a city he cannot visit it again, even just to pass through on
his way somewhere else. Unfortunately, the cities have convoluted roads
between them, so it’s not obvious in what order to visit.)
To see how DNA could possibly solve this problem, consider a
hypothetical example. First thing, you’d make two sets of DNA snippets.
All are single-stranded. The first set consists of the eight cities to visit, and
these snippets can be random A-C-G-T strings: Sioux Falls might be
AGCTACAT, Kalamazoo TCGACAAT. For the second set, use the map.
Every road between two cities gets a DNA snippet. However—here’s the
key—instead of making these snippets random, you do something clever.
Say Highway 1 starts in Sioux Falls and ends in Kalamazoo. If you make
the first half of the highway’s snippet the A/T and C/G complement of half
of Sioux Falls’s letters, and make the second half of the highway’s snippet
the A/T and C/G complement of half of Kalamazoo’s letters, then Highway
1’s snippet can physically link the two cities.
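As a rough illustration (again mine, with invented helper names), here is a small Python sketch of that encoding using the hypothetical eight-letter city sequences above; it ignores strand orientation and all of the wet-lab chemistry.

```python
# A toy sketch of the encoding described above. City sequences are the
# arbitrary eight-letter strings from the text; a "road" snippet is the
# A/T and C/G complement of the back half of its start city joined to the
# complement of the front half of its destination city, so in a test tube
# it could base-pair with the tail of one city and the head of the next.

PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(seq):
    return "".join(PAIR[base] for base in seq)

cities = {
    "Sioux Falls": "AGCTACAT",
    "Kalamazoo": "TCGACAAT",
}

def road_snippet(start, end):
    half = len(cities[start]) // 2
    return complement(cities[start][half:]) + complement(cities[end][:half])

highway_1 = road_snippet("Sioux Falls", "Kalamazoo")
print(highway_1)  # TGTAAGCT: pairs with ...ACAT of Sioux Falls and TCGA... of Kalamazoo
```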
After encoding every other road and city in a similar way, the
calculation begins. You mix a pinch of all these DNA snippets in a test tube,
and presto change-o, one good shake computes the answer: somewhere in
the vial will be a longer string of (now) double-stranded DNA, with the
eight cities along one strand, in the order the salesman should visit, and all
the connecting roads on the complementary strand.
Of course, that answer will be written in the biological equivalent of
machine code (GCGAGACGTACGAATCC…) and will need deciphering.
And while the test tube contains many copies of the correct answer, free-
floating DNA is unruly, and the tube also contains trillions of wrong
solutions—solutions that skipped cities or looped back and forth endlessly
between two cities. Moreover, isolating the answer requires a tedious week
of purifying the right DNA string in the lab. So, yeah, DNA computing isn’t
ready for Jeopardy. Still, you can understand the buzz. One gram of DNA
can store the equivalent of a trillion CDs, which makes our laptops look like
the gymnasium-sized behemoths of yesteryear. Plus, these “DNA
transistors” can work on multiple calculations simultaneously much more
easily than silicon circuits. Perhaps best of all, DNA transistors can
assemble and copy themselves at little cost.
If deoxyribonucleic acid can indeed replace silicon in computers,
geneticists would effectively be using DNA to analyze its own habits and
history. DNA can already recognize itself; that’s how its strands bond
together. So DNA computers would give the molecule another modest level
of reflexivity and self-awareness. DNA computers could even help DNA
refine itself and improve its own function. (Makes you wonder who’s in
charge…)
And what kinds of DNA improvements might DNA computing bring
about? Most obviously, we could eradicate the subtle malfunctions and
stutters that lead to many genetic diseases. This controlled evolution would
finally allow us to circumvent the grim waste of natural selection, which
requires that the many be born with genetic flaws simply so that the few can
advance incrementally. We might also improve our daily health, cinching
our stomachs in by engineering a gene to burn high-fructose corn syrup (a
modern answer to the ancient apoE meat-eating gene). More wildly, we
could possibly reprogram our fingerprints or hairstyles. If global
temperatures climb and climb, we might want to increase our surface area
somehow to radiate heat, since squatter bodies retain more heat. (There’s a
reason Neanderthals in Ice Age Europe had beer keg chests.) Furthermore,
some thinkers suggest making DNA adjustments not by tweaking existing
genes but by putting updates on an extra pair of chromosomes and inserting
them into embryos*—a software patch. This might prevent
intergenerational breeding but would bring us back in line with the primate
norm of forty-eight.
These changes could make human DNA worldwide even more alike
than it is now. If we tinker with our hair and eye color and figures, we
might end up looking alike, too. But based on the historical pattern with
other technologies, things might well go the other way instead: our DNA
could become as diverse as our taste in clothing, music, and food. In that
case, DNA could go all postmodern on us, and the very notion of a standard
human genome could disappear. The genomic text would become a
palimpsest, endlessly overwriteable, and the metaphor of DNA as “the”
blueprint or “the” book of life would no longer hold.
Not that it ever really did hold, outside our imaginations. Unlike books
and blueprints, both human creations, DNA has no fixed or deliberate
meaning. Or rather, it has only the meaning we infuse it with. For this
reason we should interpret DNA cautiously, less like prose, more like the
complicated and solemn utterances of an oracle.
As with scientists studying DNA, pilgrims to the Delphic oracle in
ancient Greece always learned something profound about themselves when
they inquired of it—but rarely what they assumed they’d learned at first.
The general-king Croesus once asked Delphi if he should engage another
emperor in battle. The oracle answered, “You will destroy a great empire.”
Croesus did—his own. The oracle informed Socrates that “no one is wiser.”
Socrates doubted this, until he’d canvassed and interrogated all the
reputedly wise men around. He then realized that, unlike them, he at least
admitted his ignorance and didn’t fool himself into “knowing” things he
didn’t. In both cases, the truth emerged only with time, with reflection,
when people had gathered all the facts and could parse the ambiguities. The
same with DNA: it all too often tells us what we want to hear, and any
dramatist could learn a thing about irony from it.
Unlike Delphi, our oracle still speaks. From so humble a beginning,
despite swerves and near extinctions, our DNA (and RNA and other ’NAs)
did manage to create us—creatures bright enough to discover and decipher
the DNA inside them. But bright enough as well to realize how much that
DNA limits them. DNA has revealed a trove of stories about our past that
we thought we’d lost forever, and it has endowed us with sufficient brains
and curiosity to keep mining that trove for centuries more. And despite that
push-pull, gotta-have-it-won’t-stand-for-it ambivalence, the more we learn,
the more tempting, even desirable, it seems to change that DNA. DNA
endowed us with imagination, and we can now imagine freeing ourselves
from the hard and heartbreaking shackles it puts on life. We can imagine
remaking our very chemical essences; we can imagine remaking life as we
know it. This oracular molecule seems to promise that if we just keep
pushing, keep exploring and sounding out and tinkering with our genetic
material, then life as we know it will cease. And beyond all the intrinsic
beauty of genetics and all the sobering insights and all the unexpected
laughs that it provides, it’s that promise that keeps drawing us back to it, to
learn more and more and yet still more about our DNA and our genes, our
genes and our DNA.
Epilogue
Genomics Gets Personal
Although they know better, many people versed in science, even many
scientists, still fear their genes on some subliminal level. Because no matter
how well you understand things intellectually, and no matter how many
counterexamples turn up, it’s still hard to accept that having a DNA
signature for a disease doesn’t condemn you to develop the disease itself.
Even when this registers in the brain, the gut resists. This discord explains
why memories of his Alzheimer’s-ridden grandmother convinced James
Watson to suppress his apoE status. It also explains, when I plumbed my
own genes, why boyhood memories of fleeing from my grandfather
convinced me to conceal any hints about Parkinson’s disease.
During the writing of this book, however, I discovered that Craig Venter
had published everything about his genome, uncensored. Even if releasing
it publicly seemed foolhardy, I admired his aplomb in facing down his
DNA. His example fortified me, and every day that passed, the discrepancy
between what I’d concluded (that people should indeed face down their
genes) and how I was behaving (hiding from my Parkinson’s status) nagged
me more and more. So eventually I sucked it up, logged on to the testing
company, and clicked to break the electronic seal on that result.
Admittedly, it took another few seconds before I could look up from my
lap to the screen. As soon as I did, I felt a narcotic of relief flood through
me. I felt my shoulders and limbs unwind: according to the company, I had
no increased risk for Parkinson’s after all.
I whooped. I rejoiced—but should I have? There was a definite irony in
my happiness. Genes don’t deal in certainties; they deal in probabilities.
That was my mantra before I peeked, my way of convincing myself that
even if I had the risky DNA, it wouldn’t inevitably ravage my brain. But
when things looked less grim suddenly, I happily dispensed with
uncertainty, happily ignored the fact that lower-risk DNA doesn’t mean I’ve
inevitably escaped anything. Genes deal in probabilities, and some
probability still existed. I knew this—and for all that, my relief was no less
real. It’s the paradox of personal genetics.
Over the next months, I shooed away this inconvenient little cognitive
dissonance and concentrated on finishing the book, forgetting that DNA
always gets the last word. On the day I dotted the last i, the testing company
announced some updates to old results, based on new scientific studies. I
pulled up my browser and started scrolling. I’d seen previous rounds of
updates before, and in each case the new results had merely corroborated
what I’d already learned; my risks for things had certainly never changed
much. So I barely hesitated when I saw an update for Parkinson’s. Fortified
and foolhardy, I clicked right through.
Before my mind registered anything, my eyes lit on some green letters in
a large font, which reinforced my complacency. (Only red lettering would
have meant watch out.) So I had to read the accompanying text a few times
before I grasped it: “Slightly higher odds of developing Parkinson’s
disease.”
Higher? I looked closer. A new study had scrutinized DNA at a different
spot in the genome from the results I’d seen before. Most Caucasian people
like me have either CT or TT at the spot in question, on chromosome four. I
had (per the fat green letters) CC there. Which meant, said the study, higher
odds.
I’d been double-crossed. To expect a genetic condemnation and receive
it in due course is one thing. But to expect a condemnation, get pardoned,
and find myself condemned again? Infinitely more torture.
Somehow, though, receiving this genetic sentence didn’t tighten my
throat as it should have. I felt no panic, either, no fight-or-flight jolt of
neurotransmitters. Psychologically, this should have been the worst possible
thing to endure—and yet my mind hadn’t erupted. I wasn’t exactly pumped
up about the news, but I felt more or less tranquil, untroubled.
So what happened between the first revelation and the second, the setup
and the would-be knockdown? Without sounding too pompous, I guess I
got an education. I knew now that for a complex disease like Parkinson’s—
subject to the sway of many genes—any one gene probably contributes
little to my risk. I then investigated what a “slightly higher” risk meant,
anyway—just 20 percent, it turns out. And that’s for a disease that affects
(as further digging revealed) only 1.6 percent of men anyway. The new
study was also, the company admitted, “preliminary,” subject to
amendments and perhaps outright reversals. I might still be saddled with
Parkinson’s as an old man; but somewhere in the generational shuffling of
genes, somewhere between Grandpa Kean and Gene and Jean, the
dangerous bits might well have been dealt out—and even if they’re still
lurking, there’s no guarantee they’ll flare up. There’s no reason for the little
boy in me to keep fleeing.
It had finally penetrated my skull: probabilities, not certainties. I’m not
saying personal genetics is useless. I’m glad to know, for instance (as other
studies tell me), that I face higher odds of developing prostate cancer, so I
can always make sure the doctor dons a rubber glove to check for that as I
age. (Something to look forward to.) But in the clinic, for a patient, genes
are just another tool, like blood work or urinalysis or family history. Indeed,
the most profound changes that genetic science brings about likely won’t be
instant diagnoses or medicinal panaceas but mental and spiritual enrichment
—a more expansive sense of who we humans are, existentially, and how we
fit with other life on earth. I enjoyed having my DNA sequenced and would
do it again, but not because I might gain a health advantage. It’s more that
I’m glad I was here, am here, in the beginning.
ACKNOWLEDGMENTS
First off, a thank-you to my loved ones. To Paula, who once again held my
hand and laughed with me (and at me when I deserved it). To my two
siblings, two of the finest people around, lucky additions to my life. To all
my other friends and family in D.C. and South Dakota and around the
country, who helped me keep perspective. And finally to Gene and Jean,
whose genes made this book possible. :)
I would furthermore like to thank my agent, Rick Broadhead, for
embarking on another great book with me. And thank you as well to my
editor at Little, Brown, John Parsley, who helped shape and improve the
book immensely. Also invaluable were others at and around Little, Brown
who’ve worked with me on this book and on Spoon, including William
Boggess, Carolyn O’Keefe, Morgan Moroney, Peggy Freudenthal, Bill
Henry, Deborah Jacobs, Katie Gehron, and many others. I offer thanks, too,
to the many, many scientists and historians who contributed to individual
chapters and passages, either by fleshing out stories, helping me hunt down
information, or offering their time to explain something. If I’ve left anyone
off this list, my apologies. I remain thankful, if embarrassed.
ABOUT THE AUTHOR
SAM KEAN is a writer in Washington, D.C. He is the author of the New York
Times national bestseller The Disappearing Spoon, which was also a
runner-up for the Royal Society’s book of the year for 2011. His work has
appeared in the New York Times Magazine, Mental Floss, Slate, and New
Scientist, and has been featured on NPR’s Radiolab and All Things
Considered.
http://samkean.com
ALSO BY SAM KEAN
The Disappearing Spoon
NOTES AND ERRATA
Chapter 1: Genes, Freaks, DNA
The 3:1 ratio: Welcome to the endnotes! Wherever you see an asterisk
(*) in the text, you can flip back here to find digressions, discussions,
scuttlebutt, and errata about the subject at hand. If you want to flip back
immediately for each note, go right ahead; or if you prefer, you can wait
and read all the notes after finishing each chapter, as a sort of afterword.
This first endnote provides a refresher for Mendelian ratios, so if you’re
comfortable with that, feel free to move along. But do flip back again. The
notes get more salacious. Promise.
The refresher: Mendel worked with dominant traits (like tallness, capital
A) and recessive traits (like shortness, lowercase a). Any plant or animal
has two copies of each gene, one from Mom, one from Dad. So when
Mendel crossed AA plants with aa plants (below, left), the progeny were all
Aa and therefore all tall (since A dominates a):
  |  A |  A |        |  A |  a |
a | Aa | Aa |      A | AA | Aa |
a | Aa | Aa |      a | Aa | aa |
When Mendel crossed one Aa with another (above, right), things got
more interesting. Each Aa can pass down A or a, so there are four
possibilities for the offspring: AA, Aa, aA, and aa. The first three are again
tall, but the last one will be short, though it came from tall parents. Hence a
3:1 ratio. And just to be clear, the ratio holds in plants, animals, whatever;
there’s nothing special about peas.
The other standard Mendelian ratio comes about when Aa mates with
aa. In this case, half the children will be aa and won’t show the dominant
trait. Half will be Aa and will show it.
  |  A |  a |
a | Aa | aa |
a | Aa | aa |
This 1:1 pattern is especially common in family trees when a dominant
A trait is rare or arises spontaneously through a mutation, since every rare
Aa would have to mate with the more common aa.
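If you’d rather watch both ratios fall out of chance than read them off the
squares, here’s a toy simulation in Python: a sketch of the logic above,
nothing Mendel himself ever ran, in which each parent simply donates one
allele at random, many thousands of times.

    import random
    from collections import Counter

    def cross(parent1, parent2, n=100_000):
        """Toy Mendelian cross: each parent passes down one allele at random."""
        offspring = Counter()
        for _ in range(n):
            genotype = "".join(sorted(random.choice(parent1) + random.choice(parent2)))
            offspring[genotype] += 1
        return offspring

    print(cross("Aa", "Aa"))  # about 1 AA : 2 Aa : 1 aa, so tall vs. short lands near 3:1
    print(cross("Aa", "aa"))  # about 1 Aa : 1 aa, the 1:1 pattern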
Overall the 3:1 and 1:1 ratios pop up again and again in classic genetics.
If you’re curious, scientists identified the first recessive gene in humans in
1902, for a disorder that turned urine black. Three years later, they pinned
down the first dominant human gene, for excessively stubby fingers.
Chapter 2: The Near Death of Darwin
until the situation blew over: The details about Bridges’s private life
appear in Lords of the Fly, by Robert Kohler.
to proceed by jumps: When they were both young, in the 1830s,
Darwin convinced his first cousin, Francis Galton, to drop out of medical
school and take up mathematics instead. Darwin’s later defenders must have
rued this advice many, many times, for it was Galton’s pioneering statistical
work on bell curves—and Galton’s relentless arguments, based on that work
—that most seriously undermined Darwin’s reputation.
As detailed in A Guinea Pig’s History of Biology, Galton had gathered
some of his evidence for bell curves in his typically eccentric way, at the
International Health Exhibition in London in 1884. The expo was as much a
social affair as a scientific endeavor: as patrons wandered through exhibits
about sanitation and sewers, they gulped mint juleps and spiked arrack
punch and kumiss (fermented mare’s milk produced by horses on-site) and
generally had a gay old time. Galton set up a booth at the expo as well and
doggedly measured the stature, eyesight, and hearing of nine thousand
occasionally intoxicated Englishmen. He also tested their strength with
fairground games that involved punching and squeezing various
contraptions, a task that proved more difficult than Galton had anticipated:
oafs who didn’t understand the equipment constantly broke it, and others
wanted to show off their strength and impress girls. It was a true fairground
atmosphere, but Galton had little fun: he later described “the stupidity and
wrong-headedness” of his fellow fairgoers as “so great as to be scarcely
credible.” But as expected, Galton gathered enough data to confirm that
human traits also formed bell curves. The finding further bolstered his
confidence that he, not Cousin Charles, understood how evolution
proceeded, and that small variations and small changes played no important
role.
This wasn’t the first time Galton had thwarted Darwin, either. From the
day he published On the Origin of Species, Darwin was aware that his
theory lacked something, badly. Evolution by natural selection requires
creatures to inherit favorable traits, but no one (save an obscure monk) had
any idea how that worked. So Darwin spent his last years devising a theory,
pangenesis, to explain that process.
Pangenesis held that each organ and limb pumped out microscopic
spores called gemmules. These circulated inside a creature, carrying
information about both its inborn traits (its nature) and also any traits it
acquired during its lifetime (its environment, or nurture). These gemmules
got filtered out by the body’s erogenous zones, and copulation allowed male
and female gemmules to mix like two drops of water when males deposited
their semen.
Although ultimately mistaken, pangenesis was an elegant theory. So
when Galton designed an equally elegant experiment to hunt for gemmules
in rabbits, Darwin heartily encouraged the emprise. His hopes were soon
dashed. Galton reasoned that if gemmules circulated, they must do so in the
blood. So he began transfusing blood among black, white, and silver hares,
hoping to produce a few mottled mongrels when they had children. But
after years of breeding, the results were pretty black-and-white: not a single
multishaded rabbit appeared. Galton published a quickie scientific paper
suggesting that gemmules didn’t exist, at which point the normally
avuncular Darwin went apoplectic. The two men had been warmly
exchanging letters for years on scientific and personal topics, often
flattering each other’s ideas. This time Darwin lit into Galton, fuming that
he’d never once mentioned gemmules circulating in blood, so transfusing
blood among rabbits didn’t prove a damn thing.
On top of being disingenuous—Darwin hadn’t said boo about blood not
being a good vehicle for gemmules when Galton was doing all the work—
Darwin was deceiving himself here. Galton had indeed destroyed
pangenesis and gemmules in one blow.
together on one chromosome: Sex-linked recessive traits like these
show up more often in males than in females for a simple reason. An XX
female with a rare white-eye gene on one X would almost certainly have the
red-eye gene on the other X. Since red dominates white, she wouldn’t have
white eyes. But an XY male has no backup if he gets the white-eye gene on
his X; he will be white-eyed by default. Geneticists call females with one
recessive version “carriers,” and they pass the gene to half their male
children. In humans hemophilia is one example of a sex-linked trait,
Sturtevant’s red-green color blindness another.
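Here’s that logic as a quick sketch, with made-up labels for the chromosomes
(XW for an X carrying the dominant red-eye allele, Xw for an X carrying the
recessive white-eye allele):

    from itertools import product

    carrier_mother = ["XW", "Xw"]   # one good copy, one white-eye copy
    redeyed_father = ["XW", "Y"]    # no backup on the Y

    for egg, sperm in product(carrier_mother, redeyed_father):
        child = (egg, sperm)
        sex = "son" if "Y" in child else "daughter"
        white_eyed = "XW" not in child   # white eyes only if no dominant allele is present
        print(child, sex, "white-eyed" if white_eyed else "red-eyed")
    # Half the sons come out white-eyed; the daughters are at worst carriers.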
produce millions of descendants: Many different books talk a bit about
the fly room, but for the full history, check out A Guinea Pig’s History of
Biology, by Jim Endersby, one of my favorite books ever. Endersby also
touches on Darwin’s adventures with gemmules, Barbara McClintock (from
chapter 5), and other fascinating tales.
his reputation would never lapse again: A historian once wisely noted
that “in reading Darwin, as in reading Shakespeare or the Bible, it is
possible to support almost any viewpoint desirable by focusing on certain
isolated passages.” So you have to be careful when drawing broad
conclusions from Darwin quotes. That said, Darwin’s antipathy for math
seemed genuine, and some have suggested that even elementary equations
frustrated him. In one of history’s ironies, Darwin ran his own experiments
on plants in the primrose genus, just like de Vries, and came up with clear
3:1 ratios among offspring traits. He obviously wouldn’t have linked this to
Mendel, but he seems not to have grasped that ratios might be important at
all.
inside fruit fly spit glands: Drosophila go through a pupa stage where
they encase themselves in gluey saliva. To get as many saliva-producing
genes going as possible, salivary-gland cells repeatedly double their
chromosomes, which creates gigantic “puff chromosomes,” chromosomes
of truly Brobdingnagian stature.
Chapter 3: Them’s the DNA Breaks
the “Central Dogma” of molecular biology: Despite its regal name,
the Central Dogma has a mixed legacy. At first, Crick intended the dogma
to mean something general like DNA makes RNA, RNA makes proteins.
Later he reformulated it more precisely, talking about how “information”
flowed from DNA to RNA to protein. But not every scientist absorbed the
second iteration, and just like old-time religious dogmas, this one ended up
shutting down rational thought among some adherents. “Dogma” implies
unquestionable truth, and Crick later admitted, roaring with laughter, that he
hadn’t even known the definition of dogma when he defined his—it just
sounded learned. Other scientists paid attention in church, however, and as
word of this supposedly inviolable dogma spread, it transmogrified in many
people’s minds into something less precise, something more like DNA exists
just to make RNA, RNA just to make proteins. Textbooks sometimes refer to
this as the Central Dogma even today. Unfortunately this bastardized dogma
seriously skews the truth. It hindered for decades (and still occasionally
hinders) the recognition that DNA and especially RNA do much, much
more than make proteins.
Indeed, while basic protein production requires messenger RNA
(mRNA), transfer RNA (tRNA), and ribosomal RNA (rRNA), dozens of
other kinds of regulatory RNA exist. Learning about all the different
functions of RNA is like doing a crossword puzzle when you know the last
letters of an answer but not the opening, and you run through the alphabet
under your breath. I’ve seen references to aRNA, bRNA, cRNA, dRNA,
eRNA, fRNA, and so on, even the scrabbulous qRNA and zRNA. There’s
also rasiRNA and tasiRNA, piRNA, snoRNA, the Steve Jobs–ish RNAi,
and others. Thankfully, mRNA, rRNA, and tRNA cover all the genetics
we’ll need in this book.
can represent the same amino acid: To clarify, each triplet represents
only one amino acid. But the inverse is not true, because some amino acids
are represented by more than one triplet. As an example, GGG can only be
glycine. But GGU, GGC, and GGA also code for glycine, and that’s where
the redundancy comes in, because we really don’t need all four.
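To make the redundancy concrete, here’s a toy lookup covering only glycine’s
four codons (a real codon table has sixty-four entries; these four are just
the ones mentioned above):

    # Glycine's slice of the RNA codon table, and nothing more.
    GLYCINE_CODONS = {"GGU", "GGC", "GGA", "GGG"}

    for codon in ("GGG", "GGU", "GGC", "GGA"):
        amino_acid = "glycine" if codon in GLYCINE_CODONS else "something else"
        print(codon, "->", amino_acid)
    # Four different triplets, one amino acid: that's the redundancy.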
onto the succeeding generation: A few other events in history have
exposed masses of people to radioactivity, most notoriously at the
Chernobyl nuclear power plant in modern Ukraine. The 1986 Chernobyl
meltdown exposed people to different types of radioactivity than the
Hiroshima and Nagasaki bombs—fewer gamma rays and more radioactive
versions of elements like cesium, strontium, and iodine, which can invade
the body and unload on DNA at short range. Soviet officials compounded
the problem by allowing crops to be harvested downwind of the accident
and allowing cows to graze on exposed grass, then letting people eat and
drink the contaminated milk and produce. The Chernobyl region has
already reported some seven thousand cases of thyroid cancer, and medical
officials expect sixteen thousand extra cancer deaths over the next few
decades, an increase of 0.1 percent over background cancer levels.
And in contrast to Hiroshima and Nagasaki, the DNA of children of
Chernobyl victims, especially children of men near Chernobyl, does show
signs of increased mutations. These results remain disputed, but given the
different exposure patterns and dosage levels—Chernobyl released
hundreds of times more radioactivity than either atomic bomb—they could
be real. Whether those mutations actually translate to long-term health
problems among Chernobyl babies remains to be seen. (As an imperfect
comparison, some plants and birds born after Chernobyl showed high
mutation rates, but most seemed to suffer little for that.)
Sadly, Japan will now have to monitor its citizens once again for the
long-term effects of fallout because of the breach of the Fukushima Daiichi
nuclear power plant in spring 2011. Early government reports (some of
which have been challenged) indicate that the damage was contained to an
area one-tenth the size of Chernobyl’s exposure footprint, mostly because
radioactive elements at Chernobyl escaped into the air, while in Japan the
ground and water absorbed them. Japan also intercepted most contaminated
food and drink near Fukushima within six days. As a result, medical experts
suspect the total number of cancer deaths in Japan will be correspondingly
small—around one thousand extra deaths over the next few decades,
compared to the twenty thousand who died in the earthquake and tsunami.
just beginning to explore: For a full account of Yamaguchi’s story—
and for eight other equally riveting tales—see Nine Who Survived
Hiroshima and Nagasaki, by Robert Trumbull. I can’t recommend it highly
enough.
For more detail on Muller and many other players in early genetics
(including Thomas Hunt Morgan), check out the wonderfully
comprehensive Mendel’s Legacy, by Elof Axel Carlson, a former student of
Muller’s.
For a detailed but readable account of the physics, chemistry, and
biology of how radioactive particles batter DNA, see Radiobiology for the
Radiologist, by Eric J. Hall and Amato J. Giaccia. They also discuss the
Hiroshima and Nagasaki bombs specifically.
Finally, for an entertaining rundown of early attempts to decipher the
genetic code, I recommend Brian Hayes’s “The Invention of the Genetic
Code” in the January–February 1998 issue of American Scientist.
Chapter 4: The Musical Score of DNA
even meant, if anything: Zipf himself believed that his law revealed
something universal about the human mind: laziness. When speaking, we
want to expend as little energy as possible getting our points across, he
argued, so we use common words like bad because they’re short and pop
easily to mind. What prevents us from describing every last coward, rogue,
scuzzbag, bastard, malcontent, coxcomb, shit-for-brains, and misanthrope
as “bad” is other people’s laziness, since they don’t want to mentally parse
every possible meaning of the word. They want precision, pronto. This tug-
of-war of slothfulness results in languages where common words do the
bulk of the work, but rarer and more descriptive words must appear now
and then to appease the damn readers.
That’s clever as far as it goes, but many researchers argue that any
“deep” explanation of Zipf’s law is, to use another common word, crap.
They point out that something like a Zipfian distribution can arise in almost
any chaotic situation. Even computer programs that spit out random letters
and spaces—digital orangutans banging typewriters—can show Zipfian
distributions in the resulting “words.”
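If you want to watch a digital orangutan at work, here’s a rough sketch in
Python. It’s a toy experiment (a small alphabet, purely random characters),
not a proof, but the resulting “word” counts do fall off in a Zipf-ish,
power-law sort of way.

    import random
    from collections import Counter

    random.seed(42)
    alphabet = "abcde "              # a small alphabet keeps the random "words" short
    text = "".join(random.choice(alphabet) for _ in range(500_000))
    frequencies = [count for _, count in Counter(text.split()).most_common()]

    for rank in (1, 2, 4, 8, 16, 32, 64, 128):
        if rank <= len(frequencies):
            print(rank, frequencies[rank - 1])
    # The counts drop off roughly as a power of the rank: step-like up close,
    # but Zipf-ish on a log-log plot.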
evolution creeps forward: The analogy between genetic language and
human language seems fuzzy to some, almost too cute to be true. Analogies
can always be taken too far, but I think that some of this dismissal stems
from our sort-of-selfish tendency to think that language can only be sounds
that humans make. Language is wider than just us: it’s the rules that govern
any communication. And cells as surely as people can take feedback from
their environment and adjust what they “say” in response. That they do so
with molecules instead of air-pressure waves (i.e., sound) shouldn’t
prejudice us. Recognizing this, a few recent cellular biology textbooks have
included chapters on Chomsky’s theories about the underlying structure of
languages.
sator… rotas: The palindrome means something like “The farmer Arepo
works with his plow,” with rotas, literally “wheels,” referring to the back-
and-forth motion that plows make as they till. This “magic square” has
delighted enigmatologists for centuries, but scholars have suggested it
might have served another purpose during imperial Roman reigns of terror.
An anagram of these twenty-five letters spells out paternoster, “Our
Father,” twice, in an interlocking cross. The four letters left over from the
anagram, two a’s and two o’s, could then refer to the alpha and omega
(famous later from the Book of Revelation). The theory is that, by sketching
this innocuous palindrome on their doors, Christians could signal each other
without arousing Roman suspicion. The magic square also reportedly kept
away the devil, who traditionally (so said the church) got confused when he
read palindromes.
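If you’d like to check the letter arithmetic yourself, a few lines of Python
will do it. This verifies only the counts, of course, not what the square’s
makers intended; the trick is that the two paternosters share their central n
in the cross.

    from collections import Counter

    square = "satorarepotenetoperarotas"      # the magic square's twenty-five letters
    cross = Counter("paternoster") + Counter("paternoster")
    cross.subtract("n")                       # the two paternosters share one central n
    cross.update("aaoo")                      # plus the leftover alphas and omegas
    print(Counter(square) == +cross)          # True: the letters match exactly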
his boss’s lunatic projects: Friedman’s boss, “Colonel” George Fabyan,
had quite a life. Fabyan’s father started a cotton company called Bliss
Fabyan and groomed Fabyan to take over. But succumbing to wanderlust,
the boy ran away to work as a Minnesota lumberjack instead, and his
outraged and betrayed father disinherited him. After two years, Fabyan tired
of playing Paul Bunyan and decided to get back into the family business—
by applying, under an assumed name, to a Bliss Fabyan office in St. Louis.
He quickly set all sorts of sales records, and his father at corporate HQ in
Boston soon summoned this young go-getter to his office to talk about a
promotion. In walked his son.
After this Shakespearean reunion, Fabyan thrived in the cotton business
and used his wealth to open the think tank. He funded all sorts of research
over the years but fixated on Shakespeare codes. He tried to publish a book
after he supposedly broke the code, but a filmmaker working on some
adaptations of Shakespeare sued to stop publication, arguing that its
contents would “shatter” Shakespeare’s reputation. For whatever reason, the
local judge took the case—centuries of literary criticism apparently fell
under his jurisdiction—and, incredibly, sided with Fabyan. His decision
concluded, “Francis Bacon is the author of the works so erroneously
attributed to William Shakespeare,” and he ordered the film producer to pay
Fabyan $5,000 in damages.
Most scholars look on arguments against Shakespeare’s authorship about
as kindly as biologists do on theories of maternal impressions. But several
U.S. Supreme Court justices, most recently in 2009, have also voiced
opinions that Shakespeare could not have written his plays. The real lesson
here is that lawyers apparently have different standards of truth and
evidence than scientists and historians.
to rip off casinos at roulette: The casino gambit never paid off. The
idea started with the mathematician Edward Thorp, who in 1960 recruited
Shannon to help him. At the roulette table, the two men worked as a team,
though they pretended not to know each other. One watched the roulette
ball as it spun around the wheel and noted the exact moment it passed
certain points. He then used a toe-operated switch in his shoe to send
signals to the small computer in his pocket, which in turn transmitted radio
signals. The other man, wearing an earpiece, heard these signals as musical
notes, and based on the tune, he would know where to toss his money. They
painted any extruding wires (like the earpiece’s) the color of flesh and
pasted the wires to their skin with spirit gum.
Thorp and Shannon calculated an expected yield of 44 percent from
their scheme, but Shannon turned chicken on their first test run in a casino
and would only place dime bets. They won more often than not, but,
perhaps after eyeing some of the heavies manning the casino door, Shannon
lost his appetite for the enterprise. (Considering that the two men had
ordered a $1,500 roulette wheel from Reno to practice, they probably lost
money on the venture.) Abandoned by his partner, Thorp published his
work, but it apparently took a number of years before casinos banned
portable electronics outright.
Chapter 5: DNA Vindication
an inside-out (and triple-stranded): For an account of the
embarrassment and scorn Watson and Crick endured for this odd DNA
model, please see my previous book, The Disappearing Spoon.
complex and beautiful life: For a more detailed account of Miriam’s
life, I highly recommend The Soul of DNA, by Jun Tsuji.
the oldest matrilineal ancestor: Using this logic, scientists also know
that Mitochondrial Eve had a partner. All males inherit Y chromosomes
from their fathers alone, since females lack the Y. So all men can trace
strictly paternal lines back to find this Y-chromosomal Adam. The kicker is
that, while simple laws of mathematics prove that this Adam and Eve must
have existed, the same laws reveal that Eve lived tens of thousands of years
earlier than Adam. So the Edenic couple could never have met, even if you
take into account the extraordinary life expectancies in the Bible.
By the by, if we relax the strictly patrilineal or strictly matrilineal bit and
look for the last ancestor who—through men or women—passed at least
some DNA to every person alive today, that person lived only about five
thousand years ago, long after humans had spread over the entire earth.
Humans are strongly tribal, but genes always find a way to spread.
got downgraded: Some historians argue that McClintock struggled to
communicate her ideas partly because she couldn’t draw, or at least didn’t.
By the 1950s molecular biologists and geneticists had developed highly
stylized cartoon flowcharts to describe genetic processes. McClintock, from
an older generation, never learned their drawing conventions, a deficit that
—combined with the complexity of maize in the first place—might have
made her ideas seem too convoluted. Indeed, some of McClintock’s students
say they can’t remember her ever drawing a diagram to explain anything. She
was simply a verbal person, rooted in logos.
Compare this to Albert Einstein, who always maintained that he thought
in pictures, even about the fundamentals of space and time. Charles Darwin
was of McClintock’s ilk. He included just one picture, of a tree of life, in
the hundreds of pages of On the Origin of Species, and one historian who
studied Darwin’s original notebook sketches of plants and animals
acknowledged he was a “terrible drawer.”
she withdrew from science: If you’re interested in learning more about
the reception of McClintock’s work, the scholar most responsible for
challenging the canonical version of her life’s story is Nathaniel Comfort.
Chapter 6: The Survivors, the Livers
producing the very Cyclops: Most children born with cyclopia (the
medical term) don’t live much past delivery. But a girl born with cyclopia in
India in 2006 astounded doctors by surviving for at least two weeks, long
enough for her parents to take her home. (No further information about her
survival was available after the initial news reports.) Given the girl’s classic
symptoms—an undivided brain, no nose, and a single eye—it was almost
certain that sonic hedgehog had malfunctioned. And sure enough, news
outlets reported that the mother had taken an experimental cancer drug that
blocks sonic.
Maurice of Nassau: Prince Mo belonged to the dynastic House of
Orange in the Netherlands, a family with an unusual (and possibly
apocryphal) legend attached to its name. Centuries ago, wild carrots were
predominantly purple. But right around 1600, Dutch carrot farmers,
indulging in old-fashioned genetic engineering, began to breed and cultivate
some mutants that happened to have high concentrations of the vitamin A
variant beta carotene—and in doing so developed the first orange carrots.
Whether farmers did this on their own or (as some historians claim) to
honor Maurice’s family remains unknown, but they forever changed the
texture, flavor, and color of this vegetable.
German biologist August Weismann: Although an undisputed brainiac
and hall-of-fame biologist, Weismann once claimed—uproariously, given
the book’s mammoth size—to have read On the Origin of Species in one
sitting.
a fifth official letter to the DNAlphabet: A few scientists have even
expanded the alphabet to six, seven, or eight letters, based on chemical
variations of methylated cytosine. Those letters are called (if you’re into the
whole brevity thing) hmC, fC, and caC. It’s not clear, though, whether these
“letters” function independently or are just intermediate steps in the
convoluted process by which cells strip the m from mC.
and Arctic huskies: The tale of the husky liver is dramatic, involving a
doomed expedition to reach the South Pole. I won’t expand on the story
here, but I have written something up and posted it online at
http://samkean.com/thumb-notes. My website also contains links to tons of
pictures (http://samkean.com/thumb-pictures), as well as other notes a little
too digressive to include even here. So if you’re interested in reading about
Darwin’s role in musicals, perusing an infamous scientific fraud’s suicide
note, or seeing painter Henri Toulouse-Lautrec nude on a public beach, take
a look-see.
carried the men home to the Netherlands: Europeans did not set eyes
on the Huys again until 1871, when a party of explorers tracked it down.
The white beams were green with lichen, and they found the hut sealed
hermetically in ice. The explorers recovered, among other detritus, swords,
books, a clock, a coin, utensils, “muskets, a flute, the small shoes of the
ship’s boy who had died there, and the letter Barents put up the chimney for
safekeeping” to justify what some might see as a cowardly decision to
abandon his ship on the ice.
Chapter 7: The Machiavelli Microbe
the “RNA world” theory: Though RNA probably preceded DNA,
other nucleic acids—like GNA, PNA, or TNA—might have preceded both
of them. DNA builds its backbone from ringed deoxyribose sugars, which
are more complicated than the building blocks likely available on the
primordial earth. Glycol nucleic acid and peptide nucleic acid look like
better candidates because neither uses ringed sugars for its vertebrae. (PNA
doesn’t use phosphates either.) Threose nucleic acid does use ringed sugars,
but again, sugars simpler than DNA’s. Scientists suspect those simpler
backbones proved more robust as well, giving these ’NAs an advantage
over DNA on the sun-scorched, semimolten, and oft-bombarded early earth.
viruses that infect only other parasites: This idea of parasites feasting
on parasites always puts me in mind of a wonderful bit of doggerel by
Jonathan Swift:
Great fleas have little fleas upon their backs to bite ’em,
And little fleas have lesser fleas, and so ad infinitum.
And the great fleas themselves, in turn, have greater fleas to go on,
While these again have greater still, and greater still, and so on.
SELECTED BIBLIOGRAPHY
Here’s a list of books and papers I consulted while writing The Violinist’s
Thumb. The ones I recommend especially for further reading are marked with
an asterisk and briefly annotated.
Chapter 1: Genes, Freaks, DNA
Bondeson, Jan. A Cabinet of Medical Curiosities. W. W. Norton, 1999.
*Contains an astounding chapter on maternal impressions, including the
fish boy of Naples.
Darwin, Charles. On the Origin of Species. Introduction by John Wyon
Burrow. Penguin, 1985.
———. The Variation of Animals and Plants Under Domestication. J.
Murray, 1905.
Henig, Robin Marantz. The Monk in the Garden. Houghton Mifflin
Harcourt, 2001.
*A wonderful general biography of Mendel.
Lagerkvist, Ulf. DNA Pioneers and Their Legacy. Yale University Press,
1999.
Leroi, Armand Marie. Mutants: On Genetic Variety and the Human Body.
Penguin, 2005.
*A fascinating account of maternal impressions, including the lobster
claw–like birth defects.
Chapter 2: The Near Death of Darwin
Carlson, Elof Axel. Mendel’s Legacy. Cold Spring Harbor Laboratory Press,
2004.
*Loads of anecdotes about Morgan, Muller, and many other key players
in early genetics, by a student of Muller’s.
Endersby, Jim. A Guinea Pig’s History of Biology. Harvard University
Press, 2007.
*A marvelous history of the fly room. One of my favorite books ever, in
fact. Endersby also touches on Darwin’s adventures with gemmules,
Barbara McClintock, and other tales.
Gregory, Frederick. The Darwinian Revolution. DVDs. Teaching Company,
2008.
Hunter, Graeme K. Vital Forces. Academic Press, 2000.
Kohler, Robert E. Lords of the Fly. University of Chicago Press, 1994.
*Includes details about Bridges’s private life, like the anecdote about his
Indian “princess.”
Steer, Mark, et al., eds. Defining Moments in Science. Cassell Illustrated,
2008.
Chapter 3: Them’s the DNA Breaks
Hall, Eric J., and Amato J. Giaccia. Radiobiology for the Radiologist.
Lippincott Williams and Wilkins, 2006.
*A detailed but readable account of how exactly radioactive particles
batter DNA.
Hayes, Brian. “The Invention of the Genetic Code.” American Scientist,
January–February 1998.
*An entertaining rundown of early attempts to decipher the genetic
code.
Judson, Horace F. The Eighth Day of Creation. Cold Spring Harbor
Laboratory Press, 2004.
*Includes the story of Crick not knowing what dogma meant.
Seachrist Chiu, Lisa. When a Gene Makes You Smell Like a Fish. Oxford
University Press, 2007.
Trumbull, Robert. Nine Who Survived Hiroshima and Nagasaki. Dutton,
1957.
*For a fuller account of Yamaguchi’s story—and for eight other equally
riveting tales—I can’t recommend this book enough.
Chapter 4: The Musical Score of DNA
Flapan, Erica. When Topology Meets Chemistry. Cambridge University
Press, 2000.
Frank-Kamenetskii, Maxim D. Unraveling DNA. Basic Books, 1997.
Gleick, James. The Information. HarperCollins, 2011.
Grafen, Alan, and Mark Ridley, eds. Richard Dawkins. Oxford University
Press, 2007.
Zipf, George K. Human Behavior and the Principle of Least Effort.
Addison-Wesley, 1949.
———. The Psycho-biology of Language. Routledge, 1999.
Chapter 5: DNA Vindication
Comfort, Nathaniel C. “The Real Point Is Control.” Journal of the History
of Biology 32 (1999): 133–62.
*Comfort is the scholar most responsible for challenging the mythic, fairy-
tale version of Barbara McClintock’s life and work.
Tsuji, Jun. The Soul of DNA. Llumina Press, 2004.
*For a more detailed account of Sister Miriam, I highly recommend this
book, which chronicles her life from its earliest days to the very end.
Watson, James. The Double Helix. Penguin, 1969.
*Watson recalls multiple times his frustration over the different shapes
of each DNA base.
Chapter 6: The Survivors, the Livers
Hacquebord, Louwrens. “In Search of Het Behouden Huys.” Arctic 48
(September 1995): 248–56.
Veer, Gerrit de. The Three Voyages of William Barents to the Arctic
Regions. N.p., 1596.
Chapter 7: The Machiavelli Microbe
Berton, Pierre. Cats I Have Known and Loved. Doubleday Canada, 2002.
Dulbecco, Renato. “Francis Peyton Rous.” In Biographical Memoirs, vol.
48. National Academies Press, 1976.
McCarty, Maclyn. The Transforming Principle. W. W. Norton, 1986.
Richardson, Bill. Scorned and Beloved: Dead of Winter Meetings with
Canadian Eccentrics. Knopf Canada, 1997.
Villarreal, Luis. “Can Viruses Make Us Human?” Proceedings of the
American Philosophical Society 148 (September 2004): 296–323.
Chapter 8: Love and Atavisms
Bondeson, Jan. A Cabinet of Medical Curiosities. W. W. Norton, 1999.
*A marvelous section on human tails, from a book chock-full of
gruesome tales from the history of anatomy.
Isoda, T., A. Ford, et al. “Immunologically Silent Cancer Clone
Transmission from Mother to Offspring.” Proceedings of the National
Academy of Sciences of the United States of America 106, no. 42
(October 20, 2009): 17882–85.
Villarreal, Luis P. Viruses and the Evolution of Life. ASM Press, 2005.
Chapter 9: Humanzees and Other Near Misses
Rossiianov, Kirill. “Beyond Species.” Science in Context 15, no. 2 (2002):
277–316.
*For more on Ivanov’s life, this is the most authoritative and least
sensationalistic source.
Chapter 10: Scarlet A’s, C’s, G’s, and T’s
Barber, Lynn. The Heyday of Natural History. Cape, 1980.
*A great source for information about the Bucklands, père and fils.
Carroll, Sean B. Remarkable Creatures. Houghton Mifflin Harcourt, 2009.
Finch, Caleb. The Biology of Human Longevity. Academic Press, 2007.
Finch, Caleb, and Craig Stanford. “Meat-Adaptive Genes Involving Lipid
Metabolism Influenced Human Evolution.” Quarterly Review of
Biology 79, no. 1 (March 2004): 3–50.
Sommer, Marianne. Bones and Ochre. Harvard University Press, 2008.
Wade, Nicholas. Before the Dawn. Penguin, 2006.
*A masterly tour of all aspects of human origins.
Chapter 11: Size Matters
Gould, Stephen Jay. “Wide Hats and Narrow Minds.” In The Panda’s
Thumb. W. W. Norton, 1980.
*A highly entertaining rendition of the story of Cuvier’s autopsy.
Isaacson, Walter. Einstein: His Life and Universe. Simon and Schuster,
2007.
Jerison, Harry. “On Theory in Comparative Psychology.” In The Evolution
of Intelligence. Psychology Press, 2001.
Treffert, D., and D. Christensen. “Inside the Mind of a Savant.” Scientific
American, December 2005.
*A lovely account of Peek by the two scientists who knew him best.
Chapter 12: The Art of the Gene
Leroi, Armand Marie. Mutants: On Genetic Variety and the Human Body.
Penguin, 2005.
*This marvelous book discusses in more detail what specific disease
Toulouse-Lautrec might have had, and also the effect on his art.
Sugden, John. Paganini. Omnibus Press, 1986.
*One of the few biographies of Paganini in English. Short, but well
done.
Chapter 13: The Past Is Prologue—Sometimes
Reilly, Philip R. Abraham Lincoln’s DNA and Other Adventures in
Genetics. Cold Spring Harbor Laboratory Press, 2000.
*Reilly sat on the original committee that studied the feasibility of
testing Lincoln’s DNA. He also delves into the testing of Jewish
people’s DNA, among other great sections.
Chapter 14: Three Billion Little Pieces
Angrist, Misha. Here Is a Human Being. HarperCollins, 2010.
*A lovely and personal rumination on the forthcoming age of genetics.
Shreeve, James. The Genome War. Ballantine Books, 2004.
*If you’re interested in an insider’s account of the Human Genome Project,
Shreeve’s book is the best written and most entertaining I know of.
Sulston, John, and Georgina Ferry. The Common Thread. Joseph Henry
Press, 2002.
Venter, J. Craig. A Life Decoded: My Genome—My Life. Penguin, 2008.
*The story of Venter’s entire life, from Vietnam to the HGP and beyond.
Chapter 15: Easy Come, Easy Go?
Gliboff, Sander. “Did Paul Kammerer Discover Epigenetic Inheritance? No
and Why Not.” Journal of Experimental Zoology 314 (December 15,
2010): 616–24.
Gould, Stephen Jay. “A Division of Worms.” Natural History, February
1999.
*A masterly two-part article about the life of Jean-Baptiste Lamarck.
Koestler, Arthur. The Case of the Midwife Toad. Random House, 1972.
Serafini, Anthony. The Epic History of Biology. Basic Books, 2002.
Vargas, Alexander O. “Did Paul Kammerer Discover Epigenetic
Inheritance?” Journal of Experimental Zoology 312 (November 15,
2009): 667–78.
Chapter 16: Life as We Do (and Don’t) Know It
Caplan, Arthur. “What If Anything Is Wrong with Cloning a Human
Being?” Case Western Reserve Journal of International Law 35 (Fall
2003): 69–84.
Segerstråle, Ullica. Defenders of the Truth. Oxford University Press, 2001.
Wade, Nicholas. Before the Dawn. Penguin, 2006.
*Among other people, Nicholas Wade suggested adding the extra pair
of chromosomes.
CONTENTS
Welcome
Frontispiece
Epigraph
Introduction
PART I
A, C, G, T, AND YOU:
HOW TO READ A GENETIC SCORE
1. Genes, Freaks, DNA: How Do Living Things Pass Down Traits to Their
Children?
2. The Near Death of Darwin: Why Did Geneticists Try to Kill Natural
Selection?
3. Them’s the DNA Breaks: How Does Nature Read—and Misread—
DNA?
4. The Musical Score of DNA: What Kinds of Information Does DNA
Store?
PART II
OUR ANIMAL PAST:
MAKING THINGS THAT CRAWL AND FROLIC AND KILL
5. DNA Vindication: Why Did Life Evolve So Slowly—Then Explode in
Complexity?
6. The Survivors, the Livers: What’s Our Most Ancient and Important
DNA?
7. The Machiavelli Microbe: How Much Human DNA Is Actually Human?
8. Love and Atavisms: What Genes Make Mammals Mammals?
9. Humanzees and Other Near Misses: When Did Humans Break Away
from Monkeys, and Why?
PART III
GENES AND GENIUSES:
HOW HUMANS BECAME ALL TOO HUMAN
10. Scarlet A’s, C’s, G’s, and T’s: Why Did Humans Almost Go Extinct?
11. Size Matters: How Did Humans Get Such Grotesquely Large Brains?
12. The Art of the Gene: How Deep in Our DNA Is Artistic Genius?
PART IV
THE ORACLE OF DNA:
GENETICS IN THE PAST, PRESENT, AND FUTURE
13. The Past Is Prologue—Sometimes: What Can (and Can’t) Genes Teach
Us About Historical Heroes?
14. Three Billion Little Pieces: Why Don’t Humans Have More Genes Than
Other Species?
15. Easy Come, Easy Go? How Come Identical Twins Aren’t Identical?
16. Life as We Do (and Don’t) Know It: What the Heck Will Happen Now?
Epilogue: Genomics Gets Personal
Acknowledgments
About the Author
Also by Sam Kean
Notes and Errata
Selected Bibliography
Copyright
Copyright
All rights reserved. In accordance with the U.S. Copyright Act of 1976, the
scanning, uploading, and electronic sharing of any part of this book without
the permission of the publisher constitute unlawful piracy and theft of the
author’s intellectual property. If you would like to use material from the
book (other than for review purposes), prior written permission must be
obtained by contacting the publisher at permissions@hbgusa.com. Thank
you for your support of the author’s rights.
Little, Brown and Company is a division of Hachette Book Group, Inc., and
is celebrating its 175th anniversary in 2012. The Little, Brown name and
logo are trademarks of Hachette Book Group, Inc.
The publisher is not responsible for websites (or their content) that are not
owned by the publisher.
ISBN 978-0-316-20297-8