…woke up from her yearlong coma to watch a basketball game in her hospital room.

But like surgery, radiation medicine also struggled against its inherent limits. Emil Grubbe had already encountered the first of these limits with his earliest experimental treatments: since X-rays could only be directed locally, radiation was of limited use for cancers that had metastasized. One could double and quadruple the doses of radiant energy, but this did not translate into more cures. Instead, indiscriminate irradiation left patients scarred, blinded, and scalded by doses that had far exceeded tolerability.

The second limit was far more insidious: radiation produced cancers. The very mechanism by which X-rays killed rapidly dividing cells—DNA damage—also created cancer-causing mutations in genes. In the 1910s, soon after the Curies had discovered radium, a New Jersey corporation called U.S. Radium began to mix radium with paint to create a product called Undark—radium-infused paint that emitted a greenish white light at night. Although aware of the many injurious effects of radium, U.S. Radium promoted Undark for clock dials, boasting of glow-in-the-dark watches. Watch painting was a precise and artisanal craft, and young women with nimble, steady hands were commonly employed. These women were encouraged to use the paint without precautions, and to lick the brushes frequently to produce sharp lettering on the watches.

Radium workers soon began to complain of jaw pain, fatigue, and skin and tooth problems. In the late 1920s, medical investigations revealed that the bones in their jaws had necrosed, their tongues had been scarred by irradiation, and many had become chronically anemic (a sign of severe bone marrow damage). Some women, tested with radioactivity counters, were found to be glowing with radioactivity. Over the next decades, dozens of radium-induced tumors sprouted in these exposed workers—sarcomas and leukemias, and bone, tongue, neck, and jaw tumors. In 1927, a group of five severely afflicted women in New Jersey—collectively termed the “Radium girls” by the media—sued U.S. Radium. None of them had yet developed cancers; they were suffering from the more acute effects of radium toxicity—jaw, skin, and tooth necrosis. A year later, the case was settled out of court, with compensation of $10,000 to each of the girls and $600 per year to cover living and medical expenses. The compensation was never fully collected: many of the Radium girls, too weak even to raise their hands to take an oath in court, died of leukemia and other cancers soon after their case was settled.

Marie Curie died of leukemia in July 1934. Emil Grubbe, who had been exposed to somewhat weaker X-rays, also succumbed to the deadly late effects of chronic radiation. By the mid-1940s, Grubbe’s fingers had been amputated one by one to remove necrotic and gangrenous bones, and his face was cut up in repeated operations to remove radiation-induced tumors and premalignant warts. In 1960, at the age of eighty-five, he died in Chicago, with multiple forms of cancer that had spread throughout his body.

The complex intersection of radiation with cancer—cancer-curing at times, cancer-causing at others—dampened the initial enthusiasm of cancer scientists. Radiation was a powerful invisible knife—but still a knife. And a knife, no matter how deft or penetrating, could only reach so far in the battle against cancer. A more discriminating therapy was needed, especially for cancers that were nonlocalized.

In 1932, Willy Meyer, the New York surgeon who had invented the radical mastectomy contemporaneously with Halsted, was asked to address the annual meeting of the American Surgical Association. Gravely ill and bedridden, Meyer knew he would be unable to attend the meeting, but he forwarded a brief, six-paragraph speech to be presented. On May 31, six weeks after Meyer’s death, his letter was read aloud to the roomful of surgeons. There is, in that letter, an unfailing recognition that cancer medicine had reached some terminus, that a new direction was needed. “If a biological systemic after-treatment were added in every instance,” Meyer wrote, “we believe the majority of such patients would remain cured after a properly conducted radical operation.”

Meyer had grasped a deep principle about cancer. Cancer, even when it begins locally, is inevitably waiting to explode out of its confinement. By the time many patients come to their doctor, the illness has often spread beyond surgical control and spilled into the body exactly like the black bile that Galen had envisioned so vividly nearly two thousand years ago.

In fact, Galen seemed to have been right after all—in the accidental, aphoristic way that Democritus had been right about the atom or Erasmus had made a conjecture about the Big Bang centuries before the discovery of galaxies. Galen had, of course, missed the actual cause of cancer. There was no black bile clogging up the body and bubbling out into tumors in frustration. But he had uncannily captured something essential about cancer in his dreamy and visceral metaphor. Cancer was often a humoral disease. Crablike and constantly mobile, it could burrow through invisible channels from one organ to another. It was a “systemic” illness, just as Galen had once made it out to be.

Dyeing and Dying

Those who have not been trained in chemistry or medicine may not realize how difficult the problem of cancer treatment really is. It is almost—not quite, but almost—as hard as finding some agent that will dissolve away the left ear, say, and leave the right ear unharmed. So slight is the difference between the cancer cell and its normal ancestor.

—William Woglom

Life is . . . a chemical incident.

—Paul Ehrlich, as a schoolboy, 1870

A systemic disease demands a systemic cure—but what kind of systemic therapy could possibly cure cancer? Could a drug, like a microscopic surgeon, perform an ultimate pharmacological mastectomy—sparing normal tissue while excising cancer cells? Willy Meyer wasn’t alone in fantasizing about such a magical therapy; generations of doctors before him had dreamed of just such a medicine. But how might a drug coursing through the whole body specifically attack a diseased organ?

Specificity refers to the ability of any medicine to discriminate between its intended target and its host. Killing a cancer cell in a test tube is not a particularly difficult task: the chemical world is packed with malevolent poisons that, even in infinitesimal quantities, can dispatch a cancer cell within minutes. The trouble lies in finding a selective poison—a drug that will kill cancer without annihilating the patient. Systemic therapy without specificity is an indiscriminate bomb. For an anticancer poison to become a useful drug, Meyer knew, it needed to be a fantastically nimble knife: sharp enough to kill cancer yet selective enough to spare the patient.

The hunt for such specific, systemic poisons for cancer was precipitated by the search for a very different sort of chemical. The story begins with colonialism and its chief loot: cotton. In the mid-1850s, as ships from India and Egypt laden with bales of cotton unloaded their goods in English ports, cloth milling boomed into a spectacularly successful business in England, an industry large enough to sustain an entire gamut of subsidiary industries. A vast network of mills sprouted up in the industrial basin of the Midlands, stretching through Glasgow, Lancashire, and Manchester. Textile exports dominated the British economy. Between 1851 and 1857, the export of printed goods from England more than quadrupled—from 6 million to 27 million pieces per year. In 1784, cotton products had represented a mere 6 percent of total British exports; by the 1850s, that proportion had peaked at 50 percent.

The cloth-milling boom set off a boom in cloth dyeing, but the two industries—cloth and color—were oddly out of technological step. Dyeing, unlike milling, was still a preindustrial occupation. Cloth dyes had to be extracted from perishable vegetable sources—rusty carmines from Turkish madder root, or deep blues from the indigo plant—using antiquated processes that required patience, expertise, and constant supervision. Printing on textiles with colored dyes (to produce the ever-popular calico prints, for instance) was even more challenging—requiring thickeners, mordants, and solvents in multiple steps—and often took the dyers weeks to complete. The textile industry thus needed professional chemists to dissolve its bleaches and cleansers, to supervise the extraction of dyes, and to find ways to fasten the dyes on cloth. A new discipline called practical chemistry, focused on synthesizing products for textile dyeing, was soon flourishing in polytechnics and institutes all over London.

In 1856, William Perkin, an eighteen-year-old student at one of these institutes, stumbled on what would soon become a Holy Grail of this industry: an inexpensive chemical dye that could be made entirely from scratch. In a makeshift one-room laboratory in his apartment in the East End of London (“half of a small but long-shaped room with a few shelves for bottles and a table”), Perkin had been boiling nitric acid and benzene in smuggled glass flasks when he precipitated an unexpected reaction. A chemical with the color of pale, crushed violets had formed inside the tubes. In an era obsessed with dye making, any colored chemical was considered a potential dye—and a quick dip of a piece of cotton into the flask revealed that the new chemical could color cotton. Moreover, this new chemical did not bleach or bleed. Perkin called it aniline mauve.

Perkin’s discovery was a godsend for the textile industry. Aniline mauve was cheap and imperishable—vastly easier to produce and store than vegetable dyes. As Perkin soon discovered, its parent compound could act as a molecular building block for other dyes, a chemical skeleton on which a variety of side chains could be hung to produce a vast spectrum of vivid colors. By the mid-1860s, a glut of new synthetic dyes, in shades of lilac, blue, magenta, aquamarine, red, and purple, flooded the cloth factories of Europe. In 1857, Perkin, barely nineteen years old, was inducted into the Chemical Society of London as a full fellow, one of the youngest in its history to be thus honored.

Aniline mauve was discovered in England, but dye making reached its chemical zenith in Germany. In the late 1850s, Germany, a rapidly industrializing nation, had been itching to compete in the cloth markets of Europe and America. But unlike England, Germany had scarcely any access to natural dyes: by the time it had entered the scramble to capture colonies, the world had already been sliced up into so many parts, with little left to divide. German cloth millers thus threw themselves into the development of artificial dyes, hoping to rejoin an industry that they had once almost given up as a lost cause.

Dye making in England had rapidly become an intricate chemical business. In Germany—goaded by the textile industry, cosseted by national subsidies, and driven by expansive economic growth—synthetic chemistry underwent an even more colossal boom. In 1883, the German output of alizarin, the brilliant red chemical that imitated natural carmine, reached twelve thousand tons, dwarfing the amount being produced by Perkin’s factory in London. German chemists rushed to produce brighter, stronger, cheaper chemicals and muscled their way into textile factories all around Europe. By the mid-1880s, Germany had emerged as the champion of the chemical arms race (which presaged a much uglier military one) to become the “dye basket” of Europe.

Initially, the German textile chemists lived entirely in the shadow of the dye industry. But emboldened by their successes, the chemists began to synthesize not just dyes and solvents, but an entire universe of new molecules: phenols, alcohols, bromides, alkaloids, alizarins, and amides, chemicals never encountered in nature. By the late 1870s, synthetic chemists in Germany had created more molecules than they knew what to do with. “Practical chemistry” had become almost a caricature of itself: an industry seeking a practical purpose for the products that it had so frantically raced to invent.

Early interactions between synthetic chemistry and medicine had largely been disappointing. Gideon Harvey, a seventeenth-century physician, had once called chemists the “most impudent, ignorant, flatulent, fleshy, and vainly boasting sort of mankind.” The mutual scorn and animosity between the two disciplines had persisted. In 1849, August Hofmann, William Perkin’s teacher at the Royal College, gloomily acknowledged the chasm between medicine and chemistry: “None of these compounds have, as yet, found their way into any of the appliances of life. We have not been able to use them . . . for curing disease.”

But even Hofmann knew that the boundary between the synthetic world and the natural world was inevitably collapsing. In 1828, a Berlin scientist named Friedrich Wöhler had sparked a metaphysical storm in science by boiling ammonium cyanate, a plain, inorganic salt, and creating urea, a chemical typically produced by the kidneys. The Wöhler experiment—seemingly trivial—had enormous implications. Urea was a “natural” chemical, while its precursor was an inorganic salt. That a chemical produced by natural organisms could be derived so easily in a flask threatened to overturn the entire conception of living organisms: for centuries, the chemistry of living organisms was thought to be imbued with some mystical property, a vital essence that could not be duplicated in a laboratory—a theory called vitalism. Wöhler’s experiment demolished vitalism. Organic and inorganic chemicals, he proved, were interchangeable. Biology was chemistry: perhaps even a human body was no different from a bag of busily reacting chemicals—a beaker with arms, legs, eyes, brain, and soul.

With vitalism dead, the extension of this logic to medicine was inevitable. If the chemicals of life could be synthesized in a laboratory, could they work on living systems? If biology and chemistry were so interchangeable, could a molecule concocted in a flask affect the inner workings of a biological organism?

Wöhler was a physician himself, and with his students and collaborators he tried to backpedal from the chemical world into the medical one. But his synthetic molecules were still much too simple—mere stick figures of chemistry where vastly more complex molecules were needed to intervene on living cells.

But such multifaceted chemicals already existed: the laboratories of the dye factories of Frankfurt were full of them. To build his interdisciplinary bridge between biology and chemistry, Wöhler only needed to take a short day-trip from his laboratory in Göttingen to the labs of Frankfurt. But neither Wöhler nor his students could make that last connection. The vast panel of molecules sitting idly on the shelves of the German textile chemists, the precursors of a revolution in medicine, may as well have been a continent away.

It took a full fifty years after Wöhler’s urea experiment for the products of the dye industry to finally make physical contact with living cells. In 1878, in Leipzig, a twenty-four-year-old medical student, Paul Ehrlich, hunting for a thesis project, proposed using cloth dyes—aniline and its colored derivatives—to stain animal tissues. At best, Ehrlich hoped that the dyes might stain the tissues to make microscopy easier. But to his astonishment, the dyes were far from indiscriminate darkening agents. Aniline derivatives stained only parts of the cell, silhouetting certain structures and leaving others untouched. The dyes seemed able to discriminate among chemicals hidden inside cells—binding some and sparing others.

This molecular specificity, encapsulated so vividly in that reaction between a dye and a cell, began to haunt Ehrlich. In 1882, working with Robert Koch, he discovered yet another novel chemical stain, this time for mycobacteria, the organisms that Koch had discovered as the cause of tuberculosis. A few years later, Ehrlich found that certain toxins, injected into animals, could generate “antitoxins,” which bound and inactivated poisons with extraordinary specificity (these antitoxins would later be identified as antibodies). He purified a potent serum against diphtheria toxin from the blood of horses, then moved to the Institute for Sera Research and Serum Testing in Steglitz to prepare this serum in gallon buckets, and then to Frankfurt to set up his own laboratory.

But the more widely Ehrlich explored the biological world, the more he spiraled back to his original idea. The biological universe was full of molecules picking out their partners like locks fitted to their keys: toxins clinging inseparably to antitoxins, dyes that highlighted only particular parts of cells, chemical stains that could nimbly pick out one class of germs from a mixture of microbes. If biology was an elaborate mix-and-match game of chemicals, Ehrlich reasoned, what if some chemical could discriminate bacterial cells from animal cells—and kill the former without touching the host?

Returning from a conference late one evening, in the cramped compartment of a night train from Berlin to Frankfurt, Ehrlich animatedly described his idea to two fellow scientists: “It has occurred to me…