Chapter 2. The Mist Rises: Malaria in the Nineteenth Century

As the nineteenth century opened, malignant mists were thought to cause malarial fevers; one hundred years later a complex chain of parasite and mosquito explained the disease. Over the course of the century, malaria afflicted the American frontier, helping produce the roughness and hardship that defined frontier life in contrast to more eastern civilization. During at least some decades of the nineteenth century, malaria affected all regions significantly and severely damaged the health of troops in the Civil War. Toward the end of the century, however, the disease retreated, so that by 1900 it was largely a disease of the southern states. What had been a disease of all parts of the United States became in the twentieth century one more indicator of the poverty, backwardness, and unhealthiness of the South.

Malaria on the Frontier

While malaria declined in the northeastern states, it grew briskly along the westward-moving line of the frontier. When the lands across the Appalachians became available for settlement, Euro-Americans flowed through gaps and down rivers that took them into Ohio, Indiana, Illinois, Kentucky, Tennessee, and states farther south. As families traveled for days by flatboat and set up flimsy camps near the water, their transportation connection to goods and markets, malaria blossomed. During the antebellum period the wave of malaria subsided in these initial encampments as sturdier houses were built farther off the water, while at the same time it moved on to even newer camps farther west and north. Malaria, most of it vivax, became a common feature of raw frontier life, defining in part what it meant to be in the woods, beyond civilization, beyond the safe life of “back home.”1 Malaria had traditionally been viewed, at least in the more temperate climates of Europe, as a country disease.
David Ramsay, a South Carolina physician writing in the late eighteenth century, typically noted that intermittent fevers first appeared after an area had been cleared to make way for settlements and farms. So there was an initial stage without disease, followed by chronic ill health from marsh fevers. But as cities such as Charleston grew, the land became progressively better drained and the location healthier. “It has long been observed in the low countries,” Ramsay wrote, “that they who reside in towns, are more healthy than they who live dispersed in the country.”2 Given its association with swamps, malaria declined where people built clusters of houses. It was general knowledge that low, wet lands made bad sites for dwellings or towns; such areas were prone to both flooding and disease. So towns tended to be built on the higher elevations in a region, where the topography encouraged drainage. This tendency was countered by the need for populations to cluster near modes of transportation, and before the advent of railroads in the 1840s, that meant near bodies of water. Some cities, such as Charleston, were favored by sandy soil that drained easily and by surrounding salt marshes that were inhospitable to disease-carrying anophelines. Other cities, just by dint of construction, paving, and drainage, broke the malaria chain by denying anophelines the requisite swampy expanse within a mile of human populations. Hence even in the relatively rural American South and early frontier, as towns became established, malaria withdrew to the countryside. This was evident to medical observers in the South, who sometimes contrasted the preferences of malaria and yellow fever for different stages of settlement. Mobile physician Josiah Clark Nott commented in 1847, for example: “When the forest is first leveled and a town commenced, intermittents and remittents spring up.” So malaria was tied to breaking new ground on the frontier.
Yellow fever, however, came later: “As the population increases, the town spreads, and draining and paving are introduced,” he continued, “yellow fever, the mighty monarch of the South, who scorns the rude field and forest, plants his sceptre in the centre, and drives all other fevers to the outskirts.” This would become a recurring theme throughout malaria’s course in the United States. It was associated with rough, frontier conditions, not with the increasing civilization of towns and cities.

Perhaps the best description of this phenomenon comes to us from the pen of Charles Dickens, who traveled the Ohio and Mississippi Rivers in the spring of 1842 and painted a memorable scene of the raw, primitive frontier lifestyle made bleak and helpless by disease. Dickens’s account was inspired by his visit to Cairo, Illinois, situated at the junction of the two rivers. “[W]e arrived at a spot so much more desolate than any we had yet beheld,” he began in American Notes. There, “on ground so flat and low and marshy, that at certain seasons of the year, it is inundated to the house-tops, lies a breeding-place of fever, ague, and death.” It was a “dismal swamp . . . teeming . . . with rank unwholesome vegetation, in whose baleful shade the wretched wanderers who are tempted hither, droop, and die, and lay their bones.” Cairo was, in his summary, “a place without one single quality, in earth or air or water, to commend it.”

Dickens incorporated this vision into his novel about the adventures of Martin Chuzzlewit. Here the benighted countryside serves as a metaphor for a particularly American hell. Chuzzlewit has traveled to the United States to seek his fortune, accompanied by his always cheerful servant, Mark Tapley.
After meeting assorted ridiculous Americans, prone to braggadocio and bombast, the two men are conned into buying land in the thriving community of Eden, vaguely located somewhere downriver from the community wherein they are lodging. It is touted as “an awful lovely place, sure-ly. And frightful wholesome, likewise!” (p. 348). They hear some unsettling comments, though. One man, after carrying on about the danger of snakes on the frontier, denies that mosquitoes are a significant problem, saying that “there air some catawampous chawers in the small way, too, as graze upon a human being pretty strong; but don’t mind them—they’re company” (p. …). Another tells Tapley, just as he is running to catch the boat, that “nobody as goes to Eden ever comes back a-live!” (p. 371).

Chuzzlewit and Tapley find in Eden a “dismal swamp,” full of “noxious vapour” and “pestilential air.” Upon their arrival at the primitive river landing, a man approaches them. “As he drew nearer, they observed that he was pale and worn, and that his anxious eyes were deeply sunken in his head.” The man explains that “I’ve had the fever very bad, . . . I haven’t stood upright these many weeks.” When Chuzzlewit and Tapley inquire whether anyone could help them with their baggage, the man replies that his eldest son would help if he could, “but today he has the chill upon him, and is lying wrapped up in the blankets.” As to the rest of his family, well, “[m]y youngest died last week.” He has buried most of his family and friends, except those who have fled. “Them that we have here, don’t come out at night.” Tapley inquires of him, “The night air an’t quite wholesome, I suppose.” The settler answers, “It’s deadly poison” (pp. 375–76).

Tapley, as usual, endeavors to put a positive spin on the situation. He sets up their cabin as comfortably as possible, then walks down to the riverfront to draw water.
Around him, “[a] fetid vapor, hot and sickening as the breath of an oven, rose up from the earth, and hung on everything around; and as his foot-prints sunk into the marshy ground, a black ooze started forth to blot them out” (p. 378). As Tapley makes the acquaintance of the neighborhood, he finds that all are sickly, and many have lost family members and friends. There is “an air of great despondency and little hope on everything” (p. 515). In his cheerful way, Tapley passes it off as seasoning, for “we must all be seasoned, one way or the other. That’s religion, that is, you know” (p. 380).

Chuzzlewit quickly falls ill. “He shook and shivered horribly; not as people do from cold, but in a frightful kind of spasm or convulsion, that racked his whole body.” Tapley goes to a neighbor for help, who “pronounced his disease an aggravated kind of fever, accompanied with ague; which was very common in those parts, and which he predicted would be worse to-morrow, and for many more to-morrows.” Opening a trunk in his own sparse cabin, the friend brings forth a medicine that has been of some help in his own fever bouts (p. 517). After several weeks Chuzzlewit recovers, but Tapley falls ill. In the structure of the novel, this seasoning time makes Chuzzlewit a less selfish young man and serves as a turning point for his fortunes. The two escape from the mires of Illinois and make it, happily, back to the civilized land of England.

There is little subtlety in Dickens’s narrative. He damns a whole country, occupied in the East by fools, in the South by evil slaveholders, and in the West by people enfeebled by the very environment that he hears so often praised. Dickens went no farther south than Richmond, so he had no direct experience of the climate there. Frederick Law Olmsted of New York did, however, and his descriptions of the desolation frequently found throughout the southern states in the 1850s echo Dickens’s image of Illinois.
The people are boastful, lazy, and ignorant; the forms of travel are hideously uncomfortable; the food is awful. Olmsted’s sojourn followed Dickens’s by ten years, and the former had clearly perused the latter’s work, for Tapley is mentioned in his narrative. What is striking, though, in comparing the two accounts, is how similar the South and the West sometimes sound—even the southern part of the East Coast, in Virginia and North Carolina. The lack of civilization as defined by both authors, the presence of squalid living conditions, and the apathetic enervation draw the two regions together. And malaria is a defining feature of both.6

Mark Twain likewise remembered a boyhood on the Mississippi River with malaria a common visitor. He wrote fondly of a swimming hole where he spent many hours cavorting, but qualified his own version of Eden: “Bear Creek . . . was a famous breeder of chills and fever in its day. I remember one summer when everybody in town had this disease at once. Many chimneys were shaken down, and all the houses were so racked that the town had to be rebuilt.” With the hyperbole that typified his later writings, Twain went on to claim that the shaking was so bad that the landscape was altered: “The chasm or gorge between Lover’s Leap and the hill west of it is supposed by scientists to have been caused by glacial action. This is a mistake.”7

Less famous observers of the North American frontier repeatedly echoed these descriptions of widespread, debilitating malaria that darkened the frontier experience. One mid-nineteenth-century jingle about Michigan advertised its charms: “Don’t go to Michigan, that land of ills; The word means ague, fever and chills.” Nearby Ontario was similarly plagued, from the late eighteenth century into the 1870s.
Malaria thrived near the rivers and lakes that formed the region’s crucial transportation routes.9 All along the Mississippi and Missouri valleys, as well as up and down the West Coast, malaria was reported as arriving shortly after the first pioneers built houses and began to clear land for farming.10 It was, as Tapley said, evidently a necessary part of the seasoning, part of the transition from wild to civilized.

How accurate were these portrayals of the nineteenth-century frontier? Dickens’s observations were based on a brief trip, but a physician without any inherent antipathy to the country, after a thorough investigation, arrived at similar conclusions for the majority of the North American interior. Daniel Drake, the preeminent physician and medical educator of antebellum Ohio, studied the diseases of the area between the Appalachians and the Rocky Mountains in depth and published his findings in 1850.11 Drake gave prominent place to what he termed autumnal fever, a complaint known variously as “bilious, intermittent, remittent, congestive, miasmatic, malarial, marsh, malignant, chill-fever, ague, fever and ague, dumb ague, and lastly the Fever.”12 From his descriptions it is clear that these terms encompassed vivax malaria, falciparum malaria, and probably typhoid and a host of other fevers as well. The mapping of the modern diagnostic term malaria onto Drake’s label of autumnal fever remains inexact, but certainly there was significant overlap. Most probably falciparum malaria was part of this “autumnal fever” complex, as it generally occurred in the fall; vivax would have been less prominently associated with the fall months but could also occur then as well.

Drake, like Dickens, saw in the swampy lowlands near the region’s great rivers the primary breeding grounds of autumnal fever.
He blamed the frequent inundations caused by seasonal flooding for much of this problem, as well as the presence of multiple natural and artificial lakes and ponds. Drake’s book had a long section exploring the causes of his autumnal fever, and he discussed the idea that bad air, or malaria, arising from putrefying animal or vegetable matter, as distinguished from airborne animalcules, spread the disease. Drake clearly defined malaria as a cause of autumnal fever, not as a synonym for it; it was “the poison that produces autumnal fever,” acting in conjunction with heat and moisture.13 Not until much later in the century did the term for the cause of this fever become the name of the disease itself.

Drake found his autumnal fever to be widely distributed in the Old Northwest, as well as in the more tropical regions of the South. In the more northern areas, the fever was much more likely to be a benign or simple intermittent, while farther south the mortality was higher, since the more malignant versions prevailed. Drake did not accord specificity to the various forms of autumnal fever; rather he saw them merging and transforming according to complex local conditions. Still, his account does support the likelihood that falciparum malaria was rare in the Ohio and upper Mississippi valleys, while vivax was common. His description of 3 to 5 percent mortality from the benign autumnal fever accords with twentieth-century observations of vivax malaria. It is here that Dickens was most off the mark, for he exaggerated the southern Illinois settlement’s mortality rate.

Drake recorded other facets of frontier history that agree with modern knowledge of malaria. First, mosquitoes existed in large numbers on the frontier. One early French explorer of the lower Mississippi recorded in his journal that the “musketoes” made rest impossible and life miserable. “One is perfectly eaten and devoured.
They get into the mouth, the nostrils, and the ears; the face, the hands, the body are all covered.”14 This plague of insects extended far north, into the icy Canadian wilderness. Dickens had a similar experience: on one occasion while meeting a dignitary in St. Louis, he commented that the fellow did not seem too impressed to meet the great writer, perhaps because of his casual clothing, “and my face and nose profusely ornamented with the stings of mosquitoes and bites of bugs.”15 Hence the mosquitoes were there, presumably including anopheles species, although observers made no such fine distinction.

But the generation of “autumnal fevers” required more than mosquitoes and people; the plasmodium had to be present as well. One would expect that initially the river settlements would be healthy, and that only over time would malaria appear. This is in fact what Drake described. The experience of a group settling near Peoria, Illinois, was typical. At first, “a number of families had settled (as is common) on the margin of a large prairie, and remained healthy in autumn.” Then more people came and increased the amount of plowing around their cabins. By the second fall they “suffered severely in autumn from fever.”16 Another frontiersman living near Springfield told Drake that he had “resided where [Drake] found him three years, before a member of his family was seized with that fever.” Drake was puzzled: “Such instances are not uncommon, though difficult to explain.”17 While contemporary opinion held that perhaps this resulted from turning up the soil and exposing poisonous sources of vapor, it is clear in retrospect that time was needed for the critical conjunction of people, parasites, and mosquitoes to converge on a given spot.

Drake’s observations were echoed in the writings of frontier explorers and settlers.
Historian Erwin Ackerknecht, whose study on malaria in the Old Northwest deserves its place as a classic in American medical historiography, provides abundant evidence of malaria and mosquitoes in the writings of settlers in such unlikely sites as Wisconsin, Minnesota, and Iowa.18 More recently, Conevery Bolton has described the pervasive effect of malaria on life in Arkansas and Missouri in the first half of the nineteenth century.19 Again and again lands that initially seemed wholesome became laden with periodic fevers, exhausting human capital. Malaria was added to the many dangers of the frontier—Indians, starvation, lawlessness, and rattlesnakes—to build an image of wildness and peril. But as civilization moved in, with its tighter houses, better-drained towns and fields, abundant food, and access to quinine, malaria receded. Everywhere, that is, except in the South, which in many ways retained a primitive, frontierlike culture well into the twentieth century.

Still, on the eve of the Civil War, physicians in Indiana, Ohio, and Illinois were just as familiar with the ravages of malarial fevers as were their colleagues in the southern and southwestern states. All were aware that quinine sulfate, first manufactured in the 1820s, was efficacious in cases of intermittent and remittent fever, although they continued to employ other remedies to “ready the system” for quinine, such as bloodletting, emetics, and purgatives.20 In fact, southern physicians continued to administer the latter two treatments for malaria into the 1940s. So quinine was not seen as a specific for intermittent fevers—it was used for other fevers and as one of several remedies for intermittents—but we can at least say in retrospect that Civil War physicians had one tool in their armamentarium that did actively alleviate their patients’ suffering.
The medicine that Tapley’s Eden friend pulled out of his trunk was probably quinine, in the form of a patent remedy called Sappington’s Anti-Fever Pills. John Sappington was a rural Missouri physician who had heard about the isolation of quinine from cinchona bark early in the 1820s. He rode off to Philadelphia, acquired a large supply, brought it back to Missouri, and began manufacturing a legendary product that may well have facilitated the growth of the American Midwest. Sappington’s pills contained quinine, and he recommended 5 grains a day as a preventive and 8 to 16 grains a day for treatment. Over the next thirty years, Sappington sold nearly six million boxes containing twenty-four pills apiece. So quinine, in this and other forms, was widely available and familiar to patients and doctors alike on the eve of the Civil War.21 Other patent remedies followed quickly on Sappington’s heels, both incorporating quinine and offering alternatives to it. Quinine is unpleasant to take, given its bitter taste and unpleasant side effects of tinnitus and nausea. Drugs such as Thermaline (“A carefully prepared combination of the active principles of Calisaya Bark, and a species of the Fever Tree of Australia”), which claimed to offer all of quinine’s benefits with none of its side effects, also sold well.22

Malaria in the Civil War

Not surprisingly, quinine was one of the most frequently used drugs of Civil War medicine. After various dysenteries and diarrhea, malarial fevers were the most common diagnoses in Union camp hospitals. The medical statistics compiled after the war for Union troops listed 1.3 million cases and more than ten thousand deaths from intermittent and remittent fevers.23 Some of this infection happened to New England boys who met the malaria parasite for the first time in the boggy peninsular campaigns or in the battle for Vicksburg.
Other Yankees hailing from the Old Northwest brought their own parasites along, allowing for a rapid spread of disease as the Union troops camped near mosquito breeding grounds throughout the South. Southern troops no doubt suffered as well, but the statistics for their morbidity and mortality are not systematically available.24

Midcentury physicians had the knowledge to reduce the impact of disease on troops, but the application of this knowledge was spotty at best. Contagious diseases like measles and smallpox were recognized as such and were dealt with by quarantine, movement of camp, and, in the case of smallpox, vaccination. That the fecal filth generated by man and beast was connected to the onset of diarrheal diseases was widely acknowledged, even though the suspected source—poisonous vapors arising from piles of rotting manure—was not correct. Nevertheless, the enforcement of more thorough sanitation in camp and hospital, based on available knowledge, would have reduced casualties.25 Finally, midcentury Americans knew how to avoid malarial fevers: stay two or three miles from the source of poisonous vapor, namely wetlands filled with rotting vegetation. Again, the etiology was wrong, but the prophylaxis effective.

This understanding that staying away from swampy areas was a way to avoid intermittent fever was not elite knowledge confined to physicians. A volunteer at Harper’s Ferry in 1861 noted the “hopeless desperation chilling one when engaged in a contest with disease. The unseen malaria has such an advantage in the fight.” His solution? “A week on a high piece of ground three miles from the river would put us all on our feet again.” But he despaired of this simple solution, because the troops were needed to guard the river, not the land three miles away.
So, he concluded, “as long as the morning sun rises only to quicken the fatal exhalations from this pestilential Potomac, and the evening dews fall only to rise again with fever,” his comrades would remain cursed by fever and chills.26

The Union medical corps did try another way of preventing intermittent fever, which was to dose the men with quinine on a daily basis. All together the Union Army consumed almost 600,000 ounces of quinine sulfate and a similar amount of a cinchona extract. Although the doses were too low and irregularly given to do much to prevent malarial infection, they may have alleviated some of the debilitating symptoms. Quinine was dissolved in whiskey, to increase its appeal (and help hide its bitter taste), but most men preferred the whiskey alone. All told, quinine rations were not particularly successful.27

When soldiers returned from the Civil War, they started countless local epidemics of malaria at home, including areas not afflicted for decades. In New England, an epidemic ignited in the Connecticut River valley did not burn out until the 1890s. Not only did Yankee soldiers bring malaria home from the American South, but European immigrants imported malaria from Italy, Greece, and elsewhere as well. There were several outbreaks of malaria in New York City in the last half of the nineteenth century, probably due to the immigrant influence more than to the Civil War.28 Ackerknecht’s work shows that the parasite had a postwar boom in many of the states of the upper Mississippi valley, but by the first decade of the twentieth century, it had retired to the southern states. The exceptions were the southern tip of Illinois (site of Dickens’s Eden), eastern Missouri, and western Kentucky, where malaria remained measurable into the 1930s.29 Malaria also persisted in the central valley of California well into the twentieth century.
Malaria Retreats to the South

Why did malaria largely disappear in the Old Northwest, where it had formerly been a major disease problem? And why did it not leave the South until five decades later? Although malaria had a major presence on the frontier in the mid-nineteenth century, it faded out in that region before measures based on an accurate knowledge of disease transmission could be implemented. In other words, malaria disappeared from the upper Mississippi valley without any active public health campaign against it. Writing in the early 1940s, Ackerknecht observed that malaria continued to thrive in the American South and wondered what critical differences between these regions had determined these outcomes. Ackerknecht was a historian, not a malariologist, but he drew on the works of contemporary specialists such as Mark Boyd, M. A. Barber, L. W. Hackett, and C. C. Bass in his examination of this question.30

Ackerknecht distilled the ideas of these various malariologists—a long list of potential factors—into a concise roster of possible explanations for malaria’s disappearance from the upper Mississippi valley. He concluded that no one of them held the complete answer, but that several components of frontier life changed over the nineteenth century in ways that routed malaria. He was particularly interested in those lifestyle aspects that might be collected under the rubric of “increasing civilization”: improvements in housing and food supply, drainage of the land, and increased access to physicians and medicine. Driven by a prosperous economy, settlers built houses that were more airtight to keep out winter’s cold (and incidentally mosquitoes), drained land to expand their arable acreage for profitable crops (reducing mosquito breeding sites), and built a market economy that provided a steady supply of varied and nutritious foods.
The most prosperous settlers screened their houses to keep out pests, without any direct knowledge that they were thereby avoiding disease. With rising prosperity more people could afford quinine, and the price of quinine also fell over the last decades of the nineteenth century, making it even more accessible. Quinine neither cured nor prevented malaria, but it did enable workers to get out of bed and go back to work, harvesting crops or otherwise maintaining the family’s income. And quinine may have reduced the parasite burden enough to diminish disease transfer from one person to the next, although this effect would have been minor. All together, as the prosperity of the upper Mississippi valley made its environment less hospitable, malaria was less able to thrive.

Ackerknecht also examined whether the growing population of cattle and other livestock might have deterred malaria. Here he drew on the works of Rockefeller researcher Lewis Hackett. Hackett had explored the puzzle of “anophelism without malaria,” a phenomenon in parts of Europe where anopheles mosquitoes feasted on people in abundance, but malaria did not occur. Hackett discovered that certain subspecies of the European malaria vector, Anopheles maculipennis, preferred to take their blood meals from animals rather than humans. Only if no animals were available would the mosquito choose human victims. So if livestock were present, they would divert the mosquitoes.31 Drawing on this relatively new information (Hackett’s book was only published in 1937), Ackerknecht posited that the growing numbers of beef and dairy cattle in the Old Northwest could have diverted American anopheles, reducing the number of human bites and breaking the chain of malaria transmission.32

Ackerknecht was also aware that as railroads were built throughout the area, population moved away from watercourses, the only previous source of rapid transportation.
While he did not dismiss the importance of house location, he might have made more of the voluntary and deliberate nature of such movements. People did not just move toward the railroad; they moved away from areas they perceived as sickly. There was general knowledge that swamps and other stagnant waters were unhealthy, even if the assumed source of danger, swamp air, was falsely charged. Patent medicine advertisements played on this theme of wetlands as a fever source and informed any who were not aware that the night air off still waters could breed malarious fevers. For example, the J. C. Ayer Company of Lowell, Massachusetts, one of the most successful nineteenth-century patent medicine companies, advertised its “ague cure” on advertising cards that read “Malarial disorders . . . owe their origin to a miasmatic poison, which enters the blood through the Lungs, deranges the Liver, and causes the various forms of agues and fevers, and blood-poisoning.” In case the audience was unclear about the origin of the malarial poison, the advertising copy was accompanied by a cartoon illustrating a cabin on the edge of a lake. Live oaks and palm trees, along with an alligator and frogs inset in one corner, make it clear that this is a bayou scene from the southern Gulf Coast. Carolina Tolu Tonic noted malaria’s prevalence in the wet fields of rice cultivation, while Brown’s Iron Bitters claimed, “Malaria[’s] . . . cause is in most cases attributed to local surroundings, impure water and marshy ground.” These advertising messages, all produced between 1870 and 1900, illustrate the popular understanding of malaria fevers as being generated by place and caused by the inhalation of bad air.33 It is not at all inconceivable that the inhabitants of the upper Mississippi valley, inundated with such messages, deliberately moved their houses as far away as possible from dangerous wetlands (and not just toward railroad depots).
"Now, the eradication of malaria from the Upper Mississippi Valley was to a large extent the work of indirect measures undertaken without sanitary intentions: better agricultural methods, cattle breeding, better housing, screening, more prosperity, education, . . . [and] quinine," Ackerknecht summarized.34 The important point here is that while many human actions changed the environment in ways that made malaria less likely to occur, few of those changes were made deliberately for the purpose of controlling malaria. Ackerknecht also stressed that no single effort brought an end to malaria, and that the risk remained (in 1945) that the disease could return to the area. "[I]t may be well to remember," he said in closing, "that malaria in the Upper Mississippi Valley was not killed by a single magic bullet; the monster was only put in chains. . . . Each link of the chain is important, and the breaking of one link may set free again the evil fiend."35 At that time American malariologists were actively fighting malaria in the southern states and straining to prevent its reintroduction to multiple parts of the country by troops returning from disease-ridden war zones. Ackerknecht's struggle to assign proper weight to the different factors of disease causation and disappearance in the nineteenth century held common cause with the malariologists of his day, who were combating malaria in the United States and around the world.

Medical Knowledge in Transition

Knowledge about the dangers of swamps, habitation near them, and night air was widespread. Some physicians were becoming dissatisfied with this explanation of intermittent fevers, however, seeing it as simplistic and incomplete. In the 1840s a transition in Western medical explanation began that prepared the way for the germ theory of disease and ultimately the understanding of the mosquito as a disease vector. Early nineteenth-century European and American physicians commonly believed that fevers were fluid disorders. Although generated by exposure to foul miasms (which could arise, say, from rotting vegetable matter or animal excrement), fevers could be modulated by age, sex, climate, ethnicity, or season of the year. After physicians roughly localized a fever to an organ if possible (lungs, gut, joints, skin, brain), they classified it as mild or malignant, continuous or intermittent. Thus one disease could turn into another, as when a pneumonia turned into tuberculosis, or a mild dysentery "went into" cholera. This confusion was never absolute—smallpox was easily recognized as a distinct disease, for example—but the lack of specificity in categorizing infectious diseases limited understanding and research into cause and cure.36

[Figure: Ayer's Ague Cure. The Spanish moss, palm trees, flamingo, and alligator on this nineteenth-century advertising card all point to a Gulf Coast location, although for much of the century malaria was a problem of the temperate United States. Claiming to be quinine-free promised avoidance of quinine's unpleasant side effects, while the inclusion of typhoid recalls the common contemporary confusion of the two diseases. (Advertising card in the author's possession)]

This began to change by midcentury. After Asiatic cholera swept repeatedly over Europe and the United States between 1832 and 1865, almost all physicians recognized its identity as a separate and distinctively lethal disease. Medical researchers in the hospitals of Paris, who followed patients from clinic to autopsy, began to recognize distinctive pathological signs for such diseases as typhoid fever and typhus fever, and were able to correlate their discoveries with symptoms. German pathologist Rudolf Virchow used newly powerful microscopes to demonstrate the footprints of disease at the cellular level, driving an awareness that precise classification was possible.
All told, physicians moved away from seeing diseases as fluid entities malleable by place and circumstance toward recognizing that there could be fixed disease descriptions, based on symptoms, physical signs, and pathological anatomy.37

This tendency toward greater specificity in disease classification led to a concomitant drive toward greater specificity in disease etiology. As long as physicians believed that local circumstances could transmogrify a fever into pneumonia or smallpox or cholera, it did not much matter what exactly set it off, and a vaguely defined miasm would suffice as a cause. But if typhoid fever could be distinguished from cholera and was a disease sui generis, then it needed its own distinctive etiology, separate from that of cholera. Simply blaming miasms arising from foul watercourses was not enough. What exactly was in that bad air that made people sick? One Charleston physician despaired in 1849, "The precise nature and composition of the noxious exhalations called technically miasma or malaria have never been discovered by the most skilful chemists."38 Researchers looked for a specific component of the malodorous air that would be consistent with the overabundance of heat, moisture, and rotting substances that seemed to characterize miasms. Some explored the morbid qualities of hydrogen sulfide; others targeted heat and moisture themselves as causing disease. A few physicians speculated that suspensions of microorganisms, especially moisture-loving fungi, explained the matter. For example, John K. Mitchell of Cincinnati argued that different species of fungal spores caused, specifically, yellow fever, intermittent fever, and cholera.
On the other hand, Josiah C. Nott of Mobile published the theory, considered prescient by some, that tiny winged insects or animalcules floated in the miasmatic mists, each specific for a certain infectious disease.39

This discussion emerged largely in a climate of concern about cholera and yellow fever, two devastating epidemic fevers that demanded attention and explanation at midcentury. The arguments about their specific causes were tied directly to controversies about contagion. If cholera had a specific cause, then why was it present in some years (i.e., 1832 and 1848) but absent for decades at a time? Surely there was plenty of filth in the streets and plenty of miasms in the air in the meantime. The same argument applied to yellow fever: why was it sometimes present in filthy southern cities and not in others? One proposed answer was that the germ or animalcule or fungus was transported into the community, then thrived and reproduced in appropriately foul air.

Arguments about contagionism and transportability, however, were never relevant to malaria. Malaria tended to occur in the same places, year after year. No one even suggested that it might be contagious; it was too clearly tied to place, not person. By 1878 one Savannah physician could write with confidence, "It is pretty generally admitted, by medical scientists, both in America and Europe, that malarial or paludal fevers are produced by plants, or spores of plants, growing in marshes, stagnant water or elsewhere." He went on to admit that "it would be impossible to say, whether those [plants] producing malarial fever belong invariably to the fungi order."40 Leaving aside his confusion about fungi being plants, this physician clearly expressed the typical view of his day that malaria was tied to marshes, and that something that was growing in the marshes and that could be inhaled probably caused the disease.
In sorting out specific disease entities and their etiologies, intermittent fever was frequently compared to yellow fever. In particular, falciparum malaria was compared to yellow fever, since the two diseases shared a geographical and seasonal locale. Now known to be tied to the habitation and life cycle of mosquitoes, the occurrence of these two diseases from midsummer to first frost in areas of the hot humid South attracted questions about what they had in common. First to be sorted out was the issue of whether they were in fact separate diseases. Articles appeared comparing and contrasting the various symptoms and signs of the two diseases.41 This distinction became of critical importance when yellow fever first appeared in a city. By the 1870s most physicians, government officials, and the lay public believed that yellow fever was contagious and could be stopped by quarantine. If it should arrive in one location, the best response was to use quarantine to contain it there. Naturally, quarantine harmed the commercial interests of a city, since it stopped trade, so those interests put pressure on local physicians not to reveal the presence of yellow fever. One common alternative diagnosis was malaria, as in "It's only a severe case of malaria. Nothing to be alarmed about. Not yellow fever at all." Multiple disputes over such sentinel diagnoses divided the public health officials of southern communities, especially those marked by commercial rivalries.42

During the 1870s a "new" disease appeared in the southern medical literature: hemorrhagic malarial fever. This form of malaria included a strong bleeding tendency, in which the patient produced bloody urine as well as black vomit, a cardinal symptom of yellow fever. When the esophagus or stomach lining bleeds, the resulting blood is turned black by gastric acid, resembling coffee grounds when vomited.
Any disease process that causes upper gastrointestinal bleeding (like peptic ulcers and esophageal varices) can cause black vomit, but in the setting of a high fever and jaundice during a hot New Orleans summer in the nineteenth century, yellow fever had priority in the differential diagnosis. As a result, the claim that certain forms of malaria could become hemorrhagic became critical to public health debate. Leading the discussion was J. C. Faget, a New Orleans physician, who said that hemorrhagic intermittent fever had appeared for the first time during the 1853 yellow fever epidemic in New Orleans. "We saw the children, creole as well as stranger, colored as well as white, attacked epidemically with a fever, during which the black vomit was quite frequent." Faget concluded the disease could not be yellow fever, because it responded to quinine and was not deadly, whereas yellow fever, having reached that stage, would have behaved just the opposite.43 While Faget garnered considerable support from other physicians who claimed to have seen his severe type of malarial fevers, in retrospect it appears that his critics were right that he was describing variants of yellow fever.44 Some severe forms of malaria cause hemorrhage; another contender for this phenomenon is severe dengue. But this "hemorrhagic malaria" seems to have been too consistently tied to yellow fever epidemics to be a distinct etiology. In any event, the disease appears to have disappeared from the South along with yellow fever.

Another disease that emerged after the Civil War to confuse research on intermittent fever was typhomalaria. First described among Civil War soldiers by American physician J. J. Woodward in 1863, typhomalaria was distinguished by fever, extreme fatigue, headache, splenomegaly, and diarrhea. Woodward believed that typhomalaria was a specific disease generated by two separate causes—miasms from rotting vegetable matter and miasms from human feces and other animal wastes. Physicians found typhomalaria a useful label, for epidemic fevers often did not sort themselves into neat categories. British army surgeons stationed at Malta during the 1870s took up the designation for a fever raging in their camps that seemed to be linked both to poor sanitary conditions and to nearby malodorous marshes. Although some critics doubted the specificity of the new disease, it managed to survive challenges from microbiology in the 1880s and 1890s, when specific causes of both typhoid fever and malaria were identified. American army researchers put the matter to rest during the Spanish-American War in 1898. Armed with accurate diagnostic tools, they established once and for all the myriad presentations of typhoid fever and attached a relatively tiny role to malaria in characterizing an outbreak of what had been called typhomalarial fever. The men had typhoid fever, and camp sanitation, not quinine, was the necessary solution.45

During the last two decades of the nineteenth century, a great revolution in etiological thought occurred in medicine. By the 1890s the work of Robert Koch, Louis Pasteur, and their students had established that many diseases are caused by microorganisms. The first microbes suspected as disease agents were the fungi, for the action of yeast suggested analogically that such beings could reproduce explosively in human bodies and cause disease. Since certain diseases seemed to be correlated with heat, moisture, and filth, and fungi were observed to grow well in these conditions, the conclusion made sense that what was growing in all that humid putrescence was a fungal organism that caused disease. Not surprisingly, a fungus was "discovered" that caused yellow fever, and a fungus was sought to explain intermittent fever.
This research plan proved a frustrating dead end, even though it correlated with the known distribution of these two diseases.46

During the 1880s bacteria came to the forefront as the principal suspects in infectious diseases. One after the other, the bacterial etiology of tuberculosis, cholera, diphtheria, and pneumonia was demonstrated. At the height of this bacterial excitement, a French physician in Algiers noted odd dark forms inside the blood cells of malaria patients. Could this be the causative organism? But the creature had many different forms, and it resembled debris more than any known bacterial cause, so when Alphonse Laveran proposed it as the cause of malaria, he was at first met with sneers of ridicule from the great modern minds of medicine. Koch doubted his findings, and the great Sir William Osler declared that he was seeing only detritus. Still Laveran labored away, trying to find his crescent-shaped beings in the soil and swamps of Algiers, with no success.47 Others thought they had found the causative organism of malaria, and that it was a bacillus. Edwin Klebs, the codiscoverer of the diphtheria bacillus, and Corrado Tommasi-Crudeli claimed to have identified the germ in postmortem malaria cases, and they inoculated animals with it, creating new cases of malaria. Eventually their ideas faded, and by the mid-1890s most microbiologists accepted Laveran's organism as the cause of malaria. But like Laveran, they were stymied in trying to find its place in nature. Where did it live, and how did it get inside human bodies?48

A change in language coincided with this insight into malaria's etiology. Before the 1880s the word malaria had been used to refer to an aeriform poison, but it now came to mean the disease itself. One eminent New Jersey physician laid out this transition in 1886. "The word Malaria may be said to be in everybody's mouth," he began.
"To the unprofessional mind, it means chill, aching bones,—creeping rigors,—and all, or either of the symptoms which announce the advent of fever," whatever its periodicity. Even physicians had used the term "in a vague and limitless meaning." He questioned whether "there is another word in the nomenclature of medicine, that is employed in such a careless and indeterminate sense as this single word,—Malaria." He summed up the confusion with the revealing statement, "It passes current in professional circles, as denoting an invisible, intangible, undefined, and undiscovered cause for a variety of conditions, which are recognized as malarial, and which are diagnosed as malaria."49 In the 1890s, however, as Laveran's germ gained in recognition, the old use of the term faded. No longer was malaria a poison but a disease caused by these dark, odd-shaped forms.

Some held tenaciously to the swamp poison concept, for how else could one explain the near-constant association of malarial fevers with low, wet, hot places? The strength of this correlation made acceptance of the mosquito theory overwhelmingly rapid. It was an amazing answer—Laveran's organism could be conveyed from person to person only by means of this winged pest, and that pest resided mainly in the vicinity of just those low, wet, hot places. Patrick Manson of England had suspected that mosquitoes explained malaria, after he had established that the tropical worm of filariasis was also carried by mosquitoes. He urged Ronald Ross, a student of his stationed in India, to find the means to establish the mosquito vector of malaria. In 1897 Ross did so, in birds. Italian researchers, furious at being scooped by this British colonial physician, rushed to prove that the anopheles mosquito was the vector for human malaria as well.
The fact that their announcement in 1898 did not achieve the world acclaim (or Nobel Prize) that Ross's discovery inspired created a national antagonism in malaria research that persists, at least in humor, between English and Italian malariologists today.50

Robert Koch provided a further piece of the puzzle: the role of the malaria carrier and early concepts of malaria immunities. Koch studied malaria in the German colonies, first in East Africa and later in New Guinea. He paid little attention to traditional concepts of race and instead divided the world into two categories: the civilized German and the Eingeborene, the natives of a place. Thus he saw the black African and the light brown New Guinea native in the same light—they were the native inhabitants of a malarious region, and as such a threat to the German colonists.51

Koch believed that malaria immunity was not genetic but primarily acquired. In communities with high malaria endemicity, he discovered, it was principally the children who became ill and who served to keep the malaria chain intact. The highest number of malarial illnesses was found in children, as well as the most deaths. Children had the greatest density of parasites in their blood. Adults, on the other hand, were rarely ill with malaria, even though parasites could be found in them. They were infected but not sick. Koch thereby concluded that the adults had acquired immunity by surviving the childhood disease experience. Nothing in his model suggested the concept of racial immunity; indeed, his very multiracial exposure may have argued against it, for who could say that the two native populations he studied were of the same race, in some sort of contradistinction to his own? Being an Eingeborener mattered; growing up in a malarious place mattered; race did not particularly matter.
Koch's work on malaria carriers dominated malariology in the United States and was firmly associated with his goal of eradicating carriers via the use of quinine.52 A textbook about diseases of the American South, definitive for its time, illustrates the impact of Koch's work. After citing his New Guinea data, the authors agreed that immunity was acquired, not racially determined. They concluded that "the resistance of the black race to malaria is due to repeated attacks in early childhood, and not to any great extent to heredity."53 Emphasis on acquired immunity highlighted the existence of adult carriers, while downplaying the race issue. Anyone could become an adult tolerant of malaria if they grew up in the right environment, so the link between the carrier state and race was much weaker than might be expected.

By 1900 the foundation was complete. Scientists knew the plasmodia by name and activity, the role of the anopheles mosquito, and how to fight that mosquito menace. This last part of the structure came from the work of L. O. Howard, America's great entomologist. Howard did experiments on the destruction of mosquito larvae, demonstrating that putting a thin layer of oil on top of their watery incubation sites would smother fledgling mosquitoes. After studying its effect on fish and other wildlife, he pronounced the oiling method safe. So there were now two basic tools for attacking mosquitoes—draining wetlands, and coating bodies of water with oil. In addition to these rudimentary techniques, Howard pointed out the usefulness of installing fine mesh screens in houses to shut out mosquitoes, burning mosquitocidal chemicals such as pyrethrum, and physically hand-killing adult mosquitoes.54

It was the yellow fever mosquito, whose viciousness was established by Walter Reed and his research colleagues in 1900, that first attracted the attention of public health officials.
Campaigns in Cuba and New Orleans targeted the Aedes aegypti mosquito with the methods listed above, although the strategies had to be adapted to the preference of this mosquito for freshwater containers in urban settings. The most famous of such campaigns was carried out in the Panama Canal Zone, where American physician William Crawford Gorgas controlled both yellow fever and malaria, allowing the canal at long last to be built. Gorgas fought malaria with two weapons, quinine and antimosquito measures, thereby reducing its incidence by 80 to 90 percent. Knowledge of the mosquito vector justified older methods of malaria control (quinine, drainage, location away from swamps) and guided new actions, such as oiling wetlands to kill mosquito larvae. Public health officials throughout the United States were impressed by the campaign's success. Gorgas had shown that it was possible to conquer two great diseases.

But could it be done in the American South, which was now characterized as diseased, slothful, and mired in poverty?55 Yellow fever, the terrifying disease that had stimulated so much federal and local public health activity targeting the South, proved surprisingly easy to eliminate using the new methods. After the epidemic in New Orleans in 1905 was brought under control by antimosquito measures, yellow fever never returned to the North American continent. Malaria, though, was not to be expelled until the 1940s. Why should the less malignant disease be so much harder to destroy? What was it about the South—its people, its topography, its political will—that made malaria such a persistent pestilence? The disease had once been a problem throughout the United States. Now, with the exception of remote California, it was an exclusively southern blight. It had once defined the very nature of the frontier; now it indicted the state of southern civilization.
As the twentieth century dawned, the tools were in place for the deliberate eradication of malaria. The chapters that follow will explore why, in spite of such available technology, it took another half century for the last indigenous pockets of malaria to vanish from the United States.

Race, Poverty, and Place

At the turn of the twentieth century, malaria was in retreat everywhere in the United States except in the South. Analyzing this phenomenon requires leaving the dynamic, feverish communities of the nineteenth-century frontier for the sharecroppers' torpid life on the South's plantations. From 1900 to 1950, the southern economy changed slowly, maintaining a population in the greatest depths of poverty, a population dragged down by malaria, tuberculosis, syphilis, and hookworm. Local and federal public health officials implemented programs for combating these diseases, with varying degrees of success. The region was America's economic embarrassment, and its multiple diseases a blemish on the national escutcheon. Before we can answer the question "Why did malaria ultimately disappear from the United States?" we must ponder why it persisted so long in the states of the old Confederacy.

My own approach to malaria's twentieth-century American career is to argue that malaria has to be understood within a web of socioeconomic as well as biological influences. This topic is part of a broader debate on the causes of malaria and the best way to control it. Stating that malaria is caused by poverty, for example, implies that social welfare programs that improve socioeconomic status will also depress malaria rates. That experiment had been tried in Italy, without success.1 A caution is in order, however. It is tempting to see socioeconomic explanations as somehow more moral than ones based on, for example, insect behavior.
Poverty is evil and should be condemned, goes this line of thought; any diversion of such condemnation should be seen as abandoning the cause of social reform and improvement. Yet it is a bit fatuous to expect that every component of disease ecology will be traceable to racial and class discrimination. The mosquito's behavior may be just as relevant. So may the use of insecticides, which are easy to condemn in this post-Silent Spring era but clearly have their place in this story. In understanding malaria's disappearance from the South in the 1940s, one has to consider many factors, and evaluations of their importance should be based solely on their degree of impact on the parasite's biography.

Malaria's Ecology

Exploring the ecology of the parasite and its host vector is a good place to start in understanding when and why the parasite thrived in the American South. Several strains of anopheles mosquitoes are found in the continental United States, including A. quadrimaculatus, A. punctipennis, A. crucians, and A. maculipennis. (A. maculipennis can be broken down into subspecies, and some authors put A. quadrimaculatus as one of them; such distinctions do not materially change the analysis that follows.) A. crucians and A. punctipennis are southern anophelines capable of transmitting malaria under laboratory conditions but were rarely significant vectors in nature. Both strongly prefer animals as the source of their blood meals, which likely explains this anomaly. A. maculipennis is the mosquito of the northern tier of states, the Great Plains, the mountains, and the West Coast. A. quadrimaculatus is the vector of the South and lower Midwest and was the most important actor in malaria transmission there in the twentieth century. A. maculipennis is easily diverted to animals, and Ackerknecht was probably right that such diversion was important in the Old Northwest.2

As the only significant malaria vector in the American South, A.
quadrimaculatus, with its peculiarities and habits, becomes central to our story. In the 1920s and 1930s, once Lewis Hackett and his colleagues had shown that diversion of mosquitoes from humans to animals could significantly lower malaria rates, malariologists in the United States asked whether A. quadrimaculatus likewise preferred animal blood to that of humans. If it did, then "zooprophylaxis"—locating animals near humans—should work. Hackett pointed out that such a method argued for a pig under every bed rather than a net over it (leaving aside the question of a chicken in every pot).3 Research on this issue was both ingenious and tedious. Following a method devised by J. B. Rice and M. A. Barber, mosquitoes were trapped from a location near animals, such as the area under a farmhouse located next to stables containing livestock. The insects were dissected to see if they had taken a blood meal. If so, through a precipitin test, the researcher could discover if the blood came from a cow, goat, horse, dog, human, or cat. (Although dogs were occasional bite victims, cat blood almost never showed up.) By varying the testing environment, entomologists could discover the preferential and possible meals of a particular type of mosquito. It turned out that A. quadrimaculatus was indifferent to host species: it would take blood from whatever warm body was nearby, be it mare, cow, or woman.4 Since the South's principal malaria vector showed no strong preference, the increasing livestock population likely had little influence on southern malaria rates.

A. quadrimaculatus has peculiar behavioral characteristics. It breeds only in still water and prefers an alkaline pH. Females will not lay their eggs alongside streams or drainage ditches, unless congestion or drought has created pools where a watercourse formerly flowed.
The larvae are concentrated on the edges of ponds, where there is little wave action, and debris protects the larvae from foraging minnows.5 In the subtropical climate that characterizes most of the South, the mosquito becomes dormant in winter. It hibernates in trees, under houses, and in caves. If a mosquito is carrying malaria plasmodia when she retires for the winter, the organism rarely, if ever, survives long enough within the mosquito for her to be infective come spring.6

Lewis Hackett once observed that it was remarkable that malaria spread in the United States at all. "The most astonishing thing about malaria, considering the chances against its successful transmission in nature, is the appalling amount of it in the world," he wrote.7 Compared to the tropics, the months with temperatures high enough for both mosquito breeding and parasite development within the mosquito host are quite limited. Further, A. quadrimaculatus is not a very efficient malaria transmitter. Transmission depends on the density of carriers, the density of female mosquitoes, and access of the mosquitoes to the carriers. It also depends on gametocyte density, and the ease with which the mosquito takes in the infection. Research has shown that only a tiny percentage of bites result in vector infection. Once infected, a mosquito has to survive for one to two weeks, so that the parasite can mature in her body; only then is it ready to be injected into a new host. For transmission to occur, the mosquito has to take a blood meal from a human within two months. If she does not, the parasite usually withers and dies in the mosquito host. Breaking the chain of transmission should thus be possible without eliminating all A. quadrimaculatus and all malaria carriers. As a U.S. Public Health Service malaria worker said in 1924, "Malaria is an exotic in the United States; it is a tropical disease. It does not belong here and does not flourish in the U.S. except . . .
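The compounding described above, in which an infected mosquito must survive day after day until the parasite matures, can be made concrete with a small sketch. The daily survival rate and incubation period below are illustrative assumptions for the purpose of the arithmetic, not figures from the text:

```python
# Illustrative sketch: why so few infected mosquitoes ever become infective.
# The parasite must mature inside the mosquito for one to two weeks (the
# extrinsic incubation period), so her chance of living long enough is the
# daily survival probability compounded over that whole period.

def survival_to_infectivity(daily_survival: float, incubation_days: int) -> float:
    """Probability a mosquito survives the full extrinsic incubation period."""
    return daily_survival ** incubation_days

# Assume (hypothetically) a 90% chance of surviving each day and a
# 12-day incubation period.
p = survival_to_infectivity(0.90, 12)
print(f"{p:.3f}")  # roughly 0.28: most infected mosquitoes die first
```

Even before counting the small fraction of bites that infect the vector at all, this kind of daily attrition helps explain why breaking just one link in the chain of transmission could be enough.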
under abnormal drainage and subnormal living conditions."8 Malaria's conquest of the United States was far more tenuous, and easier to break down, than it was in more tropical climates.

Certain areas of the South were particularly hospitable to breeding A. quadrimaculatus, featuring the requisite "subnormal living conditions." The most notorious was the delta region of the Mississippi and Yazoo Rivers. Since the river bottoms had been converted to agriculture by the levees built in the late nineteenth century, a once totally swampy area was made partially habitable and yielded incredibly rich cotton-growing soil. Some swamps remained, of course, and the rivers were still prone to flooding in wet years, creating countless ponds as the floodwaters receded.9 Kenneth Maxcy of the Public Health Service described it eloquently in 1923: "The Mississippi Delta . . . is particularly favorable to heavy production of anopheles quadrimaculatus. The flat 'river bottom' land is everywhere traversed by sluggish streams, with dendritic bayou connections forming innumerable cypress and sweet-gum swamps." Within this bayou country were located "great cotton plantations worked by thousands upon thousands of negro families living under conditions of maximum exposure to mosquito bites."10 The land was just dry enough to farm, but wet enough to breed A. quadrimaculatus abundantly.11

The Atlantic coastal plain, made up of the lowlands of the Carolinas, Georgia, northern Florida, and Alabama, formed the eastern malaria belt. There the underlying ground rock was limestone, a soft base that could easily be eroded or undercut by water, creating multiple shallow ponds with an alkaline pH perfect for breeding A. quadrimaculatus. Such "lime sinks" were ubiquitous in the landscape; TVA malariologists working in Alabama found some counties with 150 sink ponds. Other areas in the South had enough water to maintain A. quadrimaculatus as well.
Eastern Texas and Oklahoma in particular had their own environments suitable for breeding—and a significant malaria problem into the 1940s.12

Malaria and the Southern Economy

More than one observer has noted that areas where cotton is grown tend to be the most malarious. Cotton was an intensive hand-labor crop, and most cotton was grown on plantations where sharecroppers were clustered in shanties located on the most marginal land, which was likely to be swampy. Maxcy accurately described the connection between type of farming and malaria. "Conditions are adverse to free transmission of malaria where the farms are large, . . . as in hay farms and stock farms, requiring only a few employees with machinery to cultivate large tracts." On such farms he noticed that "the houses are likely to be far removed from each other and from the breeding places of anophelines." He contrasted this situation to that of the cotton plantation. "Where the type of agriculture is intensive, requiring many hand laborers, as in the raising of cotton, where the homes are close together and located in the rich 'bottom lands' near anopheline breeding places," malaria thrives. He concluded, "In the south there is a striking connection between malaria and the raising of cotton."13

This relationship was caricatured in a 1923 cartoon about the Georgia State Board of Health's efforts against malaria. It shows a white farmer carrying a bale of cotton on his back. Sitting on the bale is a black man, asleep. On his head is a mosquito as big as a terrier. Labeled "The Southern Farmer's Burden," this cartoon was ludicrous in its overt assumption that white southerners were doing all the work, but apt in its awareness of the connection between cotton culture and malaria.14

Malaria was less of a problem on the rice and sugarcane plantations of Louisiana and Arkansas, a fact that intrigued malariologist M. A. Barber.

The Southern Farmer's Burden. 
In this 1923 USPHS cartoon, the white farmer is carrying the dual burdens of the fickle cotton economy and a black labor force enervated by malaria. The cartoon reflects the broader message that malaria blocked development and prosperity, and also the widespread assumption that blacks were disproportionately infected with malaria. (Photographic Collection, Records of the USPHS, Record Group 90, NACP)

Barber explored this area in the mid-1920s and found Anopheles quadrimaculatus in great numbers. Like all rice fields, the ones he studied were crisscrossed with irrigation ditches and frequently submerged altogether. Various shelters amid the rice fields—such as outhouses, stables, unscreened dwellings, and even hollow trees—were blackened by the large number of mosquitoes roosting on them. He found one 12-by-13-inch board with 304 A. quadrimaculatus, while a colleague counted 2,768 in one barrel. So the right anopheles were there, and the people were there, but the malaria was much less substantial than on cotton plantations.

Barber explained this puzzle by pointing to socioeconomic factors. Most of the rice field labor was done by machine, and no large rural population lived near the fields. More houses had screens in rice country, and the standard of living was generally higher. The sugarcane areas had concentrations of labor typical of cotton plantations, but much less malaria. Where cotton areas typically had malaria rates of 8 to 10 percent, the sugarcane plantations had only 0.6 percent. This difference could not be explained by any public health interventions. Barber drew the following conclusion: It would seem that, with even a moderate betterment of social conditions, malaria in the U.S. tends to disappear or become relatively inconsiderable provided such improvement is general. 
Or, to state the proposition in another way, the maintenance of high endemic malaria requires a permanent reservoir of infection such as is furnished by a considerable body of people lacking proper housing, proper food, and adequate medical treatment. Now that pioneer conditions of life have in most parts of the country disappeared or become modified, it is usually a certain type of renter class which provides the necessary reservoir of infection. . . . It would seem that nearly every phase of economic improvement has had some effect on the reduction of malaria.15

In his 1946 autobiography Barber again told this story and still argued that what distinguished rice and sugar plantations was the relative prosperity of their workers. Moreover, in rice and sugar country, black farmhands lived in towns, rather than in "[t]he bedraggled huts so commonly seen along the bayous in cotton country."16

Where people lived was indeed critical to malaria prevalence, and the behavior of A. quadrimaculatus is important for understanding why. Whereas other mosquito species are likely to be found in barns or out in the woods, this mosquito tends to be found in houses. Hence it has easy contact with humans. After feeding, the female A. quadrimaculatus rests on a nearby wall (as opposed to flying outside), which makes her vulnerable to swatting or spraying. The mosquito rarely flies more than a mile from its breeding place, so in malarious areas the mosquitoes are densely clustered within a one-mile radius of ponds, while large swaths of drier land are visited only lightly. Malaria too is intensely local, so that in one county school 50 to 75 percent of students might be infected, while in another, in a higher, drier area of the same county, students might be much more sparsely infected. 
Malaria is not spread over an area like butter on bread but appears more like the currants in a bun, with foci of disease separated by healthy areas.17 So location was everything, and it was tied to economic conditions as well.

Typically in untying the malaria knot, each aspect is connected to another. Such is the case with malaria and environment. Farmers who have adequate capital, producing a valuable crop, will drain as much of their land as possible to maximize the arable acreage. The affluent build their houses on high hills to catch the breezes and avoid mosquitoes; in the summer they send their children to vacation in the mountains to avoid the summer fevers. They can afford screens, quinine, and a doctor's care. Malaria is tied to poverty, but prosperity per se will not prevent or cure malaria. Gold coins laid over the eyes have no therapeutic value. Rather, it is possible and profitable to tease out just what it is that prosperity buys that wards off malaria.

As in most infectious diseases, malnutrition plays some role in immunity and resistance to malaria, but its role is difficult to quantify. The malnourished body may actually be less likely to get malaria, because it offers less sustenance to the invading parasite. The role of malnutrition in malaria has been discussed extensively, and one recent analyst has suggested that while chronic malnutrition is inimical to the parasite, episodes of acute starvation promote malarial deaths.18 An indirect effect may be more important here than the direct impact on immune function during the acquisition of infection. The likelihood that a biting mosquito will pick up the gametocyte form of malaria from human blood—the only form that is ready in the mosquito's gut to continue the parasite life cycle—is correlated with immune adequacy. Malnutrition depresses immune function, increasing the likelihood that a malaria patient will have numerous gametocytes in his or her bloodstream. 
Thus a person's likelihood of getting malaria increases proportionately with the number of diseased bodies in the vicinity who are malnourished. As a corollary, the more such persons there are in one's home, the more chances exist for transmission to occur. The poor live more densely packed than the rich.

Similarly, comorbidities also influence immunity. Being sick with tuberculosis, hookworm, pellagra, or typhoid—all common diseases of the South's poor in the first half of the twentieth century—lowered resistance to malaria and encouraged gametocyte generation. Hookworm was particularly common in malarious areas, and some doctors even claimed that they could not cure a case of malaria until the hookworm was cleared as well.19 Certainly pairing the anemia from hookworm with the anemia from malaria made for a very weak, sickly body. The same was true of tuberculosis, which also caused anemia, weight loss, and chronic disability. A woman worn out from repeated childbirth, whose body was sapped of iron and other nutrients, was an easy target for the malaria plasmodium.

Prosperity bought a decent house; poverty lived where it could. The "decentness" of a house could be defined in several ways. Its distance from swamps and ponds was key, and most people who could chose to live away from neighborhoods swarming with mosquitoes. A decent house had solid walls and floors; the sharecropper worried about rats and snakes entering through holes in the house, much less mosquitoes. Increasingly over the first few decades of the twentieth century, being prosperous meant having screens on windows and doors. Screens became an indicator of cleanliness and of being middle class, though the issue driving screens was exposure less to mosquitoes than to flies. Anxious mothers feared the typhoid and polio carried on flies' feet, and increasingly an insect-free house became a sign of good housekeeping and hygiene. 
As early as 1915, the North Carolina Health Bulletin proclaimed: "Any home that would today be healthy or that has any idea whatever of decency will certainly have screens in the windows and doors of at least the dining room and the kitchen. The whole house should be screened."20 In 1940 another author echoed this theme: "Screening is so much a matter of course in middle and upper-class homes and places of evening assembly in this country that we nowadays take it entirely for granted."21

Home insecticides became another common aspect of sanitary home treatment in the 1930s. Most widely used were kerosene solutions of pyrethrum, a powder made from a chrysanthemum indigenous to Dalmatia. Although pyrethrum powder had been burned for centuries to kill insects, the spray solution first became available for the domestic market after patent litigation in the 1920s released it for general production. The most common brand name was Flit, dispensed out of barrel-shaped sprayers with a plunger. Dr. Seuss illustrated advertisements for Flit, which all carried the punchline "Quick, Henry, the Flit!" The "Flit gun" became a common household implement, at least in middle-class households, and by 1935 pyrethrum imports exceeded 16 million pounds. Pyrethrum is a "knockdown" insecticide: it kills on contact, but its toxicity wanes in a few hours. Still, studies in India in the 1930s showed that regularly killing adult mosquitoes with pyrethrum led to reduced rates of malaria transmission. Malariologist Paul Russell, for one, attributed some of malaria's decline in the United States to the ubiquity of domestic pyrethrum, saying in 1955 that "[s]ince 1930 hardly a house in the formerly malarious area [of the South] has been without its 'flit gun.'"22 This was not likely true for the sharecropper's shack, but by the 1940s home insecticides, like screens, had become a common feature of middle-class homes. 
In a sense this discussion about prosperity as a cause of diminished malaria is a subset of the broader debate over how much weight to attribute to housing, food, and comorbidities, and how much to attribute to medical practice. Did it matter that the affluent had access to doctors, hospitals, and medicines? Quinine was not an effective public health measure, but it did save lives on the individual level. The children of the affluent were not likely to die of malaria, but the infants of the poor might well do so. The poor tended to resort to self-medication with patent medicines such as Grove's Chill Tonic, which promised effective quinine in a palatable form for infants. Yet the amount of quinine in chill tonics was rarely sufficient to make a serious dent in the disease. One physician found a young boy swarming with parasites—yet the boy had consumed three bottles of Grove's Chill Tonic in previous weeks.23 The affluent who were ill with malaria, by contrast, got sufficient quinine, food, and nursing care, were kept in screened rooms, and had access to opioid analgesics to make the pains easier to bear. Since mortality was not the main issue in malaria, the doctor's presence and the ability to buy medicines would have had a greater impact on general health and well-being (although they reduced mortality as well). But did medical care either reduce the transmission of malaria or limit it as a communitywide disease? Without knowing the answer, the impact of medical attendance on malaria rates remains unclear.

Race and Malaria

What impact on malaria's prevalence in the South did the disproportionate percentage of southerners with African-American heritage have? The antebellum argument that slaves were appropriate for work in the South because of their immunity to malaria is well known. But what happened to this argument after the war? Did having a large black population make the South less prone to malaria, or more? 
Between the 1890s and the Great Depression, discussions of the health of black Americans turned on two questions. Were they as a race declining to extinction? Were they a threat to the health of white Americans? The first, if answered positively, lent credence to the argument that blacks were better off under slavery and unfit for free society; the second, if answered positively, raised fears of racial contamination in the home, for blacks toiled in close intimacy with white families, cooking their food, cleaning their houses, and nursing their children. Curiously, while malaria carriers were recognized as a danger, they were not prominent in discussions about the health relationship between black and white Americans.

Quick, Henry, the Flit! One of a series of cartoons drawn by Dr. Seuss for a popular brand of pyrethrum-based insecticide, this ad plays on parental guilt about both mosquitoes and flies, known to be important carriers of the typhoid bacillus. By 1945 enjoying a mosquito- and fly-free home had increasingly become part of the middle-class standard of living. (Advertisement from unknown magazine, 1945, in the author's possession)

The most prominent voice proclaiming the imminent demise of the African race in America was Frederick Hoffman, an analyst for the Prudential Life Insurance Company. Hoffman, born in Germany, claimed in the preface to his 1896 treatise on health and race that he could be impartial on the subject because he had grown up with no inborn prejudices whatever.24 His argument was that since the black population growth rate was only about half of the white, and the black experience of early disease mortality was growing, then the race was doomed to extinction in the United States. 
However self-congratulatory Hoffman may have been about reaching this conclusion, his racism shines through in comments such as, "The whole history of Anglo-Saxon conquest and colonization is one of endless proof of race superiority and race supremacy," as well as his assertion that "the rate of increase in lynching [in the American South] may be accepted as representing fairly the increasing tendency of colored men to commit this most frightful of crimes [rape]."25

Degeneracy was a major theme of fin de siècle sociology. Proponents of eugenics harped on the decline of the American breeding stock due to the influx of supposedly inferior immigrants, not to mention the sort of racial intermingling that was evident in the many lighter-skinned, Anglo-featured blacks in the United States. Fear of degeneracy fueled campaigns to sterilize the mentally deficient, whether their blight was insanity or low intelligence. Such thinking shored up the legitimacy of Jim Crow laws, for certainly everything should be done to keep the races separate to avoid that most dreaded of interactions, miscegenation, which would further weaken the stock.26

Hoffman's statistics also supported racist ideology that portrayed the black as helpless, shiftless, and unable to maintain his or her health and well-being without guidance from white masters. After claiming that blacks were too ignorant to care for their children properly, Hoffman went on to blame the higher mortality rates of African-Americans on another innate racial characteristic. "For the root of the evil," he claimed, "lies in the fact of an immense amount of immorality, which is a race trait, and of which scrofula, syphilis, and even consumption are the inevitable consequences."27

Antebellum defenders of slavery had proclaimed that the black slave was like a child, in need of sustenance and discipline from the more mature white race. 
The infamous census of 1840, which purported to show that free blacks had high rates of insanity (unlike contented, mentally stable slaves), had been grist for this mill. Hoffman's work followed in this tradition of using statistics to demonstrate the inferiority and proper subjugation of black Americans. Certainly many racists were cheered by the thought that black health would so degenerate that the race would disappear.28

Nearly two decades after Hoffman's treatise appeared, however, it was obvious that the black population was not fading obligingly away, but rather was continuing to grow, albeit at a slower pace than whites. Disease rates were markedly higher among blacks, especially for tuberculosis and other respiratory ailments, and so were death rates. But given its compensatory birth rate, the population was not dwindling. Medical pundits, aware of these statistics, turned increasingly to seeing the black population as a direct threat to whites. "The Negro a Menace to the Health of the White Race," proclaimed the title of one typical 1916 article in the Southern Medical Journal.29 Two infections drew most of the attention of this and other medical authors: tuberculosis and venereal disease. Tuberculosis was by then the subject of major public health campaigns, which focused particularly on teaching the patient to avoid infecting others by managing his or her infectious sputum properly. Venereal diseases were known to be a consequence of intimacy, but there was also fear that they could be transmitted through more casual contact. Experts repeatedly warned about the proximity of the races. "We in the far South cannot afford to ignore the problem of the health of the negro," said one white New Orleans physician. 
"Negroes cook our meals, serve us at table, clean our houses, make our beds, launder our clothes, care for our children, in short, live in intimate daily contact with us and our families."30 Another physician noted that the "negro health problem" was one of the "white man's burdens" because "[t]he white race and the black race will continue to live side by side in the South, and whatever injuriously affects the health of one race is deleterious to the other also." He was particularly concerned about venereal disease, because black women were "known" to be so ignorant, indifferent, and immoral. "Many negro women have gonorrhea, and pay little attention to it," he asserted. "This is a very real menace, to our white boys, and through them, after marriage, to our innocent daughters also. For, despite our best efforts, many boys are going to sow wild oats."31

Tuberculosis and venereal disease were at the center of discussion at a 1914 meeting of health officers from the southern states on the subject "The Negro Health Problem." Oscar Dowling, head of the Louisiana State Board of Health, organized the meeting and served as its president. Representatives from every southern state board of health arrived in New Orleans to consider the "health of the Negro," "the most important single element in our problem of sanitary betterment." Much of the discussion concerned the role of poor housing in creating and maintaining disease. Although many no doubt agreed with the South Carolina physician who argued that only lazy, wastrel, and indifferent blacks lived in bad housing, which he said was all that they deserved, other voices called for landlords to take responsibility for the quality of their rental properties. The meeting allowed a group of black physicians to speak, and their message focused on the need for better housing and better health education for black people. 
Throughout this extensive discussion, tuberculosis and venereal diseases remained center stage; malaria was nowhere to be found.32

If black people were seen as such a source of infection, why wasn't the black malaria carrier a concern? The absence of malaria from these discussions demonstrates clearly that in the early twentieth century, the link between malaria and race had been considerably weakened.

The idea that malaria could be spread by carriers, apparently healthy individuals circulating in society, emerged only in the first decade of the twentieth century. Its conception depended on knowledge of the plasmodium and the mosquito vector, and that knowledge was in place as physicians began to design campaigns to control malaria. The earlier notion, that malaria was caused by a miasm emanating from swampy lands, made the potential victim fear a place, not a person. But now the infected individual was an essential part of the malaria chain, which linked victim to victim via the mosquito. Bacteriological studies of other diseases, such as diphtheria and typhoid, had shown that healthy people could spread disease without any outward sign of their infectivity. Typhoid Mary, an Irish cook in New York who shed typhoid bacilli in her stools but appeared robust and well, was the most famous. Researchers also discovered that the infected could be moderately ill but still up and about, serving equally well as transports for organisms. Tuberculosis was particularly notorious in this regard, and fear of the person who coughed on the train or spit on its floor grew accordingly. So malaria researchers were not completely taken by surprise when they found adults at their regular work with parasites abundant in their blood.33

German bacteriologist Robert Koch did much of the research on malaria carriers, as he had done for carriers of typhoid, tuberculosis, and other diseases. 
Koch, as we have seen, tended to divide inhabitants of the tropical communities that he studied into locals and foreigners, and he particularly worried that the locals were carriers and hence dangerous. He was less interested in a person's race than in where he or she had been born and raised. Koch saw almost all immunity in malaria as due to acquisition of protection over time after multiple exposures to the parasite. Race, and the idea of inborn characteristics, were not important to his understanding.34 His outlook was widely accepted by malariologists in the United States.35

Interest in malaria carriers seems to have emerged and then faded with the rise and fall of quinine as a public health measure. This method, also tied to Koch's pronouncements, remained popular in the United States into the 1920s. Louisiana physician C. C. Bass promoted quinine therapy as a method of eradicating the carrier state and preventing relapse as well, frequently mentioning the importance of the carrier.36 But once quinine was abandoned in favor of attacks on the mosquito, the carrier became rare as a topic in malaria control discussions. So while blacks were feared as transmitters of tuberculosis and venereal disease, their danger as malaria sources was not considered in the "Negro Health Problem" literature.

Blacks and Malaria

Perhaps the major reason that blacks' supposed immunity to malaria faded from white consciousness was the fact that in the early twentieth-century South, a great many blacks had malaria. Over and over again mortality statistics showed more blacks than whites dying of malaria, often in ratios as high as two to one. U.S. Public Health Service researcher Kenneth Maxcy, for example, found twice as many cases of splenomegaly (a sign of malaria) among black Mississippi Delta schoolchildren as among white.37 This difference was also borne out in studies that looked at parasite rates between the races. 
Although these statistics may have had various biases, public health officials and the public at large nevertheless clearly saw the black population as more at risk for malaria than the white. Table 1 shows parasite rates found in multiple surveys of black populations, especially black schoolchildren, and, where available, of whites.38 When researchers compared black and white, blacks usually exceeded the parasite rates of whites. At certain sites as much as 42 percent of the black population was infected. Of course such percentages are very much dependent on the population surveyed, but these results at least indicate that public health researchers had no problem finding abundant malaria infestation among southern blacks.

Mortality rates, while less reliable than parasite surveys, also showed the greater prevalence of malaria among blacks than whites. Statistician Cover summarized the data from the annual vital statistics gathered by the federal government from the state boards of health in papers published after World War II. She found that the comparative malaria death rate per hundred thousand people between black and white was at times as divergent as 22.4 to 1.9 (1919-21) and 5.6 to 0.6 (1939-41). The distance between the races remained remarkably constant, even as cases were declining in number for both.39

Discussions of this phenomenon reveal much about prevailing medical attitudes toward race and malaria. Some commentators said that if blacks had previously been somewhat immune to malaria, they had lost that protection now. Others explained that the prevalence of malaria among African-Americans had nothing at all to do with race but merely served as a marker of their socioeconomic status. And as always, questions arose about the validity of the data itself.

Arguments about blacks' loss of immunity drew explicitly on Koch's work. 
Table 1: Malaria Parasite Rates in the South

Observer        Place                             Year     Population Surveyed                            Number             Percent Positive
Mitzmain        Scott, MS                         1915     black sharecroppers surveyed                   1,184              42.0
von Ezdorf      multiple sites in AL, AK, MS, NC  1912-15  black and white surveyed                       13,500             13.3
von Ezdorf      multiple sites in AL, AK, MS, NC  1912-15  blacks / whites                                5,607 / 7,893      20.6 / 8.1
Taylor          Pamlico Co., NC                   1923     black and white general population             not given          B 37.0 / W 32.0
Hass/Derivaux   Lake Village, AK                  1916     black sharecroppers surveyed                   430                16.0
Barber          multiple southern sites           1924     general population                             4,535 pos. smears  B 54.0 / W 46.0
Bass            Bolivar Co., MS                   1916     black sharecroppers surveyed                   31,459             22.2
Underwood       unspecified MS Delta              1925     black sharecroppers surveyed                   72                 17.0
Crillitts       Dougherty Co., Georgia            1930     black and white rural schoolchildren surveyed  334                B 42.9 / W 4.5

Those who maintained that blacks must have lost some protection offered two strains of analysis. The first posited that since all immunity was acquired, the person who was not exposed frequently would lose protection. So if malaria declined in frequency, the continuous infection that had once produced an immune black adult was disrupted, leaving him or her just as susceptible to the disease as whites in colonial South Carolina had been. For example, Julian Herman Lewis argued in 1942 that "[t]here is no evidence that race per se is a factor in the existence or perpetuation of malaria." He admitted that "it has long been thought that the Negro is immune to the disease, the basis for which is the well-known fact that white settlers in parts of Africa where malaria is endemic rapidly succumb to the disease, while the natives flourish." Lewis explained this apparent immunity by noting a curious paradox—Africans were both heavily infected with parasites yet seemingly well. A black person's race did not determine immunity per se, for black children were badly afflicted with the disease. 
Rather, adults acquired immunity through long exposure. If immunity against malaria had declined among blacks in the United States, it was because they no longer had the protective effect of constant infection from infancy on.40

A second argument tied into the general theme of degeneracy. Proponents of this line of thought argued that just as blacks had not had tuberculosis to a significant degree before the Civil War, they also had not had malaria. Whether this was due to the good care they received at the hands of plantation masters or to some biological tendency was not always elaborated. Clearly, however, the black race had degenerated in freedom, they said, and now was sickly—sick with tuberculosis, sick with venereal disease, and sick with malaria. Their very degeneracy as a race set them up for this weakened state of affairs, it was said. Frederick Hoffman expressed this view clearly when he opined that "the tendency of the colored race towards a higher death rate and disease prevalence from malaria, is of comparatively recent origin," and "is in marked contrast to the lesser susceptibility of the white race." He attributed this to a nonspecific decline in "vital resistance" and tied it to the overall degeneracy of the black race in freedom.41

But more sympathetic observers of the high malaria rate in blacks attributed it to their socioeconomic status. Black southerners lived in porous houses on swampy land, which exposed them to many mosquito bites. The housing was crowded, both in terms of the occupancy of each cabin and in the houses' proximity to each other, allowing for easy malaria transmission. Southern blacks were malnourished and overworked, making their general resistance low. And they lacked access to medical care, without money to pay doctors or buy medicine. 
These factors fed upon themselves, for they made not just malaria but other infectious diseases such as tuberculosis more likely, lowering blacks' immunity in general. Malaria was a marker of poverty, and advocates of this point of view believed that affluent southern whites should be ashamed that they allowed African-Americans to exist among them in such dire straits.42

A third response to the statistics was to examine the quality of the data itself. Were blacks more likely than whites to be labeled with malaria as a cause of death? Probably: when a team of public health officials reviewed the malaria death statistics in 1925, they found much reason to doubt the accuracy of malaria as a cause of death in general. More importantly for the current discussion, they discovered racial differences in malaria mortality reports. In the coastal counties of South Carolina, for example, many deaths occurred without the presence of a physician, and when the local registrar came to put down a cause of death, he or she had to guess, based on what the relatives said. Of these unattended deaths, 11 percent were called typhoid, 17 percent tuberculosis, and 50 percent malaria. That was for black people. But only 13.6 percent of the white deaths that went unattended were attributed to malaria. In two South Carolina counties that had the highest death rates from malaria in the United States, only 4 of 95 deaths from malaria had been certified by a physician.43

By and large, though, black Americans most likely suffered from a disproportionate burden of malaria. Their very ability to tolerate the organisms, thanks to both acquired and inherited immunities, meant that the mosquitoes, which swarmed in the crowded sharecropper cabins, had a high rate of infectiousness and continued the malaria chain into the next generation. In spite of having some biological protection against the disease, black southerners undoubtedly suffered mightily from it well into this century. 
Their protection was far from absolute, as was transparent to medical observers.

Why did the very prevalence of malaria among black people not cause whites to fear them as a source of disease? The answers to this are complex and highly speculative. Blacks were not silent carriers, as Typhoid Mary had been, since they so evidently suffered from the disease themselves. For the black adult who was swarming with parasites but working in proximity with whites, the mosquito vector was less commonly available to transmit the parasite between them. Through mosquito control, malaria had been eradicated from most southern towns by the mid-1920s. Townspeople would have had less reason to fear a carrier when the vector, and the disease, appeared so diminished. Malaria was not like tuberculosis, spread by the wayward cough, or venereal disease, transmitted through the open sore. The requirement of the mosquito perhaps sheltered the functioning black worker from appearing to be an immediate source of disease. The black child, more evidently sick with malaria, would due to his or her very childishness be nonthreatening. Finally, the general focus on the mosquito, and inattention to the carrier, would deflect fear from the malaria patient. Whatever the reason, southern blacks acquired the reputation as victims of malaria rather than purveyors of it.

While various commentators thought the prevalence of malaria among southern blacks was worthy of comment, the notion that black people were somehow privileged with immunity to malaria seemed put to rest. Not until the 1930s did the notion of a racial immunity again surface in American medical thought. At that time the notion emerged from an odd source of therapeutic research: the use of malariatherapy for treating neurosyphilis.

Malaria, Syphilis, and Race

A curious chapter in the history of malaria spans the three decades from 1920 to 1950.
Physicians who were treating syphilis had been pleased that Paul Ehrlich's "magic bullet," salvarsan, had shown effectiveness in treating syphilis, even though the course of therapy was long and difficult. One component of syphilis remained stubbornly indifferent to salvarsan therapy, however—the infection of the central nervous system with the syphilis spirochete, called neurosyphilis. In the 1910s physicians began to note the odd fact that symptoms of neurosyphilis, such as dementia and partial paralysis, improved after the patient suffered from another disease that generated high fevers. They built on that observation by trying to design therapy that would intentionally generate a benign but high fever and thus help control the crippling neurosyphilis sequelae. Various methods of inducing fever were tried, including vaccines, infection with the erysipelas germ (a streptococcus), and finally malaria parasites.44

So successful did malariatherapy appear to be for neurosyphilis that in 1927 its designer, a Viennese physician named Julius Wagner von Jauregg, earned the Nobel Prize for his discovery. Soon physicians throughout Europe and the United States were using malariatherapy for their patients with severe neurological side effects from syphilis. These patients included a significant percent of inhabitants of mental hospitals and asylums, providing an ample population for research on the new technique. Physicians considered malariatherapy to be "standard of care" for neurosyphilis until after World War II, when penicillin became widely available for its treatment and slowly replaced the more dangerous and difficult use of live malaria parasites.45

The organism of choice for malariatherapy was Plasmodium vivax. Vivax caused very satisfactorily high fevers, was rarely a risk to life, and could be controlled with quinine.
Ideally, the infection was transferred into the syphilitic via mosquitoes, which helped to filter out other possible impurities from the donor's malarious blood, although at times blood was taken directly from the donor and injected with a hypodermic. This mode of therapy was not without problems. It called for ingredients that might be difficult to find: a donor had to be identified and then paid not to take treatment so that his or her bloodstream could continue to provide parasites. If mosquitoes were used, they had to be bred and maintained in cages to avoid contamination with outside strains of malaria or other diseases. And there was always the danger that the malariatherapy might prove too toxic for the patient.46

These problems are all revealed in the notebooks that Rockefeller researcher and malariologist Mark Boyd kept about his use of malariatherapy at a mental hospital in Florida, along with a further issue. When Boyd and his colleagues tried to infect black syphilis patients with vivax malaria, the results were disappointing. The patient might develop a slight malaise and low-grade fever in response to the injection, but certainly not the high spiking fever that was required for the syphilis therapy to be successful. Boyd tried increasing the dose, but that did not help. He checked to be sure that his mosquitoes were actually "loaded," and that the parasites were present in the blood of the recipient. Everything about the technique was fine—but his patients did not become ill with malaria. He had proven, serendipitously, that many African-Americans were immune to vivax malaria. The explanation, based on the Duffy antigen, did not come until 1975, but the fact was thoroughly established through the research of Boyd and others in the 1930s. Boyd was able to infect his black patients with the much more dangerous Plasmodium falciparum, showing that they were not immune to infection with this parasite.47

It remained for A. C.
Allison, in 1954, to postulate the connection between sickle-cell hemoglobin and partial protection from falciparum malaria. This work had nothing to do with malariatherapy for neurosyphilis, but instead grew out of his puzzlement about the overlapping distribution of high rates of sickle-cell disease with high densities of falciparum infection. He found that in parts of the world where malaria was hyperendemic (i.e., always present and causing high numbers of cases), the prevalence of sickle-cell trait was usually above 20 percent, and sometimes as high as 40 percent. But in areas of sub-Saharan Africa where malaria was sporadic, the sickle-cell trait appeared in less than 10 percent of the population. Allison further found that children with sickle-cell trait had a lower density of malaria parasites in their blood, and a greater chance of surviving to age five. Others later confirmed Allison's initial hypothesis, making the case of malaria and sickle-cell a classic instance of balanced polymorphism, a genetic phenomenon in which the heterozygous state of a particular trait is preferred over both the homozygous state and the absence of the trait.48

Tracing the complex course of southern whites' conceptions about black malaria immunity requires seeing both the underlying biology and the tendency of people to use science to argue the things they already believe. Arguments about blacks' abilities to survive in the tropics and function there as workers certainly had a phenomenological basis. They were biologically more suited to the sort of climate in which they had evolved for centuries and in which malaria had been a companion for generations. Once recognized, however, this fact was used by white southerners to justify slavery, arguing that enslaved blacks were well off as they were and where they were.
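(A technical aside on the balanced polymorphism described above: the standard population-genetics treatment, which is not part of the historical narrative here, assigns a fitness to each of the three genotypes and solves for the equilibrium frequency of the sickle allele. The selection coefficients used in the numerical illustration below are assumed values, not figures from Allison's data.)

```latex
% Genotype fitnesses under heterozygote advantage:
%   AA (normal hemoglobin):  w_{AA} = 1 - s   (excess malaria mortality; s assumed)
%   AS (sickle-cell trait):  w_{AS} = 1       (reference fitness)
%   SS (sickle-cell disease): w_{SS} = 1 - t  (t near 1 without modern medical care)
%
% At equilibrium, the sickle-allele frequency balances the two selection pressures:
\hat{q} = \frac{s}{s + t}, \qquad \hat{p} = 1 - \hat{q},
% and the frequency of trait carriers is the heterozygote frequency 2\hat{p}\hat{q}.
%
% Illustrative values t = 1, s = 0.15 give
\hat{q} = \frac{0.15}{1.15} \approx 0.13, \qquad
2\hat{p}\hat{q} \approx 2(0.87)(0.13) \approx 0.23
```

Under these assumed coefficients the equilibrium carrier frequency comes out near 23 percent, which is consistent with the above-20-percent trait prevalence Allison reported in hyperendemic regions, and is why the sickle-cell case became the textbook example of balanced polymorphism.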
After the Civil War, the pressure for this argument disappeared, and in its place grew an "I told you so" attitude that gloried in the unhealthiness of now-emancipated black southerners. Even as this attitude was replaced by general fears of contamination from tuberculosis and venereal diseases, malaria remained out of the limelight in discussions of race. Malaria might well have been categorized with venereal disease and tuberculosis as threats of contaminating contact, if the personal contagion aspect of the disease had been more pronounced in the public health gospel. But after the interest in quininization and carriers faded with the failure of Bass's public health plan, the mosquito moved to the forefront of the public health malaria campaign. There simply was not much use for the race card in discussions of mosquitoes, and if anything the levels of malaria among blacks provoked discussions of their shoddy housing (leading to mosquito exposure) rather than fears about carriers. The housing issue cast a bad light on white property owners and pricked the communal social conscience; it did not raise concerns about personal infection, particularly after malaria disappeared from southern towns in the 1920s.

When information about racial immunity to malaria emerged in the 1930s, it served no propagandist issue. The focus was on the mosquito, and that preoccupation continued with the DDT campaigns of the 1940s. By the time Allison identified the sickle-cell phenomenon and its relation to malaria in the 1950s, malaria was largely of historical interest in the United States. Knowledge of malaria immunities would be useful for historians such as Philip Curtin, Peter Wood, and Ken Kiple in understanding the New World's history, but it had little relevance to the domestic public health effort in the United States. Malariologist Marshall A.
Barber summed up malaria's racial connections in the history of the United States in a particularly compelling passage of his autobiography:

Negroes have long served as reservoirs of malaria parasites. Although perhaps suffering less from an attack of the disease than do white persons, they are probably quite as easily infected, certainly by the estivo-autumnal form of parasite. As a rule, Negroes are less protected from mosquito bites; and since they are more often neglectful of treatment, they are likely to harbor the parasite for long periods of time. Of course, any neglected people will serve as a reservoir of infectious disease. But with respect to malaria, Negroes are a particularly efficient reservoir or at least have been in the past. Equatorial Africa is stocked with species and strains of malaria parasites in great variety and abundance, and thousands of Negroes must have been loaded with them when the ships of the "middle passage" brought the Negro slaves to the Americas. There they remained an abundant source of infection, a menace to their former masters many years after slavery was abolished. Thus the Dark Continent avenged itself for the theft of its children.49

Altogether, in assessing why the South was peculiar in its holding on to malaria, multiple factors are important. It retained its frontier quality, with poor housing located near swampy land. It contained a large population of African-Americans who supplied a steady reservoir of falciparum malaria parasites. It contained a population that lacked adequate medical care or access to insecticides, and it had an inadequate tax base to support government programs. All of these factors contributed to the persistence of malaria in the South. These social, racial, and geographic aspects proved a mighty hurdle for the poorly funded public health infrastructure of southern states to overcome.
Chapter 4
Making Malaria Control Profitable

The first decades of the twentieth century were a time of great hope and excitement about the possibilities of public health. Diphtheria, the child-killing horror that strangled its victims, had been brought to heel by the curative power of antitoxin and, like typhoid, would soon yield to vaccination. In 1905 the U.S. Public Health Service (USPHS) slew the fearsome dragon of yellow fever, terminating an epidemic in New Orleans by means of mosquito control. New techniques of contagion control and therapy diminished the terrors of the great white plague, tuberculosis. Paul Ehrlich's "magic bullet" for syphilis, salvarsan, raised hopes that other new drugs against infectious disease would soon follow. Progressive muckrakers like Upton Sinclair and Harvey Wiley led a fierce crusade to purify the country's food, drink, and drugs. It was a time of great exploits and public health heroes.1

No American exemplified this trend as well as William Crawford Gorgas, conqueror of Panama Canal fevers. This Alabama-born army surgeon used the discoveries of Ronald Ross and Walter Reed to carry out the first practical demonstrations of disease eradication via antimosquito campaigns. The Panama Canal had first been a French project, but the ravages of yellow fever had driven the French from the field in 1889. Gorgas, as chief sanitary officer of the Panama Canal Commission from 1904 to 1913, used strict antimosquito measures to make digging the canal a possibility. His fame in eliminating yellow fever reached so far that it could even become the subject of a madman's ravings in an American comedy published almost four decades later.2

Gorgas also attacked malaria, and while he did not remove it entirely from the Canal Zone, he did reduce the number of cases by as much as 80 percent. His approach was mainly one of drainage, screening, use of quinine to treat active cases, and the hand killing of adult mosquitoes.
He enlisted children in this last effort, paying them a certain amount per dead mosquito harvested.3

Gorgas's techniques were labor- and cost-intensive. His budget for malaria and yellow fever control far exceeded the amount of money available in the southern states for public health work. If southern public health ...
This note was uploaded on 08/20/2008 for the course HIST 3C taught by Professor Porter during the Winter '07 term at UCLA.
