1920s and 1930s
[Opening art: circulatory system of a human head]
Salving with Science

Introduction
Throughout the 1920s and 1930s, new technologies and new science intersected as physiology led to the discovery of vitamins and to increasing knowledge of hormones and body chemistry. New drugs and new vaccines flowed from developments started in the previous decades. Sulfa drugs became the first of the antibacterial wonder drugs promising broad-spectrum cures. Penicillin was discovered, but its development had to await new technology (and World War II, which hastened it). New instruments such as the ultracentrifuge and refined techniques of X-ray crystallography paralleled the development of virology as a science. Isoelectric precipitation and electrophoresis first became important for drug purification and analysis. Overall, the medical sciences were on a firmer footing than ever before. This was the period in which the Food and Drug Administration (FDA) gained independence as a regulatory agency. And researchers reveled in the expanding knowledge base and their new instruments. They created a fusion of medicine and machines that would ultimately be known as “molecular biology.”

The assault on disease
Just as pharmaceutical chemists sought “magic bullets” for myriad diseases in the first two decades of the century, chemists in the 1920s and 1930s expanded the search for solutions to the bacterial and viral infections that continued to plague humankind. Yet the first great pharmaceutical discovery of the 1920s addressed not an infectious disease but a physiological disorder.

Diabetes mellitus is caused by a malfunction of the pancreas, resulting in the failure of that gland to produce insulin, the hormone that regulates blood sugar. For most of human history, this condition meant certain death. Since the late 19th century, when the connection between diabetes and the pancreas was first determined, scientists had attempted to isolate the essential hormone and inject it into the body to control the disorder. Using dogs as experimental subjects, numerous researchers had tried and failed, but in 1921, Canadian physician Frederick Banting made the necessary breakthrough.

Banting surmised that if he tied off the duct to the pancreas of a living dog and waited until the gland atrophied before removing it, there would be no digestive juices left to dissolve the hormone, which was first called iletin. Beginning in the late spring of 1921, Banting worked on his project at the University of Toronto with his assistant, medical student Charles Best. After many failures, one of the dogs whose pancreas had been tied off showed signs of diabetes. Banting and Best removed the pancreas, ground it up, and dissolved it in a salt solution to create the long-sought extract. They injected the extract into the diabetic dog, and within a few hours the canine’s blood sugar returned to normal. The scientists had created the first effective treatment for diabetes.

At the insistence of John Macleod, a physiologist at the University of Toronto who provided the facilities for Banting’s work, biochemists James Collip and E. C. Noble joined the research team to help purify and standardize the hormone, which was renamed insulin. Collip purified the extract for use in human subjects, and enough successful tests were performed on diabetic patients to show that the disorder’s effects could be reversed.

The Connaught Laboratories in Canada and the Eli Lilly Co. in the United States were awarded the rights to manufacture the drug. Within a few years, enough insulin was being produced to meet the needs of diabetics around the world. Although Banting and Best had discovered the solution to a problem that had troubled humans for millennia, it was Lilly’s technical developments (such as the use of an isoelectric precipitation step) that enabled large-scale collection of raw material, extraction and purification of insulin, and supply of the drug in a form suitable for clinical use. Only after proper bulk production and/or synthesis techniques were established did insulin and many other hormones discovered in the 1920s and 1930s (such as estrogen, the corticosteroids, and testosterone) become useful and readily available to the public. This would continue to be the case with most pharmaceutical breakthroughs throughout the century.

Although the isolation and development of insulin was a critically important pharmaceutical event, diabetes was by no means the greatest killer of the 19th and early 20th centuries. That sordid honor belonged to infectious diseases, especially pneumonia, and scientists in the 1920s and 1930s turned with increasing success to the treatment of some of the most tenacious pestilences. Paul Ehrlich introduced the world to chemotherapy in the early years of the 20th century, and his successful assault on syphilis inspired other chemists to seek “miracle drugs” and “magic bullets.” But what was needed was a drug that could cure general bacterial infections such as pneumonia and septicemia. Bacteriologists began in the 1920s to experiment with dyes that were used to stain bacteria to make them more visible under the microscope, and a breakthrough was achieved in the mid-1930s.

Sulfa drugs and more
The anti-infective breakthrough occurred at Germany’s I. G. Farben, which had hired Gerhard Domagk in the late 1920s to direct its experimental pathology laboratory in a drive to become a world leader in the production of new drugs. Domagk performed a series of experiments on mice infected with streptococcus bacteria. He discovered that some previously successful compounds killed the bacteria in mice but were too toxic to give to humans. In 1935, after years of experimentation, Domagk injected an orange-red azo dye called Prontosil into a group of infected mice. The dye, which was primarily used to color animal fibers, killed the bacteria, and, most importantly, all the mice survived. The first successful use of Prontosil on humans occurred weeks later, when Domagk gave the drug to a desperate doctor treating an infant dying of bacterial infection. The baby lived, but this did not completely convince the scientific community of the drug’s efficacy. Only when 26 women similarly afflicted with life-threatening infections were cured during clinical trials in London in late 1935 did Prontosil become widely known and celebrated for its curative powers.
BIOGRAPHY: An instrumental figure
Warren Weaver, a University of Wisconsin professor and mathematical physicist, was the principal architect of the Rockefeller Foundation’s programs in the natural sciences beginning in 1932. Weaver saw his task as being “to encourage the application of the whole range of scientific tools and techniques, and especially those that had been so superbly developed in the physical sciences, to the problems of living matter.” Indeed, it was Weaver who coined the term “molecular biology.” Although many of the instruments Weaver would use had already been developed for use in the physical sciences, under Weaver’s financial and promotional encouragement, they were applied in novel ways in pharmaceutical chemistry and molecular biology. According to Robert E. Kohler, “For about a decade, there was hardly any major new instrument that Weaver did not have a hand in developing. In addition to the ultracentrifuge and X-ray and electron diffraction, the list includes electrophoresis, spectroscopy, electron microscopy, . . . radioactive isotopes, and particle accelerators.” Weaver’s impact as an administrative promoter of science was as important in his era as that of his “spiritual” successor, Vannevar Bush, science advisor to the President in the post–World War II era.

The active part of Prontosil was a substance called sulfanilamide, so termed by Daniele Bovet of the Pasteur Institute, who determined that Prontosil broke down in the body and that only a fragment of the drug’s molecule worked against an infection. After the discovery of the active ingredient, more than 5000 different “sulfa” drugs were made and tested, although only about 15 ultimately proved to be of value. Domagk was awarded the 1939 Nobel Prize in Physiology or Medicine for this work.

Sulfanilamide was brought to the United States by Perrin H. Long and Eleanor A. Bliss, who used it in clinical applications at Johns Hopkins University in 1936. It was later discovered that the sulfa drugs, or sulfonamides, did not actually kill bacteria outright, like older antiseptics, but halted the growth and multiplication of the bacteria, while the body’s natural defenses did most of the work.

Certainly the most famous antibacterial discovered in the 1920s and 1930s was penicillin—which was found through almost sheer serendipity. In the years after World War I, Alexander Fleming was seeking better antiseptics, and in 1921 he found a substance in mucus that killed bacteria. After further experimentation, he learned that the substance was a protein, which he called lysozyme. Although Fleming never found a way to purify lysozymes or use them to treat infectious diseases, the discovery had implications for his later encounter with penicillin because it demonstrated the existence of substances that are lethal to certain microbes and harmless to human tissue.

Fleming’s major discovery came almost seven years later. While cleaning his laboratory one afternoon, he noticed large yellow colonies of mold overgrowing a culture of staphylococcus bacteria on an agar plate. Fleming realized that something was killing the bacteria, and he proceeded to experiment with juice extracted from the mold by spreading it on agar plates covered with more bacteria. He found that even when the juice was highly diluted, it destroyed the bacteria. Calling the new antiseptic penicillin, after the Penicillium mold (whose name derives from the Latin for “little brush”), Fleming had two assistants purify the mold juice, but he performed no tests on infected animal subjects. He published a paper in 1929 discussing the potential use of penicillin in surgical dressings but went no further. It wasn’t until the 1940s that penicillin was taken up by the medical community.

Another important achievement in antibacterial research occurred in the late 1930s, when Rene Dubos and colleagues at the Rockefeller Institute for Medical Research inaugurated a search for soil microorganisms whose enzymes could destroy lethal bacteria. The hope was that the enzymes could be adapted for use in humans. In 1939, Dubos discovered a substance extracted from a soil bacillus that cured mice infected with pneumococci. He named it tyrothricin, and it is regarded as the first antibiotic to be established as a therapeutic substance. The 1920s and 1930s were also interesting times in malaria research. The popular antimalarial drug chloroquine was not formally recognized in the United States until 1946, but it had been synthesized 12 years before at Germany’s Bayer Laboratories under the name Resochin.

While much of pharmaceutical science concentrated on finding answers to the problems posed by bacterial infections, there was some significant work done on viruses as well. Viruses were identified in the late 19th century by Dutch botanist Martinus Beijerinck, but a virus was not crystallized until 1935, when biochemist Wendell Stanley processed a ton of infected tobacco leaves down to one tablespoon of crystalline powder—tobacco mosaic virus. Unlike bacteria, viruses proved to be highly resistant to assault by chemotherapy, and thus antiviral research during the era did not yield the successes of antibiotic research.

Most clinical research was dedicated to the search for vaccines and prophylactics rather than treatments. Polio was one of the most feared scourges threatening the world’s children in the 1920s and 1930s, but no real breakthroughs came until the late 1940s. Scientifically, the greatest progress in antiviral research was probably made in investigations of the yellow fever virus, which were underwritten by the Rockefeller Foundation in the 1930s. But as in the case of polio, years passed before a vaccine was developed. In the meantime, efforts by the public health services of many nations, including the United States, promoted vaccination as part of the battle against several deadly diseases, from smallpox to typhoid fever. Special efforts were often made to attack the diseases in rural areas, where few doctors were available.

Vitamins and deficiency diseases
Whereas the medical fight against pernicious bacteria and viruses brings to mind a military battle against invading forces, some diseases are caused by internal treachery and metabolic deficiency rather than external assault. This notion is now commonplace, yet in the early 20th century it was hotly contested.

For the first time, scientists of the era isolated and purified “food factors,” or “vitamines,” and understood that the absence of vitamins has various detrimental effects on the body, depending on which ones are in short supply. Vitamins occur in the body in very small concentrations; thus, in these early years, determining which foods contained which vitamins, and analyzing their structure and effects on health, was complex and time-intensive. Ultimately, scientists discerned that vitamins are essential for converting food into energy and are critical to human growth. As a result of this research, by the late 1930s, several vitamins and vitamin mixtures were used for therapeutic purposes.

The isolation of specific vitamins began in earnest in the second decade of the 20th century and continued into the 1920s and 1930s. Experiments in 1916 showed that fat-soluble vitamin A was necessary for normal growth in young rats; in 1919, Harry Steenbock, then an agricultural chemist at the University of Wisconsin, observed that the vitamin A content of vegetables varies with the degree of vegetable pigmentation. It was later determined that vitamin A is derived from the plant pigment carotene. Also in 1919, Edward Mellanby proved that rickets is caused by a dietary deficiency. His research indicated that the deficiency could be overcome—and rickets prevented or cured—by adding certain fats to the diet, particularly cod-liver oil. At first, Mellanby thought that vitamin A was the critical factor, but further experimentation did not support this hypothesis. Three years later, Elmer McCollum and associates at Johns Hopkins University offered clear proof that vitamin A did not prevent rickets and that the antirachitic factor in cod-liver oil was the fat-soluble vitamin D. The research team soon developed a method for estimating the vitamin D content in foods.

Experiments on vitamin D continued into the mid-1920s. The most significant were the projects of Steenbock and Alfred Hess, working in Wisconsin and New York, respectively, who reported that antirachitic potency could be conveyed to some biological materials by exposing them to a mercury-vapor lamp. The substance in food that was activated by ultraviolet radiation was not fat but a compound associated with fat called ergosterol, which is also present in human skin. The scientists surmised that sunlight exerts its antirachitic effect because ultraviolet rays form vitamin D from ergosterol in the skin, from which it passes into the blood. The term vitamin D-1 was applied to the first antirachitic substance to be isolated from irradiated ergosterol, and we now know that there are several forms of vitamin D.

Other important vitamin studies took place in the 1920s and 1930s that had implications for the future of pharmaceutical chemistry. In 1929, Henrik Dam, a Danish biochemist, discovered that chicks fed a fat-free diet developed a bleeding tendency resembling hemophilia. Five years later, Dam and colleagues discovered that if hemp seeds were added to the chicks’ diet, bleeding did not occur. The substance in the seeds that protected against hemorrhage was named vitamin K, for koagulation vitamin. In 1935, Armand Quick and colleagues at Marquette University reported that the bleeding often associated with jaundiced patients was caused by a decrease in the blood coagulation factor prothrombin. This study was complemented by a report by H. R. Butt and E. D. Warner that a combination of bile salts and vitamin K effectively relieved the hemorrhagic tendency in jaundiced patients. Together, this work established that vitamin K is essential for the formation of prothrombin, and thus for blood clotting and the prevention of hemorrhage.

Dam and the Swiss chemist Paul Karrer reported in 1939 that they had prepared pure vitamin K from green leaves. In the same year, Edward Doisy, a biochemist at Saint Louis University, isolated vitamin K from alfalfa, determined its chemical composition, and synthesized it in the laboratory. Vitamin K was now available for treating patients who suffered from blood clotting problems. Dam and Doisy received the 1943 Nobel Prize in Physiology or Medicine for their work.

With the advent of successful research on vitamins came greater commercial exploitation of these substances. In 1933, Tadeus Reichstein synthesized ascorbic acid (vitamin C), making it readily available thereafter. The consumption of vitamins increased in the 1930s, and popular belief held them to be almost magical. Manufacturers, of course, did not hesitate to take advantage of this credulity. There was no informed public regulation of the sale and use of vitamins, and, because some vitamins are dangerous in excess quantities, this had drastic results in isolated cases. Water-soluble vitamins such as vitamin C are readily flushed from the body through the kidneys, but fat-soluble vitamins, such as A, D, and K, are not so easily disposed of and can therefore prove especially dangerous. Many physicians of the era did nothing to discourage popular misconceptions about vitamins, or harbored relatively uncritical beliefs themselves.
SOCIETY: Long before Viagra...
Male sexual dysfunction is certainly not a new condition to be subjected to pharmaceutical intervention. Today, Viagra is a hugely successful “lifestyle drug” used to treat this condition; in the early 20th century, injections of glandular materials and testicular transplants had a heyday.

In the 1920s and 1930s, experimentation culminated in the discovery of testosterone. In 1918, Leo L. Stanley, resident physician of San Quentin State Prison in California, transplanted testicles removed from recently executed prisoners into inmates, some of whom claimed that they recovered sexual potency. In 1920, a lack of human material led to the substitution of boar, deer, goat, and ram testes. In the 1920s, Russian–French surgeon Serge Voronoff made a fortune transplanting monkey glands into aging men. Throughout this period, researchers tested the androgenic effects of substances isolated from large quantities of animal testicles and from human urine. (Adolf Butenandt isolated milligram amounts of androsterone from 15,000 L of policemen’s urine.) Finally, Karoly G. David, Ernst Laqueur, and colleagues isolated crystalline testosterone from testicles and published the results in 1935. Within a few months, groups led by Butenandt and G. Hanisch (funded by Schering Corp. in Berlin), and Leopold Ruzicka and A. Wettstein of Ciba, developed synthetic methods of preparing testosterone. Butenandt and Ruzicka shared the 1939 Nobel Prize in Chemistry for this achievement.

The FDA and federal regulation
The threat posed by unregulated vitamins was not nearly as dangerous as the potential consequences of unregulated drugs. Yet legislation was enacted only when drug tragedies incensed the public and forced Congress to act. After the 1906 Pure Food and Drug Act, there was no federal legislation dealing with drugs for decades, although the American Medical Association (AMA) did attempt to educate physicians and the public about pharmaceuticals. The AMA published books exposing quack medicines, gradually adopted standards for advertisements in medical journals, and in 1929, initiated a program of testing drugs and granting a Seal of Acceptance to those meeting its standards. Only drugs that received the seal were eligible to advertise in AMA journals.

Dangerous drugs were still sold legally, however, because safety testing was not required before marketing. Well into the 1930s, pharmaceutical companies still manufactured many 19th and early 20th century drugs that were sold in bulk to pharmacists, who then compounded them into physicians’ prescriptions. But newer drugs, such as many biologicals and sulfa drugs (after 1935), were packaged for sale directly to consumers and seemed to represent the future of drug manufacturing.

In 1937, an American pharmaceutical company produced a liquid sulfa drug. Attempting to make sulfanilamide available in an easy-to-take liquid form, the company dissolved it in diethylene glycol—a toxic chemical now used in automobile antifreeze. Ultimately sold as a syrup called Elixir of Sulfanilamide, the concoction was on the market for two months, during which it killed more than 100 people who drank it, many of them children.

Under existing federal legislation, the manufacturer could be held liable only for mislabeling the product. In response to this tragedy and a series of other scandals, Congress passed the Food, Drug and Cosmetic Act of 1938, which banned drugs that were dangerous when used as directed, and required drug labels to include directions for use and appropriate warnings. The act also required new drugs to be tested for safety before being granted federal government approval and created a new category of drugs that could be dispensed to a patient only at the request of a physician. Before the act was passed, patients could purchase any drug, except narcotics, from pharmacists. The Food and Drug Administration (the regulatory division established in 1927 from the former Bureau of Chemistry) was given responsibility for implementing these laws.

The 1938 legislation is the basic law that still regulates the pharmaceutical industry. New manufacturing and mass-marketing methods demanded changes in federal oversight, because a single compounding error can cause hundreds or even thousands of deaths. Yet before the 1940s, the law did not require drugs to be effective, only safe when used as directed. It was not until the 1940s that the Federal Trade Commission forced drug manufacturers to substantiate claims made about their products, at least those sold in interstate commerce.

Instrumentation
Although the 1920s and 1930s were especially fruitful for “soft” technologies such as antibiotics and vitamin production, these decades also produced several significant “hard” technologies—scientific instruments that transformed pharmaceutical R&D. The most obvious example is perhaps the electron microscope, developed in Germany in 1931. This early transmission electron microscope, invented by Max Knoll and Ernst Ruska, was essential for the future of pharmaceutical and biomedical research, but many other critical instruments came out of this era. Instrument production often brought people from disparate disciplines together on research teams, as physical chemists and physicists collaborated with biochemists and physiologists. New research fields were created along the boundary between chemistry and physics, and in the process, many new instruments were invented or adapted to probe molecular phenomena and biomolecular problems.
TECHNOLOGY: Electrifying separations
Spurred by the desire to separate proteins, Theodor Svedberg and his student Arne Tiselius were responsible for the development of electrophoresis. Tiselius transformed the electrophoresis apparatus into a powerful analytical instrument in the late 1920s. He used a U-shaped tube as the electrophoresis chamber, added a cooling system, and adapted a Schlieren optical system to visualize the refraction boundaries of colorless solutions. Despite its expense ($6000 to build and $5000 a year to maintain and operate), there were 14 Tiselius stations in the United States by the end of 1939.

Electrophoresis became the workhorse instrument for several generations of biologists interested in physiology and genetics—and the enabling instrument for most of the coming work in protein and nucleic acid purification and analysis. It proved an absolute prerequisite for the rational development of pharmaceuticals based on their interactions with human enzymes or genes.
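As a rough illustration (not part of the original article) of why dissolved proteins separate in the Tiselius apparatus, the migration velocity of a charged species scales with the applied electric field through its electrophoretic mobility; for a small, approximately spherical particle opposed by Stokes drag,

$$ v = \mu E, \qquad \mu \approx \frac{q}{6\pi\eta r}, $$

where $q$ is the particle’s net charge, $\eta$ the solvent viscosity, and $r$ the hydrodynamic radius. Because proteins differ in net charge (which itself depends on the pH relative to each protein’s isoelectric point) and in size, each species forms its own moving boundary, which the Schlieren optics render visible as a gradient in refractive index.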

One of the scientists who worked under the aegis of Warren Weaver, research administrator for the natural sciences at the Rockefeller Foundation, was Swedish chemist Theodor Svedberg. Svedberg’s early research was on colloids, and he began to develop high-speed centrifuges in the hope that they might provide an exact method for measuring the distribution of particle sizes in such solutions. In 1924, he developed the first ultracentrifuge, which generated a centrifugal force up to 5000 times the force of gravity. Later versions generated forces hundreds of thousands of times the force of gravity. Svedberg precisely determined the molecular weights of highly complex proteins, including hemoglobin. In later years, he performed studies in nuclear chemistry, contributed to the development of the cyclotron, and helped his student, Arne Tiselius, develop electrophoresis to separate and analyze proteins.
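To connect these figures to quantities a reader can check (standard textbook relations, not given in the original article), the relative centrifugal force and the Svedberg equation for molecular weight are

$$ \mathrm{RCF} = \frac{\omega^{2} r}{g}, \qquad M = \frac{R T s}{D\,(1 - \bar{v}\rho)}, $$

where $\omega$ is the rotor’s angular velocity, $r$ the distance from the rotation axis, $s$ the sedimentation coefficient (measured in svedbergs, $10^{-13}$ s), $D$ the diffusion coefficient, $\bar{v}$ the protein’s partial specific volume, and $\rho$ the solvent density. A sample spun at a radius of about 5 cm and roughly 9500 rpm experiences on the order of $5000 \times g$; combining measured $s$ and $D$ values in this way is how Svedberg arrived at molecular weights such as the approximately 68,000 he reported for hemoglobin.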

Another essential instrument developed in this era was the pH meter with a glass electrode. Kenneth Goode first used a vacuum triode to measure pH in 1921, but this potentiometer was not coupled to a glass electrode until 1928, when two groups (at New York University and the University of Illinois) measured pH by using this combination. Rapid and inexpensive pH measurement was not a reality until 1934, however, when Arnold Beckman of the California Institute of Technology and corporate chemist Glen Joseph substituted a vacuum tube voltmeter for a galvanometer and assembled a sturdy measuring device with two vacuum tubes and a milliammeter. The portable pH meter was marketed in 1935 for $195. In the world of medical applications, the electrometer dosimeter was developed in the mid-1920s to assess exposure to ionizing radiation for medical treatment, radiation protection, and industrial exposure control. For clinical dosimetry and treatment planning, an ionization chamber connected to an electrometer was valued for its convenience, versatility, sensitivity, and reproducibility.
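To see why the vacuum-tube circuit was the key to a practical glass-electrode meter (a standard electrochemical relation, not stated in the article), note that a glass electrode develops a potential that varies linearly with pH with a Nernstian slope:

$$ E = E^{\circ} - \frac{2.303\,R T}{F}\,\mathrm{pH} \;\approx\; E^{\circ} - (59\ \mathrm{mV})\times\mathrm{pH} \quad \text{at } 25\ ^{\circ}\mathrm{C}. $$

Because the glass membrane’s resistance is enormous (tens to hundreds of megohms), an ordinary galvanometer draws enough current to distort this small signal; an electrometer-tube voltmeter draws almost none, which is what made Beckman’s rugged, portable design workable.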

It was not simply the invention of instruments, but the way research was organized around them, that made the 1920s and 1930s so fertile for biochemistry. Weaver was involved in encouraging and funding much of this activity, whether it was Svedberg’s work on molecular evolution or Linus Pauling’s use of X-ray diffraction to measure bond lengths and bond angles, and his development of the method of electron diffraction to measure the architecture of organic compounds. Inventing and developing new instruments allowed scientists to combine physics and chemistry and advance the field of pharmaceutical science.

Radioisotopes
Another powerful technological development that was refined during the 1920s and 1930s was the use of radioactive forms of elements—radioisotopes—in research. Hungarian chemist Georg von Hevesy introduced radioisotopes into experimental use in 1913, tracing the behavior of nonradioactive forms of selected elements; he later used a radioisotope of lead to trace the movement of lead from soil into bean plants.

The radioactive tracer was an alternative to more arduous methods of measurement and study. By the late 1920s, researchers applied the tracer technique to humans by injecting dissolved radon into the bloodstream to measure the rate of blood circulation. Yet there were limits to the technique, because some elements important in living organisms have no naturally occurring radioisotopes.

This difficulty was overcome in the early 1930s, when medical researchers realized that the cyclotron, or “atom smasher,” invented by physicist Ernest Lawrence, could be used to create radioisotopes for treatment and research. Radiosodium was first used in 1936 to treat several leukemia patients; the following year, Lawrence’s brother, John, used radiophosphorus to treat the same disease. A similar method was used to treat another blood disease, polycythemia vera, and soon it became a standard treatment for that malady. Joseph Hamilton and Robert Stone at the University of California, Berkeley, pioneered the use of cyclotron-produced radioisotopes for treating cancer in 1938; and one year later, Ernest Lawrence constructed an even larger atom smasher, known as the “medical cyclotron,” which would create additional radioisotopes in the hopes of treating cancer and other diseases.
Suggested reading
  • FDA History on the Web: www.fda.gov/oc/history/default.htm
  • Final Report of the Advisory Committee on Human Radiation Experiments (Stock # 061-000-00-848-9); Superintendent of Documents, U.S. Government Printing Office: Washington, DC, 1995.
  • Weatherall, M. In Search of a Cure: A History of Pharmaceutical Discovery; Oxford University Press: New York, 1990.
  • Kohler, R. E. Partners in Science: Foundations and Natural Scientists, 1900–1945; University of Chicago Press: Chicago, 1991.
  • Porter, R. The Greatest Benefit to Mankind: A Medical History of Humanity; W. W. Norton: New York, 1997.

Thus began the age of “nuclear medicine,” in which the skills of physicists were necessary to produce materials critical to biochemical research. The new use of radioisotopes was a far cry from the quack medicines of the period that used the mystique of radioactivity to peddle radium pills and elixirs for human consumption—although in their ignorance, many legitimate doctors also did far more harm than good. The cost of producing radioisotopes was high, as cyclotrons often operated continuously at full power, requiring the attention of physicists around the clock. The human and financial resources of physics departments were strained in these early years, which made the contributions of foundations essential to the continuation of these innovative projects. Ultimately, as the century wore on, radioisotopes came into routine use, the federal government’s role increased immensely, and radioisotopes were mass-produced in the reactors of the Atomic Energy Commission. After World War II, nuclear medicine occupied a permanent place in pharmaceutical science.

By the end of the 1930s, the work of scientists such as Svedberg, Tiselius, Banting, Dubos, and Domagk, along with the vision of administrators such as Weaver, set the stage for the unparalleled developments of the antibiotic era to come. In addition, World War II, already beginning in Europe, spurred a wealth of research into perfecting known technologies and developing new ones—instruments and processes that would have a profound impact on the direction that biology and pharmacology would ultimately take.


© 2000 American Chemical Society